As Seagate ramps up shipments of its new heat-assisted magnetic recording (HAMR)-based Mozaic 3+ hard drive platform, the company is both in the enviable position of shipping the first major new hard drive technology in a decade, and in the much less enviable position of having to prove that technology's reliability. Because HAMR briefly heats its platters during writes and relies on all-new read/write heads, it introduces multiple changes at once, which has raised questions about how reliable the technology will be. Looking to address these concerns (and further promote its HAMR drives), Seagate has published a fresh blog post outlining the company's R&D efforts and why it expects its HAMR drives to last several years – as long as or longer than current PMR hard drives.

According to the company, the reliability of Mozaic 3+ drives is on par with that of traditional drives relying on perpendicular magnetic recording (PMR). In fact, Seagate says the components of its HAMR HDDs have demonstrated a 50% increase in reliability over the past two years. The company also cites some impressive durability metrics: Mozaic 3+ read/write heads have demonstrated the capacity to handle over 3.2 petabytes of data transfer across 6,000 hours of operation, which exceeds the data transfer volume of typical nearline hard drives by 20 times. Accordingly, Seagate is rating these drives for a mean time between failures (MTBF) of 2.5 million hours, in line with PMR-based drives.
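To put those endurance numbers in perspective, here is a quick back-of-the-envelope sketch using the article's figures; it assumes decimal units (1 PB = 10^15 bytes), as drive vendors typically use:

```python
# Rough check of the workload implied by Seagate's head-endurance figure:
# 3.2 PB transferred over 6,000 hours of operation (figures from the article).
# Assumes decimal units (1 PB = 10^15 bytes).

transferred_bytes = 3.2e15   # 3.2 petabytes
seconds = 6_000 * 3_600      # 6,000 hours of operation

avg_rate_mb_s = transferred_bytes / seconds / 1e6
print(f"Implied average transfer rate: {avg_rate_mb_s:.0f} MB/s")
# -> ~148 MB/s sustained around the clock, i.e. near-constant heavy I/O
```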

Based on its field stress tests, which involved over 500,000 Mozaic 3+ drives, Seagate says that the heads of Mozaic 3+ drives will last over seven years, surpassing the typical lifespan of current PMR-based drives. Customers generally anticipate that modern PMR drives will last between four and five years under average usage, so these drives would exceed current expectations.

Altogether, Seagate is continuing to aim for a seamless transition from PMR to HAMR drives in customer systems. That means ensuring that the new drives can fit into existing data center infrastructure without requiring any changes to enterprise specifications, warranty conditions, or form factors.

Source: Seagate

13 Comments

  • StormyParis - Tuesday, April 23, 2024 - link

    Whaaaaat? 4-5 yrs for a HDD?
  • Scott_T - Tuesday, April 23, 2024 - link

    I hope the 10-year-old Hitachi drive in my server doesn't find out about that.
  • dgingeri - Wednesday, April 24, 2024 - link

    I have 8X 4TB Ultrastars that are over 10 years old now.
  • pjcamp - Wednesday, April 24, 2024 - link

    Of course, Hitachi is now Western Digital.

    So there's that.
  • Threska - Tuesday, April 23, 2024 - link

    Might be the higher density required, which is why older technologies last longer.
  • artifex - Wednesday, April 24, 2024 - link

    The thing about the 7-year number is that it's just for the heads. Assuming these still need helium, I suspect they won't actually warranty the drives that long.
  • skyvory - Wednesday, April 24, 2024 - link

    7 years doesn't really mean anything unless they dare to give a warranty for that many years. The 20+ drives I have have exceeded 7 years and are still working anyway, proving further that the 7-year number is empty theory.
  • haukionkannel - Wednesday, April 24, 2024 - link

    Only 7 years... sounds really bad!
    Have to avoid these non lasting HD techs!
  • agoyal - Wednesday, April 24, 2024 - link

    How can you have an MTBF of 2.5 million hours (285 years) if you expect the head to last only 7 years (4-5 years for PMR)? Can someone who understands this please elaborate? This does not make any sense to me.
  • PEJUman - Thursday, April 25, 2024 - link

    MTBF is a statistical term: it expects one failure every 2.5 million drive-hours. You can convert this to an AFR (annualized failure rate), which makes more sense for consumers with a single drive: about 0.35% AFR. Essentially, if you own 1 drive, you have a 0.35% chance it dies within a year; if you own 10,000, then about 35 will likely die each year.

    Thus 10,000 drives * 365 * 24 hrs, divided by 35 failed drives = voila! 2.5 million hours between failures (per drive).
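For anyone who wants to sanity-check that arithmetic, here is a minimal sketch of the MTBF-to-AFR conversion described above. The 2.5-million-hour MTBF figure comes from the article; the helper function is illustrative, not anything from Seagate:

```python
# Minimal sketch: converting an MTBF rating into an annualized failure rate (AFR).
# The 2.5M-hour MTBF figure comes from the article; mtbf_to_afr is an
# illustrative helper, not Seagate's.

HOURS_PER_YEAR = 365 * 24  # 8,760 hours

def mtbf_to_afr(mtbf_hours: float) -> float:
    """Approximate chance that one drive fails within a year of 24/7 use.

    Uses the simple linear approximation HOURS_PER_YEAR / MTBF, which is
    accurate when the MTBF is much larger than a year (as it is here).
    """
    return HOURS_PER_YEAR / mtbf_hours

afr = mtbf_to_afr(2_500_000)
print(f"AFR: {afr:.2%}")                                                  # ~0.35%
print(f"Expected annual failures per 10,000 drives: {afr * 10_000:.0f}")  # ~35
```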
