Future Storage: The View from 2001
M. David Stone

Surely one of the biggest success stories in technology has been the hard disk drive. Invented in the 1950s, and an absolute requirement for PCs since the mid-1980s, hard disk drives have an impressive record of increasing capacity and speed, shrinking physical size and cost, and finding new ways to shatter barriers to continued progress. It's the recording technology of choice for primary storage in computers and, increasingly, for consumer electronics like digital video recorders (DVRs). And it's likely to remain so for the foreseeable future.
Since 1997, drive capacity has been growing at a spectacular 100 to 150 percent per year--doubling drive size roughly every nine to twelve months. It's long been a truism that you can't have too much storage, but for the first time ever, capacity has reached a point where affordable drives hold more than most desktop computers need. And the industry is already turning that quantitative change in capacity and cost into a qualitative change that takes advantage of it. In addition to DVRs and other home entertainment uses, we'll see drives that set aside part of their capacity for incremental system restore (similar to the GoBack and Windows System Restore utilities), with drive firmware handling the chore automatically, along with form factors that keep shrinking; even today, a disk can fit in PDAs, cameras, and other consumer products.
Hand in hand with capacity growth has come faster throughput, and with it the imminent need for faster interfaces to keep up with internal data rates (the speed at which data is transferred between the media surface and a data buffer or onboard cache). All of which makes this a good time to take a look at the future of storage. We'll concentrate on the next three to five years, which is a reasonably predictable range.
From a Modest Start

Hard disk drives didn't exist in January 1952, when IBM tapped one of its engineers, Reynold Johnson, to set up, manage, and lead a new research laboratory in San Jose, California. Johnson, who had been with IBM for 18 years at the time and by then held over 50 patents, had free rein to choose the areas for the lab to work on. One of the first projects he chose was a random access storage device. The result was the world's first hard disk drive, with a prototype up and running in 1955, and the first product--the RAMAC 350--shipped in June 1956.
That first hard disk drive used 24-inch disks (50 of them). It had a 5 MB capacity (which works out to 0.1 MB per platter), a 1200 RPM rotation rate, 1 second average access time, and an areal density (the number of bits per unit of area) of 2 kilobits per square inch. The unit was huge--roughly the size of two refrigerators--and cost $10,000 per megabyte.
Every one of these specifications is far outdone by any garden-variety disk drive today, but, as Dr. Paul D. Frank, Executive Director of the National Storage Industry Consortium (NSIC), points out, the primary measure of progress in hard disk drive technology has been areal density, as we'll discuss below. (Dr. Frank's comments are available on the Engineering Research Centers Association Web site. NSIC's Web site is www.nsic.org.)
Areal density is the most significant factor for hard disks largely because it affects most of the other specifications for drives--capacity, drive size, performance, and cost per megabyte. A greater areal density translates to more data per platter, which means that for any given size drive and number of platters, you get more capacity. Alternatively, for any given capacity you can build smaller drives, with fewer and smaller platters.
The effects of areal density improvements on performance come from two factors. First, packing bits closer together on each track means more bits per track, so the head can read or write more data per revolution. Second, as platters get smaller, the heads have less distance to travel across the platter, so smaller platters mean improved average access time -- or, more precisely, improved average seek time, which is the time it takes, on average, for the head to reach the right track. (For our purposes seek time includes small startup and head-settling delays, but note that some vendors use different definitions, not including settling time in seek time, for example.) The other component of access time is latency, the time it takes to reach the right place on the track after getting to the right track. Latency is determined entirely by rotation rate.
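Since latency is determined entirely by rotation rate, it's easy to put numbers on it: on average, the head has to wait half a revolution for the right sector to come around. Here's a minimal sketch in Python, using a hypothetical 8.5-millisecond seek time for illustration rather than any particular drive's spec:

```python
def average_latency_ms(rpm: float) -> float:
    """Average rotational latency: half of one revolution, in milliseconds."""
    ms_per_revolution = 60_000.0 / rpm
    return ms_per_revolution / 2.0

def average_access_time_ms(seek_ms: float, rpm: float) -> float:
    """Access time as defined here: seek time (including settling) plus latency."""
    return seek_ms + average_latency_ms(rpm)

for rpm in (5400, 7200, 10000):
    print(f"{rpm:>6} RPM: latency {average_latency_ms(rpm):.2f} ms, "
          f"access {average_access_time_ms(8.5, rpm):.2f} ms")
```

At 7,200 RPM, for example, average latency works out to about 4.2 milliseconds, which is why higher rotation rates pay off even when seek times stay the same.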
Increased areal density even affects cost per megabyte. For any given physical size -- 3.5-inch drives, say -- the cost for building a drive with a given number of platters and heads stays pretty much the same as areal density improves. So as the number of megabytes on a platter goes up, the cost per megabyte goes down.
The chart below, from IBM, shows the growth in areal density for specific IBM drive models, from the first RAMAC to IBM's first drive with AFC media (which IBM calls pixie dust, and which we'll discuss below).
As the chart shows, from roughly 1970 to 1991, areal density grew at a rate of about 25 percent per year. (Dr. Frank pointed out that estimates ranged from 25 to 30 percent.) Starting in 1991, the rate increased to 60 percent (though some suggest it was as high as 80 percent). And since 1997, it's jumped yet again to 100 percent (but as Dr. Frank also points out, at times it may have been as high as 200 percent in short bursts). The faster rate of growth since 1997 has been largely a result of IBM's introduction of the GMR (Giant Magnetoresistive) head to replace the earlier MR (Magnetoresistive) technology, a move that the rest of the industry quickly adopted.
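To put those growth rates in perspective, here's the compound-growth arithmetic (ordinary math, not industry data) that turns each annual rate into a doubling time:

```python
import math

def doubling_time_years(annual_growth: float) -> float:
    """Years for density to double at a constant annual growth rate (0.60 = 60%)."""
    return math.log(2) / math.log(1 + annual_growth)

for rate in (0.25, 0.60, 1.00):
    print(f"{rate:.0%} per year: doubles every {doubling_time_years(rate):.1f} years")
```

At 25 percent per year, areal density doubles roughly every three years; at 100 percent, it doubles every year.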
The practical relationship between areal density, capacity, and drive size over time is easy to see. In 1980, areal density had reached roughly 2 megabits per square inch, and Seagate introduced the first 5.25-inch hard disk, with a 5 MB capacity. The drive was twice the height of a modern CD-ROM drive and filled what was then called a full-height drive bay, but would now count as two drive bays. Today's drives offer areal densities ranging from about 15 gigabits per square inch to something approaching 40 gigabits per square inch, and 3.5-inch drives are the norm. In fact, you can find 40-gigabyte, 3.5-inch, single-platter drives. You can also find 1-gigabyte, 1-inch, single-platter drives housed in a Type II CompactFlash card format.
Areal density depends on two factors: track density, measured as the number of tracks per inch (TPI), and linear density, a measure of the number of bits along a given length of each track. As the graph below shows, simultaneous increases in both between 1991 and 2001--and particularly between 1997 and 2001--have combined to produce a noticeably faster rate of growth in areal density than in either track density or linear density alone.
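Since areal density is simply the product of the two, the relationship is easy to express directly. The figures in this Python sketch are hypothetical, chosen only to land in the neighborhood of circa-2001 drives:

```python
def areal_density_gbits(tracks_per_inch: float, bits_per_inch: float) -> float:
    """Areal density in gigabits per square inch: track density times linear density."""
    return tracks_per_inch * bits_per_inch / 1e9

tpi = 60_000   # hypothetical track density (tracks per inch)
bpi = 550_000  # hypothetical linear density (bits per inch along a track)
print(f"{areal_density_gbits(tpi, bpi):.0f} gigabits per square inch")  # 33
```

Doubling either factor doubles areal density; improving both at once is what produces the steeper combined curve.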
Both track density and linear density depend on the size of each bit, which is a magnetized area on the disk. As the magnetized areas get smaller along the width of a track, tracks get thinner, so you can fit more tracks on a given size disk. As the bits get smaller along the length of a track, you can pack more bits into each track. The diagram below illustrates the shrinking bit size from 1990 to 2002.
The big question, of course, is how long bits can continue to shrink (and hard disk capacity can continue to grow) before hard disks hit a wall, as they inevitably will. At some point, the magnetized areas, called magnetic domains, will be small enough that the energy they need to retain magnetization is comparable to the thermal energy of the environment. In other words, given a small enough magnetic domain, the drive's operating temperature--or even room temperature--will provide enough energy to randomly flip bits. That point is called the superparamagnetic limit, or superparamagnetic effect. And it may be just over the horizon. Or maybe not.
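The physics behind the limit can be sketched with a rule of thumb commonly cited in the magnetic recording literature: a bit stays stable only while the magnetic anisotropy energy of its grains (the anisotropy constant Ku times the grain volume V) remains well above the thermal energy kB times T, with a ratio of roughly 40 to 60 needed for ten-year data retention. The grain sizes and Ku value below are illustrative assumptions, not figures for any particular drive:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, joules per kelvin

def stability_ratio(ku: float, grain_diameter_nm: float,
                    thickness_nm: float, temp_k: float = 300.0) -> float:
    """Ku*V / (kB*T) for a cylindrical grain; ku in joules per cubic meter."""
    radius_m = grain_diameter_nm * 1e-9 / 2.0
    volume_m3 = math.pi * radius_m ** 2 * (thickness_nm * 1e-9)
    return ku * volume_m3 / (K_B * temp_k)

# Shrinking the grain diameter from 10 nm to 6 nm drops the ratio below
# the rule-of-thumb threshold, and bits start flipping on their own.
print(f"10 nm grains: ratio ~{stability_ratio(2e5, 10, 15):.0f}")  # ~57: stable
print(f" 6 nm grains: ratio ~{stability_ratio(2e5, 6, 15):.0f}")   # ~20: at risk
```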
Just where the superparamagnetic effect lies isn't clear. As recently as 1996, the conventional wisdom was that it would crop up somewhere between an areal density of 50 and 100 gigabits per square inch. But, as the chart from NSIC below shows, the industry has already produced a laboratory demonstration of 106 gigabits per square inch. (The graph charts technology demonstrations, not actual products. The high end of areal density for current products is indicated by the asterisk, at 35.1 gigabits per square inch.)
The current conventional wisdom is that the superparamagnetic effect lurks at 150 to 200 gigabits per square inch. And we've seen at least one article recently suggesting that drives will come up against that wall in the next three to five years, with the next major step in technology not available to deal with that roadblock until at least a decade from now. However, everyone we've spoken to, including such drive manufacturers as Seagate, Maxtor, and IBM, disagrees with the second part of that conclusion.
Dr. Mark Kryder is Senior Vice President, Seagate Research. His comments are typical. "A change will have to be made somewhere around 150 to 200 gigabits per square inch," he says, "But in the next few years, the basic technology is not going to change drastically." Dr. Kryder says that the improvements needed in technology over that time frame--including things like lower fly heights for the heads and smaller grains for the magnetic media--are straightforward evolutionary changes, similar in kind to the changes over the past few years. He expects that four to five years from now a new technology will be needed, but he also expects that the industry will be ready to introduce it then.
Not so incidentally, the most likely next technology is something called perpendicular recording--in contrast to the longitudinal recording used in current disk drives. (In longitudinal recording, the north and south poles lie along the surface of the disk. In perpendicular recording, they are perpendicular to the surface.) The technology for perpendicular recording has actually been around for a while, as discussed in the Good Bets for the Future sidebar, but there's been no compelling reason to switch to it. And assuming the industry does indeed move to perpendicular recording, Dr. Kryder says that it's not likely to last for an extended time period. He believes that perpendicular recording might take areal density to as much as 1 terabit per square inch, but doesn't expect it to go any further than that. On the other hand, he points out that work is continuing on other technologies that can be ready when the industry needs them. Heat-assisted magnetic recording (known as thermally assisted recording until recently) and patterned media recording are the two most favored alternatives. (For more on these technologies, see the sidebar: Good Bets for the Future.)
At Maxtor, Ted Deffenbaugh, Vice President of Product Strategy, has much the same view. He points out that there has always been a debate on the superparamagnetic effect, and says that it may well be at 150 gigabits per square inch (or maybe not). "But by the time we get there," he says, "we think we have some other tricks." He also points out that no one is likely to introduce a new approach until it's needed. "Anytime you do a new trick, it costs money, so no one is going to do it until we run out of runway on this road."
Another way to look at it, as Bob Scranton, Manager of Recording Heads research at IBM's Almaden Research Center, points out, is that one reason other technologies aren't yet ready to replace current hard disk technology is that they aren't needed yet. "The major players all believe that there is room in the straight ahead evolution of the existing technology," he says. "And because everybody believes that the evolution [of the current technology] will continue at a decent pace, people are not investing a lot of money in and focusing on these other approaches. That's why they will take so long to get in."
In short, the major players aren't worried about hitting a wall because they're confident that they'll be able to sidestep it when they get there. The more problematic issue is whether disk drives can keep up their current rate of growth in areal density. The answer is likely no, but even the predicted slower growth rate is none too shabby. Dr. Frank says, "The developing consensus is that growth may slow down to 60% [per year] over the next decade." And, indeed, we heard that prediction from most people we talked to. "But," he adds, "the real question is when will the slowdown start. It may have already started, or it may start a year from now. We won't know until we see the results over time."
In any case, disk drive technology is not standing still. The latest wrinkle--recently introduced by IBM, and waiting in the wings at other manufacturers, most of whom feel it isn't needed yet--is something called antiferromagnetically coupled (AFC) media. IBM has whimsically dubbed the three-atom-thick layer of the nonmagnetic element ruthenium used in AFC media "pixie dust," and it is currently shipping various drives with pixie dust. The images below show the difference between the structures of traditional magnetic media and AFC media.
AFC media is a way around a basic problem for growth in areal density. In general, as areal density increases, the physical thickness of the recording layer needs to decrease, to keep the magnetic transitions in the recording media sharp enough for the head to read them well. But as the thickness decreases, thermal instability--the basis for the superparamagnetic effect--increases. AFC media is a way out of that box.
In IBM's implementation, the ruthenium layer is sandwiched between two magnetic layers. The physics of the layered structure forces the two magnetic layers to be magnetically coupled in opposite directions (north pole to south pole)--the "anti" part of the antiferromagnetic coupling. Without getting into details, this has the same effect on maintaining readability as reducing the thickness of the recording layer in traditional disk drive media, but with much better thermal stability--which is to say, it pushes back the superparamagnetic effect. (You can find more on AFC media, and IBM's implementation of the concept, at this link.)
The really good news is that this new type of media comes under the category of straightforward improvements in current technology, with little or no additional cost, according to IBM, and without needing changes to the disk drive recording head or electronic components. (For IBM's white paper on the subject, see this link.) And, IBM says, this one step, which most drive manufacturers haven't even bothered taking yet, offers a clear pathway for areal densities as high as 100 gigabits per square inch.
Given the current state of the art as already shown in laboratory demonstrations, projections for hard disks over the next four years are uniformly positive. Dave Reinsel, Research Manager for Hard Disk Drives and Components at IDC, projects that the sweet spot for hard disks (i.e., the best-selling capacity) will grow to 100 gigabytes in 2005. (And he also notes that the highest-capacity hard disks have historically averaged about 6 times the capacity of the sweet spot at any given time.) As the graph below shows, in 1996 this sweet spot was only about 600 MB, so the 100-gigabyte projection represents growth of roughly 167 times from 1996 to 2005.
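A quick check of that arithmetic, along with the compound annual growth rate it implies (the per-year figure is our derivation, not IDC's):

```python
sweet_spot_1996_mb = 600      # best-selling capacity in 1996, in megabytes
sweet_spot_2005_mb = 100_000  # projected best-selling capacity in 2005
years = 2005 - 1996

growth = sweet_spot_2005_mb / sweet_spot_1996_mb
cagr = growth ** (1.0 / years) - 1.0
print(f"{growth:.0f}x overall, roughly {cagr:.0%} per year")  # 167x, ~77%
```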
The related graph below shows the growth in average capacity on the desktop from 1999 to 2003. The historical data is from Dave Reinsel at IDC. The projections are Maxtor's.
This growth in disk capacity is based on a projected growth in areal density. Dave Reinsel projects that areal density will continue growing dramatically, reaching 154 gigabits per square inch in 2005. The graph below shows the projected highest areal density for drives through 2005. And note that these are for actual commercial products, not technology demonstrations.
Exploration of new technology for future growth isn't stopping anytime soon either. NSIC's current target for future technology demonstrations is to reach an areal density of 1 terabit per square inch in 2006 (almost 30 times today's highest areal density of 35 gigabits per square inch). The chart below, which includes the historical laboratory demonstrations depicted earlier, shows the target path for increasing areal density through early 2006.
One last projection worth including here is for falling prices. The chart below, also from Dave Reinsel at IDC, forgoes the traditional price-per-megabyte designations in favor of showing the price per gigabyte. The expectation is for prices to drop to $1.00 per gigabyte in 2003, and to about half that in 2005. As a point of reference, in June 1993--just ten years earlier than this projected $1.00-per-gigabyte price--hard disks still hadn't reached the magic price of $1.00 per megabyte. That's a drop in price by a factor of 1,000 in ten years. Note that the price scale on the graph below is logarithmic.
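Making that comparison explicit (a rough sanity check, treating a megabyte as one-thousandth of a gigabyte):

```python
price_1993_per_mb = 1.00                      # dollars per megabyte, June 1993
price_1993_per_gb = price_1993_per_mb * 1000  # equals $1,000 per gigabyte
price_2003_per_gb = 1.00                      # projected dollars per gigabyte

print(f"Price drop: {price_1993_per_gb / price_2003_per_gb:,.0f}x in ten years")
```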
Hard disk drives don't exist in a vacuum. As drive capacities and performance increase, drives bump up against limits. Consider capacity first: To use any given capacity drive, you need a BIOS, an operating system's file system, and a drive interface that all work together to support that capacity. As shown in the chart below, some of the limits ATA drives have hit in the past include barriers at 528 megabytes (because of discrepancies in the way ATA drives and system BIOSes defined the number of cylinders, heads, and sectors), 2.1 gigabytes (because of file system limitations), and 32 gigabytes (because of BIOS limitations). As the chart also shows, the next looming barrier for ATA drives is at 137 gigabytes--or, more precisely, 137.4 gigabytes.
The problem in this case is that the ATA interface was originally defined with 28 bits to specify sectors for data, which works out to 2^28, or 268,435,456 sectors. And since each sector can hold 512 bytes, 28 bits yields 268,435,456 times 512 bytes, or a maximum drive capacity of 137.4 gigabytes.
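The limit falls straight out of the arithmetic, as this small Python check shows:

```python
SECTOR_BYTES = 512
sectors_28_bit = 2 ** 28  # 268,435,456 addressable sectors

capacity_bytes = sectors_28_bit * SECTOR_BYTES
print(f"{capacity_bytes:,} bytes = {capacity_bytes / 1e9:.1f} GB")
# 137,438,953,472 bytes = 137.4 GB
```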
Historically, each time drives have approached a capacity limit, the industry has raised the bar. And with today's drives bumping up against the 137-gigabyte limit (at this writing, Maxtor has already announced a 160-gigabyte ATA drive), the industry is poised to raise the bar again, with an approach developed by Maxtor along with Compaq, Microsoft, VIA Technologies, and others.
In June 2001, Maxtor announced that it had submitted a proposal for raising the capacity limit to the ANSI T13 committee, which is responsible for overseeing the ATA specification (www.t13.org). The proposal, which is now incorporated in the draft for ATA/ATAPI-6, specifies 48 bits for defining sector addresses. That works out to 2^48 sectors times 512 bytes per sector, or 144 petabytes--that's 144 million gigabytes, or 144,000 terabytes of data.
This still leaves other, lesser limits to deal with in the future. In particular, as Maxtor points out, most operating systems today offer 32-bit addressing for sectors. That translates to a maximum capacity of 2.2 terabytes (2,200 gigabytes)--a capacity that Maxtor says drives could hit as early as 2004. Those limits, along with BIOS limits, will need to be dealt with before then. But the ATA interface itself will no longer be the limiting factor. (For a more detailed look at this issue, see Maxtor's white paper on the subject.) Not so incidentally, note that the current limit for SCSI drives is also 2.2 terabytes. That too will have to change.
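The same sector arithmetic, applied to each of the address widths mentioned here, shows where the ceilings fall:

```python
SECTOR_BYTES = 512

def max_capacity_bytes(address_bits: int) -> int:
    """Maximum addressable bytes with the given number of sector-address bits."""
    return (2 ** address_bits) * SECTOR_BYTES

print(f"28-bit (current ATA):  {max_capacity_bytes(28) / 1e9:,.1f} GB")  # 137.4 GB
print(f"32-bit (OS limit):     {max_capacity_bytes(32) / 1e12:.1f} TB")  # 2.2 TB
print(f"48-bit (ATA/ATAPI-6):  {max_capacity_bytes(48) / 1e15:.0f} PB")  # 144 PB
```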
Another important limit on drives is the interface. As drive performance improves, with data moving faster between the platter and the drive head, drive interfaces have to get faster as well, or they will become a performance bottleneck. Both the ATA and SCSI interfaces have long done a good job of staying ahead of where they need to be, and both have well-defined roadmaps for continuing to stay ahead. The immediate future for ATA is a bit confused, with part of the industry waiting for the 150 megabyte per second Serial ATA as the next step after ATA-100 (at 100 MB/sec), and part moving to ATA-133 (at 133 MB/sec) as an interim step. But even those arguing for ATA-133, most notably Maxtor, expect to move to Serial ATA once it becomes a reality.
The full story for interfaces is a bit more complicated. IEEE-1394 (aka FireWire) and USB 2.0 both offer reasonably high-speed interfaces for external drives on the desktop. And for SAN and NAS installations, SCSI and Fibre Channel drives are about to be challenged by Serial ATA in some cases, and Fibre Channel more generally by iSCSI and InfiniBand. (See the sidebar Making Connections.) It will likely be a while before the dust settles on the interface issues. The key point, however, is that future interfaces should keep up with drive performance.
Some changes in drives will be driven by consumer electronics. Rob Pait, Senior Sales and Marketing Manager at Seagate, points to what he says is a paradigm shift in the way consumers can control entertainment. "That shift," he says, "is due to what hard drives can do for them." And he expects to soon see "huge numbers" of hard drives going into the living room, primarily in game consoles, personal video recorders, and audio jukebox recorders. Seagate is already working with, among others, companies in the audio market who build rack stereo components. "We are working to understand what they need--how quiet does the hard drive need to be, and what kind of temperature extremes does the hard drive need to operate in," Pait says.
As it happens, consumer electronics have a different set of requirements than PCs. One requirement that will benefit PC users too is the need for quieter drives. Hearing a drive churn away is simply not acceptable for a drive in something like a jukebox or video recorder. According to Pait, the most recent Seagate Barracuda ATA drives operate at 20 decibels, with the limit of human hearing pegged at 25 decibels. He says that it's possible, but hard, to hear the drive seek, and that sound levels for drives in general are dropping dramatically.
An important key to producing quieter drives is using a fluid dynamic bearing (FDB) motor, instead of the traditional ball bearing motor. As the diagram below shows, the FDB design replaces ball bearings with oil. Since the bearings are one source of acoustic noise in a motor, replacing them with a fluid eliminates a source of noise. In addition, the fluid dampens noise from other components.
FDB motors have at least one other important benefit. Variances in the roundness of ball bearings cause something called non-repeatable runout--a tendency of the actuator arm (the arm holding the drive head) to get slightly out of alignment over the course of a rotation because the tracks as written are not quite circular. FDB motors minimize this problem, which allows more tracks per inch, and therefore greater capacity per platter.
Reliability--and, more pointedly, ruggedness--is also more critical for consumer electronics. As Ted Deffenbaugh at Maxtor points out, a computer is much less likely to get picked up by an eight-year-old and dropped to the ground than a game console in a living room. "The industry has appropriate sensors for the drive to see that it's at zero G, and lock the head before it hits the floor," he says. According to Deffenbaugh, these mechanisms are just starting to appear in some of the portable formats today, and they should be fairly standard in three to five years. For obvious reasons, any such increase in reliability will benefit drives in computers as well.
Deffenbaugh also has some more creative ideas for directions for hard disks. Desktop hard disks now offer more capacity than many people need over the lifetime of a system, and drive manufacturers are looking for ways to use that extra capacity. Deffenbaugh says that Maxtor is looking at the possibility of putting a physical switch on the outside of a disk drive that would let users return to "yesterday's data" literally at the flip of a switch, and provide a simple way to recover from a virus or a program installation gone wrong.
"The concept is clear and compelling," he says, "and we are thinking about how to do that in a very simplified version." What makes this particularly inviting is that adding this feature isn't all that expensive, since you can double the capacity of a hard disk today for relatively little. Deffenbaugh points out that you can buy software today that will do the same thing, but putting this capability in the drive firmware is a simpler, foolproof alternative. He thinks we'll start seeing this feature in the next two to three years.
Beyond that, but still within the next five years, drives may take different approaches to writing data. One example: "What normally slows things down is that you're writing to various parts of the drive," Deffenbaugh says, "Maybe you could write in sequence, and then go back and optimize the data [for reading] when not using the system." Again, these capabilities are applied in software today, but moving them to hardware, to eliminate the need for a layer of software, offers a potentially simpler, more foolproof approach.
Whether these and other possibilities will turn into realities remains to be seen. But this much is certainly true: The rotating magnetic disk has held off all comers for 45 years, and it promises to be the primary storage technology of choice for some time to come.
Copyright © 2004 Ziff Davis Media Inc. All Rights Reserved. Originally appearing in ExtremeTech.