Some Nerdy Stuff

April 8, 2010

Why are SSDs so expensive?

Filed under: Uncategorized — aaronls @ 10:55 am

Basically an SSD is just a bunch of flash chips, with each chip providing a fixed amount of storage space; larger SSDs require more of these chips. These flash memory chips are built on silicon using 32nm and 45nm manufacturing processes, similar to CPUs. However, flash memory isn’t as complicated as a CPU, and techniques like multi-level cell (MLC) storage let flash fit more data into a smaller space, so a flash chip doesn’t cost quite as much as a CPU. Still, it takes several flash chips to provide the storage for an SSD, and SSDs have other components that add to the cost, such as the controller chip that manages the flow of data to and from the flash memory. Some SSDs even include extra flash storage you don’t know about, kept on standby so it can replace flash memory that goes bad, so that as a user of the drive you never know something has gone wrong.  The process of making these chips is expensive.  Even if a single chip can hold 16 GB of data, it takes 4 of them to make a 64 GB SSD, and if you use the price of a really cheap CPU as a benchmark, you are talking about $30 at least per chip.  So that’s $120 just for the flash memory on the drive, before counting the other components that comprise the drive.
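
The back-of-the-envelope chip math above can be sketched in a few lines of Python (the function name and the per-chip figures are just the rough assumptions from the paragraph, not real prices):

```python
def flash_cost(drive_gb, gb_per_chip=16, dollars_per_chip=30):
    """Number of flash chips needed for a drive, and their combined cost."""
    chips = drive_gb // gb_per_chip
    return chips, chips * dollars_per_chip

flash_cost(64)   # (4, 120): 4 chips, $120 of flash before the controller etc.
flash_cost(128)  # (8, 240): doubling capacity roughly doubles the flash cost
```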

So one factor is simply that the process of making flash memory is expensive. The other big factor is storage density. HDD manufacturers have managed to continually evolve hard drives so that they fit more data in the same space, and thus similar manufacturing processes yield more storage. For flash memory chips to get cheaper, manufacturers have to figure out how to use the same or similar manufacturing processes while fitting more storage into the same flash memory chip. Increasing the storage density of the chip lowers the cost per gigabyte, since you are producing a chip with more storage using the same manufacturing process: your manufacturing costs don’t go up significantly, but the amount of storage you’re producing does. The catch is that increasing storage density on silicon involves significant technology challenges in making the components of the chip smaller. The 45 nm and 32 nm processes currently used are named for the size of features on the chip, and shrinking these features so that more fit on the chip is a genuinely difficult challenge.

Some recent research may improve storage density significantly in the next three years:

Another article explaining SSD cost in greater detail:


April 5, 2010

Potential Cost Savings of SSDs

Filed under: Uncategorized — aaronls @ 9:36 am

Solid state disks (SSDs) are currently very expensive in terms of price per gigabyte, at $1.95/GB, when compared to hard disk drives (HDDs) at around $0.08/GB for a terabyte HDD or $0.18/GB for a 250GB HDD; meanwhile a 146GB 2.5″ 15K RPM server drive (henceforth “enterprise HDD”) would run you a whopping $3.63/GB.  So already the SSD is looking great when compared to a high end server drive.  However, for the enthusiast/professional/gamer market, where a high performance drive is mouth watering but the price of an SSD is not, we should take into consideration the fact that an SSD’s cost will be offset to some extent by its power and cooling cost savings.  An SSD uses a small fraction of the power that an HDD does, and this also translates to a significant decrease in heat dissipation.  That means lower power costs, lower cooling costs, and lower cost for the infrastructure to support cooling.  Additionally, SSDs are touted as being significantly more reliable than HDDs.

What I want to do is calculate the cost of a mainstream HDD over a 5 year period, including the cost to power, cool, and replace (in the event of a failure) the drive.  We will do this by taking its purchase price, adding the cost to power it, adding the estimated cost to cool it, and also using the average failure rate to estimate average replacement costs.  We might even add a premium on top of the replacement cost to account for the overhead of swapping in the replacement.

Then we will calculate the same cost for an SSD of similar size, and we will do the same for an enterprise drive.  When we are done we can calculate a $/GB that represents a total cost of ownership.  To do this accurately, I feel we first need to target a specific drive size.  The sizes of SSDs and enterprise HDDs don’t come close to what is available in mainstream sizes, and since we are considering some fairly pricey drives, we probably can’t afford to splurge on massive amounts of storage.  Let’s say around 128 GB is the space we anticipate needing for most of our tasks.  With that space, we can install video/audio production tools, development tools, and/or several games and still have a significant amount of working space for the data we would produce with those tools.  Additionally, 128 GB seems to be a sweet spot on the $/GB curve for SSDs and enterprise HDDs.  We could try to save some money by going for a 64 GB drive, but the $/GB is higher for SSDs, mainstream HDDs, and enterprise HDDs alike when we start going for smaller drives.

First let’s spec out the stats of our drives in terms of power consumption.  I’m going off of a benchmark database for the HDD estimates, and for SSDs I am going to use some benchmarking articles that target a specific SSD in our 128 GB, $250 price point category.

Estimated Active Consumption (Watts):

Mainstream HDDs = 10 W

Enterprise HDDs = 18 W

SSD = 5 W

Estimated Idle Consumption (Watts):

Mainstream HDDs = 7 W

Enterprise HDDs = 14 W

SSD = .7 W

Let’s assume that since we are considering a costly SSD, we are considering putting the drive to use in a way that makes it worth the hefty price tag.  So I will assume 4 hours a day of active usage and 20 hours of idle usage.  This assumes it will be on 24/7, and I think that is a fair balance, since there will be scenarios where the drive has more active usage and, at the other end of the spectrum, scenarios where the drive will be idle all day or the system may be powered down.  Based on these assumptions, let’s calculate the daily watt-hour (Wh) consumption of each drive:

Mainstream HDDs = 140 Wh idle per day + 40 Wh active per day = 180 Wh per day

Enterprise HDDs = 280 Wh idle per day + 72 Wh active per day = 352 Wh per day

SSD = 14 Wh idle per day + 20 Wh active per day = 34 Wh per day
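
These per-day figures are just each power draw multiplied by its hours; a minimal Python sketch of the calculation (the function name and default hours are mine, matching the assumptions above):

```python
def daily_wh(active_w, idle_w, active_hours=4, idle_hours=20):
    """Daily consumption in watt-hours, given active and idle draw in watts."""
    return active_w * active_hours + idle_w * idle_hours

daily_wh(10, 7)    # mainstream HDD: 180 Wh/day
daily_wh(18, 14)   # enterprise HDD: 352 Wh/day
daily_wh(5, 0.7)   # SSD: ~34 Wh/day (0.7 W idle over 20 hours is 14 Wh)
```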

To roughly estimate cooling cost we will assume the drive dissipates the same amount of heat energy that it consumes in electricity, and also that it costs that same amount of energy to run air conditioning that removes the generated heat.  The result is that cooling cost equals power consumption cost, so we can essentially double the power consumption calculated above to get a combined figure for power and cooling.  These assumptions are not precise, but they give a fair estimate.  Any energy the drive emits as something other than heat would make actual heat dissipation slightly lower than we calculated, but this is offset by the inefficiencies of air conditioners, which likely make it cost more to remove that heat than we estimate.  My guess is that in real life the cost of cooling will be much higher than our estimates.

So now we will calculate a 5 year total cost of power and cooling, and assume that the cost of electricity is $0.13 per kilowatt-hour:

Mainstream HDDs = 180 Wh/day x 365 days/year x 5 years / 1000 Wh/kWh x $.13/kWh x 2 = $85.41/5 years

Enterprise HDDs = 352 Wh/day x 365 days/year x 5 years / 1000 Wh/kWh x $.13/kWh x 2 = $167.02/5 years

SSDs = 34 Wh/day x 365 days/year x 5 years / 1000 Wh/kWh x $.13/kWh x 2 = $16.13/5 years
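
The 5 year totals come from converting Wh/day to kWh and doubling for cooling; a sketch under the same assumptions (function name and defaults are mine):

```python
def five_year_power_cooling(wh_per_day, dollars_per_kwh=0.13, years=5):
    """Five-year power + cooling cost in dollars for a given daily draw."""
    kwh = wh_per_day * 365 * years / 1000  # total consumption over the period, in kWh
    return kwh * dollars_per_kwh * 2       # x2: cooling assumed to cost as much as power

round(five_year_power_cooling(180), 2)  # mainstream HDD: 85.41
round(five_year_power_cooling(352), 2)  # enterprise HDD: 167.02
```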

Now we will also estimate the annualized failure rate (AFR) over a 5 year period.  For the HDDs we will use statistics from Google’s white paper on HDD reliability.

Since SSDs haven’t been in wide usage as long or as much as HDDs, I doubt similar analyses are available on the reliability of SSDs.  However, we can use manufacturers’ claimed AFRs as a guideline.  We can’t take a manufacturer’s reported AFR at face value, however, as the above report and many others show that manufacturers can’t be trusted to report accurate AFRs; HDDs have been shown to fail more often than manufacturers would like you to believe.  Seagate claims a 0.44% AFR on a particular SSD model, a 0.55% AFR on a particular 15K RPM enterprise HDD, and a 0.73% AFR on a particular 7200 RPM enterprise drive.  While these claims are likely inaccurate in absolute terms, we can hope that Seagate has kept them accurate relative to each other, in which case SSDs have roughly a one-fifth to two-fifths lower AFR.  I couldn’t find any AFR claims for mainstream HDDs from Seagate for comparison, but one would expect those to be slightly higher than the enterprise figures, so we will assume SSDs are 2/5ths more reliable than HDDs.

Based on Google’s yearly AFR statistics one can calculate the 5 year survival rate of an HDD like so:

.98 x .92 x .91 x .94 x .93 = .717 => 71.7% HDDs survive 5 years

This means there is a 28.3% chance an HDD will need to be replaced in a 5 year period; additionally, there is a small chance that the replacement will fail in the remaining years, that the replacement’s replacement will fail, and so on.  However, calculating the probabilities of these subsequent failures involves going down several branches of conditional probabilities and would not amount to more than a few percentage points’ difference.

Based on the relative AFRs claimed by Seagate, we will estimate that SSDs have a 2/5ths lower 5 year failure probability than HDDs, putting an SSD at a 16.98% 5 year failure probability.  I don’t really know whether Google used mainstream or enterprise HDDs (I have heard rumors that they just use lots of low end hardware and cluster it), but we will give enterprise HDDs the benefit of the doubt by saying they are 1/4th more reliable than the HDDs Google tested, putting them at a 21.225% 5 year failure probability.
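
The survival-rate product and the derived failure probabilities can be checked with a short Python sketch (variable names are mine; the yearly survival figures are read off Google’s data as quoted above):

```python
from math import prod

# Yearly survival rates for years 1 through 5, per Google's statistics
yearly_survival = [0.98, 0.92, 0.91, 0.94, 0.93]

survive_5y = prod(yearly_survival)      # ~0.717, i.e. 71.7% of HDDs survive 5 years
fail_5y = round(1 - survive_5y, 3)      # 0.283, i.e. 28.3% fail within 5 years

# Relative reliability assumptions from the Seagate AFR comparison above
ssd_fail_5y = fail_5y * (1 - 2 / 5)         # 0.1698: SSDs 2/5ths more reliable
enterprise_fail_5y = fail_5y * (1 - 1 / 4)  # 0.21225: enterprise 1/4th more reliable
```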

If we wanted to be really accurate we would calculate the survival rate per year and compute the replacement cost from the depreciated retail prices of the same drive; for the sake of simplicity, however, we will assume a drive costs half as much to replace as it initially did to purchase.  On average this means a $100 drive that fails 25% of the time over a 5 year lifetime will cost an average of $12.50 (25% of the $50 replacement cost) in replacements over the first 5 years of its life.

Mainstream HDDs: .283 x $38 (160GB) / 2 = $5.38

Enterprise HDDs: .21225 x $260 (147 GB) / 2 = $27.59

SSDs: .1698 x $260 (128 GB) / 2 = $22.07
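
Each line above follows the same rule (expected cost = failure probability x half the purchase price); a hypothetical helper makes that explicit:

```python
def expected_replacement_cost(fail_prob, purchase_price):
    """Expected replacement cost over 5 years, assuming a replacement
    costs half the original purchase price."""
    return fail_prob * purchase_price / 2

round(expected_replacement_cost(0.283, 38), 2)     # mainstream HDD: 5.38
round(expected_replacement_cost(0.21225, 260), 2)  # enterprise HDD: 27.59
round(expected_replacement_cost(0.1698, 260), 2)   # SSD: 22.07
```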

I find it interesting that the cheapest 147GB enterprise HDD and 128GB SSD are the same price on NewEgg, and additionally are both 2.5″ form factor.  One might think these SSDs were priced to be competitive with the high performance enterprise HDDs, but on the other hand SSDs’ prices are supposedly determined by flash prices.

Now we sum the purchase, power/cooling, and replacement costs to give us a 5 year cost and also a 5 year cost per gigabyte:

Mainstream HDDs: $38 + $5.38 + $85.41 = $128.79 => $128.79 / 160GB = $.80/GB

Enterprise HDDs: $260 + $27.59 + $167.02 = $454.61 => $454.61 / 147GB = $3.09/GB

SSDs: $260 + $22.07 + $16.13 = $298.20 => $298.20 / 128GB = $2.33/GB
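
Summing purchase, replacement, and power/cooling and dividing by capacity gives the totals above; a sketch of that final step (function name is mine):

```python
def five_year_cost_per_gb(purchase, replacement, power_cooling, gb):
    """Total 5-year cost of ownership and its cost per gigabyte."""
    total = purchase + replacement + power_cooling
    return round(total, 2), round(total / gb, 2)

five_year_cost_per_gb(38, 5.38, 85.41, 160)     # mainstream HDD: (128.79, 0.80)
five_year_cost_per_gb(260, 27.59, 167.02, 147)  # enterprise HDD: (454.61, 3.09)
```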

We see that an HDD costs roughly $70 more to power and cool.  What these $/GB calculations show is that once you take into account the long term costs of each type of drive, the gap between mainstream HDDs and SSDs closes significantly.  Rather than an SSD being 20 times the cost, it is now less than 3 times the cost of a mainstream HDD.  It would be even closer if we had considered some of the 10K RPM drives, but on the other hand we didn’t consider the $/GB of mainstream terabyte HDDs, which would probably come in around a 5 year effective cost of $.20/GB.


What I really wanted to glean from this cost analysis is how much of a premium I would be paying for a high performance 128GB SSD.  SSDs roughly outperform HDDs by 10 times, require significantly less cooling (which translates to less infrastructure and space), are more reliable, and are completely silent.  However, they cost 20 to 30 times as much as a mainstream HDD, which seems a hefty premium to pay for performance.  But when we considered the reduced heat and power consumption of the drive, the SSD came out at less than 3 times the cost of a mainstream HDD of similar size!  Now that factor of ten performance increase, quiet operation, and reduced cooling infrastructure doesn’t seem so expensive.  As SSDs get cheaper they will find their place in mainstream laptops and desktops, at least as a main OS and application drive.  As sales and storage density increase, we will hopefully see SSD prices fall fairly quickly.  However, the flash thumbdrive market had a pretty good run and may have already absorbed the initial rapid price drops we hope for, so we may only see the usual steady decline we see with most other silicon based products.  My one fear is that SSD densities may already be close to a slowly rising ceiling: since they already rely on 32nm processes, there are ever increasing challenges as chip makers work toward smaller semiconductor processes.  A smaller process size means more transistors fit in the same space, which generally means more GB for your money.  This is loosely what Moore’s Law describes, and some speculate that its historically exponential growth is reaching a slowing point and may taper off.

Getting back to the cost analysis: compared to enterprise HDDs, SSDs are cheaper as well, and from spot checking various benchmarks it seems SSDs outperform them on average, although there are probably specific access patterns where SSDs fall behind.  So the SSD seemingly wins on all fronts against the enterprise HDD.  Additionally, the comparison with the enterprise HDD is much fairer, since the drive we compared against is in a $/GB sweet spot for 15K RPM drives.  With the hoped-for improved reliability of SSDs, we would also hope for reduced costs in dealing with failed drives and the risks they pose.  Even in a RAID 5 array, a failed drive leaves the array vulnerable, where an additional drive failure will compromise the array; the potential for user error when swapping the failed drive poses further risk.

If I were a server administrator, I would seriously give SSDs a try.  There will probably be new challenges, but it’s worth working through them; too often I see people assess a challenge as a road block and give up without looking for a solution.  One issue I’ve not seen addressed yet is the lack of TRIM support in RAID controllers.  Additionally, there is a lot going on in the firmware of enterprise HDDs and RAID controllers to manage error checking and error handling.  I don’t know all the details of these algorithms, but some of them seem very specialized to nuances present only in spinning disk drives.  I expect it will take some time for SSD software and firmware to mature and bring out the full potential benefits that SSDs could offer enterprise storage.  There will likely be rumors, gossip, and debate all along the way about the best approach.  In truth, the people who have spent their whole lives writing these advanced algorithms will probably go off into a corner, reassess the situation, and do a far better job than any of us could.


Yes, I know it is unfair to refer to 15K RPM 2.5″ enterprise drives generically as “enterprise HDDs,” since there are cheaper enterprise drives, but for the sake of simplicity I wanted to consider only the high performance drives, because we are considering SSDs for their high performance characteristics.  In retrospect I wish I had done the same for mainstream HDDs by considering one of the mainstream high performance 10K RPM drives.
