
A Choice between Equals: How and Why SSDs and CFMs aren't very different at all
Paul Haverfield, Storage CTO, HPE


Storage solutions today are evolving at an impressive rate. The impetus behind this evolution has primarily been the shift from hardware to software-based storage, and the rapidly approaching total replacement of HDDs with All-Flash Storage. With its advantages of no moving parts, greater capacity, faster response times, and reduced energy consumption, flash storage has come to occupy a decisive chunk of the storage market, generating as much as $794.8 million in revenue, according to IDC. This form of storage comes primarily in two types: All-Flash Arrays (AFAs) and Hybrid Flash Arrays (HFAs), of which AFAs are the more commonly used.
Flash media is typically packaged in two major forms: Solid State Drives (SSDs) and Custom Flash Modules (CFMs). SSDs are an industry-standard packaging of NAND flash chips with a flash controller and, typically, an industry-standard SAS or SATA interface. CFMs are a custom-designed (proprietary) packaging of NAND flash chips, a controller, and an interface, which may be SAS, PCIe, or something else. CFMs are proprietary in nature: companies such as IBM, Hitachi, and Violin design their own CFMs, whereas SSDs are an industry-standard package.
The prevailing belief in the industry today with regard to these two types of flash packaging is that CFMs hold a distinct advantage in terms of storage and processing capacity, cost, and speed. In theory this should be true, owing to the holistic approach CFMs take to managing flash media. In addition, proprietary flash modules allow vendors to customize their flash storage devices to their own changing requirements, which should allow for greater innovation and utility. The performance of CFMs in the real world, however, presents a very different case altogether.
Processing Capacity
CFM vendors claim that their processing speeds are higher because they eliminate the extra layer that SSD packaging places between the flash media and the actual workload. Processing capacity is measured in input/output operations per second (IOPS). These claims of higher processing speed do not hold true, as my analysis of Storage Performance Council SPC-1 performance results indicates: SSD-based systems deliver almost twice the IOPS per GB of CFM-based systems, contradicting the theoretical claim to superiority.
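For readers who want to reproduce this kind of normalization from published SPC-1 full disclosure reports, the arithmetic is straightforward: divide the tested SPC-1 IOPS by the usable capacity of the configuration. The minimal Python sketch below illustrates the calculation; the system names and figures are hypothetical placeholders, not actual SPC-1 submissions.

# Minimal sketch: normalizing SPC-1-style results to IOPS per GB.
# All figures are hypothetical placeholders, not actual SPC-1 submissions.

def iops_per_gb(spc1_iops: float, usable_capacity_gb: float) -> float:
    """Tested IOPS divided by the usable capacity of the tested configuration."""
    return spc1_iops / usable_capacity_gb

# Illustrative inputs: one SSD-based and one CFM-based configuration.
results = {
    "SSD-based AFA (hypothetical)": (400_000, 20_000),   # (IOPS, usable GB)
    "CFM-based AFA (hypothetical)": (450_000, 45_000),
}

for name, (iops, capacity_gb) in results.items():
    print(f"{name}: {iops_per_gb(iops, capacity_gb):.1f} IOPS/GB")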
Cost and Performance
One of the primary reasons for CFM adoption has been the claim by vendors that CFMs are far more economical than SSDs while providing greater levels of performance and customization. Vendors also claim that the holistic approach CFMs take to controlling the entire flash array enables greater productivity than SSDs while keeping costs to a minimum.
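Cost can be normalized in the same way: an SPC-1 full disclosure report lists the total price of the tested configuration alongside its usable capacity, so cost per GB is again a single division. The sketch below, using hypothetical prices and capacities rather than real submissions, extends the earlier example.

# Minimal sketch: deriving cost per GB from SPC-1-style pricing disclosures.
# Prices and capacities are hypothetical placeholders, not real submissions.

def cost_per_gb(total_price_usd: float, usable_capacity_gb: float) -> float:
    """Total tested-configuration price divided by usable capacity."""
    return total_price_usd / usable_capacity_gb

quotes = {
    "SSD-based AFA (hypothetical)": (550_000.0, 100_000),   # (USD, usable GB)
    "CFM-based AFA (hypothetical)": (560_000.0, 105_000),
}

for name, (price_usd, capacity_gb) in quotes.items():
    print(f"{name}: ${cost_per_gb(price_usd, capacity_gb):.2f}/GB")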
This claim is once again negated by my SPC-1 result analysis, which shows almost no difference in cost per GB between CFMs and SSDs. Performance, too, shows little difference: despite their holistic approach, CFMs are unable to completely utilize all the capacity and power an array can provide.
Response Times
An overarching claim, following from all the advantages CFMs are perceived to possess, is a faster response time than SSDs, owing to a more optimized system of storage and wear leveling. With more control over this essential aspect of flash, such a claim would solidify CFMs' superior position. This final claim, too, is contradicted by the SPC-1 result analysis, which shows almost no difference between systems that use SSD storage and those that use CFM storage; both deliver response times of less than a millisecond, making even minor differences negligible.
Why?
In theory, CFMs should be better; however, in practice, when configured into an AFA and supporting real-life workloads, they are not – and the SPC-1 results of SSD and CFM systems illustrate this point. Why is this so?
The primary reason is that the bottleneck in flash storage does not lie within the flash media itself. Within any All-Flash Array, the bottleneck is, and always will be, the controllers. Both SSDs and CFMs are therefore merely the media component used in the flash storage system, rather than the source of the bottleneck.
As a result, the evaluation of an AFA must shift towards choosing the right system with a balanced set of attributes, rather than focusing on how one component is connected and packaged within that system.
Don’t take just my word for it. Recently, IDC’s Eric Burgener wrote a very well-balanced technology assessment on this very topic: Flash Media Packaging Decisions Secondary to System-Level Considerations in the All-Flash Array Market.
Here’s a snippet from his assessment - "As customers evaluate AFA platforms to replace aging legacy storage infrastructure, they will find excellent, enterprise-class options built around either custom flash modules or solid state disks," said Eric Burgener, Research Director for Storage. "When selecting the solution that best meets individual requirements, flash media packaging decisions should play a secondary role relative to the features an AFA delivers in terms of performance, endurance, availability, reliability, storage density, recovery times, and cost at the system level."
So What Does the Future Hold?
Much of this discussion is a moot point as it really only relates to the current generation of AFAs on the market. For the most part, I expect the next generation of AFA platforms to all be using the NVMe interface for back-end media connectivity.
So for today it pays to remember that we should focus our evaluations on the system, and recognise that the performance of an AFA is very rarely determined by the back-end media; it’s the system architecture that counts!
Founded in 2015, Hewlett Packard Enterprise (NYSE: HPE) is a business-focused organization with four divisions: Enterprise Group, which works in servers, storage, networking, consulting and support; Services; Software; and Financial Services.
