Flash Storage has invaded the storage industry. It has been touted as one of the performance saviours, and with its $/GB dropping significantly (although still nowhere near the $/GB of spinning hard disk drives at this point in time), Flash Storage has recently become the poster child of the industry.
In general, many of us assume that the umbrella term of “Flash Storage” basically means NAND Flash. However, there are many different types and many different forms of “Flash Storage”. In fact, to be absolutely precise, the correct term is Solid State Storage. That is why SNIA (the Americans pronounce it “sneee-ya”), the vendor-neutral storage networking industry association that connects almost all the storage networking and data management companies and standards bodies, got it right.
The SNIA definition of Solid State Storage, in a nutshell, is “Data storage made from silicon chips (instead of spinning metal platters or streaming tape) is called solid state storage.” SNIA also has a technology community called the Solid State Storage Initiative (SSSI) that helps form and foster the development of solid state storage for the industry.
Flash Storage delivers high IOPS and throughput to the applications, or to be more specific, to the CPU-Memory complex. Ultimately, the application, whether in pieces or in its entirety, runs in the CPU-Memory complex.
Let’s take the most common form of Flash Storage in the market today, the ever-present SATA III SSD. The SSD has a SATA III interface as shown below:
The SATA interface connects (within the computing platform) to an AHCI (Advanced Host Controller Interface) HBA, which in turn connects to the PCIe bus. The PCIe bus delivers the data bytes to the CPU-Memory complex. The diagram below describes the various interfaces involved, from where the data blocks reside (i.e. the SATA SSD) to where the data is processed (i.e. the CPU-Memory complex) and back again.
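If you are curious, this chain is also visible from software. Below is a minimal sketch (Linux-only, and it assumes your SATA SSD shows up as /dev/sda) that follows the sysfs path of the block device back through the ATA/AHCI host to the PCIe bus:

```python
# A minimal sketch (Linux-only; assumes the SATA SSD is /dev/sda) that walks sysfs
# to show the chain described above: block device -> ATA/AHCI host -> PCIe bus.
import os

def show_device_chain(block_dev="sda"):
    # /sys/block/<dev> is a symlink whose real path encodes the hardware chain, e.g.
    # /sys/devices/pci0000:00/0000:00:1f.2/ata1/host0/target0:0:0/0:0:0:0/block/sda
    real_path = os.path.realpath(f"/sys/block/{block_dev}")
    print(real_path)
    for component in real_path.split(os.sep):
        # Each matching component is one hop between the SSD and the CPU-Memory complex.
        if component.startswith(("pci", "0000:", "ata", "host", "target")):
            print("  hop:", component)

if __name__ == "__main__":
    show_device_chain()
```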
The introduction of Flash Storage has definitely transformed our views on I/O performance and throughput, but there is still a “relative distance” between SATA and the CPU-Memory complex. Latency remains (albeit lower than that of the common spinning HDDs), and this is not good enough for very mission-critical applications that demand extremely low latency, high IOPS and high throughput.
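To give a sense of that “relative distance”, here is a rough back-of-the-envelope comparison. The latency figures are order-of-magnitude assumptions of my own for illustration, not measurements from any particular device:

```python
# Rough, assumed access latencies (order-of-magnitude only, for illustration).
LATENCY_NS = {
    "Spinning HDD (seek + rotation)": 5_000_000,  # ~5 ms
    "SATA III SSD read":                100_000,  # ~100 us
    "DRAM access":                          100,  # ~100 ns
}

baseline = LATENCY_NS["DRAM access"]
for device, ns in LATENCY_NS.items():
    print(f"{device:32s} ~{ns:>9,} ns  (~{ns // baseline:,}x DRAM latency)")
```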
What if we could place Flash Storage into the memory (DIMM) slots? This would bypass the SATA interface, the AHCI HBA and even the PCIe bus! How cool is that!
The DDR3 DRAM is the component that sits in the DIMM slots today. These are our RAM, our ECC memory. From the table below (taken from the SNIA NVDIMM Technical Brief document), we can see the comparison between DDR3 DRAM and NAND Flash.
NAND Flash loses out to DRAM in terms of performance, hands down. But NAND Flash has larger capacities and it is non-volatile. The moment active power is removed from DRAM, every bit of data in the DRAM is gone in an instant. In NAND Flash, however, the data blocks are retained even when active power is removed from the SSD housing the NAND Flash. This is because NAND Flash, especially in the memory storage context, will have some form of battery or supercapacitor to retain its data content for a brief period of time, long enough to salvage the bits. Furthermore, the contents of the NAND Flash are restorable after power is resumed.
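From the software side, that volatility difference is the familiar gap between in-memory state and durable storage. The sketch below is only an analogy (the file path is hypothetical; the file merely stands in for NAND Flash while the Python list stands in for DRAM):

```python
# An analogy only: the list plays the role of volatile DRAM, the file plays the
# role of non-volatile NAND Flash. The path is hypothetical.
import os

dram_buffer = []                 # volatile: lost the instant the process/power dies
nand_backing = "/tmp/nand.bin"   # non-volatile: survives power loss once flushed

def write_block(data: bytes) -> None:
    dram_buffer.append(data)     # fast, but gone on power failure
    with open(nand_backing, "ab") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())     # only after fsync is the block durable on the media

write_block(b"hello, persistent world")
```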
This concludes Part 1. In Part 2, I will talk more about the NVDIMMs (Non-Volatile Dual Inline Memory Modules). So hold on to your hat, folks!