Why SSD and Flash Need More Brains to Reach the Corporate Data Center
About 20 years ago, a wave of storage startups began adding intelligence (in the form of hardware and software) to commodity hard disk drives. Technologies such as hardware-based RAID controllers in SANs removed the individual disk drive as a single point of failure, allowing vendors to deliver enterprise levels of performance, scalability, and reliability at a fraction of the price of previous storage solutions.
Today, those startups are themselves about to be disrupted by a new wave of challengers. This time, the challengers are adding intelligence to external SSDs and to server-side Flash memory. This new software layer will enable the broad adoption of distributed server and storage architectures not only for service providers but also for enterprise customers, delivering a many-fold improvement in performance at a fraction of the cost of existing storage systems.
This software must enable the cost-effective scale-out of block- and file-based storage across three dimensions: performance, high availability, and capacity. Our intelligent software layer currently addresses the first two and is about to address the third.
To understand this better, let’s look at how SSD (solid-state disks within arrays) and Flash (flash memory within servers) are currently being used, and where their use falls short.
Challenge: Performance and High Availability
Because they’re relatively expensive and have relatively low capacities, SSD and Flash are now used mainly as a caching layer, either within servers or within storage arrays otherwise made up of traditional hard drives. This approach has several costs. The overhead of moving data from legacy storage to the SSDs over the storage bus saps performance. Caching wastes space by requiring data (and any changes to the data) to be stored both on primary storage and in the cache. Constantly writing and deleting cached data shortens the life of SSD and Flash memory. And because Flash installed in a server is visible only to that server, it becomes a hard-to-access data silo and a single point of failure that endangers application availability. The current use of commodity SSD and Flash, then, doesn’t meet enterprise performance and high-availability requirements.
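To make the waste concrete, here is a minimal, hypothetical Python sketch of a write-through cache (this is illustrative only, not Melio or any vendor’s code; every name is an assumption). Each logical write lands twice, once on primary storage and once in Flash, so the cache both duplicates capacity and multiplies the wear on the Flash:

```python
# Hypothetical toy model of a write-through Flash cache (illustrative only).
# It shows the two costs described above: every block is stored twice, and
# every update (and every cache miss) costs an extra Flash write.

class WriteThroughCache:
    def __init__(self):
        self.primary = {}       # slow spinning-disk tier
        self.flash_cache = {}   # fast but wear-limited Flash tier
        self.flash_writes = 0   # rough proxy for Flash wear

    def write(self, key, value):
        self.primary[key] = value        # copy 1: primary storage
        self.flash_cache[key] = value    # copy 2: the cache
        self.flash_writes += 1           # each update also wears the Flash

    def read(self, key):
        # A miss pulls the block across the storage bus into the cache,
        # costing yet another Flash write before the application sees it.
        if key not in self.flash_cache:
            self.flash_cache[key] = self.primary[key]
            self.flash_writes += 1
        return self.flash_cache[key]


cache = WriteThroughCache()
for i in range(1000):
    cache.write(f"block-{i}", b"data")

# 1,000 logical writes produced 1,000 Flash writes on top of the
# 1,000 primary writes, and every block now occupies space twice.
print(cache.flash_writes)
```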
Our Melio data management platform solves the high-availability issue with a host-based clustered volume manager and a dynamic, symmetrical clustered file system that span SSD and Flash storage linked to any number of commodity servers. The resulting architecture enables dynamic scale-out and high availability of SSD and server-side Flash in commodity servers. Our recently announced Latency Targeted Allocator (LTA) module adds server-based Flash cards and solid-state drives (SSDs) to Melio’s previous support of conventional spinning disk.
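To give a rough intuition for what latency-targeted allocation means, here is a hypothetical Python sketch. The tier names, latency and cost figures, and the allocate() helper are all assumptions made for illustration; they do not reflect LTA’s actual API or internals. Instead of caching a second copy of hot data, an allocator of this kind places each volume exactly once, on the cheapest tier that can still meet the workload’s latency target:

```python
# Hypothetical sketch of latency-targeted allocation (illustrative only;
# not LTA's implementation). Data is placed once, on the cheapest tier
# that satisfies the requested latency target, so nothing is duplicated.

from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    latency_us: int     # typical access latency in microseconds (assumed)
    cost_per_gb: float  # relative cost per gigabyte (assumed)

# Assumed example tiers spread across commodity servers and arrays.
TIERS = [
    Tier("server-flash",  latency_us=100,  cost_per_gb=2.00),
    Tier("array-ssd",     latency_us=500,  cost_per_gb=1.00),
    Tier("spinning-disk", latency_us=8000, cost_per_gb=0.05),
]

def allocate(latency_target_us: int) -> Tier:
    """Pick the cheapest tier whose latency meets the target."""
    candidates = [t for t in TIERS if t.latency_us <= latency_target_us]
    if not candidates:
        raise ValueError("no tier can meet the requested latency target")
    return min(candidates, key=lambda t: t.cost_per_gb)

print(allocate(1000).name)  # -> array-ssd (cheapest tier under 1 ms)
print(allocate(150).name)   # -> server-flash (only tier under 150 µs)
```

Storing data once on the right tier, rather than twice across a cache and primary storage, is what avoids the duplication and wear problems described above.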
Next Frontier: Capacity
This leaves, of course, the second challenge: how to cost-effectively scale capacity across block and file storage usage models while also benefiting from the low latency and fast performance of SSD and Flash. That’s where the intelligence in our LTA really shines. In our next post we will share more about how LTA delivers on the full promise of cross-workload, file and block scalability built on commodity storage and servers.
If you are interested in learning more about our platform, please reach out.
Follow me on Twitter: @mvmsan
Experience for yourself why over 700 customers globally have chosen Sanbolic over more costly and complex alternatives. Try Melio for FREE.