
All-flash storage all the time? Why it doesn’t make sense for petabyte-scale data

Scality has delivered best-in-class storage solutions for more than a dozen years since we first launched Scality RING in 2010. Our goal has always been to provide customers with solutions for multi-petabyte-scale data problems that offer easy application integration, simplified systems management and reliable long-term data protection — all at an affordable price. 

Through nine major software releases, RING has embraced both flash-based SSDs and hard-disk drive (HDD) media in storage servers for what they each do well:

  • Flash SSDs provide low-latency access for small IO-intensive metadata and index operations.
  • High-density HDDs are ideal for storing high volumes of customer data in the form of files and objects for most workloads (including those requiring high throughput).
  • For latency-sensitive, read-intensive workloads, we do recommend all-flash for primary data storage.

Scality’s combined use of flash and HDD — leveraging the strengths of each for specific circumstances — provides an optimal balance of performance, capacity, long-term durability and cost. In essence, our software is able to use the appropriate storage media to provide the best fit for each workload, since this is ultimately about delivering value to our customers.

Are high-density flash SSDs ideal for high-capacity unstructured data storage?

Recently, high-density flash SSDs have been touted as a superior alternative to HDDs for high-capacity unstructured data storage. Industry news and several storage solution vendors have presented the latest generations of flash as effectively “on par” with HDD in terms of capacity cost, claiming this enables petabyte-scale storage along with flash’s other advantages. Some have even predicted the imminent death of the HDD, arguing that high-density flash SSDs can do it all, and do it better.

But does all-flash all the time really make sense for large volumes of data? Are HDDs going the way of the dodo? No, and no.

Here’s why: When it comes to price/performance, high-density SSDs can’t fully replace HDDs, especially with respect to petabyte-scale unstructured data storage across the spectrum of application workloads.

A fast history of fast flash storage

In the last decade, flash has received extraordinary attention and investment from the enterprise data storage industry. While a decade can be considered an eon in our space, consider that flash SSDs have now been implemented in enterprise storage arrays for over 15 years, since EMC delivered them starting in 2008. To help accelerate access to this new, lower-latency media, we’ve seen rapid advancements in interface protocols and speeds as the industry evolved from Fibre Channel (FC) to SATA and SAS, and then to PCI bus-based interfaces, culminating in the NVMe specification from Intel in 2011. 

Simultaneously, incredible advances were made in the underlying NAND flash, the type of non-volatile memory that first emerged on the market in the late 1980s. The original NAND flash technology was single-level cell (SLC) — one bit per cell — making it limited in capacity and expensive. The transition to two bits per cell (multi-level cell or MLC) provided a 100% increase in density from SLC. TLC (triple-level cell) offered a 50% increase over MLC, going from two to three bits per cell. This was a major inflection point. Flash memory was then adopted by mainstream IT since it was high-performance, reliable and more affordable. 

Quad-level cell (QLC) is the latest big step in flash storage, packing four bits per cell for a 33% density increase over TLC. But it’s also extremely important to understand that, due to its higher bit density (more bits per cell), the durability of QLC SSDs is inherently orders of magnitude lower than that of HDDs, and lower even than that of less-dense flash SSDs. We see this demonstrated in the well-known “P/E cycle” ratings of the various flash types, a measure of how many times the flash media itself can be programmed and erased, which fall from roughly 100,000 cycles for SLC down to just 1,000 cycles for QLC.
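
To put those P/E cycle ratings in perspective, endurance can be roughly estimated as capacity multiplied by rated P/E cycles, divided by write amplification. The short Python sketch below uses illustrative, assumed values (the drive capacity, P/E ratings and write amplification factor are not vendor specifications) to show how lifetime writes and drive writes per day (DWPD) fall as bit density rises.

```python
# Rough endurance model: lifetime writes ≈ (capacity × P/E cycles) / write amplification.
# All values below are illustrative assumptions, not vendor specifications.

def lifetime_writes_tb(capacity_tb: float, pe_cycles: int, write_amp: float = 3.0) -> float:
    """Approximate total terabytes that can be written over the drive's life."""
    return capacity_tb * pe_cycles / write_amp

def dwpd(capacity_tb: float, pe_cycles: int, years: float = 5.0, write_amp: float = 3.0) -> float:
    """Approximate sustainable drive writes per day over the warranty period."""
    return lifetime_writes_tb(capacity_tb, pe_cycles, write_amp) / (capacity_tb * years * 365)

# Nominal P/E ratings per cell type (order-of-magnitude assumptions)
for cell_type, pe in [("SLC", 100_000), ("MLC", 10_000), ("TLC", 3_000), ("QLC", 1_000)]:
    print(f"{cell_type}: ~{lifetime_writes_tb(30, pe):,.0f} TB lifetime writes, "
          f"~{dwpd(30, pe):.2f} DWPD over 5 years (30 TB drive)")
```

On these assumptions, a 30 TB QLC drive sustains only a fraction of a full drive write per day over five years, which is why its fit for churn-heavy workloads deserves scrutiny.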

Before we discuss the impact of these QLC flash durability properties in storage systems, let’s establish what QLC flash SSDs claim to provide:

  • High density (just over 30 TB per drive in the latest 2.5” NVMe-based drives)
  • Low-latency access (microseconds compared to millisecond level access times on HDD)
  • Improved cost curves (trending better than HDDs annually on a per GB basis)
  • Better “green” characteristics than HDDs (specifically from a power consumption perspective)

But do these claims match reality? Let’s address performance first.

The performance reality of high-density QLC flash SSDs

Storage systems optimized for petabyte-scale unstructured data (mainly file and object storage) have primarily leveraged HDDs for their high density, reliability and performance characteristics and, importantly, for their cost-effectiveness: a low dollar-per-gigabyte cost of capacity.

For organizations that require dozens of petabytes or more (in financial services, healthcare, government agencies, media and beyond), HDD-based storage solutions still offer the best blend of price/performance and affordability for storing and protecting data over long-term retention periods. Their absolute raw performance is also more than sufficient for these data workloads. Demonstrating their proven nature at massive scale: 90% of storage capacity in cloud data centers is still HDD-based today.

In recent years, a few vendors have started to embrace newer QLC flash SSDs as a basis to deliver a trifecta of high density, high performance and low cost for storing massive capacities of unstructured data. The argument for these solutions hinges on the ability of these vendors to deliver the aforementioned combination of advantages:

  1. Performance that is meaningfully better for the application
  2. Reduction in the effective cost to near parity with HDD on a cost-per-GB basis
  3. Sufficient durability to provide enterprise-class data reliability for the long-term

Flash storage systems can optimize performance for workloads that require random access to small data payloads. A classic example is a transactional system that performs random queries against a product ordering system, looking up customer records by a key (such as the customer name or phone number). Newer workloads with similar patterns involve small IoT or device sensor event streams, with small event data (kilobytes or less per record). In these scenarios, flash SSDs can offer gains in lower latency and higher operations per second compared to HDDs.

Broadly speaking, enterprise customers achieve sufficient performance, in both latency and operations per second, on HDD-based systems even for latency-sensitive workloads. If the difference between microsecond- and millisecond-level latencies isn’t visible or important (again, the key word being meaningful) to the end user, then other factors, including the pricing of the solution, become the primary buying criteria.

Most storage practitioners know that performance requirements depend on the data (type, size, volume) and the application’s access patterns, so we have to consider a diverse range of workload demands on storage. When an application requires sequential access to larger payloads (files in the megabyte range and above), the performance equation changes dramatically from small-file workloads. 

Many modern applications fit this characterization — anything that deals with video, high-resolution photos and other large binary data types (such as video content delivery, surveillance, imaging in healthcare, and telemetry) will likely have sequential write and read patterns.

Backup applications within the broad context of data protection also fit this profile. With cybercriminals targeting backup data in 93% of attacks, solutions from vendors such as Veeam, Commvault, Veritas, IBM and Rubrik have gained importance as a mission-critical line of defense against ransomware threats, and they go hand-in-hand with immutable data storage solutions as the targets for storing backups. 

These applications tend to read and write larger file payloads to storage, with object storage on the rise thanks to its intrinsic immutability advantages. In terms of demands on the storage system, these applications are just about the exact opposite of random-IO, latency-sensitive workloads:

  • Backup: Store (write) large backup payloads into the storage system sequentially
  • Restore: Critical to rapid business recovery, the backup application sequentially reads these large stored payloads back to the user application. 

The important performance metric for the backup application is throughput in gigabytes per second (or terabytes per hour) for fast sequential access to large backup files. Additionally, in today’s enterprises with hundreds of mission-critical applications, multiple backup and restore jobs need to run in parallel. Economically, it makes sense to leverage shared storage systems and avoid the classic storage-silo explosion.

For these sequential IO workloads, the difference between a QLC-flash and HDD-based solution is insignificant. HDD-based object storage solutions can be configured to achieve dozens of GB/second (tens of terabytes per hour), with sufficient throughput to saturate the network to the application. This is key, since the storage system isn’t the limiting factor in terms of performance. 
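
As a quick sanity check on those numbers, the arithmetic below (in Python, using illustrative throughput and network figures that are assumptions rather than benchmark results) converts GB/s into TB/hour and compares an HDD-based system’s aggregate throughput with the network bandwidth available to the application.

```python
# Illustrative throughput arithmetic -- assumed figures, not benchmark results.

def tb_per_hour(gb_per_sec: float) -> float:
    """Convert sustained throughput from GB/s to TB/hour (decimal units)."""
    return gb_per_sec * 3600 / 1_000

hdd_system_gbps = 30.0     # assumed aggregate throughput of an HDD-based object store
network_gbps = 2 * 12.5    # assumed 2 x 100 GbE links to the application, ~12.5 GB/s each

print(f"HDD system:  ~{tb_per_hour(hdd_system_gbps):.0f} TB/hour")  # ~108 TB/hour
print(f"Network cap: ~{tb_per_hour(network_gbps):.0f} TB/hour")     # ~90 TB/hour
print("Bottleneck:", "network" if network_gbps < hdd_system_gbps else "storage")
```

Under these assumptions the network, not the storage media, caps the achievable restore rate, so replacing HDD with faster flash would not change what the application experiences.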

Moreover, the application itself can be the limiting factor in overall solution performance, with application processing, dedupe/compression and data reassembly times surfacing as the dominant time factors for backup and restore. This means that the incremental difference between HDD and flash SSD throughput can often be rendered meaningless, especially when we consider the factor of cost (as we do in the next section).

Before we analyze cost, there is one remaining major performance characteristic of flash to be aware of. Performance decays as the workload “churns” (overwrites and deletes) the flash media and as capacity approaches maximum utilization. When a drive is factory-fresh, new data is written to the drive in a linear pattern until the usable capacity has been filled. In this scenario, the drive hasn’t had to perform any reorganization or reclamation of its internal blocks and pages (termed garbage collection) resulting from data being deleted or overwritten.

For many application workloads, including backup, data is frequently overwritten and deleted. This means that the drives will have to perform garbage collection, and the pattern in which data is written will begin to impact performance. Entire studies are dedicated to the utilization and garbage collection dynamics of flash SSDs, as well as to more esoteric aspects related to data reduction techniques and data entropy. 

We can summarize that in practice, and especially for workloads that are not purely sequential, an SSD’s performance begins to decline after it reaches about 50% full, and utilization should be kept below roughly 70% to preserve performance. 

What does this mean for end users? The initial performance of flash storage solutions when first deployed represents absolute peak capabilities, with performance degrading over time as the system is filled. Some savvy admins even resort to custom “over-provisioning” levels on the SSD to reserve additional internal capacity to boost performance — but this effectively reduces the available data storage capacity of the drive and further increases the real cost.
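
The following sketch (Python, with assumed drive capacity, price, utilization ceiling and over-provisioning values that are purely illustrative) shows how reserving performance headroom shrinks usable capacity and raises the effective cost per terabyte of data actually stored.

```python
# Effective capacity and cost once performance headroom is reserved.
# Drive size, price, utilization ceiling and over-provisioning are illustrative assumptions.

def effective_cost_per_tb(raw_tb: float, drive_price: float,
                          overprovision: float = 0.0, max_util: float = 0.70) -> float:
    """Cost per TB of data actually stored while staying under the utilization ceiling."""
    usable_tb = raw_tb * (1 - overprovision) * max_util
    return drive_price / usable_tb

raw_tb, price = 30.0, 3_000.0  # assumed 30 TB QLC drive at an assumed $3,000
print(f"No headroom:          ${price / raw_tb:,.0f}/TB")
print(f"70% utilization cap:  ${effective_cost_per_tb(raw_tb, price):,.0f}/TB")
print(f"Plus 20% extra OP:    ${effective_cost_per_tb(raw_tb, price, overprovision=0.20):,.0f}/TB")
```

On these assumptions, the same drive ends up roughly 40 to 80 percent more expensive per stored terabyte once the headroom is reserved.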

The real cost of QLC flash SSDs

As a software-defined storage vendor, Scality partners with nearly all industry-leading server platform providers, giving us unique insights into the true prices of storage media across the HDD and flash SSD spectrum. To fairly assess the current claims of price parity between flash SSDs and HDDs, you first have to clarify what is really being compared. 

For high-capacity storage, the comparison we need to make is between the following flash SSD and HDD drive types:

  • High-density, large form factor (3.5” LFF) hard disk drives (HDD): up to 22 TB per drive (as of this writing)
  • High-density, QLC-based flash SSDs: up to 30 TB per drive (as of this writing)

Today, the reality is that the media cost of QLC flash drives is 5 to 7 times higher than that of LFF HDDs on a dollar-per-gigabyte basis. 

Since both media technologies have active roadmap development, this gap will narrow, but it is still forecast to be 4 to 5 times in favor of HDDs on a $/GB basis by the end of the decade. And a really important note: vendors can make a much more favorable price comparison for SSDs by comparing them to small form factor (2.5” SFF) hard disk drives.
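
To see what that gap means at scale, the back-of-the-envelope calculation below uses an assumed, illustrative HDD media price (not a quote) and the 5 to 7x multiplier above to compare raw media cost for a 10 PB deployment.

```python
# Illustrative petabyte-scale media cost comparison (assumed prices, not quotes).

usable_pb = 10
hdd_cost_per_tb = 20.0            # assumed LFF HDD media cost in $/TB
qlc_multipliers = (5, 7)          # QLC flash at 5-7x HDD on a $/GB basis

hdd_total = usable_pb * 1_000 * hdd_cost_per_tb
qlc_low, qlc_high = (hdd_total * m for m in qlc_multipliers)

print(f"HDD media for {usable_pb} PB:  ~${hdd_total:,.0f}")
print(f"QLC media for {usable_pb} PB:  ~${qlc_low:,.0f} to ~${qlc_high:,.0f}")
```

Even before servers, networking and protection overhead are counted, the media delta alone approaches or exceeds a million dollars per 10 PB under these assumptions.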

Ask your vendor the tough questions about what is being compared when they make cost claims about flash SSDs versus HDDs, since SFF hard drives don’t offer the same advantages as LFF HDDs.

Given the 5 to 7x higher price of capacity for flash SSDs, vendors must rely on data reduction techniques to make the effective cost of flash lower. Techniques including data compression and deduplication are implemented in flash storage systems to provide this level of reduction (and also to preserve flash media durability), with claims ranging from nominal 2:1 up to 20:1 reduction factors in required storage capacity. 

These claims assume that the application does not implement data reduction techniques itself. While this may be true in some specialized application domains, we can once again point to enterprise backup applications, which do perform their own data reduction, dramatically limiting the storage system’s ability to achieve the reduction factors claimed above. The same is true for domains such as video content and genomics, where data is already highly optimized and reduced in some form. Here, as with backup data, the underlying flash system cannot be expected to achieve the same levels of data reduction as it would with non-optimized data. This erodes the third leg of the claimed flash SSD trifecta: cost per GB.
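
One way to frame this: given the media price gap, a flash system needs an effective data reduction ratio at least as large as that gap just to reach cost parity. The sketch below (assumed, illustrative figures only) computes the parity ratio and shows what happens when the data, such as already-deduplicated backups, leaves little room for further reduction.

```python
# How much data reduction does flash need just to reach HDD cost parity?
# All figures are illustrative assumptions.

def effective_cost(cost_per_gb: float, reduction_ratio: float) -> float:
    """Effective $/GB of logical data after applying a reduction ratio."""
    return cost_per_gb / reduction_ratio

hdd_cost = 0.02              # assumed HDD media cost in $/GB
qlc_cost = hdd_cost * 6      # midpoint of the 5-7x price gap

print(f"Reduction ratio needed for parity: {qlc_cost / hdd_cost:.0f}:1")

for label, ratio in [("fresh, highly reducible data", 4.0),
                     ("already-deduplicated backup data", 1.2)]:
    print(f"{label}: ~${effective_cost(qlc_cost, ratio):.3f}/GB vs HDD at ${hdd_cost:.3f}/GB")
```

With data that is already reduced at the application layer, the flash system cannot close the gap under these assumptions.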

In summary:
  • The promise of cost parity for flash SSDs hasn’t yet arrived.
  • Flash vendors won’t be able to eliminate this media price disadvantage through data reduction techniques for backup and other unstructured data workloads.
  • While there may be a relatively small advantage in flash SSD performance for sequential access workloads, it comes with a significant penalty in price versus HDD-based solutions.

Takeaway:

QLC flash-based data storage is not needed for all large-scale, mission-critical workloads. While we recommend and use it for latency-sensitive, read-intensive workloads that justify its higher cost, QLC flash is not an ideal fit today for most other workloads, including backups, which sit at the center of modern data and ransomware protection strategies.

The long-term question: Durability of QLC flash for “high-churn” workloads

There are two other critical considerations we will explore in forthcoming blogs. The first is the power efficiency of flash SSDs versus HDDs, and whether it is reality or hype. The second is the critical issue of long-term durability of QLC-based flash SSDs for workloads that do not store data statically but instead drive a high rate of data churn. 

Since both are cornerstone issues for enterprise storage, they are certainly worth a detailed discussion. Stay tuned!
