
How to achieve flash performance for unstructured data

Scale-out storage flash performance makes the cut for unstructured data

Storage vendors talk up how flash performance can help customers handle the next avalanche of data. But it’s not just any kind of data: the growth of unstructured data vastly outpaces that of other types, such as structured data, which fits neatly into the rows and columns of well-known relational databases.

Most companies understand that they manage petabytes of file data in the form of documents, media (videos and photos), backup files, virtual machine files and more. That mix is also expanding to include industry-specific file types (medical images, genomics) plus data from new Internet of Things and edge applications.

As a result, scale-out object and file storage solutions are becoming more popular because IT teams know these systems can modernize their infrastructure by:

  • storing billions of objects and dozens of petabytes,
  • extracting more value and insight from their data,
  • achieving cloud-like economics.

However, even with these benefits, there is an industry perception that scale-out storage is sluggish compared to traditional block storage arrays (SAN) and file servers (NAS filers), and is therefore only suitable for “archival” or “Tier 2” applications. This framing is misleading: so-called “Tier 1” or “mission-critical” applications usually mean transactional systems built on a database. What matters there is how fast the application can find and process small records, not how long it takes to stream the data itself, so the performance metric that matters most for block storage is access latency.

Accordingly, the expectation for most flash array storage systems is that access latencies should be in the single-digit milliseconds (ms), or even below 1 ms, for these types of transactional workloads. While there are storage systems that respond faster than a millisecond, the applications that actually require this level of performance are highly specialized, such as financial trading.

For modern scale-out storage systems like Scality RING, we need to consider two dimensions of performance:

  1. Access latency: the response time to a read or write request, as discussed above, and…
  2. Throughput: how fast data can be transferred once it is located, typically measured in megabytes per second (MB/sec) or gigabytes per second (GB/sec).

Throughput is king for massive data workloads

For most large data payloads, such as streaming video or writing (storing) and reading (recovering) backups, throughput rather than latency is the key measure of performance. Since reading a large file (megabytes to gigabytes) may take seconds to minutes, as in video streaming, whether the access latency is one millisecond or several is irrelevant to overall application performance.
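To see why, consider the back-of-the-envelope model below. It is only a sketch: the file size, latency and throughput figures are illustrative assumptions, not benchmark results.

```python
# Total time to deliver a large object = access latency + size / throughput.

def transfer_time(size_gb, latency_ms, throughput_mb_s):
    """Return total read time in seconds for an object of size_gb gigabytes."""
    return latency_ms / 1000.0 + (size_gb * 1000.0) / throughput_mb_s

# A 10 GB backup image read at 500 MB/sec, with 1 ms vs. 10 ms access latency:
for latency_ms in (1, 10):
    total = transfer_time(10, latency_ms, 500)
    print(f"latency {latency_ms:>2} ms -> total {total:.3f} s")

# Both runs take roughly 20 seconds; the milliseconds of latency are lost in
# the noise, which is why throughput dominates for large payloads.
```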

With Scality RING, throughput is and always has been a core strength, with the ability to scale out to dozens of gigabytes per second and beyond. This is why RING is widely deployed in production for throughput-intensive workloads such as the video streaming and backup use cases described above.

Clearly, these are not archival, “Tier 2” workloads with infrequent-to-never access characteristics. These “mission-critical” data sets require “Tier 1” throughput performance, which is why RING is a trusted scale-out storage solution for some of the largest data workloads in the world.

Latency optimized with flash performance

To understand how scale-out storage handles latency, it helps to examine the underlying architecture.

RING has leveraged the performance of flash media for nearly a decade to store internal file system and object storage metadata. For file systems, that metadata covers the directory structures, POSIX permissions and the file paths/identifiers. For object storage, it covers the containers (Buckets in AWS S3), plus attributes such as Bucket names, creation dates, access control policies, and the IDs (keys) of the objects in each Bucket.
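For readers who think in terms of the S3 API, the sketch below shows what purely metadata-driven requests look like from the client side. It uses boto3 against a hypothetical S3-compatible endpoint, and the bucket and object names are made up for illustration; none of these calls read an object payload.

```python
import boto3

# Hypothetical S3-compatible endpoint; credentials come from the environment.
s3 = boto3.client("s3", endpoint_url="https://s3.ring.example.com")

buckets = s3.list_buckets()["Buckets"]            # Bucket names and creation dates
acl = s3.get_bucket_acl(Bucket="media-archive")   # access control policy
keys = s3.list_objects_v2(Bucket="media-archive", Prefix="2024/")  # object keys
head = s3.head_object(Bucket="media-archive", Key="2024/clip-0001.mp4")  # size, ETag, user metadata
```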

RING goes a step further in the name of performance: it also stores the indexes to the on-disk data on flash, reserving the lower-cost spinning disks for the actual data payloads. This means that a very high percentage of access operations can be served at the speed of thought:

  • locating a file,
  • changing or accessing a metadata attribute,
  • listing a file system directory, or
  • accessing a Bucket through the S3 API.

The operations shown above are served directly from flash without ever touching the data stored on spinning disk drives (HDD). Keeping the data itself on HDD lowers the total cost of storage, while the flash-resident metadata and indexes keep these operations lightning fast.
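To make the index-on-flash idea concrete, here is a deliberately simplified sketch of a key-value store that keeps its index and user metadata in a structure standing in for flash and writes payloads to files standing in for HDD. The paths, function names and layout are illustrative assumptions, not Scality internals.

```python
import os

FLASH_INDEX = {}                 # stands in for the flash-resident index: key -> (path, size, metadata)
HDD_DATA_PATH = "/tmp/hdd-data"  # stands in for the spinning-disk payload store (hypothetical path)
os.makedirs(HDD_DATA_PATH, exist_ok=True)

def put(key, payload, metadata=None):
    """Write the payload to 'HDD' and record everything else in the 'flash' index."""
    path = os.path.join(HDD_DATA_PATH, f"{abs(hash(key))}.bin")
    with open(path, "wb") as f:
        f.write(payload)                              # only the payload touches "HDD"
    FLASH_INDEX[key] = (path, len(payload), metadata or {})

def stat(key):
    """Metadata lookup served entirely from the index: no payload access."""
    path, size, metadata = FLASH_INDEX[key]
    return {"size": size, "metadata": metadata}

def list_keys(prefix=""):
    """Listing is also an index-only operation."""
    return sorted(k for k in FLASH_INDEX if k.startswith(prefix))

def get(key):
    """Only a full read has to fetch the payload from 'HDD'."""
    path, size, _ = FLASH_INDEX[key]
    with open(path, "rb") as f:
        return f.read(size)

put("2024/clip-0001.mp4", b"\x00" * 1024, {"content-type": "video/mp4"})
print(stat("2024/clip-0001.mp4"))   # served from the index
print(list_keys("2024/"))           # served from the index
```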

So how fast does this make RING in terms of latency? By profiling hundreds of our customers’ workloads, Scality has documented the real-world response time of RING on hybrid flash/HDD storage servers, using 95th-percentile measurements.

Typical write requests show response times on the order of 3 milliseconds, which is attributable to the use of flash for indexes and to safe write caching in the non-volatile memory of the storage servers. For read access, response times match hard drive latencies: in the range of 5 to 12 milliseconds depending on the specific drives, and constant with the number of objects stored thanks to the indexes kept on flash.
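As an aside, a 95th-percentile figure like the ones above is straightforward to compute from raw latency samples. The snippet below uses synthetic numbers purely for illustration; it is not RING measurement code.

```python
import random
import statistics

random.seed(7)
# Synthetic per-request read latencies in milliseconds (illustrative only).
samples = [random.uniform(5, 12) for _ in range(10_000)]

# statistics.quantiles with n=100 returns the 1st through 99th percentiles;
# index 94 is the 95th percentile.
p95 = statistics.quantiles(samples, n=100)[94]
print(f"p95 read latency: {p95:.1f} ms")
```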

So what’s the bottom line?

These response times are perfectly suited to a large range of demanding big data applications, where overall performance is dominated by actual data transfers and throughput. With the inherent economic advantage of HDDs combined with a ‘dash of flash’, IT pros can take advantage of the optimal price/performance tradeoff for a wide range of applications and workloads.

So how fast would RING be on all-flash (extending flash media for both metadata and data)?  Stay tuned for part 2…
