New technologies have always faced adoption cycles and hype curves, the latter so well known that analyst firms such as Gartner have built their famous “Hype Cycle” reports around the term. Those of us in the object storage space are now riding a new wave of hyper-growth, powered by the rapid emergence of transformational digital business initiatives like AI, ML, automation and analytics. One result of this shift is accelerating adoption of all-flash object storage. To learn how customers are adapting, we asked ESG Senior Analyst Scott Sinclair to help us understand this changing landscape. Scott and his team surveyed 200 customers for their latest report, The Digital Era Is Fueling Adoption of All-Flash Object Storage, and the findings clearly show that all-flash object storage is “well on its way to becoming a foundation of the modern data storage ecosystem.”

Here’s what we learned, and why we believe object storage, and specifically all-flash object storage, is rapidly becoming the new primary data storage technology:

- 95% of IT pros surveyed by ESG are using flash storage within all or part of their object storage environments, and 23% say they already have an all-flash object storage solution.
- Of those using all-flash object storage, 77% say the technology has had a high impact or has been game-changing in their on-prem storage environment.

All-flash for object storage has arrived

With high-density, lower-cost flash media such as QLC now widely available, flash has become suitable for high-capacity data storage. Object storage has begun to embrace flash media, much as the world of all-flash arrays did before it, but with a key difference: it can be used at scale (potentially many petabytes) and for long-term data storage. As the cost of flash decreases, we see it becoming the default media for object storage as well. This not only increases the performance of object storage; it will effectively make object storage the new primary storage for a much broader range of applications.

A short history: how did we get here?

Object storage first entered the scene as cheap, scale-out cold storage, and for years it remained in the emerging corners of application and IT administrators’ minds. For many, the right place for object storage was high-capacity storage: it promised scalability and low cost, with a tradeoff in performance. It was mainly thought suitable for Tier 2 archival applications, where access to data could be slower than for mission-critical, transactional applications. The truth, however, is that as object storage took hold, it became the de facto solution for performance-intensive applications, including:

- Video content delivery
- Medical imaging (PACS)
- Big data analytics
- Enterprise backups, where SLAs for backup windows and restore times are prevalent

For many of these applications, object storage has, by default, become “primary storage.” Because the file payloads of performance-intensive applications tend to be large (megabytes and beyond per file), the metric that truly matters is throughput: how fast files can be written to storage or read back from it. Throughput is typically measured in megabytes or gigabytes per second, and it is a measure where many scale-out object storage systems, such as our own Scality RING, excel. Moreover, since throughput scales out to very high levels (dozens of GB/s from a single system is common), these can indeed be thought of as true high-performance storage deployments. A simple way to measure this yourself is sketched below.
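To make the throughput discussion concrete, here is a minimal sketch of a single-stream write-throughput probe in Python against any S3-compatible object store. The endpoint URL, bucket name and credential variables are illustrative placeholders rather than details from the ESG report, and a real benchmark would run many such streams in parallel across nodes to approach the multi-GB/s aggregate figures mentioned above.

```python
# Minimal write-throughput probe for an S3-compatible object store.
# Endpoint, bucket and credential names are placeholders, not real values.
import os
import time

import boto3  # pip install boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example.internal",  # hypothetical S3-compatible endpoint
    aws_access_key_id=os.environ["S3_ACCESS_KEY"],
    aws_secret_access_key=os.environ["S3_SECRET_KEY"],
)

payload = os.urandom(256 * 1024 * 1024)  # one 256 MB object: a typical large-file payload

start = time.perf_counter()
s3.put_object(Bucket="throughput-test", Key="probe/object-0", Body=payload)
elapsed = time.perf_counter() - start

# Throughput = object size / wall-clock write time, reported in MB/s.
size_mb = len(payload) / 2**20
print(f"wrote {size_mb:.0f} MB in {elapsed:.2f}s -> {size_mb / elapsed:.1f} MB/s")
```

A single stream like this mostly measures one connection; scale-out systems reach their headline numbers by aggregating many parallel streams across many servers.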
Back to the future: All-flash object storage becomes the fuel to power AI, ML and automation

Now a new world of application workloads has emerged that changes the requirements for storage. Video content delivery is a great example of the need for both high capacity and high performance, and we clearly see new and innovative applications for it, such as cloud DVRs, live TV and online streaming. We also see emerging machine learning (ML) applications within the broader artificial intelligence (AI) domain that require access to massive amounts of data at very high speed, demanding a combination of speed and scalability that storage systems have never had to deliver before. For ML applications, very large training data sets are what make the algorithms effective and successful. Much of this training is based on very large volumes of video and imaging data, a scale that classical storage arrays cannot accommodate.

Enterprises also now have access to new breeds of big data analytics applications that can consume mountains of data and deliver the meaningful insights we’ve long been promised. For example, applications from vendors such as Splunk, Vertica and Elastic need high-speed, scalable storage, which is clearly becoming key to solving critical business problems. And, of course, thousands of new applications will be deployed at the edge, from sensors and IoT devices to cameras and autonomous vehicles, and all of them will create the same demands for scale and speed from storage.

Happily, applications in all of these areas have embraced the AWS S3 API as a standard storage protocol for leveraging object storage, which has knocked down the barriers to its adoption. We expect this to continue, as new cloud-native applications will also naturally use these APIs; a short sketch at the end of this post shows what that portability looks like in code. With all-flash object storage at the foundation, ESG’s survey identified a set of top benefits for on-prem data centers, much of it focused on improvements for applications. For more detailed information, download the full ESG report.

You might also be interested in registering for this webinar: Expanding horizons: Applications embrace object as the new primary data storage. ESG Senior Analyst Scott Sinclair joins Scality GM, Americas, Greg DiFraia to discuss how all-flash object storage will enable a new class of applications in AI/ML, big data analytics, edge computing and IoT use cases, and even in traditional areas of data protection, such as backup.
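As promised above, here is a minimal sketch of what S3 API portability looks like from an application’s point of view, again in Python with boto3. The endpoint URL, bucket, key and local file name are hypothetical; the point is that the same put_object/get_object calls run unchanged against AWS S3 or an on-prem S3-compatible object store.

```python
# Sketch: identical S3 API calls work against AWS S3 or an on-prem,
# S3-compatible object store; only the endpoint configuration differs.
# The endpoint URL, bucket, key and local file are placeholders.
import boto3

# endpoint_url=None would target AWS S3 itself; pointing it at an
# on-prem S3-compatible system is the only change the application needs.
# Credentials resolve the usual boto3 way (env vars, config files, etc.).
s3 = boto3.client("s3", endpoint_url="https://s3.example.internal")

bucket, key = "ml-training-data", "images/sample-0001.jpg"

# Write an object, then stream it back: the same code path a
# cloud-native application would use in the public cloud.
with open("sample-0001.jpg", "rb") as f:
    s3.put_object(Bucket=bucket, Key=key, Body=f)

obj = s3.get_object(Bucket=bucket, Key=key)
data = obj["Body"].read()
print(f"read back {len(data)} bytes via the standard S3 API")
```

Because the protocol is the same everywhere, an application written for the cloud can move to on-prem all-flash object storage (and back) without code changes, which is exactly what has lowered the adoption barrier.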