
All you need to know about cloud-native storage

The CNCF Storage Special Interest Group (SIG) published a detailed whitepaper explaining how storage systems are structured for cloud-native applications. The document is a must-read for a full understanding and comparison of the different kinds of storage and how they are being used in production today. This post is an excerpt from the paper, which was presented at KubeCon Shanghai; the PDF is available on the CNCF Storage SIG GitHub repository.

Cloud-native applications are defined by the way we architect, develop and deploy them, not necessarily by where they “live.” Cloud native means microservices, containers, orchestrators, and more; the underlying idea is to build truly platform-agnostic, flexible apps that can be run by any authorized end user.

Here are some key attributes of cloud-native applications, as defined by the Cloud Native Computing Foundation (CNCF) via a post on The New Stack:

  • Packaged in containers
  • Designed as loosely-coupled microservices
  • Isolated from server and OS dependencies
  • Policy-driven resource allocation
  • Managed through agile DevOps processes
  • Centered around APIs for interaction

If you don’t go cloud native, you will miss out on important, cutting-edge technologies and end up spending more of your time patching and maintaining your application. There are exceptions, of course.

A need for storage

Let’s assume for argument’s sake that you have decided to deploy a stateful application on Kubernetes. Stateful means the app needs to remember things between client interactions, and saving those things requires storage. So what is important for your application?

The first section of the whitepaper introduces you to the basic attributes of storage interfaces and systems. These are the features and characteristics that a storage system usually has, or is expected to have. Since every application has its own functionality and objectives, you may want all of the attributes, only some of them, or all of them with more attention paid to a particular subset. The key attributes of a storage system are:

  • Availability
  • Scalability
  • Performance
  • Consistency
  • Durability
  • Instantiation & Deployment

Storage layers

Another way to talk about storage systems is to talk about what they’re made of. This helps determine how your data is stored and retrieved, how it’s protected, and how it interacts with your applications, operating system, and orchestrator. These layers are tightly connected with the attributes mentioned above.

Storage Topology – how the different parts of a system are interrelated and connected (such as storage devices, compute nodes, and data). Topology is important to consider because it influences many attributes of a storage system as it is built. Storage topology can be centralized, distributed, sharded, or hyperconverged.

Data Protection – an important service that makes sure your data persists even in the event of some disaster. There are a couple of ways to do this:

  • RAID (redundant array of independent disks) – techniques used to distribute data across multiple disks with redundancy in mind
  • Erasure coding – a method used to protect data by splitting it into fragments that are encoded and stored with a number of redundant parity sets (see the sketch after this list).
  • Replica – a full copy of a dataset distributed across multiple servers.
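To make the erasure-coding idea concrete, here is a minimal sketch in Python. It uses simple XOR parity, which is closer to RAID-style parity than a production erasure code; real systems typically rely on Reed-Solomon codes via a dedicated library, and the function names here are purely illustrative.

```python
# Minimal sketch: split data into k fragments plus one XOR parity fragment,
# so that any single lost fragment can be rebuilt from the survivors.
# Production systems use Reed-Solomon erasure codes instead of plain XOR parity.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_with_parity(data: bytes, k: int) -> list:
    frag_len = -(-len(data) // k)                    # ceiling division
    padded = data.ljust(frag_len * k, b"\x00")       # pad so the split is even
    fragments = [padded[i * frag_len:(i + 1) * frag_len] for i in range(k)]
    parity = fragments[0]
    for frag in fragments[1:]:
        parity = xor_bytes(parity, frag)
    return fragments + [parity]

def recover_missing(fragments: list) -> list:
    missing = fragments.index(None)                  # position of the lost fragment
    survivors = [f for f in fragments if f is not None]
    rebuilt = survivors[0]
    for frag in survivors[1:]:
        rebuilt = xor_bytes(rebuilt, frag)
    fragments[missing] = rebuilt
    return fragments

# Usage: lose any one fragment and rebuild it from the others.
pieces = split_with_parity(b"important application state", k=4)
pieces[2] = None                                     # simulate a failed disk or node
restored = recover_missing(pieces)
```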

Data services – services that are often implemented as extra features on top of the main storage system functions. These vary from system to system, but some common ones are replication, version control, various management functions, and more.

Encryption – storage systems can protect data by encrypting it. It should be noted that encryption has an impact on performance because of the computational overhead, but acceleration options are available on modern systems.
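As a small illustration of encrypting data before it reaches the storage layer, here is a hedged sketch using the third-party Python cryptography package (an assumption; any equivalent library, or a storage system’s built-in at-rest encryption, would serve the same purpose). Key management is deliberately out of scope.

```python
# Sketch: encrypt application data before handing it to a storage backend.
# Assumes the "cryptography" package is installed (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()               # in practice, fetch this from a key manager
cipher = Fernet(key)

plaintext = b"customer record #42"
ciphertext = cipher.encrypt(plaintext)    # this is what actually lands on disk

assert cipher.decrypt(ciphertext) == plaintext
```

The extra encrypt/decrypt step is exactly where the performance overhead mentioned above comes from, which is why many systems offload it to hardware.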

Physical layer – the actual hardware where the data is stored. The choice of physical storage impacts the overall performance of the system and the durability of the data stored. Common options are magnetic disks and flash-based SSD/NVM devices.

Data access interface

The data access interface defines how data is consumed and/or stored by applications or workloads. Usually, an application has a preferred or pre-defined access method determined by its architecture. The choice of interface also affects important attributes such as availability, performance, and scalability. In the whitepaper, data access interfaces are split into Volumes and Application APIs.

Data access interface: Volumes

Block store

This is perhaps the oldest type of storage system. Data is stored in fixed-size chunks, or blocks, which are typically addressed numerically. To retrieve data, the application has to provide the correct numerical addresses and then assemble a complete file out of the pieces.

There is no way to store and retrieve data other than by address. This gives very granular access, and it is very fast if the persistence target is local.
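To make the addressing model concrete, here is a minimal sketch that reads one fixed-size block by its numeric address. The device path and block size are assumptions; a plain file can stand in for a block device when experimenting.

```python
# Sketch: read a single fixed-size block from a block device (or an image file)
# by its numeric address, which is the only way a block store is addressed.
import os

BLOCK_SIZE = 4096                         # a common fixed block size

def read_block(device_path: str, block_number: int) -> bytes:
    fd = os.open(device_path, os.O_RDONLY)
    try:
        os.lseek(fd, block_number * BLOCK_SIZE, os.SEEK_SET)   # jump to the address
        return os.read(fd, BLOCK_SIZE)                          # read exactly one block
    finally:
        os.close(fd)

# "disk.img" is a placeholder; a real device would be something like /dev/sdb
# and would require the appropriate privileges.
data = read_block("disk.img", block_number=7)
```

Assembling a file out of such blocks is exactly the bookkeeping that a file system layer takes off the application’s hands.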

File system

This is a logical persistence layer that stores and retrieves data referenced by files. The interface provides a richer set of primitives (e.g., directory structure, access control, and naming) than a block store, which makes it more suitable for direct consumption by applications. The actual persistence function translates files to logical block addresses.

File systems can be local, remote, or distributed. There are also numerous types of file systems optimized for different characteristics, such as performance, durability, storage medium, and access patterns.
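The sketch below contrasts this with the block example above: data is addressed by name inside a directory hierarchy, with per-file access control. The mount point and file names are purely illustrative.

```python
# Sketch: the richer primitives a file system offers compared to a raw block
# store: hierarchical naming, directories, and access control.
from pathlib import Path

root = Path("/mnt/appdata")                      # hypothetical mount point
(root / "orders" / "2024").mkdir(parents=True, exist_ok=True)

record = root / "orders" / "2024" / "order-001.json"
record.write_text('{"status": "paid"}')          # store by name, not by block address

record.chmod(0o640)                              # per-file access control
print(record.read_text())                        # retrieve by the same name
```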

Data access interface: application API

Object store

In general, object store systems use atomic key-value stores. Unlike file systems and block stores, the implementation of object stores is quite diverse. The whitepaper focuses on HTTP-based object stores as defined by AWS S3, Azure Blob, Google Cloud, and OpenStack Swift.

These types of systems define a set of methods based on the HTTP protocol. With this interface, there is no need to mount or attach an object store; these systems are always remote to the requesting node.

One advantage of HTTP-based systems is metadata management, which makes it possible to attach custom metadata to objects. Another advantage is granular access control.
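Here is a hedged sketch of that interface using boto3 against an S3-compatible endpoint; the bucket name, object key, and metadata values are assumptions, and S3-compatible systems typically just need a custom endpoint_url.

```python
# Sketch: store and retrieve an object, plus custom metadata, over the
# HTTP-based S3 API. Requires boto3 and valid credentials for the endpoint.
import boto3

s3 = boto3.client("s3")                   # or boto3.client("s3", endpoint_url="https://...")

# Store an object together with application-defined metadata.
s3.put_object(
    Bucket="app-assets",
    Key="reports/2024/q4.pdf",
    Body=b"...binary payload...",
    Metadata={"department": "finance", "retention": "7y"},
)

# Retrieve it; the custom metadata comes back alongside the data.
obj = s3.get_object(Bucket="app-assets", Key="reports/2024/q4.pdf")
print(obj["Metadata"])                    # {'department': 'finance', 'retention': '7y'}
payload = obj["Body"].read()
```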

HTTP object stores are designed for scalability and durability. They can support extremely large amounts of data spread across different data centers and regions, and they usually support multiple mechanisms for maintaining data integrity.

Key-Value store

This type of system is designed to store, retrieve, and manage key-value pairs. In a key-value store, there is no predefined schema, making it flexible. The application, then, is in full control over what is stored in the “value.”

This kind of system can work on a single node or scale to many, it can be accessed locally or over the network, and it can store its data fully in memory, partially in memory, or fully on disk. Many more complex storage systems are built on top of key-value stores.
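A hedged sketch of the access pattern, using Python’s built-in dbm module as a stand-in for a dedicated key-value store (the file name and keys are illustrative):

```python
# Sketch: the key-value access pattern. dbm gives a simple on-disk key-value
# store; dedicated systems (Redis, etcd, RocksDB, and so on) expose the same
# basic get/put/delete operations at much larger scale.
import dbm

with dbm.open("sessions.db", "c") as kv:          # "c": create the store if missing
    # No schema: the application decides what the value contains.
    kv[b"session:alice"] = b'{"cart": ["sku-1", "sku-2"], "ttl": 3600}'
    kv[b"session:bob"] = b'{"cart": [], "ttl": 3600}'

    value = kv[b"session:alice"]                  # retrieve by key
    del kv[b"session:bob"]                        # delete by key
```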

Orchestration and management interfaces

Cloud-native means containerized. Containers usually require some type of management system or orchestrator. How do those orchestration systems interact with storage systems to associate workloads with their data? Depending on the data access interface, different layers may be involved.

Kubernetes, for example, can support multiple interfaces to interact with the storage system. That storage system can:

  • support the control-plane API and interact directly with the orchestrator
  • interact with the orchestrator via an API Framework layer or other tools

The control-plane API refers to the storage interfaces exposed to the container orchestrator and focuses on dynamically provisioning storage for workloads.
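As a concrete (and hedged) illustration of dynamic provisioning through the orchestrator, the sketch below uses the official Kubernetes Python client to create a PersistentVolumeClaim; the namespace, claim name, and storage class are assumptions, and the class determines which storage system actually provisions the volume.

```python
# Sketch: ask the orchestrator's control plane for storage by creating a
# PersistentVolumeClaim. Requires the "kubernetes" Python client and access
# to a cluster (kubeconfig or an in-cluster service account).
from kubernetes import client, config

config.load_kube_config()                 # or config.load_incluster_config() inside a pod
core = client.CoreV1Api()

pvc_manifest = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "app-data"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "standard",   # selects which storage system provisions the volume
        "resources": {"requests": {"storage": "10Gi"}},
    },
}

core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc_manifest)
```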

API Frameworks and other tools are extensions to the control-plane APIs and can also support automation and other data services (data protection, data migration, data replication, and many more) in addition to provisioning.

The whitepaper also includes useful comparison tables for a number of storage system features, which can help you decide which system best fits your specific needs. CNCF also hosted a companion webinar for the whitepaper, which is itself well worth watching.
