NVMe Storage on Multi-Cloud

Multi-cloud architectures require storage that is not tied to a single provider's managed disk API. NVMe-oF/TCP running on commodity compute instances provides consistent block storage performance and a unified API across AWS, GCP, and Azure.

NVMe-Equipped Instance Types

Cloud   Instance Family   NVMe Characteristics
AWS     i4i / c6id        Storage-optimized (i4i) and compute-optimized (c6id) instances with local NVMe SSDs
GCP     z3 / n2d          Storage-optimized (z3) and general-purpose (n2d) instances with NVMe Local SSD
Azure   Lsv3              Storage-optimized VMs with directly mapped local NVMe disks

Each cloud provides NVMe-optimized instance types. By deploying a consistent NVMe-oF/TCP storage layer on top, you get the same storage API, same latency profile, and same management tooling regardless of which cloud the compute runs in.

NVMe-oF/TCP on Multi-Cloud

Deploy an NVMe-oF/TCP storage cluster on NVMe-equipped instances in each cloud region. Compute nodes (including Kubernetes workers) connect to the local NVMe-oF cluster in their region. Because NVMe/TCP uses standard TCP/IP, the same nvme-cli commands, CSI drivers, and configuration work identically on every cloud.
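The provider-agnostic workflow described above can be sketched with standard nvme-cli commands. The IP address and NQN below are placeholders for a region-local storage cluster; the same sequence applies unchanged on AWS, GCP, or Azure (requires root and an NVMe-oF target, so it is illustrative rather than directly runnable here):

```shell
# Load the NVMe/TCP initiator module (mainline since kernel 5.0)
modprobe nvme-tcp

# Discover subsystems exported by the region-local storage cluster
# (10.0.0.10 is a placeholder for the cluster's discovery endpoint)
nvme discover -t tcp -a 10.0.0.10 -s 4420

# Connect to a subsystem; the NQN is a placeholder
nvme connect -t tcp -a 10.0.0.10 -s 4420 \
  -n nqn.2016-06.io.example:storage-pool-1

# The remote namespace now appears as a local block device (e.g. /dev/nvme1n1)
nvme list
```

Because only the address and NQN differ per region, the same automation (Ansible roles, cloud-init, CSI node plugins) can drive this sequence on every cloud.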

For an NVMe/TCP protocol deep-dive, see nvme-tcp.com. For a full NVMe-oF architecture overview, see the NVMe-oF guide.

Recommended Storage Architecture

Tier / Use Case   Recommendation
AWS region        NVMe-oF cluster on i4i instances → EC2/EKS compute
GCP region        NVMe-oF cluster on z3 instances → GCE/GKE compute
Azure region      NVMe-oF cluster on Lsv3 VMs → Azure VM/AKS compute
Kubernetes        Same NVMe-oF CSI driver deployed on EKS, GKE, AKS
Management        Unified API: same NQN, nvme-cli, and CSI StorageClass
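The "same CSI StorageClass on every cluster" row can be made concrete with a manifest applied identically to EKS, GKE, and AKS. The provisioner name and parameter below are placeholders, not a real driver's identifiers; substitute your NVMe-oF CSI driver's documented values:

```shell
# Apply an identical StorageClass on every cluster, regardless of cloud.
# "csi.example.nvmeof" is a placeholder CSI driver name, and the
# "protocol" parameter is an assumed driver option for illustration.
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nvmeof-tcp
provisioner: csi.example.nvmeof
parameters:
  protocol: tcp
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
EOF
```

Workloads then request storage by StorageClass name only, so PersistentVolumeClaims and Helm charts stay byte-identical across clouds.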

simplyblock: NVMe/TCP storage for Multi-Cloud

simplyblock deploys as a software-defined NVMe/TCP storage cluster on standard instances in any cloud. It provides a Kubernetes CSI driver, dynamic provisioning, and sub-40µs-latency persistent block storage, without proprietary hardware or cloud-managed disk limits.

Related