NVMe Storage

NVMe Storage in the Cloud

Every major cloud provider offers NVMe-equipped instances. The architectural choice between local instance NVMe, managed block storage, and self-managed NVMe-oF determines the latency, cost, and operational trade-offs of your storage tier.

AWS

AWS offers both local NVMe SSDs (instance store) and network-attached NVMe via EBS. Understanding when to use each — and when to deploy your own NVMe-oF layer — directly impacts storage cost and performance for EC2 and EKS workloads.
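On a Nitro-based instance, both EBS volumes and instance-store SSDs surface as NVMe block devices, so the device path alone does not tell you which is which; the NVMe model string does. A minimal sketch (requires the nvme-cli package; run on the instance itself):

```shell
# List all NVMe controllers; the MODEL column distinguishes the two:
#   "Amazon Elastic Block Store"        -> network-attached EBS
#   "Amazon EC2 NVMe Instance Storage"  -> local instance store
sudo nvme list

# The same information via lsblk, without nvme-cli
lsblk -o NAME,MODEL,SIZE
```

This matters operationally: instance-store devices are ephemeral and lost on stop/terminate, so anything that appears under the instance-storage model string should hold only reconstructible data.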

Google Cloud

Google Cloud provides local NVMe SSDs on Compute Engine instances and Hyperdisk for persistent NVMe-class block storage. For the lowest latency shared storage on GKE or GCE, a self-managed NVMe-oF/TCP layer outperforms managed disk offerings.

Azure

Azure supports NVMe through local storage VMs (Lsv3 series) and Ultra Disk for persistent NVMe-class block storage. For the lowest latency shared storage in AKS or multi-VM architectures, a self-managed NVMe-oF/TCP layer provides 5–10× lower latency than managed disk.

Bare Metal

Bare-metal servers provide direct NVMe PCIe access with zero hypervisor overhead. The architectural choice is local NVMe for single-node maximum performance or NVMe-oF for storage disaggregation across a cluster — both on your own hardware.
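The disaggregation option above can be sketched with the Linux kernel NVMe-oF target (nvmet), which is driven entirely through configfs. The NQN, IP address, and device path below are illustrative placeholders; this is a lab-style sketch, not a hardened deployment:

```shell
# Load the kernel target and its TCP transport
modprobe nvmet nvmet-tcp
cd /sys/kernel/config/nvmet

# Create a subsystem; allow_any_host skips host authentication (lab use only)
mkdir subsystems/nqn.2024-01.io.example:disk1
echo 1 > subsystems/nqn.2024-01.io.example:disk1/attr_allow_any_host

# Back namespace 1 with a local NVMe device and enable it
mkdir subsystems/nqn.2024-01.io.example:disk1/namespaces/1
echo /dev/nvme0n1 > subsystems/nqn.2024-01.io.example:disk1/namespaces/1/device_path
echo 1 > subsystems/nqn.2024-01.io.example:disk1/namespaces/1/enable

# Expose the subsystem on TCP port 4420 (the IANA-assigned NVMe-oF port)
mkdir ports/1
echo tcp       > ports/1/addr_trtype
echo ipv4      > ports/1/addr_adrfam
echo 10.0.0.10 > ports/1/addr_traddr
echo 4420      > ports/1/addr_trsvcid
ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-01.io.example:disk1 ports/1/subsystems/
```

Once the symlink is in place, any initiator that can reach 10.0.0.10:4420 can discover and attach the exported namespace.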

Multi-Cloud

Multi-cloud architectures require storage that is not tied to a single provider's managed disk API. NVMe-oF/TCP running on commodity compute instances provides consistent block storage performance and a unified API across AWS, GCP, and Azure.

NVMe/TCP transport: the same NVMe-oF protocol works identically across all clouds over standard TCP/IP. For a protocol deep-dive, see nvme-tcp.com.
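The initiator side is the same on every cloud, which is what makes the transport portable. A minimal sketch using nvme-cli; the IP address and NQN are placeholders for your own target:

```shell
# Load the NVMe/TCP initiator module
modprobe nvme-tcp

# Ask the target which subsystems it exports
nvme discover -t tcp -a 10.0.0.10 -s 4420

# Connect; the remote namespace then appears as a local /dev/nvmeXn1 device
nvme connect -t tcp -a 10.0.0.10 -s 4420 -n nqn.2024-01.io.example:disk1
nvme list
```

From this point the remote namespace is an ordinary block device: it can be formatted, mounted, or handed to a CSI driver exactly like a local disk, regardless of which provider hosts the target.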