
NVMe Storage for Kubernetes (PVC & CSI)

Kubernetes stateful workloads such as databases, message queues, and AI pipelines require block storage with consistent low latency. NVMe-oF over TCP is the modern replacement for iSCSI-backed PersistentVolumes: it runs over the same standard Ethernet while delivering roughly 10× lower latency and 10× higher IOPS.


Why NVMe Storage Fits

NVMe-oF CSI driver for dynamic PVCs

A Kubernetes NVMe-oF CSI driver provisions one NVMe namespace per PVC, attaches it to the scheduled node via nvme connect, and exposes it as a regular NVMe block device (/dev/nvmeXnY). The CSI interface is the same as for iSCSI drivers, so applications need no changes.
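As a sketch, dynamic provisioning then needs only a PVC that names the NVMe/TCP StorageClass (the nvme-tcp-sc class and csi.simplyblock.io provisioner come from the reference architecture below; the claim name and size here are illustrative):

```yaml
# Illustrative PVC against the nvme-tcp-sc StorageClass; the CSI driver
# carves one NVMe namespace per claim and attaches it on the scheduled node.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem        # or Block for raw namespace access
  storageClassName: nvme-tcp-sc
  resources:
    requests:
      storage: 100Gi
```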

25–40µs latency PersistentVolumes

NVMe/TCP adds only ~20µs over a local LAN. Total PVC latency of 25–40µs is 8–10× lower than iSCSI and orders of magnitude faster than NFS-backed volumes.

No dedicated SAN hardware

NVMe/TCP runs over standard 10/25/100GbE NICs — the same network Kubernetes uses for pod-to-pod traffic. No iSCSI HBAs, no Fibre Channel switches.
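Under the hood, node attachment is plain nvme-cli over those same NICs. A manual equivalent of what the CSI driver does on the node looks roughly like this (the address, port, and NQN are placeholders):

```shell
# Load the NVMe/TCP initiator module (in-tree since Linux 5.0)
modprobe nvme-tcp

# Discover subsystems exported by a storage node (address/port are placeholders)
nvme discover -t tcp -a 10.0.0.10 -s 4420

# Connect to a subsystem; its namespaces appear as /dev/nvmeXnY on this host
nvme connect -t tcp -a 10.0.0.10 -s 4420 \
  -n nqn.2023-01.io.example:placeholder-subsystem
```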

ANA multipath for HA

NVMe ANA (Asymmetric Namespace Access) provides automatic failover across storage nodes — equivalent to iSCSI MPIO but built into the NVMe protocol itself.
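With native NVMe multipath enabled in the kernel, ANA paths are merged automatically and failover needs no extra daemon. A quick way to inspect this on a node (a sketch; exact output varies by kernel and driver):

```shell
# Confirm native NVMe multipath is enabled (built into nvme_core)
cat /sys/module/nvme_core/parameters/multipath

# List subsystems with per-path ANA state
# (optimized / non-optimized / inaccessible)
nvme list-subsys
```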

Reference Architecture

Layer          Recommendation
StorageClass   nvme-tcp-sc (provisioner: csi.simplyblock.io)
Volume mode    Block or Filesystem (ext4/xfs)
Access mode    ReadWriteOnce (RWO); RWX via a shared namespace
Binding mode   WaitForFirstConsumer (topology-aware)
Transport      NVMe/TCP over 10/25GbE Ethernet
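The table rows map directly onto a StorageClass manifest. A sketch, assuming the csi.simplyblock.io provisioner from the table (any parameters beyond the provisioner name are assumptions about the driver):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nvme-tcp-sc
provisioner: csi.simplyblock.io
volumeBindingMode: WaitForFirstConsumer   # topology-aware: bind after the pod is scheduled
allowVolumeExpansion: true                # assumption: the driver supports online expansion
parameters:
  csi.storage.k8s.io/fstype: xfs          # used when volumeMode is Filesystem
```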

Need shared block storage at NVMe speed?

NVMe over Fabrics (NVMe-oF) extends NVMe performance across standard Ethernet — delivering 25–40µs block storage to any host in your cluster. NVMe/TCP guide →

simplyblock provides production NVMe/TCP block storage for Kubernetes and bare-metal — no proprietary hardware required.