NVMe vs HDD: The Complete Performance Comparison

Hard disk drives dominated storage for 60 years. NVMe SSDs have made them obsolete for performance-sensitive workloads — delivering 1000× higher random IOPS, 500× lower latency, and near-zero seek times. Yet HDDs still ship in volume for cold storage, media archives, and backup. Here is when each technology makes sense.

Quick Verdict

NVMe vs HDD: Key Metrics

| Metric | NVMe SSD (PCIe 4.0) | HDD (7200 RPM) |
|---|---|---|
| Sequential read | 5,000–7,000 MB/s | 150–300 MB/s |
| Sequential write | 4,000–6,500 MB/s | 130–280 MB/s |
| Random 4K read IOPS | 500K–7M | 80–200 |
| Random 4K write IOPS | 500K–1M | 80–200 |
| Read latency (4K) | 10–20µs | 5–10ms (5,000–10,000µs) |
| Seek time | 0µs (no seek) | 8–12ms |
| Cost per TB (2024) | $80–$120/TB | $15–$25/TB |
| Max capacity | up to 8TB (consumer), 30TB+ (enterprise) | up to 30TB (SMR) |
| Power (active) | 3–7W | 6–12W |
| Vibration sensitivity | None (no moving parts) | High; adjacent drives can degrade each other in dense racks |
| MTBF / reliability | 2M+ hours | 1–1.5M hours (more failure modes) |

Random I/O Is Where HDDs Collapse

An HDD delivers ~150 random IOPS because the read/write head must physically seek to each 4KB location on a spinning platter; at 7200 RPM, one full rotation takes ~8ms. A database query touching ~150 random rows generates ~150 separate seeks, so at 150 IOPS the drive completes at most one such query per second before requests start queueing.

NVMe flash has no seek time. Any 4KB block is readable in 10–20µs regardless of physical location. At 1M random IOPS, that same 150-read query finishes in well under a millisecond, and a single device sustains ~6,600 such queries per second. The practical impact on PostgreSQL or MySQL response times is measured in orders of magnitude, not percentages.
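The arithmetic above can be sketched as a quick back-of-the-envelope calculation. The 150-reads-per-query figure is an illustrative assumption, not a property of any particular database:

```python
# Back-of-the-envelope: random-read query throughput limited by device IOPS.
# Assumes each query issues ~150 dependent 4K random reads (illustrative).
def max_queries_per_second(device_iops: float, reads_per_query: int = 150) -> float:
    """Upper bound on query rate when random reads are the bottleneck."""
    return device_iops / reads_per_query

hdd_qps = max_queries_per_second(150)          # 7200 RPM HDD: ~150 random IOPS
nvme_qps = max_queries_per_second(1_000_000)   # PCIe 4.0 NVMe: ~1M random IOPS

print(f"HDD:  {hdd_qps:.1f} queries/s")    # 1.0 query/s
print(f"NVMe: {nvme_qps:,.0f} queries/s")  # ~6,667 queries/s
```

The ratio is simply the IOPS ratio; changing the per-query read count shifts both numbers but not the gap between them.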

Sequential Workloads: The Gap Narrows

For large sequential writes — video recording, bulk backup, log shipping — HDDs can sustain 200–300 MB/s with predictable throughput and much lower cost per TB. A 30TB HDD costs ~$450; an equivalent NVMe SSD costs $2,400–$3,600.
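The cost math works out as follows, using the per-TB price ranges from the table above (street prices vary; these are the article's 2024 figures):

```python
# Cost comparison at 2024 per-TB price ranges (from the table above).
HDD_PER_TB = (15, 25)    # USD/TB, 7200 RPM HDD
NVME_PER_TB = (80, 120)  # USD/TB, PCIe 4.0 NVMe

def cost_range(capacity_tb: int, per_tb: tuple[int, int]) -> tuple[int, int]:
    """Low and high total cost for a given capacity."""
    return (capacity_tb * per_tb[0], capacity_tb * per_tb[1])

print("30TB HDD: ", cost_range(30, HDD_PER_TB))   # (450, 750)
print("30TB NVMe:", cost_range(30, NVME_PER_TB))  # (2400, 3600)
```

At the low end of each range, that is roughly a 5× price premium per TB for NVMe, which is why cold, sequential data still lands on spinning disks.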

Surveillance systems, media rendering archives, Hadoop data lakes, and long-term backup storage are the canonical HDD use cases that remain economically justified in 2024.

Mixed Workloads Always Favor NVMe

Real production workloads are never purely sequential. Databases mix reads and writes at random offsets, VMs boot from multiple locations simultaneously, and Kubernetes pods create bursty I/O patterns. Any significant mix pushes HDDs into the regime where seek time, not transfer rate, bounds throughput.

The rule of thumb: if your storage workload generates more than a few hundred random IOPS per volume, NVMe is not a performance optimization — it is a correctness requirement. HDDs at that load level will cause application timeouts, database deadlocks, and cascading latency spikes.
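The "few hundred IOPS" threshold can be illustrated with a simple M/M/1 queueing sketch. The model is an assumption (real drives reorder requests and cache writes), but it captures the hockey-stick shape of HDD latency under load:

```python
# M/M/1 sketch: mean I/O response time as offered random IOPS approaches
# the device's service rate. Simplified model -- real drives reorder and
# cache -- but the shape is the point: HDDs saturate at a few hundred IOPS.
def mean_response_ms(offered_iops: float, service_iops: float) -> float:
    """Mean response time in ms for an M/M/1 queue; inf at saturation."""
    if offered_iops >= service_iops:
        return float("inf")  # queue grows without bound
    return 1000.0 / (service_iops - offered_iops)

for load in (50, 100, 140, 149):
    print(f"HDD @ {load:>3} offered IOPS: {mean_response_ms(load, 150):8.1f} ms")
# 10.0 ms -> 20.0 ms -> 100.0 ms -> 1000.0 ms: latency explodes near saturation
```

An NVMe device with a service rate in the hundreds of thousands of IOPS sits far from saturation at the same offered load, which is why the same workload that deadlocks on an HDD is unremarkable on flash.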

NVMe-oF: NVMe Performance for Shared Storage

NVMe over Fabrics (NVMe-oF) extends NVMe's performance across the network. A shared NVMe-oF storage pool accessed over 25GbE TCP adds only ~20µs of network latency — the total round-trip of 25–40µs is still 200× faster than a local HDD.
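The latency budget can be sketched from the numbers in the text. The fabric overhead figure is the article's ~20µs for 25GbE TCP; summing it with the 10–20µs device latency gives a 30–40µs total, consistent with the stated range:

```python
# NVMe-oF/TCP latency budget sketch, using figures from the text.
DEVICE_READ_US = (10, 20)   # local NVMe 4K read latency, µs
FABRIC_RTT_US = 20          # ~20µs added by a 25GbE TCP fabric
HDD_SEEK_US = 8000          # ~8ms HDD seek + rotation

lo = DEVICE_READ_US[0] + FABRIC_RTT_US   # 30µs best case
hi = DEVICE_READ_US[1] + FABRIC_RTT_US   # 40µs worst case
print(f"NVMe-oF read: {lo}-{hi}µs vs HDD ~{HDD_SEEK_US}µs "
      f"(~{HDD_SEEK_US // hi}x-{HDD_SEEK_US // lo}x faster)")  # ~200x-266x
```

Even at the worst-case 40µs, the networked NVMe read beats a single local HDD seek by two orders of magnitude.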

For Kubernetes clusters that need shared block storage, NVMe-oF is the replacement for NFS-backed PVCs served by HDDs. The latency improvement translates directly to lower p99 API response times and higher database throughput.
