Architecture

Storage Tiering

Storage tiering is the practice of automatically migrating data between different storage media based on access frequency and performance requirements. Hot data (frequently accessed) lives on fast NVMe storage; warm data moves to lower-cost SSDs or HDDs; cold data goes to object storage or tape. NVMe over Fabrics (NVMe-oF) extends this across the network, so data can migrate between tiers on different nodes without manual intervention.

Storage Tiers

  • Tier 0 — NVMe SSD: Sub-50µs latency, 1M+ IOPS. For active databases, OLTP, real-time analytics, and AI training datasets currently in use.
  • Tier 1 — SATA/SAS SSD: 100–500µs latency, ~100K IOPS. For recently accessed data, development environments, and secondary databases.
  • Tier 2 — HDD: 5–10ms latency, ~150 IOPS. For archival, backup, and infrequently accessed data.
  • Tier 3 — Object Storage / Tape: Minutes to hours for retrieval. For compliance archives, long-term backups.
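The tier table above maps naturally to a small data structure. The sketch below (Python, with illustrative latency and cost figures, not vendor data) shows the core tradeoff: given a latency target, pick the cheapest tier that still meets it.

```python
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    media: str
    read_latency_us: float   # typical read latency, microseconds
    cost_rank: int           # 0 = most expensive per GB

# Illustrative figures matching the tier list above; real numbers
# vary by hardware and vendor.
TIERS = [
    Tier("tier0", "NVMe SSD",      50,         0),
    Tier("tier1", "SATA/SAS SSD",  300,        1),
    Tier("tier2", "HDD",           7_500,      2),
    Tier("tier3", "Object/Tape",   60_000_000, 3),
]

def cheapest_tier_for(max_latency_us: float) -> Tier:
    """Return the lowest-cost tier that still meets the latency target."""
    candidates = [t for t in TIERS if t.read_latency_us <= max_latency_us]
    if not candidates:
        raise ValueError("no tier meets the latency requirement")
    return max(candidates, key=lambda t: t.cost_rank)

print(cheapest_tier_for(500).name)  # a 500µs budget is met by tier 1
```

A real tiering engine weighs more dimensions (IOPS, throughput, durability, egress cost), but the shape of the decision is the same.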

Automatic vs Manual Tiering

Manual tiering requires administrators to explicitly move data between storage classes. Automatic tiering (auto-tiering) monitors access patterns and migrates data transparently — hot blocks are promoted to NVMe, cold blocks are demoted to cheaper media, all without application changes.
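The promote/demote loop of an auto-tiering engine can be sketched in a few lines. The thresholds and tier numbering below are hypothetical; production systems tune them per workload and usually track access history over multiple windows rather than one.

```python
from collections import Counter

# Hypothetical thresholds; real engines tune these per workload.
PROMOTE_THRESHOLD = 100   # accesses per window -> promote toward NVMe
DEMOTE_THRESHOLD = 5      # accesses per window -> demote toward HDD

def retier(block_tier: dict[int, int], accesses: Counter,
           fastest: int = 0, slowest: int = 2) -> dict[int, int]:
    """Return a new block->tier mapping after one monitoring window.

    Tier 0 is fastest (NVMe); higher numbers are slower media.
    Hot blocks move one tier up, cold blocks one tier down.
    """
    new_map = {}
    for block, tier in block_tier.items():
        count = accesses.get(block, 0)
        if count >= PROMOTE_THRESHOLD and tier > fastest:
            new_map[block] = tier - 1      # promote hot block
        elif count <= DEMOTE_THRESHOLD and tier < slowest:
            new_map[block] = tier + 1      # demote cold block
        else:
            new_map[block] = tier
    return new_map

blocks = {1: 1, 2: 0, 3: 2}
window = Counter({1: 250})                 # block 1 hot; 2 and 3 cold
print(retier(blocks, window))              # {1: 0, 2: 1, 3: 2}
```

Note that demotion stops at the slowest tier and promotion at the fastest, and blocks move one tier per window; this damping avoids thrashing data back and forth on bursty access patterns.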

NVMe-oF and Tiering

Disaggregated NVMe-oF storage pools enable fine-grained tiering. A single storage cluster can contain both NVMe nodes (fast tier) and HDD nodes (capacity tier). The tiering engine migrates data between nodes transparently over the fabric. Kubernetes workloads see this as a single StorageClass — the tiering happens below the CSI layer.
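One way to picture what happens below the CSI layer: when the tiering engine promotes or demotes an extent, it only has to choose a backing node of the right class inside the pool; the workload's volume identity never changes. The node inventory and selection policy below are illustrative assumptions, not any real cluster's API.

```python
# Hypothetical node inventory for a disaggregated NVMe-oF pool;
# names and capacities are illustrative only.
NODES = [
    {"name": "nvme-a", "class": "nvme", "free_gb": 800},
    {"name": "nvme-b", "class": "nvme", "free_gb": 200},
    {"name": "hdd-a",  "class": "hdd",  "free_gb": 40_000},
]

def pick_target(node_class: str, size_gb: int) -> str:
    """Choose the node with the most free space in the requested class.

    This mimics the placement step of a tiering engine: migration is a
    node-to-node copy over the fabric, invisible to the workload above
    the CSI layer.
    """
    fits = [n for n in NODES
            if n["class"] == node_class and n["free_gb"] >= size_gb]
    if not fits:
        raise RuntimeError(f"no {node_class} node can hold {size_gb} GB")
    return max(fits, key=lambda n: n["free_gb"])["name"]

print(pick_target("nvme", 100))   # promote a 100 GB extent to the fast tier
print(pick_target("hdd", 500))    # demote a 500 GB extent to capacity
```

Real pools also account for fabric topology, failure domains, and rebalancing in flight, but the key property holds: Kubernetes sees one StorageClass while placement decisions happen inside the storage cluster.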