Rook-Ceph Deployment Using Helm
This guide explains how to deploy Rook-Ceph on a Kubernetes cluster at Voltage Park.
By default, Voltage Park On-Demand bare metal servers have one NVMe disk mounted as the root partition and six (6) additional NVMe disks. These six disks are left unmounted, offering high flexibility for custom HPC workloads.
This procedure assumes that Kubernetes is already up and running, that Helm is installed, and that each of your nodes has six (6) clean 2.9 TB NVMe disks.
IMPORTANT: You need at least three nodes to deploy a healthy Ceph cluster.
| Use Case | Nodes | Description |
| --- | --- | --- |
| Minimum functional | 3 | Bare minimum to enable 3x replication and data durability. |
| Recommended for HA + performance | 5 | Provides better performance, improved failure tolerance, and smoother recovery. |
| High-performance scalable cluster | 6+ | Maximizes NVMe utilization and scales bandwidth and I/O performance. |
🧩 1. Add Helm Repo & Update
```bash
helm repo add rook-release https://charts.rook.io/release
helm repo update
```

📁 2. Create Namespace

```bash
kubectl create namespace rook-ceph
```

📦 3. Install Rook Operator (Helm-managed)

```bash
helm install rook-ceph rook-release/rook-ceph --namespace rook-ceph
```

Verify it’s running:
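```bash
kubectl -n rook-ceph get pods
```

The rook-ceph-operator pod should be in the Running state before you continue.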
📝 4. Create CephCluster Custom Resource
This example features three nodes, utilizing all available disks on the servers.
ceph-cluster.yaml
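A minimal manifest for this setup, adapted from Rook's upstream examples; the Ceph image tag shown is an assumption, so pin it to the version recommended for your Rook release:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    # Assumed image tag; use the Ceph version recommended for your Rook release.
    image: quay.io/ceph/ceph:v18.2.2
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3
    allowMultiplePerNode: false
  mgr:
    count: 2
  dashboard:
    enabled: true
  storage:
    useAllNodes: true      # enroll every node in the cluster
    useAllDevices: true    # consume every clean, unmounted disk as an OSD
```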
Apply it:
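```bash
kubectl apply -f ceph-cluster.yaml
```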
🔍 5. Monitor Cluster Startup
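Watch the pods come up and check the CephCluster status:

```bash
kubectl -n rook-ceph get pods -w
kubectl -n rook-ceph get cephcluster
```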
Look for:
- rook-ceph-mon-*, rook-ceph-mgr-*, and rook-ceph-osd-* pods in the Running state
- PHASE: Ready and HEALTH_OK on the CephCluster resource
This may take a few minutes as all OSDs initialize.
🔧 6. Deploy Toolbox Pod
To run ceph CLI commands:
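Apply the toolbox manifest from the Rook examples (the master branch URL is shown here; use the tag matching your installed Rook version):

```bash
kubectl apply -f https://raw.githubusercontent.com/rook/rook/master/deploy/examples/toolbox.yaml
```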
Test:
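```bash
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph status
```

The output should report HEALTH_OK with all OSDs up and in.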
🧪 Confirm All NVMe Disks Are Used
Run this in the toolbox pod:
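```bash
ceph osd tree     # one host bucket per node, with its OSDs underneath
ceph device ls    # physical devices and the OSD daemon using each
```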
You should see 6 disks per node listed and marked as “in use” by OSDs.
💾 7. Create Block StorageClass (RBD)
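A typical pair of manifests, adapted from Rook's upstream examples: a replicated CephBlockPool plus a StorageClass that provisions RBD images from it. The names replicapool and rook-ceph-block are conventions, not requirements. Save as storageclass.yaml and apply with kubectl apply -f storageclass.yaml.

```yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: host   # place replicas on different nodes
  replicated:
    size: 3             # 3x replication, matching the three-node minimum
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
# The provisioner prefix must match the operator namespace (rook-ceph here).
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph
  pool: replicapool
  imageFormat: "2"
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
  csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Delete
allowVolumeExpansion: true
```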
Set default (optional):
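```bash
kubectl patch storageclass rook-ceph-block \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
```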
📦 8. Test PVC with Ubuntu Pod
Create a test PVC requesting 3 TiB of storage.
test-pvc.yaml
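A minimal claim against the rook-ceph-block StorageClass from the previous step (the name ceph-test-pvc is illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: rook-ceph-block
  resources:
    requests:
      storage: 3Ti   # 3 TiB RBD image
```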
Apply:
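```bash
kubectl apply -f test-pvc.yaml
kubectl get pvc ceph-test-pvc
```

The PVC should reach the Bound status within a few seconds.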
Create Ubuntu Pod
ubuntu-ceph-test.yaml
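A simple pod that mounts the claim above at /mnt/ceph (the pod and volume names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ubuntu-ceph-test
spec:
  containers:
    - name: ubuntu
      image: ubuntu:22.04
      command: ["sleep", "infinity"]   # keep the pod alive for interactive testing
      volumeMounts:
        - name: ceph-volume
          mountPath: /mnt/ceph
  volumes:
    - name: ceph-volume
      persistentVolumeClaim:
        claimName: ceph-test-pvc
```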
Deploy:
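```bash
kubectl apply -f ubuntu-ceph-test.yaml
kubectl exec -it ubuntu-ceph-test -- df -h /mnt/ceph
```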
The Ceph-backed block volume will be mounted at /mnt/ceph inside the pod.
👏 9. Conclusion
Congratulations! You have successfully deployed Rook-Ceph on your Kubernetes cluster, putting all six unmounted NVMe disks on each node to work as OSDs.
You can now use this storage for high-performance workloads.
If you encounter issues, reach out to [email protected] for assistance.