Longhorn Deployment Using Helm
This guide explains how to deploy Longhorn on a Kubernetes (K8s) cluster at Voltage Park.
Longhorn is a lightweight, distributed block storage system specifically built for Kubernetes.
By default, Voltage Park On-Demand bare metal servers have one NVMe disk mounted at the root partition and six (6) additional NVMe disks. The six additional disks are unmounted, offering high flexibility for custom HPC workloads. This guide assumes that Kubernetes is already up and running, that Helm is installed, and that each node has six (6) clean 2.9TB NVMe disks.
IMPORTANT: You need at least three nodes to deploy a highly available (replication, failover) Longhorn cluster. Unlike Ceph, Longhorn does not consume raw block devices; it expects each node to provide a filesystem that is already formatted and mounted.
Therefore, it requires an additional node preparation step.
Longhorn provides built-in data protection, but it works differently from Ceph.
Each Longhorn volume is replicated across multiple nodes, similar to RAID1:
- The default is three (3) replicas.
- If one node fails, Longhorn uses a healthy replica to continue serving data.
- When the failed node returns, Longhorn automatically rebuilds the lost replica.
Node Disk Preparation
For this use case, we will run Longhorn on top of a RAID0 volume built from all six (6) clean 2.9TB NVMe disks and mounted at /mnt/raid0.
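The exact commands depend on your environment; the following is a minimal sketch for Ubuntu that assumes the six spare disks appear as /dev/nvme1n1 through /dev/nvme6n1 (confirm the device names with lsblk first, and repeat the steps on every node). It creates a RAID0 array with mdadm, formats it with ext4, mounts it at /mnt/raid0, and persists the configuration across reboots.
# Verify the spare disk names first (the device names below are an assumption)
lsblk
# Create a RAID0 array from the six NVMe disks (this destroys any data on them)
sudo mdadm --create /dev/md0 --level=0 --raid-devices=6 \
  /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1 /dev/nvme5n1 /dev/nvme6n1
# Format the array and mount it where Longhorn will store its data
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /mnt/raid0
sudo mount /dev/md0 /mnt/raid0
# Persist the array and the mount across reboots
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u
echo '/dev/md0 /mnt/raid0 ext4 defaults,nofail 0 2' | sudo tee -a /etc/fstab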
Longhorn Installation via Helm
1. 🧩 Add Helm Repo
helm repo add longhorn https://charts.longhorn.io
helm repo update
2. 📁 Create Namespace
kubectl create namespace longhorn-system
3. 📦 Install Longhorn via Helm
helm install longhorn longhorn/longhorn \
--namespace longhorn-system \
--set defaultSettings.defaultDataPath="/mnt/raid0"
🔧 You can adjust defaultDataPath to wherever you want Longhorn to store its data on each node (for example /mnt/nvme0 if you mount the NVMe disks individually).
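Before continuing, confirm that the Longhorn components are up. All pods in the longhorn-system namespace (longhorn-manager, longhorn-driver-deployer, the instance managers, and the CSI pods) should eventually reach Running:
kubectl -n longhorn-system get pods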
4. 🧰 Create a StorageClass
Longhorn ships with a default StorageClass named longhorn. Verify that it exists:
kubectl get storageclass
(Optional) Set it as the default:
kubectl patch storageclass longhorn \
-p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
5. 🧪 Test with PVC + Ubuntu Pod
longhorn-test-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: longhorn-test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3072Gi
  storageClassName: longhorn
ubuntu-longhorn-test.yaml
apiVersion: v1
kind: Pod
metadata:
  name: ubuntu-longhorn-test
spec:
  containers:
    - name: ubuntu
      image: ubuntu:22.04
      command: ["/bin/bash", "-c", "--"]
      args: ["while true; do sleep 30; done;"]
      volumeMounts:
        - mountPath: /mnt/longhorn
          name: longhorn-vol
  volumes:
    - name: longhorn-vol
      persistentVolumeClaim:
        claimName: longhorn-test-pvc
  restartPolicy: Never
  tolerations:
    - key: "node-role.kubernetes.io/master"
      operator: "Exists"
      effect: "NoSchedule"
Apply both:
kubectl apply -f longhorn-test-pvc.yaml
kubectl apply -f ubuntu-longhorn-test.yaml
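Give Longhorn a moment to provision the volume, then confirm that the PVC is Bound and the pod is Running:
kubectl get pvc longhorn-test-pvc
kubectl get pod ubuntu-longhorn-test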
🧪 Test Inside Pod
kubectl exec -it ubuntu-longhorn-test -- bash
# Inside the POD:
echo "longhorn works!" > /mnt/longhorn/test.txt
cat /mnt/longhorn/test.txt
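You can also confirm that the data is being replicated. The Helm chart installs Longhorn's custom resources, so each volume and its replicas can be listed directly; the test volume should show three replicas spread across different nodes, matching the default replica count.
kubectl -n longhorn-system get volumes.longhorn.io
kubectl -n longhorn-system get replicas.longhorn.io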
6. 👏 Conclusion
Congratulations! You have successfully deployed Longhorn on your Kubernetes cluster, using the six previously unmounted NVMe disks.
In our internal tests, Ceph delivered better performance than Longhorn.
Ceph provides excellent scalability and supports multiple protocols (block, file, object), making it more suitable for large-scale, production-grade clusters. In contrast, Longhorn is simpler to deploy and manage in smaller Kubernetes environments.
If you encounter issues, reach out to [email protected] for assistance.