OpenShift Virtualization + Ceph RBD
Customer PoC Runbook
Purpose
This runbook documents how to use Ceph RBD (via CSI) as the primary storage backend for OpenShift Virtualization (KubeVirt) in a VMware replacement PoC.
It assumes:
- Community Ceph (external)
- Ceph CSI RBD already working
- ceph-rbd-v2 is the default StorageClass
1. Scope & Assumptions
In Scope
- VM disks backed by Ceph RBD
- VM creation via OpenShift Virtualization
- OS image storage on Ceph
- VM lifecycle (start/stop/delete) and live migration
Out of Scope (for this PoC)
- CephFS
- Multisite Ceph
- Production HA tuning
- Red Hat supportability
2. Architecture Overview
Storage flow:
OpenShift Virtualization (VM)
|
v
PersistentVolumeClaim (RWO, or RWX Block for live-migratable VMs)
|
v
Ceph CSI (rbd.csi.ceph.com)
|
v
Ceph RBD Image (pool: openshift)
Key points:
- Each VM disk = one RBD image
- RWO is sufficient for a VM pinned to one node; live-migratable VMs need ReadWriteMany
- Live migration requires shared storage → Ceph RBD satisfies this via volumeMode: Block + ReadWriteMany
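To confirm the CSI flow above, check which provisioner backs the default StorageClass:
oc get sc ceph-rbd-v2 -o jsonpath='{.provisioner}{"\n"}'
Expected: rbd.csi.ceph.com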
3. Prerequisites
OpenShift
- OpenShift 4.x
- Ceph CSI RBD installed and healthy
- Default StorageClass: ceph-rbd-v2
- Worker nodes with sufficient CPU & RAM
Ceph
- Pool exists (openshift)
- Ceph user has rwx caps on the pool
- MONs reachable from workers
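One common way to grant those caps, assuming the CSI user is named client.openshift (adjust user and pool names to your cluster):
ceph auth get-or-create client.openshift \
  mon 'profile rbd' \
  osd 'profile rbd pool=openshift'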
4. Install OpenShift Virtualization
4.1 Enable OperatorHub Access
Ensure OperatorHub is available (default in most clusters).
oc get packagemanifests kubevirt-hyperconverged -n openshift-marketplace
4.2 Install the Operator
Install OpenShift Virtualization from:
-
OperatorHub → Red Hat Operators → OpenShift Virtualization
Or CLI example:
oc apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-cnv
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-cnv
  namespace: openshift-cnv
spec:
  targetNamespaces:
    - openshift-cnv
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  channel: stable
  name: kubevirt-hyperconverged
  source: redhat-operators
  sourceNamespace: openshift-marketplace
EOF
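The Subscription produces a ClusterServiceVersion; installation is complete once it reports Succeeded:
oc get csv -n openshift-cnv -w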
4.3 Create the HyperConverged CR
oc apply -f - <<EOF
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec: {}
EOF
Verify:
oc get pods -n openshift-cnv
Expected: all pods Running
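Instead of polling pods, you can also block until the HyperConverged CR reports its Available condition:
oc wait hyperconverged/kubevirt-hyperconverged -n openshift-cnv \
  --for=condition=Available --timeout=15m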
5. Verify Ceph RBD Is Used by Virtualization
oc get sc
Expected:
ceph-rbd-v2 (default)
This ensures that the following all land on Ceph RBD:
- VM disks
- OS images
- DataVolumes
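To double-check the default-class annotation:
oc get sc ceph-rbd-v2 -o jsonpath='{.metadata.annotations.storageclass\.kubernetes\.io/is-default-class}{"\n"}'
Expected: true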
6. Enable OpenShift Virtualization UI
Navigate to:
Console → Virtualization
If not visible, enable the console plugin (its name is kubevirt-plugin):
oc patch consoles.operator.openshift.io cluster \
  --type merge -p '{"spec":{"plugins":["kubevirt-plugin"]}}'
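Note that a merge patch replaces the entire plugins list; if other plugins are already enabled, append instead:
oc patch consoles.operator.openshift.io cluster --type json \
  -p '[{"op":"add","path":"/spec/plugins/-","value":"kubevirt-plugin"}]'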
7. Create a VM (Ceph-backed)
7.1 Upload OS Image (Ceph-backed)
From UI:
- Virtualization → Catalog → Bootable volumes
- Upload ISO or QCOW2
Or CLI example (save as fedora-datavolume.yaml):
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: fedora-boot
spec:
  source:
    http:
      url: https://download.fedoraproject.org/pub/fedora/linux/releases/40/Cloud/x86_64/images/Fedora-Cloud-Base-40.qcow2
  pvc:
    accessModes:
      - ReadWriteMany   # required for live migration (section 11)
    volumeMode: Block   # RBD supports RWX only as a raw block volume
    resources:
      requests:
        storage: 10Gi
oc apply -f fedora-datavolume.yaml
Verify:
oc get dv
oc get pvc
PVC should be Bound using ceph-rbd-v2.
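To confirm the StorageClass actually used by the claim:
oc get pvc fedora-boot -o jsonpath='{.spec.storageClassName}{"\n"}'
Expected: ceph-rbd-v2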
8. Create a VirtualMachine
8.1 VM Definition (Ceph RBD Disk)
Save as vm.yaml:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: fedora-test
spec:
  running: false
  template:
    spec:
      domain:
        cpu:
          cores: 2
        resources:
          requests:
            memory: 4Gi
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
      volumes:
        - name: rootdisk
          dataVolume:
            name: fedora-boot
oc apply -f vm.yaml
9. Start the VM
oc has no VM lifecycle subcommands; use virtctl (downloadable from the console's Command Line Tools page):
virtctl start fedora-test
Check:
oc get vmi
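Once the VMI is Running, you can attach to its serial console (exit with Ctrl+]):
virtctl console fedora-test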
10. Verify Ceph RBD Usage
On Ceph side:
rbd ls openshift
You should see:
csi-vol-<uuid>
Each VM disk corresponds to one RBD image.
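To map a specific PVC to its image, one option is to read the CSI volume handle from the PV; in current ceph-csi releases the handle's trailing UUID matches the csi-vol-<uuid> image name (verify against your ceph-csi version):
PV=$(oc get pvc fedora-boot -o jsonpath='{.spec.volumeName}')
oc get pv "$PV" -o jsonpath='{.spec.csi.volumeHandle}{"\n"}'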
11. Live Migration Test (Optional but Recommended)
Cordon the node currently hosting the VMI (mox1 in this PoC; see the NODE column of oc get vmi -o wide), then trigger the migration:
oc adm cordon mox1
virtctl migrate fedora-test
Verify:
oc get vmi -o wide
Expected:
- VM moves to another worker
- No storage interruption
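If virtctl is not at hand, the same migration can be triggered declaratively with a VirtualMachineInstanceMigration object (the object name is arbitrary):
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceMigration
metadata:
  name: fedora-test-migration
spec:
  vmiName: fedora-test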
12. VM Delete & Cleanup
oc delete vm fedora-test
oc delete dv fedora-boot
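If you cordoned a node during the migration test, make it schedulable again:
oc adm uncordon mox1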
Ceph:
rbd ls openshift
With the StorageClass's Delete reclaim policy (the default), the backing RBD images are removed automatically once the PVCs are gone.
13. Common PoC Pitfalls
❌ VM stuck in Provisioning
- Check PVC events (see the commands below)
- Verify default StorageClass
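A quick way to surface provisioning errors for the claim used in this runbook:
oc describe pvc fedora-boot
oc get events --sort-by=.lastTimestamp | grep -i provision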
❌ “provided secret is empty”
- Wrong StorageClass (old one)
- Ensure ceph-rbd-v2 is used
❌ VM won’t migrate
- Disk must be on shared storage with ReadWriteMany access
- Ceph RBD provides this in Block volumeMode
14. VMware Replacement Mapping
| VMware Concept | OpenShift Virtualization |
|---|---|
| vSphere Datastore | Ceph RBD Pool |
| VMDK | RBD Image |
| ESXi Host | OpenShift Worker |
| vMotion | Live Migration |
| VM Template | DataVolume / Bootable Volume |
15. PoC Status
✅ OpenShift Virtualization installed
✅ Ceph RBD backing VM disks
✅ Dynamic provisioning
✅ Live migration supported
🚀 Ready for:
- VM performance testing
- Stateful workloads
- Migration demos
- Customer workshops
