OpenShift 4.20 and community Ceph with Helm

OpenShift + Community Ceph (RBD CSI)

Customer PoC Runbook

Purpose:

This runbook documents how to integrate an existing community Ceph cluster with OpenShift 4.x using the Ceph CSI RBD driver (Helm) — without OpenShift Data Foundation (ODF).

Scope:

  • Dynamic PersistentVolume provisioning using Ceph RBD

  • OpenShift 4.x (tested on 4.20)

  • External / community Ceph (not Red Hat ODF)

  • Suitable for PoC and lab environments

1. Architecture Overview

  • OpenShift cluster with:
    • 3 control-plane nodes
    • ≥2 worker nodes

  • External Ceph cluster providing:
    • MON endpoints
    • RBD pool
    • Ceph user with RBD permissions

  • Ceph CSI deployed via Helm into:
    • openshift-storage namespace

⚠️ Important
OpenShift Data Foundation does not support community Ceph.
This method uses upstream Ceph CSI and is unsupported by Red Hat — appropriate for PoC only.

2. Prerequisites

OpenShift

  • Cluster-admin access

  • oc CLI

  • helm v3+

  • At least one schedulable worker node
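
A quick sanity check of these prerequisites before starting (a minimal sketch, assuming you are already logged in to the cluster):

# Confirm login, client versions and available worker nodes
oc whoami
oc version
helm version
oc get nodes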

Ceph

  • Running Ceph cluster

  • MON IPs reachable from OpenShift

  • Existing RBD pool (example: openshift)

  • Ceph user with RBD permissions

 

Example Ceph user:

ceph auth get-or-create client.openshift \
  mon 'allow r' \
  osd 'allow rwx pool=openshift'
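
Before continuing, it is worth confirming that the MON endpoints are reachable from a worker node. A minimal check (replace <worker-node> and <mon-ip> with your values; it relies on bash's /dev/tcp inside the oc debug tools image):

oc debug node/<worker-node> -- bash -c 'timeout 3 bash -c "</dev/tcp/<mon-ip>/6789" && echo reachable || echo unreachable'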

3. Collect Ceph Information

 

You will need:

 

  • Cluster FSID

  • Monitor endpoints

  • Ceph user ID

  • Ceph user key

 

ceph fsid
ceph mon dump
ceph auth get client.openshift

Keep these values ready.
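
If you only need the key string itself (useful when creating the Secret in step 7), ceph auth get-key prints just the key:

ceph auth get-key client.openshift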

4. Create Namespace

 

oc create namespace openshift-storage

5. Install Ceph CSI (RBD only)

5.1 Add Helm Repository

 

helm repo add ceph-csi https://ceph.github.io/csi-charts
helm repo update
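
Optionally, list the chart versions available in the repo so you can confirm the version pinned in the install step (3.16.0 here):

helm search repo ceph-csi/ceph-csi-rbd --versions | head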

 

5.2 Helm values file (ceph-csi-driver-values.yaml)

 

# Enable only RBD
rbd:
  enabled: true

# Disable CephFS
cephFS:
  enabled: false

# Controller plugin
controllerPlugin:
  enabled: true
  replicaCount: 1
  nodeSelector:
    node-role.kubernetes.io/worker: ""

# Node plugin
nodePlugin:
  enabled: true

# Ceph cluster configuration
# clusterID is the Ceph FSID and monitors are the MON endpoints collected in step 3
csiConfig:
  - clusterID: "c3db0267-2c6d-4248-94ae-50379afeea49"
    monitors:
      - "130.236.59.198:6789"
      - "130.236.59.107:6789"
      - "130.236.59.128:6789"

driverType: rbd
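
Before installing, you can render the chart locally against this values file to catch obvious mistakes. This is a dry run only; nothing is applied to the cluster:

helm template ceph-csi-rbd ceph-csi/ceph-csi-rbd \
  --version 3.16.0 \
  -n openshift-storage \
  -f ceph-csi-driver-values.yaml | less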

 

5.3 Install via Helm

 

helm upgrade --install ceph-csi-rbd ceph-csi/ceph-csi-rbd \
  --version 3.16.0 \
  -n openshift-storage \
  -f ceph-csi-driver-values.yaml
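
Confirm the Helm release was created successfully:

helm list -n openshift-storage
helm status ceph-csi-rbd -n openshift-storage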

6. Verify CSI Pods

 

oc -n openshift-storage get pods -o wide | grep rbd

Expected:

 

  • ceph-csi-rbd-provisioner → Running

  • ceph-csi-rbd-nodeplugin → Running on each worker
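
If any pod is not Running, recent events and the pod logs usually point at the cause (generic checks; substitute the name of the failing pod):

oc -n openshift-storage get events --sort-by=.lastTimestamp | tail -20
oc -n openshift-storage logs <pod-name> --all-containers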

7. Create Ceph Credentials Secret

7.1 Create Secret YAML

 

apiVersion: v1
kind: Secret
metadata:
  name: ceph-rbd-secret
  namespace: openshift-storage
type: Opaque
data:
  userID: Q0VQSFVTRVJJRA==
  userKey: Q0VQSFVTRVJLRVk=

ℹ️ Placeholder values:

 

  • CEPHUSERID → the Ceph user ID from step 2, without the client. prefix (e.g. openshift), base64-encoded
  • CEPHUSERKEY → the key shown by ceph auth get, base64-encoded

 

Encode values:

echo -n CEPHUSERID | base64
echo -n CEPHUSERKEY | base64

Apply:

oc apply -f ceph-rbd-secret.yaml
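
As an alternative to hand-encoding the values, the same Secret can be created in one step, assuming the client.openshift user from step 2 and a host that has both oc and ceph CLI access (oc handles the base64 encoding for you):

oc -n openshift-storage create secret generic ceph-rbd-secret \
  --from-literal=userID=openshift \
  --from-literal=userKey="$(ceph auth get-key client.openshift)"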

8. Create StorageClass (Correct Secret Wiring)

 

⚠️ Critical learning from the PoC
Using the legacy adminSecretName / userSecretName parameters fails with:
rpc error: provided secret is empty
The CSI driver requires the explicit csi.storage.k8s.io/* secret parameters shown below.

 

8.1 StorageClass (ceph-rbd-v2)

 

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd-v2
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: c3db0267-2c6d-4248-94ae-50379afeea49
  pool: openshift
  imageFormat: "2"
  imageFeatures: layering
  csi.storage.k8s.io/fstype: ext4

  csi.storage.k8s.io/provisioner-secret-name: ceph-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage
  csi.storage.k8s.io/node-stage-secret-name: ceph-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage

reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: Immediate

Apply:

oc apply -f ceph-rbd-v2.yaml
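
To confirm the csi.storage.k8s.io/* secret references were stored as written:

oc get sc ceph-rbd-v2 -o yaml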

9. Set Default StorageClass

 

oc annotate sc ceph-rbd-v2 \
  storageclass.kubernetes.io/is-default-class=true --overwrite

(Optional) remove the default annotation from the previous default class (replace OLDDEFAULT with its name):

oc annotate sc OLDDEFAULT storageclass.kubernetes.io/is-default-class-

Verify (ceph-rbd-v2 should now be listed with (default) after its name):

oc get sc

10. Validate with Test PVC

 

# No storageClassName is set, so the default StorageClass (ceph-rbd-v2) is used
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-rbd-test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

Apply:

oc apply -f pvc-test.yaml

Check:

oc describe pvc ceph-rbd-test

Expected:

Status: Bound
ProvisioningSucceeded
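
A Bound PVC proves provisioning works; to also exercise the node-stage path (where the node-stage secret is used), mount the claim in a throwaway pod. A minimal sketch, assuming the PVC is in your current project; the pod name and image are only examples:

cat <<'EOF' | oc apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: ceph-rbd-test-pod
spec:
  containers:
    - name: app
      image: registry.access.redhat.com/ubi9/ubi-minimal
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: ceph-rbd-test
EOF

# Wait for the pod, then write and read a file on the RBD-backed volume
oc wait --for=condition=Ready pod/ceph-rbd-test-pod --timeout=180s
oc exec ceph-rbd-test-pod -- sh -c 'echo hello > /data/hello && cat /data/hello'

# Clean up
oc delete pod ceph-rbd-test-pod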

11. Known Pitfalls & Lessons Learned

❌ Using ODF with community Ceph

  • Will not work
  • Operator installs but CSI never provisions volumes

❌ Missing CSI secret keys

  • Error: provided secret is empty
  • Fix: Use csi.storage.k8s.io/* parameters

❌ Editing StorageClass parameters

  • StorageClass parameters are immutable
  • Always create a new StorageClass version

⚠️ Topology warnings

  • Safe to ignore in PoC unless using topology-aware scheduling


12. PoC Status

 

✅ CSI deployed

✅ Dynamic RBD provisioning working

✅ Default StorageClass set

🚀 Ready for:

  • OpenShift Virtualization

  • Stateful workloads

  • VMware replacement testing
