OpenStack and Ceph

Ceph is the most widely deployed storage backend for OpenStack. It provides unified block, object, and filesystem storage that integrates natively with Cinder (volumes), Glance (images), and Nova (ephemeral disks). This guide explains how to connect an existing Ceph cluster to OpenStack 2024.2 Dalmatian.

Why Ceph for OpenStack?

  • Unified storage: one cluster serves block, object, and file storage
  • No single point of failure: data is replicated across multiple OSDs and nodes
  • Thin provisioning: volumes and images consume only the space actually written
  • Copy-on-write clones: boot-from-volume creates near-instant image clones
  • Live migration: RBD-backed instances migrate without shared NFS

Prerequisites

  • A running Ceph cluster (Reef or later recommended)
  • OpenStack 2024.2 Dalmatian deployed
  • Network connectivity between OpenStack nodes and Ceph monitors
  • ceph-common package installed on all OpenStack controller and compute nodes
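Before creating pools, it is worth confirming that each OpenStack node can actually reach the monitors. A minimal pre-flight sketch (the monitor address is a placeholder; substitute a real one):

```shell
# Pre-flight check from an OpenStack node. MON is a placeholder address.
MON=192.0.2.10
for port in 3300 6789; do            # msgr2 and legacy v1 monitor ports
  if nc -z -w 3 "$MON" "$port" 2>/dev/null; then
    echo "monitor reachable on port $port"
  else
    echo "monitor NOT reachable on port $port"
  fi
done
ceph --version 2>/dev/null || echo "ceph-common not installed yet"
```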

Step 1: Create Ceph Pools for OpenStack

On a Ceph monitor node, create three pools. The PG counts below are reasonable starting points; with the pg_autoscaler enabled (the default on recent releases), Ceph adjusts them automatically:

ceph osd pool create volumes 128
ceph osd pool create images 64
ceph osd pool create vms 128

ceph osd pool application enable volumes rbd
ceph osd pool application enable images rbd
ceph osd pool application enable vms rbd
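The create/enable pairs above can also be expressed as a single loop, which is handy when adding more pools later; the pool names and PG counts match the commands above:

```shell
# Create each pool and tag it for RBD in one pass (same pools and PG counts as above).
for spec in volumes:128 images:64 vms:128; do
  pool=${spec%%:*}   # pool name (before the colon)
  pgs=${spec##*:}    # PG count (after the colon)
  ceph osd pool create "$pool" "$pgs" && \
    ceph osd pool application enable "$pool" rbd || \
    echo "failed for $pool (is this a mon node with the admin keyring?)"
done
```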

Step 2: Create Ceph Auth Keys

Create dedicated Ceph users for each OpenStack service:

ceph auth get-or-create client.cinder \
  mon 'profile rbd' \
  osd 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd-read-only pool=images' \
  -o /etc/ceph/ceph.client.cinder.keyring

ceph auth get-or-create client.glance \
  mon 'profile rbd' \
  osd 'profile rbd pool=images' \
  -o /etc/ceph/ceph.client.glance.keyring

ceph auth get-or-create client.nova \
  mon 'profile rbd' \
  osd 'profile rbd pool=vms, profile rbd-read-only pool=images' \
  -o /etc/ceph/ceph.client.nova.keyring

Step 3: Distribute Keyrings

Copy the Ceph configuration and keyrings to the OpenStack nodes:

# On every controller and compute node (run as root, or copy via a
# writable path first, since /etc/ceph is root-owned):
sudo apt install -y ceph-common
scp ceph-mon:/etc/ceph/ceph.conf /etc/ceph/
scp ceph-mon:/etc/ceph/ceph.client.cinder.keyring /etc/ceph/
scp ceph-mon:/etc/ceph/ceph.client.glance.keyring /etc/ceph/
scp ceph-mon:/etc/ceph/ceph.client.nova.keyring /etc/ceph/

# Each service account must be able to read its own keyring:
sudo chown glance:glance /etc/ceph/ceph.client.glance.keyring
sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
sudo chown nova:nova /etc/ceph/ceph.client.nova.keyring
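With the keyrings in place, each service user should now be able to talk to the cluster. A quick sanity loop (run on any node that received the keyrings):

```shell
# Each service user authenticates with its own keyring from /etc/ceph.
for user in cinder glance nova; do
  if sudo ceph -s --id "$user" >/dev/null 2>&1; then
    echo "client.$user: OK"
  else
    echo "client.$user: FAILED (check keyring path and caps)"
  fi
done
```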

Step 4: Configure Glance

Edit /etc/glance/glance-api.conf:

[DEFAULT]
enabled_backends = rbd:rbd
# Needed so Cinder and Nova can create copy-on-write clones of Ceph-backed images
show_image_direct_url = True

[glance_store]
default_backend = rbd

[rbd]
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8

Restart Glance:

sudo systemctl restart glance-api

Step 5: Configure Cinder

Edit /etc/cinder/cinder.conf:

[DEFAULT]
enabled_backends = ceph

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = <generate-a-uuid>
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5

Generate a single UUID, set it as rbd_secret_uuid in cinder.conf, then define a libvirt secret with that same UUID on every compute node:

UUID=$(uuidgen)   # generate once and reuse the same value on all compute nodes
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>${UUID}</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF

sudo virsh secret-define --file secret.xml
# ceph auth get-key requires the admin keyring; run it on a monitor node if needed
sudo virsh secret-set-value --secret ${UUID} \
  --base64 "$(ceph auth get-key client.cinder)"

Restart Cinder:

sudo systemctl restart cinder-volume

Step 6: Configure Nova for Ephemeral on Ceph

Edit /etc/nova/nova.conf on every compute node:

[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = nova
rbd_secret_uuid = <uuid-of-the-libvirt-secret-for-client.nova>

The secret UUID must match the user: since rbd_user is nova, define a second libvirt secret (repeat the virsh steps from Step 5 with a fresh UUID, a <name> of "client.nova secret", and the key from ceph auth get-key client.nova) and reference its UUID here. Reusing the Cinder secret's UUID with rbd_user = nova will fail authentication, because that secret holds the client.cinder key.

Restart Nova compute:

sudo systemctl restart nova-compute
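Because ephemeral disks now live in the vms pool rather than on local hypervisor disk, live migration works without shared NFS. A quick check (the instance name test-vm is an assumption; use any running RBD-backed instance):

```shell
# Live-migrate an RBD-backed instance; the scheduler picks the target host.
SERVER=test-vm   # assumption: an existing RBD-backed instance
openstack server migrate --live-migration "$SERVER"
# Watch the instance's hypervisor change (admin credentials required):
openstack server show "$SERVER" -f value -c OS-EXT-SRV-ATTR:host
```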

Step 7: Verify Integration

Test Glance image upload:

openstack image create --disk-format raw --container-format bare \
  --file cirros.img --public cirros-ceph
rbd -p images ls   # should show the image

Test Cinder volume creation:

openstack volume create --size 10 test-vol
rbd -p volumes ls   # should show the volume
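Finally, tie everything together by booting an instance from a Ceph-backed volume. The flavor and network names below are assumptions; substitute your own:

```shell
# Boot-from-volume: Nova asks Cinder to create a 10 GB volume from the image,
# then boots from it. With COW cloning enabled this is near-instant.
VOL_SIZE=10
SERVER=test-bfv
openstack server create --flavor m1.small --network private \
  --boot-from-volume "$VOL_SIZE" --image cirros-ceph "$SERVER"
openstack server show "$SERVER" -f value -c status   # should reach ACTIVE
rbd -p volumes ls   # a new volume-<id> image should appear
```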

Troubleshooting

  • "HEALTH_WARN: pool has no application": run ceph osd pool application enable <pool> rbd
  • Permission denied on RBD: verify keyring file permissions and the Ceph user's caps
  • Glance upload fails: check that rbd_store_user matches the keyring user name
  • Cinder volume stuck in error: check the cinder-volume logs for Ceph connection errors

Summary

Integrating Ceph with OpenStack eliminates the need for separate storage solutions for images, volumes, and ephemeral disks. The copy-on-write clone feature makes boot-from-volume nearly instantaneous.