Cinder Multiple Backends
Cinder supports multiple storage backends simultaneously, so a single OpenStack deployment can offer different storage tiers (fast local SSD, distributed Ceph, bulk NFS) and let users choose between them via volume types. This guide configures Cinder with LVM, Ceph, and NFS backends on OpenStack 2024.2 (Dalmatian).
How It Works
Cinder uses volume types to map user requests to backends. When a user creates a volume with a specific type, the Cinder scheduler places it on the matching backend.
User creates volume (type=ssd) → Scheduler → LVM-SSD backend
User creates volume (type=ceph) → Scheduler → Ceph RBD backend
User creates volume (type=nfs) → Scheduler → NFS backend
Prerequisites
| Requirement | Details |
|---|---|
| OpenStack | 2024.2 Dalmatian with Cinder |
| LVM | A volume group (e.g., cinder-volumes) on SSD |
| Ceph | Running cluster with an RBD pool named volumes |
| NFS | NFS server with an exported share |
Step 1: Configure Multiple Backends in cinder.conf
Edit /etc/cinder/cinder.conf on the storage node:
[DEFAULT]
enabled_backends = lvm-ssd,ceph-rbd,nfs-bulk
default_volume_type = ceph
# ---- LVM SSD Backend ----
[lvm-ssd]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
target_protocol = iscsi
target_helper = lioadm
volume_backend_name = ssd-storage
# ---- Ceph RBD Backend ----
[ceph-rbd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
rbd_flatten_volume_from_snapshot = false
volume_backend_name = ceph-storage
# ---- NFS Backend ----
[nfs-bulk]
volume_driver = cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config = /etc/cinder/nfs_shares
nfs_mount_point_base = /var/lib/cinder/nfs
nas_secure_file_operations = false
nas_secure_file_permissions = false
volume_backend_name = nfs-storage
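A common mistake is listing a backend in enabled_backends without defining its section (or mistyping the section name). The check below catches that; it runs against a scratch copy of the file here, since it assumes nothing about your real /etc/cinder/cinder.conf (point CONF at it on a live node):

```shell
#!/bin/sh
# Verify every backend in enabled_backends has a matching [section].
# CONF points at a demo file here; use /etc/cinder/cinder.conf for real.
CONF=${CONF:-/tmp/cinder.conf.sample}

# Demo input mirroring the configuration above.
cat > "$CONF" <<'EOF'
[DEFAULT]
enabled_backends = lvm-ssd,ceph-rbd,nfs-bulk
[lvm-ssd]
[ceph-rbd]
[nfs-bulk]
EOF

backends=$(sed -n 's/^enabled_backends *= *//p' "$CONF" | tr ',' ' ')
for b in $backends; do
    if grep -q "^\[$b\]" "$CONF"; then
        echo "OK: [$b] defined"
    else
        echo "MISSING: [$b]" >&2
    fi
done
```

Any MISSING line means cinder-volume will fail to start that backend.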
Create the NFS shares file:
echo "10.0.0.50:/export/cinder" | sudo tee /etc/cinder/nfs_shares
sudo chown cinder:cinder /etc/cinder/nfs_shares
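Each line of the shares file must have the form host:/export. A minimal format sanity check, demonstrated against a scratch copy rather than the real /etc/cinder/nfs_shares:

```shell
# Validate that share entries look like host:/path.
# Demo file; substitute /etc/cinder/nfs_shares on a real node.
SHARES=/tmp/nfs_shares.demo
echo "10.0.0.50:/export/cinder" > "$SHARES"

while read -r line; do
    case "$line" in
        *:/*) echo "OK: $line" ;;
        *)    echo "BAD: $line" >&2 ;;
    esac
done < "$SHARES"
# prints: OK: 10.0.0.50:/export/cinder
```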
Step 2: Create Volume Types
Volume types are how users select a backend:
source openrc admin admin
# Create volume types
openstack volume type create ssd
openstack volume type create ceph
openstack volume type create nfs
# Associate each type with a backend name
openstack volume type set ssd --property volume_backend_name=ssd-storage
openstack volume type set ceph --property volume_backend_name=ceph-storage
openstack volume type set nfs --property volume_backend_name=nfs-storage
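The three create/set pairs follow one pattern, so they script easily. This sketch prints the commands instead of running them (drop the leading echo to execute), using the type:backend pairs from the steps above:

```shell
# Print the openstack commands for each type/backend pair.
# Remove the leading "echo" to run them for real.
for pair in ssd:ssd-storage ceph:ceph-storage nfs:nfs-storage; do
    type=${pair%%:*}       # part before the colon
    backend=${pair#*:}     # part after the colon
    echo openstack volume type create "$type"
    echo openstack volume type set "$type" --property volume_backend_name="$backend"
done
```

The scheduler matches the volume_backend_name extra spec against the volume_backend_name each backend reports, so the values here must match cinder.conf exactly.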
Step 3: Restart Cinder
sudo systemctl restart cinder-volume cinder-scheduler
Verify all backends are reported:
openstack volume service list
You should see one cinder-volume entry per backend (e.g. storage1@lvm-ssd, storage1@ceph-rbd, storage1@nfs-bulk), each with state up. The legacy cinder service-list command shows the same information.
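On a live cloud this check can be scripted by parsing the machine-readable output of `openstack volume service list -f value -c Binary -c Host -c State`. The sketch below runs on canned sample text, since it assumes a running deployment and a hypothetical host name storage1:

```shell
# Canned sample of `openstack volume service list -f value -c Binary -c Host -c State`.
sample='cinder-scheduler storage1 up
cinder-volume storage1@lvm-ssd up
cinder-volume storage1@ceph-rbd up
cinder-volume storage1@nfs-bulk down'

# Count cinder-volume backends reporting "up".
up=$(printf '%s\n' "$sample" | grep -c '^cinder-volume .* up$')
echo "backends up: $up"   # prints: backends up: 2
```

In this sample the nfs-bulk backend is down, so the count is 2 of 3; on a healthy deployment it should equal the number of enabled backends.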
Step 4: Test Volume Creation
Create a volume on each backend:
openstack volume create --size 10 --type ssd fast-vol
openstack volume create --size 50 --type ceph distributed-vol
openstack volume create --size 100 --type nfs bulk-vol
openstack volume list
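To confirm placement, the admin-only host attribute of a volume encodes host@backend#pool; `openstack volume show fast-vol -c os-vol-host-attr:host` returns a value like the one parsed below (canned here, since it assumes a live volume and the hypothetical host storage1):

```shell
# A host attribute as reported by `openstack volume show ... -c os-vol-host-attr:host`.
host='storage1@lvm-ssd#lvm-ssd'

# Strip the leading "host@" and trailing "#pool" to isolate the backend name.
backend=${host#*@}
backend=${backend%%#*}
echo "$backend"   # prints: lvm-ssd
```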
Step 5: Set a Default Volume Type
When users do not specify a type, Cinder uses the default volume type. Set it in cinder.conf and restart the API service:
[DEFAULT]
default_volume_type = ceph
Confirm the active default:
cinder type-default
Per-project defaults are also available through the Block Storage API from microversion 3.62 (cinder default-type-set).
Backend Capabilities Comparison
| Feature | LVM (iSCSI) | Ceph RBD | NFS |
|---|---|---|---|
| Thin provisioning | Yes | Yes | Depends on FS |
| Snapshots | Yes | Yes (fast COW) | Yes |
| Live migration | Limited | Yes | Yes |
| Replication | No | Yes (built-in) | Depends |
| Multi-attach | No | Yes | No |
| Best for | Single-node fast I/O | Distributed HA | Bulk cheap storage |
Using QoS Policies
You can apply I/O limits per volume type:
# Create a QoS policy
openstack volume qos create high-iops \
--consumer front-end \
--property total_iops_sec=5000 \
--property total_bytes_sec=209715200
# Associate with the SSD type
openstack volume qos associate high-iops ssd
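The 209715200 in the policy above is just 200 MiB/s expressed in bytes per second (the unit total_bytes_sec expects); a line of shell arithmetic makes the conversion explicit:

```shell
# total_bytes_sec is bytes per second: 200 MiB/s in this policy.
mib_per_sec=200
echo $(( mib_per_sec * 1024 * 1024 ))   # prints: 209715200
```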
Troubleshooting
| Issue | Fix |
|---|---|
| Volume stuck in creating | Check cinder-volume logs for backend errors |
| Wrong backend selected | Verify volume_backend_name matches the type property |
| NFS mount fails | Check NFS exports and firewall rules |
| Ceph auth error | Verify keyring and rbd_secret_uuid in libvirt |
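Most of the fixes above start with the cinder-volume log. A simple triage filter, demonstrated on canned lines since it assumes real logs under /var/log/cinder/ (or `journalctl -u cinder-volume`):

```shell
# Canned log excerpt standing in for /var/log/cinder/cinder-volume.log.
log='INFO cinder.volume.manager Starting volume driver
ERROR cinder.volume.drivers.nfs Mount failure for 10.0.0.50:/export/cinder
INFO cinder.volume.manager Driver initialized'

# Surface errors and tracebacks; on a real node, grep the log file directly.
printf '%s\n' "$log" | grep -E 'ERROR|Traceback'
```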
Summary
Multiple Cinder backends let you offer storage tiers—fast SSD, distributed Ceph, and bulk NFS—within one OpenStack cloud. Volume types map user requests to the right backend, and QoS policies add I/O controls.