OpenStack Ansible Managed Ceph
OpenStack-Ansible (OSA) can deploy and manage a Ceph cluster alongside OpenStack using ceph-ansible integration. This "managed Ceph" approach is ideal for greenfield deployments where you want a single tool to handle both OpenStack and Ceph lifecycle.
How Managed Ceph Works in OSA
OSA includes playbooks that call ceph-ansible to deploy Ceph monitors, OSDs, and RGW daemons on dedicated or collocated hosts. Ceph pools, keyrings, and configuration are automatically generated and distributed to the OpenStack services.
Prerequisites
| Requirement | Details |
|---|---|
| OSA | 2024.2 (Dalmatian) |
| Ceph hosts | At least 3 nodes with dedicated OSD disks |
| OS | Ubuntu 22.04 LTS on all nodes |
| Disks | Each Ceph node needs 1+ unused block devices for OSDs |
| Network | Dedicated storage network recommended |
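The node-count and disk requirements above are easy to sanity-check before writing any configuration. A minimal sketch, assuming a plain dict describing the planned nodes (the host names and device paths are illustrative placeholders):

```python
# Sanity-check the Ceph prerequisites: at least 3 nodes, each with
# at least one spare block device for OSDs.
planned_nodes = {
    "ceph01": ["/dev/sdb", "/dev/sdc"],
    "ceph02": ["/dev/sdb", "/dev/sdc"],
    "ceph03": ["/dev/sdb", "/dev/sdc"],
}

def check_prereqs(nodes):
    """Return a list of problems; an empty list means the basic checks pass."""
    problems = []
    if len(nodes) < 3:
        problems.append(f"need at least 3 Ceph nodes, got {len(nodes)}")
    for host, devices in nodes.items():
        if not devices:
            problems.append(f"{host}: no spare OSD devices listed")
    return problems

print(check_prereqs(planned_nodes))  # → []
```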
Step 1: Define Ceph Hosts
Edit /etc/openstack_deploy/openstack_user_config.yml to add ceph-mon_hosts and ceph-osd_hosts sections:
```yaml
ceph-mon_hosts:
  ceph01:
    ip: 172.29.236.21
  ceph02:
    ip: 172.29.236.22
  ceph03:
    ip: 172.29.236.23

ceph-osd_hosts:
  ceph01:
    ip: 172.29.236.21
    container_vars:
      devices:
        - /dev/sdb
        - /dev/sdc
  ceph02:
    ip: 172.29.236.22
    container_vars:
      devices:
        - /dev/sdb
        - /dev/sdc
  ceph03:
    ip: 172.29.236.23
    container_vars:
      devices:
        - /dev/sdb
        - /dev/sdc
```
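Typos in this file (a copy-pasted IP, a device listed twice) surface late and confusingly during the playbook run, so a quick pre-flight check can pay off. A hedged sketch using the addresses from the example above:

```python
import ipaddress

# The ceph-osd_hosts entries from the example above, as plain data.
osd_hosts = {
    "ceph01": {"ip": "172.29.236.21", "devices": ["/dev/sdb", "/dev/sdc"]},
    "ceph02": {"ip": "172.29.236.22", "devices": ["/dev/sdb", "/dev/sdc"]},
    "ceph03": {"ip": "172.29.236.23", "devices": ["/dev/sdb", "/dev/sdc"]},
}

def validate_osd_hosts(hosts):
    """Catch duplicate IPs and duplicate device entries per host."""
    errors = []
    seen_ips = set()
    for name, cfg in hosts.items():
        ip = ipaddress.ip_address(cfg["ip"])  # raises on a malformed address
        if ip in seen_ips:
            errors.append(f"{name}: duplicate IP {ip}")
        seen_ips.add(ip)
        devs = cfg["devices"]
        if len(devs) != len(set(devs)):
            errors.append(f"{name}: duplicate device entries")
    return errors

print(validate_osd_hosts(osd_hosts))  # → []
```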
Step 2: Configure Ceph Variables
Edit /etc/openstack_deploy/user_variables_ceph.yml:
```yaml
# Ceph deployment method
ceph_pkg_source: distro
ceph_stable_release: reef

# Network
ceph_public_network: 172.29.236.0/22
ceph_cluster_network: 172.29.244.0/22

# OSD settings (ceph-volume LVM is the only OSD scenario in current
# ceph-ansible releases, so no osd_scenario variable is needed)
osd_objectstore: bluestore

# Pool defaults
ceph_conf_overrides:
  global:
    osd_pool_default_size: 3
    osd_pool_default_min_size: 2
    osd_pool_default_pg_num: 64
```
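Two quick checks on these values: the monitor and OSD addresses must fall inside ceph_public_network, and the PG count should be in the neighbourhood of the common rule of thumb of roughly 100 PGs per OSD divided by the replica count, rounded to a power of two. A sketch, using the six OSDs implied by the three example hosts with two devices each:

```python
import ipaddress
import math

public_net = ipaddress.ip_network("172.29.236.0/22")
cluster_net = ipaddress.ip_network("172.29.244.0/22")
mon_ips = ["172.29.236.21", "172.29.236.22", "172.29.236.23"]

# All monitor addresses must sit inside the public network, and the
# public and cluster networks must not overlap.
assert all(ipaddress.ip_address(ip) in public_net for ip in mon_ips)
assert not public_net.overlaps(cluster_net)

def suggested_pg_num(total_osds, replica_size, target_pgs_per_osd=100):
    """Rule-of-thumb PG count: (OSDs * target) / replicas,
    rounded up to the next power of two."""
    raw = total_osds * target_pgs_per_osd / replica_size
    return 2 ** math.ceil(math.log2(raw))

# 3 hosts x 2 devices = 6 OSDs, replica size 3
print(suggested_pg_num(6, 3))  # → 256
```

Note the rule of thumb is a cluster-wide total shared across all pools, so a per-pool default of 64 as in the example is reasonable when several pools (volumes, images, vms) will be created; on recent Ceph releases the pg_autoscaler will also adjust PG counts for you.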
Step 3: Enable Ceph for OpenStack Services
Edit /etc/openstack_deploy/user_variables.yml:
```yaml
# Glance
glance_default_store: rbd
glance_rbd_store_pool: images

# Cinder
cinder_backends:
  rbd:
    volume_driver: cinder.volume.drivers.rbd.RBDDriver
    rbd_pool: volumes
    rbd_ceph_conf: /etc/ceph/ceph.conf
    rbd_user: cinder
    volume_backend_name: rbd
    report_discard_supported: true

# Nova
nova_libvirt_images_rbd_pool: vms
```
Step 4: Run the Ceph Playbooks
OSA includes a dedicated Ceph deployment playbook:
```bash
cd /opt/openstack-ansible

# Deploy Ceph cluster
openstack-ansible playbooks/ceph-install.yml
```
This playbook:
- Installs Ceph packages on all Ceph hosts
- Bootstraps monitors and creates the initial quorum
- Prepares and activates OSDs on the specified devices
- Creates OpenStack pools (volumes, images, vms)
- Generates keyrings for Cinder, Glance, and Nova
- Distributes ceph.conf and keyrings to OpenStack nodes
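After the run, each OpenStack node that talks to Ceph should have the distributed config and keyrings in place, which is easy to verify in a script. A minimal sketch; the file names follow Ceph's standard /etc/ceph/ceph.client.NAME.keyring convention, and the exact set depends on which services you enabled:

```python
import os

def missing_ceph_artifacts(ceph_dir="/etc/ceph"):
    """Return the expected Ceph client files that are absent from ceph_dir.

    The expected list is illustrative: adjust it to the keyrings your
    deployment actually generates.
    """
    expected = [
        "ceph.conf",
        "ceph.client.glance.keyring",
        "ceph.client.cinder.keyring",
    ]
    return [f for f in expected
            if not os.path.exists(os.path.join(ceph_dir, f))]
```

Run it on an OpenStack node after the playbook completes; an empty list means everything expected is present.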
Step 5: Deploy OpenStack
Now deploy OpenStack as normal:
```bash
openstack-ansible playbooks/setup-hosts.yml
openstack-ansible playbooks/setup-infrastructure.yml
openstack-ansible playbooks/setup-openstack.yml
```
The OpenStack services will automatically pick up the Ceph configuration.
Step 6: Verify the Deployment
Check Ceph health:
```bash
sudo ceph -s
sudo ceph osd tree
sudo ceph osd pool ls detail
```
Test OpenStack integration:
```bash
# Upload image to Glance (stored in Ceph)
openstack image create --disk-format raw --container-format bare \
  --file cirros.img cirros-test

# Create volume (stored in Ceph)
openstack volume create --size 10 test-vol

# Verify in Ceph
rbd -p images ls
rbd -p volumes ls
```
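For scripted or monitoring use, ceph -s --format json emits machine-readable status, with the overall health under the health.status key. A small sketch that checks it; the sample document below is abbreviated and illustrative:

```python
import json

def cluster_healthy(ceph_status_json):
    """Return True if `ceph -s --format json` output reports HEALTH_OK."""
    status = json.loads(ceph_status_json)
    return status.get("health", {}).get("status") == "HEALTH_OK"

# Abbreviated, illustrative sample of `ceph -s --format json` output.
sample = '{"health": {"status": "HEALTH_OK"}, "osdmap": {"num_osds": 6}}'
print(cluster_healthy(sample))  # → True
```

In practice you would feed it real output, e.g. via subprocess.run(["ceph", "-s", "--format", "json"], capture_output=True).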
Scaling the Ceph Cluster
To add a new OSD host, add it to openstack_user_config.yml and re-run:
```bash
openstack-ansible playbooks/ceph-install.yml --limit new-ceph-host
```
To add disks to an existing host, update the devices list and re-run the playbook.
Day 2 Operations
| Task | Command |
|---|---|
| Check health | ceph -s |
| Add OSD host | Add to config, re-run ceph-install.yml |
| Replace failed OSD | ceph osd out <id>, replace disk, re-run playbook |
| Upgrade Ceph | Update ceph_stable_release, re-run ceph-install.yml |
| Pool stats | ceph df |
Troubleshooting
| Issue | Fix |
|---|---|
| OSD not created | Verify disk is unused: lsblk, wipefs -a /dev/sdX |
| Mon quorum not forming | Check clock sync and network between Ceph hosts |
| Ceph install playbook fails | Run with -vvv for detailed output |
| Pool not created | Check user_variables_ceph.yml pool configuration |
Summary
OSA-managed Ceph provides a single-tool deployment experience for both OpenStack and Ceph. The ceph-install.yml playbook handles the full Ceph lifecycle, and all OpenStack services are automatically configured to use Ceph storage.