# Setting Up a Three-Node Ceph Cluster on Ubuntu 22.04

A three-node Ceph cluster on Ubuntu 22.04 provides fault-tolerant distributed storage with triple replication. This guide uses cephadm, Ceph's official orchestrator since the Octopus release, to deploy monitors, managers, and OSDs across three servers.
## Prerequisites
| Requirement | Details |
|---|---|
| OS | Ubuntu 22.04 LTS (minimal) on all 3 nodes |
| RAM | 4 GB minimum per node (8 GB recommended) |
| Disks | 1 OS disk + at least 1 dedicated OSD disk per node |
| Network | All nodes on the same L2 subnet with static IPs |
| Access | Root or sudo on all nodes, SSH between them |
Example hostnames and IPs:
| Host | IP |
|---|---|
| ceph01 | 10.0.0.11 |
| ceph02 | 10.0.0.12 |
| ceph03 | 10.0.0.13 |
## Step 1: Prepare All Nodes

Run these commands on every node to update packages, install dependencies, and enable time synchronization (Ceph monitors are sensitive to clock skew):

```shell
sudo apt update && sudo apt upgrade -y
sudo apt install -y chrony lvm2
sudo timedatectl set-ntp true
```

Set hostnames and populate /etc/hosts on each node:

```shell
sudo hostnamectl set-hostname ceph01   # adjust per node
cat <<EOF | sudo tee -a /etc/hosts
10.0.0.11 ceph01
10.0.0.12 ceph02
10.0.0.13 ceph03
EOF
```
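Before moving on, it is worth confirming that time sync is actually working on each node, since clock skew between monitors triggers health warnings later. A quick check (output fields vary by chrony version):

```shell
chronyc tracking      # shows the reference server, current offset, and skew
timedatectl status    # "System clock synchronized: yes" is what you want
```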
## Step 2: Bootstrap the First Node

Cephadm bootstraps the cluster from a single node and then expands it. On ceph01:

```shell
sudo apt install -y cephadm
sudo cephadm bootstrap \
    --mon-ip 10.0.0.11 \
    --cluster-network 10.0.0.0/24
```

This deploys the first Ceph monitor, manager, crash handler, and the Ceph Dashboard. Note the dashboard URL and admin credentials printed at the end.

Install the Ceph CLI tools:

```shell
sudo cephadm install ceph-common
```

Verify the single-node cluster:

```shell
sudo ceph -s
```
## Step 3: Add the Remaining Hosts

Copy the cluster's SSH public key to the other two nodes:

```shell
sudo ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph02
sudo ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph03
```

Register them with the orchestrator:

```shell
sudo ceph orch host add ceph02 10.0.0.12
sudo ceph orch host add ceph03 10.0.0.13
```

Confirm all three hosts appear:

```shell
sudo ceph orch host ls
```
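Optionally, you can also apply cephadm's built-in `_admin` label to the new hosts, which tells the orchestrator to distribute the admin keyring and ceph.conf to them so `ceph` commands work from any node, not just ceph01:

```shell
sudo ceph orch host label add ceph02 _admin
sudo ceph orch host label add ceph03 _admin
```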
## Step 4: Deploy Monitors and Managers

Place monitors and managers on all three hosts for high availability:

```shell
sudo ceph orch apply mon --placement="ceph01,ceph02,ceph03"
sudo ceph orch apply mgr --placement="ceph01,ceph02,ceph03"
```

Three monitors form a Paxos quorum that tolerates the loss of any one node. Three managers provide active-standby redundancy.
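Once the placements converge (this can take a minute or two), you can confirm that all three monitors have joined the quorum and that one manager is active with two standbys:

```shell
sudo ceph quorum_status --format json-pretty | grep -A4 quorum_names
sudo ceph mgr stat
```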
## Step 5: Add OSDs

Cephadm can automatically claim any unused, unmounted block device:

```shell
sudo ceph orch apply osd --all-available-devices
```

Or add specific devices manually:

```shell
sudo ceph orch daemon add osd ceph01:/dev/sdb
sudo ceph orch daemon add osd ceph02:/dev/sdb
sudo ceph orch daemon add osd ceph03:/dev/sdb
```

Verify OSDs are up:

```shell
sudo ceph osd tree
```
Expected output shows three OSDs distributed across the three hosts.
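If an expected OSD never appears, list the devices the orchestrator can see; a disk must be unused (no partitions, filesystem, or LVM state) before cephadm will claim it. The zap command below is destructive, so double-check the device name first:

```shell
sudo ceph orch device ls                        # AVAILABLE column shows claimable disks
sudo ceph orch device zap ceph01 /dev/sdb --force  # CAUTION: wipes the disk's signatures
```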
## Step 6: Create a Storage Pool

Create a replicated pool with size 3 so every object has one copy on each node:

```shell
sudo ceph osd pool create rbd-pool 64 64 replicated
sudo ceph osd pool set rbd-pool size 3
sudo ceph osd pool set rbd-pool min_size 2
sudo ceph osd pool application enable rbd-pool rbd
```
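The PG count of 64 is a conservative starting point. The usual rule of thumb targets roughly 100 PGs per OSD divided by the replica count, rounded to a power of two, which for three OSDs works out to 128; the exact value matters little on recent releases because the pg_autoscaler, enabled by default, grows or shrinks it as needed. A small sketch of that rule of thumb (the function name is my own, not a Ceph API):

```python
import math

def recommended_pg_count(num_osds: int, replicas: int,
                         target_pgs_per_osd: int = 100) -> int:
    """Rule-of-thumb PG count for a single pool:
    (osds * target_pgs_per_osd) / replicas, rounded to the nearest power of two."""
    raw = num_osds * target_pgs_per_osd / replicas
    return 2 ** max(0, round(math.log2(raw)))

print(recommended_pg_count(3, 3))   # three OSDs, 3x replication -> 128
```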
## Step 7: Verify Cluster Health

```shell
sudo ceph -s
```

You should see `HEALTH_OK` with 3 mons, 3 mgrs, and 3 OSDs up and in.
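To confirm the pool actually serves I/O, you can create a small RBD image and map it through the kernel client (the image name `smoke-test` is an arbitrary example; older kernels may require disabling some image features before mapping):

```shell
sudo rbd create rbd-pool/smoke-test --size 1G
sudo rbd info rbd-pool/smoke-test
sudo rbd map rbd-pool/smoke-test     # typically appears as /dev/rbd0
# ...write to the device or put a filesystem on it, then clean up:
sudo rbd unmap rbd-pool/smoke-test
sudo rbd rm rbd-pool/smoke-test
```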
## Step 8: Enable the Dashboard

The dashboard was enabled during bootstrap; access it at https://ceph01:8443. Recent Ceph releases require the new admin password to be read from a file rather than passed on the command line:

```shell
echo -n 'NewSecurePass123' > /tmp/dashboard_pass
sudo ceph dashboard ac-user-set-password admin -i /tmp/dashboard_pass
rm /tmp/dashboard_pass
```
## Step 9 (Optional): Add an S3 Gateway (RGW)

To expose S3-compatible object storage, deploy a RADOS Gateway:

```shell
sudo ceph orch apply rgw my-rgw --placement="ceph01" --port=8080
```
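Once the gateway daemon is running, create an S3 user; `radosgw-admin` prints the access and secret keys to plug into any S3 client (the user ID and display name below are arbitrary examples):

```shell
sudo radosgw-admin user create --uid=demo --display-name="Demo User"
curl http://ceph01:8080   # should return an S3-style XML response if RGW is up
```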
## Troubleshooting

| Symptom | Solution |
|---|---|
| `HEALTH_WARN` clock skew | Ensure chrony is synced on all nodes |
| OSD not appearing | Run `ceph orch device ls` to check disk availability |
| Dashboard unreachable | Verify the firewall permits TCP 8443 |
| Mon election flapping | Check network latency between nodes |
## Summary
With cephadm, a production-grade three-node Ceph cluster deploys in under 30 minutes. The cluster provides replicated block, object, and filesystem storage suitable for OpenStack integration or standalone use.