CephBasics
Check Ceph cluster status
The command:
ceph -s
shows the current status of the Ceph cluster:
[root@cephq1 ~]# ceph -s
    cluster 82766e04-585b-49a6-a0ac-c13d9ffd0a7d
     health HEALTH_OK
     monmap e1: 3 mons at {cephq2=192.168.41.2:6789/0,cephq3=192.168.41.3:6789/0,cephq4=192.168.41.4:6789/0}
            election epoch 8, quorum 0,1,2 cephq2,cephq3,cephq4
     osdmap e78: 6 osds: 6 up, 6 in
      pgmap v1293: 192 pgs, 2 pools, 0 bytes data, 0 objects
            27920 kB used, 4021 GB / 4106 GB avail
                 192 active+clean
The following command displays a real-time summary of the cluster status and major events:
ceph -w
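For quick one-off checks, or for use in a monitoring script, a few related commands can complement ceph -s and ceph -w. This is only a minimal sketch:
# one-line health summary (HEALTH_OK / HEALTH_WARN / HEALTH_ERR)
ceph health
# same summary, with details about any current problem
ceph health detail
# refresh a full status snapshot every 5 seconds instead of streaming events
watch -n 5 ceph -s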
Remove OSDs
When you want to remove a machine that hosts OSDs (for example, when decommissioning old equipment that is out of warranty), there is a manual procedure to follow in order to do things cleanly and to avoid problems:
- Identify the OSDs hosted by the machine with the command:
ceph osd tree
- Take the OSDs out of the cluster:
Before you remove an OSD, it is usually up and in. You need to take it out of the cluster so that Ceph can begin rebalancing and copying its data to other OSDs:
ceph osd out {osd-num}
Repeat this operation for every OSD hosted by the machine (a combined sketch of this step and the monitoring step follows the list).
- Monitor the data migration:
Once you have taken the OSDs out of the cluster, Ceph begins rebalancing by migrating placement groups off the OSDs you have marked out. Follow this process with the command:
ceph -w
You should see the placement group states change from active+clean to active with some degraded objects, and finally back to active+clean when the migration completes.
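The out-and-wait part of this procedure can also be scripted. The sketch below is only an illustration: it assumes the machine being decommissioned hosts osd.3, osd.4 and osd.5 (hypothetical IDs, to be replaced by the ones reported by ceph osd tree) and that the cluster was HEALTH_OK before the operation, so that returning to HEALTH_OK means the rebalancing is finished:
# hypothetical OSD IDs hosted by the machine being decommissioned
OSDS="3 4 5"
# take each OSD out of the cluster so that Ceph starts rebalancing
for id in $OSDS; do
    ceph osd out $id
done
# poll the cluster until all placement groups are back to active+clean
until ceph health | grep -q HEALTH_OK; do
    ceph pg stat
    sleep 60
done
echo "Data migration completed"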