Before you start
Very important remark if you reinstall a Ceph cluster: make sure to power off the nodes of the previous cluster deployment; otherwise the old mon nodes will spawn a mon on the first machine in order to reach quorum, and the bootstrap will fail! (The other nodes are not aware that you are reinstalling a new cluster.)
Another important remark: all the Ceph commands below must be typed in the ceph shell, which you launch with:
# cephadm shell
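As a side note (standard cephadm usage, not specific to our setup), you can also pass a single command through the shell from the host without entering it interactively, for example:
# cephadm shell -- ceph -s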
Yet another remark: when you run the command 'hostname' on the machines of our cluster, you get the FQDN of the machine, i.e. the name followed by the domain name. Ceph always relies on this, meaning that whenever you have to specify the name of a machine in a Ceph command, you must use the FQDN, not the short name. That is also why we add the flag --allow-fqdn-hostname to the ceph bootstrap command below.
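A quick way to check this on any node (the output shown is only an illustration based on the host names used in this page):
# hostname
ceph1.wn.iihe.ac.be
# hostname -s
ceph1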
Bootstrap a new cluster
- Reinstall ceph1, which will be the ceph adm machine, i.e. the machine used to bootstrap the ceph cluster and deploy all the other ceph nodes. Its object template must contain the line:
include 'config/ceph/adm';
- Once the machine is Quattor-installed, log in to it and do the following:
# mkdir -p /etc/ceph
# cephadm bootstrap --mon-ip *<ceph1-ip>* --allow-fqdn-hostname
Take note of the "admin" password that is generated; you'll need it to connect to the Ceph Dashboard.
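It is also worth checking at this point that the bootstrap left you with a working single-host cluster; from the cephadm shell:
# ceph -s
# ceph orch ps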
Add other hosts to the cluster
- The other nodes of the Ceph cluster must be installed with Quattor with the following include in their object template:
include 'config/ceph/nodes';
- Copy the ssh pubkey of the ceph adm machine into the authorized_keys of the other hosts: do it with Quattor (see config/ceph/nodes). A manual alternative is sketched after the example below.
- Tell Ceph that new nodes are part of the cluster:
# ceph orch host add <new_host>
For example:
# ceph orch host add ceph2.wn.iihe.ac.be
# ceph orch host add ceph3.wn.iihe.ac.be
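If you ever need to distribute the key by hand instead of through Quattor, the usual cephadm way is a sketch like the following, run from the ceph adm machine itself (the bootstrap writes the cluster pubkey to /etc/ceph/ceph.pub):
# ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph2.wn.iihe.ac.be
# ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph3.wn.iihe.ac.be
Afterwards, check that the hosts are known to the orchestrator:
# ceph orch host ls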
Deploy mons
Change the default number of mons (5) to 3:
# ceph orch apply mon 3
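To see what the orchestrator now targets for each service (including the mon count you just set):
# ceph orch ls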
Add a "mon" label to machine ceph1.wn to ceph3.wn:
# ceph orch host label add ceph1.wn.iihe.ac.be mon
# ceph orch host label add ceph2.wn.iihe.ac.be mon
# ceph orch host label add ceph3.wn.iihe.ac.be mon
and check:
# ceph orch host ls
You can now spawn the mon daemons on the machines labelled "mon":
# ceph orch apply mon label:mon
Remark: You may have noticed that mon daemons were already deployed on ceph2.wn and ceph3.wn before you executed the previous command. If some mon daemons were running on machines without the "mon" label, the previous command moves these daemons to the labelled machines. This way, you have a means to control the placement of the daemons.
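To verify where the mon daemons actually ended up, the following should do (the --daemon_type filter is expected to work in Octopus, and plain 'ceph orch ps' works in any case):
# ceph orch ps --daemon_type mon
# ceph mon stat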
Deploy osds
First, add the storage machines to the ceph cluster:
# ceph orch host add <new_host>
Then you can flag these machines with the "osd" label:
# ceph orch host label add <storage_host> osd
You can also print an inventory of the available disks:
# ceph orch device ls
What Ceph considers an "available disk":
- The device must have no partitions.
- The device must not have any LVM state.
- The device must not be mounted.
- The device must not contain a file system.
- The device must not contain a Ceph BlueStore OSD.
- The device must be larger than 5 GB.
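If a disk you expect to be usable does not show up as available, it can help to restrict the inventory to one host and force a rescan (a suggestion; the host name below is just the example host used earlier, and --refresh is expected to be supported by the Octopus CLI):
# ceph orch device ls ceph2.wn.iihe.ac.be --refresh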
When the time comes to create the osds (one osd per available drive), you have several possibilities. You can let Ceph create osds for all available drives like this:
# ceph orch apply osd --all-available-devices
This is useful when you have servers containing a lot of drives.
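If you want to combine automatic device selection with the label-based placement used above, the orchestrator also accepts an OSD service specification. A minimal sketch, assuming the "osd" label from the previous step and a file name of our own choosing (osd_spec.yml); the spec tells Ceph to create an osd on every available device of every host labelled "osd":
# cat <<'EOF' > osd_spec.yml
service_type: osd
service_id: osd_on_labelled_hosts
placement:
  label: osd
data_devices:
  all: true
EOF
# ceph orch apply osd -i osd_spec.yml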
Or you may want to keep control by specifying yourself the hosts and the drives/devices for which you want to create the osds:
# ceph orch daemon add osd *<host>*:*<device-path>*
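For example (a purely illustrative host and device path, to be adapted to your hardware):
# ceph orch daemon add osd ceph2.wn.iihe.ac.be:/dev/sdb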
However, this command might fail if the drive still carries a filesystem or GPT signature. To fix this, try the following on the storage node:
# wipefs -a <device>
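As an alternative to logging in to the storage node, the orchestrator can zap the device for you (same illustrative host and device path as above):
# ceph orch device zap ceph2.wn.iihe.ac.be /dev/sdb --force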