InstallCephWithQuattor

Overview

The component ncm-ceph is based on ceph-deploy; see the ncm-ceph component documentation for details.

Prior to the installation, you must assign roles to the machines of the future cluster. The possible roles are MON, MDS, OSD, and DEPLOY. The machine with the DEPLOY role is the one from which the deployment will be done, and therefore the one on which the component ncm-ceph will be executed. You need at least one MON machine, but to avoid bottlenecks when the number of OSDs increases, it is better to have 3 MON machines (always use an odd number of MON machines, so that a quorum can be kept if one MON machine fails). The MDS role is only needed if you want to use CephFS. The OSD daemons run on the machines that contain the storage disks (one OSD daemon per disk). For tests, you can put the MON daemons on OSD machines, but on a large production cluster it is better to run the MON daemons on dedicated machines.

The layout and the parameters of the cluster are described in the template site/ceph.
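
To give an idea of what such a layout could look like, here is a minimal, purely illustrative sketch in Pan. The host names are invented and the paths under /software/components/ceph (clusters, config, monitors, osdhosts, deployhosts) are assumptions loosely based on the ncm-ceph documentation; the authoritative structure is the one in site/ceph and in the component's schema.

# Illustrative sketch only, not the actual site/ceph template.
# Host names are invented and the paths are assumptions; check the
# ncm-ceph documentation for the real schema.
prefix '/software/components/ceph/clusters/ceph';

'config/fsid' = 'REPLACE-WITH-CLUSTER-UUID';   # unique id of the cluster

# three MON machines, so the quorum survives the loss of one of them
'monitors/ceph001/fqdn' = 'ceph001.example.org';
'monitors/ceph002/fqdn' = 'ceph002.example.org';
'monitors/ceph003/fqdn' = 'ceph003.example.org';

# OSD machines holding the storage disks (one OSD daemon per disk)
'osdhosts/ceph001/fqdn' = 'ceph001.example.org';
'osdhosts/ceph002/fqdn' = 'ceph002.example.org';

# the DEPLOY machine, from which ncm-ceph drives ceph-deploy
'deployhosts/first' = 'ceph001.example.org';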

Before deploying Ceph

The deployment of the Ceph cluster is triggered by this line in the profile of the DEPLOY machine:

include 'site/ceph';
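
For context, here is a minimal sketch of what the profile of the DEPLOY machine could look like; the object template name and the machine-type include are hypothetical, only the include of site/ceph comes from this page.

object template ceph001.example.org;   # hypothetical profile name

# usual machine configuration (hardware, OS, site defaults, ...)
include 'machine-types/ceph';          # hypothetical machine-type template

# including site/ceph on the DEPLOY machine triggers the Ceph deployment
include 'site/ceph';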

But before the deployment, your OSD machines must be prepared: the OSD disks must be partitioned and formatted correctly. On an OSD machine, at least one partition or one drive must be prepared to store the objects. To achieve this, you need to define an appropriate disk layout. You will find an example of such a disk layout in the template site/filesystems/ceph_fs_test. Below are the interesting parts of this template:

...
variable CEPH_FS = 'btrfs';

variable DISK_VOLUME_PARAMS = {
	t = dict();
...
	t['logvda'] = dict(
		'size',			-1,
		'mountpoint',	'/var/lib/ceph/log/vda4',
		'fstype',		CEPH_FS,
		'type',			'partition',
		'device',		'vda4',
		'preserve',		true,
		'mkfsopts',		CEPH_DISK_OPTIONS[CEPH_FS]['mkfsopts'],
		'mountopts',	CEPH_DISK_OPTIONS[CEPH_FS]['mountopts'],
	);
	t['osd'] = dict(
		'size',			-1,
		'mountpoint',	'/var/lib/ceph/osd/vdb',
		'fstype',		CEPH_FS,
		'type',			'partition',
		'device',		'vdb1',
		'preserve',		true,
		'mkfsopts',		CEPH_DISK_OPTIONS[CEPH_FS]['mkfsopts'],
		'mountopts',	CEPH_DISK_OPTIONS[CEPH_FS]['mountopts'],
	);
	t;
};

...

From this extract, you can see that our OSD machine has two drives (vda and vdb). We chose the btrfs filesystem to store the objects (you should use ext4 or xfs for a production cluster). The last partition of vda (vda4) will contain the journal of the filesystem, and the vdb disk will contain only one partition (vdb1) to store the data.
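
The extract also refers to a variable CEPH_DISK_OPTIONS that is not shown above. Purely to illustrate its expected shape, it would be a dict keyed by filesystem type holding the mkfs and mount options for each; the values below are plausible examples, not the actual ones from the site templates.

# Illustrative only: the shape of the CEPH_DISK_OPTIONS variable used above.
# The real values are defined in the site templates; these are just examples.
variable CEPH_DISK_OPTIONS = dict(
	'btrfs', dict(
		'mkfsopts', '-f',
		'mountopts', 'rw,noatime',
	),
	'xfs', dict(
		'mkfsopts', '-i size=2048',
		'mountopts', 'rw,noatime,inode64',
	),
	'ext4', dict(
		'mkfsopts', '',
		'mountopts', 'rw,noatime,user_xattr',
	),
);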