Fully Manual Ceph Deployment: Deploying the Monitors

The official Ceph documentation describes how to deploy a cluster with the ceph-deploy tool, but the manual deployment procedure is not covered in much detail. After spending some time working it out, here are the steps for deploying a simple Ceph cluster by hand, starting with the Monitors.

The Monitor is the core of Ceph and stores all of the cluster's metadata. The cluster deployed here has 3 Monitors; for simplicity I put all three on the same machine, but in a real deployment they should run on different machines for high availability.

First, install the required packages. I used a CentOS 7 machine for this deployment, with the official Ceph repository and the jewel release.

[root@test ~]# rpm -Uvh https://download.ceph.com/rpm-jewel/el7/noarch/ceph-release-1-0.el7.noarch.rpm
[root@test ~]# yum install -y ceph
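
Once the installation finishes, a quick sanity check is to verify the installed version; with the jewel repository it should report a 10.2.x build:

[root@test ~]# ceph --version    # jewel should report a 10.2.x version string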

Next, prepare the configuration file. The first step is to generate a UUID, which serves as the unique identifier (fsid) of the whole Ceph cluster.

[root@test ~]# uuidgen 
def5bc47-3d8a-4ca0-9cd6-77243339ab0f

Then give the three Monitors names (here mon0, mon1, and mon2) and write the corresponding configuration file:

[root@test ~]# cat /etc/ceph/ceph.conf

Global configuration

[global]
fsid = def5bc47-3d8a-4ca0-9cd6-77243339ab0f # cluster ID
auth cluster required = cephx # enable cephx authentication
auth service required = cephx
auth client required = cephx
public network = 10.67.0.0/16 # network segment the cluster runs on
osd journal size = 1024
osd pool default size = 2
osd pool default min size = 1
osd pool default pg num = 512
osd pool default pgp num = 512
log file = /data0/logs/ceph/$cluster-$name.log
run_dir = /data0/ceph

Configuration shared by all Monitors

[mon]
mon initial members = mon0,mon1,mon2
mon data = /data0/ceph/mon/$name
log file = /data0/logs/ceph/$name.log

Per-Monitor configuration

[mon.mon0]
name = mon0
mon addr = 10.67.15.100:6789
mon host = 10.67.15.100:6789
log dir = /data0/logs/ceph
[mon.mon1]
name = mon1
mon addr = 10.67.15.100:6790
mon host = 10.67.15.100:6790
log dir = /data0/logs/ceph
[mon.mon2]
name = mon2
mon addr = 10.67.15.100:6791
mon host = 10.67.15.100:6791
log dir = /data0/logs/ceph

Client keyring configuration

[client.admin]
keyring = /etc/ceph/ceph.client.admin.keyring

Since the 3 Monitors share a single machine, they are assigned three different ports: 6789, 6790, and 6791.
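
If firewalld is enabled on the machine (the CentOS 7 default), these ports also need to be reachable; a minimal sketch, assuming the interface sits in the default public zone:

[root@test ~]# firewall-cmd --zone=public --add-port=6789-6791/tcp --permanent    # assumes firewalld with the default public zone
[root@test ~]# firewall-cmd --reload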

With the configuration file in place, generate the keyrings:

[root@test ~]# ceph-authtool --create-keyring /etc/ceph/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *' # generate the Monitor keyring
creating /etc/ceph/ceph.mon.keyring
[root@test ~]# ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --set-uid=0 --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow' # generate the admin keyring
creating /etc/ceph/ceph.client.admin.keyring
[root@test ~]# ceph-authtool /etc/ceph/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring    # import the admin keyring into the Monitor keyring
importing contents of /etc/ceph/ceph.client.admin.keyring into /etc/ceph/ceph.mon.keyring
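
To double-check the import, the combined keyring can be listed; it should now contain both the mon. and client.admin entries along with their caps:

[root@test ~]# ceph-authtool --list /etc/ceph/ceph.mon.keyring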

Generate the Monitor map:

[root@test ~]# monmaptool --create --add mon0 10.67.15.100:6789 --fsid def5bc47-3d8a-4ca0-9cd6-77243339ab0f /etc/ceph/monmap
monmaptool: monmap file /etc/ceph/monmap
monmaptool: set fsid to def5bc47-3d8a-4ca0-9cd6-77243339ab0f
monmaptool: writing epoch 0 to /etc/ceph/monmap (1 monitors)
[root@test ~]# monmaptool --add mon1 10.67.15.100:6790 --fsid def5bc47-3d8a-4ca0-9cd6-77243339ab0f /etc/ceph/monmap
monmaptool: monmap file /etc/ceph/monmap
monmaptool: set fsid to def5bc47-3d8a-4ca0-9cd6-77243339ab0f
monmaptool: writing epoch 0 to /etc/ceph/monmap (2 monitors)
[root@test ~]# monmaptool --add mon2 10.67.15.100:6791 --fsid def5bc47-3d8a-4ca0-9cd6-77243339ab0f /etc/ceph/monmap
monmaptool: monmap file /etc/ceph/monmap
monmaptool: set fsid to def5bc47-3d8a-4ca0-9cd6-77243339ab0f
monmaptool: writing epoch 0 to /etc/ceph/monmap (3 monitors)
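
Before moving on, you can print the generated monmap to verify that all three Monitors and the fsid were recorded correctly:

[root@test ~]# monmaptool --print /etc/ceph/monmap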

Initialize the Monitor filesystems:

[root@test ~]# ceph-mon --mkfs -i mon0 --monmap /etc/ceph/monmap --keyring /etc/ceph/ceph.mon.keyring 
ceph-mon: set fsid to def5bc47-3d8a-4ca0-9cd6-77243339ab0f
ceph-mon: created monfs at /data0/ceph/mon/mon.mon0 for mon.mon0
[root@test ~]# ceph-mon --mkfs -i mon1 --monmap /etc/ceph/monmap --keyring /etc/ceph/ceph.mon.keyring
ceph-mon: set fsid to def5bc47-3d8a-4ca0-9cd6-77243339ab0f
ceph-mon: created monfs at /data0/ceph/mon/mon.mon1 for mon.mon1
[root@test ~]# ceph-mon --mkfs -i mon2 --monmap /etc/ceph/monmap --keyring /etc/ceph/ceph.mon.keyring
ceph-mon: set fsid to def5bc47-3d8a-4ca0-9cd6-77243339ab0f
ceph-mon: created monfs at /data0/ceph/mon/mon.mon2 for mon.mon2
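
Note that ceph-mon --mkfs populates one data directory per Monitor under the mon data path from the configuration (here /data0/ceph/mon/mon.mon0 through mon.mon2, as shown in the output above). Depending on the version, the parent directory and the log directory may need to exist beforehand; if the command fails complaining about a missing directory, create them first, for example:

[root@test ~]# mkdir -p /data0/ceph/mon /data0/logs/ceph    # paths taken from the configuration above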

Start the Monitors:

[root@test ~]# ceph-mon --id mon0
[root@test ~]# ceph-mon --id mon1
[root@test ~]# ceph-mon --id mon2
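
To confirm that all three daemons actually came up and are listening on their ports, a quick check (ss is available on CentOS 7 by default):

[root@test ~]# ps aux | grep ceph-mon | grep -v grep
[root@test ~]# ss -lntp | grep ceph-mon    # should show listeners on 6789, 6790 and 6791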

Once they are running, the Monitors automatically elect a leader, and the cluster state can now be inspected with ceph -s and ceph osd lspools:

[root@test ~]# ceph -s
    cluster def5bc47-3d8a-4ca0-9cd6-77243339ab0f
     health HEALTH_ERR
            no osds
     monmap e1: 3 mons at {mon0=10.67.21.37:6789/0,mon1=10.67.21.37:6790/0,mon2=10.67.21.37:6791/0}
            election epoch 4, quorum 0,1,2 mon0,mon1,mon2
     osdmap e1: 0 osds: 0 up, 0 in
            flags sortbitwise,require_jewel_osds
      pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                  64 creating
[root@test ~]# ceph osd lspools
0 rbd,

The ceph -s output reports HEALTH_ERR because no OSDs have been added yet, so the cluster is not usable at this point.
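
The leader election mentioned above can also be inspected directly; quorum_status reports the current quorum members and which Monitor was elected leader:

[root@test ~]# ceph quorum_status --format json-pretty    # quorum_leader_name shows the elected Monitor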

With that, the Ceph cluster's Monitors are fully deployed.