A. First, understand Ceph's public network and cluster network
# Changing a Ceph cluster's IPs is difficult mainly on the public network side (the cluster network can simply be changed in the ceph.conf file and the daemons restarted)
public network (the public network is mandatory)
Role: client access; if there is no cluster network, it also carries data replication between OSDs, rebalancing, recovery, and heartbeat checks between OSDs
cluster network (the cluster network is optional; it exists to improve performance)
Role: data replication between OSDs, rebalancing, recovery, and heartbeat checks between OSDs
The Ceph documentation says its daemons bind dynamically, so changing the network configuration does not require restarting the whole cluster right away. (I have not fully understood that sentence; we can come back to it later, it has little impact here.)
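To see which networks a running cluster is actually using before changing anything, the standard queries below help (a sketch; the output will of course differ per cluster):

```shell
# networks as configured in ceph.conf on this cluster
grep -E 'public_network|cluster_network|mon_host' /etc/ceph/ceph.conf

# addresses the monitors are actually registered under (the monmap)
ceph mon dump

# addresses each OSD bound to (public and cluster address per OSD)
ceph osd dump | grep '^osd\.'
```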
B. Outline of the procedure
1. While the cluster is running normally, export the mon map;
2. Edit the mon map;
3. Edit the ceph.conf configuration file;
4. Shut down the Ceph cluster (stop the Ceph daemons);
5. Change the servers' IP addresses, the matching entries in /etc/hosts, and so on;
6. Inject the edited mon map;
7. Restart the cluster (start the mon, mgr, and osd daemons)
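The seven steps can be condensed into a checklist. The concrete commands are developed in sections C1-C8 below, so this is only an overview, using this article's node names and addresses:

```shell
# 1. while the cluster is healthy: export the monmap
ceph mon getmap -o monmap.bin
# 2. rewrite the mon entries (one --rm/--addv pair per monitor, see C2/C3)
monmaptool --rm pve-ceph02 --rm pve-ceph03 --rm pve-ceph04 monmap.bin
monmaptool --addv pve-ceph02 [v2:10.15.11.109:3300,v1:10.15.11.109:6789] monmap.bin
monmaptool --addv pve-ceph03 [v2:10.15.11.78:3300,v1:10.15.11.78:6789] monmap.bin
monmaptool --addv pve-ceph04 [v2:10.15.11.137:3300,v1:10.15.11.137:6789] monmap.bin
# 3. edit /etc/ceph/ceph.conf on every node (see C4)
# 4. stop all ceph daemons on every node
systemctl stop ceph.target
# 5. change the host IPs and the /etc/hosts entries
# 6. inject the new monmap on every mon node (see C7)
ceph-mon -i pve-ceph02 --inject-monmap monmap.bin
# 7. start everything again
systemctl start ceph.target
```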
C1. While the cluster is healthy, export the mon map
1. Export the mon map
root@pve-ceph02:~# ceph mon getmap -o monmap.bin
2. Print the original mon map to check it
root@pve-ceph02:~# monmaptool --print monmap.bin
monmaptool: monmap file monmap.bin
epoch 7
fsid 21f31929-2fc5-4bc9-a0d5-060b5eb0a695
last_changed 2023-03-23T13:10:58.312468+0800
created 2022-12-22T09:08:06.437852+0800
min_mon_release 17 (quincy)
election_strategy: 1
0: [v2:10.99.99.2:3300/0,v1:10.99.99.2:6789/0] mon.pve-ceph02
1: [v2:10.99.99.3:3300/0,v1:10.99.99.3:6789/0] mon.pve-ceph03
2: [v2:10.99.99.4:3300/0,v1:10.99.99.4:6789/0] mon.pve-ceph04
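Before the map is edited in C2/C3, it is worth keeping an untouched copy (a small precaution, not part of the original procedure), so the exported map can still be re-injected if the rewrite goes wrong:

```shell
# keep a pristine copy of the exported monmap before editing it
cp monmap.bin monmap.bin.orig
```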
Appendix C1. If the IPs have already been changed and the cluster is already broken, how to export a mon map
# monmaptool can generate a monmap from the ceph.conf file (but the monitor names will be wrong)
root@pve-ceph03:~# monmaptool --create --generate -c /etc/ceph/ceph.conf ./monmap.bin
monmaptool: monmap file ./monmap.bin
setting min_mon_release = octopus
monmaptool: set fsid to 21f31929-2fc5-4bc9-a0d5-060b5eb0a695
monmaptool: writing epoch 0 to ./monmap.bin (3 monitors)
root@pve-ceph03:~# monmaptool --print monmap.bin
monmaptool: monmap file monmap.bin
epoch 0
fsid 21f31929-2fc5-4bc9-a0d5-060b5eb0a695
last_changed 2023-03-30T09:44:12.515940+0800
created 2023-03-30T09:44:12.515940+0800
min_mon_release 15 (octopus)
election_strategy: 1
0: [v2:10.15.11.78:3300/0,v1:10.15.11.78:6789/0] mon.noname-b
1: [v2:10.15.11.109:3300/0,v1:10.15.11.109:6789/0] mon.noname-a
2: [v2:10.15.11.137:3300/0,v1:10.15.11.137:6789/0] mon.noname-c
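The generated entries carry placeholder names (noname-a and so on) and, here, even a wrong min_mon_release (octopus instead of the cluster's quincy), so prefer the export in C1 whenever the cluster is still up. If the generated map must be used, the names can at least be corrected with the same --rm/--addv commands used below, assuming you know which address belongs to which host:

```shell
# drop the placeholder entries and re-add them under the real mon names
monmaptool --rm noname-a --rm noname-b --rm noname-c monmap.bin
monmaptool --addv pve-ceph02 [v2:10.15.11.109:3300,v1:10.15.11.109:6789] monmap.bin
monmaptool --addv pve-ceph03 [v2:10.15.11.78:3300,v1:10.15.11.78:6789] monmap.bin
monmaptool --addv pve-ceph04 [v2:10.15.11.137:3300,v1:10.15.11.137:6789] monmap.bin
monmaptool --print monmap.bin
```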
C2. Remove the old mon entries (delete the old mon.id entries)
# root@pve-ceph02:~# monmaptool --rm pve-ceph02 monmap.bin removes entries one at a time; several can also be removed in a single command, as below
root@pve-ceph02:~# monmaptool --rm pve-ceph02 --rm pve-ceph03 --rm pve-ceph04 monmap.bin
monmaptool: monmap file monmap.bin
monmaptool: removing pve-ceph02
monmaptool: removing pve-ceph03
monmaptool: removing pve-ceph04
monmaptool: writing epoch 7 to monmap.bin (0 monitors)
C3. Add the new mon entries
# monmaptool --add pve-ceph02 10.15.11.109:6789 monmap.bin (note that --add only adds the v1 protocol; --addv adds both protocols, and the v2 port is 3300)
root@pve-ceph02:~# monmaptool --addv pve-ceph02 [v2:10.15.11.109:3300,v1:10.15.11.109:6789] monmap.bin
root@pve-ceph02:~# monmaptool --addv pve-ceph03 [v2:10.15.11.78:3300,v1:10.15.11.78:6789] monmap.bin
root@pve-ceph02:~# monmaptool --addv pve-ceph04 [v2:10.15.11.137:3300,v1:10.15.11.137:6789] monmap.bin
# After the changes, print the map again to verify they are correct
root@pve-ceph02:~# monmaptool --print monmap.bin
monmaptool: monmap file monmap.bin
epoch 7
fsid 21f31929-2fc5-4bc9-a0d5-060b5eb0a695
last_changed 2023-03-23T13:10:58.312468+0800
created 2022-12-22T09:08:06.437852+0800
min_mon_release 17 (quincy)
election_strategy: 1
0: [v2:10.15.11.109:3300/0,v1:10.15.11.109:6789/0] mon.pve-ceph02
1: [v2:10.15.11.78:3300/0,v1:10.15.11.78:6789/0] mon.pve-ceph03
2: [v2:10.15.11.137:3300/0,v1:10.15.11.137:6789/0] mon.pve-ceph04
C4. Update the public network IPs and the mon role IPs in ceph.conf (on every node)
# 1. Before the change it looks like this:
root@pve-ceph02:~# cat /etc/ceph/ceph.conf
[global]
auth_client_required = cephx
auth_cluster_required = cephx
auth_service_required = cephx
cluster_network = 10.99.99.1/24
fsid = 21f31929-2fc5-4bc9-a0d5-060b5eb0a695
mon_allow_pool_delete = true
mon_host = 10.99.99.2 10.99.99.3 10.99.99.4
ms_bind_ipv4 = true
ms_bind_ipv6 = false
osd_pool_default_min_size = 2
osd_pool_default_size = 3
public_network = 10.99.99.1/24
[client]
keyring = /etc/pve/priv/$cluster.$name.keyring
[mon.pve-ceph02]
public_addr = 10.99.99.2
[mon.pve-ceph03]
public_addr = 10.99.99.3
[mon.pve-ceph04]
public_addr = 10.99.99.4
# 2. After the change it looks like this:
root@pve-ceph02:~# cat /etc/ceph/ceph.conf
[global]
auth_client_required = cephx
auth_cluster_required = cephx
auth_service_required = cephx
cluster_network = 10.15.11.0/24
#cluster_network = 10.99.99.1/24   (old value; the cluster network is changed just like this, then restarting the OSDs is enough)
fsid = 21f31929-2fc5-4bc9-a0d5-060b5eb0a695
mon_allow_pool_delete = true
mon_host = 10.15.11.109 10.15.11.78 10.15.11.137
ms_bind_ipv4 = true
ms_bind_ipv6 = false
osd_pool_default_min_size = 2
osd_pool_default_size = 3
public_network = 10.15.11.0/24
[client]
keyring = /etc/pve/priv/$cluster.$name.keyring
[mon.pve-ceph02]
public_addr = 10.15.11.109
[mon.pve-ceph03]
public_addr = 10.15.11.78
[mon.pve-ceph04]
public_addr = 10.15.11.137
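Making the same substitutions by hand on every node is error-prone; they can also be scripted. A sketch using sed with this article's concrete address pairs, demonstrated on a sample file (for the real edit, back up /etc/ceph/ceph.conf and point sed at it instead):

```shell
# sample input standing in for /etc/ceph/ceph.conf
cat > ceph.conf.sample <<'EOF'
cluster_network = 10.99.99.1/24
mon_host = 10.99.99.2 10.99.99.3 10.99.99.4
public_network = 10.99.99.1/24
public_addr = 10.99.99.2
EOF

# one explicit substitution per address -- the host parts change too
# (.2 -> .109, .3 -> .78, .4 -> .137), so a plain prefix replacement would not work
sed -i \
    -e 's|10\.99\.99\.1/24|10.15.11.0/24|g' \
    -e 's|10\.99\.99\.2|10.15.11.109|g' \
    -e 's|10\.99\.99\.3|10.15.11.78|g' \
    -e 's|10\.99\.99\.4|10.15.11.137|g' \
    ceph.conf.sample

cat ceph.conf.sample
```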
C5. Stop the Ceph services on all nodes
# On a PVE deployment, simply stop ceph.target
systemctl stop ceph.target
# or stop the components one by one (the templated units need an instance name, e.g. ceph-mon@pve-ceph02.service; the per-component targets below cover all local instances)
systemctl stop ceph-osd.target
systemctl stop ceph-mgr.target
systemctl stop ceph-mon.target
systemctl stop ceph-crash.service
systemctl stop ceph-mds.target
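Before changing any addresses in the next step, it is worth confirming that nothing survived the shutdown; a small check (assumes pgrep is available):

```shell
# list any ceph daemons that are still alive
if pgrep -a 'ceph-' ; then
    echo "WARNING: ceph daemons are still running, stop them before continuing"
else
    echo "OK: no ceph daemons left"
fi
```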
C6. Change the IP addresses of the Ceph nodes (non-PVE deployments may also need to update /etc/hosts)
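On a PVE node this step typically means editing the network stanza and the hosts file; a sketch with this article's addresses (which interface or bridge carries the Ceph traffic is deployment-specific):

```shell
# /etc/network/interfaces -- update the address on the interface used by ceph, e.g.
#     address 10.99.99.2/24    ->    address 10.15.11.109/24
# /etc/hosts -- keep the hostname resolving to the new address, e.g.
#     10.99.99.2 pve-ceph02    ->    10.15.11.109 pve-ceph02

# apply the interface change (ifupdown2, the PVE default), or simply reboot
ifreload -a
```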
C7. Inject the monmap (on each original mon node, inject the new monmap)
# Copy monmap.bin to all mon nodes and run the injection (on a PVE cluster, copying it to /etc/pve/ makes it reachable from every node)
# The injection is only needed on the mon nodes; OSD-only nodes need nothing injected
On pve-ceph02 run: ceph-mon -i pve-ceph02 --inject-monmap monmap.bin
On pve-ceph03 run: ceph-mon -i pve-ceph03 --inject-monmap monmap.bin
On pve-ceph04 run: ceph-mon -i pve-ceph04 --inject-monmap monmap.bin
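One pitfall worth guarding against: the packaged mon daemon runs as the ceph user (the Debian/PVE default), and running the injection as root can leave root-owned files in the mon store that later prevent the mon from starting. A cautious variant (the /etc/pve/ path assumes the map was shared there as suggested above):

```shell
# inject as the ceph user so no root-owned files appear in the mon store
sudo -u ceph ceph-mon -i pve-ceph02 --inject-monmap /etc/pve/monmap.bin

# if the injection was already done as root, hand the store back to ceph
chown -R ceph:ceph /var/lib/ceph/mon/ceph-pve-ceph02
```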
C8. Start the Ceph services on all nodes, then check Ceph and the listening IPs/ports
# On a PVE deployment, simply start ceph.target
systemctl start ceph.target
ss -tnlp | grep ceph
# or start the components one by one (as with stopping, the templated units need an instance name; the per-component targets cover all local instances)
systemctl start ceph-osd.target
systemctl start ceph-mgr.target
systemctl start ceph-mon.target
systemctl start ceph-crash.service
systemctl start ceph-mds.target
# If a component did not start, clear its failed flag first and then start it, for example:
systemctl reset-failed ceph-mon@node1.service
systemctl reset-failed ceph-mds@node1.service
systemctl reset-failed ceph-osd@0.service
systemctl reset-failed ceph-osd@1.service
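Once everything is back up, a few standard checks confirm that the monitors formed a quorum on the new addresses and that the OSDs rejoined:

```shell
ceph -s                           # overall health: all mons in quorum, all osds up
ceph mon stat                     # mon addresses should now show 10.15.11.x
ceph osd tree                     # every osd should be "up"
ss -tnlp | grep -E ':3300|:6789'  # mons listening on the new addresses
```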