Deploying with cephadm requires the cluster to have continuous internet access
so that the container images for the various services can be downloaded (pre-importing the relevant images also works, but it is more hassle).
Reference: https://www.koenli.com/ef5921b8.html
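If the nodes have no direct internet access, the images can be pre-imported instead; a hedged sketch, assuming the default Quincy image quay.io/ceph/ceph:v17 (the monitoring images would need the same treatment):
# On a machine that does have internet access, pull and export the Ceph image
podman pull quay.io/ceph/ceph:v17
podman save quay.io/ceph/ceph:v17 -o ceph-v17.tar
# Copy ceph-v17.tar to every cluster node, then load it there
podman load -i ceph-v17.tar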
Configure hostnames, the hosts file, and passwordless root SSH login
# 1. Set the hostname (run the matching command on each node)
hostnamectl set-hostname c8s-ceph17-01; hostname c8s-ceph17-01
hostnamectl set-hostname c8s-ceph17-02; hostname c8s-ceph17-02
hostnamectl set-hostname c8s-ceph17-03; hostname c8s-ceph17-03
hostnamectl set-hostname c8s-ceph17-04; hostname c8s-ceph17-04
# 2. Configure hosts-file name resolution (on all nodes)
cat >> /etc/hosts << EOF
10.15.12.31 c8s-ceph17-01
10.15.12.37 c8s-ceph17-02
10.15.12.172 c8s-ceph17-03
10.15.12.128 c8s-ceph17-04
EOF
# 3. Passwordless root SSH login
Generate an SSH key on the admin node and distribute it to the other hosts to enable passwordless login (run this only on the admin node, i.e. c8s-ceph17-01)
[root@c8s-ceph17-01 ~]# ssh-keygen # press Enter at every prompt
[root@c8s-ceph17-01 ~]# ssh-copy-id root@c8s-ceph17-01
[root@c8s-ceph17-01 ~]# ssh-copy-id root@c8s-ceph17-02
[root@c8s-ceph17-01 ~]# ssh-copy-id root@c8s-ceph17-03
[root@c8s-ceph17-01 ~]# ssh-copy-id root@c8s-ceph17-04
Disable the firewall and SELinux, then reboot
systemctl stop firewalld; systemctl disable firewalld
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
reboot
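If you would rather not reboot right away, SELinux can also be switched to permissive for the running session; a sketch (the config change above still takes effect at the next boot):
# Put SELinux into permissive mode immediately (optional, if the reboot is deferred)
setenforce 0
getenforce # should now report Permissive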
Configure time synchronization
# Set the time zone to Asia/Shanghai
timedatectl set-timezone Asia/Shanghai
# Install chrony (back up its config file before editing)
dnf install chrony -y
# Start the service and enable it at boot
systemctl restart chronyd.service
systemctl enable chronyd.service
systemctl status chronyd.service
# Force an immediate manual sync
chronyc -a makestep
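The config backup mentioned above is not shown; a hedged sketch of backing it up and pointing chrony at a domestic NTP server (ntp.aliyun.com is an assumption, substitute your own time source):
# Back up the default config and point chrony at a domestic NTP server
cp /etc/chrony.conf /etc/chrony.conf.bak
sed -i 's/^pool .*/pool ntp.aliyun.com iburst/' /etc/chrony.conf
systemctl restart chronyd.service
chronyc sources -v # verify the time source is reachable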
Configure a domestic Ceph 17 (Quincy) repository and install cephadm (on all nodes)
# Configure the USTC yum mirror for Ceph Quincy (run on every node)
cat > /etc/yum.repos.d/ceph_quincy.repo << EOF
[Ceph_quincy_noarch]
name=Ceph quincy noarch
baseurl=https://mirrors.ustc.edu.cn/ceph/rpm-quincy/el8/noarch/
enabled=1
gpgcheck=0
[Ceph_quincy_x86]
name=Ceph quincy x86
baseurl=https://mirrors.ustc.edu.cn/ceph/rpm-quincy/el8/x86_64/
enabled=1
gpgcheck=0
EOF
# Install cephadm (it pulls in lvm2 and podman automatically; Docker is not needed)
dnf install cephadm -y # installs the corresponding dependencies: Python 3, Podman or Docker, chrony or NTP, LVM2
dnf install epel-release -y
dnf install ceph-common bash-completion -y
# Optional: switch podman to a domestic registry mirror
mv /etc/containers/registries.conf /etc/containers/registries.conf.bak
cat > /etc/containers/registries.conf <<EOF
unqualified-search-registries = ["docker.io"]
[[registry]]
prefix = "docker.io"
location = "anwk44qv.mirror.aliyuncs.com"
EOF
Bootstrap a minimal Ceph cluster on one of the storage nodes
[root@c8s-ceph17-01 ~]# cephadm bootstrap --mon-ip 10.15.12.31
Ceph Dashboard is now available at:
URL: https://10.15.12.31:8443/
User: admin
Password: g7zgd2beqm1
Enabling client.admin keyring and conf on hosts with "admin" label
Saving cluster configuration to /var/lib/ceph/6ceb15fc-1a4f-11ee-a6cc-faa2e0000004/config directory
Enabling autotune for osd_memory_target
You can access the Ceph CLI as following in case of multi-cluster or non-default config:
sudo /usr/sbin/cephadm shell --fsid 6ceb15fc-1a4f-11ee-a6cc-faa2e0000004 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
Or, if you are only running a single cluster on this host:
sudo /usr/sbin/cephadm shell
Please consider enabling telemetry to help improve Ceph:
ceph telemetry on
For more information see:
https://docs.ceph.com/docs/master/mgr/telemetry/
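After the bootstrap completes, it is worth a quick sanity check; a minimal sketch of verifying the new single-node cluster (ceph-common was installed earlier, so the ceph command can also be run directly on the host):
# Cluster health and the daemons cephadm has started so far
cephadm shell -- ceph -s
cephadm shell -- ceph orch ps
podman ps # the mon, mgr and monitoring containers should be running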
Install the cluster's public SSH key
# On c8s-ceph17-01, copy /etc/ceph/ceph.pub to the other nodes
Use ssh-copy-id to push the cluster's public SSH key to all other Ceph nodes
ssh-copy-id -f -i /etc/ceph/ceph.pub root@c8s-ceph17-02
ssh-copy-id -f -i /etc/ceph/ceph.pub root@c8s-ceph17-03
ssh-copy-id -f -i /etc/ceph/ceph.pub root@c8s-ceph17-04
Add new nodes to the cluster from the Ceph admin node
# Even the simplest form below, with nothing extra specified, will also deploy three podman services on the new host: ceph-exporter, crash, and node-exporter
cephadm shell ceph orch host add c8s-ceph17-02
cephadm shell ceph orch host add c8s-ceph17-03
# The host IP address can be given explicitly. If no IP is provided, the hostname is resolved via DNS immediately and that IP is used. One or more labels can also be included to tag the new host right away.
# cephadm shell ceph orch host add c8s-ceph17-04 10.15.12.128 --labels _admin
Deploy additional MONs (optional)
# By default cephadm deploys up to 5 MONs across the cluster
ceph orch apply mon 3 # manually set the MON count to 3
# Disable automatic MON deployment
ceph orch apply mon --unmanaged
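Besides adjusting only the count, the MONs can be pinned to specific hosts; a sketch using the hostnames of this cluster (an alternative to the label-based placement shown further below):
# Pin the MONs to three named hosts
ceph orch apply mon --placement="c8s-ceph17-01,c8s-ceph17-02,c8s-ceph17-03"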
# View the MON service and daemons
[root@c8s-ceph17-01 ~]# ceph orch ls mon
NAME PORTS RUNNING REFRESHED AGE PLACEMENT
mon 3/5 8m ago 25m <unmanaged>
[root@c8s-ceph17-01 ~]# ceph orch ps --daemon_type mon
NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
mon.c8s-ceph17-01 c8s-ceph17-01 running (4h) 3m ago 4h 127M 2048M 17.2.6 cc5b7b143311 ddba2982f6bb
mon.c8s-ceph17-02 c8s-ceph17-02 running (2h) 62s ago 2h 85.9M 2048M 17.2.6 cc5b7b143311 c063d704531a
mon.c8s-ceph17-03 c8s-ceph17-03 running (107m) 9m ago 107m 67.9M 2048M 17.2.6 cc5b7b143311 bd100289a2ba
Deploy additional MGRs (optional)
# View the MGR service and daemons
[root@c8s-ceph17-01 ~]# ceph orch ls mgr
NAME PORTS RUNNING REFRESHED AGE PLACEMENT
mgr 2/2 2m ago 4h count:2
[root@c8s-ceph17-01 ~]# ceph orch ps --daemon_type mgr
NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
mgr.c8s-ceph17-01.apecly c8s-ceph17-01 *:9283 running (4h) 3m ago 4h 521M - 17.2.6 cc5b7b143311 14ed2b1566ea
mgr.c8s-ceph17-02.fmsjaz c8s-ceph17-02 *:8443,9283 running (2h) 56s ago 2h 436M - 17.2.6 cc5b7b143311 b89a45a6627a
# Run the MGRs on specific nodes
ceph orch apply mgr c8s-ceph17-01,c8s-ceph17-02,c8s-ceph17-03
Service orchestration (adding and removing host labels)
# Labels can later be used to drive service placement
# Add the _admin label to c8s-ceph17-01 (the first node), c8s-ceph17-02, and c8s-ceph17-03
cephadm shell ceph orch host label add c8s-ceph17-01 _admin
cephadm shell ceph orch host label add c8s-ceph17-02 _admin
cephadm shell ceph orch host label add c8s-ceph17-03 _admin
# Add the mon label to c8s-ceph17-01, c8s-ceph17-02, and c8s-ceph17-03
cephadm shell ceph orch host label add c8s-ceph17-01 mon
cephadm shell ceph orch host label add c8s-ceph17-02 mon
cephadm shell ceph orch host label add c8s-ceph17-03 mon
# Add the mgr label to c8s-ceph17-01, c8s-ceph17-03, and c8s-ceph17-04
cephadm shell ceph orch host label add c8s-ceph17-01 mgr
cephadm shell ceph orch host label add c8s-ceph17-03 mgr
cephadm shell ceph orch host label add c8s-ceph17-04 mgr
# List hosts and view their labels
ceph orch host ls
# Place services according to labels
# Based on the mon label, orchestrate the MON role (run 3 MONs)
cephadm shell ceph orch apply mon --placement="3 label:mon"
# Place grafana on the c8s-ceph17-02 node
ceph orch apply grafana --placement="c8s-ceph17-02"
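The heading above also mentions removing labels; a sketch of removing one, using c8s-ceph17-04 and the mgr label as the example:
# Remove the mgr label from c8s-ceph17-04, then confirm
ceph orch host label rm c8s-ceph17-04 mgr
ceph orch host ls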
Deploy OSDs
# List devices
[root@c8s-ceph17-01 ~]# ceph orch device ls # add --wide for more detail
HOST PATH TYPE DEVICE ID SIZE AVAILABLE REFRESHED REJECT REASONS
c8s-ceph17-01 /dev/vdb hdd 32.2G Yes 12m ago
c8s-ceph17-01 /dev/vdc hdd 32.2G Yes 12m ago
c8s-ceph17-01 /dev/vdd hdd 32.2G Yes 12m ago
c8s-ceph17-02 /dev/vdb hdd 32.2G Yes 13m ago
c8s-ceph17-02 /dev/vdc hdd 32.2G Yes 13m ago
c8s-ceph17-02 /dev/vdd hdd 32.2G Yes 13m ago
c8s-ceph17-03 /dev/vdb hdd 32.2G Yes 30m ago
c8s-ceph17-03 /dev/vdc hdd 32.2G Yes 30m ago
c8s-ceph17-03 /dev/vdd hdd 32.2G Yes 30m ago
ceph-volume periodically scans every host in the cluster to determine which devices are present and whether they are usable as OSDs.
# A storage device is considered available if all of the following conditions are met:
1. The device has no partitions
2. The device has no LVM state
3. The device is not mounted
4. The device does not contain a filesystem
5. The device does not contain a Ceph BlueStore OSD
6. The device is larger than 5 GB
# To disable automatic OSD creation on available devices, use the unmanaged parameter
# ceph orch apply osd --all-available-devices # automatically consumes any available, unused storage device (do not do this)
ceph orch apply osd --all-available-devices --unmanaged=true
# Initialize the OSD devices
# Wipe the specified disk back to a raw, partition-free device
blkdiscard /dev/vdb
# cephadm shell ceph orch device zap c8s-ceph17-01 /dev/vdb
# zap clears LVM and partition metadata from the device
ceph orch device zap c8s-ceph17-01 /dev/vdb --force
ceph orch device zap c8s-ceph17-01 /dev/vdc --force
ceph orch device zap c8s-ceph17-01 /dev/vdd --force
ceph orch device zap c8s-ceph17-02 /dev/vdb --force
ceph orch device zap c8s-ceph17-02 /dev/vdc --force
ceph orch device zap c8s-ceph17-02 /dev/vdd --force
ceph orch device zap c8s-ceph17-03 /dev/vdb --force
ceph orch device zap c8s-ceph17-03 /dev/vdc --force
ceph orch device zap c8s-ceph17-03 /dev/vdd --force
# Add OSDs
cephadm shell ceph orch daemon add osd c8s-ceph17-01:/dev/vdb
cephadm shell ceph orch daemon add osd c8s-ceph17-01:/dev/vdc
cephadm shell ceph orch daemon add osd c8s-ceph17-01:/dev/vdd
cephadm shell ceph orch daemon add osd c8s-ceph17-02:/dev/vdb
cephadm shell ceph orch daemon add osd c8s-ceph17-02:/dev/vdc
cephadm shell ceph orch daemon add osd c8s-ceph17-02:/dev/vdd
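Instead of adding every device by hand as above, OSDs can also be described with a declarative service spec that cephadm applies to matching hosts and drives; a sketch, where the service_id, host_pattern and the rotational filter are assumptions for this cluster:
# Write a declarative OSD spec (adjust the filters to your hardware)
cat > osd_spec.yaml << EOF
service_type: osd
service_id: all_hdd_osds
placement:
  host_pattern: 'c8s-ceph17-0[1-3]'
spec:
  data_devices:
    rotational: 1
EOF
# Preview what would be created, then apply the spec
ceph orch apply -i osd_spec.yaml --dry-run
ceph orch apply -i osd_spec.yaml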
How to remove an OSD
# Removing an OSD from the cluster involves two steps:
1. Evacuate all PGs on the OSD
2. Remove the PG-free OSD from the cluster
ceph orch osd rm 0
cephadm shell ceph orch osd rm 0
# Note: removing an OSD this way does not delete its LVM volume; to make lsblk show no LVM metadata, zap the disk before recreating an OSD on it
# Use the ceph orch osd rm status command to check the status of the removal operation
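To clear the LVM volume during removal in one step, Quincy's orchestrator also accepts a --zap flag on the rm command; a hedged sketch combining removal, zap, and the status check:
# Remove OSD 0 and wipe its device (including the LVM volume), then watch the progress
ceph orch osd rm 0 --zap
ceph orch osd rm status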
Maintenance
# Check whether host c8s-ceph17-02 can be managed by cephadm (if it reports ok, cephadm can manage the services on c8s-ceph17-02)
ceph cephadm check-host c8s-ceph17-02
# Remove a broken host from management
ceph orch host rm c8s-ceph17-04
# Enable cluster configuration checks
# Cephadm periodically scans every host in the cluster to gather facts about the OS, disks, NICs, and so on. These facts can then be analyzed for consistency across hosts to identify any configuration anomalies.
ceph config set mgr mgr/cephadm/config_checks_enabled true
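Once the checks are enabled, they can be listed and their results inspected; a sketch using the cephadm config-check subcommands (verify against your Ceph version):
# List the available configuration checks and show their current results
ceph cephadm config-check ls
ceph cephadm config-check status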
How to remove a host
# This removes all of the host's daemons and then removes the host itself; note that it does not clean up the OSDs' LVM metadata
# The host is first given the special _no_schedule label so that no new daemons are scheduled or deployed on it, and existing daemons are drained away
cephadm shell ceph orch host drain c8s-ceph17-04
# cephadm shell ceph orch host drain c8s-ceph17-04 --force
# Check whether any daemons are still running on the host
ceph orch ps c8s-ceph17-04
# Once all daemons have been removed, remove the host from the cluster
ceph orch host rm c8s-ceph17-04
# Offline host removal (the host can be removed even if it is offline)
# This can potentially cause data loss. The command forcefully purges the OSDs from the cluster by calling osd purge-actual for each OSD. Any service specs that still reference this host should be updated manually.
ceph orch host rm <host> --offline --force
OSD memory autotuning
# By default, cephadm enables osd_memory_target_autotune at bootstrap, with mgr/cephadm/autotune_memory_target_ratio set to 0.7 of the host's total memory
# Lower the ratio to limit memory consumption when the cluster hardware is not dedicated to Ceph
ceph config set mgr mgr/cephadm/autotune_memory_target_ratio 0.2
# Enable memory autotuning
ceph config set osd osd_memory_target_autotune true
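The autotuned per-OSD target is roughly ratio × total host memory ÷ number of OSDs on that host; a hedged sketch of inspecting it and overriding it for a single OSD (osd.0 and the 4G value are just examples):
# Inspect the targets cephadm has calculated
ceph config dump | grep osd_memory_target
# Example: give osd.0 a fixed 4 GiB target instead of the autotuned value
ceph config set osd.0 osd_memory_target_autotune false
ceph config set osd.0 osd_memory_target 4G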