Deploying Kubernetes v1.24.13 on Debian 11


System and software environment

Debian 11
containerd.io 1.6.21
crictl v1.24.2
kubelet=1.24.13-00, kubeadm=1.24.13-00, kubectl=1.24.13-00

A. Host preparation

Note: no two nodes may share a hostname, MAC address, or product_uuid.
    cat /sys/class/dmi/id/product_uuid      # check the product_uuid (unless the machines are clones, these are normally unique)

# Set the hostname and add it to /etc/hosts (run on all nodes)
hostnamectl set-hostname cka-n1
echo "10.15.12.100 cka-n1" >> /etc/hosts
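The append above is not idempotent: re-running it duplicates the entry. A small guard (the helper name is ours) keeps /etc/hosts clean on repeat runs:

```shell
# hosts_has_entry NAME FILE: succeed when FILE already contains NAME as a
# whole word, so the append can be skipped on repeat runs.
hosts_has_entry() {
  grep -qw "$1" "$2"
}
# Usage: hosts_has_entry cka-n1 /etc/hosts || echo "10.15.12.100 cka-n1" >> /etc/hosts
```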

# Fix the time zone (do not skip this)
timedatectl set-timezone Asia/Shanghai

# Disable swap and the firewall (kubelet will not work with swap enabled; remember /etc/fstab as well)
swapoff -a ; systemctl disable ufw
sed -ri 's/.*swap.*/#&/' /etc/fstab
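To confirm the fstab edit took effect, a quick check (a sketch, with our own helper name) verifies that no active swap entries remain:

```shell
# verify_swap_off FSTAB: succeed only when FSTAB has no active (uncommented)
# swap entries; kubelet refuses to run while swap is configured.
verify_swap_off() {
  ! grep -v '^[[:space:]]*#' "$1" | grep -qw swap
}
# Usage: verify_swap_off /etc/fstab && echo "fstab clean"
```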

# Switch apt to the USTC mirror
cat > /etc/apt/sources.list << EOF
deb https://mirrors.ustc.edu.cn/debian/ bullseye main contrib non-free
deb https://mirrors.ustc.edu.cn/debian/ bullseye-updates main contrib non-free
deb https://mirrors.ustc.edu.cn/debian-security/ bullseye-security main contrib non-free
deb http://deb.debian.org/debian bullseye-backports main
EOF
apt update

B. Install the containerd.io container runtime (CR) and the CRI tooling on Debian 11

B1. What are the CR and the CRI?

CR : container runtime (Container Runtime)
CRI: Container Runtime Interface, the API kubelet uses to talk to the runtime

Available container runtimes:
    1. containerd: the Debian archive ships an older 1.4 series as the `containerd` package; `containerd.io` is the newer 1.6 build, published by Docker rather than the containerd project. (containerd started as a Docker subproject and is now an independent project.)
    2. CRI-O
    3. Docker Engine: before v1.24, Kubernetes shipped a built-in adapter for Docker Engine called dockershim. As of v1.24, dockershim has been removed from the Kubernetes project, so the runtime itself must expose the CRI.
        On the removal of dockershim (what it affects, and how to migrate):
        https://kubernetes.io/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-removal-affects-you/
        https://kubernetes.io/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/

    4. Mirantis Container Runtime

CRI tooling:
    1. crictl: a command-line client for any CRI-compatible runtime (install it manually for k8s v1.24 and later); see https://github.com/kubernetes-sigs/cri-tools/releases

Worth knowing: cri-containerd-cni-1.7.1-linux-amd64.tar.gz bundles the runtime, the CRI plugin, and the CNI plugins in a single archive.
    Project page: https://github.com/containerd/containerd

B2. Forward IPv4 and let iptables see bridged traffic

1. Load the kernel modules overlay and br_netfilter
cat > /etc/modules-load.d/containerd.conf << EOF
overlay
br_netfilter
EOF
modprobe overlay
modprobe br_netfilter
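Whether the modprobe calls took effect can be verified from `lsmod`; a small parser (helper name is ours) makes the check scriptable:

```shell
# module_loaded NAME: read `lsmod` output on stdin and succeed if NAME is
# listed, verifying that the modprobe calls above took effect.
module_loaded() {
  awk -v m="$1" '$1 == m { found = 1 } END { exit !found }'
}
# Usage: lsmod | module_loaded br_netfilter && echo loaded
```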

2. Set the kernel parameters Kubernetes needs
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

3. Apply the sysctl settings without rebooting
    sysctl --system
    sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward   # verify the values
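The verification can be automated by parsing the `key = value` lines sysctl prints; a sketch with our own helper name:

```shell
# check_k8s_sysctls: read "key = value" lines (the `sysctl` output format)
# on stdin and fail if any bridge-nf-call or ip_forward parameter is not 1.
check_k8s_sysctls() {
  awk '$1 ~ /bridge-nf-call|ip_forward/ && $NF != 1 { bad = 1 } END { exit bad }'
}
# Usage: sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward | check_k8s_sysctls
```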

B3. Install containerd.io (the 1.6 series)

1. Install prerequisites and remove any old Docker-era packages
    apt update
    apt install -y lrzsz apt-transport-https ca-certificates curl gnupg2 software-properties-common
    apt remove -y docker docker-engine docker.io containerd runc

2. Add the docker-ce apt repository (only to obtain the newer containerd.io package, not to install docker-ce)

    # Outside China: add Docker's GPG key to the system.
    #curl -sSL https://download.docker.com/linux/debian/gpg | gpg --dearmor > /usr/share/keyrings/docker-ce.gpg
    #echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-ce.gpg] https://download.docker.com/linux/debian $(lsb_release -sc) stable" > /etc/apt/sources.list.d/docker.list

    # In China: same GPG key, but use the Tsinghua TUNA mirror for the package repository.
    curl -sSL https://download.docker.com/linux/debian/gpg | gpg --dearmor > /usr/share/keyrings/docker-ce.gpg
    echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-ce.gpg] https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/debian $(lsb_release -sc) stable" > /etc/apt/sources.list.d/docker-ce.list

3. Install containerd.io
    apt update
    apt install -y containerd.io
    #apt install containerd   (installs the older Debian build; no docker-ce repo needed)

4. Rewrite the containerd config /etc/containerd/config.toml (the packaged file exists by default and must be replaced)
    rm /etc/containerd/config.toml
    containerd config default > /etc/containerd/config.toml

    # older containerd releases default the sandbox image to k8s.gcr.io/pause:3.2, newer ones to registry.k8s.io/pause:3.6; the two sed lines cover both
    sed -i "s#k8s.gcr.io/pause:3.2#registry.aliyuncs.com/google_containers/pause:3.7#g" /etc/containerd/config.toml
    sed -i "s#registry.k8s.io/pause:3.6#registry.aliyuncs.com/google_containers/pause:3.7#g" /etc/containerd/config.toml
    # enable the systemd cgroup driver (it must match kubelet's, and kubelet defaults to systemd)
    sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
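After the sed edits it is worth confirming the cgroup driver line actually flipped (a config edited by hand earlier may not match the sed pattern). A tiny check, with our own helper name:

```shell
# cgroup_driver_is_systemd FILE: succeed when the containerd config FILE
# enables the systemd cgroup driver, matching kubelet's default.
cgroup_driver_is_systemd() {
  grep -q 'SystemdCgroup = true' "$1"
}
# Usage: cgroup_driver_is_systemd /etc/containerd/config.toml || echo "driver mismatch"
```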

5. Restart and enable the containerd service
    systemctl restart containerd.service
    systemctl status containerd.service
    systemctl enable containerd.service

B4. Install crictl

1. crictl releases and binary download (pick the release whose major.minor matches your Kubernetes version; v1.24.2 here)
    https://github.com/kubernetes-sigs/cri-tools/releases
    https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.24.2/crictl-v1.24.2-linux-amd64.tar.gz

2. Unpack into /usr/local/bin
    tar zxvf crictl-v1.24.2-linux-amd64.tar.gz -C /usr/local/bin
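The version pairing mentioned in step 1 (crictl's major.minor matching the cluster's) can be checked mechanically; a sketch with a hypothetical helper:

```shell
# same_minor V1 V2: succeed when two version strings share the same
# major.minor, e.g. crictl v1.24.2 against kubeadm v1.24.13.
same_minor() {
  [ "$(echo "$1" | cut -d. -f1,2)" = "$(echo "$2" | cut -d. -f1,2)" ]
}
# Usage: same_minor v1.24.2 v1.24.13 && echo "compatible"
```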

3. Point crictl at the containerd socket, then restart containerd
cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF

    crictl config runtime-endpoint unix:///run/containerd/containerd.sock   # equivalent to the /etc/crictl.yaml written above
    systemctl restart containerd
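The endpoint string is easy to get wrong: `unix:///...` needs three slashes (scheme, empty authority, then an absolute path). A minimal format check (helper name is ours):

```shell
# valid_cri_endpoint URL: succeed when URL is a well-formed unix-socket
# endpoint with scheme, empty authority, and an absolute path.
valid_cri_endpoint() {
  case "$1" in
    unix:///*) return 0 ;;
    *)         return 1 ;;
  esac
}
# Usage: valid_cri_endpoint unix:///run/containerd/containerd.sock && echo ok
```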

4. List images and test a pull
    crictl images
    crictl pull nginx

C. Install the Kubernetes packages (kubeadm, kubelet, kubectl)

1. Download the public signing key:
    # Debian 12 and Ubuntu 22.04 ship this directory already; it should be world-readable but writable only by root (mkdir's default rwxr-xr-x is exactly that)
    mkdir /etc/apt/keyrings
    # Outside China
    #curl -fsSLo /etc/apt/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
    # In China (Aliyun mirror)
    curl -fsSLo /etc/apt/keyrings/kubernetes-archive-keyring.gpg https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg
    #curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -   # legacy apt-key route; deprecated, the keyring file above is enough

2. Add the Kubernetes apt repository:
    # Outside China
    #echo "deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
    # In China
    echo "deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

3. Install kubelet, kubeadm and kubectl, and pin their versions:
    apt update    # refresh the package lists; list available versions with: apt list -a kubeadm
    apt install -qq -y kubelet=1.24.13-00 kubeadm=1.24.13-00 kubectl=1.24.13-00     # install a specific version
    #apt install -y kubelet kubeadm kubectl     # without versions, this installs the latest
    apt-mark hold kubelet kubeadm kubectl    # held packages are never upgraded automatically

D. Initialize the cluster with kubeadm

1. List the default images (note the default registry, registry.k8s.io):
root@cka-m:~# kubeadm config images list
I0510 12:13:33.714553    7838 version.go:256] remote version is much newer: v1.27.1; falling back to: stable-1.24
registry.k8s.io/kube-apiserver:v1.24.13
registry.k8s.io/kube-controller-manager:v1.24.13
registry.k8s.io/kube-scheduler:v1.24.13
registry.k8s.io/kube-proxy:v1.24.13
registry.k8s.io/pause:3.7
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/coredns/coredns:v1.8.6

2. Pre-pull the images (optional; kubeadm init pulls anything missing)
    #kubeadm config images pull    # outside China, no mirror needed
    kubeadm config images pull --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers
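For reference, the rewrite the mirror flag applies can be sketched as a rule (assumption: the Aliyun mirror flattens the coredns/coredns path into its single google_containers namespace; the helper name is ours):

```shell
# mirror_image IMG: rewrite a registry.k8s.io image reference to the
# Aliyun google_containers mirror used by the pull command above.
mirror_image() {
  echo "$1" | sed \
    -e 's#^registry.k8s.io/coredns/#registry.aliyuncs.com/google_containers/#' \
    -e 's#^registry.k8s.io/#registry.aliyuncs.com/google_containers/#'
}
```

For example, `mirror_image registry.k8s.io/pause:3.7` prints `registry.aliyuncs.com/google_containers/pause:3.7`.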

3. Initialize the master
    #--apiserver-advertise-address 10.15.12.100   # this master's IP (the address other nodes use to reach it)
    #--pod-network-cidr=10.15.10.0/16             # the pod network range
    #--image-repository registry.cn-hangzhou.aliyuncs.com/google_containers     # registry to pull control-plane images from
    #--kubernetes-version       # pin the Kubernetes version
    kubeadm init --apiserver-advertise-address 10.15.12.100 --pod-network-cidr=10.15.10.0/16 --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers
    source <(kubectl completion bash)    # enable kubectl shell completion
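One quirk above: 10.15.10.0/16 has host bits set, so tooling normalizes it to 10.15.0.0/16 (the coredns pod IPs 10.15.0.2/3 later in this article reflect that). A quick alignment check, sketched with python3's ipaddress module (helper name is ours):

```shell
# cidr_is_aligned CIDR: succeed when the address is the true base of its
# prefix (all host bits zero); uses python3's ipaddress for the math.
cidr_is_aligned() {
  python3 -c 'import ipaddress, sys; ipaddress.ip_network(sys.argv[1], strict=True)' "$1" 2>/dev/null
}
# Usage: cidr_is_aligned 10.15.0.0/16 && echo aligned
```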

4. Check the cluster state
root@cka-m:~# kubectl get node    # NotReady here is expected: no pod network has been deployed yet
NAME     STATUS     ROLES           AGE   VERSION
cka-m    NotReady   control-plane   25h   v1.24.13

root@cka-m:~# kubectl get pod -A    # seven pods in total
NAMESPACE     NAME                            READY   STATUS    RESTARTS   AGE
kube-system   coredns-7f74c56694-9xxbn        0/1     Pending   0          25h
kube-system   coredns-7f74c56694-hls7d        0/1     Pending   0          25h
kube-system   etcd-cka-m                      1/1     Running   0          25h
kube-system   kube-apiserver-cka-m            1/1     Running   0          25h
kube-system   kube-controller-manager-cka-m   1/1     Running   0          25h
kube-system   kube-proxy-8gb7q                1/1     Running   0          15h
kube-system   kube-scheduler-cka-m            1/1     Running   0          25h

A successful init ends by printing the kubeconfig setup steps and the kubeadm join command for workers (screenshot omitted).

E. Deploy a pod network on the master (flannel overlay network chosen here)

1. Kubernetes has many network add-ons; see the list at
    https://kubernetes.io/docs/concepts/cluster-administration/addons/

2. The flannel manifest lives at the URL below (save it as flannel.yml, then make two changes)
    https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

2.1. First change: set Network to the --pod-network-cidr you passed to kubeadm init
  net-conf.json: |
    {
      "Network": "10.15.10.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }

2.2. Second change: add the interface that carries cluster API traffic (- --iface=eth0)
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=eth0
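If the nodes' uplink is not eth0, the right value can be read off the default route; a small parser over `ip route` output (helper name is ours):

```shell
# default_iface: print the interface carrying the default route, read from
# `ip route` output on stdin; use the result as flannel's --iface value.
default_iface() {
  awk '/^default/ { for (i = 1; i < NF; i++) if ($i == "dev") print $(i + 1) }'
}
# Usage: ip route | default_iface
```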

3. Apply the manifest
    kubectl apply -f flannel.yml

4. Check the cluster state again
root@cka-m:~# kubectl get node
NAME     STATUS   ROLES           AGE   VERSION
cka-m    Ready    control-plane   25h   v1.24.13

root@cka-m:~# kubectl get pod -A
NAMESPACE      NAME                            READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-j4dlx           1/1     Running   0          26m
kube-system    coredns-7f74c56694-9xxbn        1/1     Running   0          25h
kube-system    coredns-7f74c56694-hls7d        1/1     Running   0          25h
kube-system    etcd-cka-m                      1/1     Running   0          25h
kube-system    kube-apiserver-cka-m            1/1     Running   0          25h
kube-system    kube-controller-manager-cka-m   1/1     Running   0          25h
kube-system    kube-proxy-8gb7q                1/1     Running   0          15h
kube-system    kube-scheduler-cka-m            1/1     Running   0          25h

F. Adding worker nodes

1. First complete the CR + crictl + Kubernetes package setup above on the worker

2. Join the cluster
kubeadm join 10.15.11.114:6443 --token ho7gpf.8r26ypz4zkqfwtxm \
    --discovery-token-ca-cert-hash sha256:c5da455c9ee61cdf5680bfc78d7b79c7d2c52c2de922e403e5007b92a94893f1
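Bootstrap tokens expire after 24 hours; a fresh join line can be printed on the master with `kubeadm token create --print-join-command`. The CA hash can also be recomputed by hand; a sketch (the path in the usage line is kubeadm's default, and the helper name is ours):

```shell
# ca_cert_hash CERT: print the discovery-token-ca-cert-hash for a CA
# certificate (SHA-256 over the DER-encoded public key).
ca_cert_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl pkey -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | awk '{ print "sha256:" $NF }'
}
# Usage (on the control plane): ca_cert_hash /etc/kubernetes/pki/ca.crt
```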

3. Each added worker contributes one flannel pod and one kube-proxy pod
root@cka-m:~# kubectl get node -o wide
NAME     STATUS   ROLES           AGE   VERSION    INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION          CONTAINER-RUNTIME
cka-m    Ready    control-plane   26h   v1.24.13   10.15.11.114   <none>        Debian GNU/Linux 11 (bullseye)   5.10.0-22-cloud-amd64   containerd://1.6.21
cka-n1   Ready    <none>          16h   v1.24.13   10.15.12.100   <none>        Debian GNU/Linux 11 (bullseye)   5.10.0-22-cloud-amd64   containerd://1.6.21
cka-n2   Ready    <none>          16h   v1.24.13   10.15.12.40    <none>        Debian GNU/Linux 11 (bullseye)   5.10.0-22-cloud-amd64   containerd://1.6.21

root@cka-m:~# kubectl get pod -A -o wide
NAMESPACE      NAME                            READY   STATUS    RESTARTS   AGE   IP             NODE     NOMINATED NODE   READINESS GATES
kube-flannel   kube-flannel-ds-6h6tt           1/1     Running   0          57m   10.15.12.100   cka-n1   <none>           <none>
kube-flannel   kube-flannel-ds-h6qgk           1/1     Running   0          57m   10.15.11.114   cka-m    <none>           <none>
kube-flannel   kube-flannel-ds-j4dlx           1/1     Running   0          57m   10.15.12.40    cka-n2   <none>           <none>
kube-system    coredns-7f74c56694-9xxbn        1/1     Running   0          26h   10.15.0.2      cka-m    <none>           <none>
kube-system    coredns-7f74c56694-hls7d        1/1     Running   0          26h   10.15.0.3      cka-m    <none>           <none>
kube-system    etcd-cka-m                      1/1     Running   0          26h   10.15.11.114   cka-m    <none>           <none>
kube-system    kube-apiserver-cka-m            1/1     Running   0          26h   10.15.11.114   cka-m    <none>           <none>
kube-system    kube-controller-manager-cka-m   1/1     Running   0          26h   10.15.11.114   cka-m    <none>           <none>
kube-system    kube-proxy-8gb7q                1/1     Running   0          16h   10.15.12.100   cka-n1   <none>           <none>
kube-system    kube-proxy-nl2fm                1/1     Running   0          16h   10.15.12.40    cka-n2   <none>           <none>
kube-system    kube-proxy-nm8mg                1/1     Running   0          26h   10.15.11.114   cka-m    <none>           <none>
kube-system    kube-scheduler-cka-m            1/1     Running   0          26h   10.15.11.114   cka-m    <none>           <none>

G. Appendix 1: common kubeadm init failures

How to inspect a failure
    1. systemctl status containerd.service   # during init, the containerd log usually shows the real problem
    2. journalctl -xeu kubelet       # or read the kubelet journal

root@cka-n1:~# journalctl -xeu kubelet
May 11 21:51:13 cka-n1 kubelet[4770]: E0511 21:51:13.166825    4770 remote_runtime.go:201] "RunPodSandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to get sandbox image \"registry.k8s.io/pause:3.6\":>
May 11 21:51:13 cka-n1 kubelet[4770]: E0511 21:51:13.166884    4770 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = DeadlineExceeded desc = failed to get sandbox image \"registry.k8s.io/pause:3.6\": fail>
May 11 21:51:13 cka-n1 kubelet[4770]: E0511 21:51:13.166926    4770 kuberuntime_manager.go:815] "CreatePodSandbox for pod failed" err="rpc error: code = DeadlineExceeded desc = failed to get sandbox image \"registry.k8s.io/pause:3.6\": fail>
May 11 21:51:13 cka-n1 kubelet[4770]: E0511 21:51:13.166979    4770 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-cka-n1_kube-system(0b730b591c33e1d481767fea9e5a9010)\" with CreatePodSandbo>
May 11 21:51:13 cka-n1 kubelet[4770]: E0511 21:51:13.197494    4770 kubelet.go:2427] "Error getting node" err="node \"cka-n1\" not found"
May 11 21:51:13 cka-n1 kubelet[4770]: E0511 21:51:13.325946    4770 eviction_manager.go:254] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"cka-n1\" not found"
May 11 21:51:13 cka-n1 kubelet[4770]: E0511 21:51:13.398144    4770 kubelet.go:2427] "Error getting node" err="node \"cka-n1\" not found"
May 11 21:51:13 cka-n1 kubelet[4770]: E0511 21:51:13.422974    4770 kubelet.go:2352] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not in>
May 11 21:51:13 cka-n1 kubelet[4770]: E0511 21:51:13.470551    4770 remote_runtime.go:201] "RunPodSandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to get sandbox image \"registry.k8s.io/pause:3.6\":>
May 11 21:51:13 cka-n1 kubelet[4770]: E0511 21:51:13.470612    4770 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = DeadlineExceeded desc = failed to get sandbox image \"registry.k8s.io/pause:3.6\": fail>
May 11 21:51:13 cka-n1 kubelet[4770]: E0511 21:51:13.470645    4770 kuberuntime_manager.go:815] "CreatePodSandbox for pod failed" err="rpc error: code = DeadlineExceeded desc = failed to get sandbox image \"registry.k8s.io/pause:3.6\": fail>
May 11 21:51:13 cka-n1 kubelet[4770]: E0511 21:51:13.470713    4770 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-cka-n1_kube-system(eb057a3ba24aacbb975b9444691581c8)\" w>
May 11 21:51:13 cka-n1 kubelet[4770]: E0511 21:51:13.498718    4770 kubelet.go:2427] "Error getting node" err="node \"cka-n1\" not found"
May 11 21:51:13 cka-n1 kubelet[4770]: E0511 21:51:13.599740    4770 kubelet.go:2427] "Error getting node" err="node \"cka-n1\" not found"
May 11 21:51:13 cka-n1 kubelet[4770]: E0511 21:51:13.700510    4770 kubelet.go:2427] "Error getting node" err="node \"cka-n1\" not found"
May 11 21:51:13 cka-n1 kubelet[4770]: E0511 21:51:13.801488    4770 kubelet.go:2427] "Error getting node" err="node \"cka-n1\" not found"

    # The errors above show containerd failing to fetch its default sandbox image, registry.k8s.io/pause:3.6 (unreachable from here). The packaged containerd.io config also ships with the cri plugin disabled; regenerating /etc/containerd/config.toml and pointing sandbox_image at a reachable mirror (step B3-4) fixes both, then restart containerd. (crictl is only a client tool; installing it does not affect these errors.)

H. Appendix 2: fixing a wrong INTERNAL-IP

1. If a node reports the wrong INTERNAL-IP
root@cka-m:~# kubectl get node -o wide
NAME     STATUS   ROLES           AGE   VERSION    INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION          CONTAINER-RUNTIME
cka-m    Ready    control-plane   26h   v1.24.13   10.15.11.114   <none>        Debian GNU/Linux 11 (bullseye)   5.10.0-22-cloud-amd64   containerd://1.6.21
cka-n1   Ready    <none>          16h   v1.24.13   10.15.12.100   <none>        Debian GNU/Linux 11 (bullseye)   5.10.0-22-cloud-amd64   containerd://1.6.21
cka-n2   Ready    <none>          16h   v1.24.13   10.15.12.40    <none>        Debian GNU/Linux 11 (bullseye)   5.10.0-22-cloud-amd64   containerd://1.6.21

2. On the affected node, add a KUBELET_EXTRA_ARGS line with that node's correct address (cka-n1 here)
root@cka-n1:~# cat /var/lib/kubelet/kubeadm-flags.env 
KUBELET_KUBEADM_ARGS="--container-runtime=remote --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.7"
KUBELET_EXTRA_ARGS="--node-ip=10.15.12.100"

3. Restart kubelet and the correct address is reported
    systemctl daemon-reload
    systemctl restart kubelet
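A quick way to confirm the flag landed in the file (a hypothetical helper that parses the kubeadm-flags.env format):

```shell
# node_ip_from_env FILE: print the --node-ip value configured in a
# kubeadm-flags.env style file (prints nothing if the flag is absent).
node_ip_from_env() {
  sed -n 's/.*--node-ip=\([0-9.]*\).*/\1/p' "$1"
}
# Usage: node_ip_from_env /var/lib/kubelet/kubeadm-flags.env
```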

I. Appendix 3: reference material

Install guides this article followed (recommended)
    https://blog.csdn.net/weixin_42562106/article/details/123100476
    https://blog.csdn.net/tianmingqing0806/article/details/129887991

Official kubeadm installation docs
    https://kubernetes.io/zh-cn/docs/setup/production-environment/tools/
    https://kubernetes.io/zh-cn/docs/setup/production-environment/tools/kubeadm/
    https://kubernetes.io/zh-cn/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

On the containerd and kubelet cgroup drivers (the two must match; systemd is recommended)
    https://kubernetes.io/zh-cn/docs/setup/production-environment/container-runtimes/
Originally published by 辣条①号; when reposting, please keep this notice and the article link: https://boke.wsfnk.com/archives/1135.html
Last edited: 2023/5/13