Deploying a k8s Cluster on CentOS (the kubeadm Way)

Preface

Environment: CentOS 7.9, docker-ce 20.10.9, Kubernetes v1.22.6

This article explains how to install and deploy a k8s cluster on CentOS.

Two ways to deploy a k8s cluster for production

kubeadm
kubeadm is a tool that provides kubeadm init and kubeadm join for quickly deploying a k8s cluster.
Deployment docs: https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/ and https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/

Binary
Download the release binaries from the official site and deploy each component by hand to assemble a k8s cluster.
Download: GitHub
The binary method is recommended if you want to understand k8s more deeply.

Server initialization and environment preparation

Prepare 3 virtual machines: 1 master and 2 node servers.

Host              Description
192.168.118.131   master node, internet access, CentOS 7.x, at least 2 CPU cores and 2 GB RAM
192.168.118.132   node1 node, internet access, CentOS 7.x, at least 2 CPU cores and 2 GB RAM
192.168.118.133   node2 node, internet access, CentOS 7.x, at least 2 CPU cores and 2 GB RAM

Configure all 3 hosts with the following 6 steps, adjusting for your actual environment:

1. Stop the firewall

[root@master ~]# systemctl stop firewalld			#stop the firewall
[root@master ~]# systemctl disable firewalld		#disable it at boot

2. Disable selinux

[root@master ~]# setenforce 0						#disable selinux temporarily
[root@master ~]# getenforce 						#check selinux status
Permissive
[root@master ~]# vim /etc/selinux/config			#disable selinux permanently
SELINUX=disabled
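
If you prefer not to open an editor, a one-line sed makes the same permanent change (a sketch assuming the file still holds the default SELINUX=enforcing entry):

sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config	#edit /etc/selinux/config in place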

3. Disable the swap partition (mandatory: the official k8s docs require it)

[root@master ~]# swapoff -a							#disable all swap partitions
[root@master ~]# free -h
              total        used        free      shared  buff/cache   available
Mem:           1.8G        280M        1.2G        9.6M        286M        1.4G
Swap:            0B          0B          0B
[root@master ~]# vim /etc/fstab						#disable swap permanently: delete or comment out the swap device's mount line in /etc/fstab
#/dev/mapper/centos-swap swap                    swap    defaults        0 0
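
A non-interactive alternative is to comment out the swap line with sed (a sketch; check first that the pattern only matches your swap entry):

sed -ri '/\sswap\s/s/^#?/#/' /etc/fstab		#prefix every line containing a swap field with # (idempotent)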

4. Set the host names

cat >> /etc/hosts <<EOF
192.168.118.131 master
192.168.118.132 node1
192.168.118.133 node2
EOF
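
The heredoc above only provides name resolution; if the machines' actual host names are still at their defaults, set them as well with hostnamectl (run the matching command on each machine; a small addition not spelled out in the original steps):

hostnamectl set-hostname master		#on 192.168.118.131
hostnamectl set-hostname node1		#on 192.168.118.132
hostnamectl set-hostname node2		#on 192.168.118.133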

5. Synchronize time

[root@master ~]# yum -y install ntp	 				#install the ntpd service
[root@master ~]# systemctl start ntpd				#start the ntpd service, or use a cron job instead, e.g.: */5 * * * * /usr/sbin/ntpdate -u 192.168.11.100
[root@master ~]# systemctl enable ntpd
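
To confirm the daemon really is syncing, ntpq can list its peers (an optional check, not in the original text):

ntpq -p		#an asterisk in front of a peer means the clock is synchronized to it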

6. Pass bridged IPv4 traffic to the iptables chains (some bridged IPv4 traffic bypasses the iptables chains: a filter in the Linux kernel handles every such packet before it is matched for delivery to the application process, which can cause traffic to be lost). Configure this in the k8s.conf file (k8s.conf does not exist by default; you create it yourself).

[root@master sysctl.d]# touch /etc/sysctl.d/k8s.conf				#create the k8s.conf file
[root@master sysctl.d]# cat >> /etc/sysctl.d/k8s.conf <<EOF      	#append the settings to k8s.conf
> net.bridge.bridge-nf-call-ip6tables=1
> net.bridge.bridge-nf-call-iptables=1
> net.ipv4.ip_forward=1
> vm.swappiness=0
> EOF
[root@master sysctl.d]# sysctl --system								#reload all system parameters; sysctl -p works too
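
On a minimal install the net.bridge.* keys only exist once the br_netfilter kernel module is loaded; if sysctl --system complains about unknown keys, load the module first (an extra step not in the original text):

modprobe br_netfilter								#load the bridge netfilter module now
echo br_netfilter > /etc/modules-load.d/k8s.conf	#and have it loaded automatically at boot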

Installing k8s with kubeadm (the method this article covers)

Once the 6 steps above have been done on every VM, the k8s installation can begin. kubeadm is a tool released by the official community for rapidly deploying a Kubernetes cluster; it can complete the deployment with two commands:
1. Create a master node: kubeadm init
2. Join the node servers to the Kubernetes cluster: kubeadm join <master_IP:port>

Step 1: Install docker (run on all node servers, because k8s here uses docker as its default CRI, where CRI means container runtime)

#Install docker on all 3 VMs; first remove any old docker versions
[root@master ~]# yum remove docker \
                   docker-client \
                   docker-client-latest \
                   docker-common \
                   docker-latest \
                   docker-latest-logrotate \
                   docker-logrotate \
                   docker-engine
[root@master ~]# yum install -y yum-utils							#install yum-utils, which provides the yum-config-manager command
[root@master ~]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo	#download and install docker's repo file
[root@master ~]# yum list docker-ce --showduplicates | sort -r							#list the available docker versions
[root@master ~]# yum -y install docker-ce docker-ce-cli containerd.io					#install the latest docker version directly
[root@master ~]# yum -y install docker-ce-20.10.9 docker-ce-cli-20.10.9 containerd.io	#or install a specific version
[root@master ~]# systemctl enable docker												#enable docker at boot
[root@master ~]# systemctl start docker													#start docker
[root@master ~]# cat /etc/docker/daemon.json 											#configure a registry mirror: create the file with the content below
{
    "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
[root@master ~]# systemctl restart docker												#restart docker
[root@master ~]# docker info |tail -5													#verify the mirror configuration
  127.0.0.0/8
 Registry Mirrors:
  https://b9pmyelo.mirror.aliyuncs.com/													#mirror configured successfully; the registry is now Aliyun
 Live Restore Enabled: false

[root@master ~]#

Step 2: Configure the Aliyun yum repo for kubernetes (run on all node servers)

[root@master ~]# cat >/etc/yum.repos.d/kubernetes.repo <<'EOF' 							#configure the k8s yum repo on all 3 VMs
[kubernetes]
name = Kubernetes
baseurl = https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled = 1
gpgcheck = 0
repo_gpgcheck = 0
gpgkey = https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
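
Optionally rebuild the yum cache to confirm the new repo is reachable (an extra check, not in the original text):

yum clean all && yum makecache		#should finish without errors and list the [kubernetes] repo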

Step 3: Install kubeadm, kubelet and kubectl with yum (run on all nodes)

#Install kubeadm, kubelet and kubectl on all 3 VMs (kubeadm and kubectl are tools; only kubelet is a system service)
[root@master ~]# yum list --showduplicates | grep  kubeadm								#list the kubeadm versions yum offers; we install 1.22.6 here, otherwise the latest is installed by default
[root@master ~]# yum -y install kubelet-1.22.6 kubeadm-1.22.6 kubectl-1.22.6			#install kubeadm, kubelet and kubectl
[root@master ~]# systemctl enable kubelet												#enable kubelet at boot (no need to start it yet; it cannot start successfully anyway, and kubeadm init will bring it up when initializing the master)
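
A quick version check before initializing (optional; confirms the components are at 1.22.6):

kubeadm version		#prints the kubeadm version, e.g. GitVersion:"v1.22.6"
kubelet --version	#prints "Kubernetes v1.22.6"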

Step 4: Initialize the control plane on the master node

# kubeadm init --help shows what each flag means. Run the initialization on the master node only (node servers do not run it):
# --apiserver-advertise-address	the apiserver address, i.e. the master node's IP
# --image-repository			pull the images from the Aliyun registry inside China
# --kubernetes-version			the k8s version; keep it in line with the kubeadm version from step 3
# --service-cidr				the cluster-internal Service network; leave it at this value for now
# --pod-network-cidr			the Pod network; leave it at this value for now (it matches flannel's default)
[root@master ~]# kubeadm init \
--apiserver-advertise-address=192.168.118.131 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.22.6 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16

#In another window, docker images shows that kubeadm init actually pulled quite a few images:
[root@master ~]# docker images
REPOSITORY                                                        TAG       IMAGE ID       CREATED         SIZE
registry.aliyuncs.com/google_containers/kube-apiserver            v1.22.6   d35b182b4200   2 weeks ago     128MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.22.6   63f3f385dcfe   2 weeks ago     104MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.22.6   3618e4ab750f   2 weeks ago     122MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.22.6   9fe44a6192d1   2 weeks ago     52.7MB
registry.aliyuncs.com/google_containers/etcd                      3.5.0-0   004811815584   7 months ago    295MB
registry.aliyuncs.com/google_containers/coredns                   v1.8.4    8d147537fb7d   8 months ago    47.6MB
registry.aliyuncs.com/google_containers/pause                     3.5       ed210e3e4a5b   10 months ago   683kB
[root@master ~]# 
#During kubeadm init, k8s reported an error:
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

        Unfortunately, an error has occurred:
                timed out waiting for the condition

        This error is likely caused by:
                - The kubelet is not running
                - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

        If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                - 'systemctl status kubelet'
                - 'journalctl -xeu kubelet'

        Additionally, a control plane component may have crashed or exited when started by the container runtime.
        To troubleshoot, list all containers using your preferred container runtimes CLI.

        Here is one example how you may list all Kubernetes containers running in docker:
                - 'docker ps -a | grep kube | grep -v pause'
                Once you have found the failing container, you can inspect its logs with:
                - 'docker logs CONTAINERID'

error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher



[root@master ~]# tail -22 /var/log/messages				#inspect the log; the key phrase is "cgroup driver": k8s and docker disagree
Feb  3 08:49:18 master kubelet: E0203 08:49:18.373751   14870 server.go:294] "Failed to run kubelet" err="failed to run Kubelet: misconfiguration: kubelet cgroup driver: \"systemd\" is different from docker cgroup driver: \"cgroupfs\""
Feb  3 08:49:18 master systemd: kubelet.service: main process exited, code=exited, status=1/FAILURE
Feb  3 08:49:18 master systemd: Unit kubelet.service entered failed state.
Feb  3 08:49:18 master systemd: kubelet.service failed.
[root@master ~]# 
#Cause: Kubernetes 1.14+ recommends the systemd cgroup driver, but docker's default Cgroup Driver is cgroupfs, which makes the kubelet deployment fail
[root@master ~]# docker info | grep -i "Cgroup Driver"		#check which Cgroup Driver docker uses; it is indeed cgroupfs
 Cgroup Driver: cgroupfs
[root@master ~]# 
#Fix: edit /etc/docker/daemon.json and add the "exec-opts" parameter shown below:
[root@master ~]# vim /etc/docker/daemon.json 				#change the other nodes' docker too, so that all nodes keep an identical docker configuration
{
    "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
    "exec-opts": ["native.cgroupdriver=systemd"]
}
#"registry-mirrors" was already configured earlier; note the comma that now has to follow it (JSON allows no comments, so these notes stay outside the file)
#"exec-opts" is the newly added line that switches docker to the systemd cgroup driver
[root@master ~]# systemctl restart docker
[root@master ~]# docker info | grep -i "Cgroup Driver"		#verify
 Cgroup Driver: systemd


#Re-run the master initialization
[root@master ~]# kubeadm reset												#wipe the configuration the failed kubeadm init left behind
#Then run the same kubeadm init command as above, on the master node only (node servers do not run it):
[root@master ~]# kubeadm init \
--apiserver-advertise-address=192.168.118.131 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.22.6 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16

#This time kubeadm init succeeds, with the following message:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube														#these 3 commands configure kubectl for managing the cluster from the master node
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.								#it prompts us to configure the pod network; we do that in step 6
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.118.131:6443 --token o6mdo3.fhfvz4vzmcrr6hsm \
        --discovery-token-ca-cert-hash sha256:8a80625d031f09efa43532360585b63dc2778a26435a9a4a6319cbf9f5acf91b 

#Simply copy and paste the commands from the prompt above:
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@master ~]# export KUBECONFIG=/etc/kubernetes/admin.conf
[root@master ~]# 
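
kubectl now works on the master. A quick sanity check (the master will show NotReady until the CNI plugin is deployed in step 6):

kubectl get nodes		#for now only the master is listed, with STATUS NotReady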

Step 5: Join the node servers to the k8s cluster

#After the master initialization in step 4, the output tells you to run the command below on the node servers to join them to the k8s cluster; copy it to each node and run it.
#Note: the token in this kubeadm join command is only valid for 24h; once it expires, run kubeadm token create --print-join-command to generate a new one.
kubeadm join 192.168.118.131:6443 --token o6mdo3.fhfvz4vzmcrr6hsm \
        --discovery-token-ca-cert-hash sha256:8a80625d031f09efa43532360585b63dc2778a26435a9a4a6319cbf9f5acf91b
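
If the token has expired, regenerating the join command on the master takes one line (the token and hash it prints will of course differ from the values above):

kubeadm token create --print-join-command		#prints a fresh kubeadm join command to copy onto the nodes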

[root@node1 ~]# kubeadm join 192.168.118.131:6443 --token o6mdo3.fhfvz4vzmcrr6hsm \			#run on node1 and node2
>         --discovery-token-ca-cert-hash sha256:8a80625d031f09efa43532360585b63dc2778a26435a9a4a6319cbf9f5acf91b
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@node1 ~]# 

Step 6: Deploy the container network (CNI network plugin)

#Configure the pod network on the master node.
#After the nodes join the cluster, kubectl get nodes on the master shows their status as NotReady, because no CNI network plugin has been
#deployed yet; in fact, the kubeadm init output in step 4 already told us to configure a pod network. In k8s, the pod network is implemented
#by third-party plugins, of which there are dozens; well-known ones include flannel, calico, canal and kube-router, and a simple,
#easy-to-use choice is CoreOS's flannel project.

#The command below configures the pod network online. Since the file is hosted abroad, it may fail: look up the IP of the domain
#raw.githubusercontent.com on a site such as http://ip.tool.chinaz.com/, add that mapping to /etc/hosts, then re-run the command; a few retries usually succeed.
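
For example, mapping the looked-up address in /etc/hosts might look like this (the IP below is only illustrative; use whatever the lookup returns for you):

echo "185.199.108.133 raw.githubusercontent.com" >> /etc/hosts		#hypothetical IP, verify it yourself before adding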
[root@master ~]# kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml					
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
[root@master ~]# kubectl get pods -n kube-system						#check pod status
NAME                             READY   STATUS     RESTARTS   AGE
coredns-7f6cbbb7b8-bm2gl         0/1     Pending    0          86m
coredns-7f6cbbb7b8-frq8l         0/1     Pending    0          86m
etcd-master                      1/1     Running    1          87m
kube-apiserver-master            1/1     Running    1          87m
kube-controller-manager-master   1/1     Running    1          87m
kube-flannel-ds-5rwkt            0/1     Init:1/2   0          2m13s
kube-flannel-ds-9fqkl            1/1     Running    0          2m13s
kube-flannel-ds-bvgh4            1/1     Running    0          2m13s
kube-proxy-8vmqg                 1/1     Running    0          59m
kube-proxy-ll9hw                 1/1     Running    0          86m
kube-proxy-zndg7                 1/1     Running    0          59m
kube-scheduler-master            1/1     Running    1          87m
[root@master ~]# kubectl get nodes										#the pod network is configured; the node status is now Ready
NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   97m   v1.22.6
node1    Ready    <none>                 69m   v1.22.6
node2    Ready    <none>                 69m   v1.22.6
[root@master ~]#

Step 7: Test the k8s cluster
Create a pod in k8s and verify that it runs normally:

[root@master ~]# kubectl create deployment httpd --image=httpd					#create an httpd deployment as a test
deployment.apps/httpd created
[root@master ~]# kubectl expose deployment httpd --port=80 --type=NodePort		#expose it on port 80; other ports might be blocked by a firewall
service/httpd exposed
[root@master ~]# kubectl get pod,svc											#check the pod and the exposed port
NAME                         READY   STATUS    RESTARTS   AGE
pod/httpd-757fb56c8d-w42l5   1/1     Running   0          39s

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
service/httpd        NodePort    10.102.83.215   <none>        80:30176/TCP     26s			#30176 is the externally mapped port
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP          112m
[root@master ~]# 
#As a beginner, don't agonize over the commands above; just use port 80, since other ports may be blocked by a firewall and the page would then be unreachable

Test access in a browser: either the master node's IP or a node server's IP works, on port 30176. As shown below, the page loads, which means our k8s deployment is complete and the network is OK.
[Screenshot: the httpd test page opened in a browser]
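
The same check works from the command line; NodePort 30176 comes from the kubectl get svc output above:

curl http://192.168.118.131:30176		#should print httpd's default page: <html><body><h1>It works!</h1></body></html>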


Copyright notice: this is an original article by MssGuo, released under the CC 4.0 BY-SA license. Please include a link to the original source and this notice when reposting.