K8s Cluster Setup

Installing Kubernetes

Prerequisites

OS: CentOS 7
Hardware: at least 2 CPU cores and 4 GB of RAM recommended
A static IP configured on each node

Environment Preparation

The jq command may be needed during installation; if it is not already on the system, it is worth installing beforehand.
Install the EPEL repository:

yum install epel-release

After installing EPEL, check that the jq package is available:

yum list jq

Install jq:

yum install jq
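jq extracts fields from JSON on the command line, which is handy later for inspecting kubectl or docker output. A minimal sketch, piping in a sample JSON document rather than live cluster output:

```shell
# Extract a nested field with jq; the sample mimics `kubectl version -o json`.
echo '{"clientVersion":{"gitVersion":"v1.18.5"}}' | jq -r '.clientVersion.gitVersion'
```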

Install Docker
See the Docker installation guide.

Set Hostnames

Master node

sudo hostnamectl set-hostname master-node

Worker nodes (run the matching command on each node)

sudo hostnamectl set-hostname worker-node1
sudo hostnamectl set-hostname worker-node2

Set Hostname-to-IP Mappings

sudo vi /etc/hosts

192.168.128.112 master-node
192.168.128.113 worker-node1
192.168.128.114 worker-node2

Configure the Firewall

Master node

sudo firewall-cmd --permanent --add-port=6443/tcp
sudo firewall-cmd --permanent --add-port=2379-2380/tcp
sudo firewall-cmd --permanent --add-port=10250/tcp
sudo firewall-cmd --permanent --add-port=10251/tcp
sudo firewall-cmd --permanent --add-port=10252/tcp
sudo firewall-cmd --permanent --add-port=10255/tcp
sudo firewall-cmd --reload
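The per-port commands above can also be scripted as a loop. A dry-run sketch that echoes the commands instead of invoking firewall-cmd, so it can be reviewed on any machine:

```shell
# Print the firewall commands for all master-node ports; remove the echo
# (keeping sudo firewall-cmd ...) to actually apply them on the host.
for port in 6443 2379-2380 10250 10251 10252 10255; do
  echo "sudo firewall-cmd --permanent --add-port=${port}/tcp"
done
echo "sudo firewall-cmd --reload"
```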

Worker nodes

sudo firewall-cmd --permanent --add-port=10251/tcp
sudo firewall-cmd --permanent --add-port=10255/tcp
sudo firewall-cmd --reload

Disable SELinux

setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

Disable Swap

sudo sed -i '/swap/d' /etc/fstab
sudo swapoff -a

Configure Bridge Network Rules

vi /etc/sysctl.d/k8s.conf

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

Apply the settings; the output should include the values set above:

sysctl --system
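A quick sketch of what the config file should look like once written; it uses a throwaway temp file so it can run anywhere, whereas on the real host the file is /etc/sysctl.d/k8s.conf and `sysctl --system` loads it:

```shell
# Write the two bridge settings to a temp file and confirm both lines are
# present as "key = 1" pairs.
conf=$(mktemp)
cat > "$conf" <<'EOF'
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
grep -c '^net\.bridge\..* = 1$' "$conf"   # prints 2 when both lines are present
rm -f "$conf"
```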

Installing with yum

Add the package repository

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

Run the installation

sudo yum install -y kubelet kubeadm kubectl

Start Docker

systemctl start docker
systemctl enable docker

Start kubelet

systemctl enable kubelet
systemctl start kubelet

Initialize the Cluster (Master node)

kubeadm init --pod-network-cidr=10.244.0.0/16

If initialization goes wrong (on the master or a worker node), you can reset with the command below and then initialize again:

kubeadm reset

When initialization completes, output like the following is shown:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.128.112:6443 --token dsx78s.sxyjn54pbz45v52t \
    --discovery-token-ca-cert-hash sha256:cc4c3d90691e52d3b8595ddad0c533a8fae1b204c45d779a9a7a139b7733f9c1 

Copy the last command from the output; each worker node joins the cluster by running it:

 kubeadm join 192.168.128.112:6443 --token dsx78s.sxyjn54pbz45v52t --discovery-token-ca-cert-hash sha256:cc4c3d90691e52d3b8595ddad0c533a8fae1b204c45d779a9a7a139b7733f9c1

Token Validity

The token is valid for 24 hours (the kubeadm default); within that window it can be used to join any number of worker nodes.

If the token has expired, a new join command can be generated:

kubeadm token create --print-join-command

Run the following on the master to configure kubectl access:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check the initialization result:

kubectl get nodes -o wide

Check node status:

kubectl get nodes
NAME           STATUS     ROLES    AGE   VERSION
master-node    NotReady   master   23m   v1.18.5

If the status is NotReady, the node is not ready yet; inspect the system pods for details:

kubectl get pod --all-namespaces
NAMESPACE     NAME                                  READY   STATUS    RESTARTS   AGE
kube-system   coredns-66bff467f8-954l9              0/1     Pending   0          20m
kube-system   coredns-66bff467f8-w4mw5              0/1     Pending   0          20m
kube-system   etcd-master-node                      1/1     Running   0          20m
kube-system   kube-apiserver-master-node            1/1     Running   0          20m
kube-system   kube-controller-manager-master-node   1/1     Running   7          20m
kube-system   kube-proxy-frsd6                      1/1     Running   0          20m
kube-system   kube-proxy-wkcqz                      1/1     Running   0          13m
kube-system   kube-proxy-zn7ww                      1/1     Running   0          12m
kube-system   kube-scheduler-master-node            1/1     Running   6          20m

Fixing coredns stuck in Pending

If coredns is Pending, the flannel configuration is missing; flannel provides the overlay network that carries pod-to-pod traffic.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Fixing coredns stuck in ContainerCreating

If it is stuck in ContainerCreating, inspect the pod events with:

kubectl describe pod/coredns-66bff467f8-954l9 -n kube-system

If the following error appears, check whether the /run/flannel/subnet.env file exists:

network: open /run/flannel/subnet.env: no such file or directory

The /run/flannel/subnet.env file should contain:

FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
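If the file is missing, it can be recreated by hand. A hedged sketch: the values mirror the --pod-network-cidr passed to kubeadm init, so adjust them to your network. It writes to a temp directory so the sketch runs anywhere; on the node the target path is /run/flannel/subnet.env.

```shell
# Recreate the flannel subnet file with the cluster's pod network values.
dir=$(mktemp -d)
cat > "$dir/subnet.env" <<'EOF'
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
EOF
grep FLANNEL_NETWORK "$dir/subnet.env"   # confirm the file was written
rm -rf "$dir"
```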

If an image pull fails on a worker node, pull the image manually with docker pull on that node:

docker pull quay.io/coreos/flannel:v0.12.0-amd64

Low memory can also cause some pods to fail to start.

Reinstalling coredns

Delete the existing deployment:

kubectl delete --namespace=kube-system deployment coredns

Reinstall:

wget https://raw.githubusercontent.com/coredns/deployment/master/kubernetes/coredns.yaml.sed
wget https://raw.githubusercontent.com/coredns/deployment/master/kubernetes/deploy.sh
chmod +x deploy.sh
./deploy.sh | kubectl apply -f -

Check the pod status on the problem node:

kubectl get pods -n kube-system -o wide | grep worker-node1

Delete the node

kubectl delete node worker-node1

Delete the certificates issued to the node, located under /var/lib/kubelet/ on that node:

rm -f /var/lib/kubelet/pki/*

Installation Complete

Normal pod status after the issues are resolved:

# kubectl get pod --all-namespaces
NAMESPACE     NAME                                  READY   STATUS    RESTARTS   AGE
kube-system   coredns-66bff467f8-d2rnk              1/1     Running   0          131m
kube-system   coredns-66bff467f8-dbfkt              1/1     Running   0          131m
kube-system   etcd-master-node                      1/1     Running   0          131m
kube-system   kube-apiserver-master-node            1/1     Running   0          131m
kube-system   kube-controller-manager-master-node   1/1     Running   0          131m
kube-system   kube-flannel-ds-amd64-9xfgg           1/1     Running   0          39m
kube-system   kube-flannel-ds-amd64-bq4kn           1/1     Running   0          39m
kube-system   kube-proxy-kpxdp                      1/1     Running   0          70m
kube-system   kube-proxy-ltzrl                      1/1     Running   0          131m
kube-system   kube-scheduler-master-node            1/1     Running   0          131m

# kubectl get nodes
NAME           STATUS   ROLES    AGE    VERSION
master-node    Ready    master   131m   v1.18.5
worker-node1   Ready    <none>   70m    v1.18.5

Using K8s

kubeadm, kubelet, kubectl

  • kubeadm: a tool for quickly bootstrapping a Kubernetes cluster
  • kubelet: runs on every node as a system service and is responsible for starting pods and containers
  • kubectl: the Kubernetes command-line tool for issuing commands to the cluster

Start kubelet and enable it at boot:

systemctl start kubelet
systemctl enable kubelet

Create resources from a YAML manifest:

kubectl create -f ./xxxx.yaml

Update the resource configuration; if the resources do not exist yet, this behaves the same as create:

kubectl apply -f ./xxxx.yaml
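As an illustration of what such a manifest looks like, here is a minimal sketch (the nginx-demo deployment name and image tag are hypothetical, not from the original tutorial). It writes the YAML to a temp file; on the cluster you would run `kubectl apply -f "$manifest"`.

```shell
# Write a minimal Deployment manifest suitable for kubectl create/apply.
manifest=$(mktemp)
cat > "$manifest" <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx:1.19
EOF
grep 'kind: Deployment' "$manifest"   # sanity-check the file was written
rm -f "$manifest"
```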

Configuring the Dashboard

Create certificates

mkdir dashboard-certs

cd dashboard-certs/

# Create the namespace
kubectl create namespace kubernetes-dashboard

# Generate the private key
openssl genrsa -out dashboard.key 2048

# Create a certificate signing request
openssl req -days 36000 -new -out dashboard.csr -key dashboard.key -subj '/CN=dashboard-cert'

# Self-sign the certificate
openssl x509 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt

# Create the kubernetes-dashboard-certs secret
kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.key --from-file=dashboard.crt -n kubernetes-dashboard
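Before creating the secret, it can be worth verifying the self-signed certificate. A sketch using throwaway files under a temp directory so it can run anywhere; on the real host the files live in dashboard-certs/.

```shell
# Generate a key, CSR, and self-signed cert, then print the cert subject.
d=$(mktemp -d)
openssl genrsa -out "$d/dashboard.key" 2048
openssl req -new -key "$d/dashboard.key" -out "$d/dashboard.csr" -subj '/CN=dashboard-cert'
openssl x509 -req -days 36000 -in "$d/dashboard.csr" -signkey "$d/dashboard.key" -out "$d/dashboard.crt"
openssl x509 -in "$d/dashboard.crt" -noout -subject   # should mention dashboard-cert
rm -rf "$d"
```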

Install the Dashboard

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml

Check the installation result:

#kubectl get pods -A  -o wide

NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE     IP                NODE           NOMINATED NODE   READINESS GATES
kube-system            coredns-66bff467f8-d2rnk                     1/1     Running   33         20h     10.244.0.7        master-node    <none>           <none>
kube-system            coredns-66bff467f8-dbfkt                     1/1     Running   32         20h     10.244.0.6        master-node    <none>           <none>
kube-system            etcd-master-node                             1/1     Running   4          20h     192.168.128.112   master-node    <none>           <none>
kube-system            kube-apiserver-master-node                   1/1     Running   3          20h     192.168.128.112   master-node    <none>           <none>
kube-system            kube-controller-manager-master-node          1/1     Running   4          20h     192.168.128.112   master-node    <none>           <none>
kube-system            kube-flannel-ds-amd64-9xfgg                  1/1     Running   1          18h     192.168.128.112   master-node    <none>           <none>
kube-system            kube-flannel-ds-amd64-bq4kn                  1/1     Running   0          18h     192.168.128.113   worker-node1   <none>           <none>
kube-system            kube-flannel-ds-amd64-gbcz7                  1/1     Running   0          17h     192.168.128.114   worker-node2   <none>           <none>
kube-system            kube-proxy-kpxdp                             1/1     Running   0          19h     192.168.128.113   worker-node1   <none>           <none>
kube-system            kube-proxy-ltzrl                             1/1     Running   1          20h     192.168.128.112   master-node    <none>           <none>
kube-system            kube-proxy-wwk5s                             1/1     Running   0          17h     192.168.128.114   worker-node2   <none>           <none>
kube-system            kube-scheduler-master-node                   1/1     Running   3          20h     192.168.128.112   master-node    <none>           <none>
kubernetes-dashboard   dashboard-metrics-scraper-6b4884c9d5-ffdql   1/1     Running   0          8m17s   10.244.0.15       master-node    <none>           <none>
kubernetes-dashboard   kubernetes-dashboard-7b544877d5-gjzg8        1/1     Running   0          8m17s   10.244.0.14       master-node    <none>           <none>

If the dashboard pod is in the Running state, it is working normally; if it is in CrashLoopBackOff, the machine may be low on memory.

Create a dashboard admin account

#vim dashboard-admin.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: dashboard-admin
  namespace: kubernetes-dashboard

:wq

kubectl create -f ./dashboard-admin.yaml

Grant the user cluster permissions

#vim dashboard-admin-bind-cluster-role.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin-bind-cluster-role
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kubernetes-dashboard
:wq

kubectl create -f ./dashboard-admin-bind-cluster-role.yaml

Check the dashboard port mapping

# kubectl -n kubernetes-dashboard get svc
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.105.5.109    <none>        8000/TCP        7h18m
kubernetes-dashboard        ClusterIP   10.102.18.151   <none>        443/TCP         73m

The kubernetes-dashboard service is only reachable inside the cluster and cannot be accessed externally; the dashboard's port 443 needs to be exposed through a NodePort.

# kubectl -n kubernetes-dashboard edit svc kubernetes-dashboard

Find the type field and change ClusterIP to NodePort:

spec:
  clusterIP: 10.106.68.90
  ports:
  - port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  sessionAffinity: None
  type: ClusterIP ## <------ change this to NodePort
status:
  loadBalancer: {}

Save and exit, then check the service again:

# kubectl -n kubernetes-dashboard get svc
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.105.5.109    <none>        8000/TCP        7h18m
kubernetes-dashboard        NodePort    10.102.18.151   <none>        443:31461/TCP   73m

The NodePort is assigned randomly; here it is 31461.
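As a non-interactive alternative to editing the service by hand, kubectl patch can switch the service type in one command. A sketch that only echoes the command as a dry run, since no cluster is available in this shell; drop the echo to run it against the live cluster.

```shell
# Build and print the patch command that changes the service type to NodePort.
patch='{"spec":{"type":"NodePort"}}'
echo kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p "$patch"
```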

View and copy the user token

#kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep dashboard-admin | awk '{print $1}')
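The pipeline above lists the secrets, keeps only the dashboard-admin row, and prints its first column (the secret name), which describe then expands. The same grep/awk step, simulated here with sample `kubectl get secret` output:

```shell
# Extract the first column of the matching row, as the pipeline does.
sample='NAME                          TYPE                                  DATA   AGE
default-token-xyz12           kubernetes.io/service-account-token   3      5m
dashboard-admin-token-b6c7z   kubernetes.io/service-account-token   3      5m'
echo "$sample" | grep dashboard-admin | awk '{print $1}'
```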


Name:         dashboard-admin-token-b6c7z
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: e091ac40-221f-4c66-a6df-c04d6c22c5f2

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6ImFENEF3Ymc4M0Fxc3pQRkRzZTNkaDd0NG50ZTdPVUE3NXNITmVVekE3WnMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tYjZjN3oiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTA5MWFjNDAtMjIxZi00YzY2LWE2ZGYtYzA0ZDZjMjJjNWYyIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.yg22OVtuOm9T2KiMcpNU9aHHhT3rSrYWEj98_x0rapaLdbMcyjAqToFvJ69N8-RGihnJ2KwwV9eeeFvCp4d3nh3lM8nJIRXjocXpJLLaGKo8G71WmLfa9kF4_W9DfsRA1ER0PJSYTyN3RvlIQaa3-98HhlaxIKNfL_tbHa1ELgH0wXYLAKpx4SDq5POV_EcyuszCgwqNJQ6LU3S4x_OML9ebZX8cuJ3vQeF4Prt9CR5oqduNNzdUcCRGSLhP_Mimr9lVpJqeQjwGzgTO3oWZeuUUjys6WzBC4tex6fI8Q4OT4qaOcGrcYDIgGWdS_rI_YdN0tSmcWz1OKPcRG80dhw

In Chrome, browse to the master node's IP plus the port generated above (31461).
Enter the token shown above to reach the dashboard.


Copyright notice: This is an original article by yuyangbaby, licensed under CC 4.0 BY-SA. Please include a link to the original source and this notice when republishing.