K8s Cluster Deployment

I. Initializing the Cluster

1. Initialize on the master node

Kubernetes 1.8 and later require swap to be disabled; with the default configuration, kubelet will fail to start if swap is on. Disable swap as follows:

swapoff -a
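Note that swapoff -a only disables swap until the next reboot. To keep it off permanently, also comment out the swap entries in /etc/fstab. A minimal sketch; the FSTAB variable is my own addition so the edit can be rehearsed on a copy of the file first, and the sed pattern assumes a standard fstab layout:

```shell
# Disable swap immediately (requires root; harmless to re-run)
swapoff -a 2>/dev/null || true

# Keep swap off after a reboot by commenting out swap entries in fstab.
# FSTAB defaults to /etc/fstab but can be pointed at a copy for a dry run.
FSTAB=${FSTAB:-/etc/fstab}
cp "$FSTAB" "$FSTAB.bak" 2>/dev/null || true
# Comment out any not-yet-commented line that mentions a swap filesystem
sed -ri '/[[:space:]]swap[[:space:]]/ s/^([^#])/#\1/' "$FSTAB"
```

After editing, `grep swap /etc/fstab` should show every swap line starting with `#`.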

Start the initialization:

kubeadm init --kubernetes-version=v1.20.4 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=<your-master-IP> --ignore-preflight-errors=Swap

If the following output appears, the initialization succeeded.

Save a copy of this output:

[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.9.29.106:6443 --token x8awau.rxteung7iir394e8 \
    --discovery-token-ca-cert-hash sha256:273528cee3b448a4c92394b0844021ea2514c1c895742ca5835f35d3fecac494 

The output contains the following key items:

[kubelet] generates the kubelet configuration file "/var/lib/kubelet/config.yaml"

[certificates] generates the various certificates needed

[kubeconfig] generates the related kubeconfig files

[bootstraptoken] generates the bootstrap token; record it, as it is needed later when adding nodes to the cluster with kubeadm join

2. Configure kubectl on the master node

Configure it following the instructions in the output you copied.

If you want to run the cluster as a regular user, do the following:

rm -rf $HOME/.kube
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

If you run the cluster as the root user, run:

export KUBECONFIG=/etc/kubernetes/admin.conf

3. View the current nodes

kubectl get nodes

II. Configuring the Network Plugin

1. Download the YAML manifest on the master node

cd ~ && mkdir flannel && cd flannel
wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

You need to set your own NIC name in the manifest; if a node has multiple NICs, specify the one flannel should use.
Check whether yours is ens33 or eth0.
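If you are unsure of the interface name, the NIC carrying the default route is usually the one to pass to flannel's --iface. A quick way to print it; the awk parsing assumes the usual iproute2 "default via <gw> dev <iface> ..." output format:

```shell
# Print the interface of the default route, e.g. "ens33" or "eth0".
# The guard skips the lookup on systems without the ip tool.
if command -v ip >/dev/null; then
  ip route show default \
    | awk '{for (i = 1; i < NF; i++) if ($i == "dev") print $(i + 1)}' \
    | head -n1
fi
```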

[root@k8s-master flannel]# cat kube-flannel.yml 
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.13.1-rc2
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.13.1-rc2
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=ens33              # change this to your NIC name
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg

2. Apply the manifest

kubectl apply -f ~/flannel/kube-flannel.yml
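To confirm that the DaemonSet actually finished rolling out before moving on, you can block on its status; the name kube-flannel-ds and the app=flannel label come from the manifest above:

```shell
# Wait until every flannel pod is ready, or fail after the timeout
kubectl -n kube-system rollout status ds/kube-flannel-ds --timeout=180s
# List the flannel pods together with the node each one landed on
kubectl -n kube-system get pods -l app=flannel -o wide
```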

3. Verify

[root@k8s-master flannel]# kubectl get pods --namespace kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-74ff55c5b-cjc8x              1/1     Running   0          20m
coredns-74ff55c5b-stf4z              1/1     Running   0          20m
etcd-k8s-master                      1/1     Running   0          20m
kube-apiserver-k8s-master            1/1     Running   0          20m
kube-controller-manager-k8s-master   1/1     Running   0          20m
kube-flannel-ds-2m646                1/1     Running   0          76s
kube-proxy-4qqqr                     1/1     Running   0          20m
kube-scheduler-k8s-master            1/1     Running   0          20m

[root@k8s-master flannel]# kubectl get service
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   20m

[root@k8s-master flannel]# kubectl get svc --namespace kube-system
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   20m

[root@k8s-master flannel]# kubectl get nodes
NAME         STATUS   ROLES                  AGE   VERSION
k8s-master   Ready    control-plane,master   20m   v1.20.4

A node shows the Ready status only after the network plugin has been installed and configured.

III. Joining All Worker Nodes to the Cluster

Run the following on every worker node; it is the command returned after the master was initialized successfully.
Note: this runs on the worker nodes, not the master.
The token you copied earlier is used here.

swapoff -a
kubeadm join 10.9.29.106:6443 --token x8awau.rxteung7iir394e8 \
    --discovery-token-ca-cert-hash sha256:273528cee3b448a4c92394b0844021ea2514c1c895742ca5835f35d3fecac494 
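The bootstrap token printed by kubeadm init expires after 24 hours by default. If it has expired by the time you add a node, generate a fresh join command on the master instead of reusing the one above:

```shell
# Inspect existing tokens and their expiry times
kubeadm token list
# Create a new token and print a ready-to-use "kubeadm join ..." line
kubeadm token create --print-join-command
```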

IV. Checking the Cluster

After the nodes have joined the cluster, check again:

[root@k8s-master flannel]# kubectl get pods -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-74ff55c5b-cjc8x              1/1     Running   0          31m
coredns-74ff55c5b-stf4z              1/1     Running   0          31m
etcd-k8s-master                      1/1     Running   0          31m
kube-apiserver-k8s-master            1/1     Running   0          31m
kube-controller-manager-k8s-master   1/1     Running   0          31m
kube-flannel-ds-2m646                1/1     Running   0          12m  *
kube-proxy-4qqqr                     1/1     Running   0          31m
kube-scheduler-k8s-master            1/1     Running   0          31m

[*] If a pod is stuck at 0/1 and will not start for a long time, you can delete it and wait for the cluster to create a replacement pod.

kubectl delete pod kube-flannel-ds-sr6tq -n kube-system
pod "kube-flannel-ds-sr6tq" deleted
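Deleting stuck pods one at a time gets tedious. A small sketch that removes every kube-system pod not in the Running or Completed state so that their controllers recreate them; it assumes the default kubectl column layout, where STATUS is the third column:

```shell
# Column 3 of "kubectl get pods" output is STATUS; delete anything unhealthy.
# xargs -r does nothing when the list is empty.
kubectl get pods -n kube-system --no-headers \
  | awk '$3 != "Running" && $3 != "Completed" {print $1}' \
  | xargs -r kubectl delete pod -n kube-system
```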

After deleting it, wait a moment and check again; the status is back to normal.

[root@k8s-master flannel]# kubectl get pods -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-74ff55c5b-cjc8x              1/1     Running   0          45m
coredns-74ff55c5b-stf4z              1/1     Running   0          45m
etcd-k8s-master                      1/1     Running   0          45m
kube-apiserver-k8s-master            1/1     Running   0          45m
kube-controller-manager-k8s-master   1/1     Running   0          45m
kube-flannel-ds-2m646                1/1     Running   0          26m
kube-flannel-ds-79vfz                1/1     Running   0          2m37s
kube-flannel-ds-wm7bc                1/1     Running   0          2m39s
kube-proxy-4qqqr                     1/1     Running   0          45m
kube-proxy-b57j7                     1/1     Running   0          2m39s
kube-proxy-k2w67                     1/1     Running   0          2m37s
kube-scheduler-k8s-master            1/1     Running   0          45m
[root@k8s-master flannel]# kubectl get nodes
NAME         STATUS   ROLES                  AGE     VERSION
k8s-master   Ready    control-plane,master   45m     v1.20.4
k8s-node1    Ready    <none>                 2m40s   v1.20.4
k8s-node2    Ready    <none>                 2m42s   v1.20.4

1. Cleaning up the cluster

Remove the local reference to the cluster:

kubectl config delete-cluster

Removing a node
Talking to the control-plane node with the appropriate credentials, run:

kubectl drain <node name> --delete-local-data --force --ignore-daemonsets

[*] Before removing the node, reset the state installed by kubeadm:

kubeadm reset

The reset process does not reset or clean up iptables rules or IPVS tables. If you want to reset iptables, you must do so manually:

iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

If you want to reset the IPVS tables, run:

ipvsadm -C

Now remove the node:

kubectl delete node <node name>
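The removal steps above can be collected into one hedged script. NODE is a placeholder for the node being removed, and --delete-local-data matches the v1.20-era kubectl used in this document (newer releases renamed it --delete-emptydir-data):

```shell
#!/usr/bin/env bash
set -euo pipefail
NODE=${NODE:-k8s-node1}   # placeholder: name of the node to remove

# 1. Evict the workloads (run on the master)
kubectl drain "$NODE" --delete-local-data --force --ignore-daemonsets

# 2. On the node itself, undo what kubeadm installed
#    (shown as a reminder; run these via ssh or on the node's console):
#      kubeadm reset
#      iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
#      ipvsadm -C

# 3. Finally remove the node object from the cluster
kubectl delete node "$NODE"
```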

If you want to start over, simply run kubeadm init or kubeadm join with the appropriate arguments.

Cleaning up the control plane
You can run kubeadm reset on the control-plane host to trigger a best-effort cleanup.

See the [kubeadm reset] reference documentation for more information about this subcommand and its options.

V. Working with Kubernetes

1. Write a YAML file

Kubernetes recommends using configuration files to run one or more containers.
The configuration file can be in YAML or JSON format.

Here we run an nginx container as an example.

nginx.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.8
        ports:
        - containerPort: 80

2. Apply the YAML file

kubectl apply -f nginx.yml
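Rather than polling the pod list by hand, you can let kubectl wait for the Deployment to finish; the name nginx-deployment comes from the manifest above:

```shell
# Block until both replicas are available, or fail after the timeout
kubectl rollout status deployment/nginx-deployment --timeout=120s
# Show the Deployment's replica counts at a glance
kubectl get deployment nginx-deployment
```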

3. Get the pod list

The node running a pod has to pull the container image first, so the pod may not be ready immediately.

kubectl get pods -l app=nginx
[root@k8s-master yaml]# kubectl get pods -l app=nginx
NAME                                READY   STATUS              RESTARTS   AGE
nginx-deployment-64c9d67564-847kn   0/1     ContainerCreating   0          16s
nginx-deployment-64c9d67564-k5sjf   0/1     ContainerCreating   0          16s

4. View a pod's event information

kubectl describe pods <your-nginx-pod-name>
kubectl describe pods nginx-deployment-64c9d67564-k5sjf

In the output below, the final Pulling image line shows that the container's image is still being downloaded.

Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  75s   default-scheduler  Successfully assigned default/nginx-deployment-64c9d67564-847kn to k8s-node1
  Normal  Pulling    67s   kubelet            Pulling image "nginx:1.8"

The first line of the events above shows which node the container is running on.

The following output indicates the container is running successfully:

Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  46m   default-scheduler  Successfully assigned default/nginx-deployment-64c9d67564-847kn to k8s-node1
  Normal  Pulling    45m   kubelet            Pulling image "nginx:1.8"
  Normal  Pulled     43m   kubelet            Successfully pulled image "nginx:1.8" in 2m15.560747316s
  Normal  Created    43m   kubelet            Created container nginx
  Normal  Started    43m   kubelet            Started container nginx

VI. Rolling Updates of Kubernetes API Objects

1. Modify the previous YAML file

Change the value of the image: field in nginx.yml:

from
image: nginx:1.8

to
image: nginx:1.9

Then apply the change:

kubectl apply -f nginx.yml

Over the next few minutes, run the following commands repeatedly to watch the whole update process (the scheduled nodes will pull the new image and run the containers in turn).

kubectl get pods
kubectl describe pods
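kubectl can also follow the rollout directly, and roll it back if the new image misbehaves:

```shell
# Follow the rolling update until it completes
kubectl rollout status deployment/nginx-deployment
# Show past revisions of the Deployment
kubectl rollout history deployment/nginx-deployment
# Revert to the previous revision (e.g. back from nginx:1.9 to nginx:1.8)
kubectl rollout undo deployment/nginx-deployment
```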

This update example should make one thing clear:
Kubernetes automatically works out the concrete steps from whatever changed in the YAML file.

VII. Deleting Objects

kubectl delete -f nginx.yml

VIII. kubectl Command Completion

See the official documentation on kubectl command completion.

yum install bash-completion
echo 'source <(kubectl completion bash)' >>~/.bashrc
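If you shorten kubectl to a k alias, completion can be extended to the alias as well; these lines follow the pattern from the upstream completion documentation:

```shell
# Make bash completion work for a short "k" alias too
echo 'alias k=kubectl' >>~/.bashrc
echo 'complete -o default -F __start_kubectl k' >>~/.bashrc
```

Open a new shell (or `source ~/.bashrc`) for the alias and completion to take effect.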

Copyright notice: This is an original article by Houaki, licensed under CC 4.0 BY-SA. Please include a link to the original source and this notice when reposting.