1. The three deployment methods officially provided by Kubernetes
minikube
Minikube is a tool that quickly runs a single-node Kubernetes cluster locally. It is intended for users who just want to try out Kubernetes or use it for day-to-day development.
Deployment guide: https://kubernetes.io/docs/setup/minikube/
kubeadm
Kubeadm is also a tool; it provides kubeadm init and kubeadm join for quickly deploying a Kubernetes cluster.
Deployment guide: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/
Binary packages (recommended)
Download the release binaries from the official site and deploy each component by hand to assemble a Kubernetes cluster.
Download: https://github.com/kubernetes/kubernetes/releases
2. Quickly deploying a K8s cluster with kubeadm
2.1 Environment preparation for the kubeadm deployment
Perform the following steps on all three nodes.
2.1.1 Environment roles
Environment: CentOS 7.4+
| IP | Role | Installed software |
|---|---|---|
| 192.168.56.102 | master | kube-apiserver kube-scheduler kube-controller-manager docker flannel kubelet |
| 192.168.56.103 | node1 | kubelet kube-proxy docker flannel |
| 192.168.56.104 | node2 | kubelet kube-proxy docker flannel |
Note: master hardware requirement: CPU >= 2 cores.
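A quick sanity check on each node before starting (a suggestion, not part of the original procedure; nproc and free are standard Linux utilities):
# nproc     # CPU core count; should print 2 or more on the master
# free -h   # total memory; 2 GB or more is recommended for kubeadm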
2.1.2 Initialize the machine environment
1. Disable the firewall
# systemctl stop firewalld && systemctl disable firewalld
2. Disable the swap partition
# swapoff -a    # temporary
# sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab    # permanent
3. Disable SELinux
# setenforce 0
# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
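To verify both changes took effect (a suggested check; getenforce ships with the SELinux utilities, free with procps):
# getenforce                # prints Permissive now, Disabled after the next reboot
# free -m | grep -i swap    # the Swap line should show 0 total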
4. Configure /etc/hosts
# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.56.102 master
192.168.56.103 node1
192.168.56.104 node2
5. Set the hostname on each host (run the matching command on its own node)
# hostnamectl set-hostname master
# hostnamectl set-hostname node1
# hostnamectl set-hostname node2
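Once the hostnames and /etc/hosts are in place, cross-node name resolution can be verified with plain hostname and ping (a suggested check):
# hostname          # should print master, node1, or node2, matching the table in 2.1.1
# ping -c 1 node1   # confirms the /etc/hosts entries resolve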
2.1.3 Install Docker
1. Download the Docker CE repository file (note: the yum install below actually pulls the distro's docker 1.13.1 package; to install from this repo instead, use docker-ce)
# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
2. Install Docker
# yum -y install docker
3. Enable and start the Docker service
# systemctl enable docker && systemctl start docker
4. Check the version
# docker --version
Docker version 1.13.1, build 7d71120/1.13.1
2.1.4 Install kubeadm, kubelet, and kubectl
1. Add the Kubernetes YUM repository
# cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
2. Install kubeadm, kubelet, and kubectl
# yum install -y kubelet-1.15.0 kubeadm-1.15.0 kubectl-1.15.0
# systemctl enable kubelet
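A version check confirms all three tools landed at the matching 1.15.0 release (standard version flags; just a sanity pass):
# kubeadm version -o short            # v1.15.0
# kubelet --version                   # Kubernetes v1.15.0
# kubectl version --client --short    # Client Version: v1.15.0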
2.2 Deploy the Kubernetes master
Run the following on the master node only.
2.2.1 Configure the Aliyun Docker registry mirror
# cat /etc/docker/daemon.json
{
"registry-mirrors":["https://6kx4zyno.mirror.aliyuncs.com"]
}
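The mirror only takes effect after Docker is restarted; docker info can confirm it was picked up:
# systemctl restart docker
# docker info | grep -A1 'Registry Mirrors'    # should list the aliyuncs mirror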
2.2.2 Pull the master images and initialize the control plane
Change --apiserver-advertise-address below to your own master address. --service-cidr and --pod-network-cidr set the Service and Pod network ranges; the Pod CIDR must match the flannel configuration in section 2.4.
kubeadm init \
--apiserver-advertise-address=192.168.56.102 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.15.0 \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.244.0.0/16
If anything above went wrong, reset and then rerun the init:
# kubeadm reset
Output:
[init] Using Kubernetes version: v1.15.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.1.0.1 192.168.56.102]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [192.168.56.102 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master localhost] and IPs [192.168.56.102 127.0.0.1 ::1]
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 17.504274 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: ei1jt9.yspq53ljxove1e7g
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.56.102:6443 --token ei1jt9.yspq53ljxove1e7g \
--discovery-token-ca-cert-hash sha256:1cb76c6105563a913bb290aaac1d0f94a68229011237a8dc8e416da31193509e
Note: the final kubeadm join ... line above is how worker nodes join the cluster; run it on each node. The token is valid for 24 hours; after that a new one must be generated (see 2.3.2).
2.2.3 Configure the master environment
Following the hint printed in 2.2.2:
# mkdir -p $HOME/.kube
# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# chown $(id -u):$(id -g) $HOME/.kube/config
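At this point kubectl can reach the API server. The master will report NotReady until the network plugin from section 2.4 is installed; that is expected:
# kubectl get nodes    # master shows NotReady until flannel is applied in 2.4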
2.3 Deploy the Kubernetes nodes
Register the node with the master.
Format: kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
2.3.1 Within 24 hours of the master deployment
The kubeadm join command was already generated by kubeadm init above.
Run the following on node1 and node2:
# kubeadm join 192.168.56.102:6443 --token ei1jt9.yspq53ljxove1e7g \
--discovery-token-ca-cert-hash sha256:1cb76c6105563a913bb290aaac1d0f94a68229011237a8dc8e416da31193509e
Output:
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
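The IsDockerSystemdCheck warning above is harmless for a test cluster. To avoid it, Docker's cgroup driver can be switched to systemd in /etc/docker/daemon.json (a hedged sketch: merge this with the registry-mirrors entry from 2.2.1, restart Docker, and ideally make the change before kubeadm init/join, since kubelet detects the Docker cgroup driver at that time):
# cat /etc/docker/daemon.json
{
    "registry-mirrors": ["https://6kx4zyno.mirror.aliyuncs.com"],
    "exec-opts": ["native.cgroupdriver=systemd"]
}
# systemctl restart docker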
2.3.2 More than 24 hours after the master deployment
The initial token expires after 24 hours and must be regenerated.
1. List tokens (the one below is still valid, with 20h left before expiry)
# kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
ei1jt9.yspq53ljxove1e7g 20h 2021-06-09T23:47:53-04:00 authentication,signing The default bootstrap token generated by 'kubeadm init'. system:bootstrappers:kubeadm:default-node-token
2. Generate a new token
# kubeadm token create
0w3a92.ijgba9ia0e3scicg
3. Compute the sha256 hash of the CA certificate
# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
1cb76c6105563a913bb290aaac1d0f94a68229011237a8dc8e416da31193509e
4. Join the node to the cluster (using the new token and the hash computed above)
# kubeadm join 192.168.56.102:6443 --token 0w3a92.ijgba9ia0e3scicg --discovery-token-ca-cert-hash sha256:1cb76c6105563a913bb290aaac1d0f94a68229011237a8dc8e416da31193509e
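Alternatively, steps 2 through 4 can be collapsed into a single command: kubeadm can create a fresh token and print the complete join command in one go (supported by kubeadm 1.15):
# kubeadm token create --print-join-command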
2.4 Install the network plugin
Run the following on the master node only.
2.4.1 Download the manifest
# wget https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
With the default quay.io image the deployment usually fails to pull, so the manifest below swaps in a mirrored image; the resulting kube-flannel.yml is:
# cat kube-flannel.yml
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: psp.flannel.unprivileged
annotations:
seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
privileged: false
volumes:
- configMap
- secret
- emptyDir
- hostPath
allowedHostPaths:
- pathPrefix: "/etc/cni/net.d"
- pathPrefix: "/etc/kube-flannel"
- pathPrefix: "/run/flannel"
readOnlyRootFilesystem: false
# Users and groups
runAsUser:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
fsGroup:
rule: RunAsAny
# Privilege Escalation
allowPrivilegeEscalation: false
defaultAllowPrivilegeEscalation: false
# Capabilities
allowedCapabilities: ['NET_ADMIN']
defaultAddCapabilities: []
requiredDropCapabilities: []
# Host namespaces
hostPID: false
hostIPC: false
hostNetwork: true
hostPorts:
- min: 0
max: 65535
# SELinux
seLinux:
# SELinux is unused in CaaSP
rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: flannel
rules:
- apiGroups: ['extensions']
resources: ['podsecuritypolicies']
verbs: ['use']
resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- apiGroups:
- ""
resources:
- nodes
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes/status
verbs:
- patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: flannel
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: flannel
subjects:
- kind: ServiceAccount
name: flannel
namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: flannel
namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
name: kube-flannel-cfg
namespace: kube-system
labels:
tier: node
app: flannel
data:
cni-conf.json: |
{
"name": "cbr0",
"cniVersion": "0.3.1",
"plugins": [
{
"type": "flannel",
"delegate": {
"hairpinMode": true,
"isDefaultGateway": true
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
}
]
}
net-conf.json: |
{
"Network": "10.244.0.0/16",
"Backend": {
"Type": "vxlan"
}
}
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-flannel-ds-amd64
namespace: kube-system
labels:
tier: node
app: flannel
spec:
selector:
matchLabels:
app: flannel
template:
metadata:
labels:
tier: node
app: flannel
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: beta.kubernetes.io/os
operator: In
values:
- linux
- key: beta.kubernetes.io/arch
operator: In
values:
- amd64
hostNetwork: true
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
- name: install-cni
# image: quay.io/coreos/flannel:v0.11.0-amd64
image: lizhenliang/flannel:v0.11.0-amd64
command:
- cp
args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
containers:
- name: kube-flannel
#image: quay.io/coreos/flannel:v0.11.0-amd64
image: lizhenliang/flannel:v0.11.0-amd64
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: false
capabilities:
add: ["NET_ADMIN"]
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: run
mountPath: /run/flannel
- name: flannel-cfg
mountPath: /etc/kube-flannel/
volumes:
- name: run
hostPath:
path: /run/flannel
- name: cni
hostPath:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
2.4.2 Apply the flannel manifest
# kubectl apply -f kube-flannel.yml
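The flannel image may take a minute or two to pull on each node; progress can be watched with standard kubectl (Ctrl-C stops the watch):
# kubectl get pods -n kube-system -w    # wait until every kube-flannel-ds pod is Running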
Check the node status of the cluster. Once the network plugin is installed, wait until every node shows Ready, as below, before continuing:
# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 3h18m v1.15.0
node1 Ready <none> 178m v1.15.0
node2 Ready <none> 178m v1.15.0
# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-bccdc95cf-6cs28 1/1 Running 0 3h18m
coredns-bccdc95cf-jztbl 1/1 Running 0 3h18m
etcd-master 1/1 Running 0 3h18m
kube-apiserver-master 1/1 Running 0 3h18m
kube-controller-manager-master 1/1 Running 0 3h17m
kube-flannel-ds-amd64-49w26 1/1 Running 0 152m
kube-flannel-ds-amd64-vtcfw 1/1 Running 0 152m
kube-flannel-ds-amd64-xzg4s 1/1 Running 0 152m
kube-proxy-bz2b7 1/1 Running 0 179m
kube-proxy-hznlg 1/1 Running 0 3h18m
kube-proxy-pp9tk 1/1 Running 0 179m
kube-scheduler-master 1/1 Running 0 3h17m
Proceed only when every pod shows 1/1 READY. If the flannel pods are not coming up, check the network, then redo the deployment:
# kubectl delete -f kube-flannel.yml
Re-download the manifest with wget, modify the image address again, and re-apply:
# kubectl apply -f kube-flannel.yml
2.5 Test the Kubernetes cluster
Create a pod in the cluster and expose a port to verify it is reachable:
# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
# kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed
# kubectl get pods,svc
NAME READY STATUS RESTARTS AGE
pod/nginx-554b9c67f9-dz8g7 1/1 Running 0 152m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.1.0.1 <none> 443/TCP 3h21m
service/nginx NodePort 10.1.54.55 <none> 80:32641/TCP 152m
# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-554b9c67f9-dz8g7 1/1 Running 0 153m 10.244.1.3 node2 <none> <none>
The pod landed on node2.
Access URL: http://NodeIP:Port, in this case http://192.168.56.104:32641 (nginx is served successfully).
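The NodePort can also be checked from the command line on any machine that reaches the nodes (assuming curl is installed):
# curl -I http://192.168.56.104:32641    # expect HTTP/1.1 200 OK from nginx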