K8s Cluster Setup Notes

Preparation:
1. Set the system hostname and the hosts file entries
hostname k8s-master …
vim /etc/hosts

2. Install dependency packages
yum -y install conntrack ntpdate ntp ipvsadm.x86_64 ipset jq iptables curl sysstat libseccomp wget git

3. Install iptables and configure its rules
yum -y install iptables

4. Disable SELinux and swap
setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
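The sed expression above comments out every fstab line containing " swap ". A minimal sketch of what it does, run here against a throwaway copy instead of the real /etc/fstab (the device names are made up for illustration):

```shell
# Sketch: demonstrate the swap-commenting sed on a temporary fstab copy.
fstab=$(mktemp)
cat > "$fstab" <<'EOF'
/dev/mapper/centos-root /        xfs     defaults 0 0
/dev/mapper/centos-swap swap     swap    defaults 0 0
EOF
# comment out any line containing " swap "; other lines are untouched
sed -i '/ swap / s/^\(.*\)$/#\1/g' "$fstab"
cat "$fstab"
```

Only the swap line gains a leading `#`; the root filesystem line is left as-is.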

5. Tune kernel parameters
cat > kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1 # required: pass bridged traffic to iptables
net.bridge.bridge-nf-call-ip6tables=1 # required: pass bridged IPv6 traffic to ip6tables
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0 # forbid swap; it is only used when the system hits OOM
vm.overcommit_memory=1 # do not check whether physical memory is sufficient
vm.panic_on_oom=0 # do not panic on OOM
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1 # required: disable IPv6
net.netfilter.nf_conntrack_max=2310720
EOF

Everything except the lines marked "required" keeps its default value.
cp kubernetes.conf /etc/sysctl.d/
sysctl -p /etc/sysctl.d/kubernetes.conf
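After applying the file, it is worth sanity-checking that the required keys are present with the expected values. The sketch below parses a copy of the conf file rather than querying the live kernel, so it works without root; the file contents mirror the required lines above:

```shell
# Sketch: verify kubernetes.conf carries the required key=value pairs.
conf=$(mktemp)
cat > "$conf" <<'EOF'
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
vm.swappiness=0
EOF

check() {
  # check <key> <expected-value>: look for an exact key=value line
  grep -qx "$1=$2" "$conf" && echo "OK  $1=$2" || echo "BAD $1"
}
check net.bridge.bridge-nf-call-iptables 1
check net.ipv4.ip_forward 1
check vm.swappiness 0
```

To check the running kernel instead, replace the grep with `sysctl -n <key>` and compare the output.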

6. Set the timezone to Shanghai # skip this if the default timezone is already correct
timedatectl set-timezone Asia/Shanghai
# write the current UTC time to the hardware clock
timedatectl set-local-rtc 0
# restart services that depend on the system time
systemctl restart rsyslog.service
systemctl restart crond

7. Stop services the system does not need
systemctl stop postfix.service
systemctl disable postfix.service

8. Configure rsyslogd and systemd-journald
mkdir -p /etc/systemd/journald.conf.d
cat > /etc/systemd/journald.conf.d/99-prophet.conf <<EOF
[Journal]
# persist logs to disk
Storage=persistent

# compress historical logs
Compress=yes

SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000

# maximum disk usage: 10G
SystemMaxUse=10G

# maximum size of a single log file: 200M
SystemMaxFileSize=200M

# keep logs for 2 weeks
MaxRetentionSec=2week

# do not forward logs to syslog
ForwardToSyslog=no
EOF

systemctl restart systemd-journald

9. Upgrade the kernel to 4.4 and make it the default boot entry
The 3.10.x kernel that ships with CentOS 7.x has bugs that make Docker and Kubernetes unstable, so upgrade to a 4.4 kernel.
wget https://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
rpm -Uvh elrepo-release-7.0-3.el7.elrepo.noarch.rpm
yum --enablerepo=elrepo-kernel install -y kernel-lt
# after installation, check that the matching kernel menuentry in /boot/grub2/grub.cfg contains an initrd16 line; if not, install again
grub2-set-default 'CentOS Linux (4.4.231-1.el7.elrepo.x86_64) 7 (Core)'

Installing kubeadm
1. Prerequisites for enabling IPVS in kube-proxy
modprobe br_netfilter

vim /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
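Instead of typing each modprobe line by hand, the module script can be generated from a list. This is a sketch that writes to a temp path so the result can be inspected before copying it to /etc/sysconfig/modules/ipvs.modules:

```shell
# Sketch: generate the ipvs.modules script from a single module list.
modules="ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4"
script=$(mktemp)
{
  echo '#!/bin/bash'
  for m in $modules; do
    echo "modprobe -- $m"
  done
} > "$script"
cat "$script"
```

Copy the generated file into place, then run the chmod/bash/lsmod pipeline above to load and verify the modules.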

2. Install Docker
All yum repositories below use the Aliyun mirrors.
1. Download the repo files:
centos:
wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
docker-ce:
wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

2. Install supporting packages:
yum -y install yum-utils device-mapper-persistent-data lvm2
yum -y install docker-ce

3. Create the /etc/docker directory
mkdir /etc/docker

4. Configure the daemon
vim /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  }
}
mkdir -p /etc/systemd/system/docker.service.d/ # directory for docker drop-in unit files
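The daemon.json can also be written non-interactively with a heredoc and sanity-checked before use. A sketch, writing to a temp path for illustration (copy it to /etc/docker/daemon.json when satisfied):

```shell
# Sketch: write daemon.json with a heredoc and verify the cgroup driver
# setting, instead of editing the file interactively in vim.
json=$(mktemp)
cat > "$json" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  }
}
EOF
grep -q 'native.cgroupdriver=systemd' "$json" && echo "cgroup driver: systemd"
```

Setting the cgroup driver to systemd matters because kubelet expects Docker and itself to use the same driver.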

5. Restart the Docker service
systemctl daemon-reload && systemctl restart docker && systemctl enable docker

6. Install kubeadm

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum install -y kubelet-1.15.0 kubeadm-1.15.0 kubectl-1.15.0 # install on both master and node machines
systemctl enable kubelet

7. Download the images; if the network is slow, they can be imported directly instead
kubeadm config images list
This prints the required image versions.
Write a script to download the corresponding images:
vim kubernetes-file.sh
#!/bin/bash
# pull the images
docker pull mirrorgcrio/kube-apiserver:v1.15.1
docker pull mirrorgcrio/kube-controller-manager:v1.15.1
docker pull mirrorgcrio/kube-scheduler:v1.15.1
docker pull mirrorgcrio/kube-proxy:v1.15.1
docker pull mirrorgcrio/pause:3.1
docker pull mirrorgcrio/etcd:3.3.10
docker pull mirrorgcrio/coredns:1.3.1

# retag the images to their k8s.gcr.io names
docker tag mirrorgcrio/kube-apiserver:v1.15.1 k8s.gcr.io/kube-apiserver:v1.15.1
docker tag mirrorgcrio/kube-controller-manager:v1.15.1 k8s.gcr.io/kube-controller-manager:v1.15.1
docker tag mirrorgcrio/kube-scheduler:v1.15.1 k8s.gcr.io/kube-scheduler:v1.15.1
docker tag mirrorgcrio/kube-proxy:v1.15.1 k8s.gcr.io/kube-proxy:v1.15.1
docker tag mirrorgcrio/pause:3.1 k8s.gcr.io/pause:3.1
docker tag mirrorgcrio/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
docker tag mirrorgcrio/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1

# remove the original images
docker image rm mirrorgcrio/kube-apiserver:v1.15.1
docker image rm mirrorgcrio/kube-controller-manager:v1.15.1
docker image rm mirrorgcrio/kube-scheduler:v1.15.1
docker image rm mirrorgcrio/kube-proxy:v1.15.1
docker image rm mirrorgcrio/pause:3.1
docker image rm mirrorgcrio/etcd:3.3.10
docker image rm mirrorgcrio/coredns:1.3.1

To make importing on other nodes easier, the images can be packed into a single archive:
docker save $(docker images | grep -v REPOSITORY | awk 'BEGIN{OFS=":";ORS=" "}{print $1,$2}') -o kubernetes.tar
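The pull/tag/rm script repeats each image name three times; the same commands can be derived from one list. A sketch, shown as a dry run that only prints the commands (pipe the output to `sh`, or drop the echoes, to actually execute them):

```shell
# Sketch: generate the docker pull/tag/rm commands from a single image list.
images="kube-apiserver:v1.15.1 kube-controller-manager:v1.15.1 \
kube-scheduler:v1.15.1 kube-proxy:v1.15.1 pause:3.1 etcd:3.3.10 coredns:1.3.1"
cmds=$(for img in $images; do
  echo "docker pull mirrorgcrio/$img"
  echo "docker tag mirrorgcrio/$img k8s.gcr.io/$img"
  echo "docker image rm mirrorgcrio/$img"
done)
echo "$cmds"
```

Keeping the version strings in one place also makes it harder to retag one image under the wrong version.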
8. Initialize the master node:
1) Generate the configuration file:
kubeadm config print init-defaults > kubeadm-config.yaml
2) Edit the file:
#vim kubeadm-config.yaml
kubernetesVersion: v1.15.0
advertiseAddress: 192.168.1.17 # change to this machine's IP address
bindPort: 6443 # keep the default port
Add a line before the serviceSubnet line:
podSubnet: "10.244.0.0/16"
Then append the following at the end of the file (note the --- document separator):

---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true # use IPVS for high availability
mode: ipvs

kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.log # initialize from the config file and log the output to kubeadm-init.log
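The manual edits to kubeadm-config.yaml can also be scripted. A sketch: it runs against a minimal stand-in for the networking section (the real file generated by `kubeadm config print init-defaults` has more fields), inserting podSubnet before the serviceSubnet line and appending the KubeProxyConfiguration document:

```shell
# Sketch: script the two kubeadm-config.yaml edits against a stand-in file.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
EOF
# insert podSubnet immediately before the serviceSubnet line
awk '/serviceSubnet:/{print "  podSubnet: \"10.244.0.0/16\""} {print}' \
  "$cfg" > "$cfg.new" && mv "$cfg.new" "$cfg"
# append the KubeProxyConfiguration as a separate YAML document
cat >> "$cfg" <<'EOF'
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs
EOF
cat "$cfg"
```

Point the awk/cat at the real kubeadm-config.yaml to apply the same edits there.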

9. Set up the user's kubeconfig
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

10. Install flannel
Check that there is only one node for now:
kubectl get node
NAME           STATUS     ROLES    AGE    VERSION
k8s-master01   NotReady   master   125m   v1.18.5

11. Edit /etc/kubernetes/manifests/kube-controller-manager.yaml on the master
Add the following two flags, otherwise flannel may fail to start. (The guide I originally followed omitted them, which caused errors.) The cluster-cidr value must match the network in kube-flannel.yml and the clusterCIDR in the kube-proxy configuration:

--allocate-node-cidrs=true
--cluster-cidr=10.244.0.0/16

    command:
    - kube-controller-manager
    - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --bind-address=127.0.0.1
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
    - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
    - --controllers=*,bootstrapsigner,tokencleaner
    - --kubeconfig=/etc/kubernetes/controller-manager.conf
    - --leader-elect=true
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --root-ca-file=/etc/kubernetes/pki/ca.crt
    - --service-account-private-key-file=/etc/kubernetes/pki/sa.key
    - --use-service-account-credentials=true
    - --allocate-node-cidrs=true
    - --cluster-cidr=10.244.0.0/16
12. Download the flannel manifest:
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
wget https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
If the download fails, create the file locally and paste the manifest contents in by hand.

kubectl create -f kube-flannel.yml
kubectl get pod -n kube-system
NAME                                  READY   STATUS              RESTARTS   AGE
coredns-66bff467f8-7jxkp              0/1     ContainerCreating   0          15m
coredns-66bff467f8-8mqgx              0/1     ContainerCreating   0          15m
etcd-k8smaster01                      1/1     Running             0          15m
kube-apiserver-k8smaster01            1/1     Running             0          15m
kube-controller-manager-k8smaster01   1/1     Running             0          14m
kube-flannel-ds-amd64-hs45g           1/1     Running             0          6m27s
kube-proxy-pkvgw                      1/1     Running             0          15m
kube-scheduler-k8smaster01            1/1     Running             0          15m
Afterwards, ifconfig shows a flannel network interface.

13. Join the node machines to the cluster
1) Copy the flannel environment configuration from the master to the node.
2) Import the images:
docker load -i kubernetes.tar # import the image archive built on the master machine
3) Join the cluster:
kubeadm init prints the join command at the end of its output.

14. Set up a private Harbor registry:
See the separate note: private Harbor registry.

15. Create a pod and scale it to 3 replicas
kubectl run nginx-deployment --image=<registry address>/library/nginx:latest --port=80 --replicas=1
kubectl get pod -o wide # show detailed pod information
kubectl scale --replicas=3 deployment/nginx-deployment # scale the pod to 3 replicas for load balancing

16. Expose the port for IPVS load-balanced access
kubectl expose deployment nginx-deployment --port=30000 --target-port=80
kubectl get svc # get the access IP address and port
NAME               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)     AGE
kubernetes         ClusterIP   10.96.0.1       <none>        443/TCP     68m
nginx-deployment   ClusterIP   10.108.74.124   <none>        30000/TCP   5m56s

17. Change the service type so the service is reachable from outside the cluster
kubectl edit svc nginx-deployment
type: NodePort
kubectl get svc
NAME               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)           AGE
kubernetes         ClusterIP   10.96.0.1       <none>        443/TCP           75m
nginx-deployment   NodePort    10.108.74.124   <none>        30000:30917/TCP   13m
30917 is the externally reachable port.
External machines can reach the service through <any node IP>:30917.


Copyright notice: this is an original article by weixin_45473752, licensed under CC 4.0 BY-SA. Please include a link to the original source and this notice when republishing.