k8s Learning Notes - Pod Scheduling

Resource Limits

The resources a Pod can consume are not unlimited; the host machine's capacity is always the hard boundary. To keep Pods from competing with each other for resources, resource requests and limits should be set on each Pod.

apiVersion: v1
kind: Pod
metadata:
  name: resource
spec:
  containers:
  - name: web
    image: nginx:1.16
    resources:
      requests:          # minimum guaranteed amount; the scheduler uses this for placement
        memory: "64Mi"
        cpu: "250m"
      limits:            # hard cap enforced at runtime
        memory: "128Mi"
        cpu: "500m"

Memory is specified in binary units such as Mi (mebibytes).
CPU units: 0.5 CPU = 500m (millicores), 1 CPU = 1000m.
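
The scheduler only places a Pod on a node whose unreserved capacity covers the Pod's requests; limits are enforced later at runtime. A quick way to inspect what a node has already committed (node1 here is just an example node name):

kubectl describe node node1 | grep -A 5 "Allocated resources"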

NodeSelector

Label a specific node:

kubectl label nodes node1 location=node1

Remove the label (note the trailing -):

kubectl label nodes node1 location-

Example:

apiVersion: v1
kind: Pod
metadata:
  name: nodeselector
spec:
  nodeSelector:        # only schedule onto nodes carrying this label
    location: node1
  containers:
  - name: nginx
    image: nginx:1.19
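
After applying the manifest, the Pod should land only on nodes labeled location=node1; if no node carries the label, the Pod stays Pending. A quick check (assuming the manifest above was saved as nodeselector.yaml, a hypothetical filename):

kubectl apply -f nodeselector.yaml
kubectl get pod nodeselector -o wide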

NodeAffinity

Node affinity works much like nodeSelector: it constrains which nodes a Pod can be scheduled onto based on node labels.
Compared to nodeSelector:
•matching supports richer logical combinations, not just exact string equality
•scheduling offers both soft and hard policies instead of only hard requirements
•hard (required): must be satisfied, otherwise the Pod is not scheduled
•soft (preferred): the scheduler tries to satisfy it but does not guarantee it (see the sketch after the example below)
Operators: In, NotIn, Exists, DoesNotExist, Gt, Lt

Example

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:   # hard rule, checked at scheduling time only
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
  containers:
  - name: nginx
    image: nginx:1.16
    imagePullPolicy: IfNotPresent
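
The example above uses only the hard policy. A minimal sketch of the soft form, assuming the same disktype=ssd node label, swaps in preferredDuringSchedulingIgnoredDuringExecution with a weight:

  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1               # 1-100; terms with higher weight are preferred
        preference:
          matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd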

Taints & Tolerations

Taints: prevent Pods from being scheduled onto specific Nodes.

kubectl taint node [node] key=value:[effect]
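
[effect] is one of NoSchedule, PreferNoSchedule, or NoExecute. For example, to apply the taint that the Deployment below tolerates (node1 is just an example node):

kubectl taint node node1 disktype=sata:NoSchedule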

Example

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: web
  name: web
spec:
  replicas: 10
  selector:
    matchLabels:
      app: web
  strategy: {}
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - image: nginx:1.16
        name: nginx
        resources: {}
      tolerations:       # allow these Pods onto nodes tainted disktype=sata:NoSchedule
      - key: "disktype"
        operator: "Equal"
        value: "sata"
        effect: "NoSchedule"
status: {}
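
With the taint in place, only Pods carrying a matching toleration can land on the tainted node; replicas without one are scheduled elsewhere. To remove the taint later, append - to the effect, mirroring the label-removal syntax above:

kubectl taint node node1 disktype=sata:NoSchedule-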

NodeName

Setting spec.nodeName binds the Pod directly to the named node, bypassing the scheduler entirely.

apiVersion: v1
kind: Pod
metadata:
  name: nodename
  labels:
    app: nginx
spec:
  nodeName: node2      # bind directly to node2; the scheduler is skipped
  containers:
  - name: nginx
    image: nginx:1.15

DaemonSet Controller

Runs one Pod on every Node.
Newly added Nodes automatically get a Pod as well.
Typical uses: network plugins, monitoring agents, log agents.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: filebeat
  template:
    metadata:
      labels:
        name: filebeat
    spec:
      containers:
      - name: log
        image: elastic/filebeat:7.10.2

Question for thought:
After applying the manifest above, filebeat will only run on node1 & node2. How can we also get filebeat running on the master?
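
Hint: the master is normally kept Pod-free by a built-in taint (node-role.kubernetes.io/master:NoSchedule on older clusters, node-role.kubernetes.io/control-plane on newer ones), so the answer follows from the Taints & Tolerations section above: add a matching toleration to the DaemonSet's Pod template, e.g.:

      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule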

