51. Cluster Scheduling: Taints and Tolerations
Published: 2023-01-08 14:45:52    Editor: 雪饮
Step1
We currently have three nodes:
[root@k8s-master01 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master01 Ready master 25d v1.15.1
k8s-node01 Ready <none> 25d v1.15.1
k8s-node02 Ready <none> 25d v1.15.1
First, let's look at the details of the master node:
[root@k8s-master01 ~]# kubectl describe node k8s-master01
Name: k8s-master01
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=k8s-master01
kubernetes.io/os=linux
node-role.kubernetes.io/master=
Annotations: flannel.alpha.coreos.com/backend-data: {"VtepMAC":"ce:35:55:57:4f:73"}
flannel.alpha.coreos.com/backend-type: vxlan
flannel.alpha.coreos.com/kube-subnet-manager: true
flannel.alpha.coreos.com/public-ip: 192.168.66.10
kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Tue, 13 Dec 2022 22:27:00 +0800
Taints: node-role.kubernetes.io/master:NoSchedule
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
NetworkUnavailable False Sun, 08 Jan 2023 13:45:54 +0800 Sun, 08 Jan 2023 13:45:54 +0800 FlannelIsUp Flannel is running on this node
MemoryPressure False Sun, 08 Jan 2023 13:51:48 +0800 Sat, 31 Dec 2022 20:39:25 +0800 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sun, 08 Jan 2023 13:51:48 +0800 Sat, 31 Dec 2022 20:39:25 +0800 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sun, 08 Jan 2023 13:51:48 +0800 Sat, 31 Dec 2022 20:39:25 +0800 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Sun, 08 Jan 2023 13:51:48 +0800 Sun, 01 Jan 2023 11:53:38 +0800 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.66.10
Hostname: k8s-master01
Capacity:
cpu: 4
ephemeral-storage: 51175Mi
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 4002416Ki
pods: 110
Allocatable:
cpu: 4
ephemeral-storage: 48294789041
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 3900016Ki
pods: 110
System Info:
Machine ID: d91e1539f6204c5cb9395fac14ff180a
System UUID: 36504d56-4dc1-9048-935d-e53650cc9977
Boot ID: 91fb842c-fd08-463e-940f-b3232c606d72
Kernel Version: 5.4.225-1.el7.elrepo.x86_64
OS Image: CentOS Linux 7 (Core)
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.21
Kubelet Version: v1.15.1
Kube-Proxy Version: v1.15.1
PodCIDR: 10.224.0.0/24
Non-terminated Pods: (7 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system coredns-5c98db65d4-kvgj2 100m (2%) 0 (0%) 70Mi (1%) 170Mi (4%) 4d15h
kube-system etcd-k8s-master01 0 (0%) 0 (0%) 0 (0%) 0 (0%) 25d
kube-system kube-apiserver-k8s-master01 250m (6%) 0 (0%) 0 (0%) 0 (0%) 25d
kube-system kube-controller-manager-k8s-master01 200m (5%) 0 (0%) 0 (0%) 0 (0%) 25d
kube-system kube-flannel-ds-amd64-xsq8h 100m (2%) 100m (2%) 50Mi (1%) 50Mi (1%) 11d
kube-system kube-proxy-4lwzp 0 (0%) 0 (0%) 0 (0%) 0 (0%) 25d
kube-system kube-scheduler-k8s-master01 100m (2%) 0 (0%) 0 (0%) 0 (0%) 25d
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 750m (18%) 100m (2%)
memory 120Mi (3%) 220Mi (5%)
ephemeral-storage 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 4d16h kubelet, k8s-master01 Starting kubelet.
Normal NodeHasSufficientMemory 4d16h (x8 over 4d16h) kubelet, k8s-master01 Node k8s-master01 status is now: NodeHasSufficientMemory
Normal NodeHasSufficientPID 4d16h (x7 over 4d16h) kubelet, k8s-master01 Node k8s-master01 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 4d16h kubelet, k8s-master01 Updated Node Allocatable limit across pods
Normal NodeHasNoDiskPressure 4d16h (x8 over 4d16h) kubelet, k8s-master01 Node k8s-master01 status is now: NodeHasNoDiskPressure
Normal Starting 4d16h kube-proxy, k8s-master01 Starting kube-proxy.
Normal NodeAllocatableEnforced 4d15h kubelet, k8s-master01 Updated Node Allocatable limit across pods
Normal Starting 4d15h kubelet, k8s-master01 Starting kubelet.
Normal NodeHasSufficientMemory 4d15h (x8 over 4d15h) kubelet, k8s-master01 Node k8s-master01 status is now: NodeHasSufficientMemory
Normal NodeHasSufficientPID 4d15h (x7 over 4d15h) kubelet, k8s-master01 Node k8s-master01 status is now: NodeHasSufficientPID
Normal NodeHasNoDiskPressure 4d15h (x8 over 4d15h) kubelet, k8s-master01 Node k8s-master01 status is now: NodeHasNoDiskPressure
Normal Starting 4d15h kube-proxy, k8s-master01 Starting kube-proxy.
Normal Starting 2d16h kubelet, k8s-master01 Starting kubelet.
Normal NodeHasSufficientMemory 2d16h (x8 over 2d16h) kubelet, k8s-master01 Node k8s-master01 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 2d16h (x8 over 2d16h) kubelet, k8s-master01 Node k8s-master01 status is now: NodeHasNoDiskPressure
Normal NodeAllocatableEnforced 2d16h kubelet, k8s-master01 Updated Node Allocatable limit across pods
Normal NodeHasSufficientPID 2d16h (x7 over 2d16h) kubelet, k8s-master01 Node k8s-master01 status is now: NodeHasSufficientPID
Normal Starting 2d16h kube-proxy, k8s-master01 Starting kube-proxy.
Normal Starting 40h kubelet, k8s-master01 Starting kubelet.
Normal NodeAllocatableEnforced 40h kubelet, k8s-master01 Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 40h (x8 over 40h) kubelet, k8s-master01 Node k8s-master01 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 40h (x8 over 40h) kubelet, k8s-master01 Node k8s-master01 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 40h (x7 over 40h) kubelet, k8s-master01 Node k8s-master01 status is now: NodeHasSufficientPID
Normal Starting 40h kube-proxy, k8s-master01 Starting kube-proxy.
Normal Starting 6m20s kubelet, k8s-master01 Starting kubelet.
Normal NodeHasSufficientMemory 6m18s (x8 over 6m20s) kubelet, k8s-master01 Node k8s-master01 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 6m18s (x8 over 6m20s) kubelet, k8s-master01 Node k8s-master01 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 6m18s (x7 over 6m20s) kubelet, k8s-master01 Node k8s-master01 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 6m18s kubelet, k8s-master01 Updated Node Allocatable limit across pods
Normal Starting 6m10s kube-proxy, k8s-master01 Starting kube-proxy.
As you can see, the master node carries a taint with the NoSchedule effect, which means Kubernetes will not schedule Pods onto a node with this taint (unless they tolerate it).
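If you only need the taint rather than the full describe output, a quick check such as the following also works (a minimal sketch; the jsonpath form is my own shorthand, not from the original walkthrough):
kubectl describe node k8s-master01 | grep Taints
# or print the raw taint objects from the node spec:
kubectl get node k8s-master01 -o jsonpath='{.spec.taints}'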
Step2
Let's see how the Pods are currently distributed across the nodes:
[root@k8s-master01 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
node01 1/1 Running 1 39h 10.224.2.209 k8s-node02 <none> <none>
pod-3 1/1 Running 1 39h 10.224.2.210 k8s-node02 <none> <none>
As you can see, these Pods are all running on node 2 (k8s-node02).
Now let's add a taint to node 2:
[root@k8s-master01 ~]# kubectl taint nodes k8s-node02 check=wwangyang:NoExecute
node/k8s-node02 tainted
Here, check is the taint key and wwangyang is its value.
The key and value can be anything you like, but later features (such as tolerations) will reference them, so make sure you remember what you chose.
NoExecute: Kubernetes will not schedule new Pods onto a node with this taint, and any Pods already running on it that do not tolerate the taint will be evicted.
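For reference, the general form of the taint command and the three effects defined by Kubernetes are roughly:
kubectl taint nodes <node-name> <key>=<value>:<effect>
# <effect> is one of:
#   NoSchedule       - new Pods without a matching toleration will not be scheduled onto the node
#   PreferNoSchedule - a "soft" NoSchedule: the scheduler tries to avoid the node but may still use it
#   NoExecute        - new Pods are not scheduled, and running Pods without a matching toleration are evicted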
After a short while, the Pods on node 2 are evicted because the node is now tainted, and they disappear:
[root@k8s-master01 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
node01 0/1 Terminating 1 40h 10.224.2.209 k8s-node02 <none> <none>
pod-3 0/1 Terminating 1 39h 10.224.2.210 k8s-node02 <none> <none>
[root@k8s-master01 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
node01 0/1 Terminating 1 40h 10.224.2.209 k8s-node02 <none> <none>
pod-3 0/1 Terminating 1 39h 10.224.2.210 k8s-node02 <none> <none>
[root@k8s-master01 ~]# kubectl get pod -o wide
No resources found.
Because these Pods were created directly rather than managed by a Deployment or StatefulSet, they are not automatically recreated on another suitable node. Node 1 is still untainted at this point, yet the evicted Pods do not move there; they are simply gone.
In fact, suppose node 1 also had Pods running on it, and we then taint node 1 as well to evict them:
[root@k8s-master01 ~]# kubectl taint nodes k8s-node01 check=wangyang:NoExecute
node/k8s-node01 tainted
At that point kubectl get pod may show Pods stuck in the Pending state; strictly speaking this applies to controller-managed Pods, which get recreated but cannot be placed anywhere, while bare Pods like the ones above are simply evicted. This detail is not especially important in itself; the actual outcome depends on the Pod's YAML manifest, and if node affinity or other constraints are also in play the result may differ. A controller-managed sketch follows below.
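To actually observe that Pending behavior, a controller-managed workload is needed. Here is a minimal Deployment sketch for that purpose; the name taint-demo is my own placeholder, not something from the original article. Its Pod would be evicted by a NoExecute taint and then recreated by the ReplicaSet, sitting in Pending until some node becomes schedulable again:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: taint-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: taint-demo
  template:
    metadata:
      labels:
        app: taint-demo
    spec:
      containers:
      - name: taint-demo
        image: wangyanglinux/myapp:v1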
Step3
Next, let's create another Pod:
[root@k8s-master01 affi]# cat pod9.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-3
  labels:
    app: pod-3
spec:
  containers:
  - name: pod-3
    image: wangyanglinux/myapp:v1
  tolerations:
  - key: "check"
    operator: "Equal"
    value: "wangyang"
    effect: "NoExecute"
    tolerationSeconds: 3600
This Pod has a toleration configured: key check, value wangyang, and effect NoExecute (eviction). The tolerationSeconds: 3600 field means that if a matching NoExecute taint is present on the node, the Pod may remain there for at most 3600 seconds before being evicted.
Of the two taints applied above, only the one on node01 matches this toleration; the taint on node02 does not, because its value was typed with an extra character (wwangyang).
So the Pod naturally ends up scheduled onto node01:
[root@k8s-master01 affi]# kubectl create -f pod9.yaml
pod/pod-3 created
[root@k8s-master01 affi]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-3 1/1 Running 0 34s 10.224.1.196 k8s-node01 <none> <none>
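As an aside, if the Pod should tolerate the check taint no matter what its value is (so the extra character in node02's taint value would not matter), the Exists operator can be used instead of Equal; a minimal sketch of just the tolerations block (the value field must be omitted in this case):
tolerations:
- key: "check"
  operator: "Exists"
  effect: "NoExecute"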
Step4
When a cluster has multiple masters, you can apply the following configuration to avoid wasting their resources:
kubectl taint nodes k8s-master01 node-role.kubernetes.io/master=:PreferNoSchedule
I did not actually observe the effect described here, but my understanding is as follows: PreferNoSchedule is a "soft" version of NoSchedule, so the scheduler tries to avoid the node but may still use it. In a cluster with, say, one master and two worker nodes, if both workers are down or tainted so that a newly created Pod cannot be placed on either of them, the Pod can fall back onto the master.
But what if the master still carries a hard NoSchedule taint as well? Then the Pod presumably will not be scheduled at all.
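Note that the command above adds a second taint with the same key but a different effect; it does not replace the existing node-role.kubernetes.io/master:NoSchedule taint, because taints are identified by key plus effect. If the goal is really to let the master accept Pods only as a last resort, one possible sequence (my own sketch, not from the original article) is to remove the hard taint first and then add the soft one:
kubectl taint nodes k8s-master01 node-role.kubernetes.io/master:NoSchedule-
kubectl taint nodes k8s-master01 node-role.kubernetes.io/master=:PreferNoSchedule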
Step5
Removing a taint from a node is simple: specify the taint's key, value, and effect.
For example, to remove the two taints we just added to node01 and node02:
kubectl taint nodes k8s-node01 check=wangyang:NoExecute-
kubectl taint nodes k8s-node02 check=wwangyang:NoExecute-
In other words, it is the same command used to set the taint, just with a minus sign appended at the end.
Now let's spot-check one of them, for example the details of node01:
[root@k8s-master01 affi]# kubectl describe node k8s-node01
Name: k8s-node01
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=k8s-node01
kubernetes.io/os=linux
Annotations: flannel.alpha.coreos.com/backend-data: {"VtepMAC":"4a:65:70:28:dd:56"}
flannel.alpha.coreos.com/backend-type: vxlan
flannel.alpha.coreos.com/kube-subnet-manager: true
flannel.alpha.coreos.com/public-ip: 192.168.66.20
kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Tue, 13 Dec 2022 22:30:41 +0800
Taints: <none>
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
NetworkUnavailable False Sun, 08 Jan 2023 14:39:26 +0800 Sun, 08 Jan 2023 14:39:26 +0800 FlannelIsUp Flannel is running on this node
MemoryPressure False Sun, 08 Jan 2023 14:42:59 +0800 Tue, 13 Dec 2022 22:30:41 +0800 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sun, 08 Jan 2023 14:42:59 +0800 Tue, 13 Dec 2022 22:30:41 +0800 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sun, 08 Jan 2023 14:42:59 +0800 Tue, 13 Dec 2022 22:30:41 +0800 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Sun, 08 Jan 2023 14:42:59 +0800 Thu, 05 Jan 2023 21:16:29 +0800 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.66.20
Hostname: k8s-node01
Capacity:
cpu: 4
ephemeral-storage: 51175Mi
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 4002416Ki
pods: 110
Allocatable:
cpu: 4
ephemeral-storage: 48294789041
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 3900016Ki
pods: 110
System Info:
Machine ID: c6d101dfd04a41d8b252ad6c0966effb
System UUID: e5584d56-a2ee-71aa-35c4-dff388e0e82d
Boot ID: da903d56-a2de-41ea-9b6c-6568dae89c8f
Kernel Version: 5.4.225-1.el7.elrepo.x86_64
OS Image: CentOS Linux 7 (Core)
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.21
Kubelet Version: v1.15.1
Kube-Proxy Version: v1.15.1
PodCIDR: 10.224.1.0/24
Non-terminated Pods: (4 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
default pod-3 0 (0%) 0 (0%) 0 (0%) 0 (0%) 13m
ingress-nginx nginx-ingress-controller-5cb7db844-xm7q9 100m (2%) 0 (0%) 90Mi (2%) 0 (0%) 28m
kube-system kube-flannel-ds-amd64-w2qp2 100m (2%) 100m (2%) 50Mi (1%) 50Mi (1%) 3m54s
kube-system kube-proxy-xqhhg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 25d
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 200m (5%) 100m (2%)
memory 140Mi (3%) 50Mi (1%)
ephemeral-storage 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 4d16h kubelet, k8s-node01 Starting kubelet.
Normal NodeAllocatableEnforced 4d16h kubelet, k8s-node01 Updated Node Allocatable limit across pods
Normal NodeHasSufficientPID 4d16h (x7 over 4d16h) kubelet, k8s-node01 Node k8s-node01 status is now: NodeHasSufficientPID
Normal NodeHasSufficientMemory 4d16h (x8 over 4d16h) kubelet, k8s-node01 Node k8s-node01 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4d16h (x8 over 4d16h) kubelet, k8s-node01 Node k8s-node01 status is now: NodeHasNoDiskPressure
Normal Starting 4d16h kube-proxy, k8s-node01 Starting kube-proxy.
Normal Starting 2d17h kubelet, k8s-node01 Starting kubelet.
Normal NodeHasSufficientMemory 2d17h kubelet, k8s-node01 Node k8s-node01 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 2d17h kubelet, k8s-node01 Node k8s-node01 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 2d17h kubelet, k8s-node01 Node k8s-node01 status is now: NodeHasSufficientPID
Warning Rebooted 2d17h kubelet, k8s-node01 Node k8s-node01 has been rebooted, boot id: 71034db9-b521-47f1-80c1-e8f62d47aa44
Normal NodeNotReady 2d17h kubelet, k8s-node01 Node k8s-node01 status is now: NodeNotReady
Normal NodeAllocatableEnforced 2d17h kubelet, k8s-node01 Updated Node Allocatable limit across pods
Normal NodeReady 2d17h kubelet, k8s-node01 Node k8s-node01 status is now: NodeReady
Normal Starting 2d17h kube-proxy, k8s-node01 Starting kube-proxy.
Normal Starting 40h kubelet, k8s-node01 Starting kubelet.
Normal NodeAllocatableEnforced 40h kubelet, k8s-node01 Updated Node Allocatable limit across pods
Normal NodeHasSufficientPID 40h (x7 over 40h) kubelet, k8s-node01 Node k8s-node01 status is now: NodeHasSufficientPID
Normal NodeHasSufficientMemory 40h (x8 over 40h) kubelet, k8s-node01 Node k8s-node01 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 40h (x8 over 40h) kubelet, k8s-node01 Node k8s-node01 status is now: NodeHasNoDiskPressure
Normal Starting 40h kube-proxy, k8s-node01 Starting kube-proxy.
Normal Starting 57m kubelet, k8s-node01 Starting kubelet.
Normal NodeAllocatableEnforced 57m kubelet, k8s-node01 Updated Node Allocatable limit across pods
Normal NodeHasNoDiskPressure 57m (x7 over 57m) kubelet, k8s-node01 Node k8s-node01 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientMemory 57m (x8 over 57m) kubelet, k8s-node01 Node k8s-node01 status is now: NodeHasSufficientMemory
Normal NodeHasSufficientPID 57m (x8 over 57m) kubelet, k8s-node01 Node k8s-node01 status is now: NodeHasSufficientPID
Normal Starting 57m kube-proxy, k8s-node01 Starting kube-proxy.
As you can see, the Taints field of node01 is now <none>.
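To confirm the taints on every node at once instead of describing them one by one, something like the following works (a minimal sketch):
kubectl describe nodes | grep -E '^Name:|^Taints:'
# or, using custom columns:
kubectl get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints'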
Keywords: cluster scheduling, taints, tolerations