27. Resource Controllers - RS, Deployment
Published: 2022-12-18 17:46:25  Editor: 雪饮
Step 1
First, confirm that the 3 pods belonging to the nginx Deployment from the previous article are all reachable:
[root@k8s-master01 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-5df65767f-skm5n 1/1 Running 1 17h 10.224.1.51 k8s-node01 <none> <none>
nginx-deployment-5df65767f-vbw8f 1/1 Running 1 17h 10.224.1.52 k8s-node01 <none> <none>
nginx-deployment-5df65767f-xrvsh 1/1 Running 1 17h 10.224.2.37 k8s-node02 <none> <none>
[root@k8s-master01 ~]# curl 10.224.1.51
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-master01 ~]# curl 10.224.1.52
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-master01 ~]# curl 10.224.2.37
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
Step 2
Since the desired count of 3 works fine, let's try scaling up to 10:
[root@k8s-master01 ~]# kubectl scale deployment nginx-deployment --replicas=10
deployment.extensions/nginx-deployment scaled
All 10 run without issue:
[root@k8s-master01 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-5df65767f-76ktt 1/1 Running 0 2m30s 10.224.1.54 k8s-node01 <none> <none>
nginx-deployment-5df65767f-dnvmh 1/1 Running 0 2m30s 10.224.2.40 k8s-node02 <none> <none>
nginx-deployment-5df65767f-h8kr2 1/1 Running 0 2m30s 10.224.2.39 k8s-node02 <none> <none>
nginx-deployment-5df65767f-rst76 1/1 Running 0 2m30s 10.224.2.42 k8s-node02 <none> <none>
nginx-deployment-5df65767f-scmp4 1/1 Running 0 2m30s 10.224.1.55 k8s-node01 <none> <none>
nginx-deployment-5df65767f-skm5n 1/1 Running 1 17h 10.224.1.51 k8s-node01 <none> <none>
nginx-deployment-5df65767f-tzrr8 1/1 Running 0 2m30s 10.224.1.53 k8s-node01 <none> <none>
nginx-deployment-5df65767f-vbw8f 1/1 Running 1 17h 10.224.1.52 k8s-node01 <none> <none>
nginx-deployment-5df65767f-xrvsh 1/1 Running 1 17h 10.224.2.37 k8s-node02 <none> <none>
nginx-deployment-5df65767f-zcr5k 1/1 Running 0 2m30s 10.224.2.41 k8s-node02 <none> <none>
Note that 3 of the pods each show one restart. That is harmless; it is most likely because I shut down all the k8s nodes and the master between the previous article and this one.
The ReplicaSet, however, did not change: scaling does not add new RS entries the way it adds new pod entries; the existing RS simply has its replica counts raised:
[root@k8s-master01 ~]# kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-deployment-5df65767f 10 10 10 17h
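This is expected: scaling only raises the replica count on the existing RS, because the pod template (and hence the 5df65767f hash suffix) did not change. The declarative equivalent of the scale command is just editing one field in the manifest:

```yaml
spec:
  replicas: 10   # same effect as `kubectl scale deployment nginx-deployment --replicas=10`
```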
Step 3
When we change the image (with multiple pods, an image change is effectively a batch update), the Deployment creates a new ReplicaSet and gradually shifts pods over to it, scaling the new RS up while scaling the old one down:
[root@k8s-master01 ~]# kubectl set image deployment/nginx-deployment nginx=wangyanglinux/myapp:v2
deployment.extensions/nginx-deployment image updated
[root@k8s-master01 ~]# kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-deployment-5c478875d8 5 5 3 11s
nginx-deployment-5df65767f 6 6 6 17h
[root@k8s-master01 ~]# kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-deployment-5c478875d8 10 10 10 19s
nginx-deployment-5df65767f 0 0 0 17h
[root@k8s-master01 ~]# kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-deployment-5c478875d8 10 10 10 21s
nginx-deployment-5df65767f 0 0 0 17h
[root@k8s-master01 ~]# kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-deployment-5c478875d8 10 10 10 22s
nginx-deployment-5df65767f 0 0 0 17h
[root@k8s-master01 ~]# kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-deployment-5c478875d8 10 10 10 22s
nginx-deployment-5df65767f 0 0 0 17h
[root@k8s-master01 ~]# kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-deployment-5c478875d8 10 10 10 23s
nginx-deployment-5df65767f 0 0 0 17h
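The intermediate snapshot above (5 new pods, 6 old pods) follows from the RollingUpdate defaults, maxSurge=25% and maxUnavailable=25%. A small sketch of the arithmetic, using the rounding rules as Kubernetes documents them (surge rounds up, unavailable rounds down):

```python
import math

def rolling_update_bounds(replicas, max_surge_pct=0.25, max_unavailable_pct=0.25):
    """Pod-count bounds during a Deployment rolling update.

    With percentage values, Kubernetes rounds maxSurge up and
    maxUnavailable down (both default to 25%).
    """
    max_surge = math.ceil(replicas * max_surge_pct)
    max_unavailable = math.floor(replicas * max_unavailable_pct)
    return {
        "max_total_pods": replicas + max_surge,        # upper bound on old + new pods
        "min_ready_pods": replicas - max_unavailable,  # lower bound on ready pods
    }

print(rolling_update_bounds(10))
# {'max_total_pods': 13, 'min_ready_pods': 8}
```

The snapshot fits these bounds: 5 new + 6 old = 11 total pods (at most 13 allowed), with at least 8 of them ready.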
After the image change, each pod gets a new IP. Use the last pod's IP as the access point to verify that the version really changed:
[root@k8s-master01 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-5c478875d8-47rnz 1/1 Running 0 4m29s 10.224.1.59 k8s-node01 <none> <none>
nginx-deployment-5c478875d8-4drq8 1/1 Running 0 4m41s 10.224.1.56 k8s-node01 <none> <none>
nginx-deployment-5c478875d8-4sjvg 1/1 Running 0 4m27s 10.224.1.60 k8s-node01 <none> <none>
nginx-deployment-5c478875d8-6jc64 1/1 Running 0 4m32s 10.224.1.57 k8s-node01 <none> <none>
nginx-deployment-5c478875d8-d45b8 1/1 Running 0 4m30s 10.224.2.45 k8s-node02 <none> <none>
nginx-deployment-5c478875d8-fk874 1/1 Running 0 4m30s 10.224.1.58 k8s-node01 <none> <none>
nginx-deployment-5c478875d8-m2xzh 1/1 Running 0 4m32s 10.224.2.44 k8s-node02 <none> <none>
nginx-deployment-5c478875d8-mm4v7 1/1 Running 0 4m28s 10.224.2.47 k8s-node02 <none> <none>
nginx-deployment-5c478875d8-r46pp 1/1 Running 0 4m41s 10.224.2.43 k8s-node02 <none> <none>
nginx-deployment-5c478875d8-rwbd7 1/1 Running 0 4m29s 10.224.2.46 k8s-node02 <none> <none>
[root@k8s-master01 ~]# curl 10.224.2.46
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
In Step 1 this returned v1; now it returns v2, so the update took effect.
Step 4
Since we swapped the image above, we can also roll back to the previous one:
[root@k8s-master01 ~]# kubectl rollout undo deployment/nginx-deployment
deployment.extensions/nginx-deployment rolled back
The pods are back on v1:
[root@k8s-master01 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-5df65767f-2kxfk 1/1 Running 0 46s 10.224.1.63 k8s-node01 <none> <none>
nginx-deployment-5df65767f-4xbdl 1/1 Running 0 47s 10.224.1.62 k8s-node01 <none> <none>
nginx-deployment-5df65767f-gjxpq 1/1 Running 0 43s 10.224.1.65 k8s-node01 <none> <none>
nginx-deployment-5df65767f-h7pg7 1/1 Running 0 48s 10.224.1.61 k8s-node01 <none> <none>
nginx-deployment-5df65767f-hdvsm 1/1 Running 0 46s 10.224.2.50 k8s-node02 <none> <none>
nginx-deployment-5df65767f-lxfpw 1/1 Running 0 41s 10.224.1.66 k8s-node01 <none> <none>
nginx-deployment-5df65767f-mvbgr 1/1 Running 0 48s 10.224.2.48 k8s-node02 <none> <none>
nginx-deployment-5df65767f-thbk4 1/1 Running 0 47s 10.224.2.49 k8s-node02 <none> <none>
nginx-deployment-5df65767f-w6427 1/1 Running 0 45s 10.224.2.51 k8s-node02 <none> <none>
nginx-deployment-5df65767f-zfg4z 1/1 Running 0 43s 10.224.1.64 k8s-node01 <none> <none>
[root@k8s-master01 ~]# curl 10.224.1.64
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
We can also check the rollout status:
[root@k8s-master01 ~]# kubectl rollout status deployment/nginx-deployment
deployment "nginx-deployment" successfully rolled out
Step 5
We can also view the rollout history:
[root@k8s-master01 ~]# kubectl rollout history deployment/nginx-deployment
deployment.extensions/nginx-deployment
REVISION CHANGE-CAUSE
2 kubectl apply --filename=deployment.yaml --record=true
3 kubectl apply --filename=deployment.yaml --record=true
The CHANGE-CAUSE column is not empty here because we created the Deployment with

kubectl apply -f deployment.yaml --record

The --record flag is what records the command. Let's delete all Deployments and recreate one without --record:
[root@k8s-master01 ~]# kubectl delete deployment --all
deployment.extensions "nginx-deployment" deleted
[root@k8s-master01 ~]# kubectl apply -f deployment.yaml
deployment.extensions/nginx-deployment created
Then set the image again, this time to v3:
[root@k8s-master01 ~]# kubectl set image deployment/nginx-deployment nginx=wangyanglinux/myapp:v3
deployment.extensions/nginx-deployment image updated
We can check the rollout status right away. If your machine is fast (and your typing is slow) you will see the final result below immediately; otherwise you will first watch each pod become ready, line by line, before this final message appears:
[root@k8s-master01 ~]# kubectl rollout status deployment/nginx-deployment
deployment "nginx-deployment" successfully rolled out
This time the CHANGE-CAUSE column is empty:
[root@k8s-master01 ~]# kubectl rollout history deployment/nginx-deployment
deployment.extensions/nginx-deployment
REVISION CHANGE-CAUSE
1 <none>
2 <none>
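The CHANGE-CAUSE column is read from the kubernetes.io/change-cause annotation on the Deployment; --record simply sets that annotation for you. It can also be written by hand in the manifest (the message text here is just an example):

```yaml
metadata:
  annotations:
    kubernetes.io/change-cause: "upgrade myapp image to v3"
```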
Step 6
Each image change appears to produce one new revision in the rollout history. For example, updating the image again (back to v2) adds another record:
[root@k8s-master01 ~]# kubectl set image deployment/nginx-deployment nginx=wangyanglinux/myapp:v2
deployment.extensions/nginx-deployment image updated
[root@k8s-master01 ~]# kubectl rollout history deployment/nginx-deployment
deployment.extensions/nginx-deployment
REVISION CHANGE-CAUSE
1 <none>
2 <none>
3 <none>
Step 7
Roll back to a specific revision:
[root@k8s-master01 ~]# kubectl rollout undo deployment/nginx-deployment --to-revision=1
deployment.extensions/nginx-deployment rolled back
The --to-revision value is the REVISION number shown in the rollout history above; here we roll back to revision 1:
[root@k8s-master01 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-5df65767f-28v7g 1/1 Running 0 19s 10.224.1.72 k8s-node01 <none> <none>
nginx-deployment-5df65767f-c74cg 1/1 Running 0 20s 10.224.2.57 k8s-node02 <none> <none>
nginx-deployment-5df65767f-n4gl9 1/1 Running 0 20s 10.224.1.71 k8s-node01 <none> <none>
[root@k8s-master01 ~]# curl 10.224.1.71
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
Step 8
Verifying the rollout status
The rollout status can also be checked through the shell variable $?: a value of 0 means the rollout succeeded:
[root@k8s-master01 ~]# kubectl rollout status deployment/nginx-deployment
deployment "nginx-deployment" successfully rolled out
[root@k8s-master01 ~]# echo $?
0
A quick aside on $?: it expands to the exit status of the previous command, 0 on success and non-zero on failure. A Unix (Linux) process normally ends by calling exit(); the status value it passes is returned to the parent process, which uses it to check how the child finished. By convention a program returns 0 on success and some non-zero value on failure.
This is general Linux knowledge, not something specific to k8s.
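A minimal demonstration with ordinary commands, no cluster needed:

```shell
# $? expands to the exit status of the most recently executed command.
true                      # a command that always succeeds
echo "exit status: $?"    # prints: exit status: 0
false                     # a command that always fails
echo "exit status: $?"    # prints: exit status: 1
```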
Step 9
The number of RS entries also corresponds to the number of revisions in the rollout history (each revision keeps its ReplicaSet around, scaled down to zero):
[root@k8s-master01 ~]# kubectl rollout history deployment/nginx-deployment
deployment.extensions/nginx-deployment
REVISION CHANGE-CAUSE
2 <none>
3 <none>
4 <none>
[root@k8s-master01 ~]# kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-deployment-5c478875d8 0 0 0 9m2s
nginx-deployment-5df65767f 3 3 3 14m
nginx-deployment-68bbd86499 0 0 0 13m
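Exactly: each revision keeps its scaled-to-zero ReplicaSet so a rollback can reuse it. How many old ReplicaSets are retained is controlled by spec.revisionHistoryLimit on the Deployment (the default is 10); a sketch:

```yaml
spec:
  revisionHistoryLimit: 5   # keep at most 5 old ReplicaSets available for rollback
```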
Keywords: resource controllers, RS, Deployment