47. Storage: PV-PVC (4)
Published: 2023-01-03 23:27:15 | Editor: 雪饮
Step1
Continuing from yesterday's environment, we already have several PVs. Let's look at the details of nfspv1:
[root@k8s-master01 ~]# kubectl describe pv nfspv1
Name: nfspv1
Labels: <none>
Annotations: pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pv-protection]
StorageClass: nfs
Status: Bound
Claim: default/www-web-0
Reclaim Policy: Retain
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 10Gi
Node Affinity: <none>
Message:
Source:
Type: NFS (an NFS mount that lasts the lifetime of a pod)
Server: 192.168.66.100
Path: /nfs
ReadOnly: false
Events: <none>
As shown, this PV is backed by the /nfs export.
On the harbor host, delete the file we created on that export yesterday, create a new index.html, and restart the NFS services (the machine was only just booted today):
[root@hub nfs]# rm -rf 1.html
[root@hub nfs]# date > index.html
The instructor also ran chmod 777 on the new file here; in my case it worked fine without that step, so there is no need to dwell on it.
[root@hub nfs]# systemctl restart rpcbind
[root@hub nfs]# systemctl restart nfs
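As an extra sanity check of my own (assuming nfs-utils is installed on the node you run it from), you can confirm the export is actually visible from the cluster side before touching any pods:
showmount -e 192.168.66.100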
Since nfspv1 above is bound to the claim made by the web-0 pod:
[root@k8s-master01 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
test-pd 1/1 Running 1 31h 10.224.2.181 k8s-node02 <none> <none>
web-0 1/1 Running 1 23h 10.224.2.183 k8s-node02 <none> <none>
web-1 1/1 Running 1 23h 10.224.1.175 k8s-node01 <none> <none>
web-2 1/1 Running 1 23h 10.224.1.174 k8s-node01 <none> <none>
Now curl web-0:
[root@k8s-master01 ~]# curl 10.224.2.183
2023年 01月 03日 星期二 21:49:08 CST
As expected, it serves the new index.html (stating the obvious, perhaps).
Step2
Using the same approach, we can cross-reference the pod details and the PV details to tell the three pods apart. (The instructor only used the PV details, not the pods; maybe my local environment got messy or I just wasn't looking carefully, but either way the result is the same.)
[root@k8s-master01 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
test-pd 1/1 Running 1 31h 10.224.2.181 k8s-node02 <none> <none>
web-0 1/1 Running 1 23h 10.224.2.183 k8s-node02 <none> <none>
web-1 1/1 Running 1 23h 10.224.1.175 k8s-node01 <none> <none>
web-2 1/1 Running 1 23h 10.224.1.174 k8s-node01 <none> <none>
[root@k8s-master01 ~]# curl 10.224.2.183
2023年 01月 03日 星期二 21:56:44 CST
[root@k8s-master01 ~]# curl 10.224.1.175
/nfs2 2023年 01月 03日 星期二 22:00:13 CST
[root@k8s-master01 ~]# curl 10.224.1.174
/nfs3 2023年 01月 03日 星期二 22:01:48 CST
Step3
Deleting a pod causes a new pod with the same name to be started automatically; in this respect a StatefulSet behaves much like a Deployment.
[root@k8s-master01 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
test-pd 1/1 Running 1 31h 10.224.2.181 k8s-node02 <none> <none>
web-0 1/1 Running 1 23h 10.224.2.183 k8s-node02 <none> <none>
web-1 1/1 Running 1 23h 10.224.1.175 k8s-node01 <none> <none>
web-2 1/1 Running 1 23h 10.224.1.174 k8s-node01 <none> <none>
[root@k8s-master01 ~]# kubectl delete pod web-0
pod "web-0" deleted
[root@k8s-master01 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
test-pd 1/1 Running 1 31h 10.224.2.181 k8s-node02 <none> <none>
web-0 1/1 Running 0 5s 10.224.2.184 k8s-node02 <none> <none>
web-1 1/1 Running 1 23h 10.224.1.175 k8s-node01 <none> <none>
web-2 1/1 Running 1 23h 10.224.1.174 k8s-node01 <none> <none>
The new pod gets a different IP address, but the data is not lost:
[root@k8s-master01 ~]# curl 10.224.2.184
2023年 01月 03日 星期二 21:56:44 CST
The data is still correct.
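To see why the data survives, you can check that the recreated web-0 re-bound the very same PVC (my own check; the jsonpath simply lists any PVC-backed volumes in the pod spec):
kubectl get pvc www-web-0
kubectl get pod web-0 -o jsonpath='{.spec.volumes[*].persistentVolumeClaim.claimName}'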
Step4
From yesterday's template (the one that requests PVCs via volumeClaimTemplates), and from the fact that our local Service list contains only the built-in kubernetes Service plus one other, we can confirm that the Service in use is nginx:
[root@k8s-master01 pv]# cat ss.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: wangyanglinux/myapp:v2
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "nfs"
      resources:
        requests:
          storage: 1Gi
[root@k8s-master01 pv]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 20d
nginx ClusterIP None <none> 80/TCP 23h
So inside a pod we can also ping an address of the form "podName.serviceName". Even though a pod's IP keeps changing, its name does not, which is exactly the point:
[root@k8s-master01 ~]# kubectl exec test-pd -it -- /bin/sh
/ # ping web-0.nginx
ping: bad address 'web-0.nginx'
/ # ping web-0.nginx
ping: bad address 'web-0.nginx'
/ # ping web_0.nginx
ping: bad address 'web_0.nginx'
/ # ping web-0.nginx
ping: bad address 'web-0.nginx'
/ #
[root@k8s-master01 ~]# kubectl exec test-pd -it -- /bin/sh
/ # ping web-0.nginx
PING web-0.nginx (10.224.2.184): 56 data bytes
64 bytes from 10.224.2.184: seq=0 ttl=64 time=0.054 ms
64 bytes from 10.224.2.184: seq=1 ttl=64 time=0.064 ms
64 bytes from 10.224.2.184: seq=2 ttl=64 time=0.061 ms
64 bytes from 10.224.2.184: seq=3 ttl=64 time=0.062 ms
64 bytes from 10.224.2.184: seq=4 ttl=64 time=0.065 ms
64 bytes from 10.224.2.184: seq=5 ttl=64 time=0.067 ms
64 bytes from 10.224.2.184: seq=6 ttl=64 time=0.059 ms
^C
--- web-0.nginx ping statistics ---
7 packets transmitted, 7 packets received, 0% packet loss
round-trip min/avg/max = 0.054/0.061/0.067 ms
There was a small hiccup here: the "bad address" errors above were caused by a CoreDNS failure.
My fix was to delete one of the CoreDNS pods and let it be recreated:
[root@k8s-master01 pv]# kubectl delete pod -n kube-system coredns-5c98db65d4-6kdwt
pod "coredns-5c98db65d4-6kdwt" deleted
[root@k8s-master01 pv]# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-5c98db65d4-kvgj2 1/1 Running 0 8s
coredns-699cc4c4cb-d4wjw 0/1 CrashLoopBackOff 688 20d
coredns-699cc4c4cb-xdhkf 0/1 CrashLoopBackOff 7 19m
In fact I ended up deleting two of them: deleting the older one first did not help, but deleting the more recent one on the second attempt did...
There is also advice that this should be handled at cluster initialization time: when you run kubeadm init, specify pod-network-cidr and make sure the host/primary network's IP range is not inside the subnet you choose. For example, if your network runs on 192.168.*.*, use 10.0.0.0/16.
In any case, this worked around the problem for now.
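Before deleting CoreDNS pods blindly, it is usually worth checking their logs first. On a kubeadm cluster the pods carry the k8s-app=kube-dns label, so something like this should work:
kubectl get pod -n kube-system -l k8s-app=kube-dns
kubectl logs -n kube-system -l k8s-app=kube-dns --tail=20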
Now delete the web-0 pod again, wait for it to come back, and confirm it can still be pinged from inside test-pd:
[root@k8s-master01 ~]# kubectl delete pod web-0
pod "web-0" deleted
[root@k8s-master01 ~]# kubectl exec test-pd -it -- /bin/sh
/ # ping web-0.nginx
PING web-0.nginx (10.224.2.186): 56 data bytes
64 bytes from 10.224.2.186: seq=0 ttl=64 time=0.055 ms
64 bytes from 10.224.2.186: seq=1 ttl=64 time=0.061 ms
^C
--- web-0.nginx ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.055/0.058/0.061 ms
So pods can reach each other by pod name. I assumed this would work from anywhere in the cluster, but that turned out to be wrong: it only works between pods, not from the node itself:
[root@k8s-master01 ~]# ping web-0.nginx
ping: web-0.nginx: 未知的名称或服务
Is this pod-name-based communication the StatefulSet's doing?
My understanding: yes. The StatefulSet provides the stable pod names, and the headless Service (clusterIP: None) publishes a DNS record for each of them.
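One extra note of my own: the short name web-0.nginx only resolves because test-pd lives in the same default namespace as the headless Service; from elsewhere you would use the fully qualified name. A quick check, assuming the test-pd image ships nslookup (busybox-based images do):
kubectl exec test-pd -it -- nslookup web-0.nginx.default.svc.cluster.local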
Step5
Check the CoreDNS pod addresses:
[root@k8s-master01 ~]# kubectl get pod -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-5c98db65d4-kvgj2 1/1 Running 0 11m 10.224.0.19 k8s-master01 <none> <none>
coredns-699cc4c4cb-d4wjw 0/1 CrashLoopBackOff 690 20d 10.224.1.173 k8s-node01 <none> <none>
coredns-699cc4c4cb-xdhkf 0/1 CrashLoopBackOff 9 31m 10.224.2.185 k8s-node02 <none> <none>
Following the same logic as Step4 above, we can also query the A-record resolution directly:
[root@k8s-master01 ~]# dig -t A nginx.default.svc.cluster.local. @10.224.0.19
; <<>> DiG 9.11.4-P2-RedHat-9.11.4-26.P2.el7_9.10 <<>> -t A nginx.default.svc.cluster.local. @10.224.0.19
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 47191
;; flags: qr aa rd; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;nginx.default.svc.cluster.local. IN A
;; ANSWER SECTION:
nginx.default.svc.cluster.local. 30 IN A 10.224.1.174
nginx.default.svc.cluster.local. 30 IN A 10.224.1.175
nginx.default.svc.cluster.local. 30 IN A 10.224.2.186
;; Query time: 1 msec
;; SERVER: 10.224.0.19#53(10.224.0.19)
;; WHEN: 二 1月 03 22:57:26 CST 2023
;; MSG SIZE rcvd: 201
These match exactly the IPs of the three pods created by the StatefulSet:
[root@k8s-master01 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
test-pd 1/1 Running 1 32h 10.224.2.181 k8s-node02 <none> <none>
web-0 1/1 Running 0 6m52s 10.224.2.186 k8s-node02 <none> <none>
web-1 1/1 Running 1 24h 10.224.1.175 k8s-node01 <none> <none>
web-2 1/1 Running 1 24h 10.224.1.174 k8s-node01 <none> <none>
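The headless Service also publishes one A record per StatefulSet pod, so an individual replica can be resolved the same way (reusing the CoreDNS pod IP from above):
dig -t A web-0.nginx.default.svc.cluster.local. @10.224.0.19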
Step6
Ordered deployment
First delete the existing StatefulSet:
[root@k8s-master01 pv]# kubectl delete statefulset --all
statefulset.apps "web" deleted
Be careful doing this in a real environment.
[root@k8s-master01 pv]# kubectl apply -f ss.yaml
service/nginx unchanged
statefulset.apps/web created
[root@k8s-master01 pv]# kubectl get pod
NAME READY STATUS RESTARTS AGE
test-pd 1/1 Running 1 32h
web-0 1/1 Running 0 2s
web-1 0/1 ContainerCreating 0 1s
[root@k8s-master01 pv]# kubectl get pod
NAME READY STATUS RESTARTS AGE
test-pd 1/1 Running 1 32h
web-0 1/1 Running 0 4s
web-1 1/1 Running 0 3s
web-2 1/1 Running 0 2s
This is easy to see: web-0, then web-1, then web-2 come up strictly in that order.
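This one-at-a-time startup is the default podManagementPolicy of OrderedReady. If you did not want it, the StatefulSet spec could opt out like this (a sketch; the field is immutable, so it has to be set when the StatefulSet is created):
spec:
  podManagementPolicy: Parallel   # default is OrderedReady; Parallel launches/terminates pods without waiting for order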
Step7
Ordered deletion
A StatefulSet is also said to remove its pods in order when it is deleted:
[root@k8s-master01 pv]# kubectl delete statefulset --all
statefulset.apps "web" deleted
[root@k8s-master01 pv]# kubectl get pod
NAME READY STATUS RESTARTS AGE
test-pd 1/1 Running 1 32h
web-0 0/1 Terminating 0 8s
web-1 0/1 Terminating 0 7s
web-2 0/1 Terminating 0 6s
[root@k8s-master01 pv]# kubectl get pod
NAME READY STATUS RESTARTS AGE
test-pd 1/1 Running 1 32h
[root@k8s-master01 pv]# kubectl get pod
NAME READY STATUS RESTARTS AGE
test-pd 1/1 Running 1 32h
Honestly, it happens far too fast here for me to prove the ordering from the output...
As for ordered scaling, I won't demonstrate it here; my machine's specs can't take any more.
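One way to catch the ordering that I was too slow to capture: while the StatefulSet is still running (or after recreating it in the next step), scale it down and watch the pods, since they are removed in reverse ordinal order (web-2 first, web-0 last). A sketch:
kubectl scale statefulset web --replicas=0
kubectl get pod -l app=nginx -w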
Step8
Now recreate the StatefulSet:
[root@k8s-master01 pv]# kubectl apply -f ss.yaml
service/nginx unchanged
statefulset.apps/web created
The data can still be accessed:
[root@k8s-master01 pv]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
test-pd 1/1 Running 1 32h 10.224.2.181 k8s-node02 <none> <none>
web-0 1/1 Running 0 16s 10.224.2.196 k8s-node02 <none> <none>
web-1 1/1 Running 0 15s 10.224.1.186 k8s-node01 <none> <none>
web-2 1/1 Running 0 14s 10.224.2.197 k8s-node02 <none> <none>
[root@k8s-master01 pv]# curl 10.224.2.196
2023年 01月 03日 星期二 21:56:44 CST
Step9
A pod such as web-0 just comes back after you delete it, so how do you actually get rid of it?
One way (deleting the StatefulSet) was shown above. Here is another: delete via the configuration file the pods were created from. In my case they came from the StatefulSet YAML, so:
[root@k8s-master01 pv]# kubectl delete -f ss.yaml
service "nginx" deleted
statefulset.apps "web" deleted
And that takes care of it:
[root@k8s-master01 pv]# kubectl get pod
NAME READY STATUS RESTARTS AGE
test-pd 1/1 Running 1 32h
In principle the Service should not be deleted, yet the nginx Service is indeed gone as well:
[root@k8s-master01 pv]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 21d
That is because my ss.yaml does not define only the StatefulSet; the Service is defined in the same file, so kubectl delete -f removes both. In the normal workflow the Service would still exist at this point and you would delete it manually.
The PVCs, however, are not removed automatically when the pods go away; they have to be deleted by hand:
[root@k8s-master01 pv]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
www-web-0 Bound nfspv1 10Gi RWO nfs 25h
www-web-1 Bound nfspv3 5Gi RWO nfs 25h
www-web-2 Bound nfspv4 50Gi RWO nfs 24h
[root@k8s-master01 pv]# kubectl delete pvc --all
persistentvolumeclaim "www-web-0" deleted
persistentvolumeclaim "www-web-1" deleted
persistentvolumeclaim "www-web-2" deleted
Even after deleting the PVCs, the PVs still exist, and they do not return to the Available state:
[root@k8s-master01 pv]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
nfspv1 10Gi RWO Retain Released default/www-web-0 nfs 27h
nfspv2 5Gi ROX Retain Available nfs 24h
nfspv3 5Gi RWO Retain Released default/www-web-1 nfs 24h
nfspv4 50Gi RWO Retain Released default/www-web-2 nfs 24h
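The Released state is a direct consequence of the Retain reclaim policy shown above: Kubernetes keeps both the PV object and its data and waits for an admin to clean up. For reference, a reclaim policy can be changed with a patch like the following (the usual pattern from the Kubernetes docs), though for these hand-made NFS PVs Retain is what you want, since the in-tree NFS plugin has no deleter and a Delete policy would likely just push a released PV into the Failed state:
kubectl patch pv nfspv1 -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'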
Even cleaning out the files on the corresponding NFS exports makes no difference:
[root@hub /]# rm -rf /nfs/*
[root@hub /]# rm -rf /nfs2/*
[root@hub /]# rm -rf /nfs3/*
They are still in the Released state:
[root@k8s-master01 pv]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
nfspv1 10Gi RWO Retain Released default/www-web-0 nfs 27h
nfspv2 5Gi ROX Retain Available nfs 24h
nfspv3 5Gi RWO Retain Released default/www-web-1 nfs 24h
nfspv4 50Gi RWO Retain Released default/www-web-2 nfs 24h
Look at the YAML of any one of these PVs:
[root@k8s-master01 pv]# kubectl get pv nfspv1 -o yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/bound-by-controller: "yes"
  creationTimestamp: "2023-01-02T11:40:06Z"
  finalizers:
  - kubernetes.io/pv-protection
  name: nfspv1
  resourceVersion: "341474"
  selfLink: /api/v1/persistentvolumes/nfspv1
  uid: eab2dc66-1a6d-4920-98f7-c010170b7074
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 10Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: www-web-0
    namespace: default
    resourceVersion: "329115"
    uid: b800481d-f477-4d72-85ce-389e146e7462
  nfs:
    path: /nfs
    server: 192.168.66.100
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  volumeMode: Filesystem
status:
  phase: Released
The PV still carries a claimRef pointing at the deleted claim, which is why it stays Released.
So we need to remove that reference. Taking nfspv1 as an example:
[root@k8s-master01 pv]# kubectl edit pv nfspv1
This opens a vi editor; delete the entire claimRef block shown above.
The spec then looks like this:
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/bound-by-controller: "yes"
  creationTimestamp: "2023-01-02T11:40:06Z"
  finalizers:
  - kubernetes.io/pv-protection
  name: nfspv1
  resourceVersion: "341474"
  selfLink: /api/v1/persistentvolumes/nfspv1
  uid: eab2dc66-1a6d-4920-98f7-c010170b7074
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 10Gi
  nfs:
    path: /nfs
    server: 192.168.66.100
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  volumeMode: Filesystem
status:
  phase: Released
Then save and quit:
[root@k8s-master01 pv]# kubectl edit pv nfspv1
persistentvolume/nfspv1 edited
Now one more PV is back in the Available state:
[root@k8s-master01 pv]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
nfspv1 10Gi RWO Retain Available nfs 27h
nfspv2 5Gi ROX Retain Available nfs 24h
nfspv3 5Gi RWO Retain Released default/www-web-1 nfs 24h
nfspv4 50Gi RWO Retain Released default/www-web-2 nfs 24h
Handle nfspv3 and nfspv4 the same way:
[root@k8s-master01 pv]# kubectl edit pv nfspv3
persistentvolume/nfspv3 edited
[root@k8s-master01 pv]# kubectl edit pv nfspv4
persistentvolume/nfspv4 edited
[root@k8s-master01 pv]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
nfspv1 10Gi RWO Retain Available nfs 27h
nfspv2 5Gi ROX Retain Available nfs 24h
nfspv3 5Gi RWO Retain Available nfs 24h
nfspv4 50Gi RWO Retain Available nfs 24h
Now all of them are Available again.
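As an alternative to opening kubectl edit for every PV, the claimRef can also be removed non-interactively with a JSON patch (my own sketch, one PV per command; same effect as the manual edit above):
kubectl patch pv nfspv1 --type json -p '[{"op":"remove","path":"/spec/claimRef"}]'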