60. Helm and Other Functional Components - Dashboard (calico)
Step1
Create a directory for the dashboard installation that follows:
mkdir /usr/local/install-k8s/plugin/dashboard
Enter the directory and fetch the dashboard chart into it:
[root@k8s-master01 ~]# cd /usr/local/install-k8s/plugin/dashboard
Note: if the fetch fails, run helm repo update first; once helm repo update succeeds, helm repo list should show a stable repo (Google's official one) plus a local repo.
That is how the instructor did it, but in my case the fetch succeeded directly, probably thanks to all my earlier tinkering.
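If the stable repo really is missing, it can be added and refreshed first. A minimal sketch, assuming the charts.helm.sh mirror (the same URL my repo list below ends up showing):
# add the stable repo and refresh the local chart index
helm repo add stable https://charts.helm.sh/stable
helm repo update
helm repo list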
[root@k8s-master01 dashboard]# helm fetch stable/kubernetes-dashboard
Incidentally, the helm here is not helm3. The problem handled at the end of the previous post was dealt with at work; this time I am on my laptop's local environment, where helm is 2.13.1:
[root@k8s-master01 dashboard]# helm version
Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Error: could not find a ready tiller pod
Well, the stable repo on my machine apparently isn't the official Google one...
[root@k8s-master01 dashboard]# helm repo list
NAME URL
local http://127.0.0.1:8879/charts
camptocamp3 https://camptocamp.github.io/charts
bitnami https://charts.bitnami.com/bitnami
stable https://charts.helm.sh/stable
So the dashboard version I download also differs from the instructor's...
[root@k8s-master01 dashboard]# ls
kubernetes-dashboard-1.11.1.tgz
The instructor's was:
kubernetes-dashboard-1.8.0.tgz
I found a download link for the instructor's version of the dashboard chart, which should be fine:
http://mirror.azure.cn/kubernetes/charts/kubernetes-dashboard-1.8.0.tgz
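If helm fetch stays blocked, the chart tarball can also be pulled straight from that mirror:
# direct download of the 1.8.0 chart from the Azure mirror
wget http://mirror.azure.cn/kubernetes/charts/kubernetes-dashboard-1.8.0.tgz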
Step2
Now, with the instructor's chart in hand, extract it and cd in:
[root@k8s-master01 dashboard]# tar -zxvf kubernetes-dashboard-1.8.0.tgz
[root@k8s-master01 dashboard]# cd kubernetes-dashboard
The structure is much the same as the earlier hello-world chart:
[root@k8s-master01 kubernetes-dashboard]# ls
Chart.yaml README.md templates values.yaml
Then create a YAML template; it plays the same role as values.yaml, i.e. the settings in it override the chart's defaults:
[root@k8s-master01 kubernetes-dashboard]# cat kubernetes-dashboard.yaml
image:
  repository: k8s.gcr.io/kubernetes-dashboard-amd64
  tag: v1.10.1
ingress:
  enabled: true
  hosts:
    - k8s.frognew.com
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  tls:
    - secretName: frognew-com-tls-secret
      hosts:
        - k8s.frognew.com
rbac:
  clusterAdminRole: true
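Before installing, the rendered manifests can be previewed to confirm the overrides take effect; a sketch using the helm3 syntax from Step3 below:
# dry-render the chart with our overrides; nothing is applied to the cluster
helm template kubernetes-dashboard . -f kubernetes-dashboard.yaml | grep 'image:'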
Then install it:
[root@k8s-master01 kubernetes-dashboard]# helm install . -n kubernetes-dashboard --namespace kube-system -f kubernetes-dashboard.yaml
Error: could not find a ready tiller pod
Awkward... The tiller pod isn't ready (the pod listing below confirms tiller-deploy is in Error).
Looks like it has to be helm3 after all.
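For reference, a minimal sketch of dropping in a helm3 client (the version number here is my assumption; any recent v3 release should behave the same):
# fetch the helm3 release tarball and install the binary
# note: copying to /usr/local/bin/helm replaces the helm2 client
wget https://get.helm.sh/helm-v3.4.1-linux-amd64.tar.gz
tar -zxvf helm-v3.4.1-linux-amd64.tar.gz
cp linux-amd64/helm /usr/local/bin/helm
helm version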
Step3
After switching to helm3, the install command needs to be adjusted as follows:
[root@k8s-master01 kubernetes-dashboard]# helm install kubernetes-dashboard --namespace kube-system -f kubernetes-dashboard.yaml .
NAME: kubernetes-dashboard
LAST DEPLOYED: Sat Jan 28 16:36:39 2023
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
*********************************************************************************
*** PLEASE BE PATIENT: kubernetes-dashboard may take a few minutes to install ***
*********************************************************************************
From outside the cluster, the server URL(s) are:
Don't miss the trailing "." at the end of the command.
The final NOTES block simply means: please be patient, kubernetes-dashboard may take a few minutes to install.
You can then see the pod being created:
[root@k8s-master01 kubernetes-dashboard]# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-699cc4c4cb-6ss56 0/1 Error 23 13d
coredns-699cc4c4cb-j2svs 0/1 Terminating 24 13d
coredns-699cc4c4cb-wsw9k 0/1 ContainerCreating 0 9m35s
etcd-k8s-master01 1/1 Running 29 45d
kube-apiserver-k8s-master01 1/1 Running 31 45d
kube-controller-manager-k8s-master01 1/1 Running 33 45d
kube-flannel-ds-amd64-hdmvv 1/1 Running 1 13d
kube-flannel-ds-amd64-tlrrk 1/1 Running 1 13d
kube-flannel-ds-amd64-tpn9z 1/1 Running 1 13d
kube-proxy-4lwzp 1/1 Running 28 45d
kube-proxy-m6bsz 1/1 Running 26 45d
kube-proxy-xqhhg 1/1 Running 26 45d
kube-scheduler-k8s-master01 1/1 Running 33 45d
kubernetes-dashboard-77f54dc48f-c5k2r 0/1 ContainerCreating 0 4m20s
tiller-deploy-68b6dcdb7b-zpsjv 0/1 Error 0 13d
If you check its IP address at this point, it doesn't have one yet:
[root@k8s-master01 kubernetes-dashboard]# kubectl get pod -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-699cc4c4cb-6ss56 0/1 Error 23 13d <none> k8s-node02 <none> <none>
coredns-699cc4c4cb-j2svs 0/1 Terminating 24 13d <none> k8s-node01 <none> <none>
coredns-699cc4c4cb-wsw9k 0/1 ContainerCreating 0 10m <none> k8s-master01 <none> <none>
etcd-k8s-master01 1/1 Running 29 45d 192.168.66.10 k8s-master01 <none> <none>
kube-apiserver-k8s-master01 1/1 Running 31 45d 192.168.66.10 k8s-master01 <none> <none>
kube-controller-manager-k8s-master01 1/1 Running 33 45d 192.168.66.10 k8s-master01 <none> <none>
kube-flannel-ds-amd64-hdmvv 1/1 Running 1 13d 192.168.66.20 k8s-node01 <none> <none>
kube-flannel-ds-amd64-tlrrk 1/1 Running 1 13d 192.168.66.10 k8s-master01 <none> <none>
kube-flannel-ds-amd64-tpn9z 1/1 Running 1 13d 192.168.66.21 k8s-node02 <none> <none>
kube-proxy-4lwzp 1/1 Running 28 45d 192.168.66.10 k8s-master01 <none> <none>
kube-proxy-m6bsz 1/1 Running 26 45d 192.168.66.21 k8s-node02 <none> <none>
kube-proxy-xqhhg 1/1 Running 26 45d 192.168.66.20 k8s-node01 <none> <none>
kube-scheduler-k8s-master01 1/1 Running 33 45d 192.168.66.10 k8s-master01 <none> <none>
kubernetes-dashboard-77f54dc48f-c5k2r 0/1 ContainerCreating 0 5m4s <none> k8s-node02 <none> <none>
tiller-deploy-68b6dcdb7b-zpsjv 0/1 Error 0 13d <none> k8s-node02 <none> <none>
No surprise: it is trying to pull the image from Google's official k8s.gcr.io registry, which is not reachable from here.
So we can import the image offline instead. The instructor did this on node 1 first (where the pod was scheduled at the time), but the next recreation landed on node 2, so he had to import it there too. To spare myself the trouble, I import the image on both node 1 and node 2. As the instructor noted, pushing it to a private registry, Harbor as I understand it, would be better still: nodes would then pull it on demand instead of needing a manual import on each one.
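For illustration, pushing the image into a private Harbor and repointing the chart could look like this (harbor.example.com and the k8s project are hypothetical names):
# retag the offline image for the private registry and push it
docker tag k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1 harbor.example.com/k8s/kubernetes-dashboard-amd64:v1.10.1
docker push harbor.example.com/k8s/kubernetes-dashboard-amd64:v1.10.1
# then point image.repository in kubernetes-dashboard.yaml at the harbor path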
The instructor's offline image is at a reference path like:
E:\10、Kubernetes - Helm 及其它功能性组件\鸿鹄论坛_10、Kubernetes - Helm 及其它功能性组件\2、资料\镜像文件.zip\Images\Dashboard\dashboard.tar
Step4
Load this image on both node 1 and node 2:
docker load -i dashboard.tar
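To get the tarball onto each node and confirm the import, something like the following works (assuming the node hostnames resolve and root ssh access is set up):
# copy the offline image to a node, load it, and verify it is present
scp dashboard.tar root@k8s-node01:/root/
ssh root@k8s-node01 'docker load -i /root/dashboard.tar && docker images | grep kubernetes-dashboard-amd64'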
My guess is that this offline image is simply the k8s.gcr.io/kubernetes-dashboard-amd64 image referenced in the dashboard YAML template above.
Then delete the dashboard pod that is stuck pulling the image, to force a recreation:
[root@k8s-master01 kubernetes-dashboard]# kubectl delete pod kubernetes-dashboard-77f54dc48f-c5k2r -n kube-system
pod "kubernetes-dashboard-77f54dc48f-c5k2r" deleted
And it errors out again...
[root@k8s-master01 kubernetes-dashboard]# kubectl get pod -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-699cc4c4cb-6ss56 0/1 Error 23 13d <none> k8s-node02 <none> <none>
coredns-699cc4c4cb-j2svs 0/1 Terminating 24 13d <none> k8s-node01 <none> <none>
coredns-699cc4c4cb-wsw9k 0/1 ContainerCreating 0 20m <none> k8s-master01 <none> <none>
etcd-k8s-master01 1/1 Running 29 45d 192.168.66.10 k8s-master01 <none> <none>
kube-apiserver-k8s-master01 1/1 Running 31 45d 192.168.66.10 k8s-master01 <none> <none>
kube-controller-manager-k8s-master01 1/1 Running 33 45d 192.168.66.10 k8s-master01 <none> <none>
kube-flannel-ds-amd64-hdmvv 1/1 Running 1 13d 192.168.66.20 k8s-node01 <none> <none>
kube-flannel-ds-amd64-tlrrk 1/1 Running 1 13d 192.168.66.10 k8s-master01 <none> <none>
kube-flannel-ds-amd64-tpn9z 1/1 Running 1 13d 192.168.66.21 k8s-node02 <none> <none>
kube-proxy-4lwzp 1/1 Running 28 45d 192.168.66.10 k8s-master01 <none> <none>
kube-proxy-m6bsz 1/1 Running 26 45d 192.168.66.21 k8s-node02 <none> <none>
kube-proxy-xqhhg 1/1 Running 26 45d 192.168.66.20 k8s-node01 <none> <none>
kube-scheduler-k8s-master01 1/1 Running 33 45d 192.168.66.10 k8s-master01 <none> <none>
kubernetes-dashboard-77f54dc48f-tw786 0/1 ContainerCreating 0 88s <none> k8s-node02 <none> <none>
Check the cause:
[root@k8s-master01 kubernetes-dashboard]# kubectl describe pod kubernetes-dashboard-77f54dc48f-tw786 -n kube-system
Name: kubernetes-dashboard-77f54dc48f-tw786
Namespace: kube-system
Priority: 0
Node: k8s-node02/192.168.66.21
Start Time: Sat, 28 Jan 2023 16:50:39 +0800
Labels: app=kubernetes-dashboard
pod-template-hash=77f54dc48f
release=kubernetes-dashboard
Annotations: <none>
Status: Pending
IP:
Controlled By: ReplicaSet/kubernetes-dashboard-77f54dc48f
Containers:
kubernetes-dashboard:
Container ID:
Image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
Image ID:
Port: 8443/TCP
Host Port: 0/TCP
Args:
--auto-generate-certificates
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Limits:
cpu: 100m
memory: 100Mi
Requests:
cpu: 100m
memory: 100Mi
Liveness: http-get https://:8443/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/certs from kubernetes-dashboard-certs (rw)
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kubernetes-dashboard-token-6zjkl (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kubernetes-dashboard-certs:
Type: Secret (a volume populated by a Secret)
SecretName: kubernetes-dashboard
Optional: false
tmp-volume:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
kubernetes-dashboard-token-6zjkl:
Type: Secret (a volume populated by a Secret)
SecretName: kubernetes-dashboard-token-6zjkl
Optional: false
QoS Class: Guaranteed
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m15s default-scheduler Successfully assigned kube-system/kubernetes-dashboard-77f54dc48f-tw786 to k8s-node02
Warning FailedCreatePodSandBox 2m14s kubelet, k8s-node02 Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "21129c6635fce714547645acf158058d892c6283c91320716aa84e8cb803fb70" network for pod "kubernetes-dashboard-77f54dc48f-tw786": NetworkPlugin cni failed to set up pod "kubernetes-dashboard-77f54dc48f-tw786_kube-system" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
Step5
Based on this, I suspect leftover config files from the calico network plugin: I had previously swapped the flannel plugin out for calico and back. My fix is to delete the leftover calico config files on the node named in the error message:
rm -rf /etc/cni/net.d/*calico*
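It's worth listing the directory before deleting, to confirm only calico files go and flannel's config stays; restarting kubelet afterwards is my own addition to make sure the change is picked up:
# inspect what's in the CNI config directory, then restart kubelet
ls /etc/cni/net.d/
systemctl restart kubelet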
Possibly because of an earlier change to the master node's taint policy (or maybe it was always like this), the master showed the same problem, so I deleted the files there as well.
I also deleted the failing coredns pods to force them to be rebuilt, and deleted the dashboard pod likewise.
After some back and forth, the status has now turned into Error...
[root@k8s-master01 kubernetes-dashboard]# kubectl get pod -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-699cc4c4cb-84hf4 0/1 Error 2 11m 10.224.2.240 k8s-node02 <none> <none>
coredns-699cc4c4cb-wsw9k 1/1 Running 0 65m 10.224.0.31 k8s-master01 <none> <none>
etcd-k8s-master01 1/1 Running 29 45d 192.168.66.10 k8s-master01 <none> <none>
kube-apiserver-k8s-master01 1/1 Running 31 45d 192.168.66.10 k8s-master01 <none> <none>
kube-controller-manager-k8s-master01 1/1 Running 33 45d 192.168.66.10 k8s-master01 <none> <none>
kube-flannel-ds-amd64-hdmvv 1/1 Running 1 13d 192.168.66.20 k8s-node01 <none> <none>
kube-flannel-ds-amd64-tlrrk 1/1 Running 1 13d 192.168.66.10 k8s-master01 <none> <none>
kube-flannel-ds-amd64-tpn9z 1/1 Running 1 13d 192.168.66.21 k8s-node02 <none> <none>
kube-proxy-4lwzp 1/1 Running 28 45d 192.168.66.10 k8s-master01 <none> <none>
kube-proxy-m6bsz 1/1 Running 26 45d 192.168.66.21 k8s-node02 <none> <none>
kube-proxy-xqhhg 1/1 Running 26 45d 192.168.66.20 k8s-node01 <none> <none>
kube-scheduler-k8s-master01 1/1 Running 33 45d 192.168.66.10 k8s-master01 <none> <none>
kubernetes-dashboard-77f54dc48f-h4prx 0/1 Error 1 5m25s 10.224.2.241 k8s-node02 <none> <none>
tiller-deploy-68b6dcdb7b-zpsjv 1/1 Running 1 13d 10.224.2.234 k8s-node02 <none> <none>
Step6
To diagnose the Error, look at the logs:
[root@k8s-master01 kubernetes-dashboard]# kubectl logs kubernetes-dashboard-77f54dc48f-h4prx -n kube-system
2023/01/28 09:42:35 Starting overwatch
2023/01/28 09:42:35 Using in-cluster config to connect to apiserver
2023/01/28 09:42:35 Using service account token for csrf signing
2023/01/28 09:43:05 Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service account's configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout
Refer to our FAQ and wiki pages for more information: https://github.com/kubernetes/dashboard/wiki/FAQ
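To confirm this is node-level networking rather than a dashboard bug, the apiserver's service IP can be probed from the node hosting the pod. A sketch (10.96.0.1:443 is taken from the log above):
# on k8s-node02: a healthy service network returns a TLS/JSON response, a broken one times out
curl -k --connect-timeout 5 https://10.96.0.1:443/version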
Attempt 1:
[root@k8s-master01 kubernetes-dashboard]# systemctl stop kubelet
[root@k8s-master01 kubernetes-dashboard]# systemctl stop docker
Warning: Stopping docker.service, but it can still be activated by:
docker.socket
[root@k8s-master01 kubernetes-dashboard]# iptables --flush
[root@k8s-master01 kubernetes-dashboard]# iptables -tnat --flush
[root@k8s-master01 kubernetes-dashboard]# systemctl start kubelet
[root@k8s-master01 kubernetes-dashboard]# systemctl start docker
Attempt 2:
systemctl stop NetworkManager
Attempt 3 (run on both the master node and the node where the dashboard pod landed):
systemctl stop kubelet
systemctl stop docker
iptables --flush
iptables -tnat --flush
systemctl start kubelet
systemctl start docker
Step7
After all the twists and turns above, I ended up choosing to reset the cluster (note: install the flannel network plugin before the nodes rejoin the cluster)...
Reference:
19. Kubernetes - Resource Manifests - initC (gaojiupan.cn)
Once the cluster was back, go straight into the dashboard directory and install again:
cd /usr/local/install-k8s/plugin/dashboard/kubernetes-dashboard
helm install kubernetes-dashboard --namespace kube-system -f kubernetes-dashboard.yaml .
It failed again the same way, so describe the pod to see which node reports the error:
[root@k8s-master01 kubernetes-dashboard]# kubectl describe pod kubernetes-dashboard-77f54dc48f-77qst -n kube-system
Name: kubernetes-dashboard-77f54dc48f-77qst
Namespace: kube-system
Priority: 0
Node: k8s-node02/192.168.66.21
Start Time: Sat, 28 Jan 2023 20:02:30 +0800
Labels: app=kubernetes-dashboard
pod-template-hash=77f54dc48f
release=kubernetes-dashboard
Annotations: <none>
Status: Pending
IP:
Controlled By: ReplicaSet/kubernetes-dashboard-77f54dc48f
Containers:
kubernetes-dashboard:
Container ID:
Image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
Image ID:
Port: 8443/TCP
Host Port: 0/TCP
Args:
--auto-generate-certificates
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Limits:
cpu: 100m
memory: 100Mi
Requests:
cpu: 100m
memory: 100Mi
Liveness: http-get https://:8443/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/certs from kubernetes-dashboard-certs (rw)
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kubernetes-dashboard-token-6dfld (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kubernetes-dashboard-certs:
Type: Secret (a volume populated by a Secret)
SecretName: kubernetes-dashboard
Optional: false
tmp-volume:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
kubernetes-dashboard-token-6dfld:
Type: Secret (a volume populated by a Secret)
SecretName: kubernetes-dashboard-token-6dfld
Optional: false
QoS Class: Guaranteed
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 49s default-scheduler Successfully assigned kube-system/kubernetes-dashboard-77f54dc48f-77qst to k8s-node02
Warning FailedCreatePodSandBox 48s kubelet, k8s-node02 Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "3b566c8e86fe4e45a48730d66099f879e35b1ad9edb80fd0ec1b473fd7be05f1" network for pod "kubernetes-dashboard-77f54dc48f-77qst": NetworkPlugin cni failed to set up pod "kubernetes-dashboard-77f54dc48f-77qst_kube-system" network: failed to set bridge addr: "cni0" already has an IP address different from 10.224.1.1/24
Warning FailedCreatePodSandBox 48s kubelet, k8s-node02 Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "a783042f2e71e71d7bd0f2bf183f7561a95fdf0a8a73bf39af3709167172de00" network for pod "kubernetes-dashboard-77f54dc48f-77qst": NetworkPlugin cni failed to set up pod "kubernetes-dashboard-77f54dc48f-77qst_kube-system" network: failed to set bridge addr: "cni0" already has an IP address different from 10.224.1.1/24
Warning FailedCreatePodSandBox 47s kubelet, k8s-node02 Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "6d72ba290812642bbdb8cd67681100aa4461e61d327cd1f38ac15bc8c2477ce3" network for pod "kubernetes-dashboard-77f54dc48f-77qst": NetworkPlugin cni failed to set up pod "kubernetes-dashboard-77f54dc48f-77qst_kube-system" network: failed to set bridge addr: "cni0" already has an IP address different from 10.224.1.1/24
Warning FailedCreatePodSandBox 46s kubelet, k8s-node02 Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "c4510de7cda022498aec03da2e9a7bc744699d07e71bf9ad8d036742f30332a1" network for pod "kubernetes-dashboard-77f54dc48f-77qst": NetworkPlugin cni failed to set up pod "kubernetes-dashboard-77f54dc48f-77qst_kube-system" network: failed to set bridge addr: "cni0" already has an IP address different from 10.224.1.1/24
Warning FailedCreatePodSandBox 45s kubelet, k8s-node02 Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "86e04e3b482fafd8e794b78a9dd5fff8d5080b049c8087e39760853be55513c7" network for pod "kubernetes-dashboard-77f54dc48f-77qst": NetworkPlugin cni failed to set up pod "kubernetes-dashboard-77f54dc48f-77qst_kube-system" network: failed to set bridge addr: "cni0" already has an IP address different from 10.224.1.1/24
Warning FailedCreatePodSandBox 44s kubelet, k8s-node02 Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "b0f11f3c2984edc4e88ac93264d8ab1621018ba8c746515a239fe2a45454b8d0" network for pod "kubernetes-dashboard-77f54dc48f-77qst": NetworkPlugin cni failed to set up pod "kubernetes-dashboard-77f54dc48f-77qst_kube-system" network: failed to set bridge addr: "cni0" already has an IP address different from 10.224.1.1/24
Warning FailedCreatePodSandBox 43s kubelet, k8s-node02 Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "5c150008ef8016ecd546d9082ac3620ba0abd67a2b0dbacdcdc5b0e394f0cde2" network for pod "kubernetes-dashboard-77f54dc48f-77qst": NetworkPlugin cni failed to set up pod "kubernetes-dashboard-77f54dc48f-77qst_kube-system" network: failed to set bridge addr: "cni0" already has an IP address different from 10.224.1.1/24
Warning FailedCreatePodSandBox 42s kubelet, k8s-node02 Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "f4ba144692d1549adeccfe612abcc019acc1552c4440d32b0a7162c593922ba7" network for pod "kubernetes-dashboard-77f54dc48f-77qst": NetworkPlugin cni failed to set up pod "kubernetes-dashboard-77f54dc48f-77qst_kube-system" network: failed to set bridge addr: "cni0" already has an IP address different from 10.224.1.1/24
Warning FailedCreatePodSandBox 41s kubelet, k8s-node02 Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "11a11effa405fd33956ac1f62577651536c3901ef196d40d77bf030c916ea971" network for pod "kubernetes-dashboard-77f54dc48f-77qst": NetworkPlugin cni failed to set up pod "kubernetes-dashboard-77f54dc48f-77qst_kube-system" network: failed to set bridge addr: "cni0" already has an IP address different from 10.224.1.1/24
Normal SandboxChanged 37s (x12 over 48s) kubelet, k8s-node02 Pod sandbox changed, it will be killed and re-created.
Warning FailedCreatePodSandBox 37s (x4 over 40s) kubelet, k8s-node02 (combined from similar events): Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "15493b8339fc349ff83aade3f918e3269fd03b951005909b1a211e7b0b0cec32" network for pod "kubernetes-dashboard-77f54dc48f-77qst": NetworkPlugin cni failed to set up pod "kubernetes-dashboard-77f54dc48f-77qst_kube-system" network: failed to set bridge addr: "cni0" already has an IP address different from 10.224.1.1/24
The key error is reported on node 2:
network: failed to set bridge addr: "cni0" already has an IP address different from 10.224.1.1/24
So run the following two commands on that node:
ifconfig cni0 down
ip link delete cni0
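Once the bridge is deleted, it should be recreated with the correct subnet when the next pod sandbox comes up; this can be verified like so (10.224.1.1/24 is the subnet the error message expected):
# the recreated bridge should now carry the expected address
ip addr show cni0 | grep inet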
Back on the master node, everything is OK.
Be aware that after you run those two commands the pod may drift to another node, so it's best to apply the same fix on node 1 too; otherwise it will error out again the moment it drifts there.
After sorting out node 1, delete the failing pod once more so it is recreated cleanly.
One lingering problem: every little while the pod drops into CrashLoopBackOff... although reportedly that can be normal.
Step8
Fine, let's set that problem aside for now...
As originally planned, the dashboard on port 443 did come up:
[root@k8s-master01 ~]# kubectl get svc -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 38m
kubernetes-dashboard ClusterIP 10.98.70.81 <none> 443/TCP 35m
But the svc type is ClusterIP, and the plan calls for changing it to NodePort:
[root@k8s-master01 ~]# kubectl edit svc kubernetes-dashboard -n kube-system
service/kubernetes-dashboard edited
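Equivalently, instead of the interactive edit, a one-line patch does the same thing:
kubectl patch svc kubernetes-dashboard -n kube-system -p '{"spec":{"type":"NodePort"}}'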
A port on the node is now exposed:
[root@k8s-master01 ~]# kubectl get svc -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 42m
kubernetes-dashboard NodePort 10.98.70.81 <none> 443:30593/TCP 38m
As planned, the dashboard should now be reachable via the node port, e.g. https://192.168.66.10:30593. Chrome just reports a certificate error; switching to Firefox works (click the option to accept the security risk and continue).
The remaining steps: look up the token secret's name, retrieve the token itself, log in with the token, then create an application and watch it deploy successfully.
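As a sketch, the token lookup goes like this (the secret's suffix is random; earlier in this post mine showed up as kubernetes-dashboard-token-6dfld):
# find the dashboard token secret's name, then print the token from it
kubectl -n kube-system get secret | grep kubernetes-dashboard-token
kubectl -n kube-system describe secret kubernetes-dashboard-token-6dfld | grep ^token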