59. Helm and Other Functional Components - Helm (2) (calico)
Published: 2023-01-15 01:22:05  Editor: 雪饮
Step1
Configuration for deploying a Helm project
First, create a directory for a test project:
mkdir /usr/local/install-k8s/helm/test
Then create a template directory inside this test project directory:
[root@k8s-master01 test]# mkdir /usr/local/install-k8s/helm/test/templates
Reportedly, the template directory must be named templates.
Next, create the chart definition (Chart.yaml) in the test project directory:
[root@k8s-master01 test]# cat Chart.yaml
name: hello-world
version: 1.0.0
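For orientation, by the end of this step the chart directory should look roughly like this (a sketch; deployment.yaml and service.yaml are created just below, and values.yaml is added later in Step5):
/usr/local/install-k8s/helm/test/
├── Chart.yaml           # chart name and version
└── templates/           # the directory name must be templates
    ├── deployment.yaml
    └── service.yaml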
And the Deployment template inside the templates directory:
[root@k8s-master01 test]# cat templates/deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - name: hello-world
          image: hub.atguigu.com/library/myapp:v1
          ports:
            - containerPort: 80
              protocol: TCP
And the Service template inside the templates directory:
[root@k8s-master01 test]# cat templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  selector:
    app: hello-world
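If the release installs successfully, this Service should be assigned a NodePort that can be checked and tested roughly like this (a sketch; <node-ip> and <node-port> are placeholders):
kubectl get svc hello-world          # look up the NodePort mapped to port 80
curl http://<node-ip>:<node-port>    # should return the myapp welcome page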
Then install this project with Helm:
[root@k8s-master01 test]# helm install .
Error: no available release name found
As you can see, an error is reported...
Note the trailing "." after install (preceded by a space); don't overlook it. As I understand it, it means "install the chart from the current directory".
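For reference, Helm 2 also accepts an explicit release name instead of letting Tiller generate one, e.g. (hello-world here is just an example name):
helm install --name hello-world .    # install the chart from the current directory as release "hello-world"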
Step2
Fixing the "no available release name found" problem
Attempt 1
[root@k8s-master01 test]# kubectl create serviceaccount --namespace kube-system tiller
Error from server (AlreadyExists): serviceaccounts "tiller" already exists
[root@k8s-master01 test]# kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
clusterrolebinding.rbac.authorization.k8s.io/tiller-cluster-rule created
[root@k8s-master01 test]# kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
deployment.extensions/tiller-deploy patched (no change)
Attempt 2
[root@k8s-master01 test]# kubectl create serviceaccount --namespace kube-system tiller
Error from server (AlreadyExists): serviceaccounts "tiller" already exists
[root@k8s-master01 test]# kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
Error from server (AlreadyExists): clusterrolebindings.rbac.authorization.k8s.io "tiller-cluster-rule" already exists
[root@k8s-master01 test]# helm init --service-account tiller
$HELM_HOME has been configured at /root/.helm.
Warning: Tiller is already installed in the cluster.
(Use --client-only to suppress this message, or --upgrade to upgrade Tiller to the current version.)
Happy Helming!
Attempt 3
Search for the chart on that site, i.e. add the camptocamp chart repository:
[root@k8s-master01 test]# helm repo add camptocamp3 https://camptocamp.github.io/charts
"camptocamp3" has been added to your repositories
Attempt 4
helm repo add bitnami https://charts.bitnami.com/bitnami
[root@k8s-master01 test]# helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Unable to get an update from the "stable" chart repository (https://kubernetes-charts.storage.googleapis.com):
Failed to fetch https://kubernetes-charts.storage.googleapis.com/index.yaml : 403 Forbidden
...Successfully got an update from the "camptocamp3" chart repository
...Successfully got an update from the "bitnami" chart repository
Update Complete. ⎈ Happy Helming!⎈
Attempt 5
[root@k8s-master01 test]# kubectl create serviceaccount --namespace kube-system tiller
Error from server (AlreadyExists): serviceaccounts "tiller" already exists
[root@k8s-master01 test]# kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
Error from server (AlreadyExists): clusterrolebindings.rbac.authorization.k8s.io "tiller-cluster-rule" already exists
[root@k8s-master01 test]# kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
deployment.extensions/tiller-deploy patched (no change)
[root@k8s-master01 test]# helm init --upgrade --service-account default
Attempt 6
helm init --stable-repo-url=https://charts.helm.sh/stable
Attempt 7
[root@k8s-master01 test]# helm repo remove stable
"stable" has been removed from your repositories
[root@k8s-master01 test]# helm repo add stable https://charts.helm.sh/stable
"stable" has been added to your repositories
[root@k8s-master01 test]# helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "camptocamp3" chart repository
...Successfully got an update from the "bitnami" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
Attempt 8
[root@k8s-master01 test]# helm init --stable-repo-url=https://charts.helm.sh/stable --client-only
$HELM_HOME has been configured at /root/.helm.
Not installing Tiller due to 'client-only' flag having been set
Happy Helming!
Attempt 9
kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
Attempt 10
[root@k8s-master01 test]# kubectl create clusterrolebinding permissive-binding --clusterrole=cluster-admin --user=admin --user=kubelet --group=system:serviceaccounts
clusterrolebinding.rbac.authorization.k8s.io/permissive-binding created
Attempt 11
helm install . --set=rbac.enabled=true
Attempt 12
Original instruction:
kubectl edit deploy --namespace kube-system tiller-deploy (#and add the line serviceAccount: tiller to spec/template/spec)
Actual operation (in the vi session that opens, follow the original instruction above):
[root@k8s-master01 test]# kubectl edit deploy --namespace kube-system tiller-deploy
deployment.extensions/tiller-deploy edited
In plain terms, in that vi session set the serviceAccount value under spec/template/spec to tiller.
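For clarity, after the edit the relevant part of the tiller-deploy Deployment should look roughly like this (only the fields involved are shown; the image is the v2.13.1 one mentioned in the previous article):
spec:
  template:
    spec:
      serviceAccount: tiller                          # the line added/changed via kubectl edit
      containers:
      - name: tiller
        image: gcr.io/kubernetes-helm/tiller:v2.13.1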
Step3
...This problem is too hard; let's set it aside for now.
Let's keep going and come back to it later if there is a chance.
According to the original course, after the install there should be an accessible port.
It can also be checked with helm list:
[root@k8s-master01 test]# helm list
The expected output should look like the course screenshot (omitted here).
However, my actual result was:
[root@k8s-master01 test]# helm list
Error: Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps?labelSelector=OWNER%!D(MISSING)TILLER: dial tcp 10.96.0.1:443: i/o timeout
...
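The error indicates that Tiller cannot reach the API server through the cluster Service IP 10.96.0.1. A rough way to narrow this down (a sketch, not part of the original article; the test image is just one that happens to ship curl) is:
# probe the kubernetes Service from inside the cluster; an HTTP/TLS response (even 401/403) is fine, a timeout is not
kubectl run curl-test --rm -it --restart=Never --image=radial/busyboxplus:curl -- \
  curl -k -m 5 https://10.96.0.1:443/version
# make sure kube-proxy, which programs the Service VIP rules, is healthy on every node
kubectl get pod -n kube-system -l k8s-app=kube-proxy -o wide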
Step4
Fixing the "dial tcp 10.96.0.1:443: i/o timeout" error on /api/v1/namespaces/kube-system/configmaps?labelSelector=OWNER%!D(MISSING)TILLER
Attempt 1
[root@k8s-master01 test]# helm init --upgrade -i helm-tiller.tar --tiller-namespace=kube-system --service-account default
$HELM_HOME has been configured at /root/.helm.
Tiller (the Helm server-side component) has been upgraded to the current version.
Happy Helming!
[root@k8s-master01 test]# kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:default
Error from server (AlreadyExists): clusterrolebindings.rbac.authorization.k8s.io "tiller-cluster-rule" already exists
[root@k8s-master01 test]# kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"default"}}}}'
deployment.extensions/tiller-deploy patched (no change)
helm-tiller.tar is the gcr.io/kubernetes-helm/tiller:v2.13.1 image from before; I used the instructor's package mentioned in the previous article directly.
This time the error I ran into was:
[root@k8s-master01 test]# helm list
Error: could not find a ready tiller pod
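"could not find a ready tiller pod" usually just means the tiller-deploy Pod is not Running/Ready in kube-system. A few checks that can narrow it down (a hedged sketch, assuming the default labels that helm init applies):
kubectl get pod -n kube-system -l app=helm,name=tiller -o wide   # is the tiller pod Running and Ready?
kubectl describe deploy -n kube-system tiller-deploy             # image, serviceAccount, recent events
kubectl logs -n kube-system deploy/tiller-deploy                 # tiller's own error output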
Attempt 2
[root@k8s-master01 test]# helm init --upgrade --tiller-namespace=kube-system --service-account default
$HELM_HOME has been configured at /root/.helm.
Tiller (the Helm server-side component) has been upgraded to the current version.
Happy Helming!
[root@k8s-master01 test]# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-699cc4c4cb-5gzzv 1/1 Running 3 6d9h
coredns-699cc4c4cb-nd7nb 1/1 Running 3 6d9h
etcd-k8s-master01 1/1 Running 28 32d
kube-apiserver-k8s-master01 1/1 Running 30 32d
kube-controller-manager-k8s-master01 1/1 Running 32 32d
kube-flannel-ds-amd64-v8bkz 1/1 Running 2 6d8h
kube-flannel-ds-amd64-w2qp2 1/1 Running 3 6d8h
kube-flannel-ds-amd64-xsq8h 1/1 Running 25 18d
kube-proxy-4lwzp 1/1 Running 27 32d
kube-proxy-m6bsz 1/1 Running 25 32d
kube-proxy-xqhhg 1/1 Running 25 32d
kube-scheduler-k8s-master01 1/1 Running 32 32d
tiller-deploy-68b6dcdb7b-zpsjv 1/1 Running 0 13s
It seems the pod whose name is prefixed with tiller-deploy must be Running in the kube-system namespace; at least that is my intuition. So this attempt only fixed the problem introduced by Attempt 1...
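Once the tiller-deploy pod is Running, a quick sanity check (not from the original walkthrough) is that the client can actually reach it:
helm version    # should print both Client and Server versions when Tiller is reachable
helm list       # should return without an i/o timeout (the list may simply be empty)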
Attempt 3
Uninstall flannel:
[root@k8s-master01 test]# kubectl delete -f /root/kube-flannel2.yml
podsecuritypolicy.policy "psp.flannel.unprivileged" deleted
clusterrole.rbac.authorization.k8s.io "flannel" deleted
clusterrolebinding.rbac.authorization.k8s.io "flannel" deleted
serviceaccount "flannel" deleted
configmap "kube-flannel-cfg" deleted
daemonset.apps "kube-flannel-ds-amd64" deleted
daemonset.apps "kube-flannel-ds-arm64" deleted
daemonset.apps "kube-flannel-ds-arm" deleted
daemonset.apps "kube-flannel-ds-ppc64le" deleted
daemonset.apps "kube-flannel-ds-s390x" deleted
rm -rf /etc/cni/net.d/*
Some say that if this command has no effect, you can simply reboot the server... I did not reboot mine here (it is a VM).
[root@k8s-master01 test]# systemctl restart kubelet
Then check the pods with kubectl get pod -n kube-system.
Install calico:
[root@k8s-master01 test]# kubectl create -f calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
Aside from the image-pull errors reported by this method (which I worked around by pulling the images on another server, saving them, and loading them locally), there are still errors like these:
[root@k8s-master01 test]# kubectl describe pod -n kube-system calico-kube-controllers-7fc57b95d4-gt4zp
Name: calico-kube-controllers-7fc57b95d4-gt4zp
Namespace: kube-system
Priority: 2000000000
Priority Class Name: system-cluster-critical
Node: k8s-node02/192.168.66.21
Start Time: Sun, 15 Jan 2023 00:17:56 +0800
Labels: k8s-app=calico-kube-controllers
pod-template-hash=7fc57b95d4
Annotations: scheduler.alpha.kubernetes.io/critical-pod:
Status: Pending
IP:
Controlled By: ReplicaSet/calico-kube-controllers-7fc57b95d4
Containers:
calico-kube-controllers:
Container ID:
Image: calico/kube-controllers:v3.8.9
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Readiness: exec [/usr/bin/check-status -r] delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
ENABLED_CONTROLLERS: node
DATASTORE_TYPE: kubernetes
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from calico-kube-controllers-token-jztrm (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
calico-kube-controllers-token-jztrm:
Type: Secret (a volume populated by a Secret)
SecretName: calico-kube-controllers-token-jztrm
Optional: false
QoS Class: BestEffort
Node-Selectors: beta.kubernetes.io/os=linux
Tolerations: CriticalAddonsOnly
node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m23s default-scheduler Successfully assigned kube-system/calico-kube-controllers-7fc57b95d4-gt4zp to k8s-node02
Warning FailedCreatePodSandBox 2m22s kubelet, k8s-node02 Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "6e1a9263e8712bb76a96e7f21f25f96287b3652400322354f8babef547cfba84" network for pod "calico-kube-controllers-7fc57b95d4-gt4zp": NetworkPlugin cni failed to set up pod "calico-kube-controllers-7fc57b95d4-gt4zp_kube-system" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
Warning FailedCreatePodSandBox 2m21s kubelet, k8s-node02 Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "a3e6be084a1ac85b33a3ed60b507e310a74ec72753bb95ca50a61ad1eee56f8c" network for pod "calico-kube-controllers-7fc57b95d4-gt4zp": NetworkPlugin cni failed to set up pod "calico-kube-controllers-7fc57b95d4-gt4zp_kube-system" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
Warning FailedCreatePodSandBox 2m20s kubelet, k8s-node02 Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "3d3c9fc4e779e4d609b9d919545473fc14a02fdf70f7f74f3a15c172b1bffd23" network for pod "calico-kube-controllers-7fc57b95d4-gt4zp": NetworkPlugin cni failed to set up pod "calico-kube-controllers-7fc57b95d4-gt4zp_kube-system" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
Warning FailedCreatePodSandBox 2m19s kubelet, k8s-node02 Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "9b50d454897f282a2290f9c7745b79fb813092fbda1cbb8c565265fdce53fd2e" network for pod "calico-kube-controllers-7fc57b95d4-gt4zp": NetworkPlugin cni failed to set up pod "calico-kube-controllers-7fc57b95d4-gt4zp_kube-system" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
Warning FailedCreatePodSandBox 2m18s kubelet, k8s-node02 Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "00c64c2a8bcec3226d5b9ca17b6db7a6065856d92d001c3e2e9f7a8aac46d92b" network for pod "calico-kube-controllers-7fc57b95d4-gt4zp": NetworkPlugin cni failed to set up pod "calico-kube-controllers-7fc57b95d4-gt4zp_kube-system" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
Warning FailedCreatePodSandBox 2m17s kubelet, k8s-node02 Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "387c83934212e463707ba75282f0ee0f55bf3194af5f07f52195740e2b12002f" network for pod "calico-kube-controllers-7fc57b95d4-gt4zp": NetworkPlugin cni failed to set up pod "calico-kube-controllers-7fc57b95d4-gt4zp_kube-system" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
Warning FailedCreatePodSandBox 2m16s kubelet, k8s-node02 Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "2732ba6ce07e67bb27d9aa749cd9c69e4be931531e0cc95bb89f8347163be6ff" network for pod "calico-kube-controllers-7fc57b95d4-gt4zp": NetworkPlugin cni failed to set up pod "calico-kube-controllers-7fc57b95d4-gt4zp_kube-system" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
Warning FailedCreatePodSandBox 2m15s kubelet, k8s-node02 Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "098dc99b5956c7224c8bdcdadcb250cf2258eaa65cfe313fe04fa482568dc8ba" network for pod "calico-kube-controllers-7fc57b95d4-gt4zp": NetworkPlugin cni failed to set up pod "calico-kube-controllers-7fc57b95d4-gt4zp_kube-system" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
Warning FailedCreatePodSandBox 2m14s kubelet, k8s-node02 Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "66a654fe505dc8d5c1fa53bf9e2556989376426d4c5cb4a344e84576ce18b85a" network for pod "calico-kube-controllers-7fc57b95d4-gt4zp": NetworkPlugin cni failed to set up pod "calico-kube-controllers-7fc57b95d4-gt4zp_kube-system" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
Normal SandboxChanged 2m11s (x12 over 2m22s) kubelet, k8s-node02 Pod sandbox changed, it will be killed and re-created.
Warning FailedCreatePodSandBox 2m10s (x4 over 2m13s) kubelet, k8s-node02 (combined from similar events): Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "db443508d90d158a439a5135942e76b767c37bd736020ff2b9a852e17fcf01ab" network for pod "calico-kube-controllers-7fc57b95d4-gt4zp": NetworkPlugin cni failed to set up pod "calico-kube-controllers-7fc57b95d4-gt4zp_kube-system" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
...For now, let's revert back to flannel... Replacing the network plugin outright feels like too big an architectural change.
So this problem will also be deferred until later...
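Reverting should roughly mirror the earlier steps, along these lines (a sketch that reuses the files already referenced above):
kubectl delete -f calico.yaml                # remove the calico resources created above
kubectl apply -f /root/kube-flannel2.yml     # re-create the flannel resources that were deleted
systemctl restart kubelet                    # let kubelet pick the CNI configuration back up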
Step5
Then check the pod list. It should originally look like the course screenshot (omitted).
But what I got here was...
[root@k8s-master01 test]# kubectl get pod
NAME READY STATUS RESTARTS AGE
myweb111111-6756c88c87-45gxj 1/1 Running 1 5d3h
myweb111111-6756c88c87-dfnhl 1/1 Running 1 5d3h
myweb111111-6756c88c87-hrkn6 1/1 Running 1 5d3h
myweb111111-6756c88c87-jvgm8 1/1 Running 1 5d3h
myweb111111-6756c88c87-ltj9k 1/1 Running 1 5d3h
myweb111111-6756c88c87-s5ffk 1/1 Running 1 5d3h
myweb111111-6756c88c87-v5j4s 1/1 Running 1 5d3h
myweb111111-6756c88c87-w777m 1/1 Running 1 5d3h
According to the original course, this problem is that the image does not exist; we need to change it to an existing image that can be pulled normally (or one already present locally, which then would not need to be pulled at all). Before that, first clean up a few files generated above while trying to fix helm list:
[root@k8s-master01 test]# rm -rf calico_*
[root@k8s-master01 test]# rm -rf helm-tiller.tar
Most of these files exist because they depend on images that have to be pulled from behind the firewall, so they could only be saved from docker on another server and copied over.
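That transfer looks roughly like this (a sketch; the calico image name is the one from the describe output above, and the destination path is just this test directory):
# on a machine that can reach the public registries
docker pull calico/kube-controllers:v3.8.9
docker save -o calico_kube-controllers.tar calico/kube-controllers:v3.8.9
scp calico_kube-controllers.tar root@k8s-master01:/usr/local/install-k8s/helm/test/
# on the cluster node that needs the image
docker load -i calico_kube-controllers.tar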
Next, modify the image pulled by the Deployment template in the test project's templates directory:
[root@k8s-master01 test]# vi templates/deployment.yaml
[root@k8s-master01 test]# cat templates/deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - name: hello-world
          image: wangyanglinux/myapp:v1
          ports:
            - containerPort: 80
              protocol: TCP
Then run a command like:
[root@k8s-master01 test]# helm upgrade nobby-eel .
Here nobby-eel was taken from the output of helm list back when it still returned results normally.
And the result here...
[root@k8s-master01 test]# helm upgrade nobby-eel .
UPGRADE FAILED
ROLLING BACK
Error: Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps?labelSelector=NAME%!D(MISSING)nobby-eel%!C(MISSING)OWNER%!D(MISSING)TILLER%!C(MISSING)STATUS%!D(MISSING)DEPLOYED: dial tcp 10.96.0.1:443: i/o timeout
Error: UPGRADE FAILED: Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps?labelSelector=NAME%!D(MISSING)nobby-eel%!C(MISSING)OWNER%!D(MISSING)TILLER%!C(MISSING)STATUS%!D(MISSING)DEPLOYED: dial tcp 10.96.0.1:443: i/o timeout
Next, viewing the history of a Helm release:
[root@k8s-master01 test]# helm history nobby-eel
Error: Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps?labelSelector=NAME%!D(MISSING)nobby-eel%!C(MISSING)OWNER%!D(MISSING)TILLER: dial tcp 10.96.0.1:443: i/o timeout
Same problem again...
Now check the status of a Helm release:
[root@k8s-master01 test]# helm status nobby-eel
Error: getting deployed release "nobby-eel": Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps?labelSelector=NAME%!D(MISSING)nobby-eel%!C(MISSING)OWNER%!D(MISSING)TILLER: dial tcp 10.96.0.1:443: i/o timeout
Still the same...
Forget it; for the remaining steps I'll simply paste the instructor's original flow...
At this stage the app should be reachable from the physical host.
Then the Deployment needs some changes:
[root@k8s-master01 test]# cat templates/deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - name: hello-world
          image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
          ports:
            - containerPort: 80
              protocol: TCP
The image it references comes from values.yaml:
[root@k8s-master01 test]# cat values.yaml
image:
  repository: wangyanglinux/myapp
  tag: 'v2'
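Since Tiller is unreachable in my environment, the wiring between the template and values.yaml can still be verified locally with helm template, which renders the chart without contacting Tiller (a hedged sketch):
helm template .                        # renders image: wangyanglinux/myapp:v2 from values.yaml
helm template . --set image.tag=v3     # the same override used later with helm upgrade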
Of course, according to the original plan, it should have gone like this:
namely, after helm install the app is accessible, and the tag is the v2 we configured.
But what actually happened in my case...
[root@k8s-master01 test]# helm install .
Error: no available release name found
Next, according to the original course, dynamically overriding the tag like this can be done even more simply, with a command like:
[root@k8s-master01 test]# helm upgrade nobby-eel --set image.tag='v3' .
UPGRADE FAILED
ROLLING BACK
Error: Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps?labelSelector=NAME%!D(MISSING)nobby-eel%!C(MISSING)OWNER%!D(MISSING)TILLER%!C(MISSING)STATUS%!D(MISSING)DEPLOYED: dial tcp 10.96.0.1:443: i/o timeout
Error: UPGRADE FAILED: Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps?labelSelector=NAME%!D(MISSING)nobby-eel%!C(MISSING)OWNER%!D(MISSING)TILLER%!C(MISSING)STATUS%!D(MISSING)DEPLOYED: dial tcp 10.96.0.1:443: i/o timeout
Under normal circumstances the tag would now be v3.
But in my case...
Next, deletion.
Deleting should normally succeed, e.g.:
helm delete nobby-eel
But what I got here...
[root@k8s-master01 test]# helm delete nobby-eel
Error: Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps?labelSelector=NAME%!D(MISSING)nobby-eel%!C(MISSING)OWNER%!D(MISSING)TILLER: dial tcp 10.96.0.1:443: i/o timeout
According to the original course, after a successful delete, installing a Helm release with the same name would report that a release with that name already exists.
But in my case...
[root@k8s-master01 test]# helm install --name nobby-eel .
Error: Get https://10.96.0.1:443/version?timeout=32s: dial tcp 10.96.0.1:443: i/o timeout
The original point is that a deleted release is not truly gone; it is more like a soft delete, and the deleted release can still be listed:
But for me...
[root@k8s-master01 test]# helm list --deleted
Error: Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps?labelSelector=OWNER%!D(MISSING)TILLER: dial tcp 10.96.0.1:443: i/o timeout
In the course screenshot the revision number shown is 6.
Next, the deleted Helm release can be rolled back:
Of course, in my case...
[root@k8s-master01 test]# helm rollback nobby-eel 6
Error: Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps?labelSelector=NAME%!D(MISSING)nobby-eel%!C(MISSING)OWNER%!D(MISSING)TILLER: dial tcp 10.96.0.1:443: i/o timeout
After that, helm list would show the revision number increased to 7.
Of course, in my case...
And the app would then be accessible again.
Of course, in my case...
It can even roll back to a revision lower than 6?
Those revisions probably exist because of all the trial and error in between.
To delete a release completely, a command like the one shown in the course can be used.
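The command in the (omitted) screenshot is presumably Helm 2's purge deletion, roughly:
helm delete --purge nobby-eel    # removes the release together with its stored revision history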
Of course, in my case...
Then list the deleted Helm releases again.
Of course, in my case...
Dry-running (trial executing/installing/running) a Helm project
What the screenshot means is that a dry run does not actually create the release; it only checks whether the chart renders and runs without errors, a sort of pre-run.
Of course, in my case...
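For reference, the dry run in Helm 2 is roughly the following; --debug additionally prints the rendered manifests:
helm install --dry-run --debug .    # render and validate the chart without actually creating a release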