K8S Troubleshooting - chalon - 博客园

Posted: 2021-08-12 09:32 | Source: https://www.cnblogs.com

Problem 1: Accessing a K8S cluster service fails?

curl: (60) Peer's Certificate issuer is not recognized.

More details here: http://curl.haxx.se/docs/sslcerts.html

curl performs SSL certificate verification by default, using a "bundle"
of Certificate Authority (CA) public keys (CA certs). If the default
bundle file isn't adequate, you can specify an alternate file
using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
the bundle, the certificate verification probably failed due to a
problem with the certificate (it might be expired, or the name might
not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
the -k (or --insecure) option.

Cause: the certificate is not recognized, typically because it is a custom/self-signed certificate or has expired.

Fix: renew the certificate.
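If the cluster was set up with kubeadm (an assumption; clusters built another way keep their certificates elsewhere), the certificate state can be checked and renewed roughly like this; <apiserver> is a placeholder:

kubeadm certs check-expiration        # lists expired certificates (older kubeadm versions use "kubeadm alpha certs")
kubeadm certs renew all               # renew the control-plane certificates, then restart the components that use them
curl --cacert /etc/kubernetes/pki/ca.crt https://<apiserver>:6443/version   # quick client-side verification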

 

Problem 2: Accessing a K8S cluster service fails?

curl: (7) Failed connect to 10.103.22.158:3000; Connection refused

Cause: the port mapping is wrong; the workload itself is running normally, but it cannot be reached through the Service.

Fix: delete the svc and map the port again.

kubectl delete svc nginx-deployment
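After the delete, the port can be mapped again with kubectl expose (a sketch: the deployment name and container port 80 are assumptions, and port 3000 matches the address in the error above):

kubectl expose deployment nginx-deployment --port=3000 --target-port=80
kubectl get svc nginx-deployment      # confirm that port 3000 now maps to the container's port 80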

 

  

Problem 3: Exposing a K8S cluster service fails?

Error from server (AlreadyExists): services "nginx-deployment" already exists

Cause: a Service with this name has already been created for the workload.

Fix: delete the existing svc and map the port again (same commands as in Problem 2).

 

Problem 4: A service provided by the K8S cluster cannot be reached from outside the cluster?

 

Cause: the Service type is ClusterIP, so the service is not exposed outside the cluster.

Fix: change the Service type to NodePort; the service then becomes reachable through every node in the cluster.

kubectl edit svc nginx-deployment
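Instead of editing interactively, the type can also be switched with a one-line patch (a sketch; the service name follows the example above):

kubectl patch svc nginx-deployment -p '{"spec":{"type":"NodePort"}}'
kubectl get svc nginx-deployment      # note the allocated nodePort; the service is then reachable at http://<any-node-ip>:<nodePort>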

 

 

Problem 5: Pod status is ErrImagePull?

 readiness-httpget-pod   0/1     ErrImagePull   0          10s

Cause: the image cannot be pulled.

 Warning Failed 59m (x4 over 61m) kubelet, k8s-node01 Error: ErrImagePull

Fix: switch to a valid image.
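A sketch of swapping in a pullable image (pod.yaml is a placeholder for whatever manifest created the pod; the pod name follows the output above):

kubectl describe pod readiness-httpget-pod   # confirm the exact pull error and the image name being pulled
kubectl delete pod readiness-httpget-pod
# correct the image: field in the pod's manifest (placeholder name pod.yaml), then re-create it
kubectl apply -f pod.yaml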

 

Problem 6: After creating a pod with init containers, its status is abnormal?

NAME READY STATUS RESTARTS AGE
myapp-pod 0/1 Init:0/2 0 20s

Cause: the logs show the pod stuck in initialization; describing the pod shows that the reason it cannot start is that the init containers have not finished running.

  Error from server (BadRequest): container "myapp-container" in pod "myapp-pod" is waiting to start: PodInitializing

  

  

waiting for myservice

Server: 10.96.0.10
Address: 10.96.0.10:53

** server can't find myservice.default.svc.cluster.local: NXDOMAIN

*** Can't find myservice.svc.cluster.local: No answer
*** Can't find myservice.cluster.local: No answer
*** Can't find myservice.default.svc.cluster.local: No answer
*** Can't find myservice.svc.cluster.local: No answer
*** Can't find myservice.cluster.local: No answer

Fix: create the corresponding Service so that its name is registered in the cluster's CoreDNS; CoreDNS can then resolve the name the pod's init container is waiting for.

kubectl apply -f myservice.yaml

  

  

  NAME READY STATUS RESTARTS AGE

myapp-pod 0/1 Init:1/2 0 27m
myapp-pod 0/1 PodInitializing 0 28m
myapp-pod 1/1 Running 0 28m
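If there is no ready-made myservice.yaml, an equivalent Service can also be created directly from the command line so that CoreDNS can resolve the name (a sketch; the 80:80 port mapping and the busybox test image are assumptions):

kubectl create service clusterip myservice --tcp=80:80
# verify that the name now resolves inside the cluster from a throwaway pod
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup myservice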

 

  

Problem 7: A pod with probes configured goes into CrashLoopBackOff?

readiness-httpget-pod 0/1 CrashLoopBackOff 1 13s
readiness-httpget-pod 0/1 Completed 2 20s
readiness-httpget-pod 0/1 CrashLoopBackOff 2 31s
readiness-httpget-pod 0/1 Completed 3 42s
readiness-httpget-pod 0/1 CrashLoopBackOff 3 53s

Cause: an image problem causes the container restarts to keep failing.

Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulling 56m kubelet, k8s-node01 Pulling image "hub.atguigu.com/library/mylandmarktech/myapp:v1"
Normal Pulled 56m kubelet, k8s-node01 Successfully pulled image "hub.atguigu.com/library/mylandmarktech/myapp:v1"
Normal Created 56m (x3 over 56m) kubelet, k8s-node01 Created container readiness-httpget-container
Normal Started 56m (x3 over 56m) kubelet, k8s-node01 Started container readiness-httpget-container
Normal Pulled 56m (x2 over 56m) kubelet, k8s-node01 Container image "hub.atguigu.com/library/mylandmarktech/myapp:v1" already present on machine
Warning Unhealthy 56m kubelet, k8s-node01 Readiness probe failed: Get http://10.244.2.22:80/index1.html: dial tcp 10.244.2.22:80: connect: connection refused
Warning BackOff 56m (x4 over 56m) kubelet, k8s-node01 Back-off restarting failed container
Normal Scheduled 50s default-scheduler Successfully assigned default/readiness-httpget-pod to k8s-node01

Fix: switch to a correct image.
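Before replacing the image, the last run of the failing container can be inspected to confirm what is actually crashing (a diagnostic sketch using the pod/container names from the events above):

kubectl describe pod readiness-httpget-pod                                      # probe failures and restart reasons
kubectl logs readiness-httpget-pod -c readiness-httpget-container --previous   # logs of the previous, crashed run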

 

 

Problem 8: Pod creation fails?

readiness-httpget-pod 0/1 Pending 0 0s
readiness-httpget-pod 0/1 Pending 0 0s
readiness-httpget-pod 0/1 ContainerCreating 0 0s
readiness-httpget-pod 0/1 Error 0 2s
readiness-httpget-pod 0/1 Error 1 3s
readiness-httpget-pod 0/1 CrashLoopBackOff 1 4s
readiness-httpget-pod 0/1 Error 2 15s
readiness-httpget-pod 0/1 CrashLoopBackOff 2 26s
readiness-httpget-pod 0/1 Error 3 37s
readiness-httpget-pod 0/1 CrashLoopBackOff 3 52s
readiness-httpget-pod 0/1 Error 4 82s

Cause: a problem with the image prevents the container from starting; as the kubectl logs output below shows, the application inside crashes on launch.

[root@k8s-master01 ~]# kubectl logs readiness-httpget-pod
url.js:106
throw new errors.TypeError('ERR_INVALID_ARG_TYPE', 'url', 'string', url);
^

TypeError [ERR_INVALID_ARG_TYPE]: The "url" argument must be of type string. Received type undefined
at Url.parse (url.js:106:11)
at Object.urlParse [as parse] (url.js:100:13)
at module.exports (/myapp/node_modules/mongodb/lib/url_parser.js:17:23)
at connect (/myapp/node_modules/mongodb/lib/mongo_client.js:159:16)
at Function.MongoClient.connect (/myapp/node_modules/mongodb/lib/mongo_client.js:110:3)
at Object.<anonymous> (/myapp/app.js:12:13)
at Module._compile (module.js:641:30)
at Object.Module._extensions..js (module.js:652:10)
at Module.load (module.js:560:32)
at tryModuleLoad (module.js:503:12)
at Function.Module._load (module.js:495:3)
at Function.Module.runMain (module.js:682:10)
at startup (bootstrap_node.js:191:16)
at bootstrap_node.js:613:3

 

 Events:

Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulled 58m (x5 over 59m) kubelet, k8s-node01 Container image "hub.atguigu.com/library/myapp:v1" already present on machine
Normal Created 58m (x5 over 59m) kubelet, k8s-node01 Created container readiness-httpget-container
Normal Started 58m (x5 over 59m) kubelet, k8s-node01 Started container readiness-httpget-container
Warning BackOff 57m (x10 over 59m) kubelet, k8s-node01 Back-off restarting failed container
Normal Scheduled 3m35s default-scheduler Successfully assigned default/readiness-httpget-pod to k8s-node01

Fix: replace the image.
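To confirm the image itself is at fault (the Node.js app crashes because the MongoDB connection URL it reads is undefined), the image can be run outside the cluster on a node that already holds it (a sketch, assuming Docker is available there):

docker run --rm hub.atguigu.com/library/myapp:v1      # should reproduce the same ERR_INVALID_ARG_TYPE crash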

 

 

Problem 9: Pod never enters the Ready state?

readiness-httpget-pod   0/1     Running   0          116s

Cause: the request executed against the pod fails because the probed resource cannot be found; the readiness probe gets a 404 for /index1.html.

Error from server (NotFound): pods "pod" not found

2021/06/11 07:10:14 [error] 30#30: *1 open() "/usr/share/nginx/html/index1.html" failed (2: No such file or directory), client: 10.244.2.1, server: localhost, request: "GET /index1.html HTTP/1.1", host: "10.244.2.25:80"
10.244.2.1 - - [11/Jun/2021:07:10:14 +0000] "GET /index1.html HTTP/1.1" 404 153 "-" "kube-probe/1.15" "-"
10.244.2.1 - - [11/Jun/2021:07:10:17 +0000] "GET /index1.html HTTP/1.1" 404 153 "-" "kube-probe/1.15" "-"

 Events:

Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulled 64m kubelet, k8s-node01 Container image "hub.atguigu.com/library/nginx" already present on machine
Normal Created 64m kubelet, k8s-node01 Created container readiness-httpget-container
Normal Started 64m kubelet, k8s-node01 Started container readiness-httpget-container
Warning Unhealthy 59m (x101 over 64m) kubelet, k8s-node01 Readiness probe failed: HTTP probe failed with statuscode: 404
Normal Scheduled 8m16s default-scheduler Successfully assigned default/readiness-httpget-pod to k8s-node01

Fix: exec into the container and create the resource that the probe defined in the YAML expects.
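A sketch of creating the file the readiness probe requests, using the pod/container names and the nginx document root from the logs above:

kubectl exec readiness-httpget-pod -c readiness-httpget-container -- /bin/sh -c 'echo ok > /usr/share/nginx/html/index1.html'
kubectl get pod readiness-httpget-pod     # READY should flip to 1/1 once the probe succeeds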

 

Problem 10: Pod creation fails?

error: error validating "myregistry-secret.yml": error validating data: ValidationError(Pod.spec.imagePullSecrets[0]): invalid type for io.k8s.api.core.v1.LocalObjectReference: got "string", expected "map"; if you choose to ignore these errors, turn validation off with --validate=false

Cause: the yml content is wrong (full-width/Chinese characters were used); the error shows that imagePullSecrets[0] was parsed as a string where a map was expected.

Fix: correct the myregistrykey entry.
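For reference, the registry secret can be (re)generated from the command line, and the pod spec then references it by name as a list of maps under imagePullSecrets (a sketch; the registry address and credentials are placeholders):

kubectl create secret docker-registry myregistrykey \
  --docker-server=<registry-address> --docker-username=<user> --docker-password=<password>
# in the pod spec, each imagePullSecrets entry must be a map, e.g.:
#   imagePullSecrets:
#   - name: myregistrykey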

 

Problem 11: The kube-flannel-ds-amd64-ndsf7 plugin pod's status is Init:0/1?

Troubleshooting: kubectl -n kube-system describe pod kube-flannel-ds-amd64-ndsf7   # inspect the pod's description and events

Cause: the k8s-slave1 node failed to pull the image.

Fix: log in to k8s-slave1, restart the docker service, and pull the image manually.

Then, on the k8s-master node, reinstall the plugin:

kubectl create -f kube-flannel.yml;kubectl get nodes
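On the worker node the recovery roughly looks like this (a sketch; the exact flannel image name/tag is listed inside kube-flannel.yml and is only a placeholder here). The master-side reinstall is the command shown above; if the DaemonSet already exists, delete it first with kubectl delete -f kube-flannel.yml.

# on k8s-slave1
systemctl restart docker
docker pull <flannel-image:tag>      # the exact image is listed in kube-flannel.yml
docker images | grep flannel         # confirm the image is now present locally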

 

Problem 12: A newly created K8S workload has status ErrImagePull?

Troubleshooting: kubectl describe pod test-nginx

Cause: the image name being pulled is wrong.

Fix: delete the broken pod, then re-create it so the correct image is pulled:

kubectl delete pod test-nginx;kubectl run test-nginx --image=10.0.0.81:5000/nginx:alpine

 

Problem 13: Cannot exec into the specified container?

Error from server (BadRequest): container volume-test-container is not valid for pod volume-test-pod

Cause: the containers field appears twice in the yml file, so the resulting pod does not actually contain the intended container.

 

 

Fix: remove the redundant containers field from the yml file and re-create the pod.
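To see which containers actually exist in the pod, and therefore which -c values kubectl exec will accept, the spec can be queried directly (a diagnostic sketch; the container name is a placeholder):

kubectl get pod volume-test-pod -o jsonpath='{.spec.containers[*].name}'
kubectl exec -it volume-test-pod -c <existing-container-name> -- /bin/sh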

 

 

Problem 14: PV creation fails?

persistentvolume/nfspv1 unchanged
persistentvolume/nfspv01 created
Error from server (Invalid): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"PersistentVolume\",\"metadata\":{\"annotations\":{},\"name\":\"nfspv01\"},\"spec\":{\"accessModes\":[\"ReadWriteOnce\"],\"capacity\":{\"storage\":\"5Gi\"},\"nfs\":{\"path\":\"/nfs2\",\"server\":\"192.168.66.100\"},\"persistentVolumeReclaimPolicy\":\"Retain\",\"storageClassName\":\"nfs\"}}\n"}},"spec":{"nfs":{"path":"/nfs2"}}}
to:
Resource: "/v1, Resource=persistentvolumes", GroupVersionKind: "/v1, Kind=PersistentVolume"
Name: "nfspv01", Namespace: ""
Object: &{map["apiVersion":"v1" "kind":"PersistentVolume" "metadata":map["annotations":map["kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"PersistentVolume\",\"metadata\":{\"annotations\":{},\"name\":\"nfspv01\"},\"spec\":{\"accessModes\":[\"ReadWriteOnce\"],\"capacity\":{\"storage\":\"5Gi\"},\"nfs\":{\"path\":\"/nfs1\",\"server\":\"192.168.66.100\"},\"persistentVolumeReclaimPolicy\":\"Retain\",\"storageClassName\":\"nfs\"}}\n"] "creationTimestamp":"2021-06-25T01:54:24Z" "finalizers":["kubernetes.io/pv-protection"] "name":"nfspv01" "resourceVersion":"325674" "selfLink":"/api/v1/persistentvolumes/nfspv01" "uid":"89cb1d15-8012-47f0-aee6-6507bb624387"] "spec":map["accessModes":["ReadWriteOnce"] "capacity":map["storage":"5Gi"] "nfs":map["path":"/nfs1" "server":"192.168.66.100"] "persistentVolumeReclaimPolicy":"Retain" "storageClassName":"nfs" "volumeMode":"Filesystem"] "status":map["phase":"Available"]]}
for: "PV.yml": PersistentVolume "nfspv01" is invalid: spec.persistentvolumesource: Forbidden: is immutable after creation

Cause: the PV name is duplicated. The new definition reuses the name of an existing PV (nfspv01), so apply tries to patch that PV's spec, and persistentVolumeSource is immutable after creation.

Fix: change the PV's name field.

 

Problem 15: Pod cannot mount its PVC?

Cause: the PVC remains unbound, so the pod cannot mount it (see the event below).

Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 60s default-scheduler pod has unbound immediate PersistentVolumeClaims (repeated 2 times)

The requested accessModes do not match any available PV, so the PVC cannot be bound. Here only PVs larger than 1Gi with accessMode RWO can be bound, so only one pod is created successfully; the second pod stays Pending, and because the pods are created in order, the third pod is never created.

Fix: change the accessModes in the PVC yml, or the accessModes of the PV, so that they match.
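Comparing the claims against the available volumes shows which accessModes/capacity combination is missing (a diagnostic sketch; the PVC name is a placeholder):

kubectl get pv                      # capacity, access modes and status of every PV
kubectl get pvc                     # which claims are still Pending
kubectl describe pvc <pvc-name>     # the event explains why no PV matched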

 

Problem 16: After a pod uses a PV, its content cannot be accessed?

Cause: the NFS volume contains no files, or the permissions are wrong.

 

Fix: create the file in the NFS export and grant the proper permissions.
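On the NFS server the export can be populated and opened up, for example (a sketch; the export path /nfs1, the file name, the pod name and its mount path are all assumptions):

echo "hello from nfs" > /nfs1/index.html
chmod -R 755 /nfs1
# check from inside the pod; the mount path depends on the pod's volumeMounts
kubectl exec <pod-name> -- ls <mount-path>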

 

 

Problem 17: Viewing node status fails?

Error from server (NotFound): the server could not find the requested resource (get services http:heapster:)

Cause: there is no heapster service in the cluster.

Fix: install the Prometheus monitoring components.

 

 

Problem 18: Pod stays in Pending state?

Cause: a pod has already been published with the same image, leaving no node available for scheduling; the events show that none of the 3 nodes match the pod's node selector.

  Events:

Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 9s (x13 over 14m) default-scheduler 0/3 nodes are available: 3 node(s) didn't match node selector.

Fix: delete all the existing pods, then deploy the pod again.
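Since the events point at a node selector mismatch, it also helps to compare the pod's nodeSelector with the node labels (a diagnostic sketch; pod name, node name and the label key/value are placeholders):

kubectl get pod <pod-name> -o jsonpath='{.spec.nodeSelector}'   # what the pod asks for
kubectl get nodes --show-labels                                 # what the nodes actually carry
kubectl label node <node-name> <key>=<value>                    # add the missing label if that is the intended fix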

 

Problem 19: Installing a component with helm fails?

 [root@k8s-master01 hello-world]# helm install

Error: This command needs 1 argument: chart name

[root@k8s-master01 hello-world]# helm install ./
Error: no Chart.yaml exists in directory "/root/hello-world"

Cause: the chart metadata file is named incorrectly. Helm requires Chart.yaml (capital C), but the directory contains chart.yaml.

Fix: mv chart.yaml Chart.yaml

 

  

Problem 20: Updating a release with helm fails?

 [root@k8s-master01 hello-world]#

[root@k8s-master01 hello-world]# helm upgrade joyous-wasp ./
UPGRADE FAILED
ROLLING BACK
Error: render error in "hello-world/templates/deployment.yaml": template: hello-world/templates/deployment.yaml:14:35: executing "hello-world/templates/deployment.yaml" at <.values.image.reposi...>: can't evaluate field image in type interface {}
Error: UPGRADE FAILED: render error in "hello-world/templates/deployment.yaml": template: hello-world/templates/deployment.yaml:14:35: executing "hello-world/templates/deployment.yaml" at <.values.image.reposi...>: can't evaluate field image in type interface {}

Cause: a template error in the chart. The template references .values.image.repository, but Helm exposes values through the built-in .Values object (capital V), so .values evaluates to nil and the field lookup fails.

Fix: correct the template (change .values.image.repository to .Values.image.repository) and run the upgrade again.

 

 

Problem 21: etcd fails to start?

[root@k8s-master01 ~]# systemctl enable --now etcd
Created symlink from /etc/systemd/system/etcd3.service to /usr/lib/systemd/system/etcd.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
Job for etcd.service failed because a timeout was exceeded. See "systemctl status etcd.service" and "journalctl -xe" for details.

Cause: the TLS failures could come from certificates, configuration, or ports. The configuration matched the requirements of this etcd version and the certificates had been generated correctly; in the end the problem was that the port was already occupied by another process, which caused the authentication failures.

[root@k8s-master01 ~]# systemctl status etcd
● etcd.service - Etcd.service
Loaded: loaded (/usr/lib/systemd/system/etcd.service; enabled; vendor preset: disabled)
Active: activating (start) since Wed 2021-07-14 09:53:03 CST; 1min 6s ago
Docs: https://coreos.com/etcd/docs/latest/
Main PID: 39692 (etcd)
CGroup: /system.slice/etcd.service
└─39692 /usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml

Jul 14 09:54:09 k8s-master01 etcd[39692]: rejected connection from "192.168.0.108:46168" (error "remote error: tls: bad certificate", ServerName "")
Jul 14 09:54:09 k8s-master01 etcd[39692]: rejected connection from "192.168.0.108:46166" (error "remote error: tls: bad certificate", ServerName "")
Jul 14 09:54:09 k8s-master01 etcd[39692]: rejected connection from "192.168.0.108:46170" (error "remote error: tls: bad certificate", ServerName "")
Jul 14 09:54:09 k8s-master01 etcd[39692]: rejected connection from "192.168.0.108:46172" (error "remote error: tls: bad certificate", ServerName "")
Jul 14 09:54:09 k8s-master01 etcd[39692]: rejected connection from "192.168.0.108:46176" (error "remote error: tls: bad certificate", ServerName "")
Jul 14 09:54:09 k8s-master01 etcd[39692]: rejected connection from "192.168.0.108:46174" (error "remote error: tls: bad certificate", ServerName "")
Jul 14 09:54:09 k8s-master01 etcd[39692]: rejected connection from "192.168.0.108:46178" (error "remote error: tls: bad certificate", ServerName "")
Jul 14 09:54:09 k8s-master01 etcd[39692]: rejected connection from "192.168.0.108:46180" (error "remote error: tls: bad certificate", ServerName "")
Jul 14 09:54:10 k8s-master01 etcd[39692]: rejected connection from "192.168.0.108:46182" (error "remote error: tls: bad certificate", ServerName "")
Jul 14 09:54:10 k8s-master01 etcd[39692]: rejected connection from "192.168.0.108:46186" (error "remote error: tls: bad certificate", ServerName "") 

Fix: kill the process occupying port 2379 and restart etcd.
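A sketch of freeing the port and restarting etcd (whether ss or lsof is available depends on the host):

ss -lntp | grep 2379        # or: lsof -i :2379, to find the PID holding the port
kill <pid-using-2379>
systemctl restart etcd
systemctl status etcd       # should now report active (running)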

 
