Node Kubelet Error: node "xxxxx" not found [How to Solve]

1. [Problem]

The kubelet keeps logging the following errors:
Nov 05 10:58:35 localhost.localdomain kubelet[974]: E1105 10:58:35.108952     974 kubelet.go:2412] "Error getting node" err="node \"localhost.localdomain\" not found"
Nov 05 10:58:35 localhost.localdomain kubelet[974]: E1105 10:58:35.209293     974 kubelet.go:2412] "Error getting node" err="node \"localhost.localdomain\" not found"
Nov 05 10:58:35 localhost.localdomain kubelet[974]: E1105 10:58:35.310543     974 kubelet.go:2412] "Error getting node" err="node \"localhost.localdomain\" not found"
Nov 05 10:58:35 localhost.localdomain kubelet[974]: E1105 10:58:35.411121     974 kubelet.go:2412] "Error getting node" err="node \"localhost.localdomain\" not found"
Nov 05 10:58:35 localhost.localdomain kubelet[974]: E1105 10:58:35.511949     974 kubelet.go:2412] "Error getting node" err="node \"localhost.localdomain\" not found"
Nov 05 10:58:35 localhost.localdomain kubelet[974]: E1105 10:58:35.612822     974 kubelet.go:2412] "Error getting node" err="node \"localhost.localdomain\" not found"
Nov 05 10:58:35 localhost.localdomain kubelet[974]: E1105 10:58:35.713249     974 kubelet.go:2412] "Error getting node" err="node \"localhost.localdomain\" not found"
Nov 05 10:58:35 localhost.localdomain kubelet[974]: E1105 10:58:35.781263     974 controller.go:144] failed to ensure lease exists, will retry in 7s, error: leases.coordination.k8s.io "localhost.localdomain" is forbidden: User "system:node:k8s222" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease": can only access node lease with the same name as the requesting node
Nov 05 10:58:35 localhost.localdomain kubelet[974]: E1105 10:58:35.813355     974 kubelet.go:2412] "Error getting node" err="node \"localhost.localdomain\" not found"
Nov 05 10:58:35 localhost.localdomain kubelet[974]: E1105 10:58:35.913495     974 kubelet.go:2412] "Error getting node" err="node \"localhost.localdomain\" not found"
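The "failed to ensure lease exists" line is telling: the kubelet (authenticated as system:node:k8s222) is trying to renew a lease named localhost.localdomain, and node clients may only touch the Lease object whose name matches their own node name. The leases can be cross-checked from the control plane; this is a hedged sketch that assumes kubectl is installed and configured for the cluster.

```shell
# List the node leases: each node owns exactly one Lease object in the
# kube-node-lease namespace, named after the node itself.
# Guarded so the snippet degrades gracefully where kubectl is unavailable.
if command -v kubectl >/dev/null 2>&1; then
    kubectl -n kube-node-lease get lease || echo "could not reach the cluster"
else
    echo "kubectl not available on this host"
fi
```

A healthy cluster here would show leases named k8s220, k8s221, and k8s222; there is no lease named localhost.localdomain, which is why the renewal is forbidden.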

1.1 The node is always NotReady

[root@crust-m01 ~]# kubectl get node
NAME     STATUS     ROLES                  AGE   VERSION
k8s220   NotReady   control-plane,master   44d   v1.21.3
k8s221   NotReady   <none>                 44d   v1.21.3
k8s222   NotReady   <none>                 44d   v1.21.3

1.2 View the details of the node

[root@localhost ~]# kubectl describe node k8s221


……
Unschedulable:      false
Lease:
  HolderIdentity:  k8s221
  AcquireTime:     <unset>
  RenewTime:       Tue, 28 Sep 2021 14:37:08 +0800
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
  ----             ------    -----------------                 ------------------                ------              -------
  MemoryPressure   Unknown   Tue, 28 Sep 2021 14:32:16 +0800   Tue, 28 Sep 2021 14:38:17 +0800   NodeStatusUnknown   Kubelet stopped posting node status.
  DiskPressure     Unknown   Tue, 28 Sep 2021 14:32:16 +0800   Tue, 28 Sep 2021 14:38:17 +0800   NodeStatusUnknown   Kubelet stopped posting node status.
  PIDPressure      Unknown   Tue, 28 Sep 2021 14:32:16 +0800   Tue, 28 Sep 2021 14:38:17 +0800   NodeStatusUnknown   Kubelet stopped posting node status.
  Ready            Unknown   Tue, 28 Sep 2021 14:32:16 +0800   Tue, 28 Sep 2021 14:38:17 +0800   NodeStatusUnknown   Kubelet stopped posting node status.
……

1.3 View the kubelet logs on the node

[root@crust-m2 ~]# service kubelet status -l
Redirecting to /bin/systemctl status  -l kubelet.service
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Tue 2021-09-28 14:51:57 CST; 4min 6s ago
     Docs: https://kubernetes.io/docs/
 Main PID: 21165 (kubelet)
    Tasks: 19
   Memory: 43.0M
   CGroup: /system.slice/kubelet.service
           └─21165 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.4.1

Sep 28 14:56:03 crust-m2 kubelet[21165]: E0928 14:56:03.119645   21165 kubelet.go:2291] "Error getting node" err="node \"crust-m2\" not found"
Sep 28 14:56:03 crust-m2 kubelet[21165]: E0928 14:56:03.220694   21165 kubelet.go:2291] "Error getting node" err="node \"crust-m2\" not found"
Sep 28 14:56:03 crust-m2 kubelet[21165]: E0928 14:56:03.321635   21165 kubelet.go:2291] "Error getting node" err="node \"crust-m2\" not found"
Sep 28 14:56:03 crust-m2 kubelet[21165]: E0928 14:56:03.385100   21165 eviction_manager.go:255] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crust-m2\" not found"
Sep 28 14:56:03 crust-m2 kubelet[21165]: E0928 14:56:03.422387   21165 kubelet.go:2291] "Error getting node" err="node \"crust-m2\" not found"
Sep 28 14:56:03 crust-m2 kubelet[21165]: E0928 14:56:03.523341   21165 kubelet.go:2291] "Error getting node" err="node \"crust-m2\" not found"
Sep 28 14:56:03 crust-m2 kubelet[21165]: E0928 14:56:03.624021   21165 kubelet.go:2291] "Error getting node" err="node \"crust-m2\" not found"
Sep 28 14:56:03 crust-m2 kubelet[21165]: E0928 14:56:03.724418   21165 kubelet.go:2291] "Error getting node" err="node \"crust-m2\" not found"
Sep 28 14:56:03 crust-m2 kubelet[21165]: E0928 14:56:03.825475   21165 kubelet.go:2291] "Error getting node" err="node \"crust-m2\" not found"
Sep 28 14:56:03 crust-m2 kubelet[21165]: E0928 14:56:03.926199   21165 kubelet.go:2291] "Error getting node" err="node \"crust-m2\" not found"
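The `service kubelet status -l` output above only shows the most recent lines. On a systemd host the full kubelet history can be read from the journal; a small guarded sketch:

```shell
# Tail the kubelet unit's journal: same content as "service kubelet status -l",
# but with full history and the option to follow live output (-f).
if command -v journalctl >/dev/null 2>&1; then
    journalctl -u kubelet --no-pager -n 50 || echo "no kubelet journal on this host"
else
    echo "journalctl not available on this host"
fi
```

Running `journalctl -u kubelet -f` instead streams new log lines as the kubelet produces them, which is convenient while restarting the service.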

2. [Troubleshooting]

The kubelet startup command captured in the log output of 1.3 is:

/usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.4.1

 

Reviewing all of the configuration files referenced in this startup command turned up no problems.

However, the error output in 1.3 says node "localhost.localdomain" not found, while kubectl get node on the master lists the nodes as k8s220, k8s221, and k8s222.

Conclusion

When Kubernetes was installed, the master was named k8s220 and the worker nodes k8s221 and k8s222, but /etc/hostname on this machine was left at its default value, localhost.localdomain. The kubelet therefore keeps looking up a node named localhost.localdomain, which does not exist in the cluster, and reports errors continuously.
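The mismatch can be confirmed directly on the affected node. A minimal check, assuming this particular node should be named k8s222 (substitute the name from the kubectl get node output for the node you are on):

```shell
# Compare the kernel hostname with the node name the cluster expects.
# EXPECTED_NODE_NAME is an assumption taken from "kubectl get node" above.
EXPECTED_NODE_NAME="k8s222"
ACTUAL_HOSTNAME="$(hostname)"
echo "kernel hostname:    ${ACTUAL_HOSTNAME}"
echo "expected node name: ${EXPECTED_NODE_NAME}"
if [ "${ACTUAL_HOSTNAME}" != "${EXPECTED_NODE_NAME}" ]; then
    echo "MISMATCH: kubelet will look for node \"${ACTUAL_HOSTNAME}\", which the API server does not have"
fi
```

On the broken node this prints a mismatch between localhost.localdomain and k8s222, exactly matching the "node not found" and forbidden-lease errors in the logs.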

3. [Modification]

Edit /etc/hostname and run the hostname command so that the server name matches the node name registered in the cluster, then restart the kubelet.
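The fix can be sketched as follows, assuming this node should be named k8s222 (use the name kubectl expects for your node). The snippet only acts when run as root on a live systemd host; otherwise it just prints the commands it would run:

```shell
# Set the hostname to the expected node name and restart the kubelet.
# NODE_NAME is an assumption; take it from the "kubectl get node" output.
NODE_NAME="k8s222"
if [ "$(id -u)" -eq 0 ] && [ -d /run/systemd/system ] \
        && command -v hostnamectl >/dev/null 2>&1; then
    hostnamectl set-hostname "${NODE_NAME}"   # also rewrites /etc/hostname
    systemctl restart kubelet                 # kubelet re-registers under the new name
else
    echo "would run: hostnamectl set-hostname ${NODE_NAME}"
    echo "would run: systemctl restart kubelet"
fi
```

hostnamectl persists the name across reboots, so the kubelet keeps reporting under the correct node name after the restart; editing /etc/hostname by hand plus running hostname achieves the same result on hosts without hostnamectl.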
