[Solved] Kubeadm Reset error: etcdserver: re-configuration failed due to not enough started members

Error information:

[root@bogon log]# kubeadm reset
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed?[y/N]: y
[preflight] Running pre-flight checks
[reset] Removing info for node "bogon" from the ConfigMap "kubeadm-config" in the "kube-system" Namespace
{"level":"warn","ts":"2021-07-03T08:19:14.041-0400","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"endpoint://client-7295b53f-6c7d-4a5e-8795-ab4b33048049/192.168.28.128:2379","attempt":0,"error":"rpc error: code = Unknown desc = etcdserver: re-configuration failed due to not enough started members"}
{"level":"warn","ts":"2021-07-03T08:19:14.096-0400","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"endpoint://client-7295b53f-6c7d-4a5e-8795-ab4b33048049/192.168.28.128:2379","attempt":0,"error":"rpc error: code = Unknown desc = etcdserver: re-configuration failed due to not enough started members"}

Solution:

The reset fails because kubeadm tries to remove this node's member from the etcd cluster, but not enough etcd members are started to accept the re-configuration. Deleting the stale local kubeadm configuration lets the reset proceed without that step.

Execute the following two commands:

rm -rf /etc/kubernetes/*
rm -rf /root/.kube/

Then run kubeadm reset again:

kubeadm reset
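The steps above can be combined into a single script. This is a sketch under some assumptions: it must run as root on the affected node, it is destructive (it wipes the local kubeadm and kubectl configuration), and it uses the real kubeadm reset -f flag to skip the y/N confirmation prompt.

```shell
#!/bin/sh
# Sketch: combine the cleanup steps and the reset into one script.
# Assumes root on the affected node; destructive -- it removes the
# local kubeadm/kubectl configuration.
set -eu

# Remove the stale kubeadm-generated state that blocks the reset
rm -rf /etc/kubernetes/* /root/.kube/

# Re-run the reset; -f skips the confirmation prompt.
# Guarded so the script exits cleanly on hosts without kubeadm.
if command -v kubeadm >/dev/null 2>&1; then
    kubeadm reset -f
else
    echo "kubeadm not found; skipping reset" >&2
fi
```

After the reset completes, the node can be re-initialized with kubeadm init or rejoined with kubeadm join.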
