Tag Archives: kubernetes

Rancher application service error: request entity too large

The request entity too large error occurs because the request body exceeds the ingress-nginx default limit of 1 MB.

1. Set the body-size limit on the Ingress in Rancher.

Annotation to configure: nginx.ingress.kubernetes.io/proxy-body-size (see the sketch below)
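A minimal sketch of what the annotation looks like on an Ingress resource (the Ingress name, host, backend Service, and the 50m limit are illustrative assumptions):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app                         # hypothetical Ingress name
  annotations:
    # raise the allowed request body size from the 1m default
    nginx.ingress.kubernetes.io/proxy-body-size: "50m"
spec:
  rules:
    - host: app.example.com            # hypothetical host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app           # hypothetical backend Service
                port:
                  number: 80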

2. For Spring Boot 2.0, add the following to the application configuration file:

spring.servlet.multipart.max-file-size=1024MB
spring.servlet.multipart.max-request-size=1024MB

K8s Error: pod in version "v1" cannot be handled as a Pod [How to Solve]

Error Messages:

[[email protected] ~]# kubectl create -f pod-nginx.yaml 
namespace/dev created
Error from server (BadRequest): error when creating "pod-nginx.yaml": pod in version "v1" cannot be handled as a Pod: no kind "pod" is registered for version "v1" in scheme "k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:30"

 

 

Solution:

Check the YAML file; the cause is marked below:

apiVersion: v1
kind: pod  # Here it should be Pod, P should be capitalized
metadata:
    name: nginxpod
    namespace: dev
spec:
    containers:
    - name: nginx-containers
      image: nginx:latest

[Solved] K8s Initialize Error: failed with error: Get "http://localhost:10248/healthz"

Environment

Server: CentOS 7
Docker: 20.10.12
kubeadm: v1.23.1
Kubernetes: v1.23.1

Symptom

After Docker and the related k8s components were installed, kubeadm init failed while initializing the master node.
The command executed:

kubeadm init \
--apiserver-advertise-address=Server_IP \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version=v1.23.1 \
--pod-network-cidr=10.244.0.0/16 \
--service-cidr=10.96.0.0/12 

Error output

[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.

As the message suggests, you can run journalctl -xeu kubelet (or journalctl -xeu kubelet -l) to view the detailed error information. If the lines are truncated, use the arrow keys to scroll horizontally.

For example:

[[email protected] ~]# journalctl -xeu kubelet
Dec 24 20:24:13 k8s-node01 kubelet[9127]: I1224 20:24:13.456712    9127 cni.go:240] "Unable to update cni config" err="no 
Dec 24 20:24:13 k8s-node01 kubelet[9127]: I1224 20:24:13.476156    9127 docker_service.go:264] "Docker Info" dockerInfo=&{
Dec 24 20:24:13 k8s-node01 kubelet[9127]: E1224 20:24:13.476236    9127 server.go:302] "Failed to run kubelet" err="failed
Dec 24 20:24:13 k8s-node01 systemd[1]: kubelet.service: main process exited, code=exited, status=1/FAILURE
Dec 24 20:24:13 k8s-node01 systemd[1]: Unit kubelet.service entered failed state.
Dec 24 20:24:13 k8s-node01 systemd[1]: kubelet.service failed.

Scroll right with the arrow keys to see the rest of the truncated lines:

ID:ZYIL:OO24:BWLY:DTTB:TDKT:D3MZ:YGJ4:3ZOU:7DDY:YYPQ:DPWM:ERFV Containers:0 ContainersRunning:0 ContainersPaused:0 Contain
 to run Kubelet: misconfiguration: kubelet cgroup driver: \"systemd\" is different from docker cgroup driver: \"cgroupfs\"

Error cause

According to the error output above, the problem is that the cgroup drivers of k8s and Docker do not match: the kubelet uses systemd, while Docker uses cgroupfs.
Run

docker info

and check the Cgroup Driver line; it shows either systemd or cgroupfs. Here Docker reports cgroupfs, while the kubelet expects systemd.
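A quick way to check only the driver line (a sketch using standard docker and grep):

docker info 2>/dev/null | grep -i "cgroup driver"
# Cgroup Driver: cgroupfs   <-- mismatches the kubelet's systemd driver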

Solution:

Change Docker's cgroup driver to systemd.
Edit Docker's configuration file (create it if it does not exist):

vi /etc/docker/daemon.json

and add:

{
    ...
    "exec-opts": ["native.cgroupdriver=systemd"]
    ...
}

Then restart Docker:

systemctl restart docker 

Then re-run kubeadm init, as sketched below.
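A minimal sketch of the re-initialization, assuming the earlier failed attempt left state behind (kubeadm reset clears it; the flags are the same as above):

kubeadm reset -f
kubeadm init \
  --apiserver-advertise-address=Server_IP \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version=v1.23.1 \
  --pod-network-cidr=10.244.0.0/16 \
  --service-cidr=10.96.0.0/12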

[Solved] k8s error retrieving resource lock default/fuseim.pri-ifs: Unauthorized

When installing Prometheus with Helm, the nfs-client-provisioner ServiceAccount was deployed in the default namespace and the error in the title appeared.

[[email protected] NFS]$ vim nfs-rbac.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  #namespace: nfs-client

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]   ## Deploy to the default namespace to report an error title error
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io

---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]

---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io


Logs of the nfs-client-provisioner pod (kubectl logs nfs-client-provisioner-764f44f754-wdtqp):

E1206 08:52:27.293890       1 leaderelection.go:234] error retrieving resource lock default/fuseim.pri-ifs: endpoints "fuseim.pri-ifs" is forbidden: User "system:serviceaccount:default:nfs-client-provisioner" cannot get resource "endpoints" in API group "" in the namespace "default"

Modify the ClusterRole to adjust the permissions:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["get", "list", "watch", "create", "update", "patch"] ### 把权限修改为这个(default namespace)

[Solved] Docker failed to start daemon: error initializing graphdriver: driver not supported

When the kubelet node joins the cluster, an error is reported that the vfs graph driver is not supported:

[ERROR SystemVerification]: unsupported graph driver: vfs

/etc/docker/daemon.json

{
        "registry-mirrors":["https://registry.docker-cn.com"],
        "bridge":"nufront-br",
        "storage-driver":"devicemapper",   ####
        "exec-opts": ["native.cgroupdriver=systemd"],
        "insecure-registries": ["hadoop03:5000"]
}

systemctl daemon-reload
service docker start   # fails with: error initializing graphdriver: driver not supported

Reference: https://github.com/moby/moby/issues/15651. It turned out that Docker CE on this node had been installed from a binary tarball with a manually configured service, rather than through yum (offline environment…).

#### 

[[email protected] bin]# cd /opt/module/docker/
[[email protected] docker]# ll

-rwxr-xr-x 1 root root 39593864 Nov 23 11:12 containerd
-rwxr-xr-x 1 root root 21508168 Nov 23 11:12 ctr
-rwxr-xr-x 1 root root 60073904 Nov 23 11:12 docker
-rwxr-xr-x 1 root root 78951368 Nov 23 11:12 dockerd
-rwxr-xr-x 1 root root   708616 Nov 23 11:12 docker-init
-rwxr-xr-x 1 root root  2933646 Nov 23 11:12 docker-proxy


Try the RPM installation instead:

#######
[[email protected] docker]# ll
total 350072
-rw-r--r-- 1 root root   104408 Nov 23 11:12 audit-libs-2.8.5-4.el7.x86_64.rpm
-rw-r--r-- 1 root root    78256 Nov 23 11:12 audit-libs-python-2.8.5-4.el7.x86_64.rpm
-rwxr-xr-x 1 root root 39593864 Nov 23 11:12 containerd
-rw-r--r-- 1 root root 35130608 Nov 23 11:12 containerd.io-1.4.6-3.1.el7.x86_64.rpm
-rwxr-xr-x 1 root root  7270400 Nov 23 11:12 containerd-shim
-rwxr-xr-x 1 root root  9953280 Nov 23 11:12 containerd-shim-runc-v2
-rw-r--r-- 1 root root    40816 Nov 23 11:12 container-selinux-2.119.2-1.911c772.el7_8.noarch.rpm
-rwxr-xr-x 1 root root 21508168 Nov 23 11:12 ctr
-rwxr-xr-x 1 root root 60073904 Nov 23 11:12 docker
-rw-r--r-- 1 root root 27902344 Nov 23 11:12 docker-ce-20.10.7-3.el7.x86_64 (1).rpm
-rw-r--r-- 1 root root 34717572 Nov 23 11:12 docker-ce-cli-20.10.7-3.el7.x86_64.rpm
-rw-r--r-- 1 root root  9659320 Nov 23 11:12 docker-ce-rootless-extras-20.10.7-3.el7.x86_64.rpm
-rwxr-xr-x 1 root root 78951368 Nov 23 11:12 dockerd
-rwxr-xr-x 1 root root   708616 Nov 23 11:12 docker-init
-rwxr-xr-x 1 root root  2933646 Nov 23 11:12 docker-proxy
-rw-r--r-- 1 root root  4373740 Nov 23 11:12 docker-scan-plugin-0.8.0-3.el7.x86_64.rpm
-rwxr-xr-x 1 root root     1200 Nov 23 11:12 docker.service
-rw-r--r-- 1 root root    83764 Nov 23 11:12 fuse3-libs-3.6.1-4.el7.x86_64.rpm
-rw-r--r-- 1 root root    95424 Nov 23 11:12 fuse-libs-2.9.2-11.el7.x86_64.rpm
-rw-r--r-- 1 root root    55796 Nov 23 11:12 fuse-overlayfs-0.7.2-6.el7_8.x86_64.rpm
-rw-r--r-- 1 root root    67720 Nov 23 11:12 libcgroup-0.41-21.el7.x86_64.rpm
-rw-r--r-- 1 root root   101800 Nov 23 11:12 libcgroup-tools-0.41-21.el7.x86_64.rpm
-rw-r--r-- 1 root root    56824 Nov 23 11:12 libnetfilter_conntrack-1.0.6-1.el7_3.x86_64.rpm
-rw-r--r-- 1 root root    57460 Nov 23 11:12 libseccomp-2.3.1-4.el7.x86_64.rpm
-rw-r--r-- 1 root root   166012 Nov 23 11:12 libselinux-2.5-15.el7.x86_64.rpm
-rw-r--r-- 1 root root   154876 Nov 23 11:12 libselinux-utils-2.5-15.el7.x86_64.rpm
-rw-r--r-- 1 root root   154244 Nov 23 11:12 libsemanage-2.5-14.el7.x86_64.rpm
-rw-r--r-- 1 root root   115284 Nov 23 11:12 libsemanage-python-2.5-14.el7.x86_64.rpm
-rw-r--r-- 1 root root   304196 Nov 23 11:12 libsepol-2.5-10.el7.x86_64.rpm
-rw-r--r-- 1 root root    78740 Nov 23 11:12 libsepol-devel-2.5-10.el7.x86_64 (1).rpm
-rw-r--r-- 1 root root    78740 Nov 23 11:12 libsepol-devel-2.5-10.el7.x86_64.rpm
-rw-r--r-- 1 root root   938736 Nov 23 11:12 policycoreutils-2.5-34.el7.x86_64.rpm
-rw-r--r-- 1 root root   468316 Nov 23 11:12 policycoreutils-python-2.5-34.el7.x86_64.rpm
-rwxr-xr-x 1 root root 14485560 Nov 23 11:12 runc
-rw-r--r-- 1 root root   509568 Nov 23 11:12 selinux-policy-3.13.1-268.el7_9.2.noarch.rpm
-rw-r--r-- 1 root root  7335504 Nov 23 11:12 selinux-policy-targeted-3.13.1-268.el7_9.2.noarch.rpm
-rw-r--r-- 1 root root    83452 Nov 23 11:12 slirp4netns-0.4.3-4.el7_8.x86_64.rpm

[[email protected] docker]# rpm -ivh *.rpm  --nodeps --force 


[[email protected] docker]# yum list installed | grep docker
docker-ce.x86_64                        3:20.10.7-3.el7                installed
docker-ce-cli.x86_64                    1:20.10.7-3.el7                installed
docker-ce-rootless-extras.x86_64        20.10.7-3.el7                  installed
docker-scan-plugin.x86_64               0.8.0-3.el7                    installed

Docker can now be started successfully:

[[email protected] docker]# docker info
Client:
 Context:    default
 Debug Mode: false
 Plugins:
  app: Docker App (Docker Inc., v0.9.1-beta3)
  buildx: Build with BuildKit (Docker Inc., v0.5.1-docker)
  scan: Docker Scan (Docker Inc., v0.8.0)

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 0
 Server Version: 20.10.7
 Storage Driver: devicemapper ###
  Pool Name: docker-253:0-812466384-pool
  Pool Blocksize: 65.54kB
  Base Device Size: 10.74GB
  Backing Filesystem: xfs
  Udev Sync Supported: true
  Data file: /dev/loop0
  Metadata file: /dev/loop1
  Data loop file: /var/lib/docker/devicemapper/devicemapper/data
  Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
  Data Space Used: 11.8MB
  Data Space Total: 107.4GB
  Data Space Available: 107.4GB
  Metadata Space Used: 581.6kB
  Metadata Space Total: 2.147GB
  Metadata Space Available: 2.147GB
  Thin Pool Minimum Free Space: 10.74GB
  Deferred Removal Enabled: true
  Deferred Deletion Enabled: true
  Deferred Deleted Device Count: 0
  Library Version: 1.02.107-RHEL7 (2015-10-14)
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 1
 Plugins:
...

Nginx Container Error: nginx: [emerg] mkdir() "/var/cache/nginx/client_temp" failed (13: Permission denied)

Symptom

An nginx image that previously ran fine under plain Docker reported nginx: [emerg] mkdir() "/var/cache/nginx/client_temp" failed (13: Permission denied) when the same image was started under k8s. The error appeared only in one specific namespace. The working environment ran Docker 17.03.3-ce; the failing one ran Docker 19.03.4 with the overlay2 storage driver.

Analysis

The error message clearly points to a permission problem. I had run into similar nginx permission issues before, caused by SELinux; disabling SELinux fixed them (see "Disabling SELinux on CentOS 7.x" for the procedure).
While looking into why the k8s startup failed, I also found the blog post "Unable to run Nginx Docker due to 13: permission denied", which suggests marking the container_t domain as permissive in SELinux with the commands below, but that did not help either.

semanage permissive -a container_t
semodule -l | grep permissive

Other attempts

I also tried to solve the problem by configuring a securityContext on the pod/container. The securityContext section of the YAML was:

  securityContext:
    fsGroup: 1000
    runAsGroup: 1000
    runAsUser: 1000
    runAsNonRoot: true
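For context, a sketch of how that securityContext would sit inside a Pod spec (the pod name and image here are assumptions); in this case it still did not resolve the permission error:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-nonroot        # hypothetical name
spec:
  securityContext:
    fsGroup: 1000
    runAsGroup: 1000
    runAsUser: 1000
    runAsNonRoot: true
  containers:
  - name: nginx
    image: nginx:latest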

 

Finally

In the end the only workaround was to build an nginx image that is started by a non-root user, following the https://github.com/nginxinc/docker-nginx-unprivileged project.
First check the user ID and group ID your pod starts with; you can use id <username>, for example:

[[email protected] ~]$ id deploy
uid=1000(deploy) gid=1000(deploy) groups=1000(deploy),980(docker)

Modify the UID and GID in the project's Dockerfile to the IDs of your user; my user ID and group ID are both 1000.
I also added a line that switches to the Aliyun mirror, otherwise building the image is particularly slow; you can add your own customizations as well. Note that the image exposes port 8080 instead of port 80, because a non-root user cannot bind to port 80.

Dockerfile:

#
# NOTE: THIS DOCKERFILE IS GENERATED VIA "update.sh"
#
# PLEASE DO NOT EDIT IT DIRECTLY.
#
ARG IMAGE=alpine:3.13
FROM $IMAGE

LABEL maintainer="NGINX Docker Maintainers <[email protected]>"

ENV NGINX_VERSION 1.20.1
ENV NJS_VERSION   0.5.3
ENV PKG_RELEASE   1

ARG UID=1000
ARG GID=1000

RUN set -x \
    && sed -i 's/dl-cdn.alpinelinux.org/mirrors.aliyun.com/g' /etc/apk/repositories \
# create nginx user/group first, to be consistent throughout docker variants
    && addgroup -g $GID -S nginx \
    && adduser -S -D -H -u $UID -h /var/cache/nginx -s /sbin/nologin -G nginx -g nginx nginx \
    && apkArch="$(cat /etc/apk/arch)" \
    && nginxPackages=" \
        nginx=${NGINX_VERSION}-r${PKG_RELEASE} \
        nginx-module-xslt=${NGINX_VERSION}-r${PKG_RELEASE} \
        nginx-module-geoip=${NGINX_VERSION}-r${PKG_RELEASE} \
        nginx-module-image-filter=${NGINX_VERSION}-r${PKG_RELEASE} \
        nginx-module-njs=${NGINX_VERSION}.${NJS_VERSION}-r${PKG_RELEASE} \
    " \
    && case "$apkArch" in \
        x86_64|aarch64) \
# arches officially built by upstream
            set -x \
            && KEY_SHA512="e7fa8303923d9b95db37a77ad46c68fd4755ff935d0a534d26eba83de193c76166c68bfe7f65471bf8881004ef4aa6df3e34689c305662750c0172fca5d8552a *stdin" \
            && apk add --no-cache --virtual .cert-deps \
                openssl \
            && wget -O /tmp/nginx_signing.rsa.pub https://nginx.org/keys/nginx_signing.rsa.pub \
            && if [ "$(openssl rsa -pubin -in /tmp/nginx_signing.rsa.pub -text -noout | openssl sha512 -r)" = "$KEY_SHA512" ]; then \
                echo "key verification succeeded!"; \
                mv /tmp/nginx_signing.rsa.pub /etc/apk/keys/; \
            else \
                echo "key verification failed!"; \
                exit 1; \
            fi \
            && apk del .cert-deps \
            && apk add -X "https://nginx.org/packages/alpine/v$(egrep -o '^[0-9]+\.[0-9]+' /etc/alpine-release)/main" --no-cache $nginxPackages \
            ;; \
        *) \
# we're on an architecture upstream doesn't officially build for
# let's build binaries from the published packaging sources
            set -x \
            && tempDir="$(mktemp -d)" \
            && chown nobody:nobody $tempDir \
            && apk add --no-cache --virtual .build-deps \
                gcc \
                libc-dev \
                make \
                openssl-dev \
                pcre-dev \
                zlib-dev \
                linux-headers \
                libxslt-dev \
                gd-dev \
                geoip-dev \
                perl-dev \
                libedit-dev \
                mercurial \
                bash \
                alpine-sdk \
                findutils \
            && su nobody -s /bin/sh -c " \
                export HOME=${tempDir} \
                && cd ${tempDir} \
                && hg clone https://hg.nginx.org/pkg-oss \
                && cd pkg-oss \
                && hg up ${NGINX_VERSION}-${PKG_RELEASE} \
                && cd alpine \
                && make all \
                && apk index -o ${tempDir}/packages/alpine/${apkArch}/APKINDEX.tar.gz ${tempDir}/packages/alpine/${apkArch}/*.apk \
                && abuild-sign -k ${tempDir}/.abuild/abuild-key.rsa ${tempDir}/packages/alpine/${apkArch}/APKINDEX.tar.gz \
                " \
            && cp ${tempDir}/.abuild/abuild-key.rsa.pub /etc/apk/keys/ \
            && apk del .build-deps \
            && apk add -X ${tempDir}/packages/alpine/ --no-cache $nginxPackages \
            ;; \
    esac \
# if we have leftovers from building, let's purge them (including extra, unnecessary build deps)
    && if [ -n "$tempDir" ]; then rm -rf "$tempDir"; fi \
    && if [ -n "/etc/apk/keys/abuild-key.rsa.pub" ]; then rm -f /etc/apk/keys/abuild-key.rsa.pub; fi \
    && if [ -n "/etc/apk/keys/nginx_signing.rsa.pub" ]; then rm -f /etc/apk/keys/nginx_signing.rsa.pub; fi \
# Bring in gettext so we can get `envsubst`, then throw
# the rest away. To do this, we need to install `gettext`
# then move `envsubst` out of the way so `gettext` can
# be deleted completely, then move `envsubst` back.
    && apk add --no-cache --virtual .gettext gettext \
    && mv /usr/bin/envsubst /tmp/ \
    \
    && runDeps="$( \
        scanelf --needed --nobanner /tmp/envsubst \
            | awk '{ gsub(/,/, "\nso:", $2); print "so:" $2 }' \
            | sort -u \
            | xargs -r apk info --installed \
            | sort -u \
    )" \
    && apk add --no-cache $runDeps \
    && apk del .gettext \
    && mv /tmp/envsubst /usr/local/bin/ \
# Bring in tzdata so users could set the timezones through the environment
# variables
    && apk add --no-cache tzdata \
# Bring in curl and ca-certificates to make registering on DNS SD easier
    && apk add --no-cache curl ca-certificates \
# forward request and error logs to docker log collector
    && ln -sf /dev/stdout /var/log/nginx/access.log \
    && ln -sf /dev/stderr /var/log/nginx/error.log \
# create a docker-entrypoint.d directory
    && mkdir /docker-entrypoint.d

# implement changes required to run NGINX as an unprivileged user
RUN sed -i 's,listen       80;,listen       8080;,' /etc/nginx/conf.d/default.conf \
    && sed -i '/user  nginx;/d' /etc/nginx/nginx.conf \
    && sed -i 's,/var/run/nginx.pid,/tmp/nginx.pid,' /etc/nginx/nginx.conf \
    && sed -i "/^http {/a \    proxy_temp_path /tmp/proxy_temp;\n    client_body_temp_path /tmp/client_temp;\n    fastcgi_temp_path /tmp/fastcgi_temp;\n    uwsgi_temp_path /tmp/uwsgi_temp;\n    scgi_temp_path /tmp/scgi_temp;\n" /etc/nginx/nginx.conf \
# nginx user must own the cache and etc directory to write cache and tweak the nginx config
    && chown -R $UID:0 /var/cache/nginx \
    && chmod -R g+w /var/cache/nginx \
    && chown -R $UID:0 /etc/nginx \
    && chmod -R g+w /etc/nginx

COPY docker-entrypoint.sh /
COPY 10-listen-on-ipv6-by-default.sh /docker-entrypoint.d
COPY 20-envsubst-on-templates.sh /docker-entrypoint.d
COPY 30-tune-worker-processes.sh /docker-entrypoint.d
RUN  chmod 755 /docker-entrypoint.sh \
     && chmod 755 /docker-entrypoint.d/*.sh

ENTRYPOINT ["/docker-entrypoint.sh"]

EXPOSE 8080

STOPSIGNAL SIGQUIT

USER $UID

CMD ["nginx", "-g", "daemon off;"]

10-listen-on-ipv6-by-default.sh:

#!/bin/sh
# vim:sw=4:ts=4:et

set -e

ME=$(basename $0)
DEFAULT_CONF_FILE="etc/nginx/conf.d/default.conf"

# check if we have ipv6 available
if [ ! -f "/proc/net/if_inet6" ]; then
    echo >&3 "$ME: info: ipv6 not available"
    exit 0
fi

if [ ! -f "/$DEFAULT_CONF_FILE" ]; then
    echo >&3 "$ME: info: /$DEFAULT_CONF_FILE is not a file or does not exist"
    exit 0
fi

# check if the file can be modified, e.g. not on a r/o filesystem
touch /$DEFAULT_CONF_FILE 2>/dev/null || { echo >&3 "$ME: info: can not modify /$DEFAULT_CONF_FILE (read-only file system?)"; exit 0; }

# check if the file is already modified, e.g. on a container restart
grep -q "listen  \[::]\:8080;" /$DEFAULT_CONF_FILE && { echo >&3 "$ME: info: IPv6 listen already enabled"; exit 0; }

if [ -f "/etc/os-release" ]; then
    . /etc/os-release
else
    echo >&3 "$ME: info: can not guess the operating system"
    exit 0
fi

echo >&3 "$ME: info: Getting the checksum of /$DEFAULT_CONF_FILE"

case "$ID" in
    "debian")
        CHECKSUM=$(dpkg-query --show --showformat='${Conffiles}\n' nginx | grep $DEFAULT_CONF_FILE | cut -d' ' -f 3)
        echo "$CHECKSUM  /$DEFAULT_CONF_FILE" | md5sum -c - >/dev/null 2>&1 || {
            echo >&3 "$ME: info: /$DEFAULT_CONF_FILE differs from the packaged version"
            exit 0
        }
        ;;
    "alpine")
        CHECKSUM=$(apk manifest nginx 2>/dev/null| grep $DEFAULT_CONF_FILE | cut -d' ' -f 1 | cut -d ':' -f 2)
        echo "$CHECKSUM  /$DEFAULT_CONF_FILE" | sha1sum -c - >/dev/null 2>&1 || {
            echo >&3 "$ME: info: /$DEFAULT_CONF_FILE differs from the packaged version"
            exit 0
        }
        ;;
    *)
        echo >&3 "$ME: info: Unsupported distribution"
        exit 0
        ;;
esac

# enable ipv6 on default.conf listen sockets
sed -i -E 's,listen       8080;,listen       8080;\n    listen  [::]:8080;,' /$DEFAULT_CONF_FILE

echo >&3 "$ME: info: Enabled listen on IPv6 in /$DEFAULT_CONF_FILE"

exit 0

20-envsubst-on-templates.sh:

#!/bin/sh

set -e

ME=$(basename $0)

auto_envsubst() {
  local template_dir="${NGINX_ENVSUBST_TEMPLATE_DIR:-/etc/nginx/templates}"
  local suffix="${NGINX_ENVSUBST_TEMPLATE_SUFFIX:-.template}"
  local output_dir="${NGINX_ENVSUBST_OUTPUT_DIR:-/etc/nginx/conf.d}"

  local template defined_envs relative_path output_path subdir
  defined_envs=$(printf '${%s} ' $(env | cut -d= -f1))
  [ -d "$template_dir" ] || return 0
  if [ ! -w "$output_dir" ]; then
    echo >&3 "$ME: ERROR: $template_dir exists, but $output_dir is not writable"
    return 0
  fi
  find "$template_dir" -follow -type f -name "*$suffix" -print | while read -r template; do
    relative_path="${template#$template_dir/}"
    output_path="$output_dir/${relative_path%$suffix}"
    subdir=$(dirname "$relative_path")
    # create a subdirectory where the template file exists
    mkdir -p "$output_dir/$subdir"
    echo >&3 "$ME: Running envsubst on $template to $output_path"
    envsubst "$defined_envs" < "$template" > "$output_path"
  done
}

auto_envsubst

exit 0

30-tune-worker-processes.sh:

#!/bin/sh
# vim:sw=2:ts=2:sts=2:et

set -eu

LC_ALL=C
ME=$( basename "$0" )
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

[ "${NGINX_ENTRYPOINT_WORKER_PROCESSES_AUTOTUNE:-}" ] || exit 0

touch /etc/nginx/nginx.conf 2>/dev/null || { echo >&2 "$ME: error: can not modify /etc/nginx/nginx.conf (read-only file system?)"; exit 0; }

ceildiv() {
  num=$1
  div=$2
  echo $(( (num + div - 1)/div ))
}

get_cpuset() {
  cpusetroot=$1
  cpusetfile=$2
  ncpu=0
  [ -f "$cpusetroot/$cpusetfile" ] || return 1
  for token in $( tr ',' ' ' < "$cpusetroot/$cpusetfile" ); do
    case "$token" in
      *-*)
        count=$( seq $(echo "$token" | tr '-' ' ') | wc -l )
        ncpu=$(( ncpu+count ))
        ;;
      *)
        ncpu=$(( ncpu+1 ))
        ;;
    esac
  done
  echo "$ncpu"
}

get_quota() {
  cpuroot=$1
  ncpu=0
  [ -f "$cpuroot/cpu.cfs_quota_us" ] || return 1
  [ -f "$cpuroot/cpu.cfs_period_us" ] || return 1
  cfs_quota=$( cat "$cpuroot/cpu.cfs_quota_us" )
  cfs_period=$( cat "$cpuroot/cpu.cfs_period_us" )
  [ "$cfs_quota" = "-1" ] && return 1
  [ "$cfs_period" = "0" ] && return 1
  ncpu=$( ceildiv "$cfs_quota" "$cfs_period" )
  [ "$ncpu" -gt 0 ] || return 1
  echo "$ncpu"
}

get_quota_v2() {
  cpuroot=$1
  ncpu=0
  [ -f "$cpuroot/cpu.max" ] || return 1
  cfs_quota=$( cut -d' ' -f 1 < "$cpuroot/cpu.max" )
  cfs_period=$( cut -d' ' -f 2 < "$cpuroot/cpu.max" )
  [ "$cfs_quota" = "max" ] && return 1
  [ "$cfs_period" = "0" ] && return 1
  ncpu=$( ceildiv "$cfs_quota" "$cfs_period" )
  [ "$ncpu" -gt 0 ] || return 1
  echo "$ncpu"
}

get_cgroup_v1_path() {
  needle=$1
  found=
  foundroot=
  mountpoint=

  [ -r "/proc/self/mountinfo" ] || return 1
  [ -r "/proc/self/cgroup" ] || return 1

  while IFS= read -r line; do
    case "$needle" in
      "cpuset")
        case "$line" in
          *cpuset*)
            found=$( echo "$line" | cut -d ' ' -f 4,5 )
            break
            ;;
        esac
        ;;
      "cpu")
        case "$line" in
          *cpuset*)
            ;;
          *cpu,cpuacct*|*cpuacct,cpu|*cpuacct*|*cpu*)
            found=$( echo "$line" | cut -d ' ' -f 4,5 )
            break
            ;;
        esac
    esac
  done << __EOF__
$( grep -F -- '- cgroup ' /proc/self/mountinfo )
__EOF__

  while IFS= read -r line; do
    controller=$( echo "$line" | cut -d: -f 2 )
    case "$needle" in
      "cpuset")
        case "$controller" in
          cpuset)
            mountpoint=$( echo "$line" | cut -d: -f 3 )
            break
            ;;
        esac
        ;;
      "cpu")
        case "$controller" in
          cpu,cpuacct|cpuacct,cpu|cpuacct|cpu)
            mountpoint=$( echo "$line" | cut -d: -f 3 )
            break
            ;;
        esac
        ;;
    esac
done << __EOF__
$( grep -F -- 'cpu' /proc/self/cgroup )
__EOF__

  case "${found%% *}" in
    "/")
      foundroot="${found##* }$mountpoint"
      ;;
    "$mountpoint")
      foundroot="${found##* }"
      ;;
  esac
  echo "$foundroot"
}

get_cgroup_v2_path() {
  found=
  foundroot=
  mountpoint=

  [ -r "/proc/self/mountinfo" ] || return 1
  [ -r "/proc/self/cgroup" ] || return 1

  while IFS= read -r line; do
    found=$( echo "$line" | cut -d ' ' -f 4,5 )
  done << __EOF__
$( grep -F -- '- cgroup2 ' /proc/self/mountinfo )
__EOF__

  while IFS= read -r line; do
    mountpoint=$( echo "$line" | cut -d: -f 3 )
done << __EOF__
$( grep -F -- '0::' /proc/self/cgroup )
__EOF__

  case "${found%% *}" in
    "")
      return 1
      ;;
    "/")
      foundroot="${found##* }$mountpoint"
      ;;
    "$mountpoint")
      foundroot="${found##* }"
      ;;
  esac
  echo "$foundroot"
}

ncpu_online=$( getconf _NPROCESSORS_ONLN )
ncpu_cpuset=
ncpu_quota=
ncpu_cpuset_v2=
ncpu_quota_v2=

cpuset=$( get_cgroup_v1_path "cpuset" ) && ncpu_cpuset=$( get_cpuset "$cpuset" "cpuset.effective_cpus" ) || ncpu_cpuset=$ncpu_online
cpu=$( get_cgroup_v1_path "cpu" ) && ncpu_quota=$( get_quota "$cpu" ) || ncpu_quota=$ncpu_online
cgroup_v2=$( get_cgroup_v2_path ) && ncpu_cpuset_v2=$( get_cpuset "$cgroup_v2" "cpuset.cpus.effective" ) || ncpu_cpuset_v2=$ncpu_online
cgroup_v2=$( get_cgroup_v2_path ) && ncpu_quota_v2=$( get_quota_v2 "$cgroup_v2" ) || ncpu_quota_v2=$ncpu_online

ncpu=$( printf "%s\n%s\n%s\n%s\n%s\n" \
               "$ncpu_online" \
               "$ncpu_cpuset" \
               "$ncpu_quota" \
               "$ncpu_cpuset_v2" \
               "$ncpu_quota_v2" \
               | sort -n \
               | head -n 1 )

sed -i.bak -r 's/^(worker_processes)(.*)$/# Commented out by '"$ME"' on '"$(date)"'\n#\1\2\n\1 '"$ncpu"';/' /etc/nginx/nginx.conf

docker-entrypoint.sh:

#!/bin/sh
# vim:sw=4:ts=4:et

set -e

if [ -z "${NGINX_ENTRYPOINT_QUIET_LOGS:-}" ]; then
    exec 3>&1
else
    exec 3>/dev/null
fi

if [ "$1" = "nginx" -o "$1" = "nginx-debug" ]; then
    if /usr/bin/find "/docker-entrypoint.d/" -mindepth 1 -maxdepth 1 -type f -print -quit 2>/dev/null | read v; then
        echo >&3 "$0: /docker-entrypoint.d/ is not empty, will attempt to perform configuration"

        echo >&3 "$0: Looking for shell scripts in /docker-entrypoint.d/"
        find "/docker-entrypoint.d/" -follow -type f -print | sort -V | while read -r f; do
            case "$f" in
                *.sh)
                    if [ -x "$f" ]; then
                        echo >&3 "$0: Launching $f";
                        "$f"
                    else
                        # warn on shell scripts without exec bit
                        echo >&3 "$0: Ignoring $f, not executable";
                    fi
                    ;;
                *) echo >&3 "$0: Ignoring $f";;
            esac
        done

        echo >&3 "$0: Configuration complete; ready for start up"
    else
        echo >&3 "$0: No files found in /docker-entrypoint.d/, skipping configuration"
    fi
fi

exec "[email protected]"

Put these files in the same directory and then run docker build -t nginxinc/docker-nginx-unprivileged:latest .
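If your IDs differ from 1000, they can also be passed at build time instead of editing the Dockerfile, since UID and GID are declared as ARGs above. A sketch:

docker build --build-arg UID=1000 --build-arg GID=1000 \
  -t nginxinc/docker-nginx-unprivileged:latest .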

How to Solve Client-go Mod Error

Error message:

E:\github\client-go>go mod tidy
go: finding module for package k8s.io/client-go/kubernetes
go: finding module for package k8s.io/client-go/tools/clientcmd
go: finding module for package k8s.io/apimachinery/pkg/apis/meta/v1
go: found k8s.io/apimachinery/pkg/apis/meta/v1 in k8s.io/apimachinery v0.22.2
go: finding module for package k8s.io/client-go/kubernetes
go: finding module for package k8s.io/client-go/tools/clientcmd
client-go imports
        k8s.io/client-go/kubernetes: module k8s.io/[email protected] found (v1.5.2), but does not contain package k8s.io/client-go/kubernetes
client-go imports
        k8s.io/client-go/tools/clientcmd: module k8s.io/[email protected] found (v1.5.2), but does not contain package k8s.io/client-go/tools/clientcmd

Solution:
Explicitly pin these three modules in the go.mod require block:

require (
    ...
    k8s.io/api v0.19.0
    k8s.io/apimachinery v0.19.0
    k8s.io/client-go v0.19.0
    ...
)
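Equivalently, the versions can be pinned from the command line before running go mod tidy again; the v0.19.0 versions below simply mirror the require block above and must be kept mutually compatible (a sketch):

go get k8s.io/api@v0.19.0
go get k8s.io/apimachinery@v0.19.0
go get k8s.io/client-go@v0.19.0
go mod tidy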

ERROR: Rancher must be ran with the --privileged flag when running outside of Kubernetes

According to the Rancher 2.4 documentation, only a single Linux host is needed to quickly deploy a single-node Rancher server. Of course, this is only suitable for testing. Deployment is very convenient: start Docker on the host, then start one container:

sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher

After starting it, however, the container kept restarting and the UI could not be reached; the browser reported a network error.

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                         PORTS               NAMES
bfd5c52a1ce7        rancher/rancher     "entrypoint.sh"     12 hours ago        Restarting (1) 3 seconds ago                       elated_heisenberg

View log:

[[email protected] ~]# docker logs --tail 3 bfd5c52a1ce7
ERROR: Rancher must be ran with the --privileged flag when running outside of Kubernetes
ERROR: Rancher must be ran with the --privileged flag when running outside of Kubernetes
ERROR: Rancher must be ran with the --privileged flag when running outside of Kubernetes

The error says Rancher must be run with elevated privileges, i.e. with the --privileged flag.
Add --privileged and try again:

docker run -d --restart=unless-stopped --privileged  -p 80:80  -p 443:443 rancher/rancher

Now it works.

Kubernetes Error: Error in configuration: unable to read client-cert* unable to read client-key*

System environment:

Ubuntu 20.04 LTS
Docker 20.10.8
Kubernetes 1.22.1
Node: node

Execute command:

$ kubectl version

Errors are reported as follows:

Error in configuration: 
* unable to read client-cert /var/lib/kubelet/pki/kubelet-client-current.pem for default-auth due to open /var/lib/kubelet/pki/kubelet-client-current.pem: no such file or directory
* unable to read client-key /var/lib/kubelet/pki/kubelet-client-current.pem for default-auth due to open /var/lib/kubelet/pki/kubelet-client-current.pem: no such file or directory

Explanation:

The token on the node has expired.

Solution:

$ kubeadm token create --print-join-command

kubeadm join 192.168.50.51:6443 --token rt43m1.b9py8ba6uxbfv7sr --discovery-token-ca-cert-hash sha256:f57a09633cf0e18cd905d41159a790704502410fd841acd63cffc8e493ad3cb2 

This regenerates the token on the master node and prints the new join command.

$ kubeadm join 192.168.50.51:6443 --token rt43m1.b9py8ba6uxbfv7sr --discovery-token-ca-cert-hash sha256:f57a09633cf0e18cd905d41159a790704502410fd841acd63cffc8e493ad3cb2 
$ kubectl version

Re-execute these on the node.

Error from server (BadRequest): a container name must be specified for pod

Error

Previously, I used kubectl logs -f <POD-name> -n <nameSpace> to view the logs of a pod. One day, when I used this command to check a pod that was running, I got an error.

Error from server (BadRequest): a container name must be specified for pod xxx ,choose one of:[xxx  xxx]


Cause and fix

Cause: originally the pod ran a single container, so kubectl logs simply printed that container's log.
Later the architect changed the pod to run multiple containers. From then on you must specify which container's log to view with -c <container_name>; the valid container names are listed in the "choose one of" part of the error message.

 kubectl logs -f  <POD-name> -n <nameSpace> -c  <container_name> 
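To list the container names of a pod before choosing one, standard kubectl/jsonpath usage works (a sketch):

kubectl get pod <POD-name> -n <nameSpace> -o jsonpath='{.spec.containers[*].name}'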


[Solved] K8s cluster build error: error: kubectl get csr No resources found.


Problem

kubectl get csr
No resources found.

Cause

The original SSL certificates become invalid after a restart; if they are not deleted, the kubelet cannot communicate with the master even after restarting.

Solution:

cd /opt/kubernetes/ssl
ls
kubelet-client-2021-04-14-08-41-36.pem  kubelet-client-current.pem  kubelet.crt  kubelet.key
# Delete all certificates 
rm -rf *
# stop the kubelet
systemctl stop kubelet

On master01:

kubectl delete clusterrolebinding kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io "kubelet-bootstrap" deleted

kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created

On the nodes:

# start the kubelet again
#node01
bash kubelet.sh 192.168.238.82
#node02
bash kubelet.sh 192.168.238.83

Test successful

On master01:

kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-mJwuqA7DAf4UmB1InN_WEYhFWbQKOqUVXg9Bvc7Intk   4s    kubelet-bootstrap   Pending
node-csr-ydhzi9EG9M_Ozmbvep0ledwhTCanppStZoq7vuooTq8   11s   kubelet-bootstrap   Pending
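If the CSRs remain Pending and your setup does not approve them automatically, they can be approved manually with standard kubectl commands (the CSR names are the ones listed above):

kubectl certificate approve node-csr-mJwuqA7DAf4UmB1InN_WEYhFWbQKOqUVXg9Bvc7Intk
kubectl certificate approve node-csr-ydhzi9EG9M_Ozmbvep0ledwhTCanppStZoq7vuooTq8
kubectl get csr   # CONDITION should change to Approved,Issued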

Done!!!

Kubernetes Installation Errors and Solutions

A. Cluster initialization errors

1. Error:
[WARNING Hostname]: hostname "master1" could not be reached
[WARNING Hostname]: hostname "master1": lookup master1 on 114.114.114.114:53: no such host
Details:

[[email protected] ~]# kubeadm init --config kubeadm.yaml
W1124 09:40:03.139811   68129 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeadm.k8s.io", Version:"v1beta2", Kind:"ClusterConfiguration"}: error unmarshaling JSON: while decoding JSON: json: unknown field "scheduler"
W1124 09:40:03.333487   68129 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.3
[preflight] Running pre-flight checks
        [WARNING Hostname]: hostname "master1" could not be reached
        [WARNING Hostname]: hostname "master1": lookup master1 on 114.114.114.114:53: no such host
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR Port-6443]: Port 6443 is in use
        [ERROR Port-10259]: Port 10259 is in use
        [ERROR Port-10257]: Port 10257 is in use
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
        [ERROR Port-10250]: Port 10250 is in use
        [ERROR Port-2379]: Port 2379 is in use
        [ERROR Port-2380]: Port 2380 is in use
        [ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

Solution:
Delete /etc/kubernetes/manifests.
Modify kubeadm.yaml and change the name field to the host's actual hostname.
Then run the initialization again.

[[email protected] ~]# ls /etc/kubernetes/
admin.conf  controller-manager.conf  kubelet.conf  manifests  pki  scheduler.conf
[[email protected] ~]# ls
anaconda-ks.cfg  kubeadm.yaml
[[email protected] ~]# rm -rf  /etc/kubernetes/manifests
[[email protected] ~]# ls /etc/kubernetes/
admin.conf  controller-manager.conf  kubelet.conf  pki  scheduler.conf

2. Error:
WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
Details:

[[email protected] ~]# kubeadm init --config kubeadm.yaml
W1124 09:47:13.677697   70122 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeadm.k8s.io", Version:"v1beta2", Kind:"ClusterConfiguration"}: error unmarshaling JSON: while decoding JSON: json: unknown field "scheduler"
W1124 09:47:13.876821   70122 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.3
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR Port-10250]: Port 10250 is in use
        [ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

Solution:
Add --ignore-preflight-errors=all during initialization, i.e.:

kubeadm init --config kubeadm.yaml --ignore-preflight-errors=all

3. Error:
[kubelet-check] Initial timeout of 40s passed.
error execution phase upload-config/kubelet: Error writing Crisocket information for the control-plane node: timed out waiting for the condition
To see the stack trace of this error execute with --v=5 or higher
Details:

[[email protected] ~]# kubeadm init --config kubeadm.yaml --ignore-preflight-errors=all
W1124 09:54:07.361765   71406 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeadm.k8s.io", Version:"v1beta2", Kind:"ClusterConfiguration"}: error unmarshaling JSON: while decoding JSON: json: unknown field "scheduler"
W1124 09:54:07.480565   71406 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.3
[preflight] Running pre-flight checks
        [WARNING Port-10250]: Port 10250 is in use
        [WARNING DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 13.510116 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-check] Initial timeout of 40s passed.
error execution phase upload-config/kubelet: Error writing Crisocket information for the control-plane node: timed out waiting for the condition
To see the stack trace of this error execute with --v=5 or higher

Solution:
execute:

swapoff -a && kubeadm reset  && systemctl daemon-reload && systemctl restart kubelet  && iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

After running this, the initialization succeeds:

[[email protected] ~]# kubeadm init --config kubeadm.yaml --ignore-preflight-errors=all                                                                                                  
W1124 10:00:18.648091   74450 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeadm.k8s.io", Version:"v1beta2", Kind:"ClusterConfiguration"}: error unmarshaling JSON: while decoding JSON: json: unknown field "scheduler"
W1124 10:00:18.760000   74450 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8smaster kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.68.127]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8smaster localhost] and IPs [192.168.68.127 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8smaster localhost] and IPs [192.168.68.127 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 10.517451 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8smaster as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8smaster as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.68.127:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:31020d84f523a2af6fc4fea38e514af8e5e1943a26312f0515e65075da314b29