Tag Archives: kubernetes

[Solved] Error from server (InternalError): error when creating “ingress.yaml”: Internal error occurred: failed calling webhook

When exposing a service through Ingress, kubectl apply -f ingress.yaml reports the following error.
Reported error:

Error from server (InternalError): error when creating “ingress.yaml”: Internal error occurred: failed calling webhook “validate.nginx.ingress.kubernetes.io”: failed to call webhook: Post “https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1/ingresses?timeout=10s”: x509: certificate has expired or is not yet valid: current time 2022-03-26T14:45:34Z is before 2022-03-26T20:16:32Z

 

Solution:
Check the existing validating webhook configurations:

kubectl get validatingwebhookconfigurations

Delete the ingress-nginx-admission configuration:

kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission

Then apply the manifest again:

kubectl apply -f ingress.yaml 
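Alternatively, if you would rather renew the expired certificate than drop the admission webhook, you can first inspect the certificate it serves. This is a minimal sketch assuming the default ingress-nginx deployment, where the webhook certificate is stored in the ingress-nginx-admission secret under the cert key:

# Inspect the validity window of the admission webhook certificate (assumes default ingress-nginx install)
kubectl -n ingress-nginx get secret ingress-nginx-admission \
  -o jsonpath='{.data.cert}' | base64 -d | openssl x509 -noout -dates

If the notAfter date is in the past, re-creating that secret (for example by re-running the ingress-nginx admission job or re-installing the controller) also resolves the error without deleting the webhook.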

Kubernetes create secret Error: Error from server (InternalError): Internal error occurred…

Creating secret Error:
# kubectl create secret generic thanos-objectstorage --from-file=objstore.yaml -n monitoring
Error Messages:

Error from server (InternalError): Internal error occurred: failed calling webhook “rancher.cattle.io”: Post https://rancher-webhook.cattle-system.svc:443/v1/webhook/mutation?timeout=10s: service “rancher-webhook” not found

 

According to the error message, the API server is trying to call the rancher.cattle.io admission webhook, but its backing service no longer exists, so the secret cannot be created. Check the admission webhook configurations:

# kubectl get mutatingwebhookconfigurations
NAME                             WEBHOOKS   AGE
mutating-webhook-configuration   7          156d
rancher.cattle.io                2          156d

# kubectl get validatingwebhookconfigurations
NAME                               WEBHOOKS   AGE
rancher.cattle.io                  2          156d
validating-webhook-configuration   7          156d

The webhook configurations listed here are leftovers from components that were installed previously and later removed.

Just delete them:

# kubectl delete mutatingwebhookconfigurations mutating-webhook-configuration
mutatingwebhookconfiguration.admissionregistration.k8s.io "mutating-webhook-configuration" deleted

# kubectl delete mutatingwebhookconfigurations rancher.cattle.io
mutatingwebhookconfiguration.admissionregistration.k8s.io "rancher.cattle.io" deleted

# kubectl delete validatingwebhookconfigurations rancher.cattle.io
validatingwebhookconfiguration.admissionregistration.k8s.io "rancher.cattle.io" deleted

# kubectl delete validatingwebhookconfigurations validating-webhook-configuration
validatingwebhookconfiguration.admissionregistration.k8s.io "validating-webhook-configuration" deleted
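If you want to double-check first, you can confirm that the webhook's backing service really is gone; the namespace and service name come straight from the error message:

# If Rancher has been removed, this returns a NotFound error, confirming the webhook is stale
kubectl -n cattle-system get svc rancher-webhook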

[Solved] kubelet Startup Error: cannot find network namespace for the terminated container

1. Error reporting:

Use the journalctl -xefu kubelet command to view the kubelet log. The following error is found:

cannot find network namespace for the terminated container

2. Solution:

# docker system prune

# systemctl restart kubelet

Instructions for using docker system:

# docker system -h

Flag shorthand -h has been deprecated, please use --help

Usage:  docker system COMMAND

Manage Docker

Commands:
  df        Show docker disk usage (check how much space Docker is using)
  events    Get real time events from the server (view live events)
  info      Display system-wide information (view system information)
  prune     Remove unused data

docker system prune removes stopped containers, networks not used by any container, dangling images, and the build cache.
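If you also want to remove every unused image (not just dangling ones), add the -a flag. Be careful on Kubernetes nodes: images that were pre-pulled will have to be pulled again afterwards.

docker system prune -a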

[Solved] GRPC-Server Error: com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;CLjava/lang/Object;)V

The gRPC server reports java.lang.NoSuchMethodError: com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;CLjava/lang/Object;)V


Problem background

The gRPC server would not start. The error printed is shown below, and on its own it does not make the cause obvious. gRPC worked fine when tested in isolation, but as the project grew more complex and pulled in more and more POM dependencies, the dependency tree became the natural place to look for the cause.

2022-01-25 11:01:39.896 ERROR [id-mapping-AsyncThread-1] o.s.a.i.SimpleAsyncUncaughtExceptionHandler.handleUncaughtException(SimpleAsyncUncaughtExceptionHandler.java:39): Unexpected exception occurred invoking async method: public void grpc.server.GrpcServer.start() throws java.io.IOException
java.lang.NoSuchMethodError: com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;CLjava/lang/Object;)V
	at io.grpc.Metadata$Key.validateName(Metadata.java:629)
	at io.grpc.Metadata$Key.<init>(Metadata.java:637)
	at io.grpc.Metadata$Key.<init>(Metadata.java:567)
	at io.grpc.Metadata$AsciiKey.<init>(Metadata.java:742)
	at io.grpc.Metadata$AsciiKey.<init>(Metadata.java:737)
	at io.grpc.Metadata$Key.of(Metadata.java:593)
	at io.grpc.Metadata$Key.of(Metadata.java:589)
	at io.grpc.internal.GrpcUtil.<clinit>(GrpcUtil.java:86)
	at io.grpc.internal.AbstractServerImplBuilder.<clinit>(AbstractServerImplBuilder.java:60)
	at io.grpc.netty.shaded.io.grpc.netty.NettyServerProvider.builderForPort(NettyServerProvider.java:39)
	at io.grpc.netty.shaded.io.grpc.netty.NettyServerProvider.builderForPort(NettyServerProvider.java:24)
	at io.grpc.ServerBuilder.forPort(ServerBuilder.java:41)
	at server.Server.start(GrpcServer.java:30)
	at grpc.server.GrpcServer$$FastClassBySpringCGLIB$$be87d0e.invoke(<generated>)
	at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218)
	at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:771)
	at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
	at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:749)
	at org.springframework.aop.interceptor.AsyncExecutionInterceptor.lambda$invoke$0(AsyncExecutionInterceptor.java:115)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

 

Solution:

1. Analyze the imported jar dependencies: in IDEA, go to File → Settings and install the Maven Helper plugin for dependency management.

2. After installation, open the POM file and click the Dependency Analyzer tab.

3. Select Conflicts and click Refresh UI. guava 18.0 shows up, meaning this dependency is introduced more than once and conflicts. The tricky part is deciding which duplicate guava to exclude; my approach was to exclude the version shown first, recompile, and if that did not help, exclude other guava versions instead.

4. Because the Conflicts view offers no Exclude option for guava here, switch to "Jump to left tree" to see the dependency tree more clearly.

5. Exclude version 18.0 and re-import.

6. Click Conflicts again; the conflict is gone.

7. With that, the problem of the gRPC server failing to start is also solved.
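As an alternative to the IDE plugin, the Maven dependency plugin can show every path through which guava is pulled in, which makes it easier to decide where to add an exclusion:

# list only the guava entries in the dependency tree
mvn dependency:tree -Dincludes=com.google.guava:guava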

Rancher application service error: request entity too large

A request entity too large error occurs because the request body exceeds the nginx ingress default limit of 1 MB.

1. Set the following annotation on the Ingress in Rancher (a kubectl example follows after these settings):

Annotation: nginx.ingress.kubernetes.io/proxy-body-size

2. For Spring Boot 2.0, also raise the multipart limits in the configuration file:

spring.servlet.multipart.max-file-size=1024MB
spring.servlet.multipart.max-request-size=1024MB
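For example, the annotation can be set directly with kubectl. This is a minimal sketch: my-ingress and my-namespace are hypothetical names, and 1024m matches the Spring Boot limits above:

# Raise the nginx ingress body-size limit for a single Ingress (hypothetical names)
kubectl -n my-namespace annotate ingress my-ingress \
  nginx.ingress.kubernetes.io/proxy-body-size=1024m --overwrite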

K8s ❉ Error: cannot be handled as a Pod [How to Solve]

Error Messages:

[root@master ~]# kubectl create -f pod-nginx.yaml 
namespace/dev created
Error from server (BadRequest): error when creating "pod-nginx.yaml": pod in version "v1" cannot be handled as a Pod: no kind "pod" is registered for version "v1" in scheme "k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:30"

 

 

Solution:

Check the YAML file; the cause is marked below:

apiVersion: v1
kind: pod  # Here it should be Pod, P should be capitalized
metadata:
    name: nginxpod
    namespace: dev
spec:
    containers:
    - name: nginx-containers
      image: nginx:latest
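Kind names are case-sensitive. If you are unsure of the exact spelling, kubectl can list it; the KIND column shows the correct capitalization (Pod):

kubectl api-resources | grep -i pod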

[Solved] K8s Initialize Error: failed with error: Get “http://localhost:10248/healthz“

Environmental description

Server: CentOS 7
docker: 20.10.12
kubeadm: v1.23.1
Kubernetes: v1.23.1

Exception description

After Docker and the k8s components are installed, a problem appears when running kubeadm init to initialize the master node.
The statement executed:

kubeadm init \
--apiserver-advertise-address=Server_IP \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version=v1.23.1 \
--pod-network-cidr=10.244.0.0/16 \
--service-cidr=10.96.0.0/12 

The error reported:

[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.

Following the hint after the error, you can use journalctl -xeu kubelet to view the detailed error information; if the long lines are cut off on screen, use the arrow keys in the pager to scroll horizontally.

For example:

[root@k8s-node01 ~]# journalctl -xeu kubelet
Dec 24 20:24:13 k8s-node01 kubelet[9127]: I1224 20:24:13.456712    9127 cni.go:240] "Unable to update cni config" err="no 
Dec 24 20:24:13 k8s-node01 kubelet[9127]: I1224 20:24:13.476156    9127 docker_service.go:264] "Docker Info" dockerInfo=&{
Dec 24 20:24:13 k8s-node01 kubelet[9127]: E1224 20:24:13.476236    9127 server.go:302] "Failed to run kubelet" err="failed
Dec 24 20:24:13 k8s-node01 systemd[1]: kubelet.service: main process exited, code=exited, status=1/FAILURE
Dec 24 20:24:13 k8s-node01 systemd[1]: Unit kubelet.service entered failed state.
Dec 24 20:24:13 k8s-node01 systemd[1]: kubelet.service failed.

Scroll right with the arrow keys to see the rest of the truncated lines:

ID:ZYIL:OO24:BWLY:DTTB:TDKT:D3MZ:YGJ4:3ZOU:7DDY:YYPQ:DPWM:ERFV Containers:0 ContainersRunning:0 ContainersPaused:0 Contain
 to run Kubelet: misconfiguration: kubelet cgroup driver: \"systemd\" is different from docker cgroup driver: \"cgroupfs\"

Cause of the error

As the message above shows, the error is caused by a mismatch between the cgroup driver used by k8s (kubelet) and the one used by Docker:
kubelet is configured for systemd, while Docker uses cgroupfs.
Run

docker info

and check the Cgroup Driver line; it shows either systemd or cgroupfs (Docker on CentOS 7 defaults to cgroupfs).
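To print just the driver without scanning the whole output, docker info accepts a Go template:

docker info --format '{{.CgroupDriver}}'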

Solution:

Modify the cgroup driver of docker to systemd
edit the configuration file of docker, and create it if it does not exist

vi /etc/docker/daemon.json

Modified to

{
    …
    "exec-opts": ["native.cgroupdriver=systemd"]
    …
}

Then restart Docker

systemctl restart docker 

Then run kubeadm init again (if the previous attempt partially completed, run kubeadm reset first).

[Solved] k8s error retrieving resource lock default/fuseim.pri-ifs: Unauthorized

When installing Prometheus with Helm, the nfs-client-provisioner ServiceAccount was deployed in the default namespace, and the error in the title appeared.

[hadoop@hadoop03 NFS]$ vim nfs-rbac.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  #namespace: nfs-client

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]   ## Deploy to the default namespace to report an error title error
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io

---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]

---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io


View the logs of the nfs provisioner pod:

kubectl logs nfs-client-provisioner-764f44f754-wdtqp

E1206 08:52:27.293890       1 leaderelection.go:234] error retrieving resource lock default/fuseim.pri-ifs: endpoints "fuseim.pri-ifs" is forbidden: User "system:serviceaccount:default:nfs-client-provisioner" cannot get resource "endpoints" in API group "" in the namespace "default"

Modify the ClusterRole to grant the missing permissions:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["get", "list", "watch", "create", "update", "patch"] ### 把权限修改为这个(default namespace)

[Solved] Docker failed to start daemon: error initializing graphdriver: driver not supported

When the node joins the cluster, the preflight check reports that the vfs graph driver is not supported:

[ERROR SystemVerification]: unsupported graph driver: vfs

/etc/docker/daemon.json (note the storage-driver setting):

{
        "registry-mirrors":["https://registry.docker-cn.com"],
        "bridge":"nufront-br",
        "storage-driver":"devicemapper",   ####
        "exec-opts": ["native.cgroupdriver=systemd"],
        "insecure-registries": ["hadoop03:5000"]
}

###
systemctl daemon-reload
service docker start   # fails: error initializing graphdriver: driver not supported
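If the daemon refuses to start, its own logs show the graphdriver error in full; this is just a standard journalctl query for the docker unit (assuming the service is managed by systemd, as the docker.service file below suggests):

journalctl -u docker --no-pager -n 50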

Reference: https://github.com/moby/moby/issues/15651. It turns out that on this node Docker CE was installed by unpacking the static binary tarball and configuring the service by hand, not through yum (offline environment…)

#### 

[root@nufront-worker-02 bin]# cd /opt/module/docker/
[root@nufront-worker-02 docker]# ll

-rwxr-xr-x 1 root root 39593864 Nov 23 11:12 containerd
-rwxr-xr-x 1 root root 21508168 Nov 23 11:12 ctr
-rwxr-xr-x 1 root root 60073904 Nov 23 11:12 docker
-rwxr-xr-x 1 root root 78951368 Nov 23 11:12 dockerd
-rwxr-xr-x 1 root root   708616 Nov 23 11:12 docker-init
-rwxr-xr-x 1 root root  2933646 Nov 23 11:12 docker-proxy


Try installing from the RPM packages instead:

#######
[root@nufront-worker-02 docker]# ll
total 350072
-rw-r--r-- 1 root root   104408 Nov 23 11:12 audit-libs-2.8.5-4.el7.x86_64.rpm
-rw-r--r-- 1 root root    78256 Nov 23 11:12 audit-libs-python-2.8.5-4.el7.x86_64.rpm
-rwxr-xr-x 1 root root 39593864 Nov 23 11:12 containerd
-rw-r--r-- 1 root root 35130608 Nov 23 11:12 containerd.io-1.4.6-3.1.el7.x86_64.rpm
-rwxr-xr-x 1 root root  7270400 Nov 23 11:12 containerd-shim
-rwxr-xr-x 1 root root  9953280 Nov 23 11:12 containerd-shim-runc-v2
-rw-r--r-- 1 root root    40816 Nov 23 11:12 container-selinux-2.119.2-1.911c772.el7_8.noarch.rpm
-rwxr-xr-x 1 root root 21508168 Nov 23 11:12 ctr
-rwxr-xr-x 1 root root 60073904 Nov 23 11:12 docker
-rw-r--r-- 1 root root 27902344 Nov 23 11:12 docker-ce-20.10.7-3.el7.x86_64 (1).rpm
-rw-r--r-- 1 root root 34717572 Nov 23 11:12 docker-ce-cli-20.10.7-3.el7.x86_64.rpm
-rw-r--r-- 1 root root  9659320 Nov 23 11:12 docker-ce-rootless-extras-20.10.7-3.el7.x86_64.rpm
-rwxr-xr-x 1 root root 78951368 Nov 23 11:12 dockerd
-rwxr-xr-x 1 root root   708616 Nov 23 11:12 docker-init
-rwxr-xr-x 1 root root  2933646 Nov 23 11:12 docker-proxy
-rw-r--r-- 1 root root  4373740 Nov 23 11:12 docker-scan-plugin-0.8.0-3.el7.x86_64.rpm
-rwxr-xr-x 1 root root     1200 Nov 23 11:12 docker.service
-rw-r--r-- 1 root root    83764 Nov 23 11:12 fuse3-libs-3.6.1-4.el7.x86_64.rpm
-rw-r--r-- 1 root root    95424 Nov 23 11:12 fuse-libs-2.9.2-11.el7.x86_64.rpm
-rw-r--r-- 1 root root    55796 Nov 23 11:12 fuse-overlayfs-0.7.2-6.el7_8.x86_64.rpm
-rw-r--r-- 1 root root    67720 Nov 23 11:12 libcgroup-0.41-21.el7.x86_64.rpm
-rw-r--r-- 1 root root   101800 Nov 23 11:12 libcgroup-tools-0.41-21.el7.x86_64.rpm
-rw-r--r-- 1 root root    56824 Nov 23 11:12 libnetfilter_conntrack-1.0.6-1.el7_3.x86_64.rpm
-rw-r--r-- 1 root root    57460 Nov 23 11:12 libseccomp-2.3.1-4.el7.x86_64.rpm
-rw-r--r-- 1 root root   166012 Nov 23 11:12 libselinux-2.5-15.el7.x86_64.rpm
-rw-r--r-- 1 root root   154876 Nov 23 11:12 libselinux-utils-2.5-15.el7.x86_64.rpm
-rw-r--r-- 1 root root   154244 Nov 23 11:12 libsemanage-2.5-14.el7.x86_64.rpm
-rw-r--r-- 1 root root   115284 Nov 23 11:12 libsemanage-python-2.5-14.el7.x86_64.rpm
-rw-r--r-- 1 root root   304196 Nov 23 11:12 libsepol-2.5-10.el7.x86_64.rpm
-rw-r--r-- 1 root root    78740 Nov 23 11:12 libsepol-devel-2.5-10.el7.x86_64 (1).rpm
-rw-r--r-- 1 root root    78740 Nov 23 11:12 libsepol-devel-2.5-10.el7.x86_64.rpm
-rw-r--r-- 1 root root   938736 Nov 23 11:12 policycoreutils-2.5-34.el7.x86_64.rpm
-rw-r--r-- 1 root root   468316 Nov 23 11:12 policycoreutils-python-2.5-34.el7.x86_64.rpm
-rwxr-xr-x 1 root root 14485560 Nov 23 11:12 runc
-rw-r--r-- 1 root root   509568 Nov 23 11:12 selinux-policy-3.13.1-268.el7_9.2.noarch.rpm
-rw-r--r-- 1 root root  7335504 Nov 23 11:12 selinux-policy-targeted-3.13.1-268.el7_9.2.noarch.rpm
-rw-r--r-- 1 root root    83452 Nov 23 11:12 slirp4netns-0.4.3-4.el7_8.x86_64.rpm

[root@nufront-worker-02 docker]# rpm -ivh *.rpm  --nodeps --force 


[root@nufront-worker-02 docker]# yum list installed | grep docker
docker-ce.x86_64                        3:20.10.7-3.el7                installed
docker-ce-cli.x86_64                    1:20.10.7-3.el7                installed
docker-ce-rootless-extras.x86_64        20.10.7-3.el7                  installed
docker-scan-plugin.x86_64               0.8.0-3.el7                    installed

Docker can be started again…

[root@nufront-worker-02 docker]# docker info
Client:
 Context:    default
 Debug Mode: false
 Plugins:
  app: Docker App (Docker Inc., v0.9.1-beta3)
  buildx: Build with BuildKit (Docker Inc., v0.5.1-docker)
  scan: Docker Scan (Docker Inc., v0.8.0)

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 0
 Server Version: 20.10.7
 Storage Driver: devicemapper ###
  Pool Name: docker-253:0-812466384-pool
  Pool Blocksize: 65.54kB
  Base Device Size: 10.74GB
  Backing Filesystem: xfs
  Udev Sync Supported: true
  Data file: /dev/loop0
  Metadata file: /dev/loop1
  Data loop file: /var/lib/docker/devicemapper/devicemapper/data
  Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
  Data Space Used: 11.8MB
  Data Space Total: 107.4GB
  Data Space Available: 107.4GB
  Metadata Space Used: 581.6kB
  Metadata Space Total: 2.147GB
  Metadata Space Available: 2.147GB
  Thin Pool Minimum Free Space: 10.74GB
  Deferred Removal Enabled: true
  Deferred Deletion Enabled: true
  Deferred Deleted Device Count: 0
  Library Version: 1.02.107-RHEL7 (2015-10-14)
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 1
 Plugins:
...

Nginx Container Error: nginx: [emerg] mkdir() “/var/cache/nginx/client_temp“ failed (13: Permission denied)

Phenomenon

Previously this nginx image ran under plain Docker without any error, but when the image was started by k8s it failed with nginx: [emerg] mkdir() “/var/cache/nginx/client_temp” failed (13: Permission denied). The error only occurs under one specific namespace. The environment that works runs Docker 17.03.3-ce; the one that fails runs Docker 19.03.4 with the overlay2 storage driver.

Reflection

The error message clearly points to a user permission problem. Similar nginx permission issues had come up before, caused by SELinux, and disabling SELinux restored normal operation (see “CentOS 7.x disabling SELinux” for the procedure). While investigating why the k8s startup failed, I also found a blog post, “Unable to run Nginx docker due to ‘13: Permission denied’”, which suggests adding the container_t type to SELinux’s permissive list with the commands below, but that did not work either.

semanage permissive -a container_t
semodule -l | grep permissive

Other attempts

I also tried configuring a security context for the pod/container. The securityContext section of the YAML was:

  securityContext:
    fsGroup: 1000
    runAsGroup: 1000
    runAsUser: 1000
    runAsNonRoot: true

 

Finally

In the end the only workable option was to build an nginx image that starts as a non-root user, following the https://github.com/nginxinc/docker-nginx-unprivileged project to create your own image.
First check the user ID and group ID that your pod starts with; you can use id <username>, for example:

[deploy@host ~]$ id deploy
uid=1000(deploy) gid=1000(deploy) groups=1000(deploy),980(docker)

You need to change the UID and GID in the project’s Dockerfile to your user’s IDs; my user ID and group ID are both 1000.
I also added a line that switches to the Aliyun mirror, otherwise building the image is particularly slow; you can add other custom settings as well. Note that the image exposes port 8080 instead of 80, because a non-root user cannot bind to port 80 directly.

Dockerfile:

#
# NOTE: THIS DOCKERFILE IS GENERATED VIA "update.sh"
#
# PLEASE DO NOT EDIT IT DIRECTLY.
#
ARG IMAGE=alpine:3.13
FROM $IMAGE

LABEL maintainer="NGINX Docker Maintainers <[email protected]>"

ENV NGINX_VERSION 1.20.1
ENV NJS_VERSION   0.5.3
ENV PKG_RELEASE   1

ARG UID=1000
ARG GID=1000

RUN set -x \
    && sed -i 's/dl-cdn.alpinelinux.org/mirrors.aliyun.com/g' /etc/apk/repositories \
# create nginx user/group first, to be consistent throughout docker variants
    && addgroup -g $GID -S nginx \
    && adduser -S -D -H -u $UID -h /var/cache/nginx -s /sbin/nologin -G nginx -g nginx nginx \
    && apkArch="$(cat /etc/apk/arch)" \
    && nginxPackages=" \
        nginx=${NGINX_VERSION}-r${PKG_RELEASE} \
        nginx-module-xslt=${NGINX_VERSION}-r${PKG_RELEASE} \
        nginx-module-geoip=${NGINX_VERSION}-r${PKG_RELEASE} \
        nginx-module-image-filter=${NGINX_VERSION}-r${PKG_RELEASE} \
        nginx-module-njs=${NGINX_VERSION}.${NJS_VERSION}-r${PKG_RELEASE} \
    " \
    && case "$apkArch" in \
        x86_64|aarch64) \
# arches officially built by upstream
            set -x \
            && KEY_SHA512="e7fa8303923d9b95db37a77ad46c68fd4755ff935d0a534d26eba83de193c76166c68bfe7f65471bf8881004ef4aa6df3e34689c305662750c0172fca5d8552a *stdin" \
            && apk add --no-cache --virtual .cert-deps \
                openssl \
            && wget -O /tmp/nginx_signing.rsa.pub https://nginx.org/keys/nginx_signing.rsa.pub \
            && if [ "$(openssl rsa -pubin -in /tmp/nginx_signing.rsa.pub -text -noout | openssl sha512 -r)" = "$KEY_SHA512" ]; then \
                echo "key verification succeeded!"; \
                mv /tmp/nginx_signing.rsa.pub /etc/apk/keys/; \
            else \
                echo "key verification failed!"; \
                exit 1; \
            fi \
            && apk del .cert-deps \
            && apk add -X "https://nginx.org/packages/alpine/v$(egrep -o '^[0-9]+\.[0-9]+' /etc/alpine-release)/main" --no-cache $nginxPackages \
            ;; \
        *) \
# we're on an architecture upstream doesn't officially build for
# let's build binaries from the published packaging sources
            set -x \
            && tempDir="$(mktemp -d)" \
            && chown nobody:nobody $tempDir \
            && apk add --no-cache --virtual .build-deps \
                gcc \
                libc-dev \
                make \
                openssl-dev \
                pcre-dev \
                zlib-dev \
                linux-headers \
                libxslt-dev \
                gd-dev \
                geoip-dev \
                perl-dev \
                libedit-dev \
                mercurial \
                bash \
                alpine-sdk \
                findutils \
            && su nobody -s /bin/sh -c " \
                export HOME=${tempDir} \
                && cd ${tempDir} \
                && hg clone https://hg.nginx.org/pkg-oss \
                && cd pkg-oss \
                && hg up ${NGINX_VERSION}-${PKG_RELEASE} \
                && cd alpine \
                && make all \
                && apk index -o ${tempDir}/packages/alpine/${apkArch}/APKINDEX.tar.gz ${tempDir}/packages/alpine/${apkArch}/*.apk \
                && abuild-sign -k ${tempDir}/.abuild/abuild-key.rsa ${tempDir}/packages/alpine/${apkArch}/APKINDEX.tar.gz \
                " \
            && cp ${tempDir}/.abuild/abuild-key.rsa.pub /etc/apk/keys/ \
            && apk del .build-deps \
            && apk add -X ${tempDir}/packages/alpine/ --no-cache $nginxPackages \
            ;; \
    esac \
# if we have leftovers from building, let's purge them (including extra, unnecessary build deps)
    && if [ -n "$tempDir" ]; then rm -rf "$tempDir"; fi \
    && if [ -n "/etc/apk/keys/abuild-key.rsa.pub" ]; then rm -f /etc/apk/keys/abuild-key.rsa.pub; fi \
    && if [ -n "/etc/apk/keys/nginx_signing.rsa.pub" ]; then rm -f /etc/apk/keys/nginx_signing.rsa.pub; fi \
# Bring in gettext so we can get `envsubst`, then throw
# the rest away. To do this, we need to install `gettext`
# then move `envsubst` out of the way so `gettext` can
# be deleted completely, then move `envsubst` back.
    && apk add --no-cache --virtual .gettext gettext \
    && mv /usr/bin/envsubst /tmp/ \
    \
    && runDeps="$( \
        scanelf --needed --nobanner /tmp/envsubst \
            | awk '{ gsub(/,/, "\nso:", $2); print "so:" $2 }' \
            | sort -u \
            | xargs -r apk info --installed \
            | sort -u \
    )" \
    && apk add --no-cache $runDeps \
    && apk del .gettext \
    && mv /tmp/envsubst /usr/local/bin/ \
# Bring in tzdata so users could set the timezones through the environment
# variables
    && apk add --no-cache tzdata \
# Bring in curl and ca-certificates to make registering on DNS SD easier
    && apk add --no-cache curl ca-certificates \
# forward request and error logs to docker log collector
    && ln -sf /dev/stdout /var/log/nginx/access.log \
    && ln -sf /dev/stderr /var/log/nginx/error.log \
# create a docker-entrypoint.d directory
    && mkdir /docker-entrypoint.d

# implement changes required to run NGINX as an unprivileged user
RUN sed -i 's,listen       80;,listen       8080;,' /etc/nginx/conf.d/default.conf \
    && sed -i '/user  nginx;/d' /etc/nginx/nginx.conf \
    && sed -i 's,/var/run/nginx.pid,/tmp/nginx.pid,' /etc/nginx/nginx.conf \
    && sed -i "/^http {/a \    proxy_temp_path /tmp/proxy_temp;\n    client_body_temp_path /tmp/client_temp;\n    fastcgi_temp_path /tmp/fastcgi_temp;\n    uwsgi_temp_path /tmp/uwsgi_temp;\n    scgi_temp_path /tmp/scgi_temp;\n" /etc/nginx/nginx.conf \
# nginx user must own the cache and etc directory to write cache and tweak the nginx config
    && chown -R $UID:0 /var/cache/nginx \
    && chmod -R g+w /var/cache/nginx \
    && chown -R $UID:0 /etc/nginx \
    && chmod -R g+w /etc/nginx

COPY docker-entrypoint.sh /
COPY 10-listen-on-ipv6-by-default.sh /docker-entrypoint.d
COPY 20-envsubst-on-templates.sh /docker-entrypoint.d
COPY 30-tune-worker-processes.sh /docker-entrypoint.d
RUN  chmod 755 /docker-entrypoint.sh \
     && chmod 755 /docker-entrypoint.d/*.sh

ENTRYPOINT ["/docker-entrypoint.sh"]

EXPOSE 8080

STOPSIGNAL SIGQUIT

USER $UID

CMD ["nginx", "-g", "daemon off;"]

10-listen-on-ipv6-by-default.sh:

#!/bin/sh
# vim:sw=4:ts=4:et

set -e

ME=$(basename $0)
DEFAULT_CONF_FILE="etc/nginx/conf.d/default.conf"

# check if we have ipv6 available
if [ ! -f "/proc/net/if_inet6" ]; then
    echo >&3 "$ME: info: ipv6 not available"
    exit 0
fi

if [ ! -f "/$DEFAULT_CONF_FILE" ]; then
    echo >&3 "$ME: info: /$DEFAULT_CONF_FILE is not a file or does not exist"
    exit 0
fi

# check if the file can be modified, e.g. not on a r/o filesystem
touch /$DEFAULT_CONF_FILE 2>/dev/null || { echo >&3 "$ME: info: can not modify /$DEFAULT_CONF_FILE (read-only file system?)"; exit 0; }

# check if the file is already modified, e.g. on a container restart
grep -q "listen  \[::]\:8080;" /$DEFAULT_CONF_FILE && { echo >&3 "$ME: info: IPv6 listen already enabled"; exit 0; }

if [ -f "/etc/os-release" ]; then
    . /etc/os-release
else
    echo >&3 "$ME: info: can not guess the operating system"
    exit 0
fi

echo >&3 "$ME: info: Getting the checksum of /$DEFAULT_CONF_FILE"

case "$ID" in
    "debian")
        CHECKSUM=$(dpkg-query --show --showformat='${Conffiles}\n' nginx | grep $DEFAULT_CONF_FILE | cut -d' ' -f 3)
        echo "$CHECKSUM  /$DEFAULT_CONF_FILE" | md5sum -c - >/dev/null 2>&1 || {
            echo >&3 "$ME: info: /$DEFAULT_CONF_FILE differs from the packaged version"
            exit 0
        }
        ;;
    "alpine")
        CHECKSUM=$(apk manifest nginx 2>/dev/null| grep $DEFAULT_CONF_FILE | cut -d' ' -f 1 | cut -d ':' -f 2)
        echo "$CHECKSUM  /$DEFAULT_CONF_FILE" | sha1sum -c - >/dev/null 2>&1 || {
            echo >&3 "$ME: info: /$DEFAULT_CONF_FILE differs from the packaged version"
            exit 0
        }
        ;;
    *)
        echo >&3 "$ME: info: Unsupported distribution"
        exit 0
        ;;
esac

# enable ipv6 on default.conf listen sockets
sed -i -E 's,listen       8080;,listen       8080;\n    listen  [::]:8080;,' /$DEFAULT_CONF_FILE

echo >&3 "$ME: info: Enabled listen on IPv6 in /$DEFAULT_CONF_FILE"

exit 0

20-envsubst-on-templates.sh:

#!/bin/sh

set -e

ME=$(basename $0)

auto_envsubst() {
  local template_dir="${NGINX_ENVSUBST_TEMPLATE_DIR:-/etc/nginx/templates}"
  local suffix="${NGINX_ENVSUBST_TEMPLATE_SUFFIX:-.template}"
  local output_dir="${NGINX_ENVSUBST_OUTPUT_DIR:-/etc/nginx/conf.d}"

  local template defined_envs relative_path output_path subdir
  defined_envs=$(printf '${%s} ' $(env | cut -d= -f1))
  [ -d "$template_dir" ] || return 0
  if [ ! -w "$output_dir" ]; then
    echo >&3 "$ME: ERROR: $template_dir exists, but $output_dir is not writable"
    return 0
  fi
  find "$template_dir" -follow -type f -name "*$suffix" -print | while read -r template; do
    relative_path="${template#$template_dir/}"
    output_path="$output_dir/${relative_path%$suffix}"
    subdir=$(dirname "$relative_path")
    # create a subdirectory where the template file exists
    mkdir -p "$output_dir/$subdir"
    echo >&3 "$ME: Running envsubst on $template to $output_path"
    envsubst "$defined_envs" < "$template" > "$output_path"
  done
}

auto_envsubst

exit 0

30-tune-worker-processes.sh:

#!/bin/sh
# vim:sw=2:ts=2:sts=2:et

set -eu

LC_ALL=C
ME=$( basename "$0" )
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

[ "${NGINX_ENTRYPOINT_WORKER_PROCESSES_AUTOTUNE:-}" ] || exit 0

touch /etc/nginx/nginx.conf 2>/dev/null || { echo >&2 "$ME: error: can not modify /etc/nginx/nginx.conf (read-only file system?)"; exit 0; }

ceildiv() {
  num=$1
  div=$2
  echo $(( (num + div - 1)/div ))
}

get_cpuset() {
  cpusetroot=$1
  cpusetfile=$2
  ncpu=0
  [ -f "$cpusetroot/$cpusetfile" ] || return 1
  for token in $( tr ',' ' ' < "$cpusetroot/$cpusetfile" ); do
    case "$token" in
      *-*)
        count=$( seq $(echo "$token" | tr '-' ' ') | wc -l )
        ncpu=$(( ncpu+count ))
        ;;
      *)
        ncpu=$(( ncpu+1 ))
        ;;
    esac
  done
  echo "$ncpu"
}

get_quota() {
  cpuroot=$1
  ncpu=0
  [ -f "$cpuroot/cpu.cfs_quota_us" ] || return 1
  [ -f "$cpuroot/cpu.cfs_period_us" ] || return 1
  cfs_quota=$( cat "$cpuroot/cpu.cfs_quota_us" )
  cfs_period=$( cat "$cpuroot/cpu.cfs_period_us" )
  [ "$cfs_quota" = "-1" ] && return 1
  [ "$cfs_period" = "0" ] && return 1
  ncpu=$( ceildiv "$cfs_quota" "$cfs_period" )
  [ "$ncpu" -gt 0 ] || return 1
  echo "$ncpu"
}

get_quota_v2() {
  cpuroot=$1
  ncpu=0
  [ -f "$cpuroot/cpu.max" ] || return 1
  cfs_quota=$( cut -d' ' -f 1 < "$cpuroot/cpu.max" )
  cfs_period=$( cut -d' ' -f 2 < "$cpuroot/cpu.max" )
  [ "$cfs_quota" = "max" ] && return 1
  [ "$cfs_period" = "0" ] && return 1
  ncpu=$( ceildiv "$cfs_quota" "$cfs_period" )
  [ "$ncpu" -gt 0 ] || return 1
  echo "$ncpu"
}

get_cgroup_v1_path() {
  needle=$1
  found=
  foundroot=
  mountpoint=

  [ -r "/proc/self/mountinfo" ] || return 1
  [ -r "/proc/self/cgroup" ] || return 1

  while IFS= read -r line; do
    case "$needle" in
      "cpuset")
        case "$line" in
          *cpuset*)
            found=$( echo "$line" | cut -d ' ' -f 4,5 )
            break
            ;;
        esac
        ;;
      "cpu")
        case "$line" in
          *cpuset*)
            ;;
          *cpu,cpuacct*|*cpuacct,cpu|*cpuacct*|*cpu*)
            found=$( echo "$line" | cut -d ' ' -f 4,5 )
            break
            ;;
        esac
    esac
  done << __EOF__
$( grep -F -- '- cgroup ' /proc/self/mountinfo )
__EOF__

  while IFS= read -r line; do
    controller=$( echo "$line" | cut -d: -f 2 )
    case "$needle" in
      "cpuset")
        case "$controller" in
          cpuset)
            mountpoint=$( echo "$line" | cut -d: -f 3 )
            break
            ;;
        esac
        ;;
      "cpu")
        case "$controller" in
          cpu,cpuacct|cpuacct,cpu|cpuacct|cpu)
            mountpoint=$( echo "$line" | cut -d: -f 3 )
            break
            ;;
        esac
        ;;
    esac
done << __EOF__
$( grep -F -- 'cpu' /proc/self/cgroup )
__EOF__

  case "${found%% *}" in
    "/")
      foundroot="${found##* }$mountpoint"
      ;;
    "$mountpoint")
      foundroot="${found##* }"
      ;;
  esac
  echo "$foundroot"
}

get_cgroup_v2_path() {
  found=
  foundroot=
  mountpoint=

  [ -r "/proc/self/mountinfo" ] || return 1
  [ -r "/proc/self/cgroup" ] || return 1

  while IFS= read -r line; do
    found=$( echo "$line" | cut -d ' ' -f 4,5 )
  done << __EOF__
$( grep -F -- '- cgroup2 ' /proc/self/mountinfo )
__EOF__

  while IFS= read -r line; do
    mountpoint=$( echo "$line" | cut -d: -f 3 )
done << __EOF__
$( grep -F -- '0::' /proc/self/cgroup )
__EOF__

  case "${found%% *}" in
    "")
      return 1
      ;;
    "/")
      foundroot="${found##* }$mountpoint"
      ;;
    "$mountpoint")
      foundroot="${found##* }"
      ;;
  esac
  echo "$foundroot"
}

ncpu_online=$( getconf _NPROCESSORS_ONLN )
ncpu_cpuset=
ncpu_quota=
ncpu_cpuset_v2=
ncpu_quota_v2=

cpuset=$( get_cgroup_v1_path "cpuset" ) && ncpu_cpuset=$( get_cpuset "$cpuset" "cpuset.effective_cpus" ) || ncpu_cpuset=$ncpu_online
cpu=$( get_cgroup_v1_path "cpu" ) && ncpu_quota=$( get_quota "$cpu" ) || ncpu_quota=$ncpu_online
cgroup_v2=$( get_cgroup_v2_path ) && ncpu_cpuset_v2=$( get_cpuset "$cgroup_v2" "cpuset.cpus.effective" ) || ncpu_cpuset_v2=$ncpu_online
cgroup_v2=$( get_cgroup_v2_path ) && ncpu_quota_v2=$( get_quota_v2 "$cgroup_v2" ) || ncpu_quota_v2=$ncpu_online

ncpu=$( printf "%s\n%s\n%s\n%s\n%s\n" \
               "$ncpu_online" \
               "$ncpu_cpuset" \
               "$ncpu_quota" \
               "$ncpu_cpuset_v2" \
               "$ncpu_quota_v2" \
               | sort -n \
               | head -n 1 )

sed -i.bak -r 's/^(worker_processes)(.*)$/# Commented out by '"$ME"' on '"$(date)"'\n#\1\2\n\1 '"$ncpu"';/' /etc/nginx/nginx.conf

docker-entrypoint.sh:

#!/bin/sh
# vim:sw=4:ts=4:et

set -e

if [ -z "${NGINX_ENTRYPOINT_QUIET_LOGS:-}" ]; then
    exec 3>&1
else
    exec 3>/dev/null
fi

if [ "$1" = "nginx" -o "$1" = "nginx-debug" ]; then
    if /usr/bin/find "/docker-entrypoint.d/" -mindepth 1 -maxdepth 1 -type f -print -quit 2>/dev/null | read v; then
        echo >&3 "$0: /docker-entrypoint.d/ is not empty, will attempt to perform configuration"

        echo >&3 "$0: Looking for shell scripts in /docker-entrypoint.d/"
        find "/docker-entrypoint.d/" -follow -type f -print | sort -V | while read -r f; do
            case "$f" in
                *.sh)
                    if [ -x "$f" ]; then
                        echo >&3 "$0: Launching $f";
                        "$f"
                    else
                        # warn on shell scripts without exec bit
                        echo >&3 "$0: Ignoring $f, not executable";
                    fi
                    ;;
                *) echo >&3 "$0: Ignoring $f";;
            esac
        done

        echo >&3 "$0: Configuration complete; ready for start up"
    else
        echo >&3 "$0: No files found in /docker-entrypoint.d/, skipping configuration"
    fi
fi

exec "$@"

Put these files in the same directory and then build the image with docker build -t nginxinc/docker-nginx-unprivileged:latest . (note the trailing dot for the build context).
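A quick way to sanity-check the result is to run the image locally as a non-root user and hit port 8080. This is a minimal sketch assuming the tag built above:

# run the unprivileged image and check that it serves on port 8080
docker run --rm -d -p 8080:8080 --name nginx-unpriv nginxinc/docker-nginx-unprivileged:latest
curl -I http://localhost:8080
docker rm -f nginx-unpriv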

How to Solve Client-go Mod Error

Error message:

E:\github\client-go>go mod tidy
go: finding module for package k8s.io/client-go/kubernetes
go: finding module for package k8s.io/client-go/tools/clientcmd
go: finding module for package k8s.io/apimachinery/pkg/apis/meta/v1
go: found k8s.io/apimachinery/pkg/apis/meta/v1 in k8s.io/apimachinery v0.22.2
go: finding module for package k8s.io/client-go/kubernetes
go: finding module for package k8s.io/client-go/tools/clientcmd
client-go imports
        k8s.io/client-go/kubernetes: module k8s.io/client-go@latest found (v1.5.2), but does not contain package k8s.io/client-go/kubernetes
client-go imports
        k8s.io/client-go/tools/clientcmd: module k8s.io/client-go@latest found (v1.5.2), but does not contain package k8s.io/client-go/tools/clientcmd

Solution:
Always pin these three modules explicitly in the go.mod file (the root k8s.io/client-go module is tagged v1.x, which is why go mod tidy picks v1.5.2 and then fails to find the packages):

require (
    ...
    k8s.io/api v0.19.0
    k8s.io/apimachinery v0.19.0
    k8s.io/client-go v0.19.0
    ...
)
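The same pinning can be done from the command line, which writes the versions into go.mod for you (standard go tooling; v0.19.0 matches the require block above):

go get k8s.io/api@v0.19.0 k8s.io/apimachinery@v0.19.0 k8s.io/client-go@v0.19.0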