Tag Archives: k8s

Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work

When deploying ingress-nginx with the yaml downloaded here:

wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/mandatory.yaml -O nginx-ingress-controller.yaml

checking the pod log reports the error above.

Solution:

Add hostNetwork: true to the yaml and redeploy, or edit the pod's Deployment to roll out the update.

Kubernetes hostNetwork: true networking

This is a way to give a pod the host's network directly. If you configure hostNetwork: true in a pod, the application running in the pod sees the network interfaces of the host it is scheduled on, and the application is reachable from every network the host is attached to.
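A minimal sketch of where the field belongs in a Deployment's pod spec. The names and labels here are illustrative, and the image tag is assumed from the 0.30.0 release referenced above:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
    spec:
      hostNetwork: true   # pod shares the host's network namespace
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0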

After a successful deployment, log in to the node where the pod is scheduled and verify that the port is listening on the host:

netstat -anp | grep LISTEN | grep 80
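Illustrative output (PID, process name, and addresses will differ on your node):

tcp        0      0 0.0.0.0:80      0.0.0.0:*       LISTEN      12345/nginx: master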

View the pod log again after the deployment to confirm the error is gone.

You must be logged in to the server (unauthorized)

Problem description

A k8s cluster has 3 masters sharing the same certificate files. One master works normally, but on the other two, running kubectl get nodes or kubectl get po --all-namespaces returns the error: You must be logged in to the server (Unauthorized)
The solution

If the cluster was installed as root:

export KUBECONFIG=/etc/kubernetes/admin.conf

If the cluster was installed as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
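After either step, verify on the affected master that kubectl authenticates again:

kubectl get nodes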

Summary of k8s single master cluster deployment

Fixing the kubeadm preflight warnings:

1.[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly

answer: systemctl stop firewalld.service
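Stopping firewalld is the quick fix; an alternative that keeps the firewall running is to open the ports named in the warning instead (a sketch using firewall-cmd):

firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --reload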

2.[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'

answer: systemctl enable docker.service

3.[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/

answer:

The official documentation indicates that configuring the container runtime and the kubelet to use systemd as the cgroup driver makes the system more stable. Note that for Docker, you set the option native.cgroupdriver=systemd.

Two solutions:

1. Edit the Docker configuration file /etc/docker/daemon.json and add:

{
  "exec-opts": ["native.cgroupdriver=systemd"]
}

systemctl daemon-reload

systemctl restart docker

2. Edit /usr/lib/systemd/system/docker.service and change the ExecStart line:

ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd

systemctl daemon-reload

systemctl restart docker

command:

vi /usr/lib/systemd/system/docker.service

--exec-opt native.cgroupdriver=systemd (append this option to ExecStart)

After the change, you can confirm that the Cgroup Driver is systemd with the docker info command:

docker info | grep Cgroup
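Illustrative output after the change:

Cgroup Driver: systemd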

4.[WARNING FileExisting-tc]: tc not found in system path

solution:

In my case it could not be installed with yum: the repo lists a version, but the download kept failing. If yum does work for you, the repo carries the same version:

yum command: yum install -y iproute-tc

note: the version is very important. I tried several versions; only 5.3.0-1 works.

download RPM package:

http://mirror.centos.org/centos/8/BaseOS/x86_64/os/Packages/iproute-tc-5.3.0-1.el8.x86_64.rpm

local installation:

yum localinstall -y iproute-tc-5.3.0-1.el8.x86_64.rpm
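To confirm the installation afterwards:

tc -V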

5.[WARNING Hostname]: hostname “master” could not be reached

solution: see 6

6.[WARNING Hostname]: hostname “master”: lookup master on [fe80::1%ens33]:53: read udp [fe80::e0c:1711:9797:f6c7%ens33]:56921-> [fe80::1%ens33]:53: i/o timeout
error execution phase preflight: [preflight] Some fatal errors occurred:

solution:

Change the host name with the command:

hostnamectl set-hostname k8s

or write /etc/hostname directly:

echo k8s > /etc/hostname

Then add the cluster hosts to /etc/hosts, for example:

cat >> /etc/hosts << EOF
192.168.100.4 master
192.168.100.5 node1
192.168.100.6 node2
EOF

The IPs are the addresses assigned to your master and worker nodes.
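A quick check that the names now resolve (using the master entry above):

getent hosts master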

cleanup command:

sudo kubeadm reset

If

$ kubeadm init \
    --apiserver-advertise-address=192.168.44.146 \
    --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version v1.18.0 \
    --service-cidr=10.96.0.0/12 \
    --pod-network-cidr=10.244.0.0/16

goes wrong (this is the step that fails most easily, and the master can end up unable to start), you can use the cleanup command above and try again.

Caution: in my tests nothing bad happened and it will not kill your k8s. I am still a novice, so I cannot rule out side effects of this command, but when I tested it to solve the problem, it really worked!

docker system prune -a

It frees disk space by removing stopped containers, unused networks, and all unused images (add --volumes to also remove unused volumes).

K8s: configuring HTTPS with an existing certificate

existing certificate

import certificate

#kubectl create secret tls example-secret --key cert/xxx.key --cert cert/xxx.pem

note that the secret name and the file suffixes can vary; for example, the certificate file may end in .crt.
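For example, with a .crt certificate file (paths are illustrative):

#kubectl create secret tls example-secret --key cert/xxx.key --cert cert/xxx.crt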

Ingress yaml file configuration example:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example
spec:
  tls:
    - hosts:
        - www.example.com
      secretName: example-secret
  rules:
  - host: www.example.com
    http:
      paths:
      - backend:
          serviceName: example
          servicePort: 80

Check

kubectl get secret
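Illustrative output (name and age depend on your cluster):

NAME             TYPE                DATA   AGE
example-secret   kubernetes.io/tls   2      1m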

self-generated certificate

You can create your own CA and generate your own certificate. But this approach is only suitable for debugging, not for a production environment: most browsers will warn that the site is not safe. You can configure your own browser to trust your CA, but you cannot expect every other user to do that. So you can apply at one of the free certificate sites for a certificate to use; although it has a limited lifetime, it is a workable fallback.
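A minimal sketch of generating a self-signed certificate for debugging (the CN and file paths are illustrative); the resulting files can then be imported with kubectl create secret tls as shown above:

openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout cert/xxx.key -out cert/xxx.crt \
  -subj "/CN=www.example.com"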