Tag Archives: docker

[Solved] Error getting ssh command 'exit 0': ssh command error

docker-machine Create Certificate Stuck
docker-machine --debug create -d hyperv --hyperv-virtual-switch "Default Switch" docker-machine
Hint
https://github.com/minishift/minishift/issues/2722

Hi,
the only way for me to get around this was to disable the Windows 10
built-in OpenSSH Client, via Windows Features.
After that minishift used its internal ssh client and proceeded.
Unfortunately I am running into another issue after that, where the
control-plane pods are not starting and the minishift deployment fails,
since the API access times out.
Would be interesting to see if you get the same once you deal with the SSH
stuff.

Solution:
Go to [Settings] => [Apps] => [Optional Features]
and uninstall the Windows 10 built-in OpenSSH Client.
Once the built-in client is removed, docker-machine falls back to its own bundled ssh client and the create command completes successfully.
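
The same removal can also be done from an elevated PowerShell via the Windows capability cmdlets (a sketch; the capability version string may differ on your system):

# List the installed OpenSSH client capability
Get-WindowsCapability -Online | Where-Object Name -like 'OpenSSH.Client*'

# Remove it; docker-machine/minishift then falls back to its bundled ssh client
Remove-WindowsCapability -Online -Name 'OpenSSH.Client~~~~0.0.1.0'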

[Solved] error during connect: This error may indicate that the docker daemon is not running

The shortcut key of my screenshot tool is Ctrl+Q, and the exit shortcut of Docker Desktop is also Ctrl+Q, so pressing Ctrl+Q quit Docker Desktop. When I then entered a docker command in the console,

the following error appeared:

error during connect: This error may indicate that the docker daemon is not running.: Get "http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.24/containers/json": open //./pipe/docker_engine: The system cannot find the file specified.

Solution:
Reopen Docker Desktop.

When the status icon in the lower-left corner of the Docker Desktop window turns green, Docker is running normally.

Then go back to CMD, enter a docker command, and this time there is no error.
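
To confirm from the terminal that the daemon is reachable again, either of these will do:

docker version    # the "Server:" section is only printed when the daemon is running
docker info       # fails with the same connect error while the daemon is down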

[Solved] docker Commands Execute Error: Segmentation fault

Executing any docker command reported "Segmentation fault", although there had been no such errors when using docker before. Troubleshooting showed that only about 110 MB of memory was available, so the suspicion was that memory was insufficient. The cache-dropping command below was executed with the parameter set to 1, 2, and 3 in turn, but it failed to free any memory.

sync
echo 1 > /proc/sys/vm/drop_caches

The solution was found on GitHub. First run

sysctl vm.overcommit_memory

The output was 0. Then change the parameter:

sysctl -w vm.overcommit_memory=1

At this point the application that was hogging memory restarted automatically. If it does not, run the cleanup command above again.
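
Note that a value set this way does not survive a reboot. To make it permanent, one option (a sketch; adjust to your distro's conventions) is:

echo 'vm.overcommit_memory = 1' >> /etc/sysctl.conf
sysctl -p    # reload settings from the file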

[Solved] Kibana Error: Kibana server is not ready yet

Background


Visiting Kibana at http://localhost:5601 in the browser always shows the prompt "Kibana server is not ready yet".

Execute the following command to view kibana logs,

docker logs kibana

The log output (screenshot omitted) showed that Kibana could not connect to Elasticsearch.
It was suspected that each container's internal IP changes after the container restarts.

1. Therefore, run the following command to check the internal IP of the elasticsearch container. It turned out that the ES container IP in Kibana's yaml configuration file no longer matched the actual ES container IP.

docker inspect --format '{{ .NetworkSettings.IPAddress }}' <es_container_id>

// Check the id of es container
docker ps

2. Enter the kibana container and update the kibana.yml configuration file. Run the following commands to enter the container and edit kibana.yml:

docker exec -it <kibana_container_id> /bin/bash
cd config
vi kibana.yml

Replace the Elasticsearch IP address in kibana.yml (the highlighted part in the original screenshot) with the actual ES container IP, then save the file and exit the container.
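
For reference, in Kibana 7.x the line in question typically looks like this (the IP below is only an example):

elasticsearch.hosts: [ "http://172.17.0.2:9200" ]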

3. Stop the kibana service, delete the kibana container, and run kibana again.

// 3.1 Stop the kibana service
docker stop <kibana_container_id>

// 3.2 Delete the kibana container (delete the container only: do NOT delete the kibana image!)
docker rm -f <kibana_container_id>

// 3.3 Run kibana again
docker run --name kibana -e ELASTICSEARCH_HOSTS=http://<es_container_ip>:9200 -p 5601:5601 -d kibana:7.7.0


// Note: the tag after 'kibana:' in 3.3 is the Kibana version number. To be on the safe side, it is recommended to keep the elasticsearch and kibana versions the same.

4. Revisit http://localhost:5601 in the browser; after a few refreshes, Kibana is accessible normally.
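
A more robust alternative, not part of the original fix but worth sketching: attach both containers to a user-defined Docker network, whose embedded DNS lets Kibana reach Elasticsearch by container name, so the IP assigned after a restart no longer matters (the 7.7.0 tags mirror the setup above):

docker network create elastic
docker run --name es --net elastic -e discovery.type=single-node -d elasticsearch:7.7.0
docker run --name kibana --net elastic -e ELASTICSEARCH_HOSTS=http://es:9200 -p 5601:5601 -d kibana:7.7.0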

[Solved] nvidia-docker runtime Error: (Unknown runtime specified nvidia)

1. An error is reported when running the docker command

root@test:~# docker run --runtime=nvidia -ti  -v $(pwd):/workspace -w /workspace -v /nfs:/nfs $@ --privileged -v /var/run/docker.sock:/var/run/docker.sock registry.test.cn/mla/cxx_toolchains:latest
docker: Error response from daemon: Unknown runtime specified nvidia.
See 'docker run --help'.

According to the error prompt, check whether nvidia-docker is installed:

root@test:~# nvidia-docker
nvidia-docker: command not found
root@test:~# 

Obviously, it is not installed.

2. Execute the script to install nvidia-docker

root@test:~# cat install-nvidia-docker.sh
sudo curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
sudo curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update
sudo apt-get install -y nvidia-docker2
sudo pkill -SIGHUP dockerd
root@test:~#

Check that nvidia-docker and nvidia-container-runtime were installed successfully:

root@test:~# which nvidia-docker
/usr/bin/nvidia-docker
root@test:~# which nvidia-container-runtime
/usr/bin/nvidia-container-runtime
root@test:~#

3. Edit /etc/docker/daemon.json as follows:

root@test:~# cat /etc/docker/daemon.json
{
  "insecure-registries": ["registry.test.cn"],
  "max-concurrent-downloads": 10,
  "log-driver": "json-file",
  "log-level": "warn",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  },
  "live-restore": true,
  "metrics-addr": "0.0.0.0:9323",
  "default-runtime": "nvidia",
  "experimental": true,
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
root@test:~#

4. Restart docker

root@test:~# systemctl daemon-reload
root@test:~# systemctl restart docker
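
Before launching a container, you can confirm that the nvidia runtime was registered; docker info lists the configured runtimes and the default runtime (the grep is just a convenience filter):

root@test:~# docker info | grep -i runtime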

5. Verification

root@test:~# docker run --runtime=nvidia -ti  -v $(pwd):/workspace -w /workspace -v /nfs:/nfs $@ --privileged -v /var/run/docker.sock:/var/run/docker.sock registry.test.cn/mla/cxx_toolchains:latest
root@c3a43f4564a8:/workspace#
root@c3a43f4564a8:/workspace# ls
root@c3a43f4564a8:/workspace# pwd
/workspace
root@c3a43f4564a8:/workspace#

Rancher application service error: request entity too large

When request entity too large occurs, it is because the request body exceeds the ingress-nginx default limit of 1 MB.

1. Set the corresponding parameter in the ingress of Rancher; a sketch follows below.

Configuration note: nginx.ingress.kubernetes.io/proxy-body-size
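
A minimal example of where the annotation goes (the 1024m value is illustrative and should match your largest upload):

metadata:
  annotations:
    # raise the ingress-nginx body-size limit from the default 1m
    nginx.ingress.kubernetes.io/proxy-body-size: "1024m"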

2. For Spring Boot 2.0, add the following to the application configuration file:

spring.servlet.multipart.max-file-size=1024MB
spring.servlet.multipart.max-request-size=1024MB

[Errno 14] curl#6 - "Could not resolve host: yum.dockerproject.org; Unknown error"

1. Command:

sudo yum install docker-ce docker-ce-cli containerd.io

2. Error Messages:

Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
base                                                   | 3.6 kB     00:00
docker-ce-stable                                                                                                                                                                                                          | 3.5 kB  00:00:00
docker-ce-test                                                                                                                                                                                                            | 3.5 kB  00:00:00
https://yum.dockerproject.org/repo/main/centos/7/repodata/repomd.xml: [Errno 14] curl#6 - "Could not resolve host: yum.dockerproject.org; Unknown error"
Trying other mirror.

One of the configured repositories failed (Docker Repository),
and yum doesn't have enough cached data to continue. At this point the only
safe thing yum can do is fail. There are a few ways to work "fix" this:

1. Contact the upstream for the repository and get them to fix the problem.
2. Reconfigure the baseurl/etc. for the repository, to point to a working
   upstream. This is most often useful if you are using a newer
   distribution release than is supported by the repository (and the
   packages for the previous distribution release still work).
3. Run the command with the repository temporarily disabled
       yum --disablerepo=dockerrepo …
4. Disable the repository permanently, so yum won't use it by default. Yum
   will then just ignore the repository until you permanently enable it
   again or use --enablerepo for temporary usage:
       yum-config-manager --disable dockerrepo
   or
       subscription-manager repos --disable=dockerrepo
5. Configure the failing repository to be skipped, if it is unavailable.
   Note that yum will try to contact the repo. when it runs most commands,
   so will have to try and fail each time (and thus. yum will be be much
   slower). If it is a very temporary problem though, this is often a nice
   compromise:
       yum-config-manager --save --setopt=dockerrepo.skip_if_unavailable=true

failure: repodata/repomd.xml from dockerrepo: [Errno 256] No more mirrors to try.
https://yum.dockerproject.org/repo/main/centos/7/repodata/repomd.xml: [Errno 14] curl#6 - "Could not resolve host: yum.dockerproject.org; Unknown error"

3. Solution:

yum-config-manager --disable dockerrepo
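
yum.dockerproject.org is Docker's long-deprecated legacy repository, so disabling it is the right call. If a Docker repository is still needed afterwards, add the current upstream one instead (the repo id "dockerrepo" above may differ on your machine):

sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install docker-ce docker-ce-cli containerd.io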

[Solved] docker Error: System has not been booted with systemd as init system (PID 1). Can‘t operate. Failed to con

Environment: CentOS 7.8

The docker container reported an error when using the systemctl command:

[root@d7a74069b83c yum.repos.d]# systemctl status firewalld
System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to connect to bus: Host is down

Solution:

Add the --privileged parameter when starting the container:

[root@localhost ~]# docker run -itd --name c8 --privileged centos /usr/sbin/init
6a6a3c9f9fa9acc59d62a6e82ccb6a637db8aada004aa8a096c6061108c6b144
[root@localhost ~]# docker exec -it c8 /bin/bash
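
With the container started this way, systemd runs as PID 1 and systemctl works. For a quick check (using the c8 container from above), the following should print a state such as "running" or "degraded" instead of the bus error:

[root@localhost ~]# docker exec -it c8 systemctl is-system-running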

[Solved] K8s Initialize Error: failed with error: Get "http://localhost:10248/healthz"

Environment description

Server: CentOS 7
docker: 20.10.12
kubeadm: v1.23.1
Kubernetes: v1.23.1

Exception description

After docker and the k8s-related components were installed, a problem occurred when executing kubeadm init to initialize the master node.
The statement executed:

kubeadm init \
--apiserver-advertise-address=Server_IP \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version=v1.23.1 \
--pod-network-cidr=10.244.0.0/16 \
--service-cidr=10.96.0.0/12 

The error reported:

[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.

As the hint following the error suggests, you can use journalctl -xeu kubelet to view the detailed error information; if the lines are truncated in the pager, use the arrow keys to scroll horizontally.

The output:

[root@k8s-node01 ~]# journalctl -xeu kubelet
Dec 24 20:24:13 k8s-node01 kubelet[9127]: I1224 20:24:13.456712    9127 cni.go:240] "Unable to update cni config" err="no 
Dec 24 20:24:13 k8s-node01 kubelet[9127]: I1224 20:24:13.476156    9127 docker_service.go:264] "Docker Info" dockerInfo=&{
Dec 24 20:24:13 k8s-node01 kubelet[9127]: E1224 20:24:13.476236    9127 server.go:302] "Failed to run kubelet" err="failed
Dec 24 20:24:13 k8s-node01 systemd[1]: kubelet.service: main process exited, code=exited, status=1/FAILURE
Dec 24 20:24:13 k8s-node01 systemd[1]: Unit kubelet.service entered failed state.
Dec 24 20:24:13 k8s-node01 systemd[1]: kubelet.service failed.

Moving the view to the right with the arrow keys shows the rest of the truncated lines:

ID:ZYIL:OO24:BWLY:DTTB:TDKT:D3MZ:YGJ4:3ZOU:7DDY:YYPQ:DPWM:ERFV Containers:0 ContainersRunning:0 ContainersPaused:0 Contain
 to run Kubelet: misconfiguration: kubelet cgroup driver: \"systemd\" is different from docker cgroup driver: \"cgroupfs\"

Cause of the error

According to the error information above, the failure is caused by a mismatch between the cgroup drivers of k8s and docker:
the kubelet uses systemd, while docker uses cgroupfs.
Run

docker info

and check the "Cgroup Driver" line, which shows either systemd or cgroupfs. Docker defaults to cgroupfs, while kubeadm configures the kubelet to expect systemd.
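
To see just that line (the grep filter is only a convenience):

docker info | grep -i 'cgroup driver'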

Solution:

Change docker's cgroup driver to systemd.
Edit docker's configuration file (create it if it does not exist):

vi /etc/docker/daemon.json

Add:

{
  …
  "exec-opts": ["native.cgroupdriver=systemd"]
  …
}

Then restart docker:

systemctl restart docker

Then re-run kubeadm init.
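
If the failed first attempt left state behind, kubeadm init will usually refuse to run again; in that case reset the node first (note: kubeadm reset wipes this node's cluster state), then repeat the kubeadm init command shown earlier:

kubeadm reset -f    # clean up the artifacts of the failed init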