Tag Archives: docker

Apiserver Error: OpenAPI spec does not exists [How to Solve]

kubectl suddenly failed to fetch resources in an environment deployed only a few days ago. The apiserver log showed the "OpenAPI spec does not exists" error from the title, and then the controller-manager component also reported an error:

E0916 08:35:55.495444       1 leaderelection.go:306] error retrieving resource lock kube-system/kube-controller-manager: Get https://192.168.1.119:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: EOF

This environment uses a three-master deployment, and I saw that the VIP had drifted to master 3. Both haproxy and keepalived run as containers here, so I restarted haproxy first:

docker restart xxx
iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 0/0 --dport 8443 -j DNAT
 --to-destination 172.18.0.6:8443 ! -i docker0 iptables: No chain/target/match by that name
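
The error says the DOCKER chain in the nat table is missing. As a sanity check (my addition; assumes iptables is available on the host), you can confirm this before restarting anything:

# List the DOCKER chain in the nat table; an error here confirms it is gone
iptables -t nat -L DOCKER -n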

haproxy would not come up, so I guessed something was wrong with the iptables rules on the master 3 host. I then restarted keepalived, and at that point the VIP drifted to master 1 and kubectl could fetch resources again.

Finally, I restarted Docker on master 3:

systemctl restart docker

Then I manually moved the VIP back to master 3, and everything was normal again.

Docker compose reports an error and multiple containers conflict

While reproducing a vulnerability, I tried to bring up the vulnerable environment, but docker-compose reported an error: a multi-container name conflict.

The error contents are as follows:

WARNING: Found orphan containers (unacc_slave_1, unacc_master_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
unacc_web_1 is up-to-date

The suggestion in the warning is --remove-orphans, but that deletes the containers outright, which is obviously not ideal.

It turns out every compose configuration has a project name. The -p flag sets it explicitly; if no flag is given, Compose uses the current directory name.

Use the following command to specify a project name:

docker-compose -p xxx up -d
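
For example, with an explicit project name the containers get new names and no longer clash with the orphans (the project name vuln2 below is illustrative):

docker-compose -p vuln2 up -d
docker ps --format '{{.Names}}'    # e.g. vuln2_web_1, vuln2_master_1, vuln2_slave_1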

After opening the port 8086 specified in the compose file, the vulnerable environment came up successfully.

ESXi virtual machine is missing vmdk file error [How to Solve]

Background: the virtual machine tang2 is missing its vmdk descriptor file tang2.vmdk, so it fails to boot with an error.

[root@localhost:/vmfs/volumes/e9f402/tang2] ls -l

total 84028480
-rw-------    1 root     root     49936498688 Sep  9 02:30 tang2-000001-sesparse.vmdk
-rw-------    1 root     root           329 Aug 17  2020 tang2-000001.vmdk
-rw-------    1 root     root     4294967296 Dec 25  2019 tang2-Snapshot1.vmem
-rw-------    1 root     root       9732350 Dec 25  2019 tang2-Snapshot1.vmsn
-rw-------    1 root     root     107374182400 Dec 25  2019 tang2-flat.vmdk
-rw-------    1 root     root          8684 Sep  9 01:43 tang2.nvram
-rw-r--r--    1 root     root             0 Feb 24  2021 tang2.vmsd
-rwxr-xr-x    1 root     root          3303 Feb  7  2021 tang2.vmx
-rw-------    1 root     root          3237 Feb  7  2021 tang2.vmxf
-rw-------    1 root     root     107374182400 Sep  9 08:42 temp-flat.vmdk
-rw-------    1 root     root           494 Sep  9 08:42 temp.vmdk
-rw-r--r--    1 root     root        266758 Oct 18  2019 vmware-1.log
-rw-r--r--    1 root     root        351477 May  3  2020 vmware-2.log
-rw-r--r--    1 root     root        271780 Aug 17  2020 vmware-3.log
-rw-r--r--    1 root     root        296091 Sep  9 01:43 vmware-4.log
-rw-r--r--    1 root     root         78208 Sep  9 01:44 vmware-5.log
-rw-r--r--    1 root     root         76793 Sep  9 02:30 vmware.log

1. Generate a new vmdk descriptor file matching the size of tang2-flat.vmdk (107374182400 bytes):
[root@localhost:/vmfs/volumes/e9f402/tang2] vmkfstools -c 107374182400 -d thin temp.vmdk
Create: 100% done.
2. Delete the newly created temp-flat.vmdk data file, keeping only the temp.vmdk descriptor file:
[root@localhost:/vmfs/volumes/e9f402/tang2] rm -f temp-flat.vmdk
3. Rename the newly generated descriptor file to the missing file name:

[root@localhost:/vmfs/volumes/e9f402/tang2] mv temp.vmdk tang2.vmdk
[root@localhost:/vmfs/volumes/e9f402/tang2] ls -l

total 84028480
-rw-------    1 root     root     49936498688 Sep  9 02:30 tang2-000001-sesparse.vmdk
-rw-------    1 root     root           329 Aug 17  2020 tang2-000001.vmdk
-rw-------    1 root     root     4294967296 Dec 25  2019 tang2-Snapshot1.vmem
-rw-------    1 root     root       9732350 Dec 25  2019 tang2-Snapshot1.vmsn
-rw-------    1 root     root     107374182400 Dec 25  2019 tang2-flat.vmdk
-rw-------    1 root     root          8684 Sep  9 01:43 tang2.nvram
-rw-------    1 root     root           494 Sep  9 08:42 tang2.vmdk
-rw-r--r--    1 root     root             0 Feb 24  2021 tang2.vmsd
-rwxr-xr-x    1 root     root          3303 Feb  7  2021 tang2.vmx
-rw-------    1 root     root          3237 Feb  7  2021 tang2.vmxf
-rw-r--r--    1 root     root        266758 Oct 18  2019 vmware-1.log
-rw-r--r--    1 root     root        351477 May  3  2020 vmware-2.log
-rw-r--r--    1 root     root        271780 Aug 17  2020 vmware-3.log
-rw-r--r--    1 root     root        296091 Sep  9 01:43 vmware-4.log
-rw-r--r--    1 root     root         78208 Sep  9 01:44 vmware-5.log
-rw-r--r--    1 root     root         76793 Sep  9 02:30 vmware.log

Edit the descriptor and confirm that its extent line points at the real data file, i.e. RW 209715200 VMFS "tang2-flat.vmdk":
[root@localhost:/vmfs/volumes/e9f402/tang2] vi tang2.vmdk

# Disk DescriptorFile
version=1
encoding="UTF-8"
CID=fffffffe
parentCID=ffffffff
isNativeSnapshot="no"
createType="vmfs"

 # Extent description
RW 209715200 VMFS "tang2-flat.vmdk"

 # The Disk Data Base
#DDB

ddb.adapterType = "lsilogic"
ddb.geometry.cylinders = "13054"
ddb.geometry.heads = "255"
ddb.geometry.sectors = "63"
ddb.longContentID = "dd1fc9f492c51eb078deb1b8fffffffe"
ddb.thinProvisioned = "1"
ddb.uuid = "60 00 C2 91 4b f7 a2 67-9d 42 aa b1 50 cf fe d0"
ddb.virtualHWVersion = "13"

[root@localhost:/vmfs/volumes/e9f402/tang2]
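
Sanity check: the extent size is given in 512-byte sectors, and 209715200 × 512 = 107374182400 bytes, which matches the size of tang2-flat.vmdk in the listing above.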

4. Edit the snapshot (child) descriptor so that its parentCID matches the CID of the parent descriptor, i.e. parentCID=fffffffe:
[root@localhost:/vmfs/volumes/e9f402/tang2] vi tang2-000001.vmdk

# Disk DescriptorFile
version=1
encoding="UTF-8"
CID=daed5d66
parentCID=fffffffe
isNativeSnapshot="no"
createType="seSparse"
parentFileNameHint="tang2.vmdk"
 # Extent description
RW 209715200 SESPARSE "tang2-000001-sesparse.vmdk"

 # The Disk Data Base 
#DDB

ddb.grain = "8"
ddb.longContentID = "d6bf9759610883dad09509d5daed5d66" 

5. Check that the disk chain of the parent vmdk file is consistent:
[root@localhost:/vmfs/volumes/e9f402/tang2] vmkfstools -e tang2.vmdk
Disk chain is consistent.

6. Then check that the disk chain of the snapshot vmdk file is consistent:
[root@localhost:/vmfs/volumes/e9f402/tang2] vmkfstools -e tang2-000001.vmdk
Disk chain is consistent.

If the configuration were wrong, vmkfstools -e would report a disk-chain error here instead.

Done. The VM now powers on normally from the console!

Error in Node when PM2 starts multiple processes in Docker

2021-09-13T15:41:15: PM2 log: App [kafka:1] starting in -cluster mode-
2021-09-13T15:41:15: PM2 log: App name:kafka id:2 disconnected
2021-09-13T15:41:15: PM2 log: App [kafka:2] exited with code [0] via signal [SIGINT]
2021-09-13T15:41:15: PM2 log: App [kafka:2] starting in -cluster mode-
2021-09-13T15:41:15: PM2 log: App [kafka:1] online
2021-09-13T15:41:15: PM2 log: App [kafka:2] online
/bin/bash:1
ELF
^
SyntaxError: Invalid or unexpected token

The Docker image starts PM2 with CMD ["pm2-runtime","process.json"].

The process.json configuration file looks like this:

{
    "apps" : [
        {
            "name": "kafka",
            "script": "node main.js --NODE_ENV=test",
            "log_date_format"  : "YYYY-MM-DD HH:mm:ss",
            "log_file"   : "/home/logs/log.log",
            "error_file" : "/home/logs/err.log",
            "out_file"   : "/home/logs/out.log",
            "instances": 3,
            "exec_mode": "cluster"
        }
    ]
  }

This should start three kafka instances in Docker, but it kept erroring. The cause is the "exec_mode" entry in the configuration file; delete it. Also, inside Docker remember to run PM2 as a blocking foreground process (pm2-runtime does this), not as a background daemon, otherwise the container will keep restarting and erroring.
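
A sketch of the corrected process.json following the advice above. Removing "exec_mode" is the fix from this post; splitting the command into "script" and "args" is my own assumption, since PM2 treats "script" as the entry file rather than a full shell command:

{
    "apps" : [
        {
            "name": "kafka",
            "script": "main.js",
            "args": "--NODE_ENV=test",
            "log_date_format"  : "YYYY-MM-DD HH:mm:ss",
            "log_file"   : "/home/logs/log.log",
            "error_file" : "/home/logs/err.log",
            "out_file"   : "/home/logs/out.log",
            "instances": 3
        }
    ]
}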

EasyExcel reports an error on Linux but runs correctly locally

Solution:

On the command line, run:
yum install fontconfig

Or add a line in the Dockerfile (just add it on the line before the ENTRYPOINT):
RUN apk add --update font-adobe-100dpi ttf-dejavu fontconfig

Then restart or rebuild the image.

Reason: the OpenJDK used in the Docker container is different from the local JDK and ships without fonts, so a null-pointer error is thrown for the font. If your code uses fonts, this fix is worth trying.
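
A minimal Dockerfile sketch of the fix, assuming an Alpine-based OpenJDK image (hence apk; use yum or apt-get on other bases) and a placeholder jar name app.jar:

FROM openjdk:8-jre-alpine
# install fonts so AWT font lookups no longer come back null
RUN apk add --update font-adobe-100dpi ttf-dejavu fontconfig
COPY app.jar /app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]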

[Solved] Error response from daemon: Get “*“: x509: certificate signed by unknown authority

Environment description

The Harbor registry I built requires domain-name access over HTTPS.

Error 1: when another Docker host logs in to Harbor

[root@k8s0001 ~]# docker login www.harbor.wuhan.cn
Username: admin
Password: 
Error response from daemon: Get "https://www.harbor.wuhan.cn/v2/": x509: certificate signed by unknown authority

Error 2: when another Docker host pulls an image from the self-built Harbor registry

[root@k8s0001 ~]# docker pull www.harbor.wuhan.cn/22202/helloworld@sha256:0d9ce49958ea82a48c40a397ccc785674ec3ce1dfd4f749c3c7c7a63790a54cd
Error response from daemon: Get "https://www.harbor.wuhan.cn/v2/": x509: certificate signed by unknown authority

For both errors, you need to scp the generated certificates into the Docker certificate directory (/etc/docker/certs.d) on each corresponding machine. The operations are as follows:

Configure the HTTPS trust

##harbor
[root@harbor opt]# cd /etc/docker/
[root@harbor docker]# ls
certs.d  key.json
[root@harbor docker]# cd certs.d/
[root@harbor certs.d]# ls
www.harbor.wuhan.cn
[root@harbor certs.d]# cd www.harbor.wuhan.cn/
[root@harbor www.harbor.wuhan.cn]# ls
ca.crt  www.harbor.wuhan.cn.cert  www.harbor.wuhan.cn.key
[root@harbor certs.d]# cd ..
[root@harbor certs.d]# scp -r www.harbor.wuhan.cn [email protected]:/etc/docker/certs.d/ 
[email protected]'s password: 
www.harbor.wuhan.cn.cert                                                                          100% 2126   914.9KB/s   00:00    
www.harbor.wuhan.cn.key                                                                           100% 3243     1.5MB/s   00:00    
ca.crt                                                                                             100% 2033   839.2KB/s   00:00    
[root@harbor certs.d]# 
[root@harbor certs.d]# scp -r www.harbor.wuhan.cn [email protected]:/etc/docker/certs.d/
[email protected]'s password: 
www.harbor.wuhan.cn.cert                                                                          100% 2126   845.3KB/s   00:00    
www.harbor.wuhan.cn.key                                                                           100% 3243     1.9MB/s   00:00    
ca.crt                                                                                             100% 2033     1.8MB/s   00:00    
[root@harbor certs.d]# 
[root@harbor certs.d]# 
[root@harbor certs.d]# scp -r www.harbor.wuhan.cn [email protected]:/etc/docker/certs.d/
[email protected]'s password: 
www.harbor.wuhan.cn.cert                                                                          100% 2126   227.8KB/s   00:00    
www.harbor.wuhan.cn.key                                                                           100% 3243     2.5MB/s   00:00    
ca.crt                                                                                             100% 2033     1.2MB/s   00:00 

Then restart Docker on each host:

[root@k8s0001 opt]# systemctl restart docker
[root@k8s0002 opt]# systemctl restart docker
[root@k8s0003 opt]# systemctl restart docker
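
With the certificates in place, the docker login from Error 1 should now succeed; the expected output (assuming valid credentials) ends with:

[root@k8s0001 ~]# docker login www.harbor.wuhan.cn
Username: admin
Password: 
Login Succeeded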

[Win 10] Docker Error: error during connect: In the default daemon configuration on Windows

error during connect: In the default daemon configuration on Windows, 
the docker client must be run with elevated privileges to connect.: 
Get http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.24/containers/json: 
open //./pipe/docker_engine: The system cannot find the file specified.

Solution: run the following from an elevated (Administrator) prompt, as the error message indicates:

cd "C:\Program Files\Docker\Docker"
DockerCli.exe -SwitchDaemon

Error response from daemon: failed to parse mydockerfile-centos: ENV must have two arguments

This problem occurred when I wrote my own Dockerfile and tried to run docker build:

Error response from daemon: failed to parse mydockerfile-centos: ENV must have two arguments

The message means that the ENV instruction requires two arguments: a name and a value. In my Dockerfile the ENV line had only one argument, because I had forgotten the space between the name and the value. After separating them so that ENV gets its two arguments, rebuilding the Dockerfile succeeds.
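
A hypothetical before/after reconstructing the mistake, since the original screenshots are not available (the variable name MYPATH is illustrative):

# Wrong: no space, so ENV sees only one argument
ENV MYPATH/usr/local

# Right: name and value separated by a space (ENV MYPATH=/usr/local also works)
ENV MYPATH /usr/local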


Error running docker container: starting container process caused "exec: \"python\": executable file not found in $PATH"

Problem: a Python environment was created in a Miniconda3 virtual environment, and the following Dockerfile was used to build the image:

FROM cuda10.2_pt1.5:09061
COPY . /workspace
WORKDIR /workspace
CMD ["python","run.py","/input_path","/output_path"]

Running the image produced this error:

docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: \"python\": executable file not found in $PATH": unknown.

First attempt: add an environment variable. After adding it inside a running container, python is recognized; but in a freshly built image the problem is unchanged and python is still not found:

export PATH="/root/miniconda3/bin:$PATH"

Second attempt: create a symlink to python:

ln -s /root/miniconda3/bin/python /usr/bin/python

Again this works inside a running container, but the rebuilt image still errors:

docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: \"/root/miniconda3/bin/path\": stat /root/miniconda3/bin/path: no such file or directory": unknown.

Analysis: the command executed automatically at container start cannot locate the executable.

Solution: since the short name python cannot be located, give the complete path /root/miniconda3/bin/python directly in CMD. Following some other examples, I also used RUN to install a few necessary packages. The modified Dockerfile is as follows:

FROM cuda10.2_pt1.5:09061
RUN apt-get update && apt-get install -y --no-install-recommends \
         build-essential \
         cmake \
         curl \
         ca-certificates \
         libjpeg-dev \
         libpng-dev && \
     rm -rf /var/lib/apt/lists/*
COPY . /workspace
WORKDIR /workspace
CMD ["/root/miniconda3/bin/python","run.py","/input_path","/output_path"]
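
An alternative sketch: the export above did not survive because a shell export is not persisted into image layers, whereas an ENV instruction is. Assuming Miniconda lives at /root/miniconda3, this would let the bare name python resolve again:

FROM cuda10.2_pt1.5:09061
# ENV persists into later layers and into the running container
ENV PATH=/root/miniconda3/bin:$PATH
COPY . /workspace
WORKDIR /workspace
CMD ["python", "run.py", "/input_path", "/output_path"]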

docker: Error response from daemon: driver failed programming external connectivity on endpoint lamp


Problem Description:

Docker: error response from daemon: driver failed programming external connectivity on endpoint lamp

Solution:

The network endpoint that had been defined for the container was left stale when the Docker service started. Restart Docker:

systemctl restart docker
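
After the restart, bring the container back up (the container name lamp comes from the error message):

docker start lamp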


[Solved] Navicat connection error 1251 compatibility with docker MySQL

# Modify the encryption rules
ALTER USER 'root'@'%' IDENTIFIED BY 'password' PASSWORD EXPIRE NEVER;
# Update the user's password with the native plugin
ALTER USER 'root'@'%' IDENTIFIED WITH mysql_native_password BY 'password';
# Refresh privileges
FLUSH PRIVILEGES;

Reconnect with Navicat and the problem is solved.
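
These statements have to be executed inside the containerized MySQL; one way to get a client shell, assuming the container is named mysql (adjust to yours):

docker exec -it mysql mysql -uroot -p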

Nacos Error: server is DOWN now, please try again later! [How to Solve]

Problem: after upgrading and restarting the Docker service, Nacos starts normally and the related configuration is still there. However, applications fail on startup with: server is DOWN now, please try again later!
Checking the access_log.2021-09-02.log shows heartbeat requests with app=unknown being rejected with 503:

10.11.25.205 - - [02/Sep/2021:14:46:30 +0800] "PUT /nacos/v1/ns/instance/beat?app=unknown&namespaceId=41ba0c0d-f13f-48a4-9ebf-7390dac6335c&port=8181&clusterName=DEFAULT&ip=10.11.25.205&serviceName=DEFAULT_GROUP%40%40hb-homepage&encoding=UTF-8 HTTP/1.1" 503 43 37 Nacos-Java-Client:v1.2.1 -
10.1.193.42 - - [02/Sep/2021:14:46:30 +0800] "GET /nacos/v1/cs/configs?dataId=alarm-unified.yml&group=DEFAULT_GROUP HTTP/1.1" 404 22 17 Java/1.8.0_111 -
10.1.193.30 - - [02/Sep/2021:14:46:31 +0800] "PUT /nacos/v1/ns/instance/beat?app=unknown&namespaceId=3a0e1249-fd7c-4fcb-b07f-9c209ce91fce&port=8112&clusterName=DEFAULT&ip=10.1.193.30&serviceName=DEFAULT_GROUP%40%40SFTP&encoding=UTF-8 HTTP/1.1" 503 43 3 Nacos-Java-Client:v1.2.1 -
10.1.193.30 - - [02/Sep/2021:14:46:31 +0800] "PUT /nacos/v1/ns/instance/beat?app=unknown&namespaceId=3a0e1249-fd7c-4fcb-b07f-9c209ce91fce&port=8112&clusterName=DEFAULT&ip=10.1.193.30&serviceName=DEFAULT_GROUP%40%40SFTP&encoding=UTF-8 HTTP/1.1" 503 43 11 Nacos-Java-Client:v1.2.1 -

Reason: the data directory under Nacos stores user-defined configuration information. Within it, the protocol folder caches machine metadata such as the host IP. Because the container service was upgraded, that metadata changed, so the program could not start properly.
Solution:
Delete the carried-over protocol folder and restart Nacos. Done!
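
A sketch of the fix as shell commands; the data path and the container name nacos are assumptions, so adjust them to your volume layout:

# remove the cached cluster metadata, then restart the container
rm -rf nacos/data/protocol
docker restart nacos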