Tag Archives: docker

Docker run: Error response from daemon: OCI runtime create failed: container with id exists: XXX: unknown

Environment:
Ubuntu 18.04
Docker

Problem description:
After an abnormal system restart, starting the Docker container prompts the following error:

Here 8adf is the first four characters of my container ID.
Solutions:
1. Run:

 find / -name "8adfcb497827c65a7c8c05dc745f206af2679f2333112570124e8d1af581a7fe"

where 8adfcb497827c65a7c8c05dc745f206af2679f2333112570124e8d1af581a7fe is the full container ID. The results are as follows:

2. Delete the following directory:

sudo rm -rf /var/run/docker/runtime-runc/moby/8adfcb497827c65a7c8c05dc745f206af2679f2333112570124e8d1af581a7fe/

3. Restart the container

docker start 8adf

done
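
The same steps can be strung together; a minimal sketch, assuming the Docker daemon is running (docker inspect resolves the short ID to the full one):

# resolve the short ID to the full container ID
CID=$(docker inspect --format '{{.Id}}' 8adf)
# remove the stale runc state left over from the abnormal restart
sudo rm -rf "/var/run/docker/runtime-runc/moby/${CID}/"
# start the container again
docker start 8adf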

Modifying the Tomcat configuration in Docker causes the javaagent to report "agent library failed to init: instrument"

A project in a Docker Tomcat container needed its javaagent parameter changed. The original modification was to add the following to JAVA_OPTS in catalina.sh:

export JAVA_OPTS="$JAVA_OPTS -javaagent:aspectjweaver-1.8.13.jar"

After restarting the container, it errored out: the jar could not be found.
It turns out the path after -javaagent: must be a full path; relative paths cannot be used.
 
Change it to:

export JAVA_OPTS="$JAVA_OPTS -javaagent:/usr/local/tomcat/bin/aspectjweaver-1.8.13.jar"
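
If the jar is not inside the container yet, one way to get it there is docker cp; a hedged sketch (the container name tomcat is an assumption):

# copy the agent jar into the container, then restart so catalina.sh picks it up
docker cp aspectjweaver-1.8.13.jar tomcat:/usr/local/tomcat/bin/
docker restart tomcat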
 

Job for docker.service failed because the control process exited with error code. See "systemctl status docker.service" and "journalctl -xe" for details

Docker startup error:

[root@iz8vb4rhbik3h93v48ztfvz docker]# systemctl restart docker.service
Job for docker.service failed because the control process exited with error code. See "systemctl status docker.service" and "journalctl -xe" for details.
[root@iz8vb4rhbik3h93v48ztfvz docker]# systemctl stop docker
[root@iz8vb4rhbik3h93v48ztfvz docker]# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: failed (Result: start-limit) since Mon 2020-06-15 23:52:04 CST;
     Docs: https://docs.docker.com
  Process: 9466 ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock (code=exited, status=1/FAILURE)
 Main PID: 9466 (code=exited, status=1/FAILURE)

Jun 15 23:52:02 iz8vb4rhbik3h93v48ztfvz systemd[1]: Failed to start Docker Application Container Engine.
Jun 15 23:52:02 iz8vb4rhbik3h93v48ztfvz systemd[1]: Unit docker.service entered failed state.
Jun 15 23:52:02 iz8vb4rhbik3h93v48ztfvz systemd[1]: docker.service failed.
Jun 15 23:52:04 iz8vb4rhbik3h93v48ztfvz systemd[1]: docker.service holdoff time over, scheduling restart.
Jun 15 23:52:04 iz8vb4rhbik3h93v48ztfvz systemd[1]: Stopped Docker Application Container Engine.
Jun 15 23:52:04 iz8vb4rhbik3h93v48ztfvz systemd[1]: start request repeated too quickly for docker.service
Jun 15 23:52:04 iz8vb4rhbik3h93v48ztfvz systemd[1]: Failed to start Docker Application Container Engine.
Jun 15 23:52:04 iz8vb4rhbik3h93v48ztfvz systemd[1]: Unit docker.service entered failed state.
Jun 15 23:52:04 iz8vb4rhbik3h93v48ztfvz systemd[1]: docker.service failed.
[root@iz8vb4rhbik3h93v48ztfvz docker]# systemctl start docker
Job for docker.service failed because start of the service was attempted too often. See "systemctl status docker.service" and "journalctl -xe" for details.
To force a start use "systemctl reset-failed docker.service" followed by "systemctl start docker.service" again.

The cause is that /etc/docker/daemon.json had been modified and contained a bad configuration. After moving the file aside with mv, Docker starts normally.

 

[root@iz8vb4rhbik3h93v48ztfvz docker]# mv daemon.json daemon.conf
[root@iz8vb4rhbik3h93v48ztfvz docker]# systemctl start docker
[root@iz8vb4rhbik3h93v48ztfvz docker]# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2020-06-15 23:53:59 CST; 4s ago
     Docs: https://docs.docker.com
 Main PID: 9510 (dockerd)
    Tasks: 10
   Memory: 50.6M
   CGroup: /system.slice/docker.service
           └─9510 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

Jun 15 23:53:58 iz8vb4rhbik3h93v48ztfvz dockerd[9510]: time="2020-06-15T23:53:58.554833629+08:00" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/contain...module=grpc
Jun 15 23:53:58 iz8vb4rhbik3h93v48ztfvz dockerd[9510]: time="2020-06-15T23:53:58.554857738+08:00" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jun 15 23:53:58 iz8vb4rhbik3h93v48ztfvz dockerd[9510]: time="2020-06-15T23:53:58.570959343+08:00" level=info msg="[graphdriver] using prior storage driver: overlay2"
Jun 15 23:53:58 iz8vb4rhbik3h93v48ztfvz dockerd[9510]: time="2020-06-15T23:53:58.898909596+08:00" level=info msg="Loading containers: start."
Jun 15 23:53:59 iz8vb4rhbik3h93v48ztfvz dockerd[9510]: time="2020-06-15T23:53:59.152882347+08:00" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17....IP address"
Jun 15 23:53:59 iz8vb4rhbik3h93v48ztfvz dockerd[9510]: time="2020-06-15T23:53:59.245906513+08:00" level=info msg="Loading containers: done."
Jun 15 23:53:59 iz8vb4rhbik3h93v48ztfvz dockerd[9510]: time="2020-06-15T23:53:59.283200393+08:00" level=info msg="Docker daemon" commit=afacb8b graphdriver(s)=overlay2 version=19.03.8
Jun 15 23:53:59 iz8vb4rhbik3h93v48ztfvz dockerd[9510]: time="2020-06-15T23:53:59.289824517+08:00" level=info msg="Daemon has completed initialization"
Jun 15 23:53:59 iz8vb4rhbik3h93v48ztfvz dockerd[9510]: time="2020-06-15T23:53:59.320315408+08:00" level=info msg="API listen on /var/run/docker.sock"
Jun 15 23:53:59 iz8vb4rhbik3h93v48ztfvz systemd[1]: Started Docker Application Container Engine.
Hint: Some lines were ellipsized, use -l to show in full.
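
Instead of renaming the file, you can also verify that daemon.json parses as valid JSON before restarting; a minimal check, assuming python3 is installed:

# exits non-zero and points at the error position if the JSON is malformed
python3 -m json.tool /etc/docker/daemon.json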


Failed to start docker.service: Unit not found

Failed to start docker.service: Unit not found (the docker service cannot be started)


Deploying Docker on Linux produces: Failed to start docker.service: Unit not found

Fix:

1. Run yum update
2. Restart Linux afterwards (do not skip this)
3. Run yum install docker
4. Run systemctl start docker.service
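
If yum still cannot find a docker package, an alternative worth noting (this is Docker's documented CE install route, not part of the original fix) is to install docker-ce from Docker's own repository:

sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install -y docker-ce
sudo systemctl enable --now docker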

Docker start error: Failed to start Docker Application Container Engine.

Contents

Phenomenon: docker does not start
Check docker status: failed
Manually start dockerd
Check daemon.json
Modify daemon.json
Start the daemon and docker with systemctl
Why does "ipv6": true cause docker to fail to start normally?


Phenomenon: docker does not start

Check whether the elasticsearch service exists:

[root@warehouse00 ~]# docker ps|grep elastic
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

This indicates that Docker is not running.

Check docker status; it shows failed:

[root@warehouse00 ~]# service docker status
Redirecting to /bin/systemctl status docker.service
● docker.service - Docker Application Container Engine
   Loaded: loaded (/etc/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: failed (Result: start-limit) since Mon 2020-06-15 06:30:16 UTC; 1 weeks 1 days ago
     Docs: https://docs.docker.com
  Process: 19410 ExecStart=/usr/bin/dockerd -H fd:// (code=exited, status=1/FAILURE)
 Main PID: 19410 (code=exited, status=1/FAILURE)

Jun 15 06:30:16 warehouse00 systemd[1]: docker.service holdoff time over, scheduling restart.
Jun 15 06:30:16 warehouse00 systemd[1]: Stopped Docker Application Container Engine.
Jun 15 06:30:16 warehouse00 systemd[1]: start request repeated too quickly for docker.service
Jun 15 06:30:16 warehouse00 systemd[1]: Failed to start Docker Application Container Engine.
Jun 15 06:30:16 warehouse00 systemd[1]: Unit docker.service entered failed state.
Jun 15 06:30:16 warehouse00 systemd[1]: docker.service failed.
Jun 15 06:30:33 warehouse00 systemd[1]: start request repeated too quickly for docker.service
Jun 15 06:30:33 warehouse00 systemd[1]: Failed to start Docker Application Container Engine.
Jun 15 06:30:33 warehouse00 systemd[1]: docker.service failed.
Jun 23 07:45:16 warehouse00 systemd[1]: Unit docker.service cannot be reloaded because it is inactive.

Manually start dockerd:

[root@warehouse00 ~]# dockerd
INFO[2020-06-23T07:46:26.609656620Z] Starting up                                  
INFO[2020-06-23T07:46:26.615956282Z] libcontainerd: started new containerd process  pid=1802
INFO[2020-06-23T07:46:26.616133833Z] parsed scheme: "unix"                         module=grpc
INFO[2020-06-23T07:46:26.616167280Z] scheme "unix" not registered, fallback to default scheme  module=grpc
INFO[2020-06-23T07:46:26.616225318Z] ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}  module=grpc
INFO[2020-06-23T07:46:26.616255985Z] ClientConn switching balancer to "pick_first"  module=grpc
INFO[2020-06-23T07:46:26.665833586Z] starting containerd                           revision=b34a5c8af56e510852c35414db4c1f4fa6172339 version=v1.2.10
INFO[2020-06-23T07:46:26.667139004Z] loading plugin "io.containerd.content.v1.content"...  type=io.containerd.content.v1
INFO[2020-06-23T07:46:26.667283454Z] loading plugin "io.containerd.snapshotter.v1.btrfs"...  type=io.containerd.snapshotter.v1
WARN[2020-06-23T07:46:26.667700679Z] failed to load plugin io.containerd.snapshotter.v1.btrfs  error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
INFO[2020-06-23T07:46:26.667751093Z] loading plugin "io.containerd.snapshotter.v1.aufs"...  type=io.containerd.snapshotter.v1
WARN[2020-06-23T07:46:26.672276961Z] failed to load plugin io.containerd.snapshotter.v1.aufs  error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found.\n": exit status 1"
INFO[2020-06-23T07:46:26.672329642Z] loading plugin "io.containerd.snapshotter.v1.native"...  type=io.containerd.snapshotter.v1
INFO[2020-06-23T07:46:26.672396890Z] loading plugin "io.containerd.snapshotter.v1.overlayfs"...  type=io.containerd.snapshotter.v1
INFO[2020-06-23T07:46:26.672604691Z] loading plugin "io.containerd.snapshotter.v1.zfs"...  type=io.containerd.snapshotter.v1
INFO[2020-06-23T07:46:26.673060327Z] skip loading plugin "io.containerd.snapshotter.v1.zfs"...  type=io.containerd.snapshotter.v1
INFO[2020-06-23T07:46:26.673097387Z] loading plugin "io.containerd.metadata.v1.bolt"...  type=io.containerd.metadata.v1
WARN[2020-06-23T07:46:26.673137831Z] could not use snapshotter btrfs in metadata plugin  error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
WARN[2020-06-23T07:46:26.673161222Z] could not use snapshotter aufs in metadata plugin  error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found.\n": exit status 1"
WARN[2020-06-23T07:46:26.673185176Z] could not use snapshotter zfs in metadata plugin  error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin"
INFO[2020-06-23T07:46:26.673528926Z] loading plugin "io.containerd.differ.v1.walking"...  type=io.containerd.differ.v1
INFO[2020-06-23T07:46:26.673583123Z] loading plugin "io.containerd.gc.v1.scheduler"...  type=io.containerd.gc.v1
INFO[2020-06-23T07:46:26.673692715Z] loading plugin "io.containerd.service.v1.containers-service"...  type=io.containerd.service.v1
INFO[2020-06-23T07:46:26.673736540Z] loading plugin "io.containerd.service.v1.content-service"...  type=io.containerd.service.v1
INFO[2020-06-23T07:46:26.673769457Z] loading plugin "io.containerd.service.v1.diff-service"...  type=io.containerd.service.v1
INFO[2020-06-23T07:46:26.673804631Z] loading plugin "io.containerd.service.v1.images-service"...  type=io.containerd.service.v1
INFO[2020-06-23T07:46:26.673839842Z] loading plugin "io.containerd.service.v1.leases-service"...  type=io.containerd.service.v1
INFO[2020-06-23T07:46:26.673903483Z] loading plugin "io.containerd.service.v1.namespaces-service"...  type=io.containerd.service.v1
INFO[2020-06-23T07:46:26.673943174Z] loading plugin "io.containerd.service.v1.snapshots-service"...  type=io.containerd.service.v1
INFO[2020-06-23T07:46:26.673979791Z] loading plugin "io.containerd.runtime.v1.linux"...  type=io.containerd.runtime.v1
INFO[2020-06-23T07:46:26.674145534Z] loading plugin "io.containerd.runtime.v2.task"...  type=io.containerd.runtime.v2
INFO[2020-06-23T07:46:26.674279834Z] loading plugin "io.containerd.monitor.v1.cgroups"...  type=io.containerd.monitor.v1
INFO[2020-06-23T07:46:26.675734289Z] loading plugin "io.containerd.service.v1.tasks-service"...  type=io.containerd.service.v1
INFO[2020-06-23T07:46:26.675851914Z] loading plugin "io.containerd.internal.v1.restart"...  type=io.containerd.internal.v1
INFO[2020-06-23T07:46:26.676034068Z] loading plugin "io.containerd.grpc.v1.containers"...  type=io.containerd.grpc.v1
INFO[2020-06-23T07:46:26.676081089Z] loading plugin "io.containerd.grpc.v1.content"...  type=io.containerd.grpc.v1
INFO[2020-06-23T07:46:26.676117009Z] loading plugin "io.containerd.grpc.v1.diff"...  type=io.containerd.grpc.v1
INFO[2020-06-23T07:46:26.676149377Z] loading plugin "io.containerd.grpc.v1.events"...  type=io.containerd.grpc.v1
INFO[2020-06-23T07:46:26.676180874Z] loading plugin "io.containerd.grpc.v1.healthcheck"...  type=io.containerd.grpc.v1
INFO[2020-06-23T07:46:26.676214131Z] loading plugin "io.containerd.grpc.v1.images"...  type=io.containerd.grpc.v1
INFO[2020-06-23T07:46:26.676245682Z] loading plugin "io.containerd.grpc.v1.leases"...  type=io.containerd.grpc.v1
INFO[2020-06-23T07:46:26.676284266Z] loading plugin "io.containerd.grpc.v1.namespaces"...  type=io.containerd.grpc.v1
INFO[2020-06-23T07:46:26.676316443Z] loading plugin "io.containerd.internal.v1.opt"...  type=io.containerd.internal.v1
INFO[2020-06-23T07:46:26.676440092Z] loading plugin "io.containerd.grpc.v1.snapshots"...  type=io.containerd.grpc.v1
INFO[2020-06-23T07:46:26.676482597Z] loading plugin "io.containerd.grpc.v1.tasks"...  type=io.containerd.grpc.v1
INFO[2020-06-23T07:46:26.676514017Z] loading plugin "io.containerd.grpc.v1.version"...  type=io.containerd.grpc.v1
INFO[2020-06-23T07:46:26.676545815Z] loading plugin "io.containerd.grpc.v1.introspection"...  type=io.containerd.grpc.v1
INFO[2020-06-23T07:46:26.677125349Z] serving...                                    address="/var/run/docker/containerd/containerd-debug.sock"
INFO[2020-06-23T07:46:26.677301919Z] serving...                                    address="/var/run/docker/containerd/containerd.sock"
INFO[2020-06-23T07:46:26.677337263Z] containerd successfully booted in 0.013476s  

INFO[2020-06-23T07:46:26.693415067Z] parsed scheme: "unix"                         module=grpc
INFO[2020-06-23T07:46:26.693481428Z] scheme "unix" not registered, fallback to default scheme  module=grpc
INFO[2020-06-23T07:46:26.693528406Z] ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}  module=grpc
INFO[2020-06-23T07:46:26.693554473Z] ClientConn switching balancer to "pick_first"  module=grpc
INFO[2020-06-23T07:46:26.695436116Z] parsed scheme: "unix"                         module=grpc
INFO[2020-06-23T07:46:26.695494014Z] scheme "unix" not registered, fallback to default scheme  module=grpc
INFO[2020-06-23T07:46:26.695532298Z] ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}  module=grpc
INFO[2020-06-23T07:46:26.695558869Z] ClientConn switching balancer to "pick_first"  module=grpc
INFO[2020-06-23T07:46:26.747479158Z] Loading containers: start.                   
INFO[2020-06-23T07:46:26.887145110Z] stopping event stream following graceful shutdown  error="<nil>" module=libcontainerd namespace=moby
INFO[2020-06-23T07:46:26.887630706Z] stopping healthcheck following graceful shutdown  module=libcontainerd
INFO[2020-06-23T07:46:26.887672287Z] stopping event stream following graceful shutdown  error="context canceled" module=libcontainerd namespace=plugins.moby
failed to start daemon: Error initializing network controller: Error creating default "bridge" network: could not find an available, non-overlapping IPv6 address pool among the defaults to assign to the network

The last line reveals the problem: daemon.json has a bad IPv6 configuration.

Check daemon.json

[root@warehouse00 ~]# cat /etc/docker/daemon.json 
{
  "insecure-registries": ["registry.local", "127.0.0.1:5001", "10.10.13.42:5000"],
  "registry-mirrors": ["http://mirror.local"],
  "bip": "172.18.18.1/24",
  "data-root": "/var/lib/docker",
  "ipv6": true,
  "storage-driver": "overlay2",
  "live-restore": true,
  "log-opts": {
    "max-size": "500m"
  }
}

Modify daemon.json

Delete the "ipv6": true entry.

Manually start the daemon to validate: with "ipv6": true removed, the problem no longer reproduces.

Reloading docker errors out (reload only works on a running service): Job for docker.service invalid.

[root@warehouse00 ~]# systemctl reload docker
Job for docker.service invalid.

Start the daemon and docker with systemctl:

// start the docker daemon
$ sudo systemctl start docker
 
// start the docker service
$ sudo service docker start

Problem solved.

Why does "ipv6": true cause docker to fail to start normally?

Related issues:
https://github.com/moby/moby/issues/36954
https://github.com/moby/moby/issues/29443
https://github.com/moby/moby/issues/29386

Either disable IPv6 (set "ipv6": false or delete the "ipv6": true entry), or keep IPv6 and also configure "fixed-cidr-v6".
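
When keeping IPv6 enabled, dockerd needs an IPv6 subnet to assign to the default bridge, which is what the "non-overlapping IPv6 address pool" error is complaining about. A minimal daemon.json sketch (the fd00:... prefix is only an example ULA subnet; choose your own):

{
  "ipv6": true,
  "fixed-cidr-v6": "fd00:dead:beef::/48"
}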

Failed to get D-Bus connection: No such file or directory

When running a centos7 image in Docker, this error appears:

[root@2181bc14e47f /]# systemctl list-units
Failed to get D-Bus connection: No such file or directory
D-Bus lets a program register itself to provide services to other programs, and lets client programs query which services are available. Applications can also register to wait for kernel events such as hardware hotplug. It is a three-layer inter-process communication (IPC) system consisting of: the libdbus library, which two applications use to connect and exchange messages; a message bus daemon built on libdbus, which connects to many applications at once and routes messages from one application to zero or more others; and a set of wrapper libraries for specific application frameworks. D-Bus is designed for two scenarios: communication between desktop applications within the same desktop session, so the session can be integrated as a whole and process-lifecycle problems can be solved; and communication between the desktop session and the operating system, where "operating system" generally includes the kernel and system daemons.
After exiting the container, run it again with systemd as init:

$ docker run --privileged -ti -e "container=docker" centos7-base /usr/sbin/init
About --privileged: with this flag, root inside the container has real root privileges; without it, root in the container only amounts to an ordinary external user's privileges. A privileged container can see many of the host's devices, can run mount, and can even start a Docker container inside a Docker container. -e "container=docker" sets an environment variable that processes inside the container can read directly.
The systemctl command can now be executed:

[root@bd5aa199dbc9 ~]# systemctl status sshd
● sshd.service - OpenSSH server daemon
   Loaded: loaded (/usr/lib/systemd/system/sshd.service; enabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:sshd(8)
           man:sshd_config(5)

Solve the problem of COPY failed: stat /var/lib/docker/tmp/docker-builder455335933/opt: no such file or directory

1. Error reporting

2. Dockerfile file

3. The cause

In a Dockerfile, only lines beginning with # are treated as comments. If a # comment is placed after a valid instruction on the same line, it is treated as an argument to that instruction, which causes the error.

4. Solution: remove the comment, or put the comment on its own line.
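
A hedged illustration in Dockerfile form (app.jar and the paths are invented for the example):

# Broken: the text after the instruction is parsed as extra COPY arguments, not a comment
COPY app.jar /opt/app.jar # copy the application into the image

# Fixed: the comment lives on its own line
# copy the application into the image
COPY app.jar /opt/app.jar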

An error occurred trying to connect: Get http:///var/run/docker.sock/v1.21/containers/json?all

Starting the container gives the error: An error occurred trying to connect: Get http:///var/run/docker.sock/v1.21/containers/json?all

A three-step fix found by searching online:

  1. Stop the docker daemon
  2. sudo rm /var/lib/docker/network/files/local-kv.db
  3. Start the docker daemon

That, however, failed to solve my problem.

Viewing the docker status shows a UUID mismatch:

    qian:/var/lib/docker/devicemapper/devicemapper # systemctl status docker -l
    docker.service - Docker Application Container Engine
       Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled)
       Active: failed (Result: exit-code) since Fri 2017-03-31 10:15:21 EDT; 45s ago
         Docs: http://docs.docker.com
      Process: 34204 ExecStart=/usr/bin/docker daemon -H fd:// $DOCKER_OPTS (code=exited, status=1/FAILURE)
     Main PID: 34204 (code=exited, status=1/FAILURE)
    
    Mar 31 10:15:21 qian docker[34204]: time="2017-03-31T10:15:21.084102345-04:00" level=warning msg="devmapper: Usage of loopback devices is strongly discouraged for production use. Please use `--storage-opt dm.thinpooldev` or use `man docker` to refer to dm.thinpooldev section."
    Mar 31 10:15:21 qian docker[34204]: time="2017-03-31T10:15:21.098182020-04:00" level=error msg="[graphdriver] prior storage driver \"devicemapper\" failed: devmapper: Base Device UUID and Filesystem verification failed.devmapper: Current Base Device UUID: does not match with stored UUID:ecfee58a-4aef-41e5-888b-e1588b90f325. Possibly using a different thin pool than last invocation"
    Mar 31 10:15:21 qian docker[34204]: time="2017-03-31T10:15:21.098304502-04:00" level=fatal msg="Error starting daemon: error initializing graphdriver: devmapper: Base Device UUID and Filesystem verification failed.devmapper: Current Base Device UUID: does not match with stored UUID:ecfee58a-4aef-41e5-888b-e1588b90f325. Possibly using a different thin pool than last invocation"

Looking at the disk UUIDs shows fewer devices (no dm-0 or similar) than on a normal node:

    qian:/var/lib/docker/devicemapper/devicemapper # ls -al /dev/disk/by-uuid/
    total 0
    drwxr-xr-x 2 root root 100 Mar 30 19:13 .
    drwxr-xr-x 5 root root 100 Mar 14 09:56 ..
    lrwxrwxrwx 1 root root  10 Mar 14 09:56 ddaefbae-cc5e-4579-b2dc-24340323ccde -> ../../sda2
    lrwxrwxrwx 1 root root  10 Mar 14 09:56 eabdb089-f65f-4150-ac14-7e2deaca1d85 -> ../../sda1
    lrwxrwxrwx 1 root root  10 Mar 14 09:56 ecfee58a-4aef-41e5-888b-e1588b90f325 -> ../../sda3

Checking the container pool device's ID shows it differs from normal: a normal node reports a UUID, but mine reports a PTUUID:

    qian:/var/lib/docker/devicemapper/devicemapper # blkid /dev/mapper/docker-8\:3-126130724-pool 
    /dev/mapper/docker-8:3-126130724-pool: PTUUID="d541fadc" PTTYPE="dos" 

At this point there was no way to dig further into the cause, but I could confirm that the disk UUID associated with the container really does not exist on my node.

The UUID Docker has associated can be seen in the /var/lib/docker/devicemapper/metadata/deviceset-metadata file.

My final solution was to delete the Docker-related directories /etc/docker and /var/lib/docker and reinstall Docker…
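
As a destructive last-resort sketch (this wipes all local images, containers, and volumes; the package name docker is an assumption, adjust to your distro):

# stop the daemon, remove Docker's config and state, then reinstall
sudo systemctl stop docker
sudo rm -rf /etc/docker /var/lib/docker
sudo yum reinstall -y docker
sudo systemctl start docker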

ERROR: Windows named pipe error: The system cannot find the file specified. (code:2)

Running docker-compose.exe up -d returns the following error:

ERROR: Windows named pipe error: The system cannot find the file specified. (code:2)

Solution:

1. Check the status of Docker for Windows; it should be Running.

2. In Task Manager, under Performance, check Virtualization; if it is not Enabled, enable it.

To change the Virtualization setting, refer to this link: https://zhidao.baidu.com/question/339098469.html

3. Open a CMD command window in the directory containing docker-compose.yml, and then run:

docker-compose.exe up -d
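
Before retrying, a quick hedged check that the client can actually reach the daemon over the named pipe:

REM prints daemon details when the pipe is up; otherwise fails with the same pipe error
docker info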

error: Failed to connect to 127.0.0.1:27017, reason: errno:111 Connection refused; error: couldn't …

The error:

MongoDB shell version: 2.6.10
connecting to: test
2020-01-06T19:04:40.945+0800 warning: Failed to connect to 127.0.0.1:27017, reason: errno:111 Connection refused
2020-01-06T19:04:40.946+0800 Error: couldn't connect to server 127.0.0.1:27017 (127.0.0.1), connection attempt failed at src/mongo/shell/mongo.js:146
I was working in a virtual machine and had downloaded mongo; trying to get into the mongo shell produced the error above. There is a lot about it online, and I tried many of the suggested fixes, but none worked. Later I found the cause: I had never actually entered the Docker container (I had only just started working with images). Inside the virtual machine, the steps are:

Step 1: switch to root with su, then list all local images with docker images.

Step 2: if there is no mongo image, pull the official one with docker pull mongo; if you already have it, go straight to step 3.

Step 3: start the container: docker run -dit --name mymongodb -p 27017:27017 mongo



Step 4: enter the container with docker exec -it mymongodb bash, then type mongo.
Once the mongo shell prompt appears, you are inside mongo and can enter command statements directly.
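
The whole sequence as one hedged sketch (container name mymongodb as in the steps above):

# pull the image, run it detached, then open a mongo shell inside the container
docker pull mongo
docker run -dit --name mymongodb -p 27017:27017 mongo
docker exec -it mymongodb mongo   # connects to 127.0.0.1:27017 inside the container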