Author Archives: Robins

Docker "Error response from daemon: conflict: unable to delete xxxxx"

Docker reports an error when deleting an image. Running docker images shows the following output:

REPOSITORY                             TAG                        IMAGE ID            CREATED             SIZE
nvidia/cuda                            9.0-base                   74f5aea45cf6        6 weeks ago         134MB
paddlepaddle/paddle                    1.1.0-gpu-cuda8.0-cudnn7   b3cd25f64a2a        8 weeks ago         2.76GB
hub.baidubce.com/paddlepaddle/paddle   1.1.0-gpu-cuda8.0-cudnn7   b3cd25f64a2a        8 weeks ago         2.76GB
paddlepaddle/paddle                    1.1.0-gpu-cuda9.0-cudnn7   0df4fe3ecea3        8 weeks ago         2.89GB
hub.baidubce.com/paddlepaddle/paddle   1.1.0-gpu-cuda9.0-cudnn7   0df4fe3ecea3        8 weeks ago         2.89GB

The first image can be deleted directly with docker rmi 74f5aea45cf6. The latter images, however, come in pairs: two repository names share one image ID, so a plain docker rmi by image ID fails with the following error:

Error response from daemon:
conflict: unable to delete b3cd25f64a2a (must be forced) - image 
is referenced in multiple repositories

Solution:

First run docker rmi with the repository:tag name instead of the image ID, then force-delete the remaining reference with docker rmi -f <image ID>:

docker rmi paddlepaddle/paddle:1.1.0-gpu-cuda8.0-cudnn7
docker rmi -f b3cd25f64a2a

[ONNXRuntimeError] : 10 : INVALID_Graph loading model error

Project scenario:

The PyTorch model is converted to ONNX and exports successfully, but the following error occurs when the model is loaded with onnxruntime:

InvalidGraph: [ONNXRuntimeError] : 10 : INVALID_GRAPH : Load model from T.onnx failed: This is an invalid model. Type Error: Type 'tensor(bool)' of input parameter (8) of operator (ScatterND) in node (ScatterND_15) is invalid.


Problem Description:

import torch
import torch.nn as nn
import onnxruntime
from torch.onnx import export

class Preprocess(nn.Module):
    def __init__(self):
        super().__init__()
        self.max = 1000
        self.min = -44

    def forward(self, inputs):
        inputs[inputs>self.max] = self.max
        inputs[inputs<self.min] = self.min
        return inputs
        
x = torch.randint(-1024,3071,(1,1,28,28))
model = Preprocess()
model.eval()

export(
    model,
    x,
    "test.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=11,
)

session = onnxruntime.InferenceSession("test.onnx")

Cause analysis:

The same problem is reported in PyTorch GitHub issue #34054: the boolean-index assignment is exported as a ScatterND node whose tensor(bool) input onnxruntime rejects.


Solution:

The specific fix is as follows: first generate a mask, then apply torch.masked_fill(), instead of assigning into the input tensor directly by boolean index:

class MaskHigh(nn.Module):
    def __init__(self, val):
        super().__init__()
        self.val = val

    def forward(self, inputs):
        x = inputs.clone()
        mask = x > self.val
        output = torch.masked_fill(inputs, mask, self.val)
        return output


class MaskLow(nn.Module):
    def __init__(self, val):
        super().__init__()
        self.val = val

    def forward(self, inputs):
        x = inputs.clone()
        mask = x < self.val
        output = torch.masked_fill(inputs, mask, self.val)
        return output


class Clip(nn.Module):
    def __init__(self):
        super().__init__()
        self.high = MaskHigh(1300)
        self.low = MaskLow(-44)

    def forward(self, inputs):
        output = self.high(inputs)
        output = self.low(output)
        return output
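The MaskHigh/MaskLow pair above is just an elementwise clamp. As a minimal pure-Python sketch of the intended numeric behavior (no torch; the bounds are the ones used in Clip):

```python
def clip(values, low, high):
    # Mirror MaskLow + MaskHigh: clamp every element into [low, high].
    return [min(max(v, low), high) for v in values]

print(clip([-100, 0, 2000], -44, 1300))  # → [-44, 0, 1300]
```

For what it's worth, torch.clamp expresses the same operation and typically exports as a single ONNX Clip node.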

Netron can be used to visualize the computation graphs generated by the two approaches.

Index assignment

When Jenkins deploys the project, Git reports an error: fatal: index file smaller than expected


Recently, while learning to deploy with Jenkins, my microservice builds kept failing with:

Caused by: hudson.plugins.git.GitException: Command "git fetch --tags --progress https://gitee.com/xxx +refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: fatal: index file smaller than expected

	at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:2450)
	at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandWithCredentials(CliGitAPIImpl.java:2051)
	at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.access$500(CliGitAPIImpl.java:84)
	at org.jenkinsci.plugins.gitclient.CliGitAPIImpl$1.execute(CliGitAPIImpl.java:573)
	at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:994)
	... 11 more
ERROR: Error fetching remote repo 'origin'
Finished: FAILURE

A fix for this Git problem found online (see the linked post) did not work here: because Jenkins pulls the remote repository using the Git deployed on the server, I applied that method and re-pushed, but the error remained.

It turned out that the Jenkins workspace had to be cleaned up first, and then the project pulled again from the remote repository. That solved the problem. As shown in the figure:

The fastest way to solve the error with "from crypto.cipher import AES"

In my own development I needed encryption, but from crypto.cipher import AES kept reporting that the crypto module does not exist. After a few days of searching I finally found a solution.

 

I use Python 3.9

1、First perform the following 2 steps:

1. pip install crypto

2. pip install pycryptodome

2、After the installation completes, either it works directly, or it still prompts that the module cannot be found; in that case:

1. Change crypto to uppercase (the package is imported as Crypto, e.g. from Crypto.Cipher import AES)

 

So far, the problem has been solved

 

Failed to get HBA FCP target mapping in LAN-free backup

Premise: the dataark LAN-free backup software establishes a connection with the client through an FC card.

First execute the following three commands

echo 1 > /sys/class/fc_host/host15/issue_lip
echo '- - -' > /sys/class/scsi_host/host15/scan
/opt/scutech/dbackup3/bin/lsscsi

Then umount the /mnt directory on the client and initiate the backup job.

Error "npm ERR! code ELIFECYCLE" when starting a Vue project

npm and cnpm had been used together before with no problems. Today, starting the Vue project failed with the error npm ERR! code ELIFECYCLE; it was first started with npm run and later changed to cnpm, and neither would run.

After checking around, most answers online say the problem is a broken node_modules installation; the basic fix is to clear the cache and reinstall.

The following steps are summarized:

1、npm cache clean --force

2、rm -rf node_modules

3、rm -rf package-lock.json (optional)

4、npm install

 

 

Error in web.xml file: error while downloading

Complete error message: error while downloading 'http://www.w3.org/2001/xml.xsd' to C:\Users\jarvis5\.lemminx\cache\http\www.w3.org\2001\xml.xsd. Presumably the file download path is wrong. Following the error message to the folder on that path, there are two folders, http and https, as shown in the figure:
Opening both folders, the file named in the error message (www.w3.org/2001/xml.xsd) is found under the https folder, so change http to https in the XML file and the error disappears.
Before modification:

After modification:

Or add an <xml-body> tag, and the error message disappears:

(the content is for reference only, please point out if there is an error!)

The C language qsort() comparison function overflows at -2147483648

Today, in a LeetCode problem that needed the qsort function, a test case failed with the following error:

signed integer overflow: 0 - -2147483648 cannot be represented in type 'int'

The error was in the cmp function. At first I wondered how the values could be out of range; after thinking it over I found the problem.

This was my original cmp function:

int cmp(const void *a,const void *b){
    return (*(const int*)a - *(const int*)b);
}

Here the subtraction overflows: as the error shows, with one operand equal to -2147483648, even 0 - (-2147483648) = 2147483648, which cannot be represented in int.

Solution:

Replace the subtraction with comparisons. Returning (x > y) - (x < y) yields the negative/zero/positive result qsort expects, with no arithmetic that can overflow:

int cmp(const void *a, const void *b){
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

With that it passed smoothly.

There are few questions about this online; maybe it is too basic a mistake. I am posting this to help other beginners like me.
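For reference, the same overflow-safe three-way compare idiom, (x > y) - (x < y), sketched in Python via functools.cmp_to_key (the values below are illustrative):

```python
from functools import cmp_to_key

def cmp(a, b):
    # Three-way compare without subtraction: returns -1, 0, or 1.
    return (a > b) - (a < b)

nums = [3, -2147483648, 0, 2147483647]
print(sorted(nums, key=cmp_to_key(cmp)))  # → [-2147483648, 0, 3, 2147483647]
```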

Docker Nacos deployment uses container name to access 400 bad request

Spring Boot error:

Ignore the empty nacos configuration and get it based on dataId

Curl test error

< HTTP/1.1 400 
< Content-Type: text/html;charset=utf-8
< Content-Language: en
< Content-Length: 435
< Date: Fri, 03 Sep 2021 03:06:16 GMT
< Connection: close
< 
* Closing connection 0
<!doctype html><html lang="en"><head><title>HTTP Status 400 – Bad Request</title><style type="text/css">body {font-family:Tahoma,Arial,sans-serif;} h1, h2, h3, b {color:white;background-color:#525D76;} h1 {font-size:22px;} h2 {font-size:16px;} h3 {font-size:14px;} p {font-size:12px;} a {color:black;} .line {height:1px;background-color:#525D76;border:none;}</style></head><body><h1>HTTP Status 400 – Bad Request</h1></body></html>

Original request address:

http://private_appstore_cloud_nacos:8848

After modifying the container name, the new request address:

http://private-appstore-cloud-nacos:8848

A curl test shows the server returns 400 for the original address and 200 for the new one. The reason is that the Nacos server rejects an HTTP Host header containing characters that are not valid in a domain name: underscores are not allowed in hostnames, while hyphens are.
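The hostname rule can be checked directly. A small sketch of the RFC 1123 label grammar (letters, digits, and interior hyphens only, so underscores fail), assuming single-label container names:

```python
import re

# RFC 1123 hostname label: alphanumeric, hyphens allowed inside, no underscores.
LABEL = re.compile(r"^[A-Za-z0-9]([A-Za-z0-9-]*[A-Za-z0-9])?$")

def valid_label(name):
    return LABEL.match(name) is not None

print(valid_label("private_appstore_cloud_nacos"))  # → False
print(valid_label("private-appstore-cloud-nacos"))  # → True
```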

An error CommandNotFoundError ("To initialize your shell") is reported when activating the virtual environment

Error when activating the virtual environment:

PS E:\projects\Text2Scene_v3> conda activate base

CommandNotFoundError: Your shell has not been properly configured to use 'conda activate'.
If using 'conda activate' from a batch script, change your
invocation to 'CALL conda.bat activate'.

To initialize your shell, run

    $ conda init <SHELL_NAME>

Currently supported shells are:
  - bash
  - cmd.exe
  - fish
  - tcsh
  - xonsh
  - powershell

See 'conda init --help' for more information and options.

IMPORTANT: You may need to close and restart your shell after running 'conda init'.

Windows 10 + VS Code.
After trying everything found online, none of it helped.
Later, wondering why the prompt used to show (base) before PS and now did not, I simply changed the terminal from PowerShell to cmd, and it worked, emmmmm~~

Mistakes vary from person to person. It may take a lot of unnecessary time to solve them, but patience will always get there, rush!
If this doesn't solve it for you, go look at someone else's post~~