
[Solved] Error: Another program is already listening on a port that one of our HTTP servers is configured to

Question

Step 1: Run
supervisord -c /etc/supervisor/supervisord.conf

Error

Error: Another program is already listening on a port that one of our HTTP servers is configured to

Solution:

ps -ef | grep supervisord

Kill the process (PID 8079 in the ps output above):

kill -s SIGTERM 8079  
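If you prefer not to copy the PID by hand, a one-line alternative (a sketch; pgrep -x matches the exact process name, so adjust it if your supervisord runs under a different name):

# find the running supervisord by name and send it SIGTERM
kill -s SIGTERM "$(pgrep -x supervisord)"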

Result

supervisord -c /etc/supervisor/supervisord.conf

If no error is reported, the process has started successfully.
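To double-check that the daemon is actually up, you can also query it with supervisorctl (assuming the same config path as above):

supervisorctl -c /etc/supervisor/supervisord.conf status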

[Solved] Playbook Start Nginx Error: Unable to start service nginx: Job for nginx.service failed with error code

The error information is as follows:

TASK [start nginx service] *******************************************************************************************
fatal: [192.168.126.129]: FAILED! => {"changed": false, "msg": "Unable to start service nginx: Job for nginx.service failed with error code. See \"systemctl status nginx.service\" and \"journalctl -xe\" for details.\n"}

The cause is that the user/group I created for Nginx in the playbook was a custom (user-defined) name rather than the one Nginx expects.

Change it to nginx and the service starts normally.
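Before editing the playbook, you can confirm the mismatch on the target host. A quick check (assuming the default /etc/nginx/nginx.conf path; adjust if yours differs):

# show which user the nginx config expects, then verify that user exists
grep -E '^\s*user' /etc/nginx/nginx.conf
id nginx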

[Solved] Electron Error: Error: Electron failed to install correctly, please delete node_modules/electron and try

When building an Electron + Vue project on Windows 10, running npm install reports the following error:

Error: Electron failed to install correctly, please delete node_modules/electron and try installing again

Solution:

From the command line, enter the project's node_modules/electron directory and run node install.js, as shown below.
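From the project root:

cd node_modules/electron
node install.js

Once install.js finishes downloading the Electron binary, re-run the original npm command and the error should be gone.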

[Solved] Error: Flash Download failed - "Cortex-M3"

I encountered this problem after reinstalling Keil; everything had worked before. My chip selection, memory capacity, and debug settings were all correct, so the fix was to delete the chip device pack instead.

The path is generally C:\Keil_v5\ARM\PACK\Keil\STM32F1xx_DFP. Delete the 2.2.0 version there, go back to the previously downloaded 1.0.5 version, and just restart Keil 5 (for reference only).

Flutter Upgrade SDK SSL_ERROR_SYSCALL Error [How to Solve]

Execute the following command when upgrading Flutter:

flutter upgrade

Unfortunately, GitHub cannot be reached for some reason, and the following error is reported:

Standard error: fatal: unable to access
'https://github.com/flutter/flutter.git/': LibreSSL SSL_connect:
SSL_ERROR_SYSCALL in connection to github.com:443

You can configure Git to use a local proxy. The commands are as follows:

git config --global http.proxy http://127.0.0.1:1080
git config --global https.proxy http://127.0.0.1:1080

Note that the port can be changed to match your own proxy configuration, and set the HTTPS proxy only if you need it.
If you want to cancel the proxy settings, you can use the following commands:

git config --global --unset http.proxy 
git config --global --unset https.proxy

After setting the proxy, flutter upgrade works.
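To verify what is currently configured, you can read the values back (a quick check of my own, not from the original post):

git config --global --get http.proxy
git config --global --get https.proxy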

The git config --global commands add the following entries to the ~/.gitconfig file:

[http]
	proxy = http://127.0.0.1:1080
[https]
	proxy = http://127.0.0.1:1080

Note:
there is another way using the command below. It has not been tested yet; I am saving it here for reference:

git config --global --add remote.origin.proxy

[Solved] Brew Install ffmpeg Error: tar: Error opening archive: Failed to open

Error Message:

tar: Error opening archive: Failed to open '/Users/edy/Library/Caches/Homebrew/downloads/4a1df878f9549839794e9466cff829ff77e4e90d33b6ee3119051ec2590f8780--unbound-1.13.1.big_sur.bottle.tar.gz'

tar: Error opening archive: Failed to open '/Users/edy/Library/Caches/Homebrew/downloads/ada1b84732f9bc165c3198c7180c846cc5d8e3e40f5d4b5b16c58d5ad770a56d--harfbuzz-2.8.2.big_sur.bottle.tar.gz'

tar: Error opening archive: Failed to open '/Users/edy/Library/Caches/Homebrew/downloads/49ceb5675427d6353db6ece526f416910cb5be2e7309f670a29ea845d19a0410--sdl2-2.0.14_1.big_sur.bottle.tar.gz'

Solution:

Install the failing bottles individually:

brew install unbound
brew install harfbuzz
brew install sdl2
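If the individual installs still fail to open the archives, a common workaround (my addition, not from the original post) is to delete the corrupted cached bottles and retry:

rm -rf ~/Library/Caches/Homebrew/downloads
brew install ffmpeg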

cv2.dnn Read Model Error [How to Solve]

cv2.dnn reports the following error when reading the model:

D:\ProgramData\Miniconda3\python.exe D:/project/detect/face/yolov5-face-landmarks-opencv/main_new.py
[ERROR:0] global C:\Users\appveyor\AppData\Local\Temp\1\pip-req-build-oduouqig\opencv\modules\dnn\src\onnx\onnx_importer.cpp (1878) cv::dnn::dnn4_v20201117::ONNXImporter::handleNode DNN/ONNX: ERROR during processing node with 2 inputs and 1 outputs: [Add]:(430)
Traceback (most recent call last):
  File "D:/project/detect/face/yolov5-face-landmarks-opencv/main_new.py", line 126, in <module>
    yolonet = yolov5(confThreshold=args.confThreshold, nmsThreshold=args.nmsThreshold, objThreshold=args.objThreshold)
  File "D:/project/detect/face/yolov5-face-landmarks-opencv/main_new.py", line 23, in __init__
    self.net = cv2.dnn.readNet(r'D:\project\detect\face\yolov5-face-master\yolov5n-face.onnx')
cv2.error: OpenCV(4.5.1) C:\Users\appveyor\AppData\Local\Temp\1\pip-req-build-oduouqig\opencv\modules\dnn\src\onnx\onnx_importer.cpp:1887: error: (-2:Unspecified error) in function 'cv::dnn::dnn4_v20201117::ONNXImporter::handleNode'
> Node [Add]:(430) parse error: OpenCV(4.5.1) C:\Users\appveyor\AppData\Local\Temp\1\pip-req-build-oduouqig\opencv\modules\dnn\src\onnx\onnx_importer.cpp:780: error: (-215:Assertion failed) blob_0.size == blob_1.size in function 'cv::dnn::dnn4_v20201117::ONNXImporter::handleNode'
>
Process finished with exit code 1

Reason:
The environment is OpenCV 4.5.1 with torch 1.9.0. An ONNX model exported by torch 1.9.0 is not supported by OpenCV 4.5.1.

Solution:
Downgrade torch to version 1.7.1 and torchvision to 0.8.2, then re-export the ONNX model; other versions will report an error.
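The downgrade itself is a single pip command (assuming a pip-managed environment; pick a CUDA-specific build instead if you need GPU support):

pip install torch==1.7.1 torchvision==0.8.2

Then re-export the .onnx file with the downgraded torch before loading it with cv2.dnn.readNet.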

UiBot Database Connection Not Recognized: Object Definition Error

When defining the database connection, the connection object is quite long, so I define the object separately first and then call database.createdb();

When splitting the object out, some re-formatting is done for simplicity and readability, and this is where errors can creep in. The valid and invalid layouts are shown below:

//Both of the following layouts are allowed
dim a={"a":"123","b":"fg"}
dim a={
    "a":"123",
    "b":"fg"}
//The following layout is not allowed
dim a={
    "a":"123",
    "b":"fg"
}
//The closing curly brace must not be separated from the last element.

[Kubernetes] The calico-node Pod Instance Keeps Reporting Errors and Restarting

[Background]

Today we tested adding a new worker node to the k8s cluster. The expansion itself went very smoothly, but we later found that on the newly added node (k8s-node04) there was a calico-node pod instance that kept reporting errors and restarting.

[Phenomenon]

The pod status query below shows that one pod instance (calico-node-xl9bc) keeps restarting.

[root@k8s-master01 ~]# kubectl get pods -A| grep calico
kube-system            calico-kube-controllers-78d6f96c7b-tv2g6               1/1     Running     0          75m
kube-system            calico-node-6dk7g                                      1/1     Running     0          75m
kube-system            calico-node-dlf26                                      1/1     Running     0          75m
kube-system            calico-node-s5phd                                      1/1     Running     0          75m
kube-system            calico-node-xl9bc                                      0/1     Running     30          3m28s

[Troubleshooting]

Query the pod's logs:

[root@k8s-master01 ~]# kubectl logs calico-node-xl9bc -n kube-system -f

2021-09-04 12:32:45.011 [ERROR][69] felix/health.go 246: Health endpoint failed, trying to restart it... error=listen tcp: lookup localhost on 114.114.114.114:53: no such host
2021-09-04 12:32:46.025 [ERROR][69] felix/health.go 246: Health endpoint failed, trying to restart it... error=listen tcp: lookup localhost on 114.114.114.114:53: no such host
2021-09-04 12:32:47.038 [ERROR][69] felix/health.go 246: Health endpoint failed, trying to restart it... error=listen tcp: lookup localhost on 114.114.114.114:53: no such host
2021-09-04 12:32:48.050 [ERROR][69] felix/health.go 246: Health endpoint failed, trying to restart it... error=listen tcp: lookup localhost on 114.114.114.114:53: no such host
2021-09-04 12:32:49.061 [ERROR][69] felix/health.go 246: Health endpoint failed, trying to restart it... error=listen tcp: lookup localhost on 114.114.114.114:53: no such host
2021-09-04 12:32:50.072 [ERROR][69] felix/health.go 246: Health endpoint failed, trying to restart it... error=listen tcp: lookup localhost on 114.114.114.114:53: no such host
2021-09-04 12:32:51.079 [ERROR][69] felix/health.go 246: Health endpoint failed, trying to restart it... error=listen tcp: lookup localhost on 114.114.114.114:53: no such host
2021-09-04 12:32:52.093 [ERROR][69] felix/health.go 246: Health endpoint failed, trying to restart it... error=listen tcp: lookup localhost on 114.114.114.114:53: no such host
2021-09-04 12:32:53.104 [ERROR][69] felix/health.go 246: Health endpoint failed, trying to restart it... error=listen tcp: lookup localhost on 114.114.114.114:53: no such host
2021-09-04 12:32:54.114 [ERROR][69] felix/health.go 246: Health endpoint failed, trying to restart it... error=listen tcp: lookup localhost on 114.114.114.114:53: no such host
2021-09-04 12:32:55.127 [ERROR][69] felix/health.go 246: Health endpoint failed, trying to restart it... error=listen tcp: lookup localhost on 114.114.114.114:53: no such host
2021-09-04 12:32:56.138 [ERROR][69] felix/health.go 246: Health endpoint failed, trying to restart it... error=listen tcp: lookup localhost on 114.114.114.114:53: no such host
2021-09-04 12:32:57.148 [ERROR][69] felix/health.go 246: Health endpoint failed, trying to restart it... error=listen tcp: lookup localhost on 114.114.114.114:53: no such host
2021-09-04 12:32:58.162 [ERROR][69] felix/health.go 246: Health endpoint failed, trying to restart it... error=listen tcp: lookup localhost on 114.114.114.114:53: no such host
2021-09-04 12:32:59.176 [ERROR][69] felix/health.go 246: Health endpoint failed, trying to restart it... error=listen tcp: lookup localhost on 114.114.114.114:53: no such host
2021-09-04 12:33:00.186 [ERROR][69] felix/health.go 246: Health endpoint failed, trying to restart it... error=listen tcp: lookup localhost on 114.114.114.114:53: no such host
2021-09-04 12:33:01.199 [ERROR][69] felix/health.go 246: Health endpoint failed, trying to restart it... error=listen tcp: lookup localhost on 114.114.114.114:53: no such host
2021-09-04 12:33:02.211 [ERROR][69] felix/health.go 246: Health endpoint failed, trying to restart it... error=listen tcp: lookup localhost on 114.114.114.114:53: no such host
2021-09-04 12:33:03.225 [ERROR][69] felix/health.go 246: Health endpoint failed, trying to restart it... error=listen tcp: lookup localhost on 114.114.114.114:53: no such host
2021-09-04 12:33:04.238 [ERROR][69] felix/health.go 246: Health endpoint failed, trying to restart it... error=listen tcp: lookup localhost on 114.114.114.114:53: no such host

I searched the Internet for this error for a long time and found no targeted solution.

Comparing the new node with a healthy one, I eventually found that the /etc/hosts file on k8s-node04 was missing the two localhost entries (IPv4 and IPv6). With those lines gone, the lookup of localhost falls through to the DNS server (114.114.114.114), which cannot resolve it, which is exactly the error in the log. Something apparently went wrong when I installed this virtual machine last night, though I have no idea what odd operation caused it.

### /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
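To append both lines in one step (a sketch; check the file first so you do not add duplicate entries):

cat >> /etc/hosts <<'EOF'
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
EOF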

After adding these two lines to the /etc/hosts file of k8s-node04 and restarting the network, the pod instance went into CrashLoopBackOff.

[root@k8s-master01 ~]# kubectl get pods -A| grep calico
kube-system            calico-kube-controllers-78d6f96c7b-tv2g6               1/1     Running            0          80m
kube-system            calico-node-6dk7g                                      1/1     Running            0          80m
kube-system            calico-node-dlf26                                      1/1     Running            0          80m
kube-system            calico-node-s5phd                                      1/1     Running            0          80m
kube-system            calico-node-xl9bc                                      0/1     CrashLoopBackOff   7          8m24s

After deleting this pod instance, the recreated pod instance finally returned to a normal Running state.

[root@k8s-master01 ~]# kubectl delete pod calico-node-xl9bc -n kube-system
pod "calico-node-xl9bc" deleted
[root@k8s-master01 ~]# kubectl get pods -A| grep calico
kube-system            calico-kube-controllers-78d6f96c7b-tv2g6               1/1     Running     0          81m
kube-system            calico-node-6dk7g                                      1/1     Running     0          81m
kube-system            calico-node-dlf26                                      1/1     Running     0          81m
kube-system            calico-node-mz58r                                      0/1     Running     0          5s
kube-system            calico-node-s5phd                                      1/1     Running     0          81m
[root@k8s-master01 ~]# kubectl get pods -A| grep calico
kube-system            calico-kube-controllers-78d6f96c7b-tv2g6               1/1     Running     0          81m
kube-system            calico-node-6dk7g                                      1/1     Running     0          81m
kube-system            calico-node-dlf26                                      1/1     Running     0          81m
kube-system            calico-node-mz58r                                      0/1     Running     0          7s
kube-system            calico-node-s5phd                                      1/1     Running     0          81m
[root@k8s-master01 ~]# kubectl get pods -A| grep calico
kube-system            calico-kube-controllers-78d6f96c7b-tv2g6               1/1     Running     0          81m
kube-system            calico-node-6dk7g                                      1/1     Running     0          81m
kube-system            calico-node-dlf26                                      1/1     Running     0          81m
kube-system            calico-node-mz58r                                      0/1     Running     0          8s
kube-system            calico-node-s5phd                                      1/1     Running     0          81m
[root@k8s-master01 ~]# kubectl get pods -A| grep calico
kube-system            calico-kube-controllers-78d6f96c7b-tv2g6               1/1     Running     0          81m
kube-system            calico-node-6dk7g                                      1/1     Running     0          81m
kube-system            calico-node-dlf26                                      1/1     Running     0          81m
kube-system            calico-node-mz58r                                      1/1     Running     0          11s
kube-system            calico-node-s5phd                                      1/1     Running     0          81m