Tag Archives: debug

Solution to "no space left on device" errors when using TF's debug tool (tfdbg)

The first time you use TensorFlow's debug tool it works fine, but from the second time onward there is always a shortage of disk space. This can be solved with the following steps.

  df -h

It turns out the root partition is full, so go to the root directory and check which directories occupy the space:

  du --max-depth=1 -h

It turns out that the /tmp directory takes up a lot of space.

Sure enough, /tmp contains files related to tfdbg; just delete them.
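
If this keeps happening, the cleanup can also be scripted. A minimal Python sketch, assuming the leftover tfdbg dump directories under /tmp carry a "tfdbg" prefix (adjust the pattern to whatever du/ls actually shows on your machine):

# Remove leftover tfdbg dump directories from /tmp.
# Assumption: the dump directories are named with a "tfdbg" prefix.
import glob
import shutil

for path in glob.glob("/tmp/tfdbg*"):
    print("removing", path)
    shutil.rmtree(path, ignore_errors=True)

Depending on your TensorFlow version, the tfdbg wrapper session also takes a dump_root argument, which can point the dump files at a partition with more space.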

‘coroutine’ object is not iterable [How to Solve]

ValueError: [TypeError("'coroutine' object is not iterable"), TypeError('vars() argument must have __dict__ attribute')]

In FastAPI, uvloop relies on asynchronous functions. The ‘coroutine’ object is not iterable error turned out to come from asynchronous code being called inside a synchronous function.

The fix: declare the outer function as async and await the asynchronous call inside it.
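
A minimal FastAPI sketch of the fix (the load_user helper and the route below are made-up names for illustration): both the route handler and the helper are declared async, and the helper is awaited instead of being returned as a bare coroutine object.

from fastapi import FastAPI

app = FastAPI()

async def load_user(user_id: int) -> dict:
    # stand-in for an asynchronous DB or HTTP call
    return {"id": user_id}

@app.get("/users/{user_id}")
async def read_user(user_id: int):
    # calling load_user() without "await" would hand FastAPI a coroutine
    # object, which triggers the error above
    user = await load_user(user_id)
    return user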

Running a Python program under Mac OS X that uses the Tools library fails with ModuleNotFoundError: No module named ‘Tools’

ModuleNotFoundError: No module named ‘Tools’

For example, importing the Tools library in PyCharm on a Mac:

from Tools.scripts.abitype import classify
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Patch
from sklearn import datasets
from minisom import MiniSom

The first line is the one that reports the error!

It seems difficult to change the Tools path in the Mac environment. The solution is as follows:

Import classify directly:

import classify

Tested and it works ~
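
If importing classify directly still cannot be resolved, another minimal sketch is to append the folder that contains classify.py to sys.path before importing (the path below is a placeholder, not from the original post):

import sys

sys.path.append("/path/to/folder/containing/classify")  # placeholder path
import classify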

Solution of OpenCV library import error in Python 3

Operating environment:

Operating system: CentOS 7.9

Software environment: Python 3.6

Error description:

When using the OpenCV library, the following error occurred on import:

File "<stdin>", line 1, in <module>
File "/home/summer/.local/lib/python3.6/site-packages/cv2/__init__.py", line 5, in <module>
    from .cv2 import *
ImportError: libGL.so.1: cannot open shared object file: No such file or directory

Solution:

Enter the following commands in the terminal:

sudo yum update
sudo yum install mesa-libGL.x86_64

Once the installation is complete, the import error is resolved.
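
A quick way to confirm the fix from the same Python 3.6 environment:

# After installing mesa-libGL, importing cv2 should no longer fail on libGL.so.1.
import cv2
print(cv2.__version__)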

Error when installing the Matplotlib library: PermissionError: [Errno 13] Permission denied: ‘/usr/local/lib/python3.6’

An error occurred when downloading and installing the Matplotlib library in Python 3: PermissionError: [Errno 13] Permission denied: ‘/usr/local/lib/python3.6’.

Operating environment:

Operating system: CentOS 7.9

Software environment: Python 3.6

Error description: running the command pip3 install matplotlib to install the Matplotlib library raises this error.

Solution:
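
A common fix when pip cannot write to the system-wide /usr/local/lib/python3.6 is either to run the install with sudo, or to do a per-user install with pip3 install --user matplotlib, which puts the package under ~/.local instead.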

Failed to resolve: com.serenegiant:common:1.5.20

We then found another, simpler and more effective way:

1. Download the common package you need from http://download.csdn.net/download/qq_38355313/12156696

2. Put the required module (the .aar file) directly into the libs folder

3. Change the original

implementation fileTree(include: ['*.jar'], dir: 'libs')

Change to

implementation fileTree(include: ['*.jar','*.aar'], dir: 'libs') 

This solves the problem; personally tested and working.

com.fasterxml.jackson.databind.exc.InvalidDefinitionException: No serializer found for class

com.fasterxml.jackson.databind.exc.InvalidDefinitionException: No serializer found for class (through reference chain: com.jd.lean.mjp.dal.entity.Province_$$_jvste70_0["handler"])

1. Background

When using a MyBatis one-to-many collection query, the above error is reported.

2. Solution

For the one-to-many mapping, add the following annotation to the entity class:
@JsonIgnoreProperties(value = {"handler"})

3. Reasons

By default, JSON serialization does not ignore properties in the bean that should not be converted, such as the handler property added by the lazy-loading proxy.

Successfully solved "RuntimeError: expected scalar type Long but found Float" in ResNet dataset classification

Recently, while doing a deep learning classification task, I ran into the error shown in the title and at first did not know how to fix it. After some exploration I solved it; the problem and solution are given below.

Error

Solution

In practice, the classification labels should be of type long and the images should be float32; changing the data types accordingly fixes the error. I'm sharing it here now that it's solved!
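
A minimal PyTorch sketch of the cast (resnet18 and the random tensors below are only stand-ins for the real model and dataset):

# CrossEntropyLoss expects float inputs and long (int64) class labels.
import torch
import torch.nn as nn
import torchvision

model = torchvision.models.resnet18(num_classes=10)
criterion = nn.CrossEntropyLoss()

images = torch.rand(4, 3, 224, 224).float()   # images as float32
labels = torch.randint(0, 10, (4,)).long()    # class labels as int64 (long)

loss = criterion(model(images), labels)
print(loss.item())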

SSL_ERROR_SYSCALL in connection to github.com:443

Project scenario:

On a Mac, pushing to GitHub with git push or hexo deploy.

Problem Description:

The error output contains:

LibreSSL SSL_connect: SSL_ERROR_SYSCALL in connection to github.com:443

Cause analysis:

A proxy is running locally, so cloning works, but for pushing the proxy also needs to be configured for Git in the terminal.


Solution:

vim ~/.gitconfig

Check the port of your local proxy and replace the xxxx below with it:

[http]
	proxy = socks5://127.0.0.1:xxxx
[https]
	proxy = socks5://127.0.0.1:xxxx

Alternatively, turn off the local proxy and push over a normal, direct connection.

onnx-tensorrt/builtin_op_importers.cpp:766:12: error: ‘class nvinfer1::IDeconvolutionLayer’ has no member named ‘setDilationNd’

When compiling the onnxruntime source code, an error is reported:

/home/zxq/cxx/onnxruntime/cmake/external/onnx-tensorrt/builtin_op_importers.cpp:766:12: error: ‘class nvinfer1::IDeconvolutionLayer’ has no member named ‘setDilationNd’
     layer->setDilationNd(dilations);
            ^~~~~~~~~~~~~
/home/zxq/cxx/onnxruntime/cmake/external/onnx-tensorrt/builtin_op_importers.cpp: In function ‘onnx2trt::NodeImportResult onnx2trt::{anonymous}::importGemm(onnx2trt::IImporterContext*, const onnx::NodeProto&, std::vector<onnx2trt::TensorOrWeights>&)’:
/home/zxq/cxx/onnxruntime/cmake/external/onnx-tensorrt/builtin_op_importers.cpp:1250:18: error: ‘class nvinfer1::IShuffleLayer’ has no member named ‘setZeroIsPlaceholder’
         squeeze->setZeroIsPlaceholder(false);
                  ^~~~~~~~~~~~~~~~~~~~
/home/zxq/cxx/onnxruntime/cmake/external/onnx-tensorrt/builtin_op_importers.cpp: In function ‘onnx2trt::NodeImportResult onnx2trt::{anonymous}::importGRU(onnx2trt::IImporterContext*, const onnx::NodeProto&, std::vector<onnx2trt::TensorOrWeights>&)’:
/home/zxq/cxx/onnxruntime/cmake/external/onnx-tensorrt/builtin_op_importers.cpp:1536:20: error: ‘class nvinfer1::IShuffleLayer’ has no member named ‘setZeroIsPlaceholder’
         unsqueeze->setZeroIsPlaceholder(false);
                    ^~~~~~~~~~~~~~~~~~~~
/home/zxq/cxx/onnxruntime/cmake/external/onnx-tensorrt/builtin_op_importers.cpp: In function ‘onnx2trt::NodeImportResult onnx2trt::{anonymous}::importLSTM(onnx2trt::IImporterContext*, const onnx::NodeProto&, std::vector<onnx2trt::TensorOrWeights>&)’:
/home/zxq/cxx/onnxruntime/cmake/external/onnx-tensorrt/builtin_op_importers.cpp:2051:22: error: ‘class nvinfer1::IShuffleLayer’ has no member named ‘setZeroIsPlaceholder’
         reshapeBias->setZeroIsPlaceholder(false);
                      ^~~~~~~~~~~~~~~~~~~~
/home/zxq/cxx/onnxruntime/cmake/external/onnx-tensorrt/builtin_op_importers.cpp: In function ‘onnx2trt::NodeImportResult onnx2trt::{anonymous}::importRNN(onnx2trt::IImporterContext*, const onnx::NodeProto&, std::vector<onnx2trt::TensorOrWeights>&)’:
/home/zxq/cxx/onnxruntime/cmake/external/onnx-tensorrt/builtin_op_importers.cpp:3202:22: error: ‘class nvinfer1::IShuffleLayer’ has no member named ‘setZeroIsPlaceholder’
         reshapeBias->setZeroIsPlaceholder(false);
                      ^~~~~~~~~~~~~~~~~~~~
/home/zxq/cxx/onnxruntime/cmake/external/onnx-tensorrt/builtin_op_importers.cpp: In function ‘onnx2trt::NodeImportResult onnx2trt::{anonymous}::importTRT_Shuffle(onnx2trt::IImporterContext*, const onnx::NodeProto&, std::vector<onnx2trt::TensorOrWeights>&)’:
/home/zxq/cxx/onnxruntime/cmake/external/onnx-tensorrt/builtin_op_importers.cpp:4219:12: error: ‘class nvinfer1::IShuffleLayer’ has no member named ‘setZeroIsPlaceholder’
     layer->setZeroIsPlaceholder(zeroIsPlaceholder);
            ^~~~~~~~~~~~~~~~~~~~
external/onnx-tensorrt/CMakeFiles/nvonnxparser_static.dir/build.make:103: recipe for target 'external/onnx-tensorrt/CMakeFiles/nvonnxparser_static.dir/builtin_op_importers.cpp.o' failed
make[2]: *** [external/onnx-tensorrt/CMakeFiles/nvonnxparser_static.dir/builtin_op_importers.cpp.o] Error 1
CMakeFiles/Makefile2:2581: recipe for target 'external/onnx-tensorrt/CMakeFiles/nvonnxparser_static.dir/all' failed
make[1]: *** [external/onnx-tensorrt/CMakeFiles/nvonnxparser_static.dir/all] Error 2
Makefile:165: recipe for target 'all' failed
make: *** [all] Error 2
Traceback (most recent call last):
  File "/home/zxq/cxx/onnxruntime/tools/ci_build/build.py", line 1986, in <module>
    sys.exit(main())
  File "/home/zxq/cxx/onnxruntime/tools/ci_build/build.py", line 1921, in main
    build_targets(args, cmake_path, build_dir, configs, num_parallel_jobs, args.target)
  File "/home/zxq/cxx/onnxruntime/tools/ci_build/build.py", line 1007, in build_targets
    run_subprocess(cmd_args, env=env)
  File "/home/zxq/cxx/onnxruntime/tools/ci_build/build.py", line 528, in run_subprocess
    return run(*args, cwd=cwd, capture_stdout=capture_stdout, shell=shell, env=my_env)
  File "/home/zxq/cxx/onnxruntime/tools/python/util/run.py", line 41, in run
    completed_process = subprocess.run(
  File "/home/zxq/anaconda3/lib/python3.8/subprocess.py", line 512, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['/usr/local/bin/cmake', '--build', '/home/zxq/cxx/onnxruntime/build/Linux/Release', '--config', 'Release']' returned non-zero exit status 2.

Reason:

From the error message we can see that it is related to onnxruntime's third-party package external/onnx-tensorrt.

In onnxruntime rel-1.7.2, the bundled onnx-tensorrt depends on TensorRT 7.2.2, but I had installed TensorRT 7.0.0, which is too old.

I had installed 7.0.0 because I was previously on CUDA 10.0, and CUDA 10.0 supports at most TensorRT 7.0.0.

Solution:

(1) Re-download an onnxruntime version that matches my TensorRT, but the download is too slow and the time cost too high.

(2) Upgrade to CUDA 11.0, which supports the latest TensorRT 7.2.3, and reinstall CUDA and TensorRT following their installation guides.
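
If the TensorRT Python bindings are installed alongside the C++ libraries, a quick way to check which version is present (assuming the "tensorrt" package matches the system install):

import tensorrt

# For this onnx-tensorrt the version should be 7.2.x or newer.
print(tensorrt.__version__)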

 

CLion breakpoints not triggered and no response when debugging [How to Solve]

When using CLion to debug C++ code, nothing happens after adding breakpoints and starting the debugger, and the breakpoints are never hit. After checking some material, the cause turned out to be that the project must be compiled in Debug mode, which is configured in CMakeLists.txt. The solution is to add the following line to CMakeLists.txt:

set(CMAKE_BUILD_TYPE Debug)

After recompiling (now with debug symbols and without optimization) and debugging again, breakpoints are triggered normally.

Solution to the Spring Boot test class error: Error creating bean with name 'serverEndpointExporter'

Spring Boot unit tests can run into many problems. When using WebSocket, running the test class reports the error: Error creating bean with name 'serverEndpointExporter' defined in class path resource [com/jacklin/config/WebSocketConfig.class]. This is where the @ServerEndpoint annotation is introduced.

There are two ways to solve this problem

Method 1: remove @RunWith(SpringRunner.class) from the test class. But this method has limitations; for example, when you later need an @Autowired class, you will get an error. It did not work for me, so decide according to your own code.

Method 2: add webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT to @SpringBootTest, which means a real web application context (reactive or servlet based) is created. Reason: WebSocket depends on a running container such as Tomcat, so during the test we need to actually start Tomcat as the container.

After adding this and running again, the error is gone!!