Tag Archives: Computer vision

[Solved] GDAL+OPENCV often reports errors when processing remote sensing images

I found that every time I use the opencv package to process remote sensing images, it often reports an error, even though the image has already been converted to uint8.
This time I found that some functions only accept three-channel input, while my binary image has only one channel; the error when saving the result, in turn, comes from the result needing to be saved as one channel while mine has three. So it is necessary to convert back and forth between 1 and 3 channels:

img1 = np.array(b1)
# expand the single band to three channels
img = np.expand_dims(img1, axis=2)
img = np.concatenate((img, img, img), axis=-1)

gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
contours, hierarchy = cv2.findContours(binary, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
filled = cv2.fillPoly(img, [contours[1]], (255, 15, 15))
cv2.imshow("filled binary", filled)
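
For completeness, a minimal sketch (the input path, band index, and value range are assumptions) of reading a single GDAL band, converting it to uint8, and stacking it to the three channels that many cv2 functions expect:

import numpy as np
import cv2
from osgeo import gdal

ds = gdal.Open("input.tif")                          # hypothetical remote sensing image
b1 = ds.GetRasterBand(1).ReadAsArray()               # single-band array, original dtype
img1 = np.clip(b1, 0, 255).astype(np.uint8)          # assumes the values fit (or can simply be clipped) into 0-255
img = np.repeat(img1[:, :, np.newaxis], 3, axis=2)   # 1 channel -> 3 channels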

 

[Solved] error: (-215:Assertion failed) !_src.empty() in function ‘cv::cvtColor‘

Error Message:

error: (-215:Assertion failed) !_src.empty() in function ‘cv::cvtColor’

cv2.error: OpenCV(4.6.0) D:\a\opencv-python\opencv-python\opencv\modules\imgproc\src\color.cpp:182: error: (-215:Assertion failed) !_src.empty() in function 'cv::cvtColor'

Reason:

The image path is wrong, so cv2.imread fails to load the image and passes an empty matrix to cv::cvtColor.

Solution:

Original code

videos_src_path = "D:\pythonProject\python3.9\dataset\image\baozi_image\3.png"
frame = cv2.imread(videos_src_path)

Modify the read path as follows:

videos_src_path = "D:/pythonProject/python3.9/dataset/image/baozi_image/3.png"
# or videos_src_path = r"D:\pythonProject\python3.9\dataset\image\baozi_image\3.png"
frame = cv2.imread(videos_src_path)
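
A quick way to catch this early (a sketch, reusing the same assumed path) is to check the return value of cv2.imread before calling cvtColor, since imread returns None instead of raising when the path is wrong:

import cv2

videos_src_path = "D:/pythonProject/python3.9/dataset/image/baozi_image/3.png"
frame = cv2.imread(videos_src_path)
if frame is None:  # imread silently returns None on a bad path
    raise FileNotFoundError("Could not read image: " + videos_src_path)
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)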

 

How to Solve Ubuntu18 Compile Kalibr Error (Various Errors)

Opencv-3.4.13 is too new, and errors will be encountered during the compilation of kalibr. The summary is as follows:

Error 1: sudo pip install python-igraph --upgrade failed

Solution:

sudo apt-get install python-igraph

Error 2:

Could not find a package configuration file provided by "code_utils" with
any of the following names:
code_utilsConfig.cmake
code_utils-config.cmake

Solution:

  1. In the sumpixel_test.cpp file, change
    #include "backward.hpp"
    to
    #include "code_utils/backward.hpp"
  2. Put code_utils into the workspace first and build the workspace once, then add imu_utils and compile again.

 

Error 3: catkin build -DCMAKE_BUILD_TYPE=Release -j4 errors during compilation

3-1 error reporting:

 error: ‘CV_GRAY2RGB’ was not declared in this scope
     cv::cvtColor(imageCopy1, imageCopy1, CV_GRAY2RGB);
    
 error: ‘CV_TERMCRIT_ITER’ was not declared in this scope
         cv::TermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 30, 0.1));

error: ‘CV_TERMCRIT_EPS’ was not declared in this scope
         cv::TermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 30, 0.1));

3-1 solution: add a header file to the corresponding file:

#include <opencv2/imgproc/types_c.h>


3-2 cvStartWindowThread() error:

Change 3-2 to:

cv::startWindowThread()

3-3 CV_LOAD_IMAGE_UNCHANGED error:

3-3 changed to

cv::IMREAD_UNCHANGED

3-4 CV_LOAD_IMAGE_GRAYSCALE error:

3-4 changed to

cv::IMREAD_GRAYSCALE

3-5 CV_LOAD_IMAGE_GRAYSCALE error:

Change 3-5 to

cv::IMREAD_GRAYSCALE

3-6 CV_LOAD_IMAGE_COLOR error:

Change 3-6 to

cv::IMREAD_COLOR

3-7 CV_LOAD_IMAGE_ANYDEPTH error:

3-7 changed to

cv::IMREAD_ANYDEPTH

3-8 CV_MINMAX error:

3-8 changed to

cv::NORM_MINMAX

3-9 CV_FONT_HERSHEY_SIMPLEX error:

3-9 changed to

cv::FONT_HERSHEY_SIMPLEX

3-10 CV_WINDOW_AUTOSIZE error:

3-10 changed to

cv::WINDOW_AUTOSIZE

3-11 error: aggregate 'std::ofstream out_t' has incomplete type and cannot be defined: std::ofstream out_t;
3-11 solution: add the header file as below:

#include <fstream>

[Solved] pytorch loss.backward() Error: RuntimeError: Function AddBackward0 returned an invalid gradient at index 1…

How to Solve pytorch loss.backward() Error

The specific error is the RuntimeError quoted in the title: Function AddBackward0 returned an invalid gradient at index 1.

In fact, the error message makes it clear that the error is caused by a device mismatch during backpropagation.
In the code, the input, target, model, and loss are all moved to the GPU via .to(device). But when calculating the loss, an extra loss is initialized manually:
loss_1 = torch.tensor(0.0)
By default this loss_1 lives on the CPU.
The final statement loss_seg = loss + loss_1
then causes the above error when loss_seg.backward() is executed.

Modification method:
Modify
loss_1 = torch.tensor(0.0)
to
loss_1 = torch.tensor(0.0).to(device)
or
loss_1 = torch.tensor(0.0).cuda()
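
A minimal sketch of the fix (device, loss, and loss_seg are the names used above; the model and data pipeline are stand-ins):

import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

loss = torch.rand(1, device=device, requires_grad=True).sum()  # stand-in for the model loss already on the GPU
loss_1 = torch.tensor(0.0).to(device)                          # create the extra loss on the same device
loss_seg = loss + loss_1
loss_seg.backward()                                            # no device-mismatch error now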

[Solved] ValueError: Error when checking input: expected conv2d_input to have 4 dimensions

Error Messages:

ValueError: Error when checking input: expected conv2d_input to have 4 dimensions, but got array with shape (150, 150, 3)
Codes:

image = mpimg.imread("./ima/b.jpg")
image = image/255
classe = model.predict(image, batch_size=1)

Reason:
the input shape is incorrect: model.predict expects a 4-dimensional batch (N, H, W, C), but a single image of shape (150, 150, 3) was passed.
Solution:
add a batch dimension (and normalize) before predicting.

Specific solutions:

image = mpimg.imread("./ima/b.jpg")
image = image.reshape(1, 150, 150, 3) / 255
classe = model.predict(image, batch_size=1)
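
Equivalently, np.expand_dims adds the batch dimension without hard-coding the image size (a sketch; model is the trained Keras model from the surrounding code):

import numpy as np
import matplotlib.image as mpimg

image = mpimg.imread("./ima/b.jpg")           # shape (150, 150, 3)
image = np.expand_dims(image, axis=0) / 255   # shape (1, 150, 150, 3)
classe = model.predict(image, batch_size=1)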

[Solved] error: (-2:Unspecified error) The function is not implemented. Rebuild the library with Windows

Error: error: (-2:Unspecified error) The function is not implemented. Rebuild the library with Windows

I'm on Windows, and this error was suddenly reported while running an image motion-blur algorithm with the opencv-python library.
Answers to similar questions suggest that the image path is wrong or that a higher version of OpenCV should be reinstalled, but none of that helped.
None of the functions under cv2 were working.

Solution:
First:

pip uninstall opencv-python 

Then:

pip install opencv-python

Just reinstall the package. I don't know the exact cause; the problem appeared after I installed the evaluations package, so it is probably a version conflict.
stackoverflow: https://stackoverflow.com/questions/67120450/error-2unspecified-error-the-function-is-not-implemented-rebuild-the-libra

[Solved] Opencv Call yolov3 Error: IndexError: invalid index to scalar variable

When calling the yolov3 weight file with OpenCV to obtain the names of the three output layers, an error is reported:

ln = [ln[i[0] - 1] for i in net.getUnconnectedOutLayers()]

Error message:
IndexError: invalid index to scalar variable

This code is a few years old, and the error is caused by the OpenCV version: newer versions of net.getUnconnectedOutLayers() return the indices as scalars rather than 1-element arrays, so i[0] no longer works.

Solution:
Remove [0] and modify as below:

ln = [ln[i - 1] for i in net.getUnconnectedOutLayers()]
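
For context, a sketch of the surrounding code (the .cfg/.weights file names are assumptions); calling flatten() on the returned indices works with both old and new OpenCV versions:

import cv2

net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
ln = net.getLayerNames()
# flatten() handles both the old Nx1-array and the new flat-array return types
ln = [ln[i - 1] for i in net.getUnconnectedOutLayers().flatten()]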

[Solved] headless pyrender offscreen render Error: OpenGL.error.GLError: GLError(err = 12296,

Error description

When running VIBE on a headless CentOS server (no display) and using pyrender for offscreen rendering, OpenGL.error.GLError: GLError(err = 12296) is reported, as follows:

Traceback (most recent call last):
  File "demo.py", line 416, in <module>
    main(args)
  File "demo.py", line 278, in main
    renderer = Renderer(resolution=(orig_width, orig_height), orig_img=True, wireframe=args.wireframe)
  File "***/VIBE/lib/utils/renderer.py", line 60, in __init__
    point_size=1.0
  File "***/anaconda3/envs/vibe-env/lib/python3.7/site-packages/pyrender/offscreen.py", line 31, in __init__
    self._create()
  File "***/anaconda3/envs/vibe-env/lib/python3.7/site-packages/pyrender/offscreen.py", line 134, in _create
    self._platform.init_context()
  File "***/anaconda3/envs/vibe-env/lib/python3.7/site-packages/pyrender/platforms/egl.py", line 177, in init_context
    assert eglInitialize(self._egl_display, major, minor)
  File "***/anaconda3/envs/vibe-env/lib/python3.7/site-packages/OpenGL/platform/baseplatform.py", line 409, in __call__
    return self( *args, **named )
  File "***/anaconda3/envs/vibe-env/lib/python3.7/site-packages/OpenGL/error.py", line 232, in glCheckError
    baseOperation = baseOperation,
OpenGL.error.GLError: GLError(
        err = 12296,
        baseOperation = eglInitialize,
        cArguments = (
                <OpenGL._opaque.EGLDisplay_pointer object at 0x7f883a03a170>,
                c_long(0),
                c_long(0),
        ),
        result = 0
)

Solution

Ubuntu: follow https://pyrender.readthedocs.io/en/latest/install/index.html#installmesa and use OSMesa for offscreen rendering.

It is more convenient to use conda to install osmesa, as follows:

conda install osmesa

Note: after installing osmesa, reinstall pyopengl as follows, otherwise the error will still be reported:

pip uninstall pyopengl
git clone https://github.com/mmatl/pyopengl.git
pip install ./pyopengl

Specify PYOPENGL_PLATFORM as osmesa before running the script, as follows:

# demo.py
import os
# os.environ['PYOPENGL_PLATFORM'] = 'egl'
os.environ['PYOPENGL_PLATFORM'] = 'osmesa'

Note: os.environ['PYOPENGL_PLATFORM'] = 'osmesa' should come right after import os, so that PYOPENGL_PLATFORM is already set to osmesa before the renderer is used; alternatively, set os.environ['PYOPENGL_PLATFORM'] = 'osmesa' explicitly just before the rendering code.
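
In other words, a minimal sketch of the import order that works headlessly:

import os
os.environ['PYOPENGL_PLATFORM'] = 'osmesa'  # set before pyrender/OpenGL are imported or used

import pyrender  # offscreen rendering now goes through OSMesa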

[Solved] mmdetection Error: ImportError: /home/user/repos/mmdetection/mmdet/ops/dcn/deform_conv_cuda.cpython-37m-x

Environment configuration: torch 1.11.0 + CUDA 11.3 (latest)

Using mmdetection for inference:

from mmdet.apis import init_detector, inference_detector

Errors are reported as follows:

ImportError: /home/user/repos/mmdetection/mmdet/ops/dcn/deform_conv_cuda.cpython-37m-x86_64-linux-gnu.so: undefined symbol: _ZN6caffe26detail37_typeMetaDataInstance_preallocated_32E

 

The problem has been solved. The cause of the error is that the pytorch version is too new; although openmmlab claims to support the latest version, it still triggers this error.

Solution:

Downgrade pytorch to torch 1.6.0 + cu102 (check openmmlab's official GitHub for the matching versions), uninstall and reinstall mmcv 1.3.9, then re-run the mmdetection code; the error disappears.
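
After the downgrade, a quick sanity check (just two prints) to confirm the environment matches what the fix expects:

import torch
import mmcv

print(torch.__version__, torch.version.cuda)  # expected: 1.6.0 10.2
print(mmcv.__version__)                       # expected: 1.3.9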

[Solved] size_from_dim: Assertion `dim >= 0 && (size_t)dim < sizes_.size()` failed.

Problems encountered

/home/optimizer-master/third_party/onnx_common/tensor.h:117: size_from_dim: Assertion dim >= 0 && (size_t)dim < sizes_.size() failed.

 

Problem description

The above problem was encountered when importing the exported ONNX model.

 

Solution:

If the model contains an op that cannot be exported during the ONNX export, this error is reported.
How to solve:
Find the op that cannot be exported and add a branch so that it is skipped when exporting to ONNX. An example is as follows:

def forward(self, x):
    if torch.onnx.is_in_onnx_export():  # ONNX export path: avoid the unsupported op
        return x * self.act(x) / 6
    else:  # normal training/inference path
        return x * self.act(x + 3) / 6
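
As a usage sketch (the module definition and input shape are assumptions): torch.onnx.is_in_onnx_export() returns True only while torch.onnx.export() is tracing the model, so the export-friendly branch is the one that ends up in the ONNX graph:

import torch
import torch.nn as nn

class ActBlock(nn.Module):  # hypothetical module wrapping the forward() above
    def __init__(self):
        super().__init__()
        self.act = nn.ReLU6()

    def forward(self, x):
        if torch.onnx.is_in_onnx_export():  # export path
            return x * self.act(x) / 6
        return x * self.act(x + 3) / 6

model = ActBlock().eval()
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy, "act_block.onnx", opset_version=11)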

[Solved] OpenCV Import Error: ImportError: numpy.core.multiarray failed to import

cv2 import Error:

(paddle) C:\Windows\system32>python
Python 3.8.12 (default, Oct 12 2021, 03:01:40) [MSC v.1916 64 bit (AMD64)] :: Anaconda, Inc. on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import cv2
ImportError: numpy.core.multiarray failed to import
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "D:\anaconda3\envs\paddle\lib\site-packages\cv2\__init__.py", line 8, in <module>
from .cv2 import *
ImportError: numpy.core.multiarray failed to import
>>> exit()

Solution:

pip install -U numpy 

A new problem then appears:

(paddle) C:\Windows\system32>pip install -U numpy
Requirement already satisfied: numpy in d:\anaconda3\envs\paddle\lib\site-packages (1.19.3)
Collecting numpy
  Downloading numpy-1.22.3-cp38-cp38-win_amd64.whl (14.7 MB)
     |████████████████████████████████| 14.7 MB 547 kB/s
Installing collected packages: numpy
  Attempting uninstall: numpy
    Found existing installation: numpy 1.19.3
    Uninstalling numpy-1.19.3:
      Successfully uninstalled numpy-1.19.3
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
paddlepaddle-gpu 2.2.2 requires numpy<=1.19.3,>=1.13; python_version >= "3.5" and platform_system == "Windows", but you have numpy 1.22.3 which is incompatible.
openvino 2021.4.2 requires numpy<1.20,>=1.16.6, but you have numpy 1.22.3 which is incompatible.
openvino-dev 2021.4.2 requires numpy<1.20,>=1.16.6, but you have numpy 1.22.3 which is incompatible.
Successfully installed numpy-1.22.3

However, in my testing this dependency warning did not cause any real problems.

If you want to restore the pinned numpy version for compatibility, you can run:

pip install numpy==1.19.3
pip uninstall opencv-contrib-python
conda install opencv

Note that the following command installs (or uninstalls) the latest version of OpenCV:

pip (un)install opencv-contrib-python
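
After the juggling above, a quick check (nothing more than two prints) of which versions are actually active:

import numpy
import cv2

print(numpy.__version__)  # e.g. 1.19.3 if you pinned it back for paddlepaddle/openvino
print(cv2.__version__)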

 

Reference:

https://stackoverflow.com/questions/20518632/importerror-numpy-core-multiarray-failed-to-import

How to Solve Yolox Training C Disk Full Issue

0. Problem description: COCO dataset training was suddenly interrupted halfway; the C drive showed red with almost no free space (during training, too many temporary files are generated in AppData/Temp).
As the epochs increase, the temporary files keep growing (even with yolox-tiny); with yolox-x the C drive fills up completely!

1. Problem cause:
Around line 203 of YOLOX-main/yolox/evaluators/coco_evaluator.py, tempfile.mkstemp() creates a file, but no close() or remove() is performed afterwards.

2. Solutions
(1) Method 1
Add the two lines os.close(_) and os.remove(tmp) so the file just created is deleted directly after use (remember to import the os module at the beginning); see the sketch after this list.
(2) Method 2
Since the cause is known, you can create the file with with...as..., which closes and deletes it automatically (also sketched below).
(3) Method 3
If you want to keep every temporary file without filling up the C drive, change the save location to a custom path. Code location: Anaconda/envs/<the environment in use>/Lib/tempfile.py, around lines 159-185; change the dirlist logic there to point at a user-defined folder.

(4) Method 4
Manually clean up the files in Temp at regular intervals.
Note: training with a VOC-format dataset does not generate these temporary files, because it creates them with with...as...; for details, see the end of voc_evaluator.py.
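
A sketch of Methods 1 and 2 (illustrative only, not the exact coco_evaluator.py code):

import os
import tempfile

# Method 1: close the descriptor and remove the file once it has been used
fd, tmp = tempfile.mkstemp()
# ... write the COCO-format results to `tmp` and run the evaluation here ...
os.close(fd)    # release the file descriptor returned by mkstemp
os.remove(tmp)  # delete the file so AppData/Temp does not keep growing

# Method 2: with ... as ... closes and deletes the temporary file automatically
with tempfile.NamedTemporaryFile(mode="w", suffix=".json") as f:
    f.write("{}")  # placeholder for the results
    # ... run the evaluation against f.name here; the file is removed on exit ...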