Tag Archives: artificial intelligence

[Solved] pyinstaller: error: unrecognized arguments: sklearn

How to Solve Error: pyinstaller: error: unrecognized arguments: sklearn

 

Solution:

Open cmd and run:
pyinstaller main.py --hidden-import PySide2.QtXml --hidden-import sklearn --hidden-import sklearn.ensemble._forest --icon="logo.ico"
That is, add the unrecognized module as a hidden import: --hidden-import sklearn
The issue will be fixed.
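
If you prefer to drive the build from a script rather than typing the command in cmd, the same hidden imports can be passed through PyInstaller's Python entry point (a minimal sketch; it assumes PyInstaller is installed and that main.py and logo.ico sit in the current directory):

# Equivalent build driven from Python instead of cmd (sketch; paths are placeholders)
import PyInstaller.__main__

PyInstaller.__main__.run([
    "main.py",
    "--hidden-import", "PySide2.QtXml",
    "--hidden-import", "sklearn",
    "--hidden-import", "sklearn.ensemble._forest",
    "--icon", "logo.ico",
])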

 

[Solved] with ERRTYPE = cudaError CUDA failure 999 unknown error

Project scenario: [with ERRTYPE = cudaError; bool THRW = true] CUDA failure 999: unknown error; GPU=24

An old program needed to be upgraded; it previously ran on CUDA 10.2.


Problem Description:

Environment:

CUDA 11.2 (previously 10.2)

onnxruntime-gpu 1.10

python 3.9.7

When starting the program, the following error was reported:

Traceback (most recent call last):
  File "/home/aiuser/cover/liheng-foggun/app.py", line 15, in <module>
    model = DetectMultiBackend(weights=config.paddle.model_file)
  File "/home/aiuser/miniconda3/envs/cover/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
    return func(*args, **kwargs)
  File "/home/aiuser/cover/liheng-foggun/models/yolo.py", line 37, in __init__
    self.session = onnxruntime.InferenceSession(weights, providers=['CUDAExecutionProvider'])
  File "/home/aiuser/miniconda3/envs/cover/lib/python3.9/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 335, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "/home/aiuser/miniconda3/envs/cover/lib/python3.9/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 379, in _create_inference_session
    sess.initialize_session(providers, provider_options, disabled_optimizers)
RuntimeError: /onnxruntime_src/onnxruntime/core/providers/cuda/cuda_call.cc:122 bool onnxruntime::CudaCall(ERRTYPE, const char*, const char*, ERRTYPE, const char*) [with ERRTYPE =
 cudaError; bool THRW = true] /onnxruntime_src/onnxruntime/core/providers/cuda/cuda_call.cc:116 bool onnxruntime::CudaCall(ERRTYPE, const char*, const char*
, ERRTYPE, const char*) [with ERRTYPE = cudaError; bool THRW = true] CUDA failure 999: unknown error ; GPU=24 ; hostname=aiserver-sl-01 ; expr=cudaSetDevice(info_.device_id);

Cause analysis:

1. At first I thought it was an onnxruntime-gpu version problem, but upgrading to 1.12 still produced the same error.

2. It was said to be a compatibility issue.

3. I tried to reinstall the driver. After uninstalling 11.2, nvidia-smi showed that the previous 10.2 driver still existed.

4. The root cause: the previous driver had not been uninstalled completely.


Solution:

1. Uninstall 10.2

sudo /usr/local/cuda-10.2/bin/cuda-uninstaller

2. Install the new driver

#install 515.57 offline
sudo ./NVIDIA-Linux-x86_64-515.57.run -no-x-check -no-nouveau-check

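After the new driver is installed, a quick sanity check from Python confirms that onnxruntime can see the GPU again (a minimal sketch; assumes onnxruntime-gpu is installed and "model.onnx" is a placeholder for your own ONNX file):

import onnxruntime as ort

print(ort.get_device())                 # should print "GPU"
print(ort.get_available_providers())    # should include "CUDAExecutionProvider"

# Creating a session should no longer raise "CUDA failure 999"
sess = ort.InferenceSession("model.onnx", providers=["CUDAExecutionProvider"])
print(sess.get_providers())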

[Solved] Yolov5 Deep Learning Error: RuntimeError: DataLoader worker (pid(s) 2516, 1768) exited unexpectedly

Project scenario:

A problem occurred when using YOLOv5 for deep learning; I train on the GPU.


Problem description

An error is reported at the start of training: RuntimeError: DataLoader worker (pid(s) 2516, 1768) exited unexpectedly.


Cause analysis:

Because I train on the GPU and Anaconda's virtual memory allocation is sufficient, the problem should be the setting for the number of CPU (dataloader worker) threads. Before that I tried adjusting the batch size, but it didn't help.


Solution:

There is a parameter named --workers in the train.py file. Set it to 0.

The following is my setting; you can refer to it:

def parse_opt(known=False):
    parser = argparse.ArgumentParser()
    parser.add_argument('--weights', type=str, default=ROOT/'yolov5x.pt', help='initial weights path') # initial weights
    parser.add_argument('--cfg', type=str, default='yolov5_Scan_FDDI/PLC_model.yaml', help='model.yaml path') # model config file
    parser.add_argument('--data', type=str, default=ROOT/'yolov5_Scan_FDDI/PLC_parameter.yaml', help='dataset.yaml path') # dataset config file
    parser.add_argument('--hyp', type=str, default=ROOT/'data/hyps/hyp.scratch-low.yaml', help='hyperparameters path') # hyperparameter settings
    parser.add_argument('--epochs', type=int, default=100) # number of training epochs
    parser.add_argument('--batch-size', type=int, default=4, help='total batch size for all GPUs, -1 for autobatch') # batch size
    parser.add_argument('--imgsz', '--img', '--img-size', type=int, default=320, help='train, val image size (pixels)') # image size
    parser.add_argument('--rect', action='store_true', help='rectangular training')
    parser.add_argument('--resume', nargs='?', const=True, default=False, help='resume most recent training') # resume interrupted training
    parser.add_argument('--nosave', action='store_true', help='only save final checkpoint')
    parser.add_argument('--noval', action='store_true', help='only validate final epoch')
    parser.add_argument('--noautoanchor', action='store_true', help='disable AutoAnchor')
    parser.add_argument('--noplots', action='store_true', help='save no plot files')
    parser.add_argument('--evolve', type=int, nargs='?', const=300, help='evolve hyperparameters for x generations')
    parser.add_argument('--bucket', type=str, default='', help='gsutil bucket')
    parser.add_argument('--cache', type=str, nargs='?', const='ram', help='--cache images in "ram" (default) or "disk"')
    parser.add_argument('--image-weights', action='store_true', help='use weighted image selection for training')
    parser.add_argument('--device', default='0', help='cuda device, i.e. 0 or 0,1,2,3 or cpu') #GPU
    parser.add_argument('--multi-scale', action='store_true', help='vary img-size +/- 50%%')
    parser.add_argument('--single-cls', action='store_true', help='train multi-class data as single-class')
    parser.add_argument('--optimizer', type=str, choices=['SGD', 'Adam', 'AdamW'], default='SGD', help='optimizer')
    parser.add_argument('--sync-bn', action='store_true', help='use SyncBatchNorm, only available in DDP mode')
    parser.add_argument('--workers', type=int, default=0, help='max dataloader workers (per RANK in DDP mode)') # number of dataloader worker threads (CPU)
    parser.add_argument('--project', default=ROOT/'runs/train', help='save to project/name')
    parser.add_argument('--name', default='exp', help='save to project/name')
    parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment')
    parser.add_argument('--quad', action='store_true', help='quad dataloader')
    parser.add_argument('--cos-lr', action='store_true', help='cosine LR scheduler')
    parser.add_argument('--label-smoothing', type=float, default=0.0, help='Label smoothing epsilon')
    parser.add_argument('--patience', type=int, default=100, help='EarlyStopping patience (epochs without improvement)')
    parser.add_argument('--freeze', nargs='+', type=int, default=[0], help='Freeze layers: backbone=10, first3=0 1 2')
    parser.add_argument('--save-period', type=int, default=-1, help='Save checkpoint every x epochs (disabled if < 1)')
    parser.add_argument('--seed', type=int, default=0, help='Global training seed')
    parser.add_argument('--local_rank', type=int, default=-1, help='Automatic DDP Multi-GPU argument, do not modify')

    # Weights & Biases arguments
    parser.add_argument('--entity', default=None, help='W&B: Entity')
    parser.add_argument('--upload_dataset', nargs='?', const=True, default=False, help='W&B: Upload data, "val" option')
    parser.add_argument('--bbox_interval', type=int, default=-1, help='W&B: Set bounding-box image logging interval')
    parser.add_argument('--artifact_alias', type=str, default='latest', help='W&B: Version of dataset artifact to use')

    opt = parser.parse_known_args()[0] if known else parser.parse_args()
    return opt
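
Alternatively, you can leave parse_opt() at its defaults and pass the value on the command line when launching training (a sketch; the weights and data arguments below are simply the files from my setup):

python train.py --workers 0 --batch-size 4 --img 320 --weights yolov5x.pt --data yolov5_Scan_FDDI/PLC_parameter.yaml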

[Solved] TensorFlow Serving Container Creation Error: failed: Out of range: Read less bytes than requested

0. Preface

Recently I was doing TensorFlow Serving model deployment and took a YOLOv5-trained model for the experiment. Everything was fine for the first few days, but today I hit an error when creating the TensorFlow Serving container. The environment, commands, model, and configuration files had not changed, yet this error suddenly appeared.

1. Problem description

There was an error when creating the TensorFlow Serving container. The environment, commands, model, and configuration files did not change; suddenly this error occurred.

1. Error summary (for easy Ctrl+F searching):

  • E tensorflow_serving/util/retrier.cc:37] Loading servable: {name: yolov5_saved_model version: 1} failed: Out of range: Read less bytes than requested
  • Failed to start server. Error: Unknown: 1 servable(s) did not become available: {{{name: yolov5_saved_model version: 1} due to error: Out of range: Read less bytes than requested}, }
  • 2022-07-30 11:15:09.097717: I tensorflow_serving/core/basic_manager.cc:279] Unload all remaining servables in the manager.

2. The complete error log is as follows:

(see Appendix)

2. Problem analysis:

The error is caused by file corruption. I can think of two possibilities: the upload was interrupted or went wrong, or an originally intact file was later damaged by a disk problem.

Mine was the second case.

I only figured this out from the reference https://github.com/tensorflow/tensorflow/issues/21544 👇

3. Solution:

Delete the model and configuration files and upload them again.

If you have no backup, or you don't know which file is damaged, you can re-convert the model to generate a new SavedModel and create a new configuration file.

The following is the file structure of the files I uploaded again:

models
|---- model1
|     |---- 1
|          |---- assets
|          |---- variables
|          |     |---- variables.data-00000-of-00001
|          |     |---- variables.index
|          |---- saved_model.pb
|---- model.config
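
Before restarting the container, you can optionally verify that the re-uploaded SavedModel is readable (a minimal sketch; assumes TensorFlow 2.x is available on the host and the directory layout shown above):

import tensorflow as tf

# If the files are still corrupt, this load fails with "Read less bytes than requested"
model = tf.saved_model.load("models/model1/1")
print(list(model.signatures.keys()))   # e.g. ['serving_default']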

References

Similar problems:
https://github.com/tensorflow/tensorflow/issues/21544
https://bytemeta.vip/repo/Breta01/handwriting-ocr/issues/104

Appendix

ubuntu% docker run --rm -p 8500:8500 --mount type=bind,source=/media/userdata/zhangxw/TFSever_Test2/yolo_cow/models,target=/models/models -t tensorflow/serving:latest-gpu --model_config_file=/models/models/model.config --allow_version_labels_for_unavailable_models=true &
[1] 8565
ubuntu% 2022-07-30 11:10:08.016885: I external/org_tensorflow/tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
2022-07-30 11:10:08.052728: I tensorflow_serving/model_servers/server_core.cc:465] Adding/updating models.
2022-07-30 11:10:08.052746: I tensorflow_serving/model_servers/server_core.cc:591]  (Re-)adding model: yolov5_saved_model
2022-07-30 11:10:08.153403: I tensorflow_serving/core/basic_manager.cc:740] Successfully reserved resources to load servable {name: yolov5_saved_model version: 1}
2022-07-30 11:10:08.153468: I tensorflow_serving/core/loader_harness.cc:66] Approving load for servable version {name: yolov5_saved_model version: 1}
2022-07-30 11:10:08.153495: I tensorflow_serving/core/loader_harness.cc:74] Loading servable version {name: yolov5_saved_model version: 1}
2022-07-30 11:10:08.153563: I external/org_tensorflow/tensorflow/cc/saved_model/reader.cc:38] Reading SavedModel from: /models/models/model1/1
2022-07-30 11:10:08.221053: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:277] SavedModel load for tags { serve }; Status: fail: Out of range: Read less bytes than requested. Took 67495 microseconds.
2022-07-30 11:10:08.221079: E tensorflow_serving/util/retrier.cc:37] Loading servable: {name: yolov5_saved_model version: 1} failed: Out of range: Read less bytes than requested
2022-07-30 11:11:08.221228: I tensorflow_serving/util/retrier.cc:33] Retrying of Loading servable: {name: yolov5_saved_model version: 1} retry: 1
2022-07-30 11:11:08.221377: I external/org_tensorflow/tensorflow/cc/saved_model/reader.cc:38] Reading SavedModel from: /models/models/model1/1
2022-07-30 11:11:08.281805: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:277] SavedModel load for tags { serve }; Status: fail: Out of range: Read less bytes than requested. Took 60430 microseconds.
2022-07-30 11:11:08.281849: E tensorflow_serving/util/retrier.cc:37] Loading servable: {name: yolov5_saved_model version: 1} failed: Out of range: Read less bytes than requested
2022-07-30 11:12:08.281994: I tensorflow_serving/util/retrier.cc:33] Retrying of Loading servable: {name: yolov5_saved_model version: 1} retry: 2
2022-07-30 11:12:08.282124: I external/org_tensorflow/tensorflow/cc/saved_model/reader.cc:38] Reading SavedModel from: /models/models/model1/1
2022-07-30 11:12:08.316331: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:277] SavedModel load for tags { serve }; Status: fail: Out of range: Read less bytes than requested. Took 34210 microseconds.
2022-07-30 11:12:08.316359: E tensorflow_serving/util/retrier.cc:37] Loading servable: {name: yolov5_saved_model version: 1} failed: Out of range: Read less bytes than requested
2022-07-30 11:13:08.316498: I tensorflow_serving/util/retrier.cc:33] Retrying of Loading servable: {name: yolov5_saved_model version: 1} retry: 3
2022-07-30 11:13:08.316644: I external/org_tensorflow/tensorflow/cc/saved_model/reader.cc:38] Reading SavedModel from: /models/models/model1/1
2022-07-30 11:13:08.348579: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:277] SavedModel load for tags { serve }; Status: fail: Out of range: Read less bytes than requested. Took 31936 microseconds.
2022-07-30 11:13:08.348606: E tensorflow_serving/util/retrier.cc:37] Loading servable: {name: yolov5_saved_model version: 1} failed: Out of range: Read less bytes than requested
2022-07-30 11:14:08.348747: I tensorflow_serving/util/retrier.cc:33] Retrying of Loading servable: {name: yolov5_saved_model version: 1} retry: 4
2022-07-30 11:14:08.348881: I external/org_tensorflow/tensorflow/cc/saved_model/reader.cc:38] Reading SavedModel from: /models/models/model1/1
2022-07-30 11:14:08.381418: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:277] SavedModel load for tags { serve }; Status: fail: Out of range: Read less bytes than requested. Took 32539 microseconds.
2022-07-30 11:14:08.381447: E tensorflow_serving/util/retrier.cc:37] Loading servable: {name: yolov5_saved_model version: 1} failed: Out of range: Read less bytes than requested

ubuntu% 2022-07-30 11:15:08.381606: I tensorflow_serving/util/retrier.cc:33] Retrying of Loading servable: {name: yolov5_saved_model version: 1} retry: 5
2022-07-30 11:15:08.381749: I external/org_tensorflow/tensorflow/cc/saved_model/reader.cc:38] Reading SavedModel from: /models/models/model1/1
2022-07-30 11:15:08.413642: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:277] SavedModel load for tags { serve }; Status: fail: Out of range: Read less bytes than requested. Took 31895 microseconds.
2022-07-30 11:15:08.413673: E tensorflow_serving/util/retrier.cc:37] Loading servable: {name: yolov5_saved_model version: 1} failed: Out of range: Read less bytes than requested
2022-07-30 11:15:08.413680: I tensorflow_serving/util/retrier.cc:46] Retrying of Loading servable: {name: yolov5_saved_model version: 1} exhausted max_num_retries: 5
2022-07-30 11:15:08.413698: I tensorflow_serving/core/loader_harness.cc:155] Encountered an error for servable version {name: yolov5_saved_model version: 1}: Out of range: Read less bytes than requested
2022-07-30 11:15:08.413705: E tensorflow_serving/core/aspired_versions_manager.cc:388] Servable {name: yolov5_saved_model version: 1} cannot be loaded: Out of range: Read less bytes than requested
Failed to start server. Error: Unknown: 1 servable(s) did not become available: {{{name: yolov5_saved_model version: 1} due to error: Out of range: Read less bytes than requested}, }
2022-07-30 11:15:09.097717: I tensorflow_serving/core/basic_manager.cc:279] Unload all remaining servables in the manager.

[1]  + exit 255   docker run --rm -p 8500:8500 --mount  -t tensorflow/serving:latest-gpu
ubuntu%

[Solved] caffe Error: Check failed: cv_img.data Could not load

The following problem was encountered when running Caffe code:

Problem Description:

E0727 11:22:09.213124  6200 io.cpp:89] Could not open or find file D:/.../
F0727 11:22:09.213124  6200 image_data_layer.cpp:129] Check failed: cv_img.data Could not load D:/.../
*** Check failure stack trace: ***

Solution:

Check the train or test list file for empty lines (usually the last line) and delete them.
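
A quick way to strip those empty lines (a minimal sketch; "train.txt" is a placeholder for your actual list file):

# Remove blank lines (typically a trailing one) from the image list file
list_file = "train.txt"   # placeholder: path to your train/test list

with open(list_file) as f:
    lines = [line for line in f if line.strip()]

with open(list_file, "w") as f:
    f.writelines(lines)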

How to Solve Ubuntu18 Compile Kalibr Error (Various Errors)

OpenCV 3.4.13 is quite new, and several errors come up while compiling Kalibr. They are summarized below:

Error 1: sudo pip install python-igraph --upgrade failed

Solution:

sudo apt-get install python-igraph

Error 2:

Could not find a package configuration file provided by "code_utils" with
any of the following names:

  code_utilsConfig.cmake
  code_utils-config.cmake

Solution:

  1. In sumpixel_test.cpp, change the line
    #include "backward.hpp"
    to
    #include "code_utils/backward.hpp"
  2. Put code_utils into the workspace and build it once, then add imu_utils and compile again.

 

Error 3: catkin build -DCMAKE_BUILD_TYPE=Release -j4 errors during compilation

3-1 error reporting:

 error: ‘CV_GRAY2RGB’ was not declared in this scope
     cv::cvtColor(imageCopy1, imageCopy1, CV_GRAY2RGB);
    
 error: ‘CV_TERMCRIT_ITER’ was not declared in this scope
         cv::TermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 30, 0.1));

error: ‘CV_TERMCRIT_EPS’ was not declared in this scope
         cv::TermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 30, 0.1));

3-1 solution: add a header file to the corresponding file:

#include <opencv2/imgproc/types_c.h>


3-2 cvStartWindowThread() error:

Change 3-2 to:

cv::startWindowThread()

3-3 CV_LOAD_IMAGE_UNCHANGED error:

3-3 changed to

cv::IMREAD_UNCHANGED

3-4 CV_LOAD_IMAGE_GRAYSCALE error:

3-4 changed to

cv::IMREAD_GRAYSCALE

3-5 CV_LOAD_IMAGE_COLOR error:

3-5 changed to

cv::IMREAD_COLOR

3-6 CV_LOAD_IMAGE_ANYDEPTH error:

3-6 changed to

cv::IMREAD_ANYDEPTH

3-7 CV_MINMAX error:

3-7 changed to

cv::NORM_MINMAX

3-8 CV_FONT_HERSHEY_SIMPLEX error:

3-8 changed to

cv::FONT_HERSHEY_SIMPLEX

3-9 CV_WINDOW_AUTOSIZE error:

3-9 changed to

cv::WINDOW_AUTOSIZE

3-10 error: aggregate 'std::ofstream out_t' has incomplete type and cannot be defined (std::ofstream out_t;)
3-10 Solution: add the header file as below:

#include <fstream>

tensorflow2.3 InvalidArgumentError: jpeg::Uncompress failed [How to Solve]

When training on your own dataset, this error is often reported:

tensorflow2.3 InvalidArgumentError: jpeg::Uncompress failed
[[{{node decode_image/DecodeImage}}]] [Op:IteratorGetNext]

 

Solution:
Check whether any images are damaged before training:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import os


num_skipped = 0
for folder_name in ("Fruit apples", "Fruit bananas", "Fruit oranges"):
    folder_path = os.path.join("./data/image_data", folder_name)
    for fname in os.listdir(folder_path):

        fpath = os.path.join(folder_path, fname)

        try:
            fobj = open(fpath, mode="rb")
            is_jfif = tf.compat.as_bytes("JFIF") in fobj.peek(10)
            
        finally:
            fobj.close()

        if not is_jfif:
            num_skipped += 1
            # Delete corrupted image
            os.remove(fpath)

print("Deleted %d images" % num_skipped)

Delete the damaged images and train again to solve the problem.
If the error is reported again, use:

# Determine if an image is corrupt, checked locally
from PIL import Image   # needed for Image.open() below
import os

def is_valid_image(path):
    '''
    Check if the file is corrupt
    '''
    try:
        bValid = True
        fileObj = open(path, 'rb')  # Open in binary form
        buf = fileObj.read()
        if not buf.startswith(b'\xff\xd8'): # whether to start with \xff\xd8
            bValid = False
        elif buf[6:10] in (b'JFIF', b'Exif'): # ASCII code of "JFIF"
            if not buf.rstrip(b'\0\r\n').endswith(b'\xff\xd9'): # whether it ends with \xff\xd9
                bValid = False
        else:
            try:
                Image.open(fileObj).verify()
            except Exception as e:
                bValid = False
                print(e)
    except Exception as e:
        return False
    return bValid
  
num_skipped = 0
for folder_name in ("fruit-apple", "fruit-banana", "fruit-orange"):
    # os.path.join() joins two or more pathname components
    folder_path = os.path.join("./data/image_data", folder_name)
    # os.listdir(path) lists the entries under this directory
    for fname in os.listdir(folder_path):
        fpath = os.path.join(folder_path, fname)
        flag1 = is_valid_image(fpath)
        if not flag1:
            num_skipped += 1   # count the corrupt files as well
            print(flag1)
            print(fpath)       # print the path and name of the corrupt file
 

Deal with the problematic files and train again to solve the problem.
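
As an alternative to the JFIF/byte checks above, you can also let TensorFlow itself try to decode every file and delete the ones that fail, since that is exactly what breaks during training (a minimal sketch; the folder names mirror the first loop above):

import os
import tensorflow as tf

num_bad = 0
for folder_name in ("Fruit apples", "Fruit bananas", "Fruit oranges"):
    folder_path = os.path.join("./data/image_data", folder_name)
    for fname in os.listdir(folder_path):
        fpath = os.path.join(folder_path, fname)
        try:
            data = tf.io.read_file(fpath)
            tf.io.decode_image(data)          # raises InvalidArgumentError on corrupt images
        except tf.errors.InvalidArgumentError:
            num_bad += 1
            os.remove(fpath)                  # delete the corrupt image
print("Deleted %d images" % num_bad)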

[Solved] torchvision.models.resnet18() Error: PytorchStreamReader failed reading zip archive: failed finding…

When I was downloading the resnet18 weights with torchvision.models.resnet18(), I manually terminated the download once, and when I ran it again I got the error: PytorchStreamReader failed reading zip archive: failed finding central directory

This is because after manually terminating the program the file had only been partially downloaded, but on the rerun the program assumed the download was complete and started unpacking it, which caused the error.

The check that detects whether the file already exists is in the torch.hub file at line 585; follow the error message to find the torch.hub file.

Set a breakpoint there, debug, and check the value of cached_file; follow the path in cached_file to find the partially downloaded file, delete it, and you're done.

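Instead of debugging, you can also locate the cache directory directly from Python and delete the incomplete file there (a minimal sketch; assumes PyTorch is installed, and the exact file name depends on your download):

import os
import torch

hub_dir = torch.hub.get_dir()                    # e.g. ~/.cache/torch/hub
ckpt_dir = os.path.join(hub_dir, "checkpoints")  # downloaded weights are cached here
print(ckpt_dir)
print(os.listdir(ckpt_dir))   # find the partially downloaded resnet18 file and delete it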

How to Solve Cython-bbox pip install Error

Cython-bbox pip install error

Installation steps

    1. Download the cython_bbox source code (click "Download files" to download it).
    2. Unzip the file.
    3. Open setup.py, find line 31, and replace extra_compile_args=['-Wno-cpp'] with extra_compile_args={'gcc': ['/Qstd=c99']}.
    4. Save the changes and return to the cython_bbox-0.1.3 directory. Open cmd, change to this directory, and run:
python setup.py build_ext install

If the build output ends without errors, the installation succeeded.

You can also repackage the modified files into the original archive and install it offline with pip.

https://blog.csdn.net/qq_28949847/article/details/124974088

How to Solve OpenCV CVUI Error: LINK2019

OpenCV CVUI Error: LINK2019

1. Severity Code Description Project File Line Suppression State
Error LNK2019 unresolved external symbol "void __cdecl cvui::init(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > const &,int,bool)" (?init@cvui@@YAXAEBV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@H_N@Z) referenced in function main

2. Severity Code Description Project File Line Suppression State
Error LNK2019 unresolved external symbol "bool __cdecl cvui::button(class cv::Mat &,int,int,class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > const &)" (?button@cvui@@YA_NAEAVMat@cv@@HHAEBV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@@Z) referenced in function main DotMatrix

3. Severity Code Description Project File Line Suppression State
Error LNK2019 unresolved external symbol "void __cdecl cvui::printf(class cv::Mat &,int,int,double,unsigned int,char const *,...)" (?printf@cvui@@YAXAEAVMat@cv@@HHNIPEBDZZ) referenced in function main

4. Severity Code Description Project File Line Suppression State
Error LNK2019 unresolved external symbol "void __cdecl cvui::update(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > const &)" (?update@cvui@@YAXAEBV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@@Z) referenced in function main DotMatrix

 

Solution:

Add the following lines to your .cpp file (the #define must come before including cvui.h, and should appear in only one .cpp file of the project):

#define CVUI_IMPLEMENTATION
#include "cvui.h"

[Solved] ValueError: Error when checking input: expected conv2d_input to have 4 dimensions

Error Messages:

ValueError: Error when checking input: expected conv2d_input to have 4 dimensions, but got array with shape (150, 150, 3)
Codes:

image = mpimg.imread("./ima/b.jpg")
image = image/255
classe = model.predict(image, batch_size=1)

Reason:
The input shape is wrong: model.predict expects a 4-D batch of shape (batch, height, width, channels), but the image has shape (150, 150, 3).
Solution:
Reshape the input so it includes a batch dimension.

Specific solutions:

image = mpimg.imread("./ima/b.jpg")
image = image.reshape(1, 150, 150, 3)/255
classe = model.predict(image, batch_size=1)
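
Equivalently, np.expand_dims adds the batch dimension without hard-coding the image size (a sketch; it assumes the loaded image already has shape (150, 150, 3)):

import numpy as np

image = mpimg.imread("./ima/b.jpg")
image = np.expand_dims(image, axis=0)/255   # shape becomes (1, 150, 150, 3)
classe = model.predict(image, batch_size=1)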

OSError: [WinError 1455] The page file is too small to complete the operation. Error loading…

The complete error: OSError: [WinError 1455] The page file is too small to complete the operation. Error loading "C:\ProgramData\Anaconda3\lib\site-packages\torch\lib\shm.dll" or one of its dependencies.

Scenario: Running the reid-strong-baseline model

Reason: The model is too large, and the paging file allocated by the system is too small for training.

Environment: windows10, cuda version: 11.1, pytorch version: 1.11.0+cu113

(1) Query your CUDA version:

nvidia-smi

(2) Query your PyTorch version:

import torch
print(torch.__version__)

Solution: Right-click This PC -> Properties -> Advanced system settings -> Advanced -> Performance Settings -> Advanced -> Virtual memory -> Change -> uncheck "Automatically manage paging file size for all drives" -> Custom size (set the initial and maximum size according to the actual available disk space, as large as possible) -> Set -> OK -> reboot.

If the error is still reported after the reboot, the possible reasons are: (1) the custom size is still too small (for example, I set 10 GB at first and still got the error, then raised it to 100 GB (100000 MB) and it ran successfully); (2) the batch_size is too large, so reduce it appropriately (for example, from 64 to 16).