When running PyTorch on the GPU, a cuDNN runtime error was reported.
Many people online have run into the same problem. Some said it was a CUDA/cuDNN version mismatch; others said PyTorch, CUDA, and cuDNN needed to be reinstalled. I checked the official website and my versions do match, reinstalling did not help, and I could not install the version combination that reportedly worked on another system.
You can see that the traceback always ends in the file conv.py, i.e. the error is raised during the convolution (CNN) operation.
The workaround is to add the following two lines:

import torch
torch.backends.cudnn.enabled = False

This simply tells PyTorch not to use cuDNN acceleration.
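For context, here is a minimal sketch (my own illustration, not from the original post; the layer and batch sizes are arbitrary) showing the flag set at the top of a script before any convolution runs on the GPU:

import torch
import torch.nn as nn

# Disable cuDNN before building/running the model; PyTorch then falls
# back to its native CUDA convolution kernels.
torch.backends.cudnn.enabled = False

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
conv = nn.Conv2d(1, 32, kernel_size=3).to(device)   # arbitrary example layer
x = torch.randn(8, 1, 28, 28, device=device)        # dummy MNIST-sized batch
out = conv(x)                                       # runs on the GPU without cuDNN
print(out.shape)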
The relationship between the GPU, CUDA, and cuDNN:
- CUDA is a parallel computing platform that NVIDIA launched for its own GPUs. It only runs on NVIDIA GPUs, and it only pays off when the problem at hand can be massively parallelized.
- cuDNN is NVIDIA's GPU-accelerated library of primitives for deep neural networks. It is not strictly required for training models on the GPU, but it is usually used because of the extra speed it provides.
Reference: GPU, CUDA, cuDNN understanding
cuDNN is used by default. Since the version mismatch cannot be resolved at the moment, I simply leave it disabled. The GPU still works, just probably not as fast as it would with cuDNN.
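If you want to see what PyTorch itself reports before deciding whether the versions really match, a small diagnostic sketch (my own addition, assuming a CUDA build of PyTorch is installed):

import torch

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
print("CUDA (compiled against):", torch.version.cuda)
print("cuDNN enabled:", torch.backends.cudnn.enabled)
print("cuDNN version:", torch.backends.cudnn.version())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))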
If anyone knows how to solve the possible version problem, feel free to share.
Addendum:
- Environment: Windows 10, Python 3.6, PyTorch 1.1.0, CUDA 9.0, cuDNN 7.1.4
- Test case: the basic MNIST example from the PyTorch GitHub examples