GPU Error: RuntimeError: ProcessGroupNCCL is only supported with GPUs, no GPUs found!
pg = ProcessGroupNCCL(prefix_store, rank, world_size, pg_options)
RuntimeError: ProcessGroupNCCL is only supported with GPUs, no GPUs found!
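For context: this error comes from torch.distributed when it tries to create an NCCL process group on a machine where PyTorch cannot see any CUDA device. A minimal sketch that triggers it (single process; the address and port values are just placeholders for illustration):

import os
import torch.distributed as dist

# Placeholder rendezvous settings for a one-process group
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")

# The NCCL backend needs visible CUDA GPUs; if torch.cuda.is_available()
# returns False, this raises the RuntimeError shown above
dist.init_process_group(backend="nccl", rank=0, world_size=1)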
At first, this error made me wonder whether the GPU itself was broken, but my labmates were sure the GPU was fine! So I set off on a bug-hunting journey.
Poking around in the Python REPL, the culprit finally showed itself: it looked like a PyTorch problem. How annoying!
>>> import torch
>>> print(torch.cuda.is_available())
/home/xutianjiao/anaconda3/envs/py36/lib/python3.6/site-packages/torch/cuda/__init__.py:80: UserWarning: CUDA initialization: The NVIDIA driver on your system is too old (found version 9020). Please update your GPU driver by downloading and installing a new version from the URL: http://www.nvidia.com/Download/index.aspx Alternatively, go to: https://pytorch.org to install a PyTorch version that has been compiled with your version of the CUDA driver. (Triggered internally at ../c10/cuda/CUDAFunctions.cpp:112.)
return torch._C._cuda_getDeviceCount() > 0
False
>>> print(torch.cuda.get_device_name(0))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/xutianjiao/anaconda3/envs/py36/lib/python3.6/site-packages/torch/cuda/__init__.py", line 326, in get_device_name
return get_device_properties(device).name
File "/home/xutianjiao/anaconda3/envs/py36/lib/python3.6/site-packages/torch/cuda/__init__.py", line 356, in get_device_properties
_lazy_init() # will define _get_device_properties
File "/home/xutianjiao/anaconda3/envs/py36/lib/python3.6/site-packages/torch/cuda/__init__.py", line 214, in _lazy_init
torch._C._cuda_init()
RuntimeError: The NVIDIA driver on your system is too old (found version 9020). Please update your GPU driver by downloading and installing a new version from the URL: http://www.nvidia.com/Download/index.aspx Alternatively, go to: https://pytorch.org to install a PyTorch version that has been compiled with your version of the CUDA driver.
This error makes the situation clear: the NVIDIA driver is too old for the installed PyTorch build. In other words, the CUDA version that PyTorch was compiled against does not match what the driver supports (the reported driver version 9020 corresponds to CUDA 9.2).
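To see exactly what is mismatched, you can compare the CUDA toolkit version baked into the installed wheel against what the driver supports (nvidia-smi on the shell reports the highest CUDA version the driver can handle). A quick REPL check, outputs omitted here:

>>> import torch
>>> print(torch.__version__)    # installed PyTorch build
>>> print(torch.version.cuda)   # CUDA toolkit version this wheel was compiled against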
Checking the installed PyTorch version: 1.10+, far too new for this driver. OK, let's try installing an older version of torch!
pip install torch==1.7.0
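Note: as far as I know, the plain torch==1.7.0 wheel from PyPI is built against CUDA 10.2, which a CUDA 9.2-era driver may still reject. If the downgrade alone does not fix it, the previous-versions page on pytorch.org lists wheels built specifically for CUDA 9.2; the command below is my reconstruction of that install line, so double-check it against the official page for your platform:

pip install torch==1.7.0+cu92 -f https://download.pytorch.org/whl/torch_stable.html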
Done — the error is gone!