When we upload our code to the server and run it, we hit the following error:
THCudaCheck FAIL file=/pytorch/aten/src/THC/THCGeneral.cpp line=50 error=100 : no CUDA-capable device is detected
Traceback (most recent call last):
File "HyperAttentionDTI_main.py", line 185, in <module>
model = AttentionDTI(hp).cuda()
File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 304, in cuda
return self._apply(lambda t: t.cuda(device))
File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 201, in _apply
module._apply(fn)
File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 223, in _apply
param_applied = fn(param)
File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 304, in <lambda>
return self._apply(lambda t: t.cuda(device))
File "/usr/local/lib/python3.7/site-packages/torch/cuda/__init__.py", line 197, in _lazy_init
torch._C._cuda_init()
RuntimeError: cuda runtime error (100) : no CUDA-capable device is detected at /pytorch/aten/src/THC/THCGeneral.cpp:50
This happens because the GPU device setting is wrong. The machine does have a graphics card (if it had none at all, the failure would look different), but the code points CUDA at a device index that does not exist on this server, so CUDA reports that no CUDA-capable device is detected.
Solution:
Look at the CUDA-related part of the script we ran. The GPU index was set to 6, but this machine does not have that many cards; checking the server shows only one GPU, at index 0. So change the "6" in the first line of the code below to "0":
os.environ["CUDA_VISIBLE_DEVICES"] = "6"
if __name__ == "__main__":
    """select seed"""
    SEED = 1234
    random.seed(SEED)
    torch.manual_seed(SEED)
    torch.cuda.manual_seed_all(SEED)
    # torch.backends.cudnn.deterministic = True
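Hard-coding the index works, but a more defensive pattern is to validate the requested index against the number of cards actually present before exporting it. The helper below is a minimal sketch of that idea (the function `pick_visible_device` is my own name, not from the original script); it uses only the standard library so it behaves the same whether or not PyTorch can see a GPU.

```python
import os


def pick_visible_device(requested: str, device_count: int) -> str:
    """Return `requested` if it is a valid GPU index on this machine,
    otherwise fall back to "0" (the first card)."""
    if requested.isdigit() and int(requested) < device_count:
        return requested
    return "0"


# On a single-GPU server, asking for card 6 falls back to card 0,
# which is exactly the fix described above.
os.environ["CUDA_VISIBLE_DEVICES"] = pick_visible_device("6", device_count=1)
print(os.environ["CUDA_VISIBLE_DEVICES"])  # → 0
```

In practice the device count could come from counting the lines printed by `nvidia-smi -L`. Note that `CUDA_VISIBLE_DEVICES` must be set before the process makes its first CUDA call (e.g. before `model.cuda()`), or the setting has no effect.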