error message
THCudaCheck FAIL file=..\torch/csrc/generic/StorageSharing.cpp line=249 error=801 : operation not supported
Traceback (most recent call last):
  File "D:\Miniconda3\envs\dl\lib\multiprocessing\queues.py", line 236, in _feed
    obj = _ForkingPickler.dumps(obj)
  File "D:\Miniconda3\envs\dl\lib\multiprocessing\reduction.py", line 51, in dumps
    cls(buf, protocol).dump(obj)
  File "D:\Miniconda3\envs\dl\lib\site-packages\torch\multiprocessing\reductions.py", line 247, in reduce_tensor
    event_sync_required) = storage._share_cuda_()
RuntimeError: cuda runtime error (801) : operation not supported at ..\torch/csrc/generic/StorageSharing.cpp:249
reason
https://github.com/fastai/fastbook/issues/85
Sharing CUDA tensors between processes (CUDA IPC) is not supported on Windows, so PyTorch's multi-process data loading cannot hand CUDA tensors to DataLoader worker processes. The fix is to set the DataLoader's num_workers=0 so that all data loading happens in the main process.
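A minimal sketch of the workaround (the TensorDataset, batch size, and training-loop skeleton below are placeholders for illustration, not from the original post):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Placeholder dataset purely for illustration -- replace with your own Dataset.
dataset = TensorDataset(torch.randn(100, 3, 32, 32), torch.randint(0, 10, (100,)))

# num_workers=0 keeps data loading in the main process, so no tensors have to
# be pickled and shared across worker processes on Windows.
loader = DataLoader(dataset, batch_size=16, shuffle=True, num_workers=0)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
for images, labels in loader:
    # Move each batch to the GPU in the main process instead of in a worker.
    images, labels = images.to(device), labels.to(device)
    # ... forward / backward pass here ...
    break
```

Keeping the Dataset on the CPU and moving batches to the GPU inside the training loop, as above, also avoids handing CUDA tensors to the pickler in the first place.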