When running Faster R-CNN, we need to change the original model's 21 categories to our own number of categories. After the first modification the run reported no error, but after the second modification the following error appeared:
1. block: [0,0,0], thread: [16,0,0] Assertion `t >= 0 && t < n_classes` failed.
2. RuntimeError: CUDA runtime error (59): device-side assert triggered
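The article does not show where the category count is changed, so as a point of reference, here is a minimal sketch of how the class count is usually set when using torchvision's Faster R-CNN (the exact code in your project may differ; note that the background class counts as one of the classes):

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Hypothetical count: 20 foreground classes + 1 background = 21 total.
# Replace with your own number of classes + 1 for background.
NUM_CLASSES = 21

# Older torchvision versions use pretrained=True instead of weights=.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
# Swap the box predictor head so it outputs NUM_CLASSES scores.
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)
```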
The main explanation found online is as follows:
The cause of this problem is that the training data contains labels that exceed the number of categories. For example, if I configure 8 classes in total but a label of 9 appears in the training data, this error is reported. There is also a trap: if the training data contains a label of 0, the same error is reported. This seems odd, because we usually start counting from 0, but here a label of 0 also triggers the error (0 is typically reserved for the background class), so if your category labels start from 0, add 1 to every label.
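As a quick sanity check before training, you can validate the label range in each batch. This is only a sketch assuming torchvision-style target dicts with a "labels" key; adjust the key name if your dataset uses something else:

```python
import torch

def check_labels(targets, num_classes):
    """Assert that ground-truth labels lie in the valid range.

    Assumes 0 is reserved for background, so foreground labels
    must fall in [1, num_classes - 1].
    """
    for t in targets:
        labels = t["labels"]
        assert labels.min() >= 1, f"label 0 or negative label found: {labels}"
        assert labels.max() < num_classes, f"label out of range: {labels}"

# If your annotations start at 0, shift them up by one so that 0
# stays reserved for the background class:
#   t["labels"] = t["labels"] + 1
```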
Solution:
The first time I ran the program there were 16 categories (I had meant to delete 4 of them but had not noticed they were still there). After that run I found the four extra categories and deleted them, but when I ran the program again, the error above was reported. The reason is that the cache generated by the previous run must be deleted before each new run. Because I had not deleted it, the program still believed there were 16 categories while only 12 were provided. So if you hit this error, delete the cache and run again (a sketch of how to do this is shown below).
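The cache location depends on the Faster R-CNN repository you are using; the "data/cache" path below is only an assumption (it is a common default in py-faster-rcnn-style code) and should be adjusted to your project layout:

```python
import shutil
from pathlib import Path

# Assumed cache directory -- change this to wherever your repo
# stores the pickled dataset/roidb cache from the previous run.
cache_dir = Path("data/cache")
if cache_dir.exists():
    shutil.rmtree(cache_dir)
    print(f"Deleted stale dataset cache: {cache_dir}")
```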