yolox_s.pth to TensorRT error
python tools/trt.py -n yolox-s -c preModels/yolox_s.pth
error:
2021-08-26 06:47:14.864 | INFO | __main__:main:52 - loaded checkpoint done.
[TensorRT] INFO: [MemUsageChange] Init CUDA: CPU +300, GPU +0, now: CPU 1889, GPU 970 (MiB)
[TensorRT] WARNING: Tensor DataType is determined at build time for tensors not marked as input or output.
2021-08-26 06:47:21.311 | ERROR | __main__:<module>:77 - An error has been caught in function '<module>', process 'MainProcess' (12166), thread 'MainThread' (140257983133504):
Traceback (most recent call last):
> File "tools/trt.py", line 77, in <module>
main()
└ <function main at 0x7f8f9f310950>
File "tools/trt.py", line 62, in main
max_workspace_size=(1 << 32),
File "/home/moli/anaconda3/envs/tf25/lib/python3.7/site-packages/torch2trt-0.3.0-py3.7.egg/torch2trt/torch2trt.py", line 558, in torch2trt
builder.max_workspace_size = max_workspace_size
│ └ 4294967296
└ <tensorrt.tensorrt.Builder object at 0x7f8f94a50970>
AttributeError: 'tensorrt.tensorrt.Builder' object has no attribute 'max_workspace_size'
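The root cause is an API change: TensorRT 8 removed `max_workspace_size` from the `Builder` object and moved workspace configuration onto the builder config, but torch2trt 0.3.0 still sets it on the builder. A minimal sketch of the difference, using the standard TensorRT Python bindings (illustrative only, not part of the YOLOX script):

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)

# TensorRT 7.x style (what torch2trt 0.3.0 does) -- fails on TensorRT 8:
# builder.max_workspace_size = 1 << 32

# TensorRT 8.x style: the setting lives on the builder config instead
config = builder.create_builder_config()
config.max_workspace_size = 1 << 32  # 4 GiB; deprecated again in later 8.x releases
```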
The solution is to downgrade NVIDIA TensorRT to a 7.x release [2021-8-27].
Reference: https://github.com/NVIDIA-AI-IOT/torch2trt/issues/557
pip install nvidia-tensorrt==7.2.* --index-url https://pypi.ngc.nvidia.com
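After the downgrade, a quick check confirms which TensorRT version Python actually imports:

```python
import tensorrt as trt
print(trt.__version__)  # should now report a 7.2.x version
```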
Finally, the conversion succeeded with the following command. It took about 10 minutes, which was outrageously long.
python tools/trt.py -n yolox-s -c preModels/yolox_s.pth
GPU memory usage during the conversion, as reported by nvidia-smi: `0 N/A N/A 7491 C python 6491MiB`
## Output
cuda : True
2021-08-27 04:09:41.895 | INFO | __main__:main:57 - loaded checkpoint done.
[TensorRT] WARNING: Tensor DataType is determined at build time for tensors not marked as input or output.
[TensorRT] INFO: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
[TensorRT] INFO: Detected 1 inputs and 1 output network tensors.
2021-08-27 04:17:33.106 | INFO | __main__:main:70 - Converted TensorRT model done.
2021-08-27 04:17:33.286 | INFO | __main__:main:78 - Converted TensorRT model engine file is saved for C++ inference.
The converted model is saved to the default path YOLOX_outputs/yolox_s/:
ll YOLOX_outputs/yolox_s/
total 52348
22065949 Aug 27 04:17 model_trt.engine
31524975 Aug 27 04:17 model_trt.pth
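For Python inference, the saved `model_trt.pth` can be loaded back through torch2trt's `TRTModule`; `model_trt.engine` is the serialized engine for C++ inference, as the log above notes. A minimal loading sketch, assuming the output path above and the default 640x640 test size of yolox-s:

```python
import torch
from torch2trt import TRTModule

# Recreate the TensorRT-backed module from the state dict saved by tools/trt.py
model_trt = TRTModule()
model_trt.load_state_dict(torch.load("YOLOX_outputs/yolox_s/model_trt.pth"))

# Dummy input at yolox-s's default 640x640 test resolution (assumption)
x = torch.ones(1, 3, 640, 640).cuda()
with torch.no_grad():
    y = model_trt(x)  # forward pass runs through the TensorRT engine
```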