
ERROR: ../rtSafe/coreReadArchive.cpp (38) – Serialization Error in verifyHeader: 0 (Version tag does not match)

TensorRT reports an error when running inference with a serialized engine

Question

An error occurred after building the yolov5s model engine and trying to use it with the TRT inference service:

[TensorRT] ERROR: ../rtSafe/coreReadArchive.cpp (38) - Serialization Error in verifyHeader: 0 (Version tag does not match)
[TensorRT] ERROR: INVALID_STATE: std::exception
[TensorRT] ERROR: INVALID_CONFIG: Deserialize the cuda engine failed.


Cause of the error:

The TensorRT version used to build (serialize) the engine differs from the TensorRT version used at inference time. The two must be identical.

Solution

Confirm that the same TensorRT version is used at every stage. To see which TensorRT libraries the compiled yolo binary actually links against, inspect its dynamic dependencies:

ldd yolo

After aligning the versions, inference runs normally and is noticeably faster.

References

https://forums.developer.nvidia.com/t/tensorrt-error-rtsafe-corereadarchive-cpp-31-serialization-error-in-verifyheader-0-magic-tag-does-not-match/81872/3
https://github.com/wang-xinyu/tensorrtx.git

Configuring the TensorRT environment in Visual Studio

1. Install the NVIDIA driver (same as before).
2. Install CUDA 10. Do not choose the Express install; choose Custom and then select all components.
> Make sure you have CUDA installed and don't continue without it.
②. Create a new project: select the NVIDIA CUDA x.x template, then set the project name and path.
③. Remove the default kernel.cu file and add your own .cpp, .cu, and .h files.
④. Configure the project properties:
General – Character Set: Use Multi-Byte Character Set (or not, as your code requires)
C/C++ – General – Additional Include Directories:
E:\opencv300\opencv\build\include\opencv2
E:\opencv300\opencv\build\include\opencv
E:\opencv300\opencv\build\include
D:\tensorRTIntegrate\TensorRT-master\third_party\cub
D:\tensorRTIntegrate\TensorRT-master\include
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.0\include
D:\tensorRTIntegrate\TensorRT-master\plugin
C/C++ – Preprocessor – Preprocessor Definitions: add _CRT_SECURE_NO_WARNINGS

C/C++ – Precompiled Headers – Precompiled Header: Not Using Precompiled Headers
CUDA C/C++ – Common – Additional Include Directories:
C:\ProgramData\NVIDIA Corporation\CUDA Samples\v10.0\common\inc
CUDA C/C++ – Device – Code Generation: compute_75,sm_75
Note: the numbers depend on the GPU's compute capability; for the RTX 2080/2080 Ti it is 75.
Linker – General – Additional Library Directories:
E:\opencv300\opencv\build\x64\vc12\lib
D:\tensorRTIntegrate\lean\cuda10.0\lib
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.0\lib\x64
D:\tensorRTIntegrate\lean\TensorRT-6.0.1.5\lib
Linker – Input – Additional Dependencies:
cudadevrt.lib
cudart.lib
cudart_static.lib
cudnn.lib
cufft.lib
cufftw.lib
curand.lib
cusolver.lib
cusparse.lib
nppc.lib
nppial.lib
nppicc.lib
nppicom.lib
nppidei.lib
nppif.lib
nppig.lib
nppim.lib
nppist.lib
nppisu.lib
nppitc.lib
npps.lib
nvblas.lib
nvgraph.lib
nvml.lib
nvrtc.lib
OpenCL.lib
nvinfer.lib
nvinfer_plugin.lib
nvonnxparser.lib
nvparsers.lib
opencv_ts300.lib
opencv_ts300d.lib
opencv_world300.lib
opencv_world300d.lib
⑤. Place the TensorRT DLL files and the OpenCV DLL files in the project directory.
TensorRT DLL path: D:\tensorRTIntegrate\lean\TensorRT-6.0.1.5\lib. DLL files:

nvinfer.dll
nvinfer_plugin.dll
nvonnxparser.dll
nvparsers.dll
nvserialize.dll
OpenCV DLL path: E:\opencv300\opencv\build\x64\vc12\bin. DLL files:

opencv_ffmpeg300_64.dll
opencv_world300.dll
opencv_world300d.dll
With everything above configured, the project should compile.