
[TensorRT] INTERNAL ERROR: Assertion failed: mem != nullptr

[TensorRT] ERROR: FAILED_EXECUTION: std::exception
^C[TensorRT] INTERNAL ERROR: Assertion failed: mem != nullptr
../rtExt/cuda/cudaPointWiseRunner.cpp:28
Aborting…
The errors above appear when running inference with TensorRT, after the engine has been built successfully. The cause is:

No device (GPU) memory was bound for the outputs, so the engine wrote its results to an invalid address, producing the illegal access. The fix is to allocate a device buffer for every output binding, as in the helper below:

import numpy as np
import pycuda.autoinit  # creates and initializes a CUDA context
import pycuda.driver as cuda
import tensorrt as trt


def setup_binding_shapes(engine: trt.ICudaEngine, context: trt.IExecutionContext, host_inputs,
                         input_binding_idxs, output_binding_idxs):
    # Explicitly set the dynamic input shapes, so the dynamic output
    # shapes can be computed internally
    for host_input, binding_index in zip(host_inputs, input_binding_idxs):
        context.set_binding_shape(binding_index, host_input.shape)
    assert context.all_binding_shapes_specified

    host_outputs = []
    device_outputs = []
    for binding_index in output_binding_idxs:
        output_shape = context.get_binding_shape(binding_index)
        # Allocate a host buffer to hold the output after it is copied back from the device
        buffer = np.empty(output_shape, dtype=np.float32)
        host_outputs.append(buffer)
        # Allocate the corresponding output buffer on the device (GPU memory)
        device_outputs.append(cuda.mem_alloc(buffer.nbytes))

    # Output binding names, handy for logging/debugging
    output_names = [engine.get_binding_name(binding_idx) for binding_idx in output_binding_idxs]
    return host_outputs, device_outputs

Note that inside the for loop, device (GPU) memory must be allocated for every output; leaving any output without a device buffer triggers the mem != nullptr assertion shown above.
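
For context, here is a minimal usage sketch (not from the original post) showing where setup_binding_shapes fits in a full inference pass. It reuses the imports and the helper defined above; the engine file name model.engine, the single 1x3x224x224 float32 input, and the random test data are placeholders chosen for illustration, and it targets the same binding-index based TensorRT Python API (pre-8.5) used above.

# Minimal end-to-end sketch; names like "model.engine" and the input shape are placeholders
TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

with open("model.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Split binding indices into inputs and outputs
input_binding_idxs = [i for i in range(engine.num_bindings) if engine.binding_is_input(i)]
output_binding_idxs = [i for i in range(engine.num_bindings) if not engine.binding_is_input(i)]

# Copy the host inputs to device memory
host_inputs = [np.random.random((1, 3, 224, 224)).astype(np.float32)]  # placeholder input
device_inputs = [cuda.mem_alloc(h.nbytes) for h in host_inputs]
for h, d in zip(host_inputs, device_inputs):
    cuda.memcpy_htod(d, h)

# Set the dynamic shapes and allocate device memory for every output
host_outputs, device_outputs = setup_binding_shapes(
    engine, context, host_inputs, input_binding_idxs, output_binding_idxs)

# The bindings list needs one device pointer per binding, in binding-index order
bindings = [None] * engine.num_bindings
for idx, d in zip(input_binding_idxs, device_inputs):
    bindings[idx] = int(d)
for idx, d in zip(output_binding_idxs, device_outputs):
    bindings[idx] = int(d)

context.execute_v2(bindings)

# Copy the results back to the host buffers
for h, d in zip(host_outputs, device_outputs):
    cuda.memcpy_dtoh(h, d)

Filling the bindings list by binding index avoids assuming that input bindings always come before output bindings.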