Question
RuntimeError: [enforce fail at ..\c10\core\CPUAllocator.cpp:76] data. DefaultCPUAllocator: not enough memory: you tried to allocate 1105920 bytes.
Today I ran the yolov7 test model on my own computer using the CPU, since I don't have a GPU. Predicting a single independent image worked fine — very nice! However, when predicting a video (a sequence of images), it failed with insufficient memory:
DefaultCPUAllocator: not enough memory: you tried to allocate 1105920 bytes.
Moreover, the error did not appear on the second image; it appeared around the 17th image, as if memory was not being released between frames.
Analysis
In PyTorch, every tensor has a requires_grad attribute. If it is set to True, gradients for that tensor are computed automatically during backpropagation. requires_grad defaults to False. If a leaf tensor (a tensor you create yourself) has requires_grad set to True, then every tensor that depends on it will also have requires_grad=True, even if the other tensors it depends on have requires_grad=False.
Note:
requires_grad is an attribute of PyTorch's core Tensor data structure; it indicates whether a tensor needs to retain gradient information during computation. Taking linear regression as an example, the weights w and bias b are the quantities being trained; to find the best parameter values, we define a loss function and train by gradient backpropagation.
When requires_grad is set to False, no gradients are computed during backpropagation, which saves memory (or GPU memory).
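The propagation rule above can be checked directly in a Python shell. This is a minimal sketch; the tensor names here are illustrative, not from the original post:

```python
import torch

# Leaf tensors: requires_grad defaults to False
x = torch.ones(3)
w = torch.ones(3, requires_grad=True)  # a trainable parameter

# Any tensor computed from a requires_grad=True tensor also
# tracks gradients, even though x itself does not.
y = (w * x).sum()
print(x.requires_grad)  # False
print(y.requires_grad)  # True

# Inside torch.no_grad(), no autograd graph is recorded at all.
with torch.no_grad():
    z = (w * x).sum()
print(z.requires_grad)  # False
```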
The solution follows directly: tell the model not to record gradients during testing, since they are not actually needed there.
Solution:
Use torch.no_grad() so the model does not save gradients during the test:

with torch.no_grad():
    output, _ = model(image)  # add before the image computation
This way, when the model processes each image, no derivatives are computed and no gradients are saved!
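For the video case, the fix means wrapping the whole frame loop. Below is a minimal sketch, where a small nn.Conv2d stands in for the real yolov7 model and random tensors stand in for decoded video frames — both are assumptions for illustration:

```python
import torch
import torch.nn as nn

# Stand-in for the detection model (the real yolov7 model is
# assumed to be loaded elsewhere).
model = nn.Conv2d(3, 8, kernel_size=3, padding=1)
model.eval()  # inference mode (matters for dropout/batchnorm layers)

# Fake "video": 20 frames of shape (batch, channels, height, width)
frames = [torch.rand(1, 3, 64, 64) for _ in range(20)]

with torch.no_grad():  # no autograd graph is built for any frame
    for frame in frames:
        output = model(frame)
        # ...post-process detections for this frame here...
```

Because no computation graph is retained per frame, memory stays flat across the loop instead of accumulating until the allocator fails.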
Perfect solution!