When using PyTorch's DistributedDataParallel (DDP) to accelerate training, the following warning appears:
[W reducer.cpp:362] Warning: Grad strides do not match bucket view strides. This may indicate grad was not created according to the gradient layout contract, or that the param's strides changed since DDP was constructed. This is not an error, but may impair performance.
The reason is that the input tensor's memory layout is no longer contiguous after it has been transformed by transpose or permute, so its gradient's strides do not match what DDP's bucket views expect.
The fix is simple: call .contiguous() on the tensor after it has been transposed or permuted, which copies it into a contiguous memory layout.
For example:
# Problematic code:
input_tensor = ori_tensor.transpose(1, 3)
# Fixed code:
input_tensor = ori_tensor.transpose(1, 3).contiguous()
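The effect of .contiguous() can be verified directly with is_contiguous(), no DDP setup required. The sketch below uses an arbitrary tensor shape for illustration:

```python
import torch

# transpose() returns a view with permuted strides; the underlying
# memory is unchanged, so the view is not contiguous
ori_tensor = torch.randn(2, 4, 8, 16)
view = ori_tensor.transpose(1, 3)
print(view.is_contiguous())   # False

# .contiguous() copies the data into a fresh, contiguous buffer
fixed = view.contiguous()
print(fixed.is_contiguous())  # True

# The values are identical; only the memory layout differs
print(torch.equal(view, fixed))  # True
```

Note that .contiguous() is a no-op (returns the same tensor) when the layout is already contiguous, so adding it defensively costs nothing in that case.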