Solution:
Use tensor.detach().numpy() to convert the tensor to a NumPy array:
a = torch.ones(5, requires_grad=True)
b = a.detach().numpy()
print(b)
Problem analysis
A tensor that takes part in gradient computation (requires_grad=True) is attached to the autograd graph, so PyTorch refuses to convert it to NumPy directly with .numpy(). Call .detach() first to get a view of the tensor that is outside the graph, then call .numpy() on the result.
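A minimal sketch of both the failing and the working conversion (assuming a recent PyTorch build with NumPy support). Note that the array returned by .detach().numpy() shares memory with the original CPU tensor:

import torch

# A tensor attached to the autograd graph
a = torch.ones(5, requires_grad=True)

# Calling .numpy() directly raises RuntimeError,
# because the tensor still tracks gradients.
try:
    a.numpy()
except RuntimeError as e:
    print("direct .numpy() failed:", e)

# .detach() returns a view outside the graph; .numpy() then works.
b = a.detach().numpy()
print(b)           # [1. 1. 1. 1. 1.]
print(type(b))     # <class 'numpy.ndarray'>

# b shares memory with a: in-place edits to b change a as well.
b[0] = 7.0
print(a[0].item())  # 7.0

For a tensor on the GPU, move it to the CPU first: tensor.detach().cpu().numpy().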
Read More:
- [Solved] RuntimeError : PyTorch was compiled without NumPy support
- [Solved] Operator Not Allowed In Graph Error & Attribute Error Tensor object has no attribute numpy
- [Solved] RuntimeError: The size of tensor a (4) must match the size of tensor b (3) at non-singleton dim
- Normalize error: TypeError: Input tensor should be a float tensor…
- Pytorch directly creates a tensor on the GPU error [How to Solve]
- Autograd error in Python: runtimeerror: grad can be implicitly created only for scalar outputs
- can't multiply sequence by non-int of type 'numpy.float64'
- [Solved] RuntimeError: Numpy is not available (Associated Torch or Tensorflow)
- [Solved] RuntimeError: a view of a leaf Variable that requires grad is being used in an in-place
- RuntimeError: stack expects each tensor to be equal size [How to Solve]
- How to fix errors from old usage rejected after a NumPy update (removed in NumPy 1.20)
- [Solved] Pytorch Error: AttributeError: 'Tensor' object has no attribute 'backword'
- [Solved] RuntimeError: a view of a leaf Variable that requires grad is being used in an in-place operation
- [Solved] pytorch Error: KeyError: tensor(2, device='cuda:0')
- RuntimeError: Failed to register operator torchvision::_new_empty_tensor_op. +torch&torchversion Version Matching
- How to Solve Pytorch DataLoader Loading Error: UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe5 in position 1023
- RuntimeError: stack expects each tensor to be equal size, but got [x] at entry 0 and [x] at entry 1
- Python: RNN principle realized by numpy
- [Solved] Pytorch Error: RuntimeError: expected scalar type Double but found Float