1. Element-wise multiplication, x.mul(y): corresponding elements are multiplied but not summed; this is also known as the Hadamard product. Multiplying element-wise and then summing is a convolution (a short sketch of this follows the example below).
>>> a = torch.Tensor([[1,2], [3,4], [5, 6]])
>>> a
tensor([[1., 2.],
        [3., 4.],
        [5., 6.]])
>>> a.mul(a)
tensor([[ 1., 4.],
        [ 9., 16.],
        [25., 36.]])
# a*a is equivalent to a.mul(a)
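To illustrate the "multiply element-wise, then sum" remark above, here is a minimal sketch (the tensors and names are chosen for illustration, not from the original; note that the "convolution" in deep-learning frameworks is technically a cross-correlation, i.e. no kernel flip): mul followed by sum reproduces a single F.conv2d output.
>>> import torch
>>> import torch.nn.functional as F
>>> patch = torch.Tensor([[1., 2.], [3., 4.]])
>>> kernel = torch.Tensor([[0., 1.], [2., 3.]])
>>> patch.mul(kernel).sum()  # element-wise product, then sum: 0 + 2 + 6 + 12
tensor(20.)
>>> F.conv2d(patch.view(1, 1, 2, 2), kernel.view(1, 1, 2, 2))  # same value at the single output position
tensor([[[[20.]]]])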
2. Matrix multiplication, x.mm(y): the matrix sizes must satisfy (i, n) x (n, j).
>>> a
tensor([[1., 2.],
        [3., 4.],
        [5., 6.]])
>>> b = a.t() # transpose
>>> b
tensor([[1., 3., 5.],
        [2., 4., 6.]])
>>> a.mm(b)
tensor([[ 5., 11., 17.],
        [11., 25., 39.],
        [17., 39., 61.]])