Your array, changed a bit to make the result more interesting:
In [28]: A = np.array([[0.5, 0.25]])
In [29]: A.shape
Out[29]: (1, 2)
In [30]: A.T.shape
Out[30]: (2, 1)
Matrix multiplication
In [31]: A.T@A
Out[31]:
array([[0.25  , 0.125 ],
       [0.125 , 0.0625]])
Element-wise multiplication, with broadcasting, does the same thing, since the @ summation is over a size-1 dimension:
In [32]: A.T*A
Out[32]:
array([[0.25  , 0.125 ],
       [0.125 , 0.0625]])
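A quick sketch to confirm this equivalence: a (2,1) operand times a (1,2) operand broadcasts to (2,2), which is exactly the outer product that the matrix multiplication computes.

```python
import numpy as np

A = np.array([[0.5, 0.25]])

# (2,1) * (1,2) broadcasts both operands to (2,2) -- an outer product,
# identical to A.T @ A because the summed dimension has size 1.
elementwise = A.T * A
matmul = A.T @ A
print(np.array_equal(elementwise, matmul))  # True
print(matmul.shape)                         # (2, 2)
```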
A*A.T is the same element-wise product, but matrix multiplication produces a (1,1):
In [33]: [email protected]
Out[33]: array([[0.3125]])
If A were 1d by mistake, you would get a scalar value instead:
In [34]: A1 = A.ravel()
In [35]: A1.shape
Out[35]: (2,)
In [36]: A1.T.shape
Out[36]: (2,)
In [38]: A1.T@A1
Out[38]: 0.3125
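If the array did end up 1d, the (2,2) outer product can be recovered by adding the axes back (using `None`/`np.newaxis` indexing here; `np.outer` is an alternative):

```python
import numpy as np

A1 = np.array([0.5, 0.25])   # 1d, as if the outer brackets were dropped

# Restore the row/column dimensions explicitly:
outer = A1[:, None] @ A1[None, :]
print(outer.shape)   # (2, 2)

# Whereas 1d @ 1d is a scalar inner product:
print(A1 @ A1)       # 0.3125
```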
dot does the same thing:
In [39]: np.dot(A.T,A)
Out[39]:
array([[0.25  , 0.125 ],
       [0.125 , 0.0625]])
In [40]: np.dot(A1.T,A1)
Out[40]: 0.3125
Your question shows `A = np.array([[0.5, 0.5]])` followed by `print(np.dot(A.T,A))`. Is there a chance that you have `A = np.array([0.5, 0.5])` (a one dimensional array) by mistake? If your question does reflect your code accurately, then it might be helpful if you would share the result of `np.__version__`.
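A quick check to rule this out, sketched with the array from your question: the number of dimensions tells you immediately which result to expect.

```python
import numpy as np

A = np.array([[0.5, 0.5]])

# A (1,2) array gives a (2,2) result from np.dot(A.T, A);
# a (2,) array would give a scalar instead.
print(A.ndim, A.shape)         # 2 (1, 2)
print(np.dot(A.T, A).shape)    # (2, 2)
```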