
Short question: is this the fastest way to create a 16x16 (or, more generally, an n x n) matrix of zeros in Python with NumPy?

a = np.matrix(np.zeros((16, 16), dtype=int))
  • It would help to know the context. Are you creating multiple matrices? Why do you need np.matrix instead of np.ndarray? Commented Dec 8, 2018 at 23:17
  • I used np.matrix because I wanted to multiply matrices, or rather vectors with a matrix, like v.T * M * v (which returns a number). I didn't know that np.matrix is slow and almost(?) deprecated. Commented Dec 8, 2018 at 23:57
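The quadratic form from the comment above needs no np.matrix at all; a minimal sketch with plain arrays (the names v and M, and the 3x3 values, are illustrative):

```python
import numpy as np

# Quadratic form v.T * M * v written with plain ndarrays and the @ operator.
M = np.arange(9, dtype=float).reshape(3, 3)
v = np.array([1.0, 2.0, 3.0])

# With a 1D v, v @ M @ v already yields a scalar; no explicit transpose needed.
result = v @ M @ v
print(result)  # 192.0
```

Because NumPy treats the 1D v as a row vector on the left of @ and a column vector on the right, the explicit .T from the matrix-class version simply disappears.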

2 Answers


The best way to speed up the creation of this matrix would be to skip using the matrix class entirely and just use np.zeros:

a = np.zeros((16, 16))

Skipping the use of matrix gives a 10x speedup:

%%timeit
a = np.matrix(np.zeros((16, 16)))
4.95 µs ± 50.5 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)

%%timeit
a = np.zeros((16, 16))
495 ns ± 2.18 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)

numpy.matrix has been deprecated:

Note: It is no longer recommended to use this class, even for linear algebra. Instead use regular arrays. The class may be removed in the future.

Edit: There's a nice discussion about the reasons behind matrix's deprecation that Paul Panzer linked to in the comments.

A common reason why people use matrix instead of array is so that a * b performs matrix multiplication (instead of pairwise multiplication, as it does for standard arrays). However, you can now use the matrix multiplication operator @ to easily perform matrix multiplication using standard arrays:

a = np.arange(2*2).reshape(2,2)
b = np.arange(2*2, 2*2*2).reshape(2,2)
print('a\n%s\n' % a)
print('b\n%s\n' % b)
print('a * b (pairwise multiplication)\n%s\n' % (a * b))
print('a @ b (matrix multiplication)\n%s\n' % (a @ b))

Output:

a
[[0 1]
 [2 3]]

b
[[4 5]
 [6 7]]

a * b (pairwise multiplication)
[[ 0  5]
 [12 21]]

a @ b (matrix multiplication)
[[ 6  7]
 [26 31]]

5 Comments

Wow, I didn't know that (just started with numpy)... the reason I wanted to use a matrix is exactly as you guessed: to do matrix multiplication. It works, BUT shouldn't a = np.zeros((16, 16)); v = np.zeros(16); print(a @ v) return a column vector? I get a ROW vector instead...
Yeah, unfortunately the rules Numpy uses for shape coercion when doing matrix multiplication between 2D matrices and 1D vectors are a little confusing. Numpy will treat the 1D v as a column vector during the multiplication operation, but will still return the result as a normal 1D array.
Technically, Numpy doesn't treat 1D arrays as either row or column vectors, but will instead try and guess which they are by context. See the matmul docs (matmul is used to implement the behavior of the @ operator) for complete details.
@Cyberbunny here is a nice Q&A on the merits (or not) of the matrix class.
From the linear algebra point of view, there are situations where I want the expression to match the linear algebra exactly. When many transposes, inner and outer products, Jacobians, etc. are involved, the notation you are advocating gets painful. Not everything is about performance.
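The 1D-coercion behavior discussed in the comments above can be demonstrated directly; a small sketch (shapes follow the commenter's example):

```python
import numpy as np

a = np.zeros((16, 16))
v = np.zeros(16)

# matmul treats the 1D v as a column vector during the product,
# but returns the result as a plain 1D array, not a (16, 1) column.
print((a @ v).shape)  # (16,)

# For an explicit column vector, make v two-dimensional first:
col = a @ v.reshape(-1, 1)
print(col.shape)  # (16, 1)
```

So the "row vector" the asker saw is really a shape-(16,) array, which NumPy prints horizontally; reshaping to (16, 1) gives a genuine column.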

Skip matrix and use this directly:

a = np.zeros((16, 16))

