
This is my first SO question ever. Let me know if I could have asked it better :)

I am trying to find a way to splice together lists of sparse matrices into a larger block matrix.

I have Python code that generates lists of square sparse matrices, one matrix at a time. In pseudocode:

Lx = [Lx1, Lx2, ... Lxn]
Ly = [Ly1, Ly2, ... Lyn]
Lz = [Lz1, Lz2, ... Lzn]   

Since each individual Lx1, Lx2, etc. matrix is computed sequentially, they are appended to a list; I could not find a way to populate an array-like object "on the fly".

I am optimizing for speed, and the bottleneck is a computation of Cartesian products, item by item, similar to the pseudocode:

M += J[i,j] * (Lxi*Lxj + Lyi*Lyj + Lzi*Lzj)

for all combinations of 1 <= i, j <= n (J is an n x n matrix of numbers).
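For concreteness, here is a runnable sketch of that bottleneck loop, using random sparse stand-ins for the real operators (the sizes, densities, and zero-matrix accumulator below are made-up assumptions, not the actual code):

import numpy as np
import scipy.sparse as sp

# Random stand-ins for the real Lx/Ly/Lz lists, just to make this runnable.
n, size = 4, 8
Lx = [sp.random(size, size, density=0.2, format='csc') for _ in range(n)]
Ly = [sp.random(size, size, density=0.2, format='csc') for _ in range(n)]
Lz = [sp.random(size, size, density=0.2, format='csc') for _ in range(n)]
J = np.random.rand(n, n)

M = sp.csc_matrix((size, size))  # zero accumulator
for i in range(n):
    for j in range(n):
        # One Cartesian-product term per (i, j) pair: the bottleneck.
        M = M + J[i, j] * (Lx[i] * Lx[j] + Ly[i] * Ly[j] + Lz[i] * Lz[j])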

It seems that vectorizing this by computing all the Cartesian products in one step via (pseudocode):

L = [ [Lx1, Lx2, ...Lxn],
      [Ly1, Ly2, ...Lyn],
      [Lz1, Lz2, ...Lzn] ]
product = L.T * L

would be faster. However, options such as np.bmat, np.vstack, and np.hstack seem to require arrays as inputs, whereas I have lists.

Is there a way to efficiently splice the three lists of matrices together into a block matrix? Or, is there a way to generate an array of sparse matrices one element at a time and then np.vstack them together?

Reference: Similar MATLAB code, used to compute the Hamiltonian matrix for n-spin NMR simulation, can be found here:

http://spindynamics.org/Spin-Dynamics---Part-II---Lecture-06.php

  • Welcome to Stack Overflow! Well asked first question. :-) Commented Apr 8, 2016 at 15:51
  • Where are the little matrices coming from, and in what sparse matrix format? Commented Apr 8, 2016 at 15:56
  • Each Lxn matrix is computed by repetitive krons of these base matrices: sigma_x = csc_matrix(np.matrix([[0, 1/2], [1/2, 0]])), sigma_y = csc_matrix(np.matrix([[0, -1j/2], [1j/2, 0]])), sigma_z = csc_matrix(np.matrix([[1/2, 0], [0, -1/2]])), unit = csc_matrix(np.matrix([[1, 0], [0, 1]])) Commented Apr 8, 2016 at 16:12
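For reference, that kron construction typically looks like the sketch below: each Lx[k] acts as sigma_x on spin k and as the identity on every other spin. The helper name spin_operators and the loop structure are illustrative assumptions, not the asker's exact code.

import numpy as np
from scipy.sparse import csc_matrix, kron

sigma_x = csc_matrix(np.array([[0, 1/2], [1/2, 0]]))
sigma_y = csc_matrix(np.array([[0, -1j/2], [1j/2, 0]]))
sigma_z = csc_matrix(np.array([[1/2, 0], [0, -1/2]]))
unit = csc_matrix(np.array([[1, 0], [0, 1]]))

def spin_operators(sigma, nspins):
    # L[k] = unit kron ... kron sigma (slot k) kron ... kron unit
    ops = []
    for k in range(nspins):
        op = csc_matrix(np.array([[1.0]]))  # 1x1 seed for the kron chain
        for slot in range(nspins):
            op = kron(op, sigma if slot == k else unit, format='csc')
        ops.append(op)
    return ops

Lx = spin_operators(sigma_x, 3)  # three 8x8 sparse matrices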

3 Answers


This is what scipy.sparse.bmat is for:

import scipy.sparse

L = scipy.sparse.bmat([Lx, Ly, Lz], format='csc')            # 3 x n blocks
LT = scipy.sparse.bmat(list(zip(Lx, Ly, Lz)), format='csr')  # n x 3 blocks; not equivalent to L.T
product = LT * L
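A quick self-contained check (with random stand-in matrices; s is the assumed sub-matrix size) that block (i, j) of product is exactly Lx[i]*Lx[j] + Ly[i]*Ly[j] + Lz[i]*Lz[j], which is what the J[i, j] weights need to multiply:

import scipy.sparse as sp

n, s = 3, 4
Lx = [sp.random(s, s, density=0.5, format='csc') for _ in range(n)]
Ly = [sp.random(s, s, density=0.5, format='csc') for _ in range(n)]
Lz = [sp.random(s, s, density=0.5, format='csc') for _ in range(n)]

L = sp.bmat([Lx, Ly, Lz], format='csc')            # shape (3s, ns)
LT = sp.bmat(list(zip(Lx, Ly, Lz)), format='csr')  # shape (ns, 3s)
product = LT * L                                   # shape (ns, ns)

i, j = 1, 2
block = product[i*s:(i+1)*s, j*s:(j+1)*s]
expected = Lx[i] * Lx[j] + Ly[i] * Ly[j] + Lz[i] * Lz[j]
assert abs(block - expected).max() < 1e-10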

Comments

That blends everything into one matrix. I actually need to keep the boundaries between individual sub-matrices Lx[n]/Ly[n]/Lz[n]. For example, L[1,0] will contain [Lx1, Ly1, Lz1] x [Lx0, Ly0, Lz0]. Then, the number in J[1,0] will be multiplied with this matrix stored in L[1,0]. I suppose you could multiply the J values with slices of the mega-matrix (thus re-creating the submatrices), but that seems clunky.
@GeoffreySametz: You can't "keep the boundaries" like you're thinking of, especially with sparse matrices, which are inherently two-dimensional. Even with dense arrays, "keeping the boundaries" would make things really clunky to work with.
Thanks. I've started implementing a slice approach and will see how that goes.
It turns out this won't work, because the transpose of the sum matrix isn't the same as the transpose of the block elements: math.stackexchange.com/questions/246289/…
@GeoffreySametz: Oh, indeed. That's not really L.T you want there. I interpreted your pseudocode differently from how you did. The new version of the answer should produce the correct result.

I was able to get a tenfold speed increase by not using sparse matrices and instead using an array of arrays.

Lx = np.empty((1, nspins), dtype='object')
Ly = np.empty((1, nspins), dtype='object')
Lz = np.empty((1, nspins), dtype='object')

These are populated with the individual Lx arrays (formerly sparse matrices) as they are generated. Using the array structure allows the transpose and Cartesian product to behave as desired:

Lcol = np.vstack((Lx, Ly, Lz)).real
Lrow = Lcol.T  # As opposed to the sparse version of the code, this works!
Lproduct = np.dot(Lrow, Lcol)

The individual Lx[n] matrices are still "bundled", so Lproduct is an n x n array whose cells are matrices. This means element-wise multiplication of the n x n J array with Lproduct works:

scalars = np.multiply(J, Lproduct)

Each matrix element is then added to the final Hamiltonian matrix:

for n in range(nspins):
    for m in range(nspins):
        M += scalars[n, m].real
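Pieced together, the approach looks roughly like this (a sketch: the random np.matrix stand-ins and the zero-initialized M are assumptions; np.matrix cells are used so that * inside np.dot means matrix multiplication):

import numpy as np

nspins = 4
size = 2 ** nspins

Lx = np.empty((1, nspins), dtype='object')
Ly = np.empty((1, nspins), dtype='object')
Lz = np.empty((1, nspins), dtype='object')
for k in range(nspins):
    # Stand-ins for the real spin operators built by the kron construction.
    Lx[0, k] = np.matrix(np.random.rand(size, size))
    Ly[0, k] = np.matrix(np.random.rand(size, size))
    Lz[0, k] = np.matrix(np.random.rand(size, size))

J = np.random.rand(nspins, nspins)

Lcol = np.vstack((Lx, Ly, Lz)).real  # 3 x nspins object array
Lrow = Lcol.T                        # nspins x 3
Lproduct = np.dot(Lrow, Lcol)        # nspins x nspins; each cell is a size x size matrix

scalars = np.multiply(J, Lproduct)   # scale each cell by the matching J entry

M = np.zeros((size, size))
for n in range(nspins):
    for m in range(nspins):
        M += scalars[n, m].real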



I have a "vectorized" solution, but it's almost twice as slow as the original code. Both the bottleneck shown above and the final dot product in the last line below take about 95% of the calculation time, according to kernprof tests.

from scipy.sparse import bmat

# Create the matrix of column vectors from these lists
L_column = bmat([Lx, Ly, Lz], format='csc')

# Create the matrix of row vectors (via a transpose of the matrix with
# transposed blocks)
Lx_trans = [x.T for x in Lx]
Ly_trans = [y.T for y in Ly]
Lz_trans = [z.T for z in Lz]
L_row = bmat([Lx_trans, Ly_trans, Lz_trans], format='csr').T

product = L_row * L_column
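Note that the transposed-blocks construction gives the same row matrix as zipping the original blocks, since transposing a block matrix both rearranges the blocks and transposes each one. A quick equivalence check with stand-in matrices:

import scipy.sparse as sp

n, s = 3, 4
Lx = [sp.random(s, s, density=0.5, format='csc') for _ in range(n)]
Ly = [sp.random(s, s, density=0.5, format='csc') for _ in range(n)]
Lz = [sp.random(s, s, density=0.5, format='csc') for _ in range(n)]

via_transpose = sp.bmat([[x.T for x in Lx],
                         [y.T for y in Ly],
                         [z.T for z in Lz]], format='csr').T
direct = sp.bmat(list(zip(Lx, Ly, Lz)), format='csr')
assert abs(via_transpose - direct).max() == 0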

