
Consider Hamiltonians $H^{z}, H^{x}$ and the unitary operator $U= e^{-i H^{z}} e^{-i H^{x}}$. In physics language, I specifically want the kicked-field Ising model: \begin{align} U & =\mathrm{e}^{-i H^z} \mathrm{e}^{-i H^x}, \quad \text { with } \\ H^z & =J \sum_{j=1}^L \sigma_j^z \sigma_{j+1}^z+\sum_{j=1}^L h_j \sigma_j^z, \\ H^x & =b \sum_{j=1}^L \sigma_j^x . \end{align}

The fields $h_j$ are chosen randomly from a Gaussian distribution, and $\sigma^{z}, \sigma^{x}$ are the standard Pauli matrices: PauliMatrix[3] and PauliMatrix[1] in Mathematica.

The following code generates a list of the two Hamiltonians $\{H^{z}, H^{x}\}$ as sparse matrices, with periodic boundary conditions:


Hkfi[J_, g_, n_] :=
 Module[{X, Z, id, hz, hx},
  X = SparseArray[N[PauliMatrix[1]]];
  Z = SparseArray[N[PauliMatrix[3]]];
  id[m_] := IdentityMatrix[2^m, SparseArray];

  (* Ising coupling with periodic boundary conditions, plus random longitudinal fields *)
  hz = J (KroneckerProduct[Z, id[n - 2], Z] +
       Total[Table[
         KroneckerProduct[id[s - 1], Z, Z, id[n - s - 1]], {s, 1, n - 1}]]) +
    Total[Table[
      RandomVariate[NormalDistribution[0, 1]] KroneckerProduct[id[s - 1], Z, id[n - s]],
      {s, 1, n}]];

  (* transverse field *)
  hx = g Total[Table[KroneckerProduct[id[s - 1], X, id[n - s]], {s, 1, n}]];

  {hz, hx}
  ]
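
As a quick sanity check (with arbitrary parameter values): in this basis $H^z$ must come out diagonal and $H^x$ must have an empty diagonal:

{hz, hx} = Hkfi[1., 0.5, 4];
{DiagonalMatrixQ[hz], Norm[Diagonal[hx]] == 0.}
(* {True, True} *)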

I would like to find some of the eigenvalues of $U$ near a given phase $\phi$ (i.e., near $e^{-i\phi}$ on the unit circle) in an efficient manner. For now, I find them by using the Eigenvalues command with Method -> "Arnoldi" and an appropriate shift. However, there should be better ways that never construct the matrix $U$ explicitly, and I would appreciate help with any such method.
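
Concretely, my current baseline looks roughly like this (a sketch with made-up parameter values; note that it builds the dense $U$, which is exactly what I want to avoid):

{hz, hx} = Hkfi[1., 0.2, 10];  (* made-up parameters *)
U = DiagonalMatrix[Exp[-I Normal[Diagonal[hz]]]] . MatrixExp[-I Normal[hx]]; (* dense U *)
ϕ = -Pi/3;
(* a few eigenvalues closest to the shift Exp[-I ϕ] *)
Eigenvalues[U, 6, Method -> {"Arnoldi", "Criteria" -> "Magnitude", "Shift" -> Exp[-I ϕ]}]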

  • Why confuse matters with polynomial filtering? Do you want eigenvalues of $U$, or do you want eigenvalues of $U$ using this specific method? – Commented Mar 3 at 10:08
  • "Confuse matters with polynomial filtering" because, as claimed in the paper, it lets me reach larger sizes with less memory. But I am open to the problem either way. For now I can do it with the classic shift-invert (in MMA) up to $2^{14}$, but starting from $2^{15}$ my RAM usage goes to 160GB (I have access to a 256GB workstation)!! If you can help me minimize it either way, I would appreciate it, thanks. I changed the title accordingly. In any case, if I could reach even $2^{16}$ without such drastic RAM usage, that would be great. – Commented Mar 3 at 10:12
  • Just to be clear: as of now I am explicitly constructing the dense $U$ and then finding a few eigenvalues of this dense matrix. The whole process is already time- as well as memory-consuming. – Commented Mar 3 at 10:18
  • I find this all confusing because (seemingly, as always, sigh) the "physical notation" does not define the entities involved. "Kicked-field Ising model" does not tell me anything. Bear with us -- this is not a physicists' community. – Commented Mar 3 at 13:17
  • My bad, sorry, it should be $e^{-i m \phi}$. – Commented Mar 3 at 22:16

2 Answers


In principle, the Arnoldi method needs only a function that evaluates U.v for a vector v. Such a function can be obtained without computing U as a dense matrix first:

{Hz, Hx} = Hkfi[J, g, n];
(* Hz is diagonal, so Exp[-I Hz] is just elementwise exponentiation of its diagonal *)
diag = Exp[-I Normal[Diagonal[Hz]]];
A = -I Hx;
DotU = v |-> diag MatrixExp[A, v];
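
As a quick consistency check against the dense construction (with made-up small parameters):

{Hz, Hx} = Hkfi[1., 0.2, 6];
diag = Exp[-I Normal[Diagonal[Hz]]];
A = -I Hx;
v = RandomComplex[{-1 - I, 1 + I}, 2^6];
Udense = DiagonalMatrix[diag] . MatrixExp[Normal[A]];
Norm[DotU[v] - Udense . v] (* should be at machine precision *)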

Note that A is sparse, so MatrixExp[A, v] can be evaluated much more quickly than first forming U = diag MatrixExp[A] and then computing U.v, while giving essentially the same result. Computing U = diag MatrixExp[A] has probably complexity $O(N^3)$, where $N = 2^n$. A single product A.v costs probably only about $O(N)$. I am not sure about the complexity of MatrixExp[A, v], but experimentally it seems to be only slightly above $O(N)$. (I guess because the exponential series converges very quickly for the arguments I put in.) In fact, I suppose that what MatrixExp[A, v] does is simply to evaluate a truncated exponential series of A applied to the vector v, as in the following function (see also my answer here):

MatrixPolynomial[DotA_Function, coefficients_?VectorQ, v_?VectorQ] :=
 Module[{degree, Akv, result},
  degree = Length[coefficients] - 1;
  Akv = v;                          (* holds A^k.v *)
  result = coefficients[[1]] v;
  Do[
   Akv = DotA[Akv];
   result += coefficients[[k + 1]] Akv,
   {k, 1, degree}];
  result
  ]
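
For example, feeding it the Taylor coefficients of $\exp$ reproduces MatrixExp[A, v] up to the truncation error (using the concrete A from the check above):

v = RandomComplex[{-1 - I, 1 + I}, Length[A]];
expCoeffs = Table[1./k!, {k, 0, 30}];
Norm[MatrixPolynomial[w |-> A . w, expCoeffs, v] - MatrixExp[A, v]]
(* tiny, provided 30 terms suffice for the norm of A at hand *)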

I expect that MatrixExp[A, v] does not do this with a fixed loop count, but rather by tracking an upper bound for the size of the tail of the series and stopping when the Taylor remainder is sufficiently small. (One has to estimate the spectral norm, and that can be done with the Power Method, which also needs only powers of A applied to a fixed vector v.)
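
Such a norm estimate can be sketched as follows (my own quick version; since the A above is normal, the dominant eigenvalue magnitude it returns coincides with the spectral norm):

PowerMethodNorm[DotA_Function, n_, maxIter_ : 100, tol_ : 10.^-6] :=
 Module[{v, w, norm = 0., oldNorm = Infinity, k = 0},
  v = Normalize[RandomComplex[{-1 - I, 1 + I}, n]];
  While[k < maxIter && Abs[norm - oldNorm] > tol norm,
   w = DotA[v];
   oldNorm = norm;
   norm = Norm[w];   (* estimate of the dominant |eigenvalue| *)
   v = w/norm;
   k++];
  norm]

PowerMethodNorm[v |-> A . v, Length[A]]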

So, in a perfect world, we would define

coefficients = Table[Exp[-I m \[Phi]], {m, 0, k}]; (* k = truncation degree of the filter *)
DotgU = v |-> MatrixPolynomial[DotU, coefficients, v];

and call a matrix-free Arnoldi implementation with the function DotgU as argument. Sadly, Mathematica does not offer an API for that, at least not to my knowledge.

There are several ways around this:

  1. Writing a LibraryLink function that calls ARPACK. This is a bit difficult, because one has to keep the ARPACK library alive while communicating with it from within Mathematica.

  2. Rewriting the Arnoldi method from scratch in Mathematica, as I did here. It is quite difficult to reach the same level of sophistication as ARPACK.

  3. Writing a command-line tool in C (or whatever you like) that calls ARPACK directly and can also compute the action MatrixExp[A, v]. Then write the sparse matrices I Hz and I Hx to file, load them into the command-line program, do the eigenanalysis, and write the results back to file.

  4. You can also start a Python session from within Mathematica, send the data over, let the computations be performed there, and then get the results back.

The latter would start with

$arnoldiPython = StartExternalSession[Association[
    "System" -> "Python",
    "Evaluator" -> <|"Dependencies" -> {"numpy", "scipy"}, 
      "EnvironmentName" -> "arnoldienv"|>
    ]];

Then you can send strings of Python code over there like this:

ExternalEvaluate[$arnoldiPython,
 "
import scipy as sp; 
import numpy as np; 
"]

You can send data, e.g., a vector v. To set a vector named "v" in Python, you can do

ExternalEvaluate[$arnoldiPython,
 Association[
  "Command" -> "
def set_fun(x) : 
    global v
    v = np.array(x)

set_fun
",
  "Arguments" -> {{1, 2, 3}}
  ]
 ]

And you can get it back with

w = Normal[ExternalEvaluate[$arnoldiPython, "v"]]

{1, 2, 3}

This works, but it is a bit awkward to define the function set_fun and then call it. Also, I think the vector v is sent as a general Python list -- not as a numpy.array. The latter would certainly be more desirable. There is certainly a better way; I just do not know it at the moment. Fortunately, there are many others on this site who are more proficient with the Python interface than I am.
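
To sketch where this would be heading once the data is over there (untested, and the helper kfi_eigs with its argument format is made up; only the scipy calls themselves are real API): ARPACK's implicitly restarted Arnoldi is exposed as scipy.sparse.linalg.eigs, which accepts a matrix-free LinearOperator:

ExternalEvaluate[$arnoldiPython, "
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def kfi_eigs(diag, A, phi, k):
    # diag : vector of exp(-i Hz) diagonal entries; A : the sparse matrix -i Hx
    diag = np.asarray(diag); A = sp.csr_matrix(A)
    n = A.shape[0]
    s = np.exp(-1j*phi)
    # matrix-free U.v = diag * exp(A).v
    U = spla.LinearOperator((n, n),
        matvec=lambda v: diag*spla.expm_multiply(A, v), dtype=complex)
    # (U - s I)^{-1} via GMRES, again matrix-free
    shifted = spla.LinearOperator((n, n),
        matvec=lambda v: U.matvec(v) - s*v, dtype=complex)
    OPinv = spla.LinearOperator((n, n),
        matvec=lambda b: spla.gmres(shifted, b)[0], dtype=complex)
    # ARPACK's implicitly restarted Arnoldi in shift-invert mode
    return spla.eigs(U, k=k, sigma=s, OPinv=OPinv, return_eigenvectors=False)
"]

With sigma set, eigs runs in shift-invert mode and uses OPinv for the inner solves, so U itself is never formed explicitly.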

  • I think the poster would like to know how to implement the Arnoldi method rather than how to apply the matrix to a vector, which would be only one part of the process. – Commented Mar 3 at 14:47
  • In the making... – Commented Mar 3 at 14:49
  • Thanks for the time once again. It seems difficult to really challenge ARPACK (from another post). But since you mentioned point 3: would it be possible to do the eigenvalue-finding part in Python (I started learning a bit just for this problem recently) and do the rest, like forming the Hamiltonians (which I believe is already fast using the code I wrote), in Mathematica only? – Commented Mar 3 at 22:47
  • Okay, with a bit more searching I found ARPACK's way: it uses implicitly restarted Arnoldi, similar to what you have but a bit more sophisticated. It seems to me that it is much better to just use ARPACK and get it over with than to write this ourselves (too much work and too many nuances that are otherwise readily available). – Commented Mar 4 at 5:04
  • @HenrikSchumacher I was able to work out polynomial filtering using your code alone. I was doing some standard tests and found the source of my earlier errors: your list of coefficients should start from m = 0 (sorry, I blindly copied it before, so I couldn't see it). Everything is fixed now. I found some 300 eigenvalues for $N=10$ within machine precision in a completely matrix-free manner. I can probably go to very high system sizes now, maybe $2^{18}$, with my workstation. – Commented Mar 7 at 14:11

One can efficiently evaluate selected eigenvalues of a unitary operator built from sparse Hermitian matrices by using the Shift-and-Invert Method.

This is an iterative method in which

$$ (U-\sigma I) x_{k+1}=x_{k}. $$

Here, $I$ is the identity matrix, $U$ is given by (following the OP)

$$ U=e^{-ih_z}e^{-ih_x}, $$

and $\sigma=e^{-i\phi}$ is the shift placed at the selected eigenvalue. The action of $(U-\sigma I)$ on a vector can be evaluated efficiently as follows:

ϕ = -Pi/3;
{hz, hx} = Hkfi[1, 0.2, 6];
(* action of (U - σ I) on a vector z, never forming U *)
fU[z_] := Exp[-I Diagonal[hz]] MatrixExp[-I hx, z] - Exp[-I ϕ] z

where I set some specific value of $\phi$. The iteration, in its simplest form, can be written as

x = RandomReal[{0, 1}, First@Dimensions[hz]];
x = x/x[[1]];    (* fix the phase convention via the first component *)
x = x/Norm[x];
tol = 10^-3;

Dynamic[err]     (* live display of the residual *)
Do[
 (* matrix-free linear solve: y = (U - σ I)^-1 . x *)
 y = LinearSolve[fU, x, Method -> "Krylov"];
 y = y/y[[1]];
 y = y/Norm[y];
 err = Norm[y - x];
 If[err < tol, Break[]];
 x = y,
 {i, 100}]
(* Rayleigh quotient: the eigenvalue estimate y†.U.y *)
e = Conjugate[y] . MatrixExp[-I hz, MatrixExp[-I hx, y]];

Finally, we plot all eigenvalues, the one eigenvalue found with the matrix-free approach (red), and the $\phi$-direction (blue line):

ComplexListPlot[Eigenvalues[MatrixExp[-I hz] . MatrixExp[-I hx]],
 Prolog -> {Red, Disk[ReIm[e], 0.05], Blue, InfiniteLine[{{0, 0}, ReIm[Exp[-I ϕ]]}]}]

(Figure: eigenvalues of $U$ on the unit circle, with the eigenvalue found by the Shift-and-Invert Method marked in red on the blue line through $e^{-i\phi}$.)

In my solution I was inspired by the proposal of @user21. I leave further optimisation to you: the function fU can be written slightly more efficiently, as @HenrikSchumacher proposed, and the implementation can be done in a more functional style.

It should be noted that if you're only interested in one eigenvalue, the Shift-and-Invert Method typically converges faster than the Arnoldi Method.

I can routinely run this method for $n=20$, consuming 2GB. Options to try for LinearSolve would be, for instance, Method -> {"Krylov", "Tolerance" -> 0.3, "MaxIterations" -> 10}. As can be seen from the plot, the eigenvalues are densely spaced. Therefore, to improve convergence, one may update the value of $\phi$ once the algorithm has approached the given value.
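
For instance (a sketch of my own, untested): since fU refers to the global symbol ϕ, recentring the shift at the current Rayleigh-quotient estimate takes two lines inside the loop:

(* after computing and normalizing y inside the Do loop: *)
eEst = Conjugate[y] . (Exp[-I Diagonal[hz]] MatrixExp[-I hx, y]); (* y†.U.y *)
ϕ = -Arg[eEst]; (* fU now shifts toward the current phase estimate *)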

  • Thank you for the solution and your time. It is indeed clever. It becomes slow for larger sizes (which I believe is the usual trade-off of matrix-free methods: you give up time for memory savings?). – Commented Mar 4 at 2:39
  • @Erosannin There are so many ways to improve it! 1) Gradually increase the accuracy of the Krylov algorithm over the iterations; there are 8 options for this method to play with. 2) MatrixExp does not have options for tuning, so you can replace it with NDSolve in the initial steps. 3) Optimise the convergence criterion. – Commented Mar 4 at 6:51
  • I had a detailed look at your clever code over the last 3 days. Combining it with Henrik's matrix-free Arnoldi code, at least from my calculations, it is really good for practical usage (in fact, it saves a lot of memory compared with MMA's built-in route). I could go to really large system sizes without any problem. It can even be used to find something like $200-300$ or more eigenvalues (this number is for sparse matrices of size $2^{12}$) within machine precision, with some clever restarting. – Commented Mar 7 at 13:57
