There are at least two kinds of possible optimizations: working in a smarter way (algorithmic improvements) and making the same work run faster (implementation improvements).
On the algorithmic side, you're using the Euler method, which is only first order (its global error is proportional to the step size) and has a fairly small stability region. In other words, it's not very efficient.
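For reference, the explicit Euler method just takes repeated first-order steps x(t + dt) ≈ x(t) + dt·f(x, t). A minimal sketch (this is a generic implementation, not the asker's actual code):

```python
import numpy as np

def euler(f, x0, t):
    """Integrate dx/dt = f(x, t) with the explicit (first-order) Euler method,
    reporting the solution at every point of the time grid t."""
    x = np.empty((len(t), len(x0)))
    x[0] = x0
    for i in range(len(t) - 1):
        dt = t[i + 1] - t[i]
        x[i + 1] = x[i] + dt * f(x[i], t[i])
    return x
```

Because each step only uses the slope at the start of the interval, halving dt roughly halves the final error, so high accuracy gets expensive quickly.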
On the implementation side, if you're using the standard CPython interpreter, this kind of code is going to be quite slow. To get around that, you could simply try running it under PyPy: its just-in-time compiler can make numerical code run perhaps 100x faster. You could also write a custom C or Cython extension.
But there's a better way. Solving systems of ordinary differential equations is quite common, so scipy, one of the core scientific libraries in Python, wraps fast, battle-tested Fortran libraries to solve them. By using scipy you get both the algorithmic improvements (its integrators are higher order) and a fast implementation.
Solving the Lorenz 95 model for a set of perturbed initial conditions looks like this:
import numpy as np


def lorenz95(x, t):
    # Cyclic Lorenz 95 system: np.roll handles the periodic indices,
    # so x_{i-1} is np.roll(x, 1), x_{i+1} is np.roll(x, -1), etc.
    return np.roll(x, 1) * (np.roll(x, -1) - np.roll(x, 2)) - x + F


if __name__ == '__main__':
    import matplotlib.pyplot as plt
    from scipy.integrate import odeint

    SIZE = 40
    F = 8
    t = np.linspace(0, 10, 1001)
    x0 = np.random.random(SIZE)
    for perturbation in 0.1 * np.random.randn(5):
        # Perturb only the first coordinate of the initial condition
        x0i = x0.copy()
        x0i[0] += perturbation
        x = odeint(lorenz95, x0i, t)
        plt.plot(t, x[:, 0])
    plt.show()
And the output (setting np.random.seed(7); yours may differ) is nicely chaotic. Small perturbations in the initial conditions (in just one of the coordinates!) produce very different solutions:

But is it really faster than Euler time stepping? For dt = 0.01 it is almost three times faster, but the solutions only match at the very beginning.

If dt is reduced, the Euler solution gets increasingly similar to the odeint solution, but it takes much longer. Notice how the smaller the dt, the later the Euler solutions lose track of the odeint solution. The most precise Euler run took 600x longer to compute the solution up to t=6 than odeint took up to t=10. See the full script here.
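The kind of comparison behind those numbers can be sketched like this (the dt and time span here are my assumptions; the linked script is the actual benchmark):

```python
import numpy as np
from scipy.integrate import odeint

SIZE = 40
F = 8

def lorenz95(x, t):
    return np.roll(x, 1) * (np.roll(x, -1) - np.roll(x, 2)) - x + F

def euler(f, x0, t):
    # Explicit first-order stepping on the same grid odeint reports on.
    x = np.empty((len(t), len(x0)))
    x[0] = x0
    for i in range(len(t) - 1):
        x[i + 1] = x[i] + (t[i + 1] - t[i]) * f(x[i], t[i])
    return x

np.random.seed(7)
x0 = np.random.random(SIZE)
t = np.linspace(0, 6, 601)  # dt = 0.01
x_odeint = odeint(lorenz95, x0, t)
x_euler = euler(lorenz95, x0, t)
# Worst-coordinate gap at each time: small at first, then it blows up
# as the first-order error is amplified by the chaotic dynamics.
drift = np.abs(x_odeint - x_euler).max(axis=1)
```

Plotting `drift` against `t` makes the divergence point easy to spot for each choice of dt.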

In the end, this system is so unstable that I guess not even the odeint solution is accurate over the whole plotted time span.
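One way to probe that (a sketch; the tolerance values are my own choice, not from the post) is to rerun odeint with much tighter tolerances and see when the two answers part ways:

```python
import numpy as np
from scipy.integrate import odeint

SIZE = 40
F = 8

def lorenz95(x, t):
    return np.roll(x, 1) * (np.roll(x, -1) - np.roll(x, 2)) - x + F

np.random.seed(7)
x0 = np.random.random(SIZE)
t = np.linspace(0, 10, 1001)
x_default = odeint(lorenz95, x0, t)                        # default tolerances
x_tight = odeint(lorenz95, x0, t, rtol=1e-12, atol=1e-12)  # much stricter
# Roughly: the trajectory is trustworthy up to the first time the two
# runs disagree by more than some threshold of your choosing.
error = np.abs(x_default - x_tight).max(axis=1)
```

If `error` stays tiny over the plotted window, the odeint curves shown above are at least self-consistent; if it blows up before t=10, only the early part of the plot reflects the true trajectory.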