
Another stupid question from my side ;) I have some issues with the following snippet, with len(x) = len(y) = 7,700,000:

from numpy import *

# x, y: 1D coordinate arrays; grid: 2D array of cell values;
# arr starts out as an empty 1D array
for k in range(len(x)):
    # map the coordinate to a cell index; points exactly on the
    # upper edge go into the last cell via index -1
    if x[k] == xmax:
        xind = -1
    else:
        xind = int(floor((x[k]-xmin)/xdelta))
    if y[k] == ymax:
        yind = -1
    else:
        yind = int(floor((y[k]-ymin)/ydelta))

    arr = append(arr, grid[xind, yind])

All variables are floats or integers except arr and grid; arr is a 1D array and grid is a 2D array.

My problem is that it takes a long time to run through the loop (several minutes). Can anyone explain why it takes so long? Does anyone have a suggestion? Even if I exchange range() for arange(), I only save a few seconds.

Thanks.

1st EDIT: Sorry, I forgot to mention that I'm importing numpy.

2nd EDIT

I have some points in a 2D grid. Each cell of the grid has a value stored in it. I have to find out which cell each point falls into and copy that cell's value into a new array. That's my problem and my idea.

p.s.: Look at the picture if you want to understand it better. The values of the cells are represented with different colors.

[image "idea": points on a 2D grid, cell values shown as colors]
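
For concreteness, here is a tiny worked sketch of that binning idea (all values made up):

import numpy as np

# made-up example: a 4x4 grid covering [0, 2) x [0, 2)
xmin = ymin = 0.0
xdelta = ydelta = 0.5
grid = np.arange(16).reshape(4, 4)  # cell values 0..15

# the point (1.2, 0.3) falls into cell (2, 0)
xind = int(np.floor((1.2 - xmin) / xdelta))  # -> 2
yind = int(np.floor((0.3 - ymin) / ydelta))  # -> 0
print(grid[xind, yind])                      # -> 8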


5 Answers


How about something like:

import numpy as np

# vectorized: compute all indices in one shot instead of looping
xind = np.floor((x - xmin) / xdelta).astype(int)
yind = np.floor((y - ymin) / ydelta).astype(int)

# points sitting exactly on the upper edge map to index -1,
# as in the original loop (a boolean mask catches every such point)
xind[x == xmax] = -1
yind[y == ymax] = -1

arr = grid[xind, yind]

Note: if you're using numpy, don't treat the arrays like Python lists if you want to do things efficiently.
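
To see why that matters, here is a minimal timing sketch (the array size and the operation are made up; only the loop-vs-vectorized contrast is the point):

import numpy as np
import timeit

x = np.random.rand(1_000_000)  # made-up test data

def looped():
    out = np.empty(len(x))
    for k in range(len(x)):   # one Python-level step per element
        out[k] = x[k] * 2.0
    return out

def vectorized():
    return x * 2.0            # one call into compiled code

print(timeit.timeit(looped, number=3))      # typically seconds
print(timeit.timeit(vectorized, number=3))  # typically milliseconds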


4 Comments

+1, especially for "If you're using numpy, don't treat arrays like lists"! Just as a side note, though, np.floor((x - xmin) / xdelta) is equivalent to (x - xmin) // xdelta (and marginally faster, not that it matters in this case); see the quick check after these comments.
@PateToni: No problem. Numpy is a fantastically powerful package, but it involves thinking about problems differently than you would if you were using other Python data structures. It takes some time to switch your thinking, but when you get the hang of it you'll get enormous speed-ups in your code vs. iterating over arrays one element at a time.
Is there any book or tutorial which can improve the thinking?
@PateToni: I came from a matlab background, so I was already used to thinking that way, but it would probably be good to just look through the documentation and user guide, familiarize yourself with the type of methods that are available, and take a look at solutions to other SO questions: docs.scipy.org/doc/numpy/reference docs.scipy.org/doc/numpy/user
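
A quick check of the floor-vs-floor-division equivalence mentioned above (values made up):

import numpy as np

x = np.array([0.2, 1.7, 3.4])
xmin, xdelta = 0.0, 0.5

a = np.floor((x - xmin) / xdelta).astype(int)
b = ((x - xmin) // xdelta).astype(int)
print(a, b)  # both [0 3 6]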
for x_item, y_item in zip(x, y):
    # do stuff.

There's also itertools.izip if you don't want to generate a giant extra list (in Python 2, zip builds the whole list up front).
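
Applied to the loop from the question, that might look like this (a sketch; it assumes the x, y, grid, and min/max/delta variables from the question are already defined, and it still does the index computation one pair at a time):

from numpy import floor, empty

arr = empty(len(x))  # preallocated output
for k, (xv, yv) in enumerate(zip(x, y)):
    xind = -1 if xv == xmax else int(floor((xv - xmin) / xdelta))
    yind = -1 if yv == ymax else int(floor((yv - ymin) / ydelta))
    arr[k] = grid[xind, yind]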



I cannot see an obvious problem besides the size of the data. Is your computer able to hold everything in memory? If not, you are probably "jumping around" in swapped memory, which will always be slow. If the complete data fits in memory, give psyco a try. It might speed up your calculation a lot.
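
For reference, enabling psyco is a one-liner (a sketch; psyco is Python 2 only and works best when the hot loop lives inside a function):

import psyco
psyco.full()  # JIT-compile Python functions as they run
# or, more selectively (my_loop is a placeholder for your hot function):
# psyco.bind(my_loop)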



I suspect the problem might be in the way you're storing the results:

arr = append(arr,grid[xind,yind])

The docs for append say it returns:

A copy of arr with values appended to axis. Note that append does not occur in-place: a new array is allocated and filled.

This means you'll be deallocating and allocating a larger and larger array every iteration. I suggest allocating an array of the correct size up-front, then populating it with data in each iteration. e.g.:

arr = empty(len(x))  # allocate the output once, up front

for k in range(len(x)):
    ...
    arr[k] = grid[xind, yind]  # write in place instead of append()
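
With repeated append, every iteration allocates a new array and copies all the elements accumulated so far, so the total copying work grows quadratically with the 7.7 million points; preallocating with empty() turns it into a single linear pass.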



x's length is 7 million? I think that's why! The loop iterates 7 million times.

You should probably use another kind of loop. Is it really necessary to loop 7 million times?

