
Given a NumPy 2D array as input, such as

                           100   100   100   100   100
                           100    0     0     0    100
                           100    0     0     0    100
                           100    0     0     0    100
                           100   100   100   100   100

an output like this should be obtained:

                            100   100   100   100   100
                            100    50    25    50   100
                            100    25     0    25   100
                            100    50    25    50   100
                            100   100   100   100   100

where every number except the border becomes the mean of its adjacent numbers.

My current code works, but I need to remove the for loops and vectorize it using NumPy.

My current code:

import numpy as np

def evolve_heat_slow(u):
    u2 = np.copy(u)
    x, y = u2.shape
    for i in range(1, x - 1):
        for s in range(1, y - 1):
            # Each interior cell becomes the mean of its four neighbours.
            u2[i, s] = (u[i-1, s] + u[i+1, s] + u[i, s+1] + u[i, s-1]) / 4
    return u2
  • For code like this, you can compile it (and, in your case, multithread it) using Numba: stackoverflow.com/a/50470995/4045774. This often outperforms vectorized solutions, except for BLAS calls (e.g. np.dot). Commented Jun 15, 2018 at 12:51

3 Answers


That's pretty much the definition of 2D convolution. scipy has you covered. I copy a to preserve the borders; the convolution in valid mode will make a smaller array (without the borders), which I then paste inside the prepared "frame".

import numpy as np
from scipy.signal import convolve2d

a = np.array([[100, 100, 100, 100, 100],
              [100,   0,   0,   0, 100],
              [100,   0,   0,   0, 100],
              [100,   0,   0,   0, 100],
              [100, 100, 100, 100, 100]])
b = np.array([[0,    0.25, 0   ],
              [0.25, 0,    0.25],
              [0,    0.25, 0   ]])
r = np.copy(a)
r[1:-1, 1:-1] = convolve2d(a, b, mode='valid')
r
# => array([[100, 100, 100, 100, 100],
#           [100,  50,  25,  50, 100],
#           [100,  25,   0,  25, 100],
#           [100,  50,  25,  50, 100],
#           [100, 100, 100, 100, 100]])
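If you would rather avoid the scipy import, the same valid-mode convolution can be sketched in pure NumPy with sliding_window_view (available since NumPy 1.20); the kernel here is symmetric, so correlation and convolution coincide:

```python
import numpy as np

a = np.array([[100, 100, 100, 100, 100],
              [100,   0,   0,   0, 100],
              [100,   0,   0,   0, 100],
              [100,   0,   0,   0, 100],
              [100, 100, 100, 100, 100]])

# Mean of the four neighbours, expressed as a weighted 3x3 kernel.
kernel = np.array([[0,    0.25, 0   ],
                   [0.25, 0,    0.25],
                   [0,    0.25, 0   ]])

# Every 3x3 window of `a`; for a 5x5 input this has shape (3, 3, 3, 3).
windows = np.lib.stride_tricks.sliding_window_view(a, (3, 3))

r = np.copy(a)
# Weighted sum over each window: the valid-mode result.
r[1:-1, 1:-1] = np.einsum('ijkl,kl->ij', windows, kernel)
```

This is equivalent to the convolve2d call above for symmetric kernels, at the cost of materializing the window view logic yourself.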

8 Comments

I don't understand how this works, since every number should be the mean of its adjacent numbers instead of just inputting the value. Also, if one were to put this in a function with only the original NumPy array as input, it won't work.
Maybe. Your code is probably faster for small arrays. But mine is vectorised, and it is without loops, which is what the question asked for. (Counting import in benchmark is not really a fair assessment.) If you want something without importing, you can, I assume, copy-paste convolve2d and any necessary support functions out of scipy.
Really...? import slows code down once, for a second or so. Then you can go on multiplying things for hours or days afterwards. Why is that one second relevant?
The code I am looking for should be 122 times faster.
Faster than what? How are you measuring it? (If you're timing with time python test.py, that is entirely the wrong way to do it.) Why 122? Also, if scipy is not fast enough for you, you should move the work to C or Fortran, as that is probably the fastest way to do it from Python.

Although Amadan's scipy answer makes the most sense for this case, here's another approach doing it "manually":

import numpy as np

# Create your array
data = np.ones((5,5)) * 100
data[1:-1,1:-1] = 0

def evolve_heat_slow(m, should_copy=True):
    if should_copy: m = m.copy()

    components = (
        m[:-2,  1:-1],  # N
        m[2:,   1:-1],  # S
        m[1:-1, 2:],    # E
        m[1:-1, :-2],   # W
    )

    m[1:-1, 1:-1] = np.mean(np.stack(components), axis=0)
    return m

for _ in range(2):
    data = evolve_heat_slow(data)
    print(data)

Here, we define components by first taking the central 3x3 "window" and shifting it by 1 in each direction. We then stack the shifted windows, take the mean, and replace the central window with those values.

After 1 iteration:

[[ 100.  100.  100.  100.  100.]
 [ 100.   50.   25.   50.  100.]
 [ 100.   25.    0.   25.  100.]
 [ 100.   50.   25.   50.  100.]
 [ 100.  100.  100.  100.  100.]]

After 2 iterations:

[[ 100.   100.   100.   100.   100. ]
 [ 100.    62.5   50.    62.5  100. ]
 [ 100.    50.    25.    50.   100. ]
 [ 100.    62.5   50.    62.5  100. ]
 [ 100.   100.   100.   100.   100. ]]
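A quick way to sanity-check the shifted-slice construction is to compare it against the asker's double loop on a random array (a minimal sketch; the function names here are just for illustration):

```python
import numpy as np

def evolve_heat_loop(u):
    # The asker's original loop-based version.
    u2 = np.copy(u)
    for i in range(1, u.shape[0] - 1):
        for s in range(1, u.shape[1] - 1):
            u2[i, s] = (u[i-1, s] + u[i+1, s] + u[i, s+1] + u[i, s-1]) / 4
    return u2

def evolve_heat_sliced(u):
    # Vectorized: sum the four shifted interior windows (N, S, E, W).
    u2 = np.copy(u)
    u2[1:-1, 1:-1] = (u[:-2, 1:-1] + u[2:, 1:-1]
                      + u[1:-1, 2:] + u[1:-1, :-2]) / 4
    return u2

rng = np.random.default_rng(0)
u = rng.random((6, 7))
assert np.allclose(evolve_heat_loop(u), evolve_heat_sliced(u))
```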



Not better than other methods, but you can also use np.roll in each direction to do the same thing:

def evolve_heat_slow(u):
    u2 = u.copy()
    u2[1:-1, 1:-1] = ((np.roll(u2,1,0) + np.roll(u2,-1,0) 
                        + np.roll(u2,1,1) + np.roll(u2,-1,1))/4)[1:-1, 1:-1]
    return u2

Now, with u2 = evolve_heat_slow(u), you get:

u2 =
array([[100, 100, 100, 100, 100],
       [100,  50,  25,  50, 100],
       [100,  25,   0,  25, 100],
       [100,  50,  25,  50, 100],
       [100, 100, 100, 100, 100]])
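One caveat worth noting: np.roll wraps values around the array edges, so the rolled arrays mix opposite-border values into the outermost rows and columns. The wrap-around never reaches the interior, though, which is why the final [1:-1, 1:-1] slice still gives the correct result. A one-line illustration:

```python
import numpy as np

row = np.array([1, 2, 3, 4, 5])
np.roll(row, 1)  # array([5, 1, 2, 3, 4]) -- the last value wraps to the front
```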

2 Comments

@akhilrastogi Your dislike for Amadan's answer is really odd to me. This answer is 7 times slower than Amadan's for me on your example array and more than an order of magnitude slower on larger arrays.
@akhilrastogi Is this 122 times faster?
