I have a Cython function that takes a 2d ndarray (numpy array) of floats and returns a 1d numpy array whose length matches that of the input 2d array (i.e. its number of rows).
import numpy as np
cimport numpy as np
np.import_array()
cimport cython
def func(np.ndarray[np.float_t, ndim=2] input_arr):
    cdef np.ndarray[np.float_t, ndim=1] new_arr = ...
    # do stuff
    return new_arr
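For concreteness, you can imagine the body of func doing something like summing each row (just a toy stand-in; the real body is more involved but has the same shape):

def func(np.ndarray[np.float_t, ndim=2] input_arr):
    # toy stand-in for the real body: sum each row of the 2d input
    cdef np.ndarray[np.float_t, ndim=1] new_arr = np.empty(input_arr.shape[0], dtype=float)
    cdef Py_ssize_t i, j
    for i in range(input_arr.shape[0]):
        new_arr[i] = 0.0
        for j in range(input_arr.shape[1]):
            new_arr[i] += input_arr[i, j]
    return new_arr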
In another loop in the program, I want to call func, but pass it a 2d array that is created dynamically from another 2d array. Right now I have:
my_2d_numpy_array = np.array([[0.5, 0.1], [0.1, 10]]) # assume this is defined
cdef int N = 10000
cdef int k
for j in xrange(N):
    # find some element k of interest
    # create a 2d array on the fly containing just the k-th row and pass it to func()
    func(np.array([my_2d_numpy_array[k]], dtype=float))  # KEY LINE
This works, but I suspect that the call to np.array on each iteration adds a lot of overhead, because it goes back to Python. Since func only reads the array and doesn't modify it, how can I just pass it a view of the array, essentially a pointer, without constructing a new array by going back to Python? I'm only interested in pulling out the k-th row of my_2d_numpy_array and passing that to func().
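What I would like is something roughly like this (only a sketch; func_row is a placeholder name I made up, and I'm not sure this is the right syntax). The idea is to have func take a typed memoryview and hand it a slice, which shares the original data instead of copying it:

# same .pyx module as above, reusing its imports
# sketch: a variant of func that works on a single row passed as a typed memoryview
# (toy body: sum the row into a 1-element result array)
def func_row(double[:] row):
    cdef np.ndarray[np.float_t, ndim=1] new_arr = np.empty(1, dtype=float)
    cdef Py_ssize_t j
    new_arr[0] = 0.0
    for j in range(row.shape[0]):
        new_arr[0] += row[j]
    return new_arr

# inside the loop: my_2d_numpy_array[k] is a view that shares the data (no copy);
# my_2d_numpy_array[k:k+1] would keep the 2d shape and is also a view
func_row(my_2d_numpy_array[k])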
Update: A related question: if I am using an ndarray inside the loop but don't need the full functionality of ndarray in func, can I make func take something like a static C array instead, and somehow treat the ndarray as that? Would that save costs? Presumably then you wouldn't have to pass an object to func (an ndarray is an object).
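Something like this is what I have in mind (again only a sketch; row_work and run_loop are names I made up, and I'm assuming my_2d_numpy_array is C-contiguous float64):

# sketch: do the work on a raw C pointer into the existing data, no Python object passed
cdef double row_work(double *row, Py_ssize_t n):
    cdef double s = 0.0
    cdef Py_ssize_t j
    for j in range(n):
        s += row[j]   # toy stand-in for func's real work
    return s

def run_loop(np.ndarray[np.float_t, ndim=2] arr, int N):
    # &arr[k, 0] points at the start of row k of the C-contiguous array
    cdef int j, k
    cdef double result
    for j in range(N):
        k = j % arr.shape[0]                        # stand-in for "find some element k of interest"
        result = row_work(&arr[k, 0], arr.shape[1])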
np.array goes back to the interpreter? It's a built-in function.