I have a (very large) number of data points, each consisting of an x and y coordinate and a sigma uncertainty (sigma is the same in both the x and y directions; all three values are floats). For each data point I want to generate a 2D array on a standard grid, with the probabilities that the actual value is in each location.
For instance, if x=5.0, y=5.0, sigma=1.0, on a (0,0)->(9,9) grid, I expect to generate:
[[ 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. ],
[ 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. ],
[ 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. ],
[ 0. , 0. , 0. , 0. , 0.01, 0.02, 0.01, 0. , 0. , 0. ],
[ 0. , 0. , 0. , 0.01, 0.06, 0.1 , 0.06, 0.01, 0. , 0. ],
[ 0. , 0. , 0. , 0.02, 0.1 , 0.16, 0.1 , 0.02, 0. , 0. ],
[ 0. , 0. , 0. , 0.01, 0.06, 0.1 , 0.06, 0.01, 0. , 0. ],
[ 0. , 0. , 0. , 0. , 0.01, 0.02, 0.01, 0. , 0. , 0. ],
[ 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. ],
[ 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. ]]
The array above was generated by creating a numpy array of zeros, setting [5, 5] = 1, and then applying ndimage.filters.gaussian_filter with a sigma of 1. I feel I can deal with non-integer x and y by distributing over the nearby integer values and still get a good approximation.
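For reference, a minimal sketch of that approach (assuming a 10x10 grid and the integer example point above; the variable names are just illustrative):

    import numpy as np
    from scipy import ndimage

    grid = np.zeros((10, 10))
    grid[5, 5] = 1.0                              # delta at the data point
    kernel = ndimage.gaussian_filter(grid, sigma=1.0)
    print(kernel.round(2))                        # matches the array above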
It feels like extreme overkill, however, to obtain the resulting array this way, since scipy has to process every value in the grid, not just the single 1 at [5, 5], even though all the others are 0. It only takes about 300 µs for a 64x64 grid, but I would still like to know whether there is a more efficient way to get an X*Y numpy array containing a Gaussian kernel with arbitrary x, y, and sigma.


The 2D Gaussian is separable, so you can evaluate the 1D profile exp(-(x-x0)**2 / (2*sigma**2)) along each axis and then take the outer product. This should be much faster than what you're doing now anyway.
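A minimal sketch of that idea (the function name and grid dimensions are just for illustration), normalised so the patch sums to 1 like the example above:

    import numpy as np

    def gaussian_patch(x0, y0, sigma, nx, ny):
        # Exploit separability: evaluate the 1D Gaussian along each axis
        # and combine them with an outer product. x0, y0, sigma may be floats.
        gx = np.exp(-(np.arange(nx) - x0)**2 / (2.0 * sigma**2))
        gy = np.exp(-(np.arange(ny) - y0)**2 / (2.0 * sigma**2))
        patch = np.outer(gy, gx)                  # rows follow y, columns follow x
        return patch / patch.sum()                # probabilities sum to 1

    print(gaussian_patch(5.0, 5.0, 1.0, 10, 10).round(2))   # reproduces the example

This only needs nx + ny exponential evaluations plus an outer product, instead of filtering the entire grid.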