In [208]: l = [np.array([1,2,3]), np.array([4,5,6,7]), np.array([8])]
Making an array from l doesn't do much for us, since the arrays differ in shape:
In [209]: np.array(l)
Out[209]: array([array([1, 2, 3]), array([4, 5, 6, 7]), array([8])], dtype=object)
Out[209] is a 1d object-dtype array; its elements are still separate arrays, so it can't be flattened any further. (In recent NumPy versions this ragged construction raises an error unless you pass dtype=object explicitly.)
hstack is useful, turning the list of arrays into one array:
In [210]: np.hstack(l)
Out[210]: array([1, 2, 3, 4, 5, 6, 7, 8])
In [211]: np.mean(_)
Out[211]: 4.5
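If whole-list statistics are the goal, the transcript above condenses to a short pattern: join the ragged 1d arrays first, then reduce once. A minimal sketch (variable names are my own):

```python
import numpy as np

l = [np.array([1, 2, 3]), np.array([4, 5, 6, 7]), np.array([8])]

# Join the ragged 1d arrays into one flat numeric array, then reduce once.
flat = np.hstack(l)        # array([1, 2, 3, 4, 5, 6, 7, 8])
whole_mean = flat.mean()   # mean over all 8 elements
whole_std = flat.std()     # std over all 8 elements
```

The reductions run in compiled code on a single numeric array, which is where the numpy speedup comes from.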
If the list instead contains 2d arrays, as revealed in a comment:
In [212]: ll = [np.ones((2,4)), np.zeros((3,4)), np.ones((1,4))*2]
In [213]: ll
Out[213]:
[array([[1., 1., 1., 1.],
[1., 1., 1., 1.]]), array([[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.]]), array([[2., 2., 2., 2.]])]
In [214]: np.vstack(ll)
Out[214]:
array([[1., 1., 1., 1.],
[1., 1., 1., 1.],
[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.],
[2., 2., 2., 2.]])
In [215]: np.mean(_, axis=0)
Out[215]: array([0.66666667, 0.66666667, 0.66666667, 0.66666667])
np.concatenate(..., axis=0) would work for both cases.
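To show that one call covers both cases, here is a sketch using np.concatenate on the two lists from above (hstack and vstack are thin wrappers around it):

```python
import numpy as np

l = [np.array([1, 2, 3]), np.array([4, 5, 6, 7]), np.array([8])]
ll = [np.ones((2, 4)), np.zeros((3, 4)), np.ones((1, 4)) * 2]

# For 1d arrays, concatenate along axis 0 behaves like hstack.
flat = np.concatenate(l, axis=0)       # shape (8,)

# For 2d arrays with matching column counts, it behaves like vstack.
stacked = np.concatenate(ll, axis=0)   # shape (6, 4)
```

The only requirement is that all arrays agree on every dimension except the concatenation axis.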
l_mean = [i.mean() for i in l] and l_std = [i.std() for i in l] compute the mean and std of each NumPy array independently. If you want the mean and std of the whole list and not of each array, hstack can join them into one array. If you don't like that, show us how you'd do the loop. If the arrays are really 2d and you want the mean of the whole list, then say so. The key to gaining numpy efficiency is to create a numeric numpy array; with a list of arrays, that can be tricky, depending on how the arrays vary in shape.
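To make the per-array vs. whole-list distinction concrete, a short sketch contrasting the two (the l_mean/l_std names follow the comprehensions quoted above; whole_mean is my own):

```python
import numpy as np

l = [np.array([1, 2, 3]), np.array([4, 5, 6, 7]), np.array([8])]

# Per-array statistics: a plain Python loop over the list,
# producing one value per array.
l_mean = [a.mean() for a in l]   # [2.0, 5.5, 8.0]
l_std = [a.std() for a in l]

# Whole-list statistics: join first, then reduce once over all elements.
whole_mean = np.hstack(l).mean()  # 4.5
```

Note that the mean of the per-array means (about 5.17) differs from the whole-list mean (4.5) because the arrays have different lengths.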