I am using python to build an OpenGL rendering engine and am using numpy arrays with a custom datatype to store my vertex data.
import numpy as np
data_type_vertex = np.dtype({
    "names": ["x", "y", "z", "color"],
    "formats": [np.float32, np.float32, np.float32, np.uint32],
    "offsets": [0, 4, 8, 12],
    "itemsize": 16
})
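(For reference, a quick sanity check of the layout, assuming the dtype above: NumPy exposes the per-vertex stride and the per-field offsets directly, which is convenient when wiring up vertex attribute pointers.)

```python
import numpy as np

# Same custom vertex dtype as above: three float32 coordinates plus a packed uint32 color.
data_type_vertex = np.dtype({
    "names": ["x", "y", "z", "color"],
    "formats": [np.float32, np.float32, np.float32, np.uint32],
    "offsets": [0, 4, 8, 12],
    "itemsize": 16
})

# itemsize is the stride in bytes; fields maps each name to (dtype, byte offset).
print(data_type_vertex.itemsize)          # 16
print(data_type_vertex.fields["color"])   # (dtype('uint32'), 12)
```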
When loading vertex data from a .obj file, it's useful to temporarily store the vertices in a regular Python list before converting that data to a NumPy array with my custom datatype. However, simply passing the list to np.array with this dtype gives unexpected results.
vertex_list = [
    [1.1, 2.2, 3.3, 5],
    [4.4, 5.5, 6.6, 7]
]
print(np.array(vertex_list, dtype=data_type_vertex))
# Result
# [[(1.1, 1.1, 1.1, 1) (2.2, 2.2, 2.2, 2) (3.3, 3.3, 3.3, 3)
# (5. , 5. , 5. , 5)]
# [(4.4, 4.4, 4.4, 4) (5.5, 5.5, 5.5, 5) (6.6, 6.6, 6.6, 6)
# (7. , 7. , 7. , 7)]]
As can be seen, each scalar element of the nested lists is converted to a full instance of the custom datatype by copying its value into every field, instead of the intended behaviour of converting each sublist to a single instance of the custom datatype. This can be worked around by initializing a placeholder array and converting the list elements one by one.
vertex_array = np.zeros(len(vertex_list), dtype=data_type_vertex)
for i, v in enumerate(vertex_list):
    vertex_array[i] = (v[0], v[1], v[2], v[3])
print(vertex_array)
# Result
# [(1.1, 2.2, 3.3, 5) (4.4, 5.5, 6.6, 7)]
While this works, it feels somewhat clunky, and it would require a lot of hardcoded conversion functions if multiple custom datatypes were introduced.
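For completeness, one alternative I've tried (assuming the same dtype and list as above) is converting each sublist to a tuple first. NumPy treats a tuple, unlike a list, as a single structured scalar, so each tuple is assigned field-wise into one record:

```python
import numpy as np

data_type_vertex = np.dtype({
    "names": ["x", "y", "z", "color"],
    "formats": [np.float32, np.float32, np.float32, np.uint32],
    "offsets": [0, 4, 8, 12],
    "itemsize": 16
})

vertex_list = [
    [1.1, 2.2, 3.3, 5],
    [4.4, 5.5, 6.6, 7]
]

# Tuples map onto one record each instead of being broadcast across fields.
vertex_array = np.array([tuple(v) for v in vertex_list], dtype=data_type_vertex)
print(vertex_array)
# [(1.1, 2.2, 3.3, 5) (4.4, 5.5, 6.6, 7)]
```

This avoids the explicit loop and works for any dtype with the right number of fields, but it still materializes an intermediate list of tuples.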
Is there a better way to achieve the same result?