I am beginning work on a physics system in which multiple objects in the world get rendered. The approach I am currently considering is pseudo-OOP in C, where each object holds both data that is used for rendering and data that is not.
Quick example:
#include <stddef.h>

struct Object; // Forward declaration so Class's update pointer can reference it.

struct Class {
    size_t size;
    const struct Class *super;
    double (*update)(struct Object *, double);
};

struct Object {
    const struct Class *class;
    double last_update;
    float position[3], rotation[3][3], velocity[3], mass;
    // Others might be added later, e.g., bounding_box, angular_velocity, force, torque, etc.
};

double Object_update(struct Object *this, double now) {
    double delta = now - this->last_update;
    this->position[0] += this->velocity[0] * delta;
    this->position[1] += this->velocity[1] * delta;
    this->position[2] += this->velocity[2] * delta;
    // Check for collisions etc.
    this->last_update = now;
    return delta;
}

// The metaclass instance (struct tags and ordinary identifiers live in
// separate namespaces in C, so this name does not clash with struct Object).
const struct Class Object = { sizeof(struct Object), NULL, Object_update };
So here is the problem: if these objects have their position and rotation embedded in them, how can I efficiently pass those (and not the other properties) to the GPU for rendering?
I have considered storing those fields in a parallel array in GPU-accessible memory, but that gets complicated once I hit the age-old problem of keeping the array contiguous, i.e., closing the gaps left when objects are destroyed. It gets even more complicated once I add 'subclasses' with extra render-relevant data, such as texture IDs or meshes, because the elements are then no longer homogeneous in size.
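For concreteness, the contiguity part of the parallel-array idea can be sketched roughly like this (the names `GpuTransform`, `RenderPool`, `pool_add`, and `pool_remove` are mine, not from any library): keep only the GPU-visible fields in a dense array, give each object an index into it, and on destruction move the last element into the hole ("swap-remove") so the array never has gaps. The returned id tells you which surviving object's stored index needs patching.

```c
#include <stddef.h>

struct GpuTransform {          /* only what the GPU needs per object */
    float position[3];
    float rotation[3][3];
};

struct RenderPool {
    struct GpuTransform *data; /* ideally GPU-accessible (mapped) memory */
    size_t *owner;             /* owner[i] = id of the object using slot i */
    size_t count;
};

/* Allocate a slot for object `id`; returns the slot index. */
size_t pool_add(struct RenderPool *p, size_t id) {
    p->owner[p->count] = id;
    return p->count++;
}

/* Free slot `i` by moving the last element into it; returns the id of the
   object whose data moved (so its stored index can be patched), or the
   freed object's own id if it was already last. */
size_t pool_remove(struct RenderPool *p, size_t i) {
    p->count--;
    p->data[i]  = p->data[p->count];
    p->owner[i] = p->owner[p->count];
    return p->owner[i];
}
```

This keeps the per-frame GPU copy trivial (one contiguous span of `count` elements), at the cost of an extra index indirection per object, but it does not by itself solve the heterogeneous-size problem for subclasses.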
How can I best maintain GPU-accessible data in my Object along with properties that do not need to be GPU accessible?
I could loop over the objects in the render loop and call setVertexBytes with each object's position/rotation data, but that incurs one draw call per object, which is not efficient for a large number of objects.
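The alternative I keep circling back to is copying just the render-relevant fields into one contiguous per-frame buffer and issuing a single instanced draw indexed by instance ID. A rough sketch of the packing step (assumptions: `InstanceData` and `pack_instances` are my own names, `out` stands in for a mapped GPU buffer, and the `struct Object` here is a minimal stand-in for the one above):

```c
#include <stddef.h>
#include <string.h>

struct Object {                 /* minimal stand-in for the real struct */
    double last_update;
    float position[3], rotation[3][3], velocity[3], mass;
};

struct InstanceData {           /* per-instance layout the shader reads */
    float position[3];
    float rotation[3][3];
};

/* Copy n objects' transforms into `out` (e.g. mapped GPU memory);
   returns the number of bytes to bind in a single call. */
size_t pack_instances(struct Object *const *objects, size_t n,
                      struct InstanceData *out) {
    for (size_t i = 0; i < n; i++) {
        memcpy(out[i].position, objects[i]->position, sizeof out[i].position);
        memcpy(out[i].rotation, objects[i]->rotation, sizeof out[i].rotation);
    }
    return n * sizeof(struct InstanceData);
}
```

That turns N draw calls into one buffer upload plus one draw, but it reintroduces a per-frame copy, and I am not sure whether that copy is cheaper in practice than keeping the data GPU-resident in the first place.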