I'm trying to find the most efficient way of dealing with an array of sets of structs. At the moment I have the following structure:
struct myStruct{
    double p1;
    double p2;
    bool p3;
    double w;
};
So, the objects I'm modelling have three properties and a weight assigned to them. These are arranged in arrays of fixed size, say 10, and there are multiple combinations of the objects' weights, say 1000 of them:
const int nObj = 10;
const int n = 1000;
myStruct comb[n][nObj];
Finally, there are a couple of functions that I pass these 10-element arrays to:
double evalComb(myStruct (&ms)[nObj], double q1, double q2){
    double output = 0;
    for(int i = 0; i < nObj; i++){
        output += // calculate some stuff using p1, p2, p3, w and other params
    }
    return output;
}
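For context, the calling loop is roughly the following (simplified; the actual q1 and q2 come from elsewhere in my code):

for(int i = 0; i < n; i++){
    double val = evalComb(comb[i], q1, q2);
    // ... use val ...
}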
Now, the issue is that the set of ten values of p1, p2, p3 is fixed across all 1000 combinations (though not const); the only thing that changes is the set of 10 weights w. This makes me think it's a waste of memory to copy all of that 1000 times. I've got working code, but I'd like to make it quicker, and this looks like the most obvious place to optimize (that function is called millions of times and about 90% of the run time goes there).

Is it better to pull the weights out of the struct and keep a 2D double array of weights alongside a 1D array of structs (see the sketch below)? That would mean passing an extra array parameter to the function; wouldn't that slow it down? Or should I have structs with arrays inside them instead? Are there any other issues that could arise with this?
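For reference, the split layout I have in mind would look something like the sketch below (the names myProps, props, weights and evalComb2 are just placeholders):

struct myProps{
    double p1;
    double p2;
    bool p3;
};

myProps props[nObj];      // the 10 fixed property triples, stored only once
double weights[n][nObj];  // only the weights differ between combinations

double evalComb2(myProps (&ps)[nObj], double (&w)[nObj], double q1, double q2){
    double output = 0;
    for(int i = 0; i < nObj; i++){
        // same calculation as before, now using ps[i].p1, ps[i].p2, ps[i].p3 and w[i]
        output += 0; // placeholder for the actual maths
    }
    return output;
}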