This is a performance-related question. As such, regular JS rules and conventions are irrelevant here. Note also that JS objects are not in any way performance-equivalent to C structs, particularly those allocated in AoS format. Thank you for your understanding.
In C, we read or write a large struct's data fast by pulling a struct instance from an array of structs into a local variable, modifying it, and assigning it (as a value) directly back into the array. The struct can be large, typically up to a platform-specified limit of several thousand bytes. This makes for both easy and very fast handling of data; disassembly may show fast memcpy-style code under the hood to achieve it. C example:
struct MyStruct arrayOfStructs[64]; //on the stack for simplicity
//...populate the Array-of-Struct with some data...
//copy out the struct we want, onto the stack:
struct MyStruct structValue = arrayOfStructs[i];
//...change the structValue's members
structValue.x = 12;
structValue.y = 3;
//assign it back by value, rather than memberwise.
arrayOfStructs[i] = structValue;
The benefit here is that no matter how large the struct is, we write it back in a single line, and the compiler handles the copying needed to make that happen as quickly as possible.
In Javascript, the only performant alternative is to use (Typed)Arrays. We fake our "struct" thus:
const SIZEOF_X = 1; //1 byte
const SIZEOF_Y = 1; //1 byte
const SIZEOF_Z = 1; //1 byte
const SIZEOF_PADDING0 = 1; //1 byte padding, aligns following members to 4B heap boundaries.
const SIZEOF_PAYLOAD1 = 2; //2 bytes
const SIZEOF_PAYLOAD2 = 2; //2 bytes
const SIZEOF_BYTES_ENTITY =
SIZEOF_X +
SIZEOF_Y +
SIZEOF_Z +
SIZEOF_PADDING0 +
SIZEOF_PAYLOAD1 +
SIZEOF_PAYLOAD2; //8 bytes total.
//calculating offsets into the "struct" can be done using a loop, but for clarity:
const OFFSET_BYTES_X = 0;
const OFFSET_BYTES_Y = SIZEOF_X;
const OFFSET_BYTES_Z = SIZEOF_X + SIZEOF_Y;
//...etc.
const buffer = new ArrayBuffer(ENTITIES_COUNT * SIZEOF_BYTES_ENTITY);
const structsAs64 = new BigUint64Array(buffer); //BigUint64Array (there is no Uint64Array in JS): one struct = one 8-byte element.
const structsAs16 = new Uint16Array(buffer);
const structsAs8 = new Uint8Array(buffer);
//...populate our underlying buffer with some data... (not shown)
const structValue64 = structsAs64[i]; //a BigInt
//shift by bits (not bytes) and mask to one byte; assumes little-endian layout:
const z = Number((structValue64 >> BigInt(OFFSET_BYTES_Z * 8)) & 0xFFn);
//OR use a finer-grained view over the same data:
const z = structsAs8[i * SIZEOF_BYTES_ENTITY + OFFSET_BYTES_Z];
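As an aside (a sketch, not part of the original): the 2-byte members such as PAYLOAD1 can also be read and written through a DataView over the same buffer, with explicit endianness, at the cost of per-call overhead compared with typed-array indexing:

```javascript
//Sketch: accessing the 2-byte PAYLOAD1 member via a DataView.
//Layout assumed from above: x, y, z (1 byte each), 1 byte padding, then two 2-byte payloads.
const SIZEOF_BYTES_ENTITY = 8;
const OFFSET_BYTES_PAYLOAD1 = 4;
const ENTITIES_COUNT = 4;

const buffer = new ArrayBuffer(ENTITIES_COUNT * SIZEOF_BYTES_ENTITY);
const view = new DataView(buffer);

const i = 2; //entity index
//write then read PAYLOAD1 for entity i; final argument selects little-endian:
view.setUint16(i * SIZEOF_BYTES_ENTITY + OFFSET_BYTES_PAYLOAD1, 0xBEEF, true);
const payload1 = view.getUint16(i * SIZEOF_BYTES_ENTITY + OFFSET_BYTES_PAYLOAD1, true);
```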
Great, it works. But what if our fake structs are greater than 64 bits (8 bytes) in length?
//now the new struct type we want to pull is 12x larger!
const SIZEOF_BYTES_ENTITY = 96;
const SIZEOF_WORDS_ENTITY = SIZEOF_BYTES_ENTITY / 8; //12 64-bit (8-byte) words.
const buffer = new ArrayBuffer(ENTITIES_COUNT * SIZEOF_BYTES_ENTITY);
const bigStructsAs64 = new BigUint64Array(buffer);
//...populate our underlying buffer with some data... (not shown)
let index = 43; //to the entity we want
let bigStruct = bigStructsAs64.subarray(index * SIZEOF_WORDS_ENTITY, (index + 1) * SIZEOF_WORDS_ENTITY);
.subarray() itself is a no-no, as it creates a new TypedArray view which must then be GC'ed (possibly every animation frame). But at least it avoids doing the work element by element in JS, unlike the explicit copy below:
const bigStruct = new BigUint64Array(SIZEOF_WORDS_ENTITY);
//copy out, one 64-bit word at a time:
for (let w = 0; w < SIZEOF_WORDS_ENTITY; w++)
{
bigStruct[w] = bigStructsAs64[index * SIZEOF_WORDS_ENTITY + w];
}
Is there any faster method for pulling back large datablocks without using explicit JS loops?
Additional detail: This question seems relevant, suggesting I may be overestimating the potential impact of the loop mentioned above.
My assumption thus far has been that native methods would perform this data-gathering operation more efficiently than loops written in JS. I'll need to run some tests and get back with results (pending).
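Pending those tests, here is a self-contained sketch of the native-copy candidate I intend to benchmark against the explicit loop: TypedArray.prototype.set with a subarray source, plus a scratch "struct" allocated once outside any hot loop. (This is my assumption of a faster approach, not an established result; set() with a subarray still allocates a temporary view, but the copy itself happens natively.)

```javascript
//Sketch: copying one 96-byte "struct" out of, and back into, the big array
//using TypedArray.prototype.set. Identifiers follow the question's naming.
const SIZEOF_BYTES_ENTITY = 96;
const SIZEOF_WORDS_ENTITY = SIZEOF_BYTES_ENTITY / 8; //12 64-bit words
const ENTITIES_COUNT = 64;

const buffer = new ArrayBuffer(ENTITIES_COUNT * SIZEOF_BYTES_ENTITY);
const bigStructsAs64 = new BigUint64Array(buffer);
bigStructsAs64.fill(7n); //some placeholder data

const index = 43; //the entity we want
const start = index * SIZEOF_WORDS_ENTITY;

//reusable scratch "struct", allocated once:
const bigStruct = new BigUint64Array(SIZEOF_WORDS_ENTITY);

//copy out: the subarray view is a throwaway allocation, but the copy is native:
bigStruct.set(bigStructsAs64.subarray(start, start + SIZEOF_WORDS_ENTITY));

//mutate, then write the whole struct back "by value":
bigStruct[0] = 12n;
bigStructsAs64.set(bigStruct, start);
```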