Let's take the following two ways to create a 'Person' object with a name and age:
struct Person {
    char* Name;
    int Age;
};

int using_struct() {
    struct Person amy;
    amy.Name = "Amy";
    amy.Age = 20;
    return 1;
}

int without_struct() {
    char* Name = "Amy";
    int Age = 20;
    return 1;
}
The compiler output looks very similar except for two things:
# using_struct -- aligned on 16, Name at lowest address
movq $.LC0, -16(%rbp)
movl $20, -8(%rbp)
# manually -- aligned by type size
movq $.LC0, -8(%rbp)
movl $20, -12(%rbp)
- The struct aligns differently (it occupies 16 bytes instead of 12 in the above example; the sketch after this list checks that with sizeof/offsetof).
- The struct places its members at increasing addresses (Name at the lowest address), whereas the plain stack variables are laid out in descending order.
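If you would rather query the layout than read the assembly, a minimal sketch like this (assuming a typical 64-bit target with 8-byte pointers and 4-byte ints) prints the size and member offsets:

#include <stdio.h>
#include <stddef.h>

struct Person {
    char* Name;
    int Age;
};

int main(void) {
    /* On a typical x86-64 ABI: Name at offset 0, Age at offset 8, size padded to 16 */
    printf("sizeof(struct Person) = %zu\n", sizeof(struct Person));
    printf("offsetof(Name) = %zu\n", offsetof(struct Person, Name));
    printf("offsetof(Age)  = %zu\n", offsetof(struct Person, Age));
    return 0;
}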
When all is said and done, they seem very similar in how the compiler treats them. Is the struct then mainly a "programmer convenience", or is it fundamentally different from defining the items manually each time?
A struct is for the programmer's benefit. You could say the same thing about arrays or enums or many other C constructs. Sure, you could try to do everything with individual variables, but that would quickly become impractical if not impossible. It is not clear what point you are really trying to get at.

My point was that I thought a struct was going to produce wildly different compiler output, but it doesn't.

Don't judge it by a single struct. Think instead about how you would handle an array of 5000 of the structs. For one thing, with an array of structs, the members are kept contiguous in memory instead of being far apart - that's better for caching. Also think about the benefits of passing one struct pointer to a function, instead of 3 or 30 distinct arguments.
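A rough sketch of those two points, with a hypothetical print_person helper and an arbitrary array size (neither is from the original post):

#include <stdio.h>

struct Person {
    char* Name;
    int Age;
};

/* One pointer argument instead of one argument per member */
static void print_person(const struct Person* p) {
    printf("%s is %d\n", p->Name, p->Age);
}

int main(void) {
    /* 5000 Person records in one contiguous block of memory */
    struct Person people[5000];
    people[0].Name = "Amy";
    people[0].Age = 20;
    print_person(&people[0]);
    return 0;
}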