I found this Q&A on the Web:
Q: Which type is better for optimization: char, short, or int?
A: Where possible, it is best to avoid using char and short as local variables. For the types char and short, the compiler needs to reduce the size of the local variable to 8 or 16 bits after each assignment. This is called sign-extending for signed variables and zero-extending for unsigned variables. It is implemented by shifting the register left by 24 or 16 bits, followed by a signed or unsigned shift right by the same amount, taking two instructions (zero-extension of an unsigned char takes one instruction). These shifts can be avoided by using int and unsigned int for local variables. This is particularly important for calculations that first load data into local variables and then process the data inside the local variables. Even if data is input and output as 8- or 16-bit quantities, it is worth considering processing it as 32-bit quantities.
Is this correct? I thought it was better to avoid char and short because of the arithmetic conversions (most likely they will be converted to int or long, and this will cause the compiler to generate extra instructions).
Q: How can function call overhead be reduced in ARM-based systems?
A: Avoid functions with a parameter that is passed partially in a register and partially on the stack (a split argument). This is not handled efficiently by current compilers: all register arguments are pushed onto the stack.
· Avoid functions with a variable number of parameters (varargs functions). ...
Concerning 'varargs' -- is this because the arguments will be passed on the stack? And what is a function with arguments passed partially in registers and partially via the stack -- could you provide an example?
Can we say that the way function arguments are passed (in registers or on the stack) depends strongly on the architecture?
Thanks !