I quickly scanned through the C++03 standard but still cannot tell if this behavior is guaranteed:
#include <cstdio>
int main() {
    signed char cNegOne = -1;   // char is 8 bits
    unsigned int a = cNegOne;   // int is 32 bits on my Windows system
    printf("0x%x\n", a);
}
The result is:
0xffffffff
VC++ gives 0xffffffff on 32-bit Windows, but as I see it the conversion could happen in two ways (sketched as explicit casts below this list):
1) The 8-bit signed char -1 is first converted directly to an 8-bit unsigned value, which is binary 11111111 or decimal 255, and that value is then widened to a 32-bit unsigned int, still giving 255 (0xff).
2) The 8-bit signed char -1 is sign-extended to a 32-bit signed int, giving 0xffffffff, which is then reinterpreted as a 32-bit unsigned int.
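To make the two candidate paths concrete, here is how they would look written as explicit casts (this is only my illustration, not code from the program above; a1 and a2 are names I made up, and I am assuming the same 8-bit char / 32-bit int setup):
signed char cNegOne = -1;
unsigned int a1 = (unsigned int)(unsigned char)cNegOne; // path 1: narrow to unsigned char first, then widen -> 0xff
unsigned int a2 = (unsigned int)(int)cNegOne;           // path 2: sign-extend to int first, then convert -> 0xffffffff
printf("0x%x 0x%x\n", a1, a2);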
Obviously the second way is used here, but why? I cannot find anything in the standard that talks about this. Is it implementation-specific?
EDIT: the original text from C++03, Clause 4:
Standard conversions are implicit conversions defined for built-in types. Clause 4 enumerates the full set of such conversions. A standard conversion sequence is a sequence of standard conversions in the following order:
— Zero or one conversion from the following set: lvalue-to-rvalue conversion, array-to-pointer conversion, and function-to-pointer conversion.
— Zero or one conversion from the following set: integral promotions, floating point promotion, integral conversions, floating point conversions, floating-integral conversions, pointer conversions, pointer to member conversions, and boolean conversions.
— Zero or one qualification conversion.
Note that the only guaranteed order is that the lvalue-to-rvalue conversion (etc.) happens before the set containing integral promotions and integral conversions; it does not say that integral promotions must happen before integral conversions, since they are in the same set. Or is my interpretation correct?
char to unsigned int is one integral conversion.
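If that reading is right, then it should not matter whether an integral promotion to int is inserted first, because the source value is -1 either way, and converting -1 to a 32-bit unsigned int gives 0xffffffff under both readings. A quick check one could run (again assuming a 32-bit unsigned int as in the original example; the variable names are mine):
signed char cNegOne = -1;
unsigned int direct   = static_cast<unsigned int>(cNegOne);                   // single integral conversion
unsigned int promoted = static_cast<unsigned int>(static_cast<int>(cNegOne)); // promotion to int, then conversion
printf("0x%x 0x%x\n", direct, promoted); // both print 0xffffffff on this setup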