I have a std::string containing four bytes received from a serial port. These bytes represent two int16_t in little endian. Since the software is going to run on x86 only, there is no need to change the endianness.
My first approach to get the individual integer values was this:
std::string inData = ...; // inData.length() = 4
int16_t measurement[2];
measurement[0] = ( (int16_t)inData[0] << 8 ) | (int16_t)inData[1];
measurement[1] = ( (int16_t)inData[2] << 8 ) | (int16_t)inData[3];
Well, the result looked like undefined behaviour. I expected that casting inData[x] to int16_t would leave the eight most significant bits set to zero, but that seems not to be true. So I tried this:
const uint8_t *castedData = reinterpret_cast<const uint8_t*>(&inData[0]);
measurement[0] = ( (int16_t)castedData[0] << 8 ) | (int16_t)castedData[1];
measurement[1] = ( (int16_t)castedData[2] << 8 ) | (int16_t)castedData[3];
and it works as intended. However, is this the "correct" way? Or is there a way to achieve this without using reinterpret_cast?
I had a look at Converting unsigned chars to signed integer, but the accepted answer and the comments about aliasing confused me.
Comments:
- Use a std::basic_string<unsigned char> instead.
- (inData[0] << 8) | (inData[1] & 0x00FF) ... no casts needed.