
I am trying to build a custom bit field, and I tried this method:

#include <bitset>
#include <cstring>
#include <iostream>

struct foo
{   
    unsigned z : 10;
    unsigned y : 16;
    unsigned x : 1;
    unsigned w : 16;
};


int main()
{
    foo test{0x345, 0x1234, 0x1, 0x1234};
    char bytes[8] = {0};

    std::cout << sizeof(test) << std::endl;

    memcpy(bytes, &test, sizeof(test));

    // print the bytes most-significant byte first
    for (size_t i = 0; i < sizeof(bytes); i++)
    {
        std::cout << std::bitset<8>(bytes[sizeof(bytes) - i - 1]);
    }
    std::cout << std::endl;

    return 0;
}

With this test, it prints:

0000000000000000000100100011010000000100010010001101001101000101

(0000000000000000 | 0001 0010 0011 0100 | 000001 | 0001 0010 0011 0100 | 11 0100 0101 should correspond to: 0x1234 | 0x1 | 0x1234 | 0x345)

I am reading it from right to left: on the right side I have the first 10 bits (11 0100 0101), then the next 16 bits (0001 0010 0011 0100). After that field I expect just one bit for the next value, but I get 6 bits (000001) instead of (1) before the last 16 bits (0001 0010 0011 0100).

Do you have any insight into this, please?

  • Compilers are free to order, pad, and align bit-fields however they like. In this case it seems that your compiler decided to add 5 bits of padding after x so that the overall structure would be 32-bit aligned. Commented Dec 7, 2021 at 19:59
  • How can I solve that? It's a very odd situation, especially since I want a precise bit layout because I intend to use it to define a hardware message. Commented Dec 7, 2021 at 20:04
  • Little Endian might also skew the "expected" view of the bytes. But what problem are you trying to solve? If you are trying to guarantee a specific bit order (say, for a network protocol or bus communication), write your own serialization (bit-packing) code. Commented Dec 7, 2021 at 20:08
  • You literally use the bit-manipulation operators like <<, >>, | or & to pack a binary message into a byte array instead of relying on the compiler to do the work for you. Commented Dec 7, 2021 at 20:11
  • unsigned char buffer[16] is 128 bits. Commented Dec 7, 2021 at 20:32

1 Answer


You have 5 spare bits because the next bit-field occupies too much space to fit inside the remaining bits of the current allocation unit: unsigned int is typically 32 bits, z + y + x already use 27 of them, and the 16-bit w cannot squeeze into the remaining 5, so it starts a new unit.

#include <cstdint> // fixed-width integer types

// force the compiler to remove padding between members
#pragma pack(push, 1) 

struct foo
{   
    // give every bit-field the same allocation unit type
    uint32_t z : 10;
    uint32_t y : 16;
    uint32_t x : 1;
    // up to here you have used 27 bits; another 16 will not fit,
    // so the compiler adds 5 bits of padding and starts w in a
    // new 32-bit unit. Nothing you can do about that here.
    uint32_t w : 16; // or you can declare w as uint16_t
};

#pragma pack(pop)

Also, adjacent bit-fields with different underlying types may not share an allocation unit; whether they do is implementation-defined (MSVC, for instance, starts a new unit whenever the declared type changes).
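The straddling rule above can be checked with a small sketch. It assumes the typical behavior of GCC, Clang, and MSVC on common ABIs (bit-field layout is implementation-defined, so this is not guaranteed by the standard): a bit-field that does not fit in the bits left in the current unit starts a new one.

#include <cassert>
#include <cstdint>

// 27 + 16 bits: w cannot fit into the 5 bits left in the first
// 32-bit unit, so it starts a second unit -> 8 bytes total.
struct split
{
    uint32_t zyx : 27;
    uint32_t w   : 16;
};

// 27 + 5 bits: b fits exactly into the remaining space -> 4 bytes.
struct packed
{
    uint32_t a : 27;
    uint32_t b : 5;
};

int main()
{
    assert(sizeof(split)  == 8); // w opened a new 32-bit unit
    assert(sizeof(packed) == 4); // b filled the remaining 5 bits
    return 0;
}

This mirrors the question exactly: z + y + x play the role of the 27-bit field, and w is the 16-bit field that forces the second unit.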


7 Comments

But is there no way to make w "break" automatically, putting 5 of its bits at the end of the first unit (completing 27 to 32) and the remaining 11 bits afterwards, instead of getting 5 bits of padding? I could hard-code it with a union or another struct, but I don't think that would be clean code.
@yacth you can fit several bit-fields within one byte (for padding == 1 byte), but you can't make one bit-field straddle two allocation units. Can you add a message scheme (LSB to MSB) to your question? Also, you can add empty bits explicitly via an unnamed bit-field such as uint32_t : 2, which inserts 2 bits between your fields.
I want the following scheme (MSB on the left, LSB on the right): W = 16 bits | X = 1 bit | Y = 16 bits | Z = 10 bits. If I understood you correctly, the compiler groups the bit-fields into 32-bit units, and since X, Y and Z add up to 27 bits it pads them to 32 with 5 extra bits before writing W. Could I instead create a structure struct sW { uint8_t firstFiveBits : 5; uint16_t remainingElevenBits : 11; } and declare W as an sW; would that solve my issue here?
Actually this is an example I was working on to learn about bit-fields, but my real data is divided into 1 bit, 32 bits, 6 bits, 64 bits, 16 bits, 2 bits and 7 bits (MSB → LSB), which will be even trickier with these padding issues.
Strangely, if I define every bit-field as uint64_t, there is no padding.
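The last comment can be checked with a sketch (again assuming typical compiler behavior; layout is implementation-defined): with a uint64_t allocation unit, all 10 + 16 + 1 + 16 = 43 bits fit into one 64-bit unit, so no mid-struct padding is inserted between the fields.

#include <cassert>
#include <cstdint>

struct foo64
{
    uint64_t z : 10;
    uint64_t y : 16;
    uint64_t x : 1;
    uint64_t w : 16; // 43 bits total, fits in a single 64-bit unit
};

int main()
{
    // one 64-bit allocation unit, fields laid out contiguously
    assert(sizeof(foo64) == 8);
    return 0;
}

The struct is no smaller than the 32-bit version (also 8 bytes), but the 5 padding bits between x and w disappear, which is what matters for a fixed wire format.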
