Firstly,
because of the declaration
struct bitfield
{
    unsigned a:5;
    unsigned b:5;
    unsigned c:6;
} bit;
the fields of your bit-field variable bit will be packed into 16 bits, arranged like this (least significant bit on the right):
00000000 00000000
ccccccbb bbbaaaaa
Secondly, because the initializer is {1, 3, 3}, the 1 is assigned to a, the first 3 to b, and the other 3 to c, which makes the storage look like this:
00001100 01100001
ccccccbb bbbaaaaa
Thirdly, because p is just a character pointer, it can see only one byte of the two: the one with the smaller address, which in little-endian format holds the low-order bits (edit: albeit implementation/platform dependent, see the comment and the other answer from Abstraction). So, p points to

01100001
bbbaaaaa

Additional note from MSDN:

The 8086 family of processors stores the low byte of integer values
before the high byte
But fourthly, you increment p! Thus, in little-endian format, it now points to
00001100
ccccccbb
Then finally, you print the value of the byte p points to, and 00001100 is simply the binary representation of the decimal value 12. That is why you get 12.
Reading the struct through a char * is implementation-defined at best. It would certainly be feasible for the value displayed to be all zeros on some types of processor.