7

I have char a[4] with: a[0] = 0x76, a[1] = 0x58, a[2] = 0x02, a[3] = 0x00. I want to print it as an int. Can you tell me how to do that?

3
  • As your tags indicate, I recommend you do some casting. Commented Jul 11, 2013 at 19:55
  • Yeah, I tried, but it doesn't work. Thanks to @djf for editing my question. Commented Jul 11, 2013 at 20:10
  • 2
    "Ya i tryed, but it don't work": what exactly did you try? What does "don't work" mean? Commented Jul 11, 2013 at 20:22

5 Answers

13

This works, but gives different results depending on the size of int, endianness, and so on:

#include <stdio.h>

int main(int argc, char *argv[])
{

    char a[4];
    a[0] = 0x76;
    a[1] = 0x58;
    a[2] = 0x02;
    a[3] = 0x00;
    printf("%d\n", *((int*)a));
    return 0;
}

This is cleaner, but you still have endianness/size problems:

#include <stdio.h>

typedef union {
    char c[4];
    int i;
} raw_int;

int main(int argc, char *argv[])
{

    raw_int i;
    i.c[0] = 0x76;
    i.c[1] = 0x58;
    i.c[2] = 0x02;
    i.c[3] = 0x00;
    printf("%d\n", i.i);
    return 0;
}

To force a certain endianness, build the int manually:

int i = (0x00 << 24) | (0x02 << 16) | (0x58 << 8) | (0x76);
printf("%d\n", i);

2 Comments

+1, nice. For more on the endian issues with the union approach, see this other SO question.
The union approach has trouble if sizeof(int) != 4. printf("%d\n", i.i) prints a half-defined int if sizeof(int) == 8. Suggest typedef union { char c[sizeof(int)]; ... and raw_int i; i.i = 0; i.c[0] = 0x76; .... Sadly, this still has trouble with sizeof(int) == 2.
7

I think a union is the appropriate way to do this.

#include <stdio.h>
#include <stdint.h>

union char_int {
    char chars[4];
    int32_t num;
};

int main() {
    union char_int tmp;

    tmp.chars[0] = 0x76;
    tmp.chars[1] = 0x58;
    tmp.chars[2] = 0x02;
    tmp.chars[3] = 0x00;
    printf("%d\n", tmp.num);
}

5 Comments

I think you want printf(PRId32 "\n", tmp.num). printf("%d\n", tmp.num) would have trouble should sizeof(int) != 4.
@chux yes PRIu32 is good, but your printf should be printf("%"PRIu32"", tmp.num ); you forgot the %.
@Grijesh Chauhan Agreed. "%"PRIu32"\n". Pesky things those '%'.
@chux, you're absolutely correct it should be "%"PRIu32"\n", however, I did handle the sizeof(int) != 4 by using int32_t in my union. If I can't rely on sizeof(int32_t) == 4 I've got bigger problems.
My concern about sizeof(int) != 4 was only directed at the "%d" not matching tmp.num. BTW: I think to print the int, as requested by the OP, may be done via printf("%d\n", (int) tmp.num) as the cast would deal with sizeof(int) != sizeof(int32_t).
2

Is the value stored in the array in big-endian or little-endian order? The portable way to do it is based on shift and mask, noting that in the general case, some of the high-order bits will be set and your plain char type might be signed or unsigned.

Little-endian

int i = (a[3] << 24) | ((a[2] & 0xFF) << 16) | ((a[1] & 0xFF) << 8) | (a[0] & 0xFF);

Big-endian

int i = (a[0] << 24) | ((a[1] & 0xFF) << 16) | ((a[2] & 0xFF) << 8) | (a[3] & 0xFF);

You can change those so that each term is consistently of the form ((a[n] & 0xFF) << m). If you know that plain char is unsigned, you can drop the mask operations. You can also use a cast: unsigned char *u = (unsigned char *)a; and then dereference u instead of a.

Comments

1

If you want to read it as a big- or little-endian integer, just do some bit shifting:

char a[4] = {0x76, 0x58, 0x02, 0x00};

// Big-endian:
uint32_t x = ((uint8_t)a[0] << 24) | ((uint8_t)a[1] << 16) | ((uint8_t)a[2] << 8) | (uint8_t)a[3];

// Little-endian:
uint32_t y = (uint8_t)a[0] | ((uint8_t)a[1] << 8) | ((uint8_t)a[2] << 16) | ((uint8_t)a[3] << 24);

If you want to read it as a native-endian integer, you can cast the array to a pointer to an integer and dereference that. Note that this is allowed only for char arrays -- for any other types, doing so would break C's strict aliasing rules, so it would not be safe or portable. For example:

char a[4] = {0x76, 0x58, 0x02, 0x00};

// Cast to pointer to integer and dereference.  This is only allowed if `a' is an
// array of `char'.
uint32_t x = *(uint32_t *)a;

Alternatively, you can use a union, or just memcpy() the data directly. Both of these are safe and portable, as long as a char is 8 bits (its size in bits is given by the macro CHAR_BIT).

char a[4] = {0x76, 0x58, 0x02, 0x00};
uint32_t x;

// Copy the data directly into x
memcpy(&x, a, sizeof(x));

// Use a union to perform the cast
union
{
    char a[4];
    uint32_t x;
} u;
memcpy(u.a, a, sizeof(a));
// u.x now contains the integer value

Note that I've used uint32_t in all of the examples above, since an int is not guaranteed to be 4 bytes on all systems. The type uint32_t is defined in the header file <stdint.h>, and if it's defined by your compiler, it's guaranteed to be 32 bits wide.

4 Comments

Don't you want ((uint32_t)a[0] << 24), etc. instead of ((uint8_t)a[0] << 24)?
@chux: No. If char is signed, then negative values get promoted incorrectly, e.g. the byte 0xaa gets sign-extended to 0xffffffaa, when instead we wanted 0x000000aa. The operands of the << operators get promoted to unsigned int before applying the operator (the default integer promotions), so there's no worry of the shifting causing an overflow of the 8-bit value.
I see your char to int to uint32_t issue. My concern was if sizeof(int) < 4. Then does not (uint8_t)a[0] << 24 become 0 or UB? I suppose in the OP case, it can be assumed sizeof(int) >= 4.
@chux: Yeah, I suppose if sizeof(int) < 4, then the << 24 will overflow. So in that case, you'd have to write something like (uint32_t)(uint8_t)a[0] for each byte to be fully portable.
1

Another option is to use the bitwise operators | and << (left shift), as follows (read the comments to understand each step):

#include <stdio.h>
#include <stdint.h>

int main(int argc, char *argv[])
{

  char a[4];
  int32_t  i = 0;  // 32 bits = 4 bytes
  a[0] = 0x76;
  a[1] = 0x58;
  a[2] = 0x02;
  a[3] = 0x00;

  i = 0;   // initial value must be `0`
  i = i | a[0] << ( 3 * 8); // 0x76 => 0x76 00 00 00, gives i = 0x76 00 00 00
  i = i | a[1] << ( 2 * 8); // 0x58 => 0x00 58 00 00, gives i = 0x76 58 00 00
  i = i | a[2] << ( 1 * 8); // 0x02 => 0x00 00 02 00, gives i = 0x76 58 02 00
  i = i | a[3] << ( 0 * 8); // 0x00 => 0x00 00 00 00, gives i = 0x76 58 02 00

  printf("Hex: %x\nDec: %d \n", i, i);
  return 0;
}

Output:

$ gcc  -Wall -pedantic yy.c 
$ ./a.out 
Hex: 76580200            <- hexadecimal
Dec: 1985479168          <- decimal

Notice: i = i | a[3] << ( 0 * 8); can be just i = i | a[3];. I wrote it like that to keep the code uniform.

Edit:

Or you can just do it like this:

i = 0 | 
     a[0] << ( 3 * 8) | 
     a[1] << ( 2 * 8) |
     a[2] << ( 1 * 8) |
     a[3] << ( 0 * 8);

See Codepad for the working code.

1 Comment

You didn't try with any char larger than 0x80
