
Since char is only 1 byte long, is it better to use char when dealing with 8-bit unsigned integers?

Example: I was trying to create a struct for storing rgb values of a color.

struct color
{
  unsigned int r: 8;
  unsigned int g: 8;
  unsigned int b: 8;
};

Now, since the bit-fields are declared as unsigned int, the struct occupies 4 bytes in my case. But if I replace them with unsigned char, the struct takes 3 bytes of memory as intended (on my platform).

  • Efficiency for what, size or performance? The CPU could even work more slowly with data smaller than its native size. You can use uint_fast8_t to get the best fit for performance. Commented Aug 7, 2022 at 19:39
  • @EricPostpischil Sure. It is for gcc. I gave an example where sizeof(char) can be unequal to sizeof(uint_fast8_t). Commented Aug 7, 2022 at 20:51
  • @EricPostpischil uint_fast8_t is defined in the C99 standard. Commented Aug 7, 2022 at 21:00
  • @EricPostpischil Of course it is implementation-defined. int, long etc. are also implementation-defined. What is your objection? Are you stating they are always equal? I gave an example where the size of uint_fast8_t can be unequal to the size of char. Why? Because the target CPU has no instructions to work with chars separately, and casting to char requires additional instructions. That's the reason these types were introduced. Commented Aug 7, 2022 at 21:13
  • @EricPostpischil arm-none-eabi-gcc (Arch Repository) 12.1.0, compilation flags: -O2 -march=armv7. I don't see the sense in pointless arguing. Have a nice day! Commented Aug 7, 2022 at 21:28

1 Answer


No. Maybe a tiny bit.

First, this is a very platform-dependent question. However, the <stdint.h> header was introduced to help with exactly this.

Some hardware platforms are optimised for a particular size of operand and have an overhead to using smaller (in bit-size) operands even though logically the smaller operand requires less computation.

You should find that uint_fast8_t is the fastest unsigned integer type with at least 8 bits (#include <stdint.h> to use it). That may be the same as unsigned char or unsigned int depending on whether your question is 'yes' or 'no' respectively(*).

So the idea would be that if you're speed focused you'd use uint_fast8_t and the compiler will pick the fastest type fitting your purpose.

There are a couple of downsides to this scheme. One is that if you create very large quantities of data, performance can be impaired (and limits reached) because you're using an 'oversized' type for the purpose.

On a platform where a 4-byte int is faster than a 1-byte char you're using about 4 times as much memory as you need. If your platform is small or your scale large that can be a significant overhead.

Also, you need to be careful: if the underlying type is wider than the minimum size you expect, some calculations may be confounded.

Unsigned arithmetic 'clocks' (wraps around) neatly, but obviously at a different point if uint_fast8_t isn't in fact 8 bits.

It's platform dependent what the following returns:

#include <stdint.h>

//...

int foo(void) {
    uint_fast8_t x = 255;
    ++x;              /* wraps to 0 only if uint_fast8_t is exactly 8 bits wide */
    if (x == 0) {
        return 1;     /* uint_fast8_t is exactly 8 bits */
    }
    return 0;         /* uint_fast8_t is wider, so 255 + 1 == 256 */
}

The overhead of dealing with potentially outsized types can claw back your gains.

I tend to agree with Knuth that "premature optimisation is the root of all evil" and would say you should only get into this sort of cleverness if you need it.

Write typedef uint8_t color_comp; for now and get the application working before trying to shave fractions of a second off performance later!

I don't know what your application does, but it may be that it isn't compute-intensive in the RGB channels and the bottleneck (if any) is elsewhere. Maybe you'll find some high-load calculation where it's worth dealing with uint_fast8_t conversions and their issues.

The wisdom of Knuth is that you probably don't know where that is until later anyway.

(*) It could be unsigned long or indeed some other type. The only constraint is that it is an unsigned type and at least 8 bits.


2 Comments

You raise a really good point about the pitfalls of uint_fastN_t and uint_leastN_t —if one relies upon wraparound arithmetic on unsigned overflow (such as an 8-bit counter), an exact-width type like uint8_t should probably be used, regardless of the potential for inefficiency on some platforms.
@saxbophone The whole C model of generally specifying minimums and not actual limits can lead to issues. In many industries including the motor industry you're required to work with explicit sizes because they're safer than they are clever!
