
For a project I compile my shaders to binary data and then convert them to a header file that declares the byte code of the shader as a const char array.

However, during compilation I get the following warning:

"conversion from int to const char requires a narrowing conversion".

Now, I generally know why this happens: the literals are of type int, while the array is declared as char.

const char base_ps[] = {
    0x44, 0x58, 0x42, 0x43, 0x12, // ... etc.
};

When hovering over these values, IntelliSense also shows they are treated as int. For instance, for the value at index 0 it lists (int)0x44.

Is there any workaround for this? Obviously I could explicitly put (char) before each value, but that would mean adding an extra step to shader generation that parses the header file.

I'd rather not suppress the warning either, since I want warnings treated as errors. Any advice on this?
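For reference, the cast-everywhere workaround I mean would look something like this (a sketch reusing the leading bytes from above):

const char base_ps[] = {
    (char)0x44, (char)0x58, (char)0x42, (char)0x43, (char)0x12,
    // ... one cast per remaining byte
};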

Comments

  • Do you still get the warning if you use something like const char foo[] = "\x44\x58\x42\x43";?
  • Also, '\x44', '\x58', '\x42', '\x43', '\x12'...
  • Strange, I couldn't reproduce this
  • Hm, that's odd. Is it maybe the warning level? I'm generating my project using CMake and I'm not sure what the default warning level is. Here it doesn't work: rextester.com/IIE43940 Maybe it's the way it's declared?
  • Also @NathanOliver I don't get the warning then, just when declared as 0x..

2 Answers


The issue is that you are using hex values that are too large to store in a char (which is usually signed and 8 bits wide), so the largest positive value it can hold is 127, or 0x7F. The full array evidently contains values over 127 further along, hence the (correct) narrowing conversion warning.
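A minimal illustration (assuming char is signed on your platform, as it is by default with MSVC):

const char bad[] = { 0x80 };         // narrowing: 128 does not fit in a signed char (max 127)
const unsigned char ok[] = { 0x80 }; // fine: 0x80 fits in unsigned char (0..255)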

Either use unsigned char instead of char, or use only values no larger than 0x7F, like here:

const char deferred_ps[]={
    0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 
    0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f,
    0x20, 0x21, 0x22, 0x23, 0x24, 0x25, 0x26, 0x27, 0x28, 0x29, 0x2a, 0x2b, 0x2c, 0x2d, 0x2e, 0x2f,
    0x30, 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37, 0x38, 0x39, 0x3a, 0x3b, 0x3c, 0x3d, 0x3e, 0x3f,
    0x40, 0x41, 0x42, 0x43, 0x44, 0x45, 0x46, 0x47, 0x48, 0x49, 0x4a, 0x4b, 0x4c, 0x4d, 0x4e, 0x4f,
    0x50, 0x51, 0x52, 0x53, 0x54, 0x55, 0x56, 0x57, 0x58, 0x59, 0x5a, 0x5b, 0x5c, 0x5d, 0x5e, 0x5f,
    0x60, 0x61, 0x62, 0x63, 0x64, 0x65, 0x66, 0x67, 0x68, 0x69, 0x6a, 0x6b, 0x6c, 0x6d, 0x6e, 0x6f,
    0x70, 0x71, 0x72, 0x73, 0x74, 0x75, 0x76, 0x77, 0x78, 0x79, 0x7a, 0x7b, 0x7c, 0x7d, 0x7e, 0x7f
};

Note that if you ignore the narrowing conversion warning, the result is implementation-defined, which is non-portable.
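If you go the unsigned char route instead, the declaration from the question would simply become something like this (a sketch; the 0x80 and 0xFF bytes are made up here to show that values above 0x7F are now fine):

const unsigned char base_ps[] = {
    0x44, 0x58, 0x42, 0x43, 0x12, 0x80, 0xFF,
    // ... rest of the shader bytecode
};

If some API insists on a const char*, a reinterpret_cast at the call site is one option.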


4 Comments

Why would there be signed integer overflow? That's conversion between int and char and that's implementation defined AFAIK.
@ftfish: When the compiler warns you about a narrowing conversion, it means that you are going from a type that is larger (in terms of bits) to a type that is smaller. It's true that whether char is signed or not is implementation-defined. Many implementations choose for it to be signed, so there is a risk of signed integer overflow.
There is a conversion from int to char, I know. However, this conversion is well-defined if the destination is unsigned or the destination can hold the source value and implementation-defined otherwise, but not undefined. For a reference, see here. Quote: If the destination type is signed, the value does not change if the source integer can be represented in the destination type. Otherwise the result is implementation-defined. (Note that this is different from signed integer arithmetic overflow, which is undefined).
@ftfish: You're right that the standard says implementation-defined. I've updated my answer.

You can suppress the warning by using a character literal instead of an integer literal. With character literals you can use an escape sequence of the form \xNN to denote a character with a given hexadecimal value. That would look like

const char foo[] = "\x44\x58\x42\x43";

or

const char foo[] = {'\x44', '\x58', '\x42', '\x43'};

The difference between them is that the first one is a C string, so it adds a null terminator at the end. If you do not want that, you need to use the second form.
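As a quick sanity check (a sketch; the array names here are placeholders), the size difference can be verified at compile time:

const char with_nul[] = "\x44\x58\x42\x43";                 // 5 bytes: 4 data + '\0'
const char no_nul[]   = { '\x44', '\x58', '\x42', '\x43' }; // 4 bytes, no terminator
static_assert(sizeof(with_nul) == 5, "string literal appends a null terminator");
static_assert(sizeof(no_nul) == 4, "brace-enclosed list does not");

For shader bytecode the extra byte matters if you pass sizeof(...) as the blob length.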

6 Comments

This clears up quite a bit about why it's treated as an integer. I might as well just write up a post-generation step that parses the header and changes all the entries to the character-literal format you show. Thanks for the help!
@user3001604 No problem. Glad to help.
@user3001604 You might want to consider accepting Andy's answer since it has the real reason why you were getting the warning.
Then again, why does it also work to make everything a character literal?
@user3001604 It silently disables it. The compiler is allowed to do whatever it wants if you give it a hex value that is larger than the representable range: [...]A sequence of octal or hexadecimal digits is terminated by the first character that is not an octal digit or a hexadecimal digit, respectively. The value of a character literal is implementation-defined if it falls outside of the implementation-defined range defined for[...]
