You can't know, unless you know the specifics of the given system.
char is typically only 8 bits wide and cannot hold the value 500. Furthermore, the char type is unsuitable for storing integer values, since it has implementation-defined signedness: you can't know whether it holds values from 0 to 255 or from -128 to 127 (two's complement). In theory it can even have other constraints and other signed representations.
Also, the conversion from a large unsigned integer to a smaller signed one is implementation-defined.
I would guess that your specific system has a signed 8-bit char type using two's complement representation. The raw value of 500 is 0x1F4. Upon initialization, your particular compiler will truncate this to fit an 8-bit variable, meaning you end up with only the least significant byte, 0xF4. Since you have an 8-bit signed variable in two's complement format, 0xF4 equals -12.
(The default argument promotion applied to variadic arguments, such as those of printf, preserves the value, including the sign.)
None of this behavior is guaranteed across different systems.
Needless to say, the code is completely non-portable. You should not write code like this, which heavily relies on numerous forms of poorly defined behavior.
A compiler may warn about this:

warning: implicit conversion from 'int' to 'char' changes value from 500 to -12 [-Wconstant-conversion]

If CHAR_BIT == 8 (defined in <limits.h>), then characters can hold 256 different values. The plain char type can have the same range as unsigned char (0..255) or the same range as signed char (-128..+127, assuming two's complement). On your machine, it appears that plain char is signed. When the value 500 is stored, the extra (more significant) bit(s) are removed; the least significant 8 bits are stored in the variable, as either +244 or -12 (-12 for your system). When passed to printf(), the value is converted to (signed) int.