I'm writing some C code to look at the bit representations of integers. In the process I found that this code returns different results depending on the optimization level. Am I just doing something wrong, or is this a compiler bug?
Here is the code:
#include <stdio.h>

/* Returns 1 if x fits in an n-bit two's-complement integer,
   i.e. TMin_n <= x <= TMax_n, and 0 otherwise. */
int test(int x, int n)
{
    int TMin_n = -(1 << (n-1));     /* smallest n-bit value */
    int TMax_n = (1 << (n-1)) - 1;  /* largest n-bit value  */
    return x >= TMin_n && x <= TMax_n;
}

int main()
{
    int x = 0x80000000;  /* bit pattern of INT_MIN on this platform */
    int n = 0x20;        /* 32 */
    if (test(x, n)) {
        printf("passes\n");
    }
    else {
        printf("fails\n");
    }
}
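For what it's worth, my expectation is that for n = 32 the bounds should come out as INT_MIN and INT_MAX. This throwaway check (separate from the repro above; the names just mirror the ones in test()) is how I've been sanity-checking that expectation:

#include <limits.h>
#include <stdio.h>

int main(void)
{
    int n = 32;
    int TMin_n = -(1 << (n-1));     /* same expressions as in test() */
    int TMax_n = (1 << (n-1)) - 1;
    printf("TMin_n = %d (INT_MIN = %d)\n", TMin_n, INT_MIN);
    printf("TMax_n = %d (INT_MAX = %d)\n", TMax_n, INT_MAX);
    return 0;
}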
If I compile it this way, I get a passing result:
$ gcc tester.c -o tester
$ ./tester
passes
If I compile it this way, I get a failing result:
$ gcc -O1 tester.c -o tester
$ ./tester
fails
The problem is that x <= TMax_n evaluates to false only in the optimized build.
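To narrow that down, I split the return expression from test() into its two halves and printed them separately; roughly this, compiled with the same commands as above:

#include <stdio.h>

int main(void)
{
    int x = 0x80000000;
    int n = 0x20;
    int TMin_n = -(1 << (n-1));
    int TMax_n = (1 << (n-1)) - 1;
    /* Print each half of the comparison from test() on its own
       to see which one changes between -O0 and -O1. */
    printf("x >= TMin_n : %d\n", x >= TMin_n);
    printf("x <= TMax_n : %d\n", x <= TMax_n);
    return 0;
}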
Here are my platform details:
$ gcc --version
Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/usr/include/c++/4.2.1
Apple LLVM version 8.0.0 (clang-800.0.42.1)
Target: x86_64-apple-darwin16.4.0
Thread model: posix
InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin