I'm trying to write a Bash function that inverts the function from the answer https://stackoverflow.com/a/72687565/1277576.
The goal is to obtain the decimal representation of a number from its binary representation in two's complement.
dec() {
  n=$(getconf LONG_BIT)          # word size, e.g. 64
  x=$(echo "ibase=2; $1" | bc)   # unsigned value of the binary string
  echo "if ($x<2^($n-1)) $x else -$((~$x))-1" | bc
}
My issue is that it works only for negative binary integers (that is, when the most significant bit is 1), while it fails for positive ones (that is, when the most significant bit is 0):
$ dec 1111111111111111111111111111111111111111111111111111111111111111
-1
$ dec 1000000000000000000000000000000000000000000000000000000000000000
-9223372036854775808
$ dec 0000000000000000000000000000000000000000000000000000000000000001
(standard_in) 1: syntax error
It seems that the line echo "if ($x<2^($n-1)) $x else -$((~$x))-1" | bc contains a syntax error, but I don't understand what it is.
Comment: Debug it (e.g. run it after set -xv) and review the output; you should find the echo is not generating what you think it is.