
I apologize if this question has been answered already but I have not been able to find what I am looking for.

I am working in C++ with an SPI device. The SPI device outputs data as 16-bit words in two's-complement form. I am trying to convert this data to signed decimal values for use with a filter.

I've attached some sample code that asks the user to input a number in two's complement and then outputs the signed decimal version.

#include <iostream>     
#include <stdlib.h>
#include <cstdint>
#include <cmath>
#include <bitset>
using std::cout;
using std::endl;
using std::cin;
using std::hex;
using std::dec;
using std::bitset;
int main () {
    uint16_t x2=0;
    cout<<"Please enter the number you would like to convert from 2's complement.  "<<endl;
    cin>>x2;
    int diff=0x0000-x2;
    cout<<"The number you have entered is: "<<dec<<diff<<endl;
return 0;
}

When I run this program and input something like 0x3B4A, it always outputs 0. I'm not entirely sure what is going on, and I'm very new to C++, so please excuse me if this is a stupid question. Also, please ignore anything extra in the header; this is part of a large project and I couldn't remember which parts of the header go with this specific section of code.

Thanks!

Edit: This is mostly for Ben. After reading your most recent comment I made the following changes, but I am still simply getting the decimal equivalent of the hexadecimal number I entered:

#include <iostream>     
#include <stdlib.h>
#include <cstdint>
#include <cmath>
#include <bitset>
using std::cout;
using std::endl;
using std::cin;
using std::hex;
using std::dec;
using std::bitset;
int main () {
    int16_t x2=0;
    cout<<"Please enter the number you would like to convert from 2's complement.  "<<endl;
    cin>>hex>>x2;
    int flags= (x2>>14) & 3;
    int16_t value=(x2 << 2) >> 2;
    cout<<"The number you have entered is: "<<dec<<value<<endl;
return 0;
}
  • The "Conversion" is part of the parsing. This 0x0000-x2 is not helpful in any way. Commented Apr 8, 2014 at 19:42

3 Answers


I'm not sure it is necessary for the OP's question, but for anybody who is just looking for a formula for converting a 16-bit two's-complement unsigned integer to a signed integer, I think a variant of it looks like this (for input val):

(0x8000&val ? (int)(0x7FFF&val)-0x8000 : val)

This amounts to:

  • if the top (sign) bit is 1, it is a negative number, with the remaining bits holding the rest of the two's-complement value
  • recover the negative value by masking off the sign bit and subtracting 0x8000
  • otherwise the lower bits are just the positive integer value

It is probably a good idea to wrap this in a function and do some basic error checking (you can also enforce that the input actually fits in an unsigned 16-bit integer).




You asked cin to read as decimal (by making no format changes), so as soon as it reads the x, which is not a 0-9 digit, it stops, leaving you with zero.

Just add hex to your cin line: cin >> hex >> x2;

3 Comments

Thanks, Mark. I made this correction and I can now successfully input a hexadecimal number; however, I still cannot get the correct conversion. For instance, when I input 0x3B4A I now get an output of -15178, but based on my understanding of two's complement, this value should correspond to -1206. Do you have any idea why this is happening? Basically, what I believe I am doing is taking the complement of the number 0x3B4A, which should then simply be the hexadecimal equivalent of the decimal value of the number.
Now, I know another way I can get this conversion right, using the formula dec = -a_{n-1}·2^{n-1} + Σ_{i=0}^{n-2} a_i·2^i, where n is the length of the binary representation of the two's-complement number and a_i is the i-th bit of that representation (all written in loose LaTeX format). The reason I have not used this conversion is that someone told me it would be very slow compared to other methods, and the application I need this for has to run very fast (again, because it is being used by a filter). Thanks again!
Thanks, Mark, Ben was able to help me out the rest of the way but your answer was an important first step! If I could up vote your answer I would.

The standard library function strtol converts string input to a number, and supports the 0x prefix as long as you pass 0 as the radix argument.

Since int16_t is almost certainly 16-bit two's complement signed, you can just use that.

7 Comments

Ben, I tried to use int16_t and remove the 0x0000-x2; however, I still got the same results as before. Could you elaborate on how this would work? Also, I could foresee some trouble with this even if we do get it working, because the device will output a 16-bit word, but the two's complement value is only contained in bits 0-13; bits 15 and 14 are flags.
@user: Then you should pull off the flags: flags = (input >> 14) & 3, and propagate the sign bit from bit 13 to bits 14 and 15 to make a full 16-bit two's complement: value = (input << 2) >> 2
Ben, I tried implementing this (see the addition of code in my original question); however, I am still simply getting the value of the hexadecimal input. That is, inputting 0x3b4a (which should be the 14 bit 2's complement of -1206) outputs 15178.
@andrew: put it in an int16_t before doing the shift
Thank you very much for your continued responses Ben. I thought that I already did store the value in an int16_t by initializing x2 as an int16_t. Is this not the case?
