Working with two's complement in Python - python-2.7

I am using Python as the main scripting language on a microcontroller. The microcontroller reads two 8-bit hex numbers from the I2C bus; for example:
out_L = C2
out_H = F2
Both of these are received as strings in Python. F2C2 represents a two's complement number, and I need its decimal value.
I can convert the hex strings to binary strings with
bin_out = "0b" + ( bin(int(hex_out, 16))[2:] ).zfill(8)
Now I have to convert the binary value to a decimal value, which is where I am stuck. I first have to do the two's complement conversion and then convert to decimal. Because the binary value is still a string, I can't do normal binary operations on it and can't convert it to decimal. Please assist. All my efforts to correctly change the binary string into a binary value give me an incorrect result.

You may just apply the two's complement correction directly to the original int value:
out_L = "C2"
out_H = "F2"
hex_out = ''.join((out_H, out_L))
value = int(hex_out, 16)  # value = 62146
if value > 0x7FFF:
    value -= 0x10000
print value  # output: -3390
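If other bit widths come up as well, the same correction generalizes; here is a small helper, offered as an illustrative sketch rather than part of the original answer:
def twos_complement(value, bits):
    # If the sign bit is set, subtract 2**bits to get the negative interpretation
    if value & (1 << (bits - 1)):
        value -= 1 << bits
    return value

print(twos_complement(int("F2C2", 16), 16))  # -3390
print(twos_complement(int("C2", 16), 8))     # -62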

Convert a binary string to a decimal integer in Python by using:
int(binary_number, 2)
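For example (a quick illustration), this also accepts the "0b"-prefixed string built in the question; the sign correction from the previous answer still has to be applied afterwards:
bin_out = "0b11110010"   # out_H = "F2" from the question
print(int(bin_out, 2))   # 242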

Related

float number to string converting implementation in STD

I've run into a curious issue. Look at this simple code:
int main(int argc, char **argv) {
    char buf[1000];
    snprintf_l(buf, sizeof(buf), _LIBCPP_GET_C_LOCALE, "%.17f", 0.123e30f);
    std::cout << "WTF?: " << buf << std::endl;
}
The output looks quite weird:
123000004117574256822262431744.00000000000000000
My question is: how is it implemented? Can someone show me the original code? I did not find it, or maybe it's too complicated for me.
I've tried to reimplement the same double-to-string transformation in Java code but failed. Even when I tried to get the exponent and fraction parts separately and sum the fractions in a loop, I always got zeros instead of these digits "...822262431744". When I tried to continue summing fractions beyond the 23 bits (of a float), I faced another issue - how many fraction terms do I need to collect? Why does the original code stop at the left part and not continue until the requested scale is exhausted?
So, I really do not understand the basic logic of how it is implemented. I've tried to define really big numbers (e.g. 0.123e127f), and it generates a huge number in decimal format. The number has much higher precision than a float can hold. This looks like an issue, because the string representation contains something which a float cannot.
Please read the documentation:
printf, fprintf, sprintf, snprintf, printf_s, fprintf_s, sprintf_s, snprintf_s - cppreference.com
The format string consists of ordinary multibyte characters (except %), which are copied unchanged into the output stream, and conversion specifications. Each conversion specification has the following format:
introductory % character
...
(optional) . followed by integer number or *, or neither that specifies precision of the conversion. In the case when * is used, the precision is specified by an additional argument of type int, which appears before the argument to be converted, but after the argument supplying minimum field width if one is supplied. If the value of this argument is negative, it is ignored. If neither a number nor * is used, the precision is taken as zero. See the table below for exact effects of precision.
....
Conversion specifier: f, F
Explanation: converts floating-point number to the decimal notation in the style [-]ddd.ddd. Precision specifies the exact number of digits to appear after the decimal point character. The default precision is 6. In the alternative implementation the decimal point character is written even if no digits follow it. For infinity and not-a-number conversion style see notes.
Expected argument type: double
So with f you forced the form ddd.ddd (no exponent), and with .17 you forced it to show 17 digits after the decimal separator. With such a big value, the printed outcome looks that odd.
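As a quick cross-check (a sketch, not part of the answer, assuming an IEEE-754 binary32 float), the same behaviour can be reproduced in Python by round-tripping the literal through a 32-bit float with struct and formatting it the same way:
import struct

f32 = struct.unpack("f", struct.pack("f", 0.123e30))[0]   # nearest 32-bit float value
print("%.17f" % f32)   # expected: 123000004117574256822262431744.00000000000000000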
Finally I've found out what the difference is between the Java float -> decimal -> string conversion and the C++ float -> string (decimal) conversion. I did not find the original source code, but I replicated the same behaviour in Java to make it clear. I think the code explains everything:
// the context size might be calculated properly by getting the maximum
// float number (including exponent value) - it's 40 + scale, 17 for me
MathContext context = new MathContext(57, RoundingMode.HALF_UP);
BigDecimal divisor = BigDecimal.valueOf(2);
int tmp = Float.floatToRawIntBits(1.23e30f);
boolean sign = tmp < 0;
tmp <<= 1;
// there might be a NaN value, this code does not support it
int exponent = (tmp >>> 24) - 127;
tmp <<= 8;
int mask = 1 << 23;
int fraction = mask | (tmp >>> 9);
// at this point we have all parts of the float: sign, exponent and fraction. Let's build the mantissa
BigDecimal mantissa = BigDecimal.ZERO;
for (int i = 0; i < 24; i++) {
    if ((fraction & mask) == mask) {
        // I'm not sure about speed, maybe division at each iteration might be faster than pow
        mantissa = mantissa.add(divisor.pow(-i, context));
    }
    mask >>>= 1;
}
// this was the core line where I was losing accuracy, because of the context
BigDecimal decimal = mantissa.multiply(divisor.pow(exponent, context), context);
String str = decimal.setScale(17, RoundingMode.HALF_UP).toPlainString();
// add the minus sign manually, because Java loses it if the value becomes 0 after scaling (the C++ version of the code doesn't)
if (sign) {
    str = "-" + str;
}
return str;
Maybe this topic is useless; who really needs to have the same implementation as C++? But at least this code keeps all the precision of the float number, compared to the most popular way of converting a float to a decimal string:
return BigDecimal.valueOf(1.23e30f).setScale(17, RoundingMode.HALF_UP).toPlainString();
The C++ implementation you are using uses the IEEE-754 binary32 format for float. In this format, the closest representable value to 0.123 · 10^30 is 123,000,004,117,574,256,822,262,431,744, which is represented in the binary32 format as +13,023,132 · 2^73. So 0.123e30f in the source code yields the number 123,000,004,117,574,256,822,262,431,744. (Because the number is represented as +13,023,132 · 2^73, we know its value is exactly that, which is 123,000,004,117,574,256,822,262,431,744, even though the digits "123000004117574256822262431744" are not stored directly.)
Then, when you format it with %.17f, your C++ implementation prints the exact value faithfully, yielding “123000004117574256822262431744.00000000000000000”. This accuracy is not required by the C++ standard, and some C++ implementations will not do the conversion exactly.
The Java specification also does not require formatting of floating-point values to be exact, at least in some formatting operations. (I am going from memory and some supposition here; I do not have a citation at hand.) It allows, perhaps even requires, that only a certain number of correct digits be produced, after which zeros are used if needed for positioning relative to the decimal point or for the requested format.
The number has much higher precision than float can be.
For any value represented in the float format, that value has infinite precision. The number +13,023,132 · 2^73 is exactly +13,023,132 · 2^73, which is exactly 123,000,004,117,574,256,822,262,431,744, to infinite precision. The precision the format has for representing numbers affects only which numbers it can represent, not how precisely it represents the numbers that it does represent.
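To see where those digits come from, here is a short check (an illustrative sketch, assuming an IEEE-754 binary32 float) that extracts the significand and exponent from the bit pattern and reconstructs the exact integer:
import struct

bits = struct.unpack("<I", struct.pack("<f", 0.123e30))[0]
significand = (bits & 0x7FFFFF) | 0x800000     # 23 stored bits plus the implicit leading 1
exponent = ((bits >> 23) & 0xFF) - 127 - 23    # unbias, then account for the 23 fraction bits
print(significand, exponent)                   # expected: 13023132 73
print(significand * 2**exponent)               # expected: 123000004117574256822262431744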

Reading internal temperature of an ADC that returns 3 databytes in two's complement (micropython)

As the title suggests, I'm trying to read the internal temperature of an ADC (the ADS1235 from Texas Instruments, to be precise) using the Raspberry Pi Pico running MicroPython.
The SPI communication between the Pico and the ADC is working fine; I've used an oscilloscope to measure and check.
The problem arises when I have to manipulate the 3 data bytes I receive from the ADC and turn them into a number which can be used to calculate the internal temperature.
The picture shows the 3 data bytes I receive when I issue the "Read Data" command.
The data is received in two's complement, MSB first. I've tried multiple ways to go from a 24-bit two's complement binary string to a negative or positive number.
A positive number calculation works fine, but when I try a negative number (where the most significant bit is 1) it doesn't work. I have a feeling that there must be some function or easier way to do the conversion, but I haven't been able to find it.
I've attached the code of my current converter function and the main section where I simulate that the ADC has sent 3 data bytes in the following order: [0x81, 0x00, 0x00]
As well as the output when the code has run.
import string

def twos_comp_to_decimal(adcreading):
    """compute the int value of 2's complement 24-bit number"""
    """https://www.exploringbinary.com/twos-complement-converter/ look at "implementation" section"""
    """https://note.nkmk.me/en/python-bit-operation/"""
    signbit = False  # Assume adc-reading is positive from the beginning
    if adcreading >= 0b100000000000000000000000:
        signbit = True
        print("negative signbit")
    if signbit:
        print("inv string")
        negativval = bin(~adcreading & 0b011111111111111111111111)
        negativval = int(negativval)
        negativval += 0b000000000000000000000001
        negativval *= -1
        return negativval
    return adcreading

if __name__ == '__main__':
    # tempdata = [0x80, 0xFF, 0x80]
    tempdata = [0x81, 0x00, 0x00]
    print("Slicing 3 databytes into one 24-bit number")
    adc24bit = int.from_bytes(bytearray(tempdata), "big")
    print(adc24bit)
    print(hex(adc24bit))
    print(bin(adc24bit))
    print(twos_comp_to_decimal(adc24bit))
    # print("Integer value: {}".format(adc24bit))
    #temperatureC = ((adc24bit - 122.400) / 420) + 25
    #print("Temp in celcius: {}".format(temperatureC))
I wrote this function to do the conversion from a 24-bit number written in two's complement to decimal; the number you provided as an example turned out to be -8323072. I checked this value here: https://www.exploringbinary.com/twos-complement-converter/.
Here is the code I wrote:
# example data
data = [0x81, 0x00, 0x00]
data_bin = bytearray(data)

def decimal_from_two_comp(bytearr):
    string = ''
    for byte in bytearr:
        string = string + f'{byte:08b}'
    # string with the 24 bits
    print(string)
    # conversion from two's complement to decimal
    decimal = -int(string[0]) * 2**23
    for i, num in enumerate(string[1:]):
        num = int(num)
        decimal += num * 2**(23 - (i + 1))
    return decimal
You can check on the Wikipedia page for Two's complement, under the section "Converting from two's complement representation", that the algorithm I provided results in the formula they present in that section.
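For comparison, the same conversion can also be done directly on the integer, without building a bit string. This is a sketch rather than part of the answer; int.from_bytes(..., signed=True) exists in CPython 3 but may not be available in every MicroPython build, so the explicit correction is shown as well:
data = bytes([0x81, 0x00, 0x00])

raw = int.from_bytes(data, "big")                    # unsigned 24-bit value
value = raw - (1 << 24) if raw & (1 << 23) else raw  # apply the sign correction
print(value)                                         # -8323072

# CPython 3 can also decode it in one step (MicroPython support may vary):
# print(int.from_bytes(data, "big", signed=True))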

converting a hexadecimal number correctly in a decimal number (also negatives)

As the headline suggests, I am trying to convert hex numbers like 0x320000dd to decimal numbers.
My code only works for positive numbers but fails when it comes to hex numbers that represent negative decimal numbers. Here is an excerpt of my code:
cin >> hex >> x;
unsigned int number = x;
int r = number & 0xffffff;
My input is already in hex, and the computer converts it automatically into an integer. What I am trying to do is get the operand of the hex number, i.e. the last 24 bits.
Can you help me get my code working for negative values like 0x32fffdc9 or 0x32ffffff? Thanks a lot!
EDIT:
I would like my output to be :
0x32fffdc9 --> -567
or
0x32ffffff --> -1
so just the plain decimal values, but instead it gives me 16776649 and 16777215 for the examples above.
Negative integers are typically stored in two's complement, meaning that if the most significant bit (MSB) is not set, the number is not negative. This means that just as you need to clear the 8 MSBs of your number to clamp a 32-bit number to a 24-bit positive number, so you'll need to set the 8 MSBs of your number to clamp to a negative number:
const int32_t r = 0x800000 & number ? 0xFF000000 | number : number & 0xFFFFFF;
vector<bool> or bitset may be worth your consideration, as they would make the mapping from the hexadecimal masks to the range of bits being set clearer.
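The same sign-extension idea, expressed in Python for consistency with the earlier sections (an illustrative sketch using the question's example inputs):
def low24_signed(number):
    # Set the top 8 bits when bit 23 is set, then reinterpret as a signed 32-bit value
    if number & 0x800000:
        return (number | 0xFF000000) - (1 << 32)
    return number & 0xFFFFFF

print(low24_signed(0x32fffdc9))  # -567
print(low24_signed(0x32ffffff))  # -1
print(low24_signed(0x320000dd))  # 221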

How to use negative number with openssl's BIGNUM?

I want a C++ version of the following Java code.
BigInteger x = new BigInteger("00afd72b5835ad22ea5d68279ffac0b6527c1ab0fb31f1e646f728d75cbd3ae65d", 16);
BigInteger y = x.multiply(BigInteger.valueOf(-1));
//prints y = ff5028d4a7ca52dd15a297d860053f49ad83e54f04ce0e19b908d728a342c519a3
System.out.println("y = " + new String(Hex.encode(y.toByteArray())));
And here is my attempt at a solution.
BIGNUM* x = BN_new();
BN_CTX* ctx = BN_CTX_new();
std::vector<unsigned char> xBytes = hexStringToBytes("00afd72b5835ad22ea5d68279ffac0b6527c1ab0fb31f1e646f728d75cbd3ae65d");
BN_bin2bn(&xBytes[0], xBytes.size(), x);
BIGNUM* negative1 = BN_new();
std::vector<unsigned char> negative1Bytes = hexStringToBytes("ff");
BN_bin2bn(&negative1Bytes[0], negative1Bytes.size(), negative1);
BIGNUM* y = BN_new();
BN_mul(y, x, negative1, ctx);
char* yHex = BN_bn2hex(y);
std::string yStr(yHex);
//prints y = AF27542CDD7775C7730ABF785AC5F59C299E964A36BFF460B031AE85607DAB76A3
std::cout <<"y = " << yStr << std::endl;
(Ignore the letter case.) What am I doing wrong? How do I get my C++ code to output the correct value "ff5028d4a7ca52dd15a297d860053f49ad83e54f04ce0e19b908d728a342c519a3"? I also tried setting negative1 by doing BN_set_word(negative1, -1), but that gives me the wrong answer too.
The BN_set_negative function sets a negative number.
The negative of afd72b5835ad22ea5d68279ffac0b6527c1ab0fb31f1e646f728d75cbd3ae65d is actually -afd72b5835ad22ea5d68279ffac0b6527c1ab0fb31f1e646f728d75cbd3ae65d, in the same way that -2 is the negative of 2.
ff5028d4a7ca52dd15a297d860053f49ad83e54f04ce0e19b908d728a342c519a3 is a large positive number.
The reason you are seeing this number in Java is due to the toByteArray call. According to its documentation, it selects the minimum field width that is a whole number of bytes and is also capable of holding a two's complement representation of the negative number.
In other words, by using the toByteArray function on a number that currently has 1 sign bit and 256 value bits, you end up with a field width of 264 bits. However, if your negative number's first nibble were 7 for example, rather than a, then (according to this documentation - I haven't actually tried it) you would get a 256-bit field width out (i.e. 8028d4..., not ff8028d4...).
The leading 00 you have used in your code is insignificant in OpenSSL BN. I'm not sure if it is significant in BigInteger, although the documentation for that constructor says "The String representation consists of an optional minus or plus sign followed by a sequence of one or more digits in the specified radix."; so the fact that it accepts a minus sign suggests that if the minus sign is not present then the input is treated as a large positive number, even if its MSB is set. (Hopefully a Java programmer can clear this paragraph up for me.)
Make sure you keep clear in your mind the distinction between a large negative value, and a large positive number obtained by modular arithmetic on that negative value, such as is the output of toByteArray.
So your question is really: does OpenSSL BN have a function that emulates the behaviour of BigInteger.toByteArray()?
I don't know if such a function exists (the BN library has fairly bad documentation IMHO, and I've never heard of it being used outside of OpenSSL, especially not in a C++ program). I would expect it doesn't, since toByteArray's behaviour is kind of weird; and in any case, all of the BN output functions appear to output using a sign-magnitude format, rather than a two's complement format.
But to replicate that output, you could add either 2^256 or 2^264 to the large negative number, and then do BN_bn2hex. In this particular case, add 2^264. In general you would have to measure the current bit-length of the number being stored and round the exponent up to the nearest multiple of 8.
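The "add 2^264" step is easy to check with Python's arbitrary-precision integers (a quick illustration, not OpenSSL code):
neg = -0xafd72b5835ad22ea5d68279ffac0b6527c1ab0fb31f1e646f728d75cbd3ae65d
print(hex(neg + (1 << 264)))
# expected: 0xff5028d4a7ca52dd15a297d860053f49ad83e54f04ce0e19b908d728a342c519a3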
Or you could even output in sign-magnitude format (using BN_bn2hex or BN_bn2mpi) and then iterate through inverting each nibble and fixing up the start!
NB. Is there any particular reason you want to use OpenSSL BN? There are many alternatives.
Although this is a question from 2014 (more than five years ago), I would like to solve your problem / clarify the situation, which might help others.
a) Sign-magnitude and two's complement
In computer arithmetic there are several ways to represent signed numbers; the two relevant here are sign-magnitude and two's complement. A sign-magnitude representation stores only the absolute (positive) value and keeps the sign separately, e.g. in one bit (0 = positive, 1 = negative). This is essentially the situation of floating-point numbers (IEEE 754): the magnitude is stored in the mantissa and exponent, together with one additional sign bit. Numbers in sign-magnitude form have two zeros, -0 and +0, because the sign is treated independently of the absolute value itself.
In two's complement, the most significant bit is used as the sign bit. There is no '-0' because negating a value in two's complement means performing the logical NOT (in C: tilde) operation followed by adding one.
As an example, one byte (in two's complement) can be one of the three values 0xFF, 0x00, 0x01 meaning -1, 0 and 1. There is no room for the -0. If you have, e.g. 0xFF (-1) and want to negate it, then the logical NOT operation computes 0xFF => 0x00. Adding one yields 0x01, which is 1.
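The NOT-then-add-one rule is easy to verify in Python (a quick illustration, masking to 8 bits):
x = 0xFF                          # -1 as an 8-bit two's complement value
print(hex((~x + 1) & 0xFF))       # 0x1  -> negating -1 gives 1
print(hex((~0x01 + 1) & 0xFF))    # 0xff -> negating 1 gives -1 (0xFF)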
b) OpenSSL BIGNUM and Java BigInteger
OpenSSL's BIGNUM implementation represents numbers in sign-magnitude form (an absolute value plus a separate sign flag). Java's BigInteger treats byte arrays as two's complement. That was your disaster. Your big integer (in hex) is 00afd72b5835ad22ea5d68279ffac0b6527c1ab0fb31f1e646f728d75cbd3ae65d. This is a positive 256-bit integer. It consists of 33 bytes because there is a leading zero byte 0x00, which is absolutely correct for an integer stored as two's complement, because the most significant bit (omitting the initial 0x00) is set (in 0xAF), which would otherwise make this number a negative number.
c) Solution you were looking for
OpenSSL's function BN_bin2bn works with absolute values only. For OpenSSL, you can leave the initial zero byte or cut it off - it does not make any difference, because OpenSSL canonicalizes the input data anyway, which means cutting off all leading zero bytes. The next problem in your code is the way you try to make this integer negative: you want to multiply it by -1. Using 0xFF as the only input byte to bin2bn makes this 255, not -1. In fact, you multiply your big integer by 255, yielding the overall result AF27542CDD7775C7730ABF785AC5F59C299E964A36BFF460B031AE85607DAB76A3, which is still positive.
Multiplication with -1 works like this (snippet, no error checking):
BIGNUM* x = BN_bin2bn(&xBytes[0], (int)xBytes.size(), NULL);
BIGNUM* negative1 = BN_new();
BN_one(negative1); /* negative1 is +1 */
BN_set_negative(negative1, 1); /* negative1 is now -1 */
BN_CTX* ctx = BN_CTX_new();
BIGNUM* y = BN_new();
BN_mul(y, x, negative1, ctx);
Easier is:
BIGNUM* x = BN_bin2bn(&xBytes[0], (int)xBytes.size(), NULL);
BN_set_negative(x,1);
This does not solve your problem because as M.M said, this just makes -afd72b5835ad22ea5d68279ffac0b6527c1ab0fb31f1e646f728d75cbd3ae65d from afd72b5835ad22ea5d68279ffac0b6527c1ab0fb31f1e646f728d75cbd3ae65d.
You are looking for the two's complement of your big integer, which is computed like this:
int i;
for (i = 0; i < (int)sizeof(value); i++)
    value[i] = ~value[i];
for (i = ((int)sizeof(value)) - 1; i >= 0; i--)
{
    value[i]++;
    if (0x00 != value[i])
        break;
}
This is an unoptimized version of the two's complement, if 'value' is your 33-byte input array containing your big integer prefixed by the byte 0x00. The result of this operation is the 33 bytes ff5028d4a7ca52dd15a297d860053f49ad83e54f04ce0e19b908d728a342c519a3.
d) Working with two's complement and OpenSSL BIGNUM
The whole sequence is like this (a Python sketch of the prologue and epilogue follows the list):
1. Prologue: If the input is negative (check the most significant bit), then compute the two's complement of the input.
2. Convert to BIGNUM using BN_bin2bn.
3. If the input was negative, then call BN_set_negative(x, 1).
4. Main part: Carry out all arithmetic operations using the OpenSSL BIGNUM package.
5. Call BN_is_negative to check for a negative result.
6. Convert to raw binary bytes using BN_bn2bin.
7. If the result was negative, then compute the two's complement of the result.
8. Epilogue: If the result was positive and the most significant bit of the raw result bytes (output of step 7) is set, then prepend a byte 0x00. If the result was negative and the most significant bit of the raw result bytes is clear, then prepend a byte 0xFF.
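Since the main question in this thread is about Python, here is a rough Python 3 sketch of the prologue and epilogue steps (the two's complement <-> signed integer conversion around the sign-magnitude arithmetic); it illustrates the list above and is not OpenSSL code:
def twos_complement_bytes_to_int(data):
    # Steps 1-3: interpret a big-endian two's complement byte string as a signed int
    value = int.from_bytes(data, "big")
    if data[0] & 0x80:                     # most significant bit set -> negative
        value -= 1 << (8 * len(data))
    return value

def int_to_twos_complement_bytes(value):
    # Steps 6-8: produce the shortest big-endian two's complement encoding
    length = 1
    while not -(1 << (8 * length - 1)) <= value < (1 << (8 * length - 1)):
        length += 1
    return (value % (1 << (8 * length))).to_bytes(length, "big")

x = twos_complement_bytes_to_int(bytes.fromhex(
    "00afd72b5835ad22ea5d68279ffac0b6527c1ab0fb31f1e646f728d75cbd3ae65d"))
print(int_to_twos_complement_bytes(-x).hex())
# expected: ff5028d4a7ca52dd15a297d860053f49ad83e54f04ce0e19b908d728a342c519a3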

Reading a double stored in a binary format in a character array

I am trying to read a floating point number stored as a specific binary format in a char array. The format is as follows, where each letter represents a binary digit:
SEEEEEEE MMMMMMMM MMMMMMMM MMMMMMMM MMMMMMMM MMMMMMMM MMMMMMMM MMMMMMMM
The format is more clearly explained on this website. Basically, the exponent is in excess-64 notation and the mantissa is normalized to values < 1 and > 1/16. To get the true value of the number, the mantissa is multiplied by 16 to the power of the true value of the exponent.
Basically, what I've done so far is to extract the sign and the exponent values, but I'm having trouble extracting the mantissa. The implementation I'm trying is quite brute force and probably far from ideal in terms of code, but it seemed the simplest to me. It basically is:
unsigned long a = 0;
for (int i = 0; i < 7; i++)
    a += static_cast<unsigned long>(m_bufRecord[index+1+i]) << ((6-i)*8);
It takes each 8-bit byte stored in the char array and shifts it left according to its index in the array. So if the array I have is as follows:
{0x3f, 0x28, 0xf5, 0xc2, 0x8f, 0x5c, 0x28, 0xf6}
I'm expecting a to take the value:
0x28f5c28f5c28f6
However, with the above implementation a takes the value:
0x27f4c18f5c27f6
Later, I convert the long integer to a floating number using the following code:
double m = a;
m = m*(pow(16, e-14));
m = (s==1)?-m:m;
What is going wrong here? Also, I'd love to know how a conversion like this would be implemented ideally?
I haven't tried running your code, but I suspect the reason you get this:
0x27f4c18f5c27f6
instead of
0x28f5c28f5c28f6
is because you have a "negative number" in the cell previous to it. Is your 8-bit byte array a signed or unsigned type? I expect it will work better if you make it unsigned. [Or move your cast so that it's before the shift operations].
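For illustration, here is a rough Python sketch of decoding the format described in the question (sign bit, 7-bit excess-64 base-16 exponent, 56-bit mantissa); the byte order is assumed to be big-endian, as in the example array, and this is not meant as a drop-in replacement for the C++ code:
def decode_excess64(buf):
    sign = buf[0] >> 7                                 # S bit
    exponent = (buf[0] & 0x7F) - 64                    # 7-bit exponent, excess-64
    mantissa = int.from_bytes(bytes(buf[1:]), "big")   # 56 mantissa bits
    value = (mantissa / float(1 << 56)) * 16.0 ** exponent
    return -value if sign else value

print(decode_excess64([0x3f, 0x28, 0xf5, 0xc2, 0x8f, 0x5c, 0x28, 0xf6]))  # ~0.01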