Converting a hexadecimal number correctly to a decimal number (negatives too) - C++

As the title suggests, I am trying to convert hex numbers like 0x320000dd to decimal numbers.
My code only works for positive numbers but fails when it comes to hex numbers that represent negative decimal numbers. Here is an excerpt of my code:
int x;
cin >> hex >> x;
unsigned int number = x;
int r = number & 0xffffff;
My input is already in hex, and the computer converts it automatically to an integer. What I am trying to do is extract the operand of the hex number, i.e. the last 24 bits.
Can you help me get my code working for negative values like 0x32fffdc9 or 0x32ffffff? Thanks a lot!
EDIT:
I would like my output to be :
0x32fffdc9 --> -567
or
0x32ffffff --> -1
so just the plain decimal values, but instead it gives me 16776649 and 16777215 for the examples above.

Negative integers are typically stored in two's complement, meaning that if the most significant bit (MSB) is set, the number is negative. This means that just as you need to clear the 8 MSBs of your number to clamp a 32-bit number to a 24-bit positive number, you'll need to set the 8 MSBs of your number to clamp to a negative number:
const int32_t r = 0x800000 & number ? 0xFF000000 | number : number & 0xFFFFFF;
vector<bool> or bitset may be worth your consideration, as they make explicit which range of bits is being set.
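A minimal self-contained sketch of this sign extension (assuming a two's-complement platform; converting the 32-bit pattern to int32_t relies on that):

#include <cstdint>
#include <iostream>

int main() {
    unsigned int x = 0;
    std::cin >> std::hex >> x;               // e.g. 0x32fffdc9

    unsigned int low24 = x & 0xFFFFFF;       // keep only the 24-bit operand
    // If bit 23 is set, the 24-bit value is negative: set the top
    // 8 bits to sign-extend it to a 32-bit integer.
    int32_t r = (low24 & 0x800000) ? (int32_t)(low24 | 0xFF000000u)
                                   : (int32_t)low24;

    std::cout << std::dec << r << '\n';      // 0x32fffdc9 -> -567, 0x32ffffff -> -1
    return 0;
}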

Related

Float to string without rounding

I wanted to extract the decimal places from a decimal number. Not that hard, right?
So, I converted the float to a string.
Then, I used substr() to crop the string starting from string.find('.')+1, till the end.
Well... the problem is here. When we have a number like this: 5.7741589, it isn't kept that precisely; it's actually: 5.774159
If it rounds itself, I can't precisely get the number of decimal places... So the question is:
How can I convert a decimal number to a string without rounding it?
EDIT: Since a lot of people are asking for actual input and output and code, here it is:
Input: 5.7741589
Output (let's say we want to output just decimal places): 7741589
Output (not expected one): 774159
Code:
float num;
cin >> num;
string s = to_string(num);
s = s.substr(s.find('.') + 1, s.length());
cout << s << endl;
EDIT2: One more thing. I need this for competitive programming. I could do this exercise with the input as a string, but imagine a problem where you have 2 decimal numbers, you need to multiply them, and then count the number of decimal places. Then you again lose decimal places, which is the problem.
The rot sets in as soon as you use a binary floating point number to model a decimal number: a floating point scheme models a subset of the real numbers. In this respect it is no different from using an integral type: conceptually, the same issues arise.
If you want to model decimal numbers exactly then use a decimal type.
The result you are getting is because of rounding of the decimal places. As soon as you enter a floating point number, whatever the number of decimal places, it is stored with only as much precision as the data type allows (roughly 7 significant decimal digits for float). So if you enter
1.1234 ---- Stored as ---> 1.12339997
1.123 ---- Stored as ---> 1.12300003
5.7741589 ---- Stored as ---> 5.77415895
So basically, you are storing the input with that fixed number (8) of decimal places. What you can do is use the snippet below to force the desired number of digits after the decimal point and then convert to a string.
#include <cstdio>
#include <cstring>
#include <iostream>
#include <string>
using namespace std;

int main() {
    float num;
    cin >> num;
    char buf[20];
    // 8 = precision required + 1: we want 7 digits after the decimal
    // point, so print 8 and drop the last, possibly rounded, digit.
    sprintf(buf, "%.8f", num);
    buf[strlen(buf) - 1] = '\0';
    string s(buf);
    s = s.substr(s.find('.') + 1, s.length());
    cout << s << endl;
}
Though with this approach, all your inputs would need to have an equal number of decimal places.
This is a sort of workaround, because the value 5.7741589 that you entered is not stored as-is. So, if the source itself is not what you entered, you cannot really get the desired output under the assumption that it is.
Let's see what happens (this answer assumes IEEE-754, with float being a 32-bit binary floating-point type).
First, you enter the decimal number as input: 5.7741589. This number has to be stored into a 32-bit binary floating point.
This number in binary is 101.11000110001011110100011100010101011010000100011...
To store this into a float, we need to round it (in binary). So cin>>num is lossy. It is rounded to the nearest 32-bit binary floating point number, which is:
in binary: 1.01110001100010111101001 * 2^2 = 101.110001100010111101001.
in decimal: 1.44353973865509033203125 * 2^2 = 5.774158954620361328125.
As you can see, there is no point in talking about decimal places once your input number has been converted to float, because the conversion modified it: viewed in binary, it was rounded; viewed in decimal, it gained a lot of extra digits and its value now differs slightly from the original.
If you want to solve this problem, you need to input the number as string, or you need to use some kind of decimal floating-point library.
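For instance, a minimal sketch of the string-input route (no floating point involved, so nothing is rounded):

#include <iostream>
#include <string>

int main() {
    std::string s;
    std::cin >> s;                                   // e.g. "5.7741589"

    std::string::size_type dot = s.find('.');
    if (dot != std::string::npos)
        std::cout << s.substr(dot + 1) << '\n';      // prints "7741589" exactly
    return 0;
}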

How to calculate the range of data type float in C++?

As we can see, an int occupies 4 bytes in memory, i.e. 32 bits; after applying the range formula we get a range of -2147483648 to 2147483647. I have calculated the ranges of all data types besides float, double, and long double.
I don't know how the range of float mentioned below was calculated.
Floating point numbers are stored as an exponent and a fraction within the space available.
For systems where float is implemented as an IEEE 754 value, the layout looks as below:
sign : 1 bit
exponent : 8 bits
fraction : 23 bits
The exponent allows magnitudes from 2^(-127) (2 to the power -127) up to 2^128 (2 to the power 128), giving a range of numbers from about 5.87747E-39 to 3.40282E+38.
The fraction field supplies the significant digits of the number, e.g. the .12313 part.
Thus, with 23 bits of fraction, the accuracy of a number is about 7 decimal digits, or 1 part in 2^23 ≈ 1.19E-7.
For more details see wikipedia : IEEE 754-1985
On a given system, the <cfloat> / <float.h> headers will give the limits. For non-IEEE-754 representations, you would have to understand how the numbers are stored to calculate the limits.
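For example, a quick sketch that queries those limits (on a typical IEEE 754 system this prints roughly 1.175494e-38, 3.402823e+38 and 6):

#include <cfloat>
#include <cstdio>

int main() {
    std::printf("FLT_MIN = %e\n", FLT_MIN);  // smallest positive normalized float
    std::printf("FLT_MAX = %e\n", FLT_MAX);  // largest finite float
    std::printf("FLT_DIG = %d\n", FLT_DIG);  // decimal digits of precision
    return 0;
}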
-2^(n-1) to (2^(n-1)-1) is the formula to calculate the range of data types.
Where n = no.of.bits of the primitive data type.
For example: for the byte data type, n = 8 bits
-2^(8-1) to (2^(8-1)-1)
The above calculation gives you -128 to 127. As for why it's not 0 to 255: byte, int, short, and double are signed data types, so half the range lies below 0 (negative) and half above 0 (positive). The first bit represents the sign (+ or -), leaving 7 value bits, which is why we get 2^(8-1) = 128. Since 0 is counted on the positive side, the range for positive numbers is 2^(8-1) - 1.
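The same ranges can also be queried directly instead of computed by hand; a small sketch using std::numeric_limits (note that for floating point types, min() is the smallest positive normalized value, not the most negative one):

#include <iostream>
#include <limits>

int main() {
    std::cout << "int:   " << std::numeric_limits<int>::min()
              << " .. "    << std::numeric_limits<int>::max()   << '\n';
    std::cout << "float: " << std::numeric_limits<float>::min()
              << " .. "    << std::numeric_limits<float>::max() << '\n';
    return 0;
}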

How to use a negative number with OpenSSL's BIGNUM?

I want a C++ version of the following Java code.
BigInteger x = new BigInteger("00afd72b5835ad22ea5d68279ffac0b6527c1ab0fb31f1e646f728d75cbd3ae65d", 16);
BigInteger y = x.multiply(BigInteger.valueOf(-1));
//prints y = ff5028d4a7ca52dd15a297d860053f49ad83e54f04ce0e19b908d728a342c519a3
System.out.println("y = " + new String(Hex.encode(y.toByteArray())));
And here is my attempt at a solution.
BIGNUM* x = BN_new();
BN_CTX* ctx = BN_CTX_new();
std::vector<unsigned char> xBytes = hexStringToBytes("00afd72b5835ad22ea5d68279ffac0b6527c1ab0fb31f1e646f728d75cbd3ae65d");
BN_bin2bn(&xBytes[0], xBytes.size(), x);
BIGNUM* negative1 = BN_new();
std::vector<unsigned char> negative1Bytes = hexStringToBytes("ff");
BN_bin2bn(&negative1Bytes[0], negative1Bytes.size(), negative1);
BIGNUM* y = BN_new();
BN_mul(y, x, negative1, ctx);
char* yHex = BN_bn2hex(y);
std::string yStr(yHex);
//prints y = AF27542CDD7775C7730ABF785AC5F59C299E964A36BFF460B031AE85607DAB76A3
std::cout <<"y = " << yStr << std::endl;
(Ignore the difference in case.) What am I doing wrong? How do I get my C++ code to output the correct value, "ff5028d4a7ca52dd15a297d860053f49ad83e54f04ce0e19b908d728a342c519a3"? I also tried setting negative1 with BN_set_word(negative1, -1), but that gives the wrong answer too.
The BN_set_negative function sets a negative number.
The negative of afd72b5835ad22ea5d68279ffac0b6527c1ab0fb31f1e646f728d75cbd3ae65d is simply -afd72b5835ad22ea5d68279ffac0b6527c1ab0fb31f1e646f728d75cbd3ae65d, in the same way that -2 is the negative of 2.
ff5028d4a7ca52dd15a297d860053f49ad83e54f04ce0e19b908d728a342c519a3 is a large positive number.
The reason you are seeing this number in Java is the toByteArray call. According to its documentation, it selects the minimum field width that is a whole number of bytes and capable of holding the two's complement representation of the negative number.
In other words, by using the toByteArray function on a number that currently has 1 sign bit and 256 value bits, you end up with a field width of 264 bits. However, if your negative number's first nibble were 7, for example, rather than a, then (according to this documentation - I haven't actually tried it) you would get a 256-bit field width out (i.e. 8028d4..., not ff8028d4...).
The leading 00 you have used in your code is insignificant in OpenSSL BN. I'm not sure whether it is significant in BigInteger, although the documentation for that constructor says "The String representation consists of an optional minus or plus sign followed by a sequence of one or more digits in the specified radix."; the fact that it accepts a minus sign suggests that if no minus sign is present, the input is treated as a large positive number, even if its MSB is set. (Hopefully a Java programmer can clear this paragraph up for me.)
Make sure you keep clear in your mind the distinction between a large negative value, and a large positive number obtained by modular arithmetic on that negative value, such as is the output of toByteArray.
So your question is really: does Openssl BN have a function that emulates the behaviour of BigInteger.toByteArray() ?
I don't know if such a function exists (the BN library has fairly bad documentation IMHO, and I've never heard of it being used outside of OpenSSL, especially not in a C++ program). I would expect it doesn't, since toByteArray's behaviour is kind of weird; and in any case, all of the BN output functions appear to output using a sign-magnitude format, rather than a two's complement format.
But to replicate that output, you could add either 2^256 or 2^264 to the large negative number, and then call BN_bn2hex. In this particular case, add 2^264. In general, you would have to measure the current bit-length of the stored number and round the exponent up to the nearest multiple of 8.
Or you could even output in sign-magnitude format (using BN_bn2hex or BN_bn2mpi) and then iterate through the result, inverting each nibble and fixing up the start!
NB. Is there any particular reason you want to use OpenSSL BN? There are many alternatives.
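To illustrate the add-2^264 idea, here is a hedged sketch (error checking omitted; it assumes y is negative and occupies at most 256 value bits, as in the question):

#include <openssl/bn.h>
#include <iostream>

void print_twos_complement_hex(const BIGNUM* y) {
    BIGNUM* r = BN_new();
    BIGNUM* one = BN_new();
    BN_one(one);
    BN_lshift(r, one, 264);       // r = 2^264
    BN_add(r, r, y);              // r = 2^264 + y   (y is negative)
    char* hex = BN_bn2hex(r);
    std::cout << hex << '\n';     // FF5028D4...C519A3 for the question's input
    OPENSSL_free(hex);
    BN_free(r);
    BN_free(one);
}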
Although this is a question from 2014 (more than five years ago), I would like to solve your problem / clarify the situation, which might help others.
a) Sign-magnitude and two's complement
In computer arithmetic, there are "sign-magnitude" and "two's complement" representations of numbers. Sign-magnitude stores the absolute (positive) value only and keeps the sign separately, e.g. in one bit (0=positive, 1=negative). This is exactly the situation of floating point numbers (IEEE 754): the mantissa is stored as a magnitude, together with the exponent and one additional sign bit. Numbers in sign-magnitude have two zeros, -0 and +0, because the sign is treated independently of the absolute value.
In two's complement, the most significant bit is used as the sign bit. There is no '-0', because negating a value in two's complement means performing a bitwise NOT (in C: the tilde operator ~) followed by adding one.
As an example, one byte (in two's complement) holding one of the three values 0xFF, 0x00, 0x01 means -1, 0 and 1. There is no room for a -0. If you have, e.g., 0xFF (-1) and want to negate it, the bitwise NOT computes 0xFF => 0x00, and adding one yields 0x01, which is 1.
b) OpenSSL BIGNUM and Java BigInteger
OpenSSL's BIGNUM implementation represents numbers in sign-magnitude form. The Java BigInteger treats numbers as two's complement. That was your disaster. Your big integer (in hex) is 00afd72b5835ad22ea5d68279ffac0b6527c1ab0fb31f1e646f728d75cbd3ae65d. This is a positive 256-bit integer. It consists of 33 bytes because there is a leading zero byte 0x00, which is absolutely correct for an integer stored as two's complement, because the most significant bit (omitting the initial 0x00) is set (in 0xAF), which would otherwise make this number negative.
c) Solution you were looking for
OpenSSL's function BN_bin2bn works with absolute values only. For OpenSSL, you can leave the initial zero byte in or cut it off - it makes no difference, because OpenSSL canonicalizes the input data anyway, which means cutting off all leading zero bytes. The next problem is the way you try to make this integer negative: you want to multiply it by -1, but using 0xFF as the only input byte to BN_bin2bn makes it 255, not -1. In fact, you multiply your big integer by 255, yielding the overall result AF27542CDD7775C7730ABF785AC5F59C299E964A36BFF460B031AE85607DAB76A3, which is still positive.
Multiplication with -1 works like this (snippet, no error checking):
BIGNUM* x = BN_bin2bn(&xBytes[0], (int)xBytes.size(), NULL);
BIGNUM* negative1 = BN_new();
BN_one(negative1); /* negative1 is +1 */
BN_set_negative(negative1, 1); /* negative1 is now -1 */
BN_CTX* ctx = BN_CTX_new();
BIGNUM* y = BN_new();
BN_mul(y, x, negative1, ctx);
Easier is:
BIGNUM* x = BN_bin2bn(&xBytes[0], (int)xBytes.size(), NULL);
BN_set_negative(x,1);
This does not solve your problem, because, as M.M said, it just makes -afd72b5835ad22ea5d68279ffac0b6527c1ab0fb31f1e646f728d75cbd3ae65d out of afd72b5835ad22ea5d68279ffac0b6527c1ab0fb31f1e646f728d75cbd3ae65d.
What you are looking for is the two's complement of your big integer, which is computed like this:
int i;
/* Invert all bytes... */
for (i = 0; i < (int)sizeof(value); i++)
    value[i] = ~value[i];
/* ...then add one, propagating the carry from the last byte */
for (i = ((int)sizeof(value)) - 1; i >= 0; i--)
{
    value[i]++;
    if (0x00 != value[i])
        break;
}
This is an unoptimized version of the two's complement, where 'value' is your 33-byte input array containing the big integer prefixed by the byte 0x00. The result of this operation is the 33 bytes ff5028d4a7ca52dd15a297d860053f49ad83e54f04ce0e19b908d728a342c519a3.
d) Working with two's complement and OpenSSL BIGNUM
The whole sequence is like this:
1. Prologue: if the input is negative (check the most significant bit), compute the two's complement of the input.
2. Convert to a BIGNUM using BN_bin2bn.
3. If the input was negative, call BN_set_negative(x, 1).
4. Main part: carry out all arithmetic operations using the OpenSSL BIGNUM package.
5. Call BN_is_negative to check for a negative result.
6. Convert to raw binary bytes using BN_bn2bin.
7. If the result was negative, compute the two's complement of the result.
8. Epilogue: if the result was positive and the most significant bit of the raw result bytes (output of steps 6-7) is set, prepend a byte 0x00. If the result was negative and the most significant bit is clear, prepend a byte 0xFF.
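Putting steps 1-3 and 5-8 together, here is a hedged sketch of the round trip (the helper names are my own; error checking and the zero-length edge case are omitted):

#include <openssl/bn.h>
#include <vector>

// Interpret big-endian two's complement bytes as a BIGNUM (steps 1-3).
BIGNUM* twos_complement_to_bn(std::vector<unsigned char> v) {
    bool negative = !v.empty() && (v[0] & 0x80);      // MSB set => negative
    if (negative) {                                   // prologue: two's complement
        for (size_t i = 0; i < v.size(); i++) v[i] = ~v[i];
        for (size_t i = v.size(); i-- > 0; ) { if (++v[i] != 0x00) break; }
    }
    BIGNUM* x = BN_bin2bn(&v[0], (int)v.size(), NULL);
    if (negative) BN_set_negative(x, 1);
    return x;
}

// Convert a BIGNUM back to big-endian two's complement bytes (steps 5-8).
std::vector<unsigned char> bn_to_twos_complement(const BIGNUM* x) {
    std::vector<unsigned char> v(BN_num_bytes(x));
    BN_bn2bin(x, &v[0]);                              // raw sign-magnitude bytes
    bool negative = BN_is_negative(x);
    if (negative) {                                   // two's complement of result
        for (size_t i = 0; i < v.size(); i++) v[i] = ~v[i];
        for (size_t i = v.size(); i-- > 0; ) { if (++v[i] != 0x00) break; }
    }
    if (!negative && !v.empty() && (v[0] & 0x80))     // epilogue: pad sign byte
        v.insert(v.begin(), 0x00);
    else if (negative && !v.empty() && !(v[0] & 0x80))
        v.insert(v.begin(), 0xFF);
    return v;
}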

Reading a double stored in a binary format in a character array

I am trying to read a floating point number stored as a specific binary format in a char array. The format is as follows, where each letter represents a binary digit:
SEEEEEEE MMMMMMMM MMMMMMMM MMMMMMMM MMMMMMMM MMMMMMMM MMMMMMMM MMMMMMMM
The format is explained more clearly on this website. Basically, the exponent is in excess-64 notation and the mantissa is normalized to values < 1 and > 1/16. To get the true value of the number, the mantissa is multiplied by 16 to the power of the true value of the exponent.
Basically, what I've done so far is extract the sign and the exponent values, but I'm having trouble extracting the mantissa. The implementation I'm trying is quite brute-force and probably far from ideal in terms of code, but it seemed the simplest to me. It basically is:
unsigned long a = 0;
for (int i = 0; i < 7; i++)
    a += static_cast<unsigned long>(m_bufRecord[index+1+i]) << ((6-i)*8);
It takes each 8-bit byte stored in the char array and shifts it left according to its index in the array. So if the array I have is as follows:
{0x3f, 0x28, 0xf5, 0xc2, 0x8f, 0x5c, 0x28, 0xf6}
I'm expecting a to take the value:
0x28f5c28f5c28f6
However, with the above implementation a takes the value:
0x27f4c18f5c27f6
Later, I convert the long integer to a floating-point number using the following code:
double m = a;
m = m*(pow(16, e-14));
m = (s==1)?-m:m;
What is going wrong here? Also, I'd love to know how a conversion like this would be implemented ideally?
I haven't tried running your code, but I suspect the reason you get this:
0x27f4c18f5c27f6
instead of
0x28f5c28f5c28f6
is because you have a "negative number" in the cell previous to it. Is your 8-bit byte array signed or unsigned? I expect it will work better if you make it unsigned, or cast each byte to unsigned char before it is widened and shifted.
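A sketch of that fix, reusing the question's m_bufRecord and index (assumed to be as in the original code): forcing each byte through unsigned char prevents sign extension, so 0xF5 stays 0xF5 instead of becoming 0xFFFFFFFFFFFFFFF5.

#include <cstdint>

uint64_t a = 0;
for (int i = 0; i < 7; i++)
    a = (a << 8) | static_cast<unsigned char>(m_bufRecord[index + 1 + i]);
// For {0x3f, 0x28, 0xf5, 0xc2, 0x8f, 0x5c, 0x28, 0xf6} this yields 0x28f5c28f5c28f6.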

How to figure out how many decimal digits are in a large double?

Hey, so I'm making a function that returns the number of decimal digits, whole-number digits, or TOTAL digits of a number, but I have been unable to make it work either of these ways:
Multiplying by a really large number like 10 billion doesn't work because of the inaccurate way computers store decimals, turning 2.3 into 2.2999575697.
Using a stringstream to convert the number to a string doesn't work because it requires you to set the stream to a precision, which either takes away or adds unnecessary '0' characters if not set to the actual number of decimals.
So WHAT DO I DO NOW? Somebody please help =( Thanks!
If you want to see my function that converts the number to a string, here it is:
////////////////////// Numbs_Digits ////////////////////////////////////////////////
template<typename T>
int Numbs_Digits(T numb, int scope)
{
    stringstream ss(stringstream::in | stringstream::out),
                 ss2(stringstream::in | stringstream::out);
    unsigned long int length = 0;
    unsigned long int numb_wholes;
    ss2 << (int)numb;
    numb_wholes = ss2.str().length();
    ss2.flush();
    bool all = false;
    ss.precision(11); // HOW DO I MAKE THE PRECISION NUMBER THE NUMBER OF DECIMALS?
    switch (scope)
    {
    case ALL:
        all = true;
    case DECIMALS:
        ss << fixed << numb;
        length += ss.str().length() - (numb_wholes + 1); // +1 for the "."
        if (all != true) break;
    case WHOLE_NUMBS:
        length += numb_wholes;
        if (all != true) break;
    default:
        break;
    }
    return length;
}
If you want to know the maximum number of decimal digits that a long double can store, this value is available in the constant LDBL_DIG, defined in <cfloat>. Note that this number is actually an approximation, as the values are stored in binary internally, and thus the range of representable values is not a power of 10.
Only some decimal numbers can be stored in exact form as a floating point number. Because of this, there is no way to determine how many decimal places are significant for any decimal number for which this is not true. As hammar suggested, read up on the floating point storage format; I believe every programmer should have some knowledge of low-level stuff like this :D
Multiplying by a really large number like 10 billion doesn't work because of the inaccurate way computers store decimals, turning 2.3 into 2.2999575697.
This is exactly the problem. Would you be able to look at 2.2999575697 and tell me it has two decimal places? This number is an example of a decimal number that cannot be stored exactly in the floating point format. The best you could do is count the significant decimal places stored in the floating point number that best approximates the original decimal number it was given - which I can't imagine would be much use.
Edited for a more accurate explanation.
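To see the effect directly, a small sketch that prints what is actually stored (max_digits10 guarantees enough digits to show the exact stored value; on an IEEE 754 system this prints 2.2999999999999998):

#include <iomanip>
#include <iostream>
#include <limits>

int main() {
    double d = 2.3;
    std::cout << std::setprecision(std::numeric_limits<double>::max_digits10)
              << d << '\n';
    return 0;
}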
Can you not set the ios_base precision to the maximum number of decimal digits in the significand on your platform (from <cfloat>), and then, using ios_base::setf(), change the floating point formatting to scientific, which will remove any trailing zeroes from the floating point number (you'll just have to trim the exponent off the end)?
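One caveat: std::scientific at a fixed precision actually keeps trailing zeroes; it is C's %g format (or the default iostream formatting) that drops them. A hedged sketch of the idea using %g (it ignores the case where %g prints an exponent, for brevity):

#include <cfloat>
#include <cstdio>
#include <cstring>

int main() {
    double num = 2.25;   // exactly representable, so the digits are meaningful
    char buf[64];
    std::snprintf(buf, sizeof buf, "%.*g", DBL_DIG, num);  // drops trailing zeroes
    const char* dot = std::strchr(buf, '.');
    std::printf("%s has %d decimal digit(s)\n",
                buf, dot ? (int)std::strlen(dot + 1) : 0);
    return 0;
}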