Given a double x, assume that it lies in [0,1], for example x=0.3.
In binary (keeping 10 digits after the binary point), it is represented as
x=0.0100110011...
I want to write some C++ code which will extract the 10 digits shown after the decimal point. In other words I want to extract the integer (0100110011)_2.
Now I am quite new to bit shifting and the (naive) solution which I have for the problem is the following
int temp = (int)(x * (1 << 10));
Then temp in binary will have the necessary 10 digits.
Is this a safe way to perform the above process, or are there safer / more correct ways to do this?
Note: I don't want the digits extracted in the form of a character array. I specifically want an integer (or unsigned integer) for this. The reason for doing this is that in the generation of octrees, points in space are given hash keys based on their position, known as Morton keys. These keys are usually stored as integers. After getting the integer keys for all the points, they are then sorted. Theoretically these keys can be obtained by scaling the coordinates to [0,1], extracting the bits, and interleaving them, roughly as in the sketch below.
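For context, a minimal sketch of that interleaving step (the names expand_bits and morton3d are mine; expand_bits is the standard bit-spreading trick for 10-bit coordinates):

#include <algorithm>
#include <cstdint>

// Spread the low 10 bits of v so there are two zero bits between each
// original bit (the classic trick for 3-D Morton keys).
uint32_t expand_bits(uint32_t v) {
    v = (v * 0x00010001u) & 0xFF0000FFu;
    v = (v * 0x00000101u) & 0x0F00F00Fu;
    v = (v * 0x00000011u) & 0xC30C30C3u;
    v = (v * 0x00000005u) & 0x49249249u;
    return v;
}

// 30-bit Morton key from coordinates already scaled to [0,1].
uint32_t morton3d(double x, double y, double z) {
    // clamp to [0,1023] so that x == 1.0 doesn't produce an 11th bit
    uint32_t xi = (uint32_t)std::min(std::max(x * 1024.0, 0.0), 1023.0);
    uint32_t yi = (uint32_t)std::min(std::max(y * 1024.0, 0.0), 1023.0);
    uint32_t zi = (uint32_t)std::min(std::max(z * 1024.0, 0.0), 1023.0);
    return (expand_bits(xi) << 2) | (expand_bits(yi) << 1) | expand_bits(zi);
}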
Use memcpy to copy double into an array of 32-bit numbers, like this:
unsigned int b[2]; // assume int is 32-bits
memcpy(b, &x, 8);
The 10 most significant binary digits are in b[0] or b[1], depending on whether your machine is big- or little-endian.
EDIT: The same can be achieved by some casting instead of memcpy, but that would violate strict aliasing rules. An alternative is to use a union.
Read this: http://chrishecker.com/images/f/fb/Gdmfp.pdf
If you can grok what that article is telling you, you can derive the algorithm you're looking for. Just remember the bias factor in the exponent and the implicit leading one in the mantissa and the rest should fall into place.
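If it helps, here is a minimal sketch of that derivation, assuming IEEE 754 binary64, 0 <= x < 1 and n <= 52 (the function name is mine):

#include <cstdint>
#include <cstring>

// Extract the first n bits after the binary point of x, where 0 <= x < 1,
// assuming IEEE 754 binary64. For such x this equals (uint64_t)(x * 2^n),
// but it decodes the representation directly instead of multiplying.
uint64_t fraction_bits(double x, int n) {
    uint64_t bits;
    std::memcpy(&bits, &x, sizeof bits);           // well-defined, unlike a cast
    int exp = (int)((bits >> 52) & 0x7FF) - 1023;  // remove the exponent bias
    uint64_t mant = (bits & 0xFFFFFFFFFFFFFull)    // 52 stored mantissa bits
                  | (1ull << 52);                  // restore the implicit leading 1
    // The value is mant * 2^(exp-52); keep the n bits just after the point.
    int shift = 52 - exp - n;
    return shift >= 64 ? 0 : mant >> shift;
}

For x = 0.3 and n = 10 this yields 307, i.e. (0100110011)_2, matching the multiply-and-truncate approach.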
I am working on developing a fixed point algorithm in C++. I know that, for a N-bit integer, the fixed point binary integer is represented as U(a,b). For example, for an 8 bit Integer (i.e 256 samples), If we represent it in the form U(6,2), it means that the binary point is to the left of the 2nd bit starting from the right of the form:
b5 b4 b3 b2 b1 b0 . b(-1) b(-2)
Thus, it has 6 integer bits and 2 fractional bits. In C++, I know there are bit shift operators I can use, but they are basically used for shifting the bits of the input stream. My question is: how do I define a binary fixed point integer of the form fix<6,2> or U(6,2)? All the major processing operations will be carried out on the fractional part, and I am just looking for a way to do this fix in C++. Any help regarding this would be appreciated. Thanks!
Example: Suppose I have an input discrete signal with 1024 sample points on the x-axis (for now, just think of this input signal as coming from some sensor). Each of these sample points has a particular amplitude. Say the sample at time 2 (x-axis) has an amplitude of 3.67 (y-axis). Now I have a variable "int *input;" that takes the sample 2, which in binary is 0000 0100. So basically I want to make this 00000.100 by performing the U(5,3) fix on the sample 2 in C++, so that I can perform interpolation operations on fractions of the input sampling period or time.
PS - I don't want to create a separate class or use external libraries for this. I just want to take each 8 bits from my input signal, perform the U(a,b) fix on it, and do the rest of the operations on the fractional part.
Short answer: left shift.
Long answer:
Fixed point numbers are stored as integers, usually int, which is the fastest integer type for a particular platform.
Normal integers without fractional bits are usually called Q0, Q.0 or QX.0, where X is the total number of bits of the underlying storage type (usually int).
To convert between different Q.X formats, left or right shift. For example, to convert 5 in Q0 to 5 in Q4, left shift it 4 bits, or multiply it by 16.
Usually it's useful to find or write a small fixed point library that does basic calculations, like a*b>>q and (a<<q)/b, because you will do Q.X=Q.Y*Q.Z and Q.X=Q.Y/Q.Z a lot, and you need to convert formats when doing calculations. As you may have observed, using the normal * operator will give you Q.(X+Y)=Q.X*Q.Y, so in order to fit the result into Q.Z format, you need to right shift the result by (X+Y-Z) bits.
Division is similar: you get Q.(X-Y)=Q.X/Q.Y from the standard / operator, and to get the result in Q.Z format you shift the dividend before the division. What's different is that division is an expensive operation, and it's not trivial to write a fast one from scratch.
Be aware of double-word support on your platform; it will make your life a lot easier. With double word arithmetic, the result of a*b can be twice the size of a or b, so you don't lose range by doing a*b>>c. Without double word, you have to limit the input range of a and b so that a*b doesn't overflow. This is not obvious when you first start, but soon you will find you need more fractional bits or range to get the job done, and you will finally need to dig into the reference manual of your processor's ISA.
example:
float a = 0.1f;                                  // 0.1
int aQ16 = (int)(a * 65536);                     // 0.1 in Q16 format
int bQ16 = 4 << 16;                              // 4 in Q16 format
int cQ16 = (int)(((int64_t)aQ16 * bQ16) >> 16);  // widen before the shift
// result = 26212 = 0.39996337890625 in Q16,
// not 0.4 in Q16 (26214), because of truncation error
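A minimal sketch of those two helpers (the names qmul and qdiv are mine; this assumes a 64-bit intermediate type is available):

#include <cstdint>

// Q.q multiply: widen to 64 bits so a*b cannot overflow before the shift.
int32_t qmul(int32_t a, int32_t b, int q) {
    return (int32_t)(((int64_t)a * b) >> q);
}

// Q.q divide: pre-shift the dividend so the quotient keeps q fractional bits.
int32_t qdiv(int32_t a, int32_t b, int q) {
    return (int32_t)(((int64_t)a << q) / b);
}

With these, the example above becomes int cQ16 = qmul(aQ16, bQ16, 16);.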
If this is your question:
Q. Should I define my fixed-binary-point integer as a template, U<int a, int b>(int number), or not, U(int a, int b)
I think your answer to that is: "Do you want to define operators that take two fixed-binary-point integers? If so make them a template."
The template is just a little extra complexity if you're not defining operators. So I'd leave it out.
But if you are defining operators, you don't want to be able to add U<4, 4> and U<6, 2>. What would you define your result as? The templates will give you a compile time error should you try to do that.
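For illustration, a minimal sketch of such a template (the name Fixed and its layout are my own, not a standard type):

#include <cstdint>

template <int I, int F>              // I integer bits, F fractional bits
struct Fixed {
    static_assert(I + F <= 32, "must fit the underlying storage");
    int32_t raw;                     // stored as value * 2^F

    // Mixing layouts is a compile-time error, as discussed above.
    friend Fixed operator+(Fixed a, Fixed b) { return Fixed{a.raw + b.raw}; }
    friend Fixed operator*(Fixed a, Fixed b) {
        return Fixed{(int32_t)(((int64_t)a.raw * b.raw) >> F)};
    }
};

// Fixed<6,2> x{14}, y{6};  // 3.5 and 1.5 in U(6,2)
// Fixed<6,2> s = x + y;    // 5.0 (raw 20)
// Fixed<4,4> z = x + y;    // error: U(6,2) does not convert to U(4,4)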
I am pretty new to programming and I have to build an Abstract Data Type (ADT) for integer numbers.
I've browsed the web for some tips, examples, tutorials, but I couldn't find anything useful, so I hope I will get some answers here.
I thought a lot about how I should format the ADT that stores my integer, and I'm thinking of something like this:
int length; // stores the length of the number (a limit, since these numbers can grow toward infinity)
int[] digits; // stores the digits of my number, with dimension equal to length
Now I'm confused about how I should tackle the sign representation. Is it OK to hold the sign in a char, something like char sign?
But then comes the question of what to do when I have to add and multiply two integers, and what about the cases where these operations overflow.
So, if some of you have ideas about how I should represent the number (the format) and how I should do the multiply and add, I would be very grateful. I don't need any code, I'm in the learning stage; just some ideas. Thank you.
One good way to do this is to store the sign as a bool (e.g. bool is_neg;). That way it's completely clear what that data means (versus a char, where it's not entirely clear).
You might want to store each digit in an unsigned short (or, if you want to be precise about size, uint16_t). Then, when you multiply two digits, you can just multiply them as unsigned ints (uint32_t): the low 16 bits are your result and the overflow is in the high 16 bits. You can then add this to the result array fairly easily, as in the sketch below. You know that the multiplication of an n-bit number by a k-bit number is at most n + k bits long, so you can preallocate your array to that size and then worry about removing extra zeros later.
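A minimal sketch of that digit-by-digit multiply with carry (my own helper, using the little-endian uint16_t layout described above):

#include <cstdint>
#include <vector>

// Multiply two numbers stored as little-endian arrays of uint16_t digits.
std::vector<uint16_t> mul(const std::vector<uint16_t>& a,
                          const std::vector<uint16_t>& b) {
    std::vector<uint16_t> r(a.size() + b.size(), 0); // n+k bits is enough
    for (std::size_t i = 0; i < a.size(); ++i) {
        uint32_t carry = 0;
        for (std::size_t j = 0; j < b.size(); ++j) {
            uint32_t t = (uint32_t)a[i] * b[j] + r[i + j] + carry;
            r[i + j] = (uint16_t)t;   // low 16 bits are the result digit
            carry = t >> 16;          // high 16 bits overflow into the next
        }
        r[i + b.size()] = (uint16_t)carry;
    }
    while (r.size() > 1 && r.back() == 0) r.pop_back(); // strip extra zeros
    return r;
}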
Hope this helps, and let me know if you want more tips.
The first design decision you have to make is the choice of a basis.
You seem to lean towards plain decimal. It could be unpacked (one full byte per digit, numerical or ASCII representation), or packed digit pairs (Binary-Coded Decimal, two four-bit digits per byte).
Other schemes are more convenient for faster operations: basis being a power of 2 or a power of 10, fitting in a byte, a short, an int...
Powers of 10 have the benefit that conversion to and from base 10 can be done word by word.
Addition is an easy matter: add the words in pairs and handle the carries. Same for subtraction, with borrows.
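A minimal sketch of that word-pair addition with carry, assuming little-endian 32-bit words of equal length (the helper name is mine):

#include <cstdint>
#include <vector>

// Add two equal-length little-endian arrays of 32-bit words.
std::vector<uint32_t> add(const std::vector<uint32_t>& a,
                          const std::vector<uint32_t>& b) {
    std::vector<uint32_t> r(a.size() + 1, 0);
    uint64_t carry = 0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        uint64_t t = (uint64_t)a[i] + b[i] + carry; // never overflows 64 bits
        r[i] = (uint32_t)t;                         // low word
        carry = t >> 32;                            // 0 or 1
    }
    r[a.size()] = (uint32_t)carry;
    return r;
}

Subtraction works the same way with borrows instead of carries.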
Multiplies are a whole different story if you care about efficiency. The method of written computation taught at school can be used, but it requires length1 x length2 operations. For long numbers, more efficient methods are preferred (http://en.wikipedia.org/wiki/Multiplication_algorithm#Karatsuba_multiplication). They are also more complex.
I am writing my own long arithmetic library in C++ for fun and it is already pretty complete; I have even implemented several cryptographic algorithms with it. But one important thing is still missing: I want to convert doubles (and floats/long doubles) into my number and vice versa. My numbers are represented as a variable sized array of unsigned long ints plus a sign bit.
I tried to find the answer with Google, but the problem is that people rarely ever implement such things themselves, so I only find things about how to use Java BigInteger etc.
Conceptually, it is rather easy: I take the mantissa, shift it by the number of bits dictated by the exponent, and set the sign. In the other direction I truncate it so that it fits into the mantissa and set the exponent depending on my log2 function.
But I am having a hard time figuring out the details. I could either play around with some bit patterns and cast them to a double, but I didn't find an elegant way to achieve that, or I could "calculate" it by starting with 2, exponentiating, multiplying etc., but that doesn't seem very efficient.
I would appreciate a solution that doesn't use any library calls, because I am trying to avoid libraries for my project; otherwise I could just have used gmp. Furthermore, I often have two solutions on several other occasions: one using inline assembler, which is efficient, and one that is more platform independent. So either kind of answer is useful for me.
edit: I use uint64_t for my parts, but I would like to be able to change it depending on the machine, but I am willing to do some different implementations with some #ifdefs to achieve that.
I'm going to make a non-portable assumption here: namely, that unsigned long long has more bits of precision than double. (This is true on all modern desktop systems that I know of.)
First, convert the most significant integer(s) into an unsigned long long. Then convert that to a double S. Let M be the number of integers less significant than those used in that first step. Multiply S by (1ull << (sizeof(unsigned)*CHAR_BIT*M)). (If shifting by more than 63 bits, you will have to split that into separate shifts and do some arithmetic.) Finally, if the original number was negative, you multiply this result by -1.
This rounds a lot, but even with this rounding, due to the above assumption, no digits are lost that wouldn't be lost anyway with the conversion to a double. I think this is a similar process to what Mark Ransom said, but I'm not certain.
For converting from a double to a biginteger, first separate the mantissa into a double M and the exponent into an int E, using frexp. Multiply M by UNSIGNED_MAX, and store that result in an unsigned R. If std::numeric_limits<double>::radix is 2 (it is on x86/x64), you can easily shift R left by E-(sizeof(unsigned)*CHAR_BIT) bits and you're done. Otherwise the result will instead be R*(E**(sizeof(unsigned)*CHAR_BIT)) (where ** means "to the power of").
If performance is a concern, you can add an overload to your bignum class for multiplying by std::integral_constant<unsigned, 10>, which simply returns (LHS<<3)+(LHS<<1). You can similarly optimize other constants if you wish.
This blog post might help you: Clarifying and optimizing Integer>>asFloat.
Otherwise, you can get an idea of an algorithm from this SO question: Converting from unsigned long long to float with round to nearest even.
You don't say explicitly, but I assume your library is integer only and the unsigned longs are 32 bit and binary (not decimal). The conversion to double is simple, so I'll tackle that first.
Start with a multiplier for the current piece; if the number is positive it will be 1.0, if negative it will be -1.0. For each of the unsigned long ints in your bignum, multiply by the current multiplier and add it to the result, then multiply your multiplier by pow(2.0, 32) (4294967296.0) for 32 bits or pow(2.0, 64) (18446744073709551616.0) for 64 bits.
You can optimize this process by working with only the 2 most significant values. You need to use 2 even if the number of bits in your integer type is larger than the precision of a double, since the number of used bits in the most significant value might only be 1. You can generate the multiplier by taking a power of 2 to the number of skipped bits, e.g. pow(2.0, most_significant_count*sizeof(bit_array[0])*8). You can't use a bit shift as given in another answer because it will overflow after the first value.
To convert from double, you can get the exponent and mantissa separated from each other with the frexp function. The mantissa will come as a floating point value between 0.5 and 1.0 so you'll want to multiply it by pow(2.0, 32) or pow(2.0, 64) to convert it to an integer, then adjust the exponent by -32 or -64 to compensate.
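A minimal sketch of both directions, assuming 32-bit parts stored least significant first (the function names are mine; the sign is left to the caller):

#include <cstdint>
#include <cmath>
#include <vector>

// Bignum (little-endian 32-bit parts) to double, least significant first.
double to_double(const std::vector<uint32_t>& parts, bool negative) {
    double result = 0.0, scale = 1.0;
    for (uint32_t p : parts) {
        result += p * scale;
        scale *= 4294967296.0;        // pow(2.0, 32)
    }
    return negative ? -result : result;
}

// Double to (mantissa, exponent) via frexp, as described above.
// Returns a 32-bit integer mantissa; *e2 is the power of two to apply.
uint32_t from_double(double x, int* e2) {
    int e;
    double m = std::frexp(std::fabs(x), &e); // x = m * 2^e, 0.5 <= m < 1
    *e2 = e - 32;                            // compensate for the scaling
    return (uint32_t)(m * 4294967296.0);     // m * 2^32 as an integer
}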
To go from a big integer to a double, just do it the same way you parse numbers. For example, you parse the number "531" as "1 + (3 * 10) + (5 * 100)". Compute each portion using doubles, starting with the least significant portion.
To go from a double to a big integer, do it the same way but in reverse starting with the most significant portion. So, to convert 531, you first see that it's more than 100 but less than 1000. You find the first digit by dividing by 100. Then you subtract to get the remainder of 31. Then find the next digit by dividing by 10. And so on.
Of course, you won't be using tens (unless you store your big integers as digits). Exactly how you break it apart depends on how your big integer class is constructed. For example, if it's uses 64-bit units, then you'll use powers of 2^64 instead of powers of 10.
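For instance, the "parse" direction with 64-bit units might look like this (a sketch; most significant unit first):

#include <cstdint>
#include <vector>

// "Parse" a big integer stored as little-endian 64-bit units into a double,
// the same way you parse "531" digit by digit, but in base 2^64.
double parse_to_double(const std::vector<uint64_t>& units) {
    double result = 0.0;
    for (std::size_t i = units.size(); i-- > 0; )  // most significant first
        result = result * 18446744073709551616.0   // that is, * 2^64
               + (double)units[i];
    return result;
}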
How can I convert two unsigned integers that represent the digit and decimal part of a float into one float?
I know there are a few ways to do this, like converting the decimal component into a float and multiplying it to get a decimal and adding it to the digit, but that does not seem optimal.
I'm looking for the optimal way to do this.
/*
 * Get current temp in Celsius.
 */
void GetTemp(){
    // define variables that will hold the temperature digit and decimal part
    int8_t digit = 0;     // digit part of temp
    uint16_t decimal = 0; // decimal part of temp
    therm_read_temperature(&digit, &decimal); // gets the current temp and sets the variables
}
I want to take the digit and decimal parts and convert them to a float type, such that it looks like digit.decimal.
It might look like this in the end, but I want to find the MOST optimal solution.
/*
 * Get current temp in Celsius.
 */
float GetTemp(){
    // define variables that will hold the temperature digit and decimal part
    int8_t digit = 0;     // digit part of temp
    uint16_t decimal = 0; // decimal part of temp
    therm_read_temperature(&digit, &decimal); // gets the current temp and sets the variables
    float temp = SomeFunction(digit, decimal); // this could be an expression too
    return temp;
}
////UPDATE/// - July 5th
I was able to get the source code instead of leveraging just the library. I posted it in this GIST DS12B20.c.
temperature[0]=therm_read_byte();
temperature[1]=therm_read_byte();
therm_reset();
//Store temperature integer digits and decimal digits
digit=temperature[0]>>4;
digit|=(temperature[1]&0x7)<<4;
//Store decimal digits
decimal=temperature[0]&0xf;
decimal*=THERM_DECIMAL_STEPS_12BIT;
*digit_part = digit;
*decimal_part = decimal;
Although the function will not force us to return separate parts as digit and decimal, reading from the temperature sensor seems to require this (unless I'm missing something and it can be retrieved as a float).
I think the original question still stands: what is the optimal way to make this into a float in C (this is for use with an AVR 8-bit microcontroller, making optimization key) using the two parts, or to retrieve it directly as a float? For reference, the naive combination I'd compare against is sketched below.
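Assuming THERM_DECIMAL_STEPS_12BIT is 625, so that decimal holds ten-thousandths of a degree (check the header; the DS18B20's 12-bit steps are 1/16 degC), the straightforward combination is:

float temp = (float)digit + (float)decimal * 0.0001f;

// Possibly cheaper on an 8-bit AVR: skip the *625 scaling in the driver and
// convert the raw 12-bit reading directly, which is in sixteenths of a degree:
// int16_t raw = ((int16_t)temperature[1] << 8) | temperature[0];
// float temp = raw * 0.0625f;

Multiplying by 0.0001f rather than dividing by 10000.0f avoids a float division, which is typically much slower than a multiply in software floating point.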
What you are really running into is using fixed-point numbers. These can be represented in two ways: either as a single integer with a known magnitude or multiplier (i.e. "tenths", "hundredths", "thousandths", and so on; example: a value from a digital scale in ten-thousandths of a gram, held in a 32-bit integer -- you divide by 10000 to get grams), or as two integers, with one holding the "accumulated" or "integer" value, and the other holding the "fractional" value.
Take a look at the <stdfix.h> header. This declares types and functions to hold these fixed-point numbers, and perform math with them. When adding fractional parts, for example, you have to worry about rolling into the next whole value, for which you then want to increment the accumulator of the result. By using the standard functions you can take advantage of built-in processor capabilities for fixed-point math, such as those present in the AVR, PIC and MPS430 microcontrollers. Perfect for temperature sensors, GPS receivers, scales (balances), and other sensors that have rational numbers but only integer registers or arithmetic.
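For illustration, a minimal sketch of that roll-over handling done by hand, without <stdfix.h> (the pair layout and helper name are my own; the fraction is in ten-thousandths):

#include <stdint.h>

// Accumulator/fraction pair; frac holds ten-thousandths (0..9999).
struct FixedPair { int16_t whole; uint16_t frac; };

struct FixedPair add_pair(struct FixedPair a, struct FixedPair b) {
    struct FixedPair r;
    uint16_t f = a.frac + b.frac;               // at most 19998, fits 16 bits
    r.whole = a.whole + b.whole + (f >= 10000); // roll into the next whole value
    r.frac  = (f >= 10000) ? f - 10000 : f;
    return r;
}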
Here is an article about it: "Fixed Point Extensions to the C Programming Language", https://sestevenson.wordpress.com/2009/09/10/fixed-point-extensions-to-the-c-programming-language/
To quote a portion of that article:
I don’t think the extensions simplify the use of fixed types very
much. The programmer still needs to know how many bits are allocated
to integer and fractional parts, and how the number and positions of
bits may change (during multiplication for example). What the
extensions do provide is a way to access the saturation and rounding
modes of the processor without writing assembly code. With this level
of access, it is possible to write much more efficient C code to
handle these operations.
Scott G. Hall
Raleigh, NC, USA
Your question contains a wrong assumption.
If you're given a decimal string and want a floating-point value, the first step should generally not be to turn it into two integers.
For instance, consider the numbers 2.1 and 2.01. What's the "decimal part" in each case? 1 and 01? Both of those equal 1. That's no good.
The only case in which this approach makes any sense is where you have a fixed number of places after the decimal point -- in which case maybe 2.1 turns into (2,1000) and 2.01 turns into (2,100), or something. But unless you've got a positive reason for doing that (which I strongly doubt) you should not do it this way.
In particular, unless therm_read_temperature is a function someone else is providing you with and whose interface you can't influence, you should make that function behave differently -- e.g., just returning a float. (If it is a function someone else is providing and whose interface you can't influence, then to get a useful answer here you'll need to tell us exactly what it's defined to do.)
I am facing a problem and am unable to resolve it. Need help from gurus. Here is sample code:
float f=0.01f;
printf("%f",f);
If we check the value of the variable during debugging, f contains 0.0099999998, while the output of printf is 0.010000.
a. Is there any way we can force the compiler to assign the same value to a variable of float type?
b. I want to convert a float to a string/character array. How is it possible that only and exactly the same value be converted to the string/character array? I want to make sure that no zeros are padded, no unwanted values are appended, and no digits change, as in the above example.
It is impossible to exactly represent most base 10 decimal numbers using base 2 values; only values whose fractional part is a sum of negative powers of two (such as 0.25) can be represented exactly. To get what you need, you have to switch from the float/double built-in types to some kind of decimal number package.
You could use boost::lexical_cast in this way:
float blah = 0.01;
string w = boost::lexical_cast<string>( blah );
The variable w will contain the text value "0.00999999978", though I can't see when you would really need that.
It is preferable to use boost::format to accurately format a float as a string. The following code shows how to do it:
float blah = 0.01;
string w = str( boost::format("%d") % blah ); // w contains exactly "0.01" now
Have a look at this C++ reference. Specifically the section on precision:
float blah = 0.01;
printf ("%.2f\n", blah);
There are uncountably many real numbers.
There are only a finite number of values which the data types float, double, and long double can take.
That is, there will be uncountably many real numbers that cannot be represented exactly using those data types.
The reason that your debugger is giving you a different value is well explained in Mark Ransom's post.
Regarding printing a float without round-up or truncation and with fuller precision, you are missing the precision specifier - the default precision for printf is typically 6 fractional digits.
Try the following to get a precision of 10 digits:
float amount = 0.0099999998;
printf("%.10f", amount);
As a side note, a more C++ way (vs. C-style) to do things is with cout:
float amount = 0.0099999998;
cout.precision(10);
cout << amount << endl;
For (b), you could do
std::ostringstream os;
os << f;
std::string s = os.str();
In truth, using the floating point processor or co-processor or section of the chip itself (most are now integrated into the CPU) will never give exact decimal results, only a fairly rough approximation. For more accurate results, you could consider defining a class "DecimalString", which uses nybbles as decimal characters and symbols, and attempt to mimic base 10 mathematics using strings. In that case, depending on how long you want to make the strings, you could even do away with the exponent part altogether: a string of 256 characters can represent 1x10^-254 up to 1x10^255 in straight decimal using actual ASCII (shorter if you want a sign), but this may prove significantly slower. You could speed it up by reversing the digit order, so that from left to right the characters read
units, tens, hundreds, thousands....
Simple example:
e.g. "0021" becomes 1200
This would need "shifting" left and right to make the decimal points line up before the routines run. The best bet is to start with the ADD and SUB functions, as you will then build on them in the MUL and DIV functions. If you are on a large machine, you could make the strings theoretically as long as your heart desires!
Equally, you could use sprintf (from stdio.h) together with the ecvt and fcvt functions from stdlib.h (or at least, they should be there!).
int sprintf(char* dst,const char* fmt,...);
char *ecvt(double value, int ndig, int *dec, int *sign);
char *fcvt(double value, int ndig, int *dec, int *sign);
sprintf returns the number of characters it wrote to the string, for example
float f = 12.00f;
char buffer[32];
sprintf(buffer, "%4.2f", f); // will return 5 ("12.00"); on error it will return a negative value
ecvt and fcvt return pointers to static char buffers containing the null-terminated decimal representation of the number, with no decimal point and the most significant digit first. The offset of the decimal point is stored in dec, and the sign in "sign" (1 = negative, 0 = positive); ndig is the number of significant digits to store. If dec < 0, then you have to pad with -dec zeros prior to the decimal point. If you are unsure, and you are not working on a Windows 7 system (which sometimes will not run old DOS 3 programs), look for Turbo C version 2 for DOS; there are still one or two downloads available. It is a relatively small program from Borland: a DOS C/C++ editor/compiler that even comes with TASM, the 16-bit 386/486 assembler. All of this is covered in the help files, as are many other useful nuggets of information.
All three routines should be declared in those headers, though I have found that on Visual Studio 2010 they are anything but standard, often shadowed by functions dealing with WORD-sized characters that ask you to use Microsoft's own specific functions instead... "So much for the standard library," I mutter to myself almost every time, "maybe they ought to get a better dictionary!"
You would need to consult your platform standards to determine the correct format; you would need to display it as a*b^C, where 'a' is the integral component that holds the sign, 'b' is implementation-defined (likely fixed by a standard), and 'C' is the exponent used for that number.
Alternatively, you could just display it in hex, it'd mean nothing to a human, though, and it would still be binary for all practical purposes. (And just as portable!)
To answer your second question:
it IS possible to exactly and unambiguously represent floats as strings. However, this requires a hexadecimal representation. For instance, 1/16 = 0.1 and 10/16 is 0.A.
With hex floats, you can define a canonical representation. I'd personally use a fixed number of digits representing the underlying number of bits, but you could also decide to strip trailing zeroes. There's no confusion possible on which trailing digits are zero.
Since the representation is exact, the conversions are reversible: f==hexstring2float(float2hexstring(f))
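Worth knowing: C99 and C++11 already provide exactly this via the "%a" printf conversion and strtof/strtod, so the canonical hex representation needs no hand-written code. A quick illustration:

#include <cstdio>
#include <cstdlib>

int main() {
    float f = 0.01f;
    char buf[64];
    std::snprintf(buf, sizeof buf, "%a", f);  // e.g. "0x1.47ae14p-7"
    float g = std::strtof(buf, nullptr);      // parse the hex float back
    std::printf("%s round-trips: %s\n", buf, f == g ? "yes" : "no");
}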