I've searched half the day and found some very interesting things about using fixed point data types and bit shifting in C++ to accomplish division operations while avoiding floating point math. However, I have only been able to understand a small fraction of it and I can't seem to get anything to work.
All I'm wanting to do is to take two integers, add them up, and divide by two to get the average. I need to be able to do this very quickly, though, since I'm interpolating camera pixel data on an Arduino and I have other operations to do as well.
So I'm confused about shifting in general. Say the integer I want to divide by two is 27. Half of 27 is 13.5. But no matter what fixed point datatype I try, I can only get 13 as an output. For example:
uint8_t x = 27;
Serial.println( x >> 1 );
returns 13
There's got to be some simple way to do this, right?
Fixed point does give you a way to represent 13.5. The Wikipedia article on the Q number format is informative: https://en.wikipedia.org/wiki/Q_(number_format)
Think of it this way: You keep using integers, but instead of taking them at face value, divide them all implicitly by a power of 2 to obtain their semantic value.
So, if using an unsigned byte as your base type (values between 0 and 255, inclusive), you might implicitly divide by 2^3 (that is, 8). Now, to represent 27, you need the integer set to 27 * 8 = 216. To divide by two, you shift one to the right; now your integer is 108, which when divided by the implicit denominator of 8 gives 13.5, the value you're expecting.
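To make that concrete, here is a minimal sketch (an illustration added here, not from the original answer) using a uint8_t with an implicit denominator of 8, i.e. three fractional bits:

#include <cstdint>
#include <cstdio>

int main()
{
    uint8_t x = 27 * 8;      // raw value 216, represents 27.0
    uint8_t half = x >> 1;   // raw value 108, represents 108/8 = 13.5

    // display only: split into whole and fractional parts, still pure integer math
    std::printf("%d.%03d\n", half >> 3, (half & 0x07) * 125);   // prints 13.500
}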
You have to realize that fixed-point number systems (and floating point too, though it's less immediately evident) still have limits, of course; certain operations will overflow no matter what you do, and some operations cause a loss of precision. This is a normal consequence of working with limited-size types.
Say the integer I want to divide by two is 27. Half of 27 is 13.5. But no matter what fixed point data type I try, I can only get 13 as an output.
From the Wikipedia article on fixed-point arithmetic:
The scaling factor is usually a power of 10 (for human convenience) or a power of 2 (for computational efficiency).
You actually mentioned a fixed-point data type, and I think that is the best approach. But nothing you tried worked? Perhaps we have different understandings of fixed-point arithmetic.
while avoiding floating point math.
Another worthwhile goal, though one of diminishing value. Even in embedded systems, I seldom had to deal with a processor that did not have floating-point hardware, and that hardware has gotten reasonably good.
Anyway, using fixed point avoids any need for floating point, even for display purposes.
I think I need to proceed with a few examples.
Fixed point Example 1: Dollars and pennies
The unit of American money is the dollar, and the dollar is, in effect, a fixed-point data type.
So, if you have 27 dollars, how do you split it with your sibling?
One way (of several) that you all know is to convert 27 dollars into 2700 pennies. Dividing this value by 2 is trivial. Now you and your sibling can each get 1350 pennies. (In other words, the penny is a fixed-point data type that easily converts to and from dollars.)
Note that this is completely integer arithmetic. Adding two integers and dividing by 2 (any modern compiler will choose the fastest implementation, either an integer divide or perhaps a right shift by 1) takes well under a microsecond on my desktop.
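A minimal sketch of example 1 (illustrative only, names are mine), with everything kept in integer pennies:

#include <cstdio>

int main()
{
    const int dollars = 27;
    const int pennies = dollars * 100;   // 2700 pennies
    const int share = pennies / 2;       // 1350 pennies each; the compiler picks divide vs. shift
    std::printf("each sibling gets %d pennies\n", share);
}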
Don't waste any more time measuring the relative performance of those two options (divide vs. right shift); simply enable -O3 once your code tests correct. Your compiler should choose correctly.
The choice of units in any problem comes down to a scale factor that covers the range of values in your problem AND gives an understandable, quickly implemented conversion between units. Note that uint64_t can describe a very large amount of cash, even in pennies (challenge to the student).
In general, about fixed point:
Given
uint8_t x = 27;
and the desire to divide by 2 evenly and quickly, is there a scale factor that serves your needs? I say yes.
Fixed point Example 2: 50-cent coins and a dollar
How about we try, for example, a simple scale factor of 2, i.e. the unit is a hu, or half unit (analogous to the 50-cent coin).
uint8_t x = 27 * 2;  // 54 hu, where 1 hu = 1/2 unit
This means that 54 hu represents 27 units (i.e. it takes 54 fifty-cent coins to add up to 27 dollars).
The fixed-point solution is to scale your integer values to achieve the arithmetic required. If you scale by an even factor, all your integers will divide evenly into hu units.
Fixed point Example 3: Nickels and a dollar
Another possible scale factor is 20, which is decimal-friendly for readability (note that there are 20 nickels in a dollar).
uint16_t x = 27 * 20;  // 540 tu, where 1 tu = 1/20 unit
Now 540 represents a scaled 27, i.e. 540 nickels.
All examples are fully integer, provide exact answers, and there is a trivial mechanism to convert the values for presentation to the user: whichever fixed-point scale you used, convert to the analogue of pennies, giving 1350 pennies here.
Display the penny count as dollars and cents:
std::cout << (pennyCount / 100) << "." << (pennyCount % 100) << std::endl; // note: a cents value below 10 prints without its leading zero
I think this should look something like (untested):
13.50
Now your challenge is to make it look nice on the output.
The reason you only get 13 is that you are cutting off the least significant bit when you bit shift. Since it is cut off, there is no remainder to check. If you are interested in what your remainder is, you could do something like:
uint8_t x = 27;
Serial.println(x - (x >> 1) - (x >> 1));  // 27 - 13 - 13 = 1, the remainder
(x - (x >> 1)) should give 14 here.
It would be pretty simple to add .5 to the result once you determine whether the remainder is 1.
The following should work and should be fast:
float y = (x >> 1) + (0.5 * (x & 0x01));
What it does
(x >> 1) Divide by 2 using the bit shift
(0.5 * (x & 0x01)) Add 0.5 if the last bit was 1 (odd number)
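For completeness, a fully integer sketch (an illustration added here, not part of the answer above): when averaging two values, the sum a + b, read with an implicit denominator of 2, already is the exact average in half-units, and adding 1 before the shift gives a round-to-nearest integer average. Something like this inside an Arduino sketch:

uint8_t a = 27, b = 14;
uint16_t avgHalfUnits = a + b;          // 41 half-units, i.e. 20.5
uint8_t avgRounded = (a + b + 1) >> 1;  // 21, average rounded to nearest integer

Serial.print(avgHalfUnits >> 1);        // whole part: 20
Serial.print('.');
Serial.println((avgHalfUnits & 1) * 5); // fractional part: 5, so "20.5" overall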
Related
I know, I know, people are probably going to say "just switch to floating point", but currently that is not an option due to the nature of the project I am working on. I am helping write a programming language in C++, and I am currently having difficulty getting a very accurate algorithm for multiplication. I have a VM with operations for mod/smod, div/sdiv (signed numbers are not a concern here), and mul, plus a halving number for fully fractional numbers and a pushed shift number that I multiply and divide by to create my shifting. For simplicity, let's say I'm working with a 32-byte space.

My algorithms work fine for pretty much anything involving integers; it's just that when my fractional portion gets over 16 bytes I run into problems with precision. If I were to round it, the number would be fairly accurate, but I want it as accurate as possible, and I'm even willing to sacrifice a bit of performance for it, so long as it stays fixed point and doesn't go into floating-point land. The algorithms I'm concerned with are mapped out below in a sort of pseudocode. I would love any insight into how I could make this better, or any reasoning as to why, by the laws of computational science, what I'm asking for is a fruitless endeavor.
For fully fractional numbers (all bytes are fractional):
A = num1 / halfShift // truncate down to 16 bytes so that when multiplied, we get a full 32-byte number
B = num2 / halfShift
finalNum = A * B
For the rest of my numbers that are larger than 16 bytes I use this algorithm:
This algorithm can essentially be broken down into int.frac form: A.B * C.D takes the mathematical form of
D*B/shift + C*A*shift + D*A + C*B
if the fractional numbers are larger than the integer, I halve them, then multiply them together in my D*B/shift
just like in the fully fractional example above
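As an illustration of that decomposition, here is a sketch scaled down from the question's 32-byte space to 64-bit Q32.32 values so it can run anywhere; the structure is the same and all names are mine:

#include <cstdint>
#include <cstdio>

// x and y are Q32.32: the upper 32 bits are the integer part, the lower 32 the fraction.
uint64_t q32_mul(uint64_t x, uint64_t y)
{
    const uint64_t A = x >> 32, B = x & 0xFFFFFFFFu;   // x = A.B
    const uint64_t C = y >> 32, D = y & 0xFFFFFFFFu;   // y = C.D

    // (A*shift + B)*(C*shift + D)/shift = A*C*shift + A*D + B*C + B*D/shift;
    // the final >> 32 on B*D is where the truncation (precision loss) happens.
    return (A * C << 32) + A * D + B * C + ((B * D) >> 32);
}

int main()
{
    uint64_t a = (3ull << 32) | (1ull << 31);   // 3.5 in Q32.32
    uint64_t b = (2ull << 32) | (1ull << 30);   // 2.25 in Q32.32
    std::printf("%f\n", q32_mul(a, b) / 4294967296.0);   // prints 7.875000
}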
Is there some kind of "magic" rounding method that I should be aware of? Please let me know.
You get the most accurate result if you do the multiplication first and scale afterwards. Of course that means, that you need to store the result of the multiplication in a 64-bit int type.
If that is not an option, your approach with shifting in advance makes sense. But you certainly lose precision.
Either way, you can increase accuracy a little if you round instead of truncate.
I support Aconcagua's recommendation to round to nearest.
For that you need to add the highest bit which is going to be truncated before you apply the division.
In your case that would look like this:
A = (num1 + (1 << (halfshift-1))) >> halfshift
B = (num2 + (1 << (halfshift-1))) >> halfshift
finalNum = A * B
EDIT:
Example on how to dynamically scale the factors and the result depending on the values of the factors (this improves resolution and therefore the accuracy of the result):
shiftA and shiftB need to be set such that A and B are 16 byte fractionals each and therefore the 32 byte result cannot overflow. If shiftA and shiftB is not known in advance, it can be determined by counting the leading zeros of num1 and num2.
A = (num1 + (1 << (shiftA-1))) >> shiftA
B = (num2 + (1 << (shiftB-1))) >> shiftB
finalNum = (A * B) >> (fullshift - (shiftA + shiftB))
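A small runnable sketch of the round-to-nearest idea, scaled down to 64-bit values with a 32-bit shift (names are illustrative, not from the question):

#include <cstdint>
#include <cstdio>

uint64_t shift_round(uint64_t num, unsigned halfshift)
{
    // add half of the weight of the lowest surviving bit, then truncate
    return (num + (1ull << (halfshift - 1))) >> halfshift;
}

int main()
{
    uint64_t num1 = 0x180000000ull;   // 1.5 in Q32.32
    std::printf("truncated: %llu, rounded: %llu\n",
                (unsigned long long)(num1 >> 32),             // 1
                (unsigned long long)shift_round(num1, 32));   // 2
}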
The number of fractional digits of a product equals the sum of the numbers of fractional digits in the operands (for example, a Q8.8 value times a Q8.8 value yields a Q16.16 product). You have to carry out the multiplication to that precision and then round or truncate according to the desired target precision.
I am working on developing a fixed-point algorithm in C++. I know that, for an N-bit integer, the fixed-point binary integer is represented as U(a,b). For example, for an 8-bit integer (i.e. 256 possible values), if we represent it in the form U(6,2), it means that the binary point is to the left of the 2nd bit starting from the right, of the form:
b5 b4 b3 b2 b1 b0 . b(-1) b(-2)
Thus, it has 6 integer bits and 2 fractional bits. In C++, I know there are bit-shift operators I can use, but they basically just shift the bits of the input; my question is how to define a binary fixed-point integer of the form fix<6,2> or U(6,2). All the major processing operations will be carried out on the fractional part, and I am just looking for a way to do this in C++. Any help regarding this would be appreciated. Thanks!
Example : Suppose I have an input discrete signal with 1024 sample points on x-axis (For now just think this input signal is coming from some sensor). Each of this sample point has a particular amplitude. Say the sample at time 2(x-axis) has an amplitude of 3.67(y-axis). Now I have a variable "int *input;" that takes the sample 2, which in binary is 0000 0100. So basically I want to make this as 00000.100 by performing the U(5,3) on the sample 2 in C++. So that I can perform the interpolation operations on fractions of the input sampling period or time.
PS - I don't want to create a separate class or use external libraries for this. I just want to take each 8 bits from my input signal, perform the U(a,b) fix on it, and then do the rest of the operations on the fractional part.
Short answer: left shift.
Long answer:
Fixed point numbers are stored as integers, usually int, which is the fastest integer type for a particular platform.
Normal integers without fractional bits are usually called Q0, Q.0 or QX.0, where X is the total number of bits of the underlying storage type (usually int).
To convert between different Q.X formats, left or right shift. For example, to convert 5 in Q0 to 5 in Q4, left shift it 4 bits, or multiply it by 16.
Usually it's useful to find or write a small fixed point library that does basic calculations, like a*b>>q and (a<<q)/b. Because you will do Q.X=Q.Y*Q.Z and Q.X=Q.Y/Q.Z a lot and you need to convert formats when doing calculations. As you may have observed, using normal * operator will give you Q.(X+Y)=Q.X*Q.Y, so in order to fit the result into Q.Z format, you need to right shift the result by (X+Y-Z) bits.
Division is similar: you get Q.(X-Y)=Q.X/Q.Y from the standard / operator, and to get the result in Q.Z format you shift the dividend before the division. What's different is that division is an expensive operation, and it's not trivial to write a fast one from scratch.
Be aware of the double-word support of your platform; it will make your life a lot easier. With double-word arithmetic, the result of a*b can be twice the size of a or b, so that you don't lose range by doing a*b>>c. Without double words, you have to limit the input range of a and b so that a*b doesn't overflow. This is not obvious when you first start, but soon you will find you need more fractional bits or range to get the job done, and you will finally need to dig into the reference manual of your processor's ISA.
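As a sketch of what such helpers might look like (assuming Q16.16 stored in int32_t with an int64_t double-word intermediate; names are mine):

#include <cstdint>

constexpr int Q = 16;   // number of fractional bits

int32_t qmul(int32_t a, int32_t b)
{
    // Q16.16 * Q16.16 gives a Q32.32 value in the 64-bit intermediate; shift back to Q16.16
    return (int32_t)(((int64_t)a * b) >> Q);
}

int32_t qdiv(int32_t a, int32_t b)
{
    // pre-shift the dividend so the quotient comes out in Q16.16
    return (int32_t)(((int64_t)a << Q) / b);
}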
example:
float a = 0.1;                              // 0.1
int aQ16 = a * 65536;                       // 0.1 in Q16 format (6553, truncated)
int bQ16 = 4 << 16;                         // 4 in Q16 format
int cQ16 = ((long long)aQ16 * bQ16) >> 16;  // result = 0.399963378906250 in Q16 = 26212,
                                            // not 0.4 in Q16 = 26214, because of the truncation error
If this is your question:
Q. Should I define my fixed-binary-point integer as a template, U<int a, int b>(int number), or not, U(int a, int b)
I think your answer to that is: "Do you want to define operators that take two fixed-binary-point integers? If so make them a template."
The template is just a little extra complexity if you're not defining operators. So I'd leave it out.
But if you are defining operators, you don't want to be able to add U<4, 4> and U<6, 2>. What would you define your result as? The templates will give you a compile time error should you try to do that.
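For illustration, a rough sketch of the templated option (assuming an 8-bit storage type as in the question; all names are mine):

#include <cstdint>

template <int I, int F>
struct U {
    static_assert(I + F == 8, "this sketch assumes 8-bit storage");
    uint8_t raw;   // stored as value * 2^F

    explicit U(double v) : raw(static_cast<uint8_t>(v * (1 << F))) {}
    double value() const { return raw / double(1 << F); }
};

// Addition is only defined for operands with the same layout,
// so U<4,4> + U<6,2> fails to compile, as described above.
template <int I, int F>
U<I, F> operator+(U<I, F> a, U<I, F> b)
{
    U<I, F> r(0.0);
    r.raw = static_cast<uint8_t>(a.raw + b.raw);   // same scale, so raw values add directly
    return r;
}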
I have a small formula to apply to a money amount, and I'm using cents on my Money objects (dividing the stored amount by 100 to get dollars back) to make sure floats are not messing things up. However, I'm not sure how to apply a percentage calculation to that. For example, an interest rate formula of the following: $500 * 0.0048% * 5 (500 bucks, at 0.0048% per day, for 5 days).
How do I represent the percentage properly during my calculation? Would it have to be 50000 * 0.000048f * 5?
Technically you're supposed to calculate interest over interest, but let's ignore that for a moment.
Your interest rate is 48/1000000. Hence, the daily interest in cents is 50000 * 48/1000000. Floats are not needed, and in fact best avoided. They'd work in this case (small amount) but you'd soon need double.
You need to evaluate your operations using fixed-point arithmetic.
When you state that you are storing cents, you are in fact saying that your numbers have all been multiplied by 100; that scale factor is a piece of information you are not storing. So $0.01 is represented as 1/100. The rate 0.0048% can then be represented as 48/1000000, and the computation becomes:
50000/100 * 48/1000000 * 5 = 12000000/100000000 = 12/100
Then you convert back to cents (IMPORTANT: at every step it's mandatory that you check for overflow and lost bits) to normalize to your preferred representation: 12 cents.
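A minimal sketch of that computation in integer-only C++ (the figures are those from the question; variable names are mine):

#include <cstdint>
#include <cstdio>

int main()
{
    const int64_t principalCents = 50000;            // $500.00
    const int64_t rateNum = 48, rateDen = 1000000;   // 0.0048% per day
    const int64_t days = 5;

    // multiply first, divide last, so nothing is truncated along the way
    const int64_t interestCents = principalCents * rateNum * days / rateDen;

    std::printf("interest: %lld cents\n", (long long)interestCents);   // 12 cents
}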
Note that this is also the typical approach used with microcontrollers (no FPU, low performance).
As an alternative, if you don't care much about performance, you can also use an arbitrary-precision arithmetic library such as GMP.
I want to perform some calculations and I want the result correct up to some decimal places, say 12.
So I wrote a sample:
#define PI 3.1415926535897932384626433832795028841971693993751
double d, k, h;
k = 999999/(2*PI);
h = 999999;
d = PI*k*k*h;
printf("%.12f\n", d);
But it gives the output:
79577232813771760.000000000000
I even used setprecision(), but I got the same answer, just in exponential form.
cout<<setprecision(12)<<d<<endl;
prints
7.95772328138e+16
Used long double also, but in vain.
Now is there any way other than storing the integer part and the fractional part separately in long long int types?
If so, what can be done to get the answer precisely?
A double has only about 16 decimal digits of precision. Everything after the decimal point would be nonsense. (In fact, the last digit or two left of the point may not agree with an infinite-precision calculation.)
Long double is not standardized, AFAIK. It may be that on your system it is the same as double, or no more precise. That would slightly surprise me, but it doesn't violate anything.
You need to read about double-precision concepts again, more carefully.
A double gets its increased precision from its 64-bit representation.
The digits before the decimal point take priority, so they consume the available precision first.
So when you have a large integer part, the low-order digits after it get dropped; this is what the other answers here describe as rounding off.
Update:
To increase precision, you'll need to use some library or change your language.
Check this other question: Best coding language for dealing with large numbers (50000+ digits)
Yet, I'll ask you to re-check your intent once more. Do you really need 12 decimal places for numbers that have really high values (over 10 digits in the integer part, like in your example)? Maybe you won't really have large integer parts (in which case such code should work fine). But if you are tracking a value like 10000000000.123456789, I am really interested in exactly which application you are working on (astronomy?). If the integer part of your values is some way under 10000, you should be fine here.
Update2:
If you must demonstrate that a specific formula works accurately within constrained error limits, the way to go is to restructure the evaluation of your formula so that the least error is introduced.
For example, if you want to compute, say, (x * y) / z, it would be prudent to try something like max(x,y)/z * min(x,y) rather than the original form, which may lose precision after (x * y) if that product does not fit in the roughly 16 significant decimal digits of a double.
If you had just 2-digit precision:

               2-digit    regular precision
  42 * 7       290        294
  (42 * 7)/2   290/2      294/2
  Result ==>   145        147

But ==>  42/2 = 21
         21 * 7 = 147
This is probably the intent of your contest.
The double-precision binary format used by most computers can only hold about 16 digits, after that you'll get rounding. See http://en.wikipedia.org/wiki/Double-precision_floating-point_format
Floating point values have a limited number of significant digits. Just because your "PI" value has six times as many digits as a double will support doesn't alter the way the hardware works.
A typical (IEEE754) double will produce approximately 15-16 decimal places. Whether that's 0.12345678901235, 1234567.8901235, 12345678901235 or 12345678901235000000000, or some other variation.
In other words, yes, if you carried out your calculation EXACTLY, you'd get lots of decimal places, because pi never ends. On a computer, you get about 15-16 significant digits no matter what input values you use; all that changes is where in that sequence the decimal point sits. To get more, you need "big number support", such as the GNU Multiple Precision (GMP) library.
You're looking for std::fixed. That tells the ostream not to use exponential form.
cout << setprecision(12) << std::fixed << d << endl;
I have two integer variables, partial and total. It is a progress, so partial starts at zero and goes up one-by-one to the value of total.
If I want to get a fraction value indicating the progress(from 0.0 to 1.0) I may do the following:
double fraction = double(partial)/double(total);
But if total is too big, the conversion to double may lose information.
Actually, the amount of lost information is tolerable, but I was wondering if there is an algorithm or a std function to get the fraction between two values while losing less information.
The obvious answer is to multiply partial by some scaling factor; 100 is a frequent choice, since the division then gives the percent as an integral value (rounded down). The problem is that if the values are so large that they can't be represented precisely in a double, there's also a good chance that the multiplication by the scaling factor will overflow. (For that matter, if they're that big, the initial values will overflow an int on most machines.)
Yes, there is an algorithm losing less information. Assuming you want to find the double value closest to the mathematical value of the fraction, you need an integer type capable of holding total << 53. You can create your own or use a library like GMP for that. Then
scale partial so that (total << 52) <= numerator < (total << 53), where numerator = (partial << m)
let q be the integer quotient numerator / total and r = numerator % total
let mantissa = q if 2*r < total, and q+1 if 2*r > total; if 2*r == total, take q+1 to round half up, q to round half down, or whichever of the two is even for round-half-to-even
result = scalbn(mantissa, -m)
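A sketch of those steps, under the simplifying assumptions that partial and total fit in 64 bits with 0 <= partial <= total, and using unsigned __int128 (a GCC/Clang extension) in place of a big-integer library:

#include <cmath>
#include <cstdint>

double closest_fraction(std::uint64_t partial, std::uint64_t total)
{
    if (partial == 0) return 0.0;

    using u128 = unsigned __int128;
    const u128 lo = (u128)total << 52;
    u128 num = partial;
    int m = 0;
    while (num < lo) { num <<= 1; ++m; }   // scale so that (total << 52) <= num < (total << 53)

    u128 q = num / total;
    u128 r = num % total;
    std::uint64_t mantissa = (std::uint64_t)q;
    if (2 * r > total || (2 * r == total && (mantissa & 1)))
        ++mantissa;                        // round half to even
    return std::scalbn((double)mantissa, -m);
}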
Most of the time you get the same value as for (double)partial / (double)total. Differences of one least significant bit are probably not too rare, two or three LSBs of difference wouldn't surprise me either but are rare, and a bigger difference is unlikely (that said, somebody will probably give an example soon).
Now, is it worth the effort? Usually not.
If you want a precise representation of the fraction, you'd use some sort of structure containing the numerator and the denominator as integers, and, for a unique representation, you'd factor out the greatest common divisor (with a special case for zero). If you are just worried that after repeated operations the floating-point representation might not be accurate enough, you should look for a course on numerical analysis, as that issue isn't strictly a programming issue. There are better ways than others to calculate certain results, but I can't really go into them (I've never done the coursework, just read about it).
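A minimal sketch of that exact-representation idea, using std::gcd from C++17 (the struct and function names are mine):

#include <cstdint>
#include <numeric>   // std::gcd

struct Fraction {
    std::int64_t num;
    std::int64_t den;   // kept positive and coprime with num
};

Fraction make_fraction(std::int64_t partial, std::int64_t total)
{
    if (partial == 0) return {0, 1};             // special case for zero
    std::int64_t g = std::gcd(partial, total);   // factor out the greatest common divisor
    return {partial / g, total / g};
}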