I have an API call which returns a double. The number of decimal places can vary from many to a few (it comes down to the state of an actuator). This double represents the current position on the range radius of an actuator.
I am not interested in such a detailed number, because it adds a lot of noise to the system.
I've been using floats to save space, but my floats still carry 6-8 decimal places.
I am not interested in floor or ceil, simply because they don't do the job I want.
For example, I have the numbers:
-2.05176
-0.104545
0.30643
0.140237
1.41205
-0.176715
0.559462
0.364928
I want the precision to be set to 2 decimal places no matter what.
I am NOT talking about the output precision of std::cout; I know how to set that. I am talking about the actual precision of the float, i.e. I am interested in floats of the 0.XX form, whose actual stored value is 0.XX00000000000.
Therefore transforming the above list to:
-2.05
-0.10
0.30
0.14
1.41
-0.17
0.55
0.36
I know boost has a numerical conversion template, but I cannot figure out how to convert from float to float using a lower precision. Can somebody please help?
Can't you just round it?
#include <cmath>

float round(float num, int precision)
{
    float scale = powf(10.0f, precision);       // e.g. 100.0f for precision = 2
    return floorf(num * scale + 0.5f) / scale;
}
The precision of float and double is down to the hardware. Anything else needs to be coded in software.
You could try scaling instead, working in ints:
-205
-10
30
14
141
-17
55
36
I'm trying to find the point where 2 lines intersect.
I am using this example from GeeksforGeeks.
This works well for small numbers. I am using "accurate" latitude and longitude points which are scaled by 10000000. So a latitude of 40.4381311 would be 404381311 in my program. I can't use double as I will lose accuracy and the whole point of getting the points scaled like this is for accuracy.
So in the GeeksforGeeks example I changed every use of double to int64_t. But unfortunately this is still not enough.
By the time you get to:
double determinant = a1*b2 - a2*b1;
or:
double x = (b2*c1 - b1*c2)/determinant;
double y = (a1*c2 - a2*c1)/determinant;
The number will be too big for int64_t. What is my best way around this issue while still keeping accuracy?
I am writing this code for an ESP32 and it doesn't seem to have __int128/__int128_t.
Thanks!
I've got a third-party library that reads values from laboratory scales. This library interfaces with many scale models, each one with its own precision.
What I need to do in my C++ application (which uses that library) is read the weight value from the scale (as a double) and print it, taking the scale's precision into account (so that the user reading the value also knows the scale's precision).
So, connecting a scale with 2 decimals precision, the output should be for example: 23.45 g
Instead, connecting a scale with 4 decimals precision, the output should be for example: 23.4567 g
The fact is I don't know from the library the scale precision.
The function looks like the following:
double value = scale.Weight();
If I just print the double value, the output could be in the form of:
1.345999999999999
instead of:
1.346
Is there a way to understand the double precision so that the output shows the weight with the scale precision?
EDIT: scale precision goes from 0 to 6 decimals.
No. This information should live inside the scale class, as the double type has "fixed" precision and you cannot change it. Also, the precision of a type and the printed precision are two different things: you could use a type with arbitrary precision and still always show 2 digits after the decimal point. If the library does not expose the precision, you could write a helper class that hard-codes the precision and correlates it with some scale property or model type.
I am looking some advice for a Qt program I am working on, that uses Qwt to draw a line graph.
Basically my problem arises from the graph's x axis, which is in 24:00 time. I have a QPolygonF that stores a series of QPointFs holding the values for my plot curve, where every 1.0 on the x axis equates to 1 second. I then use unix timestamps to set each x value: I initialise double xAxis to 0.0 and add it to the QPolygonF with points.append(QPointF(xAxis, yAxis)) for the start of the curve; for each point thereafter I use currentTime - prevTime to find the difference between the two timestamps and then increase xAxis by that difference using +=. If that makes sense.
Anyway, currently everything is displayed in whole seconds and it works perfectly fine. However, I need it to be precise to the millisecond. What I need some guidance on is working with large high precision doubles.
Working with unix timestamps in seconds is easy as that can be done with a simple int, but when you increase the number of digits to include milliseconds doubles are switched to scientific notation.
My question is: how do I store potentially large numbers, like 22429.388 or larger, if they revert to scientific notation?
Thanks and sorry if this is a very basic question.
You say your graph axis is 24:00 long. That is 24*3600 seconds, so 24*3600*1000 milliseconds: 86,400,000, which is way smaller than INT_MAX (2,147,483,647).
So there should be no problem storing your x values as an int. You just need to make first axis value be 0 then last axis value will be 86,400,000.
If your times do not start at 0, you just need to define the smallest time displayed as a "reference date" and store values based on this "reference date" (to guarantee they will all be between 00:00:00.0000 (i.e: 0 as an int) and 24:00:00.0000 (i.e: 86,400,000 as an int)).
So I have a CString which contains a number value e.g. "45.05" and I would like to round this number to one decimal place.
I use this function
_stscanf(strValue, _T("%f"), &m_Value);
to put the value into a float which I can round. However, in the case of 45.05 the number I get is 45.04999..., which rounds to 45.0 where one would expect 45.1.
How can I get the correct value from my CString?
TIA
If you need a string result, your best bet is to find the decimal point and inspect the two digits after it and use them to make a rounded result. If you need a floating-point number as a result, well.. it's hopeless since 45.1 cannot be represented exactly.
EDIT: the nearest you can come to rounding with arithmetic is computing floor(x*10+0.5)/10, but know that doing this with 45.05 WILL NOT and CAN NOT result in 45.1.
You could extract the digits that make up the hundredths and below positions separately, convert them to a number and round it independently, and then add that to the rest of the number:
"45.05" = 45.0 and 0.5 tenths (0.5 can be represented exactly in binary)
round 0.5 tenths to 1
45.0 + 1 tenth = 45.1
don't confuse this with just handling the fractional position separately. "45.15" isn't divided into 45 and .15, it's divided into 45.1 and 0.5 tenths.
I haven't used C++ in a while, but here are the steps I would take:
Count the digits after the decimal point
Remove the decimal point
Convert the string to an int (e.g. "45.05" becomes 4505)
Perform the rounding operation on the last digit (4505 becomes 451)
Divide by 10^(digits after the decimal point, less one)
Store the result in a float
I'm using floats to specify texture coordinates, in the range 0-1. OpenGL likes things in this range, and I'm fine specifying coordinates this way, but I'm concerned when I start using larger textures (say up 4096 or 8192 pixels), that I may start losing precision. For example, if I want to specify a coordinate of (1,1) in a 8192x8192px texture, that would map to 1/8192=0.0001220703125. That seems to evaluate to 0.000122070313 as a float though... I'm concerned that my OpenGL shader won't map that to the same pixel I intended.
I could keep the coordinates as integers in pixels for awhile, but sooner or later I have to convert it (perhaps as late as in the shader itself). Is there a workaround for this, or is this something I should even be concerned about?
Multiplying it back out, I get 1.000000004096, which I guess would still be interpreted as 1? Actually, OpenGL does blending if it's not a whole number, doesn't it? Perhaps not with "nearest neighbour", but with "linear" it ought to.
1/4096f * 4096 = 1, error = 0
1/8192f * 8192 = 1.000000004096, error = 0.000000004096
1/16384f * 16384 = 1.0000000008192, error = 0.0000000008192
1/32768f * 32768 = 0.9999999991808, error = 0.0000000008192
...
1/1048576f * 1048576 = 0.9999999827968, error = 0.0000000172032
(I'm using Visual Studio's debugger to compute the float, and then multiplying it back out with Calculator)
Is the lesson here that the error is negligible for any reasonably sized texture?
That seems to evaluate to 0.000122070313 as a float though... I'm concerned that my OpenGL shader won't map that to the same pixel I intended.
You should not be concerned. Floating point is called floating point because the decimal point floats. You get ~7 decimal digits of precision in the mantissa, more or less regardless of how large or small the float is.
The float isn't stored as 0.000122070313; it's stored as 1.22070313x10^-4. The mantissa is 1.22070313, the exponent is -4. If the exponent were -8 instead, you would have the same precision.
Your exponent, with single-precision floats, can go down to + or - ~38. That is, you can have 38 zeros between the decimal and the first non-zero digit of the mantissa.
So no, you shouldn't be concerned.
The only thing that should concern you would be the precision of the interpolated value and the precision in the texture fetching. But these have nothing to do with the precision of data you store your texture coordinates in.