So I have a CString which contains a number value, e.g. "45.05", and I would like to round this number to one decimal place.
I use this function
_stscanf(strValue, _T("%f"), &m_Value);
to put the value into a float which I can round. However, in the case of 45.05 the number I get is 45.04999..., which rounds to 45.0 where one would expect 45.1.
How can I get the correct value from my CString?
TIA
If you need a string result, your best bet is to find the decimal point, inspect the two digits after it, and use them to make a rounded result. If you need a floating-point number as a result, well... it's hopeless, since 45.1 cannot be represented exactly.
EDIT: the nearest you can come to rounding with arithmetic is computing floor(x*10+0.5)/10, but know that doing this with 45.05 WILL NOT and CANNOT result in exactly 45.1.
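A quick demonstration of that on a typical IEEE 754 system (a sketch; the printed digits show the nearest representable floats, and may differ on other hardware):

#include <cmath>
#include <cstdio>

int main()
{
    float x = 45.05f;  // the nearest float is actually slightly below 45.05
    std::printf("%.9f\n", x);                                 // ~45.049999237
    float r = std::floor(x * 10.0f + 0.5f) / 10.0f;
    std::printf("%.9f\n", r);  // ~45.099998474: close to, but not exactly, 45.1
    return 0;
}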
You could extract the digits that make up the hundredths and below positions separately, convert them to a number and round it independently, and then add that to the rest of the number:
"45.05" = 45.0 and 0.5 tenths (0.5 can be represented exactly in binary)
round 0.5 tenths to 1
45.0 + 1 tenth = 45.1
Don't confuse this with just handling the fractional part separately: "45.15" isn't divided into 45 and .15, it's divided into 45.1 and 0.5 tenths.
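A minimal sketch of this string-based idea for one decimal place, assuming input of the form "II.DD" (the carry case, e.g. "45.95" -> "46.0", is left out for brevity):

CString RoundToOneDecimal(const CString& s)
{
    int dot = s.Find(_T('.'));
    if (dot < 0 || dot + 2 >= s.GetLength())
        return s;                          // nothing to round
    CString result = s.Left(dot + 2);      // keep "II.D"
    if (s[dot + 2] >= _T('5'))             // second decimal digit decides
        result.SetAt(dot + 1, (TCHAR)(result[dot + 1] + 1));  // no carry handling
    return result;
}

Because the rounding happens in text, "45.05" really does become "45.1" here.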
I haven't used C++ in a while, but here are the steps I would take (sketched in code below):
Count the digits after the decimal point
Remove the decimal point
Parse the string to an int
Perform the rounding operation
Divide by 10 to the power of (the number of digits after the decimal point, less one)
Store the result in a float
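A sketch of those steps, assuming the CString contains a '.' (the result 45.1 is still only the nearest representable double, but it prints as 45.1):

double RoundCStringToOneDecimal(const CString& strValue)
{
    int dot = strValue.Find(_T('.'));
    int digitsAfter = strValue.GetLength() - dot - 1;  // 2 for "45.05"

    CString digits = strValue;
    digits.Remove(_T('.'));                            // "4505"
    long n = _ttol(digits);                            // 4505

    // round away all but one fractional digit: 4505 -> 451
    for (int i = 1; i < digitsAfter; ++i)
        n = (n + (n >= 0 ? 5 : -5)) / 10;

    return n / 10.0;                                   // 45.1
}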
Related
I have a set of N real sequences and need to pick K sequences (without replacement) such that their sum has the minimum variance.
E.g. I have N=3 real sequences of length 5:
x(1)=[-0.9 0.7 2.0 2.5 1.5]
x(2)=[-1.8 -0.2 0.5 -1.3 -0.7]
x(3)=[-1.5 -0.9 0.3 1.5 0.4]
If I need to select K=2 sequences, the variance of the sums is:
var(x(1)+x(2))=3.7
var(x(1)+x(3))=6.1
var(x(2)+x(3))=2.5
So I'd want to select sequences 2 & 3.
This is easy to brute force for small N, but my real application has much larger N. For example, for N=20 and K=10, there are 184756 combinations. Since my sequence lengths are long and computational time is critical, this is not feasible.
Is there an efficient algorithm to do the selection? Or even to give an approximate solution? Or reduce the problem space to likely candidates?
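For reference, a brute-force baseline (a sketch only; it is exactly the combinatorial search you want to avoid for large N, but it pins down the objective). It uses the sample variance, which matches the figures above:

#include <algorithm>
#include <numeric>
#include <vector>

double sampleVariance(const std::vector<double>& v)
{
    double mean = std::accumulate(v.begin(), v.end(), 0.0) / v.size();
    double ss = 0.0;
    for (double x : v) ss += (x - mean) * (x - mean);
    return ss / (v.size() - 1);       // n-1 denominator, as in the example
}

std::vector<int> bestSubset(const std::vector<std::vector<double>>& x, int K)
{
    int N = static_cast<int>(x.size());
    size_t len = x[0].size();
    std::vector<bool> pick(N, false);
    std::fill(pick.end() - K, pick.end(), true);   // first K-subset mask
    std::vector<int> best;
    double bestVar = 1e300;
    do {
        std::vector<double> sum(len, 0.0);
        for (int i = 0; i < N; ++i)
            if (pick[i])
                for (size_t j = 0; j < len; ++j) sum[j] += x[i][j];
        double v = sampleVariance(sum);
        if (v < bestVar) {
            bestVar = v;
            best.clear();
            for (int i = 0; i < N; ++i) if (pick[i]) best.push_back(i);
        }
    } while (std::next_permutation(pick.begin(), pick.end()));
    return best;   // for the example above: indices 1 and 2, i.e. x(2) and x(3)
}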
I've got a third-party library that reads values from laboratory scales. This library interfaces with many scale models, each one with its own precision.
What I need to do in my C++ application (which uses that library) is read the weight value from the scale (in double format) and print it, taking the scale's precision into account (so that the user reading that value also gets the information about the scale's precision).
So, connecting a scale with 2 decimals precision, the output should be for example: 23.45 g
Instead, connecting a scale with 4 decimals precision, the output should be for example: 23.4567 g
The problem is that the library does not tell me the scale's precision.
The function looks like the following:
double value = scale.Weight();
If I just print the double value, the output could be in the form of:
1.345999999999999
instead of:
1.346
Is there a way to infer the precision from the double so that the output shows the weight with the scale's precision?
EDIT: scale precision goes from 0 to 6 decimals.
No. That information has to come from the scale class itself: the double type has a "fixed" precision which you cannot change. Also, the precision of the type and the printed precision are two different things; you could use a type with arbitrary precision and still always show 2 digits after the decimal point. If the scale does not expose its precision, you could write a helper class, hard-code the precision inside it, and correlate it with some scale property or type.
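A sketch of that helper idea; the model names and the fallback value are made up, and the table would have to be filled in from the scale documentation:

#include <iomanip>
#include <iostream>
#include <map>
#include <string>

class ScalePrecision
{
public:
    static int Decimals(const std::string& model)
    {
        // hypothetical model identifiers; hard-coded per the suggestion above
        static const std::map<std::string, int> table = {
            { "ModelA", 2 },
            { "ModelB", 4 },
        };
        auto it = table.find(model);
        return it != table.end() ? it->second : 3;   // fallback guess
    }
};

void PrintWeight(double value, const std::string& model)
{
    std::cout << std::fixed
              << std::setprecision(ScalePrecision::Decimals(model))
              << value << " g\n";   // e.g. 23.45 g for a 2-decimal scale
}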
I need to compress floating point numbers (4 bytes) to 1 byte(0 to 0xFF) to send to another device. The floating point numbers range from -100000.0 to 100000.0.
The other device will decode from 1 byte back to floating point numbers. How do I do it with minimum data loss?
Thanks, JC
One solution is to use quantization. Divide the range 0 to 100000 into 127 intervals. Send the interval number the float falls into, with the sign in the lowest or highest bit.
In your case the interval width = 100000 / 127 ≈ 787.4.
For example, for an input of 100, send 1; for an input of 1000.147732, send 2.
On the device you can restore the number from its interval number.
The easiest solution is to restore the number as the middle of the interval. For example, every float that belongs to the first interval will be restored as 393.7.
If you have statistics on the value distribution and it is not uniform, you can vary the interval lengths and quantize frequent values more precisely.
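A sketch of the scheme in C++ (it stores 0-based interval indices in the byte, unlike the 1-based description above; the sign goes in the top bit):

#include <cmath>
#include <cstdint>

const float kRange     = 100000.0f;
const int   kIntervals = 127;
const float kWidth     = kRange / kIntervals;   // ~787.4

uint8_t encode(float x)
{
    uint8_t sign = (x < 0.0f) ? 0x80 : 0x00;
    float mag = std::fabs(x);
    int idx = static_cast<int>(mag / kWidth);
    if (idx > kIntervals - 1) idx = kIntervals - 1;   // clamp at the top
    return sign | static_cast<uint8_t>(idx);
}

float decode(uint8_t b)
{
    float mag = ((b & 0x7F) + 0.5f) * kWidth;   // middle of the interval
    return (b & 0x80) ? -mag : mag;
}

For example, decode(encode(100.0f)) gives 393.7, the middle of the first interval.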
How can I specify a two-decimal-place mask using CMFCMaskedEdit?
I want to validate/allow only numbers with two decimal places.
Thanks,
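A minimal sketch of one way to do it with CMFCMaskedEdit::EnableMask, assuming a fixed field width of three integer digits (m_wndMaskEdit is a hypothetical CMFCMaskedEdit member; check the exact mask/template rules in your MFC version):

// In OnInitDialog: 'd' accepts a digit, and the space in the mask marks
// the position of the literal '.' in the template.
m_wndMaskEdit.EnableMask(
    _T("ddd dd"),      // mask: which positions take user input
    _T("___.__"),      // template: '_' = entry position, '.' = literal
    _T(' '));          // default character for empty positions
m_wndMaskEdit.SetValidChars(_T("0123456789"));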
I have an API call which returns a double. The double's number of decimal places can vary from many to a few (it all comes down to the state of an actuator). This double represents the current position on the range radius of an actuator.
I am not interested in such a detailed number, because it adds a lot of noise to the system.
I've been using floats to save space, but I still have floats with 6-8 decimal digits.
I am not interested in floor or ceil, simply because they don't do the job I want.
For example, I have the numbers:
-2.05176
-0.104545
0.30643
0.140237
1.41205
-0.176715
0.559462
0.364928
I want the precision to be set to 2 decimal places no matter what.
I am NOT talking about the output precision on std::cout, I know how to set that. I am talking about the actual precision of the float, i.e. I am interested in floats that will have 0.XX format and 0.XX00000000000 actual precision.
Therefore transforming the above list to:
-2.05
-0.10
0.30
0.14
1.41
-0.17
0.55
0.36
I know Boost has a numerical conversion template, but I cannot figure out how to convert from float to float using a lower precision. Can somebody please help?
Can't you just round it?

#include <cmath>

// Renamed from round to avoid clashing with std::round (C++11).
float round_to(float num, int precision)
{
    float scale = std::pow(10.0f, static_cast<float>(precision));
    return std::floor(num * scale + 0.5f) / scale;
}
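Used like this (a quick sketch; note that true rounding turns 0.30643 into 0.31, where the question's list looks truncated, and the stored float is still only the nearest representable value):

#include <cstdio>

int main()
{
    const float values[] = { -2.05176f, -0.104545f, 0.30643f, 0.140237f };
    for (float v : values)
        std::printf("%.2f\n", round_to(v, 2));   // -2.05 -0.10 0.31 0.14
    return 0;
}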
The precision of float and double is down to the hardware. Anything else needs to be coded in software.
You could try scaling instead, working in ints:
-205
-10
30
14
141
-17
55
36
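A sketch of that conversion; static_cast<int> truncates toward zero, which is what produces -17 from -0.176715 in the list above:

#include <vector>

std::vector<int> to_hundredths(const std::vector<float>& values)
{
    std::vector<int> out;
    out.reserve(values.size());
    for (float v : values)
        out.push_back(static_cast<int>(v * 100.0f));   // 0.30643f -> 30
    return out;
}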