I have a parent dialog box and a child dialog box. When I post a message from the child to the parent using PostMessageW(WM_SMESG, NULL, l_dValue); where l_dValue is a double, and then in the parent cast the received value with double l_value = (double)lParam;, l_value always shows 0.0 even though the value I sent was 0.5. What is the problem?
Casting a double of value 0.5 to an integer is "rounded down"; more precisely, the decimals are truncated. Truncating .5 from 0.5 always leaves 0. Moreover, in a 32-bit build lParam (32 bit) is not big enough to hold a double value (64 bit). But, assuming float (32 bit) instead of double, you can do it as follows:
Bit-based "cast" from float to long: *((long*)(&myFloat))
Bit-based "cast" from long to float: *((float*)(&lParam))
Or the C++ way:
Bit-based "cast" from float to long: *reinterpret_cast<long*>(&myFloat)
Bit-based "cast" from long to float: *reinterpret_cast<float*>(&lParam)
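As a hedged sketch of the full round trip (WM_SMESG and the parent window handle hParent are assumed to already exist in your project; std::memcpy performs the same bit copy as the casts above without the strict-aliasing concerns):
#include <windows.h>
#include <cstring>

const UINT WM_SMESG = WM_APP + 1;   // hypothetical message id; use your own

void postValue(HWND hParent, float value) {
    LPARAM lp = 0;
    std::memcpy(&lp, &value, sizeof value);   // bit-copy the float into the low 32 bits
    PostMessageW(hParent, WM_SMESG, 0, lp);
}

// In the parent's window procedure:
// case WM_SMESG: {
//     float value;
//     std::memcpy(&value, &lParam, sizeof value);   // bit-copy back to float
//     /* use value, e.g. 0.5f */
//     break;
// }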
I want to edit the opacity for a QGraphicsItem with a QSpinBox.
The QSpinBox gives me the value 57 which I use to set the opacity of the item.
Then I read the changed opacity back from the item and want to put it into the QSpinBox, but the value I compute for the box is wrong.
qDebug() << (int)(qreal)(0.57 * 100.0);
outputs 56
Is this a known bug?
Is there a workaround?
Reason
Do not rely on exact results from floating-point arithmetic: most decimal fractions have no exact binary representation, so small errors creep into calculations. This is inherent to floating point, not a Qt bug.
Demonstration
If you set a higher precision for the output using qSetRealNumberPrecision, you'll see the actual root of the problem: the result of 0.57 * 100.0 is not exactly 57 but something like 56.999999999999992895:
qDebug() << qSetRealNumberPrecision(20) << 0.57 * 100.0;
Solution
So it's better to round your number to the nearest integer instead of casting to int, which simply truncates the fractional part:
qDebug() << qRound(0.57 * 100.0);
Integers that fit in the mantissa have an exact representation in floating point, thus static_cast<qreal>(100.0) == 100 always holds; 100 can be written exactly as 100*2^0.
Rationals of the form m*2^-n (i.e. with a power-of-two denominator) also have an exact representation in floating point as long as the numerator fits in the mantissa, thus e.g. static_cast<qreal>(0.25*4) == 1 holds as long as your compiler doesn't use a brain-dead decimal-to-floating-point conversion function. When most compilers parse the code, they convert both 0.25 and 4 to a floating-point representation and then perform the multiplication to obtain the value of the constant expression.
But static_cast<qreal>(0.57) has no representation as m*2^-n with sufficiently small integers m, n, and is necessarily represented inexactly: it is stored as a value slightly less or slightly more than 0.57. Thus when you multiply it by 100, the result can be slightly less than 57, as it is in your case.
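A minimal sketch of the two cases just described, using plain doubles (qreal is a double on desktop platforms):
#include <iostream>

int main() {
    std::cout << std::boolalpha
              << (0.25 * 4 == 1.0)      << '\n'   // true:  0.25 is m*2^-n, survives exactly
              << (0.57 * 100.0 == 57.0) << '\n';  // false: 0.57 is not exactly representable
}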
The simplest fix is to avoid the roundtrip: store the opacity everywhere as an integer, and only convert from integer to floating point when changing the value. In other words, only ever use the setOpacity() method, and never use the opacity() method. Store the integer-valued opacity using the item's data attribute:
constexpr int kOpacity = 0;   // any unused key for QGraphicsItem::data()

void setOpacity(QGraphicsItem *item, int opacity) {
    item->setData(kOpacity, opacity);
    item->setOpacity(opacity / 100.0);
}

int getOpacity(QGraphicsItem *item) {
    auto data = item->data(kOpacity);
    if (!data.isNull())
        return data.toInt();
    // Fall back to the floating-point opacity once, rounding rather than
    // truncating, then cache the integer so later reads never round-trip again.
    int opacity = qRound(item->opacity() * 100.0);
    setOpacity(item, opacity);
    return opacity;
}
I found a temporary solution but I am not happy with that.
qDebug() << "qreal" << (int)(qreal)(0.57 * 100.0);
qDebug() << "double" << (int)(double)(0.57 * 100.0);
qDebug() << "float" << (int)(float)(0.57 * 100.0);
Output:
qreal 56
double 56
float 57
Background: I have some elements in a record, where an element can be a float, unsigned int or unsigned long long. So I thought to use float as a common type to return from a function that reads those elements.
However, I am seeing this strange behaviour when converting from unsigned int to float: on printing, the value has changed. How can I avoid it? Should I not return float from this function?
#include <iostream>
#include <limits>
using namespace std;

int main()
{
    unsigned int myU = numeric_limits<unsigned int>::max();
    cout << " myU is " << myU << '\n';                                // correct
    float myF = (float) myU;
    cout << " back To Long " << (unsigned long long) myF << '\n';     // error?
    cout << " back To unsigned int " << (unsigned int) myF << '\n';   // error?
    cout << " to Float Without Fixed " << (float) myU << '\n';        // not clear, so have to use fixed
    cout << " to Float With Fixed " << fixed << (float) myU << '\n';  // error?
    cout << " difference " << myF - myU << '\n';                      // error?
    cout << " myU+32 " << myU + 32 << '\n';                           // -1+32=31 ==> understandable
}
Output with gcc 4.6.3:
myU is 4294967295
back To Long 4294967296
back To unsigned int 0
to Float Without Fixed 4.29497e+09
to Float With Fixed 4294967296.000000
difference 1.000000
myU+32 31
The number 4294967295 in float (32-bit IEEE 754) is represented as follows:
sign   exponent   mantissa
 0     10011111   00000000000000000000000
(+1)   (2^32)     (1.0)
The rule for converting it back to an integer (or unsigned long long in this case) is:
sign * 2^exponent * mantissa
and the result is 4294967296, which fits comfortably in an unsigned long long but is too big to fit in an unsigned int, so you get 0 for the unsigned int conversion (strictly, that out-of-range conversion is undefined behaviour; 0 is simply what you observe here).
Note that the problem is the limited precision float has for large numbers: for example, 4294967295 and 4294967200 are stored as exactly the same bits when converted to float.
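A minimal sketch of that collapse, using the two values above (near 2^32, consecutive representable floats are 256 apart):
#include <iostream>

int main() {
    float a = 4294967295.0f;   // 2^32 - 1
    float b = 4294967200.0f;   // 2^32 - 96
    std::cout << std::boolalpha << (a == b) << '\n';   // true: both round to 4294967296.0f
}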
The main issue you are seeing is that a floating-point type only provides a limited number of bits for its fraction part, which is natural since it can only hold so much information.
When you convert from unsigned int to float, your number needs more bits than the fraction part can hold, so some precision is lost. When you then convert back to an integer format, the result can differ: as unsigned long long it comes back one bigger, while the conversion back to unsigned int is out of range, which is why you see 0.
I am trying to convert a value read from Modbus.
The device shows "-1.0"; the returned value is 65535 (uint16).
I now want to convert this value back to a double.
I have tried different casts, but it always gives me 65535.00 :(
How do I convert negative uint values to double?
typedef unsigned short uint16;

int main() {
    double dRmSP = -1.0;                           // -1.0000, ok
    uint16 tSP = static_cast<uint16>(dRmSP);       // = 65535, ok
    // retour (back-conversion)
    double _dRmSP = static_cast<double>(tSP);      // = 65535.0000, why??
    // another try
    double _dRmSP_ = static_cast<double>(static_cast<int>(tSP)); // = 65535.0000, why??
    return 0;
}
You're taking the uint16 value 65535 and turning it into a double. This is 65535.0.
There is no other valid expectation.
The variable tSP does not "remember" that its value originally came from a double of value -1.0. tSP is the unsigned integer value 65535; period.
How do I convert negative uint values to double?
There are no "negative uint values". The "u" stands for unsigned which means negative values are not in the domain of values of that type.
If you wish to use dRmSP then use dRmSP, not some other variable with a different type and value.
Negative unsigned values, by definition, do not exist, so you can't convert one to anything.
Your actual situation is that, in getting data from your device, the value -1.0 is converted to an unsigned value first. Since -1.0 is outside the range of values that an unsigned type can represent, the conversion behaves like modulo arithmetic, which is effectively what happens when the device packs -1 into a 16-bit register.
The way this works, for a negative input value (like -1.0) and an unsigned variable with maximum value 65535 (a 16-bit unsigned), is to keep adding 65536 = 65535 + 1 until a result between 0 and 65535 is obtained. For -1.0 this produces 65535.0, and converting that value to an unsigned therefore gives 65535.
That explains why you are getting a value of 65535 when your device displays -1.0.
What you are trying to do with the "retour" is reverse that process. Converting the unsigned value to a double is only the first step: it turns 65535 into 65535.0, since a double can represent such values directly (within the limits of floating-point precision).
The next step, which you are not performing, requires knowing the minimum (or maximum) value your device actually supports, which you need to get from its documentation. For example, if the minimum value your device can represent is -100.0 (equivalently, the maximum is 65435.0), then you reverse the wrap-around: keep subtracting 65536.0 until the result lies between -100.0 and 65435.0.
In code, this might be done by:
double dRmSP = -1.0;                         // -1.0000, ok
uint16 tSP = static_cast<uint16>(dRmSP);     // = 65535, ok
// retour (back-conversion)
double back = static_cast<double>(tSP);      // = 65535.0000, as described above
while (back > 65435.0) back -= 65536.0;      // voila! -1.0 obtained
First of all, there are no negative unsigned int values. Unsigned means there is no sign bit.
What you did was:
uint16 t1(-1.0); // wraps around to positive 65535
auto t2 = static_cast<double>(t1); // turns 65535 to 65535.0 (no wrapping)
If you want this to work for negative values, use an int or a comparable signed integral type. But if you do, remember that you lose one bit of range to the sign (if you use an int16).
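If the Modbus register actually carries a signed 16-bit (two's-complement) value, the usual fix is to reinterpret the raw uint16 as int16 before widening to double. A minimal sketch (the assumption that the register is signed and unscaled is mine):
#include <cstdint>
#include <iostream>

double registerToDouble(std::uint16_t raw) {
    // On two's-complement platforms (guaranteed since C++20) 65535 maps to -1.
    return static_cast<double>(static_cast<std::int16_t>(raw));
}

int main() {
    std::cout << registerToDouble(65535) << '\n';   // prints -1
}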
If I have the following code:
long lSecondsSum = 8039;
double dNumDays = lSecondsSum / (24 * 3600);
I expect to get 0.093044, but for some reason I am getting dNumDays = 0.0000000000.
However, if I write the code as follows:
long lSecondsSum = 8039;
double dNumDays = lSecondsSum/24;
dNumDays = dNumDays/3600;
then I get the correct dNumDays = 0.092777777777777778.
How do I avoid all these floating point errors?
lSecondsSum is a long and 24 * 3600 is an int, so 8039 / 86400 is done in integer arithmetic and yields 0.
If you make the divisor a double, you will get the correct result:
double dNumDays = lSecondsSum / (24 * 3600.0);
Or just:
double dNumDays = lSecondsSum / 24.0 / 3600.0;
In your first code snippet you are getting zero because all the math is done with integers and only then converted to double by the assignment. You want to do all the math in double precision, e.g.
long lSecondsSum = 8039;
double dNumDays = lSecondsSum / (24.0 * 3600.0);
Your second code snippet works because the third line is done in double precision, however the second line is not and you may be expecting it to be, so watch out for that.
The reason it works like this is that long * long or long / long produces a long, not a double, even if you assign the result to a double; hence the result of your math was 0, and that zero was then assigned to your double. However, long / double is done in double precision and gives you a double back, which is what you want. Essentially, be aware of whether your calculations are being done in integer math or in double precision, otherwise you'll get caught out.
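A minimal sketch putting both behaviours side by side (values taken from the question):
#include <iostream>

int main() {
    long lSecondsSum = 8039;
    std::cout << lSecondsSum / (24 * 3600)   << '\n';   // long / int    -> integer math: prints 0
    std::cout << lSecondsSum / (24 * 3600.0) << '\n';   // long / double -> double math:  prints ~0.093044
}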
I am writing a piece of code in which I have to convert from double to float values. I am using boost::numeric_cast to do this conversion, which will alert me to any overflow/underflow. However, I am also interested in knowing whether the conversion resulted in some loss of precision.
For example
double source = 1988.1012;
float dest = numeric_cast<float>(source);
Produces dest which has value 1988.1
Is there any way in which I can detect this kind of precision loss/rounding?
You could cast the float back to a double and compare this double to the original - that should give you a fair indication as to whether there was a loss of precision.
float dest = numeric_cast<float>(source);
double residual = source - numeric_cast<double>(dest);
Hence, residual contains the "loss" you're looking for.
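If you want the check to fail loudly, here is a minimal sketch of wrapping the round trip into a "lossless or throw" helper (the name checked_narrow and the use of std::runtime_error are my choices, not part of Boost):
#include <boost/numeric/conversion/cast.hpp>
#include <stdexcept>

float checked_narrow(double source) {
    float dest = boost::numeric_cast<float>(source);   // still throws on overflow/underflow
    if (static_cast<double>(dest) != source)           // exact round trip means no loss
        throw std::runtime_error("precision lost converting double to float");
    return dest;
}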
See the usual references on single-precision and double-precision floats. First of all, a float has 8 bits for the exponent vs. 11 for a double, so anything bigger in magnitude than about 2^128 (roughly 3.4*10^38) overflows a float, and anything smaller than about 2^-126 falls below its normal range, as you mentioned. For the digits themselves, a float has 23 bits vs. 52 bits for a double, so obviously you have many more digits of precision with a double than with a float.
Say you have a number like 1.1123. This number may not actually be stored as exactly 1.1123, because the digits of a floating-point number add up as binary fractions. For example, if the bits in the mantissa were 11001, the value would be formed by 1 (implicit) + 1 * 1/2 + 1 * 1/4 + 0 * 1/8 + 0 * 1/16 + 1 * 1/32 + 0 * (1/64 + 1/128 + ...). The exact value can only be encoded if it happens to be a sum of such fractions, which is rare for typical decimal inputs, so there will almost always be some precision loss.
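A minimal sketch showing that effect for the 1.1123 example (printing more digits than a float actually carries reveals the stored value):
#include <iomanip>
#include <iostream>

int main() {
    float f = 1.1123f;
    // The nearest float is slightly different from the decimal 1.1123 that was written.
    std::cout << std::setprecision(12) << f << '\n';
}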
You're going to have a certain level of precision loss, as per Dave's answer. If, however, you want to focus on quantifying it and raising an exception when it exceeds a certain number, you will have to open up the floating point number itself and parse out the mantissa & exponent, then do some analysis to determine if you've exceeded your tolerance.
But the good news is that it's usually the standard IEEE 754 floating-point format. :-)
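A minimal sketch of that approach, assuming IEEE 754 floats: use std::frexp to get the binary exponent, derive the float spacing (ulp) at that magnitude, and compare the residual against whatever tolerance you pick (here half an ulp, the best a correctly rounded cast can achieve):
#include <cmath>
#include <iostream>

int main() {
    double source = 1988.1012;
    float  dest   = static_cast<float>(source);

    int exp = 0;
    std::frexp(source, &exp);                      // source = m * 2^exp with 0.5 <= |m| < 1
    double floatUlp = std::ldexp(1.0, exp - 24);   // spacing of floats at this magnitude
    double residual = std::fabs(source - static_cast<double>(dest));

    std::cout << "residual " << residual << ", float ulp here " << floatUlp << '\n';
    if (residual > 0.5 * floatUlp)
        std::cout << "more than the unavoidable rounding error was lost\n";
}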