I have written a program that works with 3D coordinates (i.e. x, y, z).
The input data for my program looked like this:
50903.85 21274.97 15.03
50903.57 21274.96 15.08
50903.33 21274.95 15.17
and I got output with some additional columns, so the same x, y, z values appear in my output file:
50903.85 21274.97 15.03
50903.57 21274.96 15.08
50903.33 21274.95 15.17
So my program seems to work properly.
Then I used another data set, with more digits than the previous data:
512330.98 5403752.71 330.39
512331.01 5403754.18 329.44
512331.06 5403755.59 329.56
and my output looked like this:
512331 5.40375e+006 330.39
512331 5.40375e+006 329.44
512331 5.40376e+006 329.56
Here I am not able to get the real values, and the x values are also rounded. I can't figure out what the reason could be.
In my program I used "double" as the type for the x, y, z variables. So I would like to know: what is the maximum numerical value that can be stored in a double?
And if someone needs to work with very long values, what would be the appropriate type?
The numbers aren't changing; you are just seeing a different notation there. Perhaps you are using something other than %f to format your doubles? See the printf format parameters.
Better yet, check out this StackOverflow question: How to avoid scientific notation for large numbers?
Those numbers, such as 5.40375e+006, are another way of representing doubles. When they get sufficiently large, they are by default printed in scientific notation. 5.40375e+006 is 5.40375 * 10^6, or 5403750.
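For illustration, a minimal sketch (not the asker's actual code) showing the same value in the default notation, in fixed notation, and with printf's %f:
#include <cstdio>
#include <iostream>
#include <iomanip>

int main() {
    double y = 5403752.71;
    std::cout << y << '\n';                                        // default 6 significant digits: 5.40375e+06 (e+006 on some compilers)
    std::cout << std::fixed << std::setprecision(2) << y << '\n';  // 5403752.71
    std::printf("%f\n", y);                                        // 5403752.710000
    return 0;
}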
Have a look at this http://www.cplusplus.com/reference/clibrary/cfloat/
Doubles have about 16 decimals of accuracy (source), so you shouldn't have any problem here, unless you're printing floats.
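On the maximum-value part of the question, the constants in <cfloat> give the answer; a quick sketch:
#include <cfloat>
#include <cstdio>

int main() {
    std::printf("DBL_MAX = %g\n", DBL_MAX);  // largest finite double, about 1.79769e+308
    std::printf("DBL_DIG = %d\n", DBL_DIG);  // 15 decimal digits are guaranteed to be preserved
    return 0;
}
So coordinates like 512331.06 fit in a double with room to spare; only the default output precision of 6 significant digits makes them look rounded.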
I know about glm::to_string, but I cannot find a way to set the precision of the string conversion. A few issues come up because of this. If I want to serialize glm structures, I won't be able to store the full value in text. More recently I ran into an issue where I had a bug with a value on the order of -10^-18, but glm insisted on showing it as -0.000, requiring me to manually print out the individual values or create my own printing function to actually see the true value (and I'd really rather not have to do that).
Does glm provide a means to set the precision of the glm::vecN to std::string conversion, or to use std::setprecision?
Note: the answers on this question do not present a way to set the precision of glm::vecN using glm's built-in facilities.
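Assuming, as the note above says, that glm's built-in facilities don't offer this, the usual workaround is a small helper that formats the components yourself. A minimal sketch for glm::vec3 (the helper name vec3_to_string is hypothetical, not part of glm):
#include <glm/glm.hpp>
#include <iomanip>
#include <sstream>
#include <string>

// Hypothetical helper: format a glm::vec3 with caller-chosen precision.
// std::scientific keeps tiny values like -1e-18 visible instead of collapsing to -0.000.
std::string vec3_to_string(const glm::vec3& v, int precision) {
    std::ostringstream out;
    out << std::scientific << std::setprecision(precision)
        << "vec3(" << v.x << ", " << v.y << ", " << v.z << ")";
    return out.str();
}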
I receive a JSON Array from a server which looks like: [0.00015099, 1, -672.41163]
These values are orderbook entries. If I try to parse all values as double, my price differs slightly from the price in the JSON array. It is clear to me that this happens because of the double conversion, but how do I program around something like this?
I also need these values for calculations, comparisons, etc.
I am using Qt5 and C++.
Any hint?
Well, you should use some decimal type if you want to deal with money safely. Unfortunately, Qt does not have a decimal type for some reason. So you may end up relying on rounding rules/conventions if you have no other choice. Otherwise you'd better write your own implementation or use an existing solution like qdecimal.
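If pulling in a decimal library isn't an option, one common workaround is fixed point: keep each price as a scaled 64-bit integer and convert from the number's text form. A minimal sketch, assuming the value is still available as a string and a scale of 10^8 (the name parse_price and the scale are assumptions, not part of Qt):
#include <cstdint>
#include <string>

// Convert a decimal string such as "0.00015099" to price * 10^scale_digits,
// so arithmetic and comparisons stay exact in integer math.
// Assumes at most scale_digits fractional digits in the input.
std::int64_t parse_price(const std::string& text, int scale_digits = 8) {
    const auto dot = text.find('.');
    const std::string digits = (dot == std::string::npos)
        ? text
        : text.substr(0, dot) + text.substr(dot + 1);                 // drop the decimal point
    const int frac_digits = (dot == std::string::npos)
        ? 0
        : static_cast<int>(text.size() - dot - 1);
    std::int64_t value = std::stoll(digits);                          // sign handled by stoll
    for (int i = frac_digits; i < scale_digits; ++i)
        value *= 10;                                                  // pad to exactly scale_digits fractional digits
    return value;
}
With this, parse_price("0.00015099") gives 15099 and parse_price("-672.41163") gives -67241163000, and those integers add and compare exactly.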
I have been using C++ for quite some time by now and literally took things for granted.
Recently, I asked myself how the compiler can always return accurate values when I use what I believe are out-of-range values in a calculation.
I understand the 2^n (n = number of bits) concept.
For example, if I add two ints with values I assumed were out of range, such as 10e6, I would expect the result to be wrong, because the bits overflow and ultimately represent the wrong integer. But I never see this happen.
Can anyone shed some light on this?
Thanks.
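For what it's worth, a minimal sketch of why 10e6 never looks wrong: a 32-bit int holds values up to INT_MAX = 2147483647, so 10,000,000 (and even the sum of two of them) is nowhere near the limit; genuine signed overflow is undefined behavior, not a tidy wrong answer:
#include <climits>
#include <cstdio>

int main() {
    int a = 10000000;                       // 10e6: well inside INT_MAX (2147483647 for a 32-bit int)
    int b = 10000000;
    std::printf("%d\n", a + b);             // 20000000 -- still in range, so the result is exact
    // int bad = INT_MAX + 1;               // this would be real overflow: undefined behavior in C++
    return 0;
}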
I'm currently working on a program to return a jumbled image to its original state. To do this I'm using the sum of squared differences (SSD) algorithm.
Due to the nature of this algorithm it's possible that the difference between two pixels could be negative. What data type should I be using to correctly hold a negative value?
I'm currently using a double, but I often find that the resulting "score" is an "out of bounds number" (by this I mean one with letters in it), so I know that the error is occurring at this location.
Many thanks
Could you post the code? Also, what Eric Finn says is correct. A double is able to hold negative values, so this should work for you.
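For reference, a minimal SSD sketch assuming 8-bit grayscale pixels; the usual pitfall is subtracting unsigned bytes, which wraps around instead of going negative, so cast to int first (the squared terms are then non-negative, and an integer accumulator can't produce letters in the output):
#include <cstdint>
#include <cstddef>

// Sum of squared differences between two equally sized 8-bit grayscale buffers.
long long ssd(const std::uint8_t* a, const std::uint8_t* b, std::size_t n) {
    long long score = 0;
    for (std::size_t i = 0; i < n; ++i) {
        const int diff = static_cast<int>(a[i]) - static_cast<int>(b[i]);  // may be negative
        score += static_cast<long long>(diff) * diff;                      // square is always >= 0
    }
    return score;
}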
I wrote some parameters (all of type double) to a file for use in performing some complex computations. I write the parameters to the files like so:
refStatsOut << "SomeParam:" << value_of_type_double << endl;
where refStatsOut is an ofstream object. There are four such parameters, each of type double. What I see written to the file is different from the actual value (there is a loss of precision). As an example, if value_of_type_double had the value -28.07270379934792, then what I see written in the file is -28.0727.
Also, once these stats have been computed and written I run different programs that use these statistics. The files are read and the values are initially stored as std::strings and then converted to double via atof functions. This results in the values that I have shown above and ruins the computations further down.
My question is this:
1. Is there a way to increase the resolution with which one can write values (of type double and the like) to a file so as to NOT lose any precision?
2. Could this also be a problem of std::string to double conversion with atof? If so, what other function could I use to solve this?
P.S: Please let me know in case some of the details in this question are not clear. I will try to update them and provide more details.
You can use the std::setprecision manipulator, or the stream's precision() member function:
ofstream your_file;
your_file.precision(X);
The main difference between precision() and std::setprecision() is that precision(X) returns the previous precision (so you can restore it later), while the setprecision manipulator does not. So you can use precision() like this:
streamsize old_precision = your_file.precision(X);
// do whatever you want
// restore the old precision
your_file.precision(old_precision);
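As a concrete sketch of both points -- writing with enough digits to round-trip a double and reading it back with std::stod instead of atof (the file name stats.txt is just a placeholder; max_digits10 is 17 for double):
#include <fstream>
#include <iomanip>
#include <iostream>
#include <limits>
#include <string>

int main() {
    const double value = -28.07270379934792;

    std::ofstream refStatsOut("stats.txt");                                      // placeholder file name
    refStatsOut << std::setprecision(std::numeric_limits<double>::max_digits10)  // 17 digits: enough to round-trip
                << "SomeParam:" << value << '\n';
    refStatsOut.close();

    std::ifstream in("stats.txt");
    std::string label, number;
    std::getline(in, label, ':');                 // "SomeParam"
    std::getline(in, number);                     // the full-precision text of the double
    const double restored = std::stod(number);    // std::stod instead of atof
    std::cout << std::setprecision(17) << restored << '\n';
    return 0;
}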
A double is stored in 64 bits, so a cheap way to write one out exactly is to save its bit pattern in a 64-bit integer. Note that casting a double* to long* breaks strict aliasing (and long is not 64 bits on every platform); copying with memcpy into a fixed-width type is the safe way to do it:
#include <cstdint>
#include <cstring>

double value = 32985.932235;
std::uint64_t bits;                        // same size as the double
std::memcpy(&bits, &value, sizeof bits);   // copy the exact bit pattern, no aliasing issues
Just beware of the caveat that the saved value may not be interpreted the same way if loaded back on a different architecture.
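To restore it later, copy the bits back the same way (same caveat: the target architecture must use the same double representation):
std::uint64_t bits_from_storage = bits;                          // e.g. read back from wherever it was saved
double restored;
std::memcpy(&restored, &bits_from_storage, sizeof restored);     // reinterpret the bits as a double again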