I am trying to convert some JavaScript code to C++ to obtain a Julian date with 17 digits of precision. The JS code gives me this precision, but the equivalent C++ code does not produce a value with more than 7 digits. The 17-digit precision is essential because it is used to compute the altitude and azimuth of celestial bodies in real time with greater precision.
Here is the JS code.
function JulianDateFromUnixTime(t){
  // Not valid for dates before Oct 15, 1582
  return (t / 86400000) + 2440587.5;
}
function setJDToNow(){
  const date = new Date();
  const jd = JulianDateFromUnixTime(date.getTime());
  document.getElementById("jd").value = jd;
}
Calling this from the HTML below
<tr><td align=right>Julian Date:</td><td><input type=text id="jd" value="2459349.210248739"></td><td><input type=button value="Now" onclick='setJDToNow();'></td></tr>
gives the value 2459349.210248739.
Here is the C++ code
#include <chrono>
#include <cstdint>
#include <iostream>
uint64_t timeSinceEpochMillisec() {
  using namespace std::chrono;
  return duration_cast<milliseconds>(system_clock::now().time_since_epoch()).count();
}
uint64_t JulianDateFromUnixTime(uint64_t t){
  // Not valid for dates before Oct 15, 1582
  return (t / 86400000) + 2440587.5;
}
int main() {
  std::cout << JulianDateFromUnixTime(timeSinceEpochMillisec()) << std::endl;
  return 0;
}
This gives 2459848 as the value.
Question: How do I get 17 digits of precision?
Note: The version of GCC I am using is MSYS2-MINGW-64 GCC 12.1.0
At first look, I see three issues here:
Your Julian Date is a floating point number, so the result of your function should be double, not uint64_t, which is an unsigned integer type;
You want t / 86400000 to be a floating-point division, not an integer division, which discards the fractional part. There are several ways to do that; the easiest is to divide by a double, i.e. t / 86400000.0. Some may consider that too subtle and thus prefer double(t) / 86400000.0 or even static_cast<double>(t) / 86400000.0.
Even if you return a double, the default display format won't be the one you desire. You should set it with std::fixed and std::setprecision.
Edit: I forgot that the most common double format has 53 bits of precision, so about 15 decimal digits. You won't easily and portably get more than that (some implementations give long double 18 or more decimal digits of precision, others make it the same representation as double). AFAIK, JavaScript uses the same format as double for all its numbers, so you probably aren't losing anything in the conversion.
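Putting those three fixes together, a corrected sketch of the program (keeping the structure of the original) might look like this:
#include <chrono>
#include <cstdint>
#include <iomanip>
#include <iostream>

// Milliseconds since the Unix epoch, as in the original program.
uint64_t timeSinceEpochMillisec() {
  using namespace std::chrono;
  return duration_cast<milliseconds>(system_clock::now().time_since_epoch()).count();
}

// Returns a double and divides by a double, so the fractional day survives.
// Not valid for dates before Oct 15, 1582.
double JulianDateFromUnixTime(uint64_t t) {
  return (static_cast<double>(t) / 86400000.0) + 2440587.5;
}

int main() {
  // std::fixed + std::setprecision(9) prints 9 digits after the decimal point,
  // i.e. roughly 16 significant digits in total, which is about all a double holds.
  std::cout << std::fixed << std::setprecision(9)
            << JulianDateFromUnixTime(timeSinceEpochMillisec()) << '\n';
}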
Related
I want to convert a string containing a floating-point number into a float or double value with all significant figures preserved. Please help me fix this.
#include <sstream>
#include <iostream>
#include <string>
using namespace std;
int main()
{
    string q = "23.3453535";
    float f;
    istringstream(q) >> f;
    f = 1.0 * f; // just an example; I want to process this float value further
    cout << f;
}
/* Output is:
23.3454
but I want this:
23.3453535
*/
If you want to control the precision, include <iomanip> and use std::cout << std::setprecision(17); to set the number of digits you want. Also note that float has only about 6 significant decimal digits of precision, whereas double has about 15. So whenever your string has more than 6 significant digits, there is a possibility of losing precision if you store it in a float.
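A sketch of the program with both of those changes applied (double instead of float, plus <iomanip>):
#include <iomanip>
#include <iostream>
#include <sstream>
#include <string>
using namespace std;

int main()
{
    string q = "23.3453535";
    double d = 0.0;                           // double instead of float
    istringstream(q) >> d;
    cout << setprecision(9) << d << '\n';     // 23.3453535
    cout << setprecision(17) << d << '\n';    // the full stored value; the last digit or two may differ from the input
}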
There are TWO issues here:
You are running into the default formatting of floating point output on std::ostream, which is 6 digits. Use std::setprecision(n) to increase the precision to cover as many decimal places as you wish.
You are trying to get more precision out of a float than it can support. "23.3453535" has 9 significant digits; at roughly 3.3 bits per decimal digit, approximately 30 bits are needed to store this number with every digit preserved (exactly, you need ceil(log(233453535)/log(2)), which is 28). A float only has 23 stored bits for the mantissa, so while the value is in the representable range, some of the last digits will disappear. You can fix this by using double instead of float; with float, you will never get 9 valid decimal digits.
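A small sketch that makes the difference visible (the exact digits depend on the implementation, but the pattern below is typical for IEEE754 float and double):
#include <iomanip>
#include <iostream>

int main()
{
    float  f = 23.3453535f;
    double d = 23.3453535;
    std::cout << std::setprecision(9);
    std::cout << f << '\n';   // typically 23.3453541 (float runs out of mantissa bits)
    std::cout << d << '\n';   // 23.3453535 (double easily holds 9 significant digits)
}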
Below is the code I've tested in both a 64-bit and a 32-bit environment. The result is off by one each time: the expected result is 1180000000, but the actual result is 1179999999. I'm not sure exactly why, and I was hoping someone could educate me:
#include <stdint.h>
#include <iostream>
using namespace std;
int main() {
    double odds = 1.18;
    int64_t st = 1000000000;
    int64_t res = st * odds;
    cout << "result: " << res << endl;
    return 1;
}
I appreciate any feedback.
1.18, or 118/100, can't be exactly represented in binary; its binary expansion has repeating digits, the same way 1/3 does when written in decimal.
So let's go over a similar case in decimal, let's calculate (1 / 3) × 30000, which of course should be 10000:
odds = 1 / 3 and st = 30000
Since computers have only a limited precision we have to truncate this number to a limited number of decimals, let's say 6, so:
odds = 0.333333
0.333333 × 30000 = 9999.99. The cast (which in your program is implicit) truncates this number to 9999.
There is no 100% reliable way to work around this. float and double just have only limited precision. Dealing with this is a hard problem.
Your program contains an implicit cast from double to an integer on the line int64_t res = st * odds;. Many compilers will warn you about this. It can be the source of bugs of the type you are describing. This cast, which can be explicitly written as (int64_t) some_double, rounds the number towards zero.
An alternative is rounding to the nearest integer with round(some_double); in this case, that gives the expected result.
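A minimal sketch of that fix; std::llround from <cmath> is used here as a convenient variant of round() that returns an integer directly:
#include <cmath>
#include <cstdint>
#include <iostream>

int main() {
    double odds = 1.18;
    int64_t st = 1000000000;

    int64_t truncated = static_cast<int64_t>(st * odds);  // rounds toward zero; may come out as 1179999999
    int64_t rounded   = std::llround(st * odds);          // rounds to nearest: 1180000000

    std::cout << "truncated: " << truncated << '\n';
    std::cout << "rounded:   " << rounded << '\n';
    return 0;
}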
First of all - 1.18 is not exactly representable in double. Mathematically the result of:
double odds = 1.18;
is 1.17999999999999993782751062099 (according to an online calculator).
So, mathematically, odds * st is 1179999999.99999993782751062099.
But in C++, odds * st is an expression with type double. So your compiler has two options for implementing this:
Do the computation in double precision
Do the computation in higher precision and then round the result to double
Apparently, doing the computation in double precision in IEEE754 results in exactly 1180000000.
However, doing it in long double precision produces something more like 1179999999.99999993782751062099
Converting this to double is now implementation-defined as to whether it selects the next-highest or next-lowest value, but I believe it is typical for the next-lowest to be selected.
Then converting this next-lowest result to integer will truncate the fractional part.
There is an interesting blog post here where the author describes the behaviour of GCC:
It uses long double intermediate precision for x86 code (due to the x87 FPU's long double registers)
It uses actual types for x64 code (because the SSE/SSE2 FPU supports this more naturally)
According to the C++11 standard you should be able to inspect which intermediate precision is being used by outputting FLT_EVAL_METHOD from <cfloat>. 0 would mean actual values, 2 would mean long double is being used.
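A quick sketch to check which intermediate precision your compiler uses (the value depends on the target architecture and the compiler flags):
#include <cfloat>
#include <iostream>

int main() {
    // 0: expressions are evaluated in the precision of their types (e.g. SSE2 on x64)
    // 2: expressions are evaluated in long double precision (e.g. x87 on 32-bit x86)
    std::cout << "FLT_EVAL_METHOD = " << FLT_EVAL_METHOD << '\n';
}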
Looking at the name and the Boost Multiprecision documentation I would expect that the cpp_dec_float_50 datatype has a precision of 50 decimal digits:
Using typedef cpp_dec_float_50 hides the complexity of multiprecision to allow us to define variables with 50 decimal digit precision just like built-in double.
(Although I don't understand the comparison with double - I mean double usually implements binary floating point arithmetic, not decimal floating point arithmetic.)
This is also matched by the output of the following code (except for the double part, but that is expected):
cout << std::numeric_limits<boost::multiprecision::cpp_dec_float_50>::digits10
<< '\n';
// -> 50
cout << std::numeric_limits<double>::digits10 << '\n';
// -> 15
But why does the following code print 74 digits then?
#include <boost/multiprecision/cpp_dec_float.hpp>
#include <iostream>
#include <string>
int main() {
    // "12" repeated 50 times, decimal point after the 10th digit
    boost::multiprecision::cpp_dec_float_50 d("1212121212.121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212");
    std::cout << d.convert_to<std::string>() << '\n';
    // Expected output: 50 digits
    // Actual output: 74 digits
    // -> 1212121212.1212121212121212121212121212121212121212121212121212121212121212
}
The str() member function works as expected, e.g.
cout << d.str(50) << '\n';
prints only 50 digits, and it is documented as:
Returns the number formatted as a string, with at least precision digits, and in scientific format if scientific is true.
What you are seeing is likely related to the guard digits used internally. The reason is that even decimal representation has limited accuracy (think ("100.0" / "3.0") * "3.0").
In order to keep rounding errors during calculations reasonable, the stored precision is higher than the guaranteed precision.
In summary: always be specific about your expected precision. In your example d.str(50) would do.
(In realistic scenarios, you should want to track the precision of your inputs and deduce the precision on your outputs. Most often, people just reserve surplus precision and only print the part they're interested in)
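A self-contained sketch of the suggested fix, asking str() for exactly the precision you care about (a shorter literal is used here for brevity):
#include <boost/multiprecision/cpp_dec_float.hpp>
#include <iostream>

int main() {
    boost::multiprecision::cpp_dec_float_50 d("1212121212.1212121212121212121212121212121212121212");
    // Request 50 digits explicitly instead of relying on the default conversion,
    // which may also print the internal guard digits.
    std::cout << d.str(50) << '\n';
}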
I was wondering: how long, in number of characters, would the longest double printed using fprintf be? My guess of 12 seems to be wrong.
Thanks in advance.
Twelve would be a bit of an underestimate. On my machine, the following results in a 317 character long string:
#include <limits>
#include <cstdio>
#include <cstring>
int main()
{
    double d = -std::numeric_limits<double>::max();
    char str[2048] = "";
    std::sprintf(str, "%f", d);
    std::size_t length = std::strlen(str);  // 317 characters on my machine
}
Using %e results in a 14 character long string.
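If you need the exact length for a particular value rather than a worst-case guess, one option (not from the original answer, but standard since C99/C++11) is to ask snprintf how many characters it would write:
#include <cstdio>
#include <limits>

int main()
{
    double d = -std::numeric_limits<double>::max();
    // With a null buffer and size 0, snprintf writes nothing and just
    // returns the number of characters the formatted value would need.
    int needed = std::snprintf(nullptr, 0, "%f", d);
    std::printf("%%f would need %d characters\n", needed);
}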
Who knows. The Standard doesn't say how many digits of precision a double provides other than saying it (3.9.1.8) "provides at least as much precision as float," so you don't really know how many characters you'll need to sprintf an arbitrary value. Even if you did know how many digits your implementation provided, there's still the question of exponential formatting, etc.
But there's a MUCH bigger question here. Why the heck would you care? I'm guessing it's because you're trying to write something like this:
double d = ...;
int MAGIC_NUMBER = ...;
char buffer[MAGIC_NUMBER];
sprintf(buffer, "%f", d);
This is a bad way to do this, precisely because you don't know how big MAGIC_NUMBER should be. You can pick something that should be big enough, like 14 or 128k, but then the number you picked is arbitrary, not based on anything but a guess that it will be big enough. Numbers like MAGIC_NUMBER are, not surprisingly, called magic numbers. Stay away from them. They will make you cry one day.
Instead, there are a lot of ways to do this string formatting without having to care about buffer sizes, digits of precision, etc., that let you just get on with the business of programming. Streams are one:
#include <sstream>
double d = ...;
stringstream ss;
ss << d;
string s = ss.str();
cout << s;
...Boost.Format is another:
#include <boost/format.hpp>
double d = ... ;
string s = (boost::format("%1%") % d).str();
cout << s;
It's defined in <limits>:
std::cout << std::numeric_limits<double>::digits << "\n";
std::cout << std::numeric_limits<double>::digits10 << "\n";
Definition:
digits: number of digits (in radix base) in the mantissa
Equivalent to FLT_MANT_DIG, DBL_MANT_DIG or LDBL_MANT_DIG.
digits10: Number of digits (in decimal base) that can be represented without change.
Equivalent to FLT_DIG, DBL_DIG or LDBL_DIG for floating types.
See: http://www.cplusplus.com/reference/std/limits/numeric_limits/
Of course when you print stuff to a stream you can use the stream manipulators to limit the size of the output.
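A self-contained version of that query (the comments show the values you would typically see for an IEEE754 double):
#include <iostream>
#include <limits>

int main() {
    std::cout << std::numeric_limits<double>::digits << "\n";    // 53: mantissa bits
    std::cout << std::numeric_limits<double>::digits10 << "\n";  // 15: decimal digits held without change
}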
You can decide it yourself:
double a=1.1111111111111111111111111111111111111111111111111;
printf("%1.15lf\n", a);
return 0;
./a.out
1.111111111111111
You can print more than 12 characters.
If your machine uses IEEE754 doubles (which is fairly widespread now), then the binary precision is 53 bits; The decimal equivalent is approximately 15.95 (calculated via logarithmic conversion), so you can usually rely on 15 decimal digits of precision.
Consult Double precision floating-point format for a brief discussion.
For a much more in-depth study, the canonical paper is What Every Computer Scientist Should Know About Floating-Point Arithmetic. It gets cited here whenever binary floating point discussions pop up, and is worth a weekend of careful reading.
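The 15.95 figure is simply the bit count converted to decimal digits; a tiny sketch reproducing that arithmetic:
#include <cmath>
#include <cstdio>

int main() {
    // 53 mantissa bits expressed as decimal digits: 53 * log10(2) is about 15.95
    std::printf("%.2f\n", 53 * std::log10(2.0));
}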
What is the precise meaning of numeric_limits<double>::digits10?
Some other related questions on Stack Overflow made me think it is the maximum precision of a double, but:
The following prototype starts working (success is true) when precision reaches 17 (== 2 + numeric_limits<double>::digits10).
With STLport, readDouble == infinity at the end; with Microsoft's STL, readDouble == 0.0.
Does this prototype have any kind of meaning :) ?
Here is the prototype:
#include <float.h>
#include <limits>
#include <math.h>
#include <iostream>
#include <iomanip>
#include <sstream>
#include <string>
int main(int argc, const char* argv[]) {
    std::ostringstream os;
    //int digit10 = std::numeric_limits<double>::digits10; // == 15
    //int digit = std::numeric_limits<double>::digits;     // == 53
    os << std::setprecision(17);
    os << DBL_MAX;
    std::cout << os.str();
    std::stringbuf sb(os.str());
    std::istream is(&sb);
    double readDouble = 0.0;
    is >> readDouble;
    bool success = fabs(DBL_MAX - readDouble) < 0.1;
}
numeric_limits::digits10 is the number of decimal digits that can be held without loss.
For example numeric_limits<unsigned char>::digits10 is 2. This means that an unsigned char can hold 0..99 without loss. If it were 3 it could hold 0..999, but as we all know it can only hold 0..255.
This manual page has an example for floating point numbers, which (when shortened) shows that
cout << numeric_limits<float>::digits10 <<endl;
float f = (float)99999999; // 8 digits
cout.precision ( 10 );
cout << "The float is; " << f << endl;
prints
6
The float is; 100000000
numeric_limits::digits10 specifies the number of significant decimal digits you can represent without a loss of precision. Each type will have a different number of representable decimal digits.
See this very readable paper:
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2006/n2005.pdf
Although DBL_DIG (= std::numeric_limits<double>::digits10 = 15 digits) is the minimum guaranteed number of significant digits for a double, the DBL_MAXDIG10 value (= 17 digits) proposed in the paper has the useful properties:
Of being the minimum number of digits needed to survive a round-trip to string form and back and get the same double in the end.
Of being the minimum number of digits needed to convert the double to string form and get different strings whenever (A != B) holds in code.
With 16 or fewer digits, you can get doubles that are not equal in code but that convert to the same string form; the values compare as different in the code, yet a log file will show them as identical - very confusing and hard to debug!
When you compare values (e.g. by manually diffing two log files), remember that digits 1-15 are ALWAYS valid, but differences in the 16th and 17th digits MAY be junk.
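As an illustration of that last point (a sketch; the exact strings printed can vary slightly between implementations), two adjacent doubles can look identical at 16 digits but differ at 17:
#include <cmath>
#include <iomanip>
#include <iostream>

int main() {
    double a = 0.1;
    double b = std::nextafter(a, 1.0);  // the very next representable double above a

    std::cout << std::setprecision(16) << a << " vs " << b << '\n';
    // typically: 0.1 vs 0.1   (a log file would show them as identical)

    std::cout << std::setprecision(17) << a << " vs " << b << '\n';
    // typically: 0.10000000000000001 vs 0.10000000000000002
}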
The '53' is the bit width of the significand that your type (double) holds. The '15' is the number of decimal digits that can be represented safely with that kind of precision.
digits10 is for conversion: string → double → string
max_digits10 is for conversion: double → string → double
In your program, you are using the conversion (double → string → double). You should use max_digits10 instead of digits10.
For more details about digits10 and max_digits10, you can read:
difference explained by stackoverflow
digits10
max_digits10
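Applied to the prototype above, a sketch of the fix is to ask the stream for max_digits10 rather than a hard-coded precision (on a conforming implementation, the round trip should then succeed):
#include <cfloat>
#include <iomanip>
#include <iostream>
#include <limits>
#include <sstream>

int main() {
    std::ostringstream os;
    // max_digits10 (17 for an IEEE754 double) is enough for the
    // double -> string -> double round trip to be exact.
    os << std::setprecision(std::numeric_limits<double>::max_digits10);
    os << DBL_MAX;

    std::istringstream is(os.str());
    double readDouble = 0.0;
    is >> readDouble;

    std::cout << std::boolalpha << (readDouble == DBL_MAX) << '\n';  // expected: true
}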