Timestamp difference with two decimals in C++

I have two timestamps (in µs), each stored as a uint64_t.
My goal is to get the difference between these timestamps in ms, as a float with 2 decimals.
For example, I'd like the result to be 6.52 ms.
But I can't seem to get the values to cast correctly.
I've tried the following without any luck:
uint64_t diffus = ms.getT1 - ms.getT2;
float diff = static_cast<float>(diffus);
float diffMS = diff / 1000;
std::cout << diffMS << " ms" << std::endl;
I know this won't limit the float to two decimals, but I can't even get this far.
I seem to get the same result all the time, even though I vary T1 and T2 with
srand(time(NULL));
usleep((rand() % 25) * 1000);
The output keeps being:
1.84467e+16 ms
1.84467e+16 ms
1.84467e+16 ms
1.84467e+16 ms
What is happening, and what can I do? :-)
Best regards.

I assumed that ms.getT1 and ms.getT2 indicate that T1 is earlier in time than T2.
In that case you are computing a negative difference in unsigned arithmetic: the subtraction wraps around to a huge value near 2^64, and that wrapped value is what the float then faithfully represents.
The following tests confirm my assumption:
// Try to force diffus negative: 20 - 30 wraps around to 2^64 - 10.
uint64_t diffus = 20 - 30;
float diff = static_cast<float>(diffus);
float diffMS = diff / 1000;
std::cout << diffMS << " ms" << std::endl;
// Result: the wrapped-around value (2^64 - 10), cast to float and divided by 1000.
1.84467e+16 ms
// Force diffus to be a positive number.
uint64_t diffus = 30 - 20;
float diff = static_cast<float>(diffus);
float diffMS = diff / 1000;
std::cout << diffMS << " ms" << std::endl;
// Result of casting positive integer to float.
0.01 ms

A float is normally a 32-bit number, so consider the consequences for further applications of doing this:
uint64_t diffus = ms.getT1 - ms.getT2;
float diff = static_cast<float>(diffus);
A float's 24-bit significand cannot hold every 64-bit value, so large differences will lose precision; prefer double here.
On the other hand, a float can be displayed in several ways (scientific notation, for example), and that is only about how the number looks, not about the value it holds:
3.1
3.14
3.14159
could all be the same pi printed in different formats according to the needs of the application.
If your problem is only about the representation of the number, set the precision of the cout object with std::cout.precision:
std::cout << std::fixed;
std::cout.precision(2);
std::cout << diffMS << " ms" << std::endl;
Note that precision(2) on its own means two significant digits; combined with std::fixed it means two digits after the decimal point, which is what you want.

Related

How can I check for - and get a remainder using fmod (floats)?

My goal is to check if there is any remainder left when dividing 2 floats, and if there is, give that remainder back to the user.
Given the following code, I had expected that fmod(2, 0.2) would be 0, however, I get back 0.2. I read that this has to do with floating point problems. But is there any way this can be done properly?
#include <cmath>
#include <iostream>

int main() {
    float a = 2.0f;
    float b = 0.2f;
    float rem = std::fmod(a, b);
    if (rem > 0) {
        std::cout << "There is a remainder: " << rem << std::endl;
    } else {
        std::cout << "No remainder: " << rem << std::endl;
    }
}
Output:
There is a remainder: 0.2
Yes, your hunch is correct. std::fmod is computing
std::fmod(2.0f, 0.20000000298023223876953125f)
where the second argument is the closest IEEE 754 float (assuming your platform uses that format) to 0.2.
Luckily, the mathematical modulus scales with its operands ((k*a) mod (k*b) = k*(a mod b)), so you can rescale the problem to exact integers:
double rem = ((long long)std::round(a * 10) % (long long)std::round(b * 10)) / 10.0;
using a larger power of 10 according to the number of decimal places required to represent the original operands.

How to avoid floating point format error

I am facing the following issue:
when I multiply two numbers I get different results depending on their values. I tried experimenting with types but did not get the expected result.
#include <stdio.h>
#include <iostream>
#include <fstream>
#include <iomanip>
#include <math.h>

int main()
{
    const double value1_39 = 1.39;
    const long long m_100000 = 100000;
    const long long m_10000 = 10000;
    const double m_10000double = 10000;

    const long long longLongResult_1 = value1_39 * m_100000;
    const double doubleResult_1 = value1_39 * m_100000;
    const long long longLongResult_2 = value1_39 * m_10000;
    const double doubleResult_2 = value1_39 * m_10000;
    const long long longLongResult_3 = value1_39 * m_10000double;
    const double doubleResult_3 = value1_39 * m_10000double;

    std::cout << std::setprecision(6) << value1_39 << '\n';
    std::cout << std::setprecision(6) << longLongResult_1 << '\n';
    std::cout << std::setprecision(6) << doubleResult_1 << '\n';
    std::cout << std::setprecision(6) << longLongResult_2 << '\n';
    std::cout << std::setprecision(6) << doubleResult_2 << '\n';
    std::cout << std::setprecision(6) << longLongResult_3 << '\n';
    std::cout << std::setprecision(6) << doubleResult_3 << '\n';
    return 0;
}
Result seen in the debugger:
Variable Value
value1_39 1.3899999999999999
m_100000 100000
m_10000 10000
m_10000double 10000
longLongResult_1 139000
doubleResult_1 139000
longLongResult_2 13899
doubleResult_2 13899.999999999998
longLongResult_3 13899
doubleResult_3 13899.999999999998
Result seen in cout:
1.39
139000
139000
13899
13900
13899
13900
I know that the problem is in the nature of the floating point format: the computer keeps the data as a fraction in base 2.
My question is: how do I get 1.39 * 10000 to come out as 13900? (I do get 139000 when multiplying the same value by 100000.) Is there any trick that can help achieve my goal?
I have some ideas in mind but am not sure they are good enough:
1) parse a string to get the digits to the left and right of the decimal point
2) multiply the number by 100 and divide by 100 when the calculation is done
but each of these solutions has its drawbacks. I am wondering whether there is any nicer trick for this.
As the comments already said: no, there is no general solution. The problem is due to the nature of floating point values being stored in base 2 (as you already said). The floating point formats are defined in IEEE 754. Any fractional value that is not a finite sum of powers of two can't be stored precisely in base 2.
To be more specific:
You CAN store:
1.25 (2^0 + 2^-2)
0.75 (2^-1 + 2^-2)
because there is an exact representation.
You CAN'T store:
1.1
1.4
because these have a repeating (non-terminating) fraction in the base 2 system. You can round, or use an arbitrary-precision floating point library (though even those have their limits in memory and speed) with much greater precision than float and cast back after the multiplication.
There are also a lot of other related problems when it comes to floating point. You will find that the result of 10^20 + 2 is just 10^20, because you have a fixed number of significant digits (6-7 for float, 15-16 for double). When you calculate with numbers that have huge differences in magnitude, the smaller ones simply "disappear".
Question: Why does multiplying 1.39 by 100000 give exactly 139000, while multiplying by 10000 does not?
The stored double closest to 1.39 is slightly below it (about 1.3899999999999999). Whether a product lands exactly on the intended integer then depends on how that tiny error scales and on how the product is rounded to the nearest double: 1.39 * 100000 happens to round to exactly 139000.0, while 1.39 * 10000 rounds to 13899.999999999998. There is no deeper rule here; nearby magnitudes can round either way. (Compiler, OS and other factors might also play a role.)

Double overflow?

I have always wondered what happens when a double reaches its maximum value, so I decided to write this code:
#include <stdint.h>
#include <iostream>

#define UINT64_SIZE 18446744073709551615ULL

int main() {
    std::uint64_t i = UINT64_SIZE;
    double d1 = ((double)(i + 1)) / UINT64_SIZE;
    double d2 = (((double)(i)) / UINT64_SIZE) * 16;
    double d3 = ((double)(i * 16)) / UINT64_SIZE;
    std::cout << d1 << " " << d2 << " " << d3;
}
I was expecting something like this:
0 16 0
But this is my output:
0 16 1
What is going on here? Why are the values of d3 and d1 different?
EDIT:
I decided to change my code to this to see the result:
#include <stdint.h>
#include <iostream>

#define UINT64_SIZE 18446744073709551615ULL

int main() {
    std::uint64_t i = UINT64_SIZE;
    double d1 = ((double)(i + 1.0)) / UINT64_SIZE; //what?
    double d2 = (((double)(i)) / UINT64_SIZE) * 16;
    double d3 = ((double)(i * 16.0)) / UINT64_SIZE;
    std::cout << d1 << " " << d2 << " " << d3;
}
The result I get now is this:
1 16 16
However, shouldn't d1 and d3 still be the same value?
A double "overflows" by losing precision, not by wrapping around to 0 (as unsigned integers do).
d1
When you add 1.0 to a very big value (18446744073709551615), you don't get 0 in a double. The value is first rounded to the nearest representable double, which here is exactly 2^64; the less significant digits are lost, and adding 1.0 changes nothing.
You are then dividing two almost identical values. A double cannot represent such a tiny difference from 1, so the quotient loses precision and rounds to 1.0.
d3
Almost the same: when you multiply the huge value by 16 you get a rounded result (the less significant digits are thrown away), and dividing it back you get "almost" 16, which is rounded to 16.
This is a case of loss of precision. Consider the following.
#include <stdint.h>
#include <iostream>

#define UINT64_SIZE 18446744073709551615ULL

int main() {
    std::uint64_t i = UINT64_SIZE;
    auto a = i;
    auto b = i * 16;
    auto c = (double)b;
    auto d = (uint64_t)c;
    std::cout << a << std::endl;
    std::cout << b << std::endl;
    std::cout << c << std::endl;
    std::cout << d << std::endl;
    return 0;
}
On my system the output is as follows.
18446744073709551615
18446744073709551600
1.8446744073709552e+19
9223372036854775808
double simply doesn't have enough precision in this case.
Edit: there is also a rounding effect. When you perform the division by UINT64_SIZE, the denominator is promoted to double and the quotient is a value between 0.0 and 1.0. That value is so close to 1.0 that it rounds to exactly 1, which is what std::cout prints.
In your question you ask "what happens in case a double reaches its max value". Note that in the example you provided no double is ever near its maximum value; only its precision is exceeded. When a double's precision is exceeded, the excess precision is discarded.

chrono duration_cast not working when doing arithmetic in cout

I do not understand the following behaviour:
unsigned long begin_time = \
std::chrono::duration_cast<std::chrono::milliseconds>(std::chrono::steady_clock::now().time_since_epoch()).count();
//some code here
std::cout << "time diff with arithmetic in cout: " << \
std::chrono::duration_cast<std::chrono::milliseconds>(std::chrono::steady_clock::now().time_since_epoch()).count() - begin_time << std::endl;
unsigned long time_diff = \
std::chrono::duration_cast<std::chrono::milliseconds>(std::chrono::steady_clock::now().time_since_epoch()).count() - begin_time;
std::cout << "time_diff: " << time_diff << std::endl;
Output:
time diff with arithmetic in cout: <very large number (definitely not milliseconds)>
time_diff: <smaller number (definitely milliseconds)>
Why does the duration_cast not work when I do arithmetic within the cout? I have used unsigned int and int for the time_diff variable, but I always get good output when I first do the arithmetic within the variable initialization or assignment.
NOTE
I am using Visual Studio 2013 (Community edition)
You are probably overflowing unsigned long (sizeof is 4):
unsigned long begin_time = \
std::chrono::duration_cast<std::chrono::milliseconds>(std::chrono::steady_clock::now().time_since_epoch()).count();
Recommended:
using namespace std::chrono;
auto begin_time = steady_clock::now();
//some code here
std::cout << "time diff with arithmetic in cout: " <<
duration_cast<milliseconds>(steady_clock::now() - begin_time).count() << std::endl;
There is nothing wrong with duration_cast, the problem is that an unsigned long is not large enough to handle a time in milliseconds since epoch. From ideone I get this output:
Max value for `unsigned long`: 4294967295
Milliseconds since epoch: 15426527488
I get the number of milliseconds by outputting it directly:
std::cout << std::chrono::duration_cast<std::chrono::milliseconds>(std::chrono::steady_clock::now().time_since_epoch()).count() << std::endl;
In your first output you get a gigantic number because begin_time is converted to std::chrono::milliseconds::rep (the return type of .count()), which the standard guarantees is large enough (at least 45 bits) to hold the time_since_epoch, while in your second output both values are truncated to unsigned long and thus you get a (probably) correct result.
Note: there may be architectures where an unsigned long is enough to handle this, but you should not rely on it; directly use the arithmetic operators provided for std::chrono::duration.

Why isn't this operation giving me greater precision?

I'm estimating the value of Pi using the following formula:
pi = sqrt(12) * (1 - 1/(3*3) + 1/(5*3^2) - 1/(7*3^3) + ...)
Using the following C++ code:
double sub = 0;
int prec = 1000; // How many iterations to use in the estimate.
for (int i = 1; i <= prec; i++) {
    double frac = 1 / ((3 + (2 * (i - 1))) * pow(3, i));
    sub += (i == 1) ? 1 - frac : (i % 2) ? -frac : frac;
}
double pi = sqrt(12) * sub;
cout << "Pi estimated with precision of " << prec << " iterations is " << pi << ".\n";
My problem is that even at 1000 (or 100000 for that matter) iterations, the highest precision I'm getting is 3.14159. I've tried using static_cast<double>() on each of the numbers in the calculation but still get the same result. Am I doing something wrong here, or is this the max precision this method will yield? I'm new to C++, but not to programming.
The problem is that you aren't printing all the precision you have: std::cout shows only 6 significant digits by default, and your estimate is already more accurate than that. Request more digits with std::setprecision (from <iomanip>):
std::cout << std::setprecision(10) << ...