Why is overflow (-2147483648) happening in the code? - c++

I am seeing a pretty weird issue. Somehow, with my code below, a negative number is getting printed out in my holder variable, as shown here. I am not sure why it is happening.
-2147483648 days -2147483648 hours -2147483648 minutes ago
Here is my timestamp (current_unix_timestamp) value, 1437943320, which gets passed to the method below; afterwards the holder value comes out as shown above, with everything negative.
char holder[100];
get_timestamp_value(current_unix_timestamp, holder);
inline void get_timestamp_value(long sec_since_epoch_time, char* holder) {
    uint64_t timestamp = current_timestamp();
    double delta = timestamp/1000000 - sec_since_epoch_time;
    int days = floor(delta/60/60/24);
    int hours = floor((delta - days * 60 * 60 * 24)/60/60);
    int minutes = floor((delta - days * 60 * 60 * 24 - hours * 60 * 60)/60);
    holder[0] = 0;
    if (days) sprintf(holder, "%d days ", days);
    if (hours) sprintf(holder, "%s%d hours ", holder, hours);
    sprintf(holder, "%s%d minutes ago", holder, minutes);
    std::cout << "Timestamp: " << timestamp << ", sec_since_epoch_time: " << sec_since_epoch_time
              << ", Delta: " << delta << ", Days: " << days << ", hours: " << hours
              << ", mins: " << minutes << std::endl;
}
// get current system time in microseconds since epoch
inline uint64_t current_timestamp()
{
    std::chrono::time_point<std::chrono::steady_clock> ts = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::microseconds>(ts.time_since_epoch()).count();
}
Now this is what got printed out from the above cout logs:
Timestamp: 433430278724, sec_since_epoch_time: 1437943320, Delta:1.84467e+19, Days: -2147483648, hours: -2147483648, mins: -2147483648
Timestamp: 433679536303, sec_since_epoch_time: 1437943380, Delta:1.84467e+19, Days: -2147483648, hours: -2147483648, mins: -2147483648
Timestamp: 433929683258, sec_since_epoch_time: 1437943440, Delta:1.84467e+19, Days: -2147483648, hours: -2147483648, mins: -2147483648
Timestamp: 434179628271, sec_since_epoch_time: 1437943500, Delta:1.84467e+19, Days: -2147483648, hours: -2147483648, mins: -2147483648
Is there anything wrong in the above code that is causing this issue? Any suggestions would be of great help.

You should try to avoid doing arithmetic with a mixture of signed and unsigned integer types. The results are often surprising.
Clearly, timestamp is not the value you expect it to be, since timestamp/1000000 is 433430, which is considerably less than sec_since_epoch_time. Consequently, you might expect timestamp/1000000 - sec_since_epoch_time to be a negative number, but (surprisingly, as above) it will be a large positive number, because the signed long sec_since_epoch_time is converted to an unsigned long prior to the subtraction, following the rules for the usual arithmetic conversions. The subtraction is then done using unsigned arithmetic, so the result is a positive number slightly less than 2^64, as seen in the value of delta.
Dividing that large number by 86400 is not sufficient to bring it into the range of an int, so the assignment
int days = floor(delta/60/60/24);
will overflow with undefined consequences (in this case, setting days to -2^31).
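A minimal sketch of both steps, assuming a 64-bit long (the unsigned wrap is well defined; the out-of-range double-to-int conversion is not, but on x86 it typically produces INT_MIN, matching the output above):
#include <cmath>
#include <cstdint>
#include <iostream>

int main() {
    uint64_t timestamp_div = 433430;     // stands in for timestamp/1000000
    long sec_since_epoch = 1437943320;   // stands in for sec_since_epoch_time
    // The long is converted to uint64_t, so the subtraction wraps to a
    // huge positive value just below 2^64 instead of going negative:
    double delta = timestamp_div - sec_since_epoch;
    std::cout << delta << '\n';          // ~1.84467e+19
    // delta/86400 is still far outside int's range, so this conversion
    // is undefined behaviour; x86 hardware typically yields -2147483648:
    int days = std::floor(delta / 60 / 60 / 24);
    std::cout << days << '\n';
}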
To me, it seems a bit odd to ask for a duration in microseconds and then divide it by a million. Why not just ask for a duration in seconds?
But the underlying problem is that you are comparing the value returned by current_timestamp, which is the number of microseconds since the epoch of std::chrono::steady_clock, with the argument sec_since_epoch_time. It looks to me like sec_since_epoch_time is the number of seconds since the Unix epoch (Jan. 1, 1970). However, there is no guarantee that the epoch for a std::chrono clock has that value. (Apparently, on Linux, the epoch for std::chrono::system_clock is the system epoch, but I repeat that there is no guarantee.)
You cannot compare two "seconds since epoch" values unless they are seconds since the same epoch, which means that the only way your code can work is if the clock originally used to get the value of sec_since_epoch_time is the same clock you are using in current_timestamp.
In addition to ensuring that timestamp actually has the expected value, you should change timestamp to an int64_t (or cast it to int64_t for the purposes of the computation of delta).
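Putting those fixes together, here is a sketch, assuming the stored value really is seconds since the Unix epoch (so system_clock, whose epoch is the Unix epoch on common platforms, is the appropriate clock); current_timestamp_seconds is just a descriptive name, and this version also avoids passing holder as both source and destination of sprintf, which is undefined:
#include <chrono>
#include <cstdint>
#include <cstdio>

// seconds since the Unix epoch, from the same clock family as the stored value
inline int64_t current_timestamp_seconds() {
    auto now = std::chrono::system_clock::now();
    return std::chrono::duration_cast<std::chrono::seconds>(now.time_since_epoch()).count();
}

inline void get_timestamp_value(int64_t sec_since_epoch_time, char* holder) {
    int64_t delta = current_timestamp_seconds() - sec_since_epoch_time; // signed throughout
    int days    = static_cast<int>(delta / 86400);
    int hours   = static_cast<int>(delta % 86400 / 3600);
    int minutes = static_cast<int>(delta % 3600 / 60);
    int n = 0;
    if (days)  n += std::sprintf(holder + n, "%d days ", days);
    if (hours) n += std::sprintf(holder + n, "%d hours ", hours);
    std::sprintf(holder + n, "%d minutes ago", minutes);
}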

Related

How to safely clock_cast days?

I'm using HowardHinnant/date in lieu of the new C++20 calendar/timezone facilities that are not yet available in Clang/GCC. My question applies equally to both implementations: How do I safely clock_cast time_points having days duration?
When I try:
using namespace date; // or using std::chrono in C++20
tai_time<days> tai{days{42}};
sys_days sys = clock_cast<std::chrono::system_clock>(tai);
I get a "no viable conversion" error for the last statement. It turns out the result uses a duration that is common_type<days, seconds>, which comes from utc_clock::to_sys() that's used in the conversion. The common duration type is std::chrono::seconds, so it's normal that it can't be directly converted to a days duration.
I can get it to compile if I use an explicit duration_cast:
using namespace date; // or using std::chrono in C++20
tai_time<days> tai{days{42}};
auto casted = clock_cast<std::chrono::system_clock>(tai);
sys_days sys{std::chrono::duration_cast<days>(casted.time_since_epoch())};
... but I'm worried that the result might be off by a day due to the truncation (especially for dates preceding the epoch). Is this the correct way to do what I'm trying to do? Should I be using floor instead of duration_cast?
Why is there even a std::common_type_t<Duration, std::chrono::seconds> in utc_clock::to_sys() anyway? Shouldn't it simply return the same duration type?
The reason that the clock_cast insists on at least seconds precision is that the offset between the epochs of system_clock and tai_clock has a precision of seconds:
auto diff = sys_days{} - clock_cast<system_clock>(tai_time<days>{});
cout << diff << " == " << duration<double, days::period>{diff} << '\n';
Output:
378691210s == 4383.000116d
So a days-precision cast would be lossy. Here's another way of looking at it:
cout << clock_cast<tai_clock>(sys_days{2021_y/June/1}) << '\n';
Output:
2021-06-01 00:00:37
I.e. TAI is currently 37 seconds ahead of system_clock in calendrical terms.
If all you want is the date, I recommend round<days>(result):
cout << round<days>(clock_cast<tai_clock>(sys_days{2021_y/June/1})) << '\n';
Output:
2021-06-01 00:00:00
One could conceivably use floor in one direction and ceil in the other, but that would be very error prone. round will do fine since the offset from an integral number of days is currently only 37 seconds and growing quite slowly.
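Applied to the original question, that gives the following sketch (round on a time_point is available in both the date library and C++20 std::chrono):
using namespace date;   // or std::chrono in C++20
tai_time<days> tai{days{42}};
sys_days sys = round<days>(clock_cast<std::chrono::system_clock>(tai));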

Error in comparing two std::chrono::time_point instances

I have two std::chrono::time_point instances in variables exp and time. exp has a time in the future and time is the current time. But when I compare them as in this snippet:
std::time_t t_exp = std::chrono::system_clock::to_time_t(exp);
std::time_t t_time = std::chrono::system_clock::to_time_t(time);
std::cout << std::ctime(&t_exp) << std::ctime(&t_time) << (time > exp) << std::endl;
I get output:
Sat Apr 26 01:39:43 4758
Fri May 29 18:11:59 2020
1
Which is wrong because exp is in the year 4758 and time is in the year 2020.
Where am I going wrong?
t_exp is -4243023785
This value of time_t corresponds to 1835-07-18 22:16:55 (assuming the Unix epoch and a precision of seconds, neither of which are specified by the standard, but are common).
Apparently the implementation of ctime on your platform can't handle dates this far in the past, which is a little surprising as 1835 is not very far in the past.
The value of exp is -4243023785 times a million or a billion (depending on the precision of system_clock on your platform) and is stored in a signed 64-bit integer (there is no overflow). Thus time > exp == 1 is correct (time is 1590775919s converted to the precision of system_clock).
Sat Apr 26 01:39:43 4758 corresponds to a time_t of 87990716383.
I see nothing wrong with your use of the chrono library in the above code.
Update
The value 87990716383 is being converted to a time_point using from_time_t()
Ah, this, combined with the knowledge that on your platform the precision of system_clock is nanoseconds, tells me that you are experiencing overflow in the construction of exp.
This is not the code you have:
std::time_t t_exp = std::chrono::system_clock::to_time_t(exp);
std::time_t t_time = std::chrono::system_clock::to_time_t(time);
std::cout << std::ctime(&t_exp) << std::ctime(&t_time) << (time > exp) << std::endl;
The code you have looks something like:
// ...
std::time_t t_exp = 87990716383;
auto exp = std::chrono::system_clock::from_time_t(t_exp);
std::cout << std::ctime(&t_exp) << std::ctime(&t_time) << (time > exp) << std::endl;
On your platform, system_clock stores nanoseconds since 1970-01-01 00:00:00 UTC in a signed 64-bit integer. The maximum storable date (system_clock::time_point::max()) on your platform is:
2262-04-11 23:47:16.854775807
Beyond this, the underlying storage of nanoseconds overflows.
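You can inspect that limit yourself (a sketch; the count and what it means depend on your platform's system_clock representation):
auto max_ns = std::chrono::system_clock::time_point::max().time_since_epoch().count();
std::cout << max_ns << '\n'; // 9223372036854775807 on a 64-bit-nanoseconds platform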
When 87990716383 (seconds) is converted by from_time_t, it is multiplied by a billion, which overflows. The overflowed value is -4243003985547758080, which corresponds to the date 1835-07-19 03:46:54.452241920.
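You can reproduce the wrapped value directly (a sketch; unsigned arithmetic is used to model the two's-complement wrap without invoking signed-overflow undefined behaviour):
#include <cstdint>
#include <iostream>

int main() {
    std::int64_t t_exp = 87990716383;  // seconds
    // multiply as unsigned, then reinterpret as the two's-complement result
    auto wrapped = static_cast<std::uint64_t>(t_exp) * 1'000'000'000u;
    std::cout << static_cast<std::int64_t>(wrapped) << '\n'; // -4243003985547758080
}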
You can get a larger range by using a coarser precision, for example:
std::time_t t_exp = 87990716383;
time_point<system_clock, microseconds> exp{seconds{t_exp}};
// exp == 4758-04-26 01:39:43.000000
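With microseconds precision, a signed 64-bit count covers roughly ±292,000 years around the epoch, which comfortably contains the year 4758; the trade-off is giving up the sub-microsecond precision that this platform's system_clock would otherwise provide.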

What happened when constructing a variable of std::chrono::milliseconds from "LLONG_MAX seconds"?

When constructing a variable of std::chrono::milliseconds from "LLONG_MAX seconds", the result of t_milli.count() is -1000:
auto t_max_seconds = std::chrono::seconds(LLONG_MAX);
auto t_milli = std::chrono::milliseconds(t_max_seconds);
As far as I can see, somehow "-1" came from "LLONG_MAX", and "1000" was the ratio.
(for "microseconds", the result is -1'000'000)
I wonder what happened here: an overflow, or undefined behavior?
You are getting signed overflow in the conversion from seconds to milliseconds.
On your machine both seconds and milliseconds are represented by signed 64-bit integers. But to convert seconds to milliseconds, the library multiplies by 1000.
You are effectively doing this:
cout << LLONG_MAX*1000 << '\n';
which on my machine prints out:
-1000
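Why exactly -1000: LLONG_MAX * 1000 = (2^63 - 1) * 1000 = 1000 * 2^63 - 1000, and 1000 * 2^63 = 500 * 2^64 is a multiple of 2^64, so modulo 2^64 the product is congruent to -1000 in 64-bit two's complement.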
std::chrono::seconds is a std::chrono::duration<>, just like std::chrono::milliseconds. As you figured, std::chrono::milliseconds has a ratio of std::milli, which is std::ratio<1,1000>.
There is also a portability wrinkle: the standard requires the representation of std::chrono::seconds to have only 35 bits or more, so std::chrono::seconds(LLONG_MAX) is itself not guaranteed to be representable. Only std::chrono::nanoseconds is required to have a (signed) 64-bit representation, and LLONG_MAX could even be higher than that.
Of course you can define using seconds = std::chrono::duration<std::int64_t>;. There's nothing magical about std::chrono::seconds. Converting that to milliseconds has its own overflow risk, naturally.
I suspect you want std::chrono::seconds::max().
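If a value this large really is intended, a floating-point representation converts without integer overflow, at the cost of precision (a sketch):
#include <chrono>
#include <climits>
#include <iostream>

int main() {
    auto t_max_seconds = std::chrono::seconds(LLONG_MAX);
    // double representation: the *1000 happens in floating point, so no wrap
    std::chrono::duration<double, std::milli> t_milli = t_max_seconds;
    std::cout << t_milli.count() << '\n';   // ~9.22337e+21
}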

Convert long int seconds to double precision floating-point value

I have a long int variable which contains seconds since Jan. 1, 1970, in this format:
long int seconds = 1231241242144214;
I need to convert these seconds to a double-precision floating-point value. The integer part of the value is the number of days since midnight, 30 December 1899.
The fractional part of the value represents time; .5 is equal to 12:00 PM.
How can I convert?
There are 86400 seconds in a day, and 25569 days between these epochs. So the answer is:
double DelphiDateTime = (UnixTime / 86400.0) + 25569;
You really do need to store the Unix time in an integer variable though.
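A direct transcription of that formula (a sketch; delphiDateTime is just a descriptive name):
#include <iostream>

int main() {
    long long seconds = 1231241242144214; // seconds since 1970-01-01, from the question
    // 86400 seconds per day; 25569 days from 1899-12-30 to 1970-01-01
    double delphiDateTime = (seconds / 86400.0) + 25569.0;
    std::cout.precision(15);
    std::cout << delphiDateTime << '\n';
}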

Time Stamp and byte array

I'm trying to insert a timestamp (hour:min:sec) into a two-byte array and I'm a little confused about how to accomplish this... any help is greatly appreciated!
int Hour = CTime::GetCurrentTime().GetHour();
int Minute = CTime::GetCurrentTime().GetMinute();
int Second = CTime::GetCurrentTime().GetSecond();
BYTE arry[2];
//Need to insert 'Hour', 'Minute', & 'Second' into 'arry'
Thanks!
You can't. There are potentially 86402 seconds in a day (a day can have up to two leap seconds), but the 16 bits available to you in a BYTE[2] array can only represent 65536 separate values.
hour:min:sec is not what people call a timestamp. A timestamp is the number of seconds elapsed since 1970-01-01, and it will surely not fit into 16 bits.
Assuming ranges of hours=[0;24], minutes=[0;60], seconds=[0;60] (leap seconds included), you will need 5+6+6=17 bits, which still won't fit into 16 bits.
If you had a 32-bit array, it would fit:
int Hour = CTime::GetCurrentTime().GetHour();
int Minute = CTime::GetCurrentTime().GetMinute();
int Second = CTime::GetCurrentTime().GetSecond();
uint8_t array[4];
// Just an example: hour in bits 12-16, minute in bits 6-11, second in bits 0-5
uint32_t packed = (Hour << 12) | (Minute << 6) | Second;
std::memcpy(array, &packed, sizeof packed); // memcpy (from <cstring>) avoids the alignment/aliasing issues of a raw pointer cast
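Unpacking reverses the shifts, with masks matching the field widths (a sketch, matching the packing above):
uint32_t packed;
std::memcpy(&packed, array, sizeof packed);
int hour   = (packed >> 12) & 0x1F;  // 5 bits
int minute = (packed >> 6)  & 0x3F;  // 6 bits
int second =  packed        & 0x3F;  // 6 bits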
This sounds somewhat like homework to me... what is the exact purpose of doing this?