Is it safe to store milliseconds since Epoch in uint32 - c++

I'm currently rewriting some old code and came across this:
gettimeofday(&tv, NULL);
unsigned int t = tv.tv_sec * 1000 + tv.tv_usec / 1000;
This really looks like they're trying to store the milliseconds since the Epoch in a uint32. I was sure this would not fit, so I did some testing.
#include <sys/time.h>
#include <stdint.h>
int main() {
struct timeval tv;
gettimeofday(&tv, nullptr);
uint32_t t32 = tv.tv_sec * 1000 + tv.tv_usec / 1000;
int64_t t64 = tv.tv_sec * 1000 + tv.tv_usec / 1000;
return 0;
}
And I was kind of right:
(gdb) print t32
$1 = 1730323142
(gdb) print t64
$2 = 1423364498118
So I guess what they're doing is not safe. But what exactly are they doing, why are they doing it, and what actually happens? (In this example roughly the top 10 bits are lost; they only care about the diffs.) Do they still keep millisecond precision? (Yes.) Note that they're sending this "timestamp" over the network and still use it for calculations.

No, it is not "safe": it sacrifices portability, accuracy, or both.
It is portable if you only care about the low bits, e.g. if you're sending these times on the network and then diffing them on the other side, with a maximum difference of about 4.3 million seconds (roughly 49 days, i.e. 2^32 milliseconds).
It is accurate if you only run this code on systems where int is 64 bits. There are some machines like that, but not many.

Is it safe? I could answer both no and yes. No, because these days we already use almost all the bits of a 32-bit number to count the seconds since January 1970. Multiplying by 1000 is roughly a left shift by 10 bits, so when the result is truncated to 32 bits, about the top 10 bits are lost.
I could also say yes. At the end of your question you said this number is used as the timestamp of a packet on a network. The question is: how long a time-to-live do you expect? 10 years? 10 days? 10 seconds? Losing the top bits still leaves you a window of about 49 days in which you can compute differences between two packets with millisecond precision, which I guess is what you want.
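If the diffs really are all they care about, the wraparound behaviour can be sketched with plain unsigned arithmetic (ms_diff is a hypothetical helper, not part of the original code):

```cpp
#include <cstdint>

// Hypothetical helper: difference between two truncated 32-bit
// millisecond stamps. Unsigned subtraction is modular, so the result
// is correct even across a 2^32 wrap, as long as the real gap between
// the two stamps is under 2^32 ms (about 49.7 days).
uint32_t ms_diff(uint32_t later, uint32_t earlier) {
    return later - earlier;  // wraps mod 2^32
}
```

Truncating both 64-bit millisecond counts to uint32_t loses the high bits of each stamp, but the low 32 bits of their difference survive intact.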

Related

C++: get time zone deviation

So I want to create a time stamp (as a string) with the format HH:MM:SS in C++. I use std::chrono to get a Unix time stamp and then calculate the hours, minutes and seconds.
// Get unix time stamp in seconds.
const auto unix_time_stamp = std::chrono::system_clock::now();
long long seconds_since_epoch = std::chrono::duration_cast<std::chrono::seconds>(unix_time_stamp.time_since_epoch()).count();
// Calculate current time (hours, minutes, seconds).
uint8_t hours = (seconds_since_epoch % 86400) / 3600;
uint8_t minutes = (seconds_since_epoch % 3600) / 60;
uint8_t seconds = (seconds_since_epoch % 60);
// Create strings for hours, minutes, seconds.
std::string hours_string = std::to_string(hours);
std::string minutes_string = std::to_string(minutes);
std::string seconds_string = std::to_string(seconds);
// Check if the number is only one digit. If it is, add a 0 in the beginning (5:3:9 --> 05:03:09).
if(hours_string.size() == 1)
{
hours_string = "0" + hours_string;
}
if(minutes_string.size() == 1)
{
minutes_string = "0" + minutes_string;
}
if(seconds_string.size() == 1)
{
seconds_string = "0" + seconds_string;
}
// Append to a final string.
std::string time_stamp = hours_string + ":" + minutes_string + ":" + seconds_string;
This is all working fine and great but there is one big problem: time zones.
With this way, I'm only calculating the time stamp for GMT. Is there any easy, fast and, most importantly, portable way to get the "offset" in seconds or minutes or hours for your system's time zone? By "portable" I mean platform-independent.
Please note: I know you can do all of this more easily with std::strftime and so on, but I really want to implement this by myself.
Some implementations of std::tm contain an extra member holding the local UTC offset. ... But it isn't portable.
One trick is to take your seconds_since_epoch, and either assign it to a std::time_t, or just make its type std::time_t in the first place instead of long long.
... Oh, wait that isn't quite portable. Some platforms still use a 32 bit time_t. But assuming a 64 bit time_t ...
Then use localtime to get a std::tm:
std::tm tm = *localtime(&seconds_since_epoch);
This isn't officially portable because system_clock and time_t aren't guaranteed to have the same epoch. But in practice they do.
Now take the {year, month, day, hour, minute, second} fields out of the tm and compute a "local epoch". The hard part of this computation is converting the {year, month, day} part into a count of days. You can use days_from_civil from here to do that computation efficiently. Be sure to take the weird offsets into account for tm_year and tm_mon when doing this.
After you get this then subtract seconds_since_epoch from it:
auto offset = local_epoch - seconds_since_epoch;
This is your signed UTC offset in seconds. Positive is east of the prime meridian.
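A minimal sketch of this recipe, assuming a POSIX system (localtime_r) and that time_t counts seconds since 1970-01-01 UTC (true in practice, as noted above); days_from_civil is Howard Hinnant's algorithm referenced in the answer:

```cpp
#include <ctime>

// days_from_civil: converts a Gregorian {year, month, day}
// into a count of days since 1970-01-01.
long days_from_civil(int y, int m, int d) {
    y -= m <= 2;
    const int era = (y >= 0 ? y : y - 399) / 400;
    const unsigned yoe = static_cast<unsigned>(y - era * 400);            // [0, 399]
    const unsigned doy = (153 * (m + (m > 2 ? -3 : 9)) + 2) / 5 + d - 1;  // [0, 365]
    const unsigned doe = yoe * 365 + yoe / 4 - yoe / 100 + doy;           // [0, 146096]
    return static_cast<long>(era) * 146097 + static_cast<long>(doe) - 719468;
}

// Signed UTC offset in seconds for the given time; positive is east.
long utc_offset_seconds(std::time_t t) {
    std::tm tm_local;
    localtime_r(&t, &tm_local);  // POSIX; use localtime_s on Windows
    long days = days_from_civil(tm_local.tm_year + 1900,  // tm_year: years since 1900
                                tm_local.tm_mon + 1,      // tm_mon: 0-based
                                tm_local.tm_mday);
    long local_epoch = days * 86400L
                     + tm_local.tm_hour * 3600L
                     + tm_local.tm_min * 60L
                     + tm_local.tm_sec;
    return local_epoch - static_cast<long>(t);
}
```

Note the tm_year/tm_mon adjustments: those are exactly the "weird offsets" the answer warns about.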
In C++20 this simplifies down to:
auto offset = std::chrono::current_zone()->get_info(system_clock::now()).offset;
and offset will have type std::chrono::seconds.
You can get a free, open-source preview of this here. It does require some installation.

I have piece of code that gets duration from a FILETIME struct. What does it mean?

I have this function
void prtduration(const FILETIME *ft_start, const FILETIME *ft_end)
{
double duration = (ft_end->dwHighDateTime - ft_start->dwHighDateTime) *
(7 * 60 + 9 + 496e-3)
+ (ft_end->dwLowDateTime - ft_start->dwLowDateTime) / 1e7;
printf("duration %.1f seconds\n", duration);
system("pause");
}
Could anybody explain the working of the following part of the code?
(ft_end->dwHighDateTime - ft_start->dwHighDateTime) *
(7 * 60 + 9 + 496e-3)
+ (ft_end->dwLowDateTime - ft_start->dwLowDateTime) / 1e7;
Wow! What an obfuscated piece of code. Let us try to simplify it:
// Calculate the delta
FILETIME delta;
delta.dwHighDateTime = ft_end->dwHighDateTime - ft_start->dwHighDateTime;
delta.dwLowDateTime = ft_end->dwLowDateTime - ft_start->dwLowDateTime;
// Convert 100ns units to double seconds.
double secs = delta.dwHighDateTime * 429.496 + delta.dwLowDateTime / 1E7;
In actual fact I think this is wrong. It should be:
double secs = delta.dwHighDateTime * 429.4967296 + delta.dwLowDateTime / 1E7;
Or even more clearly:
double secs = (delta.dwHighDateTime * 4294967296. + delta.dwLowDateTime) / 1E7;
What is happening is that the high word is being multiplied by 2**32 (its weight within the 64-bit tick count), and the combined value, in 100 ns units, is divided by 10^7 to give seconds.
Note that this is still wrong because the calculation of delta is wrong (in the same way as the original). If the subtraction of the low part underflows, it fails to borrow from the high part. See Microsoft's documentation:
It is not recommended that you add and subtract values from the FILETIME structure to obtain relative times. Instead, you should copy the low- and high-order parts of the file time to a ULARGE_INTEGER structure, perform 64-bit arithmetic on the QuadPart member, and copy the LowPart and HighPart members into the FILETIME structure.
Or actually, in this case, just convert the QuadPart to double and divide. So we end up with:
ULARGE_INTEGER start,end;
start.LowPart = ft_start->dwLowDateTime;
start.HighPart = ft_start->dwHighDateTime;
end.LowPart = ft_end->dwLowDateTime;
end.HighPart = ft_end->dwHighDateTime;
double duration = (end.QuadPart - start.QuadPart)/1E7;
Aside: I bet the reason that the failure to borrow has never been spotted is that the code has never been asked to print a duration of greater than 7 minutes 9 seconds (or if it has, nobody has looked carefully at the result).
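The missing borrow is easy to demonstrate without Windows at all, using plain 32-bit words (naive_delta and correct_delta are hypothetical helpers mirroring the two computations):

```cpp
#include <cstdint>

// Per-word subtraction, as in the original prtduration: no borrow
// propagates from the low word to the high word when the low
// subtraction underflows.
uint64_t naive_delta(uint32_t hi_end, uint32_t lo_end,
                     uint32_t hi_start, uint32_t lo_start) {
    uint64_t hi = hi_end - hi_start;  // wraps mod 2^32
    uint64_t lo = lo_end - lo_start;  // wraps mod 2^32
    return (hi << 32) + lo;
}

// Correct 64-bit subtraction (what ULARGE_INTEGER::QuadPart gives you).
uint64_t correct_delta(uint32_t hi_end, uint32_t lo_end,
                       uint32_t hi_start, uint32_t lo_start) {
    uint64_t end   = (static_cast<uint64_t>(hi_end)   << 32) | lo_end;
    uint64_t start = (static_cast<uint64_t>(hi_start) << 32) | lo_start;
    return end - start;
}
```

Whenever the low word underflows, the naive result overshoots by exactly 2^32 ticks, i.e. about 429.5 seconds, the 7-minutes-9-seconds error the aside predicts.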
429.496 is roughly 7 minutes 9.496 seconds, which is how often dwHighDateTime increases by 1: the high word ticks once every 2^32 × 100 ns ≈ 429.4967296 seconds. So 7 * 60 converts the minutes to seconds, and 9 + 496e-3 adds the remaining seconds and milliseconds.
Really, it is very bad code and we shouldn't write like this.
However, it has forced me to learn more about how FILETIME works.
Thanks everyone for the answers, I really appreciate it.

How to reach the best performance with different data types?

I am working on a custom class which holds a Date and Time. The main goal of the class is the best possible performance. My target platform is Linux.
Currently, I hold members like this
Year - int
Month - int
Day - int
Hour- int
Min - int
Sec - double (because I need milliseconds as well).
What I am thinking now is to change the types to the following:
Year - unsigned short
Month - unsigned char
Day - unsigned char
Hour- unsigned char
Min - unsigned char
Sec - unsigned char
Milisec - unsigned short
Which gives me 2 + 1 + 1 + 1 + 1 + 1 + 2 = 9 bytes.
As you already guessed, I want to fit my class into 8 bytes (there are no other members).
So what is the best approach: merge fields (e.g. seconds and milliseconds) and use bit masks to retrieve the values? Will that affect performance? And if the user passes integers to a setter, would the type cast also affect performance?
Thanks in advance.
There are multiple options you have here. The most compact way would be to have an integer timestamp. It would take a bit of processing to unpack it though. Another option is to use C++ bitfields to pack things tighter. For example, month only needs 4 bits, day 5 bits, minutes and seconds 6 bits. It should make things a bit slower, but only in theory. It all depends on the number of these dates you have and on the amount and kind of processing you're going to perform on them. In some cases having the struct tightly packed into bitfields would increase the performance because of higher memory throughput and better cache utilization. In other cases, the bit manipulation might become more expensive. Like always with performance, better not to guess, but measure.
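For example, a bitfield layout along the lines the answer suggests, fitting all seven fields into 8 bytes on typical ABIs (the field names and widths here are illustrative):

```cpp
#include <cstdint>

// Bitfield packing of the date/time fields into one 64-bit unit.
// Widths: year 16 bits (0-65535), month 4 (1-12), day 5 (1-31),
// hour 5 (0-23), minute 6 (0-59), second 6 (0-60, leap second
// included), millisecond 10 (0-999) = 52 bits total.
struct PackedDateTime {
    uint64_t year   : 16;
    uint64_t month  : 4;
    uint64_t day    : 5;
    uint64_t hour   : 5;
    uint64_t minute : 6;
    uint64_t second : 6;
    uint64_t milli  : 10;
};
static_assert(sizeof(PackedDateTime) == 8, "fits in 8 bytes");
```

Field access compiles down to a shift and a mask, which is usually cheap; but as the answer says, measure rather than guess.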
The simplest way here is to merge seconds and milliseconds into one unsigned short (two bytes).
You don't need a separate Sec (unsigned char) and Milisec (unsigned short), because any value from 0 to 59999 fits in one unsigned short.
Let's call it milliSecPack (unsigned short).
milliSecPack = 1000 * Sec + Milisec;
And
Sec = milliSecPack / 1000;
Milisec = milliSecPack % 1000;
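Note the pack factor must be 1000 (not 60) for the decode by / 1000 and % 1000 to round-trip; a quick sketch:

```cpp
#include <cstdint>

// Pack seconds (0-59) and milliseconds (0-999) into one 16-bit value.
// Maximum is 59 * 1000 + 999 = 59999, which fits in an unsigned short.
uint16_t pack_ms(unsigned sec, unsigned milli) {
    return static_cast<uint16_t>(1000u * sec + milli);
}

unsigned unpack_sec(uint16_t p)   { return p / 1000u; }
unsigned unpack_milli(uint16_t p) { return p % 1000u; }
```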

Converting number of 100 ns since 1601 to boost posix time in C++

I am receiving from a data provider timestamps that follow this specification:
number of 100 nanoseconds since 1601
I am using boost::posix_time::ptime and I would like to convert the timestamps to posix time. Is there a simple way to do that ?
When did the switch from the Julian to Gregorian calendar occur for this system? Some countries switched before 1st January 1601; others didn't switch until much later. This will critically affect your calculation - by 11 days or so.
Since there are 10^7 units of 100 ns in one second, you divide the starting number by 10^7 to produce the number of seconds since the reference time (the remainder is the fraction of a second). You then divide that by 86400 to give the number of days (the remainder is the time of day). Then you can compute the date from the number of days.
Since POSIX time uses 1970-01-01 00:00:00 as the reference, you may simply need to compute the correct number of seconds between 1601-01-01 00:00:00 and the POSIX epoch (as it is known), and subtract that number from the number of seconds you calculated.
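The number of seconds between 1601-01-01 and 1970-01-01 is 11644473600 (369 years, 89 of them leap years, times 86400). A sketch of the conversion, independent of Boost:

```cpp
#include <cstdint>

// Seconds between the FILETIME epoch (1601-01-01) and the
// Unix epoch (1970-01-01).
const int64_t EPOCH_DIFF_SECONDS = 11644473600LL;

// Convert 100 ns ticks since 1601 into Unix seconds plus the
// leftover sub-second part in microseconds.
void filetime_to_unix(int64_t ticks_100ns,
                      int64_t& unix_seconds, int64_t& microseconds) {
    unix_seconds = ticks_100ns / 10000000LL - EPOCH_DIFF_SECONDS;
    microseconds = (ticks_100ns % 10000000LL) / 10;  // 100 ns -> microseconds
}
```

From there you can build the ptime with boost::posix_time::from_time_t(unix_seconds) plus a boost::posix_time::microseconds duration.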
number of 100 nanoseconds since 1601
That is a Windows FILETIME value.
Boost.DateTime actually uses Windows FILETIME on the Windows platform.
Below is the relevant Boost source code that converts a FILETIME to boost::posix_time::ptime:
(from boost/date_time/microsec_time_clock.hpp)
static time_type create_time(time_converter converter)
{
    winapi::file_time ft;
    winapi::get_system_time_as_file_time(ft);
    uint64_t micros = winapi::file_time_to_microseconds(ft); // it will not wrap, since ft is the current time
                                                             // and cannot be before 1970-Jan-01
    std::time_t t = static_cast<std::time_t>(micros / 1000000UL); // seconds since epoch
    // microseconds -- static casts suppress warnings
    boost::uint32_t sub_sec = static_cast<boost::uint32_t>(micros % 1000000UL);

    std::tm curr;
    std::tm* curr_ptr = converter(&t, &curr);
    date_type d(curr_ptr->tm_year + 1900,
                curr_ptr->tm_mon + 1,
                curr_ptr->tm_mday);

    // The following line will adjust the fractional second tick in terms
    // of the current time system. For example, if the time system
    // doesn't support fractional seconds then res_adjust returns 0
    // and all the fractional seconds return 0.
    int adjust = static_cast<int>(resolution_traits_type::res_adjust() / 1000000);

    time_duration_type td(curr_ptr->tm_hour,
                          curr_ptr->tm_min,
                          curr_ptr->tm_sec,
                          sub_sec * adjust);
    return time_type(d, td);
}
You can browse your Boost installation for the detailed implementation.

Time Stamp and byte array

I'm trying to insert a timestamp (hour:min:sec) into a two-byte array, and I'm a little confused about how to accomplish this... any help is greatly appreciated!
int Hour = CTime::GetCurrentTime().GetHour();
int Minute = CTime::GetCurrentTime().GetMinute();
int Second = CTime::GetCurrentTime().GetSecond();
BYTE arry[2];
//Need to insert 'Hour', 'Minute', & 'Second' into 'arry'
Thanks!
You can't. There are potentially 86401 seconds in a day (a day can end with a leap second), but the 16 bits available to you in a byte[2] array can only represent 65536 separate values.
hour:min:sec is not what people call timestamp. A timestamp is the number of seconds elapsed since 1970-01-01 and will surely not fit into 16 bits.
Assuming ranges of hours = [0;23], minutes = [0;59], seconds = [0;60] (leap second included), you will need 5 + 6 + 6 = 17 bits, which still won't fit into 16 bits.
If you had a 32-bit array, it would fit:
int Hour = CTime::GetCurrentTime().GetHour();
int Minute = CTime::GetCurrentTime().GetMinute();
int Second = CTime::GetCurrentTime().GetSecond();
uint8_t array[4];
// Just an example; memcpy (from <cstring>) avoids the alignment and
// strict-aliasing problems that *(uint32_t*)array would have
uint32_t packed = (Hour << 12) | (Minute << 6) | Second;
memcpy(array, &packed, sizeof packed);
This sounds somewhat like homework to me... what is the exact purpose of doing this?
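For completeness, the 17-bit value packed this way can be unpacked again with shifts and masks (pack_hms/unpack_hms are illustrative helpers using the same layout as the example above):

```cpp
#include <cstdint>

// Pack hour:minute:second into the low 17 bits of a uint32_t:
// seconds in bits 0-5, minutes in bits 6-11, hours in bits 12-16.
uint32_t pack_hms(unsigned h, unsigned m, unsigned s) {
    return (h << 12) | (m << 6) | s;
}

void unpack_hms(uint32_t v, unsigned& h, unsigned& m, unsigned& s) {
    s = v & 0x3F;          // low 6 bits
    m = (v >> 6) & 0x3F;   // next 6 bits
    h = (v >> 12) & 0x1F;  // top 5 bits
}
```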