Time Stamp and byte array - c++

I'm trying to insert a timestamp (hour:min:sec) into a two-byte array and I'm a little confused about how to accomplish this... any help is greatly appreciated!
int Hour = CTime::GetCurrentTime().GetHour();
int Minute = CTime::GetCurrentTime().GetMinute();
int Second = CTime::GetCurrentTime().GetSecond();
BYTE arry[2];
//Need to insert 'Hour', 'Minute', & 'Second' into 'arry'
Thanks!

You can't. There are potentially 86402 seconds in a day (a day can have up to two leap seconds), but the 16 bits available to you in a byte[2] array can only represent 65536 separate values.

hour:min:sec is not what people call timestamp. A timestamp is the number of seconds elapsed since 1970-01-01 and will surely not fit into 16 bits.
Assuming ranges of hours=[0;23] (5 bits), minutes=[0;59] (6 bits), seconds=[0;60] (6 bits, leap second included), you will need 5+6+6=17 bits, which still won't fit into 16 bits.
If you had a 32-bit array, it would fit:
int Hour = CTime::GetCurrentTime().GetHour();
int Minute = CTime::GetCurrentTime().GetMinute();
int Second = CTime::GetCurrentTime().GetSecond();
uint8_t array[4];
// Just an example
uint32_t packed = (Hour << 12) | (Minute << 6) | Second;
memcpy(array, &packed, sizeof packed); // a plain *(uint32_t*)array cast would violate strict aliasing
This sounds somewhat like homework to me... what is the exact purpose of doing this?
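A self-contained sketch of that 17-bit packing (packHMS, toBytes and the *Of helpers are illustrative names, not part of any API), using memcpy to stay clear of strict-aliasing trouble:

```cpp
#include <cstdint>
#include <cstring>

// Pack hour/minute/second into the low 17 bits (5+6+6) of a uint32_t.
inline uint32_t packHMS(int hour, int minute, int second) {
    return (uint32_t(hour) << 12) | (uint32_t(minute) << 6) | uint32_t(second);
}

// Copy the packed value into a 4-byte array; memcpy avoids the
// undefined behaviour of a *(uint32_t*) cast.
inline void toBytes(uint32_t packed, uint8_t out[4]) {
    std::memcpy(out, &packed, sizeof packed);
}

// Recover each field by shifting back and masking to its width.
inline int hourOf(uint32_t p)   { return int((p >> 12) & 0x1F); }
inline int minuteOf(uint32_t p) { return int((p >> 6) & 0x3F); }
inline int secondOf(uint32_t p) { return int(p & 0x3F); }
```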

Related

C++: get time zone deviation

So I want to create a time stamp (as a string) with the format HH:MM:SS in C++. I use std::chrono to get a Unix time stamp and then calculate the hours, minutes and seconds.
// Get unix time stamp in seconds.
const auto unix_time_stamp = std::chrono::system_clock::now();
long long seconds_since_epoch = std::chrono::duration_cast<std::chrono::seconds>(unix_time_stamp.time_since_epoch()).count();
// Calculate current time (hours, minutes, seconds).
uint8_t hours = (seconds_since_epoch % 86400) / 3600;
uint8_t minutes = (seconds_since_epoch % 3600) / 60;
uint8_t seconds = (seconds_since_epoch % 60);
// Create strings for hours, minutes, seconds.
std::string hours_string = std::to_string(hours);
std::string minutes_string = std::to_string(minutes);
std::string seconds_string = std::to_string(seconds);
// Check if the number is only one digit. If it is, add a 0 in the beginning (5:3:9 --> 05:03:09).
if(hours_string.size() == 1)
{
hours_string = "0" + hours_string;
}
if(minutes_string.size() == 1)
{
minutes_string = "0" + minutes_string;
}
if(seconds_string.size() == 1)
{
seconds_string = "0" + seconds_string;
}
// Append to a final string.
std::string time_stamp = hours_string + ":" + minutes_string + ":" + seconds_string;
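For comparison, the three padding if-blocks could be collapsed with stream manipulators; a sketch (formatHMS is a hypothetical helper name):

```cpp
#include <iomanip>
#include <sstream>
#include <string>

// Zero-pad each component with setw/setfill instead of manually
// prepending '0' to one-digit strings.
std::string formatHMS(int hours, int minutes, int seconds) {
    std::ostringstream oss;
    oss << std::setfill('0')
        << std::setw(2) << hours << ':'
        << std::setw(2) << minutes << ':'
        << std::setw(2) << seconds;
    return oss.str();
}
```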
This is all working fine and great but there is one big problem: time zones.
With this way, I'm only calculating the time stamp for GMT. Is there any easy, fast and, most importantly, portable way to get the "offset" in seconds or minutes or hours for your system's time zone? By "portable" I mean platform-independent.
Please note: I know you can do all of this more easily with std::strftime and so on, but I really want to implement this by myself.
Some implementations of std::tm contain a member that holds the local offset (tm_gmtoff on glibc and the BSDs). ... But it isn't portable.
One trick is to take your seconds_since_epoch, and either assign it to a std::time_t, or just make its type std::time_t in the first place instead of long long.
... Oh, wait that isn't quite portable. Some platforms still use a 32 bit time_t. But assuming a 64 bit time_t ...
Then use localtime to get a std::tm:
std::tm tm = *localtime(&seconds_since_epoch);
This isn't officially portable because system_clock and time_t aren't guaranteed to have the same epoch. But in practice they do.
Now take the {year, month, day, hour, minute, second} fields out of the tm and compute a "local epoch". The hard part of this computation is converting the {year, month, day} part into a count of days. You can use days_from_civil from here to do that computation efficiently. Be sure to take the weird offsets into account for tm_year and tm_mon when doing this.
After you get this then subtract seconds_since_epoch from it:
auto offset = local_epoch - seconds_since_epoch;
This is your signed UTC offset in seconds. Positive is east of the prime meridian.
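That localtime-based computation can be sketched more compactly by letting std::mktime do the calendar math (same caveat: portable in practice, not by the letter of the standard, and approximate right around a DST transition):

```cpp
#include <ctime>

// Signed UTC offset in seconds: mktime() interprets a std::tm as
// local time, so feeding it the UTC breakdown of `now` yields
// now - offset; subtracting recovers the offset.
long utcOffsetSeconds(std::time_t now) {
    std::tm utc_tm = *std::gmtime(&now);
    utc_tm.tm_isdst = -1;  // let mktime decide whether DST applies
    return static_cast<long>(now - std::mktime(&utc_tm));
}
```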
In C++20 this simplifies down to:
auto offset = std::chrono::current_zone()->get_info(std::chrono::system_clock::now()).offset;
and offset will have type std::chrono::seconds.
You can get a free, open-source preview of this here. It does require some installation.

Pack bits in a struct in C++ / Arduino

I have a struct:
typedef struct {
uint8_t month; // 1..12 [4 bits]
uint8_t date; // 1..31 [5 bits]
uint8_t hour; // 00..23 [5 bits]
uint8_t minute; // 00..59 [6 bits]
uint8_t second; // 00..59 [6 bits]
} TimeStamp;
but I would like to pack it so it only consumes four bytes instead of five.
Is there a way of shifting the bits to create a tighter struct?
It might not seem much, but it is going into EEPROM, so one byte saved is an extra 512 bytes in a 4 KB page (and I can use those extra six bits left over for something else too).
You're looking for bitfields.
They look like this:
typedef struct {
uint32_t month : 4; // 1..12 [4 bits]
uint32_t date : 5; // 1..31 [5 bits]
uint32_t hour : 5; // 00..23 [5 bits]
uint32_t minute : 6; // 00..59 [6 bits]
uint32_t second : 6; // 00..59 [6 bits]
} TimeStamp;
Depending on your compiler, to make this fit into four bytes with no padding, all the members must be declared with a four-byte underlying type (uint32_t here). Otherwise, with uint8_t members, each field gets padded so that it doesn't straddle a byte boundary, resulting in a five-byte struct. Using a single four-byte underlying type as a general rule helps avoid compiler discrepancies.
Here's an MSDN link that goes a bit in depth into bitfields:
C++ Bit Fields
Bitfields are one "right" way to do this in general, but why not just store seconds since the start of the year instead? Four bytes is more than enough; in fact, four bytes can hold the seconds between 1970 and 2038. Getting the other fields back out is then a simple exercise as long as you know the current year, which you could store alongside the rest of the information as long as the range of times you're interested in spans less than 68 years (and even beyond that, you could group timestamps into 68-year ranges and store an offset for each range).
Another solution is to store the values in one 32 bits variable and retrieve the individual items with bitshifting.
uint32_t timestamp = xxxx;
uint8_t month = timestamp & 0x0F;
uint8_t date = (timestamp & 0x1F0) >> 4;
uint8_t hour = (timestamp & 0x3E00) >> 9;
uint8_t minute = (timestamp & 0xFC000) >> 14;
uint8_t second = (timestamp & 0x3F00000) >> 20;
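The matching pack step for those masks, so the round trip can be verified (packTimestamp is an illustrative name; the shift amounts mirror the masks above):

```cpp
#include <cstdint>

// Pack month/date/hour/minute/second at bit offsets 0/4/9/14/20,
// mirroring the masks used for retrieval.
inline uint32_t packTimestamp(uint32_t month, uint32_t date,
                              uint32_t hour, uint32_t minute,
                              uint32_t second) {
    return month | (date << 4) | (hour << 9) | (minute << 14) | (second << 20);
}
```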
If you can deal with two-second accuracy, the MS-DOS timestamp format used 16 bits to hold the date (year-1980 as 7 bits, month as 4, day as 5) and 16 bits for the time (hour as 5, minute as 6, and second divided by two as 5). On a processor like the Arduino it may be possible to write code that splits values across a 16-bit boundary, but I think the code will be more efficient if you can avoid such a split (as MS-DOS did by accepting two-second accuracy).
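The MS-DOS layout just described can be sketched as two helpers (dosDate/dosTime are illustrative names; note the second count is stored divided by two):

```cpp
#include <cstdint>

// MS-DOS date word: bits 15..9 = year-1980, bits 8..5 = month,
// bits 4..0 = day.
inline uint16_t dosDate(int year, int month, int day) {
    return uint16_t(((year - 1980) << 9) | (month << 5) | day);
}

// MS-DOS time word: bits 15..11 = hour, bits 10..5 = minute,
// bits 4..0 = second / 2 (hence the two-second accuracy).
inline uint16_t dosTime(int hour, int minute, int second) {
    return uint16_t((hour << 11) | (minute << 5) | (second / 2));
}
```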
Otherwise, as was noted in another answer, using a 32-bit number of seconds since some base time will often be more efficient than trying to keep track of things in "calendar format". If all you ever need to do is advance from one calendar-format date to the next, the code to do that may be simpler than code to convert between calendar dates and linear dates, but if you need to do much of anything else (even step backward from a date to the previous one) you'll likely be better off converting dates to/from linear format when they're input or displayed, and otherwise simply work with linear numbers of seconds.
Working with linear numbers of seconds can be made more convenient if you pick as a baseline date March 1 of a leap year. Then while the date exceeds 1461, subtract that from the date and add 4 to the year (16-bit comparison and subtraction are efficient on the Arduino, and even in 2040 the loop may still take less time than a single 16x16 division). If the date exceeds 364, subtract 365 and increment the year, and try that up to twice more [if the date is 365 after the third subtraction, leave it].
Some care is needed to ensure that all corner cases work correctly, but even on a little 8-bit or 16-bit micro, conversions can be surprisingly efficient.

I have a piece of code that computes a duration from FILETIME structs. What does it mean?

I have this function
void prtduration(const FILETIME *ft_start, const FILETIME *ft_end)
{
double duration = (ft_end->dwHighDateTime - ft_start->dwHighDateTime) *
(7 * 60 + 9 + 496e-3)
+ (ft_end->dwLowDateTime - ft_start->dwLowDateTime) / 1e7;
printf("duration %.1f seconds\n", duration);
system("pause");
}
Could anybody explain the working of the following part of the code?
(ft_end->dwHighDateTime - ft_start->dwHighDateTime) *
(7 * 60 + 9 + 496e-3)
+ (ft_end->dwLowDateTime - ft_start->dwLowDateTime) / 1e7;
Wow! What an obfuscated piece of code. Let us try to simplify it:
// Calculate the delta
FILETIME delta;
delta.dwHighDateTime = ft_end->dwHighDateTime - ft_start->dwHighDateTime;
delta.dwLowDateTime = ft_end->dwLowDateTime - ft_start->dwLowDateTime;
// Convert 100ns units to double seconds.
double secs = delta.dwHighDateTime * 429.496 + delta.dwLowDateTime / 1E7;
In actual fact I think this is wrong. It should be:
double secs = delta.dwHighDateTime * 429.4967296 + delta.dwLowDateTime/1E7
Or even more clearly:
double secs = (delta.dwHighDateTime * 4294967296. + delta.dwLowDateTime) / 1E7;
What is happening is that the high part is being multiplied by 2**32 to convert it to 100 ns units; the total is then divided by 10^7 (the number of 100 ns ticks per second) to give seconds.
Note that this is still wrong because the calculation of delta is wrong (in the same way as the original). If the subtraction of the low part underflows, it fails to borrow from the high part. See Microsoft's documentation:
It is not recommended that you add and subtract values from the FILETIME structure to obtain relative times. Instead, you should copy the low- and high-order parts of the file time to a ULARGE_INTEGER structure, perform 64-bit arithmetic on the QuadPart member, and copy the LowPart and HighPart members into the FILETIME structure.
Or actually, in this case, just convert the QuadPart to double and divide. So we end up with:
ULARGE_INTEGER start,end;
start.LowPart = ft_start->dwLowDateTime;
start.HighPart = ft_start->dwHighDateTime;
end.LowPart = ft_end->dwLowDateTime;
end.HighPart = ft_end->dwHighDateTime;
double duration = (end.QuadPart - start.QuadPart)/1E7;
Aside: I bet the reason that the failure to borrow has never been spotted is that the code has never been asked to print a duration of greater than 7 minutes 9 seconds (or if it has, nobody has looked carefully at the result).
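The missing borrow is easy to demonstrate without Windows headers; Halves below is a hypothetical stand-in for FILETIME's two 32-bit words:

```cpp
#include <cstdint>

// Hypothetical stand-in for FILETIME: a 64-bit tick count split
// into two 32-bit words.
struct Halves { uint32_t lo; uint32_t hi; };

inline uint64_t as64(Halves h) {
    return (uint64_t(h.hi) << 32) | h.lo;
}

// Subtracting the halves separately fails to borrow across the
// 32-bit boundary, unlike correct 64-bit subtraction.
inline uint64_t naiveDiff(Halves end, Halves start) {
    Halves d{end.lo - start.lo, end.hi - start.hi};
    return as64(d);
}
```

With a start value just below a 2^32 boundary and an end just above it, the naive result comes out too large by exactly 2^32.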
7 is, very roughly, the number of minutes between increments of the high part of the FILETIME: it increases by 1 about every 7 minutes. That is multiplied by 60 to convert to seconds.
9 + 496e-3 is the leftover seconds and milliseconds, so the whole factor is 7*60 + 9.496 = 429.496 seconds per increment of the high part.
Really, it is very bad code and we shouldn't write like this.
However, it has forced me to learn more about how FILETIME works.
Thanks everyone for the answers, I really appreciate them.

Is it safe to store milliseconds since Epoch in uint32

I'm currently rewriting some old code and came across this:
gettimeofday(&tv, NULL);
unsigned int t = tv.tv_sec * 1000 + tv.tv_usec / 1000;
This really looks like they're trying to store the milliseconds since the Epoch in a uint32. I was fairly sure this would not fit, so I did some testing.
#include <sys/time.h>
#include <stdint.h>
int main() {
struct timeval tv;
gettimeofday(&tv, nullptr);
uint32_t t32 = tv.tv_sec * 1000 + tv.tv_usec / 1000;
int64_t t64 = tv.tv_sec * 1000 + tv.tv_usec / 1000;
return 0;
}
And I was kind of right:
(gdb) print t32
$1 = 1730323142
(gdb) print t64
$2 = 1423364498118
So I guess it's not safe, what they're doing. But what exactly are they doing, why are they doing it, and what actually happens? (In this example roughly the top 10 bits are lost; they only care about diffs.) Do they still keep millisecond precision? (Yes.) Note that they're sending this "timestamp" over the network and still use it for calculations.
No, it is not "safe": it sacrifices portability, accuracy, or both.
It is portable if you only care about the low bits, e.g. if you're sending these times over the network and then diffing them on the other side, with a maximum difference of 2^32 milliseconds (about 4.3 million seconds, or 49.7 days).
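Unsigned 32-bit subtraction is modular, so such diffs come out right even across a wraparound, as long as the true gap is under 2^32 milliseconds; a sketch (elapsedMs is an illustrative name):

```cpp
#include <cstdint>

// Difference between two uint32_t millisecond stamps; unsigned
// modular arithmetic gives the right answer even when the counter
// wrapped past zero between the two readings.
inline uint32_t elapsedMs(uint32_t earlier, uint32_t later) {
    return later - earlier;  // well-defined unsigned wraparound
}
```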
It is accurate if you only run this code on systems where int is 64 bits. There are some machines like that, but not many.
Is it safe? I could answer both no and yes. I said no because, these days, we use almost all the bits of a 32-bit number to count the seconds elapsed since January 1970. When you multiply by 1000 you are, roughly, shifting everything left by about 10 bits (1000 ≈ 2^10), which means losing the top bits.
I could also say yes. At the end of your question you said this number is used to timestamp packets on a network. The question is: how long is a packet expected to live? 10 years, 10 days, 10 seconds? Even after losing the top bits, you can still compute differences between two packets with millisecond precision over a window of about 49.7 days, which I guess is what you want.

How to reach the best performance with different data types?

I am working on a custom class which holds a date and time. The main goal of that class is to reach the best performance. My target platform is Linux.
Currently, I hold members like this
Year - int
Month - int
Day - int
Hour- int
Min - int
Sec - double (because I need milliseconds as well).
What I am thinking now is to change the types to the following:
Year - unsigned short
Month - unsigned char
Day - unsigned char
Hour- unsigned char
Min - unsigned char
Sec - unsigned char
Milisec - unsigned short
Which gives me 2 + 1 + 1 + 1 + 1 + 1 + 2 = 9 bytes.
As you already guessed, I want to fit my class into 8 bytes (there are no other members).
So what is the best approach: merge fields (e.g. seconds and milliseconds) and use bit masks for retrieving the values? Will it affect performance? And if the user passes integers to some setter, would the type cast also affect performance?
Thanks in advance.
There are multiple options you have here. The most compact way would be to have an integer timestamp. It would take a bit of processing to unpack it though. Another option is to use C++ bitfields to pack things tighter. For example, month only needs 4 bits, day 5 bits, minutes and seconds 6 bits. It should make things a bit slower, but only in theory. It all depends on the number of these dates you have and on the amount and kind of processing you're going to perform on them. In some cases having the struct tightly packed into bitfields would increase the performance because of higher memory throughput and better cache utilization. In other cases, the bit manipulation might become more expensive. Like always with performance, better not to guess, but measure.
The simplest way here is to put the pair of seconds and milliseconds into one unsigned short (two bytes).
You don't need a separate Sec (unsigned char) and Milisec (unsigned short), because any number from 0 to 59999 fits into one unsigned short.
Let's call it milliSecPack (unsigned short).
milliSecPack = 1000 * Sec + Milisec;
And
Sec = milliSecPack / 1000;
Milisec = milliSecPack % 1000;
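A sketch of that pack/unpack round trip (packSecMs etc. are illustrative names; the maximum packed value is 59*1000 + 999 = 59999, which fits in 16 bits):

```cpp
#include <cstdint>

// Pack seconds (0..59) and milliseconds (0..999) into one 16-bit value.
inline uint16_t packSecMs(unsigned sec, unsigned ms) {
    return static_cast<uint16_t>(1000u * sec + ms);
}

// Unpack with integer division and remainder.
inline unsigned secOf(uint16_t p) { return p / 1000; }
inline unsigned msOf(uint16_t p)  { return p % 1000; }
```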