I am processing stored dates and times. I store them in a file in GMT in a string format (i.e. DDMMYYYYHHMMSS). When a client queries, I convert this string to a struct tm, then convert that to seconds using mktime, so that I can check for an invalid date/time. Then I convert the seconds back to the string format. All this processing works fine, with no issues at all.
But I have one weird issue: I stored the date and time in GMT while the locale was also GMT. Because of daylight saving time, my locale changed to GMT+1. Now, if I query the stored date and time, I get one hour less, because mktime uses the locale, i.e. GMT+1, to convert the struct tm to seconds (tm_isdst is set to -1 so that mktime detects daylight saving time automatically).
Any ideas how to solve this issue?
Use _mkgmtime (on Windows) or timegm (on Linux) as the GMT counterpart of mktime:
#include <time.h>

time_t mkgmtime(struct tm* tm)
{
#if defined(_WIN32)
    return _mkgmtime(tm);   // Windows counterpart of timegm
#elif defined(__linux__)
    return timegm(tm);      // nonstandard, but provided by glibc
#endif
}
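For the asker's DDMMYYYYHHMMSS strings, usage could look like the following sketch (the parsing helper and its sscanf format are illustrative assumptions, not part of the answer above):

#include <stdio.h>
#include <time.h>

// Parse "DDMMYYYYHHMMSS" as GMT and convert to time_t via mkgmtime.
time_t parse_gmt_string(const char* s)
{
    struct tm tm = {0};
    int day, mon, year, hour, min, sec;
    if (sscanf(s, "%2d%2d%4d%2d%2d%2d", &day, &mon, &year, &hour, &min, &sec) != 6)
        return (time_t)-1;                // malformed input
    tm.tm_mday = day;
    tm.tm_mon  = mon - 1;                 // tm_mon is 0-based
    tm.tm_year = year - 1900;             // tm_year counts from 1900
    tm.tm_hour = hour;
    tm.tm_min  = min;
    tm.tm_sec  = sec;
    return mkgmtime(&tm);                 // interprets the fields as GMT, not local time
}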
The Daylight Saving Time flag (tm_isdst) is greater than zero if Daylight Saving Time is in effect, zero if Daylight Saving Time is not in effect, and less than zero if the information is not available.
http://www.cplusplus.com/reference/ctime/tm/
Here is the general algorithm:
Pass your input to mktime.
Pass the output to gmtime.
Pass the output to mktime.
And here is a coding example:
struct tm input = Convert(input_string); // don't forget to set 'tm_isdst' here
time_t temp1 = mktime(&input);
struct tm* temp2 = gmtime(&temp1);
time_t output = mktime(temp2);
Note that function gmtime is not thread-safe, as it returns the address of a static struct tm.
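Since gmtime returns a pointer to static storage, a reentrant variant is the safer choice in threaded code; a minimal sketch (gmtime_r is POSIX, gmtime_s is the Windows counterpart):

#include <time.h>

struct tm buf;
time_t t = time(NULL);
#if defined(_WIN32)
gmtime_s(&buf, &t);   // Windows signature: (struct tm*, const time_t*)
#else
gmtime_r(&t, &buf);   // POSIX signature: (const time_t*, struct tm*)
#endif
// 'buf' is caller-owned, so no shared static state is involved.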
This question is similar to the following:
convert epoch to time_t
Converting time_t to int
but I don't quite have my answer there.
If you want to get the current date/time you can call time(0) or time(NULL) like in the following standard example:
// current date/time based on current system
time_t now = time(0);
I want to define a function that returns a time_t and allows the client to pass an optional default return value to be used in the event of an error. Further, I want to give that "default" argument a default of its own. This provides symmetry within a library I have, with one-to-one counterparts across several languages, so I'm not looking to redesign all that.
My thought was to set the default return to the epoch. Then, a client could in theory easily evaluate that return, and decide that an epoch coming back was more than likely (if not always) an indication of it being invalid. I can think of some alternatives, but nothing clean, that also fits my existing patterns.
Is there a short and sweet way to make my function signature have a default value for this object equal to the epoch? For instance
...myfunc(...., const time_t &defVal=time(0) );
would be perfect if 0 meant the epoch rather than the current date/time!
The function std::time() returns the number of seconds since the epoch as a std::time_t. Therefore, to represent zero seconds after the epoch, set a std::time_t to zero:
std::time_t t = 0;
So you could do something like:
void myfunc(const std::time_t& defVal = 0)
What is wrong with using 0? (time_t)0 represents the epoch itself (if you want to find the actual epoch date/time, pass (time_t)0 to gmtime() or localtime()).
time_t myfunc(...., time_t defVal = 0 );
Or, you could use (time_t)-1 instead, which is not a valid time: time() returns (time_t)-1 on error, and otherwise time_t represents a non-negative number of seconds since the epoch.
time_t myfunc(...., time_t defVal = (time_t)-1 );
Either way provides the user with something that is easily compared, if they don't provide their own default value.
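A minimal sketch of how a caller might test that sentinel (the function name and extra parameter are placeholders):

time_t myfunc(int someArg, time_t defVal = (time_t)-1);

void caller()
{
    time_t result = myfunc(42);
    if (result == (time_t)-1) {
        // no valid time was produced; handle the error
    }
}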
Let's say we have a text file and read a timestamp from it into a local variable "sTime":
std::string sTime = "1440966379"; // this value has been read from a file.
std::time_t tTime = ?; // this instance of std::time_t shall be assigned the above value.
How do I convert this string properly into a std::time_t, assuming:
We may use STL means only (no boost).
We use the C++11 standard
We don't know which CPU architecture/OS we're using (it should work cross-platform).
We can not make any (static) assumptions on how time_t is internally defined. Of course we know that in most cases it will be an integral type, probably of 32- or 64-bit length, but according to cppreference.com the actual typedef of time_t is not specified. So atoi, atol, atoll, strtoul, etc. are out of the question, at least until we have made sure by other means that we actually picked the correct one out of those possible candidates.
This will keep your time in a standards-approved format (needs #include <chrono>):
std::string sTime = "1440966379"; // this value has been read from a file.
std::chrono::system_clock::time_point newtime(std::chrono::seconds(std::stoll(sTime)));
// this gets you out to a minimum of 35 bits. That leaves fixing the overflow in the
// capable hands of Misters Spock and Scott. Trust me. They've had worse.
From there you can do arithmetic and compares on time_points.
Dumping it back out to a POSIX timestamp:
const std::chrono::system_clock::time_point epoch = std::chrono::system_clock::from_time_t(0);
// 0 is the same in both 32 and 64 bit time_t, so there is no possibility of overflow here
auto delta = newtime - epoch;
std::cout << std::chrono::duration_cast<std::chrono::seconds>(delta).count();
And another SO question deals with getting formatted strings back out:
How to convert std::chrono::time_point to std::tm without using time_t?
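Putting the pieces above together as a self-contained program (a sketch under the same C++11 assumptions):

#include <chrono>
#include <iostream>
#include <string>

int main()
{
    std::string sTime = "1440966379"; // as if read from the file
    std::chrono::system_clock::time_point newtime(
        std::chrono::seconds(std::stoll(sTime)));

    // round-trip back to a POSIX timestamp
    const std::chrono::system_clock::time_point epoch =
        std::chrono::system_clock::from_time_t(0);
    auto delta = newtime - epoch;
    std::cout << std::chrono::duration_cast<std::chrono::seconds>(delta).count()
              << '\n'; // prints 1440966379
}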
I'm doing a lot of calculations with times, building time objects relative to other time objects by adding seconds. The code is supposed to run on embedded devices and servers. Most documentation says about time_t that it's some arithmetic type, usually storing the time since the epoch. How safe is it to assume that time_t stores a number of seconds since something? If we can assume that, then we can just use addition and subtraction rather than localtime, mktime and difftime.
So far I've solved the problem by using a constexpr bool time_tUsesSeconds, denoting whether it is safe to assume that time_t uses seconds. If it's non-portable to assume time_t is in seconds, is there a way to initialize that constant automatically?
time_t timeByAddingSeconds(time_t theTime, int timeIntervalSeconds) {
    if (time_tUsesSeconds) {
        return theTime + timeIntervalSeconds;
    } else {
        tm timeComponents = *localtime(&theTime);
        timeComponents.tm_sec += timeIntervalSeconds;
        return mktime(&timeComponents);
    }
}
The fact that it is in seconds is stated by the POSIX specification, so, if you're coding for POSIX-compliant environments, you can rely on that.
The C++ standard also states that time_t must be an arithmetic type.
Anyway, the Unix timing system (seconds since the Epoch) is going to overflow in 2038. So it's very likely that, before that date, C++ implementations will switch to other data types: either a 64-bit integer or a more complex datatype. Switching to a 64-bit integer would break binary compatibility with previous code (since it requires bigger variables), and everything would have to be recompiled. Using 32-bit opaque handles would not break binary compatibility: you could change the underlying library and everything would still work, but time_t would no longer be a time in seconds, it would be an index into an array of times in seconds. For this reason, it's suggested that you use the functions you mentioned to manipulate time_t values, and not assume anything about time_t.
If C++11 is available, you can use std::chrono::system_clock's to_time_t and from_time_t to convert to/from std::chrono::time_point, and use chrono's arithmetic operators.
If your calculations involve the Gregorian calendar, you can use the HowardHinnant/date library, or C++20's new calendar facilities in chrono (they have essentially the same API).
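A brief sketch of that chrono round trip (assumes only C++11; add_seconds is an illustrative name):

#include <chrono>
#include <ctime>

std::time_t add_seconds(std::time_t t, int secs)
{
    auto tp = std::chrono::system_clock::from_time_t(t);
    tp += std::chrono::seconds(secs);                 // arithmetic on the time_point
    return std::chrono::system_clock::to_time_t(tp);  // back to time_t
}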
There is no requirement in standard C or in standard C++ for the units that time_t represents. To work with seconds portably you need to use struct tm. You can convert between time_t and struct tm with mktime and localtime.
Rather than determine whether time_t is in seconds, since time_t is an arithmetic type, you can instead calculate a time_t value that represents one second, and work with that. An earlier answer I wrote explains the method and has some caveats; here's some example code (bad_time() is a custom exception class):
time_t get_sec_diff() {
    std::tm datum_day;
    datum_day.tm_sec = 0;
    datum_day.tm_min = 0;
    datum_day.tm_hour = 12;
    datum_day.tm_mday = 2;
    datum_day.tm_mon = 0;
    datum_day.tm_year = 30;
    datum_day.tm_isdst = -1;
    const time_t datum_time = mktime(&datum_day);
    if ( datum_time == -1 ) {
        throw bad_time();
    }
    datum_day.tm_sec += 1;                 // one second after the datum
    const time_t next_sec_time = mktime(&datum_day);
    if ( next_sec_time == -1 ) {
        throw bad_time();
    }
    return (next_sec_time - datum_time);   // time_t units per second
}
You can call the function once and store the value in a const, and then just use it whenever you need a time_t second. I don't think it'll work in a constexpr though.
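Usage might look like this sketch (the names are placeholders):

const time_t one_second = get_sec_diff();  // compute once, reuse everywhere

time_t thirty_seconds_later(time_t t)
{
    return t + 30 * one_second;
}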
My two cents: on Windows it is in seconds over the long run, but the interval from one second's value to the next is usually 18 clock ticks of 54.925 ms, and sometimes 19. The reason for this is explained in this post.
(Answering own question)
One answer suggests that, as long as one is using POSIX, time_t is in seconds and arithmetic on time_t should work.
A second answer calculates the time_t per second, and uses that as a factor when doing arithmetic. But there are still some assumptions about time_t made.
In the end I decided portability is more important; I don't want my code to fail silently on some embedded device. So I used a third way. It involves storing an integer denoting the time since the program started. I.e. I define
const static time_t time0 = time(nullptr);
static tm time0Components = *localtime(&time0);
All time values used throughout the program are just integers, denoting the time difference in seconds since time0. To go from time_t to this delta seconds, I use difftime. To go back to time_t, I use something like this:
time_t getTime_t(int timeDeltaSeconds) {
    tm components = time0Components;
    components.tm_sec += timeDeltaSeconds;
    return mktime(&components);
}
This approach allows making operations like +,- cheap, but going back to time_t is expensive. Note that the time delta values are only meaningful for the current run of the program. Note also that time0Components has to be updated when there's a time zone change.
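For completeness, the forward direction (time_t to delta seconds) via difftime, as mentioned above, could be a short sketch like:

int getDeltaSeconds(time_t t) {
    // difftime avoids assuming that time_t itself counts seconds
    return (int)difftime(t, time0);
}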
I am looking for a function in C++ that calculates how many seconds have passed from 1/1/1970 until today.
#include <time.h>
time_t seconds_past_epoch = time(0);
Available on most operating systems.
time_t time(time_t *ptr)
include: time.h
Returns the number of seconds that have passed since midnight, 1st January 1970 GMT (or 7 pm, 31st December 1969 EST). If the parameter is not NULL, the same value is also stored in the location pointed to. The value returned may be used as a reliable measure of elapsed time, and may be passed to ctime() for conversion into a human-readable string.
Example:
time_t t1 = time(NULL);
do_something_long();
time_t t2 = time(NULL);
printf("%.0f seconds elapsed\n", difftime(t2, t1));
time_t values are produced from the clock by time.
time_t values are produced from y,m,d,h,m,s parts by mktime and timegm.
time_t values are analysed into y,m,d,h,m,s by localtime and gmtime.
time_t values are converted to readable strings by ctime.
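Putting those four steps together as a minimal round trip (a sketch):

#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t now = time(NULL);             // from the clock
    struct tm *parts = localtime(&now);  // analysed into y,m,d,h,m,s
    time_t back = mktime(parts);         // parts back to a time_t
    printf("%s", ctime(&back));          // readable string
    return 0;
}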
See man mktime:
#include <time.h>
time_t now = time(NULL);
time_t secsSinceEpoch = mktime(localtime(&now)); // localtime needs a valid time_t pointer, not NULL
What's the best way to convert datetimes between local time and UTC in C/C++?
By "datetime", I mean some time representation that contains date and time-of-day. I'll be happy with time_t, struct tm, or any other representation that makes it possible.
My platform is Linux.
Here's the specific problem I'm trying to solve: I get a pair of values containing a julian date and a number of seconds into the day. Those values are in GMT. I need to convert that to a local-timezone "YYYYMMDDHHMMSS" value. I know how to convert the julian date to Y-M-D, and obviously it is easy to convert seconds into HHMMSS. However, the tricky part is the timezone conversion. I'm sure I can figure out a solution, but I'd prefer to find a "standard" or "well-known" way rather than stumbling around.
A possibly related question is Get Daylight Saving Transition Dates For Time Zones in C
You're supposed to use combinations of gmtime/localtime and timegm/mktime. That should give you the orthogonal tools to do conversions between struct tm and time_t.
For UTC/GMT:
time_t t;
struct tm tm;
struct tm * tmp;
...
t = timegm(&tm);
...
tmp = gmtime(&t);
For localtime:
t = mktime(&tm);
...
tmp = localtime(&t);
All tzset() does is set the internal time zone variables from the TZ environment variable. I don't think it needs to be called more than once.
If you're trying to convert between timezones, you should modify the struct tm's tm_gmtoff.
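Applied to the asker's specific problem (GMT year/month/day plus seconds-of-day to a local "YYYYMMDDHHMMSS" string), a sketch along those lines; the function name is illustrative, and timegm/localtime_r are nonstandard but available on the asker's Linux platform:

#include <stdio.h>
#include <time.h>

void gmt_to_local_string(int year, int month, int day, int secs_of_day,
                         char out[15])
{
    struct tm utc = {0};
    utc.tm_year = year - 1900;             // tm_year counts from 1900
    utc.tm_mon  = month - 1;               // tm_mon is 0-based
    utc.tm_mday = day;
    utc.tm_hour = secs_of_day / 3600;
    utc.tm_min  = (secs_of_day / 60) % 60;
    utc.tm_sec  = secs_of_day % 60;

    time_t t = timegm(&utc);               // interpret the fields as GMT
    struct tm local;
    localtime_r(&t, &local);               // reentrant local-time conversion
    strftime(out, 15, "%Y%m%d%H%M%S", &local);
}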
If on Windows, you don't have timegm() available to you:
struct tm *tptr;
time_t secs, local_secs, gmt_secs;

time( &secs );  // current time in GMT

// Remember that localtime/gmtime overwrite the same static location
tptr = localtime( &secs );
local_secs = mktime( tptr );

tptr = gmtime( &secs );
gmt_secs = mktime( tptr );   // treats the UTC fields as local, yielding the offset

long diff_secs = long(local_secs - gmt_secs);
or something similar...
If you need to worry about converting date/time with timezone rules, you might want to look into ICU.