Convert Windows FILETIME to seconds in Unix/Linux - C++

I have a trace file in which each transaction time is represented in Windows FILETIME format. These time values look like this:
128166372003061629
128166372016382155
128166372026382245
Could you please let me know whether there is any C/C++ library in Unix/Linux to extract the actual time (especially the seconds) from these numbers? Or should I write my own conversion function?

It's quite simple: the Windows epoch starts at 1601-01-01T00:00:00Z, which is 11644473600 seconds before the UNIX/Linux epoch (1970-01-01T00:00:00Z). Windows ticks are 100-nanosecond intervals. Thus, a function to get seconds since the UNIX epoch is as follows:
#define WINDOWS_TICK 10000000
#define SEC_TO_UNIX_EPOCH 11644473600LL

unsigned WindowsTickToUnixSeconds(long long windowsTicks)
{
    return (unsigned)(windowsTicks / WINDOWS_TICK - SEC_TO_UNIX_EPOCH);
}

The FILETIME type is the number of 100 ns increments since January 1, 1601.
To convert this into a unix time_t you can use the following.
#define TICKS_PER_SECOND 10000000
#define EPOCH_DIFFERENCE 11644473600LL

time_t convertWindowsTimeToUnixTime(long long int input)
{
    long long int temp;
    temp = input / TICKS_PER_SECOND; // convert from 100ns intervals to seconds
    temp = temp - EPOCH_DIFFERENCE;  // subtract number of seconds between epochs
    return (time_t) temp;
}
You may then use the ctime functions to manipulate it.

(I discovered I can't enter readable code in a comment, so...)
Note that Windows can represent times outside the range of POSIX epoch times, and thus a conversion routine should return an "out-of-range" indication as appropriate. The simplest method is:
... (as above)
long long secs;
time_t t;

secs = (windowsTicks / WINDOWS_TICK - SEC_TO_UNIX_EPOCH);
t = (time_t) secs;
if (secs != (long long) t)  // checks for truncation/overflow/underflow
    return (time_t) -1;     // value not representable as a POSIX time
return t;

New answer for old question.
Using C++11's <chrono> plus this free, open-source library:
https://github.com/HowardHinnant/date
One can very easily convert these timestamps to std::chrono::system_clock::time_point, and also to a human-readable format in the Gregorian calendar:
#include "date.h"
#include <iostream>

std::chrono::system_clock::time_point
from_windows_filetime(long long t)
{
    using namespace std::chrono;
    using namespace date;
    using wfs = duration<long long, std::ratio<1, 10'000'000>>;
    return system_clock::time_point{floor<system_clock::duration>(wfs{t} -
        (sys_days{1970_y/jan/1} - sys_days{1601_y/jan/1}))};
}

int
main()
{
    using namespace date;
    std::cout << from_windows_filetime(128166372003061629) << '\n';
    std::cout << from_windows_filetime(128166372016382155) << '\n';
    std::cout << from_windows_filetime(128166372026382245) << '\n';
}
For me this outputs:
2007-02-22 17:00:00.306162
2007-02-22 17:00:01.638215
2007-02-22 17:00:02.638224
On Windows, you can actually skip the floor, and get that last decimal digit of precision:
return system_clock::time_point{wfs{t} -
    (sys_days{1970_y/jan/1} - sys_days{1601_y/jan/1})};
2007-02-22 17:00:00.3061629
2007-02-22 17:00:01.6382155
2007-02-22 17:00:02.6382245
With optimizations on, the sub-expression (sys_days{1970_y/jan/1} - sys_days{1601_y/jan/1}) will translate at compile time to days{134774} which will further compile-time-convert to whatever units the full-expression requires (seconds, 100-nanoseconds, whatever). Bottom line: This is both very readable and very efficient.

The solution that divides and adds will not work correctly with daylight saving time.
Here is a snippet that works, but it is for Windows.
time_t FileTime_to_POSIX(FILETIME ft)
{
    FILETIME localFileTime;
    FileTimeToLocalFileTime(&ft, &localFileTime);
    SYSTEMTIME sysTime;
    FileTimeToSystemTime(&localFileTime, &sysTime);
    struct tm tmtime = {0};
    tmtime.tm_year = sysTime.wYear - 1900;
    tmtime.tm_mon = sysTime.wMonth - 1;
    tmtime.tm_mday = sysTime.wDay;
    tmtime.tm_hour = sysTime.wHour;
    tmtime.tm_min = sysTime.wMinute;
    tmtime.tm_sec = sysTime.wSecond;
    tmtime.tm_wday = 0;
    tmtime.tm_yday = 0;
    tmtime.tm_isdst = -1;
    time_t ret = mktime(&tmtime);
    return ret;
}

Assuming you are asking about the FILETIME Structure, then FileTimeToSystemTime does what you want, you can get the seconds from the SYSTEMTIME structure it produces.

Here's essentially the same solution except this one encodes negative numbers from Ldap properly and lops off the last 7 digits before conversion.
public static int LdapValueAsUnixTimestamp(SearchResult searchResult, string fieldName)
{
    var strValue = LdapValue(searchResult, fieldName);
    if (strValue == "0") return 0;
    if (strValue == "9223372036854775807") return -1;
    return (int)(long.Parse(strValue.Substring(0, strValue.Length - 7)) - 11644473600);
}

If somebody needs to convert it in MySQL:
SELECT timestamp,
       FROM_UNIXTIME(ROUND((timestamp / CAST(10000000 AS UNSIGNED INTEGER))
                           - CAST(11644473600 AS UNSIGNED INTEGER), 0)) AS Converted
FROM events
LIMIT 100

Also, here's a pure C# way to do it:
(Int32)(DateTime.FromFileTimeUtc(129477880901875000).Subtract(new DateTime(1970, 1, 1))).TotalSeconds;
Here's the result of both methods in my immediate window:
(Int32)(DateTime.FromFileTimeUtc(long.Parse(strValue)).Subtract(new DateTime(1970, 1, 1))).TotalSeconds;
1303314490
(int)(long.Parse(strValue.Substring(0, strValue.Length - 7)) - 11644473600)
1303314490
DateTime.FromFileTimeUtc(long.Parse(strValue))
{2011-04-20 3:48:10 PM}
Date: {2011-04-20 12:00:00 AM}
Day: 20
DayOfWeek: Wednesday
DayOfYear: 110
Hour: 15
InternalKind: 4611686018427387904
InternalTicks: 634389112901875000
Kind: Utc
Millisecond: 187
Minute: 48
Month: 4
Second: 10
Ticks: 634389112901875000
TimeOfDay: {System.TimeSpan}
Year: 2011
dateData: 5246075131329262904

Related

Convert timestamp string into local time

How to convert a timestamp string, e.g. "1997-07-16T19:20:30.45+01:00", into UTC time? The result of the conversion should be a timespec structure as in the utimensat input arguments.
// sorry, should be get_utc_time
timespec get_local_time(const char* ts);
P.S. I need a solution using either standard Linux/C/C++ facilities (whatever that means) or the Boost C++ library.
Assumption: You want the "+01:00" to be subtracted from the "1997-07-16T19:20:30.45" to get a UTC timestamp and then convert that into a timespec.
Here is a C++20 solution that will automatically handle the centisecond precision and the [+/-]hh:mm UTC offset for you:
#include <chrono>
#include <ctime>
#include <sstream>
std::timespec
get_local_time(const char* ts)
{
    using namespace std;
    using namespace chrono;
    istringstream in{ts};
    in.exceptions(ios::failbit);
    sys_time<nanoseconds> tp;
    in >> parse("%FT%T%Ez", tp);
    auto tps = floor<seconds>(tp);
    return {.tv_sec = tps.time_since_epoch().count(),
            .tv_nsec = (tp - tps).count()};
}
When used like this:
auto r = get_local_time("1997-07-16T19:20:30.45+01:00");
std::cout << '{' << r.tv_sec << ", " << r.tv_nsec << "}\n";
The result is:
{869077230, 450000000}
std::chrono::parse will subtract the +/-hh:mm UTC offset from the parsed local value to obtain a UTC timestamp (to up to nanosecond precision).
If the input has whole-second precision, this code will handle it. If the precision is as fine as nanoseconds, this code will handle that too.
If the input does not conform to this syntax, an exception will be thrown. If this is not desired, remove in.exceptions(ios::failbit);, and then you must check in.fail() to see if the parse failed.
This code will also handle dates prior to the UTC epoch of 1970-01-01 by putting a negative value into .tv_sec, and a positive value ([0, 999'999'999]) into .tv_nsec. Note that handling pre-epoch dates is normally outside of the timespec specification, and so most C utilities will not handle such a timespec value.
If you can not use C++20, or if your vendor has yet to implement this part of C++20, there exists a header-only library which implements this part of C++20, and works with C++11/14/17. I have not linked to it here as it is not in the set: "standard Linux/C/C++ facilities (whatever that means) or Boost C++ library". I'm happy to add a link if requested.
For comparison, here's how you could do this in mostly-standard C. It's somewhat cumbersome, because C's date/time support is still rather fragmented, unlike the much more complete support which C++ has, as illustrated in Howard Hinnant's answer. (Also, two of the functions I'm going to use are not specified by the C Standard, although they're present on many/most systems.)
If you have the semistandard strptime function, and if you didn't care about subseconds and explicit time zones, it would be relatively straightforward. strptime is a (partial) inverse of strftime, parsing a time string under control of a format specifier, and constructing a struct tm. Then you can call mktime to turn that struct tm into a time_t. Then you can use the time_t to populate a struct timespec.
char *inpstr = "1997-07-16T19:20:30.45+01:00";
struct tm tm;
memset(&tm, 0, sizeof(tm));
char *p = strptime(inpstr, "%Y-%m-%dT%H:%M:%S", &tm);
if(p == NULL) {
    printf("strptime failed\n");
    exit(1);
}
tm.tm_isdst = -1;
time_t t = mktime(&tm);
if(t == -1) {
    printf("mktime failed\n");
    exit(1);
}
struct timespec ts;
ts.tv_sec = t;
ts.tv_nsec = 0;
printf("%ld %ld\n", ts.tv_sec, ts.tv_nsec);
printf("%s", ctime(&ts.tv_sec));
printf("rest = %s\n", p);
In my time zone, currently UTC+4, this prints
869095230 0
Wed Jul 16 19:20:30 1997
rest = .45+01:00
But you did have subsecond information, and you did have an explicit time zone, and there's no built-in support for those in any of the basic C time-conversion functions, so you have to do things "by hand". Here's one way to do it. I'm going to use sscanf to separate out the year, month, day, hour, minute, second, and other components. I'm going to use those components to populate a struct tm, then use the semistandard timegm function to convert them straight to a UTC time. (That is, I temporarily assume that the HH:MM:SS part was UTC.) Then I'm going to manually correct for the time zone. Finally, I'm going to populate the tv_nsec field of the struct timesec with the subsecond information I extracted back in the beginning.
int y, m, d;
int H, M, S;
int ss;      /* subsec */
char zs;     /* zone sign */
int zh, zm;  /* zone hours, minutes */

int r = sscanf(inpstr, "%d-%d-%dT%d:%d:%d.%2d%c%d:%d",
               &y, &m, &d, &H, &M, &S, &ss, &zs, &zh, &zm);
if(r != 10 || (zs != '+' && zs != '-')) {
    printf("parse failed\n");
    exit(1);
}

struct tm tm;
memset(&tm, 0, sizeof(tm));
tm.tm_year = y - 1900;
tm.tm_mon = m - 1;
tm.tm_mday = d;
tm.tm_hour = H;
tm.tm_min = M;
tm.tm_sec = S;

time_t t = timegm(&tm);
if(t == -1) {
    printf("timegm failed\n");
    exit(1);
}

long int z = ((zh * 60L) + zm) * 60;
if(zs == '+')   /* East of Greenwich */
    t -= z;
else t += z;

struct timespec ts;
ts.tv_sec = t;
ts.tv_nsec = ss * (1000000000 / 100);

printf("%ld %ld\n", ts.tv_sec, ts.tv_nsec);
printf("%s", ctime(&ts.tv_sec));
printf(".%02ld\n", ts.tv_nsec / (1000000000 / 100));
For me this prints
869077230 450000000
Wed Jul 16 14:20:30 1997
.45
The time zone and subsecond information have been honored.
This code makes no special provision for dates prior to 1970. I think it will work if mktime/timegm work.
As mentioned, two of these functions — strptime and timegm — are not specified by the ANSI/ISO C Standard and are therefore not guaranteed to be available everywhere.

NTP timestamps using std::chrono

I'm trying to represent NTP timestamps (including the NTP epoch) in C++ using std::chrono. Therefore, I decided to use a 64-bit unsigned int (unsigned long long) for the ticks and divide it such that the lowest 28 bits represent the fraction of a second (accepting truncation of 4 bits in comparison to the original standard timestamps), the next 32 bits represent the seconds of an epoch, and the highest 4 bits represent the epoch. This means that every tick takes 1 / (2^28 - 1) seconds.
I now have the following simple implementation:
#include <chrono>

/**
 * Implements a custom C++11 clock starting at 1 Jan 1900 UTC with a tick duration of 2^(-28) seconds.
 */
class NTPClock
{
public:
    static constexpr bool is_steady = false;
    static constexpr unsigned int era_bits = 4;                    // epoch uses 4 bits
    static constexpr unsigned int fractional_bits = 32 - era_bits; // fraction uses 28 bits
    static constexpr unsigned int seconds_bits = 32;               // second uses 32 bits

    using duration = std::chrono::duration<unsigned long long, std::ratio<1, (1<<fractional_bits)-1>>;
    using rep = typename duration::rep;
    using period = typename duration::period;
    using time_point = std::chrono::time_point<NTPClock>;

    /**
     * Return the current time of this clock. Note that the implementation is based on the assumption
     * that the system clock starts at 1 Jan 1970, which is not defined with C++11 but seems to be
     * standard in most compilers.
     *
     * @return The current time as represented by an NTP timestamp
     */
    static time_point now() noexcept
    {
        return time_point
        (
            std::chrono::duration_cast<duration>(std::chrono::system_clock::now().time_since_epoch())
            + std::chrono::duration_cast<duration>(std::chrono::hours(24*25567)) // 25567 days have passed between 1 Jan 1900 and 1 Jan 1970
        );
    }
};
Unfortunately, a simple test reveals this does not work as expected:
#include <chrono>
#include <iostream>
#include <catch2/catch.hpp>
#include "NTPClock.h"

using namespace std::chrono;

TEST_CASE("NTPClock_now")
{
    auto ntp_dur = NTPClock::now().time_since_epoch();
    auto sys_dur = system_clock::now().time_since_epoch();
    std::cout << duration_cast<hours>(ntp_dur) << std::endl;
    std::cout << ntp_dur << std::endl;
    std::cout << duration_cast<hours>(sys_dur) << std::endl;
    std::cout << sys_dur << std::endl;
    REQUIRE(duration_cast<hours>(ntp_dur) - duration_cast<hours>(sys_dur) == hours(24*25567));
}
Output:
613612h
592974797620267184[1/268435455]s
457599h
16473577714886015[1/10000000]s
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
PackageTest.exe is a Catch v2.11.1 host application.
Run with -? for options
-------------------------------------------------------------------------------
NTPClock_now
-------------------------------------------------------------------------------
D:\Repos\...\TestNTPClock.cpp(10)
...............................................................................
D:\Repos\...\TestNTPClock.cpp(18): FAILED:
REQUIRE( duration_cast<hours>(ntp_dur)-duration_cast<hours>(sys_dur) == hours(24*25567) )
with expansion:
156013h == 613608h
===============================================================================
test cases: 1 | 1 failed
assertions: 1 | 1 failed
I also tried removing the offset of 25567 days in NTPClock::now and asserting equality, without success. I'm not sure what is going wrong here. Can anybody help?
Your tick period of 1/268'435'455 seconds is unfortunately both extremely fine and does not reduce to much of a simpler fraction under the conversions you need (i.e. between system_clock::duration and NTPClock::duration). This leads to internal overflow of your unsigned long long NTPClock::rep.
For example, on Windows the system_clock tick period is 1/10,000,000 seconds. The current value of now() is around 1.6 x 10^16. To convert this to NTPClock::duration you have to compute 1.6 x 10^16 times 53,687,091/2,000,000. The first step in that is the value times the numerator of the conversion factor, which is about 8 x 10^23 and overflows unsigned long long.
There are a couple of ways to overcome this overflow, and both involve using at least an intermediate representation with a larger range. One could use a 128-bit integral type, but I don't believe that is available on Windows, except perhaps via a 3rd-party library. long double is another option. This might look like:
static time_point now() noexcept
{
    using imd = std::chrono::duration<long double, period>;
    return time_point
    (
        std::chrono::duration_cast<duration>(imd{std::chrono::system_clock::now().time_since_epoch()
            + std::chrono::hours(24*25567)})
    );
}
That is, perform the offset shift with no conversion (system_clock::duration units), then convert that to the intermediate representation imd which has a long double rep, and the same period as NTPClock. This will use long double to compute 1.6 x 1016 times 53,687,091/2,000,000. Then finally duration_cast that to NTPClock::duration. This final duration_cast will end up doing nothing but casting long double to unsigned long long as the conversion factor is simply 1/1.
Another way to accomplish the same thing is:
static time_point now() noexcept
{
    return time_point
    (
        std::chrono::duration_cast<duration>(std::chrono::system_clock::now().time_since_epoch()
            + std::chrono::hours(24*25567)*1.0L)
    );
}
This takes advantage of the fact that you can multiply any duration by 1, but with alternate units and the result will have a rep with the common_type of the two arguments, but otherwise have the same value. I.e. std::chrono::hours(24*25567)*1.0L is a long double-based hours. And that long double carries through the rest of the computation until the duration_cast brings it back to NTPClock::duration.
This second way is simpler to write, but code reviewers may not understand the significance of the *1.0L, at least until it becomes a more common idiom.

c++ \ Convert FILETIME to seconds

How can I convert a FILETIME to seconds? I need to compare two FILETIME objects.
I found this, but it seems like it doesn't do the trick:
ULARGE_INTEGER ull;
ull.LowPart = lastWriteTimeLow1;
ull.HighPart = lastWriteTimeHigh1;
time_t lastModified = ull.QuadPart / 10000000ULL - 11644473600ULL;

ULARGE_INTEGER xxx;
xxx.LowPart = currentTimeLow1;
xxx.HighPart = currentTimeHigh1;
time_t current = xxx.QuadPart / 10000000ULL - 11644473600ULL;

unsigned long SecondsInterval = current - lastModified;
if (SecondsInterval > RequiredSecondsFromNow)
    return true;
return false;
I compared two FILETIMEs and expected a difference of 10 seconds, but it gave me ~7000...
Is that a good way to extract number of seconds?
The code you give seems correct; it converts a FILETIME to a UNIX timestamp (obviously losing precision, as FILETIME has a theoretical resolution of 100 nanoseconds). Are you sure that the FILETIMEs you compare really differ by only 10 seconds?
I actually use a very similar code in some software:
double time_d()
{
    FILETIME ft;
    GetSystemTimeAsFileTime(&ft);
    __int64* val = (__int64*) &ft;
    return static_cast<double>(*val) / 10000000.0 - 11644473600.0; // epoch is Jan. 1, 1601: 134774 days to Jan. 1, 1970
}
This returns a UNIX-like timestamp (in seconds since 1970) with sub-second resolution.
For the sake of comparison,
double toSeconds(const FILETIME& t)
{
    return LARGE_INTEGER{t.dwLowDateTime, (long)t.dwHighDateTime}.QuadPart * 1e-7;
}
is the simplest.
You can use this macro to get time in UNIX epochs:
#define windows_time_to_unix_epoch(x) (((x) - 116444736000000000LL) / 10000000LL)

Convert date and time numbers to time_t AND specify the timezone

I have the following integers:
int year, mon, day, hour, min, sec;
Their values are: 2012, 06, 27, 12, 47, 53 respectively. I want to represent the date time of "2012/06/27 12:47:53 UTC" if I have selected 'UTC' somewhere else in my application, or "2012/06/27 12:47:53 AEST" if I have selected 'AEST' somewhere else in my application.
I want to convert this into a time_t, and here's the code that I am current using to do so:
struct tm timeinfo;
timeinfo.tm_year = year - 1900;
timeinfo.tm_mon = mon - 1;
timeinfo.tm_mday = day;
timeinfo.tm_hour = hour;
timeinfo.tm_min = min;
timeinfo.tm_sec = sec;
//timeinfo.tm_isdst = 0; //TODO should this be set?

//TODO find a POSIX or C standard way to convert tm to time_t in UTC instead of local time
#ifdef UNIX
return timegm(&timeinfo);
#else
return mktime(&timeinfo); //FIXME Still incorrect
#endif
So I am using a tm struct and mktime; however, this is not working well because it always assumes my local time zone.
What is the correct way of doing this?
So below is the solution that I have come up with so far.
It basically does one of three things:

- If UNIX, simply use timegm.
- If not UNIX, either do math using the difference between the UTC epoch and the local epoch as an offset
  (reservation: the math may be incorrect),
- or set the "TZ" environment variable to UTC temporarily
  (reservation: this will trip up if/when this code needs to be multithreaded).
namespace tmUtil
{
    int const tm_yearCorrection = -1900;
    int const tm_monthCorrection = -1;
    int const tm_isdst_dontKnow = -1;

#if !defined(DEBUG_DATETIME_TIMEGM_ENVVARTZ) && !(defined(UNIX) && !defined(DEBUG_DATETIME_TIMEGM))
    static bool isLeap(int year)
    {
        return
            (year % 4) ? false
            : (year % 100) ? true
            : (year % 400) ? false
            : true;
    }

    static int daysIn(int year)
    {
        return isLeap(year) ? 366 : 365;
    }
#endif
}

time_t utc(int year, int mon, int day, int hour, int min, int sec)
{
    struct tm time = {0};
    time.tm_year = year + tmUtil::tm_yearCorrection;
    time.tm_mon = mon + tmUtil::tm_monthCorrection;
    time.tm_mday = day;
    time.tm_hour = hour;
    time.tm_min = min;
    time.tm_sec = sec;
    time.tm_isdst = tmUtil::tm_isdst_dontKnow;

#if defined(UNIX) && !defined(DEBUG_DATETIME_TIMEGM) //TODO remove && 00
    time_t result;
    result = timegm(&time);
    return result;
#else
#if !defined(DEBUG_DATETIME_TIMEGM_ENVVARTZ)
    //TODO check that math is correct
    time_t fromEpochUtc = mktime(&time);
    struct tm localData;
    struct tm utcData;
    struct tm* loc = localtime_r(&fromEpochUtc, &localData);
    struct tm* utc = gmtime_r(&fromEpochUtc, &utcData);
    int utcYear = utc->tm_year - tmUtil::tm_yearCorrection;
    int gmtOff =
          (loc->tm_sec  - utc->tm_sec)
        + (loc->tm_min  - utc->tm_min)  * 60
        + (loc->tm_hour - utc->tm_hour) * 60 * 60
        + (loc->tm_yday - utc->tm_yday) * 60 * 60 * 24
        + (loc->tm_year - utc->tm_year) * 60 * 60 * 24 * tmUtil::daysIn(utcYear);

#ifdef UNIX
    if (loc->tm_gmtoff != gmtOff)
    {
        StringBuilder err("loc->tm_gmtoff=", StringBuilder((int)(loc->tm_gmtoff)), " but gmtOff=", StringBuilder(gmtOff));
        THROWEXCEPTION(err);
    }
#endif

    int resultInt = fromEpochUtc + gmtOff;
    time_t result;
    result = (time_t)resultInt;
    return result;
#else
    //TODO Find a way to do this without manipulating environment variables
    time_t result;
    char *tz;
    tz = getenv("TZ");
    setenv("TZ", "", 1);
    tzset();
    result = mktime(&time);
    if (tz)
        setenv("TZ", tz, 1);
    else
        unsetenv("TZ");
    tzset();
    return result;
#endif
#endif
}
N.B. StringBuilder is an internal class, it doesn't matter for the purposes of this question.
More info:
I know that this can be done easily using Boost et al., but that is NOT an option. I need it to be done mathematically, or using a C or C++ standard function, or a combination thereof.
timegm appears to solve this problem; however, it doesn't appear to be part of the C or POSIX standard. This code is currently compiled on multiple platforms (Linux, OSX, Windows, iOS, Android (NDK)), so I need to find a way to make it work across all of these platforms, even if the solution involves #ifdef $PLATFORM type things.
It makes me want to throw up in my mouth a little bit, but you could convert it to a string with strftime(), replace the timezone in the string and then convert it back with strptime() and into a time_t with mktime(). In detail:
#ifdef UGLY_HACK_VOIDS_WARRANTY
time_t convert_time(const struct tm* tm)
{
    const size_t BUF_SIZE = 256;
    char buffer[BUF_SIZE];
    strftime(buffer, BUF_SIZE, "%F %H:%M:%S %z", tm);
    strncpy(&buffer[20], "+0001", 5); // +0001 is the time-zone offset from UTC in hours
    struct tm newtime = {0};
    strptime(buffer, "%F %H:%M:%S %z", &newtime);
    return mktime(&newtime);
}
#endif
However, I would highly recommend you convince the powers that be that boost is an option after all. Boost has great support for custom timezones. There are other libraries that do this elegantly as well.
If all you want is to convert a struct tm given in UTC to a time_t then you can do it like this:
#include <time.h>
time_t utc_to_time_t(struct tm* timeinfo)
{
    tzset(); // load timezone information (this can be called just once)
    time_t t = mktime(timeinfo);
    return t - timezone;
}
This basically converts the UTC time to time_t as if the given time was local, then applies a timezone correction to the result to bring it back to UTC.
Tested on gcc/cygwin and Visual Studio 2010.
I hope this helps!
Update: As you very well pointed out, my solution above may return a time_t value that is one hour off when the daylight saving state of the queried date differs from that of the current time.
The solution for that problem is to have an additional function that can tell you whether a date falls in the DST region or not, and use that together with the current DST flag to adjust the time returned by mktime. This is actually easy to do. When you call mktime() you just have to set the tm_isdst member to -1, and the system will do its best to figure out the DST at the given time for you. Assuming we trust the system on this, you can use that information to apply a correction:
#include <time.h>

time_t utc_to_time_t(struct tm* timeinfo)
{
    tzset();                 // load timezone information (this can be called just once)
    timeinfo->tm_isdst = -1; // let the system figure this out for us
    time_t t = mktime(timeinfo) - timezone;
    if (daylight == 0 && timeinfo->tm_isdst != 0)
        t += 3600;
    else if (daylight != 0 && timeinfo->tm_isdst == 0)
        t -= 3600;
    return t;
}
If you are on Linux or another UNIX or UNIX-like system, then you might have a timegm function that does what you want. The linked manual page has a portable implementation, so you can make it yourself. On Windows I know of no such function.
After beating my head against this for days trying to get a timegm function that works on Android (which does not ship with one), I finally discovered this simple and elegant solution, which works beautifully:
time_t timegm(struct tm *tm)
{
    time_t t = mktime(tm);
    return t + localtime(&t)->tm_gmtoff;
}
I don't see why this wouldn't be a suitable cross-platform solution.
I hope this helps!
time_t my_timegm2(struct tm *tm)
{
    time_t ret = tm->tm_sec + tm->tm_min*60 + tm->tm_hour*3600 + tm->tm_yday*86400;
    ret += ((time_t)31536000) * (tm->tm_year - 70);
    ret += ((tm->tm_year - 69)/4)*86400 - ((tm->tm_year - 1)/100)*86400 + ((tm->tm_year + 299)/400)*86400;
    return ret;
}
There seems to be a simpler solution:
#include <time64.h>

time_t timegm(struct tm* const t)
{
    return (time_t)timegm64(t);
}
Actually I have not tested yet whether it really works, because I still have a bit of porting to do, but it compiles.
Here's my solution:
#ifdef WIN32
#  define timegm _mkgmtime
#endif

struct tm timeinfo;
timeinfo.tm_year = year - 1900;
timeinfo.tm_mon = mon - 1;
timeinfo.tm_mday = day;
timeinfo.tm_hour = hour;
timeinfo.tm_min = min;
timeinfo.tm_sec = sec;
return timegm(&timeinfo);
This should work on both Unix and Windows.

Converting epoch time to "real" date/time

What I want to do is convert an epoch time (seconds since midnight 1/1/1970) to "real" time (m/d/y h:m:s)
So far, I have the following algorithm, which to me feels a bit ugly:
void DateTime::splitTicks(time_t time) {
    seconds = time % 60;
    time /= 60;
    minutes = time % 60;
    time /= 60;
    hours = time % 24;
    time /= 24;
    year = DateTime::reduceDaysToYear(time);
    month = DateTime::reduceDaysToMonths(time, year);
    day = int(time);
}

int DateTime::reduceDaysToYear(time_t &days) {
    int year;
    for (year = 1970; days > daysInYear(year); year++) {
        days -= daysInYear(year);
    }
    return year;
}

int DateTime::reduceDaysToMonths(time_t &days, int year) {
    int month;
    for (month = 0; days > daysInMonth(month, year); month++)
        days -= daysInMonth(month, year);
    return month;
}
You can assume that the members seconds, minutes, hours, month, day, and year all exist.
Using the for loops to modify the original time feels a little off, and I was wondering if there is a "better" solution to this.
Be careful about leap years in your daysInMonth function.
If you want very high performance, you can precompute the pair to get to month+year in one step, and then calculate the day/hour/min/sec.
A good solution is the one in the gmtime source code:
/*
 * gmtime - convert the calendar time into broken down time
 */
/* $Header: gmtime.c,v 1.4 91/04/22 13:20:27 ceriel Exp $ */

#include <time.h>
#include <limits.h>
#include "loc_time.h"

struct tm *
gmtime(register const time_t *timer)
{
    static struct tm br_time;
    register struct tm *timep = &br_time;
    time_t time = *timer;
    register unsigned long dayclock, dayno;
    int year = EPOCH_YR;

    dayclock = (unsigned long)time % SECS_DAY;
    dayno = (unsigned long)time / SECS_DAY;

    timep->tm_sec = dayclock % 60;
    timep->tm_min = (dayclock % 3600) / 60;
    timep->tm_hour = dayclock / 3600;
    timep->tm_wday = (dayno + 4) % 7; /* day 0 was a thursday */
    while (dayno >= YEARSIZE(year)) {
        dayno -= YEARSIZE(year);
        year++;
    }
    timep->tm_year = year - YEAR0;
    timep->tm_yday = dayno;
    timep->tm_mon = 0;
    while (dayno >= _ytab[LEAPYEAR(year)][timep->tm_mon]) {
        dayno -= _ytab[LEAPYEAR(year)][timep->tm_mon];
        timep->tm_mon++;
    }
    timep->tm_mday = dayno + 1;
    timep->tm_isdst = 0;

    return timep;
}
The standard library provides functions for doing this. gmtime() or localtime() will convert a time_t (seconds since the epoch, i.e.- Jan 1 1970 00:00:00) into a struct tm. strftime() can then be used to convert a struct tm into a string (char*) based on the format you specify.
see: http://www.cplusplus.com/reference/clibrary/ctime/
Date/time calculations can get tricky. You are much better off using an existing solution rather than trying to roll your own, unless you have a really good reason.
An easy way (though different than the format you wanted):
std::time_t result = std::time(nullptr);
std::cout << std::asctime(std::localtime(&result));
Output:
Wed Sep 21 10:27:52 2011
Notice that the returned string automatically ends with a trailing "\n"; you can remove it using:
std::string::size_type i = res.find("\n");
if (i != std::string::npos)
    res.erase(i, res.length());
Taken from: http://en.cppreference.com/w/cpp/chrono/c/time
time_t t = unixTime;
cout << ctime(&t) << endl;
This code might help you.
#include <iostream>
#include <ctime>
using namespace std;
int main() {
    // current date/time based on current system
    time_t now = time(0);

    // convert now to string form
    char* dt = ctime(&now);
    cout << "The local date and time is: " << dt << endl;

    // convert now to tm struct for UTC
    tm *gmtm = gmtime(&now);
    dt = asctime(gmtm);
    cout << "The UTC date and time is:" << dt << endl;
}
To convert an epoch string to UTC:
string epoch_to_utc(string epoch) {
    long temp = stol(epoch);
    const time_t old = (time_t)temp;
    struct tm *oldt = gmtime(&old);
    return asctime(oldt);
}
and then it can be called as
string temp = "245446047";
cout << epoch_to_utc(temp);
outputs:
Tue Oct 11 19:27:27 1977
If your original time type is time_t, you have to use the functions from time.h, i.e. gmtime etc., to get portable code. The C/C++ standards do not specify the internal format (or even the exact type) of time_t, so you cannot directly convert or manipulate time_t values.
All that is known is that time_t is an "arithmetic type", but the results of arithmetic operations are not specified; you cannot even add or subtract reliably. In practice, many systems use an integer type for time_t with an internal format of seconds since the epoch, but this is not enforced by the standards.
In short, use gmtime (and time.h functionality in general).