time_t raw_time = time(NULL);
tm* current_time = localtime(&raw_time);
I got the answer myself... I totally messed up the warnings. Thanks anyway.
The localtime() function dates back to when (int) was 16 bits and passing a (long) on the stack was not widely supported; as such, it was specified to take a (long *) argument, pointers being 16 bits at the time. It's been left as is because changing it would break enormous amounts of code. You'll find that most of the time-related functions do this, since they were the only functions at the time that used (long). (lseek() came later. Care to guess what non-(long)-using function it replaced?)
localtime() requires an argument of type time_t*, which is a pointer, so you have to put the & there to pass the address of your time_t.
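For instance, a minimal compilable version of the snippet above might look like this (the printing line is only for illustration):
#include <stdio.h>
#include <time.h>

int main(void) {
    time_t raw_time = time(NULL);                    /* seconds since the epoch */
    struct tm *current_time = localtime(&raw_time);  /* note the &: localtime wants a time_t* */
    printf("%s", asctime(current_time));             /* human-readable local time */
    return 0;
}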
This question is similar to the following:
convert epoch to time_t
Converting time_t to int
but I don't quite have my answer there.
If you want to get the current date/time you can call time(0) or time(NULL) like in the following standard example:
// current date/time based on current system
time_t now = time(0);
I want to define a function which will return a time_t and allow the client to pass an optional default return value in the event of an error. Further, I want to set a default on that "default" argument. This provides symmetry within a library I have, with one-to-one counterparts across several languages, so I'm not looking to redesign all that.
My thought was to set the default return to the epoch. Then, a client could in theory easily evaluate that return, and decide that an epoch coming back was more than likely (if not always) an indication of it being invalid. I can think of some alternatives, but nothing clean, that also fits my existing patterns.
Is there a short and sweet way to make my function signature have a default value for this object equal to the epoch? For instance
...myfunc(...., const time_t &defVal=time(0) );
would be perfect if 0 meant the epoch rather than the current date/time!
The function std::time() returns the number of seconds since the epoch as a std::time_t. Therefore, to represent zero seconds after the epoch, set a std::time_t to zero:
std::time_t t = 0;
So you could do something like:
void myfunc(const std::time_t& defVal = 0)
What is wrong with using 0? (time_t)0 represents the epoch itself (if you want to find the actual epoch date/time, pass (time_t)0 to gmtime() or localtime()).
time_t myfunc(...., time_t defVal = 0 );
Or, you could use (time_t)-1 instead, which is not a valid time: it is what time() returns on error, so it is conventionally treated as an invalid value rather than as a time.
time_t myfunc(...., time_t defVal = (time_t)-1 );
Either way provides the user with something that is easily compared, if they don't provide their own default value.
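Putting it together, a sketch of what the signature and the caller's check could look like (myfunc and its error condition are placeholders, not code from the question):
#include <ctime>

// hypothetical: returns a computed time, or defVal if the input is unusable
time_t myfunc(int input, time_t defVal = (time_t)-1) {
    if (input < 0)
        return defVal;      // error path: hand back the caller's default
    return std::time(0);    // stand-in for the real computation
}

// caller:
//   time_t t = myfunc(-1);
//   if (t == (time_t)-1) { /* treat as invalid */ }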
Let's say we have a text file and read some timestamp from there into a local variable "sTime":
std::string sTime = "1440966379"; // this value has been read from a file.
std::time_t tTime = ? // this instance of std::time_t shall be assigned the above value.
How do I convert this string properly into a std::time_t, assuming:
We may use STL means only (no boost).
We use the C++11 standard
We don't know which CPU architecture/OS we're using (it should work cross-platform)
We cannot make any (static) assumptions about how time_t is internally defined. Of course we know that in most cases it will be an integral type, probably 32 or 64 bits wide, but according to cppreference.com the actual typedef of time_t is not specified. So atoi, atol, atoll, strtoul, etc. are out of the question, at least until we have made sure by other means that we actually picked the correct one of those possible candidates.
This will keep your time in a standards-approved format; you'll need #include <chrono> (and <string> for std::stoll):
std::string sTime = "1440966379"; // this value has been read from a file.
std::chrono::system_clock::time_point newtime(std::chrono::seconds(std::stoll(sTime)));
// this gets you out to a minimum of 35 bits. That leaves fixing the overflow in the
// capable hands of Misters Spock and Scott. Trust me. They've had worse.
From there you can do arithmetic and compares on time_points.
Dumping it back out to a POSIX timestamp:
const std::chrono::system_clock::time_point epoch = std::chrono::system_clock::from_time_t(0);
// 0 is the same in both 32 and 64 bit time_t, so there is no possibility of overflow here
auto delta = newtime - epoch;
std::cout << std::chrono::duration_cast<std::chrono::seconds>(delta).count();
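As a self-contained sketch of the whole round trip (assuming the string really does hold a plain decimal second count):
#include <chrono>
#include <iostream>
#include <string>

int main() {
    std::string sTime = "1440966379";  // as if read from the file
    std::chrono::system_clock::time_point newtime(
        std::chrono::seconds(std::stoll(sTime)));

    const auto epoch = std::chrono::system_clock::from_time_t(0);
    auto delta = newtime - epoch;
    // prints 1440966379 again
    std::cout << std::chrono::duration_cast<std::chrono::seconds>(delta).count() << '\n';
    return 0;
}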
And another SO question deals with getting formatted strings back out:
How to convert std::chrono::time_point to std::tm without using time_t?
I'm doing a lot of calculations with times, building time objects relative to other time objects by adding seconds. The code is supposed to run on embedded devices and servers. Most documentation says about time_t that it's some arithmetic type, usually storing the time since the epoch. How safe is it to assume that time_t stores a number of seconds since something? If we can assume that, then we can just use addition and subtraction rather than localtime, mktime and difftime.
So far I've solved the problem by using a constexpr bool time_tUsesSeconds, denoting whether it is safe to assume that time_t uses seconds. If it's non-portable to assume time_t is in seconds, is there a way to initialize that constant automatically?
time_t timeByAddingSeconds(time_t theTime, int timeIntervalSeconds) {
    if (time_tUsesSeconds) {
        return theTime + timeIntervalSeconds;
    } else {
        tm timeComponents = *localtime(&theTime);
        timeComponents.tm_sec += timeIntervalSeconds;
        return mktime(&timeComponents);
    }
}
The fact that it is in seconds is stated by the POSIX specification, so, if you're coding for POSIX-compliant environments, you can rely on that.
The C++ standard also states that time_t must be an arithmetic type.
Anyway, the Unix timing scheme (seconds since the Epoch in a 32-bit integer) is going to overflow in 2038. So it's very likely that, before that date, C++ implementations will switch to other data types: either a 64-bit integer or a more complex datatype. Switching to a 64-bit integer would break binary compatibility with previous code (since it requires bigger variables), and everything would have to be recompiled. Using 32-bit opaque handles would not break binary compatibility: you could change the underlying library and everything would still work, but time_t would no longer be a time in seconds, it would be an index into an array of times in seconds. For this reason, it's suggested that you use the functions you mentioned to manipulate time_t values, and do not assume anything about time_t.
If C++11 is available, you can use std::chrono::system_clock's to_time_t and from_time_t to convert to/from std::chrono::time_point, and use chrono's arithmetic operators.
If your calculations involve the Gregorian calendar, you can use the HowardHinnant/date library, or C++20's new calendar facilities in chrono (they have essentially the same API).
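For example, a small sketch of that conversion round trip (the 42-second offset is arbitrary):
#include <chrono>
#include <ctime>
#include <iostream>

int main() {
    std::time_t t = std::time(nullptr);
    auto tp = std::chrono::system_clock::from_time_t(t);  // time_t -> time_point
    tp += std::chrono::seconds(42);                       // portable arithmetic
    std::time_t later = std::chrono::system_clock::to_time_t(tp);  // and back
    std::cout << std::difftime(later, t) << '\n';         // 42, without assuming units
    return 0;
}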
There is no requirement in standard C or in standard C++ for the units that time_t represents. To work with seconds portably you need to use struct tm. You can convert between time_t and struct tm with mktime and localtime.
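A sketch of that portable approach, essentially the same shape as the else branch in the question:
#include <time.h>

time_t add_seconds_portably(time_t t, int secs)
{
    struct tm parts = *localtime(&t);  /* break the time_t into calendar fields */
    parts.tm_sec += secs;              /* mktime renormalizes an overflowed field */
    return mktime(&parts);
}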
Rather than determine whether time_t is in seconds, since time_t is an arithmetic type, you can instead calculate a time_t value that represents one second, and work with that. This answer I wrote before explains the method and has some caveats; here's some example code (bad_time() is a custom exception class):
#include <ctime>

time_t get_sec_diff() {
    std::tm datum_day = {};            // zero all fields before setting the ones we use
    datum_day.tm_sec = 0;
    datum_day.tm_min = 0;
    datum_day.tm_hour = 12;
    datum_day.tm_mday = 2;
    datum_day.tm_mon = 0;
    datum_day.tm_year = 30;
    datum_day.tm_isdst = -1;           // let mktime determine DST
    const time_t datum_time = std::mktime(&datum_day);
    if ( datum_time == -1 ) {
        throw bad_time();
    }
    datum_day.tm_sec += 1;             // one second past the datum
    const time_t next_sec_time = std::mktime(&datum_day);
    if ( next_sec_time == -1 ) {
        throw bad_time();
    }
    return (next_sec_time - datum_time);  // time_t units per second
}
You can call the function once and store the value in a const, and then just use it whenever you need a time_t second. I don't think it'll work in a constexpr though.
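Usage could look something like this (the add_seconds helper is just an illustration):
// computed once at startup, then reused wherever a time_t second is needed
const time_t one_sec = get_sec_diff();

time_t add_seconds(time_t t, int secs) {
    return t + secs * one_sec;  // scale by the time_t-units-per-second factor
}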
My two cents: on Windows it is in seconds over time, but the interval from one second to the next is usually 18 ticks of 54.925 ms and sometimes 19 ticks. The reason for this is explained in this post.
(Answering my own question)
One answer suggests that as long as one is using posix, time_t is in seconds and arithmetic on time_t should work.
A second answer calculates the time_t per second, and uses that as a factor when doing arithmetic. But there are still some assumptions about time_t made.
In the end I decided portability is more important; I don't want my code to fail silently on some embedded device. So I used a third way: storing an integer denoting the time in seconds since the program started. I.e. I define
const static time_t time0 = time(nullptr);
static tm time0Components = *localtime(&time0);
All time values used throughout the program are just integers, denoting the time difference in seconds since time0. To go from time_t to this delta in seconds, I use difftime. To go back to time_t, I use something like this:
time_t getTime_t(int timeDeltaSeconds) {
    tm components = time0Components;
    components.tm_sec += timeDeltaSeconds;
    return mktime(&components);
}
This approach makes operations like + and - cheap, but going back to time_t is expensive. Note that the time delta values are only meaningful for the current run of the program. Note also that time0Components has to be updated when there's a time zone change.
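For completeness, the opposite direction, from a time_t to a delta, is just a difftime call against time0; a sketch:
int getDeltaSeconds(time_t t) {
    // difftime returns the difference in seconds as a double,
    // regardless of what units time_t itself uses
    return (int)difftime(t, time0);
}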
When I use this
#include<time.h>
//...
int n = time(0);
//...
I get a warning about converting time to int. Is there a way to remove this warning?
Yes, change n to be a time_t. If you look at the signature in time.h on most / all systems, you'll see that that's what it returns.
#include<time.h>
//...
time_t n = time(0);
//...
Note that Arak is right: using a 32 bit int is a problem, at a minimum, due to the 2038 bug. However, you should consider that any sort of arithmetic on an integer n (rather than a time_t) only increases the probability that your code will trip over that bug early.
PS: In case I didn't make it clear in the original answer, the best response to a compiler warning is almost always to address the situation that you're being warned about. For example, forcing higher precision data into a lower precision variable loses information - the compiler is trying to warn you that you might have just created a landmine bug that someone won't trip over until much later.
time() returns a time_t, not an int. Prefer that type, because it may be larger than int.
If you really need int, then typecast it explicitly, for example:
int n = (int)time(0);
I think you are using Visual C++. Unlike with g++, its time_t (the return type of time(0)) is a 64-bit integer even when targeting a 32-bit platform. To remove the warning, just assign the result of time(0) to a 64-bit variable, or better, to a time_t.
You probably want to use a type of time_t instead of an int.
See the example at http://en.wikipedia.org/wiki/Time_t.
The reason is that time() returns a time_t, so you need a static_cast to an int or uint in this case. Write it this way:
time_t timer;
int n = static_cast<int>(time(&timer)); // current time as an int; time(&timer) returns the same value as time(NULL) and also stores it in timer
I have a program that uses time() and localtime() to set an internal clock, but this needs to be changed so that the internal clock is independent of the user and the "real" time. I need to be able to set any reasonable starting time, and have it count forward depending on a timer internal to the program. Any ideas on the best way to approach this? Here's the excerpt:
#define ConvertToBCD(x) ((((x) / 10) << 4) | ((x) % 10))
time_t tm;
time(&tm);
struct tm *tm_local= localtime(&tm);
tm_local->tm_year %= 100;
tm_local->tm_mon++;
timedata[0] = ConvertToBCD(tm_local->tm_year);
timedata[1] = ConvertToBCD(tm_local->tm_mon);
timedata[2] = ConvertToBCD(tm_local->tm_mday);
timedata[3] = (tm_local->tm_wday + 6) & 7;
if (!(TimeStatus & 0x02)) tm_local->tm_hour %= 12;
timedata[4] = ((tm_local->tm_hour < 12) ? 0x00 : 0x40) | ConvertToBCD(tm_local->tm_hour);
timedata[5] = ConvertToBCD(tm_local->tm_min);
timedata[6] = ConvertToBCD(tm_local->tm_sec);
A time_t, on POSIX-compliant systems, is just the number of seconds since the epoch, 1 Jan 1970 00:00:00 UTC.
Just add a (possibly negative) value to a time_t to change the time, ensuring that the value doesn't overflow, then use localtime as usual.
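A minimal sketch of that idea (the one-hour offset is arbitrary):
#include <stdio.h>
#include <time.h>

int main(void) {
    time_t offset = -3600;                    /* pretend we are one hour in the past */
    time_t fake = time(NULL) + offset;        /* shift the clock */
    printf("%s", asctime(localtime(&fake)));  /* then convert as usual */
    return 0;
}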
If you only need whole second resolution, then time() can be used; if you need sub-second resolution, use gettimeofday().
However, if you want to be able to control the values returned, then you will need to define yourself a surrogate for time() (or gettimeofday()). Most libraries are designed along the lines described in Plauger's The Standard C Library, and you can often provide a function called time() that behaves as you want, replacing the standard version. Alternatively, and more safely, you can revise your code to call a function of your own devising, perhaps called simulated_time(), where for production work you can have simulated_time() call the real time() (possibly via an inline function in both C99 and C++) but it can be your own version that schedules time to change as you need.
You don't need to alter your use of localtime(); it simply converts whatever time_t value you give it into a struct tm; you want it to give answers just as it always did.
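A sketch of what such a simulated_time() could look like (the offset-based design and the names are mine, not from any library):
#include <time.h>

static time_t sim_offset = 0;  /* how far simulated time runs ahead of real time */

void set_simulated_time(time_t start)  /* hypothetical setter */
{
    sim_offset = start - time(NULL);
}

time_t simulated_time(void)  /* drop-in stand-in for time(NULL) */
{
    return time(NULL) + sim_offset;
}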
The way I understand it is that you want an internal clock which gets updated according to the progress the real clock makes.
So then you would create something like this:
struct myTime
{
    time_t userStart;    /* the simulated start time chosen by the user */
    time_t systemStart;  /* the real system time when the clock was set */
    time_t curTime;
};

void initTime(struct myTime *t, time_t startTime)
{
    /* the parameter must not be named "time", or it would shadow time() below */
    t->userStart = startTime;
    t->systemStart = time(NULL);
}

time_t getTime(struct myTime *t)
{
    t->curTime = t->userStart + time(NULL) - t->systemStart;
    return t->curTime;
}
So using initTime you set the current time you want to have; this gets linked to the system time at that moment. When you call getTime with that struct, it advances the starting point by the amount of real time that has passed. (Note: I haven't tested the code, and you can also access the struct members directly if you want.)
For sub-second precision, replace time() and time_t with their gettimeofday() equivalents, as sketched below. And for conversion, formatting, or breaking the value down into anything other than a second counter, you can still use the usual Unix functions.
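The gettimeofday() variant might look roughly like this (POSIX only; timersub/timeradd are BSD/glibc convenience macros, not ISO C):
#include <stddef.h>
#include <sys/time.h>

struct myTimeval {
    struct timeval userStart;    /* simulated start time */
    struct timeval systemStart;  /* real time at initialization */
};

void initTimeval(struct myTimeval *t, struct timeval start)
{
    t->userStart = start;
    gettimeofday(&t->systemStart, NULL);
}

struct timeval getTimeval(struct myTimeval *t)
{
    struct timeval now, elapsed, res;
    gettimeofday(&now, NULL);
    timersub(&now, &t->systemStart, &elapsed);  /* elapsed real time */
    timeradd(&t->userStart, &elapsed, &res);    /* shift the simulated start */
    return res;
}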