When I use this
#include <time.h>
//...
int n = time(0);
//...
I get a warning about converting time to int. Is there a way to remove this warning?
Yes, change n to be a time_t. If you look at the signature in time.h on most (if not all) systems, you'll see that that's what time returns.
#include <time.h>
//...
time_t n = time(0);
//...
Note that Arak is right: using a 32-bit int is a problem, at a minimum, because of the 2038 bug. However, you should also consider that any sort of arithmetic on an integer n (rather than a time_t) only increases the probability that your code will trip over that bug early.
PS: In case I didn't make it clear in the original answer, the best response to a compiler warning is almost always to address the situation that you're being warned about. For example, forcing higher precision data into a lower precision variable loses information - the compiler is trying to warn you that you might have just created a landmine bug that someone won't trip over until much later.
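To make that concrete, here is a minimal sketch (my illustration, not code from the question) of the landmine: a 64-bit seconds count forced into a 32-bit int wraps right at the 2038 boundary.
#include <cstdint>
#include <iostream>
int main() {
    // One second past the largest 32-bit signed value, i.e. just after
    // 03:14:07 UTC on 19 January 2038 when counted in seconds since 1970.
    std::int64_t seconds = 2147483648LL;
    // Forcing it into 32 bits loses information; on typical two's-complement
    // systems the result wraps to a large negative number.
    std::int32_t truncated = static_cast<std::int32_t>(seconds);
    std::cout << seconds << " -> " << truncated << '\n';
}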
time() returns a time_t, not an integer. Prefer that type, because it may be larger than int.
If you really need int, then typecast it explicitly, for example:
int n = (int)time(0);
I think you are using Visual C++. Unlike with g++, the return type of time(0) there is a 64-bit integer even when you are compiling for a 32-bit platform. To remove the warning, just assign the result of time(0) to a 64-bit variable.
You probably want to use a type of time_t instead of an int.
See the example at http://en.wikipedia.org/wiki/Time_t.
The reason is that the time() function returns a time_t, so you may need a static_cast to an int or unsigned int in this case. Write it this way:
time_t timer;
int n = static_cast<int>(time(&timer)); // gives the current time as an integer; same as time(NULL)
I want to start a clock at the beginning of my program and use its elapsed time during the program to do some calculations, so the time should be in an int, long, or double format. For example, I want to calculate a debounce time, but when I try it like this I get errors, because the chrono high-resolution clock is not an int, long, or double, and therefore I can't subtract 50 ms from it (my debounceDelay) or save that value to a double (my lastDebounceTime). Originally I had a working Arduino game (Pong) with an LCD, and I want to convert it into a C++ console application.
On the Arduino there was a function, millis(), that gave me the runtime in ms, and this worked perfectly fine. I can't find a similar function for C++.
double lastDebounceTime = 0;
double debounceDelay = 50;
void Player1Pos() {
    if ((std::chrono::high_resolution_clock::now() - lastDebounceTime) > debounceDelay) {
        if ((GetKeyState('A') & 0x8000) && (Player1Position == 0)) {
            Player1Position = 1;
            lastDebounceTime = std::chrono::high_resolution_clock::now();
        }
        else if ((GetKeyState('A') & 0x8000) && (Player1Position == 1)) {
            Player1Position = 0;
            lastDebounceTime = std::chrono::high_resolution_clock::now();
        }
    }
}
I am very new to C++ so any help is greatly appreciated.
Thank you all!
I find the question misguided in its attempt to force the answer to use "int, long, or double". Those are not appropriate types for the task at hand. For references, see A: Cast chrono::milliseconds to uint64_t? and A: C++ chrono - get duration as float or long long. The question should have asked about obtaining the desired functionality (whatever the code block is supposed to do), rather than asking about a pre-chosen approach to the desired functionality. So that is what I will answer first.
Getting the desired result
To get the code block to compile, you just have to drop the insistence that the variables be "int, long, or double". Instead, use time-oriented types for time-oriented data. Here are the first two variables in the code block:
double lastDebounceTime = 0;
double debounceDelay = 50;
The first is supposed to represent a point in time, and the second a time duration. The C++ type for representing a point in time is std::chrono::time_point, and the C++ type for a time duration is a std::chrono::duration. Almost. Technically, these are not types, but type templates. To get actual types, some template arguments need to be supplied. Fortunately, we can get the compiler to synthesize the arguments for us.
The following code block compiles. You might note that I left out details that I consider irrelevant to the question at hand, and hence feel should have been left out of the minimal reproducible example. Take this as an example of how to simplify code when asking questions in the future.
// Use the <chrono> library when measuring time.
#include <chrono>
// Enable use of the `ms` suffix.
using namespace std::chrono_literals;
std::chrono::high_resolution_clock::time_point lastDebounceTime;
// Alternatively, if the initialization to zero is not crucial:
// auto lastDebounceTime = std::chrono::high_resolution_clock::now();
auto debounceDelay = 50ms;
void Player1Pos() {
    if ((std::chrono::high_resolution_clock::now() - lastDebounceTime) > debounceDelay) {
        // Do stuff
        lastDebounceTime = std::chrono::high_resolution_clock::now();
    }
}
Subtracting two time_points produces a duration, which can be compared to another duration. The logic works and is now type-safe.
Getting the desired approach
OK, back to the question that was actually asked. You can convert the value returned by now() to an arithmetic type (integer or floating point) with the following code. You should have doubts about using this code after reading the comment that goes with it.
// Get the number of [some time units] since [some special time].
std::chrono::high_resolution_clock::now().time_since_epoch().count();
The time units involved are not specified by the standard, but are instead whatever std::chrono::high_resolution_clock::period corresponds to, not necessarily milliseconds. The special time is called the clock's epoch, which could be anything. Fortunately for your use case, the exact epoch does not matter – you just need it to be constant for each run of the program, which it is. However, the unknown units could be a problem, requiring more code to handle correctly.
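If you do want an arithmetic value, a sketch of one way to pin down the units (my illustration, assuming milliseconds are what you're after) is to run the duration through std::chrono::duration_cast before calling count():
#include <chrono>
#include <cstdint>
// Milliseconds since the clock's (unspecified) epoch, as a plain integer.
// duration_cast fixes the unit; the epoch remains implementation-defined.
std::int64_t millisSinceEpoch() {
    auto since_epoch = std::chrono::high_resolution_clock::now().time_since_epoch();
    return std::chrono::duration_cast<std::chrono::milliseconds>(since_epoch).count();
}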
I find the appropriate types easier to use than trying to get this conversion correct, especially since using them required no change to your function.
I'm still learning about type casting in C++ and I'm currently doing this
long int t = time(NULL);
I'm using VS2013 and noticed the conversion from 'time_t' to 'long' warning, so I thought I would cast it, like so:
long int t = static_cast<long int> time(NULL);
However, this doesn't work, yet combining a static cast and a C-style cast works:
long int t = static_cast<long int> (time(NULL));
I was just wondering if anyone could help shed some light on this?
time(NULL) is not a cast but a function call, which returns a time_t. Since time_t is not exactly the same type as long int, you see the warning.
Furthermore, static_cast<T>(value) requires the parentheses around the value; that is why your first version does not work.
Your question contains the answer. The static_cast in the code you provide takes the time_t value as its input and converts it to a long int as its return value. This code does not contain a C-style type-cast.
long int t = static_cast<long int> (time(NULL));
A C-style type-cast should also work, because time_t is an arithmetic type and the C cast operator will perform the conversion to the long int type.
long int t = (long int)time(NULL);
This casting tutorial might be an interesting read for you.
A time_t value is the number of seconds since the start of Jan 1, 1970. Casting that to a 32-bit long therefore restricts you to values representing times before roughly the year 2038. That's not a good idea, and the ungoodness of it is the reason for your warning.
The attempted expression
static_cast<long int> time(NULL)
is just invalid syntax. A static_cast requires parentheses around the value.
I'm doing a lot of calculations with times, building time objects relative to other time objects by adding seconds. The code is supposed to run on embedded devices and servers. Most documentations say about time_t that it's some arithmetic type, storing usually the time since the epoch. How safe is it to assume that time_t store a number of seconds since something? If we can assume that, then we can just use addition and subtraction rather than localtime, mktime and difftime.
So far I've solved the problem by using a constexpr bool time_tUsesSeconds, denoting whether it is safe to assume that time_t uses seconds. If it's non-portable to assume time_t is in seconds, is there a way to initialize that constant automatically?
time_t timeByAddingSeconds(time_t theTime, int timeIntervalSeconds) {
    if (time_tUsesSeconds) {
        return theTime + timeIntervalSeconds;
    } else {
        tm timeComponents = *localtime(&theTime);
        timeComponents.tm_sec += timeIntervalSeconds;
        return mktime(&timeComponents);
    }
}
The fact that it is in seconds is stated by the POSIX specification, so, if you're coding for POSIX-compliant environments, you can rely on that.
The C++ standard also states that time_t must be an arithmetic type.
Anyway, the Unix timing system (seconds since the Epoch) is going to overflow in 2038. So it's very likely that, before that date, C++ implementations will switch to other data types: either a 64-bit int or a more complex datatype. Switching to a 64-bit int would break binary compatibility with previous code (since it requires bigger variables), and everything would have to be recompiled. Using 32-bit opaque handles would not break binary compatibility: you could change the underlying library and everything would still work, but time_t would no longer be a time in seconds, it would be an index into an array of times in seconds. For this reason, it's suggested that you use the functions you mentioned to manipulate time_t values, and do not assume anything about time_t.
If C++11 is available, you can use std::chrono::system_clock's to_time_t and from_time_t to convert to/from std::chrono::time_point, and use chrono's arithmetic operators.
If your calculations involve the Gregorian calendar, you can use the HowardHinnant/date library, or C++20's new calendar facilities in chrono (they have essentially the same API).
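As a sketch of that approach (my example, not code from the question), adding seconds without assuming anything about time_t's representation looks like this:
#include <chrono>
#include <ctime>
// Add seconds to a time_t by round-tripping through system_clock,
// so no assumption is made about what time_t stores.
time_t timeByAddingSecondsChrono(time_t theTime, int timeIntervalSeconds) {
    auto tp = std::chrono::system_clock::from_time_t(theTime);
    tp += std::chrono::seconds(timeIntervalSeconds);
    return std::chrono::system_clock::to_time_t(tp);
}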
There is no requirement in standard C or in standard C++ for the units that time_t represents. To work with seconds portably you need to use struct tm. You can convert between time_t and struct tm with mktime and localtime.
Rather than determine whether time_t is in seconds, since time_t is an arithmetic type, you can instead calculate a time_t value that represents one second and work with that. This answer I wrote before explains the method and has some caveats. Here's some example code (bad_time() is a custom exception class):
time_t get_sec_diff() {
    // Build a fixed datum: 12:00:00 on Jan 2 of tm_year 30 (i.e. 1930).
    std::tm datum_day;
    datum_day.tm_sec = 0;
    datum_day.tm_min = 0;
    datum_day.tm_hour = 12;
    datum_day.tm_mday = 2;
    datum_day.tm_mon = 0;
    datum_day.tm_year = 30;
    datum_day.tm_isdst = -1;  // let mktime work out whether DST applies
    const time_t datum_time = mktime(&datum_day);
    if (datum_time == -1) {
        throw bad_time();
    }
    // Advance the datum by one second; the difference between the two
    // time_t values is then one second's worth of time_t.
    datum_day.tm_sec += 1;
    const time_t next_sec_time = mktime(&datum_day);
    if (next_sec_time == -1) {
        throw bad_time();
    }
    return (next_sec_time - datum_time);
}
You can call the function once and store the value in a const, and then just use it whenever you need a time_t second. I don't think it'll work in a constexpr though.
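Usage might look like this (a sketch; the names one_sec and add_seconds are mine):
// Compute the conversion factor once, then scale by it for arithmetic.
const time_t one_sec = get_sec_diff();
time_t add_seconds(time_t t, int seconds) {
    return t + seconds * one_sec;
}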
My two cents: on Windows it is in seconds over time, but the time it takes for one second to increment to the next is usually 18 × 54.925 ms and sometimes 19 × 54.925 ms. The reason for this is explained in this post.
(Answering own question)
One answer suggests that as long as one is using posix, time_t is in seconds and arithmetic on time_t should work.
A second answer calculates the value of time_t per second and uses that as a factor when doing arithmetic. But some assumptions about time_t are still being made.
In the end I decided portability is more important; I don't want my code to fail silently on some embedded device. So I used a third way. It involves storing an integer denoting the time since the program started. That is, I define
const static time_t time0 = time(nullptr);
static tm time0Components = *localtime(&time0);
All time values used throughout the program are just integers, denoting the time difference in seconds since time0. To go from a time_t to this delta-seconds representation, I use difftime. To go back to time_t, I use something like this:
time_t getTime_t(int timeDeltaSeconds) {
    tm components = time0Components;
    components.tm_sec += timeDeltaSeconds;
    return mktime(&components);
}
This approach makes operations like + and - cheap, but going back to time_t is expensive. Note that the time delta values are only meaningful for the current run of the program. Note also that time0Components has to be updated when there's a time zone change.
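For completeness, the difftime direction I mentioned might look like this (a sketch; getDeltaSeconds is my name for it):
// Convert a time_t to the program's delta-seconds representation.
// difftime portably returns the difference in seconds as a double.
int getDeltaSeconds(time_t t) {
    return static_cast<int>(difftime(t, time0));
}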
Apologies if this question has already been answered.
#include <iostream>
#include <cstdlib>
#include <ctime>
using namespace std;
int main() {
    srand( time(NULL) );
    cout << rand();
}
"implicit conversion loses integer precision: 'time_t' (aka 'long') to 'unsigned int'"
is the error message I'm getting when I execute the code above. I am using Xcode 4.6.1. When I use a different compiler, such as the one from codepad.org, it executes perfectly fine, generating what seem like random numbers, so I am assuming it is an Xcode issue that I need to work around?
I have JUST started programming, so I am a complete beginner when it comes to this. Is there a problem with my code, or is it my compiler?
Any help would be appreciated!
"implicit conversion loses integer precision: 'time_t' (aka 'long') to 'unsigned int'"
You're losing precision implicitly because time() returns a long, which is larger than an unsigned int on your target. To work around this problem, you should explicitly cast the result (thus removing the "implicit precision loss"):
srand( static_cast<unsigned int>(time(nullptr)));
Given that it's now 2017, I'm editing this answer to suggest that you consider the features provided by std::chrono::*, defined in <chrono>, as part of C++11. Does your favorite compiler provide C++11? If not, it really should!
To get the current time, you should use:
#include <chrono>
void f() {
    const std::chrono::system_clock::time_point current_time = std::chrono::system_clock::now();
}
Why should I bother with this when time() works?
IMO, just one reason is enough: clear, explicit types. When you deal with large programs among big enough teams, knowing whether the values passed around represent time intervals or "absolute" times, and at what magnitudes, is critical. With std::chrono you can design interfaces and data structures that are portable and skip out on the is-that-timeout-a-deadline-or-milliseconds-from-now-or-wait-was-it-seconds blues.
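As an illustration of that point (my sketch, not code from the original answer), an interface that takes a std::chrono duration cannot be handed a bare, unit-less integer:
#include <chrono>
#include <thread>
// The parameter type documents and enforces the unit.
void waitFor(std::chrono::milliseconds timeout) {
    std::this_thread::sleep_for(timeout);
}
void caller() {
    using namespace std::chrono_literals;
    waitFor(500ms);                   // unambiguous
    waitFor(std::chrono::seconds(2)); // converts implicitly, without loss
    // waitFor(500);                  // does not compile: 500 of what?
}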
As mentioned by "nio", a clean workaround would be to explicitly type cast.
Deeper explanation:
srand() requires an unsigned int as its parameter (srand(unsigned int)), but time() returns a time_t (a long int here), which srand() does not accept as-is, so to fix this the compiler simply has to typecast (convert) the long int to an unsigned int.
BUT in your case the compiler warns you about it instead (because the designers of the compiler thought you should be aware of the conversion, that's all).
So a simple
srand( (unsigned int) time(NULL) );
will do the trick!
(Forgive me if I have done something wrong; this is my first answer on Stack Overflow.)
The srand() function takes an unsigned int as its argument, while time_t is a long type. The upper 4 bytes of the long are stripped off, but there's no problem in that:
srand() will randomize the rand() algorithm with the 4 lower bytes of time(), so you're supplying more data than is needed.
If you get an error, try to just explicitly cast the time_t type to unsigned int:
srand( static_cast<unsigned int>(time(NULL)) );
Another interesting thing is that if you run your program twice within the same second, you'll get the same random number, which can sometimes be undesirable. That's because if you seed the rand() algorithm with the same data, it will generate the same random sequence. Or it can be desirable when you're debugging some piece of code and need to test the same behaviour again... then you simply use something like srand(123456).
This is not an error. The code is valid and its meaning is well defined; if a compiler refuses to compile it, the compiler does not conform to the language definition. More likely, it's a warning, and it's telling you that the compiler writer thinks that you might have made a mistake. If you insist on eliminating warning messages you could add a cast, as others have suggested. I'm not a big fan of rewriting valid, meaningful code in order to satisfy some compiler writer's notion of good style; I'd turn off the warning. If you do that, though, you might overlook other places where a conversion loses data that you didn't intend.
#include <stdlib.h> // srand, rand, RAND_MAX
#include <time.h>   // time
// Returns a random float in [VarMin, VarMax).
float randomizer(int VarMin, int VarMax) {
    // Note: reseeding on every call means two calls within the same
    // second return the same value; seeding once at startup is safer.
    srand((unsigned)time(NULL));
    int range = (VarMax - VarMin);
    float rnd = VarMin + float(range * (rand() / (RAND_MAX + 1.0)));
    return rnd;
}
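A common refinement (my suggestion, not part of the original answer) is to seed once at program start and let helpers just call rand():
#include <stdlib.h>
#include <time.h>
int main() {
    srand((unsigned)time(NULL)); // seed exactly once per run
    // ... then call rand()-based helpers like randomizer() as needed ...
    return 0;
}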
time_t raw_time = time(NULL);
tm* current_time = localtime(&raw_time);
I got the answer myself... I totally messed up the warnings. Thanks anyway.
The localtime() function dates back to when (int) was 16 bits and passing (long) on the stack was not widely supported; as such, it was specified to pass (long *), which at the time was 16 bits. It's been left as is because changing it would break enormous amounts of code. You'll find that most of the time-related functions do this, since they were the only functions at the time that used (long). (lseek() came later. Care to guess what non-(long)-using function it replaced?)
localtime requires an argument of type time_t*, which is a pointer, so you have to put the & there.
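To make the pointer-passing concrete, here is a minimal sketch (my example, using only standard <time.h> facilities):
#include <time.h>
#include <iostream>
int main() {
    time_t raw_time = time(NULL);             // calendar time
    tm* current_time = localtime(&raw_time);  // note the &: localtime wants a time_t*
    // Format the broken-down time for display.
    char buf[64];
    if (strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S", current_time) != 0) {
        std::cout << buf << '\n';
    }
}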