The problem is that my program runs so fast that it doesn't detect a change in time with GetTickCount(). How can I prevent this from happening?
Thank You
GetTickCount has a resolution of roughly 10 to 16 milliseconds, so a "zero time difference" is a common problem.
If you need precision, use QueryPerformanceCounter.
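For reference, here is a minimal sketch of how QueryPerformanceCounter is typically used (Windows-only; doSomething() is just a placeholder for whatever you want to time):

#include <windows.h>
#include <cstdio>

int main()
{
    LARGE_INTEGER frequency, start, stop;
    QueryPerformanceFrequency(&frequency);   // counts per second
    QueryPerformanceCounter(&start);
    // doSomething();                        // the code you want to time
    QueryPerformanceCounter(&stop);
    double elapsedMs = (stop.QuadPart - start.QuadPart) * 1000.0 / frequency.QuadPart;
    printf("Elapsed: %.3f ms\n", elapsedMs);
    return 0;
}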
Are you printing the running time as an integer? If you are doing division to get the elapsed time, cast the numerator or denominator to float (or double) so the division isn't truncated to zero.
Time how long N runs take and divide by N to get an average; a rough sketch follows below.
Additionally, you can use a profiler for accurate timing.
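Here is a rough sketch of the averaging idea with GetTickCount; N and workUnderTest() are placeholders you would replace with your own repeat count and code:

#include <windows.h>
#include <cstdio>

int main()
{
    const int N = 1000;                     // enough repetitions to exceed the timer resolution
    DWORD start = GetTickCount();
    for (int i = 0; i < N; ++i) {
        // workUnderTest();                 // the operation being measured
    }
    DWORD total = GetTickCount() - start;   // total milliseconds for N runs
    printf("Average per run: %f ms\n", (double)total / N);
    return 0;
}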
Use
void WINAPI GetSystemTimeAsFileTime(
_Out_ LPFILETIME lpSystemTimeAsFileTime
);
instead. It has better resolution, and in the majority of cases this is really all that is needed.
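As a sketch of how that might look (Windows-only; the FILETIME is just packed into a 64-bit count of 100-nanosecond intervals so that two readings can be subtracted):

#include <windows.h>
#include <cstdio>

static unsigned long long now100ns()
{
    FILETIME ft;
    GetSystemTimeAsFileTime(&ft);
    ULARGE_INTEGER li;
    li.LowPart  = ft.dwLowDateTime;
    li.HighPart = ft.dwHighDateTime;
    return li.QuadPart;                     // 100-ns units since January 1, 1601
}

int main()
{
    unsigned long long start = now100ns();
    // ... work to measure ...
    unsigned long long stop = now100ns();
    printf("Elapsed: %.3f ms\n", (stop - start) / 10000.0);
    return 0;
}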
There's a very handy class on CodeProject that wraps QueryPerformanceCounter, which I use often: http://www.codeproject.com/Articles/475/The-CPerfTimer-timer-class
Or you can try using rdtsc, which reads the CPU's time-stamp counter (so it counts cycles rather than wall-clock time).
For details see here: http://www.mcs.anl.gov/~kazutomo/rdtsc.html
Snippet:
#include <stdio.h>
#include "rdtsc.h"   /* rdtsc() helper from the page linked above */

int main(int argc, char* argv[])
{
    unsigned long long a, b;

    a = rdtsc();                /* read the time-stamp counter twice... */
    b = rdtsc();
    printf("%llu\n", b - a);    /* ...and print the cycles between the two reads */
    return 0;
}
Or even <chrono> is a good option, but it needs (at least partial) C++11 support; a minimal sketch follows below.
Details: std::chrono and cout
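A minimal sketch of the <chrono> approach, assuming a C++11 compiler; steady_clock is the usual choice for intervals because it never jumps:

#include <chrono>
#include <iostream>

int main()
{
    auto start = std::chrono::steady_clock::now();
    // ... work to measure ...
    auto stop = std::chrono::steady_clock::now();
    auto us = std::chrono::duration_cast<std::chrono::microseconds>(stop - start);
    std::cout << "Elapsed: " << us.count() << " us\n";
    return 0;
}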
Related
So I'm helping my buddy out with some code and we've hit some weirdness in the sleep_for function:
This works and gives an "acceptable" timing of about 16.7ms ("acceptable" being +/- 2-4ms, but anyway):
#include <chrono>
#include <iostream>
#include <thread>

int main()
{
    long double milliseconds = 16.7 * 1000;
    auto start = std::chrono::high_resolution_clock::now();
    using namespace std::chrono_literals;
    std::this_thread::sleep_for(std::chrono::duration<long double, std::micro>(1670));
    auto end = std::chrono::high_resolution_clock::now();
    std::cout << "Slept for: "
              << std::chrono::duration<float, std::milli>(end - start).count()
              << " ms" << std::endl;
}
This, however, will only give you a minimum of 30ms; it works as expected above 30ms:
#include <chrono>
#include <iostream>
#include <thread>

int main()
{
    long double milliseconds = 16.7 * 1000.0;
    auto start = std::chrono::high_resolution_clock::now();
    using namespace std::chrono_literals;
    std::this_thread::sleep_for(std::chrono::duration<long double, std::micro>(milliseconds * 1000.0));
    auto end = std::chrono::high_resolution_clock::now();
    std::cout << "Slept for: "
              << std::chrono::duration<float, std::milli>(end - start).count()
              << " ms" << std::endl;
}
Does anyone have an explanation for this?
I've tried various casts and different periods; they all end up about the same.
Using a milliseconds period and above causes a minimum of 30ms; microseconds and below give the expected results.
I suspect there are different code paths with different clock resolutions that bottom out, or something like that, but why does multiplying a variable by 1000 to go from 'ms' to 'us' not work?
I don't get it.
Apparently this is a Windows API "quirk": calling timeBeginPeriod sets the minimum timer resolution not only for Win32 API calls that deal with timing, but also for the standard library's sleep functions.
The timing with this code is nearly perfect on Linux, naturally.
Thanks to Retired Ninja for the answer!
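For anyone hitting the same thing, a minimal sketch of what that workaround looks like (Windows-only; timeBeginPeriod/timeEndPeriod live in winmm, so you need to link against winmm.lib):

#include <windows.h>
#include <chrono>
#include <thread>

int main()
{
    timeBeginPeriod(1);                       // request 1 ms timer resolution
    std::this_thread::sleep_for(std::chrono::milliseconds(17));
    timeEndPeriod(1);                         // restore the previous resolution
    return 0;
}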
The C `clock()` function just returns a zero
clock() function always returning 0
why C clock() returns 0
I looked up all these questions and answers, and I learned that clock() returns processor time measured in clock ticks, where CLOCKS_PER_SEC ticks make up one second (and the tick granularity differs between systems), while time() returns the number of seconds since the epoch.
First, I was trying to measure the execution time of my sorting algorithm using clock() like this:
#include <iostream>
#include <ctime>
... Some other headers and code
a = clock();
exchange_sort();
a = clock() - a;
... Rest of the code
I tried many different data types for a, like int, clock_t, long, and float.
And I sorted a fairly large array, int arr[1000], that was already in increasing order.
But the value of a was always 0, so I tried to find the reason using gdb. I set a breakpoint on the line where the sorting algorithm is located so I could check the value of a = clock(); there should have been some number in the variable, but there was only 0.
So after that, I tried to check whether the function itself was the problem, or something else, like this:
#include <iostream>
#include <ctime>
int main()
{
int a;
clock_t b;
float c;
long d;
a = clock();
b = clock();
c = clock();
d = clock();
return 0;
}
And I checked the value of each variable through gdb: before I assigned the return value of clock() there were just garbage numbers, but after the assignment there were only 0s in the variables.
So my conclusion is that clock() apparently just returns 0 all the time.
I really don't know how I can fix this.
My g++ version is 4.4.7.
I ran this on Linux.
My target platform is x86_64-redhat-linux.
The clock() function is a coarse measure of CPU time used. Your code doesn't use enough CPU time to register on such a coarse measure. You should probably switch to something like getrusage instead; a sketch follows below.
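A minimal sketch of the getrusage approach (POSIX, so it's available on your Linux box); ru_utime is user CPU time with microsecond granularity, and exchange_sort() is only a stub standing in for the code being measured:

#include <sys/resource.h>
#include <cstdio>

void exchange_sort() { /* the sort being measured */ }

int main()
{
    struct rusage before, after;
    getrusage(RUSAGE_SELF, &before);
    exchange_sort();
    getrusage(RUSAGE_SELF, &after);
    long usec = (after.ru_utime.tv_sec  - before.ru_utime.tv_sec) * 1000000L
              + (after.ru_utime.tv_usec - before.ru_utime.tv_usec);
    printf("User CPU time: %ld microseconds\n", usec);
    return 0;
}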
My code used enough CPU time. But it seems clock() only ticks in steps of 15625 (about 15.6 ms, given a CLOCKS_PER_SEC of 1,000,000).
My advice is to use <chrono> instead.
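A sketch of what that could look like for the sort in question; steady_clock measures elapsed (monotonic) time with much finer granularity, so even a fast sort of 1000 ints registers. exchange_sort() is only a stub here:

#include <chrono>
#include <iostream>

void exchange_sort() { /* the asker's sorting algorithm */ }

int main()
{
    auto start = std::chrono::steady_clock::now();
    exchange_sort();
    auto stop = std::chrono::steady_clock::now();
    std::cout << "exchange_sort took "
              << std::chrono::duration<double, std::micro>(stop - start).count()
              << " us\n";
    return 0;
}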
I'm testing a timer based on the ctime library using the clock() function.
Please note that the code that follows is only for test purposes.
#include <ctime>
#include <iostream>   // std::cout
#include <unistd.h>   // sleep()

unsigned long Elapsed(void);

clock_t start = 0;
clock_t stop = 0;

int main()
{
    start = std::clock();
    while(1)
    {
        sleep(1);
        std::cout << "Elapsed seconds: " << Elapsed() << std::endl;
    }
    return 0;
}
unsigned long Elapsed()
{
    stop = std::clock();
    clock_t ticks = stop - start;
    double seconds = (double)ticks / CLOCKS_PER_SEC; // CLOCKS_PER_SEC = 1 million here
    return seconds;
}
As you can see I'm performing an implicit conversion from double to unsigned long when Elapsed() returns the calculated value.
The unsigned long limit for a 32 bit system is 2,147,483,647 and I get overflow after Elapsed() returns 2146.
It looks like the function converts "ticks" to unsigned long and CLOCKS_PER_SEC to unsigned long, and then returns the value. When it converts the "ticks" it overflows.
I expected it, instead, to first calculate "ticks"/CLOCKS_PER_SEC as a double and THEN convert that to unsigned long.
In an attempt to count more seconds I tried returning an unsigned long long, but the variable still overflows at the same value (2147).
Could you explain to me why the compiler converts to unsigned long long "a priori", and why even with unsigned long long it overflows at the same value?
Is there any way to write the Elapsed() function in a better way to prevent the overflow from happening?
Contrary to popular belief, the behaviour on converting a floating point type such as a double to any integral type is undefined if the value cannot fit into that integral type.
So introducing a double in your function is a poor thing to do indeed.
Why not write return ticks / CLOCKS_PER_SEC; instead, if you can allow for truncation and wrap-around effects? Or, if not, use an unsigned long long as the return value; a sketch follows below.
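A sketch of Elapsed() along those lines; the wrap-around of clock_t itself is still there (see the other answers), but there is no longer any undefined double-to-integer conversion:

#include <ctime>
#include <iostream>

clock_t start = 0;

unsigned long long Elapsed()
{
    clock_t ticks = std::clock() - start;
    return static_cast<unsigned long long>(ticks) / CLOCKS_PER_SEC;   // whole seconds, integer division
}

int main()
{
    start = std::clock();
    // ... work ...
    std::cout << "Elapsed seconds: " << Elapsed() << std::endl;
    return 0;
}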
If on your system, clock_t is a 32 bit type, then it's likely it'll wrap around after 2147 seconds like you're seeing. This is expected behavior (ref. clock). And no amount of casting will get around that. Your code needs to be able to deal with the wrap-around (either by ignoring it, or by explicitly accounting for it).
When it converts the "ticks" it overflows.
No, the clock itself "overflows"; the conversion has nothing to do with it. That said, the conversion to double is pointless. Your limitation is the type clock_t. See notes for example from this reference:
The value returned by clock() may wrap around on some implementations. For example, on a machine with 32-bit clock_t, it wraps after 2147 seconds or 36 minutes.
One alternative, if it's available to you, is to rely on the POSIX standard instead of the C standard library. It provides clock_gettime, which can be used to get the CPU time represented as a timespec. Not only does it not suffer from this overflow (until a much longer timespan), but it may also have higher resolution than clock. The linked reference page of clock() conveniently shows example usage of clock_gettime as well; a small sketch follows below.
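A minimal sketch of the clock_gettime route (you may need -lrt on older glibc); CLOCK_PROCESS_CPUTIME_ID gives per-process CPU time in a timespec with nanosecond resolution:

#include <time.h>
#include <cstdio>

int main()
{
    struct timespec start, stop;
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &start);
    // ... work to measure ...
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &stop);
    double seconds = (stop.tv_sec - start.tv_sec)
                   + (stop.tv_nsec - start.tv_nsec) / 1e9;
    printf("CPU time: %f s\n", seconds);
    return 0;
}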
Apologies if this question has already been answered.
#include <iostream>
#include <cstdlib>
#include <ctime>
using namespace std;
int main () {
srand( time(NULL) );
cout << rand();
}
"implicit conversion loses integer precision: 'time_t' (aka 'long') to 'unsigned int'"
is the error message I'm getting when I execute the code above. I am using Xcode 4.6.1. When I use a different compiler, such as the one from codepad.org, it executes perfectly fine, generating what seem like random numbers, so I am assuming it is an Xcode issue that I need to work around?
I have JUST started programming, so I am a complete beginner when it comes to this. Is there a problem with my code, or is it my compiler?
Any help would be appreciated!
"implicit conversion loses integer precision: 'time_t' (aka 'long') to 'unsigned int'"
You're losing precision implicitly because time() returns a long, which is larger than an unsigned int on your target. In order to work around this problem, you should explicitly cast the result (thus removing the "implicit precision loss"):
srand( static_cast<unsigned int>(time(nullptr)));
Given that it's now 2017, I'm editing this question to suggest that you consider the features provided by std::chrono::* defined in <chrono> as a part of C++11. Does your favorite compiler provide C++11? If not, it really should!
To get the current time, you should use:
#include <chrono>
void f() {
const std::chrono::system_clock::time_point current_time = std::chrono::system_clock::now();
}
Why should I bother with this when time() works?
IMO, just one reason is enough: clear, explicit types. When you deal with large programs among big enough teams, knowing whether the values passed around represent time intervals or "absolute" times, and in what units, is critical. With std::chrono you can design interfaces and data structures that are portable and skip out on the is-that-timeout-a-deadline-or-milliseconds-from-now-or-wait-was-it-seconds blues; a small sketch follows below.
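As a small sketch of what that buys you (connectWithTimeout is a made-up function, not a real API):

#include <chrono>
#include <iostream>

// The unit travels with the type, so callers can't confuse seconds with milliseconds.
bool connectWithTimeout(std::chrono::milliseconds timeout)
{
    std::cout << "timeout = " << timeout.count() << " ms\n";
    return true;
}

int main()
{
    connectWithTimeout(std::chrono::milliseconds(250));   // explicit milliseconds
    connectWithTimeout(std::chrono::seconds(2));          // converts exactly to 2000 ms
    // connectWithTimeout(250);                           // would not compile: no unit attached
    return 0;
}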
As mentioned by "nio", a clean workaround would be to explicitly type cast.
Deeper explanation:
srand() requires an unsigned int as its parameter (srand(unsigned int)), but time() returns a time_t (a long int on your platform), which srand() does not accept directly, so to make this work the compiler simply typecasts (converts) the "long int" to "unsigned int".
BUT in your case the compiler warns you about it instead (the designers of the compiler simply thought you should be aware of it, that's all).
So a simple
srand( (unsigned int) time(NULL) );
will do the trick!
(Forgive me if I have done something wrong; this is my first answer on Stack Overflow.)
The srand() function takes an unsigned int as its argument, while time_t is a long type. The upper 4 bytes of the long are stripped out, but that's not a problem here.
srand() will seed the rand() algorithm with the 4 lower bytes of time(), so you're supplying more data than is needed.
If you get an error, try to just explicitly cast the time_t type to unsigned int:
srand( static_cast<unsigned int>(time(NULL)) );
Another interesting thing is that if you run your program twice in the same second, you'll get the same random number, which is sometimes undesirable. That's because seeding the rand() algorithm with the same data makes it generate the same random sequence. It can also be desirable, though, when you debug some piece of code and need to reproduce the same behaviour again; then you simply use something like srand(123456).
This is not an error. The code is valid and its meaning is well defined; if a compiler refuses to compile it, the compiler does not conform to the language definition. More likely, it's a warning, and it's telling you that the compiler writer thinks that you might have made a mistake. If you insist on eliminating warning messages you could add a cast, as others have suggested. I'm not a big fan of rewriting valid, meaningful code in order to satisfy some compiler writer's notion of good style; I'd turn off the warning. If you do that, though, you might overlook other places where a conversion loses data that you didn't intend.
#include <stdlib.h>   // rand, srand, RAND_MAX
#include <iostream>
#include <time.h>     // time

// Returns a pseudo-random float in [VarMin, VarMax).
// Note: seeding inside the function means repeated calls within the
// same second reuse the same seed and return the same value.
float randomizer(int VarMin, int VarMax){
    srand((unsigned)time(NULL));
    int range = (VarMax - VarMin);
    float rnd = VarMin + float(range*(rand()/(RAND_MAX + 1.0)));
    return rnd;
}
When I use this
#include<time.h>
//...
int n = time(0);
//...
I get a warning about converting time to int. Is there a way to remove this warning?
Yes, change n to be a time_t. If you look at the signature in time.h on most / all systems, you'll see that that's what it returns.
#include<time.h>
//...
time_t n = time(0);
//...
Note that Arak is right: using a 32 bit int is a problem, at a minimum, due to the 2038 bug. However, you should consider that any sort of arithmetic on an integer n (rather than a time_t) only increases the probability that your code will trip over that bug early.
PS: In case I didn't make it clear in the original answer, the best response to a compiler warning is almost always to address the situation that you're being warned about. For example, forcing higher precision data into a lower precision variable loses information - the compiler is trying to warn you that you might have just created a landmine bug that someone won't trip over until much later.
time() returns a time_t, not an integer. Prefer that type, because it may be larger than int.
If you really need int, then typecast it explicitly, for example:
int n = (int)time(0);
I think you are using Visual C++. Unlike with g++, the return type of time(0) there is a 64-bit integer even if you are programming for a 32-bit platform. To remove the warning, just assign the result of time(0) to a 64-bit variable.
You probably want to use a type of time_t instead of an int.
See the example at http://en.wikipedia.org/wiki/Time_t.
The reason is that the time() function returns a time_t, so you may need a static_cast to an int or unsigned int in this case. Write it this way:
time_t timer;
int n = static_cast<int> (time(&timer)); // this will give you current time as an integer and it is same as time(NULL)