I have found the usleep function in unistd.h, and I thought it would be useful to wait some time before every action. But I have discovered that the thread just sleeps as long as it doesn't receive any signal. For example, if I press a button (I'm using OpenGL, but the question is really about time.h and unistd.h), the thread gets woken up and I don't get what I want.
In time.h there is the sleep function, which accepts an integer, but an integer is too coarse (I want to wait 0.3 seconds), so I use usleep.
What I'm asking is whether there is a function (from GNU or any other library) to get the time in milliseconds.
It should work like time(), but return milliseconds instead of seconds. Is that possible?
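Something like this sketch is what I have in mind (millis is a hypothetical name; this assumes POSIX gettimeofday):

#include <sys/time.h>

// like time(), but returns milliseconds since the Epoch
long long millis()
{
    struct timeval tv;
    gettimeofday(&tv, 0);
    return (long long)tv.tv_sec * 1000 + tv.tv_usec / 1000;
}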
If you have Boost, you can do it this way:
#include <boost/thread.hpp>

int main()
{
    boost::this_thread::sleep(boost::posix_time::millisec(2000));
    return 0;
}
This simple example, as you can see in the code, sleeps for 2000ms.
Edit:
Ok, I thought I understood the question but then I read the comments and now I'm not so sure anymore.
Perhaps you want to get how many milliseconds that has passed since some point/event? If that is the case then you could do something like:
#include <boost/chrono.hpp>
#include <boost/thread.hpp>
#include <iostream>
int main()
{
    boost::chrono::high_resolution_clock::time_point start = boost::chrono::high_resolution_clock::now();

    boost::this_thread::sleep(boost::posix_time::millisec(2000));

    boost::chrono::milliseconds ms = boost::chrono::duration_cast<boost::chrono::milliseconds>(
        boost::chrono::high_resolution_clock::now() - start);
    std::cout << "2000ms sleep took " << ms.count() << "ms\n";

    return 0;
}
(Please excuse the long lines)
This is a cross-platform function I use:
unsigned Util::getTickCount()
{
#ifdef WINDOWS
    return GetTickCount();
#else
    // POSIX branch requires <sys/time.h> for gettimeofday
    struct timeval tv;
    gettimeofday(&tv, 0);
    return unsigned((tv.tv_sec * 1000) + (tv.tv_usec / 1000));
#endif
}
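A minimal usage sketch (my own illustration); because the arithmetic is unsigned, the difference stays correct even if the tick counter wraps around:

unsigned t0 = Util::getTickCount();
// ... do the work you want to measure ...
unsigned elapsed_ms = Util::getTickCount() - t0;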
Related
What's the best way to calculate a time difference in C++? I'm timing the execution speed of a program, so I'm interested in milliseconds. Better yet, seconds.milliseconds.
The accepted answer works, but needs to include ctime or time.h as noted in the comments.
See std::clock() function.
const clock_t begin_time = clock();
// do something
std::cout << float(clock() - begin_time) / CLOCKS_PER_SEC;
If you want to calculate execution time for yourself (not for the user), it is better to do this in clock ticks (not seconds).
EDIT:
The required header is <ctime> or <time.h>.
I added this answer to clarify that the accepted answer shows CPU time, which may not be the time you want. According to the reference, there is CPU time and there is wall clock time. Wall clock time is the actual elapsed time, regardless of other conditions such as the CPU being shared by other processes. For example, when I used multiple processors to do a certain task, the CPU time was 18 s, while the actual wall clock time was 2 s.
To get the actual time you do,
#include <chrono>
auto t_start = std::chrono::high_resolution_clock::now();
// the work...
auto t_end = std::chrono::high_resolution_clock::now();
double elapsed_time_ms = std::chrono::duration<double, std::milli>(t_end-t_start).count();
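To see the difference between the two clocks, here is a minimal sketch (my own illustration, not from the reference) that measures a sleep with both std::clock() and std::chrono; sleeping consumes wall time but almost no CPU time:

#include <chrono>
#include <ctime>
#include <iostream>
#include <thread>

int main()
{
    std::clock_t c_start = std::clock();
    auto w_start = std::chrono::high_resolution_clock::now();

    std::this_thread::sleep_for(std::chrono::seconds(2)); // wall time passes, CPU stays idle

    double cpu_ms = 1000.0 * (std::clock() - c_start) / CLOCKS_PER_SEC;
    double wall_ms = std::chrono::duration<double, std::milli>(
        std::chrono::high_resolution_clock::now() - w_start).count();
    std::cout << "CPU: " << cpu_ms << " ms, wall: " << wall_ms << " ms\n";
}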
If you are using C++11, here is a simple wrapper (see this gist):
#include <iostream>
#include <chrono>
class Timer
{
public:
    Timer() : beg_(clock_::now()) {}
    void reset() { beg_ = clock_::now(); }
    double elapsed() const {
        return std::chrono::duration_cast<second_>
            (clock_::now() - beg_).count();
    }

private:
    typedef std::chrono::high_resolution_clock clock_;
    typedef std::chrono::duration<double, std::ratio<1> > second_;
    std::chrono::time_point<clock_> beg_;
};
Or for C++03 on *nix:
#include <iostream>
#include <ctime>
class Timer
{
public:
    Timer() { clock_gettime(CLOCK_REALTIME, &beg_); }
    double elapsed() {
        clock_gettime(CLOCK_REALTIME, &end_);
        return end_.tv_sec - beg_.tv_sec +
            (end_.tv_nsec - beg_.tv_nsec) / 1000000000.;
    }
    void reset() { clock_gettime(CLOCK_REALTIME, &beg_); }

private:
    timespec beg_, end_;
};
Example of usage:
int main()
{
    Timer tmr;
    double t = tmr.elapsed();
    std::cout << t << std::endl;

    tmr.reset();
    t = tmr.elapsed();
    std::cout << t << std::endl;

    return 0;
}
I would seriously consider the use of Boost, particularly boost::posix_time::ptime and boost::posix_time::time_duration (at http://www.boost.org/doc/libs/1_38_0/doc/html/date_time/posix_time.html).
It's cross-platform, easy to use, and in my experience provides the highest level of time resolution an operating system provides. Possibly also very important; it provides some very nice IO operators.
To use it to calculate the difference in program execution (to microseconds; probably overkill), it would look something like this [browser written, not tested]:
#include <boost/date_time/posix_time/posix_time.hpp>
#include <iostream>
using namespace boost::posix_time;

ptime time_start(microsec_clock::local_time());
//... execution goes here ...
ptime time_end(microsec_clock::local_time());
time_duration duration(time_end - time_start);
std::cout << duration << '\n';
Boost 1.46.0 and up includes the Chrono library:
thread_clock class provides access to the real thread wall-clock, i.e.
the real CPU-time clock of the calling thread. The thread relative
current time can be obtained by calling thread_clock::now()
#include <boost/chrono/thread_clock.hpp>

{
    ...
    using namespace boost::chrono;
    thread_clock::time_point start = thread_clock::now();
    ...
    thread_clock::time_point stop = thread_clock::now();
    std::cout << "duration: " << duration_cast<milliseconds>(stop - start).count() << " ms\n";
}
In Windows: use GetTickCount
// GetTickCount definition
#include <windows.h>
#include <iostream>
using namespace std;

int main()
{
    DWORD dw1 = GetTickCount();
    // Do something
    DWORD dw2 = GetTickCount();
    cout << "Time difference is " << (dw2 - dw1) << " milliseconds" << endl;
}
You can also use clock_gettime. It can be used to measure:
System wide real-time clock
System wide monotonic clock
Per-process CPU time
Per-thread CPU time
Code is as follows:
#include <time.h>
#include <iostream>

int main() {
    timespec ts_beg, ts_end;
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &ts_beg);
    // ... do some work ...
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &ts_end);
    std::cout << (ts_end.tv_sec - ts_beg.tv_sec) + (ts_end.tv_nsec - ts_beg.tv_nsec) / 1e9 << " sec";
}
Just in case you are on Unix, you can use time to get the execution time:
$ g++ myprog.cpp -o myprog
$ time ./myprog
For me, the easiest way is:
#include <boost/timer.hpp>
boost::timer t;
double duration;
t.restart();
/* DO SOMETHING HERE... */
duration = t.elapsed();
t.restart();
/* DO OTHER STUFF HERE... */
duration = t.elapsed();
Using this piece of code, you don't have to do the classic end - start.
Enjoy your favorite approach.
Just a side note: if you're running on Windows, and you really really need precision, you can use QueryPerformanceCounter. It gives you time in (potentially) nanoseconds.
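A minimal sketch of how that might look; QueryPerformanceFrequency returns the tick rate, so dividing converts raw ticks to time:

#include <windows.h>
#include <iostream>

int main()
{
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq); // ticks per second
    QueryPerformanceCounter(&t0);
    // ... do something ...
    QueryPerformanceCounter(&t1);
    double ms = (t1.QuadPart - t0.QuadPart) * 1000.0 / freq.QuadPart;
    std::cout << ms << " ms\n";
}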
Get the system time in milliseconds at the beginning, and again at the end, and subtract.
To get the number of milliseconds since 1970 in POSIX you would write:
struct timeval tv;
gettimeofday(&tv, NULL);
return ((((unsigned long long)tv.tv_sec) * 1000) +
(((unsigned long long)tv.tv_usec) / 1000));
To get the number of milliseconds since 1601 on Windows you would write:
SYSTEMTIME systime;
FILETIME filetime;
GetSystemTime(&systime);
if (!SystemTimeToFileTime(&systime, &filetime))
    return 0;
unsigned long long ns_since_1601;
ULARGE_INTEGER* ptr = (ULARGE_INTEGER*)&ns_since_1601;
// copy the result into the ULARGE_INTEGER; this is actually
// copying the result into the ns_since_1601 unsigned long long.
ptr->u.LowPart = filetime.dwLowDateTime;
ptr->u.HighPart = filetime.dwHighDateTime;
// Compute the number of milliseconds since 1601; we have to
// divide by 10,000, since the current value is the number of 100ns
// intervals since 1601, not ms.
return (ns_since_1601 / 10000);
If you cared to normalize the Windows answer so that it also returned the number of milliseconds since 1970, then you would have to adjust your answer by 11644473600000 milliseconds. But that isn't necessary if all you care about is the elapsed time.
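If you did want that normalization, it is a single subtraction; ms_since_1601 here stands for the value computed above:

// milliseconds between 1601-01-01 and 1970-01-01
const unsigned long long EPOCH_DIFF_MS = 11644473600000ULL;
unsigned long long ms_since_1970 = ms_since_1601 - EPOCH_DIFF_MS;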
If you are using:
tstart = clock();
// ...do something...
tend = clock();
Then you will need the following to get time in seconds:
time = (tend - tstart) / (double) CLOCKS_PER_SEC;
This seems to work fine on an Intel Mac running OS X 10.7:
#include <time.h>
time_t start = time(NULL);
//Do your work
time_t end = time(NULL);
std::cout<<"Execution Time: "<< (double)(end-start)<<" Seconds"<<std::endl;
I am trying to write a simple C++ function sleep(int millisecond) that will sleep the program for a user-specified number of milliseconds.
Here is my code:
#include <iostream>
#include <time.h>
using namespace std;
void sleep(unsigned int mseconds) {
    clock_t goal = mseconds + clock();
    while (goal > clock());
}

int main() {
    cout << "Hello World !" << endl;
    sleep(3000);
    cout << "Hello World 2" << endl;
}
The sleep() function works perfectly when I run this code on Windows, but doesn't work on Linux. Can anyone figure out what's wrong with my code?
I don't know why everyone is dancing around your question instead of answering it.
You are attempting to implement your own sleep-like function, and your implementation is basically fine, even though it busy-waits instead of sleeping in kernel space (meaning the processor "actively" runs code to keep your program waiting, instead of telling the machine your program is sleeping so it can run other code).
The problem is that clock() is not required to return milliseconds. clock() returns the processor time elapsed in ticks since the process was invoked. What unit those ticks take depends on the implementation.
For instance, on my machine, this is what the man page says:
DESCRIPTION
The clock() function determines the amount of processor time used since
the invocation of the calling process, measured in CLOCKS_PER_SECs of a
second.
RETURN VALUES
The clock() function returns the amount of time used unless an error
occurs, in which case the return value is -1.
SEE ALSO
getrusage(2), clocks(7)
STANDARDS
The clock() function conforms to ISO/IEC 9899:1990 (``ISO C90'') and
Version 3 of the Single UNIX Specification (``SUSv3'') which requires
CLOCKS_PER_SEC to be defined as one million.
As you can see from the STANDARDS section, a tick is one one-millionth of a second, i.e. a microsecond (not a millisecond). To "sleep" for 3 seconds, you'd need to call your sleep(3000000), not sleep(3000).
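A hedged sketch of a more portable variant, scaling by CLOCKS_PER_SEC instead of assuming a tick is a millisecond (still a busy wait, with the CPU cost described above; it assumes CLOCKS_PER_SEC is divisible by 1000, which holds where POSIX mandates one million):

#include <time.h>

void sleep_ms(unsigned int mseconds)
{
    clock_t goal = clock() + (clock_t)mseconds * (CLOCKS_PER_SEC / 1000);
    while (goal > clock())
        ; // busy wait
}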
With C++11 you can use sleep_for.
#include <chrono>
#include <thread>
void sleep(unsigned int mseconds) {
    std::chrono::milliseconds dura(mseconds);
    std::this_thread::sleep_for(dura);
}
You can use the built-in sleep() function, which takes the suspend time in seconds (not milliseconds); you have to include the unistd.h header, as the built-in sleep() function is declared there.
Try it:
#include <iostream>
#include <unistd.h>
using namespace std;
int main() {
    cout << "Hello World !" << endl;
    sleep(3); // wait for 3 seconds
    cout << "Hello World 2" << endl;
}
:P
There is no standard C API for milliseconds on Linux, so you will have to use usleep. POSIX sleep takes seconds.
I need to know how to create a timer or measure out 500 ms in C++ in a Linux environment. I have tried using gettimeofday with the time structure, but I can't get the correct precision for milliseconds. What I am trying to do is have an operation continue for a max of 500 ms; after 500 ms, something else happens.
If you have access to C++11, then your best bet is to use the std::chrono library:
http://en.cppreference.com/w/cpp/chrono/duration
I'm not entirely sure what you want to do with it. Do you want to wait for exactly 500 ms? You can do this for that:
std::this_thread::sleep_for(std::chrono::milliseconds(500));
You can do an operation until 500 milliseconds have elapsed by getting a time point and checking whether now() minus that time point is greater than 500 ms:
// if your compiler supports it you can use auto
std::chrono::system_clock::time_point start = std::chrono::system_clock::now();
while (std::chrono::system_clock::now() - start
       < std::chrono::milliseconds(500))
{
    // do action
}
If you don't have C++11, this will also work with the Boost.Chrono library. The advantage of this approach is that it is portable, unlike the Linux time functions.
Your question isn't really clear about why you "can't get the correct precision" or what happens when you try to do that, but if you're having trouble with gettimeofday, consider using clock_gettime instead. man clock_gettime for details.
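A minimal sketch with CLOCK_MONOTONIC, which is a good choice for intervals since it isn't affected by clock adjustments (on older glibc you may need to link with -lrt):

#include <time.h>

timespec beg, end;
clock_gettime(CLOCK_MONOTONIC, &beg);
// ... the operation to bound at 500 ms ...
clock_gettime(CLOCK_MONOTONIC, &end);
double elapsed_ms = (end.tv_sec - beg.tv_sec) * 1000.0
                  + (end.tv_nsec - beg.tv_nsec) / 1e6;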
Since you are in Linux, you can use the system call usleep
int usleep(useconds_t usec);
which will let your process sleep for a period given in microseconds.
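For example, since usleep takes microseconds, 500 ms would be:

#include <unistd.h>

usleep(500 * 1000); // 500 ms = 500,000 microseconds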
#include <chrono>
#include <iostream>
#include <future>
#include <atomic>
#include <thread> // for std::this_thread::sleep_for

void keep_busy(std::chrono::milliseconds this_long, std::atomic<bool> *canceled) {
    auto start = std::chrono::high_resolution_clock::now();
    while (std::chrono::high_resolution_clock::now() < start + this_long) {
        std::cout << "work\n";
        std::this_thread::sleep_for(std::chrono::milliseconds(50));
        if (canceled->load()) {
            std::cout << "canceling op\n";
            throw "operation canceled";
        }
    }
}

int main() {
    std::atomic<bool> canceled(false);
    auto future = std::async(std::launch::async,
        keep_busy, std::chrono::milliseconds(600), &canceled);

    std::this_thread::sleep_for(std::chrono::milliseconds(500));
    canceled.store(true);

    try {
        future.get();
        std::cout << "operation succeeded\n";
    } catch (char const *e) {
        std::cout << "operation failed due to: " << e << '\n';
    }
}
I'm not entirely sure this is correct...
I am still new to C++. Is the clock function absolute (meaning it counts how long you sleep for), or does it only count the time the application actually executes?
I want a reliable way to produce exact intervals of 1 second. I am saving files, so I need to account for that. I was returning the runtime for that in milliseconds, and then sleeping for the remainder.
Is there a more accurate or simpler way to do this?
EDIT:
The main problem I am having is that I am getting a negative number:
double FCamera::getRuntime(clock_t* end, clock_t* start)
{
    return ((double(end - start) / CLOCKS_PER_SEC) * 1000);
}

clock_t start = clock();
doWork();
clock_t end = clock();
double runtimeInMilliseconds = getRuntime(&end, &start);
It's giving me a negative number, what's up with that?
Walter
clock() returns the number of clock ticks elapsed since the program was launched. If you want to convert the value returned by clock into seconds divide by CLOCKS_PER_SEC (and multiply for the other way around).
There is just one pitfall, the initial moment of reference used by clock as the beginning of the program execution may vary between platforms. To calculate the actual processing times of a program, the value returned by clock should be compared to a value returned by an initial call to clock.
EDIT
larsman has been so kind as to post other pitfalls in the comments. I have included them here for future reference.
On several other implementations, the value returned by clock() also includes the times of any children whose status has been collected via wait(2) (or another wait-type call). Linux does not include the times of waited-for children in the value returned by clock().
Note that the time can wrap around. On a 32-bit system where CLOCKS_PER_SEC equals 1000000 [as mandated by POSIX] this function will return the same value approximately every 72 minutes.
EDIT2
After messing around a while, here is my portable (Linux/Windows) msleep. Be wary though; I'm not experienced with C/C++, so it will most likely contain the stupidest error ever.
#ifdef _WIN32
#include <windows.h>
#define msleep(ms) Sleep((DWORD) ms)
#else
#include <unistd.h>
inline void msleep(unsigned long ms) {
    while (ms--) usleep(1000);
}
#endif
You missed the * (dereference). Your arguments are pointers (addresses of clock_t variables), so your code must be modified:
return((double(*end - *start)/CLOCKS_PER_SEC)*1000);
Under Windows, you can use:
VOID WINAPI Sleep(
__in DWORD dwMilliseconds
);
In Linux, you will want to use:
#include <unistd.h>
unsigned int sleep(unsigned int seconds);
Notice the parameter difference - milliseconds under windows and seconds under linux.
My approach relies on:
int gettimeofday(struct timeval *tv, struct timezone *tz);
which gives the number of seconds and microseconds since the Epoch. According to the man pages:
The tv argument is a struct timeval (as specified in <sys/time.h>):
struct timeval {
time_t tv_sec; /* seconds */
suseconds_t tv_usec; /* microseconds */
};
So here we go:
#include <sys/time.h>
#include <unistd.h> // for sleep()
#include <iostream>
#include <iomanip>

static long myclock()
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return (tv.tv_sec * 1000000) + tv.tv_usec;
}

double getRuntime(long* end, long* start)
{
    return (*end - *start);
}

void doWork()
{
    sleep(3);
}

int main(void)
{
    long start = myclock();
    doWork();
    long end = myclock();

    std::cout << "Time elapsed: " << std::setprecision(6) << getRuntime(&end, &start) / 1000.0 << " milliseconds" << std::endl;
    std::cout << "Time elapsed: " << std::setprecision(3) << getRuntime(&end, &start) / 1000000.0 << " seconds" << std::endl;

    return 0;
}
Outputs:
Time elapsed: 3000.08 milliseconds
Time elapsed: 3 seconds
I am writing a program that will be used on a Solaris machine. I need a way of keeping track of how many seconds have passed since the start of the program. I'm talking very simple here. For example, I would have an int seconds = 0; but how would I go about updating the seconds variable as each second passes?
It seems that some of the various time functions that I've looked at only work on Windows machines, so I'm just not sure.
Any suggestions would be appreciated.
Thanks for your time.
A very simple method:
#include <time.h>
time_t start = time(0);
double seconds_since_start = difftime( time(0), start);
The main drawback to this is that you have to poll for the updates. You'll need platform support or some other lib/framework to do this on an event basis.
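A minimal polling sketch under that constraint; sleep(1) and the five iterations are just illustrative assumptions:

#include <time.h>
#include <unistd.h>
#include <iostream>

int main()
{
    time_t start = time(0);
    for (int i = 0; i < 5; ++i) {
        sleep(1); // poll roughly once per second
        std::cout << difftime(time(0), start) << " seconds elapsed\n";
    }
}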
Use std::chrono.
#include <chrono>
#include <iostream>
int main(int argc, char *argv[])
{
    auto start_time = std::chrono::high_resolution_clock::now();

    // ... the program runs ...

    auto current_time = std::chrono::high_resolution_clock::now();
    std::cout << "Program has been running for " << std::chrono::duration_cast<std::chrono::seconds>(current_time - start_time).count() << " seconds" << std::endl;
    return 0;
}
If you only need a resolution of seconds, then std::steady_clock should be sufficient.
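For example, a sketch of the same measurement with std::steady_clock, which is monotonic and therefore safer for measuring elapsed time:

#include <chrono>

auto start = std::chrono::steady_clock::now();
// ... the program runs ...
auto secs = std::chrono::duration_cast<std::chrono::seconds>(
    std::chrono::steady_clock::now() - start).count();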
You are approaching it backwards. Instead of having a variable you have to worry about updating every second, just initialize a variable on program start with the current time, and then whenever you need to know how many seconds have elapsed, you subtract the now current time from that initial time. Much less overhead that way, and no need to nurse some timing related variable update.
#include <stdio.h>
#include <time.h>
#include <windows.h>
using namespace std;

void wait(int seconds);

int main()
{
    time_t start, end;
    double diff;

    time(&start); // useful call
    for (int i = 0; i < 10; i++) // this loop is useless, just to pass some time
    {
        printf("%s\n", ctime(&start));
        wait(1);
    }
    time(&end); // useful call

    diff = difftime(end, start); // this will give you the time spent between those two calls
    printf("difference in seconds=%f", diff); // convert secs as you like

    system("pause");
    return 0;
}

void wait(int seconds)
{
    clock_t endwait = clock() + seconds * CLOCKS_PER_SEC;
    while (clock() < endwait) {}
}
This should work fine on Solaris/Unix too; just remove the Windows references.
You just need to store the date/time when the application started. Whenever you need to display how long your program has been running, subtract the start time from the current date/time.