How do you add a timed delay to a C++ program?

I am trying to add a timed delay in a C++ program, and was wondering if anyone has any suggestions on what I can try or information I can look at?
I don't have many details yet on how this delay will be used; until I know more about how to add one, I'm not sure how I should even attempt to implement it.

An updated answer for C++11:
Use the sleep_for and sleep_until functions:
#include <chrono>
#include <thread>
int main() {
    using namespace std::this_thread; // sleep_for, sleep_until
    using namespace std::chrono; // nanoseconds, system_clock, seconds

    sleep_for(nanoseconds(10));
    sleep_until(system_clock::now() + seconds(1));
}
With these functions there's no longer a need to continually add new functions for better resolution: sleep, usleep, nanosleep, etc. sleep_for and sleep_until are template functions that can accept values of any resolution via chrono types: hours, seconds, femtoseconds, etc.
In C++14 you can further simplify the code with the literal suffixes for nanoseconds and seconds:
#include <chrono>
#include <thread>
int main() {
    using namespace std::this_thread; // sleep_for, sleep_until
    using namespace std::chrono_literals; // ns, us, ms, s, h, etc.
    using std::chrono::system_clock;

    sleep_for(10ns);
    sleep_until(system_clock::now() + 1s);
}
Note that the actual duration of a sleep depends on the implementation: You can ask to sleep for 10 nanoseconds, but an implementation might end up sleeping for a millisecond instead, if that's the shortest it can do.
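If you want to see how coarse your implementation's sleeps actually are, a quick measurement along these lines can help (just a sketch; the 100 microsecond request is an arbitrary choice):
#include <chrono>
#include <iostream>
#include <thread>

int main() {
    using namespace std::chrono;
    auto start = steady_clock::now();
    std::this_thread::sleep_for(microseconds(100)); // ask for 100 us
    auto actual = steady_clock::now() - start;
    // the printed value may be considerably larger than 100 on some systems
    std::cout << duration_cast<microseconds>(actual).count() << " us\n";
}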

In Win32:
#include <windows.h>

Sleep(milliseconds);
In Unix:
#include <unistd.h>

unsigned int microsecond = 1000000;
usleep(3 * microsecond); // sleeps for 3 seconds
sleep() only takes a whole number of seconds, which is often too coarse.
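If you need sub-second precision on POSIX without usleep, nanosleep is the standard interface; a minimal sketch (error handling omitted):
#include <time.h>

int main() {
    // sleep for 1.5 seconds: 1 second plus 500,000,000 nanoseconds
    struct timespec req;
    req.tv_sec = 1;
    req.tv_nsec = 500000000L;
    nanosleep(&req, NULL); // second argument would receive the remaining time if interrupted
}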

#include <unistd.h>
usleep(3000000);
This will also sleep for three seconds. You can refine the numbers a little more though.

Do you want something as simple as:
#include <unistd.h>
sleep(3); // sleeps for 3 seconds

Note that this does not guarantee that the amount of time the thread sleeps will be anywhere close to the sleep period, it only guarantees that the amount of time before the thread continues execution will be at least the desired amount. The actual delay will vary depending on circumstances (especially load on the machine in question) and may be orders of magnitude higher than the desired sleep time.
Also, you don't list why you need to sleep but you should generally avoid using delays as a method of synchronization.

You can try this code snippet:
#include <chrono>
#include <thread>

int main() {
    std::this_thread::sleep_for(std::chrono::nanoseconds(10));
    std::this_thread::sleep_until(std::chrono::system_clock::now() + std::chrono::seconds(1));
}

You can also use select(2) if you want microsecond precision (this works on platforms that don't have usleep(3)).
The following code will wait for 1.5 seconds:
#include <sys/select.h>
#include <sys/time.h>
#include <unistd.h>

int main() {
    struct timeval t;
    t.tv_sec = 1;
    t.tv_usec = 500000;
    select(0, NULL, NULL, NULL, &t);
}

I found that _sleep(milliseconds); works well for Win32 if you include the chrono library.
E.g.:
#include <chrono>
#include <iostream>
using namespace std;

int main()
{
    cout << "text" << endl;
    _sleep(10000); // pauses for 10 seconds
}
Make sure you include the underscore before sleep.

Yes, sleep is probably the function of choice here. Note that the time passed into the function is the smallest amount of time the calling thread will be inactive. So for example if you call sleep with 5 seconds, you're guaranteed your thread will be sleeping for at least 5 seconds. Could be 6, or 8 or 50, depending on what the OS is doing. (During optimal OS execution, this will be very close to 5.) Another useful feature of the sleep function is to pass in 0. This will force a context switch from your thread.
Some additional information:
http://www.opengroup.org/onlinepubs/000095399/functions/sleep.html
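For the portable C++ equivalent of that "pass in 0" trick, std::this_thread::yield gives up the rest of the current time slice; a minimal sketch:
#include <thread>

int main() {
    // hint to the scheduler that other threads may run now;
    // the calling thread stays runnable and may resume immediately
    std::this_thread::yield();
}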

The top answer here seems to be OS-dependent; for a more portable solution you can write up a quick sleep function using the ctime header file (although this may be a poor implementation on my part).
#include <iostream>
#include <ctime>

using namespace std;

void sleep(float seconds) {
    clock_t startClock = clock();
    float secondsAhead = seconds * CLOCKS_PER_SEC;
    // do nothing until the elapsed time has passed
    while (clock() < startClock + secondsAhead)
        ;
    return;
}

int main() {
    cout << "Next string coming up in one second!" << endl;
    sleep(1.0);
    cout << "Hey, what did I miss?" << endl;
    return 0;
}
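Be aware that clock() measures processor time on many platforms rather than wall-clock time, and that this loop keeps a core fully busy while it waits. If you genuinely need a portable busy-wait rather than a real sleep, a steady_clock based version is a safer sketch:
#include <chrono>

// usage: busy_wait(std::chrono::seconds(1));
void busy_wait(std::chrono::steady_clock::duration d) {
    // spin until the wall-clock deadline passes; this still burns CPU
    auto deadline = std::chrono::steady_clock::now() + d;
    while (std::chrono::steady_clock::now() < deadline)
        ;
}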

To delay output in C++ for a fixed time, you can use the Sleep() function from the windows.h header.
The syntax is Sleep(time_in_ms), as in:
cout << "Apple\n";
Sleep(3000);
cout << "Mango";
OUTPUT: the code above will print Apple, wait for 3 seconds, and then print Mango.
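For completeness, a sketch of that snippet as a full program (assuming a Windows build environment where windows.h is available):
#include <iostream>
#include <windows.h>

using namespace std;

int main() {
    cout << "Apple\n";
    Sleep(3000); // milliseconds
    cout << "Mango" << endl;
}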

Syntax:
void sleep(unsigned seconds);
sleep() suspends execution for an interval (seconds).
With a call to sleep, the current program is suspended from execution for the number of seconds specified by the argument seconds. The interval is accurate only to the nearest hundredth of a second or to the accuracy of the operating system clock, whichever is less accurate.

Many others have provided good info for sleeping. I agree with Wedge that a sleep is seldom the most appropriate solution.
If you are sleeping while you wait for something, then you are better off actually waiting for that thing/event. Look at condition variables for this, as in the sketch below.
I don't know what OS you are trying to do this on, but for threading and synchronisation you could look at the Boost threading libraries (Boost Condition Variable).
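As a rough illustration of waiting on an event instead of sleeping, here is a sketch using the standard C++11 condition variable (not tied to any particular OS or to Boost); data_ready and the one-second "work" are placeholders:
#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

std::mutex m;
std::condition_variable cv;
bool data_ready = false; // the event being waited for (placeholder)

int main() {
    std::thread producer([] {
        std::this_thread::sleep_for(std::chrono::seconds(1)); // simulate work
        {
            std::lock_guard<std::mutex> lock(m);
            data_ready = true;
        }
        cv.notify_one(); // wake the waiter as soon as the event happens
    });

    std::unique_lock<std::mutex> lock(m);
    cv.wait(lock, [] { return data_ready; }); // blocks without burning CPU
    std::cout << "event received\n";
    producer.join();
}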
Moving now to the other extreme, if you are trying to wait for exceptionally short periods then there are a couple of hack-style options. If you are working on some sort of embedded platform where a 'sleep' is not implemented, you can try a simple loop (for/while etc.) with an empty body (be careful the compiler does not optimise it away). Of course the wait time is dependent on the specific hardware in this case.
For really short 'waits' you can try an assembly "nop". I highly doubt these are what you are after, but without knowing why you need to wait it's hard to be more specific.
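A sketch of such a hardware-dependent busy loop (the volatile qualifier keeps the compiler from optimising the empty loop away; the iteration count is a made-up placeholder that would have to be calibrated for your hardware):
int main() {
    // burn a roughly fixed number of cycles by spinning
    for (volatile long i = 0; i < 1000000; ++i)
        ; // do nothing
}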

On Windows you can include the windows.h header and use Sleep() to pause the program; it takes a value in milliseconds (for example, Sleep(1000) pauses for one second, while Sleep(0) just gives up the rest of the current time slice).

Related

std::this_thread::sleep_for time resolution

So I'm helping my buddy out with some code and we've hit some weirdness in the sleep_for function:
This works, gives an "acceptable" timing of about 16.7ms (acceptable being +/- 2-4ms, but anyway):
#include <chrono>
#include <iostream>
#include <thread>

int main()
{
    long double milliseconds = 16.7 * 1000;
    auto start = std::chrono::high_resolution_clock::now();
    using namespace std::chrono_literals;
    std::this_thread::sleep_for(std::chrono::duration<long double, std::micro>(1670));
    auto end = std::chrono::high_resolution_clock::now();
    std::cout << "Slept for: " << std::chrono::duration<float, std::milli>(end - start) << std::endl;
}
This however, will only give you a minimum of 30ms, works as expected above 30ms:
#include <chrono>
#include <iostream>
#include <thread>

int main()
{
    long double milliseconds = 16.7 * 1000.0;
    auto start = std::chrono::high_resolution_clock::now();
    using namespace std::chrono_literals;
    std::this_thread::sleep_for(std::chrono::duration<long double, std::micro>(milliseconds * 1000.0));
    auto end = std::chrono::high_resolution_clock::now();
    std::cout << "Slept for: " << std::chrono::duration<float, std::milli>(end - start) << std::endl;
}
Does anyone have an explanation for this?
I've tried various castings and different periods, they all end up about the same.
Using milliseconds period and above causes a minimum of 30ms, microseconds and below have expected results.
I suspect that there are different code paths with different clock resolutions that bottom out somewhere, but why does multiplying a variable by 1000 to go from 'ms' to 'us' not work?
I don't get it.
Apparently this is a Windows API "quirk": calling timeBeginPeriod sets the minimum timer resolution not only for Win32 API calls that deal with timing, but also for the standard library's sleep functions.
The timing with this code is nearly perfect on Linux, naturally.
Thanks to Retired Ninja for the answer!
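For reference, a hedged sketch of what that workaround looks like on Windows (timeBeginPeriod/timeEndPeriod come from the multimedia timer API, so link with winmm.lib; whether the requested 1 ms resolution is honoured is up to the OS):
#include <chrono>
#include <thread>
#include <windows.h> // timeBeginPeriod / timeEndPeriod (link with winmm.lib)

int main() {
    timeBeginPeriod(1); // request 1 ms timer resolution
    std::this_thread::sleep_for(std::chrono::milliseconds(16));
    timeEndPeriod(1);   // always pair with a matching timeEndPeriod
}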

What is the maximum time interval chrono can measure in C++?

I'm quite experienced in programming, but new to C++. I'm trying to measure the time it takes to run some code. In the future I might write code that takes hours or days to finish. Therefore it is important for me to know the limits of chrono time measurement. Accuracy in milliseconds should be sufficient.
What is the maximum time I can measure?
I have used the following code, please let me know if this can be improved:
#include <chrono>
#include <iostream>

using namespace std::chrono;

int main() {
    auto start = high_resolution_clock::now();
    // calculations here
    auto finish = high_resolution_clock::now();
    duration<double> elapsed = finish - start; // elapsed time in seconds
    std::cout << elapsed.count();
}
Here's an informative little HelloWorld:
#include <chrono>
#include <iostream>

int
main()
{
    using namespace std::chrono;
    using namespace std;
    using years = duration<double, ratio_multiply<ratio<86'400>, ratio<146'097, 400>>>;
    cout << years{high_resolution_clock::time_point::max() -
                  high_resolution_clock::now()}.count()
         << " years until overflow\n";
}
I first create a double-based years duration so that the output is easy to read. Then I subtract now() from the time_point's max(), convert that to years and print it out.
For me this just output:
292.256 years until overflow
std::chrono::milliseconds is guaranteed to be stored in an underlying signed integer of at least 45 bits, which means that if your elapsed duration is less than about 557 years you should be fine.
Source: https://en.cppreference.com/w/cpp/chrono/duration
Edit: As orlp pointed out, you might have some issues if/when the clock overflows (but I do not see any mention of it on cppreference).
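If you would rather check the limit of your own implementation than rely on the guaranteed minimum, a small sketch like this prints the maximum representable std::chrono::milliseconds value in years (using an ordinary 365.2425-day civil year, 31,556,952 seconds):
#include <chrono>
#include <iostream>

int main() {
    using namespace std::chrono;
    // a double-based "year" of 365.2425 days, expressed in seconds
    using years = duration<double, std::ratio<31'556'952>>;
    std::cout << years{milliseconds::max()}.count()
              << " years representable by std::chrono::milliseconds\n";
}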
Also,
The high_resolution_clock is not implemented consistently across different standard library implementations, and its use should be avoided.
[...]
Generally one should just use std::chrono::steady_clock or std::chrono::system_clock directly instead of std::chrono::high_resolution_clock: use steady_clock for duration measurements, and system_clock for wall-clock time.
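Following that advice, a measurement sketch that uses steady_clock instead of high_resolution_clock looks essentially the same:
#include <chrono>
#include <iostream>

int main() {
    auto start = std::chrono::steady_clock::now();
    // ... calculations here ...
    auto finish = std::chrono::steady_clock::now();
    std::chrono::duration<double> elapsed = finish - start; // seconds
    std::cout << elapsed.count() << " s\n";
}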

usleep inside loop takes too long [duplicate]

This question already has answers here:
usleep() to calculate elapsed time behaves weird
In the below C++ program, I am using the function usleep() to sleep for 1.5 seconds. I implemented that in two supposedly equivalent ways, as illustrated below:
#include <iostream>
#include <unistd.h>

using namespace std;

int main() {
    // METHOD #1
    cout << "sleep" << endl;
    usleep(1500000);
    cout << "wake up" << endl;

    // METHOD #2
    cout << "sleep" << endl;
    for (int i = 0; i < 1500000; i++)
        usleep(1);
    cout << "wake up" << endl;

    return 0;
}
however the results came as follows:
First method: takes exactly 1.5 seconds
Second method: takes around 1.5 minutes !
Actually, I will need the second method. According to this answer, I think I need a more accurate function than usleep(). Could anyone help?
From the documentation (emphasis mine)
The usleep() function suspends execution of the calling thread for
(at least) usec microseconds. The sleep may be lengthened slightly
by any system activity or by the time spent processing the call or by
the granularity of system timers.
So in other words, the reason why it takes longer is because it's now "going to sleep" and "waking up" 1500000 times instead of just once, and with such a short sleep duration, that overhead may be much bigger than the actual microsecond sleep.
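If the intent of the loop is to pace 1,500,000 iterations over roughly 1.5 seconds of real time, one common fix (a sketch in standard C++ rather than usleep) is to compute absolute deadlines once and sleep until them, so the per-call wake-up overhead does not accumulate:
#include <chrono>
#include <thread>

int main() {
    using namespace std::chrono;
    auto next = steady_clock::now();
    for (int i = 0; i < 1500000; i++) {
        // per-iteration work would go here
        next += microseconds(1);
        std::this_thread::sleep_until(next); // returns quickly if 'next' is already in the past
    }
}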

Calculate Clocks Per Sec

Am I doing it correctly? At times, my program will print 2000+ for the chrono solution, and it always prints 1000 for CLOCKS_PER_SEC.
What is that value I'm actually calculating? Is it Clocks Per Sec?
#include <iostream>
#include <chrono>
#include <cstdint>
#include <thread>
#include <ctime>

std::chrono::time_point<std::chrono::high_resolution_clock> SystemTime()
{
    return std::chrono::high_resolution_clock::now();
}

std::uint32_t TimeDuration(std::chrono::time_point<std::chrono::high_resolution_clock> Time)
{
    return std::chrono::duration_cast<std::chrono::nanoseconds>(SystemTime() - Time).count();
}

int main()
{
    auto Begin = std::chrono::high_resolution_clock::now();
    std::this_thread::sleep_for(std::chrono::milliseconds(1));
    std::cout << (TimeDuration(Begin) / 1000.0) << std::endl;
    std::cout << CLOCKS_PER_SEC;
    return 0;
}
In order to get the correct ticks per second on Linux, you need to use the return value of ::sysconf(_SC_CLK_TCK) (declared in the header unistd.h), rather than the macro CLOCKS_PER_SEC.
The latter is a constant defined in the POSIX standard – it is unrelated to the actual ticks per second of your CPU clock. For example, see the man page for clock:
C89, C99, POSIX.1-2001. POSIX requires that CLOCKS_PER_SEC equals 1000000 independent of the actual resolution.
However, note that even when using the correct ticks-per-second constant, you still won't get the number of actual CPU cycles per second. "Clock tick" is a special unit used by the CPU clock. There is no standardized definition of how it relates to actual CPU cycles.
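A minimal sketch of querying the real ticks-per-second value on Linux (this is the kernel's clock-tick unit, used by interfaces such as times(), not the CPU frequency):
#include <unistd.h>
#include <iostream>

int main() {
    long ticks = ::sysconf(_SC_CLK_TCK); // clock ticks per second, commonly 100
    std::cout << "ticks per second: " << ticks << '\n';
}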
In Boost's library there is a timer class that uses CLOCKS_PER_SEC to calculate the maximum time the timer can measure. It says that on Windows CLOCKS_PER_SEC is 1000 and on Mac OS X and Linux it is 1000000, so on the latter OSes the resolution is higher.

Slowing C++ output on terminal

I wrote a program that simulates the Game of Life. Basically the world is implemented as a two-dimensional std::vector of bool. If the bool is true the cell is alive, if it is false the cell is dead. The output of the program is the system at each time step, completely in ASCII:
[ ][0][ ]
[ ][ ][0]
[0][0][0]
The problem is that the program obviously runs fast and each time step is printed too quickly: I can't see how the system evolves. Is there some trick to slow down the output (or the program itself)?
EDIT: I'm on Mac OS X 10.7. My compiler is GCC 4.7.
You can use standard C++ (C++11):
#include <thread>
#include <chrono>
#include <iostream>

int main() {
    while (true) {
        // draw loop
        std::this_thread::sleep_for(std::chrono::milliseconds(20));
    }
}
Alternatively, you could use a library that lets you specify an interval at which to call your draw function. OS X has Grand Central Dispatch (a.k.a. libdispatch). Using GCD you could create a dispatch timer source that calls your draw function with a specified frequency.
dispatch_source_t timer = dispatch_source_create(
    DISPATCH_SOURCE_TYPE_TIMER, 0, 0, dispatch_get_main_queue());

dispatch_source_set_timer(timer, DISPATCH_TIME_NOW,
    duration_cast<nanoseconds>(milliseconds(20)).count(),
    duration_cast<nanoseconds>(milliseconds( 5)).count());
// the API is defined to use nanoseconds, but I'd rather work in milliseconds
// so I use std::chrono to do the conversion above

dispatch_source_set_event_handler(timer,
    []{ your_draw_function(); });
// I'm not sure if GCC 4.7 actually supports converting C++11 lambdas to
// Apple's C blocks, or if it even supports blocks. Clang supports this.

dispatch_resume(timer);
dispatch_main();
libdispatch reference
Whatever system you are using, it will have some kind of sleep function that you can call to suspend your program for a specified period of time. You do not specify what OS you use, so I can't give exact details, but it sounds like the approach you are looking for.
If you call sleep for a certain length of time after drawing each update of the image, your program will sleep for that time before resuming and drawing the next update. This should give you a chance to actually see the changes.
If you want higher resolution time sleep you can look at nanosleep and usleep
1. You can use
int tmp;
std::cin >> tmp;
and the program will wait for input before going further.
2. You can loop over some calculations, like:
static double Tmp[1000000]; // static to avoid a huge stack allocation
for (int i = 0; i < 1000000; i++)
    Tmp[i] = i;
for (int i = 0; i < 1000000; i++)
    Tmp[i] = sin(sin(sin(Tmp[i])));
3. You can check which delay functions are available to you. An example is Sleep(nSeconds) here.
4. You can save your system time and spin until a target time is reached, like:
time_t time_end = time(NULL) + 3; // three seconds from now
while (time(NULL) < time_end) {}