C++ timer on Linux

I need to know how to create a timer, or measure out 500 ms, in C++ in a Linux environment. I have tried using gettimeofday and the time structure but can't get the correct precision for milliseconds. What I am trying to do is have an operation continue for a max of 500 ms; after 500 ms something else happens.

If you have access to C++11 then your best bet is to use the std::chrono library:
http://en.cppreference.com/w/cpp/chrono/duration
I'm not entirely sure what you want to do with it. Do you want to wait for exactly 500 ms? If so, you can do this (note that sleep_for needs #include <thread>):
std::this_thread::sleep_for(std::chrono::milliseconds(500));
You can run an operation until 500 milliseconds have elapsed by taking a time point at the start and checking whether system_clock::now() - start is greater than 500 ms:
// if your compiler supports it you can use auto
std::chrono::system_clock::time_point start = std::chrono::system_clock::now();
while (std::chrono::system_clock::now() - start
       < std::chrono::milliseconds(500))
{
    // do action
}
If you don't have C++11, this will also work with the Boost.Chrono library. The advantage of this approach is that it is portable, unlike using Linux time functions.
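For example, here is a minimal sketch of the same loop written against Boost.Chrono (a sketch only; it assumes you link against the boost_chrono and boost_system libraries):

#include <boost/chrono.hpp>

void run_for_500ms()
{
    // steady_clock is the better choice for measuring intervals,
    // since it is never adjusted backwards like the system clock
    boost::chrono::steady_clock::time_point start = boost::chrono::steady_clock::now();
    while (boost::chrono::steady_clock::now() - start
           < boost::chrono::milliseconds(500))
    {
        // do action
    }
}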

Your question isn't really clear about why you "can't get the correct precision" or what happens when you try, but if you're having trouble with gettimeofday, consider using clock_gettime instead. See man clock_gettime for details.
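For instance, a minimal sketch of timing a 500 ms window with clock_gettime and CLOCK_MONOTONIC (which, unlike the wall clock, can't jump backwards; on older glibc you may also need to link with -lrt):

#include <time.h>

static long elapsed_ms(const struct timespec *start, const struct timespec *end)
{
    return (end->tv_sec - start->tv_sec) * 1000
         + (end->tv_nsec - start->tv_nsec) / 1000000;
}

int main(void)
{
    struct timespec start, now;
    clock_gettime(CLOCK_MONOTONIC, &start);
    do {
        /* do work */
        clock_gettime(CLOCK_MONOTONIC, &now);
    } while (elapsed_ms(&start, &now) < 500);
    return 0;
}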

Since you are on Linux, you can use the system call usleep:
int usleep(useconds_t usec);
which will let your process sleep for a period given in microseconds.
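For example, to wait roughly 500 ms (note that usleep can return early if it is interrupted by a signal, and POSIX marks it obsolete in favor of nanosleep):

#include <unistd.h>

usleep(500000); /* 500,000 microseconds = 500 ms */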

#include <atomic>
#include <chrono>
#include <future>
#include <iostream>
#include <thread> // for std::this_thread::sleep_for

// busy-work until this_long has elapsed, aborting if *canceled becomes true
void keep_busy(std::chrono::milliseconds this_long, std::atomic<bool> *canceled)
{
    auto start = std::chrono::high_resolution_clock::now();
    while (std::chrono::high_resolution_clock::now() < start + this_long) {
        std::cout << "work\n";
        std::this_thread::sleep_for(std::chrono::milliseconds(50));
        if (canceled->load()) {
            std::cout << "canceling op\n";
            throw "operation canceled";
        }
    }
}

int main()
{
    std::atomic<bool> canceled(false);
    auto future = std::async(std::launch::async,
                             keep_busy, std::chrono::milliseconds(600), &canceled);
    std::this_thread::sleep_for(std::chrono::milliseconds(500));
    canceled.store(true);
    try {
        future.get();
        std::cout << "operation succeeded\n";
    } catch (char const *e) {
        std::cout << "operation failed due to: " << e << '\n';
    }
}
I'm not entirely sure this is correct...

Related

C++ condition variable with no time out

Recently, I met a problem related to condition variables in C++. The code is shown below:
#include <iostream>
#include <thread>
#include <chrono>
#include <mutex>
#include <condition_variable>

std::condition_variable cv;
std::mutex mutex;

int main()
{
    std::unique_lock<std::mutex> uniqueLock(mutex);
    while (true)
    {
        if (cv.wait_for(uniqueLock, std::chrono::milliseconds(1000)) == std::cv_status::no_timeout)
        {
            std::cout << "has image" << std::endl;
        }
        else
        {
            std::cout << "time out" << std::endl;
        }
    }
    return 0;
}
The goal of this code is that each time the condition variable is notified from another thread (cv.notify()), it shows "has image" in the console, and if it is not notified for more than 1000 milliseconds, it shows "time out".
So the theoretical output of the above code is (because the condition variable is never notified):
time out
time out
time out
time out
But when I execute this code in VS2015, I find that the output is strange:
has image
time out
has image
time out
time out
time out
has image
has image
time out
time out
time out
time out
time out
has image
has image
I would like to know why I get this output and how I can achieve my goal.
Thanks!
I don't know what the cause of your error is (but there are some plausible explanations in the comments). However, one way to fix your issue is to use the other overload of wait_for, which includes a predicate.
It could look something like this (hasImage is just a bool here, replace it with something that makes sense for your needs - !imageStorage.empty() or similar):
while (true)
{
    if (cv.wait_for(uniqueLock, std::chrono::milliseconds(1000), [&]() { return hasImage; }))
    {
        std::cout << "has image" << std::endl;
        hasImage = false;
    }
    else
    {
        std::cout << "time out" << std::endl;
    }
}
The pertinent point is that the predicate checks if there actually is a new image, and if there isn't then it should continue to wait.
One limitation with this method is that, if the predicate returns false (no image), then you don't know if the condition variable woke due to a spurious wakeup, a timeout, or if there actually was an image but another thread just took it away before this one woke up. But if that is something your design can handle, then this variation works very well.
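Putting it together, a minimal self-contained sketch of the whole pattern (the producer thread and the hasImage flag are made up for illustration; in real code the producer would be whatever supplies your images):

#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

std::condition_variable cv;
std::mutex mutex;
bool hasImage = false; // protected by mutex

int main()
{
    // hypothetical producer that delivers an "image" every 1500 ms
    std::thread producer([] {
        for (int i = 0; i < 3; ++i) {
            std::this_thread::sleep_for(std::chrono::milliseconds(1500));
            {
                std::lock_guard<std::mutex> lock(mutex);
                hasImage = true;
            }
            cv.notify_one();
        }
    });

    std::unique_lock<std::mutex> uniqueLock(mutex);
    for (int i = 0; i < 6; ++i) {
        // the predicate guards against spurious wakeups
        if (cv.wait_for(uniqueLock, std::chrono::milliseconds(1000),
                        [] { return hasImage; })) {
            std::cout << "has image" << std::endl;
            hasImage = false;
        } else {
            std::cout << "time out" << std::endl;
        }
    }
    producer.join();
    return 0;
}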

wait_until behavior for time_point::max

On an embedded platform I ran into the issue that when waiting on a condition variable until time_point<clock>::max(), the program enters a busy loop, fully occupying a CPU core.
The program I am running is:
#include <mutex>
#include <condition_variable>
#include <chrono>
#include <iostream>

int main()
{
    std::mutex mutex;
    std::condition_variable condition;
    using namespace std::chrono;
    using clock = steady_clock;
    for (;;) {
        auto forever = time_point<clock>::max();
        std::unique_lock<std::mutex> lock(mutex);
        std::cout << "Now waiting" << std::endl;
        condition.wait_until(lock, forever);
        std::cout << "Now waking up" << std::endl;
    }
    return 0;
}
I was quite sure this was a bug, and running this on my host's compiler (g++ 4.7) the application behaved as I expected (blocking forever). When writing a bug report I wanted to attach an ideone sample demonstrating the issue, but ideone also runs into a busy loop:
http://ideone.com/XPy0Wn
Now I am unsure who is correct here. Is there a standard definition of how wait_until on a condition should behave when the second argument is time_point<clock>::max()?
You likely observe a (silly) conversion of steady clock to system clock time:
#include <chrono>
#include <ctime>
#include <iostream>

using namespace std::chrono;

time_t silly_steady_clock_to_time_t(steady_clock::time_point t)
{
    return system_clock::to_time_t(system_clock::now()
                                   + (t - steady_clock::now()));
}

int main()
{
    auto system_time = system_clock::to_time_t(system_clock::now());
    auto forever = time_point<steady_clock>::max();
    auto forever_time = silly_steady_clock_to_time_t(forever);
    std::cout << ctime(&forever_time) << '\n';
    std::cout << ctime(&system_time) << '\n';
    return 0;
}
Output:
Fri Jun 16 11:40:31 1724
Tue Sep 27 15:44:54 2016
Note: the converted forever_time lands in the past (the addition in the conversion overflows), so a wait_until using it returns immediately, which explains the busy loop.
A change of clock to using clock = system_clock; will fix the issue.
As mentioned in the comments, if you want to track it down, you should check the return value of the call to wait_until.
It can be either std::cv_status::timeout or std::cv_status::no_timeout.
By doing that, you'll be able to understand what's going on there.
As the standard says, the return value adheres to the following rule:
cv_status::timeout if the absolute timeout specified by abs_time expired, otherwise cv_status::no_timeout.
Moreover:
The function will unblock when signaled by a call to notify_one(), a call to notify_all(), expiration of the absolute timeout specified by abs_time, or spuriously.
Likely the last one is your case, and it's unlikely to be a bug.
You should rather look for the reasons that give place to those spurious wakeups.
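For instance, inside the loop of the original program it could look like this (same mutex, condition and forever as above):

std::unique_lock<std::mutex> lock(mutex);
std::cout << "Now waiting" << std::endl;
if (condition.wait_until(lock, forever) == std::cv_status::timeout)
    std::cout << "Woke up: timeout" << std::endl;    // the deadline was reported as expired
else
    std::cout << "Woke up: no_timeout" << std::endl; // a notify or a spurious wakeup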

C++ function runs perfectly in Windows but not Linux?

I am trying to write a simple C++ function sleep(int milliseconds) that will sleep the program for a user-specified number of milliseconds.
Here is my code:
#include <iostream>
#include <time.h>
using namespace std;

void sleep(unsigned int mseconds) {
    clock_t goal = mseconds + clock();
    while (goal > clock());
}

int main() {
    cout << "Hello World !" << endl;
    sleep(3000);
    cout << "Hello World 2" << endl;
}
The sleep() function works perfectly when I run this code on Windows but doesn't work on Linux. Can anyone figure out what's wrong with my code?
I don't know why everyone is dancing around your question instead of answering it.
You are attempting to implement your own sleep-like function, and your implementation, while it busy-waits instead of sleeping in kernel space (meaning that the processor will be "actively" running code to keep your program waiting, instead of telling the machine the program is sleeping so other code can run), is just fine.
The problem is that clock() is not required to return milliseconds. clock() returns the processor time used since the process was invoked, measured in ticks. What unit a tick represents depends on the implementation.
For instance, on my machine, this is what the man page says:
DESCRIPTION
The clock() function determines the amount of processor time used since
the invocation of the calling process, measured in CLOCKS_PER_SECs of a
second.
RETURN VALUES
The clock() function returns the amount of time used unless an error
occurs, in which case the return value is -1.
SEE ALSO
getrusage(2), clocks(7)
STANDARDS
The clock() function conforms to ISO/IEC 9899:1990 (``ISO C90'') and
Version 3 of the Single UNIX Specification (``SUSv3'') which requires
CLOCKS_PER_SEC to be defined as one million.
As you can see from the STANDARDS section, a tick is one one-millionth of a second, i.e. a microsecond (not a millisecond). To "sleep" for 3 seconds, you'll need to call your sleep(3000000) and not sleep(3000).
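If you want to keep your busy-wait approach but make it portable, scale by CLOCKS_PER_SEC instead of assuming the tick size; a sketch (still burning CPU, so prefer the sleep_for answer below for real code):

#include <time.h>

void sleep_busy(unsigned int mseconds)
{
    // convert milliseconds to clock ticks using the implementation's rate;
    // beware overflow for large values if clock_t is only 32 bits wide
    clock_t goal = clock() + (clock_t)mseconds * CLOCKS_PER_SEC / 1000;
    while (clock() < goal)
        ; // busy wait
}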
With C++11 you can use sleep_for.
#include <chrono>
#include <thread>

void sleep(unsigned int mseconds) {
    std::chrono::milliseconds dura(mseconds);
    std::this_thread::sleep_for(dura);
}
You can use the built-in sleep() function, which takes the delay in seconds, not milliseconds; you have to include the unistd.h header, since that is where sleep() is declared.
Try it:
#include <iostream>
#include <unistd.h>
using namespace std;

int main() {
    cout << "Hello World !" << endl;
    sleep(3); // wait for 3 seconds
    cout << "Hello World 2" << endl;
}
:P
There is no standard C API for millisecond sleeps on Linux, so you will have to use usleep or nanosleep. POSIX sleep takes seconds.
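If you need sub-second sleeps, the POSIX-blessed replacement is nanosleep, which takes a struct timespec and reports the unslept remainder when a signal interrupts it; a minimal sketch:

#include <errno.h>
#include <time.h>

struct timespec req = { 0, 500 * 1000000L }; /* 0 s, 500,000,000 ns = 500 ms */
while (nanosleep(&req, &req) == -1 && errno == EINTR)
    ; /* interrupted by a signal: keep sleeping for the remainder */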

boost thread and try_join_for gives different output each time

Suppose that I have the following code:
#include <boost/chrono.hpp>
#include <boost/thread.hpp>
#include <iostream>

int main()
{
    boost::thread thd([]{ std::cout << "str \n"; });
    boost::this_thread::sleep_for(boost::chrono::seconds(3));
    if (thd.try_join_for(boost::chrono::nanoseconds(1)))
    {
        std::cout << "Finished \n";
    }
    else
    {
        std::cout << "Running \n";
    }
}
MSVC-12.0 and Boost 1.55 give me different output each time I start this program. For example,
str
Finished
str
Finished
str
Running
When I change boost::chrono::nanoseconds to boost::chrono::microseconds, the output looks as expected.
Why? What am I doing wrong? Is it a bug in the Boost library? Is there a ticket about it in the Boost bug tracker?
Thanks in advance.
Your program simply has a race, most probably due to the fact that 1 nanosecond is awfully short.
try_join_for is implemented by calling try_join_until, a function that will attempt joining until a certain time point has been reached:
// I stripped some unrelated template stuff from the code
// to make it more readable
bool try_join_for(const chrono::duration& rel_time)
{
    return try_join_until(chrono::steady_clock::now() + rel_time);
}

bool try_join_until(const chrono::time_point& t)
{
    system_clock::time_point s_now = system_clock::now();
    bool joined = false;
    do {
        Clock::duration d = ceil<nanoseconds>(t - Clock::now());
        if (d <= Clock::duration::zero())
            return false; // in case the Clock::time_point t is already reached
        // only here we attempt to join for the first time:
        joined = try_join_until(s_now + d);
    } while (!joined);
    return true;
}
The problem now is that try_join_until will check whether the requested time_point has been reached before attempting the join. As you can see, it needs to perform two more calls to clock::now() and some computation to compare the obtained values against the deadline given by the user. This may or may not complete before the clock jumps past your 1 nanosecond deadline, resulting in the unpredictable output.
Be aware that timing-dependent code like this is generally fragile. Even with timeouts on the order of milliseconds, if you get preempted at a bad point during execution and there is high load on the CPU, you might miss a deadline in rare cases. So be sure to always choose your deadlines carefully, and never assume that a deadline will be big enough in all possible cases.
What is wrong with just calling .join()? If you insist you can check before you join:
#include <boost/chrono.hpp>
#include <boost/thread.hpp>
#include <iostream>

int main()
{
    boost::thread thd([]{ std::cout << "str\n"; });
    boost::this_thread::sleep_for(boost::chrono::seconds(3));
    if (thd.joinable())
        thd.join();
}
Note that the behaviour is Undefined anyway if you fail to join a thread before program exit. Use
futures,
condition variables or
semaphores
to signal job completion if that's what you were trying to monitor.
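For example, a minimal sketch of the same check done with a std::future instead of a timed join (Boost's futures offer a similar wait_for in recent versions):

#include <chrono>
#include <future>
#include <iostream>
#include <thread>

int main()
{
    auto fut = std::async(std::launch::async, []{ std::cout << "str\n"; });
    std::this_thread::sleep_for(std::chrono::seconds(3));
    // a zero timeout just asks "is the task finished?" without blocking
    if (fut.wait_for(std::chrono::seconds(0)) == std::future_status::ready)
        std::cout << "Finished\n";
    else
        std::cout << "Running\n";
}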

Take time in milliseconds

I have found the usleep function in unistd.h, and I thought it was useful for waiting some time before every action. But I have discovered that the thread only stays asleep as long as it doesn't receive a signal. For example, if I press a button (I'm using OpenGL, but the question is really about time.h and unistd.h), the thread gets woken up and I'm not getting what I want.
There is also the sleep function, which accepts an integer, but an integer number of seconds is too coarse (I want to wait 0.3 seconds), so I use usleep.
I'm asking if there is a function to get the time in milliseconds (from any GNU or other library). It should work like time(), but return milliseconds instead of seconds. Is that possible?
If you have boost you can do it this way:
#include <boost/thread.hpp>

int main()
{
    boost::this_thread::sleep(boost::posix_time::millisec(2000));
    return 0;
}
This simple example, as you can see in the code, sleeps for 2000ms.
Edit:
Ok, I thought I understood the question but then I read the comments and now I'm not so sure anymore.
Perhaps you want to get how many milliseconds have passed since some point/event? If that is the case then you could do something like:
#include <boost/chrono.hpp>
#include <boost/thread.hpp>
#include <iostream>

int main()
{
    boost::chrono::high_resolution_clock::time_point start =
        boost::chrono::high_resolution_clock::now();

    boost::this_thread::sleep(boost::posix_time::millisec(2000));

    boost::chrono::milliseconds ms =
        boost::chrono::duration_cast<boost::chrono::milliseconds>(
            boost::chrono::high_resolution_clock::now() - start);

    std::cout << "2000ms sleep took " << ms.count() << "ms" << "\n";
    return 0;
}
This is a cross-platform function I use:
unsigned Util::getTickCount()
{
#ifdef WINDOWS
    // GetTickCount() from windows.h: milliseconds since the system started
    return GetTickCount();
#else
    // gettimeofday() from sys/time.h: wall-clock time with microsecond resolution
    struct timeval tv;
    gettimeofday(&tv, 0);
    return unsigned((tv.tv_sec * 1000) + (tv.tv_usec / 1000));
#endif
}
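With C++11 available, the same thing can be written portably with std::chrono and no platform branches; a minimal sketch:

#include <chrono>

unsigned long long millisSinceEpoch()
{
    using namespace std::chrono;
    // system_clock tracks wall-clock time, like time() does
    return duration_cast<milliseconds>(
        system_clock::now().time_since_epoch()).count();
}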