C++ wait_until strange timeout behaviour

I'm trying to write a kind of thread pool in C++. The code works fine on OSX, but under Linux I'm experiencing strange behavior.
After a bit of debugging, I found that the problem is due to a call to std::condition_variable::wait_until that I must be using in the wrong way.
With the code below I expect the loop to run once every three seconds:
#include <mutex>
#include <chrono>
#include <iostream>
#include <memory>
#include <string>
#include <condition_variable>
#include <thread>

using namespace std;

typedef std::chrono::steady_clock my_clock;
typedef std::chrono::duration<float, std::ratio<1> > seconds_duration;
typedef std::chrono::time_point<my_clock, seconds_duration> timepoint;

timepoint my_begin = my_clock::now();

float timepointToFloat(timepoint time) {
    return time.time_since_epoch().count() - my_begin.time_since_epoch().count();
}

void printNow(std::string mess) {
    timepoint now = my_clock::now();
    cout << timepointToFloat(now) << " " << mess << endl;
}

void printNow(std::string mess, timepoint time) {
    timepoint now = my_clock::now();
    cout << timepointToFloat(now) << " " << mess << " " << timepointToFloat(time) << endl;
}

int main() {
    mutex _global_mutex;
    condition_variable _awake_global_execution;
    auto check_predicate = [](){
        cout << "predicate called" << endl;
        return false;
    };

    while (true) {
        { // Expected to loop every three seconds
            unique_lock<mutex> lock(_global_mutex);
            timepoint planned_awake = my_clock::now() + seconds_duration(3);
            printNow("wait until", planned_awake);
            _awake_global_execution.wait_until(lock, planned_awake, check_predicate);
        }
        printNow("finish wait, looping");
    }
    return 0;
}
However, sometimes I get this output:
<X> wait until <X+3>
predicate called
(...hangs here for a long time)
(where X is a number), so it seems the timeout is not scheduled after three seconds. Sometimes instead I get:
<X> wait until <X+3>
predicate called
predicate called
<X> finish wait, looping
<X> wait until <X+3> (another loop)
predicate called
predicate called
<X> finish wait, looping
(...continue looping without waiting)
so it seems the timeout is scheduled after a small fraction of a second. I think I'm messing up something with the timeout timepoint, but I cannot figure out what I'm doing wrong.
In case it's relevant: this code works fine on OSX, while on Linux (Ubuntu 16.04, gcc 5.4, compiled with "g++ main.cc -std=c++11 -pthread") I see the strange behavior.
How can I get it to work?

Try casting your timeout to your clock's native duration:
auto planned_awake = my_clock::now() +
    std::chrono::duration_cast<my_clock::duration>(seconds_duration(3));
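The likely reason the cast helps (my reading, not spelled out above): duration<float> has only a 24-bit mantissa, so once the clock's time_since_epoch grows large, a float-based time_point can no longer represent nearby instants exactly. During the conversions wait_until performs internally, the deadline can then round into the past (the busy loop) or far beyond three seconds (the hang). Applied to the loop in the question, a sketch of the fix looks like this, keeping the float representation only for printing:

unique_lock<mutex> lock(_global_mutex);
// Build the deadline in the clock's native integral representation...
auto planned_awake = my_clock::now()
    + std::chrono::duration_cast<my_clock::duration>(seconds_duration(3));
// ...and convert to the float-based timepoint only for display.
printNow("wait until", std::chrono::time_point_cast<seconds_duration>(planned_awake));
_awake_global_execution.wait_until(lock, planned_awake, check_predicate);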

Related

Vector push_backs done in multiple threads (within 20 sec) don't show correct vector size after wait_until(20 sec) in first thread

Based on the boost chat example and the wait_until example, I'm trying to add messages to a vector (a shared object) whenever a constructor is called,
and to report the size of the vector after a wait of 20 seconds.
What happens below when 2 CreateBlock calls happen within 20 seconds
is that the first call of CreateBlock waits 20 seconds and then the size of the vector is 1;
then there is another 20-second wait, also ending with a vector size of 1.
CreateBlock call on the server, called asynchronously from 2 different clients:
CreateBlock cb(message_j);
The class:
#include <condition_variable>
#include <mutex>
#include <thread>
#include <chrono>
#include <vector>
#include <iostream>
#include "json.hpp"

class CreateBlock
{
public:
    CreateBlock(nlohmann::json &message_j)
    {
        std::thread th(&CreateBlock::waits, this, std::ref(message_j));
        th.join();
    }

private:
    void waits(nlohmann::json &message_j)
    {
        using namespace std::chrono_literals; // for the 20s literal (C++14)
        {
            std::lock_guard<std::mutex> guard(cv_m);
            message_j_vec_.push_back(message_j);
            std::cout << "message_j_vec_.size(): " << message_j_vec_.size() << std::endl; // = 1
        }
        std::unique_lock<std::mutex> lk(cv_m);
        if (cv.wait_for(lk, 20s, [=]{ return i == 1; }))
        {
            std::cerr << "Thread finished waiting. i == " << i << '\n';
        }
        else
        {
            std::cerr << "Thread timed out. i == " << i << '\n';
            std::cout << "message_j_vec_.size(): " << message_j_vec_.size() << std::endl; // = 1
        }
    }

    std::vector<nlohmann::json> message_j_vec_;
    std::condition_variable cv;
    std::mutex cv_m;
    int i = 0; // never set to 1, so the wait always times out
};
I tried making the vector static, putting the vector in a separate class,
doing CreateBlock* cb = new CreateBlock(message_j), putting the push_back in the constructor and the thread creation in another method, ...
but none of those experiments led to the desired result.
Any idea how to solve this correctly?
Tia.
Well, there was not really a response here, so I knew I had to look at the problem differently than I presented it. So I looked into an earlier step of the code and then worked towards a solution with a static variable.
I finally succeeded in getting the correct size in the place where I wanted it.
Thanks for pointing me in the right direction by remaining silent! Not the conventional way, but it worked. Took me a good week, though!!
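For what it's worth, the root cause is visible in the class above: message_j_vec_ is a non-static member, so every CreateBlock instance gets its own vector and each one only ever sees its own single push_back. Below is a minimal sketch of the static-member direction the self-answer alludes to; std::string stands in for nlohmann::json purely to keep the sketch self-contained.

#include <cstddef>
#include <mutex>
#include <string>
#include <vector>
#include <iostream>

// All instances share one vector, so the count survives
// across separate CreateBlock calls.
class CreateBlock
{
public:
    explicit CreateBlock(const std::string& message)
    {
        std::lock_guard<std::mutex> guard(mutex_);
        messages_.push_back(message);
    }

    static std::size_t size()
    {
        std::lock_guard<std::mutex> guard(mutex_);
        return messages_.size();
    }

private:
    static std::vector<std::string> messages_; // one vector for the whole program
    static std::mutex mutex_;
};

std::vector<std::string> CreateBlock::messages_;
std::mutex CreateBlock::mutex_;

int main()
{
    CreateBlock a("first");
    CreateBlock b("second");
    std::cout << "size: " << CreateBlock::size() << std::endl; // prints 2
}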

How to add a delay to code in C++

I want to add a delay so that one line will run and then, after a short delay, the second one will run. I'm fairly new to C++, so I'm not sure how to do this at all. Ideally, the code below would print "Loading...", wait at least 1-2 seconds, and then print "Loading..." again. Currently it prints both instantly.
cout << "Loading..." << endl;
// The delay would be between these two lines.
cout << "Loading..." << endl;
In C++11 you can use <thread> and <chrono> to do it (note: the 2s literal and std::chrono_literals require C++14):
#include <chrono>
#include <thread>
...
using namespace std::chrono_literals;
...
std::this_thread::sleep_for(2s);
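Put together as a complete program matching the snippet from the question (a minimal sketch; under plain C++11, replace 2s with std::chrono::seconds(2)):

#include <chrono>
#include <iostream>
#include <thread>

int main() {
    using namespace std::chrono_literals; // enables the 2s literal (C++14)
    std::cout << "Loading..." << std::endl;
    std::this_thread::sleep_for(2s); // pause for two seconds
    std::cout << "Loading..." << std::endl;
}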
to simulate a 'work-in-progress report', you might consider:
// start a thread to do some work
m_thread = std::thread(work, std::ref(*this));

// work-in-progress report
std::cout << "\n\n ... " << std::flush;
for (int i = 0; i < 10; ++i) // count down for 10 seconds
{
    std::this_thread::sleep_for(1s);
    std::cout << (9 - i) << '_' << std::flush;
}
m_work = false;  // command the worker thread to end
m_thread.join(); // wait for it to end
With output:
... 9_8_7_6_5_4_3_2_1_0_
work abandoned after 10,175,240 us
Overview: the method 'work' did not finish on its own; it received the command to abandon the operation and exited at the timeout (a successful test).
The code uses <chrono> and std::chrono_literals.
On Windows:
#include <windows.h>
Sleep( sometime_in_millisecs ); // note uppercase S
On Unix-based systems:
#include <unistd.h>
unsigned int sleep(unsigned int seconds); // suspend execution for whole seconds
int usleep(useconds_t usec);              // usleep: suspend execution for microsecond intervals
You want the sleep(unsigned int seconds) function from unistd.h. Call this function between the cout statements.
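For example (POSIX only, so this variant won't compile on Windows):

#include <iostream>
#include <unistd.h> // POSIX sleep()

int main() {
    std::cout << "Loading..." << std::endl;
    sleep(2); // suspend this thread for 2 seconds
    std::cout << "Loading..." << std::endl;
}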

C++ running timer in the background?

I've recently started trying C++. In the thing I'm making, I want a variable that starts at 20 to be decremented by 1 every second, but I also need the machine to be waiting for input from the user. I tried using for loops, but they won't proceed until the input is entered or until the variable runs out. I looked at clocks, but they don't seem to fit my need, or maybe I just misunderstood their purpose.
Any suggestions?
As has already been suggested in the comments, threading is one way to do this. There is a nice self-contained example here (which I've borrowed from in the code below).
In the code below, an asynchronous function is launched (details on these here). This returns a future object which will contain the result once the job has finished.
In this case the job is listening to cin (typically the terminal input) and will return when some data is entered (i.e. when enter is pressed).
In the meantime the while loop runs, keeping track of how much time has passed, decrementing the counter, and also returning if the asynchronous job finishes. It wasn't clear from your question whether this is exactly the behaviour you want, but it gives you the idea: it prints the value of the decremented variable, yet the user can enter text, which is printed out once the user presses enter.
#include <iostream>
#include <thread>
#include <future>
#include <chrono>

int main() {
    // Enable standard literals such as 5ms and ""s.
    using namespace std::literals;

    // Execute a lambda asynchronously (waiting for user input).
    auto f = std::async(std::launch::async, [] {
        auto s = ""s;
        std::cin >> s;
        return s; // always return, even if cin fails
    });

    // Continue execution in the main thread: run the countdown and timer.
    int countdown = 20;
    int countdownPrev = 0;
    std::chrono::steady_clock::time_point begin = std::chrono::steady_clock::now();
    std::chrono::steady_clock::time_point end;
    double elapsed;

    while ((f.wait_for(5ms) != std::future_status::ready) && countdown >= 0) {
        end = std::chrono::steady_clock::now();
        elapsed = std::chrono::duration_cast<std::chrono::milliseconds>(end - begin).count();
        countdown = 20 - (int) (elapsed / 1000);
        if (countdown != countdownPrev) {
            std::cout << "Counter now: " << std::fixed << countdown << std::endl;
            countdownPrev = countdown;
        }
    }

    if (countdown == -1) {
        std::cout << "Countdown elapsed" << std::endl;
        return -1;
    } else {
        std::cout << "Input was: " << f.get() << std::endl;
        return 0;
    }
}
P.S. To get this to work on my compiler I have to compile it with g++ -pthread -std=c++14 file_name.cpp to correctly link the threading library and allow the use of C++14 features.

Different behavior of boost::condition_variable under VC++ and GCC

On my computer, running on Windows 7, the following code, compiled in Visual C++ 2010 with Boost 1.53, outputs
no timeout
elapsed time (ms): 1000
The same code compiled with GCC 4.8 (online link) outputs
timeout
elapsed time (ms): 1000
My opinion is that the VC++ output is not correct and it should be timeout. Does anyone have the same output (i.e. no timeout) in VC++? If yes, then is it a bug in the Win32 implementation of boost::condition_variable?
The code is
#include <boost/thread.hpp>
#include <iostream>

int main(void) {
    boost::condition_variable cv;
    boost::mutex mx;
    boost::unique_lock<decltype(mx)> lck(mx);

    boost::chrono::system_clock::time_point start = boost::chrono::system_clock::now();
    const auto cv_res = cv.wait_for(lck, boost::chrono::milliseconds(1000));
    boost::chrono::system_clock::time_point end = boost::chrono::system_clock::now();

    const auto count = (boost::chrono::duration_cast<boost::chrono::milliseconds>(end - start)).count();
    const std::string str = (cv_res == boost::cv_status::no_timeout) ? "no timeout" : "timeout";
    std::cout << str << std::endl;
    std::cout << "elapsed time (ms): " << count << std::endl;
    return 0;
}
If we read the documentation we see:
Atomically call lock.unlock() and blocks the current thread. The
thread will unblock when notified by a call to this->notify_one() or
this->notify_all(), after the period of time indicated by the rel_time
argument has elapsed, or spuriously... [Emphasis mine]
What you are almost certainly seeing is that the VS implementation treats it as a spurious wakeup that happens to occur right at the end of the expected duration, while the other implementation treats it as a timeout.
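If your code needs to behave identically on both implementations, one portable option (my suggestion, not part of the original answer) is to decide "timed out" from the clock rather than trusting cv_status alone:

// Sketch: re-check an absolute deadline after waking, so a spurious
// wakeup at the deadline is still reported as a timeout.
const boost::chrono::system_clock::time_point deadline =
    boost::chrono::system_clock::now() + boost::chrono::milliseconds(1000);

boost::cv_status cv_res = cv.wait_until(lck, deadline);
const bool timed_out = (cv_res == boost::cv_status::timeout) ||
                       (boost::chrono::system_clock::now() >= deadline);
std::cout << (timed_out ? "timeout" : "no timeout") << std::endl;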

boost deadline_timer not waiting

I tried using the boost deadline_timer in this simple test application, but ran into some trouble. The goal is for the timer to trigger every 45 milliseconds using the expires_at() member function of the deadline_timer. (I need an absolute time, so I'm not considering expires_from_now(); I'm also not concerned about drift at the moment.) When I run the program, wait() does not wait for 45 ms! Yet no errors are reported. Am I using the library incorrectly somehow?
Sample program:
#include <boost/asio.hpp>
#include <boost/thread.hpp>
#include <boost/bind.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
#include <iostream>

using namespace std;

int main()
{
    boost::asio::io_service Service;
    boost::shared_ptr<boost::thread> Thread;
    boost::asio::io_service::work RunForever(Service);
    Thread = boost::shared_ptr<boost::thread>(new boost::thread(boost::bind(&boost::asio::io_service::run, &Service)));
    boost::shared_ptr<boost::asio::deadline_timer> Timer(new boost::asio::deadline_timer(Service));

    while (1)
    {
        boost::posix_time::time_duration Duration;
        Duration = boost::posix_time::microseconds(45000);
        boost::posix_time::ptime Start = boost::posix_time::microsec_clock::local_time();
        boost::posix_time::ptime Deadline = Start + Duration;
        boost::system::error_code Error;
        size_t Result = Timer->expires_at(Deadline, Error);
        cout << Result << ' ' << Error << ' ';
        Timer->wait(Error);
        cout << Error << ' ';
        boost::posix_time::ptime End = boost::posix_time::microsec_clock::local_time();
        (cout << "Duration = " << (End - Start).total_milliseconds() << " milliseconds" << endl).flush();
    }
    return 0;
}
You are mixing local time with system time. The time that asio compares your deadline against is most likely some number of hours away from the local time you set, so wait returns immediately (depending on where you live, this same code could instead wait for several hours). To avoid this point of confusion, absolute times should be derived from asio::time_traits.
#include <boost/asio.hpp>
#include <boost/asio/time_traits.hpp>
#include <boost/thread.hpp>
#include <boost/bind.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
#include <iostream>

using namespace std;

typedef boost::asio::time_traits<boost::posix_time::ptime> time_traits_t;

int main() {
    boost::asio::io_service Service;
    boost::shared_ptr<boost::thread> Thread;
    boost::asio::io_service::work RunForever(Service);
    Thread = boost::shared_ptr<boost::thread>(new boost::thread(boost::bind(&boost::asio::io_service::run, &Service)));
    boost::shared_ptr<boost::asio::deadline_timer> Timer(new boost::asio::deadline_timer(Service));

    while (1)
    {
        boost::posix_time::time_duration Duration;
        Duration = boost::posix_time::microseconds(45000);
        boost::posix_time::ptime Start = time_traits_t::now();
        boost::posix_time::ptime Deadline = Start + Duration;
        boost::system::error_code Error;
        size_t Result = Timer->expires_at(Deadline, Error);
        cout << Result << ' ' << Error << ' ';
        Timer->wait(Error);
        cout << Error << ' ';
        boost::posix_time::ptime End = time_traits_t::now(); // same clock as Start
        (cout << "Duration = " << (End - Start).total_milliseconds() << " milliseconds" << endl).flush();
    }
    return 0;
}
That should work out for you in this case.
You are mixing the asynchronous method io_service::run with the synchronous method deadline_timer::wait. This will not work. Either use deadline_timer::async_wait with io_service::run, or skip io_service::run and just use deadline_timer::wait. You also don't need a separate thread to invoke io_service::run if you go the asynchronous route; one thread will do just fine. Both concepts are explained in detail in the Basic Skills section of the Asio tutorial.
#include <boost/asio.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
#include <iostream>

void print(const boost::system::error_code& /*e*/)
{
    std::cout << "Hello, world!\n";
}

int main()
{
    boost::asio::io_service io;
    boost::asio::deadline_timer t(io, boost::posix_time::seconds(5));
    t.async_wait(print);
    io.run(); // blocks until the timer fires and print() has run
    return 0;
}
Note you will need to give some work for your io_service to service prior to invoking run(). In this example, async_wait is that work.
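And a minimal sketch of the synchronous alternative mentioned above: no io_service::run and no extra thread, since wait() simply blocks the caller until the expiry time.

#include <boost/asio.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
#include <iostream>

int main()
{
    boost::asio::io_service io;
    boost::asio::deadline_timer t(io, boost::posix_time::milliseconds(45));
    t.wait(); // blocks the calling thread for the full 45 ms
    std::cout << "timer expired\n";
    return 0;
}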
Potentially unrelated: 45 ms is quite a small delta. In my experience, the smallest time for any handler to make it through the Asio epoll reactor queue is around 30 ms, and this can be considerably longer under higher load, though it all largely depends on your application.