std::queue::empty() not working? - c++

I'm going crazy with this piece of code. I have a thread that calls regularly to this method:
void deliverMsgQ() {
    if(delMsgQ_mutex.try_lock() == false){
        return;
    }
    while(delMsgQ.empty() == false){
        std::vector<unsigned char> v = delMsgQ.front();
        delMsgQ.pop();
    }
    delMsgQ_mutex.unlock();
}

void processInMsgQ() {
    if(inMsgQ_mutex.try_lock()){
        if(delMsgQ_mutex.try_lock() == false){
            inMsgQ_mutex.unlock();
        }
    }else{
        return;
    }
    while(!inMsgQ.empty()){
        std::vector<unsigned char> msg;
        inMsgQ.front()->getData(msg);
        std::cout << "Adding to del-msg-q: " << msg.size() << std::endl;
        delMsgQ.push(msg);
        delete inMsgQ.front();
        inMsgQ.pop();
    }
    inMsgQ_mutex.unlock();
    delMsgQ_mutex.unlock();
}
I have another thread pushing vectors onto the queue, also periodically. These two threads are the only ones that touch the queue delMsgQ.
My problem is in the first function posted: for some reason delMsgQ.empty() at some point returns false even though the queue has no vectors in it, and therefore I end up calling pop() twice. This makes size() return a huge, unrealistic number and then the program crashes with a segmentation fault. I can work around it by adding an extra check right before calling pop(), but I would expect one check to be enough, since I'm also using mutexes. The other possibility is that I'm using the mutexes wrong, but as far as I know this is the proper way to use them in this case. I was hoping someone could let me know if there is something I'm missing. I hope this code is enough; I can provide more if necessary, although no other function touches the queue that is failing.
best regards

Your code in processInMsgQ() (spaced out a little better but functionally identical) is problematic:
if (inMsgQ_mutex.try_lock()) {
    if (delMsgQ_mutex.try_lock() == false) {
        // Point A
        inMsgQ_mutex.unlock();
        // Point B.
    }
} else {
    return;
}
// Point C.
// Point C.
In the case where it locks inMsgQ_mutex but fails to lock delMsgQ_mutex (point A), it will free the first and then drop through to point C. This means you will be doing stuff that requires both locks without either lock, and that's unlikely to end well :-)
As a solution, you could put another return at point B but the following code is probably cleaner:
// If either lock fails, return ensuring that neither is locked.
if (! inMsgQ_mutex.try_lock()) {
    return;
}
if (! delMsgQ_mutex.try_lock()) {
    inMsgQ_mutex.unlock();
    return;
}
// At this point, you have both locks. Carry on ...
You'll notice I've also changed your some_boolean == false to the more usual ! some_boolean. That's the more accepted way of doing that particular check.
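If you have C++11 available, you can also let RAII do the unlocking so that no early return (or exception) can leave a mutex held. A minimal sketch along those lines, reusing the mutex names from the question but not meant as a drop-in replacement:
    #include <mutex>

    std::mutex inMsgQ_mutex, delMsgQ_mutex;    // as in the question

    void processInMsgQ() {
        // std::try_to_lock makes the constructor attempt the lock without blocking.
        std::unique_lock<std::mutex> inLock(inMsgQ_mutex, std::try_to_lock);
        if (!inLock.owns_lock())
            return;

        std::unique_lock<std::mutex> delLock(delMsgQ_mutex, std::try_to_lock);
        if (!delLock.owns_lock())
            return;                            // inLock is released automatically here

        // ... drain inMsgQ into delMsgQ exactly as before ...
    }                                          // both mutexes released on every path
The unlock happens in the std::unique_lock destructor, which is exactly the guarantee the manual unlock() calls make easy to get wrong.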

Related

Use of goto in this very specific case... alternatives?

I have a question about the possible use of goto in C++ code: I know that goto should be avoided as much as possible, but in this very particular case I'm having some difficulty finding good alternatives that avoid multiple nested if-else blocks and/or additional boolean flags...
The code is like the following one (only the relevant parts are reported):
// ... do initializations, variable declarations, etc...
while(some_flag) {
    some_flag=false;
    if(some_other_condition) {
        // ... do few operations (20 lines of code)
        return_flag=foo(input_args); // Function that can find an error, returning false
        if(!return_flag) {
            // Print error
            break; // jump out of the main while loop
        }
        // ... do other more complex operations
    }
    index=0;
    while(index<=SOME_VALUE) {
        // ... do few operations (about 10 lines of code)
        return_flag=foo(input_args); // Function that can find an error, returning false
        if(!return_flag) {
            goto end_here; // <- 'goto' statement
        }
        // ... do other more complex operations (including some if-else and the possibility to set some_flag to true or leave it false)
        // ... get a "value" to be compared with a saved one in order to decide whether to continue looping or not
        if(value<savedValue) {
            // Do other operations (about 20 lines of code)
            some_flag=true;
        }
        // ... handle 'index'
        it++; // Increase the number of iterations
    }
    // ... when going out of the while loop, some other operations must be done, at the moment no matter the value of some_flag
    return_flag=foo(input_args);
    if(!return_flag) {
        goto end_here; // <- 'goto' statement
    }
    // ... other operations here
    // ... get a "value" to be compared with a saved one in order to decide whether to continue looping or not
    if(value<savedValue) {
        // Do other operations (about 20 lines of code)
        some_flag=true;
    }
    // Additional termination constraint
    if(it>MAX_ITERATIONS) {
        some_flag=false;
    }
end_here:
    // The code after end_here checks some_flag and executes some operations that must always be done,
    // no matter whether we arrive here via 'goto' or via normal execution.
}
}
// ...
Every time foo() returns false, no further operations should be executed, and the code should reach the final operations as soon as possible. Another requirement is that this code, mainly the part inside while(index<=SOME_VALUE), must run as fast as possible for good overall performance.
Is using a 'try/catch' block, with the try{} enclosing lots of code (even though foo(), the only function that can generate errors, is called at just two distinct points), a possible alternative? Is it better in this case to use separate 'try/catch' blocks?
Are there other better alternatives?
Thanks a lot in advance!
Three obvious choices:
Stick with goto
Associate the cleanup code with the destructor of some RAII class. (You can probably express it as the deleter of a std::unique_ptr, written as a lambda.)
Rename your function to foo_internal, and change it to just return. Then write the cleanup in a new foo function which calls foo_internal.
So:
return_t foo(Args... args) {
    const auto result = foo_internal(args...);
    // cleanup
    return result;
}
In general, your function looks too long, and needs decomposing into smaller bits.
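As an illustration of the second option, here is a minimal sketch of the unique_ptr-with-lambda-deleter idea; the dummy variable and the placeholder comments are mine, not from the original code:
    #include <memory>

    void foo(/* input_args */) {
        int dummy = 0;
        // The deleter runs when 'guard' goes out of scope, i.e. on every return path.
        auto cleanup = [](int*) {
            // ... the code that currently lives after end_here: ...
        };
        std::unique_ptr<int, decltype(cleanup)> guard(&dummy, cleanup);

        // ... body of the function; simply 'return' wherever foo() reports an error,
        //     and the cleanup above still runs ...
    }
The same effect can be had with a tiny hand-written RAII struct if the unique_ptr trick feels too clever.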
One way you can do it is to use another dummy loop and break like so
int state = FAIL_STATE;
do {
    if(!operation()) {
        break;
    }
    if(!other_operation()) {
        break;
    }
    // ...
    state = OK_STATE;
} while(false);
// check for state here and do necessary cleanups
That way you can avoid deep nesting levels in your code beforehand.
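For completeness, the same pattern as a self-contained program; operation(), other_operation() and the state constants are placeholders made up for this sketch:
    #include <cstdio>

    enum { FAIL_STATE, OK_STATE };

    bool operation()       { return true;  }   // placeholder step
    bool other_operation() { return false; }   // pretend this one fails

    int main() {
        int state = FAIL_STATE;
        do {
            if (!operation())       break;
            if (!other_operation()) break;      // jumps straight to the cleanup below
            // ...
            state = OK_STATE;
        } while (false);

        // single cleanup / error-handling point
        std::printf("state = %s\n", state == OK_STATE ? "OK" : "FAIL");
    }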
It's C++! Use exceptions for non-local jumps:
try {
    if(some_result() < threshold) throw false;
}
catch(bool) {
    handleErrors();
}
// Here follows mandatory cleanup for both successes and failures
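A slightly fuller sketch of that idea applied to the question, using a dedicated exception type instead of throwing a bool; foo_error and the stub foo() are made up for illustration:
    #include <iostream>
    #include <stdexcept>

    struct foo_error : std::runtime_error {
        using std::runtime_error::runtime_error;
    };

    bool foo() { return true; }   // stand-in for the question's foo()

    int main() {
        try {
            if (!foo()) throw foo_error("first call failed");
            // ... the loop and the other operations would go here ...
            if (!foo()) throw foo_error("second call failed");
        } catch (const foo_error& e) {
            std::cout << e.what() << '\n';   // print the error
        }
        // mandatory final operations, reached on success and failure alike
    }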

What's a proper way to use set_alert_notify to wake up main thread?

I'm trying to write my own torrent program based on libtorrent rasterbar and I'm having problems getting the alert mechanism working correctly. Libtorrent offers function
void set_alert_notify (boost::function<void()> const& fun);
which is supposed to
The intention of the function is that the client wakes up its main thread to poll for more alerts using pop_alerts(). If the notify function fails to do so, it won't be called again until pop_alerts is called for some other reason.
So far so good; I think I understand the intention behind this function. However, my actual implementation doesn't work so well. My code so far is like this:
std::unique_lock<std::mutex> ul(_alert_m);
session.set_alert_notify([&]() { _alert_cv.notify_one(); });
while (!_alert_loop_should_stop) {
    if (!session.wait_for_alert(std::chrono::seconds(0))) {
        _alert_cv.wait(ul);
    }
    std::vector<libtorrent::alert*> alerts;
    session.pop_alerts(&alerts);
    for (auto alert : alerts) {
        LTi_ << alert->message();
    }
}
However, there is a race condition: if wait_for_alert returns NULL (no alerts yet) but the function passed to set_alert_notify is called before _alert_cv.wait(ul);, the whole loop waits forever (because of the second sentence of the quote).
For the moment my solution is just changing _alert_cv.wait(ul); to _alert_cv.wait_for(ul, std::chrono::milliseconds(250));, which keeps the number of loop iterations per second low enough while keeping latency acceptable.
But it's really more a workaround than a solution, and I keep thinking there must be a proper way to handle this.
You need a variable that records the notification, protected by the same mutex that the condition variable uses. That way a notification that arrives before you start waiting is remembered rather than lost:
bool _alert_pending;

session.set_alert_notify([&]() {
    std::lock_guard<std::mutex> lg(_alert_m);
    _alert_pending = true;
    _alert_cv.notify_one();
});

std::unique_lock<std::mutex> ul(_alert_m);
while(!_alert_loop_should_stop) {
    _alert_cv.wait(ul, [&]() {
        return _alert_pending || _alert_loop_should_stop;
    });
    if(_alert_pending) {
        _alert_pending = false;
        ul.unlock();
        session.pop_alerts(...);
        ...
        ul.lock();
    }
}
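The same pattern in a self-contained form, with the libtorrent pieces replaced by a generic notifier; every name below is illustrative rather than part of the library:
    #include <chrono>
    #include <condition_variable>
    #include <iostream>
    #include <mutex>
    #include <thread>

    std::mutex m;
    std::condition_variable cv;
    bool pending = false;        // records a notification even if nobody is waiting yet
    bool stop    = false;

    void notify() {                              // called from the producer thread
        std::lock_guard<std::mutex> lg(m);
        pending = true;
        cv.notify_one();
    }

    void consumer() {
        std::unique_lock<std::mutex> ul(m);
        while (!stop) {
            cv.wait(ul, [] { return pending || stop; });   // no lost wake-ups
            if (pending) {
                pending = false;
                ul.unlock();
                std::cout << "handling work\n";            // poll/handle work without the lock
                ul.lock();
            }
        }
    }

    int main() {
        std::thread t(consumer);
        notify();                                              // produce one piece of work
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
        { std::lock_guard<std::mutex> lg(m); stop = true; }
        cv.notify_one();
        t.join();
    }
Because the flag is set while holding the mutex, a notification that fires before the consumer reaches wait() is still seen by the predicate.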

How to properly delete pointers in std::stack?

So, I'm curious about this thing I can't figure out.
I'm creating some new objects and passing them to a function which stores them in a std::stack.
However, when I want to delete them they do not actually get deleted, and as such, memory usage climbs "forever" with my test loop.
Why?
bool StateMachine::changeState(BaseState *state) {
    if (state == nullptr) {
        delete states.top();
        states.pop();
        if (states.size() == 0) {
            return false;
        }
    } else if (state != states.top()) {
        states.push(state);
    }
    return true;
}
Test loop:
while (true) {
    machine.changeState(new MenuState);
    machine.changeState(nullptr);
}
Using a std::unique_ptr instead of a raw pointer works, and RAM usage stays constant, but I still want to know why.
Cheers!
Your code should be correct given the preconditions you've mentioned, but note that deleting objects does not necessarily return memory to the operating system, especially if the allocations leave holes in the heap. So check two things: whether the memory growth eventually levels off, and whether there is a leak somewhere else, for example inside BaseState.
If you're in doubt about the preconditions, add an else clause to your if and print something there. It should never happen, but if it does, there may be a problem with the call to states.top().
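For reference, a minimal sketch of the std::unique_ptr variant mentioned in the question, with BaseState/MenuState reduced to stubs (an illustration, not the original class):
    #include <memory>
    #include <stack>

    struct BaseState { virtual ~BaseState() = default; };
    struct MenuState : BaseState {};

    class StateMachine {
        std::stack<std::unique_ptr<BaseState>> states;
    public:
        bool changeState(std::unique_ptr<BaseState> state) {
            if (!state) {
                if (!states.empty()) states.pop();   // the unique_ptr deletes the state for us
                return !states.empty();
            }
            if (states.empty() || state.get() != states.top().get()) {
                states.push(std::move(state));
            }
            return true;
        }
    };

    int main() {
        StateMachine machine;
        for (int i = 0; i < 1000; ++i) {             // memory usage stays flat
            machine.changeState(std::make_unique<MenuState>());
            machine.changeState(nullptr);
        }
    }
Ownership now lives in the stack itself, so popping an element is what destroys the state; there is no separate delete to forget.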

std::function in combination with thread c++11 fails debug assertion in vector

I want to build a helper class that can accept a std::function (created via std::bind) so that I can call it repeatedly from another thread:
short example:
void loopme() {
    std::cout << "yay";
}

int main() {
    LoopThread loop = { std::bind(&loopme) };
    loop.start();
    // wait 1 second
    loop.stop();
    // be happy about output
}
However, when calling stop(), my current implementation raises the following error: "Debug Assertion Failed" (see image: i.stack.imgur.com/aR9hP.png).
Does anyone know why this error is thrown?
I don't even use vectors in this example.
When I don't call loopme from within the thread but write directly to std::cout, no error is thrown.
Here the full implementation of my class:
class LoopThread {
public:
    LoopThread(std::function<void(LoopThread*, uint32_t)> function) : function_{ function }, thread_{ nullptr }, is_running_{ false }, counter_{ 0 } {};
    ~LoopThread();
    void start();
    void stop();
    bool isRunning() { return is_running_; };
private:
    std::function<void(LoopThread*, uint32_t)> function_;
    std::thread* thread_;
    bool is_running_;
    uint32_t counter_;
    void executeLoop();
};

LoopThread::~LoopThread() {
    if (isRunning()) {
        stop();
    }
}

void LoopThread::start() {
    if (is_running_) {
        throw std::runtime_error("Thread is already running");
    }
    if (thread_ != nullptr) {
        throw std::runtime_error("Thread is not stopped yet");
    }
    is_running_ = true;
    thread_ = new std::thread{ &LoopThread::executeLoop, this };
}

void LoopThread::stop() {
    if (!is_running_) {
        throw std::runtime_error("Thread is already stopped");
    }
    is_running_ = false;
    thread_->detach();
}

void LoopThread::executeLoop() {
    while (is_running_) {
        function_(this, counter_);
        ++counter_;
    }
    if (!is_running_) {
        std::cout << "end";
    }
    //delete thread_;
    //thread_ = nullptr;
}
I used the following Googletest code for testing (however a simple main method containing the code should work):
void testfunction(pft::LoopThread*, uint32_t i) {
    std::cout << i << ' ';
}

TEST(pfFiles, TestLoop)
{
    pft::LoopThread loop{ std::bind(&testfunction, std::placeholders::_1, std::placeholders::_2) };
    loop.start();
    std::this_thread::sleep_for(std::chrono::milliseconds(500));
    loop.stop();
    std::this_thread::sleep_for(std::chrono::milliseconds(2500));
    std::cout << "Why does this fail";
}
Your use of is_running_ is undefined behavior, because you write in one thread and read in another without a synchronization barrier.
Partly due to this, your stop() doesn't stop anything. Even without the UB (i.e. if you "fix" it by using an atomic), it merely asks the loop to "stop at some point"; it does not even attempt to guarantee that the stop has happened by the time it returns.
Your code calls new needlessly. There is no reason to use a std::thread* here.
Your code violates the rule of 5. You wrote a destructor, then neglected copy/move operations. It is ridiculously fragile.
As stop() does nothing of consequence to stop the thread, the thread, which holds a pointer to this, outlives your LoopThread object. LoopThread goes out of scope, destroying the object that the pointer stored in your std::thread points to. The still-running executeLoop then invokes a std::function that has been destroyed, and increments a counter in memory it no longer owns (possibly on the stack, where another variable now lives).
Roughly, there is 1 fundamental error in using std threading in every 3-5 lines of your code (not counting interface declarations).
Beyond the technical errors, the design is wrong as well; using detach is almost always a horrible idea; unless you have a promise you make ready at thread exit and then wait on the completion of that promise somewhere, doing that and getting anything like a clean and dependable shutdown of your program is next to impossible.
As a guess, the vector error is because you are stomping all over stack memory and following nearly invalid pointers to find functions to execute. The test system either puts an array index in the spot you are trashing and then the debug vector catches that it is out of bounds, or a function pointer that half-makes sense for your std function execution to run, or somesuch.
Only communicate through synchronized data between threads. That means atomic data, or mutex guarded, unless you are getting ridiculously fancy. You don't understand threading enough to get fancy. You don't understand threading enough to copy someone who got fancy and properly use it. Don't get fancy.
Don't use new. Almost never, ever use new. Use make_shared or make_unique if you absolutely have to. But use those rarely.
Don't detach a thread. Period. Yes this means you might have to wait for it to finish a loop or somesuch. Deal with it, or write a thread manager that does the waiting at shutdown or somesuch.
Be extremely clear about what data is owned by what thread. Be extremely clear about when a thread is finished with data. Avoid sharing mutable data between threads; communicate by passing values (or pointers to immutable shared data), and get information back through std::futures (a minimal sketch follows this list).
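As an illustration of that last point, a minimal sketch of handing a worker its own copy of the data and getting the result back through a std::future; nothing here comes from the original code:
    #include <future>
    #include <iostream>
    #include <numeric>
    #include <vector>

    int main() {
        std::vector<int> data{1, 2, 3, 4};                     // owned by main; the worker gets a copy
        std::future<long> result = std::async(std::launch::async,
            [data] {                                           // capture by value: no shared mutable state
                return std::accumulate(data.begin(), data.end(), 0L);
            });
        std::cout << "sum = " << result.get() << '\n';         // get() waits for the worker to finish
    }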
There are a number of hurdles in learning how to program. If you have gotten this far, you have passed a few. But you probably know people who learned along side of you that fell over at one of the earlier hurdles.
Sequence, that things happen one after another.
Flow control.
Subprocedures and functions.
Looping.
Recursion.
Pointers/references and dynamic vs automatic allocation.
Dynamic lifetime management.
Objects and Dynamic dispatch.
Complexity
Coordinate spaces
Message
Threading and Concurrency
Non-uniform address spaces, Serialization and Networking
Functional programming, meta functions, currying, partial application, Monads
This list is not complete.
The point is, each of these hurdles can cause you to crash and fail as a programmer, and getting each of these hurdles right is hard.
Threading is hard. Do it the easy way. Dynamic lifetime management is hard. Do it the easy way. In both cases, extremely smart people have mastered the "manual" way to do it, and the result is programs that exhibit random unpredictable/undefined behavior and crash a lot. Muddling through manual resource allocation and deallocation and multithreaded code can be made to work, but the result is usually small programs that work accidentally (they work insofar as you fixed the bugs you noticed). And when you do master it, initial mastery comes in the form of holding an entire program's state in your head and understanding how it works; this fails to scale to large, many-developer code bases, so you usually graduate to having large programs that work accidentally.
Both the make_unique style and only-immutable-shared-data threading are composable strategies. This means that if the small pieces are correct and you put them together, the resulting program is correct (with regard to resource lifetime and concurrency). That permits local mastery of small-scale threading or resource management to carry over to large-scale programs in the domains where these strategies work.
After following the guidance from #Yakk I decided to restructure my program:
bool is_running_ changes to std::atomic<bool> is_running_
stop() will not only trigger the stopping, but will actively wait for the thread to stop via thread_->join()
all calls to new are replaced with std::make_unique<std::thread>( &LoopThread::executeLoop, this )
I have no experience with copy or move constructors, so I decided to delete them. This should prevent me from accidentally using them. If I ever need them in the future, I will have to take a deeper look at them.
thread_->detach() is replaced by thread_->join() (see 2.)
That is the complete list of changes.
class LoopThread {
public:
    LoopThread(std::function<void(LoopThread*, uint32_t)> function) : function_{ function }, is_running_{ false }, counter_{ 0 } {};
    LoopThread(LoopThread &&) = delete;
    LoopThread(const LoopThread &) = delete;
    LoopThread& operator=(const LoopThread&) = delete;
    LoopThread& operator=(LoopThread&&) = delete;
    ~LoopThread();
    void start();
    void stop();
    bool isRunning() const { return is_running_; };
private:
    std::function<void(LoopThread*, uint32_t)> function_;
    std::unique_ptr<std::thread> thread_;
    std::atomic<bool> is_running_;
    uint32_t counter_;
    void executeLoop();
};

LoopThread::~LoopThread() {
    if (isRunning()) {
        stop();
    }
}

void LoopThread::start() {
    if (is_running_) {
        throw std::runtime_error("Thread is already running");
    }
    if (thread_ != nullptr) {
        throw std::runtime_error("Thread is not stopped yet");
    }
    is_running_ = true;
    thread_ = std::make_unique<std::thread>( &LoopThread::executeLoop, this );
}

void LoopThread::stop() {
    if (!is_running_) {
        throw std::runtime_error("Thread is already stopped");
    }
    is_running_ = false;
    thread_->join();
    thread_ = nullptr;
}

void LoopThread::executeLoop() {
    while (is_running_) {
        function_(this, counter_);
        ++counter_;
    }
}

TEST(pfThread, TestLoop)
{
    pft::LoopThread loop{ std::bind(&testFunction, std::placeholders::_1, std::placeholders::_2) };
    loop.start();
    std::this_thread::sleep_for(std::chrono::milliseconds(50));
    loop.stop();
}

C++ - Threads without coordinating mechanism like mutex_Lock

I attended an interview two days ago. The interviewer was good at C++, but not at multithreading. He asked me to write code for two threads, where one prints 1, 3, 5, ... and the other prints 2, 4, 6, ..., with the combined output being 1, 2, 3, 4, 5, .... So I gave the pseudocode below:
mutex_Lock LOCK;
int last=2;
int last_Value = 0;

void function_Thread_1()
{
    while(1)
    {
        mutex_Lock(&LOCK);
        if(last == 2)
        {
            cout << ++last_Value << endl;
            last = 1;
        }
        mutex_Unlock(&LOCK);
    }
}

void function_Thread_2()
{
    while(1)
    {
        mutex_Lock(&LOCK);
        if(last == 1)
        {
            cout << ++last_Value << endl;
            last = 2;
        }
        mutex_Unlock(&LOCK);
    }
}
After this, he said "these threads will work correctly even without those locks; the locks will only reduce the efficiency". My point was that without the lock, one thread may check (last == 1 or 2) at the same moment the other thread is trying to change the value to 2 or 1. So my conclusion is that it might work without the lock, but that is not a correct/standard way. Now I want to know who is right, and on what basis?
Without the lock, running the two functions concurrently would be undefined behaviour, because there is a data race on the accesses to last and last_Value. Moreover (though not causing UB), the printing would be unpredictable.
With the lock, the program becomes essentially single-threaded, and is probably slower than the naive single-threaded code. But that's just in the nature of the problem (i.e. to produce a serialized sequence of events).
I think the interviewer might have thought about using atomic variables.
Each instantiation and full specialization of the std::atomic template defines an atomic type. Objects of atomic types are the only C++ objects that are free from data races; that is, if one thread writes to an atomic object while another thread reads from it, the behavior is well-defined.
In addition, accesses to atomic objects may establish inter-thread synchronization and order non-atomic memory accesses as specified by std::memory_order.
[Source]
By this I mean the only thing you should change is to remove the locks and declare the last variable as std::atomic<int> last = 2; instead of int last = 2;
This should make it safe to access the last variable concurrently.
Out of curiosity I edited your code a bit and ran it on my Windows machine:
#include <iostream>
#include <atomic>
#include <thread>
#include <Windows.h>

std::atomic<int> last=2;
std::atomic<int> last_Value = 0;
std::atomic<bool> running = true;

void function_Thread_1()
{
    while(running)
    {
        if(last == 2)
        {
            last_Value = last_Value + 1;
            std::cout << last_Value << std::endl;
            last = 1;
        }
    }
}

void function_Thread_2()
{
    while(running)
    {
        if(last == 1)
        {
            last_Value = last_Value + 1;
            std::cout << last_Value << std::endl;
            last = 2;
        }
    }
}

int main()
{
    std::thread a(function_Thread_1);
    std::thread b(function_Thread_2);

    while(last_Value != 6){}            // we want to print 1 to 6
    running = false;                    // inform threads we are about to stop

    a.join();
    b.join();                           // join

    while(!GetAsyncKeyState('Q')){}     // wait for 'Q' press
    return 0;
}
and the output is always:
1
2
3
4
5
6
Ideone refuses to run this code (compilation errors).
Edit: but here is a working Linux version :) (thanks to soon)
The interviewer doesn't know what he is talking about. Without the locks you get races on both last and last_Value. The compiler could, for example, reorder the assignment to last before the print and increment of last_Value, which could lead to the other thread executing on stale data. Furthermore, you could get interleaved output, meaning things like two numbers not being separated by a line break.
Another thing that could go wrong is that the compiler might decide not to reload last and (less importantly) last_Value on each iteration, since they can't (safely) change between iterations anyway (data races are illegal by the C++11 standard and aren't acknowledged in earlier standards). This means the code suggested by the interviewer actually has a good chance of creating infinite loops that do absolutely nothing.
While it is possible to make that code correct without mutexes, it absolutely needs atomic operations with appropriate ordering constraints (release semantics on the assignment to last and acquire semantics on the load of last inside the if statement).
Of course your solution does lower efficiency by effectively serializing the whole execution. However, since the runtime is almost completely spent inside the stream output operation, which is almost certainly internally synchronized with locks, your solution doesn't lower the efficiency any more than it already is. Waiting on the lock in your code might actually be faster than busy-waiting for it, depending on the available resources (the non-locking version using atomics would absolutely tank when executed on a single-core machine).
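For illustration, a minimal sketch of the lock-free variant described above, with explicit release/acquire ordering on last; this is not the interviewer's code, and it still busy-waits:
    #include <atomic>
    #include <iostream>
    #include <thread>

    std::atomic<int> last{2};
    int last_Value = 0;   // plain int: only the thread holding the "turn" touches it,
                          // and the release/acquire pair on 'last' orders those accesses

    void printer(int my_id, int other_id, int limit) {
        for (;;) {
            if (last.load(std::memory_order_acquire) == other_id) {   // is it my turn?
                if (last_Value >= limit) {
                    last.store(my_id, std::memory_order_release);     // let the peer exit too
                    return;
                }
                std::cout << ++last_Value << std::endl;
                last.store(my_id, std::memory_order_release);         // hand the turn over
            }
        }
    }

    int main() {
        std::thread t1(printer, 1, 2, 6);   // prints the odd numbers
        std::thread t2(printer, 2, 1, 6);   // prints the even numbers
        t1.join();
        t2.join();
    }
Only the thread that has just observed the other thread's id touches last_Value, and the release store / acquire load pair on last orders those accesses, which is why the plain int is safe here.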