I'm using Visual Studio 2012 and C++11. I don't understand why this does not work:
void client_loop(bool &run)
{
    while (run);
}

int main()
{
    bool running = true;
    std::thread t(&client_loop, std::ref(running));
    running = false;
    t.join();
}
In this case, the loop in thread t never finishes, even though I explicitly set running to false. run and running have the same address. I tried making running a global variable, but nothing changed. I tried passing a pointer instead, but nothing.
The threads use the same heap. I really don't understand. Can anyone help me?
Your program has Undefined Behavior, because it introduces a data race on the running variable (one thread writes it, another thread reads it).
You should use a mutex to synchronize access, or make running an atomic<bool>:
#include <iostream>
#include <thread>
#include <atomic>

void client_loop(std::atomic<bool> const& run)
{
    while (run.load());
}

int main()
{
    std::atomic<bool> running(true);
    std::thread t(&client_loop, std::ref(running));
    running = false;
    t.join();
    std::cout << "Arrived";
}
See a working live example.
The const probably doesn't affect the compiler's view of the code. In a single-threaded application, the value won't change (and this particular program is meaningless). In a multi-threaded application, since it's an atomic type, the compiler can't optimize out the load, so in fact there's no real issue here. It's really more a matter of style; since main modifies the value, and client_loop looks for that modification, it doesn't seem right to me to say that the value is const.
Related
I have several processes, but only one should be running at a time. This means that if, say, Process1 is running and Process2 gets launched, then Process2 should wait until Process1 is complete. I am considering the boost named_mutex for this purpose. In order to avoid a scenario where the mutex may not get released if some exception is thrown, it looks like boost::lock_guard could be useful. I came up with the following simplified version of the code.
#include <iostream>
#include <boost/interprocess/sync/named_mutex.hpp>
#include <boost/thread.hpp>
#include <chrono>
#include <thread>

using namespace boost::interprocess;

#pragma warning(disable: 4996)

int main()
{
    std::cout << "Before taking lock" << std::endl;
    named_mutex mutex(open_or_create, "some_name");
    boost::lock_guard<named_mutex> guard(mutex);

    // Some work that is simulated by sleep
    std::cout << "now wait for 10 second" << std::endl;
    std::this_thread::sleep_for(std::chrono::seconds(10));

    std::cout << "Hello World";
}
So far, so good. While this program is running, I hit Ctrl+C so the program gets aborted (a crude simulation of a program crash, unhandled exception, etc.). After that, when I run the application again, the program hangs on the following lines of code.
named_mutex mutex(open_or_create, "some_name");
boost::lock_guard<named_mutex> guard(mutex);
If I change the mutex name, then it works fine without hanging. However, it looks like the mutex named some_name is somehow "remembered" on the machine in some sort of bad state. As a result, any application that tries to acquire a mutex named some_name hangs on this line of code. If I change the mutex name to, let's say, some_name2, the program works fine again.
Can someone please explain what is causing this behavior?
How can I reset the behavior for this particular mutex?
Most importantly, how to avoid this scenario in a real application?
As explained in this answer to the question linked by @ppetraki above, boost::interprocess::named_mutex, unfortunately, uses a file lock on Windows rather than an actual mutex. If your application terminates abnormally, that file lock will not be removed from the system. This is actually the subject of an open issue.
Looking at the source code, we see that, if BOOST_INTERPROCESS_USE_WINDOWS is defined, internal_mutex_type maps to a windows_named_mutex which, internally, uses a windows_named_sync, which seems to just be using a file lock in the end. I'm not sure exactly what the rationale behind this choice of implementation is. Whatever it may be, there does not seem to be any way to get boost::interprocess to use a proper named mutex on Windows. I would suggest simply creating a named mutex yourself using CreateMutex, for example:
#include <type_traits>
#include <memory>
#include <stdexcept>
#include <mutex>
#include <iostream>

#define NOMINMAX
#define WIN32_LEAN_AND_MEAN
#include <windows.h>

struct CloseHandleDeleter { void operator ()(HANDLE h) const { CloseHandle(h); } };

class NamedMutex
{
    std::unique_ptr<std::remove_pointer_t<HANDLE>, CloseHandleDeleter> m;

public:
    NamedMutex(const wchar_t* name)
        : m(CreateMutexW(nullptr, FALSE, name))
    {
        if (!m)
            throw std::runtime_error("failed to create mutex");
    }

    void lock()
    {
        if (WaitForSingleObject(m.get(), INFINITE) == WAIT_FAILED)
            throw std::runtime_error("something bad happened");
    }

    void unlock()
    {
        ReleaseMutex(m.get());
    }
};

int main()
{
    try
    {
        NamedMutex mutex(L"blub");
        std::lock_guard lock(mutex);
        std::cout << "Hello, World!" << std::endl;
    }
    catch (...)
    {
        std::cerr << "something went wrong\n";
        return -1;
    }
    return 0;
}
Can someone please explain what is causing this behavior?
The mutex is global.
How can I reset the behavior for this particular mutex?
Call boost::interprocess::named_mutex::remove("mutex_name");
Most importantly, how to avoid this scenario in a real application?
It depends on what your outer problem is. Perhaps a more sensible solution is to use a file lock instead. A file lock will go away when a process is destroyed.
Updates:
I understand the mutex is global, but what happens with that mutex that causes the program to hang?
The first program acquired the mutex and never released it so the mutex is still held. Mutexes are typically held while shared state is put into an inconsistent state, so automatically releasing the mutex would be disastrous.
How can I determine whether that mutex_name is in a bad state, so it's time to call remove on it?
In your case you really can't because you picked the wrong tool for the job. The same logic you would use to tell if the mutex was in a sane state would just solve your whole problem, so the mutex just made things harder. Instead, use a file lock. It may be useful to write the process name and process ID into the file to help in troubleshooting.
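To illustrate that suggestion, here is a minimal sketch using boost::interprocess::file_lock; the lock-file path (my_app.lock) and the helper scaffolding are purely illustrative, not taken from the question. Unlike the named mutex, the OS drops a file lock automatically when the process dies, so a crash cannot leave it stuck:

#include <fstream>
#include <iostream>
#include <boost/interprocess/sync/file_lock.hpp>
#include <boost/interprocess/sync/scoped_lock.hpp>

int main()
{
    const char* lock_path = "my_app.lock";      // illustrative path; must be writable
    std::ofstream(lock_path, std::ios::app);    // make sure the lock file exists
    boost::interprocess::file_lock flock(lock_path);

    // scoped_lock blocks until the lock is acquired and releases it on scope exit
    boost::interprocess::scoped_lock<boost::interprocess::file_lock> guard(flock);
    std::cout << "doing the exclusive work" << std::endl;
}   // lock released here, or by the OS if the process is killed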
I have code at work that starts multiple threads doing some operations, and if any of them fails it sets a shared variable to false.
Then the main thread joins all the worker threads. A simulation of this looks roughly like this (I commented out the possible fix, which I don't know whether it's needed):
#include <thread>
#include <atomic>
#include <vector>
#include <iostream>
#include <cassert>

using namespace std;

//atomic_bool success = true;
bool success = true;

int main()
{
    vector<thread> v;

    for (int i = 0; i < 10; ++i)
    {
        v.emplace_back([=]
        {
            if (i == 5 || i == 6)
            {
                //success.store(false, memory_order_release);
                success = false;
            }
        });
    }

    for (auto& t : v)
        t.join();

    //assert(success.load(memory_order_acquire) == false);
    assert(success == false);

    cout << "Finished" << endl;
    cin.get();
    return 0;
}
Is there a possibility that the main thread will read the success variable as true even though one of the workers set it to false?
I found that thread::join() is a full memory barrier (source), but does that imply a synchronizes-with relationship with the subsequent read of the success variable from the main thread, so that we're guaranteed to get the newest value?
Is the fix I posted (in the commented code) necessary in this case (or maybe another fix if this one is wrong)?
Is there a possibility that the read of the success variable will be optimized away (since it's not volatile) and we will get an old value regardless of the supposed implicit memory barrier on thread::join?
The code is supposed to work on multiple architectures (I cannot remember all of them; I don't have the makefile in front of me), but there are at least x86, amd64, Itanium, and ARMv7.
Thanks for any help with this.
Edit: I've modified the example, because in the real situation more than one thread can try to write to the success variable.
The code above represents a data race, and the use of join cannot change that fact. If only one thread wrote to the variable, it would be fine. But you have two threads writing to it, with no synchronization between them. That's a data race.
join simply means "all side effects of that thread's operation have completed and are now visible to you." That does not create ordering or synchronization between that thread and any thread other than your own.
If you used an atomic_bool, then it wouldn't be UB; it would be guaranteed to be false. But because there is a data race, you get pure UB. It might be true, false, or nasal demons.
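For completeness, a minimal sketch of that fix applied to the example above (switching success to std::atomic<bool>; even a relaxed store is enough here, because join() already provides the happens-before ordering for the main thread's read):

#include <atomic>
#include <cassert>
#include <thread>
#include <vector>

std::atomic<bool> success{true};

int main()
{
    std::vector<std::thread> v;
    for (int i = 0; i < 10; ++i)
    {
        v.emplace_back([=]
        {
            if (i == 5 || i == 6)
                success.store(false, std::memory_order_relaxed); // atomic: no data race
        });
    }
    for (auto& t : v)
        t.join();   // synchronizes-with the completion of each worker
    assert(success.load(std::memory_order_relaxed) == false);
}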
In the following code example, program execution never ends.
It creates a thread which waits for a global bool to be set to true before terminating. There is only one writer and one reader. I believe that the only situation that allows the loop to continue running is if the bool variable is false.
How is it possible that the bool variable ends up in an inconsistent state with just one writer?
#include <iostream>
#include <pthread.h>
#include <unistd.h>

bool done = false;

void * threadfunc1(void *) {
    std::cout << "t1:start" << std::endl;
    while (!done);
    std::cout << "t1:done" << std::endl;
    return NULL;
}

int main()
{
    pthread_t threads;
    pthread_create(&threads, NULL, threadfunc1, NULL);
    sleep(1);
    done = true;
    std::cout << "done set to true" << std::endl;
    pthread_exit(NULL);
    return 0;
}
There's a problem in the sense that this statement in threadfunc1():
while(!done);
can be implemented by the compiler as something like:
a_register = done;
label:
if (a_register == 0) goto label;
So updates to done will never be seen.
There is really nothing that prevents the compiler from optimizing the while-loop away. Use an atomic or a mutex to access the bool from more than one thread. That is the only supported and correct solution. As you are using POSIX, a mutex would be the right solution in this case.
And don't use volatile. The POSIX standard states what has to work, and volatile is not a solution that is guaranteed to work.
And there is another problem: there is no guarantee that your newly created thread has even started to run before you set the flag to true.
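A minimal sketch of that mutex-based approach for the example above (the helper names is_done()/set_done() are just illustrative):

#include <pthread.h>

static bool done = false;
static pthread_mutex_t done_mutex = PTHREAD_MUTEX_INITIALIZER;

static bool is_done()               // reader side: used by the spinning thread
{
    pthread_mutex_lock(&done_mutex);
    bool d = done;
    pthread_mutex_unlock(&done_mutex);
    return d;
}

static void set_done()              // writer side: called from main()
{
    pthread_mutex_lock(&done_mutex);
    done = true;
    pthread_mutex_unlock(&done_mutex);
}

// in threadfunc1: while (!is_done());
// in main:        set_done();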
For such a simple example, volatile is enough. But for the vast majority of real-world situations it is not. Use a condition variable for this task; they look weird at first glance, but they are actually quite logical. On x86 a bool IS atomic to read/write (on ARM, probably not). Also, there is an obstacle with vector<bool>: it is NOT a vector of bools, it is a bitfield. To write to a vector from several threads use vector<char> (or bool arr[SIZE]).
Also, you don't join the thread, which is wrong.
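As a sketch of the condition-variable approach (with pthreads, since that is what the question uses; the helper names are illustrative), which also avoids burning CPU in the spin loop:

#include <pthread.h>

static bool done = false;
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cv = PTHREAD_COND_INITIALIZER;

static void wait_for_done()         // replaces the busy loop in threadfunc1
{
    pthread_mutex_lock(&m);
    while (!done)                   // loop guards against spurious wakeups
        pthread_cond_wait(&cv, &m);
    pthread_mutex_unlock(&m);
}

static void signal_done()           // replaces "done = true;" in main
{
    pthread_mutex_lock(&m);
    done = true;
    pthread_cond_signal(&cv);
    pthread_mutex_unlock(&m);
}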
A race condition means that two threads are accessing the same object, and at least one of the accesses is a write.
This means there are two types of conflict: write-write and write-read.
Back to your code: you essentially have two threads, one being the main thread and the other the one you created with pthread_create.
One of them is a read: while(!done), and one of them is a write: done = true.
You have race condition for sure.
Is a race condition possible when only one thread writes to a bool variable in c++?
Yes. In your case, the main thread is also a thread (i.e. you have one thread writing and one thread reading).
How is it possible that the bool variable ends up in an inconsistent state with just one writer?
The compiler is (should be) an optimizing compiler. It will probably optimize away the repeated reading of the done variable, unless you take care to avoid that (use std::atomic<bool> done instead).
It's not guaranteed that assignment to a bool, which is one byte, is atomic.
Possible Duplicate:
Is there a way to cancel/detach a future in C++11?
There is a member function which runs asynchronously using std::future and std::async. In some cases, I need to cancel it. (The function loads nearby objects consecutively, and sometimes an object gets out of range while it is being loaded.) I already read the answers to this question addressing the same issue, but I cannot get it to work.
This is simplified code with the same structure as my actual program. Calling Start() and then Kill() while the asynchronous task is running causes a crash because of an access violation on input.
In my eyes the code should work as follows: when Kill() is called, the running flag is disabled. The next statement, get(), should wait for the thread to end, which it does soon since it checks the running flag. After the thread is canceled, the input pointer is deleted.
#include <vector>
#include <future>

using namespace std;

class Class
{
    future<void> task;

    bool running;

    int *input;
    vector<int> output;

    void Function()
    {
        for(int i = 0; i < *input; ++i)
        {
            if(!running) return;
            output.push_back(i);
        }
    }

    void Start()
    {
        input = new int(42534);
        running = true;
        task = async(launch::async, &Class::Function, this);
    }

    void Kill()
    {
        running = false;
        task.get();
        delete input;
    }
};
It seems like the thread doesn't notice the running flag being toggled to false. What is my mistake?
Since no one's actually answered the question yet, I'll do so.
The writes to and reads from the running variable are not atomic operations, so nothing in the code causes any synchronisation between the two threads, and nothing ever ensures that the async thread sees that the variable has changed.
One possible way that can happen is that the compiler analyzes the code of Function, determines that there are never any writes to the variable in that thread, and as it's not an atomic object writes by other threads are not required to be visible, so it's entirely legal to rearrange the code to this:
void Function()
{
    if(!running) return;
    for(int i = 0; i < *input; ++i)
    {
        output.push_back(i);
    }
}
Obviously in this code if running changes after the function has started it won't cause the loop to stop.
There are two ways the C++ standard allows you to synchronize the two threads, which is either to use a mutex and only read or write the running variable while the mutex is locked, or to make the variable an atomic variable. In your case, changing running from bool to atomic<bool> will ensure that writes to the variable are synchronized with reads from it, and the async thread will terminate.
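Applied to the class from the question, the change is small. A minimal sketch (assuming Start() and Kill() are meant to be callable from outside, i.e. public):

#include <atomic>
#include <future>
#include <vector>
using namespace std;

class Class
{
    future<void> task;
    atomic<bool> running{false};   // atomic: the store in Kill() synchronizes with the load below
    int *input = nullptr;
    vector<int> output;

    void Function()
    {
        for (int i = 0; i < *input; ++i)
        {
            if (!running.load()) return;   // now guaranteed to observe Kill()'s store
            output.push_back(i);
        }
    }

public:
    void Start()
    {
        input = new int(42534);
        running = true;
        task = async(launch::async, &Class::Function, this);
    }

    void Kill()
    {
        running = false;   // atomic store
        task.get();        // wait for Function() to observe the flag and return
        delete input;
    }
};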
The following example runs successfully (i.e. doesn't hang) if compiled using Clang 3.2 or GCC 4.7 on Ubuntu 12.04, but hangs if I compile using VS11 Beta or VS2012 RC.
#include <iostream>
#include <string>
#include <thread>
#include "boost/thread/thread.hpp"

void SleepFor(int ms) {
    std::this_thread::sleep_for(std::chrono::milliseconds(ms));
}

template<typename T>
class ThreadTest {
public:
    ThreadTest() : thread_([] { SleepFor(10); }) {}
    ~ThreadTest() {
        std::cout << "About to join\t" << id() << '\n';
        thread_.join();
        std::cout << "Joined\t\t" << id() << '\n';
    }

private:
    std::string id() const { return typeid(decltype(thread_)).name(); }
    T thread_;
};

int main() {
    static ThreadTest<std::thread> std_test;
    static ThreadTest<boost::thread> boost_test;
    // SleepFor(100);
}
The issue appears to be that std::thread::join() never returns if it is invoked after main has exited. It is blocked at WaitForSingleObject in _Thrd_join defined in cthread.c.
Uncommenting SleepFor(100); at the end of main allows the program to exit properly, as does making std_test non-static. Using boost::thread also avoids the issue.
So I'd like to know if I'm invoking undefined behaviour here (seems unlikely to me), or if I should be filing a bug against VS2012?
Tracing through Fraser's sample code in his connect bug (https://connect.microsoft.com/VisualStudio/feedback/details/747145)
with VS2012 RTM seems to show a fairly straightforward case of deadlocking. This likely isn't specific to std::thread - likely _beginthreadex suffers the same fate.
What I see in the debugger is the following:
On the main thread, the main() function has completed, the process cleanup code has acquired a critical section called _EXIT_LOCK1, called the destructor of ThreadTest, and is waiting (indefinitely) on the second thread to exit (via the call to join()).
The second thread's anonymous function completed and is in the thread cleanup code waiting to acquire the _EXIT_LOCK1 critical section. Unfortunately, due to the timing of things (whereby the second thread's anonymous function's lifetime exceeds that of the main() function) the main thread already owns that critical section.
DEADLOCK.
Anything that extends the lifetime of main() such that the second thread can acquire _EXIT_LOCK1 before the main thread avoids the deadlock situation. That's why uncommenting the sleep in main() results in a clean shutdown.
Alternatively if you remove the static keyword from the ThreadTest local variable, the destructor call is moved up to the end of the main() function (instead of in the process cleanup code) which then blocks until the second thread has exited - avoiding the deadlock situation.
Or you could add a function to ThreadTest that calls join() and call that function at the end of main() - again avoiding the deadlock situation.
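A minimal sketch of that last variant (the Join() member is just an illustrative name; guarding with joinable() keeps the destructor safe if Join() was already called):

#include <chrono>
#include <thread>

void SleepFor(int ms) {
    std::this_thread::sleep_for(std::chrono::milliseconds(ms));
}

template<typename T>
class ThreadTest {
public:
    ThreadTest() : thread_([] { SleepFor(10); }) {}
    ~ThreadTest() { Join(); }
    void Join() {
        if (thread_.joinable())
            thread_.join();        // joining here, while main() is alive, avoids the deadlock
    }
private:
    T thread_;
};

int main() {
    static ThreadTest<std::thread> std_test;
    std_test.Join();               // thread is joined before process cleanup takes _EXIT_LOCK1
}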
I realize this is an old question regarding VS2012, but the bug is still present in VS2013. For those who are stuck on VS2013, perhaps due to Microsoft's refusal to provide upgrade pricing for VS2015, I offer the following analysis and workaround.
The problem is that the mutex (at_thread_exit_mutex) used by _Cnd_do_broadcast_at_thread_exit() is either not yet initialized, or has already been destroyed, depending on the exact circumstances. In the former case, _Cnd_do_broadcast_at_thread_exit() tries to initialize the mutex during shutdown, causing a deadlock. In the latter case, where the mutex has already been destroyed via the atexit stack, the program will crash on the way out.
The solution I found is to explicitly call _Cnd_do_broadcast_at_thread_exit() (which thankfully is declared publicly) early during program startup. This has the effect of creating the mutex before anyone else tries to access it, as well as ensuring that the mutex continues to exist until the last possible moment.
So, to fix the problem, insert the following code at the bottom of a source module, for instance somewhere below main().
#pragma warning(disable:4073) // initializers put in library initialization area
#pragma init_seg(lib)

#if _MSC_VER < 1900
struct VS2013_threading_fix
{
    VS2013_threading_fix()
    {
        _Cnd_do_broadcast_at_thread_exit();
    }
} threading_fix;
#endif
I believe your threads have already been terminated and their resources freed following the termination of your main function and before static destruction. This is the behavior of the VC runtimes dating back to at least VC6.
Do child threads exit when the parent thread terminates
boost thread and process cleanup on windows
My answer is far too late, but I hope it will help someone.
I was stuck on this bug, and I found a trick to solve this problem; it worked in my code.
int main()
{
    ThreadTest<std::thread> trick_obj; // trick... You can put this line of code anywhere
    static ThreadTest<std::thread> std_test;
    return 1;
}
I have been battling this bug for a day, and found the following work-around, which turned out to be the least dirty trick:
Instead of returning, one can use the Windows API function ExitThread() to terminate the thread. This method may of course mess up the internal state of the std::thread object and the associated library, but since the program is going to terminate anyway, well, so be it.
#include <windows.h>

template<typename T>
class ThreadTest {
public:
    ThreadTest() : thread_([] { SleepFor(10); ExitThread(NULL); }) {}
    ~ThreadTest() {
        std::cout << "About to join\t" << id() << '\n';
        thread_.join();
        std::cout << "Joined\t\t" << id() << '\n';
    }

private:
    std::string id() const { return typeid(decltype(thread_)).name(); }
    T thread_;
};
The join() call then apparently works correctly. However, I chose to use a safer method in our solution. One can get the thread HANDLE via std::thread::native_handle(). With this handle we can call the Windows API directly to join the thread:
WaitForSingleObject(thread_.native_handle(), INFINITE);
CloseHandle(thread_.native_handle());
Thereafter, the std::thread object must not be destroyed, because it is still considered joinable and its destructor would then call std::terminate(). So we just leave the std::thread object dangling at program exit.