If I create an object which is going to be accessed by two different std::threads, do I need to make any special provisions when I create the object or pass it to the threads?
For example:
class Alpha
{
public:
    int x;
};

void Foo(Alpha* alpha)
{
    while (true)
    {
        alpha->x++;
        std::cout << "Foo: alpha.x = " << alpha->x << std::endl;
    }
}

void Bar(Alpha* alpha)
{
    while (true)
    {
        alpha->x++;
        std::cout << "Bar: alpha.x = " << alpha->x << std::endl;
    }
}

int main(int argc, char* argv[])
{
    Alpha alpha;
    alpha.x = 0;
    std::thread t1(Foo, &alpha);
    std::thread t2(Bar, &alpha);
    t1.join();
    t2.join();
    return 0;
}
This compiles fine, and seems to run fine too. But I haven't explicitly told my program that alpha needs to be accessed by two different threads. Should I be doing this differently?
You have a race condition on alpha.x, as both threads may write it while the other reads or writes its value. You can fix that by changing the type of x to std::atomic<int>, or by protecting read/write access with a mutex.
If an object is going to be accessed by multiple threads, then you must make provisions for synchronization. In your case, it will suffice to declare the variable x as an atomic:
#include <atomic>

class Alpha
{
public:
    std::atomic<int> x;
};
This guarantees that any code incrementing "x" will actually use the atomic fetch_add() operation, so each thread that increments "x" gets a unique incremented value.
In the code you posted, it is entirely possible for both threads to end up with a value of 1 for x if the executions interleave just right (both read 0, both increment, both print 1).
Related
I have an application object which can receive messages from multiple services running in multiple threads. The message gets dispatched internally by an instance of a dispatcher object in the threads of the services. The application can at any time change the current dispatcher. Dispatchers never get destroyed. The services never outlive the application.
Here's some example code:
#include <iostream>
#include <thread>
#include <atomic>
#include <cstdlib>
#include <functional>

using namespace std;

using Msg = int;

struct Dispatcher
{
    virtual ~Dispatcher() = default;
    virtual void dispatchMessage(Msg msg) = 0;
};

struct DispatcherA : Dispatcher
{
    void dispatchMessage(Msg msg) override
    {
        cout << "Thread-safe dispatch of " << msg << " by A" << endl;
    }
};

struct DispatcherB : Dispatcher
{
    void dispatchMessage(Msg msg) override
    {
        cout << "Thread-safe dispatch of " << msg << " by B" << endl;
    }
};

struct Application
{
    Application() : curDispatcher(&a) {}

    void sendMessage(Msg msg)
    {
        // race here, as this is called (and the pointer dereferenced) from
        // many threads while the pointer can be changed by the main thread
        curDispatcher->dispatchMessage(msg);
    }

    void changeDispatcher()
    {
        // race here, as this changes the pointer while many threads may be
        // dereferencing it
        if (rand() % 2) curDispatcher = &a;
        else curDispatcher = &b;
    }

    atomic_bool running = true;
    Dispatcher* curDispatcher; // race on this
    DispatcherA a;
    DispatcherB b;
};

void service(Application& app, int i)
{
    while (app.running) app.sendMessage(i++);
}

int main()
{
    Application app;
    std::thread t1(std::bind(service, std::ref(app), 1));
    std::thread t2(std::bind(service, std::ref(app), 20));
    for (int i = 0; i < 10000; ++i)
    {
        app.changeDispatcher();
    }
    app.running = false;
    t1.join();
    t2.join();
    return 0;
}
I am aware that there is a race condition here. The curDispatcher pointer gets accessed by many threads and it can be changed at the same time by the main thread. It can be fixed by making the pointer atomic and explicitly loading it on every sendMessage call.
I don't want to pay the price of the atomic loads.
Can something bad happen because of this?
Here's what I can think of:
The value of curDispatcher can get cached by a service and it can always call the same one, even if the app has changed the value. I'm ok with that. If I stop being ok with that, I can make it volatile. Newly created services should be ok, anyway.
If this ever runs on a 32-bit CPU which emulates 64-bit, the writes and reads of the pointer will not be instruction-level atomic and it might lead to invalid pointer values and crashes: I am making sure that this only runs on 64-bit CPUs.
Destroying dispatchers isn't safe. As I said: I'm never destroying dispatchers.
???
I am researching mutexes.
I came up with this example, which seems to work without any synchronization.
#include <cstdint>
#include <thread>
#include <iostream>

constexpr size_t COUNT = 10000000;

int g_x = 0;

void p1(){
    for(size_t i = 0; i < COUNT; ++i){
        ++g_x;
    }
}

void p2(){
    int a = 0;
    for(size_t i = 0; i < COUNT; ++i){
        if (a > g_x){
            std::cout << "Problem detected" << '\n';
        }
        a = g_x;
    }
}

int main(){
    std::thread t1{ p1 };
    std::thread t2{ p2 };
    t1.join();
    t2.join();
    std::cout << g_x << '\n';
}
My assumptions are the following:
Thread 1 changes the value of g_x, but it is the only thread that changes it, so theoretically this should be OK.
Thread 2 only reads the value of g_x. Reads are supposed to be atomic on x86 and ARM, so there should be no problem there either. I have an example with several reader threads and it works OK too.
In other words: the write is not shared, and the reads are atomic.
Are these assumptions correct?
There's certainly a data race here: g_x is not an std::atomic; it is written to by one thread, and read from by another. So the results are undefined.
Note that the CPU memory model is only part of the deal. The compiler might do all sorts of optimizations (using registers, reordering etc.) if you don't declare your shared variables properly.
As for mutexes, you do not need one here. Declaring g_x as atomic should remove the UB and guarantee proper communication between the threads. Btw, the for in p2 can probably be optimized out even if you're using atomics, but I assume this is just a reduced code and not the real thing.
Can someone please help me understand what I am doing wrong here?
When I make the data members of the class non-atomic, it works fine.
class AtomicTest
{
    atomic<int> A{ 0 };
    atomic<int> B{ 0 };
public:
    AtomicTest() { }

    void func1()
    {
        A = 1;
        cout << "func1 " << A << endl;
    }

    void func2()
    {
        cout << "func2 " << A << endl;
        A = A + 1;
        cout << A << endl;
    }
};

int main()
{
    // I tried to move as well. I know ref shares the data between the two
    // threads, but I could use a mutex to protect the data if needed;
    // the problem is I cannot even call func1.
    thread t1(&AtomicTest::func1, std::ref(a));
    //thread t2(&AtomicTest::func2, std::ref(a));
    AtomicTest a;
    t1.join();
    //t2.join();
    return 0;
}
Without additional synchronization, the behavior of this program is impossible to predict, because of this line:
A = A + 1;
That one statement is two separate atomic operations: an atomic load of A, then an atomic store of the computed result. Each operation is individually atomic, so there is no data race or undefined behavior on a std::atomic, but the compound read-modify-write is not atomic: the other thread can store to A between your load and your store, and that update is then silently overwritten. Even the default memory order (std::memory_order_seq_cst) gives no guarantee about how the two threads' operations interleave.
Either add a locking primitive (such as std::mutex), or use the dedicated atomic read-modify-write functions (fetch_add, exchange, etc.). See cppreference for details.
If I want to get the result from a thread, which of the following code is correct? Or there exists better way to achieve the same goal?
void foo(int &result) {
    result = 123;
}

int bar() {
    return 123;
}

int main() {
    int foo_result;
    std::thread t1(foo, std::ref(foo_result));
    t1.join();

    std::future<int> t2 = std::async(bar);
    int bar_result = t2.get();
}
And another case,
void baz(int beg, int end, vector<int> &a) {
    for (int idx = beg; idx != end; ++idx) a[idx] = idx;
}

int main() {
    vector<int> a(30);
    thread t0(baz, 0, 10, ref(a));
    thread t1(baz, 10, 20, ref(a));
    thread t2(baz, 20, 30, ref(a));
    t0.join();
    t1.join();
    t2.join();
    for (auto x : a) cout << x << endl;
}
There are many ways.
See the example code at the bottom of http://en.cppreference.com/w/cpp/thread/future
In C++11, you want to use std::future
From this documentation link:
The class template std::future provides a mechanism to access the result of asynchronous operations
And some sample code, also from that link, to illustrate its use.
#include <iostream>
#include <future>
#include <thread>

int main()
{
    // future from a packaged_task
    std::packaged_task<int()> task([](){ return 7; }); // wrap the function
    std::future<int> f1 = task.get_future();           // get a future
    std::thread(std::move(task)).detach();             // launch on a thread

    // future from an async()
    std::future<int> f2 = std::async(std::launch::async, [](){ return 8; });

    // future from a promise
    std::promise<int> p;
    std::future<int> f3 = p.get_future();
    std::thread( [](std::promise<int>& p){ p.set_value(9); },
                 std::ref(p) ).detach();

    std::cout << "Waiting..." << std::flush;
    f1.wait();
    f2.wait();
    f3.wait();
    std::cout << "Done!\nResults are: "
              << f1.get() << ' ' << f2.get() << ' ' << f3.get() << '\n';
}
The second one is simpler, better, and safer.
With the first one, you are sharing an object, foo_result, between the two threads, so you need to enforce some form of synchronization or policy in order to use the result object safely.
Another issue with the first one is that the lifetime of the referenced result object is tied to its scope in the initiating thread. This becomes very unsafe if the referred-to object goes out of scope while the worker thread is still running and has not yet written to it.
The second one is much better, as it addresses both issues above. You can also use it with any function that returns its result, without that function having to know it is being executed concurrently. Of course, you still need to be careful not to introduce data races and undefined behavior when sharing data, especially global variables.
To be honest, I think your second example is a bit contrived. You typically don't want separate threads for such a trivial task: even though the threads write to disjoint elements (which is race-free), the overhead of starting and joining the threads puts this at a serious disadvantage against single-threaded code.
I'm learning concurrent programming, and what I want to do is have a class where each object is responsible for running its own boost::thread. I'm a little over my head with this code because it uses a lot of functionality that I'm not comfortable with (dynamically allocated memory, function pointers, concurrency, etc.). It's like every line of code required checking some reference to get it right.
(Yes, all allocated memory is accounted for in the real code!)
I'm having trouble with the mutexes. I declare the mutex static, and it seems to have the same value for all instances (as it should), but the code is still not thread-safe.
The mutex should stop the threads from progressing any further when another thread has locked it, right? And since the lock is scoped (a neat feature) and sits inside the if statement, shouldn't it lock the other threads out of that block? Still, I get console output that clearly suggests it is not thread-safe.
Also, I'm not sure I'm using the static variable correctly. I tried different ways of referring to it (Seller::ticketSaleMutex), but the only thing that worked was this->ticketSaleMutex, which seems very shady and seems to defeat the purpose of it being static.
Seller.h:
class Seller
{
public:
    // Some variables
private:
    // Other variables
    static boost::mutex ticketSaleMutex; // Mutex declaration
};

Seller.cpp:

boost::mutex Seller::ticketSaleMutex; // Mutex definition

void Seller::StartTicketSale()
{
    ticketSale = new boost::thread(boost::bind(&Seller::SellTickets, this));
}

void Seller::SellTickets()
{
    while (*totalSoldTickets < totalNumTickets)
    {
        if ([Some time tick])
        {
            boost::mutex::scoped_lock(this->ticketSaleMutex);
            (*totalSoldTickets)++;
            std::cout << "Seller " << ID << " sold ticket " << *totalSoldTickets << std::endl;
        }
    }
}
main.cpp:
int main(int argc, char** argv)
{
    std::vector<Seller*> seller;
    const int numSellers = 10;
    int numTickets = 40;
    int* soldTickets = new int;
    *soldTickets = 0;
    for (int i = 0; i < numSellers; i++)
    {
        seller.push_back(new Seller(i, numTickets, soldTickets));
        seller[i]->StartTicketSale();
    }
}
This will create a temporary that is immediately destroyed:
boost::mutex::scoped_lock(this->ticketSaleMutex);
resulting in no synchronization. You need to declare a variable:
boost::mutex::scoped_lock local_lock(this->ticketSaleMutex);