I want to print a permutation of the set {0, 1, 2, 3} from a multithreaded program written in C++11.
The source code is this:
#include <iostream>
#include <stdio.h>
#include <thread>
#include <vector>
#include <chrono>
using namespace std;
void func(int index);
int main()
{
vector<thread> threads;
for (int i = 0; i < 4; i++)
{
auto var = [&]()
{
return func(i);
};
threads.push_back(thread(var));
}
for (auto& thread : threads)
thread.join();
}
void func(int index)
{
cout << index;
for (int i = 0; i < 10000; i++);
}
I expect the output to be a permutation of 0123, but I get weird results like these:
0223
0133
0124
I don't understand this weird behaviour; in particular, I cannot explain the presence of the number 4.
This is probably a beginner's mistake. Thanks in advance to anybody who can help.
You are capturing i by reference:
auto var = [&]()
{
return func(i);
};
As such, when the thread that eventually gets started is kicked off and going, it does not have an actual copy of the value i held at that moment; it has only a reference to i.
Which has probably been incremented once or twice by now. If you consider that a reference is really just an ordinary pointer with a thin layer of makeup on top of it, you should be able to figure this out on your own: the thread gets a pointer to i, which could have been incremented who knows how many times by the time the thread gets going.
And technically, since i could even have gone out of scope here (if the for loop terminated before the thread started executing), this is undefined behavior.
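One way to sidestep all of this is to give each thread its own copy of the counter, either by capturing i by value or by passing it as a thread argument. A minimal sketch along those lines (func is reproduced without the busy loop):
#include <iostream>
#include <thread>
#include <vector>

void func(int index) { std::cout << index; }  // same output as the question's func

int main()
{
    std::vector<std::thread> threads;
    for (int i = 0; i < 4; i++)
    {
        // [i] captures by value: the lambda stores its own copy of the current i
        threads.push_back(std::thread([i]() { func(i); }));
        // equivalently: threads.emplace_back(func, i);
    }
    for (auto& t : threads)
        t.join();
}
The digits may still appear in any order, but each thread now prints the value it was created with, so the output is a permutation of 0123.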
You're invoking undefined behavior in three ways:
Firstly, you're capturing the stack variable i by reference rather than by value, so when the thread starts it will call the lambda and use the then-current value of i rather than the value at the time of capture.
[edit: no longer true as of C++11] Second is the thread-safety of cout.
The third is an assumption about the order in which the threads execute, which is not guaranteed. [edit:] This includes not only the order in which they start but also the order in which they access cout to write their output.
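If all you need is for each thread's writes to come out whole (interleaving rather than ordering), a mutex around the stream access is enough. A minimal sketch, using a coutMutex that is not in the original code:
#include <iostream>
#include <mutex>

std::mutex coutMutex;  // added for illustration: serializes access to std::cout

void func(int index)
{
    std::lock_guard<std::mutex> lock(coutMutex);
    std::cout << index;  // no other writer can interleave, provided every writer takes the same mutex
}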
But do you need to solve the order of execution?
If you do, then instead of passing the values to the threads, put them into a queue and give the threads access to the queue.
#include <iostream>
#include <thread>
#include <vector>
#include <mutex>
#include <chrono>
#include <queue>
class LockedQueue {
std::queue<int> queue_;
mutable std::mutex mutex_;
public:
LockedQueue() = default;
// these don't have to be deleted, but you'd have to decide whether or
// not each operation needed to invoke a lock, and in the case of operator=
// you have two mutexes to contend with.
LockedQueue(const LockedQueue&) = delete;
LockedQueue(LockedQueue&&) = delete;
LockedQueue& operator=(const LockedQueue&) = delete;
LockedQueue& operator=(LockedQueue&&) = delete;
void push(int value) {
std::lock_guard<std::mutex> lock(mutex_);
queue_.push(value);
}
int pop() {
std::lock_guard<std::mutex> lock(mutex_);
int value = queue_.front();
queue_.pop();
return value;
}
bool empty() const {
std::lock_guard<std::mutex> lock(mutex_);
return queue_.empty();
}
};
void func(LockedQueue& work, LockedQueue& results);
int main()
{
LockedQueue work, results;
std::vector<std::thread> threads;
for (int i = 0; i < 4; i++)
{
work.push(i);
threads.emplace_back(func, std::ref(work), std::ref(results));
}
for (auto& thread : threads)
thread.join();
while (!results.empty()) {
int i = results.pop();
std::cout << i;
}
}
void func(LockedQueue& work, LockedQueue& results)
{
int index = work.pop();
using namespace std::chrono_literals;
std::this_thread::sleep_for(1s);
results.push(index);
}
http://ideone.com/7G0JEO
We are still not guaranteed to get our results back in-order: it's quite possible that the thread that takes 0 off the queue is then pre-empted and doesn't execute again until 1, 2 and 3 have had their results pushed onto the result queue.
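If the output order itself matters, another option (not what the code above does) is to give each thread its own slot to write into and to read the slots in index order after joining. A rough sketch:
#include <iostream>
#include <thread>
#include <vector>

int compute(int index) { return index; }  // stand-in for the real per-thread work

int main()
{
    std::vector<int> results(4);
    std::vector<std::thread> threads;
    for (int i = 0; i < 4; i++)
    {
        // each thread writes only to its own element, so no lock is needed
        threads.emplace_back([i, &results]() { results[i] = compute(i); });
    }
    for (auto& t : threads)
        t.join();
    for (int r : results)  // printed in index order, whatever the scheduling was
        std::cout << r;
    std::cout << '\n';
}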
As Sam Varshavchik mentioned regarding the undefined behavior when i goes out of scope, I suggest joining each thread created inside the loop, while i still exists, by adding this:
threads[i].join();
And don't forget to remove this:
for (auto& thread : threads)
thread.join();
Your main function should then look like this:
int main()
{
vector<thread> threads;
for (int i = 0; i < 4; i++)
{
auto var = [&]()
{
return func(i);
};
threads.push_back(thread(var));
threads[i].join(); // joining the thread after its creation.
}
system("pause");
return 0;
}
Amrane Abdelkader.
Related
I am currently practicing the use of multiple threads in C++. The program is simplified as follows. I have a global variable Objs, and within each task a get function is run on its own thread, with thread detach called afterwards.
In practice, get may take a long time to run. If there are many tasks, get will be called repeatedly (since each task has its own get function). I wonder if I can design the program so that when one task has already obtained the data with its get function and written it to Objs.text, the remaining tasks can directly access, or wait for, the data in Objs.text.
Can I use std::shared_ptr, std::future, or std::async in C++ to implement this? If so, how should the program be designed? Any advice is greatly appreciated.
#include <chrono>
#include <future>
#include <iostream>
#include <memory>
#include <thread>
#include <vector>
using namespace std;
class Info {
public:
Info() { Ids = 10; };
int Ids;
std::string text;
};
Info Objs;
class Module {
public:
Module() {}
virtual void check(int &id){};
virtual void get(){};
};
class task1 : public Module {
public:
task1() { std::cout << "task1" << std::endl; }
void check(int &id) override {
thread s(&task1::get, this);
s.detach();
};
// The function will first do some other work (here, I use sleep to represent
// that) then set the value of Objs.text
void get() override {
// The task may take 2 seconds , So use text instead
std::this_thread::sleep_for(std::chrono::seconds(5));
Objs.text = "AAAA";
std::cout << Objs.text << std::endl;
};
};
class task2 : public Module {
public:
task2() { std::cout << "task2" << std::endl; }
void check(int &id) override {
thread s(&task2::get, this);
s.detach();
};
// The function will first do some other work (here, I use sleep to represent
// that) then set the value of Objs.text
void get() {
std::this_thread::sleep_for(std::chrono::seconds(5));
Objs.text = "AAAA";
std::cout << Objs.text << std::endl;
};
};
int main() {
std::vector<std::unique_ptr<Module>> modules;
modules.push_back(std::make_unique<task1>());
modules.push_back(std::make_unique<task2>());
for (auto &m : modules) {
m->check(Objs.Ids);
}
std::this_thread::sleep_for(std::chrono::seconds(12));
return 0;
}
It is a plain producer-consumer problem.
You have multiple get() producers, and no consumers implemented yet.
First, you should have multiple Info objects for multithreading; if there is only one Info, multithreaded programming gains you nothing. I recommend a concurrent_queue.
Second, detach() is not a good idea, because you can't manage detached child threads. You'd better use join().
My code sample follows; I used Visual Studio 2022.
#include <chrono>
#include <iostream>
#include <thread>
#include <vector>
#include <concurrent_queue.h>
using namespace std;
class Info {
public:
Info() { Ids = 10; };
int Ids;
std::string text;
};
concurrency::concurrent_queue<Info> Objs;
void producer()
{
while (true) {
Info obj;
std::this_thread::sleep_for(std::chrono::seconds(5));
obj.text = "AAAA\n";
Objs.push(obj);
}
}
void consumer()
{
while (true) {
std::this_thread::sleep_for(std::chrono::seconds(1));
Info obj;
bool got_it = Objs.try_pop(obj);
if (got_it) {
std::cout << obj.text;
}
}
}
int main() {
const int NUM_CORES = 6;
std::vector<std::thread> threads;
for (int i = 0; i < NUM_CORES / 2; ++i)
threads.emplace_back(producer);
for (int i = 0; i < NUM_CORES / 2; ++i)
threads.emplace_back(consumer);
for (auto& th : threads) th.join();
}
I am fairly new to C++ and very new to using mutexes. I am trying to implement the thread-safe queue by #ChewOnThis_Trident from this answer.
Essentially I have different threads adding messages to a queue, and I need to preserve the order in which they are added. However, the messages require some conditional modification before being added. In the real code, listeners on separate threads call unique "handleMessage" functions that modify a message before adding it to the queue. A separate thread checks whether messages are in the queue and handles them in order. In the full code, I know the listeners receive the messages in the correct order, but they fail to add them to the queue in the correct order.
I think the problem is that some time elapses between when a message is received and when it is modified, causing messages to fall out of order.
For practical reasons, in the real code I can't do these modifications inside SafeQueue::enqueue.
In my example, two threads can add to the queue and one thread reads from it. The "message" in this case is a random int. UsesQ handles adding to the queue and the message modification (e.g. it makes all ints odd).
I think another mutex is needed when UsesQ::addQ is called, but it would need to be shared across all the threads, and I'm not sure how to implement it.
In the example I am also struggling to think of a way to test whether the order is correct.
Here is the example:
#include <queue>
#include <mutex>
#include <condition_variable>
#include <stdio.h>
#include <stdlib.h>
#include <iostream>
#include <assert.h>
#include <pthread.h>
#include <unistd.h>
class SafeQueue
{// A threadsafe-queue.
public:
SafeQueue(void)
: q()
, m()
, cv()
{}
~SafeQueue(void)
{}
// Add an element to the queue.
void enqueue(int i)
{
std::lock_guard<std::mutex> lock(m);
q.push(i);
cv.notify_one();
}
// Get the "front"-element.
// If the queue is empty, wait till an element is available.
int dequeue(void)
{
std::unique_lock<std::mutex> lock(m);
while(q.empty())
{
// release the lock for the duration of the wait and reacquire it afterwards.
cv.wait(lock);
}
int val = q.front();
q.pop();
return val;
}
private:
std::queue<int> q;
mutable std::mutex m;
std::condition_variable cv;
};
class UsesQ
{
private:
int readVal;
int lastReadVal = 1;
public:
SafeQueue & Q;
UsesQ(SafeQueue & Q): Q(Q){};
~UsesQ(){};
void addQ(int i)
{
if(i% 2 == 0)
{
i++;//some conditional modification to the initial "message"
}
Q.enqueue(i);
}
void removeQ()
{
readVal = Q.dequeue();
}
};
void* run_add(void* Ptr)
{
UsesQ * UsesQPtr = (UsesQ *)Ptr;
for(;;)
{
int i = rand();//simulate an incoming "message"
UsesQPtr->addQ(i);
}
pthread_exit (NULL);
return NULL;
}
void* run_remove(void* Ptr)
{
UsesQ * UsesQPtr = (UsesQ *)Ptr;
for(;;)
{
UsesQPtr->removeQ();
}
pthread_exit (NULL);
return NULL;
}
int main()
{
SafeQueue Q;
UsesQ * UsesQPtr = new UsesQ(std::ref(Q));
pthread_t thread1;
pthread_create(&thread1, NULL, run_add, UsesQPtr);
pthread_t thread2;
pthread_create(&thread2, NULL, run_add, UsesQPtr);
pthread_t thread3;
pthread_create(&thread3, NULL, run_remove, UsesQPtr);
while(1)
{
usleep(1);
printf(".\n");
}
};
Compiled with the pthread flag:
g++ main.cpp -pthread
Thank you for any help.
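For what it's worth, the extra mutex described above would have to be acquired at the moment a message arrives and held through both the modification and the enqueue, so that no other producer can slip in between. A rough sketch, assuming the SafeQueue above and an illustrative orderMutex shared by all producers:
std::mutex orderMutex;  // illustrative: one mutex shared by every producer thread

void handleAndQueue(SafeQueue& Q, int message)
{
    // take the lock before modifying and release it only after the enqueue,
    // so modify + enqueue act as a single step relative to the other producers
    std::lock_guard<std::mutex> lock(orderMutex);
    if (message % 2 == 0)
        message++;        // the conditional modification from the example
    Q.enqueue(message);
}
This only preserves receipt order if the lock is taken before a later message can reach the same point, which depends on how the listeners hand their messages over.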
I wrote this sample program to mimic what I'm trying to do in a larger program.
I have some data that will come from the user and be passed into a thread for some processing. I am using mutexes around the data and flags to signal when there is data.
Using the lambda expression, is a pointer to *this sent to the thread? I seem to be getting the behavior I expect from the cout statement.
Are the mutexes used properly around the data?
Is putting the atomics and mutexes as a private member of the class a good move?
foo.h
#pragma once
#include <atomic>
#include <thread>
#include <vector>
#include <mutex>
class Foo
{
public:
Foo();
~Foo();
void StartThread();
void StopThread();
void SendData();
private:
std::atomic<bool> dataFlag;
std::atomic<bool> runBar;
void bar();
std::thread t1;
std::vector<int> data;
std::mutex mx;
};
foo.c
#include "FooClass.h"
#include <thread>
#include <string>
#include <iostream>
Foo::Foo()
{
dataFlag = false;
}
Foo::~Foo()
{
StopThread();
}
void Foo::StartThread()
{
runBar = true;
t1 = std::thread([=] {bar(); });
return;
}
void Foo::StopThread()
{
runBar = false;
if(t1.joinable())
t1.join();
return;
}
void Foo::SendData()
{
mx.lock();
for (int i = 0; i < 5; ++i) {
data.push_back(i);
}
mx.unlock();
dataFlag = true;
}
void Foo::bar()
{
while (runBar)
{
if(dataFlag)
{
mx.lock();
for(auto it = data.begin(); it < data.end(); ++it)
{
std::cout << *it << '\n';
}
mx.unlock();
dataFlag = false;
}
}
}
main.cpp
#include "FooClass.h"
#include <iostream>
#include <string>
int main()
{
Foo foo1;
std::cout << "Type anything to end thread" << std::endl;
foo1.StartThread();
foo1.SendData();
// type something to end threads
char a;
std::cin >> a;
foo1.StopThread();
return 0;
}
You ensure that the thread is joined using RAII techniques? Check.
All data access/modification is either protected through atomics or mutexs? Check.
Mutex locking uses std::lock_guard? Nope. Using std::lock_guard wraps your lock() and unlock() calls in RAII. This ensures that even if an exception occurs while the lock is held, the lock is still released.
Is putting the atomics and mutexes as a private member of the class a good move?
It's neither good nor bad, but in this scenario, where Foo is a wrapper around a std::thread that does work and controls the synchronization, it makes sense.
Using the lambda expression, is a pointer to *this sent to the thread?
Yes, you can also do t1 = std::thread([this]{bar();}); to make it more explicit.
As it stands, with your dataFlag assignments after the locks, you may encounter problems. If you call SendData twice, bar may process the first batch but be halted before setting dataFlag = false; the second call then adds its data and sets the flag to true, only to have bar set it back to false. You then have data that has been "sent" but bar doesn't think there's anything to process.
There may be other tricky situations, but this is just one example; moving the flag assignment inside the lock clears up the problem.
For example, your SendData should look like this:
void Foo::SendData()
{
std::lock_guard<std::mutex> guard(mx);
for (int i = 0; i < 5; ++i) {
data.push_back(i);
}
dataFlag = true;
}
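As an aside (not part of the answer above), the spin loop in bar() can be avoided with a condition_variable, so the worker sleeps until SendData signals it. A rough sketch, assuming Foo also gets a std::condition_variable cv member (and the corresponding <condition_variable> and <chrono> includes):
void Foo::SendData()
{
    {
        std::lock_guard<std::mutex> guard(mx);
        for (int i = 0; i < 5; ++i)
            data.push_back(i);
        dataFlag = true;              // set while holding the lock, as above
    }
    cv.notify_one();                  // wake the worker
}

void Foo::bar()
{
    while (runBar)
    {
        std::unique_lock<std::mutex> lock(mx);
        // wake on a signal, or time out so that runBar = false from StopThread is still noticed
        cv.wait_for(lock, std::chrono::milliseconds(100),
                    [this] { return dataFlag.load(); });
        if (dataFlag)
        {
            for (int v : data)
                std::cout << v << '\n';
            data.clear();             // this sketch also clears the processed batch
            dataFlag = false;
        }
    }
}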
Hello,
I am quite new to C++, but I have 6 years of Java experience, 2 years of C experience, and some knowledge of concurrency basics. I am trying to create a thread pool to handle tasks; it is below, with the associated test main.
it seems like the error is generated from
void ThreadPool::ThreadHandler::enqueueTask(void (*task)(void)) {
std::lock_guard<std::mutex> lock(queueMutex);
as reported by my debugger. Doing traditional cout debugging, I found out that it sometimes works without segfaulting, and that removing
threads.emplace(handler->getSize(), handler);
from ThreadPool::enqueueTask() improves stability greatly.
Overall I think it is related to my bad use of the condition_variable (called idler).
compiler: minGW-w64 in CLion
.cpp
#include <iostream>
#include "ThreadPool.h"
ThreadPool::ThreadHandler::ThreadHandler(ThreadPool *parent) : parent(parent) {
thread = std::thread([&]{
while (this->parent->alive){
if (getSize()){
std::lock_guard<std::mutex> lock(queueMutex);
(*(queue.front()))();
queue.pop_front();
} else {
std::unique_lock<std::mutex> lock(idlerMutex);
idler.wait(lock);
}
}
});
}
void ThreadPool::ThreadHandler::enqueueTask(void (*task)(void)) {
std::lock_guard<std::mutex> lock(queueMutex);
queue.push_back(task);
idler.notify_all();
}
size_t ThreadPool::ThreadHandler::getSize() {
std::lock_guard<std::mutex> lock(queueMutex);
return queue.size();
}
void ThreadPool::enqueueTask(void (*task)(void)) {
std::lock_guard<std::mutex> lock(threadsMutex);
std::map<int, ThreadHandler*>::iterator iter = threads.begin();
threads.erase(iter);
ThreadHandler *handler = iter->second;
handler->enqueueTask(task);
threads.emplace(handler->getSize(), handler);
}
ThreadPool::ThreadPool(size_t size) {
for (size_t i = 0; i < size; ++i) {
std::lock_guard<std::mutex> lock(threadsMutex);
ThreadHandler *handler = new ThreadHandler(this);
threads.emplace(handler->getSize(), handler);
}
}
ThreadPool::~ThreadPool() {
std::lock_guard<std::mutex> lock(threadsMutex);
auto it = threads.begin(), end = threads.end();
for (; it != end; ++it) {
delete it->second;
}
}
.h
#ifndef WLIB_THREADPOOL_H
#define WLIB_THREADPOOL_H
#include <mutex>
#include <thread>
#include <list>
#include <map>
#include <condition_variable>
class ThreadPool {
private:
class ThreadHandler {
std::condition_variable idler;
std::mutex idlerMutex;
std::mutex queueMutex;
std::thread thread;
std::list<void (*)(void)> queue;
ThreadPool *parent;
public:
ThreadHandler(ThreadPool *parent);
void enqueueTask(void (*task)(void));
size_t getSize();
};
std::multimap<int, ThreadHandler*> threads;
std::mutex threadsMutex;
public:
bool alive = true;
ThreadPool(size_t size);
~ThreadPool();
virtual void enqueueTask(void (*task)(void));
};
#endif //WLIB_THREADPOOL_H
main:
#include <iostream>
#include <ThreadPool.h>
ThreadPool pool(3);
void fn() {
std::cout << std::this_thread::get_id() << '\n';
pool.enqueueTask(fn);
};
int main() {
std::cout << "Hello, World!" << std::endl;
pool.enqueueTask(fn);
return 0;
}
Your main() function invokes enqueueTask().
Immediately afterwards, your main() returns.
This gets the gears in motion for winding down your process. This involves invoking the destructors of all global objects.
ThreadPool's destructor then proceeds to delete all of the dynamically-allocated ThreadHandlers, together with the std::threads they own.
While the threads are still running. Hilarity ensues.
You need to implement the process for an orderly shutdown of all threads.
This means setting alive to false, kicking all of the threads in the shins (notifying their condition variables), and then joining all of the threads, before letting nature take its course and finally destroying everything.
P.S. -- you need to fix how alive is being checked, and you also need to make access to alive thread-safe, protected by a mutex. The problem is that a thread could be holding a lock on either of two different mutexes at that point, which makes the process somewhat complicated. Some redesign is in order here.
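A rough sketch of what that orderly shutdown could look like, assuming alive is turned into a std::atomic<bool> and ThreadHandler grows two hypothetical helpers, wake() (calls idler.notify_all()) and join() (joins its std::thread); <vector> would also be needed:
ThreadPool::~ThreadPool() {
    alive = false;                            // tell every worker loop to stop
    std::vector<ThreadHandler*> handlers;
    {
        // copy the handlers out so threadsMutex is not held while joining,
        // since a running task may itself call enqueueTask()
        std::lock_guard<std::mutex> lock(threadsMutex);
        for (auto& entry : threads)
            handlers.push_back(entry.second);
        threads.clear();
    }
    for (ThreadHandler* h : handlers) {
        h->wake();                            // kick the thread out of idler.wait()
        h->join();                            // wait for its loop to finish
        delete h;
    }
}
Even then there is a lost-wakeup window, because a worker can test alive and then block on idler just after the notify; closing it requires the alive check and the wait to share one mutex (and a wait predicate), and enqueueTask would need to cope with an empty map once shutdown starts, which is the redesign hinted at above.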
Sometimes this implementation and execution of BlockingQueue just works. Sometimes it segfaults. Any idea why?
#include <thread>
using std::thread;
#include <mutex>
using std::mutex;
#include <iostream>
using std::cout;
using std::endl;
#include <queue>
using std::queue;
#include <string>
using std::string;
using std::to_string;
#include <functional>
using std::ref;
template <typename T>
class BlockingQueue {
private:
mutex mutex_;
queue<T> queue_;
public:
T pop() {
this->mutex_.lock();
T value = this->queue_.front();
this->queue_.pop();
this->mutex_.unlock();
return value;
}
void push(T value) {
this->mutex_.lock();
this->queue_.push(value);
this->mutex_.unlock();
}
bool empty() {
this->mutex_.lock();
bool check = this->queue_.empty();
this->mutex_.unlock();
return check;
}
};
void fillWorkQueue(BlockingQueue<string>& workQueue) {
int size = 40000;
for(int i = 0; i < size; i++)
workQueue.push(to_string(i));
}
void doWork(BlockingQueue<string>& workQueue) {
while(!workQueue.empty()) {
workQueue.pop();
}
}
void multiThreaded() {
BlockingQueue<string> workQueue;
fillWorkQueue(workQueue);
thread t1(doWork, ref(workQueue));
thread t2(doWork, ref(workQueue));
t1.join();
t2.join();
cout << "done\n";
}
int main() {
cout << endl;
// Multi Threaded
cout << "multiThreaded\n";
multiThreaded();
cout << endl;
}
See here:
What do I get from front() of empty std container?
Bad things happen if you call .front() on an empty container, so check .empty() first.
Try:
T pop() {
this->mutex_.lock();
T value;
if( !this->queue_.empty() )
{
value = this->queue_.front(); // undefined behavior if queue_ is empty
// may segfault, may throw, etc.
this->queue_.pop();
}
this->mutex_.unlock();
return value;
}
Note: Since atomic operations are important on this kind of queue, I'd recommend API changes:
bool pop(T &t); // returns false if there was nothing to read.
Better yet, if you're actually using this where it matters, you probably want to mark items as in use before deleting them, in case of failure.
bool peekAndMark(T &t); // allows one "marked" item per thread
void deleteMarked(); // if an item is marked correctly, pops it.
void unmark(); // abandons the mark. (rollback)
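A sketch of what the first suggested signature could look like on the queue above:
bool pop(T &t) {
    std::lock_guard<std::mutex> lock(this->mutex_);
    if (this->queue_.empty())
        return false;              // nothing to read
    t = this->queue_.front();
    this->queue_.pop();
    return true;
}
doWork could then loop with something like: string s; while (workQueue.pop(s)) { /* handle s */ } -- although that loop also stops on a momentarily empty queue, so a real consumer usually needs a condition variable or a shutdown flag as well.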
The problem lies here:
while (!workQueue.empty()) {
    workQueue.pop();
}
You acquire the mutex while checking whether a value is left, then you release it; another thread may then run, see that a value is left, and pop it. In the worst case no item is left afterwards and the first thread tries to pop even though no element remains.
The solution is to make the front/pop calls on the internal queue in the same locked section as the check for empty; then the behavior is always defined.
Another suggestion is to use std::lock_guard when working with a mutex, because it improves readability and ensures that the mutex is released no matter what happens.
Taking those two pieces of advice into account, your pop method could look like this:
T pop() {
std::lock_guard<std::mutex> lock(this->mutex_); //mutex_ is locked
T value;
if( !this->queue_.empty() )
{
value = this->queue_.front();
this->queue_.pop();
}
return value;
} //mutex_ is freed