I'm trying to learn std::thread in C++ today. My goal is to spawn two threads: one a network listener, and the other a network transmitter. The listener will get feedback messages from the destination server and then send commands to the transmitter through a shared (non-blocking) queue. I'm a little confused about the proper way to share the queue.
I have a class called Network that contains everything for the above, here's part of it.
//part of my class
std::mutex mMessageMutex;
//message is class for messages to share
std::queue<message> mMessageQueue;
std::thread mTCPthread;
std::thread mUDPthread;
void Network::threadTest(){
    mTCPthread = std::thread(&Network::tcpThread, this);
    mUDPthread = std::thread(&Network::udpThread, this);
    mTCPthread.detach();
    mUDPthread.detach();
}
void Network::tcpThread(){
    //wait for messages
    while(1){
        if(messageFromNetwork)
            mMessageQueue.push(_message);
    }
}
void Network::udpThread(){
    //do some work
    while(1){
        if(mMessageQueue.size() > 0){
            message msg = mMessageQueue.front();
            mMessageQueue.pop();
            //process message
        }
    }
}
Then I create a new network object and kick off threadTest in this example main function. Finally I wait in a loop for some kind of user intervention.
///////////////////
int main(void){
    std::unique_ptr<Network> _network(new Network());
    _network->threadTest();
    while(1){
        //wait on user input or stop signal
    }
}
So from my reading I think that as long as I don't exit main (or, if I did this in another function, as long as I don't let _network go out of scope), then mMessageMutex and mMessageQueue will be available for the two threads to use?
I also just learned how to use unique_ptr (things have changed in C++ for an old timer!). So I'm hoping that when _network does go out of scope, mTCPthread, mUDPthread, and _network will all be destroyed and not leak.
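For reference, here is a minimal sketch of what I imagine the guarded access would look like; wrapping every queue access in a std::lock_guard on mMessageMutex is just my guess at the "proper way" (and it assumes message is default-constructible and copyable), so please correct me if that's wrong:
//sketch only: every access to mMessageQueue goes through mMessageMutex
void Network::tcpThread(){
    while(1){
        if(messageFromNetwork){
            std::lock_guard<std::mutex> lock(mMessageMutex); //guard the push
            mMessageQueue.push(_message);
        }
    }
}
void Network::udpThread(){
    while(1){
        message msg;
        bool haveMsg = false;
        {
            std::lock_guard<std::mutex> lock(mMessageMutex); //guard front() and pop()
            if(!mMessageQueue.empty()){
                msg = mMessageQueue.front();
                mMessageQueue.pop();
                haveMsg = true;
            }
        }
        if(haveMsg){
            //process message outside the lock
        }
    }
}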
Thank you
I have two threads. The first creates a Logic object, detaching a second thread that spins, blocking on an OpenSSL socket to receive messages:
struct Logic
{
Logic()
{
std::thread t1(&Logic::run, this);
t1.detach();
}
void run()
{
while(true)
{
// Gets data from SSL (blocking socket)
// Processes data
// Updates timestamp
}
}
uint64_t timestamp;
};
The first thread returns, enters a while loop, and continually checks whether the detached thread is still running (or whether it's blocked permanently).
while(true)
{
Logic logic;
while(true)
{
if(timestamp_not_updated)
{
break; // Break, destroy current Logic object and create another
}
}
}
If the timestamp stops being updated, the inner while loop breaks, causing the Logic object to be destroyed and a new one created.
When this restart behaviour triggers I get a seg fault. thread apply all bt shows 3 threads, not 2. The original detached thread (blocking on OpenSSL) still exists; I thought it would be destroyed along with the object.
How do I stop a detached thread which is blocking/waiting on a resource, so I can restart my class? I need the blocking behaviour because I don't have anything else to do (besides receive the packet) and it's better for performance than repeatedly polling OpenSSL.
I want to make a function so that, when a buffer is received from the socket, the thread freezes the whole program outside my function until my function is finished. I tried the approaches below.
Function Listen
void Listen(can* _c) {
while (true)
{
std::lock_guard<std::mutex>guard(_c->connection->mutex);
thread t(&connect_tcp::Recv_data,_c->connection,_c->s,ref(_c->response),_c->signals);
if (t.joinable())
t.join();
}
}
Function dataset_browseCan
void dataset_browseCan(can* _c) {
thread org_th(Listen, _c); // I call thread here
org_th.detach();
dataset_browse(_c->cotp, _c->mms_obj, _c->connection, _c->response, _c->list, _c->size_encoder, _c->s);
dataset_signals_browse(_c->cotp, _c->mms_obj, _c->connection, _c->response, _c->list, _c->size_encoder, _c->s);
Sleep(800);
_c->signals = new Signals[_c->response.real_signals_and_values.size()];
}
Function Recv Data
void connect_tcp::Recv_data(SOCKET s,mms_response &response,Signals *signals) {
LinkedList** list = new LinkedList * [1000];
uint8_t* buffer = new uint8_t [10000];
Sleep(800);
/*std::lock_guard<std::mutex>guard(mutex);*/
thread j(recv,s, (char*)buffer, 10000, 0);
j.join();
/*this->mutex.unlock();*/
decode_bytes(response,buffer, list,signals);
}
I tried a mutex and this_thread::sleep_for(), but my main function keeps running every time.
Is it possible to make the program freeze?
You use threads in order to allow things to keep running while something else is happening, so wanting to "stop main" seems counter-intuitive.
However, if you want to share data between threads (e.g. between the thread that runs main and a background thread) then you need to use some form of synchronization. One way to do that is to use a std::mutex. If you lock the mutex before every access, and unlock it afterwards (using std::lock_guard or std::unique_lock) then it will prevent another thread from locking the same mutex while you are accessing the data.
If you need to prevent concurrent access for a long time, then you should not hold a mutex for the whole time. Either consider whether threads are the best solution to your problem, or use a mutex-protected flag to indicate whether the data is ready, and then either poll or use std::condition_variable or similar to wait until the flag is set.
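For example, here is a minimal sketch of the mutex-protected flag plus std::condition_variable approach (the names and the placeholder work are illustrative, not taken from your code):
#include <condition_variable>
#include <mutex>
#include <thread>
std::mutex m;
std::condition_variable cv;
bool dataReady = false; // the mutex-protected flag
void backgroundWork(){
    // ... receive the buffer and fill in the shared data here ...
    {
        std::lock_guard<std::mutex> lock(m);
        dataReady = true;      // set the flag under the mutex
    }
    cv.notify_one();           // wake the waiting thread
}
int main(){
    std::thread t(backgroundWork);
    std::unique_lock<std::mutex> lock(m);
    cv.wait(lock, []{ return dataReady; }); // main "freezes" here until the flag is set
    // ... it is now safe to use the shared data ...
    t.join();
}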
UPDATE: I have provided the reason for the problem and its solution in my answer below.
I want to implement multi-threading based on the producer-consumer approach for an image processing task. In my case, the producer thread should grab images and put them into a container, whereas the consumer thread should extract images from the container. I think I should use a queue to implement the container.
I want to use the following code as suggested in this SO answer. But I have become quite confused about the implementation of the container and how to put the incoming image into it in the producer thread.
PROBLEM: The image displayed by the first consumer thread does not contain the full data, and the second consumer thread never displays any image. Maybe there is some race condition or locking issue due to which the second thread is not able to access the data of the queue at all. I have already tried using a mutex.
#include <vector>
#include <thread>
#include <memory>
#include <mutex>
#include <queue>
#include <opencv2/highgui.hpp>
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
using namespace std;
using namespace cv;
std::mutex mu;
struct ThreadSafeContainer
{
queue<unsigned char*> safeContainer;
};
struct Producer
{
Producer(std::shared_ptr<ThreadSafeContainer> c) : container(c)
{
}
void run()
{
while(true)
{
// grab image from camera
// store image in container
Mat image(400, 400, CV_8UC3, Scalar(10, 100,180) );
unsigned char *pt_src = image.data;
mu.lock();
container->safeContainer.push(pt_src);
mu.unlock();
}
}
std::shared_ptr<ThreadSafeContainer> container;
};
struct Consumer
{
Consumer(std::shared_ptr<ThreadSafeContainer> c) : container(c)
{
}
~Consumer()
{
}
void run()
{
while(true)
{
// read next image from container
mu.lock();
if (!container->safeContainer.empty())
{
unsigned char *ptr_consumer_Image;
ptr_consumer_Image = container->safeContainer.front(); //The front of the queue contain the pointer to the image data
container->safeContainer.pop();
Mat image(400, 400, CV_8UC3);
image.data = ptr_consumer_Image;
imshow("consumer image", image);
waitKey(33);
}
mu.unlock();
}
}
std::shared_ptr<ThreadSafeContainer> container;
};
int main()
{
//Pointer object to the class containing a "container" which will help "Producer" and "Consumer" to put and take images
auto ptrObject_container = make_shared<ThreadSafeContainer>();
//Pointer object to the Producer...initialize the "container" variable of "Struct Producer" with the above created common "container"
auto ptrObject_producer = make_shared<Producer>(ptrObject_container);
//FIRST Pointer object to the Consumer...initialize the "container" variable of "Struct Consumer" with the above created common "container"
auto first_ptrObject_consumer = make_shared<Consumer>(ptrObject_container);
//SECOND Pointer object to the Consumer...initialize the "container" variable of "Struct Consumer" with the above created common "container"
auto second_ptrObject_consumer = make_shared<Consumer>(ptrObject_container);
//RUN producer thread
thread producerThread(&Producer::run, ptrObject_producer);
//RUN first thread of Consumer
thread first_consumerThread(&Consumer::run, first_ptrObject_consumer);
//RUN second thread of Consumer
thread second_consumerThread(&Consumer::run, second_ptrObject_consumer);
//JOIN all threads
producerThread.join();
first_consumerThread.join();
second_consumerThread.join();
return 0;
}
I don't see an actual question in your original post, so I'll give you the reference material I used to implement producer-consumer in my college course.
http://cs360.byu.edu/static/lectures/winter-2014/semaphores.pdf
Slides 13 and 17 give good examples of producer-consumer
I made use of this in the lab which I have posted on my github here:
https://github.com/qzcx/Internet_Programming/tree/master/ThreadedMessageServer
If you look in my server.cc you can see my implementation of the producer-consumer pattern.
Remember that with this pattern you can't switch the order of the wait statements, or else you can end up in deadlock.
Hope this is helpful.
EDIT:
Okay, so here is a summary of the consumer-producer pattern in my code linked above. The idea behind the producer consumer is to have a thread safe way of passing tasks from a "producer" thread to "consumer" worker threads. In the case of my example, the work to be done is to handle client requests. The producer thread (.serve()) monitors the incoming socket and passes the connection to consumer threads (.handle()) to handle the actual request as they come in. All of the code for this pattern is found in the server.cc file (with some declarations/imports in server.h).
For the sake of being brief, I am leaving out some detail. Be sure to go through each line and understand what is going on. Look up the library functions I am using and what the parameters mean. I'm giving you a lot of help here, but there is still plenty of work for you to do to gain a full understanding.
PRODUCER:
Like I mentioned above, the entire producer thread is found in the .serve() function. It does the following things:
Initializes the semaphores. There are two versions here because of OS differences. I programmed on OS X but had to turn in code on Linux. Since semaphores are tied to the OS, it is important to understand how to use them in your particular setup.
It sets up the socket for the client to talk to. Not important for your application.
Creates the consumer threads.
Watches the client socket and uses the producer pattern to pass items to the consumers. This code is below
At the bottom of the .serve() function you can see the following code:
while ((client = accept(server_,(struct sockaddr *)&client_addr,&clientlen)) > 0) {
sem_wait(clients_.e); //buffer check
sem_wait(clients_.s);
clients_.q->push(client);
sem_post(clients_.s);
sem_post(clients_.n); //produce
}
First, you check the buffer semaphore "e" to ensure there is room in your queue to place the request. Second, acquire the semaphore "s" for the queue. Then add your task (In this case, a client connection) to the queue. Release the semaphore for the queue. Finally, signal to the consumers using semaphore "n".
Consumer:
In the .handle() method you really only care about the very beginning of the thread.
while(1){
sem_wait(clients_.n); //consume
sem_wait(clients_.s);
client = clients_.q->front();
clients_.q->pop();
sem_post(clients_.s);
sem_post(clients_.e); //buffer free
//Handles the client requests until they disconnect.
}
The consumer does similar actions to the producer, but in the opposite order. First the consumer waits for the producer to signal on the semaphore "n". Remember, since there are multiple consumers it is completely random which consumer ends up acquiring this semaphore; they fight over it, but only one can move past this point per sem_post of that semaphore. Second, they acquire the queue semaphore just like the producer does, pop the first item off the queue, and release the semaphore. Finally, they signal on the buffer semaphore "e" that there is now more room in the buffer.
Disclaimer:
I know the semaphores have terrible names. They match my professor's slides since that's where I learned it. I think they stand for the following:
e for empty : this semaphore stops the producer from pushing more items on the queue if it is full.
s for semaphore : My least favorite. But my professor's style was to have a struct for each shared data struct. In this case "clients_" is the struct including all three semaphores and the queue. Basically this semaphore is there to ensure no two threads touch the same data structure at the same time.
n for number of items in the queue.
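If you would rather stay in standard C++ than use POSIX semaphores, the same e/s/n pattern maps onto C++20's std::counting_semaphore. This is only a rough sketch under that assumption, with the queued work reduced to a plain int rather than a client connection:
#include <queue>
#include <semaphore>
#include <thread>
std::queue<int> q;
std::counting_semaphore<1000> e(1000); // "empty": free slots left in the buffer
std::counting_semaphore<1>    s(1);    // "semaphore": protects the queue itself
std::counting_semaphore<1000> n(0);    // "number": items waiting in the queue
void produce(int item){
    e.acquire();   // wait for buffer space
    s.acquire();   // lock the queue
    q.push(item);
    s.release();
    n.release();   // signal the consumers
}
int consume(){
    n.acquire();   // wait for an item
    s.acquire();   // lock the queue
    int item = q.front();
    q.pop();
    s.release();
    e.release();   // one more free slot
    return item;
}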
OK, so to make it as simple as possible: you will need two threads, a mutex, a queue, and two thread processing functions.
Header.h
static DWORD WINAPI ThreadFunc_Prod(LPVOID lpParam);
static DWORD WINAPI ThreadFunc_Con(LPVOID lpParam);
HANDLE m_hThread[2];
queue<int> m_Q;
mutex m_M;
Add all the other stuff you need; these are just the core parts.
Source.cpp
DWORD dwThreadId;
m_hThread[0] = CreateThread(NULL, 0, this->ThreadFunc_Prod, this, 0, &dwThreadId);
// same for 2nd thread
DWORD WINAPI Server::ThreadFunc_Prod(LPVOID lpParam)
{
    cYourClass* o = (cYourClass*) lpParam;   // the object passed as the "this" argument above
    int nData2Q = GetData(); // this is whatever you use to get your data
    o->m_M.lock();
    o->m_Q.push(nData2Q);
    o->m_M.unlock();
    return 0;
}
DWORD WINAPI Server::ThreadFunc_Con(LPVOID lpParam)
{
    cYourClass* o = (cYourClass*) lpParam;
    int res;
    o->m_M.lock();
    if (o->m_Q.empty())
    {
        // bad, no data; escape or wait or whatever, just don't block the context
    }
    else
    {
        res = o->m_Q.front();
        o->m_Q.pop();
    }
    o->m_M.unlock();
    // do your magic with res here
    return 0;
}
And at the end of main, don't forget to use WaitForMultipleObjects.
Plenty of examples can be found directly on MSDN, with quite nice commentary.
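For instance, assuming m_hThread holds the two handles created above, the wait at the end of main could look roughly like this:
// Block until both worker threads have finished, then release their handles.
WaitForMultipleObjects(2, m_hThread, TRUE, INFINITE);
CloseHandle(m_hThread[0]);
CloseHandle(m_hThread[1]);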
PART2:
OK, I believe the header is self-explanatory, so I will give a bit more description of the source. Somewhere in your source (it can even be in the constructor) you create the threads - the way a thread is created may differ, but the idea is the same (on Windows a thread runs right after its creation; on POSIX you have to join it). I believe you have a function somewhere which starts all your magic; let's call it MagicKicker().
In the POSIX case, create the threads in the constructor and join them in your MagicKicker(); on Windows, create them in MagicKicker().
Then you need to declare (in the header) the two functions where your thread functions will be implemented, ThreadFunc_Prod and ThreadFunc_Con. The important trick here is that you pass a reference to your object to these functions (because the thread functions are basically static), so you can easily access shared resources such as queues, mutexes, etc.
These functions do the actual work. You already have everything you need in your code; just use this as the adding routine in the producer:
int nData2Q = GetData(); // this is whatever you use to get your data
m_M.lock();              // lock the mutex so nobody else can touch the queue
m_Q.push(nData2Q);       // put the data from the producer into the shared queue
m_M.unlock();            // unlock the mutex so the consumer can acquire it
And add this to your consumer:
int res;
m_M.lock();          // lock the mutex so the producer cannot touch the queue
if (m_Q.empty())     // check if there is something in the queue
{
    // nothing in your queue yet (or it has already been drained)
    // skip this run; you can e.g. sleep for some time to let the queue build up
    Sleep(100);
    continue;        // in case the body is wrapped in a while loop
    return;          // in case you are running in a framework with its own thread loop
}
else                 // there is actually something
{
    res = m_Q.front();   // get the oldest element of the queue
    m_Q.pop();           // remove it from the queue
}
m_M.unlock();        // unlock the mutex so the producer can add new items to the queue
// do your magic with res here
The problem mentioned in my question was that the image displayed by the consumer thread did not contain complete data. The displayed image contained several patches, which suggests the consumer could not get the full data produced by the producer thread.
ANSWER: The reason behind it is the declaration of the Mat image inside the while loop of the producer thread. That Mat instance is destroyed once the next round of the while loop starts, so the consumer thread was never able to access the data of the Mat image created in the producer thread; the raw pointer it popped from the queue was already dangling.
SOLUTION: I should have done something like this, storing whole Mat objects in the queue instead of raw data pointers:
struct ThreadSafeContainer
{
queue<Mat> safeContainer;
};
struct Producer
{
Producer(std::shared_ptr<ThreadSafeContainer> c) : container(c)
{
}
void run()
{
while(true)
{
// grab image from camera
// store image in container
Mat image(400, 400, CV_8UC3, Scalar(10, 100,180) );
mu.lock();
container->safeContainer.push(image);
mu.unlock();
}
}
std::shared_ptr<ThreadSafeContainer> container;
};
struct Consumer
{
Consumer(std::shared_ptr<ThreadSafeContainer> c) : container(c)
{
}
~Consumer()
{
}
void run()
{
while(true)
{
// read next image from container
mu.lock();
if (!container->safeContainer.empty())
{
Mat image= container->safeContainer.front(); //The front of the queue contain the image
container->safeContainer.pop();
imshow("consumer image", image);
waitKey(33);
}
mu.unlock();
}
}
std::shared_ptr<ThreadSafeContainer> container;
};
I'm trying to get to grips with multithreading and it's not working so far. I'm creating a program which allows serial communication with a device, and it works quite well without multithreading. Now I want to introduce threads: one thread to continuously send packets, one thread to receive and process packets, and another thread for a GUI.
The first two threads need access to four classes in total, but using pthread_create() I can only pass one argument. I then stumbled upon a post here on Stack Overflow (pthread function from a class) where Jeremy Friesner presents a very elegant way to handle this. I then figured that it's easiest to create a Core class which contains all the objects my threads need access to, as well as all the functions for the threads. So here's a sample from my class Core:
/** CORE.CPP **/
#include "SerialConnection.h" // Clas for creating a serial connection using termios
#include "PacketGenerator.h" // Allows to create packets to be transfered
#include <pthread.h>
#define NUM_THREADS 4
class Core{
private:
SerialConnection serial;   // One of the objects my threads need access to
PacketGenerator generator; // Creates the packets sent by thread_send_function
pthread_t threads[NUM_THREADS];
pthread_t _thread;
public:
Core();
~Core();
void launch_threads(); // Supposed to launch all threads
static void *thread_send(void *arg); // See the linked post above
void thread_send_function(); // See the linked post above
};
Core::Core(){
// Open serial connection
serial.open_connection();
}
Core::~Core(){
// Close serial connection
serial.close_connection();
}
void Core::launch_threads(){
pthread_create(&threads[0], NULL, thread_send, this);
cout << "CORE: Killing threads" << endl;
pthread_exit(NULL);
}
void *Core::thread_send(void *arg){
cout << "THREAD_SEND launched" << endl;
((Core *)arg)->thread_send_function();
return NULL;
}
void Core::thread_send_function(){
generator.create_hello_packet();
generator.send_packet(serial);
pthread_exit(NULL);
}
The problem now is that my serial object crashes with a segmentation fault (that pointer stuff going on in Core::thread_send(void *arg) makes me suspicious). Even when it does not crash, no data is transmitted over the serial connection, even though the program executes without any errors. Execution from main:
/** MAIN.CPP (extract) VARIANT 1 **/
int main(){
Core core;
core.launch_threads(); // No data is transferred
}
However, if I call the thread_send_function directly (the one the thread is supposed to execute), the data is transmitted over the serial connection flawlessly:
/** MAIN.CPP (extract) VARIANT 2 **/
int main(){
Core core;
core.thread_send_function(); // Data transfer works
}
Now I'm wondering what the proper way of dealing with this situation is. Instead of that trickery in Core.cpp, should I just create a struct holding pointers to the different classes I need and then pass that struct to the pthread_create() function? What is the best solution for this problem in general?
The problem you have is that your main thread exits the moment it created the other thread, at which point the Core object is destroyed and the program then exits completely. This happens while your newly created thread tries to use the Core object and send data; you either see absolutely nothing happening (if the program exits before the thread ever gets to do anything) or a crash (if Core is destroyed while the thread tries to use it). In theory you could also see it working correctly, but because the thread probably takes a bit to create the packet and send it, that's unlikely.
You need to use pthread_join to block the main thread just before quitting, until the thread is done and has exited.
And anyway, you should be using C++11's thread support or at least Boost's. That would let you get rid of the low-level mess you have with the pointers.
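As a rough illustration of that suggestion (heavily simplified, with the serial work replaced by a stub, so this is not your actual Core), a std::thread version that joins in the destructor could look like this:
#include <iostream>
#include <thread>
class Core {
    std::thread sendThread;
    void thread_send_function(){
        // stand-in for "create a hello packet and send it over the serial connection"
        std::cout << "sending packet" << std::endl;
    }
public:
    void launch_threads(){
        sendThread = std::thread(&Core::thread_send_function, this);
    }
    ~Core(){
        if (sendThread.joinable())
            sendThread.join();   // keep Core alive until the thread has finished
    }
};
int main(){
    Core core;
    core.launch_threads();
}   // core's destructor joins here, so the thread cannot outlive the object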
Suppose we have two workers. Each worker has an id of 0 and 1. Also suppose that we have jobs arriving all the time, each job has also an identifier 0 or 1 which specifies which worker will have to do this job.
I would like to create 2 threads that are initially locked, and then when two jobs arrive, unlock them, each of them does their job and then lock them again until other jobs arrive.
I have the following code:
#include <iostream>
#include <thread>
#include <mutex>
using namespace std;
struct job{
thread jobThread;
mutex jobMutex;
};
job jobs[2];
void executeJob(int worker){
while(true){
jobs[worker].jobMutex.lock();
//do some job
}
}
void initialize(){
int i;
for(i=0;i<2;i++){
jobs[i].jobThread = thread(executeJob, i);
}
}
int main(void){
//initialization
initialize();
int buffer[2];
int bufferSize = 0;
while(true){
//jobs arrive here constantly,
//once the buffer becomes full,
//we unlock the threads(workers) and they start working
bufferSize = 2;
if(bufferSize == 2){
for(int i = 0; i<2; i++){
jobs[i].jobMutex.unlock();
}
}
break;
}
}
I started using std::thread a few days ago, and I'm not sure why, but Visual Studio gives me an error saying abort() has been called. I believe there's something missing, but due to my ignorance I can't figure out what.
I would expect this piece of code to actually
Initialize the two threads and then lock them
Inside the main function, unlock the two threads; the two threads will do their job (in this case nothing) and then become locked again.
But it gives me an error instead. What am I doing wrong?
Thank you in advance!
For this purpose you can use boost's threadpool class.
It's an efficient, well-tested, open-source library, rather than something you write from scratch and have to stabilize yourself.
http://threadpool.sourceforge.net/
#include "threadpool.hpp" // header and namespace as shown in the threadpool library's tutorial
using namespace boost::threadpool;
void first_task();
void second_task();
int main()
{
    pool tp(2); //number of worker threads - currently it's 2.
    // Add some tasks to the pool.
    tp.schedule(&first_task);
    tp.schedule(&second_task);
}
void first_task()
{
...
}
void second_task()
{
...
}
Note:
Suggestion for your example:
You don't need an individual mutex object for each thread. A single mutex object will handle the synchronization between all the threads. You are locking one thread's mutex in the executeJob function and, without it ever being unlocked, another thread calls lock on a different mutex object, leading to deadlock or undefined behaviour.
Also, since you are calling mutex.lock() inside the while loop without ever unlocking, the same thread tries to lock the same mutex object again indefinitely, leading to undefined behaviour.
If you do not need to execute the threads in parallel, you can have one global mutex object used inside the executeJob function to lock and unlock.
mutex m;
void executeJob(int worker)
{
m.lock();
//do some job
m.unlock();
}
If you want to execute job parallel use boost threadpool as I suggested earlier.
In general you can write an algorithm similar to the following. It works with pthreads; I'm sure it would work with C++ threads as well.
Create threads and make them wait on a condition variable, e.g. work_exists.
When work arrives, you notify all threads that are waiting on that condition variable. Then in the main thread you start waiting on another condition variable, work_done.
Upon receiving the work_exists notification, worker threads wake up, grab their assigned work from jobs[worker], execute it, send a notification on the work_done variable, and then go back to waiting on the work_exists condition variable.
When the main thread receives the work_done notification, it checks whether all threads are done. If not, it keeps waiting until the notification from the last-finishing thread arrives.
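A minimal C++11 sketch of that scheme with two workers follows; only work_exists and work_done come from the steps above, while job_ready, jobs_finished, and the fixed three rounds are illustrative assumptions and the actual job is elided:
#include <condition_variable>
#include <mutex>
#include <thread>
std::mutex m;
std::condition_variable work_exists, work_done;
bool job_ready[2] = {false, false}; // per-worker "you have work" flags
int  jobs_finished = 0;
void worker(int id){
    for (int round = 0; round < 3; ++round) {
        std::unique_lock<std::mutex> lock(m);
        work_exists.wait(lock, [&]{ return job_ready[id]; });
        job_ready[id] = false;
        // ... do the job assigned to jobs[id] here ...
        ++jobs_finished;
        work_done.notify_one();   // tell main this worker is finished
    }
}
int main(){
    std::thread w0(worker, 0), w1(worker, 1);
    for (int round = 0; round < 3; ++round) {
        {
            std::lock_guard<std::mutex> lock(m);
            job_ready[0] = job_ready[1] = true; // two jobs have arrived
            jobs_finished = 0;
        }
        work_exists.notify_all();
        std::unique_lock<std::mutex> lock(m);
        work_done.wait(lock, []{ return jobs_finished == 2; });
    }
    w0.join();
    w1.join();
}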
From cppreference's page on std::mutex::unlock:
The mutex must be unlocked by all threads that have successfully locked it before being destroyed. Otherwise, the behavior is undefined.
Your approach of having one thread unlock a mutex on behalf of another thread is incorrect.
The behavior you're attempting would normally be done using std::condition_variable. There are examples if you look at the links to the member functions.