Thread about socket communication - C++

I want to make a function so that, when a buffer is received from the socket, the thread freezes the whole program outside my function until my function has finished. I tried the code below.
Function Listen
void Listen(can* _c) {
while (true)
{
std::lock_guard<std::mutex>guard(_c->connection->mutex);
thread t(&connect_tcp::Recv_data,_c->connection,_c->s,ref(_c->response),_c->signals);
if (t.joinable())
t.join();
}
}
Function dataset_browseCan
void dataset_browseCan(can* _c) {
thread org_th(Listen, _c); // I call thread here
org_th.detach();
dataset_browse(_c->cotp, _c->mms_obj, _c->connection, _c->response, _c->list, _c->size_encoder, _c->s);
dataset_signals_browse(_c->cotp, _c->mms_obj, _c->connection, _c->response, _c->list, _c->size_encoder, _c->s);
Sleep(800);
_c->signals = new Signals[_c->response.real_signals_and_values.size()];
}
Function Recv Data
void connect_tcp::Recv_data(SOCKET s,mms_response &response,Signals *signals) {
LinkedList** list = new LinkedList * [1000];
uint8_t* buffer = new uint8_t [10000];
Sleep(800);
/*std::lock_guard<std::mutex>guard(mutex);*/
thread j(recv,s, (char*)buffer, 10000, 0);
j.join();
/*this->mutex.unlock();*/
decode_bytes(response,buffer, list,signals);
}
I tried a mutex and this_thread::sleep_for(), but every time my main function keeps running.
Is it possible to make the program freeze?

You use threads in order to allow things to keep running while something else is happening, so wanting to "stop main" seems counter-intuitive.
However, if you want to share data between threads (e.g. between the thread that runs main and a background thread) then you need to use some form of synchronization. One way to do that is to use a std::mutex. If you lock the mutex before every access, and unlock it afterwards (using std::lock_guard or std::unique_lock) then it will prevent another thread from locking the same mutex while you are accessing the data.
If you need to prevent concurrent access for a long time, then you should not hold a mutex for the whole time. Either consider whether threads are the best solution to your problem, or use a mutex-protected flag to indicate whether the data is ready, and then either poll or use std::condition_variable or similar to wait until the flag is set.
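As a minimal sketch of that flag-plus-condition_variable approach (the names producer, consumer, data_ready and shared_value are made up for this example), the waiting thread blocks inside std::condition_variable::wait until another thread sets the flag under the mutex:
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

std::mutex mtx;
std::condition_variable cv;
bool data_ready = false;   // the mutex-protected flag
int shared_value = 0;      // the data being handed over

void producer()            // e.g. the thread that receives from the socket
{
    {
        std::lock_guard<std::mutex> lock(mtx);
        shared_value = 42; // pretend this came from recv()
        data_ready = true;
    }
    cv.notify_one();       // wake the waiting thread
}

void consumer()            // the code that must not run until the data is ready
{
    std::unique_lock<std::mutex> lock(mtx);
    cv.wait(lock, [] { return data_ready; }); // blocks here until the flag is set
    std::cout << "got " << shared_value << '\n';
}

int main()
{
    std::thread c(consumer);
    std::thread p(producer);
    p.join();
    c.join();
}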

Related

How to stop a detached thread which is blocking on a socket?

I have two threads. The first creates a Logic object, detaching a second thread to spin, blocking on an OpenSSL socket to receive messages:
struct Logic
{
Logic()
{
std::thread t1(&Logic::run, this);
t1.detach();
}
void run()
{
while(true)
{
// Gets data from SSL (blocking socket)
// Processes data
// Updates timestamp
}
}
uint64_t timestamp;
};
The first thread returns, enters a while loop and continually checks whether the detached thread is still running (or whether it's blocked permanently).
while(true)
{
Logic logic;
while(true)
{
if(timestamp_not_updated)
{
break; // Break, destroy current Logic object and create another
}
}
}
If the timestamp stops being updated, the inner while loop breaks, causing the Logic object to be destroyed and a new one created.
When this restart behaviour triggers, I get a segfault. thread apply all bt shows 3 threads, not 2. The original detached thread (blocking on OpenSSL) still exists; I thought it would get destroyed along with the object.
How do I stop a detached thread which is blocking/waiting on a resource, so I can restart my class? I need the blocking behaviour because I don't have anything else to do (besides receive the packet) and it's better for performance than to keep calling into OpenSSL.

How to let a thread wait itself out without using Sleep()?

I want the while loop in the thread to run, wait a second, then run again, and so on, but this doesn't seem to work. How would I fix it?
main(){
bool flag = true;
pthread = CreateThread(NULL, 0, ThreadFun, this, 0, &ThreadIP);
}
ThreadFun(){
while(flag == true)
WaitForSingleObject(pthread,1000);
}
This is one way to do it. I prefer using condition variables over sleeps since they are more responsive, and std::async over std::thread (mainly because std::async returns a future which can send information back to the starting thread, even if that feature is not used in this example).
#include <iostream>
#include <chrono>
#include <future>
#include <condition_variable>
// A very useful primitive to communicate between threads is the condition_variable
// despite its name it isn't a variable per se. It is more of an inter-thread signal
// saying, hey wake up thread something may have changed that's interesting to you.
// They come with some conditions of their own
// - always use with a lock
// - never wait without a predicate
// (https://www.modernescpp.com/index.php/c-core-guidelines-be-aware-of-the-traps-of-condition-variables)
// - have some state to observe (in this case just a bool)
//
// Since these three things go together I usually pack them in a class
// in this case signal_t, which will be used to let threads signal each other
class signal_t
{
public:
// wait for boolean to become true, or until a certain time period has passed
// then return the value of the boolean.
bool wait_for(const std::chrono::steady_clock::duration& duration)
{
std::unique_lock<std::mutex> lock{ m_mtx };
m_cv.wait_for(lock, duration, [&] { return m_signal; });
return m_signal;
}
// wait until the boolean becomes true, waiting infinitely long if needed
void wait()
{
std::unique_lock<std::mutex> lock{ m_mtx };
m_cv.wait(lock, [&] {return m_signal; });
}
// set the signal
void set()
{
std::unique_lock<std::mutex> lock{ m_mtx };
m_signal = true;
m_cv.notify_all();
}
private:
bool m_signal { false };
std::mutex m_mtx;
std::condition_variable m_cv;
};
int main()
{
// create two signals to let mainthread and loopthread communicate
signal_t started; // indicates that loop has really started
signal_t stop; // lets mainthread communicate a stop signal to the loop thread.
// in this example I use a lambda to implement the loop
auto future = std::async(std::launch::async, [&]
{
// signal this thread has been scheduled and has started.
started.set();
do
{
std::cout << ".";
// the stop.wait_for will either wait 500 ms and return false,
// or stop immediately when the stop signal is set and then return true.
// The wait with condition variables is much more responsive
// than implementing a loop with sleep (which would only
// check the stop condition every 500 ms)
} while (!stop.wait_for(std::chrono::milliseconds(500)));
});
// wait for loop to have started
started.wait();
// give the thread some time to run
std::this_thread::sleep_for(std::chrono::seconds(3));
// then signal the loop to stop
stop.set();
// synchronize with thread stop
future.get();
return 0;
}
While the other answer shows one possible way to do it, my answer approaches the problem from a different angle and looks at what could be wrong with your code...
Well, if you don't mind waiting up to one second after flag is set to false, and you want a delay of at least 1000 ms, then a loop with Sleep could work, but you need
an atomic variable (for ex. std::atomic)
or function (for ex. InterlockedCompareExchange)
or a MemoryBarrier
or some other means of synchronisation to check the flag.
Without proper synchronisation, there is no guarantee that the compiler will re-read the value from memory rather than from a register or a cached copy.
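As a rough sketch of that point (a plain console program; the names keep_running and thread_fun, the 1000 ms period and the five-second run are only illustrative, and std::this_thread::sleep_for stands in for Sleep), an std::atomic<bool> makes the flag check well defined without any explicit locking:
#include <atomic>
#include <chrono>
#include <thread>

std::atomic<bool> keep_running{ true };

void thread_fun()
{
    while (keep_running.load())          // well-defined read, no lock needed
    {
        // ... do the periodic work here ...
        std::this_thread::sleep_for(std::chrono::milliseconds(1000));
    }
}

int main()
{
    std::thread t(thread_fun);
    std::this_thread::sleep_for(std::chrono::seconds(5));
    keep_running.store(false);           // well-defined write, seen by the loop
    t.join();                            // main waits for the worker to finish
}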
Using Sleep or a similar function from a UI thread would also be suspicious.
For a console application, you could wait some time in the main thread if the purpose of your application really is to work for a given duration. But usually, you probably want to wait until processing is completed; in most cases, you should wait until the threads you have started have completed.
Another problem with the Sleep approach is that the thread always has to wake up every few seconds even if there is nothing to do. This can be bad if you want to optimize battery usage. On the other hand, having a relatively long timeout on a function that waits on some signal (handle) might make your code a bit more robust against a missed wakeup if your code has some bugs in it.
You also need a delay in some cases where you don't really have anything to wait on but need to poll some data at a regular interval.
A large timeout could also be useful as a kind of watchdog timer. For example, if you expect to have something to do and receive nothing for an extended period, you could report a warning so that the user can check whether something is not working properly.
I highly recommend that you read a book on multithreading, such as Concurrency in Action, before writing multithreaded code.
Without a proper understanding of multithreading, it is almost 100% certain that anyone's code will be buggy. You need to properly understand the C++ memory model (https://en.cppreference.com/w/cpp/language/memory_model) to write correct code.
A thread waiting on itself makes no sense. When you wait on a thread, you are waiting for it to terminate, and obviously if it has terminated, it cannot be executing your code. Your main thread should wait for the background thread to terminate.
I also usually recommend using the C++ threading facilities over the Win32 API, as they:
Make your code portable to other systems.
Are usually higher-level constructs (std::async, std::future, std::condition_variable...) than the corresponding Win32 API code.

An odd use of a condition variable with a local mutex

Poring through the legacy code of an old and large project, I found an odd method of creating a thread-safe queue, something like this:
template < typename _Msg>
class WaitQue: public QWaitCondition
{
public:
typedef _Msg DataType;
void wakeOne(const DataType& msg)
{
QMutexLocker lock_(&mx);
que.push(msg);
QWaitCondition::wakeOne();
}
void wait(DataType& msg)
{
/// wait if empty.
{
QMutex wx; // WHAT?
QMutexLocker cvlock_(&wx);
if (que.empty())
QWaitCondition::wait(&wx);
}
{
QMutexLocker _wlock(&mx);
msg = que.front();
que.pop();
}
}
unsigned long size() {
QMutexLocker lock_(&mx);
return que.size();
}
private:
std::queue<DataType> que;
QMutex mx;
};
wakeOne is used from threads as a kind of "posting" function, and wait is called from other threads and blocks indefinitely until a message appears in the queue. In some cases the roles of the threads reverse at different stages, using separate queues.
Is it even legal to use a QMutex this way, by creating a local one? I kind of understand why someone might do that to dodge a deadlock while reading the size of que, but how does it even work? Is there a simpler and more idiomatic way to achieve this behavior?
It's legal to create a local mutex like that, but it normally makes no sense.
As you've worked out, in this case it is wrong. You should be using the member mutex:
void wait(DataType& msg)
{
QMutexLocker cvlock_(&mx);
while (que.empty())
QWaitCondition::wait(&mx);
msg = que.front();
que.pop();
}
Notice also that you must use while instead of if around the call to QWaitCondition::wait. This is for complex reasons about (possible) spurious wake-ups - the Qt docs aren't clear here. But more importantly, the fact that the wake and the subsequent reacquisition of the mutex are not one atomic operation means you must recheck the queue for emptiness. It could be this last case where you were previously getting deadlocks/UB.
Consider the scenario of an empty queue and a caller (thread 1) entering wait and blocking in QWaitCondition::wait. This thread blocks. Then thread 2 comes along, adds an item to the queue and calls wakeOne. Thread 1 gets woken up and tries to reacquire the mutex. However, thread 3 comes along in your implementation of wait, takes the mutex before thread 1, sees the queue isn't empty, processes the single item and moves on, releasing the mutex. Then thread 1, which has been woken up, finally acquires the mutex, returns from QWaitCondition::wait and tries to process... an empty queue. Yikes.
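For comparison, here is a minimal sketch of the same corrected pattern using only the standard library instead of Qt (wait_queue is a hypothetical name, not a class from the question): one member mutex, a predicate-checked wait, and a push under that same mutex.
#include <condition_variable>
#include <mutex>
#include <queue>

template <typename Msg>
class wait_queue
{
public:
    void push(const Msg& msg)              // the "posting" side (like wakeOne above)
    {
        {
            std::lock_guard<std::mutex> lock(mx);
            que.push(msg);
        }
        cv.notify_one();
    }
    void wait(Msg& msg)                    // blocks until a message is available
    {
        std::unique_lock<std::mutex> lock(mx);
        cv.wait(lock, [this] { return !que.empty(); }); // re-checked after every wake-up
        msg = que.front();
        que.pop();
    }
private:
    std::queue<Msg> que;
    std::mutex mx;
    std::condition_variable cv;
};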

Use the same boost::thread variable to create multiple threads

In the following example (not all the code is included, just the necessary portions):
class A
{
public:
void FlushToDisk(char* pData, unsigned int uiSize)
{
char* pTmp = new char[uiSize];
memcpy(pTmp, pData, uiSize);
m_Thread = boost::thread(&CSimSwcFastsimExporter::WriteToDisk, this, pTmp, uiSize);
}
void WriteToDisk(char* pData, unsigned int uiSize)
{
m_Mtx.lock();
m_ExportFile.write(pData, uiSize);
delete[] pData;
m_Mtx.unlock();
}
boost::thread m_Thread;
boost::mutex m_Mtx;
};
Is it safe to use m_Thread that way, since the FlushToDisk method can be called while the created thread is still executing the WriteToDisk method?
Or should I do something like:
m_Thread.join();
m_Thread = boost::thread(&CSimSwcFastsimExporter::WriteToDisk, this, pTmp, uiSize);
Would this second solution be slower than the first?
From what I saw at http://www.boost.org/doc/libs/1_59_0/doc/html/thread/thread_management.html#thread.thread_management.tutorial
"When the boost::thread object that represents a thread of execution is destroyed the thread becomes detached. Once a thread is detached, it will continue executing until the invocation of the function or callable object supplied on construction has completed, or the program is terminated".
So in my case the threads should not be interrupted, right?
Thanks in advance.
The second solution will pause the main thread to wait until the writer thread completes. You would be able to remove the mutex if you go this way, and you are guaranteed to have one file-writing thread at a time.
The first solution is going to allow the main thread to continue, and will create uncontrolled writing threads, serialized on the mutex. While you might believe this is better (the main thread will not wait), I do not like this solution for several reasons.
First, you do not have any control over the number of created threads. If the function is called often and the operation is slow, you can easily run out of threads! Second, and much more importantly, you will accumulate a backlog of detached threads waiting on the mutex. If your main application decides to exit, all those threads will be silently killed and the updates will be lost.
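Here is a rough sketch of the second solution, under the same assumptions as the fragment in the question (the surrounding class, m_ExportFile and the includes are still omitted): join the previous writer before starting a new one, which guarantees at most one outstanding write and also makes the mutex unnecessary.
void FlushToDisk(char* pData, unsigned int uiSize)
{
    char* pTmp = new char[uiSize];
    memcpy(pTmp, pData, uiSize);
    if (m_Thread.joinable())
        m_Thread.join();   // wait for the previous write to finish before reusing m_Thread
    m_Thread = boost::thread(&CSimSwcFastsimExporter::WriteToDisk, this, pTmp, uiSize);
}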

(C++ Threads): Creating worker threads that will be listening to jobs and executing them concurrently when wanted

Suppose we have two workers, with ids 0 and 1. Also suppose that jobs arrive all the time, and each job also has an identifier, 0 or 1, which specifies which worker has to do it.
I would like to create 2 threads that are initially locked; then, when two jobs arrive, unlock them, let each of them do its job, and then lock them again until other jobs arrive.
I have the following code:
#include <iostream>
#include <thread>
#include <mutex>
using namespace std;
struct job{
thread jobThread;
mutex jobMutex;
};
job jobs[2];
void executeJob(int worker){
while(true){
jobs[worker].jobMutex.lock();
//do some job
}
}
void initialize(){
int i;
for(i=0;i<2;i++){
jobs[i].jobThread = thread(executeJob, i);
}
}
int main(void){
//initialization
initialize();
int buffer[2];
int bufferSize = 0;
while(true){
//jobs arrive here constantly,
//once the buffer becomes full,
//we unlock the threads(workers) and they start working
bufferSize = 2;
if(bufferSize == 2){
for(int i = 0; i<2; i++){
jobs[i].jobMutex.unlock();
}
}
break;
}
}
I started using std::thread a few days ago and I'm not sure why, but Visual Studio gives me an error saying abort() has been called. I believe there's something missing; however, due to my ignorance, I can't figure out what.
I would expect this piece of code to actually
Initialize the two threads and then lock them
Inside the main function, unlock the two threads; the two threads will do their job (in this case nothing) and then become locked again.
But it gives me an error instead. What am I doing wrong?
Thank you in advance!
For this purpose you can use boost's threadpool class.
It's an efficient and well-tested open-source library, rather than something you would have to write and stabilize yourself.
http://threadpool.sourceforge.net/
void first_task(); // forward declarations so the schedule calls compile
void second_task();
int main()
{
pool tp(2); // number of worker threads - currently it's 2
// Add some tasks to the pool.
tp.schedule(&first_task);
tp.schedule(&second_task);
}
void first_task()
{
...
}
void second_task()
{
...
}
Note:
Suggestion for your example:
You don't need an individual mutex object for each thread; locking a single mutex object is enough to synchronize all the threads. You are locking the mutex of one thread in the executeJob function and, without unlocking it, another thread calls lock on a different mutex object, which leads to deadlock or undefined behaviour.
Also, since you are calling mutex.lock() inside the while loop without ever unlocking it, the same thread tries to lock the same mutex object again and again, which is undefined behaviour.
If you do not need the threads to execute in parallel, you can have one global mutex object that is locked and unlocked inside the executeJob function.
mutex m;
void executeJob(int worker)
{
m.lock();
//do some job
m.unlock();
}
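A slightly more robust variant of the same global-mutex idea, just as a sketch, is to use std::lock_guard so the mutex is released even if the job code returns early or throws:
#include <mutex>

std::mutex m;

void executeJob(int worker)
{
    std::lock_guard<std::mutex> lock(m); // unlocked automatically when lock goes out of scope
    // do some job
}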
If you want to execute the jobs in parallel, use the boost threadpool as I suggested earlier.
In general you can write an algorithm similar to the following; a sketch using std::condition_variable is shown after the steps. It works with pthreads, and I'm sure it would work with C++ threads as well.
Create threads and make them wait on a condition variable, e.g. work_exists.
When work arrives, you notify all threads that are waiting on that condition variable. Then in the main thread you start waiting on another condition variable, work_done.
Upon receiving the work_exists notification, worker threads wake up, grab their assigned work from jobs[worker], execute it, send a notification on the work_done variable, and then go back to waiting on the work_exists condition variable.
When the main thread receives the work_done notification, it checks whether all threads are done. If not, it keeps waiting until the notification from the last-finishing thread arrives.
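Below is a condensed sketch of those steps using std::condition_variable (the names work_exists, work_done, has_work and pending are illustrative only, and the actual job payload is omitted):
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

std::mutex mtx;
std::condition_variable work_exists;   // main -> workers: there is work for you
std::condition_variable work_done;     // workers -> main: my job is finished
bool has_work[2] = { false, false };
int pending = 0;
bool shutting_down = false;

void worker(int id)
{
    while (true)
    {
        std::unique_lock<std::mutex> lock(mtx);
        work_exists.wait(lock, [&] { return has_work[id] || shutting_down; });
        if (shutting_down)
            return;
        has_work[id] = false;
        lock.unlock();

        std::cout << "worker " << id << " did its job\n"; // the actual work would go here

        lock.lock();
        --pending;
        work_done.notify_one();
    }
}

int main()
{
    std::thread w0(worker, 0), w1(worker, 1);

    {   // hand one job to each worker
        std::lock_guard<std::mutex> lock(mtx);
        has_work[0] = has_work[1] = true;
        pending = 2;
    }
    work_exists.notify_all();

    {   // wait until both workers report completion
        std::unique_lock<std::mutex> lock(mtx);
        work_done.wait(lock, [] { return pending == 0; });
    }

    {   // tell the workers to exit and join them
        std::lock_guard<std::mutex> lock(mtx);
        shutting_down = true;
    }
    work_exists.notify_all();
    w0.join();
    w1.join();
}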
From cppreference's page on std::mutex::unlock:
The mutex must be unlocked by all threads that have successfully locked it before being destroyed. Otherwise, the behavior is undefined.
Your approach of having one thread unlock a mutex on behalf of another thread is incorrect.
The behavior you're attempting would normally be done using std::condition_variable. There are examples if you look at the links to the member functions.