In Qt, from the main (GUI) thread I am creating a worker thread to perform an operation that accesses a resource shared by both threads. On a certain GUI action, the main thread has to manipulate that resource. I tried using QMutex to lock the resource, but it is used continuously by the worker thread, so how can the main thread be notified about it?
I tried using QWaitCondition, but it was crashing the application.
Is there any other option to notify and achieve synchronisation between the threads?
The code snippet is attached below.
void WorkerThread::IncrementCounter()
{
    qDebug() << "In Worker Thread IncrementCounter function" << endl;
    while (stop == false)
    {
        mutex.lock();
        for (int i = 0; i < 100; i++)
        {
            for (int j = 0; j < 100; j++)
            {
                counter++;
            }
        }
        qDebug() << counter;
        mutex.unlock();
    }
    qDebug() << "In Worker Thread Aborting " << endl;
}

// Manipulating the counter value from the main thread.
void WorkerThread::setCounter(int value)
{
    waitCondition.wait(&mutex);
    counter = value;
    waitCondition.notify_one();
}
You are using the wait condition completely wrong.
I urge you to read up on mutexes and conditions, and maybe look at some examples.
wait() will block execution until either notify_one() or notify_all() is called somewhere else, which of course can never happen in your code: you cannot wait() on a condition on one line and then expect the next two lines, which contain the only wake-up calls, ever to run.
What you want is to wait() in one thread and notify_XXX() from another.
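For reference, here is a minimal sketch of that pattern applied to the members from the snippet above (the counterChanged flag is an assumed addition, and error handling is omitted): the worker wait()s while holding the mutex, and the GUI thread wakes it after changing the counter.

// Worker thread: sleep until the GUI thread signals that the counter changed.
void WorkerThread::IncrementCounter()
{
    mutex.lock();
    while (!stop)
    {
        while (!counterChanged)          // guard against spurious wake-ups
            waitCondition.wait(&mutex);  // atomically releases the mutex while blocked
        counterChanged = false;
        counter++;                       // use the shared resource under the lock
        qDebug() << counter;
    }
    mutex.unlock();
}

// Called from the GUI (main) thread.
void WorkerThread::setCounter(int value)
{
    mutex.lock();
    counter = value;
    counterChanged = true;
    mutex.unlock();
    waitCondition.wakeOne();             // wake the worker blocked in IncrementCounter()
}

QWaitCondition::wait() must be called with the mutex locked; it unlocks the mutex while waiting and re-locks it before returning, which is why setCounter() can take the same mutex from the GUI thread.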
You could use shared memory from within the same process. Each thread could lock it before writing it, like this:
QSharedMemory *shared = new QSharedMemory("Test Shared Memory");
if (shared->create(dataSize, QSharedMemory::ReadWrite))   // segment must be large enough for the data
{
    shared->lock();
    // Copy some data into it (dataBuffer and dataSize are defined elsewhere)
    char *to = (char *)shared->data();
    const char *from = &dataBuffer;
    memcpy(to, from, dataSize);
    shared->unlock();
}
You should also lock it for reading. If you want to pass strings, reading them is easier than writing them as long as they are zero-terminated; when writing, convert with .toLatin1() to get a zero-terminated string whose size you can take. You can also attach to an existing segment with shared->attach(), but that's mostly for reading the shared memory of a different process.
You might just use this instead of mutexes. If you try to lock it while something else already holds the lock, it will simply block until the other thread or process unlocks it.
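For example, a reader-side sketch under the same assumptions (dataSize is a placeholder for however many bytes were written):

QSharedMemory shared("Test Shared Memory");
if (shared.attach())                     // attach to the segment created elsewhere
{
    shared.lock();                       // lock for reading, too
    QByteArray copy(static_cast<const char *>(shared.constData()), dataSize);
    shared.unlock();
    shared.detach();
}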
Instead of having my threads wait, doing nothing, for other threads to finish using data, I'd like them to do something else in the meantime (like checking for input, or re-rendering the previous frame in the queue, and then coming back to check whether the other thread is done with its task).
I think the code I've written does that, and it "seems" to work in the tests I've performed, but I don't really understand exactly how std::memory_order_acquire and std::memory_order_release work, so I'd like some expert advice on whether I'm using them correctly to achieve the behaviour I want.
Also, I've never seen multithreading done this way before, which makes me a bit worried. Are there good reasons not to have a thread do other tasks instead of waiting?
/* Test program.
   Intended to test whether atomic flags can be used to perform other tasks while shared
   data is in use, instead of blocking.
   Each thread enters the flag-protected part of the loop 20 times before quitting.
   If the flag indicates that the if block is already in use, the thread is intended to
   execute the code in the else block (only up to 5 times, to avoid cluttering the output).

   Debug note: this doesn't work with std::cout because all the threads use it at once
   and it's not thread-safe, so the output gets garbled. At least it didn't crash.

   Real-world usage: one thread renders and draws to the screen, while the other checks
   for input and provides frameData for the renderer to use. Neither thread should ever block. */
#include <fstream>
#include <atomic>
#include <thread>
#include <string>

struct ThreadData {
    int numTimesToWriteToDebugIfBlockFile;
    int numTimesToWriteToDebugElseBlockFile;
};

class SharedData {
public:
    SharedData() {
        threadData = new ThreadData[10];
        for (int a = 0; a < 10; ++a) {
            threadData[a] = { 20, 5 };
        }
        flag.clear();
    }
    ~SharedData() {
        delete[] threadData;
    }
    void runThread(int threadID) {
        while (this->threadData[threadID].numTimesToWriteToDebugIfBlockFile > 0) {
            // test_and_set() returns the previous value, so the flag was free
            // (and is now ours) only when it returns false
            if (!this->flag.test_and_set(std::memory_order_acquire)) {
                std::string fileName = "debugIfBlockOutputThread#";
                fileName += std::to_string(threadID);
                fileName += ".txt";
                std::ofstream writeFile(fileName.c_str(), std::ios::app);
                writeFile << threadID << ", running, output #" << this->threadData[threadID].numTimesToWriteToDebugIfBlockFile << std::endl;
                writeFile.close();
                writeFile.clear();
                this->threadData[threadID].numTimesToWriteToDebugIfBlockFile -= 1;
                this->flag.clear(std::memory_order_release);
            }
            else {
                if (this->threadData[threadID].numTimesToWriteToDebugElseBlockFile > 0) {
                    std::string fileName = "debugElseBlockOutputThread#";
                    fileName += std::to_string(threadID);
                    fileName += ".txt";
                    std::ofstream writeFile(fileName.c_str(), std::ios::app);
                    writeFile << threadID << ", standing by, output #" << this->threadData[threadID].numTimesToWriteToDebugElseBlockFile << std::endl;
                    writeFile.close();
                    writeFile.clear();
                    this->threadData[threadID].numTimesToWriteToDebugElseBlockFile -= 1;
                }
            }
        }
    }

private:
    ThreadData* threadData;
    std::atomic_flag flag;
};
void runThread(int threadID, SharedData* sharedData) {
    sharedData->runThread(threadID);
}

int main() {
    SharedData sharedData;
    std::thread thread[10];
    for (int a = 0; a < 10; ++a) {
        thread[a] = std::thread(runThread, a, &sharedData);
    }
    for (int a = 0; a < 10; ++a) {
        thread[a].join();
    }
    return 0;
}
The memory ordering you're using here is correct.
The acquire memory order when you test and set your flag (to take your hand-written lock) has the effect, informally speaking, of preventing any memory accesses of the following code from becoming visible before the flag is tested. That's what you want, because you want to ensure that those accesses are effectively not done if the flag was already set. Likewise, the release order on the clear at the end prevents any of the preceding accesses from becoming visible after the clear, which is also what you need so that they only happen while the lock is held.
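Distilled, the hand-written try-lock described above is essentially this (a sketch, not code from the question):

#include <atomic>

std::atomic_flag flag = ATOMIC_FLAG_INIT;

// Returns true if we acquired the "lock": test_and_set() reports the previous
// value, so false means the flag was clear a moment ago and is now ours.
bool try_enter()
{
    return !flag.test_and_set(std::memory_order_acquire);
}

// The release order keeps the critical section's writes from becoming visible
// after the flag is cleared.
void leave()
{
    flag.clear(std::memory_order_release);
}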
However, it's probably simpler to just use a std::mutex. If you don't want to wait to take the lock, but instead do something else if you can't, that's what try_lock is for.
#include <mutex>

class SharedData {
    // ...
private:
    std::mutex my_lock;
};

// ... inside runThread:
if (my_lock.try_lock()) {
    // lock was taken, proceed with critical section
    my_lock.unlock();
} else {
    // lock not taken, do non-critical work
}
This may have a bit more overhead, but avoids the need to think about atomicity and memory ordering. It also gives you the option to easily do a blocking wait if that later becomes useful. If you've designed your program around an atomic_flag and later find a situation where you must wait to take the lock, you may find yourself stuck with either spinning while continually retrying the lock (which is wasteful of CPU cycles), or something like std::this_thread::yield(), which may wait for longer than necessary after the lock is available.
It's true this pattern is somewhat unusual. If there is always non-critical work to be done that doesn't need the lock, commonly you'd design your program to have a separate thread that just does the non-critical work continuously, and then the "critical" thread can just block as it waits for the lock.
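For illustration, a sketch of how the question's loop might look on top of std::mutex::try_lock (assuming the my_lock member shown above; otherwise the same names as the question):

void SharedData::runThread(int threadID) {
    while (threadData[threadID].numTimesToWriteToDebugIfBlockFile > 0) {
        if (my_lock.try_lock()) {
            // lock taken: do the critical work
            std::ofstream writeFile("debugIfBlockOutputThread#" + std::to_string(threadID) + ".txt",
                                    std::ios::app);
            writeFile << threadID << ", running, output #"
                      << threadData[threadID].numTimesToWriteToDebugIfBlockFile << std::endl;
            threadData[threadID].numTimesToWriteToDebugIfBlockFile -= 1;
            my_lock.unlock();
        } else if (threadData[threadID].numTimesToWriteToDebugElseBlockFile > 0) {
            // lock not available: do the non-critical work instead
            std::ofstream writeFile("debugElseBlockOutputThread#" + std::to_string(threadID) + ".txt",
                                    std::ios::app);
            writeFile << threadID << ", standing by, output #"
                      << threadData[threadID].numTimesToWriteToDebugElseBlockFile << std::endl;
            threadData[threadID].numTimesToWriteToDebugElseBlockFile -= 1;
        }
    }
}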
I am a complete beginner with threads, so I'm not able to resolve this problem myself.
I have two threads which should run in parallel. The first thread should read in the data (simulating a receive-queue thread), and once data is ready the second thread (the processing thread) shall process it. The problem is that the second thread waits for a change of the condition variable for an infinite amount of time.
If I remove the for loop from the first thread, the condition variable notifies the second thread, but that thread only executes once. Why is the condition variable not notified when it is used within the for loop?
My goal is to read in all the data of a CSV file in the first thread and, depending on the row's content, store it in a vector in the second thread.
Thread one looks like this:
std::mutex mtx;
std::condition_variable condVar;
bool event_angekommen{false};

void simulate_event_readin(CSVLeser leser, int sekunden, std::vector<std::string> &csv_reihe)
{
    std::lock_guard<std::mutex> lck(mtx);
    std::vector<std::vector<std::string>> csv_daten = leser.erhalteDatenobj();
    for (size_t idx = 1; idx < csv_daten.size(); idx++)
    {
        std::this_thread::sleep_for(std::chrono::seconds(sekunden));
        csv_reihe = csv_daten[idx];
        event_angekommen = true;
        condVar.notify_one();
    }
}
Thread two looks like this:
void detektiere_events(Detektion detektion, std::vector<std::string> &csv_reihe, std::vector<std::string> &pir_events)
{
    while (1)
    {
        std::cout << "Warte" << std::endl;
        std::unique_lock<std::mutex> lck(mtx);
        condVar.wait(lck, [] { return event_angekommen; });
        std::cout << "Detektiere Events" << std::endl;
        std::string externes_event_user_id = csv_reihe[4];
        std::string externes_event_data = csv_reihe[6];
        detektion.neues_event(externes_event_data, externes_event_user_id);
        if (detektion.pruefe_Pir_id() == true)
        {
            pir_events.push_back(externes_event_data);
        }
    }
}
and my main looks like this:
int main(void)
{
    Detektion detektion;
    CSVLeser leser("../../Example Data/collectedData_Protocol1.csv", ";");
    std::vector<std::string> csv_reihe;
    std::vector<std::string> pir_values = {"28161", "28211", "28261", "28461", "285612"};
    std::vector<std::string> pir_events;

    std::thread thread[2];
    thread[0] = std::thread(simulate_event_readin, leser, 4, std::ref(csv_reihe));
    thread[1] = std::thread(detektiere_events, detektion, std::ref(csv_reihe), std::ref(pir_events));
    thread[0].join();
    thread[1].join();
}
I'm not a C++ expert, but the code seems understandable enough to see the issue.
Your thread 1 grabs the lock once and doesn't release it until the end of its lifetime. It may signal that the condition is fulfilled, but it never actually releases the lock to allow other threads to act.
To fix this, move std::lock_guard<std::mutex> lck(mtx); inside the loop, after sleeping. This way, the thread will take and release the lock on each iteration, giving the other thread an opportunity to act while sleeping.
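A sketch of the corrected reader thread (identifiers kept from the question; the lock is now scoped to each loop iteration, and the notification happens after the lock is released):

void simulate_event_readin(CSVLeser leser, int sekunden, std::vector<std::string> &csv_reihe)
{
    std::vector<std::vector<std::string>> csv_daten = leser.erhalteDatenobj();
    for (size_t idx = 1; idx < csv_daten.size(); idx++)
    {
        std::this_thread::sleep_for(std::chrono::seconds(sekunden));
        {
            std::lock_guard<std::mutex> lck(mtx);   // held only while touching shared state
            csv_reihe = csv_daten[idx];
            event_angekommen = true;
        }
        condVar.notify_one();                       // wake the processing thread
    }
}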
I have a program in which we can monitor 2 objects at the same time.
myThread = new thread (thred1, id);
vec.push_back (myThread);
In the thred1 function, I use a Boolean function to read the stored values from a different vector, and the threads run in parallel, producing output like this:
element found 2 -- hj
HUMIDITY-1681692777 DISPLAYED IN RH
element found 1 -- hj
TEMPERATURE--1714636915 IN DEGREE CELSIUS
This keeps on running, as that is what my program should do.
I have a case where I need to get an ID from the user and stop that particular thread, while the other keeps running until I stop it. Can someone help me with that?
void thred1(int id)
{
    bool err = false;
    while (stopThread == false)
    {
        for (size_t i = 0; i < v.size(); i++)
        {
            if (id == v[i]->id)
            {
                cout << "element found " << v[i]->id << " -- " << v[i]->name << endl;
                v[i]->Read();
                this_thread::sleep_for(chrono::seconds(4));
                err = true;
                break;
            }
        }
        if (!err)
        {
            cout << "element not found" << endl;
            break;
        }
    }
}
Suspension
1. Assuming you want to suspend the monitor thread only temporarily (i.e. while making changes), you can just use a mutex: lock it before accessing the shared vector and unlock it when you're done, ensuring that only one thread can access the data at a time.
2. You can actively suspend the thread using OS support, such as SuspendThread and ResumeThread on Windows.
Termination
1. You could use an event for each monitor thread; naming it after the ID would work. At each iteration the monitor checks for its termination event and ends the thread if it's set.
2. Pass a stop variable to each thread and store them in a map with the thread handle as the key; similar to the previous option, just check the value on each iteration (see the sketch after this answer).
3. Store all threads in a map with the handle as the key and terminate a thread directly with OS support.
Honestly, there are a ton of ways to do this; the best implementation depends on why exactly you want to stop the monitor thread. Any sort of synchronization object like a mutex should be fine if you're reading from one thread and writing from another. Otherwise, just storing all threads with the internal ID as the key and the thread as the value should be fine for terminating monitor threads on demand.
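As a sketch of option 2 under Termination (the names here are illustrative, not taken from the question): give each monitor thread its own atomic stop flag, stored in a map keyed by the monitored ID.

#include <atomic>
#include <chrono>
#include <map>
#include <memory>
#include <thread>
#include <vector>

std::map<int, std::shared_ptr<std::atomic<bool>>> stopFlags;

void startMonitor(int id, std::vector<std::thread> &vec)
{
    auto stop = std::make_shared<std::atomic<bool>>(false);
    stopFlags[id] = stop;
    vec.emplace_back([id, stop]() {
        while (!stop->load())                       // checked on every iteration, as in thred1
        {
            // ... look up the element with this id and call Read() ...
            std::this_thread::sleep_for(std::chrono::seconds(4));
        }
    });
}

// Called when the user enters an ID to stop: only that thread leaves its loop.
void stopMonitor(int id)
{
    auto it = stopFlags.find(id);
    if (it != stopFlags.end())
        it->second->store(true);
}

The threads in vec still need to be joined before the program exits, and stopFlags itself must not be modified concurrently from several threads without its own lock.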
This is my sender thread; once it has been called for the first time, it finishes its execution. I am not able to resume this sender thread. Is there any mechanism in C++ to resume threads?
void ClientSocket::sender()
{
    char buf[1024];
    //readBuffer = m_ptrsendStream->Read_Adt(filePath);
    //readStream();
    //cout << readBuffer.str() << endl;
    cout << "write stream to send through socket\n" << endl;
    cin >> buf;
    if (isConnected == 0)
    {
        //send(clientSock, readBuffer.str().c_str(), strlen((char *)readBuffer.str().c_str()), 0);
        send(clientSock, buf, strlen(buf), 0);
        cout << "sending stream :\n" << endl << buf << endl;
    }
}
// This is where the thread creation and join() happen.
int main(int argc, char *argv[])
{
    ClientSocket objSocket(argv[1]);
    sender_thread = make_shared<thread>([&objSocket]() {
        objSocket.sender();
    });
    try
    {
        if (sender_thread->joinable())
            sender_thread->join();
    }
No, once your thread has joined it's done and you need to create a new one.
If you have this pattern where you are constantly creating new threads it might be worthwhile to think about using a threadpool to avoid the overhead of constantly spawning new threads.
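If that route is taken, a minimal fixed-size pool might look like the sketch below (illustrative only, not a drop-in for the code above): a few worker threads pull tasks from a shared queue instead of a new thread being spawned and joined for every send.

#include <condition_variable>
#include <cstddef>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class ThreadPool {
public:
    explicit ThreadPool(std::size_t n) {
        for (std::size_t i = 0; i < n; ++i)
            workers.emplace_back([this] { workerLoop(); });
    }
    ~ThreadPool() {
        {
            std::lock_guard<std::mutex> lock(mtx);
            stopping = true;
        }
        cv.notify_all();
        for (auto &w : workers) w.join();
    }
    void submit(std::function<void()> task) {
        {
            std::lock_guard<std::mutex> lock(mtx);
            tasks.push(std::move(task));
        }
        cv.notify_one();
    }
private:
    void workerLoop() {
        for (;;) {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lock(mtx);
                cv.wait(lock, [this] { return stopping || !tasks.empty(); });
                if (stopping && tasks.empty()) return;
                task = std::move(tasks.front());
                tasks.pop();
            }
            task();  // run the task outside the lock
        }
    }
    std::vector<std::thread> workers;
    std::queue<std::function<void()>> tasks;
    std::mutex mtx;
    std::condition_variable cv;
    bool stopping = false;
};

Each send could then be submitted as a task, e.g. pool.submit([&objSocket] { objSocket.sender(); });, and the same threads are reused for every call.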
In addition, if this is related to networking it's probably best to avoid using threads and instead use something asynchronous like boost::asio.
Terminated threads cannot be resumed (this is not a C++ limitation but a general one; when speaking about resuming a thread, it is usually about resuming it after previously suspending it).
After join() has returned, the corresponding thread is already terminated; it has no state (except maybe for zombie bookkeeping and a return code, which are of no use for your purposes), and there is nothing to resume.
However, it is possible to run your sender() function in another thread, just create another instance of your thread.
EDIT: I concur with #inf on using asio instead of threads whenever possible.
You want to resume a thread which has completed, whereas normally resuming means continuing a thread that was suspended. Instead of resuming the thread, keep it from finishing until it has done all of its work, by using a while loop or a wait inside the thread.
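A sketch of that idea (sendMutex, sendCv, pending and quitting are assumed new members, not part of the original class): the sender thread stays alive in a loop and waits for work instead of returning after a single send.

#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>

void ClientSocket::senderLoop()
{
    std::unique_lock<std::mutex> lock(sendMutex);
    while (!quitting)
    {
        sendCv.wait(lock, [this] { return quitting || !pending.empty(); });
        while (!pending.empty())
        {
            std::string msg = pending.front();
            pending.pop();
            lock.unlock();                          // don't hold the lock during socket I/O
            if (isConnected == 0)
                send(clientSock, msg.c_str(), msg.size(), 0);
            lock.lock();
        }
    }
}

// Called from any thread to hand the sender more work.
void ClientSocket::queueMessage(const std::string &msg)
{
    {
        std::lock_guard<std::mutex> lock(sendMutex);
        pending.push(msg);
    }
    sendCv.notify_one();
}

main() would start senderLoop() once, call queueMessage() whenever there is something to send, and finally set quitting (under the mutex), notify, and join.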
I'm using semaphores with shared memory for communicating between multiple producers and multiple consumers. There are two main kinds of semaphores in my system: "stored semaphores" and "processed semaphores".
The system runs as follows: producers continuously put data into the shared memory and then increase the stored semaphore's value, while the consumers sit in a loop waiting on that stored semaphore. After receiving data from a producer, a consumer processes it and then increases the processed semaphore's value. Producers get their results by waiting on the processed semaphore.
The producer code:
for (int i = 0; i < nloop; i++) {
    usleep(100);
    strcpy(shared_mem[i], "data for processing");
    sem_post(&shared_mem[i].stored_semaphore);
    if (sem_timedwait(&msg_ptr->processed_semaphore, &ts) == -1) {   // waiting for the result
        if (errno == ETIMEDOUT) {
        }
        break;
    } else {
        // success
    }
}
The consumer code:
for (int j = 0; j < MAX_MESSAGE; j++) {
    if (sem_trywait(&(shm_ptr->messages[j].stored_semaphore)) == -1) {
        if (errno == EAGAIN) {
            // nothing stored in this slot yet
        }
    } else {
        // success ==> process data
        // post the result back into the shared memory, and increase
        // the processed semaphore
        strcpy(shared_mem[j].output, "Processed data");
        sem_post(&(shared_mem[j].processed_semaphore));
    }
} // for loop over MAX_MESSAGE
My problem is that the for loop in the consumer code wastes almost 100% CPU, because when there is no data from the producers, the loop runs continuously.
My question: is there any other way to wait on a set of semaphores (similar to the waiting mechanism of select, poll, or epoll) that does not waste CPU time?
Hope to see your answer. Thanks so much!
As far as I know there isn't a way to wait on a set of semaphores, which means all accesses need to be funnelled through a single semaphore. You're looping over a set of semaphores, so collectively they can be treated as one object. The consumer needs to know when any of the semaphores has been signalled, so use an additional sem_post on a new semaphore to signal that the set of semaphores has changed.
Your producer code becomes something like this:
....
sem_post(&shared_mem[i].stored_semaphore);
sem_post(&list_changed_semaphore); /* Wake the consumer. */
....
and the consumer:
/* Block until a consumer has indicated that it has changed the semaphore list */
if (!sem_wait(&list_changed_semaphore)) {
/* At least one producer has signalled a change. */
for (int j = 0; j < MAX_MESSAGE; j++) {
if (sem_trywait(&(shm_ptr->messages[j].stored_semaphore)) == -1) {
}
}
}
Instead of using a semaphore for list_changed_semaphore, you could use a pthread_cond_t condition variable to signal that something in your set of semaphores has changed. The change signal does not need to be a counter as in the example shown here; it only needs to be a single bit indicating that a producer has modified the list.
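A sketch of that alternative (illustrative only; the struct would live in the shared segment, and the PROCESS_SHARED attributes matter only if the producers and consumers are separate processes):

#include <pthread.h>

struct ChangeSignal {
    pthread_mutex_t mtx;
    pthread_cond_t  cond;
    bool            changed;   /* the single bit of state described above */
};

void change_signal_init(struct ChangeSignal *cs)
{
    pthread_mutexattr_t ma;
    pthread_condattr_t  ca;
    pthread_mutexattr_init(&ma);
    pthread_mutexattr_setpshared(&ma, PTHREAD_PROCESS_SHARED);
    pthread_condattr_init(&ca);
    pthread_condattr_setpshared(&ca, PTHREAD_PROCESS_SHARED);
    pthread_mutex_init(&cs->mtx, &ma);
    pthread_cond_init(&cs->cond, &ca);
    cs->changed = false;
}

/* Producer: after posting a stored_semaphore, announce the change. */
void announce_change(struct ChangeSignal *cs)
{
    pthread_mutex_lock(&cs->mtx);
    cs->changed = true;
    pthread_cond_signal(&cs->cond);
    pthread_mutex_unlock(&cs->mtx);
}

/* Consumer: block until some producer announced a change, then scan the
 * message slots with sem_trywait as in the loop above. */
void wait_for_change(struct ChangeSignal *cs)
{
    pthread_mutex_lock(&cs->mtx);
    while (!cs->changed)
        pthread_cond_wait(&cs->cond, &cs->mtx);
    cs->changed = false;
    pthread_mutex_unlock(&cs->mtx);
}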