Stop scanf from waiting for input, using another thread - C++

I would like to "send a message" from a thread to the main program's scanf. In other words, I'm asking how to give the scanf function (or cin) something so that it stops waiting.
Normally you write something on the console and press Enter.
How can I do the same from another thread?
Example:
int main()
{
    ///// Some code to make the thread work, etc.
    std::string mystring;
    std::cin >> mystring;
    std::cout << mystring; // It should be "Text into mystring"
}

// From the other thread running...
void mythread()
{
    std::string test = "Text into mystring";
    // Write test to scanf! How?
}
How can I achieve that??

As I understand it, you are trying to send information between threads. The official name for this is inter-thread communication.
If you want to stick with scanf, you would have to use pipes, which are a communication tool between processes, not threads.
Here is a way you can communicate between threads. The reader thread represents your scanf thread; the writer thread represents mythread.
The system is simple. You have shared memory. When one thread wants to write, it locks the memory (a queue in this example) and writes. When the other one wants to read, it likewise locks the memory, reads an item, and then removes it (pops it from the queue). If the queue is empty, the reader thread waits until someone writes something into it.
struct MessageQueue
{
    std::queue<std::string> msg_queue;
    pthread_mutex_t mu_queue;
    pthread_cond_t cond;
};

{
    // In a reader thread, far, far away...
    MessageQueue *mq = <a pointer to the same instance that the main thread has>;
    std::string msg = read_a_line_from_irc_or_whatever();
    pthread_mutex_lock(&mq->mu_queue);
    mq->msg_queue.push(msg);
    pthread_mutex_unlock(&mq->mu_queue);
    pthread_cond_signal(&mq->cond);
}
{
    // Main thread
    MessageQueue *mq = <a pointer to the same instance that the main thread has>;
    while(1)
    {
        pthread_mutex_lock(&mq->mu_queue);
        if(!mq->msg_queue.empty())
        {
            std::string s = mq->msg_queue.front(); // std::queue exposes front(), not top()
            mq->msg_queue.pop();
            pthread_mutex_unlock(&mq->mu_queue);
            handle_that_string(s);
        }
        else
        {
            // pthread_cond_wait() releases the mutex while waiting and
            // re-acquires it before returning, so unlock it afterwards
            pthread_cond_wait(&mq->cond, &mq->mu_queue);
            pthread_mutex_unlock(&mq->mu_queue);
        }
    }
}
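For reference (not part of the original answer), the same handoff can be sketched with the standard C++11 primitives instead of pthreads; the helper function names here are illustrative:

#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>

struct MessageQueue
{
    std::queue<std::string> msg_queue;
    std::mutex mu_queue;
    std::condition_variable cond;
};

// Writer thread: push a message and wake the waiting reader.
void post_message(MessageQueue& mq, const std::string& msg)
{
    {
        std::lock_guard<std::mutex> lock(mq.mu_queue);
        mq.msg_queue.push(msg);
    }
    mq.cond.notify_one();
}

// Main thread: block until a message is available, then pop and return it.
std::string wait_for_message(MessageQueue& mq)
{
    std::unique_lock<std::mutex> lock(mq.mu_queue);
    mq.cond.wait(lock, [&] { return !mq.msg_queue.empty(); });
    std::string s = mq.msg_queue.front();
    mq.msg_queue.pop();
    return s;
}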

Related

C++ thread writes continuously and read at unknown time

I am trying to get the following scenario to work, but have not been successful so far.
I have 2 threads, a worker (that writes) and a reader.
The worker continuously modifies the values of a class "someClassToModify".
The Reader makes a read access every x seconds (unknown) and reads the current state of the class someClassToModify.
How do I make sure that the Reader reads someClassToModify immediately, without delay, when it "wants" to, and that the Worker continues immediately afterwards?
What is important here is that the reader always gets immediate access when it needs to take a "snapshot" of someClassToModify.
At the moment, the Writer always seems to be "faster", and sometimes several more "Writes" are made before it is the reader's turn. That is, the reader then does not get the value it actually wanted.
Example:
Work
Work
Work
Work
Read
Work
Work
Read
Work
Work
Work
Work
Read
....
SomeClass someClassToModify; // class to modify (write) and read

std::thread Worker([this] {
    while(true) {
        // work work work (write)
        // modify someClassToModify
    }
});
Worker.detach();

std::thread Reader([this] {
    while(true) {
        // at an unknown (random) time, read the value from someClassToModify
    }
});
Reader.detach();
thanks for your help here
So, first of all, you should note that threads are scheduled unpredictably. That means the worker can run multiple times before the reader even gets a chance to do anything.
The second thing is that the reader can enter a function, get partway through it, and then a context switch happens. You want to avoid the writer modifying the data in that window.
The way to avoid that is to use a lock or a semaphore.
That means the code that reads someClassToModify should hold a lock while reading, so the writer cannot modify the data mid-read. It doesn't need anything special beyond that: the reader sleeps for X milliseconds afterwards, and during that time the writer can work freely.
You should check std::mutex (and std::lock_guard).
Basically you would want something like this; wrap it inside your class:
#include <iostream>
#include <thread>
#include <mutex>
#include <chrono>

int i = 0;
int x = 1000;

// this is the mutex used for locking
std::mutex myLock;

void read() {
    while (true) {
        // lock, so the writer cannot change i while we read it
        myLock.lock();
        std::cout << i << std::endl;
        std::cout << "reader" << std::endl;
        // once we are finished we can unlock
        myLock.unlock();
        // wait for x milliseconds
        std::this_thread::sleep_for(std::chrono::milliseconds(x));
    }
}

void write() {
    while (true) {
        // the writer also locks, so you don't end up with bad values of i
        myLock.lock();
        i++;
        std::cout << "writer" << std::endl;
        myLock.unlock();
    }
}

int main() {
    std::thread th1(read);
    std::thread th2(write);
    th1.join();
    th2.join();
    return 0;
}
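As a side note (not from the original answer), std::lock_guard unlocks the mutex automatically at the end of its scope, even if an exception is thrown, so the read() function above could equivalently be written as:

void read() {
    while (true) {
        {
            // the guard locks myLock here and unlocks it at the closing brace
            std::lock_guard<std::mutex> guard(myLock);
            std::cout << i << std::endl;
            std::cout << "reader" << std::endl;
        }
        // wait for x milliseconds
        std::this_thread::sleep_for(std::chrono::milliseconds(x));
    }
}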

Thread about socket communication

I want to make a function such that, when it receives a buffer from a socket, the whole program outside my function freezes until my function is finished. I tried the following:
Function Listen
void Listen(can* _c) {
    while (true)
    {
        std::lock_guard<std::mutex> guard(_c->connection->mutex);
        thread t(&connect_tcp::Recv_data, _c->connection, _c->s, ref(_c->response), _c->signals);
        if (t.joinable())
            t.join();
    }
}
Function dataset_browseCan
void dataset_browseCan(can* _c) {
    thread org_th(Listen, _c); // I call thread here
    org_th.detach();
    dataset_browse(_c->cotp, _c->mms_obj, _c->connection, _c->response, _c->list, _c->size_encoder, _c->s);
    dataset_signals_browse(_c->cotp, _c->mms_obj, _c->connection, _c->response, _c->list, _c->size_encoder, _c->s);
    Sleep(800);
    _c->signals = new Signals[_c->response.real_signals_and_values.size()];
}
Function Recv Data
void connect_tcp::Recv_data(SOCKET s, mms_response &response, Signals *signals) {
    LinkedList** list = new LinkedList * [1000];
    uint8_t* buffer = new uint8_t[10000];
    Sleep(800);
    /*std::lock_guard<std::mutex> guard(mutex);*/
    thread j(recv, s, (char*)buffer, 10000, 0);
    j.join();
    /*this->mutex.unlock();*/
    decode_bytes(response, buffer, list, signals);
}
I tried a mutex and this_thread::sleep_for(), but my main function keeps running every time.
Is it possible to make the program freeze like this?
You use threads in order to allow things to keep running while something else is happening, so wanting to "stop main" seems counter-intuitive.
However, if you want to share data between threads (e.g. between the thread that runs main and a background thread) then you need to use some form of synchronization. One way to do that is to use a std::mutex. If you lock the mutex before every access, and unlock it afterwards (using std::lock_guard or std::unique_lock) then it will prevent another thread from locking the same mutex while you are accessing the data.
If you need to prevent concurrent access for a long time, then you should not hold a mutex for the whole time. Either consider whether threads are the best solution to your problem, or use a mutex-protected flag to indicate whether the data is ready, and then either poll or use std::condition_variable or similar to wait until the flag is set.
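As a rough illustration of that last suggestion (a sketch I am adding, not code from the question), the main thread can block on a condition variable until a background thread marks the data ready; the names data_ready and payload are made up for the example:

#include <condition_variable>
#include <mutex>
#include <thread>
#include <string>
#include <iostream>

std::mutex m;
std::condition_variable cv;
bool data_ready = false;
std::string payload;

void background_receiver() {
    std::string result = "bytes from the socket"; // stand-in for the real recv()
    {
        std::lock_guard<std::mutex> lock(m);
        payload = result;
        data_ready = true;   // mutex-protected flag
    }
    cv.notify_one();
}

int main() {
    std::thread t(background_receiver);
    {
        std::unique_lock<std::mutex> lock(m);
        // "freezes" main here until the flag is set
        cv.wait(lock, [] { return data_ready; });
        std::cout << payload << '\n';
    }
    t.join();
}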

C++ Semaphore Confusion?

So, I'm writing a sort of oscilloscope-esque program that reads the serial port on the computer and performs an FFT on this data to convert it to the frequency spectrum. I ran into an issue, though, with the layout of my program, which is broken up into a SerialHandler class (utilizing boost::asio), an FFTHandler class, and a main function. The SerialHandler class uses the boost::asio async_read_some function to read from the port and raise an event called HandlePortOnReceive, which then reads the data itself.
The issue was that I couldn't find a way to pass that data from the event handler, being raised by an io_service object on another thread, to the FFTHandler class, which is on yet another thread. I was recommended to use semaphores to solve my problem, but I have next to no knowledge of semaphore.h usage, so my implementation is now rather broken and doesn't do much of anything it's supposed to.
Here's some code if that makes it a little clearer:
using namespace Foo;
//main function
int main(void){
SerialHandler serialHandler;
FFTHandler fftHandler;
sem_t *qSem_ptr = &qSem;
sem_init(qSem_ptr, 1, 0);
//create separate threads for both the io_service and the AppendIn so that neither will block the user input statement following
serialHandler.StartConnection(tempInt, tempString); //these args are defined, but for brevity's sake, I ommitted the declaration
t2= new boost::thread(boost::bind(&FFTHandler::AppendIn, &fftHandler, q, qSem));
//allow the user to stop the program and avoid the problem of an infinite loop blocking the program
char inChar = getchar();
if (inChar) {...some logic to stop reading}
}
namespace Foo{
boost::thread *t1;
boost::thread *t2;
sem_t qSem;
std::queue<double> q;
boost::mutex mutex_;
class SerialHandler{
private:
char *rawBuffer; //array to hold incoming data
boost::asio::io_service ioService;
boost::asio::serial_port_ptr serialPort;
public:
void SerialHandler::StartConnection(int _baudRate, string _comPort){
//some functionality to open the port that is irrelevant to the question goes here
AsyncReadSome(); //starts the read loop
//create thread for io_service object and let function go out of scope
t1 = new boost::thread(boost::bind(&boost::asio::io_service::run, &ioService));
}
void SerialHandler::AsyncReadSome(){
//there's some other stuff here for error_catching, but this is the only important part
serialPort->async_read_some (
boost::asio::buffer(rawBuffer, SERIAL_PORT_READ_BUF_SIZE),
boost::bind(
&SerialHandler::HandlePortOnReceive,
this, boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred, q));
}
void SerialHandler::HandlePortOnReceive(const boost::system::error_code& error, size_t bytes_transferred, std::queue<double>& q){
boost::mutex::scoped_lock lock(mutex_);
//more error checking goes here, but I've made sure they aren't returning and are not the issue
for (unsigned int i =0; i<bytes_transferred; i++){
unsigned char c = rawBuffer[i];
double d = (double) c; //loop through buffer and read
if (c==endOfLineChar){
} else //if not delimiting char, push into queue and post semaphore
{
q.push(d);
//cout << d << endl;
sem_post(&qSem);
cout << q.front() << endl;
cout << "size is: " << q.size() << endl;
}
}
//loop back on itself and start the next read
AsyncReadSome();
}
}
class FFTHandler{
private:
double *in; //array to hold inputs
fftw_complex *out; //holds outputs
int currentIndex;
bool filled;
const int N;
public:
void AppendIn(std::queue<double> &q, sem_t &qSem){
while(1){ //this is supposed to stop thread from exiting and going out of scope...it doesn't do that at all effectively...
cout << "test" << endl;
sem_wait(&_qSem); //wait for data...this is blocking but I don't know why
double d = _q.front();
_q.pop();
in[currentIndex]=d; //read queue, pop, then append in array
currentIndex++;
if (currentIndex == N){ //run FFT if full and reset index
currentIndex = N-overlap-1;
filled = true;
RunFFT();
}
}
}
}
}
That debug line in FFTHandler::AppendIn(..) is indeed firing, so the thread is being created, but it seems to be immediately going out of scope and destructing the thread, because it seems I've set up the while loop to respond incorrectly to the semaphore.
TLDR: That was a long explanation to simply say, "I don't understand semaphores but need to somehow implement them. I tried, failed, so now I'm coming here to hopefully receive help on this code from somebody more knowledgeable than me."
UPDATE: So after playing around with some debug statements, it seems that the issue is that the while(1){...} statement is indeed firing, but the sem_wait(&_qSem); is causing it to block. For whatever reason it waits indefinitely: despite the fact that the semaphore is being posted, it continues to wait and never progresses beyond that line.
Since you're already using boost::mutex and its scoped lock type, I suggest you use boost::condition_variable instead of a POSIX semaphore. Otherwise you're mixing C++11-style synchronisation with POSIX synchronisation.
You lock the mutex when adding to the queue, but I don't see anything locking the mutex to read from the queue. It also looks like you're looping back to call AsyncReadSome while the mutex is still locked.
Pick a single form of synchronisation, and then use it correctly.
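A rough sketch of that suggestion (my illustration, not the answerer's code), using the boost::mutex the question already has together with boost::condition_variable; the helper names are made up:

#include <boost/thread/mutex.hpp>
#include <boost/thread/condition_variable.hpp>
#include <queue>

boost::mutex mutex_;
boost::condition_variable cond_;
std::queue<double> q;

// Producer side (e.g. inside HandlePortOnReceive): push a sample and notify.
void push_sample(double d) {
    {
        boost::mutex::scoped_lock lock(mutex_);
        q.push(d);
    }
    cond_.notify_one();
}

// Consumer side (e.g. inside FFTHandler::AppendIn): wait for a sample and pop it.
double pop_sample() {
    boost::mutex::scoped_lock lock(mutex_);
    while (q.empty())
        cond_.wait(lock);   // releases the mutex while waiting
    double d = q.front();
    q.pop();
    return d;
}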
The initial value of the semaphore is 0, which is valid for this case, so it needs a sem_post for FFTHandler::AppendIn() to be unblocked. But I don't see the code that invokes SerialHandler::AsyncReadSome() for the first time, so that the serial port gets read and the push into the queue happens. If you fix that part of the code, I think the sem_post would happen and the FFTHandler thread would run. As a first step you can add debug prints, one right after the sem_wait and one inside the AsyncReadSome() function; my guess is that neither will get executed.
So, essentially you would want to ensure that 'reading' gets initiated and is kept alive as part of the main thread or a different thread.

Reader and writer in multithread in C++

Here is my question. I have two threads, writer1 and writer2, which modify the attributes of a structure: writer1 writes to attribute1 and writer2 writes to attribute2. And I have a thread, Reader, which reads the structure. What I want is: when writer1 is writing, writer2 can also write at the same time (this doesn't cause a problem because they modify different attributes), and of course vice versa. But when the Reader is reading the values of the structure, neither writer1 nor writer2 may be writing at the same time. I need to be sure that the value I'm reading is not being changed by other threads.
Example:
typedef struct
{
    int a;
    double b;
} data;

data glob;

int main()
{
    thread reader([]()
    {
        while(1)
        {
            this_thread::sleep_for(1s);
            cout << glob.a << " " << glob.b << endl;
        }
    });
    thread writer1([]()
    {
        while(1)
            glob.a++;
    });
    thread writer2([]()
    {
        while(1)
            glob.b++;
    });
    int i;
    cin >> i;
}
One end of the solution space: a single mutex and a single condition variable shared by both writers and the reader.
The opposite end: two atomic variables, with the reader spinning over the two.
And the architecturally cleanest (and also fast when done right): an inbox queue for the reader, keyed by writer ID, so no two messages from the same writer can be queued.
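As an illustration of the "two atomic variables" end of that spectrum (my sketch, not the answerer's code): each writer updates its own atomic and the reader loads both. Note the reader gets each value atomically, but the pair is not guaranteed to be a mutually consistent snapshot:

#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>

std::atomic<int> a{0};
std::atomic<double> b{0.0};

int main()
{
    std::thread writer1([] { while (true) a.fetch_add(1); });
    std::thread writer2([] {
        while (true) {
            // atomic "b++" via compare-exchange, since fetch_add on double is C++20
            double old = b.load();
            while (!b.compare_exchange_weak(old, old + 1.0)) {}
        }
    });
    std::thread reader([] {
        while (true) {
            std::this_thread::sleep_for(std::chrono::seconds(1));
            std::cout << a.load() << " " << b.load() << std::endl;
        }
    });
    writer1.join();
    writer2.join();
    reader.join();
}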
Just use a mutex ;)
It's very simple to use and will solve your problem.
http://en.cppreference.com/w/cpp/thread/mutex
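A minimal sketch of that suggestion applied to the example above (my code, not the answerer's): it gives the reader a consistent snapshot, at the cost of also serialising the two writers against each other, which is the trade-off described in the other answer:

#include <chrono>
#include <iostream>
#include <mutex>
#include <thread>

struct data { int a = 0; double b = 0; };
data glob;
std::mutex glob_mutex;

int main()
{
    std::thread reader([] {
        while (true) {
            std::this_thread::sleep_for(std::chrono::seconds(1));
            std::lock_guard<std::mutex> lock(glob_mutex); // no writer can run while we read
            std::cout << glob.a << " " << glob.b << std::endl;
        }
    });
    std::thread writer1([] {
        while (true) { std::lock_guard<std::mutex> lock(glob_mutex); glob.a++; }
    });
    std::thread writer2([] {
        while (true) { std::lock_guard<std::mutex> lock(glob_mutex); glob.b++; }
    });
    reader.join();
    writer1.join();
    writer2.join();
}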

Producer-consumer based multi-threading for image processing

UPDATE: I have provided the reason for the problem and its solution in my answer below.
I want to implement multi-threading based on the producer-consumer approach for an image processing task. In my case, the producer thread should grab the images and put them into a container, whereas the consumer thread should extract the images from the container. I think I should use a queue to implement the container.
I want to use the following code, as suggested in this SO answer, but I have become quite confused about the implementation of the container and about putting the incoming image into it in the producer thread.
PROBLEM: The image displayed by the first consumer thread does not contain the full data, and the second consumer thread never displays any image. Maybe there is some race or locking situation due to which the second thread is not able to access the queue's data at all. I have already tried to use a mutex.
#include <vector>
#include <thread>
#include <memory>
#include <queue>
#include <opencv2/highgui.hpp>
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
Mutex mu;
struct ThreadSafeContainer
{
queue<unsigned char*> safeContainer;
};
struct Producer
{
Producer(std::shared_ptr<ThreadSafeContainer> c) : container(c)
{
}
void run()
{
while(true)
{
// grab image from camera
// store image in container
Mat image(400, 400, CV_8UC3, Scalar(10, 100,180) );
unsigned char *pt_src = image.data;
mu.lock();
container->safeContainer.push(pt_src);
mu.unlock();
}
}
std::shared_ptr<ThreadSafeContainer> container;
};
struct Consumer
{
Consumer(std::shared_ptr<ThreadSafeContainer> c) : container(c)
{
}
~Consumer()
{
}
void run()
{
while(true)
{
// read next image from container
mu.lock();
if (!container->safeContainer.empty())
{
unsigned char *ptr_consumer_Image;
ptr_consumer_Image = container->safeContainer.front(); //The front of the queue contain the pointer to the image data
container->safeContainer.pop();
Mat image(400, 400, CV_8UC3);
image.data = ptr_consumer_Image;
imshow("consumer image", image);
waitKey(33);
}
mu.unlock();
}
}
std::shared_ptr<ThreadSafeContainer> container;
};
int main()
{
//Pointer object to the class containing a "container" which will help "Producer" and "Consumer" to put and take images
auto ptrObject_container = make_shared<ThreadSafeContainer>();
//Pointer object to the Producer...intialize the "container" variable of "Struct Producer" with the above created common "container"
auto ptrObject_producer = make_shared<Producer>(ptrObject_container);
//FIRST Pointer object to the Consumer...intialize the "container" variable of "Struct Consumer" with the above created common "container"
auto first_ptrObject_consumer = make_shared<Consumer>(ptrObject_container);
//SECOND Pointer object to the Consumer...intialize the "container" variable of "Struct Consumer" with the above created common "container"
auto second_ptrObject_consumer = make_shared<Consumer>(ptrObject_container);
//RUN producer thread
thread producerThread(&Producer::run, ptrObject_producer);
//RUN first thread of Consumer
thread first_consumerThread(&Consumer::run, first_ptrObject_consumer);
//RUN second thread of Consumer
thread second_consumerThread(&Consumer::run, second_ptrObject_consumer);
//JOIN all threads
producerThread.join();
first_consumerThread.join();
second_consumerThread.join();
return 0;
}
I don't see an actual question in your original question, so I'll give you the reference material I used to implement producer-consumer in my college course.
http://cs360.byu.edu/static/lectures/winter-2014/semaphores.pdf
Slides 13 and 17 give good examples of producer-consumer.
I made use of this in a lab, which I have posted on my GitHub here:
https://github.com/qzcx/Internet_Programming/tree/master/ThreadedMessageServer
If you look in my server.cc you can see my implementation of the producer-consumer pattern.
Remember that when using this pattern you can't switch the order of the wait statements, or else you can end up in deadlock.
Hope this is helpful.
EDIT:
Okay, so here is a summary of the producer-consumer pattern in my code linked above. The idea behind producer-consumer is to have a thread-safe way of passing tasks from a "producer" thread to "consumer" worker threads. In the case of my example, the work to be done is handling client requests. The producer thread (.serve()) monitors the incoming socket and passes connections to the consumer threads (.handle()), which handle the actual requests as they come in. All of the code for this pattern is found in the server.cc file (with some declarations/imports in server.h).
For the sake of being brief, I am leaving out some detail. Be sure to go through each line and understand what is going on. Look up the library functions I am using and what the parameters mean. I'm giving you a lot of help here, but there is still plenty of work for you to do to gain a full understanding.
PRODUCER:
Like I mentioned above, the entire producer thread is found in the .serve() function. It does the following things:
Initializes the semaphores. There are two versions here because of OS differences: I programmed on OS X, but had to turn in code on Linux. Since semaphores are tied to the OS, it is important to understand how to use them in your particular setup.
It sets up the socket for the client to talk to. Not important for your application.
Creates the consumer threads.
Watches the client socket and uses the producer pattern to pass items to the consumers. This code is below
At the bottom of the .serve() function you can see the following code:
while ((client = accept(server_, (struct sockaddr *)&client_addr, &clientlen)) > 0) {
    sem_wait(clients_.e); // buffer check
    sem_wait(clients_.s);
    clients_.q->push(client);
    sem_post(clients_.s);
    sem_post(clients_.n); // produce
}
First, you check the buffer semaphore "e" to ensure there is room in your queue to place the request. Second, acquire the semaphore "s" for the queue. Then add your task (in this case, a client connection) to the queue and release the queue's semaphore. Finally, signal to the consumers using semaphore "n".
CONSUMER:
In the .handle() method you really only care about the very beginning of the thread.
while(1){
    sem_wait(clients_.n); // consume
    sem_wait(clients_.s);
    client = clients_.q->front();
    clients_.q->pop();
    sem_post(clients_.s);
    sem_post(clients_.e); // buffer free
    // Handles the client requests until they disconnect.
}
The consumer does similar actions to the producer, but in the opposite fashion. First, the consumer waits for the producer to signal on the semaphore "n". Remember, since there are multiple consumers it is completely random which consumer ends up acquiring this semaphore; they fight over it, but only one can move past this point per sem_post of that semaphore. Second, they acquire the queue semaphore just like the producer does, pop the first item off the queue, and release the semaphore. Finally, they signal on the buffer semaphore "e" that there is now more room in the buffer.
Disclaimer:
I know the semaphores have terrible names. They match my professor's slides since that's where I learned it. I think they stand for the following:
e for empty: this semaphore stops the producer from pushing more items onto the queue if it is full.
s for semaphore: my least favorite. My professor's style was to have a struct for each shared data structure; in this case "clients_" is the struct containing all three semaphores and the queue. Basically this semaphore is there to ensure no two threads touch the same data structure at the same time.
n for number of items in the queue.
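Putting the two halves together, here is a compact, self-contained sketch of the same e/s/n pattern (my own illustration, using POSIX semaphores and std::thread; the int payload and the buffer capacity of 8 are made up):

#include <semaphore.h>
#include <iostream>
#include <queue>
#include <thread>

std::queue<int> tasks;
sem_t e; // "empty": free slots left in the buffer
sem_t s; // "semaphore": mutual exclusion on the queue
sem_t n; // "number": items currently in the queue

void producer()
{
    for (int i = 0; i < 10; ++i) {
        sem_wait(&e);   // wait for a free slot
        sem_wait(&s);   // lock the queue
        tasks.push(i);
        sem_post(&s);   // unlock the queue
        sem_post(&n);   // signal "one more item"
    }
}

void consumer()
{
    for (int i = 0; i < 10; ++i) {
        sem_wait(&n);   // wait for an item
        sem_wait(&s);   // lock the queue
        int task = tasks.front();
        tasks.pop();
        sem_post(&s);   // unlock the queue
        sem_post(&e);   // signal "one more free slot"
        std::cout << "consumed " << task << std::endl;
    }
}

int main()
{
    // note: unnamed semaphores (sem_init) work on Linux but not on macOS
    sem_init(&e, 0, 8); // buffer capacity of 8
    sem_init(&s, 0, 1);
    sem_init(&n, 0, 0);
    std::thread p(producer), c(consumer);
    p.join();
    c.join();
}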
OK, so to make it as simple as possible: you will need two threads, a mutex, a queue, and two thread-processing functions.
Header.h
static DWORD WINAPI ThreadFunc_Prod(LPVOID lpParam);
static DWORD WINAPI ThreadFunc_Con(LPVOID lpParam);
HANDLE m_hThread[2];
queue<int> m_Q;
mutex m_M;
Add all the other stuff you need; these are just the core parts.
Source.cpp
DWORD dwThreadId;
m_hThread[0] = CreateThread(NULL, 0, ThreadFunc_Prod, this, 0, &dwThreadId);
// same for the 2nd thread

DWORD WINAPI Server::ThreadFunc_Prod(LPVOID lpParam)
{
    cYourClass* o = (cYourClass*) lpParam;
    int nData2Q = GetData(); // this is whatever you use to get your data
    o->m_M.lock();
    o->m_Q.push(nData2Q);
    o->m_M.unlock();
    return 0;
}

DWORD WINAPI Server::ThreadFunc_Con(LPVOID lpParam)
{
    cYourClass* o = (cYourClass*) lpParam;
    int res;
    o->m_M.lock();
    if (o->m_Q.empty())
    {
        // bad, no data; escape or wait or whatever, but don't block the context
    }
    else
    {
        res = o->m_Q.front();
        o->m_Q.pop();
    }
    o->m_M.unlock();
    // do your magic with res here
    return 0;
}
And at the end of main, don't forget to use WaitForMultipleObjects.
Plenty of examples can be found directly on MSDN, with quite nice commentary.
PART2:
OK, so I believe the header is self-explanatory, so I will describe the source a bit more. Somewhere in your source (it can even be in the constructor) you create the threads. The way to create a thread may differ, but the idea is the same (on Windows a thread runs right after its creation; with POSIX you have to join it). I assume you have a function somewhere which starts all your magic; let's call it MagicKicker().
In the POSIX case, create the threads in the constructor and join them in your MagicKicker(); on Windows, create them in MagicKicker().
Then you need to declare (in the header) the two functions where your thread work is implemented, ThreadFunc_Prod and ThreadFunc_Con. The important trick here is that you pass a reference to your object to these functions (because the thread functions are basically static), so you can easily access shared resources such as queues, mutexes, etc.
These functions actually do the work. You have everything you need in your code; just use this as the adding routine in the producer:
int nData2Q = GetData(); // this is whatever you use to get your data
m_M.lock();              // lock the mutex so nobody else can enter
m_Q.push(nData2Q);       // put the data from the producer onto the shared queue
m_M.unlock();            // unlock the mutex so the consumer can access the queue
And add this to your consumer:
int res;
m_M.lock();          // lock the mutex so the producer cannot touch the queue
if (m_Q.empty())     // check if there is something in the queue
{
    // nothing in your queue yet (or already consumed)
    // skip this thread run; you can e.g. sleep for some time to let the queue build up
    Sleep(100);
    continue;        // in case of a while wrap
    return;          // in case you are running some framework with a thread loop
}
else                 // there is actually something
{
    res = m_Q.front(); // get the oldest element of the queue
    m_Q.pop();         // remove this element from the queue
}
m_M.unlock();        // unlock the mutex so the producer can add new items to the queue
// do your magic with res here
The problem mentioned in my question was that the image displayed by the consumer thread did not contain complete data; it showed several patches, which suggests it could not get the full data produced by the producer thread.
ANSWER: The reason behind it is the declaration of Mat image inside the while loop of the producer thread. The Mat instance created inside the while loop is destroyed (and its data freed) once the next round of the while loop starts, so the consumer thread was never able to reliably access, through the raw pointer stored in the queue, the data of the Mat image created in the producer thread.
SOLUTION: I should have done something like this:
struct ThreadSafeContainer
{
    queue<Mat> safeContainer;
};

struct Producer
{
    Producer(std::shared_ptr<ThreadSafeContainer> c) : container(c)
    {
    }

    void run()
    {
        while(true)
        {
            // grab image from camera
            // store image in container
            Mat image(400, 400, CV_8UC3, Scalar(10, 100, 180));
            mu.lock();
            container->safeContainer.push(image); // push the Mat itself, not a raw pointer
            mu.unlock();
        }
    }

    std::shared_ptr<ThreadSafeContainer> container;
};

struct Consumer
{
    Consumer(std::shared_ptr<ThreadSafeContainer> c) : container(c)
    {
    }

    ~Consumer()
    {
    }

    void run()
    {
        while(true)
        {
            // read next image from container
            mu.lock();
            if (!container->safeContainer.empty())
            {
                Mat image = container->safeContainer.front(); // the front of the queue contains the image
                container->safeContainer.pop();
                imshow("consumer image", image);
                waitKey(33);
            }
            mu.unlock();
        }
    }

    std::shared_ptr<ThreadSafeContainer> container;
};