I am using a socket from different threads, which is causing a runtime failure. See the following code.
void MySocket::Lock()
{
    m_LockCount++;
    if( m_LockCount )
    {
        CSocket::Create( 8080 );
    }
}

void MySocket::Unlock()
{
    m_LockCount--;
    if( !m_LockCount )
    {
        CSocket::Close();
    }
}
I am calling Lock() from one thread and Unlock() from another. When it executes CSocket::Close(), it throws an exception.
I googled this bug and found some explanations.
This happens because a CSocket object should be used only in the context of a single thread, since the SOCKET handle encapsulated by a CAsyncSocket object is stored in a per-thread handle map. The suggested workaround is to share the raw SOCKET handle between threads (http://support.microsoft.com/kb/175668), but this is not possible in my case since I am expecting notification callbacks, which do not work with that solution. Can anybody suggest a mechanism to share a CSocket among threads without affecting the notification callbacks?
You could just use the socket directly and stop using the obviously flawed MFC implementation ...
If, as you say, "a CSocket object should be used only in the context of a single thread," then there is no "mechanism to share CSocket among threads".
In other words, one of the threads needs to own the CSocket, and the others can't mess with it.
In such cases, the solution is to use an inter-thread messaging system. That way, one of the other threads can send a message to the owner saying, "Hey, buddy, close your socket!"
The details of how you would do that messaging depend entirely on the context of your program.
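For illustration, here is a minimal sketch of such a messaging scheme (not MFC-specific; the command type, class name, and queue are invented for the example):

#include <condition_variable>
#include <mutex>
#include <queue>

// Hypothetical command type: the only thing other threads may do is ask.
enum class SocketCommand { Close };

class SocketOwner {
public:
    // Called from any thread: just enqueue a request.
    void post(SocketCommand cmd) {
        std::lock_guard<std::mutex> lock(m_mutex);
        m_commands.push(cmd);
        m_cv.notify_one();
    }

    // Called only on the thread that owns (created) the CSocket.
    void run() {
        for (;;) {
            std::unique_lock<std::mutex> lock(m_mutex);
            m_cv.wait(lock, [this] { return !m_commands.empty(); });
            SocketCommand cmd = m_commands.front();
            m_commands.pop();
            lock.unlock();

            if (cmd == SocketCommand::Close) {
                // m_socket.Close();  // safe: same thread that created it
                break;
            }
        }
    }

private:
    std::mutex m_mutex;
    std::condition_variable m_cv;
    std::queue<SocketCommand> m_commands;
};

The point is that CSocket::Close() only ever runs on the thread that created the socket; the other threads merely ask for it.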
I would advise you to use a higher-level (and less buggy) socket API like Boost.Asio. Note that it does not make sockets thread-safe either; you still have to use some lock/unlock facility.
I am not sure I understand your question about sharing sockets among threads without using notification callbacks. Between threads T1 and T2, supposing T1 manages the socket, there are only two ways for T2 to become aware of a socket event: either a notification raised by T1, or a question asked by T2 of T1, either on a regular basis or in a blocking call.
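If you do move to something like Boost.Asio, the usual way to get that single-owner discipline without explicit locks is to funnel every socket operation through the thread running the io_context, e.g. with asio::post. A rough sketch, assuming Boost.Asio and a plain TCP socket:

#include <boost/asio.hpp>

namespace asio = boost::asio;

int main() {
    asio::io_context io;
    asio::ip::tcp::socket socket(io);

    // ... connect, start async reads, register handlers, etc. ...

    // From any thread: ask the io_context's thread to close the socket.
    asio::post(io, [&socket] {
        boost::system::error_code ec;
        socket.close(ec);  // executed on the thread running io.run()
    });

    io.run();  // one thread runs all handlers, so no locking is needed
}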
I'm figuring out how to use Futures with non-blocking, event-driven code (in a separate thread or not, both cases), but how can I finish the future from a slot (i.e. resolve the promise based on a signal)?
QByteArray RfidCardReader::startTask(QByteArray send)
{
    if(this->busy==false) {
        this->sendFrame(send);
        QObject::connect(this, &RfidCardReader::frameReady,
                         [=]() {/*this must be the startTask return*/ return this->int_read_buffer;});
    } else {
        throw 0; //Handle a queue instead
    }
}

QFuture<QByteArray> RfidCardReader::send(QByteArray passed_send)
{
    return QtConcurrent::run(QThreadPool::globalInstance(), this->startTask, passed_send);
}
Basically, what I want to do, using only a single instance, is wrap a serial device (which is synchronous by nature) in a queue of Futures, but with only non-blocking code, using signals like &QIODevice::bytesWritten, &QIODevice::readyRead, etc. If there are better approaches to the problem, let me know; I would be glad to know the right way to write readable async code in Qt without blocking in separate threads.
A serial device is asynchronous by nature, and using the serial port concurrently from multiple threads is undefined behavior. You can certainly resolve futures from any thread, but there's nothing in Qt that will give you a future on the same thread. Recall that QFuture is not a class you can sensibly instantiate yourself; a default-constructed one is useless.
To get an idea of how to handle asynchronous serial I/O, see for example this answer.
Then you can use the undocumented <QFutureInterface> header, and create your own implementation that can wrap higher-level aspects of your protocol, i.e. commands/requests. You could then group such futures, and use a single watcher to determine when they are done.
Your approach is quite interesting in fact, and I might develop a complete example.
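In the meantime, here is a rough, untested sketch of what resolving a QFuture from a signal with QFutureInterface could look like, reusing the names from the question (the single-shot disconnect is omitted for brevity):

#include <QByteArray>
#include <QFuture>
#include <QFutureInterface>
#include <QObject>

QFuture<QByteArray> RfidCardReader::send(QByteArray passed_send)
{
    // The interface lives on the heap so the slot can finish it later,
    // from whatever thread emits frameReady.
    auto iface = new QFutureInterface<QByteArray>;
    iface->reportStarted();

    QObject::connect(this, &RfidCardReader::frameReady, this,
                     [this, iface]() {
        iface->reportResult(this->int_read_buffer);  // resolve the future
        iface->reportFinished();
        delete iface;
        // NOTE: in real code this connection should also be disconnected
        // so it only fires once; omitted here for brevity.
    });

    this->sendFrame(passed_send);
    return iface->future();
}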
I have a callback function which will be called from a thread that I don't have any access to or control over (a library created that thread, and requires me to expose the callback function to it). Since a zmq socket is not thread safe, here is what I'm doing:
void callback () {
    zmq::socket_t tmp_sock(...); // create a socket which will be used only once
    ...
}
However, the callback is being invoked very frequently (hundreds of times per second). Is there a better solution that uses the socket more efficiently? I ask because the Guide says: If you are opening and closing a lot of sockets, that's probably a sign that you need to redesign your application.
Edit:
Based on #raffian's answer, a thread_local static variable (available in C++11) in the callback function works fine.
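For reference, a minimal sketch of that thread_local approach, assuming the cppzmq bindings and a process-wide zmq::context_t named g_context (the endpoint and message are made up):

#include <zmq.hpp>

extern zmq::context_t g_context;  // contexts are thread-safe and may be shared

void callback()
{
    // One socket per calling thread, created lazily on first entry and
    // reused on every later call from that thread (C++11 thread_local).
    thread_local zmq::socket_t sock = [] {
        zmq::socket_t s(g_context, ZMQ_PUSH);
        s.connect("inproc://worker");
        return s;
    }();

    const char payload[] = "tick";
    zmq::message_t msg(payload, sizeof payload);
    sock.send(msg);  // older cppzmq overload; newer versions want zmq::send_flags
}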
I asked the same question, but in Java:
The principles are the same: pre-initialize a pool of worker threads, each with a dedicated socket, ready to use for reading/writing. In the Java example I use ThreadLocal; I suppose in C++ you could use #include <boost/thread/tss.hpp>. This approach is consistent with the ZeroMQ guide: use sockets only in the threads that created them.
I'm not a C++ programmer, but if you use this approach, you'll have to do something like this:
void callback () {
    workerPool.doTask( new Task(args here) );
    ...
}
Create a Task, with arguments, and send it to the workerPool, where it's assigned to a thread with a dedicated socket. You'll want to create the worker pool with enough threads to accommodate load; nevertheless, concurrency shouldn't be a concern.
I'm writing a class "Tmt" that sits between a server and clients, communicating through sockets. My Tmt class will receive data from the server, build up a queue internally, and perform some operations on the data in the queue before it is made available to the client.
I have already setup the socket connection and I can call
receiverData(); // to get data from server
The client will use my class Tmt as follows:
Tmt mytmt;
mytmt.getProcessedData(); // to get one frame
My question is how to let the Tmt class keep receiving data from the server in the background once it is created, and add the data to the queue. I have some experience with multi-threading in C, but I'm not sure how this "working in the background" concept would be implemented in a class in C++. Please advise, thanks!
One option would be to associate a thread with each instance of the class (perhaps by creating a thread in the constructor). This thread continuously reads data from the network and adds the data to the queue as it becomes available. If the thread is marked private (i.e. class clients aren't aware of its existence), then it will essentially be running "in the background" with no explicit intervention. It would be up to the Tmt object to manage its state.
As for actual thread implementations in C++, good ol' Pthreads work in C++ just fine. However, a much better approach would probably be to use the Boost threading library, which encapsulates all the thread state in its own class. It also offers a whole bunch of synchronization primitives that are just like the pthread versions, but substantially easier to use.
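To make the idea concrete, here is a minimal sketch of the thread-in-the-constructor approach using C++11 std::thread (the Frame type and member names are invented, and receiverData() is assumed to block until data arrives):

#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>

struct Frame { /* one unit of processed data; placeholder type */ };

class Tmt {
public:
    Tmt() : m_stop(false), m_worker(&Tmt::receiveLoop, this) {}

    ~Tmt() {
        m_stop = true;
        m_worker.join();              // shut the background thread down cleanly
    }

    Frame getProcessedData() {        // blocks until a frame is queued
        std::unique_lock<std::mutex> lock(m_mutex);
        m_cv.wait(lock, [this] { return !m_queue.empty(); });
        Frame f = m_queue.front();
        m_queue.pop();
        return f;
    }

private:
    void receiveLoop() {              // this is the "background" part
        while (!m_stop) {
            Frame f = receiverData(); // blocking read from the server
            std::lock_guard<std::mutex> lock(m_mutex);
            m_queue.push(f);
            m_cv.notify_one();
        }
    }

    Frame receiverData();             // the existing socket code, adapted to return a Frame

    std::mutex m_mutex;
    std::condition_variable m_cv;
    std::queue<Frame> m_queue;
    bool m_stop;                      // should really be std::atomic<bool>
    std::thread m_worker;
};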
Hope this helps!
By the way - I'd recommend just naming the class Transmit. No reason to be overly terse. ;-)
IMHO, multithreading is not the best solution for this kind of class.
Introducing background threads can cause many problems; at the very least, you must guard against unnecessary creation of multiple threads. Threads also need explicit initialization and cleanup. For instance, the usual thread cleanup includes a join operation (waiting for the thread to stop), which can cause deadlocks, resource leaks, unresponsive UIs, etc.
Single-threaded asynchronous socket communication could be more appropriate for this scenario.
Here is some sample code to sketch the idea:
class Tmt {
    ...
public:
    ...
    bool doProcessing()
    {
        receiverData();
        // process data
        // return true if data is available
    }

    T getProcessedData()
    {
        // return processed data
    }
};
Tmt class users must run a loop that calls doProcessing() and getProcessedData():
Tmt myTmt;
...
while (needMoreData)
{
    if (myTmt.doProcessing())
    {
        myTmt.getProcessedData();
        ...
    }
}
If Tmt users want background processing, they can easily create another thread and do the Tmt work there. In that case, thread management is handled by the Tmt users.
If Tmt users prefer a single-threaded approach, they can do that without any problem.
Note also that the famous curl library uses this kind of design.
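For completeness, the "users create their own thread" option above might look roughly like this with std::thread (assuming the Tmt sketch from this answer):

#include <atomic>
#include <thread>

void runInBackground(Tmt& myTmt, std::atomic<bool>& running)
{
    std::thread pump([&] {
        while (running) {
            if (myTmt.doProcessing()) {
                auto data = myTmt.getProcessedData();
                // hand "data" to whoever needs it
            }
        }
    });

    // ... do other work on this thread ...

    running = false;   // ask the pump thread to stop
    pump.join();
}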
This is a question about generic C++ event-driven application design.
Let's assume that we have two threads, a "Dispatcher" (or "Engine"...) and a "Listener" (or "Client"...).
Let's assume that I write the Dispatcher code, and release it as a library. I also write the Listener interface, of course.
When the Dispatcher executes (after Listener registration)
listenerInstance.onSomeEvent();
the event handling code will actually be executed by the Dispatcher thread, so if the person that implements the Listener writes something like
void Listener::onSomeEvent() { while(true) ; }
the Dispatcher will be stuck forever.
Is there a "plain old C++" way (I mean no Boost or libsigc++) to "decouple" the two classes, so I can be sure that my Dispatcher will work fine whatever the Listener does in its callbacks?
bye and thanks in advance,
Andrea
Well if the event gets invoked in the same thread (as I seem to understand can be a requirement), then there isn't much you can do about it.
If this is a Win32 app with a message pump, you could register a windows message and call PostMessage with data representing this event, and you can patch the message loop to interpret that message and call the event handler. What you gain is a decoupling of sorts: the event call is asynchronous (i.e. the call will return no matter what). But later on, when you process your messages and actually invoke the handler, your main thread will still be stalled and nothing else will run until the handler is done.
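In rough terms, that PostMessage variant could look something like this (Win32 only; the message constant, WndProc, and fireSomeEvent are invented for the example):

#include <windows.h>

class Listener {
public:
    virtual void onSomeEvent() = 0;
};

// A private application message meaning "dispatch this event later".
static const UINT WM_APP_DISPATCH_EVENT = WM_APP + 1;

// Dispatcher side (can run on any thread): fire-and-forget, returns at once.
void fireSomeEvent(HWND targetWindow, Listener* listener)
{
    PostMessage(targetWindow, WM_APP_DISPATCH_EVENT,
                reinterpret_cast<WPARAM>(listener), 0);
}

// In the window procedure of the receiving (message-pump) thread:
LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    if (msg == WM_APP_DISPATCH_EVENT) {
        reinterpret_cast<Listener*>(wParam)->onSomeEvent();
        return 0;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}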
Another alternative is just creating a new thread (or using a thread pool) for your call. This won't work for events that require a specific thread (e.g. UI-updating threads). Additionally, this adds synchronization and thread-spawning overhead, AND you might starve the system of threads and/or CPU time.
But really, I don't think it's your job as the library designer to anticipate and avoid these problems. If the end-user wants to create a long event handler, let him spawn a new thread on his own. If he doesn't and just wants his specific thread to handle an event, let him. It simplifies your job and doesn't add any overhead that's not needed.
I'm afraid there's no native C++ way to do this. For Windows, you can use asynchronous procedure calls (APC).
One approach could be to call onSomeEvent in a dedicated thread. This is not 100% bulletproof, but it would avoid the while(true); issue.
I hope it helps
There is a pure C++ way to achieve what you're describing. However, it's very inefficient. Here's a sample:
class Listener
{
    // NOTE: in real code this flag must be std::atomic<bool> (or protected
    // by a lock); a plain bool read/written from two threads is a data race.
    bool myHasEvent;

private:
    void ProcessEvent()
    {
        while (true)
        {
            if (!myHasEvent)
                continue; // spin lock

            // Do real processing
            myHasEvent = false;
        }
    }

public:
    void onSomeEvent() { myHasEvent = true; }
};
However, I would recommend against this approach. Instead, I would turn this into more platform-specific code. I would replace the if (!myHasEvent) continue; spin lock with an OS-specific wait routine (i.e. WaitForSingleObject on Win32) passing an event handle. Then, in onSomeEvent, instead of myHasEvent = true; I would set the event to the signaled state (i.e. SetEvent on Win32). This would be a lot more efficient because the thread wouldn't eat processor time while waiting.
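A portable C++11 equivalent of that event-object idea, replacing the spin lock with a condition variable so the listener thread sleeps instead of burning CPU, might look like this (sketch only):

#include <condition_variable>
#include <mutex>

class Listener
{
public:
    // Called by the Dispatcher thread: cheap, returns immediately.
    void onSomeEvent()
    {
        {
            std::lock_guard<std::mutex> lock(myMutex);
            myHasEvent = true;
        }
        myCondition.notify_one();
    }

    // Runs on the Listener's own thread.
    void ProcessEvents()
    {
        for (;;)
        {
            std::unique_lock<std::mutex> lock(myMutex);
            myCondition.wait(lock, [this] { return myHasEvent; });
            myHasEvent = false;
            lock.unlock();

            // Do the real (possibly slow) processing here,
            // without holding up the Dispatcher thread.
        }
    }

private:
    std::mutex myMutex;
    std::condition_variable myCondition;
    bool myHasEvent = false;
};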
Another method is the PostMessage as suggested by Blindly.
I have a design question. Is it better to define separate classes for SENDING and RECEIVING, or is it better to define a single Thread class? I like the idea of a single Thread class because it is easier to share a queue which can be locked by a mutex.
Design Option #1 (Separate):
mySendThread = new SendThread(); // Have thread properties and separate members
myRcvThread = new RcvThread(); // Have thread properties and separate members
Design Option #2 (Master):
Master thread -

Execute()
{
    if (threadType == RCV_THREAD)
    {
        globalVar = new MasterThread(serialPortHandle);
    }
    while (!Terminated)
    {
        if (threadType == RCV_THREAD)
        {
            if (globalVar)
            {
                // do work
            }
        }
        if (threadType == SND_THREAD)
        {
            tCountSnd = GetTickCount() / SND_THREAD_DELAY;
            if (tCountSnd != tCountSnd2)
            {
                tCountSnd2 = tCountSnd;
                if (globalVar)
                {
                    // do sending work
                }
            }
        }
    }
}
I think it's better to completely decouple the purpose or execution of a thread from the actual thread abstraction that you'll be using.
Make your thread class just a thin wrapper to allow you to start, stop, and join a thread. Have it take a functor object (or function pointer) in the constructor for the actual execution.
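For example, a thin wrapper along these lines (a sketch built on std::thread; pre-C++11 you would hide a pthread or CreateThread call behind the same interface):

#include <functional>
#include <thread>
#include <utility>

class Thread
{
public:
    explicit Thread(std::function<void()> work)
        : m_thread(std::move(work)) {}   // the functor is the thread body

    void join() { if (m_thread.joinable()) m_thread.join(); }

    ~Thread() { join(); }

private:
    std::thread m_thread;
};

// Usage: the sending and receiving logic become two ordinary functors.
// Thread sender([&]{ sendLoop(queue); });
// Thread receiver([&]{ receiveLoop(queue); });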
Or better yet, use one of the many available thread abstractions already out there instead of writing your own (boost::thread for one, but I bet whatever framework you're using already has a thread class).
I've designed a thread for communicating on the serial port (in Python, not C++, but it doesn't matter much) as follows:
There's a single thread and two queues - one for sent and one for received messages. The thread always listens (asynchronously) on both the serial port (for received data) and the sending queue (to send stuff the application asks to send).
If data arrived on the serial port, it's placed in the receive queue for the application's use
If the application placed data into the send queue, the thread sends it down the serial port
This design makes more sense to me because the single resource (the serial port) is held by a single thread and not shared by two. Breaking it into several classes sounds like overkill to me, since reading/writing the queues and reading/writing the serial port are trivial operations (naturally, the serial port is wrapped in a convenient class; by the way, I really recommend this class by Ramon de Klein).
Oh, and it works very well.
Regarding the queue to be shared: wrap it in a separate class and implement the mutex handling there. Every thread class holds a reference to the queue wrapper and doesn't need to deal with mutexes at all.
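Something along these lines, as a minimal sketch of such a wrapper (the element type and method names are just for illustration):

#include <condition_variable>
#include <mutex>
#include <queue>
#include <utility>

template <typename T>
class SharedQueue
{
public:
    void push(T item)
    {
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            m_queue.push(std::move(item));
        }
        m_notEmpty.notify_one();
    }

    // Blocks until an item is available.
    T pop()
    {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_notEmpty.wait(lock, [this] { return !m_queue.empty(); });
        T item = std::move(m_queue.front());
        m_queue.pop();
        return item;
    }

private:
    std::mutex m_mutex;               // all locking lives inside the wrapper
    std::condition_variable m_notEmpty;
    std::queue<T> m_queue;
};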
The 2nd choice is clearly a bad one.
It is better to have 2 different classes; maybe you can have a base class with the common implementation. This is just an initial assessment. Please provide more information about your problem; only then can a good analysis be done.