I have a callback function which will be called from a thread that I have no access to or control over (a library created that thread and requires me to expose the callback function to it). Since a ZeroMQ socket is not thread-safe, here is what I'm doing:
void callback() {
    zmq::socket_t tmp_sock(...); // create a socket which will be used only once
    ...
}
However, the callback is being invoked very frequently (hundreds of times per second). Is there a better way to use the socket more efficiently? I ask because the ZeroMQ Guide says: "If you are opening and closing a lot of sockets, that's probably a sign that you need to redesign your application."
Edit:
Based on raffian's answer: a thread_local static variable (available since C++11) in the callback function works fine.
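For illustration, a minimal sketch of that fix (the shared context, socket type, and endpoint here are assumptions, not taken from the original code):

#include <zmq.hpp>

static zmq::context_t g_context(1);           // contexts are thread-safe and can be shared

void callback() {
    // Constructed on this thread's first call, reused on every later call,
    // and destroyed automatically at thread exit.
    thread_local zmq::socket_t sock(g_context, ZMQ_PUSH);
    thread_local bool connected = false;
    if (!connected) {
        sock.connect("inproc://work");        // assumed endpoint
        connected = true;
    }
    // ... send/receive on sock as before; only this thread ever touches it ...
}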
I asked the same question, but in Java:
The principles are the same: pre-initialize a pool of worker threads, each with a dedicated socket, ready to use for reading/writing. In the Java example, I use ThreadLocal; I suppose in C++ you can use #include <boost/thread/tss.hpp>. This approach is consistent with ZeroMQ's guide: use sockets only in the threads that created them.
I'm not a C++ programmer, but if you use this approach, you'll have to do something like this:
void callback() {
    workerPool.doTask(new Task(args here));
    ...
}
Create a Task with its arguments and send it to the workerPool, where it's assigned to a thread with a dedicated socket. You'll want to create the worker pool with enough threads to accommodate the load; nevertheless, concurrency shouldn't be a concern.
Related
I want to write my own asynchronous function based on Asio 1.19.2 (without boost).
The motivation is to create an interface that works with a JSON-RPC-like protocol (not JSON-RPC exactly).
On the socket it would look something like
--> { "method": "subtract", "params": [42, 23], "id": 1 }
<-- ... some irrelevant notifications, etc.
<-- { "result": 19, "id": 1 } <-- ID matches initial call
I have this figured out using std::futures (without Asio). That method signature is something like:
// near useless, because futures don't allow for continuation
std::future<std::string> json_rpc(std::string method, json::array args);
To implement that, I just needed to:
Keep the socket open in the background (it could also be passed into the method or whatever).
Store a set of pending promises.
When you call json_rpc, make a new promise, add it to the set, return the get_future().
Send the RPC on the socket.
If you get a { "result" ... } message on the socket, find the pending promise with that ID and call set_value() on it with the result (a rough sketch of this bookkeeping follows this list).
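A minimal sketch of that bookkeeping using only the standard library (the class and member names are illustrative; socket I/O and JSON parsing are left out):

#include <future>
#include <map>
#include <mutex>
#include <string>

class PendingCalls {
public:
    // Called from json_rpc(): register a promise for this request id and
    // hand its future back to the caller.
    std::future<std::string> add(int id) {
        std::lock_guard<std::mutex> lock(mutex_);
        return pending_[id].get_future();
    }

    // Called from the socket-reading side when {"result": ..., "id": ...}
    // arrives: resolve the matching promise and forget it.
    void on_result(int id, std::string result) {
        std::lock_guard<std::mutex> lock(mutex_);
        auto it = pending_.find(id);
        if (it != pending_.end()) {
            it->second.set_value(std::move(result));
            pending_.erase(it);
        }
    }

private:
    std::mutex mutex_;
    std::map<int, std::promise<std::string>> pending_;
};

json_rpc() would add() a promise keyed by the request id, write the request on the socket, and return the future; the reader resolves it via on_result() when the matching response arrives.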
How do I translate this future-based approach into an asio-based one, that has a similar interface to e.g. async_read_until? In other words, I want to implement something like:
template <typename CompletionToken>
void async_json_rpc(std::string method, json::array args, CompletionToken&& completion_token);
I found this tutorial from Beast, but it uses Boost.Beast, seems outdated, and the only asynchronous part of it is where it calls async_read_some. My code can't really make use of these Asio async_ "primitives", since my completion handler needs to be invoked at some point outside of the OS's control.
I also found these examples from the Asio docs, but they have the same limitation of just wrapping other async_ calls. They also become increasingly unmanageable in complexity for application code.
The core of my question, I suppose, is: I have an async function that basically works as if it used asio::use_future as its CompletionToken. How do I let it accept any kind of CompletionToken? What is the promise.set_value() equivalent for an Asio completion handler?
I'm very interested in simple to understand solutions, so that I'm not the only one on my team able to maintain these sorts of functions.
To clarify further (from my comments):
I already have "servicing completions from a service thread" implemented, but I only support futures (if I were writing an Asio async method, it's as if I had only written the asio::use_future overload). I'm asking how to support CompletionHandlers in general, i.e. how to write the other async_* overloads that can take lambdas or yield_contexts.
I think you are asking how to implement the "strand"? I've done things similar to this.
In general, you need a worker thread that services a queue. It will loop and execute each item in the queue, but then sleep when it is empty. Adding something to the queue will signal the thread to wake up, if necessary.
A fancier variation is to use a thread pool and common queue, pruning excess threads if you have too many and creating more if there is too much work for the existing threads. Optimally this requires OS support (I've implemented this for Windows Completion Ports).
When you want to do something async, you put the promise on the queue rather than starting a dedicated thread for that.
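A minimal sketch of such a queue in plain C++11 (class and member names are illustrative):

#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

class WorkQueue {
public:
    WorkQueue() : worker_([this] { run(); }) {}

    ~WorkQueue() {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            stop_ = true;
        }
        cv_.notify_one();
        worker_.join();
    }

    // Callable from any thread; wakes the worker if it is sleeping.
    void post(std::function<void()> task) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            tasks_.push(std::move(task));
        }
        cv_.notify_one();
    }

private:
    void run() {
        for (;;) {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lock(mutex_);
                cv_.wait(lock, [this] { return stop_ || !tasks_.empty(); });
                if (stop_ && tasks_.empty())
                    return;
                task = std::move(tasks_.front());
                tasks_.pop();
            }
            task();   // run outside the lock
        }
    }

    std::mutex mutex_;
    std::condition_variable cv_;
    std::queue<std::function<void()>> tasks_;
    bool stop_ = false;
    std::thread worker_;
};

The posted task would capture the promise (or completion handler) and resolve it when it runs.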
Or... were you looking at coroutines, to do co_await calls in C++?
You cannot write custom async_ overloads in Asio. Asio is not a general-purpose async library; it's a wrapper around epoll/kqueue/IOCP and as such is limited to non-blocking socket I/O.
The good news is that you don't need that, as both std::future<std::string> json_rpc(...) and void async_json_rpc(..., CompletionCallback) can be implemented using the normal Asio async_read API: you either resolve the promise or invoke your callback in the completion handler of the async_read that reads the RPC response from the socket.
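For illustration only, here is a rough sketch of the callback flavour along those lines, using standalone Asio. The class and member names are mine (not an established API), JSON building/parsing and error handling are left out, and everything is assumed to run on the io_context thread:

#include <asio.hpp>
#include <functional>
#include <map>
#include <memory>
#include <string>

class RpcClient {
public:
    explicit RpcClient(asio::io_context& io) : socket_(io) {}

    // Completion-callback overload: the stored handler plays the role that
    // promise.set_value() played in the future-based version.
    void async_json_rpc(std::string request_line, int id,
                        std::function<void(std::string)> handler) {
        pending_[id] = std::move(handler);
        auto msg = std::make_shared<std::string>(std::move(request_line));
        asio::async_write(socket_, asio::buffer(*msg),
            [msg](std::error_code, std::size_t) { /* error handling omitted */ });
    }

    void start_reading() {
        asio::async_read_until(socket_, buffer_, '\n',
            [this](std::error_code ec, std::size_t n) {
                if (ec) return;
                std::string line(asio::buffers_begin(buffer_.data()),
                                 asio::buffers_begin(buffer_.data()) + n);
                buffer_.consume(n);
                int id = parse_id(line);        // JSON parsing not shown
                auto it = pending_.find(id);
                if (it != pending_.end()) {
                    it->second(line);           // the "set_value()" moment
                    pending_.erase(it);
                }
                start_reading();                // keep consuming notifications/responses
            });
    }

private:
    static int parse_id(const std::string&) { return 0; } // placeholder only

    asio::ip::tcp::socket socket_;
    asio::streambuf buffer_;
    std::map<int, std::function<void(std::string)>> pending_;
};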
I'm figuring out how to use futures with non-blocking, event-driven code (whether in a separate thread or not), but how can I resolve the future from a slot (i.e. resolve the promise based on a signal)?
QByteArray RfidCardReader::startTask(QByteArray send)
{
    if (this->busy == false) {
        this->sendFrame(send);
        QObject::connect(this, &RfidCardReader::frameReady,
            [=]() { /* this must be the startTask return */ return this->int_read_buffer; });
    } else {
        throw 0; // Handle a queue instead
    }
}
QFuture<QByteArray> RfidCardReader::send(QByteArray passed_send)
{
    return QtConcurrent::run(QThreadPool::globalInstance(),
                             &RfidCardReader::startTask, this, passed_send);
}
Basically, what I want to do, using only one instance, is wrap a serial device (which is synchronous by nature) in a queue of futures, using only non-blocking code driven by signals like &QIODevice::bytesWritten, &QIODevice::readyRead, etc. If there are better approaches to the problem, let me know; I would be glad to know the right way to write readable async code in Qt without blocking in separate threads.
A serial device is asynchronous by nature, and using the serial port concurrently from multiple threads is undefined behavior. You can certainly resolve futures from any thread, but there's nothing in Qt that will give you a future on the same thread. Recall that a QFuture is not a class that you can sensibly instantiate. The default-constructed class is useless.
To get an idea of how to handle asynchronous serial I/O, see for example this answer.
Then you can use the undocumented <QFutureInterface> header, and create your own implementation that can wrap higher-level aspects of your protocol, i.e. commands/requests. You could then group such futures, and use a single watcher to determine when they are done.
Your approach is quite interesting in fact, and I might develop a complete example.
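In the meantime, a rough sketch of the QFutureInterface idea (Qt 5, undocumented API; the busy flag and queueing from the question are omitted, and the member names are taken from the question's code):

#include <QByteArray>
#include <QFuture>
#include <QFutureInterface>
#include <QObject>
#include <memory>

QFuture<QByteArray> RfidCardReader::send(QByteArray frame)
{
    QFutureInterface<QByteArray> promise;   // shared state behind the returned QFuture
    promise.reportStarted();

    // Resolve the future when the device signals a complete frame,
    // then disconnect so this fires only once.
    auto conn = std::make_shared<QMetaObject::Connection>();
    *conn = QObject::connect(this, &RfidCardReader::frameReady, this,
        [this, promise, conn]() mutable {
            promise.reportResult(this->int_read_buffer);
            promise.reportFinished();
            QObject::disconnect(*conn);
        });

    this->sendFrame(frame);                 // non-blocking write
    return promise.future();
}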
I'm writing a class "Tmt" that sits between a server and clients, communicating through sockets. My Tmt class will receive data from the server, build up a queue internally, and perform some operations on the data in the queue before it is made available to the client.
I have already set up the socket connection, and I can call
receiverData(); // to get data from server
The client will use my class Tmt as follows:
Tmt mytmt;
mytmt.getProcessedData(); // to get one frame
My question is how to let the Tmt class keep receiving data from the server in the background once it is created, and add it to the queue. I have some experience with multithreading in C, but I'm not sure how this "working in the background" concept would be implemented in a class in C++. Please advise, thanks!
One option would be to associate a thread with each instance of the class (perhaps by creating a thread in the constructor). This thread continuously reads data from the network and adds the data to the queue as it becomes available. If the thread is marked private (i.e. class clients aren't aware of its existence), then it will essentially be running "in the background" with no explicit intervention. It would be up to the Tmt object to manage its state.
As for actual thread implementations in C++, you can use good ol' pthreads just fine. However, a much better approach would probably be to use the Boost threading library, which encapsulates all the thread state in its own class. It also offers a whole bunch of synchronization primitives that are just like the pthread versions, but substantially easier to use.
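For instance, a minimal sketch of the "thread in the constructor feeding a queue" idea, using std::thread (as a modern stand-in for pthreads or Boost.Thread) and a hypothetical Frame type for the received data:

#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class Tmt {
public:
    using Frame = std::vector<char>;   // stand-in for whatever a "frame" really is

    Tmt() : worker_([this] { receiveLoop(); }) {}

    ~Tmt() {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            stop_ = true;
        }
        // Unblocking a receiverData() call stuck in a blocking read
        // (e.g. by closing the socket) is left out of this sketch.
        worker_.join();
    }

    // Blocks until a processed frame is available.
    Frame getProcessedData() {
        std::unique_lock<std::mutex> lock(mutex_);
        cv_.wait(lock, [this] { return !queue_.empty(); });
        Frame f = std::move(queue_.front());
        queue_.pop();
        return f;
    }

private:
    void receiveLoop() {
        while (!stopped()) {
            Frame f = receiverData();   // the existing blocking receive
            // ... process the frame here ...
            {
                std::lock_guard<std::mutex> lock(mutex_);
                queue_.push(std::move(f));
            }
            cv_.notify_one();
        }
    }

    bool stopped() {
        std::lock_guard<std::mutex> lock(mutex_);
        return stop_;
    }

    Frame receiverData();   // assumed to exist, as in the question

    std::mutex mutex_;
    std::condition_variable cv_;
    std::queue<Frame> queue_;
    bool stop_ = false;
    std::thread worker_;
};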
Hope this helps!
By the way - I'd recommend just naming the class Transmit. No reason to be overly terse. ;-)
IMHO, multithreading is not the best solution for this kind of class.
Introducing background threads can cause many problems; at the very least, you must guard against creating multiple unnecessary threads. Threads also need explicit initialization and cleanup. For instance, the usual thread cleanup includes a join operation (waiting for the thread to stop), which can cause deadlocks, resource leaks, unresponsive UIs, etc.
Single-threaded asynchronous socket communication could be more appropriate for this scenario.
Let me sketch some sample code for this:
class Tmt {
    ...
public:
    ...
    bool doProcessing()
    {
        receiverData();
        // process data
        // return true if data available
    }
    T getProcessedData()
    {
        // return processed data
    }
};
Tmt class users must run a loop that calls doProcessing() and getProcessedData():
Tmt myTmt;
...
while (needMoreData)
{
    if (myTmt.doProcessing())
    {
        myTmt.getProcessedData();
        ...
    }
}
If Tmt users want background processing, they can easily create another thread and do the Tmt work there. In that case, the thread management work is handled by the Tmt users.
If Tmt users prefer a single-threaded approach, they can do that without any problem.
Note also that the well-known curl library uses this kind of design.
This is a question about generic C++ event-driven application design.
Let's assume that we have two threads, a "Dispatcher" (or "Engine"...) and a "Listener" (or "Client"...).
Let's assume that I write the Dispatcher code, and release it as a library. I also write the Listener interface, of course.
When the Dispatcher executes (after Listener registration)
listenerInstance.onSomeEvent();
the event handling code will actually be executed by the Dispatcher thread, so if the person that implements the Listener writes something like
void Listener::onSomeEvent() { while(true) ; }
the Dispatcher will be stuck forever.
Is there a "plain old C++" way (I mean no Boost or libsigc++) to "decouple" the two classes, so that I can be sure my Dispatcher will work fine whatever the Listener does in its callbacks?
bye and thanks in advance,
Andrea
Well, if the event gets invoked in the same thread (which, as I understand it, can be a requirement), then there isn't much you can do about it.
If this is a Win32 app with a message pump, you could register a Windows message, call PostMessage with data representing this event, and patch the message loop to interpret that message and invoke the event handler. What you gain is a decoupling of sorts: the event call is asynchronous (i.e. the call will return no matter what). But later on, when you process your messages and actually invoke the handler, your main thread will still be stalled and nothing else will run until the handler returns.
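A rough Win32 sketch of that idea (using a WM_APP-based message id for brevity instead of RegisterWindowMessage; the struct and function names are illustrative):

#include <windows.h>

const UINT WM_DISPATCH_EVENT = WM_APP + 1;

struct EventData { int eventId; /* whatever the event carries */ };

// Dispatcher side: returns immediately, regardless of what the handler does later.
void postSomeEvent(HWND listenerWindow, EventData* data)
{
    PostMessage(listenerWindow, WM_DISPATCH_EVENT, 0, reinterpret_cast<LPARAM>(data));
}

// Listener side: inside the window procedure that the message loop drives.
LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    if (msg == WM_DISPATCH_EVENT) {
        EventData* data = reinterpret_cast<EventData*>(lParam);
        // call listenerInstance.onSomeEvent() with *data here
        delete data;
        return 0;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}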
Another alternative is just creating a new thread (or using a thread pool) for your call. This won't work for events that require a certain thread (i.e. UI-updating events). Additionally, this adds synchronization and thread-spawning overhead, and you might starve the system of threads and/or CPU time.
But really, I don't think it's your job as the library designer to anticipate and avoid these problems. If the end user wants to create a long-running event handler, let him spawn a new thread on his own. If he doesn't, and just wants his specific thread to handle an event, let him. It simplifies your job and doesn't add any overhead that's not needed.
I'm afraid there's no native C++ way to do this. For Windows, you can use asynchronous procedure calls (APCs).
One approach could be to call onSomeEvent on a dedicated thread. This is not 100% bulletproof, but it would avoid the while(true); issue.
I hope it helps
There is a pure C++ way to achieve what you're describing. However, it's very inefficient. Here's a sample:
class Listener
{
    // NB: in real code this flag would need to be std::atomic<bool> (or volatile,
    // or lock-protected) to be reliably seen across threads.
    bool myHasEvent;
private:
    void ProcessEvent()
    {
        while (true)
        {
            if (!myHasEvent)
                continue; // spin lock
            // Do real processing
            myHasEvent = false;
        }
    }
public:
    void onSomeEvent() { myHasEvent = true; }
};
However, I would recommend against this approach. Instead, I would transform this into more platform-specific code. I would replace the if (!myHasEvent) continue; spin lock with an OS-specific wait routine (e.g. WaitForSingleObject on Win32) passing an event handle. Then, in onSomeEvent, instead of myHasEvent = true; I would set the event to the signaled state (e.g. SetEvent on Win32). This would be a lot more efficient because the thread wouldn't eat processor time while waiting.
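A rough Win32 sketch of that replacement (error handling omitted; the thread setup is assumed to exist elsewhere):

#include <windows.h>

class Listener
{
    HANDLE myEventHandle;
public:
    Listener()  { myEventHandle = CreateEvent(NULL, FALSE, FALSE, NULL); } // auto-reset event
    ~Listener() { CloseHandle(myEventHandle); }

    void ProcessEvent()   // runs on the Listener's own thread
    {
        while (true)
        {
            WaitForSingleObject(myEventHandle, INFINITE); // sleeps instead of spinning
            // Do real processing
        }
    }

    void onSomeEvent() { SetEvent(myEventHandle); }       // called by the Dispatcher
};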
Another method is the PostMessage as suggested by Blindly.
I am trying to handle a socket in different threads, which creates a runtime failure. See the following code:
void MySocket::Lock()
{
    m_LockCount++;
    if (m_LockCount)
    {
        CSocket::Create(8080);
    }
}

void MySocket::Unlock()
{
    m_LockCount--;
    if (!m_LockCount)
    {
        CSocket::Close();
    }
}
I am calling Lock() from one thread and Unlock() from other. When it executes CSocket::Close() it gives an exception.
I googled this bug and found some explanations.
This happens because a CSocket object should be used only in the context of a single thread, since the SOCKET handle encapsulated by a CAsyncSocket object is stored in a per-thread handle map. A suggested solution is to share the SOCKET handle between threads (http://support.microsoft.com/kb/175668), but this is not possible in my case since I am expecting some notification callbacks which will not work with that solution. Can anybody suggest a mechanism to share a CSocket among threads without affecting the notification callbacks?
You could just use the socket directly and stop using the obviously flawed MFC implementation...
If, as you say, "a CSocket object should be used only in the context of a single thread," then there is no "mechanism to share CSocket among threads".
In other words, one of the threads needs to own the CSocket, and the others can't mess with it.
In such cases, the solution is to use an inter-thread messaging system. That way, one of the other threads can send a message to the owner saying, "Hey, buddy, close your socket!"
The details of how you would do that messaging depend entirely on the context of your program.
I would advise you to use some higher-level (and less buggy) socket API like Boost.Asio. Note that it does not make sockets thread-safe anyway (see there). You have to use some lock/unlock facility.
I am not sure I understand your question about sharing sockets among threads without using notification callbacks. Between threads T1 and T2, supposing T1 manages a socket, there are only two ways for T2 to become aware of a socket event: either a notification issued by T1, or a question asked by T2 of T1, either on a regular basis (polling) or in a blocking call.