I have a dispatcher thread and a listener thread. When I dispatch a command, I want to wait for the response before I send a follow-up command. Moreover, I need to examine the response before I can proceed with the 2nd command, at the very least to confirm that the response was received and everything is okay. My pseudo code is below:
void MainWindow::downloadData()
{
    dispatcher->getInfo();  // sends the first command
    // QString response = receiver->response(); // ideally I would check the response here,
    //                                          // but since it's async, I can't really do that!
    dispatcher->askData();  // the 2nd command, and so forth
}
Is there any elegant way to solve this issue? The only way I can think of is to use the same thread and make all calls blocking, but that's not necessarily a good solution.
In Qt, I could use signals and slots and connect them in a cascading manner, so that when the first signal is triggered it initiates the whole sequence of operations (each slot emitting a new signal), but that seems rather dirty as well.
One of the most robust ways to handle asynchronous events and process chains/graphs of actions upon those events is an FSM. Qt provides a basis for implementing FSMs with its State Machine Framework, and I'd suggest going this way. Unfortunately, all the examples Qt provides for FSMs deal with GUIs and animations.
The advantage of the FSM approach is that FSMs can be represented both as graphs and as tables. The first form is great for understanding; the second is great for validating that there are no endless loops or dead ends.
On the basis of the Qt FSM framework I've built my own framework for defining FSMs in a domain-specific language. I use it for controlling a complex machine with a couple of sensors and actuators, all working asynchronously. Using a DSL helps me implement at a higher level of abstraction - the level of FSM graphs.
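Here is a minimal sketch of what such a state machine could look like for the original question's sequence - plain Qt, not my DSL framework. It assumes dispatcher has getInfo()/askData() slots and receiver emits a responseReceived(QString) signal; those names come from the question's pseudo code and are otherwise placeholders.

#include <QFinalState>
#include <QObject>
#include <QState>
#include <QStateMachine>

QStateMachine *buildDownloadMachine(QObject *parent,
                                    QObject *dispatcher,
                                    QObject *receiver)
{
    QStateMachine *machine = new QStateMachine(parent);

    QState *waitingForInfo = new QState(machine);
    QState *waitingForData = new QState(machine);
    QFinalState *done      = new QFinalState(machine);

    // Each state entry fires the next command.
    QObject::connect(waitingForInfo, SIGNAL(entered()), dispatcher, SLOT(getInfo()));
    QObject::connect(waitingForData, SIGNAL(entered()), dispatcher, SLOT(askData()));

    // Advance only when the receiver reports a response; add transitions to an
    // error state here if the response can indicate failure.
    waitingForInfo->addTransition(receiver, SIGNAL(responseReceived(QString)), waitingForData);
    waitingForData->addTransition(receiver, SIGNAL(responseReceived(QString)), done);

    machine->setInitialState(waitingForInfo);
    machine->start();   // runs asynchronously inside the normal Qt event loop
    return machine;
}

Each state can also get a transition to an error state, which is where the tabular validation mentioned above starts to pay off.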
Related
I have a situation where I have a single Emitter object and a set of Receivers. The receivers are of the same class, and actually represent a set of devices of the same type. I'm using the Qt framework.
The Emitter itself first gets a signal asking for information from one of the devices.
In the corresponding slot, the Emitter has to check to see which of the Receivers are 'ready', and then send its own signal requesting data from one of the devices (whichever is ready first).
The Emitter receives signals very quickly, on the order of milliseconds. There are three ways I can think of safely requesting data from only one of the devices (the devices live in their own threads, so I need a thread-safe mechanism). The number of devices isn't static, and can change. The total number of devices is quite small (definitely under 5-6).
1) Connect to all the devices when they are added or removed. Emit the one request and have the device objects themselves filter out whether the request is for them using some specific device tag. This method is nice because the request slot where the check occurs will execute in a dedicated thread's context, but it becomes wasteful as the number of devices goes up.
2) Connect and disconnect from the object within the Emitter on the fly when it's necessary to send a request.
3) Use QMetaObject::invokeMethod() when it's necessary to send a request.
Performance is important. Does anyone know which method is the 'best', or if there's a better one altogether?
Regards
Pris
Note: To clarify: Emitter gets a signal from the application, to get info by querying the device. Crazy ASCII art go:
(app)<---->(emitter)<------>(receivers)<--|-->physical devices
Based on the information you have provided I would still recommend a Reactor implementation. If you don't use ACE then you can implement your own. The basic architecture is as follows:
Use select to wake up when a signal or data is received from the App.
If there is a socket ready on the sending list, then you just pick one and send it data.
When data is sent, the Receiver removes itself from the set of sockets/handlers that are available.
When data is processed, the Receiver re-registers itself on the list of available recipients.
The reason I suggested ACE is because it has one of the simplest to use implementations of the Reactor pattern.
I'm assuming here that this is a multi-threaded environment.
If you are restricted to Qt's signal/slot system between threads, then here are the answers to your specific questions:
1) is definitely not the way to go. On an emit from the Emitter, a number of events equal to the number of Receivers will be queued in the event loops of the device threads, and then the same number of slot calls will occur once those threads reach the events. Even if most of the slots just do if (id != m_id) return; on their first line, that is a significant amount of work going on in the core of Qt. Place a breakpoint in one of your slots that is invoked by a Qt::QueuedConnection signal and verify this by looking at the actual stack trace: it's usually at least 4 calls deep from xyEventLoop::processEvents(...), so "just returning" is definitely not "free" in terms of time.
2) I'm not sure how Qt's inner implementation actually works, but from what I know, connecting and disconnecting most likely involves inserting and removing the sender and receiver into/from some lists, which are most likely accessed under QMutex locking. So this might also be "expensive" time-wise, and rapidly connecting and disconnecting is definitely not a best practice.
3) Probably the least expensive solution time-wise that still uses Qt's signal-slot system (see the sketch after this list).
optionally) Take a look at QSignalMapper. It is designed exactly for what you planned to do in option 1).
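For illustration, a rough sketch of option 3). It assumes the chosen device object declares a slot (or Q_INVOKABLE method) named requestData(int) - a hypothetical name - and lives in its own thread:

#include <QMetaObject>
#include <QObject>

void requestFromDevice(QObject *readyDevice, int requestId)
{
    // Qt::QueuedConnection posts the call into the device thread's event loop,
    // so this is thread-safe without any connect/disconnect churn.
    QMetaObject::invokeMethod(readyDevice, "requestData",
                              Qt::QueuedConnection,
                              Q_ARG(int, requestId));
}

The Emitter just calls this with whichever Receiver its own bookkeeping says is ready first.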
There are more optimal ways to communicate between your Emitter and Receivers, but as a best practice I'd first choose the option that is easiest to use and fastest to implement, yet has a chance of being fast enough at run-time (that is option 3). Then, when it's done, see if it meets your performance requirements. If it does not, and only then, consider using shared memory with mutexes in a data provider / data consumer architecture (the Emitter thread rapidly posts request data into a circular list, the Receiver thread(s) read them whenever they have time and post results back in a similar way, while the Emitter thread constantly polls for finished results).
I'm writing a daemon that needs to both run in the background taking care of tasks and also receive input directly from a frontend. I've been attempting to use sockets for this, however I can't get it to work properly, since sockets pause the program while waiting for a connection. Is there any way to get around this?
I'm using the socket wrappers provided at http://linuxgazette.net/issue74/tougher.html
Thank you for any and all help
You will need to use threads to make the socket operations asynchronous, or use some library that has already implemented it; one of the best known is Boost.Asio.
There are a few ways to handle this problem. The most common is to use an event loop and something like libevent, together with non-blocking sockets.
Doing this in an event-driven fashion can require a big shift in your program logic, but doing it with threads has its own complexities and isn't clearly a better choice.
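To make the non-blocking idea concrete, here is a minimal single-threaded sketch built around select() with a timeout, so neither the listening socket nor the background work ever blocks the other. POSIX sockets are assumed, and handle_frontend_command()/do_background_tasks() are placeholders for your own code:

#include <sys/select.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <unistd.h>

void handle_frontend_command(int client_fd)
{
    // read the request, act on it, write the reply, then close(client_fd)
}

void do_background_tasks()
{
    // the daemon's normal periodic work
}

// listen_fd is a socket already set up with socket()/bind()/listen().
void daemon_loop(int listen_fd)
{
    for (;;)
    {
        fd_set readfds;
        FD_ZERO(&readfds);
        FD_SET(listen_fd, &readfds);

        timeval tv;
        tv.tv_sec = 0;
        tv.tv_usec = 100 * 1000;   // wake up at least every 100 ms

        int n = select(listen_fd + 1, &readfds, NULL, NULL, &tv);
        if (n > 0 && FD_ISSET(listen_fd, &readfds))
        {
            int client = accept(listen_fd, NULL, NULL);   // will not block here
            if (client >= 0)
                handle_frontend_command(client);
        }

        do_background_tasks();   // runs regardless of frontend activity
    }
}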
Daemons usually use event loops to avoid blocking while waiting for events.
It's the smartest solution to the problem you present (not blocking on an asynchronous event).
However, the entire daemon is usually built on top of the event loop and its callback architecture, so adopting one can mean a partial rewrite. That's why the quick and dirty solution is often to create a separate thread to handle those events, which usually creates more bugs than it solves. So, use an event loop:
libevent.
glib event loop.
libev.
boost::asio
...
From your description, you have already divided your application into a frontend (receiving input) and a backend (socket handling and tasks). If the input from the frontend is sent over the socket (via the backend) rather than received from the socket, then it seems like you are describing a client and not a server. Client programs are typically not implemented as daemons.
You have created a blocking socket and need to either monitor it in a separate context of execution (a thread or even a separate process) or make the socket non-blocking and poll it frequently for updates.
The LinuxGazette link is a basic intro to network programming. If you would like a little more depth, take a look at Beej's Guide to Network Programming, where the various API calls available to you are explained in a little more detail - and which will, perhaps, make you appreciate wrapper libraries such as Boost::ASIO even more.
It can be worth retaining control of the event loop yourself - it's not complicated, and it provides flexibility down the track.
"C++ pseudo-code" for an event loop.
while (!done)
{
    bool workDone = false;

    // Loop over each event source or internal worker.
    for (auto& module : modules)
    {
        // If it has work to do, do some.
        if (module.hasWorkToDo())
        {
            // Generally, do as little work as possible, e.g. process a single
            // event for this module. But tinker with this to manage priorities
            // if need be, e.g. maybe allow the GUI to flush its queue.
            module.doSomeWork();
            workDone = true;
        }
    }

    if (!workDone)
    {
        // System idle. Sleep for a bit so we have benign idle behaviour.
        nanosleep(...);
    }
}
This is somewhat related to this question, but I think I need to know a little bit more. I've been trying to get my head around how to do this for a few days (whilst working on other parts), but the time has come for me to bite the bullet and get multi-threaded. Also, I'm after a bit more information than the question linked.
Firstly, about multi-threading. As I have been testing my code, I've not bothered with any multi-threading. It's just a console application that starts a connection to a test server and everything else is then handled. The main loop is this:
while (true)
{
    Root::instance().performIO(); // calls io_service::run_one();
}
When I write my main application, I'm guessing this solution won't be acceptable, as it would have to be called in the message loop, which, whilst possible, would have issues when the message queue blocks waiting for a message. (You could change it so that the message loop doesn't block, but then isn't that going to send the CPU usage through the roof?)
The solution, it seems, is to throw another thread at it. Okay, fine. But then I've read that io_service::run() returns when there is no work to do. What does that mean? Is it when there's no data, or no connections? If at least one connection exists, does it stay alive? If so, that's not much of a problem, as I only have to start up a new thread when the first connection is made, and I'm happy if it all stops when there is nothing going on at all. I guess I'm confused by the definition of 'no work to do'.
Then I have to worry about synchronizing my boost thread with my main GUI thread. So, I guess my questions are:
What is the best-practice way of using boost::asio in a client application with regard to threads and keeping them alive?
When writing to a socket from the main thread to the IO thread, is synchronization achieved using boost::asio::post, so that the call happens later in the io_service?
When data is received, how do people get the data back to the UI thread? In the past when I used completion ports, I made a special event that could post the data back to the main UI thread using a ::SendMessage. It wasn't elegant, but it worked.
I'll be reading some more today, but it would be great to get a heads up from someone who has done this already. The Boost::asio documentation isn't great, and most of my work so far has been based on a bit of the documentation, some trial/error, some example code on the web.
1) Have a look at io_service::work. As long as a work object exists, io_service::run will not return. So when you start your clean-up, destroy the work object, cancel any outstanding operations (for example an async_read on a socket), wait for run to return, and then clean up your resources (see the sketch below).
2) io_service::post will asynchronously execute the given handler from a thread running the io_service. A callback can be used to get the result of the operation executed.
3) You need some form of messaging system to inform your GUI thread of the new data. There are several possibilities here.
As for your remark about the documentation, I think Asio is one of the better documented Boost libraries, and it comes with clear examples.
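To make points 1) and 2) concrete, here is a minimal sketch using the (older) io_service/work API discussed in this thread; handle_write_request() is a placeholder for whatever actually starts the socket write:

#include <boost/asio.hpp>
#include <boost/thread.hpp>
#include <boost/scoped_ptr.hpp>

// Runs the io_service; returns once the work object is gone and
// no handlers are pending.
void run_io(boost::asio::io_service *io)
{
    io->run();
}

void handle_write_request()
{
    // e.g. kick off an async_write on a connected socket here
}

int main()
{
    boost::asio::io_service io;

    // 1) As long as a work object exists, io.run() will not return,
    //    even when no async operations are currently pending.
    boost::scoped_ptr<boost::asio::io_service::work> work(
        new boost::asio::io_service::work(io));

    boost::thread io_thread(&run_io, &io);

    // 2) post() schedules a handler to execute later, inside the I/O thread.
    //    Anything that touches sockets owned by that thread goes through here.
    io.post(&handle_write_request);

    // Clean-up as described in 1): drop the work object, let run() finish any
    // outstanding handlers and return, then join the I/O thread.
    work.reset();
    io_thread.join();
    return 0;
}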
boost::asio::io_service::run() will return only when there's nothing to do, i.e. no async operations are pending: async accept/connect, async read/write, or async timer waits. So before calling io_service::run() you first have to start some async operation.
I didn't catch whether you have a console or a GUI app. In any case, multithreading looks like overkill: you can use Asio in conjunction with your message loop. If it's a Win32 GUI, you can call io_service::run_one() from your OnIdle() handler. In the case of a console application, you can set up a deadline_timer that regularly checks (every 200 ms?) for user input and use it together with io_service::run(). Everything stays in a single thread, which greatly simplifies the solution.
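A minimal sketch of that single-threaded console variant; check_user_input() stands in for your own non-blocking input poll:

#include <boost/asio.hpp>
#include <boost/bind.hpp>

void check_user_input()
{
    // poll stdin / your command queue here without blocking
}

void on_timer(const boost::system::error_code &ec,
              boost::asio::deadline_timer *timer)
{
    if (ec)
        return;                                   // timer was cancelled
    check_user_input();
    timer->expires_from_now(boost::posix_time::milliseconds(200));
    timer->async_wait(boost::bind(&on_timer, boost::asio::placeholders::error, timer));
}

int main()
{
    boost::asio::io_service io;

    boost::asio::deadline_timer timer(io, boost::posix_time::milliseconds(200));
    timer.async_wait(boost::bind(&on_timer, boost::asio::placeholders::error, &timer));

    // ... start the async socket operations on the same io_service here ...

    io.run();   // one thread drives both the timer and the socket I/O
    return 0;
}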
1) What is the best-practice way of using boost::asio in a client application with regard to threads and keeping them alive?
As the documentation suggests, a pool of threads invoking io_service::run is the most scalable and easiest to implement.
2) When writing to a socket from the main thread to the IO thread, is synchronization achieved using boost::asio::post, so that the call happens later in the io_service?
You will need to use a strand to protect any handlers that can be invoked by multiple threads. See this answer as it may help you, as well as this example.
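A rough sketch of 1) and 2) combined - a small pool of threads running the io_service, with a strand serialising every handler that touches the socket (the actual async_write is left as a placeholder):

#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/thread.hpp>
#include <string>

boost::asio::io_service io;
boost::asio::io_service::strand socket_strand(io);

void run_io()
{
    io.run();
}

void do_write(std::string payload)
{
    // issue the real async_write with `payload` here; wrap its completion
    // handler with socket_strand.wrap(...) as well
}

// Called from the GUI thread: the bound handler runs in one of the pool
// threads, but never concurrently with other handlers on the same strand.
void queue_write(const std::string &payload)
{
    socket_strand.post(boost::bind(&do_write, payload));
}

int main()
{
    // Keeps run() from returning while the pool is otherwise idle.
    boost::asio::io_service::work work(io);

    boost::thread_group pool;
    for (int i = 0; i < 4; ++i)
        pool.create_thread(&run_io);

    // ... connect, start async_reads with socket_strand.wrap()-ed handlers,
    //     and call queue_write() from the GUI thread as needed ...

    io.stop();        // in a real app, prefer dropping the work object and
    pool.join_all();  // letting outstanding handlers drain first
    return 0;
}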
3) When data is received, how do people get the data back to the UI thread? In the past when I used completion ports, I made a special event that could post the data back to the main UI thread using a ::SendMessage. It wasn't elegant, but it worked.
How about providing a callback in the form of a boost::function when you post an asynchronous event to the io_service? Then the event's handler can invoke the callback and update the UI with the results.
When data is received, how do people get the data back to the UI thread? In the past when I used completion ports, I made a special event that could post the data back to the main UI thread using a ::SendMessage. It wasn't elegant, but it worked.
::PostMessage may be more appropriate.
Unless everything runs in one thread these mechanisms must be used to safely post events to the UI thread.
I have a remote server which handles various different commands, one of which is an event fetching method.
The event fetch returns right away if there are one or more events in the queue ready for processing. If the event queue is empty, this method does not return until a timeout of a few seconds. This way I don't run into any HTTP/socket timeouts. The moment an event becomes available, the method returns right away. This way the client only ever makes connections to the server, and the server does not have to make any connections to the client.
This event mechanism works nicely. I'm using the boost library to handle queues, event notifications, etc.
Here's the problem: while the server is holding back on returning from the event fetch method, I can't issue any other commands during that time.
In the source code, XmlRpcDispatch.cpp, I see in the "work" method a simple loop that uses a blocking call to "select".
It seems that while the handling of one method is busy, no other requests are processed.
Question: am I not seeing something and can XmlRpcpp (xmlrpc++) handle multiple requests asynchronously? Does anyone know of a better xmlrpc library for C++? I don't suppose the Boost library has a component that lets me issue remote commands?
I actually don't care about the XML or over-HTTP feature. I simply need to issue (asynchronous) commands over TCP in any shape or form.
I look forward to any input anyone might offer.
I had some problems with XMLRPC too, and investigated many solutions like GSoap and XMLRPC++, but in the end I gave up and wrote the whole HTTP+XMLRPC stack from scratch using Boost.ASIO and TinyXML++ (later I swapped TinyXML for Expat). It wasn't really that much work; I did it myself in about a week, starting from scratch and ending up with many RPC calls fully implemented.
Boost.ASIO gave great results. It is, as its name says, totally async, with excellent performance and little overhead, which to me was very important because it was running in an embedded environment (MIPS).
Later, and this might be your case, I changed XML to Google's Protocol Buffers, and was even happier. Its API, as well as its message containers, are all type-safe (i.e. you send an int and a float, and it never gets converted to a string and back, as is the case with XML), and once you get the hang of it, which doesn't take very long, it's a very productive solution.
My recommendation: if you can ditch XML, go with Boost.ASIO + Protobuf. If you need XML: Boost.ASIO + Expat.
Doing this stuff from scratch is really worth it.
I'm programming an online game for two reasons: one, to familiarize myself with server/client requests in a realtime environment (as opposed to something like a typical web browser, which is not realtime), and two, to actually get my hands dirty in that area so I can proceed to properly design one.
Anywho, I'm doing this in C++, and I've been using Winsock to handle my basic, basic network tests. I obviously want to use a frame limiter and have 3D going and all of that at some point, and my main issue is that when I do a send() or recv(), the program kindly idles there and waits for a response. That would lead to maybe 8 fps on even the best internet connection.
So the obvious solution to me is to take the networking code out of the main process and start it up in its own thread. Ideally, I would call a "send" in my main process which would pass the networking thread a pointer to the message, and then periodically (every frame) check to see if the networking thread had received the reply, or timed out, or what have you. In a perfect world, I would actually have 2 or more networking threads running simultaneously, so that I could say run a chat window and do a background download of a piece of armor and still allow the player to run around all at once.
The bulk of my problem is that this is a new thing to me. I understand the concept of threading, but I can see some serious issues, like what happens if two threads try to read/write the same memory address at the same time, etc. I know that there are already methods in place to handle this sort of thing, so I'm looking for suggestions on the best way to implement something like this. Basically, I need thread A to be able to start a process in thread B by sending a chunk of data, poll thread B's status, and then receive the reply, also as a chunk of data, ideally without any major crashing going on. ^_^ I'll worry about what that data actually contains and how to handle dropped packets, etc. later; I just need to get that happening first.
Thanks for any help/advice.
PS: I just thought about this, and it may make the question simpler. Is there a way to use the Windows event handling system to my advantage? Like, would it be possible to have thread A initialize data somewhere, then trigger an event in thread B to have it pick up the data, and vice versa for thread B to tell thread A it was done? That would probably solve a lot of my problems, since I don't really need both threads to be able to work on the data at the same time - it's more of a baton pass, really. I just don't know if this is possible between two different threads. (I know one thread can create its own messages for the event handler.)
The easiest thing
for you to do would be to simply invoke the Windows API QueueUserWorkItem. All you have to specify is the function the thread will execute and the input passed to it. A thread pool will be created for you automatically and the jobs executed in it; new threads will be created as and when required.
http://msdn.microsoft.com/en-us/library/ms684957(VS.85).aspx
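A minimal sketch of how that looks; NetJob and its payload are made up for illustration:

#include <windows.h>

struct NetJob
{
    char payload[256];   // whatever the request needs: destination, data, etc.
};

// Runs on a system thread-pool thread.
DWORD WINAPI NetworkWorker(LPVOID param)
{
    NetJob *job = static_cast<NetJob*>(param);
    // ... send job->payload over the socket and wait for the reply ...
    delete job;          // the worker owns the job once it has been queued
    return 0;
}

void QueueSend(const NetJob &request)
{
    NetJob *job = new NetJob(request);
    if (!QueueUserWorkItem(NetworkWorker, job, WT_EXECUTEDEFAULT))
    {
        delete job;      // queueing failed; reclaim it and check GetLastError()
    }
}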
More Control
You can have more detailed control using another set of APIs which can again manage the thread pool for you -
http://msdn.microsoft.com/en-us/library/ms686980(VS.85).aspx
Do it yourself
If you want to control all aspects of thread creation and pool management, you would have to create the threads yourself and decide how they should end, how many to create, etc. (_beginthreadex is the API you should use to create the threads; if you use MFC, you should use the AfxBeginThread function).
Send jobs to worker threads - I/O completion ports
In this case, you would also have to worry about how to communicate your jobs - I would recommend I/O completion ports for that. They are the most scalable notification mechanism I currently know of for this purpose, and they have the additional advantage of being implemented in the kernel, so you avoid all kinds of deadlock situations you would encounter if you decided to hand-roll something yourself.
This article will show you how with code samples -
http://blogs.msdn.com/larryosterman/archive/2004/03/29/101329.aspx
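Here is a rough sketch of the pattern the article describes - a completion port used purely as a thread-safe job queue, with the completion key carrying the job id (encoding a pointer in the key works just as well):

#include <windows.h>

HANDLE g_jobPort = NULL;

DWORD WINAPI Worker(LPVOID)
{
    for (;;)
    {
        DWORD bytes = 0;
        ULONG_PTR key = 0;
        LPOVERLAPPED ov = NULL;
        if (!GetQueuedCompletionStatus(g_jobPort, &bytes, &key, &ov, INFINITE))
            continue;
        if (key == 0)                      // our "shut down" signal
            break;
        // `key` (or a struct you pass through it) describes the job
        // ... do the network work here ...
    }
    return 0;
}

void Setup()
{
    // Not bound to any file handle: used only for queuing work.
    g_jobPort = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, 0);
    CreateThread(NULL, 0, Worker, NULL, 0, NULL);   // check for errors in real code
}

void PostJob(ULONG_PTR jobId)
{
    // Wakes exactly one worker; completely thread-safe.
    PostQueuedCompletionStatus(g_jobPort, 0, jobId, NULL);
}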
Communicate Back - Windows Messages
You can use Windows messages to communicate status back to your parent thread, since it is doing the message wait anyway. Use the PostMessage function to do this (and check for errors).
PS: You could also allocate the data that needs to be sent out behind a dedicated pointer, and then the worker thread could take care of deleting it after sending it out. That way you avoid the return-pointer traffic too.
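A small sketch of that hand-off, including the ownership rule from the PS above; WM_APP_NETRESULT is a made-up message id that your window procedure would handle:

#include <windows.h>
#include <string>

const UINT WM_APP_NETRESULT = WM_APP + 1;

// Worker thread: never touch the GUI directly, just post the result.
void ReportResult(HWND mainWindow, const std::string &reply)
{
    std::string *result = new std::string(reply);
    if (!PostMessage(mainWindow, WM_APP_NETRESULT, 0,
                     reinterpret_cast<LPARAM>(result)))
        delete result;              // post failed; don't leak
}

// GUI thread: called from the existing window procedure / message loop.
LRESULT HandleNetResult(LPARAM lParam)
{
    std::string *result = reinterpret_cast<std::string*>(lParam);
    // ... update the UI with *result ...
    delete result;                  // the GUI thread owns it now
    return 0;
}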
BlodBath's suggestion of non-blocking sockets is potentially the right approach.
If you're trying to avoid a multithreaded approach, then you could investigate setting up overlapped I/O on your sockets. They will not block when you do a transmit or receive, with the added bonus of giving you the option of waiting for multiple events within your single event loop. When your transmit has finished, you will receive an event. (See this for some details.)
This is not incompatible with a multithreaded approach, so there's the option of changing your mind later. ;-)
On the design of your multithreaded app, the best thing to do is to work out all of the external activities that you want to be alerted to. For example, so far in your question you've listed network transmits, network receives, and user activity.
Depending on the number of concurrent connections you're going to be dealing with you'll probably find it conceptually simpler to have a thread per socket (assuming small numbers of sockets), where each thread is responsible for all of the processing for that socket.
Then you can implement some form of messaging system between your threads as RC suggested.
Arrange your system so that when a message is sent to a particular thread, an event is also raised. Your threads can then be put to sleep waiting for one of those events (as well as any other stimulus - socket events, user events, etc.).
You're quite right that you need to be careful of situations where more than one thread is trying to access the same piece of memory. Mutexes and semaphores are the things to use there.
Also be aware of the limitations that your GUI has when it comes to multithreading.
Some discussion on the subject can be found in this question.
But the abbreviated version is that most GUIs (and Windows is one of these) don't allow multiple threads to perform GUI operations simultaneously. To get around this problem you can make use of the message pump in your application, by sending custom messages to your GUI thread to get it to perform GUI operations.
I suggest looking into non-blocking sockets for the quick fix. With non-blocking sockets, send() and recv() do not block, and using the select() function you can pick up any waiting data every frame.
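A minimal Winsock sketch of that per-frame poll (WSAStartup() and connection setup are assumed to have been done already, error handling trimmed):

#include <winsock2.h>

void MakeNonBlocking(SOCKET s)
{
    u_long enable = 1;
    ioctlsocket(s, FIONBIO, &enable);   // send()/recv() will no longer block
}

// Called once per frame from the game loop; never blocks.
void PollNetwork(SOCKET s)
{
    fd_set readSet;
    FD_ZERO(&readSet);
    FD_SET(s, &readSet);

    timeval zero = { 0, 0 };            // zero timeout: return immediately
    if (select(0, &readSet, NULL, NULL, &zero) > 0 && FD_ISSET(s, &readSet))
    {
        char buffer[1024];
        int received = recv(s, buffer, sizeof(buffer), 0);
        if (received > 0)
        {
            // ... hand the bytes to your protocol / message handler ...
        }
        // received == 0 means the peer closed; SOCKET_ERROR -> WSAGetLastError()
    }
}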
See it as a producer-consumer problem: when receiving, your network communication thread is the producer whereas the UI thread is the consumer. When sending, it's just the opposite. Implement a simple buffer class which gives you methods like push and pop (pop should be blocking for the network thread and non-blocking for the UI thread).
Rather than using the Windows event system, I would prefer something that is more portable, for example Boost condition variables.
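A minimal sketch of such a buffer, written here with standard C++ mutexes and condition variables (the Boost equivalents are nearly identical): pop() blocks, try_pop() does not, so the network thread can wait while the UI thread just polls once per frame.

#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>

class MessageBuffer
{
public:
    void push(const std::string &msg)
    {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(msg);
        }
        ready_.notify_one();
    }

    // Blocking pop: used by the network thread, which has nothing else to do.
    std::string pop()
    {
        std::unique_lock<std::mutex> lock(mutex_);
        ready_.wait(lock, [this] { return !queue_.empty(); });
        std::string msg = queue_.front();
        queue_.pop();
        return msg;
    }

    // Non-blocking pop: used by the UI thread once per frame.
    bool try_pop(std::string &msg)
    {
        std::lock_guard<std::mutex> lock(mutex_);
        if (queue_.empty())
            return false;
        msg = queue_.front();
        queue_.pop();
        return true;
    }

private:
    std::mutex mutex_;
    std::condition_variable ready_;
    std::queue<std::string> queue_;
};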
I don't code games, but I've used a system similar to what pukku suggested. It lends itself nicely to things like having the buffer prioritize your messages to be processed, if you have such a need.
I think of them as mailboxes, one per thread. You want to send a packet? Have the ProcessThread create a "thread message" with the payload to go on the wire and "send" it to the NetworkThread (i.e. push it onto the NetworkThread's queue/mailbox and signal the NetworkThread's condition variable so it will wake up and pull it off). When the NetworkThread receives the response, it packages it up in a thread message and sends it back to the ProcessThread in the same manner. The difference is that the ProcessThread won't be blocked on a condition variable; it just polls mailbox.empty() when it wants to check for the response.
You may want to push and pop directly, but a more convenient way for larger projects is to implement a toThreadName/fromThreadName scheme in a ThreadMsg base class, plus a PostOffice that threads register their Mailbox with. The PostOffice then has a send(ThreadMsg*) function that routes messages to the appropriate Mailbox based on the to and from fields. Mailbox (the buffer/queue class) provides ThreadMsg* receiveMessage(), basically popping it off the underlying queue.
Depending on your needs, you could have ThreadMsg contain a virtual process(...) function that is overridden accordingly in derived classes, or just have an ordinary ThreadMessage class with to and from members and a getPayload() function to get back the raw data and deal with it directly in the ProcessThread.
Hope this helps.
Some topics you might be interested in:
mutex: A mutex allows you to lock access to specific resources for one thread only
semaphore: A way of limiting how many users (threads) can access a certain resource at the same time, and of tracking how many are currently using it. A mutex is a special case of a semaphore.
critical section: a mutex-protected piece of code (street with only one lane) that can only be travelled by one thread at a time.
message queue: a way of distributing messages in a centralized queue
inter-process communication (IPC) - a way for threads and processes to communicate with each other through named pipes, shared memory and many other mechanisms (it's more of a concept than a specific technique)
All of these topics can easily be looked up on a search engine.
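For illustration, a tiny example tying the first three terms together - the mutex-protected block is the critical section, and only one thread can be inside it at a time:

#include <mutex>
#include <thread>
#include <vector>

std::mutex counterMutex;
long sharedCounter = 0;

void worker()
{
    for (int i = 0; i < 100000; ++i)
    {
        std::lock_guard<std::mutex> lock(counterMutex);  // enter the critical section
        ++sharedCounter;                                 // safe: one thread at a time
    }                                                    // lock released here
}

int main()
{
    std::vector<std::thread> threads;
    for (int i = 0; i < 4; ++i)
        threads.emplace_back(worker);
    for (auto &t : threads)
        t.join();
    // sharedCounter is exactly 400000 because the increments were serialized
    return 0;
}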