First, what I did (a minimal sample will be provided if it's not just me doing something plain stupid):
I have a GUI application that shall support several network interfaces for changing the content that is displayed in the GUI. The network interfaces are realized as plugins that are dynamically loaded on GUI startup. The GUI application provides a boost::asio::io_service object that it passes by reference to the interfaces so they can use it to build their asynchronous I/O. In the GUI thread this io_service object is then polled to synchronize the network interfaces' access to the content.
The problem now is that the handlers don't get called by the io_service object when it is polled. To narrow this down, I implemented only one interface and created the io_service object therein, still calling poll from the GUI thread, and that works.
My question now is: is it possible that there is a general problem with passing the io_service object into DLL functions loaded at runtime?
If the scenario is too unclear, I'll provide a minimum example.
EDIT: I feel really stupid :) Just hacked together a minimum example and that - of course - works like a charm. That pretty much means the problem originates in some other part of the software.
So thanks everyone for their input!
To make this question at least a little bit useful:
Anyone who wants to do something similar (plugins for networking synchronized via a boost::asio::io_service) can download the minimum example here.
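The core of the pattern, as a minimal sketch (the class and function names are illustrative, not from the real project):

#include <boost/asio.hpp>

// Interface each plugin DLL implements. The io_service is owned by
// the GUI application and handed in by reference, never copied.
class NetworkPlugin {
public:
    virtual ~NetworkPlugin() {}
    virtual void start(boost::asio::io_service &io) = 0;
};

// Called from the GUI's idle/timer handler; drains the ready
// handlers so all plugin callbacks run on the GUI thread.
void on_gui_idle(boost::asio::io_service &io)
{
    io.poll();   // executes ready handlers, then returns immediately
}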
I would check several options:
* Maybe the object is copied at some point rather than passed by reference; you can make it boost::noncopyable to prevent this from happening.
* Check the return value of poll(): if it is greater than 0, some handler was run; if it is 0, Boost thinks there are no handlers pending (see the sketch after this list).
* Add a test handler in your GUI app to rule out the possibility that it is a DLL-related problem.
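A minimal sketch of the last two checks (the function name is illustrative):

#include <cstddef>
#include <iostream>
#include <boost/asio.hpp>

void gui_tick(boost::asio::io_service &io)
{
    // Test handler posted from the GUI side: if this one fires but
    // the plugin's handlers never do, the plugin is most likely
    // talking to a different io_service instance.
    io.post([] { std::cout << "test handler ran\n"; });

    // poll() returns the number of handlers it executed.
    std::size_t n = io.poll();
    if (n == 0)
        std::cout << "poll() found no ready handlers\n";
}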
Happy debugging!
Related
I am creating a simple online chat with the server and client in one application. I have written the client side, but I don't know the correct way to use QTcpServer.
Do I need to create the QTcpServer in a new thread, so that I can connect to it as a client from the same application? If yes, how do I do that? Or is that an unnecessary idea?
Do I need to create a new thread for every new connection in order to process it?
I am developing this chat as a course project for university.
Assuming you are using Qt's networking APIs, you don't need to use multiple threads. The reason is that Qt's APIs are designed around a non-blocking event-loop model, so it is expected that no function call should ever take more than a negligible amount of time (e.g. a few milliseconds) to return, after which the main thread's QEventLoop resumes execution and can therefore handle other tasks in a timely manner, all from within a single thread.
That said, there are a few optional methods in the Qt API that are blocking, and in a single-threaded application, calling those methods risks making your application unresponsive for however long it takes those methods to return. Fortunately those methods aren't necessary, and they are clearly documented. I recommend avoiding them, as there are always better, non-blocking ways to achieve the same result in Qt, e.g. by connecting the appropriate signals to the appropriate slots.
To sum up: threads aren't necessary in Qt-based networking, and your program will be simpler, more reliable, and easier to debug if you don't use threads. When implementing server-like functionality, a QTcpServer object is useful; you might want to have a look at this example program for cues on how to use it.
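For illustration, a minimal single-threaded server skeleton might look like the following (a sketch only; the class name and port are made up):

#include <QObject>
#include <QTcpServer>
#include <QTcpSocket>
#include <QHostAddress>

class ChatServer : public QObject
{
    Q_OBJECT
public:
    explicit ChatServer(QObject *parent = nullptr) : QObject(parent)
    {
        // newConnection is emitted from the main event loop;
        // no extra thread is involved.
        connect(&m_server, &QTcpServer::newConnection,
                this, &ChatServer::onNewConnection);
        m_server.listen(QHostAddress::Any, 4242);
    }

private slots:
    void onNewConnection()
    {
        QTcpSocket *socket = m_server.nextPendingConnection();
        connect(socket, &QTcpSocket::readyRead, socket, [socket]() {
            const QByteArray data = socket->readAll();  // never blocks
            // ... handle/broadcast `data` to the other clients ...
        });
        connect(socket, &QTcpSocket::disconnected,
                socket, &QTcpSocket::deleteLater);
    }

private:
    QTcpServer m_server;
};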
Hey
I'm using gRPC with the async API. That requires constructing reactors based on classes like ClientBidiReactor or ServerBidiReactor.
If I understand correctly, gRPC works like this: it takes threads from some thread pool, and using these threads it executes the methods of the reactors that are in use.
The problem
Now, the problem arises when the reactors become stateful. I know that the methods of a single reactor will most probably be executed sequentially, but they may be run from different threads; is this correct? If so, is it possible that we encounter a problem like the one described, for instance, here?
Long story short: if we have unsynchronized state in such circumstances, is it possible that one thread updates the state, then the next method of the reactor is executed from a different thread and sees the not-yet-updated value because the state's new value has not been flushed to main memory yet?
Honestly, I'm a little confused about this. In the gRPC examples here and here this doesn't seem to be addressed (the mutex there is for a different purpose, and the values are not atomic).
I used/linked examples for the bidi reactors, but this applies to all types of reactors.
Conclusion / questions
There are basically a couple of questions from me at this point:
Are the concerns valid here, and do I understand everything properly, or did I miss something? Does the problem exist?
Do we need to manually synchronize the reactors' state, or is it handled by the library somehow (I mean, is flushing to main memory handled)?
Are the library authors aware of this? Did they keep it in mind while coding the examples I linked?
Thank you in advance for any help, all the best!
You're right that the examples don't showcase this very well; there's some room for improvement. The operation-completion reaction methods (OnReadInitialMetadataDone, OnReadDone, OnWriteDone, ...) can be called concurrently from different threads owned by the gRPC library, so if your code accesses any shared state, you'll want to coordinate that yourself (via synchronization, lock-free types, etc.). In practice, I'm not sure how often it happens, or which callbacks are more likely to overlap.
The original callback API spec says a bit more about this, under a "Thread safety" clause: L67: C++ callback-based asynchronous API. The same is reiterated in a few places in the callback implementation code itself; see client_callback.h#L234-236 for example.
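To make that concrete, here is a sketch of guarding shared reactor state. EchoRequest and EchoResponse stand in for whatever generated message types you actually use, so treat this as a pattern rather than the library's prescribed approach:

#include <mutex>
#include <grpcpp/grpcpp.h>

class ChatReactor : public grpc::ClientBidiReactor<EchoRequest, EchoResponse> {
public:
    void OnReadDone(bool ok) override {
        if (!ok) return;
        {
            // The mutex is not only about mutual exclusion: acquiring
            // it also establishes the happens-before edge that makes
            // a previous callback's writes visible on this thread.
            std::lock_guard<std::mutex> lock(mu_);
            ++reads_completed_;
        }
        StartRead(&response_);
    }
    void OnWriteDone(bool ok) override {
        std::lock_guard<std::mutex> lock(mu_);
        ++writes_completed_;
    }

private:
    std::mutex mu_;
    int reads_completed_ = 0;   // shared state: always accessed under
    int writes_completed_ = 0;  // mu_, or made std::atomic instead
    EchoResponse response_;
};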
So, this is the problem:
I have written a wrapper class exposing a simplified API for the libtorrent C++ library. It (the wrapper) has a stack-allocated member, which is libtorrent's main session object.
The library itself uses the Boost framework and its threading features; it is multithreaded. (I must say that I'm not really familiar with Boost.)
Now, I wanted to create a simple MFC dialog-based application that will have a couple of buttons for managing the session, a progress bar, etc.
The destructor of a libtorrent session may take a while to finish (since it needs to notify the trackers that it's closing). The user is prompted on exit with a MessageBox to confirm download termination, so I thought it was a good idea to make my wrapper object a member of the app class rather than the CDialog (that way the wrapper's destructor, and consequently the session's, kicks in after the dialog is closed). The libtorrent docs also state that it is a good idea to close UI elements such as windows before the destructor is invoked.
And here comes the fun part: everything works fine until I close the dialog. The process continues to live for a couple of seconds and then crashes in some Boost-related locking/critical-section code (that's where the debugger pointed: some lock/release call in one of Boost's headers)...
EDIT
It seems that while closing, some thread checks are performed from the main window, and it gets into some "irregular" state where it does something that makes Boost fail. I'm thinking some kind of "join" is needed for the GUI thread, to wait for the other threads' termination...
If anyone understood what I was trying to explain here and has some idea of what I am doing wrong, or has an alternative solution to this concept, I'd really appreciate it.
Thanks.
You can wait for the Boost threads to join prior to exiting. I have an Output_Processor class that uses a Boost thread; I interface with it through a queue. When I want to shut down the app, I put a shutdown command in its queue. The Output_Processor thread returns after processing that command; then my blocking join() returns and the rest of the app can shut down gracefully.
...
_output_processor_queue->write(shutdown_command);
// Wait for output processor thread to join.
_output_processor_thread->join();
_output_processor_initialized = false;
...
OK, the problem is resolved.
All I did was create the wrapper object dynamically instead, and delete it after DoModal() returns. At that point the main thread blocks, waiting until the deletion is over, which is basically until the libtorrent session is destructed. The peculiar behavior with the non-dynamic (stack-allocated) object remains unexplained, however.
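In other words, the shutdown sequence now looks roughly like this (a sketch; the class and member names are illustrative):

// In the MFC application class:
BOOL CMyApp::InitInstance()
{
    m_wrapper = new TorrentWrapper();   // heap-allocated wrapper

    CMainDialog dlg;
    m_pMainWnd = &dlg;
    dlg.DoModal();                      // UI runs and closes first...

    delete m_wrapper;                   // ...then block here until the
    m_wrapper = NULL;                   // libtorrent session is done

    return FALSE;                       // no further message loop
}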
I am currently in the process of refactoring a mid-sized software project. It contains a central kernel-like class that is used by multiple threads. Currently, this class uses a Glib::Dispatcher for handling signals that are emitted by multiple threads. Since one goal of the refactoring process is to get rid of glibmm entirely (Qt shall be used as the new framework), I am trying to figure out a way to "simulate" the dispatcher functionality using Boost. I have already looked into Boost.Signals and Boost.Signals2, but neither of these libraries seems to offer an alternative to the dispatcher.
To clarify what the dispatcher shall do, here's a short description from the official documentation:
Glib::Dispatcher works similar to sigc::signal. But unlike normal signals, the notification happens asynchronously through a pipe. This is a simple and efficient way of communicating between threads, and especially useful in a thread model with a single GUI thread.
No mutex locking is involved, apart from the operating system's internal I/O locking. That implies some usage rules:
* Only one thread may connect to the signal and receive notification, but multiple senders are allowed even without locking.
* The GLib main loop must run in the receiving thread (this will be the GUI thread usually).
* The Dispatcher object must be instantiated by the receiver thread.
* The Dispatcher object should be instantiated before creating any of the sender threads, if you want to avoid extra locking.
* The Dispatcher object must be deleted by the receiver thread.
* All Dispatcher objects instantiated by the same receiver thread must use the same main context.
Could you give me some pointers in the right direction? Is this the sort of functionality I can achieve using Boost.Signals or Boost.Signals2?
Edit: As a commenter rightly pointed out, using Qt would perhaps be an option. However, the class that I am refactoring is very low-level and I do not want to add this additional dependency.
I think there is no simple way to do that. Removing Glib in favour of Boost won't solve the problem, which is more an architectural issue than anything else; replacing one library with another is not going to fix a design issue.
You should model your own signal interface and adapt it for each library, including Glib in the first place, since that is already working. Adding another level of indirection to your problem will let you fix the issue.
Boost can help you here if you look at boost::function. I don't consider replacing Glib with Boost to be a real step forward: Boost is not a graphical library, and at some point you will be required to add an interface with an implementation layer to your graphics engine.
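As a starting point, such an interface can be as small as a queue of boost::function callbacks that only the receiver thread drains; a rough sketch:

#include <queue>
#include <boost/function.hpp>
#include <boost/thread/mutex.hpp>

class Dispatcher {
public:
    // Any thread may post work.
    void post(const boost::function<void()> &fn) {
        boost::mutex::scoped_lock lock(mutex_);
        queue_.push(fn);
        // A real implementation would also wake the receiver here,
        // e.g. by writing to a pipe the GUI loop watches, which is
        // exactly what Glib::Dispatcher does internally.
    }

    // Only the receiver (GUI) thread calls this, e.g. once per tick.
    void drain() {
        std::queue<boost::function<void()> > pending;
        {
            boost::mutex::scoped_lock lock(mutex_);
            std::swap(pending, queue_);
        }
        while (!pending.empty()) {
            pending.front()();
            pending.pop();
        }
    }

private:
    boost::mutex mutex_;
    std::queue<boost::function<void()> > queue_;
};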
I have now opted for a total rewrite of the class in question. It turns out that I do not require the dispatcher functionality in the form Glib provided it. Instead, it was enough to use normal boost::signals2 signals, coupled with some Qt signals for the actual graphical interaction.
This isn't so much of a problem now, as I've implemented my own collection, but I'm still a little curious about this one.
I've got a singleton which provides access to various common components; it holds instances of these components keyed by thread ID, so each thread should (and does, I checked) have its own instance of a component such as an Oracle database access library.
When running the system (a C++ library being called by a C# application) with multiple incoming requests, everything seems to run fine for a while, but then it crashes with an AccessViolation exception. Stepping through the debugger, the problem appears to be that when one thread finishes and clears out its session information (held in a std::map object), the session information held in a separate collection instance for the other thread also appears to be cleared out.
Is this something anyone else has encountered or knows about? I've tried having a look around but can't find anything about this kind of problem.
Cheers
Standard C++ containers do not concern themselves with thread safety much. Your code sounds like it is modifying the map instance from two different threads, or modifying the map in one thread and reading from it in another. That is obviously wrong. Use some locking primitives to synchronize access between the threads.
If all you want is a separate object for each thread, you might want to take a look at boost::thread_specific_ptr.
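A minimal sketch of that approach, with OracleSession standing in for your component type:

#include <boost/thread/tss.hpp>

// One instance per thread, created lazily on first use and destroyed
// automatically when the owning thread exits.
boost::thread_specific_ptr<OracleSession> tls_session;

OracleSession &session()
{
    if (tls_session.get() == NULL)
        tls_session.reset(new OracleSession());  // owned by this thread
    return *tls_session;
}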
How do you manage giving each thread its own session information? Somewhere under there you have classes managing the lifetimes of these objects, and this is where it appears to be going wrong.