Is there some Boost functionality for simulating a Glib::Dispatcher? - c++

I am currently in the process of refactoring a mid-sized software project. It contains a central kernel-like class that is used by multiple threads. Currently, this class uses a Glib::Dispatcher to handle signals that are emitted by multiple threads. Since one goal of the refactoring process is to get rid of glibmm entirely (Qt is to be used as the new framework), I am trying to figure out how to "simulate" the dispatcher functionality using Boost. I have already looked into Boost.Signals and Boost.Signals2, but neither of these libraries seems to offer an alternative to the dispatcher.
To clarify what the dispatcher shall do, here's a short description from the official documentation:
Glib::Dispatcher works similar to sigc::signal. But unlike normal signals, the notification happens asynchronously through a pipe. This is a simple and efficient way of communicating between threads, and especially useful in a thread model with a single GUI thread.
No mutex locking is involved, apart from the operating system's internal I/O locking. That implies some usage rules:
Only one thread may connect to the signal and receive notification, but multiple senders are allowed even without locking.
The GLib main loop must run in the receiving thread (this will usually be the GUI thread).
The Dispatcher object must be instantiated by the receiver thread.
The Dispatcher object should be instantiated before creating any of the sender threads, if you want to avoid extra locking.
The Dispatcher object must be deleted by the receiver thread.
All Dispatcher objects instantiated by the same receiver thread must use the same main context.
Could you give me some pointers in the right direction? Is this the sort of functionality I can achieve using Boost.Signals or Boost.Signals2?
Edit: As a commenter rightly pointed out, using Qt would perhaps be an option. However, the class that I am refactoring is very low-level and I do not want to add this additional dependency.

I think there is no simple way to do that. Removing Glib in favour of Boost won't solve the problem, which is more an architectural issue than anything else; swapping in Boost is not going to fix the design issue.
You should model your own signal interface and adapt it for each library, including Glib in the first place since it is already working. Adding another level of indirection to your problem is what will let you fix the issue.
Boost can help you if you look at boost::function. I don't consider replacing glibmm with Boost to be a real step forward: Boost is not a graphical library, and at some point you will need to add an interface with an implementation layer to your graphics engine.
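That said, if you do end up needing the cross-thread notification behaviour itself, it can be approximated with a queueing event loop. Here is a minimal sketch, assuming a reasonably recent Boost (1.66 or later, for boost::asio::post and make_work_guard); the Dispatcher class here is illustrative, not an existing Boost API:

    #include <boost/asio.hpp>
    #include <boost/signals2.hpp>
    #include <iostream>
    #include <thread>

    // Illustrative stand-in for Glib::Dispatcher: emit() may be called from
    // any thread; the connected slots run in whichever thread calls ctx.run().
    class Dispatcher {
    public:
        explicit Dispatcher(boost::asio::io_context& ctx) : ctx_(ctx) {}

        void emit() { boost::asio::post(ctx_, [this] { sig_(); }); }

        boost::signals2::signal<void()>& signal() { return sig_; }

    private:
        boost::asio::io_context& ctx_;
        boost::signals2::signal<void()> sig_;
    };

    int main() {
        boost::asio::io_context ctx;                    // owned by the receiver thread
        auto work = boost::asio::make_work_guard(ctx);  // keep run_one() waiting
        Dispatcher d(ctx);
        d.signal().connect([] { std::cout << "notified in receiver thread\n"; });

        std::thread sender([&] { d.emit(); });          // senders need no extra locking

        ctx.run_one();                                  // the slot runs here
        sender.join();
    }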

I have now opted for a total rewrite of the class in question. It turns out that I do not require the dispatcher functionality in the way it was provided by Glib. Instead, it was enough to use the normal boost::signals2 signals, coupled with some signals from Qt for the actual graphical interaction.

Related

What is the correct way to implement a server and client in one application? Qt6

I am creating a simple online chat with the server and client in one application. I have written the client side, but I don't know the correct way to use QTcpServer.
Do I need to create the QTcpServer in a new thread, so that I can connect to it as a client from this application? If yes, how do I do that? Or is that an unnecessary idea?
Do I need to create a new thread for every new connection in order to process it?
I am developing the chat as a course project for university.
Assuming you are using Qt's networking APIs, you don't need to use multiple threads. The reason is that Qt's APIs are designed around a non-blocking event-loop model, so it is expected that no function-call should ever take more than a negligible amount of time (e.g. a few milliseconds) to return, after which the main thread's QEventLoop resumes execution and can therefore handle other tasks in a timely manner, all from within a single thread.
That said, there are a few optional methods in the Qt API that are blocking, and in a single-threaded application, calling those methods risks making your application unresponsive for however long it takes them to return. Fortunately those methods aren't necessary, and they are clearly documented. I recommend avoiding them, as there are always better, non-blocking ways to achieve the same result in Qt, e.g. by connecting the appropriate signals to the appropriate slots.
To sum up: threads aren't necessary in Qt-based networking, and your program will be simpler, more reliable, and easier to debug if you don't use threads. When implementing server-like functionality, a QTcpServer object is useful; you might want to have a look at this example program for cues on how to use it.
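As a concrete illustration, here is a minimal single-threaded echo server sketch (the port and the echo behaviour are arbitrary choices, not from the question); every connection is serviced by the one event loop through signals and slots:

    #include <QCoreApplication>
    #include <QHostAddress>
    #include <QTcpServer>
    #include <QTcpSocket>

    int main(int argc, char** argv) {
        QCoreApplication app(argc, argv);

        QTcpServer server;
        QObject::connect(&server, &QTcpServer::newConnection, [&server] {
            QTcpSocket* client = server.nextPendingConnection();
            // readyRead fires whenever data arrives; no blocking, no threads.
            QObject::connect(client, &QTcpSocket::readyRead, client, [client] {
                client->write(client->readAll());   // simple echo
            });
            QObject::connect(client, &QTcpSocket::disconnected,
                             client, &QObject::deleteLater);
        });

        if (!server.listen(QHostAddress::Any, 5555))
            return 1;
        return app.exec();   // the single QEventLoop handles every client
    }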

Is it safe to change the reactor's state using the async API without manual synchronization?

Hey
I'm using gRPC with the async API. That requires constructing reactors based on classes like ClientBidiReactor or ServerBidiReactor.
If I understand correctly, gRPC works like this: it takes threads from some thread pool, and using these threads it executes certain methods of the reactors that are being used.
The problem
Now, the problem is when the reactors become stateful. I know that the methods of a single reactor will most probably be executed sequentially, but they may be run from different threads; is this correct? If so, is it possible that we may encounter a problem like the one described, for instance, here?
Long story short, if we have an unsynchronized state in such circumstances, is it possible that one thread will update the state, then a next method from the reactor will be executed from a different thread and it will see the not-updated value because the state's new value has not been flushed to the main memory yet?
Honestly, I'm a little confused about this. In the gRPC examples here and here this doesn't seem to be addressed (the mutex there is for a different purpose, and the values are not atomic).
I used/linked examples for the bidi reactors, but this applies to all types of reactors.
Conclusion / questions
There are basically a couple of questions from me at this point:
Are the concerns valid here, and do I understand everything properly, or did I miss something? Does the problem exist?
Do we need to manually synchronize the reactors' state, or is it somehow handled by the library (I mean, is flushing to main memory handled)?
Are the library authors aware of this? Did they keep it in mind while coding the examples I linked?
Thank you in advance for any help, all the best!
You're right that the examples don't showcase this very well; there's some room for improvement. The operation-completion reaction methods (OnReadInitialMetadataDone, OnReadDone, OnWriteDone, ...) can be called concurrently from different threads owned by the gRPC library, so if your code accesses any shared state, you'll want to coordinate that yourself (via synchronization, lock-free types, etc.). In practice, I'm not sure how often it happens, or which callbacks are more likely to overlap.
The original callback API spec says a bit more about this under a "Thread safety" clause: L67: C++ callback-based asynchronous API. The same is reiterated in a few places in the callback implementation code itself, client_callback.h#L234-236 for example.
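To illustrate the kind of coordination meant here (a generic sketch, not the gRPC reactor API itself; the class and its members are made up): a mutex both orders updates and publishes them to whichever library thread runs the next callback, while an atomic is enough for a plain counter.

    #include <atomic>
    #include <mutex>
    #include <string>
    #include <vector>

    class StatefulReactor {
    public:
        // Imagine these being invoked from arbitrary gRPC-owned threads,
        // possibly concurrently.
        void OnReadDone(std::string msg) {
            std::lock_guard<std::mutex> lock(mu_);
            inbox_.push_back(std::move(msg));   // mutex orders and publishes
        }
        void OnWriteDone() {
            writes_.fetch_add(1, std::memory_order_relaxed);  // simple counter
        }
    private:
        std::mutex mu_;
        std::vector<std::string> inbox_;
        std::atomic<int> writes_{0};
    };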

An event system - like signal / slot in Qt without forking - C++

I would like to know how to design a system that offers a solid framework for handling signals and the connections between signals and methods, without writing a really unpleasant loop that iterates over and over with some statement that forks the flow of the application.
In other words, I would like to know the theory behind the signal/slot mechanism of Qt or similar.
I'm naming Qt for no particular reason; it's just probably one of the most used and best-tested libraries for this, so it's a reference point in the C++ world, but any idea about the design of this mechanism is welcome.
Thanks.
At a high level, Qt's signals/slots and Boost's signals library work like the Observer pattern (they just avoid needing an Observer base class).
Each "signal" keeps track of what "slots" are observing it, and then iterates over all of them when the signal is emitted.
As for how to specifically implement this, the C++ is pretty similar to the Java code in the Wikipedia article. If you want to avoid using an interface for all observers, boost uses templates and Qt uses macros and a special pre-compiler (called moc).
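To make the idea concrete, here is a minimal, illustrative sketch of such a templated signal; it supports neither disconnection nor thread safety, just the core mechanism of storing and iterating over slots:

    #include <functional>
    #include <iostream>
    #include <vector>

    template <typename... Args>
    class Signal {
    public:
        using Slot = std::function<void(Args...)>;
        void connect(Slot s) { slots_.push_back(std::move(s)); }
        void operator()(Args... args) {        // "emit": invoke every slot
            for (auto& s : slots_) s(args...);
        }
    private:
        std::vector<Slot> slots_;
    };

    int main() {
        Signal<int> valueChanged;
        valueChanged.connect([](int v) { std::cout << "slot saw " << v << '\n'; });
        valueChanged(42);                      // notify all observers
    }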
It sounds like you are asking for everything, but without any of the trade-offs.
There are a few general concepts that I am aware of for handling asynchronous input and changes, such as "keys being pressed", "touch events", and "an object that changes its own state".
Most of these concepts and mechanisms are useful for all sorts of program flow and can cross many boundaries: process, thread, etc. This isn't an exhaustive list, but it covers many of the ones I've come across.
State Machines
Threads
Messages
Event Loops
Signals and Slots
Polling
Timers
Callback Functions
Hooking Input
Pipes
Sockets
I would recommend researching these in Wikipedia or in the Qt Documentation or in a C++ book and see what works or what mechanism you want to work into your framework.
Another really good idea is to look at how programming architects have done it in the past, such as in the source of Linux or how the Windows API lets you access this kind of information in their frameworks.
Hope that helps.
EDIT: Response to comment/additions to the question
I would manage a buffer/queue of incoming coordinates and have an accessor for the latest coordinate. Then I would keep track of events such as the start and end of a touch/tap/drag, have some sort of timer for detecting when a long touch is performed, and a minimum-change threshold for detecting when a dragged touch is performed.
If I were using this with just one program, I would try to make an interface similar to ones already in use; I've heard of OpenSoundControl being used for this kind of input. I have set up a thread that collects the coordinates and keeps track of the events, and then I poll for that information in the program/class that needs to use it. A sketch of that arrangement follows.
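A minimal sketch of that buffer-and-poll arrangement (the types and names here are illustrative): the collector thread pushes coordinates, and the consuming code polls for the most recent one.

    #include <deque>
    #include <mutex>
    #include <optional>

    struct Point { int x, y; };

    class TouchBuffer {
    public:
        void push(Point p) {                     // called by the collector thread
            std::lock_guard<std::mutex> lock(mu_);
            queue_.push_back(p);
        }
        std::optional<Point> latest() const {    // polled by the consumer
            std::lock_guard<std::mutex> lock(mu_);
            if (queue_.empty()) return std::nullopt;
            return queue_.back();
        }
    private:
        mutable std::mutex mu_;
        std::deque<Point> queue_;
    };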

Framework for a server application (preferably, using BOOST C++)

I am thinking of writing a server application - along the lines of MySQL or Apache.
The main requirements are:
Clients will communicate with the server via TCP/IP (sockets)
The server will spawn a new child process to handle requests (à la Apache)
Ideally, I would like to use the Boost libraries rather than attempt to reinvent my own. There must be code somewhere that does most of what I am trying to do, so I can use it (or at least part of it) as my starting point. Can anyone point me to a useful link?
In the (hopefully unlikely) event that there is no code I can use as a starting point, can someone point out the most appropriate Boost libraries to use, and give a general guideline on how to proceed?
My main worry is how to know when one of the children has crashed. AFAIK, there are two ways of doing this:
Using heartbeats between the parent and children (this quickly becomes messy, and introduces more things that could go wrong)
Somehow wrap the spawning of the process with a timeout parameter; but this is a dumb approach, because if a child is carrying out time-intensive work, the parent may incorrectly think that the child has died
What are the best practices for making the parent aware that a child has died?
[Edit]
BTW, I am developing/running/deploying on Linux
On what platform (Windows/Linux/both)? Processes on Windows are considered more heavyweight than on Linux, so you may indeed want to consider threads.
Also, I think it is better (as Apache does) not to spawn a process for each request but to maintain a process pool, so you save the cost of creating a process, especially on Windows.
If you are on Linux, waitpid() may be useful for you. You can use it in non-blocking mode to check periodically whether one of the child processes has terminated, as in the sketch below.
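A small sketch of that non-blocking check (the reporting is illustrative): WNOHANG makes waitpid() return immediately instead of blocking, so the parent can call this periodically from its main loop.

    #include <sys/wait.h>
    #include <cstdio>

    void reap_children() {
        int status = 0;
        pid_t pid;
        // Returns 0 if no child has changed state, otherwise the pid of a
        // terminated child; loop in case several exited since the last poll.
        while ((pid = waitpid(-1, &status, WNOHANG)) > 0) {
            if (WIFSIGNALED(status))
                std::printf("child %d killed by signal %d\n", pid, WTERMSIG(status));
            else if (WIFEXITED(status))
                std::printf("child %d exited with status %d\n", pid, WEXITSTATUS(status));
        }
    }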
I can say for sure that Pion is your only stable option.
I have never used it but I intend to, and the API looks very clean.
As for the Boost libraries you would need:
Boost.Asio
Boost.Thread
Boost.Spirit (or something similar to parse the HTTP protocol)
Boost.Interprocess
What about using threads (which are supported by Boost) rather than forking the process? This would allow you to make queries about the state of a child and, imho, threads are simpler to handle than forking.
Generally, Boost.Asio is a good place to begin.
But there are several points to be aware of:
Boost.Asio is a very good library, but it is not very fork-aware, so don't try to share an Asio event loop between several forked processes; this will not work (i.e. if the boost::asio::io_service was created before the fork, don't use it in more than one process afterwards).
It also does not allow you to release the file descriptor from a boost::asio::XX::socket, so the only way is to call dup and then pass the duplicate to the child process.
But to be honest, I don't think you'll find any network event loop library that is fork-aware (maybe with the exception of CppCMS's booster.aio, which I wrote to be fork-aware myself).
Waiting for children is quite simple: you can define a signal handler with sigaction for the SIGCHLD signal, which is sent when a child crashes or exits. So all you need to do is handle this signal and call waitpid from the main loop when such a signal is received.
With Asio you can use the "self-pipe" trick to wake the loop from sleep inside the signal handler, as in the sketch below.
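A small sketch of that arrangement, assuming a modern Boost.Asio where boost::asio::signal_set is available; signal_set implements the self-pipe wake-up internally, so the reaping runs safely inside the event loop:

    #include <boost/asio.hpp>
    #include <sys/wait.h>
    #include <functional>
    #include <iostream>

    int main() {
        boost::asio::io_context io;
        boost::asio::signal_set signals(io, SIGCHLD);

        // The handler re-arms itself so every SIGCHLD is handled in the loop.
        std::function<void(const boost::system::error_code&, int)> on_sigchld =
            [&](const boost::system::error_code& ec, int /*signo*/) {
                if (ec) return;
                int status = 0;
                pid_t pid;
                while ((pid = waitpid(-1, &status, WNOHANG)) > 0)
                    std::cout << "child " << pid << " terminated\n";
                signals.async_wait(on_sigchld);
            };
        signals.async_wait(on_sigchld);

        io.run();   // fork/accept logic would be added around this
    }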
First, take a look at CPPCMS. It might already fit your needs.
Now, as pointed out by others, boost::asio is a good starting point, but it really only covers the basics of the task.
You may be more interested in the work being done on server code based on boost::asio: cpp-netlib (which is intended to be submitted to Boost once done); see also the author's blog.
I've made a FOSS library for creating C++ applications in a modular way. It's hosted at
https://github.com/chilabot/chila
and here's my blog: http://chilatools.blogspot.com/view/sidebar
It's especially suited to generic server creation (that was my motivation for building it), but I think it can be used for any kind of application.
The part that has to be deployed with the final binary is LGPL, so it can be used with commercial applications.

Boost: what exactly is not threadsafe in Boost.Signals?

I have read in multiple places that Boost.Signals is not thread-safe, but I haven't found much more detail about it. That simple statement doesn't really say that much. Most applications nowadays have threads; even if they try to be single-threaded, some of their libraries may use threads (for example libsdl).
I guess the implementation has no problems with other threads as long as they never access the slot, so it is at least thread-safe in that sense.
But what exactly works, and what would not? Would it work to use it from multiple threads as long as I never access it at the same time, i.e. if I build my own mutexes around the slot?
Or am I forced to use the slot only in the thread where I created it? Or in the thread where I used it for the first time?
I don't think it's too clear either, and one of the library reviewers said here:
I also didn't like the fact that the word 'thread' was mentioned only three times. Boost.Signals2 wants to be a 'thread-safe signals' library. Therefore, some more details, and especially more examples concerning that area, should be given to the user.
One way of figuring it out is to go to the source and see what they're using _mutex / lock() to protect. Then just imagine what would happen if those calls weren't there. :)
From what I can gather, it's ensuring simple things like "if one thread is doing connects or disconnects, that won't cause a different thread which is iterating through the slots attached to those signals to crash". Kind of like how using a thread-safe version of the C runtime library assures that if two threads make valid calls to printf at the same time then there won't be a crash. (Not to say the output you'll get will make any sense; you're still responsible for the higher-order semantics.)
It doesn't seem to be like Qt, in which the thread a certain slot's code gets run on is based on the target slot's "thread affinity" (which means emitting a signal can trigger slots on many different threads to run in parallel.) But I guess not supporting that is why the boost::signal "combiners" can do things like this.
One problem I see is that one thread can connect or disconnect while another thread is signalling.
You can easily wrap your signal and connect calls with mutexes. However, it is non-trivial to wrap the connections (connect returns connection objects, which you can use to disconnect).
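For illustration, a minimal sketch of that mutex-wrapping idea, assuming the original Boost.Signals header (the library has since been deprecated in favour of Boost.Signals2, which performs this kind of locking internally); as noted, the returned connection objects remain unguarded shared state:

    #include <boost/signal.hpp>
    #include <functional>
    #include <mutex>

    class GuardedSignal {
    public:
        boost::signals::connection connect(const std::function<void(int)>& slot) {
            std::lock_guard<std::mutex> lock(mu_);
            return sig_.connect(slot);
        }
        void operator()(int v) {
            std::lock_guard<std::mutex> lock(mu_);  // serializes emit with connect
            sig_(v);
        }
    private:
        std::mutex mu_;
        boost::signal<void (int)> sig_;
    };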