Signals instead of exceptions - C++

Let's suppose we are developing a store where, depending on the session state, the user is allowed to do different things. For example, suppose a widget must be blocked for a while at some specific moment because of some specific user actions, and the user tries the action again.
Of course, the most obvious implementation would be to throw an exception in the corresponding function (the specific event handler) to say the action is currently blocked. That's similar to a concrete problem of mine. In that case, it was more convenient for me, instead of throwing an exception, to make the function a "no-op" while emitting a boost::signals2 signal. The GUI then does whatever it wants: inform the user or whatever. And perhaps the GUI only wants to inform the user once, so it just disconnects from the signal after the first call.
And I liked it. It's pretty beautiful and elegant: make it a no-op and emit a signal. No stack unwinding, functions can be marked as noexcept, you consequently enable more optimizations, and you deal with the exceptional cases only when you want, connecting and disconnecting from the signals as you wish.
Now comes the question: what if I want to generalize the approach, substituting a signal for each exception, even in non-GUI applications?
In that case, is boost::signals2 more inefficient than exceptions? It's commonly said that try/catch blocks, non-noexcept functions, and stack unwinding cause overhead and prevent the compiler from performing many possible optimizations. On the other hand, boost::signals2 is thread-safe, which incurs extra overhead.
Is my idea a bad idea after all?
I hope my question is not closed for being "too broad" or "opinion-based", because it's a question of design (and optimization) after all; although not very specific, I have to admit.
Note: The GUI is a website. The thing is, I'm using Wt, a library for building websites in C++, which translates a hierarchy of widgets and signals into HTML/JavaScript/Ajax, and my long-term project is to create a suite for building GUIs for both desktop/mobile (Qt) and web (JavaScript) from a common infrastructure with a single C++ back-end. Wt allows a mapping between C++ and JavaScript slots for the same event; for example, a click: if JavaScript or Ajax is not available, the event is sent to the server and the C++ slot is called. If it is available, the event is executed on the client using the JavaScript version. In case the same (GUI) event has more than one slot, the order of execution of the slots is unspecified, and if both slots are C++ calls, they could even be executed in parallel on the server if there are enough threads available in the thread pool.

Related

Is it a good practice to use signals and slots in Qt also when no input from GUI occurs?

I've gained experience in C++ but I'm new to Qt. I was given a real project to work on, developed by someone who no longer works for this company. I don't know if it is good practice, and I apologize in advance if the terminology is not adequate: I noticed that this project is literally full of signal/slot pairs that I deem unnecessary. More precisely: the classes that dictate the logic of the application can see each other, so it would be sufficient to expose some public methods to trigger the desired procedures; nevertheless, this is almost always achieved using signals and slots (and I say it again here: even when no input from the GUI occurs). Given that I'm a newbie in Qt, is it good practice to do so? Thanks.
Edit: the cases that I reported don't encompass signals coming from timers, threads or the like. This developer used signal/slot pairs as if they were a substitute for a direct method call from, say, class A to class B.
Overuse of signals and slots is a very bad and unfortunately very common practice. It hides dependencies and makes code hard to debug and basically unmaintainable in the long term. Unfortunately many programmers think it is good practice because they achieve "decoupling", which seems like a holy grail to them. This is nonsense.
I do not say you should not use signals and slots at all. I only say you should not overuse them. Signals and slots are the perfect tool to implement Observer design pattern to have a "reactive" system in which objects react to other objects having changed their states. Only this is the correct use of signals and slots. Almost every other use of signals and slots is wrong. The most extreme case which I have seen was implementing a getter function with a signal-slot connection. The signal sent a reference to a variable and the slot filled it with a value and then it returned to the emitter. This is just mad!
How do you know that your signals and slots implement the Observer pattern correctly? These are rules of thumb which follow from my quite long experience with Qt:
The nature of a signal is that the emitter publicly announces (signals are always public - unless you use a private-class dummy parameter), by sending out the signal, that its state has somehow changed.
The emitter does not care who the observers are or whether there are any observers at all, i.e. the emitter must not depend on observers in any way.
It is never the emitter's responsibility to establish or manage the connection - do not ever do it! The connection/disconnection is the responsibility of the observer (which then often connects to a private slot), or of some parent object which knows of the existence of both the emitter and the observer (in that case the common parent connects the emitter's signal to the observer's public slot).
It is normal to see lots of signal-slot connections in the GUI layer, and this is perfectly OK (note: the GUI layer includes view models!). This is because a GUI is typically a reactive system in which objects react to other objects or to changes in the underlying layers. But you will probably see far fewer signal-slot connections in the business logic layer (by the way, in many projects business logic is coded without using Qt).
Regarding naming: I have encountered an interesting code smell: when the observer's public (!) slot is named like onSomethingHappened() - with emphasis on the prefix on. This is almost always a sign of bad design and abuse of signals and slots. Usually this slot should a) be made private, with the connection established by the observer, b) be renamed to doSomething(), or c) be renamed and called as a normal method instead of using signals and slots.
And a note about why overuse of signals and slots is hard to maintain. There are many potential problems in the long term which can break your code:
The dependencies with signals and slots are often hidden in a distant seemingly unrelated part of code. This relates to the signal-slot abuse when emitter actually depends on the observer but this is not clear when looking at the emitter's code. If your class depends on some other class/module, this dependency should be explicit and clearly visible.
When signals and slots are connected and then disconnected programmatically by your code, you often end up in a state where you forgot to disconnect and now have multiple connections. Multiple connections are often overlooked because they frequently do no harm; they only make the code somewhat slower, e.g. a changed text is updated multiple times instead of once - nobody will catch this issue unless you have a thousand-fold connection. These multiplying connections are somewhat similar to memory leaks: small memory leaks often remain unnoticed too.
It often happens that you depend on the order in which the connections are established. And when these order-dependent connections are established in distant parts of code, you are in bad trouble, this code will fall apart sooner or later.
To check that I do not have multiple connections, and that connection/disconnection was successful, I use these helper utils of mine: https://github.com/vladimir-kraus/qtutils/blob/main/qtutils/safeconnect.h
PS: In the text above I am using the terms "emitter" (emits the signal) and "observer" (observes the emitter and receives the signal). Sometimes people use "sender" and "receiver" instead. My intention was to emphasize the fact that the emitter emits a signal without actually knowing whether anyone receives it. The word "sender" gives the impression that you send the signal to someone, which is however exactly the cause of signal-slot overuse and bad design. So using "sender" only leads to confusion, IMO. And by using "observer" I wanted to emphasize that signals and slots are the tool to implement the Observer design pattern.
PPS: Signals and slots are also the perfect tool for async communicating between threads in Qt. This use case may be one of the very few exceptions to the principles which I described above.
It depends, of course, but mostly yes, it's a correct practice because it keeps objects decoupled. Two classes being able to see each other does not mean they should call each other directly if they are not in a master-slave relationship or don't follow a logical hierarchy. Otherwise you will couple everything in a non-reversible way as a result of a pinball effect of calls back and forth. The proof could be that you want to fix this by "making methods public", which may break the encapsulation and contract of a class and lead to bad design choices independent of using Qt.
Since we're not seeing the actual code, it could be that he is misusing signals too, but from your explanation I'd go with the first option.
Signals and slots mechanism is a central feature of Qt.
In general, signals and slots are preferred/used because:
They allow asynchronous execution via queued connections.
They are loosely coupled.
They allow connecting n signals to one slot, one signal to n slots, and one signal to another signal.
In your project, if signal-slot mechanism has been used to achieve the above, then it is likely the right usage.
GUI input handling isn't the only place where signal-slot mechanism is used.
Unless we know your project's use cases, it is difficult to say whether the signal-slot mechanism has been misused or overused.

An event system - like signal / slot in Qt without forking - C++

I would like to know how to design a system that offers a solid framework for handling signals and the connections between signal(s) and method(s), without writing a really unpleasant loop that iterates over and over with statements to fork the flow of the application.
In other words I would like to know the theory behind the signal slot mechanism of Qt or similar.
I'm naming Qt for no particular reason; it's just probably one of the most used and well-tested libraries for this, so it's a reference point in the C++ world, but any idea about the design of this mechanism will be welcome.
Thanks.
At a high level, Qt's signal/slots and boost's signal library work like the Observer Pattern (they just avoid needing an Observer base class).
Each "signal" keeps track of what "slots" are observing it, and then iterates over all of them when the signal is emitted.
As for how to specifically implement this, the C++ is pretty similar to the Java code in the Wikipedia article. If you want to avoid using an interface for all observers, boost uses templates and Qt uses macros and a special pre-compiler (called moc).
It sounds like you are asking for everything, without any trade-offs.
There are a few general concepts that I am aware of for handling asynchronous input and changes such as "keys being pressed" and "touch events" and "an object that changes its own state".
Most of these concepts and mechanisms are useful for all sorts of program flow and can cross many boundaries: process, thread, etc. This isn't the most exhaustive list, but it covers many of the ones I've come across.
State Machines
Threads
Messages
Event Loops
Signals and Slots
Polling
Timers
Call Back Functions
Hooking Input
Pipes
Sockets
I would recommend researching these in Wikipedia or in the Qt Documentation or in a C++ book and see what works or what mechanism you want to work into your framework.
Another really good idea is to look at how software architects have done it in the past, such as in the Linux source, or at how the Windows API exposes this kind of information in its frameworks.
Hope that helps.
EDIT: Response to comment/additions to the question
I would manage a buffer/queue of incoming coordinates, and have an accessor for the latest coordinate. Then I would keep track of events such as the start of a touch/tap/drag and the end of one, and have some sort of timer for when a long touch is performed, and a minimum change measurement for when a dragged touch is performed.
If I were using this with just one program, I would try to make an interface similar to what I could find already in use; I've heard of OpenSoundControl being used for this kind of input. I've set up a thread that collects the coordinates and keeps track of the events, and then I poll for that information in the program/class that needs to use it.

How can I write cross platform c++ that handles signals?

This question is more for my personal curiosity than anything important. I'm trying to keep all my code compatible with at least Windows and Mac. So far I've learned that I should base my code on POSIX and that's just great but...
Windows doesn't have a sigaction function, so signal is used instead? According to What is the difference between sigaction and signal?, there are some problems with signal:
The signal() function does not block other signals from arriving while the current handler is executing; sigaction() can block other signals until the current handler returns.
The signal() function resets the signal action back to SIG_DFL (default) for almost all signals. This means that the signal() handler must reinstall itself as its first action. It also opens up a window of vulnerability between the time when the signal is detected and the handler is reinstalled during which if a second instance of the signal arrives, the default behaviour (usually terminate, sometimes with prejudice - aka core dump) occurs.
If two SIGINTs arrive quickly, the application will terminate with the default behavior. Is there any way to fix this? What other implications do these two issues have for a process that, for instance, wants to block SIGINT? Are there any other issues I'm likely to run across while using signal? How do I fix them?
You really don't want to deal with signal()'s at all.
You want "events".
Ideally, you'll find a framework that's portable to all the main environments you wish to target - that would determine your choice of "event" implementation.
Here's an interesting thread that might help:
Game Objects Talking To Each Other
PS:
The main difference between signal() and sigaction() is that sigaction() is "signal()" on steroids - more options, allows SA_RESTART, etc. I'd discourage using either one unless you really, really need to.

Periodically call a C function without manually creating a thread

I have implemented a WebSocket handler in C++ and I need to send ping messages once in a while. However, I don't want to start one thread per socket/one global poll thread which only calls the ping function but instead use some OS functionality to call my timer function. On Windows, there is SetTimer but that requires a working message loop (which I don't have.) On Linux there is timer_create, which looks better.
Is there some portable, low-overhead method to get a function called periodically, ideally with some custom context? I.e. something like settimer (const int millisecond, const void* context, void (*callback)(const void*))?
[Edit] Just to make this a bit clearer: I don't want to have to manage additional threads. On Windows, I guess using CreateThreadpoolTimer on the system thread pool will do the trick, but I'm curious to hear if there is a simpler solution and how to port this over to Linux.
If you are intending to go cross-platform, I would suggest you use a cross platform event library like libevent.
libev is newer, however currently has weak Win32 support.
If you use sockets, you can use select() to wait for socket events with a timeout, and in that loop track elapsed time and call the callback at the appropriate moment.
If you are looking for a timer that will not require an additional thread, let you do your work transparently and then call the timer function at the appropriate time in the same thread by pre-emptively interrupting your application, then there is no such portable thing.
The first reason is that it's downright dangerous. That's like writing a multi-threaded application with absolutely no synchronization. The second reason is that it is extremely difficult to have good semantics in multi-threaded applications. Which thread should execute the timer callback?
If you're writing a web-socket handler, you are probably already writing a select()-based loop. If so, then you can just use select() with a short timeout and check the different connections for which you need to ping each peer.
Whenever you have asynchronous events, you should have an event loop. This doesn't need to be some system default one, like Windows' message loop. You can create your own. But you should be using it.
The whole point of event-based programming is that you decouple your code into well-defined functional fragments that handle these asynchronous events. Without an event loop, you are condemning yourself to interleaving code that gets input and produces output based on poorly defined "states" that are just fragments of procedural code.
Without a well-defined separation of states using an event-based design, code quickly becomes unmanageable. Because code pauses inside procedures to do input tasks, you have lifetimes of objects that do not span entire procedure scopes, and you begin to write if (nullptr == xx) in various places that access objects created or destroyed based on events. Dispatch becomes combinatorially complex because you have different events expected at each input point and no abstraction.
However, simply using an event loop and dispatch to state machines, you've decreased handling complexity to basic management of handlers (O(n) handlers versus O(mn) branch statements with n types of events and m states). You decouple handling but still allow for functionality to change depending on state. But now these states are well-defined using state classes. And new states can be added if the requirements of the product change.
I'm just saying, stop trying to avoid an event loop. It's a software pattern for very important reasons, all of which have to do with producing professional, reusable, scalable code. Use Boost.ASIO or some other framework for cross platform capabilities. Don't get in the habit of doing it wrong just because you think it will be less of an effort. In the end, even if it's not a professional project that needs maintenance long term, you want to practice making your code professional so you can do something with your skills down the line.

WinForm-style Invoke() in unmanaged C++

I've been playing with a DataBus-type design for a hobby project, and I ran into an issue. Back-end components need to notify the UI that something has happened. My implementation of the bus delivers the messages synchronously with respect to the sender. In other words, when you call Send(), the method blocks until all the handlers have been called. (This allows callers to use stack memory management for event objects.)
However, consider the case where an event handler updates the GUI in response to an event. If the handler is called, and the message sender lives on another thread, then the handler cannot update the GUI due to Win32's GUI elements having thread affinity. More dynamic platforms such as .NET allow you to handle this by calling a special Invoke() method to move the method call (and the arguments) to the UI thread. I'm guessing they use the .NET parking window or the like for these sorts of things.
A morbid curiosity was born: can we do this in C++, even if we limit the scope of the problem? Can we make it nicer than existing solutions? I know Qt does something similar with the moveToThread() function.
By nicer, I'll mention that I'm specifically trying to avoid code of the following form:
if (!this->IsUIThread())
{
    Invoke(MainWindowPresenter::OnTracksAdded, e);
    return;
}
being at the top of every UI method. This dance was common in WinForms when dealing with this issue. I think this sort of concern should be isolated from the domain-specific code and a wrapper object made to deal with it.
My implementation consists of:
DeferredFunction - functor that stores the target method in a FastDelegate, and deep copies the single event argument. This is the object that is sent across thread boundaries.
UIEventHandler - responsible for dispatching a single event from the bus. When the Execute() method is called, it checks the thread ID. If it does not match the UI thread ID (set at construction time), a DeferredFunction is allocated on the heap with the instance, method, and event argument. A pointer to it is sent to the UI thread via PostThreadMessage().
Finally, a hook function for the thread's message pump is used to call the DeferredFunction and de-allocate it. Alternatively, I can use a message loop filter, since my UI framework (WTL) supports them.
Ultimately, is this a good idea? The whole message hooking thing makes me leery. The intent is certainly noble, but are there are any pitfalls I should know about? Or is there an easier way to do this?
I have been out of the Win32 game for a long time now, but the way we used to achieve this was by using PostMessage to post a windows message back to the UI thread and then handle the call from there, passing the additional info you need in wParam/lParam.
In fact I wouldn't be surprised if that is how .NET handles this in Control.Invoke.
Update: I was curious, so I checked with Reflector, and this is what I found: Control.Invoke calls MarshaledInvoke, which does a bunch of checks etc., but the interesting calls are to RegisterWindowMessage and PostMessage. So things have not changed that much :)
A little bit of follow-up info:
There are a few ways you can do this, each of which has advantages and disadvantages:
The easiest way is probably the QueueUserAPC() call. APCs are a bit too in-depth to explain here, but the only drawback is that they may run when you're not ready for them, if the thread gets put into an alertable wait state accidentally. Because of this, I avoided them. For short applications, this is probably OK.
The second way involves using PostThreadMessage(), as previously mentioned. This is better than QueueUserAPC() in that your callbacks aren't sensitive to the UI thread being in an alertable wait state, but this API has the problem that your callbacks may not be run at all. See Raymond Chen's discussion of this. To get around it, you need to put a hook on the thread's message queue.
The third way involves setting up an invisible, message-only window whose WndProc calls the deferred call, and using PostMessage() for your callback data. Because it is directed at a specific window, the messages won't get eaten in modal UI situations. Also, message-only windows are immune to system message broadcasts (thus preventing message ID collisions). The downside is it requires more code than the other options.