Intro:
I am writing an app that displays a list of custom QWidgets (namely, “ScopeWidget”) in a container widget (“ScopeView”) with a QVBoxLayout.
Every ScopeWidget has a pointer to its own data-source class (“Scope”). These objects are logically arranged in groups, i.e. there are some parameters (“ScopeShared”) shared among the objects in one group.
These parameters are needed when retrieving (or preparing) the data that has to be displayed on a ScopeWidget.
One step further:
A ScopeWidget needs two sets of parameters: those given by “Scope” and those given by “ScopeGroup”.
A ScopeWidget can, by user action, change some of the shared parameters in a group, thus invalidating all previously retrieved data held by the “Scope”s in this group.
By default, there is no displayable data in a “Scope”. Data is retrieved on demand, when a paintEvent occurs (this is the source of the problem). To get displayable data into a “Scope”, one has to process all “Scope”s in that particular group (which yields usable data for every “Scope” in the group).
How it works:
The user forces one of the ScopeWidgets to change shared data in a group. After making these changes, all data held by the “Scope”s in this group is invalidated, so the change event reprocesses the whole group and calls update() for all ScopeWidgets in this group. The widgets are redrawn. This works…
The problem:
…is a paintEvent that occurs spontaneously. When the user changes something, I know that this happened and I can process the ScopeGroup prior to enqueuing the widget updates. But when “something else” (the system itself) triggers a paint event, I need to process the whole group before any painting happens.
So the paint event does no painting directly, but executes the ScopeGroup processing; after that, it paints the widget that received the paint event and calls update() for all other widgets in that group. This in turn causes new paint events, which cause the next ScopeGroup processing, one paint(), and update()s for the other widgets, which cause new paint events; it ends up as a recursive repaint (and ScopeGroup processing).
Lame solution:
flags – a spontaneous paint event does the group processing, one paint() for the requesting widget, and update()s on the rest of the widgets in the group, together with setting a flag on every one of them.
This pseudocode depicts this mechanism:
paintEvent(QWidget *w)
{
    if (w->flag)
    {
        // data is already valid: just paint and clear the flag
        w->paint();
        w->flag = 0;
    }
    else
    {
        // spontaneous paint: process the whole group first
        w->group()->process();
        w->paint();
        foreach (QWidget *x, w->group()->widgets())
            if (x != w) { x->flag = 1; x->update(); }
    }
}
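Stripped of Qt, the flag mechanism above is essentially a dirty-flag cache shared by the group. A minimal, Qt-free sketch (all names are invented for illustration):

```cpp
#include <vector>

// Hypothetical stand-ins for Scope/ScopeGroup: a group holds shared
// parameters and its members recompute lazily; a shared-parameter
// change invalidates every member at once.
struct ScopeSketch {
    bool hasData = false;   // plays the role of w->flag
    int data = 0;
};

struct ScopeGroupSketch {
    std::vector<ScopeSketch*> members;
    int sharedParam = 0;

    void setSharedParam(int p) {        // user action: invalidate the group
        sharedParam = p;
        for (ScopeSketch* s : members) s->hasData = false;
    }

    // Called from the first paint request: process the whole group once,
    // so the repaints of the remaining members find valid data and do
    // not trigger another round of processing.
    void processIfNeeded() {
        for (ScopeSketch* s : members) {
            if (!s->hasData) {
                s->data = sharedParam * 2;   // stand-in for real retrieval
                s->hasData = true;
            }
        }
    }
};
```

A shared-parameter change marks every member dirty; the first paint request reprocesses the whole group, so the subsequent repaints of the other widgets find valid data and cause no further processing.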
What would be IMHO better:
a) if widgets could be painted without a prior paint event (or a call to update() or repaint())… this would be the best ;], but it doesn't work in the straightforward and obvious way. Is there any other way? – or
b) force the system to call a custom function instead of the paint event.
Are these ‘better’ solutions possible?
Related
For test purposes I'd like to create and display a widget. For now I only need the widget to render correctly, but in the future I may want to extend this so that I can simulate various events and see how the widget behaves.
From various sources it would appear that the following should work:
QApplication app(argc, argv); // QApplication has no default constructor
QPushButton button("Hello");
button.show();
// Might also be necessary:
QApplication::processEvents();
But for me the widget does not render correctly. A window is created to display the widget, however it is entirely black.
I can get the widget to render correctly by adding the following lines:
std::this_thread::sleep_for(std::chrono::milliseconds(10));
QApplication::processEvents();
With 10 milliseconds being about the smallest time necessary to get the widget to render correctly.
Does anyone know how to get this to work without the time delay, or know why the delay is necessary?
To test a Qt GUI application you need at least a QApplication instance and an event loop being processed. The fastest way is to just use the QTEST_MAIN macro; this answer explains nicely what it does exactly. However, to have more flexibility (e.g. to use GTest, GMock) you can also simply create the QApplication instance in your test's main (there is no need to call exec).
Then, to have the events processed, you should invoke QTest::qWait. This will process your events for the specified amount of time. It is good practice to use QTest::qWaitFor, which accepts a predicate; this way you avoid race conditions in your tests.
In the particular scenario, when you expect some signal to be emitted, you can also use similar functionality of QSignalSpy::wait.
A small example where we want to wait until some parameters are passed from one item to another:
QSignalSpy spy(&widget1, &Widget1::settingsChanged);
widget2->applySettings();
ASSERT_TRUE(spy.wait(5000));
// do some further tests based on the content of passed settings
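Why a predicate-based wait beats a fixed sleep can be seen in a Qt-free sketch of such a wait loop (waitFor here is a hypothetical stand-in for what QTest::qWaitFor does, not its actual implementation):

```cpp
#include <chrono>
#include <functional>
#include <thread>

// Polls the predicate until it holds or the timeout expires: it returns
// as soon as the condition is true, and only fails after a real timeout,
// instead of guessing a "long enough" fixed delay.
bool waitFor(const std::function<bool()>& pred,
             std::chrono::milliseconds timeout) {
    auto deadline = std::chrono::steady_clock::now() + timeout;
    while (std::chrono::steady_clock::now() < deadline) {
        if (pred()) return true;                  // condition met: stop early
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
    }
    return pred();                                // last check at the deadline
}
```

In the Qt version the poll step also processes pending events, which is exactly what the fixed-delay workaround was approximating.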
Why don't you want to have the application run exec()? The process of displaying a widget is not "static". You don't "draw" the widget on the screen; rather, you have an application that listens for various events and receives draw events from the windowing manager. The application can only draw the widget when the windowing manager asks it to.
The reason your second code works is that you wait sufficiently long for the windowing manager to have sent the "draw" request under your conditions. This does not guarantee it will always work.
If you want to guarantee the display of the widget, you need to start a loop and wait until you have received at least one draw event, but even that isn't foolproof.
As expertly described by Vincent Fourmond, widgets are not a one-off deal. The GUI is non-blocking and for this, it needs to run in an event loop.
The exec() method starts this event loop, which you mimicked by polling.
While it is possible to combine Qt's event loop with other event loops, I would recommend you a simpler solution:
Proceed with your program within the event loop by calling a method when the loop starts. Find an excellent answer here on how to do this: https://stackoverflow.com/a/8877968/21974
As you mentioned unit testing, there is also a signal you can use for doing checks at the end of the lifecycle (before widgets are destroyed): QApplication::aboutToQuit. This will happen when the last window is closed (programmatically or by the user).
So when I use a setText() on a QLabel for example, Qt automatically updates the view/gui for me and the new text is shown, but what happens behind the scenes? Is there an update function that gets called automatically when using functions like setText()?
Thanks!!
You should check the basic documentation in this link.
The internal system is a little bit more complex, but in general it follows the observer pattern. This mechanism allows detecting a user action or a change of state, and responding to it.
Low-level interactions, like refreshing the screen, are implemented via the Event System:
In Qt, events are objects, derived from the abstract QEvent class, that represent things that have happened either within an application or as a result of outside activity that the application needs to know about. Events can be received and handled by any instance of a QObject subclass, but they are especially relevant to widgets. This document describes how events are delivered and handled in a typical application.
So, regarding the display process, there is a dedicated event. A QWidget object handles/subscribes to paint events; see QWidget::paintEvent.
This event handler can be reimplemented in a subclass to receive paint events passed in event. A paint event is a request to repaint all or part of a widget.
When you call QLineEdit::setText(), the widget will be repainted the next time a paint event is triggered, based on the OS configuration, refresh rate, etc.
For high-level interactions, Qt uses a similar pattern based on the signal/slot mechanism:
Observer pattern is used everywhere in GUI applications and often leads to some boilerplate code. Qt was created with the idea of removing this boilerplate code and providing a nice and clean syntax, and the signal and slots mechanism is the answer.
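As a rough, Qt-free illustration of that observer pattern (all names here are hypothetical, not Qt's actual internals), a setText()-style setter simply stores the new state and notifies whoever subscribed, which in Qt ultimately results in a scheduled repaint:

```cpp
#include <functional>
#include <string>
#include <vector>

// Minimal observer-pattern sketch of what a setText() conceptually does.
// In Qt this role is played by the signal/slot and event machinery.
class LabelModel {
public:
    void subscribe(std::function<void(const std::string&)> cb) {
        observers_.push_back(std::move(cb));
    }
    void setText(const std::string& text) {
        text_ = text;
        for (auto& cb : observers_) cb(text_);   // notify: "please repaint"
    }
    const std::string& text() const { return text_; }
private:
    std::string text_;
    std::vector<std::function<void(const std::string&)>> observers_;
};
```

The key point is that the widget never paints inside the setter; it only records the change and requests an update, and the actual painting happens later, when the paint event is delivered.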
I have a program where the user selects a block and can move it around the screen; however, the blocks that are inside the region of the main block need to move in relation to the main block's motion.
I accomplished this by making a QGraphicsItemGroup of these blocks and setting the right flags.
dragGroup = this->scene()->createItemGroup(items);
this->dragGroup->setHandlesChildEvents(true);
this->dragGroup->setFlag(ItemIsMovable);
This, consequently, ignores the main item's mouseReleaseEvent(), where the group is dismantled. Therefore, the blocks inside the main block are stuck in the group and cannot be moved independently after the initial drag is complete. My idea is to make a custom QGraphicsItemGroup whose mouseReleaseEvent() calls the main block's event to kill the group.
I cannot figure out how to do this. I have tried subclassing it, but the scene creates the base class and not my derived class, and a dynamic_cast didn't seem to work in this case.
OR, is there another way of accomplishing the same thing: having the QGraphicsItemGroup control some events and the blocks themselves control others? Or is a QGraphicsItemGroup not the right fit in this situation?
According to Effbot's Tkinterbook on Events and Bindings, I can prevent newlines from being inserted into a Text widget via this code:
text.bind("<Return>", lambda e: "break")
Which does work, but it prevents the <Return> event from reaching the parent form, which has its own <Return> binding that performs work on the Text widget and others. What I want to do is catch events like <Return>, <KP_Enter>, etc, in the Text widget and prevent the newline from being inserted, but I still want that event to propagate upwards. I can't find a good way of doing this, because Text widgets have no form of validation like Entry widgets (which is where this kind of work would normally be done).
I am thinking that if I override <KeyPress> and check event.keycode for 13, I can skip the internal call to ::tk::TextInsert and instead invoke whatever function internal to Tk is responsible for passing events up to the next elements in the bindtags, based on my reading of the Tcl code in text.tcl.
You mention bindtags, which sounds like you know what they are. Yet you also talk of events which propagate to their "parent form", which events don't normally do. The only time a <Return> event will propagate to its parent is if the parent is in the bindtags. This will be true if the parent is the root window, but not for any other widget unless you explicitly add the parent to the bindtags.
When you return "break" in a binding, you prevent other bindtags from acting on the event. There is no way to skip the immediately preceding bindtag but allow additional bindtags to process the event. And there's no way (short of regenerating the event) to have other widgets that are not part of the bindtags process the event.
If you have a binding on a frame, and one on the text widget, and you want both to fire, just have your text widget binding call the code associated with the other binding. For example:
self.parent.bind("<Return>", self.validate_form)
self.text.bind("<Return>", self.validate_form)
If self.validate_form returns "break", this should work as you expect.
I'm working on a Qt app, and at some point I have a class (I name it here "engine") that is governing the program: it has a signal with a timeout which makes the class draw and evolve the app logic. Moreover, it receives events that are caught from a QGraphicsScene.
Each engine "tick", update() is called on the scene, updating the drawing according to the app's evolution.
Naturally, I want the drawing to be synchronized with the reactions to the events; otherwise, a drawing of some object could be made while the reaction to an event was destroying that same object, causing a segfault.
I've tried using a queue in the engine, so that the engine would react to those events only at a specific point of an update, thus not interfering with the drawing part.
Two problems arose:
I cannot make a copy of a QGraphicsEvent. Apparently the copy operator is private (which I assume is for a good reason anyway).
When the class is processing the events, before the drawing, a new event can also arrive, which can be "bad" because it is not synchronized either.
Taking this situation into account, is there any approach that solves it? Is there any standard procedure in Qt for this? I mean, how do I ensure the drawing is not potentially desynchronized with the application's reactions to events?
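For what it's worth, the queueing idea from the question can sidestep the non-copyable QGraphicsEvent by storing your own lightweight descriptions of the reactions instead of the Qt event objects, and draining them at one well-defined point of the tick, before drawing. A hypothetical, Qt-free sketch (all names invented):

```cpp
#include <functional>
#include <queue>

// Deferred-reaction queue: scene event handlers only post callables
// describing the reaction; the engine drains the queue once per tick,
// before drawing, so reactions never interleave with painting.
class DeferredEvents {
public:
    void post(std::function<void()> reaction) {
        pending_.push(std::move(reaction));
    }
    // Called once per engine tick, before update()/drawing. Reactions
    // posted while draining are kept for the next tick, so a late event
    // can never interleave with the current tick's processing.
    void drain() {
        std::queue<std::function<void()>> batch;
        batch.swap(pending_);
        while (!batch.empty()) {
            batch.front()();
            batch.pop();
        }
    }
private:
    std::queue<std::function<void()>> pending_;
};
```

Swapping the pending queue into a local batch before running it addresses the second problem: anything posted during the drain simply waits for the next tick instead of being processed mid-draw.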