According to Effbot's Tkinterbook on Events and Bindings, I can prevent newlines from being inserted into a Text widget via this code:
text.bind("<Return>", lambda e: "break")
This does work, but it prevents the <Return> event from reaching the parent form, which has its own <Return> binding that performs work on the Text widget and others. What I want is to catch events like <Return>, <KP_Enter>, etc., in the Text widget and prevent the newline from being inserted, while still letting the event propagate upwards. I can't find a good way of doing this, because Text widgets have no form of validation like Entry widgets do (which is where this kind of work would normally be done).
I am thinking that if I override <KeyPress> and check event.keycode for 13, I can skip the internal call to ::tk::TextInsert and instead invoke whatever function internal to Tk is responsible for passing events up to the next elements in the bindtags, based on my reading of the Tcl code in text.tcl.
You mention bindtags, which suggests you know what they are. Yet you also talk of events which propagate to their "parent form", which events don't normally do. The only time a <Return> event will propagate to the parent is if the parent is in the bindtags. This will be true if the parent is the root window, but not for any other widget unless you explicitly add the parent to the bindtags.
When you return "break" in a binding, you prevent other bindtags from acting on the event. There is no way to skip the immediately preceding bindtag but allow additional bindtags to process the event. And there's no way (short of regenerating the event) to have other widgets that are not part of the bindtags process the event.
If you have a binding on a frame, and one on the text widget, and you want both to fire, just have your text widget binding call the code associated with the other binding. For example:
self.parent.bind("<Return>", self.validate_form)
self.text.bind("<Return>", self.validate_form)
If self.validate_form returns "break", this should work as you expect.
Here's the situation:
I have a custom widget subclassed from QTabWidget that I've implemented to accept QDropEvents for files. When files are dropped on the TabWidget they are opened for editing. This part works fine. However, I want to implement drag and drop functionality as part of the editor (think of a LabView-esque GUI). I have properly implemented the event handlers and setAcceptDrops() on the EditorWidget, but the TabWidget receives all the events and attempts to process them as files. I can differentiate file-related events from the editor's events by their mime data, but I can't figure out how to pass the event from the TabWidget on to the appropriate EditorWidget.
So the question:
How can I pass a QDropEvent from the widget which received it from the system to another widget which it owns? Alternatively, how do I tell the system which widget should receive the event, based on the contents of said event?
What I've tried:
I can't call the dropEvent method of the child as it's protected. I could create a series of my own methods that pass the events around, but that seems redundant and fragile. I've looked into installing an event filter, but from what I can tell that only discards events; it doesn't say "not me, try someone else."
Thanks in advance for your assistance!
Interesting! I think that accepting the event in the parent widget and then trying to forward it to the child widget is not the right approach architecturally. It would basically violate encapsulation (objects should handle their own events).
If I were you, I would first investigate why the child widget isn't seeing the event. Child widgets are on top of their parents, so your child widget should get the first go at the event. Did you call setAcceptDrops(true)?
When you fix that, in the child widget event handler you can analyze the event and call event->ignore() if the event should be forwarded to the parent QTabWidget. If you don't call ignore(), the child will "consume" the event and it will not be propagated to the parent!
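For illustration, a sketch of what that handler could look like, assuming file drops arrive as URL mime data (the helper name is made up):

void EditorWidget::dropEvent(QDropEvent *event)
{
    if (event->mimeData()->hasUrls()) {
        // A file drop: let it propagate up to the parent QTabWidget.
        event->ignore();
        return;
    }
    // An editor-internal drag: consume it here.
    handleEditorDrop(event);           // hypothetical helper
    event->acceptProposedAction();
}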
Here's an old blog post on event propagation that could help:
http://blog.qt.io/blog/2006/05/27/mouse-event-propagation/
Solving my own problem:
As Pliny stated, the child should see the event first. My problem appears to have been that in EditorWidget I had not implemented dragEnterEvent and dragMoveEvent, so even though I had implemented dropEvent in EditorWidget, the TabWidget took control of the drag and therefore stole the drop.
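For reference, the missing handlers looked roughly like this (a sketch; it accepts everything unconditionally for brevity):

EditorWidget::EditorWidget(QWidget *parent)
    : QWidget(parent)
{
    setAcceptDrops(true);  // without this the parent sees the drag first
}

void EditorWidget::dragEnterEvent(QDragEnterEvent *event)
{
    // Accepting the enter (and move) phases is what makes the
    // eventual dropEvent reach this widget at all.
    event->acceptProposedAction();
}

void EditorWidget::dragMoveEvent(QDragMoveEvent *event)
{
    event->acceptProposedAction();
}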
I have a combo which is disabled, but adding an element to it will emit the currentIndexChanged(int) signal.
I expected signals to be naturally turned off when a widget is disabled, but that's not the case. I know there is blockSignals(bool), but if there are many widgets whose signals must be "blocked when disabled", blockSignals would require storing a boolean state for each widget.
How can I disable the signals sent by a widget when it is disabled (and not alter its blockSignals state)?
EDIT
To clarify: since this is a widget, the user cannot interact with it when it's disabled, but some signals are emitted when altering the widget programmatically. In my case there are two interesting signals:
currentIndexChanged(int) and activated(int)
The problem in my code is that I sometimes alter the combo programmatically AND wish it to emit a signal, and sometimes it's the user that alters the combo by interacting with it. That's why I am using currentIndexChanged and not activated.
In both cases, anyway, I don't want the signals to be emitted while the widget is disabled.
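For instance (the slot name here is just an example):

QComboBox *combo = new QComboBox(this);
combo->setEnabled(false);
connect(combo, SIGNAL(currentIndexChanged(int)),
        this, SLOT(onIndexChanged(int)));

// Adding the first item moves the current index from -1 to 0,
// so currentIndexChanged(0) is emitted even though the combo
// is disabled.
combo->addItem("first item");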
From the end user's point of view, the QComboBox signals are driven by user interaction, if you only have a QComboBox and nothing else, as your question seems to imply.
I simply cannot reproduce the issue. I have just made a short program in which I cannot get any of the QComboBox signals emitted, since I simply cannot interact with the widget.
Edit: It might be a good idea to update your question with more context for casual readers, but based on further clarification in the comments: yes, programmatically this might be the case, but then the signals might be useful to process programmatically too, with corresponding slots, so it would not be a major improvement if Qt blocked them automatically.
Luckily, the feature you wish to have is already available:
bool QObject::blockSignals(bool block)
If block is true, signals emitted by this object are blocked (i.e., emitting a signal will not invoke anything connected to it). If block is false, no such blocking will occur.
The return value is the previous value of signalsBlocked().
Note that the destroyed() signal will be emitted even if the signals for this object have been blocked.
If you want to do it for many widgets, create a simple function that you call instead of myWidget->setDisabled(true);:
inline bool disableAndBlockSignals(QWidget *widget)
{
    // Disable the widget and block its signals in one step.
    widget->setDisabled(true);
    // Return the previous value of signalsBlocked() so the caller
    // can restore it when re-enabling the widget.
    return widget->blockSignals(true);
}
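To restore things afterwards, a matching helper might look like this (a sketch; wasBlocked is the value returned by the call above):

inline void enableAndUnblockSignals(QWidget *widget, bool wasBlocked)
{
    widget->setEnabled(true);
    // Restore whatever blocking state the widget had before it was
    // disabled, rather than unconditionally unblocking.
    widget->blockSignals(wasBlocked);
}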
If you want to disable only some of them, say currentIndexChanged, you can disconnect them manually instead.
You can disconnect signals with QObject::disconnect() when you want to block them, and then reconnect them when you want to unblock them.
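For example, with the combo box from the question (the slot name is assumed):

// Block: stop delivering index changes while the combo is disabled.
QObject::disconnect(combo, SIGNAL(currentIndexChanged(int)),
                    this, SLOT(onIndexChanged(int)));
combo->setEnabled(false);

// ...later, unblock: re-enable the combo and reconnect the signal.
combo->setEnabled(true);
QObject::connect(combo, SIGNAL(currentIndexChanged(int)),
                 this, SLOT(onIndexChanged(int)));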
In your case, the signals are the sole source of state information carried on to other objects. If you disable them, the other objects won't ever get any notification that the state was changed. This can cause bugs in the objects that depend on being informed of your widget's state.
There are at least two solutions:
Don't change the widget's state. You can certainly defer the update of the widget's contents until after it gets reenabled.
Create a proxy that monitors the originating widget's state and queues up the signals (with compression) until the widget gets re-enabled; see the sketch below.
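A minimal sketch of that second approach, with assumed names (ComboProxy, onChanged): the proxy forwards the combo's signal while the widget is enabled, compresses changes down to the last value while it is disabled, and flushes that value when the widget becomes enabled again:

#include <QComboBox>
#include <QEvent>

class ComboProxy : public QObject
{
    Q_OBJECT
public:
    explicit ComboProxy(QComboBox *combo)
        : QObject(combo), m_combo(combo), m_pending(-1), m_hasPending(false)
    {
        combo->installEventFilter(this);
        connect(combo, SIGNAL(currentIndexChanged(int)),
                this, SLOT(onChanged(int)));
    }

signals:
    // Connect your slots to this signal instead of the combo's own.
    void currentIndexChanged(int index);

protected:
    bool eventFilter(QObject *watched, QEvent *event)
    {
        // When the combo becomes enabled again, flush the last
        // compressed value, if there is one.
        if (event->type() == QEvent::EnabledChange
                && m_combo->isEnabled() && m_hasPending) {
            m_hasPending = false;
            emit currentIndexChanged(m_pending);
        }
        return QObject::eventFilter(watched, event);
    }

private slots:
    void onChanged(int index)
    {
        if (m_combo->isEnabled()) {
            emit currentIndexChanged(index);
        } else {
            m_pending = index;   // compression: only the latest survives
            m_hasPending = true;
        }
    }

private:
    QComboBox *m_combo;
    int m_pending;
    bool m_hasPending;
};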
Due to those workarounds, your design may require a rework. Perhaps it'd be better if you could cope with signals from those disabled widgets. You should also evaluate whether disabling a widget doesn't break the user experience: what if the user wants to see the contents of a widget but doesn't mean to change the current setting? A disabled widget in such a case goes too far. You could make your own, perhaps subclassed, widget that acts so that the control is not disabled but the current element stays fixed. This could even be a separate object, applicable to any control, through judicious leverage of the user property.
Intro:
I am writing an app which displays a list of custom QWidgets (namely, “ScopeWidget”) in a container widget with a QVBoxLayout (“ScopeView”).
Every scopeWidget has a pointer to its own data source class (“Scope”). These objects are logically arranged in groups, i.e. there are some shared parameters (“ScopeShared”) among the objects in one group.
These parameters are needed when retrieving (or preparing) the data which has to be displayed on a scopeWidget.
One step further:
A scopeWidget needs two sets of parameters: those given by “Scope” and those given by “ScopeGroup”.
A scopeWidget can, by user action, change some of the shared parameters in a group, thus invalidating all previously retrieved data held by the “Scope”s in this group.
By default, there is no displayable data in a “Scope”. Data is retrieved on demand, when a paintEvent occurs (this is the source of the problem). To get displayable data in a “Scope”, one has to process all “Scope”s in this particular group (which yields usable data for all “Scope”s in the group).
How it works:
The user forces one of the scopeWidgets to change shared data in a group. After these changes, all data held by the “Scope”s in this group is invalidated, so the change event reprocesses the whole group and calls update() for all scopeWidgets in the group. The widgets are redrawn. This works…
The problem:
…is a paintEvent which occurs spontaneously. When the user changes something, I know that it happened, and I can process the scopeGroup prior to enqueuing updates of the widgets. But when “something else” (the system itself) triggers a paint event, I need to process the whole group before any painting happens.
So the paint event does no painting directly, but executes scopeGroup processing; after that, it paints the widget which received the paint event and calls update() for all other widgets in that group. This in turn causes new paint events, which cause the next scopeGroup processing, one paint(), and update()s for the other widgets, which cause new paint events. It ends up as a recursive repaint (and scopeGroup processing).
Lame solution:
flags: spontaneous paint events do the group processing, one paint() for the requesting widget, and update()s on the rest of the widgets in the group, together with setting the flag on each of those widgets.
This pseudocode depicts this mechanism:
paintEvent(QWidget *w)
{
    if (w->flag)
    {
        // repaint requested by a previous group-processing pass
        w->paint();
        w->flag = 0;
    }
    else
    {
        // spontaneous paint event: process the whole group first
        w->group()->process();
        w->paint();
        // schedule repaints for the other widgets in the group
        foreach (QWidget *x, w->group()->widgets())
            if (x != w) { x->flag = 1; x->update(); }
    }
}
What would be IMHO better:
a) if widgets could be painted without a prior paint event (or a call to update() or repaint())… this would be the best ;], but it doesn't work in the straightforward and obvious way. Is there any other way? Or:
b) force the system to call a custom function instead of the paint event.
Are these ‘better’ solutions possible?
In the documentation of Ember.StateManager it is said that: "Inside of an action method the given state should delegate goToState calls on its StateManager". Does this mean that if I send an action message, I necessarily need to transition to another state? Is it possible to stay in the same state but do some task by sending an action? For example, I'm in a state "loading" and I run two actions, "preprocess" and "display".
Very simply: an action message may but does not have to transition to another state.
Something you didn't ask, but is related and important: it is a bad idea and bad design to call goToState in an enter or exit method.
When dealing with statecharts in general, you can do whatever you want. It's not mandatory to switch states in an event handler. A common case would be an event handler that shows a cancel/save dialog. You can easily put the dialog on the page in the event handler, and proceed accordingly depending on which button is pressed.
A separate question is whether every event handler should basically just go to another state. In the above scenario, you could certainly go to a "confirm" state, whose state-enter method would show the dialog, and there would be two handlers, one for each button. Those handlers would in turn go to other states.
Both design choices, I think, are equally valid, at least in that scenario. If you choose to implement a separate state for every action, you will end up with a lot of small but concise states. If you choose to do stuff in the event handlers themselves, your states will be bigger, but there will be fewer of them.
One thing I will say is that if an event handler is getting complicated, you are probably better off with a new state. Also, be consistent.
For your specific scenario, if I'm reading it right, you want to load data and then change the display to show the data, based on an event. In this case, I would use new states:
So: you press a button that starts the process.
In the event handler, go to some sort of 'MyDataSection' state.
The initial substate is 'loadData'.
The enter-state method of 'loadData' starts the loading process.
An event handler 'dataLoaded' in 'loadData' handles the moment the data loads; this means you need to fire an event when the data loads.
The 'dataLoaded' event goes to the 'show' state.
The 'show' state shows the view (or removes the activity indicator, etc.) and handles any events that come from the display.
What's good here is that if you have multiple ways to get to this section of the app, all actions that lead here only need to go to this state, and everything will always happen the same way. Also note that since the view event handlers are on the 'show' state, nothing will happen if the user hits a button while the data is loading.
I am trying to create a situation in my wxWidgets application where a user can type something into a text box, and if there are one or more characters in the text box, other controls become enabled. As such, I created an event handler that checks TextBox->IsEmpty() on the event wxEVT_COMMAND_TEXT_UPDATED. However, this handler seems to be called before the changes to the text in the text box take place. Is there any way to get an event to fire after the changes have occurred?
Thank you.
EDIT: Code I am using.
I am using Connect() to set up the event handling, so there is no event table to speak of. This is the code I am using:
cur->mTextBox = new wxTextCtrl(mParentFrame, wxID_ANY, wxT(""), wxDefaultPosition, wxDefaultSize);
mParentFrame->Connect(wxID_ANY, wxEVT_COMMAND_TEXT_UPDATED, wxCommandEventHandler(iguiFrame::correctTextBoxes));
correctTextBoxes is a public method of my wxFrame derived class, which calls a function containing only the following code:
if(cur->mTextBox->IsEmpty())
{
wxMessageBox("Empty!");
}
The message box always pops up "one character" too late.
As @ravenspoint mentioned, this event should have been fired after the change was made, but I also wanted to point out that even in the cases where an event is fired just before a change is made, the new value is almost always passed into your event handler via the event parameter.
So for this case, you may want to actually just check the value of event.GetString() in correctTextBoxes() to see the new value being set on the text control.
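So, roughly (a sketch based on the code in the question):

void iguiFrame::correctTextBoxes(wxCommandEvent& event)
{
    // event.GetString() carries the text control's new contents,
    // even when the control itself hasn't committed the change yet.
    if (event.GetString().IsEmpty())
    {
        wxMessageBox("Empty!");
    }
    event.Skip();  // let any other handlers run as well
}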