How to simulate a mouse click with touch data? - c++

I'm working on a Qt5 application that attempts to make use of raw input data from a touchscreen, because touch is not fully supported by my Linux kernel (2.6.32). I wrote a parser for the raw data coming from /dev/input/eventX because, even though X doesn't support touch, the events are still there.
I'm able to read the events just fine, and wrote a wrapper called "TouchPoint" which contains the ID and (x,y) coordinates of each touch point, as well as a boolean indicating whether the touch point is currently "active" (i.e. the user currently has that finger on the screen). It's worth noting that I also output the number of touch points currently on the screen, and that value is accurate.
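For context, the read loop of such a parser looks roughly like this (a sketch only; the device node and the missing error handling are placeholders, not my exact code):

#include <linux/input.h>
#include <fcntl.h>
#include <unistd.h>

int fd = open("/dev/input/event2", O_RDONLY); // hypothetical device node
struct input_event ev;
while (read(fd, &ev, sizeof(ev)) == sizeof(ev)) {
    if (ev.type == EV_ABS && ev.code == ABS_MT_SLOT) {
        // switch the current slot (cSlot in the code below)
    } else if (ev.type == EV_ABS && ev.code == ABS_MT_TRACKING_ID) {
        // a finger went down (value >= 0) or up (value == -1) in the current slot
    } else if (ev.type == EV_SYN && ev.code == SYN_REPORT) {
        // one complete event frame has been delivered
    }
}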
My issue is that I can't seem to figure out how to accurately simulate a mouse click with this. With multi-touch events in Linux, each touch point is assigned a "tracking ID" when the user presses a finger to the screen, and when that point is lifted, an event setting that slot's tracking ID to -1 is generated. I use these ID transitions to track whether a touch point is "down":
void TouchPoint::setID(int nid) {
    if((id == -1) && (nid >= 0)) down = true;
    else if(nid == -1) down = false;
    id = nid;
}
So to try and simulate a mouse click, I do this when a tracking ID event is read:
else if(event.code == ABS_MT_TRACKING_ID) {
    bool before = touchPoints[cSlot].isDown(); // cSlot is the slot the events are currently referring to
    touchPoints[cSlot].setID(event.value);
    bool after = touchPoints[cSlot].isDown();
    if(!before && after) touch(touchPoints[cSlot]);
}
And then the touch method does this:
void MainWindow::touch(TouchPoint tp) {
    if(ui->touchMe->rect().contains(QPoint(tp.getX(), tp.getY()))) {
        on_touchMe_clicked();
    }
}
The button does not respond when I touch it directly, but sometimes, if I flail my fingers wildly around the screen, the message box that should appear when it's pressed does show up; when it does, my fingers are always somewhere else on the screen.
Can anyone think of something I might be doing wrong here? Is my method of checking to see if the touch point is within the button's bounds wrong? Or is my logic for keeping track of the "down" status of the touch points wrong? Or something else entirely?
UPDATE: I just noticed two things by outputting the screen coordinates of both the touch location and the button location.
The digitizer resolution is larger than the screen resolution, so I needed to scale the X and Y coordinates coming from the raw events (see the sketch below).
My coordinates are offset because they are global screen coordinates rather than widget-local coordinates.
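A minimal sketch of the scaling I mean (the raw range values are assumptions; the real ones come from the device, e.g. via the EVIOCGABS ioctl):

#include <QApplication>
#include <QDesktopWidget>

// Map a raw digitizer coordinate onto the screen, assuming the device
// reports raw values in the range 0..rawMaxX / 0..rawMaxY.
int toScreenX(int rawX, int rawMaxX) {
    return rawX * QApplication::desktop()->screenGeometry().width() / rawMaxX;
}
int toScreenY(int rawY, int rawMaxY) {
    return rawY * QApplication::desktop()->screenGeometry().height() / rawMaxY;
}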
UPDATE 2: I've now dealt with the two issues mentioned in my last update, but I still can't figure out how to simulate a touch/mouse event accurately enough to interact with the interface. I also modified my TouchPoint class to carry a boolean flag indicating whether the touch point was just updated; it is set when the tracking ID changes and reset when an EV_SYN event arrives. That way I can collect the X and Y position events before creating the event for the application. I tried the following:
Using the QApplication class to post a mouse event with the position of the touch point to the QApplication::desktop()->screen() widget.
Using the QTest class to raise a touchEvent on the QApplication::desktop()->screen() widget, using the press and release methods for the given slot and position of the touch point, and then using the bool event(QEvent*) method to try to catch the event. I also enabled the WA_AcceptTouchEvents attribute on the main window.
Neither of these works. For some reason, when I have the bool event(QEvent*) method in place, the signal emitted from the thread reading /dev/input/eventX doesn't trigger the slot in the main window class, and I can't seem to find any other way to accomplish simulating the events. Anyone have any ideas?
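For reference, the QTest-based attempt (the second item above) would look roughly like the sketch below. The device setup is an assumption: QTest::createTouchDevice only exists in newer Qt 5 releases, and simulateTap is a hypothetical helper, not my exact code.

#include <QtTest/QtTest>

// assumption: a touch device created via QTest (Qt 5.8+); older Qt 5
// releases need a manually registered QTouchDevice instead
static QTouchDevice *touchDevice = QTest::createTouchDevice();

void MainWindow::simulateTap(int slot, const QPoint &pos)
{
    QWidget *target = QApplication::desktop()->screen();
    QTest::touchEvent(target, touchDevice).press(slot, pos, target);
    QTest::touchEvent(target, touchDevice).release(slot, pos, target);
}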

You wrote that you sometimes get your message box "clicked" when your fingers are in the "wrong position", so it sounds like you are mixing global screen coordinates with widget-local coordinates. In your MainWindow::touch you check whether a point in global screen coordinates, e.g. (530, 815), is inside the widget's geometry in its local coordinates. QWidget::rect() returns the internal geometry of the button (widget), i.e. a rectangle of the widget's width and height, e.g. (0, 0, 60, 100).
You have to move that rect to the right position in global screen coordinates. QWidget::pos() returns the widget's position relative to its parent; QWidget::mapToGlobal translates a widget-local position to global screen coordinates. Then your rect is something like (500, 850, 60, 100), and you should get a hit and get your slot called.
However, a better approach is to use QApplication::widgetAt to get the widget at a specific screen position and generate a mouse click for it.
You can generate a mouse click for a widget by posting a mouse press and release to it, something like below:
QPoint screenPos = QPoint(tp.getX(), tp.getY());
QWidget *targetWidget = QApplication::widgetAt(screenPos);
if (!targetWidget) // nothing under the touch point
    return;
QPoint localPos = targetWidget->mapFromGlobal(screenPos);
QMouseEvent *eventPress = new QMouseEvent(QEvent::MouseButtonPress, localPos, screenPos, Qt::LeftButton, Qt::LeftButton, Qt::NoModifier);
QApplication::postEvent(targetWidget, eventPress);
// for the release, the buttons state after the event is Qt::NoButton
QMouseEvent *eventRelease = new QMouseEvent(QEvent::MouseButtonRelease, localPos, screenPos, Qt::LeftButton, Qt::NoButton, Qt::NoModifier);
QApplication::postEvent(targetWidget, eventRelease);
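(QApplication::postEvent takes ownership of the posted event, so allocating with new and never deleting it is correct here; the event loop deletes the event after delivery.)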

Related

Change QWidget Parent During Mouse Event

I'm trying to create a detachable tab-style widget, similar to the way Chrome tabs are detachable (the class is called Tab). I have everything working, except for a bug where sometimes (maybe 50% of the time), the Tab object never gets the mouse release event, and stops getting mouse move events.
Essentially, the detaching system works by allowing drags in the mouse press/move/release functions, just like normal. The mouseMoveEvent checks the total distance moved from the start, and if over a certain amount, will start the "detaching" process. The detaching process involves setting the parent widget to 0 (top level widget, undecorated window), so the Tab object is pretty much floating above everything, under the mouse, and continues to be dragged along with it until released.
I ran through all the QEvent items being delivered, and I found that when this issue occurs, the QEvent::MouseMove items (and all mouse events after this) are being sent to the TabBar (the Tab object's original parent). This occurs directly after calling setParent(0) on the Tab.
Basic mouse handling overview:
void Tab::mousePressEvent(*) {
    [set up some booleans, start positions, etc]
}
void Tab::mouseMoveEvent(*) {
    [track the updated position]
    if (positionChange > STATIC_AMOUNT)
        detachTab();
}
void Tab::mouseReleaseEvent(*) {
    [return the Tab to its original position, and set the parent back to the TabBar]
}
void Tab::detachTab() {
    QPoint mappedPos = mapToGlobal(QPoint(0, 0));
    setParent(0); // the loss of MouseMove events occurs when this returns
    move(mappedPos);
    show();
    raise();
}
Here are the events that the Tab object receives, in order:
[Tab::detachTab() started]
[setParent(0) started]
QEvent::Hide
QEvent::Leave
qApp QEvent::MouseMove [ TabBar ] <-- now the TabBar is soaking up the mouse events
QEvent::HideToParent
QEvent::ParentAboutToChange
QEvent::ParentChange
[setParent(0) returned]
....
Summed up: my draggable QWidget loses QEvent::MouseMove and QEvent::MouseButtonRelease events after having its parent set to 0.
Any advice would be really appreciated!
A bit of a tricky workaround. I didn't test it, it's just an idea.
When the mouse hovers the draggable part of a widget, you can create a topmost widget (let's call it Shade) with Qt::FramelessWindowHint (and possibly with Qt::WA_TranslucentBackground). You can control the Shade's appearance by reimplementing paintEvent: for example, draw the content of the original widget, or draw some transparent preview, etc.
Then you can resize the Shade during dragging to show the user that the widget will be detached. You will not lose mouse capture.
When the user releases the mouse, you remember the position of the Shade, destroy it, and detach+move the original widget.
Feel free to ask if you want more details.
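A minimal sketch of that idea, assuming the Tab keeps two members for the purpose, m_shade (QWidget*) and m_dragOffset (QPoint). The point is that the Tab itself is never reparented while the drag is in progress, so it keeps receiving mouse events:

void Tab::startShade()
{
    m_shade = new QWidget(nullptr, Qt::FramelessWindowHint | Qt::WindowStaysOnTopHint);
    m_shade->setAttribute(Qt::WA_TranslucentBackground);
    m_shade->resize(size());
    m_shade->move(mapToGlobal(QPoint(0, 0)));
    m_shade->show();
}

void Tab::mouseMoveEvent(QMouseEvent *e)
{
    if (m_shade)
        m_shade->move(e->globalPos() - m_dragOffset); // the Tab still owns the drag
}

void Tab::mouseReleaseEvent(QMouseEvent *e)
{
    if (m_shade) {
        QPoint dropPos = m_shade->pos();
        delete m_shade;
        m_shade = nullptr;
        setParent(0); // safe now: no drag is in progress
        move(dropPos);
        show();
    }
}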
Here is a similar question. It suggests using QDockWidget and enforcing tabbed stacking of the widgets with QMainWindow::tabifyDockWidget.

How can I selectively make QWidget accept focus from a mouse click?

I'm creating an application that provides a visual representation of nodes and lines connecting them together. Both nodes and lines are represented by custom QWidgets, WidgetNode and WidgetLine, say.
I've implemented WidgetLine as a transparent widget that is large enough to contain the start and end point of the line, and a custom function to draw the line itself.
I would like it so that if the user clicks on or right next to the line then the WidgetLine receives focus, but if they click further away from the line (but still over the rectangular area covered by WidgetLine's geometry) then the click is completely ignored by WidgetLine and passed on to the widget below.
I first tried doing this with a custom focusInEvent() function on WidgetLine, but found the mouse clicks weren't propagating below. I then tried setting the focus policy to Qt::NoFocus and using a custom mousePressEvent() that calls setFocus() to manually set the focus when appropriate, but the mouse events are still not being propagated to the widgets below, even when I call ignore() on them.
Finally, I've tried installing an event filter to reject mouse events, with this event filter function
bool WidgetLineFilter::eventFilter(QObject* object, QEvent* event)
{
    assert(object == mCord);
    if (event->type() == QEvent::MouseButtonPress)
    {
        QMouseEvent* e = dynamic_cast<QMouseEvent*>(event);
        assert(e);
        if (e)
        {
            QPoint mouseRelativeToParent = mCord->mapToParent(e->pos());
            // calculate distance of mouse click to patch cord
            QLineF line(mCord->line());
            float distance = distanceFromPointToLine(QVector2D(line.p1()), QVector2D(line.p2()), QVector2D(mouseRelativeToParent));
            qDebug() << distance;
            const float distanceThreshold = 2.f;
            if (distance < distanceThreshold)
            {
                qDebug() << "consuming mouse click for focus";
                mCord->setFocus(Qt::MouseFocusReason);
                return true;
            }
            else
            {
                qDebug() << "mousepressevent too far for focus";
                return QObject::eventFilter(object, event);
            }
        }
    }
    return false;
}
But this still does not propagate the mouse events to the parent in the "mousepressevent too far for focus" case. I've also tried returning both false and true from here, and calling ignore() on e, but the widgets below are not receiving the click.
(NB: the above approaches do work in the sense that WidgetLine only gets focus at the right time; it's just that the widgets below aren't receiving the press events when it doesn't take focus.)
Any ideas on how to fix this?
Store the mouse position in a global variable. Give all of your widgets enter/leave event handlers like the following, and use them to check which widget you're in or near when you run the function.
void QGLWidget::enterEvent(QEvent *)
{
    setFocus();
}
void QGLWidget::leaveEvent(QEvent *)
{
    clearFocus();
}
In the end, I created an event filter to selectively intercept mouse events and installed it on the base window and, recursively, on every child widget of the base window (installing it on new child widgets as they are created; see the sketch below). This filter calls back to the base window with each mouse press event; the window then iterates through each WidgetLine, testing whether it should be selected by this mouse press and setting focus on it if it should. If they all test false, the filter releases the event; otherwise the filter consumes it.
WidgetLine's are then set to be transparent for mouse events with
setAttribute(Qt::WA_TransparentForMouseEvents);
It's messier than it ought to be, but it does the trick.
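A rough sketch of the recursive install (the function name is hypothetical):

void installFilterRecursively(QWidget *w, QObject *filter)
{
    w->installEventFilter(filter);
    for (QObject *child : w->children()) {
        if (QWidget *cw = qobject_cast<QWidget*>(child))
            installFilterRecursively(cw, filter); // descend into the widget tree
    }
}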

How does one retrieve selected area from QGraphicsView?

I need my QGraphicsView to react to user selection - that is, change the display when the user selects an area inside it. How can I do that?
As far as I can tell, selection in the Qt Graphics framework normally works through selection of items. I haven't found any methods/properties that touch on the selected area, save for QGraphicsView::rubberBandSelectionMode, which does not help.
After some more going through the documentation, I found a different solution.
QGraphicsView has a rubberBandChanged signal that contains just the information I wanted. So I handled it in a slot, resulting in a handler of the following form:
void MyImplementation::rubberBandChangedHandler(QRect rubberBandRect, QPointF fromScenePoint, QPointF toScenePoint)
{
    // in default mode, ignore this
    if(m_mode != MODE_RUBBERBANDZOOM)
        return;
    if(rubberBandRect.isNull())
    {
        // selection has ended
        // zoom onto it!
        auto sceneRect = mapToScene(m_prevRubberband).boundingRect();
        float w = (float)width() / (float)sceneRect.width();
        float h = (float)height() / (float)sceneRect.height();
        setImageScale(qMin(w, h) * 100);
        // ensure it is visible
        ensureVisible(sceneRect, 0, 0);
        positionText();
    }
    m_prevRubberband = rubberBandRect;
}
To clarify: my implementation zooms in on the selected area. To that effect, the class contains a QRect called m_prevRubberband. When the user stops the selection with the rubber band, the rubberBandRect parameter is null, and the saved value of the rectangle can be used.
On a related note, to process mouse events without interfering with rubber band handling, m_prevRubberband can be used as a flag (by checking whether it is null). However, if mouseReleaseEvent is handled, the check must be performed before calling the default event handler, because the default handler will set m_prevRubberband to a null rectangle.
You can use QGraphicsSceneMouseEvent.
On mouse press, save the current position; on mouse release, compute a bounding rect from the current position and the saved press position.
This gives you the selected area.
If you need custom shapes, you could track the mouse movement (mouse move events) to get the shape.
An example that uses QGraphicsSceneMouseEvent can be found here.
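A minimal sketch of that approach in a QGraphicsScene subclass (the class name, the m_pressPos member, and the areaSelected signal are hypothetical):

void MyScene::mousePressEvent(QGraphicsSceneMouseEvent *e)
{
    m_pressPos = e->scenePos(); // remember where the selection started
    QGraphicsScene::mousePressEvent(e);
}

void MyScene::mouseReleaseEvent(QGraphicsSceneMouseEvent *e)
{
    // normalized() fixes rects dragged up or to the left
    QRectF selected = QRectF(m_pressPos, e->scenePos()).normalized();
    emit areaSelected(selected); // hypothetical signal consumed elsewhere
    QGraphicsScene::mouseReleaseEvent(e);
}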

Accessing the coordinates in a QPushButton clicked slot

I have a QPushButton with an image that has two areas that I want to handle differently when clicked. Due to the positioning on the image, I cannot really use separate buttons for the two areas.
What I'd like to do is, in the slot where I am handling the click, have it check the coordinates of the click to determine which area was clicked.
Is there a way to access this?
This is what first comes to mind. It should work, although there may be a simpler way:
Create your own class that derives from QPushButton.
Override the necessary mouse events (mousePressEvent and mouseReleaseEvent) and, before calling the base class implementation, set a property on your object using setProperty with the position of the mouse click. (The position is available from the event parameter.)
In the slot handling the event, use sender() to get the button, and read the property using property(). (See the sketch below.)
If you don't need to treat your object as the base class (QPushButton*), you could instead create a new signal that includes the mouse event and connect that to your slot. Then you wouldn't need the property at all.
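A minimal sketch of the property-based version (PosButton and onButtonClicked are hypothetical names):

class PosButton : public QPushButton
{
    Q_OBJECT
public:
    using QPushButton::QPushButton;
protected:
    void mousePressEvent(QMouseEvent *e) override
    {
        setProperty("clickPos", e->pos()); // store the widget-local click position
        QPushButton::mousePressEvent(e);   // keep normal button behaviour
    }
};

// in the slot connected to clicked():
void MainWindow::onButtonClicked()
{
    auto *btn = qobject_cast<QPushButton *>(sender());
    QPoint clickPos = btn->property("clickPos").toPoint();
    // decide which area of the image was hit using clickPos...
}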
You can get the current mouse position using QCursor::pos() which returns the position of the cursor (hot spot) in global screen coordinates.
Now screen coordinates are not easy to use, and probably not what you want. Luckily there is a way to transform screen coordinates to coordinates relative to a widget.
QPoint _Position = _Button->mapFromGlobal(QCursor::pos());
This should tell you where on the button the mouse was when the user clicked. And you can take it from there.
Building on @Liz's simple mechanism, here's what I did; this is in a slot method that is called on pressed(), but it generalizes to other situations. Note that QCursor::pos() is in global screen coordinates, while QWidget::geometry() is in the parent's coordinates, so the button's center is mapped to global coordinates before the comparison.
void MainWindow::handlePlanButtonPress()
{
    int clickX = QCursor::pos().x();
    int middle = m_buttonPlan->mapToGlobal(m_buttonPlan->rect().center()).x();
    if ( clickX < middle ) {
        // left half of button was pressed
        m_buttonPlan->setStyleSheet(sStyleLargeBlueLeft);
    } else {
        // right half of button was pressed
        m_buttonPlan->setStyleSheet(sStyleLargeBlueRight);
    }
}

Qt -- pass events to multiple objects?

I basically have 3 layers (Window > Scene > View) that each need to handle a mouseMove event without blocking the others. It seems only the youngest child is getting the event though. I was hoping I could process the event and then call event->ignore() to pass the event back up the stack, but it doesn't seem to be working.
Some relevant code if you need it:
void EditorWindow::createScene() {
    m_scene = new EditorScene(this);
    m_view = new EditorView(m_scene);
    // ...
}
void EditorScene::mouseMoveEvent(QGraphicsSceneMouseEvent* mouseEvent) {
    printf("B\n");
    // ...
}
void EditorView::mouseMoveEvent(QMouseEvent* event) {
    printf("C\n");
    event->ignore();
}
Only "C" is being printed. Note that EditorScene and EditorView receive different types of mouse events so it's not completely trivial to pass them around.
The EditorWindow also needs the mouse coordinates; currently I'm sending a signal from one of the children which is caught by the window... but it shouldn't really be necessary to relay it that way, should it?
Found this nice article. Calling ignore() tells Qt to find another receiver; it sounds like it should work, but perhaps it means an unrelated receiver. The proper way to propagate the event is actually to call the base class implementation, like so:
void EditorView::mouseMoveEvent(QMouseEvent* event) {
    QGraphicsView::mouseMoveEvent(event); // propagate to parent widget
    printf("C\n");
}
Now it's printing BCBCBC... which is great, but I can't seem to nudge it up one more level...
Another edit: It was being propagated up properly, I just didn't have setMouseTracking enabled.
QGraphicsView::mouseMoveEvent(event);
doesn't propagate up to the parent -- it actually propagates down to the scene.
Here is what happens: QGraphicsView receives the QMouseEvent, translates it into a QGraphicsSceneMouseEvent, and passes it to the scene. The scene then passes it to the appropriate item or, in your case, prints "B". The event handler then returns to EditorView, which prints "C".
Then, if you explicitly ignore the event (mouse move is accepted by default), Qt's event handling will pass it on to the parent of EditorView. So try ignoring the event after you print "C".
Another thing about mouse move is this:
If mouse tracking is switched off, mouse move events only occur if a mouse button is pressed while the mouse is being moved. If mouse tracking is switched on, mouse move events occur even if no mouse button is pressed.
So make sure you have tracking enabled on the parent of EditorView (or that you press buttons :)).
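Putting both suggestions together, a sketch (the constructor is just an assumed place to enable tracking):

EditorView::EditorView(EditorScene *scene) : QGraphicsView(scene) {
    setMouseTracking(true);             // get move events without a button held
    viewport()->setMouseTracking(true); // the viewport widget receives them first
}

void EditorView::mouseMoveEvent(QMouseEvent* event) {
    QGraphicsView::mouseMoveEvent(event); // hands the event to the scene ("B")
    printf("C\n");
    event->ignore();                      // now let it bubble up to the parent
}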
EDIT:
BTW, EditorScene is not a parent of EditorView. Well, it is in your code, but only in the QObject sense of parenthood (i.e. memory management).
QGraphicsScene and View don't have a normal family relationship -- a scene can have multiple views, and those views can be children of unrelated parents.
For window event propagation purposes you must have a QWidget-based parent. In fact, I'm pretty sure you reparent EditorView to EditorWindow, or to one of its children, when you add it into a layout.
INSTAEDIT:
For the coordinates, you want the View itself to emit a signal, both for decoupling reasons and because you probably want to show local coordinates of the view, not of the parent window, and not screen coordinates (right?). If you actually want scene coordinates, the View is the right choice too, because it knows the transformation matrix.
Coordinates go like this:
Screen -> EditorWindow local -> EditorView local -> scene transformed -> item local transformed.
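A minimal sketch of such a signal (mouseMovedOnScene and updateCoords are hypothetical names):

// in EditorView (a QGraphicsView subclass), declared under signals:
//     void mouseMovedOnScene(const QPointF &scenePos);

void EditorView::mouseMoveEvent(QMouseEvent* event) {
    emit mouseMovedOnScene(mapToScene(event->pos())); // scene coordinates
    QGraphicsView::mouseMoveEvent(event);
}

// EditorWindow then connects to it:
connect(m_view, &EditorView::mouseMovedOnScene, this, &EditorWindow::updateCoords);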
Calling QGraphicsView::mousePressEvent( e ) in my mousePressEvent did the trick!