I need my QGraphicsView to react to user selection - that is, to change its display when the user selects an area inside it. How can I do that?
As far as I can tell, selection in the Qt Graphics framework normally works through selection of items. I haven't found any methods or properties that deal with the selected area itself, save for QGraphicsView::rubberBandSelectionMode, which does not help.
After going through the documentation some more, I found a different solution.
QGraphicsView has a rubberBandChanged signal that carries just the information I wanted. So I handled it in a slot, giving a handler of the following form:
void MyImplementation::rubberBandChangedHandler(QRect rubberBandRect, QPointF fromScenePoint, QPointF toScenePoint)
{
    // in default mode, ignore this
    if(m_mode != MODE_RUBBERBANDZOOM)
        return;

    if(rubberBandRect.isNull())
    {
        // selection has ended - zoom onto it!
        auto sceneRect = mapToScene(m_prevRubberband).boundingRect();
        float w = (float)width() / (float)sceneRect.width();
        float h = (float)height() / (float)sceneRect.height();
        setImageScale(qMin(w, h) * 100);

        // ensure it is visible
        ensureVisible(sceneRect, 0, 0);
        positionText();
    }

    m_prevRubberband = rubberBandRect;
}
To clarify: my implementation zooms in on the selected area. To that end, the class contains a QRect called m_prevRubberband. When the user stops the rubber band selection, the rubberBandRect parameter is null, and the previously saved rectangle can be used.
On a related note, m_prevRubberband can also be used as a flag (by checking whether it is null) to process mouse events without interfering with rubber band handling. However, if mouseReleaseEvent is handled, the check must be performed before calling the default event handler, because the default handler will reset m_prevRubberband to a null rectangle.
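A minimal sketch of that ordering, assuming m_prevRubberband is updated by the slot above through a direct (same-thread) signal connection:

void MyImplementation::mouseReleaseEvent(QMouseEvent *event)
{
    // check BEFORE calling the base class: it emits rubberBandChanged with
    // a null rect, which resets m_prevRubberband via the slot above
    bool endedRubberBand = !m_prevRubberband.isNull();

    QGraphicsView::mouseReleaseEvent(event);

    if (endedRubberBand)
        return; // this release finished a rubber band drag; skip click handling

    // ... handle an ordinary click here ...
}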
You can use QGraphicsSceneMouseEvent.
On MousePress, save the current position; on MouseRelease, you can compute a bounding rect from the release position and the saved press position.
This gives you the selected area.
If you need custom shapes, you could track the mouse movement (MouseMove) to build the shape.
An example that uses QGraphicsSceneMouseEvent can be found here.
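In the meantime, here is a minimal sketch of that approach, assuming a QGraphicsScene subclass; the class name and the m_pressPos member are illustrative:

#include <QGraphicsScene>
#include <QGraphicsSceneMouseEvent>

class SelectionScene : public QGraphicsScene
{
protected:
    void mousePressEvent(QGraphicsSceneMouseEvent *event) override
    {
        m_pressPos = event->scenePos(); // remember where the drag started
        QGraphicsScene::mousePressEvent(event);
    }

    void mouseReleaseEvent(QGraphicsSceneMouseEvent *event) override
    {
        // normalized() handles drags that go right-to-left or bottom-to-top
        QRectF selectedArea = QRectF(m_pressPos, event->scenePos()).normalized();
        // ... react to selectedArea here ...
        QGraphicsScene::mouseReleaseEvent(event);
    }

private:
    QPointF m_pressPos;
};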
I'm working on a Qt5 application to attempt to make use of raw input data from a touchscreen because touch is not fully supported by my Linux kernel (2.6.32). I wrote a parser for the raw data coming from /dev/input/eventX because even though X doesn't support touch, the events are still there.
I'm able to read the events just fine, and wrote a wrapper called "TouchPoint" which contains the ID and (x,y) coordinates of each touch point, as well as a boolean indicating whether or not the touch point is currently "active" (i.e., the user currently has that finger on the screen). It's worth noting that I also output the number of touch points currently on the screen, and that value is accurate.
My issue is that I can't seem to figure out how to accurately simulate a mouse click with it. With multi-touch events in Linux, each touch point is assigned a "tracking ID" when the user presses a finger to the screen, and when that point is lifted, an event setting that slot's tracking ID to -1 is generated. I use this change of ID from some value to -1 to indicate that a touch point is "down" or not:
void TouchPoint::setID(int nid) {
    if((id == -1) && (nid >= 0)) down = true;
    else if(nid == -1) down = false;
    id = nid;
}
So to try and simulate a mouse click, I do this when a tracking ID event is read:
else if(event.code == ABS_MT_TRACKING_ID) {
    // cSlot is the current slot the events are referring to
    bool before = touchPoints[cSlot].isDown();
    touchPoints[cSlot].setID(event.value);
    bool after = touchPoints[cSlot].isDown();
    if(!before && after) touch(touchPoints[cSlot]);
}
And then the touch method does this:
void MainWindow::touch(TouchPoint tp) {
    if(ui->touchMe->rect().contains(QPoint(tp.getX(), tp.getY()))) {
        on_touchMe_clicked();
    }
}
The button does not respond when I touch it directly, but sometimes, if I wildly flail my fingers around the screen, the message box that should appear when it's pressed does show up; when it does, my fingers are always somewhere else on the screen.
Can anyone think of something I might be doing wrong here? Is my method of checking to see if the touch point is within the button's bounds wrong? Or is my logic for keeping track of the "down" status of the touch points wrong? Or something else entirely?
UPDATE: I just noticed two things by outputting the screen coordinates of both the touch location and the button location.
The digitizer resolution is larger than the screen resolution, so I needed to scale the X and Y coordinates coming from the raw events (sketched below).
My coordinates are offset because they are screen coordinates.
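For reference, the scaling in the first point is just a linear mapping. A sketch with hypothetical ranges, where rawX/rawY stand for the values read from the ABS_MT_POSITION_X/Y events; a real program would query the actual ranges from the device (e.g. via the EVIOCGABS ioctl) and the display:

// hypothetical digitizer range and screen size; query the real values
const double digitizerMaxX = 4096.0, digitizerMaxY = 4096.0;
const double screenW = 1280.0, screenH = 800.0;

int screenX = int(rawX * screenW / digitizerMaxX);
int screenY = int(rawY * screenH / digitizerMaxY);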
UPDATE 2: I've now dealt with the two issues mentioned in my last update, but I'm still not able to figure out how to accurately simulate a touch/mouse event to interact with the interface. I also modified my TouchPoint class to have a boolean flag indicating whether or not the touch point was just updated; it is set to true when the tracking ID is updated and reset when an EV_SYN event is raised. This way I can collect the X and Y position events before creating the event for the application. I tried the following:
Using the QApplication class to post a mouse event to the QApplication::desktop()->screen() widget that has the position of the touch point.
Using the QTest class to raise a touchEvent to the QApplication::desktop()->screen() widget using the press and release methods for the given slot and position of the touch point and then using the bool event(QEvent*) method to try to catch the event. I also enabled the WA_AcceptTouchEvents attribute on the main window.
Neither of these works; for some reason, when I have the bool event(QEvent*) method in place, the signal emitted from the thread reading /dev/input/eventX doesn't trigger the slot in the main window class, and I can't seem to find any other way to simulate the events. Anyone have any ideas?
You wrote that you sometimes get your message box "clicked" when your fingers are in a "wrong position", so it sounds like you are mixing global screen coordinates and widget-local coordinates. In your MainWindow::touch, you seem to check whether a point in global screen coordinates, e.g. (530, 815), is inside a widget's geometry in its local coordinates. QWidget::rect() returns the internal geometry of the button (widget), i.e. a rectangle of the widget's width and height, e.g. (0, 0, 60, 100).
You have to move the rect to the right position in global screen coordinates. QWidget::pos() returns the widget's position relative to its parent; you can use QWidget::mapToGlobal to translate a widget-local position to global screen coordinates. Then your rect becomes something like (500, 850, 60, 100), and you should get a hit and get your slot called.
However, a better approach is to use QApplication::widgetAt to get the widget at a specific screen position and generate a mouse click for it.
You can generate a mouse click for a widget by posting a mouse press and a mouse release to it, something like below:
QPoint screenPos = QPoint(tp.getX(), tp.getY());
QWidget *targetWidget = QApplication::widgetAt(screenPos);
if (targetWidget != NULL) // widgetAt returns null if no widget is at that position
{
    QPoint localPos = targetWidget->mapFromGlobal(screenPos);
    QMouseEvent *eventPress = new QMouseEvent(QEvent::MouseButtonPress, localPos, screenPos, Qt::LeftButton, Qt::LeftButton, Qt::NoModifier);
    QApplication::postEvent(targetWidget, eventPress); // postEvent takes ownership of the event
    // for a release event, the buttons state excludes the button being released
    QMouseEvent *eventRelease = new QMouseEvent(QEvent::MouseButtonRelease, localPos, screenPos, Qt::LeftButton, Qt::NoButton, Qt::NoModifier);
    QApplication::postEvent(targetWidget, eventRelease);
}
I'm making a diagram (flowchart) program, and for days I've been stuck on this issue:
I have a custom QGraphicsScene that expands horizontally whenever I place an item in its rightmost area. The problem is that my custom arrows (they inherit QGraphicsPathItem) disappear from the scene whenever their boundingRect() center is scrolled out of the view. Every time the scene expands, both its sceneRect() and the view's sceneRect() are updated as well.
I've tried:
- setting ui->graphicsView->setViewportUpdateMode(QGraphicsView::FullViewportUpdate)
- setting the item flags QGraphicsItem::ItemIgnoresTransformations and QGraphicsItem::ItemSendsGeometryChanges, plus setActive(true) on the item
- calling update(sceneRect()) every time I add an arrow to the scene
Still, every time I scroll the view, as soon as the arrow's boundingRect() center moves out of the view, the whole arrow disappears. If I scroll back so that the boundingRect() center re-enters the view, the whole arrow appears again.
Can someone give me a tip on what I might be missing? I've been using Qt's example project diagramscene as a reference, so a lot of my code is similar (the "press item toolButton -> click on the scene" flow to insert items, the way the arrows are placed to connect the objects, ...).
In the meantime, I'll try to make a minimal running example that shows the issue.
Your Arrow object inherits from QGraphicsPathItem, which I expect also implements the QGraphicsItem::shape function.
Override the shape function in your Arrow class to return the shape of the item. This, along with the boundingRect, is used for collision detection and for detecting whether an item is on-screen.
In addition, before changing the shape of an item by changing its boundingRect, you need to call prepareGeometryChange.
As the docs state:
Prepares the item for a geometry change. Call this function before changing the bounding rect of an item to keep QGraphicsScene's index up to date.
So, in the Arrow class, store a QRectF called m_boundingRect, and in the constructor:
prepareGeometryChange();
m_boundingRect = QRectF(-x, -y, x*2, y*2);
Then return m_boundingRect in the boundingRect() function.
If this is still an issue, I expect it's something in QGraphicsPathItem that's causing the problem; in that case, you can simply inherit from QGraphicsItem and store a QPainterPath, which you draw in the item's paint function and also return from shape().
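A minimal sketch of that fallback, assuming an Arrow that inherits QGraphicsItem directly; the names are illustrative, not from the original code:

#include <QGraphicsItem>
#include <QPainter>
#include <QPainterPath>

class Arrow : public QGraphicsItem
{
public:
    void setPath(const QPainterPath &path)
    {
        prepareGeometryChange();              // keep the scene index up to date
        m_path = path;
        m_boundingRect = path.boundingRect();
    }

    QRectF boundingRect() const override { return m_boundingRect; }

    // the shape is what the scene uses for collision and visibility checks
    QPainterPath shape() const override { return m_path; }

    void paint(QPainter *painter, const QStyleOptionGraphicsItem *, QWidget *) override
    {
        painter->drawPath(m_path);
    }

private:
    QPainterPath m_path;
    QRectF m_boundingRect;
};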
You are making your life too complicated. Do not subclass QGraphicsPathItem; just use it directly and update its path value every time the position of the anchors (from, to) changes.
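A sketch of that suggestion; fromPos and toPos stand for the anchor positions and are illustrative:

// create a plain QGraphicsPathItem once
QGraphicsPathItem *arrow = scene->addPath(QPainterPath());

// whenever an anchor moves, rebuild the path; the item derives its own
// boundingRect() and shape() from the new path
QPainterPath p;
p.moveTo(fromPos);
p.lineTo(toPos);
arrow->setPath(p);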
I am failing to restore a particular view (visible area) in a subclassed QGraphicsView on a subclassed QGraphicsScene.
The visible rectangle of the scene viewed should be
mapToScene( rect() ).boundingRect()
which is
QRectF(27.8261,26.9565 673.043x507.826)
I obtain these values in the destructor of the QGraphicsView; they seem valid.
After restoring the window geometry, I use these values in the first QGraphicsView::resizeEvent (the first event has an invalid old size, QSize(-1, -1)) in an attempt to restore the area shown in the view using
fitInView( QRectF(....) , Qt::IgnoreAspectRatio);
This triggers a number of scrollContentsBy calls before the view is shown, plus another resize and scroll event afterwards; then the main window's showEvent fires, causing more resize, scroll, and resize events.
I am sure this sequence is necessary for the GUI layout to build, but I am now confused as to how I could run fitInView once everything is set up.
Using a QTimer::singleShot to run the function well after the GUI is shown, I get approximately the desired result, but the matrix and the viewed area differ:
ui->graphicsView->matrix();
ui->graphicsView->mapToScene( ui->graphicsView->rect()).boundingRect();
Before:
"[m11=1 m12=0 m21=0 m22=1 dx=0 dy=0] (x=27 y=27 w=774 h=508)"
Restored:
"[m11=0.954724 m12=0 m21=0 m22=0.954724 dx=0 dy=0] (x=17.8062 y=24.0907 w=810.705 h=532.091)"
So fitInView() does not serve my purpose very well - is there another, reliable way?
I'm using Qt 4.8.1 with MSVC2010
Another approach is to save the transform and the viewed area like so:
settings->setValue("view", mapToScene( rect() ).boundingRect() );
settings->setValue("transform", transform() );
settings->sync();
and then restore them:
QTransform trans = settings->value("transform", QTransform()).value<QTransform>();
QRectF view = settings->value( "view", QRectF() ).value<QRectF>();
setTransform( trans );
centerOn( view.center() );
But this method, too, has an offset.
Before
"[m11=4.96282 m12=0 m21=0 m22=4.96282 dx=0 dy=0] (x=29.6203 y=29.4188 w=155.96 h=104.981)"
After
"[m11=4.96282 m12=0 m21=0 m22=4.96282 dx=0 dy=0] (x=54.8076 y=53.8001 w=155.96 h=104.981)"
An offset is also present when the scroll bars are hidden.
Moving the code to showEvent() does not affect the result.
What finally works: in showEvent(), restore the transform, then restore the scroll bar values:
setTransform( trans );
verticalScrollBar()->setValue( v );
horizontalScrollBar()->setValue( h );
Scroll bar visibility does not affect the view position and size; they are just "overlays".
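For completeness, a sketch of the matching save/restore pair under that scheme; the QSettings key names are illustrative:

// saving (e.g. in the destructor); key names are illustrative
settings->setValue("transform", transform());
settings->setValue("hscroll", horizontalScrollBar()->value());
settings->setValue("vscroll", verticalScrollBar()->value());

// restoring in showEvent()
setTransform(settings->value("transform", QTransform()).value<QTransform>());
horizontalScrollBar()->setValue(settings->value("hscroll", 0).toInt());
verticalScrollBar()->setValue(settings->value("vscroll", 0).toInt());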
Use case: This should be a fairly common problem. In a normal QMainWindow with QMdiArea lives an mdiChild with a QGraphicsView. This view displays a QGraphicsScene with QGraphicsItems inside. A right-click at one of these items selects (focusses) the item and opens a context menu, which is conveniently placed at the screen coordinates QGraphicsSceneMouseEvent::screenPos(). This is working as expected.
Now I'd like to show the same context menu when the user presses a key (e.g. Qt::Key_Menu). I know the selected (focussed) item, I know the view which displays the scene.
So my question is:
What is the correct way to get the position (in global, screen coordinates) of the visible representation of a QGraphicsItem within a scene?
Here is what's not working:
QGraphicsItem *item = ...; // is the currently selected item next to which the context menu shall be opened
QGraphicsScene *scene = ...; // is the scene that hosts the item
QGraphicsView *graphicsView = ...; // is the view displaying the scene, this inherits from QWidget
// get the position relative to the scene
QPointF sp = item->scenePos();
// or use
QPointF sp = item->mapToScene(item->pos());
// find the global (screen) position of the item
QPoint global = graphicsView->mapToGlobal(graphicsView->mapFromScene(sp));
// now
myContextMenu.exec(global);
// should open the context menu at the top left corner of the QGraphicsItem,
// but it opens somewhere else entirely
The doc says:
If you want to know where in the viewport an item is located, you can call QGraphicsItem::mapToScene() on the item, then QGraphicsView::mapFromScene() on the view.
Which is exactly what I'm doing, right?
Just stumbled upon a thread in a German forum that hints at:
QGraphicsView *view = item->scene()->views().last();
or even nicer:
QGraphicsView *view;
foreach (view, this->scene()->views())
{
    if (view->underMouse() || view->hasFocus()) break;
}
// (use case in the forum thread:) // QMenu *menu = new QMenu(view);
Using that might allow a more generalized answer to my question...
I found a working solution.
The QGraphicsItem must be visible on the screen.
(Probably, if it's not visible because the view shows some other part of the scene, one could constrain the point to the view's viewport rect.)
// get the screen position of a QGraphicsItem
// assumption: the scene is displayed in only one view, or the first view
// is the one to determine the screen position for
QPoint sendMenuEventPos; // in this case: find the screen position to display a context menu at
if(scene() != NULL                                  // there is a scene...
   && scene()->focusItem() != NULL                  // ... with a focus item...
   && !scene()->views().isEmpty()                   // ... displayed in a view...
   && scene()->views().first() != NULL              // ... which is not null...
   && scene()->views().first()->viewport() != NULL) // ... and has a viewport
{
    QGraphicsItem *pFocusItem = scene()->focusItem();
    QGraphicsView *v = scene()->views().first();
    QPointF sceneP = pFocusItem->mapToScene(pFocusItem->boundingRect().bottomRight());
    QPoint viewP = v->mapFromScene(sceneP);
    sendMenuEventPos = v->viewport()->mapToGlobal(viewP);
}
if(sendMenuEventPos != QPoint())
{
    // display the menu:
    QMenu m;
    m.exec(sendMenuEventPos);
}
It is important to use the view's viewport for mapping the view coordinates to global coordinates.
The detection of the context menu key (Qt::Key_Menu) happens in the keyPressEvent() of the "main" QGraphicsItem (due to the structure of my program).
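A sketch of that key handling; MyItem and the showContextMenu() helper are illustrative stand-ins for the "main" item and the menu code above:

void MyItem::keyPressEvent(QKeyEvent *event)
{
    if (event->key() == Qt::Key_Menu) {
        showContextMenu(); // hypothetical helper running the menu code above
        return;
    }
    QGraphicsItem::keyPressEvent(event);
}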
The code seems to be correct, but there might be a problem with the creation of the context menu.
Have you set the parent of the QMenu to the MainWindow (or something of that sort in your application)?
I think that might be the problem in your case.
Good luck!
Just a stab in the dark, but have a look at this: http://www.qtcentre.org/threads/36992-Keyboard-shortcut-event-not-received.
Looking through the Qt documentation, it seems the use of QGraphicsView may cause some exceptional behaviour with regard to shortcuts.
It looks as if there might be a normative way of achieving the result you desire.
Depending on how you are implementing your context menu, shortcuts, and QGraphicsView, you might need to set the Qt::ContextMenuPolicy for the QGraphicsView appropriately, then build and invoke the menu differently.
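For illustration, a sketch of the policy-based variant, assuming Qt 5 connect syntax; the menu entry is a placeholder:

view->setContextMenuPolicy(Qt::CustomContextMenu);
QObject::connect(view, &QWidget::customContextMenuRequested,
                 [view](const QPoint &pos) {
    QMenu menu;
    menu.addAction("Placeholder action"); // illustrative entry
    menu.exec(view->mapToGlobal(pos));    // pos arrives in widget coordinates
});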
I'm quite interested in this question as I will need to do something quite similar shortly!
I have a QPushButton with an image that has two areas that I want to handle differently when clicked. Due to the positioning on the image, I cannot really use separate buttons for the two images.
What I'd like to do is, in the slot where I handle the click, check the coordinates of the click to determine which area was clicked.
Is there a way to access this?
This is what first comes to mind. It should work, although there may be a simpler way:
1. Create your own class that derives from QPushButton.
2. Override the necessary mouse events (mousePressEvent and mouseReleaseEvent) and, before calling the base class implementation, store the click position in your object using setProperty. (The position is available from the event parameter.)
3. In the slot handling the click, use sender() to get the button, and read the position back using property().
If you don't need to treat your object as the base class (QPushButton*), you could instead define a new signal that includes the mouse position and connect that to your slot. Then you wouldn't need the property at all.
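A sketch of that signal-based variant; the class and signal names are illustrative:

#include <QPushButton>
#include <QMouseEvent>

class ClickPosButton : public QPushButton
{
    Q_OBJECT
public:
    using QPushButton::QPushButton; // inherit the QPushButton constructors

signals:
    void clickedAt(const QPoint &pos); // position in button-local coordinates

protected:
    void mouseReleaseEvent(QMouseEvent *event) override
    {
        if (rect().contains(event->pos()))
            emit clickedAt(event->pos());
        QPushButton::mouseReleaseEvent(event); // still emits clicked() as usual
    }
};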
You can get the current mouse position using QCursor::pos() which returns the position of the cursor (hot spot) in global screen coordinates.
Now, screen coordinates are not easy to use, and probably not what you want. Luckily, there is a way to transform screen coordinates into coordinates relative to a widget.
QPoint _Position = _Button->mapFromGlobal(QCursor::pos());
This should tell you where on the button the mouse was when the user clicked. And you can take it from there.
Building on @Liz's simple mechanism, here's what I did; this is in a slot method that is called on pressed(), but it generalizes to other situations. Note that m_buttonPlan->geometry() is in the parent widget's coordinates, which in my layout line up with the global coordinates from QCursor::pos(), so I didn't need mapFromGlobal.
void MainWindow::handlePlanButtonPress()
{
    int clickX = QCursor::pos().x();
    int middle = m_buttonPlan->geometry().center().x();
    if ( clickX < middle ) {
        // left half of button was pressed
        m_buttonPlan->setStyleSheet(sStyleLargeBlueLeft);
    } else {
        // right half of button was pressed
        m_buttonPlan->setStyleSheet(sStyleLargeBlueRight);
    }
}