What is the structure of VML mouse event object? - raphael

I would like to know the structure (properties and methods) of a VML mouse event object.
I'm using Raphael, and I want to know the mouse coordinates when an element is hovered. On Firefox and Chrome, event.pageX and event.pageY are working, but not in IE8.
var paper = Raphael(document.getElementById('map'), 300, 300);
paper.circle(50, 50, 40).attr({fill: 'black'}).mouseover(function (event) {
    alert(event.pageX);
});
Here is the JSFiddle.

In fact, this is not specific to VML: all mouse events have the same structure in IE.
I used clientX and clientY, with a correction for the fact that clientX is an offset relative to the viewport, whereas pageX is relative to the page.
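A minimal sketch of that correction, using the usual fallback of adding the document's scroll offset for older IE (the pageCoords helper is illustrative, not part of Raphael):

function pageCoords(event) {
    // Older IE lacks pageX/pageY; reconstruct them from clientX/clientY
    // plus the document scroll offset.
    var e = event || window.event,
        doc = document.documentElement,
        body = document.body;
    return {
        x: e.pageX !== undefined ? e.pageX
            : e.clientX + (doc.scrollLeft || body.scrollLeft || 0) - (doc.clientLeft || 0),
        y: e.pageY !== undefined ? e.pageY
            : e.clientY + (doc.scrollTop || body.scrollTop || 0) - (doc.clientTop || 0)
    };
}

paper.circle(50, 50, 40).attr({fill: 'black'}).mouseover(function (event) {
    var p = pageCoords(event);
    alert(p.x + ', ' + p.y);
});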

Related

QGraphicsScene mouseMoveEvent does not work until QGraphicsView wheelEvent

I have a strange issue whose cause I have been unable to determine. Basically, I created a 2D view with pan and zoom functionality and a scene with items that can be moved with grid snapping. To move an item in the scene, I extended Scene::mousePressEvent to get a pointer to the item and Scene::mouseMoveEvent to keep the item tracked on the cursor. To drop the item, I used Scene::mousePressEvent again. To pan, I extended View::mousePressEvent, View::mouseReleaseEvent, and View::mouseMoveEvent, and to zoom I extended View::wheelEvent.
Now for the symptoms:
I start the application with an item in the Scene. If I click and hold, then move the mouse, the item moves as intended. As soon as I release the mouse button, the item stops moving. I can click to drop, and the item is placed according to the drop code in Scene::mousePressEvent. If I try again, the item still only moves while the mouse button is pressed.
Then comes the strange part: If I use the mouse wheel to zoom the View, everything performs as expected after that event. The mouse is clicked to select an item, it moves as I move the mouse and drops when I click again.
So the obvious solution:
wheelEvent(new QWheelEvent(QPointF(0,0),0,Qt::NoButton,Qt::NoModifier));
called at the creation of the View, and everything works fine. It calls the extended View::wheelEvent with no change to the view and before the scene is even created, but afterwards the program behaves as expected.
So I'm here to see if any of the excellent Qt experts out there can explain this strange behavior. Any comments or direction are appreciated.
In case it helps, here is the View::wheelEvent override code. tform is a QTransform with which I maintain zoom. Also, I've tried with and without the call to the base method but there is no change in behavior.
void SchematicView::wheelEvent(QWheelEvent* event)
{
    // Scale the view / do the zoom
    double scaleFactor = 1.1;
    if (event->delta() > 0 && tform.m11() < max_zoom) {
        tform.scale(scaleFactor, scaleFactor);              // zoom in
    } else if (event->delta() < 0 && tform.m11() > min_zoom) {
        tform.scale(1.0 / scaleFactor, 1.0 / scaleFactor);  // zoom out
    }
    setTransformationAnchor(QGraphicsView::AnchorUnderMouse);
    setTransform(tform);
    QGraphicsView::wheelEvent(event);
}
Without a SSCCE to look at and test with, it's hard to say for sure, but what you're describing sounds a lot like your mouseMoveEvent() callback is only getting called when the mouse button is being held down during the move. That, in turn, sounds a lot like the expected behavior for mouseMoveEvent(), as documented in QWidget::mouseMoveEvent():
If mouse tracking is switched off, mouse move events only occur if a mouse button is pressed while the mouse is being moved. If mouse tracking is switched on, mouse move events occur even if no mouse button is pressed.
If that is indeed the problem, then a call to setMouseTracking(true) may get you the behavior you are looking for.
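A minimal sketch of where that call could go, reusing the question's SchematicView (note that exactly which widget needs tracking can vary; QGraphicsView is a scroll area whose mouse events arrive through its viewport, so enabling both is a safe bet):

SchematicView::SchematicView(QWidget* parent)
    : QGraphicsView(parent)
{
    // Deliver mouseMoveEvent() even when no mouse button is held down.
    setMouseTracking(true);
    // Mouse events for a QGraphicsView are routed through its viewport
    // widget, so enable tracking there as well.
    viewport()->setMouseTracking(true);
}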
On a broader level, note that there are easier ways of obtaining the behavior you are trying to implement -- for example, to allow the user to drag and drop items in a QGraphicsScene, all you really have to do is call setFlags(QGraphicsItem::ItemIsMovable) on any QGraphicsItems that you want the user to be able to drag around. No hand-coding of event handlers is necessary unless you are trying to obtain some non-standard behaviors.
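For instance, a minimal self-contained sketch of the flag-based approach:

#include <QtGui>

int main(int argc, char* argv[])
{
    QApplication app(argc, argv);
    QGraphicsScene scene;
    // Any item with ItemIsMovable set can be dragged with the mouse;
    // no custom event handlers are required.
    QGraphicsRectItem* item = scene.addRect(0, 0, 50, 50);
    item->setFlags(QGraphicsItem::ItemIsMovable);
    QGraphicsView view(&scene);
    view.show();
    return app.exec();
}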

Setting exact view area (rect) of QGraphicsScene in QGraphicsView - a better fitInView()?

I am failing to restore a particular view (visible area) in a subclassed QGraphicsView on a subclassed QGraphicsScene.
The visible rectangle of the scene viewed should be
mapToScene( rect() ).boundingRect()
which is
QRectF(27.8261,26.9565 673.043x507.826)
I obtain these values in the destructor of QGraphicsView, they seem valid.
After restoring the window geometry, I am using said values in the first QGraphicsView::resizeEvent (the first event has an invalid old size QSize(-1, -1)) in the attempt to restore the area shown in view using
fitInView( QRectF(....) , Qt::IgnoreAspectRatio);
This triggers a number of scrollContentsBy events before the view is shown, another resize and scroll event after, and then the main window's showEvent fires, causing more resize and scroll events.
I am sure this sequence is necessary for the GUI layout to build, but I am now confused as to how I can run fitInView once everything is set up.
Using a QTimer::singleShot to run the function well after the GUI is shown, I get approximately the desired result, but the matrix and viewed area differ:
ui->graphicsView->matrix();
ui->graphicsView->mapToScene( ui->graphicsView->rect()).boundingRect();
Before:
"[m11=1 m12=0 m21=0 m22=1 dx=0 dy=0] (x=27 y=27 w=774 h=508)"
Restored:
"[m11=0.954724 m12=0 m21=0 m22=0.954724 dx=0 dy=0] (x=17.8062 y=24.0907 w=810.705 h=532.091)"
So fitInView() does not serve my purpose very well. Is there another, reliable way?
I'm using Qt 4.8.1 with MSVC2010.
Another approach is to save the transform and the scroll position like so:
settings->setValue("view", mapToScene( rect() ).boundingRect() );
settings->setValue("transform", transform() );
settings->sync();
and restore them
QTransform trans = settings->value("transform", QTransform()).value<QTransform>();
QRectF view = settings->value( "view", QRectF() ).value<QRectF>();
setTransform( trans );
centerOn( view.center() );
But this method, too, has an offset.
Before
"[m11=4.96282 m12=0 m21=0 m22=4.96282 dx=0 dy=0] (x=29.6203 y=29.4188 w=155.96 h=104.981)"
After
"[m11=4.96282 m12=0 m21=0 m22=4.96282 dx=0 dy=0] (x=54.8076 y=53.8001 w=155.96 h=104.981)"
An offset is also present when scroll bars are hidden.
Moving the code to showEvent() does not affect result.
What finally works: in showEvent(), restore the transform, then restore the scroll-bar values:
setTransform( trans );                       // trans: the QTransform saved earlier
verticalScrollBar()->setValue( v );          // v, h: the scroll-bar values saved earlier
horizontalScrollBar()->setValue( h );
Scroll-bar visibility does not affect the view position and size; the scroll bars are just "overlays".
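Put together, a minimal sketch of this save/restore approach (MyView and the settings keys are illustrative assumptions, not from the question):

#include <QtGui>

// MyView is a hypothetical QGraphicsView subclass used for illustration.
class MyView : public QGraphicsView
{
public:
    explicit MyView(QGraphicsScene* scene) : QGraphicsView(scene) {}

    void saveViewState(QSettings& settings) const
    {
        settings.setValue("transform", transform());
        settings.setValue("vscroll", verticalScrollBar()->value());
        settings.setValue("hscroll", horizontalScrollBar()->value());
    }

protected:
    void showEvent(QShowEvent* event)
    {
        QGraphicsView::showEvent(event);
        QSettings settings;
        // Restore zoom/rotation first, then pan via the scroll bars;
        // scroll-bar visibility does not affect the restored area.
        setTransform(settings.value("transform", QTransform()).value<QTransform>());
        verticalScrollBar()->setValue(settings.value("vscroll", 0).toInt());
        horizontalScrollBar()->setValue(settings.value("hscroll", 0).toInt());
    }
};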

How to allow to type text in raphael object, say rect?

I have created a Raphael rect like below:
var rect1 = paper.rect(100, 100, 100, 100);
Now I want a cursor to appear when I click on the rect, so that the user can type in some text.
I am very, very new to JS and Raphael.
That's not a natural use of Raphael. Think of it primarily as a drawing library. If you look through the SVG specifications or any of the demos on the RaphaelJS page, you'll get the idea.
But Raphael integrates naturally with native JavaScript or jQuery. I would place a borderless textarea on top of your rectangle and activate (and focus) it when the user clicks on that space, like so:
var paper = Raphael("canvas", 300, 300),
rect1 = paper.rect(100,100,100,100).attr({fill: "#FFF"});
rect1.click(function(e) {
$('#text').show();
$('#text').focus();
});
http://jsfiddle.net/NtKKZ/
(Note that you need to fill the rectangle with white in order for the click event to fire.)

Relative coordinates with slider widget containing thumb widget

My slider has a child button widget. I recently changed my GUI to send mouse coordinates relative to the widget, so if the widget is at 50,50 and the mouse is at 50,50, the position is now reported as 0,0.
This has created some problems for my slider. Previously, when I dragged the button around, the slider would update its value from the position.
The only solution I have thought of is to take the given mouse coordinate, add back the absolute position of the button, then subtract the absolute position of the slider.
I was, however, hoping for a solution that did not involve absolute positioning.
I used to receive the absolute mouse position, when I did, I positioned it like this:
int mousePos = getOrientation() == AGUI_HORIZONTAL
    ? mouseArgs.getPosition().getX() - getAbsolutePosition().getX()
    : mouseArgs.getPosition().getY() - getAbsolutePosition().getY();
setValue(positionToValue(mousePos));
mouseArgs gives the mouse position relative to the button. Not relative to the slider, which is what would be needed.
I can obtain the relative locations of the widgets, but I don't think that would do it.
Thanks
Since this is a custom GUI, I'm going to make some assumptions and restate your question to make sure I'm answering your question.
You have a slider widget that contains a thumb widget. The user can click and drag the thumb to reposition it. Your GUI sends mouse notifications to your widgets in local relative space rather than absolute screen space. Your question is how can the thumb widget respond to the mouse events to move itself around without having to use absolute coordinates?
You can do this by changing whose responsibility it is to move the thumb widget. It should be the job of the slider widget to position its thumb widget. By doing it that way, all your coordinates can be in the slider widget's local relative space. Basically it'd be something like this (assuming you have some kind of event notification):
1. When created, the slider widget registers for the various mouse events on its child thumb widget.
2. When the thumb receives mouse events, it raises the event, passing along its local coordinate.
3. The slider widget receives these events and translates the coordinate from thumb-local space to slider-local space (i.e., click_x = thumb_x + thumb_mouse_x).
4. The slider can then use this coordinate, which is in the slider's local relative space, to move the thumb.
In general, parents should be responsible for their children's layout.
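A minimal sketch of that translation, with hypothetical types modeled loosely on the question's API (Slider, Thumb, and onThumbMouseMove are assumptions, not a real library):

#include <algorithm>

struct Point { int x; int y; };

struct Thumb {
    Point position;   // thumb's top-left corner, in slider-local space
};

class Slider {
public:
    // Invoked when the thumb forwards a mouse move; mouseInThumb is
    // relative to the thumb widget, not the slider.
    void onThumbMouseMove(const Point& mouseInThumb) {
        // Translate thumb-local -> slider-local: click_x = thumb_x + thumb_mouse_x.
        int posInSlider = thumb.position.x + mouseInThumb.x;
        setValue(positionToValue(posInSlider));
    }

    int positionToValue(int pos) const {
        pos = std::max(0, std::min(pos, length));
        return minValue + pos * (maxValue - minValue) / length;
    }

    void setValue(int v) {
        value = v;
        // Reposition the thumb, again purely in slider-local space.
        thumb.position.x = (v - minValue) * length / (maxValue - minValue);
    }

private:
    Thumb thumb;
    int value = 0;
    int minValue = 0, maxValue = 100;
    int length = 200;   // usable track length in pixels
};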

Finding current mouse position in Qt

This is my first attempt at writing a Qt app, and I'm just trying to get a feel for how it works. My goal is to have a 400x400 widget which knows the exact position of the mouse when the mouse is hovering over it. For example, if the mouse was hovering in the top left corner, the position might be 10,10 (or something similar). If the mouse is in the bottom right corner, it might say 390,390.
Eventually, these coordinates will be displayed in a label on the main window, but that should be trivial. I'm stuck at the actual fetching of the coordinates. Any ideas?
For your widget, you must enable mouse tracking.
Then, you can either install an event filter that watches for mouse move events, or you can inherit from QWidget and override the mouse move event handler.
http://doc.qt.io/qt-4.8/qwidget.html#mouseTracking-prop
http://doc.qt.io/qt-4.8/eventsandfilters.html
http://doc.qt.io/qt-4.8/qmouseevent.html
If you are ever in a situation when you don't need actual tracking, just position at the moment, you can use QCursor::pos().
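A minimal sketch of the subclassing approach under Qt 4 (TrackingWidget is a hypothetical name):

#include <QtGui>
#include <QDebug>

class TrackingWidget : public QWidget
{
public:
    explicit TrackingWidget(QWidget* parent = 0) : QWidget(parent)
    {
        setFixedSize(400, 400);
        // Deliver mouseMoveEvent() even when no button is pressed.
        setMouseTracking(true);
    }

protected:
    void mouseMoveEvent(QMouseEvent* event)
    {
        // event->pos() is already widget-local: near (10, 10) at the
        // top-left corner, near (390, 390) at the bottom-right.
        qDebug() << "mouse at" << event->pos();
    }
};

int main(int argc, char* argv[])
{
    QApplication app(argc, argv);
    TrackingWidget w;
    w.show();
    return app.exec();
}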