I know that it is possible to use affine transformations with Qt. But is it also possible to set a complete custom global transformation method?
Use case: drawing projected geographic points (lat, long) and retrieving mouse events etc. in geographic coordinates (in the context of a QGraphicsScene/View).
At the moment I use it like this (slightly pseudo-code):
MyPoint pt = myProjection(geographicPoint);
QPoint p(pt.x, pt.y);
// or, to make it shorter, but essentially it's the same
QPoint p = myProjection(geoPoint);
geoPoint = myBackProjection(mouseEvent.getPoint());
And I'd like to "register" my transformation methods somewhere so that the QGraphicsView (or whichever component is responsible) internally applies them before it draws anything on the screen.
Or is this a bad idea (because it would introduce problems where I don't expect them, e.g. in calculating distances), or simply not possible?
QGraphicsView uses a QTransform, which is a 3x3 matrix, so you are limited to affine (and simple projective) transformations; an arbitrary map projection cannot be expressed that way. In Qt5/QtQuick or with QtQuick3D you may be able to achieve this with a QML ShaderProgram.
In "pure" Qt4 you would have to derive your own custom class from QGraphicsView, QAbstractScrollArea, or QGraphicsScene and override the paint methods.
A good approach would be to encapsulate this transform in your QGraphicsItem's shape() method. Suppose you have a point defined as a pair of latitude and longitude: the item would store lat/lon pairs, but its shape() method would return projected values (usually in pixel equivalents).
If you feel like it, you could even subclass QPoint to make it geographically aware, constructing it from lat/lon but exposing projected values through its x and y properties.
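To make the idea concrete, here is a minimal sketch of such a geographically aware point (plain C++, not Qt API; the equirectangular projection and the pixelsPerDegree scale are illustrative assumptions, so swap in your own myProjection()):

```cpp
#include <cassert>
#include <cmath>

// Hypothetical geographically aware point: stores lat/lon, exposes
// projected pixel coordinates. The projection here is a simple
// equirectangular one, chosen only for illustration.
struct GeoPoint {
    double lat, lon;          // stored geographic coordinates
    double pixelsPerDegree;   // assumed scale factor of the view

    // x grows eastward, y grows southward (screen convention)
    double x() const { return (lon + 180.0) * pixelsPerDegree; }
    double y() const { return (90.0 - lat) * pixelsPerDegree; }

    // inverse mapping, e.g. for mouse events
    static GeoPoint fromPixels(double px, double py, double ppd) {
        return GeoPoint{90.0 - py / ppd, px / ppd - 180.0, ppd};
    }
};
```

In Qt you would return these x()/y() values from the item's shape()/boundingRect() and convert mouse positions back with fromPixels().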
Using moveto and lineto to draw various lines on a window canvas...
What is the simplest way to determine at run-time whether an object, like a bitmap or a picture control, is in "contact" (same x,y coordinates) with a line (or lines) that has been drawn with LineTo on a window canvas?
A simple example would be a ball (bitmap or picture) "contacting" a drawn border and rebounding. What is the easiest way to know if "contact" occurs between the object (picture or bitmap) and any line that exists on the window?
If I understand correctly, you want collision detection/avoidance between a circular object and one or more lines while moving. There are several options I know of...
Vector approach
You need to remember all the rendered stuff in vector form too, so you need a list of all rendered lines, objects, etc. Then, for a particular object, loop through all the other ones and check for collisions algebraically with vector math: for example, first detect intersection between bounding boxes, then test against the particular line/polyline/polygon or whatever.
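The per-pair check can be sketched like this (plain C++, a circular object against one line segment; names are illustrative):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Distance from point (px,py) to segment (ax,ay)-(bx,by); a circle of
// radius r collides with the segment when this distance is <= r.
double pointSegmentDistance(double px, double py,
                            double ax, double ay,
                            double bx, double by) {
    double dx = bx - ax, dy = by - ay;
    double lenSq = dx * dx + dy * dy;
    // degenerate segment: fall back to point-to-point distance
    double t = lenSq > 0.0 ? ((px - ax) * dx + (py - ay) * dy) / lenSq : 0.0;
    t = std::clamp(t, 0.0, 1.0);          // clamp projection onto the segment
    double cx = ax + t * dx, cy = ay + t * dy;
    return std::hypot(px - cx, py - cy);
}

bool circleHitsSegment(double px, double py, double r,
                       double ax, double ay, double bx, double by) {
    return pointSegmentDistance(px, py, ax, ay, bx, by) <= r;
}
```

A bounding-box pre-test would reject most pairs cheaply before this exact test runs.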
Raster approach
This is simpler to implement and sometimes even faster, but less accurate (only pixel precision). The idea is to clear the object's last position with the background color. Then check all the pixels that would be rendered at the new position: if nothing other than the background color is present, no collision occurs and you can render the pixels. If any non-background color is present, render the object at its original position again, as a collision has occurred.
You can also check positions between the old and the new one and place the object at the first non-colliding position, so you end up closer to the edge...
This approach needs fast pixel access, otherwise it would be too slow. The standard Canvas does not allow this without using BitBlt from GDI. Luckily, VCL's Graphics::TBitmap has a ScanLine[] property allowing direct pixel access without any performance hit if used right. See the example in your other question I answered:
bitmap rotate using direct pixel access
Accessing ScanLine[y][x] is as slow as Pixels[x][y], but you can store the pointers to each line of the bitmap once and then just use those, which is the same as accessing your own 2D array. So you really need just bitmap->Height calls of ScanLine[y] for entire-image rendering, repeated after any resize or reassignment of the bitmap...
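The pointer-caching idea looks roughly like this (plain C++ with an ordinary buffer standing in for the bitmap; in VCL you would fill the row table from bitmap->ScanLine[y] instead):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Sketch of row-pointer caching. `rows` plays the role of the cached
// ScanLine[] pointers: it is filled once per resize/assignment, after
// which pixel access is a plain 2D-array lookup.
struct PixelGrid {
    int width, height;
    std::vector<uint32_t> data;     // the "bitmap" memory
    std::vector<uint32_t*> rows;    // cached row pointers

    PixelGrid(int w, int h) : width(w), height(h), data(w * h, 0), rows(h) {
        for (int y = 0; y < h; ++y)
            rows[y] = &data[static_cast<std::size_t>(y) * w];  // one "ScanLine" call per row
    }

    uint32_t &pixel(int x, int y) { return rows[y][x]; }
};
```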
If you have a tile-based scene you can use this approach on tiles instead of pixels, something like this:
What is the best way to move an object on the screen? (but it is in asm...)
Field approach
This one is also considered a vector approach, but it does not require collision checks. Instead, each object creates a repulsive force that grows the closer you get to it, and this force is added to the Newton/D'Alembert physics driving force. With the coefficients set properly it will avoid collisions on its own. This is also used for automatic placement of items, etc. For more info see:
How to implement a constraint solver for 2-D geometry?
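A minimal sketch of the repulsion term (plain C++; the inverse-square force law and its coefficient are assumptions you would tune for your scene):

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { double x, y; };

// Repulsive force exerted on `obj` by `obstacle`: directed away from the
// obstacle, with magnitude k/d^2, so it grows as the objects get closer.
// Add this to the normal driving force in the physics integration step.
Vec2 repulsion(Vec2 obj, Vec2 obstacle, double k) {
    double dx = obj.x - obstacle.x, dy = obj.y - obstacle.y;
    double d2 = dx * dx + dy * dy;
    double d = std::sqrt(d2);
    if (d < 1e-9) return {0.0, 0.0};   // coincident points: no defined direction
    double mag = k / d2;               // assumed inverse-square law
    return {mag * dx / d, mag * dy / d};
}
```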
Hybrid approach
You can combine any of the above approaches to better suit your needs. For example, see:
Path generation for non-intersecting disc movement on a plane
I am new to VTK. I would like to know how the VTK abstract picker behaves for multiple actors with different opacity values. Consider two actors, one inside the other, where I set the opacity of the outer surface to 0.3 while keeping the opacity of the inner one at 1.0. Since the outer one is semi-transparent, I can see the inner actor in the overlap region of the two actors. When I perform picking in that region, the resulting coordinates come from the inner surface itself, and when I pick a point outside the overlap region, I get outer-surface coordinates. How can I perform the picking operation based on opacity values, such that I pick one actor at a time? Anybody please help.
vtkAbstractPicker is, as the name suggests, just an abstract class that defines an interface for picking, but nothing more. When choosing an actual picker, you basically have a choice between picking based on ray casting or "color picking" using the graphics hardware (see the linked documentation for the actual VTK classes that implement those).
Now to the actual problem, if I understood what you wrote correctly, you are facing a rather simple sorting problem. The opacity can be seen as kind of a priority - the actors with higher opacity should be picked even if they are inside others with lower opacity, right? Then all you need to do is get all the actors that are underneath your mouse cursor and then choose the one with highest opacity, or the closest one for cases when they have the same opacity.
I think the easiest way to implement this is using the vtkPropPicker (vtkProp is a parent class for actor so this is a good picker for picking actors). It is one of the "hardware" pickers, using the color picking algorithm. The basic algorithm is that each pickable object is rendered using a different color into a hidden buffer (texture). This color (which after all is a 32-bit number like any other) serves as an ID of that object: when the user clicks on a screen, you read the color of the pixel from the picking texture on the clicked coordinates and then you simply look into the map of objects under the ID that is equal to that color and you have the object. Obviously, it cannot use any transparency - the individual colors are IDs of the objects, blending them would make them impossible to identify.
However, the vtkPropPicker provides a method:
// Perform a pick from the user-provided list of vtkProps
// and not from the list of vtkProps that the render maintains.
// If something is picked, a 1 is returned, otherwise 0 is returned.
// Use the GetViewProp() method to get the instance of vtkProp that was picked.
int PickProp(double selectionX, double selectionY,
             vtkRenderer *renderer, vtkPropCollection *pickfrom);
What you can do is first call PickProp(mouseClickX, mouseClickY, renderer of your render window, pickfrom), providing only the highest-priority actors (those with the highest opacity) in the pickfrom collection. Underneath, this does a render of all the provided actors using the color-coding algorithm and tells you which actor is underneath the specified coordinates. If it picks something (the return value is 1; call GetViewProp to get the pointer to the picked actor), you keep it. If it does not (the return value is 0), you call it again, this time providing the actors with the next-lower opacity, and so on until you pick something or you have tested all the actors.
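That loop can be sketched generically like this (plain C++; the picker callback is a stand-in for a PickProp call on the corresponding vtkPropCollection, and the names are illustrative):

```cpp
#include <algorithm>
#include <cassert>
#include <functional>
#include <vector>

struct Actor { int id; double opacity; };

// Group actors by descending opacity and query the picker one group at a
// time; the first group that yields a hit wins. `pickFrom` stands in for
// PickProp: it returns the picked actor's id, or -1 for "nothing picked".
int pickByOpacity(std::vector<Actor> actors,
                  const std::function<int(const std::vector<Actor>&)> &pickFrom) {
    std::sort(actors.begin(), actors.end(),
              [](const Actor &a, const Actor &b) { return a.opacity > b.opacity; });
    std::size_t i = 0;
    while (i < actors.size()) {
        std::vector<Actor> group;
        double op = actors[i].opacity;
        while (i < actors.size() && actors[i].opacity == op)
            group.push_back(actors[i++]);    // all actors sharing this opacity
        int hit = pickFrom(group);           // one PickProp round per group
        if (hit != -1) return hit;
    }
    return -1;                               // nothing was picked at all
}
```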
You can do the same with the ray-casting pickers like vtkPicker as well: it casts a ray underneath your mouse and gives you all intersections with everything in the scene. But the vtkPicker API is optimized for finding the closest intersection, so it might be a bit of work to get all of them and sort them, and in the end I believe the solution using vtkPropPicker will be faster anyway.
If this solution is good, you might want to look at vtkHardwareSelector, which uses the same algorithm, but unlike the vtkPropPicker allows you to access the underlying picking texture many times, so that you don't need to re-render for every picking query. Depending on how your rendering pipeline is set up, this might be a more efficient solution (= if you make a lot of picking without updating the scene).
I'm very experienced with Qt's QGraphicsScene, but I'm hoping someone can clarify a detail with regard to the boundingRect and shape methods for a QGraphicsItem. As far as I can find, the documentation doesn't address this specific concern.
I have a situation where I need to calculate a shape for many complex paths with the shape including a slight buffer zone to make the paths easier for the user to click on and select. I'm using the QPainterPathStroker, and it's expensive. I'm currently attempting to delay the shape calculation until the shape method is actually called, and that's helping with the performance.
The situation now is that the bounding rectangle is calculated from the path boundaries plus any pen width, which is correct for enclosing the painted area. However, when the shape result is calculated, it is larger than the bounding rectangle because the selection buffer zone is larger than the drawing area.
Is that a problem? Is it acceptable that the boundingRect does NOT enclose the shape result area? Or do I need to recalculate the boundingRect when I recalculate the shape?
Thank you.
Doug McGrath
QGraphicsItem::shape is used for object collision detection, hit tests and knowing where mouse clicks occur.
In contrast, QGraphicsItem::boundingRect is used when drawing an object and for knowing when an object is off the screen or obscured. As the documentation states for boundingRect:
QGraphicsView uses this to determine whether the item requires redrawing.
Therefore, the boundingRect should wholly encompass the QPainterPath returned from the shape() function.
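In other words, the bounds of the enlarged selection shape have to be folded into the bounding rectangle. A trivial sketch of the invariant (plain C++ stand-ins for QRectF; in Qt you would unite the pen-adjusted path bounds with shape().controlPointRect()):

```cpp
#include <algorithm>
#include <cassert>

struct Rect { double left, top, right, bottom; };

// boundingRect must cover both the painted area and the (possibly larger)
// clickable shape, so report the union of the two.
Rect united(const Rect &a, const Rect &b) {
    return {std::min(a.left, b.left), std::min(a.top, b.top),
            std::max(a.right, b.right), std::max(a.bottom, b.bottom)};
}

Rect boundingRect(const Rect &paintedArea, const Rect &selectionShape) {
    return united(paintedArea, selectionShape);
}
```

If the shape is computed lazily, remember to call prepareGeometryChange() before the reported bounding rectangle changes.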
I'm now using QGraphicsView and QGraphicsScene to show some charts. Depending on the values of the charts (they are histograms) I change the scale. I also draw some text items (derived from QGraphicsItem) to show their values, like this.
But I don't scale the texts like the charts, which leads to a problem: since the texts are unscaled, their bounding rects are in real (unscaled) coordinates. I want to get the scale of the y axis so I can draw the texts at the right positions.
So my question is: how can I get the scale in a QGraphicsItem or in a QGraphicsScene?
Thank you in advance.
If you implement your own function for scaling, then there is no direct way to get the scale in a QGraphicsItem. You can keep static variables in your QGraphicsView (where you compute the scale) and static functions for retrieving them.
Well, there is a solution for that. In a QGraphicsScene you can get the transformation matrix like this:
QTransform matrix = views().at(0)->transform();
views() actually returns a QList of the QGraphicsViews showing the scene. After getting the matrix, to read a scale (for example the vertical one) you can do this:
qreal verticalScale = matrix.m22();
The only difference for getting the scale in a QGraphicsItem is
QTransform matrix = scene()->views().at(0)->transform();
And the rest is same.
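For reference, m11() and m22() are simply the diagonal entries of the 3x3 matrix, so they equal the horizontal and vertical scale factors as long as the view is not rotated or sheared. A plain C++ sketch (the Transform struct is a stand-in for QTransform, and labelSceneY is a hypothetical helper):

```cpp
#include <cassert>

// Minimal stand-in for the relevant part of QTransform: row-major 3x3.
struct Transform {
    double m[3][3];
    double m11() const { return m[0][0]; }  // horizontal scale
    double m22() const { return m[1][1]; }  // vertical scale
};

// Place a label at a chart value's on-screen height without scaling the
// text itself: divide the target offset by the view's vertical scale.
double labelSceneY(double valueY, const Transform &viewTransform) {
    return valueY / viewTransform.m22();
}
```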
I receive an array of coordinates (double coordinates with -infinity < x < +infinity and 0 <= y <= 10) and want to draw a polyline using those points. I want the graph to always begin on the left border of my image, and end at the right. The bottom border of my image always represents a 0 y-value, and the top border always a 10 y-value. The width and height of the image that is created are decided by the user at runtime.
I want to realize this using Qt, and QImage in combination with QPainter seem to be my primary weapons of choice. The problem I am currently trying to solve is:
How to convert my coordinates to pixels in my image?
The y-values seem to be fairly simple, since I know the minimum and maximum of the graph beforehand, but I am struggling with the x-values. My approach so far is to find the min- and max-x-value and scale each point respectively.
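Concretely, that approach is just a linear map on each axis; something like this sketch (plain C++; the rounding to the nearest pixel and the flipped y axis are my choices):

```cpp
#include <cassert>
#include <cmath>

struct Pixel { int x, y; };

// Map a data point to pixel coordinates in a width x height image.
// x is scaled from [xMin, xMax] onto [0, width-1]; y from [0, 10] onto
// [height-1, 0] (flipped, since pixel y grows downward).
Pixel toPixel(double x, double y,
              double xMin, double xMax,
              int width, int height) {
    double px = (x - xMin) / (xMax - xMin) * (width - 1);
    double py = (1.0 - y / 10.0) * (height - 1);
    return {static_cast<int>(std::lround(px)), static_cast<int>(std::lround(py))};
}
```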
Is there a more native approach?
Since one set of coordinates serves for several images with different widths and heights, I wondered whether a vector graphic (SVG) might be a more suitable approach, but so far I have only found material on working with existing SVG files in Qt, not on creating them. I would be looking for something comparable to Windows metafiles.
Is there a close match to metafiles in Qt?
QGraphicsScene may help in this case. You plot the graph with either addPolygon() or addPath(), then render the scene into a bitmap with QGraphicsScene::render().
The sceneRect will automatically grow as you add items to it, so at the end of the "plotting" you will have the final size/bounds of the graph. Create a QImage and use it as the painter's backing store to render the scene.
QGraphicsScene also allows you to manipulate the transformation matrix to fit the orientation and scale to your need.
Another alternative is to use QtOpenGL to render your 2D graph into an OpenGL context. No conversion/scaling of coordinates is required. Once you get past the OpenGL basics you can pick appropriate viewport/eye parameters to achieve any zoom/pan level.