Qt | creating a dynamic object in a customized scene - C++

Good day to all!
I would like to learn from the respected community how, using Qt Designer, I can create a simple GUI that does what is shown in the attached image. Namely: after a single LMB click, a marker is placed at the click position, indicating the beginning of the polygon, and a temporary polygon of the desired type (but of varying size) starts following the mouse cursor until the LMB is clicked again, after which the final polygon is built from the point of the first LMB click to the point of the second. The finished polygon then has to be stored somewhere in memory for further work with it.
As the polygon here I show a complex sector built from some calculated points, but as a simple example you can take a straight line; I don't think it changes the essence of the program much.
I have already looked at many examples of different implementations of various things on QGraphicsScene, but I could not figure out how to create an object in exactly the way I need. That is, I know I need a separate class describing the polygon and calculating the coordinates it consists of, but I don't quite understand how to implement the dynamics: do I delete the temporary polygon and draw a new one at the new coordinates on every mouseMoveEvent, or is there a better way?
P.S. If it is not too difficult, could someone also show how to implement this through a custom paintScene class that inherits from the standard QGraphicsScene class and replaces it? I have run into the problem that with a custom scene class, clicks on the objects added to the scene are ignored by the program: instead of, for example, dragging an object on the scene when clicking on it, the mousePressEvent of the scene rather than the object is triggered, and I do not understand what the problem is. (A minimal sketch addressing both points follows below.)
P.P.S. I apologize for my English, and thank you all for any help!
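For reference, here is a minimal sketch of one common approach (the class and member names are illustrative, and a straight line stands in for the polygon): keep a single temporary item and reshape it on every mouseMoveEvent instead of deleting and recreating it, and forward the events to the base class so that items still receive their own clicks.

#include <QGraphicsScene>
#include <QGraphicsLineItem>
#include <QGraphicsSceneMouseEvent>
#include <QList>
#include <QPen>

class PaintScene : public QGraphicsScene
{
public:
    using QGraphicsScene::QGraphicsScene;

protected:
    void mousePressEvent(QGraphicsSceneMouseEvent *event) override
    {
        if (event->button() == Qt::LeftButton) {
            if (!m_tempItem) {
                // First click: anchor the temporary item at the click position.
                m_start = event->scenePos();
                m_tempItem = addLine(QLineF(m_start, m_start), QPen(Qt::red));
            } else {
                // Second click: finalize the item and keep it for further work.
                m_tempItem->setPen(QPen(Qt::black));
                m_finished.append(m_tempItem);
                m_tempItem = nullptr;
            }
        }
        // Forward to the base class so items still get their own clicks
        // (dragging, selection, and so on keep working).
        QGraphicsScene::mousePressEvent(event);
    }

    void mouseMoveEvent(QGraphicsSceneMouseEvent *event) override
    {
        if (m_tempItem)
            m_tempItem->setLine(QLineF(m_start, event->scenePos())); // reshape, don't recreate
        QGraphicsScene::mouseMoveEvent(event);
    }

private:
    QPointF m_start;
    QGraphicsLineItem *m_tempItem = nullptr;
    QList<QGraphicsLineItem *> m_finished; // finished items, stored for later use
};

Note that you may need to enable mouse tracking on the view's viewport (view->viewport()->setMouseTracking(true)) for the scene to receive mouseMoveEvent while no button is pressed, and that forgetting to call the base-class handlers is the usual reason clicks on items appear to be swallowed by a custom scene.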

Related

VTK - Interacting with multiple objects

I am fairly new to VTK and I really enjoy using it.
Now, I want to interact with multiple objects independently. For example, if I have 5 objects in a render window, I want to move, rotate, and interact with only the one selected object, while the other 4 objects stay where they are.
At the moment the camera is doing the magic: as I rotate the selected object, the other objects move at the same time, and I don't want that to happen.
I also want to store all the objects in memory.
I intend to use C++.
This is a rough sketch of my class structure...
#include <list>
#include <vtkActor.h>
#include <vtkSmartPointer.h>

class ScreenObjects
{
    std::list<vtkSmartPointer<vtkActor>> actors; // linked list storing all the actors

public:
    ScreenObjects();     // constructor, starts with an empty actor list
    void readSTLFile();  // reads the STL file
    bool setObject();    // sets the current object, so only the selected object is interacted with
};
I am missing quite a lot of functions and detail in my class, as I don't know what else to include that would be of use. I was also thinking of joining two objects together, but again, I don't know how to incorporate that in my class; any information on that would be appreciated.
I would really appreciate any ideas. This is of great interest to me and it would really mean a lot; I mean that from the bottom of my heart.
First of all, you should read some tutorials and presentations, like this one for example:
http://www.cs.rpi.edu/~cutler/classes/visualization/F10/lectures/03_interaction.pdf
I say that because it looks like you're currently just moving the camera.
Then you should look into the VTK examples. They are very helpful for all VTK classes.
Especially for your problem have a look at:
http://www.vtk.org/Wiki/VTK/Examples/Cxx/Interaction/Picking
Basically, you have to create a vtkRenderWindowInteractor-derived class to get the mouse events (onMouseDown, onMouseUp, onMouseMove, ...), and a vtkPropPicker to shoot a ray from your mouse position into the 3D view and get the vtkActor.
In onMouseDown you can then store, inside your derived class, the vtkActor you'd like to move together with the current mouse position. When the user releases the mouse (onMouseUp), just get the new mouse position and use the difference between the onMouseDown/onMouseUp positions to modify the vtkActor's position.
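A minimal sketch of that idea, based on the linked Picking example (an interactor-style subclass is the usual way to intercept mouse events in VTK; translating the 2D drag delta into world coordinates is left out here):

#include <vtkActor.h>
#include <vtkInteractorStyleTrackballCamera.h>
#include <vtkObjectFactory.h>
#include <vtkPropPicker.h>
#include <vtkRenderWindow.h>
#include <vtkRenderWindowInteractor.h>
#include <vtkRendererCollection.h>
#include <vtkSmartPointer.h>

class MoveActorStyle : public vtkInteractorStyleTrackballCamera
{
public:
    static MoveActorStyle *New();
    vtkTypeMacro(MoveActorStyle, vtkInteractorStyleTrackballCamera);

    void OnLeftButtonDown() override
    {
        int *pos = this->Interactor->GetEventPosition();

        // Shoot a ray from the mouse position into the 3D view.
        vtkSmartPointer<vtkPropPicker> picker = vtkSmartPointer<vtkPropPicker>::New();
        picker->Pick(pos[0], pos[1], 0,
                     this->Interactor->GetRenderWindow()->GetRenderers()->GetFirstRenderer());

        this->PickedActor = picker->GetActor(); // nullptr if nothing was hit
        this->DownPos[0] = pos[0];
        this->DownPos[1] = pos[1];

        if (!this->PickedActor) // only move the camera when empty space is clicked
            vtkInteractorStyleTrackballCamera::OnLeftButtonDown();
    }

    void OnLeftButtonUp() override
    {
        if (this->PickedActor)
        {
            int *pos = this->Interactor->GetEventPosition();
            // Use the (pos - DownPos) delta here to translate this->PickedActor.
            this->PickedActor = nullptr;
        }
        vtkInteractorStyleTrackballCamera::OnLeftButtonUp();
    }

private:
    vtkActor *PickedActor = nullptr;
    int DownPos[2] = {0, 0};
};
vtkStandardNewMacro(MoveActorStyle);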

VTKActor not visible after render but visible on camera->resetview()

I am working on a Qt-VTK project. We have a line drawing function where straight lines are created between two mouse click positions, but once the actor is created it is not visible. I was calling the render function just after adding the actor, but that didn't work. If I do camera->resetview() the lines become visible, but the entire perspective changes. Where am I going wrong?
thanks
Rwik
This may not be relevant to you, but I had this exact same problem (in ActiViz [managed VTK]) and wrangled with it for a week, so I hope this helps someone out there. It turned out to be a problem with the location of the lines we wanted to draw on the canvas; they were too far away from the camera (on the Z axis) to be visible.
For us, we were trying to draw a cross on the viewing area wherever the user clicked. The data points were there, as were the actors and whatnot, but they would only become visible in the scene if you called resetCamera() and thus changed the camera's configuration.
Initially, I blamed the custom interactor that we had added to circumvent the default interactor's swallowing of MouseUp events (intended behavior), but investigation suggested this was unlikely.
After this, I shifted the blame onto the camera, suspecting that the reset call triggered some kind of update method I wasn't aware of. I called resetCamera() and then reverted the camera values to what they were initially.
When this was done, it turned out that the crosses would appear when the camera zoomed out and then disappear again as soon as it was set back, and at this point I realized it was something to do with the scene itself.
I then checked the methods we were using to retrieve the mouse location in 3D and realized that the z value was enormous: as a byproduct of VTK's methods for converting 2D locations on the control to 3D locations in the scene (and vice versa), the points were being placed too far away.
So, after all that, a very mundane and avoidable mistake, originating from the methods renderer.DisplayToWorld() and WorldToDisplay().
This might not be everyone's problem, but I hope I've spared someone a week of fiddling around with VTK.
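For anyone hitting the same thing, here is a minimal sketch of one common fix (renderer is your vtkRenderer; clickX and clickY stand for the 2D event coordinates): project the camera's focal point into display space to get a sensible depth, then convert the click back to world coordinates at that depth so the point lands somewhere visible.

double focal[3], display[3], world[4];
renderer->GetActiveCamera()->GetFocalPoint(focal);

// Project the focal point to display space to get a reasonable z value.
renderer->SetWorldPoint(focal[0], focal[1], focal[2], 1.0);
renderer->WorldToDisplay();
renderer->GetDisplayPoint(display);

// Convert the 2D click back to world coordinates at that depth.
renderer->SetDisplayPoint(clickX, clickY, display[2]);
renderer->DisplayToWorld();
renderer->GetWorldPoint(world); // world[0..2] is where to place the cross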
It's a bit hard to help without seeing the code, but have you tried using
ui->qvtkwidget->update();
, where ui is the instance of your class derived from QMainWindow?

Efficient method for finding object in map based on coordinates

I am building an editor using C++/Qt which has a click-and-drag feel to it. The behaviour is similar to schematic editors (Eagle, KiCAD, etc.), Microsoft Visio, or other programs where you drag objects from a toolbar into a central editing area.
My problem is that when the user clicks inside the custom widget I want to be able to select the instance of the box-like object and manipulate it. There will also be lines connecting the boxes together. However, I can't decide on an efficient method for selecting those objects.
I have two main thoughts on how to program this. The first is that the widget drawing the entire editor would simply encapsulate every instance of the box. The other is to have each instance of the box (which is in my Model) carry an instance of a QWidget which would handle rendering the box (which would be in my View, but it would end up being strongly attached to the Model). As for the lines connecting the boxes, since they don't have rectangular bounding boxes, they will have to be rendered by the containing widget.
So here is the summary of how I see this being done:
The editor widget turns into a container which holds the widgets, and the widgets process their own click events. The potential issue here is that I don't know how to make the custom widget act as a layout that allows click-and-drag functionality.
The editor widget takes care of all the rendering and processes the mouse clicks (the easier way, in that I don't have to worry about layout; it's just selecting the instances efficiently that I don't know how best to do).
So, with that background: for the second method I plan on having each box-like instance carry a bounding rectangle, with the lines represented by 3-4 pixel wide bounding rectangle segments (they run at 90-degree angles). I could iterate through every single box and line on each click, but that seems really inefficient.
The big question: Is there some sort of data structure I can hold rectangles in and link them to widgets (or anything else for that matter) and then give it two coordinates (such as mouse coordinates) and have it spit me out the bounding box or linked object that those coordinates are inside of?
It sounds like your real question is about finding a good way to implement your editor, not the specifics of rectangle intersection performance.
You may be interested in Qt's "Diagram Scene" example project, which demonstrates the QGraphicsScene API. It sounds like a good fit for the scenario you describe. (The full source for the example ships with Qt.)
The best part is that you still don't have to implement hit testing yourself, because the API already provides what you are looking for (e.g., QGraphicsScene::itemAt()).
It's worth noting that, internally, QGraphicsScene indexes its items with a BSP tree by default (see QGraphicsScene::ItemIndexMethod), so hit testing isn't going to be a serious bottleneck even when your scenes have a lot of individual items.
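A minimal sketch of such a hit test (the function name is illustrative):

#include <QGraphicsItem>
#include <QGraphicsScene>
#include <QGraphicsSceneMouseEvent>
#include <QTransform>

// Returns the topmost item under the click, or nullptr if nothing was hit.
QGraphicsItem *itemUnderClick(QGraphicsScene *scene, QGraphicsSceneMouseEvent *event)
{
    return scene->itemAt(event->scenePos(), QTransform());
}

Passing an identity QTransform is fine as long as no items ignore view transformations.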

Display custom image and have the ability to catch mouse input

I am trying to display several images in the MainWindow and have the ability to perform certain actions when these images receive mousePressEvent.
For this, I thought I would create a new class, derived from QWidget, with a QImage private member. With this, I was able to override paintEvent and mousePressEvent to both display the image and catch mouse input.
However, the problem is that the image drawn in the MainWindow takes the widget's size and shape (rectangular), while the image itself does not have that shape (it is not even a regular shape). This causes problems because I am able to catch the mousePressEvent on parts that do not belong to my image but do belong to the widget area.
Does anyone have any ideas on how to solve this?
I am still very new to Qt, so please don't mind any huge mistakes :)
Update:
Ok, I tried another approach.
I now have a QGraphicsView on my MainWindow, with a QGraphicsScene linked to it. To this scene I added some QGraphicsItems.
To implement my QGraphicsItems I had to override the boundingRect and paint methods. As far as the documentation goes, my understanding is that Qt uses the shape method (whose default implementation is based on boundingRect) to determine the shape of the object being drawn. This could be the solution to my problem, or at least a way to improve the accuracy of the drawn shape.
As of now, I am returning a rectangle of my image's size from boundingRect. Does anyone know how I can "adapt" the shape to my image (which resembles a cylinder)?
If you want to stay with QWidget, you should take a look at the mask concept (QWidget::setMask()) to refine the space occupied by your images (but be careful about the performance of this implementation).
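A minimal sketch of the mask idea, assuming the image has an alpha channel from which a mask can be derived (image is the loaded QImage):

// Inside the QWidget-derived class, after loading the image:
QPixmap pixmap = QPixmap::fromImage(image);
setMask(pixmap.mask()); // mouse events outside the mask no longer reach this widget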
If you decide to go with the QGraphicsItems, you can re-implement the virtual method QPainterPath QGraphicsItem::shape() const to return a path representing the boundary of your images as closely as possible.
The Qt documentation says :
The shape is used for many things, including collision detection, hit tests, and for the QGraphicsScene::items() functions.
I encourage you to go with the QGraphics system; it sounds more appropriate in your case.
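A minimal sketch of such a shape() re-implementation, deriving the hit area from the image's alpha channel (class and member names are illustrative):

#include <QGraphicsItem>
#include <QPainter>
#include <QPainterPath>
#include <QPixmap>
#include <QStyleOptionGraphicsItem>

class ImageItem : public QGraphicsItem
{
public:
    explicit ImageItem(const QPixmap &pixmap) : m_pixmap(pixmap)
    {
        // Build the hit-test path from the non-transparent pixels.
        m_shape.addRegion(QRegion(m_pixmap.mask()));
    }

    QRectF boundingRect() const override { return m_pixmap.rect(); }

    // Used for collision detection, hit tests, and QGraphicsScene::items().
    QPainterPath shape() const override { return m_shape; }

    void paint(QPainter *painter, const QStyleOptionGraphicsItem *, QWidget *) override
    {
        painter->drawPixmap(0, 0, m_pixmap);
    }

private:
    QPixmap m_pixmap;
    QPainterPath m_shape;
};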
What I understand is that you want to shrink/expand/transform your image to fit a custom form (like a cylinder).
What I suggest is to use the OpenGL module's QGLWidget and map the corners of your rectangular image, (0,0), (0,1), (1,0), (1,1), onto a custom form created in OpenGL.
It will act as texture mapping.
Edit: you can still catch mouse input in OpenGL
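A minimal sketch of that suggestion (legacy fixed-function OpenGL via QGLWidget; the image path and quad vertices are illustrative and should be replaced by the custom form):

#include <QGLWidget>
#include <QImage>
#include <QMouseEvent>

class TexturedView : public QGLWidget
{
protected:
    void initializeGL() override
    {
        // QGLWidget uploads the image and returns a texture id.
        m_texture = bindTexture(QImage("image.png"));
        glEnable(GL_TEXTURE_2D);
    }

    void paintGL() override
    {
        glClear(GL_COLOR_BUFFER_BIT);
        glBindTexture(GL_TEXTURE_2D, m_texture);
        glBegin(GL_QUADS); // map the image corners onto the custom form
        glTexCoord2f(0, 0); glVertex2f(-0.5f, -0.5f);
        glTexCoord2f(1, 0); glVertex2f( 0.5f, -0.5f);
        glTexCoord2f(1, 1); glVertex2f( 0.5f,  0.5f);
        glTexCoord2f(0, 1); glVertex2f(-0.5f,  0.5f);
        glEnd();
    }

    void mousePressEvent(QMouseEvent *event) override
    {
        // Mouse input still arrives as ordinary QWidget events.
        QGLWidget::mousePressEvent(event);
    }

private:
    GLuint m_texture = 0;
};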

Coordinate confusion

I subclassed QGraphicsItem and reimplemented paint.
In paint I wrote something like this to label the item:
painter->drawText(10, 40, "Test");
After some time I thought it might be useful to handle the labeling with a separate item, so I wrote something like this:
QGraphicsTextItem *label = new QGraphicsTextItem("TEST", this);
label->setPos(10, 40);
But the two "TEST" drawings do not appear in the same place on screen. I guess the difference may be related to item coordinates vs. scene coordinates. I tried all the mapFrom... and mapTo... combinations in the QGraphicsItem interface, but no progress. I want the two drawings to appear in the same place on screen.
What am I missing?
I assume that you are using the same font size and type in both cases. If the difference in position is small, the reason may be that QGraphicsTextItem adds some padding (a document margin) around the text it contains. I would try QGraphicsSimpleTextItem, which does not add fancy stuff internally, and see if you still have the same problem. The coordinate system is the same whether you use the painter or setPos, so that is not the problem in itself; note, however, that painter->drawText(x, y, ...) places the baseline of the text at y, while setPos() places the top-left corner of the item, so a vertical offset of roughly the font's ascent is expected. If this doesn't help, I suggest specifying the same rect for both to avoid Qt adding its own separation spaces.
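A minimal sketch of the comparison inside a QGraphicsItem subclass (the ascent correction at the end is the key point):

// In the item's paint():
painter->drawText(10, 40, "TEST"); // (10,40) is the baseline of the text

// As a child item (for example in the item's constructor):
QGraphicsSimpleTextItem *label = new QGraphicsSimpleTextItem("TEST", this);
label->setPos(10, 40); // (10,40) is the top-left corner of the label

// To line the two up, shift the child item up by the font's ascent:
label->setPos(10, 40 - QFontMetricsF(label->font()).ascent());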