Do I need a Visitor for my component? - C++

I'm trying to build a small, simple GUI in C++ (with SDL). I'm experimenting with the Composite pattern to get a flexible solution.
I've got a Widget class made of Component objects: for instance, there is a PaintingComponent; if I want to draw a box, I'll use a PaintingBoxComponent, which inherits from PaintingComponent.
The ideal Widget class would look a bit like this:
#include <vector>

class Component;   // painting, input, etc. components derive from this

class Widget
{
private:
    std::vector<Component*> myComponents;

public:
    // A small number of methods able to communicate with the components
    // without knowing their concrete types
};
My question is simple: what is the best way to activate this component when I need it?
I first went with a "display" function in the Widget class, but I see two problems:
1) I'm losing the pure polymorphism of Component in Widget, since I'm forced to declare a particular component of the widget as a PaintingComponent. I can deal with this, since it's logical that a Widget should be displayed.
2) More troublesome, I need to pass information between my main program and my PaintingComponent. Either I pass the SDL_Surface* screen to the PaintingComponent, and it paints the image it drew onto it, or I give my component a reference to the object that needs to receive the image it has drawn (and this object will paint the image on the screen). In both cases, Widget will have to handle the data and will have to know what an SDL_Surface* is. I'm losing the loose coupling, and I don't want that.
Then, I considered using a "Visitor" pattern, but I'm not used to it, and before I try to implement it, I'd like to have your advice.
How would you proceed to get a flexible and solid solution in this case? Thanks in advance!

If you plan to change graphics systems later, you could implement this pattern. The Visitor goes to the root node, then recursively to all children, drawing them on some surface (known only to the Visitor itself). You can gather a "display list" that way, then optimize it before drawing (for example, on OpenGL, apply z-sorting: lower z first).
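A minimal sketch of that idea (class names are illustrative, not from the question; the point is that only the visitor knows about SDL, so Widget stays decoupled):

#include <vector>
#include <SDL.h>

class PaintingComponent;

// The visitor declares one overload per concrete component type.
class ComponentVisitor
{
public:
    virtual ~ComponentVisitor() = default;
    virtual void visit(PaintingComponent &painting) = 0;
};

class Component
{
public:
    virtual ~Component() = default;
    virtual void accept(ComponentVisitor &visitor) = 0;
};

class PaintingComponent : public Component
{
public:
    void accept(ComponentVisitor &visitor) override { visitor.visit(*this); }
    // ...the image this component has drawn lives here...
};

class Widget
{
public:
    // Widget forwards the visitor to its components and children;
    // it never needs to know what the visitor does with them.
    void accept(ComponentVisitor &visitor)
    {
        for (Component *c : myComponents)
            c->accept(visitor);
        for (Widget *child : myChildren)   // recurse through the widget tree
            child->accept(visitor);
    }

private:
    std::vector<Component*> myComponents;
    std::vector<Widget*> myChildren;
};

// The only class that knows what an SDL_Surface* is.
class SdlRenderVisitor : public ComponentVisitor
{
public:
    explicit SdlRenderVisitor(SDL_Surface *screen) : screen(screen) {}

    void visit(PaintingComponent &painting) override
    {
        // blit the component's image onto screen here,
        // or append it to a display list for later sorting
    }

private:
    SDL_Surface *screen;
};

Swapping SDL for OpenGL later then only means writing a new visitor; the Widget and Component classes are untouched.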

Related

Qt | creating a dynamic object in a customized scene

Good day to all!
I would like to learn from the respected community how, using Qt Designer, to create a simple GUI that does what is shown in the attached image. Namely: after a single click of the LMB, a marker is placed at the click position, marking the beginning of the polygon, and a temporary polygon of the desired type (but of varying size) follows the mouse cursor until the LMB is clicked again, after which the final polygon is built from the point of the first click to the point of the second click. The finished polygon will have to be stored somewhere in memory for further work with it.
As the polygon here I show a complex sector built on some calculated points, but as a simple example you can take a straight line - I don't think that changes the essence of the program much.
I have already looked at many examples of different implementations of things on QGraphicsScene, but I could not figure out how to create an object the way I need. That is, I know that we need a separate class describing the polygon and calculating the coordinates it consists of, but I don't quite understand how to implement the dynamics: do I delete the temporary polygon on each mouseMoveEvent and draw a new one at the new coordinates, or what? (A sketch of one possible approach follows at the end of this question.)
P.S. If it isn't too difficult, could someone also show how to implement this with a redefined paintScene class that replaces the standard QGraphicsScene class by inheriting from it? I have run into the problem that, when I create a custom scene class, clicks on the objects added to it are ignored by the program: instead of, for example, dragging an object on the scene when clicking on it, the mousePressEvent of the scene, not of the object, is triggered, and I do not understand what the problem is.
P.P.S. I apologize for my English and thank everyone for any help!
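A minimal sketch of the rubber-band approach asked about above, assuming a QGraphicsScene subclass (class and member names are hypothetical). Rather than deleting and recreating the temporary polygon on every mouseMoveEvent, it keeps one QGraphicsPolygonItem and updates it with setPolygon(); note also the calls to the base-class handlers, which is what lets items keep receiving clicks (the problem described in the P.S.):

#include <QGraphicsScene>
#include <QGraphicsSceneMouseEvent>
#include <QGraphicsPolygonItem>
#include <QPolygonF>
#include <QPen>

class PaintScene : public QGraphicsScene
{
public:
    using QGraphicsScene::QGraphicsScene;

protected:
    void mousePressEvent(QGraphicsSceneMouseEvent *event) override
    {
        if (!m_tempItem) {
            // First click: remember the anchor and add a temporary item.
            m_origin = event->scenePos();
            m_tempItem = addPolygon(QPolygonF(), QPen(Qt::DashLine));
        } else {
            // Second click: the polygon becomes final and stays in the scene.
            m_tempItem->setPen(QPen(Qt::black));
            m_tempItem = nullptr;
        }
        // Forward to the base class so the items also receive the event.
        QGraphicsScene::mousePressEvent(event);
    }

    void mouseMoveEvent(QGraphicsSceneMouseEvent *event) override
    {
        if (m_tempItem) {
            // Update the existing item instead of deleting and recreating it.
            QPolygonF poly;
            poly << m_origin << event->scenePos();   // straight-line example
            m_tempItem->setPolygon(poly);
        }
        QGraphicsScene::mouseMoveEvent(event);
    }

private:
    QPointF m_origin;
    QGraphicsPolygonItem *m_tempItem = nullptr;
};

(The view needs setMouseTracking(true) for the scene to receive move events while no button is pressed.)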

VTK - Interacting with multiple objects

I am fairly new to VTK and I really enjoy using it.
Now I want to interact with multiple objects independently. For example, if I have 5 objects in a render window, I want to move, rotate, and interact with only the selected object, while the other 4 objects stay where they are.
At the moment the camera is doing the magic: as I rotate the selected object, the other objects move at the same time, and I don't want that to happen.
I also want to store all the objects in memory.
I intend to use C++.
This is my rough class structure:
#include <list>
#include <vtkActor.h>
#include <vtkSmartPointer.h>

class ScreenObjects
{
    std::list<vtkSmartPointer<vtkActor>> actors; // linked list storing all the actors

public:
    ScreenObjects();        // Constructor. Initializes the current actor to null.
    void readSTLFile();     // Reads the STL file
    bool setObject();       // Sets the current object, so you can only interact with the selected one
};
I am missing quite a lot of functions and detail in my class, as I don't know what else to include that would be of use. I was also thinking of joining two objects together, but again, I don't know how to incorporate that into my class; any information on that would be appreciated.
Would really appreciate it if I could be given ideas. This is something of big interest to me and it would really mean a lot to me, and I mean this deep down from my heart.
First of all, you should read some tutorials and presentations, like this one for example:
http://www.cs.rpi.edu/~cutler/classes/visualization/F10/lectures/03_interaction.pdf
I say that because it looks like you're currently just moving the camera.
Then you should look into the VTK examples. They are very helpful for all VTK classes.
Especially for your problem, have a look at:
http://www.vtk.org/Wiki/VTK/Examples/Cxx/Interaction/Picking
Basically, you have to create a vtkRenderWindowInteractor derived class to get the mouse events (onMouseDown, onMouseUp, onMouseMove, ...), and a vtkPropPicker to shoot a ray from your mouse position into the 3D view and get the vtkActor under it.
On mouse down, store inside your derived class the vtkActor you'd like to move and the current mouse position. When the user releases the mouse (onMouseUp), get the new mouse position and use the difference between the mouse-down and mouse-up positions to modify the vtkActor's position.
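A minimal sketch along those lines, following the linked Picking example (it derives an interactor style rather than the interactor itself; the class name is illustrative):

#include <vtkActor.h>
#include <vtkInteractorStyleTrackballCamera.h>
#include <vtkObjectFactory.h>
#include <vtkPropPicker.h>
#include <vtkRenderWindow.h>
#include <vtkRenderWindowInteractor.h>
#include <vtkRendererCollection.h>
#include <vtkSmartPointer.h>

class ObjectMoveStyle : public vtkInteractorStyleTrackballCamera
{
public:
    static ObjectMoveStyle* New();
    vtkTypeMacro(ObjectMoveStyle, vtkInteractorStyleTrackballCamera);

    void OnLeftButtonDown() override
    {
        int* pos = this->Interactor->GetEventPosition();

        // Shoot a ray from the mouse position into the 3D view.
        auto picker = vtkSmartPointer<vtkPropPicker>::New();
        picker->Pick(pos[0], pos[1], 0,
                     this->Interactor->GetRenderWindow()
                         ->GetRenderers()->GetFirstRenderer());

        // The actor under the cursor, or nullptr if nothing was hit.
        this->PickedActor = picker->GetActor();

        // Forward to the base class so default interaction still works.
        vtkInteractorStyleTrackballCamera::OnLeftButtonDown();
    }

private:
    vtkActor* PickedActor = nullptr;
};
vtkStandardNewMacro(ObjectMoveStyle);

In OnMouseMove/OnLeftButtonUp you would then use the difference in event positions to call SetPosition() on PickedActor, as described above. VTK also ships vtkInteractorStyleTrackballActor, which moves the prop under the cursor instead of the camera and may already do most of what you want.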

Efficient method for finding object in map based on coordinates

I am building an editor in C++/Qt which has a click-and-drag feel to it. The behaviour is similar to schematic editors (Eagle, KiCAD, etc.), Microsoft Visio, or other programs where you drag objects from a toolbar into a central editing area.
My problem is that when the user clicks inside the custom widget, I want to be able to select the instance of the box-like object under the cursor and manipulate it. There will also be lines connecting the boxes together. However, I can't decide on an efficient method for selecting those objects.
I have two main thoughts on how to program this. The first is that the widget which draws the entire editor simply encapsulates every one of the box instances. The other is to have each box instance (which is in my Model) carry an instance of a QWidget which handles rendering the box (which would be in my View... but it would end up being strongly attached to the Model). As for the lines connecting the boxes, since they don't have square bounding boxes, they will have to be rendered by the containing widget.
So here is the summary of how I see this being done:
The editor widget turns into a container which holds the widgets, and the widgets process their own click events. The potential issue here is that I don't know how to give the custom widget a layout that allows click-and-drag functionality.
The editor widget takes care of all the rendering and processes the mouse clicks (the easier way, in that I don't have to worry about layout; it's just the efficient selection of instances that I'm unsure about).
So, with that background: for the 2nd method I plan on having each box-like instance carry a bounding rectangle, with the lines represented by 3-4 pixel wide bounding rectangle segments (they are at 90 degree angles). I could iterate through every single box and line, but that seems really inefficient.
The big question: is there some sort of data structure I can hold rectangles in, link them to widgets (or anything else for that matter), and then give it two coordinates (such as mouse coordinates) and have it spit out the bounding box or linked object those coordinates are inside of?
It sounds like your real question is about finding a good way to implement your editor, not the specifics of rectangle intersection performance.
You may be interested in Qt's "Diagram Scene" example project, which demonstrates the QGraphicsScene API. It sounds like a good fit for the scenario you describe. (The full source for the example ships with Qt.)
The best part is that you still don't have to implement hit testing yourself, because the API already provides what you are looking for (e.g., QGraphicsScene::itemAt()).
It's worth noting that QGraphicsScene maintains an internal index of its items (a BSP tree by default) to speed up hit tests. As others have pointed out, this isn't going to be a serious bottleneck unless your scenes have a very large number of individual items.
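A minimal sketch of that hit test, assuming a QGraphicsView subclass (Qt 5 signatures; the class name is illustrative):

#include <QGraphicsItem>
#include <QGraphicsView>
#include <QMouseEvent>

class EditorView : public QGraphicsView
{
public:
    using QGraphicsView::QGraphicsView;

protected:
    void mousePressEvent(QMouseEvent *event) override
    {
        // Map the click from view to scene coordinates, then ask the
        // scene which item (box or line) is under that point.
        const QPointF scenePos = mapToScene(event->pos());
        if (QGraphicsItem *item = scene()->itemAt(scenePos, transform())) {
            // 'item' is the topmost item under the cursor;
            // select or start dragging it here.
        }
        QGraphicsView::mousePressEvent(event);
    }
};

Boxes would be QGraphicsRectItems (or custom items) and the connecting lines QGraphicsLineItems or QGraphicsPathItems, so both participate in the same hit testing.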

Best Practice in Cocos2d

I am at the start of my cocos2d adventure and have some ideological questions to ask. I am making a small space-shooter game; am I right to use the following class structure?
Scene
    Background Layer
        Infinite parallax background
    Game Layer
        Space ships
        Bullets
    Control Layer
        Joystick
        Buttons
And a follow-up question: what is the best practice for accessing objects in other layers? For example, when the joystick is updated, it must rotate the space ship and move the background, both of which are in other layers. Is there some recommended way to go about this, or should I simply get the desired objects by tag and operate on them?
Cocos is a big singleton-based system, which may not appeal to some developers, but it is often used in Cocos apps and is the fundamental architecture of the framework. If you have one main scene and many layers added to that scene, and you want controls on one layer to affect sprites or logic on other layers, there really is nothing wrong with making your main scene a singleton and sending the information from the joystick layer back to the scene, which then manipulates the other layers or sprites. I do this all the time, and this technique is used in countless Cocos tutorials in books and online, so you can feel that you aren't breaking too many rules if you do it this way (and it's also quite easy to do).
If you instead choose to keep pointers from one layer into other layers, this can get you into a lot of trouble, since one node should never own another node that it doesn't have a parent-child relationship with. Doing so can cause crashes and problems with the native Cocos cleanup methods when you remove scenes later, and can potentially leak memory. You could use a weak reference instead, but that still depends on one layer expecting another layer to always be around, which may not be the case.
Sending data back to the main game scene, which then dispatches it appropriately, is really efficient.
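A plain C++ sketch of that dispatch pattern (all names are illustrative; in a Cocos app, GameScene would be your scene class and the layers its children):

#include <cstdio>

// Stand-ins for the layers; in Cocos these would be CCLayer subclasses.
struct GameLayer {
    void rotateShip(float angle) { std::printf("rotate ship by %f\n", angle); }
};
struct BackgroundLayer {
    void scroll(float dx) { std::printf("scroll background by %f\n", dx); }
};

class GameScene
{
public:
    // The shared instance the layers report back to.
    static GameScene& instance()
    {
        static GameScene scene;
        return scene;
    }

    // Called by the control layer; the scene dispatches to the other
    // layers, so no layer ever holds a pointer to a sibling layer.
    void onJoystickMoved(float angle, float dx)
    {
        gameLayer.rotateShip(angle);
        backgroundLayer.scroll(dx);
    }

private:
    GameScene() = default;
    GameLayer gameLayer;
    BackgroundLayer backgroundLayer;
};

// Usage from the joystick's update handler:
// GameScene::instance().onJoystickMoved(angle, dx);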
This seems like a perfectly reasonable way to arrange your objects; it's the method I use.
For accessing objects, I would keep an explicit reference to the object as a member variable and use it directly. (Using tags isn't a bad option, I just find it can get a little messy.)
@interface Class1 : NSObject
{
    CCLayer *backgroundLayer;
    CCLayer *contentLayer;
    CCLayer *hudLayer;
    CCSprite *objectIMayNeedToUseOnBackgroundLayer;
    CCNode *objectIMayNeedToUseOnContentLayer;
}
@end
Regarding tags, one method I use to make sure the tag numbers I'm assigning are unique is to define an enum as follows:
typedef enum
{
    kTag_BackgroundLayer = 100,
    kTag_BackgroundImage,
    kTag_GameLayer = 200,
    kTag_BadGuy,
    kTag_GoodGuy,
    kTag_Obstacle,
    kTag_ControlLayer = 300,
    kTag_Joystick,
    kTag_Buttons
};
Most times I'll also set the zOrder and tag properties of CCNodes (e.g., CCSprites, CCLabelTTFs) to the same value, so you can actually use the enum to define your zOrder, too.

Display custom image and have the ability to catch mouse input

I am trying to display several images in the MainWindow and to perform certain actions when these images receive a mousePressEvent.
For this, I thought I would create a new class, derived from QWidget, with a QImage private member. With this, I was able to override paintEvent and mousePressEvent to both display the image and catch mouse input.
However, the problem is that the image drawn in the MainWindow takes the widget's size and shape (rectangular), while the image itself does not have that shape (it isn't even a regular shape). This causes problems, because I catch mousePressEvents on parts that do not belong to my image but do belong to the widget area.
Does anyone have any ideas on how to solve this?
I am still very new to Qt, so please don't mind any huge mistakes :)
Update:
Ok, I tried another approach.
I now have a QGraphicsView in my MainWindow, with a QGraphicsScene attached to it. To this scene I added some QGraphicsItems.
To implement my QGraphicsItems I had to override the boundingRect and paint methods. As far as the documentation goes, my understanding is that Qt uses the shape method (which by default calls boundingRect) to determine the shape of the object. This could possibly be the solution to my problem, or at least a way to improve the accuracy of the drawn shape.
As of now, I am returning a square of my image's size from boundingRect. Does anyone know how I can "adapt" the shape to my image (which resembles a cylinder)?
If you want to use QWidget, you should take a look at the mask concept to refine the space occupied by your images (be careful about the performance of this approach).
If you decide to go with QGraphicsItems, you can re-implement the virtual method QPainterPath QGraphicsItem::shape() const to return a path representing the boundary of your images as closely as possible.
The Qt documentation says :
The shape is used for many things, including collision detection, hit tests, and for the QGraphicsScene::items() functions.
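For the QGraphicsItem route, a minimal sketch of such a shape() reimplementation, assuming the image has an alpha channel from which a mask can be extracted (the class name is illustrative):

#include <QBitmap>
#include <QGraphicsItem>
#include <QPainter>
#include <QPainterPath>
#include <QPixmap>

class ImageItem : public QGraphicsItem
{
public:
    explicit ImageItem(const QPixmap &pixmap) : m_pixmap(pixmap)
    {
        // Build the outline once from the pixmap's alpha mask.
        m_shape.addRegion(QRegion(m_pixmap.mask()));
    }

    QRectF boundingRect() const override
    {
        return QRectF(m_pixmap.rect());
    }

    // Used by Qt for collision detection and hit tests, so mouse
    // clicks outside the visible image no longer reach the item.
    QPainterPath shape() const override
    {
        return m_shape;
    }

    void paint(QPainter *painter, const QStyleOptionGraphicsItem *, QWidget *) override
    {
        painter->drawPixmap(0, 0, m_pixmap);
    }

private:
    QPixmap m_pixmap;
    QPainterPath m_shape;
};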
I encourage you to go with the QGraphics system; it sounds more appropriate in your case.
What I understand is that you want to shrink/expand/transform your image to fit a custom form (like a cylinder).
What I suggest is to use the OpenGL module (QGLWidget) and map the corner coordinates of your rectangular image (0,0) (0,1) (1,0) (1,1) onto a custom form created in OpenGL.
It will act as texture mapping.
Edit: you can still catch mouse input in OpenGL