VTK - Interacting with multiple objects - c++

I am fairly new to VTK and I really enjoy using it.
Now, I want to interact with multiple objects independently. For example, if I have 5 objects in a render window, I want to move, rotate and interact with only the one selected object, while the other four objects stay where they are.
At the moment the camera is doing the magic: as I rotate the selected object, all the other objects move with it, and I don't want that to happen.
I also want to store all the objects in memory.
I intend to use C++.
This is roughly my class structure:
class ScreenObjects
{
    std::list<vtkSmartPointer<vtkActor>> actors; // linked list storing all the actors

public:
    ScreenObjects();      // Constructor. Starts with an empty actor list.
    void readSTLFile();   // Reads an STL file and adds an actor for it.
    bool setObject();     // Sets the current object, so you can only interact with the selected one.
};
I am missing quite a lot of functions and detail in my class, as I don't know what else to include that would be of use. I was also thinking of joining two objects together, but I don't know how to incorporate that in my class either; any information on that would be appreciated.
I would really appreciate any ideas. This is of great interest to me and any help would truly mean a lot.

First of all, you should read some tutorials and presentations, like this one for example:
http://www.cs.rpi.edu/~cutler/classes/visualization/F10/lectures/03_interaction.pdf
I say that because it looks like you're currently just moving the camera.
Then you should look into the VTK examples. They are very helpful for all VTK classes.
Especially for your problem have a look at:
http://www.vtk.org/Wiki/VTK/Examples/Cxx/Interaction/Picking
Basically, you have to create a vtkInteractorStyle-derived class to receive the mouse events (OnLeftButtonDown, OnLeftButtonUp, OnMouseMove, ...), and a vtkPropPicker to shoot a ray from the mouse position into the 3D view and get the vtkActor it hits.
In OnLeftButtonDown you can then store, inside your interactor-style class, the vtkActor you'd like to move together with the current mouse position. When the user releases the mouse (OnLeftButtonUp), get the new mouse position and use the difference between the two positions to modify the vtkActor's position.
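Note that VTK also ships vtkInteractorStyleTrackballActor, which already moves and rotates the prop under the cursor instead of the camera, so check whether that covers your use case first. For the custom route, here is a rough, untested sketch of the pick-and-move idea (it assumes SetDefaultRenderer() was called on the style, and the screen-space translation at the end is deliberately crude):
#include <vtkActor.h>
#include <vtkInteractorStyleTrackballCamera.h>
#include <vtkObjectFactory.h>
#include <vtkPropPicker.h>
#include <vtkRenderWindow.h>
#include <vtkRenderWindowInteractor.h>
#include <vtkSmartPointer.h>

class PickingStyle : public vtkInteractorStyleTrackballCamera
{
public:
    static PickingStyle* New();
    vtkTypeMacro(PickingStyle, vtkInteractorStyleTrackballCamera);

    void OnLeftButtonDown() override
    {
        int* pos = this->GetInteractor()->GetEventPosition();
        auto picker = vtkSmartPointer<vtkPropPicker>::New();
        // Shoot a ray through the clicked pixel and grab the actor it hits.
        picker->Pick(pos[0], pos[1], 0, this->GetDefaultRenderer());
        this->Picked = picker->GetActor();
        this->Start[0] = pos[0];
        this->Start[1] = pos[1];
        if (!this->Picked) // nothing was hit: fall back to camera interaction
            vtkInteractorStyleTrackballCamera::OnLeftButtonDown();
    }

    void OnLeftButtonUp() override
    {
        if (this->Picked)
        {
            int* pos = this->GetInteractor()->GetEventPosition();
            // Move the picked actor by the drag delta. Real code would
            // unproject the screen-space delta into world coordinates.
            this->Picked->AddPosition(pos[0] - this->Start[0],
                                      pos[1] - this->Start[1], 0.0);
            this->GetInteractor()->GetRenderWindow()->Render();
            this->Picked = nullptr;
        }
        vtkInteractorStyleTrackballCamera::OnLeftButtonUp();
    }

private:
    vtkActor* Picked = nullptr;
    int Start[2] = {0, 0};
};
vtkStandardNewMacro(PickingStyle);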

Related

Qt | creating a dynamic object in a customized scene

Good day to all!
I would like to learn from the respected community how to use Qt Designer to create a simple GUI that does what is shown in the attached image. Namely: after a single left-click, a marker is placed at the click position, marking the beginning of the polygon. A temporary polygon of the desired type (but of varying size) then follows the mouse cursor until the left button is clicked again, at which point the final polygon is built from the point of the first click to the point of the second. Such a finished polygon will have to be stored somewhere in memory for further work with it.
As the polygon here I show a complex sector built on some calculated points, but as a simple example you can take a straight line; I don't think it changes the essence of the program much.
I have already looked at many examples of different implementations of things on QGraphicsScene, but I could not figure out how to create an object the way I need. That is, I know I need a separate class describing the polygon and calculating the coordinates it consists of, but I don't quite understand how to implement the dynamics: should I delete the temporary polygon on each mouseMoveEvent and draw a new one at the new coordinates, or is there a better way?
P.S. If it isn't too difficult, could someone also show how to implement this with a custom paintScene class that replaces the standard QGraphicsScene class and inherits from it? I have run into the problem that, when I create a custom scene class, clicks on the objects attached to it are ignored by the program: instead of, for example, dragging an object on the scene when clicking on it, the scene's mousePressEvent is triggered rather than the object's, and I do not understand what the problem is.
P.P.S. I apologize for my English and thank everyone for any help!
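One common way to get the dynamic preview described above is to keep a single temporary item and update its geometry on each mouseMoveEvent rather than deleting and recreating it, and to forward events to the base QGraphicsScene so clicks still reach the items (this also answers the P.S.: a scene subclass that swallows mousePressEvent without calling the base class blocks item dragging). A minimal sketch, using a straight line as the simple example from the question; the class and member names are illustrative:
#include <QGraphicsLineItem>
#include <QGraphicsScene>
#include <QGraphicsSceneMouseEvent>

class PaintScene : public QGraphicsScene
{
protected:
    void mousePressEvent(QGraphicsSceneMouseEvent *event) override
    {
        if (event->button() == Qt::LeftButton) {
            if (!tempItem) {
                // First click: create the temporary item that follows the cursor.
                tempItem = addLine(QLineF(event->scenePos(), event->scenePos()));
            } else {
                // Second click: geometry is final; forget the pointer so the
                // finished item stays in the scene (and in items()) for later use.
                tempItem = nullptr;
            }
        }
        // Forward to the base class so clicks still reach movable/selectable items.
        QGraphicsScene::mousePressEvent(event);
    }

    void mouseMoveEvent(QGraphicsSceneMouseEvent *event) override
    {
        if (tempItem) {
            // Update the existing item instead of deleting and recreating it.
            tempItem->setLine(QLineF(tempItem->line().p1(), event->scenePos()));
        }
        QGraphicsScene::mouseMoveEvent(event);
    }

private:
    QGraphicsLineItem *tempItem = nullptr;
};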

Ways to implement a game logic layer into current 2d game architecture

I'm developing a 2D fighting game in C++ (for learning purposes) and I'm having a hard time figuring out how to properly implement game logic. A quick overview of my current architecture: I have component classes that act as data holders, and I have 'systems', which are just functions designed to act on those components. I have a scene class that holds an array of the fighters currently in the game; this scene is passed to the individual sub-systems, which can then freely act on fighter components and update the fighters' state:
//Add a fighter object to array of fighters and set starting position
scene.CreatePlayerFighter(160.0f, 0.0f);
scene.CreateAIFighter(80.0f, 45.0f);

gameWorld.Init(scene);
Renderer.Init(scene, window);
AI.Init(scene);
//etc...

//Game loop
while (true)
{
    Input.Update(scene);
    Physics.Update(scene);
    AI.Update(scene);
    //etc....

    window.ClearBuffers();
    Renderer.Update(scene, colorShaderProgram);
    window.SwapBuffers();
}
Again, inside each sub-system (Renderer, AI, Input, etc.) all fighter components are passed into system functions and spit back out with new values, which are then inserted back into the fighters:
void Physics::Update(Scene& scene)
{
    for (Fighter& fighter : scene.fighters)
    {
        //Update fighter position based on fighter's current velocity,
        //which has been set by input
        TransformComponent newFighterPosition = System::MoveFighter(
            fighter.GetComponent<TransformComponent>(),
            fighter.GetComponent<VelocityComponent>());
        //Insert new TransformComponent to update fighter's position
        fighter.Insert<TransformComponent>(newFighterPosition);
    }
}
This architecture has the advantage of being loosely coupled: I can add and remove systems very easily without affecting the fighter class or its components directly. The problem is that everything is hopelessly serial, as the scene is passed into each sub-system one by one to update the fighters.
I mention this because one idea I had for a game logic layer was to have higher-level classes call specific engine system functions directly, such as physics.MoveFighter(TransformComponent, VelocityComponent, float amountToMove);, where additional parameters give the user at the game-logic level more control. Of course, doing things this way means system functions would be invoked in whatever order the user of the game logic saw fit. Is there a way to implement a game logic layer like this, perhaps by queuing up all the calls and reordering them so they run correctly inside the engine? Or is there a better way to implement game logic within my current architecture?
As I understand your goal, you could just store a movementFactor field in the fighter and allow the game logic layer to change it. In the Physics class the delta is then multiplied by that field. If you use a component-like system, you probably do not want to update components manually from game logic, only operate on data.
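For instance (a sketch; the field layout is assumed, not taken from the question's code):
struct VelocityComponent
{
    float vx = 0.0f;
    float vy = 0.0f;
    float movementFactor = 1.0f; // the game logic layer only ever writes this
};

// Physics then scales its delta by the factor, e.g.:
//   position.x += velocity.vx * velocity.movementFactor * deltaTime;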
Logic update order is a complex problem: imagine a chicken falling on an egg. It can happen that in the same frame the chicken touches the egg and the egg is ready to show us a new little chick. What should be executed first? That depends on whether physics or eggs are updated first, and on whether reactions apply immediately. The best scenario (usually the fairest) is for them not to apply immediately, but to form a queue of object (component) state changes that is resolved independently after the action phase.
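A minimal sketch of that deferred-resolution idea (all names here are illustrative, not from the question's code): systems push state changes into a queue during the action phase instead of mutating components immediately, and the queue is applied once per frame afterwards, so the in-frame update order no longer decides who wins.
#include <functional>
#include <vector>

struct Scene { /* holds the fighters, as in the question */ };

class ChangeQueue
{
public:
    void Push(std::function<void(Scene&)> change)
    {
        changes.push_back(std::move(change));
    }

    // Call this once per frame, after Input/Physics/AI have all run.
    void Resolve(Scene& scene)
    {
        for (auto& change : changes)
            change(scene);
        changes.clear();
    }

private:
    std::vector<std::function<void(Scene&)>> changes;
};

// Inside a system, instead of writing a component immediately:
//   queue.Push([newPos](Scene& s) { /* insert newPos into the fighter */ });
// and in the game loop, after all Update() calls: queue.Resolve(scene);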
Also make sure you actually want to roll your own rather than stick with an existing entity system like entityx.

Best Practice in Cocos2d

I am at the start of my cocos2d adventure, and have some ideological questions to ask. I am making a small space-shooter game; am I right to use the following class structure?
Scene
    Background Layer
        Infinite parallax background
    Game Layer
        Space ships
        Bullets
    Control Layer
        Joystick
        Buttons
And a follow-up question: what is the best practice for accessing objects in other layers? For example, when the joystick is updated, it must rotate the space ship and move the background, both of which live in other layers. Is there a recommended way to do this, or should I simply get the desired objects by tag and operate on them?
Cocos is a big singleton-based system, which may not appeal to some developers, but it is often used in Cocos apps and is the fundamental architecture of the framework. If you have one main scene and many layers added to that scene, and you want controls in one layer to affect sprites or logic in other layers, there really is nothing wrong with making your main scene a singleton and sending the information from the joystick layer back to the scene, which then manipulates the other layers or sprites. I do this all the time, and the technique is used in countless Cocos tutorials in books and online, so you can feel that you aren't breaking too many rules by doing it this way (and it's also quite easy to do).
If you instead choose to hold pointers from one layer into other layers, you can get into a lot of trouble, since one node should never own another node it doesn't have a specific parent-child relationship with. Doing so can cause crashes and problems with the native Cocos cleanup methods when you remove scenes later, and can leak memory. You could use a weak reference instead, but that still depends on one layer expecting another layer to always be around, which may not be the case.
Sending data back to the main game scene, which then dispatches and uses it accordingly, is really efficient.
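The idea, sketched in C++ for brevity (cocos2d itself is Objective-C; the class names below are illustrative stand-ins, not framework API): the control layer talks only to the scene singleton, and the scene fans the input out to the other layers, so layers never hold pointers to each other.
#include <cstdio>

// Illustrative stand-ins for the real layers.
struct GameLayer
{
    void RotateShip(float angle) { std::printf("rotate %.2f\n", angle); }
};

struct BackgroundLayer
{
    void Scroll(float dx, float dy) { std::printf("scroll %.2f %.2f\n", dx, dy); }
};

class GameScene
{
public:
    static GameScene& Instance() // the main scene as a singleton
    {
        static GameScene instance;
        return instance;
    }

    // Called by the joystick/control layer; the scene decides who reacts.
    void OnJoystickMoved(float angle, float dx, float dy)
    {
        gameLayer.RotateShip(angle);
        backgroundLayer.Scroll(dx, dy);
    }

private:
    GameScene() = default;
    GameLayer gameLayer;
    BackgroundLayer backgroundLayer;
};

// Usage from the control layer:
//   GameScene::Instance().OnJoystickMoved(angle, dx, dy);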
This seems like a perfectly reasonable way to arrange your objects, this is a method I use.
For accessing objects, I would keep an explicit reference to the object as a member variable and use it directly. (Using tags isn't a bad option, I just find it can get a little messy).
@interface Class1 : NSObject
{
    CCLayer *backgroundLayer;
    CCLayer *contentLayer;
    CCLayer *hudLayer;
    CCSprite *objectIMayNeedToUseOnBackgroundLayer;
    CCNode *objectIMayNeedToUseOnContentLayer;
}
@end
Regarding tags, one method I use to make sure the tag numbers I'm assigning are unique is to define an enum as follows:
enum
{
    kTag_BackgroundLayer = 100,
    kTag_BackgroundImage,
    kTag_GameLayer = 200,
    kTag_BadGuy,
    kTag_GoodGuy,
    kTag_Obstacle,
    kTag_ControlLayer = 300,
    kTag_Joystick,
    kTag_Buttons
};
Most times I'll also set the zOrder and tag properties of CCNodes (i.e. CCSprites, CCLabelTTFs, etc.) to the same value, so you can actually use the enum to define your zOrder, too.

Game screen management

I am working on a screen manager for a miniature game engine, and so far I cannot find a proper way to manage screen objects without turning each screen into a 'blob' (god object). Is a blob tolerable in circumstances like this, where I need a list of renderable objects in one controller?
I would consider using the MVC pattern in this situation. Otherwise, if you're not careful, it's very easy to end up with a bunch of spaghetti code where the screen code is reaching into the game code, and vice versa.
I have recently coded something you might call a "screen manager".
I started with the idea that, whatever game I make, the render system is going to be pretty much the same in terms of how to render (how to manage the hardware). The thing that changes is what is rendered, and how to draw it (do I want a box or a circle or a bitmap.. representing what... etc).
So basically the "game state" is responsible for knowing how to render itself, and should do so when given a render surface from the screen manager or graphics system (It should also be responsible for other things like knowing how input, physics, etc act upon itself).
I implemented it with a singleton GraphicsSystem object, which was used something like this:
GameState gs;
Graphics::System().Init(DOUBLE_BUFFER, 640, 480);
...
while (still_looping) {
    ...
    // When it is time to render:
    Graphics::System().RenderGameState(&gs);
}
And how, you ask, does the Graphics::System() singleton know how to render the game state? It knows because the game state inherits from a listener interface exposed by the graphics system:
// within GraphicsSystem.h...
class BaseRenderer
{
public:
    virtual ~BaseRenderer() {} // virtual destructor: deletion via a base pointer must be safe
    virtual void Render(BITMAP *render_surface) = 0;
};

// GameState defined with:
class GameState : public BaseRenderer
{
public:
    void Render(BITMAP *render_surface);
    ...
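The dispatch inside the graphics system is then trivial; something like this sketch (backBuffer and the flip step are assumptions, not from the original post; BaseRenderer is the listener declared above):
class GraphicsSystem
{
public:
    void RenderGameState(BaseRenderer* renderer)
    {
        renderer->Render(backBuffer); // the game state draws itself onto the surface
        // ...flip/present the back buffer here...
    }

private:
    BITMAP* backBuffer; // assumed back-buffer member
};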
You can do this with nearly all the subsystems (though probably not timing, as it is needed directly in the game loop).
Why singletons? Well, it is C++ and I'm assuming there is only one screen or graphics subsystem to render with (I'm not sure whether you are targeting multiple screens, a mobile phone, or a console). The other way I would do it is to make the graphics system a set of static global variables in a separate file, giving them file scope only, with accessor functions in that file (my old C way of doing things).
The key, though, is encapsulation. Let your screen manager manage the hardware. Let your game state dictate how it expresses itself.
If this misses the point, please clear up your question and I can edit the answer.

Do I need a visitor for my component?

I'm trying to build a small and simple GUI in C++ (with SDL). I'm experimenting with the Composite pattern to get a flexible solution.
I've got a Widget class that holds Component objects: for instance, there is a PaintingComponent; if I want to draw a box, I'll use a PaintingBoxComponent, which inherits from PaintingComponent.
The ideal Widget class would look a bit like this:
class Widget
{
private:
    std::vector<Component*> myComponents;
public:
    // A small number of methods able to communicate with the components
    // without knowing their concrete types
};
My question is simple: what is the best way to activate this component when I need it?
I first went with a "display" function in the Widget class. But I see two problems :
1) I'm losing the pure polymorphism of Component in Widget, since I'm forced to declare a particular component of the widget as a PaintingComponent. I can deal with this, since it's logical that a Widget should be displayable.
2) More troublesome, I need to pass information between my main program and my PaintingComponent. Either I pass the SDL_Surface* screen to the PaintingComponent and it paints the image it drew onto it, or I give my component a reference to the object that needs to receive the image it has drawn (and that object will paint the image on the screen). In both cases, Widget has to handle the data and has to know what an SDL_Surface* is. I'm losing the loose coupling, and I don't want that.
Then I considered using the Visitor pattern, but I'm not familiar with it, and before I try to implement it I'd like to have your advice.
How would you proceed to have a flexible and solid solution in this case? Thanks in advance!
If you plan to change the graphics system later, you could implement this pattern. The Visitor goes to the root node, then recursively to all its children, drawing them on some surface (known only to the Visitor itself). You can also gather a "display list" that way and optimize it before drawing (for example, on OpenGL, apply z-sorting: lower z first).
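A minimal sketch of that visitor idea, assuming a simple widget tree (all names are illustrative): only the visitor knows about SDL_Surface, so Widget stays decoupled from the rendering backend.
#include <vector>

struct SDL_Surface; // only the visitor ever sees the real type

class Widget;

class WidgetVisitor
{
public:
    virtual ~WidgetVisitor() {}
    virtual void Visit(Widget& widget) = 0;
};

class Widget
{
public:
    void AddChild(Widget* child) { children.push_back(child); }

    // Accept walks the subtree; the visitor decides what "visiting" means
    // (painting, hit-testing, building a display list, ...).
    void Accept(WidgetVisitor& visitor)
    {
        visitor.Visit(*this);
        for (Widget* child : children)
            child->Accept(visitor);
    }

private:
    std::vector<Widget*> children;
};

class PaintingVisitor : public WidgetVisitor
{
public:
    explicit PaintingVisitor(SDL_Surface* screen) : screen(screen) {}

    void Visit(Widget& widget) override
    {
        // Ask the widget's PaintingComponent to draw onto 'screen', or
        // record the draw call into a display list to sort and replay later.
        (void)widget;
    }

private:
    SDL_Surface* screen;
};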