I am working on a Qt-VTK project. We have a line-drawing function where straight lines are created between two mouse-click positions. But once the actor is created, it is not visible. I was calling the render function just after adding the actor, but that didn't work. If I reset the camera (resetCamera()), the lines become visible, but the entire perspective changes. Where am I going wrong?
Thanks,
Rwik
This may not be relevant to you, but I had this exact same problem (in ActiViz [managed VTK]) and wrangled with it for a week, so I hope this helps someone out there. It turned out to be a problem with the location of the lines we wanted to draw on the canvas; they were too far away from the camera (on the Z axis) to be visible.
For us, we were trying to draw a cross on the viewing area wherever the user clicked. The data points were there, as were the actors and whatnot, but they would only become visible in the scene if you called resetCamera() and thus changed the camera's configuration.
Initially, I blamed the custom interactor that we had added to circumvent the default interactor's swallowing of MouseUp events (intended behavior). Investigation revealed that this seemed unlikely.
After this I shifted the blame onto the camera under the suspicion that perhaps the reset call was making a call to some kind of update method which I wasn't aware of. I called resetCamera() and then reverted the camera values to what they were initially.
Once this was successfully done, it turned out that the crosses would appear when the camera zoomed out and disappear again as soon as it was set back, and it was at this point I realized that it was something to do with the scene.
At this point, I checked the methods we were using to retrieve the mouse location in 3D and realized that the Z value was enormous, placing the points too far away; a byproduct of VTK's methods for converting 2D locations on the control to 3D locations in the scene and vice versa.
So after all that, it was a very mundane and avoidable mistake originating from the methods renderer.DisplayToWorld() and WorldToDisplay().
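In C++ VTK terms, a minimal sketch of one way to keep that Z value sane is to reuse the depth of the camera's focal point when converting the click back to world coordinates (renderer, clickX, and clickY are assumed to exist; this is an illustration, not our exact code):

    // Sketch only: choose a sensible depth for DisplayToWorld by reusing
    // the display-space Z of the camera's focal point.
    double focal[3], display[3], world[4];
    renderer->GetActiveCamera()->GetFocalPoint(focal);

    // Project the focal point into display coordinates to get its Z.
    renderer->SetWorldPoint(focal[0], focal[1], focal[2], 1.0);
    renderer->WorldToDisplay();
    renderer->GetDisplayPoint(display);

    // Convert the 2D click back to world coordinates at that depth, so the
    // new geometry lands where the camera can actually see it.
    renderer->SetDisplayPoint(clickX, clickY, display[2]);
    renderer->DisplayToWorld();
    renderer->GetWorldPoint(world);
    // world[0..2] (divided by world[3] when non-zero) is the 3D position.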
This might not be everyone's problem, but I hope I've spared someone a week of fiddling around with VTK.
It's a bit hard to help without seeing the code, but have you tried using
ui->qvtkwidget->update();
, where ui is the instance of your class derived from QMainWindow?
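For context, a minimal sketch of the sequence, assuming a standard QVTKWidget setup (renderer and lineActor are illustrative names):

    // After creating the line actor, add it and request a repaint.
    renderer->AddActor(lineActor);
    ui->qvtkwidget->GetRenderWindow()->Render(); // force an immediate render
    ui->qvtkwidget->update();                    // or schedule a Qt repaint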
Related
Good day to all!
I would like to learn from the respected community how to use Qt Designer to create a simple GUI that does what is shown in the attached image. Namely: after a single LMB click, a marker is placed at the click position, marking the beginning of the polygon, and a temporary polygon of the desired type (but of varying size) follows the mouse cursor until the LMB is clicked again, after which the final polygon is built from the first click point to the second. The finished polygon then has to be stored somewhere in memory for further work with it.
As the polygon here I show a complex sector built on some calculated points, but as a simple example you can take a straight line; I don't think the essence of the program changes much.
I have already looked at many examples of different implementations of various things on QGraphicsScene, but I could not figure out exactly how to create an object the way I need. That is, I know we need a separate class describing the polygon and calculating the coordinates it consists of, but I don't quite understand how to implement the dynamics: on each mouseMoveEvent, delete the temporary polygon and draw a new one at the new coordinates, or what? (One approach is sketched below.)
P.S. If it's not too difficult, could someone also show how to implement this via a redefined paintScene class, replacing the standard QGraphicsScene class and inheriting from it? I have run into the problem that, with a custom scene class, clicks on the objects attached to it are ignored by the program: instead of, for example, dragging an object on the scene when clicking on it, the mousePressEvent of the scene fires rather than the object's, and I do not understand what the problem is.
P.P.S. I apologize for my English and thank everyone for any help!
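One common approach to the delete-and-redraw question is to keep a single temporary item and update its geometry on every mouse move instead of recreating it. A minimal sketch (Qt C++; PaintScene is illustrative, and a straight line stands in for the polygon):

    #include <QGraphicsScene>
    #include <QGraphicsSceneMouseEvent>
    #include <QGraphicsLineItem>

    class PaintScene : public QGraphicsScene
    {
    public:
        using QGraphicsScene::QGraphicsScene;

    protected:
        void mousePressEvent(QGraphicsSceneMouseEvent *event) override
        {
            if (!m_tempItem) {
                // First click: start the rubber-band line.
                m_start = event->scenePos();
                m_tempItem = addLine(QLineF(m_start, m_start));
            } else {
                // Second click: finalize; keep the item (or store your own
                // polygon object somewhere) and stop tracking it.
                m_tempItem->setLine(QLineF(m_start, event->scenePos()));
                m_tempItem = nullptr;
            }
            // Forward to the base class so clicks still reach the items.
            QGraphicsScene::mousePressEvent(event);
        }

        void mouseMoveEvent(QGraphicsSceneMouseEvent *event) override
        {
            if (m_tempItem) // update the existing item, don't recreate it
                m_tempItem->setLine(QLineF(m_start, event->scenePos()));
            QGraphicsScene::mouseMoveEvent(event);
        }

    private:
        QGraphicsLineItem *m_tempItem = nullptr;
        QPointF m_start;
    };

Note that the view usually needs view->setMouseTracking(true) for the scene to receive move events while no button is held, and forwarding events to the base class (as above) is what lets clicks reach the items themselves, which sounds like the P.S. problem.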
As I understand rendering textures in SDL2, everything waits behind the scenes: a texture appears after calling SDL_RenderPresent() and vanishes with SDL_RenderClear(), which you call before advancing to the next frame.
I understand that as far as imagery goes, but what about functionality? I have two button textures linked to mouse events that I want to see and use at different times in different places. I've got them rendering during different enum states, and each button does indeed appear and disappear on cue when the states change.
However, since both button textures are always "there" even while not being rendered, I can still mouse click on the invisible button that isn't being rendered at any given time. This doesn't seem to be an issue for mouse motion events, just mouse button events. How do I make a texture inactive as well as invisible when it's not being rendered?
I solved this one with some tinkering and help from a more experienced programmer named mbozzi, who clued me in to what was going on. The underlying issue was that I had completely decoupled the GUI logic from the GUI rendering. Which is what we are always told to do: decouple everything, right? But I needed to couple the logic and rendering that I want to occur at the same time and place.
My event poll >> mouse input >> image rendering code was one giant loop. When I split that giant loop into separate mini event poll >> mouse input >> image rendering loops that each run independently (not concurrently; I just put them in their own enum game states), the issue cleared up. So if anyone has a similar problem with clicking invisible buttons, hopefully this will help.
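In other words, hit-testing has to be gated by the same state that gates drawing. A minimal SDL2 sketch of the idea (Screen, playButton, and runLoop are illustrative; assumes an initialized SDL_Renderer):

    #include <SDL.h>

    enum class Screen { Menu, Game };

    // Sketch: one loop, but input handling is gated by the same state as drawing.
    void runLoop(SDL_Renderer* renderer)
    {
        Screen screen = Screen::Menu;
        SDL_Rect playButton{100, 100, 200, 50}; // only "live" on the menu screen
        bool running = true;

        while (running) {
            SDL_Event e;
            while (SDL_PollEvent(&e)) {
                if (e.type == SDL_QUIT)
                    running = false;

                // The button is only hit-tested in the state where it is drawn,
                // so the invisible button can never swallow a click.
                if (screen == Screen::Menu && e.type == SDL_MOUSEBUTTONDOWN) {
                    SDL_Point p{e.button.x, e.button.y};
                    if (SDL_PointInRect(&p, &playButton))
                        screen = Screen::Game;
                }
            }

            SDL_RenderClear(renderer);
            if (screen == Screen::Menu) {
                /* draw the menu and its button here */
            } else {
                /* draw the game here */
            }
            SDL_RenderPresent(renderer);
        }
    }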
I'm making a platform game, and everything was great: my sprite moved when I touched the left side of the screen and jumped when I touched the right side. But then I decided to make it move by itself, so I added the CCMoveTo action, and now it does not jump! I'm new to cocos2d, but everything else is working fine. I've already searched but couldn't find the answer; can someone please help me?
I tried everything, but it only jumps if I delete the CCMoveTo action.
I'm using cocos2d 2.0
Thank you!!
CCMoveTo will override any manual position changes, including changes from other actions like CCJump. Your character is set to move to the destination in a straight line, no matter what.
Issues like these are why I always recommend not using actions for gameplay logic. Position especially: you need to retain full control over it. Use a direction vector and integrate the position every update, and you're free to do everything you need.
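A bare-bones sketch of that idea in plain C++ (engine-agnostic; hook update() into the cocos2d scheduler, and treat all names and numbers as illustrative):

    struct Vec2 { float x = 0, y = 0; };

    Vec2 position;              // you own this; no action fights over it
    Vec2 velocity{60.0f, 0.0f}; // constant walk speed to the right
    const float kGravity = -400.0f;
    bool onGround = true;

    void jump() {
        if (onGround) { velocity.y = 250.0f; onGround = false; }
    }

    // Called every frame with the elapsed time in seconds.
    void update(float dt) {
        velocity.y += kGravity * dt;   // integrate gravity
        position.x += velocity.x * dt; // horizontal "move to" motion
        position.y += velocity.y * dt; // vertical jump/fall motion
        if (position.y <= 0) {         // crude ground collision
            position.y = 0;
            velocity.y = 0;
            onGround = true;
        }
        // then: sprite.setPosition(position.x, position.y);
    }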
My advice is to use one of the physics engines provided with cocos2d: Box2D or Chipmunk. With these engines you can define the characteristics of the world (e.g., the gravity vector) and a shape and mass for your sprite (e.g., a rectangle with a weight). Then, when you need it to jump, you just create a force vector with the characteristics you need (angle, magnitude, etc.) and keep your sprite updated from its physics body. This will make your sprite jump and land quite realistically.
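For the jump itself with Box2D, the usual move is to give the body a vertical velocity kick and copy the body's transform back to the sprite after each world step. A rough C++ sketch (body/fixture creation omitted; the numbers are illustrative):

    #include <Box2D/Box2D.h>

    // Gravity acts on the body; jumping is just a vertical velocity kick.
    void jump(b2Body* body)
    {
        b2Vec2 v = body->GetLinearVelocity();
        body->SetLinearVelocity(b2Vec2(v.x, 8.0f)); // keep walking, kick upward
    }

    void step(b2World& world, float dt)
    {
        world.Step(dt, 8, 3); // time step, velocity and position iterations
        // Then copy each body's position back to its sprite, converting
        // metres to points with the usual PTM_RATIO constant, e.g.:
        //   sprite.position = ccp(body->GetPosition().x * PTM_RATIO,
        //                         body->GetPosition().y * PTM_RATIO);
    }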
Is there any way to change the touch priority for cocos2d iOS sprites? What I have are multiple cards on the screen, arrayed in an arc, just as they would be if you held them in your hand. In this setup they overlap, and I need to recognize which card the touch was made on. I could measure the coordinates of each card's vertices, determine the visible area of a card, and then check whether the touch was made inside that area (couldn't I?), but I thought there would be an easier way to deal with this, say, changing the touch priority. That way the card closest to the screen would have the highest priority, decreasing toward the background, so that even if the touch landed on two sprites at once (one above, one below), it would be registered only on the sprite with the higher priority.
Reading around on the internet only revealed ways to change the priority between a sprite and a layer, i.e., whether the touch is handled by the layer or the sprite, but that's not what I want.
As far as I know, you get exactly that behavior by default: the sprites closer to you (on the z-axis) have priority. However, I think they pass the event down to the ones behind them as well. So what you need to do is swallow the event when it reaches one of your sprites: register a targeted touch delegate with swallowsTouches enabled and return YES from ccTouchBegan when the touch actually hits the card. Hope it helps.
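A rough sketch of that pattern, shown with the cocos2d-x 2.x C++ API (the cocos2d-iphone Objective-C version is analogous; Card and the priority choice are illustrative):

    #include "cocos2d.h"
    USING_NS_CC;

    class Card : public CCSprite, public CCTargetedTouchDelegate
    {
    public:
        virtual void onEnter() {
            CCSprite::onEnter();
            // Lower priority numbers are asked first, so cards drawn on top
            // (higher z-order) get first crack; true = swallow the touch.
            CCDirector::sharedDirector()->getTouchDispatcher()
                ->addTargetedDelegate(this, -getZOrder(), true);
        }

        virtual void onExit() {
            CCDirector::sharedDirector()->getTouchDispatcher()->removeDelegate(this);
            CCSprite::onExit();
        }

        virtual bool ccTouchBegan(CCTouch* touch, CCEvent* event) {
            CCPoint p = convertTouchToNodeSpace(touch);
            CCRect box = CCRectMake(0, 0, getContentSize().width,
                                          getContentSize().height);
            if (!box.containsPoint(p))
                return false; // not on this card; the dispatcher tries the next
            return true;      // claim it; swallowing hides it from cards below
        }
    };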
Hey, I'm trying to make a GUI lib with SFML, and everything's done except for one problem: making the interface stay still even when the camera moves or zooms.
This would be easy to fix if zooming weren't possible, but zooming out means scaling the contents of the interface up, which causes its text/images to become blurry.
Does anyone have a way around this issue? (Preferably using only SFML, but if I must I can add OpenGL stuff...)
Stop using the same camera and perspective matrices for your GUI as you do for your scene.
In the case of SFML, apply a different view, then restore the old one.
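A minimal sketch of that in SFML (window, worldView, and the draw calls stand in for your own):

    // Inside your render loop, assuming an sf::RenderWindow window and an
    // sf::View worldView that the user pans/zooms:
    window.clear();

    window.setView(worldView);               // scene camera (moves, zooms)
    // ... draw the world here ...

    window.setView(window.getDefaultView()); // fixed 1:1 view for the GUI
    // ... draw the interface here ...

    window.display();

Because the default view maps 1:1 to the window pixels, the GUI is never scaled, which also avoids the blurry text from zooming.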