Rendering issue regarding imagery versus functionality - c++

As I understand rendering textures in SDL2, everything waits behind the scenes: a texture appears after you call SDL_RenderPresent() and vanishes with SDL_RenderClear(), which you call before advancing to the next frame.
I understand that as far as imagery goes, but what about functionality? I have two button textures linked to mouse events that I want to see and use at different times in different places. I've got them rendering during different enum states, and each button does indeed appear and disappear on cue when the states change.
However, since both button textures are always "there" even while not being rendered, I can still click on whichever invisible button isn't being rendered at any given time. This doesn't seem to be an issue for mouse motion events, just mouse button events. How do I make a texture inactive as well as invisible when it's not being rendered?
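To make the problem concrete, here is a minimal sketch of what I mean (the Button struct and handleClick helper are illustrative, not my actual code): the hit test only looks at the button's rectangle, so it fires on every mouse-button event whether or not the texture was copied to the renderer this frame.

#include <SDL.h>

// Hypothetical button: the same rect is used for drawing and for hit testing.
struct Button {
    SDL_Texture* texture;
    SDL_Rect     rect;
};

bool handleClick(const Button& button, const SDL_Event& event)
{
    if (event.type != SDL_MOUSEBUTTONDOWN)
        return false;

    SDL_Point click = { event.button.x, event.button.y };
    // This test knows nothing about rendering, so it also succeeds for a
    // button whose texture was never drawn in the current game state.
    return SDL_PointInRect(&click, &button.rect) == SDL_TRUE;
}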

I solved this one with some tinkering, and with help from a more experienced programmer named mbozzi, who clued me in to what was going on. The underlying issue was that I had completely decoupled the GUI logic from the GUI rendering. That is what we are always told to do, right? Decouple everything. But here I needed to couple the logic and the rendering that are meant to happen at the same time and place.
My event poll >> mouse input >> image rendering code was one giant loop. When I split that giant loop into separate mini event poll >> mouse input >> image rendering loops that each run independently (not concurrently; each one lives in its own enum game state), the issue cleared up, as sketched below. So, if anyone has a similar problem with clicking invisible buttons, hopefully this will help.
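Here is a rough sketch of what that split looks like, with hypothetical state names and reusing the illustrative Button/handleClick from above: each state polls events, hit-tests only its own buttons, and renders only its own textures.

enum class GameState { Menu, Game, Quit };

void runMenuState(SDL_Renderer* renderer, GameState& state, Button& playButton)
{
    SDL_Event event;
    while (state == GameState::Menu) {
        while (SDL_PollEvent(&event)) {
            if (event.type == SDL_QUIT)
                state = GameState::Quit;
            else if (handleClick(playButton, event))   // only this state's button is tested
                state = GameState::Game;
        }
        SDL_RenderClear(renderer);
        SDL_RenderCopy(renderer, playButton.texture, nullptr, &playButton.rect);
        SDL_RenderPresent(renderer);
    }
}

// A similar runGameState() loop owns the other button, so a button that is
// not being rendered can never be clicked.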

Related

Is it not possible to have multiple objects in Qt being painted simultaneously?

I have read the Qt documentation on QPainter and the original question on SO that I looked into.
So, I had two classes, each with its own paint() function. The paint functions would be called upon receiving their respective paint events, which were triggered by different, independent user actions. This worked fine.
Now for some reason, I need to show and update both the objects at the same time.
So simply adding both of the items to the scene does not work: only one of them is shown and updated. Refactoring the code is not an issue for me; I can rearrange the two classes so that they are both drawn from one paint().
But this really makes me wonder, and this is my question (which I have also googled a bit): how are scenes with many dozens of objects painted simultaneously, or at least given the illusion of concurrency? Through threads somehow, or some time-based interleaving?
Maybe it's a silly question. I dunno.
It is indeed a silly question about an imaginary problem that doesn't exist in reality. The graphics view will schedule consecutive draws of the items in the order needed to produce the desired result. If your code doesn't implement the desired result, that's a whole different subject. There is no concurrency; these are consecutive operations that take place only in the main thread.
If your drawing is very complex, draw on a QImage from a secondary thread and use that QImage as a cache to draw your items in their respective paint() functions.
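A minimal sketch of the caching half of that advice (the CachedItem class is illustrative): paint() only blits a pre-rendered QImage, and the expensive composition that fills the image could be moved onto a worker thread, e.g. with QtConcurrent::run, as long as only the finished QImage is handed back to the GUI thread.

#include <QGraphicsItem>
#include <QImage>
#include <QPainter>

class CachedItem : public QGraphicsItem
{
public:
    QRectF boundingRect() const override { return QRectF(0, 0, 256, 256); }

    void paint(QPainter* painter, const QStyleOptionGraphicsItem*, QWidget*) override
    {
        if (m_cache.isNull())
            rebuildCache();                    // ideally done off the paint path
        painter->drawImage(boundingRect().topLeft(), m_cache);
    }

private:
    void rebuildCache()
    {
        m_cache = QImage(256, 256, QImage::Format_ARGB32_Premultiplied);
        m_cache.fill(Qt::transparent);
        QPainter p(&m_cache);
        p.setRenderHint(QPainter::Antialiasing);
        p.drawEllipse(QRect(8, 8, 240, 240));  // stand-in for the "very complex" drawing
    }

    QImage m_cache;
};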
"Now for some reason, I need to show and update both the objects at the same time."
What might that reason be? What does "at the same time" mean? In a single frame? Is a millisecond apart too much to qualify for "at the same time"?
Re QWidget painting: The paint events are delivered to individual widgets by the widget compositor. The way it works with the default raster back end is as follows: The topmost widget in the hierarchy is backed by a QImage. When any of the sub-widgets are to be repainted, the compositor delivers composite paint events to the widgets that overlay the area to be repainted. This is done sequentially as the compositor traverses the widget graph.
Re QGraphicsItem painting: The paint "events" are delivered to individual items by the scene. The items to be painted are selected based on what area needs updating, which items were explicitly marked for update, etc. The painter is set up to correctly composite the item with the rest of the scene. The calls to paint() are made sequentially as the scene traverses the item graph.
It would be, in general, impossible to do these in parallel due to data dependencies, and the fact that there are no requirements for the paintEvent or paint to be thread-safe.
Your problem is not directly related to this at all; you need to show a complete code example that reproduces your issue. Most likely your implementation of the item ignores some of the requirements for the item's behavior.
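For what it's worth, here is a bare-bones example of the non-problem (illustrative code, not the asker's): two independent items, each with its own paint(), added to one scene. The view paints both of them, one after the other, on the main thread.

#include <QApplication>
#include <QGraphicsItem>
#include <QGraphicsScene>
#include <QGraphicsView>

int main(int argc, char** argv)
{
    QApplication app(argc, argv);

    QGraphicsScene scene;
    scene.addItem(new QGraphicsRectItem(QRectF(0, 0, 100, 100)));       // paint() #1
    scene.addItem(new QGraphicsEllipseItem(QRectF(60, 60, 100, 100)));  // paint() #2

    QGraphicsView view(&scene);
    view.show();   // both items are drawn, consecutively, whenever the exposed area needs them
    return app.exec();
}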

VTKActor not visible after render but visible on camera->resetview()

I am working on a Qt-VTK project. We have a line drawing function where straight lines are created between two mouse-click positions. But once the actor is created, it is not visible. I was calling the render function just after adding the actor, but that didn't work. If I do camera->resetview(), the lines become visible, but the entire perspective changes. Where am I going wrong?
thanks
Rwik
This may not be relevant to you, but I had this exact same problem (in ActiViz [managed VTK]) and wrangled with it for a week, so I hope this helps someone out there. It turned out to be a problem with the location of the lines we wanted to draw on the canvas; they were too far away from the camera (on the Z axis) to be visible.
For us, we were trying to draw a cross on the viewing area wherever the user clicked. The data points were there, as were the actors and whatnot, but they would only become visible in the scene if you called resetCamera() and thus changed the camera's configuration.
Initially, I blamed the custom interactor we had added to circumvent the default interactor's swallowing of MouseUp events (intended behavior). Investigation revealed that this seemed unlikely.
After this I shifted the blame onto the camera under the suspicion that perhaps the reset call was making a call to some kind of update method which I wasn't aware of. I called resetCamera() and then reverted the camera values to what they were initially.
Once that was done, it turned out that the crosses would appear when the camera zoomed out and then disappear again as soon as it was set back, and it was at this point I realized it was something to do with the scene itself.
I then checked the methods we were using to retrieve the mouse location in 3D and realized that the z value was enormous: it was placing the points too far away, as a byproduct of VTK's methods for converting 2D locations on the control to 3D locations in the scene and vice versa.
So, after all that, it was a very mundane and avoidable mistake originating from the methods renderer.DisplayToWorld() and WorldToDisplay().
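For anyone hitting the same thing, here is roughly what the conversion looks like with the C++ API (the ActiViz calls wrap these). The depth value fed into SetDisplayPoint() decides how far from the camera the resulting world point lands, which is exactly where ours blew up; the fixed value below is only for illustration.

#include <vtkRenderer.h>

void displayToWorld(vtkRenderer* renderer, int clickX, int clickY, double world[3])
{
    // Use an explicit, sane normalized depth for illustration; roughly,
    // 0.0 maps to the near clipping plane and 1.0 to the far plane.
    const double displayDepth = 0.5;

    renderer->SetDisplayPoint(clickX, clickY, displayDepth);
    renderer->DisplayToWorld();

    double* homogeneous = renderer->GetWorldPoint();   // x, y, z, w
    for (int i = 0; i < 3; ++i)
        world[i] = homogeneous[i] / homogeneous[3];    // de-homogenize
}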
This might not be everyone's problem, but I hope I've spared someone a week of fiddling around with VTK.
I think it's a bit hard to help without seeing the code, but have you tried using
ui->qvtkwidget->update();
where ui is the instance of your class derived from QMainWindow?
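If a plain update() is not enough, it may also be worth explicitly rendering the widget's VTK render window after adding the actor. A small sketch, assuming the same qvtkwidget member as in the snippet above:

#include <QVTKWidget.h>
#include <vtkRenderWindow.h>

// 'widget' would be ui->qvtkwidget from the snippet above.
void refreshVtkView(QVTKWidget* widget)
{
    widget->GetRenderWindow()->Render();  // ask VTK to redraw the newly added actors
    widget->update();                     // and ask Qt to repaint the widget itself
}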

OpenGL window draws fine, but all the windows on top of my OpenGL window go black

I have an app that mixes OpenGL with Motif. The big main window that has OpenGL in it redraws fine, but the subwindows sitting on top of it all go black; specifically, just the parts of those subwindows that are right on top of the main window. Those subwindows contain only Motif code (except for one).
The app doesn't freeze up or dump core. Data is still flowing, and as text fields, etc., in the various subwindows get updated, those parts redraw. Dragging windows across each other or minimizing/unminimizing also triggers redraws. The timing of the "blackout" is random: I run the same 1-hour dataset every time, and sometimes the blackout happens 5 minutes into the run, sometimes 30 minutes in, etc.
I went through the process of turning off sections of code until the problem stopped, narrowed it down more and more, and found it had to do with the use of the depth buffer. In other words, when I comment out glEnable(GL_DEPTH_TEST), the problem goes away. So the problem seems to have something to do with the use of the depth buffer.
As far as I can tell, the depth buffer is being cleared before redrawing is done, as it should be. There are if-statements wrapped around the glClear calls, so I put messages in there and confirmed that the glClear of the depth buffer does happen even when the blackout occurs. Also, glGetError() didn't return any errors.
UPDATE 6/30/2014
Looks like there's still at least one person looking at this (thanks, UltraJoe). If I remember correctly, it turned out that the code was sometimes swapping buffers without first defining the back buffer and drawing anything to it. It wasn't obvious to me before because it's such a long routine. There were some other minor things I had to clean up, but I think that was the main cause.
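In case it helps, the per-frame ordering that ended up mattering looks roughly like this (a sketch with placeholder names; drawScene() stands in for the application's actual drawing): select the back buffer, clear color and depth, draw, then swap.

#include <GL/gl.h>
#include <GL/glx.h>

void drawScene();  // placeholder for the application's drawing code

void drawFrame(Display* display, GLXDrawable window, GLXContext context)
{
    glXMakeCurrent(display, window, context);

    glDrawBuffer(GL_BACK);                               // define the buffer being drawn to
    glEnable(GL_DEPTH_TEST);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);  // clear depth as well as color

    drawScene();

    glXSwapBuffers(display, window);                     // only swap once something was drawn
}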
How did you create the OpenGL window/context? Did you just take the X11 Window handle of your Motif main window and create the OpenGL context on that, or did you create your own subwindow within that Motif window for OpenGL?
You should not draw directly into any window managed by a toolkit, unless it is a widget meant for exclusive OpenGL use. The reason is that most toolkits don't create their own sub-window for each and every element, and they also reuse parts of their graphics resources.
Thus you should create your own sub-window for OpenGL, and maybe a further subwindow using glXCreateWindow as well.
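A minimal sketch of that approach (illustrative names, error handling omitted): pick a GLXFBConfig for OpenGL, create a dedicated X sub-window inside the Motif parent with its own colormap, and build the GLX window and context from that config rather than drawing into the toolkit's window.

#include <GL/glx.h>
#include <X11/Xlib.h>

GLXWindow createGlSubWindow(Display* display, Window motifParent,
                            int width, int height, GLXContext* outContext)
{
    const int attribs[] = {
        GLX_RENDER_TYPE,   GLX_RGBA_BIT,
        GLX_DRAWABLE_TYPE, GLX_WINDOW_BIT,
        GLX_DOUBLEBUFFER,  True,
        GLX_RED_SIZE, 8, GLX_GREEN_SIZE, 8, GLX_BLUE_SIZE, 8,
        GLX_DEPTH_SIZE, 24,
        None
    };

    int count = 0;
    GLXFBConfig* configs = glXChooseFBConfig(display, DefaultScreen(display), attribs, &count);
    GLXFBConfig config = configs[0];               // assume at least one match, for brevity

    XVisualInfo* visual = glXGetVisualFromFBConfig(display, config);
    XSetWindowAttributes swa = {};
    swa.colormap = XCreateColormap(display, motifParent, visual->visual, AllocNone);

    // A dedicated child window inside the Motif-managed parent, used only by OpenGL.
    Window xwin = XCreateWindow(display, motifParent, 0, 0, width, height, 0,
                                visual->depth, InputOutput, visual->visual,
                                CWColormap, &swa);
    XMapWindow(display, xwin);

    *outContext = glXCreateNewContext(display, config, GLX_RGBA_TYPE, nullptr, True);
    GLXWindow glxWin = glXCreateWindow(display, config, xwin, nullptr);

    XFree(visual);
    XFree(configs);
    return glxWin;
}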
This is an old question, I know, but the answer may help someone else.
This sounds like you're selecting a bad visual for your OpenGL window, or you're creating a new colormap that's overriding the default. If at all possible, choose a TrueColor 24-plane visual for everything in your application. TrueColor visuals use read-only color cells, and 24 planes allow every supported color to be available to every window without having to overwrite color cells.

Are offscreen animations ignored by rendering and CPU?

Just wondering how Cocos manages CPU cycles and the graphics engine for CCSprites that are offscreen, including those in the middle of an animation. With many animated sprites going on and off the screen, I could check and stop each animation when it goes off-screen and then restart it when it is about to come back on, but I'm wondering whether this is necessary.
Suppose you had a layer with a bunch of them and you make the layer invisible, but don't stop the sprite animations. Will they still use CPU time?
I just did a quick test (good question :) ) in a game where I can slide the screen over a large map that contains images of soldiers performing an 'idle' animation. They continue running when off-screen (I tacked a CCCallFunc onto a sequence inside a repeat-forever, calling a simple selector that logs).
I suspect they would also keep running when the object is not visible. It kind of makes sense, especially for animations. In my use case, if the animation were stopped, it could cause a cognitive disconnect as the user slid the soldier in and out of view, especially when the soldier is walking on the map - he could actually walk into view without the user having interacted with the screen at all.
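If you do decide to stop offscreen work manually, one way to do it with the cocos2d-x 2.x C++ API looks roughly like this (a sketch only; the method names are from that version, and the visibility test assumes the sprites sit directly in screen space):

#include "cocos2d.h"
USING_NS_CC;

void pauseOffscreenSprites(CCArray* sprites)
{
    CCSize winSize = CCDirector::sharedDirector()->getWinSize();
    CCRect screen = CCRectMake(0, 0, winSize.width, winSize.height);

    CCObject* obj = NULL;
    CCARRAY_FOREACH(sprites, obj)
    {
        CCSprite* sprite = (CCSprite*)obj;
        // boundingBox() is in the parent node's space; convert if the layer scrolls.
        if (screen.intersectsRect(sprite->boundingBox()))
            sprite->resumeSchedulerAndActions();   // visible: let its animation run
        else
            sprite->pauseSchedulerAndActions();    // offscreen: stop spending CPU on it
    }
}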

Black flicker while resizing translucent Qt widget (only when Aero is enabled)?

I have a top-level Qt widget with the FramelessWindowHint flag and the WA_TranslucentBackground attribute set. It has several children, each of which draws an image on it. They are not in a layout. Instead, I simply move them around when something changes (it is not user-resizable).
There are two states to the window - a big state and a small state. When I switch between them, I resize the window and reposition the children. The problem is that as the window resizes, a black box is briefly flashed on the top-level window before the images are painted over it.
The problem goes away if I disable Aero. I found brief mention of this problem being fixed in an article describing a new release of Qt (this release is long past), but it still doesn't work.
Any ideas why?
Thanks!
I don't have experience with Qt specifically, but I have worked with other windowing toolkits. Typically you see this kind of flashing when you are drawing updates directly to the screen. The fix is to use double buffering instead, which basically means that you render your updates into an offscreen buffer (a bitmap of some sort, in the purest sense of the word) and then copy the entire updated image to the screen in a single, fast operation.
The reason you only see the flickering sometimes is simply an artifact of how quickly your screen refreshes versus how quickly the updates are drawn. If you get "lucky" then all the updates occur between screen refreshes and you may not see any flicker.
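In Qt terms, that idea would look something like the following sketch (I haven't verified it against Qt, and note that Qt widgets are normally double-buffered already, so this mainly helps when you are assembling an expensive image yourself): compose the frame into an off-screen QPixmap, then blit it in paintEvent() with a single drawPixmap() call.

#include <QPainter>
#include <QPixmap>
#include <QWidget>

class CachedWidget : public QWidget
{
protected:
    void paintEvent(QPaintEvent*) override
    {
        if (m_buffer.size() != size()) {
            // Re-compose the whole frame off screen...
            m_buffer = QPixmap(size());
            m_buffer.fill(Qt::transparent);
            QPainter off(&m_buffer);
            off.fillRect(rect(), QColor(40, 40, 40, 200));  // stand-in for the real drawing
        }
        // ...then copy it to the widget in one fast operation.
        QPainter onScreen(this);
        onScreen.drawPixmap(0, 0, m_buffer);
    }

private:
    QPixmap m_buffer;
};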