My app uses modern OpenGL and MFC. I have a very large model, which means rendering is heavy and slow. I want to avoid frequent redraw of the large model as much as possible.
I know OpenGL always redraws everything to show updates. What I am looking for is something like the following:
The large model will be rendered once and then stays as background.
On top of the model, users can drag & draw line work. During the drag & draw process, OpenGL will constantly update/redraw the line work (but not the model).
When users are done with the line work, the large model will update once to include it.
The problem is: how do I make OpenGL redraw only the line work while keeping the model as the background during the drag & draw?
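Pseudocode-ish sketch of what I am after, in case it helps (drawModel() and drawLineWork() stand in for my existing rendering code, and I'm assuming an extension loader such as GLEW for the FBO calls); I don't know whether this is the right OpenGL way to do it:

    GLuint fbo = 0, colorTex = 0, depthRb = 0;

    void createModelCache(int w, int h)
    {
        // Color texture that will hold the cached image of the model.
        glGenTextures(1, &colorTex);
        glBindTexture(GL_TEXTURE_2D, colorTex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        // Depth renderbuffer so the model renders with normal depth testing.
        glGenRenderbuffers(1, &depthRb);
        glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, w, h);

        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRb);

        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        drawModel();                               // the expensive pass, done only once
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
    }

    void onPaint(int w, int h)                     // called from OnPaint / WM_PAINT
    {
        // Cheap per-frame work: copy the cached model, then draw only the line work.
        glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
        glBlitFramebuffer(0, 0, w, h, 0, 0, w, h, GL_COLOR_BUFFER_BIT, GL_NEAREST);
        glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);

        drawLineWork();                            // only the lines change while dragging
        // SwapBuffers(hdc);                       // platform swap under MFC/WGL
    }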
I've got a C++/QWidgets application with two embedded Qt3DWindows. The 3D windows can be hidden (i.e. the user switches to another view) while work is done in other parts of the application. Whenever the user switches back to the view with the embedded Qt3DWindows, there are a lot of changes to be made to the scenegraph.
Rendering is set to OnDemand, but whenever the view is switched back to the 3D windows there is a big lag, up to several seconds. Profiling the application has shown that a lot of work is done for rendering at that point because the scenegraph is changing (most time is spent in the material classes).
My working theory is that each change to the scenegraph (e.g. removing an entity, adding a new one) causes a separate render update, which leads to the lag due to the number of separate changes.
So my question is:
Is there a way to stop all render updates until the scenegraph has been updated/modified completely?
I'm thinking of something similar to the beginResetModel() and endResetModel() methods in QAbstractItemModel.
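For reference, this is the pattern I mean; everything between the two calls is treated as one change and attached views rebuild exactly once at the end (MyListModel, Item and m_items are just placeholder names):

    void MyListModel::reload(const QVector<Item> &items)
    {
        beginResetModel();   // announce that the whole model is about to change
        m_items = items;     // arbitrary amount of restructuring happens here
        endResetModel();     // views refresh once for the entire change
    }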
Say, my application has a 3D window rendering a TIN model of millions of triangles using OpenGL.
Goal: For some user operations there is no need to update the 3D window. The 3D view can just stay idle with the previously rendered content, without repeatedly calculating the rotation/translation/scaling/texturing. I assume this will save a lot of CPU and GPU time.
Current design: I have a rendering while loop running all the time. If I stop the while loop, then the content is rendered once and then disappears.
Question: is there a way to achieve the goal? Can anyone give a direction?
Instead of having a continuous rendering loop, you can use OpenGL to render your window only when the system sends you an event to repaint it. Additionally, you invalidate your own window if you know that its contents changed (e.g. as a reaction to a mouse click). In fact, this is the proper way to draw in your window for anything other than latency-sensitive applications.
The exact specifics highly depend on the API you use to create your window. For example,
With WinAPI you render on WM_PAINT and invalidate with InvalidateRect.
With Xlib you render on Expose and invalidate by sending your own window an Expose event (e.g. with XSendEvent).
With Qt you render in QOpenGLWindow::paintGL and invalidate with update().
With GLUT you render in glutDisplayFunc and invalidate with glutPostRedisplay.
... and so on. This is not OpenGL specific in any way.
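For illustration, a minimal sketch of the Qt variant might look like this (the class name and the event used to trigger the invalidation are arbitrary choices):

    #include <QGuiApplication>
    #include <QOpenGLWindow>
    #include <QOpenGLFunctions>
    #include <QMouseEvent>

    class Viewer : public QOpenGLWindow
    {
    protected:
        void paintGL() override
        {
            // Runs only when the window actually needs repainting.
            QOpenGLFunctions *f = context()->functions();
            f->glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
            f->glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            // ... draw the scene here; no while loop, no busy waiting
        }

        void mousePressEvent(QMouseEvent *) override
        {
            // Something visible changed, so schedule exactly one repaint.
            update();
        }
    };

    int main(int argc, char **argv)
    {
        QGuiApplication app(argc, argv);
        Viewer w;
        w.resize(800, 600);
        w.show();
        return app.exec();
    }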
I am writing video display software capable of displaying multiple video streams. For this I have a GridView holding VideoOutputs in QML, connected to a QAbstractListModel-derived class in C++ which provides instances of an object with a QAbstractVideoSurface Q_PROPERTY. It's working quite beautifully so far.
The video frames I am displaying come with metadata, however, containing data for axis-aligned bounding boxes. I don't know beforehand how many boxes there are; the number could even change on a frame-by-frame basis, and their position and size are not fixed either.
Ultimately, it should look something like this:
As I need to be able to display a few video streams at once, and preferably at 30+ fps, I need a fast method of drawing these boxes. Using QPainter on the QImage on which the QVideoFrame is based is rather slow so I was considering a few other approaches:
Using the QML Rectangle object in a Repeater with a C++-provided model (I was hoping to simply provide a QVariantList::fromVector()): this could work, however I would need a lot of models, which in turn I would need to provide to QML with a model, and I would likely need to call begin/endResetModel every frame that the boxes change to make QML update, which is also very slow. (A rough sketch of the data side I have in mind is at the end of this question.)
Using a shader to draw the boxes: this is a rather difficult approach. I'm no stranger to shaders, but in Qt/QML I don't know how to provide the shader with the necessary information.
Using OpenGL directly to draw the boxes: Again, I have no clue how to do this, but I think I could work it out if I googled.
My question: Which one, if any, of these approaches is the best? If none of these, which other approach could I use?
Thank you so much for taking the time to read my rather long question!
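To make the first approach a bit more concrete, this is roughly the data side I was hoping for, assuming the boxes can be exposed as a plain QVariantList property that the Repeater uses as its model (BoxOverlay and its members are placeholder names):

    #include <QObject>
    #include <QVariantList>
    #include <QVector>
    #include <QRectF>

    class BoxOverlay : public QObject
    {
        Q_OBJECT
        Q_PROPERTY(QVariantList boxes READ boxes NOTIFY boxesChanged)
    public:
        QVariantList boxes() const { return m_boxes; }

        void setBoxes(const QVector<QRectF> &rects)   // called once per decoded frame
        {
            m_boxes.clear();
            for (const QRectF &r : rects)
                m_boxes.append(QVariant::fromValue(r));
            emit boxesChanged();                      // the Repeater re-evaluates its model
        }

    signals:
        void boxesChanged();

    private:
        QVariantList m_boxes;
    };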
I've been developing a 2D RPG based on LWJGL, alongside Java 1.6, for 3 months now. My next goal is to write all of the non-game-ish stuff. This includes menus, text input boxes, buttons and things like the inventory and character information screens. As I am a Computer Engineering student, I'm trying to write everything on my own (except, of course, for the OpenGL part of LWJGL) so that I "test" myself on writing a simple 2D game engine.
I know that making such things from scratch requires basically mapping textures to quads (like the buttons), writing stuff on them and testing mouse/keyboard events which trigger other events inside the code.
The doubt I have is: should I use VBOs (as I'm using for the actual game rendering) or Immediate Mode when rendering such elements? I don't really know whether Immediate Mode would be such a drop in performance. Another point is: do the interface elements have to be updated as fast as the game itself? I don't think so, because nothing is actually moving... Are actual games made like that?
Immediate Mode is more straightforward for the task; you would not need to take care of caching and control composition/batching. The performance drop-off is not that big, unless you render a lot of text (thousands of glyphs) with each glyph in a separate glBegin..glEnd. If you don't use VBOs anywhere else, I would recommend trying them out for text output and doing everything else in the easier Immediate Mode.
GUI elements might not change as often as game state does, but there's a catch - you could need to update them each time there's a cursor interaction (e.g. a button gets an OnMouseOver event and needs to be rendered with a highlight). These kinds of events may happen very frequently, so that's why rendering may be needed at full speed.
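For reference, the immediate-mode textured quad in question looks roughly like this (shown as plain C-style GL calls; LWJGL's GL11 class exposes the same functions, and the texture handle and coordinates are placeholders):

    void drawButton(GLuint texture, float x, float y, float w, float h)
    {
        // One GUI element: a textured quad drawn in a 2D pixel-space projection.
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, texture);

        glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f); glVertex2f(x,     y);
        glTexCoord2f(1.0f, 0.0f); glVertex2f(x + w, y);
        glTexCoord2f(1.0f, 1.0f); glVertex2f(x + w, y + h);
        glTexCoord2f(0.0f, 1.0f); glVertex2f(x,     y + h);
        glEnd();
    }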
I came across this problem and knew it could be done better.
The problem:
When overlaying a QGLWidget (Qt OpenGL contextview) with Qt widgets, Qt redraws those widgets after every Qt frame.
Qt isn’t built to constantly redraw entire windows at more than 60 fps, so that’s enormously slow.
My idea:
Make Qt draw onto something else: a transparent texture. Make OpenGL use this texture whenever it redraws, drawing it on top of everything else. Make Qt redirect all interaction with the OpenGL context view to the widgets drawn onto the texture.
The advantage would be that Qt only has to redraw when it actually needs to (e.g. a widget is hovered or clicked, or the text cursor in a text field blinks), and can do partial redraws, which are faster.
My Question:
How should I approach this? How can I tell Qt to draw to a texture? How can I redirect interaction with a widget to another one (e.g. if I move the mouse over the region in the context view where a checkbox sits in the drawn-to-texture widget, Qt should register this event on the checkbox and repaint to reflect its hovered state)?
I separate out my 2D and 3D rendering in my CAD-like app for the very same reasons you have, although in my case the 2D stuff is not widgets - but it shouldn't make a difference. This is how I would approach the problem:
When your widget changes, render it onto a QGLFramebufferObject; do this by using the FBO as the QPaintDevice for a QPainter in your QGLWidget::paintEvent(..) and calling myWidget->render(myQPainter, ...). Repeat this for however many widgets you have, but render them all onto the same FBO - don't create an FBO for each one... Remember to clear it first, like a 'normal' framebuffer.
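A rough sketch of that step (MyGLView, m_widgetFbo and m_overlayWidgets are assumed names, not anything from Qt itself):

    void MyGLView::renderWidgetLayer()
    {
        QPainter painter(m_widgetFbo);               // the QGLFramebufferObject is a QPaintDevice
        painter.setCompositionMode(QPainter::CompositionMode_Source);
        painter.fillRect(QRect(QPoint(0, 0), m_widgetFbo->size()), Qt::transparent);  // clear first
        painter.setCompositionMode(QPainter::CompositionMode_SourceOver);

        foreach (QWidget *w, m_overlayWidgets)       // all widgets share the same FBO
            w->render(&painter, w->pos());           // offset each widget to its place
    }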
When your current OpenGL background changes, render it onto another QGLFramebufferObject using standard OpenGL calls, in the same way.
Create a pass-through vertex shader (the 'camera' will just be a unit cube), and a very simple fragment shader that layers the two textures on top of each other.
At the end of the QGLWidget::paintEvent(..), activate your shader program, bind your framebuffers as textures for it (myFBO->texture() gets the handle), and render a unit quad. Because your camera is a unit cube and the viewport size defines the FBO size, it will fill the viewport pixel-perfect.
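A rough sketch of the layering shader and the final composite pass, under the assumption that m_layerProgram is a QGLShaderProgram built from the pass-through vertex shader and a fragment shader along these lines, and that drawUnitQuad() is your own quad-drawing helper:

    static const char *layerFragSrc =
        "uniform sampler2D sceneTex;\n"    // background FBO
        "uniform sampler2D widgetTex;\n"   // widget FBO, with alpha
        "varying vec2 uv;\n"
        "void main() {\n"
        "    vec4 scene  = texture2D(sceneTex,  uv);\n"
        "    vec4 widget = texture2D(widgetTex, uv);\n"
        "    gl_FragColor = mix(scene, widget, widget.a);\n"   // widgets on top where opaque
        "}\n";

    void MyGLView::composite()
    {
        m_layerProgram->bind();
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, m_sceneFbo->texture());
        glActiveTexture(GL_TEXTURE1);
        glBindTexture(GL_TEXTURE_2D, m_widgetFbo->texture());
        m_layerProgram->setUniformValue("sceneTex", 0);
        m_layerProgram->setUniformValue("widgetTex", 1);
        drawUnitQuad();                              // the unit quad filling the viewport
        m_layerProgram->release();
    }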
However, that's the easy part... The hard part is the widget interaction. Because you are essentially rendering a 'proxy', you're going to have to relay the interaction between the 'real' and 'proxy' widgets, whilst keeping the 'real' widget invisible. Here's how I would start:
Some operating systems are a bit weird about rendering widgets without ever showing them, so you may have to show and then hide the widget after instantiation - because of the clever painting queue in Qt, it's unlikely to actually make it to the screen.
Catch all mouse events in the viewport, work out which 'proxy' widget the cursor is over (if any), and then offset it to get the relative position for the 'real' hidden widget - this value will depend on what parent object the 'real' widget has, if any. Then pass the event onto the 'real' widget before redrawing the widget framebuffer.
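A sketch of that relay for a mouse press (hitTestProxy() and m_widgetLayerDirty are assumed helpers/members, not Qt API):

    void MyGLView::mousePressEvent(QMouseEvent *event)
    {
        QWidget *target = hitTestProxy(event->pos());     // which widget is under the cursor?
        if (!target) {
            QGLWidget::mousePressEvent(event);            // let the 3D view handle it
            return;
        }
        QPoint local = event->pos() - target->pos();      // viewport -> widget coordinates
        QMouseEvent forwarded(event->type(), local, event->globalPos(),
                              event->button(), event->buttons(), event->modifiers());
        QApplication::sendEvent(target, &forwarded);      // deliver to the hidden widget
        m_widgetLayerDirty = true;                        // flag the widget FBO for redraw
    }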
I should state that I also had to create a 'flagging' system to handle redraws nicely. You don't want every widget event to trigger a widget FBO redraw, because there could be many simultaneous events (don't just think about the mouse) - but you would only want one redraw. So I created a system where, if anything in the application could change anything in the viewport visually, it would flag the viewport as 'dirty'. Then set up a QTimer for however many fps you are aiming for (in my situation the scene could get very heavy, so I also timed how long a frame took and then used that value +10% as the timer delay for the next check; this way the system isn't bombarded when rendering gets laggy). Then check the dirty status: if it's dirty, redraw; otherwise don't. I found life got easier with two dirty flags, one for the 3D stuff and one for the 2D - but if you need to maintain a constant draw rate for the OpenGL drawing there's probably no need for two.
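A sketch of that flagging scheme (again, the member names and the checkDirty() slot are assumptions):

    MyGLView::MyGLView(QWidget *parent) : QGLWidget(parent)
    {
        m_frameTimer = new QTimer(this);
        connect(m_frameTimer, SIGNAL(timeout()), this, SLOT(checkDirty()));
        m_frameTimer->start(1000 / 60);                   // target frame interval
    }

    void MyGLView::checkDirty()
    {
        if (!m_sceneDirty && !m_widgetLayerDirty)
            return;                                       // nothing changed: skip this tick
        m_sceneDirty = m_widgetLayerDirty = false;
        updateGL();                                       // triggers paintEvent()/paintGL()
    }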
I imagine what I did wasn't the easiest way to do it, but it provides plenty of scope for tuning and profiling - which makes life easier in the long run. All the answers are definitely not in this post, but hopefully it will get you on the way to a strategy.