Using Qt3D for a huge topography model? - c++

My scene currently consists of a huge topography model (millions of vertices). Now the scene is becoming more complex and contains many smaller 3D objects. Does it make sense to switch from a QFramebufferObject (with C++/OpenGL) to Qt3D?
What about the speed?
Which file format (Wavefront OBJ, Stanford Triangle Format PLY, STL (STereoLithography)) is suitable? I am currently loading the data from the .hgt format into a vector array and then into the QFramebufferObject.
With Qt3D, is it possible to load only the tiles that are currently visible in the viewport into graphics memory, and to delete them as soon as they leave the viewport when the camera moves?

That's difficult to say. I assume by "Does it make sense to switch" you mean: will it be fast enough? Regarding your two points:
Qt3D uses Assimp internally to load models, so if that library is able to load .hgt files, so should Qt3D.
What do you mean by reload? Do the tiles change dynamically in the model, or do you mean re-render? There's QFrustumCulling, but that still seems to render the whole entity. Do you know whether your object consists of multiple parts? When Qt3D loads models (i.e. whole scenes) from files, I think it preserves their structure. If your model is split into multiple standalone components, QFrustumCulling could improve rendering speed, because components that are not in view don't get drawn.
I personally feel that Qt3D is slower than a hand-crafted renderer in OpenGL. But it is constantly improving, so if trying it out doesn't take long, it may be worth a shot. Also, the flexibility of Qt3D might make up for the slower rendering speed.
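If you do try Qt3D, frustum culling is enabled through a framegraph node. Below is a minimal C++ sketch of such a framegraph branch (Qt3D 5.x module and class names; treat it as an untested starting point, not a verified setup):

```cpp
// Minimal framegraph with frustum culling (Qt3D 5.x; untested sketch).
#include <Qt3DRender/QCamera>
#include <Qt3DRender/QCameraSelector>
#include <Qt3DRender/QClearBuffers>
#include <Qt3DRender/QFrameGraphNode>
#include <Qt3DRender/QFrustumCulling>
#include <Qt3DRender/QViewport>

Qt3DRender::QFrameGraphNode *buildFrameGraph(Qt3DRender::QCamera *camera)
{
    auto *viewport = new Qt3DRender::QViewport;                      // framegraph root
    auto *cameraSelector = new Qt3DRender::QCameraSelector(viewport);
    cameraSelector->setCamera(camera);
    auto *clear = new Qt3DRender::QClearBuffers(cameraSelector);
    clear->setBuffers(Qt3DRender::QClearBuffers::ColorDepthBuffer);
    // Entities whose bounding volume falls outside the camera frustum
    // are skipped by the branches below this node.
    new Qt3DRender::QFrustumCulling(clear);
    return viewport;
}
```

You would install the returned node with Qt3DExtras::Qt3DWindow::setActiveFrameGraph(); whether it actually helps depends, as noted above, on the model being split into separate entities.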

Related

best way to wrap opengl models

In short: What is the "preferred" way to wrap OpenGL's buffers, shaders and/or matrices required for a more high level "model" object?
I am trying to write a tiny graphics engine in C++ built on core OpenGL 3.3, and I would like to implement as clean a solution as possible for wrapping a higher-level "model" object, which would contain its vertex buffer, global position/rotation, textures (and maybe a shader?) and potentially other information.
I have looked into an open-source engine called GamePlay3D and don't quite agree with many aspects of its solution to this problem. Is there any good resource that discusses this topic for modern OpenGL, or some simple and clean way to do this?
That depends a lot on what you want your engine to be able to do. Also note that these concepts are the same in DirectX (or any other graphics API), so don't focus your search too much on OpenGL. Here are a few concepts that are very common in a 3D engine (names can differ):
Mesh:
A mesh contains submeshes, each submesh contains a vertex buffer and an index buffer. The idea being that each submesh will use a different material (for example, in the mesh of a character, there could be a submesh for the body and one for the clothes.)
Instance:
An instance (or mesh instance) references a mesh, a list of materials (one for each submesh in the mesh), and contains the "per instance" shader uniforms (world matrix etc.), usually grouped in a uniform buffer.
Material: (This part changes a lot depending on the complexity of the engine). A basic version would contain some textures, some render states (blend state, depth state), a shader program, and some shader uniforms that are common to all instances (for example a color, but that could also be in the instance depending on what you want to do.)
More complex versions usually separate the materials into passes (or sometimes techniques that contain passes) that contain everything in the previous paragraph. You can check the Ogre3D documentation for more info about that and to look at one possible implementation. There's also a very good article called Designing a Data-Driven Renderer in GPU Pro 3 that describes an even more flexible system based on the same idea (but also more complex).
Scene: (I call it a scene here, but it could really be called anything). It provides the shader parameters and textures from the environment (lighting values, environment maps, this kind of things).
And I think that's it for the basics. With that in mind, you should be able to find your way around the code of any open-source 3D engine if you want the implementation details.
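As a rough illustration only, the concepts above might map to types like these (hypothetical names; GPU handles shown as raw OpenGL-style ids):

```cpp
// Hypothetical sketch of the concepts above; real engines differ widely.
#include <cstdint>
#include <vector>

struct SubMesh {
    uint32_t vertexBuffer;  // GPU handle (e.g. an OpenGL VBO)
    uint32_t indexBuffer;   // GPU handle (e.g. an OpenGL IBO)
    uint32_t indexCount;
};

struct Mesh {
    std::vector<SubMesh> subMeshes;  // one material slot per submesh
};

struct Material {
    uint32_t shaderProgram;          // shared by all instances using it
    std::vector<uint32_t> textures;
    // render states (blend, depth) and per-material uniforms go here
};

struct Instance {
    const Mesh *mesh;
    std::vector<const Material *> materials;  // one per submesh
    float worldMatrix[16];                    // "per instance" uniforms
};

struct Scene {
    // environment-wide shader parameters: lights, environment maps, ...
    std::vector<Instance> instances;
};
```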
This is in addition to Jerem's excellent answer.
At a low level, there is no such thing as a "model", there is only buffer data and the code used to process it. At a high level, the concept of a "model" will differ from application to application. A chess game would have a static mesh for each chess piece, with shared textures and materials, but a first-person shooter could have complicated models with multiple parts, swappable skins, hit boxes, rigging, animations, et cetera.
Case study: chess
For chess, there are six pieces and two colors. Let's over-engineer the graphics engine to show how it could be done if you needed to draw, say, thousands of simultaneous chess games on the same screen, instead of just one game. Here is how you might do it.
Store all models in one big buffer. This buffer has all of the vertex and index data for all six models clumped together. This means that you never have to switch buffers / VAOs when you're drawing pieces. Also, this buffer never changes, except when the user goes into settings and chooses a different style for the chess pieces.
Create another buffer containing the current location of each piece in the game, the color of each piece, and a reference to the model for that piece. This buffer is updated every frame.
Load the necessary textures. Maybe the normals would be in one texture, and the diffuse map would be an array texture with one layer for white and another for black. The textures are designed so you don't have to change them while you're drawing chess pieces.
To draw all the pieces, you just have to update one buffer, and then call glMultiDrawElementsIndirect()... once per frame, and it draws all of the chess pieces. If that's not available, you can fall back to glDrawElements() or something else.
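For illustration, the per-frame draw for this design could look roughly like this (assumes OpenGL 4.3+ for glMultiDrawElementsIndirect; buffer creation, shaders, and the per-piece data upload are omitted):

```cpp
// Per-frame indirect draw for the chess design above (OpenGL 4.3+).
#include <GL/glew.h>

// Matches the command layout glMultiDrawElementsIndirect expects.
struct DrawElementsIndirectCommand {
    GLuint count;          // index count of one piece model
    GLuint instanceCount;  // how many pieces currently use that model
    GLuint firstIndex;     // model's offset into the shared index buffer
    GLuint baseVertex;     // model's offset into the shared vertex buffer
    GLuint baseInstance;   // first entry in the per-piece instance buffer
};

void drawAllPieces(GLuint vao, GLuint indirectBuffer, GLsizei modelCount)
{
    glBindVertexArray(vao);                                // one VAO for all models
    glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuffer); // array of commands above
    // One call draws every piece of every game; the vertex shader picks up
    // per-piece position/color via gl_InstanceID and baseInstance.
    glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT,
                                nullptr, modelCount, 0);
}
```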
Analysis
You can see how this kind of design won't work for everything.
What if you have to stream new models into memory, and remove old ones?
What if the models have different size textures?
What if the models are more complex, with animations or forward kinematics?
What about translucent models?
What about hit boxes and physics data?
What about different LODs?
The problem here is that your solution, and even the very concept of what a "model" is, will be very different depending on what your needs are.

Large 3D scene streaming

I'm working on a 3D engine suitable for very large scene display.
Apart from the rendering itself (frustum culling, occlusion culling, etc.), I'm wondering what the best solution for scene management is.
Data is given as a huge list of 3D meshes with no relation between them, so I can't generate portals, I think...
The main goal is to be able to run this engine on systems with low RAM (500 MB-1 GB), and the scenes loaded into it are very large and can contain millions of triangles, which leads to very intensive memory usage. I'm currently working with a loose octree, constructed on loading; it works well on small and medium scenes, but many scenes are just too huge to fit entirely in memory. So here comes my question:
How would you handle loading and unloading chunks of the scene dynamically (and ideally seamlessly), and what would you base the load/unload decision on? If needed, I can create a custom file format, as scenes are exported using a custom exporter from known 3D authoring tools.
Important information: Many scenes can't be effectively occluded, because of their construction.
Example: a huge pipe network, so there isn't much occlusion, but a very high number of elements.
I think that the best solution will be a "solution pack", a pack of different techniques.
Level of detail (LOD) can reduce the memory footprint if unused levels are not loaded. Levels can be swapped more or less seamlessly by using an alpha mix between the old and the new detail. The easiest controller uses the mesh's distance to the camera (see the sketch after this list).
Free host memory (RAM) once the object has been uploaded to the GPU (device), and obviously free all unused memory (OpenGL resources too). Valgrind can help you with this one.
Use low quality meshes and use tessellation to increase visual quality.
Use VBO indexing; this should reduce VRAM usage and increase performance.
Don't use meshes if possible; terrain can be rendered using heightmaps. Some things can be procedurally generated.
Use bump and/or normal maps. This will improve quality, allowing you to reduce the vertex count.
Divide those "pipes" into different meshes.
Fake 3D meshes with 2D images: impostors, skydomes...
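As an illustration of the LOD point above, a minimal distance-based controller might look like this (hypothetical names; the alpha mix between levels is left out):

```cpp
// Distance-based LOD pick (sketch; names are hypothetical).
#include <cstddef>

struct LodLevel {
    float maxDistance;  // use this level while the camera is closer than this
    // mesh handle / vertex data for this level would live here
};

// Picks the LOD level for an object given its distance to the camera.
std::size_t selectLod(const LodLevel *levels, std::size_t levelCount,
                      float distanceToCamera)
{
    for (std::size_t i = 0; i + 1 < levelCount; ++i)
        if (distanceToCamera < levels[i].maxDistance)
            return i;
    return levelCount - 1;  // farthest level: the coarsest mesh
}
```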
If the vast amount of RAM is going to be used by textures, there are commercial packages available, such as the GraniteSDK, that offer seamless LOD-based texture streaming using a virtual texture cache. See http://graphinesoftware.com/granite . Alternatively you can look at http://ir-ltd.net/
In fact you can use the same technique to construct polys on the fly from texture data in the shader, but it's going to be a bit more complicated.
For voxels there is a technique to construct octrees entirely in GPU memory and page in/out the parts you really need. The rendering can then be done using raycasting. See this post: Use octree to organize 3D volume data in GPU, http://www.icare3d.org/research/GTC2012_Voxelization_public.pdf and http://www.cse.chalmers.se/~kampe/highResolutionSparseVoxelDAGs.pdf
It comes down to how static the scene is going to be and, following from that, how well you can pre-bake the data according to your visualization needs. It would already help if you can determine visibility constraints up front (e.g. google Potential Visibility Sets) and organize the data so that you can stream it on request. Since the visualizer will have limits, you always end up with a strategy to fit a section of the data into GPU memory as quickly and accurately as possible.

What is the purpose of a graphics library such as OpenGL?

I realize this is probably a ridiculous question, but before trying to figure out what libraries to use for which projects, I think it makes sense to really understand the purpose of such libraries first.
A lot of video games use libraries like OpenGL. All the tutorials I've seen of such libraries demonstrate how to write code that tells the computer to draw something. Thing is, in games these days everything is modeled using software such as ZBrush, Maya, or 3ds Max. The models are textured and are good to go. It seems like all you'd need to do is write an animation loop that draws the models and updates repeatedly rather than actually program the code to draw every little thing. That would be both extremely time-consuming and would make the models useless. So where does OpenGL or Direct3D come in in relation to video games and 3D art? What is so crucial about them when all the graphics are already created and just need to be loaded and drawn? Are they used mainly for shaders and effects?
This question may just prove how new I am to this, but it's one I've never heard asked. I'm just starting to learn programming and I'm understanding the code and logic fairly well, but I don't understand graphics libraries or certain frameworks at all and tutorials are not helping.
It seems like all you'd need to do is write an animation loop that draws the models and updates repeatedly rather than actually program the code to draw every little thing.
Everything that happens in a computer does so because a program of some form tells it exactly what to do. The letters that this message is composed of only appear because your web-browser of choice downloaded this file via TCP/IP over an HTTP protocol, decoded its UTF-8-encoded text, interpreted that text as defined by the XML, HTML, JavaScript, and so forth standards, and then displayed the visible portion as defined by the Unicode standard for text layout and in accord with HTML et al, using the displaying and windowing abilities of your OS or window manager or whatever.
Every single part of that operation, from the downloading of the file to its display, is governed by a piece of code. Every pixel you are looking at on the screen is where it is because some code put it there.
HTML alone doesn't mean anything. You cannot just take an HTML file and blast it to the screen. Some code must interpret it. You can interpret HTML as a text file, but if you do, it loses all formatting, and you get to see all of the tags. A web browser can interpret it as proper HTML, in which case you get to see the formatting. But in every case, the meaning of the HTML file is determined by how it is used.
The "draws the model" part of your proposed algorithm must be done by someone. If you don't write that code, then you must be using a library or some other system that will cause the model to appear. And what does that library do? How does it cause the model to appear?
A model, like an HTML web page, is meaningless by itself. Or to put it another way, your algorithm can be boiled down to this:
Animate the model.
????
Profit!
You're missing a key component: how to actually interpret the model and cause it to appear on the screen. OpenGL/D3D/a software rasterizer/etc is vital for that task.
A lot of video games use libraries like OpenGL.
First and foremost: OpenGL is not a library per se, but an API (a specification). The OpenGL API may be implemented in the form of a software library, but these days it is much more common to implement OpenGL in the form of a driver that turns OpenGL function calls into control commands for a graphics processor (GPU) sitting on a graphics card.
All the tutorials I've seen of such libraries demonstrate how to write code that tells the computer to draw something.
Yes. This is because things need to be drawn to make any use of them.
Thing is, in games these days everything is modeled using software such as ZBrush, Maya, or 3ds Max.
At this point the models just consist of a large list of numbers, plus further numbers that tell how the other numbers form some sort of geometry. Those numbers are not some sort of ready-to-use image.
The models are textured and are good to go.
They are a bunch of numbers, with some additional numbers controlling texturing. The textures themselves are in turn just numbers.
It seems like all you'd need to do is write an animation loop that draws the models
And how do you think this drawing is going to happen? There's no magic "here you have a model, display it" function. For one thing, the numbers making up a model could mean anything, so some program must give meaning to those numbers. And that is a renderer.
and updates repeatedly rather than actually program the code to draw every little thing.
Again, there is no magic "draw it" function. Drawing a model involves going through each of the numbers it consists of and turning them into drawing commands for the GPU.
That would be both extremely time-consuming and would make the models useless.
How are the models useless, when they are what controls the issuing of commands to OpenGL? Or do you think OpenGL is used to actually "create" models?
So where does OpenGL or Direct3D come in in relation to video games and 3D art?
It is used to turn the numbers of a 3D model, as saved by a modeller, into something pleasant to look at.
What is so crucial about them when all the graphics are already created
The graphics are not yet created when the model is done. What's created is a model, plus some auxiliary data in the form of textures and shaders, which are then turned into graphics in real time, at the execution time of the program.
and just need to be loaded and drawn?
Again, after being loaded, a model is just a bunch of numbers. And drawing means turning those numbers into something to look at, which requires sending drawing commands to the graphics processor (GPU), which happens using an API like OpenGL or Direct3D.
Are they used mainly for shaders and effects?
They are used to turn the numbers generated by a 3D modelling program (Blender, Maya, ZBrush) into an actual picture.
You have data, like a model with vertices, normals, and textures. As @datenwolf stated above, those are all just numbers sitting on the hard drive or in RAM, not colors on the screen.
Your CPU (which is where the program you write runs) can't talk to the screen directly. Instead, you send the data you want to draw to the GPU. The GPU then draws the data. Graphics APIs like OpenGL and Direct3D allow programs running on the CPU to send data to the GPU and customize how the GPU draws it. This is a gross simplification, but it sounds like you just need an overview.
Ultimately, every graphics program must go through a graphics API. When you draw an image, for example, you send the GPU the image, and the GPU draws it on the screen. Draw some text? Send the data to the GPU. The GPU draws it. Remember, your code can't talk to the screen. It CAN talk to the GPU through OpenGL or Direct3D, and the GPU then draws the data.
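To make that concrete, here is a minimal sketch of the upload-then-draw flow in OpenGL 3.3 core (GLEW assumed; context creation and the shader program a core profile requires are omitted for brevity):

```cpp
// Upload vertex data once, then draw it each frame (OpenGL 3.3 core + GLEW).
#include <GL/glew.h>

GLuint uploadTriangle()
{
    const float vertices[] = { -0.5f, -0.5f,  0.5f, -0.5f,  0.0f, 0.5f };
    GLuint vao = 0, vbo = 0;
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    // This is the moment the numbers leave the CPU and land in GPU memory.
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, nullptr);
    glEnableVertexAttribArray(0);
    return vao;
}

void drawFrame(GLuint vao)
{
    glBindVertexArray(vao);
    glDrawArrays(GL_TRIANGLES, 0, 3);  // the GPU does the actual drawing
}
```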
Before OpenGL and DirectX, games had to use special instructions depending on what graphics card you had. When you bought a new game, you had to check carefully whether your card was supported, or you couldn't use the game.
OpenGL and DirectX are standardized APIs for graphics cards. A library is delivered by the manufacturer of the card. If it follows the specification, you are guaranteed that games will work (provided they also follow the same specification).
Open Graphics Library (OpenGL) is a cross-language, cross-platform application programming interface (API) for rendering 2D and 3D vector graphics. The API is typically used to interact with a graphics processing unit (GPU), to achieve hardware-accelerated rendering.

3D model manipulation for a Desktop Augmented Reality application

I'm working on an Augmented Reality project that uses multiple markers to get positions for 3D models that I'm planning to overlay. (I'm doing this from scratch using OpenCV and I'm not using ARToolkit or any other off the shelf marker detection libraries).
Environment: Visual C++ 2008, Windows 7, Core 2 Duo, 1 GB RAM, OpenCV 2.3
I want the 3D models to be manipulated by user so it will turn out to a sort of simulation.
For this I'm planning to use OpenGL. What are your suggestions and recommendations? Can the simulation part be done using OpenGL itself, or will I need something like OpenSceneGraph/ODE/Unity 3D/Ogre 3D?
This is for an academic project, so it's better if I can produce a more self-coded system rather than using off-the-shelf products.
It would seem that OpenGL is quite enough for your needs (drawing a model with a specific colour and size).
If you're new to OpenGL, and you are not going to be using it for your future projects, it might be easier to use the old fixed-function pipeline, which already has the lighting and color system ready and doesn't require you to learn how to write shaders.
For your project, you will need a texture into which you copy the image from the camera using glTexSubImage2D(), and which you then draw to the background (or you can use glDrawPixels() if you don't require any scaling). After that, you need your model, complete with normals for lighting. Models can be exported from e.g. Blender or 3ds Max to an ASCII format, which is pretty easy to parse. Then you can draw the model. Colors can be changed using glColor3f() before drawing the model (make sure you don't specify a different color while drawing it). Positioning of the models is done using matrices. The old OpenGL has some handy, easy-to-use functions for rotating and translating objects. There are also functions for scaling objects (changing their size), so that is covered pretty easily. All you need is to figure out the camera position relative to the marker (which I believe is implemented in OpenCV).
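As a sketch of the texture-update step described above (assuming the camera frame arrives as a continuous BGR cv::Mat from OpenCV, and that the texture was created once beforehand with glTexImage2D at a sufficient size):

```cpp
// Copy a camera frame into an existing texture (fixed-function era GL).
// GL_BGR needs OpenGL 1.2+ headers; use GL_BGR_EXT with old Windows headers.
#include <GL/gl.h>
#include <opencv2/core/core.hpp>

void updateBackgroundTexture(GLuint tex, const cv::Mat &frame)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    // Replaces the texels in place; the texture keeps its original size.
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0,
                    frame.cols, frame.rows,
                    GL_BGR, GL_UNSIGNED_BYTE, frame.data);
}
```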
If you were to use forward-compatible OpenGL, you would need to set up vertex buffer objects to contain the model data and write vertex and fragment shaders to shade and display your model. That's somewhat more work, for which you get extended flexibility. But you can use shaders in the old OpenGL as well, if you decide you need them (e.g. for some special effects).
Learning how to use a scene graph or an engine (Ogre) can take some time; I would not recommend it for your task.

What is the most efficient way to manage a large set of lines in OpenGL?

I am working on a simple CAD program which uses OpenGL to handle on-screen rendering. Every shape drawn on the screen is constructed entirely out of simple line segments, so even a simple drawing ends up processing thousands of individual lines.
What is the best way to communicate changes in this collection of lines between my application and OpenGL? Is there a way to update only a certain subset of the lines in the OpenGL buffers?
I'm looking for a conceptual answer here. No need to get into the actual source code, just some recommendations on data structure and communication.
You can use a simple approach, such as a display list (glNewList/glEndList).
The other option, which is slightly more complicated, is to use vertex buffer objects (VBOs, GL_ARB_vertex_buffer_object). They have the advantage that they can be changed dynamically, whereas a display list cannot.
These basically batch all your data/transformations up and then execute them on the GPU (assuming you are using hardware acceleration), resulting in higher performance.
Vertex Buffer Objects are probably what you want. Once you load the original data set in, you can make modifications to existing chunks with glBufferSubData().
If you add extra line segments and overflow the size of your buffer, you'll of course have to make a new buffer, but this is no different than having to allocate a new, larger memory chunk in C when something grows.
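For example, updating a changed run of line segments inside an existing VBO might look roughly like this (hypothetical layout: each segment is two 2D endpoints, i.e. 4 floats):

```cpp
// Update only a contiguous run of line segments with glBufferSubData.
#include <GL/glew.h>
#include <cstddef>

void updateLines(GLuint vbo, std::size_t firstSegment,
                 std::size_t segmentCount, const float *vertexData)
{
    const std::size_t floatsPerSegment = 4;  // 2 endpoints * (x, y)
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferSubData(GL_ARRAY_BUFFER,
                    firstSegment * floatsPerSegment * sizeof(float),  // byte offset
                    segmentCount * floatsPerSegment * sizeof(float),  // byte count
                    vertexData);
}
```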
EDIT: A couple of notes on display lists, and why not to use them:
In OpenGL 3.0, display lists are deprecated, so using them isn't forward-compatible past 3.0 (2.1 implementations will be around for a while, of course, so depending on your target audience this might not be a problem)
Whenever you change anything, you have to rebuild the entire display list, which defeats the entire purpose of display lists if things are changed often.
Not sure if you're already doing this, but it's worth mentioning you should try to use GL_LINE_STRIP instead of individual GL_LINES if possible to reduce the amount of vertex data being sent to the card.
My suggestion is to try using a scene graph, some kind of hierarchical data structure for the lines/curves. If you have huge models, performance will suffer with a plain list of lines. With a graph/tree structure you can easily check which items are visible and which are not by using bounding volumes. Also, with a scene graph you can apply transformations easily and reuse geometry.
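A minimal sketch of that idea (hypothetical types; the actual frustum/viewport test is assumed to be implemented elsewhere):

```cpp
// Hierarchical culling with per-node bounding boxes (sketch).
#include <vector>

struct AABB { float min[3], max[3]; };

struct SceneNode {
    AABB bounds;                        // encloses this node and all children
    std::vector<SceneNode *> children;
    // handle to this node's line/curve geometry would live here
};

// Visibility test against the current view, assumed to exist elsewhere.
bool boxVisible(const AABB &box);

void drawVisible(const SceneNode &node)
{
    if (!boxVisible(node.bounds))
        return;                         // prune the whole subtree at once
    // ... draw this node's geometry ...
    for (const SceneNode *child : node.children)
        drawVisible(*child);
}
```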