Deferred shading: store position or reconstruct it from depth (C++)

I'm in the middle of implementing deferred shading in an engine I'm working on, and I now have to decide whether to use a full RGB32F texture to store positions or to reconstruct them from the depth buffer. It boils down to an RGB32F texel fetch versus a matrix-vector multiplication in the fragment shader, and to a trade-off between memory and extra ALU operations.
Please direct me to useful resources and tell me your own experience with the subject.

In my opinion it is preferable to recalculate the position from depth, and that is what I do in my deferred engine. The recalculation is fast enough that it doesn't even show up when I profile the render loop. Virtually no performance impact versus roughly 24 MB of extra video memory (for a 1920x1080 RGB32F texture) made it an easy choice for me.
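For reference, here is a minimal sketch of the reconstruction, assuming a standard perspective projection and the default [0,1] depth range. The uniform and varying names (uDepthTex, uInvProj, vUV) are illustrative, not from the engine above.

    // Fragment-shader snippet, stored as a C++ string for glShaderSource().
    const char* kReconstructFrag = R"GLSL(
    #version 330 core
    uniform sampler2D uDepthTex; // depth buffer from the geometry pass
    uniform mat4      uInvProj;  // inverse of the projection matrix
    in  vec2 vUV;                // full-screen quad UV in [0,1]
    out vec4 fragColor;

    vec3 reconstructViewPos(vec2 uv)
    {
        float depth = texture(uDepthTex, uv).r;      // window-space depth in [0,1]
        vec4  ndc   = vec4(uv * 2.0 - 1.0,           // back to NDC x,y
                           depth * 2.0 - 1.0, 1.0);  // and NDC z
        vec4  view  = uInvProj * ndc;                // unproject
        return view.xyz / view.w;                    // perspective divide
    }

    void main()
    {
        vec3 posVS = reconstructViewPos(vUV);        // use for lighting etc.
        fragColor  = vec4(posVS, 1.0);
    }
    )GLSL";

The only extra per-fragment cost over a texel fetch is the matrix multiply and the divide by w, which is why it rarely shows up in profiles.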

Related

glMultiDrawElements with GL_TRIANGLES - usage

In my OpenGL projects I have always used glDrawElements or glDrawElementsBaseVertex with GL_TRIANGLES. When rendering a complex object I can, for example, sort meshes by material index, update textures and uniforms, and call glDrawElements for each group.
I have started exploring other draw commands, and I wonder when glMultiDrawElements is useful. I have found an example where glMultiDrawElements is used with GL_TRIANGLE_STRIP, effectively as an equivalent of the primitive restart functionality. That much is clear.
When might glMultiDrawElements/glMultiDrawArrays with GL_TRIANGLES be useful? Could you please provide an example?
You would use them when you have multiple, discontiguous ranges of primitives to draw (without any state changes between them).
The availability of gl_DrawID in vertex shaders makes it possible to issue multiple draws in such a way that the VS can index into some memory to find the specific per-model data for that rendering call. So in that event, you could draw dozens of entirely separate models, using different transforms stored in a buffer, all with the same draw call.
So long as you can put their per-model data in a buffer object indexable by the draw identifier (and so long as all of these models use the same vertex format and buffer storage), you could render much of the entire scene in a single call.
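As a concrete illustration, here is a sketch of a single glMultiDrawElements call over three discontiguous index ranges. The counts and byte offsets are made up for the example, and GLEW is assumed only as the function loader.

    #include <GL/glew.h>

    // Assumes a VAO with a GL_ELEMENT_ARRAY_BUFFER of GLuint indices is bound.
    void drawThreeRanges()
    {
        // Index count of each sub-range.
        const GLsizei counts[3] = { 36, 60, 24 };

        // Byte offsets of each range inside the bound index buffer.
        const void* offsets[3] = {
            reinterpret_cast<const void*>(0  * sizeof(GLuint)),
            reinterpret_cast<const void*>(36 * sizeof(GLuint)),
            reinterpret_cast<const void*>(96 * sizeof(GLuint)),
        };

        // One call instead of three glDrawElements calls. In the vertex
        // shader, gl_DrawID (ARB_shader_draw_parameters / GL 4.6) reports
        // which of the three draws is running, so per-model transforms can
        // be fetched from a buffer indexed by it.
        glMultiDrawElements(GL_TRIANGLES, counts, GL_UNSIGNED_INT, offsets, 3);
    }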

What is the best way to create a glass or ice effect in OpenGL

I have tried blending, and it gives a basic glass effect, but I feel there must be a better way to achieve a glass or ice style effect. What would people suggest? Is there something that can be done with semi-transparent textures?
This is a very broad and complex question, and the answer depends entirely on what kind of result (in terms of realism and so on) you are trying to get and what kind of lighting you want. Most of these effects, and materials in general, are the domain of shaders. A lot can be achieved by choosing the right textures with the right material parameters, again depending on what you consider an acceptable result.
The GPU Gems 2 book has a chapter on glass simulation (see section 19.3.2):
GPU Gems 2 - Generic Refraction Simulation
When it comes to ice, there are again a ton of different things to consider depending on the complexity you want; see this answer:
How to render realistic ice?
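For completeness, the "basic glass" blending the question mentions boils down to something like the following sketch (assuming a current GL context and that opaque geometry has already been drawn):

    // Draw translucent surfaces back-to-front over the opaque scene.
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // classic "over" blend
    glDepthMask(GL_FALSE);  // test against opaque depth, but don't write it
    // ... draw glass/ice meshes here, sorted far-to-near ...
    glDepthMask(GL_TRUE);
    glDisable(GL_BLEND);

Everything beyond this (refraction, highlights, frost) lives in the shaders and textures described in the resources above.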

CSG Modeling in OpenGL

I am dealing with Constructive Solid Geometry (CSG) modeling in OpenGL.
I want to know how to implement boolean operations. I have read a little about the Goldfeather algorithm, and I know about OpenCSG, but after reading its source code I found it too complicated to understand. I just need a short, simple OpenGL example of how to implement this.
There is no restriction on the algorithm, as long as it is easy to implement.
OpenGL will not help you here. OpenGL is a rendering library/API: it draws points, lines, and triangles, and it is up to you to tell it what to draw. OpenGL does not maintain a scene, or even have a notion of coherent geometric objects. Hence CSG is not something that belongs in OpenGL.
Nicol Bolas is correct: OpenGL will not help with CSG; it only provides a way to draw 3D things onto a 2D screen.
OpenCSG is essentially "fake" CSG: it uses OpenGL's depth buffers, stencils, and shaders to make it appear that 3D objects have had a boolean operation performed on them.
CSG is a huge task, and I doubt you will ever find an algorithm that is "easy to understand".
Have a look at the carve project (http://code.google.com/p/carve/), which performs the CSG on the triangles/faces that you then draw with OpenGL.
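A short sketch of that division of labor: the boolean operation happens on the CPU (in carve or your own code), and OpenGL only uploads and draws the resulting triangles. The call that produces csgResult is left out because it depends on the library; only the OpenGL side is shown.

    #include <GL/glew.h>
    #include <vector>

    // Upload triangles produced by a CPU-side CSG library and return a VAO
    // that can later be drawn with glDrawArrays(GL_TRIANGLES, 0, count).
    GLuint uploadCsgResult(const std::vector<float>& csgResult /* xyz triples */)
    {
        GLuint vao = 0, vbo = 0;
        glGenVertexArrays(1, &vao);
        glGenBuffers(1, &vbo);

        glBindVertexArray(vao);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER,
                     csgResult.size() * sizeof(float),
                     csgResult.data(), GL_STATIC_DRAW);

        glEnableVertexAttribArray(0);
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), nullptr);
        glBindVertexArray(0);
        return vao;
    }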

Per many frame operations in OpenGL?

This is a question about 3D programming in general, but I'm learning OpenGL, if that makes the answer any different. What I wonder is whether all of the work in displaying an image always has to start from scratch for each new frame, or whether there is some way to save intermediate data that could be reused when rendering the next frame instead of being recomputed. Say you're standing right next to a mountain: everything on the other side is occluded by the mountain, so a lot of geometry simply doesn't have to be rendered because it can't be seen. Now assume your character can't walk particularly fast. Then there's no way the stuff on the other side of the mountain could become visible in the next frame, or maybe not even in the next 100 frames. Is it possible to avoid doing the same occlusion check every frame?
The problem you're referring to is called "hidden surface removal" and "occlusion culling".
In real-time graphics it is usual to re-render each frame from scratch. However, every good renderer will omit everything that is definitely not visible, and there are various algorithms for doing so.
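One common way to exploit exactly the frame-to-frame coherence the question describes is hardware occlusion queries: test a cheap bounding box for objects that were hidden last frame, and read the result a frame later so the CPU never stalls on the GPU. A rough sketch, where Object, renderMesh, and renderBoundingBox are assumed helpers (real code usually double-buffers the query objects):

    #include <GL/glew.h>

    struct Object {
        GLuint query = 0;            // created with glGenQueries()
        bool queryIssued = false;    // guards the very first read
        bool visibleLastFrame = true;
    };

    void renderMesh(const Object&);        // assumed: draws the full mesh
    void renderBoundingBox(const Object&); // assumed: draws the bounding box

    void drawWithOcclusionQuery(Object& obj)
    {
        // Pick up an earlier frame's result if it is ready; never stall on it.
        if (obj.queryIssued) {
            GLuint available = 0;
            glGetQueryObjectuiv(obj.query, GL_QUERY_RESULT_AVAILABLE, &available);
            if (available) {
                GLuint anySamples = 0;
                glGetQueryObjectuiv(obj.query, GL_QUERY_RESULT, &anySamples);
                obj.visibleLastFrame = (anySamples != 0);
            }
        }

        glBeginQuery(GL_ANY_SAMPLES_PASSED, obj.query);
        if (obj.visibleLastFrame) {
            renderMesh(obj);  // draw for real; the query piggybacks on it
        } else {
            // Test only the bounding box, without touching color or depth.
            glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
            glDepthMask(GL_FALSE);
            renderBoundingBox(obj);
            glDepthMask(GL_TRUE);
            glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
        }
        glEndQuery(GL_ANY_SAMPLES_PASSED);
        obj.queryIssued = true;
    }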

OpenGL VBO advice

I have a model designed in Blender that I am going to render in my game engine (built using OpenGL) after exporting it to COLLADA. The model in Blender is divided into groups. Each group contains vertices, normals, and texture coordinates with its own index. Is it better to render each group as a separate VBO, or should I render the whole model as a single VBO?
Rules of thumb are:
always do as few OpenGL state changes as possible;
always talk to OpenGL as little as possible.
The main reason is that talking to the GPU is costly. The GPU is much happier if you give it something that will take a long time to do and then don't bother it again while it's working. So a single VBO is likely the better solution, unless it would substantially increase the amount of storage you need and hence work against caching elsewhere.
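A sketch of what the single-VBO option looks like at draw time: all groups share one buffer, and each group keeps only an offset, a count, and its material. The Group layout here is illustrative, not from any particular exporter.

    #include <GL/glew.h>
    #include <vector>

    struct Group { GLint first; GLsizei count; /* material id, textures... */ };

    // One VAO/VBO holds every group's vertices; rendering is a single bind
    // followed by one draw per group, with only material state changing.
    void drawModel(GLuint vao, const std::vector<Group>& groups)
    {
        glBindVertexArray(vao);
        for (const Group& g : groups) {
            // bind this group's textures / set uniforms here
            glDrawArrays(GL_TRIANGLES, g.first, g.count);
        }
        glBindVertexArray(0);
    }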