Reverse triangle lookup from affected pixels? - opengl

Assume I have a 3D triangle mesh and an OpenGL framebuffer to which I can render the mesh.
For each rendered pixel, I need to build a list of triangles that rendered to that pixel, even those that are occluded.
The only way I could think of doing this is to individually render each triangle from the mesh, then go through each pixel in the framebuffer to determine if it was affected by the triangle (using the depth buffer or a user-defined fragment shader output variable). I would then have to clear the framebuffer and do the same for the next triangle.
Is there a more efficient way to do this?
I considered, for each fragment in the fragment shader, writing out a triangle identifier, but GLSL doesn't allow outputting a list of integers.

For each rendered pixel, I need to build a list of triangles that rendered to that pixel, even those that are occluded.
You will not be able to do this for the entire scene. There is no structure that lets you associate a list with every pixel.
You can get a list of the primitives that affected a certain region using the selection buffer (see glRenderMode(GL_SELECT)), though note that selection mode is legacy functionality.
You can measure scene depth complexity using stencil buffer techniques (increment the stencil value for every fragment drawn).
If there are at most 8 triangles in total, you can get the list of triangles that affected every pixel using the stencil buffer: assign each triangle a unique (1 << n) stencil bit, and OR it into the existing stencil value for every fragment the triangle covers.
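Core OpenGL has no OR stencil op, but the same effect falls out of GL_REPLACE restricted to one bit by the stencil write mask. A minimal sketch, assuming an 8-bit stencil buffer and a hypothetical drawTriangle(n) helper (width, height and pixels are assumed to be defined by the caller):

glEnable(GL_STENCIL_TEST);
glDisable(GL_DEPTH_TEST);                /* occluded triangles must count too */
glClear(GL_STENCIL_BUFFER_BIT);
for (int n = 0; n < 8; ++n) {
    glStencilMask(1u << n);              /* only triangle n's bit is writable */
    glStencilFunc(GL_ALWAYS, 0xFF, 0xFF);
    glStencilOp(GL_KEEP, GL_REPLACE, GL_REPLACE);
    drawTriangle(n);                     /* hypothetical: draw just triangle n */
}
/* Each pixel's stencil byte is now the bitmask of triangles that covered it. */
glReadPixels(0, 0, width, height, GL_STENCIL_INDEX, GL_UNSIGNED_BYTE, pixels);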
But to solve this in the general case, you will need your own rasterizer and a LOT of memory to store the per-pixel triangle lists. The problem is quite similar to a multi-layered depth buffer, after all.
Is there a more efficient way to do this?
Actually, yes, but it is not hardware accelerated and OpenGL has nothing to do with it. Store all of the triangles in an octree. Cast a ray through that octree for every pixel you want to test, and count the triangles the ray hits. That is essentially a collision detection (ray casting) problem.
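The per-triangle test in that scheme is a standard ray/triangle intersection. A sketch of the counting primitive (Moller-Trumbore), with the octree traversal itself omitted and all names illustrative:

#include <math.h>

typedef struct { float x, y, z; } Vec3;

static Vec3 vsub(Vec3 a, Vec3 b) { return (Vec3){ a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 vcross(Vec3 a, Vec3 b) {
    return (Vec3){ a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}
static float vdot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* Moller-Trumbore: does the ray (orig, dir) hit the triangle (v0, v1, v2)? */
static int ray_hits_triangle(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2) {
    const float EPS = 1e-7f;
    Vec3 e1 = vsub(v1, v0), e2 = vsub(v2, v0);
    Vec3 p = vcross(dir, e2);
    float det = vdot(e1, p);
    if (fabsf(det) < EPS) return 0;            /* ray is parallel to the triangle */
    float inv = 1.0f / det;
    Vec3 t = vsub(orig, v0);
    float u = vdot(t, p) * inv;
    if (u < 0.0f || u > 1.0f) return 0;
    Vec3 q = vcross(t, e1);
    float v = vdot(dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return 0;
    return vdot(e2, q) * inv >= 0.0f;          /* hit must lie in front of the origin */
}

Shooting the eye ray for each pixel and recording every triangle this returns 1 for gives exactly the per-pixel list the question asks for, occluded triangles included.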

Related

Stencil-like shader masking, but with shader control over how the mask is created?

What I'm trying to accomplish:
Draw geometry with backface culling disabled, and with a clip plane defined, such that the geometry is clipped and the drawn backfaces fill in the exposed interior to create a mask.
Then in another pass, render a quad which exactly matches the position of the clip plane, using a mask from the first pass to cover only the exposed holes with a textured surface, giving the illusion that the geometry is properly modified.
The clipped geometry may create multiple separated holes, so I can't just draw the plane over top without masking.
I can get part of the way there in the shader, testing gl_FrontFacing to see which pixels are over an exposed interior. However, I can't find a way to record this for a later pass.
My first thought was to use the stencil buffer, which would work great for constraining the second pass, but as far as I can tell there's no way for a shader to selectively write to it.
The only other thing that comes to mind is to use a framebuffer and have the shader write to an extra output buffer, then feed that back into the second pass to do the filtering manually. But:
How would I know what part of the texture to sample? I assume I'd need to calculate the screen positions of the vertices, pass them along to be interpolated by the fragment shader, and test/discard from there?
Is there a better or perhaps more automatic (similar to stencil buffer) way to accomplish this?
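For reference, the extra-output idea described above might look like this in the first pass's fragment shader (a sketch, not a full answer; the second colour attachment and its names are assumptions):

#version 330 core
layout(location = 0) out vec4 sceneColor;   // normal shading
layout(location = 1) out vec4 interiorMask; // 1.0 where a backface filled the clipped interior

void main() {
    sceneColor = vec4(0.5);                 // placeholder shading
    interiorMask = gl_FrontFacing ? vec4(0.0) : vec4(1.0);
}

Since the mask texture is screen-aligned, the second pass can sample it at gl_FragCoord.xy divided by the viewport size, with no need to pass screen positions down from the vertex shader.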

How to colour vertices as a grid (like wireframe mode) using shaders?

I've created a plane with six vertices per square that form a terrain.
I colour each vertex using the terrain height value in the pixel shader.
I'm looking for a way to colour the pixels between vertices black, while keeping everything else the same, to create a grid effect: the same effect you get from wireframe mode, except without the diagonal line, and with the non-line parts keeping their normal colour.
My terrain, and how it looks in wireframe mode:
How would one go about doing this in a pixel shader, or otherwise?
See "Solid Wireframe" - NVIDIA paper from a few years ago.
The idea is basically this: include a geometry shader that generates barycentric coordinates as a varying for each vertex. In your fragment / pixel shader, check the value of the bary components. If they are below a certain threshold, you color the pixel however you'd like your wireframe to be colored. Otherwise, light it as you normally would.
Given a face with vertices A,B,C, you'd generate barycentric values of:
A: 1,0,0
B: 0,1,0
C: 0,0,1
In your fragment shader, see if any component of the bary for that fragment is less than, say, 0.1. If so, it means that it's close to one of the edges of the face. (Which component is below the threshold will also tell you which edge, if you want to get fancy.)
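A minimal sketch of the two stages, assuming a pass-through vertex shader (the 0.1 threshold and the plain black wire colour are placeholders):

// Geometry shader: tag each corner of every triangle with a barycentric coordinate.
#version 330 core
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;
out vec3 vBary;

void main() {
    const vec3 bary[3] = vec3[3](vec3(1, 0, 0), vec3(0, 1, 0), vec3(0, 0, 1));
    for (int i = 0; i < 3; ++i) {
        gl_Position = gl_in[i].gl_Position;
        vBary = bary[i];
        EmitVertex();
    }
    EndPrimitive();
}

// Fragment shader: a fragment near an edge has one small barycentric component.
#version 330 core
in vec3 vBary;
out vec4 fragColor;
uniform vec4 baseColor;   // stand-in for your normal terrain shading

void main() {
    float edgeDist = min(vBary.x, min(vBary.y, vBary.z));
    fragColor = (edgeDist < 0.1) ? vec4(0.0, 0.0, 0.0, 1.0) : baseColor;
}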
I'll see if I can find a link and update here in a few.
Note that the paper is also ~10 years old. There are ways to get bary coordinates without the geometry shader these days in some situations, and I'm sure there are other workarounds. (Geometry shaders have their place, but are not the greatest friend of performance.)
Also, while geom shaders come with a perf hit, they're significantly faster than a second pass to draw a wireframe. Drawing in wireframe mode in GL (or DX) carries a significant performance penalty because you're asking the rasterizer to simulate Bresenham's line algorithm. That's not how rasterizers work, and it is freaking slow.
This approach also solves any z-fighting issues that you may encounter with two passes.
If your mesh were a single triangle, you could skip the geometry shader and just pack the needed values into a vertex buffer. But, since vertices are shared between faces in any model other than a single triangle, things get a little complicated.
Or, for fun: do this as a post processing step. Look for high ddx()/ddy() (or dFdx()/dFdy(), depending on your API) values in your fragment shader. That also lets you make some interesting effects.
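One rough sketch of that post-processing variant, assuming the scene's normals were written to a texture in an earlier pass (the texture, threshold, and UV input are all assumptions):

#version 330 core
uniform sampler2D normalTex;   // scene normals from a prior pass
in vec2 vUV;
out vec4 fragColor;

void main() {
    vec3 n = texture(normalTex, vUV).xyz;
    // Derivatives are large where the sampled normal changes abruptly between pixels.
    float edge = length(dFdx(n)) + length(dFdy(n));
    fragColor = (edge > 0.3) ? vec4(0.0, 0.0, 0.0, 1.0) : vec4(1.0);
}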
Given that you have a vertex buffer containing all the vertices of your grid, make an index buffer that reuses that vertex buffer, but instead of making groups of 3 for triangles, use pairs of 2 for line segments. This will be a Line List and should contain all the pairs that make up the squares of the grid. You could generate this list automatically in your program (see the sketch after the algorithm below).
Rough algorithm for rendering:
Render your terrain as normal
Switch your primitive topology to Line List
Assign the new index buffer
Disable depth testing (or add a small height offset to each point in the vertex shader so the grid appears just above the terrain)
Render the Line List
This should produce the effect you are looking for of the terrain drawn and shaded with a square grid on top of it. You will need to put a switch (via a constant buffer) in your pixel shader that tells it when it is rendering the grid so it can draw the grid black instead of using the height values.
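A sketch of generating the line list, assuming the grid vertices are shared and laid out row-major as w columns by h rows (all names are illustrative):

#include <stdlib.h>

/* Build a Line List index buffer for a w*h vertex grid: horizontal and
   vertical segments only, no diagonals. The caller frees the result. */
unsigned *build_grid_line_indices(int w, int h, size_t *outCount) {
    size_t count = 2 * ((size_t)h * (w - 1) + (size_t)w * (h - 1));
    unsigned *idx = malloc(count * sizeof *idx);
    size_t k = 0;
    for (int y = 0; y < h; ++y)              /* horizontal segments */
        for (int x = 0; x < w - 1; ++x) {
            idx[k++] = y * w + x;
            idx[k++] = y * w + x + 1;
        }
    for (int x = 0; x < w; ++x)              /* vertical segments */
        for (int y = 0; y < h - 1; ++y) {
            idx[k++] = y * w + x;
            idx[k++] = (y + 1) * w + x;
        }
    *outCount = count;
    return idx;
}

Note that the question's mesh uses six vertices per square, so the vertices would either need to be de-duplicated first or the index math adjusted accordingly.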

OpenGL: coloring a world map?

Here is a task that every GIS application can do: given some polygons, fill each polygon with a chosen color. Like this: [image]
What is the best way of doing this repeatedly in OpenGL? That is, the polygons do not change, and I want to vary only the data used for coloring to produce different renderings.
Redrawing polygons for each rendering is the most straightforward solution, but it seems to be a waste, since the geometries do not change at all.
Or is it better to create a stencil for each polygon, and stencil print the entire map? If there are too many polygons, will doing hundreds or thousands of rendering passes create a problem?
For each vertex of a polygon, map a certain color. That means that when you send the data to the shaders, each vertex carries two attributes: a position vector, which is needed in the vertex shader, and a color vector, which will be used as the fragment color. That is the simplest way.
For example, think of a triangle drawn in OpenGL: if you send its vertices to the vertex shader and set a color in the fragment shader, every time a vertex enters the pipeline it is positioned accordingly and shown on screen with the given color.
The technique I just (poorly) explained is the one used in the classic colored-triangle example, in which the colors interpolate: red mapped to one corner, green mapped to another, and blue to the last. If you instead map the same color to every corner, you get a uniformly colored triangle. That is the basic principle. And you draw only the minimum count of triangles, and you need just one pair of shaders.
Note: a polygon is made out of N triangles, and you need to map the same color to every vertex of every triangle in that polygon.
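A minimal shader pair for this idea (attribute locations and names are arbitrary):

// Vertex shader: pass the per-vertex fill color through.
#version 330 core
layout(location = 0) in vec3 position;
layout(location = 1) in vec3 color;   // the same value for every vertex of a polygon
uniform mat4 mvp;
out vec3 vColor;

void main() {
    gl_Position = mvp * vec4(position, 1.0);
    vColor = color;
}

// Fragment shader: output the interpolated (here: uniform) color.
#version 330 core
in vec3 vColor;
out vec4 fragColor;

void main() { fragColor = vec4(vColor, 1.0); }

To recolor the map you then only update the color attribute buffer (e.g. with glBufferSubData); the positions never change, which addresses the concern about redrawing unchanged geometry.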
I think a bigger issue will be that OpenGL doesn't support arbitrary polygons or vector drawing in general, but there are libraries for this. You'll have to use an existing solution for vector drawing or, failing that, triangulate your GIS data (usually a list of points per polygon) yourself. This is likely the biggest obstacle.
The fact that the geometry doesn't change isn't really an issue: you would generally store the geometry in one or more buffers, then add logic to only draw what is visible inside your current view, perhaps even going as far as to only generate geometry for the visible area.
See also this question and its answers.
Rendering Vector Graphics in OpenGL?

How many depth textures can I bind to a framebuffer?

I am trying to create shadow maps of many objects in a sceneRoom, with their shadows being projected onto the sceneRoom. Until now I've been able to project the shadows of the sceneRoom onto itself, but I want to project the shadows of other objects in the sceneRoom onto the sceneRoom's floor.
Is it possible to attach multiple depth textures to one framebuffer, or should I use several framebuffers, each with one depth texture?
There is only one GL_DEPTH_ATTACHMENT point, so you can have at most one attached depth buffer at any time. You will have to use some other method.
No, there is only one attachment point (well, technically two if you count GL_DEPTH_STENCIL_ATTACHMENT) for depth in an FBO. You can only attach one thing to the depth, but that does not mean you are limited to a single image.
You can use an array texture to store multiple depth images and then attach this array texture to GL_DEPTH_ATTACHMENT.
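For reference, creating and attaching such an array texture might look like this (the sizes, fbo and numLayers are placeholders):

GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D_ARRAY, tex);
glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_DEPTH_COMPONENT24,
             512, 512, numLayers, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, tex, 0);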
However, to draw into the array attached this way you would have to use a Geometry Shader to do layered rendering (attaching one layer at a time with glFramebufferTextureLayer is the alternative, but then you are back to one depth image per pass). Since it sounds like each of these depth images you are interested in is actually a completely different set of geometry, layered rendering does not sound like the approach you want: a Geometry Shader would process the same set of geometry for each layer.
One thing you could consider is actually using a single depth buffer, but packing your shadow maps into an atlas. If each of your shadow maps is 512x512, you could store 4 of them in a single texture with dimensions 1024x1024 and adjust texture coordinates (and viewport when you draw into the atlas) appropriately. The reason you might consider doing this is because changing the render target (FBO state) tends to be the most expensive thing you would do between draw calls in a series of depth-only draws. You might change a few uniforms or vertex pointers, but those are dirt cheap to change.
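The atlas approach in sketch form, with drawSceneDepthOnly() and the per-light matrix setup as stand-ins:

/* Render four 512x512 shadow maps into one 1024x1024 depth texture,
   one quadrant per shadow map. */
glBindFramebuffer(GL_FRAMEBUFFER, shadowFBO);   /* FBO whose depth attachment is the atlas */
glClear(GL_DEPTH_BUFFER_BIT);
for (int i = 0; i < 4; ++i) {
    glViewport((i % 2) * 512, (i / 2) * 512, 512, 512);
    /* set this light's view/projection uniforms here */
    drawSceneDepthOnly();                       /* hypothetical depth-only draw */
}

When sampling later, offset and scale the shadow texture coordinates into the corresponding quadrant.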

Off-screen multiple render targets using Frame Buffer Object (FBO) or?

Situation: generating N samples of a shape and their corresponding edges (using a Sobel filter or my own), with different transformations and rotations, while the viewport (size = 600*600) and camera remain constant, i.e. there will be N samples + N corresponding edges.
I am thinking of doing it like this:
Use one FBO with 2 renderbuffers [i.e. the size of each buffer will be (N * 600) * 600]: the 1st for the N shapes and the 2nd for the edges of the corresponding shapes.
Questions:
Which is the best way to achieve the above?
Though the viewport size is 600*600 pixels, each shape will only occupy around 50*50 pixels. Is there an efficient way to apply edge detection only on the bounding box/AABB region of the 2nd buffer? And is there an efficient way to read back only the 2N bounding boxes (N samples + N corresponding edges)?
1: I'm not sure what you'd call the "best way". Use Multiple Render Targets: you create your two 600*N textures, attach them both to the FBO, select both with glDrawBuffers, and in your fragment shader do something like this:
layout(location = 0) out vec3 color;
layout(location = 1) out vec3 edges;
When writing to "color" and "edges", you'll effectively write into your two textures.
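The FBO side of that, roughly (texture creation omitted; fbo, colorTex and edgesTex are assumed to exist):

GLenum bufs[2] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, edgesTex, 0);
glDrawBuffers(2, bufs);   /* location 0 -> colorTex, location 1 -> edgesTex */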
2: You shouldn't do this. Compute your bounding boxes on the CPU and project them (i.e. multiply each corner by your ModelViewProjection matrix) to get the bounding boxes in 2D.
By the way: compute your bounding boxes first, so that you won't need 600*600 textures but 50*50 ones...
EDIT: You usually restrict the drawn zone with glViewport. But there is only one viewport, and you need several. You can try the viewport array extension (GL_ARB_viewport_array) and live on the bleeding edge, or pass the AABBs in a texture, or just not worry about it until performance actually matters...
Oh, and you can't use Sobel just like that... Sobel requires reading all the texels around a pixel, which is not possible while you are still rendering those texels. Either make a two-pass algorithm without MRT (first color, then edges), or don't use Sobel and guess your edges in the shader (I don't really see how).
Like Calvin said, you have to first render your object into the first framebuffer and then bind this as a texture (use a texture attachment rather than a renderbuffer) for the second pass to find the edges, as edge detection usually needs access to a pixel's surrounding pixels.
Regarding your second question, you could probably use the stencil buffer. Just draw your shapes in the first pass and let them write a reference value into the stencil buffer. Then do the edge detection (usually by rendering a screen-sized quad with the corresponding fragment shader) and configure the stencil test to only pass where the stencil buffer contains the reference value. This way (assuming early-z hardware, which is quite common now) the fragment shader will only be executed on the pixels the shape has actually been drawn onto.
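In sketch form, with the draw helpers as placeholders and 1 as the reference value:

/* Pass 1: draw the shapes and tag every covered pixel with stencil value 1. */
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_ALWAYS, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
drawShapes();                     /* hypothetical */

/* Pass 2: run the edge-detection quad only where the stencil equals 1. */
glStencilFunc(GL_EQUAL, 1, 0xFF);
glStencilMask(0x00);              /* keep the stencil contents intact */
drawFullscreenQuad();             /* hypothetical screen-sized quad */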