Off-screen multiple render targets using a Frame Buffer Object (FBO)? - opengl

Situation: I am generating N samples of a shape and their corresponding edges (using a Sobel filter or my own) with different transformations and rotations, while the viewport (size = 600×600) and the camera remain constant, i.e. there will be N samples + N corresponding edge images.
I am thinking of doing it like this:
Use one FBO with 2 renderbuffers [i.e. the size of each buffer will be (N * 600) * 600]: the 1st for the N shapes and the 2nd for the edges of the corresponding shapes.
Questions:
Which is the best way to achieve the above?
Though the viewport is 600×600 pixels, each shape only occupies around 50×50 pixels. So is there any efficient way to apply edge detection only on the bounding box/AABB region of the 2nd buffer? And to read back only the 2N bounding boxes (N samples + N corresponding edges) efficiently?

1: I'm not sure what you mean by "best way". Use Multiple Render Targets: you create two (N * 600) × 600 textures, attach them both to the FBO (glFramebufferTexture2D) and list them in glDrawBuffers, and in your fragment shader, do something like this:
layout(location = 0) out vec3 color;
layout(location = 1) out vec3 edges;
When writing to "color" and "edges", you'll effectively write into your textures.
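For completeness, a minimal whole fragment shader along those lines might look like the following sketch (vNormal and the flat second output are placeholders; as noted below, a Sobel filter cannot run in this same pass, so the second target only receives a coverage mask here):
#version 330 core
// Minimal MRT fragment shader sketch: output 0 gets the shaded shape,
// output 1 gets a coverage mask a later edge pass could use.
// vNormal is an assumed varying, not something from the question.
in vec3 vNormal;
layout(location = 0) out vec3 color;
layout(location = 1) out vec3 edges;

void main()
{
    float shade = max(dot(normalize(vNormal), vec3(0.0, 0.0, 1.0)), 0.0);
    color = vec3(shade);   // written to the first attached texture
    edges = vec3(1.0);     // coverage mask, written to the second texture
}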
2: You shouldn't do this. Compute your bounding boxes on the CPU, and project them (i.e. multiply each corner by your ModelViewProjection matrix) to get the bounding boxes in 2D.
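A sketch of that per-corner math, written in GLSL types for readability (in practice you would do this on the CPU with your own vector/matrix code; the 600×600 viewport size is taken from the question, and cornerToPixels is a hypothetical helper):
// Hypothetical helper: maps one AABB corner to window-space pixels.
vec2 cornerToPixels(mat4 modelViewProjection, vec3 corner)
{
    vec4 clip = modelViewProjection * vec4(corner, 1.0);
    vec3 ndc  = clip.xyz / clip.w;              // normalized device coords, [-1, 1]
    return (ndc.xy * 0.5 + 0.5) * vec2(600.0);  // window-space pixels for a 600x600 viewport
}
// Taking the min/max of the 8 projected corners gives the 2D bounding box.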
By the way: compute your bounding boxes first, so that you won't need 600×600 textures but only 50×50...
EDIT: You usually restrict the drawn zone with glViewport. But there is only one viewport, and you need several. You can try the viewport array extension (GL_ARB_viewport_array) and live on the bleeding edge, or pass the AABBs in a texture, or not worry about that until performance matters...
Oh, and you can't use Sobel just like that... Sobel requires that you can read all the texels around, which is not the case since you're currently rendering said texels. Either make a two-pass algorithm without MRTs (first color, then edges) or don't use Sobel and guess your edges in the shader (I don't really see how).

Like Calvin said, you have to first render your object into the first framebuffer and then bind this as a texture (use a texture attachment rather than a renderbuffer) for the second pass to find the edges, as edge detection usually needs access to a pixel's surrounding pixels.
Regarding your second question, you could probably use the stencil buffer. Just draw your shapes in the first pass and let them write a reference value into the stencil buffer. Then do the edge detection (usually by rendering a screen-sized quad with the corresponding fragment shader) and configure the stencil test to only pass where the stencil buffer contains the reference value. This way (assuming early-z hardware, which is quite common now) the fragment shader will only be executed on the pixels the shape has actually been drawn onto.
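The fragment shader for that screen-sized quad could be a straightforward Sobel filter, something like this sketch (colorTex, texelSize and vUV are assumed names, not anything from the question):
#version 330 core
// Second-pass Sobel edge detection over the texture written in the first pass.
uniform sampler2D colorTex;
uniform vec2 texelSize;      // 1.0 / texture resolution
in vec2 vUV;
out vec4 outEdge;

float luma(vec2 offset)
{
    return dot(texture(colorTex, vUV + offset * texelSize).rgb,
               vec3(0.299, 0.587, 0.114));
}

void main()
{
    // 3x3 Sobel kernels
    float gx = -luma(vec2(-1, -1)) - 2.0 * luma(vec2(-1, 0)) - luma(vec2(-1, 1))
             +  luma(vec2( 1, -1)) + 2.0 * luma(vec2( 1, 0)) + luma(vec2( 1, 1));
    float gy = -luma(vec2(-1, -1)) - 2.0 * luma(vec2(0, -1)) - luma(vec2(1, -1))
             +  luma(vec2(-1,  1)) + 2.0 * luma(vec2(0,  1)) + luma(vec2(1,  1));
    outEdge = vec4(vec3(length(vec2(gx, gy))), 1.0);
}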


Why do we need texture filtering in OpenGL?

When mapping a texture to geometry, we can choose the filtering method between GL_NEAREST and GL_LINEAR.
In the examples we have a texture coordinate surrounded by texels like so:
And it's explained how each algorithm chooses what color the fragment will be, for example by linearly interpolating all the neighboring texels based on their distance from the texture coordinate.
Isn't each texture coordinate essentially the fragment position, which is mapped to a pixel on screen? So how can these coordinates be smaller than the texels, which are essentially pixels and the same size as fragments?
A (2D) texture can be looked at as a function t(u, v), whose output is a "color" value. This is a pure function, so it will return the same value for the same u and v values. The value comes from a lookup table stored in memory, indexed by u and v, rather than through some kind of computation.
Texture "mapping" is the process whereby you associate a particular location on a surface with a particular location in the space of a texture. That is, you "map" a surface location to a location in a texture. As such, the inputs to the texture function t are often called "texture coordinates". Some surface locations may map to the same position on a texture, and some texture positions may not have surface locations mapped to them. It all depends on the mapping
An actual texture image is not a smooth function; it is a discrete function. It has a value at the texel location (0, 0), and another value at (1, 0), but the value of a texture at (0.5, 0) is undefined. In image space, u and v are integers.
Your picture of a zoomed in part of the texture is incorrect. There are no values "between" the texels, because "between the texels" is not possible. There is no number between 0 and 1 on an integer number line.
However, any useful mapping from a surface to the texture function is going to need to happen in a continuous space, not a discrete one. After all, it's unlikely that every fragment will land exactly on a location that maps to an exact integer within a texture. And especially in shader-based rendering, a shader can just invent a mapping arbitrarily: the "mapping" could be based on light directions (projective texturing), the elevation of a fragment relative to some surface, or anything a user might want. To a fragment shader, a texture is just a function t(u, v) which can be evaluated to produce a value.
So we really want that function to be in a continuous space.
The purpose of filtering is to create a continuous function t by inventing values in-between the discrete texels. This allows you to declare that u and v are floating-point values, rather than integers. We also get to normalize the texture coordinates, so that they're on the range [0, 1] rather than being based on the texture's size.
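As an illustration, what GL_LINEAR effectively computes can be written out by hand in GLSL. This is a sketch only: a real sampler does this in hardware, and wrap/clamp handling at the texture borders is omitted.
// Hand-written equivalent of GL_LINEAR bilinear filtering (illustrative only).
vec4 bilinearSample(sampler2D tex, vec2 uv)
{
    vec2 size  = vec2(textureSize(tex, 0));
    vec2 pos   = uv * size - 0.5;      // continuous texel-space position
    ivec2 base = ivec2(floor(pos));    // lower-left of the 4 nearest texels
    vec2 f     = fract(pos);           // fractional distance past that texel
    vec4 t00 = texelFetch(tex, base,               0);
    vec4 t10 = texelFetch(tex, base + ivec2(1, 0), 0);
    vec4 t01 = texelFetch(tex, base + ivec2(0, 1), 0);
    vec4 t11 = texelFetch(tex, base + ivec2(1, 1), 0);
    return mix(mix(t00, t10, f.x), mix(t01, t11, f.x), f.y);
}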
Texture filtering does not decide what color the fragment should be; that is what the fragment shader does. However, the fragment shader may sample a texture at a given position to get a color. It may directly return that color, or it can process it (e.g. apply shading, etc.).
Texture filtering happens at sampling. The texture coordinates are not necessarily perfect pixel positions. E.g., the texture could be the material of a 3D model that you show in a perspective view. Then a fragment may cover more than a single texel or it may cover less. Or it might not be aligned with the texture grid. In all cases you need some kind of filtering.
For applications that render a sprite at its original size without any deformation, you usually don't need filtering as you have a 1:1 mapping from screen pixels to texels.

How to colour vertices as a grid (like wireframe mode) using shaders?

I've created a plane with six vertices per square that form a terrain.
I colour each vertex using the terrain height value in the pixel shader.
I'm looking for a way to colour the pixels between vertices black, while keeping everything else the same, to create a grid effect: the same effect you get from wireframe mode, except without the diagonal line, and with the transparent parts keeping the normal colour.
My terrain, and how it looks in wireframe mode:
How would one go about doing this in pixel shader, or otherwise?
See "Solid Wireframe" - NVIDIA paper from a few years ago.
The idea is basically this: include a geometry shader that generates barycentric coordinates as a varying for each vertex. In your fragment / pixel shader, check the value of the bary components. If they are below a certain threshold, you color the pixel however you'd like your wireframe to be colored. Otherwise, light it as you normally would.
Given a face with vertices A,B,C, you'd generate barycentric values of:
A: 1,0,0
B: 0,1,0
C: 0,0,1
In your fragment shader, see if any component of the bary for that fragment is less than, say, 0.1. If so, it means that it's close to one of the edges of the face. (Which component is below the threshold will also tell you which edge, if you want to get fancy.)
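The fragment-shader side of that might look like the following sketch, assuming the geometry shader passes down a vBary varying and some vTerrainColor from your existing shading (the 0.1 threshold and colors are placeholders):
#version 330 core
// Barycentric wireframe test: pixels near any edge of the face get the line color.
in vec3 vBary;           // (1,0,0), (0,1,0), (0,0,1) at the three corners
in vec3 vTerrainColor;   // whatever shading you already compute
out vec4 fragColor;

void main()
{
    float minBary = min(min(vBary.x, vBary.y), vBary.z);
    if (minBary < 0.1)
        fragColor = vec4(0.0, 0.0, 0.0, 1.0);   // close to an edge: wireframe color
    else
        fragColor = vec4(vTerrainColor, 1.0);   // otherwise shade as usual
}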
I'll see if I can find a link and update here in a few.
Note that the paper is also ~10 years old. There are ways to get bary coordinates without the geometry shader these days in some situations, and I'm sure there are other workarounds. (Geometry shaders have their place, but are not the greatest friend of performance.)
Also, while geom shaders come with a perf hit, they're significantly faster than a second pass to draw a wireframe. Drawing in wireframe mode in GL (or DX) carries a significant performance penalty because you're asking the rasterizer to simulate Bresenham's line algorithm. That's not how rasterizers work, and it is freaking slow.
This approach also solves any z-fighting issues that you may encounter with two passes.
If your mesh were a single triangle, you could skip the geometry shader and just pack the needed values into a vertex buffer. But, since vertices are shared between faces in any model other than a single triangle, things get a little complicated.
Or, for fun: do this as a post processing step. Look for high ddx()/ddy() (or dFdx()/dFdy(), depending on your API) values in your fragment shader. That also lets you make some interesting effects.
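One possible reading of that suggestion, as a sketch: sample a buffer from the main pass (depth here, but a normal or ID buffer works too) and flag pixels where the screen-space derivatives are large. depthTex, vUV and the 0.01 threshold are assumptions, not part of the original answer.
#version 330 core
// Post-process sketch: mark pixels where the sampled value changes sharply.
uniform sampler2D depthTex;
in vec2 vUV;
out vec4 fragColor;

void main()
{
    float d    = texture(depthTex, vUV).r;
    float edge = abs(dFdx(d)) + abs(dFdy(d));              // large where neighbours differ a lot
    fragColor  = vec4(vec3(1.0 - step(0.01, edge)), 1.0);  // black mask on discontinuities
}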
Given that you have a vertex buffer containing all the vertices of your grid, make an index buffer that utilizes the vertex buffer but instead of making groups of 3 for triangles, use pairs of 2 for line segments. This will be a Line List and should contain all the pairs that make up the squares of the grid. You could generate this list automatically in your program.
Rough algorithm for rendering:
Render your terrain as normal
Switch your primitive topology to Line List
Assign the new index buffer
Disable depth testing (or add a small height offset to each point in the vertex shader so the grid appears above the terrain)
Render the Line List
This should produce the effect you are looking for of the terrain drawn and shaded with a square grid on top of it. You will need to put a switch (via a constant buffer) in your pixel shader that tells it when it is rendering the grid so it can draw the grid black instead of using the height values.
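In GLSL terms, that "switch" is just a uniform (the constant-buffer equivalent), along these lines; drawingGrid, vHeight and the colors are placeholder names and values:
#version 330 core
// One fragment shader serves both draws: the uniform tells it which pass this is.
uniform bool drawingGrid;
in float vHeight;
out vec4 fragColor;

void main()
{
    vec3 terrain = mix(vec3(0.1, 0.4, 0.1), vec3(0.8, 0.8, 0.7),
                       clamp(vHeight, 0.0, 1.0));        // height-based coloring
    fragColor = drawingGrid ? vec4(0.0, 0.0, 0.0, 1.0)   // line-list pass: black grid
                            : vec4(terrain, 1.0);        // terrain pass: normal shading
}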

What other ways can I draw the outline of an object?

I have a very simple case. I want to draw the outline of an object, in this case I think they'll only be spheres, but I'd like to not rely on this.
I've found methods such as:
Draw the object to the stencil buffer
Turn on wireframe mode
Draw the object with thick lines
Draw the real object on the top
The problem I have with this method is that my models have a lot of vertices, and this requires me to draw it three times. I'm getting some significant frame rate drops.
Are there other ways to do this? My next guess would be to draw circles on the final render as a post-process effect, seeing as I'm only looking at spheres. But I'd much much rather do this for more than just spheres.
Is there something I can do in an existing shader to outline?
I'd also like the outline to appear when the object is behind others.
I'm using OpenGL 4.3.
I know 3 ways of doing contour rendering:
Using the stencil buffer
The first one is a slightly modified version of the one you described: you first render your object as normal with the stencil buffer on, then you slightly scale it up and render it in a plain color where the stencil buffer is not filled. You can find an explanation of this technique here.
Using image processing techniques
The second one is a post-process step, where you look for edges using image-processing filters (like the Sobel operator) and you composite your rendering with your contour-detection result. The good thing about the Sobel operator is that it is separable; this means you can do the detection in two 1D passes, which is more efficient than doing one 2D pass.
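To make the separability concrete, here is a sketch of the first (horizontal) 1D pass; sceneTex, texelSize and vUV are assumed names, and the intermediate target should be a signed/float format such as RG16F because the derivative in .r can be negative:
#version 330 core
// Horizontal half of a separable Sobel: .r holds the x-derivative, .g the x-smoothing.
uniform sampler2D sceneTex;
uniform vec2 texelSize;      // 1.0 / resolution
in vec2 vUV;
out vec2 outRG;

float luma(float dx)
{
    return dot(texture(sceneTex, vUV + vec2(dx, 0.0) * texelSize).rgb,
               vec3(0.299, 0.587, 0.114));
}

void main()
{
    float l = luma(-1.0), c = luma(0.0), r = luma(1.0);
    outRG = vec2(r - l,             // [-1 0 1]: derivative along x
                 l + 2.0 * c + r);  // [ 1 2 1]: smoothing along x
}
// The second (vertical) pass then applies [1 2 1] to .r (giving Gx) and
// [-1 0 1] to .g (giving Gy), and outputs length(vec2(Gx, Gy)).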
Using the geometry shader
Last but not least, you can use the geometry shader to extract the silhouette of your mesh. The idea is to use the adjacent vertices of a triangle to detect if one of the edges of this triangle (let's call it t0) is a contour.
To do this, for each edge ei of t0:
build a new triangle ti using the vertices of ei and its associated vertex,
compute the normal ni of ti and the normal n0 of t0, and transform them both into view space (the silhouette depends on the point of view),
compute the dot product between n0 and ni. If its value is negative, this means that the normals are in opposite directions and the edge ei is a silhouette edge.
You then build a quad around ei, emit each of its vertices and color them the way you want in the fragment shader.
This is the basic idea of this algorithm. Using only this will result in aliased edges, with holes between them, but this can be improved. You can read this paper, and this blog post, for further information.
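For reference, a sketch of such a geometry shader, using the common front-facing/back-facing formulation of the same test (not the exact code from the paper or blog post). It assumes the draw call uses GL_TRIANGLES_ADJACENCY, that the vertex shader outputs view-space positions as vViewPos, and that uProj and the 0.01 fin width are placeholders; boundary edges and fin winding are not handled.
#version 330 core
// With triangles_adjacency, vertices 0,2,4 form the triangle t0 and
// vertices 1,3,5 are the opposite vertices of its three neighbours.
layout(triangles_adjacency) in;
layout(triangle_strip, max_vertices = 12) out;

uniform mat4 uProj;
in vec3 vViewPos[];

vec3 faceNormal(vec3 a, vec3 b, vec3 c)
{
    return cross(b - a, c - a);   // unnormalized is fine for a sign test
}

void emitEdgeFin(vec3 a, vec3 b)
{
    // Extrude the edge a->b sideways in view space into a thin quad.
    vec3 viewDir = normalize(a + b);                    // camera sits at the origin
    vec3 side    = normalize(cross(b - a, viewDir)) * 0.01;
    gl_Position = uProj * vec4(a,        1.0); EmitVertex();
    gl_Position = uProj * vec4(a + side, 1.0); EmitVertex();
    gl_Position = uProj * vec4(b,        1.0); EmitVertex();
    gl_Position = uProj * vec4(b + side, 1.0); EmitVertex();
    EndPrimitive();
}

void main()
{
    vec3 v0 = vViewPos[0], v1 = vViewPos[1], v2 = vViewPos[2];
    vec3 v3 = vViewPos[3], v4 = vViewPos[4], v5 = vViewPos[5];

    vec3 n0 = faceNormal(v0, v2, v4);          // normal of the central triangle t0
    if (dot(n0, -v0) <= 0.0) return;           // only consider front-facing t0

    // An edge is a silhouette edge when the adjacent triangle faces away.
    if (dot(faceNormal(v0, v1, v2), -v0) <= 0.0) emitEdgeFin(v0, v2);
    if (dot(faceNormal(v2, v3, v4), -v2) <= 0.0) emitEdgeFin(v2, v4);
    if (dot(faceNormal(v4, v5, v0), -v4) <= 0.0) emitEdgeFin(v4, v0);
}
Remember to disable face culling when drawing the fins, since their winding is not controlled here.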
In my experience you get good results if you render the outlined object in white (unlit) to a texture as big as the final framebuffer, then draw a framebuffer-sized quad with that texture and have the fragment shader blur or otherwise process it and apply the desired color.
I have an example here

How can I apply a depth test to vertices (not fragments)?

TL;DR: I'm computing a depth map in a fragment shader and then trying to use that map in a vertex shader to see if vertices are 'in view' or not, but the vertices don't line up with the fragment texel coordinates. The imprecision causes rendering artifacts, and I'm seeking alternatives for filtering vertices based on depth.
Background. I am very loosely attempting to implement a scheme outlined in this paper (http://dash.harvard.edu/handle/1/4138746). The idea is to represent arbitrary virtual objects as lots of tangent discs. While they wanted to replace triangles in some graphics card of the future, I'm implementing this on conventional cards; my discs are just fans of triangles ("Discs") around center points ("Points").
This is targeting WebGL.
The strategy I intend to use, similar to what's done in the paper, is:
Render the Discs in a Depth-Only pass.
In a second (or more) pass, compute what's visible based solely on which Points are "visible" - ie their depth is <= the depth from the Depth-Only pass at that x and y.
I believe the authors of the paper used a Gaussian blur on top of the equivalent of a GL_POINTS render applied to the Points (i.e. re-using the depth buffer from the Depth-Only pass, not clearing it) to actually render their object. It's hard to say: the process is unfortunately a one-line comment, and I'm unsure of how to duplicate it in WebGL anyway (a naive Gaussian blur will just blur in the background pixels that weren't touched by the GL_POINTS call).
Instead, I'm hoping to do something slightly different, by re-rendering the discs in a second pass as cones (the center of the disc becomes the apex of the cone; think "close the umbrella"), effectively computing a Voronoi diagram on the surface of the object (à la the red book, http://www.glprogramming.com/red/chapter14.html#name19). The idea is that an output pixel takes the color of the first disc to reach it as the radii grow from 0 to their natural size.
The crux of the problem is that only discs whose centers pass the depth test in the first pass should be allowed to carry on (as cones) to the 2nd pass. Because what's true at the disc center applies to the whole disc/cone, I believe this requires evaluating a depth test at a vertex or object level, and not at a fragment level.
Since WebGL support for accessing depth buffers is still poor, in my first pass I am packing depth info into an RGBA framebuffer in a fragment shader. I then intended to use this in the vertex shader of the second pass via a sampler2D; any disc center that was closer than the corresponding texture2D() lookup would be allowed on to the second pass; otherwise I would hack "discarding" the vertex (its alpha would be set to 0, or some flag set that would cause discard of the fragments associated with the disc/cone, etc.).
This actually kind of worked but it caused horrendous z-fighting between discs that were close together (very small perturbations wildly changed which discs were visible). I believe there is some floating point error between depth->rgba->depth. More importantly, though, the depth texture is being set by fragment texel coords, but I'm looking up vertices, which almost certainly don't line up exactly on top of relevant texel coordinates; so I get depth +/- noise, essentially, and the noise is the issue. Adding or subtracting .000001 or something isn't sufficient: you trade Type I errors for Type II. My render became more accurate when I switched from NEAREST to LINEAR for the depth texture interpolation, but it still wasn't good enough.
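For reference, the RGBA packing/unpacking I mean is roughly the standard 8-bit-per-channel encoding (a sketch, not necessarily my exact code):
// Pass 1, fragment shader: encode a depth value in [0, 1) into an RGBA8 target.
vec4 packDepth(float depth)
{
    vec4 enc = fract(depth * vec4(1.0, 255.0, 65025.0, 16581375.0));
    enc -= enc.yzww * vec4(1.0 / 255.0, 1.0 / 255.0, 1.0 / 255.0, 0.0);
    return enc;
}

// Pass 2, vertex shader: decode after the texture2D()/texture2DLod() lookup
// (requires vertex texture fetch, i.e. MAX_VERTEX_TEXTURE_IMAGE_UNITS > 0).
float unpackDepth(vec4 enc)
{
    return dot(enc, vec4(1.0, 1.0 / 255.0, 1.0 / 65025.0, 1.0 / 16581375.0));
}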
How else can I determine which disc's centers would be visible in a given render, so that I can do a second vertex/fragment (or more) pass focused on objects associated with those points? Or: is there a better way to go about this in general?

Reverse triangle lookup from affected pixels?

Assume I have a 3D triangle mesh, and an OpenGL framebuffer to which I can render the mesh.
For each rendered pixel, I need to build a list of triangles that rendered to that pixel, even those that are occluded.
The only way I could think of doing this is to individually render each triangle from the mesh, then go through each pixel in the framebuffer to determine if it was affected by the triangle (using the depth buffer or a user-defined fragment shader output variable). I would then have to clear the framebuffer and do the same for the next triangle.
Is there a more efficient way to do this?
I considered, for each fragment in the fragment shader, writing out a triangle identifier, but GLSL doesn't allow outputting a list of integers.
For each rendered pixel, I need to build a list of triangles that rendered to that pixel, even those that are occluded.
You will not be able to do it for the entire scene. There's no structure that allows you to associate a "list" with every pixel.
You can get the list of primitives that affected a certain area using the selection buffer (see glRenderMode(GL_SELECT), legacy OpenGL).
You can get the scene's depth complexity using stencil-buffer techniques.
If there are 8 triangles total, then you can get the list of triangles that affected every pixel using the stencil buffer: basically, assign a unique (1 << n) stencil bit to each triangle and OR it into the existing stencil value (there is no OR stencil op, but setting glStencilMask to that triangle's bit and using GL_REPLACE has the same effect).
But to solve it in the generic case, you'll need your own rasterizer and LOTS of memory to store per-pixel triangle lists. The problem is quite similar to a multi-layered depth buffer, after all.
Is there a more efficient way to do this?
Actually, yes, but it is not hardware accelerated and OpenGL has nothing to do with it. Store all of your triangles in an octree. Launch a "ray" through that octree for every pixel you want to test, and record the triangles this ray hits. That's a collision-detection problem.