I'm trying to code a texture reprojection using a UV gBuffer (this is a texture that contains the desired UV value for mapping at that pixel).
I think this should be easy to understand just by seeing this picture (I cannot attach it due to low reputation):
http://www.andvfx.com/wp-content/uploads/2012/12/3-objectes.jpg
The first image (the black/yellow/red/green one) is the UV gBuffer; it represents the UV values. The second one is the diffuse channel, and the third is the desired result.
Doing this in OpenGL is pretty trivial.
Draw a simple rectangle and use this pseudo-code as the fragment shader:
vec2 newUV = texture(UVgbufferTex, gl_TexCoord[0].xy).xy;   // look up the target UV for this pixel
vec3 finalcolor = texture(DIFFgbufferTex, newUV).rgb;       // sample the diffuse texture there
gl_FragColor = vec4(finalcolor, 1.0);
OpenGL takes care of selecting the mipmap level, the anisotropic filtering, etc., whereas if I do this in a regular CPU process I only fetch a single pixel for finalcolor, so my result is aliased ("crispy").
Any advice here? I was wondering about manually computing a kind of mipmap chain and selecting the level by checking the neighbouring pixels, but I'm not sure that is the right way. I'm also unsure how to handle the fact that the UVs could be changing fast horizontally but slowly vertically, or vice versa.
In fact I don't know how this is computed internally in OpenGL/DirectX; I've used this kind of code for a long time but never thought about the internals.
You are on the right track.
To select mipmap level or apply anisotropic filtering you need a gradient. That gradient comes naturally in GL (in fragment shaders) because it is computed for all interpolated variables after rasterization. This all becomes quite obvious if you ever try to sample a texture using mipmap filtering in a vertex shader.
You can compute the LOD (lambda) as such:
ρ = max( sqrt((du/dx)² + (dv/dx)²), sqrt((du/dy)² + (dv/dy)²) )
λ = log₂ ρ
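In a fragment shader you could reproduce that explicitly with the screen-space derivative functions. Here is a rough sketch, assuming a texSize uniform holding the diffuse texture's dimensions (that uniform is my addition, not something from the question), and the sampler names from the question:

uniform sampler2D UVgbufferTex;
uniform sampler2D DIFFgbufferTex;
uniform vec2 texSize;                    // assumed: dimensions of DIFFgbufferTex in texels

void main() {
    vec2 newUV = texture(UVgbufferTex, gl_TexCoord[0].xy).xy;
    vec2 dx = dFdx(newUV * texSize);     // texel-space change per screen pixel, horizontally
    vec2 dy = dFdy(newUV * texSize);     // and vertically
    float rho    = max(length(dx), length(dy));
    float lambda = log2(rho);            // the LOD from the formula above
    gl_FragColor = vec4(textureLod(DIFFgbufferTex, newUV, lambda).rgb, 1.0);
}

On the CPU you would estimate the same gradients from the UV differences between neighbouring gbuffer pixels and then blend between the two nearest mip levels yourself.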
The mip level is picked based on the size on screen after reprojection. After you emit a triangle, check its rasterized size and pick the appropriate mipmap.
As for filtering, it's not that hard to implement e.g. bilinear filtering manually.
Related
TL;DR I'm computing a depth map in a fragment shader and then trying to use that map in a vertex shader to see if vertices are 'in view' or not, but the vertices don't line up with the fragment texel coordinates. The imprecision causes rendering artifacts, and I'm seeking alternatives for filtering vertices based on depth.
Background. I am very loosely attempting to implement a scheme outlined in this paper (http://dash.harvard.edu/handle/1/4138746). The idea is to represent arbitrary virtual objects as lots of tangent discs. While they wanted to replace triangles in some graphics card of the future, I'm implementing this on conventional cards; my discs are just fans of triangles ("Discs") around center points ("Points").
This is targeting WebGL.
The strategy I intend to use, similar to what's done in the paper, is:
Render the Discs in a Depth-Only pass.
In a second (or more) pass, compute what's visible based solely on which Points are "visible" - ie their depth is <= the depth from the Depth-Only pass at that x and y.
I believe the authors of the paper used a gaussian blur on top of the equivalent of a GL_POINTS render applied to the Points (ie re-using the depth buffer from the DepthOnly pass, not clearing it) to actually render their object. It's hard to say: the process is unfortunately a one line comment, and I'm unsure of how to duplicate it in WebGL anyway (a naive gaussian blur will just blur in the background pixels that weren't touched by the GL_POINTS call).
Instead, I'm hoping to do something slightly different, by rerendering the discs in a second pass instead as cones (center of disc becomes apex of cone, think "close the umbrella") and effectively computing a voronoi diagram on the surface of the object (ala redbook http://www.glprogramming.com/red/chapter14.html#name19). The idea is that an output pixel is the color value of the first disc to reach it when growing radiuses from 0 -> their natural size.
The crux of the problem is that only discs whose centers pass the depth test in the first pass should be allowed to carry on (as cones) to the 2nd pass. Because what's true at the disc center applies to the whole disc/cone, I believe this requires evaluating a depth test at a vertex or object level, and not at a fragment level.
Since WebGL support for accessing depth buffers is still poor, in my first pass I am packing depth info into an RGBA Framebuffer in a fragment shader. I then intended to use this in the vertex shader of the second pass via a sampler2D; any disc center that was closer than the relative texture2D() lookup would be allowed on to the second pass; otherwise I would hack "discarding" the vertex (its alpha would be set to 0 or some flag set that would cause discard of fragments associated with the disc/cone or etc).
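For reference, a common way to pack a [0,1) value into an RGBA8 target and unpack it later looks like the sketch below; this is a generic illustration of the technique, not necessarily the exact encoding I'm using:

// Pack a highp float in [0,1) into four 8-bit channels (written in the first pass).
vec4 packDepth(float depth) {
    vec4 enc = fract(depth * vec4(1.0, 255.0, 65025.0, 16581375.0));
    enc -= enc.yzww * vec4(1.0/255.0, 1.0/255.0, 1.0/255.0, 0.0);
    return enc;
}

// Recover the float in the second pass after a texture2D() lookup.
float unpackDepth(vec4 rgba) {
    return dot(rgba, vec4(1.0, 1.0/255.0, 1.0/65025.0, 1.0/16581375.0));
}

Each channel only survives as 8 bits, so the round trip inherently loses some precision.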
This actually kind of worked but it caused horrendous z-fighting between discs that were close together (very small perturbations wildly changed which discs were visible). I believe there is some floating point error between depth->rgba->depth. More importantly, though, the depth texture is being set by fragment texel coords, but I'm looking up vertices, which almost certainly don't line up exactly on top of relevant texel coordinates; so I get depth +/- noise, essentially, and the noise is the issue. Adding or subtracting .000001 or something isn't sufficient: you trade Type I errors for Type II. My render became more accurate when I switched from NEAREST to LINEAR for the depth texture interpolation, but it still wasn't good enough.
How else can I determine which disc's centers would be visible in a given render, so that I can do a second vertex/fragment (or more) pass focused on objects associated with those points? Or: is there a better way to go about this in general?
I've just added an uv-scaling feature and I've discovered that the mipmapping is not working as I expected. I pass this scale to the shader which simply multiplies the input texcoord with this value.
However the result is "jaggy", like when I don't have any mipmaps. This is because I modified the texture coordinates and the driver doesn't know which mip level it should choose.
How should I handle this kind of situation?
In WebGL, is it possible to write to the fragment's depth value or control the fragment's depth value in some other way?
As far as I could find, gl_FragDepth is not present in webgl 1.x, but I am wondering if there is any other way (extensions, browser specific support, etc) to do it.
What I want to achieve is to have a ray-traced object play along with other elements drawn using the usual model, view, projection matrices.
There is the extension EXT_frag_depth
Because it's an extension it might not be available everywhere so you need to check it exists.
var isFragDepthAvailable = gl.getExtension("EXT_frag_depth");
If isFragDepthAvailable is not falsy then you can enable it in your shaders with
#extension GL_EXT_frag_depth : enable
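A minimal fragment shader using it might look like the sketch below; u_rayHitDepth is a hypothetical uniform standing in for whatever window-space depth your ray tracer computes:

#extension GL_EXT_frag_depth : enable
precision highp float;

uniform float u_rayHitDepth;   // hypothetical: depth of the ray hit, already mapped to [0,1]

void main() {
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);   // whatever shading your ray tracer produces
    gl_FragDepthEXT = u_rayHitDepth;           // custom depth so the object composites correctly
}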
Otherwise you can manipulate gl_Position.z in your vertex shader though I suspect that's not really a viable solution for most needs.
Brad Larson has a clever workaround for this that he uses in Molecules (full blog post):
To work around this, I implemented my own custom depth buffer using a
frame buffer object that was bound to a texture the size of the
screen. For each frame, I first do a rendering pass where the only
value that is output is a color value corresponding to the depth at
that point. In order to handle multiple overlapping objects that might
write to the same fragment, I enable color blending and use the
GL_MIN_EXT blending equation. This means that the color components
used for that fragment (R, G, and B) are the minimum of all the
components that objects have tried to write to that fragment (in my
coordinate system, a depth of 0.0 is near the viewer, and 1.0 is far
away). In order to increase the precision of depth values written to
this texture, I encode depth to color in such a way that as depth
values increase, red fills up first, then green, and finally blue.
This gives me 768 depth levels, which works reasonably well.
EDIT: Just realized WebGL doesn't support min blending, so not very useful. Sorry.
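For what it's worth, the encoding itself is simple to reconstruct for platforms where MIN blending is available; the sketch below is my own reading of the "red fills up first, then green, then blue" description, not Brad Larson's actual code. Each channel is monotonically non-decreasing in depth, which is what lets component-wise MIN blending keep the nearest depth:

// Encode depth in [0,1] (0 = near, 1 = far) into 768 levels across the RGB channels.
vec3 encodeDepth(float depth) {
    float v = clamp(depth, 0.0, 1.0) * 3.0;
    return vec3(clamp(v,       0.0, 1.0),   // first third of the range: red ramps up
                clamp(v - 1.0, 0.0, 1.0),   // second third: green ramps up
                clamp(v - 2.0, 0.0, 1.0));  // final third: blue ramps up
}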
I'm working on a Minecraft-like engine as a hobby project to see how far the concept of voxel terrains can be pushed on modern hardware and OpenGL >= 3. So, all my geometry consists of quads, or squares to be precise.
I've built a raycaster to estimate ambient occlusion, and use the technique of "bent normals" to do the lighting. So my normals aren't perpendicular to the quad, nor do they have unit length; rather, they point roughly towards the space where least occlusion is happening, and are shorter when the quad receives less light. The advantage of this technique is that it just requires a one-time calculation of the occlusion, and is essentially free at render time.
However, I run into trouble when I try to assign different normals to different vertices of the same quad in order to get smooth lighting. Because the quad is split up into triangles, and linear interpolation happens over each triangle, the result of the interpolation clearly shows the presence of the triangles as ugly diagonal artifacts.
The problem is that OpenGL uses barycentric interpolation over each triangle, which is a weighted sum over 3 out of the 4 corners. Ideally, I'd like to use bilinear interpolation, where all 4 corners are being used in computing the result.
I can think of some workarounds:
Stuff the normals into a 2x2 RGB texture, and let the texture processor do the bilinear interpolation. This happens at the cost of a texture lookup in the fragment shader. I'd also need to pack all these mini-textures into larger ones for efficiency.
Use vertex attributes to attach all 4 normals to each vertex. Also attach some [0..1] coefficients to each vertex, much like texture coordinates, and do the bilinear interpolation in the fragment shader. This happens at the cost of passing 4 normals to the shader instead of just 1.
I think both these techniques can be made to work, but they strike me as kludges for something that should be much simpler. Maybe I could transform the normals somehow, so that OpenGL's interpolation would give a result that does not depend on the particular triangulation used.
(Note that the problem is not specific to normals; it is equally applicable to colours or any other value that needs to be smoothly interpolated across a quad.)
Any ideas how else to approach this problem? If not, which of the two techniques above would be best?
As you clearly understand, the triangle interpolation that GL will do is not what you want.
So the normal data can't be coming directly from the vertex data.
I'm afraid the solutions you're envisioning are about the best you can achieve. And no matter which you pick, you'll need to pass down [0..1] coefficients from the vertex stage to the fragment shader (this includes the 2x2-texture option: you need them as texture coordinates).
There are some tricks you can do to somewhat simplify the process, though.
Using the vertex ID can help you out with finding which vertex "corner" to pass from vertex to fragment shader (our [0..1] values). A simple bit test on the lowest 2 bits can let you know which corner to pass down, without actual vertex data input. If packing texture data, you still need to pass an identifier inside the texture, so this may be moot.
If you use 2x2 textures to allow the interpolation, there are (were?) some gotchas. Some texture interpolators don't necessarily give a high-precision interpolation if the source is low precision to begin with. This may require you to change the texture data type to something of higher precision to avoid banding artifacts.
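To make the second option from the question concrete, a fragment-shader sketch of the manual bilinear mix could look like this (names are illustrative; the four corner normals are attached identically to every vertex of the quad, and coeff is the per-vertex [0..1] coordinate pair):

#version 330 core
in vec3 n00;        // corner normals: the same four values are attached to every vertex of the quad
in vec3 n10;
in vec3 n01;
in vec3 n11;
in vec2 coeff;      // [0..1] position within the quad, interpolated like a texture coordinate
out vec4 fragColor;

void main() {
    vec3 bottom = mix(n00, n10, coeff.x);
    vec3 top    = mix(n01, n11, coeff.x);
    vec3 normal = mix(bottom, top, coeff.y);   // true bilinear result, independent of the triangulation
    // Use 'normal' (its length encodes the occlusion) in the lighting calculation; visualised here.
    fragColor = vec4(0.5 * normalize(normal) + 0.5, 1.0);
}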
Well... as you're using the bent-normals technique, the best way to improve the result is to pre-tessellate the mesh and recompute the occlusion on the more finely tessellated mesh.
Another way would be some tricks within the pixel shader... One possibility: you can interpolate the texture on your own (and not use the built-in interpolator) in the pixel shader, which could help you a lot. And you're not limited to bilinear interpolation; you could do better, e.g. bicubic interpolation ;)
I know that normal mapping describes the process of adding detail to meshes without increasing the polygon count, and that this is achieved by using specific normal textures for manipulating the way light is applied to the object. Okay.
But what is bump mapping then? Is it just another term for normal mapping?
How do the visual results compare? Can both techniques be combined?
Bump Mapping describes a general technique for simulating bumps and wrinkles on the surface of an object. This is normally accomplished by manipulating surface normals when doing lighting calculations.
Normal Mapping is a variation of Bump Mapping in which the surface normals are provided via a texture, with normals embedded into the RGB channels of the image.
Other techniques, such as Parallax Mapping, are also Bump Mapping techniques because they distort the surface normals.
To answer the second part of the question, they could fairly easily be combined. The base surface normals could be determined from a normal map and then modified via another bump-mapping technique.
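As a tiny illustration of the normal-mapping half of that, the usual decode from the RGB channels back to a tangent-space normal looks roughly like this (a generic sketch; normalMap and uv are placeholder names):

// The texture stored (normal * 0.5 + 0.5), so invert that mapping and renormalise.
vec3 n = normalize(texture2D(normalMap, uv).rgb * 2.0 - 1.0);
// 'n' is now the tangent-space normal used in the lighting calculation,
// and it could be perturbed further by another bump-mapping technique.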
Bump mapping was originally suggested by Jim Blinn back in 1978. His system basically works by perturbing the normal on a surface by using the height of that texel and the height of the surrounding texels.
This is quite similar to DUDV bump mapping (you may recall the original environment-mapped bump mapping introduced in DX6, which was DUDV). That works by pre-calculating the derivatives from the height map so that you can skip the first stage of the calculation (as it does not change each frame).
Normal mapping is a very similar technique that works by simply replacing the normal at each texel position. Conceptually it's much simpler.
There is another technique that produces "similar" results. It is called emboss bump mapping. This method works by using multipass rendering: you end up subtracting a grayscale heightmap from the previous pass, offset by a small amount based on the light direction.
There are other ways of emulating surface topology as well.
Elevation mapping uses the height map as an alpha texture and then renders multiple slices through that texture with a different alpha value to simulate the change in height. If not performed correctly, however, the slices can be very visible.
Displacement mapping works by generating a 3D mesh that uses the texture as its basis. This, obviously, massively increases your vertex count.
Steep parallax, relief mapping, etc. are the newest techniques. They work by casting a ray through the heightmap until it intersects the surface. This has the big advantage that if a bump should occlude the texture behind it, it now does: the ray stops where it first hits the heightmap and so always displays the "closest" texel.
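As a rough sketch of that ray-casting idea (steep parallax mapping), the fragment shader steps the view ray through the height field in fixed increments until it dips below the stored height; the names and step count here are illustrative:

// heightMap stores depth below the surface (0 = top, 1 = deepest).
// viewDirTangent is the tangent-space view direction; scale controls the apparent depth.
vec2 parallaxUV(sampler2D heightMap, vec2 uv, vec3 viewDirTangent, float scale) {
    const int numSteps = 32;
    float layerDepth   = 1.0 / float(numSteps);
    vec2  deltaUV      = (viewDirTangent.xy / viewDirTangent.z) * scale / float(numSteps);
    float currentDepth = 0.0;
    for (int i = 0; i < numSteps; ++i) {
        if (currentDepth >= texture(heightMap, uv).r)
            break;                  // the ray has gone below the surface: first hit found
        uv           -= deltaUV;
        currentDepth += layerDepth;
    }
    return uv;                      // texture coordinate of the "closest" texel along the ray
}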