How can I deterministically detect the shader fragment location in its 2x2 pixel quad? - glsl

I've been trying to utilize the techniques in Eric Penner's "Shader Amortization using Pixel Quad Message Passing" from GPU Pro 2, Chapter VI.2. The basic idea is that modern GPUs process fragment shaders in 2x2 fragment quads, and you can use ddx() and ddy() to get the value of some_var at all four fragments, as long as the following hold:
Your GPU supports high-quality derivatives
You know which fragment you're processing (top-left, top-right, bottom-left, bottom-right)
This opens up a lot of opportunities for fragment shader optimization (like distributing texture fetches over a 2x2 pixel quad) that you'd need Compute Shaders to beat.
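To illustrate what the book's technique buys you, here is a minimal GLSL sketch (my own illustration, not Penner's code) of reading a variable's value at the other fragments of the quad, assuming you already know this fragment's quad-relative position. dFdx/dFdy are the GLSL equivalents of ddx/ddy.

```glsl
// Sketch only: fetch a value at the other fragments of this fragment's 2x2 quad.
// quadPos is assumed to be this fragment's position within the quad, each
// component 0.0 or 1.0 (0 = left/bottom, 1 = right/top).
void quadReadNeighbors(float v, vec2 quadPos,
                       out float vAcross, out float vDown, out float vDiag)
{
    vec2 s = vec2(1.0) - 2.0 * quadPos;          // +1 toward the neighbor, per axis
    vAcross = v + s.x * dFdx(v);                 // horizontal neighbor in the quad
    vDown   = v + s.y * dFdy(v);                 // vertical neighbor in the quad
    vDiag   = v + s.x * dFdx(v) + s.y * dFdy(v); // diagonal neighbor (approximate
                                                 // unless v varies linearly across the quad)
}
```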
My problem is this:
I can't deterministically detect which fragment I'm processing. Ideally, each fragment block would start at even-numbered output pixel coords like (0, 0), (2, 0), ... (1024, 1024), ..., so you'd just need to check whether the output pixel x and y coords are even or odd to know which fragment you're currently processing. The method Penner uses in the book assumes this works...but it seems to be going wrong for me.
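For reference, the even/odd check described above looks something like this in GLSL (a sketch of the assumed approach, not a quote from the book):

```glsl
// gl_FragCoord.xy sits at pixel centers (x.5, y.5), so floor() gives integer pixel coords.
vec2 pixel   = floor(gl_FragCoord.xy);
vec2 quadPos = mod(pixel, 2.0);   // assumed quad-relative position: (0,0), (1,0), (0,1) or (1,1)
```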
Unfortunately, my 2x2 fragment quads appear to be starting in nondeterministic places: I've seen them start at (even, even), (even, odd), and (odd, even). I can't remember if I've seen (odd, odd) or not, but anyway, the arrangement seems to depend on a myriad of factors I don't understand, including the output resolution and shader specifics. (I'm testing on an 8800 GTS, in case anyone's wondering.)
Does anyone know what might be causing this nondeterminism or have any documentation on it? I understand there's virtually no official standardization in this area, but I'm more interested in how things work in practice on modern desktop-level GPUs, and I'm hoping there's a way to get this technique to work. If no one knows how to reason about the even/odd start behavior, does anyone know any other way of determining the current fragment's relative location in its 2x2 quad?
Thanks :)

As it turns out, the premise of my question was mostly wrong:
The 2x2 fragment quads DO almost always start on even pixel numbers...as long as the output resolution is even-numbered.
If the output resolution is odd-numbered (a possibility with the underlying program I'm working with), things can get more complicated, for obvious reasons. I don't expect there's any uniformity here across drivers/GPU's/etc. either, but my current tests (which themselves may still be buggy) appear to demonstrate 2x2 pixel quads starting at an odd pixel along the dimension with odd resolution, at least when the odd dimension is horizontal.
All of this weirdness helped obscure my bigger issue: the code I used to detect the fragment's location in the pixel quad was buggy. I tested by setting the texture coordinates equal within a pixel quad (set to the pixel quad center)...or so I thought. However, I calculated the screen coordinates from a full-screen quad whose uv mapping has the +v axis pointing downward, while the screenspace origin is at the bottom-left (it follows the top-right quadrant of Cartesian coordinates), and I forgot to invert the v-coordinate of the uv offset I used to find the pixel quad center. Many of my "nondeterministic" observations came from failing to check my assumptions while debugging and misinterpreting the results, particularly in combination with odd resolutions.
This was an embarrassing mistake I should have caught a lot sooner, but I figured I'd detail it as a warning to others to always double-check the direction of your vertical axis when you're dealing with opposite-facing coordinate frames. ;)
UPDATE:
I ran across a situation where 2x2 pixel quads started on even pixel numbers even when the resolution was odd. Thanks to the nondeterminism under odd resolutions, I had to work out another solution:
If you're deriving your screen pixel numbers from the uv coords of a fullscreen quad (for post-processing), the fragment location derived from this is only useful for arranging/placing shared samples between fragments, etc., not for the quad-pixel communication itself. You'll need to have screen pixel numbers with respect to the screenspace origin for that. You can derive these from vertex positions, or you can use ddx().x and ddy().y on the uv-based pixel numbers to find out their screen direction and mirror the fragment position in the appropriate direction from there.
Calculate the fragment location based on your screen pixel numbers (with respect to the true screenspace origin) and the assumption 2x2 pixel quads start on even pixels. (If you used uv-based pixel numbers, now is the time to mirror things.)
Do a ddx().x and ddy().y on the fragment location, and if they're negative in either direction, you know the pixel quad starts at an odd pixel number in that direction...so mirror in that direction (see the sketch after these steps).
If you calculate two fragment positions, one based on a uv origin and one based on a screen origin, use the uv-based one for reasoning about uv-based sample placement, and use the screen-based one for actually obtaining the values of a variable at neighboring fragments.
Profit.
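Here is a minimal GLSL sketch of steps 2-3 above (written from the description, with placeholder names, not the exact code I'll be releasing). It assumes gl_FragCoord gives pixel numbers relative to the true screenspace origin:

```glsl
// Returns this fragment's position within its 2x2 quad, robust to quads that
// start on odd pixel numbers.
vec2 quadPosition()
{
    vec2 screenPixel = floor(gl_FragCoord.xy);   // pixel numbers w.r.t. the screenspace origin
    vec2 quadPos     = mod(screenPixel, 2.0);    // assume quads start on even pixels

    // If the quad actually starts on an odd pixel along an axis, the derivative
    // of quadPos flips sign along that axis, so mirror it there.
    if (dFdx(quadPos.x) < 0.0) quadPos.x = 1.0 - quadPos.x;
    if (dFdy(quadPos.y) < 0.0) quadPos.y = 1.0 - quadPos.y;
    return quadPos;
}
```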
I'll post a link to my working MIT-licensed code once I release it on GitHub, along with usage examples (the speedup is unfortunately not what I expected, but whatever ;)). I'm just waiting to get done with a larger shader I'll be uploading along with it.

Related

How can I apply a depth test to vertices (not fragments)?

TL;DR: I'm computing a depth map in a fragment shader and then trying to use that map in a vertex shader to see if vertices are 'in view' or not, but the vertices don't line up with the fragment texel coordinates. The imprecision causes rendering artifacts, and I'm seeking alternatives for filtering vertices based on depth.
Background. I am very loosely attempting to implement a scheme outlined in this paper (http://dash.harvard.edu/handle/1/4138746). The idea is to represent arbitrary virtual objects as lots of tangent discs. While they wanted to replace triangles in some graphics card of the future, I'm implementing this on conventional cards; my discs are just fans of triangles ("Discs") around center points ("Points").
This is targeting WebGL.
The strategy I intend to use, similar to what's done in the paper, is:
Render the Discs in a Depth-Only pass.
In a second (or more) pass, compute what's visible based solely on which Points are "visible" - ie their depth is <= the depth from the Depth-Only pass at that x and y.
I believe the authors of the paper used a Gaussian blur on top of the equivalent of a GL_POINTS render applied to the Points (i.e. re-using the depth buffer from the Depth-Only pass, not clearing it) to actually render their object. It's hard to say: the process is unfortunately described in a one-line comment, and I'm unsure how to duplicate it in WebGL anyway (a naive Gaussian blur will just blur in the background pixels that weren't touched by the GL_POINTS call).
Instead, I'm hoping to do something slightly different: re-render the discs in a second pass as cones (center of disc becomes apex of cone, think "close the umbrella"), effectively computing a Voronoi diagram on the surface of the object (a la the Red Book, http://www.glprogramming.com/red/chapter14.html#name19). The idea is that an output pixel takes the color of the first disc to reach it as the radii grow from 0 to their natural size.
The crux of the problem is that only discs whose centers pass the depth test in the first pass should be allowed to carry on (as cones) to the 2nd pass. Because what's true at the disc center applies to the whole disc/cone, I believe this requires evaluating a depth test at a vertex or object level, and not at a fragment level.
Since WebGL support for accessing depth buffers is still poor, in my first pass I am packing depth info into an RGBA Framebuffer in a fragment shader. I then intended to use this in the vertex shader of the second pass via a sampler2D; any disc center that was closer than the relative texture2D() lookup would be allowed on to the second pass; otherwise I would hack "discarding" the vertex (its alpha would be set to 0 or some flag set that would cause discard of fragments associated with the disc/cone or etc).
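For reference, the kind of depth-to-RGBA packing described above usually looks something like this (a common WebGL workaround sketched here with placeholder code, not necessarily the poster's exact shader):

```glsl
// Pack a depth value in [0,1) into four 8-bit channels and unpack it again
// (GLSL ES 1.0). The 8-bit quantization loses precision, which may contribute
// to the depth->rgba->depth error mentioned below.
vec4 packDepth(float depth) {
    vec4 enc = fract(depth * vec4(1.0, 255.0, 65025.0, 16581375.0));
    enc -= enc.yzww * vec4(1.0 / 255.0, 1.0 / 255.0, 1.0 / 255.0, 0.0); // strip carried bits
    return enc;
}

float unpackDepth(vec4 rgba) {
    return dot(rgba, vec4(1.0, 1.0 / 255.0, 1.0 / 65025.0, 1.0 / 16581375.0));
}
```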
This actually kind of worked but it caused horrendous z-fighting between discs that were close together (very small perturbations wildly changed which discs were visible). I believe there is some floating point error between depth->rgba->depth. More importantly, though, the depth texture is being set by fragment texel coords, but I'm looking up vertices, which almost certainly don't line up exactly on top of relevant texel coordinates; so I get depth +/- noise, essentially, and the noise is the issue. Adding or subtracting .000001 or something isn't sufficient: you trade Type I errors for Type II. My render became more accurate when I switched from NEAREST to LINEAR for the depth texture interpolation, but it still wasn't good enough.
How else can I determine which disc's centers would be visible in a given render, so that I can do a second vertex/fragment (or more) pass focused on objects associated with those points? Or: is there a better way to go about this in general?

Texture tiling with continuous random offset?

I have a texture and a mesh. If I apply the texture to the mesh, it tiles continuously, as one would expect. The offset for each tile is equal.
The problem:
A non-tilable texture, or a texture with some outstanding elements, looks repetitive and cheap.
Example: (screenshot of the repeating texture not included)
Solution Attempt
My first attempt was to programmatically generate a texture the size of the mesh, with randomized offsets for each tile. Of course the size of the texture became a problem, let alone the GPU's limit on the maximum size of a single texture.
What I would like to do
I would like to know if there's a way to make a Unity shader or material that loads a single texture and tiles it with a random offset for each tile, doing the work only once to keep performance high.
I believe you might try one of the techniques invented by Inigo Quilez (http://www.iquilezles.org/www/articles/texturerepetition/texturerepetition.htm).
Basically, non-tilable textures and textures with some outstanding elements are different problems.
Non-tilable textures
There are 2 ways of solving it:
Fixing the texture itself;
Mirrored repeat can be used in some cases (see GL_MIRRORED_REPEAT)
Textures with some outstanding elements
This can be solved in the following ways (or conjunction of them):
Modifying the texture (this includes enlargement as well);
Using multitexturing;
Well, maybe mirrored repeat can be used as well in some cases.
Shifting texture coordinates randomly
Unfortunately, I can't think of any case of these 2 problems (except, maybe, white noise textures) where shifting texture coordinates is a solution.
You are looking at this problem the wrong way. All games face this issue. They hide it simply by a) varying textures a lot instead of texturing large areas with the same texture and b) through level design. Imagine this plane filled with barns, grass, trees, fences and whatnot - suddenly the mono-textured surface blends in with its surroundings. Camera angle also plays a huge role in this: try moving your camera close to the ground and the repeating texture becomes much less noticeable.
Your plane is just a very extreme example. You should not try to fix it at this point but rather continue to build your game. Or design your textures to repeat well without showing clear patterns; the extreme would be a flat-colored texture. Generally, large outdoor terrain textures simply have very little structure, almost like noise, and they don't use colors with any contrast, just shades of the same color.
Your offset idea won't work. It might work technically (though it may be inefficient), but random offsets can't cover up the patterns; instead they will create new ones, because the textures won't interpolate smoothly at their edges anymore, so you would clearly see a grid of squares. That, I guess, would be even uglier and more noticeable.
Lastly, you can increase texture size or scale (blurriness may need to be covered up as explained above). In relation to camera angle this would be the easiest, most effective fix, or at least an improvement.
Old thread, but relevant to many I think. You can do this in a shader by randomizing the vertex position on the XZ plane, or (better) the UV coordinates, based on the world-space position.
The texture will still tile.... but instead of being in a straight line... it will be in a random wiggly line. This is great for stuff like terrain, grass etc.... but obviously no good if you want to maintain straight lines in your textures.
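A minimal sketch of that idea (placeholder names, not the answerer's code): offset the tiling UVs by a smooth pseudo-random function of world-space position, so the tiling drifts along a wiggly line instead of a straight grid.

```glsl
// Perturb tiling UVs with a cheap, smooth function of world position.
// uWiggleAmount and the frequencies are arbitrary tuning values.
uniform float uWiggleAmount;   // e.g. 0.15

vec2 wiggleUV(vec2 uv, vec3 worldPos) {
    vec2 offset = vec2(sin(worldPos.x * 0.37 + worldPos.z * 1.13),
                       cos(worldPos.z * 0.41 - worldPos.x * 0.97));
    return uv + offset * uWiggleAmount;
}
```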
A second option is a diffuse-detail shader. It tiles one texture up close to the camera, and another when further away (which you can make softer / more blurry).
Third option... blend 2 textures together, with a different UV tiling scale on each (non-divisible, e.g. not scales 2 and 4, but 1 and 2.334556), so the pattern is harder to see.
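A sketch of that third option (placeholder names): sample the same texture at two tiling scales whose ratio is not a whole number and blend the results, so the combined pattern repeats far less visibly.

```glsl
uniform sampler2D uTex;
varying vec2 vUV;

void main() {
    vec4 a = texture2D(uTex, vUV * 1.0);        // base tiling scale
    vec4 b = texture2D(uTex, vUV * 2.334556);   // non-divisible second scale
    gl_FragColor = mix(a, b, 0.5);              // equal blend of the two samplings
}
```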

glsl pixel shader - distance to closest target pixel

I have a 2k x 1k image with randomly placed "target" pixels. These pixels are pure red.
In a frag/pixel shader, for each pixel that is not red (the target color), I need to find the distance to the closest red pixel. I'll use this distance value to create a gradient.
I found this answer, "Finding closest non-black pixel in an image fast", which seems the closest to my problem, but it's not GLSL-specific.
I have the option to send my red target pixels into the frag shader as a texture buffer array, but I think it would be cleaner if I didn't need to.
A shader cannot read and write to the same texture because that would introduce too many constraints and complexities about the sequence of execution and would make caching much more difficult. So you're talking about sending some data about the red pixels in and getting the distance information out.
Fragment shaders run in parallel, and it's much more expensive to perform random-access texture reads than to read from a location that is known outside of the shader, primarily due to pipelining considerations. The pre-programmable-pipeline pattern, where sampling coordinates are known at the vertices and then interpolated across the face of the geometry, is still the most efficient way to access a texture.
So, writing a shader that, for each pixel, searched outwards for a red pixel would be extremely inefficient. It's definitely possible, using much the same algorithm you link to, but probably not the smartest way to go about it.
Ideally you'd phrase things the other way around and use some sort of accumulation. So:
clear your output buffer to its maximal values;
for each red location:
for every output fragment, work out the distance from the location;
check what distance is already stored for that fragment;
if the new distance is less than that stored, replace the stored version.
The easiest way to do that in OpenGL is likely going to be to use a depth buffer, because that has the per-fragment steps (2) and (3) implemented directly in hardware.
So for each fragment you're going to calculate the distance from the current red fragment. You're going to output that as depth. When you're finished with all the red dots you can use the depth buffer as input to a shader that outputs appropriate colours.
To avoid 2000 red spots turning into a 2000-pass drawing algorithm which would quickly run up against memory bandwidth, you'll probably want to write a single shader that does a large number of red dots at once and outputs a single depth value.
You should check GL_MAX_UNIFORM_LOCATIONS to find out how many uniforms you can push at once. It's guaranteed to be at least 1024 on recent versions of desktop OpenGL. You'll probably want to generate your shader dynamically.
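A rough GLSL sketch of one such pass (placeholder names; the batch size and normalization are assumptions, not from the original answer): each pass takes a batch of red-pixel positions as uniforms, computes the minimum distance to them, and writes it as depth so the hardware depth test keeps the smallest value across passes.

```glsl
// One accumulation pass: minimum distance from this fragment to a batch of red pixels.
#define DOTS_PER_PASS 64
uniform vec2  uRedDots[DOTS_PER_PASS];   // red-pixel coordinates for this batch
uniform float uMaxDist;                  // value used to normalize distance into [0,1]

void main() {
    float minDist = uMaxDist;
    for (int i = 0; i < DOTS_PER_PASS; ++i) {
        minDist = min(minDist, distance(gl_FragCoord.xy, uRedDots[i]));
    }
    // With glDepthFunc(GL_LESS), only the smallest distance seen so far survives.
    gl_FragDepth = clamp(minDist / uMaxDist, 0.0, 1.0);
}
```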
Why don't you Gaussian blur the whole image with a large radius, but at each iteration keep adding the red pixels back in at full intensity so they bleed outward? The red channel of the final blur would be your distance values - the higher values are closer to the red pixels. It's an approximation, but it lets you make use of heavily optimised blur shaders.

How to render a textured polygon on top of another?

Let's say I have 2 textured triangles.
I want to draw one triangle over the other one, such that the top one is basically laying on top of the second one.
Now technically they are on the same plane, but they do not share the same "space" (they do not intersect), though visually it is tough to tell at a certain distance.
Basically, when these triangles are very close together (in parallel) I see texture "artifacts". I should ONLY see the triangle that is on top, but what I'm seeing is that the triangle in the background tends to "bleed" through.
Is there a way to alleviate this side effect, like increasing the depth precision or something? Maybe even increase the tessellation of the triangles?
* Update *
I am using vertex and index buffers. This is using OpenGL ES on iPhone.
I don't know if this picture will help or make things worse, but here it is: two triangles very close to each other along the Z-axis (but not touching). (NOTE: the normal vectors for these triangles point straight towards you.)
You can increase the depth precision up to 32 bits per pixel. However, if the 2 triangles are coplanar, that likely won't fix the problem. If they aren't coplanar (it's really hard to tell from your description what you're talking about), then increasing the depth precision might help. If you're using FBOs for your drawing, simply create the depth texture with 32-bits per component by using GL_DEPTH_COMPONENT32 for the internal format. There are several examples here. If you're not using FBOs, please describe how you create your context (also what OS you're on - Windows, OS X, Linux?).
You could try changing the Depth Buffer function to something more appropriate...
glDepthFunc(GL_ALWAYS) - Essentially disables depth testing
glDepthFunc(GL_GEQUAL) - Overwrites when greater OR equal
If they are too close (assuming they are parallel, not on the same plane), you will get precision errors (like banding artifacts). Try adding a small offset to the top polygon using glPolygonOffset: http://www.opengl.org/sdk/docs/man/xhtml/glPolygonOffset.xml Check this simple tutorial: http://www.felixgers.de/teaching/jogl/polygonOffset.html
EDIT: Also try increasing precision as #user1118321 says.
What you are describing is called Z-Fighting (http://en.wikipedia.org/wiki/Z-fighting).
Sadly depth buffers only have limited precision, so if the difference in depth of two polygons is smaller than the precision of the depth buffer, you can't predict which polygon will pass the depth test and be drawn.
As others have said, you can increase the precision of the depth buffer so that polygons have to be closer to each other before the z-fighting artifacts occur, or you can disable the depth test so you are ensured that polygons rendered won't be blocked by anything previously drawn.

OpenGL texturing via vertex alphas, how to avoid following diagonal lines?

http://img136.imageshack.us/img136/3508/texturefailz.png
This is my current program. I know it's terribly ugly, I found two random textures online ('lava' and 'paper') which don't even seem to tile. That's not the problem at the moment.
I'm trying to figure out the first steps of an RPG. This is a top-down screenshot of a 10x10 heightmap (currently set to all 0s, so it's just a plane), and I texture it by making one pass per texture per quad, and each vertex has alpha values for each texture so that they blend with OpenGL.
The problem is this: notice how the textures trend along diagonals. Even though I'm drawing with GL_QUADS, this is presumably because the quads are turned into sets of two triangles and the alpha values at the corners then have more weight along the hypotenuses... But I wasn't expecting that to matter at all. By drawing quads, I was hoping that even though they were split into triangles at some low level, the vertex alphas would cause the texture to radiate in a circular outward gradient from the vertices.
How can I fix this to make it look better? Do I need to scrap this and try a whole different approach? Is there a different approach for something like this? I'd love to hear alternatives as well.
Feel free to ask questions and I'll be here refreshing until I get a valid answer, so I'll comment as fast as I can.
Thanks!!
EDIT:
Here is the kind of thing I'd like to achieve. No I'm obviously not one of the billions of noobs out there "trying to make a MMORPG", I'm using it as an example because it's very much like what I want:
http://img300.imageshack.us/img300/5725/runescapehowdotheytile.png
How do you think this is done? Part of it must be vertex alphas like I'm doing, because of the smooth gradients... But maybe they have a list of different triangle configurations within a tile, and each tile stores which configuration it uses? For example, configuration 1 is a triangle in the top-left and one in the bottom-right, 2 is the top-right and bottom-left, 3 is a quad on the top and a quad on the bottom, etc.? Can you think of any other way I'm missing, or if you've got it all figured out then please share how they do it!
The diagonal artefacts are caused by having all of your quads split into triangles along the same diagonal. You define points [0,1,2,3] for your quad. Each quad is split into triangles [0,1,2] and [1,2,3]. Try drawing with GL_TRIANGLES and alternating your choice of diagonal. There are probably more efficient ways of doing this using GL_TRIANGLE_STRIP or GL_QUAD_STRIP.
I think you are doing it right, but you should increase the resolution of your heightmap a lot to get finer tessellation!
For example, look at this heightmap renderer:
mdterrain
It shows the same artifacts at low resolution but gets better as you increase the iterations.
I've never done this myself, but I've read several guides (which I can't find right now) and it seems pretty straightforward; it can even be optimized by using shaders.
Create a master texture to control the mixing of 4 sub-textures. Use the r, g, b, a components of the master texture as a percentage mix of each sub-texture (lava, paper, etc.). You can easily paint a master texture using Paint.NET, Photoshop, or GIMP by painting into each color channel. You can compute the resulting texture beforehand using all 5 textures, OR you can calculate the result on the fly with a fragment shader. I don't have a good example of either, but I think you can figure it out given how far you've come.
The end result will be "pixel"-perfect blending (depending on the textures' resolution and filtering) and will avoid the vertex blending issues.
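A minimal fragment-shader sketch of the on-the-fly version of the master-texture mixing described above (placeholder names; not taken from the answer):

```glsl
uniform sampler2D uMask;                       // r,g,b,a = painted blend weights
uniform sampler2D uTex0, uTex1, uTex2, uTex3;  // the four detail textures (lava, paper, ...)
uniform float uTiling;                         // how often the detail textures repeat
varying vec2 vUV;

void main() {
    vec4 w = texture2D(uMask, vUV);            // mask sampled once across the whole surface
    gl_FragColor = texture2D(uTex0, vUV * uTiling) * w.r
                 + texture2D(uTex1, vUV * uTiling) * w.g
                 + texture2D(uTex2, vUV * uTiling) * w.b
                 + texture2D(uTex3, vUV * uTiling) * w.a;
}
```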