There are a great number of questions relating to exactly two points in texture coordinates, and my shader already works with that concept: (1.0, 1.0) shows the entire image, and 1.0 / frames in one dimension or the other displays... well, unfortunately, it displays everything from 0.0 on the quad up to the result of that division.
What I'd like to do is control all four points of the texture coordinates from the shader. In every tutorial and every sample, the texture coordinate is always a vec2, which implies you only have control over the two end points, not the starting points. Is there a way around this limitation?
To give you an idea of why I want to do this (If it isn't blatantly obvious already), I'd like to pick a tile or animated frame out of a larger sheet.
Ideally, I'd also be able to find the dimensions (width and height) of the image in the shader, but if necessary, it isn't that difficult to pass those values in. I believe I'm currently using GLSL 2, meaning I'm unable to use the textureSize2D function in the shader (already tried it).
To simplify things: the UV coordinate pair you pass to the texture call identifies a point to read from the texture. Just one point, not an area. Depending on the sampler state and whether minification or magnification occurs, more than one texel may be used to compute the value at that point, but it is still a single value tied to the UV you provide.
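For the tile/animated-frame case, the usual approach is to leave the quad's texture coordinates in the 0..1 range and remap them in the shader to the tile's sub-rectangle. Here is a minimal GLSL sketch, assuming two uniforms (tileOffset and tileSize, names made up for illustration) that describe the tile in normalized texture space:

    // Fragment shader sketch: sample one tile out of a larger sheet.
    // For tile (col, row) in an N x M sheet:
    //   tileSize   = vec2(1.0 / N, 1.0 / M)
    //   tileOffset = vec2(col, row) * tileSize
    uniform sampler2D sheet;
    uniform vec2 tileOffset;
    uniform vec2 tileSize;
    varying vec2 texCoord;   // 0..1 across the quad

    void main() {
        vec2 uv = tileOffset + texCoord * tileSize;  // remap quad UV into the tile
        gl_FragColor = texture2D(sheet, uv);
    }

This effectively gives you control over both the start and end of the sampled region, which is what controlling "all four points" amounts to for an axis-aligned tile; the sheet's dimensions would still be passed in from the application if textureSize isn't available.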
TL;DR: I'm computing a depth map in a fragment shader and then trying to use that map in a vertex shader to see if vertices are 'in view' or not, but the vertices don't line up with the fragment texel coordinates. The imprecision causes rendering artifacts, and I'm looking for alternative ways to filter vertices based on depth.
Background: I am very loosely attempting to implement a scheme outlined in this paper (http://dash.harvard.edu/handle/1/4138746). The idea is to represent arbitrary virtual objects as lots of tangent discs. While the authors wanted to replace triangles on some future graphics card, I'm implementing this on conventional cards; my discs are just fans of triangles ("Discs") around center points ("Points").
This is targeting WebGL.
The strategy I intend to use, similar to what's done in the paper, is:
Render the Discs in a Depth-Only pass.
In a second (or more) pass, compute what's visible based solely on which Points are "visible" - i.e., their depth is <= the depth from the Depth-Only pass at that x and y.
I believe the authors of the paper used a Gaussian blur on top of the equivalent of a GL_POINTS render applied to the Points (i.e., re-using the depth buffer from the Depth-Only pass, not clearing it) to actually render their object. It's hard to say: the process is unfortunately described in a one-line comment, and I'm unsure how to duplicate it in WebGL anyway (a naive Gaussian blur will just blur in the background pixels that weren't touched by the GL_POINTS call).
Instead, I'm hoping to do something slightly different: re-rendering the discs in a second pass as cones (the center of the disc becomes the apex of the cone; think "close the umbrella") and effectively computing a Voronoi diagram on the surface of the object (a la the Red Book, http://www.glprogramming.com/red/chapter14.html#name19). The idea is that an output pixel takes the color value of the first disc to reach it as the radii grow from 0 to their natural size.
The crux of the problem is that only discs whose centers pass the depth test in the first pass should be allowed to carry on (as cones) to the 2nd pass. Because what's true at the disc center applies to the whole disc/cone, I believe this requires evaluating a depth test at a vertex or object level, and not at a fragment level.
Since WebGL support for accessing depth buffers is still poor, in my first pass I am packing depth info into an RGBA framebuffer in a fragment shader. I then intended to use this in the vertex shader of the second pass via a sampler2D: any disc center that was closer than the corresponding texture2D() lookup would be allowed through to the second pass; otherwise I would hack around "discarding" the vertex (its alpha would be set to 0, or some flag set that would cause the fragments associated with the disc/cone to be discarded, etc.).
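For reference, here is a rough GLSL (ES 1.00) sketch of the kind of pack/unpack and vertex-level comparison described above. This is my reading of the approach, not actual project code, and the names (packDepth, depthMap, uMVP, bias, discCenter) are made up:

    // --- Pass 1 fragment shader helper: spread a [0,1) depth across RGBA8 ---
    vec4 packDepth(float depth) {
        vec4 enc = fract(depth * vec4(1.0, 255.0, 65025.0, 16581375.0));
        enc -= enc.yzww * vec4(1.0 / 255.0, 1.0 / 255.0, 1.0 / 255.0, 0.0);
        return enc;
    }

    // --- Pass 2 vertex shader: test the disc center against the stored depth ---
    uniform sampler2D depthMap;   // RGBA framebuffer written by pass 1
    uniform mat4 uMVP;
    uniform float bias;
    attribute vec3 discCenter;
    varying float vVisible;       // 0.0 lets the fragment shader discard the disc/cone

    float unpackDepth(vec4 rgba) {
        return dot(rgba, vec4(1.0, 1.0 / 255.0, 1.0 / 65025.0, 1.0 / 16581375.0));
    }

    void main() {
        vec4 clipPos = uMVP * vec4(discCenter, 1.0);
        vec3 ndc = clipPos.xyz / clipPos.w;
        float stored = unpackDepth(texture2D(depthMap, ndc.xy * 0.5 + 0.5));
        float mine = ndc.z * 0.5 + 0.5;
        vVisible = (mine <= stored + bias) ? 1.0 : 0.0;
        gl_Position = clipPos;
    }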
This actually kind of worked, but it caused horrendous z-fighting between discs that were close together (very small perturbations wildly changed which discs were visible). I believe there is some floating-point error in the depth -> RGBA -> depth round trip. More importantly, though, the depth texture is written at fragment texel coordinates, but I'm looking it up at vertex positions, which almost certainly don't land exactly on the relevant texel centers; so I get depth +/- noise, essentially, and the noise is the issue. Adding or subtracting .000001 or something isn't sufficient: you trade Type I errors for Type II. My render became more accurate when I switched from NEAREST to LINEAR for the depth texture interpolation, but it still wasn't good enough.
How else can I determine which discs' centers would be visible in a given render, so that I can do a second vertex/fragment (or more) pass focused on the objects associated with those points? Or: is there a better way to go about this in general?
I've been trying to utilize the techniques in Eric Penner's "Shader Amortization using Pixel Quad Message Passing" from GPU Pro 2, Chapter VI.2. The basic idea is that modern GPUs process fragment shaders in 2x2 fragment quads, and you can use ddx() and ddy() to get the value of some_var at all four fragments, as long as the following hold:
Your GPU supports high-quality derivatives
You know which fragment you're processing (top-left, top-right, bottom-left, bottom-right)
This opens up a lot of opportunities for fragment shader optimization (like distributing texture fetches over a 2x2 pixel quad) that you'd need Compute Shaders to beat.
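For anyone who hasn't seen the trick, here's a rough GLSL sketch of the idea (dFdx/dFdy are GLSL's names for HLSL's ddx/ddy). It assumes coarse, "high-quality" derivatives and that the 2x2 quad starts on even pixel numbers, which is exactly the assumption my question is about; the sampler and varying names are illustrative:

    uniform sampler2D tex;
    varying vec2 uv;

    void main() {
        float some_var = texture2D(tex, uv).r;   // any per-fragment value
        float dx = dFdx(some_var);               // right value minus left value
        float dy = dFdy(some_var);               // difference along the window y axis
        // Which fragment of the quad are we, assuming quads start on even pixels?
        bool first_x = mod(floor(gl_FragCoord.x), 2.0) < 0.5;
        bool first_y = mod(floor(gl_FragCoord.y), 2.0) < 0.5;
        // The derivative is the difference across the quad, so each fragment can
        // reconstruct its neighbour's value by adding or subtracting it.
        float neighbour_x = first_x ? some_var + dx : some_var - dx;
        float neighbour_y = first_y ? some_var + dy : some_var - dy;
        gl_FragColor = vec4(some_var, neighbour_x, neighbour_y, 1.0);
    }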
My problem is this:
I can't deterministically detect which fragment I'm processing. Ideally, each fragment block would start at even-numbered output pixel coords like (0, 0), (2, 0), ... (1024, 1024), ..., so you'd just need to check whether the output pixel x and y coords are even or odd to know which fragment you're currently processing. The method Penner uses in the book assumes this works...but it seems to be going wrong for me.
Unfortunately, my 2x2 fragment quads appear to be starting in nondeterministic places: I've seen them start at (even, even), (even, odd), and (odd, even). I can't remember if I've seen (odd, odd) or not, but anyway, the arrangement seems to depend on a myriad of factors I don't understand, including the output resolution and shader specifics. (I'm testing on an 8800 GTS, in case anyone's wondering.)
Does anyone know what might be causing this nondeterminism or have any documentation on it? I understand there's virtually no official standardization in this area, but I'm more interested in how things work in practice on modern desktop-level GPUs, and I'm hoping there's a way to get this technique to work. If no one knows how to reason about the even/odd start behavior, does anyone know any other way of determining the current fragment's relative location in its 2x2 quad?
Thanks :)
As it turns out, the premise of my question was mostly wrong:
The 2x2 fragment quads DO almost always start on even pixel numbers...as long as the output resolution is even-numbered.
If the output resolution is odd-numbered (a possibility with the underlying program I'm working with), things can get more complicated, for obvious reasons. I don't expect there's any uniformity here across drivers/GPUs/etc. either, but my current tests (which themselves may still be buggy) appear to demonstrate 2x2 pixel quads starting at an odd pixel along the dimension with odd resolution, at least when the odd dimension is horizontal.
All of this weirdness helped obscure my bigger issue: The code I used to detect the fragment's location in the pixel quad was buggy. I tested by setting the texture coordinates equal within a pixel quad (set to the pixel quad center)...or so I thought. However, I calculated the screen coordinates based on a full-screen quad where the uv mapping has the +v axis pointing downward. The screenspace origin starts at the bottom-left, because it's based on the top-right quadrant of Cartesian coordinates, and I accidentally forgot to invert the v-coordinate of the uv offset I used to find the pixel quad center. Many of my nondeterministic observations came from failing to check my assumptions while debugging and misinterpreting things as a result, particularly in combination with odd resolutions.
This was an embarrassing mistake I should have caught a lot sooner, but I figured I'd detail it as a warning to others to always double-check the direction of your vertical axis when you're dealing with opposite-facing coordinate frames. ;)
UPDATE:
I ran across a situation where 2x2 pixel quads started on even pixel numbers even when the resolution was odd. Thanks to the nondeterminism under odd resolutions, I had to work out another solution:
If you're deriving your screen pixel numbers from the uv coords of a fullscreen quad (for post-processing), the fragment location derived from this is only useful for arranging/placing shared samples between fragments, etc., not for the quad-pixel communication itself. You'll need to have screen pixel numbers with respect to the screenspace origin for that. You can derive these from vertex positions, or you can use ddx().x and ddy().y on the uv-based pixel numbers to find out their screen direction and mirror the fragment position in the appropriate direction from there.
Calculate the fragment location based on your screen pixel numbers (with respect to the true screenspace origin) and the assumption 2x2 pixel quads start on even pixels. (If you used uv-based pixel numbers, now is the time to mirror things.)
Do a ddx().x and ddy().y on the fragment location, and if they're negative in either direction, you know the pixel quad starts at an odd pixel number in that direction... so mirror in that direction (see the sketch after these steps).
If you calculate two fragment positions, one based on a uv origin and one based on a screen origin, use the uv-based one for reasoning about uv-based sample placement, and use the screen-based one for actually obtaining the values of a variable at neighboring fragments.
Profit.
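A rough GLSL rendering of those last couple of steps (dFdx/dFdy standing in for ddx/ddy); the variable names are mine, and it assumes the screen-origin pixel numbers come straight from gl_FragCoord:

    // Guess the fragment's position in its 2x2 quad assuming quads start on
    // even pixels, then use the sign of the derivative of that guess to detect
    // quads that actually start on an odd pixel, and mirror the guess there.
    vec2 pixelPos = floor(gl_FragCoord.xy);   // screen pixel numbers
    vec2 quadPos  = mod(pixelPos, 2.0);       // 0 = first fragment, 1 = second
    if (dFdx(quadPos.x) < 0.0) quadPos.x = 1.0 - quadPos.x;  // quad starts on odd x
    if (dFdy(quadPos.y) < 0.0) quadPos.y = 1.0 - quadPos.y;  // quad starts on odd y
    // quadPos now gives the fragment's location within its quad regardless of
    // whether the quad started on an even or odd pixel number.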
I'll post a link to my working MIT-licensed code once I release it on Github, along with usage examples (the speedup is unfortunately not what I expected, but whatever ;)). I'm just waiting to get done with a larger shader I'll be uploading along with it.
What happens when a shader reaches the primitive edge and there is a
color = texture2D(texture, vec2(texCoord.x + some_positive_value, texCoord.y));
somewhere in it? I mean, what value does color get in such a call, transparent black (0,0,0,0)? There seems to be no error in doing this, but I really need to ask whether this is safe to use and whether there are any visible artifacts to expect. I'm making a blur shader, and all the tutorials I've seen use this method to access adjacent pixels.
You define what happens. What you're after is "texture wrapping".
But there's still the problem with the blur itself. There is no data outside the texture, so either you apply a wrap mode (GL_CLAMP_TO_EDGE is probably what you want) and accept there will be imperfections, or render the input to the blur slightly larger.
Possible imperfections are shown below. I've blurred a circle in GIMP before and after moving it past the edge, then filled the centre so you can see the difference better. Note the misshapen fourth circle, caused by the blur operation's assumption about how the colour continues outside the border.
Just so you know, texture2D applies filtering, which can be bypassed with texelFetch (note that it takes coordinates in pixels instead of normalized zero-to-one texture coordinates).
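To make the difference concrete, here is a small sketch in GLSL 3.30 style (texelFetch isn't available in the old texture2D-era dialect). texture() honours whatever wrap mode the application set on the sampler (e.g. GL_CLAMP_TO_EDGE), whereas texelFetch() does no wrapping or filtering, so out-of-range texel coordinates have to be clamped by hand; the uniform and varying names are illustrative:

    #version 330 core
    // Sample one pixel to the right near the texture edge, two ways.
    uniform sampler2D tex;
    in vec2 texCoord;
    out vec4 color;

    void main() {
        ivec2 size  = textureSize(tex, 0);
        vec2  texel = 1.0 / vec2(size);

        // Filtered lookup: the wrap mode decides what comes back past the edge
        // (with GL_CLAMP_TO_EDGE, the border row/column is repeated).
        vec4 filtered = texture(tex, texCoord + vec2(texel.x, 0.0));

        // Unfiltered lookup: texelFetch ignores wrap modes, so clamp manually.
        ivec2 p = ivec2(texCoord * vec2(size)) + ivec2(1, 0);
        vec4 fetched = texelFetch(tex, clamp(p, ivec2(0), size - ivec2(1)), 0);

        color = 0.5 * (filtered + fetched);
    }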
I need to pass near and far values to gluPerspective in my OpenGL code. I am getting all my vertices in eye space by multiplying by the ModelViewMatrix in the vertex shader. My problem is that I need to find the minimum and maximum values out of these, so that I can pass them to gluPerspective. How would I do that? Do I need to calculate them in the vertex shader or on the client side (C code)?
near and far are typically not calculated, but just set to reasonable values. 'near' should be close enough that near objects are not clipped, but not so near that all z-buffer precision is gone. 'far' just needs to be far enough away for anything you want to render.
In any event, the vertex shader isn't the best place to calculate them, because the matrices get passed into the shader, so you need to know the values before you get that far.
(it can be viable/useful to calculate near/far dynamically for things like shadows, where you want high-precision - in which case, base it on bounding volumes of the objects you want to render, or some other such approximation).
I'm currently working on a cylinder-shaped terrain produced by a height map.
What happens in the program is simple: there is a texture for the colors of the terrain, whose alpha value marks the regions I want to be invisible, and another ARGB texture where A is the grayscale for the heights and RGB is the normal for the lighting.
The texture is set up so that the A value goes from 1 to 255, and I'm reserving 0 for the regions with holes, meaning I don't want them to exist.
So in theory there's no problem: I'm making those regions invisible based on the first texture. But in practice, the program treats 0 as the minimum height and, even with the texture on top, creates lines running towards these regions of 0, as if trying to build their triangles but never getting there, because I cut off the next vertex by making it invisible.
Notice the lines going to the center of the cylinder
This is how it looks when I stop making those vertices invisible
So, just to say, I used the clip() function in the pixel shader to make them invisible.
Basically, what I need help with:
Is it possible, the same way I use clip() in the pixel shader, to do something similar in the vertex shader and get rid of the unwanted vertices?
Basically, is it possible to just tell it to ignore the value 0?
Any ideas on how to fix this? I'm thinking of making every vertex that is 0 take the value of its neighbor; that way those lines wouldn't go to the center but would stay in the same plane as the rest of the cylinder.
Another thing is that we can see the program interpolating the values from one vertex to the next; that is why it cuts off halfway to the invisible vertex.
I'm working with the DirectX 11 API in C++, and the program uses tessellation.
Thank you for your time; I'll be very glad for any input on this!
Well, I did resolve a bit of this issue.
I passed the texture with the height values through a modifier that created another texture in which the zero values were replaced either by a neighboring pixel with a value different from zero, or changed to 128.0f.
With that, the direction of the weird lines became more accurate: they no longer go to the center of the cylinder but run along the line instead.