I have a shader that calculates contour lines based on a parameter in the program, in my case the height value of a mesh. The calculation is performed using standard derivatives as follows:
float contourWidth = 0.5;
float f  = abs( fract( height ) - 0.5 );   // distance from the nearest contour level
float df = fwidth( height );               // screen-space rate of change of height
float mi = max( 0.0, contourWidth - 1.0 );
float ma = max( 1.0, contourWidth );
float contour = clamp( ( f - df * mi ) / ( df * ( ma - mi ) ), 0.0, 1.0 );
This works as expected. However, when I feed the height parameter from a texture and zoom in, so that the rendered pixels are much smaller than the sampled texels, artifacts begin to appear.
The sampled texture uses linear filtering, and to investigate I implemented the linear filtering manually in the shader to try to isolate the problem. This resolved the issue, but I'd like to understand why it happens, and whether manually implementing linear filtering in the shader, as I have, is the only solution or there is a better way.
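For reference, my manual implementation looks roughly like this (a minimal sketch; the function name, the texSize parameter and the single-channel height layout are illustrative):
float sampleHeightBilinear( sampler2D heightTex, vec2 uv, vec2 texSize ) {
    vec2 st   = uv * texSize - 0.5;   // texel space, texel centers on integers
    vec2 base = floor( st );
    vec2 f    = fract( st );          // bilinear blend weights
    // fetch the four surrounding texels at their centers
    float h00 = texture2D( heightTex, ( base + vec2( 0.5, 0.5 ) ) / texSize ).r;
    float h10 = texture2D( heightTex, ( base + vec2( 1.5, 0.5 ) ) / texSize ).r;
    float h01 = texture2D( heightTex, ( base + vec2( 0.5, 1.5 ) ) / texSize ).r;
    float h11 = texture2D( heightTex, ( base + vec2( 1.5, 1.5 ) ) / texSize ).r;
    return mix( mix( h00, h10, f.x ), mix( h01, h11, f.x ), f.y );
}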
Below is a comparison of the two rendering techniques:
I have created a working example on Shadertoy to demonstrate the issue: https://www.shadertoy.com/view/MljcDy
I'm seeing this issue on Mac OS X as well as in mobile Safari (where the artifacts are even worse).
I've been trying for some time now to convert a screen-space pixel (provided by a deferred HLSL shader) into light space. The results have been surprising to me: my light rendering seems to be tiling the depth buffer.
Importantly, the scene camera (or eye) and the light being rendered from start in the same position.
First, I extract the world position of the pixel using the code below:
float3 eye = Eye;
float4 position = {
    IN.texCoord.x * 2 - 1,        // NDC x from the screen-space texcoord
    (1 - IN.texCoord.y) * 2 - 1,  // NDC y (texcoord y is flipped)
    zbuffer.r,                    // depth sampled from the depth buffer
    1
};
float4 hposition = mul(position, EyeViewProjectionInverse);
position = float4(hposition.xyz / hposition.w, hposition.w);
float3 eyeDirection = normalize(eye - position.xyz);
The result seems to be correct as rendering the XYZ position as RGB respectively yields this (apparently correct) result:
The red component seems to be correctly outputting X as it moves to the right, and blue shows Z moving forward. The Y factor also looks correct as the ground is slightly below the Y axis.
Next (and to be sure I'm not going crazy), I decided to output the original depth buffer. Normally I keep the depth buffer in a Texture2D called DepthMap passed to the shader as input. In this case, however, I try to undo the pixel transformation by offsetting it back into the proper position and multiplying it by the eye's view-projection matrix:
float4 cpos = mul(position, EyeViewProjection);
cpos.xyz = cpos.xyz / cpos.w;
cpos.x = cpos.x * 0.5f + 0.5f;
cpos.y = 1 - (cpos.y * 0.5f + 0.5f);
float camera_depth = pow(DepthMap.Sample(Sampler, cpos.xy).r, 100); // Power 100 just to visualize the map since scales are really tiny
return float4(camera_depth, camera_depth, camera_depth, 1);
This yields a correct-looking result as well (though I'm not 100% sure about the Z value). Also note that I've made the results exponential to better visualize the depth information (this is not done when attempting live comparisons):
So theoretically, I can use the same code to convert that pixel world position to light space by multiplying by the light's view-projection matrix. Correct? Here's what I tried:
float4 lpos = mul(position, ShadowLightViewProjection[0]);
lpos.xyz = lpos.xyz / lpos.w;
lpos.x = lpos.x * 0.5f + 0.5f;
lpos.y = 1 - (lpos.y * 0.5f + 0.5f);
float shadow_map_depth = pow(ShadowLightMap[0].Sample(Sampler, lpos.xy).r, 100); // Power 100 just to visualize the map since scales are really tiny
return float4(shadow_map_depth, shadow_map_depth, shadow_map_depth, 1);
And here's the result:
And another to show better how it's mapping to the world:
I don't understand what is going on here. It seems it might have something to do with the projection matrix, but I'm not good enough at math to know for sure what is happening. It's definitely not the width/height of the light map, as I've tried multiple map sizes, and the projection matrix is calculated from the FOV and aspect ratio without ever using the width/height.
Finally, here's some C++ code showing how my perspective matrix (used for both eye and light) is calculated:
const auto ys = std::tan((T)1.57079632679f - (fov / (T)2.0));
const auto xs = ys / aspect;
const auto& zf = view_far;
const auto& zn = view_near;
const auto zfn = zf - zn;
row1(xs, 0, 0, 0);
row2(0, ys, 0, 0);
row3(0, 0, zf / zfn, 1);
row4(0, 0, -zn * zf / zfn, 0);
return *this;
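For reference, in the row-vector convention used by mul(position, matrix) in the shaders above, those rows form the standard left-handed D3D-style perspective matrix:
xs  0    0                      0
0   ys   0                      0
0   0    zf / (zf - zn)         1
0   0   -zn * zf / (zf - zn)    0
where ys = tan(pi/2 - fov/2) = cot(fov/2) and xs = ys / aspect.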
I'm completely at a loss here. Any guidance or recommendations would be greatly appreciated!
EDIT - I also forgot to mention that the tiled image is upside down, as if the y-flip broke it. That's strange to me, as the flip is required to map back to eye texture space correctly.
I did some tweaking and fixed things here and there. Ultimately, my biggest issue was an unexpectedly transposed matrix. It's a bit complicated as to how the matrix got transposed, but that's why things were flipped. I also changed to D32 depth buffers (though I'm not sure that helped any) and made sure that any position divided by its W affected all components (including W itself).
So code like this: hposition.xyz = hposition.xyz / hposition.w
became this: hposition = hposition / hposition.w
After all this tweaking, it's starting to look more like a shadow map.
Oh and the transposed matrix was the ViewProjection of the light.
I am working on conventional Whitted ray tracing and trying to interpolate the surface of the hit triangle as if it were convex instead of flat.
The idea is to treat the triangle as a parametric surface s(u,v) once the barycentric coordinates (u,v) of the hit point p are known.
This surface equation should be calculated from the triangle's positions p0, p1, p2 and normals n0, n1, n2.
The hit point itself is calculated as
p = (1-u-v)*p0 + u*p1 + v*p2;
So far I have found three different solutions.
Solution 1. Projection
The first solution I came up with: project the hit point onto the planes that pass through each of the vertices p0, p1, p2 perpendicular to the corresponding normals, and then interpolate the results.
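In other words, each point is the hit point dropped onto the tangent plane at the corresponding vertex, r_i = p - dot( p - p_i, n_i ) * n_i: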
vec3 r0 = p + dot( p0 - p, n0 ) * n0;
vec3 r1 = p + dot( p1 - p, n1 ) * n1;
vec3 r2 = p + dot( p2 - p, n2 ) * n2;
p = (1-u-v)*r0 + u*r1 + v*r2;
Solution 2. Curvature
Suggested in Takashi Nagata's paper "Simple local interpolation of surfaces using normal vectors" and discussed in the question "Local interpolation of surfaces using normal vectors". It seems overcomplicated and not very fast for real-time ray tracing (unless you precompute all the necessary coefficients). The triangle here is treated as a second-order surface.
Solution 3. Bezier curves
This solution is inspired by Brett Hale's answer. It uses higher-order interpolation, cubic Bezier curves in my case.
E.g., for the edge p0p1 the Bezier curve looks like
B(t) = (1-t)^3*p0 + 3(1-t)^2*t*(p0+n0*adj) + 3*(1-t)*t^2*(p1+n1*adj) + t^3*p1,
where adj is some adjustment parameter.
Computing Bezier curves for edges p0p1 and p0p2 and interpolating them gives the final code:
float u1 = 1 - u;
float v1 = 1 - v;
vec3 b1 = u1*u1*(3-2*u1)*p0 + u*u*(3-2*u)*p1 + 3*u*u1*(u1*n0 + u*n1)*adj;
vec3 b2 = v1*v1*(3-2*v1)*p0 + v*v*(3-2*v)*p2 + 3*v*v1*(v1*n0 + v*n2)*adj;
float w = abs(u-v) < 0.0001 ? 0.5 : ( 1 + (u-v)/(u+v) ) * 0.5;
p = (1-w)*b1 + w*b2;
Alternatively, one can interpolate between three edges:
float u1 = 1.0 - u;
float v1 = 1.0 - v;
float w = abs(u-v) < 0.0001 ? 0.5 : ( 1 + (u-v)/(u+v) ) * 0.5;
float w1 = 1.0 - w;
vec3 b1 = u1*u1*(3-2*u1)*p0 + u*u*(3-2*u)*p1 + 3*u*u1*( u1*n0 + u*n1 )*adj;
vec3 b2 = v1*v1*(3-2*v1)*p0 + v*v*(3-2*v)*p2 + 3*v*v1*( v1*n0 + v*n2 )*adj;
vec3 b0 = w1*w1*(3-2*w1)*p1 + w*w*(3-2*w)*p2 + 3*w*w1*( w1*n1 + w*n2 )*adj;
p = (1-u-v)*b0 + u*b1 + v*b2;
Maybe I messed something up in the code above, but this option does not seem to be very robust inside the shader.
P.S. The intention is to get more correct origins for shadow rays when they are cast from low-poly models. Here you can find the resulting images from the test scene. The big white numbers indicate the solution number (zero for the original image).
P.P.S. I still wonder if there is another efficient solution that can give a better result.
Keeping triangles 'flat' has many benefits and simplifies several stages required during rendering. Approximating a higher-order surface, on the other hand, introduces quite significant tracing overhead and requires adjustments to your BVH structure.
When the geometry is treated as a collection of facets, the shading information can still be interpolated to achieve smooth shading while remaining very efficient to process.
There are adaptive tessellation techniques which approximate the limit surface (OpenSubdiv is a great example). Pixar's Photorealistic RenderMan has a long history of using subdivision surfaces. When they switched their rendering algorithm to path tracing, they also introduced a pretessellation step for their subdivision surfaces. This stage is executed right before rendering begins and builds an adaptive, triangulated approximation of the limit surface. This seems to be more efficient to trace and tends to use fewer resources, especially for the high-quality assets used in this industry.
So, to answer your question: I think the most efficient way to achieve what you're after is to use an adaptive subdivision scheme that spits out triangles, instead of tracing against a higher-order surface.
Dan Sunday describes an algorithm that calculates the barycentric coordinates on the triangle once the ray-plane intersection has been calculated. The point lies inside the triangle if:
(s >= 0) && (t >= 0) && (s + t <= 1)
You can then use, say, n(s, t) = nu * s + nv * t + nw * (1 - s - t) to interpolate a normal, as well as the point of intersection, though n(s, t) will not, in general, be normalized, even if (nu, nv, nw) are. You might find higher order interpolation necessary. PN-triangles were a similar hack for visual appeal rather than mathematical precision. For example, true rational quadratic Bezier triangles can describe conic sections.
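As a one-line GLSL sketch of that interpolation, with the renormalization mentioned above:
vec3 n = normalize( nu * s + nv * t + nw * ( 1.0 - s - t ) );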
I am working on a procedural texture. It looks fine, except that very far away the small texture details disintegrate into noise and moiré patterns.
I set out to find a way to average and quantise the scale of the pattern with distance, so that close up it is in full detail, and far away it is rounded off so that one pixel of a distant mountain represents only one colour found there, and not 10 or 20 colours at that point.
It is easy to do by rounding the world position that the volumetric texture is based on, using an if statement, i.e.:
if ( camera_pixel_distance > 1200.0 ) { wpos = round( wpos / 3.0 ) * 3.0; } // round far-away pixels (distance in meters)
return texturefunction( wpos );
The result of rounding far-away textures is that they will look like this, except very far away:
The trouble with this is that I have to write about 5 if conditions for the various distances, and I have to estimate a good rounding value for each.
I tried to make a function that cuts the pixel's distance into steps and applies a LOD divider to the pixel_worldposition value to round it progressively more with distance, but I got nonsense results; the HLSL was totally flipping out. Here is the attempt:
float3 cmra = floor(_WorldSpaceCameraPos / 500) * 500; // quantize camera position in steps of 500 m
float dst = (1 - distance(cmra, pos) / 4500) * 1000;   // maximum view distance is 4500 m
pos = floor(pos / dst) * dst;                          // close pixels get rounded by ~1000, far ones by 20, 30, etc.
It returned nonsense patterns that I could not understand.
Are there well-documented algorithms for smoothing and rounding distant texture artifacts? Can I use the screen pixel resolution, combined with the distance of the pixel, to round each pixel to one colour that stays stable?
Are you familiar with the GLSL (and I would assume HLSL) functions dFdx() and dFdy() or fwidth()? They were made specifically to solve this problem. From the GLSL Spec:
genType dFdy (genType p)
Returns the derivative in y using local differencing for the input argument p.
These two functions are commonly used to estimate the filter width used to anti-alias procedural textures.
and
genType fwidth (genType p)
Returns the sum of the absolute derivative in x and y using local differencing for the input argument p, i.e.: abs (dFdx (p)) + abs (dFdy (p));
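In practice you use that derivative to decide how much to filter. A minimal GLSL-style sketch (all names here are illustrative, not from your shader): measure how many pattern periods fit into one screen pixel, and fade toward the pattern's average colour once the pattern becomes sub-pixel:
float fw   = fwidth( wpos.x * patternFrequency );      // pattern phase change per pixel
float fade = clamp( fw * 2.0, 0.0, 1.0 );              // ~1 when one period < ~1 pixel
vec3 color = mix( patternColor, averageColor, fade );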
OK, I found some great code and a tutorial for the solution. It's simple code that can be tweaked by distance and many other parameters.
From this tutorial:
http://www.yaldex.com/open-gl/ch17lev1sec4.html#ch17fig04
half4 frag (v2f i) : COLOR
{
    float Frequency = 0.020;
    float3 pos = mul(_Object2World, i.uv).xyz;
    float V = pos.z;
    float sawtooth = frac(V * Frequency);
    float tri = abs(2.0 * sawtooth - 1.0);   // triangle wave ('triangle' is reserved in HLSL)
    // how much V changes per screen pixel -> width of the antialiased edge
    float dp = length(float2(ddx(V), ddy(V)));
    float edge = dp * Frequency * 8.0;
    float square = smoothstep(0.5 - edge, 0.5 + edge, tri);
    // right half: antialiased square wave; left half: raw triangle wave for comparison
    if (pos.x > 0.0) { return float4(square, square, square, 1.0); }
    return float4(tri, tri, tri, 1.0);
}
So, I've got an imposter (the real geometry is a cube, possibly clipped, and the imposter geometry is a Menger sponge) and I need to calculate its depth.
I can calculate the amount to offset in world space fairly easily. Unfortunately, I've spent hours failing to perturb the depth with it.
The only correct results I can get are when I go:
gl_FragDepth = gl_FragCoord.z
Basically, I need to know how gl_FragCoord.z is calculated so that I can:
Take the inverse transformation from gl_FragCoord.z to eye space
Add the depth perturbation
Transform this perturbed depth back into the same space as the original gl_FragCoord.z.
I apologize if this seems like a duplicate question; there are a number of other posts here that address similar things. However, after implementing all of them, none works correctly. Rather than trying to pick one to get help with, at this point I'm asking for complete code that does it. It should just be a few lines.
For future reference, the key code is:
float far = gl_DepthRange.far;
float near = gl_DepthRange.near;
vec4 eye_space_pos = gl_ModelViewMatrix * /*something*/;
vec4 clip_space_pos = gl_ProjectionMatrix * eye_space_pos;
float ndc_depth = clip_space_pos.z / clip_space_pos.w;
float depth = (((far-near) * ndc_depth) + near + far) / 2.0;
gl_FragDepth = depth;
Also for future reference, this is the same formula as given by imallett, which was working for me in an OpenGL 4.0 application:
vec4 v_clip_coord = modelview_projection * vec4(v_position, 1.0);
float f_ndc_depth = v_clip_coord.z / v_clip_coord.w;
gl_FragDepth = (1.0 - 0.0) * 0.5 * f_ndc_depth + (1.0 + 0.0) * 0.5;
Here, modelview_projection is the 4×4 modelview-projection matrix and v_position is the object-space position of the pixel being rendered (in my case calculated by a raymarcher).
The equation comes from the window coordinates section of this manual. Note that in my code, near is 0.0 and far is 1.0, which are the default values of gl_DepthRange. Note that gl_DepthRange is not the same thing as the near/far distance in the formula for the perspective projection matrix! The only trick is using the 0.0 and 1.0 (or gl_DepthRange in case you actually need to change it); I struggled for an hour with the other depth range, but that one is already "baked" into my (perspective) projection matrix.
Note that this way, the equation really contains just a single multiply by a constant ((far - near) / 2) and a single addition of another constant ((far + near) / 2). Compare that to the multiply, add, and divide (possibly converted to a multiply by an optimizing compiler) that are required in the code of imallett.
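With the default depth range this folds into a single multiply-add:
gl_FragDepth = 0.5 * f_ndc_depth + 0.5;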
I need to gradually interpolate through a sequence of colors, so it goes from colorA to colorB to colorC to colorD and then back to colorA. This needs to be based on elapsed time in milliseconds. Any help will be much appreciated (algorithms or pseudocode would be great).
Note that I am working with RGB; it could be in the 0-255 or 0.0-1.0 range.
This is what I have so far. I change colors every timePeriod, calculating the fraction of time elapsed to blend between them. The problem with this code is that there is a jump when it switches from the A-to-B segment to the B-to-C segment, and so on.
int millisNow = ofGetElapsedTimeMillis();
int millisSinceLastCheck = millisNow - lastTimeCheck;
if ( millisSinceLastCheck > timePeriod ) {
lastTimeCheck = millisNow;
millisSinceLastCheck = 0;
colorsIndex++;
if ( colorsIndex == colors.size()-1 ) colorsIndex = 0;
cout << "color indes: " << colorsIndex << endl;
cout << "color indes: " << colorsIndex + 1 << endl;
}
timeFraction = (float)(millisSinceLastCheck) / (float)(timePeriod);
float p = timeFraction;
colorT.r = colors[colorsIndex].r * p + ( colors[colorsIndex+1].r * ( 1.0 - p ) );
colorT.g = colors[colorsIndex].g * p + ( colors[colorsIndex+1].g * ( 1.0 - p ) );
colorT.b = colors[colorsIndex].b * p + ( colors[colorsIndex+1].b * ( 1.0 - p ) );
colorT.normalize();
Thanks in advance
Your code is mostly correct, but you are doing the interpolation backwards: i.e. you are interpolating B->A, then C->B, then D->C, etc. This causes the discontinuity when switching colors.
You should replace this:
colorT.r = colors[colorsIndex].r * p + ( colors[colorsIndex+1].r * ( 1.0 - p ) );
with:
colorT.r = colors[colorsIndex].r * (1.0 - p) + ( colors[colorsIndex+1].r * p );
and the same for the other lines.
Also, as others have said, using a different color space than RGB can provide better looking results.
There are two ways to handle interpolating colors. One is fast and easy (what you're doing), the other is slightly slower but can look better in some circumstances.
The first is the obvious, simple method of (x * s) + (y * (1-s)), which is pure linear interpolation and does what the name suggests. However, on certain color pairs (say green and orange), you get some nasty colors in the middle (a dirty brown). That's because you're lerping each component (R, G and B) and there are points where the combination is unpleasant. If you just need the most basic lerp, then this is the method you want, and your code is about right.
If you want a better-looking but slightly slower effect, you'll want to interpolate in HSL colorspace. Since the hue, saturation, and lightness are each interpolated, you get the colors you would expect between the endpoints and can avoid most of the ugly ones. Colors are typically arranged on some sort of wheel, and this method is aware of that (whereas a basic RGB lerp acts like it's working with 3 separate lines).
To use an HSL lerp, you need to convert the RGB values, lerp between the results, and convert back. This page has some formulas that may be useful for that, and this one has PHP code to handle it.
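The lerp itself can then look like this minimal GLSL-style sketch (rgb2hsl and hsl2rgb are assumed helper conversions, e.g. from the references above; the key detail is taking the hue difference the short way around the wheel):
vec3 lerpHsl( vec3 a, vec3 b, float t ) {
    vec3 ha = rgb2hsl( a );              // rgb2hsl/hsl2rgb: assumed helpers
    vec3 hb = rgb2hsl( b );
    float dh = hb.x - ha.x;              // hue difference, hue in [0,1)
    if ( dh >  0.5 ) dh -= 1.0;          // wrap so we go the short way
    if ( dh < -0.5 ) dh += 1.0;
    float h = fract( ha.x + dh * t );
    return hsl2rgb( vec3( h, mix( ha.yz, hb.yz, t ) ) );
}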
Interpolating the R, G, and B components will produce working code. The one shortcoming is that the steps you produce won't necessarily appear evenly spaced, even though they're mathematically equal.
If that bothers you, you could convert values from RGB to something like L*a*b* (which is designed to correspond more closely to human perception), do your interpolation on those values, and then convert each interpolated value back to RGB for display.
What you've got already looks very good, but I'd simplify the math a little bit:
int millisNow = ofGetElapsedTimeMillis();
int millisSinceLastCheck = millisNow % timePeriod;
int colorsIndex = (millisNow / timePeriod) % (colors.size() - 1);
float p = (float)(millisSinceLastCheck) / (float)(timePeriod);
colorT.r = colors[colorsIndex+1].r * p + ( colors[colorsIndex].r * ( 1.0 - p ) );
colorT.g = colors[colorsIndex+1].g * p + ( colors[colorsIndex].g * ( 1.0 - p ) );
colorT.b = colors[colorsIndex+1].b * p + ( colors[colorsIndex].b * ( 1.0 - p ) );
colorT.normalize();
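If you also want the cycle to close (D back to A), index modulo the full palette size. A compact GLSL-style sketch (assuming a fixed 4-color palette; names are illustrative):
vec3 cyclePalette( vec3 cols[4], float t ) {
    float s = fract( t ) * 4.0;              // t in [0,1) covers A->B->C->D->A
    int   i = int( s );                      // segment index 0..3
    float p = fract( s );                    // position within the segment
    return mix( cols[i], cols[( i + 1 ) % 4], p );
}
Here t would be (millisNow % cycleMillis) / float(cycleMillis), with cycleMillis = 4 * timePeriod.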
We're doing this on a project I'm currently working on. We just treat the R, G, B values independently and transition from color1 to color2 based on how many "steps" there are in between. We have discrete values so we have a look-up table approach, but you could do the same thing with floating point and just calculate the RGB values dynamically.
If you still have questions, I could post some Java code.
Separate the three components (RGB) and interpolate each separately, using the classical interpolation algorithm.