I want to use shaders not only to discard fragments that lie on one side of a predefined plane, but also to render a contour along the intersection.
My fragment shader currently does something along the lines of:
float d = dot(world_coordinate, normalize(clipping_normal.xyz)) - clipping_normal.w;
if (d > 0.0f)
    discard;
This works, but without the desired contour. I tried comparing the dot product against values close to 0.0, but that produces a contour whose width varies with the view.
This is what I am trying to achieve. Notice that the white contour/edge of where the plane intersects the sphere is of consistent width:
Below is the result I currently see:
With the fragment shader:
in vec4 color;
in vec3 world_position;
out vec4 frag_color;
uniform vec4 clipping_plane; // xyz = plane normal, w = offset
void main()
{
    float dist = (dot(clipping_plane.xyz, world_position) - clipping_plane.w) /
                 dot(clipping_plane.xyz, clipping_plane.xyz);
    if (dist >= 0.0f && dist < 0.05f)
        frag_color = vec4(0.0f, 0.0f, 0.0f, 1.0f);
    else if (dist < 0.0f)
        discard;
    else
        frag_color = ComputePhong(color);
}
The contour of the intersection lies on the clipping plane, so its distance to that plane is zero.
Using dot(point, normal) alone is not enough. You need d = A·x + B·y + C·z + D, which is the numerator (without the absolute value) of the full point-to-plane distance formula; see plane geometry.
This d not only gives you the distance (divide by sqrt(A² + B² + C²) if the normal A, B, C is not unit length), but its sign also tells you which side of the plane the point lies on.
Inside the fragment shader you are likely working with NDC coordinates, so transform A, B, C, D into NDC as well.
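Putting that together, here is a minimal sketch of the signed-distance test, assuming clipping_plane stores A, B, C, D in the same space as world_position (note the question's code uses the opposite sign convention, n·p − w). The fwidth-based band is one common way to keep the contour roughly constant-width on screen; it is my addition, not part of the answer above:
#version 330 core
in vec3 world_position;
out vec4 frag_color;
uniform vec4 clipping_plane; // xyz = A, B, C; w = D (assumed to be in world space)
void main()
{
    // Signed distance: (A*x + B*y + C*z + D) / sqrt(A^2 + B^2 + C^2).
    // The sign tells you which side of the plane the fragment lies on.
    float d = (dot(clipping_plane.xyz, world_position) + clipping_plane.w)
              / length(clipping_plane.xyz);
    // How much d changes per screen pixel; used for a view-independent band width.
    float w = fwidth(d);
    if (d > 0.0)
        discard;                               // clipped side
    if (-d < 2.0 * w)
        frag_color = vec4(1.0);                // contour at the cut, ~2 pixels wide
    else
        frag_color = vec4(0.5, 0.5, 0.5, 1.0); // placeholder for the usual shading
}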
Here is my question; I will list the details to make it clear:
I am writing a program drawing squares in 2D using instancing.
My camera direction is (0,0,-1), camera up is (0,1,0), and camera position is (0,0,3); the camera position changes when I press certain keys.
What I want is that when I zoom in (the camera moves closer to the square), the square's size on the screen won't change. So in my shader:
#version 330 core
layout(location = 0) in vec2 squareVertices;
layout(location = 1) in vec4 xysc;
out vec4 particlecolor;
uniform mat4 VP;
void main()
{
    float particleSize = xysc.z;
    float color = xysc.w;
    // transform the centre of the square, then add the untransformed corner offset
    gl_Position = VP * vec4(xysc.x, xysc.y, 2.0, 1.0)
                  + vec4(squareVertices.x * particleSize, squareVertices.y * particleSize, 0.0, 0.0);
    particlecolor = vec4(1.0f * color, 1.0f * (1.0f - color), 0.0f, 0.5f);
}
Please notice that, in order to keep the squares' size unchanged, what I do is:
1. transform the center of the square first
VP * vec4(xysc.x, xysc.y, 2.0, 1.0)
2. then compute one of the four corners (x,y,z,1) of the square
+ vec4(squareVertices.x*particleSize,squareVertices.y*particleSize,0,0);
instead of:
gl_Position = VP* (vec4(xysc.x, xysc.y, 2.0, 1.0) + vec4(squareVertices.x*particleSize,squareVertices.y*particleSize,0,0));
However, when I move the camera closer to the z = 0 plane, the squares' size grows unexpectedly. Where is the problem? I can provide demo code if necessary.
It sounds like you are using a perspective projection, and the formula you use in steps 1 and 2 won't work: VP * vec4 will in the general case result in a vec4(x, y, z, w) with w != 1, and adding vec4(a, b, 0, 0) to that gives you vec3((x+a)/w, (y+b)/w, z) after the perspective divide, while you seem to want vec3(x/w + a, y/w + b, z). So the correct approach is to scale a and b by w and add that before the divide: vec4(x + a*w, y + b*w, z, w).
Note that when you move your camera closer to the geometry, the effective w value approaches zero, so (x+a)/w becomes greater than x/w + a, which is why your geometry gets bigger.
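Applied to the vertex shader above, a minimal sketch of that fix (only the gl_Position computation changes):
vec4 center = VP * vec4(xysc.x, xysc.y, 2.0, 1.0);
// Scale the corner offset by the clip-space w so the later perspective divide
// cancels it out and the square keeps a constant on-screen size.
gl_Position = center + vec4(squareVertices * particleSize * center.w, 0.0, 0.0);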
I am rendering 2 triangles to make a square, using a single draw call with GL_TRIANGLE_STRIP.
I calculate position and uv in the shader with:
vec2 uv = vec2(gl_VertexID >> 1, gl_VertexID & 1);
vec2 position = uv * 333.0f;
float offset = 150.0f;
mat4 model = mat4(1.0f);
model[3][1] = offset;
gl_Position = projection * model * vec4(position, 0.0f, 1.0f);
projection is a regular orthographic projection that matches screen size.
In the fragment shader I want to draw alternating lines of pixels, one blue and the next red:
int v = int(uv.y * 333.0f);
if (v % 2 == 0) {
color = vec4(1.0f, 0.0f, 0.0f, 1.0f);
} else {
color = vec4(0.0f, 0.0f, 1.0f, 1.0f);
}
This works OK; however, if I use an offset that gives me a subpixel translation:
offset = 150.5f;
The 2 triangles don't get matching uvs as seen in this picture:
What am I doing wrong?
Attribute interpolation is done with only finite precision. That means that due to round-off errors, even a difference of 1 ulp (unit in the last place, i.e. the least significant digit) can cause the result to be rounded in the other direction. Since the order of operations in the hardware interpolation unit can differ between the two triangles, the values prior to rounding can be slightly different. OpenGL does not provide any guarantees about this.
For example, you might get 1.499999 in the upper triangle and 1.500000 in the lower triangle. When the 0.5 offset is added they become 1.999999 and 2.000000, and the int() truncation then yields 1 in one triangle but 2 in the other.
If pixel-perfect results are important to you, I suggest you calculate the uv coordinates manually from gl_FragCoord.xy. In the case of an axis-aligned rectangle, as in your example, this is straightforward to do.
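A minimal sketch of that approach, assuming the rectangle's lower-left corner and size in pixels are provided as uniforms (rectOrigin and rectSize are my names, not from the question):
#version 330 core
out vec4 color;
uniform vec2 rectOrigin; // lower-left corner of the quad in pixels, e.g. (0.0, 150.5)
uniform vec2 rectSize;   // quad size in pixels, e.g. (333.0, 333.0)
void main()
{
    // gl_FragCoord.xy is the pixel centre (x + 0.5, y + 0.5), so this uv is
    // computed identically for both triangles; no attribute interpolation involved.
    vec2 uv = (gl_FragCoord.xy - rectOrigin) / rectSize;
    int v = int(uv.y * rectSize.y);
    if (v % 2 == 0)
        color = vec4(1.0, 0.0, 0.0, 1.0);
    else
        color = vec4(0.0, 0.0, 1.0, 1.0);
}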
I have the following fragment shader:
#version 330
layout(location=0) out vec4 frag_colour;
in vec2 texelCoords;
uniform sampler2D uTexture; // the color
uniform sampler2D uTextureHeightmap; // the heightmap
uniform float uSunDistance = -10000000.0; // really far away vertically
uniform float uSunInclination; // height from the heightmap plane
uniform float uSunAzimuth; // clockwise rotation point
uniform float uQuality; // used to determine number of steps and steps size
void main()
{
vec4 c = texture(uTexture,texelCoords);
vec2 textureD = textureSize(uTexture,0);
float d = max(textureD.x,textureD.y); // use the largest dimension to determine stepsize etc
// position the sun in the centre of the screen and convert from spherical to cartesian coordinates
vec3 sunPosition = vec3(textureD.x/2,textureD.y/2,0) + vec3( uSunDistance*sin(uSunInclination)*cos(uSunAzimuth),
uSunDistance*sin(uSunInclination)*sin(uSunAzimuth),
uSunDistance*cos(uSunInclination) );
float height = texture(uTextureHeightmap, texelCoords).r; // starting height
vec3 direction = normalize(vec3(texelCoords,height) - sunPosition); // sunlight direction
float sampleDistance = 0;
float samples = d*uQuality;
float stepSize = 1.0 / ((samples/d) * d);
for(int i = 0; i < samples; i++)
{
sampleDistance += stepSize; // increase the sample distance
vec3 newPoint = vec3(texelCoords,height) + direction * sampleDistance; // get the coord for the next sample point
float newHeight = texture(uTextureHeightmap, newPoint.xy).r; // get the height of that sample point
// put it in shadow if we hit something that is higher than our starting point AND higher than the ray we're casting
if(newHeight > height && newHeight > newPoint.z)
{
c *= 0.5;
break;
}
}
frag_colour = c;
}
The purpose is for it to cast shadows based on a heightmap. Pretty nifty, and the results look good.
However, there's a problem: the shadows appear longer when they are horizontal than when they are vertical. If I make the window taller than it is wide, I get the opposite effect; that is, the shadows cast longer along the window's longer dimension.
This tells me that it's to do with the way I'm stepping in the above shader, but I can't tell the problem.
To illustrate, here is the result with a uSunAzimuth that produces a horizontally cast shadow:
And here is the exact same code with a uSunAzimuth for a vertical shadow:
It's not very pronounced in these low-resolution images, but at larger resolutions the effect becomes more exaggerated. Essentially, if you sweep the azimuth through all 360 degrees, the shadow sweeps out an ellipse instead of a circle.
The shadow fragment shader operates on a "snapshot" of the viewport. When your scene is rendered and this "snapshot" is generated, the vertex positions are transformed by the projection matrix. The projection matrix describes the mapping from 3D points of a scene to 2D points of the viewport, and it takes into account the aspect ratio of the viewport.
(see Both depth buffer and triangle face orientation are reversed in OpenGL,
and Transform the modelMatrix).
This means that the height map (uTextureHeightmap) covers a rectangular field of view that depends on the aspect ratio.
But the texture coordinates, which you use to access the height map, describe a quad in the range (0, 0) to (1, 1).
This mismatch has to be compensated for by scaling with the aspect ratio.
vec3 direction = ....;
float aspectRatio = textureD.x / textureD.y;
direction.xy *= vec2( 1.0/aspectRatio, 1.0 );
I just needed to adjust the direction slightly.
float aspectCorrection = textureD.x / textureD.y;
...
vec3 direction = normalize(vec3(texelCoords,height) - sunPosition);
direction.y *= aspectCorrection;
I am writing a 2D game using OpenGL and I have planned a shadow casting algorithm which needs a transformation of a texture from Polar Coordinates to Rectangular Coordinates. The desired effect is the following:
From this:
To this:
I know the formulas for converting coordinates between the Polar and Rectangular systems, but I am having problems writing the shader to achieve the desired effect.
My shader receives a texture as an input and should draw the warped texture to the screen. I planned the following (knowing that the fragment shader acts upon one fragment at a time):
Find the coordinates of the current fragment using gl_FragCoord.xy
Determine r and theta that correspond to the point (x, y).
Transform r and theta into texture_x and texture_y (which will be used to sample the texture)
Transfer the sampled pixel to the current fragment
My final result is the same input texture rotated 90 degrees clockwise. I think I'm missing something in step 3. I might just be getting back the same x and y of the current fragment, because I'm simply applying both the transform and the inverse transform formulas.
How should I proceed to get the expected result?
Here is my shader:
#version 120
uniform sampler2D tex;
void main() {
vec2 fragCoords = gl_FragCoord.xy - vec2(128, 128); //shift the coordinates so that 0, 0 is in the center of the screen (the final texture is 256 * 256)
fragCoords /= vec2(256, 256);
float r = sqrt(pow(fragCoords.x, 2) + pow(fragCoords.y, 2));
float theta = atan(fragCoords.y, fragCoords.x);
if (fragCoords.y/fragCoords.x <= 0.5 && fragCoords.y/fragCoords.x >= -0.5) {
r *= 1/(256*sin(theta));
} else {
r *= 1/(0.5*256*cos(theta));
}
vec2 texCoords = vec2(r, theta);
vec4 texFrag = texture2D(tex, texCoords);
gl_FragColor = texFrag * vec4(1.0, 0.0, 0.0, 1.0);
}
In your shader you're first translating into polar coordinates
float r = sqrt(pow(fragCoords.x, 2) + pow(fragCoords.y, 2));
float theta = atan(fragCoords.y, fragCoords.x);
and then you're translating them back into Cartesian coordinates
float tX = r * sin(theta);
float tY = r * cos(theta);
You want to stay in polar coordinates, so just plug r and theta into the texture coordinates
vec2 texCoords = vec2(r , theta);
vec4 texFrag = texture2D(tex, texCoords);
However, by the looks of the images you pasted, there's some renormalization step involved so that (r, theta) covers a rectangular area. If I'm not entirely mistaken, r is scaled by (divided by) the distance a ray from the centre-bottom travels before it reaches the edge of the rectangular area. If we assume theta = 0 to be straight up, then for the range [-atan(0.5), atan(0.5)] that distance is height/cos(theta), and outside that range it is (0.5*width)/|sin(theta)|.
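A rough sketch of how that could look in the question's setup (256 x 256 target, origin at the centre of the bottom edge, theta = 0 straight up); the remapping of theta into [0, 1] and the exact boundary distances are my reading of the description above, not code from the answer:
#version 120
uniform sampler2D tex;
void main() {
    // Origin at the centre of the bottom edge of the 256 x 256 target,
    // so p.x is in [-0.5, 0.5] and p.y is in [0, 1].
    vec2 p = (gl_FragCoord.xy - vec2(128.0, 0.0)) / 256.0;
    float r     = length(p);
    float theta = atan(p.x, p.y);   // 0 = straight up, positive to the right
    // Distance from the origin to the rectangle's boundary along theta:
    // the ray exits through the top edge while |theta| <= atan(0.5),
    // otherwise through one of the side edges.
    float maxR = (abs(theta) <= atan(0.5)) ? 1.0 / cos(theta)
                                           : 0.5 / abs(sin(theta));
    // Normalise so that (r, theta) covers the whole [0, 1] x [0, 1] range.
    // Since p.y >= 0, theta stays within [-pi/2, pi/2].
    vec2 texCoords = vec2(r / maxR, theta / 3.14159265 + 0.5);
    gl_FragColor = texture2D(tex, texCoords);
}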
Consider the following scene.
I transformed pMin and pMax from world space to viewport space. The area bounded by pMin and pMax follows the user's mouse as it slides over the plane (the larger rectangle).
Is there a way, inside the fragment shader, to decide whether the fragment lies within the inner area or not? I tried comparing against gl_FragCoord.x and gl_FragCoord.z, but it does not yield correct results.
if((gl_FragCoord.x < splitMax.x && gl_FragCoord.x > splitMin.x)
&& (gl_FragCoord.z < splitMax.z && gl_FragCoord.z > splitMin.z)){
//within area following the mouse
} else {
//outside of area following the mouse
}
In cascaded shadow mapping, shadow maps are chosen based on the fragment's z value and whether it lies inside the computed frustum split z values. I'm trying to do the same, except that I want my lookup to also consider the x coordinate.
Thanks to a guy in ##opengl on freenode, I managed to get this working the following way:
vertex shader: Transform the incoming vertex to world space
out vec4 worldPos;
...
worldPos = modelMatrix * vec4(vertex, 1.0);
fragment shader: Send in pMin and pMax in world space coordinates
uniform vec3 pMin, pMax; // world-space bounds, set from the application
in vec4 worldPos;
...
if ((worldPos.x > pMin.x && worldPos.x < pMax.x) && (worldPos.z > pMin.z && worldPos.z < pMax.z)) {
    FragColor = vec4(1.0, 0.0, 0.0, 1.0);
} else {
    // FragColor = the regular scene lighting
}
Result: