Water refraction causing odd effects, and won't go above 1.0 - GLSL

I've been trying to generate water effects via a fragment shader on Shadertoy. Right now I'm using several octaves of ridged multifractal noise to generate a water-like surface, and so far this works fine. The next step I wanted to take was proper light refraction through this "water surface"; however, I end up with a strange situation.
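For context, ridged multifractal noise is typically built like this minimal sketch (not the code from my shader; it assumes a 2D noise() function returning values in [-1, 1] is defined elsewhere):
float ridgedFBM(vec2 p) {
    float height = 0.0;
    float amplitude = 0.5;
    float frequency = 1.0;
    for (int i = 0; i < 5; i++) {
        // Fold the noise around zero and invert it so the creases become sharp ridges.
        float ridge = 1.0 - abs(noise(p * frequency));
        height += ridge * ridge * amplitude;
        frequency *= 2.0;  // lacunarity
        amplitude *= 0.5;  // gain
    }
    return height;
}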
I've set up the shader to raymarch two spheres, with soft shadowing, and a plane with a checkerboard pattern to easily see the distortion.
Here is my shader.
https://www.shadertoy.com/view/XddcRB
(x- and y-axis rotation via the up, down, left, and right arrow keys; WASD for lateral movement)
Look down to see the refraction surface (if you move too far down, you will be below the surface, and no refraction will take place)
At first glance it will appear as if this effect is working, but if you move toward the water you'll see that, until you move below the surface (where no refraction takes place), the checkerboard doesn't actually respond to y-axis movement with a change in perspective (that is, when looking down and moving up or down, the pattern stays the same).
I thought that was because I didn't take the new origin into account when displaying the pattern; however, when I change the code that decides whether to show the refraction from
vec3 normal_c;
float water_surface = gradientNoiseRayMarch(ray, origin, normal_c);
vec3 water_ray = origin + ray * water_surface;
float depth;
if(origin.y > water_ray.y){
    vec3 refract_ray = refract(ray, normal_c, 1.0);
    ray = normalize(refract_ray);
    //origin = water_ray;
    depth = rayMarch(normalize(refract_ray), water_ray);
}
...
vec3 surface_point = vec3(origin+ray*depth);
float value = getCheckerPattern(surface_point.xz, 2.0);
to
vec3 normal_c;
float water_surface = gradientNoiseRayMarch(ray, origin, normal_c);
vec3 water_ray = origin + ray * water_surface;
float depth;
if(origin.y > water_ray.y){
    vec3 refract_ray = refract(ray, normal_c, 1.0);
    ray = normalize(refract_ray);
    origin = water_ray; //CHANGED
    depth = rayMarch(normalize(refract_ray), water_ray);
}
...
vec3 surface_point = vec3(origin+ray*depth);
float value = getCheckerPattern(surface_point.xz, 2.0);
it removes the refraction effect entirely.
Additionally, if I try to increase the index of refraction, I see nothing below the surface (even with an index of refraction of 1.1).
In addition to these issues, the spheres never appear to be warped like the pattern does.
I would think that I would need to set the origin of each refracted ray at the point where it hits the water, and bend the ray from that point onward (hence why I thought it would be necessary to set origin = water_ray, which is the point where the ray reaches the water surface).
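For reference, GLSL's refract(I, N, eta) takes eta as the ratio of refractive indices n1/n2 (so 1.0 bends nothing, and going from air into water would be roughly 1.0/1.33), and it returns the zero vector on total internal reflection. What I had in mind is roughly this sketch (illustration only, reusing the helpers named above, not the working shader):
vec3 normal_c;
float water_surface = gradientNoiseRayMarch(ray, origin, normal_c);
vec3 water_hit = origin + ray * water_surface;
float depth = 0.0;
if (origin.y > water_hit.y) {
    // eta = n_air / n_water; 1.0 would mean no bending at all.
    vec3 refract_ray = refract(normalize(ray), normalize(normal_c), 1.0 / 1.33);
    if (dot(refract_ray, refract_ray) > 0.0) {   // zero vector signals total internal reflection
        ray = normalize(refract_ray);
        origin = water_hit + ray * 0.001;        // nudge past the surface to avoid re-hitting it
        depth = rayMarch(ray, origin);
    }
    // (total internal reflection would need a reflect() path instead)
}
vec3 surface_point = origin + ray * depth;
float value = getCheckerPattern(surface_point.xz, 2.0);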

Related

How to rotate a translated texture around its center

I'm working on the following shader that
translates (on y)
rotates
repeats (tiles)
a texture:
uniform sampler2D texture;
uniform vec2 resolution;
varying vec4 vertColor;
varying vec4 vertTexCoord;
uniform float rotation;
uniform float yTranslation;
void main() {
    vec2 repeat = vec2(2, 2);
    vec2 coord = vertTexCoord.st;
    coord.y += yTranslation;
    float sin_factor = sin(rotation);
    float cos_factor = cos(rotation);
    coord += vec2(0.5);
    coord = coord * mat2(cos_factor, sin_factor, -sin_factor, cos_factor) * 0.3;
    coord -= vec2(0.5);
    coord = vec2(mod(coord.x * repeat.x, 1.0f), mod(coord.y * repeat.y, 1.0f));
    gl_FragColor = texture2D(texture, coord) * vertColor;
}
Current behavior
Desired behavior
I want the texture to always rotate around the center, no matter how far it has been translated.
Simply swapping the order of the steps results in weird behavior. What am I missing?
The problem statement in your question is really not the right one. Your additional comment (to a now-deleted answer):
I have a boat that always stays in the center of the screen, the water texture (controlled by this shader) under it moves to make it look like the boat is moving. The movement of the water texture is controlled by rotation (for steering) and yTranslation (for how far the boat has moved forwards/backwards)
makes it clear that you're asking for a different thing, and the approach described in the question is simply not going to solve your problem.
When your boat moves and rotates, it will basically travel on a curve (and you want the inverse of that curve to travel through texture space). But your two parameters rotation and yTranslation (two degrees of freedom) are not capable of describing such a curve. Your problem needs at least another parameter, xTranslation - so in the end, you need a 2D vector describing the position of your boat plus an angle describing the rotation. And you need to properly accumulate this data at each simulation step:
update the rotation accordingly
calculate the 2D heading vector of your ship, as defined by the current rotation
scale it according to the velocity of the movement
accumulate it onto the position vector.
Then, your shader simply has to
1. translate the texcoords by position (or -position, whatever you store)
2. rotate around the pivot point (which is constant and only depends on how you laid out your texture space)
coord = vec2(mod(coord.x * repeat.x, 1.0f), mod(coord.y * repeat.y, 1.0f));
that's a waste of GPU ALU cycles; the TMUs will already do the mod for you with the GL_REPEAT wrap mode.
However, what you now have here is rotation, scaling, and translation, so just use a single matrix for the whole texcoord transformation - the accumulation of the 2D position that I talked about earlier can nicely be done with the matrix representation. It will also remove the sin and cos from your shader, which is another big waste right now.
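For illustration, a minimal sketch of what the fragment shader could then collapse to, assuming a hypothetical mat3 uniform texMatrix (built on the CPU from the accumulated position and rotation) and GL_REPEAT wrapping on the texture:
uniform sampler2D texture;
uniform mat3 texMatrix;   // hypothetical: accumulated translate * rotate * scale, built on the CPU
varying vec4 vertColor;
varying vec4 vertTexCoord;

void main() {
    // One matrix multiply replaces the translate/rotate/scale steps;
    // GL_REPEAT wrapping replaces the mod().
    vec2 coord = (texMatrix * vec3(vertTexCoord.st, 1.0)).xy;
    gl_FragColor = texture2D(texture, coord) * vertColor;
}
Rebuilding texMatrix once per frame from the accumulated position and heading keeps the per-fragment work to a single matrix multiply.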

Why does gl_FragCoord.z differ from ((pos.z / pos.w) + 1.0) * 0.5?

Does anyone know why 'depth' (vertex shader) differs from 'gl_FragCoord.z' (as rendered by OpenGL)? The difference grows as z decreases. Is it possible that 'depth' is more precise at higher z values?
.vsh
out float depth;
void main (void) {
    vec4 pos = mvpMatrix * vertex;
    depth = ((pos.z / pos.w) + 1.0) * 0.5;
    gl_Position = pos;
}
.fsh
in float depth;
void main(void) {
    gl_FragDepth = depth; // or gl_FragCoord.z;
}
There are a couple of issues with your approach; the main points are:
gl_FragCoord.z is the hyperbolically distorted window-space z value. However, that per-vertex z/w value is just linearly interpolated in screen space for each fragment. But when you use a varying out float depth = (pos.z / pos.w), the GL will do a perspective-corrected interpolation, which is non-linear. You could fix this by declaring the varying with the noperspective interpolation qualifier (see the sketch after these points).
(pos.z / pos.w) doesn't even make sense in general. Think about it: if the point lies in the plane through the camera, you'll get pos.w = 0 and no valid result. gl_FragCoord.z does not have this issue because clipping is done before the divide, and the divide is then done for a new vertex which lies on the near plane and which you're never going to see (there's no vertex shader invocation for it).
The issue is also present when points lie behind the camera: they will end up mirrored in front of the camera. If you have a primitive whose vertices lie on both sides of the camera, you will get a completely meaningless interpolated depth value, no matter which interpolation method you choose.
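For illustration, a minimal sketch of that qualifier change (points 2 and 3 still apply whenever vertices sit at or behind the camera plane):
.vsh
// noperspective asks for plain screen-space linear interpolation,
// which matches how gl_FragCoord.z itself is interpolated.
uniform mat4 mvpMatrix;
in vec4 vertex;
noperspective out float depth;
void main(void) {
    vec4 pos = mvpMatrix * vertex;
    depth = ((pos.z / pos.w) + 1.0) * 0.5;
    gl_Position = pos;
}
.fsh
noperspective in float depth;
void main(void) {
    gl_FragDepth = depth;
}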

Shadow Map Produces Incorrect Results

I'm attempting to add shadow mapping to my deferred rendering pipeline, but I'm running into a few issues actually generating the shadow map and then shadowing the pixels: pixels that I believe should be shadowed simply aren't.
I have a single directional light, which is the 'sun' in my engine. I have deferred rendering set up for lighting, which works properly thus far. I render the scene again into a depth-only FBO for the shadow map, using the following code to generate the view matrix:
glm::vec3 position = r->getCamera()->getCameraPosition(); // position of level camera
glm::vec3 lightDir = this->sun->getDirection(); // sun direction vector
glm::mat4 depthProjectionMatrix = glm::ortho<float>(-10,10,-10,10,-10,20); // ortho projection
glm::mat4 depthViewMatrix = glm::lookAt(position + (lightDir * 20.f / 2.f), -lightDir, glm::vec3(0,1,0));
glm::mat4 lightSpaceMatrix = depthProjectionMatrix * depthViewMatrix;
Then, in my lighting shader, I use the following code to determine whether a pixel is in shadow or not:
// lightSpaceMatrix is the same as above, FragWorldPos is the world position of the texel
vec4 FragPosLightSpace = lightSpaceMatrix * vec4(FragWorldPos, 1.0f);
// multiply non-ambient light values by ShadowCalculation(FragPosLightSpace)
// ... do more stuff ...
float ShadowCalculation(vec4 fragPosLightSpace) {
    // perform perspective divide
    vec3 projCoords = fragPosLightSpace.xyz / fragPosLightSpace.w;
    // vec3 projCoords = fragPosLightSpace.xyz;
    // Transform to [0,1] range
    projCoords = projCoords * 0.5 + 0.5;
    // Get closest depth value from light's perspective (using [0,1] range fragPosLight as coords)
    float closestDepth = texture(gSunShadowMap, projCoords.xy).r;
    // Get depth of current fragment from light's perspective
    float currentDepth = projCoords.z;
    // Check whether current frag pos is in shadow
    float bias = 0.005;
    float shadow = (currentDepth - bias) > closestDepth ? 1.0 : 0.0;
    // Ensure that Z value is no larger than 1
    if(projCoords.z > 1.0) {
        shadow = 0.0;
    }
    return shadow;
}
However, that doesn't really get me what I'm after. Here's a screenshot of the output after shadowing, as well as the shadow map roughly converted to an image in Photoshop:
Render output
Shadow Map
The directional light is the only light in my shader, and the shadow map appears to be rendered pretty close to correctly, since the perspective/direction roughly match. However, what I don't understand is why none of the teapots actually end up casting a shadow on the others.
I'd appreciate any pointers on what I might be doing wrong. I think my issue lies either in the calculation of that light space matrix (I'm not sure how to properly calculate it for a moving camera, so that whatever is in view stays covered), or in the way I determine whether the texel the deferred renderer is shading is in shadow or not. (FWIW, I determine the world position from the depth buffer, but I've verified that this calculation is working correctly.)
Thanks for any help.
Debugging shadow problems can be tricky. Let's start with a few points:
If you look at your render closely, you will actually see a shadow on one of the pots in the top left corner.
Try rotating your sun; this usually helps to see whether there are any problems with the light transform matrix. From your output, it seems the sun is very horizontal and might not cast many shadows in this setup (another angle might show more shadows).
It appears as though you are calculating the matrix correctly, but try shrinking your maximum depth in glm::ortho(-10,10,-10,10,-10,20) to tightly fit your scene. If the depth range is too large, you will lose precision and the shadows will have artifacts.
To further narrow down where the problem is coming from, try outputting the result of your shadow map lookup:
closestDepth = texture(gSunShadowMap, projCoords.xy).r
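For example, a minimal sketch of such a debug output, assuming the gSunShadowMap sampler and the FragPosLightSpace from the question, with FragColor standing in for whatever output your lighting pass writes:
// Visualize the shadow-map depth that each screen pixel looks up.
vec3 projCoords = FragPosLightSpace.xyz / FragPosLightSpace.w;
projCoords = projCoords * 0.5 + 0.5;
float closestDepth = texture(gSunShadowMap, projCoords.xy).r;
// Grayscale: if the scene is recognizable from the light's point of view here,
// the projection is fine and the depth comparison becomes the main suspect.
FragColor = vec4(vec3(closestDepth), 1.0);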
If the shadow map is being projected correctly, then you know you have a problem in your depth comparisons. Hope this helps!

OpenGL point sprites not always rendered front to back

I'm working on a game engine with LWJGL3 in which all objects are point sprites. It uses an orthographic camera and I wrote a vertex shader that calculates the 3D position of each sprite (which also causes the fish-eye lens effect). I calculate the distance to the camera and use that value as the depth value for each point sprite. The data for these sprites is stored in chunks of 16x16 in VBOs.
The issue I'm having is that the sprites are not always rendered front to back. When looking away from the origin, depth testing works as intended, but when looking in the direction of the origin, sprites are rendered from back to front, which causes a big performance drop.
This might seem like depth testing is not enabled, but when I disable depth testing, sprites in the back are drawn on top of the ones in front, so that is not the case.
Here's the full vertex shader:
#version 330 core
#define M_PI 3.1415926535897932384626433832795
uniform mat4 camRotMat; // Virtual camera rotation
uniform vec3 camPos; // Virtual camera position
uniform vec2 fov; // Virtual camera field of view
uniform vec2 screen; // Screen size (pixels)
in vec3 pos;
out vec4 vColor;
void main() {
    // Compute distance and rotated delta position to camera
    float dist = distance(pos, camPos);
    vec3 dXYZ = (camRotMat * vec4(camPos - pos, 0)).xyz;
    // Compute angles of this 3D position relative center of camera FOV
    // Distance is never negative, so negate it manually when behind camera
    vec2 rla = vec2(atan(dXYZ.x, length(dXYZ.yz)),
                    atan(dXYZ.z, length(dXYZ.xy) * sign(-dXYZ.y)));
    // Find sprite size and coordinates of the center on the screen
    float size = screen.y / dist * 2; // Sprites become smaller based on their distance
    vec2 px = -rla / fov * 2; // Find pixel position on screen of this object
    // Output
    vColor = vec4((1 - (dist * dist) / (64 * 64)) + 0.5); // Brightness
    gl_Position = vec4(px, dist / 1000, 1.0); // Position on the screen
    gl_PointSize = size; // Sprite size
}
In the first image, you can see how the game normally looks. In the second one, I've disabled alpha-testing, so you can see sprites are rendered front to back. But in the third image, when looking in the direction of the origin, sprites are being drawn back to front.
Edit:
I am almost 100% certain the depth value is set correctly. The size of the sprites is directly linked to the distance, and they resize correctly when moving around. I also set the color to be brighter when the distance is lower, which works as expected.
I also set the following flags (and of course clear the frame and depth buffers):
GL11.glEnable(GL11.GL_DEPTH_TEST);
GL11.glDepthFunc(GL11.GL_LEQUAL);
Edit2:
Here's a gif of what it looks like when you rotate around: https://i.imgur.com/v4iWe9p.gifv
Edit3:
I think I misunderstood how depth testing works. Here is a video of how the sprites are drawn over time: https://www.youtube.com/watch?v=KgORzkM9U2w
That explains the initial problem, so now I just need to find a way to render them in a different order depending on the camera rotation.

How to set a specific eye point using perspective view with shaders

These days I am reading the Learning Modern 3D Graphics Programming book by Jason L. McKesson. Basically it is a book about OpenGL 3.3, and I am now at chapter 4, which is about orthographic and perspective projection.
At the end of the chapter, under the "Further Study" section, he suggests trying a few things, like implementing a variable eye point (he used (0, 0, 0) in camera space at first for simplicity) and an arbitrary perspective plane location.
He says I am going to need to offset the X, Y camera-space positions of the vertices by E_x and E_y respectively.
I cannot understand this passage: how am I supposed to implement a variable eye point by modifying only the X and Y offsets?
Edit: could it be something like this?
#version 330
layout(location = 0) in vec4 position;
layout(location = 1) in vec4 color;
smooth out vec4 theColor;
uniform vec2 offset;
uniform vec2 E;
uniform float zNear;
uniform float zFar;
uniform float frustumScale;
void main()
{
    vec4 cameraPos = position + vec4(offset.x, offset.y, 0.0, 0.0);
    vec4 clipPos;
    clipPos.xy = cameraPos.xy * frustumScale + vec4(E.x, E.y, 0.0, 0.0);
    clipPos.z = cameraPos.z * (zNear + zFar) / (zNear - zFar);
    clipPos.z += 2 * zNear * zFar / (zNear - zFar);
    clipPos.w = cameraPos.z / (-E.z);
    gl_Position = clipPos;
    theColor = color;
}
Edit2: thanks Boris, your picture helped a lot :) especially because:
it makes clear what you previously stated about thinking of E as the projection plane position and not the eye point position
it underlines that the size of the projection plane must always be [-1, 1], a passage that I read in the book without fully understanding what it meant
Just out of curiosity, why do you mention multiplying after subtracting? Is it for the same reason the book gives, that is, aspect ratio? Everything logically pushes me to do exactly the opposite, that is, first the translation (-2) and then the multiplication (/5). Or maybe with the term "scaling" the book refers to the reshape function?
Here, we are interested in computing a transformation from Camera Coordinates (CC) to Normalized Device Coordinates (NDC).
Think of E as the position of the projection plane in Camera Coordinates, rather than the position of the eye point relative to the projection plane. In Camera Coordinates, the eye point is by definition located at the origin, at least in my interpretation of what "Camera Coordinates" means: a coordinate frame centered on where you look at the scene from. (You can mathematically define a perspective transformation centered anywhere, but then your input space is not the camera space, imho. This is what the World->Camera transformation is for, as you will see in chapter 6.)
Summary:
you are in camera space, hence your eye point is located at (0,0,0)
you are looking toward the negative Z-axis
your projection plane is parallel to the xOy plane, with a size of [-1,1] in both directions
Here is the picture (each tick is 0.5 units):
In this picture, you can see that the projection plane (bottom side of the gray trapezoid) is centered at (0,0,-1), with a size of [-1,1] in both the X and Y directions.
Now, what is asked is, instead of choosing (0,0,-1) for the center of this plane, to choose an arbitrary position (E.x, E.y, E.z) (assuming E.z is negative). But the plane still has to be parallel to the xOy plane and keep the same size.
You can see that E.xy plays a very different role than E.z, which is why E.xy will be involved in a subtraction, while E.z will be involved in a division. This is easy to see with an example:
assume zNear = -E.z (not necessarily the case, but you can in fact always change frustumScale to have an equivalent perspective satisfying this)
consider the point E (which is the center of the projection plane).
What is its coordinate in NDC space? It is (0,0,-1) by definition. What you've done is subtract E.xy, but divide by -E.z.
Your code gets this idea, but some things are still wrong:
First, you defined uniform vec2 E; instead of uniform vec3 E; (just a typo, not a big deal)
The line clipPos.xy = ...; is vec2 arithmetic. Hence, you can only multiply by scalar values (i.e., a float), or add/subtract vec2 values. So vec4(E.x, E.y, 0.0, 0.0) is of the incorrect type; you should use E.xy instead, which has the correct type vec2.
You should in fact subtract E.xy instead of adding it. This is easy to see in my example above.
Finally, things are more subtle ;-)
I made a picture to illustrate the modifications:
Each tick is 1 unit in this picture. The top left is your Camera Coordinate Space, with zNear, zFar, and two possible projection planes displayed. In blue is the one used in the explanation and shader here, and the red one is the one you now want to use. The colored areas correspond to what should be visible on your final screen, i.e. what should be inside the cube [-1,1]^3 in NDC space. Hence, if you use the blue projection plane, you want to obtain the space at the top right, and if you use the red projection plane, you want to obtain the space at the bottom. To do this, you can observe that you need to perform the scaling and translation in NDC space, i.e. after the perspective division! (I think what is written in the book is either incorrect, or interprets the question differently.)
Hence you want to do, in Euclidean coordinates (i.e., not homogeneous coordinates, without the W coordinate):
clipPosEuclideanRed.xy = clipPosEuclideanBlue.xy * (-E.z) - E.xy;
clipPosEuclideanRed.z = clipPosEuclideanBlue.z;
However, because you are in homogeneous coordinates, these values are in fact:
clipPosEuclidean.xyz = clipPos.xyz / clipPos.w; // with clipPos.w = -cameraPos.z;
Hence, you have to compensate by writing:
clipPosRed.xy = clipPosBlue.xy * (-E.z) - E.xy * (-cameraPos.z);
clipPosRed.z = clipPosBlue.z;
So my solution to this problem would be to add only one line:
void main()
{
    vec4 cameraPos = position + vec4(offset.x, offset.y, 0.0, 0.0);
    vec4 clipPos;
    clipPos.xy = cameraPos.xy * frustumScale;
    // only add this line
    clipPos.xy = - clipPos.xy * E.z + E.xy * cameraPos.z;
    clipPos.z = cameraPos.z * (zNear + zFar) / (zNear - zFar);
    clipPos.z += 2 * zNear * zFar / (zNear - zFar);
    clipPos.w = -cameraPos.z;
    gl_Position = clipPos;
    theColor = color;
}