Here is my question; I will lay out the details to make it clear:
I am writing a program that draws 2D squares using instancing.
My camera direction is (0,0,-1), camera up is (0,1,0), and camera position is (0,0,3); the camera position changes when I press certain keys.
What I want is that when I zoom in (the camera moves closer to the square), the square's size on the screen won't change. So in my shader:
#version 330 core
layout(location = 0) in vec2 squareVertices;
layout(location = 1) in vec4 xysc;
out vec4 particlecolor;
uniform mat4 VP;
void main()
{
    float particleSize = xysc.z;
    float color = xysc.w;
    gl_Position = VP * vec4(xysc.x, xysc.y, 2.0, 1.0)
                + vec4(squareVertices.x * particleSize, squareVertices.y * particleSize, 0, 0);
    particlecolor = vec4(1.0f * color, 1.0f * (1 - color), 0.0f, 0.5f);
}
Please notice that, in order to keep the squares' size unchanged, what I do is:
1. transform the center of the square first
VP * vec4(xysc.x, xysc.y, 2.0, 1.0)
2. then compute one of the four corners (x,y,z,1) of the square
+ vec4(squareVertices.x*particleSize,squareVertices.y*particleSize,0,0);
instead of:
gl_Position = VP* (vec4(xysc.x, xysc.y, 2.0, 1.0) + vec4(squareVertices.x*particleSize,squareVertices.y*particleSize,0,0));
However, when I move the camera closer to the z=0 plane, the squares' size grows unexpectedly. Where is the problem? I can provide demo code if necessary.
Sounds like you are using a perspective projection, and the formula you use in steps 1 and 2 won't work with it: VP * vec4 will in the general case produce a vec4(x, y, z, w) with w != 1, and adding vec4(a, b, 0, 0) to it just gives you vec3((x+a)/w, (y+b)/w, z/w) after the perspective divide, while what you seem to want is vec3(x/w + a, y/w + b, z/w). So the correct approach is to scale a and b by w and add that before the divide: vec4(x + a*w, y + b*w, z, w).
Note that when you move your camera closer to the geometry, the effective w value approaches zero, so (x+a)/w becomes greater than x/w + a, which is why your geometry gets bigger.
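Applied to your vertex shader, a minimal sketch of that fix (reusing the declarations from the shader above) could look like this:
void main()
{
    float particleSize = xysc.z;
    float color = xysc.w;

    // transform the particle centre to clip space first ...
    vec4 center = VP * vec4(xysc.x, xysc.y, 2.0, 1.0);

    // ... then scale the corner offset by the centre's w, so the later
    // perspective divide cancels it and the on-screen size stays constant
    gl_Position = center + vec4(squareVertices * particleSize * center.w, 0.0, 0.0);

    particlecolor = vec4(color, 1.0 - color, 0.0, 0.5);
}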
I'm trying to implement normal mapping, using a simple cube that I created. I followed this tutorial https://learnopengl.com/Advanced-Lighting/Normal-Mapping but I can't really work out how normal mapping should be done when drawing 3D objects, since the tutorial uses a 2D object.
In particular, my cube seems almost correctly lit, but there's something that I think is not working as it should. I'm using a geometry shader that outputs normals as green vectors and tangents as red vectors, to help me debug. Here I post three screenshots of my work.
Directly lighted
Side lighted
Here I actually tried calculating my normals and tangents in a different way (quite wrong)
In the first image I calculate my cube's normals and tangents one face at a time. This seems to work for that face, but if I rotate my cube I think the lighting on the adjacent face is wrong. As you can see in the second image, it's not totally absent.
In the third image I tried summing all normals and tangents per vertex, as I think it should be done, but the result seems quite wrong, since there is too little lighting.
In the end, my question is how I should calculate normals and tangents.
Should I do per-face calculations, or sum the vectors per vertex across all the faces that share it, or something else?
EDIT --
I'm passing the normal and tangent to the vertex shader and setting up my TBN matrix. But as you can see in the first image, drawing my cube face by face, the faces adjacent to the one I'm looking at directly (which is well lit) are not correctly lit, and I don't know why. I thought that I wasn't correctly calculating my per-face normal and tangent, and that calculating a normal and tangent that take the object as a whole into account could be the right way.
If it's right to calculate the normal and tangent as shown in the second image (green normal, red tangent) to set up the TBN matrix, why does the right face seem to be incorrectly lit?
EDIT 2 --
Vertex shader:
void main(){
    texture_coordinates = textcoord;
    fragment_position = vec3(model * vec4(position, 1.0));

    mat3 normalMatrix = transpose(inverse(mat3(model)));
    vec3 T = normalize(normalMatrix * tangent);
    vec3 N = normalize(normalMatrix * normal);
    T = normalize(T - dot(T, N) * N);
    vec3 B = cross(N, T);
    mat3 TBN = transpose(mat3(T, B, N));

    view_position = TBN * viewPos;   // camera position
    light_position = TBN * lightPos; // light position
    fragment_position = TBN * fragment_position;

    gl_Position = projection * view * model * vec4(position, 1.0);
}
In the vertex shader I set up my TBN matrix and transform the light, fragment, and view vectors to tangent space; this way I won't have to do any further transformations in the fragment shader.
Fragment shader:
void main() {
    vec3 Normal = texture(TextSamplerNormals, texture_coordinates).rgb; // extract normal
    Normal = normalize(Normal * 2.0 - 1.0);                             // correct range

    material_color = texture(TextSampler, texture_coordinates.st);      // diffuse map

    vec3 I_amb = AmbientLight.color * AmbientLight.intensity;

    vec3 lightDir = normalize(light_position - fragment_position);

    vec3 I_dif = vec3(0, 0, 0);
    float DiffusiveFactor = max(dot(lightDir, Normal), 0.0);

    vec3 I_spe = vec3(0, 0, 0);
    float SpecularFactor = 0.0;

    if (DiffusiveFactor > 0.0) {
        I_dif = DiffusiveLight.color * DiffusiveLight.intensity * DiffusiveFactor;

        vec3 vertex_to_eye = normalize(view_position - fragment_position);
        vec3 light_reflect = normalize(reflect(-lightDir, Normal));

        SpecularFactor = pow(max(dot(vertex_to_eye, light_reflect), 0.0), SpecularLight.power);
        if (SpecularFactor > 0.0) {
            I_spe = DiffusiveLight.color * SpecularLight.intensity * SpecularFactor;
        }
    }

    color = vec4(material_color.rgb * (I_amb + I_dif + I_spe), material_color.a);
}
Handling discontinuity vs continuity
You are thinking about this the wrong way.
Depending on the use case, your normal map may be continuous or discontinuous. For example, with your cube, imagine each face had a different surface type; then the normals would be different depending on which face you are currently on.
Which normal you use is determined by the texture itself, not by any blending in the fragment shader.
The actual algorithm is (a GLSL sketch follows the list):
1. Load the rgb values of the normal
2. Convert to the [-1, 1] range
3. Rotate by the model matrix
4. Use the new value in the shading calculations
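As a sketch in GLSL (here `normalMap`, `uv` and `model` are placeholder names, not taken from your code):
vec3 n = texture(normalMap, uv).rgb;   // 1. load the rgb values of the normal
n = n * 2.0 - 1.0;                     // 2. convert from [0, 1] to [-1, 1]
n = normalize(mat3(model) * n);        // 3. rotate by the model matrix
// 4. use n in the shading calculations (diffuse, specular, ...)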
If you want continuous normals, then you need to make sure that the charts you use in texture space agree at their limits where they meet on the shape.
Mathematically that means: if U and V are regions of R^2 that map onto your shape (call the mapping onto the shape S, and the mapping onto its normal field N f), then whenever lim S(x_1, x_2) = lim S(y_1, y_2) with {x_1, x_2} ⊂ U and {y_1, y_2} ⊂ V, it should also hold that lim f(x_1, x_2) = lim f(y_1, y_2).
In plain English: if the coordinates in your charts map to positions that are close on the shape, then the normals they map to should also be close in normal space.
TL;DR: do not blend in the fragment shader. This is something that should be done by the normal map itself when it is baked, not by you when rendering.
Handling the tangent space
You have two options. Option 1: you pass the tangent T and the normal N to the shader. In that case the binormal B is T × N, and the basis {T, N, B} gives you the true space in which normals need to be expressed.
Assume that in tangent space x is sideways, y is forward, and z is up. Your transformed normal then becomes x·B + y·T + z·N.
Option 2: if you do not pass the tangent, you must first create an arbitrary vector that is orthogonal to the normal and use that as the tangent.
(Note that N is the model normal, while (x, y, z) is the normal sampled from the normal map.)
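A minimal sketch of option 1 in GLSL, following the axis convention above (`normal` and `tangent` are the interpolated per-vertex attributes; `normalMap` and `uv` are placeholder names):
vec3 N = normalize(normal);
vec3 T = normalize(tangent);
T = normalize(T - dot(T, N) * N);        // re-orthogonalize T against N
vec3 B = cross(T, N);                    // binormal B = T x N

vec3 t = texture(normalMap, uv).rgb * 2.0 - 1.0;             // normal-map normal
vec3 shadingNormal = normalize(t.x * B + t.y * T + t.z * N); // express it in {T, N, B}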
I've been working on a deferred renderer to do lighting with, and it works quite well, albeit using a position buffer in my G-buffer. Lighting is done in world space.
I have tried to implement an algorithm to recreate the world-space positions from the depth buffer and the texture coordinates, but with no luck.
My vertex shader is nothing particularly special, but this is the part of my fragment shader in which I (attempt to) calculate the world space position:
// Inverse projection matrix
uniform mat4 projMatrixInv;
// Inverse view matrix
uniform mat4 viewMatrixInv;
// texture position from vertex shader
in vec2 TexCoord;
... other uniforms ...
void main() {
    // Recalculate the fragment position from the depth buffer
    float Depth = texture(gDepth, TexCoord).x;
    vec3 FragWorldPos = WorldPosFromDepth(Depth);

    ... fun lighting code ...
}
// Linearizes a Z buffer value
float CalcLinearZ(float depth) {
    const float zFar = 100.0;
    const float zNear = 0.1;

    float linear = zNear / (zFar - depth * (zFar - zNear)) * zFar;

    // bias it from [0, 1] to [-1, 1]
    return (linear * 2.0) - 1.0;
}
// this is supposed to get the world position from the depth buffer
vec3 WorldPosFromDepth(float depth) {
    float ViewZ = CalcLinearZ(depth);

    // Get clip space
    vec4 clipSpacePosition = vec4(TexCoord * 2.0 - 1.0, ViewZ, 1);

    // Clip space -> View space
    vec4 viewSpacePosition = projMatrixInv * clipSpacePosition;

    // Perspective division
    viewSpacePosition /= viewSpacePosition.w;

    // View space -> World space
    vec4 worldSpacePosition = viewMatrixInv * viewSpacePosition;

    return worldSpacePosition.xyz;
}
I still have my position buffer, and I sample it to compare against the calculated position, so everything should come out black:
vec3 actualPosition = texture(gPosition, TexCoord).rgb;
vec3 difference = abs(FragWorldPos - actualPosition);
FragColour = vec4(difference, 0.0);
However, what I get is nowhere near the expected result, and of course, lighting doesn't work:
(Try to ignore the blur around the boxes, I was messing around with something else at the time.)
What could cause these issues, and how could I get the position reconstruction from depth working successfully? Thanks.
You are on the right track, but you have not applied the transformations in the correct order.
A quick recap of what you need to accomplish here might help:
1. Given texture coordinates in [0,1] and depth in [0,1], calculate the clip-space position
   - Do not linearize the depth buffer
   - Output: w = 1.0 and x, y, z in [-w, w]
2. Transform from clip space to view space (reverse projection)
   - Use the inverse projection matrix
   - Perform the perspective divide
3. Transform from view space to world space (reverse viewing transform)
   - Use the inverse view matrix
The following changes should accomplish that:
// this is supposed to get the world position from the depth buffer
vec3 WorldPosFromDepth(float depth) {
    float z = depth * 2.0 - 1.0;

    // Get clip space
    vec4 clipSpacePosition = vec4(TexCoord * 2.0 - 1.0, z, 1.0);

    // Clip space -> View space
    vec4 viewSpacePosition = projMatrixInv * clipSpacePosition;

    // Perspective division
    viewSpacePosition /= viewSpacePosition.w;

    // View space -> World space
    vec4 worldSpacePosition = viewMatrixInv * viewSpacePosition;

    return worldSpacePosition.xyz;
}
I would consider changing the name of CalcViewZ (...) though; that name is very misleading. Consider calling it something more appropriate, like CalcLinearZ (...).
I'm working on parallax mapping (from this tutorial: http://sunandblackcat.com/tipFullView.php?topicid=28) and I seem to only get good results when I move along one axis (e.g. left-to-right) while looking at a parallaxed quad. The image below illustrates this:
You can see it clearly at the left and right steep edges. If I'm moving to the right, the right steep edge should appear narrower than the left one (which looks correct in the left image) [camera is at the right side of the cube]. However, if I move along a different axis (instead of west to east I now move from top to bottom), you can see that this time the steep edges are incorrect [camera is again at the right side of the cube].
I'm using the simplest form of parallax mapping, and even that shows the same problem. The fragment shader looks like this:
void main()
{
    vec2 texCoords = fs_in.TexCoords;

    vec3 viewDir = normalize(viewPos - fs_in.FragPos);
    vec3 V = normalize(fs_in.TBN * viewDir);
    vec3 L = normalize(fs_in.TBN * lightDir);

    float height = texture(texture_height, texCoords).r;
    float scale = 0.2;
    vec2 texCoordsOffset = scale * V.xy * height;
    texCoords += texCoordsOffset;

    // calculate diffuse lighting
    vec3 N = texture(texture_normal, texCoords).rgb * 2.0 - 1.0;
    N = normalize(N); // normal already in tangent-space

    vec3 ambient = vec3(0.2f);

    float diff = clamp(dot(N, L), 0, 1);
    vec3 diffuse = texture(texture_diffuse, texCoords).rgb * diff;

    vec3 R = reflect(L, N);
    float spec = pow(max(dot(R, V), 0.0), 32);
    vec3 specular = vec3(spec);

    fragColor = vec4(ambient + diffuse + specular, 1.0);
}
TBN matrix is created as follows in the vertex shader:
vs_out.TBN = transpose(mat3(normalize(tangent), normalize(bitangent), normalize(vs_out.Normal)));
I use the transpose of the TBN matrix to transform all relevant vectors to tangent space. Without offsetting the TexCoords, the lighting looks solid with the normal-mapped texture, so my guess is that the TBN matrix isn't what's causing the issues. What could cause this to work correctly in only one direction?
edit
Interestingly, if I invert the y coordinate of the TexCoords input variable, parallax mapping seems to work. I have no idea why this works, though, and I need it to work without the inversion.
vec2 texCoords = vec2(fs_in.TexCoords.x, 1.0 - fs_in.TexCoords.y);
How do I compute an eye-space coordinate from window-space coordinates (a pixel in the framebuffer) plus the pixel's depth value in GLSL (gluUnProject in GLSL, so to speak)?
Looks to be a duplicate of GLSL convert gl_FragCoord.z into eye-space z.
Edit (complete answer):
// input: x_coord, y_coord, samplerDepth
vec2 xy = vec2(x_coord,y_coord); //in [0,1] range
vec4 v_screen = vec4(xy, texture(samplerDepth,xy), 1.0 );
vec4 v_homo = inverse(gl_ProjectionMatrix) * 2.0*(v_screen-vec4(0.5));
vec3 v_eye = v_homo.xyz / v_homo.w; //transfer from homogeneous coordinates
Assuming you've stuck with a fixed pipeline-style model, view and projection, you can just implement exactly the formula given in the gluUnProject man page.
There's no matrix inversion built into GLSL before version 1.40's inverse(), and either way you'd ideally do the inversion once on the CPU. So you need to supply a uniform containing the inverse of your composed modelViewProjection matrix. gl_FragCoord is in window coordinates, so you also need to supply the view dimensions.
So, you'd probably end up with something like (coding extemporaneously):
// `view` holds the viewport: x, y, width, height
vec4 unProjectedPosition = invertedModelViewProjection * vec4(
    2.0 * (gl_FragCoord.x - view[0]) / view[2] - 1.0,
    2.0 * (gl_FragCoord.y - view[1]) / view[3] - 1.0,
    2.0 * gl_FragCoord.z - 1.0,
    1.0);
If you've implemented your own analogue of the old matrix stack then you're probably fine inverting a matrix. Otherwise, it's possibly a more daunting topic than you had anticipated and you might be better off using MESA's open source implementation (see invert_matrix, the third function in that file), just because it's well tested if nothing else.
Well, a guy on opengl.org has pointed out that the clip-space coordinates the projection produces are divided by clipPos.w to compute the normalized device coordinates. When reversing the steps from the fragment via NDC back to clip-space coordinates, you need to reconstruct that w (which happens to be -z of the corresponding view-space (camera) coordinate) and multiply the NDC coordinate by that value to get the proper clip-space coordinate, which you can then turn into a view-space coordinate by multiplying it with the inverse projection matrix.
The following code assumes that you are processing the frame buffer in a post process. When processing it while rendering geometry, you can use gl_FragCoord.z instead of texture2D (sceneDepth, ndcPos.xy).r.
Here is the code:
uniform sampler2D sceneDepth;
uniform mat4 projectionInverse;
uniform vec2 clipPlanes; // zNear, zFar
uniform vec2 windowSize; // window width, height
#define ZNEAR clipPlanes.x
#define ZFAR clipPlanes.y
#define A (ZNEAR + ZFAR)
#define B (ZNEAR - ZFAR)
#define C (2.0 * ZNEAR * ZFAR)
#define D (ndcPos.z * B)
#define ZEYE -(C / (A + D))
void main()
{
    vec3 ndcPos;
    ndcPos.xy = gl_FragCoord.xy / windowSize;
    ndcPos.z = texture2D (sceneDepth, ndcPos.xy).r; // or gl_FragCoord.z
    ndcPos -= 0.5;
    ndcPos *= 2.0;

    vec4 clipPos;
    clipPos.w = -ZEYE;
    clipPos.xyz = ndcPos * clipPos.w;

    vec4 eyePos = projectionInverse * clipPos;
}
Basically this is a GLSL version of gluUnproject.
I just realized that it's unnecessary to do these computations in the fragment shader. You can save a couple operations by doing this on the CPU and multiplying it with the MVP inverse (assuming glDepthRange(0, 1), feel free to edit):
glm::vec4 vp(left, bottom, width, height); // viewport: x, y, width, height
glm::mat4 viewportMat = glm::translate(
        glm::vec3(-2.0 * vp.x / vp.z - 1.0, -2.0 * vp.y / vp.w - 1.0, -1.0))
    * glm::scale(glm::vec3(2.0 / vp.z, 2.0 / vp.w, 2.0));
glm::mat4 mvpInv = glm::inverse(mvp);
glm::mat4 vmvpInv = mvpInv * viewportMat;
shader->uniform("vmvpInv", vmvpInv);
In the shader:
vec4 eyePos = vmvpInv * vec4(gl_FragCoord.xyz, 1);
vec3 pos = eyePos.xyz / eyePos.w;
I think all the available answers touch the problem from one angle or another, and khronos.org has a wiki page with a few different cases listed and explained with shader code, so it's worth posting it here.
Compute eye space from window space.
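For completeness, here is a sketch along those lines (not the wiki's exact listing): it assumes a `viewport` uniform holding the values passed to glViewport and an inverse projection matrix uniform, and reconstructs the eye-space position while rendering geometry.
uniform vec4 viewport;       // x, y, width, height as passed to glViewport
uniform mat4 projMatrixInv;  // inverse of the projection matrix

vec3 eyePosFromWindow(vec3 fragCoord)   // pass in gl_FragCoord.xyz
{
    // window space -> NDC, honouring the viewport and glDepthRange
    vec3 ndc;
    ndc.xy = 2.0 * (fragCoord.xy - viewport.xy) / viewport.zw - 1.0;
    ndc.z  = (2.0 * fragCoord.z - gl_DepthRange.near - gl_DepthRange.far) / gl_DepthRange.diff;

    // NDC -> eye space: apply the inverse projection, then divide by w
    vec4 eye = projMatrixInv * vec4(ndc, 1.0);
    return eye.xyz / eye.w;
}
For a post-processing pass you would substitute the sampled depth for fragCoord.z, as in the answers above.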