OpenGL cubemap reflections in view space are wrong

I am following this tutorial and I managed to add a cube map to my scene. Then I tried to add reflections to my object, but unlike the tutorial I wrote my GLSL code in view space. However, the reflections seem a bit off: they always reflect the same side no matter which angle you are facing. In my case you always see a rock on the reflected object, even though the rock is only on one side of my cube map.
Here is a video showing the effect:
I tried with other shaped objects, like a cube, and the effect is the same. I also found this book, which shows an example of view-space reflections; it seems I am doing something similar to it, but it still does not produce the desired effect.
My vertex shader code:
#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 normal;
layout (location = 2) in vec2 aTexCoord;
uniform mat4 Model;
uniform mat4 View;
uniform mat4 Projection;
out vec2 TexCoord;
out vec3 aNormal;
out vec3 FragPos;
void main()
{
    aNormal = mat3(transpose(inverse(View * Model))) * normal;
    FragPos = vec3(View * Model * vec4(aPos, 1.0));
    gl_Position = Projection * vec4(FragPos, 1.0);
    TexCoord = aTexCoord;
}
My fragment shader code:
#version 330 core
out vec4 FragColor;
in vec3 FragPos;
in vec3 aNormal;
uniform samplerCube skybox;
void main(){
    vec3 I = normalize(FragPos);
    vec3 R = reflect(I, normalize(aNormal));
    FragColor = vec4(texture(skybox, R).rgb, 1.0);
}

Since you do the computations in the fragment shader in view space, the reflected vector (R) is a vector in view space, too. The cubemap (skybox), however, represents a map of the environment in world space.
You have to transform R from view space to world space. That can be done with the inverse view matrix. The inverse matrix can be computed with the GLSL built-in function inverse:
#version 330 core
out vec4 FragColor;
in vec3 FragPos;
in vec3 aNormal;
uniform samplerCube skybox;
uniform mat4 View;
void main() {
    vec3 I = normalize(FragPos);                  // view direction: the camera is at the origin in view space
    vec3 viewR = reflect(I, normalize(aNormal));  // reflection vector in view space
    vec3 worldR = inverse(mat3(View)) * viewR;    // rotate back to world space for the cubemap lookup
    FragColor = vec4(texture(skybox, worldR).rgb, 1.0);
}
Note, the view matrix transforms from world space to view space, thus the inverse view matrix transforms from view space to world space. See also Invertible matrix.
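Since a typical view matrix is a rigid-body transform (rotation plus translation, no scaling), its rotation part is orthonormal, so its inverse equals its transpose. As a minimal sketch, assuming your View matrix contains no scale component, you can avoid the per-fragment inverse() call:

    vec3 I = normalize(FragPos);
    vec3 viewR = reflect(I, normalize(aNormal));
    mat3 invViewRot = transpose(mat3(View)); // valid only if View contains no scaling
    vec3 worldR = invViewRot * viewR;
    FragColor = vec4(texture(skybox, worldR).rgb, 1.0);

Alternatively, the inverse view rotation can be computed once on the CPU and passed in as a uniform.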

This is a late answer, but I just wanted to give additional information on why it behaves like this.
Imagine your reflective object is simply a six-sided cube. Each face can be thought of as a mirror. Because you are in view space, every coordinate of a mirror plane that is visible from your viewpoint has a negative z value. Let us look at the point directly at the center. This vector looks like (0, 0, -z), and because the side of the cube acts like a mirror it gets reflected straight back to you as (0, 0, +z). So you end up sampling from GL_TEXTURE_CUBE_MAP_POSITIVE_Z of your cube map.
In shader code it looks like:
vec3 V = normalize(-frag_pos_view_space); // vector from fragment to view point (0,0,0) in view space
vec3 R = reflect(-V, N); // invert V because reflect expects incident vector
vec3 color = texture(skybox, R).xyz;
Now move to the other side of the cube and look at that mirror plane. In view space, the coordinate you are looking at is still (0, 0, -z) at the center; it is reflected around the normal back to you, so the reflected vector again looks like (0, 0, +z). This means that even from the other side of your cube you will sample the same face of your cube map.
So what you have to do is go back into world space using the inverse of your view matrix. If, in addition, you rendered the skybox itself with a rotation applied, you also have to transform your reflected vector by the inverse of the model matrix that you used to transform the skybox, otherwise the reflections will still be wrong.
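Putting that together, a minimal sketch of the fragment shader with both corrections applied could look like this (the skyboxModel uniform is hypothetical and only needed if you rendered the skybox with a rotation; treat it as the identity otherwise):

#version 330 core

out vec4 FragColor;

in vec3 FragPos;   // view-space position
in vec3 aNormal;   // view-space normal

uniform samplerCube skybox;
uniform mat4 View;         // camera view matrix
uniform mat4 skyboxModel;  // hypothetical: rotation applied when rendering the skybox

void main()
{
    vec3 V = normalize(-FragPos);               // fragment -> camera in view space
    vec3 R = reflect(-V, normalize(aNormal));   // reflected vector, still in view space
    vec3 worldR = inverse(mat3(View)) * R;              // back to world space
    vec3 sampleR = inverse(mat3(skyboxModel)) * worldR; // undo the skybox's own rotation
    FragColor = vec4(texture(skybox, sampleR).rgb, 1.0);
}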

Related

Weird behaviour when multiplying transformation matrix with normal vectors

I'm trying to apply per-pixel lighting in my 3D engine but I'm having some trouble understanding what could be wrong with my geometry. I'm a beginner in OpenGL, so please bear with me if my question sounds stupid; I'll explain as best as I can.
My vertex shader:
#version 400 core
layout(location = 0) in vec3 position;
in vec2 textureCoordinates;
in vec3 normal;
out vec2 passTextureCoordinates;
out vec3 normalVectorFromVertex;
out vec3 vectorFromVertexToLightSource;
out vec3 vectorFromVertexToCamera;
uniform mat4 transformation;
uniform mat4 projection;
uniform mat4 view;
uniform vec3 lightPosition;
void main(void) {
    vec4 mainPosition = transformation * vec4(position, 1.0);
    gl_Position = projection * view * mainPosition;
    passTextureCoordinates = textureCoordinates;
    normalVectorFromVertex = (transformation * vec4(normal, 1.0)).xyz;
    vectorFromVertexToLightSource = lightPosition - mainPosition.xyz;
}
My fragment shader:
#version 400 core
in vec2 passTextureCoordinates;
in vec3 normalVectorFromVertex;
in vec3 vectorFromVertexToLightSource;
layout(location = 0) out vec4 out_Color;
uniform sampler2D textureSampler;
uniform vec3 lightColor;
void main(void) {
    vec3 versor1 = normalize(normalVectorFromVertex);
    vec3 versor2 = normalize(vectorFromVertexToLightSource);
    float dotProduct = dot(versor1, versor2);
    float lighting = max(dotProduct, 0.0);
    vec3 finalLight = lighting * lightColor;
    out_Color = vec4(finalLight, 1.0) * texture(textureSampler, passTextureCoordinates);
}
The problem: whenever I multiply the normal vector by my transformation matrix with a homogeneous coordinate of 0.0, like so: transformation * vec4(normal, 0.0), the resulting vector gets messed up. By the time the pipeline reaches the fragment shader, the dot product between the normal and the vector from the vertex to the light source apparently comes out <= 0, indicating that the light source is at an angle >= π/2, and therefore all my pixels output rgb(0,0,0,1). But for a reason I cannot understand geometrically, if I calculate transformation * vec4(normal, 1.0) the lighting appears to work more or less fine, except for extremely weird behaviour like 'reacting' to distance. With this very simple lighting technique the vertex brightness should be completely agnostic to distance, since accounting for distance would require using the vectors' lengths, and I normalize both vectors before taking the dot product, so this is not expected at all.
One thing that is clearly wrong to me is that my transformation matrix has its translation component applied when multiplying the normal vectors, which will "move and point" the normals in the direction of the translation; that is wrong. Still, I'm not sure whether I should be getting these results. Any insights are appreciated.
Whenever I multiply my transformation matrix for the normal vector with a homogeneous coordinate of 0.0 like so: transformation * vec4(normal, 0.0), my resulting vector is getting messed up
What if you have non-uniform scaling in that transformation matrix?
Imagine a flat square surface, all normals are pointing up. Now you scale that surface to stretch in the horizontal direction: what would happen to normals?
If you don't remove the scaling part from your transformation matrix before applying it to the normals, they will get skewed. After all, only the object's orientation matters for the normals; the object's scale is irrelevant to where a surface is pointing.
Or think about what happens to the normals of a circle under such a non-uniform scaling (illustration omitted).
You need to apply the inverse transpose of the model-view matrix to the normals so they are not scaled along with the geometry. Another SO question discusses it, as does this video from Jaime King teaching Graphics with OpenGL.
Additional resources on transforming normals:
LearnOpenGL: Basic Lighting
Lighthouse3d.com: The Normal Matrix
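To make the fix concrete, here is a minimal sketch of the usual normal-matrix approach, keeping only the relevant parts of the question's vertex shader and its uniform names:

#version 400 core

layout(location = 0) in vec3 position;
in vec3 normal;

uniform mat4 transformation; // model matrix, possibly with non-uniform scaling
uniform mat4 projection;
uniform mat4 view;

out vec3 normalVectorFromVertex;

void main(void) {
    vec4 worldPosition = transformation * vec4(position, 1.0);
    gl_Position = projection * view * worldPosition;

    // w = 0.0 would only drop the translation; the inverse transpose additionally
    // undoes any non-uniform scaling, keeping the normal perpendicular to the surface.
    mat3 normalMatrix = transpose(inverse(mat3(transformation)));
    normalVectorFromVertex = normalize(normalMatrix * normal);
}

For better performance the normal matrix can also be computed once on the CPU and passed in as a uniform.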

Inverted geometry gBuffer positions for perspective. Orthographic is ok?

I have a deferred renderer which appears to work correctly, depth, colour and shading comes out correctly. However the position buffer is fine for orthographic, while the geometry appears 'inverted' (or depth disabled) when using a perspective projection.
I am getting the following buffer outputs for orthographic.
With the final 'shaded' image currently looking correct.
However when I am using a perspective projection I get the following buffers coming out...
And the final image is fine, although I don't incorporate any position buffer information at the moment (N.B. only doing 'headlight' shading at the moment).
While the final image appears correct, the depth buffer appears to be ignored for my position buffer... (there is no glDisable(GL_DEPTH_TEST) in the code).
The depth and normal buffers look OK to me; it's only the 'position' buffer which appears to be ignoring the depth. The render pipeline is exactly the same for ortho and perspective, with the only difference being the projection matrix.
I use glm::ortho, and glm::perspective and I calculate my near/far clipping distances on the fly based on the scene AABB. For orthographic my near/far is 1 & 11.4734 respectively, and for perspective it is 11.0875 & 22.5609... The width and height values are the same, fov is 45 for perspective projection.
I do have these calls before drawing any geometry...
glEnable(GL_DEPTH_TEST);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
Which I use for compositing different layers as part of the render pipeline.
Am I doing anything wrong here? or am I misunderstanding something?
Here are my shaders...
Vertex shader of gBuffer...
#version 430 core
layout (std140) uniform MatrixPV
{
mat4 P;
mat4 V;
};
layout(location = 0) in vec3 InPoint;
layout(location = 1) in vec3 InNormal;
layout(location = 2) in vec2 InUV;
uniform mat4 M;
out vec4 Position;
out vec3 Normal;
out vec2 UV;
void main()
{
    mat4 VM = V * M;
    gl_Position = P * VM * vec4(InPoint, 1.0);
    Position = P * VM * vec4(InPoint, 1.0);
    Normal = mat3(M) * InNormal;
    UV = InUV;
}
Fragment shader of gBuffer...
#version 430 core
layout(location = 0) out vec4 gBufferPicker;
layout(location = 1) out vec4 gBufferPosition;
layout(location = 2) out vec4 gBufferNormal;
layout(location = 3) out vec4 gBufferDiffuse;
in vec3 Normal;
in vec4 Position;
vec4 Diffuse();
uniform vec4 PickerColour;
void main()
{
    gBufferPosition = Position;
    gBufferNormal = vec4(Normal.xyz, 1.0);
    gBufferPicker = PickerColour;
    gBufferDiffuse = Diffuse();
}
And here is the 'second pass' shader to visualise the position buffer...
#version 430 core
uniform sampler2D debugBufferPosition;
in vec2 UV;
out vec4 frag;
void main()
{
    vec3 val = texture(debugBufferPosition, UV).xyz;
    frag = vec4(val.xyz, 1.0);
}
I haven't used the position buffer data yet, and I know I can reconstruct the positions without storing them in another buffer; however, the positions are useful to me for other reasons, and I would like to know why they come out the way they do with a perspective projection.
What you actually write into the position buffer is the clip-space coordinate:
Position = P * VM * vec4(InPoint, 1.0);
The clip-space coordinate is a homogeneous coordinate; it is transformed to the normalized device coordinate (which is a Cartesian coordinate) by a perspective divide:
ndc = gl_Position.xyz / gl_Position.w;
With an orthographic projection the w component is 1, but with a perspective projection the w component depends on the z component (depth) of the (Cartesian) view-space coordinate.
I recommend storing the normalized device coordinate in the position buffer rather than the clip-space coordinate, e.g.:
gBufferPosition = vec4(Position.xyz / Position.w, 1.0);
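Put together, a minimal sketch of the gBuffer fragment shader with that change (keeping the question's output names) could look like this:

#version 430 core

layout(location = 0) out vec4 gBufferPicker;
layout(location = 1) out vec4 gBufferPosition;
layout(location = 2) out vec4 gBufferNormal;
layout(location = 3) out vec4 gBufferDiffuse;

in vec3 Normal;
in vec4 Position;   // clip-space position from the vertex shader

vec4 Diffuse();

uniform vec4 PickerColour;

void main()
{
    // Perspective divide: store the normalized device coordinate rather than the
    // raw clip-space coordinate, so the stored value behaves the same way for
    // orthographic and perspective projections.
    gBufferPosition = vec4(Position.xyz / Position.w, 1.0);
    gBufferNormal = vec4(Normal.xyz, 1.0);
    gBufferPicker = PickerColour;
    gBufferDiffuse = Diffuse();
}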

GLSL cubemap reflection shader

I'm developing an OpenGL application and am having a problem implementing a cubemap reflection shader: the reflection rotates with the camera around the object, i.e. it looks the same from any point of view.
Here is my vertex shader:
in vec4 in_Position;
in vec4 in_Normal;
out vec3 ws_coords;
out vec3 normal;
uniform mat4 uniform_ModelViewProjectionMatrix;
uniform mat4 uniform_ModelViewMatrix;
uniform mat4 uniform_ModelMatrix;
uniform mat3 uniform_NormalMatrix;
uniform vec3 uniform_CameraPosition;
...
ws_coords = (uniform_ModelViewMatrix * in_Position).xyz;
normal = normalize(uniform_NormalMatrix * in_Normal.xyz);
And fragment:
uniform samplerCube uniform_ReflectionTexture;
...
vec3 normal = normalize(normal);
vec3 reflectedDirection = reflect(normalize(ws_coords), normal);
frag_Color = texture(uniform_ReflectionTexture, reflectedDirection).xyz;
All shaders I found on the internet have the same issue or produce weird results for me.
I guess I need to rotate the reflected direction along with the camera rotation, but I have no idea how to do that. As shader inputs I have the world-space camera position and the MVP, MV, M and Normal matrices.
Can you please help me implement a shader that takes the camera direction into account?
This part seems a bit odd to me:
vec3 reflectedDirection = reflect(normalize(ws_coords), normal);
The first argument to reflect has to be the incident vector, i.e. a vector that goes from the camera position to the pixel position, in world space.
I suggest you take your in_Position to world space (I don't know which space it is currently in), build a normalized vector from the camera's world position to that point, reflect it around a world-space normal vector, and sample your cubemap with the result.
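A minimal sketch of that suggestion, reusing the question's uniform names and assuming uniform_CameraPosition is also bound in the fragment shader:

// Vertex shader (excerpt): output world-space position and a world-space normal.
ws_coords = (uniform_ModelMatrix * in_Position).xyz;
normal = normalize(uniform_NormalMatrix * in_Normal.xyz); // NormalMatrix derived from the model matrix

// Fragment shader (excerpt): reflect the camera-to-surface direction in world space.
vec3 N = normalize(normal);
vec3 I = normalize(ws_coords - uniform_CameraPosition);   // camera -> surface, world space
frag_Color = texture(uniform_ReflectionTexture, reflect(I, N)).xyz;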
Okay, I found an answer.
My problem was that I did the calculations in view space; that is why the reflection was static. My NormalMatrix was in view space as well.
So the fix is:
ws_coords = (uniform_ModelMatrix * in_Position).xyz;
normal = normalize(uniform_NormalMatrix * in_Normal.xyz);
and changing the Normal matrix from view space to model space.

Why does this Phong shader work?

I recently wrote a Phong shader in GLSL as part of a school assignment. I started with tutorials, then played around with the code until I got it working. It works perfectly fine as far as I can tell, but there's one line in particular I wrote where I don't understand why it does work.
The vertex shader:
#version 330
layout (location = 0) in vec3 Position; // Vertex position
layout (location = 1) in vec3 Normal; // Vertex normal
out vec3 Norm;
out vec3 Pos;
out vec3 LightDir;
uniform mat3 NormalMatrix; // ModelView matrix without the translation component, and inverted
uniform mat4 MVP; // ModelViewProjection Matrix
uniform mat4 ModelView; // ModelView matrix
uniform vec3 light_pos; // Position of the light
void main()
{
    Norm = normalize(NormalMatrix * Normal);
    Pos = Position;
    LightDir = NormalMatrix * (light_pos - Position);
    gl_Position = MVP * vec4(Position, 1.0);
}
The fragment shader:
#version 330
in vec3 Norm;
in vec3 Pos;
in vec3 LightDir;
layout (location = 0) out vec4 FragColor;
uniform mat3 NormalMatrix;
uniform mat4 ModelView;
void main()
{
    vec3 normalDirCameraCoords = normalize(Norm);
    vec3 vertexPosLocalCoords = normalize(Pos);
    vec3 lightDirCameraCoords = normalize(LightDir);
    float dist = max(length(LightDir), 1.0);
    float intensity = max(dot(normalDirCameraCoords, lightDirCameraCoords), 0.0) / pow(dist, 1.001);
    vec3 h = normalize(lightDirCameraCoords - vertexPosLocalCoords);
    float intSpec = max(dot(h, normalDirCameraCoords), 0.0);
    vec4 spec = vec4(0.9, 0.9, 0.9, 1.0) * (pow(intSpec, 100) / pow(dist, 1.2));
    FragColor = max((intensity * vec4(0.7, 0.7, 0.7, 1.0)) + spec, vec4(0.07, 0.07, 0.07, 1.0));
}
So I'm doing the method where you calculate the half vector between the light vector and the camera vector, then dot it with the normal. That's all good. However, I do two things that are strange.
Normally, everything is done in eye coordinates. However, Position, which I pass from the vertex shader to the fragment shader, is in local coordinates.
This is the part that baffles me. On the line vec3 h = normalize(lightDirCameraCoords - vertexPosLocalCoords); I'm subtracting the vertex position in local coordinates from the light vector in camera coordinates. This seems utterly wrong.
In short, I understand what this code is supposed to be doing, and how the half vector method of phong shading works.
But why does this code work?
EDIT: The starter code we were provided is open source, so you can download the completed project and look at it directly if you'd like. The project is for VS 2012 on Windows (you'll need to set up GLEW, GLM, and freeGLUT), and should work on GCC with no code changes (maybe a change or two to the makefile library paths).
Note that in the source files, "light_pos" is called "gem_pos", as our light source is the little gem you move around with WSADXC. Press M to get Phong with multiple lights.
The reason this works is happenstance, but it's interesting to see why it still works.
Phong shading is three techniques in one
With Phong shading, we have three terms: specular, diffuse, and ambient; these three terms represent the three techniques used in Phong shading.
None of these terms strictly requires a particular vector space; you can make Phong shading work in world, local, or camera space as long as you are consistent. Eye space is usually used for lighting, as it is easier to work with and the conversions are simple.
But what if you are at the origin? Then you are multiplying by zero, and it's easy to see that there is no difference between any of the vector spaces at the origin. By coincidence, at the origin it doesn't matter what vector space you are in; it'll work.
vec3 h = normalize(lightDirCameraCoords - vertexPosLocalCoords);
Notice that it is basically subtracting 0; this is the only place the local-space position is used, and it's used where it can do the least damage. Since the object is at the origin, all its vertices should be at or very close to the origin as well. At the origin the approximation is exact; all vector spaces converge. Very close to the origin it is very close to exact; even if we used exact reals it would be a very small divergence, but we don't use exact reals, we use floats, which compounds the issue.
Basically, you got lucky; this wouldn't work if the object weren't at the origin. Try moving it and see!
Also, you aren't using Phong shading; you are using Blinn-Phong shading (that's the name for the replacement of reflect() with a half vector, just for reference).
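For reference, a minimal sketch of the same Blinn-Phong half-vector computation done consistently in eye space (this assumes the vertex shader passes Pos = vec3(ModelView * vec4(Position, 1.0)) instead of the local-space position, and that LightDir is an eye-space vector from the vertex toward the light):

// Fragment shader (excerpt): every vector below is expressed in eye coordinates.
vec3 N = normalize(Norm);
vec3 L = normalize(LightDir);
vec3 V = normalize(-Pos);      // the camera sits at the origin in eye space
vec3 H = normalize(L + V);     // Blinn-Phong half vector

float diffuseTerm = max(dot(N, L), 0.0);
float specularTerm = pow(max(dot(N, H), 0.0), 100.0);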

OpenGL Pointlight Shadowmapping with Cubemaps

I want to calculate the shadows of my point lights with the following two passes:
First, I render the scene from the point light's view into a cubemap, in all six directions, using the scene objects' model matrices, the corresponding view matrix for each cubemap face, and a projection matrix with a 90 degree FOV. Then I store the world-space distance between the vertex and the light position (which is the camera's position, so it is just the length of the vertex position rendered in world space).
Is it right to store world space here?
The cubemap is a GL_DEPTH_COMPONENT-typed texture. For directional lights and spotlights, shadowing works quite well, but those use single 2D textures.
This is the shader with which I try to store the distances:
VertexShader:
#version 330
layout(location = 0) in vec3 vertexPosition;
uniform mat4 modelMatrix;
uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;
out vec4 fragmentPosition_ws;
void main(){
    gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(vertexPosition, 1.0);
    fragmentPosition_ws = modelMatrix * vec4(vertexPosition, 1.0);
}
FragmentShader:
#version 330
// Output data
layout(location = 0) out float fragmentdist;
in vec4 fragmentPosition_ws;
void main(){
    fragmentdist = length(fragmentPosition_ws.xyz);
}
In the second step, when rendering the lighting itself, I try to get those distance values like this:
float shadowFactor = 0.0;
vec3 fragmentToLightWS = lightPos_worldspace - fragmentPos_worldspace;
float distancerad = texture(shadowCubeMap, vec3(fragmentToLightWS)).x;
if(distancerad + 0.001 > length(fragmentToLightWS)){
    shadowFactor = 1.0;
}
Notes:
shadowCubeMap is a sampler of type samplerCube
lightPos_worldspace is the light position in world space (the lights are already in world space - no model matrix)
fragmentPos_worldspace is the fragment position in world space (multiplied by the model matrix)
The result is that everything is lit, i.e. nothing is in shadow. I am sure that rendering into the shadow map works. I tried several implementations of the shadow calculation, and sometimes I saw something like shadows that could be the objects, but that was with NDC shadow depths and not the distance method. So please also check that part for mistakes.
So, finally I made it. I got shadows :)
The solution:
As suggested, I used the classic shadow-map technique with depth values. I still sample the cubemap using the difference between the light and the vertex (both in world space), but I compare the sampled value against the result of the vertexToDepth() method from the other question mentioned.
Thanks for your help and the clarifying points.
The point is: always be sure to compare the same kind of values! If the depth map stores a world-space depth, then also compare it against such a value.
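For illustration, here is a sketch of such a conversion helper that turns the world-space light-to-fragment vector into the same non-linear depth value a depth cubemap stores (the name vectorToDepthValue is illustrative, and near/far must match the planes used in the light's 90 degree projection):

float vectorToDepthValue(vec3 lightToFragment, float near, float far)
{
    // The cube face is selected by the major axis, so that axis is the
    // view-space depth of the fragment as seen from the light.
    vec3 absVec = abs(lightToFragment);
    float localZ = max(absVec.x, max(absVec.y, absVec.z));

    // Apply the same mapping the light's perspective projection applies,
    // then remap from NDC [-1, 1] to the default [0, 1] depth range.
    float normZ = (far + near) / (far - near) - (2.0 * far * near) / ((far - near) * localZ);
    return normZ * 0.5 + 0.5;
}

// Usage in the lighting pass (compare like with like):
// vec3 lightToFrag = fragmentPos_worldspace - lightPos_worldspace;
// float storedDepth = texture(shadowCubeMap, lightToFrag).r;
// float shadowFactor = (storedDepth + 0.001 > vectorToDepthValue(lightToFrag, near, far)) ? 1.0 : 0.0;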