How to rotate a 2D image with OpenGL and GLSL?

In my already existing and functioning vertex shader I have something like this:
layout (location = 0) in vec3 aPos;
uniform mat4 projection;
void main(){
gl_Position = projection * vec4(aPos, 1.0);
}
aPos is a vec3 position, and the uniform projection is used to obtain screen-space coordinates.
In examples I've seen, rotation is implemented by calculating a rotation matrix in application code (which I've already done) and multiplying the result by vec4(aPos, 1.0). However, unlike those examples, I already have a projection uniform inside my shader that is multiplied by vec4(aPos, 1.0).
My question is: What do I need to do to apply the rotation inside the vertex shader?
Do I create another uniform for the rotation matrix and multiply it together with projection and vec4(aPos, 1.0)?
How do I do this?
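One possible arrangement (a minimal sketch, assuming the rotation is computed in application code and uploaded as a hypothetical mat4 uniform named rotation) is to multiply the rotation in before the projection, so the vertex is rotated first and projected afterwards:
layout (location = 0) in vec3 aPos;
uniform mat4 projection;
uniform mat4 rotation; // hypothetical uniform: rotation matrix computed in application code
void main(){
    // rotate the model-space position first, then project to screen space
    gl_Position = projection * rotation * vec4(aPos, 1.0);
}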

Related

How to properly transform normals for diffuse lighting in OpenGL?

In an attempt to get diffuse lighting correct, I read several articles and tried to apply them as closely as possible.
However, even though the transform of the normal vectors seems close to right, the lighting still slides slightly over the object (which should not happen for a fixed light).
Note 1: I added bands based on the dot product to make the problem more apparent.
Note 2: This is not the Eye of Sauron.
In the image two problems are apparent:
The normals are affected by the projection matrix: when the viewport is wider than it is tall, the shading shows a horizontal ellipse (as in the image); when the viewport is taller than it is wide (height > width), the ellipse is vertical.
The shading moves over the surface when the camera is rotated around the object. This is not very visible with plain lighting, but it becomes apparent when projecting patterns from the light source.
Code and attempts:
Unfortunately, a minimal working example quickly gets very large, so I will only post the relevant code. If this is not enough, ask me and I will try to publish the code somewhere.
In the drawing function, I have the following matrix creation:
glm::mat4 projection = glm::perspective(45.0f, (float)m_width/(float)m_height, 0.1f, 200.0f);
glm::mat4 view = glm::translate(glm::mat4(1), glm::vec3(0.0f, 0.0f, -2.5f))*rotationMatrix; // make the camera 2.5f away, and rotationMatrix is driven by the mouse.
glm::mat4 model = glm::mat4(1); //The sphere at the center.
glm::mat4 mvp = projection * view * model;
glm::mat4 normalVp = projection * glm::transpose(glm::inverse(view * model));
In the vertex shader, the mvp is used to transform position and normals:
#version 420 core
uniform mat4 mvp;
uniform mat4 normalMvp;
in vec3 in_Position;
in vec3 in_Normal;
in vec2 in_Texture;
out Vertex
{
vec4 pos;
vec4 normal;
vec2 texture;
} v;
void main(void)
{
v.pos = mvp * vec4(in_Position, 1.0);
gl_Position = v.pos;
v.normal = normalMvp * vec4(in_Normal, 0.0);
v.texture = in_Texture;
}
And in the fragment shader, the diffuse shading is applied:
#version 420 core
in Vertex
{
vec4 pos;
vec4 normal;
vec2 texture;
} v;
uniform sampler2D uSampler1;
out vec4 out_Color;
uniform mat4 mvp;
uniform mat4 normalMvp;
uniform vec3 lightsPos;
uniform float lightsIntensity;
void main()
{
vec3 color = texture(uSampler1, v.texture).rgb;
vec3 lightPos = (mvp * vec4(lightsPos, 1.0)).xyz;
vec3 lightDirection = normalize( lightPos - v.pos.xyz );
float dot = clamp(dot(lightDirection, normalize(v.normal.xyz)), 0.0, 1.0);
vec3 ambient = 0.3 * color;
vec3 diffuse = dot * lightsIntensity * color;
// Here I have my debug code to add the projected bands on the image.
// kind of if(dot>=0.5 && dot<0.75) diffuse +=0.2;...
vec3 totalLight = ambient + diffuse;
out_Color = vec4(totalLight, 1.0);
}
Question:
How to properly transform the normals to get diffuse shading?
Related articles:
How to calculate the normal matrix?
GLSL normals with non-standard projection matrix
OpenGL Diffuse Lighting Shader Bug?
http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/
http://www.lighthouse3d.com/tutorials/glsl-12-tutorial/the-normal-matrix/
Mostly, all sources agree that it should be enough to multiply the projection matrix by the transpose of the inverse of the model-view matrix. That is what I think I am doing, but the result is not right apparently.
Lighting calculations should not be performed in clip space (including the projection matrix). Leave the projection away from all variables, including light positions etc., and you should be good.
Why is that? Well, lighting is a physical phenomenon that essentially depends on angles and distances. Therefore, to calculate it, you should choose a space that preserves these things. World space and camera space are two examples of angle- and distance-preserving spaces (relative to physical space). You may of course define them differently, but in most cases they are defined this way. Clip space preserves neither of the two, so the angles and distances you calculate in that space are not the physical ones you need to determine physical lighting.
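As a concrete illustration (a minimal sketch, not the asker's exact code: it assumes separate modelView and normalMatrix uniforms, with normalMatrix computed on the CPU), the vertex shader keeps the projection out of everything that feeds the lighting:
#version 420 core
uniform mat4 modelView;    // view * model, without the projection
uniform mat4 projection;
uniform mat3 normalMatrix; // mat3 of transpose(inverse(view * model)), computed on the CPU
in vec3 in_Position;
in vec3 in_Normal;
out vec4 viewPos;
out vec3 viewNormal;
void main(void)
{
    viewPos = modelView * vec4(in_Position, 1.0); // camera-space position: use this for lighting
    viewNormal = normalMatrix * in_Normal;        // camera-space normal
    gl_Position = projection * viewPos;           // projection applied only for rasterization
}
The light position would then be transformed with the view matrix alone (not the full MVP) before computing lightDirection in the fragment shader, so that all lighting quantities live in the same camera space.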

Weird behaviour when multiplying transformation matrix with normal vectors

I'm trying to apply per-pixel lighting in my 3D engine, but I'm having some trouble understanding what could be wrong with my geometry. I'm a beginner in OpenGL, so please bear with me if my question sounds stupid; I'll explain as best as I can.
My vertex shader:
#version 400 core
layout(location = 0) in vec3 position;
in vec2 textureCoordinates;
in vec3 normal;
out vec2 passTextureCoordinates;
out vec3 normalVectorFromVertex;
out vec3 vectorFromVertexToLightSource;
out vec3 vectorFromVertexToCamera;
uniform mat4 transformation;
uniform mat4 projection;
uniform mat4 view;
uniform vec3 lightPosition;
void main(void) {
vec4 mainPosition = transformation * vec4(position, 1.0);
gl_Position = projection * view * mainPosition;
passTextureCoordinates = textureCoordinates;
normalVectorFromVertex = (transformation * vec4(normal, 1.0)).xyz;
vectorFromVertexToLightSource = lightPosition - mainPosition.xyz;
}
My fragment-shader:
#version 400 core
in vec2 passTextureCoordinates;
in vec3 normalVectorFromVertex;
in vec3 vectorFromVertexToLightSource;
layout(location = 0) out vec4 out_Color;
uniform sampler2D textureSampler;
uniform vec3 lightColor;
void main(void) {
vec3 versor1 = normalize(normalVectorFromVertex);
vec3 versor2 = normalize(vectorFromVertexToLightSource);
float dotProduct = dot(versor1, versor2);
float lighting = max(dotProduct, 0.0);
vec3 finalLight = lighting * lightColor;
out_Color = vec4(finalLight, 1.0) * texture(textureSampler, passTextureCoordinates);
}
The problem: Whenever I multiply my transformation matrix by the normal vector with a homogeneous coordinate of 0.0, like so: transformation * vec4(normal, 0.0), the resulting vector gets messed up in such a way that, by the time the pipeline reaches the fragment shader, the dot product between the normal and the vector from the vertex to the light source is probably outputting <= 0, indicating that the light source is at an angle >= π/2, and therefore all my pixels output rgb(0,0,0,1). But for the weirdest reason, which I cannot understand geometrically, if I calculate transformation * vec4(normal, 1.0) the lighting appears to work more or less fine, except for extremely weird behaviours, like 'reacting' to distance. With this very simple lighting technique the vertex brightness should be completely agnostic to distance, since accounting for distance would require using the vectors' lengths, but I'm normalizing them before applying the dot product, so there is no way this is expected.
One thing that is clearly wrong to me is that my transformation matrix has its translation components applied when multiplying the normal vectors, which will "move and point" the normals in the direction of the translation; that is wrong. Still, I'm not sure whether I should be getting these results. Any insights are appreciated.
Whenever I multiply my transformation matrix by the normal vector with a homogeneous coordinate of 0.0, like so: transformation * vec4(normal, 0.0), the resulting vector gets messed up
What if you have non-uniform scaling in that transformation matrix?
Imagine a flat square surface, all normals are pointing up. Now you scale that surface to stretch in the horizontal direction: what would happen to normals?
If you don't adjust your transformation matrix to not have the scaling part in it, the normals will get skewed. After all, you only care about the object's orientation when considering the normals and the scale of the object is irrelevant to where the surface is pointing to.
Or think about a circle being stretched into an ellipse: the same thing happens to its normals.
You need to apply the inverse transpose of the model-view matrix to avoid skewing the normals when transforming them. Another SO question discusses it, as does this video from Jaime King teaching Graphics with OpenGL.
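Applied to the asker's vertex shader, the fix would look roughly like this (the inverse transpose is computed per vertex here only for brevity; in practice you would compute it once on the CPU and pass it as a uniform):
// the inverse transpose removes the effect of non-uniform scaling,
// and keeping only the upper-left 3x3 part ignores the translation
mat3 normalMatrix = mat3(transpose(inverse(transformation)));
normalVectorFromVertex = normalize(normalMatrix * normal);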
Additional resources on transforming normals:
LearnOpenGL: Basic Lighting
Lighthouse3d.com: The Normal Matrix

OpenGL Pointlight Shadowmapping with Cubemaps

I want to calculate the shadows of my pointlights with the following two passes:
First, I render the scene from the point light's point of view into a cubemap, in all six directions, using the scene objects' model matrices, the corresponding view matrix for each cubemap face, and a projection matrix with a 90 degree FOV. Then I store the world-space distance between the vertex and the light position (which is the camera's position for this pass, so it's just the length of the vertex position in world space).
Is it right to store worldspace here?
The cubemap is a GL_DEPTH_COMPONENT typed texture. For directional lights and spotlights shadowing works quite well, but those use single 2D textures.
This is the shader with which I try to store the distances:
VertexShader:
#version 330
layout(location = 0) in vec3 vertexPosition;
uniform mat4 modelMatrix;
uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;
out vec4 fragmentPosition_ws;
void main(){
gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(vertexPosition, 1.0);
fragmentPosition_ws = modelMatrix * vec4(vertexPosition, 1.0);
}
FragmentShader:
#version 330
// Output data
layout(location = 0) out float fragmentdist;
in vec4 fragmentPosition_ws;
void main(){
fragmentdist = length(fragmentPosition_ws.xyz);
}
In the second step, when rendering the lighting itself, I try to get those distance values like this:
float shadowFactor = 0.0;
vec3 fragmentToLightWS = lightPos_worldspace - fragmentPos_worldspace;
float distancerad = texture(shadowCubeMap, vec3(fragmentToLightWS)).x;
if(distancerad + 0.001 > length(fragmentToLightWS)){
shadowFactor = 1.0;
}
Notes:
shadowCubeMap is a sampler of type samplerCube
lightPos_worldspace is the lightposition in worldspace (lights are already in worldspace - no modelmatrix)
fragmentPos_worldspace is the fragmentposition in worldspace ( * modelmatrix)
The result is that everything is lit, i.e. nothing is in shadow. I am sure that rendering into the shadow map works. I tried several implementations of the shadow calculation, and sometimes I saw something like shadows that could have been objects, but that was with NDC shadow depths and not the distance method. So please check that part for mistakes as well.
So, finally I made it. I got shadows :)
The solution:
I used, as suggested, the old shadow-map technique with depth values. I still sample from the cubemap using the difference between the light and the vertex (both in world space), but I compare the value using the vertexToDepth() method from the other question mentioned.
Thanks for your help and the clarifying points.
The point is: always be sure to compare the same values! When the depth map stores world-space depth, compare against a world-space depth as well.
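To illustrate the "compare the same values" point, here is a minimal sketch (not the asker's final code) that assumes a single-channel float cubemap plus hypothetical lightPos_worldspace and farPlane uniforms:
Depth pass, fragment shader:
#version 330
layout(location = 0) out float fragmentdist;
in vec4 fragmentPosition_ws;
uniform vec3 lightPos_worldspace; // assumed uniform: light position in world space
uniform float farPlane;           // assumed uniform: far plane of the light's projection
void main(){
    // store the world-space light-to-fragment distance, normalized to [0, 1]
    fragmentdist = length(fragmentPosition_ws.xyz - lightPos_worldspace) / farPlane;
}
Lighting pass, fragment shader snippet:
// sample with the vector from the light to the fragment and compare the same quantity
vec3 lightToFragWS = fragmentPos_worldspace - lightPos_worldspace;
float storedDist = texture(shadowCubeMap, lightToFragWS).r * farPlane;
float shadowFactor = (length(lightToFragWS) - 0.05 < storedDist) ? 1.0 : 0.0;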

is my lighting correct?

I have been reading a pdf file on OpenGL lighting.
It says for the Gouraud Shading:
• Gouraud shading
– Set vertex normals
– Calculate colors at vertices
– Interpolate colors across polygon
• Must calculate vertex normals!
• Must normalize vertex normals to unit length!
So that's what I did.
Here is my Vertex and Fragment Shader file
V_Shader:
#version 330
layout(location = 0) in vec3 in_Position; //declare position
layout(location = 1) in vec3 in_Color;
// mvpmatrix is the result of multiplying the model, view, and projection matrices
uniform mat4 MVP_matrix;
vec3 ambient;
out vec3 ex_Color;
void main(void) {
// Multiply the MVP matrix by the vertex to obtain our final vertex position (mvp was created in *.cpp)
gl_Position = MVP_matrix * vec4(in_Position, 1.0);
ambient = vec3(0.0f,0.0f,1.0f);
ex_Color = ambient * normalize(in_Position) ; //anti ex_Color=in_Color;
}
F_shader:
#version 330
in vec3 ex_Color;
out vec4 gl_FragColor;
void main(void) {
gl_FragColor = vec4(ex_Color,1.0);
}
The interpolation is taken care of by the fragment shader, right?
so here is my sphere (it is low polygon btw):
Is this the standard way of implementing Gouraud Shading?
(my sphere has a center of (0,0,0))
Thanks for your patience
ex_Color = ambient * normalize(in_Position) ; //anti ex_Color=in_Color;
Allow me to quote myself, "It certainly doesn't qualify as 'lighting'." That didn't stop being true between the first time you asked this question and now.
This is not lighting. This is just normalizing the model-space position and multiplying it by the ambient color. Even if we assume that the model-space position is centered at zero and represents a point on the sphere, multiplying a light by a normal is meaningless. It is not lighting.
If you want to learn how lighting works, read this. Or this.
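For comparison, a per-vertex diffuse term along the lines of the quoted Gouraud recipe would look roughly like the following sketch (assuming a real normal attribute and hypothetical normalMatrix, lightDirection and lightColor uniforms, with lightDirection expressed in the same space as the transformed normal):
#version 330
layout(location = 0) in vec3 in_Position;
layout(location = 1) in vec3 in_Normal;   // actual vertex normals, not positions
uniform mat4 MVP_matrix;
uniform mat3 normalMatrix;                // mat3 of transpose(inverse(model-view)), computed on the CPU
uniform vec3 lightDirection;              // normalized, pointing from the surface toward the light
uniform vec3 lightColor;
out vec3 ex_Color;
void main(void) {
    gl_Position = MVP_matrix * vec4(in_Position, 1.0);
    vec3 n = normalize(normalMatrix * in_Normal);
    float diffuse = max(dot(n, lightDirection), 0.0);
    ex_Color = vec3(0.1) + diffuse * lightColor; // small constant ambient plus the diffuse term
}
The fragment shader can stay as it is; the rasterizer interpolates ex_Color across each triangle, which is exactly the "interpolate colors across polygon" step of Gouraud shading.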

GLSL: How to get pixel x,y,z world position?

I want to adjust the colors depending on which xyz position they are in the world.
I tried this in my fragment shader:
varying vec4 verpos;
void main(){
vec4 c;
c.x = verpos.x;
c.y = verpos.y;
c.z = verpos.z;
c.w = 1.0;
gl_FragColor = c;
}
but it seems that the colors change depending on my camera angle/position. How do I make the coordinates independent of my camera position/angle?
Heres my vertex shader:
varying vec4 verpos;
void main(){
gl_Position = ftransform();
verpos = gl_ModelViewMatrix*gl_Vertex;
}
Edit 2: changed the title; I want world coords, not screen coords!
Edit 3: added my full code
In vertex shader you have gl_Vertex (or something else if you don't use fixed pipeline) which is the position of a vertex in model coordinates. Multiply the model matrix by gl_Vertex and you'll get the vertex position in world coordinates. Assign this to a varying variable, and then read its value in fragment shader and you'll get the position of the fragment in world coordinates.
Now the problem in this is that you don't necessarily have any model matrix if you use the default modelview matrix of OpenGL, which is a combination of both model and view matrices. I usually solve this problem by having two separate matrices instead of just one modelview matrix:
model matrix (maps model coordinates to world coordinates), and
view matrix (maps world coordinates to camera coordinates).
So just pass two different matrices to your vertex shader separately. You can do this by defining
uniform mat4 view_matrix;
uniform mat4 model_matrix;
In the beginning of your vertex shader. And then instead of ftransform(), say:
gl_Position = gl_ProjectionMatrix * view_matrix * model_matrix * gl_Vertex;
In the main program you must write values to both of these new matrices. First, to get the view matrix, do the camera transformations with glLoadIdentity(), glTranslate(), glRotate() or gluLookAt() or what ever you prefer as you would normally do, but then call glGetFloatv(GL_MODELVIEW_MATRIX, &array); in order to get the matrix data to an array. And secondly, in a similar way, to get the model matrix, also call glLoadIdentity(); and do the object transformations with glTranslate(), glRotate(), glScale() etc. and finally call glGetFloatv(GL_MODELVIEW_MATRIX, &array); to get the matrix data out of OpenGL, so you can send it to your vertex shader. Especially note that you need to call glLoadIdentity() before beginning to transform the object. Normally you would first transform the camera and then transform the object which would result in one matrix that does both the view and model functions. But because you're using separate matrices you need to reset the matrix after camera transformations with glLoadIdentity().
gl_FragCoord contains window-space (pixel) coordinates, not world coordinates.
Or you could just divide the z coordinate by the w coordinate, which essentially un-does the perspective divide and gives you a depth value, not the full world position, i.e.:
depth = gl_FragCoord.z / gl_FragCoord.w;
Of course, this will only work for non-clipped coordinates. But who cares about clipped ones anyway?
You need to pass the World/Model matrix as a uniform to the vertex shader, and then multiply it by the vertex position and send it as a varying to the fragment shader:
/*Vertex Shader*/
layout (location = 0) in vec3 Position;
uniform mat4 World;
uniform mat4 WVP;
//World FragPos
out vec4 FragPos;
void main()
{
FragPos = World * vec4(Position, 1.0);
gl_Position = WVP * vec4(Position, 1.0);
}
/*Fragment Shader*/
layout (location = 0) out vec4 Color;
...
in vec4 FragPos;
void main()
{
Color = FragPos;
}
The easiest way is to pass the world-position down from the vertex shader via a varying variable.
However, if you really must reconstruct it from gl_FragCoord, the only way to do this is to invert all the steps that led to the gl_FragCoord coordinates. Consult the OpenGL specs if you really have to do this, because a deep understanding of the transformations will be necessary.
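If you do go that route, the usual reconstruction looks roughly like this sketch (assuming hypothetical invViewProj and viewport uniforms, where invViewProj is the inverse of projection * view uploaded from the application):
uniform mat4 invViewProj; // inverse(projection * view)
uniform vec4 viewport;    // (x, y, width, height) of the current viewport
vec3 worldFromFragCoord() {
    // back to normalized device coordinates in [-1, 1]
    vec3 ndc;
    ndc.xy = (gl_FragCoord.xy - viewport.xy) / viewport.zw * 2.0 - 1.0;
    ndc.z  = gl_FragCoord.z * 2.0 - 1.0; // assumes the default depth range [0, 1]
    // undo the projection/view transform and the perspective divide
    vec4 world = invViewProj * vec4(ndc, 1.0);
    return world.xyz / world.w;
}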