I'm developing an OpenGL 2.1 application using shaders and I'm having a problem with my per-fragment lighting. The lighting is correct when my scene initially loads, but as I navigate around the scene, the lighting moves around with the "camera", rather than staying in a static location. For example, if I place my light way off to the right, the right side of objects will be illuminated. If I then move the camera to the other side of the object and point it in the opposite direction, the lighting is still on the right side of the object (rather than being on the left, like it should be now). I assume that I am calculating the lighting in the wrong coordinate system, and this is causing the wrong behavior. I'm calculating lighting the following way...
In vertex shader...
ECPosition = gl_ModelViewMatrix * gl_Vertex;
WCNormal = gl_NormalMatrix * vertexNormal;
where vertexNormal is the normal in object/model space.
In the fragment shader...
float lightIntensity = 0.2; //ambient
lightIntensity += max(dot(normalize(LightPosition - vec3(ECPosition)), WCNormal), 0.0) * 1.5; //diffuse
where LightPosition is, for example, (100.0, 10.0, 0.0), which would put the light on the right side of the world as described above. The part that I'm unsure of is the gl_NormalMatrix part. I'm not exactly sure what this matrix is and what coordinate space it puts my normal into (I assume world space). If the normal is put into world space, then I figured the problem was that ECPosition is in eye space while LightPosition and WCNormal are in world space. Something about this doesn't seem right but I can't figure it out. I also tried putting ECPosition into world space by multiplying it by my own modelMatrix that only contains the transformations I do to get the coordinate into world space, but this didn't work. Let me know if I need to provide other information about my shaders or code.
gl_NormalMatrix transforms your normal into eye-space (see this tutorial for more details).
I think in order for your light to be in a static world position you ought to be transforming your LightPosition into eye-space as well by multiplying it by your current view matrix.
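For illustration, here is a minimal sketch of how the pieces could fit together, assuming LightPosition is uploaded already transformed into eye space on the CPU (the world-space light position multiplied by the same view matrix you build for the camera), and using the built-in gl_Normal in place of your vertexNormal attribute:

// vertex shader (GLSL 1.20)
varying vec4 ECPosition; // vertex position in eye space
varying vec3 ECNormal;   // normal in eye space (this is what gl_NormalMatrix produces)
void main()
{
    ECPosition = gl_ModelViewMatrix * gl_Vertex;
    ECNormal = gl_NormalMatrix * gl_Normal;
    gl_Position = ftransform();
}

// fragment shader
varying vec4 ECPosition;
varying vec3 ECNormal;
uniform vec3 LightPosition; // already in eye space: viewMatrix * vec4(worldLightPosition, 1.0)
void main()
{
    float lightIntensity = 0.2; // ambient
    vec3 L = normalize(LightPosition - ECPosition.xyz);
    lightIntensity += max(dot(L, normalize(ECNormal)), 0.0) * 1.5; // diffuse
    gl_FragColor = vec4(vec3(lightIntensity), 1.0);
}

The key point is that every term in the dot product (light position, surface position, normal) lives in the same space, eye space in this case.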
While debugging my graphics engine I try to color everything at a certain distance in the direction of the camera. This works for static meshes, but animated meshes that use animationMatrix have small sections of fragments that either don't get colored, or change color when they shouldn't. As I see it, gl_Position.z either doesn't gauge the depth correctly when I apply animations, or it can't be used as I intend it to.
Vertex Shader
vec4 worldPosition = model * animationMatrix * vec4(coord3d, 1.0);
gl_Position = view_projection * worldPosition;
ClipSpacePosZ = gl_Position.z; // with "out float ClipSpacePosZ;" declared outside main()
Fragment Shader
in float ClipSpacePosZ;
if (ClipSpacePosZ > a) {
    color.r = 1.0;
}
Thought 1
I had a similar problem with the fragment world position before, where I'd try to color by fragment world position and there would be similar artifacts. The solution was to divide the fragment position by the w component
fragPos = worldPosition.xyz / worldPosition.w; // with "out vec3 fragPos;" declared outside main()
I've tried similar ideas with ClipSpacePosZ. Maybe the solution is that I'm missing some division by w, but I can't seem to find anything that works.
Thought 2
Could the depth of my fragments be incorrect, so that they only appear to be displayed at the correct position?
Thought 3
gl_Position might do more things that I'm not aware of.
Thought 4
This only happens for animated meshes. There could be something wrong in my animation matrix. I don't understand why the position from gl_Position would seemingly be correct, but coloring by the depth wouldn't be.
I'm currently using OpenGL(Glad, GLFW, GLM). I just started learning and can't find the correct way to take a translation and actually render it. For example, I want to rotate 1 degree every frame. I've seen tutorials on how to use GLM to make those translations but I can't seem to figure out how to take something like this: glm::mat4 translate = glm::translate(glm::mat4(1.f), glm::vec3(2.f, 0.f, 0.f)); and apply it to an object rendered like this:
glBindVertexArray(VAO);
glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_INT, 0);
I'm really not sure what I'm not understanding and any help would be appreciated.
Edit: I know you can just change the vertices and rebind the array, but that seems like it would be pretty slow. Would that work?
I am assuming that you are somewhat familiar with the concept of the different coordinate systems and transformations like the projection, view and model matrix. If not, you should read up on them here https://learnopengl.com/Getting-started/Coordinate-Systems.
In short, the model matrix transforms the object as a whole in the world. In most cases it contains translation, rotation and scaling. It is important to note that the order of matrix multiplication is generally defined as
glm::mat4 model = translate * rotate * scale;
The view matrix is needed to get the scene from the camera's point of view, and the projection matrix adds perspective and is used to determine what is on the screen and will be rendered.
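As a rough sketch of how these three matrices could be built with GLM (the specific values here are just placeholders):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// model: place the object in the world (translate * rotate * scale)
glm::mat4 model = glm::translate(glm::mat4(1.f), glm::vec3(2.f, 0.f, 0.f))
                * glm::rotate(glm::mat4(1.f), glm::radians(45.f), glm::vec3(0.f, 1.f, 0.f))
                * glm::scale(glm::mat4(1.f), glm::vec3(1.f));

// view: a camera at (0, 0, 3) looking at the origin, with +Y up
glm::mat4 view = glm::lookAt(glm::vec3(0.f, 0.f, 3.f),
                             glm::vec3(0.f),
                             glm::vec3(0.f, 1.f, 0.f));

// projection: 45 degree vertical field of view, 4:3 aspect ratio, near/far planes
glm::mat4 projection = glm::perspective(glm::radians(45.f), 4.f / 3.f, 0.1f, 100.f);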
To apply the transformation to the object that you want to draw with your shader, you will need to load the matrix into the shader first.
glm::mat4 model = glm::translate(glm::mat4(1.f), glm::vec3(2.f, 0.f, 0.f));
unsigned int modelMatrixLoc = glGetUniformLocation(ourShader.ID, "model");
glUniformMatrix4fv(modelMatrixLoc , 1, GL_FALSE, glm::value_ptr(model));
Here, the model matrix would be loaded into the shader under the name "model". You can then use this matrix in your shader to transform the vertices on the GPU.
A simple shader would then look like this
#version 460 core
layout (location = 0) in vec3 position_in;
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
void main()
{
    gl_Position = projection * view * model * vec4(position_in, 1.0);
}
To ensure that your normal vectors do not get skewed, you can calculate another matrix
glm::mat3 model_normal = glm::transpose(glm::inverse(model));
If you were to add this to the shader, it would look something like this
#version 460 core
layout (location = 0) in vec3 position_in;
layout (location = 1) in vec3 normal_in;
uniform mat4 model;
uniform mat3 model_normal;
uniform mat4 view;
uniform mat4 projection;
out vec3 normal;
void main()
{
    gl_Position = projection * view * model * vec4(position_in, 1.0);
    normal = model_normal * normal_in;
}
Notice how we can now use a mat3 instead of a mat4? This is because we do not want to translate the normal vector and the translation part of the matrix is located in the fourth column, which we cut away here. This also means that it is important to set the last component of the 4d vector to 1 if we want to translate and to 0 if we do not.
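To illustrate that last point (direction_in is just a hypothetical direction attribute):

vec4 p = model * vec4(position_in, 1.0);  // w = 1: a point, the translation applies
vec4 d = model * vec4(direction_in, 0.0); // w = 0: a direction, the translation is ignored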
Edit: I know you can just change the vertices and rebind the array, but that seems like it would be pretty slow. Would that work?
You can always edit your model if you want to change its looks. However, the transformations are going to be way slower on the CPU. Remember that the GPU is highly optimized for this kind of task. So I would advise doing most transformations on the GPU.
If you only need to change a part of the vertices of your object, updating the vertices will work better in most cases.
How would I constantly rotate as a function of time?
To rotate as a function of time, there are multiple approaches. To clarify, transformations that are applied to an object in the shader are not permanent and will be reset for the next draw call. If the rotation is performed with the same axis, it is a good idea to specify the angle somewhere and then calculate a single rotation (matrix) for that angle. To do this with GLFW, you can use
const double radians_per_second = ... ;
const glm::vec3 axis = ... ;
// rendering loop start
double time_seconds = glfwGetTime();
float angle = radians_per_second * time_seconds;
glm::mat4 rotation = glm::rotate(glm::mat4(1.f), angle, axis);
On the other hand, if the rotations are not performed on the same axis, you will have to multiply both rotation matrices together.
rotation = rotation * additional_rotation;
In both cases, you need to set the rotation for your model matrix like I explained above.
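Putting it together, a per-frame sketch could look like this (window, ourShader, VAO, translate and scale are the placeholders used above and are assumed to exist already):

// rendering loop
while (!glfwWindowShouldClose(window))
{
    float angle = (float)(radians_per_second * glfwGetTime());
    glm::mat4 rotation = glm::rotate(glm::mat4(1.f), angle, axis);
    glm::mat4 model = translate * rotation * scale; // note the multiplication order

    ourShader.use(); // assumed wrapper around glUseProgram
    glUniformMatrix4fv(glGetUniformLocation(ourShader.ID, "model"),
                       1, GL_FALSE, glm::value_ptr(model));

    glBindVertexArray(VAO);
    glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_INT, 0);

    glfwSwapBuffers(window);
    glfwPollEvents();
}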
Also, if I wanted to make a square follow the mouse, would I have to rebuild the vertices every time the mouse moves?
No, you do not need to do that. If you just want to move the square to the position of the mouse, you can use a translation. To get the mouse position in world space, it seems that you can use glm::unProject( ... );. I have not tried this yet, but it looks like it could solve your problem. You can take a look at it here
https://glm.g-truc.net/0.9.2/api/a00245.html#gac38d611231b15799a0c06c54ff1ede43.
If you need more information on this topic, you can take a look at this thread where it has already been answered
Using GLM's UnProject.
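For example (untested, and assuming the viewport matches the window size; view and projection are the camera matrices discussed above, and winDepth is a placeholder you have to choose):

double mx, my;
glfwGetCursorPos(window, &mx, &my);

int width, height;
glfwGetWindowSize(window, &width, &height);

float winDepth = ... ; // depth in [0,1]; pick it for the plane your square lives on,
                       // or read it back from the depth buffer

// GLFW's window y axis points down, OpenGL's points up, so flip it
glm::vec3 winCoord((float)mx, (float)(height - my), winDepth);
glm::vec4 viewport(0.f, 0.f, (float)width, (float)height);

glm::vec3 worldPos = glm::unProject(winCoord, view, projection, viewport);

// then build the model matrix for the square from worldPos
glm::mat4 model = glm::translate(glm::mat4(1.f), worldPos);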
what's the GLFW/GLAD function to change the camera's position and rotation?
That is just the view matrix. Look here again.
I'm basically trying to create the FPS camera. I figured out movement but I can't figure out rotation.
You can take a look here learnopengl.com/Getting-started/Camera. Just scroll down until you see the section "Look around". I think that the explanation there is pretty good.
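The core of that "Look around" section, as a sketch (cameraPos, yaw and pitch are assumed variables, with yaw/pitch in radians and updated from mouse movement; std::cos/std::sin come from <cmath>):

glm::vec3 front;
front.x = std::cos(yaw) * std::cos(pitch);
front.y = std::sin(pitch);
front.z = std::sin(yaw) * std::cos(pitch);
front = glm::normalize(front);

glm::mat4 view = glm::lookAt(cameraPos, cameraPos + front, glm::vec3(0.f, 1.f, 0.f));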
I already looked at that but was confused about one thing. If each rendered cube has view, perspective, and model, how would I change the camera's view, perspective, and model?
Ok, I think that I understand the misconception here. Not all 3 matrices are per object. The model matrix is per object, the view matrix is per camera, and the projection matrix is per viewing mode (e.g. perspective with a FOV of 90° or orthographic), so it is usually set only once.
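In the rendering loop that usually ends up looking roughly like this (the Object struct and the uniform locations are placeholders for however you organize your data):

// once per frame (or whenever the camera / window changes)
glUseProgram(shaderID);
glUniformMatrix4fv(viewLoc, 1, GL_FALSE, glm::value_ptr(view));
glUniformMatrix4fv(projectionLoc, 1, GL_FALSE, glm::value_ptr(projection));

// once per object
for (const Object& obj : objects)
{
    glUniformMatrix4fv(modelLoc, 1, GL_FALSE, glm::value_ptr(obj.model));
    glBindVertexArray(obj.vao);
    glDrawElements(GL_TRIANGLES, obj.indexCount, GL_UNSIGNED_INT, 0);
}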
I'm trying to implement atmospheric scattering in my graphics (game) engine based on the GPU Gems article: link. An example implementation from that article uses a skydome. My scene is different - I don't render a whole earth with an atmosphere that can also be seen from space, but a finite flat (rectangular) area with objects, for example a race track. In fact this is the most common scenario in many games. Now I wonder how to render a sky in such a case:
1. What kind of geometry should I use: a skydome, a skybox or a full screen quad? With a full screen quad I would have to move almost all calculations to the fragment shader, but I don't know if that makes sense in terms of quality/performance.
2. How should I place the sky geometry in the scene? My idea:
I have a hemisphere (skydome) geometry with radius = 1 and center at vec3(0, 0, 0) - in object space. Those vertices are sent to the atmospheric scattering vertex shader:
layout(location=0) in vec3 inPosition;
Next, in the vertex shader I transform the vertex this way:
v3Pos = inPosition * 0.25f + 10.0f;
Uniform v3CameraPos = vec3(0.0f, 10.0f, 0.0f), uniform fInnerRadius = 10.0f, uniform fCameraHeight = 10.0f
Then I have the correct inner/outer radius proportion (10/10.25), right? I also send to the vertex shader a model matrix which sets the position of the hemisphere to the position of the moving camera, vec3(myCamera.x, myCamera.y, myCamera.z):
vec4 position = ProjectionMatrix * ViewMatrix * ModelMatrix * vec4(inPosition, 1.0);
gl_Position = position.xyww; // always fails depth test.
The hemisphere moves together with the camera (it encloses only some space around the camera with radius = 1), but it also always fails the depth test.
Unfortunately the sky color which I get is not correct: screen1
3.What about a "sky curve"? Here is a picture which demonstrate what I mean: image1
How should I set a sky curve ?
Edit1 - debugging: In the vertex shader I assigned to v3Pos the position of the "highest" vertex in the hemisphere:
vec3 v3Pos = vec3(0.0f, 10.25f, 0.0f);
Now the whole sky has the color of that vertex:
screen2
Here is my simplified atmospheric scattering:
https://stackoverflow.com/a/19659648/2521214
It works both outside and inside the atmosphere.
For realistic scattering you need to compute the curve integral through the atmosphere depth, so you need to know how thick the atmosphere is in each direction. Your terrain is a 'flat QUAD +/- some bumps', but you must know where on Earth it is: position (x,y,z) and normal (nx,ny,nz). For the atmosphere you can use a sphere or, like me, an ellipsoid. Unless you want to implement the whole scattering process (integration through an area, not a curve), what you really need is just the ray length from your camera to the end of the atmosphere.
So you can also use a cube-map texture with precomputed atmosphere edge distances, because movement inside your flat area does not matter much.
You also need the Sun position.
This can be simplified to just rotating the normal around the Earth's axis once per day, or you can also implement the seasonal trajectory shift.
Now, just before rendering, fill the screen with the sky.
You can do it all inside shaders; one quad is enough.
In the fragment shader: get the atmosphere thickness from the cube-map, or from the intersection of the pixel's direction with the atmosphere sphere/ellipsoid, and then do the integration (full or simplified). Output the color to the screen pixel.
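As an illustration of the "pixel direction intersection with atmosphere sphere" part, a minimal ray/sphere intersection in the fragment shader could look like this sketch (camPos and rayDir are assumed to be in a planet-centered space, with rayDir normalized):

// distance along rayDir from camPos to the far intersection with a sphere of
// radius atmRadius centered at the origin, or -1.0 if the ray misses it
float atmosphereDistance(vec3 camPos, vec3 rayDir, float atmRadius)
{
    float b = dot(camPos, rayDir);
    float c = dot(camPos, camPos) - atmRadius * atmRadius;
    float disc = b * b - c;
    if (disc < 0.0)
        return -1.0;        // no intersection
    return -b + sqrt(disc); // far hit; for a camera inside the sphere this is the exit point
}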
It really doesn't matter what geometry you use as long as (a) it covers the screen and (b) you feed the fragment shader a reasonable value for v3Direction. In all cases, in a typical on-the-earth game like one with a race track, the sky will be behind all other objects, so accurate Z is not such a big deal.
http://www.lighthouse3d.com/opengl/glsl/index.php?ogldir2
reports that the half vector in the OpenGL context is 'Eye position - Light position', but then it goes on to say 'luckily OpenGL calculates it for us' [which is now deprecated].
How can it practically be calculated? (A simple example would be greatly appreciated.) [Mainly, it puzzles me what "Eye" is and how it can be derived.]
At the moment I have managed to make the specular calculations work (with a good visual result) with the half vector being equal to Light, where Light is
vec3 Light = normalize(light_position - vec3(out_Vertex));
Now, I've no idea why that worked.
[If at least I knew what "Eye" is and how it can be derived practically.]
The half-vector is used in specular lighting and represents the normal at micro-imperfections in the surface which would cause incoming light to reflect toward the viewer. When the half-vector is closer to the surface normal, more imperfections align with the actual surface normal. Smoother surfaces will have fewer imperfections pointing away from the surface normal and result in a sharper highlight with a more significant drop off of light as the half-vector moves away from the actual normal than a rougher surface. The amount of drop off is controlled by the specular exponent (shininess), which is the power to which the cosine of the angle between the half-vector and the normal vector is raised, so smoother surfaces have a higher power.
We call it the half-vector (H) because it is half-way between the vector pointing to the light (light vector, L) and the vector pointing to the viewer (which is the eye position (0,0,0) minus the vertex position in eye space; view vector, V). Before you calculate H, make sure the vector to the light and the eye are in the same coordinate space (legacy OpenGL used eye-space).
H = normalize( L + V )
You did the correct calculation, but your variables could be named more appropriately.
The term light_position here isn't entirely correct, since the tutorial you cited is the directional light tutorial, and by definition directional lights don't have a position. The light vector for directional lights is independent of the vertex, so here you have combined a few equations. Keep in mind, the light vector points toward the light, so it is opposite to the flow of photons from the light.
// i'm keeping your term here... I'm assuming
// you wanted to simulate a light coming from that position
// using a directional light, so the light direction would
// be -light_position, making
// L = -(-light_position) = light_position
vec3 L = light_position;
// For a point light, you would need the direction from
// the point to the light, so instead it would be
// light_position - out_Vertex
vec3 V = -vec3(out_Vertex);
// really it is (eye - vertexPosition), so (0,0,0) - out_Vertex
vec3 H = normalize(L + V);
In the fragment shader the vertex coordinates can be seen as a vector that goes from the camera (or the "eye" of the viewer) to the current fragment, so by reversing the direction of this vector we get the "Eye" vector you are looking for. When calculating the half-vector you also need to be aware of the direction of the light-position vector: on the webpage you link they have it pointing towards the surface, while on http://en.wikipedia.org/wiki/Blinn%E2%80%93Phong_shading_model it points away from the surface.
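Putting both answers together, a typical eye-space Blinn-Phong specular term could look like this sketch (all inputs are assumed to already be in eye space; shininess is the specular exponent):

vec3 N = normalize(eyeNormal);
vec3 L = normalize(lightPosEye - fragPosEye); // toward the light (point light)
vec3 V = normalize(-fragPosEye);              // toward the eye, which sits at (0,0,0) in eye space
vec3 H = normalize(L + V);                    // the half-vector

float specular = pow(max(dot(N, H), 0.0), shininess);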
I'm writing a game using OpenGL and I am trying to implement shadow volumes.
I want to construct the shadow volume of a model on the GPU via a vertex shader. To that end, I represent the model with a VBO where:
Vertices are duplicated such that each triangle has its own unique three vertices
Each vertex has the normal of its triangle
For reasons I'm not going to get into, I was actually doing the above two points anyway, so I'm not too worried about the vertex duplication
Degenerate triangles are added to form quads inside the edges between each pair of "regular" triangles
Using this model format, inside the vertex shader I am able to find vertices that are part of triangles that face away from the light and move them back to form the shadow volume.
What I have left to figure out is what transformation exactly I should apply to the back-facing vertices.
I am able to detect when a vertex is facing away from the light, but I am unsure what transformation I should apply to it. This is what my vertex shader looks like so far:
uniform vec3 lightDir; // Parallel light.
// On the CPU this is represented in world
// space. After setting the camera with
// gluLookAt, the light vector is multiplied by
// the inverse of the modelview matrix to get
// it into eye space (I think that's what I'm
// working in :P ) before getting passed to
// this shader.
void main()
{
    vec3 eyeNormal = normalize(gl_NormalMatrix * gl_Normal);
    vec3 realLightDir = normalize(lightDir);
    float dotprod = dot(eyeNormal, realLightDir);
    if (dotprod <= 0.0)
    {
        // Facing away from the light
        // Need to translate the vertex along the light vector to
        // stretch the model into a shadow volume
        //---------------------------------//
        // This is where I'm getting stuck //
        //---------------------------------//
        // All I know is that I'll probably turn realLightDir into a
        // vec4
        gl_Position = ???;
    }
    else
    {
        gl_Position = ftransform();
    }
}
I've tried simply setting gl_Position to ftransform() - (vec4(realLightDir, 1.0) * someConstant), but this caused some kind of depth-testing bugs (some faces seemed to be visible behind others when I rendered the volume with colour) and someConstant didn't seem to affect how far the back faces were extended.
Update - Jan. 22
Just wanted to clear up questions about what space I'm probably in. I must say that keeping track of what space I'm in is the greatest source of my shader headaches.
When rendering the scene, I first set up the camera using gluLookAt. The camera may be fixed or it may move around; it should not matter. I then use translation functions like glTranslated to position my model(s).
In the program (i.e. on the CPU) I represent the light vector in world space (three floats). I've found during development that to get this light vector in the right space of my shader I had to multiply it by the inverse of the modelview matrix after setting the camera and before positioning the models. So, my program code is like this:
Position camera (gluLookAt)
Take light vector, which is in world space, and multiply it by the inverse of the current modelview matrix and pass it to the shader
Transformations to position models
Drawing of models
Does this make anything clearer?
The ftransform() result is in clip space, so this is not the space you want to apply realLightDir in. I'm not sure which space your light is in (your comment confuses me), but what is sure is that you want to add vectors that are in the same space.
On the CPU this is represented in world
space. After setting the camera with
gluLookAt, the light vector is multiplied by
the inverse of the modelview matrix to get
it into eye space (I think that's what I'm
working in :P ) before getting passed to
this shader.
Multiplying a vector by the inverse of the modelview matrix brings the vector from view space to model space. So you're saying that your light vector (which is in world space) has a transform applied to it that goes view -> model. That makes little sense to me.
We have 4 spaces (see the sketch after this list):
model space: the space your gl_Vertex is defined in.
world space: a space that GL does not care about in general; it represents an arbitrary space in which to locate the models. It's usually what the 3D engine works in (it maps to our general understanding of world coordinates).
view space: a space that corresponds to the frame of reference of the viewer. (0,0,0) is where the viewer is, looking down Z. Obtained by multiplying gl_Vertex by the modelview matrix.
clip space: the magic space that the projection matrix brings us into. The result of ftransform() is in this space (and so is gl_ModelViewProjectionMatrix * gl_Vertex).
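As a sketch of how the spaces chain together (modelMatrix, viewMatrix and projectionMatrix here are uniforms you would have to supply yourself, since legacy GL only exposes the combined gl_ModelViewMatrix):

vec4 modelPos = gl_Vertex;                  // model space
vec4 worldPos = modelMatrix * modelPos;     // world space
vec4 viewPos  = viewMatrix * worldPos;      // view (eye) space
vec4 clipPos  = projectionMatrix * viewPos; // clip space, same as ftransform()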
Can you clarify exactly which space your light direction is in?
What you need to do, however, is perform the light vector addition in either model, world or view space: bring all the parts of your operation into the same space. E.g. for model space, just compute the light direction in model space on the CPU, and do:
vec4 vertexTemp = gl_Vertex + vec4(lightDirInModelSpace * someConst, 0.0);
then you can bring that new vertex position into clip space with
gl_Position = gl_ModelViewProjectionMatrix * vertexTemp;
Last bit, don't try to apply vector additions in clip-space. It won't generally do what you think it should do, as at that point you are necessarily dealing with homogeneous coordinates with non-uniform w.
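For completeness, the model-space variant assembled into a full vertex shader might look roughly like this (lightDirInModelSpace is assumed to be the direction the light rays travel, computed per object on the CPU, and someConst is the extrusion distance):

uniform vec3 lightDirInModelSpace; // direction the light travels, in this model's space
uniform float someConst;           // how far to extrude the back faces

void main()
{
    // gl_Normal points away from the surface; if it does not face the light,
    // this vertex belongs to the back cap and gets pushed away from the light
    float facing = dot(gl_Normal, -lightDirInModelSpace);
    vec4 vertex = gl_Vertex;
    if (facing <= 0.0)
    {
        vertex.xyz += lightDirInModelSpace * someConst;
    }
    gl_Position = gl_ModelViewProjectionMatrix * vertex;
}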