I've been trying to get all of my lights into eye space for the GLSL shaders I'm using, but I'm missing something and I can't figure out what. Here's my shader code, just in case it's causing the problem...
varying vec3 normal, lightDir;
uniform vec3 lightPos;
//gl_Normal: Object Space
//gl_Vertex: Object Space
//lightDir: Eye Space
void main()
{
    vec4 vert;
    normal = gl_NormalMatrix * gl_Normal;
    vert = gl_ModelViewMatrix * gl_Vertex;
    lightDir = normalize(vec3(vec4(lightPos, 1.0) - vert));
    gl_Position = ftransform();
    gl_FrontColor = gl_Color;
}
If it isn't that, then it must be the way I'm transforming the light position on the CPU side, so here's what I'm doing...
eye = inverse(camera->climb(root));
glMultMatrixf(value_ptr(eye));
glUniform3fv(sLight, 1, value_ptr(vec3(eye * light->climb(root) * vec4())));
Everything else in my program is working perfectly, but there's something I'm not spotting here. NOTE: camera->climb(root) yields the transformation of the camera's scene node in world space. light->climb(root) yields the transformation of the light's scene node in world space.
EDIT: The exact symptom is that my light always appears to be at the origin in eye space (in the same location as the camera).
To move the answer out of the comments:
The origin coordinate that you multiply to get your light's eye-space position should be vec4(0,0,0,1) instead of vec4(0,0,0,0).
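In code, the fix is that one constructor change on the CPU side (a sketch of the same line from the question):

glUniform3fv(sLight, 1, value_ptr(vec3(eye * light->climb(root) * vec4(0, 0, 0, 1))));

With w = 0, multiplying the zero vector by any matrix still yields the zero vector, so the uploaded light position collapses to (0, 0, 0) in eye space, exactly the symptom described in the edit.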
Related
I have a model I'm trying to move through the air in OpenGL with GLSL and, ultimately, have it spin as it flies. I started off just trying to do a static rotation. Here's an example of the result:
The gray track at the bottom is on the floor. The little white blocks all over the place represent an explosion chunk model and are supposed to shoot up and bounce on the floor.
Without rotation, if the model matrix is just an identity, everything works perfectly.
When introducing rotation, it looks like they move based on their rotation. That means that some of them, when coming to a stop, rest in the air instead of on the floor. (That slightly flatter white block on the gray line next to the red square is not the same as the other little ones. Placeholders!)
I'm using glm for all the math. Here are the relevant lines of code, in order of execution. This particular model is rendered instanced so each entity's position and model matrix get uploaded through the uniform buffer.
Object creation:
// should result in a model rotated along the Y axis
auto quat = glm::normalize(glm::angleAxis(RandomAngle, glm::vec3(0.0, 1.0, 0.0)));
myModelMatrix = glm::toMat4(quat);
Vertex shader:
struct Instance
{
    vec4 position;
    mat4 model;
};

layout(std140) uniform RenderInstances
{
    Instance instance[500];
} instances;
layout(location = 1) in vec4 modelPos;
layout(location = 2) in vec4 modelColor;
layout(location = 3) out vec4 fragColor;
void main()
{
    fragColor = vec4(modelColor.r, modelColor.g, modelColor.b, 1);
    vec4 pos = instances.instance[gl_InstanceID].position + modelPos;
    gl_Position = camera.projection * camera.view * instances.instance[gl_InstanceID].model * pos;
}
I don't know where I went wrong. I do know that if I make the model matrix do a simple translation, that works as expected, so at least the uniform buffer works. The camera is also a uniform buffer shared across all shaders, and that works fine. Any comments on the shader itself are also welcome. Learning!
The translation to each vertex's final destination was happening before the rotation. That's what I didn't realize was happening, even though I know to do rotations before translations.
Here's the shader code:
void main()
{
    fragColor = vec4(modelColor.r, modelColor.g, modelColor.b, 1);
    vec4 pos = instances.instance[gl_InstanceID].position + modelPos;
    gl_Position = camera.projection * camera.view * instances.instance[gl_InstanceID].model * pos;
}
Due to the associative nature of matrix multiplication, this can also be:
gl_Position = (projection * (view * (model * pos)));
Even though the multiplication is written left to right, the transformations are applied right to left.
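A tiny worked example of that order, with hypothetical matrices written out as GLSL column-major constructors: a 90-degree rotation R about Y followed by a translation T by (5, 0, 0):

mat4 R = mat4(0.0, 0.0, -1.0, 0.0,   // 90-degree rotation about Y (columns)
              0.0, 1.0,  0.0, 0.0,
              1.0, 0.0,  0.0, 0.0,
              0.0, 0.0,  0.0, 1.0);
mat4 T = mat4(1.0, 0.0, 0.0, 0.0,    // translation by (5, 0, 0)
              0.0, 1.0, 0.0, 0.0,
              0.0, 0.0, 1.0, 0.0,
              5.0, 0.0, 0.0, 1.0);
vec4 p = (T * R) * vec4(1.0, 0.0, 0.0, 1.0);
// R applies first: (1,0,0) -> (0,0,-1); then T: (0,0,-1) -> (5,0,-1)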
This is the old code to generate the model matrix:
renderc.ModelMatrix = glm::toMat4(glm::normalize(animc.Rotation));
This will result in the rotation happening with the model away from the origin, because pos, at the right end of the gl_Position = line, already contains the instance translation before the model matrix is applied.
This is now the code that generates the model matrix:
renderc.ModelMatrix = glm::translate(pos);
renderc.ModelMatrix *= glm::toMat4(glm::normalize(animc.Rotation));
renderc.ModelMatrix *= glm::translate(-pos);
Translate to the origin (-pos), rotate, then translate back (+pos).
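An alternative, sketched here as my own suggestion rather than the poster's code: bake the instance translation into the model matrix itself and drop the position add in the shader, so the rotation naturally happens before the translation (this assumes the instance position pos is known when the matrix is built):

// C++ side
renderc.ModelMatrix = glm::translate(pos) * glm::toMat4(glm::normalize(animc.Rotation));
// shader side: multiply the raw vertex instead of position + modelPos
// gl_Position = camera.projection * camera.view * instances.instance[gl_InstanceID].model * modelPos;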
I'm developing an OpenGL application and I'm having a problem implementing a cubemap reflection shader: the reflection rotates with the camera around the object; it looks the same from any point of view.
Here is my vertex shader:
in vec4 in_Position;
in vec4 in_Normal;
out vec3 ws_coords;
out vec3 normal;
uniform mat4 uniform_ModelViewProjectionMatrix;
uniform mat4 uniform_ModelViewMatrix;
uniform mat4 uniform_ModelMatrix;
uniform mat3 uniform_NormalMatrix;
uniform vec3 uniform_CameraPosition;
...
ws_coords = (uniform_ModelViewMatrix * in_Position).xyz;
normal = normalize(uniform_NormalMatrix * in_Normal.xyz);
And fragment:
uniform samplerCube uniform_ReflectionTexture;
...
vec3 normal = normalize(normal);
vec3 reflectedDirection = reflect(normalize(ws_coords), normal);
frag_Color = texture(uniform_ReflectionTexture, reflectedDirection).xyz;
All the shaders I've found on the internet either have the same issue or produce weird results for me.
I guess I need to rotate the reflected direction with the camera rotation, but I have no idea how to do that. As shader input I have the world-space camera position and the MVP, MV, M, and normal matrices.
Can you please help me implement a shader that takes the camera direction into account?
This part seems a bit odd to me:
vec3 reflectedDirection = reflect(normalize(ws_coords), normal);
The first argument to reflect has to be the incident vector: one that goes from the camera position to the pixel position, in world space.
I suggest you take your in_Position to world space (I don't know which space it's currently in), build a normalized vector from the camera's world position to it, then reflect that off a world-space normal vector and sample your cubemap.
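A minimal sketch of that suggestion, reusing the uniform names from the question; it assumes uniform_CameraPosition holds the world-space camera position and that uniform_NormalMatrix is built from the model matrix rather than the modelview:

// vertex shader
ws_coords = (uniform_ModelMatrix * in_Position).xyz;        // world-space position
normal = normalize(uniform_NormalMatrix * in_Normal.xyz);   // world-space normal

// fragment shader
vec3 incident = normalize(ws_coords - uniform_CameraPosition); // camera -> pixel
vec3 reflectedDirection = reflect(incident, normalize(normal));
frag_Color = texture(uniform_ReflectionTexture, reflectedDirection).xyz;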
Okay, I found the answer.
My problem was that I did the calculations in view space, which is why the reflection was static. My normal matrix was also in view space.
So the fix is
ws_coords = (uniform_ModelMatrix * in_Position).xyz;
normal = normalize(uniform_NormalMatrix * in_Normal.xyz);
and changing the normal matrix from view space to model space.
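For reference, the model-space normal matrix can be built the same way the view-space one usually is, just from the model matrix instead of the modelview matrix. A glm-style sketch (the variable names are mine):

glm::mat3 normalMatrix = glm::transpose(glm::inverse(glm::mat3(modelMatrix)));

The inverse transpose keeps normals correct even under non-uniform scaling; for pure rotation and translation, mat3(modelMatrix) alone would do.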
I've been trying to make a basic static point light using shaders for an LWJGL game, but it appears that the light moves as the camera is translated and rotated. These shaders are slightly modified from the OpenGL 4.3 guide, so I'm not sure why they aren't working as intended. Can anyone explain why, and what I can do to get them to work?
Vertex Shader:
varying vec3 color, normal;
varying vec4 vertexPos;
void main() {
    color = vec3(0.4);
    normal = normalize(gl_NormalMatrix * gl_Normal);
    vertexPos = gl_ModelViewMatrix * gl_Vertex;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
Fragment Shader:
varying vec3 color, normal;
varying vec4 vertexPos;
void main() {
    vec3 lightPos = vec3(4.0);
    vec3 lightColor = vec3(0.75);
    vec3 lightDir = lightPos - vertexPos.xyz;
    float lightDist = length(lightDir);
    float attenuation = 1.0 / (3.0 + 0.007 * lightDist + 0.000008 * lightDist * lightDist);
    float diffuse = max(0.0, dot(normal, lightDir));
    vec3 ambient = vec3(0.4, 0.4, 0.4);
    vec3 finalColor = color * (ambient + lightColor * diffuse * attenuation);
    gl_FragColor = vec4(finalColor, 1.0);
}
If anyone's interested, I ended up finding the solution. Removing the calls to gl_NormalMatrix and gl_ModelViewMatrix solved the problem.
The critical value here, lightDir, was being computed against vertexPos, which you have expressed in eye space (that happened because its original world-space form was multiplied by the modelview matrix). Eye space stays with the camera, not with anything in the 3D world. So to get a light source that doesn't move relative to some absolute point in world space (like [4.0, 4.0, 4.0]), you can just leave your object's points in that space, as you found out.
But getting rid of modelview is not a good idea, since the whole point of the model matrix is to place your objects where they belong (so you can re-use your vertex arrays with changes only to the model matrix, instead of burdening them with specifying every single shape's vertex positions from scratch).
A better way is to transform lightPos into eye space as well, alongside vertexPos. This way you're treating lightPos as originally a quantity in world space, but then doing the comparison in eye space. The math that derives light intensities from the normals works out the same in either space, and you'll get a correct-looking light source.
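A minimal sketch of that, in the question's old-style GLSL; the uniform name is mine, and it assumes the world-space light position is multiplied by the view matrix on the CPU before upload:

uniform vec3 lightPosEye; // world-space light position, pre-transformed to eye space on the CPU
// ...
vec3 lightDir = lightPosEye - vertexPos.xyz;
float lightDist = length(lightDir);
lightDir /= lightDist; // also normalize before the diffuse dot product
float diffuse = max(0.0, dot(normalize(normal), lightDir));

Note the extra normalization: the original code dots the normal with an unnormalized lightDir, which scales the diffuse term by the light distance.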
My code creates a grid of lots of vertices and then displaces them by a random noise value in height in the vertex shader. Moving around using
glTranslated(-camera.x, -camera.y, -camera.z);
works fine, but that way you could only go as far as the edge of the grid.
I thought about sending the camera position to the shader, and letting the whole displacement get offset by it. Hence the final vertex shader code:
uniform vec3 camera_position;
varying vec4 position;
void main()
{
    vec3 tmp;
    tmp = gl_Vertex.xyz;
    tmp -= camera_position;
    gl_Vertex.y = snoise(tmp.xz / NOISE_FREQUENCY) * NOISE_SCALING;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    position = gl_Position;
}
EDIT: I fixed a flaw; the vertices themselves should not get horizontally displaced.
For some reason, though, when I run this and change the camera position, the screen flickers but nothing moves. The vertices are displaced by the noise value correctly, and the coloring and everything else works, but the displacement doesn't move.
For reference, here is the git repository: https://github.com/Orpheon/Synthworld
What am I doing wrong?
PS: "Flickering" is wrong. It's as if with some positions it doesn't draw anything, and others it draws the normal scene from position 0, so if I move without stopping it flickers. I can stop at a spot where it stays black though, or at one where it stays normal.
gl_Position = ftransform();
That's what you're doing wrong. ftransform does not implicitly take gl_Vertex as a parameter. It simply does exactly what it's supposed to: perform transformation of the vertex position exactly as though this were the fixed-function pipeline. So it uses the value that gl_Vertex had initially.
You need to do proper transformations on this:
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
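One more caveat: gl_Vertex is a built-in vertex attribute, and attributes are read-only in GLSL, so the displaced position should go into a local variable instead of being written back into gl_Vertex. A minimal sketch:

vec4 displaced = gl_Vertex;
displaced.y = snoise(displaced.xz / NOISE_FREQUENCY) * NOISE_SCALING;
gl_Position = gl_ModelViewProjectionMatrix * displaced;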
I managed to fix this bug by using glUniform3fv:
main.c excerpt:
// Sync the camera position variable
GLint camera_pos_ptr;
float f[] = {camera.x, camera.y, camera.z};
camera_pos_ptr = glGetUniformLocation(shader_program, "camera_position");
glUniform3fv(camera_pos_ptr, 1, f);
// draw the display list
glCallList(grid);
Vertex shader:
uniform vec3 camera_position;
varying vec4 position;
void main()
{
    vec4 newpos;
    newpos = gl_Vertex;
    newpos.y = (snoise((newpos.xz + camera_position.xz) / NOISE_FREQUENCY) * NOISE_SCALING) - camera_position.y;
    gl_Position = gl_ModelViewProjectionMatrix * newpos;
    position = gl_Position;
}
If someone wants to see the complete code, here is the git repository again: https://github.com/Orpheon/Synthworld
I just started writing a Phong shader.
Vertex shader:
varying vec3 normal, eyeVec;
#define MAX_LIGHTS 8
#define NUM_LIGHTS 3
varying vec3 lightDir[MAX_LIGHTS];
void main() {
    gl_Position = ftransform();
    normal = gl_NormalMatrix * gl_Normal;
    vec4 vVertex = gl_ModelViewMatrix * gl_Vertex;
    eyeVec = -vVertex.xyz;
    int i;
    for (i = 0; i < NUM_LIGHTS; ++i) {
        lightDir[i] = vec3(gl_LightSource[i].position.xyz - vVertex.xyz);
    }
}
I know that I need to get the camera position with a uniform, but how, and where do I put this value?
PS: I'm using OpenGL 2.0.
You don't need to pass the camera position, because, well, there is no camera in OpenGL.
Lighting calculations are performed in eye space, i.e. after the multiplication with the modelview matrix, which is also what performs the "camera positioning". So you actually already have the right things in place. Using ftransform() is a little inefficient, though, since you're duplicating half of its work with gl_ModelViewMatrix * gl_Vertex; you could instead reuse that result and write gl_Position = gl_ProjectionMatrix * vVertex;.
So if your lights seem to move when your "camera" transforms, you're not transforming the light positions properly. Either precompute the transformed light positions on the CPU, or transform them in the shader as well. With shaders it's more a matter of chosen convention than a hard constraint.
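For the fixed-function gl_LightSource path this shader reads from, the usual convention is to set the light position right after loading the view matrix, because glLight* transforms GL_POSITION by the current modelview matrix at call time. A minimal C-style sketch (applyCameraTransform is a hypothetical placeholder for your view setup):

// each frame, before drawing:
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
applyCameraTransform();                                 // hypothetical: loads the view matrix
GLfloat lightPosWorld[4] = { 4.0f, 4.0f, 4.0f, 1.0f };  // w = 1: positional light
glLightfv(GL_LIGHT0, GL_POSITION, lightPosWorld);       // stored in eye space via the current modelview
// ...then apply per-object model transforms and draw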