My code creates a grid of many vertices and then displaces them in height by a random noise value in the vertex shader. Moving around using
glTranslated(-camera.x, -camera.y, -camera.z);
works fine, but that way you could only go as far as the edge of the grid.
I thought about sending the camera position to the shader, and letting the whole displacement get offset by it. Hence the final vertex shader code:
uniform vec3 camera_position;
varying vec4 position;
void main()
{
    vec3 tmp;
    tmp = gl_Vertex.xyz;
    tmp -= camera_position;
    gl_Vertex.y = snoise(tmp.xz / NOISE_FREQUENCY) * NOISE_SCALING;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    position = gl_Position;
}
EDIT: I fixed a flaw: the vertices themselves should not get horizontally displaced.
For some reason though, when I run this and change the camera position, the screen flickers but nothing moves. The vertices are displaced by the noise value correctly, and the coloring and everything else works, but the displacement doesn't move.
For reference, here is the git repository: https://github.com/Orpheon/Synthworld
What am I doing wrong?
PS: "Flickering" is wrong. It's as if with some positions it doesn't draw anything, and others it draws the normal scene from position 0, so if I move without stopping it flickers. I can stop at a spot where it stays black though, or at one where it stays normal.
gl_Position = ftransform();
That's what you're doing wrong. ftransform does not implicitly take gl_Vertex as a parameter. It simply does exactly what it's supposed to do: transform the vertex position exactly as though this were the fixed-function pipeline. So it uses the value that gl_Vertex had initially, ignoring any modification you made to it.
You need to do proper transformations on this:
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
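For illustration, here is a minimal sketch of how the displacement and the explicit transform could fit together, using a local copy rather than writing to gl_Vertex (attributes are read-only in GLSL). snoise, NOISE_FREQUENCY and NOISE_SCALING are assumed to be defined elsewhere in the shader, as in the question, and the sign of the camera offset depends on your convention:

uniform vec3 camera_position;

void main()
{
    // Copy the attribute; gl_Vertex itself cannot be written to.
    vec4 v = gl_Vertex;
    // Sample the noise in a camera-relative frame so the terrain scrolls.
    v.y = snoise((v.xz - camera_position.xz) / NOISE_FREQUENCY) * NOISE_SCALING;
    // Transform the modified position explicitly; ftransform() would ignore it.
    gl_Position = gl_ModelViewProjectionMatrix * v;
}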
I managed to fix this bug by using glUniform3fv:
main.c excerpt:
// Sync the camera position variable
GLint camera_pos_ptr;
float f[] = {camera.x, camera.y, camera.z};
camera_pos_ptr = glGetUniformLocation(shader_program, "camera_position");
glUniform3fv(camera_pos_ptr, 1, f);
// draw the display list
glCallList(grid);
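One host-side detail worth noting (not shown in the excerpt): glUniform* calls affect the program currently bound with glUseProgram, so the shader must be active before the uniform is set. A minimal sketch using the names from the excerpt:

// Bind the shader first; glUniform* writes to the active program.
glUseProgram(shader_program);
GLint camera_pos_ptr = glGetUniformLocation(shader_program, "camera_position");
if (camera_pos_ptr == -1) {
    // The uniform is unused/optimized out, or the name is misspelled.
}
glUniform3fv(camera_pos_ptr, 1, f);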
Vertex shader:
uniform vec3 camera_position;
varying vec4 position;
void main()
{
    vec4 newpos;
    newpos = gl_Vertex;
    newpos.y = (snoise( (newpos.xz + camera_position.xz) / NOISE_FREQUENCY ) * NOISE_SCALING) - camera_position.y;
    gl_Position = gl_ModelViewProjectionMatrix * newpos;
    position = gl_Position;
}
If someone wants to see the complete code, here is the git repository again: https://github.com/Orpheon/Synthworld
Related
I have a model I'm trying to move through the air in OpenGL with GLSL and, ultimately, have it spin as it flies. I started off just trying to do a static rotation. Here's an example of the result:
The gray track at the bottom is on the floor. The little white blocks all over the place represent an explosion chunk model and are supposed to shoot up and bounce on the floor.
Without rotation, if the model matrix is just an identity, everything works perfectly.
When introducing rotation, it looks like they move based on their rotation. That means that some of them, when coming to a stop, rest in the air instead of on the floor. (That slightly flatter white block on the gray line next to the red square is not the same as the other little ones. Placeholders!)
I'm using glm for all the math. Here are the relevant lines of code, in order of execution. This particular model is rendered instanced so each entity's position and model matrix get uploaded through the uniform buffer.
Object creation:
// should result in a model rotated along the Y axis
auto quat = glm::normalize(glm::angleAxis(RandomAngle, glm::vec3(0.0, 1.0, 0.0)));
myModelMatrix = glm::toMat4(quat);
Vertex shader:
struct Instance
{
    vec4 position;
    mat4 model;
};

layout(std140) uniform RenderInstances
{
    Instance instance[500];
} instances;

layout(location = 1) in vec4 modelPos;
layout(location = 2) in vec4 modelColor;
layout(location = 3) out vec4 fragColor;

void main()
{
    fragColor = vec4(modelColor.r, modelColor.g, modelColor.b, 1);
    vec4 pos = instances.instance[gl_InstanceID].position + modelPos;
    gl_Position = camera.projection * camera.view * instances.instance[gl_InstanceID].model * pos;
}
I don't know where I went wrong. I do know that if I make the model matrix do a simple translation, that works as expected, so at least the uniform buffer works. The camera is also a uniform buffer shared across all shaders, and that works fine. Any comments on the shader itself are also welcome. Learning!
The translation to each vertex's final destination was happening before the rotation. It was this that I didn't realize was happening, even though I know to do rotations before translations.
Here's the shader code:
void main()
{
    fragColor = vec4(modelColor.r, modelColor.g, modelColor.b, 1);
    vec4 pos = instances.instance[gl_InstanceID].position + modelPos;
    gl_Position = camera.projection * camera.view * instances.instance[gl_InstanceID].model * pos;
}
Due to the associative nature of matrix multiplication, this can also be:
gl_Position = (projection * (view * (model * pos)));
Even though the multiplication happens left to right, the transformations happen right to left.
This is the old code to generate the model matrix:
renderc.ModelMatrix = glm::toMat4(glm::normalize(animc.Rotation));
This results in the rotation happening while the model is not at the origin, because the per-instance translation has already been added into pos by the time the gl_Position = line multiplies by the model matrix.
This is now the code that generates the model matrix:
renderc.ModelMatrix = glm::translate(pos);
renderc.ModelMatrix *= glm::toMat4(glm::normalize(animc.Rotation));
renderc.ModelMatrix *= glm::translate(-pos);
Translate to the origin (-pos), rotate, then translate back (+pos).
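For reference, a self-contained sketch of that composition with glm (the helper name and includes are mine, not from the original code):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp> // glm::translate
#include <glm/gtx/quaternion.hpp>       // glm::toMat4

// Build a model matrix that rotates the model about its own position.
// Read right to left: move to the origin, rotate there, move back.
glm::mat4 makeModelMatrix(const glm::vec3 &pos, const glm::quat &rotation)
{
    return glm::translate(glm::mat4(1.0f), pos)
         * glm::toMat4(glm::normalize(rotation))
         * glm::translate(glm::mat4(1.0f), -pos);
}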
I've been trying to make a basic static point light using shaders for an LWJGL game, but it appears as if the light moves as the camera is translated and rotated. These shaders are slightly modified from the OpenGL 4.3 guide, so I'm not sure why they aren't working as intended. Can anyone explain why these shaders aren't working and what I can do to get them to work?
Vertex Shader:
varying vec3 color, normal;
varying vec4 vertexPos;
void main() {
    color = vec3(0.4);
    normal = normalize(gl_NormalMatrix * gl_Normal);
    vertexPos = gl_ModelViewMatrix * gl_Vertex;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
Fragment Shader:
varying vec3 color, normal;
varying vec4 vertexPos;
void main() {
    vec3 lightPos = vec3(4.0);
    vec3 lightColor = vec3(0.75);
    vec3 lightDir = lightPos - vertexPos.xyz;
    float lightDist = length(lightDir);
    float attenuation = 1.0 / (3.0 + 0.007 * lightDist + 0.000008 * lightDist * lightDist);
    float diffuse = max(0.0, dot(normal, lightDir));
    vec3 ambient = vec3(0.4, 0.4, 0.4);
    vec3 finalColor = color * (ambient + lightColor * diffuse * attenuation);
    gl_FragColor = vec4(finalColor, 1.0);
}
If anyone's interested, I ended up finding the solution. Removing the calls to gl_NormalMatrix and gl_ModelViewMatrix solved the problem.
The critical value here, lightDir, was being computed from vertexPos, which you have expressed in eye space (this happened because its original world-space form was multiplied by the modelview matrix). Eye space stays with the camera, not with anything in the 3D world. So to have a light source fixed at some absolute point in world space (like [4.0, 4.0, 4.0]), you could just leave your object's points in that space, as you found out.
But getting rid of the modelview matrix is not a good idea, since the whole point of the model matrix is to place your objects where they belong (so you can re-use your vertex arrays with changes only to the model matrix, instead of burdening them with specifying every single shape's vertex positions from scratch).
A better way is to perform the modelview multiplication on both vertexPos AND lightPos. This way you're treating lightPos as originally a quantity in world space, but then doing the comparison in eye space. The math to get light intensities from normals works out the same in either space, and you'll get a correct-looking light source.
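A minimal vertex-shader sketch of that suggestion, assuming the modelview matrix at this point contains only the camera (view) transform, so that a world-space light is not additionally transformed by a per-object model matrix (worldLightPos is a hypothetical uniform):

varying vec3 color, normal;
varying vec4 vertexPos;
varying vec3 eyeLightPos;

uniform vec3 worldLightPos; // hypothetical: e.g. vec3(4.0), in world space

void main() {
    color = vec3(0.4);
    normal = normalize(gl_NormalMatrix * gl_Normal);
    // Express both positions in eye space so the comparison is consistent.
    vertexPos = gl_ModelViewMatrix * gl_Vertex;
    eyeLightPos = (gl_ModelViewMatrix * vec4(worldLightPos, 1.0)).xyz;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

The fragment shader would then read eyeLightPos through a varying instead of hard-coding lightPos.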
I've been trying to get all of my lights into eye space for the GLSL shaders I'm using, but I'm missing something. I have no idea what I'm missing. Here's my shader code, just in case it's causing the problem...
varying vec3 normal, lightDir;
uniform vec3 lightPos;
//gl_Normal: Object Space
//gl_Vertex: Object Space
//lightDir: Eye Space
void main()
{
    vec4 vert;
    normal = gl_NormalMatrix * gl_Normal;
    vert = gl_ModelViewMatrix * gl_Vertex;
    lightDir = normalize(vec3(vec4(lightPos, 1.0) - vert));
    gl_Position = ftransform();
    gl_FrontColor = gl_Color;
}
If it isn't that, then it must be the way I'm transforming the light position CPU side, so here's what I'm doing...
eye = inverse(camera->climb(root));
glMultMatrixf(value_ptr(eye));
glUniform3fv(sLight, 1, value_ptr(vec3(eye * light->climb(root) * vec4())));
Everything else in my program is working perfectly, but there's something I'm not spotting here. NOTE: camera->climb(root) yields the transformation of the camera's scene node in world space. light->climb(root) yields the transformation of the light's scene node in world space.
EDIT: The exact symptoms I'm having are that my light always appears to be at the origin in eye space (in the same location as the camera).
To move the answer out of the comments:
The origin coordinate that you multiply to get your light's eye-space position should be vec4(0,0,0,1) instead of vec4(0,0,0,0). With w = 0 the vector is treated as a direction, so the translation part of the matrices drops out and the light ends up at the eye-space origin, which is exactly the symptom described.
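Applied to the call from the question, the fix would look like this:

// vec4(0, 0, 0, 1) is a point, so the translation in the light's
// world transform survives the multiplication.
glUniform3fv(sLight, 1,
             value_ptr(vec3(eye * light->climb(root) * vec4(0, 0, 0, 1))));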
I'm writing a clone of Wolfenstein 3D using only core OpenGL 3.3 for university and I've run into a bit of a problem with the sprites, namely getting them to scale correctly based on distance.
From what I can tell, previous versions of OGL would in fact do this for you, but that functionality has been removed, and all my attempts to reimplement it have resulted in complete failure.
My current implementation is passable at a distance, not too shabby at mid range, and bizarre at close range.
The main problem (I think) is that I have no understanding of the maths I'm using.
The target size of the sprite is slightly bigger than the viewport, so it should 'go out of the picture' as you get right up to it, but it doesn't. It gets smaller, and that's confusing me a lot.
I recorded a small video of this, in case words are not enough. (Mine is on the right)
Can anyone direct me to where I'm going wrong, and explain why?
Code:
C++
// setup
glPointParameteri(GL_POINT_SPRITE_COORD_ORIGIN, GL_LOWER_LEFT);
glEnable(GL_PROGRAM_POINT_SIZE);
// Drawing
glUseProgram(StaticsProg);
glBindVertexArray(statixVAO);
glUniformMatrix4fv(uStatixMVP, 1, GL_FALSE, glm::value_ptr(MVP));
glDrawArrays(GL_POINTS, 0, iNumSprites);
Vertex Shader
#version 330 core
layout(location = 0) in vec2 pos;
layout(location = 1) in int spriteNum_;
flat out int spriteNum;
uniform mat4 MVP;
const float constAtten = 0.9;
const float linearAtten = 0.6;
const float quadAtten = 0.001;
void main() {
    spriteNum = spriteNum_;
    // Note: I have fiddled the MVP so that z is height rather than depth, since this is how I learned my vectors.
    gl_Position = MVP * vec4(pos.x + 1, pos.y, 0.5, 1);
    float dist = distance(gl_Position, vec4(0,0,0,1));
    float attn = constAtten / ((1 + linearAtten * dist) * (1 + quadAtten * dist * dist));
    gl_PointSize = 768.0 * attn;
}
Fragment Shader
#version 330 core
flat in int spriteNum;
out vec4 color;
uniform sampler2DArray Sprites;
void main() {
    color = texture(Sprites, vec3(gl_PointCoord.s, gl_PointCoord.t, spriteNum));
    if (color.a < 0.2)
        discard;
}
First of all, I don't really understand why you use pos.x + 1.
Next, like Nathan said, you shouldn't use the clip-space point, but the eye-space point. This means you only use the modelview-transformed point (without projection) to compute the distance.
uniform mat4 MV; //modelview matrix
vec3 eyePos = (MV * vec4(pos.x, pos.y, 0.5, 1)).xyz;
Furthermore I don't completely understand your attenuation computation. At the moment a higher constAtten value means less attenuation. Why don't you just use the model that OpenGL's deprecated point parameters used:
float dist = length(eyePos); //since the distance to (0,0,0) is just the length
float attn = inversesqrt(constAtten + linearAtten*dist + quadAtten*dist*dist);
EDIT: But in general I don't think this attenuation model is a good approach, because often you just want the sprite to keep its object-space size, and you would have to fiddle quite a bit with the attenuation factors to achieve that.
A better way is to input its object space size and just compute the screen space size in pixels (which is what gl_PointSize actually is) based on that using the current view and projection setup:
uniform mat4 MV; //modelview matrix
uniform mat4 P; //projection matrix
uniform float spriteWidth; //object space width of sprite (maybe an per-vertex in)
uniform float screenWidth; //screen width in pixels
vec4 eyePos = MV * vec4(pos.x, pos.y, 0.5, 1);
vec4 projCorner = P * vec4(0.5*spriteWidth, 0.5*spriteWidth, eyePos.z, eyePos.w);
gl_PointSize = screenWidth * projCorner.x / projCorner.w;
gl_Position = P * eyePos;
This way the sprite always gets the size it would have when rendered as a textured quad with a width of spriteWidth.
EDIT: Of course you should also keep in mind the limitations of point sprites. A point sprite is clipped based on its center position. This means that when its center moves out of the screen, the whole sprite disappears. With large sprites (as in your case, I think) this might really be a problem.
Therefore I would rather suggest you use simple textured quads. This way you circumvent the whole attenuation problem, as the quads are transformed just like every other 3D object. You only need to implement the rotation toward the viewer, which can be done either on the CPU or in the vertex shader.
Based on Christian Rau's answer (last edit), I implemented a geometry shader that builds a billboard in view space, which seems to solve all my problems:
Here are the shaders: (Note that I have fixed the alignment issue that required the original shader to add 1 to x)
Vertex Shader
#version 330 core
layout (location = 0) in vec4 gridPos;
layout (location = 1) in int spriteNum_in;
flat out int spriteNum;
// simple pass-thru to the geometry generator
void main() {
    gl_Position = gridPos;
    spriteNum = spriteNum_in;
}
Geometry Shader
#version 330 core
layout (points) in;
layout (triangle_strip, max_vertices = 4) out;
flat in int spriteNum[];
smooth out vec3 stp;
uniform mat4 Projection;
uniform mat4 View;
void main() {
    // Transform into view (eye) space.
    vec4 pos = View * gl_in[0].gl_Position;
    int snum = spriteNum[0];

    // Bottom left corner
    gl_Position = pos;
    gl_Position.x += 0.5;
    gl_Position = Projection * gl_Position;
    stp = vec3(0, 0, snum);
    EmitVertex();

    // Top left corner
    gl_Position = pos;
    gl_Position.x += 0.5;
    gl_Position.y += 1;
    gl_Position = Projection * gl_Position;
    stp = vec3(0, 1, snum);
    EmitVertex();

    // Bottom right corner
    gl_Position = pos;
    gl_Position.x -= 0.5;
    gl_Position = Projection * gl_Position;
    stp = vec3(1, 0, snum);
    EmitVertex();

    // Top right corner
    gl_Position = pos;
    gl_Position.x -= 0.5;
    gl_Position.y += 1;
    gl_Position = Projection * gl_Position;
    stp = vec3(1, 1, snum);
    EmitVertex();

    EndPrimitive();
}
Fragment Shader
#version 330 core
smooth in vec3 stp;
out vec4 colour;
uniform sampler2DArray Sprites;
void main() {
    colour = texture(Sprites, stp);
    if (colour.a < 0.2)
        discard;
}
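For completeness, the host side needs one extra step compared to the original point-sprite setup: compiling and attaching the geometry stage before linking. A sketch, reusing StaticsProg from the question (geometrySource is a hypothetical name for the shader source string):

// GL_GEOMETRY_SHADER is core since OpenGL 3.2, so it is available in 3.3.
GLuint gs = glCreateShader(GL_GEOMETRY_SHADER);
glShaderSource(gs, 1, &geometrySource, NULL);
glCompileShader(gs);
glAttachShader(StaticsProg, gs);
glLinkProgram(StaticsProg);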
I don't think you want to base the distance calculation in your vertex shader on the projected position. Instead just calculate the position relative to your view, i.e. use the model-view matrix instead of the model-view-projection one.
Think about it this way -- in projected space, as an object gets closer to you, its distance in the horizontal and vertical directions becomes exaggerated. You can see this in the way the lamps move away from the center toward the top of the screen as you approach them. That exaggeration of those dimensions is going to make the distance get larger when you get really close, which is why you're seeing the object shrink.
At least in OpenGL ES 2.0, there is a maximum size limitation on gl_PointSize imposed by the OpenGL implementation. You can query the size with ALIASED_POINT_SIZE_RANGE.
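A hedged sketch of that query (the same enum also exists in desktop GL for aliased points):

// Query the implementation's supported point size range.
GLfloat range[2];
glGetFloatv(GL_ALIASED_POINT_SIZE_RANGE, range);
// range[0] is the smallest and range[1] the largest supported size.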
I just started writing a Phong shader.
Vertex shader:
varying vec3 normal, eyeVec;
#define MAX_LIGHTS 8
#define NUM_LIGHTS 3
varying vec3 lightDir[MAX_LIGHTS];
void main() {
    gl_Position = ftransform();
    normal = gl_NormalMatrix * gl_Normal;
    vec4 vVertex = gl_ModelViewMatrix * gl_Vertex;
    eyeVec = -vVertex.xyz;
    int i;
    for (i = 0; i < NUM_LIGHTS; ++i) {
        lightDir[i] = vec3(gl_LightSource[i].position.xyz - vVertex.xyz);
    }
}
I know that I need to get the camera position with a uniform, but how, and where do I put this value?
PS: I'm using OpenGL 2.0.
You don't need to pass the camera position, because, well, there is no camera in OpenGL.
Lighting calculations are performed in eye space, i.e. after the multiplication with the modelview matrix, which also performs the "camera positioning". So you actually already have the right things in place. Using ftransform() is a little inefficient, as you're doing half of its work again (gl_ModelViewMatrix * gl_Vertex); you can get the same result by computing gl_Position = gl_ProjectionMatrix * vVertex instead.
So if your lights seem to move when your "camera" transforms, you're not transforming the light positions properly. Either precompute the transformed light positions, or transform them in the shader as well. With shaders it's more a matter of chosen convention than a hard constraint.
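A sketch of the vertex shader with that substitution applied (note that ftransform() additionally guarantees bit-exact invariance with the fixed-function pipeline, which this version does not):

varying vec3 normal, eyeVec;
#define MAX_LIGHTS 8
#define NUM_LIGHTS 3
varying vec3 lightDir[MAX_LIGHTS];

void main() {
    normal = gl_NormalMatrix * gl_Normal;
    vec4 vVertex = gl_ModelViewMatrix * gl_Vertex;
    eyeVec = -vVertex.xyz;
    for (int i = 0; i < NUM_LIGHTS; ++i) {
        // gl_LightSource positions are already in eye space: fixed-function
        // GL transforms them by the modelview matrix current at glLight time.
        lightDir[i] = gl_LightSource[i].position.xyz - vVertex.xyz;
    }
    // Reuse the eye-space position instead of calling ftransform().
    gl_Position = gl_ProjectionMatrix * vVertex;
}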