Rotation of model with translation results in rotation not at origin - opengl

I have a model I'm trying to move through the air in OpenGL with GLSL and, ultimately, have it spin as it flies. I started off just trying to do a static rotation. Here's an example of the result:
The gray track at the bottom is on the floor. The little white blocks all over the place represent an explosion chunk model and are supposed to shoot up and bounce on the floor.
Without rotation, if the model matrix is just an identity, everything works perfectly.
When I introduce rotation, it looks like they move based on their rotation. That means that some of them, when coming to a stop, rest in the air instead of on the floor. (That slightly flatter white block on the gray line next to the red square is not the same as the other little ones. Placeholders!)
I'm using glm for all the math. Here are the relevant lines of code, in order of execution. This particular model is rendered instanced so each entity's position and model matrix get uploaded through the uniform buffer.
Object creation:
// should result in a model rotated along the Y axis
auto quat = glm::normalize(glm::angleAxis(RandomAngle, glm::vec3(0.0, 1.0, 0.0)));
myModelMatrix = glm::toMat4(quat);
Vertex shader:
struct Instance
{
    vec4 position;
    mat4 model;
};
layout(std140) uniform RenderInstances
{
    Instance instance[500];
} instances;
layout(location = 1) in vec4 modelPos;
layout(location = 2) in vec4 modelColor;
layout(location = 3) out vec4 fragColor;
void main()
{
    fragColor = vec4(modelColor.r, modelColor.g, modelColor.b, 1);
    vec4 pos = instances.instance[gl_InstanceID].position + modelPos;
    gl_Position = camera.projection * camera.view * instances.instance[gl_InstanceID].model * pos;
}
I don't know where I went wrong. I do know that if I make the model matrix do a simple translation, that works as expected, so at least the uniform buffer works. The camera is also a uniform buffer shared across all shaders, and that works fine. Any comments on the shader itself are also welcome. Learning!
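For reference, a minimal sketch of how such a RenderInstances buffer can be filled on the CPU side (illustrative names rather than my exact code; with std140 this particular struct happens to match glm's packed layout, vec4 + mat4 = 80 bytes per instance):
#include <GL/glew.h>   // or whichever GL loader the project already uses
#include <glm/glm.hpp>
#include <vector>

// CPU mirror of the std140 Instance struct from the shader.
struct InstanceData
{
    glm::vec4 position;
    glm::mat4 model;
};
static_assert(sizeof(InstanceData) == 80, "must match the std140 layout of Instance");

void uploadInstances(GLuint ubo, const std::vector<InstanceData>& data)
{
    glBindBuffer(GL_UNIFORM_BUFFER, ubo);
    glBufferSubData(GL_UNIFORM_BUFFER, 0, data.size() * sizeof(InstanceData), data.data());
    glBindBuffer(GL_UNIFORM_BUFFER, 0);
}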

The translation to each vertex's final destination is happening before the rotation. That is what I didn't realize, even though I know to do rotations before translations.
Here's the shader code:
void main()
{
    fragColor = vec4(modelColor.r, modelColor.g, modelColor.b, 1);
    vec4 pos = instances.instance[gl_InstanceID].position + modelPos;
    gl_Position = camera.projection * camera.view * instances.instance[gl_InstanceID].model * pos;
}
Due to the associative nature of matrix multiplication, this can also be:
gl_Position = (projection * (view * (model * pos)));
Even though the multiplication happens left to right, the transformations happen right to left.
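A quick host-side check with glm (a standalone sketch, not part of the project code) makes the ordering concrete:
// Standalone sketch: the order of rotation and translation matters. Rotating
// first spins the model around its own origin; translating first makes it
// orbit the world origin instead.
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <cstdio>

int main()
{
    glm::vec4 vertex(1.0f, 0.0f, 0.0f, 1.0f);   // a model-space vertex
    glm::vec3 offset(10.0f, 0.0f, 0.0f);        // where the entity sits in the world
    glm::mat4 T = glm::translate(glm::mat4(1.0f), offset);
    glm::mat4 R = glm::rotate(glm::mat4(1.0f), glm::radians(90.0f), glm::vec3(0.0f, 1.0f, 0.0f));

    glm::vec4 rotateFirst    = T * R * vertex;  // about (10, 0, -1): rotated in place, then moved
    glm::vec4 translateFirst = R * T * vertex;  // about (0, 0, -11): moved, then swung around the origin

    std::printf("T*R*v = (%g, %g, %g)\n", rotateFirst.x, rotateFirst.y, rotateFirst.z);
    std::printf("R*T*v = (%g, %g, %g)\n", translateFirst.x, translateFirst.y, translateFirst.z);
    return 0;
}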
This is the old code to generate the model matrix:
renderc.ModelMatrix = glm::toMat4(glm::normalize(animc.Rotation));
This will result in the rotation happening with the model not at the origin, because the translation (the instance position already added into pos) sits at the right-hand end of the gl_Position = line and is therefore applied first.
This is now the code that generates the model matrix:
renderc.ModelMatrix = glm::translate(pos);
renderc.ModelMatrix *= glm::toMat4(glm::normalize(animc.Rotation));
renderc.ModelMatrix *= glm::translate(-pos);
Translate to the origin (-pos), rotate, then translate back (+pos).
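A standalone sanity check (illustrative values, not the engine code) confirms that the composed matrix leaves pos itself in place while everything else rotates around it:
// Standalone sketch: translate(pos) * rotation * translate(-pos) rotates
// around pos and maps pos onto itself.
#define GLM_ENABLE_EXPERIMENTAL          // required for the gtx headers in recent glm
#include <glm/glm.hpp>
#include <glm/gtx/transform.hpp>         // glm::translate(vec3)
#include <glm/gtx/quaternion.hpp>        // glm::toMat4
#include <cstdio>

int main()
{
    glm::vec3 pos(5.0f, 2.0f, 0.0f);
    glm::quat rot = glm::normalize(glm::angleAxis(glm::radians(45.0f), glm::vec3(0.0f, 1.0f, 0.0f)));

    glm::mat4 model = glm::translate(pos) * glm::toMat4(rot) * glm::translate(-pos);

    glm::vec4 pivot = model * glm::vec4(pos, 1.0f);                               // stays at about (5, 2, 0)
    glm::vec4 other = model * glm::vec4(pos + glm::vec3(1.0f, 0.0f, 0.0f), 1.0f); // swings around pos, not the world origin

    std::printf("pivot -> (%g, %g, %g)\n", pivot.x, pivot.y, pivot.z);
    std::printf("other -> (%g, %g, %g)\n", other.x, other.y, other.z);
    return 0;
}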

Related

Is it possible to use a shader to map 3d coordinates with Mercator-like projection?

The background:
I am writing some terrain visualiser and I am trying to decouple the rendering from the terrain generation.
At the moment, the generator returns some array of triangles and colours, and these are bound in OpenGL by the rendering code (using OpenTK).
So far I have a very simple shader which handles the rotation of the sphere.
The problem:
I would like the application to be able to display the results either as a 3D object, or as a 2D projection of the sphere (let's assume Mercator for simplicity).
I had thought this would be simple: I should compile an alternative shader for such cases. So, I have a vertex shader which almost works:
precision highp float;
uniform mat4 projection_matrix;
uniform mat4 modelview_matrix;
in vec3 in_position;
in vec3 in_normal;
in vec3 base_colour;
out vec3 normal;
out vec3 colour2;
vec3 fromSphere(in vec3 cart)
{
    vec3 spherical;
    spherical.x = atan(cart.x, cart.y) / 6;
    float xy = sqrt(cart.x * cart.x + cart.y * cart.y);
    spherical.y = atan(xy, cart.z) / 4;
    spherical.z = -1.0 + (spherical.x * spherical.x) * 0.1;
    return spherical;
}
void main(void)
{
    normal = vec3(0,0,1);
    normal = (modelview_matrix * vec4(in_normal, 0)).xyz;
    colour2 = base_colour;
    //gl_Position = projection_matrix * modelview_matrix * vec4(fromSphere(in_position), 1);
    gl_Position = vec4(fromSphere(in_position), 1);
}
However, it has a couple of obvious issues (see images below)
Saw-tooth pattern where triangle crosses the cut meridian
Polar region is not well defined
3D case (Typical shader):
2D case (above shader)
Both of these seem to reduce to the statement "A triangle in 3-dimensional space is not always even a single polygon on the projection". (... and this is before any discussion about whether great circle segments from the sphere are expected to be lines after projection ...).
(The x^2 term in z is already a hack to make it a little better: it ensures the projection is not flat, so that any stray edges (i.e. ones that straddle the cut meridian) end up safely behind the image.)
The question: Is what I want to achieve possible with a VertexShader / FragmentShader approach? If not, what's the alternative? I think I can re-write the application side to pre-transform the points (and cull / add extra polygons where needed) but it will need to know where the cut line for the projection is — and I feel that this information is analogous to the modelViewMatrix in the 3D case... which means taking this logic out of the shader seems a step backwards.
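For what it's worth, the pre-transform step I mean would start with something like this rough sketch (illustrative types and names, using the same atan(x, y) longitude convention as the shader above) to find the triangles that straddle the cut meridian and therefore need splitting or duplicating:
// Rough illustrative sketch, not working code from my project.
#include <cmath>

struct Vec3 { float x, y, z; };

const float PI = 3.14159265358979f;

// Longitude in (-pi, pi], matching atan(cart.x, cart.y) in the shader.
float longitudeOf(const Vec3& p)
{
    return std::atan2(p.x, p.y);
}

// True if any edge of the triangle crosses the cut at +/- pi: two vertices
// more than pi apart in longitude can only be joined across the cut.
bool straddlesCutMeridian(const Vec3& a, const Vec3& b, const Vec3& c)
{
    const float la = longitudeOf(a), lb = longitudeOf(b), lc = longitudeOf(c);
    return std::fabs(la - lb) > PI || std::fabs(lb - lc) > PI || std::fabs(la - lc) > PI;
}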
Thanks!

OpenGL flickering issue in rendering

I have an OpenGL rendering issue and I think it is due to a problem with the Z-buffer.
I have code that renders a set of points whose size depends on the distance from the camera, so bigger points are closer to the camera. Moreover, in the following snapshots the color reflects the z-buffer value of the fragment.
As you can see, there is a big point near the camera.
However, some frames later the same point is rendered behind more distant points.
These are the functions that I call before rendering the points:
glClearDepth(1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);
this is the vertex shader:
#version 330 core
layout (location = 0) in vec3 position;
layout (location = 1) in vec4 color;
// uniform variable
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
uniform float pointSize;
out vec4 fragColor;
void main() {
    gl_Position = projection * view * model * vec4(position, 1.0f);
    vec3 posEye = vec3(view * model * vec4(position, 1.0f));
    float Z = length(posEye);
    gl_PointSize = pointSize / Z;
    fragColor = color;
}
and this is the fragment shader
#version 330 core
in vec4 fragColor;
out vec4 outColor;
void main() {
    vec2 cxy = 2.0 * gl_PointCoord - 1.0;
    float r = dot(cxy, cxy);
    if(r > 1.0) discard;
    // calculate lighting
    vec3 pos = vec3(cxy.x,cxy.y,sqrt(1.0-r));
    vec3 lightDir = vec3(0.577, 0.577, 0.577);
    float diffuse = max(0.0, dot(lightDir, pos));
    float alpha = 1.0;
    float delta = fwidth(r);
    alpha = 1.0 - smoothstep(1.0 - delta, 1.0 + delta, r);
    outColor = fragColor * alpha * diffuse;
}
UPDATE
It looks like the problem was due to the definition of the near and far planes.
There is something that I do not understand about which values are best to use.
This is the function that I use to create the projection matrix:
glm::perspective(glm::radians(fov), width/(float)height, zNear, zFar);
where width=1600, height=1200, fov=45.
When things didn't work, zNear was set to zero and zFar was set to double the distance of the farthest point from the center of gravity of the point cloud, i.e. in my case 1.844.
If I move the near clipping plane from zero to 0.1, the flickering seems resolved. However, the distant objects that I saw before disappear. So I also changed the far plane to 10 and everything seems to work. Unfortunately, I don't understand why the values I used before were not good.
As already noted in the update, the issue is called Z-fighting, and it comes from choosing the wrong near and far planes in the projection matrix. If they are too far away from your objects, only a very small number of discrete z values is left. The drawing is then determined by the call order and the processing order in the shaders, since there is no difference in z. If you have a hint of what needs to come first, draw it in that order; but once the render call is issued, there is no proper way to determine which processor gets which object first, and that is what you see as flickering.
So please update the way you establish your projection matrix. Best practice: look at the objects that need to be rendered and determine a rough bounding box, turn it into a bounding sphere, and place the near plane at (distance from the camera to the sphere's center) - radius and the far plane at that distance + radius. Done!
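A minimal sketch of that rule (function and variable names are illustrative, not taken from your code):
// Illustrative helper: derive near/far from a bounding sphere of the scene.
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <algorithm>

glm::mat4 projectionFromBoundingSphere(const glm::vec3& cameraPos, const glm::vec3& center,
                                       float radius, float fovDeg, float width, float height)
{
    const float d = glm::length(center - cameraPos);
    const float zNear = std::max(d - radius, 0.01f);  // never zero or negative
    const float zFar  = d + radius;
    return glm::perspective(glm::radians(fovDeg), width / height, zNear, zFar);
}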
Update
looking into
glm::perspective(glm::radians(fov), width/(float)height, zNear, zFar);
shows the critical term for z (before normalization):
Result[3][2] = - (static_cast<T>(2) * zFar * zNear) / (zFar - zNear);
meaning that, besides requiring zFar != zNear, zNear should also never be set to zero. With zNear = 0 this term becomes zero, so there is effectively no difference left in z and it has to flicker. I would also assume that you did not apply any extra transform to your projection matrix; better not to. If all your objects live around the coordinate origin, and therefore also at the center of your projection, move them in front of the projection as a last step. That is, apply the translation not to the projection/view matrix but to your object (model) matrix, to avoid such an ill-formed projection space. E.g. set near to 1, and move all objects by that amount through your scene.
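A small standalone check (not from your code) shows what goes wrong numerically: with zNear = 0, every eye-space depth collapses onto the same NDC z value, while with zNear = 0.1 the depths stay separated.
// Standalone check of the degenerate projection, using the values from the question.
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <cstdio>

float ndcDepth(const glm::mat4& proj, float eyeZ)
{
    const glm::vec4 clip = proj * glm::vec4(0.0f, 0.0f, eyeZ, 1.0f);
    return clip.z / clip.w;
}

int main()
{
    const glm::mat4 bad  = glm::perspective(glm::radians(45.0f), 1600.0f / 1200.0f, 0.0f, 1.844f);
    const glm::mat4 good = glm::perspective(glm::radians(45.0f), 1600.0f / 1200.0f, 0.1f, 10.0f);

    // With zNear = 0 both depths map to NDC z = 1 -> Z-fighting everywhere.
    std::printf("zNear = 0:   %f %f\n", ndcDepth(bad, -0.5f), ndcDepth(bad, -1.5f));
    // With zNear = 0.1 the two depths are clearly distinct.
    std::printf("zNear = 0.1: %f %f\n", ndcDepth(good, -0.5f), ndcDepth(good, -1.5f));
    return 0;
}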

Displacing Vertices in the Vertex shader

My code creates a grid of lots of vertices and then displaces their height by a random noise value in the vertex shader. Moving around using
glTranslated(-camera.x, -camera.y, -camera.z);
works fine, but that would mean you could only go as far as the edge of the grid.
I thought about sending the camera position to the shader, and letting the whole displacement get offset by it. Hence the final vertex shader code:
uniform vec3 camera_position;
varying vec4 position;
void main()
{
    vec3 tmp;
    tmp = gl_Vertex.xyz;
    tmp -= camera_position;
    gl_Vertex.y = snoise(tmp.xz / NOISE_FREQUENCY) * NOISE_SCALING;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    position = gl_Position;
}
EDIT: I fixed a flaw; the vertices themselves should not get horizontally displaced.
For some reason though, when I run this and change the camera position, the screen flickers but nothing moves. Vertices are displaced by the noise value correctly, and coloring and everything else works, but the displacement doesn't move.
For reference, here is the git repository: https://github.com/Orpheon/Synthworld
What am I doing wrong?
PS: "Flickering" is wrong. It's as if with some positions it doesn't draw anything, and others it draws the normal scene from position 0, so if I move without stopping it flickers. I can stop at a spot where it stays black though, or at one where it stays normal.
gl_Position = ftransform();
That's what you're doing wrong. ftransform does not implicitly take gl_Vertex as a parameter. It simply does exactly what it's supposed to: perform transformation of the vertex position exactly as though this were the fixed-function pipeline. So it uses the value that gl_Vertex had initially.
You need to do proper transformations on this:
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
I managed to fix this bug by using glUniform3fv:
main.c excerpt:
// Sync the camera position variable
GLint camera_pos_ptr;
float f[] = {camera.x, camera.y, camera.z};
camera_pos_ptr = glGetUniformLocation(shader_program, "camera_position");
glUniform3fv(camera_pos_ptr, 1, f);
// draw the display list
glCallList(grid);
Vertex shader:
uniform vec3 camera_position;
varying vec4 position;
void main()
{
    vec4 newpos;
    newpos = gl_Vertex;
    newpos.y = (snoise( (newpos.xz + camera_position.xz) / NOISE_FREQUENCY ) * NOISE_SCALING) - camera_position.y;
    gl_Position = gl_ModelViewProjectionMatrix * newpos;
    position = gl_Position;
}
If someone wants to see the complete code, here is the git repository again: https://github.com/Orpheon/Synthworld

GLSL light/highlight moving with camera

VC++ 2010, OpenGL, GLSL, SDL
I am moving over to shaders and have run into a problem that originally occurred while working with the fixed-function OpenGL pipeline. That is, the position of the light seems to follow whatever direction my camera faces. In the fixed-function pipeline it was just the specular highlight, which was fixable with:
glLightModelf(GL_LIGHT_MODEL_LOCAL_VIEWER, 1.0f);
Here are the two shaders:
Vertex
varying vec3 lightDir,normal;
void main()
{
    normal = normalize(gl_NormalMatrix * gl_Normal);
    lightDir = normalize(vec3(gl_LightSource[0].position));
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position = ftransform();
}
Fragment
varying vec3 lightDir,normal;
uniform sampler2D tex;
void main()
{
    vec3 ct,cf;
    vec4 texel;
    float intensity,at,af;
    intensity = max(dot(lightDir,normalize(normal)),0.0);
    cf = intensity * (gl_FrontMaterial.diffuse).rgb +
         gl_FrontMaterial.ambient.rgb;
    af = gl_FrontMaterial.diffuse.a;
    texel = texture2D(tex,gl_TexCoord[0].st);
    ct = texel.rgb;
    at = texel.a;
    gl_FragColor = vec4(ct * cf, at * af);
}
Any help would be much appreciated!
The question is: What coordinate system (reference frame) do you want the lights to be in? Probably "the world".
OpenGL's fixed-function pipeline, however, has no notion of world coordinates, because it uses a modelview matrix, which transforms directly from model coordinates to eye (camera) coordinates. In order to have “fixed” lights, you could do one of these:
The classic OpenGL approach is to, every frame, set up the modelview matrix to be the view transform only (that is, be the coordinate system you want to specify your light positions in) and then use glLight to set the position (which is specified to apply the modelview matrix to the input).
Since you are using shaders, you could also have separate model and view matrices and have your shader apply both (rather than using ftransform) to vertices, but only the view matrix to lights. However, this means more per-vertex matrix operations and is probably not an especially good idea unless you are looking for clarity rather than performance.
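A minimal sketch of the first (classic) option, with illustrative camera and light values:
// Illustrative sketch of the classic fixed-function approach: per frame, load
// only the view transform into the modelview matrix, then specify the light.
// glLightfv transforms the position you pass by the current modelview matrix,
// so the light stays anchored in the world instead of following the camera.
#include <GL/glu.h>

void setupCameraAndLight()
{
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(0.0, 2.0, 5.0,    // eye position (illustrative)
              0.0, 0.0, 0.0,    // look-at target
              0.0, 1.0, 0.0);   // up vector

    const GLfloat lightPosWorld[4] = { 10.0f, 20.0f, 10.0f, 1.0f }; // w = 1: positional light
    glLightfv(GL_LIGHT0, GL_POSITION, lightPosWorld);

    // Per-object model transforms (glTranslatef, glRotatef, ...) come after
    // this point, so they affect the geometry but not the light position.
}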

GLSL Phong Light, Camera Problem

I just started writing a Phong shader.
vert:
varying vec3 normal, eyeVec;
#define MAX_LIGHTS 8
#define NUM_LIGHTS 3
varying vec3 lightDir[MAX_LIGHTS];
void main() {
    gl_Position = ftransform();
    normal = gl_NormalMatrix * gl_Normal;
    vec4 vVertex = gl_ModelViewMatrix * gl_Vertex;
    eyeVec = -vVertex.xyz;
    int i;
    for (i=0; i<NUM_LIGHTS; ++i){
        lightDir[i] = vec3(gl_LightSource[i].position.xyz - vVertex.xyz);
    }
}
I know that I need to pass the camera position in with a uniform, but how, and where do I put this value?
PS: I'm using OpenGL 2.0.
You don't need to pass the camera position, because, well, there is no camera in OpenGL.
Lighting calculations are performed in eye space, i.e. after the multiplication with the modelview matrix, which also performs the "camera positioning". So actually you already have the right things in place. Using ftransform() is a little inefficient, though, since you're redoing half of what it does (gl_ModelViewMatrix * gl_Vertex); you could drop ftransform() and write gl_Position = gl_ProjectionMatrix * vVertex instead.
So if your lights seem to move when your "camera" transforms, you're not transforming the lights' positions properly. Either precompute the transformed light positions, or transform them in the shader as well. With shaders it is more a matter of chosen convention than a fixed constraint.
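For the "precompute" option, a minimal host-side sketch (the uniform name lightPosEye and this helper are illustrative, assuming glm plus a loader that exposes the GL 2.0 entry points):
// Illustrative helper: transform a world-space light into eye space once per
// frame on the CPU, using the same view matrix that is applied to the vertices.
#include <GL/glew.h>
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

void uploadEyeSpaceLight(GLuint program, const glm::mat4& view, const glm::vec3& lightPosWorld)
{
    const glm::vec3 lightPosEye = glm::vec3(view * glm::vec4(lightPosWorld, 1.0f));

    glUseProgram(program);
    glUniform3fv(glGetUniformLocation(program, "lightPosEye"), 1, glm::value_ptr(lightPosEye));
}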