How to use the mouse to change the OpenGL camera

I'm trying to set up a camera in OpenGL to view some points in 3 dimensions.
To achieve this, I don't want to use the old, fixed-function style (glMatrixMode(), glTranslate(), etc.) but rather set up the Model View Projection matrix myself and use it in my vertex shader. An orthographic projection is sufficient.
A lot of tutorials seem to use the GLM library for this, but since I'm completely new to OpenGL, I'd like to learn it the right way and afterwards use some third-party libraries. Additionally, most tutorials don't describe how to use glutMotionFunc() and glutMouseFunc() to position the camera in space.
So, I guess I'm looking for some sample code and guidance on how to see my points in 3D. Here's the vertex shader I've written:
const GLchar *vertex_shader = // Vertex Shader
    "#version 330\n"
    "layout (location = 0) in vec4 in_position;"
    "layout (location = 1) in vec4 in_color;"
    "uniform float myPointSize;"
    "uniform mat4 myMVP;"
    "out vec4 color;"
    "void main()"
    "{"
    "    color = in_color;"
    "    gl_Position = in_position * myMVP;"
    "    gl_PointSize = myPointSize;"
    "}\0";
I set up the initial value of the MVP to be the identity matrix in my shader set-up method, which gives me the correct 2D representation of my points:
// Set up initial values for uniform variables
glUseProgram(shader_program);
location_pointSize = glGetUniformLocation(shader_program, "myPointSize");
glUniform1f(location_pointSize, 25.0f);
location_mvp = glGetUniformLocation(shader_program, "myMVP");
float mvp_array[16] = { 1.0f, 0.0f, 0.0f, 0.0f,   // 1st column
                        0.0f, 1.0f, 0.0f, 0.0f,   // 2nd column
                        0.0f, 0.0f, 1.0f, 0.0f,   // 3rd column
                        0.0f, 0.0f, 0.0f, 1.0f }; // 4th column
glUniformMatrix4fv(location_mvp, 1, GL_FALSE, mvp_array);
glUseProgram(0);
Now my question is how to adapt the two functions "motion" and "mouse", which up to this point only contain some code from a previous example, where the deprecated style of doing this was used:
// OLD, UNUSED VARIABLES
int mouse_old_x;
int mouse_old_y;
int mouse_buttons = 0;
float rotate_x = 0.0;
float rotate_y = 0.0;
float translate_z = -3.0;
...
// set view matrix
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0.0, 0.0, translate_z);
glRotatef(rotate_x, 1.0, 0.0, 0.0);
glRotatef(rotate_y, 0.0, 1.0, 0.0);
...
// OLD, UNUSED FUNCTIONS
void mouse(int button, int state, int x, int y)
{
    if (state == GLUT_DOWN)
    {
        mouse_buttons |= 1 << button;
    }
    else if (state == GLUT_UP)
    {
        mouse_buttons = 0;
    }
    mouse_old_x = x;
    mouse_old_y = y;
}

void motion(int x, int y)
{
    float dx, dy;
    dx = (float)(x - mouse_old_x);
    dy = (float)(y - mouse_old_y);
    if (mouse_buttons & 1)
    {
        rotate_x += dy * 0.2f;
        rotate_y += dx * 0.2f;
    }
    else if (mouse_buttons & 4)
    {
        translate_z += dy * 0.01f;
    }
    mouse_old_x = x;
    mouse_old_y = y;
}

I'd like to learn it the right way and afterwards use some third-party libraries.
There's nothing wrong with using GLM, as GLM is just a math library for dealing with matrices. It's a very good thing that you want to learn the very basics; a trait only seldom seen these days. Knowing these things is invaluable when doing advanced OpenGL.
Okay, three things to learn for you:
Basic discrete linear algebra, i.e. how to deal with matrices and vectors with discrete elements. Scalar and complex elements will suffice for the time being.
A little bit of numerics. You must be able to write code performing the elementary linear algebra operations: scaling and adding vectors, taking inner and outer products of vectors, performing matrix-vector and matrix-matrix multiplication, and inverting a matrix.
Learn about homogeneous coordinates.
( 4. if you want to spice things up, learn quaternions, those things rock! )
After Step 3 you're ready to write your own linear math code, even if you don't know about homogeneous coordinates yet. Just write it to deal efficiently with matrices of dimension 4×4 and vectors of dimension 4.
Once you've mastered homogeneous coordinates you understand what OpenGL actually does. And then: drop those first coding steps of writing your own linear math library. Why? Because it will be full of bugs. The one small linmath.h I maintain is riddled with them; every time I use it in a new project I fix a number of them. Hence I recommend you use something well tested, like GLM or Eigen.
I set up the initial value of the MVP to be the identity matrix in my shader set-up method, which gives me the correct 2D representation of my points:
You should separate these into 3 matrices: Model, View and Projection. In your shader you should have two: Modelview and Projection. I.e. you pass the Projection to the shader as it is, but calculate a compound Model · View = Modelview matrix and pass it in a separate uniform.
To move the "camera" you modify the View matrix.
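As a minimal sketch of that separation (assuming GLM, as recommended above; the uniform names myModelview and myProjection are hypothetical, your shader currently only has myMVP):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp> // glm::ortho
#include <glm/gtc/type_ptr.hpp>         // glm::value_ptr

glm::mat4 model      = glm::mat4(1.0f); // per-object placement
glm::mat4 view       = glm::mat4(1.0f); // the "camera"; modified by mouse input
glm::mat4 projection = glm::ortho(-1.0f, 1.0f, -1.0f, 1.0f, -10.0f, 10.0f);

glm::mat4 modelview = view * model;     // contract Model and View on the CPU

glUseProgram(shader_program);
glUniformMatrix4fv(glGetUniformLocation(shader_program, "myModelview"),
                   1, GL_FALSE, glm::value_ptr(modelview));
glUniformMatrix4fv(glGetUniformLocation(shader_program, "myProjection"),
                   1, GL_FALSE, glm::value_ptr(projection));
glUseProgram(0);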
Now my question is how to adapt the two functions "motion" and "mouse", which up to this point only contain some code from a previous example, where the deprecated style of doing this was used:
Most of this code remains the same, as it doesn't touch OpenGL. What you have to replace are those glRotate and glTranslate calls.
You're working on the View matrix, as already said. First let's look at what glRotate does. In fixed-function OpenGL there's an internal alias, let's call it M, that is set to whatever matrix is selected with glMatrixMode. Then we can write glRotate in pseudocode as
proc glRotate(angle, vec_x, vec_y, vec_z):
    mat4x4 R = make_rotation_matrix(angle, vec_x, vec_y, vec_z)
    M = M · R
Okay, all the magic seems to lie within the function make_rotation_matrix. How does that one look? Well, since you're learning linear algebra this is a great exercise for you. Find the matrix R with the following properties:
a = R·a, where a is the axis of rotation
cos(φ) = b·c and b·a = 0 and c·a = 0, where c = R·b and φ is the angle of rotation
Since you probably just want to get this thing done, you can also just look into the OpenGL 1.1 specification, which documents this matrix in its section about glRotatef.
Right beside them you can find the specs for all the other matrix manipulation functions.
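For reference, here is a sketch of what make_rotation_matrix has to produce, following the matrix documented for glRotate in the OpenGL 1.1 spec (this sketch assumes the axis is already normalized and the angle is in radians, whereas glRotatef itself takes degrees):

#include <math.h>

// Rotation about the normalized axis (x, y, z) by angle (radians),
// stored column-major as OpenGL expects.
void make_rotation_matrix(float angle, float x, float y, float z, float R[16])
{
    float c = cosf(angle), s = sinf(angle), t = 1.0f - c;
    R[0] = x*x*t + c;   R[4] = x*y*t - z*s; R[8]  = x*z*t + y*s; R[12] = 0.0f;
    R[1] = x*y*t + z*s; R[5] = y*y*t + c;   R[9]  = y*z*t - x*s; R[13] = 0.0f;
    R[2] = x*z*t - y*s; R[6] = y*z*t + x*s; R[10] = z*z*t + c;   R[14] = 0.0f;
    R[3] = 0.0f;        R[7] = 0.0f;        R[11] = 0.0f;        R[15] = 1.0f;
}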
Now, instead of operating on some hidden state variable selected with glMatrixMode, you let your matrix math library operate directly on the matrix variable you define and supply; in your case, View. You do the same with Projection and Model. Then, when you're rendering, you contract Model and View into the compound already mentioned. The reason for this is that you often want the intermediate result of bringing the vertex position into eye space (Modelview * position, for the fragment shader). After determining the matrix values you bind the program (glUseProgram), set the uniform values, and then render your geometry (glDraw…).
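Concretely, your motion handler keeps all its bookkeeping and only the matrix construction changes. A sketch using GLM (the global View variable is an assumption of this sketch; upload it, combined with Model, as described above):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 View(1.0f);

void motion(int x, int y)
{
    float dx = (float)(x - mouse_old_x);
    float dy = (float)(y - mouse_old_y);
    if (mouse_buttons & 1) {
        rotate_x += dy * 0.2f;
        rotate_y += dx * 0.2f;
    } else if (mouse_buttons & 4) {
        translate_z += dy * 0.01f;
    }
    mouse_old_x = x;
    mouse_old_y = y;

    // This replaces glLoadIdentity/glTranslatef/glRotatef:
    View = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, translate_z));
    View = glm::rotate(View, glm::radians(rotate_x), glm::vec3(1.0f, 0.0f, 0.0f));
    View = glm::rotate(View, glm::radians(rotate_y), glm::vec3(0.0f, 1.0f, 0.0f));

    glutPostRedisplay();
}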

Related

How to rotate a translated texture around its center

I'm working on the following shader that
translates (on y)
rotates
repeats (tiles)
a texture:
uniform sampler2D texture;
uniform vec2 resolution;
varying vec4 vertColor;
varying vec4 vertTexCoord;
uniform float rotation;
uniform float yTranslation;

void main() {
    vec2 repeat = vec2(2, 2);
    vec2 coord = vertTexCoord.st;
    coord.y += yTranslation;

    float sin_factor = sin(rotation);
    float cos_factor = cos(rotation);
    coord += vec2(0.5);
    coord = coord * mat2(cos_factor, sin_factor, -sin_factor, cos_factor) * 0.3;
    coord -= vec2(0.5);

    coord = vec2(mod(coord.x * repeat.x, 1.0f), mod(coord.y * repeat.y, 1.0f));
    gl_FragColor = texture2D(texture, coord) * vertColor;
}
(The original post included two images showing the current behavior and the desired behavior.)
I want the texture to always rotate around the center, no matter how far it has been translated.
Simply swapping the order of the steps results in weird behavior. What am I missing?
Your problem statement in the question is really wrong. Your additional comment (to a now deleted answer):
I have a boat that always stays in the center of the screen, the water texture (controlled by this shader) under it moves to make it look like the boat is moving. The movement of the water texture is controlled by rotation (for steering) and yTranslation (for how far the boat has moved forwards/backwards)
makes it clear that you're asking for a different thing, and the approach described in the question is simply not going to solve your problem.
When your boat moves and rotates, it will basically travel on a curve, and you want the inverse of that curve to travel through texture space. But your 2-DOF parameters rotation and yTranslation are not capable of describing the curve. Your problem needs at least another parameter xTranslation; in the end, you need a 2D vector describing the position of your boat plus an angle describing the rotation. And you need to properly accumulate this data at each simulation step (see the sketch after this list):
update the rotation accordingly
calculate the 2D vector your ship is heading to, as defined by the current rotation
scale it according to the velocity of the movement
accumulate it onto the position vector.
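A sketch of that accumulation (all names here are illustrative, not from your code):

#include <glm/glm.hpp>
#include <math.h>

float rotation = 0.0f;                // boat heading, radians
glm::vec2 position = glm::vec2(0.0f); // boat position in texture space

void simulate(float steering, float velocity, float dt)
{
    rotation += steering * dt;                         // 1. update the rotation
    glm::vec2 heading(sinf(rotation), cosf(rotation)); // 2. direction the ship heads to
    position += heading * velocity * dt;               // 3. + 4. scale and accumulate
}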
Then, your shader simply has to
1. Translate the texcoords by position (or -position, whatever you store)
2. Rotate around the pivot point (which is constant and only depends on how you laid out your texture space)
coord = vec2(mod(coord.x * repeat.x, 1.0f), mod(coord.y * repeat.y, 1.0f));
That's a waste of GPU ALU cycles; the TMUs will already do the mod for you with the GL_REPEAT wrap mode.
However, what you now have here is rotation, scaling and translation, so just use a single matrix for the whole texcoord transformation. The accumulation of the 2D position that I talked about earlier can nicely be done with the matrix representations. It will also remove the sin and cos from your shader, which is another huge waste right now.
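A sketch of that single matrix on the CPU side (this assumes GLM's GTX matrix_transform_2d extension is available; uTexMatrix is a hypothetical mat3 uniform, and the exact composition order depends on your conventions):

#define GLM_ENABLE_EXPERIMENTAL
#include <glm/glm.hpp>
#include <glm/gtx/matrix_transform_2d.hpp>

glm::mat3 m(1.0f);
m = glm::translate(m, glm::vec2(0.5f));  // move the pivot back
m = glm::rotate(m, rotation);            // rotate around the origin
m = glm::translate(m, glm::vec2(-0.5f)); // move the pivot to the origin
m = glm::translate(m, -position);        // inverse of the boat's travel

// Upload with glUniformMatrix3fv; in the fragment shader:
//   vec2 coord = (uTexMatrix * vec3(vertTexCoord.st, 1.0)).xy;
// and let GL_REPEAT do the tiling instead of mod().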

Omnidirectional Lighting in OpenGL/GLSL 4.1

I've gotten shadows working properly for my Directional Lights, but I'm a little stumped when it comes to Point Lights. My idea is to use a cube map to render the depth from all six sides surrounding the light. So far, that's all working fine. I have verified this step by rendering each face of my cube to a 2D image, and it appears to be correct.
Now I'm trying to get the shadows to show up in the world. To do so, I am using GLSL's samplerCubeShadow data type. With it, I do:
vec3 lightToFrag = light.position - fragPos;
float lenLightToFrag = length(lightToFrag);
vec3 normLightToFrag = normalize(lightToFrag);
float shadow = texture(depthTexture, vec4(normLightToFrag, lenLightToFrag));
I've tried a number of configurations, and this always renders my scene in black. Any ideas? My fragPos is just the model matrix times the vertex position. Should I be applying the light's model-view matrix to it? Or, similarly, should I be applying the world's model-view matrix to the light? Any feedback is really appreciated!
Assuming you are storing depth values in the cubemap:
AFAIK the cubemap is an AABB (axis-aligned) in world space, so you need to do the calculations in world space. In your case light.position and fragPos must be in world space; or provide alternative variables/members if you use these names in view space somewhere else, e.g. for per-fragment light calculations.
You also need to convert lightToFrag to a depth value before passing it to texture().
This answer shows how to convert lightToFrag to a depth value: Omnidirectional shadow mapping with depth cubemap
Here is my implementation (I removed #ifdef SHAD_CUBE because others use the same name):
uniform samplerCubeShadow uShadMap;
uniform vec2 uFarNear;

float depthValue(const in vec3 v) {
    vec3 absv = abs(v);
    float z = max(absv.x, max(absv.y, absv.z));
    return uFarNear.x + uFarNear.y / z;
}

float shadowCoef() {
    vec3 L;
    float d;
    L = vPosWS - light.position_ws;
    d = depthValue(L);
    return texture(uShadMap, vec4(L, d));
}
This may require a uniform model matrix if you only have the ModelViewProjection (MVP) matrix. Here is how to calculate uFarNear on the client side:
float n, f, nfsub, nf[2];

n     = sm->near;
f     = sm->far;
nfsub = f - n;
nf[0] =  (f + n) / nfsub * 0.5f + 0.5f;
nf[1] = -(f * n) / nfsub;

glUniform2f(gkUniformLoc(prog, "uFarNear"), nf[0], nf[1]);
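For reference, these values fall out of the standard perspective depth mapping. Assuming each cubemap face was rendered with a perspective projection with near plane n and far plane f, a point at eye-space distance z in front of the camera ends up at window-space depth

depth(z) = (f + n) / (f - n) * 0.5 + 0.5 - (f * n) / (f - n) / z
         = nf[0] + nf[1] / z

which is exactly what depthValue() computes, with z taken as the dominant axis of the lookup vector (the axis of the cube face being sampled).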
This is just an optimization; you don't have to use it, you can follow the link I mentioned before instead.
You may need a bias value; the related answer uses a bias, but I'm not sure how to apply it to a cubemap correctly. I'm not sure whether d ± 0.0001 is the correct way or not.
If you want to store world distances in the cubemap, then this tutorial seems a good one: https://learnopengl.com/Advanced-Lighting/Shadows/Point-Shadows

My 3d OpenGL object rotates around the world origin, not local-space origin. What am I doing wrong or misunderstanding?

I've been stuck on this for two days now and I'm unsure where else to look. I'm rendering two 3D cubes using OpenGL, and trying to apply a local rotation to each cube in the scene in response to me pressing a button.
I've gotten to the point where my cubes rotate in 3D space, but they're both rotating about the world-space origin instead of their own local origins.
(couple second video)
https://www.youtube.com/watch?v=3mrK4_cCvUw
After scouring the internet, the appropriate formula for calculating the MVP appears to be as follows:
auto const model = TranslationMatrix * RotationMatrix * ScaleMatrix;
auto const modelview = projection * view * model;
Each of my cubes has its own "model", which is defined as follows:
struct model
{
    glm::vec3 translation;
    glm::quat rotation;
    glm::vec3 scale = glm::vec3{1.0f};
};
When I press a button on my keyboard, I create a quaternion representing the new angle and multiply it with the previous rotation quaternion, updating it in place.
The function looks like this:
template<typename TData>
void rotate_entity(TData &data, ecst::entity_id const eid, float const angle,
                   glm::vec3 const& axis) const
{
    auto &m = data.get(ct::model, eid);
    auto const q = glm::angleAxis(glm::degrees(angle), axis);
    m.rotation = q * m.rotation;

    // I'm a bit unsure on this last line above; I've also tried the following
    // without fully understanding the difference:
    // m.rotation = m.rotation * q;
}
The axis is provided by the user like so:
// inside user-input handling function
float constexpr ANGLE = 0.2f;
...
// y-rotation
case SDLK_u: {
    auto constexpr ROTATION_VECTOR = glm::vec3{0.0f, 1.0f, 0.0f};
    rotate_entities(data, ANGLE, ROTATION_VECTOR);
    break;
}
case SDLK_i: {
    auto constexpr ROTATION_VECTOR = glm::vec3{0.0f, -1.0f, 0.0f};
    rotate_entities(data, ANGLE, ROTATION_VECTOR);
    break;
}
My GLSL vertex shader is pretty straightforward, based on what I've found in example code out there:
// attributes input to the vertex shader
in vec4 a_position; // position value

// output of the vertex shader - input to fragment shader
out vec3 v_uv;

uniform mat4 u_mvmatrix;

void main()
{
    gl_Position = u_mvmatrix * a_position;
    v_uv = vec3(a_position.x, a_position.y, a_position.z);
}
Inside my draw code, the exact code I'm using to calculate the MVP for each cube is:
...
auto const& model = shape.model();
auto const tmatrix = glm::translate(glm::mat4{}, model.translation);
auto const rmatrix = glm::toMat4(model.rotation);
auto const smatrix = glm::scale(glm::mat4{}, model.scale);
auto const mmatrix = tmatrix * rmatrix * smatrix;
auto const mvmatrix = projection * view * mmatrix;
// simple wrapper that does logging and forwards to glUniformMatrix4fv()
p.set_uniform_matrix_4fv(logger, "u_mvmatrix", mvmatrix);
Earlier in my program, I calculate my view/projection matrices like so:
auto const windowheight = static_cast<GLfloat>(hw.h);
auto const windowwidth = static_cast<GLfloat>(hw.w);
auto projection = glm::perspective(60.0f, (windowwidth / windowheight), 0.1f, 100.0f);
auto view = glm::lookAt(
glm::vec3(0.0f, 0.0f, 1.0f), // camera position
glm::vec3(0.0f, 0.0f, -1.0f), // look at origin
glm::vec3(0.0f, 1.0f, 0.0f)); // "up" vector
The positions of my cubes in world space are on the Z axis, so they should be visible:
cube0.set_world_position(0.0f, 0.0f, 0.0f, 1.0f);
cube1.set_world_position(-0.7f, 0.7f, 0.0f, 1.0f);
// I call set_world_position() exactly once, before my game enters its main loop,
// and never again after that. It just modifies the vertex used as the center of
// the shape; it doesn't touch the model matrix at all.
So, my question is: is this the appropriate way to update a rotation for an object?
Should I be storing a quaternion directly in my object's "model"?
Should I be storing my translation and scaling as separate vec3's?
Is there an easier way to do this? I've been reading and re-reading anything I can find, but I don't see anyone doing this in the same way.
This tutorial is a bit short on details, specifically on how to apply a rotation to an existing rotation (I believe this is just multiplying the quaternions together, which is what I'm doing inside rotate_entity(...) above).
http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-17-quaternions/
https://github.com/opengl-tutorials/ogl/blob/master/tutorial17_rotations/tutorial17.cpp#L306-L311
Does it make more sense to store the resulting "MVP" matrix myself as my "model" and apply glm::translate/glm::scale/glm::rotate operations on the MVP matrix directly? (I tried this last option earlier, but I couldn't figure out how to get that to work either.)
Thanks!
edit: better link
Generally, you don't want to modify the position of your model's individual vertices on the CPU. That's the entire purpose of the vertex program. The purpose of the model matrix is to position the model in the world in the vertex program.
To rotate a model around its center, you need to first move the center to the origin, then rotate it, then move the center to its final position. So let's say you have a cube that stretches from (0,0,0) to (1,1,1). You need to:
Translate the cube by (-0.5, -0.5, -0.5)
Rotate by the angle
Translate the cube by (0.5, 0.5, 0.5)
Translate the cube to wherever it belongs in the scene
You can combine the last 2 translations into a single one, and of course, you can collapse all of these transformations into a single matrix that is your model matrix.
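With a matrix library like GLM, that composition might look like this sketch (angle, axis and scenePos are placeholder names; note that GLM post-multiplies, so the calls appear in reverse order of application, and modern GLM expects the angle in radians):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Model matrix for a cube spanning (0,0,0)..(1,1,1), rotated about its center:
glm::mat4 model(1.0f);
model = glm::translate(model, scenePos);         // 4. place it in the scene
model = glm::translate(model, glm::vec3(0.5f));  // 3. move the center back
model = glm::rotate(model, angle, axis);         // 2. rotate
model = glm::translate(model, glm::vec3(-0.5f)); // 1. move the center to the origin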

Understanding an OpenGL shader code - cubemap?

I found this code on the internet, http://rioki.org/2013/03/07/glsl-skybox.html, for a cubemap environmental texture (actually rendering a skybox). But I do not understand why it works.
void main()
{
    mat4 r = gl_ModelViewMatrix;
    r[3][0] = 0.0;
    r[3][1] = 0.0;
    r[3][2] = 0.0;
    vec4 v = inverse(r) * inverse(gl_ProjectionMatrix) * gl_Vertex;
    gl_TexCoord[0] = v;
    gl_Position = gl_Vertex;
}
So gl_Vertex is in world coordinates, but what do we get by multiplying that by the inverse of the projection matrix and then by the inverse of the modelview matrix?
This is the code I use to draw my skybox:
void SkyBoxDraw(void)
{
    GLfloat SkyRad = 1.0f;

    glUseProgramObjectARB(glsl_program_skybox);
    glDepthMask(0);
    glDisable(GL_DEPTH_TEST);

    // Cull backs of polygons
    glCullFace(GL_BACK);
    glEnable(GL_CULL_FACE);
    glEnable(GL_TEXTURE_CUBE_MAP);

    glBegin(GL_QUADS);
    //////////////////////////////////////////////
    // Negative X
    glTexCoord3f(-1.0f, -1.0f, 1.0f);
    glVertex3f(-SkyRad, -SkyRad, SkyRad);
    glTexCoord3f(-1.0f, -1.0f, -1.0f);
    glVertex3f(-SkyRad, -SkyRad, -SkyRad);
    glTexCoord3f(-1.0f, 1.0f, -1.0f);
    glVertex3f(-SkyRad, SkyRad, -SkyRad);
    glTexCoord3f(-1.0f, 1.0f, 1.0f);
    glVertex3f(-SkyRad, SkyRad, SkyRad);
    ......
    ......
    glEnd();

    glDepthMask(1);
    glDisable(GL_CULL_FACE);
    glEnable(GL_DEPTH_TEST);
    glDisable(GL_TEXTURE_CUBE_MAP);
    glUseProgramObjectARB(0);
}
So gl_Vertex is in world coordinates ...
No no no, gl_Vertex is in object/model space unless I see some code elsewhere (e.g. how your vertex position is calculated in the actual non-shader portion of your program) that indicates otherwise :) In OpenGL we skip from object space to eye/view/camera space when we multiply by the combined Model*View matrix. As you can see, there are lots of names for the same coordinate spaces, but object space definitely is not a synonym for world space. Setting the fourth column of r to (0, 0, 0, 1) basically re-positions the camera's origin without affecting direction, which is useful when all you want to know is the direction for a cubemap lookup.
That is, in a nutshell, what you want when using cubemaps: just a simple direction vector. The fact that textureCube (...) takes a 3D vector instead of a 4D one is an immediate hint that it is looking for a direction instead of a position. Position vectors have a 4th component, directions do not. So, technically, if you wanted to port this shader to modern OpenGL you would probably use an out vec3 and swizzle .xyz off of v, since v.w is unnecessary.
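Such a port might look like this sketch (the attribute/uniform names are assumptions; the inverses are computed on the CPU and uploaded, since the fixed-function matrices no longer exist in modern OpenGL):

#version 330

layout (location = 0) in vec4 a_position; // clip-space corners of a fullscreen quad
uniform mat4 u_invProjection;             // inverse(projection)
uniform mat4 u_invRotView;                // inverse(view with translation zeroed)
out vec3 v_texCoord;

void main()
{
    vec4 v = u_invRotView * (u_invProjection * a_position);
    v_texCoord = v.xyz; // direction only; w is unnecessary
    gl_Position = a_position;
}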
... but what do we get by multiplying that by inverse of projection matrix and then modelview matrix?
You are basically undoing the projection when you multiply by the inverses of these matrices. The only way this shader makes sense is if the coordinates you are passing for your vertices are defined in clip space. So instead of going from object space through the GL pipeline and winding up in screen space at the end, you want the reverse of that; only, since the viewport is not involved in your shader, we cannot be dealing with screen space. A little bit more information on how your vertex positions are calculated should clear this up.

Why isn't this orthographic vertex shader producing the correct answer?

My issue is that I have a (working) orthographic vertex and fragment shader pair that allows me to specify the center X and Y of a sprite via 'translateX' and 'translateY' uniforms being passed in. I multiply by a projectionMatrix that is hardcoded and works great. Everything works as far as orthographic operation goes. My incoming geometry to this shader is based around (0, 0, 0) as its center point.
I now want to figure out what that center point (0, 0, 0 in local coordinate space) becomes after the translations. I need to know this information in the fragment shader. I assumed that I could create a vector at (0, 0, 0) and then apply the same translations. However, I'm NOT getting the correct answer.
My question: what am I doing wrong, and how can I even debug what's going on? I know that the value being computed must be wrong, but I have no insight into what it is. (My platform is Xcode 4.2 on OS X, developing for OpenGL ES 2.0 on iOS.)
Here's my vertex shader:
// Vertex Shader for pixel-accurate rendering
attribute vec4 a_position;
attribute vec2 a_texCoord;
varying vec2 v_texCoord;
varying vec4 v_translatedOrigin;
uniform float translateX;
uniform float translateY;

// Set up orthographic projection
// this is for 640 x 960
mat4 projectionMatrix = mat4(2.0/960.0, 0.0,       0.0, -1.0,
                             0.0,       2.0/640.0, 0.0, -1.0,
                             0.0,       0.0,      -1.0,  0.0,
                             0.0,       0.0,       0.0,  1.0);
void main()
{
    // Set position
    gl_Position = a_position;

    // Translate by the uniforms for offsetting
    gl_Position.x += translateX;
    gl_Position.y += translateY;

    // Translate
    gl_Position *= projectionMatrix;

    // Do all the same translations to a vector with origin at 0,0,0
    vec4 toPass = vec4(0, 0, 0, 1); // initialize. doesn't matter if w is 1 or 0
    toPass.x += translateX;
    toPass.y += translateY;
    toPass *= projectionMatrix;

    // this SHOULD pass the computed value to my fragment shader.
    // unfortunately, whatever value is sent isn't right.
    //v_translatedOrigin = toPass;

    // instead, I use this as a workaround, since I do know the correct values
    // for my situation. of course this is hardcoded and is terrible.
    v_translatedOrigin = vec4(500.0, 200.0, 0.0, 0.0);
}
EDIT: In response to my orthographic matrix being wrong: going by what Wikipedia has to say about ortho projections (the formula image was included in the original post), my -1's look right, because in my case, for example, the 4th element of my matrix should be -((right+left)/(right-left)), which with right = 960 and left = 0 is -1 * (960/960), i.e. -1.
EDIT: I've possibly uncovered the root issue here - what do you think?
Why does your ortho matrix have -1's in the bottom of each column? Those should be zeros. Granted, that should not affect anything.
I'm more concerned about this:
gl_Position *= projectionMatrix;
What does that mean? Matrix multiplication is not commutative; M * a is not the same as a * M. So which side do you expect gl_Position to be multiplied on?
Oddly, the GLSL spec does not say (I filed a bug report on this). So you should go with what is guaranteed to work:
gl_Position = projectionMatrix * gl_Position;
Also, you should use proper vectorized code. You should have one translate uniform, which is a vec2. Then you can just do gl_Position.xy = a_position.xy + translate;. You'll have to fill in the Z and W with constants (gl_Position.zw = vec2(0, 1);).
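Put together, the relevant part of the vertex shader might read like this sketch of the above suggestions (with the vec2 uniform named translate):

uniform vec2 translate;

void main()
{
    vec4 p;
    p.xy = a_position.xy + translate;   // vectorized offset
    p.zw = vec2(0.0, 1.0);              // constant Z and W
    gl_Position = projectionMatrix * p; // guaranteed multiplication order
}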
Matrices in GLSL are column major. The first four values are the first column of the matrix, not the first row. You are multiplying with a transposed ortho matrix.
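Laid out the way GLSL actually consumes the values, column by column, the intended matrix would read (a sketch):

mat4 projectionMatrix = mat4( 2.0/960.0, 0.0,       0.0,  0.0,  // 1st column
                              0.0,       2.0/640.0, 0.0,  0.0,  // 2nd column
                              0.0,       0.0,      -1.0,  0.0,  // 3rd column
                             -1.0,      -1.0,       0.0,  1.0); // 4th column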
I have to echo Nicol Bolas's sentiment. Two wrongs happening to make things work is frustrating, but doesn't make them any less wrong. The fact that things are showing up where you expect is likely because the translation portion of your matrix is 0, 0, 0.
The equation you posted is correct, but the notation is row major, and OpenGL is column major. (The equation image from the original post is not reproduced here.)
I run afoul of this stuff with every new project I start. This site is a really good resource that helped me keep these things straight. They've got another page on projection matrices.
If you're not sure if your orthographic projection is correct (right now it isn't), try plugging the same values into glOrtho, and reading the values back out of GL_PROJECTION.
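For example, on a desktop GL context (a sketch; near/far of -1/1 match the -1 in the shader's third column):

GLfloat m[16];
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, 960.0, 0.0, 640.0, -1.0, 1.0); // same left/right/bottom/top as the shader
glGetFloatv(GL_PROJECTION_MATRIX, m);       // column-major; compare against your mat4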