Rotating a light around a stationary object in openGL/glsl - opengl

So, I'm trying to rotate a light around a stationary object in the center of my scene. I'm well aware that I will need to use the rotation matrix in order to make this transformation occur. However, I'm unsure of how to do it in code. I'm new to linear algebra, so any help with explanations along the way would help a lot.
Basically, I'm working with these two right now and I'm not sure of how to make the light circulate the object.
mat4 rotation = mat4(
    vec4( cos(aTimer), 0.0, sin(aTimer), 0.0),
    vec4( 0.0,         1.0, 0.0,         0.0),
    vec4(-sin(aTimer), 0.0, cos(aTimer), 0.0),
    vec4( 0.0,         0.0, 0.0,         1.0)
);
and this is how my light is set up :
float lightPosition[4] = {5.0, 5.0, 1.0, 0};
glLightfv(GL_LIGHT0, GL_POSITION, lightPosition);
The aTimer in this code is a constantly incrementing float.

Even though you want the light to rotate around your object, you should not use a rotation matrix for this, but a translation matrix whose offset moves along a circle.
The matrix you're handling is the model matrix. It defines the orientation, the position and the scale of your object.
The matrix you have here is a rotation matrix, so the orientation of the light will change, but not its position, and the position is what you actually want to change.
So there are two problems to fix here:
1. Define your matrix properly. Since you want a circular translation, this is the matrix you need:
mat4 rotation = mat4(
    vec4( 1.0,         0.0,         0.0, 0.0),
    vec4( 0.0,         1.0,         0.0, 0.0),
    vec4( 0.0,         0.0,         1.0, 0.0),
    vec4( cos(aTimer), sin(aTimer), 0.0, 1.0)
);
2. Define a proper position for your light. Since it's a single vertex and moving it is the job of the model matrix above, the light's 4D position vector should be:
float lightPosition[4] = {0.0f, 0.0f, 0.0f, 1.0f};
// In C, 0.0 is a double; you may get compiler warnings about loss of precision, so use the "f" suffix
The fourth component must be 1, since that is what makes translations possible.
You may find additional information here:
Model matrix in 3D graphics / OpenGL
However, that explanation uses column vectors. Judging from your rotation matrix, I believe you use row vectors, so the translation components go in the last row, not the last column, of the model matrix.

Related

Reordering OpenGL Texture vertices to flip image rows

I am a complete OpenGL beginner and I inherited a codebase. I want OpenGL to flip a texture in vertical direction, meaning the top row goes to the bottom and so on. I am only doing 2D processing, if that is relevant.
My texture vertices are currently this:
const float texture_vertices[] = {
    0.0, 1.0,
    0.0, 0.0,
    1.0, 0.0,
    0.0, 1.0,
    1.0, 0.0,
    1.0, 1.0,
};
I tried changing the directions of the triangles and reordering them from clockwise to counterclockwise. I know this is caused by my lack of awareness of the very basics of OpenGL, but I would appreciate any help (especially a short explanation of why a given reordering is the right one).
Maybe I am going about this all wrong and the texture coordinates are not what I am interested in?
You need to flip the second component of the texture coordinates (the v coordinate), swapping 0 and 1:
const float texture_vertices[] = {
    0.0, 0.0,
    0.0, 1.0,
    1.0, 1.0,
    0.0, 0.0,
    1.0, 1.0,
    1.0, 0.0,
};
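If you would rather keep the original array and flip it in code when loading, the transformation is just v becoming 1 - v. A minimal, self-contained C sketch (the function name is made up):

```c
/* Sketch: flip the vertical (v) texture coordinate of an interleaved
   u,v array in place; u is left untouched. */
void flip_texcoords_v(float *uv, int vertex_count)
{
    for (int i = 0; i < vertex_count; ++i)
        uv[i * 2 + 1] = 1.0f - uv[i * 2 + 1];
}
```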

Shadow mapping for point light

The view matrix in OpenGL is usually built with the glm::lookAt() function, which takes 3 parameters: a position vector, a target vector and an up vector.
So why do these lines of code, used for point-light shadow mapping, define the last parameter (the up vector) like this:
float aspect = (float)SHADOW_WIDTH/(float)SHADOW_HEIGHT;
float near = 1.0f;
float far = 25.0f;
// This is the projection matrix
glm::mat4 shadowProj = glm::perspective(glm::radians(90.0f), aspect, near, far);
// This is for view matrix
std::vector<glm::mat4> shadowTransforms;
shadowTransforms.push_back(shadowProj *
    glm::lookAt(lightPos, lightPos + glm::vec3( 1.0, 0.0, 0.0), glm::vec3(0.0, -1.0, 0.0)));
shadowTransforms.push_back(shadowProj *
    glm::lookAt(lightPos, lightPos + glm::vec3(-1.0, 0.0, 0.0), glm::vec3(0.0, -1.0, 0.0)));
shadowTransforms.push_back(shadowProj *
    glm::lookAt(lightPos, lightPos + glm::vec3( 0.0, 1.0, 0.0), glm::vec3(0.0, 0.0, 1.0)));
shadowTransforms.push_back(shadowProj *
    glm::lookAt(lightPos, lightPos + glm::vec3( 0.0, -1.0, 0.0), glm::vec3(0.0, 0.0, -1.0)));
shadowTransforms.push_back(shadowProj *
    glm::lookAt(lightPos, lightPos + glm::vec3( 0.0, 0.0, 1.0), glm::vec3(0.0, -1.0, 0.0)));
shadowTransforms.push_back(shadowProj *
    glm::lookAt(lightPos, lightPos + glm::vec3( 0.0, 0.0, -1.0), glm::vec3(0.0, -1.0, 0.0)));
Why not glm::vec3(0.0, 1.0, 0.0) for all six faces?
Because that wouldn't make the slightest sense. If you define your view transform via a lookAt function, you specify the camera position and the viewing direction. What the 3D up vector actually defines is the angle of rotation around the view axis (so it really contributes only one degree of freedom).
Think of rotating a real camera into landscape or portrait orientation, or any arbitrary rotation in between, to get the image you want.
Since the lookAt convention has always been that the up vector is some world-space vector which should be mapped to the upward axis in the resulting image, you will get a problem if you define the up vector in the same direction as your viewing direction (or its negation). It is simply impossible to map the same vector both to the viewing direction and upwards in the resulting image, so such an up vector does not describe any orientation at all.
So for 2 of the 6 faces, the math would simply break down. For the other 4 faces, you could technically use (0, 1, 0) for all of them. However, this sort of configuration is usually used to render to the six faces of a cube map texture, and for cube maps the orientation of each face is defined by the GL spec. So when rendering directly to a cube map you must orient your camera accordingly, otherwise the individual faces will be rotated wrongly and won't fit together.
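The "math breaks down" case can be checked numerically: lookAt builds the camera basis from the cross product of the viewing direction and the up vector, and that cross product is zero when the two are parallel. A small C sketch, using the six direction/up pairs from the code above:

```c
/* The six cube-map view directions and their up vectors, as above. */
static const float faces[6][2][3] = {
    {{ 1,  0,  0}, {0, -1,  0}},
    {{-1,  0,  0}, {0, -1,  0}},
    {{ 0,  1,  0}, {0,  0,  1}},
    {{ 0, -1,  0}, {0,  0, -1}},
    {{ 0,  0,  1}, {0, -1,  0}},
    {{ 0,  0, -1}, {0, -1,  0}},
};

/* An up vector is usable only if it is not parallel to the view
   direction, i.e. their cross product is nonzero. */
int up_is_valid(const float dir[3], const float up[3])
{
    float cx = dir[1] * up[2] - dir[2] * up[1];
    float cy = dir[2] * up[0] - dir[0] * up[2];
    float cz = dir[0] * up[1] - dir[1] * up[0];
    return cx * cx + cy * cy + cz * cz > 1e-6f;
}
```

All six pairs above pass this check, while (0, 1, 0) fails for the +Y and -Y faces, which is exactly why those two faces need a different up vector.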

Applying perspective with GLSL matrix

I am not quite sure what is missing, but I loaded a uniform matrix into a vertex shader and when the matrix was:
GLfloat translation[4][4] = {
{1.0, 0.0, 0.0, 0.0},
{0.0, 1.0, 0.0, 0.0},
{0.0, 0.0, 1.0, 0.0},
{0.0, 0.2, 0.0, 1.0}};
or so, I seemed to be able to translate vertices just fine, depending on which values I chose to change. However, when swapping this same uniform matrix to apply projection, the image would not appear. I tried several matrices, such as:
GLfloat frustum[4][4] = {
{((2.0*frusZNear)/(frusRight - frusLeft)), 0.0, 0.0, 0.0},
{0.0, ((2.0*frusZNear)/(frusTop - frusBottom)), 0.0 , 0.0},
{((frusRight + frusLeft)/(frusRight-frusLeft)), ((frusTop + frusBottom) / (frusTop - frusBottom)), (-(frusZFar + frusZNear)/(frusZFar - frusZNear)), (-1.0)},
{0.0, 0.0, ((-2.0*frusZFar*frusZNear)/(frusZFar-frusZNear)), 0.0}
};
and values, such as:
const GLfloat frusLeft = -3.0;
const GLfloat frusRight = 3.0;
const GLfloat frusBottom = -3.0;
const GLfloat frusTop = 3.0;
const GLfloat frusZNear = 5.0;
const GLfloat frusZFar = 10.0;
The vertex shader, which seemed to apply translation just fine:
gl_Position = frustum * vPosition;
Any help appreciated.
The code for calculating the perspective/frustum matrix looks correct to me. This sets up a perspective matrix that assumes that your eye point is at the origin, and you're looking down the negative z-axis. The near and far values specify the range of distances along the negative z-axis that are within the view volume.
Therefore, with near/far values of 5.0/10.0, the range of z-values that are within your view volume will be from -5.0 to -10.0.
If your geometry is currently drawn around the origin, use a translation by something like (0.0, 0.0, -7.0) as your view matrix. This needs to be applied before the projection matrix.
You can either combine the view and projection matrices, or pass them separately into your vertex shader. With a separate view matrix, containing the translation above, your shader code could then look like this:
uniform mat4 viewMat;
...
gl_Position = frustum * viewMat * vPosition;
The first thing I see is that the Z near and far planes are chosen at 5 and 10. If your vertices do not lie between these planes, you will not see anything.
The projection matrix takes everything inside the pyramid-like shape (the frustum) and translates and scales it into the unit volume, -1 to 1 in every dimension.
http://www.lighthouse3d.com/tutorials/view-frustum-culling/
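You can sanity-check the frustum matrix on the CPU before involving shaders at all. A small C sketch, assuming the values from the question (near = 5, far = 10): a point at z = -7 lies between the planes, so after the perspective divide its NDC z must fall inside [-1, 1]:

```c
/* Sketch: apply a column-major 4x4 matrix (the layout used in the
   question, m[column][row]) to a point, then divide by w. */
void project(float m[4][4], const float p[4], float ndc[3])
{
    float clip[4];
    for (int row = 0; row < 4; ++row)
        clip[row] = m[0][row] * p[0] + m[1][row] * p[1]
                  + m[2][row] * p[2] + m[3][row] * p[3];
    for (int i = 0; i < 3; ++i)
        ndc[i] = clip[i] / clip[3]; /* perspective divide */
}
```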

Transform light worldspace coordinates to eyespace coordinates

I am attempting to model a spotlight in a scene for an introduction to graphics class. The assignment specifies that I must do everything in modernGL, therefore I can't use anything from legacy.
I have been reading the OpenGL 4.0 Shading Language Cookbook for assistance with this, but I can't figure out how to get the eyespace coordinates for my lights. I know the position and direction that I want the lights to have in world space, and I have attempted to transform them to eye space with the following.
//mv = inverse(mv);
vec3 light = vec3(vec4(0.0, 15.0, 0.0, 1.0) * mv);
vec3 direction = vec3(vec4(0.0, -1.0, 0.0, 1.0) * mv);
Where mv is my modelview matrix generated by
glm::mat4 modelview_matrix = glm::lookAt(glm::vec3(window.camera.x, window.camera.y, window.camera.z), glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3(0.0f, 1.0f, 0.0f));
without any transforms applied to it. As you can see, I have attempted to multiply by both the inverse of the modelview matrix and the modelview matrix itself to get to eye space. I am positive that neither of these has worked, as the specular highlight on my object follows me as I move to the opposite side of the object (i.e. when I am looking at the side of the object facing away from the light, I shouldn't see a specular highlight).
I'm fairly certain that you want:
vec3 light = vec3(mv * vec4(0.0, 15.0, 0.0, 1.0));
vec3 direction = vec3(mv * vec4(0.0, -1.0, 0.0, 0.0));
// ^ Note the 0 in the w coordinate
Where you have not inverted your mv matrix. Yes, the order of multiplication does matter. The reason you leave a 0 in the w coordinate is because you don't want direction vectors to be translated, only scaled or rotated.
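The role of the w component is easy to verify on the CPU. A minimal C sketch of a column-major matrix-vector multiply (the convention GLM uses): a translation moves a point (w = 1) but leaves a direction (w = 0) unchanged:

```c
/* Sketch: column-major 4x4 matrix times 4-vector, m[column][row].
   The translation lives in column 3 and is scaled by v[3] (w),
   so w = 0 vectors ignore it entirely. */
void mat4_mul_vec4(float m[4][4], const float v[4], float out[4])
{
    for (int row = 0; row < 4; ++row)
        out[row] = m[0][row] * v[0] + m[1][row] * v[1]
                 + m[2][row] * v[2] + m[3][row] * v[3];
}
```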

Access and move modelview OpenGL 3.2

I'm trying to create a snake game. I managed to create my field of squares and I drew my red snake as a square with this:
void drawSnake()
{
    mat4 modelView;
    modelView = Translate(0, 0, 0);
    glUniformMatrix4fv(modelViewUniform, 1, GL_TRUE, modelView);
    glDrawArrays(GL_TRIANGLES, 0, 6);
}
Right now I'm not storing my snake anywhere when I draw it. I am thinking maybe I can store the modelView as an object and then translate the object around, which would be the snake moving around. But maybe this is a stupid way to do it, so I thought I would ask for some better ways.
Vertex rectangleData1[rectangleSize] = {
    { vec2(-1.0, -1.0), color1 },
    { vec2( 1.0, -1.0), color1 },
    { vec2( 1.0,  1.0), color1 },
    { vec2(-1.0, -1.0), color1 },
    { vec2( 1.0,  1.0), color1 },
    { vec2(-1.0,  1.0), color1 }
};
You are correct to assume that using and modifying a matrix per game object is the proper technique for translating said game objects. In graphics, these matrices are usually referred to as 'Model' or 'World' matrices. When using OpenGL for 3D applications, you can additionally provide View and Projection matrices to your shader program (only one of each used per render) to allow for a realistic, modifiable presentation of your world space. The camera-like volume of world space presented this way is called a viewing frustum.
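One common way to structure this is to store just a position per game object and rebuild its model matrix from that position every frame, instead of accumulating translations into a stored matrix. A minimal C sketch; the Snake struct and function name are made up for illustration:

```c
/* Sketch: a game object stores its position; the model matrix is
   rebuilt from it each frame. Column-major layout, m[column][row];
   note the question's glUniformMatrix4fv call passes GL_TRUE
   (transpose), so with this layout you would pass GL_FALSE. */
typedef struct { float x, y; } Snake;

void snake_model_matrix(const Snake *s, float m[4][4])
{
    /* start from the identity matrix */
    for (int c = 0; c < 4; ++c)
        for (int r = 0; r < 4; ++r)
            m[c][r] = (c == r) ? 1.0f : 0.0f;
    /* translation goes in the last column */
    m[3][0] = s->x;
    m[3][1] = s->y;
}
```

Moving the snake is then just a matter of updating s->x and s->y; drawSnake uploads the freshly built matrix and draws the same square geometry every frame.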