How to change ortho matrix size in shader - opengl

I tried scaling the orthographic projection matrix, but it doesn't seem to scale its dimensions. I am using an orthographic projection for directional lighting, but if the main camera is above the ground I want to make the ortho matrix's range bigger.
For example, if the camera is at zero, the ortho matrix is
glm::ortho(-10.0f, 10.0f, -10.0f, 10.0f, 0.01f, 4000.0f)
but if the camera goes to 400 in the y direction, I want this matrix to be
glm::ortho(-410.0f, 410.0f, -410.0f, 410.0f, 0.01f, 4000.0f)
However, I want to do this in the shader, with matrix multiplication or addition.

The orthographic projection matrix can be computed by:
r = right, l = left, b = bottom, t = top, n = near, f = far
x-axis: 2/(r-l), 0, 0, 0
y-axis: 0, 2/(t-b), 0, 0
z-axis: 0, 0, -2/(f-n), 0
translation: -(r+l)/(r-l), -(t+b)/(t-b), -(f+n)/(f-n), 1
(listed column-major: each line above is one column, matching the GLSL mat4 constructor below)
In GLSL, for instance:
float l = -410.0;
float r = 410.0;
float b = -410.0;
float t = 410.0;
float n = 0.01;
float f = 4000.0;
mat4 projection = mat4(
vec4(2.0/(r-l), 0.0, 0.0, 0.0),
vec4(0.0, 2.0/(t-b), 0.0, 0.0),
vec4(0.0, 0.0, -2.0/(f-n), 0.0),
vec4(-(r+l)/(r-l), -(t+b)/(t-b), -(f+n)/(f-n), 1.0)
);
Furthermore, you can scale the orthographic projection matrix.
C++:
glm::mat4 ortho = glm::ortho(-10.0f, 10.0f, -10.0f, 10.0f, 0.01f, 4000.0f);
float scale = 10.0f / 410.0f;
glm::mat4 scale_mat = glm::scale(glm::mat4(1.0f), glm::vec3(scale, scale, 1.0f));
glm::mat4 ortho_new = ortho * scale_mat;
GLSL:
float scale = 10.0 / 410.0;
mat4 scale_mat = mat4(
vec4(scale, 0.0, 0.0, 0.0),
vec4(0.0, scale, 0.0, 0.0),
vec4(0.0, 0.0, 1.0, 0.0),
vec4(0.0, 0.0, 0.0, 1.0)
);
mat4 ortho_new = ortho * scale_mat;
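To drive this from the camera height directly in the vertex shader, here is a minimal sketch; the uniform names ortho and cameraY and the attribute position are illustrative assumptions, with ortho holding the fixed ±10 matrix from above:
#version 330 core
in vec3 position;      // vertex position (illustrative attribute name)
uniform mat4 ortho;    // the fixed glm::ortho(-10, 10, -10, 10, 0.01, 4000) matrix
uniform float cameraY; // assumed uniform: the main camera's height above the ground

void main()
{
    float halfExtent = 10.0 + cameraY; // 10 at y = 0, 410 at y = 400
    float s = 10.0 / halfExtent;       // shrink the clip-space mapping accordingly
    mat4 scale_mat = mat4(
        vec4(s, 0.0, 0.0, 0.0),
        vec4(0.0, s, 0.0, 0.0),
        vec4(0.0, 0.0, 1.0, 0.0),
        vec4(0.0, 0.0, 0.0, 1.0));
    gl_Position = ortho * scale_mat * vec4(position, 1.0);
}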

Related

How to rotate a 2D square in OpenGL?

I have a basic square. How can I rotate it?
let vertices = vec![
x, y, uv1.0, uv1.1, layer, // top left
// ---
x + w, y, uv2.0, uv2.1, layer, // top right
// ---
x + w, y + h, uv3.0, uv3.1, layer, // bottom right
// ---
x, y + h, uv4.0, uv4.1, layer // bottom left
];
This is my orthographic projection matrix.
let c_str_vert = CString::new("MVP".as_bytes()).unwrap();
let modelLoc = gl::GetUniformLocation(shaderProgram, c_str_vert.as_ptr());
let model = cgmath::ortho(0.0, SCR_WIDTH as f32, SCR_HEIGHT as f32, 0.0, -1.0, 1.0);
gl::UniformMatrix4fv(modelLoc, 1, gl::FALSE, model.as_ptr());
#version 430 core
layout(location = 0) in vec2 position;
// ...
uniform mat4 MVP;
void main() {
gl_Position = MVP * vec4(position.x, position.y, 0.0, 1.0);
// ...
}
I have a lot of squares but I don't want to rotate them all.
The width and height are 100px; how can I turn my square to make it look like this?
I know that I can achieve this by using transformation matrices. I did it in web development while working on SVG, but I have no idea how to get this working in OpenGL.
I would like to know how I can multiply a matrix and a vector in OpenGL.
You are already multiplying a matrix with a vector:
gl_Position = MVP * vec4(position.x, position.y, 0.0, 1.0);
All you have to do is multiply your MVP matrix with your rotation matrix and then with your vector:
gl_Position = MVP * rotationMatrix * vec4(position.x, position.y, 0.0, 1.0);
The rotation matrix should also be a 4x4 matrix.
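For example, a rotation about the z axis can be built directly in the question's vertex shader; this is a sketch, and the angle uniform (in radians) is an illustrative assumption:
uniform float angle; // assumed uniform: rotation angle in radians

void main() {
    float c = cos(angle);
    float s = sin(angle);
    // column-major: each vec4 is one column of the matrix
    mat4 rotationMatrix = mat4(
        vec4(  c,   s, 0.0, 0.0),
        vec4( -s,   c, 0.0, 0.0),
        vec4(0.0, 0.0, 1.0, 0.0),
        vec4(0.0, 0.0, 0.0, 1.0));
    gl_Position = MVP * rotationMatrix * vec4(position.x, position.y, 0.0, 1.0);
}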
I tried the nalgebra and nalgebra-glm libraries; the cgmath API confuses me.
let mut ortho = glm::ortho(0.0, SCR_WIDTH as f32, SCR_HEIGHT as f32, 0.0, -1.0, 1.0);
ortho = glm::translate(&ortho, &glm::vec3(0.0, 50.0, 0.0));
let rad = 30.0 * (PI / 360.0);
ortho = glm::rotate(&ortho, rad, &glm::vec3(0.0, 0.0, 1.0));
ortho = glm::translate(&ortho, &glm::vec3(0.0, -50.0, 0.0) );
Thank you for all the answers

scaling is moving the object

Video link for the issue:
https://www.youtube.com/watch?v=iqX1RPo4NDE&feature=youtu.be
This is what I want to attain:
https://www.youtube.com/watch?v=bWwYV0VhXqs
Here, after scaling the object, I can move the pivot independently; the position of the pivot does not affect the position of the object.
These are my matrices.
When I move the pivot point one unit in x and the scaling is set to 1, everything works fine:
the pivot point has moved one unit and the cube has stayed in its position.
But when I first scale the object to 0.5 and then move the pivot point, the cube follows the pivot point, which should not be the case, as I am only moving the pivot point.
Please help me with this: how can I keep the cube in position?
Since I am moving the axis, not the cube, the cube should stay in its original position.
glm::mat4x4 Container::GetPositionMatrix()
{
// posx is the x position of the object.
// posy is the y position of the object.
// posz is the z position of the object.
glm::mat4 TransformationPosition = glm::translate(glm::mat4x4(1.0),
glm::vec3(posx, posy, posz ));
return TransformationPosition;
}
glm::mat4x4 Container::GetRotationMatrix()
{
// posx is the x position of the object
// pivotx is the x position of the pivot point
// rotx is the x rotation of the object
glm::vec3 pivotVector(posx - pivotx, posy - pivoty, posz - pivotz);
glm::mat4 TransPivot = glm::translate(glm::mat4x4(1.0f), pivotVector);
glm::mat4 TransPivotInverse = glm::translate(glm::mat4x4(1.0f),
glm::vec3( -pivotVector.x , -pivotVector.y , -pivotVector.z));
glm::mat4 TransformationRotation(1.0);
TransformationRotation = glm::rotate(TransformationRotation,
glm::radians(rotx), glm::vec3(1.0, 0.0, 0.0));
TransformationRotation = glm::rotate(TransformationRotation,
glm::radians(roty), glm::vec3(0.0, 1.0, 0.0));
TransformationRotation = glm::rotate(TransformationRotation,
glm::radians(rotz ), glm::vec3(0.0, 0.0, 1.0));
return TransPivotInverse * TransformationRotation * TransPivot;
}
glm::mat4x4 Container::GetScalingMatrix()
{
// posx is the x position of the object
// pivotx is the x position of the pivot point
// scax is the x scaling of the object
glm::vec3 pivotVector(posx - pivotx, posy - pivoty, posz - pivotz);
glm::mat4 TransPivot = glm::translate(glm::mat4x4(1.0f), pivotVector);
glm::mat4 TransPivotInverse = glm::translate(glm::mat4x4(1.0f),
glm::vec3(-pivotVector.x, -pivotVector.y, -pivotVector.z));
glm::mat4 TransformationScale = glm::scale(glm::mat4x4(1.0 ),
glm::vec3(scax, scay, scaz));
return TransPivotInverse * TransformationScale * TransPivot;
}
This is the final matrix for the object position:
TransformationPosition * TransformationRotation * TransformationScaling
This is my final matrix for the pivot point:
PivotPointPosition = MatrixContainerPosition * MatrixPivotPointPosition *
MatrixRotationContainer * MatrixScaleContainer
The object should be oriented and located as follows:
The reference point (pivotx, pivoty, pivotz) is a point in object space.
The object has to be scaled (scax, scay, scaz) and rotated (rotx, roty, rotz) relative to the reference point (pivotx, pivoty, pivotz).
The point (posx, posy, posz) defines the position of the object in the scene, where the reference point has finally to be placed.
Do the following steps:
Scale the object to the desired size, with respect to the reference point:
glm::mat4 GetScalingMatrix()
{
glm::vec3 refVector(pivotx, pivoty, pivotz);
glm::mat4 TransRefToOrigin = glm::translate(glm::mat4(1.0f), -refVector);
glm::mat4 TransRefFromOrigin = glm::translate(glm::mat4(1.0f), refVector);
glm::vec3 scale = glm::vec3(scax, scay, scaz);
glm::mat4 TransformationScale = glm::scale(glm::mat4(1.0), scale);
return TransRefFromOrigin * TransformationScale * TransRefToOrigin;
}
Rotate the scaled object around the pivot, like in the answer to one of your previous questions (How to use Pivot Point in Transformations):
glm::mat4 GetRotationMatrix()
{
glm::vec3 pivotVector(pivotx, pivoty, pivotz);
glm::mat4 TransPivotToOrigin = glm::translate(glm::mat4(1.0f), -pivotVector);
glm::mat4 TransPivotFromOrigin = glm::translate(glm::mat4(1.0f), pivotVector);
glm::mat4 TransformationRotation(1.0);
TransformationRotation = glm::rotate(TransformationRotation,
glm::radians(rotx), glm::vec3(1.0, 0.0, 0.0));
TransformationRotation = glm::rotate(TransformationRotation,
glm::radians(roty), glm::vec3(0.0, 1.0, 0.0));
TransformationRotation = glm::rotate(TransformationRotation,
glm::radians(rotz), glm::vec3(0.0, 0.0, 1.0));
return TransPivotFromOrigin * TransformationRotation * TransPivotToOrigin;
}
Move the scaled and rotated object to its final position (posx, posy, posz):
glm::mat4 GetPositionMatrix()
{
glm::vec3 trans = glm::vec3(posx-pivotx, posy-pivoty, posz-pivotz);
glm::mat4 TransformationPosition = glm::translate(glm::mat4(1.0), trans);
return TransformationPosition;
}
The order matters:
glm::mat4 model = GetPositionMatrix() * GetRotationMatrix() * GetScalingMatrix();
All this can be simplified:
// translate "pivot" to origin
glm::mat4 ref2originM = glm::translate(glm::mat4(1.0f), -glm::vec3(pivotx, pivoty, pivotz));
// scale
glm::mat4 scaleM = glm::scale(glm::mat4(1.0), glm::vec3(scax, scay, scaz));
// rotate
glm::mat4 rotationM(1.0);
rotationM = glm::rotate(rotationM, glm::radians(rotx), glm::vec3(1.0, 0.0, 0.0));
rotationM = glm::rotate(rotationM, glm::radians(roty), glm::vec3(0.0, 1.0, 0.0));
rotationM = glm::rotate(rotationM, glm::radians(rotz), glm::vec3(0.0, 0.0, 1.0));
// translate to "pos"
glm::mat4 origin2posM = glm::translate(glm::mat4(1.0), glm::vec3(posx, posy, posz));
// concatenate matrices
glm::mat4 model = origin2posM * rotationM * scaleM * ref2originM;
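As a quick sanity check (a sketch using the variables above): the reference point must land exactly at (posx, posy, posz), no matter the scale or rotation, which is why moving the pivot no longer drags the object:
glm::vec4 ref(pivotx, pivoty, pivotz, 1.0f);
glm::vec4 placed = model * ref;
// ref2originM sends the reference point to the origin, scaleM and
// rotationM leave the origin fixed, and origin2posM moves it to the
// target, so placed == glm::vec4(posx, posy, posz, 1.0f).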

OpenGL/GLSL not working without setting transpose to GL_TRUE

GLM matrices don't seem to work without transposing:
glm::mat4 proj = glm::ortho(0.0f,960.0f,0.0f,540.0f,-1.0f, 1.0f);
GL_TRUE has to be set:
glUniformMatrix4fv(GetUniformLocation(name), 1, GL_TRUE, &matrix[0][0]);
Isn't GLM already supposed to be in column-major form?
If you don't want to transpose the matrix, then the vector has to be multiplied by the matrix from the right in the shader code:
mat4 transformation;
vec4 vertexPosition;
gl_Position = transformation * vertexPosition;
Explanation:
See GLSL Programming/Vector and Matrix Operations:
Furthermore, the *-operator can be used for matrix-vector products of the corresponding dimension, e.g.:
vec2 v = vec2(10., 20.);
mat2 m = mat2(1., 2., 3., 4.);
vec2 w = m * v; // = vec2(1. * 10. + 3. * 20., 2. * 10. + 4. * 20.)
Note that the vector has to be multiplied to the matrix from the right.
Thus, multiplying a vector by a matrix from the left corresponds to multiplying it from the right by the transposed matrix:
vec2 v = vec2(10., 20.);
mat2 m = mat2(1., 2., 3., 4.);
vec2 w = v * m; // = vec2(1. * 10. + 2. * 20., 3. * 10. + 4. * 20.)
This means:
If a matrix is defined like this:
mat4 m44 = mat4(
vec4( Xx, Xy, Xz, 0.0),
vec4( Yx, Yy, Yz, 0.0),
vec4( Zx, Zy, Zz, 0.0),
vec4( Tx, Ty, Tz, 1.0) );
And the matrix uniform mat4 transformation is set like this (see glUniformMatrix4fv):
glUniformMatrix4fv( .... , 1, GL_FALSE, &(m44[0][0]) );
Then the vector has to be multiplied by the matrix from the right:
gl_Position = transformation * vertexPosition;
But of course, the matrix can be set up transposed:
mat4 m44 = mat4(
vec4( Xx, Yx, Zx, Tx),
vec4( Xy, Yy, Zy, Ty),
vec4( Xz, Yz, Zz, Tz),
vec4( 0.0, 0.0, 0.0, 1.0) );
Or the matrix can be transposed when it is set to the uniform variable:
glUniformMatrix4fv( .... , 1, GL_TRUE, &(m44[0][0]) );
Then the vector has to be multiplied by the matrix from the left:
gl_Position = vertexPosition * transformation;
Note that the glm API documentation refers to The OpenGL Shading Language specification 4.20.
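For completeness, the untransposed variant of the code from the question would look like this (a sketch, reusing the question's GetUniformLocation wrapper and glm::value_ptr):
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

glm::mat4 proj = glm::ortho(0.0f, 960.0f, 0.0f, 540.0f, -1.0f, 1.0f);
// GLM stores matrices column-major, exactly as GLSL expects,
// so no transpose is needed here...
glUniformMatrix4fv(GetUniformLocation(name), 1, GL_FALSE, glm::value_ptr(proj));
// ...provided the shader multiplies the vector from the right:
// gl_Position = transformation * vertexPosition;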

OpenGL Asymmetric Frustum for Desktop VR

I am making an OpenGL C++ application that tracks the user's location in relation to the screen and then updates the rendered scene to the perspective of the user. This is known as "desktop VR", or you can think of the screen as a diorama or fish tank. I am rather new to OpenGL and have only defined a very simple scene thus far, just a cube, and it is initially rendered correctly.
The problem is that when I start moving and want to rerender the cube scene, the projection plane seems translated and I don't see what I think I should. I want this plane fixed. If I were writing a ray tracer, my window would always be fixed, but my eye is allowed to wander. Can someone please explain to me how I can achieve the effect I desire (pinning the viewing window) while having my camera/eye wander at a non-origin coordinate?
All of the examples I find demand that the camera/eye be at the origin, but this is not conceptually convenient for me. Also, because this is a "fish tank", I am setting my d_near to be the xy-plane, where z = 0.
In screen/world space, I assign the center of the screen to (0,0,0) and its 4 corners to:
TL(-44.25, 25, 0)
TR( 44.25, 25, 0)
BR( 44.25,-25, 0)
BL(-44.25,-25, 0)
These values are in cm for a 16x9 display.
I then calculate the user's eye (actually a web cam on my face) using POSIT to be usually somewhere in the range of (+/-40, +/-40, 40-250). My POSIT method is accurate.
I am defining my own matrices for the perspective and viewing transforms and using shaders.
I initialize as follows:
float right = 44.25;
float left = -44.25;
float top = 25.00;
float bottom = -25.00;
vec3 eye = vec3(0.0, 0.0, 100.0);
vec3 view_dir = vec3(0.0, 0.0, -1.0);
vec3 up = vec3(0.0, 1.0, 0.0);
vec3 n = normalize(-view_dir);
vec3 u = normalize(cross(up, n));
vec3 v = normalize(cross(n, u));
float d_x = -(dot(eye, u));
float d_y = -(dot(eye, v));
float d_z = -(dot(eye, n));
float d_near = eye.z;
float d_far = d_near + 50;
// perspective transform matrix
mat4 P = mat4((2.0*d_near)/(right-left ), 0, (right+left)/(right-left), 0,
0, (2.0*d_near)/(top-bottom), (top+bottom)/(top-bottom), 0,
0, 0, -(d_far+d_near)/(d_far-d_near), -(2.0*d_far*d_near)/(d_far-d_near),
0, 0, -1.0, 0);
// viewing transform matrix
mat4 V = mat4(u.x, u.y, u.z, d_x,
v.x, v.y, v.z, d_y,
n.x, n.y, n.z, d_z,
0.0, 0.0, 0.0, 1.0);
mat4 MV = C * V;
//MV = V;
From what I gather looking on the web, my view_dir and up are to remain fixed. This means that I need only update d_near and d_far, as well as d_x, d_y, and d_z? I do this in my glutIdleFunc( idle );
void idle (void) {
hBuffer->getFrame(hFrame);
if (hFrame->goodH && hFrame->timeStamp != timeStamp) {
timeStamp = hFrame->timeStamp;
std::cout << "(" << hFrame->eye.x << ", " <<
hFrame->eye.y << ", " <<
hFrame->eye.z << ") \n";
eye = vec3(hFrame->eye.x, hFrame->eye.y, hFrame->eye.z);
d_near = eye.z;
d_far = eye.z + 50;
P = mat4((2.0*d_near)/(right-left), 0, (right+left)/(right-left), 0,
0, (2.0*d_near)/(top-bottom), (top+bottom)/(top-bottom), 0,
0, 0, -(d_far+d_near)/(d_far-d_near), -(2.0*d_far*d_near)/(d_far-d_near),
0, 0, -1.0, 0);
d_x = -(dot(eye, u));
d_y = -(dot(eye, v));
d_z = -(dot(eye, n));
C = mat4(1.0, 0.0, 0.0, eye.x,
0.0, 1.0, 0.0, eye.y,
0.0, 0.0, 1.0, 0.0,
0.0, 0.0, 0.0, 1.0);
V = mat4(u.x, u.y, u.z, d_x,
v.x, v.y, v.z, d_y,
n.x, n.y, n.z, d_z,
0.0, 0.0, 0.0, 1.0);
MV = C * V;
//MV = V;
glutPostRedisplay();
}
}
Here is my shader code:
#version 150
uniform mat4 MV;
uniform mat4 P;
in vec4 vPosition;
in vec4 vColor;
out vec4 color;
void
main()
{
gl_Position = P * MV * vPosition;
color = vColor;
}
Ok, I made some changes to my code, but without success. When I use V in place of MV in the vertex shader, everything looks as I want it to, the perspective is correct and the objects are the right size, however, the scene is translated by the displacement of the camera.
When using C and V to obtain MV, my scene is rendered from the perspective of an observer straight on and the rendered scene fills the window as it should, but the perspective of the eye/camera is lost.
Really, what I want is to translate the 2D pixels, the projection plane, by the appropriate x and y values of the eye/camera, so as to keep the center of the object (whose xy center is (0,0)) in the center of the rendered image. I am guided by the examples in the textbook "Interactive Computer Graphics: A Top-Down Approach with Shader-Based OpenGL (6th Edition)". Using the files paired with the book, freely available on the web, I am continuing with the row-major approach.
The following images are taken when not using the matrix C to create MV. When I do use C to create MV, all scenes look like the first image below. I desire no translation in z and so I leave that as 0.
Because the projection plane and my camera plane are parallel, the conversion from one coordinate system to the other is simply a translation, and inv(T) ~ -T.
Here is my image for the eye at (0,0,50):
Here is my image for the eye at (56,-16,50):
The solution is to update d_x, d_y, and d_z, accounting for the new eye position, but to never change u, v, or n. In addition, one must update the matrix P with new values for left, right, top and bottom, as they relate to the new position of the camera/eye.
I initialize with this:
float screen_right = 44.25;
float screen_left = -44.25;
float screen_top = 25.00;
float screen_bottom = -25.00;
float screen_front = 0.0;
float screen_back = -150;
And now my idle function looks like this, note the calculations for top, bottom, right, and left:
void idle (void) {
hBuffer->getFrame(&hFrame);
if (hFrame.goodH && hFrame.timeStamp != timeStamp) {
timeStamp = hFrame.timeStamp;
//std::cout << "(" << hFrame.eye.x << ", " <<
// hFrame.eye.y << ", " <<
// hFrame.eye.z << ") \n";
eye = vec3(hFrame.eye.x, hFrame.eye.y, hFrame.eye.z);
d_near = eye.z;
d_far = eye.z + abs(screen_back) + 1;
float top = screen_top - eye.y;
float bottom = top - 2*screen_top;
float right = screen_right - eye.x;
float left = right - 2*screen_right;
P = mat4((2.0 * d_near)/(right - left ), 0.0, (right + left)/(right - left), 0.0,
0.0, (2.0 * d_near)/(top - bottom), (top + bottom)/(top - bottom), 0.0,
0.0, 0.0, -(d_far + d_near)/(d_far - d_near), -(2.0 * d_far * d_near)/(d_far - d_near),
0.0, 0.0, -1.0, 0.0);
d_x = -(dot(eye, u));
d_y = -(dot(eye, v));
d_z = -(dot(eye, n));
V = mat4(u.x, u.y, u.z, d_x,
v.x, v.y, v.z, d_y,
n.x, n.y, n.z, d_z,
0.0, 0.0, 0.0, 1.0);
glutPostRedisplay();
}
}
This redefinition of the perspective matrix keeps the rendered image from translating. I still have camera-capture and screen-grabbing synchronization issues, but I am able to create images like the following, updated in real time for the position of the user:
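The per-frame window computation can be factored into a small helper; this is a sketch reusing the row-major mat4/vec3 types and the screen_* extents from the code above (std::abs comes from <cmath>):
// Off-axis projection for a screen rectangle pinned at z = 0.
mat4 offAxisProjection(const vec3& eye)
{
    float d_near = eye.z;                                // eye-to-screen distance
    float d_far  = eye.z + std::abs(screen_back) + 1.0f; // just past the back of the tank
    // Shift the frustum window opposite to the eye, so the screen
    // rectangle stays fixed in space while the eye moves.
    float right  = screen_right  - eye.x;
    float left   = screen_left   - eye.x;
    float top    = screen_top    - eye.y;
    float bottom = screen_bottom - eye.y;
    return mat4((2.0f*d_near)/(right-left), 0.0f, (right+left)/(right-left), 0.0f,
                0.0f, (2.0f*d_near)/(top-bottom), (top+bottom)/(top-bottom), 0.0f,
                0.0f, 0.0f, -(d_far+d_near)/(d_far-d_near), -(2.0f*d_far*d_near)/(d_far-d_near),
                0.0f, 0.0f, -1.0f, 0.0f);
}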
All of the examples I find demand that the camera/eye be at the origin
This is not a demand. OpenGL doesn't define a camera; it all boils down to transformations. Those are usually called projection and modelview (model and view transform combined). The camera, i.e. the view transform, is simply the inverse of the positioning of the camera in world space. So say you built yourself a matrix for your camera C; then you'd premultiply its inverse on the modelview.
mat4 MV = inv(C) * V
On a side note, your shader code is wrong:
gl_Position = P * ( (V * vPosition ) / vPosition.w );
The homogeneous divide (the division by vPosition.w) is not to be done in the shader, as it is hardwired into the render pipeline.
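Dropping the divide and leaving it to the pipeline, the criticized line simply becomes:
gl_Position = P * V * vPosition;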

bias matrix in shadow mapping

I'm looking at shadow mapping in OpenGL.
I see code like:
// This matrix transforms every coordinate x,y,z:
// x = x * 0.5 + 0.5
// y = y * 0.5 + 0.5
// z = z * 0.5 + 0.5
// i.e. moving from the unit cube [-1,1] to [0,1]
const GLdouble bias[16] = {
0.5, 0.0, 0.0, 0.0,
0.0, 0.5, 0.0, 0.0,
0.0, 0.0, 0.5, 0.0,
0.5, 0.5, 0.5, 1.0};
// Grab modelview and projection matrices
glGetDoublev(GL_MODELVIEW_MATRIX, modelView);
glGetDoublev(GL_PROJECTION_MATRIX, projection);
glMatrixMode(GL_TEXTURE);
glActiveTextureARB(GL_TEXTURE7);
glLoadIdentity();
glLoadMatrixd(bias);
// concatenating all matrices into one.
glMultMatrixd (projection);
glMultMatrixd (modelView);
// Go back to normal matrix mode
glMatrixMode(GL_MODELVIEW);
Now, if I rip out the bias matrix, the code does not work. Searching other shadow mapping code, I see the same bias matrix, without any explanation. Why do I want this bias to map x, y, z to 0.5 * x + 0.5, 0.5 * y + 0.5, ... ?
Thanks!
When you transform vertices inside the frustum with a standard modelview/projection matrix, the result you get is a vertex that, once the w-divide is done, is in the [-1:1]x[-1:1]x[-1:1] cube. You want your texture coordinates to be in the [0:1]x[0:1] range, hence the remapping for x and y. It's the same kind of thing for z, assuming your DepthRange is [0:1], which is the default.
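In a shader-based pipeline the same remap is often done inline instead of with a texture-matrix bias. A minimal fragment-shader sketch, where the lightMVP and shadowMap uniform names and the interpolated worldPos input are illustrative assumptions:
#version 150
uniform sampler2D shadowMap; // depth texture rendered from the light
uniform mat4 lightMVP;       // light's projection * modelview, without bias
in vec4 worldPos;            // position passed on from the vertex shader
out vec4 fragColor;

void main() {
    vec4 clip = lightMVP * worldPos;  // light clip space
    vec3 ndc  = clip.xyz / clip.w;    // unit cube [-1,1] after the w-divide
    vec3 uvz  = ndc * 0.5 + 0.5;      // the bias: remap [-1,1] to [0,1]
    // darken the fragment if the light's depth map is closer than this point
    float lit = texture(shadowMap, uvz.xy).r < uvz.z ? 0.5 : 1.0;
    fragColor = vec4(vec3(lit), 1.0);
}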