Projection map in OpenGL [duplicate] - opengl

This question already has an answer here:
Ogre/Mogre: Camera two point perspective
I need to set up a two-point perspective projection with a matrix. I know about the functions glFrustum and glOrtho, but I need to set the projection via a matrix directly. How can I do that?
I need to use a matrix like this:
0.87 0 0.5 0
0 1 0 0
1 0 -1.73 1
0.5 0 -0.87 2

As @BeylerStudios mentioned in a comment, the OpenGL docs (for version 2.1) give the matrices for glFrustum() and glOrtho().
For example, the matrix for glFrustum looks like this:
(2 * nearVal)/(right - left) 0 A 0
0 (2 * nearVal)/(top - bottom) B 0
0 0 C D
0 0 -1 0
where
A = (right + left)/(right - left)
B = (top + bottom)/(top - bottom)
C = -(farVal + nearVal)/(farVal - nearVal)
D = -(2 * farVal * nearVal)/(farVal - nearVal)
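Both glLoadMatrixf() and glMultMatrixf() accept an arbitrary 4x4 matrix as 16 floats in column-major order, so the same mechanism works for the two-point matrix in the question. Below is a minimal sketch for the glFrustum case, assuming a fixed-function OpenGL 2.1 context; the function name loadFrustum is made up for the example:

// Build the glFrustum matrix by hand and load it with glLoadMatrixf().
// OpenGL expects the 16 floats in column-major order: element index = column*4 + row.
#include <GL/gl.h>

void loadFrustum(float l, float r, float b, float t, float n, float f)
{
    const float A = (r + l) / (r - l);
    const float B = (t + b) / (t - b);
    const float C = -(f + n) / (f - n);
    const float D = -(2.0f * f * n) / (f - n);

    const float m[16] = {
        2.0f * n / (r - l), 0.0f,               0.0f,  0.0f,  // column 0
        0.0f,               2.0f * n / (t - b), 0.0f,  0.0f,  // column 1
        A,                  B,                  C,    -1.0f,  // column 2
        0.0f,               0.0f,               D,     0.0f   // column 3
    };

    glMatrixMode(GL_PROJECTION);
    glLoadMatrixf(m);   // or glMultMatrixf(m) to combine with the current matrix
    glMatrixMode(GL_MODELVIEW);
}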

Related

Creating a view matrix manually OpenGL

I'm trying to create a view matrix for my program so I can move and rotate the camera in OpenGL.
I have a camera struct that holds the position and rotation vectors. From what I understood, to create the view matrix you need to multiply the translation matrix by the rotation matrix to get the expected result.
So far I have tried creating matrices for rotation and translation and multiplying them like this:
> Transformation Matrix T =
1 0 0 -x
0 1 0 -y
0 0 1 -z
0 0 0 1
> Rotation Matrix Rx =
1 0 0 0
0 cos(-x) -sin(-x) 0
0 sin(-x) cos(-x) 0
0 0 0 1
> Rotation Matrix Ry =
cos(-y) 0 sin(-y) 0
0 1 0 0
-sin(-y) 0 cos(-y) 0
0 0 0 1
> Rotation Matrix Rz =
cos(-z) -sin(-z) 0 0
sin(-z) cos(-z) 0 0
0 0 1 0
0 0 0 1
View matrix = Rz * Ry * Rx * T
Notice that the values are negated, because if we want to move the camera to one side, the entire world has to move to the opposite side.
This solution almost works. The problem is that when the camera is not at 0, 0, 0, rotating the camera also changes its position. What I think is that if the camera is positioned at, let's say, 0, 0, -20 and I rotate it, the position should remain at 0, 0, -20, right?
I feel like I'm missing something but I can't figure out what. Any help?
Edit 1:
It's an assignment for university, so I can't use any built-in functions!
Edit 2:
I tried changing the order of the operations and putting the translation on the left side, so T * Rz * Ry * Rx, but then the models rotate around themselves instead of around the camera.
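For reference, a minimal sketch of the construction described in the question, written with GLM only to keep the example short (the assignment itself would build the same matrices by hand); column vectors, angles in radians, and the makeViewMatrix name are assumptions:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// view = Rz * Ry * Rx * T, with both the camera position and the Euler
// angles negated, exactly as laid out in the question.
glm::mat4 makeViewMatrix(const glm::vec3& position, const glm::vec3& rotation)
{
    const glm::mat4 T  = glm::translate(glm::mat4(1.0f), -position);
    const glm::mat4 Rx = glm::rotate(glm::mat4(1.0f), -rotation.x, glm::vec3(1, 0, 0));
    const glm::mat4 Ry = glm::rotate(glm::mat4(1.0f), -rotation.y, glm::vec3(0, 1, 0));
    const glm::mat4 Rz = glm::rotate(glm::mat4(1.0f), -rotation.z, glm::vec3(0, 0, 1));
    return Rz * Ry * Rx * T;  // translate first, then rotate
}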

Changing/Converting from UE4 coordinates to OpenGL coordinates

I am trying to convert from UE4 coordinates (Z up) to OpenGL coordinates (Y up) but I can't seem to get it right.
I use this matrix to transform from X+ right, Y+ up, Z- deep to Y+ right, Z+ up, X+ deep:
[mat = M]
0 1 0 0
0 0 1 0
-1 0 0 0
0 0 0 1
But I end up getting the wrong result, and the rotation is also off...
I get the actor transform T from this->GetTransform(),
then
FTransform R = T * M
Then I call R.ToMatrixWithScale() and send the result to the OpenGL backend.
But the orientation is wrong... What am I doing wrong?
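As a quick sanity check of what the matrix M from the question actually does, the snippet below (plain C++, no UE4 types, column-vector convention assumed) applies M to each source basis vector and prints where it ends up:

#include <cstdio>

int main()
{
    // The remapping matrix M from the question, written row by row.
    const float M[4][4] = {
        { 0, 1, 0, 0},
        { 0, 0, 1, 0},
        {-1, 0, 0, 0},
        { 0, 0, 0, 1}
    };
    const float axes[3][4] = {{1, 0, 0, 0}, {0, 1, 0, 0}, {0, 0, 1, 0}};
    const char* names[3]   = {"X", "Y", "Z"};

    for (int a = 0; a < 3; ++a) {
        float out[4] = {0, 0, 0, 0};
        for (int r = 0; r < 4; ++r)
            for (int c = 0; c < 4; ++c)
                out[r] += M[r][c] * axes[a][c];
        std::printf("%s -> (%g, %g, %g)\n", names[a], out[0], out[1], out[2]);
    }
    return 0;
}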

OpenGL: How are base vectors laid out in memory

this topic has been discussed quite a few times. There is a lot of information on the memory layout of matrices in OpenGL on the internet. Sadly, different sources often contradict each other.
My question boils down to:
When I have three base vectors of my matrix bx, by and bz. If I want to make a matrix out of them to plug them into a shader, how are they laid out in memory?
Lets clarify what I mean by base vector, because I suspect this can also mean different things:
When I have a 3D model that is Z-up and I want to lay it down flat in my world space along the X-axis, then bz is [1 0 0]. I.e. a vertex [0 0 2] in model space will be transformed to [2 0 0] when that vertex is multiplied by my matrix that has bz as the base vector for the Z-axis.
Coming to OpenGL matrix memory layout:
According to the GLSL spec (GLSL Spec p.110) it says:
vec3 v, u;
mat3 m;
u = v * m;
is equivalent to
u.x = dot(v, m[0]); // m[0] is the left column of m
u.y = dot(v, m[1]); // dot(a,b) is the inner (dot) product of a and b
u.z = dot(v, m[2]);
So, in order to get the best performance, I should pre-multiply my vertices in the vertex shader (that way the GPU can use the dot products and so on):
attribute vec4 vertex;
uniform mat4 mvp;
void main()
{
    gl_Position = vertex * mvp;
}
Now OpenGL is said to be column-major (GLSL Spec p 101). I.e. the columns are laid out contiguously in memory:
[ column 0 | column 1 | column 2 | column 3 ]
[ 0 1 2 3 | 4 5 6 7 | 8 9 10 11 | 12 13 14 15 ]
or:
[
0 4 8 12,
1 5 9 13,
2 6 10 14,
3 7 11 15,
]
This would mean that I have to store my base vectors in the rows like this:
bx.x bx.y bx.z 0
by.x by.y by.z 0
bz.x bz.y bz.z 0
0 0 0 1
So for my example with the 3D model that I want to lay flat down, it has the base vectors:
bx = [0 0 -1]
by = [0 1 0]
bz = [1 0 0]
The model vertex [0 0 2] from above would be transformed like this in the vertex shader:
// m[0] is [ 0 0 1 0]
// m[1] is [ 0 1 0 0]
// m[2] is [-1 0 0 0]
// v is [ 0 0 2 1]
u.x = dot([ 0 0 2 1], [ 0 0 1 0]);
u.y = dot([ 0 0 2 1], [ 0 1 0 0]);
u.z = dot([ 0 0 2 1], [-1 0 0 0]);
// u is [ 2 0 0]
Just as expected!
On the contrary:
This SO question, Correct OpenGL matrix format?, and consequently the OpenGL FAQ state:
For programming purposes, OpenGL matrices are 16-value arrays with base vectors laid out contiguously in memory. The translation components occupy the 13th, 14th, and 15th elements of the 16-element matrix, where indices are numbered from 1 to 16 as described in section 2.11.2 of the OpenGL 2.1 Specification.
This says that my base vectors should be laid out in columns like this:
bx.x by.x bz.x 0
bx.y by.y bz.y 0
bx.z by.z bz.z 0
0 0 0 1
To me these two sources which both are official documentation from Khronos seem to contradict each other.
Can somebody explain this to me? Have I made a mistake? Is there indeed some wrong information?
The FAQ is correct, it should be:
bx.x by.x bz.x 0
bx.y by.y bz.y 0
bx.z by.z bz.z 0
0 0 0 1
and it's your reasoning that is flawed.
Assuming that your base vectors bx, by, bz are the model basis given in world coordinates, then the transformation from the model-space vertex v to the world space vertex Bv is given by linear combination of the base vectors:
B*v = bx*v.x + by*v.y + bz*v.z
It is not a dot product of b with v; instead it's a matrix multiplication, where B has the form above.
Taking a dot product of a vertex u with bx would answer the inverse question: given a world-space u what would be its coordinates in the model space along the axis bx? Therefore multiplying by the transposed matrix transpose(B) would give you the transformation from world space to model space.
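To make the layout concrete, here is a minimal sketch of the FAQ's convention in code, assuming GL 2.0 entry points are available (e.g. through a loader) and the usual gl_Position = m * vertex convention; bx, by, bz and the uniform location are placeholders:

#include <GL/gl.h>

// The base vectors go into the columns of the matrix, and the columns are
// stored contiguously in the array passed to glUniformMatrix4fv.
// The translation ends up in elements 12-14, i.e. the 13th-15th values.
void uploadModelMatrix(GLint location,
                       const float bx[3], const float by[3], const float bz[3],
                       const float t[3])
{
    const GLfloat m[16] = {
        bx[0], bx[1], bx[2], 0.0f,  // column 0: X base vector
        by[0], by[1], by[2], 0.0f,  // column 1: Y base vector
        bz[0], bz[1], bz[2], 0.0f,  // column 2: Z base vector
        t[0],  t[1],  t[2],  1.0f   // column 3: translation
    };
    glUniformMatrix4fv(location, 1, GL_FALSE, m);  // GL_FALSE: data is already column-major
}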

Tetris Rotation without arrays

I am writing a Tetris clone; it is almost done except for the collisions. For example, in order to move the Z piece I use this method:
void PieceZ::movePieceDown()
{
    drawBlock(x1, y1++);
    drawBlock(x2, y2++);
    drawBlock(x3, y3++);
    drawBlock(x4, y4++);
}
and in order to rotate a piece I use a setter (because the coordinates are private). For rotation I use a 90-degree clockwise rotation matrix. For example, if I want to rotate (x1,y1) with (x2,y2) as my origin, I get the x and y of the new block like this:
newX = (y1-y2) + x2;
newY = (x2-x1) + y2 + 1;
That works to some extent; it starts out as:
0 0 0 0
0 1 1 0
0 0 1 1
0 0 0 0
Then as planned it rotates to:
0 0 0 1
0 0 1 1
0 0 1 0
0 0 0 0
And then it rotates to Piece S:
0 0 0 0
0 0 1 1
0 1 1 0
0 0 0 0
And then it just alternates between the second and the third stages.
My calculations are wrong, but I can't figure out where; I just need a little hint.
Ok here is how it should go (somewhat):
Determine the point you want to rotate the piece around (this could be the upper or lower corner or the center) and call it origin
Calculate the new x relative to the origin: newX = y - origin.y
Calculate the new y relative to the origin: newY = -(x - origin.x)
Then translate back by adding origin.x and origin.y to get the final coordinates, as in the sketch below.
This should work (I got the idea from Wikipedia and rotation matrices: https://en.wikipedia.org/wiki/Transformation_matrix)
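A minimal sketch of that rotation step (the Block struct and the rotate90 name are assumptions for the example):

struct Block { int x, y; };

// Rotate a block 90 degrees about the chosen origin using (dx, dy) -> (dy, -dx),
// the same mapping used in the question, then translate back to absolute coordinates.
Block rotate90(Block p, Block origin)
{
    const int dx = p.x - origin.x;
    const int dy = p.y - origin.y;
    return { origin.x + dy, origin.y - dx };
}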

Inverse perspective transformation of a warped image

@Iwillnotexist Idonotexist presented his code for image perspective transformation (rotations around 3 axes): link
I'm looking for a function (or math) to make an inverse perspective transformation.
Let's assume that my "input image" is the result of his warpImage() function, and that all the angles (theta, phi and gamma), scale and fovy are also known.
I'm looking for a function (or the math) to compute the inverse transformation (the black border doesn't matter) and get back the original image.
How can I do this?
The basic idea is that you need to find the inverse transformation. In the linked question they have F = P T R1 R2, where P is the projective transformation, T is a translation, and R1, R2 are two rotations.
Denote the inverse transformation by F*. We can write the inverse as F* = R2* R1* T* P*. Note that the order is reversed. Three of these are easy: R1* is just another rotation with the angle negated, so the first inverse rotation would be
cos th sin th 0 0
R1* = -sin th cos th 0 0
0 0 1 0
0 0 0 1
Note the signs on the two sin terms are reversed.
The inverse of a translation is just a translation in the opposite direction.
1 0 0 0
T*= 0 1 0 0
0 0 1 h
0 0 0 1
You can check these by calculating T* T, which should give the identity matrix.
The trickiest bit is the projective component. We have
cot(fv/2) 0 0 0
P = 0 cot(fv/2) 0 0
0 0 -(f+n)/(f-n) -2 f n / (f-n)
0 0 -1 0
The inverse of this is
tan(fv/2) 0 0 0
P*= 0 tan(fv/2) 0 0
0 0 0 -1
0 0 (n-f)/(2 f n) (f+n)/(2 f n)
Wolfram alpha inverse with v=fv
You then need to multiply these together in the reverse order to get the final matrix.
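If you already have the composed forward matrix F as a 4x4, a shortcut is to invert it numerically instead of composing the analytic inverses. A minimal sketch with GLM (the names F, applyInverse and clipPoint are placeholders for this example):

#include <glm/glm.hpp>

// glm::inverse(F) equals R2* R1* T* P*, so applying it followed by the
// perspective divide undoes the forward projective transformation.
glm::vec4 applyInverse(const glm::mat4& F, const glm::vec4& clipPoint)
{
    const glm::mat4 Finv = glm::inverse(F);
    glm::vec4 p = Finv * clipPoint;
    return p / p.w;  // perspective divide
}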
I also had issues back-transforming my image.
You need to store the points
ptsInPt2f and ptsOutPt2f
which are computed in the 'warpMatrix' method.
To back-transform, simply use the same method
M = getPerspectiveTransform(ptsOutPt2f, ptsInPt2f);
but with reversed param order (output as first argument, input as second).
Afterwards a simple crop will get rid of all the black.
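A minimal sketch of that approach with the OpenCV C++ API (ptsInPt2f/ptsOutPt2f are assumed to be the stored corner correspondences from the forward warp; the image and function names are placeholders):

#include <opencv2/imgproc.hpp>
#include <vector>

cv::Mat unwarpImage(const cv::Mat& warped,
                    const std::vector<cv::Point2f>& ptsInPt2f,
                    const std::vector<cv::Point2f>& ptsOutPt2f,
                    const cv::Size& originalSize)
{
    // Same call as the forward warp, but with the point sets swapped:
    // map the warped (output) corners back onto the original (input) corners.
    const cv::Mat M = cv::getPerspectiveTransform(ptsOutPt2f, ptsInPt2f);

    cv::Mat restored;
    cv::warpPerspective(warped, restored, M, originalSize);
    return restored;  // crop afterwards if any black border remains
}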