Get Projection Matrix from OpenGL in version 3.3 - opengl

For compatibility reasons I have to mix old and modern OpenGL code. I would like to retrieve the current projection- and modelview-matrix from OpenGL to pass it to the shader by using a uniform-variable.
The following code shows my attempt to do that for the projection-matrix only. It does not work as intended:
float m[16];
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(60, aspect, 0.01f, 2.0f);
glGetFloatv(GL_PROJECTION_MATRIX, m);
After these lines, matrix m contains the identity. I checked this using the following code:
printf("%f %f %f %f\n", m[0], m[4], m[8], m[12]);
printf("%f %f %f %f\n", m[1], m[5], m[9], m[13]);
printf("%f %f %f %f\n", m[2], m[6], m[10], m[14]);
printf("%f %f %f %f\n", m[3], m[7], m[11], m[15]);
I created the context using freeglut and explicitly requested an OpenGL 3.3 context using the following code:
...
glutInitContextVersion(3, 3);
glutCreateWindow(title);
When I change the version to OpenGL 2.0, the code above works as expected; every version above that produces the described problem.
I'm working on Xubuntu with Intel Broadwell-U integrated graphics.
Can someone explain this behaviour? Can someone offer a solution?

Compatibility profiles for OpenGL 3.3 and later are an optional feature, and there is no requirement that implementations actually support them. Most OpenGL implementations in fact don't; the notable exceptions are the proprietary NVIDIA and AMD drivers. Hence mixing OpenGL 3.3 with legacy functionality will not work reliably across systems.
But why are you doing that
float m[16];
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(60, aspect, 0.01f, 2.0f);
glGetFloatv(GL_PROJECTION_MATRIX, m);
… at all? First and foremost, GLU is not part of OpenGL; GLU functions are just little helpers that internally call OpenGL functions. Anyway, the gluPerspective function is extremely simple, and the round trip through OpenGL just to read the matrix back is awkward (and, to be honest, it triggers my gag reflex).
At its core, gluPerspective just determines the parameters for glFrustum:
void gluPerspective(
    double fovyInDegrees,
    double aspect,
    double znear, double zfar)
{
    double const height = znear * tan(fovyInDegrees * M_PI / 360.0);
    double const width = height * aspect;
    glFrustum(-width, width, -height, height, znear, zfar);
}
That's all it does. glFrustum itself is very simple as well.
void glFrustum(double l, double r, double b, double t, double n, double f)
{
    double M[4][4];
    M[0][0] = 2.0*n/(r-l);
    M[0][1] = M[0][2] = M[0][3] = 0.0;
    M[1][1] = 2.0*n/(t-b);
    M[1][0] = M[1][2] = M[1][3] = 0.0;
    M[2][0] = (r+l)/(r-l);
    M[2][1] = (t+b)/(t-b);
    M[2][2] = -(f+n)/(f-n);
    M[2][3] = -1.0;
    M[3][2] = -2.0*(f*n)/(f-n);
    M[3][0] = M[3][1] = M[3][3] = 0.0;
    glMultMatrixd(&M[0][0]);
}
It's trivial to rewrite this to take a pointer to an output matrix as a parameter, which you can then pass directly to a uniform.

Related

How to texture a random convex quad in openGL

Alright, so I started looking up OpenGL tutorials for the sake of making a Minecraft mod. I still don't know too much about it, because I figured I really shouldn't have to for the small modification I want, but this is giving me such a headache. All I want is to properly map a texture to an irregular concave quad.
Like this:
I went into OpenGL to figure out how to do this before trying to run code in the game. I've read that I need to do a perspective-correct transformation, and I've gotten it to work for trapezoids, but for the life of me I can't figure out how to do it when neither pair of edges is parallel. I've looked at this: http://www.imagemagick.org/Usage/scripts/perspective_transform, but I really don't have a clue where the "8 constants" the author talks about came from, or whether it will even help me. I've also been told to do calculations with matrices, but I've got no idea how much of that OpenGL does or doesn't take care of.
I've looked at several posts regarding this, and the answer I need is somewhere in them, but I can't make heads or tails of 'em. I can never find a post that tells me what arguments to put into glTexCoord4f() to get the perspective-correct transform.
If you're thinking of the "Getting to know the Q coordinate" page as a solution to my problem, I'm afraid I've already looked at it, and it has failed me.
Is there something I'm missing? I feel like this should be a lot easier. I find it hard to believe that OpenGL, with all its bells and whistles, would have nothing for making something other than a rectangle.
So, I hope I haven't pissed you off too much with my cluelessness, and thanks in advance.
EDIT: I think I need to make clear that I know OpenGL does the perspective transform for you when your view of the quad is not orthogonal; I'd know to just change the z coordinates or my fov. I'm looking to smoothly texture a non-rectangular quadrilateral, not put a rectangular shape in a certain fov.
OpenGL will do a perspective-correct transform for you. I believe you're simply facing the issue of quad vs. triangle interpolation. The difference between affine and perspective-correct transforms is related to the geometry being in 3D, where interpolation in image space is non-linear. Think of looking down a road: the evenly spaced lines appear more frequent in the distance. Anyway, back to triangles vs. quads...
Here are some related posts:
How to do bilinear interpolation of normals over a quad?
Low polygon cone - smooth shading at the tip
https://gamedev.stackexchange.com/questions/66312/quads-vs-triangles
Applying color to single vertices in a quad in opengl
An answer to this one provides a possible solution, but it's not simple:
The usual approach to solve this, is by performing the interpolation "manually" in a fragment shader, that takes into account the target topology, in your case a quad. Or in short you have to perform barycentric interpolation not based on a triangle but on a quad. You might also want to apply perspective correction.
The first thing you should know is that nothing is easy with OpenGL. It's a very complex state machine with a lot of quirks and a poor interface for developers.
That said, I think you're confusing a lot of different things. To draw a textured rectangle with perspective correction, you simply draw a textured rectangle in 3D space after setting the projection matrix appropriately.
First, you need to set up the projection you want. From your description, you need a perspective projection. In OpenGL, you usually deal with two main matrices: projection and model-view. The projection matrix is sort of like your "camera".
How you do this depends on whether you're using legacy OpenGL (before version 3.0) or modern core-profile OpenGL. The code below shows both ways, depending on which you're using.
void BuildPerspProjMat(float *m, float fov, float aspect, float znear, float zfar)
{
    const float PI_OVER_360 = 0.00872664626f; // pi/360: degrees -> half-angle radians
    float xymax = znear * tan(fov * PI_OVER_360);
    float ymin = -xymax;
    float xmin = -xymax;
    float width = xymax - xmin;
    float height = xymax - ymin;
    float depth = zfar - znear;
    float q = -(zfar + znear) / depth;
    float qn = -2 * (zfar * znear) / depth;
    float w = 2 * znear / width;
    w = w / aspect;
    float h = 2 * znear / height;
    m[0]  = w; m[1]  = 0; m[2]  = 0;  m[3]  = 0;
    m[4]  = 0; m[5]  = h; m[6]  = 0;  m[7]  = 0;
    m[8]  = 0; m[9]  = 0; m[10] = q;  m[11] = -1;
    m[12] = 0; m[13] = 0; m[14] = qn; m[15] = 0;
}
And here is how to use it in OpenGL 1 / OpenGL 2 code:
float m[16] = {0};
float fov=60.0f; // in degrees
float aspect=1.3333f;
float znear=1.0f;
float zfar=1000.0f;
BuildPerspProjMat(m, fov, aspect, znear, zfar);
glMatrixMode(GL_PROJECTION);
glLoadMatrixf(m);
// okay we can switch back to modelview mode
// for all other matrices
glMatrixMode(GL_MODELVIEW);
With real OpenGL 3.0 core-profile code, we must use GLSL shaders and uniform variables to pass and use the transformation matrices:
float m[16] = {0};
float fov=60.0f; // in degrees
float aspect=1.3333f;
float znear=1.0f;
float zfar=1000.0f;
BuildPerspProjMat(m, fov, aspect, znear, zfar);
glUseProgram(shaderId);
GLint loc = glGetUniformLocation(shaderId, "projMat");
glUniformMatrix4fv(loc, 1, GL_FALSE, m);
RenderObject();
glUseProgram(0);
Since I've not used Minecraft, I don't know whether it gives you a projection matrix to use, or whether you have the other information needed to construct one.

How to efficiently draw 3D lines using gluCylinder?

I have this code that draws a 3D line between two points:
void glLine(Point3D (&i)[2], double const width, int const slices = 360)
{
    double const d[3] = { i[1].X - i[0].X, i[1].Y - i[0].Y, i[1].Z - i[0].Z };
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    GLUquadric *const quadric = gluNewQuadric();
    double z[3] = { 0, 0, 1 };
    double const angle = acos(dot(z, d) / sqrt(dot(d, d) * dot(z, z)));
    cross(z, d);  // z is overwritten with the rotation axis z x d
    glTranslated(i[0].X, i[0].Y, i[0].Z);
    glRotated(angle * 180 / M_PI, z[0], z[1], z[2]);
    gluCylinder(quadric, width / 2, width / 2, sqrt(dot(d, d)), slices, 1);
    glPopMatrix();
    gluDeleteQuadric(quadric);
}
The trouble is, it's extremely slow because of the math that computes the rotation from the unit z vector to the given direction.
How can I make OpenGL (hopefully, the GPU) perform the arithmetic instead of the CPU?
Do not render lines using cylinders. Rather, render quads using a geometry shader, oriented to face the viewer and with correct screen-space scaling.
Have a look here: Wide lines in geometry shader behaves oddly

OpenGL Camera class

I'm trying to create a simple Camera class for OpenGL using C++. So it basically uses the following functions:
void setModelviewMatrix () {
    float m[16];
    Vector3 eVec(eye.x, eye.y, eye.z);
    m[0]=u.x; m[4]=u.y; m[8]=u.z;  m[12]=-eVec.dotProduct(u);
    m[1]=v.x; m[5]=v.y; m[9]=v.z;  m[13]=-eVec.dotProduct(v);
    m[2]=n.x; m[6]=n.y; m[10]=n.z; m[14]=-eVec.dotProduct(n);
    m[3]=0;   m[7]=0;   m[11]=0;   m[15]=1.0;
    glMatrixMode(GL_MODELVIEW);
    glLoadMatrixf(m);
}
void set (Point3 newEye, Point3 newLook, Vector3 newUp) {
    eye.set(newEye);
    n.set(eye.x-newLook.x, eye.y-newLook.y, eye.z-newLook.z);
    u.set(newUp.crossProduct(n));
    n.normalize();
    u.normalize();
    v.set(n.crossProduct(u));
    setModelviewMatrix();
}
void setShape (float vAng, float asp, float nearD, float farD) {
    viewAngle = vAng;
    aspect = asp;
    nearDist = nearD;
    farDist = farD;
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(viewAngle, aspect, nearDist, farDist);
}
So I tested it by replacing the working standard camera setup (see below) with my new camera setup (below as well). From my point of view, the new setup does exactly the same thing as the old one. But still, the result is different: With the new setup the only thing I can see is a blank white screen and not the object that was displayed before. Why is that?
Old camera setup (working):
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-64/48.0, 64/48.0, -1.0, 1.0, 0.1, 100.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(2.3, 1.3, 2, 0, 0.25, 0, 0, 1, 0);
New setup (not working):
Point3 eye = Point3(2.3, 1.3, 2);
Point3 look = Point3(0, 0.25, 0);
Vector3 up = Vector3(0, 1, 0);
cam.setShape(30.0f, 64.0f/48.0f, 0.1f, 100.0f);
cam.set(eye, look, up);
You have forgotten that OpenGL uses column-major matrices. Assuming those are basis vectors, each should span a column instead of a row. This is the layout OpenGL uses for matrices stored in memory:
http://i.msdn.microsoft.com/dynimg/IC534287.png
This ought to take care of your problem (transposed 3x3 part):
void setModelviewMatrix () {
GLfloat m[16];
m[0]=u.x; m[4]=v.x; m[ 8]=n.x; m[12]=eye.x;
m[1]=u.y; m[5]=v.y; m[ 9]=n.y; m[13]=eye.y;
m[2]=u.z; m[6]=v.z; m[10]=n.z; m[14]=eye.z;
m[3]=0.0f; m[7]=0.0f; m[11]=0.0f; m[15]=1.0f;
glMatrixMode(GL_MODELVIEW);
glLoadMatrixf(m);
}
You are free to use whatever memory layout you want as long as your matrix class does all of the calculations, but the second you pass a matrix to OpenGL, it expects this particular layout. Even if you are using shaders, OpenGL expects matrix uniforms to be column-major by default.
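To make the layout concrete, here is a tiny sketch (a helper of my own, not from the question) showing where a translation lands in the 16-float array OpenGL expects; element (row, col) lives at index col*4 + row:

```c
/* Column-major 4x4: element (row, col) is m[col*4 + row].
 * A translation by (tx, ty, tz) therefore puts the offsets at
 * indices 12, 13, 14 -- the start of the fourth column. */
void translationMatrix(float m[16], float tx, float ty, float tz)
{
    for (int i = 0; i < 16; ++i)
        m[i] = (i % 5 == 0) ? 1.0f : 0.0f; /* identity: indices 0, 5, 10, 15 */
    m[12] = tx; m[13] = ty; m[14] = tz;
}
```

The same indexing rule explains why the question's row-major basis vectors ended up in the wrong slots.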
Sorry, the problem had nothing to do with the implementation of these methods. It was actually a problem with Visual Studio: I had switched from C to C++ without changing the project setup, and that's what caused it. I tried implementing the camera functions in C and it worked.
Thanks for your help, though.

Camera rotation in OpenGL not using glRotate glLookAt

I am trying to write my own rotation function for a camera in OpenGL, but I can't get it to work. My camera is mainly from flipcode, with some minor changes:
Camera code:
Camera::Camera(float x, float y, float z) {
    memset(Transform, 0, 16*sizeof(float));
    Transform[0]  = 1.0f;
    Transform[5]  = 1.0f;
    Transform[10] = 1.0f;
    Transform[15] = 1.0f;
    Transform[12] = x; Transform[13] = y; Transform[14] = z;
    Left     = &Transform[0];
    Up       = &Transform[4];
    Forward  = &Transform[8];
    Position = &Transform[12];
    old_x = 0;
    old_y = 0;
}
The view is set before every rendered frame:
void Camera::setView() {
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    float viewmatrix[16] = { // Remove the three - for non-inverted z-axis
        Transform[0], Transform[4], -Transform[8],  0,
        Transform[1], Transform[5], -Transform[9],  0,
        Transform[2], Transform[6], -Transform[10], 0,
        -(Transform[0]*Transform[12] +
          Transform[1]*Transform[13] +
          Transform[2]*Transform[14]),
        -(Transform[4]*Transform[12] +
          Transform[5]*Transform[13] +
          Transform[6]*Transform[14]),
        // add a - like above for non-inverted z-axis
         (Transform[8]*Transform[12] +
          Transform[9]*Transform[13] +
          Transform[10]*Transform[14]), 1 };
    glLoadMatrixf(viewmatrix);
}
Now to my problem: the rotation. Consider, for example, rotation around the y-axis. This is done via the matrix stack:
// deg is the angle; it does not work with either degrees or radians
void Camera::rotateLocal_y(float deg){
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadMatrixf(Transform);
    rotateMatrixf_y(Transform, deg);
    glGetFloatv(GL_MODELVIEW_MATRIX, Transform);
    glPopMatrix();
}
So next I am going to show the rotation function:
// rotate a matrix around the y axis
void rotateMatrixf_y(float *aMatrix, float angle){
    //                    x           y   z            t
    float rotMatrix[] = { cos(angle), 0, -sin(angle),  0,
                          0,          1,  0,           0,
                          sin(angle), 0,  cos(angle),  0,
                          0,          0,  0,           1 };
    multMatrixMatrix(rotMatrix, aMatrix);
}
And finally the matrix multiplication function:
void multMatrixMatrix(float* m_a, float* m_b){
    float m_c[16] = {
        m_a[0]*m_b[0]+m_a[4]*m_b[1]+m_a[8]*m_b[2]+m_a[12]*m_b[3],
        m_a[0]*m_b[4]+m_a[4]*m_b[5]+m_a[8]*m_b[6]+m_a[12]*m_b[7],
        m_a[0]*m_b[8]+m_a[4]*m_b[9]+m_a[8]*m_b[10]+m_a[12]*m_b[11],
        m_a[0]*m_b[12]+m_a[4]*m_b[13]+m_a[8]*m_b[14]+m_a[12]*m_b[15],
        m_a[1]*m_b[0]+m_a[5]*m_b[1]+m_a[9]*m_b[2]+m_a[13]*m_b[3],
        m_a[1]*m_b[4]+m_a[5]*m_b[5]+m_a[9]*m_b[6]+m_a[13]*m_b[7],
        m_a[1]*m_b[8]+m_a[5]*m_b[9]+m_a[9]*m_b[10]+m_a[13]*m_b[11],
        m_a[1]*m_b[12]+m_a[5]*m_b[13]+m_a[9]*m_b[14]+m_a[13]*m_b[15],
        m_a[2]*m_b[0]+m_a[6]*m_b[1]+m_a[10]*m_b[2]+m_a[14]*m_b[3],
        m_a[2]*m_b[4]+m_a[6]*m_b[5]+m_a[10]*m_b[6]+m_a[14]*m_b[7],
        m_a[2]*m_b[8]+m_a[6]*m_b[9]+m_a[10]*m_b[10]+m_a[14]*m_b[11],
        m_a[2]*m_b[12]+m_a[6]*m_b[13]+m_a[10]*m_b[14]+m_a[14]*m_b[15],
        m_a[3]*m_b[0]+m_a[7]*m_b[1]+m_a[11]*m_b[2]+m_a[15]*m_b[3],
        m_a[3]*m_b[4]+m_a[7]*m_b[5]+m_a[11]*m_b[6]+m_a[15]*m_b[7],
        m_a[3]*m_b[8]+m_a[7]*m_b[9]+m_a[11]*m_b[10]+m_a[15]*m_b[11],
        m_a[3]*m_b[12]+m_a[7]*m_b[13]+m_a[11]*m_b[14]+m_a[15]*m_b[15]
    };
    m_b = m_c;
}
I thought this must be it, but it seems as if something is fundamentally wrong. It is not moving at all, even though the camera is properly set. The method order is: cam.rotate, then cam.setView.
Flipcodes originial rotate function:
void Camera::rotateLoc(float deg, float x, float y, float z) {
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadMatrixf(Transform);
    glRotatef(deg, x, y, z);
    glGetFloatv(GL_MODELVIEW_MATRIX, Transform);
    glPopMatrix();
}
Your code is pretty messy and incomplete.
I think your problem is here :
glPushMatrix();
glLoadMatrixf(Transform);        // give the Transform matrix to GL (why?)
rotateMatrixf_y(Transform, deg); // modify the Transform matrix
glGetFloatv(GL_MODELVIEW_MATRIX, Transform); // (3) retrieve the original Transform matrix
glPopMatrix();
(3) just undoes whatever changes you made to 'Transform' by calling 'rotateMatrixf_y'.
The flipcode code you added uses OpenGL to update the Transform matrix by calling 'glRotatef' and reading back the result, which is fine. In your method, you should just remove every reference to OpenGL and keep only the call to rotateMatrixf_y, which does all the work on its own.
Do you really understand what the GL matrix stack is for? You should perhaps go back to the basics and either use only GL functions or only your own, but get to know why each works before mixing the two.
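One more thing worth noting (not raised in the answer above): the posted multMatrixMatrix never stores its result, since m_b = m_c; only reassigns the local pointer. A sketch of a version that actually writes the product back, stored column-major as the rest of the code assumes (the posted initializer also appears to order the results transposed):

```c
#include <string.h>

/* Column-major 4x4 product: m_b = m_a * m_b.
 * Element (row, col) lives at index col*4 + row, matching OpenGL. */
void multMatrixMatrix(float* m_a, float* m_b)
{
    float m_c[16];
    for (int col = 0; col < 4; ++col)
        for (int row = 0; row < 4; ++row) {
            float s = 0.0f;
            for (int k = 0; k < 4; ++k)
                s += m_a[k*4 + row] * m_b[col*4 + k];
            m_c[col*4 + row] = s;
        }
    memcpy(m_b, m_c, sizeof m_c); /* m_b = m_c; would only move a local pointer */
}
```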

Function for perspective projection of a matrix in C++

Does anyone have a function that returns the perspective projection of a 3x3 matrix in C++?
Matrix Perspective()
{
    Matrix m(0, 0, 0); // Creates identity matrix
    // Perspective projection formulas here
    return m;
}
Here's one that returns it in a 4x4 matrix, using the formula from the OpenGL gluPerspective man page:
static void my_PerspectiveFOV(double fov, double aspect, double near, double far, double* mret) {
    double D2R = M_PI / 180.0;
    double yScale = 1.0 / tan(D2R * fov / 2);
    double xScale = yScale / aspect;
    double nearmfar = near - far;
    double m[] = {
        xScale, 0, 0, 0,
        0, yScale, 0, 0,
        0, 0, (far + near) / nearmfar, -1,
        0, 0, 2*far*near / nearmfar, 0
    };
    memcpy(mret, m, sizeof(double)*16);
}
With OpenCV 2.0 you can almost implement your pseudocode.
There's a Mat class for matrices and perspectiveTransform for perspective projection. And Mat::eye returns an identity matrix.
The documentation I've linked to is for OpenCV 1.1 (which is in C), but it's quite simple to infer the correct usage in OpenCV 2.0 (with the Mat class) from the manual.
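If pulling in OpenCV is too heavy, the core of a 2D perspective (projective) transform is small enough to write directly. A sketch in plain C (applyHomography is my own name; it mirrors what perspectiveTransform does for each point):

```c
/* Apply a 3x3 projective matrix H (row-major) to a 2D point:
 * [x', y', w'] = H * [x, y, 1], then divide by w'. */
void applyHomography(const double H[9], double x, double y,
                     double *outX, double *outY)
{
    double xp = H[0]*x + H[1]*y + H[2];
    double yp = H[3]*x + H[4]*y + H[5];
    double w  = H[6]*x + H[7]*y + H[8];
    *outX = xp / w;
    *outY = yp / w;
}
```

With H set to the identity this is a no-op; the bottom row is what introduces the perspective division.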