Adjusting the distortion caused by a projector in OpenGL ES - C++

I'm getting started with an OpenGL ES app (C++/SDL) running on a Raspberry Pi that I want to output through a pocket projector.
I want to give the user the ability to correct the distortion caused by aiming the projector from a direction that is non-normal to the surface. The surface will also be smaller than the projected area. To do this, the user will be able to "move" the 4 corners of the projection window independently to match the surface.
I was planning to do this by solving a simple linear system of equations that transforms the original corners (in the current projection matrix coordinates) to the corners set by the user, and just push this resulting matrix on top of the GL_PROJECTION matrix stack.
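That corner-to-corner mapping is a 2D homography, so the linear system I have in mind looks something like this (a sketch only; the function name and the unit-square source corners are my own choices, and degenerate quads aren't handled):

```cpp
#include <array>
#include <cassert>
#include <cmath>
#include <utility>

// Solve the 8x8 linear system for a 3x3 homography H that maps the four
// source corners (u,v) to the four user-adjusted corners (x,y).
// Returned row-major as {a,b,c,d,e,f,g,h}, with H[2][2] fixed to 1.
// Assumes the destination quad is non-degenerate.
std::array<double, 8> solveHomography(const double src[4][2],
                                      const double dst[4][2]) {
    double A[8][9] = {};
    for (int i = 0; i < 4; ++i) {
        double u = src[i][0], v = src[i][1];
        double x = dst[i][0], y = dst[i][1];
        double* r0 = A[2 * i];
        double* r1 = A[2 * i + 1];
        // x = (a*u + b*v + c) / (g*u + h*v + 1)
        r0[0] = u; r0[1] = v; r0[2] = 1; r0[6] = -u * x; r0[7] = -v * x; r0[8] = x;
        // y = (d*u + e*v + f) / (g*u + h*v + 1)
        r1[3] = u; r1[4] = v; r1[5] = 1; r1[6] = -u * y; r1[7] = -v * y; r1[8] = y;
    }
    // Gauss-Jordan elimination with partial pivoting.
    for (int col = 0; col < 8; ++col) {
        int piv = col;
        for (int r = col + 1; r < 8; ++r)
            if (std::fabs(A[r][col]) > std::fabs(A[piv][col])) piv = r;
        for (int k = 0; k < 9; ++k) std::swap(A[col][k], A[piv][k]);
        for (int r = 0; r < 8; ++r) {
            if (r == col) continue;
            double f = A[r][col] / A[col][col];
            for (int k = col; k < 9; ++k) A[r][k] -= f * A[col][k];
        }
    }
    std::array<double, 8> h;
    for (int i = 0; i < 8; ++i) h[i] = A[i][8] / A[i][i];
    return h;
}
```

The resulting 3x3 could then be embedded into a 4x4 (with the z row/column passed through) to be multiplied onto the projection, which is why I need to read the current matrix back.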
However... I found out that I can't "read" the projection matrix in OpenGL ES:
float matrix[16];
glGetFloatv(GL_PROJECTION_MATRIX, matrix);
In particular, the GL_PROJECTION_MATRIX symbol doesn't exist... and I've even read that there's no such thing as a projection matrix in OpenGL ES (something I find hard to believe, but I'm totally new to it, so...).
Any idea of a workaround for reading this matrix, or maybe an alternative approach to do what I'm trying to do?
Thanks!

OpenGL ES 1.x has a projection matrix, and you should be able to get the current value with exactly the code you have in your question. See http://www.khronos.org/opengles/sdk/1.1/docs/man/glGet.xml for the documentation confirming this, or the header file at http://www.khronos.org/registry/gles/api/GLES/gl.h.
ES 2.0 and higher are a different story. ES 2.0 eliminates most of the fixed pipeline that was present in ES 1.x, and replaces it with shader based rendering.
Therefore, concepts like a built-in projection matrix don't exist anymore in ES 2.0 and are replaced with the much more flexible concept of shader programs written in GLSL. If you want to use a projection matrix in ES 2.0, you have a matrix variable in your vertex shader code, pass a value for the matrix into the shader program (using the glUniformMatrix4fv API call), and the GLSL code you provide for the vertex shader performs the multiplication of vectors with the projection matrix.
So in the case of ES 2.0, there is no need for a glGet*() call for the projection matrix. If you have one, you calculated it, and passed it into the shader program. Which means that you already have the matrix in your code. Well, you could get any matrix value back with the glGetUniformfv() API call, but it's much easier and cleaner to just keep it around in your code.
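A minimal sketch of that approach: build the projection matrix on the CPU (here using the classic gluPerspective formula, which isn't part of ES but is standard math) and keep it around in your own code; the upload call is shown only as a comment since there is no GL context here.

```cpp
#include <cassert>
#include <cmath>

// Build a column-major perspective matrix (gluPerspective-style formula).
// You keep this array in your application and upload it with, e.g.:
//   glUniformMatrix4fv(u_projection_location, 1, GL_FALSE, m);
// so there is never a need to read it back from GL.
void perspective(float fovyDeg, float aspect, float zNear, float zFar,
                 float m[16]) {
    const float f = 1.0f / std::tan(fovyDeg * 3.14159265f / 360.0f);
    for (int i = 0; i < 16; ++i) m[i] = 0.0f;
    m[0]  = f / aspect;
    m[5]  = f;
    m[10] = (zFar + zNear) / (zNear - zFar);
    m[11] = -1.0f;
    m[14] = 2.0f * zFar * zNear / (zNear - zFar);
}
```

The matching vertex shader then just does `gl_Position = u_projection * a_position;` (or multiplies in the modelview as well).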

Related

How to use OpenGL Column_Major MVP Matrix in Direct3D 11

I am trying to use a single matrix stack for both the OpenGL and Direct3D APIs. From all my research on this site and other articles (Article 1 and Article 2, among others), it is my understanding that this should be fairly easy as long as certain nuances are handled and consistency is maintained.
Also, this MSDN Article suggests that HLSL by default uses column_major format.
So I have a working right-handed-coordinate-system, column-major MVP matrix for OpenGL.
I am trying to use this same matrix in DirectX. Since it is a Right-Handed System, I made sure that I do set rasterizerDesc.FrontCounterClockwise = true;
I have the following in my HLSL shader :
output.position = mul(u_mvpMatrix, input.position);
Note that in the above code I am using post-multiplication: my MVP matrix is column-major, and I send it to the shader without transposing it, so HLSL should receive it as column-major, hence the post-multiplication.
Note: I did try pre-multiplication as well, and both orders with the transpose of my MVP.
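For reference, this is the product I expect `mul(u_mvpMatrix, input.position)` to compute when the matrix is declared/uploaded column-major; a CPU-side sketch (illustrative names, not part of any API):

```cpp
#include <cassert>

// Column-major 4x4 times vec4: element (row r, col c) lives at m[c*4 + r].
// This is the math that post-multiplication, mul(M, v), should perform on
// a column_major matrix in HLSL.
void mulMat4Vec4(const float m[16], const float v[4], float out[4]) {
    for (int r = 0; r < 4; ++r) {
        out[r] = 0.0f;
        for (int c = 0; c < 4; ++c)
            out[r] += m[c * 4 + r] * v[c];
    }
}
```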
When calculating the projection matrix, as suggested in Article 2 above, I make sure that the depth range for DX is 0 to 1, while for OpenGL it is -1 to 1!
I even tested my resulting matrix against the XMMatrixPerspectiveFovRH function from the DirectXMath library and my matrix matches the one produced by that function.
Now, with the rasterizer flag FrontCounterClockwise = true, a right-handed column-major MVP matrix, and correct depth scaling, I would have expected this to be all that is needed to get D3D to work with my matrix.
What is it that I am missing here?
P.S: I believe I provided all the information here, but please let me know if any more information is needed.

rotation inside OpenGL shading language

I wonder if there is a way to compute a rotation matrix using the OpenGL shading language, i.e. doing the calculation in a shader (optimally from two given vectors, using quaternions).
Some background:
I have a project where I want to implement a light field on the vertices of a 3D mesh (+interpolation), which requires vectors to be in local coordinates (Torrance, Lafortune, etc...).
This requires calculating a rotation matrix quite a number of times (the number of vertices should be scalable). This could be done in normal source code on the CPU, but I was hoping to find a way to use the power of the graphics card to do this job for me.
So far I couldn't find any hint in the OpenGL manual itself, nor anywhere else...
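One common construction for "rotation from two given vectors via quaternions" looks like the following. This is a CPU-side C++ sketch for clarity, but the same math ports line-for-line into a GLSL function (everything here is my own naming; it assumes both vectors are normalized and not exactly opposite):

```cpp
#include <cassert>
#include <cmath>

// Build a 3x3 rotation matrix (row-major) that rotates unit vector a onto
// unit vector b, using the quaternion q ~ (cross(a,b), 1 + dot(a,b)).
// Assumes a and b are normalized and not antiparallel.
void rotationBetween(const float a[3], const float b[3], float R[3][3]) {
    // Unnormalized quaternion: vector part = cross(a,b), scalar = 1 + dot(a,b).
    float x = a[1]*b[2] - a[2]*b[1];
    float y = a[2]*b[0] - a[0]*b[2];
    float z = a[0]*b[1] - a[1]*b[0];
    float w = 1.0f + a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
    float n = std::sqrt(x*x + y*y + z*z + w*w);
    x /= n; y /= n; z /= n; w /= n;
    // Standard quaternion-to-matrix conversion.
    R[0][0] = 1 - 2*(y*y + z*z); R[0][1] = 2*(x*y - z*w); R[0][2] = 2*(x*z + y*w);
    R[1][0] = 2*(x*y + z*w); R[1][1] = 1 - 2*(x*x + z*z); R[1][2] = 2*(y*z - x*w);
    R[2][0] = 2*(x*z - y*w); R[2][1] = 2*(y*z + x*w); R[2][2] = 1 - 2*(x*x + y*y);
}
```

In a vertex shader this would run once per vertex, which is exactly the "scalable with vertex count" behavior asked for.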

How do I retrieve the ModelView matrix in GLSL 1.5?

I have been reading through the GLSL 1.5 specification and saw that any reference to what used to be a variable holding the ModelView matrix (like gl_ModelViewMatrix) has been deprecated and is only available in some kind of compatibility mode (which happens not to be supported on my GPU).
I have seen a few examples first retrieving the ModelView matrix or creating one, then sending it back to the GPU as a uniform variable.
Now this all seems just backward to me; even for a simple Vertex Shader you will in many cases want to use some kind of transformation on the geometry.
So I am really wondering now; is there any way to get the current ModelView matrix from within a vertex shader, using GLSL 1.5?
OpenGL-3 core completely dropped the matrix stack, i.e. the built-in modelview, projection, texture and color matrices. You are now expected to implement the matrix math yourself and supply the matrices through uniforms of your own choosing.
There has been no built-in matrix system/library in core OpenGL since version 3.
A lot of people had similar (bad) opinions about that "huge change" in OpenGL.
You have to use your own set of functions to perform matrix calculations. See libraries like GLM, or the one in Lighthouse3D.
All in all, it was very useful to have matrix functions in OpenGL when learning. Now you have to look for other solutions...
On the other side, this is not a problem for game engines or game frameworks, which usually have their own math libraries. So for them the "new" OpenGL is even easier to use.
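To give a feel for how little is actually needed, here is a minimal sketch of the kind of helpers such a library provides (column-major, matching what glUniformMatrix4fv expects; function names are illustrative):

```cpp
#include <cassert>

// Column-major 4x4 identity: 1s at indices 0, 5, 10, 15.
void mat4Identity(float m[16]) {
    for (int i = 0; i < 16; ++i) m[i] = (i % 5 == 0) ? 1.0f : 0.0f;
}

// Translation matrix: the offset goes in the last column (indices 12-14).
void mat4Translate(float m[16], float x, float y, float z) {
    mat4Identity(m);
    m[12] = x; m[13] = y; m[14] = z;
}

// out = a * b, all column-major. This replaces glMultMatrix-style stacking.
void mat4Mul(const float a[16], const float b[16], float out[16]) {
    for (int c = 0; c < 4; ++c)
        for (int r = 0; r < 4; ++r) {
            out[c*4 + r] = 0.0f;
            for (int k = 0; k < 4; ++k)
                out[c*4 + r] += a[k*4 + r] * b[c*4 + k];
        }
}
```

The composed result is then handed to the shader as a uniform, taking the place of the old gl_ModelViewMatrix.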

Orthographic Projection in Modern OpenGL

I'd like to set up an orthographic projection using only modern OpenGL techniques (i.e. no immediate-mode stuff). I'm seeing conflicting info on the web about how to approach this.
Some people are saying that it's still OK to call glMatrixMode(GL_PROJECTION) and then glOrtho. This has always worked for me in the past, but I'm wondering if this has been deprecated in modern OpenGL.
If so, are vertex shaders the standard way to do an orthographic projection nowadays? Does GLSL provide a handy built-in function to set up an orthographic projection, or do I need to write that math myself in the vertex shader?
If so, are vertex shaders the standard way to do an orthographic projection nowadays?
Not quite. Vertex shaders perform the calculations, but the transformation matrices are usually fed into the shader through a uniform. A shader should only evaluate things that vary with each vertex; implementing an "ortho" function that returns a projection matrix in GLSL would be counterproductive.
I'd like to set up an orthographic projection using only modern OpenGL techniques (i.e. no immediate-mode stuff). I'm seeing conflicting info on the web about how to approach this.
The matrix stack of OpenGL before version 3 has nothing to do with immediate mode. Immediate mode was glBegin(…); for(…){ ...; glVertex(…); } glEnd(). And up to OpenGL version 3 it was rather common to use the matrices specified through the matrix stack in a vertex shader.
OpenGL-3 aimed to do what was originally planned for OpenGL-2: a unification of the API and removal of old cruft. One of the things removed was the matrix stack. Many shaders already used more than the standard matrices anyway (for example, for skeletal animation), so matrices were already being passed as uniforms and the programs already contained the whole matrix math code. Removing the matrix math from OpenGL-3 was just the next logical step.
In general, you do not compute matrices in shaders. You compute them on the CPU and upload them as uniforms to the shaders. So your vertex shader would neither know nor care if it is for an orthographic projection or a perspective one. It just transforms by whatever projection matrix it is given.
When you target an OpenGL version above 3.0, glOrtho has been removed from the core specification (it is still present in the compatibility profile), so you shouldn't use it anymore. Therefore you will have to calculate the projection matrix you want to use yourself and supply it to the vertex shader
(for OpenGL up to 3.0, glOrtho is perfectly OK ;).
And no, there is no "handy" function to get the projection matrix in GLSL, so you have to specify it yourself. This is, however, no problem, as there is a) plenty of example code on the web and b) the equations are right in the OpenGL spec, so you can simply take them from there if you want to.
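Taking the equations from the spec, the CPU-side construction looks like this (a sketch; column-major so it can go straight into glUniformMatrix4fv, which is only referenced in a comment here):

```cpp
#include <cassert>
#include <cmath>

// Column-major orthographic projection, using the same formula the spec
// gives for glOrtho. Upload with glUniformMatrix4fv(loc, 1, GL_FALSE, m).
void ortho(float l, float r, float b, float t, float n, float f,
           float m[16]) {
    for (int i = 0; i < 16; ++i) m[i] = 0.0f;
    m[0]  =  2.0f / (r - l);
    m[5]  =  2.0f / (t - b);
    m[10] = -2.0f / (f - n);
    m[12] = -(r + l) / (r - l);
    m[13] = -(t + b) / (t - b);
    m[14] = -(f + n) / (f - n);
    m[15] = 1.0f;
}
```

The vertex shader then stays generic: `gl_Position = u_projection * a_position;` works unchanged whether u_projection is orthographic or perspective.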

Doubts in RayTracing with GLSL

I am trying to develop a basic ray tracer. So far I have calculated intersection with a plane and Blinn-Phong shading. I am working on a 500×500 window, and my primary-ray generation code is as follows:
vec3 rayDirection = vec3(gl_FragCoord.x - 250.0, gl_FragCoord.y - 250.0, 10.0);
I am not sure whether the above method is right or wrong. Please give me some insights.
I am also unsure whether we need to construct geometry in the OpenGL code while ray tracing in GLSL. For example, if I am trying to ray trace a plane, do I need to construct the plane in OpenGL code using glVertex2f?
vec3 rayDirection = vec3(gl_FragCoord.x - 250.0, gl_FragCoord.y - 250.0, 10.0);
I am not sure whether the above method is right or wrong. Please give me some insights.
There's no right or wrong with projections. You could just as well map viewport pixels to azimuth and elevation angles. Actually, your way of doing this is not bad at all. I'd just pass the viewport dimensions in an additional uniform instead of hardcoding them, and normalize the vector. The Z component effectively works like a focal length.
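That suggestion, written out as a CPU-side C++ sketch of the shader math (the parameter names are mine; in GLSL, width/height would be a uniform and fragX/fragY would come from gl_FragCoord):

```cpp
#include <cassert>
#include <cmath>

// Primary-ray direction: center the fragment coordinate on the viewport
// instead of hardcoding 250.0, then normalize. 'focal' plays the role of
// the fixed Z component (10.0 in the question); larger values narrow the
// field of view, just like a longer focal length.
void primaryRay(float fragX, float fragY, float width, float height,
                float focal, float dir[3]) {
    dir[0] = fragX - width  * 0.5f;
    dir[1] = fragY - height * 0.5f;
    dir[2] = focal;
    float len = std::sqrt(dir[0]*dir[0] + dir[1]*dir[1] + dir[2]*dir[2]);
    dir[0] /= len; dir[1] /= len; dir[2] /= len;
}
```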
I am also unsure whether we need to construct geometry in the OpenGL code while ray tracing in GLSL. For example, if I am trying to ray trace a plane, do I need to construct the plane in OpenGL code using glVertex2f?
Raytracing works on a global description containing the full scene. OpenGL primitives, however, are purely local, i.e. just individual triangles, lines or points, and OpenGL doesn't maintain a scene database. So geometry passed using the usual OpenGL drawing functions cannot be raytraced (at least not that way).
This is about the biggest obstacle for doing raytracing with GLSL: You somehow need to implement a way to deliver the whole scene as some freely accessible buffer.
It is possible to use ray marching to render certain types of complex scenes in a single fragment shader. Here are some examples (use Chrome or Firefox; requires WebGL):
Gift boxes: http://glsl.heroku.com/e#820.2
Torus Journey: http://glsl.heroku.com/e#794.0
Christmas tree: http://glsl.heroku.com/e#729.0
Modutropolis: http://glsl.heroku.com/e#327.0
The key to making this stuff work is writing "distance functions" that tell the ray marcher how far it is from the surface of an object. For more info on distance functions, see:
http://www.iquilezles.org/www/articles/distfunctions/distfunctions.htm
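The core of the technique fits in a few lines. Here is a minimal sketch in C++ for clarity (the same code works almost verbatim in GLSL with vec3): a distance function for a sphere, and a marcher that repeatedly steps the ray forward by the returned distance, which is always a safe step size.

```cpp
#include <cassert>
#include <cmath>

// Signed distance from point p to a sphere of radius r centered at c:
// negative inside, zero on the surface, positive outside.
float sphereSDF(const float p[3], const float c[3], float r) {
    float dx = p[0]-c[0], dy = p[1]-c[1], dz = p[2]-c[2];
    return std::sqrt(dx*dx + dy*dy + dz*dz) - r;
}

// March from origin ro along unit direction rd. Returns the distance t to
// the first hit, or -1.0f if the ray escapes without hitting the sphere.
float march(const float ro[3], const float rd[3], const float c[3], float r) {
    float t = 0.0f;
    for (int i = 0; i < 128; ++i) {
        float p[3] = { ro[0]+t*rd[0], ro[1]+t*rd[1], ro[2]+t*rd[2] };
        float d = sphereSDF(p, c, r);
        if (d < 1e-4f) return t;   // close enough to the surface: hit
        t += d;                    // safe step: can't overshoot the surface
        if (t > 100.0f) break;     // ray escaped the scene: miss
    }
    return -1.0f;
}
```

A full scene is just the minimum over several such distance functions, which is how the demos above pack whole worlds into one shader.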