How to get the Previous View Projection Matrix? - glsl

I'm working on porting the motion blur example from NVIDIA's GPU Gems to GLSL (ES).
How can one get hold of the 'previous view projection matrix' from inside a vertex program?
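There is no built-in "previous" matrix to query; the usual trick is to keep last frame's view-projection product on the CPU and feed it to the shader as a second uniform each frame. A minimal C sketch (the uniform names and the `end_of_frame` hook are assumptions, not part of the GPU Gems code):

```c
#include <string.h>

/* Column-major 4x4 multiply, as OpenGL expects: out = a * b. */
void mat4_mul(float out[16], const float a[16], const float b[16])
{
    for (int c = 0; c < 4; ++c)
        for (int r = 0; r < 4; ++r) {
            float s = 0.0f;
            for (int k = 0; k < 4; ++k)
                s += a[k * 4 + r] * b[c * 4 + k];
            out[c * 4 + r] = s;
        }
}

/* Identity on the very first frame, so the first blur pass is a no-op. */
float prev_view_proj[16] = { 1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1 };

void end_of_frame(const float proj[16], const float view[16])
{
    float view_proj[16];
    mat4_mul(view_proj, proj, view);
    /* During the next frame you would upload both matrices, e.g.:
       glUniformMatrix4fv(u_prev_view_proj, 1, GL_FALSE, prev_view_proj);
       glUniformMatrix4fv(u_view_proj,      1, GL_FALSE, view_proj);      */
    memcpy(prev_view_proj, view_proj, sizeof prev_view_proj);
}
```

In the vertex shader you then transform the position by both matrices and derive the per-vertex velocity from the difference, as the GPU Gems chapter does.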

Related

Adjusting the distortion caused by a projector in OpenGL ES

I'm getting started with an OpenGL ES app (C++/SDL) running on a Raspberry Pi that I want to output through a pocket projector.
I want to give the user the ability to correct the distortion caused by aiming the projector from a direction that is non-normal to the surface. The surface will also be smaller than the projected area. To do this, the user will be able to "move" the 4 corners of the projection window independently to match the surface.
I was planning to do this by solving a simple linear system of equations that transforms the original corners (in the current projection matrix coordinates) to the corners set by the user, and just push this resulting matrix on top of the GL_PROJECTION matrix stack.
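The corner-mapping step described here is a classic 4-point homography solve: 8 equations in 8 unknowns, solvable with Gaussian elimination. A sketch of that plan (all function names are hypothetical, and the resulting 3x3 would still need to be embedded in a 4x4 before multiplying it onto the projection):

```c
#include <math.h>
#include <string.h>

/* Gauss-Jordan elimination with partial pivoting on an 8x8 system
   stored as an augmented 8x9 matrix. Returns 0 if singular. */
static int solve8x8(double a[8][9], double x[8])
{
    for (int col = 0; col < 8; ++col) {
        int piv = col;
        for (int r = col + 1; r < 8; ++r)
            if (fabs(a[r][col]) > fabs(a[piv][col])) piv = r;
        if (fabs(a[piv][col]) < 1e-12) return 0;
        for (int c = 0; c < 9; ++c) {
            double t = a[col][c]; a[col][c] = a[piv][c]; a[piv][c] = t;
        }
        for (int r = 0; r < 8; ++r) {
            if (r == col) continue;
            double f = a[r][col] / a[col][col];
            for (int c = col; c < 9; ++c) a[r][c] -= f * a[col][c];
        }
    }
    for (int r = 0; r < 8; ++r) x[r] = a[r][8] / a[r][r];
    return 1;
}

/* Find the row-major 3x3 projective transform h (h[8] fixed to 1) that
   maps the four src corners onto the four dst corners. */
int find_homography(const double src[4][2], const double dst[4][2], double h[9])
{
    double a[8][9];
    memset(a, 0, sizeof a);
    for (int i = 0; i < 4; ++i) {
        double x = src[i][0], y = src[i][1];
        double u = dst[i][0], v = dst[i][1];
        double *r1 = a[2 * i], *r2 = a[2 * i + 1];
        r1[0] = x; r1[1] = y; r1[2] = 1; r1[6] = -u * x; r1[7] = -u * y; r1[8] = u;
        r2[3] = x; r2[4] = y; r2[5] = 1; r2[6] = -v * x; r2[7] = -v * y; r2[8] = v;
    }
    double sol[8];
    if (!solve8x8(a, sol)) return 0;
    memcpy(h, sol, 8 * sizeof(double));
    h[8] = 1.0;
    return 1;
}
```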
However... I found out that I can't "read" the projection matrix in OpenGL ES:
float matrix[16];
glGetFloatv(GL_PROJECTION_MATRIX, matrix);
In particular, the GL_PROJECTION_MATRIX symbol doesn't exist... and I've even read that there's no such thing as a projection matrix in OpenGL ES (something I find hard to believe, but I'm totally new to it, so...).
Any idea of a workaround for reading this matrix, or maybe an alternative approach to do what I'm trying to do?
Thanks!
OpenGL ES 1.x has a projection matrix, and you should be able to get the current value with exactly the code you have in your question. See http://www.khronos.org/opengles/sdk/1.1/docs/man/glGet.xml for the documentation confirming this, or the header file at http://www.khronos.org/registry/gles/api/GLES/gl.h.
ES 2.0 and higher are a different story. ES 2.0 eliminates most of the fixed pipeline that was present in ES 1.x, and replaces it with shader based rendering.
Therefore, concepts like a built-in projection matrix don't exist anymore in ES 2.0 and are replaced with the much more flexible concept of shader programs written in GLSL. If you want to use a projection matrix in ES 2.0, you have a matrix variable in your vertex shader code, pass a value for the matrix into the shader program (using the glUniformMatrix4fv API call), and the GLSL code you provide for the vertex shader performs the multiplication of vectors with the projection matrix.
So in the case of ES 2.0, there is no need for a glGet*() call for the projection matrix. If you have one, you calculated it yourself and passed it into the shader program, which means you already have the matrix in your code. You could get any uniform's value back with the glGetUniformfv() API call, but it's much easier and cleaner to just keep it around in your code.

OpenGL: set model-to-world matrix

I can read the current modelview matrix using glGetFloatv(GL_MODELVIEW_MATRIX, modelviewMatrix);
But how do I set the model-to-world matrix, rather than just reading it back with glGetFloatv?
The code example you gave gets the current value of the modelview matrix, which represents the transformation from model space to eye (camera) space. This code does not affect the viewport, which is set using glViewport(x, y, width, height). Assuming you are using fixed-function OpenGL, as indicated by your use of GL_MODELVIEW_MATRIX, you can manipulate the modelview matrix using the built-in functions or by loading your own matrix. See this related question for more details on setting the modelview matrix: How to update opengl modelview matrix with my own 4x4 matrix?.

OpenGL 360 degree perspective

I'm looking to capture a 360 degree - spherical panorama - photo of my scene. How can I do this best? If I have it right, I can't do this the ordinary way of setting the perspective to 360.
If I would need a vertex shader, is there one available?
This is actually a nontrivial thing to do.
In a naive approach, a vertex shader that transforms the vertex positions not by matrix multiplication but by feeding them through trigonometric functions may seem to do the trick. The problem is that this will not make straight lines "curvy". You could use a tessellation shader to add sufficient geometry to compensate for this.
The most straightforward approach is two-fold. First you render your scene into a cubemap, i.e. render with a 90°×90° FOV into the 6 directions making up a cube. This allows you to use a regular perspective projection for rendering the scene.
In a second step you use the generated cubemap to texture a screen filling grid, where the texture coordinates of each vertex are azimuth and elevation.
Another approach is to use tiled rendering with a very small FOV and a rotating "camera", much like taking a panoramic picture without a wide-angle lens. As a matter of fact, the cubemap-based approach is tiled rendering; it's just easier to get right than trying to do this directly with changing camera directions and viewport placements.

What are good reasons to enable 2D projection with cocos2d-iphone?

In cocos2d-iphone the default projection type is "3D" projection. But you can also set the projection to "2D" like so:
[[CCDirector sharedDirector] setProjection:CCDirectorProjection2D];
Behind the scenes, the 3D projection uses a perspective projection whereas the 2D projection is the OpenGL orthographic projection. The technical details of these two projection modes can be reviewed here; that's not what I'm interested in.
What are the benefits and drawbacks of 2D projection for cocos2d users? What are good reasons to switch to 2D projection?
Personally I've used 2D projection to be able to use depth buffering for isometric tilemaps. Isometric tilemaps require this for correct z ordering of tiles and objects on the tilemap.
I've also used 2D projection with depth buffering in non-tilemap projects to get complete z order control via the vertexZ property. This project used a pseudo isometric display where the vertexZ of an object is based on its Y coordinate.
That means I've been using 2D projection only to be able to use the vertexZ property, which also requires enabling depth buffering. Are there any other reasons one might want to switch to 2D projection?
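For reference, the pseudo-isometric mapping mentioned above amounts to something like the following (a hypothetical sketch; the exact formula, direction, and constants depend on your game and depth-test setup):

```c
/* Map a node's Y position to a vertexZ value so that objects lower on
   the screen get a larger vertexZ (drawn in front). Requires depth
   buffering and 2D projection, as described above. */
float vertex_z_for_y(float y, float screen_height)
{
    return (screen_height - y) / screen_height; /* 1 at bottom, 0 at top */
}
```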
Switching to 2D projection is a life saver in the following scenario:
1. You create a big CCRenderTexture.
2. You draw a bunch of stuff on it, either using [... visit] or OpenGL drawing functions.
3. You add the render texture to your layer, e.g. in order for the things you drew in step 2 to serve as the background for your game.
With 3D projection, the texture will be rendered with vertical and/or horizontal fault lines. See e.g. http://www.cocos2d-x.org/boards/6/topics/16197, which is for cocos2d-x, but I have observed the same effect with cocos2d-iphone, and setting the projection to 2D got rid of the problem.
I have switched to 2D projection as the only means to resolve font rendering issues with CCLabel nodes, both bitmap-font and TTF-based labels. This is not always the cause of a font issue, but it has resolved some problems for me when all else failed.

Why would it be beneficial to have a separate projection matrix, yet combine model and view matrix?

When you are learning 3D programming, you are taught that it's easiest to think in terms of 3 transformation matrices:
The Model Matrix. This matrix is individual to every single model and it rotates and scales the object as desired and finally moves it to its final position within your 3D world. "The Model Matrix transforms model coordinates to world coordinates".
The View Matrix. This matrix is usually the same for a large number of objects (if not for all of them) and it rotates and moves all objects according to the current "camera position". If you imagine that the 3D scene is filmed by a camera and what is rendered on the screen are the images captured by this camera, then the location of the camera and its viewing direction define which parts of the scene are visible and how the objects appear on the captured image. There are few reasons for changing the view matrix while rendering a single frame, but they do in fact exist (e.g. by rendering the scene twice and changing the view matrix in between, you can create a very simple yet impressive mirror within your scene). Usually the view matrix changes only once between two frames being drawn. "The View Matrix transforms world coordinates to eye coordinates."
The Projection Matrix. The projection matrix decides how those 3D coordinates are mapped to 2D coordinates, e.g. whether a perspective is applied to them (objects get smaller the farther they are away from the viewer) or not (orthogonal projection). The projection matrix hardly ever changes at all. It may have to change if you are rendering into a window and the window size has changed, or if you are rendering full screen and the resolution has changed, but only if the new window size/screen resolution has a different display aspect ratio than before. There are some crazy effects for which you may want to change this matrix, but in most cases it's pretty much constant for the whole life of your program. "The Projection Matrix transforms eye coordinates to screen coordinates."
This all makes a lot of sense to me. Of course one could always combine all three matrices into a single one, since multiplying a vector first by matrix A and then by matrix B is the same as multiplying the vector by matrix C, where C = B * A.
Now if you look at classical OpenGL (OpenGL 1.x/2.x), OpenGL knows a projection matrix. Yet OpenGL does not offer a model or a view matrix; it only offers a combined model-view matrix. Why? This design forces you to permanently save and restore the "view matrix", since it will get "destroyed" by model transformations applied to it. Why aren't there three separate matrices?
If you look at the new OpenGL versions (OpenGL 3.x/4.x) and you don't use the classical render pipeline but customize everything with shaders (GLSL), there are no built-in matrices available any longer at all; you have to define your own matrices. Still, most people keep the old concept of a projection matrix and a model-view matrix. Why would you do that? Why not use either three matrices, which means you don't have to permanently save and restore the model-view matrix, or a single combined model-view-projection (MVP) matrix, which saves you a matrix multiplication in your vertex shader for every single vertex rendered (after all, such a multiplication doesn't come for free either)?
So to summarize my question: What advantage does a combined model-view matrix together with a separate projection matrix have over three separate matrices or a single MVP matrix?
Look at it practically. First, the fewer matrices you send, the fewer matrices you have to multiply with positions/normals/etc., and therefore the faster your vertex shaders.
So point 1: fewer matrices is better.
However, there are certain things you probably need to do. Unless you're doing 2D rendering or some simple 3D demo applications, you are going to need to do lighting. This typically means that you're going to need to transform positions and normals into either world or camera (view) space, then do some lighting operations on them (either in the vertex shader or the fragment shader).
You can't do that if you only go from model space to projection space. You cannot do lighting in post-projection space, because that space is non-linear. The math becomes much more complicated.
So, point 2: You need at least one stop between model and projection.
So we need at least 2 matrices. Why model-to-camera rather than model-to-world? Because working in world space in shaders is a bad idea. You can encounter numerical precision problems related to translations that are distant from the origin. Whereas, if you worked in camera space, you wouldn't encounter those problems, because nothing is too far from the camera (and if it is, it should probably be outside the far depth plane).
Therefore: we use camera space as the intermediate space for lighting.
In most cases your shader will need the geometry in world or eye coordinates for shading, so you have to separate the projection matrix from the model and view matrices.
Making your shader multiply the geometry by two matrices hurts performance. Assuming each model has thousands (or more) of vertices, it is more efficient to compute a model-view matrix on the CPU once and let the shader do one less matrix-vector multiplication.
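A sketch of that trade-off: multiply view * model once per object on the CPU and upload the combined matrix, so the shader does a single matrix-vector multiply per vertex (uniform names are assumptions):

```c
/* Column-major 4x4 multiply: out = a * b. */
void mat4_mul(float out[16], const float a[16], const float b[16])
{
    for (int c = 0; c < 4; ++c)
        for (int r = 0; r < 4; ++r) {
            float s = 0.0f;
            for (int k = 0; k < 4; ++k)
                s += a[k * 4 + r] * b[c * 4 + k];
            out[c * 4 + r] = s;
        }
}

/* Done once per object on the CPU, instead of once per vertex on the GPU. */
void combined_modelview(float out[16], const float view[16], const float model[16])
{
    mat4_mul(out, view, model);
    /* glUniformMatrix4fv(u_modelview, 1, GL_FALSE, out);
       In GLSL: gl_Position = u_projection * (u_modelview * a_position); */
}
```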
I have just solved a z-buffer fighting problem by separating the projection matrix. There is no visible increase in GPU load. The two following screenshots show the two results; pay attention to the green and white layers fighting.