Fisheye projection matrix in XNA/OpenGL

I'm looking for a projection matrix I can use in 3D that will give me the effect of a fisheye. I'm not looking for a pixel shader or anything like that that manipulates pixels, but the actual projection matrix used to project from 3D space onto 2D.
Thanks.

That's not really possible. In homogeneous coordinates, matrices transform lines to lines. So any solution based solely on matrices will necessarily fail to bend lines like you want to.
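To spell that out: a line written in homogeneous coordinates as p(t) = a + t·b is acted on linearly by any 4×4 matrix M,

\[
M(\mathbf{a} + t\,\mathbf{b}) = M\mathbf{a} + t\,M\mathbf{b},
\]

so its image is again a line (or collapses to a point), and the perspective divide of a homogeneous line is still a straight line on screen. A fisheye, by definition, has to curve straight lines.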

Carlos isn't wrong, but you might want to try playing with the field-of-view (FOV) parameter in your projection matrix builder.
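It won't bend straight lines into a true fisheye, but a very wide FOV gives a strongly exaggerated perspective. A minimal sketch with GLM (the question mentions XNA, where the equivalent knob is the first argument of Matrix.CreatePerspectiveFieldOfView); the aspect ratio and clip distances are arbitrary example values:

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Same scene, two projections: only the vertical field of view differs.
float aspect = 16.0f / 9.0f;                     // example window aspect ratio
glm::mat4 normalProjection = glm::perspective(glm::radians(60.0f),  aspect, 0.1f, 100.0f);
glm::mat4 wideProjection   = glm::perspective(glm::radians(120.0f), aspect, 0.1f, 100.0f);
```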

Carlos is right. There is a way you could fake it, but you will have to re-render your scene multiple times.
Basically, you start by figuring out how to do two-point perspective, which I would do by rendering the scene twice with a projection matrix whose vanishing point alternates between the two sides. Then you combine the two parts, I guess using a stencil mask.
You could do something like four-point perspective by combining images with four vanishing points, and you can repeat that process as many times as you like.
What you're doing then is projecting onto a polyhedron that approximates a sphere.
I could explain more, but my guess is it sounds too complicated.
The simplest way to fake it is to render to a texture, distort the image, and render it as a fullscreen quad.
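For that last approach, the distortion itself is just a texture-coordinate remapping applied over a fullscreen quad (normally in a fragment shader). Here is a rough C++ sketch of one simple radial remapping you could start from; the strength constant and the exact formula are arbitrary choices, not the one true fisheye:

```cpp
#include <cmath>

struct Vec2 { float x, y; };

// Given an output pixel's texture coordinates in [0, 1], return where to sample
// the previously rendered scene texture to get a simple radial distortion.
Vec2 fisheyeLookup(Vec2 uv, float strength /* e.g. ~0.5 */)
{
    // Move to [-1, 1] coordinates centred on the screen.
    float cx = uv.x * 2.0f - 1.0f;
    float cy = uv.y * 2.0f - 1.0f;

    // Pull the sample position towards the centre, more strongly near the edges;
    // tweak the sign and strength to taste.
    float r     = std::sqrt(cx * cx + cy * cy);
    float scale = 1.0f / (1.0f + strength * r * r);
    cx *= scale;
    cy *= scale;

    // Back to [0, 1] texture coordinates.
    return { cx * 0.5f + 0.5f, cy * 0.5f + 0.5f };
}
```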

Related

OpenGL show vertex in 2D or 3D

I have a question.
Is it possible to have two vertex shaders, one that shows the vertices in 2D and a second that shows them in 3D?
At the moment my program has a 2D view.
The only difference between a vertex in 2D and in 3D is that
vec2(x,y) in 3D becomes vec3(x,y,z). So I am thinking about sending a vec3 to the GPU and setting gl_Position.z = 0;
My biggest problem is that I chose magic numbers for glm::lookAt and glm::perspective; if I can see something, it means it works. So when I have both a 2D and a 3D view, everything looks bad.
I can move the camera, so doing 3D by only changing the camera position won't work.
No, this is not possible. But you can always render the same geometry multiple times with different glViewport and projection matrices applied. This is the canonical way to render the classical "top, front, side, perspective" views of 3D editors.
My biggest problem is that I chose magic numbers for glm::lookAt and glm::perspective.
Well, then I'd tackle that problem and, instead of magic numbers, use actual math to create the desired effect.
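As a rough illustration of that suggestion (a sketch, not drop-in code): the same geometry drawn twice into two halves of the window, once with an orthographic and once with a perspective projection, with the parameters derived from the window size rather than magic numbers. drawGeometry and the camera placement are placeholders:

```cpp
#include <GL/gl.h>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

void renderBothViews(int width, int height, void (*drawGeometry)(const glm::mat4& mvp))
{
    float halfAspect = (width / 2.0f) / height;   // each view gets half the window

    // Left half: 2D (orthographic) view.
    glViewport(0, 0, width / 2, height);
    glm::mat4 ortho2D = glm::ortho(-halfAspect, halfAspect, -1.0f, 1.0f, -1.0f, 1.0f);
    drawGeometry(ortho2D);

    // Right half: 3D (perspective) view with a camera pulled back along +Z.
    glViewport(width / 2, 0, width / 2, height);
    glm::mat4 proj = glm::perspective(glm::radians(60.0f), halfAspect, 0.1f, 100.0f);
    glm::mat4 view = glm::lookAt(glm::vec3(0.0f, 0.0f, 5.0f),   // eye
                                 glm::vec3(0.0f),                // target
                                 glm::vec3(0.0f, 1.0f, 0.0f));   // up
    drawGeometry(proj * view);
}
```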

Can I rotate the view of glOrtho()?

I'm making a program where I need an orthographic projection.
So, I'm using glOrtho(). I made a zoom function, but I was wondering whether you can rotate your view?
Because glOrtho() only looks parallel to the other planes.
Or is there another projection which can do that?
gluLookAt can rotate, but it changes dimensions the further things are from the camera.
I read
How can I make the glOrtho parallelepiped rotating?
but it didn't give me an answer.
What's essential here is that you usually split the operations into two matrices: the (Model)View and the Projection.
While glOrtho() is usually called with glMatrixMode(GL_PROJECTION), all operations for moving and rotating the camera (such as glRotate*, glTranslate* and gluLookAt) should be preceded by glMatrixMode(GL_MODELVIEW).
In the fixed pipeline, the final position of a vertex is calculated by multiplying the input data by these two matrices, and the projection used (orthographic, perspective or some nonlinear whatnot) is separate from the camera transformations.
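A small fixed-function sketch of that split, with arbitrary example values: glOrtho stays in GL_PROJECTION, and the rotation that acts like a camera goes into GL_MODELVIEW:

```cpp
#include <GL/gl.h>

void setupRotatedOrthoView(float yawDegrees, float pitchDegrees)
{
    // Projection: a plain orthographic volume, no rotation in here.
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(-10.0, 10.0, -10.0, 10.0, -100.0, 100.0);

    // ModelView: rotate the whole scene, which is equivalent to rotating the
    // camera the opposite way. Zooming is done by shrinking the glOrtho volume.
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glRotatef(pitchDegrees, 1.0f, 0.0f, 0.0f);
    glRotatef(yawDegrees,   0.0f, 1.0f, 0.0f);
    // ... draw the scene here ...
}
```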

OpenGL drawing 2D and 3D at the same time

I have a 3D scene I'm drawing, and I want to draw a rectangle for dialogue text that will be stretched across the whole width of the screen. What's the best way to achieve this with performance in mind?
I found out about glOrtho(), which I can use for exact pixel placement, but since it's a matrix multiplication task, won't my app feel much heavier during scenes with dialogue?
If yes, should I try to find a math solution to find the X position of my left window corner at some Z distance and draw a QUAD from there? (Is this even possible?)
glOrtho() is the way to go.
In OpenGL's rendering pipeline, every vertex is transformed (projected) from eye coordinates to clip coordinates before rasterization starts. Whether your projection matrix is used for a 3D perspective or a 2D orthographic projection, it's still one matrix multiplication per vertex.
glOrtho() will change your projection matrix to an orthographic one, but the matrix only needs to be generated once (and regenerated only when the window size changes), and the equations required to do so are very simple:
(image credit: MSDN)
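For reference, the matrix that glOrtho(l, r, b, t, n, f) loads, as documented on the OpenGL reference pages, is:

\[
O =
\begin{pmatrix}
\dfrac{2}{r-l} & 0 & 0 & -\dfrac{r+l}{r-l} \\
0 & \dfrac{2}{t-b} & 0 & -\dfrac{t+b}{t-b} \\
0 & 0 & \dfrac{-2}{f-n} & -\dfrac{f+n}{f-n} \\
0 & 0 & 0 & 1
\end{pmatrix}
\]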
Once you have a projection matrix, don't let the thought of matrix multiplication scare you. It's exactly what video cards are designed to do and it's hardly a frightening task for any processor or GPU these days.
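A minimal fixed-function sketch of such a frame (the scene and dialogue drawing calls, and the overlay size, are placeholders): a perspective projection for the 3D pass, then glOrtho in pixel units for the overlay:

```cpp
#include <GL/gl.h>
#include <GL/glu.h>

void renderFrame(int windowWidth, int windowHeight)
{
    // --- 3D pass ---
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(60.0, (double)windowWidth / windowHeight, 0.1, 100.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    // ... camera transform and drawScene() here ...

    // --- 2D overlay pass ---
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, windowWidth, windowHeight, 0, -1, 1);   // origin at top-left, units = pixels
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glDisable(GL_DEPTH_TEST);                          // the overlay always draws on top
    // ... drawDialogueQuad(0, windowHeight - 100, windowWidth, 100) here ...
    glEnable(GL_DEPTH_TEST);
}
```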

Why would it be beneficial to have a separate projection matrix, yet combine model and view matrix?

When you are learning 3D programming, you are taught that it's easiest to think in terms of three transformation matrices:
The Model Matrix. This matrix is individual to every single model and it rotates and scales the object as desired and finally moves it to its final position within your 3D world. "The Model Matrix transforms model coordinates to world coordinates".
The View Matrix. This matrix is usually the same for a large number of objects (if not for all of them) and it rotates and moves all objects according to the current "camera position". If you imagine that the 3D scene is filmed by a camera and what is rendered on the screen are the images captured by this camera, then the location of the camera and its viewing direction determine which parts of the scene are visible and how the objects appear in the captured image. There is little reason to change the view matrix while rendering a single frame, but such reasons do in fact exist (e.g. by rendering the scene twice and changing the view matrix in between, you can create a very simple, yet impressive mirror within your scene). Usually the view matrix changes only once between two frames being drawn. "The View Matrix transforms world coordinates to eye coordinates".
The Projection Matrix. The projection matrix decides how those 3D coordinates are mapped to 2D coordinates, e.g. whether a perspective is applied to them (objects get smaller the farther they are from the viewer) or not (orthogonal projection). The projection matrix hardly ever changes at all. It may have to change if you are rendering into a window and the window size has changed, or if you are rendering full screen and the resolution has changed, but only if the new window size/screen resolution has a different display aspect ratio than before. There are some crazy effects for which you may want to change this matrix, but in most cases it's pretty much constant for the whole life of your program. "The Projection Matrix transforms eye coordinates to screen coordinates".
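For concreteness, here is what those three matrices typically look like when built with GLM; all the numbers are arbitrary example values, and the last two lines show how they combine:

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Model: place one object in the world (scale, rotate, translate).
glm::mat4 model = glm::translate(glm::mat4(1.0f), glm::vec3(5.0f, 0.0f, -2.0f))
                * glm::rotate(glm::mat4(1.0f), glm::radians(45.0f), glm::vec3(0, 1, 0))
                * glm::scale(glm::mat4(1.0f), glm::vec3(2.0f));

// View: where the camera is and what it looks at.
glm::mat4 view = glm::lookAt(glm::vec3(0.0f, 3.0f, 10.0f),   // eye
                             glm::vec3(0.0f, 0.0f,  0.0f),   // target
                             glm::vec3(0.0f, 1.0f,  0.0f));  // up

// Projection: perspective with a 60 degree vertical FOV and a 16:9 aspect ratio.
glm::mat4 projection = glm::perspective(glm::radians(60.0f), 16.0f / 9.0f, 0.1f, 100.0f);

// Combining them is just multiplication, applied right to left:
glm::mat4 modelView = view * model;                 // classic GL_MODELVIEW contents
glm::mat4 mvp       = projection * view * model;    // single combined matrix
```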
This all makes a lot of sense to me. Of course one could always combine all three matrices into a single one, since multiplying a vector first by matrix A and then by matrix B is the same as multiplying the vector by matrix C, where C = B * A.
Now if you look at classical OpenGL (OpenGL 1.x/2.x), OpenGL knows a projection matrix. Yet OpenGL does not offer a model or a view matrix; it only offers a combined model-view matrix. Why? This design forces you to constantly save and restore the "view matrix", since it will get "destroyed" by model transformations applied to it. Why aren't there three separate matrices?
If you look at the new OpenGL versions (OpenGL 3.x/4.x) and you don't use the classical render pipeline but customize everything with shaders (GLSL), there are no built-in matrices available any longer; you have to define your own matrices. Still, most people keep the old concept of a projection matrix and a model-view matrix. Why would you do that? Why not use either three matrices, which means you don't have to constantly save and restore the model-view matrix, or a single combined model-view-projection (MVP) matrix, which saves you a matrix multiplication in your vertex shader for every single vertex rendered (after all, such a multiplication doesn't come for free either)?
So to summarize my question: what advantage does a combined model-view matrix together with a separate projection matrix have over three separate matrices or a single MVP matrix?
Look at it practically. First, the fewer matrices you send, the fewer matrices you have to multiply with positions/normals/etc. And therefore, the faster your vertex shaders.
So point 1: fewer matrices is better.
However, there are certain things you probably need to do. Unless you're doing 2D rendering or some simple 3D demo-applications, you are going to need to do lighting. This typically means that you're going to need to transform positions and normals into either world or camera (view) space, then do some lighting operations on them (either in the vertex shader or the fragment shader).
You can't do that if you only go from model space to projection space. You cannot do lighting in post-projection space, because that space is non-linear. The math becomes much more complicated.
So, point 2: You need at least one stop between model and projection.
So we need at least 2 matrices. Why model-to-camera rather than model-to-world? Because working in world space in shaders is a bad idea. You can encounter numerical precision problems related to translations that are distant from the origin. Whereas, if you worked in camera space, you wouldn't encounter those problems, because nothing is too far from the camera (and if it is, it should probably be outside the far depth plane).
Therefore: we use camera space as the intermediate space for lighting.
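A small GLM sketch of that arrangement (the names are mine, not from any particular engine): the model-view matrix is combined once on the CPU, the projection stays separate, and lighting inputs are taken to camera space:

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_inverse.hpp>

struct ObjectMatrices {
    glm::mat4 modelView;     // takes model coordinates to camera (eye) space
    glm::mat3 normalMatrix;  // for transforming normals into the same space
};

ObjectMatrices buildObjectMatrices(const glm::mat4& view, const glm::mat4& model)
{
    ObjectMatrices m;
    m.modelView    = view * model;                                    // combined once on the CPU
    m.normalMatrix = glm::inverseTranspose(glm::mat3(m.modelView));   // handles non-uniform scale
    return m;
}

// The vertex shader then only needs two multiplies per vertex:
//   eyePos      = modelView * position;    (lighting happens here, in camera space)
//   gl_Position = projection * eyePos;
```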
In most cases your shader will need the geometry in world or eye coordinates for shading, so you have to separate the projection matrix from the model and view matrices.
Making your shader multiply the geometry by two matrices hurts performance. Assuming each model has thousands (or more) of vertices, it is more efficient to compute a model-view matrix on the CPU once and let the shader do one less matrix-vector multiplication.
I have just solved a z-buffer fighting problem by separating the projection matrix. There is no visible increase in GPU load. The two following screenshots show the two results; pay attention to the green and white layers fighting.

Can someone describe the algorithm used by Ken Silverman's Voxlap engine?

From what I gathered, he used sparse voxel octrees and raycasting. It doesn't seem like he used OpenGL or Direct3D, and when I look at the game Voxelstein it appears that miniature cubes are actually being drawn instead of just a bunch of 2D squares. That caught me off guard; I'm not sure how he is doing that without OpenGL or Direct3D.
I tried to read through the source code but it was difficult for me to understand what was going on. I would like to implement something similar and would like the algorithm to do so.
I'm interested in how he performed rendering, culling, occlusion, and lighting. Any help is appreciated.
The algorithm is closer to ray-casting than ray-tracing. You can get an explanation from Ken Silverman himself here:
https://web.archive.org/web/20120321063223/http://www.jonof.id.au/forum/index.php?topic=30.0
In short: on a grid, store an RLE list of surface voxels for each (x, y) stack of voxels (if z means "up"). Assuming 4 degrees of freedom, ray-cast across it for each vertical line on the screen, and maintain a list of visible spans which is clipped as each cube is drawn. For 6 degrees of freedom, do something similar but with scanlines that are tilted in screen space.
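My own rough sketch (not Ken Silverman's actual data structures) of what such an RLE column layout could look like in C++:

```cpp
#include <cstdint>
#include <vector>

// One run of solid voxels along z within a single (x, y) column.
struct VoxelRun {
    uint16_t zTop;                 // first solid voxel of the run
    uint16_t zBottom;              // last solid voxel of the run
    std::vector<uint32_t> colors;  // one colour per exposed surface voxel in the run
};

struct Column {
    std::vector<VoxelRun> runs;    // sorted by z, so a ray can walk them in order
};

// The whole map is a 2D array of columns; empty space costs nothing to store.
struct VoxelMap {
    int sizeX = 0, sizeY = 0;
    std::vector<Column> columns;   // sizeX * sizeY entries

    const Column& at(int x, int y) const { return columns[y * sizeX + x]; }
};
```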
I didn't look at the algorithm itself, but I can tell the following based on the screenshots:
it appears that miniature cubes are actually being drawn instead of just a bunch of 2D squares
Yep, that's how ray-tracing works. It doesn't draw 2D squares, it traces rays. If you trace your rays against many miniature cubes, you'll see many miniature cubes. The scene is represented by many miniature cubes (voxels), hence you see them when you look up close. It would be nice to actually smooth the data somehow (trace against a smoothed energy function) to make them look smoother.
I'm interested in how he performed rendering
by ray-tracing
culling
no need for culling when ray-tracing, particularly in a voxel scene. As you move along the ray you check only the voxels that the ray intersects.
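For illustration, a generic grid-marching sketch of that idea (an Amanatides & Woo style DDA over a uniform voxel grid, not Voxlap's actual per-column inner loop; occupied() is an assumed scene query):

```cpp
#include <cmath>

bool occupied(int x, int y, int z);   // assumed voxel-scene query, hypothetical

// Walk a ray through unit-sized voxels, visiting only the cells it intersects.
bool castRay(float ox, float oy, float oz,   // ray origin
             float dx, float dy, float dz,   // ray direction (normalized)
             int maxSteps,
             int& hitX, int& hitY, int& hitZ)
{
    int x = (int)std::floor(ox), y = (int)std::floor(oy), z = (int)std::floor(oz);
    int stepX = dx > 0 ? 1 : -1, stepY = dy > 0 ? 1 : -1, stepZ = dz > 0 ? 1 : -1;

    // Distance along the ray to the first cell boundary on each axis.
    auto firstBoundary = [](float o, float d, int cell, int step) {
        float next = (step > 0) ? (cell + 1.0f) : (float)cell;
        return d != 0.0f ? (next - o) / d : INFINITY;
    };
    float tMaxX = firstBoundary(ox, dx, x, stepX);
    float tMaxY = firstBoundary(oy, dy, y, stepY);
    float tMaxZ = firstBoundary(oz, dz, z, stepZ);

    // Distance along the ray needed to cross one whole cell on each axis.
    float tDeltaX = dx != 0 ? std::fabs(1.0f / dx) : INFINITY;
    float tDeltaY = dy != 0 ? std::fabs(1.0f / dy) : INFINITY;
    float tDeltaZ = dz != 0 ? std::fabs(1.0f / dz) : INFINITY;

    for (int i = 0; i < maxSteps; ++i) {
        if (occupied(x, y, z)) { hitX = x; hitY = y; hitZ = z; return true; }
        // Step into whichever neighbouring cell the ray enters next.
        if (tMaxX < tMaxY && tMaxX < tMaxZ) { x += stepX; tMaxX += tDeltaX; }
        else if (tMaxY < tMaxZ)             { y += stepY; tMaxY += tDeltaY; }
        else                                { z += stepZ; tMaxZ += tDeltaZ; }
    }
    return false;  // nothing hit within maxSteps cells
}
```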
occlusion
voxel-voxel occlusion is handled naturally by ray-tracing; it would return the first voxel hit, which is the closest. If you draw sprites you can use a Z-buffer generated by the ray-tracer.
and lighting
It's possible to approximate the local normal by looking at nearby cells and seeing which are occupied and which are not, then performing the lighting calculation. Alternatively, each voxel can store a normal along with its color or other material properties.
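A tiny sketch of the first option: estimate the normal from which neighbouring cells are occupied, here via central differences of the occupancy around the hit voxel (occupied() is again an assumed query):

```cpp
#include <cmath>

bool occupied(int x, int y, int z);  // hypothetical scene query

void approximateNormal(int x, int y, int z, float n[3])
{
    // More solid neighbours on one side pushes the normal towards the other side.
    n[0] = (occupied(x - 1, y, z) ? 1.0f : 0.0f) - (occupied(x + 1, y, z) ? 1.0f : 0.0f);
    n[1] = (occupied(x, y - 1, z) ? 1.0f : 0.0f) - (occupied(x, y + 1, z) ? 1.0f : 0.0f);
    n[2] = (occupied(x, y, z - 1) ? 1.0f : 0.0f) - (occupied(x, y, z + 1) ? 1.0f : 0.0f);

    float len = std::sqrt(n[0]*n[0] + n[1]*n[1] + n[2]*n[2]);
    if (len > 0.0f) { n[0] /= len; n[1] /= len; n[2] /= len; }
    // Sampling a larger neighbourhood (e.g. summing a 3x3x3 box per side) gives smoother normals.
}
```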