LWJGL 2.9 gluProject 3D to 2D

So I've been converting 3D object coordinates to 2D screen-space coordinates in LWJGL using GLU.gluProject, but I'm running into a problem when the xyz of the 3D object is behind the camera. The screen-space coordinates appear on screen twice: once for the actual position, which works fine, but again for when the object is behind the camera, and those positions are somewhat inverted relative to the object's true position (the camera moves left, and so do the screen coordinates, twice as fast as the camera).
Here's the code I'm using for 3D to 2D:
import java.nio.FloatBuffer;
import java.nio.IntBuffer;
import org.lwjgl.BufferUtils;
import org.lwjgl.opengl.Display;
import org.lwjgl.opengl.GL11;
import org.lwjgl.util.glu.GLU;

public static float[] get2DFrom3D(float x, float y, float z) {
    FloatBuffer screen = BufferUtils.createFloatBuffer(3);
    IntBuffer view = BufferUtils.createIntBuffer(16);
    FloatBuffer model = BufferUtils.createFloatBuffer(16);
    FloatBuffer proj = BufferUtils.createFloatBuffer(16);
    GL11.glGetFloat(GL11.GL_MODELVIEW_MATRIX, model);
    GL11.glGetFloat(GL11.GL_PROJECTION_MATRIX, proj);
    GL11.glGetInteger(GL11.GL_VIEWPORT, view);
    boolean res = GLU.gluProject(x, y, z, model, proj, view, screen);
    if (res) {
        // gluProject returns window coordinates with the origin at the
        // bottom left, so flip y to get a top-left origin.
        return new float[] {screen.get(0), Display.getHeight() - screen.get(1), screen.get(2)};
    }
    return null;
}
Another query is what the screen.get(2) value is used for, as it mostly varies between 0.8 and 1.1, but occasionally reaches -18 or 30 when the position is just below the camera and the camera pitch sits just above or below the horizon.
Any help is appreciated.

Points behind the camera (or on the camera plane) can never be correctly projected. This case can only be handled for primitives like lines or triangles. During rendering, the primitives are clipped against the viewing frustum, so that new vertices (and new primitives) are generated. But this is impossible to do for a single point; you always need lines or polygon edges to calculate any meaningful intersection point.
Individual points, and that is all gluProject handles, can either be inside or outside of the frustum. But gluProject does not care about that; it just applies the transformations, mirroring points behind the camera to positions in front of it. It is the responsibility of the caller to ensure that the points to project are actually inside of the viewing frustum.
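A useful sanity check, incidentally: the third value returned by gluProject (screen.get(2) in the question) is the window-space depth, which with the default depth range lies in [0, 1] for points inside the frustum; values like -18 or 30 are another symptom of projecting a point that lies outside it. As a rough sketch of the caller-side test (written in C++/GLM for illustration rather than LWJGL; the matrix names are placeholders for whatever the caller already has), a point is safely projectable only if its clip-space coordinate satisfies -w <= x, y, z <= w with w > 0:

#include <glm/glm.hpp>

// Returns true if the point survives clipping and can be meaningfully
// projected. "model", "view" and "projection" are illustrative names.
bool isProjectable(const glm::vec3& p, const glm::mat4& model,
                   const glm::mat4& view, const glm::mat4& projection)
{
    glm::vec4 clip = projection * view * model * glm::vec4(p, 1.0f);
    // Behind the camera => clip.w <= 0; outside the frustum => some
    // component outside [-w, w]. Either way, skip the projection.
    return clip.w > 0.0f
        && -clip.w <= clip.x && clip.x <= clip.w
        && -clip.w <= clip.y && clip.y <= clip.w
        && -clip.w <= clip.z && clip.z <= clip.w;
}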


OpenGL 3.0 Window Collision Detection

Can someone tell me how to make triangle vertices collide with edges of the screen?
For the math library I am using GLM, and for window creation and keyboard/mouse input I am using GLFW.
I created a perspective matrix and a simple array of triangle vertices.
Then I multiplied all of this in the vertex shader like so:
gl_Position = projection * view * model * vec4(pos, 1.0);
Projection matrix is defined as:
glm::mat4 projection = glm::perspective(
    45.0f, (GLfloat)screenWidth / (GLfloat)screenHeight, 0.1f, 100.0f);
I have a fully working camera and projection. I can move around my "world" and see the triangle standing there. The problem is that I want to make sure the triangle collides with the edges of the screen.
What I did was disable the camera and only enable keyboard movement. Then I initialized the translation matrix as glm::translate(model, glm::vec3(xMove, yMove, -2.5f)); and a scale matrix to scale by 0.4.
Now all of that is working fine. When I press RIGHT the triangle moves to the right, when I press UP the triangle moves up, etc. The problem is I have no idea how to make it stop moving when it hits the edges.
This is what I have tried:
triangleRightVertex is a glm::vec3 object (the bottom-right vertex).
0.4 is the scaling value that I used in the scaling matrix.
if (((xMove + triangleRightVertex.x) * 0.4f) >= 1.0f)
{
    cout << "Right side collision detected!" << endl;
}
When I move the triangle to the right, it does detect the collision when the x of the third vertex (the bottom-right corner of the triangle) hits the right side, but it goes a little bit beyond the edge before it detects it. And when I tried moving up, it detected the collision when half of the triangle was already above the top edge.
I have no idea what to do here; can someone explain this to me, please?
Each of the vertex coordinates of the triangle is transformed by the model matrix from model space to world space, by the view matrix from world space to view space, and by the projection matrix from view space to clip space. gl_Position is the homogeneous coordinate in clip space, which is further transformed by the perspective divide from clip space to normalized device space. Normalized device space is a cube with a left, bottom, near corner at (-1, -1, -1) and a right, top, far corner at (1, 1, 1).
All the geometry that is inside this cube (volume) is "visible" on the viewport.
The clipping of the scene is performed in clip space.
A point is inside the clip volume if its x, y and z components are in the range defined by the negated w component and the w component of its homogeneous coordinate:
-w <= x, y, z <= w
What you want to do is check whether a vertex x coordinate of the triangle is clipped, so you have to check whether the x component of the clip-space coordinate is in the view volume.
Calculate the clip-space position of the vertices on the CPU, the same way the vertex shader does.
The glm library is very suitable for things like that:
glm::vec3 triangleVertex = ... ; // new model coordinate of the triangle
glm::vec4 h_pos = projection * view * model * glm::vec4(triangleVertex, 1.0f);
bool x_is_clipped = h_pos.x < -h_pos.w || h_pos.x > h_pos.w;
If you don't know how the orientation of the triangle is transformed by the model matrix and the view matrix, then you have to do this for all 3 vertices of the triangle, as in the sketch below.
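A minimal GLM sketch of that per-vertex test, assuming the three model-space vertices and the same matrices used in the vertex shader are at hand (names are illustrative):

#include <glm/glm.hpp>

// Returns true if any of the triangle's vertices is clipped on the x axis,
// i.e. touches or crosses the left/right edge of the screen.
bool triangleTouchesLeftOrRightEdge(const glm::vec3 vertices[3],
                                    const glm::mat4& model,
                                    const glm::mat4& view,
                                    const glm::mat4& projection)
{
    glm::mat4 mvp = projection * view * model;
    for (int i = 0; i < 3; ++i)
    {
        glm::vec4 h = mvp * glm::vec4(vertices[i], 1.0f);
        if (h.x < -h.w || h.x > h.w) // outside [-w, w] => clipped
            return true;
    }
    return false;
}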

Rotating 2D camera to space ship's heading in OpenGL (OpenTK)

The game is a top-down 2D space ship game -- think of "Asteroids."
Box2Dx is the physics engine and I extended the included DebugDraw, based on OpenTK, to draw additional game objects. Moving the camera so it's always centered on the player's ship and zooming in and out work perfectly. However, I really need the camera to rotate along with the ship so it's always facing in the same direction. That is, the ship will appear to be frozen in the center of the screen and the rest of the game world rotates around it as it turns.
I've tried adapting code samples, but nothing works. The best I've been able to achieve is a skewed and cut-off rendering.
Render loop:
// Clear.
Gl.glClear(Gl.GL_COLOR_BUFFER_BIT | Gl.GL_DEPTH_BUFFER_BIT);
// other rendering omitted (planets, ships, etc.)
this.OpenGlControl.Draw();
Update view -- centers on ship and should rotate to match its angle. For now, I'm just trying to rotate it by an arbitrary angle for a proof of concept, but no dice:
public void RefreshView()
{
    int width = this.OpenGlControl.Width;
    int height = this.OpenGlControl.Height;
    Gl.glViewport(0, 0, width, height);
    Gl.glMatrixMode(Gl.GL_PROJECTION);
    Gl.glLoadIdentity();
    float ratio = (float)width / (float)height;
    Vec2 extents = new Vec2(ratio * 25.0f, 25.0f);
    extents *= viewZoom;
    // rotate the view
    var shipAngle = 180.0f; // just a test angle for proof of concept
    Gl.glRotatef(shipAngle, 0, 0, 0);
    Vec2 lower = this.viewCenter - extents;
    Vec2 upper = this.viewCenter + extents;
    // L/R/B/T
    Glu.gluOrtho2D(lower.X, upper.X, lower.Y, upper.Y);
    Gl.glMatrixMode(Gl.GL_MODELVIEW);
}
Now, I'm obviously doing this wrong. Degrees of 0 and 180 will keep it right-side-up or flip it, but any other degree will actually zoom it in/out or result in only blackness, nothing rendered. Below are examples:
If ship angle is 0.0f, then game world is as expected:
Degree of 180.0f flips it vertically... seems promising:
Degree of 45 zooms out and doesn't rotate at all... that's odd:
Degree of 90 returns all black. In case you've never seen black:
Please help!
Firstly, arguments 2-4 of glRotatef are the rotation axis, so make sure to state them correctly, as noted by @pingul.
More importantly the rotation is applied to the projection matrix.
// L/R/B/T
Glu.gluOrtho2D(lower.X, upper.X, lower.Y, upper.Y);
On this line your orthographic 2D projection matrix is multiplied with the previous rotation and applied to the projection matrix stack, which I believe is not what you want.
The solution is to move your rotation call to after the model-view matrix mode is selected, as below:
// L/R/B/T
Glu.gluOrtho2D(lower.X, upper.X, lower.Y, upper.Y);
Gl.glMatrixMode(Gl.GL_MODELVIEW);
// rotate the view
var shipAngle = 180.0f; // just a test angle for proof of concept
Gl.glRotatef(shipAngle, 0.0f, 0.0f, 1.0f);
And now your rotations will be applied to the model-view matrix stack, which I believe is the effect you want. Keep in mind that glRotatef() creates a rotation matrix and multiplies it with the matrix at the top of the currently selected stack.
I would also strongly suggest you move away from the fixed-function pipeline if possible, as suggested by @BDL.
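To make the world rotate around the ship rather than around the world origin, the rotation can be combined with a translation on the model-view stack. Below is a hedged fixed-function sketch of the idea, written against the C API (the Tao/OpenTK calls map one-to-one); shipAngle, shipX, shipY and zoom are assumed to come from the game state:

#include <GL/gl.h>
#include <GL/glu.h>

void refreshView(int width, int height, float zoom,
                 float shipAngle, float shipX, float shipY)
{
    glViewport(0, 0, width, height);

    // Projection: a plain ortho volume centered on the origin.
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    float ratio = (float)width / (float)height;
    gluOrtho2D(-ratio * 25.0f * zoom, ratio * 25.0f * zoom,
               -25.0f * zoom, 25.0f * zoom);

    // Camera: counter-rotate the world about z, then translate so the
    // ship sits at the center of the screen.
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glRotatef(-shipAngle, 0.0f, 0.0f, 1.0f);
    glTranslatef(-shipX, -shipY, 0.0f);
}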

OpenGL - Why is my ray picking not working?

I recently set up a project that uses OpenGL (via the C# wrapper library OpenTK) and should do the following:
Create a perspective projection camera - this camera will be used to make the user rotate,move etc. to look at my 3d models.
Draw some 3d objects.
Use 3d ray picking via unproject to let the user pick points/models in the 3d view.
The last step (ray picking) looks OK in my 3d preview (GLControl) but returns invalid results like Vector3d (1,86460186949617; -45,4086124979203; -45,0387025610247). I have no idea why this is the case!
I am using the following code to setup my viewport:
this.RenderingControl.MakeCurrent();
int w = RenderingControl.Width;
int h = RenderingControl.Height;
// Use all of the glControl painting area
GL.Viewport(0, 0, w, h);
GL.MatrixMode(MatrixMode.Projection);
GL.LoadIdentity();
Matrix4 p = Matrix4.CreatePerspectiveFieldOfView(MathHelper.PiOver4, w / (float)h, 0.1f, 64.0f);
GL.LoadMatrix(ref p);
I use this method for unprojecting:
/// <summary>
/// This method maps screen coordinates back to world coordinates (unproject).
/// </summary>
/// <param name="screen"></param>
/// <param name="view"></param>
/// <param name="projection"></param>
/// <param name="view_port"></param>
/// <returns></returns>
private Vector3d UnProject(Vector3d screen, Matrix4d view, Matrix4d projection, int[] view_port)
{
    Vector4d pos = new Vector4d();
    // Map x and y from window coordinates to the range -1 to 1
    pos.X = (screen.X - (float)view_port[0]) / (float)view_port[2] * 2.0f - 1.0f;
    pos.Y = (screen.Y - (float)view_port[1]) / (float)view_port[3] * 2.0f - 1.0f;
    pos.Z = screen.Z * 2.0f - 1.0f;
    pos.W = 1.0f;
    Vector4d pos2 = Vector4d.Transform(pos, Matrix4d.Invert(Matrix4d.Mult(view, projection)));
    Vector3d pos_out = new Vector3d(pos2.X, pos2.Y, pos2.Z);
    return pos_out / pos2.W;
}
I use this code to position my camera (including rotation) and do the ray picking:
// Clear buffers
GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);
// Apply camera
GL.MatrixMode(MatrixMode.Modelview);
Matrix4d mv = Matrix4d.LookAt(EyePosition, Vector3d.Zero, Vector3d.UnitY);
GL.LoadMatrix(ref mv);
GL.Translate(0, 0, ZoomFactor);
// Rotation animation
if (RotationAnimationActive)
{
    CameraRotX += 0.05f;
}
if (CameraRotX >= 360)
{
    CameraRotX = 0;
}
GL.Rotate(CameraRotX, Vector3.UnitY);
GL.Rotate(CameraRotY, Vector3.UnitX);
// Apply useful rotation
GL.Rotate(50, 90, 30, 0f);
// Draw Axes
drawAxes();
// Draw vertices of my 3d objects ...
// Picking Test
int x = MouseX;
int y = MouseY;
int[] viewport = new int[4];
Matrix4d modelviewMatrix, projectionMatrix;
GL.GetDouble(GetPName.ModelviewMatrix, out modelviewMatrix);
GL.GetDouble(GetPName.ProjectionMatrix, out projectionMatrix);
GL.GetInteger(GetPName.Viewport, viewport);
// get depth of clicked pixel
float[] t = new float[1];
GL.ReadPixels(x, RenderingControl.Height - y, 1, 1, OpenTK.Graphics.OpenGL.PixelFormat.DepthComponent, PixelType.Float, t);
var res = UnProject(new Vector3d(x, viewport[3] - y, t[0]), modelviewMatrix, projectionMatrix, viewport);
GL.Begin(BeginMode.Lines);
GL.Color3(Color.Yellow);
GL.Vertex3(0, 0, 0);
GL.Vertex3(res);
Debug.WriteLine(res.ToString());
GL.End();
I get the following result from my ray picker:
Clicked Position = (1,86460186949617; -45,4086124979203; -45,0387025610247)
This vector is shown as the yellow line on the attached screenshot.
Why are the Y and Z positions not in the range -1/+1? Where do values like -45 come from, and why is the ray still rendered correctly on the screen?
If you have only a tip about what could be broken I would also appreciate your reply!
Screenshot:
If you break down the transform from screen to world into individual matrices, print out the inverses of the M, V, and P matrices, and print out the intermediate result of each (matrix inverse) * (point) calculation from screen to world/model, then I think you'll see the problem. Or at least you'll see that there is a problem with using the inverse of the M-V-P matrix and then intuitively grasp the solution. Or maybe just read the list of steps below and see if that helps.
Here's the approach I've used (a code sketch follows the steps):
Convert the 2D vector for mouse position in screen/control/widget coordinates to the 4D vector (mouse.x, mouse.y, 0, 1).
Transform the 4D vector from screen coordinates to Normalized Device Coordinates (NDC) space. That is, multiply the inverse of your NDC-to-screen matrix [or equivalent equations] by (mouse.x, mouse.y, 0, 1) to yield a 4D vector in NDC coordinate space: (nx, ny, 0, 1).
In NDC coordinates, define two 4D vectors: the source (near point) of the ray as (nx, ny, -1, 1) and a far point at (nx, ny, +1, 1).
Multiply each 4D vector by the inverse of the (perspective) projection matrix.
Convert each 4D vector to a 3D vector (i.e. divide through by the fourth component, often called "w"). *
Multiply the 3D vectors by the inverse of the view matrix.
Multiply the 3D vectors by the inverse of the model matrix (which may well be the identity matrix).
Subtract the 3D vectors to yield the ray.
Normalize the ray.
Yee-haw. Go back and justify each step with math, if desired, or save that review for later [if ever] and work frantically towards catching up on creating actual 3D graphics and interaction and whatnot.
Go back and refactor, if desired.
(* The framework I use allows multiplication of a 3D vector by a 4x4 matrix because it treats the 3D vector as a 4D vector. I can make this more clear later, if necessary, but I hope the point is reasonably clear.)
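Here's a compact C++/GLM sketch of those steps. It folds the separate projection, view and model inverses into one inverse of projection * view, which is equivalent when the model matrix is the identity; all names are illustrative:

#include <glm/glm.hpp>

struct Ray { glm::vec3 origin; glm::vec3 direction; };

Ray screenPointToRay(float mouseX, float mouseY,
                     float viewportWidth, float viewportHeight,
                     const glm::mat4& view, const glm::mat4& projection)
{
    // Screen -> NDC (note the y flip: window y grows downward).
    float nx = 2.0f * mouseX / viewportWidth - 1.0f;
    float ny = 1.0f - 2.0f * mouseY / viewportHeight;

    // Near and far points of the ray in NDC.
    glm::vec4 nearNdc(nx, ny, -1.0f, 1.0f);
    glm::vec4 farNdc(nx, ny, 1.0f, 1.0f);

    // NDC -> world: multiply by the inverse of projection * view,
    // then divide each point by its w component.
    glm::mat4 invVP = glm::inverse(projection * view);
    glm::vec4 nearWorld = invVP * nearNdc;
    glm::vec4 farWorld = invVP * farNdc;
    nearWorld /= nearWorld.w;
    farWorld /= farWorld.w;

    // Subtract and normalize to get the ray.
    Ray ray;
    ray.origin = glm::vec3(nearWorld);
    ray.direction = glm::normalize(glm::vec3(farWorld - nearWorld));
    return ray;
}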
That worked for me. This set of steps also works for Ortho projections, though with Ortho you could cheat and write simpler code since the projection matrix isn't goofy.
It's late as I write this and I may have misinterpreted your problem. I may have also misinterpreted your code since I use a different UI framework. But I know how aggravating ray casting for OpenGL can be, so I'm posting in the hope that at least some of what I write is useful, and that I can thereby alleviate some human misery.
Postscript. Speaking of misery: I found numerous forum posts and blog pages that address ray casting for OpenGL, but most posts start with some variant of the following: "First, you have to know X" [where X is not necessary to know]; or "Go look at the unproject function [in library X in repository Y for which you'll need client app Z . ..]"; or a particular favorite of mine: "Review a textbook on linear algebra."
Having to slog through yet another description of the OpenGL rendering pipeline or the OpenGL transformation conga line when you just need to debug ray casting--a common problem--is like having to listen to a lecture on hydraulics when you discover your brake pedal isn't working.

OpenGL ES coordinates to screen pixels

I am trying to make an advertising application in OpenGL ES 2.0.
To give a minimal example: I created an animated rectangular cube with some advertising images on its faces. The model and animation were created in 3ds Max and converted to .pod, and it shows up on the TV screen perfectly.
Now I want to know how much of the screen it covers, in pixels, given a 1280x720 projection, because scaling and translation are in the advertiser's hands and he doesn't know coordinates; advertisers only speak the language of pixels. So if he increases the X-axis scale in pixels, I need to convert that to OpenGL coordinates and also adjust the translation myself so the cube doesn't go off screen.
In short, how can I get the number of pixels the cube covers on screen? Is there an easy way?
It's the MVP matrix that the rendering pipeline applies to the 'OpenGL coordinates/vertices' to finally produce screen coordinates, so it's possible to use its inverse to recover vertices.
The problem is that multiple combinations of vertices, view and projection matrices can give the same screen coordinates, i.e. the mapping from vertex position to screen coordinates is not unique.
So we have to reduce the unknowns in the equation to just x and y by fixing all the other variables (in the case of translation), and probably to just z (in the case of scaling).
For translation, for example, the code could be:
Point3D get3dPoint(Point2D point2D, int width, int height,
                   Matrix4 viewMatrix, Matrix4 projectionMatrix) {
    // Window coordinates -> normalized device coordinates
    double x = 2.0 * point2D.x / width - 1;
    double y = -2.0 * point2D.y / height + 1;
    Matrix4 viewProjectionInverse = inverse(projectionMatrix * viewMatrix);
    double fixedZ = 1.0;
    Point3D point3D = new Point3D(x, y, fixedZ);
    return viewProjectionInverse.multiply(point3D);
}
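To answer the original question more directly: one way to measure the cube's pixel footprint is to project the eight corners of its bounding box with the full MVP matrix and take the screen-space extents. A hedged C++/GLM sketch, assuming an axis-aligned model-space bounding box and that all corners are in front of the camera (clip.w > 0):

#include <algorithm>
#include <glm/glm.hpp>

// Returns the width/height in pixels covered by a box with corners
// boxMin/boxMax. mvp = projection * view * model; for the question's
// setup the viewport would be 1280x720.
void pixelExtents(const glm::vec3& boxMin, const glm::vec3& boxMax,
                  const glm::mat4& mvp, float viewportW, float viewportH,
                  float& outWidth, float& outHeight)
{
    float minX = viewportW, minY = viewportH, maxX = 0.0f, maxY = 0.0f;
    for (int i = 0; i < 8; ++i)
    {
        glm::vec3 corner((i & 1) ? boxMax.x : boxMin.x,
                         (i & 2) ? boxMax.y : boxMin.y,
                         (i & 4) ? boxMax.z : boxMin.z);
        glm::vec4 clip = mvp * glm::vec4(corner, 1.0f);
        glm::vec3 ndc = glm::vec3(clip) / clip.w;     // perspective divide
        float sx = (ndc.x * 0.5f + 0.5f) * viewportW; // NDC -> pixels
        float sy = (ndc.y * 0.5f + 0.5f) * viewportH;
        minX = std::min(minX, sx); maxX = std::max(maxX, sx);
        minY = std::min(minY, sy); maxY = std::max(maxY, sy);
    }
    outWidth = maxX - minX;
    outHeight = maxY - minY;
}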

How to move a camera using in a ray-tracer?

I am currently working on ray-tracing techniques and I think I've done a pretty good job so far, but I haven't covered the camera yet.
Until now I've used a plane fragment as the view plane, located between (-width/2, height/2, 200) and (width/2, -height/2, 200) [200 is just a fixed z value; it can be changed].
In addition to that, I place the camera mostly at e(0, 0, 1000), and I use a perspective projection.
I send rays from point e through the pixels, and write the result to the image's corresponding pixel after calculating the pixel color.
Here is an image I created. Hopefully you can guess where the eye and the view plane are by looking at it.
My question starts here. It's time to move my camera around, but I don't know how to map the 2D view-plane coordinates to canonical coordinates. Is there a transformation matrix for that?
The method I have in mind requires knowing the 3D coordinates of the pixels on the view plane. I am not sure it's the right method to use. So, what do you suggest?
There are a variety of ways to do it. Here's what I do:
Choose a point to represent the camera location (camera_position).
Choose a vector that indicates the direction the camera is looking (camera_direction). (If you know a point the camera is looking at, you can compute this direction vector by subtracting camera_position from that point.) You probably want to normalize (camera_direction), in which case it's also the normal vector of the image plane.
Choose another normalized vector that's (approximately) "up" from the camera's point of view (camera_up).
camera_right = Cross(camera_direction, camera_up)
camera_up = Cross(camera_right, camera_direction) (This corrects for any slop in the choice of "up".)
Visualize the "center" of the image plane at camera_position + camera_direction. The up and right vectors lie in the image plane.
You can choose a rectangular section of the image plane to correspond to your screen. The ratio of the width or height of this rectangular section to the length of camera_direction determines the field of view. To zoom in, you can lengthen camera_direction or decrease the width and height; do the opposite to zoom out.
So given a pixel position (i, j), you want the (x, y, z) of that pixel on the image plane. From that you can subtract camera_position to get a ray vector (which then needs to be normalized).
Ray ComputeCameraRay(int i, int j) {
    const float width = 512.0;  // pixels across
    const float height = 512.0; // pixels high

    double normalized_i = (i / width) - 0.5;
    double normalized_j = (j / height) - 0.5;

    Vector3 image_point = normalized_i * camera_right +
                          normalized_j * camera_up +
                          camera_position + camera_direction;

    Vector3 ray_direction = image_point - camera_position;
    return Ray(camera_position, ray_direction);
}
This is meant to be illustrative, so it is not optimized.
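For completeness, a small sketch of building the camera basis from the steps above (written with GLM types for illustration; the answer's own Vector3 and Cross would work the same way):

#include <glm/glm.hpp>

void buildCameraBasis(const glm::vec3& camera_position,
                      const glm::vec3& look_at_point,
                      const glm::vec3& up_hint,
                      glm::vec3& camera_direction,
                      glm::vec3& camera_right,
                      glm::vec3& camera_up)
{
    camera_direction = glm::normalize(look_at_point - camera_position);
    camera_right = glm::normalize(glm::cross(camera_direction, up_hint));
    // Re-derive "up" so the basis is exactly orthogonal, correcting any
    // slop in the chosen up vector.
    camera_up = glm::cross(camera_right, camera_direction);
}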
For rasterising renderers you tend to need a transformation matrix, because that's how you map directly from 3D coordinates to 2D screen coordinates.
For ray tracing, it's not necessary because you're typically starting from a known pixel coordinate in 2D space.
Given the eye position, a point in 3-space that's in the center of the screen, and vectors for "up" and "right", it's quite easy to calculate the 3D "ray" that goes from the eye position and through the specified pixel.
I've previously posted some sample code from my own ray tracer at https://stackoverflow.com/a/12892966/6782