2D to 3D map OpenGL

I am trying to make a simple game engine, but I got stuck when I tried to map a 2D mouse coordinate to a 3D coordinate in my world. The basic game has a plane that serves as the ground, as it is going to be (hopefully, with time) an RTS game engine.
My problem is that I can't really come up with anything useful. The plane is located at (0, -100, -300) and is about 1000x1000 in size. My camera is at (0, 0, 0) and the scene is rotated by 60 degrees to give the impression of a bird's-eye camera.
I was thinking about using trigonometry: since I know the height of my camera and its angle, I could possibly calculate the distance and from that the right coordinates, but this is just an idea.
Can somebody please give me some advice?

You can do it with simple ray casting.
First, using gluUnProject, you can obtain the 3D world coordinates m corresponding to the 2D window coordinates of the mouse pointer.
Given the camera position e = (0, 0, 0), you can compute the mouse ray direction r = m - e.
Now, given a point p on the plane and the plane normal n, you can compute the intersection of the mouse ray with the plane.
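A minimal sketch of those three steps, assuming the fixed-function matrix stack is in use (so the matrices can be fetched with glGetDoublev) and that the ground plane is described by a point on it and its normal; all parameter names are illustrative:
#include <GL/glu.h>
#include <cmath>

// Unproject the mouse and intersect the resulting ray with the ground plane.
// planePoint/planeNormal describe the plane; the result is written to out.
bool mouseToGround(int mouseX, int mouseY,
                   const double planePoint[3], const double planeNormal[3],
                   double out[3])
{
    GLdouble model[16], proj[16];
    GLint view[4];
    glGetDoublev(GL_MODELVIEW_MATRIX, model);
    glGetDoublev(GL_PROJECTION_MATRIX, proj);
    glGetIntegerv(GL_VIEWPORT, view);

    // Window y runs bottom-up in OpenGL, mouse y usually runs top-down.
    double winX = mouseX;
    double winY = view[3] - mouseY;

    // Two points on the mouse ray: one on the near plane, one on the far plane.
    double nearP[3], farP[3];
    gluUnProject(winX, winY, 0.0, model, proj, view, &nearP[0], &nearP[1], &nearP[2]);
    gluUnProject(winX, winY, 1.0, model, proj, view, &farP[0], &farP[1], &farP[2]);

    // Ray direction r = far - near (equivalent to m - e when the eye is at the origin).
    double r[3] = { farP[0] - nearP[0], farP[1] - nearP[1], farP[2] - nearP[2] };

    // Ray/plane intersection: solve (nearP + t*r - planePoint) . n = 0 for t.
    double denom = r[0]*planeNormal[0] + r[1]*planeNormal[1] + r[2]*planeNormal[2];
    if (std::fabs(denom) < 1e-9) return false; // ray is parallel to the plane

    double d[3] = { planePoint[0] - nearP[0], planePoint[1] - nearP[1], planePoint[2] - nearP[2] };
    double t = (d[0]*planeNormal[0] + d[1]*planeNormal[1] + d[2]*planeNormal[2]) / denom;

    for (int i = 0; i < 3; ++i)
        out[i] = nearP[i] + t * r[i];
    return true;
}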

Related

What are we trying to achieve by creating right and down vectors in this ray tracing depiction?

Often, we see the following picture when talking about ray tracing.
Here, I see the Z axis as the direction the camera would point if it looked straight ahead, and the XY grid as the grid the camera is seeing. From the camera's point of view, we see the usual Cartesian grid that me and my classmates are used to.
Recently I was examining code that simulates this. One thing that is not obvious to me from this picture is the need for the "right" and "down" vectors. Obviously we have look_at, which shows where the camera is looking, and campos, which is where the camera is located. But why do we need camright and camdown? What are we trying to construct?
Vect X (1, 0, 0);
Vect Y (0, 1, 0);
Vect Z (0, 0, 1);
Vect campos (3, 1.5, -4);
Vect look_at (0, 0, 0);
Vect diff_btw (
campos.getVectX() - look_at.getVectX(),
campos.getVectY() - look_at.getVectY(),
campos.getVectZ() - look_at.getVectZ()
);
Vect camdir = diff_btw.negative().normalize();
Vect camright = Y.crossProduct(camdir);
Vect camdown = camright.crossProduct(camdir);
Camera scene_cam (campos, camdir, camright, camdown);
I was searching about this question recently and found this post as well: Setting the up vector in the camera setup in a ray tracer
Here, the answerer says this: "My answer assumes that the XY plane is the "ground" in world space and that Z is the altitude. Imagine standing on the floor holding a camera. It's position has a positive Z component, and it's view vector is nearly parallel to the ground. (In my diagram, it's pointing slightly upwards.) The plane of the camera's film (the uv grid) is perpendicular to the view grid. A common approach is to transform everything until the film plane coincides with the XY plane and the camera points along the Z axis. That's equivalent, but not what I'm describing here."
I'm not entirely sure why "transformations" are necessary. How is this point of view different from the picture at the top? Here they also say that they need an "up" vector and a "right" vector to "construct an image plane". I'm not sure what an image plane is.
Could someone explain better the relationship between the physical representation and code representation?
How do you know that you always want the camera's "up" to be aligned with the vertical lines in the grid in your image?
Trying to explain it another way: the grid is not really there. That imaginary grid is the result of the camera's directional vectors and the resolution you are rendering at. The grid is not what decides the camera angle.
When you are holding a physical camera in your hand, like the camera in your cell phone, don't you ever rotate the camera a little bit for effect? Or when filming, you may want to slowly rotate the camera? Have you not seen any movies where the camera is rotated?
In the same way, you may want to rotate the "camera" in your ray-traced world. And rotating only the camera is much easier than rotating all the objects in your scene (maybe millions of them!).
Check out the example of rotating the camera from the movie Ice Age here:
https://youtu.be/22qniGtZhZ8?t=61
The (up or down) and right vectors construct the plane you project the scene onto. Since the scene is in 3D, you need to project it onto a 2D plane in order to render a picture you can display on your screen.
If you have the camera position and direction, you still don't know whether you're holding the camera right-side up, upside down, or tilted to the left or right.
Using the camera position, lookat, up (or down) and right vectors, we can uniquely define how the 3D scene is projected onto a 2D picture.
Concretely, if you look at the code and the picture: the 3D scene is the set of objects displayed. The image/projection plane is the grid in front of the camera. Its orientation is defined by the camright and camdir vectors (because the image plane is assumed to be perpendicular to the camera's line of sight camdir, camdown is uniquely determined by the other two).
The placement of the grid is based on the camera's position and intrinsic properties (not shown here, but the camera will have a specific field of view).
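To make the role of camright and camdown concrete, here is a rough sketch of how a ray tracer typically uses them to build the image plane and shoot one primary ray per pixel. It reuses the names from the snippet above, but vectAdd, vectMult, the Ray type and the square-image assumption are mine, not taken from the original code:
// One primary ray per pixel; xamnt/yamnt slide along camright/camdown across the image plane.
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        // Offsets in [0, 1] across the image plane; +0.5 hits the pixel center.
        double xamnt = (x + 0.5) / width;
        double yamnt = (y + 0.5) / height;

        // Start from the view direction, then move along camright and camdown
        // to reach this pixel's spot on the image plane, one unit in front of the camera.
        Vect ray_direction = camdir
            .vectAdd(camright.vectMult(xamnt - 0.5))
            .vectAdd(camdown.vectMult(yamnt - 0.5))
            .normalize();

        Ray cam_ray(campos, ray_direction); // trace this ray into the scene
    }
}
Without camright and camdown there would be no way to say which world direction corresponds to "one pixel to the right" or "one pixel down" on the film, which is exactly what the image plane encodes.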

How to calculate near and far plane for glOrtho in OpenGL

I am using an orthographic projection (glOrtho) for my scene. I implemented a virtual trackball to rotate an object, and I also implemented zoom in/out on the view matrix. Say I have a cube of size 100 units located at (0, -40000, 0), far from the origin. If the center of rotation is at the origin, then after the user rotates the cube and zooms in or out, it could end up somewhere around (0, 0, 2500000) (this position is just an assumption; it is calculated after multiplying by the view matrix). Currently I define a very big range for the near (-150000) and far (150000) planes, but sometimes the object still lies outside either the near or far plane and simply turns invisible. If I define even larger near and far clipping planes, say -1000000 and 1000000, I get ugly z-fighting artifacts. So my question is: how do I correctly calculate the near and far planes while the user rotates the object in real time? Thanks in advance!
Update:
I have implemented a bounding sphere for the cube. I use the inverse of the view matrix to calculate the camera position and then compute the distance from the camera position to the center of the bounding sphere (the center of the bounding sphere is transformed by the view matrix). But I couldn't get it to work. Can you further explain the relationship between the camera position and the near plane?
A simple way is to use a "bounding sphere". If you know the data's bounding box, the maximum diagonal length is the diameter of the bounding sphere.
Let's say you calculate the distance 'dCC' from the camera position to the center of the sphere, and let 'r' be the radius of that sphere. Then:
Near = dCC - r - smallMargin
Far = dCC + r + smallMargin
'smallMargin' is a value used just to avoid clipping points on the surface of the sphere due to numerical precision issues.
The center of the sphere should be the center of rotation. If not, the diameter should grow so as to cover all data.
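A minimal sketch of that calculation, assuming the view matrix is kept in a glm::mat4 (swap in your own math types) and that the bounding sphere's center and radius are already known in world space; the names are illustrative:
#include <glm/glm.hpp>

void computeNearFar(const glm::mat4& view, const glm::vec3& sphereCenter, float r,
                    float& zNear, float& zFar)
{
    // The camera position in world space is the inverse view transform applied to the origin.
    glm::vec3 camPos = glm::vec3(glm::inverse(view) * glm::vec4(0.0f, 0.0f, 0.0f, 1.0f));

    // Distance from the camera to the center of the bounding sphere.
    float dCC = glm::length(sphereCenter - camPos);

    float smallMargin = 0.01f * r; // guards against precision issues at the sphere surface
    zNear = dCC - r - smallMargin;
    zFar  = dCC + r + smallMargin;
}
These values then go straight into glOrtho(left, right, bottom, top, zNear, zFar) and should be recomputed whenever the view matrix (rotation or zoom) changes.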

Relate textures areas of a cube with the current Oculus viewport

I'm creating a 360° image player using the Oculus Rift SDK.
The scene is composed of a cube, and the camera is placed at its center with only the ability to rotate around yaw, pitch and roll.
I've drawn the object using OpenGL, with a 2D texture on each of the cube's faces to create the 360° effect.
I would like to find the portion of the original texture that is actually shown in the Oculus viewport at a given instant.
Up to now, my approach has been to find an approximate pixel position for some significant points of the viewport (i.e. the center and the corners) using the Euler angles, in order to identify the corresponding areas in the original textures.
Considering all the problems with Euler angles, this does not seem like the smartest way to do it.
Is there any better approach to accomplish it?
Edit
I did a small example that can be run in the render loop:
//Keep the Orientation from Oculus (Point 1)
OVR::Matrix4f rotation = Matrix4f(hmdState.HeadPose.ThePose);
//Find the vector respect to a certain point in the viewport, in this case the center (Point 2)
FovPort fov_viewport = FovPort::CreateFromRadians(hmdDesc.CameraFrustumHFovInRadians, hmdDesc.CameraFrustumVFovInRadians);
Vector2f temp2f = fov_viewport.TanAngleToRendertargetNDC(Vector2f(0.0,0.0));// this values are the tangent in the center
Vector3f vector_view = Vector3f(temp2f.x, temp2f.y, -1.0);// just add the third component , where is oriented
vector_view.Normalize();
//Apply the rotation (Point 3)
Vector3f final_vect = rotation.Transform(vector_view);//seems the right operation.
//An example to check if we are looking at the front face (Partial point 4)
if (abs(final_vect.z) > abs(final_vect.x) && abs(final_vect.z) > abs(final_vect.y) && final_vect.z < 0){
    system("pause");
}
Is it right to consider the entire viewport, or should this be done for each eye separately?
How can I indicate a point of the viewport other than the center? I don't really understand which values should be the input of TanAngleToRendertargetNDC().
You can get a full rotation matrix by passing the camera pose quaternion to the OVR::Matrix4 constructor.
You can take any 2D position in the eye viewport and convert it to its camera space 3D coordinate by using the fovPort tan angles. Normalize it and you get the direction vector in camera space for this pixel.
If you apply the rotation matrix obtained earlier to this direction vector, you get the actual direction of that ray.
Now you have to convert from this direction to your texture UV. The component with the highest absolute value in the direction vector will give you the face of the cube it's looking at. The remaining components can be used to find the actual 2D location on the texture. This depends on how your cube faces are oriented, if they are x-flipped, etc.
If you are at the rendering part of the viewer, you will want to do this in a shader. If this is to find where the user is looking at in the original image or the extent of its field of view, then only a handful of rays would suffice as you wrote.
edit
Here is a bit of code to go from tan angles to camera space coordinates.
float u = (x / eyeWidth) * (leftTan + rightTan) - leftTan;
float v = (y / eyeHeight) * (upTan + downTan) - upTan;
float w = 1.0f;
x and y are pixel coordinates, eyeWidth and eyeHeight are the eye buffer size, and the *Tan variables are the fovPort values. I first express the pixel coordinate in the [0..1] range, then scale that by the total tan angle for that direction, and then recenter.
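For the remaining step (direction vector to cube face and texture coordinates), here is a rough sketch. The face numbering and the sign flips follow the usual cube-map convention and are assumptions; adjust them to match how your own faces are oriented:
#include <cmath>

// Map a (rotated) direction vector to a face index and [0,1] UV on that face.
void directionToCubeUV(const Vector3f& d, int& face, float& u, float& v)
{
    float ax = std::fabs(d.x), ay = std::fabs(d.y), az = std::fabs(d.z);
    float ma;      // magnitude of the dominant component
    float uc, vc;  // raw coordinates in [-1, 1] on the chosen face

    if (ax >= ay && ax >= az) {          // +X or -X face
        face = d.x > 0 ? 0 : 1;
        ma = ax; uc = d.x > 0 ? -d.z : d.z; vc = -d.y;
    } else if (ay >= az) {               // +Y or -Y face
        face = d.y > 0 ? 2 : 3;
        ma = ay; uc = d.x; vc = d.y > 0 ? d.z : -d.z;
    } else {                             // +Z or -Z face
        face = d.z > 0 ? 4 : 5;
        ma = az; uc = d.z > 0 ? d.x : -d.x; vc = -d.y;
    }

    // Remap from [-1, 1] to [0, 1] texture coordinates.
    u = 0.5f * (uc / ma + 1.0f);
    v = 0.5f * (vc / ma + 1.0f);
}
Running this for the center ray and the four corner rays of the viewport gives you the approximate extent of the visible region on each face's texture.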

Matching top view human detections with floor projection on interactive floor project

I'm building an interactive floor. The main idea is to match the detections made with an Xtion camera to objects I draw in a floor projection and have them follow the person.
I also detect the projection area on the floor, which translates to a polygon. The camera can detect outside the "screen" area.
The problem is that the algorithm detects the topmost part of the person under it using depth data, and because of the angle between that point and the camera, that point isn't directly above the person's feet.
I know the distance to the floor and the height of the detected person. I also know that the camera is not perpendicular to the floor, but I don't know the camera's tilt angle.
My question is how can I project that 3D point onto the polygon on the floor?
I'm hoping someone can point me in the right direction. I've been reading about camera projections but I'm not seeing how to use it in this particular problem.
Thanks in advance
UPDATE:
With the answer from Diego O.d.L I was able to get an almost perfect detection. I'll write down the steps I used, for those who might be looking for the same solution (I won't go into much detail on how the detection is made):
Step 1: Calibration
Here I grab some color and depth frames from the camera, using OpenNI, with the projection area cleared.
The projection area is detected on the color frames.
I then convert the detection points to real-world coordinates (using OpenNI's CoordinateConverter). With the new real-world detection points I look for the plane that best fits them.
Step 2: Detection
I use the detection algorithm to get new person detections and track them using the depth frames.
These detection points are converted to real-world coordinates and projected onto the plane computed previously. This corrects the offset between the person's height and the floor.
The points are mapped to screen coordinates using a perspective transform.
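One way to implement that last mapping step, assuming OpenCV is available; floorQuad and screenQuad are hypothetical names for the four corners of the projection polygon expressed in floor-plane coordinates and in projector screen coordinates, respectively:
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

// Map a point from floor-plane coordinates to screen coordinates through the
// homography defined by four corresponding corners.
cv::Point2f mapToScreen(const cv::Point2f& planePoint,
                        const std::vector<cv::Point2f>& floorQuad,
                        const std::vector<cv::Point2f>& screenQuad)
{
    cv::Mat H = cv::getPerspectiveTransform(floorQuad, screenQuad); // 3x3 homography

    std::vector<cv::Point2f> in{ planePoint }, out;
    cv::perspectiveTransform(in, out, H);
    return out[0];
}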
Hope this helps. Thank you again for the answers.
Work in the camera coordinate system initially. I'm assuming you don't have problems converting from (row, column, distance) to a real-world system aligned with the camera axes (x, y, z):
1. Calculate the plane from three or more points (more points for robustness) given in camera coordinates (x, y, z), using your favorite plane-fitting algorithm.
2. Then find the projection of your head point onto the floor plane (see the sketch below).
3. Finally, you can convert it to the floor coordinate system or just keep it in the camera system.
From the description of your intended application, it is probably more useful for you to recover the image coordinates, I guess.
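For the projection in step 2, a minimal sketch, assuming the fitted plane is given as a point p0 on it and a unit normal n, with everything in camera coordinates (the Vec3 struct is only for illustration):
struct Vec3 { float x, y, z; };

// Orthogonal projection of the head point h onto the plane through p0 with unit normal n:
// h_proj = h - ((h - p0) . n) * n
Vec3 projectOntoPlane(const Vec3& h, const Vec3& p0, const Vec3& n)
{
    float dx = h.x - p0.x, dy = h.y - p0.y, dz = h.z - p0.z;
    float dist = dx * n.x + dy * n.y + dz * n.z; // signed distance from h to the plane
    return { h.x - dist * n.x, h.y - dist * n.y, h.z - dist * n.z };
}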
This type of problem usually benefits from clearly defining the variables.
In this case, you have a head at physical position {x,y,z} and you want the ground projection {x,y,0}. That's trivial, but your camera gives you {u,v,d} (d being depth) and you need to transform that to {x,y,z}.
The easiest way to find the transform for a given camera placement may be to simply put known markers on the floor at {0,0,0}, {1,0,0} and {0,1,0} and see where they pop up in your camera.
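A small sketch of that marker idea, assuming you can already convert each marker's {u,v,d} reading into a camera-space 3D point and that the marker spacing matches one unit of camera space; all names below are illustrative:
#include <cmath>

struct V3 { float x, y, z; };
static V3 sub(V3 a, V3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float dot(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static V3 cross(V3 a, V3 b) { return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
static V3 norm(V3 a) { float l = std::sqrt(dot(a, a)); return { a.x / l, a.y / l, a.z / l }; }

// m00, m10, m01: camera-space positions of the markers placed at floor
// positions {0,0,0}, {1,0,0} and {0,1,0}. Returns the head point expressed
// in the floor coordinate frame; setting its z to 0 gives the ground projection {x,y,0}.
V3 cameraToFloor(V3 head, V3 m00, V3 m10, V3 m01)
{
    V3 xAxis = norm(sub(m10, m00));
    V3 zAxis = norm(cross(xAxis, sub(m01, m00))); // floor normal
    V3 yAxis = cross(zAxis, xAxis);               // re-orthogonalized in-plane y
    V3 d = sub(head, m00);
    return { dot(d, xAxis), dot(d, yAxis), dot(d, zAxis) };
}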

OpenGL get 3D coordinates of nearest world 3D point to the current mouse Location

In an OpenGL context, I have seen that it is possible to convert mouse coordinates to 3D world coordinates (e.g. MFC with Opengl get 3d coordinate from 2d coordinate of mouse). However, this does not work when I simply have a set of GL points and lots of empty space: when I'm hovering the mouse over empty space, the 3D coordinates have no meaning.
How can I get the coordinates of the nearest 3D point to my mouse position?
Recognize that "unprojecting" 2D mouse coordinates into a 3D world requires additional information. A 2D position on the screen corresponds to an infinite number of points along a line from the near plane to the far plane in 3D. This means that to get a single 3D point, you need to provide a depth value as well.
The gluUnProject function is a convenient way of doing this and provides a parameter for that depth: winZ. A value of 0.0 finds the 3D point at the near plane and 1.0 finds it at the far plane.
gluUnProject(winX, winY, winZ, model, proj, view, &objX, &objY, &objZ);
Note: if gluUnProject is not available, you can work out the same thing fairly easily with matrices (source here).
When the mouse is used to interact with 3D objects, this depth value is typically found by sampling from the depth buffer, or by intersecting a ray with scene geometry or a primitive (such as a sphere or box). If the objective is to interact with points, you could use a sphere around each point. If you just want a 3D point "somewhere out there", you could decide on a depth value (maybe 0.0 on the near plane, or 0.5, halfway between the near and far planes).
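A minimal sketch of the depth-buffer approach, assuming the fixed-function matrix stack so the matrices can be fetched from GL state; the fallback depth of 0.5 for empty space is an arbitrary choice:
#include <GL/glu.h>

// Sample the depth buffer under the mouse and unproject to world coordinates.
void mouseToWorld(int mouseX, int mouseY, double& objX, double& objY, double& objZ)
{
    GLdouble model[16], proj[16];
    GLint view[4];
    glGetDoublev(GL_MODELVIEW_MATRIX, model);
    glGetDoublev(GL_PROJECTION_MATRIX, proj);
    glGetIntegerv(GL_VIEWPORT, view);

    double winX = mouseX;
    double winY = view[3] - mouseY; // flip y: the window origin is bottom-left in GL

    GLfloat winZ = 0.0f;
    glReadPixels((GLint)winX, (GLint)winY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &winZ);

    // A depth of 1.0 means nothing was drawn under the mouse (empty space);
    // fall back to a depth halfway between the near and far planes.
    if (winZ >= 1.0f) winZ = 0.5f;

    gluUnProject(winX, winY, winZ, model, proj, view, &objX, &objY, &objZ);
}
From there, finding the nearest of your points is just a matter of comparing distances to the unprojected position (or, more robustly, comparing each point's projected window coordinates against the mouse position).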