Gimbal lock when projecting a 3D system onto a sphere - OpenGL

I'm making a skybox in my game. The game has a solar system with some things in it (to start, the sun and the earth, with stars in the background). The player is on one planet in this solar system. The solar system is represented to the player using a skybox, with 2D sprites projected onto the skybox in the corresponding positions. The skybox is rendered with OpenGL (specifically, Java's LWJGL).[1]
First things first, all of the bodies are tracked in 3D space. I can obtain their coordinates, relative directions, etc. All orbits are defined independently (i.e., they occur on arbitrary planes). In addition, planets have quaternion orientations. Rendering the system in full 3D causes no problems.
Projecting the system onto the skybox is another matter entirely. In theory, I figure I should be able to do it like this:
1. Calculate the direction vector of where the player is looking (full rotations are not relevant - the vector just has to point in the right direction).
2. Multiply this direction vector by the planet's orientation (quaternion) to calculate the "view direction".
3. Calculate the direction vector from the planet to the object being viewed.
4. Find the rotation between the two vectors, and rotate the skybox accordingly.
However, when I feed OpenGL my angles, gimbal lock occurs and orbits that should be straight go all bendy (although rotations around a single axis work fine). How can I prevent this from happening? I'm at a loss.
[1]: My terrain is actually a flat square voxel grid, and I scale the player's coordinates onto it, then pretend that it is a 3D planet.
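One way to avoid gimbal lock entirely is to skip Euler angles altogether: compute the rotation in step 4 directly as a shortest-arc quaternion between the two direction vectors and hand OpenGL the equivalent matrix (via glMultMatrix) instead of three glRotate calls. A minimal sketch in plain Java (not the poster's code; no particular math library assumed):

```java
// Shortest-arc quaternion rotating unit vector a onto unit vector b:
// q = (cross(a, b), 1 + dot(a, b)), normalized.  No Euler angles involved.
static float[] shortestArc(float ax, float ay, float az,
                           float bx, float by, float bz) {
    float cx = ay * bz - az * by;
    float cy = az * bx - ax * bz;
    float cz = ax * by - ay * bx;
    float w  = 1.0f + ax * bx + ay * by + az * bz;
    // Note: if a and b point in opposite directions (w ~ 0, cross ~ 0),
    // pick any axis perpendicular to a instead; omitted here for brevity.
    float len = (float) Math.sqrt(cx * cx + cy * cy + cz * cz + w * w);
    return new float[] { cx / len, cy / len, cz / len, w / len };
}

// Column-major 4x4 rotation matrix from quaternion (x, y, z, w),
// ready to be loaded with glMultMatrix instead of three glRotatef calls.
static float[] toMatrix(float x, float y, float z, float w) {
    return new float[] {
        1 - 2 * (y * y + z * z), 2 * (x * y + z * w),     2 * (x * z - y * w),     0,
        2 * (x * y - z * w),     1 - 2 * (x * x + z * z), 2 * (y * z + x * w),     0,
        2 * (x * z + y * w),     2 * (y * z - x * w),     1 - 2 * (x * x + y * y), 0,
        0,                       0,                       0,                       1
    };
}
```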

Related

Mapping Bullet physics coordinates to OpenGL

I've been using the Bullet physics engine with OpenGL to visualise my simulations. I currently have a very simple simulation of a cube with an initial horizontal and forward velocity; it falls down from the sky and collides with the walls of a room, which are all slanted at 45 degrees, with the bottom of each wall meeting the floor.
I use getOpenGLMatrix to get the orientation, position, etc. of the cube and map it to OpenGL by making that matrix the Model matrix. However, when I run and visualise the simulation, the cube behaves as expected (rolls down the wall), but it does not "touch" the rendered OpenGL wall (I say touch, but of course I mean the rendered cube does not appear to come near the rendered wall).
My Bullet cube is 2x2x2 (specified by btBoxShape(btVector3(1.0f,1.0f,1.0f))).
My OpenGL cube is also 2x2x2, with the origin at 0 and corners 1.0 away in each direction.
The only thing I can think of is that the coordinates in Bullet physics do not map directly to the coordinates of OpenGL (for example, a cube edge of length 1 in Bullet is X pixels, but a cube edge of length 1 in OpenGL is Y pixels). Is this the case? If not, can you think why I might have this issue (obviously I don't expect you to magically know the answer, just wondering if there are any known issues like this).
Thanks
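For what it's worth, Bullet and OpenGL both work in abstract world units rather than pixels, so a 2x2x2 Bullet box and a 2x2x2 OpenGL cube are the same size as long as the same matrix is applied to both. A hedged sketch of the mapping described above, in Java/LWJGL, assuming the 16 floats from getOpenGLMatrix are already in hand (they are column-major, which is exactly the layout glMultMatrix expects):

```java
import java.nio.FloatBuffer;
import org.lwjgl.BufferUtils;
import static org.lwjgl.opengl.GL11.*;

// Illustrative sketch: apply a rigid-body transform as the model matrix.
// Assumes 'model' holds the 16 floats filled in by Bullet's getOpenGLMatrix()
// (already column-major, no unit conversion needed).
static void drawBody(float[] model, Runnable drawCube) {
    FloatBuffer buf = BufferUtils.createFloatBuffer(16);
    buf.put(model).flip();

    glPushMatrix();          // modelview currently holds the view transform
    glMultMatrix(buf);       // view * model: one Bullet unit is one GL unit
    drawCube.run();          // draw the 2x2x2 cube around its local origin
    glPopMatrix();
}
```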

Matching top view human detections with floor projection on interactive floor project

I'm building an interactive floor. The main idea is to match the detections made with an Xtion camera to objects I draw in a floor projection, and have those objects follow the person.
I also detect the projection area on the floor, which translates to a polygon. The camera can detect outside the "screen" area.
The problem is that the algorithm detects the topmost part of the person beneath it using depth data, and because of the angle between that point and the camera, the detected point isn't directly above the person's feet.
I know the distance to the floor and the height of the detected person. I also know that the camera is not perpendicular to the floor, but I don't know the camera's tilt angle.
My question is how can I project that 3D point onto the polygon on the floor?
I'm hoping someone can point me in the right direction. I've been reading about camera projections but I'm not seeing how to use it in this particular problem.
Thanks in advance
UPDATE:
With the answer from Diego O.d.L I was able to get an almost perfect detection. I'll write down the steps I used, for those who might be looking for the same solution (I won't go into much detail on how the detection itself is made):
Step 1: Calibration
Here I get some color and depth frames from the camera, using OpenNI, with the projection area cleared.
The projection area is detected on the color frames.
I then convert the detection points to real-world coordinates (using OpenNI's CoordinateConverter). With the new real-world detection points I look for the plane that best fits them.
Step 2: Detection
I use the detection algorithm to get new person detections and to track them using the depth frames.
These detection points are converted to real world coordinates and projected to the plane previously computed. This corrects the offset between the person's height and the floor.
The points are mapped to screen coordinates using a perspective transform.
Hope this helps. Thank you again for the answers.
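For reference, the projection in step 2 (moving a real-world detection point down onto the fitted plane) is a one-liner once the plane is given as a point p0 and a unit normal n. A minimal sketch in plain Java (not the project's actual code; float[3] vectors assumed):

```java
// Project point p onto the plane through p0 with unit normal n:
// p' = p - ((p - p0) . n) * n
static float[] projectOntoPlane(float[] p, float[] p0, float[] n) {
    float dx = p[0] - p0[0], dy = p[1] - p0[1], dz = p[2] - p0[2];
    float dist = dx * n[0] + dy * n[1] + dz * n[2]; // signed distance to plane
    return new float[] {
        p[0] - dist * n[0],
        p[1] - dist * n[1],
        p[2] - dist * n[2]
    };
}
```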
Work with the camera coordinate system initially. I'm assuming you don't have problems converting from (row,column,distance) to a real world system aligned with the camera axis (x,y,z):
Calculate the plane with three or more points (for robustness) from the camera projection (x, y, z); choose your favorite fitting algorithm.
Then find the projection of your head point onto the floor plane (example).
Finally, you can convert it to the floor coordinate system, or just keep it in the camera system.
From the description of your intended application, it is probably more useful for you to recover the image coordinates, I guess.
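For the first step of that answer, a plane can be obtained from three non-collinear calibration points with a cross product (with more points, a least-squares fit is more robust). A rough sketch, assuming simple float[3] points:

```java
// Plane through three non-collinear points a, b, c:
// returns { nx, ny, nz, d } with n the unit normal and d = -n . a,
// so a point p lies on the plane when n . p + d == 0.
static float[] planeFromPoints(float[] a, float[] b, float[] c) {
    float ux = b[0] - a[0], uy = b[1] - a[1], uz = b[2] - a[2];
    float vx = c[0] - a[0], vy = c[1] - a[1], vz = c[2] - a[2];
    float nx = uy * vz - uz * vy;
    float ny = uz * vx - ux * vz;
    float nz = ux * vy - uy * vx;
    float len = (float) Math.sqrt(nx * nx + ny * ny + nz * nz);
    nx /= len; ny /= len; nz /= len;
    float d = -(nx * a[0] + ny * a[1] + nz * a[2]);
    return new float[] { nx, ny, nz, d };
}
```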
This type of problem usually benefits from clearly defining the variables.
In this case, you have a head at physical position {x,y,z} and you want the ground projection {x,y,0}. That's trivial, but your camera gives you {u,v,d} (d being depth) and you need to transform that to {x,y,z}.
The easiest solution to find the transform for a given camera positioning may be to simply put known markers on the floor at {0,0,0}, {1,0,0}, {0,1,0} and see where they pop up in your camera.
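As an illustration of that marker idea (purely a sketch, with hypothetical helper names): the camera-space positions of markers placed at {0,0,0}, {1,0,0} and {0,1,0} give an origin and two floor axes, and the ground position of the head follows from projecting onto those axes.

```java
// m000, m100, m010: camera-space positions of the floor markers placed at
// world {0,0,0}, {1,0,0} and {0,1,0}.  Returns the head's ground position
// in floor coordinates, i.e. {x, y, 0} with the height component dropped.
static float[] toFloorCoords(float[] p, float[] m000, float[] m100, float[] m010) {
    float[] ex = sub(m100, m000);   // floor x axis in camera space
    float[] ey = sub(m010, m000);   // floor y axis in camera space
    float[] d  = sub(p, m000);
    // Assuming the markers really are 1 unit apart and roughly orthogonal,
    // the floor coordinates are just projections onto the two axes.
    float x = dot(d, ex) / dot(ex, ex);
    float y = dot(d, ey) / dot(ey, ey);
    return new float[] { x, y, 0 };
}

static float[] sub(float[] a, float[] b) {
    return new float[] { a[0] - b[0], a[1] - b[1], a[2] - b[2] };
}

static float dot(float[] a, float[] b) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}
```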

Names for camera moves

I've got a 3D scene and want to offer an API to control the camera. The camera is currently described by its own position, a look-at point in the scene somewhere along the z axis of the camera frame of reference, an “up” vector describing the y axis of the camera frame of reference, and a field-of-view angle. I'd like to provide at least the following operations:
Two-dimensional operations (mouse drag or arrow keys)
Keep look-at point and rotate camera around that. This can also feel like rotating the object, with the look-at point describing its centre. I think that at some point I've heard this described as the camera “orbiting” around the centre of the scene.
Keep camera position, and rotate camera around that point. Colloquially I'd call this “looking around”. With a cinema camera this might perhaps be called pan and tilt, but in 3d modelling “panning” is usually something else, see below. Using aircraft principal directions, this would be a pitch-and-yaw movement of the camera.
Move camera position and look-at point in parallel. This can also feel like translating the object parallel to the view plane. As far as I know this is usually called “panning” in 3d modelling contexts.
One-dimensional operations (e.g. mouse wheel)
Keep look-at point and move camera closer to that, by a given factor. This is perhaps what most people would consider a “zoom” except for those who know about real cameras, see below.
Keep all positions, but change field-of-view angle. This is what a “real” zoom would be: changing the focal length of the lens but nothing else.
Move both look-at point and camera along the line connecting them, by a given distance. At first this feels very much like the first item above, but since it changes the look-at point, subsequent rotations will behave differently. I see this as complementing the last point of the 2d operations above, since together they allow me to move camera and look-at point together in all three directions. The cinema camera man might call this a “dolly” shot, but I guess a dolly might also be associated with the other translation moves parallel to the viewing plane.
Keep look-at point, but change camera distance from it and field-of-view angle in such a way that projected sizes in the plane of the look-at point remain unchanged. This would be a dolly zoom in cinematic contexts, but might also be used to adjust for the viewer's screen size and distance from screen, to make the field-of-view match the user's environment.
Rotate around z axis in camera frame of reference. Using aircraft principal directions, this would be a roll motion of the camera. But it could also feel like a rotation of the object within the image plane.
What would be a consistent, unambiguous, concise set of function names to describe all of the above operations? Perhaps something already established by some existing API?
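For illustration only, one possible naming set (these are suggestions, not an established API) could look like the following interface; the cinematography terms are kept where they seem unambiguous:

```java
// Hypothetical camera interface; all names are suggestions, not an existing API.
interface CameraControl {
    // Two-dimensional operations
    void orbit(float yaw, float pitch);   // rotate camera around the look-at point
    void look(float yaw, float pitch);    // rotate view around the camera position ("pan/tilt")
    void pan(float dx, float dy);         // move camera and look-at point parallel to the view plane

    // One-dimensional operations
    void approach(float factor);          // move camera toward the fixed look-at point
    void zoom(float factor);              // change field of view only (a "real" zoom)
    void dolly(float distance);           // move camera and look-at point along the view direction
    void dollyZoom(float distance);       // change distance and FOV together, keeping projected size
    void roll(float angle);               // rotate around the view axis
}
```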

OpenGL (simple) scene & object navigation basics for "lookthrough" camera

I've read through several over-complicated articles on rendering a 3D scene with OpenGL, and haven't really found something that helps me to visualize the basic concepts of scene navigation with respect to glRotate and glTranslate calls.
From experimenting with the examples provided by LWJGL (my particular OpenGL library), I understand very basically what effect comes of their use. I can use glTranslate to move to a point, glRotate to rotate about that point, and glLoadIdentity to snap back to the origin or glPopMatrix to go back to the last glPushMatrix, which are essentially snapshots of a location and rotation. Finally, the scene will render to screen with respect to the origin.
So basically, to put a cube at point A with rotation B:
glTranslate(A.x,A.y,A.z) [now focused on point A]
glRotate(B.angle, x, y, z), one call per axis for pitch, yaw, and roll [now rotated to rotation B]
glBegin(GL_QUADS) and glVertex3f()x4 for each 'side'(quad) relative to object's origin
glEnd()
glLoadIdentity() [reset to origin for other objects, not needed if only drawing the cube]
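In LWJGL terms, the list above corresponds roughly to the following sketch (an illustration only; it assumes an active GL context and that B holds pitch/yaw/roll in degrees):

```java
import static org.lwjgl.opengl.GL11.*;

// Sketch of the steps above: draw a cube at (ax, ay, az) with the given
// pitch/yaw/roll in degrees.  Assumes the modelview matrix is current.
static void drawCubeAt(float ax, float ay, float az,
                       float pitch, float yaw, float roll) {
    glPushMatrix();
    glTranslatef(ax, ay, az);      // move to point A
    glRotatef(yaw,   0f, 1f, 0f);  // one glRotatef per axis...
    glRotatef(pitch, 1f, 0f, 0f);
    glRotatef(roll,  0f, 0f, 1f);  // ...and the order matters
    glBegin(GL_QUADS);
    //   glVertex3f(...) x4 for each face, relative to the object's origin
    glEnd();
    glPopMatrix();                 // restore the matrix for other objects
}
```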
As for the "camera", since openGL's "eye" is fixed, the scene has to move inversely from the camera in order to simulate moving the eye with the camera. This is my understanding so far, and if this is wrong, please put me on the right path.
Now for my specific situation and question:
My 'scene' consists of terrain and a player (some arbitrary shape, located at a terrain-relevant location and a 'camera' head). The view should rotate with respect to the player, and the player move with respect to the terrain. I want the final rendering to essentially "look through" the player's head, or camera. My confusion is with the order of translate/rotate calls for rendering each part, and the direction of translation. Is the following the proper way to render the terrain and player with respect to the player's "camera"?
translate away from the player by the player's distance from origin (move the scene)
rotate away from the player's rotation (player looks 40 degrees right, so rotate scene 40 left)
render terrain
reset via glLoadIdentity
render player's head (if needed)
translate to body/hands/whatever position
rotate away from the player's rotation (step 2 again)
render body/hands/whatever
Also, does rotating have an effect on translation? That is, does OpenGL translate with respect to the current rotation, or does rotation have no bearing on translation?
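To make the ordering from the list above concrete, a typical fixed-function "look through the player's head" frame might look like the following sketch (drawTerrain and drawPlayerBody are placeholders, not functions from the question). Note that transforms do compose: a glTranslate issued after a glRotate happens in the already-rotated frame.

```java
import static org.lwjgl.opengl.GL11.*;

// Sketch: render the scene as seen from the player's head.
static void renderFrame(float camX, float camY, float camZ,
                        float camPitch, float camYaw) {
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    // Inverse of the camera transform: rotate by the negative angles first,
    // then translate by the negative position (the reverse of how the camera
    // itself would be placed in the world).
    glRotatef(-camPitch, 1f, 0f, 0f);
    glRotatef(-camYaw,   0f, 1f, 0f);
    glTranslatef(-camX, -camY, -camZ);

    drawTerrain();        // terrain drawn in world coordinates

    // Objects attached to the player get the player's own transform on top.
    glPushMatrix();
    glTranslatef(camX, camY, camZ);   // back to the player...
    glRotatef(camYaw,   0f, 1f, 0f);  // ...facing the way the player faces
    glRotatef(camPitch, 1f, 0f, 0f);
    drawPlayerBody();
    glPopMatrix();
}

static void drawTerrain()    { /* the poster's terrain rendering goes here */ }
static void drawPlayerBody() { /* the poster's player rendering goes here */ }
```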
I can use glTranslate to move to a point, glRotate to rotate about that point, and glLoadIdentity to snap back to the origin or glPopMatrix to go back to the last glPushMatrix, which are essentially snapshots of a location and rotation.
No, not quite. Those functions have no idea what a scene is. All they do is manipulate matrices. Here's an excellent article about it:
http://www.opengl.org/wiki/Viewing_and_Transformations
One important thing to keep in mind when working with OpenGL is that it is not a scene graph. All it does is render flat points, lines and triangles to the framebuffer. There's no such thing as a scene you navigate.
So basically, to put a cube at point A with rotation B:
(...)
Yes, you got that one right.
As for the "camera", since openGL's "eye" is fixed
Well, OpenGL has no camera. It just pushes vertices through the modelview matrix, does lighting calculations on them, then passes them through the projection matrix and maps them to the viewport.

How to draw smooth lines without using GLSL, FSAA nor GL_LINE_SMOOTH?

So I need a way to draw smooth lines without using:
Full Screen Antialiasing (slow)
Shaders (not supported on all cards)
GL_LINE_SMOOTH (causes a crash on some cards)
The only way I could think of doing this was using a textured rectangle that always faces the camera, but the problems are:
1. How do I always face the rectangle at the camera (efficiently)?
2. How do I keep its size the same no matter how far away the camera is?
Any other ideas?
Billboarding is a simple concept, but it can be difficult to implement. A billboard is a flat object, usually a quad (square), which faces the camera. This direction usually changes constantly during runtime as the object and camera move, and the object needs to be rotated each frame to point in that direction. There are two types of billboarding: point and axis. Point sprites, or point billboards, are quads centered at a point; the billboard rotates about that central point to face the user. Axis billboards come in two types: axis-aligned and arbitrary. Axis-aligned (AA) billboards always have one local axis that is aligned with a global axis, and they are rotated about that axis to face the user. Arbitrary-axis billboards are rotated about any axis to face the user.
http://nehe.gamedev.net/data/articles/article.asp?article=19
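A common way to implement the point billboard described above without shaders is to read the camera's right and up vectors out of the current modelview matrix and span the quad with them; scaling the half-size by the billboard's distance to the camera keeps the projected size constant, which addresses the second question above. A hedged LWJGL sketch:

```java
import java.nio.FloatBuffer;
import org.lwjgl.BufferUtils;
import static org.lwjgl.opengl.GL11.*;

// Sketch of a view-aligned ("point") billboard: span the quad with the
// camera's right and up vectors, read back from the current modelview matrix.
// Assumes the modelview currently holds only the camera/view transform.
static void drawBillboard(float cx, float cy, float cz, float halfSize) {
    FloatBuffer mv = BufferUtils.createFloatBuffer(16);
    glGetFloat(GL_MODELVIEW_MATRIX, mv);

    // Column-major modelview: camera right is row 0, camera up is row 1
    // of the rotation part.
    float rx = mv.get(0), ry = mv.get(4), rz = mv.get(8);
    float ux = mv.get(1), uy = mv.get(5), uz = mv.get(9);

    glBegin(GL_QUADS);
    glTexCoord2f(0f, 0f); glVertex3f(cx - (rx + ux) * halfSize, cy - (ry + uy) * halfSize, cz - (rz + uz) * halfSize);
    glTexCoord2f(1f, 0f); glVertex3f(cx + (rx - ux) * halfSize, cy + (ry - uy) * halfSize, cz + (rz - uz) * halfSize);
    glTexCoord2f(1f, 1f); glVertex3f(cx + (rx + ux) * halfSize, cy + (ry + uy) * halfSize, cz + (rz + uz) * halfSize);
    glTexCoord2f(0f, 1f); glVertex3f(cx - (rx - ux) * halfSize, cy - (ry - uy) * halfSize, cz - (rz - uz) * halfSize);
    glEnd();
}
```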
You can use point sprites; they are always the same size and always face the camera.
http://www.opengl.org/registry/specs/ARB/point_sprite.txt
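For completeness, the fixed-function point-sprite setup from that extension looks roughly like this (a sketch; 'lineTexture' and the point data are placeholders, constants come from LWJGL's ARBPointSprite, and the sprite size is specified in pixels, which is why it stays constant regardless of distance):

```java
import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.opengl.ARBPointSprite.*;

// Sketch: draw screen-aligned, fixed-pixel-size sprites via ARB_point_sprite.
// 'lineTexture' is assumed to be a texture holding the antialiased dot image.
static void drawSprites(int lineTexture, float[] points, float sizeInPixels) {
    glBindTexture(GL_TEXTURE_2D, lineTexture);
    glEnable(GL_POINT_SPRITE_ARB);
    // Generate texture coordinates across each point automatically.
    glTexEnvi(GL_POINT_SPRITE_ARB, GL_COORD_REPLACE_ARB, GL_TRUE);
    glPointSize(sizeInPixels);      // size is in pixels, not world units

    glBegin(GL_POINTS);
    for (int i = 0; i + 2 < points.length; i += 3) {
        glVertex3f(points[i], points[i + 1], points[i + 2]);
    }
    glEnd();

    glDisable(GL_POINT_SPRITE_ARB);
}
```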