I've been using the Bullet physics engine with OpenGL to visualise my simulations. I currently have a very simple simulation: a cube with an initial horizontal and forward velocity falls down from the sky and collides with the walls of a room, which are all slanted at 45 degrees, with the bottom of each wall meeting the floor.
I use getOpenGLMatrix to get the orientation, position, etc. of the cube and map it to OpenGL by making that matrix the Model matrix. However, when I run it and visualise the simulation, the cube behaves as expected (it rolls down the wall), but it does not "touch" the rendered OpenGL wall (by "touch" I mean the rendered cube does not appear to come anywhere near the rendered wall).
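For context, the mapping I'm describing is roughly the following (a minimal sketch, not my exact code; it assumes single-precision Bullet and GLM for the matrix type):

```cpp
// Sketch: copy a rigid body's world transform into an OpenGL model matrix.
// Assumes btScalar == float (single-precision Bullet) and GLM.
#include <btBulletDynamicsCommon.h>
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

glm::mat4 modelMatrixFromBody(const btRigidBody& body)
{
    btTransform transform;
    if (body.getMotionState())
        body.getMotionState()->getWorldTransform(transform); // interpolated transform for rendering
    else
        transform = body.getWorldTransform();

    float m[16];                   // column-major, which is what OpenGL expects
    transform.getOpenGLMatrix(m);

    return glm::make_mat4(m);      // used directly as the Model matrix, no extra scaling
}
```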
My Bullet cube is 2x2x2 (specified by btBoxShape(btVector3(1.0f,1.0f,1.0f))).
My OpenGL cube is also 2x2x2, with the origin at 0 and corners 1.0 away in each direction.
The only thing I can think of is that the coordinates in Bullet do not map directly to the coordinates of OpenGL (for example, a cube edge of length 1 in Bullet is X pixels, but a cube edge of length 1 in OpenGL is Y pixels). Is this the case? If not, can you think of a reason why I might have this issue? (Obviously I don't expect you to magically know the answer; I'm just wondering if there are any known issues like this.)
Thanks
I'm making a skybox in my game. The game has a solar system with some things in it (to start, the sun and the earth, with stars in the background). The player is on one planet in this solar system. The solar system is represented to the player using a skybox, with 2D sprites projected onto the skybox in the corresponding positions. The Skybox is rendered with OpenGL (actually, Java's LWJGL) [1]
First things first: all of the bodies are being tracked in 3D space. I can obtain their coordinates, relative directions, etc. All orbits are defined independently (i.e., they occur on arbitrary planes). In addition, planets have quaternion rotations. Rendering the system in full 3D, there are no problems.
Projecting the system to the skybox is another matter entirely. In theory, I figure that I should be able to do it like this:
1. Calculate the direction vector of where the player is looking (the full rotation is not relevant - the vector just has to point in the right direction).
2. Multiply this direction vector by their planet's orientation (a quaternion) to calculate the "view direction".
3. Calculate the direction vector from the planet to the object being viewed.
4. Find the rotation between the vectors, and rotate the skybox accordingly.
However, when I feed OpenGL my angles, gimbal lock occurs and orbits that should be straight go all bendy (although rotations around a single axis work fine). In what ways can I attempt to prevent this from happening? I'm at a loss.
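For reference, here is a minimal sketch of steps 2-4 done entirely with quaternions, so no per-axis angles are ever produced (sketched in C++ with GLM rather than LWJGL; the variable names are placeholders, and the same idea carries over to LWJGL's quaternion classes):

```cpp
// Sketch: build the skybox rotation from quaternions only (no Euler angles).
// Assumes GLM; planetOrientation, localViewDir and toObject are placeholders.
#include <cmath>
#include <glm/glm.hpp>
#include <glm/gtc/constants.hpp>
#include <glm/gtc/quaternion.hpp>

// Shortest-arc quaternion taking unit vector `from` onto unit vector `to` (step 4).
glm::quat rotationBetween(const glm::vec3& from, const glm::vec3& to)
{
    float cosTheta = glm::dot(from, to);
    if (cosTheta < -0.9999f) {                        // opposite directions: rotate 180 degrees
        glm::vec3 axis = glm::cross(glm::vec3(0, 1, 0), from);
        if (glm::dot(axis, axis) < 1e-6f)             // `from` was parallel to +Y, pick another axis
            axis = glm::cross(glm::vec3(1, 0, 0), from);
        return glm::angleAxis(glm::pi<float>(), glm::normalize(axis));
    }
    glm::vec3 axis = glm::cross(from, to);
    float s = std::sqrt((1.0f + cosTheta) * 2.0f);
    return glm::quat(s * 0.5f, axis / s);
}

glm::mat4 skyboxRotation(const glm::quat& planetOrientation,
                         const glm::vec3& localViewDir,    // step 1
                         const glm::vec3& toObject)        // step 3 (planet -> object)
{
    glm::vec3 viewDir = glm::normalize(planetOrientation * localViewDir);   // step 2
    glm::quat q = rotationBetween(viewDir, glm::normalize(toObject));       // step 4
    return glm::mat4_cast(q);   // hand this matrix to OpenGL directly; no gimbal lock
}
```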
[1]: My terrain is actually a flat square voxel grid, and I scale the player's coordinates onto it, then pretend that it is a 3D planet.
So, I'm adding physics to my game engine right now, and the physics engine expects the vertices of a primitive to be distributed around 0,0,0. Right now my primitive cubes' vertex positions range from 0 to 1 in every dimension. Should I center the cubes around 0,0,0, or shift the vertices when giving them to the physics engine AND when reading the position of the rigid body?
It depends on the physics engine, but normally it's easiest if the physics system can assume the object's center of mass is at 0,0,0. For your cube primitives, if you think about rotating and scaling them, you should quickly come to the conclusion that a 0,0,0 center is convenient for those operations as well.
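For example, a minimal sketch of re-centering a 0..1 cube around the origin before handing it to the physics engine (the struct and names are just illustrative; the same offset has to be taken into account when placing the rendered mesh at the rigid body's position):

```cpp
// Sketch: shift vertices so the cube's center sits at (0,0,0).
// Assumes the vertices currently range from 0 to 1 on every axis.
#include <vector>

struct Vec3 { float x, y, z; };

void centerUnitCube(std::vector<Vec3>& vertices)
{
    const Vec3 offset{0.5f, 0.5f, 0.5f};        // center of a 0..1 cube
    for (Vec3& v : vertices) {
        v.x -= offset.x;                        // vertices now range from -0.5 to 0.5
        v.y -= offset.y;
        v.z -= offset.z;
    }
    // Remember to add `offset` back (in local space) when drawing the mesh
    // at the rigid body's position, or simply author the mesh centered to begin with.
}
```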
I'm writing a 2D game using a wrapper of OpenGLES. There is a camera aiming at a bunch of textures, which are the sprites for the game. The user should be able to move the view around by moving their fingers around on the screen. The problem is, the camera is about 100 units away from the textures, so when the finger is slid across the screen to pan the camera, the sprites move faster than the finger due to the parallax effect.
So basically, I need to convert 2D screen coordinates to 3D coordinates at a specific z distance away (in my case 100, because that's how far away the textures are).
There are some "Unproject" functions in C#, but I'm using C++, so I need the math behind this function. I'm extremely new to 3D stuff and I'm very bad at math, so if you can explain it as if you were explaining to a 10-year-old, that would be much appreciated.
If I can do this, I can pan the camera at such a speed that it looks like the distant sprites are panning with the user's finger.
For picking purposes, there are better ways than doing a reverse projection. See this answer: https://stackoverflow.com/a/1114023/252687
In general, you will have to scale your finger-movement distance to use it in a far-away plane (z units away).
That is, if l is the amount of finger movement and you want to find the effect z units away, the length is l' = l/z.
But please check the effect and adjust l' (double it, halve it, etc.) until you get the desired result.
Found the answer at:
Wikipedia
It has the following formula:
To determine which screen x-coordinate corresponds to a point at Ax,Az, multiply the point coordinates by:

Bx = Ax * (Bz / Az)

where
Bx is the screen x coordinate
Ax is the model x coordinate
Bz is the focal length—the axial distance from the camera center to the image plane
Az is the subject distance.
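Applied to the panning problem, a minimal sketch of using that formula in reverse (names are placeholders; the focal-length estimate in the comment assumes a standard perspective projection):

```cpp
// Sketch: how far to move the camera so sprites at distance `subjectDistance`
// track the finger. Rearranges Bx = Ax * (Bz / Az) to solve for Ax.
#include <cmath>

// screenDelta:     finger movement on screen, in pixels (Bx)
// focalLength:     Bz, the distance from the camera center to the image plane, in pixels;
//                  for a perspective projection roughly (screenHeight / 2) / tan(fovY / 2)
// subjectDistance: Az, the distance from the camera to the sprite plane (100 here)
float worldDeltaForFinger(float screenDelta, float focalLength, float subjectDistance)
{
    // Bx = Ax * (Bz / Az)  =>  Ax = Bx * (Az / Bz)
    return screenDelta * (subjectDistance / focalLength);
}
```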
So I need a method to do smooth lines without using:
Full Screen Antialiasing (slow)
Shaders (not supported on all cards)
GL_LINE_SMOOTH (causes a crash on some cards)
The only way I could think of doing this was using a textured rectangle that always faces the camera, but the problems are:
1. How do I always face the rectangle towards the camera (efficiently)?
2. How do I keep its size the same no matter how far away the camera is from it?
Any other ideas?
Billboarding is a simple concept, but it can be difficult to implement. A billboard is a flat object, usually a quad (square), which faces the camera. This direction usually changes constantly at runtime as the object and camera move, and the object needs to be rotated each frame to point in that direction. There are two types of billboarding: point and axis. A point sprite, or point billboard, is a quad centered at a point; the billboard rotates about that central point to face the user. Axis billboards come in two types: axis aligned and arbitrary. An axis-aligned (AA) billboard always has one local axis that is aligned with a global axis, and it is rotated about that axis to face the user. An arbitrary-axis billboard is rotated about any axis to face the user.
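As a concrete example, here is a minimal sketch of a point billboard built by reusing the camera's rotation (assuming GLM; `cameraView` is whatever view matrix you already have):

```cpp
// Sketch: model matrix for a point billboard that always faces the camera.
// Transposing the view matrix's 3x3 rotation undoes the camera rotation.
#include <glm/glm.hpp>

glm::mat4 pointBillboard(const glm::mat4& cameraView, const glm::vec3& worldPos, float size)
{
    glm::mat3 rot = glm::transpose(glm::mat3(cameraView));  // inverse of the view rotation

    glm::mat4 model(1.0f);
    model[0] = glm::vec4(rot[0] * size, 0.0f);   // billboard right axis
    model[1] = glm::vec4(rot[1] * size, 0.0f);   // billboard up axis
    model[2] = glm::vec4(rot[2] * size, 0.0f);   // billboard normal (towards the camera)
    model[3] = glm::vec4(worldPos, 1.0f);        // translation
    return model;                                // draw a unit quad with this matrix
}
```

For the second question (constant on-screen size), one option is to scale `size` by the distance from the camera to `worldPos` each frame, so the quad grows as it moves away.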
http://nehe.gamedev.net/data/articles/article.asp?article=19
You can use point sprites; they are always the same size and always face the camera.
http://www.opengl.org/registry/specs/ARB/point_sprite.txt
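A minimal sketch of how they are typically used in the fixed-function pipeline (legacy GL; depending on your headers the constants may be the `_ARB` variants, and the texture and position here are placeholders):

```cpp
// Sketch: draw a camera-facing, constant-size textured point sprite (legacy OpenGL).
#include <GL/gl.h>

void drawPointSprite(GLuint texture, float x, float y, float z, float sizeInPixels)
{
    glEnable(GL_POINT_SPRITE);
    glTexEnvi(GL_POINT_SPRITE, GL_COORD_REPLACE, GL_TRUE); // texcoords generated across the point
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, texture);
    glPointSize(sizeInPixels);                             // screen-space size, independent of distance

    glBegin(GL_POINTS);
    glVertex3f(x, y, z);                                   // one sprite per point
    glEnd();
}
```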
What exactly is back face culling in OpenGL? Can you give me a specific example with e.g. one triangle?
If you look carefully you can see examples of this in a lot of video games. Any time the camera accidentally moves through an object - typically a moving object like a character - notice how the world continues to render correctly. That's because the back sides of the triangles that form the skin of the character are not rendered; they are effectively transparent. If this were not the case then every time the camera accidentally moved inside an object either the screen would go black (because the interior of the object is not lit) or you'd get a bizarre perspective on what the skin of the object looks like from the inside.
Back face culling is where the triangles pointing away from the camera/viewpoint are not considered for drawing.
Wikipedia defines this as:
It is a step in the graphics pipeline that tests whether the points in the polygon appear in clockwise or counter-clockwise order when projected onto the screen. If the user has specified that front-facing polygons have a clockwise winding, but the polygon projected on the screen has a counter-clockwise winding, then it has been rotated to face away from the camera and will not be drawn.
Other systems use the face normal and do the dot product with the view direction.
It is a relatively quick way of deciding whether to draw a triangle or not. Consider a cube. At any one time 3 of the sides of the cube are going to be facing away from the user and hence not visible. Even if these were drawn they would be obscured by the three "forward" facing sides. By performing back face culling you are reducing the number of triangles drawn from 12 to 6 (2 per side).
Back face culling works best with closed "solid" objects such as cubes, spheres, walls.
Some systems don't have this as they consider faces to be double sided and hence visible from either direction.
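In OpenGL, for example, you enable it and pick the winding convention explicitly; a minimal sketch with a single triangle (legacy immediate mode just for brevity):

```cpp
// Sketch: one counter-clockwise triangle with back-face culling enabled (legacy OpenGL).
#include <GL/gl.h>

void drawTriangle()
{
    glEnable(GL_CULL_FACE);     // turn culling on
    glFrontFace(GL_CCW);        // counter-clockwise vertices = front face (the GL default)
    glCullFace(GL_BACK);        // discard back faces (also the default)

    glBegin(GL_TRIANGLES);      // counter-clockwise as seen from +Z, so visible from the front
    glVertex3f(-1.0f, -1.0f, 0.0f);
    glVertex3f( 1.0f, -1.0f, 0.0f);
    glVertex3f( 0.0f,  1.0f, 0.0f);
    glEnd();
    // Viewed from behind (from -Z) the same triangle projects clockwise and is culled.
}
```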
It's only an optimization technique.
When you look at a closed object, say a cube, you only see about half the faces: the faces that are towards you (or, at least, the faces that are not towards you are always occluded by another face that points towards you).
If you skip drawing all these backwards-facing faces, it will have two consequences:
- the rendering time will be roughly halved (on average)
- the final render won't change (since another, front-facing face will be drawn on top of a "culled" one)
So you basically get 2x perf for free.
In order to know whether a triangle is front- or back-facing, you take the edge vectors v1-v0 and v2-v0 and compute their cross product. This gives you the face normal. If this vector points towards you (dot(normal, viewVector) < 0), draw the triangle.
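A minimal sketch of that test in plain vector math (the struct is just illustrative; `viewVector` points from the camera towards the triangle):

```cpp
// Sketch: back-face test via the face normal and the view direction.
struct Vec3 { float x, y, z; };

Vec3 sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

Vec3 cross(const Vec3& a, const Vec3& b)
{
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

// viewVector points from the camera towards the triangle.
bool isFrontFacing(const Vec3& v0, const Vec3& v1, const Vec3& v2, const Vec3& viewVector)
{
    Vec3 normal = cross(sub(v1, v0), sub(v2, v0));   // face normal from the two edge vectors
    return dot(normal, viewVector) < 0.0f;           // facing the camera: draw it
}
```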
Triangles have their vertices specified in a specific order, clockwise IIRC.
When the graphics engine looks at a triangle from a specific direction and the vertices appear counter-clockwise, it knows that it's looking at the back side of the triangle through an object. As the front side of the object covers the triangle, it doesn't have to be drawn.