The axes in an OpenGL graph

I am new to OpenGL. One simple question: is it right to say that the axis which goes to the south is the "X" axis, the horizontal axis which goes to the right is the "Y" axis, and the last vertical axis which goes to the north is the "Z" axis, as in my picture?
[Image: OpenGL axes]

If by "OpenGL graph" you mean the default coordinate system OpenGL uses, then this is wrong. By default +X goes to the right, +Y goes to the top of the screen, and +Z comes out of the screen.
Usually in computer graphics, you expect the camera to look along the negative Z axis.
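To make that concrete, here is a minimal sketch using GLM (an assumption on my part; any matrix library works) showing that a camera placed at the origin and looking down -Z produces an identity view matrix, i.e. it is exactly OpenGL's default:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

int main() {
    // Camera at the origin, looking along -Z, with +Y up:
    // this matches OpenGL's default eye-space convention.
    glm::mat4 view = glm::lookAt(
        glm::vec3(0.0f, 0.0f, 0.0f),   // eye position
        glm::vec3(0.0f, 0.0f, -1.0f),  // looking toward -Z
        glm::vec3(0.0f, 1.0f, 0.0f));  // +Y is up

    // 'view' comes out as the identity matrix: the default camera
    // is already "looking down" the negative Z axis.
    return 0;
}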

Related

Taking cube from camera space to clip space, error in my math?

I'm watching Ken Joy's Computer Graphics lectures on YouTube. One thing I'm confused about: after he takes the cube from camera space to clip space, from my calculations the cube doesn't look like that. I expected the cube to look like the pink parallelogram in my picture: if we assume the Z of the front face of the cube to be -4/3 and of the back face to be -2, then the Ws come out to be 4/3 and 2 respectively. So can someone explain how, after multiplying by the viewing matrix, the cube comes out looking the way Ken has it?
[Image: Ken's view matrix]
[Image: after the view matrix has been applied]
[Image: what I think the side of the cube should look like (the pink parallelogram) after the view matrix has been applied]
My reasoning is that after the perspective divide by W, the blue and green vectors should get truncated to create that pink parallelogram. So I'm struggling to understand this. Thanks in advance.
With a perspective projection, the scene is seen as if through a pinhole camera. The cube on the school board is placed symmetrically around the z axis, in contrast to the cube in the illustration, which is placed above the axis (at +Y).
When the z axis intersects the cube, you can see neither the top nor the bottom of the cube.
When the cube is lifted up, you can see the bottom of the cube, too.
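To make the W values from the question concrete, here is a small sketch (my own, using GLM, with arbitrary field-of-view and near/far values) showing that for a standard perspective matrix the clip-space W of a point is simply its negated eye-space Z, so the front face at z = -4/3 gets w = 4/3 and the back face at z = -2 gets w = 2:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <cstdio>

int main() {
    // Arbitrary projection parameters, just for the sketch.
    glm::mat4 proj = glm::perspective(glm::radians(60.0f), 1.0f, 0.1f, 10.0f);

    glm::vec4 front = proj * glm::vec4(0.5f, 0.5f, -4.0f / 3.0f, 1.0f);
    glm::vec4 back  = proj * glm::vec4(0.5f, 0.5f, -2.0f,        1.0f);

    // w_clip == -z_eye for this kind of projection matrix.
    std::printf("front w = %f\n", front.w);  // 1.3333...
    std::printf("back  w = %f\n", back.w);   // 2.0

    // The perspective divide then scales x and y by 1/w, which is
    // what squeezes the far face of the cube inward on screen.
    glm::vec3 frontNdc = glm::vec3(front) / front.w;
    glm::vec3 backNdc  = glm::vec3(back)  / back.w;
    std::printf("front NDC x = %f, back NDC x = %f\n", frontNdc.x, backNdc.x);
    return 0;
}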

Mapping Bullet physics coordinates to OpenGL

I've been using the Bullet physics engine with OpenGL to visualise my simulations. I currently have a very simple simulation of a cube that has an initial horizontal and forward velocity that falls down from the sky and collides with the walls of a room that are all slanted at 45 degrees, with the bottom of the wall meeting the floor.
I use getOpenGLMatrix to get the orientation, position, etc. of the cube and map it to OpenGL by making that matrix the Model matrix. However, when I run it and visualise the simulation the cube behaves as expected (rolls down the wall), but it does not "touch" the rendered OpenGL wall (I say touch but of course mean the rendered cube does not appear to come near the rendered wall).
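Roughly, the mapping looks like this (a minimal sketch of the approach rather than my exact code; the body pointer and surrounding setup are placeholders):

#include <btBulletDynamicsCommon.h>
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

// Copy a Bullet rigid body's transform into a GLM model matrix.
// Assumes the body was created with a motion state.
glm::mat4 modelMatrixFor(btRigidBody* body) {
    btTransform trans;
    body->getMotionState()->getWorldTransform(trans);

    // getOpenGLMatrix writes a column-major 4x4 array, the layout
    // both OpenGL and GLM expect. btScalar is float in the default
    // single-precision Bullet build.
    btScalar m[16];
    trans.getOpenGLMatrix(m);
    return glm::mat4(glm::make_mat4(m));
}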
My Bullet cube is 2x2x2 (specified by btBoxShape(btVector3(1.0f,1.0f,1.0f))).
My OpenGL cube is also 2x2x2, with the origin at 0 and corners 1.0 away in each direction.
The only thing I can think of is that the coordinates in Bullet physics do not map directly to the coordinates of OpenGL (for example, a cube edge of length 1 in Bullet is X pixels, but a cube edge of length 1 in OpenGL is Y pixels). Is this the case? If not, can you think of why I might have this issue? (Obviously I don't expect you to magically know the answer; I'm just wondering if there are any known issues like this.)
Thanks

Why does the camera face the negative end of the z-axis by default?

I am learning OpenGL from Scratchapixel, and here is a quote from the perspective projection matrix chapter:
Cameras point along the world coordinate system negative z-axis so that when a point is converted from world space to camera space (and then later from camera space to screen space), if the point is to the left of the world coordinate system y-axis, it will also map to the left of the camera coordinate system y-axis. In other words, we need the x-axis of the camera coordinate system to point to the right when the world coordinate system x-axis also points to the right; and the only way you can get that configuration is by having the camera looking down the negative z-axis.
I think it has something to do with a mirror image, but this explanation just confused me. Why does the camera's coordinate system by default not coincide with the world coordinate system (like every other 3D object we create in OpenGL)? I mean, we will need to transform the camera with a transformation matrix anyway (whatever we want with the negative-z setup, we can simulate it), so why bother?
It is totally arbitrary which direction to pick for z.
But your pick has a lot of deep impact.
One reason to stick with the GL -z convention is that face culling will then match GL constant names like GL_FRONT. I'd advise just rolling with the tutorial.
Flipping the sign on just one axis also flips the "parity", so a front face becomes a back face, and a zNear depth test becomes zFar. So it is wise to pick one convention early on and stick with it.
By default, yes, it's a "right hand" system (used in physics, for example). Your thumb is the X axis, your index finger the Y axis, and when you point those in the right directions, the Z axis (your middle finger) points toward you. Why was the Z axis chosen to point into/out of the screen? Because then the X and Y axes lie on the screen, like in 2D graphics.
But in reality, OpenGL has no preferred coordinate system. You can tweak it as you like. For example, if you are making a maze game, you might want Y to go into/out of the screen (and Z upwards), so that you can move nicely in the XY plane. You modify your view/perspective matrices, and you get it, as in the sketch below.
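As a sketch of that last point (assuming GLM; the eye and target positions are arbitrary), a Z-up convention is nothing more than a different up vector passed to the view matrix:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Z-up convention for a maze-style game: the camera hovers above
// the XY plane and +Z is "up" instead of the usual +Y.
glm::mat4 mazeView = glm::lookAt(
    glm::vec3(0.0f, -10.0f, 5.0f),  // eye: behind and above the maze
    glm::vec3(0.0f, 0.0f, 0.0f),    // target: center of the maze
    glm::vec3(0.0f, 0.0f, 1.0f));   // up: +Z instead of +Y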
What is this "camera" you're talking about? In OpenGL there is no such thing as a "camera". All you've got is a two stage transformation chain:
vertex position → viewspace position (by modelview transform)
viewspace position → clipspace position (by projection transform)
To see why by default OpenGL is "looking down" -z, we have to look at what happens if both transformation steps do "nothing", i.e. are the full identity transform.
In that case all vertex positions passed to OpenGL are unchanged: X maps to window width, Y maps to window height. All calculations in OpenGL by default (you can change that) have been chosen to adhere to the rules of a right-handed coordinate system, so if +X points right and +Y points up, then +Z must point "out of the screen" for the right-hand rule to be consistent.
And that's all there is about it. No camera. Just linear transformations and the choice of using right handed coordinates.
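A minimal sketch of that two-stage chain (my own function; the names are placeholders):

#include <glm/glm.hpp>

// The whole "camera" is just these two matrix multiplications.
glm::vec4 toClipSpace(const glm::mat4& projection,
                      const glm::mat4& modelview,
                      const glm::vec4& vertex) {
    glm::vec4 viewspace = modelview * vertex;     // stage 1
    glm::vec4 clipspace = projection * viewspace; // stage 2
    return clipspace;
}

// With both matrices set to the identity, a vertex at (0.5, 0.5, 0, 1)
// lands at NDC (0.5, 0.5): its x and y map straight onto the window.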

glm::unProject appears to be mixing up the screen Y coordinate

I'm trying to convert my mouse's cursor position in my OpenGL viewport to world coordinates. I'm using glm::unProject() to do this. However, it appears that the mouse position's Y coordinate is being negated somehow.
If I orient my camera so the world's Y axis points up and X points right, moving the mouse left/right gives me the correct coordinates; however, when moving the mouse up/down, the Y "world" coordinates I get are reversed (positive Y goes downwards).
If I reorient the camera so that X now points up and Y points left, moving the mouse left/right gives the correct Y coordinates, but moving up/down gives reversed X coordinates. I see the same behavior when I orient for Z.
This page mentions that device coordinates use a left-handed system; maybe this is the cause? Is there something I need to do to handle the case where device coordinates are in a different system? Is there a way to determine that?
I'm also noticing that my transformed coordinates are half what they should be (the mouse on an object at (1,0,0) shows (0.5,0,0)), but I think this is a separate issue, so I'll ask another question once I solve this one.
The basic problem is that OpenGL defines the window origin as the lower-left corner, while most windowing systems use the upper left instead. The solution is simple: subtract the mouse Y coordinate from the window height:

// Convert window-system mouse coordinates to OpenGL's convention.
gl_x = mouse_x;
gl_y = windowHeight - mouse_y;  // flip Y: origin moves to the lower left
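Putting that together with glm::unProject (a sketch; the view, projection, and viewport variables are assumed to be the same ones you use when rendering, and winZ is a depth value read from the depth buffer):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Unproject a mouse position to world space, flipping Y first.
// Assumes the viewport starts at (0, 0); viewport = (x, y, width, height).
glm::vec3 mouseToWorld(float mouseX, float mouseY, float winZ,
                       const glm::mat4& view,
                       const glm::mat4& projection,
                       const glm::vec4& viewport) {
    float windowHeight = viewport.w;
    glm::vec3 win(mouseX, windowHeight - mouseY, winZ);
    return glm::unProject(win, view, projection, viewport);
}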

Angle reflect in Cocos2d?

I am making a game in Cocos2d. I have a ball that will be shot at a flat surface (the top of the screen). How can I make it so the ball travels, hits the surface, then reflects its angle and travels in that direction? Does that make sense? Please tell me if it doesn't, and I will clarify. Thanks!
EDIT: Here's an illustration of what I want:
[Image: illustration of the ball reflecting off the surface]
You could build the game using box2d (in cocos2d). Then you will have that "effect" for free.
Once you launch a ball at an angle, say 50 degrees, add (cos(50)*speed) to its X position and (sin(50)*speed) to its Y position on each update.
When you detect that the ball's Y position has reached the surface's Y position, just change the angle to -50.
But be aware that this only works for a reflection off a top surface, where the ball hits the top surface and bounces down.
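A minimal sketch of that update loop (plain C++ rather than Cocos2d's API; the Ball struct and surfaceY are my own placeholders):

#include <cmath>

struct Ball {
    float x, y;
    float angle;  // degrees, measured from the horizontal
    float speed;
};

// Advance the ball one step and bounce it off a horizontal
// surface at height surfaceY (the top of the screen).
void update(Ball& ball, float surfaceY) {
    const float rad = ball.angle * 3.14159265f / 180.0f;
    ball.x += std::cos(rad) * ball.speed;
    ball.y += std::sin(rad) * ball.speed;

    if (ball.y >= surfaceY) {
        ball.y = surfaceY;          // clamp to the surface
        ball.angle = -ball.angle;   // reflect: 50 degrees becomes -50
    }
}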