2D platform game camera with OpenGL and C++

I am trying to make a 2D platform game using OpenGL and GLUT with C++. You can move the player around with the left and right arrow keys and jump with space. I have all the platforms loaded into the game from a text file and drawn to the screen. This all works really well. The problem I am having is with the camera. When the right arrow key is pressed, I increase the player's x position. The problem is that when this happens I cannot get the actual camera to move as well. This makes me think that instead of moving the player, I should use glTranslatef to translate all the platforms to the left. This seems a bit odd to me, and I am wondering if this is how it should be done. So I guess the final question is: should I translate the entire scene or move the player?

Move the camera to follow the player by calling glTranslatef with the opposite of the player's movement.
You should think of the camera as an in-game object, similar to the player and other movable items, and of the level as a static object with a fixed position. This makes placing in-game items and other things much easier.
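As a minimal sketch of that idea (assuming the player's x position is stored in a float called playerX and the window is 800 pixels wide; both the name and the number are made up for illustration):
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
// shift the whole scene so the player stays centred on screen
glTranslatef((800.0f / 2.0f) - playerX, 0.0f, 0.0f);
// ... draw the platforms and the player at their world coordinates ...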

Actually, when you "move the camera" in OpenGL, since there is no real camera, what is done internally is exactly that: everything in the scene is moved in the opposite direction.
As for the solution, you can use gluLookAt from the GLU library (which usually ships alongside GLUT):
gluLookAt(x, y, z,
          ex, ey, ez,
          0, 1, 0);
where (x, y, z) is the position where you want the camera to be, (ex, ey, ez) is the point you want the camera to look at (expressed in the same coordinates as (x, y, z)), and (0, 1, 0) is the up vector. This function does all the matrix transformations necessary. More info here.
If you're not using GLU, but only raw OpenGL, the same link also explains the equivalent OpenGL calls you can use to achieve exactly the same effect as gluLookAt.
So when you want to move the camera (probably only left and right, since it's a platform game), you only have to change the x value.
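A rough sketch of what that could look like (cameraX here is a hypothetical variable you keep in sync with the player; the camera sits a little way back on the z axis and looks straight ahead):
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
// eye at (cameraX, 0, 10), looking at (cameraX, 0, 0), with +Y as up
gluLookAt(cameraX, 0.0f, 10.0f,
          cameraX, 0.0f, 0.0f,
          0.0f, 1.0f, 0.0f);
// ... draw the level at its fixed world coordinates ...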

I got this working using the following:
gl.PushMatrix()
gl.Translatef((1280/2)-float32((player.Body.Position().X)), 0, 0.0)
// Rest of drawing code
gl.PopMatrix()
This is using OpenGL + Chipmunk Physics (in Go, but it should still apply since these are just C bindings). 1280 in this case is the screen width.

Related

How would I go about moving in a 3D environment, OpenGL C++

I'm not quite sure how I should be making things move using OpenGL.
Am I supposed to be moving the camera's position around the 3D world, or moving/translating the objects around the camera?
I read online that the camera should stay at the origin and everything else should move around the camera, but wouldn't that be an intensive operation? Like if I have 1000 objects and I'm moving, we'd have to move all of these objects. Would it not be easier to move the camera and keep the world objects where they are?
The way OpenGL works is, conceptually, the camera is always at the origin, with the Y axis up and the Z axis as the forward/depth axis. If you want to move or rotate the camera, you actually move everything else the opposite way.
This is opposed to Direct3D for example, where you have a separate camera matrix.
It's a minor detail though because mathematically speaking they're exactly the same. Whether you move everything forward or the camera back, it's exactly the same end result. You could even argue that having only one matrix as opposed to lugging around two and multiplying them is a performance gain, but it's extremely minor and usually you'll separate your camera matrix from your world building matrix anyway.
In OpenGL, the camera is always located at the eye-space coordinate (0, 0, 0). To give the appearance of moving the camera, your OpenGL application must move the scene with the inverse of the camera transformation.
You don't need to worry about moving/translating objects in your scene yourself; the gluLookAt() function does it for you. It computes the inverse camera transform from its parameters and multiplies it onto the current matrix stack.
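A small illustration of that equivalence (a sketch, with camX, camY and camZ standing in for wherever you keep the camera position):
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
// Option 1: let GLU build the inverse camera transform
gluLookAt(camX, camY, camZ,       // eye position
          camX, camY, camZ - 1,   // look straight down the -Z axis
          0.0f, 1.0f, 0.0f);      // up vector
// Option 2: for this simple case, applying the inverse translation by hand
// produces exactly the same matrix:
// glTranslatef(-camX, -camY, -camZ);
// ... then draw all objects at their world coordinates ...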

OpenGL mouse listener return

I'm trying to use the mouse listener in Haskell using OpenGL and have run into a problem. Apparently the return given for the x and y coordinates is a GLint. The problem is in then using these because I need a GLfloat. Simple parsing doesn't appear to fix the problem. On a more technical note, why would this return an int at all when the entire size of the screen is represented in OpenGL as only 1 square unit?
GLUT or GLFW, which are the toolkits you're probably using to access OpenGL, manage windows for you. They have no idea what the current viewport looks like -- heck, you could even be rendering to only a quarter of the window; that would mess up your coordinates pretty badly if GLUT/GLFW cared about OpenGL coordinates! Luckily, they do not: the x and y coordinates you're getting are actual pixel coordinates on the window, with (0, 0) in the top left and the Y axis going downwards. The mouse coordinates are completely separate from OpenGL.
Nonetheless, you can convert a GLint into a GLfloat using fromIntegral.
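For what it's worth, the same pixel-to-OpenGL mapping spelled out in C/C++ with GLUT looks roughly like this (a sketch that assumes the viewport covers the whole window; the callback name is made up):
#include <GL/glut.h>
// GLUT mouse callback: x and y arrive as pixel coordinates,
// with (0, 0) in the top-left corner and y growing downwards.
void onMouse(int button, int state, int x, int y)
{
    float w = (float)glutGet(GLUT_WINDOW_WIDTH);
    float h = (float)glutGet(GLUT_WINDOW_HEIGHT);
    // map into the -1..1 range, flipping y to match OpenGL's convention
    float glX = 2.0f * x / w - 1.0f;
    float glY = 1.0f - 2.0f * y / h;
    // glX and glY can now be used as normalized device coordinates
}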

Making object stay on screen OpenGL SFML

I'm drawing a triangle in OpenGL and you can move it up, down, left and right. I'm using SFML as my windowing framework. I want to know how I can keep my triangle in the window and not have it move outside of it, i.e. if it goes all the way to the top I want it to stop going past the height.
That largely depends on your projection matrix. You need to obtain its high/low bounds (if you use a perspective projection they will depend on the Z distance; with an orthographic matrix it's easier, as Z is squashed) and then check against them: if your object would end up outside the bounds, forbid the movement.
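A minimal sketch of that check, assuming an orthographic projection set up with glOrtho(-1, 1, -1, 1, -1, 1) and a hypothetical triY variable holding the triangle's vertical position:
const float topBound    =  1.0f;
const float bottomBound = -1.0f;
const float halfHeight  =  0.1f;  // half the triangle's height, made up for illustration
// clamp the position so the triangle never leaves the visible area
if (triY + halfHeight > topBound)    triY = topBound - halfHeight;
if (triY - halfHeight < bottomBound) triY = bottomBound + halfHeight;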

OpenGL: Understanding transformation

I was trying to understand lesson 9 from NeHe's tutorials, which is about bitmaps being moved in 3D space.
The most interesting thing here is moving a 2D bitmap texture on a simple quad through 3D space while keeping it facing the screen (the viewer) all the time. So the bitmap looks 3D, but it is a 2D quad facing the viewer all the time, no matter where it is in 3D space.
In lesson 9 a list of stars is generated moving in a circle, which looks really nice. To avoid seeing the star from its side the coder is doing some tricky coding to keep the star facing the viewer all the time.
The code for this is as follows (it is called for each star in a loop):
glLoadIdentity();
glTranslatef(0.0f,0.0f,zoom);
glRotatef(tilt,1.0f,0.0f,0.0f);
glRotatef(star[loop].angle,0.0f,1.0f,0.0f);
glTranslatef(star[loop].dist,0.0f,0.0f);
glRotatef(-star[loop].angle,0.0f,1.0f,0.0f);
glRotatef(-tilt,1.0f,0.0f,0.0f);
After the lines above, the drawing of the star begins. If you check the last two lines, you see that the transformations from lines 3 and 4 are just cancelled (like an undo). These two lines at the end are what keep the star facing the viewer all the time, but I don't know why this works.
I think this comes from my misunderstanding of how OpenGL really does the transformations.
To me the last two lines just look like they undo what was done before, which doesn't make sense. But it works.
So when I call glTranslatef, I know that the current matrix gets multiplied by the translation built from the values provided to glTranslatef.
In other words, "glTranslatef(0.0f,0.0f,zoom);" would move the place where I'm going to draw my stars into the scene if zoom is negative. OK.
But WHAT exactly is moved here? Is the viewer moved "away", or is there some sort of object coordinate system which gets moved into the scene with glTranslatef? What's happening here?
Then glRotatef: what is rotated here? Again a coordinate system, or the viewer itself?
In the real world, I would place the star somewhere in 3D space, then rotate it in world space around my world's origin, then do the moving as the star moves towards the origin and starts at the edge again, and then I would rotate the star itself so it faces the viewer. And I guess this is what is done here. But how do I rotate first around the world's origin and then around the star itself? To me it looks like OpenGL is switching between a world coordinate system and an object coordinate system, which, as you can see, doesn't really happen.
I don't need to add the rest of the code, because it's pretty standard: simple GL initialization for 3D drawing, the rotating stuff, and then the simple drawing of quads with the star texture using blending. That's it.
Could somebody explain what I'm misunderstanding here?
Another way of thinking about the gl matrix stack is to walk up it, backwards, from your draw call. In your case, since your draw is the last line, let's step up the code:
1) First, the star is rotated by -tilt around the X axis, with respect to the origin.
2) The star is rotated by -star[loop].angle around the Y axis, with respect to the origin.
3) The star is moved by star[loop].dist down the X axis.
4) The star is rotated by star[loop].angle around the Y axis, with respect to the origin. Since the star is not at the origin any more due to step 3, this rotation both moves the center of the star, AND rotates it locally.
5) The star is rotated by tilt around the X axis, with respect to the origin. (Same note as 4)
6) The star is moved down the Z axis by zoom units.
The trick here is difficult to type in text, but try and picture the sequence of moves. While steps 2 and 4 may seem like they invert each other, the move in between them changes the nature of the rotation. The key phrase is that the rotations are defined around the Origin. Moving the star changes the effect of the rotation.
This leads to a typical use of stacking matrices when you want to rotate something in-place. First you move it to the origin, then you rotate it, then you move it back. What you have here is pretty much the same concept.
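As a concrete sketch of that rotate-in-place pattern (cx, cy, cz and angle are hypothetical values for the object's centre and its spin):
glPushMatrix();
// Reading upwards from the draw call (the order in which the vertices are affected):
// 1. move the object so its centre sits at the origin,
// 2. rotate it there,
// 3. move it back to its original position.
glTranslatef(cx, cy, cz);
glRotatef(angle, 0.0f, 1.0f, 0.0f);
glTranslatef(-cx, -cy, -cz);
// ... draw the object at its original coordinates ...
glPopMatrix();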
I find that using two hands to visualize matrices is useful. Keep one hand to represent the origin, and the second (usually the right, if you're in a right-handed coordinate system like OpenGL) to represent the object. I splay my fingers like the XYZ axes so I can visualize the rotation locally as well as around the origin. Starting like this, the sequence of rotations around the origin, and linear moves, should be easier to picture.
The second question you asked pertains to how the camera matrix behaves in a typical OpenGL setup. First, understand the concept of screen-space coordinates (similarly, device-coordinates). This is the space that is actually displayed. X and Y are the vectors of your screen, and Z is depth. The space is usually in the range -1 to 1. Moving an object down Z effectively moves the object away.
The Camera (or Perspective Matrix) is typically responsible for converting 'World' space into this screen space. This matrix defines the 'viewer', but in the end it is just another matrix. The matrix is always applied 'last', so if you are reading the transforms upward as I described before, the camera is usually at the very top, just as you are seeing. In this case you could think of that last transform (translate by zoom) as a very simple camera matrix, that moves the camera back by zoom units.
Good luck. :)
The glTranslatef in the middle is affected by the rotation: it moves the star along the axis x' to distance dist, and axis x' is at that time rotated by (tilt + angle) compared to the original x axis.
In OpenGL you have object coordinates which are multiplied by (a stack of) transformation matrices. So you are moving the objects. If you want to "move a camera" you have to multiply by the inverse of the matrix describing the camera's position and axes:
ProjectedCoords = CameraMatrix^-1 . ObjectMatrix . ObjectCoord
I also found this very confusing but I just played around with some of the NeHe code to get a better understanding of glTranslatef() and glRotatef().
My current understanding is that glRotatef() actually rotates the coordinate system, such that glRotatef(90.0f, 0.0f, 0.0f, 1.0f) will cause the x-axis to be where the y-axis was previously. After this rotation, glTranslatef(1.0f, 0.0f, 0.0f) will move an object upwards on the screen.
Thus, glTranslatef() moves objects in accordance with the current rotation of the coordinate system. Therefore, the order of glTranslatef and glRotatef is important in tutorial 9.
In technical terms my description might not be perfect, but it works for me.
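A quick sketch of that order dependence (drawSquare() is just a placeholder for whatever you draw, assumed to be centred on the model origin):
glLoadIdentity();
glRotatef(90.0f, 0.0f, 0.0f, 1.0f);   // rotate the coordinate system 90 degrees about Z
glTranslatef(1.0f, 0.0f, 0.0f);       // "right" is now the old "up", so this moves the square up
drawSquare();                          // ends up around (0, 1, 0)

glLoadIdentity();
glTranslatef(1.0f, 0.0f, 0.0f);       // move first...
glRotatef(90.0f, 0.0f, 0.0f, 1.0f);   // ...then spin the square about its own position
drawSquare();                          // ends up at (1, 0, 0), rotated in place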

Algorithm to only draw what the camera sees?

I'm making a 3D FPS with OpenGL, and here are the basics of how it works. The game is a 3D array of cubes. I know the location of the player's current cube, as well as the camera x, y, z, and I know the x, y, z rotation of the camera too. Right now I just make a square around the player, render that, and then add distant fog. The problem, though, is that I'm still rendering everything that is behind the player. How could I selectively render only what the player sees, instead of rendering everything within an X radius as I am doing now?
Thanks
You are talking about frustum culling, if I understand you correctly. I suggest that you take a look at this tutorial. It provides nice demos and explains everything in detail.
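Not a full frustum test, but a minimal sketch of the basic idea of rejecting objects behind the camera, assuming you already have the camera position and a normalized forward vector (all names here are made up for illustration; a real frustum cull also tests the side, top, bottom, near and far planes, as the tutorial explains):
struct Vec3 { float x, y, z; };

// Returns true if the cube centre lies in front of the camera.
bool inFrontOfCamera(const Vec3& camPos, const Vec3& camForward, const Vec3& cubeCentre)
{
    Vec3 toCube = { cubeCentre.x - camPos.x,
                    cubeCentre.y - camPos.y,
                    cubeCentre.z - camPos.z };
    float dot = toCube.x * camForward.x
              + toCube.y * camForward.y
              + toCube.z * camForward.z;
    return dot > 0.0f;   // a positive dot product means "in front"
}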
This sounds like you need to look into culling concepts.
Are the cubes rooms of a maze through which the player navigates? If so, and assuming the rooms are static over the course of the game, you could use a BSP tree to traverse the scene in order of depth, stopping when you pass the player.