I am currently working on designing my first FPS game using JOGL (the Java bindings for OpenGL).
So far I have been able to generate the 'world' (a series of cubes), and a player model. I have the collision detection between the player and the cubes working great.
Now I am trying to add in the guns. I have the gun models drawn correctly and loading onto the player model. The first gun I'm trying to implement is a laser gun, which shoots an instantaneous line-of-sight laser at whatever you're aiming at. Before I work on implementing the enemy models, I would like to get the collision detection between the laser and the walls working.
My laser, currently, is drawn as a series of small cubes, one after the other. The first cube is drawn at the end of the player's gun, and the rest are drawn continuously from there. The idea was to keep drawing the cubes of the laser until a collision was detected with something, namely the cubes in the world.
I know the locations of the cubes in the world. The problem is that I have to call glPushMatrix to draw my character model, and the laser is then drawn within this modelview. That means I have lost my old coordinate system - I'm drawing the world in one system and the laser in another. Within this player matrix I have to call glRotate and glTranslate several times, in order to sync everything up with the way the camera is rotating. The laser is then built by translating along the z-axis of this new system.
My problem is that through all of these transformations, I no longer have any idea where my laser exists in the map coordinate system, primarily due to the rotations involving the camera.
Does anyone know of a method - or have any ideas, for how to solve this problem? I believe I need a way to convert the new coordinates of the laser into the old coordinates of the map, but I'm not sure how to go about undoing all of the transformations that have been done to it. There may also be some functionality provided by OpenGL to handle this sort of problem that I'm just unaware of.
You shouldn't be considering the laser as a spatial child of the character that fires it. Once it's been fired, the laser is an entity of its own, so you should render as follows:
glPushMatrix();
glMultMatrixf(viewMatrix);      // camera/view transform
glPushMatrix();
glMultMatrixf(playerMatrix);    // player's own world transform
DrawPlayer();
glPopMatrix();
glPushMatrix();
glMultMatrixf(laserMatrix);     // laser's own world transform, independent of the player
DrawLaser();
glPopMatrix();
glPopMatrix();
Also, be sure that you don't mix your rendering transformation logic with the game logic. You should always store the world-space position of your objects to be able to test for intersections regardless of your current OpenGL matrix stack.
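For example, assuming the world cubes are axis-aligned boxes with known min/max corners, a world-space test between the laser ray and a cube could look roughly like this (a minimal sketch; the types and names are illustrative, not taken from the question's code):
#include <algorithm>

// Sketch: test a world-space laser ray against an axis-aligned cube (slab method).
// 'origin' and 'dir' are the laser's start point and direction in world coordinates,
// kept by the game logic, not read back from the OpenGL matrix stack.
struct Vec3 { float x, y, z; };
struct AABB { Vec3 min, max; };   // one of the world cubes

bool rayHitsCube(const Vec3& origin, const Vec3& dir, const AABB& box, float& t)
{
    float tmin = 0.0f, tmax = 1e30f;
    const float o[3]  = { origin.x, origin.y, origin.z };
    const float d[3]  = { dir.x, dir.y, dir.z };
    const float lo[3] = { box.min.x, box.min.y, box.min.z };
    const float hi[3] = { box.max.x, box.max.y, box.max.z };
    for (int i = 0; i < 3; ++i) {
        float inv = 1.0f / d[i];            // assumes d[i] != 0; guard this in real code
        float t0 = (lo[i] - o[i]) * inv;
        float t1 = (hi[i] - o[i]) * inv;
        if (t0 > t1) std::swap(t0, t1);
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
        if (tmin > tmax) return false;      // the ray misses this cube
    }
    t = tmin;                               // distance along the laser to the hit
    return true;
}
Once you know the hit distance t, drawing the laser is just a matter of drawing a beam of that length; the test itself never touches the OpenGL matrix stack.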
Remember to be careful with spatial parent/child relationships. In practice, they aren't that frequent. For more information, google the problems of scene graphs.
The point that was being made in the first answer is that you should never depend on the matrix to position the object in the first place. You should be keeping track of the position and rotation of the laser before you even think about drawing it. Then you use the translate and rotate commands to put it where you know it should be.
You're trying to do things backwards, and yes, that does mean you'll have to do the matrix math, and OpenGL doesn't keep track of that because the ModelView matrix is the ONLY thing OpenGL does keep track of with regard to object positions. OpenGL has no concept of "world space" or "camera space". There is only the matrix that all input is multiplied by. It's elegantly simple... but in some cases I do prefer the way DirectX has a separate view matrix and model matrix.
So, if you don't know where an object is located without matrix math, then I would consider that a fundamental design problem. If you don't need to know the object's position, then matrix-transform to your heart's content, but if you do need its position, start with the position.
(pretty much what the first answer says, just in a different way...)
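Concretely, "starting with the position" could look something like this: keep the laser's world-space origin and direction as plain game data, derived from the same player state the camera uses. A minimal sketch, assuming the player stores a position plus yaw/pitch angles in degrees (the convention here is just one common choice - match it to your own):
#include <cmath>

// Sketch: compute the laser's world-space ray from the player's state,
// then draw it with the same numbers.
struct LaserRay { float ox, oy, oz; float dx, dy, dz; };

LaserRay laserFromPlayer(float px, float py, float pz, float yawDeg, float pitchDeg)
{
    const float yaw   = yawDeg   * 3.14159265f / 180.0f;
    const float pitch = pitchDeg * 3.14159265f / 180.0f;
    LaserRay r;
    // Forward vector from yaw/pitch (one common convention; match yours).
    r.dx =  std::cos(pitch) * std::sin(yaw);
    r.dy =  std::sin(pitch);
    r.dz = -std::cos(pitch) * std::cos(yaw);
    // Start the ray at the player's position (add a gun/muzzle offset if needed).
    r.ox = px; r.oy = py; r.oz = pz;
    return r;
}
That ray is what you test against the world cubes, and it is also what you feed into glTranslatef/glRotatef when drawing, so the drawn beam and the tested beam always agree.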
I have been following some OpenGL tutorials for an open-world project I am currently working on, where the goal is to have an open-world scene with several objects (mountains, etc.) placed inside a skybox.
I would like to ask if there is any way for the camera to move freely inside the skybox, "interacting" with potential objects in it, but without actually getting out of the boundaries of the box. In the tutorials the translation of the camera is removed, so it can only look around without moving.
Is it common practice to actually move the camera inside the skybox, or should I somehow move the skybox along with the camera, thus never reaching the boundaries of the box?
A skybox is usually rendered with no offset relative to the camera, because its content represents things that are very far away (many times farther than any actual camera movement), like stars or mountains that are many kilometers distant. So even if you move, say, 100 m in any direction, the rendered result does not change at all (or changes so little that it cannot be noticed).
If your skybox contains things you want to move towards, that is doable, but you need to limit the movement so you don't get too close, as that would result in pixelation of the skybox and eventually even crossing it. That can be done with the game terrain (you cannot jump over the boundary mountains, or swim too far from an island, etc.).
Another option is to limit the camera's distance from the skybox center to some safe value. If the camera gets farther than that limit, move the skybox so the distance is matched again. That way you can move nearer to or farther from the skybox up to a point (it gets bigger/smaller on the near/far side) and never cross it, without any actual restrictions on the camera position. A sketch of the simpler "skybox follows the camera" variant is shown below.
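A minimal sketch of the "skybox follows the camera" idea in fixed-function style (DrawSkyboxCube is a placeholder for whatever already draws your box):
// Sketch: draw the skybox centered on the camera each frame,
// so the camera can never reach its walls.
void drawSkybox(float camX, float camY, float camZ)
{
    glPushMatrix();
    glTranslatef(camX, camY, camZ);   // re-center the box on the camera
    glDepthMask(GL_FALSE);            // don't let the box write depth
    DrawSkyboxCube();                 // your existing skybox geometry (assumed helper)
    glDepthMask(GL_TRUE);
    glPopMatrix();
}
For the distance-limited variant described above, you would translate the box only by the amount the camera exceeds the allowed radius, instead of by the full camera position.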
First things first: when you are rendering a skybox, you generally don't render an actual box.
The skybox contains things that generally never change, or change only very slowly, and that are so far away the player will never reach them. The skybox is stored in a cube map texture and rendered through a full-screen rectangle. In the shader you use OpenGL's cube map sampling, sampling into the map with the eye vector.
If the skybox is dynamic, for example with a dynamic time of day, it is only re-rendered every couple of frames or only when needed.
A while back I wrote an article on how to do it: GLSL Skybox (you will need to update the code to a modern OpenGL version, though...).
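On the application side the draw itself is tiny. A sketch, assuming a shader program whose fragment shader reconstructs the per-pixel eye direction and samples the cube map (the program, texture, and VAO handles are placeholders):
// Sketch: draw the skybox as a full-screen pass that samples a cube map.
// 'skyProgram' is assumed to turn each pixel's eye direction into a
// texture(samplerCube, dir) lookup in its fragment shader.
glDepthFunc(GL_LEQUAL);                         // let the sky pass at maximum depth
glUseProgram(skyProgram);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_CUBE_MAP, skyCubemap); // the six sky faces
glBindVertexArray(emptyVao);                    // no vertex attributes needed
glDrawArrays(GL_TRIANGLES, 0, 3);               // one full-screen triangle, positions generated in the vertex shader
glDepthFunc(GL_LESS);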
Currently I'm using WebGL for a school project, and am stuck on trying to keep an object perpendicular to the camera while allowing rotation of the rest of the scene.
I currently have a cube and another object on the canvas, and they're being successfully rotated using a quaternion and converting that to a rotation matrix. I would now like to add a simple square within the scene, but have it consistently perpendicular to the camera.
In thinking about this, I've considered an approach using multiple vertex shader programs for the different objects in the scene: one for handling the positioning of all objects in the scene that can be rotated, and another vertex shader corresponding to the square that I don't want rotated.
Though, I really don't know if this approach will work as expected, as I am still a novice at WebGL.
Would there be a better approach to this? I greatly apologize if there's a lack of info given, and can give any more info if needed. I've searched quite a bit for something involving this, but have hit a dead end.
Thank you!
If you want a more step-by-step tutorial: http://www.opengl-tutorial.org/intermediate-tutorials/billboards-particles/billboards/.
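An alternative to juggling separate shader programs is to build the square's model-view matrix so that its rotation part is wiped out, which keeps it facing the camera no matter how the rest of the scene is rotated. A sketch in C++/GLM terms - the same trick ports directly to a WebGL matrix library:
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Sketch: a "spherical billboard". Take the combined model-view matrix and
// overwrite its upper-left 3x3 rotation with a scaled identity, so the quad
// keeps its position but never rotates relative to the camera.
glm::mat4 billboardModelView(const glm::mat4& view, const glm::vec3& worldPos, float scale)
{
    glm::mat4 mv = view * glm::translate(glm::mat4(1.0f), worldPos);
    mv[0] = glm::vec4(scale, 0.0f, 0.0f, 0.0f);  // quad's axes become the camera's axes
    mv[1] = glm::vec4(0.0f, scale, 0.0f, 0.0f);
    mv[2] = glm::vec4(0.0f, 0.0f, scale, 0.0f);
    return mv;  // upload as the model-view for the square only
}
Everything else keeps using the normal rotated model-view; only the square gets this special matrix.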
So I'm learning OpenGL, and one thing I find very strange is that the camera has to stay at the origin and look in the same direction. To achieve camera movement and rotation you have to move and rotate the entire world instead of the camera.
My question is, why can't you move the camera? Does DirectX allow you to move the camera?
This is an interesting question. I think the answer depends on what you actually mean when you are talking about a fixed camera.
As a matter of fact, instead of saying OpenGL has a fixed camera, I'd rather say there isn't any camera at all in OpenGL.
On the other hand, I wouldn't agree with your interpretation that the OpenGL API moves or rotates the world. Instead, I'd say the OpenGL API doesn't move or rotate the world at all.
I think the reason there isn't any concept of a camera in the OpenGL API is that it isn't meant as a high-level abstraction layer, but is rather tied to the computational necessities of displaying computer graphics.
As I assume you're mainly talking about displaying 3-dimensional scenes, this means transforming 3D vertex data into a 2D raster image.
For every frame rendered this involves transforming the 3-D coordinates of every vertex in your scene to its corresponding 2D location on the screen.
As every vertex has to be placed at the right position on screen, it doesn't make any computational difference at all whether you conceptually move something like a camera around or just move the whole world; you have to do the same transformation either way.
The mathematics involved in computing the "right" position for a vertex on screen can be described by a mathematical object called a matrix that, when applied to the 3-D data (the mathematical term for this application is matrix multiplication), results in the desired 2D screen coordinates.
So essentially what happens in rendering a 3-D scene - regardless of whether there is any camera at all or not - is that every vertex is processed by some transformation matrix, leaving the original 3-D data of your vertex intact.
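Per vertex, that processing boils down to a single matrix-vector multiplication; a tiny sketch (GLM-style, illustrative names):
#include <glm/glm.hpp>

// Sketch: what "processing a vertex by a transformation matrix" boils down to.
// The stored 3-D position 'p' is never modified; only a transformed copy is produced.
glm::vec4 transformVertex(const glm::mat4& modelViewProjection, const glm::vec3& p)
{
    return modelViewProjection * glm::vec4(p, 1.0f);  // clip-space position for this frame
}
The perspective divide and viewport mapping then turn that clip-space result into the final 2D pixel position, still without ever modifying the stored vertex.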
As the 3-D vertex data doesn't get changed at all, I'd say OpenGL doesn't move or rotate the world at all - but this "observation" may depend on the observer's perspective.
As a matter of fact, leaving the 3-D vertex data intact without changing it at all is essential to prevent your 3-D scene from deforming due to accumulated rounding inaccuracy.
I hope I could help by expressing my opinion on who or what moves whom, when, and why in the OpenGL API.
Even if I couldn't convince you that there is no world-moving involved in using the OpenGL API, don't forget that the world doesn't weigh anything at all, so moving it around shouldn't be too painful.
BTW, don't bother investigating the proprietary library mentioned in your question; keep relying on open standards.
What's the difference between moving the world and moving the camera? Mathematically... there isn't any; it's the same number either way. It's all a matter of perspective. As long as you code your camera abstraction correctly, you don't have to think of it as moving the world if you don't want to.
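A small sketch of what a typical camera abstraction does under the hood (GLM-style, illustrative names): the view matrix is simply the inverse of the matrix that would place the camera in the world, so "moving the camera" and "moving the world the other way" are literally the same numbers.
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Sketch: a camera "moves" by producing the inverse of its own world transform.
glm::mat4 viewMatrix(const glm::vec3& camPos, float yawRadians)
{
    glm::mat4 camWorld = glm::translate(glm::mat4(1.0f), camPos) *
                         glm::rotate(glm::mat4(1.0f), yawRadians, glm::vec3(0, 1, 0));
    return glm::inverse(camWorld);   // equivalently: rotate(-yaw) * translate(-camPos)
}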
Which of these is the usual way to handle moving objects (e.g. flying barrels) in OpenGL:
1. Points are calculated with your own physics engine and then sent to OpenGL every time the scene has to be displayed, e.g. with glBufferSubDataARB, with the updated coordinates of the flying barrel.
2. There are lots of barrels with the same vertex data, but for each one you tell OpenGL to use a different matrix transformation. When a barrel moves you somehow update its transformation matrix, to reflect which way it rotated/translated in the world.
3. Some other way.
Also, if the answer is #2, is there any easy way to do it, e.g. with abstracted code rather than manipulating the matrices yourself?
OpenGL is not a scene graph, it's a drawing API. The most recent versions of OpenGL (OpenGL-3 core and above) reflect this by not managing matrix state at all. Indeed the answer is 2, more or less, and you actually are expected to deal with the matrix math; OpenGL-3 core no longer provides any built-in functions for that.
Usually a physics engine sees an object as a rigid body with a convex hull. The natural way to represent such a body is a 4×3 matrix (a 3×3 rotation matrix plus a translation vector). So if you're using a physics engine, you're presented with such matrices anyway.
Also you must understand that OpenGL doesn't maintain a scene, so there is nothing you "update". You just draw your data using OpenGL. Matrices are loaded as they are needed.
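A sketch of option 2 in modern-GL terms (GLM-style; the uniform name and the handles are placeholders): the barrel's vertex buffer never changes, you just build a model matrix per barrel from the rotation and translation the physics engine already gives you and upload it before each draw.
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

// Sketch: one shared barrel mesh, many barrels, one model matrix per barrel.
struct Barrel {
    glm::mat3 rotation;     // from the physics engine's rigid body
    glm::vec3 position;     // from the physics engine's rigid body
};

void drawBarrel(GLuint program, GLuint barrelVao, GLsizei indexCount, const Barrel& b)
{
    glm::mat4 model(b.rotation);                  // upper-left 3x3 = rotation
    model[3] = glm::vec4(b.position, 1.0f);       // last column = translation
    glUseProgram(program);
    glUniformMatrix4fv(glGetUniformLocation(program, "u_model"), 1, GL_FALSE,
                       glm::value_ptr(model));
    glBindVertexArray(barrelVao);
    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr);
}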
I'm trying to build a (simple) game engine using C++, SDL and OpenGL, but I can't seem to figure out the next step. This is what I have so far...
An engine object which controls the main game loop
A scene renderer which will render the scene
A stack of game states that can be pushed and popped
Each state has a collection of actors and each actor has a collection of triangles.
The scene renderer successfully sets up the view projection matrix
I'm not sure if the problem I am having relates to how to store an actors position or how to create a rendering queue.
I have read that it is efficient to create a rendering queue that will draw opaque polygons front to back and then draw transparent polygons from back to front. Because of this my actors make calls to the "queueTriangle" method of the scene renderer object. The scene renderer object then stores a pointer to each of the actors triangles, then sorts them based on their position and then renders them.
The problem I am facing is that for this to happen the triangle needs to know its position in world coordinates, but if I'm using glTranslatef and glRotatef I don't know these coordinates!
Could someone please, please, please offer me a solution or perhaps link me to a (simple) guide on how to solve this.
Thank you!
If you write a camera class and use its functions to move/rotate it in the world, you can use the matrix you get from the internal quaternion to transform the vertices, giving you the position in camera space so you can sort triangles from back to front.
A 'queueTriangle' call sounds very inefficient to me. Modern engines often work with many thousands of triangles at a time, so you'd normally hardly ever be working with anything at the level of a single triangle. And if you were changing textures a lot to accomplish this ordering, that is even worse.
I'd recommend a simpler approach - draw your opaque polygons in a much less rigorous order by sorting the actor positions in world space rather than the positions of individual triangles and render the actors from front to back, an actor at a time. Your transparent/translucent polygons still require the back-to-front approach (providing you're not using premultiplied alpha) but everything else should be simpler and faster.
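A sketch of that per-actor sort (the Actor fields are placeholders): transform each actor's stored world-space position into camera space once, then sort on the resulting depth - front to back for opaque actors, back to front for translucent ones.
#include <algorithm>
#include <vector>
#include <glm/glm.hpp>

// Sketch: sort whole actors by camera-space depth instead of individual triangles.
struct Actor {
    glm::vec3 worldPos;   // stored by the game logic, independent of any GL matrix
    bool translucent;
    // ... mesh data ...
};

void sortActorsForDrawing(std::vector<Actor*>& actors, const glm::mat4& view)
{
    std::sort(actors.begin(), actors.end(), [&](const Actor* a, const Actor* b) {
        float da = -(view * glm::vec4(a->worldPos, 1.0f)).z;  // distance in front of the camera
        float db = -(view * glm::vec4(b->worldPos, 1.0f)).z;
        if (a->translucent != b->translucent)
            return !a->translucent;          // all opaque actors first
        return a->translucent ? da > db      // translucent: back to front
                              : da < db;     // opaque: front to back
    });
}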