I have an OpenGL scene in which the user can rotate the camera. I have some two dimensional shapes that I would like to always face the user. I do have the forward facing vector, and I do have the screen point at which the component should be drawn. I'm not sure the best way to approach this problem - should I be rotating the shape to the forward vector (which I'm not entirely sure how to do correctly)? Or is there another way I can just draw in two dimensions and ignore the rotation of the camera (maybe by using an orthographic projection)? Any sample code for helping with this would be appreciated.
PS - I'm doing this in Java, but the language is irrelevant here (it's just OpenGL-specific).
I already answered this in "Inverting rotation in 3D, to make an object always face the camera?"
My first thought is to use the "gluLookAt" matrix.
http://www.opengl.org/resources/faq/technical/viewing.htm
I would say that you keep the positions of the 2D objects, then take the "eye" (camera) position and set that as the target value for each 2D object. That should keep them facing the camera.
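A minimal JOGL-flavored sketch of that idea, assuming a GL2 context; objPos and camPos are hypothetical world-space positions you already track, and the drawQuad callback stands in for however you draw the shape. It yaws the quad about the Y axis so its local +Z points at the camera (a "cylindrical" billboard, which is usually enough for upright 2D shapes):

import com.jogamp.opengl.GL2;

// Rotate a quad (modeled in its own XY plane, facing +Z) toward the camera.
static void drawBillboard(GL2 gl, float[] objPos, float[] camPos, Runnable drawQuad) {
    float dx = camPos[0] - objPos[0];
    float dz = camPos[2] - objPos[2];
    float yaw = (float) Math.toDegrees(Math.atan2(dx, dz)); // angle from +Z to the camera
    gl.glPushMatrix();
    gl.glTranslatef(objPos[0], objPos[1], objPos[2]);
    gl.glRotatef(yaw, 0f, 1f, 0f); // local +Z now points at the camera
    drawQuad.run();                // the 2D shape, drawn in its own XY plane
    gl.glPopMatrix();
}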
Currently I'm using WebGL for a school project, and am stuck on trying to keep an object perpendicular to the camera while allowing rotation of the rest of the scene.
I currently have a cube and another object on the canvas, and they're being successfully rotated using a quaternion and converting that to a rotation matrix. I would now like to add a simple square within the scene, but have it consistently perpendicular to the camera.
In thinking about this, I've considered an approach using multiple vertex shader programs for the different objects in the scene: 1 for handling the positioning of all objects in the scene that can be rotated, and another vertex shader corresponding to the square that I don't want rotated.
Though, I really don't know if this approach will work as expected, as I am still a novice at WebGL.
Would there be a better approach to this? I greatly apologize if there's a lack of info given, and can give any more info if needed. I've searched quite a bit for something involving this, but have hit a dead end.
Thank you!
If you want a more step-by-step tutorial: http://www.opengl-tutorial.org/intermediate-tutorials/billboards-particles/billboards/.
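The core trick carries over to any GL flavor: before drawing the billboarded square, wipe the rotation out of the model-view matrix so only the translation survives. Here is a sketch of that idea in fixed-function JOGL terms (Java, to match the other threads on this page); in WebGL you would do the equivalent inside your vertex shader, as the tutorial linked above demonstrates:

import com.jogamp.opengl.GL2;

static void drawFacingCamera(GL2 gl, Runnable drawSquare) {
    float[] m = new float[16];
    gl.glGetFloatv(GL2.GL_MODELVIEW_MATRIX, m, 0); // current modelview, column-major
    // Overwrite the upper-left 3x3 (the rotation/scale part) with identity,
    // keeping the translation in m[12..14] intact.
    m[0] = 1; m[1] = 0; m[2]  = 0;
    m[4] = 0; m[5] = 1; m[6]  = 0;
    m[8] = 0; m[9] = 0; m[10] = 1;
    gl.glPushMatrix();
    gl.glLoadMatrixf(m, 0);
    drawSquare.run(); // always faces the viewer, regardless of scene rotation
    gl.glPopMatrix();
}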
I have a question.
Is it possible to have two vertex shaders: one that views a vertex in 2D and a second in 3D?
At the moment my program has a 2D view.
The only difference between a vertex in 2D and 3D will be that vec2(x,y) in 3D becomes vec3(x,y,z). So I am thinking about sending a vec3 to the GPU and setting gl_Position.z = 0;
My biggest problem is that I chose magic numbers for glm::lookAt and glm::perspective. If I can see something, that means it works. So when I have both a 2D and a 3D view, everything looks bad.
I can move the camera, so doing 3D by only changing the camera position won't work.
No, this is not possible. But you can always render the same geometry multiple times with different glViewport and projection matrices applied. This is the canonical way to render the classical "top, front, side, perspective" views of 3D editors.
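A sketch of that two-pass approach (JOGL-flavored for consistency with the rest of this page; width, height, and drawScene are hypothetical names standing in for your window size and whatever draws your shared geometry):

import com.jogamp.opengl.GL2;
import com.jogamp.opengl.glu.GLU;

void renderBothViews(GL2 gl, GLU glu, int width, int height) {
    // Pass 1: left half of the window, flat 2D (orthographic) view.
    gl.glViewport(0, 0, width / 2, height);
    gl.glMatrixMode(GL2.GL_PROJECTION);
    gl.glLoadIdentity();
    glu.gluOrtho2D(-1.0, 1.0, -1.0, 1.0);
    gl.glMatrixMode(GL2.GL_MODELVIEW);
    gl.glLoadIdentity();
    drawScene(gl);

    // Pass 2: right half, the SAME geometry through a perspective projection.
    gl.glViewport(width / 2, 0, width / 2, height);
    gl.glMatrixMode(GL2.GL_PROJECTION);
    gl.glLoadIdentity();
    glu.gluPerspective(60.0, (width / 2.0) / height, 0.1, 100.0);
    gl.glMatrixMode(GL2.GL_MODELVIEW);
    gl.glLoadIdentity();
    glu.gluLookAt(0.0, 0.0, 5.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0);
    drawScene(gl);
}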
My biggest problem is that I chose magic numbers for glm::lookAt and glm::perspective.
Well, then I'd tackle that problem: instead of magic numbers, use actual math to create the desired effect.
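For example, every parameter can be derived from the scene instead of guessed. Assuming you know a bounding sphere for your scene (hypothetical center c and radius r) and the real window size, one way to choose the numbers; the same math applies to glm::perspective and glm::lookAt, shown here with GLU for consistency with the rest of this page:

void setupCamera(GL2 gl, GLU glu, int width, int height, double[] c, double r) {
    double fovY   = 60.0;                    // pick a comfortable vertical field of view
    double aspect = (double) width / height; // from the actual viewport, never hard-coded
    // Back the eye off just far enough that the whole bounding sphere fits vertically:
    double dist  = r / Math.sin(Math.toRadians(fovY / 2.0));
    double zNear = Math.max(0.1, dist - r);  // tight near/far planes keep depth precision
    double zFar  = dist + r;
    gl.glMatrixMode(GL2.GL_PROJECTION);
    gl.glLoadIdentity();
    glu.gluPerspective(fovY, aspect, zNear, zFar);
    gl.glMatrixMode(GL2.GL_MODELVIEW);
    gl.glLoadIdentity();
    glu.gluLookAt(c[0], c[1], c[2] + dist,   // eye, backed off along +Z
                  c[0], c[1], c[2],          // target: the scene center
                  0.0, 1.0, 0.0);            // up
}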
So I'm learning OpenGL, and one thing I find very strange is that the camera has to stay at the origin and look in the same direction. To achieve camera movement and rotation you have to move and rotate the entire world instead of the camera.
My question is, why can't you move the camera? Does DirectX allow you to move the camera?
This is an interesting question. I think the answer depends on what you actually mean when you talk about a fixed camera.
As a matter of fact, instead of saying OpenGL has a fixed camera, I'd rather say there isn't any camera at all in OpenGL.
On the other hand, I wouldn't agree with your interpretation that the OpenGL API moves or rotates the world.
Instead, I'd say the OpenGL API doesn't move or rotate the world at all.
I think the reason there isn't any concept of a camera in the OpenGL API is that it isn't meant as a high-level abstraction layer, but rather is tied to the computational necessities of displaying computer graphics.
Since you're mainly talking about displaying 3-dimensional scenes, this means transforming 3D vertex data to a 2D raster image.
For every frame rendered, this involves transforming the 3D coordinates of every vertex in your scene to their corresponding 2D locations on the screen.
As every vertex has to be placed at the right position on screen, it doesn't make any computational difference whether you conceptually move something like a camera around or move the whole world; you have to do the same transformation either way.
The mathematics involved in computing the "right" position for a vertex on screen can be described by a mathematical object called a matrix that, when applied to the 3D data (the mathematical term for this application is matrix multiplication), results in the desired 2D screen coordinates.
So essentially what happens in rendering a 3D scene, regardless of whether there is any camera at all, is that every vertex is processed by some transformation matrix, leaving the original 3D data of your vertex intact.
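To make "applied" concrete, here is the whole operation in a few lines of Java (column-major storage, as OpenGL uses). Note that the input vertex is never modified; the transform only produces a new output:

// Multiply a 4x4 column-major matrix by a homogeneous vertex (x, y, z, w).
static float[] transform(float[] m, float[] v) {
    float[] out = new float[4];
    for (int row = 0; row < 4; row++) {
        out[row] = m[row]      * v[0]   // column 0
                 + m[row + 4]  * v[1]   // column 1
                 + m[row + 8]  * v[2]   // column 2
                 + m[row + 12] * v[3];  // column 3
    }
    return out; // v itself is left intact
}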
As the 3D vertex data doesn't get changed at all, I'd say OpenGL doesn't move or rotate the world at all, though this "observation" may depend on the observer's perspective.
As a matter of fact, leaving the 3D vertex data intact without changing it at all is essential to prevent your 3D scene from deforming due to accumulated rounding inaccuracy.
I hope I could help by expressing my opinion on who or what moves whom, when, and why in the OpenGL API.
Even if I couldn't convince you that there is no world-moving involved in using the OpenGL API, don't forget the fact that the world doesn't weigh anything at all, so moving it around shouldn't be too painful.
BTW, don't bother investigating the proprietary library mentioned in your question; keep relying on open standards.
What's the difference between moving the world and moving the camera? Mathematically... there isn't any; it's the same number either way. It's all a matter of perspective. As long as you code your camera abstraction correctly, you don't have to think of it as moving the world if you don't want to.
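In code, that abstraction amounts to storing a camera pose and emitting its inverse as the view transform. A minimal JOGL-flavored sketch (the Camera class and its fields are hypothetical): moving the camera by +t really is the same arithmetic as translating the world by -t.

import com.jogamp.opengl.GL2;

class Camera {
    float x, y, z;  // camera position in world space
    float yawDeg;   // camera heading around the Y axis

    // Apply the INVERSE of the camera's transform: inverse rotation first,
    // then inverse translation (the reverse order of the camera's own pose).
    void applyViewTransform(GL2 gl) {
        gl.glRotatef(-yawDeg, 0f, 1f, 0f);
        gl.glTranslatef(-x, -y, -z);
    }
}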
Newbie to OpenGL...
I have some very simple code (non-OpenGL) for rotating a rectangle around a single axis and projecting the result down to screen coordinates. I'm now trying to map a bitmap to the resulting shape using OpenGL. When animating the rotation, the perspective of the bitmap is quite heavily distorted. Is this to be expected? Is there something I can do about it?
I know I can use OpenGL to do the whole thing instead (and that works fine), but for my current project the approach above would suit me better, if I can just get around this perspective issue... I'm thinking maybe there's not enough information after I have projected the rotated rectangle down to 2D space for OpenGL to correctly map the bitmap with the right perspective..?
Any input would be much appreciated.
Thanks,
Daniel
To clarify:
I'm using an orthographic projection, and doing the 3D calculation and projection to 2D myself. Then I just use OpenGL for rendering the resulting shape with a texture.
If you project your coordinates yourself and do the texture mapping in 2D screen coordinates, you will lose all projective information and the textures will distort badly.
You can get around this by using perspective texture mapping. A lot of different ways to do this exist, either by writing a real perspective texture mapper or by faking it with a plain texture mapper.
Explaining how this works is somewhat beyond the scope of a single question. I suggest you read the wiki page about perspective texture mapping first and try out the subdivision method:
http://en.wikipedia.org/wiki/Texture_mapping
Then come back and ask more detailed questions.
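One shortcut worth knowing if you keep OpenGL as the rasterizer: instead of subdividing, keep the w (pre-division depth) value of each corner from your own projection and hand OpenGL homogeneous texture coordinates via glTexCoord4f; the hardware then re-applies the perspective divide per pixel and the distortion disappears. A sketch, assuming hypothetical arrays sx, sy (your projected 2D corners), s, t (texture coordinates), and w (per-corner depths):

import com.jogamp.opengl.GL2;

static void drawProjectedQuad(GL2 gl, float[] sx, float[] sy,
                              float[] s, float[] t, float[] w) {
    gl.glBegin(GL2.GL_QUADS);
    for (int i = 0; i < 4; i++) {
        // Premultiply s and t by w and pass w as the q coordinate;
        // OpenGL divides by q per fragment, restoring perspective correctness.
        gl.glTexCoord4f(s[i] * w[i], t[i] * w[i], 0f, w[i]);
        gl.glVertex2f(sx[i], sy[i]);
    }
    gl.glEnd();
}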
I found the following page that explains the subdivision method in detail:
http://freespace.virgin.net/hugo.elias/graphics/x_persp.htm
It worked perfectly! Thanks Nils for pointing me in the right direction.
I am currently working on designing my first FPS game using JOGL. (Java bindings for OpenGL).
So far I have been able to generate the 'world' (a series of cubes), and a player model. I have the collision detection between the player and the cubes working great.
Now I am trying to add in the guns. I have the gun models drawn correctly and loading onto the player model. The first gun I'm trying to implement is a laser gun, which shoots an instantaneous line-of-sight laser at whatever you're aiming at. Before I work on implementing the enemy models, I would like to get the collision detection between the laser and the walls working.
My laser, currently, is drawn by a series of small cubes, one after the other. The first cube is drawn at the end of the players gun, then it draws continuously from there. The idea was to continue drawing the cubes of the laser until a collision was detected with something, namely the cubes in the world.
I know the locations of the cubes in the world. The problem is that I have to call glPushMatrix to draw my character model. The laser is then drawn within this modelview, meaning that I have lost my old coordinate system: I'm drawing the world in one system and the laser in another. Within this player matrix, I have to call glRotate and glTranslate several times in order to sync everything up with the way the camera is rotating. The laser is then built by translating along the z-axis of this new system.
My problem is that through all of these transformations, I no longer have any idea where my laser exists in the map coordinate system, primarily due to the rotations involving the camera.
Does anyone know of a method - or have any ideas, for how to solve this problem? I believe I need a way to convert the new coordinates of the laser into the old coordinates of the map, but I'm not sure how to go about undoing all of the transformations that have been done to it. There may also be some functionality provided by OpenGL to handle this sort of problem that I'm just unaware of.
You shouldn't be considering the laser as a spatial child of the character that fires it. Once it's been fired, the laser is an entity of its own, so you should render as follows:
glPushMatrix();
glMultMatrixf(viewMatrix);    // camera/view transform, shared by everything
glPushMatrix();
glMultMatrixf(playerMatrix);  // the player's own world transform
DrawPlayer();
glPopMatrix();
glPushMatrix();
glMultMatrixf(laserMatrix);   // the laser's world transform, independent of the player
DrawLaser();
glPopMatrix();
glPopMatrix();
Also, be sure that you don't mix your rendering transformation logic with the game logic. You should always store the world-space position of your objects to be able to test for intersections regardless of your current OpenGL matrix stack.
Remember to be careful with spatial parent/child relationships. In practice, they aren't that frequent. For more information, google the problems of scene graphs.
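Once the laser is stored as a world-space ray (origin plus direction), the collision test against the world's cubes needs no OpenGL at all. A minimal sketch using the standard slab method, assuming your world cubes are axis-aligned boxes with known min/max corners:

// Returns true if the ray (origin o, direction d) hits the axis-aligned box [min, max].
static boolean rayHitsCube(float[] o, float[] d, float[] min, float[] max) {
    float tNear = Float.NEGATIVE_INFINITY;
    float tFar  = Float.POSITIVE_INFINITY;
    for (int axis = 0; axis < 3; axis++) {
        if (Math.abs(d[axis]) < 1e-8f) {
            // Ray is parallel to this slab: it must already lie inside it.
            if (o[axis] < min[axis] || o[axis] > max[axis]) return false;
        } else {
            float t1 = (min[axis] - o[axis]) / d[axis];
            float t2 = (max[axis] - o[axis]) / d[axis];
            if (t1 > t2) { float tmp = t1; t1 = t2; t2 = tmp; }
            tNear = Math.max(tNear, t1);
            tFar  = Math.min(tFar, t2);
            if (tNear > tFar || tFar < 0f) return false; // misses, or box is behind the ray
        }
    }
    return true;
}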
The point that was being made in the first answer is that you should never depend on the matrix to position the object in the first place. You should be keeping track of the position and rotation of the laser before you even think about drawing it. Then you use the translate and rotate commands to put it where you know it should be.
You're trying to do things backwards, and yes, that does mean you'll have to do the matrix math, and OpenGL doesn't keep track of that because the ModelView matrix is the ONLY thing that OpenGL does keep track of in regards to object positions. OpenGL has no concept of "world space" or "camera space". There is only the matrix that all input is multiplied by. It's elegantly simple... but in some cases I do prefer the way DirectX has a separate view matrix and model matrix.
So, if you don't know where an object is located without matrix math, then I would consider that a fundamental design problem. If you don't need to know the object's position, then matrix-transform to your heart's content; but if you do need its position, start with the position.
(pretty much what the first answer says, just in a different way...)
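Concretely, for the laser that means deriving its world-space ray from the same player state you already use for the camera, before any OpenGL call happens. A hedged sketch (yawDeg and pitchDeg are hypothetical names for the player's view angles; with both at zero this returns (0, 0, -1), OpenGL's conventional "forward"):

// World-space direction the player is aiming, derived from the view angles.
static float[] aimDirection(float yawDeg, float pitchDeg) {
    double yaw   = Math.toRadians(yawDeg);
    double pitch = Math.toRadians(pitchDeg);
    return new float[] {
        (float) (-Math.sin(yaw) * Math.cos(pitch)),
        (float) ( Math.sin(pitch)),
        (float) (-Math.cos(yaw) * Math.cos(pitch))
    };
}

Feed that direction and the gun's world position into a test like rayHitsCube above, then draw the laser by translating and rotating to that same position: the stored state, not the matrix stack, is the source of truth.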