I'm not really sure if it makes sense, but I need to do it, and there is a chance there won't be any obstacle in front.
It's not clear what you mean by tracing, but:
1. You may be looking for gluUnProject to go from screen coordinates to world-space coordinates. With the help of the Z buffer for the distance from the camera, you can get the coordinates of the 3D point that is seen at the specified pixel (see the sketch after this list).
2. You want to draw a 3D line from the camera origin to some 3D point at the mouse cursor. Well, seen from the camera it's just a point at the mouse cursor.
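A minimal sketch of option 1 in classic fixed-function OpenGL; mouseX and mouseY are placeholder window coordinates as reported by the windowing system (top-left origin):

    GLdouble modelview[16], projection[16];
    GLint viewport[4];
    glGetDoublev(GL_MODELVIEW_MATRIX, modelview);
    glGetDoublev(GL_PROJECTION_MATRIX, projection);
    glGetIntegerv(GL_VIEWPORT, viewport);

    // OpenGL window coordinates have their origin at the bottom left, so flip Y.
    GLdouble winX = (GLdouble)mouseX;
    GLdouble winY = (GLdouble)viewport[3] - (GLdouble)mouseY;

    // Read the depth of the pixel under the cursor from the Z buffer.
    GLfloat winZ;
    glReadPixels((GLint)winX, (GLint)winY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &winZ);

    // Unproject back to world/object space.
    GLdouble objX, objY, objZ;
    gluUnProject(winX, winY, (GLdouble)winZ, modelview, projection, viewport,
                 &objX, &objY, &objZ);
    // If winZ == 1.0 nothing was drawn at that pixel and the result lies on the far plane.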
I have heard that rectangles, mouse coordinates and other things involved in drawing all use device coordinates. Is this true? Is there any way to tell whether I have logical or device coordinates?
I could look at the documentation of the functions that give me the coordinates, but sometimes it doesn't explicitly say whether these are logical or device coordinates. For example, the documentation for the GetCursorPos function says it "retrieves the position of the mouse cursor, in screen coordinates."
I am assuming screen coordinates are the same as device coordinates? Does this mean I have to convert the screen coordinates I get from the function into client coordinates?
You know that coordinate (0,0) is at the top-left corner of the screen. But on paper, when we draw a graph, (0,0) may be at the bottom left, or at the centre of the graph paper.
By default, the logical coordinates and the physical/screen coordinates are the same, and (0,0) is at the top-left. But what if you want to draw a line from the bottom left to somewhere in the middle of the screen, in a way that matches the math/trigonometry you've learnt or are practising? Well, then you change the logical coordinate system to something of your liking.
You'd use SetMapMode to change the logical coordinate system. You can then use LPtoDP, DPtoLP, ClientToScreen, ScreenToClient etc. to map between logical and device coordinates, and between screen and window (client) coordinates.
About Coordinate Spaces and Transformations
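A rough sketch of the usual conversion path in MFC; the view class and method name are hypothetical, and MM_LOENGLISH is only an example mapping mode:

    // Hypothetical MFC view: convert the cursor position (screen/device
    // coordinates) into the logical coordinates of this window.
    void CMyView::ShowCursorInLogicalCoords()
    {
        CPoint pt;
        GetCursorPos(&pt);           // screen coordinates
        ScreenToClient(&pt);         // -> client (device) coordinates of this window

        CClientDC dc(this);
        dc.SetMapMode(MM_LOENGLISH); // example mapping mode: y grows upwards, units of 0.01 inch
        dc.DPtoLP(&pt);              // device -> logical coordinates

        // pt is now in the same logical space used by dc.MoveTo, dc.LineTo, dc.Rectangle, ...
    }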
I am learning OpenGL from Scratchapixel, and here is a quote from the perspective projection matrix chapter:
Cameras point along the world coordinate system negative z-axis so that when a point is converted from world space to camera space (and then later from camera space to screen space), if the point is to left of the world coordinate system y-axis, it will also map to the left of the camera coordinate system y-axis. In other words, we need the x-axis of the camera coordinate system to point to the right when the world coordinate system x-axis also points to the right; and the only way you can get that configuration, is by having camera looking down the negative z-axis.
I think it has something to do with a mirror image, but this explanation just confused me... Why does the camera's coordinate system by default not coincide with the world coordinate system (like every other 3D object we create in OpenGL)? I mean, we need to transform the camera coordinate system with a transformation matrix anyway (whatever we want with the negative-z setup, we can simulate it)... why bother?
It is totally arbitrary which direction you pick for the z axis.
But your pick has a lot of deep impact.
One reason to stick with the GL -z way is that the culling of faces will match GL constant names like GL_FRONT. I'd advise just to roll with the tutorial.
Flipping the sign on just one axis also flips the "parity". So a front face becomes a back face. A znear depth test becomes zfar. So it is wise to pick one early on and stick with it.
By default, yes, it's a "right hand" system (used in physics, for example). Your thumb is the X-axis, your index finger the Y-axis, and when you point those in the right directions, the Z-axis (your middle finger) points towards you. Why has the Z-axis been chosen to point into/out of the screen? Because then the X- and Y-axes lie on the screen, like in 2D graphics.
But in reality, OpenGL has no preferred coordinate system. You can tweak it as you like. For example, if you are making a maze game, you might want Y to go into/out of the screen (and Z to point upwards), so that you can move nicely in the XY plane. You modify your view/projection matrices, and you get it.
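A minimal sketch of that idea in the fixed-function pipeline; a single rotation at the start of the view matrix is enough to make +Z the up axis and +Y the into-the-screen axis:

    // Make +Z the "up" axis and +Y the "into the screen" axis by starting the
    // view part of the modelview matrix with a single rotation about X.
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glRotatef(-90.0f, 1.0f, 0.0f, 0.0f);  // world +Z -> up on screen, world +Y -> into the screen
    // ... apply camera translation and object transforms, then draw; movement in
    // the world XY plane now behaves like "floor plan" movement in a maze game.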
What is this "camera" you're talking about? In OpenGL there is no such thing as a "camera". All you've got is a two stage transformation chain:
vertex position → viewspace position (by modelview transform)
viewspace position → clipspace position (by projection transform)
To see why by default OpenGL is "looking down" -z, we have to look at what happens if both transformation steps do "nothing", i.e. are full identity transforms.
In that case all vertex positions passed to OpenGL are unchanged: X maps to window width, Y maps to window height. All calculations in OpenGL have by default (you can change that) been chosen to adhere to the rules of a right-handed coordinate system, so if +X points right and +Y points up, then +Z must point "out of the screen" for the right-hand rule to be consistent.
And that's all there is about it. No camera. Just linear transformations and the choice of using right handed coordinates.
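A small sketch of the "both transforms are identity" case, assuming immediate-mode OpenGL:

    // Both transforms "do nothing": vertex coordinates pass straight through
    // to clip space. X/Y in [-1, 1] span the viewport, +X to the right and
    // +Y up; picking right-handed coordinates on top of that puts +Z out of
    // the screen, i.e. the implicit view direction is -Z.
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glBegin(GL_TRIANGLES);
    glVertex3f(-0.5f, -0.5f, 0.0f);  // lower left of the window
    glVertex3f( 0.5f, -0.5f, 0.0f);  // lower right
    glVertex3f( 0.0f,  0.5f, 0.0f);  // top centre
    glEnd();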
In an OpenGL context, I have seen that it is possible to convert mouse coordinates to 3D world coordinates (e.g. MFC with Opengl get 3d coordinate from 2d coordinate of mouse). However, this does not work when I simply have a set of GLPoints and lots of empty space: when I'm hovering the mouse over empty space, the 3D coordinates have no meaning.
How can I get the coordinates of the nearest 3D point to my mouse position?
Recognize that "unprojecting" 2D mouse coordinates into a 3D world requires additional information. A 2D position on the screen corresponds to an infinite number of points along a line, from the near plane to the far plane in 3D. What this means is that to get a 3D point, you need to provide a depth value as well.
The gluUnProject function is a convenient way of doing this and provides a parameter for this depth value, winZ. 0.0 would find the 3D point at the near plane and 1.0 would find it at the far plane.
gluUnProject(winX, winY, winZ, model, proj, view, &objX, &objY, &objZ);
Note: If gluUnProject is not available you can figure out the same thing pretty easily with matrices. source here
When the mouse is used to interact with 3D objects this depth value is typically found by sampling from the depth buffer, or intersecting a ray with scene geometry, or a primitive (such as a sphere or box). If the objective is to interact with points, you could use a sphere around each point. If you just want a 3D point "somewhere out there" you could decide on a depth value (maybe 0.0 on the near plane or 0.5 - halfway between the near and far plane).
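For the original question (the nearest GL point to the cursor), one option is to go the other way: project every point into window space with gluProject and compare 2D distances to the mouse. A rough sketch, where points, numPoints, mouseX and mouseY are placeholders (mouseY already flipped to OpenGL's bottom-left origin):

    GLdouble modelview[16], projection[16];
    GLint viewport[4];
    glGetDoublev(GL_MODELVIEW_MATRIX, modelview);
    glGetDoublev(GL_PROJECTION_MATRIX, projection);
    glGetIntegerv(GL_VIEWPORT, viewport);

    int nearestIndex = -1;
    double nearestDist2 = 1e300;
    for (int i = 0; i < numPoints; ++i) {
        GLdouble winX, winY, winZ;
        gluProject(points[i].x, points[i].y, points[i].z,
                   modelview, projection, viewport, &winX, &winY, &winZ);
        if (winZ < 0.0 || winZ > 1.0)
            continue;                       // behind the camera or outside the depth range
        double dx = winX - mouseX, dy = winY - mouseY;
        double dist2 = dx * dx + dy * dy;
        if (dist2 < nearestDist2) {
            nearestDist2 = dist2;
            nearestIndex = i;
        }
    }
    // points[nearestIndex] is the point whose projection lies closest to the cursor.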
I want to mix a perspective and orthographic view, but I can't get it to work.
I want X and Y coordinates to be orthographic and Z perspective. For clarification I added a sketch of the desired transformation from OpenGL coordinates to screen display:
(I started from a tutorial, but couldn't find how to get the values for top, bottom, etc.)
What you've drawn is simply perspective, not a mix. You just have to make sure that the viewing direction is parallel to the z axis to make the front and back faces of the box stay rectangular.
You could probably use glFrustum to achieve this.
If you use a standard perspective matrix and the camera faces the box front on, X/Y will be uniform; however, movement away from the camera will move the X/Y coordinates towards the centre, shrinking them for a standard parallax effect. What you've drawn is movement towards the top of the window. All you need to do is crop the perspective projection to below its standard centre. That's where glFrustum comes in: move the normally symmetrical top/bottom arguments down, align the camera/view matrix along the axis you want, and you should have the desired projection.
Any rotation of the camera/view will destroy the uniform projection in the X/Y plane. For camera movement you're then limited to panning and moving the glFrustum bounds.
EDIT: Come to think of it, you could probably just throw in a glTranslatef(shearX, shearY, 0) before the call to gluPerspective and achieve the same thing.
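A sketch of the off-centre glFrustum idea; the numbers are placeholders and the camera is assumed to look straight down the z axis:

    // Off-centre frustum: left/right stay symmetric so X remains uniform,
    // while bottom/top are both shifted down so that increasing depth reads
    // as movement towards the top of the window.
    double nearZ = 1.0, farZ = 100.0;
    double halfW = 1.0;      // half-width of the projection window at the near plane
    double shift = 0.75;     // how far the window is pushed down

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustum(-halfW, halfW,                   // left, right (symmetric)
              -halfW - shift, halfW - shift,   // bottom, top (both moved down)
              nearZ, farZ);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();        // keep the view aligned with the z axis: any rotation
                             // would break the uniform X/Y mapping again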
So I need a method to draw smooth lines without using:
Full Screen Antialiasing (slow)
Shaders (not supported on all cards)
GL_LINE_SMOOTH (causes a crash on some cards)
The only way I could think of doing this was to use a textured rectangle that always faces the camera, but the problems are:
1. How do I always face the rectangle towards the camera (efficiently)?
2. How do I keep its size the same no matter how far away the camera is?
Any other ideas?
Billboarding is a simple concept, but can be difficult to implement. A billboard is a flat object, usually a quad (square), which faces the camera. This direction usually changes constantly during runtime as the object and camera move, and the object needs to be rotated each frame to point in that direction. There are two types of billboarding: point and axis. Point sprites, or point billboards, are a quad that is centered at a point and the billboard rotates about that central point to face the user. Axis billboards come in two types: axis aligned and arbitrary. The axis-aligned (AA) billboards always have one local axis that is aligned with a global axis, and they are rotated about that axis to face the user. The arbitrary axis billboards are rotated about any axis to face the user.
http://nehe.gamedev.net/data/articles/article.asp?article=19
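A sketch of a point billboard in immediate mode, assuming the modelview matrix currently holds only the (unscaled) view transform; center and eye are placeholder vectors for the billboard position and the camera position:

    // Camera right/up vectors are the first two rows of the modelview matrix
    // (column-major storage), assuming it holds only the unscaled view transform.
    GLfloat m[16];
    glGetFloatv(GL_MODELVIEW_MATRIX, m);
    float rx = m[0], ry = m[4], rz = m[8];   // camera right in world space
    float ux = m[1], uy = m[5], uz = m[9];   // camera up in world space

    // Scale the half-size with distance to the camera so the quad keeps
    // roughly the same size on screen (question 2 above).
    float dx = center.x - eye.x, dy = center.y - eye.y, dz = center.z - eye.z;
    float s = 0.01f * sqrtf(dx * dx + dy * dy + dz * dz);

    glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex3f(center.x - s * (rx + ux), center.y - s * (ry + uy), center.z - s * (rz + uz));
    glTexCoord2f(1, 0); glVertex3f(center.x + s * (rx - ux), center.y + s * (ry - uy), center.z + s * (rz - uz));
    glTexCoord2f(1, 1); glVertex3f(center.x + s * (rx + ux), center.y + s * (ry + uy), center.z + s * (rz + uz));
    glTexCoord2f(0, 1); glVertex3f(center.x - s * (rx - ux), center.y - s * (ry - uy), center.z - s * (rz - uz));
    glEnd();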
You can use point sprites; they are always the same size and always face the camera.
http://www.opengl.org/registry/specs/ARB/point_sprite.txt
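A minimal sketch of the point sprite setup, assuming OpenGL 2.0 or the ARB_point_sprite extension; lineDotTexture and the vertex coordinates are placeholders:

    // Point sprites: the size is specified in pixels, so the dots keep the same
    // on-screen size regardless of distance, and they always face the camera.
    glEnable(GL_POINT_SPRITE);                               // GL_POINT_SPRITE_ARB on older headers
    glTexEnvi(GL_POINT_SPRITE, GL_COORD_REPLACE, GL_TRUE);   // generate texture coords across each point
    glPointSize(16.0f);                                      // sprite size in pixels

    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, lineDotTexture);            // placeholder texture handle

    glBegin(GL_POINTS);
    glVertex3f(x, y, z);                                     // one sprite per point along the line
    glEnd();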