MFC: when to use logical/device coordinates - C++

I have heard that rectangles, mouse coordinates and other things involved in drawing all use device coordinates. Is this true? Is there any way to tell whether I have logical or device coordinates?
I could look at the documentation of the functions that give me the coordinates, but sometimes they don't explicitly say whether these are logical or device coordinates. For example, the documentation for the GetCursorPos function says it "retrieves the position of the mouse cursor, in screen coordinates."
I am assuming screen coordinates are the same as device coordinates? Does this mean I have to convert the screen coordinates I get from the function into client coordinates?

You know that coordinate (0,0) is at the top-left corner of the screen. But on paper, when we draw a graph, (0,0) may be at the bottom left, or at the centre of the graph plotting paper.
By default, the logical coordinates and physical/device coordinates are the same, and (0,0) is at the top left. But what if you want to draw a line from the bottom left to somewhere in the middle of the screen, matching the math/trigonometry you've learnt or are practising? Then you change the logical coordinate system to something of your liking.
You'd use SetMapMode to change the logical coordinate system. You can then use LPtoDP, DPtoLP, ClientToScreen, ScreenToClient, etc. for mapping between logical and device coordinates, and for mapping between screen and window (client) coordinates.
About Coordinate Spaces and Transformations
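As a concrete illustration, here is a minimal sketch of that flow in an MFC view. The class name CMyView, the handler name and the MM_LOENGLISH map mode are example choices, not requirements: GetCursorPos returns screen coordinates, ScreenToClient converts them to client (device) coordinates, and DPtoLP converts those to logical coordinates under the current map mode.

```cpp
// Minimal sketch, assuming a CView-derived class named CMyView and the
// MM_LOENGLISH mapping mode (both are example choices, not requirements).
void CMyView::OnSomeAction()
{
    CClientDC dc(this);
    dc.SetMapMode(MM_LOENGLISH);   // logical unit = 0.01 inch, y grows upward

    CPoint pt;
    ::GetCursorPos(&pt);           // screen coordinates
    ScreenToClient(&pt);           // screen -> client (device) coordinates
    dc.DPtoLP(&pt);                // device -> logical coordinates

    TRACE(_T("logical cursor position: (%d, %d)\n"), pt.x, pt.y);
}
```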

Related

Is it possible to separate normalized device coordinates and window clipping in OpenGL (glViewport)

Is there a way to set a transformation for NDC to window, but separately specify the clipping region so it matches the actual window size?
Background: I have a bunch of OpenGL code that renders a 2D map to a window. It's a lot of complex code, because I use both the GPU and the CPU to draw on the map, so it's important that I keep a consistent coordinate system in both places. To keep that simple, I use glViewport(0, 0, mapSizeX, mapSizeY), and now map coordinates correspond well to pixel coordinates in the frame buffer, exactly what I need. I can use GLSL to draw some of the map, call glReadPixels and use the CPU to draw on top of that, and glDrawPixels to send that back to the frame buffer, all using the same coordinate system. Finally, I use GLSL to draw a few final things over that (that I don't want zoomed). That all works, except...
The window isn't the same size as the map, and glViewport doesn't just set up the transformation. It also sets up clipping. So now when I go to draw a few last items, and the window is larger than the map, things I draw near the top of the screen get clipped away. Is there a workaround?
glViewport doesn't just set up the transformation. It also sets up clipping.
No, it just sets up the transformation. By the time the NDC-to-window space transform happens, clipping has already been done. That happened immediately after vertex processing; your vertex shader (or whatever you're doing to transform vertices) handled that based on how it transformed vertices into clip-space.
You should use the viewport to set up how you want the NDC box to visibly appear in the window. Your VS needs to handle the transformation into the clipping area. So it effectively decides how much of the world gets put into the NDC box that things get clipped to.
Basically, you have map space (the coordinates used by your map) and clip-space (the coordinates after vertex transformations). And you have some notion of which part of the map you want to actually draw to the window. You need to transform the region of your map that you want to see such that the corners of this region appear in the corners of the clipping box (for orthographic projections, this is typically [-1, 1]).
In compatibility OpenGL, this might be done by using glOrtho to set up the orthographic projection for you. In a proper vertex shader, you'll need to provide an appropriate orthographic matrix.
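For instance, in compatibility-profile OpenGL the suggestion above might look like the sketch below (winW/winH are hypothetical window dimensions): the viewport spans the whole window so nothing gets cut off at the map edge, while the projection decides which world-space region fills the [-1, 1] NDC box.

```cpp
#include <GL/gl.h>

// Sketch, assuming compatibility-profile OpenGL. glViewport only sets the
// NDC-to-window transform; the glOrtho projection decides which world
// region ends up inside the NDC box that clipping applies to.
void SetupPixelProjection(int winW, int winH)
{
    glViewport(0, 0, winW, winH);               // full window, nothing cut off
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.0, winW, 0.0, winH, -1.0, 1.0);   // 1 world unit == 1 pixel
    glMatrixMode(GL_MODELVIEW);
}
```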

Why does the camera face the negative end of the z-axis by default?

I am learning OpenGL from Scratchapixel, and here is a quote from the perspective projection matrix chapter:
Cameras point along the world coordinate system's negative z-axis so that when a point is converted from world space to camera space (and then later from camera space to screen space), if the point is to the left of the world coordinate system's y-axis, it will also map to the left of the camera coordinate system's y-axis. In other words, we need the x-axis of the camera coordinate system to point to the right when the world coordinate system's x-axis also points to the right; and the only way you can get that configuration is by having the camera look down the negative z-axis.
I think it has something to do with a mirror image? But this explanation just confused me... why does the camera's coordinate system by default not coincide with the world coordinate system (like every other 3D object we create in OpenGL)? I mean, we will need to transform the camera with a transformation matrix anyway (whatever we want with the negative-z setup, we can simulate it)... why bother?
It is totally arbitrary which direction to pick for z.
But your pick has a lot of deep impact.
One reason to stick with the GL -z way is that the culling of faces will match GL constant names like GL_FRONT. I'd advise just to roll with the tutorial.
Flipping the sign on just one axis also flips the "parity", so a front face becomes a back face, and a zNear depth test becomes a zFar one. So it is wise to pick one convention early on and stick with it; see the sketch below.
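For example, if you do flip one axis (changing handedness), you can compensate for the reversed winding order instead of re-winding all your geometry. A minimal sketch:

```cpp
#include <GL/gl.h>

// Sketch: flipping one axis reverses winding order ("parity"), so faces
// that were counter-clockwise fronts become clockwise. Either re-wind the
// geometry or flip GL's notion of "front" accordingly.
glEnable(GL_CULL_FACE);
glFrontFace(GL_CW);   // default is GL_CCW
```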
By default, yes, it's a "right hand" system (used in physics, for example). Your thumb is the X-axis, your index finger the Y-axis, and when you point them in those directions, the Z-axis (your middle finger) points towards you. Why was the Z-axis chosen to point into/out of the screen? Because then the X- and Y-axes lie on the screen, like in 2D graphics.
But in reality, OpenGL has no preferred coordinate system. You can tweak it as you like. For example, if you are making a maze game, you might want Y to go into/out of the screen (and Z upwards), so that you can move nicely in the XY plane. You modify your view/perspective matrices, and you get it.
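A minimal sketch of such a tweak using GLU (SetMazeView and camX/camY/camZ are made-up names): the view matrix is set so the camera looks along +Y with +Z as "up", which suits movement in the XY plane.

```cpp
#include <GL/glu.h>

// Sketch: view setup for a maze-style game where +Z is up and the camera
// looks along +Y ("into the screen"), so movement happens in the XY plane.
void SetMazeView(double camX, double camY, double camZ)
{
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(camX, camY,       camZ,   // eye position
              camX, camY + 1.0, camZ,   // look towards +Y
              0.0,  0.0,        1.0);   // up vector is +Z
}
```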
What is this "camera" you're talking about? In OpenGL there is no such thing as a "camera". All you've got is a two-stage transformation chain:
vertex position → viewspace position (by modelview transform)
viewspace position → clipspace position (by projection transform)
To see why by default OpenGL is "looking down" -z, we have to look at what happens if both transformation steps do "nothing", i.e. are the full identity transform.
In that case, all vertex positions passed to OpenGL are unchanged: X maps to window width, Y maps to window height. All calculations in OpenGL by default (you can change that) have been chosen to adhere to the rules of a right-handed coordinate system, so if +X points right and +Y points up, then +Z must point "out of the screen" for the right-hand rule to be consistent.
And that's all there is about it. No camera. Just linear transformations and the choice of using right handed coordinates.
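To make that concrete, here is a minimal compatibility-profile sketch: with both matrices left at identity, a triangle at z = -0.5 sits inside the [-1, 1] clip cube, half-way "into" the screen along -z.

```cpp
#include <GL/gl.h>

// Sketch: both transforms left as identity, so clip-space == the raw
// vertex coordinates. +X is right, +Y is up; right-handedness makes +Z
// point out of the screen, so z = -0.5 lies "in front of" the viewer.
void DrawIdentityTriangle()
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glBegin(GL_TRIANGLES);
    glVertex3f(-0.5f, -0.5f, -0.5f);
    glVertex3f( 0.5f, -0.5f, -0.5f);
    glVertex3f( 0.0f,  0.5f, -0.5f);
    glEnd();
}
```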

How to find out how many units across the screen plane in OpenGL

How would one get the relative size of the viewing plane in OpenGL's own units? I need to find out the width and height in "OpenGL units". Is there a function which will retrieve this information?
I assume that one unit (let us say 1.0f) in Z would be equivalent to one unit in X, even if conversion to a real measurement system is meaningless.
I know I can get the screen size either by use of GetSystemMetrics(SM_CXSCREEN) or glutGet(GLUT_SCREEN_WIDTH), but this is in pixels.
To handle the graphical window calls, I am using freeglut on non-windows OSes and the WinAPI on Windows.
Assuming you want to draw something like a UI, set your projection matrix to an orthographic matrix with glOrtho; then you don't have any perspective, and you have a direct orthographic mapping between world coordinates and screen coordinates. The arguments to your glOrtho call determine how wide/high your viewport is in world coordinates.
If you want to draw both a UI and a 3D scene, draw the UI with glOrtho and draw the scene with gluPerspective, using a clipping mask (e.g. the scissor test) to make sure you don't ruin your UI.
If, on the other hand, you want to know the width of the viewport in a 3D scene with perspective, so that you know how big to draw your object, then you'll have to deal with the perspective projection. You need to know at which Z coordinate you want to know the width/height of the viewport. You can use gluUnProject to calculate the world coordinate corresponding to a given screen coordinate and Z plane.
However, it would probably be better to do it the other way around: always draw your object with a given size, and then calculate what your projection matrix should be to have that object appear properly in your viewport.
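A sketch of the gluUnProject approach (VisibleWidthAtDepth is a made-up helper name): unproject the left and right viewport edges at the same window depth and measure the distance between the resulting world points.

```cpp
#include <cmath>
#include <GL/glu.h>

// Sketch: width of the view volume in world units at window depth winZ
// (0.0 = near plane, 1.0 = far plane). Assumes the matrices for your
// 3D scene are current when this is called.
double VisibleWidthAtDepth(double winZ)
{
    GLdouble model[16], proj[16];
    GLint view[4];
    glGetDoublev(GL_MODELVIEW_MATRIX, model);
    glGetDoublev(GL_PROJECTION_MATRIX, proj);
    glGetIntegerv(GL_VIEWPORT, view);

    const double midY = view[1] + view[3] / 2.0;
    GLdouble lx, ly, lz, rx, ry, rz;
    gluUnProject(view[0],           midY, winZ, model, proj, view, &lx, &ly, &lz);
    gluUnProject(view[0] + view[2], midY, winZ, model, proj, view, &rx, &ry, &rz);

    return std::sqrt((rx - lx) * (rx - lx) +
                     (ry - ly) * (ry - ly) +
                     (rz - lz) * (rz - lz));
}
```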

Tracing a ray from the camera to the mouse pointer in GLUT

I'm not really sure if it makes sense, but I need to do it, and there is a chance there won't be any obstacle in front.
It's not clear what you mean by tracing, but:
1. You may be looking for gluUnProject to go from screen coordinates to space coordinates. With the help of the Z-buffer for the distance from the camera, you can get the coordinates of the 3D point which is seen at the specified pixel.
2. Or you want to draw a 3D line from the camera origin to some 3D point under the mouse cursor. Seen from the camera, that line is just a point at the mouse cursor.
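A minimal sketch of option 2 with GLUT and GLU (MouseRay, nearPt and farPt are made-up names): unproject the cursor pixel at the near and far planes to get the two endpoints of the ray through the mouse.

```cpp
#include <GL/glu.h>

// Sketch: build a camera-to-cursor ray by unprojecting the same pixel at
// winZ = 0 (near plane) and winZ = 1 (far plane). mouseX/mouseY come from
// a GLUT mouse callback; GLUT measures y from the top, OpenGL from the bottom.
void MouseRay(int mouseX, int mouseY, GLdouble nearPt[3], GLdouble farPt[3])
{
    GLdouble model[16], proj[16];
    GLint view[4];
    glGetDoublev(GL_MODELVIEW_MATRIX, model);
    glGetDoublev(GL_PROJECTION_MATRIX, proj);
    glGetIntegerv(GL_VIEWPORT, view);

    const GLdouble winY = view[3] - mouseY;   // flip y between conventions
    gluUnProject(mouseX, winY, 0.0, model, proj, view,
                 &nearPt[0], &nearPt[1], &nearPt[2]);
    gluUnProject(mouseX, winY, 1.0, model, proj, view,
                 &farPt[0], &farPt[1], &farPt[2]);
    // The ray starts at nearPt and points towards farPt.
}
```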

Algorithm to zoom into mouse (OpenGL)

I have an OpenGL scene with a top-left coordinate system. When I glScale, it zooms in from (0,0), the top left. I want it to zoom in from the mouse's coordinates (relative to the OGL frame). How is this done?
Thanks
I believe this can be done in four steps:
1. Find the mouse's x and y coordinates using whatever function your windowing system (i.e. GLUT or SDL) has for that, and use gluUnProject to get the object coordinates that correspond to those window coordinates
2. Translate by (x, y, 0) to put the origin at those coordinates
3. Scale by your desired vector (i, j, k)
4. Translate by (-x, -y, 0) to put the origin back at the top left
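A minimal sketch of steps 2-4 on the modelview matrix (ApplyZoomAtPoint, objX/objY and zoom are made-up names; objX/objY would come from step 1's gluUnProject):

```cpp
#include <GL/gl.h>

// Sketch: zoom about the point under the mouse. objX/objY are the object
// coordinates from gluUnProject (step 1); zoom is the scale factor.
void ApplyZoomAtPoint(GLfloat objX, GLfloat objY, GLfloat zoom)
{
    glMatrixMode(GL_MODELVIEW);
    glTranslatef(objX, objY, 0.0f);     // step 2: origin to the mouse point
    glScalef(zoom, zoom, 1.0f);         // step 3: scale about that point
    glTranslatef(-objX, -objY, 0.0f);   // step 4: origin back
}
```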
I did a smooth zoom-in using glOrtho. The skeleton of my solution is:
glOrtho(initial viewport x, y & size)
glCallList(my display list)
render
...
loop to gradually go to the final viewport coordinates/size; implement your timing and FPS requirements here
...
glOrtho(final viewport x, y & size)
glCallList(my display list)
render
I hope you get the general idea. There are a few other methods to achieve this, but I find the glOrtho method the easiest to comprehend; a sketch follows below.
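A sketch of that skeleton (Rect, AnimateZoom and nFrames are made-up names; frame pacing is left as a comment): the glOrtho window is linearly interpolated from the initial rectangle to the final one, re-rendering the display list each frame.

```cpp
#include <GL/gl.h>

// Hypothetical orthographic window: left/right/bottom/top in world units.
struct Rect { double left, right, bottom, top; };

// Sketch: interpolate the glOrtho window from 'from' to 'to' over nFrames
// frames (nFrames >= 1), redrawing the cached display list each time.
void AnimateZoom(const Rect& from, const Rect& to, int nFrames, GLuint scene)
{
    for (int i = 0; i <= nFrames; ++i) {
        const double t = static_cast<double>(i) / nFrames;  // 0 -> 1
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(from.left   + t * (to.left   - from.left),
                from.right  + t * (to.right  - from.right),
                from.bottom + t * (to.bottom - from.bottom),
                from.top    + t * (to.top    - from.top),
                -1.0, 1.0);
        glMatrixMode(GL_MODELVIEW);

        glClear(GL_COLOR_BUFFER_BIT);
        glCallList(scene);   // the "my display list" from the skeleton
        // swap buffers and pace the loop here (e.g. glutSwapBuffers + a timer)
    }
}
```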