OpenGL: fit a quad to the screen, given the value of Z

Short Version of the question:
I want to place a quad. I know the width and height of the screen in window coordinates, I know the Z-coordinate of the quad in 3D, I know the FOVY, and I know the aspect ratio. The quad will be placed along the Z-axis, and my camera doesn't move (it is placed at 0, 0, 0). I want to find out the width and height of the quad IN 3D COORDINATES that will make it fit exactly onto my screen.
Long Version of the question:
I would like to place a quad along the Z-axis at a specified offset Z, and I would like to find out the width and height of the quad that will exactly fill the entire screen.
I used to have a post on gamedev.net that uses a formula similar to the following:
dist = Z * tan(FOV / 2)
Now I can never find the post! Though that formula is similar, it is still different, because I remember that the working one also made use of screenWidth and screenHeight, the width and height of the screen in window coordinates.
I am not really familiar with concepts like frustum, FOV and aspect, so that's why I can't work out the formula on my own. Besides, I am sure I don't need gluUnProject (I tried it, but the results are way off). I'm not after GL calls; it's just a math formula that can find the width and height in 3D space that will fill the entire screen, if the Z offset and the width and height in window coordinates are known.

Assuming the FOV is the vertical field of view (measured in the Y-Z plane), then:
height = 2 * Z * tan(fov / 2)
width = height * aspect_ratio
The factor of 2 is needed because Z * tan(fov / 2) only gives the half-height, from the view axis up to the top edge of the screen.
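As a minimal sketch in C++ (assuming <cmath> with M_PI is available, fovy is in degrees as gluPerspective expects, and Z is the positive distance from the camera to the quad):
// Full extent, in world units, of a quad at distance Z that exactly fills the view.
double quadHeight = 2.0 * Z * tan(fovy * M_PI / 360.0);  // tan of half the vertical FOV, in radians
double quadWidth  = quadHeight * aspect;                 // aspect = screenWidth / (double)screenHeight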

Related

GPU mouse picking OpenGL/WebGL

I understand I need to render only a 1x1 or 3x3 pixel region of the screen where the mouse is, with object IDs as colors, and then read the ID back from the color.
I have implemented ray-cast picking with spheres, and I am guessing it has something to do with making the camera look in the direction of the mouse ray?
How do I render the correct few pixels?
Edit:
Setting the camera in the direction of the mouse ray works, but if I make the viewport smaller the picture scales, whereas what (I think) I need is for it to be cropped rather than scaled. How would I achieve this?
The easiest solution is to use the scissor test. It allows you to render only pixels within a specified rectangular sub-region of your window.
For example, to limit your rendering to a 3x3 pixel region centered at pixel (x, y):
glScissor(x - 1, y - 1, 3, 3);   // only pixels inside this 3x3 rectangle will be written
glEnable(GL_SCISSOR_TEST);
glDraw...(...);                  // issue your draw calls as usual
glDisable(GL_SCISSOR_TEST);
Note that the origin of the coordinate system is at the bottom left of the window, while most window systems will give you mouse coordinates in a coordinate system that has its origin at the top left. If that's the case on your system, you will have to invert the y-coordinate by subtracting it from windowHeight - 1.
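As an illustrative sketch of the whole picking pass (renderSceneWithIdColors() and windowHeight are placeholders for your own code, and the ID decoding depends on how you encoded it):
// mouseX/mouseY are the mouse coordinates from the window system (origin at the top left).
GLint x = mouseX;
GLint y = windowHeight - 1 - mouseY;          // flip y so the origin is at the bottom left
glScissor(x - 1, y - 1, 3, 3);                // only these 9 pixels can be written
glEnable(GL_SCISSOR_TEST);
renderSceneWithIdColors();                    // your draw calls, with object IDs encoded as colors
glDisable(GL_SCISSOR_TEST);
unsigned char pixel[4];
glReadPixels(x, y, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel);  // read the pixel under the mouse
// decode pixel[0..2] back into an object ID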

OpenGL render portion of screen to texture

I am trying to render a small region of the screen to an off-screen texture. This is part of a screenshot function in my app where the user selects a region on the screen and saves this to an image. While the region on the screen might be 250x250px, the saved image can be a lot larger like 1000x1000px.
I understand the process of rendering to a texture using an FBO. I'm mostly stuck when it comes to defining the projection matrix that clips the scene so that only the screenshot region is rendered.
I believe you can do this without changing the projection matrix. After all, if you think about it, you don't really want to change the projection. You want to change which part of the projected geometry gets mapped to your rendering surface. The coordinate system after projection is NDC (normalized device coordinates). The transform that controls how NDC is mapped to the rendering surface is the viewport transformation, which you control with the parameters to glViewport().
If you set the viewport dimensions to the size of your rendering surface, you map the NDC range of [-1.0, 1.0] to your rendering surface. To render a sub-range of that NDC range to your surface, you need to scale up the specified viewport size accordingly. Say to map 1/4 of your original image to the width of your surface, you set the viewport width to 4 times your surface width.
To map a sub-range of the standard NDC range to your surface, you will also need to adjust the origin of the viewport. The viewport origin values become negative in this case. Continuing the previous example, to map 1/4 of the original image starting in the middle of the image, the x-value of your viewport origin will be -2 times the surface width.
Here is what I came up with on how the viewport needs to be adjusted. Using the following definitions:
winWidth: width of original window
winHeight: height of original window
xMin: minimum x-value of zoomed region in original window coordinates
xMax: maximum x-value of zoomed region in original window coordinates
yMin: minimum y-value of zoomed region in original window coordinates
yMax: maximum y-value of zoomed region in original window coordinates
fboWidth: width of FBO you use for rendering zoomed region
fboHeight: height of FBO you use for rendering zoomed region
To avoid distortion, you will probably want to maintain the aspect ratio:
fboWidth / fboHeight = (xMax - xMin) / (yMax - yMin)
In all of the following, most of the operations (particularly the divisions) will have to be executed in floating point. Remember to use type casts if the original variables are integers, and round the results back to integer for the final results.
xZoom = winWidth / (xMax - xMin);
yZoom = winHeight / (yMax - yMin);
vpWidth = xZoom * fboWidth;
vpHeight = yZoom * fboHeight;
xVp = -(xMin / (xMax - xMin)) * fboWidth;
yVp = -(yMin / (yMax - yMin)) * fboHeight;
glViewport(xVp, yVp, vpWidth, vpHeight);
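As a sketch, the same computation wrapped into a helper (the float casts and the std::lround rounding are assumptions about how you want to handle the integer conversion):
#include <cmath>
#include <GL/gl.h>   // adjust to your platform's OpenGL header

// Sets a viewport that maps the window sub-region [xMin, xMax] x [yMin, yMax]
// onto an fboWidth x fboHeight rendering surface.
void setZoomViewport(int winWidth, int winHeight,
                     int xMin, int xMax, int yMin, int yMax,
                     int fboWidth, int fboHeight)
{
    float xZoom = (float)winWidth  / (float)(xMax - xMin);
    float yZoom = (float)winHeight / (float)(yMax - yMin);

    GLsizei vpWidth  = (GLsizei)std::lround(xZoom * fboWidth);
    GLsizei vpHeight = (GLsizei)std::lround(yZoom * fboHeight);

    GLint xVp = (GLint)std::lround(-((float)xMin / (float)(xMax - xMin)) * fboWidth);
    GLint yVp = (GLint)std::lround(-((float)yMin / (float)(yMax - yMin)) * fboHeight);

    glViewport(xVp, yVp, vpWidth, vpHeight);
}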
You might want to look into how gluPickMatrix works and replicate its functionality using modern OpenGL methods. You can find the gluPickMatrix source code in the implementation of Mesa3D.
While the original intent of gluPickMatrix was for selection mode rendering, it can be used for what you want to do as well.
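As a rough, fixed-function-style sketch of what that transform looks like (x, y, width and height describe the pick region in window coordinates, and viewport is what glGetIntegerv(GL_VIEWPORT, ...) returns):
// Restrict the projection to a width x height pixel region centered at (x, y),
// which is essentially what gluPickMatrix does.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glTranslatef((viewport[2] - 2.0f * (x - viewport[0])) / width,
             (viewport[3] - 2.0f * (y - viewport[1])) / height, 0.0f);
glScalef(viewport[2] / (float)width, viewport[3] / (float)height, 1.0f);
// ... then apply your regular projection (glFrustum / gluPerspective) on top of this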

OpenGl coordinate system is not at -1 to 1

I am creating a basic game in OpenGL and C++ and want to make it so that when the player gets to the edge of the screen they can't move any further. I am having trouble working out where the edge of the screen is. I know that windows normally have a coordinate system running from -1 to 1, but mine seems to be more like -0.63 to 0.63. The player is shown as a box on the screen which has an x, y, and z location, but it will only move in 2D space.
I want to change the bounds so that they are between -1 and 1, not an odd value.
How can I do this?
Code has been uploaded to http://pastebin.com/jxd5YhHa.
If you aren't going to be dynamically changing your projection matrix, the easiest thing to do would be to call
glScalef(0.63f, 0.63f, 1.0f);
on your projection matrix.
You can then restrict movement based on these values.
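For example, something along these lines (a sketch; exactly where it goes depends on how you set up your projection):
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
// ... your existing projection setup (glOrtho / gluPerspective / glFrustum) goes here ...
glScalef(0.63f, 0.63f, 1.0f);   // now world x/y in [-1, 1] reach the edges of the screen
glMatrixMode(GL_MODELVIEW);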
To compute the world space coordinates at any time you should make use of gluUnProject.
Assuming 'x' and 'y' are the width and height of your window respectively (the values you use to build the aspect ratio you pass to gluPerspective), you can find the world space coordinates like so:
double world_llx,world_lly,world_llz;
//world coordinates of lower left corner of window
gluUnProject(0, 0, 0, view_mat, proj_mat, viewport,&world_llx,&world_lly,&world_llz);
//world coordinate of upper right corner of window
double world_urx,world_ury,world_urz;
gluUnProject(x,y,0,view_mat,proj_mat,viewport,&world_urx,&world_ury,&world_urz);
view_mat is your view matrix. proj_mat is your projection matrix. You can get both of these using glGetDouble* with GL_MODELVIEW_MATRIX and GL_PROJECTION_MATRIX.
The viewport parameter will probably have the same dimensions as your window. In any event, this is what you set with glViewport.
This assumes the plane your player moves in lies at z == 0.
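For completeness, a sketch of how view_mat, proj_mat and viewport from the snippet above could be fetched:
GLdouble view_mat[16], proj_mat[16];
GLint viewport[4];
glGetDoublev(GL_MODELVIEW_MATRIX, view_mat);    // current modelview matrix
glGetDoublev(GL_PROJECTION_MATRIX, proj_mat);   // current projection matrix
glGetIntegerv(GL_VIEWPORT, viewport);           // x, y, width, height as set by glViewport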

How to scale 2d world with 3d rendering?

I'm using 3D mode to render my 2D game, because camera rotation and zooming in/out are much easier than in 2D mode.
Now I have run into a problem I can't seem to work out how to fix:
How do I make the 2D plane of my world fit the screen so that 1 texture pixel matches 1 pixel on my screen? In other words: how do I calculate the z-position of my camera to achieve this?
My texture coordinates run from 0 to 1, so I can see all the pixels of one tile with the GL_NEAREST texture filter mode.
My window is resizable in such a way that my tiles are always squares, but the visible area expands depending on how I resize the window.
Edit: my viewport uses a perspective projection, not an isometric one. But if it's not possible in perspective mode, I'm willing to change to isometric.
Use an orthographic projection that maps eye space units to pixels:
glViewport(0,0, width, height);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, width, 0, height, -1, 1);
Update due to question update:
A texel → viewport pixel match is possible with a perspective projection, but only under a certain constraint: the textured quad must be parallel to the near/far planes of the perspective frustum.
How to do it? For glFrustum(left, right, bottom, top, near, far), at Z = near the eye-space XY range [left, right]×[bottom, top] maps to the NDC xy range [-1, 1]², and the NDC range [-1, 1]² in turn maps to the viewport extents. So these are all affine transformations following the law
y(x) = to_lower_bound + (x - from_lower_bound) * (to_upper_bound - to_lower_bound) / (from_upper_bound - from_lower_bound)
All you have to do is map viewport → NDC → near plane, and if your quad's Z ≠ near, scale the result by Z / near (the quad has to grow the farther away it is).
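As a sketch with the symmetric gluPerspective parametrization (fovy in degrees, zNear the near plane distance; pixelWidth x pixelHeight is the on-screen size in pixels the quad should cover, viewportWidth x viewportHeight is the viewport size):
// Eye-space size of a quad at depth Z that covers pixelWidth x pixelHeight viewport pixels.
double nearHeight = 2.0 * zNear * tan(fovy * M_PI / 360.0);   // full height of the near plane
double nearWidth  = nearHeight * aspect;
double quadWidth  = (pixelWidth  / (double)viewportWidth)  * nearWidth  * (Z / zNear);
double quadHeight = (pixelHeight / (double)viewportHeight) * nearHeight * (Z / zNear);
For a 1:1 texel-to-pixel match, pixelWidth and pixelHeight would be the texture's size in texels.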

How to know the plane size in units?

Well, the thing is that I want to draw mazes with different widths and heights. I'm drawing them in units, and my question is: how can I get the viewable plane dimensions in units, so that I know how deep into the screen I have to draw my maze for it to be fully visible? For the perspective view I use "::gluPerspective(45.0f, (GLfloat)width / (GLfloat)height, 1.0f, 100.0f);"
For example, how do I get the near plane dimensions (width and height) in OpenGL units, or the far plane, or any plane between those planes? If I want to draw something entirely visible I need to know the plane dimensions in OpenGL units, or is there another way?
A bit of trigonometry will tell you that: h_near = 2*near*tan(fovy/2) and the same for far: h_far = 2*far*tan(fovy/2)
Then, multiplying by the aspect ratio gives you the width: w_near = h_near * aspect (and likewise for the far plane).
For the "proof", just consider the right-angled triangle formed by the line of view, the vertical on the rendering plane, and the line back to the eye. The length along the line of view is near or far (depending on the plane), the angle at the eye position is fovy/2 (i.e. half the view angle), and the vertical side on the plane is h_near/2 or h_far/2, since we only go halfway up the plane. The tangent of an angle in a right-angled triangle is the opposite side divided by the adjacent side, which gives the formulas above.