OpenGL Orthographic View App Windowed-to-Fullscreen

I'm working on a tile-based 2D OpenGL game (top down, 2D Zelda style), and I'm using an orthographic projection. I'd like to make this game both windowed and fullscreen compatible.
Am I better off creating scaling 2D drawing functions so that tile sizes can be scaled up when they're drawn in fullscreen mode, or should I just design the game fully around windowed mode and then scale up the ENTIRE drawing area whenever a player is in fullscreen mode?
In the end, I'm hoping to maintain the best looking texture quality when my tiles are re-scaled.
UPDATE/CLARIFICATION: Here's my concern (which may or may not even be a real problem): If I draw the entire windowed-view-sized canvas area and then scale it up, am I potentially scaling down originally 128x128 px textures to 64x64 px windowed-sized textures and then re-scaling them up again to 80x80 when I scale the entire game area up for a full screen view?
Other background info: I'm making this game using Java/LWJGL and I'm drawing with OpenGL 1.1 functions.

Don't scale the tiles just because your fullscreen window is larger than the normal one; instead, adjust the projection matrix.
The window gets larger? Enlarge the parameters used to construct the projection matrix.
If you want the tile size to stay proportional to the window size, don't change the projection matrix when the window size changes!
The key, if you haven't caught it yet, is the projection matrix: it is what projects vertices onto the viewport. Vertices are effectively "scaled" by it, letting you choose an appropriate unit system without worrying about scaling.
In a few words, an orthographic projection is specified by 4 parameters: left, right, bottom and top. Those parameters are nothing but the XY coordinates that get mapped to the window boundaries (really, the viewport boundaries).
Let's work through some concrete examples.
Example 1
Window size: 400x300
OpenGL vertex position (2D): 25x25
Orthographic projection matrix:
- Left= 0
- Right= 400
- Bottom= 0
- Top= 300
Model view matrix set to identity
--> Point will be drawn at 25x25 in window coordinates
Example 2
Window size: 800x600 (doubled w.r.t example 1)
OpenGL vertex position (2D): 25x25
Orthographic projection matrix:
- Left= 0
- Right= 400
- Bottom= 0
- Top= 300
Model view matrix set to identity
--> Point will be drawn at 50x50 in window coordinates. It is doubled because the orthographic projection hasn't changed.
Example 3
Let's say I want the vertex of example 1 to keep its position even when the aspect ratio of the window changes. The previous windows were 4:3; the window in this example is 16:9: it is wider! The trick here is to fix the ratio between (right - left) and (top - bottom).
Window size: 1600x900 (16:9, wider than example 1)
OpenGL vertex position (2D): 25x25
Orthographic projection matrix:
- Left= 0
- Right= 533.3 (300 * 16 / 9)
- Bottom= 0
- Top= 300
Model view matrix set to identity
--> Point will be drawn at the same relative position as in example 1, because both axes are now scaled uniformly by 3 (pixel 75x75, i.e. 25x25 in the units of example 1). With the projection matrix of example 1 it would instead appear horizontally stretched, at the equivalent of 33.3x25.
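The three examples can be reproduced with a small helper that applies the orthographic mapping from scene coordinates to window pixels (a plain-Java sketch with no OpenGL calls; the class and method names are mine):

```java
// Sketch: how an orthographic projection maps a scene coordinate to a
// window pixel. orthoToPixelX/Y mirror what glOrtho plus the viewport do.
public class OrthoMapping {
    // Map scene x in [left, right] to a pixel column in [0, windowWidth].
    static double orthoToPixelX(double x, double left, double right, double windowWidth) {
        return (x - left) / (right - left) * windowWidth;
    }

    // Map scene y in [bottom, top] to a pixel row in [0, windowHeight].
    static double orthoToPixelY(double y, double bottom, double top, double windowHeight) {
        return (y - bottom) / (top - bottom) * windowHeight;
    }

    public static void main(String[] args) {
        // Example 1: 400x300 window, ortho 0..400 -> vertex lands at pixel 25
        System.out.println(orthoToPixelX(25, 0, 400, 400));            // 25.0
        // Example 2: 800x600 window, same projection -> vertex lands at 50
        System.out.println(orthoToPixelX(25, 0, 400, 800));            // 50.0
        // Example 3: 1600x900 window, right = 300 * 16/9 -> uniform 3x scale
        System.out.println(orthoToPixelX(25, 0, 300.0 * 16 / 9, 1600)); // 75.0
    }
}
```

Note how example 3's wider "right" keeps the horizontal scale (1600 / 533.3 = 3) equal to the vertical scale (900 / 300 = 3), which is exactly the aspect-ratio trick described above.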

Related

How to draw a rectangle overlay with fixed aspect ratio that represents a render region?

I have a small custom ray tracer that I am integrating in an application. There is a resizable OpenGL window that represents the camera into the scene. I have a perspective matrix that adjusts the overall aspect ratio when the window resizes (basic setup).
Now I would like to draw a transparent rectangle over the window representing the width x height of the render so a user knows exactly what will be rendered. How could this be done? How can I place the rectangle accurately? The user can enter different output resolutions for the ray tracer.
If I understand your problem correctly, your overlay represents the new "screen" within your perspective frustum.
Then redefine a perspective matrix for the render, in which the overlay's 4 corners define the "near" projection plane.

OpenGL render portion of screen to texture

I am trying to render a small region of the screen to an off-screen texture. This is part of a screenshot function in my app where the user selects a region on the screen and saves this to an image. While the region on the screen might be 250x250px, the saved image can be a lot larger like 1000x1000px.
I understand the process of rendering to a texture using an FBO. I'm mostly stuck when it comes to defining the projection matrix that clips the scene so that only the screenshot region is rendered.
I believe you can do this without changing the projection matrix. After all, if you think about it, you don't really want to change the projection. You want to change which part of the projected geometry gets mapped to your rendering surface. The coordinate system after projection is NDC (normalized device coordinates). The transform that controls how NDC is mapped to the rendering surface is the viewport transformation. You control the viewport transformation through the parameters of glViewport().
If you set the viewport dimensions to the size of your rendering surface, you map the NDC range of [-1.0, 1.0] to your rendering surface. To render a sub-range of that NDC range to your surface, you need to scale up the specified viewport size accordingly. Say to map 1/4 of your original image to the width of your surface, you set the viewport width to 4 times your surface width.
To map a sub-range of the standard NDC range to your surface, you will also need to adjust the origin of the viewport. The viewport origin values become negative in this case. Continuing the previous example, to map 1/4 of the original image starting in the middle of the image, the x-value of your viewport origin will be -2 times the surface width.
Here is what I came up with on how the viewport needs to be adjusted. Using the following definitions:
winWidth: width of original window
winHeight: height of original window
xMin: minimum x-value of zoomed region in original window coordinates
xMax: maximum x-value of zoomed region in original window coordinates
yMin: minimum y-value of zoomed region in original window coordinates
yMax: maximum y-value of zoomed region in original window coordinates
fboWidth: width of FBO you use for rendering zoomed region
fboHeight: height of FBO you use for rendering zoomed region
To avoid distortion, you will probably want to maintain the aspect ratio:
fboWidth / fboHeight = (xMax - xMin) / (yMax - yMin)
In all of the following, most of the operations (particularly the divisions) will have to be executed in floating point. Remember to use type casts if the original variables are integers, and round the results back to integer for the final results.
xZoom = winWidth / (xMax - xMin);
yZoom = winHeight / (yMax - yMin);
vpWidth = xZoom * fboWidth;
vpHeight = yZoom * fboHeight;
xVp = -(xMin / (xMax - xMin)) * fboWidth;
yVp = -(yMin / (yMax - yMin)) * fboHeight;
glViewport(xVp, yVp, vpWidth, vpHeight);
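The formulas above can be checked with a small helper that computes the viewport rectangle (a plain-Java sketch; the actual glViewport() call is left as a comment, and the class name is mine):

```java
// Sketch: compute the glViewport() parameters that zoom the window region
// [xMin,xMax] x [yMin,yMax] to fill an FBO of size fboWidth x fboHeight.
public class ZoomViewport {
    static int[] zoomViewport(int winWidth, int winHeight,
                              int xMin, int xMax, int yMin, int yMax,
                              int fboWidth, int fboHeight) {
        // Divisions in floating point, rounded back to int at the end.
        double xZoom = (double) winWidth  / (xMax - xMin);
        double yZoom = (double) winHeight / (yMax - yMin);
        int vpWidth  = (int) Math.round(xZoom * fboWidth);
        int vpHeight = (int) Math.round(yZoom * fboHeight);
        int xVp = (int) Math.round(-((double) xMin / (xMax - xMin)) * fboWidth);
        int yVp = (int) Math.round(-((double) yMin / (yMax - yMin)) * fboHeight);
        // glViewport(xVp, yVp, vpWidth, vpHeight);
        return new int[] { xVp, yVp, vpWidth, vpHeight };
    }

    public static void main(String[] args) {
        // Zoom the right half of an 800x600 window into a 1000x750 FBO:
        int[] vp = zoomViewport(800, 600, 400, 800, 0, 600, 1000, 750);
        // Origin x = -1000, width = 2000: the left half falls outside the FBO.
        System.out.println(vp[0] + " " + vp[1] + " " + vp[2] + " " + vp[3]);
    }
}
```

In the usage example, the viewport is twice the FBO width and shifted left by one FBO width, so exactly the region [400,800] of the original window survives clipping to the FBO.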
You might want to look into how gluPickMatrix works and replicate its functionality using modern OpenGL methods. You can find the gluPickMatrix source code in the implementation of Mesa3D.
While the original intent of gluPickMatrix was for selection mode rendering, it can be used for what you want to do as well.

OpenGL: Size of a 3D bounding box on screen

I need a simple and fast way to find out how big a 3D bounding box appears on screen (for LOD calculation) by using OpenGL Modelview and Projection matrices and the OpenGL Viewport dimensions.
My first intention is to project all 8 box corners on screen by using gluProject() and calculate the area of the convex hull afterwards. This solution works only with bounding boxes that are fully within the view frustum.
But how can I get the covered area on screen for boxes that are not fully within the viewing volume? Imagine a box where 7 corners are behind the near plane and only one corner is in front of the near plane and thus within the view frustum.
I have found another very similar question Screen Projection and Culling united but it does not cover my problem.
What about using occlusion queries to count the samples that pass rendering?
See http://www.opengl.org/wiki/Query_Object and GL_SAMPLES_PASSED;
that way you can measure how many fragments are rendered and compare the counts for proper LOD selection.
Why not just manually multiply the world-view-projection matrix with the vertex positions? This will give you the vertices in "normalized device coordinates", where -1 is the bottom-left of the screen and +1 is the top-right.
The only thing is that if the projection is perspective, you have to divide your vertices by their 4th component, i.e. if the final vertex is (x,y,z,w) you divide by w.
Take for example a position vector
v = {x, 0, -z, 1}
Given a vertical viewing angle 'a' and an aspect ratio 'r', the x-coordinate x' in normalized device coordinates (range -1 to 1) is this (formula taken directly out of a graphics programming book):
x' = x * cot(a/2) / ( r * z )
So the perspective projection matrix for these parameters is as follows (shown in row-major format; z1 and z2 depend on the near and far planes):
cot(a/2) / r 0 0 0
0 cot(a/2) 0 0
0 0 z1 -1
0 0 z2 0
When you multiply your vector by the projection matrix (assuming the world and view matrices are identity in this example) you get the following (I'm only computing the new "x" and "w" values because only they matter in this example).
v' = { x * cot(a/2) / r, newY, newZ, z }
So finally when we divide the new vector by its fourth component we get
v' = { x * cot(a/2) / (r*z), newY/z, newZ/z, 1 }
So v'.x is now the screen-space (NDC) x coordinate of v. This is exactly what the graphics pipeline does to figure out where your vertex is on screen.
I've used this basic method before to figure out the size of geometry on screen. The nice part is that the math works regardless of whether the projection is perspective or orthographic, as long as you divide by the 4th component of the vector (for orthographic projections, the 4th component will be 1).
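The multiply-then-divide step can be sketched in plain Java (a hedged illustration: the field of view, aspect ratio, and test point are my own choices, and z1/z2 are placeholders since they don't affect x or w):

```java
// Sketch: project a view-space point to NDC by multiplying with a
// row-major perspective matrix and dividing by the 4th component w.
public class ProjectToNdc {
    // Row-vector times 4x4 matrix: out[c] = sum over r of v[r] * m[r][c].
    static double[] mul(double[] v, double[][] m) {
        double[] out = new double[4];
        for (int c = 0; c < 4; c++)
            for (int r = 0; r < 4; r++)
                out[c] += v[r] * m[r][c];
        return out;
    }

    public static void main(String[] args) {
        double a = Math.PI / 2;            // 90-degree vertical viewing angle
        double r = 16.0 / 9.0;             // aspect ratio
        double cot = 1.0 / Math.tan(a / 2);
        // z1, z2 depend on near/far; placeholders here (they don't affect x, w).
        double[][] proj = {
            { cot / r, 0,   0,  0 },
            { 0,       cot, 0,  0 },
            { 0,       0,   1, -1 },
            { 0,       0,   1,  0 }
        };
        double[] v = { 2, 0, -4, 1 };      // view-space point, 4 units in front
        double[] clip = mul(v, proj);
        double ndcX = clip[0] / clip[3];   // the perspective divide
        System.out.println(ndcX);          // x * cot(a/2) / (r * z) = 0.28125
    }
}
```

The printed value matches the book formula x' = x * cot(a/2) / (r * z) with x=2, z=4.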

OpenGL Viewport Transformation

I was wondering how OpenGL handles viewport transformation to the window.
As I understand it, the viewport transformation stretches the scene onto the OpenGL window.
Please correct me if I'm wrong.
After clipping and perspective divide, all remaining (visible) vertex coordinates x,y,z are between -1 and +1 -- these are called normalized device coordinates. These are mapped to device coordinates by the appropriate scale and shift -- i.e, the viewport transformation.
For example, if the viewport has size 1024x768 with a 16-bit depth buffer and the origin is (0,0), then points will be scaled by (512, 384, ~2^15) and shifted by (512, 384, ~2^15), yielding the appropriate pixel and depth values for the device.
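That scale-and-shift can be written out directly (a plain-Java sketch; the 16-bit depth maximum and glDepthRange(0, 1) defaults are assumptions for the example):

```java
// Sketch: the viewport transformation, mapping NDC in [-1, 1] to window
// pixel coordinates and a depth value.
public class ViewportTransform {
    static double[] ndcToWindow(double x, double y, double z,
                                double vpX, double vpY, double vpW, double vpH,
                                double depthMax) {
        double winX = vpX + (x + 1) * vpW / 2;   // scale by vpW/2, shift by vpX + vpW/2
        double winY = vpY + (y + 1) * vpH / 2;   // same for y
        double winZ = (z + 1) * depthMax / 2;    // assumes glDepthRange(0, 1)
        return new double[] { winX, winY, winZ };
    }

    public static void main(String[] args) {
        // 1024x768 viewport at origin (0,0), 16-bit depth buffer (max 65535):
        double[] w = ndcToWindow(0, 0, 0, 0, 0, 1024, 768, 65535);
        // NDC origin lands at the window center, half depth.
        System.out.println(w[0] + " " + w[1] + " " + w[2]); // 512.0 384.0 32767.5
    }
}
```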
http://www.songho.ca/opengl/gl_transform.html:
Window Coordinates (Screen Coordinates)
These are yielded by applying the viewport transformation to the normalized device coordinates (NDC). The NDC are scaled and translated in order to fit into the rendering screen. The window coordinates are finally passed to the rasterization process of the OpenGL pipeline to become fragments. The glViewport() command is used to define the rectangle of the rendering area where the final image is mapped, and glDepthRange() is used to determine the z range of the window coordinates. The window coordinates are computed from the parameters of these two functions.
Follow the link to see the math details.

OpenGL: 2D Vertex coordinates to 2D viewing coordinates?

I'm implementing a rasterizer for a class project, and currently I'm stuck on how I should convert vertex coordinates to viewing-pane coordinates.
I'm given a list of vertices with 2D coordinates for a triangle, like
0 0 1
2 0 1
0 1 1
and I'm drawing in a viewing pane (using OpenGL and GLUT) of size 400x400 pixels, for example.
My question is: how do I decide where in the viewing pane to put these vertices, assuming
1) I want the coordinates to be centered around (0,0) at the center of the screen
2) I want to fill up most of the screen (let's say, for this example, the screen is the maximum x coordinate + 1 lengths wide, etc.)
3) I have any and all of OpenGL's and GLUT's standard library functions at my disposal.
Thanks!
http://www.opengl.org/sdk/docs/man/xhtml/glOrtho.xml
To center around 0, use symmetric left/right and bottom/top. Beware the near/far values, which are somewhat arbitrary but are often chosen (in examples) as -1..+1; that might be a problem for your triangles at z=1.
If you care about the aspect ratio make sure that right-left and bottom-top are proportional to the window's width/height.
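One way to follow both pieces of advice is to compute symmetric, aspect-corrected glOrtho bounds from the scene's half-extent (a plain-Java sketch; the half-extent parameter and class name are my own, and the near/far choice is only noted in a comment):

```java
// Sketch: choose symmetric glOrtho bounds that center (0,0), fit a scene of
// half-extent `halfExtent`, and respect the window's aspect ratio.
public class OrthoBounds {
    // Returns { left, right, bottom, top } for glOrtho. Near/far are chosen
    // separately, e.g. -2..+2 so vertices at z=1 are not clipped.
    static double[] orthoBounds(double halfExtent, int winWidth, int winHeight) {
        double aspect = (double) winWidth / winHeight;
        if (aspect >= 1.0)   // wide window: extend left/right
            return new double[] { -halfExtent * aspect, halfExtent * aspect,
                                  -halfExtent, halfExtent };
        else                 // tall window: extend bottom/top
            return new double[] { -halfExtent, halfExtent,
                                  -halfExtent / aspect, halfExtent / aspect };
    }

    public static void main(String[] args) {
        // Triangle above has max coordinate 2, so half-extent 3 leaves a margin;
        // a square 400x400 window keeps the bounds symmetric and square.
        double[] b = orthoBounds(3, 400, 400);
        System.out.println(b[0] + " " + b[1] + " " + b[2] + " " + b[3]);
        // -3.0 3.0 -3.0 3.0
    }
}
```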
You should consider the frustum, which is your viewing volume, and calculate the coordinates by transforming your objects according to their position; this explains the theory quite thoroughly.
Basically, you have to project the objects using a projection matrix that is calculated based on the characteristics of your view:
scale them according to a z (depth) value: you scale both x and y inversely proportionally to z
then scale and shift the coordinates in order to fit the width of your view