I am trying to render a small region of the screen to an off-screen texture. This is part of a screenshot function in my app where the user selects a region on the screen and saves this to an image. While the region on the screen might be 250x250px, the saved image can be a lot larger like 1000x1000px.
I understand the process of rendering to a texture using an FBO. I'm mostly stuck when it comes to defining the projection matrix that clips the scene so that only the screenshot region is rendered.
I believe you can do this without changing the projection matrix. After all, if you think about it, you don't really want to change the projection. You want to change which part of the projected geometry gets mapped to your rendering surface. The coordinate system after projection is NDC (normalized device coordinates). The transform that controls how NDC is mapped to the rendering surface is the viewport transformation, which you control through the parameters of glViewport().
If you set the viewport dimensions to the size of your rendering surface, you map the NDC range of [-1.0, 1.0] to your rendering surface. To render a sub-range of that NDC range to your surface, you need to scale up the specified viewport size accordingly. Say to map 1/4 of your original image to the width of your surface, you set the viewport width to 4 times your surface width.
To map a sub-range of the standard NDC range to your surface, you will also need to adjust the origin of the viewport. The viewport origin values become negative in this case. Continuing the previous example, to map the quarter of the original image that starts in the middle of the image, the x-value of your viewport origin has to be -2 times the surface width.
Here is what I came up with on how the viewport needs to be adjusted. Using the following definitions:
winWidth: width of original window
winHeight: height of original window
xMin: minimum x-value of zoomed region in original window coordinates
xMax: maximum x-value of zoomed region in original window coordinates
yMin: minimum y-value of zoomed region in original window coordinates
yMax: maximum y-value of zoomed region in original window coordinates
fboWidth: width of FBO you use for rendering zoomed region
fboHeight: height of FBO you use for rendering zoomed region
To avoid distortion, you will probably want to maintain the aspect ratio:
fboWidth / fboHeight = (xMax - xMin) / (yMax - yMin)
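For example, given a chosen fboWidth, a matching fboHeight follows directly from that constraint (a sketch using the variables above; do the division in floating point, as noted below):
fboHeight = fboWidth * (yMax - yMin) / (xMax - xMin);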
In all of the following, most of the operations (particularly the divisions) will have to be executed in floating point. Remember to use type casts if the original variables are integers, and round the results back to integer for the final results.
float xZoom = (float)winWidth  / (float)(xMax - xMin);
float yZoom = (float)winHeight / (float)(yMax - yMin);
// Viewport size: scaled up so that only the zoomed region covers the FBO.
int vpWidth  = (int)roundf(xZoom * fboWidth);   // roundf() comes from <math.h>
int vpHeight = (int)roundf(yZoom * fboHeight);
// Viewport origin: negative, so that the zoomed region starts at the FBO origin.
int xVp = (int)roundf(-(float)xMin / (float)(xMax - xMin) * fboWidth);
int yVp = (int)roundf(-(float)yMin / (float)(yMax - yMin) * fboHeight);
glViewport(xVp, yVp, vpWidth, vpHeight);
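For completeness, here is roughly how the adjusted viewport plugs into the FBO pass (a sketch; fbo and drawScene() are placeholders for your own framebuffer object and draw code):
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glViewport(xVp, yVp, vpWidth, vpHeight);
drawScene();                             // same draw calls as the normal on-screen pass
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glViewport(0, 0, winWidth, winHeight);   // restore the viewport for on-screen rendering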
You might want to look into how gluPickMatrix works and replicate its functionality using modern OpenGL methods. You can find the gluPickMatrix source code in the implementation of Mesa3D.
While the original intent of gluPickMatrix was for selection mode rendering, it can be used for what you want to do as well.
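For reference, the Mesa version boils down to a translate and a scale applied on top of the current projection matrix. Here is a sketch of the equivalent math, written with the legacy matrix stack (x, y are the center of the picked region and deltax, deltay its size, all in window coordinates; in a modern pipeline you would bake the same translate/scale into your own projection matrix):
void pickRegionMatrix(GLdouble x, GLdouble y, GLdouble deltax, GLdouble deltay, const GLint viewport[4])
{
    if (deltax <= 0 || deltay <= 0)
        return;
    // Translate and scale the picked region so that it fills the whole viewport.
    glTranslated((viewport[2] - 2 * (x - viewport[0])) / deltax,
                 (viewport[3] - 2 * (y - viewport[1])) / deltay, 0.0);
    glScaled(viewport[2] / deltax, viewport[3] / deltay, 1.0);
}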
I understand that I need to render only a 1x1 or 3x3 pixel region of the screen where the mouse is, with object IDs encoded as colors, and then read the ID back from the color.
I have implemented ray-cast picking with spheres, and I am guessing this has something to do with making the camera look in the direction of the mouse ray?
How do I render only the correct few pixels?
Edit:
Setting the camera in the direction of the mouse ray works, but if I make the viewport smaller the picture scales down, whereas what (I think) I need is for it to be cropped rather than scaled. How would I achieve this?
The easiest solution is to use the scissor test. It allows you to render only pixels within a specified rectangular sub-region of your window.
For example, to limit your rendering to 3x3 pixels centered at pixel (x, y):
glScissor(x - 1, y - 1, 3, 3);
glEnable(GL_SCISSOR_TEST);
glDraw...(...);
glDisable(GL_SCISSOR_TEST);
Note that the origin of the coordinate system is at the bottom left of the window, while most window systems will give you mouse coordinates in a coordinate system that has its origin at the top left. If that's the case on your system, you will have to invert the y-coordinate by subtracting it from windowHeight - 1.
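Putting it together, a minimal color-ID picking pass could look roughly like this (a sketch; mouseX, mouseY, windowHeight and drawSceneWithIdColors() are placeholders for your own variables and draw code):
int x = mouseX;
int y = windowHeight - 1 - mouseY;    // flip y: mouse coordinates usually have a top-left origin
glScissor(x - 1, y - 1, 3, 3);
glEnable(GL_SCISSOR_TEST);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
drawSceneWithIdColors();              // every object rendered with its ID encoded as a flat color
glDisable(GL_SCISSOR_TEST);
unsigned char pixel[3];
glReadPixels(x, y, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, pixel);
int pickedId = pixel[0] | (pixel[1] << 8) | (pixel[2] << 16);   // decode the ID from the color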
I was wondering how OpenGL handles viewport transformation to the window.
As I understand it, the viewport transformation stretches the scene onto the OpenGL window.
Please correct me if I'm wrong.
After clipping and perspective divide, all remaining (visible) vertex coordinates x,y,z are between -1 and +1 -- these are called normalized device coordinates. These are mapped to device coordinates by the appropriate scale and shift -- i.e, the viewport transformation.
For example, if the viewport has size 1024x768 with a 16-bit depth buffer and the origin is (0,0), then the points are scaled by (512, 384, (2^16 - 1)/2) and shifted by the same amounts, yielding the appropriate pixel and depth values for the device.
http://www.songho.ca/opengl/gl_transform.html:
Window Coordinates (Screen Coordinates)
They are obtained by applying the viewport transformation to the normalized device coordinates (NDC). The NDC are scaled and translated in order to fit into the rendering screen. The window coordinates are finally passed to the rasterization process of the OpenGL pipeline to become a fragment. glViewport() is used to define the rectangle of the rendering area where the final image is mapped, and glDepthRange() is used to determine the z value of the window coordinates. The window coordinates are computed with the parameters of these two functions:
Follow the link to see the math details.
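In short, with glViewport(vx, vy, vw, vh) and glDepthRange(n, f) (default n = 0, f = 1), the mapping from NDC coordinates (xNdc, yNdc, zNdc) to window coordinates works out to:
float xw = 0.5f * vw * xNdc + vx + 0.5f * vw;
float yw = 0.5f * vh * yNdc + vy + 0.5f * vh;
float zw = 0.5f * (f - n) * zNdc + 0.5f * (f + n);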
I am currently using glutSolidSphere() to render a sphere. Of course, after scaling, the sphere appears to be an ellipsoid.
So is there any way to render a sphere with fixed pixel radius ? I just want to draw a sphere in a certain place (x,y,z) with a certain radius in pixels (eg, r = 10 pixels) and make it sure that its shape will not be affected by modeling transformation.
Transformations such as rotation, translation, and scaling should not affect the way a sphere looks. Just remember to scale all 3 axes by the same value. Or you can multiply the vertices by a constant scalar, which scales the sphere without distorting it. If you still see distortion, it might be because of your camera (a high FOV tends to distort near the edges) or a wrong aspect ratio (resizing an OpenGL window does not preserve the aspect ratio).
When you scale, you can scale in x, y, and z. If you scale with the same value in each dimension, it will stay a sphere.
If you want to apply a scaling that always gives the same size of the sphere, as measured in pixels, then you have to make a scaling based on the viewport size definition. These are the arguments you gave to glViewport().
For example, when scaling 'x', use factor k/width (where width is taken from glViewport). Choose constant 'k' as you want, depending on the size of the sphere.
It is possible to use glGet() to request the data that was set with glViewport(), but reading data back from OpenGL should be avoided; in the worst case, it will wait for the pipeline to flush. A better idea is to remember what you passed to glViewport().
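For example, with an orthographic projection you could derive the scale from the remembered viewport like this (a sketch; left and right are your glOrtho() parameters, viewportWidth is the width you passed to glViewport(), and desiredPixelRadius is whatever on-screen radius you want; with a perspective projection the result would additionally depend on the distance to the camera):
float unitsPerPixelX = (right - left) / (float)viewportWidth;
float scale = desiredPixelRadius * unitsPerPixelX;   // world-space radius that maps to the wanted pixel radius
glScalef(scale, scale, scale);                       // uniform scale, so it stays a sphere
glutSolidSphere(1.0, 32, 32);                        // unit sphere, now drawn with a fixed pixel radius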
You do realize that GLUT is old and no longer recommended? And that glutSolidSphere() is based on the deprecated fixed-function OpenGL pipeline?
I'm working on a tile-based 2D OpenGL game (top down, 2D Zelda style), and I'm using an orthographic projection. I'd like to make this game both windowed and fullscreen compatible.
Am I better off creating scaling 2D drawing functions so that tile sizes can be scaled up when they're drawn in fullscreen mode, or should I just design the game fully around windowed mode and then scale up the ENTIRE drawing area whenever a player is in fullscreen mode?
In the end, I'm hoping to maintain the best looking texture quality when my tiles are re-scaled.
UPDATE/CLARIFICATION: Here's my concern (which may or may not even be a real problem): If I draw the entire windowed-view-sized canvas area and then scale it up, am I potentially scaling down originally 128x128 px textures to 64x64 px windowed-sized textures and then re-scaling them up again to 80x80 when I scale the entire game area up for a full screen view?
Other background info: I'm making this game using Java/LWJGL and I'm drawing with OpenGL 1.1 functions.
Don't scale the tiles just because your fullscreen window is larger than the normal one; adjust the projection matrix instead.
The window got larger? Enlarge the parameters used to construct the projection matrix (tiles then keep their on-screen size and you simply see more of the world).
If instead you want the tile size to stay proportional to the window size, don't change the projection matrix when the window size changes!
The key, if you haven't caught on yet, is the projection matrix: it is what projects the vertices onto the viewport, effectively "scaling" them, which lets you choose a convenient unit system without worrying about scaling yourself.
In a few words, the orthographic projection is specified by 4 parameters: left, right, bottom and top. These parameters are simply the XY coordinates that get mapped to the boundaries of the window (more precisely, of the viewport).
Let do some concrete example.
Example 1
Window size: 400x300
OpenGL vertex position (2D): 25x25
Orthographic projection matrix:
- Left= 0
- Right= 400
- Bottom= 0
- Top= 300
Model view matrix set to identity
--> Point will be drawn at 25x25 in window coordinates
Example 2
Window size: 800x600 (doubled w.r.t example 1)
OpenGL vertex position (2D): 25x25
Orthographic projection matrix:
- Left= 0
- Right= 400
- Bottom= 0
- Top= 300
Model view matrix set to identity
--> Point will be drawn at 50x50 in window coordinates. It is doubled because the orthographic projection hasn't changed.
Example 3
Let's say I want the vertex from example 1 to keep its relative position even when the aspect ratio of the window changes. The previous windows were 4:3; the window in this example is 16:9, i.e. wider. The trick here is to fix the ratio between (right - left) and (top - bottom) to match the window's aspect ratio.
Window size: 1600x900 (16:9, larger than example 1)
OpenGL vertex position (2D): 25x25
Orthographic projection matrix:
- Left= 0
- Right= 533 (300 * 16 / 9)
- Bottom= 0
- Top= 300
Model view matrix set to identity
--> Point will be drawn at 75x75 in window coordinates, which is 25x25 when measured in example 1's units (this window is 3 times taller). If we had kept the projection matrix of example 1, it would land at 100x75 in window coordinates, i.e. 33.3x25 in example 1's units: stretched horizontally.
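To make this concrete, a resize handler in the style of example 3 could look roughly like this (a sketch in fixed-function OpenGL, matching the OpenGL 1.1 setup mentioned in the question; the fixed vertical extent of 300 units is just the arbitrary choice from the examples):
void onResize(int width, int height)
{
    float aspect = (float)width / (float)height;
    glViewport(0, 0, width, height);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    // Keep 300 world units vertically; derive the horizontal extent from the aspect
    // ratio, so tiles are never stretched; you simply see more or less of the world.
    glOrtho(0.0, 300.0 * aspect, 0.0, 300.0, -1.0, 1.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}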
Short Version of the question:
I want to place a quad. I know the width and height of the screen in window coordinates, I know the Z-coordinate of the quad in 3D, I know the FOVY, and I know the aspect ratio. The quad will be placed along the Z-axis, and my camera doesn't move (it is placed at 0, 0, 0). I want to find out the width and height of the quad IN 3D COORDINATES that will fit exactly onto my screen.
Long Version of the question:
I would like to place a quad along the Z-axis at a specified offset Z, and find out the width and height of the quad that will exactly fill the entire screen.
I used to have a post on gamedev.net that uses a formula similar to the following:
dist = Z * tan(FOV / 2)
Now I can never find the post! Though it's similar, it is still different, because I remembered in that working formula, they do make use of screenWidth and screenHeight, which are the width and height of the screen in window coordinates.
I am not really familiar with concepts like frustum, fov and aspect so that's why I can't work out the formula on my own. Besides, I am sure I don't need gluUnproject (I tried it, but the results are way off). It's not some gl calls, it's just a math formula that can find out the width and height in 3D space that can fill the entire screen, IF Z offset, width in window coordinates, and height in window coordinates, are known.
Assuming the FOV is the vertical field of view (measured in the Y-Z plane), then at distance Z in front of the camera:
height = 2 * Z * tan(fovy / 2)
width = height * aspect_ratio
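As a sketch (fovy in radians; aspect = screen width / screen height in window coordinates, which is the only way the window dimensions enter the formula; z = distance of the quad in front of the camera):
#include <math.h>

/* Full size, in world units, of a quad at distance z that exactly fills the screen. */
void screenFillingQuadSize(float fovyRadians, float aspect, float z,
                           float *quadWidth, float *quadHeight)
{
    *quadHeight = 2.0f * z * tanf(fovyRadians * 0.5f);
    *quadWidth  = *quadHeight * aspect;
}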