Optimizing a render-to-texture process - C++

I'm developing a render-to-texture process that involves using several cameras which render an entire scene of geometry. The output of these cameras is then combined and mapped directly to the screen by converting each geometry's vertex coordinates to screen coordinates in a vertex shader (I'm using GLSL, here).
The process works fine, but I've realized a small problem: every RTT camera I create will create a texture the same dimensions as the screen output. That is, if my viewport is sized to 1024x1024, even if the geometry occupies a 256x256 section of the screen, each RTT camera will render the 256x256 geometry in a 1024x1024 texture.
The solution seems reasonably simple - adjust the RTT camera texture sizes to match the actual screen area the geometry occupies, but I'm not sure how to do that. That is, how can I (for example) determine that a geometry occupies a 256x256 area of a screen so that I can correspondingly set the RTT camera's output texture to 256x256 pixels?
The API I use (OpenSceneGraph) uses axis-aligned bounding boxes, so I'm out of luck there...
Thoughts?

Why out of luck? Can't you use the axis-aligned bounding box to compute the area?
My idea:
Take the 8 corner points of the bounding box and project them onto the image plane of the camera.
For the resulting 2D points on the image plane, you can again determine an axis-aligned 2D bounding box.
This should be a correct upper bound for the screen space the geometry can occupy.
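The projection step can be sketched in plain C++. This is only a sketch: it assumes you can obtain the camera's combined view-projection matrix (in OSG, multiplying `getViewMatrix()` by `getProjectionMatrix()` yields it), and the `Vec4`/`Mat4` types and `projectedExtent` helper here are minimal stand-ins, using column-major matrices with column vectors; adapt to OSG's row-vector convention as needed.

```cpp
#include <algorithm>
#include <array>
#include <cmath>

struct Vec4 { double x, y, z, w; };
using Mat4 = std::array<double, 16>; // column-major 4x4 matrix

// Minimal matrix * vector multiply (stand-in for osg::Matrixd math).
Vec4 mul(const Mat4& m, const Vec4& v) {
    return {
        m[0]*v.x + m[4]*v.y + m[8]*v.z  + m[12]*v.w,
        m[1]*v.x + m[5]*v.y + m[9]*v.z  + m[13]*v.w,
        m[2]*v.x + m[6]*v.y + m[10]*v.z + m[14]*v.w,
        m[3]*v.x + m[7]*v.y + m[11]*v.z + m[15]*v.w
    };
}

// Project the 8 corners of an axis-aligned bounding box through the
// camera's view-projection matrix and return the pixel dimensions of
// the screen-space bounding rectangle, clamped to the viewport.
void projectedExtent(const Mat4& viewProj,
                     double minX, double minY, double minZ,
                     double maxX, double maxY, double maxZ,
                     int viewportW, int viewportH,
                     int& outW, int& outH) {
    double loX = 1e30, loY = 1e30, hiX = -1e30, hiY = -1e30;
    for (int i = 0; i < 8; ++i) {
        Vec4 corner = { (i & 1) ? maxX : minX,
                        (i & 2) ? maxY : minY,
                        (i & 4) ? maxZ : minZ, 1.0 };
        Vec4 clip = mul(viewProj, corner);
        double ndcX = clip.x / clip.w;   // perspective divide
        double ndcY = clip.y / clip.w;
        loX = std::min(loX, ndcX); hiX = std::max(hiX, ndcX);
        loY = std::min(loY, ndcY); hiY = std::max(hiY, ndcY);
    }
    // NDC range [-1,1] maps to the full viewport; clamp and convert.
    loX = std::max(loX, -1.0); loY = std::max(loY, -1.0);
    hiX = std::min(hiX,  1.0); hiY = std::min(hiY,  1.0);
    outW = (int)std::ceil((hiX - loX) * 0.5 * viewportW);
    outH = (int)std::ceil((hiY - loY) * 0.5 * viewportH);
}
```

The resulting `outW` x `outH` would then be the size to give the RTT camera's texture. Note this breaks down for corners behind the camera (w <= 0), which would need clipping against the near plane first.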

Related

C++ OpenGL: Draw polygon + image processing interior pixels

I am using opengl and c++ doing image processing. The idea is simple, I will load an image, draw a polygon by clicking and then apply an effect (desaturation for instance) only to the pixels in the interior of the polygon shape just created.
Can anyone give me any direction on how to limit the effect to the pixels in the interior of the polygon? Loading the image and drawing the polygon are not a problem.
Suppose the following situation:
The picture on which you want to apply the effect takes up the whole screen.
The picture is rendered using OpenGL, probably through a simple shader, with the picture passed as a texture.
You can take the following approach:
consider the screen as being a big texture
you draw a polygon, which will be rendered on top of the rendered texture
in the polygon's vertices, store the UVs corresponding to the 2D coordinates on the screen (so mapping from screen space to UV space (0, 1))
draw the picture normally
on top of the picture, draw your polygon using the same picture as a texture, but with a different shader
So instead of trying to desaturate a specific region of your picture, create a polygon on top of that region with the same picture, and desaturate that new polygon.
This approach lets you avoid the stencil buffer.
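The "different shader" in the last step could be a small desaturation fragment shader. A minimal sketch, assuming GLSL 1.x-style built-ins and a `uv` varying filled from the screen-space UVs baked into the polygon's vertices:

```glsl
uniform sampler2D picture; // the same texture the full-screen picture uses
varying vec2 uv;           // screen-space UVs stored in the polygon's vertices

void main() {
    vec3 color = texture2D(picture, uv).rgb;
    // Rec. 601 luma weights give a perceptual grayscale value.
    float gray = dot(color, vec3(0.299, 0.587, 0.114));
    gl_FragColor = vec4(vec3(gray), 1.0);
}
```

Because the polygon samples the same texture at the same screen positions, the desaturated region lines up exactly with the picture underneath it.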
Another approach would be to create the polygon, but draw it only into the stencil buffer before the picture is drawn.
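The stencil variant could look roughly like this in legacy fixed-function GL (a sketch, assuming a context with a stencil buffer; `drawPolygon()` and `drawPictureWithDesaturationShader()` are hypothetical helpers for your own drawing code):

```cpp
glClear(GL_STENCIL_BUFFER_BIT);
glEnable(GL_STENCIL_TEST);

// Pass 1: write 1s into the stencil wherever the polygon covers the screen,
// without touching the color buffer.
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glStencilFunc(GL_ALWAYS, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
drawPolygon();                        // your clicked polygon

// Pass 2: draw the desaturated picture only where the stencil equals 1.
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glStencilFunc(GL_EQUAL, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
drawPictureWithDesaturationShader();  // full-screen picture, desaturating shader

glDisable(GL_STENCIL_TEST);
```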

2D clip in OpenGL

I have a question about clipping in OpenGL. I have a small viewport and I want to render one part of a large image. If I draw the large image like this:
glEnable(GL_TEXTURE_RECTANGLE);
glColor3f(1, 1, 1);
glBegin(GL_QUADS);
glTexCoord2f(0, height);
glVertex2f(0, 0);
glTexCoord2f(width, height);
glVertex2f(width, 0);
glTexCoord2f(width, 0);
glVertex2f(width, height);
glTexCoord2f(0, 0);
glVertex2f(0, height);
glEnd();
The width and height are the size of the image (texture), which is much larger than the viewport.
My question is: will OpenGL first clip the rectangle to the viewport and then draw, or render the whole image first and then clip?
It will first clip the geometry, then draw. More specifically, OpenGL does all the geometry processing before going down to render individual framebuffer elements (pixels). Here's a relevant quote from the 3.3 GL spec (Section 2.4):
The final resulting primitives are clipped to a viewing volume in preparation for the next stage, rasterization. The rasterizer produces a series of framebuffer addresses and values using a two-dimensional description of a point, line segment, or polygon.

Convert stack of 2d images into 3d image, volume rendering

I want to do a texture-based volume render of CT data. I have a stack of 2D CT images that I'd like to use as a 3D texture in OpenGL (JOGL, really). I have to do it the way with polygon proxy geometry that shifts when the viewing parameters change. How can I convert the 2D images into one 3D texture? I have not been able to find anything about how OpenGL expects 3D images to be formatted. I saw this: https://stackoverflow.com/questions/13509191/how-to-convert-2d-image-into-3d-image , but I don't think it's the same.
Also, I am still confused about this volume rendering technique. Is it possible to take a 3D location in the 3D texture and map it to a 2D corner of a quad? I found this example: http://www.felixgers.de/teaching/jogl/texture3D.html but I don't know if it means you have to use 3D vertices. Does anyone know of more sources with explicit examples?
See
http://www.opengl.org/registry/doc/glspec40.core.20100311.pdf
section 3.8.3, on defining 3D texture images.
This results in a 3D cube of texels, and yes, you can map a 3D location in this cube to a corner of a quad.
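Building the 3D texture from the 2D stack amounts to allocating one W x H x N volume and uploading each image as one depth layer. A sketch in C-style GL (the question uses JOGL, where the same entry points exist on the `GL2` object); `slices` is a hypothetical array holding the CT images as contiguous RGBA bytes:

```cpp
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_3D, tex);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// Allocate the full W x H x N volume (no data yet)...
glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA8, W, H, N, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, nullptr);

// ...then upload each 2D CT image as one depth layer of the volume.
for (int z = 0; z < N; ++z)
    glTexSubImage3D(GL_TEXTURE_3D, 0, 0, 0, z, W, H, 1,
                    GL_RGBA, GL_UNSIGNED_BYTE, slices[z]);
```

CT data is often single-channel; in that case `GL_LUMINANCE`/`GL_RED` formats would be the natural choice instead of RGBA.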
OpenGL does support a 3D texture format, where each texel is a small subvolume of a [0;1]^3 cube. When you texture a triangle or a quad with this texture, it is as if you cut a thin slice out of this volume. If you want a true volumetric rendering, you must write a volume raycaster. If you Google "GPU direct volume rendering" you should find plenty of tutorials.

Background image in OpenGL

I'm doing a 3D asteroids game in Windows (using OpenGL and GLUT) where you move through a bunch of obstacles in space and survive. I'm looking for a way to set an image background instead of the boring background color options. I'm new to OpenGL and all I can think of is to texture-map a sphere and give it a ridiculously large radius. What is the standard way of setting an image background in a 3D game?
The standard method is to draw two texture-mapped triangles whose coordinates are x,y = ±1, z = 0, w = 1, with both the camera and perspective matrices set to the identity matrix.
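In legacy fixed-function GL (which GLUT-era code typically uses), that could look roughly like the sketch below; `backgroundTex` is a hypothetical handle to your already-loaded background texture:

```cpp
// Set both matrices to identity so the triangles land directly in clip space.
glMatrixMode(GL_PROJECTION); glPushMatrix(); glLoadIdentity();
glMatrixMode(GL_MODELVIEW);  glPushMatrix(); glLoadIdentity();

glDepthMask(GL_FALSE);       // background must not occlude the scene
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, backgroundTex);

// Two triangles spanning the full screen: x,y in [-1, 1].
glBegin(GL_TRIANGLES);
    glTexCoord2f(0, 0); glVertex2f(-1, -1);
    glTexCoord2f(1, 0); glVertex2f( 1, -1);
    glTexCoord2f(1, 1); glVertex2f( 1,  1);
    glTexCoord2f(0, 0); glVertex2f(-1, -1);
    glTexCoord2f(1, 1); glVertex2f( 1,  1);
    glTexCoord2f(0, 1); glVertex2f(-1,  1);
glEnd();

glDepthMask(GL_TRUE);        // restore state for the rest of the scene
glMatrixMode(GL_PROJECTION); glPopMatrix();
glMatrixMode(GL_MODELVIEW);  glPopMatrix();
```

Draw this first each frame, then render the asteroids on top.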
Of course, in the context of a 'space' game, where one might want the background to rotate, the natural choice is to render a cube with a cubemap (perhaps showing galaxies). As depth buffering is turned off during the background rendering, the cube doesn't even have to be "infinitely" large. A unit cube will do, as there is no way to tell how close the camera is to the object.

Video as voxels in OpenGL

Any good references on displaying a sequence of images from a video as voxel data in OpenGL? I want to display all these images at once as a cuboid with 50% alpha and navigate using the keyboard or mouse.
Check out this tutorial on setting up a 3D texture.
If you then render slices through the texture volume with the appropriate UVW coordinates, you will get what you are after.
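The slicing pass could be sketched like this in legacy GL (assuming the 3D texture holding the video frames is already bound, with `N` slices drawn back to front along the view axis; a real implementation would orient the slices toward the camera):

```cpp
glEnable(GL_TEXTURE_3D);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glColor4f(1, 1, 1, 0.5f);             // the 50% alpha mentioned above

for (int i = 0; i < N; ++i) {
    float w = (i + 0.5f) / N;         // depth (W) coordinate into the 3D texture
    float z = -1.0f + 2.0f * w;       // slice position inside the cuboid
    glBegin(GL_QUADS);
        glTexCoord3f(0, 0, w); glVertex3f(-1, -1, z);
        glTexCoord3f(1, 0, w); glVertex3f( 1, -1, z);
        glTexCoord3f(1, 1, w); glVertex3f( 1,  1, z);
        glTexCoord3f(0, 1, w); glVertex3f(-1,  1, z);
    glEnd();
}
glDisable(GL_BLEND);
```

Each quad samples one video frame (one depth layer of the texture); blending the stack produces the translucent cuboid.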