Video as voxels in OpenGL

Any good references on displaying a sequence of images from a video as voxel data in OpenGL? I want to display all these images at once as a cuboid with 50% alpha and navigate around it using the keyboard or mouse.

Check out this tutorial on setting up a 3D texture.
If you then render slices through that texture with the appropriate UVW coordinates, you will get what you are after.
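For illustration, here is a minimal fixed-function sketch of that slicing approach, assuming the video frames have already been uploaded as an RGBA 3D texture (one frame per Z-slice, e.g. via glTexImage3D/glTexSubImage3D). The function and parameter names are placeholders; the 50% alpha is the value the question asks for.

```cpp
#include <GL/glew.h>   // or any loader/header that exposes GL_TEXTURE_3D

// Draws 'sliceCount' axis-aligned quads through a unit cuboid, each sampling
// one W-slice of the 3D texture. For correct blending the slices should be
// drawn back to front relative to the camera; here they are simply drawn in
// increasing Z, which is assumed to be the far-to-near order.
void drawVolumeSlices(GLuint videoTexture, int sliceCount)
{
    glEnable(GL_TEXTURE_3D);
    glBindTexture(GL_TEXTURE_3D, videoTexture);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glDepthMask(GL_FALSE);                  // translucent slices: no depth writes
    glColor4f(1.0f, 1.0f, 1.0f, 0.5f);      // 50% alpha

    for (int i = 0; i < sliceCount; ++i) {
        float w = (i + 0.5f) / sliceCount;  // W coordinate = position in the video
        float z = w * 2.0f - 1.0f;          // object-space Z of this slice
        glBegin(GL_QUADS);
        glTexCoord3f(0.0f, 0.0f, w); glVertex3f(-1.0f, -1.0f, z);
        glTexCoord3f(1.0f, 0.0f, w); glVertex3f( 1.0f, -1.0f, z);
        glTexCoord3f(1.0f, 1.0f, w); glVertex3f( 1.0f,  1.0f, z);
        glTexCoord3f(0.0f, 1.0f, w); glVertex3f(-1.0f,  1.0f, z);
        glEnd();
    }

    glDepthMask(GL_TRUE);
    glDisable(GL_BLEND);
    glDisable(GL_TEXTURE_3D);
}
```

The keyboard/mouse navigation then reduces to updating the modelview matrix before calling this function each frame.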

Related

How to use a complex OpenGL image as background in a QGraphicsScene?

I'm trying to create a display with a complex OpenGL image and some spinboxes on top of the image. Using http://doc.qt.digia.com/qq/qq26-openglcanvas.html I'm able to build a two-layer object (inheriting from QGraphicsScene) with a simple OpenGL image as background and the controls in the foreground.
So, now I'm trying to display my true OpenGL image as background. This image is created by:
A quad mapped onto a structure,
Some small 2D objects, represented by 2D textures with an alpha channel and specific shaders, drawn on top of the quad (higher z value),
Some polylines.
With this image I get some strange behavior: the 2D textured objects are drawn with a white background. Some experiments seem to indicate that, while this complex OpenGL image is being drawn, the alpha channel is disabled.
I tried different configurations for the QGLWidget used as the viewport of the QGraphicsView, but without success.
So I need help creating this OpenGL image with the right transparency effects.
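For reference, here is a minimal sketch (not the asker's actual code) of the kind of setup the question describes: a QGraphicsView whose viewport is a QGLWidget, with the raw OpenGL drawing done in QGraphicsScene::drawBackground(). The explicit glEnable(GL_BLEND) after beginNativePainting() reflects the question's suspicion that alpha blending gets lost along this path; it is an assumption, not a confirmed fix, and drawMyComplexImage() is a placeholder.

```cpp
#include <QApplication>
#include <QGraphicsScene>
#include <QGraphicsView>
#include <QGLWidget>
#include <QPainter>

class GLScene : public QGraphicsScene
{
protected:
    void drawBackground(QPainter *painter, const QRectF &) override
    {
        painter->beginNativePainting();   // hand the context over to raw GL

        glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        // QPainter may have changed GL state; restore what the textured
        // 2D objects rely on, in particular alpha blending.
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

        // drawMyComplexImage();          // quad + textured 2D objects + polylines

        painter->endNativePainting();
    }
};

int main(int argc, char **argv)
{
    QApplication app(argc, argv);

    QGraphicsView view;
    // An alpha channel in the GL format is needed for the textures' transparency.
    view.setViewport(new QGLWidget(QGLFormat(QGL::SampleBuffers | QGL::AlphaChannel)));
    view.setViewportUpdateMode(QGraphicsView::FullViewportUpdate);

    GLScene scene;
    // scene.addWidget(...);              // the spinboxes in the foreground layer
    view.setScene(&scene);
    view.show();
    return app.exec();
}
```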

OpenGL: selecting an area on a model

I need some help with selecting a surface area on a 3D model rendered in OpenGL by picking points with the mouse. I know how to get a point in world coordinates, but I can't find a way to select an area. Later I need to remesh that selected area and map an image over it, which I know how to do.
Well, OpenGL by itself can't help you there. OpenGL is a drawing API. You draw things, but once the drawing commands have been executed all that's left are pixels in a framebuffer, and OpenGL has no recollection of the geometry whatsoever.
You can use OpenGL to implement image-based area selection algorithms, for example by drawing each face with a unique index color into an off-screen framebuffer. Then, by looking at which values appear within the selected region, you know which faces are present in that area.
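As a concrete sketch of that index-color idea (the function and the drawFace() callback are placeholders, and the scene is assumed to be rendered into an off-screen framebuffer that is already bound):

```cpp
#include <GL/glew.h>
#include <cstdint>
#include <set>
#include <vector>

// Renders every face in a unique color, reads back the selection rectangle
// (x, y, w, h in window coordinates) and returns the indices of the faces
// whose pixels appear in it. Works for up to 2^24 - 1 faces.
std::set<std::uint32_t> pickFaces(int x, int y, int w, int h,
                                  int faceCount,
                                  void (*drawFace)(int faceIndex))
{
    glDisable(GL_LIGHTING);                 // colors must reach the buffer unmodified
    glDisable(GL_BLEND);
    glDisable(GL_TEXTURE_2D);
    glClearColor(1.0f, 1.0f, 1.0f, 1.0f);   // white = "no face"
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    for (int i = 0; i < faceCount; ++i) {
        // Encode the face index into the RGB channels.
        glColor3ub( i        & 0xFF,
                   (i >>  8) & 0xFF,
                   (i >> 16) & 0xFF);
        drawFace(i);
    }

    std::vector<std::uint8_t> pixels(static_cast<std::size_t>(w) * h * 3);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    glReadPixels(x, y, w, h, GL_RGB, GL_UNSIGNED_BYTE, pixels.data());

    std::set<std::uint32_t> hit;
    for (std::size_t p = 0; p < pixels.size(); p += 3) {
        std::uint32_t id = pixels[p] | (pixels[p + 1] << 8) | (pixels[p + 2] << 16);
        if (id != 0xFFFFFF)                 // skip the "no face" background
            hit.insert(id);
    }
    return hit;
}
```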
Later I need to remesh
This is called topology modification and is completely outside the scope of OpenGL.
that selected area and map an image over it, which I know how to do
You can use an image-based approach for this again; however, you must first decide how you want to map images onto faces. If you want to unwrap the mesh, OpenGL is of no help. However, if you want the user to be able to "directly draw" onto the mesh, this can be done by rendering the texture coordinates into another off-screen framebuffer, thereby reverse-mapping screen coordinates to texture coordinates.
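A sketch of that second idea, with the shaders written as string literals (the surrounding program and framebuffer setup is assumed, and all names are illustrative): render the mesh once with a program that writes its interpolated texture coordinates into the color channels of an off-screen buffer, then a single glReadPixels() at the cursor position tells you which UV the user is pointing at.

```cpp
// Pass the mesh's texture coordinates through to the fragment stage.
const char* uvVertexShader = R"(
    #version 120
    varying vec2 uv;
    void main() {
        uv = gl_MultiTexCoord0.xy;
        gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    }
)";

// Store the interpolated texture coordinate in the color channels.
// A float color attachment avoids 8-bit quantization of the UVs.
const char* uvFragmentShader = R"(
    #version 120
    varying vec2 uv;
    void main() {
        gl_FragColor = vec4(uv, 0.0, 1.0);
    }
)";

// After rendering the mesh with this program into the off-screen buffer:
//     float uv[3];
//     glReadPixels(mouseX, windowHeight - mouseY, 1, 1, GL_RGB, GL_FLOAT, uv);
// uv[0]/uv[1] now hold the texture coordinate under the cursor, i.e. the spot
// in the mesh's texture where the user's "direct draw" stroke should land.
// (GL's window origin is at the bottom-left, hence the Y flip.)
```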

Convert stack of 2d images into 3d image, volume rendering

I want to do a texture-based volume render of CT data. I have a stack of 2D CT images that I'd like to use as a 3D texture in OpenGL (JOGL, really). I have to do it the way that uses polygon proxy geometry which shifts when the viewing parameters change. How can I convert the 2D images into one 3D texture? I have not been able to find anything about how OpenGL expects 3D images to be formatted. I saw this: https://stackoverflow.com/questions/13509191/how-to-convert-2d-image-into-3d-image , but I don't think it's the same thing.
Also, I am still confused about this volume rendering technique. Is it possible to take a 3D location in the 3D texture and map it to a 2D corner of a quad? I found this example: http://www.felixgers.de/teaching/jogl/texture3D.html but I don't know if it means you have to use 3D vertices. Does anyone know more sources with explicit examples?
See
http://www.opengl.org/registry/doc/glspec40.core.20100311.pdf
section 3.8.3, on defining 3D texture images.
This results in a 3D cube of texels, and yes, you can map a 3D location in this cube to a corner of a quad.
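To make the layout concrete: the spec expects the texel data as one contiguous block, slice after slice, each slice stored row by row exactly like a 2D image, so "converting" the stack is just concatenating the decoded slices in order. A hedged C++ sketch follows (the question uses JOGL, but the call and the layout are the same; the names and the single-channel format are assumptions):

```cpp
#include <GL/glew.h>
#include <cstdint>
#include <vector>

// Each slice holds width*height single-channel bytes (e.g. CT intensities).
GLuint buildVolumeTexture(const std::vector<std::vector<std::uint8_t>>& slices,
                          int width, int height)
{
    const int depth = static_cast<int>(slices.size());

    // Concatenate: texel (x, y, z) ends up at index z*width*height + y*width + x.
    std::vector<std::uint8_t> volume;
    volume.reserve(static_cast<std::size_t>(width) * height * depth);
    for (const auto& s : slices)
        volume.insert(volume.end(), s.begin(), s.end());

    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_3D, tex);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glTexImage3D(GL_TEXTURE_3D, 0, GL_R8,   // GL_LUMINANCE8 on pre-3.0 contexts
                 width, height, depth, 0,
                 GL_RED, GL_UNSIGNED_BYTE, volume.data());
    return tex;
}
```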
OpenGL does have a 3D texture format, where each texel is a small subvolume of a [0;1]^3 cube. When you texture a triangle or a quad with this texture, it is as if you cut a thin slice out of that volume. If you want a true volumetric rendering, you must write a volume raycaster. If you Google "GPU direct volume rendering" you should find plenty of tutorials.
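To make the raycasting idea concrete, here is a heavily compressed sketch of the kind of fragment shader such a raycaster runs over the volume's bounding cube. The uniform and variable names are illustrative; the ray entry and exit points are assumed to be supplied by the application, for example by rasterizing the cube's front and back faces.

```cpp
const char* raycastFragmentShader = R"(
    #version 130
    uniform sampler3D volume;
    in  vec3 entryPoint;   // where the view ray enters the [0,1]^3 cube
    in  vec3 exitPoint;    // where it leaves
    out vec4 fragColor;

    void main() {
        const int steps = 256;
        vec3 dir   = exitPoint - entryPoint;
        vec3 delta = dir / float(steps);
        vec3 pos   = entryPoint;
        vec4 accum = vec4(0.0);

        for (int i = 0; i < steps && accum.a < 0.99; ++i) {
            float density = texture(volume, pos).r;
            vec4  src     = vec4(density);            // trivial transfer function
            // Front-to-back compositing.
            accum.rgb += (1.0 - accum.a) * src.a * src.rgb;
            accum.a   += (1.0 - accum.a) * src.a;
            pos += delta;
        }
        fragColor = accum;
    }
)";
```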

Background image in OpenGL

I'm doing a 3D asteroids game in Windows (using OpenGL and GLUT) where you move through a bunch of obstacles in space and survive. I'm looking for a way to set an image background instead of the boring background-color options. I'm new to OpenGL and all I can think of is to texture-map a sphere and give it a ridiculously large radius. What is the standard way of setting an image background in a 3D game?
The standard method is to draw two texture-mapped triangles whose coordinates are x,y = ±1, z = 0, w = 1, and where both the camera and perspective matrices are set to the identity matrix.
Of course, in the context of a 'space' game where one might want the background to rotate with the view, the natural choice is to render a cube with a cubemap (perhaps showing galaxies). Since depth buffering is turned off while the background is rendered, the cube doesn't even have to be "infinitely" large. A unit cube will do, as there is no way to tell how close the camera is to the object.
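A minimal fixed-function sketch of the two-triangle approach (fitting the GLUT setup in the question; backgroundTexture is assumed to be loaded elsewhere, and the function name is a placeholder). Call it at the start of each frame, before drawing the scene:

```cpp
#include <GL/gl.h>

void drawBackground(GLuint backgroundTexture)
{
    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();                 // perspective matrix = identity
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadIdentity();                 // camera matrix = identity

    glDisable(GL_DEPTH_TEST);         // background never occludes the scene
    glDepthMask(GL_FALSE);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, backgroundTexture);

    glBegin(GL_TRIANGLES);            // two triangles spanning x,y = +-1 at z = 0, w = 1
    glTexCoord2f(0, 0); glVertex4f(-1, -1, 0, 1);
    glTexCoord2f(1, 0); glVertex4f( 1, -1, 0, 1);
    glTexCoord2f(1, 1); glVertex4f( 1,  1, 0, 1);

    glTexCoord2f(0, 0); glVertex4f(-1, -1, 0, 1);
    glTexCoord2f(1, 1); glVertex4f( 1,  1, 0, 1);
    glTexCoord2f(0, 1); glVertex4f(-1,  1, 0, 1);
    glEnd();

    glDepthMask(GL_TRUE);
    glEnable(GL_DEPTH_TEST);
    glMatrixMode(GL_PROJECTION);
    glPopMatrix();
    glMatrixMode(GL_MODELVIEW);
    glPopMatrix();
}
```

For the rotating skybox variant, the same idea applies with a unit cube, GL_TEXTURE_CUBE_MAP, and only the camera's rotation kept in the modelview matrix.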

Optimizing a render-to-texture process

I'm developing a render-to-texture process that involves using several cameras which render an entire scene of geometry. The output of these cameras is then combined and mapped directly to the screen by converting each geometry's vertex coordinates to screen coordinates in a vertex shader (I'm using GLSL, here).
The process works fine, but I've realized a small problem: every RTT camera I create will create a texture the same dimensions as the screen output. That is, if my viewport is sized to 1024x1024, even if the geometry occupies a 256x256 section of the screen, each RTT camera will render the 256x256 geometry in a 1024x1024 texture.
The solution seems reasonably simple: adjust each RTT camera's texture size to match the actual screen area the geometry occupies. But I'm not sure how to do that. That is, how can I determine (for example) that a geometry occupies a 256x256 area of the screen, so that I can set the RTT camera's output texture to 256x256 pixels?
The API I use (OpenSceneGraph) uses axis-aligned bounding boxes, so I'm out of luck there.
Thoughts?
Why out of luck? Can't you use the axis-aligned bounding box to compute the area?
My idea:
Take the 8 corner points of the bounding box and project them onto the image plane of the camera.
For the resulting 2D points on the image plane you can determine an axis-aligned 2D bounding box again.
This should be a correct upper bound for the screen space the geometry can occupy.
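A plain C++ sketch of that suggestion, kept independent of the scene-graph API (in OpenSceneGraph the matrices would come from the RTT camera's view and projection matrices). The Mat4/Vec types, the row-major layout with column vectors, and the function names are assumptions; corners behind the camera (w <= 0) would need clipping first, which this sketch ignores.

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <limits>
#include <utility>

struct Vec3 { double x, y, z; };
struct Vec4 { double x, y, z, w; };
using Mat4 = std::array<std::array<double, 4>, 4>;   // row-major, column vectors

// clip = M * v, treating v as a column vector.
static Vec4 mul(const Mat4& m, const Vec4& v)
{
    return { m[0][0]*v.x + m[0][1]*v.y + m[0][2]*v.z + m[0][3]*v.w,
             m[1][0]*v.x + m[1][1]*v.y + m[1][2]*v.z + m[1][3]*v.w,
             m[2][0]*v.x + m[2][1]*v.y + m[2][2]*v.z + m[2][3]*v.w,
             m[3][0]*v.x + m[3][1]*v.y + m[3][2]*v.z + m[3][3]*v.w };
}

// Projects the 8 world-space AABB corners and returns the texture size
// (width, height) in pixels needed to cover the geometry on screen.
std::pair<int, int> rttSizeForBox(const Vec3 corners[8],
                                  const Mat4& viewProjection,   // view * projection
                                  int viewportW, int viewportH)
{
    const double inf = std::numeric_limits<double>::infinity();
    double minX = inf, minY = inf, maxX = -inf, maxY = -inf;

    for (int i = 0; i < 8; ++i) {
        Vec4 clip = mul(viewProjection, {corners[i].x, corners[i].y, corners[i].z, 1.0});
        double ndcX = clip.x / clip.w;                // perspective divide
        double ndcY = clip.y / clip.w;
        minX = std::min(minX, ndcX);  maxX = std::max(maxX, ndcX);
        minY = std::min(minY, ndcY);  maxY = std::max(maxY, ndcY);
    }

    // Clamp to the view frustum and convert the NDC extent [-1,1] to pixels.
    minX = std::max(minX, -1.0);  maxX = std::min(maxX, 1.0);
    minY = std::max(minY, -1.0);  maxY = std::min(maxY, 1.0);
    int w = static_cast<int>(std::ceil(std::max(maxX - minX, 0.0) * 0.5 * viewportW));
    int h = static_cast<int>(std::ceil(std::max(maxY - minY, 0.0) * 0.5 * viewportH));
    return { std::max(w, 1), std::max(h, 1) };
}
```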