How do I output 3D images to my 3D TV? - opengl

I have a 3D TV and feel that I would be shirking my responsibilities (as a geek) if I didn't at least try to make it display pretty 3D images of my own creation!
I've done a very basic amount of OpenGL programming before and so I understand the concepts involved - assume that I can render myself a simple tetrahedron or cube and make it spin around a bit; how can I get my 3D TV to display this image in, well, 3D?
Note that I understand the basics of how 3D works (render the same image twice from 2 different angles, one for each eye), my question is about the logistics of actually doing this (do I need an SDK? etc...)
The TV I have uses polarized 3D, although my intention is that this question also be relevant to other 3D technologies (if possible).
My laptop has an HDMI output, which is what I intend to use to connect to my TV (does this make any difference compared to using a VGA / component video cable?).
In the past I have experimented with GLUT / OpenGL; however, if it's easier (or only really possible) to do this using some alternative technology, then that's fine.

The main problem is getting your GPU to send a stereoscopic format. In the case of an HDMI connection this will not work without the help of the driver. If you have a professional-grade GPU (Quadro, FireGL), then it likely supports OpenGL quad buffers, i.e. you get framebuffers for the left and right eye, both back and front:
glDrawBuffer(GL_BACK_LEFT);
render_left_eye();
glDrawBuffer(GL_BACK_RIGHT);
render_right_eye();
glDrawBuffer(GL_BACK); // renders to both eyes simultaneously
render_screen_level_and_nonstereoscopic();
SwapBuffers();
Unfortunately OpenGL quad buffer stereo is considered professional-grade stuff.
Instead NVidia (at least) provides its own stereoscopy library plus some extensions to control it. The main reasoning is that shared fragments are to be rendered only once and then sent to both eyes with the appropriate parallax applied. However, from my semi-professional experience with stereoscopy¹, these kinds of semi-/automatic stereoscopifications just don't cut it. Stereoscopy requires tight control of the whole "production" pipeline, otherwise you're screwed. With Elephants Dream I went as far as modifying the renderer's core code.
I sent the people at the 3D division at NVidia some case scenarios where you need exact control over the stereoscopy process, and I hope they will see the light and give access to quad buffer stereo on consumer-grade hardware as well.
Note that I understand the basics of how 3D works (render the same image twice from 2 different angles, one for each eye)
Actually you don't render from two different angles but with a shifted parallax and lens shift. Otherwise you get trapezoidal/keystone distortion in the horizontal, which is very, very unpleasant to watch (in fact I now think that in the stereoscopic rendering process one should slightly diverge the optical axes – i.e. do the complete opposite of what one would naively do – and "over"compensate with lens shift; I'm currently preparing a small study about this, but still need to gather my testing and control groups).
1: heck, I'm the guy who single-handedly stereographed Elephants Dream, rendered it and got it an award at a 3D movie festival.

Because you have a passive 3D TV, it's likely that the left and right eye views are rendered on alternate scan lines (or perhaps on alternate pixels in a checkerboard pattern).
Thus your mission is to render the left-eye view to the even numbered scan lines, and the right eye view to the odd numbered scan lines (or vice versa). This can be accomplished either via OpenGL stencil operations, or, more modernly, using custom fragment shaders.
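As a rough sketch of the stencil route (fixed-function style, to match the code above; whether your particular TV wants even or odd lines for which eye is something you'd have to verify for your set):
/* One-time setup: mark every other scan line in the stencil buffer.
 * Assumes a context created with a stencil buffer; width/height are the
 * window size in pixels. */
void build_scanline_stencil(int width, int height)
{
    glClearStencil(0);
    glClear(GL_STENCIL_BUFFER_BIT);

    glEnable(GL_STENCIL_TEST);
    glStencilFunc(GL_ALWAYS, 1, 0xFF);
    glStencilOp(GL_REPLACE, GL_REPLACE, GL_REPLACE);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); /* write stencil only */
    glDisable(GL_DEPTH_TEST);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, width, 0, height, -1, 1);      /* one unit per pixel */
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glBegin(GL_LINES);                        /* tag the odd rows with 1 */
    for (int y = 1; y < height; y += 2) {
        glVertex2f(0.0f, y + 0.5f);
        glVertex2f((float)width, y + 0.5f);
    }
    glEnd();

    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
}

/* Per frame: do not clear the stencil buffer, just draw each eye where the
 * stencil matches (swap the 0/1 if your TV expects the opposite order). */
void draw_interlaced_frame(void)
{
    glEnable(GL_STENCIL_TEST);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);

    glStencilFunc(GL_EQUAL, 0, 0xFF);         /* even scan lines */
    render_left_eye();

    glStencilFunc(GL_EQUAL, 1, 0xFF);         /* odd scan lines */
    render_right_eye();

    glDisable(GL_STENCIL_TEST);
}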
This way, you can avoid the whole quad-buffered video card/GL_BACK_LEFT/GL_BACK_RIGHT approach described by datenwolf. And you want to avoid that approach, as I have never encountered a video driver that directs quad-buffered stereo 3D to an actual 3D TV.
I agree with datenwolf's advice that you should use asymmetric frustum shift rather than scene rotation to generate the right and left eye viewpoints.
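For illustration, here is a minimal fixed-function sketch of such an off-axis projection; apply_eye_frustum, eye_sep and convergence are my own made-up names and parameters that you would tune for your screen size and viewing distance:
#include <math.h>
#include <GL/gl.h>

/* Off-axis projection sketch: eye = -1 for the left eye, +1 for the right.
 * eye_sep is the eye separation, convergence is the distance to the plane
 * of zero parallax; both are illustrative, not from any SDK. */
void apply_eye_frustum(int eye, double fovy_deg, double aspect,
                       double z_near, double z_far,
                       double eye_sep, double convergence)
{
    double top   = z_near * tan(fovy_deg * M_PI / 360.0);
    double right = top * aspect;
    /* Shift the frustum sideways instead of rotating the camera. */
    double shift = 0.5 * eye * eye_sep * z_near / convergence;

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustum(-right - shift, right - shift, -top, top, z_near, z_far);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    /* Offset the camera by half the eye separation; apply your normal
     * camera/view transform after this. */
    glTranslated(-0.5 * eye * eye_sep, 0.0, 0.0);
}
Render the left eye with eye = -1 and the right eye with eye = +1 into whatever target your output path expects (quad buffers, interlaced stencil, side-by-side, ...).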

drawing time series of millions of vertices using OpenGL

I'm working on a data visualisation application where I need to draw about 20 different time series overlaid in 2D, each consisting of a few million data points. I need to be able to zoom and pan the data left and right to look at it, and to place cursors on the data for measuring time intervals and inspecting data points. It's very important that when zoomed out all the way, I can easily spot outliers in the data and zoom in to look at them, so averaging the data can be problematic.
I have a naive implementation using a standard GUI framework on Linux which is way too slow to be practical. I'm thinking of using OpenGL instead (testing on a Radeon RX480 GPU), with an orthographic projection. I searched around and it seems that using VBOs to draw line strips might work, but I have no idea if this is the best solution (i.e. would give me the best frame rate).
What is the best way to send data sets consisting of millions of vertices to the GPU, assuming the data does not change, and will be redrawn each time the user interacts with it (pan/zoom/click on it)?
In modern OpenGL (versions 3/4 core profile) VBOs are the standard way to transfer geometric / non-image data to the GPU, so yes you will almost certainly end up using VBOs.
Alternatives would be uniform buffers or texture buffer objects, but for the application you're describing I can't see any performance advantage in using them (they might even be worse), and they would complicate the code.
The single biggest speedup will come from having all the data points stored on the GPU instead of being copied over each frame as a 2D GUI might be doing. So do the simplest thing that works and then worry about speed.
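For what it's worth, a minimal sketch of that "simplest thing that works" (num_points and points_xy are placeholders for one of your series, and a shader with the position attribute at location 0 is assumed):
/* Upload one series (num_points samples of x,y floats) once, at startup. */
GLuint vao, vbo;
glGenVertexArrays(1, &vao);
glGenBuffers(1, &vbo);
glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, num_points * 2 * sizeof(float),
             points_xy, GL_STATIC_DRAW);             /* data never changes   */
glEnableVertexAttribArray(0);                        /* location 0 in shader */
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, (void *)0);

/* Each frame: pan/zoom only change the orthographic projection uniform;
 * the vertex data itself stays on the GPU. */
glBindVertexArray(vao);
glDrawArrays(GL_LINE_STRIP, 0, num_points);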
If you are new to OpenGL, I recommend the book "OpenGL SuperBible" 6th or 7th edition. There are many good online OpenGL guides and references, just make sure you avoid the older ones written for OpenGL 1/2.
Hope this helps.

Difference between SphericalMapping and CubeMapping for environmental reflection in OpenGL?

I'm working with an environmental reflection in OpenGL+GLSL.
I want to reflect the environment around an object in the most accurate way possible.
I found basically two ways to do this: one is called SphericalMapping and the other is CubeMapping.
They differ in the shader code, but I really don't understand what the difference between them is.
Obviously for the cubemapping shader I have 6 images mapped onto a cube that the fragment shader needs in order to look up the right pixel, and for my spherical mapping shader a single image which is distorted with photo-retouching software or obtained by taking a photo of a specular reflective sphere.
The drawbacks of spherical mapping seem to be that the camera (and the person holding it) is always shown in the image, and that the sampling is non-uniform. What is meant by this last statement? What is meant by the "black-hole" effect in spherical mapping?
I would like to find an interactive demonstration of the differences and drawbacks of these two approaches; it seems like cubemapping is the best, but I don't know why.
Which of the two is best, especially for a realtime simulation with head tracking, in your opinion?
Spheremaps are usually for small, low quality stuff.
The drawbacks of spherical mapping seem to be that the camera (and the person holding it) is always shown in the image
We're talking about computer graphics here; there is no real camera and no real person. Try image-searching "spheremap"; you won't see anybody in the pictures.
the sampling is non-uniform
This means that the center of the spheremap has many pixels for a relatively small area, while near the border, you have few pixels for a relatively large area.
Cubemaps are almost always better: you can generate them at runtime easily, they're faster for the hardware to sample, and even though you have 6 textures instead of 1, you can use a lower resolution and still get the same quality.
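To make the sampling difference concrete, here's a rough sketch of how a cubemap is created and looked up (face_pixels and size are placeholders for the six images you loaded elsewhere):
/* Create a cubemap from six face images. */
GLuint cubemap;
glGenTextures(1, &cubemap);
glBindTexture(GL_TEXTURE_CUBE_MAP, cubemap);
for (int face = 0; face < 6; ++face) {
    /* The six face targets are consecutive enums: +X, -X, +Y, -Y, +Z, -Z. */
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, GL_RGBA8,
                 size, size, 0, GL_RGBA, GL_UNSIGNED_BYTE, face_pixels[face]);
}
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);

/* In the fragment shader the lookup is just a direction vector:
 *     vec3 r = reflect(viewDir, normal);
 *     vec4 c = texture(environmentMap, r);   // samplerCube uniform
 * so there is no spheremap-style distortion or singularity involved. */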

OpenGL Picking from a large set

I'm trying to, in JOGL, pick from a large set of rendered quads (several thousand). Does anyone have any recommendations?
To give you more detail, I'm plotting a large set of data as billboards with procedurally created textures.
I've seen this post OpenGL GL_SELECT or manual collision detection? and have found it helpful. However it can take my program up to several minutes to complete a rendering of the full set, so I don't think drawing 2x (for color picking) is an option.
I'm currently drawing with calls to glBegin/glVertex.../glEnd. If I made the switch to batch rendering on the GPU with VAOs and VBOs, do you think I would get a speedup large enough to make color picking feasible?
If not, given all of the recommendations against using GL_SELECT, do you think it would be worth me using it?
I've investigated multithreaded CPU approaches to picking these quads that sidestep OpenGL altogether. Do you think an OpenGL-less CPU solution is the way to go?
Sorry for all the questions. My main question remains: what's a good way to pick from a large set of quads using OpenGL (JOGL)?
The best way to pick from a large number of quads cannot be easily defined. I don't like color picking or similar techniques very much, because they seem too impractical for most situations. I never understood why so many tutorials aimed at people who are new to OpenGL (or even to programming) focus on a kind of picking that is just useless for nearly everything. For example: try to get the pixel you clicked on in a heightmap: not possible. Try to locate the exact mesh in a model you clicked on: impractical.
If you have a large number of quads you will probably need good spatial partitioning, or at least (better: also) a scene graph. OK, you don't strictly need this, but it helps A LOT. Look at some scene graph tutorials for further information; it's a good thing to know when you start with 3D programming, because you get to know a lot of concepts and not only OpenGL code.
So what should you do now to start with some picking? Unproject the position of your mouse cursor using the inverse of your modelview-projection transform (IIRC gluUnProject(...) does this for you). With the orientation of your camera you can now cast a ray into your spatial structure (or into the scene graph that holds the spatial structure) and check for collisions with your quads. I currently have no link, but if you search for "inverse modelview matrix" you should find pages that explain this better and in more detail than would be practical here.
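A rough sketch of building that ray in C-style GL (mouse_ray is my own helper name; a current GL context is assumed, and in JOGL the same calls go through the GL2/GLU objects):
#include <GL/glu.h>

/* Build a picking ray from the mouse position by unprojecting it at the
 * near and far planes. */
void mouse_ray(int mouse_x, int mouse_y, double origin[3], double direction[3])
{
    GLdouble model[16], proj[16];
    GLint viewport[4];
    glGetDoublev(GL_MODELVIEW_MATRIX, model);
    glGetDoublev(GL_PROJECTION_MATRIX, proj);
    glGetIntegerv(GL_VIEWPORT, viewport);

    /* Window coordinates have their origin at the bottom-left corner. */
    double wx = (double)mouse_x;
    double wy = (double)(viewport[3] - mouse_y);

    double nx, ny, nz, fx, fy, fz;
    gluUnProject(wx, wy, 0.0, model, proj, viewport, &nx, &ny, &nz); /* near */
    gluUnProject(wx, wy, 1.0, model, proj, viewport, &fx, &fy, &fz); /* far  */

    origin[0] = nx;         origin[1] = ny;         origin[2] = nz;
    direction[0] = fx - nx; direction[1] = fy - ny; direction[2] = fz - nz;
    /* Normalize direction, then intersect the ray with your spatial
     * structure (e.g. a quadtree of quads) to find the picked object. */
}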
With this raycasting-based technique you will be able to find your quad in O(log n), where n is the number of quads you have. With some heuristics based on the exact layout of your application (your question is too generic to be more specific) you can improve this a lot in most cases.
An easy spatial structure for this is, for example, a quadtree. However, you should start with the raycasting first to fully understand the technique.
I've never faced such a problem, but in my opinion CPU-based picking is the best way to try.
If you have a large set of quads, maybe you can group the quads spatially to avoid testing all of them. For example, you can group the quads into two boxes, first test which box is hit, and then only test the quads inside that box.
I just implemented color picking, but glReadPixels is slow here (I've read somewhere that it can be bad for the asynchronous behaviour between the GL and the CPU).
Another possibility, it seems to me, is using transform feedback and a geometry shader that does a scissor-like test. The GS can then discard all faces that do not contain the mouse position. The transform feedback buffer then contains exactly the information about the hovered meshes.
You probably want to write the depth to the transform feedback buffer too, so that you can find the topmost hovered mesh.
This approach also works nicely with instancing (additionally write the instance ID to the buffer).
I haven't tried it yet, but I guess it will be a lot faster than using glReadPixels.
I only found this reference for this approach.
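To make the transform feedback idea above a bit more concrete, a hedged sketch; all identifiers here (pick_gs_src, picked_id, feedbackBuffer, pickQuery, max_hits, vertexCount) are my own placeholders, and a vertex shader that writes the MVP-transformed position to gl_Position is assumed to be attached to 'program' together with this geometry shader:
const char *pick_gs_src =
    "#version 150\n"
    "layout(triangles) in;\n"
    "layout(points, max_vertices = 1) out;\n"
    "uniform vec2 mouse_ndc;    /* mouse position in NDC, -1..1 */\n"
    "out float picked_id;\n"
    "out float picked_depth;\n"
    "float edge(vec2 a, vec2 b, vec2 p) {\n"
    "    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);\n"
    "}\n"
    "void main() {\n"
    "    vec2 v0 = gl_in[0].gl_Position.xy / gl_in[0].gl_Position.w;\n"
    "    vec2 v1 = gl_in[1].gl_Position.xy / gl_in[1].gl_Position.w;\n"
    "    vec2 v2 = gl_in[2].gl_Position.xy / gl_in[2].gl_Position.w;\n"
    "    float e0 = edge(v0, v1, mouse_ndc);\n"
    "    float e1 = edge(v1, v2, mouse_ndc);\n"
    "    float e2 = edge(v2, v0, mouse_ndc);\n"
    "    bool inside = (e0 >= 0.0 && e1 >= 0.0 && e2 >= 0.0) ||\n"
    "                  (e0 <= 0.0 && e1 <= 0.0 && e2 <= 0.0);\n"
    "    if (inside) {\n"
    "        picked_id    = float(gl_PrimitiveIDIn);\n"
    "        picked_depth = gl_in[0].gl_Position.z / gl_in[0].gl_Position.w;\n"
    "        EmitVertex();\n"
    "        EndPrimitive();\n"
    "    }\n"
    "}\n";

/* Declare the captured outputs before linking the program. */
const char *varyings[] = { "picked_id", "picked_depth" };
glTransformFeedbackVaryings(program, 2, varyings, GL_INTERLEAVED_ATTRIBS);
glLinkProgram(program);

/* Buffer for the results and a query to count how many faces were hit. */
GLuint feedbackBuffer, pickQuery;
glGenBuffers(1, &feedbackBuffer);
glBindBuffer(GL_TRANSFORM_FEEDBACK_BUFFER, feedbackBuffer);
glBufferData(GL_TRANSFORM_FEEDBACK_BUFFER, max_hits * 2 * sizeof(float),
             NULL, GL_DYNAMIC_READ);
glGenQueries(1, &pickQuery);

/* Per pick: run the geometry through the GS with rasterization disabled. */
glEnable(GL_RASTERIZER_DISCARD);
glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, feedbackBuffer);
glBeginQuery(GL_TRANSFORM_FEEDBACK_PRIMITIVES_WRITTEN, pickQuery);
glBeginTransformFeedback(GL_POINTS);
glDrawArrays(GL_TRIANGLES, 0, vertexCount);    /* the quads, as triangles */
glEndTransformFeedback();
glEndQuery(GL_TRANSFORM_FEEDBACK_PRIMITIVES_WRITTEN);
glDisable(GL_RASTERIZER_DISCARD);

/* Read the (id, depth) pairs back and keep the hit with the smallest depth. */
GLuint hits = 0;
glGetQueryObjectuiv(pickQuery, GL_QUERY_RESULT, &hits);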
I'm using a solution that I borrowed from the DirectX SDK; there's a nice example of how to detect the selected polygon in a vertex buffer object.
The same algorithm works nice with OpenGL.

SDL - Dynamic Alpha?

I plan on making a game (in SDL) where, if one character moves, the part of the image it was on turns alpha, thus allowing me to place a scrolling image underneath the original scene.
1) Is this possible?
2) If yes to #1, how can I go about implementing this (not to give me code, but to guide me in the right direction).
It sounds like you want to learn about image compositing.
A typical game these days will have a redraw function somewhere to redraw the entire screen. The entire scene is always redrawn each frame.
void redraw()
{
    drawBackground();
    drawCharacters();
    drawHUD();
    swapBuffers();
}
This is as simple as it gets: by using the right blending modes, each time you draw something it appears on top of what was drawn before. Older games are much more complicated because they don't redraw the entire screen at a time (or don't use a framebuffer), and newer games are much more complicated because they draw the world front-to-back and back-to-front in multiple passes for different types of objects.
SDL has software image compositing functions which you can use, or you can use OpenGL (which may use a combination of software and hardware). I personally use OpenGL because it is more powerful (lets you draw more complicated scenes), but the SDL compositing functions are easier to use. There are many excellent tutorials and many more mediocre or terrible tutorials online.
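For instance, a minimal SDL 1.2-style sketch of that back-to-front compositing (the PNG file names are placeholders, SDL_image does the loading, and error handling is omitted):
#include <SDL/SDL.h>
#include <SDL/SDL_image.h>   /* IMG_Load for PNGs with an alpha channel */

int main(int argc, char **argv)
{
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Surface *screen     = SDL_SetVideoMode(640, 480, 32, SDL_SWSURFACE);
    SDL_Surface *background = IMG_Load("background.png");  /* scrolling layer   */
    SDL_Surface *scene      = IMG_Load("scene.png");        /* has alpha "holes" */
    SDL_Surface *character  = IMG_Load("character.png");    /* sprite with alpha */

    SDL_Rect scroll = { 0, 0, 0, 0 };
    for (int frame = 0; frame < 600; ++frame) {
        scroll.x = -(frame % 100);                  /* scroll the back layer */
        scroll.y = 0;

        /* Back to front: each blit composites over what is already there. */
        SDL_BlitSurface(background, NULL, screen, &scroll);
        SDL_BlitSurface(scene,      NULL, screen, NULL);
        SDL_BlitSurface(character,  NULL, screen, NULL);

        SDL_Flip(screen);                           /* present the frame */
        SDL_Delay(16);
    }
    SDL_Quit();
    return 0;
}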
I'm not sure what you mean when you say "the part of the image it was on turns alpha". The alpha channel does not appear on screen, you cannot see it, it just affects how two images are composited.

IDEAs: how to interactively render large image series using GPU-based direct volume rendering

I'm looking for ideas on how to convert a 30+ GB, 2000+ image colored TIFF series into a dataset that can be visualized in realtime (at interactive frame rates) using GPU-based volume rendering (OpenCL / OpenGL / GLSL). I want to use a direct volume visualization approach instead of surface fitting (i.e. raycasting instead of marching cubes).
The problem is twofold: first, I need to convert my images into a 3D dataset. The first thing that came to my mind is to treat all the images as 2D textures and simply stack them to create a 3D texture.
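Roughly what I have in mind for the stacking is something like this (just a sketch; load_tiff_slice is a made-up placeholder for my TIFF loading code, and I assume I'd have to downsample or brick the data to fit it into VRAM):
/* Allocate one 3D texture and upload each TIFF slice into its own depth layer.
 * width/height/depth must fit GL_MAX_3D_TEXTURE_SIZE and the available VRAM. */
GLuint volumeTex;
glGenTextures(1, &volumeTex);
glBindTexture(GL_TEXTURE_3D, volumeTex);
glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA8, width, height, depth, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);         /* allocate, no data yet */

for (int z = 0; z < depth; ++z) {
    const void *slice_pixels = load_tiff_slice(z);     /* placeholder loader */
    glTexSubImage3D(GL_TEXTURE_3D, 0,
                    0, 0, z,                           /* x, y, z offset     */
                    width, height, 1,                  /* one slice at a time */
                    GL_RGBA, GL_UNSIGNED_BYTE, slice_pixels);
}
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
/* A fragment-shader raycaster (or view-aligned slices) would then sample
 * this texture through a sampler3D. */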
The second problem is achieving interactive frame rates. For this I will probably need some sort of downsampling in combination with "details-on-demand", loading the high-res dataset when zooming in, or something similar.
A first point-wise approach I found is:
polygonization of the complete volume data through layer-by-layer processing and generating corresponding image texture;
carrying out all essential transformations through vertex processor operations;
dividing polygonal slices into smaller fragments, where the corresponding depth and texture coordinates are recorded;
in fragment processing, deploying the vertex shader programming technique to enhance the rendering of fragments.
But I have no concrete ideas of how to start implementing this approach.
I would love to see some fresh ideas or ideas on how to start implementing the approach shown above.
If anyone has any fresh ideas in this area, they're probably going to be trying to develop and publish them. It's an ongoing area of research.
In your "point-wise approach", it seems like you have outlined the basic method of slice-based volume rendering. This can give good results, but many people are switching to a hardware raycasting method. There is an example of this in the CUDA SDK if you are interested.
A good method for hierarchical volume rendering was detailed by Crassin et al. in their paper called Gigavoxels. It uses an octree-based approach, which only loads the bricks into memory when they are needed.
A very good introductory book in this area is Real-Time Volume Graphics.
I've done a bit of volume rendering, though my code generated an isosurface using marching cubes and displayed that. However, in my modest self-education in volume rendering I did come across an interesting short paper: Volume Rendering on Common Computer Hardware. It comes with example source too. I never got around to checking it out, but it seemed promising. It is in DirectX though, not OpenGL. Maybe it can give you some ideas and a place to start.