Can OpenGL rendering be used for 3D monitors?

We have been considering buying a 3D-ready LCD monitor along with a machine with a 3D-stereoscopic vision capable graphics card (ATI Radeon 5870 or better). This is for displaying some scientific data that we are rendering in 3D using OpenGL.
Now, can we expect the GPU, monitor, and shutter glasses to take care of the stereoscopic display, or do we need to modify the rendering program?
If there are specific techniques for graphics programming for 3D stereoscopic displays, some tutorial links would be much appreciated.

Technically, OpenGL has provisions for stereoscopic display. The keyword is "quad-buffered stereo": you switch between left- and right-eye rendering with glDrawBuffer(GL_BACK_LEFT) and glDrawBuffer(GL_BACK_RIGHT), drawing each eye's view into its own back buffer.
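A minimal sketch of the per-frame structure, assuming the context was created with a stereo-capable pixel format (PFD_STEREO on Windows, GLX_STEREO on X11); the helper names here are placeholders for your own view-setup and drawing code:

```cpp
#include <GL/gl.h>

/* Placeholders for your own code -- not part of any library. */
void set_left_eye_view();   /* left-eye projection + modelview */
void set_right_eye_view();  /* right-eye projection + modelview */
void draw_scene();          /* the actual draw calls */

void draw_stereo_frame()
{
    glDrawBuffer(GL_BACK_LEFT);      /* subsequent draws hit the left back buffer */
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    set_left_eye_view();
    draw_scene();

    glDrawBuffer(GL_BACK_RIGHT);     /* now target the right back buffer */
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    set_right_eye_view();
    draw_scene();

    /* Swap buffers as usual; the driver presents both eyes in sync
       with the shutter glasses. */
}
```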
Unfortunately, quad-buffered stereo is considered a professional feature by the GPU vendors, so you need either an NVIDIA Quadro or an AMD/ATI FireGL graphics card to get support for it.
Stereoscopic rendering is not done by tilting (toeing in) the two eye cameras, but by applying a "lens shift" in the projection matrix. The details are a bit tricky, though. I gave a talk about it last year, but apparently the link to my presentation slides has gone down.
The key diagram is this one (it's from my notes of my patch to Blender to directly support stereoscopic rendering)
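Roughly, the "lens shift" amounts to an asymmetric frustum plus a sideways camera shift per eye. Here is a sketch using the fixed-function matrix calls for brevity; the parameter names and convergence-plane convention are illustrative, not taken from the slides:

```cpp
#include <GL/gl.h>
#include <math.h>

/* Apply the projection and modelview for one eye.
 *   fov_y       vertical field of view in radians
 *   aspect      viewport width / height
 *   near_, far_ clip plane distances
 *   conv_dist   distance to the zero-parallax (convergence) plane
 *   eye_offset  +eye_separation/2 for the right eye, -eye_separation/2 for the left
 */
void apply_eye_frustum(double fov_y, double aspect,
                       double near_, double far_,
                       double conv_dist, double eye_offset)
{
    double top    = near_ * tan(fov_y / 2.0);
    double bottom = -top;

    /* Shift the frustum horizontally so both eye frustums meet at the
       convergence plane instead of toeing the cameras in. */
    double shift = eye_offset * near_ / conv_dist;
    double left  = -aspect * top - shift;
    double right =  aspect * top - shift;

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustum(left, right, bottom, top, near_, far_);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslated(-eye_offset, 0.0, 0.0);  /* move the camera sideways; view directions stay parallel */
}
```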


How does Windows GDI work compared to DirectX/OpenGL?

I know that DirectX and OpenGL are both built into a graphics card and that you can access them when you need to, so all the rendering will be hardware accelerated.
I can't find anything about GDI, but I'm sure it isn't accelerated by the graphics card.
How do those technologies differ, and what am I missing here?
The GDI library dates back to the early 90s and the first versions of Windows. It supports a mix of hardware-accelerated raster ops and software vector-graphics drawing. It is primarily for drawing 'presentation' graphics and text for GUIs. GDI+ was a C++ class-based wrapper for GDI, but is basically the same technology. Modern versions of Windows still support the legacy GDI APIs, but modern graphics cards don't implement the old fixed-function 'raster ops' hardware-accelerated paths--it's not really accurate to even refer to the video cards of the early 90s as "GPUs".
OpenGL and Direct3D are graphics APIs built around pixels, lines, and triangles with texturing. Modern versions of both use programmable shader stages instead of fixed-function hardware to implement transform & lighting, texture blending, etc. These enable 3D hardware acceleration, but by themselves do not support classic 'raster ops' or vector-based drawing like styled/wide lines, filled ellipses, etc. Direct2D is a vector graphics library that does these old "GDI-style" operations on top of Direct3D with a mix of software and hardware-accelerated paths, and is what is used by the modern Windows GUI in combination with DirectWrite which implements high-quality text rendering.
See Graphics APIs in Windows, Vector graphics, and Raster graphics.
In short, for modern Windows programming you should use Direct2D/DirectWrite for 'presentation graphics' perhaps via a wrapper like Win2D, and Direct3D for 2D raster "sprite" graphics or 3D graphics rendering (see DirectX Tool Kit).
GDI is pure 2D image access and rendering (with very limited acceleration on a raster level if not disabled). So in GDI there are no:
textures
shaders
buffers
depth/stencil buffers
transform matrices
gfx pipeline
The differences are:
transparency is done by specific color value instead of alpha channel
blending is only possible in per-channel add, xor, and copy modes
you can get direct pixel access
Windows font access and text-drawing functions are supported
window-related GDI calls should only be made from the window's main thread (the one running its WndProc)
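For a feel of what that means in practice, here is a minimal GDI sketch (e.g. called from a WM_PAINT handler with the HDC from BeginPaint): plain fixed-function pens, text and per-pixel access, with no shaders, textures or matrices anywhere.

```cpp
#include <windows.h>

/* Illustrative only: classic GDI drawing with an HDC. */
void paint_with_gdi(HDC hdc)
{
    /* vector-style line drawing with a pen object */
    HPEN pen = CreatePen(PS_SOLID, 2, RGB(255, 0, 0));
    HGDIOBJ old = SelectObject(hdc, pen);
    MoveToEx(hdc, 10, 10, NULL);
    LineTo(hdc, 200, 120);

    /* text output via the Windows font machinery */
    TextOutA(hdc, 10, 140, "Hello, GDI", 10);

    /* direct pixel access */
    SetPixel(hdc, 5, 5, RGB(0, 255, 0));

    SelectObject(hdc, old);
    DeleteObject(pen);
}
```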

How do I speed up my Offscreen OpenGL pointcloud warp rendering code?

I'm working on a visual odometry algorithm that tracks movement of the camera between images. An integral part of this algorithm is being able to generate incremental dense warped images of a reference image, where each pixel has a corresponding depth (so it can be considered a point cloud of width x height dimensions)
I haven't had much experience working with OpenGL in the past, but having gone through a few tutorials, I managed to set up an offscreen rendering pipeline that takes in a transformation matrix and renders the point cloud from the new perspective. I'm using VBOs to load the data into the GPU, renderbuffers to render to, and glReadPixels() to read the result into CPU memory.
On my Nvidia card, I can render at ~1 ms per warp. Is that the fastest I can render the data (640x480 3D points)? This step is proving to be a major bottleneck for my algorithm, so I'd really appreciate any performance tips!
(I thought one optimization could be rendering only in grayscale, since I don't really care about colour, but it seems like OpenGL uses colour internally anyway.)
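For context, the rough shape of that offscreen path is something like the sketch below; formats, sizes and the elided draw call are illustrative, and this is not the code from the gists linked underneath:

```cpp
#include <GL/glew.h>   // or any other loader exposing the FBO entry points
#include <vector>

// Offscreen colour+depth renderbuffers plus a synchronous glReadPixels readback.
std::vector<unsigned char> render_offscreen(int width, int height)
{
    GLuint fbo, colorRb, depthRb;
    glGenFramebuffers(1, &fbo);
    glGenRenderbuffers(1, &colorRb);
    glGenRenderbuffers(1, &depthRb);

    glBindRenderbuffer(GL_RENDERBUFFER, colorRb);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, width, height);
    glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);

    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                              GL_RENDERBUFFER, colorRb);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              GL_RENDERBUFFER, depthRb);
    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
        return {};   // incomplete framebuffer

    glViewport(0, 0, width, height);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // ... draw the point-cloud VBO with the warp transform here ...

    // synchronous readback: this stalls until the GPU has finished rendering
    std::vector<unsigned char> pixels(width * height * 4);
    glReadBuffer(GL_COLOR_ATTACHMENT0);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glDeleteRenderbuffers(1, &colorRb);
    glDeleteRenderbuffers(1, &depthRb);
    glDeleteFramebuffers(1, &fbo);
    return pixels;
}
```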
My current implementation is at
https://gist.github.com/icoderaven/1212c7623881d8cd5e1f1e0acb7644fb,
and the shaders at
https://gist.github.com/icoderaven/053c9a6d674c86bde8f7246a48e5c033
Thanks!

Is there any conventional way to do per-voxel shader programming?

I'm looking for a way to do 3D filters in DirectX or OpenGL shaders, analogous to a Gaussian filter for 2D images. In other words, I want to process every voxel of a 3D texture.
Storing the volume data as a stack of 2D slices might work, but that is not a friendly way to access the volume data and is not easy to write in shaders.
Sorry for my poor English; any reply will be appreciated.
P.S.: CUDA's texture memory can do this job, but my weak GPU only runs it at a very low frame rate in debug mode, and I don't know why.
There is a 3D texture target in both Direct3D and OpenGL. Target framebuffers, however, are still 2D, so for pure filtering purposes that don't involve rendering to the screen, a compute shader, OpenCL, or DirectCompute may be better suited.
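For example, with an OpenGL compute shader (GL 4.3+) the per-voxel pass could look like the sketch below; the names are illustrative, and the 3x3x3 box kernel merely stands in for whatever filter you actually want (e.g. a separable Gaussian):

```cpp
#include <GL/glew.h>

// Reads one voxel of a single-channel 3D texture bound as an image and
// writes a filtered value to a second 3D texture.
static const char* kFilterSrc = R"GLSL(
#version 430
layout(local_size_x = 8, local_size_y = 8, local_size_z = 8) in;
layout(r32f, binding = 0) uniform readonly  image3D srcVol;
layout(r32f, binding = 1) uniform writeonly image3D dstVol;
void main() {
    ivec3 p = ivec3(gl_GlobalInvocationID);
    float sum = 0.0;
    for (int dz = -1; dz <= 1; ++dz)
      for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx)
            sum += imageLoad(srcVol, p + ivec3(dx, dy, dz)).r;
    imageStore(dstVol, p, vec4(sum / 27.0));
}
)GLSL";

// Dispatch over a w x h x d volume; the textures are assumed to have been
// created with glTexStorage3D(GL_TEXTURE_3D, 1, GL_R32F, w, h, d).
void run_voxel_filter(GLuint program, GLuint srcTex, GLuint dstTex,
                      int w, int h, int d)
{
    glUseProgram(program);   // program built from kFilterSrc as a compute shader
    glBindImageTexture(0, srcTex, 0, GL_TRUE, 0, GL_READ_ONLY,  GL_R32F);
    glBindImageTexture(1, dstTex, 0, GL_TRUE, 0, GL_WRITE_ONLY, GL_R32F);
    glDispatchCompute((w + 7) / 8, (h + 7) / 8, (d + 7) / 8);
    glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);
}
```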

NURBS in the OpenGL Graphics Pipeline

I'm curious about how NURBS are rendered on GPUs / in the OpenGL graphics pipeline. I understand there are various calls within OpenGL and GLUT for easily rendering NURBS objects from a coding perspective, using glMap and glMapGrid, but what I don't get is the process OpenGL goes through to do this. The idea behind NURBS is using curves to define surfaces, whereas the graphics pipeline appears to be built around triangle rasterization and triangle meshes; NURBS, by contrast, are based around Bezier curves, which are, well, curved.
So how are NURBS actually rendered, from a (high-level) pipeline perspective?
The simple answer is that they are not dealt with in the OpenGL pipeline, but must be converted to something the GL pipeline can process. The general approach would probably be to first convert them to a primitive that is a little more real-time friendly, such as Bezier patches, and then tessellate these at runtime into triangles.
Tessellation could be regular, mapping a grid onto the patch, or could be based on curvature, subdividing the patch more where there is higher variance. Either way, the surface is only truly evaluated at some vertices and rendered as flat polygons (though shaders can be used to create appropriately smoothly varying normals, etc.).
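As an illustration of the regular-grid case, here is a CPU-side sketch that evaluates one bicubic Bezier patch on a uniform grid; names are illustrative, and a real implementation would also emit triangle indices and normals, and handle the rational weights that the "R" in NURBS implies:

```cpp
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

// Cubic Bernstein basis weights at parameter t.
static void bernstein(float t, float b[4])
{
    float s = 1.0f - t;
    b[0] = s * s * s;
    b[1] = 3.0f * s * s * t;
    b[2] = 3.0f * s * t * t;
    b[3] = t * t * t;
}

// Evaluate a bicubic Bezier patch (16 control points, row-major) on a
// (res+1) x (res+1) grid of vertices, ready to be indexed as triangles.
std::vector<Vec3> tessellate_patch(const Vec3 ctrl[16], int res)
{
    std::vector<Vec3> verts;
    verts.reserve(std::size_t(res + 1) * (res + 1));
    for (int i = 0; i <= res; ++i) {
        float bu[4];
        bernstein(float(i) / res, bu);
        for (int j = 0; j <= res; ++j) {
            float bv[4];
            bernstein(float(j) / res, bv);
            Vec3 p = {0.0f, 0.0f, 0.0f};
            for (int r = 0; r < 4; ++r)
                for (int c = 0; c < 4; ++c) {
                    float w = bu[r] * bv[c];
                    p.x += w * ctrl[r * 4 + c].x;
                    p.y += w * ctrl[r * 4 + c].y;
                    p.z += w * ctrl[r * 4 + c].z;
                }
            verts.push_back(p);
        }
    }
    return verts;
}
```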
glMap() et al. (which were previously used to help render Bezier patches, etc.) are deprecated and no longer present in the modern OpenGL API. Nowadays you would use shaders to deal with tessellation.

Why does stereo 3D rendering require software written especially for it?

Given a naive take on 3D graphics rendering, it seems that stereo 3D rendering should be essentially transparent to the developer and be entirely a feature of the graphics hardware and drivers. Wherever an OpenGL window is displaying a scene, it takes the geometry, lighting, camera, and texture information to render a 2D image of the scene.
Adding stereo 3D to the scene seems to essentially imply using two laterally offset cameras where there was originally one, with all other scene variables staying the same. The only additional information would then be how far apart to make the cameras and how far out to make their central rays converge. Given this, it would seem trivial to take a GL command sequence and interleave the appropriate commands at the driver level to drive a 3D rendering.
It seems, though, that applications need to be specially written to make use of special 3D hardware architectures, making it cumbersome and prohibitive to implement. Would we expect this to be the future of stereo 3D implementations, or am I glossing over too many important details?
In my specific case, we are using a .NET OpenGL viewport control. I originally hoped that simply having stereo-enabled hardware and drivers would be enough to enable stereo 3D.
Your assumptions are wrong. OpenGL does not "take geometry, lighting, camera and texture information to render a 2D image". OpenGL takes commands that manipulate its state machine and commands that execute draw calls.
As Nobody mentions in his comment, the core profile does not even care about transformations at all. The only thing it really provides you with now is a way to feed arbitrary data to a vertex shader, and an arbitrary 3D cube to do rendering into. Whether or not that corresponds to the actual view, GL does not care, nor should it.
Mind you, some people have noticed that a driver can try to guess what is the view and what is not, and this is what the NVIDIA driver tries to do when doing automatic stereo rendering. This requires some specific guesswork, which amounts to actual analysis of a game's rendering in order to tweak the algorithms so that the driver guesses right. So it's typically a per-title, in-driver change. And some developers have noticed that the driver can guess wrong, and when that happens, things start to get confusing. See some first-hand accounts of those problems.
I really recommend you read that presentation, because it makes some further points about where the camera should be pointing (e.g. whether the two view directions should be parallel, and so on).
Also, it turns out that it essentially costs twice as much rendering work for everything that is view dependent. Some developers (including, for example, the Crytek guys; see Part 2) figured out that, to a great extent, you can do a single render and fudge the picture with additional data to generate the left- and right-eye pictures.
The amount of work saved there is worth a lot by itself, and is reason enough for developers to do this themselves.
Stereo 3D rendering is unfortunately more complex than just adding a lateral camera offset.
You can create stereo 3D from an original 'mono' rendered frame and its depth buffer. Given the range of (real-world) depths in the scene, the depth value of each pixel tells you how far away that pixel would be. Given a desired eye-separation value, you can slide each pixel left or right depending on its distance. But...
Do you want parallel axis stereo (offset asymmetrical frustums) or 'toe in' stereo where the two cameras eventually converge? If the latter, you will want to tweak the camera angles scene by scene to avoid 'reversing' bits of geometry beyond the convergence point.
For objects very close to the viewer, the left and right eyes see quite different images of the same object, even down to the left eye seeing one side of the object and the right eye the other side - but the mono view will have averaged these out to just the front. If you want an accurate stereo 3D image, it really does have to be rendered from different eye viewpoints. Does this matter? FPS shooter game, probably not. Human surgery training simulator, you bet it does.
Similar problem if the viewer tilts their head to one side, so one eye is higher than the other. Again, probably not important for a game, really important for the surgeon.
Oh, and do you have anti-aliasing or transparency in the scene? Now you've got a pixel which really represents two pixel values at different depths. Move an anti-aliased pixel sideways and it probably looks worse because the 'underneath' color has changed. Move a mostly-transparent pixel sideways and the rear pixel will be moving too far.
And what do you do with gunsight crosses and similar HUD elements? If they were drawn with depth buffer disabled, the depth buffer values might make them several hundred metres away.
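For concreteness, the naive "slide each pixel by its depth" idea sketched above looks roughly like this; the names and the disparity formula are illustrative, and every caveat above is a way this simple loop goes wrong:

```cpp
#include <cstddef>
#include <vector>

// Generate one eye's image by shifting pixels of a mono frame horizontally
// by a disparity derived from per-pixel depth.
void shift_for_one_eye(const std::vector<unsigned int>& mono, // packed RGBA pixels
                       const std::vector<float>& depth,       // linear depth per pixel
                       std::vector<unsigned int>& out,
                       int width, int height,
                       float max_disparity_px,  // disparity at infinity, in pixels
                       float conv_dist)         // zero-parallax distance
{
    out.assign(mono.size(), 0u);
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            std::size_t i = std::size_t(y) * width + x;
            // zero shift at the convergence plane, approaching
            // max_disparity_px as depth goes to infinity
            float d = max_disparity_px * (1.0f - conv_dist / depth[i]);
            int xs = x + static_cast<int>(d);
            if (xs >= 0 && xs < width)
                out[std::size_t(y) * width + xs] = mono[i];
        }
    }
}
```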
Given all these potential problems, OpenGL sensibly does not try to say how stereo 3D rendering should be done. In my experience modifying an OpenGL program to render in stereo is much less effort than writing it in the first place.
Shameless self promotion: this might help
http://cs.anu.edu.au/~Hugh.Fisher/3dteach/stereo3d-devel/index.html