What buffers to use for stereo in Qt using QOpenGLWidget? - c++

I'm trying to do a stereoscopic visualization in Qt. I have found some tutorials but all of them use the older QGLWidget and the buffers GL_FRONT_LEFT and GL_FRONT_RIGHT.
As I'm using the newer QOpenGLWidget, I tried drawing images to the same buffers, but the call to glDrawBuffer(GL_FRONT_LEFT) generates a GL_INVALID_ENUM error.
I also saw that the default buffer is GL_COLOR_ATTACHMENT0 instead of GL_FRONT_LEFT, so I imagine I need to use a different set of buffers to enable stereo.
Which buffers should I use?

You should use
glDrawBuffer(GL_BACK_RIGHT);
glDrawBuffer(GL_BACK_LEFT);
Look at this link.
I am working on the same thing with an Nvidia Quadro 4000. No luck yet: I got two slightly offset images, the IR transmitter lights up, but the screen flickers!
GOT IT: the sync was at 60 Hz; I set it to 120 Hz and everything works fine.
I still need to work on the right/left frustum before I can say eureka.
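To make that concrete, here is a minimal per-frame sketch of drawing into those two buffers, assuming the context was created with a quad-buffered stereo pixel format; setLeftEyeProjection(), setRightEyeProjection() and drawScene() are hypothetical placeholders for your own code:
// Hypothetical per-frame render function; requires a quad-buffered stereo context.
void renderStereoFrame()
{
    glDrawBuffer(GL_BACK_LEFT);                             // left eye
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    setLeftEyeProjection();                                 // placeholder: left-eye frustum
    drawScene();                                            // placeholder: your scene rendering

    glDrawBuffer(GL_BACK_RIGHT);                            // right eye
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    setRightEyeProjection();                                // placeholder: right-eye frustum
    drawScene();

    // Swap buffers afterwards (e.g. via your toolkit) to present both eyes.
}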

Related

Best way to display camera video stream in modern OpenGL

I'm creating an augmented reality application in OpenGL where I want to augment a video stream captured by a Kinect with virtual objects. I found some running code using fixed-function-pipeline OpenGL that creates a texture with glTexImage2D, fills it with the image data and then draws a screen-filling GL_QUADS quad with glTexCoord2f texture coordinates.
I'm looking for an optimized solution using modern, shader-based OpenGL only which is also capable of handling HD video streams.
I guess what I'm hoping for as an answer is a list of ways a camera video stream can be rendered in OpenGL, from which I can select the one that best fits my needs.
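For reference, here is a hedged sketch of the texture-upload step that the fixed-function code described above performs and that a shader-based version would keep; the resolution and the framePixels pointer are assumptions. The texture is allocated once and each new frame is streamed into it with glTexSubImage2D, which is usually cheaper than recreating it with glTexImage2D every frame:
// One-time setup (assumed 1280x720 RGB stream)
GLuint videoTex;
glGenTextures(1, &videoTex);
glBindTexture(GL_TEXTURE_2D, videoTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 1280, 720, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// Per frame: upload the new camera image into the existing texture,
// then draw a full-screen quad (two triangles) with a trivial textured shader.
glBindTexture(GL_TEXTURE_2D, videoTex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 1280, 720, GL_RGB, GL_UNSIGNED_BYTE, framePixels);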

What is the use case of cudaGraphicsRegisterFlagsWriteDiscard in cudaGraphicsGLRegisterImage?

I'm fairly new to CUDA, but I've managed to display something generated by a kernel on the screen using OpenGL. I've tried several approaches:
Using a PBO and an OpenGL texture (old style);
Using an OpenGL texture as a CUDA surface and rendering on a quad (new style);
Using a renderbuffer as a CUDA surface and rendering using glBlitFramebuffer.
All of them worked, but, while implementing #2, I erroneously set the hint as cudaGraphicsRegisterFlagsWriteDiscard. Since all of the data will be generated by CUDA, I thought this was the correct option. However, later I realized that I needed a CUDA surface to write to an OpenGL texture, and when you use a surface, you are requested to use the LoadStore flag.
So basically my question is this : Since I absolutely need a CUDA surface to write to an OpenGL texture in CUDA, what is the use case of cudaGraphicsRegisterFlagsWriteDiscard in cudaGraphicsGLRegisterImage?
The documentation description seems pretty straightforward. It is for one-way delivery of data from CUDA to OpenGL.
This online book excerpt provides a similar explanation:
Applications where CUDA is the producer and OpenGL is the consumer should register the objects with a write-discard flag...
If you want to see an example, take a look at the postProcessGL cuda sample. In that case, OpenGL renders an image which is post-processed (blurred) by cuda before display. There are two separate pathways for data flow. In the OpenGL->CUDA case, the data is handled by the createTextureSrc function, and the flag specified is read-only. For the CUDA->OpenGL case (delivery of the post-processed frame), the registration is handled in createTextureDst, where a call is made to cudaGraphicsGLRegisterImage with the cudaGraphicsMapFlagsWriteDiscard flag specified, since on this path CUDA is producing and OpenGL is consuming.
To understand how the textures are handled (populated with data from the cuda operations via a cudaArray) you probably want to study the sequence of operations in processImage().
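For illustration, a hedged sketch of that one-way (write-discard) path, assuming an existing GL_TEXTURE_2D named tex and a device buffer devPtr with pitch-sized rows filled by a kernel; requires cuda_gl_interop.h:
// Register an existing OpenGL texture for one-way CUDA -> OpenGL delivery.
cudaGraphicsResource *res = NULL;
cudaGraphicsGLRegisterImage(&res, tex, GL_TEXTURE_2D, cudaGraphicsRegisterFlagsWriteDiscard);

// Per frame: map the resource and copy the kernel output into the underlying cudaArray.
cudaGraphicsMapResources(1, &res, 0);
cudaArray_t array;
cudaGraphicsSubResourceGetMappedArray(&array, res, 0, 0);
cudaMemcpy2DToArray(array, 0, 0, devPtr, pitch, width * 4, height, cudaMemcpyDeviceToDevice);  // RGBA8: width*4 bytes per row
cudaGraphicsUnmapResources(1, &res, 0);
// OpenGL can now sample 'tex' as usual; no surface writes, and hence no LoadStore flag, are needed.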

Print an OpenGL Texture to File without Display?

I'm trying to use OpenGL to help with processing Kinect depth map input into an image. At the moment we're using the Kinect as a basic motion sensor, and the program counts how many people walk by and takes a screen shot each time it detects someone new.
The problem is that I need to get this program to run without access to a display. We're wanting to run it remotely over SSH, and the network traffic from other services is going to be too much for X11 forwarding to be a good idea. Attaching a display to the machine running the program is a possibility, but we're wanting to avoid doing that for energy consumption reasons.
The program does generate a 2D texture object for OpenGL, and usually just uses GLUT to render it before I read the pixels back and output them to a .PNG file using FreeImage. The issue I'm running into is that once the GLUT function calls are removed, all that gets written to the .PNG files is just black boxes.
I'm using the OpenNI and NITE drivers for the Kinect. The programming language is C++, and I'm needing to use Ubuntu 10.04 due to the hardware limitations of the target device.
I've tried using OSMesa or FrameBuffer objects, but I am a complete OpenGL newbie so I haven't gotten OSMesa to render properly in place of the GLUT functions, and my compilers can't find any of the OpenGL FrameBuffer functions in GL/glext.h or GL/gl.h.
I know that textures can be read into a program from image files, and all I want to output is a single 2-D texture. Is there a way to skip the headache of off-screen rendering in this case and print a texture directly to an image file without needing OpenGL to render it first?
The OSMesa library is not a drop-in replacement for GLUT, nor do the two work together. If you only need the offscreen rendering part without interaction, you have to implement a simple event loop yourself.
For example:
/* init OSMesa (needs <GL/osmesa.h>) */
OSMesaContext mContext;
void  *mBuffer;
size_t mWidth  = 640;   /* choose your render size */
size_t mHeight = 480;
// Create RGBA context and specify Z, stencil, accum sizes
mContext = OSMesaCreateContextExt( OSMESA_RGBA, 16, 0, 0, NULL );
// OSMesa renders into client memory, so allocate the color buffer yourself (4 bytes per RGBA pixel)
mBuffer = malloc( mWidth * mHeight * 4 );
OSMesaMakeCurrent( mContext, mBuffer, GL_UNSIGNED_BYTE, mWidth, mHeight );
After this snippet you can use normal OpenGL calls to render, and after a glFinish() call the results can be accessed through the mBuffer pointer.
In your event loop you can call your usual onDisplay, onIdle, etc. callbacks.
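A hedged sketch of such a loop; onDisplay() and saveToPNG() are placeholders for your existing drawing code and FreeImage export:
/* hypothetical minimal event loop replacing GLUT */
bool running = true;
while (running) {
    onDisplay();      // placeholder: your existing OpenGL drawing code
    glFinish();       // make sure rendering into mBuffer has completed
    saveToPNG((unsigned char*)mBuffer, mWidth, mHeight);  // placeholder: your FreeImage export
    running = false;  // single frame here; keep looping if you need more frames
}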
We're wanting to run it remotely over SSH, and the network traffic from other services is going to be too much for X11 forwarding to be a good idea.
If you forward X11 and create the OpenGL context on that display, OpenGL traffic will go over the network whether a window is visible or not. So what you actually need to do (if you want to make use of GPU-accelerated OpenGL) is start an X server on the remote machine and keep it the active VT (i.e. the X server must be the program that "owns" the display). Then your program can make a connection to this very X server only. But this requires using Xlib. Some time ago fungus wrote a minimalistic Xlib example; I extended it a bit so that it makes use of FBConfigs. You can find it here: https://github.com/datenwolf/codesamples/blob/master/samples/OpenGL/x11argb_opengl/x11argb_opengl.c
In your case you should render to an FBO or a PBuffer. Never use a visible window framebuffer to render stuff that's to be stored away! If you create an OpenGL window, like with the code I linked, use an FBO. Creating a GLX PBuffer is not unlike creating a GLX window, only that it will be off-screen.
The trick is not to use the default X display (of your SSH forward) but a separate connection to the local X server. The key is the line
Xdisplay = XOpenDisplay(NULL);
Instead of NULL, you'd pass the connection to the local server there. To make this work you'll also need to (manually) add an xauth entry to, or disable xauth on, the OpenGL rendering server.
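For example (assuming the local X server runs on display :0):
Xdisplay = XOpenDisplay(":0");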
You can use glGetTexImage to read a texture back from OpenGL.
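A hedged sketch of that read-back; the texture handle and size are assumptions, and a current OpenGL context (e.g. the OSMesa one above) is still required:
// Read an RGBA texture back into client memory, then hand it to FreeImage as before (needs <vector>).
std::vector<unsigned char> pixels(width * height * 4);   // width/height assumed known
glBindTexture(GL_TEXTURE_2D, tex);                        // 'tex' is the texture to export
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
// write 'pixels' to a .PNG with FreeImage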

Cinder: How to get a pointer to data\frame generated but never shown on screen?

There is that great lib I want to use called libCinder. I looked through its docs but can't figure out whether, and how, it is possible to render something without showing it first.
Say we want to create a simple random-color 640x480 canvas with 3 red, white and blue circles on it, and get RGB\HSL\any char * to the raw image data out of it without ever showing any window to the user (say we have a console application project type). I want to use such a feature for server-side live video stream generation, and for the video streaming I would prefer to use ffmpeg; that is why I want a pointer to some RGB\HSV or whatever buffer with the actual image data. How do I do such a thing with libCinder?
You will have to use off-screen rendering. libcinder seems to be just a wrapper for OpenGL, as far as graphics go, so you can use OpenGL code to achieve this.
Since OpenGL does not have a native mechanism for off-screen rendering, you'll have to use an extension. A tutorial for using such an extension, framebuffer objects (FBOs), can be found here. You will have to modify renderer.cpp to use this extension's commands.
An alternative to using such an extension is to use Mesa 3D, which is an open-source implementation of OpenGL. Mesa has a software rendering engine which allows it to render into memory without using a video card. This means you don't need a video card, but on the other hand the rendering might be slow. Mesa has an example of rendering to a memory buffer at src/osdemos/ in the Demos zip file. This solution will probably require you to write a complete Renderer class, similar to Renderer2d and RendererGl, which uses Mesa's instructions instead of Windows's or Mac's.
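A hedged sketch of the FBO route in plain OpenGL, assuming a 3.0+ context; the 640x480 size matches the example in the question and the raw buffer is what you would feed to ffmpeg:
// Create a 640x480 RGBA texture and attach it to a framebuffer object.
GLuint fbo, colorTex;
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 640, 480, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);

// Draw the circles here as usual, then read the raw RGB data back:
unsigned char *raw = new unsigned char[640 * 480 * 3];
glReadPixels(0, 0, 640, 480, GL_RGB, GL_UNSIGNED_BYTE, raw);   // pass 'raw' to ffmpeg
glBindFramebuffer(GL_FRAMEBUFFER, 0);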

How to do stereoscopic 3D with OpenGL on GTX 560 and later?

I am using the open source haptics and 3D graphics library Chai3D running on Windows 7. I have rewritten the library to do stereoscopic 3D with Nvidia nvision. I am using OpenGL with GLUT, and using glutInitDisplayMode(GLUT_RGB | GLUT_DEPTH | GLUT_DOUBLE | GLUT_STEREO) to initialize the display mode. It works great on Quadro cards, but on GTX 560m and GTX 580 cards it says the pixel format is unsupported. I know the monitors are capable of displaying the 3D, and I know the cards are capable of rendering it. I have tried adjusting the resolution of the screen and everything else I can think of, but nothing seems to work. I have read in various places that stereoscopic 3D with OpenGL only works in fullscreen mode. So, the only possible reason for this error I can think of is that I am starting in windowed mode. How would I force the application to start in fullscreen mode with 3D enabled? Can anyone provide a code example of quad buffer stereoscopic 3D using OpenGL that works on the later GTX model cards?
What you experience has no technical reason; it is simply NVidia's product policy. Quadbuffer stereo is considered a professional feature, so NVidia offers it only on their Quadro cards, even though the GeForce GPUs could do it just as well. This is not a recent development; it was already like this back in 1999. For example, I had (well, still have) a GeForce2 Ultra back then. Technically it was the very same chip as the Quadro; the only difference was the PCI ID reported back to the system. One could trick the driver into thinking you had a Quadro by tinkering with the PCI IDs (either by patching the driver or by soldering an additional resistor onto the graphics card's PCB).
The stereoscopic 3D hack for Direct3D was already supported by my GeForce2 back then. At the time the driver duplicated the rendering commands, but applied a translation to the modelview matrix and a skew to the projection matrix. These days it's implemented with a shader and multi-rendertarget trick.
The NVision3D API does allow you to blit images for specific eyes (this is meant for movie players and image viewers). But it also allows you to emulate quadbuffer stereo: instead of the GL_BACK_LEFT and GL_BACK_RIGHT buffers, create two framebuffer objects, which you bind and use as if they were the quadbuffer stereo buffers. Then, after rendering, you blit the resulting images (as textures) to the NVision3D API.
With as little as 50 lines of management code you can build a program that seamlessly works with both NVision3D and quadbuffer stereo. What NVidia does is pointless; they should just stop it and properly support quadbuffer stereo pixel formats on consumer GPUs as well.
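A hedged sketch of what that management code might look like; fboLeft/fboRight are two pre-created framebuffer objects, drawScene() is a placeholder for your per-eye rendering, and the final NVision3D blit of the two color textures is left out:
// Render one eye either into a real stereo back buffer or into an emulation FBO.
void renderEye(bool left, bool haveQuadbufferStereo)
{
    if (haveQuadbufferStereo) {
        glBindFramebuffer(GL_FRAMEBUFFER, 0);            // default (window) framebuffer
        glDrawBuffer(left ? GL_BACK_LEFT : GL_BACK_RIGHT);
    } else {
        glBindFramebuffer(GL_FRAMEBUFFER, left ? fboLeft : fboRight);  // assumed FBO handles
    }
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    drawScene(left);                                     // placeholder: per-eye projection + scene
}
// Per frame: renderEye(true, ...); renderEye(false, ...); then either swap buffers
// or hand the two FBO color textures to the NVision3D blit path.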
Simple: you can't. Not the way you're trying to do it.
There is a difference between having a pre-existing program do things with stereoscopic glasses and doing what you're trying to do. What you are attempting to do is use the built-in stereo support of OpenGL: the ability to create a stereoscopic framebuffer, where you can render to the left and right framebuffers arbitrarily.
NVIDIA does not allow that with their non-Quadro cards. It has hacks in the driver that will force stereo on applications with nVision and the control panel. But NVIDIA's GeForce drivers do not allow you to create stereoscopic framebuffers.
And before you ask, no, I have no idea why NVIDIA doesn't let you control stereo.
Since I was looking into this issue for my own game, I found this link where somebody hacked the USB protocol: http://users.csc.calpoly.edu/~zwood/teaching/csc572/final11/rsomers/
I didn't follow it through, but back when I was researching this it didn't look too hard to make use of that information. So you might have to implement your own code to support it in your app, which should be possible. Unfortunately a generic solution would be harder, because then you would have to hack the driver or somehow hook into the OpenGL library and intercept the calls.