How to do stereoscopic 3D with OpenGL on GTX 560 and later?

I am using the open source haptics and 3D graphics library Chai3D running on Windows 7. I have rewritten the library to do stereoscopic 3D with NVIDIA 3D Vision. I am using OpenGL with GLUT, and using glutInitDisplayMode(GLUT_RGB | GLUT_DEPTH | GLUT_DOUBLE | GLUT_STEREO) to initialize the display mode. It works great on Quadro cards, but on GTX 560M and GTX 580 cards it says the pixel format is unsupported. I know the monitors are capable of displaying the 3D, and I know the cards are capable of rendering it. I have tried adjusting the resolution of the screen and everything else I can think of, but nothing seems to work. I have read in various places that stereoscopic 3D with OpenGL only works in fullscreen mode. So, the only possible reason for this error I can think of is that I am starting in windowed mode. How would I force the application to start in fullscreen mode with 3D enabled? Can anyone provide a code example of quad buffer stereoscopic 3D using OpenGL that works on the later GTX model cards?
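For reference, a hedged sketch of what that initialization looks like when forced into fullscreen through GLUT's game mode. The mode string is only an example and must match a mode the display actually supports, and whether the GLUT_STEREO pixel format is granted at all is exactly the problem discussed in the answers below.

    /* Minimal sketch: request a quad-buffered stereo context in fullscreen
     * ("game mode") with GLUT. Whether the pixel format is granted is up to
     * the driver; on GeForce consumer drivers it is typically refused and
     * GLUT will fail to create the window/game mode. */
    #include <GL/glut.h>
    #include <stdio.h>

    static void display(void)
    {
        glDrawBuffer(GL_BACK_LEFT);
        glClearColor(0.2f, 0.0f, 0.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        /* ... render left-eye view ... */

        glDrawBuffer(GL_BACK_RIGHT);
        glClearColor(0.0f, 0.0f, 0.2f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        /* ... render right-eye view ... */

        glutSwapBuffers();
    }

    int main(int argc, char **argv)
    {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_RGB | GLUT_DEPTH | GLUT_DOUBLE | GLUT_STEREO);

        /* Fullscreen via GLUT's game mode instead of a regular window. */
        glutGameModeString("1920x1080:32@120");   /* example mode string */
        if (glutGameModeGet(GLUT_GAME_MODE_POSSIBLE))
            glutEnterGameMode();
        else
            glutCreateWindow("stereo test");      /* fallback: windowed */

        GLboolean stereo = GL_FALSE;
        glGetBooleanv(GL_STEREO, &stereo);
        printf("Stereo pixel format granted: %s\n", stereo ? "yes" : "no");

        glutDisplayFunc(display);
        glutMainLoop();
        return 0;
    }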

What you are experiencing has no technical reason; it is simply NVidia product policy. Quad-buffer stereo is considered a professional feature, so NVidia offers it only on their Quadro cards, even though the GeForce GPUs could do it just as well. This is not a recent development; it was already like this back in 1999. For example, I had (well, still have) a GeForce2 Ultra back then. Technically it was the very same chip as the Quadro; the only difference was the PCI ID reported back to the system. One could trick the driver into thinking you had a Quadro by tinkering with the PCI IDs (either by patching the driver or by soldering an additional resistor onto the graphics card PCB).
The stereoscopic 3D driver hack for Direct3D was already supported by my GeForce2 back then. At the time the driver duplicated the rendering commands, but applied a translation to the modelview matrix and a skew to the projection matrix. These days it's implemented with a shader and multi-rendertarget trick.
The NVision3D API does allow you to blit images for specific eyes (this is meant for movie players and image viewers). But it also lets you emulate quad-buffer stereo: instead of the GL_BACK_LEFT and GL_BACK_RIGHT buffers, create two framebuffer objects, which you bind and use as if they were the quad-buffer stereo back buffers. Then, after rendering, you blit the resulting images (as textures) through the NVision3D API.
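A minimal sketch of the FBO half of that approach, assuming GLEW (or any other loader) provides the FBO entry points; the final hand-off of the two eye textures to the NVision3D/NVAPI stereo presentation is vendor specific and only hinted at in the comments.

    /* Sketch: emulate GL_BACK_LEFT / GL_BACK_RIGHT with two FBOs. */
    #include <GL/glew.h>

    typedef struct { GLuint fbo, color, depth; } EyeTarget;

    static EyeTarget create_eye_target(int w, int h)
    {
        EyeTarget t;
        glGenTextures(1, &t.color);
        glBindTexture(GL_TEXTURE_2D, t.color);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

        glGenRenderbuffers(1, &t.depth);
        glBindRenderbuffer(GL_RENDERBUFFER, t.depth);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, w, h);

        glGenFramebuffers(1, &t.fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, t.fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, t.color, 0);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                                  GL_RENDERBUFFER, t.depth);
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        return t;
    }

    /* In the render loop: bind left.fbo, render the left eye; bind right.fbo,
     * render the right eye; then hand left.color and right.color to whatever
     * presents them (the NVAPI stereo blit, or a textured quad drawn to
     * GL_BACK_LEFT / GL_BACK_RIGHT where real quad-buffer stereo exists). */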
With as little as 50 lines of management code you can build a program that works seamlessly on both NVision3D and quad-buffer stereo. What NVidia does is pointless; they should just stop it and properly support quad-buffer stereo pixel formats on consumer GPUs as well.

Simple: you can't. Not the way you're trying to do it.
There is a difference between having a pre-existing program do things with stereoscopic glasses and doing what you're trying to do. What you are attempting to do is use the built-in stereo support of OpenGL: the ability to create a stereoscopic framebuffer, where you can render to the left and right framebuffers arbitrarily.
NVIDIA does not allow that with their non-Quadro cards. The driver has hacks that will force stereo on applications via nVision and the control panel, but NVIDIA's GeForce drivers do not allow you to create stereoscopic framebuffers.
And before you ask, no, I have no idea why NVIDIA doesn't let you control stereo.

Since I was looking into this issue for my own game, I found this link where somebody hacked the USB protocol: http://users.csc.calpoly.edu/~zwood/teaching/csc572/final11/rsomers/
I didn't follow it through, but at the time I was researching this it didn't look too hard to make use of that information. So you might have to implement your own code in order to support it in your app, which should be possible. Unfortunately, a generic solution would be harder, because then you would have to hack the driver or somehow hook into the OpenGL library and intercept the calls.

Related

RPI OpenGL PWM display driver

So I'm building a system based on a Raspberry Pi 4 running Linux (image created through Buildroot) driving an LED matrix (64x32 RGB connectors), and I'm very confused about the Linux software stack. I'd like to be able to use OpenGL capabilities on a small-resolution screen whose contents would then be transferred to a driver that actually drives the LED matrix.
I've read about DRM, KMS, GEM and other systems, and I've concluded the best way to go about it would be the following scheme:
User space:     App
                 |  OpenGL
                 v
Kernel space:   DRM --GEM--> LED device driver
                 |
                 v
Hardware:       LED matrix
Some of this may not make a lot of sense since the concepts are still confusing to me.
Essentially, the app would make OpenGL calls that generate frames, those frames would be mapped to DRM buffers, the buffers would be shared with the LED device driver, and that driver would then drive the LEDs in the matrix.
Would something like this be the best way about it?
I could just program a dumb-buffer CPU implementation, but I'd rather take this as a learning experience.
OpenGL renders into a buffer (called the "framebuffer") that is usually displayed on the screen. Rendering into an off-screen buffer, as the name implies, does not render onto the screen but into an array that can be read back by C/C++. There is one indirection on modern operating systems: usually you have multiple windows visible on your screen, so the application can't render onto the screen itself but into a buffer managed by the windowing system, which is then composited into one final image. Linux uses Wayland for this: multiple Wayland clients create and draw into buffers that the Wayland compositor then combines.
If you only need your own application's rendering, just use an off-screen buffer.
If you want to display another application, read its framebuffer by writing your own Wayland compositor. Note that this may be hard (I've never done it), especially if you want to use hardware acceleration.
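A minimal sketch of that off-screen approach for the LED matrix case, assuming a current OpenGL ES context already exists (for example one created through EGL/GBM) and an FBO sized to the matrix; led_matrix_push() is a hypothetical stand-in for whatever interface the real LED driver exposes.

    /* Sketch: render off screen into a small FBO matching the matrix size
     * and read the pixels back, so the application can hand them to the
     * LED driver. Assumes a current GLES2 context and an existing FBO. */
    #include <GLES2/gl2.h>
    #include <stdint.h>

    #define MATRIX_W 64
    #define MATRIX_H 32

    static uint8_t pixels[MATRIX_W * MATRIX_H * 4];  /* RGBA readback buffer */

    void render_one_frame(GLuint fbo)
    {
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glViewport(0, 0, MATRIX_W, MATRIX_H);
        glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);
        /* ... draw the scene for the LED matrix here ... */

        /* Read the rendered image back into CPU memory. */
        glReadPixels(0, 0, MATRIX_W, MATRIX_H, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

        /* led_matrix_push() is a placeholder for the driver interface
         * (ioctl, write(), DRM dumb buffer, ...). */
        /* led_matrix_push(pixels); */
    }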

Can an EGL application run in console mode?

I want to implement an OpenGL application which generates images that I then view via a webpage.
The application is intended to run on a Linux server which has no display and no X Windows, but does have a GPU.
I know that EGL can use a pixmap or a pbuffer as render target.
But the function eglGetDisplay worries me; it sounds like I still need an attached display to make it work?
Does EGL work without a display and without X Windows or Wayland?
This is a recurring question. TL;DR: with the current Linux graphics driver model it is impossible to use the GPU through the traditional drivers without running an X server. If the GPU is supported by KMS+DRM+DRI you can do it. (EDIT:) Also, in 2016 Nvidia finally introduced truly headless OpenGL support in their drivers through EGL.
The long story is that technically GPUs are perfectly capable of rendering to an offscreen buffer without a display being attached or a graphics server running. However, due to the history of graphics driver and environment development, this has not been possible for a long time. The assumption back then (when graphics support was first introduced to Linux) was: "The graphics device is there to deliver a picture to a screen." That a graphics card could be used as an accelerating coprocessor was not even a glimmer of an idea.
Add to this that, until a few years ago, the Linux kernel itself had no idea how to talk to graphics devices (other than a dumb framebuffer somewhere in the system's address space). The X server was what talked to GPUs, so you needed it to run. And the first X server developers made the assumption that there would be a person between keyboard and chair.
So what are your options:
Short term, if you're using an NVidia GPU: just start an X server. You don't need a full-blown desktop environment; you can even save yourself the trouble of starting a window manager. Just have the X server claim the VT and be active. There is now also support for headless OpenGL contexts through EGL in the Nvidia drivers.
If you're using an AMD or Intel GPU you can talk directly to it, either through EGL or using KMS (Google for something called kmscube; when trying it, make sure you switch away from your X server to a text VT first, otherwise you'll crash the X server). I haven't tried it yet, but it should be possible to adjust the kmscube example so that it uses the GPU to render into an offscreen buffer, without switching the VT to graphics mode or producing any graphics output on the display framebuffer at all.
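For the EGL route, here is a minimal sketch of creating a truly headless context, assuming the driver exposes EGL_EXT_device_enumeration and EGL_EXT_platform_device (the path NVIDIA documented for headless EGL); on Mesa/other drivers the details may differ.

    /* Sketch: headless OpenGL context via EGL, no X or Wayland involved. */
    #include <EGL/egl.h>
    #include <EGL/eglext.h>

    int main(void)
    {
        PFNEGLQUERYDEVICESEXTPROC queryDevices =
            (PFNEGLQUERYDEVICESEXTPROC)eglGetProcAddress("eglQueryDevicesEXT");
        PFNEGLGETPLATFORMDISPLAYEXTPROC getPlatformDisplay =
            (PFNEGLGETPLATFORMDISPLAYEXTPROC)eglGetProcAddress("eglGetPlatformDisplayEXT");
        if (!queryDevices || !getPlatformDisplay)
            return 1;  /* required EGL extensions not available */

        EGLDeviceEXT devices[8];
        EGLint numDevices = 0;
        queryDevices(8, devices, &numDevices);
        if (numDevices == 0)
            return 1;

        /* Get a display for the first GPU, with no windowing system. */
        EGLDisplay dpy = getPlatformDisplay(EGL_PLATFORM_DEVICE_EXT, devices[0], NULL);
        eglInitialize(dpy, NULL, NULL);

        static const EGLint cfgAttribs[] = {
            EGL_SURFACE_TYPE, EGL_PBUFFER_BIT,
            EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,
            EGL_NONE
        };
        EGLConfig cfg; EGLint n;
        eglChooseConfig(dpy, cfgAttribs, &cfg, 1, &n);

        static const EGLint pbAttribs[] = { EGL_WIDTH, 1024, EGL_HEIGHT, 1024, EGL_NONE };
        EGLSurface surf = eglCreatePbufferSurface(dpy, cfg, pbAttribs);

        eglBindAPI(EGL_OPENGL_API);
        EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, NULL);
        eglMakeCurrent(dpy, surf, surf, ctx);

        /* ... render, glReadPixels the result, serve it to the webpage ... */
        return 0;
    }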
As datenwolf said, you can create a framebuffer without using X on AMD and Intel GPUs. I am using an AMD graphics card with EGL, and I am able to create a framebuffer and draw to it, using the Mesa library configured to build without X.

What's the point of Nvidia 3D Vision and AMD HD3D?

Why do they promote these weird technologies instead of just supporting OpenGL quad buffering?
They do say that AMD cards beginning with the HD6000 series support OpenGL quad buffering, yet HD3D is still what you see on the front pages (well, maybe because there is no native DirectX quad-buffering support yet)...
Two reasons. First, keeping an incentive for professional users who need quad-buffer stereo to buy the professional cards; with 3D Vision being pushed so hard, a lot of people started asking "uncomfortable" questions. The other reason was an attempt at vendor lock-in with a custom API, so that 3D Vision games would work only on NVidia hardware.
Similar reasoning applies on the AMD side. However, the FireGL cards didn't keep up with the Radeons, so there is little reason for AMD to make their Radeon cards less attractive to professionals (the current AMD FireGL cards cannot compete with NVidia's Quadros, so the Radeons are also the competition for the Quadros); having quad-buffer OpenGL support on them was the logical decision.
Note that this is a pure marketing decision. There have never been technical reasons of any kind for this artificial limitation of consumer cards.
Windows 8.1 supports Stereoscopic modes right out of the box, in DirectX 11.1.
AMD HD3D and NVidia 3DVision add:
1) Enumeration of Stereo 3D modes on Windows < 8.1 (on Windows 8.1 the DirectX API provides this)
2) Sending the EDID signal to the monitor to enable/disable 3D on Windows < 8.1 (on Windows 8.1, the DirectX API provides this)
3) Rendering the left and right camera in an above/below arrangement -- the API tells you the offset to use for the right image, and you then use standard double buffering instead of quad buffering. (On Windows 8.1, this is not necessary -- sensing a pattern? A sketch of this layout follows at the end of this answer.)
3DVision adds the following:
1) Support for desktop apps to run in stereo without engaging full-screen mode (and it sometimes actually works).
2) Support for forcing non-stereoscopic games to render stereoscopically by intercepting the drawing calls (this works most of the time -- on AMD, you can get the same thing by buying TriDef or iZ3D).
3) An NVidia-standard connector (proprietary, but common to all NVidia cards) for the IR transmitter and shutter glasses. (AMD, and NVidia as well, can instead use the HDMI 3D spec and leave the glasses up to the monitor company.)
Note:
The key feature in both cases is being able to enumerate modes that have stereo support, and being able to send the EDID code to the monitor to turn on the stereo display.
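To make point 3 of the first list concrete, here is a hedged sketch of the above/below packing. rightOffsetY is a hypothetical value standing in for whatever offset the vendor API reports, and the exact placement conventions differ between APIs, so treat the layout as illustrative only.

    /* Sketch: both eyes packed into one ordinary double-buffered backbuffer,
     * left eye in the top half, right eye below at the reported offset.
     * A normal SwapBuffers follows -- no quad buffering involved. */
    #include <GL/gl.h>

    void draw_stereo_frame(int width, int height, int rightOffsetY)
    {
        int eyeHeight = height / 2;

        /* Left eye: top half of the backbuffer. */
        glViewport(0, height - eyeHeight, width, eyeHeight);
        /* ... set left-eye camera, draw scene ... */

        /* Right eye: placed at the vertical offset reported for the right image. */
        glViewport(0, rightOffsetY, width, eyeHeight);
        /* ... set right-eye camera, draw scene ... */
    }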

Is Cairo accelerated on the OpenGL backend?

By this I mean: does Cairo draw lines, shapes and everything else using OpenGL-accelerated primitives or not? And if not, is there a library that does?
The OpenGL backend certainly accelerates some functions, but not all of them. The fact that it's written against GL 2.1 (and thus can't use the more advanced features of 3.x or 4.x hardware) means that there is a lot it simply cannot accelerate.
If you are willing to limit yourself to NVIDIA hardware, NVIDIA just came out with the NV_path_rendering extension, which provides a lot of the 2D functionality you would find with Cairo. Indeed, it's possible that you could write a Cairo backend for it. The path rendering extension is only available on GeForce 8xxx hardware and above.
It's nifty in that it's focused on the vertex pipeline. It doesn't do things like gradients or colors or whatever. That's good, because it still allows you the use of a fragment shader. Which means you get to do pretty much whatever you want ;)
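For illustration, a minimal sketch of NV_path_rendering's "stencil, then cover" idiom, assuming a context that actually exposes the extension (GeForce 8xxx and later); the SVG path string is just an arbitrary example.

    /* Sketch: fill a resolution-independent path with NV_path_rendering. */
    #include <GL/glew.h>
    #include <string.h>

    void draw_path(void)
    {
        static const char *svg =
            "M100,180 C40,120 40,40 100,80 C160,40 160,120 100,180 Z";

        GLuint path = glGenPathsNV(1);
        glPathStringNV(path, GL_PATH_FORMAT_SVG_NV, (GLsizei)strlen(svg), svg);

        /* Pass 1: rasterize the path coverage into the stencil buffer. */
        glStencilFillPathNV(path, GL_COUNT_UP_NV, 0x1F);

        /* Pass 2: cover the stenciled area; the current fragment shader
         * (or fixed-function color) decides how it is shaded. */
        glEnable(GL_STENCIL_TEST);
        glStencilFunc(GL_NOTEQUAL, 0, 0x1F);
        glStencilOp(GL_KEEP, GL_KEEP, GL_ZERO);
        glCoverFillPathNV(path, GL_BOUNDING_BOX_NV);

        glDeletePathsNV(path, 1);
    }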
Cairo is designed to have a flexible backend for rendering. It can use OpenGL for rendering, though support is still listed as "experimental" at this point. For details, see using cairo with OpenGL.
It can also output to the X Window System, Quartz, Win32, image buffers, PostScript, PDF, and SVG, and more.
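For completeness, a rough sketch of what setting up the experimental cairo-gl backend looks like under GLX; the exact entry points can differ between cairo versions and builds, so take the names as assumptions to verify against your cairo headers.

    /* Sketch: drawing through cairo's experimental OpenGL backend, assuming
     * cairo was built with GL/GLX support and a GLX context already exists. */
    #include <GL/glx.h>
    #include <cairo/cairo-gl.h>

    void draw_with_cairo_gl(Display *xdpy, GLXContext glctx, int width, int height)
    {
        cairo_device_t  *dev  = cairo_glx_device_create(xdpy, glctx);
        cairo_surface_t *surf = cairo_gl_surface_create(dev,
                                    CAIRO_CONTENT_COLOR_ALPHA, width, height);
        cairo_t *cr = cairo_create(surf);

        /* Ordinary cairo drawing; the backend maps it to GL where it can. */
        cairo_set_source_rgb(cr, 0.9, 0.2, 0.2);
        cairo_arc(cr, width / 2.0, height / 2.0, 100.0, 0.0, 2.0 * 3.14159265);
        cairo_fill(cr);

        cairo_destroy(cr);
        cairo_surface_destroy(surf);
        cairo_device_destroy(dev);
    }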

Seamless multi-screen OpenGL rendering with heterogeneous multi-GPU configuration on Windows XP

On Windows XP (64-bit) it seems to be impossible to render with OpenGL to two screens connected to different graphics cards with different GPUs (e.g. two NVIDIAs of different generations). What happens in this case is that rendering works on only one of the screens. With Direct3D, on the other hand, it works without problems, rendering on both screens. Does anyone know why this is? Or, more importantly: is there a way to render on both screens with OpenGL?
I have discovered that on Windows 7 rendering works on both screens even with GPUs of different brands (e.g. AMD and Intel). I think this may be because of its display model, which runs on top of a Direct3D compositor if I am not mistaken. This is just a supposition; I really don't know if that is the actual reason.
If Direct3D turns out to be the solution, one idea would be to do all the rendering with OpenGL into a texture, and then somehow display that texture with Direct3D, supposing it isn't too slow.
What happens on Windows 7 is that one GPU, or several GPUs of the same type coupled together, render the image into an offscreen buffer, which is then composited across the screens. However, it is (still) impossible to distribute the rendering of a single context over GPUs of different makes. That would require a standardized communication and synchronization infrastructure, which simply doesn't exist. Neither OpenGL nor Direct3D can do it.
What can be done is copying the rendering results into the onscreen framebuffers of several GPUs. Windows 7 and DirectX have support for this built in. Doing it with OpenGL is a bit more involved: technically you render into an offscreen device context, usually a so-called PBuffer, and after finishing the rendering you copy the result to your window using GDI functions. This last copying step, however, is very slow compared to the rest of the OpenGL operation.
Both NVIDIA and AMD have ways of allowing you to choose which GPU to use. NVIDIA has WGL_NV_gpu_affinity and AMD has WGL_AMD_gpu_association. They both work rather differently, so you'll have to do different things on the different hardware to get the behavior you need.
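As an illustration of the NVIDIA side, a hedged sketch of WGL_NV_gpu_affinity usage; note that this extension is only exposed on Quadro drivers, and a dummy context must already be current so that wglGetProcAddress can resolve the entry points. The AMD path (WGL_AMD_gpu_association) uses different functions entirely.

    /* Sketch: restrict an OpenGL device context to one specific NVIDIA GPU. */
    #include <windows.h>
    #include <GL/wglext.h>

    HDC create_affinity_dc_for_gpu(UINT gpuIndex)
    {
        PFNWGLENUMGPUSNVPROC wglEnumGpusNV =
            (PFNWGLENUMGPUSNVPROC)wglGetProcAddress("wglEnumGpusNV");
        PFNWGLCREATEAFFINITYDCNVPROC wglCreateAffinityDCNV =
            (PFNWGLCREATEAFFINITYDCNVPROC)wglGetProcAddress("wglCreateAffinityDCNV");
        if (!wglEnumGpusNV || !wglCreateAffinityDCNV)
            return NULL;  /* extension not present (e.g. non-Quadro driver) */

        HGPUNV gpu;
        if (!wglEnumGpusNV(gpuIndex, &gpu))
            return NULL;  /* no GPU with that index */

        /* NULL-terminated list of GPUs this device context is restricted to. */
        HGPUNV gpuList[2] = { gpu, NULL };
        return wglCreateAffinityDCNV(gpuList);
        /* Choose a pixel format on this DC and create the GL context on it;
         * all rendering with that context then runs on the selected GPU. */
    }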