Why do they promote these weird technologies instead of just supporting OpenGL quad buffering?
Well, they say AMD cards beginning with the HD6000 series support OpenGL quad buffering, yet HD3D is still what you see on the front pages (maybe because there is no native DirectX quad-buffering support yet)...
Two reasons: first, keeping an incentive for professional users who need quad-buffer stereo to buy the professional cards. With 3D Vision being pushed so hard, a lot of people started asking "uncomfortable" questions. The other reason was an attempt at vendor lock-in through a custom API, so that 3D Vision games would work only on NVidia hardware.
The reasoning was similar on AMD's side. However, the FireGL cards didn't keep up with the Radeons, so there is little reason for AMD to make its Radeon cards less attractive to professionals (current AMD FireGL cards cannot compete with NVidia Quadros; the Radeons are also the real competition for the Quadros), so offering quad-buffer OpenGL support on them was the logical decision.
Note that this is a pure marketing decision. There have never been technical reasons of any kind for this artificial limitation of consumer cards.
Windows 8.1 supports Stereoscopic modes right out of the box, in DirectX 11.1.
AMD HD3D and NVidia 3DVision add:
1) Enumeration of Stereo 3D modes on Windows prior to 8.1 (on Windows 8.1, the DirectX API provides this)
2) Sending the EDID signal to the monitor to enable/disable 3D on Windows prior to 8.1 (on Windows 8.1, the DirectX API provides this)
3) Rendering the left and right cameras in an above/below arrangement -- the API tells you the offset to use for the right image, and you then use standard double buffering instead of quad buffering. (On Windows 8.1 this is not necessary -- sensing a pattern?)
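The above/below arrangement in point 3 comes down to plain viewport arithmetic. The sketch below is an illustration, not the HD3D/3DVision API: the packed-buffer layout (left eye on top, right eye below, separated by a vendor-reported gap) is an assumption based on common frame-packed formats, and the function names are my own.

```cpp
// Placement of one eye's image inside a packed above/below back buffer.
struct Viewport { int x, y, width, height; };

// Given the per-eye image height and the reported row offset (gap)
// between the two images, compute where each eye renders.  The packed
// buffer is assumed to be eyeHeight * 2 + gap rows tall.
Viewport leftEyeViewport(int width, int eyeHeight, int /*gap*/) {
    return { 0, 0, width, eyeHeight };               // top half
}

Viewport rightEyeViewport(int width, int eyeHeight, int gap) {
    return { 0, eyeHeight + gap, width, eyeHeight }; // below the gap
}

int packedBufferHeight(int eyeHeight, int gap) {
    return eyeHeight * 2 + gap;
}
```

In a real renderer you would pass each rectangle to glViewport (or a D3D viewport) before drawing the corresponding camera, then present with an ordinary double-buffered swap.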
3DVision adds the following:
1) Support for desktop apps to run in Stereo without engaging full screen mode (and it sometimes actually works).
2) Support for forcing non-stereoscopic games into stereo by intercepting their draw calls (this works most of the time -- on AMD, you can get the same thing by buying TriDef or iZ3D).
3) An NVidia-standard connector (i.e. proprietary, but common to all NVidia cards) for the IR transmitter and shutter glasses. (AMD -- and NVidia can do this as well -- uses the HDMI 3D spec and leaves the glasses up to the monitor company.)
Note:
The key feature in both cases is being able to enumerate modes that have stereo support, and being able to send the EDID code to the monitor to turn on the stereo display.
Related
On the website of the Oculus Rift it is stated that the minimum system requirements are an NVIDIA GTX 970 / AMD R9 290 equivalent or greater. I am aware that the Quadro M1000M does not meet those requirements.
My intention is to use the Oculus Rift for developing educational applications (visualization of molecular structures) which in terms of computational demand does not even come close to modern games.
For the above-mentioned kind of purpose, would the Oculus Rift run fine on less powerful GPUs (such as the Quadro M1000M), or is the driver developed in such a way that it simply "blocks" cards that do not meet the required specifications?
Further information:
I intend to develop my application on Linux using GLFW in combination with LibOVR, as mentioned in this guide: http://www.glfw.org/docs/3.1/rift.html.
edit
It was pointed out that the SDK does not support Linux. So as an alternative option, I could also use Windows / Unity.
Any personal experiences on the topic are highly appreciated!
Consumer Oculus Rift hardware has not been reverse engineered to the point where you can use it without the official software, which currently only supports Windows based desktop systems running one of a specific number of supported GPUs. It will not function on any mobile GPU, nor on any non-Windows OS. Plugging the HMD into the display port on systems where the Oculus service isn't running will not result in anything appearing on the headset.
The Oculus DK2 and DK1 can both be made to function on alternative operating systems and with virtually any graphics card, since when connected they are detected by the OS as just another monitor.
Basically your only path is to either use older HMD hardware, wait for Oculus to support other platforms, or wait for someone to reverse engineer the interaction with the production HMD hardware.
To answer my own question (I hope that's OK): I bought an Oculus Rift CV1. It turns out it runs smoothly on my HP ZBook G3, which has a Quadro M1000M card in it. Admittedly, the Oculus desktop application warns that my machine does not meet the required specifications, and indeed, if I render a scene with lots of complicated graphics and turn my head, the visuals tend to 'stutter' a bit.
I tested a couple of very simple scenes in Unity 5 and these run without any kind of problems. I would say that the above mentioned hardware is perfectly suitable for any kind of educational purposes I had in mind, just nothing extremely fancy.
As #SurvivalMachine mentioned in the comments, Optimus can be a bit problematic, but this is resolved by turning hybrid graphics off in the BIOS (which I heard is possible for the HP ZBook series, but not for all laptops). Furthermore, the laptop needs to be connected to a power outlet (i.e. not running on its battery) for the graphics card to work properly with the Oculus Rift.
WebGL is based on OpenGL ES 2.0.
Is it correct to say that Stage3D is also based on OpenGL? I mean, does it call OpenGL functions? Or does it call Direct3D when running on Windows?
If not, what API does Stage3D use for hardware acceleration?
The accepted answer is incorrect unfortunately. Stage 3D uses:
DirectX on Windows systems
OpenGL on OSX systems
OpenGL ES on mobile
Software renderer when no hardware acceleration is available (due to older hardware, or no hardware at all)
Please see: http://www.slideshare.net/danielfreeman779/adobe-air-stage3d-and-agal
Good day. Stage3D isn't based on either of those APIs, though it shares similar methodology/terminology. It is its own rendering pipeline, which is why Adobe is so excited about it.
Have a look at this: http://www.adobe.com/devnet/flashplayer/articles/how-stage3d-works.html
You can skip down to this heading "Comparing the advantages and restrictions of working with Stage3D" to get right down to it.
Also, take a peek at this: http://www.adobe.com/devnet/flashplayer/stage3d.html, excerpt:
The Stage3D APIs in Flash Player and Adobe AIR offer a fully hardware-accelerated architecture that brings stunning visuals across desktop browsers and iOS and Android apps, enabling advanced 2D and 3D capabilities. This set of low-level GPU-accelerated APIs provides developers with the flexibility to leverage GPU hardware acceleration for significant performance gains in video game development, whether you’re using cutting-edge 3D game engines or the intuitive, lightning-fast Starling 2D framework that powers Angry Birds.
I have hit a brick wall and I wonder if someone here can help. My program opens an OpenGL surface for very minor rendering needs. It seems that on the MacBook Pro this causes the graphics driver to switch the hybrid setup from the low-power Intel graphics to the high-performance AMD/ATI graphics.
This causes me problems, as there seems to be an issue with the AMD driver when putting the Mac to sleep, and it also drains the battery unnecessarily fast. I only need OpenGL to create a static 3D image on occasion; I do not require a fast frame rate!
Is there a way in a Cocoa app to prevent OpenGL switching a hybrid graphics card into performance mode?
The relevant documentation for this is QA1734, “Allowing OpenGL applications to utilize the integrated GPU”:
… On OS X 10.6 and earlier, you are not allowed to choose to run on the integrated GPU instead. …
On OS X 10.7 and later, there is a new attribute called NSSupportsAutomaticGraphicsSwitching. To allow your OpenGL application to utilize the integrated GPU, you must add in the Info.plist of your application this key with a Boolean value of true…
So you can only do this on Lion, and “only … on the dual-GPU MacBook Pros that were shipped Early 2011 and after.”
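For reference, the key described in QA1734 goes into the app bundle's Info.plist; a minimal fragment (surrounding plist boilerplate elided) would look like:

```xml
<key>NSSupportsAutomaticGraphicsSwitching</key>
<true/>
```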
There are a couple of other important caveats:
Additionally, you must make sure that your application works correctly with multiple GPUs or else the system may continue forcing your application to use the discrete GPU. TN2229 Supporting Multiple GPUs on Mac OS X discusses in detail the required steps that you need to follow.
and:
Features that are available on the discrete GPU may not be available on the integrated GPU. You must check that features you desire to use exist on the GPU you are using. For a complete listing of supported features by GPU class, please see: OpenGL Capabilities Tables.
I am using the open source haptics and 3D graphics library Chai3D running on Windows 7. I have rewritten the library to do stereoscopic 3D with Nvidia nvision. I am using OpenGL with GLUT, and using glutInitDisplayMode(GLUT_RGB | GLUT_DEPTH | GLUT_DOUBLE | GLUT_STEREO) to initialize the display mode. It works great on Quadro cards, but on GTX 560m and GTX 580 cards it says the pixel format is unsupported. I know the monitors are capable of displaying the 3D, and I know the cards are capable of rendering it. I have tried adjusting the resolution of the screen and everything else I can think of, but nothing seems to work. I have read in various places that stereoscopic 3D with OpenGL only works in fullscreen mode. So, the only possible reason for this error I can think of is that I am starting in windowed mode. How would I force the application to start in fullscreen mode with 3D enabled? Can anyone provide a code example of quad buffer stereoscopic 3D using OpenGL that works on the later GTX model cards?
What you are experiencing has no technical reason; it is simply NVidia's product policy. Quad-buffer stereo is considered a professional feature, so NVidia offers it only on its Quadro cards, even though the GeForce GPUs could do it just as well. This is not a recent development; it was like this already back in 1999. For example, I had (well, still have) a GeForce2 Ultra back then. Technically it was the very same chip as the Quadro; the only difference was the PCI ID reported to the system. One could trick the driver into thinking you had a Quadro by tinkering with the PCI IDs (either by patching the driver or by soldering an additional resistor onto the graphics card's PCB).
The stereoscopic 3D hack for Direct3D was already supported by my GeForce2 back then. At the time the driver duplicated the rendering commands, applying a translation to the modelview matrix and a skew to the projection matrix. These days it's implemented with a shader and multi-render-target trick.
The NVision3D API does allow you to blit images for specific eyes (this is meant for movie players and image viewers). But it also allows you to emulate quad-buffer stereo: instead of the GL_BACK_LEFT and GL_BACK_RIGHT buffers, create two framebuffer objects, which you bind and use as if they were the quad-buffer stereo buffers. After rendering, you blit the resulting images (as textures) through the NVision3D API.
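The management code this describes boils down to routing "bind left/right eye" to either the stereo back buffers or a pair of FBOs. Here is a rough, GL-free sketch of that routing layer; the actual glDrawBuffer/glBindFramebuffer and NVision3D blit calls are stubbed out as strings, and all names are my own invention, not a real API:

```cpp
#include <string>

enum class Eye { Left, Right };
enum class StereoPath { QuadBuffer, FboEmulation };

// Returns the draw target to select for the given eye.  With native
// quad-buffer stereo this would be glDrawBuffer(GL_BACK_LEFT/RIGHT);
// with the emulation path it would be glBindFramebuffer() on one of
// two offscreen FBOs whose textures are later handed to NVision3D.
std::string drawTargetFor(StereoPath path, Eye eye) {
    if (path == StereoPath::QuadBuffer)
        return eye == Eye::Left ? "GL_BACK_LEFT" : "GL_BACK_RIGHT";
    return eye == Eye::Left ? "fbo[0]" : "fbo[1]";
}

// After both eyes are rendered: quad-buffer stereo just swaps buffers,
// while the emulation path blits the two FBO textures via NVision3D.
std::string presentStep(StereoPath path) {
    return path == StereoPath::QuadBuffer ? "SwapBuffers"
                                          : "NVision3D blit";
}
```

The rest of the renderer never needs to know which path is active, which is why the answer's "50 lines of management code" estimate is plausible.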
With as little as 50 lines of management code you can build a program that works seamlessly on both NVision3D and quad-buffer stereo. What NVidia does is pointless; they should just stop it and properly support quad-buffer stereo pixel formats on consumer GPUs as well.
Simple: you can't. Not the way you're trying to do it.
There is a difference between having a pre-existing program do things with stereoscopic glasses and doing what you're trying to do. What you are attempting to do is use the built-in stereo support of OpenGL: the ability to create a stereoscopic framebuffer, where you can render to the left and right framebuffers arbitrarily.
NVIDIA does not allow that with their non-Quadro cards. It has hacks in the driver that will force stereo on applications with nVision and the control panel. But NVIDIA's GeForce drivers do not allow you to create stereoscopic framebuffers.
And before you ask, no, I have no idea why NVIDIA doesn't let you control stereo.
Since I was looking into this issue for my own game, I found this link where somebody reverse engineered the USB protocol: http://users.csc.calpoly.edu/~zwood/teaching/csc572/final11/rsomers/
I didn't follow it through, but at the time I was researching this it didn't look too hard to make use of that information. So you might have to implement your own code to support it in your app, which should be possible. Unfortunately, a generic solution would be harder, because you would have to hack the driver or somehow hook into the OpenGL library and intercept the calls.
I know that if the OpenGL implementation does not find a suitable driver, it happily falls back to rendering everything in software. That's fine for graphics applications, but it is not acceptable for computer games.
I know many users run Windows XP, and if the user does not install the video driver for their GPU, OpenGL won't be hardware accelerated (while DirectX either is, or throws errors).
Is there a better (and possibly cross-platform) way to determine whether OpenGL is using hardware acceleration than measuring the FPS and notifying the user if it is too low?
I know that games like Quake3 can find it out somehow...
It seems that there is no direct way to query OpenGL for this, but there are some methods that may help you determine whether hardware acceleration is present. See here for Windows ideas. In a UNIX environment, glxinfo | grep "direct rendering" should work.
See also glGetString and 5.040 How do I know my program is using hardware acceleration on a Wintel card?
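A common heuristic along those lines is to inspect the glGetString(GL_RENDERER) result after creating a context and match it against known software implementations. A rough sketch; the list of renderer names is an assumption drawn from widely seen implementations ("GDI Generic" is Microsoft's unaccelerated OpenGL 1.1 fallback, llvmpipe/softpipe are Mesa's software paths) and is not exhaustive:

```cpp
#include <string>

// Heuristic: does the GL_RENDERER string name a known software
// rasterizer?  Call after context creation, e.g.
//   isSoftwareRenderer(reinterpret_cast<const char*>(glGetString(GL_RENDERER)))
bool isSoftwareRenderer(const std::string& renderer) {
    const char* software[] = { "GDI Generic", "llvmpipe", "softpipe",
                               "Software Rasterizer", "Mesa X11" };
    for (const char* name : software)
        if (renderer.find(name) != std::string::npos)
            return true;
    return false;
}
```

This is only a blocklist, so an unknown software renderer would slip through; combining it with a timing sanity check is still advisable.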
This previous answer suggests that checking to see if the user only has OpenGL 1.1 may be sufficient.
How to write an installer that checks for openGL support?