Will Oculus Rift work with Quadro M1000M for non-gaming purposes? - glfw

On the website of Oculus Rift it is stated that the minimum system requirement for the Oculus Rift is an NVIDIA GTX 970 / AMD R9 290 equivalent or greater. I am aware that the Quadro M1000M does not meet that requirement.
My intention is to use the Oculus Rift for developing educational applications (visualization of molecular structures), which in terms of computational demand do not even come close to modern games.
For the above-mentioned kind of purpose, would the Oculus Rift run fine on less powerful GPUs (i.e. the Quadro M1000M) or is the driver developed in such a way that it simply "blocks" cards that do not meet the required specifications?
Further information:
I intend to develop my application on Linux using GLFW in combination with LibOVR, as described in this guide: http://www.glfw.org/docs/3.1/rift.html.
Edit:
It was pointed out that the SDK does not support Linux. As an alternative, I could also use Windows / Unity.
Any personal experiences on the topic are highly appreciated!

Consumer Oculus Rift hardware has not been reverse-engineered to the point where you can use it without the official software, which currently only supports Windows-based desktop systems with one of a specific set of supported GPUs. It will not function on any mobile GPU, nor on any non-Windows OS. Plugging the HMD into the display port on a system where the Oculus service isn't running will not result in anything appearing on the headset.
The Oculus DK2 and DK1 can both be made to function on alternative operating systems and with virtually any graphics card, since when connected they are detected by the OS as just another monitor.
Basically, your only options are to use older HMD hardware, wait for Oculus to support other platforms, or wait for someone to reverse-engineer the interaction with the production HMD hardware.

To answer my own question (I hope that's OK): I bought an Oculus Rift CV1. It turns out it runs smoothly on my HP ZBook G3, which has a Quadro M1000M card in it. Admittedly, the Oculus desktop application gives a warning that my machine does not meet the required specifications. Indeed, if I render a scene with lots of complicated graphics and turn my head, the visuals tend to 'stutter' a bit.
I tested a couple of very simple scenes in Unity 5 and these run without any problems. I would say that the above-mentioned hardware is perfectly suitable for the kind of educational purposes I had in mind, just nothing extremely fancy.
As @SurvivalMachine mentioned in the comments, Optimus can be a bit problematic, but this is resolved by turning hybrid graphics off in the BIOS (which I heard is possible for the HP ZBook series, but not for all laptops). Furthermore, the laptop needs to be connected to a power outlet (i.e. not running on its battery) for the graphics card to work properly with the Oculus Rift.

Related

ovr_CreateSwapTextureSetGL fails [OpenGL + Oculus 0.8.0 / DK2]

As you probably know, the DK2 supports a new mode called Direct Mode that reduces latency and hence improves the VR experience. When I run the DK2 samples that come with the currently latest 0.8.0 (beta) SDK, the DirectX 11 version of the OculusTinyRoom sample runs fine.
My problem: the OpenGL version (using the 3.3 profile) uses a function called ovrHmd_CreateSwapTextureSetGL() that returns a texture set with zero textures (but calls glGenBuffers as a fallback), and the return value is -1006 (ovrServiceError).
I've seen many reports of problematic OpenGL support on the Oculus Developer Forum. In earlier versions of the SDK, OpenGL support was neglected from 0.2.4 onward and seems to have been restored from version 0.5 and up (all in Client Rendering Mode). Nothing is said about the newer Direct Mode, except that for some people it failed to work at all if they had a second screen attached, even in DirectX 11. That is not the case for me.
I've also seen people suggest uninstalling NVidia's 3D Vision drivers, because they may conflict with the Oculus Rift drivers. They report dramatic framerate improvements, although I only got about a 10% improvement myself. Apparently NVidia's GameWorks VR hurts driver performance just by being installed. Unfortunately, uninstalling the drivers does not fix the problem.
The latest driver update (361.34) suggests improved Oculus and OpenGL support in GameWorks VR, as well as Direct Mode support (even for SLI setups, which seems to have pretty impressive results). But that's an NVidia-only solution; AMD has LiquidVR as an alternative. Either way, I'd still like to use the Oculus SDK stack.
I am using both a GeForce GTX 480 and a Titan X.
I went back to the second-screen issue that some people seem to have had. Since DX11 worked for me, I figured my problem was different.
During my research I found a few interesting forum posts on Reddit suggesting that part of the problem might stem from using multiple monitors. It seems this has since been fixed for DX11 but not for OpenGL.
So I can confirm that turning off any secondary screens connected to secondary cards fixes the problem. For OpenGL, you have to connect ALL your output devices to the SAME card.
I did some more testing:
What worked:
Primary screen AND Oculus both connected to the Titan X (the 480 not connected).
Connecting both screens and the Oculus to the Titan X also worked (!)
What did not work:
Connecting the primary to the Titan and the Oculus to the 480 does not work.
Connecting the primary to the 480 and the Oculus to the Titan also does not work.
So it seems to be a driver issue with the graphics device enumeration.
Note: this was AFTER I removed the NVidia 3D Vision drivers and updated to build 361.43, so it might also still have been related to having them installed. If someone can confirm this, it would be nice to know.
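The rule that emerged from this testing (for the OpenGL path, the HMD and every monitor must hang off the same card) can be sketched as a trivial check. This is purely illustrative; the adapter and device names below are made up, not anything the Oculus SDK or driver exposes:

```python
def same_adapter_for_gl(outputs):
    """outputs: list of (device_name, adapter_name) tuples.

    Returns True if all outputs (monitors and the HMD) share one
    adapter, which is the only configuration that worked for OpenGL
    in the tests above."""
    adapters = {adapter for _, adapter in outputs}
    return len(adapters) == 1

# The working setup: both screens and the Rift on the Titan X.
working = [("monitor1", "TitanX"), ("monitor2", "TitanX"), ("rift", "TitanX")]
# A failing setup: primary on the Titan X, Rift on the GTX 480.
failing = [("monitor1", "TitanX"), ("rift", "GTX480")]

print(same_adapter_for_gl(working))  # True
print(same_adapter_for_gl(failing))  # False
```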

Running OpenGL on Windows Server 2012 R2

This should be straightforward, but for some reason I can't make it work.
I rented a SoftLayer bare metal server that comes with an NVidia Tesla GPU.
I'm remotely executing a program (OpenSCAD) that needs OpenGL > 2.0 in order to properly export a PNG file.
When I invoke OpenSCAD and export a model, I get a 0 kB PNG file as output, a clear symptom that OpenGL > 2.0 support is not present.
To make sure that I was running OpenGL > 2.0, I connected to my server via Remote Desktop and ran GlView. To my surprise, I saw that the server supported nothing but OpenGL 1.1.
After a little research I found out that the GPU is not used for standard Remote Desktop sessions, so it makes sense that I'm only seeing OpenGL 1.1.
The problem is that when I execute OpenSCAD remotely, the GPU does not seem to be used either.
What can I do to make the GPU capabilities of my server work when I invoke OpenSCAD remotely?
PS: I checked with SoftLayer support and they are not taking any responsibility for this.
Most (currently all) OpenGL implementations that use a GPU assume that there's a display system of some sort using that GPU; in the case of Windows that would be GDI. However, on a headless server Windows usually doesn't start GDI on the GPU but uses some framebuffer instead.
The NVidia Tesla GPUs are marketed as compute-only devices, and hence their driver does not support any graphics functionality (note that this is a marketing limitation implemented in software; the silicon is perfectly capable of doing graphics). In other words: if you can implement your graphics operations using CUDA or OpenCL, then you can use the card to generate pictures. Otherwise (i.e. for OpenGL or Direct3D) it's useless.
Note that NVidia is marketing their "GRID" products for remote/cloud rendering.
I'm replying because I faced a similar problem in the past, also trying to run an application that needed OpenGL 4 on a Windows server.
Windows Remote Desktop indeed doesn't trigger OpenGL. However, if you use TigerVNC instead and then start your OpenSCAD application, it might recognize your OpenGL drivers. At least this trick did it for me.
(I presume that when a program opens an OpenGL context, it scans for the attached monitors/remote displays.)
Hope it helps.
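For what it's worth, the major.minor version reported by glGetString(GL_VERSION) can be sanity-checked with a small script before blaming the application. The version strings below are just examples of the software fallback versus what a GPU-backed context might report:

```python
import re

def gl_version_tuple(version_string):
    """Parse the leading 'major.minor' from a GL_VERSION string,
    e.g. '1.1.0' -> (1, 1), '4.6.0 NVIDIA 536.23' -> (4, 6)."""
    m = re.match(r"(\d+)\.(\d+)", version_string)
    if not m:
        raise ValueError("unrecognized GL_VERSION string: %r" % version_string)
    return int(m.group(1)), int(m.group(2))

# Microsoft's software fallback, as seen over plain Remote Desktop:
print(gl_version_tuple("1.1.0") >= (2, 0))               # False
# An example string from a working GPU-backed context:
print(gl_version_tuple("4.6.0 NVIDIA 536.23") >= (2, 0)) # True
```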

What's the point of Nvidia 3D Vision and AMD HD3D?

Why do they promote these weird technologies instead of just supporting OpenGL quad buffering?
Well, they say AMD cards beginning with the HD6000 series support OpenGL quad buffering, yet HD3D is still what you see on the front pages (well, maybe because there is no native DirectX quad-buffering support yet)...
Two reasons: first, keeping an incentive for professional users who need quad-buffer stereo to buy the professional cards (with 3D Vision being pushed so hard, a lot of people asked "uncomfortable" questions). The other reason was to attempt vendor lock-in with a custom API, so that 3D Vision games would work only on NVidia hardware.
Similar reasoning applies on AMD's side. However, the FireGL cards didn't keep up with the Radeons, so there's little reason for AMD to make the Radeons less attractive to professionals (current AMD FireGL cards cannot compete with NVidia Quadros; the Radeons are the real competition for the Quadros), so adding quad-buffer OpenGL support to them was the logical decision.
Note that this is a pure marketing decision. There have never been technical reasons of any kind for this artificial limitation of consumer cards.
Windows 8.1 supports Stereoscopic modes right out of the box, in DirectX 11.1.
AMD HD3D and NVidia 3DVision add:
1) Enumeration of Stereo 3D modes on Windows <= 8.1 (on Windows 8.1 the DirectX API provides this)
2) Sending the EDID signal to the monitor to enable/disable 3D on Windows <= 8.1 (on Windows 8.1, the DirectX API provides this)
3) Rendering the left and right cameras in an above/below arrangement -- it tells you the offset to use for the right image. You then use standard double buffering instead of quad buffering. (On Windows 8.1, this is not necessary -- sensing a pattern?)
3DVision adds the following:
1) Support for desktop apps to run in Stereo without engaging full screen mode (and it sometimes actually works).
2) Support for forcing non-stereoscopic games stereoscopic by intercepting the drawing calls. (this works most of the time -- on AMD, you can get the same thing by buying TriDef or iZ3D).
3) An NVidia-standard connector (i.e. proprietary, but common to all NVidia cards) for the IR transmitter and shutter glasses. (AMD -- and NVidia can do this as well -- uses the HDMI 3D spec and leaves the glasses up to the monitor company.)
Note:
The key feature in both cases is being able to enumerate modes that have stereo support, and being able to send the EDID code to the monitor to turn on the stereo display.
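As a rough sketch of point 3 above, splitting a double-height frame into two eye viewports for the above/below arrangement might look like the following. Which eye goes on top, and the bottom-left origin convention, are assumptions for illustration, not something any vendor API mandates:

```python
def above_below_viewports(width, height):
    """Split a double-height frame into left and right eye viewports,
    returned as (x, y, w, h) tuples with a bottom-left origin (as in
    OpenGL). Here the left eye is assumed to occupy the upper half;
    the 'offset' for the right image is simply half the frame height."""
    eye_h = height // 2
    left  = (0, eye_h, width, eye_h)   # upper half of the frame
    right = (0, 0,     width, eye_h)   # lower half, offset by eye_h
    return left, right

# A 1920x1080-per-eye frame packed above/below:
left, right = above_below_viewports(1920, 2160)
print(left)   # (0, 1080, 1920, 1080)
print(right)  # (0, 0, 1920, 1080)
```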

Does stage3d use OpenGL? or Direct3D when on Windows

WebGL is based on OpenGL ES 2.0.
Is it correct to say that Stage3D is also based on OpenGL? I mean, does it call OpenGL functions? Or does it call Direct3D when running on Windows?
If not, could you explain what API Stage3D uses for hardware acceleration?
The accepted answer is incorrect unfortunately. Stage 3D uses:
DirectX on Windows systems
OpenGL on OSX systems
OpenGL ES on mobile
Software Renderer when no hardware acceleration is available (due to older hardware or no hardware at all).
Please see: http://www.slideshare.net/danielfreeman779/adobe-air-stage3d-and-agal
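The backend mapping above can be sketched as a trivial lookup. The platform labels here are illustrative only, not an actual Stage3D or AIR API:

```python
def stage3d_backend(platform, has_gpu=True):
    """Illustrative mapping of platform -> rendering backend, mirroring
    the list above. Falls back to the software renderer when no
    hardware acceleration is available, or for unknown platforms."""
    if not has_gpu:
        return "Software Renderer"
    return {
        "windows": "DirectX",
        "osx": "OpenGL",
        "mobile": "OpenGL ES",
    }.get(platform, "Software Renderer")

print(stage3d_backend("windows"))                 # DirectX
print(stage3d_backend("osx"))                     # OpenGL
print(stage3d_backend("windows", has_gpu=False))  # Software Renderer
```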
Good day. Stage3D isn't based on anything; it may share similar methodology and terminology, but it is its own rendering pipeline, which is why Adobe is so excited about it.
Have a look at this: http://www.adobe.com/devnet/flashplayer/articles/how-stage3d-works.html
You can skip down to this heading "Comparing the advantages and restrictions of working with Stage3D" to get right down to it.
Also, take a peek at this: http://www.adobe.com/devnet/flashplayer/stage3d.html; excerpt:
The Stage3D APIs in Flash Player and Adobe AIR offer a fully hardware-accelerated architecture that brings stunning visuals across desktop browsers and iOS and Android apps enabling advanced 2D and 3D capabilities. This set of low-level GPU-accelerated APIs provide developers with the flexibility to leverage GPU hardware acceleration for significant performance gains in video game development, whether you’re using cutting-edge 3D game engines or the intuitive, lightning fast Starling 2D framework that powers Angry Birds.

Using OpenGL on lower-power side of Hybrid Graphics chip

I have hit a brick wall and I wonder if someone here can help. My program opens an OpenGL surface for very minor rendering needs. On the MacBook Pro, this seems to cause the graphics driver to switch the hybrid setup from the low-performance Intel graphics to the high-performance AMD/ATI graphics.
This causes me problems, as there seems to be an issue with the AMD driver when putting the Mac to sleep, but it also drains the battery unnecessarily fast. I only need OpenGL to create a static 3D image on occasion; I do not require a fast frame rate!
Is there a way in a Cocoa app to prevent OpenGL from switching a hybrid graphics setup into performance mode?
The relevant documentation for this is QA1734, “Allowing OpenGL applications to utilize the integrated GPU”:
… On OS X 10.6 and earlier, you are not allowed to choose to run on the integrated GPU instead. …
On OS X 10.7 and later, there is a new attribute called NSSupportsAutomaticGraphicsSwitching. To allow your OpenGL application to utilize the integrated GPU, you must add in the Info.plist of your application this key with a Boolean value of true…
So you can only do this on Lion, and “only … on the dual-GPU MacBook Pros that were shipped Early 2011 and after.”
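For reference, the entry the QA describes is a plain key/value pair in your application's Info.plist; a minimal fragment might look like this:

```xml
<!-- Info.plist fragment (per QA1734): allow the app to run on the
     integrated GPU instead of forcing the discrete one. -->
<key>NSSupportsAutomaticGraphicsSwitching</key>
<true/>
```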
There are a couple of other important caveats:
Additionally, you must make sure that your application works correctly with multiple GPUs, or else the system may continue forcing your application to use the discrete GPU. TN2229, Supporting Multiple GPUs on Mac OS X, discusses in detail the required steps that you need to follow.
and:
Features that are available on the discrete GPU may not be available on the integrated GPU. You must check that features you desire to use exist on the GPU you are using. For a complete listing of supported features by GPU class, please see: OpenGL Capabilities Tables.