ovr_CreateSwapTextureSetGL fails [OpenGL + Oculus 0.8.0 / DK2]

As you probably know, the DK2 supports a new mode called Direct Mode that reduces latency and hence improves the VR experience. When I run the DK2 samples that come with the current 0.8.0 (beta) SDK, the DirectX 11 version of the OculusTinyRoom runs fine.
My problem: the OpenGL version (using the 3.3 profile) uses a function called ovr_CreateSwapTextureSetGL() that returns a texture set with zero textures (but calls glGenBuffers as a fallback), and the return value is -1006 (ovrError_ServiceError).
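For reference, the failing call boils down to something like this (a minimal sketch against the 0.8.0 API; createEyeTextures and the width/height values are my placeholders, and GL_SRGB8_ALPHA8 comes from the GL headers):

    /* Sketch of the failing call (LibOVR 0.8.0 API). Assumes 'hmd' is a
       valid handle obtained from ovr_Create(). */
    #include <stdio.h>
    #include <OVR_CAPI_GL.h>

    static void createEyeTextures(ovrHmd hmd, int width, int height)
    {
        ovrSwapTextureSet* textureSet = NULL;
        ovrResult result = ovr_CreateSwapTextureSetGL(hmd, GL_SRGB8_ALPHA8,
                                                      width, height, &textureSet);
        if (OVR_FAILURE(result))              /* here: result == -1006 */
        {
            ovrErrorInfo errorInfo;
            ovr_GetLastErrorInfo(&errorInfo); /* human-readable message */
            printf("CreateSwapTextureSetGL failed: %s\n", errorInfo.ErrorString);
        }
    }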
I've seen many reports about problematic OpenGL support on the Oculus Developer Forum. In earlier versions of the SDK, OpenGL support was neglected from 0.2.4 onward and seems to have been restored in versions 0.5 and up (all in Client Rendering Mode). Nothing was said about the newer Direct Mode, except that for some people it failed to work at all if they had a second screen attached, even in DirectX 11. That is not the case for me.
I've also seen people suggest uninstalling NVidia's 3D Vision drivers, because they may conflict with the Oculus Rift drivers. They report dramatic framerate improvements, although I only get about a 10% improvement myself. Apparently NVidia's GameWorks VR hurts driver performance just by being installed. Unfortunately, uninstalling the drivers does not fix the problem.
The latest driver update (361.43) mentions improved Oculus support and OpenGL support in GameWorks VR, as well as Direct Mode support (even for SLI setups, which seems to give pretty impressive results). But that's an NVidia-only solution; AMD has LiquidVR as an alternative. Either way, I'd still like to use the Oculus SDK stack.
I am using both a GeForce GTX 480 and a Titan X.

I went back to the second-screen issue that some people seem to have had. Since DX11 worked for me, I figured my problem was different.
During my research I found a few interesting forum posts on reddit suggesting that part of the problem might stem from using multiple monitors. It seems this has since been fixed for DX11, but not for OpenGL.
I can confirm that turning off any secondary screens connected to secondary cards fixes the problem. For OpenGL, you have to connect ALL your output devices to the SAME card.
I did some more testing.
What worked:
- Primary screen and Oculus both connected to the Titan X (the 480 not connected).
- Both screens and the Oculus connected to the Titan X (!)
What did not work:
- Primary screen connected to the Titan X and the Oculus to the 480.
- Primary screen connected to the 480 and the Oculus to the Titan X.
So it seems to be a driver issue with the graphics device enumeration.
Note: this was AFTER I removed the NVidia 3D Vision drivers and updated to build 361.43, so it might also still be related to having had them installed. If someone can confirm this, that would be nice to know.

Related

Will Oculus Rift work with Quadro M1000M for non-gaming purposes?

The Oculus Rift website states that the minimum system requirement for the Oculus Rift is an NVIDIA GTX 970 / AMD R9 290 equivalent or greater. I am aware that the Quadro M1000M does not meet that requirement.
My intention is to use the Oculus Rift for developing educational applications (visualization of molecular structures), which in terms of computational demand do not even come close to modern games.
For the above-mentioned kind of purpose, would the Oculus Rift run fine on less powerful GPUs (e.g. the Quadro M1000M), or is the driver developed in such a way that it simply "blocks" cards that do not meet the required specifications?
Further information:
I intend to develop my application on Linux using GLFW in combination with LibOVR, as described in this guide: http://www.glfw.org/docs/3.1/rift.html.
Edit: it was pointed out that the SDK does not support Linux. So as an alternative, I could also use Windows / Unity.
Any personal experiences on the topic are highly appreciated!
Consumer Oculus Rift hardware has not been reverse engineered to the point where you can use it without the official software, which currently only supports Windows desktop systems running one of a specific set of supported GPUs. It will not function on any mobile GPU, nor on any non-Windows OS. Plugging the HMD into a display port on a system where the Oculus service isn't running will not result in anything appearing on the headset.
The Oculus DK2 and DK1 can both be made to function on alternative operating systems and with virtually any graphics card, since when connected they are detected by the OS as just another monitor.
Basically, your only options are to use older HMD hardware, wait for Oculus to support other platforms, or wait for someone to reverse engineer the interaction with the production HMD hardware.
To answer my own question (I hope that's OK): I bought an Oculus Rift CV1. It turns out it runs smoothly on my HP ZBook G3, which has a Quadro M1000M card in it. Admittedly, the Oculus desktop application warns that my machine does not meet the required specifications, and indeed, if I render a scene with lots of complicated graphics and turn my head, the visuals tend to 'stutter' a bit.
I tested a couple of very simple scenes in Unity 5 and these run without any problems. I would say the above-mentioned hardware is perfectly suitable for the kind of educational purposes I had in mind, just nothing extremely fancy.
As @SurvivalMachine mentioned in the comments, Optimus can be a bit problematic, but this is resolved by turning hybrid graphics off in the BIOS (which I heard is possible for the HP ZBook series, but not for all laptops). Furthermore, the laptop needs to be connected to a power outlet (i.e. not running on battery) for the graphics card to work properly with the Oculus Rift.

Running OpenGL on Windows Server 2012 R2

This should be straightforward, but for some reason I can't make it work.
I rented a Softlayer Bare Metal Server that comes with an Nvidia Tesla GPU.
I'm remotely executing a program (OpenSCAD) that needs OpenGL > 2.0 in order to properly export a PNG file.
When I invoke OpenSCAD and export a model, I get a 0 KB PNG file as output, a clear symptom that OpenGL > 2.0 support is not present.
To make sure I was running OpenGL > 2.0, I connected to my server via Remote Desktop and ran GLview. To my surprise, the server supports nothing but OpenGL 1.1.
After a little research I found out that the GPU is not used in standard Remote Desktop sessions, so it makes sense that I'm only seeing OpenGL 1.1.
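To see what OpenGL version a given session actually exposes, a minimal probe like this works (a sketch assuming freeglut is installed; GLview essentially reports the same strings):

    /* Minimal GL version probe (sketch, assumes freeglut is available).
       Prints the version/renderer the current session's context reports. */
    #include <stdio.h>
    #include <GL/glut.h>

    int main(int argc, char** argv)
    {
        glutInit(&argc, argv);
        glutCreateWindow("version probe");   /* GL context is created here */
        printf("GL_VERSION  = %s\n", (const char*)glGetString(GL_VERSION));
        printf("GL_RENDERER = %s\n", (const char*)glGetString(GL_RENDERER));
        return 0;
    }

Over plain Remote Desktop this prints the 1.1 "GDI Generic" renderer; in a session that reaches the real driver it should print the hardware version.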
The problem is that when I execute OpenSCAD remotely, it seems the GPU is not used either.
What can I do to make the GPU capabilities of my server work when I invoke OpenSCAD remotely?
PS: I checked with Softlayer support and they won't take any responsibility for this.
Most (currently all) OpenGL implementations that use a GPU assume that there's a display system of some sort using that GPU; in the case of Windows that would be GDI. However, on a headless server Windows usually doesn't start GDI on the GPU but uses some framebuffer.
The NVidia Tesla GPUs are marketed as compute-only devices, and hence their driver does not support any graphics functionality (note that this is a marketing limitation implemented in software, as the silicon is perfectly capable of doing graphics). In other words: if you can implement your graphics operations using CUDA or OpenCL, then you can use the card to generate pictures. Otherwise (i.e. for OpenGL or Direct3D) it's useless.
Note that NVidia is marketing their "GRID" products for remote/cloud rendering.
I'm replying because I faced a similar problem in the past, also trying to run an application that needed OpenGL 4 on a Windows server.
Windows Remote Desktop indeed doesn't provide hardware OpenGL. However, if you use TigerVNC instead and then start your OpenSCAD application, it might pick up your OpenGL drivers. At least this trick did it for me.
(I presume that when a program opens an OpenGL context, it scans for the attached monitors/remote sessions.)
Hope it helps.

When using GLEW, why does my system support OpenGL 3.2 when my GPU is only 2.0 compliant?

I'm a relative beginner with OpenGL (I'm not counting the v1.1 NeHe tutorials I've done, because I'm trying to learn the modern way with custom shaders), and I don't quite grasp how the different versions work: which ones require hardware changes, and which only require driver updates. I've also tried to find more details about how GLEW works (without diving into the code yet), and it's still not clicking. While learning, I'm trying to strike a balance between forward and backward compatibility in my code, especially since I'm working with older hardware and it could become the basis of a game down the road. I'm trying to decide which versions of GL and GLSL to code for.
My specific question is this: why, when I use the GLEW (2.7) library (also using GLFW), does GLEW_VERSION_3_2 evaluate to true, even though the advertising for my GPU says it's only 2.0 compliant? Is it emulating higher-version functionality in software? Is it exposing hardware extensions in a way that makes it behave transparently like 3.2? Or is it just a bug in GLEW?
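For context, the check in question amounts to this (a minimal sketch; context creation via GLFW is elided, and report_versions is just an illustrative name):

    /* Sketch: querying GLEW's version flags. Assumes a current OpenGL
       context already exists (e.g. created with GLFW). */
    #include <stdio.h>
    #include <GL/glew.h>

    void report_versions(void)
    {
        if (glewInit() != GLEW_OK) {
            fprintf(stderr, "glewInit failed\n");
            return;
        }
        /* True only if the driver itself exposes GL 3.2 functionality */
        printf("GLEW_VERSION_3_2: %s\n", GLEW_VERSION_3_2 ? "yes" : "no");
        /* The version string straight from the driver */
        printf("GL_VERSION: %s\n", (const char*)glGetString(GL_VERSION));
    }

GLEW doesn't emulate anything in software: the flag is derived from the version and extension strings the driver itself reports.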
It is an integrated Radeon HD 4250.
Then whatever advertisement you were looking at was wrong. All HD 4xxx-class GPUs (whether integrated, mobile, or discrete) are perfectly capable of OpenGL 3.3. That ad was either extremely old, simply incorrect, or you misread it.

freeglut GLUT_MULTISAMPLE very slow on Intel HD Graphics 3000

I just picked up a new Lenovo ThinkPad that comes with Intel HD Graphics 3000. I'm finding that my old freeglut apps, which use GLUT_MULTISAMPLE, run at 2 or 3 fps instead of the expected 60 fps. Even the freeglut 'shapes' example runs this slowly.
If I disable GLUT_MULTISAMPLE in shapes.c (or in my app), things run quickly again.
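For reference, the multisample request boils down to this (a sketch; picking the sample count via glutSetOption is a freeglut-specific extension, and newer freeglut versions may be required for it):

    /* Sketch: requesting a multisampled window in freeglut.
       Removing GLUT_MULTISAMPLE from the mode restores full speed. */
    #include <GL/freeglut.h>

    static void display(void)
    {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glutSwapBuffers();
    }

    int main(int argc, char** argv)
    {
        glutInit(&argc, argv);
        glutSetOption(GLUT_MULTISAMPLE, 4);      /* freeglut-specific sample count */
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH
                            | GLUT_MULTISAMPLE); /* the slow path on HD 3000 */
        glutCreateWindow("multisample test");
        glutDisplayFunc(display);
        glutMainLoop();
        return 0;
    }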
I tried multisampling with GLFW (using GLFW_FSAA, or whatever that hint is called), and I think it's working fine there. That was with a different app (glgears). GLFW is triggering Norton Internet Security, which thinks it's malware and so keeps removing .exes... but that's another problem; my interest is in freeglut.
I wonder if the algorithm freeglut uses to choose a pixel format trips up on this card, whereas GLFW chooses the right one.
Has anyone else come across something like this? Any ideas?
That GLFW triggers Norton is a bug in Norton's virus definitions. If it's still the case with the latest definitions, send them your GLFW dll/app so they can fix it. The same happens with Avira, and they are working on it (they have already confirmed that it's a false positive).
As for the HD 3000: that's quite a weak GPU. What resolution does your app run at, and how many samples are you using? Maybe the amount of framebuffer memory gets too high for the little guy?
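To put rough numbers on that (a back-of-the-envelope sketch; the per-sample layout of RGBA8 color plus 24/8 depth-stencil is my assumption, and actual driver storage may differ):

    /* Rough multisample framebuffer memory estimate (sketch). */
    #include <stdio.h>

    int main(void)
    {
        const long w = 1920, h = 1080;        /* example resolution    */
        const long samples = 8;               /* MSAA sample count     */
        const long bytes_per_sample = 4 + 4;  /* color + depth-stencil */
        long total = w * h * samples * bytes_per_sample;
        printf("~%ld MiB of multisample buffers\n", total / (1024 * 1024));
        return 0;                             /* prints ~126 MiB at 8x */
    }

That is a big chunk of memory for an integrated GPU that shares system RAM, which would be consistent with the slowdown.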

I need OpenGL 2.0 but my graphics card supports 1.5

I want to start on my WebGL project, and the minimum requirement is that my graphics card supports OpenGL 2.0.
The problem is that I have an Intel laptop with an integrated Intel 965 Graphics Media Accelerator; the driver is up to date, and it supports OpenGL 1.5.
Is there any way to update my graphics card to support 2.0? Is this possible?
Okay, just stay patient, because ANGLE is coming. It seems to me that your hardware is able to run DirectX 9, and ANGLE is a project from Google that provides WebGL support on top of DirectX. But as the others say, you can't upgrade OpenGL drivers just like that. Or you could try Mesa in the Firefox build.
For more information, see Learningwebgl.com.
Sadly, no. With a little more effort you can still develop against OpenGL 2.0, but you'll need to use another machine (or just buy a better graphics card) to test anything 2.0-specific (pixel shading, for instance).
OK, that's not entirely true. You could download the Mesa library, compile it for Win32, and get some of the OpenGL 2.0 functionality emulated in a software renderer, but it would be very slow.
It's possible that updating drivers might help some, but it probably won't make that jump. Otherwise, you could use something like Mesa3D, which does the rendering in software. It can be slow, but it does support up through OpenGL 2.1 (including shaders), if memory serves.
If there's no other way, you could try http://www.mesa3d.org/. I haven't followed the project for quite some time, but apparently they currently provide OpenGL 2.1 software rendering.
I just updated the drivers on my HP 6710b with the Mobile Intel 965 Express Chipset, and now WebGL is working in Firefox 4 RC1!
I put instructions on this site.
It is not pretty but it works!
angleproject is your best bet. Check which exact 965 chip you have (search for 'intel gma' on Wikipedia), which also lists the OpenGL support version for these chips. It might take a couple of months, though, before you can use angleproject to accelerate your WebGL application.
I have a slightly newer 4500MHD, and I have the same problem. WebGL works in Firefox 3.7a4 but fails in the later versions a5 and a6. I had to use the latest drivers from Intel, which claim to support OpenGL 2.0. The Microsoft drivers don't ship with OpenGL support.
I have reported an issue in Firefox: https://bugzilla.mozilla.org/show_bug.cgi?id=570474. It looks like support for Intel cards might be fixed by the time the releases are in beta.