Why does OpenSceneGraph only render if setUseVertexBufferObjects is enabled?

I have written a program with OpenSceneGraph (integrated into a Qt GUI) at work, and everything was fine. Now that I have taken the program home (i.e. I got the source code and compiled it at home), I no longer see the scene unless I set the option setUseVertexBufferObjects(true), which leads me to believe that the scene simply doesn't render objects that aren't set up this way (i.e. the objects aren't just culled). The models are definitely children of the viewer camera when rendering, and I don't use any node masks that would cause culling either. I reset the position of at least one object so that it lies in the camera's view, so it should not be a frustum-culling issue.
My shaders use #version 330; my graphics card at work is an NVIDIA Quadro 4000 (I believe), and my card at home is a Radeon HD 5870, so the hardware should not be the problem.
The OpenSceneGraph installation is a fresh one, so obviously I might have made a mistake there, but I wouldn't know which setting would lead to this behaviour.
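For reference, the workaround that makes the scene appear is roughly the following (a minimal sketch; the visitor class name is mine and not part of OSG):

    // Walk the scene graph and force VBO usage on every drawable,
    // matching the setUseVertexBufferObjects(true) workaround described above.
    #include <osg/Geode>
    #include <osg/Drawable>
    #include <osg/NodeVisitor>

    class EnableVBOVisitor : public osg::NodeVisitor   // name is made up for this sketch
    {
    public:
        EnableVBOVisitor()
            : osg::NodeVisitor(osg::NodeVisitor::TRAVERSE_ALL_CHILDREN) {}

        virtual void apply(osg::Geode& geode)
        {
            for (unsigned int i = 0; i < geode.getNumDrawables(); ++i)
            {
                osg::Drawable* drawable = geode.getDrawable(i);
                drawable->setUseDisplayList(false);        // display lists and VBOs are mutually exclusive
                drawable->setUseVertexBufferObjects(true); // the call that makes the scene show up
            }
            traverse(geode);
        }
    };

    // Usage:
    //   EnableVBOVisitor vbos;
    //   sceneRoot->accept(vbos);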
So, why might this happen?

Related

Capture UIView to texture continuously

I am looking for a way to create a continuously updating texture from a UIView for an OpenGL context on iOS. I have seen "Render contents of UIView as an OpenGL texture", but this approach is quite slow, as it requires the whole view to be re-rendered every time it changes. This hits web views particularly hard, since the whole page needs to be blitted around; it breaks animations in web views and makes everything very static. I was wondering if there is a technique using some other iOS APIs that would create a live link between a view and a texture (much like video textures do).
This seems to be a fundamental requirement of OS display composition, but it feels like it always happens under the covers and is not exposed to developers. Perhaps I am wrong!
Bonus points for anyone that can tell me if any other OSes support this feature.
Take a look at the RosyWriter sample project from Apple.
It uses CVOpenGLESTextureCache to improve the performance of rendering camera frames as OpenGL textures.
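Roughly, the texture-cache path looks like the sketch below (compiled as Objective-C++; the helper names are made up, error handling is omitted, and the surrounding capture/EAGLContext setup is assumed to exist):

    // Sketch: map a CVPixelBuffer (e.g. a camera frame) into an OpenGL ES
    // texture without an extra copy, via CVOpenGLESTextureCache.
    #include <CoreVideo/CVPixelBuffer.h>
    #include <CoreVideo/CVOpenGLESTextureCache.h>
    #include <OpenGLES/ES2/gl.h>
    #include <OpenGLES/ES2/glext.h>

    static CVOpenGLESTextureCacheRef gTextureCache = NULL;

    void CreateCache(CVEAGLContext eaglContext)   // the app's EAGLContext
    {
        CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL,
                                     eaglContext, NULL, &gTextureCache);
    }

    // The caller binds CVOpenGLESTextureGetName(tex), draws with it, then
    // CFRelease(tex) and calls CVOpenGLESTextureCacheFlush(gTextureCache, 0)
    // once per frame to recycle the cache entries.
    CVOpenGLESTextureRef TextureFromPixelBuffer(CVPixelBufferRef pixelBuffer)
    {
        CVOpenGLESTextureRef tex = NULL;
        CVOpenGLESTextureCacheCreateTextureFromImage(
            kCFAllocatorDefault, gTextureCache, pixelBuffer, NULL,
            GL_TEXTURE_2D, GL_RGBA,
            (GLsizei)CVPixelBufferGetWidth(pixelBuffer),
            (GLsizei)CVPixelBufferGetHeight(pixelBuffer),
            GL_BGRA_EXT, GL_UNSIGNED_BYTE, 0, &tex);   // BGRA matches camera output
        return tex;
    }

Note that this only covers pixel buffers you already have (camera frames, video). Getting a UIView's content into a pixel buffer without re-rendering it is the part iOS does not expose.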

What's causing this unpredictable OpenGL bug?

I have an OpenGL test application that is producing highly unusual results. When I start the application, it may or may not exhibit a severe graphical bug.
It might produce an image like this:
http://i.imgur.com/JwPoDrh.jpg
Or like this:
http://i.imgur.com/QEYwhBY.jpg
Or just the correct image, like this:
http://i.imgur.com/zUJbwCM.jpg
The scene consists of one spinning colored cube (made of 12 triangles) with a simple shader that colors the pixels based on the absolute value of their model-space coordinates. The junk faces appear to spin with the cube as though they were attached to it, and junk triangles or quads often flash briefly on the screen as though they were rendered in 2D.
The thing I find most unusual is how inconsistent the behavior is: starting the exact same application repeatedly, without my changing anything else on the system, produces different results, sometimes bugged and sometimes not, and the arrangement of the junk faces isn't consistent either.
I can't really post source code for the application as it is very lengthy and the actual OpenGL calls are spread out across many wrapper classes and such.
This is occurring under the following conditions:
Windows 10 64 bit OS (although I have observed very similar behavior under Windows 8.1 64 bit).
AMD FX-9590 CPU (Clocked at 4.7GHz on an ASUS Sabertooth 990FX).
AMD 7970HD GPU (It is a couple years old and occasionally areas of the screen in 3D applications become scrambled, but nothing on the scale of what I'm experiencing here).
Using SDL (https://www.libsdl.org/) for window and context creation.
Using GLEW (http://glew.sourceforge.net/) for OpenGL.
Using OpenGL versions 1.0, 3.3 and 4.3 (I'm assuming SDL is indeed using the versions I instructed it to).
AMD Catalyst driver version 15.7.1 (Driver Packaging Version listed as 15.20.1062.1004-150803a1-187674C, although again I have seen very similar behavior on much older drivers).
Catalyst Control Center lists my OpenGL version as 6.14.10.13399.
This looks like a broken graphics card to me. Most likely it is a problem with the memory (either the memory chips themselves or a soldering problem). Artifacts like the ones you see can happen if, for some reason, the address for a memory operation does not fully settle, or is not set at all, before the read starts; that can happen due to a bad connection between the GPU and the memory (failed solder joints) or because the memory itself has failed.
Solution: buy a new graphics card. You may try resoldering it using a reflow process; there are tutorials on how to do this DIY, but a proper reflow oven gives better results.

DirectX 11.1 toggle stereoscopic

I'm fiddling with the Direct3D stereoscopic 3D sample and was wondering if there is any way to toggle the stereoscopic view on and off while the app is running. I've tried manually setting m_stereoEnabled = false, but the app still renders both the left and right eye; it simply stops updating the right-eye rendering. I'm fairly new to DirectX, but not to software development or 3D game development (however, my tool of choice is usually OpenGL).
It looked like I'd need to change DirectXBase::UpdateStereoEnabledStatus(), because it automatically sets m_stereoEnabled to true if my graphics card/monitor combination supports 3D.
I believe more information is needed about your setup for S3D. At least with XNA (which sits at a higher level than DirectX) and NVIDIA 3D Vision glasses, the stereoscopic view is rendered automatically: http://xboxforums.create.msdn.com/forums/t/90838.aspx
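That said, in DirectX 11.1 the stereo/mono choice is baked into the swap chain itself, so a real runtime toggle usually means recreating the swap chain rather than just flipping m_stereoEnabled. A rough sketch of that decision, assuming an IDXGIFactory2 is at hand (the helper function is made up; only the DXGI types and the IsWindowedStereoEnabled call are actual API):

    // Sketch: build a swap chain description that only requests stereo when the
    // user wants it AND the system can display it. Toggling stereo at runtime
    // means recreating the swap chain with this flag changed.
    #include <dxgi1_2.h>

    DXGI_SWAP_CHAIN_DESC1 MakeSwapChainDesc(IDXGIFactory2* factory, bool userWantsStereo)
    {
        DXGI_SWAP_CHAIN_DESC1 desc = {};
        desc.Format           = DXGI_FORMAT_B8G8R8A8_UNORM;
        desc.SampleDesc.Count = 1;
        desc.BufferUsage      = DXGI_USAGE_RENDER_TARGET_OUTPUT;
        desc.BufferCount      = 2;
        desc.SwapEffect       = DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL;  // required for stereo
        desc.Stereo           = (userWantsStereo && factory->IsWindowedStereoEnabled())
                                    ? TRUE : FALSE;
        return desc;
    }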

How to do stereoscopic 3D with OpenGL on GTX 560 and later?

I am using the open source haptics and 3D graphics library Chai3D, running on Windows 7. I have rewritten the library to do stereoscopic 3D with NVIDIA 3D Vision. I am using OpenGL with GLUT and call glutInitDisplayMode(GLUT_RGB | GLUT_DEPTH | GLUT_DOUBLE | GLUT_STEREO) to initialize the display mode. It works great on Quadro cards, but on GTX 560M and GTX 580 cards it says the pixel format is unsupported. I know the monitors are capable of displaying the 3D, and I know the cards are capable of rendering it. I have tried adjusting the screen resolution and everything else I can think of, but nothing seems to work. I have read in various places that stereoscopic 3D with OpenGL only works in fullscreen mode, so the only possible reason for this error I can think of is that I am starting in windowed mode. How would I force the application to start in fullscreen mode with 3D enabled? Can anyone provide a code example of quad-buffer stereoscopic 3D using OpenGL that works on the later GTX-model cards?
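For reference, the quad-buffer path I am trying to get working looks roughly like this minimal sketch (the scene is just a placeholder teapot; on the GTX cards it is the window creation itself that fails):

    // Minimal quad-buffered stereo skeleton with GLUT. This only works when the
    // driver offers a GLUT_STEREO-capable pixel format (e.g. on Quadro cards).
    #include <GL/glut.h>

    static void drawScene(float eyeOffset)
    {
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glTranslatef(-eyeOffset, 0.0f, -5.0f);   // crude eye separation for the sketch
        glutSolidTeapot(1.0);
    }

    static void display()
    {
        glDrawBuffer(GL_BACK_LEFT);                            // left eye
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        drawScene(-0.03f);

        glDrawBuffer(GL_BACK_RIGHT);                           // right eye
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        drawScene(+0.03f);

        glutSwapBuffers();                                     // swaps both back buffers
    }

    static void reshape(int w, int h)
    {
        glViewport(0, 0, w, h);
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        gluPerspective(60.0, (double)w / (double)h, 0.1, 100.0);
    }

    int main(int argc, char** argv)
    {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_RGB | GLUT_DEPTH | GLUT_DOUBLE | GLUT_STEREO);
        glutCreateWindow("quad-buffer stereo test");           // fails here if unsupported
        glEnable(GL_DEPTH_TEST);
        glutDisplayFunc(display);
        glutReshapeFunc(reshape);
        glutMainLoop();
        return 0;
    }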
What you are experiencing has no technical reason; it is simply NVIDIA product policy. Quad-buffer stereo is considered a professional feature, so NVIDIA offers it only on their Quadro cards, even though the GeForce GPUs could do it as well. This is not a recent development; already back in 1999 it was like this. For example, I had (well, still have) a GeForce2 Ultra back then. Technically it was the very same chip as the Quadro; the only difference was the PCI ID reported back to the system. One could trick the driver into thinking you had a Quadro by tinkering with the PCI IDs (either by patching the driver or by soldering an additional resistor onto the graphics card PCB).
The stereoscopic 3D driver hack for Direct3D was already supported by my GeForce2 back then. At that time the driver duplicated the rendering commands, applying a translation to the modelview matrix and a skew to the projection matrix. These days it is implemented with a shader and multi-render-target trick.
The NVision3D API does allow you to blit images for specific eyes (this is meant for movie players and image viewers). But it also allows you to emulate quad-buffer stereo: instead of the GL_BACK_LEFT and GL_BACK_RIGHT buffers, create two framebuffer objects, which you bind and use as if they were the quad-buffer stereo buffers. Then, after rendering, you blit the resulting images (as textures) to the NVision3D API.
With as little as 50 lines of management code you can build a program that works seamlessly on both NVision3D and quad-buffer stereo. What NVIDIA does is pointless; they should just stop it and properly support quad-buffer stereo pixel formats on consumer GPUs as well.
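A minimal sketch of the FBO side of that emulation (the final per-eye handoff is only indicated in comments, since the NVision3D blit calls are not shown here):

    // Sketch: emulate GL_BACK_LEFT / GL_BACK_RIGHT with two FBO-backed textures.
    // Requires an OpenGL 3.x context (or GL_ARB_framebuffer_object).
    #include <GL/glew.h>

    struct EyeTarget {
        GLuint fbo;
        GLuint color;
        GLuint depth;
    };

    static EyeTarget createEyeTarget(int width, int height)
    {
        EyeTarget t;
        glGenTextures(1, &t.color);
        glBindTexture(GL_TEXTURE_2D, t.color);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

        glGenRenderbuffers(1, &t.depth);
        glBindRenderbuffer(GL_RENDERBUFFER, t.depth);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);

        glGenFramebuffers(1, &t.fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, t.fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, t.color, 0);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                                  GL_RENDERBUFFER, t.depth);
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        return t;
    }

    // Per frame: render each eye into its FBO as if it were the quadbuffer back
    // buffer, then hand left.color / right.color to the presentation path
    // (NVision3D blit, or a textured quad drawn to GL_BACK_LEFT / GL_BACK_RIGHT
    // when a real quadbuffer visual is available).
    static void renderStereoFrame(const EyeTarget& left, const EyeTarget& right,
                                  void (*drawScene)(int eye))
    {
        glBindFramebuffer(GL_FRAMEBUFFER, left.fbo);
        drawScene(0);   // left eye
        glBindFramebuffer(GL_FRAMEBUFFER, right.fbo);
        drawScene(1);   // right eye
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
    }

Because the scene rendering itself never touches the window-system framebuffer, the same code path works whether the images end up in NVision3D or in a genuine quad-buffer back buffer.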
Simple: you can't. Not the way you're trying to do it.
There is a difference between having a pre-existing program do things with stereoscopic glasses and doing what you're trying to do. What you are attempting to do is use the built-in stereo support of OpenGL: the ability to create a stereoscopic framebuffer, where you can render to the left and right framebuffers arbitrarily.
NVIDIA does not allow that with their non-Quadro cards. The driver has hacks that will force stereo on applications via 3D Vision and the control panel, but NVIDIA's GeForce drivers do not allow you to create stereoscopic framebuffers.
And before you ask, no, I have no idea why NVIDIA doesn't let you control stereo.
Since I was looking into this issue for my own game, I found this link where somebody reverse-engineered the USB protocol: http://users.csc.calpoly.edu/~zwood/teaching/csc572/final11/rsomers/
I didn't follow it through, but at the time I was researching this it didn't look too hard to make use of this information. So you might have to implement your own code in order to support it in your app, which should be possible. Unfortunately a generic solution would be harder, because then you would have to hack the driver or somehow hook into the OpenGL library and intercept the calls.

Seamless multi-screen OpenGL rendering with heterogeneous multi-GPU configuration on Windows XP

On Windows XP (64-bit) it seems to be impossible to render with OpenGL to two screens connected to different graphics cards with different GPUs (e.g. two NVIDIA cards of different generations). What happens in this case is that rendering works on only one of the screens. With Direct3D, on the other hand, it works without problems, rendering on both screens. Does anyone know why this is? Or, more importantly: is there a way to render on both screens with OpenGL?
I have discovered that on Windows 7 rendering works on both screens even with GPUs from different vendors (e.g. AMD and Intel). I think this may be because of its display model, which, if I am not mistaken, runs on top of a Direct3D compositor. This is just a supposition; I really don't know if it is the actual reason.
If Direct3D were the solution, one idea would be to do all the rendering with OpenGL into a texture, and then somehow render this texture with Direct3D, supposing it isn't too slow.
What happens in Windows 7 is that one GPU, or several GPUs of the same type coupled together, render the image to an offscreen buffer, which is then composited across the screens. However, it is (as yet) impossible to distribute the rendering of a single context over GPUs of different makes. That would require a standardized communication and synchronization infrastructure, which simply doesn't exist. Neither OpenGL nor Direct3D can do it.
What can be done is copying the rendering results into the on-screen framebuffers of the several GPUs. Windows 7 and DirectX have support for this built in. Doing it with OpenGL is a bit more involved: technically you render to an offscreen device context, usually a so-called PBuffer, and after finishing the rendering you copy the result to your window using GDI functions. This last copy step, however, is very slow compared to the rest of the OpenGL operation.
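A rough sketch of that final copy step, assuming the offscreen rendering has already happened (PBuffer setup is omitted, since it is verbose, and the helper name is made up):

    // Sketch: copy an OpenGL rendering into a window via GDI. Slow, but it works
    // across GPUs. Assumes the GL context of the offscreen surface is current.
    #include <windows.h>
    #include <GL/gl.h>
    #include <vector>

    void blitGLToWindow(HWND hwnd, int width, int height)
    {
        // Read back the finished frame (BGRA matches GDI's 32-bit DIB layout).
        std::vector<unsigned char> pixels(width * height * 4);
        glPixelStorei(GL_PACK_ALIGNMENT, 1);
        glReadPixels(0, 0, width, height, GL_BGRA_EXT, GL_UNSIGNED_BYTE, pixels.data());

        BITMAPINFO bmi = {};
        bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
        bmi.bmiHeader.biWidth       = width;
        bmi.bmiHeader.biHeight      = height;       // bottom-up, matches glReadPixels
        bmi.bmiHeader.biPlanes      = 1;
        bmi.bmiHeader.biBitCount    = 32;
        bmi.bmiHeader.biCompression = BI_RGB;

        HDC dc = GetDC(hwnd);
        StretchDIBits(dc, 0, 0, width, height, 0, 0, width, height,
                      pixels.data(), &bmi, DIB_RGB_COLORS, SRCCOPY);
        ReleaseDC(hwnd, dc);
    }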
Both NVIDIA and AMD provide ways of choosing which GPU to use: NVIDIA has WGL_NV_gpu_affinity and AMD has WGL_AMD_gpu_association. They work rather differently, so you'll have to do different things on the different hardware to get the behavior you need.
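For the NVIDIA side, a sketch of how WGL_NV_gpu_affinity is typically used looks like this (it assumes a dummy GL context is already current so that wglGetProcAddress can resolve the entry points; the AMD path via WGL_AMD_gpu_association uses different calls and is not shown):

    // Sketch: create a device context bound to one specific NVIDIA GPU via
    // WGL_NV_gpu_affinity.
    #include <windows.h>
    #include <GL/gl.h>
    #include <GL/wglext.h>   // HGPUNV, PFNWGLENUMGPUSNVPROC, PFNWGLCREATEAFFINITYDCNVPROC

    HDC createAffinityDCForGpu(UINT gpuIndex)
    {
        PFNWGLENUMGPUSNVPROC wglEnumGpusNV =
            (PFNWGLENUMGPUSNVPROC)wglGetProcAddress("wglEnumGpusNV");
        PFNWGLCREATEAFFINITYDCNVPROC wglCreateAffinityDCNV =
            (PFNWGLCREATEAFFINITYDCNVPROC)wglGetProcAddress("wglCreateAffinityDCNV");
        if (!wglEnumGpusNV || !wglCreateAffinityDCNV)
            return NULL;                        // extension not exposed by this driver

        HGPUNV gpu = NULL;
        if (!wglEnumGpusNV(gpuIndex, &gpu))     // enumerate the requested GPU
            return NULL;

        HGPUNV gpuList[2] = { gpu, NULL };      // NULL-terminated list
        HDC dc = wglCreateAffinityDCNV(gpuList);

        // From here: choose a pixel format on 'dc', create the GL context with
        // wglCreateContext(dc) and render; everything in that context stays on 'gpu'.
        return dc;
    }

You would create one such affinity context per GPU and then copy or composite the results into the visible windows, since a single context still cannot span GPUs of different makes.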