I'm fiddling with the Direct3D stereoscopic 3D sample and was wondering if there is any way to toggle the stereoscopic view on and off while the app is running. I've tried manually setting m_stereoEnabled = false, but this still renders both the left and right eye; it simply stops updating the right-eye rendering. I'm fairly new to DirectX, but not to software development or to 3D game development (my tool of choice is usually OpenGL, however).
It looked like I'd need to change DirectXBase::UpdateStereoEnabledStatus(), because it automatically sets m_stereoEnabled to true if my graphics card/monitor support 3D.
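For reference, the kind of per-frame gate I was hoping for looks roughly like this (just a sketch against the sample's structure; only m_stereoEnabled comes from the sample, the other names are placeholders of mine):

```cpp
// Hypothetical toggle in the render loop. Only m_stereoEnabled comes from the
// sample; m_stereoUserEnabled, RenderEye() and Present() are placeholder names.
void MyRenderer::Render()
{
    RenderEye(0); // left eye doubles as the mono view

    // Draw the right eye only if the hardware reports stereo support
    // AND the user hasn't toggled stereo off at runtime.
    if (m_stereoEnabled && m_stereoUserEnabled)
    {
        RenderEye(1);
    }

    Present();
}
```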
I believe more information is needed about your setup for S3D. At least using XNA (which is higher level than DirectX) and Nvidia 3D Vision glasses, the stereoscopic view is rendered automatically: http://xboxforums.create.msdn.com/forums/t/90838.aspx
I have written a program with OpenSceneGraph (interfaced into a Qt GUI) at work and all was fine. Now that I've taken the program home (i.e. I took the source code home and compiled it there), I don't see the scene anymore unless I set the option setUseVertexBufferObjects(true), which leads me to believe that objects not set up this way simply aren't rendered (i.e. they aren't just being culled). The models are definitely children of the viewer camera when rendering, and I don't use any node masks that would lead to culling either. I reset the position of at least one object to be in the view of the camera, so it should not be frustum culling.
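For reference, this is the call I mean (a minimal sketch; enableVBOs and geometry are just illustrative names for what I do to each of my drawables):

```cpp
#include <osg/Geometry>

// Applied to every drawable in my scene; without the VBO line nothing
// renders on the machine at home.
void enableVBOs(osg::Geometry* geometry)
{
    geometry->setUseDisplayList(false);          // typical when switching to VBOs
    geometry->setUseVertexBufferObjects(true);   // the setting that makes the scene appear
}
```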
My shaders use #version 330, my graphics card at work is an NVidia Quadro 4000 (I believe), and my graphics card at home is a Radeon HD 5870, so the hardware should not be the problem.
The OpenSceneGraph installation is a fresh one, so obviously I might have made a mistake there, but I wouldn't know which setting would lead to this behaviour.
So, why might this happen?
What graphics engine does Maya use, OpenGL or DirectX? Does it use either at all, given that Maya is written in C++?
To go deep into Maya, is it better to learn OpenGL, or should one go with DirectX?
My questions specifically concern adding brand-new functionality, such as a new edge system for certain geometry in Maya.
What graphics engine does Maya use?
Its own.
Neither OpenGL nor Direct3D is a graphics engine. They're drawing APIs: you push in bunches of data, parameters and shaders to make sense of that data, and rasterized points, lines and triangles on a 2D framebuffer come out on the other side. That's it.
Maya, like every other graphics program out there, either implements its own engine or uses a graphics engine library, which in turn may use Direct3D or OpenGL as a backend. In Maya's case, OpenGL is used for the interactive display, but the offline renderer is independent of that.
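To make the "drawing API, not engine" point concrete, this is roughly the entire service OpenGL (as one example of such an API) performs per draw call; assume vao, shaderProgram and vertexCount were set up elsewhere:

```cpp
// You hand the API a bunch of data plus shaders and issue a draw call;
// rasterized triangles land in the framebuffer, and that is all.
glBindVertexArray(vao);                      // the data
glUseProgram(shaderProgram);                 // the shaders that make sense of it
glDrawArrays(GL_TRIANGLES, 0, vertexCount);  // triangles come out the other side
```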
To go deep into Maya, is it better to learn OpenGL, or should one go with DirectX?
As long as you don't want to write lower-level-ish Maya plug-ins, you don't have to learn either.
My questions specifically concern adding brand-new functionality, such as a new edge system for certain geometry in Maya.
You surely want to make that available to the offline renderer as well, so neither OpenGL nor Direct3D is of use to you. You have to implement this using the graphics pipeline functions offered by Maya and its renderer. Note that you might also have to patch into external renderers if you want to use those with your new edge features.
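For orientation only, a lower-level Maya plug-in starts from a skeleton like the one below (Maya C++ API; the command name and class are made up for illustration, and a real edge-system feature would live in the appropriate MPx* class rather than a command):

```cpp
#include <maya/MFnPlugin.h>
#include <maya/MPxCommand.h>
#include <maya/MArgList.h>
#include <maya/MGlobal.h>

// Hypothetical command used only to show the registration pattern.
class EdgeSystemCmd : public MPxCommand
{
public:
    MStatus doIt(const MArgList&) override
    {
        MGlobal::displayInfo("edge system stub called");
        return MS::kSuccess;
    }
    static void* creator() { return new EdgeSystemCmd; }
};

MStatus initializePlugin(MObject obj)
{
    MFnPlugin plugin(obj, "example", "1.0", "Any");
    return plugin.registerCommand("edgeSystemCmd", EdgeSystemCmd::creator);
}

MStatus uninitializePlugin(MObject obj)
{
    MFnPlugin plugin(obj);
    return plugin.deregisterCommand("edgeSystemCmd");
}
```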
I am developing a user interface for my application. Most of my application is portable, as it is written in C++, but today I started thinking about the UI, which is currently written in Direct2D. I was wondering whether there is an equivalent for developing a UI on iOS (iPad) and OS X (Mac).
Something high level enough that I can draw rectangles and circles, but also low level enough that it is not as slow as GDI.
Thanks in advance.
PS. I DON'T want a comparison of which is better or worse; I just want to know what options I have.
CoreAnimation is a GPU-accelerated framework. Individual views are cached on the GPU, and you can then apply arbitrary composition transforms to them; those transforms are applied by the GPU to the cached image. So you can use CoreGraphics to draw a circle, rectangle or whatever, have CoreAnimation store that bitmap on the GPU, and then transform it.
Also among the first-party frameworks, Sprite Kit provides a game-oriented framework that includes game-style drawing (i.e. accelerated, write-once/read-many 'sprites') alongside physics and so on.
OpenGL ES is also fully supported. You can assume 2.0 is always available, as it was introduced on the 3GS and Apple no longer accepts binaries for older devices; 3.0 is also available on the latest iPhone. That's obviously quite a bit lower level than Direct2D, but Apple supplies GLKit, which lets you upload images trivially and emulate the old fixed-functionality pipeline with just a few simple calls.
Out in the third-party world I guess the main thing people are going to suggest is Cocos2d but at this point it's already playing catch-up to Sprite Kit.
Of those, CoreAnimation, OpenGL and Cocos2d span iOS and OS X with some minor differences; Sprite Kit is already available on iOS and will turn up in the upcoming OS X Mavericks.
Start with Cocoa (OS X) and Cocoa Touch (iOS). In apps made with those you can use Core Graphics, which seems like a good fit for your needs, or OpenGL, which is probably overkill. Of course there are many third-party libraries you can use; Cocos2d, as Petesh mentioned, is one of them.
Suppose you have a third-party program on your desktop that uses OpenGL (the fixed-pipeline version, < 2.0), for example Street View in Google Maps. Is there a way to find out more about what that app is actually rendering in OpenGL? In particular, I'm interested in the vertices used for drawing: how many there are and where they are.
I could imagine something like a hacked/modified OpenGL driver that could show you the actual vertices overlaid as dots on the display, but I can't find any such thing.
gDEBugger can do that for standalone OpenGL applications.
For the record:
There is also WebGL inspector for WebGL.
There is also a variety of OpenGL ES tools for mobile platforms if that is of use, but typically these do not record enough information to completely reconstruct a scene for debugging.
I am using the open source haptics and 3D graphics library Chai3D running on Windows 7. I have rewritten the library to do stereoscopic 3D with Nvidia nvision. I am using OpenGL with GLUT, and I use glutInitDisplayMode(GLUT_RGB | GLUT_DEPTH | GLUT_DOUBLE | GLUT_STEREO) to initialize the display mode.
It works great on Quadro cards, but on GTX 560m and GTX 580 cards it says the pixel format is unsupported. I know the monitors are capable of displaying the 3D, and I know the cards are capable of rendering it. I have tried adjusting the resolution of the screen and everything else I can think of, but nothing seems to work. I have read in various places that stereoscopic 3D with OpenGL only works in fullscreen mode, so the only possible reason for this error I can think of is that I am starting in windowed mode.
How would I force the application to start in fullscreen mode with 3D enabled? Can anyone provide a code example of quad-buffer stereoscopic 3D using OpenGL that works on the later GTX model cards?
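For context, the kind of fullscreen startup I had in mind looks roughly like this (a sketch using GLUT's game mode; the resolution/refresh string is just a placeholder for whatever the 3D display actually requires, typically 120 Hz):

```cpp
#include <GL/glut.h>

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGB | GLUT_DEPTH | GLUT_DOUBLE | GLUT_STEREO);

    // Ask for an exclusive fullscreen "game mode" instead of a window.
    glutGameModeString("1920x1080:32@120");           // width x height : bpp @ refresh
    if (glutGameModeGet(GLUT_GAME_MODE_POSSIBLE))
        glutEnterGameMode();                          // fullscreen with the mode above
    else
        glutCreateWindow("stereo test (windowed fallback)");

    // ... register display/idle callbacks and call glutMainLoop() ...
    return 0;
}
```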
What you are experiencing has no technical reason; it is simply NVIDIA's product policy. Quad-buffer stereo is considered a professional feature, so NVIDIA offers it only on their Quadro cards, even though the GeForce GPUs could do it just as well. This is not a recent development; it was already like this back in 1999. For example, I had (well, still have) a GeForce2 Ultra back then. Technically it was the very same chip as the Quadro; the only difference was the PCI ID reported back to the system. One could trick the driver into thinking you had a Quadro by tinkering with the PCI IDs (either by patching the driver or by soldering an additional resistor onto the graphics card's PCB).
The stereoscopic 3D hack for Direct3D was already supported by my GeForce2 back then. At the time the driver duplicated the rendering commands, applying a translation to the modelview matrix and a skew to the projection matrix. These days it's implemented with a shader and a multi-render-target trick.
The NVision3D API does allow you to blit images for specific eyes (this is meant for movie players and image viewers). But it also allows you to emulate quad-buffer stereo: instead of the GL_BACK_LEFT and GL_BACK_RIGHT buffers, create two framebuffer objects, which you bind and use as if they were the quad-buffer stereo buffers. Then, after rendering, you blit the resulting images (as textures) through the NVision3D API.
With as little as 50 lines of management code you can build a program that works seamlessly with both NVision3D and quad-buffer stereo. What NVidia does is pointless; they should just stop it and properly support quad-buffer stereo pixel formats on consumer GPUs as well.
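A rough sketch of the FBO half of that emulation (plain OpenGL 3.x calls; width and height stand for the per-eye render size, and the final hand-off, to the NVision3D blit or to GL_BACK_LEFT/GL_BACK_RIGHT on a Quadro, is only indicated by comments):

```cpp
// One colour texture and depth renderbuffer per eye, used in place of
// GL_BACK_LEFT / GL_BACK_RIGHT.
GLuint eyeFBO[2], eyeTex[2], eyeDepth[2];
glGenFramebuffers(2, eyeFBO);
glGenTextures(2, eyeTex);
glGenRenderbuffers(2, eyeDepth);

for (int eye = 0; eye < 2; ++eye) {
    glBindTexture(GL_TEXTURE_2D, eyeTex[eye]);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glBindRenderbuffer(GL_RENDERBUFFER, eyeDepth[eye]);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);

    glBindFramebuffer(GL_FRAMEBUFFER, eyeFBO[eye]);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, eyeTex[eye], 0);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              GL_RENDERBUFFER, eyeDepth[eye]);
}

// Per frame: render each eye into its FBO as if it were the quadbuffer.
for (int eye = 0; eye < 2; ++eye) {
    glBindFramebuffer(GL_FRAMEBUFFER, eyeFBO[eye]);
    glViewport(0, 0, width, height);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // ... apply the per-eye view/projection and draw the scene ...
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);
// ... then hand eyeTex[0] / eyeTex[1] to the stereo blit (NVision3D),
// or blit them to GL_BACK_LEFT / GL_BACK_RIGHT on a Quadro.
```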
Simple: you can't. Not the way you're trying to do it.
There is a difference between having a pre-existing program do things with stereoscopic glasses and doing what you're trying to do. What you are attempting to do is use the built-in stereo support of OpenGL: the ability to create a stereoscopic framebuffer, where you can render to the left and right framebuffers arbitrarily.
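Concretely, that built-in path looks like this (just a sketch; it only works if the context was created with a stereo-capable pixel format, e.g. via GLUT_STEREO):

```cpp
// Select each back buffer of the stereo framebuffer explicitly and draw
// the matching eye; the buffer swap then presents both images.
glDrawBuffer(GL_BACK_LEFT);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// ... set the left-eye view/projection and draw the scene ...

glDrawBuffer(GL_BACK_RIGHT);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// ... set the right-eye view/projection and draw the scene ...

glutSwapBuffers(); // or whatever swap call your windowing layer uses
```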
NVIDIA does not allow that with their non-Quadro cards. The driver has hacks that force stereo onto applications via nVision and the control panel, but NVIDIA's GeForce drivers do not allow you to create stereoscopic framebuffers.
And before you ask, no, I have no idea why NVIDIA doesn't let you control stereo.
Since I was looking into this issue for my own game, I found this link where somebody hacked the USB protocol: http://users.csc.calpoly.edu/~zwood/teaching/csc572/final11/rsomers/
I didn't follow it through, but at the time I was researching this it didn't look too hard to make use of that information. So you might have to implement your own code in order to support it in your app, which should be possible. Unfortunately, a generic solution would be harder, because then you would have to hack the driver or somehow hook into the OpenGL library and intercept the calls.