I have 3D point cloud data and I would like to display it on a 3D monitor. Is there a C++ library that can do this? I would also like the user to be able to pan, rotate, and zoom the point cloud. I am using an NVIDIA GeForce GT 540M video card with 1 GB of VRAM.
There is the Point Cloud Library (PCL), which uses the Visualization Toolkit (VTK) to render. They support all the basic forms of interaction, and I have used them to render point clouds. I think both would be good starting points, and they use OpenGL to render. I know VTK has support for 3D displays, but I have not used it that way as I do not have access to a 3D monitor.
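If you go with PCL, a minimal viewer looks roughly like this (a sketch, assuming PCL is installed; the default interactor already provides rotate/pan/zoom, so you get the interaction you asked about for free):

    #include <cmath>
    #include <pcl/point_types.h>
    #include <pcl/visualization/pcl_visualizer.h>

    int main()
    {
        // Build a small synthetic cloud; replace this with your own data.
        pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
        for (float x = -1.0f; x <= 1.0f; x += 0.02f)
            for (float y = -1.0f; y <= 1.0f; y += 0.02f)
                cloud->push_back(pcl::PointXYZ(x, y, 0.25f * std::sin(4.0f * x * y)));

        pcl::visualization::PCLVisualizer viewer("point cloud");
        viewer.addPointCloud<pcl::PointXYZ>(cloud, "cloud");
        // Left-drag rotates, the scroll wheel zooms, middle-drag pans.
        viewer.spin();  // blocks until the window is closed
        return 0;
    }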
If I make an animated/rigged character in 3ds Max, is it possible to use that character in an OpenGL game, and how would I get it into OpenGL? It would be helpful if someone could point out a proper workflow, or a tutorial link, for exporting an animated model from 3D software to OpenGL.
OpenGL is a very low-level 3D API that offers only the bare bones: it can display triangles and fill them with color, and that's about it. It comes with some helper functions for manipulating point clouds, but it's really very basic.
Animation and rigging are far beyond what it can do. What you need is a framework that supports animation and rigging and then uses OpenGL to display the result.
Since you don't specify any requirements, it's hard to say which one would suit you. Blender is probably a good place to start: its game engine runs on many platforms and supports OpenGL, animation, and rigging.
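If you do want to roll your own OpenGL renderer instead, one common route (an assumption on my part, not tied to Blender) is to export from 3ds Max to a format such as COLLADA and load it with the Assimp library, which hands you the mesh, bone, and animation data to feed to OpenGL yourself:

    #include <assimp/Importer.hpp>
    #include <assimp/scene.h>
    #include <assimp/postprocess.h>
    #include <cstdio>

    // Minimal sketch: load a model exported from 3ds Max (e.g. as .dae)
    // with Assimp, then upload the mesh data to OpenGL buffers yourself.
    int main()
    {
        Assimp::Importer importer;
        const aiScene* scene = importer.ReadFile("character.dae",
            aiProcess_Triangulate | aiProcess_GenSmoothNormals);
        if (!scene) {
            std::fprintf(stderr, "Import failed: %s\n", importer.GetErrorString());
            return 1;
        }
        // Each aiMesh holds vertices, normals, and bone weights; each
        // aiAnimation holds the keyframe channels for the rig.
        std::printf("Meshes: %u, animations: %u\n",
                    scene->mNumMeshes, scene->mNumAnimations);
        return 0;
    }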
I'm fiddling with the Direct3D stereoscopic 3D sample and was wondering if there is any way to toggle the stereoscopic view on and off while the app is running. I've tried manually setting m_stereoEnabled = false, but this still renders both the left and right eye; it simply stops updating the right-eye rendering. I'm fairly new to DirectX, but not to software development or 3D game development (my tool of choice is usually OpenGL).
It looks like I'd need to change DirectXBase::UpdateStereoEnabledStatus(), because it automatically sets m_stereoEnabled to true whenever my graphics card/monitor supports 3D.
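For illustration, the change I have in mind would look roughly like this; m_userWantsStereo and SetStereo are invented names, while the other identifiers are quoted from the sample:

    // Hypothetical: AND the hardware check with a user-controlled flag
    // inside UpdateStereoEnabledStatus(), e.g.
    //     m_stereoEnabled = hardwareSupportsStereo && m_userWantsStereo;
    // then expose a toggle that rebuilds the swap chain so the right-eye
    // buffer actually goes away:
    void MyApp::SetStereo(bool enabled)
    {
        m_userWantsStereo = enabled;           // invented member variable
        UpdateStereoEnabledStatus();           // re-evaluates m_stereoEnabled
        CreateWindowSizeDependentResources();  // recreates the swap chain
    }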
I believe more information is needed about your setup for S3D. At least with XNA (which is higher-level than DirectX) and Nvidia 3D Vision glasses, the stereoscopic view is rendered automatically: http://xboxforums.create.msdn.com/forums/t/90838.aspx
Suppose you have a third-party program on your desktop that uses OpenGL (the fixed-function pipeline, pre-2.0), for example Street View in Google Maps. Is there a way to find out more about what that app is actually rendering in OpenGL? In particular, I'm interested in the vertices used for drawing: how many there are, and where they are.
I could imagine something like a hacked/modified OpenGL driver that could show you the actual vertices overlaid as dots on the display, but I can't find any such thing.
gDEBugger can do that for standalone OpenGL applications.
For the record:
There is also WebGL inspector for WebGL.
There is also a variety of OpenGL ES tools for mobile platforms if that is of use, but typically these do not record enough information to completely reconstruct a scene for debugging.
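As a concrete illustration of the "modified driver" idea from the question: on Linux you can interpose the driver's entry points with an LD_PRELOAD shim and log (or visualize) every draw call. The question targets Windows, where the same trick is usually done by proxying opengl32.dll, but the Linux sketch shows the principle:

    // gltrace.cpp — build: g++ -shared -fPIC gltrace.cpp -o gltrace.so -ldl
    // run:   LD_PRELOAD=./gltrace.so some_opengl_app
    #include <GL/gl.h>
    #include <dlfcn.h>
    #include <cstdio>

    // Our definition shadows the driver's glDrawArrays; we log the vertex
    // count, then forward the call to the real implementation.
    extern "C" void glDrawArrays(GLenum mode, GLint first, GLsizei count)
    {
        using Fn = void (*)(GLenum, GLint, GLsizei);
        static Fn real = reinterpret_cast<Fn>(dlsym(RTLD_NEXT, "glDrawArrays"));
        std::fprintf(stderr, "glDrawArrays: mode=0x%x first=%d count=%d\n",
                     mode, first, count);
        real(mode, first, count);
    }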
I am using the open-source haptics and 3D graphics library Chai3D on Windows 7. I have rewritten the library to do stereoscopic 3D with Nvidia NVision. I am using OpenGL with GLUT, initializing the display with glutInitDisplayMode(GLUT_RGB | GLUT_DEPTH | GLUT_DOUBLE | GLUT_STEREO). It works great on Quadro cards, but on GTX 560M and GTX 580 cards it reports that the pixel format is unsupported. I know the monitors are capable of displaying 3D, and I know the cards are capable of rendering it. I have tried adjusting the screen resolution and everything else I can think of, but nothing works.
I have read in various places that stereoscopic 3D with OpenGL only works in fullscreen mode, so the only reason for this error I can think of is that I am starting in windowed mode. How would I force the application to start in fullscreen mode with 3D enabled? Can anyone provide a code example of quad-buffer stereoscopic 3D using OpenGL that works on the later GTX-model cards?
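For the fullscreen part, the closest I've found is GLUT's game mode, which requests an exclusive fullscreen mode. A sketch (the mode string depends on the display, and as the answers below explain, this alone will not make a GeForce driver accept a GLUT_STEREO pixel format):

    #include <GL/glut.h>

    static void display()
    {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glutSwapBuffers();
    }

    int main(int argc, char** argv)
    {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_RGB | GLUT_DEPTH | GLUT_DOUBLE | GLUT_STEREO);

        // Exclusive fullscreen "game mode": 1920x1080, 32 bpp, 120 Hz
        // (shutter glasses need a 120 Hz mode). Adjust for your display.
        glutGameModeString("1920x1080:32@120");
        if (glutGameModeGet(GLUT_GAME_MODE_POSSIBLE))
            glutEnterGameMode();
        else
            glutCreateWindow("fallback window");   // windowed fallback

        glutDisplayFunc(display);
        glutMainLoop();
        return 0;
    }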
What you are experiencing has no technical reason; it is simply NVidia product policy. Quad-buffer stereo is considered a professional feature, so NVidia offers it only on their Quadro cards, even though GeForce GPUs could do it just as well. This is not a recent development; it was already like this back in 1999. For example, I had (well, still have) a GeForce2 Ultra back then, which technically was the very same chip as the Quadro; the only difference was the PCI ID reported back to the system. One could trick the driver into thinking you had a Quadro by tinkering with the PCI IDs (either by patching the driver or by soldering an additional resistor onto the graphics card's PCB).
The stereoscopic 3D driver hack for Direct3D was already supported by my GeForce2 back then. At the time, the driver duplicated the rendering commands, applying a translation to the modelview matrix and a skew to the projection matrix. These days it is implemented with a shader and multi-render-target trick.
The NVision3D API does allow you to blit images for specific eyes (this is meant for movie players and image viewers). But it also allows you to emulate quad-buffer stereo: instead of the GL_BACK_LEFT and GL_BACK_RIGHT buffers, create two framebuffer objects, which you bind and use as if they were quad-buffer stereo. After rendering, you blit the resulting images (as textures) through the NVision3D API.
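A sketch of that emulation, assuming a context with framebuffer-object support; width, height, setEyeProjection, drawScene, and presentStereoPair are placeholders, the last one standing in for the NVision3D blit, which I don't show here:

    // Create one color texture + FBO per eye (a depth renderbuffer would
    // be attached the same way; omitted for brevity).
    GLuint fbo[2], tex[2];
    glGenFramebuffers(2, fbo);
    glGenTextures(2, tex);
    for (int eye = 0; eye < 2; ++eye) {
        glBindTexture(GL_TEXTURE_2D, tex[eye]);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo[eye]);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, tex[eye], 0);
    }

    // Per frame: render each eye as if into GL_BACK_LEFT / GL_BACK_RIGHT.
    for (int eye = 0; eye < 2; ++eye) {
        glBindFramebuffer(GL_FRAMEBUFFER, fbo[eye]);
        setEyeProjection(eye);   // placeholder: per-eye view/projection offset
        drawScene();             // placeholder: your scene
    }
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    presentStereoPair(tex[0], tex[1]);  // placeholder: blit via NVision3D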
With as little as 50 lines of management code you can build a program that works seamlessly on both NVision3D and quad-buffer stereo. What NVidia does is pointless, and they should just stop it and properly support quad-buffer stereo pixel formats on consumer GPUs as well.
Simple: you can't. Not the way you're trying to do it.
There is a difference between having a pre-existing program work with stereoscopic glasses and doing what you're trying to do. What you are attempting is to use OpenGL's built-in stereo support: the ability to create a stereoscopic framebuffer, where you can render to the left and right buffers arbitrarily.
NVIDIA does not allow that with their non-Quadro cards. Their drivers have hacks that force stereo onto applications via nVision and the control panel, but NVIDIA's GeForce drivers do not allow you to create stereoscopic framebuffers.
And before you ask, no, I have no idea why NVIDIA doesn't let you control stereo.
Since I was looking into this issue for my own game, I found this link where somebody hacked the USB protocol: http://users.csc.calpoly.edu/~zwood/teaching/csc572/final11/rsomers/
I didn't follow it through, but when I researched this it didn't look too hard to make use of that information. So you might have to implement your own code to support it in your app, which should be possible. Unfortunately, a generic solution would be harder, because then you would have to hack the driver or somehow hook into the OpenGL library and intercept the calls.
I want to provide a virtual webcam via DirectShow that takes the video feed from an existing camera, runs some tracking software against it to find the user's face, and then overlays a 3D model oriented so that it appears to move with the user's face. I am using a third-party API to do the face tracking, and that's working great: I get position and rotation data from it.
My question is: what's the best way to render the 3D model, get it into the video feed, and out to DirectShow?
I am using C++ on Windows XP.
You can overlay your graphics by using a VMR filter, a video renderer with multiple input pins. The VMR-9 filter is based on Direct3D, so you can use Direct3D rendering for your model and feed the output to a secondary pin on the VMR, to be overlaid or alpha-blended with the camera output, which is fed to the primary pin.
If you are using DirectShow, then using DirectX for rendering seems reasonable.
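A minimal sketch of putting the VMR-9 into mixing mode with two input pins, as described above; these are standard DirectShow/VMR-9 interfaces, but graph construction, pin connection, and error handling are elided:

    #include <dshow.h>
    #include <d3d9.h>
    #include <vmr9.h>
    #pragma comment(lib, "strmiids.lib")

    // Camera feed goes to pin 0, the Direct3D-rendered overlay to pin 1.
    // SetNumberOfStreams must be called before any pins are connected.
    HRESULT SetupVmr9(IGraphBuilder* graph, IBaseFilter** outVmr)
    {
        IBaseFilter* vmr = nullptr;
        HRESULT hr = CoCreateInstance(CLSID_VideoMixingRenderer9, nullptr,
                                      CLSCTX_INPROC_SERVER, IID_IBaseFilter,
                                      (void**)&vmr);
        if (FAILED(hr)) return hr;

        IVMRFilterConfig9* cfg = nullptr;
        vmr->QueryInterface(IID_IVMRFilterConfig9, (void**)&cfg);
        cfg->SetNumberOfStreams(2);      // enables mixing mode, 2 input pins
        cfg->Release();

        IVMRMixerControl9* mixer = nullptr;
        vmr->QueryInterface(IID_IVMRMixerControl9, (void**)&mixer);
        mixer->SetAlpha(1, 0.75f);       // blend the overlay stream at 75%
        mixer->Release();

        graph->AddFilter(vmr, L"VMR9");
        *outVmr = vmr;
        return S_OK;
    }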