OpenGL: finding out vertex structure in other apps using OpenGL

Suppose you have a third-party program on your desktop that uses OpenGL (the fixed-pipeline version, <2.0), for example Street View in Google Maps. Is there a way to find out more about what that app is actually rendering in OpenGL? In particular, I'm interested in the vertices that are used for drawing -- how many there are, and where they are.
I could imagine something like a hacked/modified OpenGL driver that shows the actual vertices overlaid as dots on the display, but I can't find any such thing.

gDEBugger can do that for standalone OpenGL applications.
For the record:
There is also WebGL inspector for WebGL.
There is also a variety of OpenGL ES tools for mobile platforms if that is of use, but typically these do not record enough information to completely reconstruct a scene for debugging.
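For what it's worth, these tools work by intercepting the application's OpenGL calls, which is essentially the "modified driver" idea from the question. As a rough illustration, here is a minimal sketch, assuming Linux and an app that uses fixed-function vertex arrays; it interposes on glDrawArrays with an LD_PRELOAD shim and logs where the vertex data lives. It is not a substitute for a real debugger.

    /* glspy.c -- log every glDrawArrays call made by the host application.
     * Build: gcc -shared -fPIC -o libglspy.so glspy.c -ldl
     * Run:   LD_PRELOAD=./libglspy.so ./some_gl_app
     */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <stdio.h>
    #include <GL/gl.h>

    void glDrawArrays(GLenum mode, GLint first, GLsizei count)
    {
        /* Look up the real entry point in the system libGL once. */
        static void (*real_glDrawArrays)(GLenum, GLint, GLsizei) = NULL;
        if (!real_glDrawArrays)
            real_glDrawArrays =
                (void (*)(GLenum, GLint, GLsizei))dlsym(RTLD_NEXT, "glDrawArrays");

        /* Query the currently bound client-side vertex array pointer. */
        GLvoid *ptr = NULL;
        glGetPointerv(GL_VERTEX_ARRAY_POINTER, &ptr);
        fprintf(stderr, "glDrawArrays(mode=0x%x, first=%d, count=%d), vertex array at %p\n",
                mode, first, count, ptr);

        real_glDrawArrays(mode, first, count);
    }

Real tools go much further than this (display lists, VBOs, immediate mode, entry points obtained through glXGetProcAddress, and so on), which is why using gDEBugger or a similar tracer is usually the saner route.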

Related

Autodesk Maya, C++ and OpenGL rendering engine

What graphics engine does Maya use, OpenGL or DirectX? Does it use either at all, given that Maya is written in C++?
For going deep into Maya, is it proper to learn OpenGL, or should one go with DirectX?
My questions are specifically about adding brand-new functionality, such as a new edge system for a certain geometry in Maya.
What graphics engine does Maya use
Its own.
Neither OpenGL nor Direct3D are graphics engines. They're drawing APIs. You push in bunches of data and parameters and shaders to make sense of that data, and rasterized points, lines and triangles on a 2D framebuffer come out on the other side. That's it.
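To make that concrete, here is about all a drawing API does for you. This is a minimal sketch assuming freeglut and the old fixed-function pipeline: you hand OpenGL three vertices and some colors, and a rasterized triangle appears in the framebuffer. Everything else (scenes, materials, animation, file formats) is the application's or engine's job.

    /* Build: gcc triangle.c -o triangle -lglut -lGL  (assumes freeglut) */
    #include <GL/glut.h>

    static void display(void)
    {
        glClear(GL_COLOR_BUFFER_BIT);
        glBegin(GL_TRIANGLES);               /* fixed-function immediate mode */
        glColor3f(1.0f, 0.0f, 0.0f); glVertex2f(-0.5f, -0.5f);
        glColor3f(0.0f, 1.0f, 0.0f); glVertex2f( 0.5f, -0.5f);
        glColor3f(0.0f, 0.0f, 1.0f); glVertex2f( 0.0f,  0.5f);
        glEnd();
        glutSwapBuffers();
    }

    int main(int argc, char **argv)
    {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
        glutCreateWindow("triangle");
        glutDisplayFunc(display);
        glutMainLoop();   /* a rasterized triangle is all OpenGL gives you */
        return 0;
    }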
Maya, like every other graphics program out there, implements its own engine or uses a graphics engine library that may in turn use Direct3D or OpenGL as a backend. In Maya's case, OpenGL is used for the interactive display, but the offline renderer is independent of that.
For going deep into Maya, is it proper to learn OpenGL, or should one go with DirectX?
As long as you don't want to write lower-level-ish Maya plug-ins, you don't have to learn either.
My questions are specifically about adding brand-new functionality, such as a new edge system for a certain geometry in Maya.
You surely want to make that available to the offline renderer as well. As such, neither OpenGL nor Direct3D is of use to you. You have to implement this using the graphics pipeline functions offered by Maya and its renderer. Note that you might also have to patch into external renderers if you want to use those with your new edge features.

Way to have a 3D animated/rigged character in OpenGL

If I want a 3D animated/rigged character in an OpenGL game, how would I get it into OpenGL? If I make an animated/rigged character in 3ds Max, is it possible to use that character in OpenGL? It would be helpful if someone could point out a proper way, or a tutorial link, for exporting an animated model from 3D software to OpenGL.
OpenGL is a very low-level 3D API which only offers the bare bones: it can display triangles and fill them with color, and that's about it. It comes with some helper functions for transforming and projecting vertex data, but it's really very basic.
Animation and rigging are way beyond what it can do. What you need is a framework which supports animation and rigging which then uses OpenGL to display the result.
Since you don't specify any requirements, it's hard to say which one would be good for you. Blender is probably a good place to start: its game engine runs on many platforms and supports OpenGL, animation and rigging.
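To illustrate what such a framework does on top of OpenGL, here is a hedged sketch of CPU skinning; all structure and function names are made up for illustration and are not any particular engine's API. The engine blends each bind-pose vertex by the bone matrices its weights reference, and only the finished positions ever reach OpenGL.

    /* Illustrative CPU skinning: blend each bind-pose vertex by its bone
     * matrices and weights, then submit the skinned positions to OpenGL. */
    #include <GL/gl.h>

    typedef struct { float m[16]; } Mat4;      /* column-major 4x4 matrix  */
    typedef struct {
        float pos[3];                          /* bind-pose position       */
        int   bone[4];                         /* indices of up to 4 bones */
        float weight[4];                       /* blend weights, sum to 1  */
    } SkinnedVertex;

    static void transform_point(const Mat4 *m, const float in[3], float out[3])
    {
        for (int r = 0; r < 3; ++r)
            out[r] = m->m[r+0]*in[0] + m->m[r+4]*in[1] + m->m[r+8]*in[2] + m->m[r+12];
    }

    /* Skin `count` vertices with the current pose and hand them to OpenGL. */
    void draw_skinned(const SkinnedVertex *verts, int count, const Mat4 *bone_pose)
    {
        glBegin(GL_TRIANGLES);
        for (int i = 0; i < count; ++i) {
            float p[3] = {0.0f, 0.0f, 0.0f};
            for (int j = 0; j < 4; ++j) {      /* weighted sum of bone transforms */
                float t[3];
                transform_point(&bone_pose[verts[i].bone[j]], verts[i].pos, t);
                p[0] += verts[i].weight[j] * t[0];
                p[1] += verts[i].weight[j] * t[1];
                p[2] += verts[i].weight[j] * t[2];
            }
            glVertex3fv(p);
        }
        glEnd();
    }

A real engine would also interpolate keyframes to produce bone_pose each frame, and would typically do this blending in a vertex shader rather than on the CPU; OpenGL itself provides none of this.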

Draw Directly To Screen With CUDA/OpenCL

Is it possible yet to draw CUDA/OpenCL results directly to the screen with any existing API (OpenGL, DirectX, something else), skipping the typical draw-a-textured-quad method?
Even with registering resources and using modern CUDA interop methods, we still have to march through entire rendering pipelines just to render an array of colors. For applications like mine where every ms counts, this is a problem.
There's no way to draw directly on the screen with OpenCL or CUDA.
It is a solvable problem, but as far as I know, NVIDIA has not provided the needed APIs because they would be very complicated both to implement and to use, and the performance benefits would be limited at best.
The two main issues are:
1) the differing layouts of the buffers used for rendering (i.e. you'd have to use surface load/store functionality - a mapping into CUDA's address space is not suitable for graphics because the pitch-linear layout has poor performance in that context) and
2) the platform-specific details of incorporating your CUDA/OpenCL output into the presentation model (be it the desktop or a page-flipped full-screen experience, like a Direct3D game, or incorporating your app's output into the desktop). Bear in mind that most desktops these days are themselves page-flipped, so scribbling on the front buffer is frowned upon in any case.
I very much doubt that there is any performance lost in drawing pixels using a textured quad, but you can draw pixels directly into the framebuffer with glDrawPixels.
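For reference, the usual non-textured-quad path the answers mention looks roughly like this. This is a hedged sketch assuming GLEW and a GL 1.5+ context; in a CUDA application the pixel buffer object would be registered once with cudaGraphicsGLRegisterBuffer() and written by a kernel between cudaGraphicsMapResources()/cudaGraphicsUnmapResources(), and here a CPU-side buffer stands in for that kernel output.

    #include <GL/glew.h>

    /* Blit an RGBA pixel buffer to the default framebuffer via a PBO and
     * glDrawPixels, without a textured quad. */
    void present_pixels(GLuint pbo, const void *pixels, int width, int height)
    {
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
        /* In the CUDA path this upload disappears: the kernel writes into the
         * mapped device pointer of the PBO instead. */
        glBufferData(GL_PIXEL_UNPACK_BUFFER, (GLsizeiptr)width * height * 4,
                     pixels, GL_STREAM_DRAW);

        glWindowPos2i(0, 0);                  /* raster position: lower-left */
        glDrawPixels(width, height, GL_RGBA, GL_UNSIGNED_BYTE, (const void *)0);
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
    }

Even this path still goes through the GL pipeline and the platform's presentation model, which is exactly the point the first answer is making.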

3D model manipulation for a Desktop Augmented Reality application

I'm working on an Augmented Reality project that uses multiple markers to get positions for 3D models that I'm planning to overlay. (I'm doing this from scratch using OpenCV and I'm not using ARToolkit or any other off the shelf marker detection libraries).
Environment: Visual C++ 2008, Windows 7, Core 2 Duo, 1 GB RAM, OpenCV 2.3
I want the 3D models to be manipulable by the user, so it will turn into a sort of simulation.
For this I'm planning to use OpenGL. What are your suggestions and recommendations? Can the simulation part be done using OpenGL itself, or will I need to use something like OpenSceneGraph/ODE/Unity 3D/Ogre 3D?
This is for an academic project, so it's better if I can produce a mostly self-coded system rather than using off-the-shelf products.
It would seem that OpenGL by itself is quite enough for your needs (drawing a model with a specific colour and size).
If you're new to OpenGL, and you are not going to be using it for your future projects, it might be easier to use the old fixed-function pipeline, which already has the lighting and color system ready and doesn't require you to learn how to write shaders.
For your project, you will need a texture into which you copy the image from the camera each frame using glTexSubImage2D(), and which you then draw as the background (or you can use glDrawPixels() if you don't require any scaling).
After that, you need your model, complete with normals for lighting. Models can be exported from e.g. Blender or 3ds Max to an ASCII format, which is pretty easy to parse, and then drawn. Colors can be changed by calling glColor3f() before drawing the model (make sure the model itself doesn't specify a different color while being drawn).
Positioning of the models is done with matrices. The old OpenGL has some handy, easy-to-use functions for rotating and translating objects, and there are also functions for scaling them (changing size), so that part is covered quite easily. All you need is to figure out the camera position relative to the marker (which I believe you already get from OpenCV).
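Here is a hedged sketch of that background step, assuming an existing GL context, a texture already allocated and configured with glTexImage2D, and a BGR frame from OpenCV; the parameter names (tex, frame, cam_w, cam_h) are illustrative.

    #include <GL/gl.h>

    void draw_camera_background(GLuint tex, const unsigned char *frame,
                                int cam_w, int cam_h)
    {
        /* Upload the newest camera frame into the existing texture.
         * GL_BGR matches OpenCV's default channel order (needs GL 1.2+). */
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, cam_w, cam_h,
                        GL_BGR, GL_UNSIGNED_BYTE, frame);

        /* Draw it as a full-screen quad behind everything else. */
        glDisable(GL_DEPTH_TEST);
        glEnable(GL_TEXTURE_2D);
        glMatrixMode(GL_PROJECTION); glPushMatrix(); glLoadIdentity();
        glMatrixMode(GL_MODELVIEW);  glPushMatrix(); glLoadIdentity();
        glBegin(GL_QUADS);                 /* flip V: camera rows are top-first */
        glTexCoord2f(0, 1); glVertex2f(-1, -1);
        glTexCoord2f(1, 1); glVertex2f( 1, -1);
        glTexCoord2f(1, 0); glVertex2f( 1,  1);
        glTexCoord2f(0, 0); glVertex2f(-1,  1);
        glEnd();
        glMatrixMode(GL_MODELVIEW);  glPopMatrix();
        glMatrixMode(GL_PROJECTION); glPopMatrix();
        glDisable(GL_TEXTURE_2D);
        glEnable(GL_DEPTH_TEST);
    }

After this you would load the projection matrix from your camera intrinsics, load the modelview matrix from the marker pose estimated with OpenCV, and draw the model.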
If you were to use the forward-compatible OpenGL, you would need to set up vertex buffer objects to hold the model data and write vertex and fragment shaders to shade and display your model. That's more work, in exchange for extra flexibility. But you can use shaders in the old OpenGL as well if you decide you need them (e.g. for some special effects).
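For comparison, a minimal sketch of that forward-compatible setup (assuming GLEW and a GL 2.0+ context; error checking and the per-draw attribute binding are omitted) might look like this:

    #include <GL/glew.h>

    static const char *vs_src =
        "attribute vec3 pos;\n"
        "uniform mat4 mvp;\n"
        "void main() { gl_Position = mvp * vec4(pos, 1.0); }\n";
    static const char *fs_src =
        "uniform vec3 color;\n"
        "void main() { gl_FragColor = vec4(color, 1.0); }\n";

    /* Put the model's vertex positions into a vertex buffer object. */
    GLuint upload_model(const float *verts, int vert_count)
    {
        GLuint vbo;
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, vert_count * 3 * sizeof(float),
                     verts, GL_STATIC_DRAW);
        return vbo;
    }

    /* Compile and link the trivial shader pair above. */
    GLuint build_program(void)
    {
        GLuint vs = glCreateShader(GL_VERTEX_SHADER);
        GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
        glShaderSource(vs, 1, &vs_src, NULL);  glCompileShader(vs);
        glShaderSource(fs, 1, &fs_src, NULL);  glCompileShader(fs);
        GLuint prog = glCreateProgram();
        glAttachShader(prog, vs);
        glAttachShader(prog, fs);
        glLinkProgram(prog);
        return prog;
    }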
Learning how to use a scene graph or an engine (Ogre) can take some time; I would not recommend it for your task.

Is Cairo accelerated on the OpenGL backend?

By this I mean: does Cairo draw lines, shapes and everything else using OpenGL-accelerated primitives or not? And if not, is there a library that does?
The OpenGL backend certainly accelerates some functions. But there are many it can't accelerate. The fact that it's written against GL 2.1 (and thus can't use more advanced features of 3.x or 4.x hardware) means that there is a lot that it simply cannot accelerate.
If you are willing to limit yourself to NVIDIA hardware, NVIDIA just came out with the NV_path_rendering extension, which provides a lot of the 2D functionality you would find with Cairo. Indeed, it's possible that you could write a Cairo backend for it. The path rendering extension is only available on GeForce 8xxx hardware and above.
It's nifty in that it's focused on the vertex pipeline: it doesn't do things like gradients or colors or whatever. That's good, because it still lets you use a fragment shader, which means you get to do pretty much whatever you want ;)
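For a flavour of the extension, its canonical usage is the "stencil, then cover" idiom. This is a hedged sketch assuming a recent GLEW, an NVIDIA driver exposing GL_NV_path_rendering, and a framebuffer with a stencil buffer; the SVG path string is just an example shape.

    #include <string.h>
    #include <GL/glew.h>

    void fill_svg_path(void)
    {
        static const char *svg =
            "M300 300 C 100 400,100 200,300 100,500 200,500 400,300 300Z";
        GLuint path = glGenPathsNV(1);
        glPathStringNV(path, GL_PATH_FORMAT_SVG_NV, (GLsizei)strlen(svg), svg);

        glEnable(GL_STENCIL_TEST);
        glStencilFunc(GL_NOTEQUAL, 0, 0x1F);
        glStencilOp(GL_KEEP, GL_KEEP, GL_ZERO);

        /* 1) rasterize the path's coverage into the stencil buffer ...       */
        glStencilFillPathNV(path, GL_COUNT_UP_NV, 0x1F);
        /* 2) ... then "cover" it; whatever color/shader state is bound here
         *    decides how the covered pixels are shaded.                      */
        glColor3f(1.0f, 0.2f, 0.2f);
        glCoverFillPathNV(path, GL_BOUNDING_BOX_NV);

        glDeletePathsNV(path, 1);
    }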
Cairo is designed to have a flexible backend for rendering. It can use OpenGL for rendering, though support is still listed as "experimental" at this point. For details, see using cairo with OpenGL.
It can also output to the X Window System, Quartz, Win32, image buffers, PostScript, PDF, SVG, and more.
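If you want to try the experimental GL backend, a hedged sketch (assuming Cairo was built with its GLX support enabled, linked via pkg-config cairo-gl, and that you already have a Display and a current GLXContext) looks like this:

    #include <GL/glx.h>
    #include <cairo.h>
    #include <cairo-gl.h>

    /* Create an offscreen GL-backed Cairo surface and draw into it with the
     * ordinary Cairo API; the GL backend decides what it can accelerate. */
    cairo_surface_t *make_gl_surface(Display *dpy, GLXContext ctx, int w, int h)
    {
        cairo_device_t *dev = cairo_glx_device_create(dpy, ctx);
        cairo_surface_t *surf =
            cairo_gl_surface_create(dev, CAIRO_CONTENT_COLOR_ALPHA, w, h);

        cairo_t *cr = cairo_create(surf);
        cairo_set_source_rgb(cr, 0.2, 0.4, 0.9);
        cairo_arc(cr, w / 2.0, h / 2.0, h / 4.0, 0, 2 * 3.14159265);
        cairo_fill(cr);
        cairo_destroy(cr);

        cairo_device_destroy(dev);   /* the surface keeps its own reference */
        return surf;
    }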