I want to write a general purpose utility function that will use an OpenGL Framebuffer Object to create a texture that can be used by some OpenGL program for whatever purpose a third party programmer would like.
Let's say, for the sake of argument, the function looks like:
void createSpecialTexture(GLuint textureID)
{
    MyOpenGLState state;
    saveOpenGLState(state);
    setStateToDefault();
    doMyDrawing();
    restoreOpenGLState(state);
}
What should MyOpenGLState, saveOpenGLState(state), setStateToDefault() and restoreOpenGLState(state) look like to ensure that doMyDrawing will behave correctly and that nothing that I do in doMyDrawing will affect anything else that a third party developer might be doing?
The problem that has been biting me is that OpenGL has a lot of implicit state and I am not sure I am capturing it all.
Update: My main concern is OpenGL ES 2.0, but I thought I would ask the question more generally.
Don't use a framebuffer object to render your texture. Instead, create a new EGLContext and EGLSurface, and use eglBindTexImage to turn your EGLSurface into a texture. This way you are guaranteed that state from doMyDrawing will not pollute the main GL context, and vice versa.
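A minimal sketch of that approach (assuming an already-initialized EGLDisplay display and an EGLConfig config chosen with EGL_BIND_TO_TEXTURE_RGBA support; appSurface, appContext and doMyDrawing are placeholders for the host application's objects and your drawing code):

// Sketch only: error checking omitted; needs a config supporting EGL_BIND_TO_TEXTURE_RGBA.
const EGLint pbufferAttribs[] = {
    EGL_WIDTH,          256,
    EGL_HEIGHT,         256,
    EGL_TEXTURE_FORMAT, EGL_TEXTURE_RGBA,
    EGL_TEXTURE_TARGET, EGL_TEXTURE_2D,
    EGL_NONE
};
EGLSurface pbuffer = eglCreatePbufferSurface(display, config, pbufferAttribs);

const EGLint contextAttribs[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };
EGLContext myContext = eglCreateContext(display, config, EGL_NO_CONTEXT, contextAttribs);

// Render into the pbuffer with the private context; the host context's state is untouched.
eglMakeCurrent(display, pbuffer, pbuffer, myContext);
doMyDrawing();

// Back to the host application's context, then attach the pbuffer colour buffer to the
// texture currently bound to GL_TEXTURE_2D in that context.
eglMakeCurrent(display, appSurface, appSurface, appContext);
eglBindTexImage(display, pbuffer, EGL_BACK_BUFFER);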
As for saving and restoring, glPushAttrib and glPopAttrib will get you very far.
You cannot, however, restore GL to a default state. But since doMyDrawing() uses and/or modifies only state that should be known to you, you can simply set that state to the values you need.
That depends a lot on what you do in your doMyDrawing(), but basically you have to restore every state that you change in that function. Without a look at what is going on inside doMyDrawing() it is impossible to guess what you have to restore.
If the projection or modelview matrix is modified inside doMyDrawing(), remember to initially push both the GL_PROJECTION and GL_MODELVIEW matrices with glPushMatrix and restore them after the drawing with glPopMatrix. Any other state that you modify can be pushed and popped with glPushAttrib and the appropriate attribute bits. Also remember to unbind any texture, FBO, PBO, etc. that you might bind inside doMyDrawing().
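On desktop GL that boils down to something like the rough sketch below; note that the attribute and matrix stacks do not exist in OpenGL ES 2.0, so there you have to fall back to explicitly setting the state you know you changed:

// Rough sketch for desktop (compatibility) GL only.
glPushAttrib(GL_ALL_ATTRIB_BITS);               // server-side state touched by doMyDrawing()
glPushClientAttrib(GL_CLIENT_ALL_ATTRIB_BITS);  // client-side state (vertex arrays, pixel store)

glMatrixMode(GL_PROJECTION);
glPushMatrix();
glMatrixMode(GL_MODELVIEW);
glPushMatrix();

doMyDrawing();

// Unbind anything bound inside doMyDrawing(); ideally query GL_FRAMEBUFFER_BINDING
// beforehand and rebind that instead of 0.
glBindTexture(GL_TEXTURE_2D, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);

glMatrixMode(GL_PROJECTION);
glPopMatrix();
glMatrixMode(GL_MODELVIEW);
glPopMatrix();

glPopClientAttrib();
glPopAttrib();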
Is there any proper way to access the low-level OpenGL objects of VTK in order to modify them from a CUDA/OpenCL kernel using the OpenGL-CUDA/OpenCL interoperability feature?
Specifically, I want to get the GLuint (or unsigned int) member from vtkOpenGLGPUVolumeRayCastMapper that points to the OpenGL 3D texture object where the dataset is stored, in order to bind it to a CUDA surface so that I can access and modify its values from a CUDA kernel implemented by me.
For further information, the process that I need to follow is explained here:
http://rauwendaal.net/2011/12/02/writing-to-3d-opengl-textures-in-cuda-4-1-with-3d-surface-writes/
where the texID object used there (in Steps 1 and 2) is the equivalent of what I want to retrieve from VTK.
At first look at the vtkOpenGLGPUVolumeRayCastMapper functions, I don't see an easy way to do this other than perhaps creating a vtkGPUVolumeRayCastMapper subclass, but even in that case I am not sure exactly what I should modify, since I guess that some other members depend on the 3D texture values and would also have to be updated after modifying it.
So, do you know some way to do this?
Lots of thanks.
Subclassing might work, but you could probably avoid it if you wanted. The important thing is that you get the GL/CUDA API calls in the right order.
First, you have to register the texture with CUDA. This is done using:
cudaGraphicsGLRegisterImage(&cuda_graphics_resource, texture_handle,
                            GL_TEXTURE_3D, cudaGraphicsRegisterFlagsSurfaceLoadStore);
with the stipulation that texture_handle is a GLuint written to by a call to glGenTextures(...)
Once you have registered the texture with CUDA, you can create the surface which can be read or written to in your kernel.
The only thing you have to worry about from here is that VTK does not use the texture between calls to cudaGraphicsMapResources(...) and cudaGraphicsUnmapResources(...). Everything else should just be standard CUDA.
Also, once you map the texture to CUDA and write to it within a kernel, there is no additional work besides unmapping the texture. GL will get the modified texture the next time it is used.
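A hedged sketch of the whole sequence (error checking and includes omitted; cudaResource, texArray and surf are illustrative names, and the kernel doing the surf3Dwrite calls is elided):

// Needs cuda_gl_interop.h; register once, map/unmap around each CUDA access.
cudaGraphicsResource_t cudaResource;
cudaGraphicsGLRegisterImage(&cudaResource, texture_handle, GL_TEXTURE_3D,
                            cudaGraphicsRegisterFlagsSurfaceLoadStore);

// Make sure VTK does not touch the texture between map and unmap.
cudaGraphicsMapResources(1, &cudaResource, 0);

cudaArray_t texArray;
cudaGraphicsSubResourceGetMappedArray(&texArray, cudaResource, 0, 0);

cudaResourceDesc resDesc;
memset(&resDesc, 0, sizeof(resDesc));
resDesc.resType = cudaResourceTypeArray;
resDesc.res.array.array = texArray;

cudaSurfaceObject_t surf;
cudaCreateSurfaceObject(&surf, &resDesc);

// ... launch a kernel here that writes to 'surf' via surf3Dwrite() ...

cudaDestroySurfaceObject(surf);
cudaGraphicsUnmapResources(1, &cudaResource, 0);
// GL (and therefore VTK) sees the modified texture the next time it is used.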
I am writing a small tool that draws an OpenGL overlay on top of a game which is closed source. The game uses SDL, so I am just hooking into SDL_GL_SwapWindow and doing my own stuff. However, this kind of hooking results in some side effects in the game itself. I found a solution that basically wraps my own calls with the deprecated glPushAttrib/glPopAttrib, but this solves only half of the problems. I am still getting random texture flickering in the game (I mean the game's textures; mine show fine). What could be the reason for this flickering? Can my own textures interfere with the game's textures? Do I need to isolate my own calls, and how can I do it?
What could be the reason of this flickering?
If the game uses shaders, then glPushAttrib / glPopAttrib will not take care of all the state you may be clobbering. The attribute stack has been deprecated, and the program may use state that is either not covered by it, or for which certain attribute bits in the compatibility profile have been reused or expanded to cover further state. I recommend not using the attribute stack at all, because it's hard to get right.
Can my own textures interfere with game textures?
Yes. Say you left a 2D texture active in a texture unit that's later being used for a 1D texture. If the host program does not use shaders, then the GL_TEXTURE_2D will take precedence over the GL_TEXTURE_1D. It's a (IMHO poor) design choice of OpenGL that you can have multiple texture targets being bound to the same texture unit at the same time and which one is used to deliver texels depends on the individual targets' precedence.
Do I need to isolate my own calls
Yes.
and how can I do it?
Two possible solutions:
Create a separate OpenGL context for just your own stuff. Use {wgl,glX}GetCurrentContext and {wglGetCurrentDC,glXGetCurrentDrawable} to retrieve the OpenGL context and drawable active at the moment you're "jumping" in. If you don't have a context already, you can use the drawable just retrieved to create a matching OpenGL context; optionally set up object namespace sharing. Switch to your context, draw your stuff and switch back to the host program's context. Major drawback: switching OpenGL contexts is quite expensive.
Before switching state around, use glGet… to retrieve the state active before doing so and restore the old state before returning to the host program.
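For the second option, a minimal sketch of the query-and-restore pattern, assuming the overlay only touches the current program, the 2D texture binding on unit 0 and blending (drawOverlay is a placeholder for your own rendering; extend the list to every piece of state you actually change):

// Save the small set of state the overlay touches (extend as needed).
GLint prevProgram, prevActiveUnit, prevTexture0;
GLboolean prevBlend;
glGetIntegerv(GL_CURRENT_PROGRAM, &prevProgram);
glGetIntegerv(GL_ACTIVE_TEXTURE, &prevActiveUnit);
prevBlend = glIsEnabled(GL_BLEND);
glActiveTexture(GL_TEXTURE0);
glGetIntegerv(GL_TEXTURE_BINDING_2D, &prevTexture0);

drawOverlay();   // the overlay's own rendering

// Restore exactly what was saved.
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, (GLuint)prevTexture0);
glActiveTexture((GLenum)prevActiveUnit);
glUseProgram((GLuint)prevProgram);
if (prevBlend) glEnable(GL_BLEND); else glDisable(GL_BLEND);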
I'm working on a plugin for a scripting language that allows the user to access the OpenGL 1.1 command set. On top of that, all functions of the scripting language's own gfx command set are transparently redirected to appropriate OpenGL calls. Normally, the user should use either the OpenGL command set or the scripting language's inbuilt gfx command set which basically contains just your typical 2D drawing commands like DrawLine(), DrawRectangle(), DrawPolygon(), etc.
Under certain conditions, however, the user might want to mix calls to the OpenGL and the inbuilt gfx command sets. This leads to the problem that my OpenGL implementations of inbuilt commands like DrawLine(), DrawRectangle(), DrawPolygon(), etc. have to be able to deal with whatever state the OpenGL state machine might currently be in.
Therefore, my idea was to first save all state information on the stack, then prepare a clean OpenGL context needed for my implementations of commands like DrawLine(), etc. and then restore the original state. E.g. something like this:
glPushAttrib(GL_ALL_ATTRIB_BITS);
glPushClientAttrib(GL_CLIENT_ALL_ATTRIB_BITS);
glPushMatrix();
....prepare OpenGL context for my needs.... --> problem: see below #2
....do drawing....
glPopMatrix();
glPopClientAttrib();
glPopAttrib();
Doing it like this, however, leads to several problems:
1. glPushAttrib() doesn't push all attributes; e.g. pixel pack and unpack state, render mode state, and select and feedback state are not pushed. Extension states are not saved either, but that is not important as my plugin is not designed to support extensions. Saving and restoring the other information (pixel pack and unpack) could probably be implemented manually using glGet().
2. Big problem: How should I prepare the OpenGL context after having saved all state information? I could save a copy of a "clean" state on the stack right after OpenGL's initialization and then try to pop it, but for this I'd need a function to move data inside the stack, i.e. a function to copy or move a saved state from the back to the top of the stack so that I can pop it. But I didn't see a function that can accomplish this...
It's probably going to be very slow, but that is something I can live with because the user should not mix OpenGL and inbuilt gfx calls. If he does so nevertheless, he will have to live with very poor performance.
After these introductory considerations I'd finally like to present my question: is it possible to "beat" the OpenGL state machine somehow? By "beating" I mean the following: is it possible to completely save all current state information, then restore the default state, prepare it for my needs and do the drawing, and then finally restore the complete previous state so that everything is exactly as it was before? For example, an OpenGL-based version of the scripting language's DrawLine() command would then do something like this:
1. Save all current state information
2. Restore default state, set up a 2D projection matrix
3. Draw the line
4. Restore all saved state information so that the state is exactly the same as before
Is that possible somehow? It doesn't matter if it's very slow as long as it is 100% guaranteed to leave the state exactly as it was before.
You can simply use different contexts, especially if you do not care about performance. Just keep one context for your internal gfx operations and another one the user might mess with, and bind the appropriate one to your window (and thread).
The way you describe it, it looks like you never want to share objects with the user's GL stuff, so simple "unshared" contexts will do fine. All you seem to want to share is the framebuffer, and the GL framebuffer (including back and front color buffers, depth buffer, stencil, etc.) is part of the drawable/window, not the context, so you get access to it with any context once you make that context current. Changing contexts mid-frame is not a problem.
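On Windows, for instance, the per-frame switch could look roughly like this (myContext is assumed to have been created earlier with wglCreateContext on the same pixel format as the window, and drawInternalGfx stands for your inbuilt gfx implementation); the glX calls follow the same pattern:

// Sketch for WGL; error checking omitted.
HDC   userDC      = wglGetCurrentDC();       // drawable the user's GL code renders to
HGLRC userContext = wglGetCurrentContext();  // the context the user might mess with

wglMakeCurrent(userDC, myContext);           // switch to the plugin's private context
drawInternalGfx();                           // inbuilt DrawLine()/DrawRectangle()/... calls

wglMakeCurrent(userDC, userContext);         // hand the drawable back to the user's context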
After googling a lot, this is the only place left for me to ask the following question.
I'm trying to write a simple OpenGL 3.x sample to learn how the new programmable pipeline (shaders) works. This tutorial is really helpful (and uses GLUT to keep things simple, as you can see) and great as a starting point. But the nightmare and the questions start when I try to use the predefined GLUT objects (teapots, e.g.) and to move or rotate them locally the way the old, deprecated functions did (glScalef, glTranslatef, glRotatef, glLoadIdentity, glMultMatrixf, glPushMatrix and glPopMatrix...), and for now that seems impossible for me.
If I try to do that using a handy transformation matrix with a translation, it moves the whole scene globally (the two or more objects rotate, not only one), not locally. I've found this similar question here, but I'm still in a mess... (does it only work with VBOs? does every object in the scene have to have its own shader?, ...)
I don't know if I've explained this clearly. Every tutorial I've found about this topic always uses a single object. If someone knows a well-written tutorial or sample code that explains this, I'd much appreciate your help.
I will assume that, when you say "OpenGL 3.x", what you mean is core OpenGL 3.1 or greater. That is, you are not using a compatibility context.
First, you cannot use GLUT's predefined objects anymore. Nor can you use GLU's predefined objects. If that's too much of a limitation for you, then I suggest you create a compatibility context.
The reason all of your objects move is that you didn't reset the uniforms between drawing the two objects. Uniforms are data in shaders that are set from OpenGL, but will not change over multiple executions of a shader within a single glDraw* call. The matrix functions in previous GL versions effectively set the equivalent of uniforms, so simply convert those function calls into uniform setting.
If you want to see a tutorial series that uses GL 3.x core, then you can look at my tutorial series.
The key here is that you need to maintain your own transformation hierarchy. glPushMatrix creates a copy of the current matrix on top of the active stack; you then apply some transform to it, and anything you draw receives that transformation. glPopMatrix goes one step back up in the hierarchy.
In the case of uniforms you no longer have matrix stacks. So instead of glPushMatrix you create a copy of the current transformation-level matrix, apply the sub-transform and load that new matrix into the uniform.
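A minimal sketch of that per-object uniform update (modelLoc, buildTranslation, drawObjectA and drawObjectB are placeholder names for your own location lookup, matrix math and draw calls):

// Assumes the shader has "uniform mat4 model;" and the location was queried once:
// GLint modelLoc = glGetUniformLocation(program, "model");
glUseProgram(program);

float modelA[16], modelB[16];                 // column-major 4x4 matrices
buildTranslation(modelA, -2.0f, 0.0f, 0.0f);  // hypothetical helper: transform for object A
buildTranslation(modelB,  2.0f, 0.0f, 0.0f);  // hypothetical helper: transform for object B

// Each object gets its own model matrix right before its draw call,
// so only that object moves instead of the whole scene.
glUniformMatrix4fv(modelLoc, 1, GL_FALSE, modelA);
drawObjectA();                                // e.g. glBindVertexArray + glDrawElements

glUniformMatrix4fv(modelLoc, 1, GL_FALSE, modelB);
drawObjectB();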
I'm having a rough time trying to set up this behavior in my program.
Basically, I want it so that when the user presses the "a" key, a new sphere is displayed on the screen.
How can you do that?
I would probably do it by simply having some kind of data structure (array, linked list, whatever) holding the current "scene". Initially this is empty. Then when the event occurs, you create some kind of representation of the new desired geometry, and add that to the list.
On each frame, you clear the screen and go through the data structure, mapping each representation into a suitable set of OpenGL commands. This is really standard.
The data structure is often referred to as a scene graph, it is often in the form of a tree or graph, where geometry can have child-geometries and so on.
If you're using the GLUT library (which is pretty standard), you can take advantage of its automatic primitive generation functions, like glutSolidSphere. You can find the API docs here. Take a look at section 11, 'Geometric Object Rendering'.
As unwind suggested, your program could keep some sort of list, but of the parameters for each primitive rather than the actual geometry. In the case of the sphere, this would be position/radius/slices. You can then use the GLUT functions to easily draw the objects. Obviously this limits you to what GLUT can draw, but that's usually fine for simple cases.
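A hedged sketch of that idea with GLUT (SphereParams, spheres, keyboard and display are illustrative names; the callbacks are assumed to be registered with glutKeyboardFunc and glutDisplayFunc, and the window to be double buffered):

typedef struct { float x, y, z, radius; } SphereParams;

static SphereParams spheres[64];   // simple fixed-size "scene" list
static int sphereCount = 0;

static void keyboard(unsigned char key, int mx, int my)
{
    if (key == 'a' && sphereCount < 64) {
        SphereParams s = { (float)sphereCount * 1.5f, 0.0f, 0.0f, 0.5f };
        spheres[sphereCount++] = s;   // add a new sphere to the scene
        glutPostRedisplay();          // ask GLUT to redraw
    }
}

static void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    for (int i = 0; i < sphereCount; ++i) {
        glPushMatrix();
        glTranslatef(spheres[i].x, spheres[i].y, spheres[i].z);
        glutSolidSphere(spheres[i].radius, 16, 16);  // radius, slices, stacks
        glPopMatrix();
    }
    glutSwapBuffers();
}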
Without some more details of what environment you are using it's difficult to be specific, but here are a few pointers to things that can easily go wrong when setting up OpenGL:
Make sure you have the camera set up to look at the point where you are drawing the sphere. This can be surprisingly hard, and the simplest approach is to use gluLookAt from the OpenGL Utility Library (GLU). Make sure your near and far planes are set to sensible values.
Turn off backface culling, at least to start with. Sure, in production code backface culling gives you a quick performance gain, but it's remarkably easy to get an object's winding order or normals wrong and not see it because you're looking at the culled face.
Remember to call glFlush to make sure that all commands are executed. Drawing to the back buffer and then failing to swap buffers (glutSwapBuffers under GLUT) is also a common mistake.
Occasionally you can run into issues with buffer formats - although if you copy from sample code that works on your system this is less likely to be a problem.
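Putting those pointers together, a minimal fixed-function display callback might look like this (the camera values and sphere size are just examples):

// Minimal example frame: camera, sensible near/far planes, draw, swap.
static void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(60.0, 4.0 / 3.0, 0.1, 100.0);   // sensible near/far planes

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(0.0, 0.0, 5.0,    // eye: pulled back from the origin
              0.0, 0.0, 0.0,    // look at the point where the sphere is drawn
              0.0, 1.0, 0.0);   // up vector

    glDisable(GL_CULL_FACE);    // leave culling off while debugging
    glutSolidSphere(1.0, 16, 16);

    glutSwapBuffers();          // double-buffered window: swap rather than rely on glFlush
}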
Graphics coding tends to be quite straightforward to debug once you have the basic environment correct, because the output is visual; but setting up the rendering environment on a new system can always be a bit tricky until you have that first cube or sphere rendered. I would recommend obtaining a sample or template and modifying that to start with, rather than trying to set up the rendering window from scratch. Using GLUT to check out first drafts of OpenGL calls is a good technique too.