JOGL render outside of display(GLAutoDrawable drawable) - opengl

For my purposes, I want to clear the drawing surface of my canvas, grab the current GL2 object, save it to a managing wrapper and use it one step later, after returning from the canvas's display() method (which in turn calls display(GLAutoDrawable drawable)). It seems, though, that after returning from display(), something happens that causes the GL object to stop working: when I try to get an available texture ID by calling glGenBuffers(1, buffer), I receive 0, which is not a valid texture ID to load a texture into.
Is there a way to make the GL object work outside of the display method? (gl.getContext().makeCurrent() does not change anything ...)
Edit: After tinkering around, it seems that the call to glGenTextures actually does nothing: when I create a texture within the display method and then make the same call later from outside display(), I get the same texture ID that was in the buffer before, so the call does not change the value within the buffer. glGetError also returns 0 ...
Edit 2: The question "Java: openGL: JOGL: What happens behind the scenes when I call the display() method?" covered this, but no answer was given on how to do it. It might be interesting to see a step-by-step method; if the code existed somewhere once, one might not need to modify it ...

Use GLAutoDrawable.invoke() instead; it is safer than trying to get the GLContext stored in the GLAutoDrawable and calling makeCurrent() on it. In any case, you are trying to defeat the main purpose of GLEventListener, which is both useless and dangerous. Finally, the best place to get advice about JOGL is the official JogAmp forum, as we can't be everywhere:
http://forum.jogamp.org

Related

Multiple calls to cv::ogl::Texture2D.copyFrom() results in cv::Exception (-219)

I am rendering two views per frame on an HMD, and it's kind of complicated right now because I use OpenCV to load images and process intermediary results while the rest is OpenGL, but I still want it to work. I am using OpenCV 3.1; any help would be greatly appreciated, even just some advice.
Application details:
Per view (left and right eye) I take four images as cv::Mat and copy them into four cv::ogl::Texture2D objects. Then I bind these textures to active OpenGL textures to configure my shader and draw to a framebuffer. I read the pixels of the frame buffer again (glReadPixels()) as a cv::Mat and do some postprocessing. This cv::Mat ("synthView") is getting copied to another cv::ogl::Texture2D which is rendered on a 2D screenspace quad for the view.
Here's some console output I logged for each call to the cv::ogl::Texture2D objects. No actual code!
// First iteration for my left eye view
colorImageTexture[LEFT].copyFrom(imageLeft, true); //view1
colorImageTexture[RIGHT].copyFrom(imageRight, true); //view1
depthImageTexture[LEFT].copyFrom(depthLeft, true); //view1
depthImageTexture[RIGHT].copyFrom(depthRight, true); //view1
colorImageTexture[i].bind(); //left
depthImageTexture[i].bind(); //left
colorImageTexture[i].bind(); //right
depthImageTexture[i].bind(); //right
synthesizedImageTexture.copyFrom(synthView, true); //frame0, left_eye done
// Second iteration for my right eye view, reusing colorImageTexture[LEFT] the first time
colorImageTexture[LEFT].copyFrom(imageLeft, true); //view2 // cv::Exception!
The code was working when I caught the exceptions and used the Oculus DK2 instead of the CV1. As you can see, I can run through one rendered view, but trying to render the second view throws an exception in the copyFrom() method at ogl::Buffer::unbind(ogl::Buffer::PIXEL_UNPACK_BUFFER).
The exception occurs after all ogl::Texture2D objects have been used once and the first one gets "reused", which means that it will not call ogl::Texture2D::create(...) in the copyFrom() function!
Details of the cv::Exception:
code: -219
err: The specified operation is not allowed in the current state
func: cv::ogl::Buffer::unbind
file: C:\\SDKs\\opencv3.1\\sources\\modules\\core\\src\\opengl.cpp
Call stack details:
cv::ogl::Texture2D::copyFrom(const cv::_InputArray &arr, bool autoRelease);
gets called from my calls, which invokes
ogl::Buffer::unbind(ogl::Buffer::PIXEL_UNPACK_BUFFER);
In that, there is an OpenGL call to
gl::BindBuffer(target, 0); // target is "ogl::Buffer::PIXEL_UNPACK_BUFFER"
with a direct call to CV_CheckGlError() afterwards, which throws the cv::Exception. HAVE_OPENGL is apparently not defined in my code. The GL error is GL_INVALID_OPERATION.
According to the specification of glBindBuffer:
void glBindBuffer(GLenum target, GLuint buffer);
While a non-zero buffer object name is bound, GL operations on the
target to which it is bound affect the bound buffer object, and
queries of the target to which it is bound return state from the bound
buffer object. While buffer object name zero is bound, as in the
initial state, attempts to modify or query state on the target to
which it is bound generates an GL_INVALID_OPERATION error.
If I understand it correctly, gl::BindBuffer(target, 0) is causing this error because the buffer argument is 0 and I somehow alter the target. I am not sure what the target actually is, but maybe my glReadPixels() interferes with it?
Can somebody point me in the right direction to get rid of this exception? I just used the sample OpenCV code to construct my code.
Update: My shader code can trigger the exception. If I simply output the unprojected coordinates or vec4(0,0,0,1.0f), the program breaks because of the exception. Else, it continues but I cannot see my color texture on my mesh.
Given the information in your question, I believe the issue is with asynchronous writes to a pixel buffer object (PBO). Your code is likely trying to bind buffer 0 (unbinding the buffer) while that buffer is still being written to by an earlier asynchronous call.
One way to overcome this is to use sync objects: create a fence with glFenceSync() and wait on it with glClientWaitSync() or glWaitSync(). Note that waiting for buffers to finish their work has a negative impact on performance.
Another way is to use multiple buffers and switch between them on consecutive frames; this makes it less likely that a buffer is still in use when you unbind it.
The actual answer: the OpenCV code checks for errors with glGetError(). If your own code never calls glGetError(), errors from your GL calls stay in the error queue, and cv::ogl::Texture2D::copyFrom() will pick up such a stale error and throw an exception.

Multiple QOpenGLWidgets and QOpenGLTextures. How to destroy textures?

I've set QCoreApplication::setAttribute(Qt::AA_ShareOpenGLContexts) in order to share my textures across multiple QOpenGLWidgets, and it works for the most part. However, whenever I try to call destroy() on a texture it fails, because QOpenGLTexture checks whether the contexts match before deleting. Is this intended behavior or a bug? If it's intended, how can I manage textures with multiple windows? Do I need to store a pointer to every window, trace the texture back to the one I created it on, and then call makeCurrent()? Surely there must be a better way?

VTK OpenGL objects (3D texture) access from CUDA

Is there any proper way to access the low level OpenGL objects of VTK in order to modify them from a CUDA/OpenCL kernel using the openGL-CUDA/OpenCL interoperability feature?
Specifically, I want to get the GLuint (or unsigned int) member from vtkOpenGLGPUVolumeRayCastMapper that points to the OpenGL 3D texture object where the dataset is stored, in order to bind it to a CUDA surface and be able to access and modify its values from a CUDA kernel implemented by me.
For further information, the process that I need to follow is explained here:
http://rauwendaal.net/2011/12/02/writing-to-3d-opengl-textures-in-cuda-4-1-with-3d-surface-writes/
where the texID object used there (in Steps 1 and 2) is the equivalent to what I want to retrieve from VTK.
At first look at the vtkOpenGLGPUVolumeRayCastMapper functions, I don't find an easy way to do this, other than maybe creating a vtkGPUVolumeRayCastMapper subclass. But even in that case I am not sure what exactly I should modify, since I guess that some other members depend on the 3D texture values and would also need to be updated after modifying it.
So, do you know some way to do this?
Lots of thanks.
Subclassing might work, but you could probably avoid it if you wanted. The important thing is that you get the GL/CUDA API calls in the right order.
First, you have to register the texture with CUDA. This is done using:
cudaGraphicsGLRegisterImage(&cuda_graphics_resource, texture_handle,
GL_TEXTURE_3D, cudaGraphicsRegisterFlagsSurfaceLoadStore);
with the stipulation that texture_handle is a GLuint written to by a call to glGenTextures(...)
Once you have registered the texture with CUDA, you can create the surface which can be read or written to in your kernel.
The only thing you have to worry about from here is that VTK does not use the texture between a call to cudaGraphicsMapResources(...) and cudaGraphicsUnmapResources(...). Everything else should just be standard CUDA.
Also once you map the texture to CUDA and write to it within a kernel, there is no additional work besides unmapping the texture. GL will get the modified texture the next time it is used.

Restoring OpenGL State

I want to write a general purpose utility function that will use an OpenGL Framebuffer Object to create a texture that can be used by some OpenGL program for whatever purpose a third party programmer would like.
Let's say, for argument's sake, that the function looks like:
void createSpecialTexture(GLuint textureID)
{
    MyOpenGLState state;
    saveOpenGLState(state);
    setStateToDefault();
    doMyDrawing();
    restoreOpenGLState(state);
}
What should MyOpenGLState, saveOpenGLState(state), setStateToDefault() and restoreOpenGLState(state) look like to ensure that doMyDrawing will behave correctly and that nothing that I do in doMyDrawing will affect anything else that a third party developer might be doing?
The problem that has been biting me is that OpenGL has a lot of implicit state and I am not sure I am capturing it all.
Update: My main concern is OpenGL ES 2.0 but I thought I would ask the question more generally
Don't use a framebuffer object to render your texture. Rather, create a new EGLContext and EGLSurface. You can then use eglBindTexImage() to turn your EGLSurface into a texture. This way you are guaranteed that state from doMyDrawing() will not pollute the main GL context, and vice versa.
As for saving and restoring, glPushAttrib and glPopAttrib will get you very far (note, though, that they do not exist in OpenGL ES 2.0, where you have to save and restore state manually).
You cannot, however, restore GL to a default state. But since doMyDrawing() uses and/or modifies only state that should be known to you, you can just set that state to the values you need.
That depends a lot on what you do in doMyDrawing(), but basically you have to restore every state you change in that function. Without a look at what is going on inside doMyDrawing(), it is impossible to guess what you have to restore.
If modifications to the projection or modelview matrix are done inside doMyDrawing(), remember to initially push both the GL_PROJECTION and GL_MODELVIEW matrices with glPushMatrix and restore them after the drawing with glPopMatrix. Any other state that you modify can be pushed and popped with glPushAttrib and the right attribute bits. Remember also to unbind any texture, FBO, PBO, etc. that you might bind inside doMyDrawing().

Object Oriented Programming and OpenGL

I want to use OpenGL as graphics of my project. But I really want to do it in a good style. How can I declare a member function "draw()" for each class to call it within OpenGL display function?
For example, I want something like this:
class Triangle
{
public:
void draw()
{
glBegin(GL_TRIANGLES);
...
glEnd();
}
};
Well, it also depends on how much time you have and what is required. Your approach is not bad, if a little old-fashioned. Modern OpenGL uses shaders, but I guess this is not covered by your (school?) project. For that purpose, and for starters, your approach should be completely OK.
Besides shaders, if you wanted to progress a little further, you could also go in the direction of using more generic polygon objects, simply storing a list of vertices and combine that with a separate 'Renderer' class that would be capable of rendering polygons, consisting of triangles. The code would look like this:
renderer.draw(triangle);
Of course, a polygon can have some additional attributes like color, texture, transparency, etc. You can have some more specific polygon classes like TriangleStrip, TriangleFan, etc., also. Then all you need to do is to write a generic draw() method in your OpenGL Renderer that will be able to set all the states and push the vertices to rendering pipeline.
When I was working on my PhD, I wrote a simulator which did what you want to do. Just remember that even though your code may look object oriented, the OpenGL engine still renders things sequentially. Also, the sequential nature of matrix algebra, which is under the hood in OpenGL, is sometimes not in the order you would logically expect (when do I translate, when do I draw, when do I rotate, etc.?).
Remember LOGO back in the old days? It had a turtle, which was a pen; you moved the turtle around and it would draw lines. If the pen was down, it drew; if the pen was up, it did not. That was my mindset when I worked on this program. I would start the "turtle" at a familiar coordinate (0, 0, 0), use the math to translate it (move it to the center of the object I want to draw), then call the draw() methods you are trying to write, drawing my shape based on the relative coordinate system where the "turtle" is, not from the absolute (0, 0, 0). Then I would move the turtle, draw, etc. Hope that helps...
No, it won't work like this. The problem is that the GLUT display function is exactly one function, so if you wanted to draw a bunch of triangles, you could still only register one of their draw() functions as the GLUT display function. (Besides, pointers to member functions in C++ are a tricky topic as well.)
So as suggested above, go for a dedicated Renderer class. This class would know about all drawable objects in your application.
class Renderer {
    std::list<Drawable*> _objects; // pointers, so polymorphic draw() works and nothing is sliced
public:
    void drawAllObjects() {
        // iterate through _objects and call the respective draw() functions
    }
};
Your GLUT display function would then be a static function that calls drawAllObjects() on a (global or not) renderer object.
Ah, good old immediate-mode OpenGL. :) That routine there looks fine.
I would probably make the 'draw' method virtual, though, and inherit from a 'Drawable' base type that specifies the methods such classes have.