OpenCL/OpenGL interop using clCreateFromGLTexture fails to draw to texture (texture black) - C++

The setup is a little complicated, so I will do my best to detail it.
First, I am attempting OpenCL/OpenGL interop. The code works when the interop cl::ImageGL is not used, so the basics are there. The project uses three OpenGL contexts (it took a lot to get the OpenCL context to share with them). I am using QGLWidgets for the contexts. A hidden QGLWidget is created first and the other two share its context. Each of the two visible QGLWidgets runs in its own thread. The hidden QGLWidget is transferred to the thread where the OpenCL context is created.
QGLFormat qglFormat;
qglFormat.setVersion(3, 3);
qglFormat.setProfile(QGLFormat::CoreProfile);
m_hiddenGl=new GLHiddenWidget(qglFormat);
m_hiddenGl->setVisible(false);
m_view1=new GLWidget(qglFormat, m_hiddenGl);
m_view2=new GLWidget(qglFormat, m_hiddenGl);
...
QThread *processThread=m_process.qThread();
m_hiddenGl->doneCurrent();
m_hiddenGl->context()->moveToThread(processThread);
GLWidget is a custom class that launches its own thread and moves its context there; GLHiddenWidget is also a custom class, but it basically just overrides whatever is needed to keep makeCurrent from being called by the main thread.
Inside the process thread, the following runs at startup:
m_hiddenGl->makeCurrent();
hdc=wglGetCurrentDC();
glHandle=wglGetCurrentContext();
cl_context_properties clContextProps[]={
    CL_CONTEXT_PLATFORM, (cl_context_properties)m_openCLPlatform(),
    CL_WGL_HDC_KHR, (cl_context_properties)hdc,
    CL_GL_CONTEXT_KHR, (cl_context_properties)glHandle, 0
};
m_openCLContext=cl::Context(m_openCLDevice, clContextProps, NULL, NULL, &error);
This all is a go. From this point several kernels are executed sequentially on incoming image data. All of the kernels succeed (no errors), but the one kernel that writes its output to an OpenGL texture fails to write anything. When using an OpenCL cl::Image2d instead, it works fine (produces the correct output), even when the OpenCL context is created for interop.
The OpenGL texture is created after all of the OpenGL contexts are created and after the OpenCL context is created (also in the same thread as the OpenCL context). At the beginning of the process thread the texture is generated by the hidden QGLWidget with glGenTextures. Then all of the kernels are created, along with the other OpenCL buffers and images. Right before the kernel is executed the following is done:
if(initBuffer) // runs only if the buffer size changed; always runs the first time
{
    m_hiddenGl->makeCurrent();
    glBindTexture(GL_TEXTURE_2D, m_texture);
    // I have attempted to put data in; the result from the kernel is always black
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glBindTexture(GL_TEXTURE_2D, 0);
    glFinish();

    // m_flags is CL_MEM_READ_WRITE
    m_imageGL=cl::ImageGL(m_openCLContext, m_flags, GL_TEXTURE_2D, 0, m_texture, &error);
}
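The dispatch around the kernel follows the usual acquire/release interop pattern; a sketch of it (the queue and kernel names here are placeholders, not my exact code):
// Sketch only: 'queue' and 'kernel1' stand in for my actual objects
std::vector<cl::Memory> glObjects(1, m_imageGL);
glFinish();                                 // GL must be done with the texture first
queue.enqueueAcquireGLObjects(&glObjects);  // hand the texture over to OpenCL
kernel1.setArg(2, m_imageGL);
queue.enqueueNDRangeKernel(kernel1, cl::NullRange, cl::NDRange(width, height), cl::NullRange);
queue.enqueueReleaseGLObjects(&glObjects);  // hand it back to OpenGL
queue.finish();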
And a simplified kernel:
__kernel void kernel1(__read_only image2d_t src1, __read_only image2d_t src2, __write_only image2d_t dst)
{
    int2 coord=(int2)(get_global_id(0), get_global_id(1));
    uint4 value=255;
    write_imageui(dst, coord, convert_uint4(value));
}
Even if I do not display the texture, the image is still black. The image is read back from OpenCL and saved to disk in both cases; with cl::Image2d it is correct, with cl::ImageGL it is black.

I've had the same problem, and using write_imagef() instead of write_imageui() in the OpenCL kernel solved it (don't forget to divide the RGBA values by 255.0!), without changing the pixel format of the OpenGL texture. You may not have the opportunity to change the format, especially if it's an existing texture created somewhere else in a large app.
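Applied to the simplified kernel in the question, the change is roughly this (a sketch):
// write_imagef variant: values are normalized floats, so scale 0-255 down by 255.0f
uint4 value=(uint4)(255);
write_imagef(dst, coord, convert_float4(value)/255.0f);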

You'd have to post more code for us to find the error.
In any case, I've written an example program that does something very similar: it uses OpenCL to compute a Mandelbrot fractal, then draws it with OpenGL inside a QGLWidget. The source is here: https://github.com/kylelutz/compute/blob/master/example/mandelbrot.cpp.
Hope that helps.

Well, as noted in my comment above, I found my problem was in the definition of the OpenGL texture. Since I used write_imageui in the OpenCL kernel, the OpenGL texture had to be defined as follows:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8UI, width, height, 0, GL_RGBA_INTEGER, GL_UNSIGNED_BYTE, NULL);
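Note that if the texture is ever sampled on the OpenGL side, integer formats are not filterable, so the GL_LINEAR filters from the question also need to become GL_NEAREST. A fuller sketch of the allocation, using the same m_texture as above:
glBindTexture(GL_TEXTURE_2D, m_texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8UI, width, height, 0, GL_RGBA_INTEGER, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); // integer textures can't use GL_LINEAR
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glBindTexture(GL_TEXTURE_2D, 0);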
I have since made some time and created an example program that uses both OpenGL/OpenCL interop and threaded QGLWidgets. You can read my comments on it here http://www.krazer.com/?p=109 and/or you can get the source from github.com

Related

OpenGL Invalid Texture or State

We are developing a C++ plug-in within an OpenGL application. The application calls a "render" method on our plug-in as necessary. While rendering our textures, we noticed that sometimes some of them are drawn completely white even though they were created with valid data; which texture, and when, appears to be random. While investigating what could cause some of the textures to render white, I noticed that simply retrieving the size of a texture (even for the ones that render correctly) doesn't work. Here is the simple code to create the texture and retrieve its size:
GLuint textureId;
glGenTextures(1, &textureId);
glBindTexture(GL_TEXTURE_2D, textureId);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, imageWidth, imageHeight, 0,
             GL_BGRA, GL_UNSIGNED_BYTE, imageData);
// try to lookup the size of the texture
int textureWidth = 0, textureHeight = 0;
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &textureWidth);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, &textureHeight);
glBindTexture(GL_TEXTURE_2D, 0);
The image width and height passed to glTexImage2D are 1536 x 1536, yet the values returned by glGetTexLevelParameter are 16384 x 256. In fact, any width and height I pass to glTexImage2D results in 16384 x 256; if I pass 64 x 64, I still get back 16384 x 256.
I am using the same simple texture load/render code in another standalone test application and it works correctly all the time. However, I get these white textures when I use the code within this larger application. I have also verified that glGetError() returns 0.
I am assuming the containing application is setting some OpenGL state that is causing problems when we try to render our textures. Do you have any suggestions for things to check that could cause these white textures OR invalid texture dimensions?
Update
Both my test application that renders correctly and the integrated application that doesn't render correctly are running within a VM on Windows 7 with Accelerated 3D Graphics enabled. Here is the VM environment:
CentOS 7.0
OpenGL 2.1 Mesa 9.2.5
Did you check that a valid OpenGL context is active when this code is called? The values you get back may be uninitialized garbage left in the variables; they are not modified if glGetTexLevelParameter fails for some reason. Note that glGetError may return GL_NO_ERROR if there is no OpenGL context active.
To check whether there is an OpenGL context, use wglGetCurrentContext (Windows), glXGetCurrentContext (X11/GLX) or CGLGetCurrentContext (Mac OS X CGL) to query the active OpenGL context; if none is active, all of these functions return NULL.
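A minimal sketch of that check on Windows (the GLX and CGL variants are analogous):
// If no context is current on this thread, every gl* call below is undefined
if (wglGetCurrentContext() == NULL)
    return; // or make a context current before continuing
GLint textureWidth = 0, textureHeight = 0;  // GLint, not plain int (see the note below)
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &textureWidth);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, &textureHeight);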
Just FYI: you should use GLint when retrieving integer values from OpenGL. The reason is that the OpenGL types have very specific sizes, which may differ from the primitive C types of the same name. For example, a C unsigned int may be anywhere from 16 to 64 bits wide, while an OpenGL GLuint is always 32 bits.
https://www.opengl.org/sdk/docs/man/docbook4/xhtml/glGetTexLevelParameter.xml
glGetTexLevelParameter returns the texture level parameters for the active texture unit. Try glActiveTexture on all the units and see if you are getting default values.
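Something along these lines (a sketch; probing 8 units is an arbitrary choice):
// Probe each texture unit and see which one reports the expected size
for (int unit = 0; unit < 8; ++unit)
{
    glActiveTexture(GL_TEXTURE0 + unit);
    GLint w = 0, h = 0;
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &w);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, &h);
    printf("unit %d: %d x %d\n", unit, w, h);
}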

What is faster? glFramebufferTexture2D output flickers

Inside my program I'm using glFramebufferTexture2D to set the render target, but when I use it the output starts to flicker. If I use two framebuffers the output looks normal.
Does anybody have an idea why that happens, or what could be improved in the following code? (This is an example; irrelevant code is omitted.)
// bind framebuffer for post process
::glBindFramebuffer(GL_DRAW_FRAMEBUFFER, m_SwapBuffer);
::glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, m_SwapBufferTargets[SwapBufferTarget1]->m_NativeTextureHandle, 0);
GLenum DrawAttachments[] = { GL_COLOR_ATTACHMENT0 };
::glDrawBuffers(1, DrawAttachments);
...
// render gaussian blur
m_Shader->Use();
::glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, m_BlurredImageFromPass1->m_NativeTextureHandle, 0);
_InputTexturePtr->ActivateTexture(GL_TEXTURE0);
RenderMesh();
::glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, _TargetTexturePtr->m_NativeTextureHandle, 0);
m_BlurredImageFromPass1->ActivateTexture(GL_TEXTURE0);
RenderMesh();
...
// copy swap buffer to system buffer
::glBindFramebuffer(GL_READ_FRAMEBUFFER, m_SwapBuffer);
::glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
::glBlitFramebuffer(0, 0, m_pConfig->m_Width, m_pConfig->m_Height, 0, 0, m_pConfig->m_Width, m_pConfig->m_Height, GL_COLOR_BUFFER_BIT, GL_NEAREST);
EDIT: I found the problem! It was in my swap chain: I rendered the original picture and after that a black one, so I got flicker whenever the frame rate dropped.
This is probably better suited to a comment but is too large, so I will put it here. Your OpenGL semantics seem to be a little off in the following code segment:
::glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, m_BlurredImageFromPass1->m_NativeTextureHandle, 0);
_InputTexturePtr->ActivateTexture(GL_TEXTURE0);
RenderMesh();
::glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, _TargetTexturePtr->m_NativeTextureHandle, 0);
m_BlurredImageFromPass1->ActivateTexture(GL_TEXTURE0);
RenderMesh();
glActiveTexture (and thus your ActivateTexture wrapper) is purely for selecting the active texture slot when binding a texture INPUT to a sampler in a shader program, while glFramebufferTexture2D is used in combination with glDrawBuffers to set the target OUTPUTS of your shader program. Thus, glActiveTexture and glFramebufferTexture2D should probably not be used on the same texture during the same draw operation (although I don't think this is what is causing your flicker).
Additionally, I don't see where you bind/release your texture handles. It is generally good OpenGL practice to bind objects only when they are needed and to release them immediately afterwards. As OpenGL is a state machine, forgetting to release objects can really come back and bite you on large projects.
Furthermore, when you bind a texture to a texture slot using glActiveTexture (or any glActiveTexture wrapper), always call glActiveTexture BEFORE you bind the texture handle.
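In other words (a sketch; the handle name is hypothetical):
glActiveTexture(GL_TEXTURE0);                      // select the unit first...
glBindTexture(GL_TEXTURE_2D, inputTextureHandle);  // ...then bind the input texture to it
RenderMesh();
glBindTexture(GL_TEXTURE_2D, 0);                   // release the binding when done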

Loading a PVR, texture problems

I'm having a problem loading a tga from a PVR.
I believe the PVR is loading correctly, but when I try to load the texture into OpenGL I'm getting issues: odd, incoherent drawing. I'm passing the entire texture file over to my graphics window class, then asking it for the id (an unsigned int) and creating the texture.
This is my texture loading code:
glGenTextures(animalTexture->getID(), &texture[0]);
glBindTexture(GL_TEXTURE_2D, texture[0]);
glTexImage2D(GL_TEXTURE_2D, 0, 3, animalTexture->getWidth(),animalTexture->getHeight(), 0, GL_RGBA, GL_UNSIGNED_BYTE, animalTexture->getImageData());
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
I'm wondering what the cause could be. This method gets called more than once, so can you overwrite a previously generated texture without any issues? Do you have to have a GLuint to generate a texture? I'm trying to load a tga.
I know this draws successfully with a normally saved image.
Any ideas or help would be much appreciated.
P.S. Ignore the black spot, that was me.
Have a look at this post. Essentially, as PVR data is compressed, you can't upload it with glTexImage2D, which assumes uncompressed texels (each texel being 4 unsigned bytes in the code you posted). You must use glCompressedTexImage2D instead, which handles compressed formats. Have a look at this OpenGL ES extension to know which internal format to use. If you're not sure which one to choose, or just want to view your compressed textures, PVRTexTool looks like a nice tool.
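A sketch of what that upload might look like (the internal format depends on how the PVR was encoded, and getDataSize() is a hypothetical accessor for the compressed byte count):
glCompressedTexImage2D(GL_TEXTURE_2D, 0,
    GL_COMPRESSED_RGBA_PVRTC_4BPPV1_IMG,   // assumption: 4bpp RGBA PVRTC data
    animalTexture->getWidth(), animalTexture->getHeight(), 0,
    animalTexture->getDataSize(),          // hypothetical: size of the compressed data in bytes
    animalTexture->getImageData());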

Why is my texture rendered improperly in my OpenGL application?

I'm working with SDL and OpenGL, creating a fairly simple application. I created a basic text rendering function which maps a generated texture onto a quad for each character. The texture is rendered from a bitmap of each character.
The bitmap is fairly small, about 800x16 pixels. It works absolutely fine on my desktop and laptop, both in and out of a VM (and on both Windows and Linux).
Now, I'm trying it on another computer, and the text becomes all garbled; it appears the computer can't handle a very basic thing like this. To see whether it was due to the OS, I installed VirtualBox and tested it in a VM, but the result is even worse! Instead of rendering anything (albeit garbled), it just renders a plain white box.
Why is this occurring, and is there any way to solve it?
Some code - how I initialize the texture:
glGenTextures(1, &fontRef);
glBindTexture(GL_TEXTURE_2D, iFont);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, FNT_IMG_W, FNT_IMG_H, 0,
             GL_RGB, GL_UNSIGNED_BYTE, MY_FONT);
Above, MY_FONT is an unsigned char array (the raw image dump from GIMP). When I draw a character:
GLfloat ix = c * (GLfloat) FNT_CHAR_W;
// We just map each corner of the texture to a new vertex.
glTexCoord2d(ix, FNT_CHAR_H); glVertex3d(x, y, 0);
glTexCoord2d(ix + FNT_CHAR_W, FNT_CHAR_H); glVertex3d(x + iCharW, y, 0);
glTexCoord2d(ix + FNT_CHAR_W, 0); glVertex3d(x + iCharW, y + iCharH, 0);
glTexCoord2d(ix, 0); glVertex3d(x, y + iCharH, 0);
That sounds to me as if the graphics card of the machine you are working on only supports power-of-two textures (i.e. 16, 32, 64, ...). 800x16 certainly would not work on such a card.
You can check the extension string for GL_ARB_texture_non_power_of_two to see whether the card supports non-power-of-two textures.
Or use GLEW to do that check for you.
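For example (a sketch; on a core-profile context you would instead enumerate extensions one by one with glGetStringi):
#include <cstring>
// Legacy-style check for NPOT texture support
const char *ext = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
bool npotSupported = ext && std::strstr(ext, "GL_ARB_texture_non_power_of_two") != NULL;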

Access Violation on glDelete*

I've got a strange problem here: I have a potentially large (as in up to 500 MB) 3D texture which is created several times per second. The size of the texture might change, so reusing the old texture is not always an option. The logical step to avoid memory exhaustion is to delete the texture every time it is no longer used (using glDeleteTextures), but the program soon crashes with a read or write access violation. The same thing happens with glDeleteBuffers when called on the buffer I use to update the texture.
In my eyes this can't happen, as the glDelete* functions are pretty fail-safe: if you give them a GL handle which does not name a corresponding object, they just do nothing.
The interesting thing is that if I simply don't delete the textures and buffers, the program runs fine until it eventually runs out of memory on the graphics card.
This is running on Windows XP 32-bit, NVIDIA GeForce 9500 GT with 266.58 drivers; the programming language is C++ in Visual Studio 2005.
Update
Apparently glDelete is not the only function affected. I just got violations in several other methods (which wasn't the case yesterday)... looks like something is damn broken here.
Update 2
This shouldn't fail, should it?
template <> inline
Texture<GL_TEXTURE_3D>::Texture(
    GLint internalFormat,
    glm::ivec3 size,
    GLint border ) : Wrapper<detail::gl_texture>()
{
    glGenTextures(1,&object.t);
    std::vector<GLbyte> tmp(glm::compMul(size)*4);
    glTextureImage3DEXT(
        object,                  // texture
        GL_TEXTURE_3D,           // target
        0,                       // level
        internalFormat,          // internal format
        size.x, size.y, size.z,  // size
        border,                  // border
        GL_RGBA,                 // format
        GL_BYTE,                 // type
        &tmp[0]);                // zero-initialized dummy data
}
fails with:
First-chance exception at 0x072c35c0: 0xC0000005: Access violation writing location 0x00000004.
Unhandled exception at 0x072c35c0 in Project.exe: 0xC0000005: Access violation writing location 0x00000004.
Best guess: something messing up the program's memory?
I don't know why glDelete would crash, but I am fairly certain you don't need it anyway and are overcomplicating this.
glGenTextures creates a 'name' for your texture; glTexImage3D gives OpenGL some data to attach to that name. If my understanding is correct, there is no reason to delete the name when you no longer want the data.
Instead, you should simply call glTexImage3D again on the same texture name and trust that the driver will know that your old data is no longer needed. This lets you respecify a new size each time, instead of specifying a maximum size first and then calling glTexSubImage3D, which would make actually using the data difficult since the texture would still retain its maximum size.
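In other words, the idea is simply this (a sketch; texName, newW and friends are placeholders):
// Reuse one texture name; respecifying level 0 replaces the old storage
glBindTexture(GL_TEXTURE_3D, texName);
glTexImage3D(GL_TEXTURE_3D, 0, internalFormat,
             newW, newH, newD, 0, GL_RGBA, GL_BYTE, newData);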
Below is a silly test in python (pyglet needed) that allocates a whole bunch of textures (just to check that the GPU memory usage measurement in GPU-Z actually works) then re-allocates new data to the same texture every frame, with a random new size and some random data just to work around any optimizations that might exist if the data stays constant.
It's (obviously) slow as hell but it definitely shows, at least on my system (Windows server 2003 x64, NVidia Quadro FX1800, drivers 259.81), that GPU memory usage does NOT go up while looping over the re-allocation of the texture.
import pyglet
from pyglet.gl import *
import random

def toGLArray(input):
    return (GLfloat*len(input))(*input)

w, h = 800, 600
AR = float(h)/float(w)
window = pyglet.window.Window(width=w, height=h, vsync=False, fullscreen=False)

def init():
    glActiveTexture(GL_TEXTURE1)
    tst_tex = GLuint()
    some_data = [11.0, 6.0, 3.2, 2.8, 2.2, 1.90, 1.80, 1.80, 1.70, 1.70, 1.60, 1.60, 1.50, 1.50, 1.40, 1.40, 1.30, 1.20, 1.10, 1.00]
    some_data = some_data * 1000*500
    # allocate a few useless textures just to see GPU memory load go up in GPU-Z
    for i in range(10):
        dummy_tex = GLuint()
        glGenTextures(1, dummy_tex)
        glBindTexture(GL_TEXTURE_2D, dummy_tex)
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, 1000, 1000, 0, GL_RGBA, GL_FLOAT, toGLArray(some_data))
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE)
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE)
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
    # our real test texture
    glGenTextures(1, tst_tex)
    glBindTexture(GL_TEXTURE_2D, tst_tex)
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, 1000, 1000, 0, GL_RGBA, GL_FLOAT, toGLArray(some_data))
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)

def world_update(dt):
    pass
pyglet.clock.schedule_interval(world_update, 0.015)

@window.event
def on_draw():
    glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT)
    # randomize texture size and data
    size = random.randint(1, 1000)
    data = [random.randint(0, 100) for i in xrange(size)]
    data = data*1000*4
    # just to see our draw calls 'tick'
    print pyglet.clock.get_fps()
    # reallocate texture every frame
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, 1000, size, 0, GL_RGBA, GL_FLOAT, toGLArray(data))

def main():
    init()
    pyglet.app.run()

if __name__ == '__main__':
    main()
Sprinkle glGetError() calls throughout your code. I would wager you are getting caught out by the fact that glDelete doesn't actually destroy the object; the object may remain in use for several frames longer. As such, I suspect you are running out of memory (i.e. glGetError is returning GL_OUT_OF_MEMORY).
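A cheap way to do that sprinkling (a sketch):
#include <cstdio>
// Drop GL_CHECK() between suspect GL calls to localize the first failing one
#define GL_CHECK() do { \
        GLenum err = glGetError(); \
        if (err != GL_NO_ERROR) \
            std::fprintf(stderr, "GL error 0x%04X at %s:%d\n", err, __FILE__, __LINE__); \
    } while (0)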