After having used PyOpenGL happily for some time, I'm now seriously stuck. I am working on a Python package that lets me use GLSL shaders and OpenCL programs for image processing, using textures as the standardized way to get my data in and out of the GLSL shaders and OpenCL programs.
Everything works, except that I cannot manage to copy a texture into a PBO (pixel buffer object).
I'm using PBOs to get my texture data in and out of OpenCL, and that works nicely and fast in PyOpenCL: I can copy my OpenCL output from its
PBO to a texture and display it, and I can also load data from the CPU into a PBO. But I am hopelessly stuck trying to fill my PBO with texture data that is already on the GPU, which is what I need to do to load the images produced by my GLSL shaders into OpenCL for further processing.
I've read about two ways to do this:
Variant 1 binds the PBO, binds the texture and uses glGetTexImage().
Variant 2 attaches the texture to a framebuffer object, binds the FBO and the PBO, and uses glReadPixels().
I also read that the PyOpenGL versions of both glReadPixels() and glGetTexImage() have trouble with the NULL pointer one should pass when a PBO is bound, so for that reason I am using the OpenGL.raw.GL variants.
But in both cases I get an 'invalid operation' error, and I really do not see what I am doing wrong. Below are two versions
of the _load_texture() method of my pixel buffer Python class; I hope I didn't strip them down too far.
variant 1:
def _load_texture(self, texture):
    glBindBuffer(GL_PIXEL_PACK_BUFFER_ARB, self.id)
    glEnable(texture.target)
    glActiveTexture(GL_TEXTURE0_ARB)
    glBindTexture(texture.target, texture.id)
    OpenGL.raw.GL.glGetTexImage(texture.target, 0, texture.gl_imageformat,
                                texture.gl_dtype, ctypes.c_void_p(0))
    glBindBuffer(GL_PIXEL_PACK_BUFFER_ARB, 0)
    glDisable(texture.target)
variant 2:
def _load_texture(self, texture):
    fbo = FrameBufferObject.from_textures([texture])
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           texture.target, texture.id, 0)
    glReadBuffer(GL_COLOR_ATTACHMENT0)
    glBindFramebuffer(GL_FRAMEBUFFER, fbo.id)
    glBindBuffer(GL_PIXEL_PACK_BUFFER, self.id)
    OpenGL.raw.GL.glReadPixels(0, 0, self.size[0], self.size[1],
                               texture.gl_imageformat, texture.gl_dtype,
                               ctypes.c_void_p(0))
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0)
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_RECTANGLE_ARB, 0, 0)
    glBindFramebuffer(GL_FRAMEBUFFER, 0)
EDIT (adding some information about the error and the initialization of my PBO):
The error I am getting for variant 1 is:
OpenGL.error.GLError: GLError(
    err = 1282,
    description = 'invalid operation',
    baseOperation = glGetTexImage,
    cArguments = (
        GL_TEXTURE_RECTANGLE_ARB,
        0,
        GL_RGBA,
        GL_UNSIGNED_BYTE,
        c_void_p(None),
    )
)
And I'm initializing my PBO like this:
self.usage = usage
if isinstance(size, tuple):
    size = size[0] * size[1] * self.imageformat.planecount
bytesize = self.imageformat.get_bytesize_per_plane() * size
glBindBuffer(self.arraytype, self.id)
glBufferData(self.arraytype, bytesize, None, self.usage)
glBindBuffer(self.arraytype, 0)
The 'self.arraytype' is GL_ARRAY_BUFFER; for self.usage I have tried all the possibilities just in case, but GL_STREAM_READ seemed the most logical for my kind of use.
The size I am typically using is 1024 by 1024 with 4 planes and 1 byte per plane, since the data is unsigned bytes. This works fine when transferring pixel data from the host.
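For reference, with those numbers the buffer size works out as follows (plain arithmetic, nothing PyOpenGL-specific):
width, height, planes, bytes_per_plane = 1024, 1024, 4, 1
bytesize = width * height * planes * bytes_per_plane   # 4194304 bytes, i.e. 4 MiB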
Also, I am on Kubuntu 11.10, using an NVIDIA GeForce GTX 580 with 3 GB of GPU memory and the proprietary driver, version 295.33.
What am I missing?
I found a solution myself, without really understanding why it makes such a huge difference.
The code I had (for both variants) was basically correct, but it needs the call to glBufferData in there for it to work. I already made that identical call when initializing my PBO in my original code, but my guess is that enough was going on between that initialization and my attempt to load the texture for the PBO memory somehow to get deallocated in the meantime.
Now I have simply moved that call closer to my glGetTexImage call, and it works without changing anything else.
Strange; I'm not sure whether that is a bug or a feature, or whether it is related to PyOpenGL, to the NVIDIA driver, or to something else. If it is expected behaviour, it sure isn't documented anywhere easy to find.
The variant 1 code below works and is mighty fast too; variant 2 works fine as well when treated in the same way, but at about half the speed.
def _load_texture(self, texture):
    bytesize = (self.size[0] * self.size[1] *
                self.imageformat.planecount *
                self.imageformat.get_bytesize_per_plane())
    glBindBuffer(GL_PIXEL_PACK_BUFFER_ARB, self.id)
    glBufferData(GL_PIXEL_PACK_BUFFER_ARB,
                 bytesize,
                 None, self.usage)
    glEnable(texture.target)
    glActiveTexture(GL_TEXTURE0_ARB)
    glBindTexture(texture.target, texture.id)
    OpenGL.raw.GL.glGetTexImage(texture.target, 0, texture.gl_imageformat,
                                texture.gl_dtype, ctypes.c_void_p(0))
    glBindBuffer(GL_PIXEL_PACK_BUFFER_ARB, 0)
    glDisable(texture.target)
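For completeness, here is roughly what variant 2 looks like when treated the same way. This is only a sketch that reuses the FrameBufferObject helper and the attributes from the code above, with the same glBufferData re-specification added right before the read:
def _load_texture(self, texture):
    bytesize = (self.size[0] * self.size[1] *
                self.imageformat.planecount *
                self.imageformat.get_bytesize_per_plane())
    fbo = FrameBufferObject.from_textures([texture])
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           texture.target, texture.id, 0)
    glReadBuffer(GL_COLOR_ATTACHMENT0)
    glBindFramebuffer(GL_FRAMEBUFFER, fbo.id)
    glBindBuffer(GL_PIXEL_PACK_BUFFER, self.id)
    # the crucial (re-)specification of the buffer store
    glBufferData(GL_PIXEL_PACK_BUFFER, bytesize, None, self.usage)
    OpenGL.raw.GL.glReadPixels(0, 0, self.size[0], self.size[1],
                               texture.gl_imageformat, texture.gl_dtype,
                               ctypes.c_void_p(0))
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0)
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_RECTANGLE_ARB, 0, 0)
    glBindFramebuffer(GL_FRAMEBUFFER, 0)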
Related
So, here's the problem. I have an FBO with 8 render buffers that I use in my deferred rendering pipeline. Then I added another render buffer, and now I get a GLError.
GLError(
    err = 1282,
    description = b'invalid operation',
    baseOperation = glFramebufferTexture2D,
    cArguments = (GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT8, GL_TEXTURE_2D, 12, 0)
)
The code should be fine, since I have just copied it from the previously used render buffer.
glMyRenderBuffer = glGenTextures(1)
glBindTexture(GL_TEXTURE_2D, glMyRenderBuffer)
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16F, self.width, self.height, 0, GL_RGB, GL_FLOAT, None)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST)
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST_MIPMAP_NEAREST)
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT8, GL_TEXTURE_2D, glMyRenderBuffer, 0)
glGenerateMipmap(GL_TEXTURE_2D)
And I get the error at this line
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT8, GL_TEXTURE_2D, glMyRenderBuffer, 0)
It looks more like some kind of OpenGL limitation that I don't know about.
I also have a somewhat unusual stack (Linux + GLFW + PyOpenGL), which may also be contributing to this problem.
I would be glad of any advice at this point.
It looks more like some kind of OpenGL limitation that I don't know about.
The relevant limit is GL_MAX_COLOR_ATTACHMENTS and the spec guarantees that this value is at least 8.
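You can query the actual limit of your implementation at run time. A minimal PyOpenGL snippet (this assumes a current OpenGL context; the variable names are just for illustration):
from OpenGL.GL import glGetIntegerv, GL_MAX_COLOR_ATTACHMENTS, GL_MAX_DRAW_BUFFERS

# both limits matter for MRT: how many color attachments an FBO may have,
# and how many draw buffers a fragment shader may write to at once
max_color_attachments = glGetIntegerv(GL_MAX_COLOR_ATTACHMENTS)
max_draw_buffers = glGetIntegerv(GL_MAX_DRAW_BUFFERS)
print(max_color_attachments, max_draw_buffers)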
Now needing more than 8 render targets in a single pass seems insane anyway.
Consider the following things:
try to reduce the number of render targets as much as possible; do not store redundant information (such as the vertex position) that can easily be reconstructed on the fly (you only need the depth for that, and you usually have a depth attachment anyway)
use clever encodings appropriate for the data, e.g. 3 floats for a normal vector is a huge waste; see for example Survey of Efficient Representations for Independent Unit Vectors (a sketch of one such encoding follows this list)
coalesce different render targets, e.g. if you need one vec3 and two vec2 outputs, it is better to use two vec4 targets and assign the 8 values to the 8 channels
maybe even use higher bit-depth formats like RGBA32UI and manually encode different values into a single channel
If you still need more data, you can do several render passes (basically with n/8 targets for each pass). Another alternative would be to use image load/store or SSBOs in your fragment shader to write the additional data. In your scenario, using image load/store seems to make the most sense, since you probably need the resulting data as a texture anyway. You also get a relatively good access pattern, since you can basically use gl_FragCoord.xy for addressing the image. However, care must be taken if you have overlapping geometry in one draw call, so that you might write to each pixel more than once (that issue is also addressed by the GL_ARB_fragment_shader_interlock extension, but that one is not yet a core feature of OpenGL). You might be able to eliminate that scenario completely by using a pre-depth pass.
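To illustrate the point about unit vectors, here is a rough NumPy sketch of the octahedral encoding described in that survey. It packs a unit normal into two values instead of three; in practice you would do the equivalent in your shaders, this is CPU-side Python only to show the idea:
import numpy as np

def oct_encode(n):
    # pack a unit vector into two values in [-1, 1]
    n = np.asarray(n, dtype=np.float64)
    n = n / np.sum(np.abs(n))                    # project onto the octahedron |x|+|y|+|z| = 1
    if n[2] < 0.0:                               # fold the lower hemisphere over the upper one
        sign = np.where(n[:2] >= 0.0, 1.0, -1.0)
        n[:2] = (1.0 - np.abs(n[1::-1])) * sign
    return n[0], n[1]

def oct_decode(ex, ey):
    # inverse of oct_encode; returns a unit-length 3-vector
    n = np.array([ex, ey, 1.0 - abs(ex) - abs(ey)])
    t = max(-n[2], 0.0)
    n[0] += -t if n[0] >= 0.0 else t
    n[1] += -t if n[1] >= 0.0 else t
    return n / np.linalg.norm(n)

# round-trip check on an arbitrary direction
v = np.array([0.3, -0.5, 0.8]); v /= np.linalg.norm(v)
assert np.allclose(oct_decode(*oct_encode(v)), v)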
I have the following pipeline:
Render into texture attachment to custom FBO.
Bind that texture attachment as image.
Run a compute shader, sampling from the image above using imageLoad/imageStore.
Write the results into SSBO or image.
Map the SSBO (or image) as a CUDA CUgraphicsResource and process the data from that buffer using CUDA.
Now, the problem is synchronizing the data between stages 4 and 5. Here are the sync solutions I have tried.
glFlush - doesn't really work, as it doesn't guarantee completion of all the submitted commands.
glFinish - this one works. But it is not recommended, as it waits for every command submitted to the driver to complete.
ARB_sync - here it is said that it is not recommended because it heavily impacts performance.
glMemoryBarrier - this one is interesting. But it simply doesn't work.
Here is example of the code:
glMemoryBarrier(GL_ALL_BARRIER_BITS);
And also tried:
glTextureBarrierNV()
The code execution goes like this:
//rendered into the fbo...
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glBindImageTexture(imageUnit1, fboTex, 0, GL_FALSE, 0, GL_READ_ONLY, GL_RGBA8);
glBindImageTexture(imageUnit2, imageTex, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA8);
glDispatchCompute(16, 16, 1);
glFinish(); // <-- must sync here, otherwise the CUDA buffer doesn't receive all the data
//cuda maps the image to a CUDA buffer here...
Moreover, I tried unbinding the FBO and unbinding the textures from the context before launching the compute shader; I even tried launching one compute shader after another with a glMemoryBarrier between them, and fetching the target image of the first compute launch into CUDA. Still no sync. (Well, that makes sense, as two compute dispatches also run out of sync with each other.)
I also tried putting the glMemoryBarrier call right after the compute shader stage. It doesn't sync! It only works when I replace it with glFinish, or with any other operation that completely stalls the pipeline.
Like glMapBuffer(), for example.
So should I just use glFinish(), or am I missing something here?
Why doesn't glMemoryBarrier() sync the compute shader work before CUDA takes over control?
UPDATE
I would like to refactor the question a little, as the original one is pretty old. Nevertheless, even with the latest CUDA and Video Codec SDK (NVENC), the issue is still alive. So, I don't care about why glMemoryBarrier doesn't sync. What I want to know is:
Whether it is possible to synchronize the completion of OpenGL compute shader execution with CUDA's usage of that shared resource (in my case an OpenGL image) without stalling the whole rendering pipeline.
If the answer is 'yes', then how?
I know this is an old question, but if any poor soul stumbles upon this...
First, the reason glMemoryBarrier does not work: it requires the OpenGL driver to insert a barrier into the pipeline. CUDA does not care about the OpenGL pipeline at all.
Second, the only other way outside of glFinish is to use glFenceSync in combination with glClientWaitSync:
// ...
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glBindImageTexture(imageUnit1, fboTex, 0, GL_FALSE, 0, GL_READ_ONLY, GL_RGBA8);
glBindImageTexture(imageUnit2, imageTex, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA8);
glDispatchCompute(16, 16, 1);

GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);

// ... other work you might want to do that does not impact the buffer ...

GLenum res = glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT, timeoutInNs);
if (res == GL_TIMEOUT_EXPIRED || res == GL_WAIT_FAILED) {
    // ... handle timeouts and failures
}

cudaGraphicsMapResources(1, &gfxResource, stream);
// ...
This will cause the CPU to block until the GPU is done with all commands until the fence. This includes memory transfers and compute operations.
Unfortunately, there is no way to tell CUDA to wait on an OpenGL memory barrier/fence. If you really require the extra bit of asynchronicity, you'll have to switch to DirectX 12, for which CUDA supports importing fences/semaphores and waiting on as well as signaling them from a CUDA stream via cuImportExternalSemaphore, cuWaitExternalSemaphoresAsync, and cuSignalExternalSemaphoresAsync.
I know DirectX, for DX9 at least, has a texture object where you are able to get only a small portion of the texture into CPU-accessible memory. It was a function called "LockRect", I believe. OpenGL has glGetTexImage(), but it grabs the entire image, and if the requested format isn't the same as the texture's, then it is going to have to convert the entire texture into the new pixel format on top of transferring the entire texture. This function is also not in OpenGL ES. Framebuffers are another option: I could potentially bind a framebuffer where a color attachment is connected to a texture. Then there is glReadPixels, which reads from the framebuffer, so it should be reading from the texture. glReadPixels has limited pixel format options, so a conversion is going to have to happen, but I can read just the pixels I need (which is only 1 pixel). I haven't used this method, but it seems like it should be possible. If anyone can confirm that the framebuffer method is a working alternative, then this method would also work for OpenGL ES 2+.
Are there any other methods? How efficient is the framebuffer method (if it works): does it end up having to convert the entire texture to the desired format before it reads the pixels, or is that entirely implementation defined?
Edit: @Nicol_Bolas Please stop removing OpenGL from the tags and adding OpenGL-ES; OpenGL-ES isn't applicable, OpenGL is. This is for OpenGL specifically, but I would like it to be OpenGL ES 2+ compatible if possible, though it doesn't have to be. If an OpenGL-only solution is available, then it is a consideration I will make if it is worth the trade-off. Thank you.
Please note, I do not have that much experience with ES in particular, so there might be better ways to do this specifically in that context. The general gist applies in either plain OpenGL or ES, though.
First off, the most important performance consideration should be when you are doing the reading. If you request data from the video card while you are rendering, your program (the CPU end) will have to halt until the video card returns the data, which slows rendering because you cannot issue further render commands in the meantime. As a general rule, you should always upload, render, download - do not mix any of these processes; doing so will impact speed immensely, and how much so can be very driver/hardware/OS dependent.
I suggest using glReadPixels( ) at the end of your render cycle. I suspect the limitations on formats for that function are connected to limitations on framebuffer formats; besides, you really should be using 8-bit unsigned integer or floating point, both of which are supported. If you have some fringe case that does not allow any of those supported formats, you should explain what it is, as there may be a way to handle it specifically.
If you need the contents of the framebuffer at a specific point in rendering (rather than at the end), create a second texture + framebuffer (again with the same format) to act as an effective "backbuffer", and then copy from the target framebuffer to that texture. This copy happens on the video card, so it does not impose the bus latency that reading back directly does. Here is something I wrote that does this operation:
glActiveTexture( GL_TEXTURE0 + unit );
glBindTexture( GL_TEXTURE_2D, backbufferTextureHandle );
glBindFramebuffer( GL_READ_FRAMEBUFFER, framebufferHandle );
glCopyTexSubImage2D(
    GL_TEXTURE_2D,
    0,               // level
    0, 0,            // offset
    0, 0,            // x, y
    screenX, screenY );
glBindFramebuffer( GL_DRAW_FRAMEBUFFER, framebufferHandle );
Then when you want the data, bind the backbuffer to GL_READ_FRAMEBUFFER and use glReadPixels( ) on it.
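When you then want the pixel data, the readback itself is short. Sketched here in PyOpenGL for brevity (the C calls are identical); backbufferFramebufferHandle is a hypothetical FBO that has backbufferTextureHandle attached to its color attachment 0, and (x, y) is the pixel you are after:
glBindFramebuffer( GL_READ_FRAMEBUFFER, backbufferFramebufferHandle )
glReadBuffer( GL_COLOR_ATTACHMENT0 )
pixel = glReadPixels( x, y, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE )  # raw bytes for that one pixel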
Finally, you should keep in mind that downloading data will still halt the CPU end. If you download before displaying the framebuffer, you will delay displaying the image until you can again execute commands, which might result in visible latency. As such, I suggest still using a non-default framebuffer even if you only care about the final buffer state, and ending your render cycle to the effect of:
(1.) Blit to the default framebuffer:
glBindFramebuffer( GL_DRAW_FRAMEBUFFER, 0 ); // Default framebuffer
glBindFramebuffer( GL_READ_FRAMEBUFFER, framebufferHandle );
glBlitFramebuffer(
    0, 0, screenX, screenY,
    0, 0, screenX, screenY,
    GL_COLOR_BUFFER_BIT,
    GL_NEAREST );
(2.) Call whatever your swap buffers command may be in your given situation.
(3.) Your download call from the framebuffer (be it glReadPixels( ) or something else).
As for the speed impact of the blit/texcopy operations, it's quite good on most modern hardware; I have not found it to have a noticeable impact even when done 10+ times a frame. But if you are dealing with antiquated hardware, it might be worth a second thought.
I have written an emulator which I am in the process of porting to Linux. At the moment, to do the video, I am using Direct3D 11, which I am porting to OpenGL (and running on Windows for now). I render to a 1024x1024 texture which I upload to memory every frame (the original hardware doesn't really lend itself to modern hardware acceleration, so I just do it all in software). However, I have found that uploading the texture in OpenGL is a lot slower.
In Direct3D, uploading the texture every frame drops the frame rate from 416 to 395 (a 5% drop). In OpenGL it drops from 427 to 297 (a 30% drop!).
Here's the relevant code from my draw function.
Direct3D:
D3D11_MAPPED_SUBRESOURCE resource;
deviceContext_->Map(texture, 0, D3D11_MAP_WRITE_DISCARD, 0, &resource);
uint32_t *buf = reinterpret_cast<uint32_t *>(resource.pData);
memcpy(buf, ...);
deviceContext_->Unmap(texture, 0);
OpenGL:
glBindTexture(GL_TEXTURE_2D, texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1024, 1024, 0, GL_RGBA,
GL_UNSIGNED_BYTE, textureBuffer);
Can anyone suggest what may be causing this slowdown?
If it makes any odds, I'm running Windows 7 x64 with an NVIDIA GeForce GTX 550 Ti.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1024, 1024, 0, GL_RGBA, GL_UNSIGNED_BYTE, textureBuffer);
You're doing several things wrong here. First, glTexImage2D is the equivalent of creating a Direct3D texture resource every frame. But you're not creating it; you're just uploading to it. You should use glTexImage2D only once per mipmap layer of interest; after that, all uploading should happen with glTexSubImage2D.
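In other words, roughly this (sketched in PyOpenGL syntax for brevity; the C calls take the same arguments):
# once, at startup: allocate the storage (the data pointer may be NULL/None here)
glBindTexture(GL_TEXTURE_2D, texture)
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 1024, 1024, 0, GL_RGBA, GL_UNSIGNED_BYTE, None)

# every frame: only update the contents of the existing storage
glBindTexture(GL_TEXTURE_2D, texture)
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 1024, 1024, GL_RGBA, GL_UNSIGNED_BYTE, textureBuffer)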
Second, your internal format (third parameter from the left) is GL_RGBA. You should always use explicit sizes for your image formats. So use GL_RGBA8. This isn't really a problem, but you should get into the habit now.
Third, you're using GL_RGBA ordering for your pixel transfer format (the third parameter from the right, not the left). This is generally not the most optimal pixel transfer format, as lots of hardware tends to prefer GL_BGRA ordering. But if you're not getting your data from whatever is producing it in that order, then there's not much that can be done.
Fourth, if you have something else you can do between starting the upload and actually rendering with it, you can employ asynchronous pixel transfer operations. You write your data to a buffer object (which can be mapped, so that you don't have to copy into it). Then you use glTexSubImage2D to transfer this data to OpenGL. Because the source data and the destination image are part of OpenGL's memory, it doesn't have to copy the data out of client memory before glTexSubImage2D returns.
Granted, that's probably not going to help you much, since you're already effectively doing that copy in the D3D case.
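For reference, that buffer-object upload path looks something like this (again a PyOpenGL-flavoured sketch; pbo is a buffer created with glGenBuffers, and as in the first question on this page the raw entry points may be needed for the null-pointer offset):
import ctypes

glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo)
# re-specify ("orphan") the store so we don't wait on the previous frame's transfer
glBufferData(GL_PIXEL_UNPACK_BUFFER, 1024 * 1024 * 4, None, GL_STREAM_DRAW)
ptr = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY)
# ... write the new frame into ptr here, e.g. ctypes.memmove(ptr, src, 1024 * 1024 * 4) ...
glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER)

glBindTexture(GL_TEXTURE_2D, texture)
# with a PIXEL_UNPACK buffer bound, the last argument is an offset into that buffer, not a pointer
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 1024, 1024, GL_RGBA, GL_UNSIGNED_BYTE, ctypes.c_void_p(0))
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0)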
In OpenGL it drops from 427 to 297 (a 30% drop!)
The more important statistic is that it's a 1 millisecond difference. You should look at your timings in absolute time, not in frames-per-second, nor in percentage drops of FPS.
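To put numbers on that (just the arithmetic on the figures from the question):
# frame-time deltas implied by the FPS figures above
for api, before, after in [("OpenGL", 427.0, 297.0), ("Direct3D", 416.0, 395.0)]:
    delta_ms = 1000.0 / after - 1000.0 / before
    print(f"{api}: about {delta_ms:.2f} ms more per frame")   # ~1.02 ms vs ~0.13 ms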
glTexImage2D does a memory reallocation as well as an update. Try using glTexSubImage2D instead.
I have an OpenGL Texture and want to be able to read back a single pixel's value, so I can display it on the screen. If the texture is a regular old RGB texture or the like, this is no problem: I take an empty Framebuffer Object that I have lying around, attach the texture to COLOR0 on the framebuffer and call:
glReadPixels(x, y, 1, 1, GL_RGBA, GL_FLOAT, &c);
Where c is essentially a float[4].
However, when it is a depth texture, I have to go down a different code path, setting the DEPTH attachment instead of the COLOR0, and calling:
glReadPixels(x, y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &c);
where c is a float. This works fine on my Windows 7 computer with an NVIDIA GeForce 580, but causes an error on my old 2008 MacBook Pro. Specifically, after attaching the depth texture to the framebuffer, if I call glCheckFramebufferStatus(GL_READ_FRAMEBUFFER), I get GL_FRAMEBUFFER_INCOMPLETE_READ_BUFFER.
After searching the OpenGL documentation, I found this line, which seems to imply that OpenGL does not support reading from a depth component of a framebuffer if there is no color attachment:
GL_FRAMEBUFFER_INCOMPLETE_READ_BUFFER is returned if GL_READ_BUFFER is not GL_NONE
and the value of GL_FRAMEBUFFER_ATTACHMENT_OBJECT_TYPE is GL_NONE for the color
attachment point named by GL_READ_BUFFER.
Sure enough, if I create a temporary color texture and bind it to COLOR0, no errors occur when I read pixels from the depth texture.
Now, creating a temporary texture every time (EDIT: or even once, and having GPU memory tied up by it) in this code path is annoying and potentially slow, so I was wondering if anyone knew an alternative way to read a single pixel from a depth texture. (Of course, if there is no better way, I will keep one texture around, resize it when needed, and use only that for the temporary color attachment, but this seems rather roundabout.)
The answer is contained in your error message:
if GL_READ_BUFFER is not GL_NONE
So do that; set the read buffer to GL_NONE. With glReadBuffer. Like this:
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo); //where fbo is your FBO.
glReadBuffer(GL_NONE);
That way, the FBO is properly complete, even though it only has a depth texture.
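Putting it together with the read itself (a short sketch in PyOpenGL; fbo is assumed to already have the depth texture attached, and (x, y) is the pixel you want):
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo)
glReadBuffer(GL_NONE)                        # no color attachment needed for completeness
depth = glReadPixels(x, y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT)   # a single float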