I have been working with VR recently and ran into an OpenGL-related problem.
The API I use for VR captures a video stream and writes it to a texture; I then want to submit this texture to a headset. But there is an incompatibility in the API: the texture I get from the stream has an undefined internal format and cannot be submitted to the headset directly.
As a workaround, for now I use a GPU -> CPU -> GPU transfer:
I read the source texture's pixels (with glReadPixels) and write them into a CPU buffer, then I use this buffer to create a texture with the correct format. This works fine but adds latency due to the data transfers.
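A minimal sketch of this round trip, assuming the source texture can be attached to a read framebuffer (names match the code below: id is the texture given by the API; a current GL context and width/height are assumed):

#include <vector>
// Attach the source texture to an FBO so glReadPixels can read it.
GLuint fbo, dstTexture;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, id, 0);
glReadBuffer(GL_COLOR_ATTACHMENT0);

// GPU -> CPU: read the pixels into client memory.
std::vector<unsigned char> pixels(width * height * 4);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);

// CPU -> GPU: upload into a texture with a well-defined internal format.
glGenTextures(1, &dstTexture);
glBindTexture(GL_TEXTURE_2D, dstTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
glBindTexture(GL_TEXTURE_2D, 0);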
I have been trying to do a direct GPU copy but have failed so far.
I tried using a PBO (following http://www.songho.ca/opengl/gl_pbo.html) but ran into a GL_INVALID_OPERATION error. Here is the code; the invalid operation is raised on glReadPixels:
// Initialization
glGenBuffers(1, &pbo);
glGenTextures(1, &dstTexture);
glBindTexture(GL_TEXTURE_2D, dstTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glBindTexture(GL_TEXTURE_2D, 0);
// Copy: source texture -> PBO -> destination texture
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, bufferSize, NULL, GL_DYNAMIC_DRAW);
glBindTexture(GL_TEXTURE_2D, id); // texture given by the API
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, NULL); // <-- raises GL_INVALID_OPERATION
glBindTexture(GL_TEXTURE_2D, 0);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
glBufferData(GL_PIXEL_UNPACK_BUFFER, bufferSize, NULL, GL_DYNAMIC_READ);
glBindTexture(GL_TEXTURE_2D, dstTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL); // NULL = read from the bound PIXEL_UNPACK_BUFFER at offset 0
I tried using an FBO but encountered pointer exceptions.
glCopyImageSubData does not work because the source texture's internal format is not recognised.
What are the steps to do a direct GPU copy?
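For clarity, this is roughly the FBO-based copy path I am attempting (a sketch, assuming the source texture can be attached as a color attachment; names match the code above):

GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, id, 0);

// Copy straight from the FBO's color attachment into dstTexture, no CPU round trip.
glBindTexture(GL_TEXTURE_2D, dstTexture);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);

glBindTexture(GL_TEXTURE_2D, 0);
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
glDeleteFramebuffers(1, &fbo);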
Related
I am working on a cross-platform project that involves OpenGL ES (3.1). The code executes perfectly on my Windows and Ubuntu machines, but running it on a Raspberry Pi 4 causes a strange issue: after successfully initializing a texture with glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 16, 256, 0, GL_RGB, GL_UNSIGNED_BYTE, 0), a later query of the available reading format for the same framebuffer, glGetIntegerv(GL_IMPLEMENTATION_COLOR_READ_FORMAT, ...), returns GL_RGBA. For context creation I am using GLFW with GLAD. Below is the complete code of the texture initialization:
...
GLuint pix_buf;
glGenFramebuffers(1, &pix_buf);
glBindFramebuffer(GL_FRAMEBUFFER, pix_buf);
GLuint text;
glGenTextures(1, &text);
glBindTexture(GL_TEXTURE_2D, text);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 16, 256, 0, GL_RGB, GL_UNSIGNED_BYTE, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, text, 0);
GLenum DrawBuffers[1] = {GL_COLOR_ATTACHMENT0};
glDrawBuffers(1, DrawBuffers);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
{
std::cout << "Frame buffer was not initialized" << std::endl;
return;
}
GLint read_format, read_type;
glGetIntegerv(GL_IMPLEMENTATION_COLOR_READ_FORMAT, &read_format);
glGetIntegerv(GL_IMPLEMENTATION_COLOR_READ_TYPE, &read_type);
...
read_format comes back as GL_RGBA, when it should be GL_RGB!
read_type comes back as GL_UNSIGNED_BYTE, as expected.
So after rendering, an attempt to read the texture into a local back_buf array using glReadPixels(0, 0, 16, 256, GL_RGB, GL_UNSIGNED_BYTE, back_buf) causes GL_INVALID_OPERATION in glReadPixels (invalid format GL_RGB and/or GL_UNSIGNED_BYTE). Changing the read format from GL_RGB to GL_RGBA fixes the error, but the resulting data layout can't be used by my program (I strictly need GL_RGB).
My question is: am I doing something wrong, or is there a problem with the Raspberry Pi 4 OpenGL ES drivers?
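For completeness, the fallback I am trying to avoid looks like this (a sketch: GL_RGBA / GL_UNSIGNED_BYTE is the combination GLES always accepts for normalized color buffers, so I read RGBA and strip the alpha channel on the CPU; back_buf holds 16*256*3 bytes):

#include <vector>
std::vector<unsigned char> rgba(16 * 256 * 4);
glReadPixels(0, 0, 16, 256, GL_RGBA, GL_UNSIGNED_BYTE, rgba.data());
// Repack RGBA into the RGB layout my program expects.
for (int i = 0; i < 16 * 256; ++i)
{
    back_buf[i * 3 + 0] = rgba[i * 4 + 0];
    back_buf[i * 3 + 1] = rgba[i * 4 + 1];
    back_buf[i * 3 + 2] = rgba[i * 4 + 2];
}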
I'm trying to make an OpenCV Mat() using the output from OpenGL's glGetTexImage(). The texture I am trying to read from was made using the call:
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA8UI, iWidth, iHeight, 0, GL_RGBA_INTEGER, GL_UNSIGNED_BYTE, pImageData);
and so I've tried to do this using:
unsigned char* texture_bytes = (unsigned char*)malloc(sizeof(unsigned char)*texture_width*texture_height * 3);
glGetTexImage(GL_TEXTURE_2D, 0, GL_BGR, GL_UNSIGNED_BYTE, texture_bytes);
Matrix = Mat(texture_height, texture_width, CV_8UC3, texture_bytes);
What I am wondering is if anyone knows what I should set the format and type of glGetTexImage() to in order for this to work, and what type I should give the Mat().
You can assume that the context is set up correctly and that the input texture is valid; I have verified this by displaying the texture on screen using OpenGL. Thanks in advance!
I recently faced the problem of getting data from OpenGL into OpenCV as well. I didn't use glGetTexImage, though.
What I did was an offscreen render in a framebuffer with a texture initialized like this:
GLuint texture = 0; // initialised to 0 so the glIsTexture check below is well-defined
if (glIsTexture(texture)) {
glDeleteTextures(1, &texture);
}
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0, GL_RGBA, GL_FLOAT, 0);
// Attach to the (already bound) framebuffer used for the offscreen render.
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texture, 0);
glBindTexture(GL_TEXTURE_2D, 0);
Then after my draw calls, I get the data using glReadPixels:
glPixelStorei(GL_PACK_ALIGNMENT, 1);
glReadBuffer(GL_COLOR_ATTACHMENT0);
cv::Mat texture = cv::Mat::zeros(height, width, CV_32FC3);
glReadPixels(0, 0, width, height, GL_BGR, GL_FLOAT, texture.data);
Hope it helps you.
You have a mismatch in the format parameter used for the glGetTexImage() call and the internal format of the texture:
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA8UI, iWidth, iHeight, 0, GL_RGBA_INTEGER, GL_UNSIGNED_BYTE, pImageData);
...
glGetTexImage(GL_TEXTURE_2D, 0, GL_BGR, GL_UNSIGNED_BYTE, texture_bytes);
For an integer texture, which you have in this case, you need to use a format parameter to glGetTexImage() that works for integer textures. In your example, that would be:
glGetTexImage(GL_TEXTURE_2D, 0, GL_BGR_INTEGER, GL_UNSIGNED_BYTE, texture_bytes);
It is always a good idea to call glGetError() if you have any kind of problem getting the desired OpenGL behavior. In this case, you would have gotten a GL_INVALID_OPERATION error, based on this error condition in the spec:
format is one of the integer formats in table 8.3 and the internal format of the texture image is not integer, or format is not one of the integer formats in table 8.3 and the internal format is integer.
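A minimal sketch of such a check (checkGLError is a hypothetical helper; call it right after the suspect GL call):

#include <cstdio>
// Drain all pending OpenGL errors and report where they were detected.
void checkGLError(const char* where)
{
    GLenum err;
    while ((err = glGetError()) != GL_NO_ERROR)
        fprintf(stderr, "GL error 0x%04x at %s\n", err, where);
}

// Usage:
glGetTexImage(GL_TEXTURE_2D, 0, GL_BGR_INTEGER, GL_UNSIGNED_BYTE, texture_bytes);
checkGLError("glGetTexImage");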
I am trying to render a video frame to the screen as a 2D texture covering the screen. I must use framebuffers because eventually I want to do the ping-pong rendering technique, but for the time being I just want to manage to render to the screen using framebuffers.
This is my setup code:
// Texture setup
int[] text = new int[1];
glGenTextures(1, text, 0);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, text[0]);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, 512, 512, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, null);
// Framebuffer setup
int[] fbo = new int[1];
glGenFramebuffers(1, fbo, 0);
glBindFramebuffer(GL_FRAMEBUFFER, fbo[0]);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, text[0], 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
Up to here everything is OK. I have omitted the testing code to keep this short, but right after this I check that the framebuffer was created correctly, and all is fine.
Now, in my render loop, I do the following:
// Use the GLSL program
glUseProgram(programHandle);
// Swap to my FBO
glBindTexture(GL_TEXTURE_2D, 0);
glBindFramebuffer(GL_FRAMEBUFFER, fbo[0]);
glViewport(0, 0, 512, 512);
// Pass the new image data to the program so the fragment shader processes it.
// Using glTexSubImage2D to speed up
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, text[0]);
glUniform1i(glGetUniformLocation(programHandle, "u_texture"), 0);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 512, 512, GL_LUMINANCE, GL_UNSIGNED_BYTE, NewImageData);
// Draw the quad
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// Swap back to the default screen framebuffer
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindTexture(GL_TEXTURE_2D, 0);
At this point I get a black screen, and on the log I can see a GL_INVALID_FRAMEBUFFER_OPERATION 1286.
I tried putting the glDrawArrays after the glBindFramebuffer call, but then the application crashes.
Any thoughts? Thanks in advance.
LUMINANCE textures are not renderable; this means you can't render to them using an FBO. This problem is solved by the GL_ARB_texture_rg extension, which introduces one- and two-channel texture formats that are renderable, as sketched below.
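A minimal sketch of that, assuming desktop GL 3.0+ (or ARB_texture_rg; on OpenGL ES you would need the analogous EXT_texture_rg and its GL_R8_EXT token):

// Single-channel texture with a renderable internal format instead of GL_LUMINANCE.
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, 512, 512, 0, GL_RED, GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);
// In the shader, sample the single channel as texture(u_texture, uv).r.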
EDIT: As suggested, I changed the texture target to GL_TEXTURE_2D, so the initialisation now looks like this:
void initTexture( int width, int height )
{
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
glGenerateMipmap(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, 0);
}
Since it's a GL_TEXTURE_2D, mipmaps need to be defined. How should that be reflected in the initialisation of the OpenCL Image2D?
texMems.push_back(Image2DGL(clw->context, CL_MEM_READ_ONLY, GL_TEXTURE_2D, 0, tex, &err));
I'm still getting a CL_INVALID_GL_OBJECT, though. So the question still is: How can I check for texture completeness at the point of the initialisation of the OpenCL Image2D?
Previous approach:
I'm decoding a video file with avcodec. The result is an AVFrame. I can display the frames on a GL_TEXTURE_RECTANGLE_ARB.
This is an excerpt from my texture initialisation, following an initialisation of the gl (glew) context:
GLuint tex = 0;
void initTexture( int width, int height )
{
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, tex);
glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, GL_RGB8, width,
height, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL );
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, 0);
}
Now I want to assign tex to an Image2DGL for interop between OpenCL and OpenGL. I'm using an Nvidia GeForce 310M with CUDA Toolkit 4; the OpenGL version is 3.3.0 and the GLX version is 1.4.
texMems.push_back(Image2DGL(clw->context, CL_MEM_READ_WRITE, GL_TEXTURE_RECTANGLE_ARB, 0, tex, &err));
This returns:
clCreateFromGLBuffer: -60 (CL_INVALID_GL_OBJECT)
This is all happening before I start the render loop. I can display the video frames on the texture just fine after that. The texture target (GL_TEXTURE_RECTANGLE_ARB) is allowed for the OpenCL context, as the corresponding OpenGL extension (GL_ARB_texture_rectangle) is enabled.
Now the error description in the OpenCL 1.1 spec states:
CL_INVALID_GL_OBJECT if texture is not a GL texture object whose type matches texture_target, if the specified miplevel of texture is not defined, or if the width or height of the specified miplevel is zero.
I'm using GL_TEXTURE_RECTANGLE_ARB, so there's no mipmapping (as I understand it). However, I found this statement in the Nvidia OpenCL implementation notes:
If the texture object specified in a call to clCreateFromGLTexture2D or clCreateFromGLTexture3D is incomplete as per OpenGL rules on texture completeness then the call will return CL_INVALID_GL_OBJECT in errcode_ret.
How can I validate texture completeness at this stage, where I only initialize the texture without providing any actual texture content? Any ideas?
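The one thing I can verify at this point is that the base level is actually defined, e.g. with glGetTexLevelParameteriv (only a partial check; it does not cover the full completeness rules):

#include <cstdio>
// Confirm mip level 0 has non-zero dimensions and a defined internal format.
GLint w = 0, h = 0, fmt = 0;
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, tex);
glGetTexLevelParameteriv(GL_TEXTURE_RECTANGLE_ARB, 0, GL_TEXTURE_WIDTH, &w);
glGetTexLevelParameteriv(GL_TEXTURE_RECTANGLE_ARB, 0, GL_TEXTURE_HEIGHT, &h);
glGetTexLevelParameteriv(GL_TEXTURE_RECTANGLE_ARB, 0, GL_TEXTURE_INTERNAL_FORMAT, &fmt);
printf("level 0: %dx%d, internal format 0x%04x\n", w, h, fmt);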
I was able to resolve the issue of not being able to create an Image2DGL: I had failed to specify a 4-channel internal format for the 2D texture:
void initTexture( int width, int height )
{
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL );
glBindTexture(GL_TEXTURE_2D, 0);
}
By specifying GL_RGBA I was able to successfully create the Image2DGL (the C++ wrapper equivalent of clCreateFromGLTexture2D). It seems this satisfied the texture completeness requirement.
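For completeness: once creation succeeds, the texture still has to be handed over to OpenCL before kernels touch it and handed back afterwards. A sketch with the C API (queue and clImage are placeholder names for my command queue and the created image):

// Make sure GL is done with the texture before CL acquires it.
glFinish();
clEnqueueAcquireGLObjects(queue, 1, &clImage, 0, NULL, NULL);
// ... enqueue kernels that read/write clImage ...
clEnqueueReleaseGLObjects(queue, 1, &clImage, 0, NULL, NULL);
clFinish(queue); // make sure CL is done before GL uses the texture again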
I'm trying to copy a PBO into a texture with auto-mipmapping enabled, but it seems only the top-level image is generated (in other words, no mipmapping is occurring).
I'm building a PBO using
//Generate a buffer ID called a PBO (Pixel Buffer Object)
glGenBuffers(1, pbo);
//Make this the current UNPACK buffer
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, *pbo);
//Allocate data for the buffer. 4-channel 8-bit image
glBufferData(GL_PIXEL_UNPACK_BUFFER, size_tex_data, NULL, GL_DYNAMIC_COPY);
cudaGLRegisterBufferObject(*pbo);
and I'm building a texture using
// Enable Texturing
glEnable(GL_TEXTURE_2D);
// Generate a texture identifier
glGenTextures(1,textureID);
// Make this the current texture (remember that GL is state-based)
glBindTexture( GL_TEXTURE_2D, *textureID);
// Allocate the texture memory. The last parameter is NULL since we only
// want to allocate memory, not initialize it
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA_FLOAT32_ATI, size_x, size_y, 0,
GL_RGBA, GL_FLOAT, NULL);
// Must set the filter mode, GL_LINEAR enables interpolation when scaling
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
Later in a kernel I modify the PBO using something like:
float4* aryPtr = NULL;
cudaGLMapBufferObject((void**)&aryPtr, *pbo);
//Pixel* gpuPixelsRawPtr = thrust::raw_pointer_cast(&gpuPixels[0]);
//... do some cuda stuff to aryPtr ...
//If we don't unmap the PBO then OpenGL won't be able to use it:
cudaGLUnmapBufferObject(*pbo);
Now, before I draw to the screen using the texture generated above, I call the following (note that rtMipmapTex = *textureID and rtResultPBO = *pbo from above):
glEnable(GL_DEPTH_TEST); // was glEnable(GL_DEPTH); GL_DEPTH is not a valid capability
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, rtMipmapTex);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, rtResultPBO);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, canvasSize.x, canvasSize.y, GL_RGBA, GL_FLOAT, NULL);
This all works fine and correctly displays the texture. But, if I change that last line to
glTexSubImage2D(GL_TEXTURE_2D, 1, 0, 0, canvasSize.x, canvasSize.y, GL_RGBA, GL_FLOAT, NULL);
which, as I understood it at the time, should show me the first level of the texture pyramid instead of the zeroth, I just get a blank white texture.
How do I copy the texture from the PBO in such a way that the auto-mipmapping is triggered?
Thanks.
I was being an idiot. The above code works perfectly; the problem was that
glTexSubImage2D(GL_TEXTURE_2D, 1, 0, 0, canvasSize.x, canvasSize.y, GL_RGBA, GL_FLOAT, NULL);
doesn't select which mipmap level of the texture is displayed; its level argument selects the destination level for the (non-mipmapped) PBO data. Instead, you can display a particular mipmap level with:
glTexEnvi(GL_TEXTURE_FILTER_CONTROL,GL_TEXTURE_LOD_BIAS, 4);
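For anyone on GL 3.0 or later, an alternative sketch is to drop the legacy GL_GENERATE_MIPMAP flag and rebuild the chain explicitly after each PBO upload:

// Upload the new base level from the PBO, then regenerate all mip levels.
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, rtResultPBO);
glBindTexture(GL_TEXTURE_2D, rtMipmapTex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, canvasSize.x, canvasSize.y, GL_RGBA, GL_FLOAT, NULL);
glGenerateMipmap(GL_TEXTURE_2D); // core since OpenGL 3.0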