I am working on a cross-platform project that involves OpenGL ES 3.1. The code runs perfectly on my Windows and Ubuntu machines, but running it on a Raspberry Pi 4 causes a strange issue: after successfully initializing a texture with glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 16, 256, 0, GL_RGB, GL_UNSIGNED_BYTE, 0), querying the supported read format for the same framebuffer later in the code with glGetIntegerv(GL_IMPLEMENTATION_COLOR_READ_FORMAT, ...) returns GL_RGBA. For context creation I am using GLFW with GLAD. Below is the complete texture initialization code:
...
GLuint pix_buf;
glGenFramebuffers(1, &pix_buf);
glBindFramebuffer(GL_FRAMEBUFFER, pix_buf);
GLuint text;
glGenTextures(1, &text);
glBindTexture(GL_TEXTURE_2D, text);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 16, 256, 0, GL_RGB, GL_UNSIGNED_BYTE, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, text, 0);
GLenum DrawBuffers[1] = {GL_COLOR_ATTACHMENT0};
glDrawBuffers(1, DrawBuffers);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
{
    std::cout << "Frame buffer was not initialized" << std::endl;
    return;
}
GLint read_format, read_type;
glGetIntegerv(GL_IMPLEMENTATION_COLOR_READ_FORMAT, &read_format);
glGetIntegerv(GL_IMPLEMENTATION_COLOR_READ_TYPE, &read_type);
...
The read_format value is GL_RGBA, but it should be GL_RGB!
The read_type value is GL_UNSIGNED_BYTE, as expected.
So after the render call, trying to read the texture into a local back_buf array with glReadPixels(0, 0, 16, 256, GL_RGB, GL_UNSIGNED_BYTE, back_buf) raises GL_INVALID_OPERATION (invalid format GL_RGB and/or GL_UNSIGNED_BYTE). Changing the read format from GL_RGB to GL_RGBA fixes the error, but the resulting data layout can't be used by my program (I strictly need GL_RGB).
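If this turns out to be a driver limitation, the fallback I am left with is reading GL_RGBA and repacking to RGB on the CPU. A rough sketch of that (rgba_buf is just an illustrative temporary that needs <vector>; back_buf is my existing RGB buffer):

std::vector<GLubyte> rgba_buf(16 * 256 * 4);
glReadPixels(0, 0, 16, 256, GL_RGBA, GL_UNSIGNED_BYTE, rgba_buf.data());

// Drop the alpha byte of every pixel to get tightly packed RGB.
for (size_t px = 0; px < 16 * 256; ++px)
{
    back_buf[px * 3 + 0] = rgba_buf[px * 4 + 0];
    back_buf[px * 3 + 1] = rgba_buf[px * 4 + 1];
    back_buf[px * 3 + 2] = rgba_buf[px * 4 + 2];
}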
My question is: am I doing something wrong, or is there a problem with the Raspberry Pi 4 OpenGL ES drivers?
Related
I have been working with VR recently and encountered an OpenGL-related problem.
The API I use for VR captures a video stream and writes it to a texture; I then want to submit this texture to a headset. But there is an incompatibility in the API: the texture I get from the stream has an undefined internal format and cannot be submitted to the headset directly.
I am working on a workaround. For now, I use a GPU -> CPU -> GPU transfer:
I read the first texture's pixels (with glReadPixels) and write them into a buffer, then I use this buffer to create a texture with the correct format. This works fine but adds latency because of the data transfers.
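Roughly, that round trip looks like this (a simplified sketch, assuming the stream texture is attached to a read framebuffer named srcFbo and both textures are RGBA8; the names are illustrative):

// GPU -> CPU -> GPU round trip, with the stall on glReadPixels. Needs <vector>.
std::vector<GLubyte> cpuBuffer(width * height * 4);
glBindFramebuffer(GL_READ_FRAMEBUFFER, srcFbo);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, cpuBuffer.data());

glBindTexture(GL_TEXTURE_2D, dstTexture);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, cpuBuffer.data());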
I have been trying to do a direct GPU copy but failed:
I tried using a PBO but ran into GL_INVALID_OPERATION (following http://www.songho.ca/opengl/gl_pbo.html). Here is the code; the invalid operation happens on glReadPixels:
// Initialization
glGenBuffers(1, &pbo);
glGenTextures(1, &dstTexture);
glBindTexture(GL_TEXTURE_2D, dstTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glBindTexture(GL_TEXTURE_2D, 0);
// Copy to GPU
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, bufferSize, NULL, GL_DYNAMIC_DRAW);
glBindTexture(GL_TEXTURE_2D, id); // texture given by the API
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, NULL); // <-- GL_INVALID_OPERATION here
glBindTexture(GL_TEXTURE_2D, 0);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
glBufferData(GL_PIXEL_UNPACK_BUFFER, bufferSize, NULL, GL_DYNAMIC_READ);
glBindTexture(GL_TEXTURE_2D, dstTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
I also tried using an FBO but encountered pointer exceptions.
glCopyImageSubData does not work because the first texture's internal format is not recognised.
What are the steps to do a direct GPU copy?
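For context, the direction I assume should work (but have not managed to get right) is to attach the source texture to a read framebuffer and let the GL copy straight into the destination texture, roughly like this (copyFbo is an illustrative helper FBO, srcTexture is the texture given by the API):

// Direct GPU copy of srcTexture level 0 into dstTexture level 0.
GLuint copyFbo;
glGenFramebuffers(1, &copyFbo);
glBindFramebuffer(GL_READ_FRAMEBUFFER, copyFbo);
glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, srcTexture, 0);

glBindTexture(GL_TEXTURE_2D, dstTexture);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height); // reads from the read framebuffer

glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
glDeleteFramebuffers(1, &copyFbo);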
I'm trying to make an OpenCV Mat() using the output of OpenGL's glGetTexImage(). The texture I am trying to read was created with the call:
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA8UI, iWidth, iHeight, 0, GL_RGBA_INTEGER, GL_UNSIGNED_BYTE, pImageData);
and so I've tried to do this using:
unsigned char* texture_bytes = (unsigned char*)malloc(sizeof(unsigned char)*texture_width*texture_height * 3);
glGetTexImage(GL_TEXTURE_2D, 0, GL_BGR, GL_UNSIGNED_BYTE, texture_bytes);
Matrix = Mat(texture_height, texture_width, CV_8UC3, texture_bytes);
What I am wondering is whether anyone knows what I should set the format and type of glGetTexImage() to in order for this to work. Also, what type should I give the Mat()?
You can assume that the context is set up correctly and that the input texture is correct; I have verified this by displaying the texture on screen using OpenGL. Thanks in advance!
I have been faced with the problem of getting data from OpenGL to OpenCV recently. I didn't use glGetTexImage though.
What I did was an offscreen render in a framebuffer with a texture initialized like this:
GLuint texture = 0;
if (glIsTexture(texture)) {
    glDeleteTextures(1, &texture);
}
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glBindTexture(GL_TEXTURE_2D, 0);
glBindTexture(GL_TEXTURE_2D, texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0, GL_RGBA, GL_FLOAT, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texture, 0);
glBindTexture(GL_TEXTURE_2D, 0);
Then after my draw calls, I get the data using glReadPixels:
glPixelStorei(GL_PACK_ALIGNMENT, 1);
glReadBuffer(GL_COLOR_ATTACHMENT0);
cv::Mat texture = cv::Mat::zeros(height, width, CV_32FC3);
glReadPixels(0, 0, width, height, GL_BGR, GL_FLOAT, texture.data);
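One thing to keep in mind: OpenGL's window origin is bottom-left while OpenCV's is top-left, so the read-back image usually needs a vertical flip afterwards:

cv::flip(texture, texture, 0); // flip around the x-axis to match OpenCV's top-left origin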
Hope it helps you.
You have a mismatch between the format parameter used in the glGetTexImage() call and the internal format of the texture:
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA8UI, iWidth, iHeight, 0, GL_RGBA_INTEGER, GL_UNSIGNED_BYTE, pImageData);
...
glGetTexImage(GL_TEXTURE_2D, 0, GL_BGR, GL_UNSIGNED_BYTE, texture_bytes);
For an integer texture, which you have in this case, you need to use a format parameter to glGetTexImage() that works for integer textures. In your example, that would be:
glGetTexImage(GL_TEXTURE_2D, 0, GL_BGR_INTEGER, GL_UNSIGNED_BYTE, texture_bytes);
It is always a good idea to call glGetError() if you have any kind of problem getting the desired OpenGL behavior. In this case, you would have gotten a GL_INVALID_OPERATION error, based on this error condition in the spec:
format is one of the integer formats in table 8.3 and the internal format of the texture image is not integer, or format is not one of the integer formats in table 8.3 and the internal format is integer.
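A minimal way to make that kind of check right after the suspect call (the helper name is illustrative, and it needs <cstdio>):

// Drain and print all pending GL errors at a given point.
static void checkGLError(const char *where)
{
    for (GLenum err = glGetError(); err != GL_NO_ERROR; err = glGetError())
        fprintf(stderr, "GL error 0x%x after %s\n", err, where);
}

glGetTexImage(GL_TEXTURE_2D, 0, GL_BGR_INTEGER, GL_UNSIGNED_BYTE, texture_bytes);
checkGLError("glGetTexImage");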
I am unable to read correct depth values from a depth texture using glReadPixels. The FBO status is complete, and the other render targets look fine after blitting to another FBO.
Code snippet:
// Create the FBO
glGenFramebuffers(1, &m_fbo);
glBindFramebuffer(GL_FRAMEBUFFER, m_fbo);
// Create the gbuffer textures
glGenTextures(GBUFFER_NUM_TEXTURES, m_textures);
glGenTextures(1, &m_depthTexture);
for (unsigned int i = 0 ; i < GBUFFER_NUM_TEXTURES ; i++) {
    glBindTexture(GL_TEXTURE_2D, m_textures[i]);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, fboWidth, fboHeight, 0, GL_RGBA, GL_FLOAT, NULL);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i, GL_TEXTURE_2D, m_textures[i], 0);
}
// depth
glBindTexture(GL_TEXTURE_2D, m_depthTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_NONE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, fboWidth, fboHeight, 0, GL_DEPTH_COMPONENT, GL_FLOAT,
NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, m_depthTexture, 0);
GLenum DrawBuffers[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
glDrawBuffers(GBUFFER_NUM_TEXTURES, DrawBuffers);
GLenum Status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (Status != GL_FRAMEBUFFER_COMPLETE) {
    printf("FB error, status: 0x%x\n", Status);
    return 0;
}
// Drawing something with depth test enabled.
// Now I use glReadPixels to read depth values back from the depth texture.
int w = 4, h = 1;
GLfloat windowDepth[4];
glBindFramebuffer(GL_FRAMEBUFFER, m_fbo);
glReadPixels(x, y, w, h, GL_DEPTH_COMPONENT, GL_FLOAT, windowDepth);
You are drawing to a depth texture. The appropriate function to call to read a texture into client memory is glGetTexImage (...).
Now, since there is no glGetTexSubImage (...), you need to allocate enough client storage to hold an entire LOD of the depth texture. Something like this will probably do the trick:
GLuint w = fboWidth, h = fboHeight;
std::vector<GLfloat> windowDepth (w * h); // a VLA is not standard C++, so use a vector (<vector>)

glBindTexture (GL_TEXTURE_2D, m_depthTexture);
glGetTexImage (GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, GL_FLOAT, windowDepth.data ());
Keep in mind that for a depth texture the format you pass to glGetTexImage (...) must be GL_DEPTH_COMPONENT; the GL will not convert a depth image to a color format for you.
With that out of the way, can I ask why you are reading the depth buffer into client memory in the first place? You appear to be using deferred shading, so I can see why you need a depth texture, but it is less clear why you need a copy of the depth buffer outside of shaders. You will have a hard time achieving interactive frame rates if you copy the depth buffer each frame.
EDIT: As suggested, I changed the texture target to GL_TEXTURE_2D. The initialisation now looks like this:
void initTexture( int width, int height )
{
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL );
    glGenerateMipmap(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, 0);
}
Since it's a GL_TEXTURE_2D, mipmaps need to be defined. How should that be reflected in the initialisation of the OpenCL Image2D?
texMems.push_back(Image2DGL(clw->context, CL_MEM_READ_ONLY, GL_TEXTURE_2D, 0, tex, &err));
I'm still getting a CL_INVALID_GL_OBJECT, though. So the question still is: How can I check for texture completeness at the point of the initialisation of the OpenCL Image2D?
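For reference, an alternative I have considered to take mipmaps out of the equation entirely is to restrict the texture to level 0 (a sketch only, not verified on my setup; it would replace the glGenerateMipmap call and the mipmapping min filter above):

glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);  // no mipmap sampling
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0);           // level 0 is the only level
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glBindTexture(GL_TEXTURE_2D, 0);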
Previous approach:
I'm decoding a video file with avcodec. The result is an AVFrame. I can display the frames on a GL_TEXTURE_RECTANGLE_ARB.
This is an excerpt from my texture initialisation, following an initialisation of the gl (glew) context:
GLuint tex = 0;

void initTexture( int width, int height )
{
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_RECTANGLE_ARB, tex);
    glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, GL_RGB8, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL );
    glBindTexture(GL_TEXTURE_RECTANGLE_ARB, 0);
}
Now I want to assign tex to an Image2DGL for interop between OpenCL and OpenGL. I'm using an Nvidia GeForce 310M with CUDA Toolkit 4. The OpenGL version is 3.3.0 and the GLX version is 1.4.
texMems.push_back(Image2DGL(clw->context, CL_MEM_READ_WRITE, GL_TEXTURE_RECTANGLE_ARB, 0, tex, &err));
This returns:
clCreateFromGLBuffer: -60 (CL_INVALID_GL_OBJECT)
This all happens before I start the render loop. I can display the video frames on the texture just fine after that. The texture target (GL_TEXTURE_RECTANGLE_ARB) is allowed for the OpenCL context, as the corresponding OpenGL extension (GL_ARB_texture_rectangle) is enabled.
Now the error description in the OpenCL 1.1 spec states:
CL_INVALID_GL_OBJECT if texture is not a GL texture object whose type matches
texture_target, if the specified miplevel of texture is not defined, or if the width or height
of the specified miplevel is zero.
I'm using GL_TEXTURE_RECTANGLE_ARB, so there is no mipmapping (as I understand it). However, I found this statement in the Nvidia OpenCL implementation notes:
If the texture object specified in a call to clCreateFromGLTexture2D or
clCreateFromGLTexture3D is incomplete as per OpenGL rules on texture
completeness then the call will return CL_INVALID_GL_OBJECT in errcode_ret.
How can I validate texture completeness at this stage, where I only initialize the texture without providing any actual texture content? Any ideas?
I was able to resolve the issue of not being able to create an Image2DGL: I was missing a 4-channel internal format for the 2D texture:
void initTexture( int width, int height )
{
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL );
    glBindTexture(GL_TEXTURE_2D, 0);
}
By specifying GL_RGBA I was able to create the Image2DGL successfully (which is equivalent to clCreateFromGLTexture2D). It seems that this satisfied the texture completeness requirement.
I'm trying to load an image file and use it as a texture for a cube. I'm using SDL_image to do that.
I used this image because I've found it in various file formats (tga, tif, jpg, png, bmp).
The code:
SDL_Surface * texture;
//load an image to an SDL surface (i.e. a buffer)
texture = IMG_Load("/Users/Foo/Code/xcode/test/lena.bmp");
if(texture == NULL){
    printf("bad image\n");
    exit(1);
}
//create an OpenGL texture object
glGenTextures(1, &textureObjOpenGLlogo);
//select the texture object you need
glBindTexture(GL_TEXTURE_2D, textureObjOpenGLlogo);
//define the parameters of that texture object
//how the texture should wrap in s direction
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
//how the texture should wrap in t direction
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
//how the texture lookup should be interpolated when the face is smaller than the texture
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
//how the texture lookup should be interpolated when the face is bigger than the texture
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
//send the texture image to the graphic card
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texture->w, texture->h, 0, GL_RGB, GL_UNSIGNED_BYTE, texture-> pixels);
//clean the SDL surface
SDL_FreeSurface(texture);
The code compiles without errors or warnings !
I've tried all the file formats, but it always produces this ugly result:
I'm using SDL_image 1.2.9 & SDL 1.2.14 with Xcode 3.2 under 10.6.2.
Does anyone know how to fix this?
The reason the image is distorted is that it's not in the RGBA format you've specified. Check texture->format to find out which format it is actually in and select the appropriate GL_ constant that represents it. (Or transform it yourself to the format of your choice.)
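A rough sketch of that check using SDL 1.2's surface fields (the mask comparison assumes a little-endian pixel layout, so adjust as needed):

// Pick the GL enums from the surface layout instead of hard-coding GL_RGB/GL_RGBA.
GLint internalFormat = (texture->format->BytesPerPixel == 4) ? GL_RGBA : GL_RGB;
GLenum pixelFormat;
if (texture->format->BytesPerPixel == 4)
    pixelFormat = (texture->format->Rmask == 0x000000ff) ? GL_RGBA : GL_BGRA;
else
    pixelFormat = (texture->format->Rmask == 0x000000ff) ? GL_RGB : GL_BGR;

glTexImage2D(GL_TEXTURE_2D, 0, internalFormat, texture->w, texture->h, 0, pixelFormat, GL_UNSIGNED_BYTE, texture->pixels);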
I think greyfade has the right answer, but another thing you should be aware of is the need to lock surfaces. It's probably not an issue here, since you're working with an in-memory surface, but normally you need to lock surfaces with SDL_LockSurface() before accessing their pixel data. For example:
bool lock = SDL_MUSTLOCK(texture);
if(lock)
    SDL_LockSurface(texture); // should check that the return value == 0

// access pixel data, e.g. call glTexImage2D

if(lock)
    SDL_UnlockSurface(texture);
If you have an alpha channel, every pixel is 4 unsigned bytes; if you don't, it's 3 unsigned bytes. This image has no transparency, and when I try to save it, it's a .jpg.
change
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texture->w, texture->h, 0, GL_RGB, GL_UNSIGNED_BYTE, texture-> pixels);
to
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, texture->w, texture->h, 0, GL_RGB, GL_UNSIGNED_BYTE, texture-> pixels);
That should fix it.
For a .png with an alpha channel, use
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texture->w, texture->h, 0, GL_RGBA, GL_UNSIGNED_BYTE, texture-> pixels);
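One more thing that often bites with tightly packed 3-byte GL_RGB rows: the default unpack alignment is 4, so if the row width in bytes is not a multiple of 4 the image comes out sheared unless you relax the alignment first:

glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // call before glTexImage2D when rows are tightly packed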