EDIT: As suggested, I changed the texture target to GL_TEXTURE_2D. The initialisation now looks like this:
void initTexture( int width, int height )
{
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
    glGenerateMipmap(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, 0);
}
Since it's a GL_TEXTURE_2D, mipmaps need to be defined. How should that be reflected in the initialisation of the OpenCL Image2DGL?
texMems.push_back(Image2DGL(clw->context, CL_MEM_READ_ONLY, GL_TEXTURE_2D, 0, tex, &err));
I'm still getting a CL_INVALID_GL_OBJECT, though. So the question still is: How can I check for texture completeness at the point of the initialisation of the OpenCL Image2D?
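One minimal sanity check is to verify that the mip level handed to Image2DGL is actually defined and non-zero right before the interop call. This is only a sketch using standard GL level queries (GL 3.3 has no single "is this texture complete" query); error handling is illustrative:
GLint w = 0, h = 0, fmt = 0;
// Sketch: query mip level 0 of 'tex' before creating the Image2DGL.
glBindTexture(GL_TEXTURE_2D, tex);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH,  &w);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, &h);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_INTERNAL_FORMAT, &fmt);
glBindTexture(GL_TEXTURE_2D, 0);
if (w == 0 || h == 0)
    printf("mip level 0 is undefined, Image2DGL creation will fail\n");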
Previous approach:
I'm decoding a video-file with avcodec. The result is an AVFrame. I can display the frames on a GL_TEXTURE_RECTANGLE_ARB.
This is an excerpt from my texture initialisation, following an initialisation of the gl (glew) context:
GLuint tex = 0;
void initTexture( int width, int height )
{
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_RECTANGLE_ARB, tex);
    glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, GL_RGB8, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
    glBindTexture(GL_TEXTURE_RECTANGLE_ARB, 0);
}
Now I want to assign tex to an Image2DGL for OpenCL/OpenGL interop. I'm using an Nvidia GeForce 310M with CUDA Toolkit 4; the OpenGL version is 3.3.0 and the GLX version is 1.4.
texMems.push_back(Image2DGL(clw->context, CL_MEM_READ_WRITE, GL_TEXTURE_RECTANGLE_ARB, 0, tex, &err));
This gives back:
clCreateFromGLBuffer: -60 (CL_INVALID_GL_OBJECT)
This all happens before I start the render loop; after that, I can display the video frames on the texture just fine. The texture target (GL_TEXTURE_RECTANGLE_ARB) is allowed for the OpenCL context, as the corresponding OpenGL extension (GL_ARB_texture_rectangle) is enabled.
Now the error description in the OpenCL 1.1 spec states:
CL_INVALID_GL_OBJECT if texture is not a GL texture object whose type matches
texture_target, if the specified miplevel of texture is not defined, or if the width or height
of the specified miplevel is zero.
I'm using GL_TEXTURE_RECTANGLE_ARB, so there is no mipmapping (as I understand it). However, I found this statement in the Nvidia OpenCL implementation notes:
If the texture object specified in a call to clCreateFromGLTexture2D or
clCreateFromGLTexture3D is incomplete as per OpenGL rules on texture
completeness then the call will return CL_INVALID_GL_OBJECT in errcode_ret.
How can I validate texture completeness at this stage, where I only initialise the texture without providing any actual texture content? Any ideas?
I was able to resolve the issue of not being able to create an Image2DGL: I had failed to specify a 4-channel internal format for the GL_TEXTURE_2D:
void initTexture( int width, int height )
{
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
    glBindTexture(GL_TEXTURE_2D, 0);
}
By specifying GL_RGBA I was able to successfully create the Image2DGL (which is equivalent to clCreateFromGLTexture2D). It seems that this fulfilled the requirement for texture completeness.
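For context, the shared image is then used in the render loop with the usual acquire/use/release pattern. The snippet below is only a sketch with the cl.hpp C++ wrapper; queue and kernel are placeholder names for objects the question doesn't show:
// Sketch of the interop usage pattern; 'queue' and 'kernel' are assumed to
// exist on the same context as the Image2DGL objects in texMems.
glFinish();  // make sure GL has finished writing to the texture
std::vector<cl::Memory> glObjs(texMems.begin(), texMems.end());
queue.enqueueAcquireGLObjects(&glObjs);
kernel.setArg(0, texMems[0]);
queue.enqueueNDRangeKernel(kernel, cl::NullRange, cl::NDRange(width, height), cl::NullRange);
queue.enqueueReleaseGLObjects(&glObjs);
queue.finish();  // hand the texture back to GL before rendering with it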
Related
I am working on a cross-platform project that involves OpenGL ES (3.1). The code runs perfectly on my Windows and Ubuntu machines, but running it on a Raspberry Pi 4 causes a strange issue: after successfully initializing a texture with glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 16, 256, 0, GL_RGB, GL_UNSIGNED_BYTE, 0), querying the supported read format for the same framebuffer with glGetIntegerv(GL_IMPLEMENTATION_COLOR_READ_FORMAT, ...) returns GL_RGBA. For context creation I am using GLFW with GLAD. Below is the complete code of the texture initialization:
...
GLuint pix_buf;
glGenFramebuffers(1, &pix_buf);
glBindFramebuffer(GL_FRAMEBUFFER, pix_buf);
GLuint text;
glGenTextures(1, &text);
glBindTexture(GL_TEXTURE_2D, text);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 16, 256, 0, GL_RGB, GL_UNSIGNED_BYTE, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, text, 0);
GLenum DrawBuffers[1] = {GL_COLOR_ATTACHMENT0};
glDrawBuffers(1, DrawBuffers);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
{
    std::cout << "Frame buffer was not initialized" << std::endl;
    return;
}
GLint read_format, read_type;
glGetIntegerv(GL_IMPLEMENTATION_COLOR_READ_FORMAT, &read_format);
glGetIntegerv(GL_IMPLEMENTATION_COLOR_READ_TYPE, &read_type);
...
The read_format value is GL_RGBA, where I expected GL_RGB!
The read_type value is GL_UNSIGNED_BYTE, as expected.
So, after the rendering call, attempting to read the texture into a local back_buf array with glReadPixels(0, 0, 16, 256, GL_RGB, GL_UNSIGNED_BYTE, back_buf) causes GL_INVALID_OPERATION in glReadPixels (invalid format GL_RGB and/or GL_UNSIGNED_BYTE). Changing the read format from GL_RGB to GL_RGBA fixes the error, but the resulting data format can't be used by my program (I am strictly looking for GL_RGB).
My question is: am I doing something wrong, or is there a problem with the Raspberry Pi 4 OpenGL ES drivers?
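In case the driver really only guarantees the GL_RGBA/GL_UNSIGNED_BYTE pair (plus the one implementation-defined pair reported by the queries above), a possible CPU-side workaround is to read RGBA and repack it to tight RGB. A rough sketch for the 16x256 framebuffer above (assumes <vector> is available):
// Workaround sketch: read RGBA from the bound FBO, then drop the alpha byte.
const int w = 16, h = 256;
std::vector<unsigned char> rgba(w * h * 4);
std::vector<unsigned char> rgb(w * h * 3);
glPixelStorei(GL_PACK_ALIGNMENT, 1);
glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, rgba.data());
for (int i = 0; i < w * h; ++i)
{
    rgb[i * 3 + 0] = rgba[i * 4 + 0];  // R
    rgb[i * 3 + 1] = rgba[i * 4 + 1];  // G
    rgb[i * 3 + 2] = rgba[i * 4 + 2];  // B
}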
I'm trying to make an OpenCV Mat() using the output of OpenGL's glGetTexImage(). The texture I am trying to get data from was made using the call:
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA8UI, iWidth, iHeight, 0, GL_RGBA_INTEGER, GL_UNSIGNED_BYTE, pImageData);
and so I've tried to do this using:
unsigned char* texture_bytes = (unsigned char*)malloc(sizeof(unsigned char)*texture_width*texture_height * 3);
glGetTexImage(GL_TEXTURE_2D, 0, GL_BGR, GL_UNSIGNED_BYTE, texture_bytes);
Matrix = Mat(texture_height, texture_width, CV_8UC3, texture_bytes);
What I am wondering is if anyone knows what I should set the format and type of glGetTexImage() to in order for this to work. Also, what should I set the type of the Mat() to?
You can assume that the context is set correctly, and that the texture that is input is correct. I have verified this by displaying the texture on screen using OpenGL. Thanks in advance!
I have been faced with the problem of getting data from OpenGL to OpenCV recently. I didn't use glGetTexImage though.
What I did was an offscreen render in a framebuffer with a texture initialized like this:
GLuint texture = 0;  // initialised to 0 so the glIsTexture() check below is well-defined
if (glIsTexture(texture)) {
    glDeleteTextures(1, &texture);
}
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glBindTexture(GL_TEXTURE_2D, 0);
glBindTexture(GL_TEXTURE_2D, texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0, GL_RGBA, GL_FLOAT, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texture, 0);
glBindTexture(GL_TEXTURE_2D, 0);
Then after my draw calls, I get the data using glReadPixels:
glPixelStorei(GL_PACK_ALIGNMENT, 1);
glReadBuffer(GL_COLOR_ATTACHMENT0);
cv::Mat texture = cv::Mat::zeros(height, width, CV_32FC3);
glReadPixels(0, 0, width, height, GL_BGR, GL_FLOAT, texture.data);
Hope it helps you.
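One detail to keep in mind with this approach (a small addition to the snippet above): glReadPixels returns rows bottom-up, while cv::Mat stores them top-down, so the result usually needs a vertical flip:
// OpenGL's origin is bottom-left, OpenCV's is top-left: flip vertically.
cv::flip(texture, texture, 0);  // flipCode 0 = flip around the horizontal axis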
You have a mismatch in the format parameter used for the glGetTexImage() call and the internal format of the texture:
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA8UI, iWidth, iHeight, 0, GL_RGBA_INTEGER, GL_UNSIGNED_BYTE, pImageData);
...
glGetTexImage(GL_TEXTURE_2D, 0, GL_BGR, GL_UNSIGNED_BYTE, texture_bytes);
For an integer texture, which you have in this case, you need to use a format parameter to glGetTexImage() that works for integer textures. In your example, that would be:
glGetTexImage(GL_TEXTURE_2D, 0, GL_BGR_INTEGER, GL_UNSIGNED_BYTE, texture_bytes);
It is always a good idea to call glGetError() if you have any kind of problem getting the desired OpenGL behavior. In this case, you would have gotten a GL_INVALID_OPERATION error, based on this error condition in the spec:
format is one of the integer formats in table 8.3 and the internal format of the texture image is not integer, or format is not one of the integer formats in table 8.3 and the internal format is integer.
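For reference, a minimal version of that error check around the existing call (just a sketch):
// Sketch: report any pending GL errors after the read-back call.
glGetTexImage(GL_TEXTURE_2D, 0, GL_BGR_INTEGER, GL_UNSIGNED_BYTE, texture_bytes);
for (GLenum err; (err = glGetError()) != GL_NO_ERROR; )
    fprintf(stderr, "GL error 0x%04x\n", err);  // 0x0502 is GL_INVALID_OPERATION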
I am new here, and I have a question about the OpenGL texture format for depth information; here is part of my code:
glGenTextures(1,&tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE16UI_EXT, width, height, 0, GL_LUMINANCE_INTEGER_EXT, GL_UNSIGNED_SHORT, data);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
The question is: on my Intel HD Graphics 5500, glTexImage2D fails when I want to handle the depth-camera data (unsigned short), but the same code works on NV (GeForce 940M). (The GL error is 0x0502, GL_INVALID_OPERATION.)
Is the internal format GL_LUMINANCE16UI_EXT not supported on Intel HD? Am I missing something, or is there a better format I could use?
BTW, I tried the internal format GL_DEPTH_COMPONENT16 with GL_DEPTH_COMPONENT, which makes that error go away, but then other problems appear in the code that follows the snippet above:
glBindTexture(GL_TEXTURE_2D, tex);
frameBuffer.Bind();
glPushAttrib(GL_VIEWPORT_BIT);
glViewport(0, 0, renderBuffer.width, renderBuffer.height);
glClearColor(0, 0, 0, 0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
GlSlProgram Bind;
.
glGetUniformLocation(...);
glUniform3f(...);
.
glDrawArrays(GL_POINTS, 0, 1);
frameBuffer.Unbind();
GlSlProgram Unbind;
glPopAttrib();
glFinish();
The GL error 0x0506 (GL_INVALID_FRAMEBUFFER_OPERATION) then occurs in glClear and glDrawArrays when this format is used. I don't know how to fix that...
GL_LUMINANCE16UI is not a depth buffer format and will most likely not work. A list of the available depth buffer formats is here.
Also, you probably shouldn't just bind the texture itself; instead, attach it to the framebuffer with glFramebufferTexture2D and GL_DEPTH_ATTACHMENT.
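A sketch of what that could look like (illustrative only; width, height and the bound framebuffer object are assumed from the question):
// Sketch: create a depth texture and attach it to the currently bound FBO.
GLuint depthTex = 0;
glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT16, width, height, 0,
             GL_DEPTH_COMPONENT, GL_UNSIGNED_SHORT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTex, 0);
// The FBO also needs a color attachment (or glDrawBuffer(GL_NONE)) to be complete.
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    printf("framebuffer incomplete\n");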
I have problems using 1D textures in OpenGL 4.x.
I create my 1D texture this way (BTW: I removed my error checks to make the code clearer and shorter; usually a BLUE_ASSERTEx(glGetError() == GL_NO_ERROR, "glGetError failed."); follows each gl call):
glGenTextures(1, &textureId_);
// bind texture
glBindTexture(GL_TEXTURE_1D, textureId_);
// tells OpenGL how the data that is going to be uploaded is aligned
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
BLUE_ASSERTEx(description.data, "Invalid data provided");
glTexImage1D(
GL_TEXTURE_1D, // Specifies the target texture. Must be GL_TEXTURE_1D or GL_PROXY_TEXTURE_1D.
0, // Specifies the level-of-detail number. Level 0 is the base image level. Level n is the nth mipmap reduction image.
GL_RGBA32F,
description.width,
0, // border: This value must be 0.
GL_RGBA,
GL_FLOAT,
description.data);
BLUE_ASSERTEx(glGetError() == GL_NO_ERROR, "glGetError failed.");
// texture sampling/filtering operation.
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri (GL_TEXTURE_1D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri (GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri (GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glBindTexture(GL_TEXTURE_1D, 0);
After the creation I try to read the pixel data of the created texture this way:
const int width = width_;
const int height = 1;
// Allocate memory for the read-back (4 floats per pixel for GL_RGBA / GL_FLOAT)
float* pixels = new float[width * height * 4];
// bind texture
glBindTexture(GL_TEXTURE_1D, textureId_);
glPixelStorei(GL_PACK_ALIGNMENT, 1);
glReadPixels(0, 0, width, height, GL_RGBA, GL_FLOAT, pixels);
glBindTexture(GL_TEXTURE_1D, 0);
buw::Image_4f::Ptr img(new buw::Image_4f(width, height, pixels));
buw::storeImageAsFile(filename.toCString(), img.get());
delete[] pixels;
But the returned pixel data is different from the input pixel data (input: a color ramp, output: a black image).
Any ideas how to solve the issue? Maybe I am using a wrong API call.
Replacing glReadPixels by glGetTexImage does fix the issue.
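For reference, a sketch of the working read-back path (illustrative; it reuses the variables from the question, and glReadPixels is avoided because it reads from the framebuffer rather than from a texture):
// Sketch: read the 1D texture's pixels back via glGetTexImage.
glBindTexture(GL_TEXTURE_1D, textureId_);
glPixelStorei(GL_PACK_ALIGNMENT, 1);
glGetTexImage(GL_TEXTURE_1D, 0, GL_RGBA, GL_FLOAT, pixels);
glBindTexture(GL_TEXTURE_1D, 0);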
I'm trying to load an image file and use it as a texture for a cube. I'm using SDL_image to do that.
I used this image because I've found it in various file formats (tga, tif, jpg, png, bmp)
The code :
SDL_Surface * texture;
//load an image to an SDL surface (i.e. a buffer)
texture = IMG_Load("/Users/Foo/Code/xcode/test/lena.bmp");
if(texture == NULL){
    printf("bad image\n");
    exit(1);
}
//create an OpenGL texture object
glGenTextures(1, &textureObjOpenGLlogo);
//select the texture object you need
glBindTexture(GL_TEXTURE_2D, textureObjOpenGLlogo);
//define the parameters of that texture object
//how the texture should wrap in s direction
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
//how the texture should wrap in t direction
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
//how the texture lookup should be interpolated when the face is smaller than the texture
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
//how the texture lookup should be interpolated when the face is bigger than the texture
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
//send the texture image to the graphic card
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texture->w, texture->h, 0, GL_RGB, GL_UNSIGNED_BYTE, texture->pixels);
//clean the SDL surface
SDL_FreeSurface(texture);
The code compiles without errors or warnings!
I've tried all the file formats, but this always produces that ugly result:
I'm using SDL_image 1.2.9 & SDL 1.2.14 with Xcode 3.2 under 10.6.2.
Does anyone know how to fix this?
The reason the image is distorted is because it's not in the RGBA format that you've specified. Check the texture->format to find out the format it's in and select the appropriate GL_ constant that represents the format. (Or, transform it yourself to the format of your choice.)
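A rough sketch of how such a check might look with SDL 1.2 (illustrative only; it covers the common 24- and 32-bit cases, and other surface depths would need an explicit conversion first):
// Sketch: derive the upload format from the SDL surface instead of hard-coding it.
GLenum format;
GLint internalFormat;
if (texture->format->BytesPerPixel == 4) {
    internalFormat = GL_RGBA;
    format = (texture->format->Rmask == 0x000000ff) ? GL_RGBA : GL_BGRA;
} else {
    internalFormat = GL_RGB;
    format = (texture->format->Rmask == 0x000000ff) ? GL_RGB : GL_BGR;
}
glTexImage2D(GL_TEXTURE_2D, 0, internalFormat, texture->w, texture->h, 0,
             format, GL_UNSIGNED_BYTE, texture->pixels);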
I think greyfade has the right answer, but another thing you should be aware of is the need to lock surfaces. This is probably not the case, since you're working with an in-memory surface, but normally you need to lock surfaces before accessing their pixel data with SDL_LockSurface(). For example:
bool lock = SDL_MUSTLOCK(texture);
if(lock)
    SDL_LockSurface(texture); // should check that the return value == 0
// access pixel data, e.g. call glTexImage2D
if(lock)
    SDL_UnlockSurface(texture);
If you have an alpha channel, every pixel is 4 unsigned bytes; if you don't, it's 3 unsigned bytes. This image has no transparency, and when I try to save it, it's a .jpg.
change
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texture->w, texture->h, 0, GL_RGB, GL_UNSIGNED_BYTE, texture->pixels);
to
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, texture->w, texture->h, 0, GL_RGB, GL_UNSIGNED_BYTE, texture->pixels);
That should fix it.
For a .png with an alpha channel use
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texture->w, texture->h, 0, GL_RGBA, GL_UNSIGNED_BYTE, texture->pixels);