I'm trying to load a .jpg image as a background. I load it with stbi_load, but when I try to draw the texture I get the following error:
Exception thrown at 0x69ABF340 (nvoglv32.dll)
0xC0000005: Access violation reading location 0x0C933000. occurred
I tried changing the number of channels while loading the image, in case the image is RGBA rather than RGB, but with no success.
int width = 1280;
int height = 720;
int channels = 3;
GLuint t;
stbi_set_flip_vertically_on_load(true);
unsigned char *image = stbi_load("bg.jpg",
&width,
&height,
&channels,
STBI_rgb);
glEnable(GL_TEXTURE_2D);
glGenTextures(1, &t);
glBindTexture(GL_TEXTURE_2D, t);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, image);
glBindTexture(GL_TEXTURE_2D, 0);
The window should show the specified texture; instead I get a white window and the exception above.
The parameter STBI_rgb tells stbi_load to generate an image with 3 color channels, so the buffer (image) holds 3 bytes per pixel.
But when you specify the two-dimensional texture image with glTexImage2D, the format argument is GL_RGBA, which tells OpenGL to read 4 channels and therefore 4 bytes per pixel.
That makes OpenGL read the data buffer out of bounds, which is the access violation you see. Change the format parameter to GL_RGB to solve that.
Note further that by default OpenGL assumes the start of each row of an image is aligned to 4 bytes, because the GL_UNPACK_ALIGNMENT parameter defaults to 4. Since the image has 3 color channels and is tightly packed, the start of a row may be misaligned.
Change the GL_UNPACK_ALIGNMENT parameter to 1 before specifying the two-dimensional texture image (glTexImage2D).
Furthermore, the texture is not (mipmap) complete. The initial value of GL_TEXTURE_MIN_FILTER is GL_NEAREST_MIPMAP_LINEAR. If you don't change it and you don't create mipmaps, the texture is not "complete" and will not be "shown". See glTexParameter.
Either set the minification filter to GL_NEAREST or GL_LINEAR:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
or generate mipmaps with glGenerateMipmap(GL_TEXTURE_2D).
For example:
glBindTexture(GL_TEXTURE_2D, t);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, image);
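Putting it all together, a minimal sketch of the corrected load-and-upload path (same file name and variable names as in the question; error checking omitted):
int width = 0, height = 0, channels = 0;
stbi_set_flip_vertically_on_load(true);
unsigned char *image = stbi_load("bg.jpg", &width, &height, &channels, STBI_rgb);

GLuint t;
glGenTextures(1, &t);
glBindTexture(GL_TEXTURE_2D, t);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); // or keep the default filter and call glGenerateMipmap after the upload
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);                            // rows are tightly packed, 3 bytes per pixel
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
             GL_RGB, GL_UNSIGNED_BYTE, image);                    // format now matches STBI_rgb
glBindTexture(GL_TEXTURE_2D, 0);
stbi_image_free(image);                                           // GL has copied the pixel data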
Related
If I create a greyscale texture with dimensions that aren't divisible by 4, the layout doesn't match the given data. If I make the texture RGBA, everything works. What's going on? Is OpenGL internally packing the data into RGBA format?
width=16: (screenshot: the texture renders correctly)
width=15: (screenshot: each row of the texture appears shifted)
int width = 15;
unsigned char* localBuffer = new unsigned char[width*width];
glGenTextures(1, &textureObjID);
glBindTexture(GL_TEXTURE_2D, textureObjID);
for (int i = 0; i < width*width; i++)
{
    float x = (i % width) / (float)width;
    localBuffer[i] = x * 255;
}
glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, width, width, 0, GL_RED, GL_UNSIGNED_BYTE, localBuffer);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
By default OpenGL assumes that the start of each row of an image is aligned to 4 bytes, because the GL_UNPACK_ALIGNMENT parameter defaults to 4.
Since the image has 1 (red) color channel and is tightly packed, the start of each row is aligned to 4 bytes if width=16, but not if width=15.
Change the GL_UNPACK_ALIGNMENT parameter to 1 before specifying the two-dimensional texture image (glTexImage2D):
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, width, width,
0, GL_RED, GL_UNSIGNED_BYTE, localBuffer);
Since that is missing, each row of the image is read with a shift, unless the width of the image is divisible by 4.
When the format of the image is changed to GL_RGBA, a single pixel is 4 bytes, so the size of a row (in bytes) is divisible by 4 in any case.
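To make the arithmetic concrete, here is a small helper (a sketch only; gl_unpack_row_stride is not a GL function) that computes the row stride OpenGL expects for a given GL_UNPACK_ALIGNMENT:
// How many bytes OpenGL reads per image row, given the unpack alignment.
size_t gl_unpack_row_stride(size_t width, size_t bytes_per_pixel, size_t alignment)
{
    size_t row = width * bytes_per_pixel;                  // tightly packed row size
    return (row + alignment - 1) / alignment * alignment;  // rounded up to a multiple of alignment
}

// width = 15, GL_RED  (1 byte/pixel),  alignment 4 -> expects 16 bytes, buffer rows have 15: every row shifts
// width = 16, GL_RED  (1 byte/pixel),  alignment 4 -> expects 16 bytes, matches the buffer: no shift
// width = 15, GL_RGBA (4 bytes/pixel), alignment 4 -> 60 bytes, already a multiple of 4: no shift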
I found a function to convert a cv::Mat to an OpenGL texture, and it works:
GLuint matToTexture(cv::Mat &mat, GLenum minFilter, GLenum magFilter, GLenum wrapFilter)
{
    // Generate a number for our textureID's unique handle
    GLuint textureID;
    glGenTextures(1, &textureID);

    // Bind to our texture handle
    glBindTexture(GL_TEXTURE_2D, textureID);

    // Set texture interpolation methods for minification and magnification
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, minFilter);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, magFilter);

    // Set texture clamping method
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, wrapFilter);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, wrapFilter);

    // Create the texture
    glTexImage2D(GL_TEXTURE_2D,     // Type of texture
                 0,                 // Pyramid level (for mip-mapping) - 0 is the top level
                 GL_RGB,            // Internal colour format to convert to
                 mat.cols,          // Image width, e.g. 640 for a Kinect in standard mode
                 mat.rows,          // Image height, e.g. 480 for a Kinect in standard mode
                 0,                 // Border width in pixels (must be 0)
                 GL_BGR,            // Input image format (i.e. GL_RGB, GL_RGBA, GL_BGR etc.)
                 GL_UNSIGNED_BYTE,  // Image data type
                 mat.data);         // The actual image data itself

    // Unbind the texture
    glBindTexture(GL_TEXTURE_2D, 0);
    return textureID;
}
// example of usage
GLuint tex = matToTexture(mat, GL_NEAREST, GL_NEAREST, GL_CLAMP);
glBindTexture(GL_TEXTURE_2D, tex);
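One caveat about the function above (not something the original code handles): a cv::Mat is not always continuous, so its rows can be padded, and GL_UNPACK_ALIGNMENT defaults to 4. If the uploaded image ever looks skewed, a sketch of the unpack state that matches the Mat's actual layout would be:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);                                    // Mat rows need not be 4-byte aligned
glPixelStorei(GL_UNPACK_ROW_LENGTH, (GLint)(mat.step / mat.elemSize()));  // account for possible row padding
// ... the glTexImage2D call from matToTexture goes here ...
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);                                   // restore the default afterwards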
Is there any way to convert cv::UMat data to an OpenGL texture directly (without converting to cv::Mat first)?
I'm having problems using 1D textures in OpenGL 4.x.
I create my 1D texture this way (note: I removed my error checks to keep the code short and clear; usually a BLUE_ASSERTEx(glGetError() == GL_NO_ERROR, "glGetError failed."); follows each GL call):
glGenTextures(1, &textureId_);
// bind texture
glBindTexture(GL_TEXTURE_1D, textureId_);
// tells OpenGL how the data that is going to be uploaded is aligned
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
BLUE_ASSERTEx(description.data, "Invalid data provided");
glTexImage1D(
GL_TEXTURE_1D, // Specifies the target texture. Must be GL_TEXTURE_1D or GL_PROXY_TEXTURE_1D.
0, // Specifies the level-of-detail number. Level 0 is the base image level. Level n is the nth mipmap reduction image.
GL_RGBA32F,
description.width,
0, // border: This value must be 0.
GL_RGBA,
GL_FLOAT,
description.data);
BLUE_ASSERTEx(glGetError() == GL_NO_ERROR, "glGetError failed.");
// texture sampling/filtering operation.
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri (GL_TEXTURE_1D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri (GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri (GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glBindTexture(GL_TEXTURE_1D, 0);
After the creation I try to read the pixel data of the created texture this way:
const int width = width_;
const int height = 1;
// Allocate memory for the read-back (RGBA floats)
float* pixels = new float[width * height * 4];
// bind texture
glBindTexture(GL_TEXTURE_1D, textureId_);
glPixelStorei(GL_PACK_ALIGNMENT, 1);
glReadPixels(0, 0, width, height, GL_RGBA, GL_FLOAT, pixels);
glBindTexture(GL_TEXTURE_1D, 0);
buw::Image_4f::Ptr img(new buw::Image_4f(width, height, pixels));
buw::storeImageAsFile(filename.toCString(), img.get());
delete[] pixels;
But the returned pixel data is different from the input pixel data (input: a color ramp, output: a black image).
Any ideas how to solve this? Maybe I'm using the wrong API call.
Replacing glReadPixels with glGetTexImage fixes the issue: glReadPixels reads from the current framebuffer, not from a texture object.
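For reference, a minimal sketch of the read-back with glGetTexImage (same variables as above):
glBindTexture(GL_TEXTURE_1D, textureId_);
glPixelStorei(GL_PACK_ALIGNMENT, 1);
// glGetTexImage reads back the texture object itself; glReadPixels only reads the current framebuffer.
glGetTexImage(GL_TEXTURE_1D, 0, GL_RGBA, GL_FLOAT, pixels);
glBindTexture(GL_TEXTURE_1D, 0);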
EDIT: As suggested, I changed the texture target to GL_TEXTURE_2D. The initialisation now looks like this:
void initTexture( int width, int height )
{
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width,
height, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL );
glGenerateMipmap(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, 0);
}
Since it's a GL_TEXTURE_2D, mipmaps need to be defined. How should that be reflected in the initialisation of the OpenCL Image2D?
texMems.push_back(Image2DGL(clw->context, CL_MEM_READ_ONLY, GL_TEXTURE_2D, 0, tex, &err));
I'm still getting a CL_INVALID_GL_OBJECT, though. So the question still is: How can I check for texture completeness at the point of the initialisation of the OpenCL Image2D?
Previous approach:
I'm decoding a video-file with avcodec. The result is an AVFrame. I can display the frames on a GL_TEXTURE_RECTANGLE_ARB.
This is an excerpt from my texture initialisation, following an initialisation of the gl (glew) context:
GLuint tex = 0;
void initTexture( int width, int height )
{
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, tex);
glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, GL_RGB8, width,
height, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL );
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, 0);
}
Now I want to assign tex to an Image2DGL for interop between OpenCL and OpenGL. I'm using an Nvidia GeForce 310M and CUDA Toolkit 4. The OpenGL version is 3.3.0 and the GLX version is 1.4.
texMems.push_back(Image2DGL(clw->context, CL_MEM_READ_WRITE, GL_TEXTURE_RECTANGLE_ARB, 0, tex, &err));
This returns:
clCreateFromGLBuffer: -60 (CL_INVALID_GL_OBJECT)
This is all happening before I start the render loop; I can display the video frames on the texture just fine after that. The texture target (GL_TEXTURE_RECTANGLE_ARB) is allowed for the OpenCL context, as the corresponding OpenGL extension (GL_ARB_texture_rectangle) is enabled.
Now the error description in the OpenCL 1.1 spec states:
CL_INVALID_GL_OBJECT if texture is not a GL texture object whose type matches
texture_target, if the specified miplevel of texture is not defined, or if the width or height
of the specified miplevel is zero.
I'm using GL_TEXTURE_RECTANGLE_ARB, so there's no mipmapping (as I understand it). However, I found this statement in the Nvidia OpenCL implementation notes:
If the texture object specified in a call to clCreateFromGLTexture2D or
clCreateFromGLTexture3D is incomplete as per OpenGL rules on texture
completeness then the call will return CL_INVALID_GL_OBJECT in errcode_ret.
How can I validate texture completeness at that point, where I only initialize the texture without providing any actual texture content? Any ideas?
I was able to resolve the issue of not being able to create an Image2DGL: I had failed to specify a 4-channel internal format for the 2D texture:
void initTexture( int width, int height )
{
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL );
glBindTexture(GL_TEXTURE_2D, 0);
}
By specifying GL_RGBA I was able to successfully create the Image2DGL (its constructor is equivalent to clCreateFromGLTexture2D). It seems this satisfied the texture-completeness requirement.
I'm trying to load an image file and use it as a texture for a cube. I'm using SDL_image to do that.
I used this image because I've found it in various file formats (tga, tif, jpg, png, bmp)
The code :
SDL_Surface * texture;
//load an image to an SDL surface (i.e. a buffer)
texture = IMG_Load("/Users/Foo/Code/xcode/test/lena.bmp");
if(texture == NULL){
printf("bad image\n");
exit(1);
}
//create an OpenGL texture object
glGenTextures(1, &textureObjOpenGLlogo);
//select the texture object you need
glBindTexture(GL_TEXTURE_2D, textureObjOpenGLlogo);
//define the parameters of that texture object
//how the texture should wrap in s direction
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
//how the texture should wrap in t direction
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
//how the texture lookup should be interpolated when the face is smaller than the texture
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
//how the texture lookup should be interpolated when the face is bigger than the texture
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
//send the texture image to the graphic card
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texture->w, texture->h, 0, GL_RGB, GL_UNSIGNED_BYTE, texture-> pixels);
//clean the SDL surface
SDL_FreeSurface(texture);
The code compiles without errors or warnings!
I've tried all the file formats, but it always produces this ugly result:
I'm using SDL_image 1.2.9 and SDL 1.2.14 with Xcode 3.2 under Mac OS X 10.6.2.
Does anyone know how to fix this?
The reason the image is distorted is that the pixel data isn't in the format you've told glTexImage2D it is. Check texture->format to find out which format it's actually in and select the appropriate GL_ constant. (Or transform it yourself into the format of your choice.)
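For example, a sketch of picking the upload format from the surface (this assumes a 24- or 32-bit surface; anything else would need converting first):
GLint  internal = (texture->format->BytesPerPixel == 4) ? GL_RGBA : GL_RGB;
GLenum fmt;
if (texture->format->BytesPerPixel == 4)
    fmt = (texture->format->Rmask == 0x000000ff) ? GL_RGBA : GL_BGRA;
else
    fmt = (texture->format->Rmask == 0x000000ff) ? GL_RGB : GL_BGR;

glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // SDL surface rows are not guaranteed to be 4-byte aligned
glTexImage2D(GL_TEXTURE_2D, 0, internal, texture->w, texture->h, 0,
             fmt, GL_UNSIGNED_BYTE, texture->pixels);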
I think greyfade has the right answer, but another thing you should be aware of is the need to lock surfaces. It's probably not needed here, since you're working with an in-memory surface, but normally you need to lock a surface with SDL_LockSurface() before accessing its pixel data. For example:
bool lock = SDL_MUSTLOCK(texture);
if (lock)
    SDL_LockSurface(texture); // should check that the return value == 0

// access the pixel data, e.g. call glTexImage2D

if (lock)
    SDL_UnlockSurface(texture);
If the image has an alpha channel, every pixel is 4 unsigned bytes; if it doesn't, it's 3 unsigned bytes. This image has no transparency, and when I save it, it's a .jpg.
change
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texture->w, texture->h, 0, GL_RGB, GL_UNSIGNED_BYTE, texture-> pixels);
to
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, texture->w, texture->h, 0, GL_RGB, GL_UNSIGNED_BYTE, texture-> pixels);
That should fix it.
For a .png with an alpha channel use
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texture->w, texture->h, 0, GL_RGBA, GL_UNSIGNED_BYTE, texture-> pixels);