OpenGL fails to load raw picture - C++

I am trying to run code that uses a raw picture as a texture. The problem is the picture won't load. Where do I need to put the picture so the program can locate it? It is currently in the project folder I am working in. I work in Code::Blocks 12.11 (Win7, MinGW).
bool setup_textures()
{
    RGBIMG img;

    // Create The Textures' Id List
    glGenTextures(TEXTURES_NUM, g_texid);

    // Load The Image From A Disk File
    if (!load_rgb_image("glass_128x128.raw", 128, 128, &img)) return false;

    // Create Nearest Filtered Texture
    glBindTexture(GL_TEXTURE_2D, g_texid[0]);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, 3, img.w, img.h, 0, GL_RGB, GL_UNSIGNED_BYTE, img.data);

    // Create Linear Filtered Texture
    glBindTexture(GL_TEXTURE_2D, g_texid[1]);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, 3, img.w, img.h, 0, GL_RGB, GL_UNSIGNED_BYTE, img.data);

    // Create MipMapped Texture
    glBindTexture(GL_TEXTURE_2D, g_texid[2]);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST);
    gluBuild2DMipmaps(GL_TEXTURE_2D, 3, img.w, img.h, GL_RGB, GL_UNSIGNED_BYTE, img.data);

    // Finished With Our Image, Free The Allocated Data
    delete img.data;
    return true;
}
The problem is that the line loading glass_128x128.raw fails and returns false.

Go to:
Project -> Properties -> Build targets -> [name of target] -> Execution working dir
and make sure that's set to the same directory as glass_128x128.raw. Chances are it's running from whatever directory your debug builds get put in, which isn't the same directory as your image is in.
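If you want to confirm at runtime where the program is actually being launched from, a quick diagnostic helps; this is just a sketch, assuming a C++17 compiler so that std::filesystem is available (the filename is the one from the question):

#include <filesystem>
#include <iostream>

int main()
{
    // Print the process's current working directory and whether the
    // texture file is visible from there.
    std::cout << "Working directory: "
              << std::filesystem::current_path() << '\n';
    std::cout << "glass_128x128.raw found: " << std::boolalpha
              << std::filesystem::exists("glass_128x128.raw") << '\n';
}

If the second line prints false, either move the file or change the execution working directory as described above.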

Related

Is there a better/more efficient way to capture composite X windows in Linux?

As per the subject, I have the following pseudo-code to set up window capture in X (Linux):
xdisplay = XOpenDisplay(NULL);
win_capture = ...find the window to capture...
XCompositeRedirectWindow(xdisplay, win_capture, CompositeRedirectAutomatic);
XGetWindowAttributes(xdisplay, win_capture, &win_attr); // attributes used later
GLXFBConfig *configs = glXChooseFBConfig(xdisplay, win_attr.root, config_attrs, &nelem);
// cycle through the configs to
// find a valid one
...
win_pixmap = XCompositeNameWindowPixmap(xdisplay, win_capture);
const int pixmap_attrs[] = {GLX_TEXTURE_TARGET_EXT, GLX_TEXTURE_2D_EXT,
                            GLX_TEXTURE_FORMAT_EXT, GLX_TEXTURE_FORMAT_RGBA_EXT,
                            None};
gl_pixmap = glXCreatePixmap(xdisplay, config, win_pixmap, pixmap_attrs);
gl_ctx = glXCreateNewContext(xdisplay, config, GLX_RGBA_TYPE, 0, 1);
glXMakeCurrent(xdisplay, gl_pixmap, gl_ctx);
glEnable(GL_TEXTURE_2D);
glGenTextures(1, &gl_texmap);
glBindTexture(GL_TEXTURE_2D, gl_texmap);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, win_attr.width, win_attr.height, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
Then, much later on, this would be the loop to capture the frames:
glXMakeCurrent(xdisplay, gl_pixmap, gl_ctx);
glBindTexture(GL_TEXTURE_2D, gl_texmap);
glXBindTexImageEXT(xdisplay, gl_pixmap, GLX_FRONT_LEFT_EXT, NULL);
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, data); // data is output RGBA buffer
glXReleaseTexImageEXT(xdisplay, gl_pixmap, GLX_FRONT_LEFT_EXT);
I basically do glXBindTexImageEXT -> glGetTexImage -> glXReleaseTexImageEXT so that I get an updated picture.
It works, but I'm not sure I'm doing the right/optimal thing.
Is there a better/more optimized way to get such a picture/context?
For now I've found a slightly better way to fetch the composite window through OpenGL, via a PBO; the advantage of this approach is that you can initiate the command asynchronously and retrieve the RGBA buffer from system memory later, while the OpenGL driver performs the data transfer.
Sample pseudocode:
// setup a PBO
GLuint cur_pbo;
glGenBuffers(1, &cur_pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, cur_pbo);
// size is the byte size of one frame (width * height * 4 for RGBA)
glBufferData(GL_PIXEL_PACK_BUFFER, size, NULL, GL_STREAM_READ);
Then, much later on:
glXMakeCurrent(xdisplay, gl_pixmap, gl_ctx);
glBindTexture(GL_TEXTURE_2D, gl_texmap);
glXBindTexImageEXT(xdisplay, gl_pixmap, GLX_FRONT_LEFT_EXT, NULL);
glBindBuffer(GL_PIXEL_PACK_BUFFER, cur_pbo);
// This initiates the asynchronous transfer; the last argument is no longer
// a pointer to client memory but a byte offset into the PBO bound by the
// previous glBufferData call
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
// do something else
...
...
...
// then later on, when we _really_ need the data, perform this call,
// which will block if the RGBA data is not available yet
void* rgba_ptr = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
// Then, when finished using rgba_ptr, release it
glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
glXReleaseTexImageEXT(xdisplay, gl_pixmap, GLX_FRONT_LEFT_EXT);
This approach is definitely better than the original approach (in the question) if you can use the CPU/same thread to do something between the calls to glGetTexImage and glMapBuffer.
It may still be better even if you perform these calls back to back (compared with glGetTexImage without a PBO), because the driver can still optimize the transfer and manages the system-memory buffer itself.
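Not part of the original answer, but a common variation on the same idea is to ping-pong between two PBOs so the readback of one frame can be mapped while the next frame is still being packed; a rough sketch reusing the names from the code above (pbo, cur, and prev_rgba are made up here):

// setup: two PBOs instead of one
GLuint pbo[2];
glGenBuffers(2, pbo);
for (int i = 0; i < 2; ++i) {
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[i]);
    glBufferData(GL_PIXEL_PACK_BUFFER, size, NULL, GL_STREAM_READ);
}
int cur = 0;

// per captured frame:
glBindTexture(GL_TEXTURE_2D, gl_texmap);
glXBindTexImageEXT(xdisplay, gl_pixmap, GLX_FRONT_LEFT_EXT, NULL);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[cur]);
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0); // start async pack into pbo[cur]
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[1 - cur]);
void* prev_rgba = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY); // previous frame (empty on the very first frame)
// ... consume prev_rgba ...
glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
glXReleaseTexImageEXT(xdisplay, gl_pixmap, GLX_FRONT_LEFT_EXT);
cur = 1 - cur;

This trades one frame of latency for more overlap between the GPU transfer and the CPU consumer.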

Loading PNG with stb_image for OpenGL texture gives wrong colors

I am using stb_image to load a 32-bit PNG file (RGBA) and I am creating an OpenGL texture with it.
It works fine for 24-bit PNG files (with no alpha channel), but when I use a 32-bit PNG file, something goes wrong.
This is what the texture should look like:
And this is what it looks like when rendered with OpenGL (the black parts are meant to be transparent, and they are once I enable blending):
This is how I load the texture:
int w;
int h;
int comp;
unsigned char* image = stbi_load(filename.c_str(), &w, &h, &comp, STBI_rgb);

if(image == nullptr)
    throw(std::string("Failed to load texture"));

glGenTextures(1, &m_texture);
glBindTexture(GL_TEXTURE_2D, m_texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

if(comp == 3)
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, w, h, 0, GL_RGB, GL_UNSIGNED_BYTE, image);
else if(comp == 4)
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, image);

glBindTexture(GL_TEXTURE_2D, 0);
stbi_image_free(image);
And these are the window parameters (using SDL)
SDL_GL_SetAttribute(SDL_GL_RED_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_GREEN_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_BLUE_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_ALPHA_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_BUFFER_SIZE, 32);
SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, 16);
What is happening?
Your actual bug is that you use comp to determine the format (GL_RGB/GL_RGBA) parameter you pass to glTexImage2D. This goes wrong because when you load an image using stbi_load, the value returned in comp always describes the source image, not the image data that is returned.
More specifically, your bug is that you use STBI_rgb, causing stbi_load to return 3-byte pixels, but then you upload the data with glTexImage2D as 4-byte pixels with GL_RGBA, because comp is 4 when you load a 32-bit image.
You must set the format in your call to glTexImage2D to GL_RGB if you use STBI_rgb and to GL_RGBA if you use STBI_rgb_alpha.
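For reference, a minimal corrected sketch of the loading code from the question, keeping the question's variable names and simply forcing four components so the format passed to glTexImage2D always matches the returned data:

int w, h, comp;
// Force 4 components; the returned data is then always RGBA,
// regardless of what the source PNG contains.
unsigned char* image = stbi_load(filename.c_str(), &w, &h, &comp, STBI_rgb_alpha);
if(image == nullptr)
    throw(std::string("Failed to load texture"));

glGenTextures(1, &m_texture);
glBindTexture(GL_TEXTURE_2D, m_texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // harmless for RGBA, see the note below
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, image);
glBindTexture(GL_TEXTURE_2D, 0);
stbi_image_free(image);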
Bonus to other readers
Are you having a similar problem and the above still doesn't help? Then your image data might not have rows on the alignment OpenGL expects. Try glPixelStorei(GL_UNPACK_ALIGNMENT, 1); before you call glTexImage2D. See https://www.khronos.org/opengl/wiki/Pixel_Transfer#Pixel_layout for more information.
Changing STBI_rgb to STBI_rgb_alpha in the stbi_load call fixed it.
Probably best not to specify RGB when it's RGBA :D

Texture loading with DevIL, equivalent code to texture loading with Qt?

I am working with OpenGL and GLSL in Visual Studio C++ 2010. I am writing shaders and I need to load a texture. I am reading code from a book, and there they load textures with Qt, but I need to do it with DevIL. Can someone please write the equivalent code for texture loading with DevIL? I am new to DevIL and I don't know how to translate this.
// Load texture file
const char * texName = "texture/brick1.jpg";
QImage timg = QGLWidget::convertToGLFormat(QImage(texName,"JPG"));
// Copy file to OpenGL
glActiveTexture(GL_TEXTURE0);
GLuint tid;
glGenTextures(1, &tid);
glBindTexture(GL_TEXTURE_2D, tid);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, timg.width(), timg.height(), 0,
             GL_RGBA, GL_UNSIGNED_BYTE, timg.bits());
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
Given that DevIL is no longer maintained, and that the ILUT part assumes power-of-two texture dimensions and rescales images in its convenience functions, it actually makes sense to take the detour of doing it manually.
Loading an image from a file with DevIL works quite similarly to loading a texture from an image in OpenGL. First you create a DevIL image name and bind it:
GLuint loadImageToTexture(char const * const thefilename)
{
    ILuint imageID;
    ilGenImages(1, &imageID);
    ilBindImage(imageID);
Now you can load an image from a file:
    ilLoadImage(thefilename);
Check that the image actually provides data; if not, clean up:
    ILubyte *data = ilGetData();
    if(!data) {
        ilBindImage(0);
        ilDeleteImages(1, &imageID);
        return 0;
    }
Retrieve the important parameters:
    int const width  = ilGetInteger(IL_IMAGE_WIDTH);
    int const height = ilGetInteger(IL_IMAGE_HEIGHT);
    int const type   = ilGetInteger(IL_IMAGE_TYPE);   // matches OpenGL
    int const format = ilGetInteger(IL_IMAGE_FORMAT); // matches OpenGL
Generate a texture name:
    GLuint textureID;
    glGenTextures(1, &textureID);
    glBindTexture(GL_TEXTURE_2D, textureID);
Next we set the pixel store parameters (your original code missed that crucial step):
    glPixelStorei(GL_UNPACK_SWAP_BYTES, GL_FALSE);
    glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);   // rows are tightly packed
    glPixelStorei(GL_UNPACK_SKIP_PIXELS, 0);
    glPixelStorei(GL_UNPACK_SKIP_ROWS, 0);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);    // pixels are tightly packed
Now we can upload the texture image:
    glTexImage2D(GL_TEXTURE_2D, 0, format, width, height, 0, format, type, data);
Next, for convenience, we set the minification filter to GL_LINEAR, so that we don't have to supply mipmap levels:
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
Finally, return the texture ID:
    return textureID;
}
If you want to use mipmapping you can call OpenGL's glGenerateMipmap later on; use glTexParameteri with GL_TEXTURE_BASE_LEVEL and GL_TEXTURE_MAX_LEVEL to control the span of the image pyramid that gets generated.
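A possible way to call the function above; this is only a sketch and assumes DevIL has been initialized once with ilInit() before any image is loaded (the file path is taken from the question's Qt code):

// Once, during program initialization:
ilInit();

// Later, wherever a texture is needed:
GLuint brickTex = loadImageToTexture("texture/brick1.jpg");
if (brickTex == 0) {
    // loading failed; handle the error
}
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, brickTex);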

Dynamically changing a texture in OpenGL

I have some (OpenCV) code that generates images. I'm displaying these using OpenGL. When new images are created I run the following function (each time) with the same texture name and a new image:
void loadCVTexture(GLuint& texture, const cv::Mat_<Vec3f>& image){
    if(texture != 0){
        glBindTexture(GL_TEXTURE_2D, texture);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, image.cols, image.rows, GL_BGR, GL_FLOAT, image.data);
    } else {
        glGenTextures(1, &texture);
        glBindTexture(GL_TEXTURE_2D, texture);
        glTexImage2D(GL_TEXTURE_2D, 0, 3, image.cols, image.rows, 0, GL_BGR, GL_FLOAT, image.data);
    }
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}
I initialize the first image before glutMainLoop() and it displays correctly. It is given the id 1. When I update the image again, the picture does not change. (I have confirmed that the display function is being called and that the image is different.)
Edit: another clue: I have sub-windows. If I comment out my other window, the code works as expected.
Since it works correctly without "sub-windows", my guess would be that you have multiple OpenGL contexts in your application, and that the updating of the texture happens with the wrong context active.
Try putting the texture uploading into your display function and see if that makes a difference.
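If the sub-windows come from GLUT (the question mentions glutMainLoop), making the right window's context current before uploading could look like the sketch below; main_window, g_texture, and new_image are hypothetical names, not from the question:

// Hypothetical: main_window holds the id returned by glutCreateWindow for the
// window that owns the texture. Make that window's context current first.
glutSetWindow(main_window);
loadCVTexture(g_texture, new_image); // the question's upload function
glutPostRedisplay();                 // ask that window to redraw with the new data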
Are you trying to show a sequence of new images in place of the existing one?
In that case you just need to update the existing texture's image data, not create a new texture binding.

SDL_surface to OpenGL texture

Hey, I have this function to load an SDL_Surface and save it as an OpenGL texture:
typedef GLuint texture;

texture load_texture(std::string fname){
    SDL_Surface *tex_surf = IMG_Load(fname.c_str());
    if(!tex_surf){
        return 0;
    }
    texture ret;
    glGenTextures(1, &ret);
    glBindTexture(GL_TEXTURE_2D, ret);
    glTexImage2D(GL_TEXTURE_2D, 0, 3, tex_surf->w, tex_surf->h, 0, GL_RGB, GL_UNSIGNED_BYTE, tex_surf->pixels);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    SDL_FreeSurface(tex_surf);
    return ret;
}
The problem is that it isn't working. When I call the function from the main function, it just doesn't load any image (when displayed, it only shows the current drawing color), and when I call it from any function outside main, the program crashes.
It's this line that makes the program crash:
glTexImage2D(GL_TEXTURE_2D, 0, 3, tex_surf->w, tex_surf->h, 0, GL_RGB, GL_UNSIGNED_BYTE, tex_surf->pixels);
Can anybody see a mistake in this?
My bet is you need to convert the SDL_Surface before trying to cram it into an OpenGL texture. Here's something that should give you the general idea:
SDL_Surface* originalSurface; // Load like any other SDL_Surface

int w = pow(2, ceil(log(originalSurface->w) / log(2))); // Round up to the nearest power of two

SDL_Surface* newSurface =
    SDL_CreateRGBSurface(0, w, w, 24, 0xff000000, 0x00ff0000, 0x0000ff00, 0);
SDL_BlitSurface(originalSurface, 0, newSurface, 0); // Blit onto a purely RGB surface

texture ret;
glGenTextures(1, &ret);
glBindTexture(GL_TEXTURE_2D, ret);
glTexImage2D(GL_TEXTURE_2D, 0, 3, w, w, 0, GL_RGB,
             GL_UNSIGNED_BYTE, newSurface->pixels);
I found the original code here. There may be some other useful posts on GameDev as well.
The problem probably lies in the 3rd argument (internalformat) of the call to glTexImage2D:
glTexImage2D(GL_TEXTURE_2D, 0, 3, tex_surf->w, tex_surf->h, 0, GL_RGB, GL_UNSIGNED_BYTE, tex_surf->pixels);
You have to use constants like GL_RGB or GL_RGBA, because the actual values of those macros are not related to the number of color components.
A list of allowed values is in the reference manual: https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/glTexImage2D.xhtml .
This seems to be a frequent mistake. Maybe some drivers are just clever enough to correct it, so the wrong line might still work for some people.
/usr/include/GL/gl.h:473:#define GL_RGB 0x1907
/usr/include/GL/gl.h:474:#define GL_RGBA 0x1908
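As an illustration, a hedged sketch of how the upload could look once the surface's actual layout is taken into account (format and the unpack parameters are derived from standard SDL_Surface members; whether the channel order really matches GL_RGB/GL_RGBA still depends on the file that was loaded):

// Pick the format from the surface instead of hard-coding it.
GLenum format = (tex_surf->format->BytesPerPixel == 4) ? GL_RGBA : GL_RGB;
// Tell OpenGL about the surface's row layout.
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glPixelStorei(GL_UNPACK_ROW_LENGTH, tex_surf->pitch / tex_surf->format->BytesPerPixel);
glTexImage2D(GL_TEXTURE_2D, 0, format, tex_surf->w, tex_surf->h, 0,
             format, GL_UNSIGNED_BYTE, tex_surf->pixels);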
I'm not sure if you're doing this somewhere outside your code snippet, but have you called
glEnable(GL_TEXTURE_2D);
at some point?
Some older hardware (and, surprisingly, emscripten's OpenGL ES 2.0 emulation, running on the new machine I bought this year) doesn't seem to support textures whose dimensions aren't powers of two. That turned out to be the problem I was stuck on for a while (I was getting a black rectangle rather than the sprite I wanted). So it's possible the poster's problem would go away after resizing the image to power-of-two dimensions.
See: https://www.khronos.org/opengl/wiki/NPOT_Texture