Hey, I have this piece of code to load an SDL_Surface and save it as an OpenGL texture:
typedef GLuint texture;

texture load_texture(std::string fname){
    SDL_Surface *tex_surf = IMG_Load(fname.c_str());
    if(!tex_surf){
        return 0;
    }
    texture ret;
    glGenTextures(1, &ret);
    glBindTexture(GL_TEXTURE_2D, ret);
    glTexImage2D(GL_TEXTURE_2D, 0, 3, tex_surf->w, tex_surf->h, 0, GL_RGB, GL_UNSIGNED_BYTE, tex_surf->pixels);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    SDL_FreeSurface(tex_surf);
    return ret;
}
The problem is that it isn't working. When I call the function from the main function, it just doesn't load any image (when displayed, the quad just shows the current drawing color), and when calling it from any function outside the main function, the program crashes.
It's this line that makes the program crash:
glTexImage2D(GL_TEXTURE_2D, 0, 3, tex_surf->w, tex_surf->h, 0, GL_RGB, GL_UNSIGNED_BYTE, tex_surf->pixels);
Can anybody see a mistake in this?
My bet is you need to convert the SDL_Surface before trying to cram it into an OpenGL texture. Here's something that should give you the general idea:
SDL_Surface* originalSurface; // Load like any other SDL_Surface
int w = (int)pow(2, ceil( log(originalSurface->w)/log(2) ) ); // Round up to the nearest power of two
SDL_Surface* newSurface =
    SDL_CreateRGBSurface(0, w, w, 24, 0x0000ff, 0x00ff00, 0xff0000, 0); // RGB masks, assuming a little-endian machine
SDL_BlitSurface(originalSurface, 0, newSurface, 0); // Blit onto a purely RGB Surface
texture ret;
glGenTextures( 1, &ret );
glBindTexture( GL_TEXTURE_2D, ret );
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGB, w, w, 0, GL_RGB,
    GL_UNSIGNED_BYTE, newSurface->pixels );
I found the original code here. There may be some other useful posts on GameDev as well.
The problem probably lies in the 3rd argument (internalformat) of the call to glTexImage2D.
glTexImage2D(GL_TEXTURE_2D, 0, 3, tex_surf->w, tex_surf->h, 0, GL_RGB, GL_UNSIGNED_BYTE, tex_surf->pixels);
You have to use constants like GL_RGB or GL_RGBA, because the actual values of these macros are not related to the number of color components.
A list of allowed values is in the reference manual: https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/glTexImage2D.xhtml .
This seems to be a frequent mistake. Maybe some drivers are just clever and correct this, so the wrong line might still work for some people.
/usr/include/GL/gl.h:473:#define GL_RGB 0x1907
/usr/include/GL/gl.h:474:#define GL_RGBA 0x1908
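With that in mind, the call from the question would simply become (everything else unchanged):

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, tex_surf->w, tex_surf->h, 0, GL_RGB, GL_UNSIGNED_BYTE, tex_surf->pixels);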
I'm not sure if you're doing this somewhere outside your code snippet, but have you called
glEnable(GL_TEXTURE_2D);
at some point?
Some older hardware (and, surprisingly, Emscripten's OpenGL ES 2.0 emulation, running on the new machine I bought this year) doesn't support textures whose dimensions aren't powers of two. That turned out to be the problem I was stuck on for a while (I was getting a black rectangle rather than the sprite I wanted), so it's possible the poster's problem would go away after resizing the image to have power-of-two dimensions.
See: https://www.khronos.org/opengl/wiki/NPOT_Texture
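If you do need to support such hardware, a small helper like the following can round a dimension up to the next power of two (just a sketch; you would then pad or rescale the image to that size before uploading it):

int next_power_of_two(int n)
{
    int p = 1;
    while (p < n)
        p *= 2; // keep doubling until we reach or exceed n
    return p;
}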
I'm trying to copy one GL_TEXTURE_2D into a chosen slice of a GL_TEXTURE_2D_ARRAY Texture.
I try to bind the usual GL_TEXTURE_2D to one framebuffer and only one slice of the GL_TEXTURE_2D_ARRAY to another framebuffer (both have the same size: width, height, GL_RGB, GL_UNSIGNED_BYTE).
Afterwards I thought glBlitFramebuffer would copy that texture into this one slice... but I think I misunderstand the glFramebufferTexture3D command.
BTW: the GL_TEXTURE_2D is loaded correctly, and I have also printed it out to check (that works).
Here's my code:
//Create 2 FBOs for copying textures
glGenFramebuffers(1, &nFrameBufferRead); //FBO for texture2D
glGenFramebuffers(1, &nFrameBufferWrite); //FBO for one slice of the texture2d_array
CBasics::GetOpenGLError();
//generate the GL_TEXTURE_2D_ARRAY with the given values (glGenTextures has already been called for this texture)
glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGB, nWidth, nHeight, countSlices, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
CBasics::GetOpenGLError();
//Bind the Texture2D to the readFramebuffer
glBindFramebuffer(GL_READ_FRAMEBUFFER, nFrameBufferRead);
glFramebufferTexture(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, texture2D_ID, 0);
CBasics::GetOpenGLError();
//try to bind the Texture2D_Array to the drawFramebuffer
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, nFrameBufferWrite);
CBasics::GetOpenGLError(); //till here everything works (no glerror)
glFramebufferTexture3D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D_ARRAY, texture2D_Array_ID, 0, slicenumber); // here the error appears
CBasics::GetOpenGLError();
//because of the error one step earlier here will be the next error...
glBlitFramebuffer(0, 0, nWidth, nHeight, 0, 0, nWidth, nHeight, GL_COLOR_BUFFER_BIT, GL_NEAREST);
CBasics::GetOpenGLError();
At glFramebufferTexture3D the error GL_INVALID_VALUE appears.
I think it is because of this clause from the reference page:
GL_INVALID_VALUE is generated if texture is not zero or the name of an
existing texture object.
1st: Is this a correct way to copy textures into array slices? Or is there a better way to do that?
2nd: Is it possible to bind only one slice of a GL_TEXTURE_2D_ARRAY?
3rd: Do I need the glFramebufferTexture3D command or the glFramebufferTexture2D command for GL_TEXTURE_2D_ARRAYs?
Assuming that "Here's my code" actually does contain all of your code, it doesn't work because you didn't bind the 2D array texture before calling glTexImage3D to allocate storage for it.
However, you don't have to render or blit to copy texture data. You can copy texture data by... copying texture data. The glCopyImageSubData function can copy layers between textures with different array layer counts. In your case:
glCopyImageSubData(
texture2D_ID, GL_TEXTURE_2D, 0, 0, 0, 0,
texture2D_Array_ID, GL_TEXTURE_2D_ARRAY, 0, 0, 0, slicenumber,
nWidth, nHeight, 1);
This requires OpenGL 4.3 or better, or one of the ARB/NV_copy_image extensions. The NVIDIA extension is actually quite widely implemented.
But you still need to use glTexImage3D correctly.
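For completeness, a minimal sketch of that allocation, using the names from the question (the important part is binding the array texture before calling glTexImage3D):

glBindTexture(GL_TEXTURE_2D_ARRAY, texture2D_Array_ID); // bind first, so glTexImage3D has a texture to allocate storage for
glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGB, nWidth, nHeight, countSlices, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
glBindTexture(GL_TEXTURE_2D_ARRAY, 0);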
I am using stb_image to load a 32-bit PNG file (RGBA) and I am creating an OpenGL texture with it.
It works fine for 24-bit PNG files (with no alpha channel), but when I use a 32-bit PNG file, something goes wrong.
This is what the texture should look like:
And this is what it looks like when rendered with OpenGL (the black parts are meant to be transparent, and are when I enable blending):
This is how I load the texture:
int w;
int h;
int comp;
unsigned char* image = stbi_load(filename.c_str(), &w, &h, &comp, STBI_rgb);
if(image == nullptr)
throw(std::string("Failed to load texture"));
glGenTextures(1, &m_texture);
glBindTexture(GL_TEXTURE_2D, m_texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
if(comp == 3)
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, w, h, 0, GL_RGB, GL_UNSIGNED_BYTE, image);
else if(comp == 4)
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, image);
glBindTexture(GL_TEXTURE_2D, 0);
stbi_image_free(image);
And these are the window parameters (using SDL)
SDL_GL_SetAttribute(SDL_GL_RED_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_GREEN_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_BLUE_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_ALPHA_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_BUFFER_SIZE, 32);
SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, 16);
What is happening?
Your actual bug is that you use comp to determine the format (GL_RGB/GL_RGBA) parameter of glTexImage2D. This happens because when you load an image using stbi_load, the value returned in comp always matches the source image, not the image data returned.
More specifically, your bug is that you use STBI_rgb, causing stbi_load to return 3-byte pixels, but then you upload it with glTexImage2D as 4-byte pixels with GL_RGBA, because comp is 4 when you load a 32-bit image.
You must set the format in your call to glTexImage2D to GL_RGB if you use STBI_rgb and to GL_RGBA if you use STBI_rgb_alpha.
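For instance, if you want to keep the alpha channel, the loading part of the snippet above could look like this (a sketch, everything else unchanged):

unsigned char* image = stbi_load(filename.c_str(), &w, &h, &comp, STBI_rgb_alpha);
if(image == nullptr)
    throw(std::string("Failed to load texture"));
// stbi_load returns 4-byte RGBA pixels when STBI_rgb_alpha is requested,
// regardless of what the source file contains
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, image);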
Bonus to other readers
Are you having a similar problem and the above still doesn't help? Then your image data might not have rows on the alignment OpenGL expects. Try glPixelStorei(GL_UNPACK_ALIGNMENT, 1); before you call glTexImage2D. See https://www.khronos.org/opengl/wiki/Pixel_Transfer#Pixel_layout for more information.
Changing the STBI_rgb to STBI_rgb_alpha in the stbi_load function call fixed it.
Probably best not to specify RGB when it's RGBA :D
This question already has answers here: How to use GLUT/OpenGL to render to a file? (6 answers). Closed 9 years ago.
I want to try to make a simple program that takes a 3D model and renders it into an image. Is there any way I can use OpenGL to render an image and put it into a variable that holds an image, rather than displaying it? I don't want to see what I'm rendering, I just want to save it. Is there any way to do this with OpenGL?
I'm assuming that you know how to draw stuff to the screen with OpenGL, and you wrote a function such as drawStuff to do so.
First of all you have to decide how big you want your final render to be; I'm choosing a square here, with size 512x512. You can also use sizes that are not powers of two, but to keep things simple let's stick to this format for now. Sometimes OpenGL gets picky about this issue.
const int width = 512;
const int height = 512;
Then you need three objects in order to create an offscreen drawing area; this is called a frame buffer object as user1118321 said.
GLuint color;
GLuint depth;
GLuint fbo;
The FBO stores a color buffer and a depth buffer; your screen rendering area also has these two buffers, but you don't want to use them because you don't want to draw to the screen. To create the FBO, you need to do something like the following, only once, for instance at startup:
glGenTextures(1, &color);
glBindTexture(GL_TEXTURE_2D, color);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL);
glBindTexture(GL_TEXTURE_2D, 0);
glGenRenderbuffers(1, &depth);
glBindRenderbuffer(GL_RENDERBUFFER, depth);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, width, height);
glBindRenderbuffer(GL_RENDERBUFFER, 0);
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, color, 0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depth);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
First you create a memory area to store pixel color, then one to store pixel depth (which in computer graphics is used to remove hidden surfaces), and finally you connect them to the FBO, which basically holds a reference to both. Consider as an example the first block, with 6 calls:
glGenTextures creates a name for a texture; a name in OpenGL is simply an integer, because a string would be too inefficient.
glBindTexture binds the texture to a target, namely GL_TEXTURE_2D; subsequent calls that specify that same target will operate on that texture.
The 3rd, 4th and 5th calls are specific to the target being manipulated, and you should refer to the OpenGL documentation for further information.
The last call to glBindTexture unbinds the texture from the target. Since at some point you will hand control to your drawStuff function, which in turn will make its whole lot of OpenGL calls, you need to clear your workspace now, to avoid interference with the object that you have created.
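It is also worth verifying that the FBO is actually complete before using it; a minimal check, placed while the FBO is still bound (i.e. right before the last glBindFramebuffer(GL_FRAMEBUFFER, 0) above):

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    fprintf(stderr, "Framebuffer is not complete\n"); // handle the error however your program prefers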
To switch from screen rendering to offscreen rendering you could use a boolean variable somewhere in your program:
if (offscreen)
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
else
glBindFramebuffer(GL_FRAMEBUFFER, 0);
drawStuff();
if (offscreen)
saveToFile();
So, if offscreen is true, you actually want drawStuff to interfere with fbo, because you want it to render the scene into it.
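One detail the snippet above glosses over: if the FBO size differs from your window size, you will probably also want to match the viewport to the render target before drawing (windowWidth and windowHeight here are placeholders for whatever your program actually uses):

if (offscreen)
    glViewport(0, 0, width, height); // match the FBO size
else
    glViewport(0, 0, windowWidth, windowHeight); // match the window size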
Function saveToFile is responsible for reading back the result of the rendering and writing it to a file. This is heavily dependent on the OS and language that you are using. As an example, on Mac OS X with Objective-C it would be something like the following:
void saveToFile()
{
    void *imageData = malloc(width * height * 4);
    glBindTexture(GL_TEXTURE_2D, color);
    glGetTexImage(GL_TEXTURE_2D, 0, GL_BGRA, GL_UNSIGNED_BYTE, imageData);
    CGContextRef contextRef = CGBitmapContextCreate(imageData, width, height, 8, 4 * width, CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB), kCGImageAlphaPremultipliedLast);
    CGImageRef imageRef = CGBitmapContextCreateImage(contextRef);
    CFURLRef urlRef = (CFURLRef)[NSURL fileURLWithPath:@"/Users/JohnDoe/Documents/Output.png"];
    CGImageDestinationRef destRef = CGImageDestinationCreateWithURL(urlRef, kUTTypePNG, 1, NULL);
    CGImageDestinationAddImage(destRef, imageRef, nil);
    CGImageDestinationFinalize(destRef); // actually writes the PNG to disk
    CFRelease(destRef);
    CGImageRelease(imageRef);
    CGContextRelease(contextRef);
    glBindTexture(GL_TEXTURE_2D, 0);
    free(imageData);
}
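If you are not on Mac OS X, a portable alternative is to read the pixels back with glReadPixels while the FBO is still bound and write them out yourself. A minimal sketch that dumps a binary PPM file (needs <vector> and <fstream>, uses the width/height constants from above, and ignores the alpha channel):

void saveToFilePPM(const char *path)
{
    std::vector<unsigned char> pixels(width * height * 3);
    glPixelStorei(GL_PACK_ALIGNMENT, 1); // rows are tightly packed in our buffer
    glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, pixels.data());

    std::ofstream out(path, std::ios::binary);
    out << "P6\n" << width << " " << height << "\n255\n";
    // OpenGL returns rows bottom-to-top, PPM expects top-to-bottom, so write them flipped
    for (int y = height - 1; y >= 0; --y)
        out.write(reinterpret_cast<const char *>(&pixels[y * width * 3]), width * 3);
}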
Yes, you can do that. What you want to do is create a frame buffer object (FBO) backed by a texture. Once you create one and draw to it, you can download the texture to main memory and save it just like you would any bitmap.
I am working with OpenGL and GLSL in Visual Studio C++ 2010. I am writing shaders and I need to load a texture. I am reading code from a book, and in there they load textures with Qt, but I need to do it with DevIL. Can someone please write the equivalent code for texture loading with DevIL? I am new to DevIL and I don't know how to translate this.
// Load texture file
const char * texName = "texture/brick1.jpg";
QImage timg = QGLWidget::convertToGLFormat(QImage(texName,"JPG"));
// Copy file to OpenGL
glActiveTexture(GL_TEXTURE0);
GLuint tid;
glGenTextures(1, &tid);
glBindTexture(GL_TEXTURE_2D, tid);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, timg.width(), timg.height(), 0,
GL_RGBA, GL_UNSIGNED_BYTE, timg.bits());
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
Given that DevIL is no longer maintained, and that the ILUT part assumes power-of-2 texture dimensions are required and rescales images in its convenience functions, it actually makes sense to take the detour of doing it manually.
Loading an image from a file with DevIL works quite similarly to loading a texture from an image in OpenGL. First you create a DevIL image name and bind it:
GLuint loadImageToTexture(char const * const thefilename)
{
ILuint imageID;
ilGenImages(1, &imageID);
ilBindImage(imageID);
now you can load an image from a file
ilLoadImage(thefilename);
check that the image actually provides data; if not, clean up
ILubyte *data = ilGetData();
if(!data) {
ilBindImage(0);
ilDeleteImages(1, &imageID);
return 0;
}
retrieve the important parameters
int const width = ilGetInteger(IL_IMAGE_WIDTH);
int const height = ilGetInteger(IL_IMAGE_HEIGHT);
int const type = ilGetInteger(IL_IMAGE_TYPE); // matches OpenGL
int const format = ilGetInteger(IL_IMAGE_FORMAT); // matches OpenGL
Generate a texture name
GLuint textureID;
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
next we set the pixel store parameters (your original code missed that crucial step)
glPixelStorei(GL_UNPACK_SWAP_BYTES, GL_FALSE);
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0); // rows are tightly packed
glPixelStorei(GL_UNPACK_SKIP_PIXELS, 0);
glPixelStorei(GL_UNPACK_SKIP_ROWS, 0);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // pixels are tightly packed
finally we can upload the texture image and return the ID
glTexImage2D(GL_TEXTURE_2D, 0, format, width, height, 0, format, type, data);
next, for convenience we set the minification filter to GL_LINEAR, so that we don't have to supply mipmap levels.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
finally clean up the DevIL image (its pixel data has already been copied into the texture) and return the texture ID
ilBindImage(0);
ilDeleteImages(1, &imageID);
return textureID;
}
If you want to use mipmapping you can call OpenGL's glGenerateMipmap later on; use glTexParameter with GL_TEXTURE_BASE_LEVEL and GL_TEXTURE_MAX_LEVEL to control the span of the image pyramid that gets generated (and switch the minification filter to a mipmapping one, e.g. GL_LINEAR_MIPMAP_LINEAR).
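A minimal usage sketch (assuming the DevIL headers are included and remembering that ilInit has to be called once before any other IL function):

ilInit(); // once, at program startup
GLuint brickTexture = loadImageToTexture("texture/brick1.jpg");
if (brickTexture == 0) {
    // the image could not be loaded; handle the error
}
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, brickTexture);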
I have some (OpenCV) code that generates images. I'm displaying these using OpenGL. When new images are created I run the following function (each time) with the same texture name and a new image:
void loadCVTexture(GLuint& texture, const cv::Mat_<Vec3f>& image){
if(texture != 0){
glBindTexture(GL_TEXTURE_2D, texture);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, image.cols, image.rows, GL_BGR, GL_FLOAT, image.data);
} else {
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexImage2D(GL_TEXTURE_2D, 0, 3, image.cols, image.rows, 0, GL_BGR, GL_FLOAT, image.data);
}
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
}
I initialize the first image before glutMainLoop() and it displays correctly. It is given the id 1. When I update the image again the picture does not change. (I have confirmed that the display function is being called, and that the image is different.)
Edit: Another clue, I have sub-windows. If I comment-out my other window the code works as expected.
Since it works correctly without "sub-windows", my guess would be that you have multiple OpenGL contexts in your application, and that the updating of the texture happens with the wrong context active.
Try putting the texture uploading into your display function and see if that makes a difference.
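Since the question uses GLUT with sub-windows, each window has its own context, so "the wrong context" most likely means the wrong GLUT window is current when the upload happens. A minimal sketch of making the right one current (mainWindow is a placeholder for the id returned by glutCreateWindow for the window that displays the texture):

int previousWindow = glutGetWindow(); // remember which window (and context) is current
glutSetWindow(mainWindow);            // switch to the window that owns the texture
loadCVTexture(texture, image);        // the upload now happens in the right context
glutSetWindow(previousWindow);        // restore the previously current window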
Are you trying to show a sequence of new images rather than the existing ones?
In that case you just need to change the image data, not create a new texture binding.