I have never come across this error before, and I use glTexImage2D elsewhere in the project without error. Below is a screenshot of the error Visual Studio shows, and a view of the disassembly.
Given that the line has ptr in it, I assume there's a pointer error, but I don't know what I'm doing wrong.
Below is the function I use to convert from an SDL_Surface to a texture.
void surfaceToTexture(SDL_Surface *&surface, GLuint &texture) {
    glEnable(GL_TEXTURE_2D);
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, surface->w, surface->h, 0, GL_BGRA, GL_UNSIGNED_BYTE, surface->pixels);
    glDisable(GL_TEXTURE_2D);
}
This function succeeds elsewhere in the program, for example when loading text:
SDL_Surface *surface;
surface = TTF_RenderText_Blended(tempFont, message.c_str(), color);
if (surface == NULL)
    printf("Unable to generate text surface using font: %s! SDL_ttf Error: %s\n", font.c_str(), TTF_GetError());
else {
    SDL_LockSurface(surface);
    width = surface->w;
    height = surface->h;
    if (style != TTF_STYLE_NORMAL)
        TTF_SetFontStyle(tempFont, TTF_STYLE_NORMAL);
    surfaceToTexture(surface, texture);
    SDL_UnlockSurface(surface);
}
SDL_FreeSurface(surface);
But not when loading an image:
SDL_Surface* surface = IMG_Load(path.c_str());
if (surface == NULL)
    printf("Unable to load image %s! SDL_image Error: %s\n", path.c_str(), IMG_GetError());
else {
    SDL_LockSurface(surface);
    width = (w==0) ? surface->w : w;
    height = (h==0) ? surface->h/4 : h;
    surfaceToTexture(surface, texture);
    SDL_UnlockSurface(surface);
}
SDL_FreeSurface(surface);
Both examples are extracted from a class where texture is defined.
The path to the image is correct.
I know it's glTexImage2D that causes the problem, since I added a breakpoint at the start of surfaceToTexture and stepped through the function.
Even when it doesn't work, texture and surface do have seemingly correct values/properties.
Any ideas?
The error you're getting means that the process crashed inside a section of code for which the debugger could not find any debugging information (the association between assembly and source code). This is typically the case for anything that is not part of your program's debug build, such as the graphics driver.
Now in your case what happens is that you called glTexImage2D with parameters that "lie" to it about the memory layout of the buffer you pointed it to with the data parameter. Pointers don't carry any meaningful meta information (as far as the assembly level is concerned, they're just another integer with special meaning), so you must make sure that all the parameters you pass to a function along with a pointer match up. If they don't, somewhere deep in the bowels of that function, or whatever it calls (or that calls, and so on), the memory may be accessed in a way that violates constraints set up by the operating system, triggering that kind of crash.
Solution to your problem: Fix your code, i.e. make sure that what you pass to OpenGL is consistent. It crashes within the OpenGL driver, but only because you lied to it.
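As an illustration of what "consistent" means here, this is a minimal sketch (my own, not the asker's fix) that derives the pixel-transfer format from the surface instead of hard-coding GL_BGRA; an IMG_Load'ed surface is often RGB or RGBA rather than BGRA, which is exactly the kind of mismatch described above:
void surfaceToTexture(SDL_Surface *&surface, GLuint &texture) {
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);

    // Derive the transfer format from the surface instead of assuming BGRA.
    GLenum format;
    if (surface->format->BytesPerPixel == 4)
        format = (surface->format->Rmask == 0x000000ff) ? GL_RGBA : GL_BGRA;
    else
        format = (surface->format->Rmask == 0x000000ff) ? GL_RGB : GL_BGR;

    // Rows of 3-byte pixels are usually not 4-byte aligned, so relax the default unpack alignment.
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, surface->w, surface->h, 0,
                 format, GL_UNSIGNED_BYTE, surface->pixels);
}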
Related
I wanted to try making a game with OpenGL and GLUT, but as it turns out, GLUT is not well suited to making games. So I switched to SDL 1.2 (this is for a sort of competition, so I can't use SDL 2). When I saw I could use OpenGL within SDL, I decided to do that, since I had already written the majority of my code with OpenGL. Now I'm having issues trying to load an image into an SDL_Surface and then convert it to an OpenGL texture with OpenGL blending enabled. Here is the code I'm using (loadImage loads an SDL_Surface and loadTexture turns it into an OpenGL texture):
SDL_Surface * Graphics::loadImage(const char * filename) {
    SDL_Surface *loaded = nullptr;
    SDL_Surface *optimized = nullptr;
    loaded = IMG_Load(filename);
    if (loaded) {
        optimized = SDL_DisplayFormat(loaded);
        SDL_FreeSurface(loaded);
    }
    return optimized;
}
GLuint Graphics::loadTexture(const char * filename, GLuint oldTexId) {
    //return SOIL_load_OGL_texture(filename, SOIL_LOAD_AUTO, oldTexId, SOIL_FLAG_NTSC_SAFE_RGB | SOIL_FLAG_MULTIPLY_ALPHA);
    GLuint texId = 0;
    SDL_Surface *s = loadImage(filename);
    if (!s) return 0;
    if (oldTexId) glDeleteTextures(1, &oldTexId);
    glGenTextures(1, &texId);
    glBindTexture(GL_TEXTURE_2D, texId);
    int format;
    if (s->format->BytesPerPixel == 4) {
        if (s->format->Rmask == 0x000000ff)
            format = GL_RGBA;
        else
            format = GL_BGRA;
    } else if (s->format->BytesPerPixel == 3) {
        if (s->format->Rmask == 0x000000ff)
            format = GL_RGB;
        else
            format = GL_BGR;
    }
    glTexImage2D(GL_TEXTURE_2D, 0, s->format->BytesPerPixel, s->w, s->h, 0, format, GL_UNSIGNED_BYTE, s->pixels);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    SDL_FreeSurface(s);
    return texId;
}
I've been searching online for a solution to this issue quite a bit, and none of the solutions I found worked. This code actually works when I don't glEnable(GL_BLEND), but when I do enable it, nothing shows up on screen anymore. I am fairly new to OpenGL, and I'm not sure I'm using glTexImage2D correctly.
The way I was loading images before I converted to SDL was using the SOIL library, and when I replace the loadTexture function's body with that commented-out first line, it actually works fine, but I'd rather have fewer external libraries and do everything graphics-side with SDL and OpenGL.
The third argument of glTexImage2D is wrong:
glTexImage2D(GL_TEXTURE_2D, 0, s->format->BytesPerPixel, s->w, s->h, 0, format, GL_UNSIGNED_BYTE, s->pixels);
The third argument is internalFormat and must be one of the base internal formats:
GL_DEPTH_COMPONENT
GL_DEPTH_STENCIL
GL_RED
GL_RG
GL_RGB
GL_RGBA
Or one of the sized internal formats, which also specify the number of bits per channel.
So, in other words, for an 8-bit-per-channel texture your third argument should be one of:
GL_RGB
GL_RGB8
GL_RGBA
GL_RGBA8
Whereas the 7th argument, format, can be either RGB- or BGR-ordered (including the alpha versions), the third argument, internalFormat, only accepts the RGB-ordered forms, not the other way around.
So the place where you check the red mask and choose the format is still fine for the 7th argument, but the third argument (internalFormat) should be either GL_RGB or GL_RGBA, or optionally the sized versions GL_RGB8 or GL_RGBA8.
glTexImage2D(GL_TEXTURE_2D, 0, /*GL_RGB or GL_RGBA*/, s->w, s->h, 0, format, GL_UNSIGNED_BYTE, s->pixels);
Docs
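Putting that together, a sketch of the corrected call (assuming the same BytesPerPixel check from the question is used to pick both values):
GLint internalFormat = (s->format->BytesPerPixel == 4) ? GL_RGBA8 : GL_RGB8;
glTexImage2D(GL_TEXTURE_2D, 0, internalFormat, s->w, s->h, 0,
             format, GL_UNSIGNED_BYTE, s->pixels);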
Related
I've been trying to submit a texture to the HTC Vive using the compositor. I keep getting error 105, which is "TextureUsesUnsupportedFormat". The texture is a BMP image with 24-bit depth. I've looked at the hellovr sample and am still a bit confused. I also saw that the Vive requires an RGBA8 format for the texture, but I'm not sure how to actually create one. I am trying to get the texture to fill up each eye's viewport.
What am I doing wrong?
Here's my code to load the texture and return the texture id:
Loading_Surf = SDL_LoadBMP("Test.bmp");
Background_Tx = SDL_CreateTextureFromSurface(renderer, Loading_Surf);
if (!Loading_Surf) {
    return 0;
}

glGenTextures(1, &textureid);
glBindTexture(GL_TEXTURE_2D, textureid);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, Loading_Surf->w, Loading_Surf->h, 0, mode, GL_UNSIGNED_BYTE, Loading_Surf->pixels);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
SDL_FreeSurface(Loading_Surf);

SDL_RenderCopy(renderer, Background_Tx, NULL, NULL);
SDL_RenderPresent(renderer);
return textureid;
Code for submitting to the Vive:
vr::Texture_t l_Eye = { (void*)frameID, vr::API_OpenGL, vr::ColorSpace_Gamma };
std::cout << vr::VRCompositor()->WaitGetPoses(ViveTracked, vr::k_unMaxTrackedDeviceCount, NULL, 0);
error = vr::VRCompositor()->Submit(vr::Eye_Left, &l_Eye);
You might need to create a surface with the correct RGBA8 format first, as mentioned in this answer: https://gamedev.stackexchange.com/a/109067/6920
Create a temporary surface (SDL_CreateRGBSurface) with the exact image format you want, then copy Loading_Surf onto that temporary surface (SDL_BlitSurface).
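A minimal sketch of that approach (my own, assuming a little-endian machine, so the masks below lay the bytes out in memory as R,G,B,A to match GL_RGBA / GL_UNSIGNED_BYTE):
// Convert the 24-bit BMP surface into a 32-bit RGBA surface before uploading.
SDL_Surface *rgba = SDL_CreateRGBSurface(0, Loading_Surf->w, Loading_Surf->h, 32,
                                         0x000000ff, 0x0000ff00, 0x00ff0000, 0xff000000);
SDL_BlitSurface(Loading_Surf, NULL, rgba, NULL); // converts the pixel format while copying

glGenTextures(1, &textureid);
glBindTexture(GL_TEXTURE_2D, textureid);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, rgba->w, rgba->h, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, rgba->pixels);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

SDL_FreeSurface(rgba);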
RGBA8 requires 32 bits per pixel, but your bitmap only has 24, so the alpha channel is missing.
Try copying it into a larger container that has 4 x 8 = 32 bits per pixel (in C++ you can do this by hand with a byte buffer, or use an image library).
Alternatively, figure out whether your device accepts an RGB8 texture, if such a format exists (experiment with OpenGL).
This page helps: https://www.khronos.org/opengl/wiki/Texture
Related
I am attempting to load a texture into OpenGL using DevIL, and I get a segmentation fault when this constructor is called:
Sprite::Sprite(const char *path) {
    ILuint tex = 0;
    ilutEnable(ILUT_OPENGL_CONV);
    ilGenImages(1, &tex);
    ilBindImage(tex);
    ilLoadImage(path);
    ilConvertImage(IL_RGBA, IL_UNSIGNED_BYTE);
    width = (GLuint*)ilGetInteger(IL_IMAGE_WIDTH);
    height = (GLuint*)ilGetInteger(IL_IMAGE_HEIGHT);
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexImage2D(GL_TEXTURE_2D,
                 0,
                 GL_RGBA,
                 width,
                 height,
                 0,
                 GL_RGBA,
                 GL_UNSIGNED_BYTE,
                 &tex);
    ilBindImage(0);
    ilDeleteImages(1, &tex);
    ilutDisable(ILUT_OPENGL_CONV);
}
and texture is a protected member
GLuint texture;
As soon as this constructor is called I receive a segfault and the program exits. I am using freeglut, GL, IL, ILU, and ILUT. Any help would be appreciated.
Edit:
I also tried a different approach and used
texture = ilutGLLoadImage(path)
to load the image directly into the GL texture, because I traced the segfault to
ilLoadImage(path)
but the compiler tells me that ilutGLLoadImage() is not declared in this scope, even though I have IL/il.h, IL/ilu.h and IL/ilut.h all included and initialized.
I never used DevIL, but glTexImage2D wants a pointer to the pixel data as its last argument, and you pass a pointer to the local variable tex there instead, which lives on the stack and does not contain the expected pixel data. So glTexImage2D reads through your stack and eventually attempts to access memory it is not supposed to access, and you get a segmentation fault.
I guess you'd want to use ilGetData() instead.
Make sure you have DevIL initialized with ilInit() and change &tex to ilGetData(), and then it should work.
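For completeness, a hedged sketch of the corrected constructor (assuming ilInit() has already been called and that width and height are plain integer members); the key change is passing ilGetData() instead of &tex:
Sprite::Sprite(const char *path) {
    ILuint img = 0;
    ilGenImages(1, &img);
    ilBindImage(img);
    if (!ilLoadImage(path)) {        // check the load before using the data
        ilDeleteImages(1, &img);
        return;
    }
    ilConvertImage(IL_RGBA, IL_UNSIGNED_BYTE);

    width  = ilGetInteger(IL_IMAGE_WIDTH);
    height = ilGetInteger(IL_IMAGE_HEIGHT);

    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, ilGetData());   // pixel data, not &img

    ilBindImage(0);
    ilDeleteImages(1, &img);
}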
Related
This question already has answers here: How to use GLUT/OpenGL to render to a file?
I want to try to make a simple program that takes a 3D model and renders it into an image. Is there any way I can use OpenGL to render an image and put it into a variable that holds an image, rather than displaying it? I don't want to see what I'm rendering; I just want to save it. Is there any way to do this with OpenGL?
I'm assuming that you know how to draw stuff to the screen with OpenGL, and you wrote a function such as drawStuff to do so.
First of all you have to decide how big you want your final render to be; I'm choosing a square here, with size 512x512. You can also use sizes that are not powers of two, but to keep things simple let's stick to this format for now. Sometimes OpenGL gets picky about this issue.
const int width = 512;
const int height = 512;
Then you need three objects in order to create an offscreen drawing area; this is called a framebuffer object (FBO), as user1118321 said.
GLuint color;
GLuint depth;
GLuint fbo;
The FBO stores a color buffer and a depth buffer; your screen rendering area also has these two buffers, but you don't want to use them because you don't want to draw to the screen. To create the FBO, you need to do something like the following only once, for instance at startup:
glGenTextures(1, &color);
glBindTexture(GL_TEXTURE_2D, color);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL);
glBindTexture(GL_TEXTURE_2D, 0);
glGenRenderbuffers(1, &depth);
glBindRenderbuffer(GL_RENDERBUFFER, depth);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, width, height);
glBindRenderbuffer(GL_RENDERBUFFER, 0);
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, color, 0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depth);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
First you create a memory area to store pixel color, then one to store pixel depth (which in computer graphics is used to remove hidden surfaces), and finally you connect them to the FBO, which basically holds a reference to both. Consider as an example the first block, with 6 calls:
glGenTextures creates a name for a texture; a name in OpenGL is simply an integer, because a string would be too inefficient.
glBindTexture binds the texture to a target, namely GL_TEXTURE_2D; subsequent calls that specify that same target will operate on that texture.
The 3rd, 4th and 5th calls are specific to the target being manipulated, and you should refer to the OpenGL documentation for further information.
The last call to glBindTexture unbinds the texture from the target. Since at some point you will hand control to your drawStuff function, which in turn will make its own lot of OpenGL calls, you need to clear your workspace now, to avoid interference with the object that you have created.
To switch from screen rendering to offscreen rendering you could use a boolean variable somewhere in your program:
if (offscreen)
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
else
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
drawStuff();
if (offscreen)
    saveToFile();
So, if offscreen is true, you actually want drawStuff to interfere with fbo, because you want it to render the scene into it.
The saveToFile function is responsible for reading back the result of the rendering and writing it to a file. This is heavily dependent on the OS and language that you are using. As an example, on Mac OS X with C it would be something like the following:
void saveToFile()
{
    void *imageData = malloc(width * height * 4);
    glBindTexture(GL_TEXTURE_2D, color);
    glGetTexImage(GL_TEXTURE_2D, 0, GL_BGRA, GL_UNSIGNED_BYTE, imageData);
    CGContextRef contextRef = CGBitmapContextCreate(imageData, width, height, 8, 4 * width, CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB), kCGImageAlphaPremultipliedLast);
    CGImageRef imageRef = CGBitmapContextCreateImage(contextRef);
    CFURLRef urlRef = (CFURLRef)[NSURL fileURLWithPath:@"/Users/JohnDoe/Documents/Output.png"];
    CGImageDestinationRef destRef = CGImageDestinationCreateWithURL(urlRef, kUTTypePNG, 1, NULL);
    CGImageDestinationAddImage(destRef, imageRef, nil);
    CGImageDestinationFinalize(destRef);   // actually writes the file to disk
    CFRelease(destRef);
    glBindTexture(GL_TEXTURE_2D, 0);
    free(imageData);
}
Yes, you can do that. What you want to do is create a frame buffer object (FBO) backed by a texture. Once you create one and draw to it, you can download the texture to main memory and save it just like you would any bitmap.
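For the download step, a minimal sketch (assuming the fbo, width and height from the answer above, and an RGBA readback):
unsigned char *pixels = (unsigned char *)malloc(width * height * 4);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
// pixels now holds the rendered image bottom-up; flip the rows before handing it to
// most image writers, then free(pixels) when done.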
Related
Hey, I have this script to load an SDL_Surface and save it as an OpenGL texture:
typedef GLuint texture;
texture load_texture(std::string fname) {
    SDL_Surface *tex_surf = IMG_Load(fname.c_str());
    if (!tex_surf) {
        return 0;
    }
    texture ret;
    glGenTextures(1, &ret);
    glBindTexture(GL_TEXTURE_2D, ret);
    glTexImage2D(GL_TEXTURE_2D, 0, 3, tex_surf->w, tex_surf->h, 0, GL_RGB, GL_UNSIGNED_BYTE, tex_surf->pixels);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    SDL_FreeSurface(tex_surf);
    return ret;
}
The problem is that it isn't working. When I call the function from the main function, it just doesn't load any image (when drawn, the quad just shows the current drawing color), and when I call it from any function outside the main function, the program crashes.
It's this line that makes the program crash:
glTexImage2D(GL_TEXTURE_2D, 0, 3, tex_surf->w, tex_surf->h, 0, GL_RGB, GL_UNSIGNED_BYTE, tex_surf->pixels);
Can anybody see a mistake in this?
My bet is you need to convert the SDL_Surface before trying to cram it into an OpenGL texture. Here's something that should give you the general idea:
SDL_Surface* originalSurface; // Load like any other SDL_Surface
int w = pow(2, ceil( log(originalSurface->w)/log(2) ) ); // Round up to the nearest power of two
SDL_Surface* newSurface =
SDL_CreateRGBSurface(0, w, w, 24, 0xff000000, 0x00ff0000, 0x0000ff00, 0);
SDL_BlitSurface(originalSurface, 0, newSurface, 0); // Blit onto a purely RGB Surface
texture ret;
glGenTextures( 1, &ret );
glBindTexture( GL_TEXTURE_2D, ret );
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGB, w, w, 0, GL_RGB,
              GL_UNSIGNED_BYTE, newSurface->pixels );
I found the original code here. There may be some other useful posts on GameDev as well.
The problem probably lies in the 3rd argument (internalFormat) of the call to glTexImage2D.
glTexImage2D(GL_TEXTURE_2D, 0, 3, tex_surf->w, tex_surf->h, 0, GL_RGB, GL_UNSIGNED_BYTE, tex_surf->pixels);
You have to use constants like GL_RGB or GL_RGBA, because the actual values of those macros are not related to the number of color components.
A list of allowed values is in the reference manual: https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/glTexImage2D.xhtml .
This seems to be a frequent mistake. Maybe some drivers are just clever and correct this, so the wrong line might still work for some people.
/usr/include/GL/gl.h:473:#define GL_RGB 0x1907
/usr/include/GL/gl.h:474:#define GL_RGBA 0x1908
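So, concretely, the call would become something like this sketch (use GL_RGBA for both format arguments if the surface actually has an alpha channel):
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, tex_surf->w, tex_surf->h, 0,
             GL_RGB, GL_UNSIGNED_BYTE, tex_surf->pixels);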
I'm not sure if you're doing this somewhere outside your code snippet, but have you called
glEnable(GL_TEXTURE_2D);
at some point?
Some older hardware (and, surprisingly, Emscripten's OpenGL ES 2.0 emulation, running on the new machine I bought this year) doesn't seem to support textures whose dimensions aren't powers of two. That turned out to be the problem I was stuck on for a while (I was getting a black rectangle rather than the sprite I wanted). So it's possible the poster's problem would go away after resizing the image to have dimensions that are powers of two.
See: https://www.khronos.org/opengl/wiki/NPOT_Texture
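If you go the resizing route, a small sketch of rounding a dimension up to the next power of two (an integer alternative to the pow/ceil/log approach shown in the earlier answer):
// Round n up to the next power of two (n is assumed to be greater than zero).
unsigned int nextPowerOfTwo(unsigned int n) {
    unsigned int p = 1;
    while (p < n)
        p <<= 1;
    return p;
}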