I am attempting to load a texture into OpenGL using DevIL, and I am getting a segmentation fault when this constructor is called:
Sprite::Sprite(const char *path){
    ILuint tex = 0;
    ilutEnable(ILUT_OPENGL_CONV);
    ilGenImages(1, &tex);
    ilBindImage(tex);
    ilLoadImage(path);
    ilConvertImage(IL_RGBA, IL_UNSIGNED_BYTE);
    width = (GLuint*)ilGetInteger(IL_IMAGE_WIDTH);
    height = (GLuint*)ilGetInteger(IL_IMAGE_HEIGHT);
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexImage2D(GL_TEXTURE_2D,
                 0,
                 GL_RGBA,
                 width,
                 height,
                 0,
                 GL_RGBA,
                 GL_UNSIGNED_BYTE,
                 &tex);
    ilBindImage(0);
    ilDeleteImages(1, &tex);
    ilutDisable(ILUT_OPENGL_CONV);
}
and texture is a protected member
GLuint texture;
As soon as this constructor is called I receive a segfault and the program exits. I am using freeglut, gl, il, ilu, and ilut. Any help would be appreciated.
Edit:
I also tried a different approach and used
texture = ilutGLLoadImage(path)
to load the file directly into a GL texture, because I traced the segfault to
ilLoadImage(path)
but the compiler tells me that ilutGLLoadImage() is not declared in this scope, even though I have IL/il.h, IL/ilu.h and IL/ilut.h included and everything initialized.
I have never used DevIL, but glTexImage2D expects a pointer to the pixel data as its last argument, and you are passing a pointer to the local variable tex instead. That variable lives on the stack and does not contain the data glTexImage2D expects, so it reads through your stack and eventually touches memory it was not supposed to access, and you get the segmentation fault.
I guess you'd want to use ilGetData() instead.
Make sure you have DevIL initialized with ilInit() and change &tex to ilGetData(), and then it should work.
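Putting those two suggestions together, a minimal sketch of the constructor might look like the following (assuming width, height and texture are members of Sprite and that ilInit() has already been called once at startup):
Sprite::Sprite(const char *path){
    ILuint img = 0;
    ilGenImages(1, &img);
    ilBindImage(img);
    if (!ilLoadImage(path)) {
        // bail out instead of uploading garbage
        return;
    }
    ilConvertImage(IL_RGBA, IL_UNSIGNED_BYTE);
    width  = ilGetInteger(IL_IMAGE_WIDTH);   // plain integers, no pointer casts
    height = ilGetInteger(IL_IMAGE_HEIGHT);
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, ilGetData()); // pixel data, not &img
    ilBindImage(0);
    ilDeleteImages(1, &img);
}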
I am attempting to use a CUDA kernel to modify an OpenGL texture, but am having a strange issue where my calls to surf2Dwrite() seem to blend with the previous contents of the texture, as you can see in the image below. The wooden texture in the back is what's in the texture before modifying it with my CUDA kernel. The expected output would include ONLY the color gradients, not the wood texture behind it. I don't understand why this blending is happening.
Possible Problems / Misunderstandings
I'm new to both CUDA and OpenGL. Here I'll try to explain the thought process that led me to this code:
I'm using a cudaArray to access the texture (rather than e.g. an array of floats) because I read that it's better for cache locality when reading/writing a texture.
I'm using surfaces because I read somewhere that it's the only way to modify a cudaArray.
I wanted to use surface objects, which I understand to be the newer way of doing things. The old way is to use surface references.
Some possible problems with my code that I don't know how to check/test:
Am I being inconsistent with image formats? Maybe I didn't specify the correct number of bits/channel somewhere? Maybe I should use floats instead of unsigned chars?
Code Summary
You can find a full minimum working example in this GitHub Gist. It's quite long because of all the moving parts, but I'll try to summarize. I welcome suggestions on how to shorten the MWE. The overall structure is as follows:
create an OpenGL texture from a file stored locally
register the texture with CUDA using cudaGraphicsGLRegisterImage()
call cudaGraphicsSubResourceGetMappedArray() to get a cudaArray that represents the texture
create a cudaSurfaceObject_t that I can use to write to the cudaArray
pass the surface object to a kernel that writes to the texture with surf2Dwrite()
use the texture to draw a rectangle on-screen
OpenGL Texture Creation
I am new to OpenGL, so I'm using the "Textures" section of the LearnOpenGL tutorials as a starting point. Here's how I set up the texture (using the image library stb_image.h):
GLuint initTexturesGL(){
    // load texture from file
    int numChannels;
    unsigned char *data = stbi_load("img/container.jpg", &g_imageWidth, &g_imageHeight, &numChannels, 4);
    if(!data){
        std::cerr << "Error: Failed to load texture image!" << std::endl;
        exit(1);
    }

    // opengl texture
    GLuint textureId;
    glGenTextures(1, &textureId);
    glBindTexture(GL_TEXTURE_2D, textureId);

    // wrapping
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_MIRRORED_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_MIRRORED_REPEAT);

    // filtering
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    // set texture image
    glTexImage2D(
        GL_TEXTURE_2D,    // target
        0,                // mipmap level
        GL_RGBA8,         // internal format (#channels, #bits/channel, ...)
        g_imageWidth,     // width
        g_imageHeight,    // height
        0,                // border (must be zero)
        GL_RGBA,          // format of input image
        GL_UNSIGNED_BYTE, // type
        data              // data
    );
    glGenerateMipmap(GL_TEXTURE_2D);

    // unbind and free image
    glBindTexture(GL_TEXTURE_2D, 0);
    stbi_image_free(data);
    return textureId;
}
CUDA Graphics Interop
After calling the function above, I register the texture with CUDA:
void initTexturesCuda(GLuint textureId){
    // register texture
    HANDLE(cudaGraphicsGLRegisterImage(
        &g_textureResource,                       // resource
        textureId,                                // image
        GL_TEXTURE_2D,                            // target
        cudaGraphicsRegisterFlagsSurfaceLoadStore // flags
    ));

    // resource description for surface
    memset(&g_resourceDesc, 0, sizeof(g_resourceDesc));
    g_resourceDesc.resType = cudaResourceTypeArray;
}
Render Loop
Every frame, I run the following to modify the texture and render the image:
while(!glfwWindowShouldClose(window)){
    // -- CUDA --

    // map
    HANDLE(cudaGraphicsMapResources(1, &g_textureResource));
    HANDLE(cudaGraphicsSubResourceGetMappedArray(
        &g_textureArray,   // array through which to access subresource
        g_textureResource, // mapped resource to access
        0,                 // array index
        0                  // mipLevel
    ));

    // create surface object (compute >= 3.0)
    g_resourceDesc.res.array.array = g_textureArray;
    HANDLE(cudaCreateSurfaceObject(&g_surfaceObj, &g_resourceDesc));

    // run kernel
    kernel<<<gridDim, blockDim>>>(g_surfaceObj, g_imageWidth, g_imageHeight);

    // unmap
    HANDLE(cudaGraphicsUnmapResources(1, &g_textureResource));

    // --- OpenGL ---

    // clear
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // use program
    shader.use();

    // triangle
    glBindVertexArray(vao);
    glBindTexture(GL_TEXTURE_2D, textureId);
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);
    glBindVertexArray(0);

    // glfw: swap buffers and poll i/o events
    glfwSwapBuffers(window);
    glfwPollEvents();
}
CUDA Kernel
The actual CUDA kernel is as follows:
__global__ void kernel(cudaSurfaceObject_t surface, int nx, int ny){
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if(x < nx && y < ny){
        uchar4 data = make_uchar4(x % 255,
                                  y % 255,
                                  0, 255);
        surf2Dwrite(data, surface, x * sizeof(uchar4), y);
    }
}
If I understand correctly, you initially register the texture, map it once, create a surface object for the array representing the mapped texture, and then unmap the texture. Every frame, you then map the resource again, ask for the array representing the mapped texture, and then completely ignore that one and use the surface object created for the array you got back when you first mapped the resource. From the documentation:
[…] The value set in array may change every time that resource is mapped.
You have to create a new surface object every time you map the resource, because you might get a different array every time. And, in my experience, you actually will get a different one every so often. It may be valid to create a new surface object only when the array actually changes; the documentation seems to allow for that, but I never tried it, so I can't tell whether that works for sure…
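To make that concrete, here is a minimal per-frame sketch of the pattern, reusing the question's names (HANDLE, g_textureResource, g_resourceDesc, kernel); the explicit cudaDestroySurfaceObject call is something the posted loop does not show, but surface objects should be destroyed once you are done with them:
// map the registered resource and fetch the array backing it for this frame
HANDLE(cudaGraphicsMapResources(1, &g_textureResource));
cudaArray_t array;
HANDLE(cudaGraphicsSubResourceGetMappedArray(&array, g_textureResource, 0, 0));

// build a surface object for *this* mapping's array
g_resourceDesc.res.array.array = array;
cudaSurfaceObject_t surfaceObj;
HANDLE(cudaCreateSurfaceObject(&surfaceObj, &g_resourceDesc));

// write into the texture
kernel<<<gridDim, blockDim>>>(surfaceObj, g_imageWidth, g_imageHeight);

// tear the surface object down again before unmapping
HANDLE(cudaDestroySurfaceObject(surfaceObj));
HANDLE(cudaGraphicsUnmapResources(1, &g_textureResource));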
Apart from that: You generate mipmaps for your texture. You only overwrite mip level 0. You then render the texture using mipmapping with trilinear interpolation. So my guess would be that you just happen to render the texture at a resolution that does not match the resolution of mip level 0 exactly and, thus, you will end up interpolating between level 0 (in which you wrote) and level 1 (which was generated from the original texture)…
It turns out the problem is that I had mistakenly generated mipmaps for the original wood texture, and my CUDA kernel was only modifying the level-0 mipmap. The blending I noticed was the result of OpenGL interpolating between my modified level-0 mipmap and a lower-resolution version of the wood texture.
Here's the correct output, obtained by disabling mipmap interpolation. Lesson learned!
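For reference, "disabling mipmap interpolation" can be as simple as the following change to the texture setup (a sketch, assuming only mip level 0 is ever needed): skip the glGenerateMipmap call and use a non-mipmapped minification filter.
// sample only mip level 0: no mipmapped filter, and no glGenerateMipmap call
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);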
I have never come across this error before and I use glTexImage2D elsewhere in the project without error. Below is a screenshot of what error Visual Studio shows, and a view of the disassembly:
Given that the line has ptr in it, I assume there's a pointer error, but I don't know what I'm doing wrong.
Below is the function I use to convert from an SDL_Surface to a texture.
void surfaceToTexture(SDL_Surface *&surface, GLuint &texture) {
    glEnable(GL_TEXTURE_2D);
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, surface->w, surface->h, 0, GL_BGRA, GL_UNSIGNED_BYTE, surface->pixels);
    glDisable(GL_TEXTURE_2D);
}
This function succeeds elsewhere in the program, for example when loading text:
SDL_Surface *surface;
surface = TTF_RenderText_Blended(tempFont, message.c_str(), color);
if (surface == NULL)
    printf("Unable to generate text surface using font: %s! SDL_ttf Error: %s\n", font.c_str(), TTF_GetError());
else {
    SDL_LockSurface(surface);
    width = surface->w;
    height = surface->h;
    if (style != TTF_STYLE_NORMAL)
        TTF_SetFontStyle(tempFont, TTF_STYLE_NORMAL);
    surfaceToTexture(surface, texture);
    SDL_UnlockSurface(surface);
}
SDL_FreeSurface(surface);
But not when loading an image:
SDL_Surface* surface = IMG_Load(path.c_str());
if (surface == NULL)
    printf("Unable to load image %s! SDL_image Error: %s\n", path.c_str(), IMG_GetError());
else{
    SDL_LockSurface(surface);
    width = (w==0)?surface->w:w;
    height = (h==0)?surface->h/4:h;
    surfaceToTexture(surface, texture);
    SDL_UnlockSurface(surface);
}
SDL_FreeSurface(surface);
Both examples are extracted from a class where texture is defined.
The path to the image is correct.
I know it's glTexImage2D that causes the problem as I added a breakpoint at the start of surfaceToTexture and stepped through the function.
Even when it doesn't work, texture and surface do have seemingly correct values/properties.
Any ideas?
The error you're getting means that the process crashed within a section of code for which the debugger could not find any debugging information (i.e. the association between assembly and source code). This is typically the case for anything that is not part of your program's debug build, such as system libraries and drivers.
What happens in your case is that you called glTexImage2D with parameters that "lie" to it about the memory layout of the buffer you pointed it to with the data parameter. Pointers don't carry any meaningful meta information (as far as the assembly level is concerned, they're just integers with a special meaning), so you must make sure that all the parameters you pass to a function along with a pointer actually match the memory behind that pointer. If they don't, somewhere deep in the bowels of that function, or in whatever it calls, memory may be accessed in a way that violates constraints set up by the operating system, triggering exactly this kind of crash.
Solution to your problem: fix your code, i.e. make sure that what you pass to OpenGL is consistent. It crashes within the OpenGL driver, but only because you lied to it.
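In this particular call, GL_BGRA with GL_UNSIGNED_BYTE tells OpenGL to read four bytes per pixel, while IMG_Load frequently hands back 24-bit surfaces, so the claimed layout and the real one disagree. One hedged way to make them agree (a sketch assuming SDL2, where SDL_ConvertSurfaceFormat is available; the essential point is only that format, type and the surface's actual layout must match) is to convert the surface to a known 32-bit format before uploading:
void surfaceToTexture(SDL_Surface *&surface, GLuint &texture) {
    // Convert to a known 32-bit layout so the format/type below are guaranteed
    // to match the pixel data (SDL_PIXELFORMAT_RGBA32 is byte order R,G,B,A).
    SDL_Surface *rgba = SDL_ConvertSurfaceFormat(surface, SDL_PIXELFORMAT_RGBA32, 0);
    if (!rgba) return;

    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // tight rows; matters for non-4-byte pixels
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, rgba->w, rgba->h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, rgba->pixels);

    SDL_FreeSurface(rgba);
}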
I want to try to make a simple program that takes a 3D model and renders it into an image. Is there any way I can use OpenGL to render into a variable that holds an image rather than displaying it on screen? I don't want to see what I'm rendering; I just want to save it. Is there any way to do this with OpenGL?
I'm assuming that you know how to draw stuff to the screen with OpenGL, and you wrote a function such as drawStuff to do so.
First of all you have to decide how big you want your final render to be; I'm choosing a square here, with size 512x512. You can also use sizes that are not powers of two, but to keep things simple let's stick to this format for now. Sometimes OpenGL gets picky about this issue.
const int width = 512;
const int height = 512;
Then you need three objects in order to create an offscreen drawing area; this is called a frame buffer object (FBO), as user1118321 said.
GLuint color;
GLuint depth;
GLuint fbo;
The FBO stores a color buffer and a depth buffer; your screen rendering area also has these two buffers, but you don't want to use them because you don't want to draw to the screen. To create the FBO, you need to do something like the following only once, for instance at startup:
glGenTextures(1, &color);
glBindTexture(GL_TEXTURE_2D, color);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL);
glBindTexture(GL_TEXTURE_2D, 0);

glGenRenderbuffers(1, &depth);
glBindRenderbuffer(GL_RENDERBUFFER, depth);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, width, height);
glBindRenderbuffer(GL_RENDERBUFFER, 0);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, color, 0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depth);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
First you create a memory area to store pixel color, then one to store pixel depth (which in computer graphics is used to remove hidden surfaces), and finally you connect them to the FBO, which basically holds a reference to both. Consider as an example the first block, with its 6 calls:
glGenTextures creates a name for a texture; a name in OpenGL is simply an integer, because a string would be too inefficient.
glBindTexture binds the texture to a target, namely GL_TEXTURE_2D; subsequent calls that specify that same target will operate on that texture.
The 3rd, 4th and 5th call are specific to the target being manipulated, and you should refer to the OpenGL documentation for further information.
The last call to glBindTexture unbinds the texture from the target. Since at some point you will hand control over to your drawStuff function, which in turn makes its own batch of OpenGL calls, you need to clear your workspace now, to avoid interference with the object that you have created.
To switch from screen rendering to offscreen rendering you could use a boolean variable somewhere in your program:
if (offscreen)
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
else
    glBindFramebuffer(GL_FRAMEBUFFER, 0);

drawStuff();

if (offscreen)
    saveToFile();
So, if offscreen is true you actually want drawStuff to interfere with fbo, because you want it to render the scene on it.
The saveToFile function is responsible for reading back the result of the rendering and writing it to a file. This is heavily dependent on the OS and language you are using. As an example, on Mac OS X with Objective-C it would be something like the following:
void saveToFile()
{
    void *imageData = malloc(width * height * 4);
    glBindTexture(GL_TEXTURE_2D, color);
    glGetTexImage(GL_TEXTURE_2D, 0, GL_BGRA, GL_UNSIGNED_BYTE, imageData);
    CGContextRef contextRef = CGBitmapContextCreate(imageData, width, height, 8, 4 * width, CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB), kCGImageAlphaPremultipliedLast);
    CGImageRef imageRef = CGBitmapContextCreateImage(contextRef);
    CFURLRef urlRef = (CFURLRef)[NSURL fileURLWithPath:@"/Users/JohnDoe/Documents/Output.png"];
    CGImageDestinationRef destRef = CGImageDestinationCreateWithURL(urlRef, kUTTypePNG, 1, NULL);
    CGImageDestinationAddImage(destRef, imageRef, nil);
    CGImageDestinationFinalize(destRef); // actually write the image to disk
    CFRelease(destRef);
    glBindTexture(GL_TEXTURE_2D, 0);
    free(imageData);
}
Yes, you can do that. What you want to do is create a frame buffer object (FBO) backed by a texture. Once you create one and draw to it, you can download the texture to main memory and save it just like you would any bitmap.
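For a platform-neutral alternative to the Core Graphics code above, saveToFile can also be written with glReadPixels plus any small image writer. This is a sketch assuming stb_image_write.h is available and that the FBO is still bound when the function is called (as in the offscreen branch above); "output.png" is an arbitrary file name:
#include <vector>
#define STB_IMAGE_WRITE_IMPLEMENTATION
#include "stb_image_write.h"

void saveToFile()
{
    std::vector<unsigned char> pixels(width * height * 4);

    // read back the color buffer of the currently bound FBO
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());

    // OpenGL's origin is bottom-left, image files expect top-left, so flip rows
    stbi_flip_vertically_on_write(1);
    stbi_write_png("output.png", width, height, 4, pixels.data(), width * 4);
}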
I'm having an issue trying to get the width and height of a texture using the glGetTexLevelParameter function. No matter what I try, the function will not set the value of the width or height variable. I checked for errors but keep getting no error. Here is my code (based on the NeHe tutorials, if that helps):
int LoadGLTextures()
{
    //load image file directly into opengl as new texture
    GLint width = 0;
    GLint height = 0;
    texture[0] = SOIL_load_OGL_texture("NeHe.bmp", SOIL_LOAD_AUTO, SOIL_CREATE_NEW_ID, SOIL_FLAG_INVERT_Y); //image must be in same place as lib
    if(texture[0] == 0)
    {
        return false;
    }

    glEnable(GL_TEXTURE_2D);
    glGenTextures(3, &texture[0]);
    glBindTexture(GL_TEXTURE_2D, texture[0]);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &width);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); //no filtering bc of GL_NEAREST, looks really bad
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);

    const GLubyte* test = gluErrorString(glGetError());
    cout << test << endl;
    return true;
}
I'm using Visual Studio 2010, if that helps. The call that loads texture[0] is from the SOIL image library.
Let's break this down:
This call loads an image, creates a new texture ID and loads the image into the texture object named by this ID. In case of success the ID is returned and stored in texture[0].
texture[0] = SOIL_load_OGL_texture(
"NeHe.bmp",
SOIL_LOAD_AUTO,
SOIL_CREATE_NEW_ID,
SOIL_FLAG_INVERT_Y);
BTW: the image file does not have to be in the same directory as the library, but in the current working directory of the process at the time this function is called. If you didn't change the working directory, that is whatever directory your process was started from.
Check if the texture was loaded successfully.
if(texture[0] == 0)
{
    return false;
}
Enabling texturing here makes little sense; glEnable calls belong in the rendering code.
glEnable(GL_TEXTURE_2D);
Okay, here's a problem. glGenTextures generates new texture IDs and places them in the array provided to it. Whatever was stored in that array before is overwritten; in your case that was the very texture ID generated and returned by SOIL_load_OGL_texture. Note that this ID is just a handle and is not garbage collected in any way. You now in fact have a texture object dangling in OpenGL with no way to access it, because you threw away the handle.
glGenTextures(3, &texture[0]);
Now you bind a texture object named by the newly created ID. Since this is a new ID you're effectively creating a new texture object with no image data assigned.
glBindTexture(GL_TEXTURE_2D, texture[0]);
All the following calls operate on an entirely different texture than the one created by SOIL.
How to fix the code: Remove glGenTextures. In your case it's not only redundant, it's the cause of your problem.
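A minimal corrected sketch (assuming texture is a GLuint array, as in the question) binds the SOIL-created texture and queries its size directly:
int LoadGLTextures()
{
    GLint width = 0;
    GLint height = 0;

    // SOIL creates the texture object and returns its ID
    texture[0] = SOIL_load_OGL_texture("NeHe.bmp", SOIL_LOAD_AUTO,
                                       SOIL_CREATE_NEW_ID, SOIL_FLAG_INVERT_Y);
    if(texture[0] == 0)
        return false;

    // no glGenTextures here: just bind what SOIL gave us and query it
    glBindTexture(GL_TEXTURE_2D, texture[0]);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &width);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, &height);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);

    return true;
}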
This line:
texture[0] = SOIL_load_OGL_texture("NeHe.bmp", SOIL_LOAD_AUTO, SOIL_CREATE_NEW_ID, SOIL_FLAG_INVERT_Y);
Creates a texture, storing the OpenGL texture in texture[0].
This line:
glGenTextures(3, &texture[0]);
Creates three textures, storing them in the texture array, overwriting whatever was there.
See the problem? You get a texture from SOIL, then you immediately throw it away by overwriting it with a newly-created texture.
This is no different conceptually than the following:
int *pInt = new int(5);
pInt = new int(10);
Hm, doesn't glGenTextures(howmany, where) work just like glGenBuffers? Why do you assign three textures to one pointer? How is that expected to work?
I think it should be
int textures[3];
glGenTextures(3,textures);
this way the three generated texture names will be placed in the textures array.
Or
int tex1, tex2, tex3;
glGenTextures(1,&tex1);
glGenTextures(1,&tex2);
glGenTextures(1,&tex3);
so you have three separate texture names.
My program renders large jpegs as textures one at a time and allows the user to switch between them by re-using the "same" texture.
I call glGenTextures(1, &texture); once at the start.
Then each time I want to swap the image I use:
FreeTexture( texture );
ROI_img = fetch_image(temp, sortVector[tPiece]);
loadTexture_Ipl( ROI_img , &texture );
here are the two functions being called:
int loadTexture_Ipl(IplImage *image, GLuint *text)
{
    if (image==NULL) return -1;

    glBindTexture( GL_TEXTURE_2D, *text );
    glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 4);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, ROI_WIDTH, ROI_HEIGHT,0, GL_BGR, GL_UNSIGNED_BYTE, image->imageData);
    return 0;
}

void FreeTexture(GLuint texture)
{
    glDeleteTextures( 1, &texture);
}
My problem is that after a couple of images they stop rendering (texture is all black). If I keep trying to switch I get this error message:
test(55248,0xacdf22c0) malloc: *** mmap(size=744001536) failed (error code=12)
*** error: can't allocate region
*** set a breakpoint in malloc_error_break to debug
Mar 19 11:57:51 Ants-MacBook-Pro.local test[55248] <Error>: CGSImageDataLock: Cannot allocate memory
I thought that glDeleteTextures would free the memory each time??
Any ideas on how to better implement this?
p.s. here is a screen of the memory leaks being encountered (https://p.twimg.com/AoWVy8FCMAEK8q7.png:large)
Since you want to reuse your texture, you shouldn't delete it with glDeleteTextures. If you do delete it, you need to create a new texture with glGenTextures, and I don't see you doing that in the loadTexture_Ipl function.
glGenTextures and glDeleteTextures go in pairs. If you really want to delete your texture (maybe because you most likely won't be using that texture again), delete it like you already do and call glGenTextures again before setting a new texture.
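For illustration, here is a sketch of both options, reusing the names from the question (texture, ROI_img, fetch_image, loadTexture_Ipl):
// Option 1: reuse one texture object.
// Generate it once at startup and never delete it; the glTexImage2D call
// inside loadTexture_Ipl simply replaces the previous image's pixels.
glGenTextures(1, &texture);

// ...then, every time the user switches images:
ROI_img = fetch_image(temp, sortVector[tPiece]);
loadTexture_Ipl(ROI_img, &texture);

// Option 2: if the texture really should be deleted each time,
// keep glDeleteTextures and glGenTextures paired:
FreeTexture(texture);
glGenTextures(1, &texture);   // fresh texture name before re-uploading
ROI_img = fetch_image(temp, sortVector[tPiece]);
loadTexture_Ipl(ROI_img, &texture);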