I have a function to load a texture from a JPEG image using SOIL.
So far I have been loading the texture with the SOIL_load_image() function and then supplying the image to OpenGL using glTexImage2D (see code below). However, my textures are upside down, so I wanted to use SOIL_load_OGL_texture() instead and supply the SOIL_FLAG_INVERT_Y flag to flip the images. My problem is that I get an unhandled exception at the SOIL_load_OGL_texture() call.
The code is almost a copy-paste from the documentation, so I don't understand why this error occurs.
(NOTE: I could invert the texture coordinates in my vertex shader, but I would like to use SOIL.)
The old way
int width;
int height;
unsigned char* image;

glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textures[0]);
image = SOIL_load_image(filename, &width, &height, 0, SOIL_LOAD_RGB);
if (image == NULL) {
    std::cout << "An error occurred while loading image." << std::endl;
    exit(EXIT_FAILURE);
}
std::cout << "Loaded first texture image" << std::endl;
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, image);
SOIL_free_image_data(image);
What I am trying now
GLuint image;
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textures[0]);
image = SOIL_load_OGL_texture(
    filename,
    SOIL_LOAD_RGB,
    SOIL_CREATE_NEW_ID,
    SOIL_FLAG_INVERT_Y
);
if (image == 0)
    std::cerr << "SOIL loading error: '" << SOIL_last_result() << "' (" << filename << ")" << std::endl;
And the error
Unhandled exception at 0x0F5427FF (msvcr110d.dll) in AnotherTutorial.exe: 0xC0000005: Access violation reading location 0x00000000.
Seems like there is no answer for using SOIL, so I'll post my solution:
In the vertex shader I do:
Texcoord = vec2(texcoord.x, 1.0-texcoord.y);
gl_Position = proj * view * model * vec4( position, 1.0 );
The 1.0-texcoord.y inverts the y-axis of the image. Not as clean a solution, but it works.
void loadTexture(GLuint* texture, const char* filename){
    *texture = SOIL_load_OGL_texture(filename,
        SOIL_LOAD_AUTO,
        SOIL_CREATE_NEW_ID,
        SOIL_FLAG_NTSC_SAFE_RGB | SOIL_FLAG_MULTIPLY_ALPHA
    );
    if(*texture == 0){
        printf("[Texture loader] \"%s\" failed to load!\n", filename);
    }
}
void drawTexturedRect(int x, int y, int w, int h, GLuint texture){
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, texture);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glDepthMask(GL_FALSE);
    glDisable(GL_DEPTH_TEST);
    glBegin(GL_QUADS);
        glColor3f(1.0f, 1.0f, 1.0f);
        glTexCoord2f(0,0); glVertex2f(x,y);
        glTexCoord2f(1,0); glVertex2f(x+w,y);
        glTexCoord2f(1,1); glVertex2f(x+w,y+h);
        glTexCoord2f(0,1); glVertex2f(x,y+h);
    glEnd();
    glEnable(GL_DEPTH_TEST);
    glDepthMask(GL_TRUE);
    glDisable(GL_BLEND);
}
And then you can do:
// At initialization
GLuint texture;
loadTexture(&texture, "filename.png");
// Drawing part
drawTexturedRect(25,25,256,256,texture);
This is what I personally use, and it works perfectly. I'm using Visual Studio 2012 combined with SDL and SOIL.
SOIL_load_image returns an unsigned char*, yes, an array holding your image data.
You then normally feed this data to GL by generating texture IDs with glGenTextures, binding one of them, and calling glTexImage2D.
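A minimal sketch of that manual path, reusing the names from the question (filename and the textures array), with error handling trimmed:

int width, height, channels;
unsigned char* image = SOIL_load_image(filename, &width, &height, &channels, SOIL_LOAD_RGB);
if (image != NULL) {
    glGenTextures(1, &textures[0]);        // create the texture ID yourself
    glBindTexture(GL_TEXTURE_2D, textures[0]);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // tightly packed RGB rows
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, image);
    SOIL_free_image_data(image);           // OpenGL has its own copy now
}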
SOIL_load_OGL_texture returns a texture id, so you could do this instead:
texture[0] = SOIL_load_OGL_texture(...)
if your goal is to load textures 0,1,2.
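For example, something along these lines (the file names are just placeholders):

GLuint textures[3];
const char* paths[3] = { "tex0.png", "tex1.png", "tex2.png" };  // placeholder file names
for (int i = 0; i < 3; ++i) {
    textures[i] = SOIL_load_OGL_texture(paths[i], SOIL_LOAD_AUTO, SOIL_CREATE_NEW_ID, SOIL_FLAG_INVERT_Y);
    if (textures[i] == 0)
        printf("SOIL loading error for %s: %s\n", paths[i], SOIL_last_result());
}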
As for inverting Y, the easiest and cost-free solution is to flip the texture coordinates in glTexCoord, but it depends on what you are doing. If you load resources only once, for example at startup, the flag can't hurt beyond a bit of startup time that is probably not worth mentioning. But if you load resources dynamically in the main loop as they are needed, then the invert-Y flag (and just about any flag) can hurt performance because of the additional processing it adds throughout your entire program.
One big advantage of using SOIL_load_image is that you can retrieve the width, height and channel count of the original image, which SOIL_load_OGL_texture doesn't provide.
If it helps: SOIL_load_OGL_texture would crash after a while when loading RGB images as SOIL_LOAD_RGB, but worked fine all the way with SOIL_LOAD_RGBA, so that could be a fix for your problem.
I still find it easier to work with SOIL_load_image. I hope any of this helps. Also check the source code that ships with SOIL to see what's going on.
Basically, in the call
SOIL_load_image(filename, &width, &height, 0, SOIL_LOAD_RGB);
the 0 (NULL) you are passing is where the library expects a pointer to an int it sets to the channel count, so, of course, when the library tries to access it you get:
Unhandled exception at 0x0F5427FF (msvcr110d.dll) in
AnotherTutorial.exe: 0xC0000005: Access violation reading location
0x00000000
Try declaring a variable and using it:
int channels;
SOIL_load_image(filename, &width, &height, &channels, SOIL_LOAD_RGB);
None of the solutions above worked for me.
I just ran into the same problem, like the following:
SOIL_load_OGL_texture Unhandled exception at xxxxxx (msvcr120d.dll)
After reading the solution at http://www.idevgames.com/forums/thread-10281.html,
I changed my path from a relative path to an absolute path.
Also, since I am Chinese, I had to make sure that the path is all English, with no non-ASCII characters.
For example:
I changed my code from
SOIL_load_OGL_texture("my_texture.bmp",
SOIL_LOAD_AUTO,
SOIL_CREATE_NEW_ID,
SOIL_FLAG_NTSC_SAFE_RGB | SOIL_FLAG_MULTIPLY_ALPHA
);
to
SOIL_load_OGL_texture("D:\temp_20160926\ConsoleApplication20\ConsoleApplication19\my_texture.bmp",
SOIL_LOAD_AUTO,
SOIL_CREATE_NEW_ID,
SOIL_FLAG_NTSC_SAFE_RGB | SOIL_FLAG_MULTIPLY_ALPHA
);
And it solved my problem perfectly.
I think this is a bug/restriction in SOIL, but you can avoid it by specifying an absolute path as I did here.
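(A side note on the hard-coded path: in a C or C++ string literal the backslashes themselves must be escaped, as shown above, otherwise sequences like \t are read as escape characters; forward slashes are an alternative that Windows also accepts. The directory in this snippet is only a placeholder.)

// both of these spell the same (hypothetical) path
const char* path1 = "D:\\textures\\my_texture.bmp";  // escaped backslashes
const char* path2 = "D:/textures/my_texture.bmp";    // forward slashes work on Windows too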
Hope it can help you too.
Related
I have never come across this error before, and I use glTexImage2D elsewhere in the project without error. Below is a screenshot of the error Visual Studio shows, and a view of the disassembly:
Given the line has ptr in it I assume there's a pointer error but I don't know what I'm doing wrong.
Below is the function I use to convert from an SDL_surface to a texture.
void surfaceToTexture(SDL_Surface *&surface, GLuint &texture) {
    glEnable(GL_TEXTURE_2D);
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, surface->w, surface->h, 0, GL_BGRA, GL_UNSIGNED_BYTE, surface->pixels);
    glDisable(GL_TEXTURE_2D);
}
This function succeeds elsewhere in the program, for example when loading text:
SDL_Surface *surface;
surface = TTF_RenderText_Blended(tempFont, message.c_str(), color);
if (surface == NULL)
    printf("Unable to generate text surface using font: %s! SDL_ttf Error: %s\n", font.c_str(), TTF_GetError());
else {
    SDL_LockSurface(surface);
    width = surface->w;
    height = surface->h;
    if (style != TTF_STYLE_NORMAL)
        TTF_SetFontStyle(tempFont, TTF_STYLE_NORMAL);
    surfaceToTexture(surface, texture);
    SDL_UnlockSurface(surface);
}
SDL_FreeSurface(surface);
But not when loading an image:
SDL_Surface* surface = IMG_Load(path.c_str());
if (surface == NULL)
    printf("Unable to load image %s! SDL_image Error: %s\n", path.c_str(), IMG_GetError());
else {
    SDL_LockSurface(surface);
    width = (w==0)?surface->w:w;
    height = (h==0)?surface->h/4:h;
    surfaceToTexture(surface, texture);
    SDL_UnlockSurface(surface);
}
SDL_FreeSurface(surface);
Both examples are extracted from a class where texture is defined.
The path to the image is correct.
I know it's glTexImage2D that causes the problem as I added a breakpoint at the start of surfaceToTexture and stepped through the function.
Even when it doesn't work, texture and surface do have seemingly correct values/properties.
Any ideas?
The error you're getting means that the process crashed within a section of code for which the debugger could not find any debugging information (the association between assembly and source code) whatsoever. This is typically the case for anything that is not part of your program's debug build, such as the runtime or driver DLLs.
Now what happens in your case is that you called glTexImage2D with parameters that "lie" to it about the memory layout of the buffer you pointed it to with the data parameter. Pointers don't carry any meaningful meta information; as far as the assembly level is concerned, they're just another integer with a special meaning. So you must make sure that all the parameters you pass to a function along with a pointer do match up. If not, somewhere deep in the bowels of that function, or whatever it calls (or that calls, and so on), the memory might be accessed in a way that violates constraints set up by the operating system, triggering that kind of crash.
Solution to your problem: Fix your code, i.e. make sure that what you pass to OpenGL is consistent. It crashes within the OpenGL driver, but only because you lied to it.
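As a sketch of what "being consistent" can look like here (this is not the author's code): TTF_RenderText_Blended always produces a 32-bit surface, whereas IMG_Load often returns a 24-bit one, so reading the latter as GL_BGRA overruns the buffer. One approach is to derive the client format from the SDL_Surface instead of hard-coding it, assuming a 24- or 32-bit surface:

void surfaceToTexture(SDL_Surface *surface, GLuint &texture) {
    // pick a client format that matches what the surface actually stores
    GLenum format = GL_RGB;
    if (surface->format->BytesPerPixel == 4)
        format = (surface->format->Rmask == 0x000000ff) ? GL_RGBA : GL_BGRA;
    else if (surface->format->BytesPerPixel == 3)
        format = (surface->format->Rmask == 0x000000ff) ? GL_RGB : GL_BGR;

    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);  // rows may not be 4-byte aligned
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, surface->w, surface->h, 0,
                 format, GL_UNSIGNED_BYTE, surface->pixels);
    // note: if surface->pitch != surface->w * BytesPerPixel, the row stride
    // also has to be communicated (e.g. via GL_UNPACK_ROW_LENGTH)
}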
I have been having a very odd problem when trying to use OpenGL's C++ API. I am trying to load in a texture using ImageMagick, and then display it as a simple 2D textured square. I have a decent amount of experience with using OpenGL in Java, so I understand how to render a texture and bind it to a primitive. However, each time I attempt to draw it, the program either fails to render, or it renders it as a (properly sized) white square. I'm not entirely sure what is going on, but I believe it has to do with ImageMagick.
I have been using Ubuntu's terminal for compiling, and I've learned just how painful it can be to have to install libraries manually. ImageMagick first refused to compile when used in my program, and when I finally got the program to compile, it would seg-fault each time it ran. I've finally got it "working", but now, whenever I attempt to load in the image, the program will run without rendering. I haven't found anything like this on Google.
http://imgur.com/C7yKwDK
The odd thing is that, very rarely, it will work correctly and render the square as expected. However, when I then try to rerun the program, it fails as shown above. I've determined that the line that causes it to fail to render is the line where the image is loaded, so that led me to believe that the image was just being loaded incorrectly, causing the program to fail. However, if I move the texture loading code before the creation of the GL window, the program will consistently render successfully, but the textured square appears only as white (though the size of the square is correct, so I know the image loading is working).
Anyway, sorry for the long post. I've just given up solving this one on my own, and was hoping one of you could help me out.
OpenGL Initialization Code:
Texture* tx;
void GraphicsOGL::initialize3D(int argc, char* argv[]) {
    Magick::InitializeMagick(*argv);
    glutInit(&argc, argv);

    //Loading Here ALWAYS Causes White Square
    /*glEnable(GL_TEXTURE_2D);
    tx = new Texture("Resources/Images/test.png");
    tx->load();*/

    glutInitDisplayMode(GLUT_DOUBLE|GLUT_RGBA);
    glutInitWindowSize(SCREEN_WIDTH, SCREEN_HEIGHT);
    glutInitWindowPosition(100, 100);
    glutCreateWindow("OpenGL Game");

    glViewport(0, 0, SCREEN_WIDTH, SCREEN_HEIGHT);
    glOrtho(0, SCREEN_WIDTH, SCREEN_HEIGHT, 0, -3, 1000);
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_ALPHA_TEST);
    glEnable(GL_TEXTURE_2D);

    //Loading Here SOMETIMES Works, But Typically Fails
    tx = new Texture("Resources/Images/test.png");
    tx->load();

    glutDisplayFunc(displayCallback);
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glutMainLoop();
}
Texture Loading Code:
bool Texture::load() {
    try {
        m_image.read(m_fileName); //This Line Causes it to Fail to Render
        m_image.write(&m_blob, "RGBA");
    }
    catch (Magick::Error& Error) {
        std::cout << "Error loading texture '" << m_fileName << "': " << Error.what() << std::endl;
        return false;
    }

    width = m_image.columns();
    height = m_image.rows();

    glGenTextures(1, &m_textureObj);
    glBindTexture(m_textureTarget, m_textureObj);
    //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
    //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0);
    glTexParameterf(m_textureTarget, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameterf(m_textureTarget, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
    glTexImage2D(m_textureTarget, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, m_blob.data());
    //glBindTexture(m_textureTarget, 0);
    return true;
}
Texture Drawing Code:
void GraphicsOGL::drawTexture(float x, float y, Texture* tex) {
    glEnable(GL_TEXTURE_2D);
    tex->bind();

    float depth = 0, w, h;
    w = tex->getWidth();
    h = tex->getHeight();

    glBegin(GL_QUADS);
        glVertex3f(x, y+h, depth);   glTexCoord2f(1,0);
        glVertex3f(x+w, y+h, depth); glTexCoord2f(1,1);
        glVertex3f(x+w, y, depth);   glTexCoord2f(0,1);
        glVertex3f(x, y, depth);     glTexCoord2f(0,0);
    glEnd();
}
I am attempting to load a texture into OpenGL using DevIL, and I am getting a segmentation fault when this constructor is called:
Sprite::Sprite(const char *path){
    ILuint tex = 0;
    ilutEnable(ILUT_OPENGL_CONV);
    ilGenImages(1, &tex);
    ilBindImage(tex);
    ilLoadImage(path);
    ilConvertImage(IL_RGBA, IL_UNSIGNED_BYTE);
    width = (GLuint*)ilGetInteger(IL_IMAGE_WIDTH);
    height = (GLuint*)ilGetInteger(IL_IMAGE_HEIGHT);
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexImage2D(GL_TEXTURE_2D,
                 0,
                 GL_RGBA,
                 width,
                 height,
                 0,
                 GL_RGBA,
                 GL_UNSIGNED_BYTE,
                 &tex);
    ilBindImage(0);
    ilDeleteImages(1, &tex);
    ilutDisable(ILUT_OPENGL_CONV);
}
and texture is a protected member
GLuint texture;
As soon as this constructor is called I receive a segfault error and the program exits. I am using freeglut, GL, IL, ILU, and ILUT. Any help would be appreciated.
Edit:
I also tried a different approach and used
texture = ilutGLLoadImage(path)
to load the image directly into the GL texture, because I located the segfault as coming from
ilLoadImage(path)
but the compiler tells me that ilutGLLoadImage() is not declared in this scope, even though I have IL/il.h, IL/ilu.h and IL/ilut.h all included and initialized.
I never used DevIL, but glTexImage2D wants a pointer to the pixel data as its last argument, and you pass a pointer to the local variable tex there instead, which lives on the stack and does not contain the expected image data. So glTexImage2D reads through your stack and eventually attempts to access memory it was not supposed to access, and you get a segmentation fault.
I guess you'd want to use ilGetData() instead.
Make sure you have DevIL initialized with ilInit() and change &tex to ilGetData(), and then it should work.
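For illustration, a hedged sketch of the constructor with those two fixes applied; it assumes ilInit() has been called once at startup and keeps the question's width, height and texture members:

Sprite::Sprite(const char *path) {
    ILuint img = 0;
    ilGenImages(1, &img);
    ilBindImage(img);
    if (!ilLoadImage(path)) {                    // check for failure instead of assuming success
        ilDeleteImages(1, &img);
        return;
    }
    ilConvertImage(IL_RGBA, IL_UNSIGNED_BYTE);
    width  = ilGetInteger(IL_IMAGE_WIDTH);
    height = ilGetInteger(IL_IMAGE_HEIGHT);

    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, ilGetData());  // pixel data, not &img
    ilBindImage(0);
    ilDeleteImages(1, &img);
}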
I want to setup a really simple two-pass effect. The first pass draws a texture object to a texture. The second pass creates a full screen quad in the geometry shader and textures it with the texture written in pass one.
The texture and framebuffer is set up in the following way:
gl.glGenFramebuffers(1, frameBufferHandle, 0);
gl.glBindFramebuffer(GL3.GL_FRAMEBUFFER, frameBufferHandle[0]);
texture = new Texture(gl, new TextureData(gl.getGLProfile(), GL3.GL_RGB, viewportWidth, viewportHeight,
0, GL3.GL_RGB, GL3.GL_UNSIGNED_BYTE, false, false, false, null, null));
texture.setTexParameteri(gl, GL3.GL_TEXTURE_MAG_FILTER, GL3.GL_LINEAR);
texture.setTexParameteri(gl, GL3.GL_TEXTURE_MIN_FILTER, GL3.GL_LINEAR);
gl.glFramebufferTexture(GL3.GL_FRAMEBUFFER, GL3.GL_COLOR_ATTACHMENT0, texture.getTextureObject(), 0);
int drawBuffers[] = {GL3.GL_COLOR_ATTACHMENT0};
gl.glDrawBuffers(1, drawBuffers, 0);
if (gl.glCheckFramebufferStatus(GL3.GL_FRAMEBUFFER) != GL3.GL_FRAMEBUFFER_COMPLETE)
throw new Exception("error while creating framebuffer");
The render function looks like:
// 1st pass
gl.glBindFramebuffer(GL3.GL_FRAMEBUFFER, frameBufferHandle[0]);
gl.glClearColor(0.2f, 0.2f, 0.2f, 1.0f);
gl.glClear(GL3.GL_STENCIL_BUFFER_BIT | GL3.GL_COLOR_BUFFER_BIT | GL3.GL_DEPTH_BUFFER_BIT);
texturePass.apply();
texturePass.updatePerObject(world);
texturePass.updateTexture(object.getDiffuseMap());
object.draw(gl);
// 2nd pass
gl.glBindFramebuffer(GL3.GL_FRAMEBUFFER, 0);
gl.glClearColor(0.2f, 0.2f, 0.2f, 1.0f);
gl.glClear(GL3.GL_STENCIL_BUFFER_BIT | GL3.GL_COLOR_BUFFER_BIT | GL3.GL_DEPTH_BUFFER_BIT);
fullscreenQuadPass.apply();
fullscreenQuadPass.updateTexture(texture);
gl.glDrawArrays(GL3.GL_POINTS, 0, 1);
The picture below shows the result of applying this effect:
As you hopefully can see, one can see through the golem and see his right hand. It seems like there is some kind of depth-test or transparency error.
Everything looks fine if I comment the 2nd pass out and replace
gl.glBindFramebuffer(GL3.GL_FRAMEBUFFER, frameBufferHandle[0]);
by
gl.glBindFramebuffer(GL3.GL_FRAMEBUFFER, 0);
Does anyone have an idea what is going on here?
EDIT: In fact, I'm actually missing a depth buffer for the 2nd pass. Thus, I've updated my initialization sequence to
// Create framebuffer
gl.glGenFramebuffers(1, frameBufferHandle, 0);
gl.glBindFramebuffer(GL4.GL_FRAMEBUFFER, frameBufferHandle[0]);
// Set up color texture
colorTexture = new Texture(gl, new TextureData(gl.getGLProfile(),
GL4.GL_RGBA, width, height, 0, GL4.GL_RGBA, GL4.GL_UNSIGNED_BYTE,
false, false, false, null, null));
gl.glFramebufferTexture(GL4.GL_FRAMEBUFFER, GL4.GL_COLOR_ATTACHMENT0,
colorTexture.getTextureObject(), 0);
// Create and set up depth renderbuffer
gl.glGenRenderbuffers(GL4.GL_RENDERBUFFER, depthRenderBufferHandle, 0);
gl.glBindRenderbuffer(GL4.GL_RENDERBUFFER, depthRenderBufferHandle[0]);
gl.glRenderbufferStorage(GL4.GL_RENDERBUFFER, GL4.GL_DEPTH_COMPONENT,
width, height);
gl.glFramebufferRenderbuffer(GL4.GL_FRAMEBUFFER, GL4.GL_DEPTH_ATTACHMENT,
GL4.GL_RENDERBUFFER, depthRenderBufferHandle[0]);
int drawBuffers[] = {GL4.GL_COLOR_ATTACHMENT0};
gl.glDrawBuffers(1, drawBuffers, 0);
However, now my program crashes with a "fatal error" from the Java Runtime Environment. If I comment the newly added lines out, everything "works fine". What is the problem now?
EDIT2: I've no idea why I've written
gl.glGenRenderbuffers(GL4.GL_RENDERBUFFER, depthRenderBufferHandle, 0);
Of course, it should be
gl.glGenRenderbuffers(1, depthRenderBufferHandle, 0);
That solved my problem.
Your Framebuffer Object currently lacks a depth attachment.
Here is some C pseudo-code that will address your problem:
GLuint depth_rbo;
glGenRenderbuffers (1, &depth_rbo);
glBindRenderbuffer (GL_RENDERBUFFER, depth_rbo);
glRenderbufferStorage (GL_RENDERBUFFER, GL_DEPTH_COMPONENT, width, height);
glFramebufferRenderbuffer (GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depth_rbo);
In fact, it also lacks a stencil attachment, so I am not sure why you are clearing the stencil buffer.
If you have stencil operations to perform, you will need to allocate storage for it as well. Moreover, if you need both depth and stencil in an FBO, you must use a packed depth-stencil image format (e.g. GL_DEPTH24_STENCIL8) attached to GL_DEPTH_STENCIL_ATTACHMENT.
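For completeness, a packed depth-stencil attachment would look roughly like this (only needed if you actually use the stencil buffer):

GLuint depth_stencil_rbo;
glGenRenderbuffers(1, &depth_stencil_rbo);
glBindRenderbuffer(GL_RENDERBUFFER, depth_stencil_rbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT,
                          GL_RENDERBUFFER, depth_stencil_rbo);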
I'm having an issue trying to get the width and height of a texture using the glGetTexLevelParameter function. No matter what I try, the function will not set the value of the width or height variable. I checked for errors but keep getting no error. Here is my code (based on the NeHe tutorials, if that helps):
int LoadGLTextures()
{
    //load image file directly into opengl as new texture
    GLint width = 0;
    GLint height = 0;
    texture[0] = SOIL_load_OGL_texture("NeHe.bmp", SOIL_LOAD_AUTO, SOIL_CREATE_NEW_ID, SOIL_FLAG_INVERT_Y); //image must be in same place as lib
    if(texture[0] == 0)
    {
        return false;
    }

    glEnable(GL_TEXTURE_2D);
    glGenTextures(3, &texture[0]);
    glBindTexture(GL_TEXTURE_2D, texture[0]);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &width);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); //no filtering bc of GL_NEAREST, looks really bad
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);

    const GLubyte* test = gluErrorString(glGetError());
    cout << test << endl;
    return true;
}
I'm also using Visual Studio 2010, if that helps. The call that loads texture[0] is from the SOIL image library.
Let's break this down:
This call loads an image, creates a new texture ID and loads the image into the texture object named by this ID. In case of success the ID is returned and stored in texture[0].
texture[0] = SOIL_load_OGL_texture(
"NeHe.bmp",
SOIL_LOAD_AUTO,
SOIL_CREATE_NEW_ID,
SOIL_FLAG_INVERT_Y);
BTW: the image file does not have to be in the same place as the library, but in the current working directory of the process at the time this function is called. If you didn't change the working directory, that's whatever directory your process was started from.
Check if the texture was loaded successfully.
if(texture[0] == 0)
{
return false;
}
Enabling texturing here makes little sense; glEnable calls belong in the rendering code.
glEnable(GL_TEXTURE_2D);
Okay, here's a problem. glGenTextures generates new texture IDs and places them in the array provided to it. Whatever was stored in that array before is overwritten; in your case, that is the very texture ID generated and returned by SOIL_load_OGL_texture. Note that this ID is just a handle and is not garbage collected in any way. You now have a texture object dangling in OpenGL and no longer have access to it, because you threw away the handle.
glGenTextures(3, &texture[0]);
Now you bind a texture object named by the newly created ID. Since this is a new ID you're effectively creating a new texture object with no image data assigned.
glBindTexture(GL_TEXTURE_2D, texture[0]);
All the following calls operate on an entirely different texture than the one created by SOIL.
How to fix the code: Remove glGenTextures. In your case it's not only redundant, it's the cause of your problem.
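Put together, a minimal sketch of the fixed function (same texture array and SOIL call as in the question; the height query is added for symmetry):

int LoadGLTextures()
{
    GLint width = 0, height = 0;
    texture[0] = SOIL_load_OGL_texture("NeHe.bmp", SOIL_LOAD_AUTO,
                                       SOIL_CREATE_NEW_ID, SOIL_FLAG_INVERT_Y);
    if (texture[0] == 0)
        return false;

    glBindTexture(GL_TEXTURE_2D, texture[0]);   // bind the ID SOIL returned
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &width);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, &height);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    return true;
}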
This line:
texture[0] = SOIL_load_OGL_texture("NeHe.bmp", SOIL_LOAD_AUTO, SOIL_CREATE_NEW_ID, SOIL_FLAG_INVERT_Y);
Creates a texture, storing the OpenGL texture in texture[0].
This line:
glGenTextures(3, &texture[0]);
Creates three textures, storing them in the texture array, overwriting whatever was there.
See the problem? You get a texture from SOIL, then you immediately throw it away by overwriting it with a newly-created texture.
This is no different conceptually than the following:
int *pInt = new int(5);
pInt = new int(10);
Hm, doesn't glGenTextures(howmany, where) work just like glGenBuffers? Why do you assign three textures to one pointer, and how is that expected to work?
I think it should be
GLuint textures[3];
glGenTextures(3, textures);
This way the three generated texture names will be placed in the textures array.
Or
GLuint tex1, tex2, tex3;
glGenTextures(1, &tex1);
glGenTextures(1, &tex2);
glGenTextures(1, &tex3);
so you have three separate texture names.