Texture does not wrap where expected - opengl

I am generating a bitmap (1 byte per pixel) and attempting to use it for alpha blending. I am successfully using the bitmap, but it appears that the texture does not wrap lines as I expect.
When I use the following code, it wraps where I would expect, given the input image. I get the set of Xs that I expect.
std::ofstream file{ R"(FileName.txt)" };
file << "width: " << gs.width() << "\theight: " << gs.height() << "\n";
for (int i = 0; i < gs.height(); ++i)
{
    for (int j = 0; j < gs.width(); ++j)
    {
        file << ((gs.alpha()[j + i * gs.width()]) ? 'X' : ' ');
    }
    file << "\n";
}
When I load the texture it appears that the width of the texture does not match gs.width(), since it wraps oddly.
This is the code that I use to create the texture and load it with the bitmap.
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, gs.width(), gs.height(), 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, gs.alpha());
Can anyone suggest what I might be doing wrong?

When glIsTexture is useful

Here is an example OpenGL command sequence:
glGenTextures(1, &texId);
std::cout << (int)glIsTexture(texId) << std::endl; //0
glBindTexture(GL_TEXTURE_2D, texId);
std::cout << (int)glIsTexture(texId) << std::endl; //1
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, img.getWidth(), img.getHeight(),
             0, GL_BGR, GL_UNSIGNED_BYTE, img.accessPixels()); // when data == 0, glIsTexture returns the same results
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glBindTexture(GL_TEXTURE_2D, 0);
std::cout << (int)glIsTexture(texId) << std::endl; //1
glDeleteTextures(1, &texId);
std::cout << (int)glIsTexture(texId) << std::endl; //0
I wonder when the glIsTexture function is useful? It looks like the main usage is checking whether a texture has been deleted. glIsTexture also returns false when a texture has been generated but not yet bound and initialized. Do you know any other scenarios?
I wonder when the glIsTexture function is useful? It looks like the main usage is checking whether a texture has been deleted.
If the renderer is properly architected, there is no need to check whether a texture has been deleted (the code would already know).
I assume the designers felt exposing the state of the texture ID could be useful for either debugging or to implement some kind of pool of textures.

How to properly load an image to use as an OpenGL texture?

I am trying to load an image into an OpenGL texture using SOIL2; however, it never seems to come out correct unless I use SOIL2's load-to-texture function. I have tried using stb_image and DevIL as well, but both give similar results.
Code:
GLuint load_image(const std::string& path) {
    int iwidth, iheight, channels;
    unsigned char* image = SOIL_load_image(path.c_str(), &iwidth, &iheight, &channels, SOIL_LOAD_RGBA);
    // std::cout << SOIL_last_result() << std::endl;
    // float* image = stbi_loadf(path.c_str(), &iwidth, &iheight, &channels, STBI_rgb_alpha);
    // if (!ilLoadImage(path.c_str()))
    //     std::cout << "Devil Failed to load image: " << iluErrorString(ilGetError()) << std::endl;
    //
    // unsigned char* image = ilGetData();
    //
    // int iwidth = ilGetInteger(IL_IMAGE_WIDTH);
    // int iheight = ilGetInteger(IL_IMAGE_HEIGHT);
    // int channels = ilGetInteger(IL_IMAGE_CHANNELS);
    GLuint texture;
    glGenTextures(1, &texture);
    glActiveTexture(GL_TEXTURE0 + texture);
    GLint old_unpack_alignment;
    glGetIntegerv(GL_UNPACK_ALIGNMENT, &old_unpack_alignment);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glBindTexture(GL_TEXTURE_2D, texture);
    glCheckError();
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glCheckError();
    GLenum original_format = (channels == 4 ? GL_RGBA : GL_RGB);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, iwidth, iheight, 0, GL_RGBA, GL_UNSIGNED_BYTE, image);
    glGenerateMipmap(GL_TEXTURE_2D);
    glPixelStorei(GL_UNPACK_ALIGNMENT, old_unpack_alignment);
    return texture;
}
Screenshot:
What I should get:
I would like to know how to properly load an image into a texture.
Here is an example of what my texture loading function looks like:
unsigned int loadTexture(char const * path)
{
    unsigned int textureID;
    glGenTextures(1, &textureID);
    int width, height, nrComponents;
    unsigned char *data = stbi_load(path, &width, &height, &nrComponents, 0);
    if (data)
    {
        GLenum format = GL_RGB; // default, so format is never left uninitialized
        if (nrComponents == 1)
            format = GL_RED;
        else if (nrComponents == 3)
            format = GL_RGB;
        else if (nrComponents == 4)
            format = GL_RGBA;
        glBindTexture(GL_TEXTURE_2D, textureID);
        glTexImage2D(GL_TEXTURE_2D, 0, format, width, height, 0, format, GL_UNSIGNED_BYTE, data);
        glGenerateMipmap(GL_TEXTURE_2D);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, format == GL_RGBA ? GL_CLAMP_TO_EDGE : GL_REPEAT);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, format == GL_RGBA ? GL_CLAMP_TO_EDGE : GL_REPEAT);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        stbi_image_free(data);
    }
    else
    {
        std::cout << "Texture failed to load at path: " << path << std::endl;
        stbi_image_free(data);
    }
    return textureID;
}
I will usually set up my VAO and VBO beforehand, then use this to load in a texture. After that I'll configure my shader(s) for use. Within the render loop I'll use my shader and set the matrices, passing in any needed uniforms; once all the "model" information is complete, I'll bind the vertex arrays, set the appropriate texture unit active, bind the texture(s), and finish up by drawing the arrays or primitives.
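That per-frame sequence, sketched out (names like shader, vao, and textureID are placeholders, not from the code above):

```cpp
// per frame, after the one-time setup described above
shader.use();                                // activate the shader program
shader.setMat4("model", model);              // pass the needed uniforms
glBindVertexArray(vao);                      // bind the vertex arrays
glActiveTexture(GL_TEXTURE0);                // set the appropriate unit active
glBindTexture(GL_TEXTURE_2D, textureID);     // bind the texture loaded earlier
glDrawArrays(GL_TRIANGLES, 0, vertexCount);  // finish up by drawing
```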

GL_INVALID_OPERATION in glGenerateMipmap(incomplete cube map)

I'm trying to learn OpenGL, and I'm using SOIL to load images.
I have the following piece of code:
GLuint texID = 0;
bool loadCubeMap(const char * baseFileName) {
    glActiveTexture(GL_TEXTURE0);
    glGenTextures(1, &texID);
    glBindTexture(GL_TEXTURE_CUBE_MAP, texID);
    const char * suffixes[] = { "posx", "negx", "posy", "negy", "posz", "negz" };
    GLuint targets[] = {
        GL_TEXTURE_CUBE_MAP_POSITIVE_X, GL_TEXTURE_CUBE_MAP_NEGATIVE_X,
        GL_TEXTURE_CUBE_MAP_POSITIVE_Y, GL_TEXTURE_CUBE_MAP_NEGATIVE_Y,
        GL_TEXTURE_CUBE_MAP_POSITIVE_Z, GL_TEXTURE_CUBE_MAP_NEGATIVE_Z
    };
    for (int i = 0; i < 6; i++) {
        int width, height;
        std::string fileName = std::string(baseFileName) + "_" + suffixes[i] + ".png";
        std::cout << "Loading: " << fileName << std::endl;
        unsigned char * image = SOIL_load_image(fileName.c_str(), &width, &height, 0, SOIL_LOAD_RGB);
        if (!image) {
            std::cerr << __FUNCTION__ << " cannot load image " << fileName << " (" << SOIL_last_result() << ")" << std::endl;
            return false;
        }
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, image);
        SOIL_free_image_data(image);
    }
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
    glGenerateMipmap(GL_TEXTURE_CUBE_MAP);
    glBindTexture(GL_TEXTURE_CUBE_MAP, 0);
    return true;
}
When I call this, the images load successfully, but then I get an error in the console:
---- OGL DEBUG ----
message <1>: 'API' reported 'Error' with 'High' severity:
GL_INVALID_OPERATION in glGenerateMipmap(incomplete cube map)
---- BACKTRACE ----
and no cubemap is displayed at all.
Do you see any mistake in this code?
You never actually specify the texture image for the cube map faces. You instead call glTexImage2D on the GL_TEXTURE_2D target for all of the cube faces.
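A minimal sketch of that fix, using the question's own targets array (only the upload call inside the loop changes; every other line stays the same):

```cpp
// inside the for loop: upload each face to its per-face target,
// not to GL_TEXTURE_2D, so all six faces of the cube map get defined
glTexImage2D(targets[i], 0, GL_RGB, width, height, 0,
             GL_RGB, GL_UNSIGNED_BYTE, image);
```

Once all six faces have matching size and format, the cube map is complete and glGenerateMipmap(GL_TEXTURE_CUBE_MAP) no longer raises GL_INVALID_OPERATION.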

glDeleteTextures causes segfault

I have this C++ console application that creates a window and initializes an OpenGL context.
IDE: Code::Blocks
Compiler: MinGW (32-bit)
OS: Windows 8.1, 64-bit
So when I compile I have a console and the window with the context. When I close the window first and then the console everything is okay.
However, if I close the console first, I get a segmentation fault from glDeleteTextures in the virtual destructor.
Here is how I initialize the texture:
texture::texture(const string& fileName)
{
    int width, height, numComponents;
    unsigned char* imageData = stbi_load(fileName.c_str(), &width, &height, &numComponents, 4);
    if (imageData == NULL)
        cout << "Could not open " << fileName << "." << endl;
    glGenTextures(1, &m_texture);
    glBindTexture(GL_TEXTURE_2D, m_texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData);
    glBindTexture(GL_TEXTURE, 0);
    stbi_image_free(imageData);
}

void texture::useTexture(unsigned int textureUnit)
{
    if (textureUnit < 0 || textureUnit > 32)
    {
        cout << "Texture unit not between 0 and 32. Setting it to 0..." << endl;
        textureUnit = 0;
    }
    glActiveTexture(GL_TEXTURE0 + textureUnit);
    glBindTexture(GL_TEXTURE_2D, m_texture);
}

texture::~texture()
{
    glDeleteTextures(1, &m_texture);
}
Keep in mind that I wrote the same code, compiled it with 64-bit MinGW, and used 64-bit GLFW, and it worked just fine... if that helps in any way.
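No answer is recorded here, but one hedged guess at the cause: closing the console first may tear down the process's GL context (or the windowing library) before this destructor runs, so glDeleteTextures executes with no context current, which is undefined and can crash. A sketch of a guard, assuming GLFW since the question mentions it (glfwGetCurrentContext is a real GLFW call):

```cpp
texture::~texture()
{
    // Assumption: only delete the GL object while a context is still current.
    // If the context was already destroyed, skip the call instead of crashing;
    // the driver frees all objects with the context anyway.
    if (glfwGetCurrentContext() != nullptr)
        glDeleteTextures(1, &m_texture);
}
```

A cleaner long-term fix is to make the destruction order explicit, so texture objects are destroyed before the window and context are.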

opengl texturing

I am trying to create a normal map in OpenGL that I can load into the shader and change dynamically, though currently I am stuck on how to create the texture.
I currently have this:
glActiveTexture(GL_TEXTURE7);
glGenTextures(1, &normals);
glBindTexture(GL_TEXTURE_2D, normals);
texels = new Vector3f*[256];
for (int i = 0; i < 256; ++i) {
    texels[i] = new Vector3f[256];
}
this->setup_normals();
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, 3, 256, 256, 0, GL_RGB, GL_FLOAT, texels);
...
void setup_normals() {
    for (int i = 0; i < 256; ++i) {
        for (int j = 0; j < 256; ++j) {
            texels[i][j][0] = 0.0f;
            texels[i][j][1] = 1.0f;
            texels[i][j][2] = 0.0f;
        }
    }
}
where Vector3f is: typedef float Vector3f[3];
and texels is: Vector3f** texels;
When I draw this texture to a screen quad using an orthographic matrix (which works for textures loaded from files), I get the result in the screenshot (image not included).
I am unsure why it does not appear fully green, and also what is causing the black streaks within it. Any help appreciated.
Your array needs to be contiguous, since glTexImage2D() doesn't take any sort of stride or row-mapping parameters:
texels = new Vector3f[256*256];