I have implemented Pixel Buffer Objects (PBOs) in my OpenGL application. However, I get error 1282 when I try to load a texture using 'glTexImage2D'. It's very strange, because the problem only occurs with textures of certain resolutions.
To get a better understanding of the problem, let's examine 3 textures with 3 different resolutions:
a) blue.jpg
Bpp: 24
Resolution: 259x469
b) green.jpg
Bpp: 24
Resolution: 410x489
c) red.jpg
Bpp: 24
Resolution: 640x480
Now let's examine the C++ code without PBO usage:
FIBITMAP *bitmap = FreeImage_Load(
FreeImage_GetFIFFromFilename(file.GetFullName().c_str()), file.GetFullName().c_str());
FIBITMAP *pImage = FreeImage_ConvertTo32Bits(bitmap);
char *pPixels = (char*)FreeImage_GetBits(bitmap);
uint32_t width = FreeImage_GetWidth(bitmap);
uint32_t height = FreeImage_GetHeight(bitmap);
uint32_t byteSize = width * height * (FreeImage_GetBPP(bitmap)/8); // 24 bits / 8 bits = 3 bytes
glGenTextures(1, &this->m_Handle);
glBindTexture(GL_TEXTURE_2D, this->m_Handle);
{
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
std::cout << "ERROR: " << glGetError() << std::endl;
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width,
height, 0, GL_BGR, GL_UNSIGNED_BYTE, pPixels);
std::cout << "ERROR: " << glGetError() << std::endl;
if (this->m_IsMipmap)
glGenerateMipmap(this->m_Target);
}
glBindTexture(GL_TEXTURE_2D, 0);
For the 3 textures the output is always the same (so the loading has been executed correctly):
$> ERROR: 0
$> ERROR: 0
And the graphical result is also correct:
a) Blue
b) Green
c) Red
Now let's examine the C++ code, this time using a PBO:
FIBITMAP *bitmap = FreeImage_Load(
FreeImage_GetFIFFromFilename(file.GetFullName().c_str()), file.GetFullName().c_str());
FIBITMAP *pImage = FreeImage_ConvertTo32Bits(bitmap);
char *pPixels = (char*)FreeImage_GetBits(bitmap);
uint32_t width = FreeImage_GetWidth(bitmap);
uint32_t height = FreeImage_GetHeight(bitmap);
uint32_t byteSize = width * height * (FreeImage_GetBPP(bitmap)/8);
uint32_t pboID;
glGenTextures(1, &this->m_Handle);
glBindTexture(GL_TEXTURE_2D, this->m_Handle);
{
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glGenBuffers(1, &pboID);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pboID);
{
unsigned int bufferSize = width * height * 3;
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glBufferData(GL_PIXEL_UNPACK_BUFFER, bufferSize, 0, GL_STATIC_DRAW);
glBufferSubData(GL_PIXEL_UNPACK_BUFFER, 0, bufferSize, pPixels);
std::cout << "ERROR: " << glGetError() << std::endl;
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width,
height, 0, GL_BGR, GL_UNSIGNED_BYTE, OFFSET_BUFFER(0));
std::cout << "ERROR: " << glGetError() << std::endl;
if (this->m_IsMipmap)
glGenerateMipmap(this->m_Target);
}
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
}
glBindTexture(GL_TEXTURE_2D, 0);
The output for blue.jpg (259x469) and green.jpg (410x489) is the following:
$> ERROR: 0
$> ERROR: 1282
The graphical output is of course the same for both:
But now the most interesting part: for the texture red.jpg (640x480) there is no error and the graphical output is correct:
So with the PBO method, the 1282 error seems to be related to the texture resolution!
The OpenGL documentation says the following about error 1282 (GL_INVALID_OPERATION) concerning PBOs:
GL_INVALID_OPERATION is generated if a non-zero buffer object name is bound to the GL_PIXEL_UNPACK_BUFFER target and the buffer object's data store is currently mapped.
GL_INVALID_OPERATION is generated if a non-zero buffer object name is bound to the GL_PIXEL_UNPACK_BUFFER target and the data would be unpacked from the buffer object such that the memory reads required would exceed the data store size.
GL_INVALID_OPERATION is generated if a non-zero buffer object name is bound to the GL_PIXEL_UNPACK_BUFFER target and data is not evenly divisible into the number of bytes needed to store in memory a datum indicated by type.
But I don't understand what is wrong with my code implementation!
I thought that maybe, when using a PBO, I'm only allowed to load textures whose dimensions are a multiple of 8... but I hope not!
UPDATE
I tried to add the line:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); //1: byte alignment
before the call of 'glTexImage2D'.
The error 1282 has disappeared but the display is not correct:
I'm really lost!
Can anyone help me?
It is obvious that the image data you loaded is padded so that each row is 4-byte aligned. This is what the GL expects by default, and it is most probably also what you used in your non-PBO case.
When you switched to PBOs, you ignored those padding bytes per row, so your buffer was too small and the GL detected the out-of-range access.
When you finally switched to a GL_UNPACK_ALIGNMENT of 1, there was no out-of-range access any more, and the error went away. But now you are lying about your data format. It is still padded, but you told the GL that it isn't. For the 640x480 image, the padding is zero bytes (as 640*3 is divisible by 4), but for the other two images, there are padding bytes at the end of each row.
The correct solution is to leave GL_UNPACK_ALIGNMENT at the default of 4 and fix the calculation of bufferSize. You need to find out how many bytes have to be added to each line so that the total number of bytes per line is divisible by 4 again (which means at most 3 bytes are added):
unsigned int padding = ( 4 - (width * 3) % 4 ) % 4;
Now, you can take these extra bytes into account, and get the final size of the buffer (and the image you have in memory):
unsigned int bufferSize = (width * 3 + padding) * height;
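In case it helps, here is a minimal sketch of the PBO path with the corrected size. It assumes pPixels comes straight from FreeImage (whose rows are padded to 4-byte boundaries, so FreeImage_GetPitch(bitmap) * height would give the same value) and that GL_UNPACK_ALIGNMENT stays at its default of 4:
unsigned int padding    = (4 - (width * 3) % 4) % 4;      // 0..3 extra bytes per row
unsigned int bufferSize = (width * 3 + padding) * height; // padded size of the whole image
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pboID);
glBufferData(GL_PIXEL_UNPACK_BUFFER, bufferSize, nullptr, GL_STATIC_DRAW);
glBufferSubData(GL_PIXEL_UNPACK_BUFFER, 0, bufferSize, pPixels);
// The GL now reads exactly bufferSize bytes from the PBO, matching the padded rows
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
GL_BGR, GL_UNSIGNED_BYTE, OFFSET_BUFFER(0));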
I had a similar problem where I got error 1282 and a black texture.
The third parameter of glTexImage2D (internalformat) is said to accept the values 1, 2, 3, 4, meaning the number of color components per pixel. But this suddenly stopped working for some reason. Replacing '4' with 'GL_RGBA' fixed the problem for me.
Hope this helps someone.
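For illustration, the change boils down to the following (width, height and pixels are placeholders here); the legacy 1/2/3/4 values are not accepted by core profile contexts:
// Legacy component count, rejected on core profiles:
// glTexImage2D(GL_TEXTURE_2D, 0, 4, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
// Symbolic internal format instead:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);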
I want to use a 3D texture as a big custom buffer, because a TEXTURE_BUFFER is limited to 128MB on most GPUs while 3D textures don't have such a limit. Unfortunately I also can't use GL_SHADER_STORAGE_BUFFER, because it is limited to the GLSL data types (float, uint, int) and I need byte-wise access.
I debugged the code and found that it throws error code 1282 => GL_INVALID_OPERATION when I try to fill the texture with data using glTexSubImage3D(). But I can't explain the error code. I checked all the cases in the reference,
but as far as I can tell, none of the listed reasons for GL_INVALID_OPERATION applies to my code.
// main code:
genSuperTex(1024);
fillSuperTex();
void genSuperTex(int inputSize) {
superBufferSize = inputSize; // saving the z size of the buffer...
GLuint gerror;
glGenTextures(1, &superBufferID);
glBindTexture(GL_TEXTURE_3D, superBufferID);
gerror = glGetError();
if (gerror) std::cout << "genSuperTex(): Texture generation error = " << gerror << std::endl; // no error here
// Setting texture parameters:
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
gerror = glGetError();
if (gerror) std::cout << "genSuperTex(): Texture param error = " << gerror << std::endl; // no error here
// Generating the storage for the texture:
// SUPERTEX_DIM = 128 for testing purposes
glTexStorage3D(GL_TEXTURE_3D, 1, GL_R8UI, SUPERTEX_DIM, SUPERTEX_DIM, inputSize);
gerror = glGetError();
if (gerror) std::cout << "genSuperTex(): glTexStorage3D = " << gerror << std::endl; // no error here
}
void fillSuperTex() {
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0); // Just out of paranoia
glBindTexture(GL_TEXTURE_3D, superBufferID); // It is still bound, but out of paranoia.
unsigned int index = 0;
// I have a tree of data nodes, which all want to insert their data into the super texture
root->insertToGLSuperBuffer(superBufferID, superBufferSize, index);
}
void TSDFMesh_SVT::insertToGLSuperBuffer(const GLuint aTexID, const unsigned int size, unsigned int & index) {
GLuint error;
error = glGetError();
if (error) std::cout << "TSDFMesh_SVT::insertToGLSuperBuffer(): prev Error = " << error << std::endl; // no error here, to ensure the error is not from a previous operation
// How many z-slices are needed for the data:
GLsizei local_size = meSegmentCount(svtMap->getSize(), SUPERTEX_SEG_SIZE); // = 2, during debugging
glTexSubImage3D(GL_TEXTURE_3D, 0, 0, 0, index, SUPERTEX_DIM, SUPERTEX_DIM, local_size, GL_RED, GL_UNSIGNED_BYTE, svtMap->getData()->DataPointer());
// SUPERTEX_DIM = 128
error = glGetError(); // <------------------------- Here I get: GL_INVALID_OPERATION
if (error) std::cout << "TSDFMesh_SVT::insertToGLSuperBuffer(): gl Error = " << error << std::endl;
}
I also tried using glTextureSubImage3D() instead of glTexSubImage3D(), with aTexID as the first parameter, but to no avail.
From the errors listed in the OpenGL reference:
GL_INVALID_OPERATION is generated by glTextureSubImage3D if texture is not the name of an existing texture object.
This cannot be the case, as I use glTexSubImage3D and not glTextureSubImage3D. Also, I bind the texture at the beginning of fillSuperTex(), and no glBindTexture() call is made during the traversal of the tree.
GL_INVALID_OPERATION is generated if the texture array has not been defined by a previous glTexImage3D or glTexStorage3D operation.
This seems to be the most likely cause. But as far as I can tell, I have done everything correctly when creating the texture storage: at least I don't get any errors there, and I used a breakpoint to check that genSuperTex() is actually called.
GL_INVALID_OPERATION is generated if type is one of GL_UNSIGNED_BYTE_3_3_2, GL_UNSIGNED_BYTE_2_3_3_REV, GL_UNSIGNED_SHORT_5_6_5, or GL_UNSIGNED_SHORT_5_6_5_REV and format is not GL_RGB.
GL_INVALID_OPERATION is generated if type is one of GL_UNSIGNED_SHORT_4_4_4_4, GL_UNSIGNED_SHORT_4_4_4_4_REV, GL_UNSIGNED_SHORT_5_5_5_1, GL_UNSIGNED_SHORT_1_5_5_5_REV, GL_UNSIGNED_INT_8_8_8_8, GL_UNSIGNED_INT_8_8_8_8_REV, GL_UNSIGNED_INT_10_10_10_2, or GL_UNSIGNED_INT_2_10_10_10_REV and format is neither GL_RGBA nor GL_BGRA.
GL_INVALID_OPERATION is generated if format is GL_STENCIL_INDEX and the base internal format is not GL_STENCIL_INDEX.
GL_INVALID_OPERATION is generated if a non-zero buffer object name is bound to the GL_PIXEL_UNPACK_BUFFER target and the buffer object's data store is currently mapped.
GL_INVALID_OPERATION is generated if a non-zero buffer object name is bound to the GL_PIXEL_UNPACK_BUFFER target and the data would be unpacked from the buffer object such that the memory reads required would exceed the data store size.
GL_INVALID_OPERATION is generated if a non-zero buffer object name is bound to the GL_PIXEL_UNPACK_BUFFER target and pixels is not evenly divisible into the number of bytes needed to store in memory a datum indicated by type.
In all these cases, the named conditions do not apply.
If anyone has an idea what the error here is, please let me know.
I found the problem:
I tried, somewhere else, to create a texture just as a test:
glGenTextures(1, &testtex);
glBindTexture(GL_TEXTURE_3D, testtex);
glTexImage3D(GL_TEXTURE_3D, 0, GL_R8UI, 64, 64, 64, 0, GL_RED, GL_UNSIGNED_BYTE, NULL);
and found out that whenever internalformat is GL_R8UI, I get the error. With GL_R8 or GL_RED everything works, but as soon as I change to an integer format (GL_R8I or GL_R8UI) I get the error, although these internal formats are listed as valid entries.
( https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/glTexImage3D.xhtml )
Is this a problem with my graphics card (NVIDIA) or does anybody else get that error?
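For what it's worth, this combination is usually rejected because an integer internal format (GL_R8UI) is paired with a normalized transfer format (GL_RED): integer internal formats require one of the *_INTEGER transfer formats. A minimal sketch of the variant that should pass, based on that assumption rather than on your exact setup:
// Integer internal formats need an integer pixel transfer format
glTexImage3D(GL_TEXTURE_3D, 0, GL_R8UI, 64, 64, 64, 0,
GL_RED_INTEGER, GL_UNSIGNED_BYTE, NULL);
// The same applies to the glTexSubImage3D() upload above: with GL_R8UI storage,
// pass GL_RED_INTEGER instead of GL_RED (and sample it with a usampler3D in GLSL).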
I'm trying to make an OpenGL game in C++, and I'm trying to implement a text system.
To do this I'm trying to use SDL_ttf.
I already used SDL_ttf in another project, but with a different API, so I wrote the same code; however, it does not seem to fill the pixel data of the surface.
Here is my code:
void Text2Texture::setText(const char * text, size_t fontIndex){
SDL_Color c = {255, 255, 0, 255};
SDL_Surface * surface;
surface = TTF_RenderUTF8_Blended(loadedFonts_[fontIndex], text, c);
if(surface == nullptr) {
fprintf(stderr, "Error TTF_RenderText\n");
return;
}
GLenum texture_format;
GLint colors = surface->format->BytesPerPixel;
if (colors == 4) { // alpha
if (surface->format->Rmask == 0x000000ff)
texture_format = GL_RGBA;
else
texture_format = GL_BGRA_EXT;
} else { // no alpha
if (surface->format->Rmask == 0x000000ff)
texture_format = GL_RGB;
else
texture_format = GL_BGR_EXT;
}
glBindTexture(GL_TEXTURE_2D, textureId_);
glTexImage2D(GL_TEXTURE_2D, 0, colors, surface->w, surface->h, 0, texture_format, GL_UNSIGNED_BYTE, surface->pixels);
///This line tells me the pixel data is 8 bit, which isn't good?
std::cout << "pixel size : " << sizeof(surface->pixels) << std::endl;
///This line gives me the correct result
fprintf(stderr, "texture size : %d %d\n", surface->w, surface->h);
glBindTexture(GL_TEXTURE_2D, 0);
}
As you can see in the comment, the pixels pointer in surface has a size of 8 bits, which is way too low for a texture. I don't know why it does that.
In the end, the texture data appears to be entirely filled with 0 (resulting in a black square with very basic shaders).
In this project I'm using GLFW to create an OpenGL context, so I'm not using SDL itself and I did not initialize it.
However, I did initialize SDL_ttf; here is everything I do before calling setText:
std::vector<TTF_Font *> Text2Texture::loadedFonts_;
void Text2Texture::init(){
if(TTF_Init() == -1) {
fprintf(stderr, "TTF_Init: %s\n", TTF_GetError());
}
}
int Text2Texture::loadFont(std::string const& fontPath){
loadedFonts_.emplace_back();
loadedFonts_.back() = TTF_OpenFont(fontPath.data(), 32);
if( loadedFonts_.back() == nullptr ) {
fprintf(stderr, "TTF_OpenFont: %s \n", TTF_GetError());
loadedFonts_.pop_back();
return -1;
}
return ((int)loadedFonts_.size() - 1);
}
///The constructor initializes the texture:
Text2Texture::Text2Texture(){
glGenTextures(1, &textureId_);
glBindTexture(GL_TEXTURE_2D, textureId_);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
}
My class has a static part; here is its body:
class Text2Texture {
public:
Text2Texture();
void setText(const char * text, size_t fontIndex = 0);
unsigned int getId() const;
//Static part
static void init();
static void quit();
static int loadFont(std::string const& fontPath);
private:
unsigned int textureId_;
//Static part
static std::vector<TTF_Font *> loadedFonts_;
};
I initialize SDL_ttf and load fonts with the static methods, then I create class instances to create the specific textures.
If you can find my mistake, I would be pleased to read your answer.
(By the way, I'm not really sure that using SDL_ttf is the right approach; if you have a better idea I would take it too, but I would like to solve this problem first.)
The format and type parameters of glTexImage2D specify how a single pixel is encoded.
When the font texture is created, each pixel is encoded in a single byte. This means your texture consists of a single color channel and each pixel takes 1 byte.
I'm very sure that colors = surface->format->BytesPerPixel is 1.
Note that it is sufficient to encode the glyph in one color channel, because a glyph's information fits in a single byte.
By default, OpenGL assumes that the start of each row of an image is aligned to 4 bytes, because the GL_UNPACK_ALIGNMENT parameter defaults to 4. Since the image has 1 (red) color channel and is tightly packed, the start of a row is possibly misaligned.
Change the GL_UNPACK_ALIGNMENT parameter to 1, before specifying the two-dimensional texture image (glTexImage2D).
Since the texture has only one (red) color channel, green and blue will be 0 and alpha will be 1 when the texture is looked up. But you can make the green, blue and even alpha channels read from the red color channel, too.
This can be achieved by setting the texture swizzle parameters GL_TEXTURE_SWIZZLE_G, GL_TEXTURE_SWIZZLE_B and GL_TEXTURE_SWIZZLE_A. See glTexParameter.
Further, note that the texture parameters are stored in the texture object. glTexParameter changes the texture object that is currently bound to the specified target of the current texture unit. So it is sufficient to set the parameters once, when the texture image is created.
In comparison, glPixelStore changes global state and may have to be reset to its default value after specifying the texture image (if later calls to glTexImage2D rely on it).
The specification of the 2-dimensional texture image and setting the parameters may look as follows:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, surface->w, surface->h, 0,
GL_RED, GL_UNSIGNED_BYTE, surface->pixels);
glPixelStorei(GL_UNPACK_ALIGNMENT, 4);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_G, GL_RED);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_B, GL_RED);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_A, GL_RED);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
I've got some problems while trying to load smaller textures for my pixel-art game using SOIL. This is the result while loading a 40 x 40 image:
But when I switch to 30 x 40:
I checked my code for problems when the width is smaller than the height, and for 40 x 50 everything is alright. I also checked the 30 x 40 image with Windows' image viewer, and it seems fine there too. The only thing that might influence the loader is how I use the coordinate axes to set the position, but that works correctly. This is the code for loading the texture:
glGenTextures(1, &actor.texture);
glBindTexture(GL_TEXTURE_2D, actor.texture);
unsigned char* image = SOIL_load_image(("App/Textures/" + name + ".png").c_str(), &width, &height, 0, SOIL_LOAD_RGB);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, image);
SOIL_free_image_data(image);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
When the image is loaded into a texture object, GL_UNPACK_ALIGNMENT has to be set to 1:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, image);
Note that by default the parameter is 4. This means the start of each line of the image is assumed to be aligned to a multiple of 4 bytes. Since the image data is tightly packed and each pixel has a size of 3 bytes, the alignment has to be changed.
When the size of the image is 40 x 50 then the size of a line in bytes is 120, which is divisible by 4.
But if the size of the image is 30 x 40, then the size of a line in bytes is 90, which is not divisible by 4.
The problem does not lie in the small size, but in the fact that 30 isn't divisible by 4 (30 = 2 * 3 * 5). The default pixel store setting of OpenGL assumes that rows are aligned to 4-byte boundaries. For the 40×40 image that condition happens to be fulfilled, because no matter what pixel format you use, there's a factor of 4 in the width.
The solution is to tell OpenGL that pixel rows start at a different n-byte boundary:
unsigned char* image = SOIL_load_image(…);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(…);
When I rasterize a font, my code gives me a single channel of visibility for a texture. Currently I just duplicate this out to 4 different channels and send that as a texture. This works, but I want to avoid unnecessary memory allocations and de-allocations on the CPU.
unsigned char *bitmap = new unsigned char[width*height]; //How this is populated is not the point.
bitmap now contains a 2D graphic.
It seems this guy also has the same problem: Opengl: Use single channel texture as alpha channel to display text
I do the same thing as a workaround for now: I just multiply the array size by 4 and copy the data into it 4 times.
unsigned char* colormap = new unsigned char[width * height * 4];
int offset = 0;
for (int d = 0; d < width * height;d++)
{
for (int i = 0;i < 4;i++)
{
colormap[offset++] = bitmap[d];
}
}
When I multiply it out, I use:
glTexParameteri(gltype, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(gltype, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(gltype, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, colormap);
And get:
Which is what I want.
When I use only the single channel:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexParameteri(gltype, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(gltype, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, width, height, 0, GL_RED, GL_UNSIGNED_BYTE, bitmap);
And Get:
It has no transparency, only red, etc., which makes it hard to colorize and so on later.
Instead of having to do what I feel are unnecessary allocations on the CPU side, I'd like to tell OpenGL: "Hey, you're getting just one channel. Multiply it out for all 4 color channels."
Is there a command for that?
In your shader, it's trivial enough to just broadcast the r component to all four channels:
vec4 vals = texture(tex, coords).rrrr;
If you don't want to modify your shader (perhaps because you need to use the same shader for 4-channel textures too), then you can apply a texture swizzle mask to the texture:
GLint swizzleMask[] = {GL_RED, GL_RED, GL_RED, GL_RED};
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, swizzleMask);
Anything that reads from this texture will then get the red component's value in all four channels.
I am attempting to load the following image:
as a texture for the Stanford Dragon. The result, however, is as follows:
I have read that other people have had issues with this due to either not binding the textures correctly or using the wrong number of components when loading a texture. I don't think I have either of those issues, as I both check the format of the image and bind the texture. I have managed to get other images to load correctly, so the issue seems specific to this image (I am not saying the image is corrupted, rather that something about it is slightly different from the other images I have tried).
The code I am using to initialize the texture is as follows:
//Main constructor
Texture::Texture(string file_path, GLuint t_target)
{
//Change the coordinate system of the image
stbi_set_flip_vertically_on_load(true);
int numComponents;
//Load the pixel data of the image
void *data = stbi_load(file_path.c_str(), &width, &height, &numComponents, 0);
if (data == nullptr)//Error check
{
cerr << "Error when loading texture from file: " + file_path << endl;
Log::record_log(
string(80, '!') +
"\nError when loading texture from file: " + file_path + "\n" +
string(80, '!')
);
exit(EXIT_FAILURE);
}
//Create the texture OpenGL object
target = t_target;
glGenTextures(1, &textureID);
glBindTexture(target, textureID);
//Name the texture
glObjectLabel(GL_TEXTURE, textureID, -1,
("\"" + extract_name(file_path) +"\"").c_str());
//Set the color format
color_format = numComponents == 3 ? GL_RGB : GL_RGBA;
glTexImage2D(target, 0, color_format, width, height, 0,
color_format, GL_UNSIGNED_BYTE, data);
//Set the texture parameters of the image
glTexParameteri(target, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(target, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(target, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(target, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
//free the memory
stbi_image_free(data);
//Create a debug notification event
char name[100];
glGetObjectLabel(GL_TEXTURE, textureID, 100, NULL, name);
string message = "Successfully created texture: " + string(name) +
". Bound to target: " + textureTargetEnumToString(target);
glDebugMessageInsert(GL_DEBUG_SOURCE_APPLICATION, GL_DEBUG_TYPE_OTHER, 0,
GL_DEBUG_SEVERITY_NOTIFICATION, message.size(), message.c_str());
}
A JPEG eh? Probably no alpha channel then. And 894 pixels wide isn't quite evenly divisible by 4.
Double-check if you're hitting the numComponents == 3 case and if so, make sure GL_UNPACK_ALIGNMENT is set to 1 (default 4) with glPixelStorei() before your glTexImage2D() call.
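A minimal sketch of that adjustment, reusing the names from the constructor above (restoring the default alignment afterwards is just a precaution):
//Tightly packed 3-component rows may not be 4-byte aligned, so relax the unpack alignment
if (numComponents == 3)
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(target, 0, color_format, width, height, 0,
    color_format, GL_UNSIGNED_BYTE, data);
if (numComponents == 3)
    glPixelStorei(GL_UNPACK_ALIGNMENT, 4); //back to the default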