I am attempting to load the following image:
As a texture for the Stanford Dragon. The result, however, is as follows:
I have read that other people have had issues like this from either not binding the texture correctly or using the wrong number of components when loading it. I don't think I have either of those problems, since I both check the image's format and bind the texture. I have managed to get other images to load correctly, so the issue seems specific to this image (I am not saying the image is corrupted, rather that something about it is slightly different from the other images I have tried).
The code I am using to initialize the texture is as follows:
//Main constructor
Texture::Texture(string file_path, GLuint t_target)
{
    //Change the coordinate system of the image
    stbi_set_flip_vertically_on_load(true);
    int numComponents;
    //Load the pixel data of the image
    void *data = stbi_load(file_path.c_str(), &width, &height, &numComponents, 0);
    if (data == nullptr) //Error check
    {
        cerr << "Error when loading texture from file: " + file_path << endl;
        Log::record_log(
            string(80, '!') +
            "\nError when loading texture from file: " + file_path + "\n" +
            string(80, '!')
        );
        exit(EXIT_FAILURE);
    }
    //Create the texture OpenGL object
    target = t_target;
    glGenTextures(1, &textureID);
    glBindTexture(target, textureID);
    //Name the texture
    glObjectLabel(GL_TEXTURE, textureID, -1,
        ("\"" + extract_name(file_path) + "\"").c_str());
    //Set the color format
    color_format = numComponents == 3 ? GL_RGB : GL_RGBA;
    glTexImage2D(target, 0, color_format, width, height, 0,
        color_format, GL_UNSIGNED_BYTE, data);
    //Set the texture parameters of the image
    glTexParameteri(target, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(target, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(target, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(target, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    //Free the memory
    stbi_image_free(data);
    //Create a debug notification event
    char name[100];
    glGetObjectLabel(GL_TEXTURE, textureID, 100, NULL, name);
    string message = "Successfully created texture: " + string(name) +
        ". Bound to target: " + textureTargetEnumToString(target);
    glDebugMessageInsert(GL_DEBUG_SOURCE_APPLICATION, GL_DEBUG_TYPE_OTHER, 0,
        GL_DEBUG_SEVERITY_NOTIFICATION, message.size(), message.c_str());
}
A JPEG, eh? Probably no alpha channel then. And 894 pixels wide isn't evenly divisible by 4.
Double-check whether you're hitting the numComponents == 3 case and, if so, make sure GL_UNPACK_ALIGNMENT is set to 1 (the default is 4) with glPixelStorei() before your glTexImage2D() call.
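To see why this particular image misbehaves, consider the stride arithmetic: with the default GL_UNPACK_ALIGNMENT of 4, OpenGL assumes every source row starts on a 4-byte boundary, but stb_image returns tightly packed rows. A small sketch (the function name is mine, not OpenGL API):

```cpp
#include <cassert>

// Bytes OpenGL expects per source row for a given GL_UNPACK_ALIGNMENT.
// stb_image hands back tightly packed rows of width * components bytes,
// so if these two numbers differ, every row after the first is read at
// the wrong offset and the texture shears.
int glRowStride(int width, int components, int alignment) {
    int tight = width * components;
    return ((tight + alignment - 1) / alignment) * alignment;
}
```

For an 894-wide RGB image the tight row is 894 * 3 = 2682 bytes, but with alignment 4 OpenGL steps 2684 bytes per row; with glPixelStorei(GL_UNPACK_ALIGNMENT, 1) both values agree. A 640-wide RGB image happens to be aligned either way, which is why some images "just work".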
I am using this code to load an FBX file (note: this is specific to FBX), but the textures fail to load successfully:
for (unsigned int i = 0; i < mat->GetTextureCount(type); i++) {
    aiString str;
    mat->GetTexture(type, i, &str);
    if (auto texture_inside = scene->GetEmbeddedTexture(str.C_Str())) {
        unsigned char *image_data = nullptr;
        int width, height, nrComponents;
        if (texture_inside->mHeight == 0) {
            image_data = stbi_load_from_memory(
                reinterpret_cast<unsigned char *>(texture_inside->pcData),
                texture_inside->mWidth, &width, &height, &nrComponents, 0);
        } else {
            image_data = stbi_load_from_memory(
                reinterpret_cast<unsigned char *>(texture_inside->pcData),
                texture_inside->mWidth * texture_inside->mHeight, &width, &height,
                &nrComponents, 0);
        }
        if (image_data) {
            GLenum format;
            if (nrComponents == 1)
                format = GL_RED;
            else if (nrComponents == 3)
                format = GL_RGB;
            else if (nrComponents == 4)
                format = GL_RGBA;
            unsigned int t_id;
            glGenTextures(1, &t_id);
            glBindTexture(GL_TEXTURE_2D, t_id);
            glTexImage2D(GL_TEXTURE_2D, 0, format, texture_inside->mWidth,
                         texture_inside->mHeight, 0, format, GL_UNSIGNED_BYTE,
                         image_data);
            glGenerateMipmap(GL_TEXTURE_2D);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
                            GL_LINEAR_MIPMAP_LINEAR);
            glBindTexture(GL_TEXTURE_2D, 0);
            stbi_image_free(image_data); // memory from stbi_load_* must be freed with stbi_image_free, not delete
            AnimTexture texture;
            texture.id = t_id;
            texture.type_name = typeName;
            texture.file_path = str.C_Str();
            textures.push_back(texture);
        }
        LOG(INFO) << "loading texture from embedded: " << str.C_Str();
    }
}
Then I get an error message like this:
UNSUPPORTED (log once): POSSIBLE ISSUE: unit 0 GLD_TEXTURE_INDEX_2D is unloadable and bound to sampler type (Float) - using zero texture because texture unloadable
My questions are:
How do I load FBX embedded textures in a correct, workable way?
What did I miss here that could have caused the errors above?
Currently I only get a wrong, dark black texture.
This is a common question in the assimp project. You can find an example of how to load embedded textures here: How to deal with embedded textures
In short:
Get the data from the embedded texture
Decode it with an image converter
Put it into your texture on the GPU
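One detail worth flagging in the code above: Assimp stores an uncompressed embedded texture (mHeight != 0) as raw aiTexel data, 4 bytes per texel in BGRA order, not as an encoded image file, so stbi_load_from_memory is not the right tool for that branch, and the byte count differs from the mWidth * mHeight the question passes. A sketch of the size logic (the function name is mine):

```cpp
#include <cassert>
#include <cstddef>

// Size in bytes of an Assimp embedded texture's data block.
// Compressed (mHeight == 0): mWidth is the byte length of the encoded
// file (e.g. PNG/JPEG), to be decoded with stbi_load_from_memory.
// Uncompressed: mWidth * mHeight texels of 4 bytes each (aiTexel,
// BGRA8888), which can be uploaded directly with GL_BGRA.
std::size_t embeddedTextureByteSize(unsigned mWidth, unsigned mHeight) {
    if (mHeight == 0)
        return mWidth;                        // encoded file length
    return std::size_t(mWidth) * mHeight * 4; // raw BGRA texels
}
```

So the uncompressed branch should either upload pcData directly (format GL_BGRA, type GL_UNSIGNED_BYTE) or, if you insist on stbi, pass the full 4-bytes-per-texel size, though stbi will still fail on raw texel data since it is not an encoded image.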
I am creating a color-picker OpenGL application for images with ImGui. I have managed to load an image into a texture with glTexImage2D and display it using ImGui::Image().
Now I would like to implement a method that determines the color of the pixel under a left mouse click.
Here is the method where I load the texture and then attach it to a framebuffer:
bool LoadTextureFromFile(const char *filename, GLuint *out_texture, int *out_width, int *out_height, ImVec2 mousePosition) {
    // Reading the image into a GL_TEXTURE_2D
    int image_width = 0;
    int image_height = 0;
    unsigned char *image_data = stbi_load(filename, &image_width, &image_height, NULL, 4);
    if (image_data == NULL)
        return false;
    GLuint image_texture;
    glGenTextures(1, &image_texture);
    glBindTexture(GL_TEXTURE_2D, image_texture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, image_width, image_height, 0, GL_RGBA, GL_UNSIGNED_BYTE, image_data);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    stbi_image_free(image_data);
    glBindTexture(GL_TEXTURE_2D, 0);
    *out_texture = image_texture;
    *out_width = image_width;
    *out_height = image_height;
    // Assigning texture to Frame Buffer
    unsigned int fbo;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, image_texture, 0);
    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE) {
        std::cout << "Frame buffer is done." << std::endl;
    }
    return true;
}
Unfortunately, the above code results in a completely blank screen. I guess there is something I missed when setting up the framebuffer.
Here is the method, where I would like to sample the framebuffer texture by using the mouse coordinates:
void readPixelFromImage(ImVec2 mousePosition) {
    unsigned char pixels[4];
    glReadBuffer(GL_COLOR_ATTACHMENT0);
    glReadPixels(GLint(mousePosition.x), GLint(mousePosition.y), 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    std::cout << "r: " << static_cast<int>(pixels[0]) << '\n';
    std::cout << "g: " << static_cast<int>(pixels[1]) << '\n';
    std::cout << "b: " << static_cast<int>(pixels[2]) << '\n';
    std::cout << "a: " << static_cast<int>(pixels[3]) << '\n' << std::endl;
}
Any help is appreciated!
There is indeed something missing in your code:
You set up a new framebuffer that contains just a single texture attachment. That is fine, and so glCheckFramebufferStatus returns GL_FRAMEBUFFER_COMPLETE. But nothing is ever rendered from this framebuffer to the screen. If you want your image rendered on screen, you must draw into the default framebuffer, which is created along with your GL context.
However, the documentation says: "Default framebuffers cannot change their buffer attachments, [...]" (https://www.khronos.org/opengl/wiki/Framebuffer), so attaching a texture or renderbuffer to the default framebuffer is not possible. You could, however, create a new framebuffer as you did, render to it, and finally render (or blit) the result to your default framebuffer. A good starting point for this technique is https://learnopengl.com/Advanced-Lighting/Deferred-Shading
Moreover, if you only intend to read rendered values back from the GPU, it is more efficient to use a renderbuffer instead of a texture. You can even have multiple renderbuffers attached to your framebuffer (as in deferred shading). For example, you could use a second renderbuffer to render an object/instance id (so that renderbuffer would be single-channel integer), while your first renderbuffer is used for normal drawing. Reading the second renderbuffer with glReadPixels tells you directly which instance was drawn at, e.g., the mouse position. This makes mouse picking very efficient.
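One more detail that often produces wrong or black picks: ImGui (like most window systems) reports the mouse position with the origin at the top-left, while glReadPixels addresses pixels from the bottom-left, so the y coordinate must be flipped before sampling. A minimal helper (the name is mine):

```cpp
#include <cassert>

// Convert a top-left-origin mouse y coordinate (ImGui/window system)
// to the bottom-left-origin y coordinate that glReadPixels expects.
// Without this, the picked color comes from the vertically mirrored row.
int mouseYToGlY(int mouseY, int windowHeight) {
    return windowHeight - 1 - mouseY;
}
```

Usage would be glReadPixels(GLint(mousePosition.x), mouseYToGlY(GLint(mousePosition.y), windowHeight), 1, 1, ...), where windowHeight is the height of the framebuffer being read.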
I'm trying to make an OpenGL game in C++ and I'm trying to implement a text system using SDL_ttf.
I already used SDL_ttf in another project with a different API, so I wrote the same code, but here it fails to fill the pixel data of the surface.
Here is my code :
void Text2Texture::setText(const char *text, size_t fontIndex) {
    SDL_Color c = {255, 255, 0, 255};
    SDL_Surface *surface;
    surface = TTF_RenderUTF8_Blended(loadedFonts_[fontIndex], text, c);
    if (surface == nullptr) {
        fprintf(stderr, "Error TTF_RenderText\n");
        return;
    }
    GLenum texture_format;
    GLint colors = surface->format->BytesPerPixel;
    if (colors == 4) { // alpha
        if (surface->format->Rmask == 0x000000ff)
            texture_format = GL_RGBA;
        else
            texture_format = GL_BGRA_EXT;
    } else { // no alpha
        if (surface->format->Rmask == 0x000000ff)
            texture_format = GL_RGB;
        else
            texture_format = GL_BGR_EXT;
    }
    glBindTexture(GL_TEXTURE_2D, textureId_);
    glTexImage2D(GL_TEXTURE_2D, 0, colors, surface->w, surface->h, 0, texture_format, GL_UNSIGNED_BYTE, surface->pixels);
    ///This line tells me the pixel data is 8 bit, which isn't good?
    std::cout << "pixel size : " << sizeof(surface->pixels) << std::endl;
    ///This line gives me the correct result
    fprintf(stderr, "texture size : %d %d\n", surface->w, surface->h);
    glBindTexture(GL_TEXTURE_2D, 0);
}
As you can see in the comment, the pixels pointer in surface reports a size of 8, which is way too low for a texture. I don't know why it does that.
In the end, the texture data appears to be filled entirely with 0 (resulting in a black square with very basic shaders).
In this project I'm using GLFW to create an OpenGL context, so I'm not using SDL itself and did not initialize it.
However, I did initialize sdl_ttf, here is all I did before calling setText :
std::vector<TTF_Font *> Text2Texture::loadedFonts_;

void Text2Texture::init() {
    if (TTF_Init() == -1) {
        fprintf(stderr, "TTF_Init: %s\n", TTF_GetError());
    }
}

int Text2Texture::loadFont(std::string const& fontPath) {
    loadedFonts_.emplace_back();
    loadedFonts_.back() = TTF_OpenFont(fontPath.data(), 32);
    if (loadedFonts_.back() == nullptr) {
        fprintf(stderr, "TTF_OpenFont: %s \n", TTF_GetError());
        loadedFonts_.pop_back();
        return -1;
    }
    return ((int)loadedFonts_.size() - 1);
}

///The constructor initializes the texture:
Text2Texture::Text2Texture() {
    glGenTextures(1, &textureId_);
    glBindTexture(GL_TEXTURE_2D, textureId_);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
}
My class has a static part; here is its body:
class Text2Texture {
public:
    Text2Texture();
    void setText(const char *text, size_t fontIndex = 0);
    unsigned int getId() const;
    //Static part
    static void init();
    static void quit();
    static int loadFont(std::string const& fontPath);
private:
    unsigned int textureId_;
    //Static part
    static std::vector<TTF_Font *> loadedFonts_;
};
I initialize SDL_ttf and load fonts with the static methods, then create class instances to create specific textures.
If you can find my mistake I would be pleased to read your answer.
(By the way, I'm not really sure that using SDL_ttf is the right approach; if you have a better idea I would take it too, but I would like to solve this problem first.)
The format and type parameters of glTexImage2D specify how a single pixel is encoded.
When the font texture is created, each pixel is encoded in a single byte. This means your texture consists of a single color channel and each pixel takes 1 byte.
I'm fairly sure that colors = surface->format->BytesPerPixel is 1.
Note that it is sufficient to encode the glyph in one color channel, because a glyph's information fits in a single byte.
By default, OpenGL assumes that the start of each row of an image is aligned to 4 bytes, because the GL_UNPACK_ALIGNMENT parameter defaults to 4. Since the image has one (red) color channel and is tightly packed, the start of a row may be misaligned.
Change the GL_UNPACK_ALIGNMENT parameter to 1 before specifying the two-dimensional texture image (glTexImage2D).
Since the texture has only one (red) color channel, the green and blue values will be 0 and the alpha channel will be 1 when the texture is sampled. You can, however, have the green, blue, and even alpha channels read from the red channel too.
This can be achieved by setting the texture swizzle parameters GL_TEXTURE_SWIZZLE_G, GL_TEXTURE_SWIZZLE_B, and GL_TEXTURE_SWIZZLE_A respectively. See glTexParameter.
Further, note that texture parameters are stored in the texture object. glTexParameter changes the texture object currently bound to the specified target of the current texture unit, so it is sufficient to set the parameters once when the texture image is created.
In comparison, glPixelStore changes global state and may have to be reset to its default value after specifying the texture image (if later calls to glTexImage2D rely on it).
The specification of the 2-dimensional texture image and setting the parameters may look as follows:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, surface->w, surface->h, 0,
GL_RED, GL_UNSIGNED_BYTE, surface->pixels);
glPixelStorei(GL_UNPACK_ALIGNMENT, 4);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_G, GL_RED);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_B, GL_RED);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_A, GL_RED);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
I can't find my mistake; why is the text not being created? When using the texture I get nothing, or a black background with colored dots. Please help.
GLuint texture;
SDL_Surface *text = NULL;
TTF_Font *font = NULL;
SDL_Color color = {0, 0, 0};
font = TTF_OpenFont("../test.ttf", 20);
text = TTF_RenderText_Solid(font, "Hello, SDL !!!", color);
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, text->w, text->h, 0, GL_RGB, GL_UNSIGNED_BYTE, text->pixels);
SDL_FreeSurface(text);
One thing you could add is to specify texture filters, e.g.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
A few things to check first:
Is the font loaded properly? Check whether font == NULL; maybe your font path is wrong.
Is the shader (if you use one) set up properly?
My guess is that you set the wrong pixel format type in glTexImage2D, causing random colored dots to appear on your texture.
Below is my code that loads an image via SDL_image for OpenGL use. I think it would be a good start for figuring out which step you missed or forgot.
BTW, this code is not perfect. There are more pixel format types than the four handled here (like indexed color), and I only deal with some of them.
/*
* object_, originalWidth_ and originalHeight_ are private variables in
* this class, don't panic.
*/
void
Texture::Load(string filePath, GLint minMagFilter, GLint wrapMode)
{
    SDL_Surface* image;
    GLenum textureFormat;
    GLint bpp; //Bytes per pixel

    /* Load image file */
    image = IMG_Load(filePath.c_str());
    if (image == nullptr) {
        string msg("IMG error: ");
        msg += IMG_GetError();
        throw runtime_error(msg.c_str());
    }

    /* Find out pixel format type */
    bpp = image->format->BytesPerPixel;
    if (bpp == 4) {
        if (image->format->Rmask == 0x000000ff)
            textureFormat = GL_RGBA;
        else
            textureFormat = GL_BGRA;
    } else if (bpp == 3) {
        if (image->format->Rmask == 0x000000ff)
            textureFormat = GL_RGB;
        else
            textureFormat = GL_BGR;
    } else {
        string msg("IMG error: Unknown pixel format, bpp = ");
        msg += to_string(bpp); // append as text, not as a char code
        throw runtime_error(msg.c_str());
    }

    /* Store width and height */
    originalWidth_ = image->w;
    originalHeight_ = image->h;

    /* Make OpenGL texture */
    glEnable(GL_TEXTURE_2D);
    glGenTextures(1, &object_);
    glBindTexture(GL_TEXTURE_2D, object_);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, minMagFilter);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, minMagFilter);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, wrapMode);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, wrapMode);
    glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
    glTexImage2D(
        GL_TEXTURE_2D,    // texture type
        0,                // level
        bpp,              // internal format (legacy: component count)
        image->w,         // width
        image->h,         // height
        0,                // border
        textureFormat,    // format of the pixel data
        GL_UNSIGNED_BYTE, // data type
        image->pixels     // pointer to data
    );

    /* Clean this mess up */
    glBindTexture(GL_TEXTURE_2D, 0);
    glDisable(GL_TEXTURE_2D);
    SDL_FreeSurface(image);
}
For more information, check out the SDL wiki or dig into its source code to fully understand the architecture of SDL_Surface.
I have implemented Pixel Buffer Objects (PBOs) in my OpenGL application. However, I get error 1282 when I try to load a texture using glTexImage2D. It's very strange, because the problem comes from textures with specific resolutions.
To have a better understanding of my problem let's examine 3 textures with 3 different resolutions:
a) blue.jpg
Bpp: 24
Resolution: 259x469
b) green.jpg
Bpp: 24
Resolution: 410x489
c) red.jpg
Bpp: 24
Resolution: 640x480
Now let's examine the C++ code without PBO usage:
FIBITMAP *bitmap = FreeImage_Load(
    FreeImage_GetFIFFromFilename(file.GetFullName().c_str()), file.GetFullName().c_str());
FIBITMAP *pImage = FreeImage_ConvertTo32Bits(bitmap);
char *pPixels = (char*)FreeImage_GetBits(bitmap);
uint32_t width = FreeImage_GetWidth(bitmap);
uint32_t height = FreeImage_GetHeight(bitmap);
uint32_t byteSize = width * height * (FreeImage_GetBPP(bitmap)/8); //24 bits / 8 bits = 3 bytes

glGenTextures(1, &this->m_Handle);
glBindTexture(GL_TEXTURE_2D, this->m_Handle);
{
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    std::cout << "ERROR: " << glGetError() << std::endl;
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width,
                 height, 0, GL_BGR, GL_UNSIGNED_BYTE, pPixels);
    std::cout << "ERROR: " << glGetError() << std::endl;

    if (this->m_IsMipmap)
        glGenerateMipmap(this->m_Target);
}
glBindTexture(GL_TEXTURE_2D, 0);
For all 3 textures the output is always the same (so the loading executed correctly):
$> ERROR: 0
$> ERROR: 0
And the graphical result is also correct:
a) Blue
b) Green
c) Red
Now let's examine the C++ code using this time PBO:
FIBITMAP *bitmap = FreeImage_Load(
    FreeImage_GetFIFFromFilename(file.GetFullName().c_str()), file.GetFullName().c_str());
FIBITMAP *pImage = FreeImage_ConvertTo32Bits(bitmap);
char *pPixels = (char*)FreeImage_GetBits(bitmap);
uint32_t width = FreeImage_GetWidth(bitmap);
uint32_t height = FreeImage_GetHeight(bitmap);
uint32_t byteSize = width * height * (FreeImage_GetBPP(bitmap)/8);
uint32_t pboID;

glGenTextures(1, &this->m_Handle);
glBindTexture(GL_TEXTURE_2D, this->m_Handle);
{
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    glGenBuffers(1, &pboID);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pboID);
    {
        unsigned int bufferSize = width * height * 3;
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
        glBufferData(GL_PIXEL_UNPACK_BUFFER, bufferSize, 0, GL_STATIC_DRAW);
        glBufferSubData(GL_PIXEL_UNPACK_BUFFER, 0, bufferSize, pPixels);

        std::cout << "ERROR: " << glGetError() << std::endl;
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width,
                     height, 0, GL_BGR, GL_UNSIGNED_BYTE, OFFSET_BUFFER(0));
        std::cout << "ERROR: " << glGetError() << std::endl;

        if (this->m_IsMipmap)
            glGenerateMipmap(this->m_Target);
    }
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
}
glBindTexture(GL_TEXTURE_2D, 0);
The output for blue.jpg (259x469) and green.jpg (410x489) is the following:
$> ERROR: 0
$> ERROR: 1282
The graphical output is, of course, the same in both cases:
But now the most interesting part: for the texture red.jpg (640x480) there is no error and the graphical output is correct:
So using the PBO method, the 1282 error seems to be related to the texture resolution!
The OpenGL documentation says the following about error 1282 (GL_INVALID_OPERATION) in the context of PBOs:
GL_INVALID_OPERATION is generated if a non-zero buffer object name is bound to the GL_PIXEL_UNPACK_BUFFER target and the buffer object's data store is currently mapped.
GL_INVALID_OPERATION is generated if a non-zero buffer object name is bound to the GL_PIXEL_UNPACK_BUFFER target and the data would be unpacked from the buffer object such that the memory reads required would exceed the data store size.
GL_INVALID_OPERATION is generated if a non-zero buffer object name is bound to the GL_PIXEL_UNPACK_BUFFER target and data is not evenly divisible into the number of bytes needed to store in memory a datum indicated by type.
But I don't understand what is wrong with my code implementation!
I thought that maybe when using PBOs I'm only allowed to load textures whose resolution is a multiple of 8... but I hope not!
UPDATE
I tried to add the line:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); //1: byte alignment
before the call of 'glTexImage2D'.
The error 1282 has disappeared but the display is not correct:
I'm really lost!
Does anyone can help me?
It is obvious that the image data you loaded is padded to a 4-byte alignment for each row. This is what the GL expects by default, and most probably also what your non-PBO case used.
When you switched to PBOs, you ignored those padding bytes per row, so your buffer was too small and the GL detected the out-of-range access.
When you finally switched to a GL_UNPACK_ALIGNMENT of 1, there was no out-of-range access any more, and the error went away. But you are now lying about your data format: it is still padded, but you told the GL that it isn't. For the 640x480 image the padding is zero bytes (as 640*3 is divisible by 4), but for the other two images there are padding bytes at the end of each row.
The correct solution is to leave GL_UNPACK_ALIGNMENT at its default of 4 and fix the calculation of bufferSize. You need to find out how many bytes must be added to each line so that the total bytes per line are divisible by 4 again (which means at most 3 bytes are added):
unsigned int padding = ( 4 - (width * 3) % 4 ) % 4;
Now, you can take these extra bytes into account, and get the final size of the buffer (and the image you have in memory):
unsigned int bufferSize = (width * 3 + padding) * height;
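Putting the two formulas together, here is a sketch of the corrected size computation applied to the three example images (the function name is mine):

```cpp
#include <cassert>

// Buffer size for 24-bit image data whose rows are padded to 4 bytes,
// matching the default GL_UNPACK_ALIGNMENT of 4 (FreeImage pads its
// scanlines the same way).
unsigned paddedBufferSize(unsigned width, unsigned height) {
    unsigned padding = (4 - (width * 3) % 4) % 4; // 0..3 extra bytes per row
    return (width * 3 + padding) * height;
}
```

For blue.jpg (259x469) each row is 780 bytes instead of the tight 777; for green.jpg (410x489) it is 1232 instead of 1230; for red.jpg (640x480) the padding is zero, which is exactly why only that image worked with the tight width * height * 3 size.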
I had a similar problem where I got error 1282 and a black texture.
The third parameter of glTexImage2D was once allowed to be 1, 2, 3, or 4, meaning the number of components per pixel, but that form is legacy OpenGL and is invalid in core profiles, so it can suddenly stop working. Replacing '4' with 'GL_RGBA' fixed the problem for me.
Hope this helps someone.