I want to use a 3D texture as a big custom buffer, because a buffer texture (GL_TEXTURE_BUFFER) is limited to 128 MB on most GPUs, while 3D textures have no such limit. Unfortunately I also can't use GL_SHADER_STORAGE_BUFFER, because it is limited to the GLSL data types (float, uint, int) and I need byte-wise access.
I debugged the code and found that it throws error code 1282 (GL_INVALID_OPERATION) when I try to fill the texture with glTexSubImage3D(), but I can't explain the error. I checked all the cases in the reference, and as far as I can tell none of the documented reasons for GL_INVALID_OPERATION applies here.
// main code:
genSuperTex(1024);
fillSuperTex();
void genSuperTex(int inputSize) {
    superBufferSize = inputSize; // saving the z size of the buffer...
    GLuint gerror;
    glGenTextures(1, &superBufferID);
    glBindTexture(GL_TEXTURE_3D, superBufferID);
    gerror = glGetError();
    if (gerror) std::cout << "genSuperTex(): Texture generation error = " << gerror << std::endl; // no error here
    // Setting texture parameters:
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
    gerror = glGetError();
    if (gerror) std::cout << "genSuperTex(): Texture param error = " << gerror << std::endl; // no error here
    // Generating the storage for the texture:
    // SUPERTEX_DIM = 128 for testing purposes
    glTexStorage3D(GL_TEXTURE_3D, 1, GL_R8UI, SUPERTEX_DIM, SUPERTEX_DIM, inputSize);
    gerror = glGetError();
    if (gerror) std::cout << "genSuperTex(): glTexStorage3D = " << gerror << std::endl; // no error here
}
void fillSuperTex() {
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0); // Just out of paranoia
    glBindTexture(GL_TEXTURE_3D, superBufferID); // It is still bound, but out of paranoia.
    unsigned int index = 0;
    // I have a tree of data nodes, which all want to insert their data into the super texture
    root->insertToGLSuperBuffer(superBufferID, superBufferSize, index);
}
void TSDFMesh_SVT::insertToGLSuperBuffer(const GLuint aTexID, const unsigned int size, unsigned int & index) {
    GLuint error;
    error = glGetError();
    if (error) std::cout << "TSDFMesh_SVT::insertToGLSuperBuffer(): prev Error = " << error << std::endl; // no error here, to ensure the error is not from a previous operation
    // How many z-slices are needed for the data:
    GLsizei local_size = meSegmentCount(svtMap->getSize(), SUPERTEX_SEG_SIZE); // = 2, during debugging
    // SUPERTEX_DIM = 128
    glTexSubImage3D(GL_TEXTURE_3D, 0, 0, 0, index, SUPERTEX_DIM, SUPERTEX_DIM, local_size, GL_RED, GL_UNSIGNED_BYTE, svtMap->getData()->DataPointer());
    error = glGetError(); // <------------------------- Here I get: GL_INVALID_OPERATION
    if (error) std::cout << "TSDFMesh_SVT::insertToGLSuperBuffer(): gl Error = " << error << std::endl;
}
I also tried using glTextureSubImage3D() (with aTexID as the first parameter) instead of glTexSubImage3D(), but to no avail.
From the errors listed in the OpenGL reference:
GL_INVALID_OPERATION is generated by glTextureSubImage3D if texture is not the name of an existing texture object.
This cannot be the case, as I use glTexSubImage3D and not glTextureSubImage3D. Also, I bind the texture at the beginning of fillSuperTex(), and no glBindTexture() call is made during the traversal of the tree.
GL_INVALID_OPERATION is generated if the texture array has not been defined by a previous glTexImage3D or glTexStorage3D operation.
This seems to me the most likely candidate. But as far as I can tell I have created the storage correctly; at least I don't get any errors when it is created, and I used a breakpoint to check that genSuperTex() is actually called.
GL_INVALID_OPERATION is generated if type is one of GL_UNSIGNED_BYTE_3_3_2, GL_UNSIGNED_BYTE_2_3_3_REV, GL_UNSIGNED_SHORT_5_6_5, or GL_UNSIGNED_SHORT_5_6_5_REV and format is not GL_RGB.
GL_INVALID_OPERATION is generated if type is one of GL_UNSIGNED_SHORT_4_4_4_4, GL_UNSIGNED_SHORT_4_4_4_4_REV, GL_UNSIGNED_SHORT_5_5_5_1, GL_UNSIGNED_SHORT_1_5_5_5_REV, GL_UNSIGNED_INT_8_8_8_8, GL_UNSIGNED_INT_8_8_8_8_REV, GL_UNSIGNED_INT_10_10_10_2, or GL_UNSIGNED_INT_2_10_10_10_REV and format is neither GL_RGBA nor GL_BGRA.
GL_INVALID_OPERATION is generated if format is GL_STENCIL_INDEX and the base internal format is not GL_STENCIL_INDEX.
GL_INVALID_OPERATION is generated if a non-zero buffer object name is bound to the GL_PIXEL_UNPACK_BUFFER target and the buffer object's data store is currently mapped.
GL_INVALID_OPERATION is generated if a non-zero buffer object name is bound to the GL_PIXEL_UNPACK_BUFFER target and the data would be unpacked from the buffer object such that the memory reads required would exceed the data store size.
GL_INVALID_OPERATION is generated if a non-zero buffer object name is bound to the GL_PIXEL_UNPACK_BUFFER target and pixels is not evenly divisible into the number of bytes needed to store in memory a datum indicated by type.
In all these cases, the named conditions do not apply.
If anyone has an idea what the error here is, please let me know.
I found the problem:
Elsewhere, I tried to create a texture just as a test:
glGenTextures(1, &testtex);
glBindTexture(GL_TEXTURE_3D, testtex);
glTexImage3D(GL_TEXTURE_3D, 0, GL_R8UI, 64, 64, 64, 0, GL_RED, GL_UNSIGNED_BYTE, NULL);
and I found out that I get the error whenever the internal format is an integer format: with GL_R8 or GL_RED everything works, but with GL_R8I or GL_R8UI I get the error, although these internal formats are listed as valid entries.
( https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/glTexImage3D.xhtml )
Is this a problem with my graphics card (NVIDIA) or does anybody else get that error?
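This matches the spec rather than an NVIDIA quirk: with an integer internal format such as GL_R8UI, the pixel transfer format must be the matching integer variant, GL_RED_INTEGER; combining an integer internal format with GL_RED generates GL_INVALID_OPERATION. A minimal sketch of the fixed test call:

glGenTextures(1, &testtex);
glBindTexture(GL_TEXTURE_3D, testtex);
// Integer internal formats require *_INTEGER pixel transfer formats:
glTexImage3D(GL_TEXTURE_3D, 0, GL_R8UI, 64, 64, 64, 0, GL_RED_INTEGER, GL_UNSIGNED_BYTE, NULL);

The same applies to the glTexSubImage3D call in insertToGLSuperBuffer above: pass GL_RED_INTEGER instead of GL_RED.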
Related
I am attempting to load the following image:
As a texture for the Stanford dragon. The result, however, is as follows:
I have read that other people have had issues with this due to either not binding the textures correctly or using the wrong number of components when loading a texture. I don't think I have either of those issues, as I am both checking the format of the image and binding the texture. I have managed to get other images to load correctly, so there seems to be an issue specific to this image (I am not saying the image is corrupted, rather that something about it is slightly different from the other images I have tried).
The code I am using to initialize the texture is as follows:
//Main constructor
Texture::Texture(string file_path, GLuint t_target)
{
    //Change the coordinate system of the image
    stbi_set_flip_vertically_on_load(true);
    int numComponents;
    //Load the pixel data of the image
    void *data = stbi_load(file_path.c_str(), &width, &height, &numComponents, 0);
    if (data == nullptr)//Error check
    {
        cerr << "Error when loading texture from file: " + file_path << endl;
        Log::record_log(
            string(80, '!') +
            "\nError when loading texture from file: " + file_path + "\n" +
            string(80, '!')
        );
        exit(EXIT_FAILURE);
    }
    //Create the texture OpenGL object
    target = t_target;
    glGenTextures(1, &textureID);
    glBindTexture(target, textureID);
    //Name the texture
    glObjectLabel(GL_TEXTURE, textureID, -1,
        ("\"" + extract_name(file_path) + "\"").c_str());
    //Set the color format
    color_format = numComponents == 3 ? GL_RGB : GL_RGBA;
    glTexImage2D(target, 0, color_format, width, height, 0,
        color_format, GL_UNSIGNED_BYTE, data);
    //Set the texture parameters of the image
    glTexParameteri(target, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(target, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(target, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(target, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    //free the memory
    stbi_image_free(data);
    //Create a debug notification event
    char name[100];
    glGetObjectLabel(GL_TEXTURE, textureID, 100, NULL, name);
    string message = "Successfully created texture: " + string(name) +
        ". Bound to target: " + textureTargetEnumToString(target);
    glDebugMessageInsert(GL_DEBUG_SOURCE_APPLICATION, GL_DEBUG_TYPE_OTHER, 0,
        GL_DEBUG_SEVERITY_NOTIFICATION, message.size(), message.c_str());
}
A JPEG, eh? Probably no alpha channel then. And at 894 pixels wide, a row of RGB data is 894 × 3 = 2682 bytes, which isn't evenly divisible by 4.
Double-check if you're hitting the numComponents == 3 case and if so, make sure GL_UNPACK_ALIGNMENT is set to 1 (default 4) with glPixelStorei() before your glTexImage2D() call.
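A minimal sketch of that suggestion, using the names from the constructor above:

//Rows of tightly packed RGB data are not necessarily 4-byte aligned:
if (numComponents == 3)
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(target, 0, color_format, width, height, 0,
    color_format, GL_UNSIGNED_BYTE, data);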
I can load a texture just fine in SOIL/OpenGL normally. No errors, everything works fine:
// this is inside my texture loading code in my texture class
// that i normally use for loading textures
image = SOIL_load_OGL_texture
(
file,
SOIL_LOAD_AUTO,
SOIL_CREATE_NEW_ID,
NULL
);
However, using that same code and calling it from an std::thread, at the line image = SOIL_load_OGL_texture I get an unhandled 'Integer Division by Zero' exception:
void loadMe() {
    Texture* abc = new Texture("res/img/office.png");
}
void loadStuff() {
    Texture* loading = new Texture("res/img/head.png"); // < always works
    loadMe(); // < always works
    std::thread textures(loadMe); // < always "integer division by zero"
}
Here's some relevant code from my Texture class:
// inside the class
private:
    GLint w, h;
    GLuint image;

// loading the texture (called by constructor if filename is given)
void Texture::loadImage(const char* file)
{
    image = SOIL_load_OGL_texture
    (
        file,
        SOIL_LOAD_AUTO,
        SOIL_CREATE_NEW_ID,
        NULL
    );
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, image);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &w);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, &h);
    glBindTexture(GL_TEXTURE_2D, 0);
    if (image <= 0)
        std::cout << file << " failed to load!\n";
    else
        std::cout << file << " loaded.\n";
    glDisable(GL_TEXTURE_2D);
}
It raises the exception exactly at image = SOIL_load_OGL_texture, and in the debugger I see things like w = -816294792 and h = -816294792, but I guess that just means they haven't been set yet, as the debugger also shows that while loading the other textures.
Also, the SOIL_load_OGL_texture part of the code works fine by itself, outside of the Texture class, even in a std::thread.
Any idea what's going on here?
This is how you do it. Note that, as others have mentioned in the comments, a context needs to be made current in every thread that uses GL. In practice this means GL API calls cannot be made from multiple threads without making one thread the owner of the GL context. Hence, if the intention is to move the image-loading overhead off the main thread, it is recommended to load and decode the image file into a buffer using a library in a separate thread, then use that buffer with glTexImage2D in the main thread. Until the image is loaded, a dummy texture can be displayed.
I tried checking what platform you are on (see comment above); since I did not see a response, I am assuming Linux below.
/* Regular GL context creation foo */
/* Regular attribute, uniform, shader creation foo */
/* Create a thread that does loading with SOIL in function SOIL_loader */
std::thread textureloader(SOIL_loader);
/* Wait for loader thread to finish,
thus defeating the purpose of a thread. Ideally,
only the image file read/decode should happen in separate thread */
textureloader.join();
/* Make the GL context current back again in the main thread
for other actions */
glfwMakeContextCurrent((GLFWwindow*)window);
/* Some other foo */
And this is the loader thread function:
void SOIL_loader()
{
    glfwMakeContextCurrent((GLFWwindow*)window);
    SOIL_load_OGL_texture
    (
        "./img_test.png",
        SOIL_LOAD_AUTO,
        SOIL_CREATE_NEW_ID /* or passed ID */,
        NULL
    );
    GL_CHECK(SOIL);
}
Tested on Ubuntu 14.04, Mesa, and glfw3.
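For completeness, a minimal sketch of the recommended split (decode in a worker thread, upload only from the thread that owns the GL context). SOIL_load_image decodes without touching GL; the variable names and the forced-RGBA choice are illustrative:

// Worker thread: file I/O and decoding only, no GL calls.
int w = 0, h = 0, channels = 0;
unsigned char* pixels = nullptr;
std::thread decoder([&] {
    pixels = SOIL_load_image("./img_test.png", &w, &h, &channels, SOIL_LOAD_RGBA);
});
decoder.join(); // or poll a flag and display a dummy texture meanwhile

// Main thread (owns the GL context): upload the decoded buffer.
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
SOIL_free_image_data(pixels);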
I'm working on getting textures to render using OpenGL. I'm part of the way there and stuck.
My goal is to get this picture: http://i.imgur.com/d3kZTsn.png
and this is where I'm at: http://i.imgur.com/uAV8q0W.png
Has anyone seen this issue before?
if (tObject == 0) // We don't yet have an OpenGL texture target
{
    // This code counts the number of images and if there are none simply
    // returns without doing anything
    int nImages = 0;
    while (tName[nImages][0] != '\0' && nImages < MAX_IMAGES)
        nImages++;
    if (nImages < 1)
        return;

    // To Do
    //
    // Generate a texture object and place the object's value in the "tObject"
    // member, then bind the object to the 2D texture target
    glGenTextures(nImages, &tObject);
    glBindTexture(GL_TEXTURE_2D, tObject);

    for (int nImage = 0; nImage < nImages; nImage++)
    {
        // This code loads the texture using the windows library's "BitmapFile" object
        BitmapFile texture;
        if (!texture.Read(tName[nImage]))
            complain("Couldn't read texture %s", tName);

        GLuint srcFormat, targFormat;

        // To Do
        //
        // First decide which format the texture is. If the texture has 4 bytes
        // per pixel then it should be an RGBA texture, if it is 3 bytes per pixel
        // then it is an RGB image. Notice though that the byte order for the BitmapFile
        // object is reversed, so you need to take that into account in the "source" format
        if (texture.BytesPerPixel() == 3)
        {
            srcFormat = GL_BGR;
            targFormat = GL_RGB;
        }
        else
        {
            srcFormat = GL_BGRA;
            targFormat = GL_RGBA;
        }

        // Then you need to set the unpack alignment to tell OpenGL about the structure
        // of the data in the image and send the data to OpenGL. If there are multiple files
        // then we are manually creating a mipmap here and you will use the "level" parameter
        // of glTexImage2D to tell OpenGL which mipmap level is being set. The levels are
        // set in the same order as they are stored in the image list.
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
        if (nImages > 1)
        {
            glGenerateMipmap(GL_TEXTURE_2D);
        }
        glTexImage2D(GL_TEXTURE_2D, nImage, targFormat, texture.Width(), texture.Height(), 0, srcFormat, GL_UNSIGNED_BYTE, texture.ImageData());
    }

    // Finally, if there is only one image, you need to tell OpenGL to generate a mipmap
    if (nImages == 1)
    {
        glGenerateMipmap(GL_TEXTURE_2D);
    }
}

// Here you need to bind the texture to the 2D texture target and set the texture parameters
// You need to set the wrap mode, the minification and magnification filters.
glBindTexture(GL_TEXTURE_2D, tObject);
glTexParameteri(tObject, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(tObject, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(tObject, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(tObject, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

// To Do
//
// For advanced antialiasing set the number of anisotropic samples
GLERR;
I do not understand the logic you are using to call glGenerateMipmap (...). The second parameter to glTexImage2D (...) is the texture LOD - glGenerateMipmap will generate the entire mip pyramid starting with LOD 0. Essentially, you invalidate every one of the calls to glTexImage2D (...) except the first and last iterations of that loop by doing this. It really looks like you either want an array texture, or each one of those images should be a separate texture.
In fact, glGenTextures (...) does not work the way you think it does. You are supposed to pass an array if nImages is > 1. That array will hold nImages-many texture object names. You bind each one and upload image data individually to LOD 0, then you can generate mipmaps.
The following addresses everything I just mentioned:
GLuint* tObjects = NULL;

if (tObjects == NULL) // We don't yet have any OpenGL textures
{
    // This code counts the number of images and if there are none simply
    // returns without doing anything
    int nImages = 0;
    while (tName[nImages][0] != '\0' && nImages < MAX_IMAGES)
        nImages++;
    if (nImages < 1)
        return;

    tObjects = new GLuint[nImages];

    // To Do
    //
    // Generate multiple texture objects and place the object's values in the "tObjects"
    // member, then bind the object to the 2D texture target
    glGenTextures(nImages, tObjects);

    for (int nImage = 0; nImage < nImages; nImage++)
    {
        glBindTexture(GL_TEXTURE_2D, tObjects[nImage]);

        // This code loads the texture using the windows library's "BitmapFile" object
        BitmapFile texture;
        if (!texture.Read(tName[nImage]))
            complain("Couldn't read texture %s", tName);

        GLuint srcFormat, targFormat;

        // To Do
        //
        // First decide which format the texture is. If the texture has 4 bytes
        // per pixel then it should be an RGBA texture, if it is 3 bytes per pixel
        // then it is an RGB image. Notice though that the byte order for the BitmapFile
        // object is reversed, so you need to take that into account in the "source" format
        if (texture.BytesPerPixel() == 3)
        {
            srcFormat = GL_BGR;
            targFormat = GL_RGB;
        }
        else
        {
            srcFormat = GL_BGRA;
            targFormat = GL_RGBA;
        }

        // Set the unpack alignment to tell OpenGL about the structure of the data
        // in the image and send the data to OpenGL, always at LOD 0, then let GL
        // generate the rest of the mip pyramid for this texture.
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
        glTexImage2D(GL_TEXTURE_2D, 0, targFormat, texture.Width(), texture.Height(), 0, srcFormat, GL_UNSIGNED_BYTE, texture.ImageData());
        glGenerateMipmap(GL_TEXTURE_2D);

        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    }
}
I have implemented a Pixel Buffer Object (PBO) in my OpenGL application. However, I get error 1282 when I try to load a texture using glTexImage2D. It's very strange, because the problem only occurs with textures of specific resolutions.
To have a better understanding of my problem let's examine 3 textures with 3 different resolutions:
a) blue.jpg
Bpp: 24
Resolution: 259x469
b) green.jpg
Bpp: 24
Resolution: 410x489
c) red.jpg
Bpp: 24
Resolution: 640x480
Now let's examine the C++ code without PBO usage:
FIBITMAP *bitmap = FreeImage_Load(
    FreeImage_GetFIFFromFilename(file.GetFullName().c_str()), file.GetFullName().c_str());
FIBITMAP *pImage = FreeImage_ConvertTo32Bits(bitmap);

char *pPixels = (char*)FreeImage_GetBits(bitmap);
uint32_t width = FreeImage_GetWidth(bitmap);
uint32_t height = FreeImage_GetHeight(bitmap);
uint32_t byteSize = width * height * (FreeImage_GetBPP(bitmap)/8); // 24 bits / 8 bits = 3 bytes

glGenTextures(1, &this->m_Handle);
glBindTexture(GL_TEXTURE_2D, this->m_Handle);
{
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    std::cout << "ERROR: " << glGetError() << std::endl;
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width,
        height, 0, GL_BGR, GL_UNSIGNED_BYTE, pPixels);
    std::cout << "ERROR: " << glGetError() << std::endl;

    if (this->m_IsMipmap)
        glGenerateMipmap(this->m_Target);
}
glBindTexture(GL_TEXTURE_2D, 0);
For the 3 textures the output is always the same (so the loading has been executed correctly):
$> ERROR: 0
$> ERROR: 0
And the graphical result is also correct:
a) Blue
b) Green
c) Red
Now let's examine the C++ code, this time using a PBO:
FIBITMAP *bitmap = FreeImage_Load(
    FreeImage_GetFIFFromFilename(file.GetFullName().c_str()), file.GetFullName().c_str());
FIBITMAP *pImage = FreeImage_ConvertTo32Bits(bitmap);

char *pPixels = (char*)FreeImage_GetBits(bitmap);
uint32_t width = FreeImage_GetWidth(bitmap);
uint32_t height = FreeImage_GetHeight(bitmap);
uint32_t byteSize = width * height * (FreeImage_GetBPP(bitmap)/8);
uint32_t pboID;

glGenTextures(1, &this->m_Handle);
glBindTexture(GL_TEXTURE_2D, this->m_Handle);
{
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    glGenBuffers(1, &pboID);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pboID);
    {
        unsigned int bufferSize = width * height * 3;
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
        glBufferData(GL_PIXEL_UNPACK_BUFFER, bufferSize, 0, GL_STATIC_DRAW);
        glBufferSubData(GL_PIXEL_UNPACK_BUFFER, 0, bufferSize, pPixels);

        std::cout << "ERROR: " << glGetError() << std::endl;
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width,
            height, 0, GL_BGR, GL_UNSIGNED_BYTE, OFFSET_BUFFER(0));
        std::cout << "ERROR: " << glGetError() << std::endl;

        if (this->m_IsMipmap)
            glGenerateMipmap(this->m_Target);
    }
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
}
glBindTexture(GL_TEXTURE_2D, 0);
The output for blue.jpg (259x469) and green.jpg (410x489) is the following:
$> ERROR: 0
$> ERROR: 1282
The graphical output is, of course, the same for both (no texture data is uploaded):
But now the most interesting part: for the texture red.jpg (640x480) there is no error and the graphical output is correct:
So with the PBO method, the 1282 error seems to be related to the texture resolution!
The OpenGL documentation says for the error 1282 (GL_INVALID_OPERATION) concerning PBO:
GL_INVALID_OPERATION is generated if a non-zero buffer object name is bound to the GL_PIXEL_UNPACK_BUFFER target and the buffer object's data store is currently mapped.
GL_INVALID_OPERATION is generated if a non-zero buffer object name is bound to the GL_PIXEL_UNPACK_BUFFER target and the data would be unpacked from the buffer object such that the memory reads required would exceed the data store size.
GL_INVALID_OPERATION is generated if a non-zero buffer object name is bound to the GL_PIXEL_UNPACK_BUFFER target and data is not evenly divisible into the number of bytes needed to store in memory a datum indicated by type.
But I don't understand what is wrong with my implementation!
I thought that maybe, when using a PBO, I am only allowed to load textures whose dimensions are a multiple of 8... but I hope not!
UPDATE
I tried to add the line:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); //1: byte alignment
before the call to glTexImage2D.
The error 1282 has disappeared but the display is not correct:
I'm really lost!
Can anyone help me?
It is obvious that the image data you loaded is padded to a 4-byte alignment for each row. This is what the GL expects by default, and most probably also what your non-PBO case used.
When you switched to PBOs, you ignored those padding bytes per row, so your buffer was too small and the GL detected the out-of-range access.
When you finally switched to a GL_UNPACK_ALIGNMENT of 1, there was no out-of-range access any more, and the error went away. But now you lie about your data format: it is still padded, but you told the GL that it isn't. For the 640x480 image the padding happens to be zero bytes (since 640*3 is divisible by 4), but for the other two images there are padding bytes at the end of each row.
The correct solution is to leave GL_UNPACK_ALIGNMENT at its default of 4 and fix the calculation of bufferSize. You need to find out how many bytes must be added to each line so that the total number of bytes per line is divisible by 4 again (at most 3 bytes are ever added):
unsigned int padding = ( 4 - (width * 3) % 4 ) % 4;
Now, you can take these extra bytes into account, and get the final size of the buffer (and the image you have in memory):
unsigned int bufferSize = (width * 3 + padding) * height;
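Applied to the PBO path above (same variable names as in the question), those two lines replace the original bufferSize computation:

unsigned int padding = (4 - (width * 3) % 4) % 4;         // 0 to 3 extra bytes per row
unsigned int bufferSize = (width * 3 + padding) * height; // padded row length * number of rows
glBufferData(GL_PIXEL_UNPACK_BUFFER, bufferSize, 0, GL_STATIC_DRAW);
glBufferSubData(GL_PIXEL_UNPACK_BUFFER, 0, bufferSize, pPixels);
// GL_UNPACK_ALIGNMENT stays at its default of 4; do not set it to 1 here.

For the three test images this works out as follows: 259*3 = 777 (padding 3) for blue.jpg and 410*3 = 1230 (padding 2) for green.jpg, while 640*3 = 1920 for red.jpg is already divisible by 4 (padding 0), which is exactly why only red.jpg worked.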
I had a similar problem where I got error 1282 and a black texture.
The third parameter of glTexImage2D (the internal format) historically accepted the values 1, 2, 3 or 4, meaning the number of color components, but those numeric values are no longer accepted in modern (core profile) contexts. Replacing '4' with 'GL_RGBA' fixed the problem for me.
Hope this helps someone.
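A minimal before/after sketch (w, h and data are placeholders):

// Legacy-only internal format; rejected by core profile contexts:
glTexImage2D(GL_TEXTURE_2D, 0, 4, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);
// Portable form with a symbolic internal format:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);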
I've been attempting to write a two-pass GPU implementation of the Marching Cubes algorithm, similar to the one detailed in the first chapter of GPU Gems 3, using OpenGL and GLSL. However, the call to glDrawArrays in my first pass consistently fails with a GL_INVALID_OPERATION.
I've looked up all the documentation I can find, and found these conditions under which glDrawArrays can throw that error:
1. GL_INVALID_OPERATION is generated if a non-zero buffer object name is bound to an enabled array or to the GL_DRAW_INDIRECT_BUFFER binding and the buffer object's data store is currently mapped.
2. GL_INVALID_OPERATION is generated if glDrawArrays is executed between the execution of glBegin and the corresponding glEnd.
3. GL_INVALID_OPERATION will be generated by glDrawArrays or glDrawElements if any two active samplers in the current program object are of different types, but refer to the same texture image unit.
4. GL_INVALID_OPERATION is generated if a geometry shader is active and mode is incompatible with the input primitive type of the geometry shader in the currently installed program object.
5. GL_INVALID_OPERATION is generated if mode is GL_PATCHES and no tessellation control shader is active.
6. GL_INVALID_OPERATION is generated if recording the vertices of a primitive to the buffer objects being used for transform feedback purposes would result in either exceeding the limits of any buffer object's size, or in exceeding the end position offset + size - 1, as set by glBindBufferRange.
7. GL_INVALID_OPERATION is generated by glDrawArrays() if no geometry shader is present, transform feedback is active and mode is not one of the allowed modes.
8. GL_INVALID_OPERATION is generated by glDrawArrays() if a geometry shader is present, transform feedback is active and the output primitive type of the geometry shader does not match the transform feedback primitiveMode.
9. GL_INVALID_OPERATION is generated if the bound shader program is invalid.
10. (EDIT 10/10/12) GL_INVALID_OPERATION is generated if transform feedback is in use, and the buffer bound to the transform feedback binding point is also bound to the array buffer binding point. This is the problem I was having, due to a typo in which buffer I bound. While the spec does state that this is illegal, it isn't listed under glDrawArrays as one of the reasons it can throw an error in any documentation I found.
Unfortunately, no one piece of official documentation I can find covers more than 3 of these. I had to collect this list from numerous sources. Points 7 and 8 actually come from the documentation for glBeginTransformFeedback, and point 9 doesn't seem to be documented at all. I found it mentioned in a forum post somewhere. However, I still don't think this list is complete, as none of these seem to explain the error I'm getting.
1. I'm not mapping any buffers at all in my program, anywhere.
2. I'm using the Core profile, so glBegin and glEnd aren't even available.
3. I have two samplers, and they are of different types, but they're definitely mapped to different textures.
4. A geometry shader is active, but its input layout is layout (points) in, and glDrawArrays is being called with GL_POINTS.
5. I'm not using GL_PATCHES or tessellation shaders of any sort.
6. I've made sure I'm allocating the maximum amount of space my geometry shaders could possibly output. Then I tried quadrupling it. It didn't help.
7. There is a geometry shader. See the next point.
8. Transform feedback is being used, and there is a geometry shader, but the output layout is layout (points) out and glBeginTransformFeedback is called with GL_POINTS.
9. I tried inserting a call to glValidateProgram right before the call to glDrawArrays, and it returned GL_TRUE.
The actual OpenGL code is here:
const int SECTOR_SIZE = 32;
const int SECTOR_SIZE_CUBED = SECTOR_SIZE * SECTOR_SIZE * SECTOR_SIZE;
const int CACHE_SIZE = SECTOR_SIZE + 3;
const int CACHE_SIZE_CUBED = CACHE_SIZE * CACHE_SIZE * CACHE_SIZE;
MarchingCubesDoublePass::MarchingCubesDoublePass(ServiceProvider* svc, DensityMap* sourceData) {
    this->sourceData = sourceData;
    densityCache = new float[CACHE_SIZE_CUBED];
}

MarchingCubesDoublePass::~MarchingCubesDoublePass() {
    delete[] densityCache; // array delete to match new float[]
}
void MarchingCubesDoublePass::InitShaders() {
    ShaderInfo vertShader, geoShader, fragShader;

    vertShader = svc->shader->Load("data/shaders/MarchingCubesDoublePass-Pass1.vert", GL_VERTEX_SHADER);
    svc->shader->Compile(vertShader);
    geoShader = svc->shader->Load("data/shaders/MarchingCubesDoublePass-Pass1.geo", GL_GEOMETRY_SHADER);
    svc->shader->Compile(geoShader);

    shaderPass1 = glCreateProgram();
    static const char* outputVaryings[] = { "triangle" };
    glTransformFeedbackVaryings(shaderPass1, 1, outputVaryings, GL_SEPARATE_ATTRIBS);
    assert(svc->shader->Link(shaderPass1, vertShader, geoShader));

    uniPass1DensityMap = glGetUniformLocation(shaderPass1, "densityMap");
    uniPass1TriTable = glGetUniformLocation(shaderPass1, "triangleTable");
    uniPass1Size = glGetUniformLocation(shaderPass1, "size");
    attribPass1VertPosition = glGetAttribLocation(shaderPass1, "vertPosition");

    vertShader = svc->shader->Load("data/shaders/MarchingCubesDoublePass-Pass2.vert", GL_VERTEX_SHADER);
    svc->shader->Compile(vertShader);
    geoShader = svc->shader->Load("data/shaders/MarchingCubesDoublePass-Pass2.geo", GL_GEOMETRY_SHADER);
    svc->shader->Compile(geoShader);
    fragShader = svc->shader->Load("data/shaders/MarchingCubesDoublePass-Pass2.frag", GL_FRAGMENT_SHADER);
    svc->shader->Compile(fragShader);

    shaderPass2 = glCreateProgram();
    assert(svc->shader->Link(shaderPass2, vertShader, geoShader, fragShader));

    uniPass2DensityMap = glGetUniformLocation(shaderPass2, "densityMap");
    uniPass2Size = glGetUniformLocation(shaderPass2, "size");
    uniPass2Offset = glGetUniformLocation(shaderPass2, "offset");
    uniPass2Matrix = glGetUniformLocation(shaderPass2, "matrix");
    attribPass2Triangle = glGetAttribLocation(shaderPass2, "triangle");
}
void MarchingCubesDoublePass::InitTextures() {
    for (int x = 0; x < CACHE_SIZE; x++) {
        for (int y = 0; y < CACHE_SIZE; y++) {
            for (int z = 0; z < CACHE_SIZE; z++) {
                densityCache[x + y*CACHE_SIZE + z*CACHE_SIZE*CACHE_SIZE] = sourceData->GetDensity(Vector3(x-1, y-1, z-1));
            }
        }
    }

    glGenTextures(1, &densityTex);
    glBindTexture(GL_TEXTURE_3D, densityTex);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
    glTexImage3D(GL_TEXTURE_3D, 0, GL_R32F, CACHE_SIZE, CACHE_SIZE, CACHE_SIZE, 0, GL_RED, GL_FLOAT, densityCache);

    glGenTextures(1, &triTableTex);
    glBindTexture(GL_TEXTURE_RECTANGLE, triTableTex);
    glTexParameteri(GL_TEXTURE_RECTANGLE, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_RECTANGLE, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_RECTANGLE, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_RECTANGLE, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexImage2D(GL_TEXTURE_RECTANGLE, 0, GL_R16I, 16, 256, 0, GL_RED_INTEGER, GL_INT, triTable);
}
void MarchingCubesDoublePass::InitBuffers() {
    float* voxelGrid = new float[SECTOR_SIZE_CUBED*3];
    unsigned int index = 0;
    for (int x = 0; x < SECTOR_SIZE; x++) {
        for (int y = 0; y < SECTOR_SIZE; y++) {
            for (int z = 0; z < SECTOR_SIZE; z++) {
                voxelGrid[index*3 + 0] = x;
                voxelGrid[index*3 + 1] = y;
                voxelGrid[index*3 + 2] = z;
                index++;
            }
        }
    }

    glGenBuffers(1, &bufferPass1);
    glBindBuffer(GL_ARRAY_BUFFER, bufferPass1);
    glBufferData(GL_ARRAY_BUFFER, SECTOR_SIZE_CUBED*3*sizeof(float), voxelGrid, GL_STATIC_DRAW);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    delete[] voxelGrid; // the CPU copy is no longer needed once uploaded

    glGenBuffers(1, &bufferPass2);
    glBindBuffer(GL_ARRAY_BUFFER, bufferPass2);
    glBufferData(GL_ARRAY_BUFFER, SECTOR_SIZE_CUBED*5*sizeof(int), NULL, GL_DYNAMIC_COPY);
    glBindBuffer(GL_ARRAY_BUFFER, 0);

    glGenVertexArrays(1, &vaoPass1);
    glBindVertexArray(vaoPass1);
    glBindBuffer(GL_ARRAY_BUFFER, bufferPass1);
    glVertexAttribPointer(attribPass1VertPosition, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glEnableVertexAttribArray(attribPass1VertPosition);
    glBindVertexArray(0);

    glGenVertexArrays(1, &vaoPass2);
    glBindVertexArray(vaoPass2);
    glBindBuffer(GL_ARRAY_BUFFER, bufferPass2);
    glVertexAttribIPointer(attribPass2Triangle, 1, GL_INT, 0, (void*)0);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glEnableVertexAttribArray(attribPass2Triangle);
    glBindVertexArray(0);

    glGenQueries(1, &queryNumTriangles);
}
void MarchingCubesDoublePass::Register(Genesis::ServiceProvider* svc, Genesis::Entity* ent) {
    this->svc = svc;
    this->ent = ent;
    svc->scene->RegisterEntity(ent);
    InitShaders();
    InitTextures();
    InitBuffers();
}

void MarchingCubesDoublePass::Unregister() {
    if (!ent->GetBehavior<Genesis::Render>()) {
        svc->scene->UnregisterEntity(ent);
    }
}
void MarchingCubesDoublePass::RenderPass1() {
    glEnable(GL_RASTERIZER_DISCARD);
    glUseProgram(shaderPass1);

    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_3D, densityTex);
    glActiveTexture(GL_TEXTURE1);
    glBindTexture(GL_TEXTURE_RECTANGLE, triTableTex);

    glUniform1i(uniPass1DensityMap, 0);
    glUniform1i(uniPass1TriTable, 1);
    glUniform1i(uniPass1Size, SECTOR_SIZE);

    glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, bufferPass2);
    glBindVertexArray(vaoPass2);

    glBeginQuery(GL_TRANSFORM_FEEDBACK_PRIMITIVES_WRITTEN, queryNumTriangles);
    glBeginTransformFeedback(GL_POINTS);
    GLenum error = glGetError();
    glDrawArrays(GL_POINTS, 0, SECTOR_SIZE_CUBED);
    error = glGetError();
    glEndTransformFeedback();
    glEndQuery(GL_TRANSFORM_FEEDBACK_PRIMITIVES_WRITTEN);

    glBindVertexArray(0);
    glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, 0);
    glUseProgram(0);
    glDisable(GL_RASTERIZER_DISCARD);

    glGetQueryObjectuiv(queryNumTriangles, GL_QUERY_RESULT, &numTriangles);
}
void MarchingCubesDoublePass::RenderPass2(Matrix mat) {
    glUseProgram(shaderPass2);

    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_3D, densityTex);

    glUniform1i(uniPass2DensityMap, 0);
    glUniform1i(uniPass2Size, SECTOR_SIZE);
    glUniform3f(uniPass2Offset, 0, 0, 0);
    mat.UniformMatrix(uniPass2Matrix);

    glBindVertexArray(vaoPass2);
    glDrawArrays(GL_POINTS, 0, numTriangles);
    glBindVertexArray(0);

    glUseProgram(0);
}

void MarchingCubesDoublePass::OnRender(Matrix mat) {
    RenderPass1();
    RenderPass2(mat);
}
The actual error comes from the call to glDrawArrays in RenderPass1. It is worth noting that if I comment out the calls to glBeginTransformFeedback and glEndTransformFeedback, glDrawArrays stops generating the error. So whatever is wrong is probably related to transform feedback.
Edit 8/18/12, 9 PM:
I just found the NVIDIA GLExpert feature in gDEBugger, which I wasn't previously familiar with. When I turned it on, it gave somewhat more substantial information on the GL_INVALID_OPERATION, specifically: "The current operation is illegal in the current state: Buffer is mapped." So I'm running into point 1 above, though I have no idea how.
I have no calls to glMapBuffer, or any related function, anywhere in my code. I set gDEBugger to break on any calls to glMapBuffer, glMapBufferARB, glMapBufferRange, glUnmapBuffer and glUnmapBufferARB, and it didn't break anywhere. Then I added code to the start of RenderPass1 to explicitly unmap both buffers. Not only did the error not go away, but the calls to glUnmapBuffer now both generate "The current operation is illegal in the current state: Buffer is unbound or is already unmapped." So if neither of the buffers I'm using is mapped, where is the error coming from?
Edit 8/19/12, 12 AM:
Based on the error messages I'm getting out of GLExpert in gDEBugger, it appears that calling glBeginTransformFeedback causes the buffer bound to GL_TRANSFORM_FEEDBACK_BUFFER to become mapped. Specifically, when I click on the buffer in the "Textures, Buffers and Images Viewer" it outputs the message "The current operation is illegal in the current state: Buffer must be bound and not mapped." However, if I add this between glBeginTransformFeedback and glEndTransformFeedback:
int bufferBinding;
glGetBufferParameteriv(GL_TRANSFORM_FEEDBACK_BUFFER, GL_BUFFER_MAPPED, &bufferBinding);
printf("Transform feedback buffer binding: %d\n", bufferBinding);
it outputs 0, which would indicate that GL_TRANSFORM_FEEDBACK_BUFFER is not mapped. If this buffer is mapped on another binding point, would this still return 0? Why would glBeginTransformFeedback map the buffer, thus rendering it unusable for transform feedback?
The more I learn here, the more confused I'm becoming.
Edit 10/10/12:
As indicated in my reply below to Nicol Bolas' solution, I found the problem, and it's the same one he found: Due to a stupid typo, I was binding the same buffer to both the input and output binding points.
I found it probably two weeks after posting the question. I'd given up in frustration for a time, and eventually came back and basically re-implemented the whole thing from scratch, regularly comparing bits and pieces against the older, non-working version. When I was done, the new version worked, and it was when I searched out the differences that I discovered I'd been binding the wrong buffer.
I figured out your problem: you are rendering to the same buffer that you're sourcing your vertex data from.
glBindVertexArray(vaoPass2);
I think you meant vaoPass1
From the spec:
Buffers should not be bound or in use for both transform feedback and other
purposes in the GL. Specifically, if a buffer object is simultaneously bound to a
transform feedback buffer binding point and elsewhere in the GL, any writes to
or reads from the buffer generate undefined values. Examples of such bindings
include ReadPixels to a pixel buffer object binding point and client access to a
buffer mapped with MapBuffer.
Now, you should get undefined values; I'm not sure that a GL error qualifies, but it probably should be an error.
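Under that reading, a sketch of the corrected pass-1 bindings (same names as in the question), so the transform feedback output buffer is no longer also the vertex source:

glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, bufferPass2); // output: pass-2 buffer
glBindVertexArray(vaoPass1);                                    // input: the voxel grid VAO, not vaoPass2
glBeginTransformFeedback(GL_POINTS);
glDrawArrays(GL_POINTS, 0, SECTOR_SIZE_CUBED);
glEndTransformFeedback();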
Another (apparently undocumented) case where glDrawArrays and glDrawElements fail with GL_INVALID_OPERATION:
GL_INVALID_OPERATION is generated if a sampler uniform is set to an invalid texture unit identifier. (I had mistakenly performed glUniform1i(location, GL_TEXTURE0); when I meant glUniform1i(location, 0);.)
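To illustrate (location stands for a valid sampler uniform location):

// Wrong: GL_TEXTURE0 is the enum value 0x84C0, not a texture unit index.
glUniform1i(location, GL_TEXTURE0);
// Right: sampler uniforms take the unit index itself.
glUniform1i(location, 0);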
Another (undocumented) case where glDraw*() calls can fail with GL_INVALID_OPERATION:
GL_INVALID_OPERATION is generated if a sampler uniform is set to a texture unit that has a texture of the wrong type bound. For example, if a uniform sampler2D is set with glUniform1i(location, 0); but GL_TEXTURE0 has a GL_TEXTURE_2D_ARRAY texture bound.
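A sketch of that mismatch (arrayTex and location are placeholders):

// Fragment shader declares: uniform sampler2D tex;
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D_ARRAY, arrayTex); // wrong target for a sampler2D
glUniform1i(location, 0); // unit 0 now holds a 2D-array texture -> GL_INVALID_OPERATION at draw time
// Binding a GL_TEXTURE_2D to unit 0 instead resolves the error.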