Textures not Working With glBindTexture - C++

I have a single texture working, but I cannot figure out how to switch between two, or how glBindTexture actually works.
I copied this from somewhere and it works, and I believe I understand most of it. The problem is that I can leave glBindTexture(GL_TEXTURE_2D, texture[0].texID); commented out and it still works, which I don't understand. This code shouldn't be a problem; I think it's something simple I am missing.
bool LoadTGA(TextureImage *texture, char *filename)            // Loads A TGA File Into Memory
{
    GLubyte TGAheader[12] = {0,0,2,0,0,0,0,0,0,0,0,0};         // Uncompressed TGA Header
    GLubyte TGAcompare[12];                                    // Used To Compare TGA Header
    GLubyte header[6];                                         // First 6 Useful Bytes From The Header
    GLuint  bytesPerPixel;                                     // Holds Number Of Bytes Per Pixel Used In The TGA File
    GLuint  imageSize;                                         // Used To Store The Image Size When Setting Aside Ram
    GLuint  temp;                                              // Temporary Variable
    GLuint  type = GL_RGBA;                                    // Set The Default GL Mode To RGBA (32 BPP)

    system("cd");

    FILE *file = fopen(filename, "r");                         // Open The TGA File

    if (file == NULL ||                                        // Does File Even Exist?
        fread(TGAcompare, 1, sizeof(TGAcompare), file) != sizeof(TGAcompare) || // Are There 12 Bytes To Read?
        memcmp(TGAheader, TGAcompare, sizeof(TGAheader)) != 0 ||                // Does The Header Match What We Want?
        fread(header, 1, sizeof(header), file) != sizeof(header))               // If So Read Next 6 Header Bytes
    {
        if (file == NULL)                                      // Did The File Even Exist? *Added Jim Strong*
        {
            perror("Error");
            return false;                                      // Return False
        }
        else
        {
            fclose(file);                                      // If Anything Failed, Close The File
            perror("Error");
            return false;                                      // Return False
        }
    }

    texture->width  = header[1] * 256 + header[0];             // Determine The TGA Width (highbyte*256+lowbyte)
    texture->height = header[3] * 256 + header[2];             // Determine The TGA Height (highbyte*256+lowbyte)

    if (texture->width  <= 0 ||                                // Is The Width Less Than Or Equal To Zero
        texture->height <= 0 ||                                // Is The Height Less Than Or Equal To Zero
        (header[4] != 24 && header[4] != 32))                  // Is The TGA 24 or 32 Bit?
    {
        fclose(file);                                          // If Anything Failed, Close The File
        return false;                                          // Return False
    }

    texture->bpp  = header[4];                                 // Grab The TGA's Bits Per Pixel (24 or 32)
    bytesPerPixel = texture->bpp / 8;                          // Divide By 8 To Get The Bytes Per Pixel
    imageSize     = texture->width * texture->height * bytesPerPixel; // Calculate The Memory Required For The TGA Data
    texture->imageData = (GLubyte *)malloc(imageSize);         // Reserve Memory To Hold The TGA Data

    if (texture->imageData == NULL ||                          // Does The Storage Memory Exist?
        fread(texture->imageData, 1, imageSize, file) != imageSize) // Does The Image Size Match The Memory Reserved?
    {
        if (texture->imageData != NULL)                        // Was Image Data Loaded
            free(texture->imageData);                          // If So, Release The Image Data
        fclose(file);                                          // Close The File
        return false;                                          // Return False
    }

    for (GLuint i = 0; i < imageSize; i += bytesPerPixel)      // Loop Through The Image Data
    {                                                          // Swaps The 1st And 3rd Bytes ('R'ed and 'B'lue)
        temp = texture->imageData[i];                          // Temporarily Store The Value At Image Data 'i'
        texture->imageData[i]     = texture->imageData[i + 2]; // Set The 1st Byte To The Value Of The 3rd Byte
        texture->imageData[i + 2] = temp;                      // Set The 3rd Byte To The Value In 'temp' (1st Byte Value)
    }

    fclose(file);                                              // Close The File

    // Build A Texture From The Data
    glGenTextures(1, &texture[0].texID);                       // Generate OpenGL texture IDs
    //glBindTexture(GL_TEXTURE_2D, texture[0].texID);          // Bind Our Texture

    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); // Linear Filtered
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); // Linear Filtered
    //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);

    if (texture[0].bpp == 24)                                  // Was The TGA 24 Bits
    {
        type = GL_RGB;                                         // If So Set The 'type' To GL_RGB
    }

    glTexImage2D(GL_TEXTURE_2D, 0, type, texture[0].width, texture[0].height, 0, type, GL_UNSIGNED_BYTE, texture[0].imageData);

    return true;
}
Now when I draw I have this:
glEnable(GL_TEXTURE_2D);
//glBindTexture(GL_TEXTURE_2D, texturesList[0].texID);
glColor4f(1, 1, 1, 1);

glBegin(GL_POLYGON);
    glTexCoord2f(0.0f, 0.0f);   glVertex4f(-50, 0,  50, 1);
    glTexCoord2f(50.0f, 0.0f);  glVertex4f(-50, 0, -50, 1);
    glTexCoord2f(50.0f, 50.0f); glVertex4f( 50, 0, -50, 1);
    glTexCoord2f(0.0f, 50.0f);  glVertex4f( 50, 0,  50, 1);
glEnd();

glDisable(GL_TEXTURE_2D);
And this at the start of the program:
LoadTGA(&texturesList[0], "\snow.tga");
LoadTGA(&texturesList[1], "\snow2.tga");
So after it loads them, texturesList contains 2 textures with IDs of 1 and 2.
So do I not call glBindTexture(GL_TEXTURE_2D, texturesList[0].texID); before I draw to choose the right texture? Don't I have to tell glTexCoord2f what to operate on?
It works perfectly if I never call glBindTexture in my draw, but if I do, nothing shows up. What I am more confused about is that glBindTexture doesn't need to be called for it to work.
But the last texture I create gets shown (snow2.tga).
If I can clear anything up, let me know.

So do I not call glBindTexture(GL_TEXTURE_2D, texturesList[0].texID); before I draw to choose the right texture? Don't I have to tell glTexCoord2f what to operate on?
glTexCoord2f (...) operates at the per-vertex level. It is independent of what texture you have loaded; that is actually the whole point. You can map any texture you want simply by changing which texture is bound when you draw.
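For illustration, a minimal sketch of switching between your two textures at draw time (texturesList comes from your code; drawGroundQuad() is a hypothetical helper wrapping the glBegin/glEnd block from your draw code):

glEnable(GL_TEXTURE_2D);

glBindTexture(GL_TEXTURE_2D, texturesList[0].texID);  // snow.tga
drawGroundQuad();                                     // same glTexCoord2f calls...

glBindTexture(GL_TEXTURE_2D, texturesList[1].texID);  // snow2.tga
drawGroundQuad();                                     // ...different image sampled

glDisable(GL_TEXTURE_2D);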
It works perfectly if I never call glBindTexture in my draw, but if I do, nothing shows up. What I am more confused about is that glBindTexture doesn't need to be called for it to work.
You need to bind your texture in LoadTGA (...) because "generating" a name alone is insufficient.
All that glGenTextures (...) does is return one or more unused names from the list of names OpenGL has for textures and reserve them so that a subsequent call does not give out the same name.
It does not actually create a texture; the name returned does not become a texture until you bind it. Until that time the name is merely in a reserved state. Commands such as glTexParameterf (...) and glTexImage2D (...) operate on the currently bound texture, so in addition to generating a texture you must also bind one before making those calls.
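Concretely, the fix in LoadTGA (...) is to bind right after generating the name, so that the parameter and image calls have a texture object to operate on (these are the question's own lines, with the bind uncommented):

glGenTextures(1, &texture[0].texID);             // reserve a texture name
glBindTexture(GL_TEXTURE_2D, texture[0].texID);  // binding it creates the texture object

// These calls now affect texture[0].texID instead of the default texture:
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, type, texture[0].width, texture[0].height,
             0, type, GL_UNSIGNED_BYTE, texture[0].imageData);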
Now, onto some other serious issues that are not related to OpenGL:
Do whatever possible to get rid of your system ("cd"); line. There are much better ways of changing the working directory.
SetCurrentDirectory (...) (Windows)
chdir (...) (Linux/OS X/BSD/POSIX)
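A minimal sketch of the replacement (the path is only an example):

#ifdef _WIN32
#include <windows.h>
#else
#include <unistd.h>
#endif

// Hypothetical replacement for system("cd"): actually set the working
// directory before loading textures.
void setWorkingDirectory()
{
#ifdef _WIN32
    SetCurrentDirectoryA("C:\\path\\to\\textures");
#else
    chdir("/path/to/textures");
#endif
}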
Do not use the file name "\snow.tga" as a string literal, because a C compiler may see "\" and interpret whatever comes after it as part of an escape sequence. Consider "\\snow.tga" instead or "/snow.tga" (yes, this even works on Windows - "\" is a terrible character to use as a path separator).
"\s" is not actually a recognized escape sequence by C compilers, but using "\" to begin your path is playing with fire because there are a handful of reserved characters where it will actually matter. "\fire.tga", for instance, is actually shorthand for {0x0c} "ire.tga". The compiler will replace your string literal with that sequence of bytes and will leave you scratching your head trying to figure out what went wrong.

Related

OpenGL 4.1 internally stores my texture incorrectly, causing a garbled image

I'm trying to load a 2bpp image format into OpenGL textures. The format is just a bunch of indexed-color pixels; 4 pixels fit into one byte, since it's 2 bits per pixel.
My current code works fine in all cases except if the image's width is not divisible by 4. I'm not sure if this has something to do with the data being 2bpp, as it's converted to a pixel unsigned byte array (GLubyte raw[4096]) anyway.
16x16? Displays fine.
16x18? Displays fine.
18x16? Garbled mess.
22x16? Garbled mess.
etc.
Here is what I mean by works VS. garbled mess (resized to 3x):
Here is my code:
GLubyte raw[4096];

std::ifstream bin(file, std::ios::ate | std::ios::binary | std::ios::in);
unsigned short size = bin.tellg();
bin.clear();
bin.seekg(0, std::ios::beg);

// first byte is height; width is calculated from a combination of filesize & height
// this part works correctly every time
char ch = 0;
bin.get(ch);
ubyte h = ch;
ubyte w = ((size - 1) * 4) / h;
printf("%dx%d Filesize: %d (%d)\n", w, h, size - 1, (size - 1) * 4);

// fill it in with 0's which means transparent.
for (int ii = 0; ii < w * h; ++ii) {
    if (ii < 4096) {
        raw[ii] = 0x00;
    } else {
        return false;
    }
}

size_t i = 0;
while (bin.get(ch)) {
    // 2bpp mode
    // take each byte in the file, split it into 4 bytes.
    raw[i]     = (ch & 0x03);
    raw[i + 1] = (ch & 0x0C) >> 2;
    raw[i + 2] = (ch & 0x30) >> 4;
    raw[i + 3] = (ch & 0xC0) >> 6;
    i = i + 4;
}

texture_sizes[id][1] = w;
texture_sizes[id][2] = h;

glGenTextures(1, &textures[id]);
glBindTexture(GL_TEXTURE_2D, textures[id]);

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

GLenum fmt = GL_RED;
GLint swizzleMask[] = { GL_RED, GL_RED, GL_RED, 255 };
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, swizzleMask);

glTexImage2D(GL_TEXTURE_2D, 0, fmt, w, h, 0, fmt, GL_UNSIGNED_BYTE, raw);
glBindTexture(GL_TEXTURE_2D, 0);
What's actually happening, for some reason, is that the image is being treated as if it were 20x24; OpenGL (probably?) seems to be forcefully rounding the width up to the nearest number divisible by 4, which would be 20. This is despite the w value in my code correctly being 18; it's as if OpenGL is saying "no, I'm going to make it 20 pixels wide internally."
However, since the texture is still being rendered as an 18x24 rectangle, the last 2 pixels of each row - which should be the first 2 pixels of the next row - are just... not being rendered.
Here's what happens when I force my code's w value to always be 20, instead of 18. (I just replaced w = ((size-1)*4)/h with w = 20):
And here's when my w value is 18 again, as in the first image:
As you can see, the image is a whole 2 pixels wider; those 2 pixels at the end of every row should be on the next row, because the width is supposed to be 18, not 20!
This proves that, for whatever reason, the texture bytes were internally parsed and stored as if they were 20x24 instead of 18x24. Why that is, I can't figure out, and I've been trying to solve this specific problem for days. I've verified that the raw bytes are all the values I expect; there's nothing wrong with my data format. Is this an OpenGL bug? Why is OpenGL forcibly storing my texture internally as 20x24 when I clearly told it to store it as 18x24? The rest of my code recognizes that I told the width to be 18, not 20; it's just OpenGL itself that doesn't.
Finally, one more note: I've tried loading the exact same file, in the exact same way with the LÖVE framework (Lua), exact same size and exact same bytes as my C++ version and all. And I dumped those bytes into love.image.newImageData and it displays just fine!
That's the final proof that it's not my format's problem; it's very likely OpenGL's problem or something in the code above that I'm overlooking.
How can I solve this problem? The problem being that OpenGL is storing the texture internally with an incorrect width (20 as opposed to the 18 that I gave the function) and therefore loading the raw unsigned bytes incorrectly.
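If the rows really are one byte per pixel, this rounding to a multiple of 4 matches OpenGL's default pixel-transfer row alignment (GL_UNPACK_ALIGNMENT, 4 bytes by default); a minimal sketch of uploading with tightly packed rows, in case it applies here:

// The default GL_UNPACK_ALIGNMENT of 4 makes GL assume every row of the
// upload starts on a 4-byte boundary, effectively padding 18-byte rows to 20.
// Declaring the rows as tightly packed before the upload avoids that:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, fmt, w, h, 0, fmt, GL_UNSIGNED_BYTE, raw);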

Compressed texture batching in OpenGL

I'm trying to create an atlas of compressed textures but I can't seem to get it working. Here is a code snippet:
void Texture::addImageToAtlas(ImageProperties* imageProperties)
{
    generateTexture(); // delete and regenerate an empty texture
    bindTexture();     // bind it

    atlasProperties.push_back(imageProperties);

    width = height = 0;
    for (int i = 0; i < atlasProperties.size(); i++)
    {
        width += atlasProperties[i]->width;
        height = atlasProperties[i]->height;
    }

    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    // glCompressedTexImage2D MUST be called with valid data for the 'pixels'
    // parameter. Won't work if you use zero/null.
    glCompressedTexImage2D(GL_TEXTURE_2D, 0,
                           GL_COMPRESSED_RGBA8_ETC2_EAC,
                           width,
                           height,
                           0,
                           (GLsizei)(ceilf(width/4.f) * ceilf(height/4.f) * 16.f),
                           atlasProperties[0]->pixels);

    // Recreate the whole atlas by adding all the textures we have appended
    // to our vector so far
    int x = 0, y = 0;
    for (int i = 0; i < atlasProperties.size(); i++)
    {
        glCompressedTexSubImage2D(GL_TEXTURE_2D,
                                  0,
                                  x,
                                  y,
                                  atlasProperties[i]->width,
                                  atlasProperties[i]->height,
                                  GL_RGBA,
                                  (GLsizei)(ceilf(atlasProperties[i]->width/4.f) * ceilf(atlasProperties[i]->height/4.f) * 16.f),
                                  atlasProperties[i]->pixels);

        x += atlasProperties[i]->width;
    }

    unbindTexture(); // unbind the texture
}
I'm testing this with just 2 small KTX textures that have the same size, and as you can see from the code, I'm trying to append the second one next to the first one on the x axis.
My KTX parsing works fine, as I can render individual textures, but as soon as I try to batch (that is, as soon as I use glCompressedTexSubImage2D) I get nothing on the screen.
It might be useful to know that all of this works fine if I replace the compressed textures with PNGs and swap glCompressedTexImage2D and glCompressedTexSubImage2D with their non-compressed versions...
One of the things that I cannot find any information on is the x and y position of the textures in the atlas. How do I offset them? So if the first texture has a width of 60 pixels for example, do I just position the second one at 61?
I've seen some code online where people calculate the x and y position as follows:
x &= ~3;
y &= ~3;
Is this what I need to do and why? I've tried it but it doesn't seem to work.
Also, I'm trying the above code on an ARM i.MX6 Quad board with a Vivante GPU, and from what I read online I suspect that glCompressedTexSubImage2D might not be working on this board.
Can anyone please help me out?
The format you pass to glCompressedTexSubImage2D() must be the same as the one used for the corresponding glCompressedTexImage2D(). From the ES 2.0 spec:
This command does not provide for image format conversion, so an INVALID_OPERATION error results if format does not match the internal format of the texture image being modified.
Therefore, to match the glCompressedTexImage2D() call, the glCompressedTexSubImage2D() call needs to be:
glCompressedTexSubImage2D(GL_TEXTURE_2D,
    0, x, y, atlasProperties[i]->width, atlasProperties[i]->height,
    GL_COMPRESSED_RGBA8_ETC2_EAC,
    (GLsizei)(ceilf(atlasProperties[i]->width/4.f) *
              ceilf(atlasProperties[i]->height/4.f) * 16.f),
    atlasProperties[i]->pixels);
As for the sizes and offsets:
Your logic of determining the overall size would only work if the height of all sub-images is the same. Or more precisely, since the height is set to the height of the last sub-image, if no other height is larger than the last one. To make it more robust, you would probably want to use the maximum height of all sub-images.
I was surprised that you can't pass null as the last argument of glCompressedTexImage2D(), but it seems to be true. At least I couldn't find anything allowing it in the spec. But this being the case, I don't think it would be ok to simply pass the pointer to the data of the first sub-image. That would not be enough data, and it would read beyond the end of the memory. You may have to allocate and pass "data" that is large enough to cover the entire atlas texture. You could probably set it to anything (e.g. zero it out), since you're going to replace it anyway.
The way I read the ETC2 definition (as included in the ES 3.0 spec), the width/height of the texture do not strictly have to be multiples of 4. However, the positions for glCompressedTexSubImage2D() do have to be multiples of 4, as well as the width/height, unless they extend to the edge of the texture. This means that you have to make the width of each sub-image except the last a multiple of 4. At that point, you might as well use a multiple of 4 for everything.
Based on this, I think the size determination should look like this:
width = height = 0;
for (int i = 0; i < atlasProperties.size(); i++)
{
    width += (atlasProperties[i]->width + 3) & ~3;
    if (atlasProperties[i]->height > height)
    {
        height = atlasProperties[i]->height;
    }
}
height = (height + 3) & ~3;

uint8_t* dummyData = new uint8_t[width * height];
memset(dummyData, 0, width * height);

glCompressedTexImage2D(GL_TEXTURE_2D, 0,
                       GL_COMPRESSED_RGBA8_ETC2_EAC,
                       width, height, 0,
                       width * height,
                       dummyData);

delete[] dummyData;
Then to set the sub-images:
int xPos = 0;
for (int i = 0; i < atlasProperties.size(); i++)
{
    int w = (atlasProperties[i]->width + 3) & ~3;
    int h = (atlasProperties[i]->height + 3) & ~3;

    glCompressedTexSubImage2D(GL_TEXTURE_2D,
                              0, xPos, 0, w, h,
                              GL_COMPRESSED_RGBA8_ETC2_EAC,
                              w * h,
                              atlasProperties[i]->pixels);

    xPos += w;
}
The whole thing would get slightly simpler if you could ensure that the original texture images already had sizes that are multiples of 4. Then you can skip rounding up the sizes/positions to multiples of 4.
In the end, this was one of those mistakes that make you want to hit your head against a wall: GL_COMPRESSED_RGBA8_ETC2_EAC was not actually supported on the board.
I had copied it from the headers but never queried the device for its supported formats. I can use a DXT5 format just fine with this code.
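For anyone hitting the same thing, a hedged sketch of querying which compressed formats the driver actually exposes before picking one:

#include <algorithm>
#include <vector>

// Ask the implementation which compressed texture formats it supports and
// check for ETC2 before relying on it.
bool supportsEtc2Rgba8()
{
    GLint count = 0;
    glGetIntegerv(GL_NUM_COMPRESSED_TEXTURE_FORMATS, &count);

    std::vector<GLint> formats(count);
    if (count > 0)
        glGetIntegerv(GL_COMPRESSED_TEXTURE_FORMATS, formats.data());

    return std::find(formats.begin(), formats.end(),
                     (GLint)GL_COMPRESSED_RGBA8_ETC2_EAC) != formats.end();
}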

Error C2664: 'auxDIBImageLoadW' : cannot convert parameter 1 from 'LPSTR' to 'LPCWSTR'

I was just modifying the code after reinstalling Windows and VS 2012 Ultimate. The code (shown below) worked perfectly fine before, but when I try to build it now, it gives the following error:
Error 1 error C2664: 'auxDIBImageLoadW' : cannot convert parameter 1 from 'LPSTR' to 'LPCWSTR'
Code:
void CreateTexture(GLuint textureArray[], LPSTR strFileName, int textureID)
{
    AUX_RGBImageRec *pBitmap = NULL;

    if (!strFileName)                               // Return from the function if no file name was passed in
        return;

    pBitmap = auxDIBImageLoad(strFileName);         // <- Error on this line // Load the bitmap and store the data

    if (pBitmap == NULL)                            // If we can't load the file, quit!
        exit(0);

    // Generate a texture with the associative texture ID stored in the array
    glGenTextures(1, &textureArray[textureID]);

    // This sets the alignment requirements for the start of each pixel row in memory.
    // glPixelStorei (GL_UNPACK_ALIGNMENT, 1);

    // Bind the texture to the texture arrays index and init the texture
    glBindTexture(GL_TEXTURE_2D, textureArray[textureID]);

    // Build Mipmaps (builds different versions of the picture for distances - looks better)
    gluBuild2DMipmaps(GL_TEXTURE_2D, 3, pBitmap->sizeX, pBitmap->sizeY, GL_RGB, GL_UNSIGNED_BYTE, pBitmap->data);

    // Lastly, we need to tell OpenGL the quality of our texture map. GL_LINEAR is the smoothest.
    // GL_NEAREST is faster than GL_LINEAR, but looks blotchy and pixelated. Good for slower computers though.
    // Read more about the MIN and MAG filters at the bottom of main.cpp
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    // glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    // glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);

    // Now we need to free the bitmap data that we loaded since openGL stored it as a texture
    if (pBitmap)                                    // If we loaded the bitmap
    {
        if (pBitmap->data)                          // If there is texture data
        {
            free(pBitmap->data);                    // Free the texture data, we don't need it anymore
        }
        free(pBitmap);                              // Free the bitmap structure
    }
}
I tried this link, this one, and also this one, but I am still getting the error.
This function is used after initialization as:
LPCWSTR k =L"grass.bmp";
CreateTexture(g_Texture, "building1.bmp", 0);
CreateTexture(g_Texture, "clock.bmp", 0);
//list goes on
Can you help me out?
Change "LPSTR strFileName" to "LPCWSTR strFileName", "building1.bmp" to L"building1.bmp", and "clock.bmp" to L"clock.bmp".
Always be careful, because LPSTR is ASCII and LPCWSTR is Unicode. So if the function needs a Unicode string (like this: L"String here") you can't give it an ASCII string.
The solutions are either:
Change your function prototype to take wide strings:
void CreateTexture(GLuint textureArray[], LPWSTR strFileName, int textureID)
//...
LPCWSTR k = L"grass.bmp";
CreateTexture(g_Texture, L"building1.bmp", 0);
CreateTexture(g_Texture, L"clock.bmp", 0);
or
Don't change your function prototype, but call the A version of the API function:
pBitmap = auxDIBImageLoadA(strFileName);
Recommended: Stick to wide strings and use the correct string types.

How to load devIL image from raw data

I would like to create a devIL image from raw texture data, but I can't seem to find a way to do it. The proper way seems to be ilLoadL with IL_RAW, but I can't get it to work. The documentation here says that there should be a 13-byte header in the data, so I just put meaningless data there. If I pass 0 as the "size" parameter of ilLoadL, I get a black texture no matter what. Otherwise my program refuses to draw anything. ilIsImage returns true, and I can create an OpenGL texture from it just fine. The code works if I load the texture from a file.
It's not much, but here's my code so far:
//Loading:
ilInit();
iluInit();

ILuint ilID;
ilGenImages(1, &ilID);
ilBindImage(ilID);

ilEnable(IL_ORIGIN_SET);
ilOriginFunc(IL_ORIGIN_LOWER_LEFT);

//Generate 13-byte header and fill it with meaningless numbers
for (int i = 0; i < 13; ++i){
    data.insert(data.begin() + i, i);
}

//This fails.
if (!ilLoadL(IL_RAW, &data[0], size)){
    std::cout << "Fail" << std::endl;
}
Texture creation:
ilBindImage(ilId[i]);
ilConvertImage(IL_RGBA, IL_UNSIGNED_BYTE);

glBindTexture(textureTarget, id[i]);
glTexParameteri(textureTarget, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(textureTarget, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameterf(textureTarget, GL_TEXTURE_MIN_FILTER, filters[i]);
glTexParameterf(textureTarget, GL_TEXTURE_MAG_FILTER, filters[i]);

glTexImage2D(textureTarget, 0, GL_RGBA,
             ilGetInteger(IL_IMAGE_WIDTH), ilGetInteger(IL_IMAGE_HEIGHT),
             0, GL_RGBA, GL_UNSIGNED_BYTE, ilGetData());
If an image format has a header, you can generally assume it contains some important information necessary to correctly read the rest of the file. Filling it with "meaningless data" is inadvisable at best.
Since there is no actual struct in DevIL for the .raw header, let us take a look at the implementation of iLoadRawInternal () to figure out what those first 13 bytes are supposed to be.
// Internal function to load a raw image
ILboolean iLoadRawInternal()
{
    if (iCurImage == NULL) {
        ilSetError(IL_ILLEGAL_OPERATION);
        return IL_FALSE;
    }

    iCurImage->Width  = GetLittleUInt();  /* Bytes  0-3:  {Image Width}     */
    iCurImage->Height = GetLittleUInt();  /* Bytes  4-7:  {Image Height}    */
    iCurImage->Depth  = GetLittleUInt();  /* Bytes  8-11: {Image Depth}     */
    iCurImage->Bpp    = (ILubyte)igetc(); /* Byte   12:   {Bytes per-pixel} */
NOTE: The /* comments */ are my own
GetLittleUInt () reads a 32-bit unsigned integer in little-endian order and advances the read location appropriately. igetc () does the same for a single byte.
This is equivalent to the following C structure (minus the byte order consideration):
struct RAW_HEADER {
    uint32_t width;
    uint32_t height;
    uint32_t depth; // This is depth as in the number of 3D slices (not bit depth)
    uint8_t  bpp;   // **Bytes** per-pixel (1 = Luminance, 3 = RGB, 4 = RGBA)
};
If you read the rest of the implementation of iLoadRawInternal () in il_raw.c, you will see that without proper values in the header DevIL will not be able to calculate the correct file size. Filling in the correct values should help.
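For illustration, a hedged sketch of building that header in front of the pixel bytes before handing the lump to ilLoadL (the dimensions and bpp here are placeholders; the integers are written little-endian by hand so the sketch works regardless of host byte order):

#include <IL/il.h>
#include <cstdint>
#include <iostream>
#include <vector>

// Build the 13-byte IL_RAW header: width, height, depth as little-endian
// uint32, then bytes-per-pixel as a single byte, followed by the pixels.
std::vector<ILubyte> lump;
uint32_t width = 256, height = 256, depth = 1;   // placeholder dimensions
uint8_t  bpp   = 4;                              // 4 = RGBA (bytes per pixel)

auto putLE32 = [&](uint32_t v) {
    lump.push_back(v & 0xFF);
    lump.push_back((v >> 8)  & 0xFF);
    lump.push_back((v >> 16) & 0xFF);
    lump.push_back((v >> 24) & 0xFF);
};
putLE32(width);
putLE32(height);
putLE32(depth);
lump.push_back(bpp);                             // byte 12

// ...append width * height * bpp bytes of pixel data to 'lump' here...

if (!ilLoadL(IL_RAW, lump.data(), (ILuint)lump.size()))
    std::cout << "Fail" << std::endl;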

QGLBuffer::map returns NULL?

I'm trying to use QGLBuffer to display an image.
The sequence is something like:
initializeGL()
{
    glbuffer = QGLBuffer(QGLBuffer::PixelUnpackBuffer);
    glbuffer.create();
    glbuffer.bind();
    glbuffer.allocate(image_width*image_height*4); // RGBA
    glbuffer.release();
}

// Attempting to write an image directly to graphics memory.
// map() should map the texture into the address space and give me an address
// to write directly to, but it always returns NULL
unsigned char* dest = glbuffer.map(QGLBuffer::WriteOnly); // FAILS
MyGetImageFunction( dest );
glbuffer.unmap();

paint()
{
    glbuffer.bind();
    glBegin(GL_QUADS);
        glTexCoord2i(0,0); glVertex2i(0, height());
        glTexCoord2i(0,1); glVertex2i(0, 0);
        glTexCoord2i(1,1); glVertex2i(width(), 0);
        glTexCoord2i(1,0); glVertex2i(width(), height());
    glEnd();
    glbuffer.release();
}
There aren't any examples of using QGLBuffer in this way; it's pretty new.
Edit --- for searchers, here is the working solution -------
// Where glbuffer is defined as
glbuffer = QGLBuffer(QGLBuffer::PixelUnpackBuffer);

// sequence to get a pointer into a PBO, write data to it and copy it to a texture
glbuffer.bind();                             // bind before doing anything
unsigned char *dest = (unsigned char*)glbuffer.map(QGLBuffer::WriteOnly);
MyGetImageFunction(dest);
glbuffer.unmap();                            // need to unmap before the rest of openGL can access the PBO

glBindTexture(GL_TEXTURE_2D, texture);
// Note 'NULL' because memory is now onboard the card
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, image_width, image_height, glFormatExt, glType, NULL);
glbuffer.release();                          // but don't release until finished the copy

// PaintGL function
glBindTexture(GL_TEXTURE_2D, textures);
glBegin(GL_QUADS);
    glTexCoord2i(0,0); glVertex2i(0, height());
    glTexCoord2i(0,1); glVertex2i(0, 0);
    glTexCoord2i(1,1); glVertex2i(width(), 0);
    glTexCoord2i(1,0); glVertex2i(width(), height());
glEnd();
You should bind the buffer before mapping it!
In the documentation for QGLBuffer::map:
It is assumed that create() has been called on this buffer and that it has been bound to the current context.
In addition to VJovic's comments, I think you are missing a few points about PBOs:
A pixel unpack buffer does not give you a pointer to the graphics texture. It is a separate piece of memory allocated on the graphics card to which you can write to directly from the CPU.
The buffer can be copied into a texture by a glTexSubImage2D(....., 0) call, with the texture being bound as well, which you do not do. (0 is the offset into the pixel buffer). The copy is needed partly because textures have a different layout than linear pixel buffers.
See this page for a good explanation of PBO usages (I used it a few weeks ago to do async texture upload).
create will return false if the GL implementation does not support buffers, or there is no current QGLContext.
bind returns false if binding was not possible, usually because type() is not supported on this GL implementation.
You are not checking if these two functions passed.
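For example, a minimal sketch of checking them (glbuffer being the QGLBuffer from the question):

// Bail out early if creation or binding fails, per the docs quoted above.
if (!glbuffer.create()) {
    qWarning("QGLBuffer::create() failed - no buffer support or no current QGLContext");
    return;
}
if (!glbuffer.bind()) {
    qWarning("QGLBuffer::bind() failed - buffer type not supported by this GL implementation");
    return;
}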
I got the same thing: map returned NULL. When I used the following order, it was solved.
bool success = mPixelBuffer->create();
mPixelBuffer->setUsagePattern(QGLBuffer::DynamicDraw);
success = mPixelBuffer->bind();
mPixelBuffer->allocate(sizeof(imageData));
void* ptr = mPixelBuffer->map(QGLBuffer::ReadOnly);