Magick Pixel data garbled after minify - c++

I need to read in images of arbitrary sizes and apply them to GL textures. I am trying to resize the images with ImageMagick so that they fit inside a texture with a maximum dimension of 1024.
Here is my code:
Magick::Image image(filename);
int width = image.columns();
int height = image.rows();
cout << "Image dimensions: " << width << "x" << height << endl;

// resize it to fit a texture
while ( width>1024 || height>1024 ) {
    try {
        image.minify();
    }
    catch (exception &error) {
        cout << "Error minifying: " << error.what() << " Skipping." << endl;
        return;
    }
    width = image.columns();
    height = image.rows();
    cout << " -- minified to: " << width << "x" << height << endl;
}

// transform the pixels to something GL can use
Magick::Pixels view(image);
GLubyte *pixels = (GLubyte*)malloc( sizeof(GLubyte)*width*height*3 );
for ( ssize_t row=0; row<height; row++ ) {
    Magick::PixelPacket *im_pixels = view.get(0,row,width,1);
    for ( ssize_t col=0; col<width; col++ ) {
        *(pixels+(row*width+col)*3+0) = (GLubyte)im_pixels[col].red;
        *(pixels+(row*width+col)*3+1) = (GLubyte)im_pixels[col].green;
        *(pixels+(row*width+col)*3+2) = (GLubyte)im_pixels[col].blue;
    }
}
texPhoto = LoadTexture( pixels, width, height );
free(pixels);
The code for LoadTexure() looks like this:
GLuint LoadTexture(GLubyte* pixels, GLuint width, GLuint height) {
    GLuint textureId;
    glPixelStorei( GL_UNPACK_ALIGNMENT, 1 );
    glGenTextures( 1, &textureId );
    glBindTexture( GL_TEXTURE_2D, textureId );
    glTexImage2D( GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, (unsigned int*)pixels );
    return textureId;
}
All the textures work great except when they have had image.minify() applied to them. Once minified the pixels are basically just random noise. There must be something else going on that I'm not aware of. I am probably missing something in the ImageMagick docs about what I am supposed to do to get the pixel data after I minify it.
How do I properly get the pixel data after a call to minify()?

It turns out that the problem was in the libraries themselves and has to do with the environment I'm running in: a Raspberry Pi embedded system. A plain recompile of the sources might have been sufficient, but for my purposes I decided to also reduce the quantum depth to 8 bits rather than Magick's default of 16, and I chose a few other configure options for my scenario.
It basically boiled down to this:
apt-get remove libmagick++-dev
wget http://www.imagemagick.org/download/ImageMagick.tar.gz
tar xvfz ImageMagick.tar.gz
cd ImageMagick-6.8.7-2
./configure --with-quantum-depth=8 --disable-openmp --disable-largefile --without-freetype --without-x
make
make install
And then compile against these libraries instead. I also needed to make soft links in /usr/lib to the .so files.
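For anyone hitting the same symptom, a likely explanation: with a Q16 build, 8-bit channel values are stored scaled by 257, so the low byte that the (GLubyte) casts keep happens to equal the original 8-bit value for untouched images, while after minify() the averaged 16-bit values have essentially arbitrary low bytes, which looks like noise. An alternative that works regardless of the build's quantum depth is to let Magick++ export packed 8-bit RGB for you. A minimal sketch, using the same image, width, height and LoadTexture() as above:

// Sketch: have ImageMagick hand back packed 8-bit RGB, independent of QuantumDepth.
GLubyte *pixels = (GLubyte*)malloc( sizeof(GLubyte)*width*height*3 );
image.write(0, 0, width, height, "RGB", Magick::CharPixel, pixels);  // export into the buffer
texPhoto = LoadTexture( pixels, width, height );
free(pixels);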

Related

DDS texture transparency rendered black Opengl

I am currently trying to render textured objects in OpenGL. Everything worked fine until I wanted to render a texture with transparency. Instead of showing the object as transparent, it just rendered totally black.
The method for loading the texture file is this:
// structures for reading and information variables
char magic[4];
unsigned char header[124];
unsigned int width, height, linearSize, mipMapCount, fourCC;
unsigned char* dataBuffer;
unsigned int bufferSize;
fstream file(path, ios::in|ios::binary);
// read magic and header
if (!file.read((char*)magic, sizeof(magic))){
cerr<< "File " << path << " not found!"<<endl;
return false;
}
if (magic[0]!='D' || magic[1]!='D' || magic[2]!='S' || magic[3]!=' '){
cerr<< "File does not comply with dds file format!"<<endl;
return false;
}
if (!file.read((char*)header, sizeof(header))){
cerr<< "Not able to read file information!"<<endl;
return false;
}
// derive information from header
height = *(int*)&(header[8]);
width = *(int*)&(header[12]);
linearSize = *(int*)&(header[16]);
mipMapCount = *(int*)&(header[24]);
fourCC = *(int*)&(header[80]);
// determine dataBuffer size
bufferSize = mipMapCount > 1 ? linearSize * 2 : linearSize;
dataBuffer = new unsigned char [bufferSize*2];
// read data and close file
if (file.read((char*)dataBuffer, bufferSize/1.5))
cout<<"Loading texture "<<path<<" successful"<<endl;
else{
cerr<<"Data of file "<<path<<" corrupted"<<endl;
return false;
}
file.close();
// check pixel format
unsigned int format;
switch(fourCC){
case FOURCC_DXT1:
format = GL_COMPRESSED_RGBA_S3TC_DXT1_EXT;
break;
case FOURCC_DXT3:
format = GL_COMPRESSED_RGBA_S3TC_DXT3_EXT;
break;
case FOURCC_DXT5:
format = GL_COMPRESSED_RGBA_S3TC_DXT5_EXT;
break;
default:
cerr << "Compression type not supported or corrupted!" << endl;
return false;
}
glGenTextures(1, &ID);
glBindTexture(GL_TEXTURE_2D, ID);
glPixelStorei(GL_UNPACK_ALIGNMENT,1);
unsigned int blockSize = (format == GL_COMPRESSED_RGBA_S3TC_DXT1_EXT) ? 8 : 16;
unsigned int offset = 0;
/* load the mipmaps */
for (unsigned int level = 0; level < mipMapCount && (width || height); ++level) {
unsigned int size = ((width+3)/4)*((height+3)/4)*blockSize;
glCompressedTexImage2D(GL_TEXTURE_2D, level, format, width, height,
0, size, dataBuffer + offset);
offset += size;
width /= 2;
height /= 2;
}
textureType = DDS_TEXTURE;
return true;
In the fragment shader I just set gl_FragColor = texture2D( myTextureSampler, UVcoords );
I hope that there is an easy explanation, such as some missing code.
In the OpenGL initialization I enabled GL_BLEND and set a blend function.
Does anyone have an idea of what I did wrong?
Make sure the blend function is the correct function for what you are trying to accomplish. For what you've described that should be glBlendFunc(GL_SRC_ALPHA,GL_ONE_MINUS_SRC_ALPHA);
You probably shouldn't set the blend function in your OpenGL initialization function but should wrap it around your draw calls like:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
// gl draw functions (glDrawArrays, glDrawElements, etc..)
glDisable(GL_BLEND);
Are you clearing the 2D texture binding before you swap buffers? i.e ...
glBindTexture(GL_TEXTURE_2D, 0);
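Putting those two suggestions together, a rough sketch of how the transparent object's draw might be wrapped (ID is the texture created in the loader above; the draw call itself is whatever you already use):

// Sketch: enable blending only around the transparent object's draw call,
// then restore state and clear the 2D texture binding afterwards.
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glBindTexture(GL_TEXTURE_2D, ID);     // the DDS texture created in the loader above
// ... draw the transparent object here (glDrawArrays / glDrawElements) ...
glBindTexture(GL_TEXTURE_2D, 0);      // clear the binding before swapping buffers
glDisable(GL_BLEND);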

Using the FreeType lib to create text bitmaps to draw in OpenGL 3.x

At the moment I'm not too sure where my problem is. I can draw loaded images as textures no problem; however, when I try to generate a bitmap with a char on it, I just get a black box.
I am confident that the problem is when I generate and upload the texture.
Here is the method for that; the top section of the if statement just draws a texture from an image loaded from file (res/texture.jpg), and that draws perfectly. The else part of the if statement tries to generate and upload a texture with the char (the variable char enter) on it.
Source code below; I will add the shaders and more of the C++ if needed, but they work fine for the image.
void uploadTexture()
{
if(enter=='/'){
// Draw the image.
GLenum imageFormat;
glimg::SingleImage image = glimg::loaders::stb::LoadFromFile("res/texture.jpg")->GetImage(0,0,0);
glimg::OpenGLPixelTransferParams params = glimg::GetUploadFormatType(image.GetFormat(), 0);
imageFormat = glimg::GetInternalFormat(image.GetFormat(),0);
glGenTextures(1,&textureBufferObject);
glBindTexture(GL_TEXTURE_2D, textureBufferObject);
glimg::Dimensions dimensions = image.GetDimensions();
cout << "Texture dimensions w "<< dimensions.width << endl;
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, dimensions.width, dimensions.height, 0, params.format, params.type, image.GetImageData());
}else
{
// Draw the char using the FreeType Lib
FT_Init_FreeType(&ft);
FT_New_Face(ft, "arial.ttf", 0, &face);
FT_Set_Pixel_Sizes(face, 0, 48);
FT_GlyphSlot g = face->glyph;
glGenTextures(1,&textureBufferObject);
glBindTexture(GL_TEXTURE_2D, textureBufferObject);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
FT_Load_Char(face, enter, FT_LOAD_RENDER);
FT_Bitmap theBitmap = g->bitmap;
int BitmapWidth = g->bitmap.width;
int BitmapHeight = g->bitmap.rows;
cout << "draw char - " << enter << endl;
cout << "g->bitmap.width - " << g->bitmap.width << endl;
cout << "g->bitmap.rows - " << g->bitmap.rows << endl;
int TextureWidth =roundUpToNextPowerOfTwo(g->bitmap.width);
int TextureHeight =roundUpToNextPowerOfTwo(g->bitmap.rows);
cout << "texture width x height - " << TextureWidth <<" x " << TextureHeight << endl;
GLubyte* TextureBuffer = new GLubyte[ TextureWidth * TextureHeight ];
for(int j = 0; j < TextureHeight; ++j)
{
for(int i = 0; i < TextureWidth; ++i)
{
TextureBuffer[ j*TextureWidth + i ] = (j >= BitmapHeight || i >= BitmapWidth ? 0 : g->bitmap.buffer[ j*BitmapWidth + i ]);
}
}
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, TextureWidth, TextureHeight, 0, GL_RGB8, GL_UNSIGNED_BYTE, TextureBuffer);
}
}
I'm not sure about the OpenGL part, but your algorithm for processing the FT bitmap is not correct. The number of bytes in each row of an FT bitmap is bitmap->pitch. The number of bytes per pixel also depends on which render mode you used to load the character. For example, if bitmap->pixel_mode is FT_PIXEL_MODE_LCD, each pixel is encoded as 3 bytes, in the order R, G, B, and the values are actually alpha mask values, while if the pixel mode is FT_PIXEL_MODE_GRAY, each pixel is 1 byte and the value is the gray level.
Take a look at http://freetype.sourceforge.net/freetype2/docs/reference/ft2-basic_types.html#FT_Bitmap
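To illustrate (a sketch only, assuming FT_PIXEL_MODE_GRAY, which FT_LOAD_RENDER produces by default, and a GL 3.x core context as the title suggests), the copy loop above could respect the pitch like this, and the upload should use a single-channel format; GL_RGB8 is only an internal format, not a legal format argument:

// Sketch: honour bitmap.pitch (bytes per row) instead of assuming tightly packed rows.
FT_Bitmap &bmp = g->bitmap;
for (int j = 0; j < TextureHeight; ++j) {
    for (int i = 0; i < TextureWidth; ++i) {
        bool inside = (j < (int)bmp.rows) && (i < (int)bmp.width);
        TextureBuffer[ j*TextureWidth + i ] = inside ? bmp.buffer[ j*bmp.pitch + i ] : 0;
    }
}
// One byte per pixel wants a single-channel texture: GL_R8 / GL_RED on a 3.x core
// context (sample the glyph coverage from the .r channel in the shader).
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, TextureWidth, TextureHeight, 0,
             GL_RED, GL_UNSIGNED_BYTE, TextureBuffer);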
The very first thing I'd do would be to look at the return codes that FreeType gives for pretty much ALL the FT_* functions.
That's good practice anyway, but if you have a problem it should narrow it down substantially.
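For example, a sketch of what those checks could look like in the else branch above (every FT_* call returns 0 on success; uploadTexture() returns void, so the early returns just bail out):

// Sketch: check every FreeType call and report which one failed.
if (FT_Init_FreeType(&ft)) {
    cerr << "FT_Init_FreeType failed" << endl;
    return;
}
if (FT_New_Face(ft, "arial.ttf", 0, &face)) {
    cerr << "FT_New_Face failed - is arial.ttf next to the executable?" << endl;
    return;
}
if (FT_Load_Char(face, enter, FT_LOAD_RENDER)) {
    cerr << "FT_Load_Char failed for '" << enter << "'" << endl;
    return;
}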

C++ OpenGL glTexImage2D Access Violation

I'm writing an application using OpenGL (freeglut and glew).
I also wanted textures so I did some research on the Bitmap file format and wrote a struct for the main header and another for the DIB header (info header).
Then I started writing the loader. It automatically binds the texture to OpenGL. Here is the function:
static unsigned int ReadInteger(FILE *fp)
{
int a, b, c, d;
// Integer is 4 bytes long.
a = getc(fp);
b = getc(fp);
c = getc(fp);
d = getc(fp);
// Convert the 4 bytes to an integer.
return ((unsigned int) a) + (((unsigned int) b) << 8) +
(((unsigned int) c) << 16) + (((unsigned int) d) << 24);
}
static unsigned int ReadShort(FILE *fp)
{
int a, b;
// Short is 2 bytes long.
a = getc(fp);
b = getc(fp);
// Convert the 2 bytes to a short (int16).
return ((unsigned int) a) + (((unsigned int) b) << 8);
}
GLuint LoadBMP(const char* filename)
{
FILE* file;
// Check if a file name was provided.
if (!filename)
return 0;
// Try to open file.
fopen_s(&file, filename, "rb");
// Return if the file could not be open.
if (!file)
{
cout << "Warning: Could not find texture '" << filename << "'." << endl;
return 0;
}
// Read signature.
unsigned char signature[2];
fread(&signature, 2, 1, file);
// Use signature to identify a valid bitmap.
if (signature[0] != BMPSignature[0] || signature[1] != BMPSignature[1])
{
fclose(file);
return 0;
}
// Read width and height.
unsigned long width, height;
fseek(file, 16, SEEK_CUR); // After the signature we have 16 bytes until the width.
width = ReadInteger(file);
height = ReadInteger(file);
// Calculate data size (we'll only support 24bpp).
unsigned long dataSize;
dataSize = width * height * 3;
// Make sure planes is 1.
if (ReadShort(file) != 1)
{
cout << "Error: Could not load texture '" << filename << "' (planes is not 1)." << endl;
return 0;
}
// Make sure bpp is 24.
if (ReadShort(file) != 24)
{
cout << "Error: Could not load texture '" << filename << "' (bits per pixel is not 24)." << endl;
return 0;
}
// Move pointer to beginning of data. (after the bpp we have 24 bytes until the data)
fseek(file, 24, SEEK_CUR);
// Allocate memory and read the image data.
unsigned char* data = new unsigned char[dataSize];
if (!data)
{
fclose(file);
cout << "Warning: Could not allocate memory to store data of '" << filename << "'." << endl;
return 0;
}
fread(data, dataSize, 1, file);
if (data == NULL)
{
fclose(file);
cout << "Warning: Could no load data from '" << filename << "'." << endl;
return 0;
}
// Close the file.
fclose(file);
// Create the texture.
GLuint texture;
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR); //NEAREST);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGB, width, height, GL_BGR_EXT, GL_UNSIGNED_BYTE, data);
return texture;
}
I know that the bitmap's data is correctly read because I outputted its data to the console and compared it with the image opened in Paint.
The problem here is this line:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, dibheader.width,
dibheader.height, 0, GL_RGB, GL_UNSIGNED_BYTE, data);
Most of the time I run the application, this line crashes with the error:
Unhandled exception at 0x008ffee9 in GunsGL.exe: 0xC0000005: Access violation reading location 0x00af7002.
This is the Disassembly of where the error occurs:
movzx ebx,byte ptr [esi+2]
It's not just an error with my loader, because the same thing happens with loaders I've downloaded.
A downloaded loader that I used was this one from NeHe.
EDIT: (CODE UPDATED ABOVE)
I rewrote the loader, but I still get the crash on the same line. Instead of that crash, sometimes I get a crash in mlock.c (same error message, if I recall correctly):
void __cdecl _lock (
int locknum
)
{
/*
* Create/open the lock, if necessary
*/
if ( _locktable[locknum].lock == NULL ) {
if ( !_mtinitlocknum(locknum) )
_amsg_exit( _RT_LOCK );
}
/*
* Enter the critical section.
*/
EnterCriticalSection( _locktable[locknum].lock );
}
On the line:
EnterCriticalSection( _locktable[locknum].lock );
Also, here is a screenshot from one of the times the application doesn't crash (the texture is obviously not right):
http://i.stack.imgur.com/4Mtso.jpg
Edit2:
Updated code with the new working one.
(The reply marked as an answer does not contain all that was needed for this to work, but it was vital)
Try glPixelStorei(GL_UNPACK_ALIGNMENT, 1) before your glTexImage2D() call.
I know, it's tempting to read binary data like this
BitmapHeader header;
BitmapInfoHeader dibheader;
/*...*/
// Read header.
fread(&header, sizeof(BitmapHeader), 1, file);
// Read info header.
fread(&dibheader, sizeof(BitmapInfoHeader), 1, file);
but you really shouldn't do it that way. Why? Because the memory layout of structures may be padded to meet alignment constraints (yes, I know about packing pragmas), the compiler's type sizes may not match the data sizes in the binary file, and last but not least the endianness may not match.
Always read binary data into an intermediary buffer from which you extract the fields in a well defined way, with exactly specified offsets and typing.
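For illustration, a sketch of that approach for the handful of fields this loader needs, with offsets taken from the classic BITMAPFILEHEADER / BITMAPINFOHEADER layout (the helper names here are made up):

// Sketch: read both headers into one raw buffer, then extract fields at fixed
// offsets, independent of struct padding, compiler type sizes and endianness.
static unsigned int U32(const unsigned char* p) {
    return (unsigned int)p[0] | ((unsigned int)p[1] << 8) |
           ((unsigned int)p[2] << 16) | ((unsigned int)p[3] << 24);
}
static unsigned int U16(const unsigned char* p) {
    return (unsigned int)p[0] | ((unsigned int)p[1] << 8);
}

unsigned char hdr[54];                     // 14-byte file header + 40-byte info header
if (fread(hdr, sizeof(hdr), 1, file) != 1) { fclose(file); return 0; }

unsigned int dataOffset = U32(hdr + 10);   // bfOffBits: where the pixel data starts
unsigned int width      = U32(hdr + 18);   // biWidth
unsigned int height     = U32(hdr + 22);   // biHeight
unsigned int planes     = U16(hdr + 26);   // biPlanes, must be 1
unsigned int bpp        = U16(hdr + 28);   // biBitCount, expect 24 here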
// Allocate memory for the image data.
data = (unsigned char*)malloc(dibheader.dataSize);
If this is C++, then use the new operator. If this is C, then don't cast from void * to the l-value's type; it's bad style and may hide useful compiler warnings.
// Verify memory allocation.
if (!data)
{
free(data);
If data is NULL there is nothing to free (free(NULL) is a no-op anyway).
// Swap R and B because bitmaps are BGR and OpenGL uses RGB.
for (unsigned int i = 0; i < dibheader.dataSize; i += 3)
{
B = data[i]; // Backup Blue.
data[i] = data[i + 2]; // Place red in right place.
data[i + 2] = B; // Place blue in right place.
}
OpenGL does indeed support BGR component ordering, so the swap above is unnecessary. The format parameter is, surprise, GL_BGR
// Generate texture image.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, dibheader.width, dibheader.height, 0, GL_RGB, GL_UNSIGNED_BYTE, data);
Well, and this misses setting all of the pixel store parameters. Always set every pixel store parameter before doing pixel transfers; they may be left in some undesired state from a previous operation. Better safe than sorry.
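For illustration only, using the same variable names as the quoted snippet, pinning down the unpack state could look roughly like this:

// Sketch: set the unpack state explicitly so nothing leaks in from a previous transfer.
glPixelStorei(GL_UNPACK_ALIGNMENT,   1);        // this client buffer has byte-packed rows
glPixelStorei(GL_UNPACK_ROW_LENGTH,  0);        // 0 = use the width passed to glTexImage2D
glPixelStorei(GL_UNPACK_SKIP_PIXELS, 0);
glPixelStorei(GL_UNPACK_SKIP_ROWS,   0);
glPixelStorei(GL_UNPACK_SWAP_BYTES,  GL_FALSE);
glPixelStorei(GL_UNPACK_LSB_FIRST,   GL_FALSE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, dibheader.width, dibheader.height, 0,
             GL_BGR, GL_UNSIGNED_BYTE, data);   // GL_BGR (GL_BGR_EXT on old Windows headers)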

SDL_GL_SetAttribute doesn't set color sizes

I'm trying to change the size of the Accumulation buffer color components in *SDL_opengl*, but the SetAttribute command doesn't seem to be doing anything. Here's the code I'm using.
(To reduce code size I am only dealing with the RED component here, but in the actual code I pass all 4 components to both the color and the accumulation buffer and the effect is the same)
#include <iostream>
#include <SDL/SDL.h>
#include <SDL/SDL_opengl.h>
int main(int argc, char *argv[])
{
//Initialize all SDL subsystems
if( SDL_Init( SDL_INIT_EVERYTHING ) < 0 )std::cout << "SDL ERROR!";
// Try to Set the BitSize, while checking for errors
int BitSize = 1; //This number never makes a difference!!
int ErrorCode = 0;
ErrorCode += SDL_GL_SetAttribute( SDL_GL_DOUBLEBUFFER, 1 ) // This one WORKS
+ SDL_GL_SetAttribute( SDL_GL_ACCUM_RED_SIZE, BitSize ) // These ones DON'T
+ SDL_GL_SetAttribute( SDL_GL_RED_SIZE, BitSize )
;
if( ErrorCode < 0 )std::cout << "SDL ERROR!";
// Create the Window
int w = 1000, h = 700;
int bpp = 32;
SDL_Surface* Screen = SDL_SetVideoMode( w, h, bpp, SDL_OPENGL |
SDL_NOFRAME |
SDL_DOUBLEBUF );
if( !Screen )std::cout << "SDL ERROR!";
// Check if BitSize's are correct (they are not)
// I'm using glGetInteger, but SDL_GL_GetAttribute yields the same output.
glGetIntegerv( GL_ACCUM_RED_BITS, &BitSize );
std::cout << "AccumBuffer color component size in bits is " << BitSize << "\n";
glGetIntegerv( GL_RED_BITS, &BitSize );
std::cout << "ColorBuffer color component size in bits is " << BitSize << "\n";
ErrorCode = SDL_GL_GetAttribute( SDL_GL_BUFFER_SIZE, &BitSize );
std::cout << "FrameBuffer BitSize is " << BitSize << "\n";
if( ErrorCode < 0 )std::cout << "SDL ERROR!";
if( glGetError() != GL_NO_ERROR )std::cout << "GL ERROR";
return 0;
}
This compiles fine, and always prints the following output:
AccumBuffer color component size in bits is 16
ColorBuffer color component size in bits is 8
FrameBuffer Bit Size is 32
no matter what I set the BitSize variable to. It's like the SDL_GL_SetAttribute( SDL_GL_*_SIZE, int ) calls aren't having any effect. I can understand that the color buffer components might be restricted to 8 bits because I initialize the window with 32 bpp, but shouldn't I be able to change the accumulation buffer color resolution?
The attribute values you set are only requests for the minimum you expect; it is perfectly valid for the implementation to give you something larger.
BTW: The accumulation buffer is probably not HW accelerated, unless you have a professional grade GPU (FireGL, Quadro). Use a Framebuffer Object instead.
Accumulation buffer bit depths are also restricted to what the GPU supports, and in your case, it looks like your GPU only supports a 16-bit-per-component accumulation buffer.
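If you go the FBO route, a rough sketch of requesting a higher-precision render target as an accumulation substitute (GL 3.0 / ARB_framebuffer_object entry points, so an extension loader such as GLEW is assumed; w and h as in the question):

// Sketch: a 16-bit float color attachment as an accumulation-buffer substitute.
GLuint fbo, colorTex;
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, w, h, 0, GL_RGBA, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    std::cout << "FBO incomplete!";
// Accumulate into colorTex over several passes (e.g. with additive blending),
// then bind the default framebuffer again and draw colorTex as a textured quad.
glBindFramebuffer(GL_FRAMEBUFFER, 0);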

OpenGL Issue Drawing a Large Image Texture causing Skewing

I'm trying to store a 1365x768 image on a 2048x1024 texture in OpenGL ES but the resulting image once drawn appears skewed. If I run the same 1365x768 image through gluScaleImage() and fit it onto the 2048x1024 texture it looks fine when drawn but this OpenGL call is slow and hurts performance.
I'm doing this on an Android device (Motorola Milestone) which has 256MB of memory. Not sure if the memory is a factor though since it works fine when scaled using gluScaleImage() (it's just slower.)
Mapping smaller textures (854x480 onto 1024x512, for example) works fine, though. Does anyone know why this is, and have any suggestions for what I can do about it?
Update
Some code snippets to help understand context...
// uiImage is loaded. The texture dimensions are determined from upsizing the image
// dimensions to a power of two size:
// uiImage->_width = 1365
// uiImage->_height = 768
// width = 2048
// height = 1024
// Once the image is loaded:
// INT retval = gluScaleImage(GL_RGBA, uiImage->_width, uiImage->_height, GL_UNSIGNED_BYTE, uiImage->_texels, width, height, GL_UNSIGNED_BYTE, data);
copyImage(GL_RGBA, uiImage->_width, uiImage->_height, GL_UNSIGNED_BYTE, uiImage->_texels, width, height, GL_UNSIGNED_BYTE, data);
if (pixelFormat == RGB565 || pixelFormat == RGBA4444)
{
unsigned char* tempData = NULL;
unsigned int* inPixel32;
unsigned short* outPixel16;
tempData = new unsigned char[height*width*2];
inPixel32 = (unsigned int*)data;
outPixel16 = (unsigned short*)tempData;
if(pixelFormat == RGB565)
{
// "RRRRRRRRGGGGGGGGBBBBBBBBAAAAAAAA" --> "RRRRRGGGGGGBBBBB"
for(unsigned int i = 0; i < numTexels; ++i, ++inPixel32)
{
*outPixel16++ = ((((*inPixel32 >> 0) & 0xFF) >> 3) << 11) |
((((*inPixel32 >> 8) & 0xFF) >> 2) << 5) |
((((*inPixel32 >> 16) & 0xFF) >> 3) << 0);
}
}
if(tempData != NULL)
{
delete [] data;
data = tempData;
}
}
// [snip..]
// Copy function (mostly)
static void copyImage(GLint widthin, GLint heightin, const unsigned int* datain, GLint widthout, GLint heightout, unsigned int* dataout)
{
unsigned int* p1 = const_cast<unsigned int*>(datain);
unsigned int* p2 = dataout;
int nui = widthin * sizeof(unsigned int);
for(int i = 0; i < heightin; i++)
{
memcpy(p2, p1, nui);
p1 += widthin;
p2 += widthout;
}
}
In the render code, without changing my texture coordinates, I should see the correct image when using gluScaleImage(), and a smaller image (that requires some later correction factors) when using the copyImage() code. This is what happens when the image is small (854x480, for example, works fine with copyImage()), but when I use the 1365x768 image, that's when the skewing appears.
Finally solved the issue. The first thing to know is the maximum texture size allowed by the device:
GLint texSize;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &texSize);
When I ran this, the maximum texture size for the Motorola Milestone came back as 2048x2048, which was fine in my case.
After messing with the texture mapping to no end, I finally decided to try opening and re-saving the image... and voilà, it suddenly began working. I don't know what was wrong with the format the original image was stored in, but as advice to anyone else experiencing a similar problem: it might be worth looking at the image itself.
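One more option worth mentioning for this kind of setup: instead of copying the 1365x768 pixels into a 2048x1024 buffer on the CPU, you can allocate the power-of-two texture without data and upload just the sub-image, then scale your texture coordinates by 1365/2048 and 768/1024. A sketch, assuming the RGBA path and that data holds the tightly packed 1365x768 image:

// Sketch: reserve POT storage with no data, then upload only the real image.
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 2048, 1024, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);        // storage only
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 1365, 768,
                GL_RGBA, GL_UNSIGNED_BYTE, data);     // actual pixels, no CPU resize
// Then sample with u in [0, 1365/2048.0] and v in [0, 768/1024.0].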