Loading a BMP image at a specific index in OpenGL - C++

I have to load a 24-bit BMP file at a certain (x, y) position of a GLUT window using OpenGL. I found a function that uses the glaux library to do so; the color passed as ignoreColor is skipped (drawn fully transparent) during rendering.
void iShowBMP(int x, int y, char filename[], int ignoreColor)
{
    AUX_RGBImageRec *TextureImage = auxDIBImageLoad(filename);
    int width = TextureImage->sizeX;
    int height = TextureImage->sizeY;
    int nPixels = width * height;
    int *rgPixels = new int[nPixels];

    for (int i = 0, j = 0; i < nPixels; i++, j += 3)
    {
        // Pack the three color bytes from glaux into a single 0x00BBGGRR value.
        int rgb = 0;
        for (int k = 2; k >= 0; k--)
            rgb = (rgb << 8) | TextureImage->data[j + k];

        // Alpha-key: pixels matching ignoreColor get alpha 0, all others 255.
        int alpha = (rgb == ignoreColor) ? 0 : 255;
        rgPixels[i] = (alpha << 24) | rgb;
    }

    glRasterPos2f(x, y);
    glDrawPixels(width, height, GL_RGBA, GL_UNSIGNED_BYTE, rgPixels);

    delete[] rgPixels;
    free(TextureImage->data);
    free(TextureImage);
}
But the problem is that glaux is long obsolete. Also, when I call this function, the image is rendered and shown for a minute or so, then an error pops up (without any error message) and the GLUT window disappears. Judging from the return value shown in the console, it looks like a runtime error.
Is there an alternative to this function that doesn't use glaux? I have looked at CImg, DevIL, etc., but none of them seems to work like this iShowBMP function. I am doing my project in Code::Blocks.
I have to load the image every frame to keep the implementation consistent with the other parts of the program. Also, the BMP file whose name is passed to the function has both a width and a height that are powers of 2.

It turned out that the two free() calls at the end were not being executed for some unknown reason, so memory consumption kept growing with every frame; that is why the program crashed after a moment. I later solved it by replacing glaux with stb_image.h.
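For anyone needing the replacement: below is a minimal sketch of the same function written on top of stb_image.h instead of glaux. It assumes a 24-bit BMP and keeps the alpha-keying behaviour of the original; the vertical flip is there because stb_image returns rows top-to-bottom, while glDrawPixels (like glaux) expects the bottom row first.

#define STB_IMAGE_IMPLEMENTATION
#include "stb_image.h"

void iShowBMP(int x, int y, const char *filename, int ignoreColor)
{
    int width, height, channels;
    stbi_set_flip_vertically_on_load(1); // give us the bottom row first, as glDrawPixels expects
    unsigned char *data = stbi_load(filename, &width, &height, &channels, 3);
    if (!data)
        return; // missing file or unsupported image

    int nPixels = width * height;
    int *rgPixels = new int[nPixels];
    for (int i = 0, j = 0; i < nPixels; i++, j += 3)
    {
        // stb_image stores bytes in R,G,B order; pack them as 0x00BBGGRR like the original
        int rgb = (data[j + 2] << 16) | (data[j + 1] << 8) | data[j];
        int alpha = (rgb == ignoreColor) ? 0 : 255;
        rgPixels[i] = (alpha << 24) | rgb;
    }

    glRasterPos2f(x, y);
    glDrawPixels(width, height, GL_RGBA, GL_UNSIGNED_BYTE, rgPixels);

    delete[] rgPixels;
    stbi_image_free(data); // frees everything stb_image allocated, so nothing leaks this time
}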

Related

Shrink image C++ - 'System.AccessViolationException'

I have a project where we have to develop some image processing functions. One of the functions is shrinking an image.
This is the description of the function:
void averageRegions(int blockWidth, int blockHeight)
INPUTS: Integers indicating the width and height of the blocks to be averaged
OUTPUTS: NONE
When this function is called, you should create a new image that will consist of 1 pixel for every block of size
blockWidth by blockHeight pixels in the original image, with each pixel being the average color of the pixels in that
region in the original image.
Please note that it may be easier if you split this into 2 functions and call your helper function from within this one.
The second function could then just calculate the average value of a block of pixels given to it, and return that
to the original function to be used. However, this implementation is up to you! Complete it as you see fit.
I have completed the code for it, but after closing the app I get this error:
An unhandled exception of type 'System.AccessViolationException' occurred in MCS2514Pgm2.exe
Additional information: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
or this one:
Heap Corruption Detected: after Normal block (#126) at 0x004cF6c0. CRT detected that the application wrote to memory after end of heap buffer.
This is the function code:
void averageRegions(int blockWidth, int blockHeight)
{
    //please add the code
    int height = inImage.getHeight();
    int width = inImage.getWidth();
    pixel** myPixels = inImage.getPixels();
    pixel* pixelptr;
    int Rsum = 0, Gsum = 0, Bsum = 0;
    int Ravg, Gavg, Bavg, pcount = 0, m, n;

    outImage.createNewImage(width/blockWidth, height/blockHeight);
    pixel** outPixels = outImage.getPixels();
    //pixelptr = &myPixels[0][4];

    for(int x = 0; x < height; x += blockHeight)
    {
        for(int y = 0; y < width; y += blockWidth)
        {
            // sum up the colors of the current block
            for(int i = x; i < blockHeight+x; i++)
            {
                for(int j = y; j < blockWidth+y; j++)
                {
                    Rsum += myPixels[i][j].red;
                    Gsum += myPixels[i][j].green;
                    Bsum += myPixels[i][j].blue;
                    pcount++;
                }
            }
            Ravg = Rsum/pcount;
            Gavg = Gsum/pcount;
            Bavg = Bsum/pcount;

            // write the average back to the block and to the output pixel
            for(int i = x; i < blockHeight+x; i++)
            {
                for(int j = y; j < blockWidth+y; j++)
                {
                    myPixels[i][j].red = Ravg;
                    myPixels[i][j].green = Gavg;
                    myPixels[i][j].blue = Bavg;
                    m = x/blockHeight;
                    n = y/blockWidth;
                    outPixels[m][n].red = myPixels[i][j].red;
                    outPixels[m][n].green = myPixels[i][j].green;
                    outPixels[m][n].blue = myPixels[i][j].blue;
                }
            }
            pcount = 0;
            Rsum = 0;
            Gsum = 0;
            Bsum = 0;
        }
    }
    inImage = outImage;
}
This is image.h:
#ifndef IMAGE
#define IMAGE

#include <atlimage.h>
#include <string>
#include <cstdlib>
#include "globals.h"
#include "pixel.h"

using namespace std;

class image {
public:
    image();                 //the image constructor (initializes everything)
    image(string filename);  //an image constructor that directly loads an image from disk
    ~image();                //the image destructor (deletes the dynamically created pixel array)
    void createNewImage(int width, int height); //deletes any current image data and creates a new blank image
                             //with the specified width/height and allocates the needed number of pixels dynamically
    bool loadImage(string filename); //load an image from the specified file path. Returns true if it works,
                             //false if it is not a valid image. Note that we only accept images of the RGB 8-bit colorspace!
    void saveImage(string filename); //save an image to the specified path
    pixel** getPixels();     //return the 2-dimensional pixel array
    int getWidth();          //return the width of the image
    int getHeight();         //return the height of the image
    void viewImage(CImage* myImage); //called by the Windows GUI; returns the image in a format the GUI understands

private:
    void pixelsToCImage(CImage* myImage); //called internally by the image class.
                             //Converts our pixel struct array to a standard BGR uchar array with word spacing.
                             //(Don't worry about what this does)
    pixel** pixels;          //pixel data array for image
    int width, height;       //stores the image dimensions
};

#endif
And this is Pixel.h:
#ifndef PIXEL_H
#define PIXEL_H

class pixel
{
public:
    unsigned char red;   //the red component
    unsigned char green; //the green component
    unsigned char blue;  //the blue component
};

#endif
Can anyone tell me why I am getting this error?
In addition, the error takes me to this line in dbgdel.cpp:
/* verify block type */
_ASSERTE(_BLOCK_TYPE_IS_VALID(pHead->nBlockUse));
This error occurs because you are accessing memory outside of what was allocated for an array, and there are several places in your code where this can happen.
If height is not a multiple of blockHeight, or width is not a multiple of blockWidth, your i/j loops will access elements outside the memory allocated for myPixels.
Another possibility is writing out of bounds to outPixels when blockHeight and blockWidth are not equal; check whether your computation of m and n has blockHeight and blockWidth swapped (you are dividing x by blockHeight).
In
for (int x = 0; x < height; x += blockHeight)
say height is 100 and blockHeight is 33
x == 0. 0 < 100, so the body is entered and iterates 0 -> 32
x == 33. 33 < 100, so the body is entered and iterates 33 -> 65
x == 66. 66 < 100, so the body is entered and iterates 66 -> 98
x == 99. 99 < 100, so the body is entered and iterates 99 -> 131
Sadly, rows 100 through 131 do not exist.
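One way to guard against this (a sketch, not necessarily what your assignment requires) is to clamp the inner loops to the image bounds, so a partial block at the edge averages only the pixels that actually exist:

// needs #include <algorithm> for std::min
for (int x = 0; x < height; x += blockHeight)
{
    for (int y = 0; y < width; y += blockWidth)
    {
        int xEnd = std::min(x + blockHeight, height); // one past the last valid row
        int yEnd = std::min(y + blockWidth, width);   // one past the last valid column

        int Rsum = 0, Gsum = 0, Bsum = 0, pcount = 0;
        for (int i = x; i < xEnd; i++)
        {
            for (int j = y; j < yEnd; j++)
            {
                Rsum += myPixels[i][j].red;
                Gsum += myPixels[i][j].green;
                Bsum += myPixels[i][j].blue;
                pcount++;
            }
        }
        // ...compute Ravg/Gavg/Bavg from pcount and write them back as before
    }
}

Note that outImage was created with the truncated dimensions width/blockWidth by height/blockHeight, so you also need to make sure m and n stay below those bounds before writing to outPixels.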

FreeType2 my_draw_bitmap undefined

I am trying to run the "a. Basic code" example from "7. Simple text rendering" here, but the function my_draw_bitmap seems to be undefined. I tried using GLEW, but the issue is the same. Then I looked at the pngwriter library here, but compiling it for Visual Studio 2013 with CMake gives errors.
Can someone please tell me where the my_draw_bitmap function is defined?
The tutorial states
The function my_draw_bitmap is not part of FreeType but must be provided by the application to draw the bitmap to the target surface. In this example, it takes a pointer to a FT_Bitmap descriptor and the position of its top-left corner as arguments.
What this means is that you need to implement the function for copying the glyphs to the texture or bitmap to be rendered yourself (assuming there isn't a suitable function available in the libraries you're using).
The below code should be appropriate for copying the pixels of a single glyph to an array that could be copied to a texture.
#include <cstdlib>  // malloc
#include <cstring>  // memset
#include <ft2build.h>
#include FT_FREETYPE_H

unsigned char **tex;

void makeTex(const unsigned int width, const unsigned int height)
{
    // Allocate a row-pointer array plus one contiguous block of pixels.
    tex = (unsigned char**)malloc(sizeof(char*) * height);
    tex[0] = (unsigned char*)malloc(sizeof(char) * width * height);
    memset(tex[0], 0, sizeof(char) * width * height);
    for (unsigned int i = 1; i < height; i++)
    {
        tex[i] = tex[0] + i * width; // each row points into the contiguous block
    }
}

void paintGlyph(FT_GlyphSlot glyph, unsigned int penX, unsigned int penY)
{
    for (unsigned int y = 0; y < glyph->bitmap.rows; y++)
    {
        // src ptr maps to the start of the current row in the glyph
        unsigned char *src_ptr = glyph->bitmap.buffer + y * glyph->bitmap.pitch;
        // dst ptr maps to the pen's current Y pos, adjusted for the current row
        unsigned char *dst_ptr = tex[penY + (glyph->bitmap.rows - y - 1)] + penX;
        // copy the entire row
        for (int x = 0; x < glyph->bitmap.pitch; x++)
        {
            dst_ptr[x] = src_ptr[x];
        }
    }
}
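As a hedged sketch of how these helpers might be driven from FreeType's rendering loop (the font path, sizes, and pen start position below are placeholders, and the exact baseline bookkeeping depends on how you orient the tex buffer):

FT_Library library;
FT_Face face;
FT_Init_FreeType(&library);
FT_New_Face(library, "myfont.ttf", 0, &face); // placeholder font path
FT_Set_Pixel_Sizes(face, 0, 16);              // request 16 px glyphs

makeTex(256, 32);                             // buffer assumed large enough for the text
unsigned int penX = 0, penY = 8;              // placeholder pen start position
const char *text = "Hello";
for (const char *p = text; *p; ++p)
{
    // FT_LOAD_RENDER rasterizes the glyph into an 8-bit bitmap
    if (FT_Load_Char(face, *p, FT_LOAD_RENDER))
        continue;                             // skip glyphs that fail to load
    paintGlyph(face->glyph, penX + face->glyph->bitmap_left, penY);
    penX += face->glyph->advance.x >> 6;      // advance is in 1/64th pixel units
}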

Writing to .BMP - distorted image

I'd like to write a normal map to a .bmp file, so I've implemented a simple .bmp writer first:
void BITMAPLOADER::writeHeader(std::ofstream& out, int width, int height)
{
    BITMAPFILEHEADER tWBFH;
    tWBFH.bfType = 0x4d42; // 'BM'
    tWBFH.bfSize = 14 + 40 + (width*height*3);
    tWBFH.bfReserved1 = 0;
    tWBFH.bfReserved2 = 0;
    tWBFH.bfOffBits = 14 + 40;

    BITMAPINFOHEADER tW2BH;
    memset(&tW2BH, 0, 40);
    tW2BH.biSize = 40;
    tW2BH.biWidth = width;
    tW2BH.biHeight = height;
    tW2BH.biPlanes = 1;
    tW2BH.biBitCount = 24;
    tW2BH.biCompression = 0;

    out.write((char*)(&tWBFH), 14);
    out.write((char*)(&tW2BH), 40);
}
bool TERRAINLOADER::makeNormalmap(unsigned int width, unsigned int height)
{
    std::ofstream file;
    file.open("terrainnormal.bmp");
    if(!file)
    {
        file.close();
        return false;
    }
    bitmaploader.writeHeader(file, width, height);

    for(int y = 0; y < height; y++)
    {
        for(int x = 0; x < width; x++)
        {
            file << static_cast<unsigned char>(255*x/height); //(unsigned char)((getHeight(float(x)/float(width),float(y)/float(height))));
            file << static_cast<unsigned char>(0);
            file << static_cast<unsigned char>(0);
        }
    }
    file.close();
    return true;
}
The writeHeader(...) function is from a solved, working SO post (I've forgotten which one).
getHeight(...) uses bicubic interpolation, so I can write out high-resolution images and they stay smooth. It will also be used for collision detection, and it is currently used as a LOD factor for my clipmaps.
Now the problem is that this outputs a distorted image. The pictures should tell everything, I think:
The expected/distorted result(s):
For the heightmap, I have the function that describes a mesh, getHeight(x,z). It gives back the correct results, because I've also tested it with shaders (by sending the heights as vertex attributes). The image downloaded from the internet:
And with the y(x,z) function values written to a .BMP (the commented-out part of the code):
With a simple function: file << static_cast<unsigned char>(255*(float)x/height)
which should produce a simple left-to-right blend from black to white.
I used an image size of 256 x 256 because I've read it should be a multiple of 4. I CAN use libraries, but I'd like to solve this problem without one. So, what is causing this distortion?
EDIT:
On the last image some lines are also colored, but they shouldn't be. This post is similar, but my heightmap is not distorted linearly as in this post: Image Distortion with Lock Bits
EDIT:
Another strange issue is when I don't make all colors the same, it get's distorted in colors too. For example set only the RED to the heights, and leave G and B 0, it became not only RED, but a noisy colored heightmap.
EDIT /comments/
If I understood the comments right: first there's the header, then comes my pixel data, and the pixel data must start at an offset that is a multiple of 4 bytes. So padding means that after the header I write some extra bytes to fill the gap.
For example, assuming my header is 55 bytes (I will look up how to get it exactly), I should add 1 more byte to it, because 55 + 1 = 56 and 4 | 56.
So
file << static_cast<unsigned char>('a');
for(int y = 1; y <= width; y++)
{
    for(int x = 1; x <= height; x++)
    {
        file << static_cast<unsigned char>(x);
        file << static_cast<unsigned char>(x);
        file << static_cast<unsigned char>(x);
    }
}
should be correct.
But I realized the real issue (as Jigsore commented). When I cast from int to char, it seems like a 1-digit number becomes 1 byte, a 2-digit number 2 bytes, and a 3-digit number 3 bytes. Clamping the height to 3 digits works well, but the image is a bit whitish, because the 'darkest' color becomes (100,100,100) instead of (0,0,0). Also, this is the cause of the non-regular distortion, because it depends on how many 'hills' or 'mountains' there are in one row. How can I solve this, and hopefully the last problem? I don't want to compress the image into the 100-256 range. ;)
Open your file in binary mode.
Under Windows, if you open a file in the default text mode, it will write an extra 0x0d (Return) character after every 0x0a (Linefeed) that gets written out. The first time this happens it will change the colors of the following pixels, as the RGB order gets out of alignment. After it happens 3 times you'll be off by a full pixel.
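A minimal fix, using the stream from the question's code:

std::ofstream file;
file.open("terrainnormal.bmp", std::ios::out | std::ios::binary); // binary: no 0x0a -> 0x0d 0x0a translation

As an aside, each BMP pixel row must also be padded to a multiple of 4 bytes; at 24 bits per pixel, a 256-wide row is 768 bytes, which is already a multiple of 4, so no extra padding bytes are needed in this particular case.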

Reorganize image/picture arrays in OpenGL to fit power-of-2 texture sizes

I am having trouble in OpenGL due to the fact that textures have to have power-of-2 dimensions in OpenGL.
What I am doing is the following:
I load a PNG file into an array of unsigned char, using PNGLIB or SOIL. The idea is that I can run through this array and "select" the parts that are relevant for me. For example, imagine I've loaded a person but just want to store the head in a separate texture, so I loop through the array and select only the necessary parts.
First question: I believe the data in the array is stored in RGBA mode, but I'm not yet sure whether the data is filled row-wise or column-wise. Is it possible to know this?
Second question: Since there is the need to always create power-of-2 textures, it can happen that I have an image with a 513-pixel width, so that I will need a texture with a 1024 px width. So what happens is that the picture looks like it gets completely "destroyed", because the pixels are not in the places they should be - the texture has a different size than the relevant data filled into the array. So how can I reorganize the array in order to get the contents of the image back? I tried the following, but it doesn't work:
unsigned char* new_memory = 0;
int index = 0;
int new_index = 0;
new_memory = new unsigned char[new_tex_width * new_tex_height * 4];

for(int i = 0; i < picture.width; i++)        // WIDTH
{
    for(int j = 0; j < picture.height; j++)   // HEIGHT
    {
        for(int k = 0; k < 4; k++)            // DEPTH
            new_memory[new_index++] = picture.memory[index++]; //picture.memory[i + picture.height * (j + 4 * k)];
    }
    new_index += new_tex_height - picture.height;
}

glGenTextures(1, &png_texture);
glBindTexture(GL_TEXTURE_2D, png_texture);
glTexImage2D(GL_TEXTURE_2D, 0, 3, new_tex_width, new_tex_height, 0, GL_RGBA, GL_UNSIGNED_BYTE, new_memory);
Non-power-of-two textures have been supported for a good while now. However, creating texture atlases and rearranging textures still has a lot of merit; the way we do it is to simply use FreeImage, as it handles all of this for you and supports some of the compressed formats.
If you want to do it your way, and you know it's just a bitmap, then I'd do it more along the lines of the following (not tested, and it does not check its inputs, but it should give you an idea):
void Blit( int xOffset, int yOffset, int targetW, int sourceW, int sourceH,
           unsigned char* source, unsigned char* target, unsigned int bpp )
{
    for( int i = 0; i < sourceH; ++i )
    {
        // copy one full source row into the right place in the target
        memcpy( target + bpp * ( targetW * ( yOffset + i ) + xOffset ),
                source + sourceW * i * bpp,
                sourceW * bpp );
    }
}
Basically, just take each row and memcpy it over.
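Applied to the variables from the question (names taken from the snippet above; a sketch, not tested against the original code), the call could look like:

// copy the source picture into one corner of the power-of-two buffer
unsigned char* new_memory = new unsigned char[new_tex_width * new_tex_height * 4];
memset(new_memory, 0, new_tex_width * new_tex_height * 4); // clear the unused border
Blit(0, 0, new_tex_width, picture.width, picture.height, picture.memory, new_memory, 4);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, new_tex_width, new_tex_height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, new_memory);

Remember that the image now only occupies a picture.width/new_tex_width by picture.height/new_tex_height fraction of the texture, so texture coordinates have to be scaled accordingly.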

OpenGL Issue Drawing a Large Image Texture causing Skewing

I'm trying to store a 1365x768 image in a 2048x1024 texture in OpenGL ES, but the resulting image, once drawn, appears skewed. If I run the same 1365x768 image through gluScaleImage() and fit it onto the 2048x1024 texture, it looks fine when drawn, but this OpenGL call is slow and hurts performance.
I'm doing this on an Android device (a Motorola Milestone) which has 256 MB of memory. I'm not sure if memory is a factor, though, since it works fine when scaled using gluScaleImage() (it's just slower).
Mapping smaller textures (854x480 onto 1024x512, for example) works fine, though. Does anyone know why this happens, and have suggestions for what I can do about it?
Update
Some code snippets to help understand context...
// uiImage is loaded. The texture dimensions are determined by rounding the image
// dimensions up to a power-of-two size:
//   uiImage->_width  = 1365
//   uiImage->_height = 768
//   width  = 2048
//   height = 1024
// Once the image is loaded:
// INT retval = gluScaleImage(GL_RGBA, uiImage->_width, uiImage->_height, GL_UNSIGNED_BYTE, uiImage->_texels, width, height, GL_UNSIGNED_BYTE, data);
copyImage(GL_RGBA, uiImage->_width, uiImage->_height, GL_UNSIGNED_BYTE, uiImage->_texels, width, height, GL_UNSIGNED_BYTE, data);

if (pixelFormat == RGB565 || pixelFormat == RGBA4444)
{
    unsigned char* tempData = NULL;
    unsigned int* inPixel32;
    unsigned short* outPixel16;

    tempData = new unsigned char[height*width*2];
    inPixel32 = (unsigned int*)data;
    outPixel16 = (unsigned short*)tempData;

    if(pixelFormat == RGB565)
    {
        // "RRRRRRRRGGGGGGGGBBBBBBBBAAAAAAAA" --> "RRRRRGGGGGGBBBBB"
        for(unsigned int i = 0; i < numTexels; ++i, ++inPixel32)
        {
            *outPixel16++ = ((((*inPixel32 >> 0)  & 0xFF) >> 3) << 11) |
                            ((((*inPixel32 >> 8)  & 0xFF) >> 2) << 5)  |
                            ((((*inPixel32 >> 16) & 0xFF) >> 3) << 0);
        }
    }
    if(tempData != NULL)
    {
        delete [] data;
        data = tempData;
    }
}

// [snip..]

// Copy function (mostly)
static void copyImage(GLint widthin, GLint heightin, const unsigned int* datain, GLint widthout, GLint heightout, unsigned int* dataout)
{
    unsigned int* p1 = const_cast<unsigned int*>(datain);
    unsigned int* p2 = dataout;
    int nui = widthin * sizeof(unsigned int);
    for(int i = 0; i < heightin; i++)
    {
        memcpy(p2, p1, nui);
        p1 += widthin;
        p2 += widthout;
    }
}
In the render code, without changing my texture coordinates, I should see the correct image when using gluScaleImage(), and a smaller image (requiring some correction factors later) with the copyImage() code. That is what happens when the image is small (854x480, for example, works fine with copyImage()), but when I use the 1365x768 image, the skewing appears.
I finally solved the issue. The first thing to know is the maximum texture size allowed by the device:
GLint texSize;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &texSize);
When I ran this, the maximum texture size for the Motorola Milestone came back as 2048x2048, which was fine in my case.
After messing with the texture mapping to no end, I finally decided to try opening and re-saving the image... and voilà, it suddenly began working. I don't know what was wrong with the format the original image was stored in, but as advice to anyone else experiencing a similar problem: it might be worth looking at the image file itself.