I ran into some trouble while extracting a submatrix (cropping) using OpenCV. What's odd is that if I don't execute the line that "crops" the image, everything works fine. But if I do, I see horizontal multi-coloured lines in place of the image.
This is to show that the cropping takes place correctly.
cv::imshow("before", newimg);
//the line that "crops" the image
newimg = newimg(cv::Rect(leftcol, toprow, rightcol - leftcol, bottomrow - toprow));
cv::imshow("after", newimg);
The code that follows is where I bind the image to a texture so that I can use it in OpenGL.
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, newimg.cols, newimg.rows,
0, GL_BGR, GL_UNSIGNED_BYTE, newimg.ptr());
glBindTexture(GL_TEXTURE_2D, 0);
And later, to draw:
float h = size;
float w = size * aspectRatio; // of the image. aspectRatio = width / height
glEnable(GL_TEXTURE_2D);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL);
glBindTexture(GL_TEXTURE_2D, texture);
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f); glVertex3f(x, y, z);
glTexCoord2f(0.0f, 1.0f); glVertex3f(x, y + h, z);
glTexCoord2f(1.0f, 1.0f); glVertex3f(x + w, y + h, z);
glTexCoord2f(1.0f, 0.0f); glVertex3f(x + w, y, z);
glEnd();
glDisable(GL_TEXTURE_2D);
All of this works well and I see the proper image drawn in the OpenGL window when I comment out that line in which I had cropped the image. I have checked the image type before and after the cropping, but the only difference seems to be the reduced number of rows and columns in the final image.
Here is the image that gets drawn when cropping has been done.
After a bit of research I found a way to solve the problem. The image looked distorted because, even though it had been cropped, newimg.step / newimg.elemSize() still reported the row length of the original image: cropping with cv::Rect creates a view into the parent image's buffer rather than a copy, so in memory each row of the crop is still followed by the parent's leftover pixels. OpenGL, assuming tightly packed rows, read those leftover pixels as part of the texture, which is probably why the "after" image has a gray area on the right. Everything started working properly once I inserted this line before calling glTexImage2D:
glPixelStorei(GL_UNPACK_ROW_LENGTH, newimg.step / newimg.elemSize());
Note: Since we are manipulating the pixel store of OpenGL, it's best to push the state before doing so:
glPushClientAttrib(GL_CLIENT_PIXEL_STORE_BIT);
And pop it out after you are done:
glPopClientAttrib();
This keeps the modified unpack state from corrupting later pixel transfers elsewhere in the program. I thank genpfault for pointing me in the right direction.
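Putting the pieces together, the upload looks like this (a minimal sketch using the same newimg and tex variables as above):
glPushClientAttrib(GL_CLIENT_PIXEL_STORE_BIT); // save the pixel-store state
// Row stride of the parent image, in pixels, not just the cropped width:
glPixelStorei(GL_UNPACK_ROW_LENGTH, newimg.step / newimg.elemSize());
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, newimg.cols, newimg.rows,
0, GL_BGR, GL_UNSIGNED_BYTE, newimg.ptr());
glBindTexture(GL_TEXTURE_2D, 0);
glPopClientAttrib(); // restore the pixel-store state
An alternative, at the cost of a copy, is to make the cropped matrix contiguous with newimg = newimg.clone() before uploading; then no pixel-store changes are needed.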
See "8. Know your pixel store state" for more details:
https://www.opengl.org/archives/resources/features/KilgardTechniques/oglpitfall/
This post was also very useful.
I'm trying to simply draw an image with OpenGL's immediate-mode functions.
However, my output is rather weird. I tried a few texture parameters, but I get the same result, sometimes with different colors. I can't quite figure out the problem, but I suspect it's either the image loading or the texture setup. I'll cut to the chase; here is how I generate my texture (in a Texture2D class):
glBindTexture(GL_TEXTURE_2D, id);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
A call to glTexImage2D follows, like this: glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, _width, _height, 0, GL_RGB, GL_UNSIGNED_BYTE, data);, where data is an array of unsigned char set to 255, white. I then load a PNG file with CImg, like this:
CImg<unsigned char> image(_filename.c_str());
image.resize(m_Width, m_Height);
glBindTexture(GL_TEXTURE_2D, m_ID);
if(image.spectrum() == 4)
{
unsigned char* data = image.data();
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, m_Width, m_Height, GL_RGBA, GL_UNSIGNED_BYTE, data);
}
else if(image.spectrum() == 3)
{
unsigned char* data = image.data();
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, m_Width, m_Height, GL_RGB, GL_UNSIGNED_BYTE, data);
}
glBindTexture(GL_TEXTURE_2D, 0);
But when I try to draw the texture in immediate mode like this (origin is the upper left corner; note that the red rectangle around the texture is intentional):
void SimpleRenderer::drawTexture(Texture2D* _texture, float _x, float _y, float _width, float _height)
{
_texture->setActive(0);
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f);
glVertex2f(_x, _y);
glTexCoord2f(1.0f, 0.0f);
glVertex2f(_x + _width, _y);
glTexCoord2f(1.0f, 1.0f);
glVertex2f(_x + _width, _y + _height);
glTexCoord2f(0.0f, 1.0f);
glVertex2f(_x, _y + _height);
glEnd();
glBindTexture(GL_TEXTURE_2D, 0);
glColor3ub(strokeR, strokeG, strokeB);
glBegin(GL_LINE_LOOP);
glVertex2f((GLfloat)_x, (GLfloat)_y);
glVertex2f((GLfloat)_x+_width, (GLfloat)_y);
glVertex2f((GLfloat)_x+_width, (GLfloat)_y+_height);
glVertex2f((GLfloat)_x, (GLfloat)_y+_height);
glEnd();
}
I expect an output like this within the rectangle (debugging the same PNG within the same program/thread with CImg):
But I get this:
Can anyone spot the problem with my code?
From How pixel data are stored with CImg:
The values are not interleaved, and are ordered first along the X, Y, Z and V axes respectively (corresponding to the width, height, depth, dim dimensions), starting from the upper-left pixel to the bottom-right pixel of the instance image, with a classical scanline run. So, a color image with dim=3 and depth=1 will be stored in memory as: R1R2R3R4R5R6... G1G2G3G4G5G6... B1B2B3B4B5B6... (i.e. following a 'planar' structure) and not as R1G1B1R2G2B2R3G3B3... (interleaved channels).
OpenGL does not work with planar data; it expects interleaved pixel data. The easiest solution is to use a library other than CImg.
This does not fix the scaling issues, but it's a good place to start.
Use CImg<T>::permute_axes() to transform your image buffer to an interleaved format, like this:
CImg<unsigned char> img("imageRGB.png");
img.permute_axes("cxyz"); // Convert to interleaved representation
// Now, use img.data() here as a pointer for OpenGL functions.
img.permute_axes("yzcx"); // Go back to planar representation (if needed).
CImg is indeed a great library, so all of these kinds of transformations have been planned for in the library.
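For instance, a minimal sketch of how the conversion could slot into the upload code above (using the asker's _filename and m_ID; note that width/height must be read before permuting, since permute_axes reorders the dimensions):
CImg<unsigned char> img(_filename.c_str());
const int w = img.width(), h = img.height();
const GLenum fmt = (img.spectrum() == 4) ? GL_RGBA : GL_RGB;
img.permute_axes("cxyz"); // planar RRR...GGG...BBB... -> interleaved RGBRGB...
glBindTexture(GL_TEXTURE_2D, m_ID);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h, fmt, GL_UNSIGNED_BYTE, img.data());
glBindTexture(GL_TEXTURE_2D, 0);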
I'm working on a project in OpenGL.
I have a polygon that is filled with a BMP image file.
I can rotate the camera to look at the image from different places, and I want to copy part of the image and put it inside a new BMP file.
I have a lot of unnecessary code, so I will only copy the important parts.
_textureId = LoadBMP("file.bmp");
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, _textureId);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glColor3f(1, 1, 0.7);
float BOX_SIZE = -12.0f;
glBegin(GL_QUADS);
glVertex3f(-BOX_SIZE / 2, -BOX_SIZE / 2, -5);
glVertex3f(BOX_SIZE / 2, -BOX_SIZE / 2, -5);
glVertex3f(BOX_SIZE / 2, -BOX_SIZE / 2, 5);
glVertex3f(-BOX_SIZE / 2, -BOX_SIZE / 2, 5);
glEnd();
The rotation is pretty basic. Does anyone have any suggestions?
Thanks a lot.
If you want to save the output of OpenGL to a file, you will have to read back the contents of the color buffer from the GL to client memory. Then, you can do whatever you want with it. The command
glReadPixels(GLint x, GLint y, GLsizei width, GLsizei height, GLenum format, GLenum type, GLvoid *data)
will read back the pixel data in a rectangle of width * height pixels beginning at x,y into the memory buffer located at data. Since you said you want to save it as a BMP file, you probably want GL_UNSIGNED_BYTE as the type, because BMP only supports up to 8 bits per channel. You also probably want GL_BGRA or GL_BGR as the format, as this is the native channel layout for BMP.
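For example, a minimal readback sketch (assuming a width x height window and a BMP writer of your own):
std::vector<unsigned char> pixels(width * height * 3);
glPixelStorei(GL_PACK_ALIGNMENT, 1); // tightly packed rows; adjust if your BMP writer pads rows itself
glReadPixels(0, 0, width, height, GL_BGR, GL_UNSIGNED_BYTE, pixels.data());
// glReadPixels returns rows bottom-up, which conveniently matches BMP's usual bottom-up row order.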
It appears that I am loading the image wrong; it comes out all scrambled. What am I doing wrong here?
int DrawGLScene(GLvoid) // Here's Where We Do All The Drawing
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // Clear Screen And Depth Buffer
glLoadIdentity(); // Reset The Current Modelview Matrix
glEnable(GL_TEXTURE_2D);
GLuint texture = 0;
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
glBindTexture(GL_TEXTURE_2D, texture);
FIBITMAP* bitmap = FreeImage_Load(FreeImage_GetFileType("C:/Untitled.bmp", 0), "C:/Untitled.bmp", BMP_DEFAULT);
FIBITMAP *pImage = FreeImage_ConvertTo32Bits(bitmap);
int nWidth = FreeImage_GetWidth(pImage);
int nHeight = FreeImage_GetHeight(pImage);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, nWidth, nHeight, 0, GL_RGB, GL_UNSIGNED_BYTE, (void*)FreeImage_GetBits(pImage));
FreeImage_Unload(pImage);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f, 1.0f);
glTexCoord2f(1.0f, 1.0f); glVertex2f(1.0f, 1.0f);
glTexCoord2f(1.0f, 0.0f); glVertex2f(1.0f, -1.0f);
glEnd();
RECT desktop;
const HWND hDesktop = GetDesktopWindow();
GetWindowRect(hDesktop, &desktop);
long horizontal = desktop.right;
long vertical = desktop.bottom;
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glDisable(GL_TEXTURE_2D);
return TRUE; // Keep Going
}
What I see when the program runs:
What I expect to see when the program runs (the bitmap I am loading):
Part of the problem here seems to be a mismatch between the format of the pixel data you're getting from FreeImage and the format expected by glTexImage2D.
FreeImage_ConvertTo32Bits is going to return an image with 32-bit pixel data. I haven't used FreeImage but usually that would mean that each pixel would have 4 8-bit components representing red, green, blue and alpha (opacity). The format of the data you get from the BMP file depends on the file and may or may not have an alpha channel in it. However it seems safe to assume that FreeImage_ConvertTo32Bits will return data with alpha set to 255 if your original image is fully opaque (either because your BMP file format lacked an alpha channel or because you set it to opaque when you created the file).
This is all fine, but the problem comes with the call to glTexImage2D. This function can take pixel data in lots of different formats. The format it uses for interpreting the data is determined by the parameters type and format, as described here. The value of format in your code is GL_RGB. This says that each pixel has three components describing its red, green and blue components, which appear in that order in the pixel data. The value of type (GL_UNSIGNED_BYTE) says each component is a single byte in size. The problem is that the alpha channel is missing, so glTexImage2D reads the pixel data from the wrong place. Also, it seems that the order of the colours that FreeImage produces is blue-green-red, as pointed out by Roger Rowland. The fix is simple: set the format to GL_BGRA to tell glTexImage2D about the alpha components and the order of the data:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, nWidth, nHeight, 0, GL_BGRA, GL_UNSIGNED_BYTE, (void*)FreeImage_GetBits(pImage));
To see if this problem could explain the weird outputs, think about the big blocks of blue colour in your original image. According to GIMP this is largely blue with a smaller amount of green (R=0, G=162, B=232). So we have something like this:
B G R A B G R A B G R A ... meaning of bytes from FreeImage_GetBits
1 - 0 1 1 - 0 1 1 - 0 1 ... "values" of bytes (0 -> 0, - -> 162, 1 -> 232 or 255)
R G B R G B R G B R G B ... what glTexImage2D thinks the bytes mean
The texture pixel colours here are: orange, light yellow, bright cyan and a purply colour. These repeat because reading 4 RGB pixels consumes 12 bytes, which is exactly equal to 3 of the original BGRA pixels. This explains the pattern you see running over much of your output image. (Originally I had gone through this using the RGBA order, but the BGRA version is a better fit for the image; the difference is especially clear at the start of the rows.)
However I must admit I'm not sure what's going on at the beginning and how to relate it to the pink square! So there might be other problems here, but hopefully using GL_BGRA will be a step in the right direction.
You are making a 32 bpp image, so you need to have the correct texture declarations:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, nWidth, nHeight,
0, GL_BGRA, GL_UNSIGNED_BYTE, (void*)FreeImage_GetBits(pImage));
Also, the FreeImage_GetBits function returns data which is normally aligned so that each row ends on a 4-byte boundary. So, if your image is 32 bpp (as yours is), or if your image width is a power of two, it will be fine; otherwise, you will need to set the unpack alignment appropriately.
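For instance, a minimal sketch (only needed when the rows handed to GL are not 4-byte aligned, e.g. tightly packed 24 bpp rows of odd width):
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // expect tightly packed rows instead of the default 4-byte row alignment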
I am seeing this problem where the textures disappear after the application has been used for a minute or two. Why would the textures be disappearing? The 3D cube remains on the screen at all times. The places where the textures were appear as white boxes once the textures disappear.
My DrawGLScene method looks like this:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // Clear Screen And Depth Buffer
glLoadIdentity(); // Reset The Current Modelview Matrix
glTranslatef(0.0f, 0.0f, -7.0f); // Translate Into The Screen 7.0 Units
//rotquad is a value that is updated as the user interacts with the ui by +/-9 to rotate the cube
glRotatef(rotquad, 0.0f, 1.0f, 0.0f);
//cube code here
RECT desktop;
const HWND hDesktop = GetDesktopWindow();
GetWindowRect(hDesktop, &desktop);
long horizontal = desktop.right;
long vertical = desktop.bottom;
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
glOrtho(-5.0, 3, 3, -5.0, -1.0, 10.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glDisable(GL_CULL_FACE);
glEnable(GL_TEXTURE_2D);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glClear(GL_DEPTH_BUFFER_BIT);
glColor4f(255.0f, 255.0f, 255.0f, 0.0f);
if (hoverRight) {
imageLoaderOut(outImage);
imageLoaderIn(inImage);
imageLoaderUp(upImage);
imageLoaderLeft(leftHover);
imageLoaderDown(upImage);
imageLoaderRight(rightImage);
}
// code for hover left, up and down are the same as hover right code above
glDisable(GL_TEXTURE_2D);
return TRUE; // Keep Going
}
This method is one of the imageLoader methods (the others being called are almost identical, except for location/position):
void imageLoaderOut(const char* value)
{
FIBITMAP* bitmap60 = FreeImage_Load(
FreeImage_GetFileType(value, 0),
value, PNG_DEFAULT);
FIBITMAP *pImage60 = FreeImage_ConvertTo32Bits(bitmap60);
int nWidth60 = FreeImage_GetWidth(pImage60);
int nHeight60 = FreeImage_GetHeight(pImage60);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, nWidth60, nHeight60, 0, GL_RGBA, GL_UNSIGNED_BYTE, (void*)FreeImage_GetBits(pImage60));
FreeImage_Unload(pImage60);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f); glVertex2f(2.8f, -1.1f); // moves BOTTOM EDGE UP or DOWN - stretches length of image
glTexCoord2f(0.0f, 1.0f); glVertex2f(2.8f, -1.9f);
glTexCoord2f(1.0f, 1.0f); glVertex2f(2.1f, -1.9f);
glTexCoord2f(1.0f, 0.0f); glVertex2f(2.1f, -1.1f); // moves BOTTOM EDGE UP or DOWN - stretches length of image
glEnd();
}
It's just a guess, but you have a severe design issue in your code, combined with a memory leak, which can lead to undefined results like the ones you've described.
First, in imageLoaderOut() you are reading textures from the HDD, converting them to 32 bpp and sending the data to OpenGL. You call it from DrawGLScene, which means you do this every frame. That is really not a valid way to do things: you don't need to load resources each frame. Do it once and for all in some kind of Initialize() function, and just use the GL resources when drawing.
Second, I think you have a memory leak here, because you never unload bitmap60. As you load it every frame, possibly thousands of times per second, this unreleased memory accumulates. So, after some time, something goes really wrong and FreeImage refuses to load the textures.
So, the possible solution is to:
move resource loading to the initialization phase of your application (see the sketch after this list)
free the leaked resources: call FreeImage_Unload(bitmap60) in each loading function
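A minimal sketch of the first point (the Initialize() function and outTex variable are hypothetical; GL_BGRA matches FreeImage's in-memory channel order on little-endian machines):
GLuint outTex; // created once, reused every frame
void Initialize()
{
FIBITMAP* bitmap = FreeImage_Load(FreeImage_GetFileType(outImage, 0), outImage, PNG_DEFAULT);
FIBITMAP* pImage = FreeImage_ConvertTo32Bits(bitmap);
FreeImage_Unload(bitmap); // free the original bitmap, not only the converted copy
glGenTextures(1, &outTex);
glBindTexture(GL_TEXTURE_2D, outTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, FreeImage_GetWidth(pImage),
FreeImage_GetHeight(pImage), 0, GL_BGRA, GL_UNSIGNED_BYTE,
(void*)FreeImage_GetBits(pImage));
FreeImage_Unload(pImage);
}
void imageLoaderOut()
{
glBindTexture(GL_TEXTURE_2D, outTex); // just bind; no disk I/O per frame
glBegin(GL_QUADS);
// ... same texture coordinates and vertices as before ...
glEnd();
}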
Hope it helps.
The problem seems to be in glTexImage2D. The manual can be found here: http://www.opengl.org/sdk/docs/man/xhtml/glTexImage2D.xml
In particular, they said that:
glTexImage2D specifies the two-dimensional texture for the current texture unit, specified with glActiveTexture.
Since you are calling glTexImage2D multiple times on the same texture object, it seems that you are overwriting the same texture multiple times.
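A minimal sketch of the alternative (hypothetical texIds array; one texture object per image, each uploaded once):
GLuint texIds[6];
glGenTextures(6, texIds); // one id per image
for (int i = 0; i < 6; ++i)
{
glBindTexture(GL_TEXTURE_2D, texIds[i]);
// upload image i here with glTexImage2D, once, at startup
}
// At draw time, just bind the texture you need:
glBindTexture(GL_TEXTURE_2D, texIds[0]);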
I am drawing a polygon with texture on it as part of the HUD in my OpenGL program.
//Change the projection so that it is suitable for drawing HUD
glMatrixMode(GL_PROJECTION); //Change the projection
glLoadIdentity();
glOrtho(0, 800, 800, 0, -1, 1); //2D Mode
glMatrixMode(GL_MODELVIEW); //Back to modeling
glLoadIdentity();
//Draw the polygon with the texture on it
glBegin(GL_POLYGON);
glTexCoord2f(0.0, 1.0); glVertex3f(250.0, 680, 0.0);
glTexCoord2f(1.0, 1.0); glVertex3f(570.0, 680, 0.0);
glTexCoord2f(1.0, 0.0); glVertex3f(570.0, 800, 0.0);
glTexCoord2f(0.0, 0.0); glVertex3f(250.0, 800, 0.0);
glEnd();
//Change the projection back to how it was before
glMatrixMode(GL_PROJECTION); //Change the projection
glLoadIdentity();
gluPerspective(45.0, ((GLfloat)800) / GLfloat(800), 1.0, 200.0); //3D Mode
glMatrixMode(GL_MODELVIEW); //Back to modeling
glLoadIdentity();
The problem is that I can't get the "box" around the image to blend with the background. I opened the image (.bmp) in Photoshop and deleted the pixels around the image that I want displayed, but it still draws the whole rectangular image. It colors the pixels that I deleted with the last color that I used with glColor3f(), and I can get the whole image to become transparent, but I only want the pixels that I deleted in Photoshop to be transparent.
Any suggestions?
Properties that I am using for the textures:
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGB, TextureList[i]->getSizeX(), TextureList[i]->getSizeY(), GL_RGB, GL_UNSIGNED_BYTE, TextureList[i]->getData());
Here's an image of my program. I'm trying to get the white box to disappear, but as I decrease the alpha with glColor4f(), the whole image fades instead of just the white box.
img607.imageshack.us/img607/51/ogly.png
Code that loads a texture from file:
texture::texture(string filename)
{
// Routine to read a bitmap file.
// Works only for uncompressed bmp files of 24-bit color.
// Both width and height must be powers of 2.
unsigned int size, offset, headerSize;
// Read input file name.
ifstream infile(filename.c_str(), ios::binary);
// Get the starting point of the image data.
infile.seekg(10);
infile.read((char *) &offset, 4);
// Get the header size of the bitmap.
infile.read((char *) &headerSize,4);
// Get width and height values in the bitmap header.
infile.seekg(18);
infile.read( (char *) &sizeX, 4);
infile.read( (char *) &sizeY, 4);
// Allocate buffer for the image.
size = sizeX * sizeY * 3; // 24-bit color = 3 bytes per pixel
data = new unsigned char[size];
// Read bitmap data.
infile.seekg(offset);
infile.read((char *) data , size);
// Reverse color from bgr to rgb.
int temp;
for (unsigned int i = 0; i < size; i += 3)
{
temp = data[i];
data[i] = data[i+2];
data[i+2] = temp;
}
}
I don't see a glEnable(GL_BLEND) or a glBlendFunc in your code. Are you doing this?
Also, do you have an alpha channel in your image?
EDIT: You're loading the texture with the format GL_RGB, which tells OpenGL there is no alpha in this texture.
You need to make an image with alpha transparency. Look here for a GIMP tutorial or here for a Photoshop tutorial.
Now load the texture indicating the correct image formats. There are MANY tutorials for "opengl alpha blending" on the internet - here is a C++ and SDL video tutorial.
Hope this helps!
If I understand correctly, you want transparency to work with your textures, yes?
If so, change
gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGB, TextureList[i]->getSizeX(), TextureList[i]->getSizeY(), GL_RGB, GL_UNSIGNED_BYTE, TextureList[i]->getData());
to
gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGBA, TextureList[i]->getSizeX(), TextureList[i]->getSizeY(), GL_RGBA, GL_UNSIGNED_BYTE, TextureList[i]->getData());
to allow for an alpha channel, and turn on blending with:
glEnable (GL_BLEND);
glBlendFunc (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
UPDATE
As far as I know, BMPs don't support transparency (they can, ananthonline corrected me in the comments, but your application must support it). You should try one of the following formats if your image editor does not support BMPs with alpha:
PNG
TIFF (recent variations)
TARGA
To use the transparency channel (alpha channel) you need both to generate the BMP file with this channel (Photoshop does this if you ask it to), and to specify the correct format when generating the mipmaps and sending the image to the video card.
This way, the image will have the needed transparency info and the OpenGL driver will know that the image has this info.