I am drawing a polygon with texture on it as part of the HUD in my OpenGL program.
//Change the projection so that it is suitable for drawing HUD
glMatrixMode(GL_PROJECTION); //Change the projection
glLoadIdentity();
glOrtho(0, 800, 800, 0, -1, 1); //2D Mode
glMatrixMode(GL_MODELVIEW); //Back to modeling
glLoadIdentity();
//Draw the polygon with the texture on it
glBegin(GL_POLYGON);
glTexCoord2f(0.0, 1.0); glVertex3f(250.0, 680, 0.0);
glTexCoord2f(1.0, 1.0); glVertex3f(570.0, 680, 0.0);
glTexCoord2f(1.0, 0.0); glVertex3f(570.0, 800, 0.0);
glTexCoord2f(0.0, 0.0); glVertex3f(250.0, 800, 0.0);
glEnd();
//Change the projection back to how it was before
glMatrixMode(GL_PROJECTION); //Change the projection
glLoadIdentity();
gluPerspective(45.0, ((GLfloat)800) / GLfloat(800), 1.0, 200.0); //3D Mode
glMatrixMode(GL_MODELVIEW); //Back to modeling
glLoadIdentity();
The problem is that I can't get the "box" around the image to blend with the background. I opened the image (.bmp) in Photoshop and deleted the pixels around the image that I want displayed, but it still draws the whole rectangular image. It colors the pixels that I deleted with the last color that I used with glColor3f(), and I can get the whole image to become transparent, but I only want the pixels that I deleted in Photoshop to be transparent.
Any suggestions?
Properties that I am using for the textures:
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGB, TextureList[i]->getSizeX(), TextureList[i]->getSizeY(), GL_RGB, GL_UNSIGNED_BYTE, TextureList[i]->getData());
Here's an image of my program. I'm trying to get the white box to disappear, but as I decrease the alpha with glColor4f(), the whole image fades instead of just the white box.
img607.imageshack.us/img607/51/ogly.png
Code that loads a texture from file:
texture::texture(string filename)
{
// Routine to read a bitmap file.
// Works only for uncompressed bmp files of 24-bit color.
// Both width and height must be powers of 2.
unsigned int size, offset, headerSize;
// Read input file name.
ifstream infile(filename.c_str(), ios::binary);
// Get the starting point of the image data.
infile.seekg(10);
infile.read((char *) &offset, 4);
// Get the header size of the bitmap.
infile.read((char *) &headerSize,4);
// Get width and height values in the bitmap header.
infile.seekg(18);
infile.read( (char *) &sizeX, 4);
infile.read( (char *) &sizeY, 4);
// Allocate buffer for the image.
size = sizeX * sizeY * 3; // 3 bytes per pixel for 24-bit color
data = new unsigned char[size];
// Read bitmap data.
infile.seekg(offset);
infile.read((char *) data , size);
// Reverse color from bgr to rgb.
int temp;
for (unsigned int i = 0; i < size; i += 3)
{
temp = data[i];
data[i] = data[i+2];
data[i+2] = temp;
}
}
I don't see a glEnable(GL_BLEND) or a glBlendFunc in your code. Are you doing this?
Also, do you have an alpha channel in your image?
EDIT: You're loading the texture with the format GL_RGB, so you're telling OpenGL there is no alpha channel in this texture.
You need to make an image with alpha transparency; there are tutorials for doing this in both GIMP and Photoshop.
Now load the texture with the correct image formats. There are MANY tutorials for "opengl alpha blending" on the internet, including C++ and SDL video tutorials.
Hope this helps!
If I understand correctly, you want transparency to work with your textures, yes?
If so, change
gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGB, TextureList[i]->getSizeX(), TextureList[i]->getSizeY(), GL_RGB, GL_UNSIGNED_BYTE, TextureList[i]->getData());
to
gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGBA, TextureList[i]->getSizeX(), TextureList[i]->getSizeY(), GL_RGBA, GL_UNSIGNED_BYTE, TextureList[i]->getData());
To allow for an alpha channel, and turn on blending with:
glEnable (GL_BLEND);
glBlendFunc (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
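For the HUD quad in the question, the blend state has to be active when that quad is drawn. A minimal ordering sketch (assuming the texture already carries an alpha channel; hudTexture is a placeholder name, not from the question):
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, hudTexture);   // placeholder texture id
glColor4f(1.0f, 1.0f, 1.0f, 1.0f);          // full color and alpha, so only the texture's alpha matters
// ... draw the HUD polygon here (the glBegin/glEnd block from the question) ...
glDisable(GL_BLEND);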
UPDATE
As far as I know, BMPs don't normally carry transparency (they can, as ananthonline pointed out in the comments, but your application must support it), so you should try one of the following formats if your image editor does not support BMPs with alpha:
PNG
TIFF (recent variations)
TARGA
To use the transparency channel (alpha channel) you need both to generate the image file with this channel (Photoshop does this if you ask it to) and to specify the correct format when generating the mipmaps and sending the image to the video card.
This way, the image will have the needed transparency info and the OpenGL driver will know that the image has this info.
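Since the loader in the question only reads 24-bit BMP data, one workaround (a sketch of mine, not part of the original answer) is to expand the pixels to RGBA on the CPU and color-key the background, so that, for example, pure white pixels become fully transparent; width, height and rgb are assumed to come from the loader above, and std::vector needs <vector>:
// Expand 3-byte RGB pixels to 4-byte RGBA, keying out white.
std::vector<unsigned char> rgba(width * height * 4);
for (unsigned int p = 0; p < width * height; ++p)
{
    unsigned char r = rgb[p * 3 + 0];
    unsigned char g = rgb[p * 3 + 1];
    unsigned char b = rgb[p * 3 + 2];
    rgba[p * 4 + 0] = r;
    rgba[p * 4 + 1] = g;
    rgba[p * 4 + 2] = b;
    rgba[p * 4 + 3] = (r == 255 && g == 255 && b == 255) ? 0 : 255; // white -> transparent
}
gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGBA, width, height, GL_RGBA, GL_UNSIGNED_BYTE, rgba.data());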
I noticed a big problem in my OpenGL texture rendering:
Supposedly transparent pixels are rendered as solid white. According to most answers to similar issues on Stack Overflow, I need to enable blending and set the proper blend function, but I have already set the necessary GL state and, as far as I can tell, my textures are loaded correctly. My texture load function is below:
GLboolean GL_texture_load(Texture* texture_id, const char* const path, const GLboolean alpha, const GLint param_edge_x, const GLint param_edge_y)
{
// load image
SDL_Surface* img = nullptr;
if (!(img = IMG_Load(path))) {
fprintf(stderr, "SDL_image could not be loaded %s, SDL_image Error: %s\n",
path, IMG_GetError());
return GL_FALSE;
}
glBindTexture(GL_TEXTURE_2D, *texture_id);
// image assignment
GLuint format = (alpha) ? GL_RGBA : GL_RGB;
glTexImage2D(GL_TEXTURE_2D, 0, format, img->w, img->h, 0, format, GL_UNSIGNED_BYTE, img->pixels);
// wrapping behavior
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, param_edge_x);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, param_edge_y);
// texture filtering
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glBindTexture(GL_TEXTURE_2D, 0);
// free the surface
SDL_FreeSurface(img);
return GL_TRUE;
}
I use Adobe Photoshop to export "for the web" 24-bit + transparency .png files -- 72 pixels/inch, 6400 x 720. I am not sure how to set the color mode (8, 16, 32 bit), but this might have something to do with the issue. I also use the default sRGB color profile; I tried removing the color profile at one point, but that didn't change anything.
No matter what, a png exported from Photoshop displays as solid white over transparent pixels.
If I create an image in e.g. Gimp, I have correct transparency. Importing the Adobe .psd or .png does not seem to work, and in any case I prefer to use Photoshop for editing purposes.
Has anyone experienced this issue? I imagine that Photoshop must add some strange metadata or I am not using the correct color modes--or both.
(I am concerned that this goes beyond the scope of Stack Overflow, but my issue intersects image editing and programming. Regardless, please let me know if this is not the right place.)
EDIT:
In both Photoshop and Gimp I created a test case-- 8 pixels (red, green, transparent, blue) clockwise.
In Photoshop, the transparent square is read as 1, 1, 1, 0 and displays as white.
In Gimp, the transparent square is 0, 0, 0, 0.
I also checked my fragment shader to see whether transparency works at all. Varying the alpha over time does increase transparency, so the alpha isn't outright ignored. For some reason 1, 1, 1, 0 counts as solid.
In addition, setting the background color to black with glClearColor seems to prevent the alpha from increasing transparency.
I don't know how to explain some of these behaviors, but something seems off. 0 alpha should be the same regardless of color, shouldn't it?
(Note that I render a few shapes on top of each other, but I've tried just rendering one for testing purposes.)
The best I can do is post more of my setup code (with bits omitted):
// vertex array and buffers setup
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glViewport(0, 0, SCREEN_WIDTH, SCREEN_HEIGHT);
glEnable(GL_DEPTH_TEST);
glEnable(GL_BLEND);
// I think that the blend function may be wrong (GL_ONE that is).
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
glDepthRange(0, 1);
glDepthFunc(GL_LEQUAL);
Texture tex0;
// same function as above, but generates one texture id for me
if (GL_texture_gen_and_load_1(&tex0, "./textures/sq2.png", GL_TRUE, GL_CLAMP_TO_EDGE, GL_CLAMP_TO_EDGE) == GL_FALSE) {
return EXIT_FAILURE;
}
glUseProgram(shader_2d);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex0);
glUniform1i(glGetUniformLocation(shader_2d, "tex0"), 0);
bool active = true;
while (active) {
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// uniforms, game logic, etc.
glDrawElements(GL_TRIANGLES, tri_data.i_count, GL_UNSIGNED_INT, (void*)0);
}
I don't know how to explain some of these behaviors, but something seems off. 0 alpha should be the same regardless of color, shouldn't it?
If you want to get an identical result for an alpha channel of 0.0, independent of the red, green and blue channels, then you have to change the blend function. See glBlendFunc.
Use:
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
This causes the red, green and blue channels to be multiplied by the alpha channel.
If the alpha channel is 0.0, the resulting RGB color is (0, 0, 0).
If the alpha channel is 1.0, the RGB color channels remain unchanged.
See also Alpha Compositing, OpenGL Blending and Premultiplied Alpha.
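To make the premultiplied vs. straight alpha distinction concrete, here is a short sketch (mine, with placeholder draw calls) of the two consistent setups:
// Straight (non-premultiplied) alpha: the texture's RGB is stored unmultiplied.
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
// drawStraightAlphaSprites();      // placeholder

// Premultiplied alpha: the texture's RGB has already been multiplied by A,
// which is what glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA) expects.
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
// drawPremultipliedSprites();      // placeholder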
I ran into some trouble while extracting a sub-matrix (cropping) from an image using OpenCV. What's funny is that if I don't execute the line that "crops" the image, everything works fine. But if I do, I see horizontal multi-coloured lines in place of the image.
This is to show that the cropping takes place correctly.
cv::imshow("before", newimg);
//the line that "crops" the image
newimg = newimg(cv::Rect(leftcol, toprow, rightcol - leftcol, bottomrow - toprow));
cv::imshow("after", newimg);
The code that follows is where I bind the image to a texture so that I can use it in OpenGL.
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, newimg.cols, newimg.rows,
0, GL_BGR, GL_UNSIGNED_BYTE, newimg.ptr());
glBindTexture(GL_TEXTURE_2D, 0);
And later to draw . . .
float h = size;
float w = size * aspectRatio; // of the image. aspectRatio = width / height
glEnable(GL_TEXTURE_2D);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL);
glBindTexture(GL_TEXTURE_2D, texture);
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f); glVertex3f(x, y, z);
glTexCoord2f(0.0f, 1.0f); glVertex3f(x, y + h, z);
glTexCoord2f(1.0f, 1.0f); glVertex3f(x + w, y + h, z);
glTexCoord2f(1.0f, 0.0f); glVertex3f(x + w, y, z);
glEnd();
glDisable(GL_TEXTURE_2D);
All of this works well and I see the proper image drawn in the OpenGL window when I comment out that line in which I had cropped the image. I have checked the image type before and after the cropping, but the only difference seems to be the reduced number of rows and columns in the final image.
Here is the image that gets drawn when cropping has been done.
After a bit of research I found a way to solve the problem. The image looked distorted because cropping with cv::Rect does not copy any pixels: newimg becomes a view into the original buffer, so newimg.step / newimg.elemSize() still reports the row stride of the original image. Only the row and column counts change; each row of the cropped view is still followed in memory by pixels that are not part of the crop, which is probably why the "after" image has a gray area on the right. I don't have a deep understanding of the subject, but everything started working properly once I inserted this line before calling glTexImage2D:
glPixelStorei(GL_UNPACK_ROW_LENGTH, newimg.step / newimg.elemSize());
Note: Since we are manipulating the pixel store of OpenGL, it's best to push the state before doing so:
glPushClientAttrib(GL_CLIENT_PIXEL_STORE_BIT);
And pop it out after you are done:
glPopClientAttrib();
This keeps the modified pixel-store state from leaking out and corrupting later texture uploads. I thank genpfault for pointing me in the right direction.
See 8. Know your pixel store state for more details.
https://www.opengl.org/archives/resources/features/KilgardTechniques/oglpitfall/
This post was also very useful.
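Putting it all together, the upload path for a cropped (hence non-contiguous) cv::Mat could look like this sketch, assuming an 8-bit, 3-channel BGR image:
glPushClientAttrib(GL_CLIENT_PIXEL_STORE_BIT);   // save pixel-store state
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);           // cropped rows may not be 4-byte aligned
glPixelStorei(GL_UNPACK_ROW_LENGTH, (GLint)(newimg.step / newimg.elemSize())); // stride of the parent image
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, newimg.cols, newimg.rows,
             0, GL_BGR, GL_UNSIGNED_BYTE, newimg.ptr());
glPopClientAttrib();                             // restore pixel-store state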
I'm working on a project in OpenGL.
I have a polygon, and the polygon is filled with a BMP image file (used as a texture).
I can rotate the camera to look at the image from different angles, and I want to copy the visible part of the image and save it into a new BMP file.
I have a lot of unnecessary code, so I will copy only the important parts.
_textureId = LoadBMP("file.bmp");
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, _textureId);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glColor3f(1, 1, 0.7);
float BOX_SIZE = -12.0f;
glBegin(GL_QUADS);
glVertex3f(-BOX_SIZE / 2, -BOX_SIZE / 2, -5);
glVertex3f(BOX_SIZE / 2, -BOX_SIZE / 2, -5);
glVertex3f(BOX_SIZE / 2, -BOX_SIZE / 2, 5);
glVertex3f(-BOX_SIZE / 2, -BOX_SIZE / 2, 5);
glEnd();
The rotation is pretty basic, so does anyone have any suggestions?
Thanks a lot.
If you want to save the output of OpenGL to a file, you will have to read back the contents of the color buffer from the GL to client memory. Then you can do whatever you want with it. The command
glReadPixels(GLint x, GLint y, GLsizei width, GLsizei height, GLenum format, GLenum type, GLvoid *data)
will read back the pixel data in a rectangle of width * height pixels beginning at x, y into the memory buffer located at data. Since you said you want to save it as a BMP file, you probably want GL_UNSIGNED_BYTE as the type, because BMP only supports up to 8 bits per channel. You also probably want GL_BGRA or GL_BGR as the format, as this is the native channel layout for BMP.
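A minimal readback sketch (writeBMP is a hypothetical helper for the actual BMP file writing; the buffer uses std::vector, so <vector> is needed):
std::vector<unsigned char> pixels(width * height * 3);
glPixelStorei(GL_PACK_ALIGNMENT, 1);   // avoid row padding in the returned data
glReadPixels(x, y, width, height, GL_BGR, GL_UNSIGNED_BYTE, pixels.data());
// glReadPixels returns rows bottom-up, which matches the row order BMP expects.
writeBMP("out.bmp", pixels.data(), width, height);   // hypothetical helper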
I have an OpenGL context on which I draw successfully using OpenGL.
I need to draw a specific rectangle of an IOSurface to this context.
What is the best way to do this on 10.8?
NOTE:
I know how to do this on 10.9 using Core Image (by creating a CIImage from the IOSurface and rendering it with [CIContext drawImage:inRect:fromRect:]).
However, this does not work well for me on 10.8 (each row of the image is displayed with a different offset, and the image is distorted diagonally).
Edit: Here is the code that works on 10.9 but not on 10.8:
CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceSRGB);
CIImage* ciImage = [[CIImage alloc] initWithIOSurface:surface plane:0 format:kCVPixelFormatType_32BGRA options:@{kCIImageColorSpace : (__bridge id)colorSpace}];
NSRect flippedFromRect = fromRect;
// Flip rect before passing to CoreImage:
{
flippedFromRect.origin.y = IOSurfaceGetHeight(surface) - fromRect.origin.y - fromRect.size.height;
}
[ciContext drawImage:ciImage inRect:inRect fromRect:flippedFromRect];
CGColorSpaceRelease(colorSpace);
Here is a solution that wraps the IOSurface in an OpenGL texture and draws that texture to the screen. It assumes a similar API to [CIContext render:toIOSurface:bounds:colorSpace:], but with a vertically flipped OpenGL coordinate system.
// Draw surface on OpenGL context
{
// Enable the rectangle texture extension
glEnable(GL_TEXTURE_RECTANGLE_EXT);
// 1. Create a texture from the IOSurface
GLuint name;
{
CGLContextObj cgl_ctx = ...
glGenTextures(1, &name);
GLsizei surface_w = (GLsizei)IOSurfaceGetWidth(surface);
GLsizei surface_h = (GLsizei)IOSurfaceGetHeight(surface);
glBindTexture(GL_TEXTURE_RECTANGLE_EXT, name);
CGLError cglError =
CGLTexImageIOSurface2D(cgl_ctx, GL_TEXTURE_RECTANGLE_EXT, GL_RGBA, surface_w, surface_h, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, surface, 0);
glBindTexture(GL_TEXTURE_RECTANGLE_EXT, 0);
}
// 2. Draw the texture to the current OpenGL context
{
glBindTexture(GL_TEXTURE_RECTANGLE_EXT, name);
glTexParameteri(GL_TEXTURE_RECTANGLE_EXT, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_RECTANGLE_EXT, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
glBegin(GL_QUADS);
glColor4f(0.f, 0.f, 1.0f, 1.0f);
glTexCoord2f( (float)NSMinX(fromRect), (float)(NSMinY(fromRect)));
glVertex2f( (float)NSMinX(inRect), (float)(NSMinY(inRect)));
glTexCoord2f( (float)NSMaxX(fromRect), (float)NSMinY(fromRect));
glVertex2f( (float)NSMaxX(inRect), (float)NSMinY(inRect));
glTexCoord2f( (float)NSMaxX(fromRect), (float)NSMaxY(fromRect));
glVertex2f( (float)NSMaxX(inRect), (float)NSMaxY(inRect));
glTexCoord2f( (float)NSMinX(fromRect), (float)NSMaxY(fromRect));
glVertex2f( (float)NSMinX(inRect), (float)NSMaxY(inRect));
glEnd();
glBindTexture(GL_TEXTURE_RECTANGLE_EXT, 0);
}
glDeleteTextures(1, &name);
}
If you need to draw in the display's color profile, you can explicitly call ColorSync and pass it your source profile and destination profile. It will return to you a “recipe” to perform the color correction. That recipe actually has a linearization, a color conversion (a 3x3 conversion matrix) and a gamma.
FragmentInfo = ColorSyncTransformCopyProperty (transform, kColorSyncTransformFullConversionData, NULL);
If you like, you can combine all those operations into a 3D lookup table. That's actually what happens in the color management of many of the OS X frameworks and applications.
References:
Apple TextureUpload sample code
Draw IOSurfaces to another IOSurface
OpenGL Options for Advanced Color Management
I have a texture loaded into memory that is of RGBA format with various alpha values.
The image is loaded as so:
GLuint texture = 0;
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
self.texNum = texture;
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, self.imageWidth, self.imageHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, [self.imageData bytes]);
I want to know how I can draw this texture so that the alpha channel in the image is treated as all 1's and the texture is drawn like an RGB image.
Consider the base image:
This image is a progression from 0 to 255 alpha and has an RGB value of 255, 0, 0 throughout.
However if I draw it with blending disabled I get an image that looks like:
www.ldeo.columbia.edu/~jcoplan/alpha/no_alpha.png
When what I really want is an image that looks like this:
www.ldeo.columbia.edu/~jcoplan/alpha/correct.png
I'd really appreciate some pointers to have it ignore the alpha channel completely. Note that I can't just load the image in as an RGB initially because I do need the alpha channel at other points.
Edit: I tried to use GL_COMBINE to solve my problem, as follows:
glColor4f(1, 1, 1, 1);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_REPLACE);
glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_RGB, GL_TEXTURE);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_ALPHA, GL_REPLACE);
glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_ALPHA, GL_PRIMARY_COLOR);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_ALPHA, GL_SRC_ALPHA);
[self drawTexture];
But still no luck; it still draws black to red.
I have a texture loaded into memory that is of RGBA format with various alpha values
glDisable(GL_BLEND)
However if I draw it with blending disabled I get an image that looks like: www.ldeo.columbia.edu/~jcoplan/alpha/no_alpha.png
This happens because in your source image all transparent pixels are black. It's a problem with your texture/image, or maybe with the loader function, but it is not an OpenGL problem.
You could probably try to fix it using glTexEnv(GL_COMBINE ...) (i.e. mix the texture color with the underlying color based on the alpha channel), but since I haven't done anything like that, I'm not completely sure and can't give you exact operands. It was possible in Direct3D9 (using D3DTOP_MODULATEALPHA_ADDCOLOR), so most likely there is a way to do it in OpenGL.
You should not disable blending, but instead use glBlendFunc with the proper parameters:
glBlendFunc(GL_ONE, GL_ZERO);
Or you could upload only the RGB channels of your image and call glTexImage2D with the format set to GL_RGB. Note that glPixelStorei(GL_UNPACK_ALIGNMENT, ...) only controls how rows are aligned; it will not make OpenGL skip the fourth byte of every pixel, so the data has to be repacked to 3 bytes per pixel first.
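A sketch of that repacking step (my code, with assumed buffer names rgba, width and height; it works on both desktop OpenGL and OpenGL ES). Note that if the RGB values were premultiplied by alpha when the image was loaded, as the next answer describes, the formerly transparent pixels will still come out dark:
// Repack RGBA (4 bytes/pixel) into RGB (3 bytes/pixel), dropping the alpha byte (needs <vector>).
std::vector<unsigned char> rgb(width * height * 3);
for (int p = 0; p < width * height; ++p) {
    rgb[p * 3 + 0] = rgba[p * 4 + 0];
    rgb[p * 3 + 1] = rgba[p * 4 + 1];
    rgb[p * 3 + 2] = rgba[p * 4 + 2];
}
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   // RGB rows are often not 4-byte aligned
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, rgb.data());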
I had a similar problem, and found out that it was because iOS image loading does a premultiply on the RGB values (as discussed in some of the other answers and comments here). I'd love to know whether there's a way of disabling pre-multiplication, but in the meantime I'm "un-pre-multiplying" using code derived from other threads on the topic.
// un pre-multiply
uint8_t *imageBytes = (uint8_t *)imageData ;
int byteCount = width*height*4 ;
for (int i=0; i < byteCount; i+= 4) {
uint8_t a = imageBytes[i+3] ;
if (a!=255 && a!=0 ){
float alphaFactor = 255.0/a ;
imageBytes[i] *= alphaFactor ;
imageBytes[i+1] *= alphaFactor ;
imageBytes[i+2] *= alphaFactor ;
}
}