I have a texture loaded into memory that is of RGBA format with various alpha values.
The image is loaded like so:
GLuint texture = 0;
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
self.texNum = texture;
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, self.imageWidth, self.imageHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, [self.imageData bytes]);
I want to know how I can draw this texture so that the alpha channel in the image is treated as all 1's and the texture is drawn like an RGB image.
Consider the base image:
This image is a progression from 0 to 255 alpha and has the RGB value of 255,0,0 throughout
However if I draw it with blending disabled I get an image that looks like:
www.ldeo.columbia.edu/~jcoplan/alpha/no_alpha.png
When what I really want is an image that looks like this:
www.ldeo.columbia.edu/~jcoplan/alpha/correct.png
I'd really appreciate some pointers to have it ignore the alpha channel completely. Note that I can't just load the image in as an RGB initially because I do need the alpha channel at other points.
Edit: I tried to use GL_COMBINE to solve my problem, like so:
glColor4f(1, 1, 1, 1);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_REPLACE);
glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_RGB, GL_TEXTURE);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_ALPHA, GL_REPLACE);
glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_ALPHA, GL_PRIMARY_COLOR);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_ALPHA, GL_SRC_ALPHA);
[self drawTexture];
But still no luck; it still draws black fading to red.
I have a texture loaded into memory that is of RGBA format with various alpha values
glDisable(GL_BLEND)
However if I draw it with blending disabled I get an image that looks like: www.ldeo.columbia.edu/~jcoplan/alpha/no_alpha.png
This happens because in your source image all transparent pixels are black. It's a problem with your texture/image, or maybe with the loader function, but it is not an OpenGL problem.
You could probably try to fix it using glTexEnv(GL_COMBINE, ...) (i.e. mix the texture color with the underlying color based on the alpha channel), but since I haven't done something like that, I'm not completely sure and can't give you the exact operands. It was possible in Direct3D 9 (using D3DTOP_MODULATEALPHA_ADDCOLOR), so most likely there is a way to do it in OpenGL.
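For what it's worth, a GL_INTERPOLATE combiner along these lines might be a starting point (untested; treat the exact operands as an assumption):
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
// GL_INTERPOLATE computes Arg0*Arg2 + Arg1*(1 - Arg2): texture color where
// alpha is high, primary (vertex) color where alpha is low.
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_INTERPOLATE);
glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_RGB, GL_TEXTURE);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR);
glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_RGB, GL_PRIMARY_COLOR);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB, GL_SRC_COLOR);
glTexEnvi(GL_TEXTURE_ENV, GL_SRC2_RGB, GL_TEXTURE);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND2_RGB, GL_SRC_ALPHA);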
You should not disable blending, but rather use glBlendFunc with the proper parameters:
glBlendFunc(GL_ONE, GL_ZERO);
Or you could tell OpenGL to drop the alpha channel at upload time by requesting an RGB internal format while still describing your source data as RGBA:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);
OpenGL then discards the fourth byte of every pixel, i.e. the alpha channel, during the upload, and sampling the texture always returns an alpha of 1.0. (Note that GL_UNPACK_ALIGNMENT only controls row alignment, not per-pixel stride, so it cannot be used to skip the alpha byte.)
I had a similar problem, and found out that it was because iOS image loading premultiplies the RGB values by alpha (as discussed in some of the other answers and comments here). I'd love to know whether there's a way of disabling pre-multiplication, but in the meantime I'm "un-pre-multiplying" using code derived from this thread and this thread.
// Un-premultiply: scale each RGB channel back up by 255/alpha.
uint8_t *imageBytes = (uint8_t *)imageData;
int byteCount = width * height * 4;
for (int i = 0; i < byteCount; i += 4) {
    uint8_t a = imageBytes[i + 3];
    if (a != 255 && a != 0) {
        float alphaFactor = 255.0f / a;
        // Clamp to 255: rounding in the original premultiply can otherwise
        // push a reconstructed channel past the byte range and wrap around.
        imageBytes[i]     = (uint8_t)fminf(imageBytes[i]     * alphaFactor, 255.0f);
        imageBytes[i + 1] = (uint8_t)fminf(imageBytes[i + 1] * alphaFactor, 255.0f);
        imageBytes[i + 2] = (uint8_t)fminf(imageBytes[i + 2] * alphaFactor, 255.0f);
    }
}
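One possible refinement (my own tweak, not from the linked threads): the original premultiply rounds each channel, so integer math with explicit rounding recovers slightly more accuracy than float truncation. A hypothetical replacement for one channel update:
int v = (imageBytes[i] * 255 + a / 2) / a;    // round to nearest instead of truncating
imageBytes[i] = (uint8_t)(v > 255 ? 255 : v); // clamp in case the data is inconsistent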
Related
I am not really sure what the English name for what I am trying to do is; please tell me if you know.
In order to run some physically based lighting calculations, I need to write floating-point data to a texture using one OpenGL shader and read this data back in another OpenGL shader, but the data I want to store may be less than 0 or above 1.
To do this, I set up a framebuffer that renders to this texture, as follows (this is C++):
//Set up the light map we will use for lighting calculation
glGenFramebuffers(1, &light_Framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, light_Framebuffer);
glBlendFunc(GL_SRC_ALPHA, GL_DST_ALPHA);//Needed for light blending (true additive)
glGenTextures(1, &light_texture);
glBindTexture(GL_TEXTURE_2D, light_texture);
//Initialize empty, and at the size of the internal screen
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_FLOAT, 0);
//No interpolation, I want pixelation
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
//Now the light framebuffer renders to the texture we will use to calculate dynamic lighting
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, light_texture, 0);
GLenum DrawBuffers[1] = { GL_COLOR_ATTACHMENT0 };
glDrawBuffers(1, DrawBuffers);//Color attachment 0 as before
Notice that I use the type GL_FLOAT and not GL_UNSIGNED_BYTE; according to this discussion, a floating-point texture should not be clamped between 0 and 1.
Now, just to test that this is true, I simply set the color somewhere outside this range in the fragment shader which creates this texture:
#version 400 core
out vec4 color;
void main()
{
    color = vec4(2.0, -2.0, 2.0, 2.0);
}
After rendering to this texture, I send this texture to the program which should use it like any other texture:
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, light_texture); //This is the texture I rendered to
glUniform1i(surf_lightTex_ID , 1);//This is the ID in the main display program
Again, just to check that this is working I have replaced the fragment shader with one which tests that the colors have been saved.
#version 400 core
uniform sampler2D lightSampler;
in vec2 fragment_pos_uv;
out vec4 color;
void main()
{
    color = vec4(0, 0, 0, 1);
    if (texture(lightSampler, fragment_pos_uv).r > 1.0)
        color.r = 1;
    if (texture(lightSampler, fragment_pos_uv).g < 0.0)
        color.g = 1;
}
If everything worked, everything should turn yellow, but needless to say this only gives me a black screen. So I tried the following:
#version 400 core
uniform sampler2D lightSampler;
in vec2 fragment_pos_uv;
out vec4 color;
void main()
{
    color = vec4(0, 0, 0, 1);
    if (texture(lightSampler, fragment_pos_uv).r == 1.0)
        color.r = 1;
    if (texture(lightSampler, fragment_pos_uv).g == 0.0)
        color.g = 1;
}
And I got this:
The parts which are green are in shadow in the testing scene; never mind them. The main point is that all the channels of light_texture get clamped to between 0 and 1, which they should not. I am not sure whether the data is saved correctly and only clamped when I read it, or whether it is already clamped to 0 to 1 when saving.
So, my question is: is there some way to read from and write to an OpenGL texture such that the data stored may be above 1 or below 0?
Also, no, I cannot fix the problem by using a 32-bit integer per channel and applying a sigmoid function before saving and its inverse after reading the data; that would break alpha blending.
The type and format arguments of glTexImage2D only specify the format of the source image data; they do not affect the internal format of the texture. You must request a specific sized internal format, e.g. GL_RGBA32F:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
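If in doubt, it is easy to verify what the driver actually allocated; a small sanity check using a standard query:
GLint fmt = 0;
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_INTERNAL_FORMAT, &fmt);
// After the fix this should report GL_RGBA32F rather than a normalized 8-bit format.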
I'm loading a font from a TGA texture. I generate the mipmaps using the gluBuild2DMipmaps() function.
When the font has a certain size, it looks very good. But when it gets smaller, it gets darker and darker whenever it reaches a new mipmap level.
This is how I create the texture:
void TgaLoader::bindTexture(unsigned int* texture)
{
tImageTGA *pBitMap = m_tgaImage;
if(pBitMap == 0)
{
return;
}
glGenTextures(1, texture);
glBindTexture(GL_TEXTURE_2D, *texture);
gluBuild2DMipmaps(GL_TEXTURE_2D,
pBitMap->channels,
pBitMap->size_x,
pBitMap->size_y,
textureType,
GL_UNSIGNED_BYTE,
pBitMap->data);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR_MIPMAP_NEAREST);
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
}
This makes the text look like this (it should be white):
Barely visible.
If I change the GL_TEXTURE_MIN_FILTER to basically ignore mipmaps (using GL_LINEAR for example), it looks like this:
I've tried different filter options and also tried using glGenerateMipmap() instead of gluBuild2DMipmaps(), but I always end up with the same result.
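For reference, the glGenerateMipmap() variant I tried looked roughly like this (same pBitMap fields as above):
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, pBitMap->size_x, pBitMap->size_y,
             0, textureType, GL_UNSIGNED_BYTE, pBitMap->data);
glGenerateMipmap(GL_TEXTURE_2D);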
What's wrong with the code?
I noticed a big problem in my OpenGL texture rendering:
Supposedly transparent pixels are rendered as solid white. According to most solutions to similar issues discussed on Stack Overflow, I need to enable blending and set the proper blend function, but I have already set the necessary GL state and am positive that textures are loaded correctly as far as I can tell. My texture load function is below:
GLboolean GL_texture_load(Texture* texture_id, const char* const path, const GLboolean alpha, const GLint param_edge_x, const GLint param_edge_y)
{
// load image
SDL_Surface* img = nullptr;
if (!(img = IMG_Load(path))) {
fprintf(stderr, "SDL_image could not be loaded %s, SDL_image Error: %s\n",
path, IMG_GetError());
return GL_FALSE;
}
glBindTexture(GL_TEXTURE_2D, *texture_id);
// image assignment
GLuint format = (alpha) ? GL_RGBA : GL_RGB;
glTexImage2D(GL_TEXTURE_2D, 0, format, img->w, img->h, 0, format, GL_UNSIGNED_BYTE, img->pixels);
// wrapping behavior
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, param_edge_x);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, param_edge_y);
// texture filtering
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glBindTexture(GL_TEXTURE_2D, 0);
// free the surface
SDL_FreeSurface(img);
return GL_TRUE;
}
I use Adobe Photoshop to export "for the web" 24-bit + transparency .png files (72 pixels/inch, 6400 x 720). I am not sure how to set the color mode (8, 16, 32 bit), but this might have something to do with the issue. I also use the default sRGB color profile, though I did try removing the color profile at one point; it didn't change anything.
No matter what, a png exported from Photoshop displays as solid white over transparent pixels.
If I create an image in e.g. Gimp, I have correct transparency. Importing the Adobe .psd or .png does not seem to work, and in any case I prefer to use Photoshop for editing purposes.
Has anyone experienced this issue? I imagine that Photoshop must add some strange metadata or I am not using the correct color modes--or both.
(I am concerned that this goes beyond the scope of Stack Overflow, but my issue intersects image editing and programming. Regardless, please let me know if this is not the right place.)
EDIT:
In both Photoshop and Gimp I created a test case-- 8 pixels (red, green, transparent, blue) clockwise.
In Photoshop, the transparent square is read as 1, 1, 1, 0 and displays as white.
In Gimp, the transparent square is 0, 0, 0, 0.
I also checked my fragment shader to see whether transparency works at all. Varying the alpha over time does increase transparency, so the alpha isn't outright ignored. For some reason 1, 1, 1, 0 counts as solid.
In addition, setting the background color to black with glClearColor seems to prevent the alpha from increasing transparency.
I don't know how to explain some of these behaviors, but something seems off. 0 alpha should be the same regardless of color, shouldn't it?
(Note that I render a few shapes on top of each other, but I've tried just rendering one for testing purposes.)
The best I can do is post more of my setup code (with bits omitted):
// vertex array and buffers setup
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glViewport(0, 0, SCREEN_WIDTH, SCREEN_HEIGHT);
glEnable(GL_DEPTH_TEST);
glEnable(GL_BLEND);
// I think that the blend function may be wrong (GL_ONE that is).
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
glDepthRange(0, 1);
glDepthFunc(GL_LEQUAL);
Texture tex0;
// same function as above, but generates one texture id for me
if (GL_texture_gen_and_load_1(&tex0, "./textures/sq2.png", GL_TRUE, GL_CLAMP_TO_EDGE, GL_CLAMP_TO_EDGE) == GL_FALSE) {
return EXIT_FAILURE;
}
glUseProgram(shader_2d);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex0);
glUniform1i(glGetUniformLocation(shader_2d, "tex0"), 0);
bool active = true;
while (active) {
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// uniforms, game logic, etc.
glDrawElements(GL_TRIANGLES, tri_data.i_count, GL_UNSIGNED_INT, (void*)0);
}
I don't know how to explain some of these behaviors, but something seems off. 0 alpha should be the same regardless of color, shouldn't it?
If you want to get an identical result for an alpha channel of 0.0, independent of the red, green, and blue channels, then you have to change the blend function. See glBlendFunc.
Use:
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
This causes the red, green, and blue channels to be multiplied by the alpha channel.
If the alpha channel is 0.0, the resulting RGB color is (0, 0, 0).
If the alpha channel is 1.0, the RGB color channels remain unchanged.
See also Alpha Compositing, OpenGL Blending, and Premultiplied Alpha.
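As a side note, the GL_ONE source factor from the question is only correct for premultiplied alpha, where the RGB channels have already been multiplied by the alpha channel at load time. Since this Photoshop export stores straight alpha (white RGB under the transparent pixels), GL_ONE adds that white directly to the framebuffer. If you premultiply during loading instead, the matching blend function would be:
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);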
I am using OpenGL. I can load TGA files properly, but for some reason when I render JPG files, I do not see them correctly.
This is what the image is supposed to look like:
And this is what it looks like. Why is it stretched? Is it because of the coordinates?
Here is the code I am using for drawing:
void Renderer::DrawJpg(GLuint tex, int xi, int yq, int width, int height) const
{
glBindTexture(GL_TEXTURE_2D, tex);
glBegin(GL_QUADS);
glTexCoord2i(0, 0); glVertex2i(0 + xi, 0 + yq);
glTexCoord2i(0, 1); glVertex2i(0 + xi, height + yq);
glTexCoord2i(1, 1); glVertex2i(width + xi, height + yq);
glTexCoord2i(1, 0); glVertex2i(width + xi, 0 + yq);
glEnd();
}
This is how I am loading the image...
imagename=s;
ILboolean success;
ilInit();
ilGenImages(1, &id);
ilBindImage(id);
success = ilLoadImage((const ILstring)imagename.c_str());
if (success)
{
success = ilConvertImage(IL_RGB, IL_UNSIGNED_BYTE); /* Convert every colour component into
unsigned byte. If your image contains alpha channel you can replace IL_RGB with IL_RGBA */
if (!success)
{
printf("image conversion failed.");
}
glGenTextures(1, &id);
glBindTexture(GL_TEXTURE_2D, id);
width = ilGetInteger(IL_IMAGE_WIDTH);
height = ilGetInteger(IL_IMAGE_HEIGHT);
glTexImage2D(GL_TEXTURE_2D, 0, ilGetInteger(IL_IMAGE_BPP), ilGetInteger(IL_IMAGE_WIDTH),
ilGetInteger(IL_IMAGE_HEIGHT), 0, ilGetInteger(IL_IMAGE_FORMAT), GL_UNSIGNED_BYTE,
ilGetData());
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT); // texture wrapping
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT); // texture wrapping
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
I probably should mention that some images did get rendered properly. I thought it was because width != height, but that is not the case: some images with width != height also get loaded fine.
But for other images I still get this problem.
You probably need to call
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
before uploading the texture data with glTexImage2D.
From the reference pages:
GL_UNPACK_ALIGNMENT
Specifies the alignment requirements for the start of each pixel row in memory. The allowable values are 1 (byte-alignment), 2 (rows aligned to even-numbered bytes), 4 (word-alignment), and 8 (rows start on double-word boundaries).
The default value for the alignment is 4, and your image loading library probably returns pixel data with byte-aligned rows, which explains why some of your images look OK (when the width is a multiple of four). To make this concrete with a hypothetical 123-pixel-wide RGB image: each row occupies 123 * 3 = 369 bytes, but with an alignment of 4 OpenGL assumes each row starts every 372 bytes, so every successive row is read 3 bytes too far, producing exactly this kind of diagonal shear.
Always try to use images whose width and height are powers of two, because some GPUs support textures only in power-of-two resolutions (for example 128x128 or 512x512, but not 123x533 or 128x532).
And I think that here you should use GL_CLAMP_TO_EDGE instead of GL_REPEAT. :)
GL_REPEAT is for when your texture coordinates go above 1.0; GL_CLAMP_TO_EDGE handles that too, but guarantees the image will fill the polygon without unwanted lines on the edges (GL_REPEAT lets linear filtering wrap around and bleed across them).
Remember to try the code where floats are used (the sample from the comment). :)
Here is a good explanation: http://open.gl/textures :)
I am making a texture in my environment that excludes all white pixels. I read in a PPM file, and the fourth value is always set to 0 if the pixel is white. Everything seems to be in order; I have set up my view correctly and so forth. The texture image is visible with my current code, but the image as a whole is not fully opaque: it is highly see-through. Is this a problem with how I am setting up GL_BLEND? Why is the entire texture not opaque, when only the white pixels should be excluded?
The first three values are read in as RGB values; the fourth value is not in the file, it is chosen based on the sum of the three RGB values. This texture is not loaded every time I render; it is built in a display list, so it is only loaded once.
glPushMatrix();
FILE *inFile3;
char dump3[3];
int max3, k3 = 0;
inFile3 = fopen("tree.ppm", "r");
int x3;
int y3;
fscanf(inFile3, "%s", dump3);
fscanf(inFile3, "%d %d", &x3, &y3);
fscanf(inFile3, "%d", &max3);
int arraySize3 = y3*(4*x3);
int pixel3, rgb = 0;
GLubyte data3[arraySize3];
for (int i = 0; i < x3; i++) {
for (int j = 0; j < y3; j++) {
fscanf(inFile3, "%d", &pixel3);
data3[k3++] = pixel3;
rgb += pixel3;
fscanf(inFile3, "%d", &pixel3);
data3[k3++] = pixel3;
rgb += pixel3;
fscanf(inFile3, "%d", &pixel3);
data3[k3++] = pixel3;
rgb += pixel3;
data3[k3++] = ((rgb) > 760) ? 0 : 255;
rgb = 0;
}
}
fclose(inFile3);
glGenTextures(1,&texture3);
glBindTexture(GL_TEXTURE_2D,texture3);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER,GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,GL_NEAREST_MIPMAP_NEAREST);
gluBuild2DMipmaps(GL_TEXTURE_2D,4,x3,y3,GL_RGBA,GL_UNSIGNED_BYTE,data3);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable( GL_TEXTURE_2D );
glBindTexture( GL_TEXTURE_2D, texture3 );
glTexEnvf( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE );
glRotatef(180, 0, 0, 0);
glTranslatef(0, -19, 0);
glBegin(GL_QUADS);
glTexCoord2d(0,0); glVertex3f(30,0,10);
glTexCoord2d(0,1); glVertex3f(30,20,10);
glTexCoord2d(1,1); glVertex3f(30,20,-10);
glTexCoord2d(1,0); glVertex3f(30,0,-10);
glEnd();
glDisable(GL_TEXTURE_2D);
glDisable(GL_BLEND);
glPopMatrix();
Screen shots:
If you move the viewpoint really close to the tree, does it become opaque? And does it become opaque if you disable mipmapping?
Edit:
When the eye moves closer to the tree, the level-0 mipmap (your original image) is used. To select a mipmap level, OpenGL computes which level provides the best match between the size of its texels and the size of a pixel.
To disable mipmap generation, upload your texture with glTexImage2D instead of gluBuild2DMipmaps.
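A sketch of that upload, reusing the names from the question:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, x3, y3, 0, GL_RGBA, GL_UNSIGNED_BYTE, data3);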
To disable usage of mipmaps, change the following line
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST_MIPMAP_NEAREST);
to
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
You could also do as Jim Buck suggests and use
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0);
to force the highest mipmap level used to 0. Since the lowest level defaults to 0, this effectively disables mipmapping. By setting GL_TEXTURE_MAX_LEVEL and GL_TEXTURE_BASE_LEVEL to the same value, you can view the content of that specific mipmap level.
All this is to confirm whether the problem lies in the mipmaps.
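For example (my own illustration), to view mipmap level 2 by itself while debugging:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 2);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 2);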
In addition to what @bernie pointed out about the GLubyte array and including glBegin/glEnd, the problem was in:
glTexEnvf( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE );
GL_MODULATE needed to be replaced with GL_REPLACE. This fixed the issue. Thanks for your help, guys.
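For reference, the working call is simply:
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
With GL_MODULATE the texel color and alpha are multiplied by the incoming fragment color, so any shading (or a current alpha below 1) fades the texture; GL_REPLACE uses the texel values directly.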