TTF_RenderUTF8_Blended Rendering Colored Text - C++

I'm trying to render text using SDL_ttf and OpenGL.
I open the font, render the text to an SDL_Surface, then attach that surface to a texture and bind it for OpenGL to render.
I have googled this issue a bunch and am not getting many hits, which leads me to believe I'm misunderstanding something.
Only two functions really matter, since I've pretty much hard-coded a temp variable to troubleshoot this issue. They are:
SDL_Surface* CFont::BlendedUTF8Surface() {
    SDL_Surface* Surf_Text;
    SDL_Color Blah;
    Blah.r = 0;
    Blah.b = 255;
    Blah.g = 0;
    if (!(Surf_Text = TTF_RenderUTF8_Blended(pFont, TxtMsg, Blah))) {
        char str[256];
        sprintf_s(str, "? %s \n", TTF_GetError());
        OutputDebugString(str);
    }
    return Surf_Text;
}
This uses SDL_ttf to render the text to the Surf_Text surface. You can see I've maxed the blue channel; I'll talk about this in a minute. Here is the rendering code:
void CLabel::OnRender(int xOff, int yOff) {
    if (Visible) {
        glColor4f(1.0, 1.0, 1.0, 1.0);
        Font.Color(FontColors.r, FontColors.g, FontColors.b); // useless; I overwrote the variable with Blah to test this
        SDL_Surface* Surf_Text;
        Surf_Text = Font.BlendedUTF8Surface();
        Text_Font.OnLoad(Surf_Text);
        Text_Font.RenderQuad(_X + xOff, _Y + yOff);
        SDL_FreeSurface(Surf_Text);
        glColor4f(0.0, 0.0, 0.0, 1.0);
    }
}
Alright, so far from what I can tell, the problem is probably coming from the current color state and the texture environment mode.
When I render the text this way, the text changes colors, but it's as if the R and B channels have switched: if I make red 255 the text is blue, and if I make blue 255 the text is red. Green stays green (RGB vs. BGR?).
If I remove the glColor4f call in the rendering function, the text refuses to render colored at all; it is always black. (I habitually set the color back to (0,0,0) every time I render something, so that is plausible: with GL_MODULATE, R = 0 * texture (font) R, etc., so it comes out black. Makes sense.)
If I set the texture environment to GL_DECAL, the text renders black and a box behind the text renders in the color I am trying to give the text.
I think I just don't know the correct way to do this. Does anyone have experience with SDL_ttf and OpenGL texture environments who could give me some ideas?
Edit:
I've rewritten the functions and tested the surface, and I have finally figured out a few things. If I use GL_DECAL, the text renders the correct color, and the pixel value is 0 everywhere on the surface except where the red text is (that reads back as 255, which is strange: since red is the first channel I would have expected something like 0xFF0000 at least). With GL_DECAL, the alpha space (the white space around the text, which has a pixel value of 0) shows up in the color of the current glColor() call. If I use blending, the alpha zone disappears, but the text is blended as well (of course), so it blends with the underlying background texture.
I guess the more appropriate question is: how do I blend only the white space and not the text? My guess is that I need a different glBlendFunc(), but I have tested parameters and I'm like a child in the woods; no clue how to get the desired result.
The solution isn't completely verified, but the format of the surface is indeed BGRA; however, I cannot get the correction to work. I'm going to attempt to write a color-swap function for this, I guess.
That fix did not work. Instead of setting BGR, I thought I would just create a new RGB surface:
if (Surface->format->Rmask == 0x00ff0000) {
    Surface = SDL_CreateRGBSurfaceFrom(Surface->pixels, Surface->w, Surface->h, 32, Surface->pitch,
                                       Surface->format->Rmask, Surface->format->Gmask, Surface->format->Bmask, Surface->format->Amask);
}
After that failed to work, I tried swapping Surface->format->Bmask and Surface->format->Rmask, but that had no effect either.

To handle both the BGR and RGB cases, you can try this code to create a texture from an SDL_Surface:
GLuint createTextureFromSurface(SDL_Surface *surface)
{
    GLuint texture; // glGenTextures expects a GLuint, not an int
    // get the number of channels in the SDL surface
    GLint nbOfColors = surface->format->BytesPerPixel;
    GLenum textureFormat = 0;
    switch (nbOfColors) {
    case 1:
        textureFormat = GL_ALPHA;
        break;
    case 3: // no alpha channel
        if (surface->format->Rmask == 0x000000ff)
            textureFormat = GL_RGB;
        else
            textureFormat = GL_BGR;
        break;
    case 4: // contains an alpha channel
        if (surface->format->Rmask == 0x000000ff)
            textureFormat = GL_RGBA;
        else
            textureFormat = GL_BGRA;
        break;
    default:
        qDebug() << "Warning: the image is not truecolor...";
        break;
    }
    glEnable(GL_TEXTURE_2D);
    // Have OpenGL generate a texture object handle for us
    glGenTextures(1, &texture);
    // Bind the texture object
    glBindTexture(GL_TEXTURE_2D, texture);
    // Upload the texture object's image data using the information the SDL_Surface gives us
    glTexImage2D(GL_TEXTURE_2D, 0, nbOfColors, surface->w, surface->h, 0,
                 textureFormat, GL_UNSIGNED_BYTE, surface->pixels);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    return texture;
}
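To connect this back to the original question, usage could look roughly like this (a sketch only; pFont, TxtMsg and the quad-drawing code are assumed to be the ones from the question):
SDL_Color color = { 255, 0, 0, 255 };               // red text
SDL_Surface *text = TTF_RenderUTF8_Blended(pFont, TxtMsg, color);
if (text) {
    GLuint tex = createTextureFromSurface(text);    // picks GL_BGRA vs GL_RGBA from the surface masks
    glBindTexture(GL_TEXTURE_2D, tex);
    // ... render the textured quad here ...
    SDL_FreeSurface(text);
    glDeleteTextures(1, &tex);                       // don't leak one texture per rendered string
}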

Related

Qt & OpenGL : texture transparency

I have two textures rendered in the same way. The green texture has the right transparency in the right places, but when I move the pink texture in front, it shows the background color where it should be transparent.
This is a snippet of the paintGL method that renders the textures:
void OpenGLWidget::paintGL()
{
    // ...
    for (int i = 0; i < lights.size(); i++)
    {
        glUseProgram(lights[i].program);
        setUniform3fv(program, "lightPosition", 1, &lights[i].position[0]);
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, lights[i].texture);
        lights[i].svg.setColor(toColor(lights[i].diffuse));
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, lights[i].svg.width(), lights[i].svg.height(), 0, GL_RGBA, GL_UNSIGNED_BYTE, lights[i].svg.toImage().constBits());
        glGenerateMipmap(GL_TEXTURE_2D);
        glBindVertexArray(lights[i].vertexArray);
        glDrawElements(GL_TRIANGLES, lights[i].indices.size(), GL_UNSIGNED_BYTE, nullptr);
    }
    update();
}
The toImage method of the svg class generates a new QImage object from the SVG file, so the texture contents should be updated each frame.
Where am I going wrong? Thanks!
This probably happens because you have depth testing enabled. Even though parts of the texture are (partly or fully) transparent, OpenGL still writes to the depth buffer, so the pink light's quad appears to obscure the green light. It works the other way round, because the pink light is drawn first, so the green light hasn't been written to the depth buffer at that point.
The usual solution to this is to render transparent textures in back to front order.
You could also just write your fragment shader to discard fragments if they are transparent. But this results in artifacts if you have semi-transparent fragments, which you have, because of texture filtering and mipmaps.
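As a rough sketch of the back-to-front suggestion applied to the loop above (this is not your code; cameraPosition is an assumed QVector3D-style value, and <vector>, <numeric> and <algorithm> are needed):
std::vector<int> order(lights.size());
std::iota(order.begin(), order.end(), 0);
// sort indices so the quad farthest from the camera is drawn first
std::sort(order.begin(), order.end(), [&](int a, int b) {
    auto da = lights[a].position - cameraPosition;
    auto db = lights[b].position - cameraPosition;
    return da.lengthSquared() > db.lengthSquared();  // with glm, use glm::dot(da, da) etc.
});
for (int i : order)
{
    // ... the same per-light rendering code as in paintGL above ...
}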

OpenGL: transparent pixels unexpectedly white

I noticed a big problem in my OpenGL texture rendering:
Supposedly transparent pixels are rendered as solid white. According to most solutions to similar issues discussed on Stack Overflow, I need to enable blending and set the proper blend functions, but I have already set the necessary GL state and am positive that textures are loaded correctly, as far as I can tell. My texture load function is below:
GLboolean GL_texture_load(Texture* texture_id, const char* const path, const GLboolean alpha, const GLint param_edge_x, const GLint param_edge_y)
{
    // load image
    SDL_Surface* img = nullptr;
    if (!(img = IMG_Load(path))) {
        fprintf(stderr, "SDL_image could not be loaded %s, SDL_image Error: %s\n",
                path, IMG_GetError());
        return GL_FALSE;
    }
    glBindTexture(GL_TEXTURE_2D, *texture_id);
    // image assignment
    GLuint format = (alpha) ? GL_RGBA : GL_RGB;
    glTexImage2D(GL_TEXTURE_2D, 0, format, img->w, img->h, 0, format, GL_UNSIGNED_BYTE, img->pixels);
    // wrapping behavior
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, param_edge_x);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, param_edge_y);
    // texture filtering
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glBindTexture(GL_TEXTURE_2D, 0);
    // free the surface
    SDL_FreeSurface(img);
    return GL_TRUE;
}
I use Adobe Photoshop to export "for the web" 24-bit + transparency .png files -- 72 pixels/inch, 6400 x 720. I am not sure how to set the color mode (8, 16, 32), but this might have something to do with the issue. I also use the default sRGB color profile, and at one point I tried removing the color profile, but that didn't do anything.
No matter what, a png exported from Photoshop displays as solid white over transparent pixels.
If I create an image in e.g. Gimp, I have correct transparency. Importing the Adobe .psd or .png does not seem to work, and in any case I prefer to use Photoshop for editing purposes.
Has anyone experienced this issue? I imagine that Photoshop must add some strange metadata or I am not using the correct color modes--or both.
(I am concerned that this goes beyond the scope of Stack Overflow, but my issue intersects image editing and programming. Regardless, please let me know if this is not the right place.)
EDIT:
In both Photoshop and Gimp I created a test case-- 8 pixels (red, green, transparent, blue) clockwise.
In Photoshop, the transparent square is read as 1, 1, 1, 0 and displays as white.
In Gimp, the transparent square is 0, 0, 0, 0.
I also checked my fragment shader to see whether transparency works at all. Varying the alpha over time does increase transparency, so the alpha isn't outright ignored. For some reason 1, 1, 1, 0 counts as solid.
In addition, setting the background color to black with glClearColor seems to prevent the alpha from increasing transparency.
I don't know how to explain some of these behaviors, but something seems off. 0 alpha should be the same regardless of color, shouldn't it?
(Note that I render a few shapes on top of each other, but I've tried just rendering one for testing purposes.)
The best I can do is post more of my setup code (with bits omitted):
// vertex array and buffers setup
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glViewport(0, 0, SCREEN_WIDTH, SCREEN_HEIGHT);
glEnable(GL_DEPTH_TEST);
glEnable(GL_BLEND);
// I think that the blend function may be wrong (GL_ONE that is).
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
glDepthRange(0, 1);
glDepthFunc(GL_LEQUAL);

Texture tex0;
// same function as above, but generates one texture id for me
if (GL_texture_gen_and_load_1(&tex0, "./textures/sq2.png", GL_TRUE, GL_CLAMP_TO_EDGE, GL_CLAMP_TO_EDGE) == GL_FALSE) {
    return EXIT_FAILURE;
}

glUseProgram(shader_2d);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex0);
glUniform1i(glGetUniformLocation(shader_2d, "tex0"), 0);

bool active = true;
while (active) {
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // uniforms, game logic, etc.
    glDrawElements(GL_TRIANGLES, tri_data.i_count, GL_UNSIGNED_INT, (void*)0);
}
I don't know how to explain some of these behaviors, but something seems off. 0 alpha should be the same regardless of color, shouldn't it?
If you want to get an identical result for an alpha channel of 0.0, independent of the red, green and blue channels, then you have to change the blend function. See glBlendFunc.
Use:
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
This causes the red, green and blue channels to be multiplied by the alpha channel.
If the alpha channel is 0.0, the resulting RGB contribution is (0, 0, 0).
If the alpha channel is 1.0, the RGB channels are left unchanged.
See further Alpha Compositing, OpenGL Blending and Premultiplied Alpha
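If you prefer the premultiplied-alpha route from the last link, one rough sketch (assuming a tightly packed 32-bit surface whose bytes are laid out R, G, B, A; real code should check img->format and img->pitch) is to premultiply at load time and then blend with GL_ONE:
Uint8 *p = (Uint8 *)img->pixels;
for (int i = 0; i < img->w * img->h * 4; i += 4) {
    Uint8 a = p[i + 3];
    p[i + 0] = (Uint8)(p[i + 0] * a / 255); // premultiply R
    p[i + 1] = (Uint8)(p[i + 1] * a / 255); // premultiply G
    p[i + 2] = (Uint8)(p[i + 2] * a / 255); // premultiply B
}
// upload with glTexImage2D as before, then at draw time:
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);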

How to get correct SourceOver alpha compositing in SDL with OpenGL

I am using an FBO (or "Render Texture") which has an alpha channel (32bpp ARGB) and clear it with a color that is not fully opaque, for example (R=1, G=0, B=0, A=0) (i.e. completely transparent). Then I render a translucent object, for example a rectangle with color (R=1, G=1, B=1, A=0.5), on top of that. (All values normalized from 0 to 1.)
According to common sense, as well as imaging software such as GIMP and Photoshop, as well as several articles on Porter-Duff compositing, I would expect to get a texture that is
fully transparent outside of the rectangle
white (1.0, 1.0, 1.0) with 50 % opacity inside the rectangle.
Like so (you won't see this on the SO website):
Instead, the background color's RGB values, which are (1.0, 0.0, 0.0), are weighted overall with (1 - SourceAlpha) instead of (DestAlpha * (1 - SourceAlpha)). The actual result is this:
I have verified this behavior using OpenGL directly, using SDL's wrapper API, and using SFML's wrapper API. With SDL and SFML I have also saved the results as an image (with alpha channel) instead of merely rendering to the screen to be sure that it's not a problem with the final rendering step.
What do I need to do to produce the expected SourceOver result, either with SDL, SFML, or using OpenGL directly?
Some sources:
The W3 article on compositing specifies co = αs x Cs + αb x Cb x (1 – αs); the weight of Cb should be 0 if αb is 0, no matter what.
The English Wiki shows the destination ("B") being weighted according to αb (as well as αs, indirectly).
The German Wiki shows 50% transparency examples; clearly the transparent background's original RGB values do not interfere with either the green or the magenta source. It also shows that the intersection is clearly asymmetric in favor of the element that is "on top".
There are also several questions on SO that seemingly deal with this at first glance, but I could not find anything that talks abut this specific issue. People suggest different OpenGL blending functions, but the general consensus seems to be glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE_MINUS_SRC_ALPHA), which is what both SDL and SFML use by default. I have also tried different combinations with no success.
Another suggested thing is premultiplying the color with the destination alpha, since OpenGL can only have 1 factor, but it needs 2 factors for correct SourceOver. However, I cannot make sense of that at all. If I'm premultiplying (1, 0, 0) with the destination alpha value of, say, (0.1), I get (0.1, 0, 0) (as suggested here for example). Now I can tell OpenGL the factor GL_ONE_MINUS_SRC_ALPHA for this (and source with just GL_SRC_ALPHA), but then I'm effectively blending with black, which is incorrect. Though I am not a specialist on the topic, I put a fair amount of effort into trying to understand (and at least got to the point where I managed to program a working pure software implementation of every compositing mode). My understanding is that applying an alpha value of 0.1 "via premultiplication" to (1.0, 0.0, 0.0) is not at all the same as treating the alpha value correctly as the fourth color component.
Here is a minimal and complete example using SDL. Requires SDL2 itself to compile, optionally SDL2_image if you want to save as PNG.
// Define to save the result image as PNG (requires SDL2_image), undefine to instead display it in a window
#define SAVE_IMAGE_AS_PNG
#include <SDL.h>
#include <stdio.h>
#ifdef SAVE_IMAGE_AS_PNG
#include <SDL_image.h>
#endif
int main(int argc, char **argv)
{
    if (SDL_Init(SDL_INIT_VIDEO) != 0)
    {
        printf("init failed %s\n", SDL_GetError());
        return 1;
    }
#ifdef SAVE_IMAGE_AS_PNG
    if (IMG_Init(IMG_INIT_PNG) == 0)
    {
        printf("IMG init failed %s\n", IMG_GetError());
        return 1;
    }
#endif
    SDL_Window *window = SDL_CreateWindow("test", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 800, 600, SDL_WINDOW_OPENGL | SDL_WINDOW_SHOWN);
    if (window == NULL)
    {
        printf("window failed %s\n", SDL_GetError());
        return 1;
    }
    SDL_Renderer *renderer = SDL_CreateRenderer(window, 1, SDL_RENDERER_ACCELERATED | SDL_RENDERER_TARGETTEXTURE);
    if (renderer == NULL)
    {
        printf("renderer failed %s\n", SDL_GetError());
        return 1;
    }
    // This is the texture that we render on
    SDL_Texture *render_texture = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_RGBA8888, SDL_TEXTUREACCESS_TARGET, 300, 200);
    if (render_texture == NULL)
    {
        printf("rendertexture failed %s\n", SDL_GetError());
        return 1;
    }
    SDL_SetTextureBlendMode(render_texture, SDL_BLENDMODE_BLEND);
    SDL_SetRenderDrawBlendMode(renderer, SDL_BLENDMODE_BLEND);
    printf("init ok\n");
#ifdef SAVE_IMAGE_AS_PNG
    uint8_t *pixels = new uint8_t[300 * 200 * 4];
#endif
    while (1)
    {
        SDL_Event event;
        while (SDL_PollEvent(&event))
        {
            if (event.type == SDL_QUIT)
            {
                return 0;
            }
        }
        SDL_Rect rect;
        rect.x = 1;
        rect.y = 0;
        rect.w = 150;
        rect.h = 120;
        SDL_SetRenderTarget(renderer, render_texture);
        SDL_SetRenderDrawColor(renderer, 255, 0, 0, 0);
        SDL_RenderClear(renderer);
        SDL_SetRenderDrawColor(renderer, 255, 255, 255, 127);
        SDL_RenderFillRect(renderer, &rect);
#ifdef SAVE_IMAGE_AS_PNG
        SDL_RenderReadPixels(renderer, NULL, SDL_PIXELFORMAT_ARGB8888, pixels, 4 * 300);
        // Hopefully the masks are fine for your system. Might need to randomly change those ff parts around.
        SDL_Surface *tmp_surface = SDL_CreateRGBSurfaceFrom(pixels, 300, 200, 32, 4 * 300, 0xff0000, 0xff00, 0xff, 0xff000000);
        if (tmp_surface == NULL)
        {
            printf("surface error %s\n", SDL_GetError());
            return 1;
        }
        if (IMG_SavePNG(tmp_surface, "t:\\sdltest.png") != 0)
        {
            printf("save image error %s\n", IMG_GetError());
            return 1;
        }
        printf("image saved successfully\n");
        return 0;
#endif
        SDL_SetRenderTarget(renderer, NULL);
        SDL_SetRenderDrawColor(renderer, 255, 255, 255, 255);
        SDL_RenderClear(renderer);
        SDL_RenderCopy(renderer, render_texture, NULL, NULL);
        SDL_RenderPresent(renderer);
        SDL_Delay(10);
    }
}
Thanks to #HolyBlackCat and #Rabbid76 I was able to shed some light on this entire thing. I hope this can help out other people who want to know about correct alpha blending and the details behind premultiplied alpha.
The basic problem is that correct "Source Over" alpha blending is actually not possible with OpenGL's built-in blend functionality (that is, glEnable(GL_BLEND), glBlendFunc[Separate](...), glBlendEquation[Separate](...)) (this is the same for D3D, by the way). The reason is the following:
When calculating the result color and alpha values of the blending operation (according to correct Source Over), one would have to use these functions:
For each RGB color value (normalized from 0 to 1):
RGB_f = ( alpha_s x RGB_s + alpha_d x RGB_d x (1 - alpha_s) ) / alpha_f
The alpha value (normalized from 0 to 1):
alpha_f = alpha_s + alpha_d x (1 - alpha_s)
Where
subscript f is the result color/alpha,
subscript s is the source (what is on top) color/alpha,
subscript d is the destination (what is on the bottom) color/alpha,
alpha is the processed pixel's alpha value,
and RGB represents one of the pixel's red, green, or blue color values.
However, OpenGL can only handle a limited variety of additional factors to go with the source or destination values (RGB_s and RGB_d in the color equation) (see here), the relevant ones in this case being GL_ONE, GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA. We can specify the alpha formula correctly using those options, but the best we can do for RGB is:
RGB_f = alpha_s x RGB_s + RGB_d x (1 - alpha_s)
Which completely lacks the destination's alpha component (alpha_d). Note that this formula is equivalent to the correct one if alpha_d = 1. In other words, when rendering onto a framebuffer which has no alpha channel (such as the window backbuffer), this is fine; otherwise it will produce incorrect results.
To solve that problem and achieve correct alpha blending if alpha_d is NOT equal to 1, we need some gnarly workarounds. The original (first) formula above can be rewritten to
alpha_f x RGB_f = alpha_s x RGB_s + alpha_d x RGB_d x (1 - alpha_s)
if we accept the fact that the result color values will be too dark (they will have been multiplied by the result alpha value). This already gets rid of the division. To get the correct RGB value, one would have to divide the result RGB value by the result alpha value; however, as it turns out, that conversion is usually never needed. We introduce a new symbol (pmaRGB) which denotes RGB values which are generally too dark because they have been multiplied by their corresponding pixel's alpha value.
pmaRGB_f = alpha_s x RGB_s + alpha_d x RGB_d x (1 - alpha_s)
We can also get rid of the problematic alpha_d factor by ensuring that ALL of the destination image's RGB values have been multiplied with their respective alpha values at some point. For example, if we wanted the background color (1.0, 0.5, 0, 0.3), we do not clear the framebuffer with that color, but with (0.3, 0.15, 0, 0.3) instead. In other words, we are doing one of the steps that the GPU would have to do already in advance, because the GPU can only handle one factor. If we are rendering to an existing texture, we have to ensure that it was created with premultiplied alpha. The result of our blending operations will always be textures that also have premultiplied alpha, so we can keep rendering things onto there and always be sure that the destination does have premultiplied alpha. If we are rendering to a semi-transparent texture, the semi-transparent pixels will always be too dark, depending on their alpha value (0 alpha meaning black, 1 alpha meaning the correct color). If we are rendering to a buffer which has no alpha channel (like the back buffer we use for actually displaying things), alpha_f is implicitly 1, so the premultiplied RGB values are equal to the correctly blended RGB values. This is the current formula:
pmaRGB_f = alpha_s x RGB_s + pmaRGB_d x (1 - alpha_s)
This formula can be used when the source does not yet have premultiplied alpha (for example, if the source is a regular image that came out of an image processing program, with an alpha channel that is correctly blended without premultiplied alpha).
There is a reason we might want to get rid of alpha_s too, and use premultiplied alpha for the source as well:
pmaRGB_f = pmaRGB_s + pmaRGB_d x (1 - alpha_s)
This formula must be used if the source happens to have premultiplied alpha, because then the source pixel values are all pmaRGB instead of RGB. This is always going to be the case if we are rendering to an offscreen buffer with an alpha channel using the above method. It may also be reasonable to store all texture assets with premultiplied alpha by default, so that this formula can always be used.
To recap, to calculate the alpha value, we always use this formula:
alpha_f = alpha_s + alpha_d x (1 - alpha_s)
, which corresponds to (GL_ONE, GL_ONE_MINUS_SRC_ALPHA). To calculate the RGB color values, if the source does not have premultiplied alpha applied to its RGB values, we use
pmaRGB_f = alpha_s x RGB_s + pmaRGB_d x (1 - alpha_s)
, which corresponds to (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA). If it does have premultiplied alpha applied to it, we use
pmaRGB_f = pmaRGB_s + pmaRGB_d x (1 - alpha_s)
, which corresponds to (GL_ONE, GL_ONE_MINUS_SRC_ALPHA).
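A tiny helper can encode that recap directly (a sketch; sourceIsPremultiplied is a hypothetical flag you would track per texture or asset):
void setSourceOverBlend(bool sourceIsPremultiplied)
{
    glEnable(GL_BLEND);
    glBlendFuncSeparate(
        sourceIsPremultiplied ? GL_ONE : GL_SRC_ALPHA, // RGB source factor
        GL_ONE_MINUS_SRC_ALPHA,                        // RGB destination factor
        GL_ONE,                                        // alpha source factor
        GL_ONE_MINUS_SRC_ALPHA);                       // alpha destination factor
}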
What that practically means in OpenGL: When rendering to a framebuffer with alpha channel, switch to the correct blending function accordingly and make sure that the FBO's texture always has premultiplied alpha applied to its RGB values. Note that the correct blending function may potentially be different for each rendered object, according to whether or not the source has premultiplied alpha. Example: We want a background [1, 0, 0, 0.1], and render an object with color [1, 1, 1, 0.5] onto it.
// Clear with the premultiplied version of the real background color - the texture (which is always the destination in all blending operations) now complies with the "destination must always have premultiplied alpha" convention.
glClearColor(0.1f, 0.0f, 0.0f, 0.1f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
//
// Option 1 - source either already has premultiplied alpha for whatever reason, or we can easily ensure that it has
//
{
    // Set the drawing color to the premultiplied version of the real drawing color.
    glColor4f(0.5f, 0.5f, 0.5f, 0.5f);
    // Set the blending function according to "blending a source with premultiplied alpha".
    glEnable(GL_BLEND);
    glBlendFuncSeparate(GL_ONE, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
    glBlendEquationSeparate(GL_FUNC_ADD, GL_FUNC_ADD);
}
//
// Option 2 - source does not have premultiplied alpha
//
{
    // Set the drawing color to the original version of the real drawing color.
    glColor4f(1.0f, 1.0f, 1.0f, 0.5f);
    // Set the blending function according to "blending a source without premultiplied alpha".
    glEnable(GL_BLEND);
    glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
    glBlendEquationSeparate(GL_FUNC_ADD, GL_FUNC_ADD);
}
// --- draw the thing ---
glDisable(GL_BLEND);
In either case, the resulting texture has premultiplied alpha. Here are two things we might want to do with this texture:
If we want to export it as an image that is correctly alpha blended (as per the SourceOver definition), we need to get its RGBA data and explicitly divide each RGB value by the corresponding pixel's alpha value (a sketch of that division follows the next code block).
If we want to render it onto the backbuffer (whose background color shall be (0, 0, 0.5)), we proceed as we would normally (for this example, we additionally want to modulate the texture with (0, 0, 1, 0.8)):
// The back buffer has 100 % alpha.
glClearColor(0.0f, 0.0f, 0.5f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// The color with which the texture is drawn - the modulating color's RGB values also need premultiplied alpha
glColor4f(0.0f, 0.0f, 0.8f, 0.8f);
// Set the blending function according to "blending a source with premultiplied alpha".
glEnable(GL_BLEND);
glBlendFuncSeparate(GL_ONE, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
glBlendEquationSeparate(GL_FUNC_ADD, GL_FUNC_ADD);
// --- draw the texture ---
glDisable(GL_BLEND);
Technically, the result will have premultiplied alpha applied to it. However, because the result alpha will always be 1 for each pixel, the premultiplied RGB values are always equal to the correctly blended RGB values.
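Going back to the first possibility (exporting the texture as a correctly blended image), the division by alpha can be done on the CPU after reading the texture back. A rough sketch over raw RGBA bytes (pixels, width and height are assumed to come from glReadPixels or similar):
for (int i = 0; i < width * height * 4; i += 4)
{
    unsigned char a = pixels[i + 3];
    if (a != 0)
    {
        // divide the premultiplied RGB values by the pixel's alpha
        pixels[i + 0] = (unsigned char)(pixels[i + 0] * 255 / a);
        pixels[i + 1] = (unsigned char)(pixels[i + 1] * 255 / a);
        pixels[i + 2] = (unsigned char)(pixels[i + 2] * 255 / a);
    }
    // when a == 0 the premultiplied RGB values are already 0 and can stay that way
}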
To achieve the same in SFML:
renderTexture.clear(sf::Color(25, 0, 0, 25));
sf::RectangleShape rect;
sf::RenderStates rs;
// Assuming the object has premultiplied alpha - or we can easily make sure that it has
{
    rs.blendMode = sf::BlendMode(sf::BlendMode::One, sf::BlendMode::OneMinusSrcAlpha);
    rect.setFillColor(sf::Color(127, 127, 127, 127));
}
// Assuming the object does not have premultiplied alpha
{
    rs.blendMode = sf::BlendAlpha; // This is a shortcut for the constructor with the correct blending parameters for this type
    rect.setFillColor(sf::Color(255, 255, 255, 127));
}
// --- align the rect ---
renderTexture.draw(rect, rs);
And likewise, to draw the renderTexture onto the backbuffer:
// premultiplied modulation color
renderTexture_sprite.setColor(sf::Color(0, 0, 204, 204));
window.clear(sf::Color(0, 0, 127, 255));
sf::RenderStates rs;
rs.blendMode = sf::BlendMode(sf::BlendMode::One, sf::BlendMode::OneMinusSrcAlpha);
window.draw(renderTexture_sprite, rs);
Unfortunately, this is not possible with SDL afaik (at least not on the GPU as part of the rendering process). Unlike SFML, which exposes fine-grained control over the blending mode to the user, SDL does not allow setting the individual blending function components - it only has SDL_BLENDMODE_BLEND hardcoded with glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE_MINUS_SRC_ALPHA).

Loading an OpenGL texture using Boost.GIL

I wrote a simple app that loads a model using OpenGL, Assimp and Boost.GIL.
My model contains a PNG texture. When I load it using GIL and render it through OpenGL, I get a wrong result. Thanks to the power of CodeXL, I found that the texture loaded into OpenGL is completely different from the image itself.
Here is a similar question; I followed its steps but still got the same mistake.
Here is my code:
// --------- image loading
std::experimental::filesystem::path path(pathstr);
gil::rgb8_image_t img;
if (path.extension() == ".jpg" || path.extension() == ".jpeg" || path.extension() == ".png")
{
    if (path.extension() == ".png")
        gil::png_read_and_convert_image(path.string(), img);
    else
        gil::jpeg_read_and_convert_image(path.string(), img);
    _width = static_cast<int>(img.width());
    _height = static_cast<int>(img.height());
    typedef decltype(img)::value_type pixel;
    auto srcView = gil::view(img);
    //auto view = gil::interleaved_view(
    //    img.width(), img.height(), &*gil::view(img).pixels(), img.width() * sizeof pixel);
    auto pixeldata = new pixel[_width * _height];
    auto dstView = gil::interleaved_view(
        img.width(), img.height(), pixeldata, img.width() * sizeof pixel);
    gil::copy_pixels(srcView, dstView);
}
// ---------- texture loading
{
    glBindTexture(GL_TEXTURE_2D, handle());
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB,
                 image.width(), image.height(),
                 0, GL_RGB, GL_UNSIGNED_BYTE,
                 reinterpret_cast<const void*>(image.data()));
    glGenerateMipmap(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, 0);
}
And my texture is:
When it runs, the CodeXL debugger reports that the texture became:
All the other textures of this model went wrong too.
Technically this is a FAQ, asked several times already. Essentially you're running into an alignment issue. By default (you can change it), OpenGL expects image rows to be aligned on 4-byte boundaries. If your image data doesn't match this, you get this skewed result. Adding a call to glPixelStorei(GL_UNPACK_ALIGNMENT, 1); right before the call to glTexImage… will do the trick for you. Of course you should retrieve the actual alignment from the image metadata.
The image being "upside down" is caused by OpenGL putting the origin of textures into the lower left (if all transformation matrices are left at default or have positive determinant). That is unlike most image file formats (but not all) which have it in the upper left. Just flip the vertical texture coordinate and you're golden.
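Applied to the loading code in the question, the two fixes could look roughly like this (a sketch; gil::flipped_up_down_view is Boost.GIL's row-flipping view, and 1-byte alignment is the conservative choice for tightly packed RGB rows):
// copy the image with its rows flipped so it matches OpenGL's bottom-left origin
gil::copy_pixels(gil::flipped_up_down_view(srcView), dstView);
// ...
// tell OpenGL the rows are tightly packed (3-byte RGB rows are rarely 4-byte aligned)
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB,
             _width, _height, 0,
             GL_RGB, GL_UNSIGNED_BYTE, pixeldata);
Alternatively, you can leave the pixel data as-is and flip the vertical texture coordinate in your geometry.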

How to load an icon with transparent background and display it correctly with C++ / OpenGL?

I am currently trying to load an icon which has a transparent background.
Then I create a bitmap from it and try to display the bits via glTexImage2D().
But the background of the icon never gets transparent :(
Here is some of my code:
DWORD dwBmpSize = 32 * 32 * 4;
byte* bmBits = new byte[dwBmpSize];
for (unsigned int i = 0; i < dwBmpSize; i += 4)
{
    bmBits[i]     = 255; // R
    bmBits[i + 1] = 0;   // G
    bmBits[i + 2] = 0;   // B
    bmBits[i + 3] = 255; // A
    // I always get a red square, no matter what value I fill into alpha
}
// create texture from bitmap
glTexImage2D(target, 0,
             GL_RGBA, 32, 32,
             0, GL_RGBA, GL_UNSIGNED_BYTE, bmBits);
delete[] bmBits; // array delete to match new[]
Edit: I changed the code to be sure that my bits have an alpha channel.
Now I am filling a 32x32 pixel area with custom values to see what happens, instead of loading an icon. It still does not work!
What am I missing? Or is it just not possible?
You have to enable blending and set the correct blend mode.
glEnable (GL_BLEND);
glBlendFunc (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
Also if you fill the entire alpha channel with 255 it will still be opaque. Try 128 or something instead.
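One more detail that is easy to miss: blending is a per-draw state, not a property of the texture, so it has to be enabled when the quad is drawn, not when glTexImage2D is called. A minimal sketch (iconTexture and the quad-drawing code are assumptions, not the poster's names):
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glBindTexture(GL_TEXTURE_2D, iconTexture); // hypothetical texture handle for the icon
// ... draw the textured quad for the icon ...
glDisable(GL_BLEND);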