A while ago I asked a similar question; in that case I was trying to correct the perspective texture mapping of a trapezoid whose horizontal edges stay parallel, using glTexCoord4f(), which is relatively simple. Now, however, I'm trying to fix the texture mapping of the floor and ceiling in my engine. The problem is that since both depend on the shape of the map, I need to use triangles to fill in the polygonal shapes the map may contain.
I tried a few variations of the same method I used for the trapezoids. The attempt with the most "acceptable" results was to calculate the length of each of the triangle's edges (in screen coordinates) and use each result as the 'q' in the corresponding glTexCoord4f(); that is how the code currently stands.
With that in mind, how can I fix this while using glTexCoord4f()?
Here is the code I used to correct the texture mapping of the walls (functional):
float u, v;
glEnable(GL_TEXTURE_2D);
glEnable(GL_DEPTH_TEST);
float sza = wyaa - wyab; //Size of the first vertical edge on the wall
float szb = wyba - wybb; //Size of the second vertical edge on the wall
//Does the wall have stretched textures?
if(!(*wall).streechTexture){
    u = -texLength;
    v = -texHeight;
}else{
    u = -1;
    v = -1;
}
glBindTexture (GL_TEXTURE_2D, texture.at((*wall).texture));
glBegin(GL_TRIANGLE_STRIP);
glTexCoord4f(0, 0, 0, sza);
glVertex3f(wxa, wyaa + shearing, -tza * 0.001953);
glTexCoord4f(u * szb, 0, 0, szb);
glVertex3f(wxb, wyba + shearing, -tzb * 0.001953);
glTexCoord4f(0, v * sza, 0, sza);
glVertex3f(wxa, wyab + shearing, -tza * 0.001953);
glTexCoord4f(u * szb, v * szb, 0, szb);
glVertex3f(wxb, wybb + shearing, -tzb * 0.001953);
glEnd();
glDisable(GL_TEXTURE_2D);
And here is the current code that renders both the floor and the ceiling (the part that needs to be fixed):
glEnable(GL_TEXTURE_2D);
glBindTexture (GL_TEXTURE_2D, texture.at((*floor).texture));
float difA, difB, difC;
difA = vectorMag(Vertex(fxa, fyaa), Vertex(fxb, fyba)); //Size of the first edge on the triangle
difB = vectorMag(Vertex(fxb, fyba), Vertex(fxc, fyca)); //Size of the second edge on the triangle
difC = vectorMag(Vertex(fxc, fyca), Vertex(fxa, fyaa)); //Size of the third edge on the triangle
glBegin(GL_TRIANGLE_STRIP); //Rendering the floor
glTexCoord4f(ua * difA, va * difA, 0, difA);
glVertex3f(fxa, fyaa + shearing, -tza * 0.001953);
glTexCoord4f(ub * difB, vb * difB, 0, difB);
glVertex3f(fxb, fyba + shearing, -tzb * 0.001953);
glTexCoord4f(uc * difC, vc * difC, 0, difC);
glVertex3f(fxc, fyca + shearing, -tzc * 0.001953);
glEnd();
glBegin(GL_TRIANGLE_STRIP); //Rendering the ceiling
glTexCoord4f(uc, vc, 0, 1);
glVertex3f(fxc, fycb + shearing, -tzc * 0.001953);
glTexCoord4f(ub, vb, 0, 1);
glVertex3f(fxb, fybb + shearing, -tzb * 0.001953);
glTexCoord4f(ua, va, 0, 1);
glVertex3f(fxa, fyab + shearing, -tza * 0.001953);
glEnd();
glDisable(GL_TEXTURE_2D);
Here is a picture of how it looks (for comparison purposes, the floor shows the failed attempt at correct texture mapping, while the ceiling has affine texture mapping):
I understand that it would be easier if I just set a normal perspective view, but that would simply defeat the whole purpose of the engine.
This is an issue only for the floor and ceiling (unless your camera can tilt), so you can render your walls as you are doing. But for floors and ceilings you have these basic options (as I mentioned in your old duplicate post):
Rasterize scan line on your own
So instead of rendering triangles (which old ray casters did not use), you render vertical lines pixel by pixel, using points instead of triangles. That will be much slower, of course, as GL is better suited to polygonal primitives. See the draw_scanline functions here:
Efficient floor/ceiling rendering in Raycaster
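For illustration, a minimal sketch of that per-column approach, assuming an orthographic 1:1 pixel projection; x, wallBottomY, screenHeight, Color and sampleFloorColor() are hypothetical names, not from the engine above:
// for one screen column x, walk down from the bottom of the wall slice
glBegin(GL_POINTS);
for (int y = wallBottomY; y < screenHeight; ++y) {
    Color c = sampleFloorColor(x, y); // hypothetical: casts the floor ray for this pixel
    glColor3ub(c.r, c.g, c.b);
    glVertex2i(x, y);
}
glEnd();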
Use perspective view and pass z coordinate
Looks like you added the z coordinate already, so now you just need to set a perspective view that matches your wall rendering; OpenGL will do the rest on its own. Add something like gluPerspective to your GL_PROJECTION matrix, but just for your floors/ceilings...
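A minimal sketch of that setup; the FOV, aspect, and clip-plane values here are placeholders that you would need to match to your wall projection:
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
gluPerspective(60.0, (double)screenWidth / (double)screenHeight, 0.1, 100.0); // placeholder values
glMatrixMode(GL_MODELVIEW);
// ... render the floor/ceiling triangles with plain glTexCoord2f and real z ...
glMatrixMode(GL_PROJECTION);
glPopMatrix();
glMatrixMode(GL_MODELVIEW);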
Pass z coordinate and override fragment shader
So you write a fragment shader that computes the perspective-correct texture mapping itself and just outputs the wanted texel color, +/- some lighting. Here is an example of shader usage:
complete GL+GLSL+VAO/VBO C++ example
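If you go that route, one common trick is to pass (u*q, v*q, 0, q) with a noperspective qualifier and do the division in the fragment shader yourself; a minimal GLSL sketch (embedded as a C++ string literal, with assumed variable names):
const char* floorFragmentShader = R"(
    #version 330 core
    noperspective in vec4 texCoord4; // (u*q, v*q, 0, q) from the vertex shader
    uniform sampler2D tex;
    out vec4 fragColor;
    void main() {
        // dividing by the linearly interpolated q restores perspective-correct u,v
        fragColor = texture(tex, texCoord4.st / texCoord4.q);
    }
)";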
For more info see:
Ray Casting with different height size
I am using an FBO (or "Render Texture") which has an alpha channel (32bpp ARGB) and clear that with a color that is not fully opaque, for example (R=1, G=0, B=0, A=0) (i.e. completely transparent). Then I am rendering a translucent object, for example a rectangle with color (R=1, G=1, B=1, A=0.5), on top of that. (All values normalized from 0 to 1)
According to common sense, imaging software such as GIMP and Photoshop, and several articles on Porter-Duff compositing, I would expect to get a texture that is
fully transparent outside of the rectangle
white (1.0, 1.0, 1.0) with 50 % opacity inside the rectangle.
Like so (you won't see this on the SO website):
Instead, the background color RGB values, which are (1.0, 0.0, 0.0) are weighted overall with (1 - SourceAlpha) instead of (DestAlpha * (1 - SourceAlpha)). The actual result is this:
I have verified this behavior using OpenGL directly, using SDL's wrapper API, and using SFML's wrapper API. With SDL and SFML I have also saved the results as an image (with alpha channel) instead of merely rendering to the screen to be sure that it's not a problem with the final rendering step.
What do I need to do to produce the expected SourceOver result, either with SDL, SFML, or using OpenGL directly?
Some sources:
W3 article on compositing, specifies co = αs x Cs + αb x Cb x (1 – αs), weight of Cb should be 0 if αb is 0, no matter what.
English Wiki shows destination ("B") being weighted according to αb (as well as αs, indirectly).
German Wiki shows 50% transparency examples, clearly the transparent background's original RGB values do not interfere with either the green or the magenta source, also shows that the intersection is clearly asymmetric in favor of the element that is "on top".
There are also several questions on SO that seemingly deal with this at first glance, but I could not find anything that talks about this specific issue. People suggest different OpenGL blending functions, but the general consensus seems to be glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE_MINUS_SRC_ALPHA), which is what both SDL and SFML use by default. I have also tried different combinations with no success.
Another suggested thing is premultiplying the color with the destination alpha, since OpenGL can only have 1 factor, but it needs 2 factors for correct SourceOver. However, I cannot make sense of that at all. If I'm premultiplying (1, 0, 0) with the destination alpha value of, say, (0.1), I get (0.1, 0, 0) (as suggested here for example). Now I can tell OpenGL the factor GL_ONE_MINUS_SRC_ALPHA for this (and source with just GL_SRC_ALPHA), but then I'm effectively blending with black, which is incorrect. Though I am not a specialist on the topic, I put a fair amount of effort into trying to understand it (and at least got to the point where I managed to program a working pure software implementation of every compositing mode). My understanding is that applying an alpha value of 0.1 "via premultiplication" to (1.0, 0.0, 0.0) is not at all the same as treating the alpha value correctly as the fourth color component.
Here is a minimal and complete example using SDL. Requires SDL2 itself to compile, optionally SDL2_image if you want to save as PNG.
// Define to save the result image as PNG (requires SDL2_image), undefine to instead display it in a window
#define SAVE_IMAGE_AS_PNG
#include <SDL.h>
#include <stdio.h>
#ifdef SAVE_IMAGE_AS_PNG
#include <SDL_image.h>
#endif
int main(int argc, char **argv)
{
    if (SDL_Init(SDL_INIT_VIDEO) != 0)
    {
        printf("init failed %s\n", SDL_GetError());
        return 1;
    }
#ifdef SAVE_IMAGE_AS_PNG
    if (IMG_Init(IMG_INIT_PNG) == 0)
    {
        printf("IMG init failed %s\n", IMG_GetError());
        return 1;
    }
#endif
    SDL_Window *window = SDL_CreateWindow("test", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 800, 600, SDL_WINDOW_OPENGL | SDL_WINDOW_SHOWN);
    if (window == NULL)
    {
        printf("window failed %s\n", SDL_GetError());
        return 1;
    }
    SDL_Renderer *renderer = SDL_CreateRenderer(window, 1, SDL_RENDERER_ACCELERATED | SDL_RENDERER_TARGETTEXTURE);
    if (renderer == NULL)
    {
        printf("renderer failed %s\n", SDL_GetError());
        return 1;
    }
    // This is the texture that we render on
    SDL_Texture *render_texture = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_RGBA8888, SDL_TEXTUREACCESS_TARGET, 300, 200);
    if (render_texture == NULL)
    {
        printf("rendertexture failed %s\n", SDL_GetError());
        return 1;
    }
    SDL_SetTextureBlendMode(render_texture, SDL_BLENDMODE_BLEND);
    SDL_SetRenderDrawBlendMode(renderer, SDL_BLENDMODE_BLEND);
    printf("init ok\n");
#ifdef SAVE_IMAGE_AS_PNG
    uint8_t *pixels = new uint8_t[300 * 200 * 4];
#endif
    while (1)
    {
        SDL_Event event;
        while (SDL_PollEvent(&event))
        {
            if (event.type == SDL_QUIT)
            {
                return 0;
            }
        }
        SDL_Rect rect;
        rect.x = 1;
        rect.y = 0;
        rect.w = 150;
        rect.h = 120;
        SDL_SetRenderTarget(renderer, render_texture);
        SDL_SetRenderDrawColor(renderer, 255, 0, 0, 0);
        SDL_RenderClear(renderer);
        SDL_SetRenderDrawColor(renderer, 255, 255, 255, 127);
        SDL_RenderFillRect(renderer, &rect);
#ifdef SAVE_IMAGE_AS_PNG
        SDL_RenderReadPixels(renderer, NULL, SDL_PIXELFORMAT_ARGB8888, pixels, 4 * 300);
        // Hopefully the masks are fine for your system. Might need to randomly change those ff parts around.
        SDL_Surface *tmp_surface = SDL_CreateRGBSurfaceFrom(pixels, 300, 200, 32, 4 * 300, 0xff0000, 0xff00, 0xff, 0xff000000);
        if (tmp_surface == NULL)
        {
            printf("surface error %s\n", SDL_GetError());
            return 1;
        }
        if (IMG_SavePNG(tmp_surface, "t:\\sdltest.png") != 0)
        {
            printf("save image error %s\n", IMG_GetError());
            return 1;
        }
        printf("image saved successfully\n");
        return 0;
#endif
        SDL_SetRenderTarget(renderer, NULL);
        SDL_SetRenderDrawColor(renderer, 255, 255, 255, 255);
        SDL_RenderClear(renderer);
        SDL_RenderCopy(renderer, render_texture, NULL, NULL);
        SDL_RenderPresent(renderer);
        SDL_Delay(10);
    }
}
Thanks to @HolyBlackCat and @Rabbid76 I was able to shed some light on this entire thing. I hope this can help out other people who want to learn about correct alpha blending and the details behind premultiplied alpha.
The basic problem is that correct "Source Over" alpha blending is actually not possible with OpenGL's built-in blend functionality (that is, glEnable(GL_BLEND), glBlendFunc[Separate](...), glBlendEquation[Separate](...)); this is the same for D3D, by the way. The reason is the following:
When calculating the result color and alpha values of the blending operation (according to correct Source Over), one would have to use these functions:
Each RGB color values (normalized from 0 to 1):
RGB_f = ( alpha_s x RGB_s + alpha_d x RGB_d x (1 - alpha_s) ) / alpha_f
The alpha value (normalized from 0 to 1):
alpha_f = alpha_s + alpha_d x (1 - alpha_s)
Where
subscript f is the result color/alpha,
subscript s is the source (what is on top) color/alpha,
subscript d is the destination (what is on the bottom) color/alpha,
alpha is the processed pixel's alpha value
and RGB represents one of the pixel's red, green, or blue color values
However, OpenGL can only handle a limited variety of additional factors to go with the source or destination values (RGB_s and RGB_d in the color equation) (see here), the relevant ones in this case being GL_ONE, GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA. We can specify the alpha formula correctly using those options, but the best we can do for RGB is:
RGB_f = alpha_s x RGB_s + RGB_d x (1 - alpha_s)
Which completely lacks the destination's alpha component (alpha_d). Note that this formula is equivalent to the correct one if alpha_d = 1. In other words, when rendering onto a framebuffer which has no alpha channel (such as the window backbuffer), this is fine; otherwise it will produce incorrect results.
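To make this concrete with the numbers from the question: blending a source pixel (1, 1, 1, 0.5) onto the cleared destination (1, 0, 0, 0) with this formula gives RGB_f = 0.5 x (1, 1, 1) + (1, 0, 0) x 0.5 = (1.0, 0.5, 0.5), exactly the red-tinted result observed above, whereas correct Source Over yields alpha_f = 0.5 and RGB_f = (1, 1, 1).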
To solve that problem and achieve correct alpha blending if alpha_d is NOT equal to 1, we need some gnarly workarounds. The original (first) formula above can be rewritten to
alpha_f x RGB_f = alpha_s x RGB_s + alpha_d x RGB_d x (1 - alpha_s)
if we accept the fact that the result color values will be too dark (they will be multiplied by the result alpha value). This gets rid of the division already. To get the correct RGB values, one would have to divide the result RGB values by the result alpha value; however, as it turns out, that conversion is usually never needed. We introduce a new symbol (pmaRGB) which denotes RGB values that are generally too dark because they have been multiplied by their corresponding pixel's alpha value.
pmaRGB_f = alpha_s x RGB_s + alpha_d x RGB_d x (1 - alpha_s)
We can also get rid of the problematic alpha_d factor by ensuring that ALL of the destination image's RGB values have been multiplied with their respective alpha values at some point. For example, if we wanted the background color (1.0, 0.5, 0, 0.3), we do not clear the framebuffer with that color, but with (0.3, 0.15, 0, 0.3) instead. In other words, we do one of the steps that the GPU would have to do in advance, because the GPU can only handle one factor. If we are rendering to an existing texture, we have to ensure that it was created with premultiplied alpha. The result of our blending operations will always be textures that also have premultiplied alpha, so we can keep rendering things onto them and always be sure that the destination has premultiplied alpha.
If we are rendering to a semi-transparent texture, the semi-transparent pixels will always be too dark, depending on their alpha value (0 alpha meaning black, 1 alpha meaning the correct color). If we are rendering to a buffer which has no alpha channel (like the back buffer we use for actually displaying things), alpha_f is implicitly 1, so the premultiplied RGB values are equal to the correctly blended RGB values. This is the current formula:
pmaRGB_f = alpha_s x RGB_s + pmaRGB_d x (1 - alpha_s)
This formula can be used when the source does not yet have premultiplied alpha (for example, if the source is a regular image that came out of an image processing program, with an alpha channel that is correctly blended with no premultiplied alpha).
There is a reason we might want to get rid of the alpha_s factor and use premultiplied alpha for the source as well:
pmaRGB_f = pmaRGB_s + pmaRGB_d x (1 - alpha_s)
This formula must be used if the source happens to have premultiplied alpha - because then the source pixel values are all pmaRGB instead of RGB. This is always going to be the case if we are rendering to an offscreen buffer with an alpha channel using the above method. It may also be reasonable to have all texture assets stored with premultiplied alpha by default, so that this formula can always be used.
To recap, to calculate the alpha value, we always use this formula:
alpha_f = alpha_s + alpha_d x (1 - alpha_s)
, which corresponds to (GL_ONE, GL_ONE_MINUS_SRC_ALPHA). To calculate the RGB color values, if the source does not have premultiplied alpha applied to its RGB values, we use
pmaRGB_f = alpha_s x RGB_s + pmaRGB_d x (1 - alpha_s)
, which corresponds to (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA). If it does have premultiplied alpha applied to it, we use
pmaRGB_f = pmaRGB_s + pmaRGB_d x (1 - alpha_s)
, which corresponds to (GL_ONE, GL_ONE_MINUS_SRC_ALPHA).
What that practically means in OpenGL: When rendering to a framebuffer with alpha channel, switch to the correct blending function accordingly and make sure that the FBO's texture always has premultiplied alpha applied to its RGB values. Note that the correct blending function may potentially be different for each rendered object, according to whether or not the source has premultiplied alpha. Example: We want a background [1, 0, 0, 0.1], and render an object with color [1, 1, 1, 0.5] onto it.
// Clear with the premultiplied version of the real background color - the texture (which is always the destination in all blending operations) now complies with the "destination must always have premultiplied alpha" convention.
glClearColor(0.1f, 0.0f, 0.0f, 0.1f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
//
// Option 1 - source either already has premultiplied alpha for whatever reason, or we can easily ensure that it has
//
{
// Set the drawing color to the premultiplied version of the real drawing color.
glColor4f(0.5f, 0.5f, 0.5f, 0.5f);
// Set the blending equation according to "blending source with premultiplied alpha".
glEnable(GL_BLEND);
glBlendFuncSeparate(GL_ONE, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
glBlendEquationSeparate(GL_FUNC_ADD, GL_FUNC_ADD);
}
//
// Option 2 - source does not have premultiplied alpha
//
{
// Set the drawing color to the original version of the real drawing color.
glColor4f(1.0f, 1.0f, 1.0f, 0.5f);
// Set the blending equation according to "blending source without premultiplied alpha".
glEnable(GL_BLEND);
glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
glBlendEquationSeparate(GL_FUNC_ADD, GL_FUNC_ADD);
}
// --- draw the thing ---
glDisable(GL_BLEND);
In either case, the resulting texture has premultiplied alpha. Here are 2 possibilities what we might want to do with this texture:
If we want to export it as an image that is correctly alpha blended (as per the SourceOver definition), we need to get its RGBA data and explicitly divide each RGB value by the corresponding pixel's alpha value (a minimal sketch of this follows the example below).
If we want to render it onto the backbuffer (whose background color shall be (0, 0, 0.5)), we proceed as we would normally (for this example, we additionally want to modulate the texture with (0, 0, 1, 0.8)):
// The back buffer has 100 % alpha.
glClearColor(0.0f, 0.0f, 0.5f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// The color with which the texture is drawn - the modulating color's RGB values also need premultiplied alpha
glColor4f(0.0f, 0.0f, 0.8f, 0.8f);
// Set the blending equation according to "blending source with premultiplied alpha".
glEnable(GL_BLEND);
glBlendFuncSeparate(GL_ONE, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
glBlendEquationSeparate(GL_FUNC_ADD, GL_FUNC_ADD);
// --- draw the texture ---
glDisable(GL_BLEND);
Technically, the result will have premultiplied alpha applied to it. However, because the result alpha will always be 1 for each pixel, the premultiplied RGB values are always equal to the correctly blended RGB values.
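Going back to the export case (the first of the two possibilities above), here is a minimal CPU-side sketch of that division, assuming a tightly packed 8-bit RGBA buffer read back with glReadPixels; width, height and pixels are placeholder names:
#include <algorithm> // std::min

unsigned char *p = pixels; // width * height * 4 bytes, RGBA order assumed
for (int i = 0; i < width * height; ++i, p += 4) {
    if (p[3] != 0) { // fully transparent pixels carry no recoverable color
        p[0] = (unsigned char)std::min(255, p[0] * 255 / p[3]); // R
        p[1] = (unsigned char)std::min(255, p[1] * 255 / p[3]); // G
        p[2] = (unsigned char)std::min(255, p[2] * 255 / p[3]); // B
    }
}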
To achieve the same in SFML:
renderTexture.clear(sf::Color(25, 0, 0, 25));
sf::RectangleShape rect;
sf::RenderStates rs;
// Assuming the object has premultiplied alpha - or we can easily make sure that it has
{
rs.blendMode = sf::BlendMode(sf::BlendMode::One, sf::BlendMode::OneMinusSrcAlpha);
rect.setFillColor(sf::Color(127, 127, 127, 127));
}
// Assuming the object does not have premultiplied alpha
{
rs.blendMode = sf::BlendAlpha; // This is a shortcut for the constructor with the correct blending parameters for this type
rect.setFillColor(sf::Color(255, 255, 255, 127));
}
// --- align the rect ---
renderTexture.draw(rect, rs);
And likewise, to draw the renderTexture onto the backbuffer:
// premultiplied modulation color
renderTexture_sprite.setColor(sf::Color(0, 0, 204, 204));
window.clear(sf::Color(0, 0, 127, 255));
sf::RenderStates rs;
rs.blendMode = sf::BlendMode(sf::BlendMode::One, sf::BlendMode::OneMinusSrcAlpha);
window.draw(renderTexture_sprite, rs);
Unfortunately, this is not possible with SDL afaik (at least not on the GPU as part of the rendering process). Unlike SFML, which exposes fine-grained control over the blending mode to the user, SDL does not allow setting the individual blending function components - it only has SDL_BLENDMODE_BLEND hardcoded with glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE_MINUS_SRC_ALPHA).
I've found a few places where this has been asked, but I've not yet found a good answer.
The problem: I want to render to texture, and then I want to draw that rendered texture to the screen IDENTICALLY to how it would appear if I skipped the render-to-texture step and just rendered directly to the screen. I am currently using the blend mode glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA). I have glBlendFuncSeparate to play around with as well.
I want to be able to render partially transparent overlapping items to this texture. I know the blend function is currently messing up the RGB values based on the Alpha. I've seen some vague suggestions to use "premultiplied alpha" but the description is poor as to what that means. I make png files in photoshop, I know they have a translucency bit and you can't easily edit the alpha channel independently as you can with TGA. If necessary I can switch to TGA, though PNG is more convenient.
For now, for the sake of this question, assume we aren't using images, instead I am just using full color quads with alpha.
Once I render my scene to the texture I need to render that texture to another scene, and I need to BLEND the texture assuming partial transparency again. Here is where things fall apart. In the previous blending steps I clearly alter the RGB values based on alpha; doing it again works a-okay if alpha is 0 or 1, but if it is in between, the result is a further darkening of those partially translucent pixels.
Playing with blend modes I've had very little luck. The best I can do is render to texture with:
glBlendFuncSeparate(GL_ONE, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE);
I did discover that rendering multiple times with glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) will approximate the right color (unless things overlap). But that's not exactly perfect; as you can see in the following image, the parts where the green/red/blue boxes overlap get darker, or accumulate alpha. (EDIT: If I do the multiple draws in the render-to-screen part and only render once to texture, the alpha accumulation issue disappears and it does work, but why?! I don't want to have to render the same texture hundreds of times to the screen to get it to accumulate properly.)
Here are some images detailing the issue (the multiple render passes are with basic blending (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA), and they are rendered multiple times in the texture rendering step. The 3 boxes on the right are rendered 100% red, green, or blue (0-255) but at alpha values of 50% for blue, 25% for red, and 75% for green:
So, a breakdown of what I want to know:
I set blend mode to: X?
I render my scene to a texture. (Maybe I have to render with a few blend modes or multiple times?)
I set my blend mode to: Y?
I render my texture to the screen over an existing scene. (Maybe I need a different shader? Maybe I need to render the texture a few times?)
Desired behavior is that at the end of that step, the final pixel result is identical to if I were to just do this:
I set my blend mode to: (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)
I render my scene to the screen.
And, for completeness, here is some of my code with my original naive attempt (just regular blending):
//RENDER TO TEXTURE.
void Clipped::refreshTexture(bool a_forceRefresh) {
    if (a_forceRefresh || dirtyTexture) {
        auto pointAABB = basicAABB();
        auto textureSize = castSize<int>(pointAABB.size());
        clippedTexture = DynamicTextureDefinition::make("", textureSize, {0.0f, 0.0f, 0.0f, 0.0f});
        dirtyTexture = false;
        texture(clippedTexture->makeHandle(Point<int>(), textureSize));
        framebuffer = renderer->makeFramebuffer(castPoint<int>(pointAABB.minPoint), textureSize, clippedTexture->textureId());
        {
            renderer->setBlendFunction(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
            SCOPE_EXIT{ renderer->defaultBlendFunction(); };
            renderer->modelviewMatrix().push();
            SCOPE_EXIT{ renderer->modelviewMatrix().pop(); };
            renderer->modelviewMatrix().top().makeIdentity();
            framebuffer->start();
            SCOPE_EXIT{ framebuffer->stop(); };
            const size_t renderPasses = 1; // Not sure?
            if (drawSorted) {
                for (size_t i = 0; i < renderPasses; ++i) {
                    sortedRender();
                }
            } else {
                for (size_t i = 0; i < renderPasses; ++i) {
                    unsortedRender();
                }
            }
        }
        alertParent(VisualChange::make(shared_from_this()));
    }
}
Here is the code I'm using to set up the scene:
bool Clipped::preDraw() {
    refreshTexture();
    pushMatrix();
    SCOPE_EXIT{ popMatrix(); };
    renderer->setBlendFunction(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    SCOPE_EXIT{ renderer->defaultBlendFunction(); };
    defaultDraw(GL_TRIANGLE_FAN);
    return false; // returning false blocks the default rendering steps for this node.
}
And the code to render the scene:
test = MV::Scene::Rectangle::make(&renderer, MV::BoxAABB({0.0f, 0.0f}, {100.0f, 110.0f}), false);
test->texture(MV::FileTextureDefinition::make("Assets/Images/dogfox.png")->makeHandle());
box = std::shared_ptr<MV::TextBox>(new MV::TextBox(&textLibrary, MV::size(110.0f, 106.0f)));
box->setText(UTF_CHAR_STR("ABCDE FGHIJKLM NOPQRS TUVWXYZ"));
box->scene()->make<MV::Scene::Rectangle>(MV::size(65.0f, 36.0f))->color({0, 0, 1, .5})->position({80.0f, 10.0f})->setSortDepth(100);
box->scene()->make<MV::Scene::Rectangle>(MV::size(65.0f, 36.0f))->color({1, 0, 0, .25})->position({80.0f, 40.0f})->setSortDepth(101);
box->scene()->make<MV::Scene::Rectangle>(MV::size(65.0f, 36.0f))->color({0, 1, 0, .75})->position({80.0f, 70.0f})->setSortDepth(102);
test->make<MV::Scene::Rectangle>(MV::size(65.0f, 36.0f))->color({.0, 0, 1, .5})->position({110.0f, 10.0f})->setSortDepth(100);
test->make<MV::Scene::Rectangle>(MV::size(65.0f, 36.0f))->color({1, 0, 0, .25})->position({110.0f, 40.0f})->setSortDepth(101);
test->make<MV::Scene::Rectangle>(MV::size(65.0f, 36.0f))->color({.0, 1, 0, .75})->position({110.0f, 70.0f})->setSortDepth(102);
And here's my screen draw:
renderer.clearScreen();
test->draw(); //this is drawn directly to the screen.
box->scene()->draw(); //everything in here is in a clipped node with a render texture.
renderer.updateScreen();
EDIT: Framebuffer setup/teardown code:
void glExtensionFramebufferObject::startUsingFramebuffer(std::shared_ptr<Framebuffer> a_framebuffer, bool a_push) {
    savedClearColor = renderer->backgroundColor();
    renderer->backgroundColor({0.0, 0.0, 0.0, 0.0});
    require(initialized, ResourceException("StartUsingFramebuffer failed because the extension could not be loaded"));
    if (a_push) {
        activeFramebuffers.push_back(a_framebuffer);
    }
    glBindFramebuffer(GL_FRAMEBUFFER, a_framebuffer->framebuffer);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, a_framebuffer->texture, 0);
    glBindRenderbuffer(GL_RENDERBUFFER, a_framebuffer->renderbuffer);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, roundUpPowerOfTwo(a_framebuffer->frameSize.width), roundUpPowerOfTwo(a_framebuffer->frameSize.height));
    glViewport(a_framebuffer->framePosition.x, a_framebuffer->framePosition.y, a_framebuffer->frameSize.width, a_framebuffer->frameSize.height);
    renderer->projectionMatrix().push().makeOrtho(0, static_cast<MatrixValue>(a_framebuffer->frameSize.width), 0, static_cast<MatrixValue>(a_framebuffer->frameSize.height), -128.0f, 128.0f);
    GLenum buffers[] = {GL_COLOR_ATTACHMENT0};
    //pglDrawBuffersEXT(1, buffers);
    renderer->clearScreen();
}
void glExtensionFramebufferObject::stopUsingFramebuffer() {
    require(initialized, ResourceException("StopUsingFramebuffer failed because the extension could not be loaded"));
    activeFramebuffers.pop_back();
    if (!activeFramebuffers.empty()) {
        startUsingFramebuffer(activeFramebuffers.back(), false);
    } else {
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        glBindRenderbuffer(GL_RENDERBUFFER, 0);
        glViewport(0, 0, renderer->window().width(), renderer->window().height());
        renderer->projectionMatrix().pop();
        renderer->backgroundColor(savedClearColor);
    }
}
And my clear screen code:
void Draw2D::clearScreen() {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
}
Based on some calculations and simulations I ran, I came up with two fairly similar solutions that seem to do the trick. One uses pre-multiplied colors in combination with a single (separate) blend function, the other one works without pre-multiplied colors, but requires changing the blend function a couple of times in the process.
Option 1: Single Blend Function, Pre-Multiplication
This approach works with a single blend function through the entire process. The blend function is:
glBlendFuncSeparate(GL_ONE, GL_ONE_MINUS_SRC_ALPHA,
GL_ONE_MINUS_DST_ALPHA, GL_ONE);
It requires pre-multiplied colors, which means that if your input color would normally be (r, g, b, a), you use (r * a, g * a, b * a, a) instead. You can perform the pre-multiplication in the fragment shader.
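For instance, a minimal fragment shader sketch of that pre-multiplication (written here as a C++ string literal; the in/out names are assumptions, not from the original code):
const char* premultiplyFragmentShader = R"(
    #version 330 core
    smooth in vec4 color;
    out vec4 fragColor;
    void main() {
        // multiply RGB by alpha so the GL_ONE source factor receives
        // pre-multiplied values
        fragColor = vec4(color.rgb * color.a, color.a);
    }
)";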
The sequence is:
Set the blend function to (GL_ONE, GL_ONE_MINUS_SRC_ALPHA, GL_ONE_MINUS_DST_ALPHA, GL_ONE).
Set render target to FBO.
Render layers that you want rendered to FBO, using pre-multiplied colors.
Set render target to default framebuffer.
Render layers you want below FBO content, using pre-multiplied colors.
Render FBO attachment, without applying pre-multiplication since the colors in the FBO are already pre-multiplied.
Render layers you want on top of FBO content, using pre-multiplied colors.
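A minimal sketch of this sequence in GL calls (fbo and its color texture are assumed to already exist):
glEnable(GL_BLEND);
glBlendFuncSeparate(GL_ONE, GL_ONE_MINUS_SRC_ALPHA,
                    GL_ONE_MINUS_DST_ALPHA, GL_ONE); // set once, used throughout

glBindFramebuffer(GL_FRAMEBUFFER, fbo);  // render target: FBO
// ... render the FBO layers with pre-multiplied colors ...

glBindFramebuffer(GL_FRAMEBUFFER, 0);    // render target: default framebuffer
// ... render the layers below the FBO content, pre-multiplied ...
// ... draw a quad with the FBO texture; its colors are already pre-multiplied ...
// ... render the layers on top, pre-multiplied ...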
Option 2: Switch Blend Functions, without Pre-Multiplication
This approach does not require pre-multiplication of the colors for any step. The downside is that the blend function has to be switched a few times during the process.
Set the blend function to (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE_MINUS_DST_ALPHA, GL_ONE).
Set render target to FBO.
Render layers that you want rendered to FBO.
Set render target to default framebuffer.
(optional) Set the blend function to (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA).
Render layers you want below FBO content.
Set the blend function to (GL_ONE, GL_ONE_MINUS_SRC_ALPHA).
Render FBO attachment.
Set the blend function to (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA).
Render layers you want on top of FBO content.
Explanation and Proof
I think Option 1 is nicer and possibly more efficient because it does not require switching blend functions during rendering. So the detailed explanation below is for Option 1. The math for Option 2 is pretty much the same, though. The only real difference is that Option 2 uses GL_SRC_ALPHA for the first term of the blend function to perform the pre-multiplication where necessary, where Option 1 expects pre-multiplied colors to come into the blend function.
To illustrate that this works, let's go through an example where 3 layers are rendered. I'll do all the calculations for the r and a components. The calculations for g and b would be equivalent to the ones for r. We will render three layers in the following order:
(r1, a1) pre-multiplied: (r1 * a1, a1)
(r2, a2) pre-multiplied: (r2 * a2, a2)
(r3, a3) pre-multiplied: (r3 * a3, a3)
For the reference calculation, we blend these 3 layers with the standard GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA blend function. We don't need to track the resulting alpha here since DST_ALPHA is not used in the blend function, and we don't use the pre-multiplied colors yet:
after layer 1: (a1 * r1)
after layer 2: (a2 * r2 + (1.0 - a2) * a1 * r1)
after layer 3: (a3 * r3 + (1.0 - a3) * (a2 * r2 + (1.0 - a2) * a1 * r1)) =
(a3 * r3 + (1.0 - a3) * a2 * r2 + (1.0 - a3) * (1.0 - a2) * a1 * r1)
So the last term is our target for the final result. Now, we render layers 2 and 3 into an FBO. Later we will render layer 1 into the frame buffer, and then blend the FBO on top of it. The goal is to get the same result.
From now on, we will apply the blend function listed at the start, and use pre-multiplied colors. We will also need to calculate the alphas, since DST_ALPHA is used in the blend function. First, we render layers 2 and 3 into the FBO:
after layer 2: (a2 * r2, a2)
after layer 3: (a3 * r3 + (1.0 - a3) * a2 * r2, (1.0 - a2) * a3 + a2)
Now we render to the primary framebuffer. Since we don't care about the resulting alpha, I'll only calculate the r component again:
after layer 1: (a1 * r1)
Now we blend the content of the FBO on top of this. So what we calculated for "after layer 3" in the FBO is our source color/alpha, a1 * r1 is the destination color, and GL_ONE, GL_ONE_MINUS_SRC_ALPHA is still the blend function. The colors in the FBO are already pre-multiplied, so there will be no pre-multiplication in the shader while blending the FBO content:
srcR = a3 * r3 + (1.0 - a3) * a2 * r2
srcA = (1.0 - a2) * a3 + a2
dstR = a1 * r1
ONE * srcR + ONE_MINUS_SRC_ALPHA * dstR
= srcR + (1.0 - srcA) * dstR
= a3 * r3 + (1.0 - a3) * a2 * r2 + (1.0 - ((1.0 - a2) * a3 + a2)) * a1 * r1
= a3 * r3 + (1.0 - a3) * a2 * r2 + (1.0 - a3 + a2 * a3 - a2) * a1 * r1
= a3 * r3 + (1.0 - a3) * a2 * r2 + (1.0 - a3) * (1.0 - a2) * a1 * r1
Compare the last term with the reference value we calculated above for the standard blending case, and you can tell that it's exactly the same.
This answer to a similar question has some more background on the GL_ONE_MINUS_DST_ALPHA, GL_ONE part of the blend function: OpenGL ReadPixels (Screenshot) Alpha.
I achieved my goal. Now, let me share this information with the internet, since it exists nowhere else that I could find.
Create your framebuffer (glBindFramebuffer etc.)
Clear the framebuffer to 0, 0, 0, 0
Set your viewport properly. This is all basic stuff I took for granted in the question, but want to include here.
Now, render your scene to the framebuffer normally with glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA). Make sure the scene is sorted (just as you would normally.)
Now bind the included fragment shader. This will undo the damage dealt to the image color values via the blend function.
Render the texture to your screen with glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)
Go back to rendering as normal with a regular shader.
The code I included in the question remains basically untouched except that I ensure I'm binding the shader I list below when I do my "preDraw" function, which is specific to my own little framework, but is basically the "draw to screen" call for my rendered texture.
I call this the "unblend" shader.
#version 330 core
smooth in vec4 color;
smooth in vec2 uv;
uniform sampler2D texture;
out vec4 colorResult;
void main(){
    vec4 textureColor = texture2D(texture, uv.st);
    textureColor /= sqrt(textureColor.a);
    colorResult = textureColor * color;
}
Why do I do textureColor/=sqrt(textureColor.a)? Because the original color is figured like this:
resultR = r * a, resultG = g * a, resultB = b * a, resultA = a * a
Now, if we want to undo that, we need to figure out what a is. The easiest way is to solve for "a" here:
resultA = a * a
If a is .25 when originally rendering we have:
resultA = .25 * .25
Or:
resultA = 0.0625
When the texture is being drawn to the screen though, we don't have "a" anymore. We know what resultA is, it's the texture's alpha channel. So we can sqrt(resultA) to get .25 back. Now with that value we can divide to undo the multiply:
textureColor/=sqrt(textureColor.a);
And that fixes everything up undoing the blending!
EDIT: Well... kinda, at least. There is a slight inaccuracy; in this case I can show it by rendering over a clear color that is not identical to the framebuffer clear color. Some alpha information seems to be lost, probably in the RGB channels. This is still good enough for me, but I wanted to follow up with the screenshot showing the inaccuracy before signing out. If anyone has a solution, please provide it!
I have opened a bounty to bring this answer up to a canonical, 100% correct solution. Right now, if I render more partially transparent objects over the existing transparency, the transparency is accumulated differently than on the right, resulting in a lightening of the final texture beyond what is shown on the right. Likewise, when rendered over a non-black background, it's clear the results of the existing solution differ slightly, as demonstrated above.
A proper solution would be identical in all cases. My existing solution cannot take the destination blending into account in the shader correction, only the source alpha.
In order to do this in a single pass, you need support for separate color & alpha blending functions. First you render the texture, which has the foreground contribution stored in the alpha channel (i.e. 1 = fully opaque, 0 = fully transparent) and the pre-multiplied source color values in the RGB channels. To create this texture, do the following operations:
1) clear the texture to RGBA=[0, 0, 0, 0]
2) set the color channel blending to src_color*src_alpha+dst_color*(1-src_alpha)
3) set the alpha channel blending to src_alpha*(1-dst_alpha)+dst_alpha
4) render the scene to the texture
To set the mode specified by 2) and 3), you can do: glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE_MINUS_DST_ALPHA, GL_ONE) and glBlendEquation(GL_FUNC_ADD)
Next render this texture to the scene by setting the color blending to:
src_color+dst_color*(1-src_alpha), i.e. glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA) and glBlendEquation(GL_FUNC_ADD)
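Putting the steps together, a minimal sketch (fbo and its color texture are assumed to already exist):
// Pass 1: build the texture
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);                      // 1) clear to RGBA=[0, 0, 0, 0]
glClear(GL_COLOR_BUFFER_BIT);
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);
glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA,  // 2) color channel blending
                    GL_ONE_MINUS_DST_ALPHA, GL_ONE);       // 3) alpha channel blending
// ... 4) render the scene to the texture ...

// Pass 2: composite the texture into the scene
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA); // the texture's colors are already pre-multiplied
// ... draw a quad textured with the FBO's color attachment ...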
Your problem is older than OpenGL, or personal computers, or indeed any living human. You're trying to blend two images together and make it look like they weren't blended at all. Printing presses face this exact problem. When ink is applied to paper, the result is a blend between the ink color and the paper color.
The solution is the same in paper as it is in OpenGL. You must alter your source image in order to control your final result. This is easy enough to figure out if you examine the math used to blend.
For each of R, G, B, the resultant color is (old * (1-opacity)) + (new * opacity). The basic scenario, and the one you'd like to emulate, is drawing a color directly onto the final back buffer at opacity A.
For example, opacity is 50% and your green channel has 0xFF. The result should be 0x7F on a black background (including unavoidable rounding error). You probably can't assume the background is black, so expect the green channel to vary between 0x7F and 0xFF.
You'd like to know how to emulate that result when you're really rendering to a texture, then rendering the texture to the back buffer. It turns out that the "vague suggestions to use 'premultiplied alpha'" were correct. Whereas your solution is to use a shader to unblend a previous blend operation in the last step, the standard solution is to multiply the colors of your original source texture with the alpha channel (aka premultiplied alpha). When compositing the intermediate texture, the RGB channels are blended without multiplying by alpha. When rendering the texture to the back buffer, again the RGB channels are blended without multiplying by alpha. Thus you neatly avoid the multiple multiplication problem.
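For example, a minimal sketch of pre-multiplying an 8-bit RGBA image on the CPU before uploading it with glTexImage2D; imageData, width and height are placeholder names, and a tightly packed RGBA layout is an assumption:
unsigned char *p = imageData; // width * height * 4 bytes, RGBA order assumed
for (int i = 0; i < width * height; ++i, p += 4) {
    p[0] = (unsigned char)(p[0] * p[3] / 255); // R *= A
    p[1] = (unsigned char)(p[1] * p[3] / 255); // G *= A
    p[2] = (unsigned char)(p[2] * p[3] / 255); // B *= A
}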
Please consult these resources for a better understanding. I and most others are more familiar with this technique in DirectX, so you may have to search for the appropriate OGL flags.
My problem concerns rendering text with OpenGL -- the text is rendered into a texture, and then drawn onto a quad. The trouble is that the pixels on the edge of the texture are drawn partially transparent. The interior of the texture is fine.
I'm calculating the texture coordinates to hit the center of my texels, using NEAREST (non-)interpolation, setting the texture wrapping to CLAMP_TO_EDGE, and setting the projection matrix to place my vertices at the center of the viewport pixels. Still seeing the issue.
I'm working on VTK with their texture utilities. These are the GL calls that are used to load the texture, as determined by stepping through with a debugger:
glGenTextures(1, &id);
glBindTexture(GL_TEXTURE_2D, id);
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
// Create and bind pixel buffer object here (not shown, lots of indirection in VTK)...
glTexImage2D( GL_TEXTURE_2D, 0 , GL_RGBA, xsize, ysize, 0, format, GL_UNSIGNED_BYTE, 0);
// Unbind PBO -- also omitted
glBindTexture(GL_TEXTURE_2D, id);
glAlphaFunc (GL_GREATER, static_cast<GLclampf>(0));
glEnable (GL_ALPHA_TEST);
// I've also tried doing this here for premultiplied alpha, but it made no difference:
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
glMatrixMode(GL_TEXTURE);
glLoadIdentity();
The rendering code:
float p[2] = ...; // point to render text at
int imgDims[2] = ...; // Actual dimensions of image
float width = ...; // Width of texture in image
float height = ...; // Height of texture in image
// Prepare the quad
float xmin = p[0];
float xmax = xmin + width - 1;
float ymin = p[1];
float ymax = ymin + height - 1;
float quad[] = { xmin, ymin,
xmax, ymin,
xmax, ymax,
xmin, ymax };
// Calculate the texture coordinates.
float smin = 1.0f / (2.0f * (imgDims[0]));
float smax = (2.0 * width - 1.0f) / (2.0f * imgDims[0]);
float tmin = 1.0f / (2.0f * imgDims[1]);
float tmax = (2.0f * height - 1.0f) / (2.0f * imgDims[1]);
float texCoord[] = { smin, tmin,
smax, tmin,
smax, tmax,
smin, tmax };
// Set projection matrix to map object coords to pixel centers
// (modelview is identity)
GLint vp[4];
glGetIntegerv(GL_VIEWPORT, vp);
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
float offset = 0.5;
glOrtho(offset, vp[2] + offset,
offset, vp[3] + offset,
-1, 1);
// Disable polygon smoothing. Why not, I've tried everything else?
glDisable(GL_POLYGON_SMOOTH);
// Draw the quad
glColor4ub(255, 255, 255, 255);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, points);
glTexCoordPointer(2, GL_FLOAT, 0, texCoord);
glDrawArrays(GL_QUADS, 0, 4);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);
// Restore projection matrix
glMatrixMode(GL_PROJECTION);
glPopMatrix();
For debugging purposes, I've overwritten the outermost texels with red, and the next inner layer of texels with green (otherwise it's hard to see what's going on in the mostly-white text image).
I've inspected the texture in-memory using gDEBugger, and it looks as expected -- bright red and green borders around the texture area (the extra empty space is padding to make its size a power of two). For reference:
Here's what the final rendered image looks like (magnified 20x -- the black pixels are remnants of the text that was rendered under the debugging borders). Pale red border, but still a bold green inner border:
So it is just the outer edge of pixels that is affected. I'm not sure if it's color-blending or alpha-blending that's screwing things up, I'm at a loss. I've noticed that the corner pixels are twice as pale as the edge pixels, perhaps that's significant... Maybe someone here can spot the error?
Could be a "pixel perfect" problem. OpenGL defines the center of a line to be the spot that gets rasterized into a pixel, and that center is exactly halfway between one integer and the next... so to get pixel (x,y) to display "pixel perfect", fix up your coordinates to be:
x=(int)x+0.5f; // x is a float.. makes 0.0 into 0.5, 16.343 into 16.5, etc.
y=(int)y+0.5f;
This probably is what is messing up the blending. I had the same issues with texture modulating... a single somewhat dimmer line or series of pixels at the bottom and right edges.
Okay, I've worked on this for the last few days. There were a few ideas that didn't work at all. The only one that worked is to accept that this "perfect pixel" behavior exists and try to trick it. Too bad I can't upvote your answer, Cosmic Bacon. But your answer, even if it looks good, will slightly break things in certain programs like games. My answer is an improved version of yours.
Here's the solution:
Step 1: Make a method that draws the texture you need, and use only it for drawing. Add 0.5f to every coordinate. Look:
public void render(Texture tex, float x1, float y1, float x2, float y2)
{
    tex.bind();
    GL11.glBegin(GL11.GL_QUADS);
    GL11.glTexCoord2f(0, 0);
    GL11.glVertex2f(x1 + 0.5f, y1 + 0.5f);
    GL11.glTexCoord2f(1, 0);
    GL11.glVertex2f(x2 + 0.5f, y1 + 0.5f);
    GL11.glTexCoord2f(1, 1);
    GL11.glVertex2f(x2 + 0.5f, y2 + 0.5f);
    GL11.glTexCoord2f(0, 1);
    GL11.glVertex2f(x1 + 0.5f, y2 + 0.5f);
    GL11.glEnd();
}
Step 2: If you're going to use glTranslatef(something1, something2, 0), it is a good idea to make a method that wraps Translatef and doesn't let the camera move by a fractional distance. If there is any chance the camera can move by, say, 0.3, sooner or later you'll see this issue again (multiple times, I suppose). The following code makes the camera follow an object that has X and Y coordinates, and the camera will never lose the object from its sight:
public void LookFollow(Block AF)
{
    float some = 5; // changing this makes the camera move faster/slower
    float mx = 0, my = 0;
    // Right-Left
    if (LookCorX != AF.getX())
    {
        if (AF.getX() > LookCorX)
        {
            if (AF.getX() < LookCorX + 2)
                mx = AF.getX() - LookCorX;
            if (AF.getX() > LookCorX + 2)
                mx = (AF.getX() - LookCorX) / some;
        }
        if (AF.getX() < LookCorX)
        {
            if (2 + AF.getX() > LookCorX)
                mx = AF.getX() - LookCorX;
            if (2 + AF.getX() < LookCorX)
                mx = (AF.getX() - LookCorX) / some;
        }
    }
    // Up-Down
    if (LookCorY != AF.getY())
    {
        if (AF.getY() > LookCorY)
        {
            if (AF.getY() < LookCorY + 2)
                my = AF.getY() - LookCorY;
            if (AF.getY() > LookCorY + 2)
                my = (AF.getY() - LookCorY) / some;
        }
        if (AF.getY() < LookCorY)
        {
            if (2 + AF.getY() > LookCorY)
                my = AF.getY() - LookCorY;
            if (2 + AF.getY() < LookCorY)
                my = (AF.getY() - LookCorY) / some;
        }
    }
    // Evading the "perfect pixel"
    mx = (int)mx;
    my = (int)my;
    // Moving the camera
    GL11.glTranslatef(-mx, -my, 0);
    // Saving the camera's position
    LookCorX += mx;
    LookCorY += my;
}

float LookCorX = 300, LookCorY = 200; // camera's starting position
As a result, we get a camera that moves a little more sharply, since steps can't be smaller than 1 pixel even when a smaller step would be appropriate, but the textures look okay, and that's great progress!
Sorry for the big answer. I'm still working on a good solution; once I find something better and shorter, I'll replace this.
I am doing some really basic experiments with some 2D work in GL. I'm trying to draw a "picture frame" around a rectangular area. I'd like the frame to have a consistent gradient all the way around, so I'm constructing it with geometry that looks like four quads, one on each side of the frame, tapered in to make trapezoids that effectively have miter joins.
The vert coords are the same on the "inner" and "outer" rectangles, and the colors are the same for all inner and all outer as well, so I'd expect to see perfect blending at the edges.
But notice in the image below how there appears to be a "seam" in the corner of the join that's lighter than it should be.
I feel like I'm missing something conceptually in the math that explains this. Is this artifact somehow a result of the gradient slope? If I change all the colors to opaque blue (say), I get a perfect solid blue frame as expected.
Update: Code added below. Sorry, it's kind of verbose. I'm using 2-triangle fans for the trapezoids instead of quads.
Thanks!
glClearColor(1.0, 1.0, 1.0, 1.0);
glClear(GL_COLOR_BUFFER_BIT);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
// Prep the color array. This is the same for all trapezoids.
// 4 verts * 4 components/color = 16 values.
GLfloat colors[16] = {
    0.0, 0.0, 1.0, 1.0,  // outer vertex 1: blue
    0.0, 0.0, 1.0, 1.0,  // outer vertex 2: blue
    1.0, 1.0, 1.0, 1.0,  // inner vertex 1: white
    1.0, 1.0, 1.0, 1.0   // inner vertex 2: white
};
// Draw the trapezoidal frame areas. Each one is two triangle fans.
// Fan of 2 triangles = 4 verts = 8 values
GLfloat vertices[8];
float insetOffset = 100;
float frameMaxDimension = 1000;
// Bottom
vertices[0] = 0;
vertices[1] = 0;
vertices[2] = frameMaxDimension;
vertices[3] = 0;
vertices[4] = frameMaxDimension - insetOffset;
vertices[5] = 0 + insetOffset;
vertices[6] = 0 + insetOffset;
vertices[7] = 0 + insetOffset;
glVertexPointer(2, GL_FLOAT , 0, vertices);
glColorPointer(4, GL_FLOAT, 0, colors);
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
// Left
vertices[0] = 0;
vertices[1] = frameMaxDimension;
vertices[2] = 0;
vertices[3] = 0;
vertices[4] = 0 + insetOffset;
vertices[5] = 0 + insetOffset;
vertices[6] = 0 + insetOffset;
vertices[7] = frameMaxDimension - insetOffset;
glVertexPointer(2, GL_FLOAT , 0, vertices);
glColorPointer(4, GL_FLOAT, 0, colors);
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
/* top & right would be as expected... */
glDisableClientState(GL_COLOR_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);
As @Newbie posted in the comments,
@quixoto: open your image in a paint program, click with the fill tool somewhere in the seam, and you'll see it makes a 90 degree angle line there... meaning there's only 1 color, nothing brighter anywhere in the "seam". It's just an illusion.
True. While I'm not familiar with this part of math under OpenGL, I believe this is the implicit result of how the interpolation of colors between the triangle vertices is performed... I'm positive that it's called "Bilinear interpolation".
So what to do to solve that? One possibility is to use a texture and just draw a textured quad (or several textured quads).
However, it should be easy to generate such a border in a fragment shader.
A nice solution using a GLSL shader...
Assume you're drawing a rectangle with the bottom-left corner having texture coords equal to (0,0), and the top-right corner with (1,1).
Then generating the "miter" procedurally in a fragment shader would look like this, if I'm correct:
varying vec2 coord;
uniform vec2 insetWidth; // width of the border in %, max would be 0.5
void main() {
    vec3 borderColor = vec3(0,0,1);
    vec3 backgroundColor = vec3(1,1,1);
    // x and y inset, 0..1, 1 means border, 0 means centre
    vec2 insets = max(-coord + insetWidth, vec2(0,0)) / insetWidth;
If I'm correct so far, then now for every pixel the value of insets.x has a value in the range [0..1]
determining how deep a given point is into the border horizontally,
and insets.y has the similar value for vertical depth.
The left vertical bar has insets.y == 0,
the bottom horizontal bar has insets.x == 0, and the lower-left corner has the pair (insets.x, insets.y) covering the whole 2D range from (0,0) to (1,1). See the pic for clarity:
Now we want a transformation which for a given (x,y) pair will give us ONE value [0..1] determining how to mix background and foreground color. 1 means 100% border, 0 means 0% border. And this can be done in several ways!
The function should obey the requirements:
0 if x==0 and y==0
1 if either x==1 or y==1
smooth values in between.
Assume such function:
float bias = max(insets.x,insets.y);
It satisfies those requirements. Actually, I'm pretty sure that this function would give you the same "sharp" edge as you have above. Try to calculate it on paper for a selection of coordinates inside that bottom-left rectangle.
If we want to have a smooth, round miter there, we just need another function here. I think that something like this would be sufficient:
float bias = min( length(insets) , 1 );
The length() function here is just sqrt(insets.x*insets.x + insets.y*insets.y). What's important: This translates to: "the farther away (in terms of Euclidean distance) we are from the border, the more visible the border should be", and the min() is just to make the result not greater than 1 (= 100%).
Note that our original function adheres to exactly the same definition - but the distance is calculated according to the Chessboard (Chebyshev) metric, not the Euclidean metric.
This implies that using, for example, Manhattan metric instead, you'd have a third possible miter shape! It would be defined like this:
float bias = min(insets.x+insets.y, 1);
I predict that this one would also have a visible "diagonal line", but the diagonal would be in the other direction ("\").
OK, so for the rest of the code, when we have the bias [0..1], we just need to mix the background and foreground color:
    vec3 finalColor = mix(borderColor, backgroundColor, bias);
    gl_FragColor = vec4(finalColor, 1); // return the calculated RGB, and set alpha to 1
}
And that's it! Using GLSL with OpenGL makes life simpler. Hope that helps!
I think that what you're seeing is a Mach band. Your visual system is very sensitive to changes in the 1st derivative of brightness. To get rid of this effect, you need to blur your intensities. If you plot intensity along a scanline which passes through this region, you'll see that there are two lines which meet at a sharp corner. To keep your visual system from highlighting this area, you'll need to round this join over. You can do this with either a post processing blur or by adding some more small triangles in the corner which ease the transition.
I had that in the past, and it's very sensitive to geometry. For example, if you draw them separately as triangles, in separate operations, instead of as a triangle fan, the problem is less severe (or, at least, it was in my case, which was similar but slightly different).
One thing I also tried is to draw the triangles separately, slightly overlapping one another, with the right composition mode (or OpenGL blending) so you don't get the effect. It worked, but I didn't end up using it because it was only a tiny part of the final product and not worth it.
I'm sorry that I have no idea what the root cause of this effect is, however :(