Setting the scene
I'm working on a feature in SceneKit where I have a camera at the center of a sphere. The sphere has a texture wrapped around it; let's say it's a 360-degree image captured inside a room.
So far
I have identified the positions on the sphere that correspond to the corners of the floor, and I can construct a new flat 2D plane that matches the dimensions of the floor from the camera's perspective. E.g. if the room had a long rectangular floor, I'd create a trapezoid-shaped plane.
Problem
But I would like the new 2D plane to have the texture of the floor, not just its shape. How do I do this, given that what I want to extract is not the original texture image but the result of its projection onto the sphere?
FYI, I'm pretty new to SceneKit and 3D graphics in general, and I'm even newer to OpenGL.
I assume that your image is structured in a way that lets you directly pick a pixel given an arbitrary direction. E.g. if the azimuth of the direction is mapped to the image's x-coordinate and the elevation of the direction to the image's y-coordinate, you would convert the direction to these two parameters and pick the color at those coordinates. If that is not the case, you have to find the intersection of the corresponding ray (starting at the camera) with the sphere and look up the texture coordinate at that intersection. You can then pick the color using this texture coordinate.
Now, you have basically two options. The first option is generating a new texture for the plane. The second option is sampling the spherical image from a shader.
Option 1 - Generate a new texture
You know the extents of your plane, so you can generate a new texture whose dimensions are proportional to them, at an arbitrary resolution. All you then need to do is fill the pixels of this texture. For each pixel, you generate the corresponding ray and find the matching color in the spherical image, like so:
input: d1, d2, d3, d4 (the four direction vectors of the plane corners)

// d3 +------+ d4
// d1 +------+ d2

for x from 0 to width - 1
    for y from 0 to height - 1
        // Find the direction vector for this pixel through bilinear interpolation
        a = x / (width - 1)  // horizontal interpolation parameter
        b = y / (height - 1) // vertical interpolation parameter
        d = (1 - a) * ((1 - b) * d1 + b * d3) + a * ((1 - b) * d2 + b * d4)
        normalize d
        // Sample the spherical image in direction d
        color = sample(d)
        // Write the color to the new planar texture
        texture(x, y) = color
    next
next
Then, you have a new texture that you can apply to the plane. Barycentric interpolation might be more appropriate if you express the plane as two triangles. But as long as the plane is rectangular, the results will be the same.
Note that the sample() method depends on your image structure and needs to be implemented appropriately.
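For a common equirectangular (latitude/longitude) panorama, a minimal C++ sketch of sample() could look like the following. The Image struct here is a hypothetical stand-in for whatever image type you actually use; the point is only the direction-to-pixel mapping:

#include <algorithm>
#include <cmath>
#include <vector>

struct Color { unsigned char r, g, b, a; };

struct Image {
    int width = 0, height = 0;
    std::vector<Color> pixels; // row-major, width * height entries
};

// d = (dx, dy, dz) must be normalized; y is assumed to be "up".
Color sample(const Image& img, float dx, float dy, float dz)
{
    const float pi = 3.14159265358979f;
    // Azimuth in [-pi, pi] maps to u in [0, 1].
    float u = (std::atan2(dz, dx) + pi) / (2.0f * pi);
    // Elevation in [-pi/2, pi/2] maps to v in [0, 1].
    float v = (std::asin(std::clamp(dy, -1.0f, 1.0f)) + 0.5f * pi) / pi;
    int x = std::min(static_cast<int>(u * img.width), img.width - 1);
    int y = std::min(static_cast<int>(v * img.height), img.height - 1);
    return img.pixels[y * img.width + x];
}

This does a nearest-neighbor lookup for brevity; bilinearly filtering the four surrounding texels would give smoother results.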
Option 2 - Sample in a shader
In option 2, you do the same thing as in option 1, but in a fragment shader. You pass the vertices of the plane along with their respective directions (this might be just the vertex position) and let the GPU interpolate them. This gives you the direction d directly, which you can use to sample. Here is some pseudo shader code:
in vec3 direction;
out vec4 color;

void main()
{
    color = sample(normalize(direction));
}
If your image is a cube map, you can even let the GPU do the sampling.
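For completeness, here is what that fragment shader could look like for an equirectangular image, using the same mapping as the CPU sketch above (kept as a C++ string constant; the sampler name panorama is an assumption):

const char* kPanoramaFrag = R"GLSL(
#version 330 core
in vec3 direction;
out vec4 color;
uniform sampler2D panorama;
void main()
{
    vec3 d = normalize(direction);
    float u = (atan(d.z, d.x) + 3.14159265) / 6.28318531;                // azimuth -> [0, 1]
    float v = (asin(clamp(d.y, -1.0, 1.0)) + 1.57079633) / 3.14159265;  // elevation -> [0, 1]
    color = texture(panorama, vec2(u, v));
}
)GLSL";

In the cube map case you would instead declare uniform samplerCube panorama; and simply write color = texture(panorama, normalize(direction));.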
I checked the result of the filter-width GLSL function (fwidth) by coloring it in red on a plane around the camera.
The result is a bizarre pattern. I thought it would be a circular gradient on the plane, extending outward from the camera as a function of distance, since pixels farther away should uniformly cover larger UV steps between neighboring fragments.
Why isn't fwidth(UV) a simple gradient as a function of distance from the camera? I don't understand how it can work properly if it isn't, because I want to anti-alias pixels as a function of how much the UV coordinates change between them.
float width = fwidth(i.uv)*.2;
return float4(width,0,0,1)*(2*i.color);
UVs that are close = black, and far = red.
Result:
The above pattern from fwidth is axis-aligned and has one axis of symmetry. It couldn't anti-alias a two-axis checkerboard, an n-axis texture of Perlin noise, or a radial checkerboard:
float2 xy0 = float2(i.uv.x, i.uv.z) + float2(-0.5, -0.5);
float c0 = length(xy0);                     // sqrt(x*x + y*y), polar coordinate radius
float r0 = atan2(i.uv.x - .5, i.uv.z - .5); // polar coordinate angle
float ww = round(sin(c0 * freq) * sin(r0 * 50) * .5 + .5);
Axis independent aliasing pattern:
The mipmapping and filtering parameters are determined by the partial derivatives of the texture coordinates in screen space, not by the distance (in fact, as soon as the fragment stage kicks in, there's no such thing as distance anymore).
I suggest you replace the fwidth visualization with a procedurally generated checkerboard, i.e. (mod(uv.s * k, 1.0) > 0.5) * (mod(uv.t * k, 1.0) < 0.5), where k is a scaling parameter. You'll see that the "density" of the checkerboard (and the aliasing artifacts) is highest where you've got the most red in your picture.
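As a reference, here is that checkerboard test as a complete fragment shader, kept as a C++ string constant (the uv input and k uniform names are assumptions; this variant XORs the two square waves, which gives the classic two-tone checker):

const char* kCheckerFrag = R"GLSL(
#version 330 core
in vec2 uv;
out vec4 fragColor;
uniform float k; // checker scale
void main()
{
    float s = step(0.5, mod(uv.s * k, 1.0));
    float t = step(0.5, mod(uv.t * k, 1.0));
    float c = abs(s - t); // XOR of the two square waves
    fragColor = vec4(vec3(c), 1.0);
}
)GLSL";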
I've found a few places where this has been asked, but I've not yet found a good answer.
The problem: I want to render to texture, and then I want to draw that rendered texture to the screen identically to how it would appear if I skipped the render-to-texture step and just rendered directly to the screen. I am currently using the blend mode glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA). I also have glBlendFuncSeparate to play around with.
I want to be able to render partially transparent, overlapping items to this texture. I know the blend function is currently messing up the RGB values based on the alpha. I've seen some vague suggestions to use "premultiplied alpha", but the descriptions of what that means are poor. I make PNG files in Photoshop; I know they have a translucency bit, and you can't easily edit the alpha channel independently as you can with TGA. If necessary I can switch to TGA, though PNG is more convenient.
For now, for the sake of this question, assume we aren't using images; I am just using full-color quads with alpha.
Once I render my scene to the texture, I need to render that texture into another scene, and I need to BLEND the texture assuming partial transparency again. Here is where things fall apart. The previous blending steps clearly alter the RGB values based on alpha; doing it again works fine if alpha is 0 or 1, but if it is in between, the result is a further darkening of those partially translucent pixels.
Playing with blend modes I've had very little luck. The best I can do is render to texture with:
glBlendFuncSeparate(GL_ONE, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE);
I did discover that rendering multiple times with glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) will approximate the right color, unless things overlap. But that's not exactly perfect: as you can see in the following image, the parts where the green/red/blue boxes overlap get darker, or accumulate alpha. (EDIT: If I do the multiple draws in the render-to-screen step and only render once to texture, the alpha accumulation issue disappears and it does work. But why?! I don't want to have to render the same texture hundreds of times to the screen to get it to accumulate properly.)
Here are some images detailing the issue. The multiple render passes use basic blending (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA), rendered multiple times in the texture rendering step. The three boxes on the right are rendered 100% red, green, or blue (0-255), but at alpha values of 50% for blue, 25% for red, and 75% for green:
So, a breakdown of what I want to know:
I set blend mode to: X?
I render my scene to a texture. (Maybe I have to render with a few blend modes or multiple times?)
I set my blend mode to: Y?
I render my texture to the screen over an existing scene. (Maybe I need a different shader? Maybe I need to render the texture a few times?)
Desired behavior is that at the end of that step, the final pixel result is identical to if I were to just do this:
I set my blend mode to: (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)
I render my scene to the screen.
And, for completeness, here is some of my code with my original naive attempt (just regular blending):
//RENDER TO TEXTURE.
void Clipped::refreshTexture(bool a_forceRefresh) {
    if(a_forceRefresh || dirtyTexture){
        auto pointAABB = basicAABB();
        auto textureSize = castSize<int>(pointAABB.size());
        clippedTexture = DynamicTextureDefinition::make("", textureSize, {0.0f, 0.0f, 0.0f, 0.0f});
        dirtyTexture = false;
        texture(clippedTexture->makeHandle(Point<int>(), textureSize));
        framebuffer = renderer->makeFramebuffer(castPoint<int>(pointAABB.minPoint), textureSize, clippedTexture->textureId());
        {
            renderer->setBlendFunction(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
            SCOPE_EXIT{renderer->defaultBlendFunction(); };
            renderer->modelviewMatrix().push();
            SCOPE_EXIT{renderer->modelviewMatrix().pop(); };
            renderer->modelviewMatrix().top().makeIdentity();
            framebuffer->start();
            SCOPE_EXIT{framebuffer->stop(); };
            const size_t renderPasses = 1; //Not sure?
            if(drawSorted){
                for(size_t i = 0; i < renderPasses; ++i){
                    sortedRender();
                }
            } else{
                for(size_t i = 0; i < renderPasses; ++i){
                    unsortedRender();
                }
            }
        }
        alertParent(VisualChange::make(shared_from_this()));
    }
}
Here is the code I'm using to set up the scene:
bool Clipped::preDraw() {
    refreshTexture();
    pushMatrix();
    SCOPE_EXIT{popMatrix(); };
    renderer->setBlendFunction(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    SCOPE_EXIT{renderer->defaultBlendFunction();};
    defaultDraw(GL_TRIANGLE_FAN);
    return false; //returning false blocks the default rendering steps for this node.
}
And the code to render the scene:
test = MV::Scene::Rectangle::make(&renderer, MV::BoxAABB({0.0f, 0.0f}, {100.0f, 110.0f}), false);
test->texture(MV::FileTextureDefinition::make("Assets/Images/dogfox.png")->makeHandle());
box = std::shared_ptr<MV::TextBox>(new MV::TextBox(&textLibrary, MV::size(110.0f, 106.0f)));
box->setText(UTF_CHAR_STR("ABCDE FGHIJKLM NOPQRS TUVWXYZ"));
box->scene()->make<MV::Scene::Rectangle>(MV::size(65.0f, 36.0f))->color({0, 0, 1, .5})->position({80.0f, 10.0f})->setSortDepth(100);
box->scene()->make<MV::Scene::Rectangle>(MV::size(65.0f, 36.0f))->color({1, 0, 0, .25})->position({80.0f, 40.0f})->setSortDepth(101);
box->scene()->make<MV::Scene::Rectangle>(MV::size(65.0f, 36.0f))->color({0, 1, 0, .75})->position({80.0f, 70.0f})->setSortDepth(102);
test->make<MV::Scene::Rectangle>(MV::size(65.0f, 36.0f))->color({.0, 0, 1, .5})->position({110.0f, 10.0f})->setSortDepth(100);
test->make<MV::Scene::Rectangle>(MV::size(65.0f, 36.0f))->color({1, 0, 0, .25})->position({110.0f, 40.0f})->setSortDepth(101);
test->make<MV::Scene::Rectangle>(MV::size(65.0f, 36.0f))->color({.0, 1, 0, .75})->position({110.0f, 70.0f})->setSortDepth(102);
And here's my screen draw:
renderer.clearScreen();
test->draw(); //this is drawn directly to the screen.
box->scene()->draw(); //everything in here is in a clipped node with a render texture.
renderer.updateScreen();
*EDIT: FRAMEBUFFER SETUP/TEARDOWN CODE:
void glExtensionFramebufferObject::startUsingFramebuffer(std::shared_ptr<Framebuffer> a_framebuffer, bool a_push){
    savedClearColor = renderer->backgroundColor();
    renderer->backgroundColor({0.0, 0.0, 0.0, 0.0});
    require(initialized, ResourceException("StartUsingFramebuffer failed because the extension could not be loaded"));
    if(a_push){
        activeFramebuffers.push_back(a_framebuffer);
    }
    glBindFramebuffer(GL_FRAMEBUFFER, a_framebuffer->framebuffer);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, a_framebuffer->texture, 0);
    glBindRenderbuffer(GL_RENDERBUFFER, a_framebuffer->renderbuffer);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, roundUpPowerOfTwo(a_framebuffer->frameSize.width), roundUpPowerOfTwo(a_framebuffer->frameSize.height));
    glViewport(a_framebuffer->framePosition.x, a_framebuffer->framePosition.y, a_framebuffer->frameSize.width, a_framebuffer->frameSize.height);
    renderer->projectionMatrix().push().makeOrtho(0, static_cast<MatrixValue>(a_framebuffer->frameSize.width), 0, static_cast<MatrixValue>(a_framebuffer->frameSize.height), -128.0f, 128.0f);
    GLenum buffers[] = {GL_COLOR_ATTACHMENT0};
    //pglDrawBuffersEXT(1, buffers);
    renderer->clearScreen();
}

void glExtensionFramebufferObject::stopUsingFramebuffer(){
    require(initialized, ResourceException("StopUsingFramebuffer failed because the extension could not be loaded"));
    activeFramebuffers.pop_back();
    if(!activeFramebuffers.empty()){
        startUsingFramebuffer(activeFramebuffers.back(), false);
    } else {
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        glBindRenderbuffer(GL_RENDERBUFFER, 0);
        glViewport(0, 0, renderer->window().width(), renderer->window().height());
        renderer->projectionMatrix().pop();
        renderer->backgroundColor(savedClearColor);
    }
}
And my clear screen code:
void Draw2D::clearScreen(){
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
}
Based on some calculations and simulations I ran, I came up with two fairly similar solutions that seem to do the trick. One uses pre-multiplied colors in combination with a single (separate) blend function, the other one works without pre-multiplied colors, but requires changing the blend function a couple of times in the process.
Option 1: Single Blend Function, Pre-Multiplication
This approach works with a single blend function through the entire process. The blend function is:
glBlendFuncSeparate(GL_ONE, GL_ONE_MINUS_SRC_ALPHA,
GL_ONE_MINUS_DST_ALPHA, GL_ONE);
It requires pre-multiplied colors, which means that if your input color would normally be (r, g, b, a), you use (r * a, g * a, b * a, a) instead. You can perform the pre-multiplication in the fragment shader.
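A pre-multiplying fragment shader is essentially a one-liner; a minimal sketch, kept as a C++ string constant (the in/out variable names are assumptions):

const char* kPremultiplyFrag = R"GLSL(
#version 330 core
in vec4 vColor;
out vec4 fragColor;
void main()
{
    // Output pre-multiplied color: (r*a, g*a, b*a, a).
    fragColor = vec4(vColor.rgb * vColor.a, vColor.a);
}
)GLSL";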
The sequence is:
Set the blend function to (GL_ONE, GL_ONE_MINUS_SRC_ALPHA, GL_ONE_MINUS_DST_ALPHA, GL_ONE).
Set render target to FBO.
Render layers that you want rendered to FBO, using pre-multiplied colors.
Set render target to default framebuffer.
Render layers you want below FBO content, using pre-multiplied colors.
Render FBO attachment, without applying pre-multiplication since the colors in the FBO are already pre-multiplied.
Render layers you want on top of FBO content, using pre-multiplied colors.
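Condensed into code, the sequence above might look like this; fbo, fboTexture, and the drawLayer/drawFBOQuad helpers are hypothetical stand-ins for your own framebuffer and draw calls:

// One blend function for the entire process.
glBlendFuncSeparate(GL_ONE, GL_ONE_MINUS_SRC_ALPHA,
                    GL_ONE_MINUS_DST_ALPHA, GL_ONE);

glBindFramebuffer(GL_FRAMEBUFFER, fbo);   // render target: FBO
drawLayer(layer2, /*premultiply=*/true);
drawLayer(layer3, /*premultiply=*/true);

glBindFramebuffer(GL_FRAMEBUFFER, 0);     // render target: default framebuffer
drawLayer(layer1, /*premultiply=*/true);  // content below the FBO
drawFBOQuad(fboTexture);                  // FBO colors are already pre-multiplied
drawLayer(layer4, /*premultiply=*/true);  // content on top of the FBO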
Option 2: Switch Blend Functions, without Pre-Multiplication
This approach does not require pre-multiplication of the colors for any step. The downside is that the blend function has to be switched a few times during the process.
Set the blend function to (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE_MINUS_DST_ALPHA, GL_ONE).
Set render target to FBO.
Render layers that you want rendered to FBO.
Set render target to default framebuffer.
(optional) Set the blend function to (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA).
Render layers you want below FBO content.
Set the blend function to (GL_ONE, GL_ONE_MINUS_SRC_ALPHA).
Render FBO attachment.
Set the blend function to (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA).
Render layers you want on top of FBO content.
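And the same condensed sketch for Option 2, with the blend-function switches spelled out (same hypothetical helpers as above, no pre-multiplication anywhere):

glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA,
                    GL_ONE_MINUS_DST_ALPHA, GL_ONE);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);            // render target: FBO
drawLayer(layer2);
drawLayer(layer3);

glBindFramebuffer(GL_FRAMEBUFFER, 0);              // render target: default framebuffer
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // optional
drawLayer(layer1);                                 // content below the FBO
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);       // FBO colors come out pre-multiplied
drawFBOQuad(fboTexture);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
drawLayer(layer4);                                 // content on top of the FBO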
Explanation and Proof
I think Option 1 is nicer and possibly more efficient, because it does not require switching blend functions during rendering. So the detailed explanation below is for Option 1. The math for Option 2 is pretty much the same, though. The only real difference is that Option 2 uses GL_SRC_ALPHA for the first term of the blend function to perform the pre-multiplication where necessary, whereas Option 1 expects pre-multiplied colors to come into the blend function.
To illustrate that this works, let's go through an example where 3 layers are rendered. I'll do all the calculations for the r and a components. The calculations for g and b would be equivalent to the ones for r. We will render three layers in the following order:
(r1, a1) pre-multiplied: (r1 * a1, a1)
(r2, a2) pre-multiplied: (r2 * a2, a2)
(r3, a3) pre-multiplied: (r3 * a3, a3)
For the reference calculation, we blend these 3 layers with the standard GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA blend function. We don't need to track the resulting alpha here since DST_ALPHA is not used in the blend function, and we don't use the pre-multiplied colors yet:
after layer 1: (a1 * r1)
after layer 2: (a2 * r2 + (1.0 - a2) * a1 * r1)
after layer 3: (a3 * r3 + (1.0 - a3) * (a2 * r2 + (1.0 - a2) * a1 * r1)) =
(a3 * r3 + (1.0 - a3) * a2 * r2 + (1.0 - a3) * (1.0 - a2) * a1 * r1)
So the last term is our target for the final result. Now, we render layers 2 and 3 into an FBO. Later we will render layer 1 into the frame buffer, and then blend the FBO on top of it. The goal is to get the same result.
From now on, we will apply the blend function listed at the start, and use pre-multiplied colors. We will also need to calculate the alphas, since DST_ALPHA is used in the blend function. First, we render layers 2 and 3 into the FBO:
after layer 2: (a2 * r2, a2)
after layer 3: (a3 * r3 + (1.0 - a3) * a2 * r2, (1.0 - a2) * a3 + a2)
Now we render to the primary framebuffer. Since we don't care about the resulting alpha, I'll only calculate the r component again:
after layer 1: (a1 * r1)
Now we blend the content of the FBO on top of this. So what we calculated for "after layer 3" in the FBO is our source color/alpha, a1 * r1 is the destination color, and GL_ONE, GL_ONE_MINUS_SRC_ALPHA is still the blend function. The colors in the FBO are already pre-multiplied, so there will be no pre-multiplication in the shader while blending the FBO content:
srcR = a3 * r3 + (1.0 - a3) * a2 * r2
srcA = (1.0 - a2) * a3 + a2
dstR = a1 * r1
ONE * srcR + ONE_MINUS_SRC_ALPHA * dstR
= srcR + (1.0 - srcA) * dstR
= a3 * r3 + (1.0 - a3) * a2 * r2 + (1.0 - ((1.0 - a2) * a3 + a2)) * a1 * r1
= a3 * r3 + (1.0 - a3) * a2 * r2 + (1.0 - a3 + a2 * a3 - a2) * a1 * r1
= a3 * r3 + (1.0 - a3) * a2 * r2 + (1.0 - a3) * (1.0 - a2) * a1 * r1
Compare the last term with the reference value we calculated above for the standard blending case, and you can tell that it's exactly the same. (The deeper reason this works is that the "over" operator is associative for pre-multiplied colors: compositing layers 2 and 3 first and then putting the result over layer 1 gives the same answer as compositing all three in order.)
This answer to a similar question has some more background on the GL_ONE_MINUS_DST_ALPHA, GL_ONE part of the blend function: OpenGL ReadPixels (Screenshot) Alpha.
I achieved my goal. Now, let me share this information with the internet, since it exists nowhere else that I could find.
Create your framebuffer (glBindFramebuffer etc.)
Clear the framebuffer to 0, 0, 0, 0
Set your viewport properly. This is all basic stuff I took for granted in the question, but want to include here.
Now, render your scene to the framebuffer normally with glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA). Make sure the scene is sorted (just as you would normally.)
Now bind the included fragment shader. This will undo the damage dealt to the image color values via the blend function.
Render the texture to your screen with glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)
Go back to rendering as normal with a regular shader.
The code I included in the question remains basically untouched, except that I make sure to bind the shader listed below in my "preDraw" function, which is specific to my own little framework but is basically the "draw to screen" call for my rendered texture.
I call this the "unblend" shader.
#version 330 core

smooth in vec4 color;
smooth in vec2 uv;

uniform sampler2D tex; // renamed from "texture": in the core profile, texture() is the built-in sampling function

out vec4 colorResult;

void main(){
    vec4 textureColor = texture(tex, uv.st);
    // Undo the extra multiply-by-alpha that the blend function applied (see below).
    textureColor /= sqrt(textureColor.a);
    colorResult = textureColor * color;
}
Why do I do textureColor /= sqrt(textureColor.a)? Because the original color was computed like this:
resultR = r * a, resultG = g * a, resultB = b * a, resultA = a * a
(The alpha channel comes out squared because glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) is applied to the alpha channel as well, and the framebuffer was cleared to 0: a * a + (1 - a) * 0 = a * a.)
Now, if we want to undo that, we need to figure out what a was. The easiest way is to solve for "a" here:
resultA = a * a
If a is .25 when originally rendering we have:
resultA = .25 * .25
Or:
resultA = 0.0625
When the texture is being drawn to the screen, though, we don't have "a" anymore. But we do know resultA: it's the texture's alpha channel. So we can take sqrt(resultA) to get .25 back, and with that value we can divide to undo the multiply:
textureColor/=sqrt(textureColor.a);
And that fixes everything up, undoing the blending!
*EDIT: Well... kinda, at least. There is a slight inaccuracy; in this case I can show it by rendering over a clear color that is not identical to the framebuffer clear color. Some alpha information seems to be lost, probably in the RGB channels. This is still good enough for me, but I wanted to follow up with the screenshot showing the inaccuracy before signing out. If anyone has a solution, please provide it!
I have opened a bounty to bring this answer up to a canonical, 100% correct solution. Right now, if I render more partially transparent objects over the existing transparency, the transparency accumulates differently than on the right, lightening the final texture beyond what is shown there. Likewise, when rendering over a non-black background, the results of the existing solution clearly differ slightly, as demonstrated above.
A proper solution would be identical in all cases. My existing solution can only take the source alpha into account in the shader correction, not the destination blending.
In order to do this in a single pass you need support for separate color & alpha blending functions. First you render the texture which has foreground contribution stored in the alpha channel (i.e. 1=fully opaque, 0=fully transparent) and pre-multiplied source color value in the RGB color channel. To create this texture do the following operations:
1. clear the texture to RGBA = [0, 0, 0, 0]
2. set the color channel blending to src_color * src_alpha + dst_color * (1 - src_alpha)
3. set the alpha channel blending to src_alpha * (1 - dst_alpha) + dst_alpha
4. render the scene to the texture
To set the mode specified by 2) and 3), you can do: glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE_MINUS_DST_ALPHA, GL_ONE) and glBlendEquation(GL_FUNC_ADD)
Next render this texture to the scene by setting the color blending to:
src_color+dst_color*(1-src_alpha), i.e. glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA) and glBlendEquation(GL_FUNC_ADD)
Your problem is older than OpenGL, or personal computers, or indeed any living human. You're trying to blend two images together and make it look like they weren't blended at all. Printing presses face this exact problem. When ink is applied to paper, the result is a blend between the ink color and the paper color.
The solution is the same in paper as it is in OpenGL. You must alter your source image in order to control your final result. This is easy enough to figure out if you examine the math used to blend.
For each of R, G, B, the resultant color is (old * (1-opacity)) + (new * opacity). The basic scenario, and the one you'd like to emulate, is drawing a color directly onto the final back buffer at opacity A.
For example, opacity is 50% and your green channel has 0xFF. The result should be 0x7F on a black background (including unavoidable rounding error). You probably can't assume the background is black, so expect the green channel to vary between 0x7F and 0xFF.
You'd like to know how to emulate that result when you're really rendering to a texture and then rendering the texture to the back buffer. It turns out that the "vague suggestions to use 'premultiplied alpha'" were correct. Whereas your solution is to use a shader to unblend a previous blend operation in the last step, the standard solution is to multiply the colors of your original source texture by the alpha channel (aka premultiplied alpha). When compositing into the intermediate texture, the RGB channels are blended without multiplying by alpha. When rendering the texture to the back buffer, the RGB channels are again blended without multiplying by alpha. Thus you neatly avoid the multiple-multiplication problem.
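A minimal sketch of pre-multiplying a loaded RGBA8 image in place, which would typically happen once at load time:

#include <cstddef>
#include <cstdint>

void premultiplyAlpha(std::uint8_t* rgba, std::size_t pixelCount)
{
    for (std::size_t i = 0; i < pixelCount; ++i, rgba += 4) {
        unsigned a = rgba[3];
        rgba[0] = static_cast<std::uint8_t>(rgba[0] * a / 255);
        rgba[1] = static_cast<std::uint8_t>(rgba[1] * a / 255);
        rgba[2] = static_cast<std::uint8_t>(rgba[2] * a / 255);
    }
}

With fully pre-multiplied sources, both the render-to-texture pass and the final composite can then use plain glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA) for color and alpha alike.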
Please consult these resources for a better understanding. I and most others are more familiar with this technique in DirectX, so you may have to search for the appropriate OGL flags.
I am doing some really basic experiments around some 2D work in GL. I'm trying to draw a "picture frame" around a rectangular area. I'd like the frame to have a consistent gradient all the way around, so I'm constructing it from geometry that looks like four quads, one on each side of the frame, tapered in to make trapezoids that effectively form miter joins.
The vert coords are the same on the "inner" and "outer" rectangles, and the colors are the same for all inner and all outer as well, so I'd expect to see perfect blending at the edges.
But notice in the image below how there appears to be a "seam" in the corner of the join that's lighter than it should be.
I feel like I'm missing something conceptually in the math that explains this. Is this artifact somehow a result of the gradient slope? If I change all the colors to opaque blue (say), I get a perfect solid blue frame as expected.
Update: code added below; sorry, it's kind of verbose. I'm using fans of two triangles for the trapezoids instead of quads.
Thanks!
glClearColor(1.0, 1.0, 1.0, 1.0);
glClear(GL_COLOR_BUFFER_BIT);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
// Prep the color array. This is the same for all trapezoids.
// 4 verts * 4 components/color = 16 values.
GLfloat colors[16] = {
    0.0, 0.0, 1.0, 1.0, // two outer verts: blue
    0.0, 0.0, 1.0, 1.0,
    1.0, 1.0, 1.0, 1.0, // two inner verts: white
    1.0, 1.0, 1.0, 1.0
};
// Draw the trapezoidal frame areas. Each one is a single fan of two triangles.
// Fan of 2 triangles = 4 verts = 8 values
GLfloat vertices[8];
float insetOffset = 100;
float frameMaxDimension = 1000;
// Bottom
vertices[0] = 0;
vertices[1] = 0;
vertices[2] = frameMaxDimension;
vertices[3] = 0;
vertices[4] = frameMaxDimension - insetOffset;
vertices[5] = 0 + insetOffset;
vertices[6] = 0 + insetOffset;
vertices[7] = 0 + insetOffset;
glVertexPointer(2, GL_FLOAT , 0, vertices);
glColorPointer(4, GL_FLOAT, 0, colors);
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
// Left
vertices[0] = 0;
vertices[1] = frameMaxDimension;
vertices[2] = 0;
vertices[3] = 0;
vertices[4] = 0 + insetOffset;
vertices[5] = 0 + insetOffset;
vertices[6] = 0 + insetOffset;
vertices[7] = frameMaxDimension - insetOffset;
glVertexPointer(2, GL_FLOAT , 0, vertices);
glColorPointer(4, GL_FLOAT, 0, colors);
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
/* top & right would be as expected... */
glDisableClientState(GL_COLOR_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);
As #Newbie posted in the comments,
#quixoto: open your image in Paint program, click with fill tool somewhere in the seam, and you see it makes 90 degree angle line there... means theres only 1 color, no brighter anywhere in the "seam". its just an illusion.
True. While I'm not deeply familiar with this part of the math behind OpenGL, I believe it's the implicit result of how colors are interpolated between the triangle vertices: linearly (barycentrically) across each triangle rather than bilinearly across the quad, which is what makes the diagonal edge between the two triangles stand out.
So what to do to solve that? One possibility is to use a texture and just draw a textured quad (or several textured quads).
However, it should be easy to generate such a border in a fragment shader.
A nice solution using a GLSL shader...
Assume you're drawing a rectangle with the bottom-left corner having texture coords equal to (0,0), and the top-right corner with (1,1).
Then generating the "miter" procedurally in a fragment shader would look like this, if I'm correct:
varying vec2 coord;
uniform vec2 insetWidth; // width of the border in %, max would be 0.5

void main() {
    vec3 borderColor = vec3(0,0,1);
    vec3 backgroundColor = vec3(1,1,1);
    // x and y inset, 0..1, 1 means border, 0 means centre
    vec2 insets = max(-coord + insetWidth, vec2(0,0)) / insetWidth;
If I'm correct so far, then for every pixel, insets.x now holds a value in the range [0..1] determining how deep the point is into the border horizontally, and insets.y holds the analogous value for vertical depth.
The left vertical bar has insets.y == 0,
the bottom horizontal bar has insets.x == 0, and the lower-left corner has the pair (insets.x, insets.y) covering the whole 2D range from (0,0) to (1,1). See the pic for clarity:
Now we want a transformation which for a given (x,y) pair will give us ONE value [0..1] determining how to mix background and foreground color. 1 means 100% border, 0 means 0% border. And this can be done in several ways!
The function should obey the requirements:
0 if x==0 and y==0
1 if either x==1 or y==1
smooth values in between.
Assume such function:
float bias = max(insets.x,insets.y);
It satisfies those requirements. Actually, I'm pretty sure this function would give you the same "sharp" edge you have above. Try calculating it on paper for a selection of coordinates inside that bottom-left rectangle.
If we want to have a smooth, round miter there, we just need another function here. I think that something like this would be sufficient:
float bias = min( length(insets) , 1 );
The length() function here is just sqrt(insets.x*insets.x + insets.y*insets.y). What's important: this translates to "the deeper (in terms of Euclidean distance) a point is into the border region, the more visible the border should be", and the min() just caps the result at 1 (= 100%).
Note that our original function adheres to exactly the same definition - but the distance is calculated according to the Chessboard (Chebyshev) metric, not the Euclidean metric.
This implies that using, for example, Manhattan metric instead, you'd have a third possible miter shape! It would be defined like this:
float bias = min(insets.x+insets.y, 1);
I predict that this one would also have a visible "diagonal line", but the diagonal would be in the other direction ("\").
OK, so for the rest of the code, when we have the bias [0..1], we just need to mix the background and foreground color:
vec3 finalColor = mix(backgroundColor, borderColor, bias); // bias = 1 must yield 100% border color
gl_FragColor = vec4(finalColor, 1); // return the calculated RGB, and set alpha to 1
}
And that's it! Using GLSL with OpenGL makes life simpler. Hope that helps!
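For convenience, here is the whole shader assembled into one piece, kept as a C++ string constant and written against the same older GLSL the answer uses, with the smooth Euclidean miter chosen as the bias. Note one added assumption: the answer only derives the lower-left corner, so the mirrored term for the top and right borders is filled in here:

const char* kBorderFrag = R"GLSL(
varying vec2 coord;
uniform vec2 insetWidth; // border width in %, max would be 0.5

void main()
{
    vec3 borderColor = vec3(0.0, 0.0, 1.0);
    vec3 backgroundColor = vec3(1.0, 1.0, 1.0);
    // Depth into the border, 0 = centre, 1 = outer edge,
    // measured from both the min and the max sides of the quad.
    vec2 low  = max(-coord + insetWidth, vec2(0.0)) / insetWidth;
    vec2 high = max(coord - (vec2(1.0) - insetWidth), vec2(0.0)) / insetWidth;
    vec2 insets = max(low, high);
    float bias = min(length(insets), 1.0); // smooth, round miter
    gl_FragColor = vec4(mix(backgroundColor, borderColor, bias), 1.0);
}
)GLSL";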
I think that what you're seeing is a Mach band. Your visual system is very sensitive to changes in the 1st derivative of brightness. To get rid of this effect, you need to blur your intensities. If you plot intensity along a scanline which passes through this region, you'll see that there are two lines which meet at a sharp corner. To keep your visual system from highlighting this area, you'll need to round this join over. You can do this with either a post processing blur or by adding some more small triangles in the corner which ease the transition.
I've had that in the past, and it's very sensitive to geometry. For example, if you draw the triangles separately, in separate operations, instead of as a triangle fan, the problem is less severe (or at least it was in my case, which was similar but slightly different).
One thing I also tried was drawing the triangles separately, slightly overlapping one another, with the right composition mode (or OpenGL blending) so you don't get the effect. It worked, but I didn't end up using it because it was only a tiny part of the final product and not worth the trouble.
I'm sorry that I have no idea what the root cause of this effect is, however. :(