Drawing a textured quad pixel-exact in 2D with OpenGL 3.3? - c++

How can I scroll a textured quad "pixel-exact" across the screen, using float positions with GL_LINEAR filtering?
Whenever I try this, I see a hard pixel change during the otherwise very smooth movement as soon as a subpixel coordinate becomes greater than or equal to 0.5. It looks like really ugly stuttering.
I think the problem is that after scrolling by 0.5 subpixels the quad is misaligned with its texture coordinates just enough that OpenGL now samples a different neighbouring texel, which is interpolated anew, so the newly drawn subpixel no longer lines up with the subpixel rendered before.
Could the solution be to realign the texture coordinates once the subpixel position exceeds 0.5f? Can somebody help me out with a function that deals with this problem and calculates the correct UV coordinates? I think the UV coordinates should be shifted to a new position whenever the quad's subpixel position is >= 0.5f.
Here is a link to a screenshot that shows exactly this hard pixel jump in the x-direction (left) at x = 0.5f in the last frame (zoom the screenshot with Ctrl+mousewheel):
http://i.stack.imgur.com/n0GVA.png
Here are the relevant code fragments:
// Maps a texel index into [0,1] texture space; the texture used here is 64 texels wide/high.
// spriteX and spriteY are currently unused.
float calcTexPos(float fTexPos, float spriteX, float spriteY) { return fTexPos / 64.0f; }
void addSpriteToLocalVerticeArray(Vertex *ptrVertexArrayLocal, Sprite *sprite, int *ptrSpriteVerticeCounter, float fCamX, float fCamY)
{
    // Translate the sprite into camera space.
    Sprite oSpriteTranslated = *sprite;
    oSpriteTranslated.x = sprite->x - fCamX;
    oSpriteTranslated.y = sprite->y - fCamY;

    const float x = oSpriteTranslated.x;
    const float y = oSpriteTranslated.y;
    const float w = oSpriteTranslated.width;
    const float h = oSpriteTranslated.height;

    // Two triangles per sprite; texel indices 1..33 (u) and 31..62 (v) select the sprite's sub-region of the 64-texel texture.
    ptrVertexArrayLocal[(*ptrSpriteVerticeCounter)++] = Vertex(x,     y,     0.0f, calcTexPos(1,  x, y), calcTexPos(62, x, y)); // Left Top
    ptrVertexArrayLocal[(*ptrSpriteVerticeCounter)++] = Vertex(x,     y - h, 0.0f, calcTexPos(1,  x, y), calcTexPos(31, x, y)); // Left Bottom
    ptrVertexArrayLocal[(*ptrSpriteVerticeCounter)++] = Vertex(x + w, y - h, 0.0f, calcTexPos(33, x, y), calcTexPos(31, x, y)); // Right Bottom
    ptrVertexArrayLocal[(*ptrSpriteVerticeCounter)++] = Vertex(x + w, y - h, 0.0f, calcTexPos(33, x, y), calcTexPos(31, x, y)); // Right Bottom
    ptrVertexArrayLocal[(*ptrSpriteVerticeCounter)++] = Vertex(x + w, y,     0.0f, calcTexPos(33, x, y), calcTexPos(62, x, y)); // Right Top
    ptrVertexArrayLocal[(*ptrSpriteVerticeCounter)++] = Vertex(x,     y,     0.0f, calcTexPos(1,  x, y), calcTexPos(62, x, y)); // Left Top
}
What I have tried that didn't work:
Adding an offset of 0.5f to my quad coordinates. That only moves the problem: the hard subpixel change then appears at every full 1.0f subpixel position.
Changing the calcTexPos function to the texel-center mapping (2.0f*fTexPos+1.0f)/(2.0f*64.0f) (see the sketch after this list).
Checking whether MSAA is activated in the driver control panel.
Adding transparent borders to the texture instead of duplicated borders.
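For reference, here is that texel-center mapping written out as a small helper. It is only a sketch of the formula quoted above, assuming the 64-texel texture size used by calcTexPos; the function name is mine, not from the original code:
// Sketch: map a texel index to the UV coordinate of that texel's center,
// assuming the texture is texSize texels wide/high (64 in the code above).
// Sampling at texel centers keeps GL_LINEAR from blending in a neighbouring texel row/column.
float texelCenterUV(float texelIndex, float texSize = 64.0f)
{
    return (2.0f * texelIndex + 1.0f) / (2.0f * texSize);
}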

Related

Rotating 2D camera to space ship's heading in OpenGL (OpenTK)

The game is a top-down 2D space ship game -- think of "Asteroids."
Box2Dx is the physics engine and I extended the included DebugDraw, based on OpenTK, to draw additional game objects. Moving the camera so it's always centered on the player's ship and zooming in and out work perfectly. However, I really need the camera to rotate along with the ship so it's always facing in the same direction. That is, the ship will appear to be frozen in the center of the screen and the rest of the game world rotates around it as it turns.
I've tried adapting code samples, but nothing works. The best I've been able to achieve is a skewed and cut-off rendering.
Render loop:
// Clear.
Gl.glClear(Gl.GL_COLOR_BUFFER_BIT | Gl.GL_DEPTH_BUFFER_BIT);
// other rendering omitted (planets, ships, etc.)
this.OpenGlControl.Draw();
Update view -- centers on ship and should rotate to match its angle. For now, I'm just trying to rotate it by an arbitrary angle for a proof of concept, but no dice:
public void RefreshView()
{
int width = this.OpenGlControl.Width;
int height = this.OpenGlControl.Height;
Gl.glViewport(0, 0, width, height);
Gl.glMatrixMode(Gl.GL_PROJECTION);
Gl.glLoadIdentity();
float ratio = (float)width / (float)height;
Vec2 extents = new Vec2(ratio * 25.0f, 25.0f);
extents *= viewZoom;
// rotate the view
var shipAngle = 180.0f; // just a test angle for proof of concept
Gl.glRotatef(shipAngle, 0, 0, 0);
Vec2 lower = this.viewCenter - extents;
Vec2 upper = this.viewCenter + extents;
// L/R/B/T
Glu.gluOrtho2D(lower.X, upper.X, lower.Y, upper.Y);
Gl.glMatrixMode(Gl.GL_MODELVIEW);
}
Now, I'm obviously doing this wrong. Degrees of 0 and 180 will keep it right-side-up or flip it, but any other degree will actually zoom it in/out or result in only blackness, nothing rendered. Below are examples:
If ship angle is 0.0f, then game world is as expected:
Degree of 180.0f flips it vertically... seems promising:
Degree of 45 zooms out and doesn't rotate at all... that's odd:
Degree of 90 returns all black. In case you've never seen black:
Please help!
First of all, arguments 2-4 of glRotatef are the rotation axis, so please state them correctly, as @pingul already pointed out.
More importantly, the rotation is currently applied to the projection matrix.
// L/R/B/T
Glu.gluOrtho2D(lower.X, upper.X, lower.Y, upper.Y);
In this line your orthographic 2D projection matrix is multiplied onto the rotation you issued just before it, so the rotation ends up baked into your projection matrix, which I believe is not what you want.
The solution is to move your rotation call to after the model-view matrix mode has been selected, as below:
// L/R/B/T
Glu.gluOrtho2D(lower.X, upper.X, lower.Y, upper.Y);
Gl.glMatrixMode(Gl.GL_MODELVIEW);
// rotate the view
var shipAngle = 180.0f; // just a test angle for proof of concept
Gl.glRotatef(shipAngle, 0.0f, 0.0f, 1.0f);
And now your rotation will be applied to the model-view matrix stack (I believe this is the effect you want). Keep in mind that glRotatef() creates a rotation matrix and multiplies it with the matrix at the top of the currently selected stack.
I would also strongly suggest you move away from the fixed-function pipeline if possible, as suggested by @BDL.
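Putting both fixes together, the corrected RefreshView might look roughly like the sketch below. It is written with plain C-style OpenGL/GLU calls instead of the Gl./Glu. wrappers, and it adds one detail beyond the snippet above: translating to the view centre before rotating, so the world turns around the ship rather than around the world origin. viewZoom and viewCenter are assumed to be the same fields as in the question:
#include <GL/gl.h>
#include <GL/glu.h>

void RefreshView(int width, int height, float viewZoom,
                 float viewCenterX, float viewCenterY, float shipAngleDegrees)
{
    glViewport(0, 0, width, height);

    // Projection stays a pure orthographic matrix.
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    float ratio = (float)width / (float)height;
    float extX = ratio * 25.0f * viewZoom;
    float extY = 25.0f * viewZoom;
    // L/R/B/T
    gluOrtho2D(viewCenterX - extX, viewCenterX + extX,
               viewCenterY - extY, viewCenterY + extY);

    // Rotation goes on the model-view stack, around the z axis and about the view centre.
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(viewCenterX, viewCenterY, 0.0f);
    glRotatef(shipAngleDegrees, 0.0f, 0.0f, 1.0f);
    glTranslatef(-viewCenterX, -viewCenterY, 0.0f);
}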

OpenGL: Generating ellipse in code [duplicate]

This question already has an answer here:
OpenGL stretched shapes - aspect ratio
I have been trying to generate an ellipse using OpenGL and I have a feeling I have got something very wrong. I am using ellipse-generating code, but for simplicity I have set the lengths of the major and minor axes equal. This should give me a circle, but somehow that is not what OpenGL renders, and I am not sure what is wrong.
So the code is as follows:
glPushAttrib(GL_CURRENT_BIT);
glColor3f(1.0f, 0.0f, 0.0f);
glLineWidth(2.0);
// Draw center
glBegin(GL_POINTS);
glVertex2d(0, 0);
glEnd();
glBegin(GL_LINE_LOOP);
// This should generate a circle
for (GLfloat i = 0; i < 360; i++)
{
float x = cos(i*M_PI/180.f) * 0.5; // keep the axes radius same
float y = sin(i*M_PI/180.f) * 0.5;
glVertex2f(x, y);
}
glEnd();
glPopAttrib();
This should generate a circle as far as I can tell. However, I get something like the attached image, which is not a circle. I am not sure what I am doing wrong.
It is a circle in clip space. Note that the horizontal extent is half the screen's width and the vertical extent is half the screen's height. The viewport transformation that maps clip space (-1 to 1 on both axes) to screen space basically performs a scaling and translation, which causes the deformation of the circle.
To prevent this from happening, you need to set up an appropriate projection transform, e.g. with glOrtho.
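A minimal sketch of such a projection setup, assuming the window dimensions are available as width and height (those names are mine, not from the question):
// Scale the horizontal clip-space extent by the aspect ratio so one unit spans
// the same number of pixels on both axes; the circle then stays round.
float aspect = (float)width / (float)height;
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-aspect, aspect, -1.0, 1.0, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();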

Border around texture, OpenGL

This is the code I use to draw a rectangle in my program:
glBegin(GL_QUADS);
glTexCoord2f(0.0f, maxTexCoordHeight); glVertex2i(pos.x, pos.y + height);
glTexCoord2f(0.0f, 0.0f); glVertex2i(pos.x, pos.y);
glTexCoord2f(maxTexCoordWidth, 0.0f); glVertex2i(pos.x + width, pos.y);
glTexCoord2f(maxTexCoordWidth, maxTexCoordHeight); glVertex2i(pos.x + width, pos.y + height);
glEnd();
It draws just a simple rectangle with the specified texture, e.g. like this:
I'd like to ask whether it's possible in OpenGL to achieve a border effect like this:
As you can see, inside the tile there is just a plain blue background, which could be handled separately as an automatically resized texture. That part is easy with the code snippet I gave; the problem is the border.
If the border were a single color, I could draw an empty, unfilled rectangle with GL_LINES around my texture, but it is not.
Also, if the tiles always had a fixed size, I could prepare a texture to match, but they HAVE TO be easily resizable without changing the bitmap file I use as the texture.
So if it's not possible with basic OpenGL functions, what approaches to achieving this effect would be the most efficient and/or easy?
EDIT: It has to be 2D.
This is a classic problem in OpenGL GUIs and is often solved with the 9-cell pattern (also called 9-slicing). You add the effect to the original image (or define it through other OpenGL parameters) and split the rendered quad into nine quads: three rows and three columns.
You then keep the height of the top and bottom rows fixed, and likewise the width of the left and right columns. The center quad is scaled so that the whole thing fills the rectangle you want. Only the border parts of the texture are mapped to the quads forming the outer cells, while the center of the texture is mapped to the center quad.
Related to what was said in the comments, you could also use actual 3D effects by making the quad 3D. No one forces you to use a perspective projection in that case; you can stay with an orthographic projection (2D mode). OpenGL does 3D calculations internally anyway.
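A rough sketch of the 9-cell idea in the same immediate-mode style as the question's code. It assumes the texture's border occupies a fixed fraction texBorder of the image and that the full 0..1 texture range is used (rather than maxTexCoordWidth/Height); both names are illustrative:
// Draws one textured quad per cell of the 3x3 grid. 'border' is the on-screen border
// thickness in pixels, 'texBorder' the border thickness in texture space (0..1).
void draw9Slice(int x, int y, int w, int h, float border, float texBorder)
{
    // Cell boundaries on screen and in texture space.
    float sx[4] = { (float)x, x + border, x + w - border, (float)(x + w) };
    float sy[4] = { (float)y, y + border, y + h - border, (float)(y + h) };
    float tx[4] = { 0.0f, texBorder, 1.0f - texBorder, 1.0f };
    float ty[4] = { 0.0f, texBorder, 1.0f - texBorder, 1.0f };

    glBegin(GL_QUADS);
    for (int row = 0; row < 3; ++row) {
        for (int col = 0; col < 3; ++col) {
            glTexCoord2f(tx[col],     ty[row]);     glVertex2f(sx[col],     sy[row]);
            glTexCoord2f(tx[col + 1], ty[row]);     glVertex2f(sx[col + 1], sy[row]);
            glTexCoord2f(tx[col + 1], ty[row + 1]); glVertex2f(sx[col + 1], sy[row + 1]);
            glTexCoord2f(tx[col],     ty[row + 1]); glVertex2f(sx[col],     sy[row + 1]);
        }
    }
    glEnd();
}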
Aside from Jonas's answer, which is excellent, I want to add two more options.
The first one is to just make the texture look like your desired square. No fancy code necessary if you can do it in photoshop ;).
The second one is to complicate your drawing code a bit. If you look at your image you can see that every "side-slope" of your square can be drawn with two triangles. You can make your code draw 10 triangles instead of one square and use a different color for each group of two triangles:
draw() {
    GLfloat i = <your_inset_here>;
    glBegin(GL_TRIANGLES);
    // top border part, top left triangle
    glColor3f(<color_0>);
    glVertex2f(pos.x, pos.y);
    glVertex2f(pos.x + w, pos.y);
    glVertex2f(pos.x + i, pos.y + i);
    // top border part, bottom right triangle
    glVertex2f(pos.x + w, pos.y);
    glVertex2f(pos.x + w - i, pos.y + i);
    glVertex2f(pos.x + i, pos.y + i);
    // repeat this process with the other coordinates for the other three borders
    // draw the middle square using {(pos.x+i,pos.y+i),(pos.x+w-i,pos.y+i),(pos.x+w-i,pos.y+h-i),(pos.x+i,pos.y+h-i)} as coordinates
    glEnd();
}
You can further improve this by writing a function that draws an irregularly shaped quad from the given coordinates and a color, and calling that function five times.

comparing rotated coordinates

I'm having a little trouble trying to compare a rotated 2D quad's coordinates to rotated x and y coordinates. I'm trying to determine whether the mouse was clicked inside the quad.
1) The Rot objects are instances of this class (note: operator << is overloaded to serve as the coordinate-rotation function):
class Vector{
private:
std::vector <float> Vertices;
public:
Vector(float, float);
float GetVertice(unsigned int);
void SetVertice(unsigned int, float);
std::vector<float> operator <<(double);
};
Vector::Vector(float X,float Y){
Vertices.push_back(X);
Vertices.push_back(Y);
}
float Vector::GetVertice(unsigned int Index){
return Vertices.at(Index);
}
void Vector::SetVertice(unsigned int Index,float NewVertice){
Vertices.at(Index) = NewVertice;
}
//Return rotated coords:D
std::vector <float> Vector::operator <<(double Angle){
std::vector<float> Temp;
Temp.push_back(Vertices.at(0) * cos(Angle) - Vertices.at(1) * sin(Angle));
Temp.push_back(Vertices.at(0) * sin(Angle) + Vertices.at(1) * cos(Angle));
return Temp;
}
2) Comparison and rotation of the coordinates (the new version):
Vector Rot1(x,y),Rot3(x,y);
double Angle;
std::vector <float> result1,result3;
Rot3.SetVertice(0,NewQuads.at(Index).GetXpos() + NewQuads.at(Index).GetWidth());
Rot3.SetVertice(1,NewQuads.at(Index).GetYpos() + NewQuads.at(Index).GetHeight());
Angle = NewQuads.at(Index).GetRotation();
result1 = Rot1 << Angle; // Rotate the mouse x and y
result3 = Rot3 << Angle; // Rotate the Quad x and y
//.at(0) = x and .at(1)=y
if(result1.at(0) >= result3.at(0) - NewQuads.at(Index).GetWidth() && result1.at(0) <= result3.at(0) ){
if(result1.at(1) >= result3.at(1) - NewQuads.at(Index).GetHeight() && result1.at(1) <= result3.at(1) ){
When I run this, it works perfectly at an angle of 0, but as soon as the quad is rotated, it fails. By failing I mean the activation area seems to just disappear.
Am I rotating the coordinates correctly, or is it the comparison? If it's the comparison, how would you do it properly? I have tried changing the ifs, but without any luck...
Edit: here is the drawing of the quad (happens before the testing):
void Quad::Render()
{
if(!CheckIfOutOfScreen()){
glPushMatrix();
glLoadIdentity();
glTranslatef(Xpos ,Ypos ,0.f);
glRotatef(Rotation,0.f,0.f,1.f); // same rotation is used for the testing later...
glBegin(GL_QUADS);
glVertex2f(Zwidth,Zheight);
glVertex2f(Width,Zheight);
glVertex2f(Width,Height);
glVertex2f(Zwidth,Height);
glEnd();
if(State != NOT_ACTIVE)
RenderShapeTools();
glPopMatrix();
}
}
Basically, I'm trying to test whether the mouse was clicked inside this quad:
Image
There is more than one way to achieve what you want, but from the image you posted I assume you want to draw to a surface the same size as your screen (or window) using only 2D graphics.
As you know in 3D graphics we talk about 3 coordinate references. The first is the coordinate reference of the object or model to be drawn, the second is the coordinate reference of the camera or view and the third is the coordinate reference of the screen.
In OpenGL the first two coordinate references are established through the MODELVIEW matrix and the third is achieved by the PROJECTION matrix and the viewport transformation.
In your case you want to rotate a quad and place it somewhere on the screen. Your quad has its own model coordinates. Let's assume that for this specific 2D quad the origin is at the center of the quad and it has dimensions of 5 by 5. Also let's assume that, looking at the center of the quad, the X axis points to the RIGHT, the Y axis points UP and the Z axis points towards the viewer.
The unrotated coordinates of the quad will be (from bottom left clockwise): (-2.5,-2.5,0), (-2.5,2.5,0), (2.5,2.5,0), (2.5,-2.5,0)
Now we want to have a camera and projection matrices and viewport so to simulate a 2D surface with known dimensions.
//Assume WinW contains the window width and WinH contains the windows height
glViewport(0,0,WinW,WinH);//Set the viewport to the whole window
glMatrixMode (GL_PROJECTION);
glLoadIdentity ();
glOrtho (0, WinW, 0, WinH, 0, 1);//Set the projection matrix to a 2D orthographic projection with the origin at the bottom left
glMatrixMode (GL_MODELVIEW);
glLoadIdentity ();//Set the camera matrix to be the Identity matrix
You are now ready to draw your quad on this 2D surface with dimensions WinW by WinH. In this context, if you just draw your quad using its current vertices you will have the quad drawn with its center at the bottom left of the window, each side measuring 5 pixels, so you will actually see only a quarter of the quad. If you want to rotate and move it you will do something like this:
//Prepare matrices as shown above
//Viewport coordinates range from bottom left (0,0) to top right (WinW,WinH)
float dX = CenterOfQuadInViewportCoordinatesX, dY = CenterOfQuadInViewportCoordinatesY;
float rotA = QuadRotationAngleAroundZAxisInDegrees;
float verticesX[4] = {-2.5,-2.5,2.5,2.5};
float verticesY[4] = {-2.5,2.5,2.5,-2.5};
//Remember that rotate is done first and translation second
glTranslatef(dX,dY,0);//Move the quad to the desired location in the viewport
glRotate(rotA, 0,0,1);//Rotate the quad around it's origin
glBegin(GL_QUADS);
glVertex2f(verticesX[0], verticesY[0]);
glVertex2f(verticesX[1], veriticesY[1]);
glVertex2f(verticesX[2], veriticesY[2]);
glVertex2f(verticesX[3], veriticesY[3]);
glEnd();
Now you want to know whether the click of the mouse was within the rendered quad.
Whereas the viewport coordinates start from the bottom left the window coordinates start from the top left. So when you get the mouse coordinates you have to translate them to viewport coordinates in the following way:
float mouseViewportX = mouseX, mouseViewportY = WinH - mouseY - 1;
Once you have the mouse location in viewport coordinates you need to transform it to model coordinates in the following way (Please double check the calculations since I generally use my own matrix library for that and don't calculate it by hand):
//Translate the mouse location to model coordinates reference
mouseViewportX -= dX, mouseViewportY -= dY;
//Unrotate the mouse location
float invRotARad = -rotA*DEG_TO_RAD;
float sinRA = sin(invRotARad), cosRA = cos(invRotARad);
float mouseInModelX = cosRA*mouseViewportX - sinRA*mouseViewportY;
float mouseInModelY = sinRA*mouseViewportX + cosRA*mouseViewportY;
And now you can finally check if the mouse falls within the quad - as you can see this is done in quad coordinates:
bool mouseInQuad = mouseInModelX > verticesX[0] && mouseInModelX < verticesX[2] &&
mouseInModelY > verticesY[0] && mouseInModelY < verticesY[1];
Hope I didn't make too many mistakes and this puts you on the right track. If you want to deal with more complex cases and 3D, then you should have a look at gluUnProject (maybe you will want to implement your own), and for even more complex scenes you may need to use stencil or depth buffers.
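Put together, the whole hit test can be wrapped in a small helper like the sketch below, which just follows the math above; the parameter names are mine, and the quad is assumed to be centred on its own origin as in the example:
#include <cmath>

// Returns true if a mouse click at window coordinates (mouseX, mouseY) falls inside a quad
// of size quadW x quadH that was translated to (dX, dY) and rotated by rotA degrees around
// the z axis, under the 2D viewport/projection setup described above.
bool mouseInRotatedQuad(float mouseX, float mouseY, int winH,
                        float dX, float dY, float rotA,
                        float quadW, float quadH)
{
    const float DEG_TO_RAD = 3.14159265f / 180.0f;

    // Window coordinates (top-left origin) -> viewport coordinates (bottom-left origin).
    float vx = mouseX;
    float vy = winH - mouseY - 1;

    // Undo the quad's translation, then undo its rotation (rotate by -rotA).
    vx -= dX;
    vy -= dY;
    float s = std::sin(-rotA * DEG_TO_RAD);
    float c = std::cos(-rotA * DEG_TO_RAD);
    float mx = c * vx - s * vy;
    float my = s * vx + c * vy;

    // Axis-aligned containment test in the quad's own model coordinates.
    return mx > -quadW * 0.5f && mx < quadW * 0.5f &&
           my > -quadH * 0.5f && my < quadH * 0.5f;
}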

Gradient "miter" in OpenGL shows seams at the join

I am doing some really basic experiments around some 2D work in GL. I'm trying to draw a "picture frame" around a rectangular area. I'd like the frame to have a consistent gradient all the way around, so I'm constructing it from geometry that looks like four quads, one on each side of the frame, tapered in to make trapezoids that effectively have miter joins.
The vert coords are the same on the "inner" and "outer" rectangles, and the colors are the same for all inner and all outer as well, so I'd expect to see perfect blending at the edges.
But notice in the image below how there appears to be a "seam" in the corner of the join that's lighter than it should be.
I feel like I'm missing something conceptually in the math that explains this. Is this artifact somehow a result of the gradient slope? If I change all the colors to opaque blue (say), I get a perfect solid blue frame as expected.
Update: Code added below. Sorry kinda verbose. Using 2-triangle fans for the trapezoids instead of quads.
Thanks!
glClearColor(1.0, 1.0, 1.0, 1.0);
glClear(GL_COLOR_BUFFER_BIT);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
// Prep the color array. This is the same for all trapezoids.
// 4 verts * 4 components/color = 16 values.
GLfloat colors[16] = {
    0.0, 0.0, 1.0, 1.0,   // outer vertex: blue
    0.0, 0.0, 1.0, 1.0,   // outer vertex: blue
    1.0, 1.0, 1.0, 1.0,   // inner vertex: white
    1.0, 1.0, 1.0, 1.0    // inner vertex: white
};
// Draw the trapezoidal frame areas. Each one is two triangle fans.
// Fan of 2 triangles = 4 verts = 8 values
GLfloat vertices[8];
float insetOffset = 100;
float frameMaxDimension = 1000;
// Bottom
vertices[0] = 0;
vertices[1] = 0;
vertices[2] = frameMaxDimension;
vertices[3] = 0;
vertices[4] = frameMaxDimension - insetOffset;
vertices[5] = 0 + insetOffset;
vertices[6] = 0 + insetOffset;
vertices[7] = 0 + insetOffset;
glVertexPointer(2, GL_FLOAT , 0, vertices);
glColorPointer(4, GL_FLOAT, 0, colors);
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
// Left
vertices[0] = 0;
vertices[1] = frameMaxDimension;
vertices[2] = 0;
vertices[3] = 0;
vertices[4] = 0 + insetOffset;
vertices[5] = 0 + insetOffset;
vertices[6] = 0 + insetOffset;
vertices[7] = frameMaxDimension - insetOffset;
glVertexPointer(2, GL_FLOAT , 0, vertices);
glColorPointer(4, GL_FLOAT, 0, colors);
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
/* top & right would be as expected... */
glDisableClientState(GL_COLOR_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);
As @Newbie posted in the comments,
#quixoto: open your image in Paint program, click with fill tool somewhere in the seam, and you see it makes 90 degree angle line there... means theres only 1 color, no brighter anywhere in the "seam". its just an illusion.
True. While I'm not familiar with this part of math under OpenGL, I believe this is the implicit result of how the interpolation of colors between the triangle vertices is performed... I'm positive that it's called "Bilinear interpolation".
So what to do to solve that? One possibility is to use a texture and just draw a textured quad (or several textured quads).
However, it should be easy to generate such a border in a fragment shader.
A nice solution using a GLSL shader...
Assume you're drawing a rectangle with the bottom-left corner having texture coords equal to (0,0), and the top-right corner with (1,1).
Then generating the "miter" procedurally in a fragment shader would look like this, if I'm correct:
varying vec2 coord;
uniform vec2 insetWidth; // width of the border in %, max would be 0.5
void main() {
vec3 borderColor = vec3(0,0,1);
vec3 backgroundColor = vec3(1,1,1);
// x and y inset, 0..1, 1 means border, 0 means centre
vec2 insets = max(-coord + insetWidth, vec2(0,0)) / insetWidth;
If I'm correct so far, then now for every pixel the value of insets.x has a value in the range [0..1]
determining how deep a given point is into the border horizontally,
and insets.y has the similar value for vertical depth.
The left vertical bar has insets.y == 0,
the bottom horizontal bar has insets.x == 0, and the lower-left corner has the pair (insets.x, insets.y) covering the whole 2D range from (0,0) to (1,1). See the pic for clarity:
Now we want a transformation which for a given (x,y) pair will give us ONE value [0..1] determining how to mix background and foreground color. 1 means 100% border, 0 means 0% border. And this can be done in several ways!
The function should obey the requirements:
0 if x==0 and y==0
1 if either x==1 or y==1
smooth values in between.
Assume such function:
float bias = max(insets.x,insets.y);
It satisfies those requirements. Actually, I'm pretty sure that this function would give you the same "sharp" edge as you have above. Try to calculate it on a paper for a selection of coordinates inside that bottom-left rectangle.
If we want to have a smooth, round miter there, we just need another function here. I think that something like this would be sufficient:
float bias = min( length(insets) , 1 );
The length() function here is just sqrt(insets.x*insets.x + insets.y*insets.y). What's important: This translates to: "the farther away (in terms of Euclidean distance) we are from the border, the more visible the border should be", and the min() is just to make the result not greater than 1 (= 100%).
Note that our original function adheres to exactly the same definition - but the distance is calculated according to the Chessboard (Chebyshev) metric, not the Euclidean metric.
This implies that using, for example, Manhattan metric instead, you'd have a third possible miter shape! It would be defined like this:
float bias = min(insets.x+insets.y, 1);
I predict that this one would also have a visible "diagonal line", but the diagonal would be in the other direction ("\").
OK, so for the rest of the code, when we have the bias [0..1], we just need to mix the background and foreground color:
vec3 finalColor = mix(backgroundColor, borderColor, bias);
gl_FragColor = vec4(finalColor, 1); // return the calculated RGB, and set alpha to 1
}
And that's it! Using GLSL with OpenGL makes life simpler. Hope that helps!
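For convenience, here are the fragments above assembled into one complete shader, embedded as a C++ string literal the way it might be handed to glShaderSource. This is only a sketch using the max-based (sharp-cornered) bias; the matching vertex shader that supplies coord is assumed and not shown:
// Fragment shader assembled from the pieces above (Chebyshev-style, sharp corner).
const char* frameFragmentSource = R"GLSL(
    varying vec2 coord;
    uniform vec2 insetWidth;   // width of the border in texture space, max 0.5

    void main() {
        vec3 borderColor = vec3(0.0, 0.0, 1.0);
        vec3 backgroundColor = vec3(1.0, 1.0, 1.0);

        // x and y inset, 0..1: 1 means fully in the border, 0 means centre.
        vec2 insets = max(-coord + insetWidth, vec2(0.0, 0.0)) / insetWidth;

        // Swap this line for min(length(insets), 1.0) to get a rounded corner instead.
        float bias = max(insets.x, insets.y);

        vec3 finalColor = mix(backgroundColor, borderColor, bias);
        gl_FragColor = vec4(finalColor, 1.0);   // calculated RGB, alpha = 1
    }
)GLSL";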
I think that what you're seeing is a Mach band. Your visual system is very sensitive to changes in the 1st derivative of brightness. To get rid of this effect, you need to blur your intensities. If you plot intensity along a scanline which passes through this region, you'll see that there are two lines which meet at a sharp corner. To keep your visual system from highlighting this area, you'll need to round this join over. You can do this with either a post processing blur or by adding some more small triangles in the corner which ease the transition.
I had that in the past, and it's very sensitive to geometry. For example, if you draw them separately as triangles, in separate operations, instead of as a triangle fan, the problem is less severe (or, at least, it was in my case, which was similar but slightly different).
One thing I also tried was drawing the triangles separately, slightly overlapping one another, with the right composition mode (or OpenGL blending) so you don't get the effect. It worked, but I didn't end up using it because it was only a tiny part of the final product and not worth the trouble.
I'm sorry that I have no idea what the root cause of this effect is, however :(