DirectX stretched pixels - c++

I am just starting off with DirectX and have run into a problem with stretched pixels. When I create the window and all the DirectX goodies, I use two variables, width and height. For most testing I have them set to 800x600. When I draw a square on the screen, it looks stretched. However, when I set the resolution to 600x600, it looks normal and square. This led me to conclude that it was some sort of pixel stretching. In DirectX, how do I fix this and make the pixels square?

float aspectRatio = (float)bufferWidth / (float)bufferHeight; // cast so an integer width/height doesn't truncate to 1
That is completely normal. Once you project into normalized screen space, coordinates go from -1.0 to 1.0 (left to right) and -1.0 to 1.0 (bottom to top). You can see that both directions on the screen have the same range of values. This means that if you draw a shape with equal width and height in those coordinates, on screen it will be aspectRatio times wider than it is tall. This explains the correct behaviour at 600x600 (aspect ratio 1.0) but the stretching at 800x600 (aspect ratio 1.33).
If you really want a square, simply divide the width by your aspect ratio, which in your case is 800/600 (1.33), to get a polygon that appears on screen with equal width and height.
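A minimal sketch of that correction (the quad layout and the names bufferWidth, bufferHeight, quadVerts are illustrative, not from the original code):
// Compute the aspect ratio in floating point so 800/600 doesn't truncate to 1.
float aspectRatio = (float)bufferWidth / (float)bufferHeight;   // 800/600 = 1.33
// A quad meant to appear square on screen, in clip-space coordinates (-1..1).
// Dividing the x extent by the aspect ratio undoes the horizontal stretch.
float halfSize = 0.25f;
float quadVerts[4][2] = {
    { -halfSize / aspectRatio, -halfSize },   // bottom-left
    {  halfSize / aspectRatio, -halfSize },   // bottom-right
    {  halfSize / aspectRatio,  halfSize },   // top-right
    { -halfSize / aspectRatio,  halfSize },   // top-left
};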

You may want to set your viewport according to your window size.
For example:
// Set up the viewport for rendering.
D3D11_VIEWPORT viewport;
viewport.Width = (float)screenWidth;
viewport.Height = (float)screenHeight;
viewport.MinDepth = 0.0f;
viewport.MaxDepth = 1.0f;
viewport.TopLeftX = 0.0f;
viewport.TopLeftY = 0.0f;
// Set the viewport on the rasterizer stage.
m_deviceContext->RSSetViewports(1, &viewport);
You can check the Rastertek tutorials for more details.

Related

opengl: avoid clipping in perspective mode?

It looks like the solution is to change the projection matrix on the fly? Let me do some research to see how to do it correctly.
My scenario is this:
Say I created a 3D box in a window under Windows 7 with perspective mode enabled. From the user's point of view, when they move (rotate/translate) the box so that part of it leaves the window, it should be clipped (partly hidden); that's correct. But when the box is moved back inside the window, it should always be shown completely (not clipped!), right? My problem is that sometimes, when the user moves the box around inside the window, they see parts of it clipped away (for example, one vertex of the box is clipped off). There is no limit on how far the user can move the box.
My understanding is this:
When the user moves the box, it goes outside the frustum; that's why it's clipped.
In this case, should my code adjust the frustum on the fly (which changes the projection matrix), adjust the camera on the fly (maybe adjusting the near/far planes as well), or do something else?
My question is this:
What's the usual technique to avoid this kind of clipping, while making sure the user feels they are moving the box smoothly, without any "jerk" (like the box suddenly jumping to another location because the frustum changed drastically while they are moving it)?
I think this is a very classic problem, so there should be a clean solution. Any code/references are appreciated!
I attached a picture to show the problem:
This was happening to me, and adjusting the perspective matrix did not allow a near plane below 0.5 without all my objects disappearing.
Then I read this somewhere:
DEPTH CLAMPING: the clipping behavior against the Z position of a vertex (i.e. -w_c ≤ z_c ≤ w_c) can be turned off by activating depth clamping.
glEnable(GL_DEPTH_CLAMP);
And I could get close to my objects without them being clipped away.
I do not know if doing this will cause other problems, but as of yet I have not encountered any.
I would suspect that your frustum is too narrow. So when you rotate your object, parts of it move outside the viewable area. As an experiment, try increasing your frustum angle, increasing your Far value to something like 1000 or even 10000, and moving your camera further back from the centre (a larger negative value on the z-axis). This should generate a very large frustum that your object should fit within. Run your project and rotate: if the clipping effect is gone, you know your problem is with either the frustum or the model scale (or both).
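A rough sketch of that experiment (assuming a classic GLU-style setup; the angle, distances and windowWidth/windowHeight variables are placeholders):
// Deliberately oversized frustum: wide angle, distant far plane, camera pulled back.
// If the clipping disappears, the original frustum (or the model scale) was the problem.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(60.0,                                 // wider field-of-view angle (degrees)
               windowWidth / (double)windowHeight,   // aspect ratio
               0.1, 10000.0);                        // near plane / very distant far plane
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(0.0, 0.0, -50.0,   // camera moved further back from the centre (larger negative z)
          0.0, 0.0, 0.0,     // looking at the origin
          0.0, 1.0, 0.0);    // up vector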
This code gets called before every redraw. I don't know how you're rotating/translating (timer or mouseDown), but in any case the methods described below can be done smoothly and appear natural to the user.
If your object is being clipped by the near plane, move the near cutoff plane back toward the camera (in this code, increase VIEWPLANEOFFSET). If the camera is too close to allow you to move the near plane far enough back, you may also need to move the camera back.
If your object is being clipped by the left, right, top or bottom clipping planes, adjust the camera aperture.
This is discussed in more detail below.
// ******************************* Distance of The Camera from the Origin
cameraRadius = sqrtf((camera.viewPos.x * camera.viewPos.x) + (camera.viewPos.y * camera.viewPos.y) + (camera.viewPos.z * camera.viewPos.z));
GLfloat phi = atanf(camera.viewPos.x/cameraRadius);
GLfloat theta = atanf(camera.viewPos.y/cameraRadius);
camera.viewUp.x = cosf(theta) * sinf(phi);
camera.viewUp.y = cosf(theta);
camera.viewUp.z = sinf(theta) * sinf(phi);
You'll see with the View matrix we're only defining the camera (eye) position and view direction. There's no clipping going on here yet, but the camera position will limit what we can see in that if it's too close to the object, we'll be limited in how we can set the near cutoff plane. I can't think of any reason not to set the camera back fairly far.
// ********************************************** Make the View Matrix
viewMatrix = GLKMatrix4MakeLookAt(camera.viewPos.x, camera.viewPos.y, camera.viewPos.z, camera.viewPos.x + camera.viewDir.x, camera.viewPos.y + camera.viewDir.y, camera.viewPos.z + camera.viewDir.z, camera.viewUp.x, camera.viewUp.y, camera.viewUp.z);
The Projection matrix is where the clipping frustum is defined. Again, if the camera is too close, we won't be able to set the near cutoff plane to avoid clipping the object if it's bigger than our camera distance from the origin. While I can't see any reason not to set the camera back fairly far, there are reasons (accuracy of depth culling) not to set the near/far clipping planes any further apart than you need.
In this code the camera aperture is used directly, but if you're using something like glFrustum to create the Projection matrix, it's a good idea to calculate the left and right clipping planes from the camera aperture. This way you can create a zoom effect by varying the camera aperture (maybe in a mouseDown method) so the user can zoom in or out as he likes. Increasing the aperture effectively zooms out. Decreasing the aperture effectively zooms in.
// ********************************************** Make Projection Matrix
GLfloat aspectRatio;
GLfloat cameraNear, cameraFar;
// The Camera Near and Far Cutoff Planes
cameraNear = cameraRadius - VIEWPLANEOFFSET;
if (cameraNear < 0.00001)
cameraNear = 0.00001;
cameraFar = cameraRadius + VIEWPLANEOFFSET;
if (cameraFar < 1.0)
cameraFar = 1.0;
// Get The Current Frame
NSRect viewRect = [self frame];
camera.viewWidth = viewRect.size.width;
camera.viewHeight = viewRect.size.height;
// Calculate the Ratio of The View Width / View Height
aspectRatio = viewRect.size.width / viewRect.size.height;
float fieldOfView = GLKMathDegreesToRadians(camera.aperture);
projectionMatrix = GLKMatrix4MakePerspective(fieldOfView, aspectRatio, cameraNear, cameraFar);
EDIT:
Here is some code illustrating how to calculate left and right clipping planes from the camera aperture:
GLfloat ratio, apertureHalfAngle, width;
GLfloat cameraLeft, cameraRight, cameraTop, cameraBottom, cameraNear, cameraFar;
GLfloat shapeSize = 3.0;
GLfloat cameraRadius;
// Distance of The Camera from the Origin
cameraRadius = sqrtf((camera.viewPos.x * camera.viewPos.x) + (camera.viewPos.y * camera.viewPos.y) + (camera.viewPos.z * camera.viewPos.z));
// The Camera Near and Far Cutoff Planes
cameraNear = cameraRadius - (shapeSize * 0.5);
if (cameraNear < 0.00001)
cameraNear = 0.00001;
cameraFar = cameraRadius + (shapeSize * 0.5);
if (cameraFar < 1.0)
cameraFar = 1.0;
// Calculate the camera Aperture Half Angle (radians) from the Camera Aperture (degrees)
apertureHalfAngle = (camera.aperture / 2) * PI / 180.0; // half aperture degrees to radians
// Calculate the Width from 0 of the Left and Right Camera Cutoffs
// We Use Camera Radius Rather Than Camera Near For Our Own Reasons
width = cameraRadius * tanf(apertureHalfAngle);
NSRect viewRect = [self bounds];
camera.viewWidth = viewRect.size.width;
camera.viewHeight = viewRect.size.height;
// Calculate the Ratio of The View Width / View Height
ratio = camera.viewWidth / camera.viewHeight;
// Calculate the Camera Left, Right, Top and Bottom
if (ratio >= 1.0)
{
cameraLeft = -ratio * width;
cameraRight = ratio * width;
cameraTop = width;
cameraBottom = -width;
} else {
cameraLeft = -width;
cameraRight = width;
cameraTop = width / ratio;
cameraBottom = -width / ratio;
}
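Those values would then feed the projection matrix, for example through glFrustum, or, to stay with the GLKit code above, something along these lines (a sketch):
// Build the projection matrix from the clipping planes derived from the aperture.
projectionMatrix = GLKMatrix4MakeFrustum(cameraLeft, cameraRight,
                                         cameraBottom, cameraTop,
                                         cameraNear, cameraFar);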

Drawing a textured quad pixel-exact in 2D with OpenGL 3.3?

How do you scroll a textured quad "pixel-exact" over the screen using float positions with GL_LINEAR filtering?
When I try this, I always see a hard pixel change during an otherwise very smooth movement whenever the subpixel coordinate becomes greater than or equal to 0.5. It looks like really ugly stuttering.
I think the problem is that after scrolling 0.5 subpixels, the quad is misaligned with the texture coordinates just enough that OpenGL now picks a different neighbouring texel for filtering, which is newly interpolated, so the newly drawn subpixel doesn't line up with the subpixel rendered before?!
Could the solution be to realign the texture coordinates at positions > 0.5f? Can somebody help me out with a function that deals with this problem and calculates the right UV coordinates? I think the UV coordinates should be moved to a new position if the quad's subpixel position is >= 0.5f.
Here is a link to a screenshot that shows exactly the hard pixel jump in the x direction (left) at x = 0.5f in the last frame (zoom the screenshot with Ctrl+mousewheel):
http://i.stack.imgur.com/n0GVA.png
Here are the relevant code-fragments:
float calcTexPos(float fTexPos, float spriteX, float spriteY) { return fTexPos / 64.0f; }
void addSpriteToLocalVerticeArray(Vertex * ptrVertexArrayLocal, Sprite *sprite, int *ptrSpriteVerticeCounter, float fCamX, float fCamY)
{
Sprite oSpriteTranslated = *sprite;
oSpriteTranslated.x = (sprite->x) - (fCamX);
oSpriteTranslated.y = (sprite->y) - (fCamY);
ptrVertexArrayLocal[(*ptrSpriteVerticeCounter)++] = Vertex(oSpriteTranslated.x,oSpriteTranslated.y,0.0f,calcTexPos(1,oSpriteTranslated.x,oSpriteTranslated.y),calcTexPos(62,oSpriteTranslated.x,oSpriteTranslated.y)); //Left Top
ptrVertexArrayLocal[(*ptrSpriteVerticeCounter)++] = Vertex(oSpriteTranslated.x,oSpriteTranslated.y-oSpriteTranslated.height,0.0f,calcTexPos(1,oSpriteTranslated.x,oSpriteTranslated.y),calcTexPos(31,oSpriteTranslated.x,oSpriteTranslated.y)); //Left Bottom
ptrVertexArrayLocal[(*ptrSpriteVerticeCounter)++] = Vertex(oSpriteTranslated.x+oSpriteTranslated.width,oSpriteTranslated.y-oSpriteTranslated.height,0.0f,calcTexPos(33,oSpriteTranslated.x,oSpriteTranslated.y),calcTexPos(31,oSpriteTranslated.x,oSpriteTranslated.y)); //Right Bottom
ptrVertexArrayLocal[(*ptrSpriteVerticeCounter)++] = Vertex(oSpriteTranslated.x+oSpriteTranslated.width,oSpriteTranslated.y-oSpriteTranslated.height,0.0f,calcTexPos(33,oSpriteTranslated.x,oSpriteTranslated.y),calcTexPos(31,oSpriteTranslated.x,oSpriteTranslated.y)); //Right Bottom
ptrVertexArrayLocal[(*ptrSpriteVerticeCounter)++] = Vertex(oSpriteTranslated.x+oSpriteTranslated.width,oSpriteTranslated.y,0.0f,calcTexPos(33,oSpriteTranslated.x,oSpriteTranslated.y),calcTexPos(62,oSpriteTranslated.x,oSpriteTranslated.y)); //Right Top
ptrVertexArrayLocal[(*ptrSpriteVerticeCounter)++] = Vertex(oSpriteTranslated.x,oSpriteTranslated.y,0.0f,calcTexPos(1,oSpriteTranslated.x,oSpriteTranslated.y),calcTexPos(62,oSpriteTranslated.x,oSpriteTranslated.y)); //Left Top
}
What I have tried that didn't work:
Adding an offset of 0.5f to my quad coords. That just moves the problem: now a hard subpixel change appears at each full 1.0f subpixel position.
Changing the calcTexPos function to: (2.0f*fTexPos+1.0f)/(2.0f*64.0f)
Checking whether MSAA is activated in the driver panel
Adding transparent borders to the texture instead of copied borders

gluLookAt eyeZ not working as expected

I'm having some trouble with the eyeZ value of gluLookAt.
The way I'd imagine it to work is like moving a camera further away, thus shrinking the object in your field of view.
I have a simple setup with a simple shape in 3D space drawn via glDrawElements, with a 100x100x100 ortho where (0, 0, 0) is the center of the universe. The object is at (0, 0, 0).
I'm trying to make it so that when you scroll the mouse wheel you get further away from/closer to the object. Here's how gluLookAt is called:
float eyeX = 0;
float eyeY = 0;
float eyeZ = differenceInMouseWheel();
float centerX = 0;
float centerY = 0;
float centerZ = 0;
float upX = 0;
float upY = 1;
float upZ = 0;
gluLookAt(eyeX, eyeY, eyeZ, centerX, centerY, centerZ, upX, upY, upZ);
The only thing changing here is eyeZ.
The effect is strange: I scroll for about 10 seconds and then suddenly half of the object disappears, and from there more and more of it disappears. This is probably because the camera is going out past the z distance limit of 50, but I can't understand why the object doesn't scale like it would in 3D space.
Maybe I'm misunderstanding how the center values work?
I've also tried applying differenceInMouseWheel() to centerZ, but that changed nothing, so I'm going to assume the center values are just there so GLU can derive a direction and nothing more.
Maybe the up vector should change? I don't know at this point.
You are using an orthographic projection. This means that no matter how great the distance, your objects will always appear to have the same size. Your object will disappear once it reaches the far clipping plane however, which is what you are seeing when you scroll for a long time.
You have two options: Either you use a perspective projection or you implement a zoom by modifying the orthographic projection matrix like so:
Let zoom be in (0, 1], and let viewport be a rectangle that is set to your current viewport. Let near be your near clipping plane distance and far be your far clipping plane distance.
glOrtho(-zoom * viewport.width / 2, zoom * viewport.width / 2, -zoom * viewport.height / 2, zoom * viewport.height / 2, near, far);
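If you go the other route and want the object to really shrink with distance, a minimal perspective setup could look like this (a sketch; the field of view and clipping distances are placeholder values):
// Perspective projection: now increasing eyeZ actually makes the object appear smaller.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(45.0,                                     // vertical field of view in degrees
               viewport.width / (double)viewport.height, // aspect ratio
               0.1, 1000.0);                             // near / far clipping planes
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(eyeX, eyeY, eyeZ, centerX, centerY, centerZ, upX, upY, upZ);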
Are you using a perspective projection matrix, or an orthographic one? If you don't use a perspective matrix, the objects won't appear to change in size as you move the camera around.

How to tell the size of font in pixels when rendered with openGL

I'm working on the editor for Bitfighter, where we use the default OpenGL stroked font. We generally render the text with a linewidth of 2, but this makes smaller fonts less readable. What I'd like to do is detect when the fontsize will fall below some threshold, and drop the linewidth to 1. The problem is, after all the transforms and such are applied, I don't know how to tell how tall (in pixels) a font of size <fontsize> will be rendered.
This is the actual inner rendering function:
if(---something--- < thresholdSizeInPixels)
glLineWidth(1);
float scaleFactor = fontsize / 120.0f;
glPushMatrix();
glTranslatef(x, y + (fix ? 0 : size), 0);
glRotatef(angle * radiansToDegreesConversion, 0, 0, 1);
glScalef(scaleFactor, -scaleFactor, 1);
for(S32 i = 0; string[i]; i++)
OpenglUtils::drawCharacter(string[i]);
glPopMatrix();
Just before calling this, I want to check the height of the font, then drop the linewidth if necessary. What goes in the ---something--- spot?
Bitfighter is a pure old-school 2D game, so there are no fancy 3D transforms going on. All code is in C++.
My solution was to combine the first part of Christian Rau's solution with a fragment of the second. Basically, I can get the current scaling factor with this:
static float modelview[16];
glGetFloatv(GL_MODELVIEW_MATRIX, modelview); // Fills modelview[]
float scalefact = modelview[0];
Then, I multiply scalefact by the fontsize in pixels, and multiply that by the ratio of windowHeight / canvasHeight to get the height in pixels that my text will be rendered.
That is...
textheight = scalefact * fontsize * windowHeight / canvasHeight
And I liked also the idea of scaling the line thickness rather than stepping from 2 to 1 when a threshold is crossed. It all works very nicely now.
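Put together, the check just before the inner rendering function could look roughly like this (a sketch using the names thresholdSizeInPixels, windowHeight and canvasHeight from the discussion above):
// Read the current modelview scale and estimate the rendered text height in pixels.
static float modelview[16];
glGetFloatv(GL_MODELVIEW_MATRIX, modelview);   // fills modelview[]
float scalefact = modelview[0];                // assumes a uniform scale
float textheight = scalefact * fontsize * windowHeight / canvasHeight;
if(textheight < thresholdSizeInPixels)
    glLineWidth(1);
else
    glLineWidth(2);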
where we use the default OpenGL stroked font
OpenGL doesn't do fonts. There is no default OpenGL stroked font.
Maybe you are referring to GLUT and its glutStrokeCharacter function. Then please take note that GLUT is not part of OpenGL. It's an independent library, focused on providing a simplistic framework for small OpenGL demos and tutorials.
To answer your question: GLUT stroke fonts are defined in terms of vertices, so the usual transformations apply. Since usually all transformations are linear, you can simply transform the vector (0, base_height, 0) through modelview and projection, finally doing the perspective divide (gluProject does all this for you; note that GLU is not part of OpenGL either). The resulting vector is what you're looking for; take the vector length for scaling the width.
This should be determinable rather easily. The font's size in pixels just depends on the modelview transformation (actually only the scaling part), the projection transformation (which is a simple orthographic projection, I suppose) and the viewport settings, and of course on the size of an individual character of the font in untransformed form (what goes into the glVertex calls).
So you just take the font's basic size (let's consider the height only and call it height) and first do the modelview transformation (assuming the scaling shown in the code is the only one):
height *= scaleFactor;
Next we do the projection transformation:
height /= (top-bottom);
with top and bottom being the values you used when specifying the orthographic transformation (e.g. using glOrtho). And last but not least we do the viewport transformation:
height *= viewportHeight;
with viewportHeight being, you guessed it, the height of the viewport specified in the glViewport call. The resulting height should be the height of your font in pixels. You can use this to scale the line width continuously (without an if); the line width parameter is a float anyway, so let OpenGL do the discretization.
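Collected into one helper, those three steps would look something like this (a sketch; the parameter names are illustrative):
// Estimate the on-screen height in pixels of a character whose untransformed
// height is 'height', given the modelview scale, the ortho extent and the viewport.
float fontPixelHeight(float height, float scaleFactor,
                      float top, float bottom, float viewportHeight)
{
    height *= scaleFactor;       // modelview scaling
    height /= (top - bottom);    // projection: fraction of the vertical ortho extent
    height *= viewportHeight;    // viewport transform: fraction -> pixels
    return height;
}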
If your transformation pipeline is more complicated, you could use a more general approach using the complete transformation matrices, perhaps with the help of gluProject to transform an object-space point to a screen-space point:
double x0, x1, y0, y1, z;
double modelview[16], projection[16];
int viewport[4];
glGetDoublev(GL_MODELVIEW_MATRIX, modelview);
glGetDoublev(GL_PROJECTION_MATRIX, projection);
glGetIntegerv(GL_VIEWPORT, viewport);
gluProject(0.0, 0.0, 0.0, modelview, projection, viewport, &x0, &y0, &z);
gluProject(fontWidth, fontHeight, 0.0, modelview, projection, viewport, &x1, &y1, &z);
x1 -= x0;
y1 -= y0;
fontScreenSize = sqrt(x1*x1 + y1*y1);
Here I took the diagonal of the character and not only the height, to better handle rotations, and we used the origin as a reference value to ignore translations.
You might also find the answers to this question interesting, which give some more insight into OpenGL's transformation pipeline.

Gradient "miter" in OpenGL shows seams at the join

I am doing some really basic experiments with some 2D work in GL. I'm trying to draw a "picture frame" around a rectangular area. I'd like the frame to have a consistent gradient all the way around, so I'm constructing it from geometry that looks like four quads, one on each side of the frame, tapered in to make trapezoids that effectively have miter joins.
The vert coords are the same on the "inner" and "outer" rectangles, and the colors are the same for all inner and all outer as well, so I'd expect to see perfect blending at the edges.
But notice in the image below how there appears to be a "seam" in the corner of the join that's lighter than it should be.
I feel like I'm missing something conceptually in the math that explains this. Is this artifact somehow a result of the gradient slope? If I change all the colors to opaque blue (say), I get a perfect solid blue frame as expected.
Update: code added below. Sorry, it's kind of verbose. I'm using 2-triangle fans for the trapezoids instead of quads.
Thanks!
glClearColor(1.0, 1.0, 1.0, 1.0);
glClear(GL_COLOR_BUFFER_BIT);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
// Prep the color array. This is the same for all trapezoids.
// 4 verts * 4 components/color = 16 values.
GLfloat colors[16];
// Vertices 0-1: opaque blue
colors[0] = 0.0;  colors[1] = 0.0;  colors[2] = 1.0;  colors[3] = 1.0;
colors[4] = 0.0;  colors[5] = 0.0;  colors[6] = 1.0;  colors[7] = 1.0;
// Vertices 2-3: opaque white
colors[8] = 1.0;  colors[9] = 1.0;  colors[10] = 1.0;  colors[11] = 1.0;
colors[12] = 1.0;  colors[13] = 1.0;  colors[14] = 1.0;  colors[15] = 1.0;
// Draw the trapezoidal frame areas. Each one is two triangle fans.
// Fan of 2 triangles = 4 verts = 8 values
GLfloat vertices[8];
float insetOffset = 100;
float frameMaxDimension = 1000;
// Bottom
vertices[0] = 0;
vertices[1] = 0;
vertices[2] = frameMaxDimension;
vertices[3] = 0;
vertices[4] = frameMaxDimension - insetOffset;
vertices[5] = 0 + insetOffset;
vertices[6] = 0 + insetOffset;
vertices[7] = 0 + insetOffset;
glVertexPointer(2, GL_FLOAT , 0, vertices);
glColorPointer(4, GL_FLOAT, 0, colors);
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
// Left
vertices[0] = 0;
vertices[1] = frameMaxDimension;
vertices[2] = 0;
vertices[3] = 0;
vertices[4] = 0 + insetOffset;
vertices[5] = 0 + insetOffset;
vertices[6] = 0 + insetOffset;
vertices[7] = frameMaxDimension - insetOffset;
glVertexPointer(2, GL_FLOAT , 0, vertices);
glColorPointer(4, GL_FLOAT, 0, colors);
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
/* top & right would be as expected... */
glDisableClientState(GL_COLOR_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);
As #Newbie posted in the comments,
@quixoto: open your image in a paint program, click with the fill tool somewhere in the seam, and you'll see it makes a 90 degree angle line there... meaning there's only one color, nothing brighter anywhere in the "seam". It's just an illusion.
True. While I'm not familiar with this part of math under OpenGL, I believe this is the implicit result of how the interpolation of colors between the triangle vertices is performed... I'm positive that it's called "Bilinear interpolation".
So what to do to solve that? One possibility is to use a texture and just draw a textured quad (or several textured quads).
However, it should be easy to generate such a border in a fragment shader.
A nice solution using a GLSL shader...
Assume you're drawing a rectangle with the bottom-left corner having texture coords equal to (0,0), and the top-right corner with (1,1).
Then generating the "miter" procedurally in a fragment shader would look like this, if I'm correct:
varying vec2 coord;
uniform vec2 insetWidth; // width of the border in %, max would be 0.5
void main() {
vec3 borderColor = vec3(0,0,1);
vec3 backgroundColor = vec3(1,1,1);
// x and y inset, 0..1, 1 means border, 0 means centre
vec2 insets = max(-coord + insetWidth, vec2(0,0)) / insetWidth;
If I'm correct so far, then now for every pixel the value of insets.x has a value in the range [0..1]
determining how deep a given point is into the border horizontally,
and insets.y has the similar value for vertical depth.
The left vertical bar has insets.y == 0,
the bottom horizontal bar has insets.x == 0, and the lower-left corner has the pair (insets.x, insets.y) covering the whole 2D range from (0,0) to (1,1). See the pic for clarity:
Now we want a transformation which for a given (x,y) pair will give us ONE value [0..1] determining how to mix background and foreground color. 1 means 100% border, 0 means 0% border. And this can be done in several ways!
The function should obey the requirements:
0 if x==0 and y==0
1 if either x==1 or y==1
smooth values in between.
Assume such function:
float bias = max(insets.x,insets.y);
It satisfies those requirements. Actually, I'm pretty sure that this function would give you the same "sharp" edge as you have above. Try to calculate it on paper for a selection of coordinates inside that bottom-left rectangle.
If we want to have a smooth, round miter there, we just need another function here. I think that something like this would be sufficient:
float bias = min( length(insets) , 1 );
The length() function here is just sqrt(insets.x*insets.x + insets.y*insets.y). What's important: This translates to: "the farther away (in terms of Euclidean distance) we are from the border, the more visible the border should be", and the min() is just to make the result not greater than 1 (= 100%).
Note that our original function adheres to exactly the same definition - but the distance is calculated according to the Chessboard (Chebyshev) metric, not the Euclidean metric.
This implies that using, for example, Manhattan metric instead, you'd have a third possible miter shape! It would be defined like this:
float bias = min(insets.x+insets.y, 1);
I predict that this one would also have a visible "diagonal line", but the diagonal would be in the other direction ("\").
OK, so for the rest of the code, when we have the bias [0..1], we just need to mix the background and foreground color:
vec3 finalColor = mix(borderColor, backgroundColor, bias);
gl_FragColor = vec4(finalColor, 1); // return the calculated RGB, and set alpha to 1
}
And that's it! Using GLSL with OpenGL makes life simpler. Hope that helps!
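For reference, here are the fragments above assembled into one complete shader (a sketch; it uses the Euclidean-distance variant of bias, and insetWidth is still supplied as a uniform):
varying vec2 coord;        // (0,0) at the bottom-left corner, (1,1) at the top-right
uniform vec2 insetWidth;   // width of the border in %, max would be 0.5
void main() {
    vec3 borderColor = vec3(0,0,1);
    vec3 backgroundColor = vec3(1,1,1);
    // x and y inset, 0..1, 1 means border, 0 means centre
    vec2 insets = max(-coord + insetWidth, vec2(0,0)) / insetWidth;
    // Euclidean distance gives the rounded miter; clamp so it never exceeds 1
    float bias = min(length(insets), 1.0);
    // blend the two colours according to bias, as discussed above
    vec3 finalColor = mix(borderColor, backgroundColor, bias);
    gl_FragColor = vec4(finalColor, 1.0);
}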
I think that what you're seeing is a Mach band. Your visual system is very sensitive to changes in the 1st derivative of brightness. To get rid of this effect, you need to blur your intensities. If you plot intensity along a scanline which passes through this region, you'll see that there are two lines which meet at a sharp corner. To keep your visual system from highlighting this area, you'll need to round this join over. You can do this with either a post processing blur or by adding some more small triangles in the corner which ease the transition.
I had that in the past, and it's very sensitive to geometry. For example, if you draw them separately as triangles, in separate operations, instead of as a triangle fan, the problem is less severe (or, at least, it was in my case, which was similar but slightly different).
One thing I also tried is to draw the triangles separately, slightly overlapping one another, with the right compositing mode (or OpenGL blending) so you don't get the effect. It worked, but I didn't end up using it because it was only a tiny part of the final product and not worth it.
I'm sorry that I have no idea what the root cause of this effect is, however :(