How could I remove this colour interpolation artefact across a quad? - glsl

I've been following a Vulkan tutorial online, here: https://vulkan-tutorial.com. However, this question should apply to any 3D rendering API.
In this lesson https://vulkan-tutorial.com/Vertex_buffers/Index_buffer, the tutorial covers indexed rendering, reusing vertices to draw the following simple two-triangle quad:
The four vertices were assigned red, green, blue and white colours as vertex attributes and the fragment shader had those colours interpolated across the triangles as expected. This leads to the ugly visual artefact on the diagonal where the two triangles meet. As I understand it, the interpolation will only be happening across each triangle, and so where the two triangles meet the interpolation doesn't cross the boundary.
How could you, in any rendering API generally, have the colours smoothly interpolated over all four corners for a nice colour-wheel effect, without this hard line?

This is correct output from the graphics API's point of view. You can achieve the output you want (a colour gradient) in the shader code; you basically need to interpolate the colours yourself. To get an idea of how to do this, here is a GLSL snippet from this answer:
uniform vec2 resolution;

void main(void)
{
    vec2 p = gl_FragCoord.xy / resolution.xy;
    float gray = 1.0 - p.x;
    float red = p.y;
    gl_FragColor = vec4(red, gray * red, gray * red, 1.0);
}
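Applied to the quad in the question, the idea is: treat the fragment's position in the quad as a normalized (x, y) pair and blend the four corner colours bilinearly, so the triangle diagonal no longer matters. A minimal Python sketch of that arithmetic (the function names and corner layout are my own, not from the tutorial):

```python
def lerp(a, b, t):
    """Linearly interpolate two equal-length colour tuples."""
    return tuple(x * (1.0 - t) + y * t for x, y in zip(a, b))

def quad_color(p, c00, c10, c01, c11):
    """Bilinearly blend four corner colours at normalized quad position p.

    c00 = bottom-left, c10 = bottom-right, c01 = top-left, c11 = top-right.
    """
    bottom = lerp(c00, c10, p[0])   # blend along x on the bottom edge
    top = lerp(c01, c11, p[0])      # blend along x on the top edge
    return lerp(bottom, top, p[1])  # blend the two edge results along y

red = (1.0, 0.0, 0.0)
green = (0.0, 1.0, 0.0)
blue = (0.0, 0.0, 1.0)
white = (1.0, 1.0, 1.0)
centre = quad_color((0.5, 0.5), red, green, blue, white)  # (0.5, 0.5, 0.5)
```

In a fragment shader the same blend is a mix along x followed by a mix along y of interpolated quad coordinates, which is continuous across the whole quad.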

Related

OpenGL shader gamma on solid color

I want to implement gamma correction in my OpenGL 3D renderer. I understand that it's essential for textures stored in sRGB, so I do this:
vec4 texColor;
texColor = texture(src_tex_unit0, texCoordVarying);
texColor = vec4(pow(texColor.rgb,vec3(2.2)),1.0);
vec4 colorPreGamma = texColor * (vec4(ambient,1.0) + vec4(diffuse,1.0));
fragColor = vec4(pow(colorPreGamma.rgb, vec3(1.0/gamma)),1.0);
But my question is about solid color, when the surface of the 3D object I want lit is not textured but just colored by a per vertex RGB value. In this case, do I have to transform my color in Linear space, and after the lighting operation, transform back to gamma space like I do for a texture?
Does this apply when my lights are colored?
In this case, do I have to transform my color in Linear space, and after the lighting operation, transform back to gamma space like I do for a texture?
That depends: what colorspace are your colors in?
You're not doing this correction because of where they come from; you're doing it because of what the colors actually are. If the value is not linear, then you must linearize it before using it, regardless of where it comes from.
You are ultimately responsible for putting that color there. So you must know whether that color is in the linear RGB or sRGB colorspace. And if the color is not linear, then you have to linearize it before you can get meaningful numbers from it.
In OpenGL there isn't a huge distinction between color data and other kinds of data: if you have a vec3 you can access components as .xyz or .rgb. It's all just data.
So ask yourself this: "Do I have to gamma correct my vertex positions in the vertex shader?"
Of course not, because your vertex positions are already in linear space. So if you are similarly setting your vertex colors in linear space, again no gamma correction is needed.
In other words, do you imagine vec3(0.5, 0.5, 0.5) as being a gray that is visually halfway between black and white? Then you need gamma correction.
Do you imagine it as being mathematically halfway between black and white (in terms of measurable light intensity)? Then it's already linear.
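The distinction can be sketched numerically. Here is a small Python illustration, assuming a simple pow-based gamma of 2.2 (real sRGB uses a piecewise curve, so this is an approximation, matching the pow(…, 2.2) shortcut in the shader above):

```python
def to_linear(c, gamma=2.2):
    """Decode a gamma-encoded channel value into linear light intensity."""
    return c ** gamma

def to_gamma(c, gamma=2.2):
    """Re-encode a linear intensity for display on a gamma ~2.2 monitor."""
    return c ** (1.0 / gamma)

# A colour authored as "visually halfway gray" is gamma-encoded:
perceptual_gray = 0.5
linear = to_linear(perceptual_gray)  # roughly 0.218: much darker as measured intensity

# Lighting math (e.g. scaling by a diffuse factor) belongs in linear space,
# mirroring the texColor pipeline in the shader above:
lit = linear * 0.8
display = to_gamma(lit)
```

So if your vertex colours were picked by eye on a monitor, they are effectively gamma-encoded and need the decode step; if they were computed mathematically, they are already linear and only need the final encode.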

Why is my GLSL shader rendering a cleavage?

I'm working on a deferred lighting technique in 2D, using a frame buffer to accumulate light sources using the GL_MAX blend equation.
Here's what I get when rendering one light source (the geometry is a quad without a texture, I'm only using a fragment shader for colouring) to my buffer:
Which is exactly what I want - attenuation from the light source. However, when two light sources are near each other and overlap, they seem to produce a lower RGB value where they meet, like so:
Why is there a darker line between the two? I was expecting that with GL_MAX blend equation they would smoothly blend into each other, using the maximal value of the fragments in each location.
Here's the setup for the FBO (using LibGDX):
Gdx.gl.glClearColor(0.14f, 0.14f, 0.19f, 1);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
Gdx.gl.glBlendEquation(GLMAX_BLEND_EQUATION);
Gdx.gl.glBlendFunc(GL20.GL_SRC_COLOR, GL20.GL_DST_COLOR);
Gdx.gl.glEnable(GL20.GL_BLEND);
I don't think the call to glBlendFunc is actually necessary when using this equation. GLMAX_BLEND_EQUATION is set to 0x8008.
varying vec2 v_texCoords;
varying vec2 v_positionRelativeToLight;

uniform sampler2D u_texture;
uniform vec3 u_lightPosition;
uniform vec3 u_lightColor;

void main() {
    float distanceToLight = length(v_positionRelativeToLight);
    float falloffVarA = 0.1;
    float falloffVarB = 1.0;
    float attenuation = 1.0 / (1.0 + (falloffVarA * distanceToLight) + (falloffVarB * distanceToLight * distanceToLight));
    float minDistanceOrAttenuation = min(attenuation, 1.0 - distanceToLight);
    float combined = minDistanceOrAttenuation * attenuation;
    gl_FragColor = vec4(combined, combined, combined, 1.0);
}
There are extra variables passed in there as this fragment shader is usually more complicated, but I've cut it down to just show how the attenuation and blending is behaving.
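To rule out the falloff itself as the culprit, I reproduced the same formulas outside GLSL. This Python check (same constants as the shader above) shows the intensity decreases monotonically from the light's centre:

```python
def attenuation(d, falloff_a=0.1, falloff_b=1.0):
    """The shader's falloff: 1 / (1 + a*d + b*d^2)."""
    return 1.0 / (1.0 + falloff_a * d + falloff_b * d * d)

def combined(d):
    """The shader's final intensity: min(attenuation, 1 - d) * attenuation."""
    return min(attenuation(d), 1.0 - d) * attenuation(d)

# Sample the intensity at distances 0.0, 0.1, ..., 1.0 from the light:
samples = [combined(i / 10.0) for i in range(11)]
```

Each light on its own falls off smoothly to zero at distance 1, so the dark line must come from how the two lights are being combined.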
This happens wherever two light sources meet - rather than the colour I'm expecting, the meeting point (the line equidistant from the two quads) comes out darker than its surroundings. Any idea why, and how to fix it?
This is the result of subtracting the first image from the second:
The background of the first image isn't quite black, hence the yellowing on the right. Otherwise you can clearly see the black region on the left where the original values were preserved, the darker arc where values from both lights were evaluated but the right one was greater, and then all the area on the right that the original light didn't touch.
I therefore think you're getting max-pick blending. But what you want is additive blending:
Gdx.gl.glBlendFunc(GL20.GL_ONE, GL20.GL_ONE);
... and leave the blend equation on the default of GL_FUNC_ADD.
Your result is the expected appearance for maximum blending (which is just like the lighten blend mode in Photoshop). The dark seam looks out of place perhaps because of the non-linear falloff of each light, but it's mathematically correct. If you introduce a light with a bright non-white color to it, it will look much more objectionable.
You can get around this if you render your lights to a frame buffer with inverted colors and multiplicative blending, and then render the frame buffer with inverted colors. Then the math works out to not have the seams, but it won't look unusually bright like what additive blending produces.
Use a pure white clear color on your frame buffer and then render the lights with the standard GL_ADD blend equation and the blend function GL_ONE_MINUS_DST_COLOR. Then render your FBO texture to the screen, inverting the colors again.
Two lights drawn using your method
Two lights drawn additively
Two lights, drawn sequentially with GL_ONE_MINUS_DST_COLOR, GL_ZERO and GL_ADD
The above result, inverted
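To see numerically why the max equation produces the seam while the invert-multiply trick doesn't, here is a small Python sketch (the sample values are made up; "screen" is the standard name for the invert-multiply-invert combination described above):

```python
def blend_max(a, b):
    return max(a, b)

def blend_add(a, b):
    return min(a + b, 1.0)  # additive, clamped at white

def blend_screen(a, b):
    """Invert, multiply, invert - what accumulating into an inverted
    framebuffer with multiplicative blending computes."""
    return 1.0 - (1.0 - a) * (1.0 - b)

# At the exact midpoint both lights contribute 0.4; slightly off to one
# side the nearer light contributes 0.5 and the farther one 0.3:
seam_max = blend_max(0.4, 0.4)        # 0.4, versus 0.5 just off-centre: a dark dip
seam_add = blend_add(0.4, 0.4)        # 0.8 both at and off the midpoint: seamless
seam_screen = blend_screen(0.4, 0.4)  # ~0.64 versus ~0.65 off-centre: nearly smooth
```

Max blending drops a full 0.1 at the midpoint, which reads as a dark line; additive blending has no dip at all, and the screen-style combination dips only marginally while staying bounded below white.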

Fully transparent torus in OpenGL

I have a torus rendered in OpenGL and can map a texture onto it; there is no problem as long as the texture is opaque. But it doesn't work when I make the colour selectively transparent in the fragment shader. Or rather: it works, but only in some areas, depending on the order of triangles in the vertex buffer; see the difference along the outer equator.
The torus should be evenly covered by spots.
The source image will be PNG; for now I work with BMP as it is easier to load (the texture-loading function is part of a tutorial).
The image has white background and spots of different colors on top of it; it is not transparent.
The desired result is nearly transparent torus with spots. Both spots in the front and the back side must be visible.
The rendering will be done offline, so I don't require speed; I just need to generate image of torus from an image of its surface.
So far my code looks like this (it is a merge of two examples):
https://gist.github.com/juriad/ba66f7184d12c5a29e84
Fragment shader is:
#version 330 core
// Interpolated values from the vertex shaders
in vec2 UV;
// Ouput data
out vec4 color;
// Values that stay constant for the whole mesh.
uniform sampler2D myTextureSampler;
void main(){
    // Output color = color of the texture at the specified UV
    color.rgb = texture( myTextureSampler, UV ).rgb;
    if (color.r == 1 && color.g == 1 && color.b == 1) {
        color.a = 0.2;
    } else {
        color.a = 1;
    }
}
I know that the issue is related to order.
What I could do (but don't know what will work):
Add transparency to the input image (and find a code which loads such image).
Do something in vertex shader (see Fully transparent OpenGL model).
Sorting would solve my problem, but if I understand it correctly, I have to implement it myself. I would have to find the center of each triangle (easy), project it with my matrix and compare z values.
Change somehow blending and depth handling:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);
glDepthFunc(GL_LEQUAL);
glDepthRange(0.0f, 1.0f);
I need an advice, how to continue.
Update, this nearly fixes the issue:
glDisable(GL_DEPTH_TEST);
//glDepthMask(GL_TRUE);
glDepthFunc(GL_NEVER);
//glDepthRange(0.0f, 1.0f);
I wanted to write that it doesn't distinguish spots on the front from those on the back, but then I realized they are nearly white, and blending with white makes no visible difference.
The new image of torus with colorized texture is:
The remaining problems are:
red spots are blue - this is caused by the BMP-loading function swapping the channel order (doesn't matter)
as all the spots in the input image are the same size, the bigger-looking spots are nearer and should be drawn on top, saturated rather than blended with the white body of the torus. It seems the draw order is the opposite of what it should be. If you compare this to the previous image, there the spots that appear big were drawn correctly and the small ones (the back side of the torus) were hidden.
How to fix the latter one?
First problem was solved by disabling depth-test (see update in the question).
Second problem was solved by manually sorting the array of all triangles. It works well even in real time for 20,000 triangles, which is more than sufficient for my purpose.
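The sorting step can be sketched in Python like this; the projection callback below stands in for multiplying by the real model-view-projection matrix, and the names are illustrative:

```python
def depth_sorted(triangles, project):
    """Sort triangles back-to-front by the projected depth of their centroids.

    `triangles` holds three-vertex tuples; `project` maps a 3D point to its
    depth after the model-view-projection transform.
    """
    def centroid_depth(tri):
        cx = sum(v[0] for v in tri) / 3.0
        cy = sum(v[1] for v in tri) / 3.0
        cz = sum(v[2] for v in tri) / 3.0
        return project((cx, cy, cz))

    # Farthest first, so nearer (bigger-looking) spots blend over the back:
    return sorted(triangles, key=centroid_depth, reverse=True)

# Toy projection: depth is simply the z coordinate (larger = farther away).
near_tri = ((0, 0, 1), (1, 0, 1), (0, 1, 1))
far_tri = ((0, 0, 5), (1, 0, 5), (0, 1, 5))
ordered = depth_sorted([near_tri, far_tri], lambda p: p[2])  # far_tri first
```

With depth testing disabled, drawing in this back-to-front order makes the alpha blending accumulate correctly.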
The resulting source code: https://gist.github.com/juriad/80b522c856dbd00d529c
It is based on and uses includes from OpenGL tutorial: http://www.opengl-tutorial.org/download/.

How to apply texture to a part of a sphere

I am trying to put a texture in only a part of a sphere.
I have a sphere representing the earth with its topography and a terrain texture for a part of the globe, say satellite map for Italy.
I want to show that terrain over the part of the sphere where Italy is.
I'm creating my sphere drawing a set of triangle strips.
As far as I understand, if I want to use a texture I need to specify a texture coord for each vertex (glTexCoord2*). But I do not have a valid texture for all of them.
So how do I tell OpenGL to skip the texture for those vertices?
I'll assume you have two textures or a color attribute for the remainder of the sphere ("not Italy").
The easiest way to do this would be to create a texture that covers the whole sphere, but use the alpha channel. For example, use alpha = 1 for "not Italy" and alpha = 0 for "Italy". Then you could do something like this in your fragment shader (pseudo-code, I did not test anything):
...
uniform sampler2D extra_texture;
in vec2 texture_coords;
out vec3 final_color;
...
void main() {
    ...
    // Assume color1 to be the base color for the sphere; no matter how you get it (attribute/texture), it has at least 3 components.
    vec4 color2 = texture(extra_texture, texture_coords);
    final_color = mix(vec3(color2), vec3(color1), color2.a);
}
The colors in mix are combined as mix(x, y, a) = x*(1-a) + y*a, applied component-wise for vectors. So you can see that if alpha = 1 ("not Italy"), color1 will be picked, and vice versa for alpha = 0.
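If it helps to see the arithmetic, mix can be reproduced in a few lines of Python (the colour values are made up for illustration):

```python
def mix(x, y, a):
    """GLSL-style mix: component-wise x*(1-a) + y*a."""
    return tuple(xi * (1.0 - a) + yi * a for xi, yi in zip(x, y))

base_sphere = (0.2, 0.4, 0.8)  # color1: base colour of the globe
italy_texel = (0.3, 0.6, 0.1)  # color2.rgb: a texel of the satellite map

not_italy = mix(italy_texel, base_sphere, 1.0)  # alpha = 1 keeps the base colour
italy = mix(italy_texel, base_sphere, 0.0)      # alpha = 0 shows the texture
```

Fractional alpha values in the texture would give you a soft border between the satellite map and the base colour for free.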
You could extend this to multiple layers using texture arrays or something similar, but I'd keep it simple 2-layer to begin with.

Is it possible to thicken a quadratic Bézier curve using the GPU only?

I draw lots of quadratic Bézier curves in my OpenGL program. Right now, the curves are one-pixel thin and software-generated, because I'm at a rather early stage, and it is enough to see what works.
Simply enough, given 3 control points (P0 to P2), I evaluate the following equation with t varying from 0 to 1 (with steps of 1/8) in software and use GL_LINE_STRIP to link them together:
B(t) = (1 - t)²P0 + 2(1 - t)tP1 + t²P2
Where B, obviously enough, results in a 2-dimensional vector.
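In Python, my current CPU-side evaluation looks essentially like this (the control points shown are arbitrary):

```python
def bezier2(p0, p1, p2, t):
    """Evaluate B(t) = (1-t)^2*P0 + 2*(1-t)*t*P1 + t^2*P2 component-wise."""
    u = 1.0 - t
    return tuple(u * u * a + 2.0 * u * t * b + t * t * c
                 for a, b, c in zip(p0, p1, p2))

p0, p1, p2 = (0.0, 0.0), (1.0, 2.0), (2.0, 0.0)
steps = 8
polyline = [bezier2(p0, p1, p2, i / steps) for i in range(steps + 1)]
# The endpoints land on P0 and P2; the midpoint bulges toward P1: (1.0, 1.0)
```

The resulting nine points are what I currently feed to GL_LINE_STRIP.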
This approach worked 'well enough', since even my largest curves don't need much more than 8 steps to look curved. Still, one pixel thin curves are ugly.
I wanted to write a GLSL shader that would accept control points and a uniform thickness variable to, well, make the curves thicker. At first I thought about making a pixel shader only, that would color only pixels within a thickness / 2 distance of the curve, but doing so requires solving a third degree polynomial, and choosing between three solutions inside a shader doesn't look like the best idea ever.
I then tried to look up whether other people had already done it. I stumbled upon a white paper by Loop and Blinn from Microsoft Research where they show an easy way of filling the area under a curve. While it works well to that extent, I'm having trouble adapting the idea to drawing between two bounding curves.
Finding bounding curves that match a single curve is rather easy with a geometry shader. The problems come with the fragment shader that should fill the whole thing. Their approach uses the interpolated texture coordinates to determine if a fragment falls over or under the curve; but I couldn't figure a way to do it with two curves (I'm pretty new to shaders and not a maths expert, so the fact I didn't figure out how to do it certainly doesn't mean it's impossible).
My next idea was to separate the filled curve into triangles and only use the Bézier fragment shader on the outer parts. But for that I need to split the inner and outer curves at variable spots, and that means again that I have to solve the equation, which isn't really an option.
Are there viable algorithms for stroking quadratic Bézier curves with a shader?
This partly continues my previous answer, but is actually quite different since I got a couple of central things wrong in that answer.
To allow the fragment shader to only shade between two curves, two sets of "texture" coordinates are supplied as varying variables, to which the technique of Loop-Blinn is applied.
varying vec2 texCoord1, texCoord2;
varying float insideOutside;
varying vec4 col;

void main()
{
    float f1 = texCoord1[0] * texCoord1[0] - texCoord1[1];
    float f2 = texCoord2[0] * texCoord2[0] - texCoord2[1];
    float alpha = (sign(insideOutside * f1) + 1.0) * (sign(-insideOutside * f2) + 1.0) * 0.25;
    gl_FragColor = vec4(col.rgb, col.a * alpha);
}
So far, easy. The hard part is setting up the texture coordinates in the geometry shader. Loop-Blinn specifies them for the three vertices of the control triangle, and they are interpolated appropriately across the triangle. But, here we need to have the same interpolated values available while actually rendering a different triangle.
The solution to this is to find the linear function mapping from (x,y) coordinates to the interpolated/extrapolated values. Then, these values can be set for each vertex while rendering a triangle. Here's the key part of my code for this part.
vec2[3] tex = vec2[3]( vec2(0,0), vec2(0.5,0), vec2(1,1) );
mat3 uvmat;
uvmat[0] = vec3(pos2[0].x, pos2[1].x, pos2[2].x);
uvmat[1] = vec3(pos2[0].y, pos2[1].y, pos2[2].y);
uvmat[2] = vec3(1, 1, 1);
mat3 uvInv = inverse(transpose(uvmat));
vec3 uCoeffs = vec3(tex[0][0], tex[1][0], tex[2][0]) * uvInv;
vec3 vCoeffs = vec3(tex[0][1], tex[1][1], tex[2][1]) * uvInv;

float[3] uOther, vOther;
for (int i = 0; i < 3; i++) {
    uOther[i] = dot(uCoeffs, vec3(pos1[i].xy, 1));
    vOther[i] = dot(vCoeffs, vec3(pos1[i].xy, 1));
}

insideOutside = 1;
for (int i = 0; i < gl_VerticesIn; i++) {
    gl_Position = gl_ModelViewProjectionMatrix * pos1[i];
    texCoord1 = tex[i];
    texCoord2 = vec2(uOther[i], vOther[i]);
    EmitVertex();
}
EndPrimitive();
Here pos1 and pos2 contain the coordinates of the two control triangles. This part renders the triangle defined by pos1, but with texCoord2 set to the translated values from the pos2 triangle. Then the pos2 triangle needs to be rendered, similarly. Then the gap between these two triangles at each end needs to filled, with both sets of coordinates translated appropriately.
The calculation of the matrix inverse requires either GLSL 1.50 or it needs to be coded manually. It would be better to solve the equation for the translation without calculating the inverse. Either way, I don't expect this part to be particularly fast in the geometry shader.
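The mapping trick - finding the linear function that extrapolates one triangle's texture coordinates to arbitrary (x, y) positions - can be verified on the CPU. Here is a Python sketch using Cramer's rule instead of a matrix inverse; the Loop-Blinn coordinates (0,0), (0.5,0), (1,1) match the geometry shader above, while the triangle vertices and helper names are mine:

```python
def det3(m):
    """Determinant of a 3x3 matrix given as row-major nested lists."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def plane_coeffs(tri, values):
    """Coefficients (a, b, c) of f(x, y) = a*x + b*y + c with f = values at tri."""
    m = [[x, y, 1.0] for (x, y) in tri]
    d = det3(m)
    coeffs = []
    for j in range(3):  # Cramer's rule: replace one column at a time
        mj = [row[:] for row in m]
        for i in range(3):
            mj[i][j] = values[i]
        coeffs.append(det3(mj) / d)
    return coeffs

# Control triangle pos2 and its Loop-Blinn coordinates (0,0), (0.5,0), (1,1):
tri2 = [(0.0, 0.0), (2.0, 1.0), (4.0, 4.0)]
u_coeffs = plane_coeffs(tri2, [0.0, 0.5, 1.0])
v_coeffs = plane_coeffs(tri2, [0.0, 0.0, 1.0])

def tex_at(p):
    """Extrapolated (u, v) for any vertex p of the other triangle (pos1)."""
    return (u_coeffs[0] * p[0] + u_coeffs[1] * p[1] + u_coeffs[2],
            v_coeffs[0] * p[0] + v_coeffs[1] * p[1] + v_coeffs[2])
```

Evaluating tex_at at the vertices of tri2 reproduces the original texture coordinates exactly, which is the property the shader relies on when assigning texCoord2 to the other triangle's vertices.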
You should be able to use the technique of Loop and Blinn from the paper you mentioned.
Basically you'll need to offset each control point in the normal direction, both ways, to get the control points for two curves (inner and outer). Then follow the technique in Section 3.1 of Loop and Blinn - this breaks up sections of the curve to avoid triangle overlaps, and then triangulates the main part of the interior (note that this part requires the CPU). Finally, these triangles are filled, and the small curved parts outside of them are rendered on the GPU using Loop and Blinn's technique (at the start and end of Section 3).
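A rough sketch of the control-point offsetting step in Python (this is an approximation: the exact offset of a Bézier curve is not itself a Bézier, and the middle point really wants the intersection of the two offset tangent lines; averaging the end normals, as below, is a simplification that works for modest thickness):

```python
import math

def offset_quadratic(p0, p1, p2, d):
    """Offset the control points of a quadratic Bezier by distance d along
    approximate curve normals, giving a rough inner or outer boundary curve."""
    def unit_normal(a, b):
        tx, ty = b[0] - a[0], b[1] - a[1]
        length = math.hypot(tx, ty)
        return (-ty / length, tx / length)  # left-hand perpendicular to a->b

    n0 = unit_normal(p0, p1)  # normal at t=0: the tangent there is P0->P1
    n2 = unit_normal(p1, p2)  # normal at t=1: the tangent there is P1->P2
    # Middle control point: average of the two end normals (simplification).
    sx, sy = n0[0] + n2[0], n0[1] + n2[1]
    s = math.hypot(sx, sy)
    nm = (sx / s, sy / s)

    move = lambda p, n: (p[0] + d * n[0], p[1] + d * n[1])
    return move(p0, n0), move(p1, nm), move(p2, n2)

# Outer and inner boundary curves for a symmetric arch, half-thickness 0.1:
outer = offset_quadratic((0.0, 0.0), (1.0, 1.0), (2.0, 0.0), 0.1)
inner = offset_quadratic((0.0, 0.0), (1.0, 1.0), (2.0, 0.0), -0.1)
```

The two offset curves then become the inputs to the triangulation and Loop-Blinn fill described above.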
An alternative technique that may work for you is described here:
Thick Bezier Curves in OpenGL
EDIT:
Ah, you want to avoid even the CPU triangulation - I should have read more closely.
One issue you have is the interface between the geometry shader and the fragment shader - the geometry shader will need to generate primitives (most likely triangles) that are then individually rasterized and filled via the fragment program.
In your case with constant thickness I think quite a simple triangulation will work - using Loop and Blinn for all the "curved bits". When the two control triangles don't intersect it's easy. When they do, the part outside the intersection is easy. So the only hard part is within the intersection (which should be a triangle).
Within the intersection you want to shade a pixel only if both control triangles lead to it being shaded via Loop and Blinn. So the fragment shader needs to be able to do texture lookups for both triangles. One can be as standard, and you'll need to add a vec2 varying variable for the second set of texture coordinates, which you'll need to set appropriately for each vertex of the triangle. As well you'll need a uniform sampler2D variable for the texture, which you can then sample via texture2D. Then you just shade fragments that satisfy the checks for both control triangles (within the intersection).
I think this works in every case, but it's possible I've missed something.
I don't know exactly how to solve this, but it's very interesting. I think you'll need every programmable stage of the GPU pipeline:
Vertex shader
Feed an ordinary line of points to your vertex shader, and let the vertex shader displace the points onto the Bézier curve.
Geometry shader
Let your geometry shader create an extra point per vertex:
foreach (point p in bezierCurve)
    new point(p + (0, thickness, 0)) // in tangent with p1-p2
Fragment shader
To stroke your Bézier with a special stroke, you can use a texture with an alpha channel. Check the alpha value: if it's zero, discard the fragment. This way you can still make the result look like a solid line instead of a half-transparent one, and you can apply patterns via the alpha channel.
I hope this helps you on your way. You will have to figure a lot out yourself, but I think the geometry shader will speed up your Bézier rendering.
Still, for the stroking I would stick with my choice of creating a GL_QUAD_STRIP and an alpha-channel texture.