GLSL 2D Rounded corners - opengl

I want to add some black outline to my game screen to make it look like the corners are rounded.
This is the effect I want to achieve:
I figured this effect was probably quite easy to create using a shader, instead of drawing a giant bitmap on top of everything.
Can someone help me with the GLSL shader code for this effect? I have 0 experience with shaders and was unable to find anything like this on the internet.

I've accidentally found a nice solution for this. Not exactly what you've asked for, but in fact it looks even better.
// RESOLUTION is a vec2 with your window size in pixels.
vec2 pos = fragCoord.xy / RESOLUTION;
// Adjust .2 (first pow() argument) below to change frame thickness.
if (pos.x * pos.y * (1.-pos.x) * (1.-pos.y) < pow(.2,4.))
fragColor = vec4(0,0,0,1);
It gives the following result:
If you don't like those thin lines, you can remove them just by upscaling the image. It can be done by adding this line:
// The .985 is 1/scale_factor. You can try to change it and see how it works.
// It needs to be adjusted if you change frame thickness.
pos = (pos - .5) * .985 + .5;
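The snippets above are written in Shadertoy style (fragCoord, fragColor). For reference, a minimal complete mainImage combining both pieces might look like this, with Shadertoy's iResolution standing in for RESOLUTION and a plain white image standing in for the game screen:
void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    // Stand-in for your scene color; plain white for the demo.
    fragColor = vec4(1.0);

    vec2 pos = fragCoord.xy / iResolution.xy;
    // Upscale slightly to hide the thin lines (.985 is 1/scale_factor).
    pos = (pos - .5) * .985 + .5;
    // Adjust .2 (first pow() argument) to change frame thickness.
    if (pos.x * pos.y * (1. - pos.x) * (1. - pos.y) < pow(.2, 4.))
        fragColor = vec4(0.0, 0.0, 0.0, 1.0);
}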
While this effect looks good, it may be smarter to add just a faint shadow instead.
It's easy to implement using the same equation: pos.x * pos.y * (1.-pos.x) * (1.-pos.y)
The value of it ranges from 0.0 at window edges to 0.5^4 in the center.
You can use some simple math to make a shadow that gets thicker toward the window edges.
Here is an example of how it may look.
(A screenshot from Duality, my entry for Ludum Dare 35.)

Thanks to @HolyBlackCat my shader now works. I've improved the performance and made it look smoother.
varying vec4 v_color;
varying vec2 v_texCoord0;
uniform vec2 u_resolution;
uniform vec2 u_screenOffset;
uniform sampler2D u_sampler2D;
// Vignette threshold; adjust 0.2 to change how far the darkening reaches.
const float max = pow(0.2, 4.0);
void main()
{
vec2 pos = (gl_FragCoord.xy - u_screenOffset) / u_resolution;
float vignette = pos.x * pos.y * (1.-pos.x) * (1.-pos.y);
vec4 color = texture2D(u_sampler2D, v_texCoord0) * v_color;
color.rgb = color.rgb * smoothstep(0.0, max, vignette);
gl_FragColor = color;
}
Set the uniforms as follows in the resize event of libGDX:
shader.begin();
shader.setUniformf("u_resolution", viewport.getScreenWidth(), viewport.getScreenHeight());
shader.setUniformf("u_screenOffset", viewport.getScreenX(), viewport.getScreenY());
shader.end();
This will make sure the shader works with viewports (only tested with FitViewport) as well.

Related

Fade texture inner borders to transparent in LibGDX using openGL shaders (glsl)

I'm currently working on a tile game in LibGDX and I'm trying to get a "fog of war" effect by obscuring unexplored tiles. What I currently have is a dynamically generated black texture the size of the screen that covers only unexplored tiles, leaving the rest of the background visible. This is an example of the fog texture rendered on top of a white background:
What I'm now trying to achieve is to dynamically fade the inner borders of this texture so it looks more like a fog that slowly thickens, instead of just a bunch of black boxes placed on top of the background.
Googling the problem, I found out I could use shaders to do this, so I tried to learn some GLSL (I'm just getting started with shaders) and came up with this shader:
VertexShader:
//attributes passed from openGL
attribute vec3 a_position;
attribute vec2 a_texCoord0;
//variables visible from java
uniform mat4 u_projTrans;
//variables shared between fragment and vertex shader
varying vec2 v_texCoord0;
void main() {
v_texCoord0 = a_texCoord0;
gl_Position = u_projTrans * vec4(a_position, 1.0);
}
FragmentShader:
//variables shared between fragment and vertex shader
varying vec2 v_texCoord0;
//variables visible from java
uniform sampler2D u_texture;
uniform vec2 u_textureSize;
uniform int u_length;
void main() {
vec4 texColor = texture2D(u_texture, v_texCoord0);
vec2 step = 1.0 / u_textureSize;
if(texColor.a > 0.0) {
int maxNearPixels = (u_length * 2 + 1) * (u_length * 2 + 1) - 1;
for(int i = 0; i <= u_length; i++) {
for(int j = 0; j <= u_length; j++) {
if(i != 0 || j != 0) {
texColor.a -= (1.0 - texture2D(u_texture, v_texCoord0 + vec2(step.x * float(i), step.y * float(j))).a) / float(maxNearPixels);
texColor.a -= (1.0 - texture2D(u_texture, v_texCoord0 + vec2(-step.x * float(i), step.y * float(j))).a) / float(maxNearPixels);
texColor.a -= (1.0 - texture2D(u_texture, v_texCoord0 + vec2(step.x * float(i), -step.y * float(j))).a) / float(maxNearPixels);
texColor.a -= (1.0 - texture2D(u_texture, v_texCoord0 + vec2(-step.x * float(i), -step.y * float(j))).a) / float(maxNearPixels);
}
}
}
}
gl_FragColor = texColor;
}
This is the result I got setting a length of 20:
So the shader I wrote kind of works, but it has terrible performance because it's O(n^2), where n is the length of the fade in pixels (which can be quite high, like 60 or even 80). It also has some problems: the edges are still a bit too sharp (I'd like a smoother transition), and some corners of the border are less faded than others (I'd like the fade to be uniform everywhere).
I'm a bit lost at this point: is there anything I can do to make it better and faster? Like I said, I'm new to shaders, so is this even the right way to use them?
As others mentioned in the comments, instead of blurring in screen space you should filter in tile space, ideally exploiting the GPU's bilinear filtering. Let's go through it with images.
First, define a texture in which each pixel corresponds to a single tile, black or white depending on the fog at that tile. Here is such a texture blown up:
After applying the screen-to-tiles coordinate transformation and sampling that texture with GL_NEAREST interpolation, we get a blocky result similar to what you have:
float t = texture2D(u_tiles, M*uv).r;
gl_FragColor = vec4(t,t,t,1.0);
If instead of GL_NEAREST we switch to GL_LINEAR, we get a somewhat better result:
This still looks a little blocky. To improve on that we can apply a smoothstep:
float t = texture2D(u_tiles, M*uv).r;
t = smoothstep(0.0, 1.0, t);
gl_FragColor = vec4(t,t,t,1.0);
Or here is a version with a linear shade-mapping function:
float t = texture2D(u_tiles, M*uv).r;
t = clamp((t-0.5)*1.5 + 0.5, 0.0, 1.0);
gl_FragColor = vec4(t,t,t,1.0);
Note: these images were generated within a gamma-correct pipeline (i.e. sRGB framebuffer enabled). This is one of those few scenarios, however, where ignoring gamma may give better results, so you're welcome to experiment.
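Putting the pieces together, a minimal fog fragment shader for this approach might look like the sketch below. It assumes a one-texel-per-tile texture bound as u_tiles with GL_LINEAR filtering and a u_screenToTiles scale that maps screen UVs into that texture's UV space (both names are placeholders, not from the answer above); the result is written as an alpha-blended black overlay rather than the grayscale used for the illustration images:
varying vec2 v_texCoord0;     // screen-space UV in [0, 1]
uniform sampler2D u_tiles;    // 1 texel per tile: 1.0 = explored, 0.0 = fog (placeholder name)
uniform vec2 u_screenToTiles; // scale from screen UV to tile-texture UV (placeholder name)

void main() {
    // GL_LINEAR filtering already interpolates between neighboring tiles.
    float t = texture2D(u_tiles, v_texCoord0 * u_screenToTiles).r;
    // Soften the transition a little more.
    t = smoothstep(0.0, 1.0, t);
    // Black fog whose opacity fades out where tiles are explored.
    gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0 - t);
}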

How to take a circle out of a shader?

I'm working on a game using GLSL shaders
I'm using Go with the library Pixel, it's a 2d game and there's no "camera" (I've had people suggest using a second camera to achieve this)
My current shader is just a basic grayscale shader
#version 330 core
in vec2 vTexCoords;
out vec4 fragColor;
uniform vec4 uTexBounds;
uniform sampler2D uTexture;
void main() {
// Get our current screen coordinate
vec2 t = (vTexCoords - uTexBounds.xy) / uTexBounds.zw;
// Sum our 3 color channels
float sum = texture(uTexture, t).r;
sum += texture(uTexture, t).g;
sum += texture(uTexture, t).b;
// Divide by 3, and set the output to the result
vec4 color = vec4( sum/3, sum/3, sum/3, 1.0);
fragColor = color;
}
I want to take a circle out of the shader's effect to show the color of objects, almost like light is shining on them.
This is an example of what I'm trying to achieve:
I can't really figure out what to search for to find a Shadertoy example or something that does this, but I've seen something similar before, so I'm pretty sure it's possible.
To restate: I basically just want to remove part of the shader's effect.
I'm not sure if using shaders is the best way to approach this; if there's another way, please let me know and I will remake the question.
You can easily extend this to use any arbitrary position as the "light."
Declare uniforms for the light's current location and a radius.
If the squared distance from that location to the current pixel is less than the radius squared, return the current color.
Otherwise, return its greyscale.
vec2 displacement = t - light_location;
float distanceSq = dot(displacement, displacement);
float radiusSq = radius * radius;
if(distanceSq < radiusSq) {
fragColor = texture(uTexture, t);
} else {
float sum = texture(uTexture, t).r;
sum += texture(uTexture, t).g;
sum += texture(uTexture, t).b;
float grey = sum / 3.0;
fragColor = vec4(grey, grey, grey, 1.0);
}
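For reference, here is a minimal complete version of the grayscale shader with the circular cut-out folded in. The uniforms uLight (the light position in the same normalized space as t) and uRadius are placeholder names I've introduced, not something provided by Pixel:
#version 330 core

in vec2 vTexCoords;
out vec4 fragColor;

uniform vec4 uTexBounds;
uniform sampler2D uTexture;
uniform vec2 uLight;   // light position in normalized [0, 1] screen space (placeholder)
uniform float uRadius; // radius of the colored circle in the same space (placeholder)

void main() {
    // Get our current screen coordinate
    vec2 t = (vTexCoords - uTexBounds.xy) / uTexBounds.zw;
    vec4 color = texture(uTexture, t);

    vec2 d = t - uLight;
    if (dot(d, d) < uRadius * uRadius) {
        // Inside the circle: keep the original color
        fragColor = color;
    } else {
        // Outside: average the channels to get grayscale
        float grey = (color.r + color.g + color.b) / 3.0;
        fragColor = vec4(vec3(grey), 1.0);
    }
}
Swapping the hard if for a smoothstep on the distance gives the circle a soft edge instead of a sharp cut.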

OpenGL screen postprocessing effects [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 6 years ago.
I've built a nice music visualizer using OpenGL in Java. It already looks pretty neat, but I've thought about adding some post-processing to it. At the moment it looks like this:
There is already a framebuffer for recording the output, so I have the texture already available. Now I wonder if someone has an idea for some effects. The current Fragment shader looks like this:
#version 440
in vec3 position_FS_in;
in vec2 texCoords_FS_in;
out vec4 out_Color;
//the texture of the last frame; for now exactly the same as the output
uniform sampler2D textureSampler;
//available data:
//the average height of the lines seen in the screenshot, ranging from 0 to 1
uniform float mean;
//the array of heights of the lines seen in the screenshot
uniform float music[512];
void main()
{
vec4 texColor = texture(textureSampler, texCoords_FS_in);
//insert post processing here
out_Color = texColor;
}
Most post-processing effects vary over time, so it is common to have a uniform that tracks elapsed time. For example, a "wavy" effect might be created by offsetting the texture coordinates horizontally by something like sin(elapsedSec * wavyRadsPerSec + uv.y * wavyCyclesInFrame * 2.0 * PI) * wavyAmplitude, where uv.y is the fragment's vertical position in 0..1.
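As a sketch, here is a reworked main() for the shader above; elapsedSec and the wavy* constants are assumptions and not part of the original shader:
uniform float elapsedSec; // assumed addition, not in the original shader

const float PI = 3.14159265;
const float wavyRadsPerSec = 2.0;
const float wavyCyclesInFrame = 4.0;
const float wavyAmplitude = 0.01;

void main()
{
    vec2 uv = texCoords_FS_in;
    // shift each row left/right with a sine wave that scrolls over time
    uv.x += sin(elapsedSec * wavyRadsPerSec + uv.y * wavyCyclesInFrame * 2.0 * PI) * wavyAmplitude;
    out_Color = texture(textureSampler, uv);
}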
Some "postprocessing" effects can be done very simply, for example, instead of clearing the back buffer with glClear you can blend a nearly-black transparent quad over the whole screen. This will create a persistence effect where the past frames fade to black behind the current one.
A directional blur can be implemented by taking multiple samples at various distances from each point, and weighting the closer ones more strongly and summing. If you track the motion of a point relative to the camera position and orientation, it can be made into a motion blur implementation.
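A rough sketch of such a blur as a helper function (the eight-tap count, the weighting and the blurDir parameter are my own choices, not from the answer above):
// Sample along blurDir, weighting closer samples more strongly.
vec3 directionalBlur(sampler2D tex, vec2 uv, vec2 blurDir)
{
    vec3 sum = vec3(0.0);
    float weightSum = 0.0;
    for (int i = 0; i < 8; ++i) {
        float w = 1.0 - float(i) / 8.0;           // closer samples weigh more
        sum += texture(tex, uv + blurDir * float(i)).rgb * w;
        weightSum += w;
    }
    return sum / weightSum;
}
For motion blur, blurDir would come from the point's movement relative to the camera between frames.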
Color transformations are very simple as well: treat the RGB channels as though they were the XYZ of a vector and do interesting transformations on them. Sepia and "psychedelic" colors can be produced this way.
You might find it helpful to convert the color into something like HSV, do transformations in that representation, and convert it back to RGB for the framebuffer write. You could affect hue and saturation, for example fading to black and white or smoothly intensifying the color saturation.
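For the HSV route, a widely used pair of RGB/HSV conversion helpers is shown below (these are standard snippets, not from the answer above); the commented lines at the end sketch a simple hue rotation, here driven by the mean uniform as an example:
// Standard RGB <-> HSV helpers; hue, saturation and value are all in [0, 1].
vec3 rgb2hsv(vec3 c)
{
    vec4 K = vec4(0.0, -1.0 / 3.0, 2.0 / 3.0, -1.0);
    vec4 p = mix(vec4(c.bg, K.wz), vec4(c.gb, K.xy), step(c.b, c.g));
    vec4 q = mix(vec4(p.xyw, c.r), vec4(c.r, p.yzx), step(p.x, c.r));
    float d = q.x - min(q.w, q.y);
    float e = 1.0e-10;
    return vec3(abs(q.z + (q.w - q.y) / (6.0 * d + e)), d / (q.x + e), q.x);
}

vec3 hsv2rgb(vec3 c)
{
    vec4 K = vec4(1.0, 2.0 / 3.0, 1.0 / 3.0, 3.0);
    vec3 p = abs(fract(c.xxx + K.xyz) * 6.0 - K.www);
    return c.z * mix(K.xxx, clamp(p - K.xxx, 0.0, 1.0), c.y);
}

// Example use inside main():
// vec3 hsv = rgb2hsv(texColor.rgb);
// hsv.x = fract(hsv.x + mean);   // rotate the hue
// out_Color = vec4(hsv2rgb(hsv), texColor.a);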
A "smearing into the distance" effect can be done by blending the framebuffer onto the framebuffer, by reading from texcoord that is slightly scaled up from gl_FragCoord, like texture(textureSampler, (gl_FragCoord * 1.01).xy).
On that note, you should not need those texture coordinate attributes, you can use gl_FragCoord to find out where you are on the screen, and use (an adjusted copy of) that for your texture call.
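A minimal sketch of that feedback read, written as a replacement main() for the shader above; lastFrameSampler and uResolution are assumed uniforms that are not in the original shader:
uniform sampler2D lastFrameSampler; // previous frame's texture (assumption)
uniform vec2 uResolution;           // window size in pixels (assumption)

void main()
{
    vec2 uv = gl_FragCoord.xy / uResolution;   // 0..1 screen coordinate
    vec2 zoomedUv = (uv - 0.5) * 0.99 + 0.5;   // scale up slightly around the center
    vec4 current  = texture(textureSampler, uv);
    vec4 previous = texture(lastFrameSampler, zoomedUv);
    // keep a dimmed copy of the zoomed previous frame underneath the current image
    out_Color = max(current, previous * 0.95);
}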
Have a look at a few shaders on GLSLSandbox for inspiration.
I have done a simple emulation of the trail effect on GLSLSandbox. In the real thing the loop would not exist; it would take a single sample at a small offset, and the "loop" effect would happen by itself because the input includes the output of the last frame. To emulate having a texture of the last frame, I simply made it so I can calculate what the other pixel would be; in the real trail effect you would read the last-frame texture instead of calling something like pixelAt.
You can use your wave data instead of my faked sine wave: use uv.x, scaled appropriately, to select an index into the array.
GLSL
#ifdef GL_ES
precision mediump float;
#endif
uniform float time;
uniform vec2 mouse;
uniform vec2 resolution;
const float PI = 3.14159265358979323;// lol ya right, but hey, I memorized it
vec4 pixelAt(vec2 uv)
{
vec4 result;
float thickness = 0.05;
float movementSpeed = 0.4;
float wavesInFrame = 5.0;
float waveHeight = 0.3;
float point = (sin(time * movementSpeed +
uv.x * wavesInFrame * 2.0 * PI) *
waveHeight);
const float sharpness = 1.40;
float dist = 1.0 - abs(clamp((point - uv.y) / thickness, -1.0, 1.0));
float val;
float brightness = 0.8;
// All of the threads go the same way so this if is easy
if (sharpness != 1.0)
dist = pow(dist, sharpness);
dist *= brightness;
result = vec4(vec3(0.3, 0.6, 0.3) * dist, 1.0);
return result;
}
void main( void ) {
vec2 fc = gl_FragCoord.xy;
vec2 uv = fc / resolution - 0.5;
vec4 pixel;
pixel = pixelAt(uv);
// I can't really do postprocessing in this shader, so instead of
// doing the texturelookup, I restructured it to be able to compute
// what the other pixel might be. The real code would lookup a texel
// and there would be one sample at a small offset, the feedback
// replaces the loop.
const float e = 64.0, s = 1.0 / e;
for (float i = 0.0; i < e; ++i) {
pixel += pixelAt(uv + (uv * (i*s))) * (0.3-i*s*0.325);
}
pixel /= 1.0;
gl_FragColor = pixel;
}

Can anyone explain these snippets related to WebGL

I am referring to this link for learning how to render a texture in WebGL.
I have some doubts, as it is not very easy for a beginner to understand.
What do these snippets mean in GLSL:
vec2 zeroToOne = a_position / u_resolution;
vec2 zeroToTwo = zeroToOne * 2.0;
vec2 clipSpace = zeroToTwo - 1.0;
Also, I don't want to fill the entire canvas if my image is bigger. I want to render all textures at 512 × 384 (4:3); how do I do that by modifying the vertices?
Since I wrote the sample you linked to, I'm curious how I can improve the explanation already on that site.
The sample you linked to is from this page.
That page says right at the top
This is a continuation from WebGL Fundamentals. If you haven't read that I'd suggest going there first
That page says
WebGL only cares about 2 things. Clipspace coordinates and colors. Your job as a programmer using WebGL is to provide WebGL with those 2 things. You provide 2 "shaders" to do this. A Vertex shader which provides the clipspace coordinates and a fragment shader that provides the color.
Clipspace coordinates always go from -1 to +1 no matter what size your canvas is
It then shows an example using clip space coordinates.
After that it says we'd probably rather work in pixels and shows a shader with comments that detail how to convert from pixels to clip space:
For 2D stuff you would probably rather work in pixels than clipspace so let's change the shader so we can supply rectangles in pixels and have it convert to clipspace for us. Here's the new vertex shader
attribute vec2 a_position;
uniform vec2 u_resolution;
void main() {
// convert the rectangle from pixels to 0.0 to 1.0
vec2 zeroToOne = a_position / u_resolution;
// convert from 0->1 to 0->2
vec2 zeroToTwo = zeroToOne * 2.0;
// convert from 0->2 to -1->+1 (clipspace)
vec2 clipSpace = zeroToTwo - 1.0;
gl_Position = vec4(clipSpace, 0, 1);
}
In fact, the sample you linked to has those exact same comments in the code.
I'd love to hear any ideas on how I can make that clearer.
This code converts a_position from pixel space (0 to u_resolution) into the -1..+1 clip-space range.
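As a quick worked example with the shader quoted above: for u_resolution = (512, 384) and a_position = (256, 192), zeroToOne = (0.5, 0.5), zeroToTwo = (1.0, 1.0) and clipSpace = (0.0, 0.0), the center of the canvas; a_position = (0, 0) ends up at clipSpace = (-1, -1), one corner of the canvas.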

How to improve the quality of my shadows?

First, a screenshot:
As you can see, the tops of the shadows look OK (if you look at the dirt where the tops of the shrubs are projected, it looks more or less correct), but the base of the shadows is way off.
The bottom left corner of the image shows the shadow map I computed. It's a depth-map from the POV of the light, which is also where my character is standing.
Here's another shot, from a different angle:
Any ideas what might be causing it to come out like this? Is the depth of the shrub face too similar to the depth of the ground directly behind it, perhaps? If so, how do I get around that?
I'll post the fragment shader below, leave a comment if there's anything else you need to see.
Fragment Shader
#version 330
in vec2 TexCoord0;
in vec3 Tint0;
in vec4 WorldPos;
in vec4 LightPos;
out vec4 FragColor;
uniform sampler2D TexSampler;
uniform sampler2D ShadowSampler;
uniform bool Blend;
const int MAX_LIGHTS = 16;
uniform int NumLights;
uniform vec3 Lights[MAX_LIGHTS];
const float lightRadius = 100;
float distSq(vec3 v1, vec3 v2) {
vec3 d = v1-v2;
return dot(d,d);
}
float CalcShadowFactor(vec4 LightSpacePos)
{
vec3 ProjCoords = LightSpacePos.xyz / LightSpacePos.w;
vec2 UVCoords;
UVCoords.x = 0.5 * ProjCoords.x + 0.5;
UVCoords.y = 0.5 * ProjCoords.y + 0.5;
float Depth = texture(ShadowSampler, UVCoords).x;
if (Depth < (ProjCoords.z + 0.0001))
return 0.5;
else
return 1.0;
}
void main()
{
float scale;
FragColor = texture(TexSampler, TexCoord0.xy);
// transparency
if(!Blend && FragColor.a < 0.5) discard;
// biome blending
FragColor *= vec4(Tint0, 1.0f);
// fog
float depth = gl_FragCoord.z / gl_FragCoord.w;
if(depth>20) {
scale = clamp(1.2-15/(depth-19),0,1);
vec3 destColor = vec3(0.671,0.792,1.00);
vec3 colorDist = destColor - FragColor.xyz;
FragColor.xyz += colorDist*scale;
}
// lighting
scale = 0.30;
for(int i=0; i<NumLights; ++i) {
float dist = distSq(WorldPos.xyz, Lights[i]);
if(dist < lightRadius) {
scale += (lightRadius-dist)/lightRadius;
}
}
scale *= CalcShadowFactor(LightPos);
FragColor.xyz *= clamp(scale,0,1.5);
}
I'm fairly certain this is an offset problem. My shadows look to be about 1 block off, but I can't figure out how to shift them, nor what's causing them to be off.
Looks like "depth map bias" actually:
Not exactly sure how to set this... do I just call glPolygonOffset before rendering the scene? Will try it...
Setting glPolygonOffset to 100,100 amplifies the problem:
I set this just before rendering the shadow map:
GL.Enable(EnableCap.PolygonOffsetFill);
GL.PolygonOffset(100f, 100.0f);
And then disabled it again. I'm not sure if that's how I'm supposed to do it. Increasing the values amplifies the problem... decreasing them to below 1 doesn't seem to improve it though.
Notice also how the shadow map in the lower left changed.
Vertex Shader
#version 330
layout(location = 0) in vec3 Position;
layout(location = 1) in vec2 TexCoord;
layout(location = 2) in mat4 Transform;
layout(location = 6) in vec4 TexSrc; // x=x, y=y, z=width, w=height
layout(location = 7) in vec3 Tint; // x=R, y=G, z=B
uniform mat4 ProjectionMatrix;
uniform mat4 LightMatrix;
out vec2 TexCoord0;
out vec3 Tint0;
out vec4 WorldPos;
out vec4 LightPos;
void main()
{
WorldPos = Transform * vec4(Position, 1.0);
gl_Position = ProjectionMatrix * WorldPos;
LightPos = LightMatrix * WorldPos;
TexCoord0 = vec2(TexSrc.x+TexCoord.x*TexSrc.z, TexSrc.y+TexCoord.y*TexSrc.w);
Tint0 = Tint;
}
While world-aligned cascaded shadow maps are great and used in most new games out there, it's not related to why your shadows have a strange offset with your current implementation.
Actually, it looks like you're sampling the correct texels from the shadow map: the shadows occur exactly where you'd expect them to be. It's your comparison that is off.
I've added some comments to your code:
vec3 ProjCoords = LightSpacePos.xyz / LightSpacePos.w; // So far so good...
vec2 UVCoords;
UVCoords.x = 0.5 * ProjCoords.x + 0.5; // Right, you're converting X and Y from clip
UVCoords.y = 0.5 * ProjCoords.y + 0.5; // space to texel space...
float Depth = texture(ShadowSampler, UVCoords).x; // I expect we sample a value in [0,1]
if (Depth < (ProjCoords.z + 0.0001)) // Uhoh, we never converted Z's clip space to [0,1]
return 0.5;
else
return 1.0;
So, I suspect you want to compare to ProjCoords.z * 0.5 + 0.5:
if (Depth < (ProjCoords.z * 0.5 + 0.5 + 0.0001))
return 0.5;
else
return 1.0;
Also, that bias factor makes me nervous. Better yet, just take it out for now and deal with it once you get the shadows appearing in the right spots:
const float bias = 0.0;
if (Depth < (ProjCoords.z * 0.5 + 0.5 + bias))
return 0.5;
else
return 1.0;
I might not be entirely right about how to transform ProjCoords.z to match the sampled value; however, this is likely the issue. Also, if you do move to cascaded shadow maps (I recommend world-aligned) I'd strongly recommend drawing frustums representing where each shadow map is viewing -- it makes debugging a whole lot easier.
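Putting those suggestions together, the whole function might look like this sketch (bias kept at zero for now, as suggested above):
float CalcShadowFactor(vec4 LightSpacePos)
{
    // Perspective divide, then map x, y and z from [-1, 1] clip space to [0, 1].
    vec3 ProjCoords = LightSpacePos.xyz / LightSpacePos.w;
    ProjCoords = ProjCoords * 0.5 + 0.5;

    float Depth = texture(ShadowSampler, ProjCoords.xy).x;

    const float bias = 0.0; // reintroduce a small bias only once the shadows line up
    return (Depth < ProjCoords.z + bias) ? 0.5 : 1.0;
}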
This is called the "deer in headlights" effect of buffer mapped shadows. There are a several ways to minimize this effect. Look for "light space shadow mapping".
NVidia OpenGL SDK has "cascaded shadow maps" example. You might want to check it out (haven't used it myself, though).
How to improve the quality of my shadows?
The problem could be caused by using an incorrect matrix while rendering the shadows. Your example doesn't demonstrate how the light matrices are set. By Murphy's law I'll have to assume the bug lies in this missing piece of code: since you decided that this part isn't important, it probably causes the problem. If the matrix used while testing the shadow is different from the matrix used to render the shadow map, you'll get exactly this problem.
I suggest forgetting about the whole Minecraft thing for a moment and playing around with shadows in a simple application. Make a standalone application with a floor plane and a rotating cube (or teapot or whatever you want), and debug shadow maps there until you get the hang of it. Since you're willing to throw a +100 bounty onto the question, you might as well post the complete code of your standalone sample here, if you still get the problem in the sample. Trying to stick technology you aren't familiar with into the middle of a working(?) engine isn't a good idea anyway. Take it slow, get used to the technique/technology/effect, then integrate it.