I've built a nice music visualizer using OpenGL in Java. It already looks pretty neat, but I've thought about adding some post-processing to it. At the moment it looks like this:
There is already a framebuffer for recording the output, so I have the texture already available. Now I wonder if someone has an idea for some effects. The current Fragment shader looks like this:
#version 440
in vec3 position_FS_in;
in vec2 texCoords_FS_in;
out vec4 out_Color;
//the texture of the last frame (for now, exactly the same as the output)
uniform sampler2D textureSampler;
//available data:
//the average height of the lines seen in the screenshot, ranging from 0 to 1
uniform float mean;
//the array of heights of the lines seen in the screenshot
uniform float music[512];
void main()
{
vec4 texColor = texture(textureSampler, texCoords_FS_in);
//insert post processing here
out_Color = texColor;
}
Most post processing effects vary with time, so it is common to have a uniform that varies with the passage of time. For example, a "wavy" effect might be created by offsetting texture coordinates using sin(elapsedSec * wavyRadsPerSec + (PI * gl_FragCoord.y * 0.5 + 0.5) * wavyCyclesInFrame).
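A minimal sketch of such a wavy warp, assuming the host app supplies an elapsedSec uniform (not part of the question's shader) and using made-up tuning constants:
#version 440
in vec2 texCoords_FS_in;
out vec4 out_Color;
uniform sampler2D textureSampler;
uniform float elapsedSec; // assumed: seconds since start, set by the host application

const float PI = 3.14159265359;

void main()
{
    float wavyRadsPerSec    = 2.0;   // speed of the wave (assumed)
    float wavyCyclesInFrame = 3.0;   // how many waves fit vertically (assumed)
    float amplitude         = 0.01;  // horizontal displacement in UV units (assumed)

    vec2 uv = texCoords_FS_in;
    uv.x += amplitude * sin(elapsedSec * wavyRadsPerSec
                            + uv.y * wavyCyclesInFrame * 2.0 * PI);
    out_Color = texture(textureSampler, uv);
}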
Some "postprocessing" effects can be done very simply, for example, instead of clearing the back buffer with glClear you can blend a nearly-black transparent quad over the whole screen. This will create a persistence effect where the past frames fade to black behind the current one.
A directional blur can be implemented by taking multiple samples at various distances from each point, and weighting the closer ones more strongly and summing. If you track the motion of a point relative to the camera position and orientation, it can be made into a motion blur implementation.
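A hedged sketch of such a directional blur over the question's framebuffer texture; the direction, sample count, and weights are arbitrary choices, not anything prescribed:
#version 440
in vec2 texCoords_FS_in;
out vec4 out_Color;
uniform sampler2D textureSampler;

void main()
{
    vec2 blurDir = vec2(0.02, 0.0);          // assumed blur direction/length in UV units
    const int SAMPLES = 8;
    vec4 sum = vec4(0.0);
    float weightSum = 0.0;
    for (int i = 0; i < SAMPLES; ++i)
    {
        float t = float(i) / float(SAMPLES - 1); // 0 at this pixel, 1 at the far end
        float w = 1.0 - t;                       // nearer samples weigh more
        sum += texture(textureSampler, texCoords_FS_in - blurDir * t) * w;
        weightSum += w;
    }
    out_Color = sum / weightSum;
}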
Color transformations are very simple as well, simply treat the RGB as though they are the XYZ of a vector, and do interesting transformations on it. Sepia and "psychedelic" colors can be produced this way.
You might find it helpful to convert the color into something like HSV, do transformations on that representation, and convert it back to RGB for the framebuffer write. You could affect hue, saturation, for example, fading to black and white, or intensifying the color saturation smoothly.
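A sketch of the HSV route, using the widely circulated compact RGB/HSV conversion helpers; the saturation boost driven by the question's mean uniform is just an assumed example effect:
#version 440
in vec2 texCoords_FS_in;
out vec4 out_Color;
uniform sampler2D textureSampler;
uniform float mean; // average line height from the question, 0..1

// Compact RGB<->HSV conversions (common public-domain GLSL snippets).
vec3 rgb2hsv(vec3 c)
{
    vec4 K = vec4(0.0, -1.0 / 3.0, 2.0 / 3.0, -1.0);
    vec4 p = mix(vec4(c.bg, K.wz), vec4(c.gb, K.xy), step(c.b, c.g));
    vec4 q = mix(vec4(p.xyw, c.r), vec4(c.r, p.yzx), step(p.x, c.r));
    float d = q.x - min(q.w, q.y);
    float e = 1.0e-10;
    return vec3(abs(q.z + (q.w - q.y) / (6.0 * d + e)), d / (q.x + e), q.x);
}

vec3 hsv2rgb(vec3 c)
{
    vec4 K = vec4(1.0, 2.0 / 3.0, 1.0 / 3.0, 3.0);
    vec3 p = abs(fract(c.xxx + K.xyz) * 6.0 - K.www);
    return c.z * mix(K.xxx, clamp(p - K.xxx, 0.0, 1.0), c.y);
}

void main()
{
    vec3 rgb = texture(textureSampler, texCoords_FS_in).rgb;
    vec3 hsv = rgb2hsv(rgb);
    // Assumed effect: louder music (higher mean) -> more saturated colors.
    hsv.y = clamp(hsv.y * (0.5 + mean), 0.0, 1.0);
    out_Color = vec4(hsv2rgb(hsv), 1.0);
}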
A "smearing into the distance" effect can be done by blending the framebuffer onto the framebuffer, by reading from texcoord that is slightly scaled up from gl_FragCoord, like texture(textureSampler, (gl_FragCoord * 1.01).xy).
On that note, you should not need those texture coordinate attributes, you can use gl_FragCoord to find out where you are on the screen, and use (an adjusted copy of) that for your texture call.
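A sketch of that feedback smear written purely against gl_FragCoord (textureSize avoids needing an extra resolution uniform; the 1.01 scale and 0.97 fade are assumed tuning values):
#version 440
out vec4 out_Color;
uniform sampler2D textureSampler; // texture containing the last frame

void main()
{
    vec2 res = vec2(textureSize(textureSampler, 0));
    vec2 uv = gl_FragCoord.xy / res;          // 0..1 screen position
    vec2 smearUV = (uv - 0.5) * 1.01 + 0.5;   // slightly scaled up around the center
    vec4 current  = texture(textureSampler, uv);
    vec4 previous = texture(textureSampler, smearUV) * 0.97; // fade the trail a bit
    out_Color = max(current, previous);       // keep whichever is brighter
}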
Have a look at a few shaders on GLSLSandbox for inspiration.
I have done a simple emulation of the trail effect on GLSLSandbox. In the real thing, the loop would not exist; it would take one sample at a small offset, and the "loop" effect would happen by itself because its input includes the output of the last frame. To emulate having a texture of the last frame, I simply made it possible to calculate what the other pixel is. When doing the trail effect for real, you would read the last-frame texture instead of calling something like pixelAt.
You can use your wave data instead of my faked sine wave. Use uv.x to select an index, scaled appropriately (see the sketch after the shader below).
GLSL
#ifdef GL_ES
precision mediump float;
#endif
uniform float time;
uniform vec2 mouse;
uniform vec2 resolution;
const float PI = 3.14159265358979323;// lol ya right, but hey, I memorized it
vec4 pixelAt(vec2 uv)
{
vec4 result;
float thickness = 0.05;
float movementSpeed = 0.4;
float wavesInFrame = 5.0;
float waveHeight = 0.3;
float point = (sin(time * movementSpeed +
uv.x * wavesInFrame * 2.0 * PI) *
waveHeight);
const float sharpness = 1.40;
float dist = 1.0 - abs(clamp((point - uv.y) / thickness, -1.0, 1.0));
float val;
float brightness = 0.8;
// All of the threads go the same way so this if is easy
if (sharpness != 1.0)
dist = pow(dist, sharpness);
dist *= brightness;
result = vec4(vec3(0.3, 0.6, 0.3) * dist, 1.0);
return result;
}
void main( void ) {
vec2 fc = gl_FragCoord.xy;
vec2 uv = fc / resolution - 0.5;
vec4 pixel;
pixel = pixelAt(uv);
// I can't really do postprocessing in this shader, so instead of
// doing the texturelookup, I restructured it to be able to compute
// what the other pixel might be. The real code would lookup a texel
// and there would be one sample at a small offset, the feedback
// replaces the loop.
const float e = 64.0, s = 1.0 / e;
for (float i = 0.0; i < e; ++i) {
pixel += pixelAt(uv + (uv * (i*s))) * (0.3-i*s*0.325);
}
pixel /= 1.0;
gl_FragColor = pixel;
}
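To feed the real spectrum in instead of the faked sine, something along these lines could replace the sin-based point inside pixelAt in your real #version 440 shader (a sketch; it assumes music[] holds 512 heights in the 0..1 range as in the question, and that uv.x runs from -0.5 to 0.5 as in the sandbox shader):
uniform float music[512]; // heights of the lines, 0..1, as in the question

float waveHeightAt(vec2 uv)
{
    float waveHeight = 0.3;
    // Pick the array entry under this pixel.
    int index = int(clamp((uv.x + 0.5) * 512.0, 0.0, 511.0));
    return music[index] * waveHeight;
}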
I'm currently in the process of writing a Voxel Cone Tracing rendering engine with C++ and OpenGL. Everything is going rather well, except that I'm getting rather strange results for wider cone angles.
Right now, for the purposes of testing, all I am doing is shoot a single cone perpendicular to the surface (i.e. along the fragment normal). I am only calculating 'indirect light'. For reference, here is the rather simple fragment shader I'm using:
#version 450 core
out vec4 FragColor;
in vec3 pos_fs;
in vec3 nrm_fs;
uniform sampler3D tex3D;
vec3 indirectDiffuse();
vec3 voxelTraceCone(const vec3 from, vec3 direction);
void main()
{
FragColor = vec4(0, 0, 0, 1);
FragColor.rgb += indirectDiffuse();
}
vec3 indirectDiffuse(){
// singular cone in direction of the normal
vec3 ret = voxelTraceCone(pos_fs, nrm_fs);
return ret;
}
vec3 voxelTraceCone(const vec3 origin, vec3 dir) {
float max_dist = 1.0f;
dir = normalize(dir);
float current_dist = 0.01f;
float apperture_angle = 0.01f; //Angle in Radians.
vec3 color = vec3(0.0f);
float occlusion = 0.0f;
float vox_size = 128.0f; //voxel map size
while(current_dist < max_dist && occlusion < 1) {
//Get cone diameter (tan = cathetus / cathetus)
float current_coneDiameter = 2.0f * current_dist * tan(apperture_angle * 0.5f);
//Get mipmap level which should be sampled according to the cone diameter
float vlevel = log2(current_coneDiameter * vox_size);
vec3 pos_worldspace = origin + dir * current_dist;
vec3 pos_texturespace = (pos_worldspace + vec3(1.0f)) * 0.5f; //[-1,1] Coordinates to [0,1]
vec4 voxel = textureLod(tex3D, pos_texturespace, vlevel); //get voxel
vec3 color_read = voxel.rgb;
float occlusion_read = voxel.a;
color = occlusion*color + (1 - occlusion) * occlusion_read * color_read;
occlusion = occlusion + (1 - occlusion) * occlusion_read;
float dist_factor = 0.3f; //Lower = better results but higher performance hit
current_dist += current_coneDiameter * dist_factor;
}
return color;
}
The tex3D uniform is the voxel 3d-texture.
Under a regular Phong shader (under which the voxel values are calculated) the scene looks like this:
For reference, this is what the voxel map (tex3D) (128x128x128) looks like when visualized:
Now we get to the actual problem I'm having. If I apply the shader above to the scene, I get following results:
For very small cone angles (apperture_angle=0.01) I get roughly what you might expect: The voxelized scene is essentially 'reflected' perpendicularly on each surface:
Now if I increase the apperture angle to, for example 30 degrees (apperture_angle=0.52), I get this really strange 'wavy'-looking result:
I would have expected a result much more similar to the earlier one, just less specular. Instead I mostly get the outline of each object reflected in a specular manner, with the occasional pixel inside the outline. Considering this is meant to be the 'indirect lighting' in the scene, it won't look good even once I add the direct light.
I have tried different values for max_dist, current_dist etc., as well as shooting several cones instead of just one. The result remains similar, if not worse.
Does someone know what I'm doing wrong here, and how to get remotely realistic indirect light?
I suspect that the textureLod function somehow yields the wrong result for any LOD levels above 0, but I haven't been able to confirm this.
The mipmaps of the 3D texture were not being generated correctly.
In addition, there was no hard cap on vlevel, so any textureLod call that accessed a mipmap level above 1 returned a #000000 color.
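For reference, a hedged sketch of that cap inside voxelTraceCone (assuming the mipmap chain of the 128^3 texture is complete, i.e. log2(128) = 7 levels above the base):
//Clamp the requested LOD so the cone can never sample a mip level that doesn't exist
float maxLevel = log2(vox_size);                        //7.0 for a 128^3 voxel map
float vlevel = clamp(log2(current_coneDiameter * vox_size), 0.0f, maxLevel);
vec4 voxel = textureLod(tex3D, pos_texturespace, vlevel);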
I want to add some black outline to my game screen to make it look like the corners are rounded.
This is the effect I want to achieve:
I figured this effect was probably quite easy to create using a shader, instead of drawing a giant bitmap on top of everything.
Can someone help me with the GLSL shader code for this effect? I have 0 experience with shaders and was unable to find anything like this on the internet.
I've accidentally found a nice solution for this. It's not exactly what you've asked for, but in fact it looks even better.
// RESOLUTION is a vec2 with your window size in pixels.
vec2 pos = fragCoord.xy / RESOLUTION;
// Adjust .2 (first pow() argument) below to change frame thickness.
if (pos.x * pos.y * (1.-pos.x) * (1.-pos.y) < pow(.2,4.))
fragColor = vec4(0,0,0,1);
It gives the following result:
If you don't like those thin lines, you can remove them just by upscaling the image. It can be done by adding this line:
// The .985 is 1/scale_factor. You can try to change it and see how it works.
// It needs to be adjusted if you change frame thickness.
pos = (pos - .5) * .985 + .5;
While this effect looks good, it may be smarter to add just a faint shadow instead.
It's easy to implement using the same equation: pos.x * pos.y * (1.-pos.x) * (1.-pos.y)
The value of it ranges from 0.0 at window edges to 0.5^4 in the center.
You can use some easy math to make a shadow that becomes thicker closer to the window edge.
Here is an example of how it may look.
(A screenshot from Duality, my entry for Ludum Dare 35.)
Thanks to @HolyBlackCat my shader now works. I've improved the performance and made it look smoother.
varying vec4 v_color;
varying vec2 v_texCoord0;
uniform vec2 u_resolution;
uniform vec2 u_screenOffset;
uniform sampler2D u_sampler2D;
const float max = pow(0.2, 4.0);
void main()
{
vec2 pos = (gl_FragCoord.xy - u_screenOffset) / u_resolution;
float vignette = pos.x * pos.y * (1.-pos.x) * (1.-pos.y);
vec4 color = texture2D(u_sampler2D, v_texCoord0) * v_color;
color.rgb = color.rgb * smoothstep(0.0, max, vignette);
gl_FragColor = color;
}
Set the uniforms as follows in the resize event of libGDX:
shader.begin();
shader.setUniformf("u_resolution", viewport.getScreenWidth(), viewport.getScreenHeight());
shader.setUniformf("u_screenOffset", viewport.getScreenX(), viewport.getScreenY());
shader.end();
This will make sure the shader works with viewports (only tested with FitViewport) as well.
My aim is to pass an array of points to the shader, calculate their distance to the fragment, and paint them with a circle colored with a gradient depending on that distance.
For example:
(From a working example I set up on shader toy)
Unfortunately it isn't clear to me how I should calculate and convert the coordinates passed for processing inside the shader.
What I'm currently trying is to pass two arrays of floats - one for the x positions and one for the y positions of each point - to the shader through a uniform. Then, inside the shader, iterate through each point like so:
#ifdef GL_ES
precision mediump float;
precision mediump int;
#endif
uniform float sourceX[100];
uniform float sourceY[100];
uniform vec2 resolution;
in vec4 gl_FragCoord;
varying vec4 vertColor;
varying vec2 center;
varying vec2 pos;
void main()
{
float intensity = 0.0;
for(int i=0; i<100; i++)
{
vec2 source = vec2(sourceX[i],sourceY[i]);
vec2 position = ( gl_FragCoord.xy / resolution.xy );
float d = distance(position, source);
intensity += exp(-0.5*d*d);
}
intensity=3.0*pow(intensity,0.02);
if (intensity<=1.0)
gl_FragColor=vec4(0.0,intensity*0.5,0.0,1.0);
else if (intensity<=2.0)
gl_FragColor=vec4(intensity-1.0, 0.5+(intensity-1.0)*0.5,0.0,1.0);
else
gl_FragColor=vec4(1.0,3.0-intensity,0.0,1.0);
}
But that doesn't work - and I believe it may be because I'm trying to work with the pixel coordinates without properly translating them. Could anyone explain to me how to make this work?
Update:
The current result is:
The sketch's code is:
PShader pointShader;
float[] sourceX;
float[] sourceY;
void setup()
{
size(1024, 1024, P3D);
background(255);
sourceX = new float[100];
sourceY = new float[100];
for (int i = 0; i<100; i++)
{
sourceX[i] = random(0, 1023);
sourceY[i] = random(0, 1023);
}
pointShader = loadShader("pointfrag.glsl", "pointvert.glsl");
shader(pointShader, POINTS);
pointShader.set("sourceX", sourceX);
pointShader.set("sourceY", sourceY);
pointShader.set("resolution", float(width), float(height));
}
void draw()
{
for (int i = 0; i<100; i++) {
strokeWeight(60);
point(sourceX[i], sourceY[i]);
}
}
while the vertex shader is:
#define PROCESSING_POINT_SHADER
uniform mat4 projection;
uniform mat4 transform;
attribute vec4 vertex;
attribute vec4 color;
attribute vec2 offset;
varying vec4 vertColor;
varying vec2 center;
varying vec2 pos;
void main() {
vec4 clip = transform * vertex;
gl_Position = clip + projection * vec4(offset, 0, 0);
vertColor = color;
center = clip.xy;
pos = offset;
}
Update:
Based on the comments it seems you have confused two different approaches:
Draw a single full screen polygon, pass in the points and calculate the final value once per fragment using a loop in the shader.
Draw bounding geometry for each point, calculate the density for just one point in the fragment shader and use additive blending to sum the densities of all points.
The other issue is that your points are given in pixels but the code expects a 0 to 1 range, so d is large and the points are black. Fixing this as @RetoKoradi describes should address the points being black, but I suspect you'll find ramp clipping issues when many points are in close proximity. Passing points into the shader limits scalability and is inefficient unless the points cover the whole viewport.
As noted below, I think sticking with approach 2 is better. To restructure your code for it, remove the loop, don't pass in the array of points and use center as the point coordinate instead:
//calc center in pixel coordinates
vec2 centerPixels = (center * 0.5 + 0.5) * resolution.xy;
//find the distance in pixels (avoiding aspect ratio issues)
float dPixels = distance(gl_FragCoord.xy, centerPixels);
//scale down to the 0 to 1 range
float d = dPixels / resolution.y;
//write out the intensity
gl_FragColor = vec4(exp(-0.5*d*d));
Draw this to a texture (from comments: opengl-tutorial.org code and this question) with additive blending:
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);
Now that texture will contain intensity as it was after your original loop. In another fragment shader during a full screen pass (draw a single triangle that covers the whole viewport), continue with:
uniform sampler2D intensityTex;
...
float intensity = texture2D(intensityTex, gl_FragCoord.xy/resolution.xy).r;
intensity = 3.0*pow(intensity, 0.02);
...
The code you have shown is fine, assuming you're drawing a full screen polygon so the fragment shader runs once for each pixel. Potential issues are:
resolution isn't set correctly
The point coordinates aren't in the range 0 to 1 on the screen.
Although minor, d will be stretched by the aspect ratio, so you might be better off scaling the points up to pixel coordinates and dividing the distance by resolution.y.
This looks pretty similar to creating a density field for 2D metaballs. For performance you're best off limiting the density function for each point so it doesn't go on forever, then splatting discs into a texture using additive blending. This saves processing pixels a point doesn't affect (just like in deferred shading). The result is the density field, or in your case the per-pixel intensity.
These are a little related:
2D OpenGL ES Metaballs on android (no answers yet)
calculate light volume radius from intensity
gl_PointSize Corresponding to World Space Size
It looks like the point center and fragment position are in different coordinate spaces when you subtract them:
vec2 source = vec2(sourceX[i],sourceY[i]);
vec2 position = ( gl_FragCoord.xy / resolution.xy );
float d = distance(position, source);
Based on your explanation and code, sourceX and sourceY are in window coordinates, meaning that they are in units of pixels. gl_FragCoord is in the same coordinate space. And even though you don't show it directly, I assume that resolution is the size of the window in pixels.
This means that:
vec2 position = ( gl_FragCoord.xy / resolution.xy );
calculates the normalized position of the fragment within the window, in the range [0.0, 1.0] for both x and y. But then on the next line:
float d = distance(position, source);
you subtract source, which is still in window coordinates, from this position in normalized coordinates.
Since it looks like you wanted the distance in normalized coordinates, which makes sense, you'll also need to normalize source:
vec2 source = vec2(sourceX[i],sourceY[i]) / resolution.xy;
I implemented a shader for the sun surface which uses simplex noise from ashima/webgl-noise. But it costs too much GPU time, especially if I'm going to use it on mobile devices. I need to do the same effect but using a noise texture. My fragment shader is below:
#ifdef GL_ES
precision highp float;
#endif
precision mediump float;
varying vec2 v_texCoord;
varying vec3 v_normal;
uniform sampler2D u_planetDay;
uniform sampler2D u_noise; //noise texture (not used yet)
uniform float u_time;
#include simplex_noise_source from Ashima
float noise(vec3 position, int octaves, float frequency, float persistence) {
float total = 0.0; // Total value so far
float maxAmplitude = 0.0; // Accumulates highest theoretical amplitude
float amplitude = 1.0;
for (int i = 0; i < octaves; i++) {
// Get the noise sample
// 3D variant (commented out; the 2D line below is the one actually used)
//total += ((1.0 - abs(snoise(position * frequency))) * 2.0 - 1.0) * amplitude;
//I USE LINE BELOW FOR 2D NOISE
total += ((1.0 - abs(snoise(position.xy * frequency))) * 2.0 - 1.0) * amplitude;
// Make the wavelength twice as small
frequency *= 2.0;
// Add to our maximum possible amplitude
maxAmplitude += amplitude;
// Reduce amplitude according to persistence for the next octave
amplitude *= persistence;
}
// Scale the result by the maximum amplitude
return total / maxAmplitude;
}
void main()
{
vec3 position = v_normal * 2.5 + vec3(u_time, u_time, u_time);
float n1 = noise(position.xyz, 2, 7.7, 0.75) * 0.001;
vec3 ground = texture2D(u_planetDay, v_texCoord + n1).rgb;
gl_FragColor = vec4(ground, 1.0);
}
How can I adapt this shader to work with a noise texture, and what should the texture look like?
As far as I know, OpenGL ES 2.0 doesn't support 3D textures. Moreover, I don't know how to create a 3D texture.
I wrote this function that produces 3D noise from a 2D texture. It still uses hardware interpolation for the x/y directions and then manually interpolates for z. To get noise along the z direction I sample the same texture at different offsets. This will probably lead to some repetition, but I haven't noticed any in my application, and my guess is that using primes helps.
The thing that had me stumped for a while on shadertoy.com was that texture mipmapping was enabled, which caused seams at the change in value of the floor() function. A quick solution was passing a -999 bias to texture2D.
This was hard coded for a 256x256 noise texture, so adjust accordingly.
float noise3D(vec3 p)
{
p.z = fract(p.z)*256.0;
float iz = floor(p.z);
float fz = fract(p.z);
vec2 a_off = vec2(23.0, 29.0)*(iz)/256.0;
vec2 b_off = vec2(23.0, 29.0)*(iz+1.0)/256.0;
float a = texture2D(iChannel0, p.xy + a_off, -999.0).r;
float b = texture2D(iChannel0, p.xy + b_off, -999.0).r;
return mix(a, b, fz);
}
Update: To extend to perlin noise, sum samples at different frequencies:
float perlinNoise3D(vec3 p)
{
float x = 0.0;
for (float i = 0.0; i < 6.0; i += 1.0)
x += noise3D(p * pow(2.0, i)) * pow(0.5, i);
return x;
}
Evaluating noise at run-time is usually bad practice unless you are doing research work or quickly checking / debugging your noise function (or seeing what your noise parameters look like visually).
It will always consume too much of your processing budget (it's not worth it at all), so just forget about evaluating noise at run-time.
If you store your noise results off-line, you reduce the cost (by over 95%, say) to a simple memory access.
I suggest reducing all this to a texture look-up into a pre-baked 2D noise image. You are so far only affecting the fragment pipeline, so a 2D noise texture is definitely the way to go (you can also use this 2D lookup to deform vertex positions).
In order to map it on a sphere without any continuity issue, you may generate a loopable 2D image with a 4D noise, feeding the function with the coordinates of two 2D circles.
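A sketch of that "two circles" trick for baking the loopable image; it assumes a 4D simplex function such as the snoise(vec4) from ashima/webgl-noise, and uv is the 0..1 coordinate of the texel being baked:
// Evaluates 4D simplex noise on two circles so the baked image tiles
// seamlessly in both directions.
float loopableNoise(vec2 uv, float radius)
{
    const float TWO_PI = 6.28318530718;
    float a = uv.x * TWO_PI;
    float b = uv.y * TWO_PI;
    vec4 p = vec4(cos(a), sin(a), cos(b), sin(b)) * radius;
    return snoise(p); // assumed: 4D simplex noise from ashima/webgl-noise
}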
As for animating it, there are various hackish tricks either by deforming your lookup results with the time semantic in the fragment pipeline, or baking an image sequence in case you really need noise "animated with noise".
3D textures are just stacks of 2D textures, so they are too heavy to manipulate (even without animation) for what you want to do, and since you apparently need only a decent sun surface, it would be overkill.
I have the following fragment and vertex shader, in which I repeat a texture:
//Fragment
vec2 texcoordC = gl_TexCoord[0].xy;
texcoordC *= 10.0;
texcoordC.x = mod(texcoordC.x, 1.0);
texcoordC.y = mod(texcoordC.y, 1.0);
texcoordC.x = clamp(texcoordC.x, 0.0, 0.9);
texcoordC.y = clamp(texcoordC.y, 0.0, 0.9);
vec4 texColor = texture2D(sampler, texcoordC);
gl_FragColor = texColor;
//Vertex
gl_TexCoord[0] = gl_MultiTexCoord0;
colorC = gl_Color.r;
gl_Position = ftransform();
ADDED: After this process, I fetch the texture coordinates and use a texture pack:
vec4 textureGet(vec2 texcoord) {
// Tile is 1.0/16.0 part of texture, on x and y
float tileSp = 1.0 / 16.0;
vec4 color = texture2D(sampler, texcoord);
// Get tile x and y by red color stored
float texTX = mod(color.r, tileSp);
float texTY = color.r - texTX;
texTX /= tileSp;
// Testing tile
texTX = 1.0 - tileSp;
texTY = 1.0 - tileSp;
vec2 savedC = color.yz;
// This if else statement can be ignored. I use time to move the texture. Seams show without this as well.
if (color.r > 0.1) {
savedC.x = mod(savedC.x + sin(time / 200.0 * (color.r * 3.0)), 1.0);
savedC.y = mod(savedC.y + cos(time / 200.0 * (color.r * 3.0)), 1.0);
} else {
savedC.x = mod(savedC.x + time * (color.r * 3.0) / 1000.0, 1.0);
savedC.y = mod(savedC.y + time * (color.r * 3.0) / 1000.0, 1.0);
}
vec2 texcoordC = vec2(texTX + savedC.x * tileSp, texTY + savedC.y * tileSp);
vec4 res = texture2D(texturePack, texcoordC);
return res;
}
However, I have some trouble with seams (of 1 pixel, it seems) showing. If I leave out texcoordC *= 10.0 no seams are shown (or barely); if I leave it in, they appear. I clamp the coordinates (I even tried values lower than 1.0 and bigger than 0.0) to no avail. I have a strong feeling it is a rounding error somewhere, but I have no idea where. ADDED: Something to note is that in the actual case I convert the texcoordC x and y to 8-bit floats. I think the cause lies here; I added another shader describing this above.
The case I show is a little more complicated in reality, so there is no use for me to do this outside the shader(!). I added the previous question which explains a little about the case.
EDIT: As you can see the natural texture span is divided by 10, and the texture is repeated (10 times). The seams appear at the border of every repeating texture. I also added a screenshot. The seams are the very thin lines (~1pixel). The picture is a cut out from a screenshot, not scaled. The repeated texture is 16x16, with 256 subpixels total.
EDIT: This is a follow-up to this question, although all necessary info should be included here.
Last picture has no time added.
Looking at the render of the UV coordinates, they are being filtered, which will cause the same issue as in your previous question, but on a smaller scale. What is happening is that by sampling the UV coordinate texture at a point between two discontinuous values (i.e. two adjacent points where the texture coordinates wrapped), you get an interpolated value which isn't in the right part of the texture. Thus the boundary between texture tiles is a mess of pixels from all over that tile.
You need to get the mapping 1:1 between screen pixels and the captured UV values. Using nearest sampling might get you some of the way there, but it should be possible to do without using that, if you have the right texture and pixel coordinates in the first place.
Secondly, you may find you get bleeding effects due to the way you are doing the texture atlas lookup, as you don't account for the way texels are sampled. This will be amplified if you use any mipmapping. Ideally you need a border, and possibly some massaging of the coordinates to account for half-texel offsets. However I don't think that's the main issue you're seeing here.
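For the atlas bleeding part, a common mitigation is to keep the per-tile lookup half a texel away from the tile borders. A sketch under assumptions (a square atlas of 16x16 tiles whose size in texels is passed in; the names are illustrative, not from your code), and note it does not by itself solve mipmapping:
// Clamp a 0..1 in-tile coordinate so bilinear filtering never reaches into a
// neighbouring tile of the atlas.
vec2 atlasCoord(vec2 tileOrigin, vec2 inTile, float tileSpan, vec2 atlasSizeTexels)
{
    vec2 halfTexel = 0.5 / atlasSizeTexels;
    vec2 clamped = clamp(inTile * tileSpan, halfTexel, vec2(tileSpan) - halfTexel);
    return tileOrigin + clamped;
}
With your variables the lookup would then look roughly like texture2D(texturePack, atlasCoord(vec2(texTX, texTY), savedC, tileSp, atlasSizeTexels)), where atlasSizeTexels is whatever your texture pack actually measures.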