How to find the intersection points in cell noise / voronoi in GLSL? - glsl

I am extremely curious about how I can find the intersection points of Voronoi cells. How can I find the positions of those points?
Basically, I would like to create a small circle at the points of intersection. Much like this:
I want a pure GLSL implementation. I have tried a lot but cannot get my head around it. One possible algorithm I found was from this video on YouTube: https://www.youtube.com/watch?v=Y5X1TvN9TpM
However, it's very unclear to me how to implement that.
I know a possible solution would be to find the points using some algorithm such as Harris corner detection, but that isn't really a GLSL solution.
GLSL code that produces a normal Voronoi pattern:
#ifdef GL_ES
precision mediump float;
#endif

uniform vec2 u_resolution;
uniform vec2 u_mouse;
uniform float u_time;

vec2 rand(vec2 co){
    return vec2(fract(sin(dot(co, vec2(12.9898, 78.233))) * 43758.5453),
                fract(sin(dot(co, vec2(78.9898, 32.233))) * 237374.5453));
}

void main() {
    vec2 st = vec2(gl_FragCoord.xy) / u_resolution.xy;
    st *= 10.;
    vec2 i_st = floor(st);
    vec2 f_st = fract(st);
    float m_dist = 1.;
    float intensity = u_mouse.x / u_resolution.x;

    for (int x = -1; x <= 1; ++x) {
        for (int y = -1; y <= 1; y++) {
            vec2 neighbor = vec2(x, y);
            vec2 point = rand(i_st + neighbor);
            vec2 diff = neighbor + point * intensity - f_st;
            float dist = length(diff);
            m_dist = min(dist, m_dist);
        }
    }

    gl_FragColor = vec4(vec3(m_dist), 1.);
}

The intersection points of a Voronoi diagram are always equidistant from the three nearest cell cores, so each one is the circumcenter of the triangle formed by those three points. Here is a formula to compute its coordinates.
If your goal is to draw circles around the intersections, first find the coordinates of the three nearest cell cores from the point you are considering, then compute the coordinates of the circumcenter, and finally the distance from it to your point. You can then use a smoothstep() to draw a small circle rather than a circular gradient.
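A minimal GLSL sketch of that circumcenter formula, assuming you have already extended the neighbor loop to keep the three nearest cell cores a, b and c (the function name and the smoothstep thresholds are only illustrative):

vec2 circumcenter(vec2 a, vec2 b, vec2 c) {
    // standard circumcenter formula for a 2D triangle
    float d = 2.0 * (a.x * (b.y - c.y) + b.x * (c.y - a.y) + c.x * (a.y - b.y));
    float ux = (dot(a, a) * (b.y - c.y) + dot(b, b) * (c.y - a.y) + dot(c, c) * (a.y - b.y)) / d;
    float uy = (dot(a, a) * (c.x - b.x) + dot(b, b) * (a.x - c.x) + dot(c, c) * (b.x - a.x)) / d;
    return vec2(ux, uy);
}

// In main(), once a, b and c are known (all in the same space as st):
// float r = distance(st, circumcenter(a, b, c));
// color = mix(color, vec3(1.0), 1.0 - smoothstep(0.02, 0.03, r));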

Related

Godot shader swap materials by world position on 3d mesh

I am trying to replicate something similar to this from Unity in Godot Engine with shaders, however, I am not able to find a solution. Calculating the position of the effect is the problem. How can I get the position in Godot, where I don't have access to the worldPos variable used in the video? A full code snippet of the shader would be really appreciated.
Currently, my shader code looks like this. ob_position is the position passed from the node.
shader_type spatial;

uniform vec2 ob_position = vec2(1, 0.68);
uniform float ob_radius = 0.01;

float circle(vec2 position, float radius, float feather)
{
    return smoothstep(radius, radius + feather, length(position - vec2(0.5)));
}

void fragment() {
    ALBEDO.rgb = vec3(circle(UV * (ob_position), ob_radius, 0.001));
}
The video says:
Send the sphere position to the shader in script.
We can do that. First define a uniform:
uniform vec3 sphere_position;
And we can set it from code:
material.set_shader_param("sphere_position", global_transform.origin)
Since you need to set this every time the sphere moves, you can use NOTIFICATION_TRANSFORM_CHANGED which you enable by calling set_notify_local_transform(true).
Get the distance between the sphere and World Position.
To do that we need to figure out the world position of the fragment. Let us start by looking at the Fragment Built-ins. We find that:
VERTEX is the position of the fragment in view space.
CAMERA_MATRIX is the transform from view space to world space.
Yes, the naming is confusing.
So we can do this (in fragment):
vec3 pixel_world_pos = (CAMERA_MATRIX * vec4(VERTEX, 1.0)).xyz;
You can use this to debug: ALBEDO.rgb = pixel_world_pos;. In general, output whatever variable you want to visualize for debugging to ALBEDO.
And now the distance is:
float dist = distance(sphere_position, pixel_world_pos);
Control the size by dividing by radius.
While we don't have a direct translation for the code in the video, we can certainly divide by radius (dist / radius), where radius would be a uniform float.
Create a cutoff with Step.
That would be something like this: step(0.5, dist / radius).
Honestly, I would rather do this: step(radius, dist).
Your mileage may vary.
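If you prefer a soft transition over a hard cutoff, a smoothstep over a small band around the radius is also an option (the band width here is just an illustrative value):

float threshold = smoothstep(radius, radius + 0.1, dist);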
Lerp two different textures over the cutoff.
For that we can use mix. But first, define your textures as uniform sampler2D. Then you can do something like this:
float threshold = step(radius, dist);
ALBEDO.rgb = mix(texture(tex1, UV).rgb, texture(tex2, UV).rgb, threshold);
Moving worldspace noise.
Add one more uniform sampler2D and set a NoiseTexture (make sure to set its noise and set seamless to true), and then we can query it with the world coordinates we already have.
float noise_value = texture(noise_texture, pixel_world_pos.xy + vec2(TIME)).r;
Add worldspace to noise.
I'm not sure what they mean. But from the visual, they use the noise to distort the cutoff. I'm not sure if this yields the same result, but it looks good to me:
vec3 pixel_world_pos = (CAMERA_MATRIX * vec4(VERTEX, 1.0)).xyz;
float noise_value = texture(noise_texture, pixel_world_pos.xy + vec2(TIME)).r;
float dist = distance(sphere_position, pixel_world_pos) + noise_value;
float threshold = step(radius, dist);
ALBEDO.rgb = mix(texture(tex1, UV).rgb, texture(tex2, UV).rgb, threshold);
Add a line to Emission (glow).
I don't understand what they did originally, so I came up with my own solution:
EMISSION = vec3(step(dist, edge + radius) * step(radius, dist));
What is going on here is that we will have a white EMISSION when dist < edge + radius and radius < dist. To reiterate, we will have white EMISSION when the distance is greater than the radius (radius < dist) and lesser than the radius plus some edge (dist < edge + radius). The comparisons become step functions, which return 0.0 or 1.0, and the AND operation is a multiplication.
Reveal object by clipping instead of adding a second texture.
I suppose that means there is another version of the shader that either uses discard or ALPHA and it is used for other objects.
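A minimal sketch of what such a variant might look like, using the same dist and radius (whether you discard inside or outside the radius depends on the effect you want):

if (dist > radius) {
    discard; // clip everything outside the sphere's influence
}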
This is the shader I wrote to test this:
shader_type spatial;

uniform vec3 sphere_position;
uniform sampler2D noise_texture;
uniform sampler2D tex1;
uniform sampler2D tex2;
uniform float radius;
uniform float edge;

void fragment()
{
    vec3 pixel_world_pos = (CAMERA_MATRIX * vec4(VERTEX, 1.0)).xyz;
    float noise_value = texture(noise_texture, pixel_world_pos.xy + vec2(TIME)).r;
    float dist = distance(sphere_position, pixel_world_pos) + noise_value;
    float threshold = step(radius, dist);
    ALBEDO.rgb = mix(texture(tex1, UV).rgb, texture(tex2, UV).rgb, threshold);
    EMISSION = vec3(step(dist, edge + radius) * step(radius, dist));
}
The answer from Theraot was a lifesaver for me. However, I also needed support for multiple positions, using arrays: uniform vec3 sphere_position[];
So I came up with this:
shader_type spatial;

uniform uint ob_position_size;
uniform vec3 sphere_position[2];
uniform sampler2D noise_texture;
uniform sampler2D tex1;
uniform float radius;
uniform float edge;
// In Godot 4 the screen texture has to be declared explicitly to be read:
uniform sampler2D SCREEN_TEXTURE : hint_screen_texture, filter_linear_mipmap;

void fragment()
{
    vec3 pixel_world_pos = (INV_VIEW_MATRIX * vec4(VERTEX, 1.0)).xyz;
    float noise_value = texture(noise_texture, pixel_world_pos.xy + vec2(TIME)).r;
    ALBEDO = texture(SCREEN_TEXTURE, SCREEN_UV).rgb;

    for (int i = 0; i < sphere_position.length(); i++) {
        float dist = distance(sphere_position[i], pixel_world_pos) + noise_value;
        float threshold = step(radius, dist);
        ALBEDO.rgb = mix(texture(tex1, UV).rgb, ALBEDO.rgb, threshold);
        //EMISSION = vec3(step(dist, edge + radius) * step(radius, dist));
    }
}

How to render a circular vignette with GLSL

I’m trying to achieve a circular vignette with GLSL, but the result is elliptical when the texture is rectangular. What is the correct way to keep the vignette circular regardless of the texture size? The input texture size (resolution) can be either rectangular or square.
I tried a solution using the discard method, but this doesn't suit what I require, as I need to use smoothstep to get a gradient edge.
Current result:
GLSL shader:
varying vec2 v_texcoord;
uniform sampler2D u_texture;
uniform vec2 u_resolution;

vec4 applyVignette(vec4 color)
{
    vec2 position = (gl_FragCoord.xy / u_resolution) - vec2(0.5);
    float dist = length(position);
    float radius = 0.5;
    float softness = 0.02;
    float vignette = smoothstep(radius, radius - softness, dist);
    color.rgb = color.rgb - (1.0 - vignette);
    return color;
}

void main()
{
    vec4 color = texture2D(u_texture, v_texcoord);
    color = applyVignette(color);
    gl_FragColor = color;
}
You have to respect the aspect ratio when you calculate the distance to the center point of the circular view:
float dist = length(position * vec2(u_resolution.x/u_resolution.y, 1.0));
Note, if you have a rectangular viewport where the width is greater than the height, a perfect circle is squeezed at its left and right into an ellipse when the coordinates are transformed from view space to normalized device space.
You must counteract this squeezing by scaling up the x axis of the distance vector.
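Putting it together, the applyVignette() from the question with that correction applied would look something like this (a sketch; only the dist line changes):

vec4 applyVignette(vec4 color)
{
    vec2 position = (gl_FragCoord.xy / u_resolution) - vec2(0.5);
    // scale x by the aspect ratio so distances are measured in a square space
    float dist = length(position * vec2(u_resolution.x / u_resolution.y, 1.0));
    float radius = 0.5;
    float softness = 0.02;
    float vignette = smoothstep(radius, radius - softness, dist);
    color.rgb = color.rgb - (1.0 - vignette);
    return color;
}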

Strange Voxel Cone Tracing Results

I'm currently in the process of writing a Voxel Cone Tracing rendering engine with C++ and OpenGL. Everything is going rather fine, except that I'm getting rather strange results for wider cone angles.
Right now, for the purposes of testing, all I am doing is shoot out a single cone perpendicular to the fragment normal. I am only calculating 'indirect light'. For reference, here is the rather simple fragment shader I'm using:
#version 450 core

out vec4 FragColor;

in vec3 pos_fs;
in vec3 nrm_fs;

uniform sampler3D tex3D;

vec3 indirectDiffuse();
vec3 voxelTraceCone(const vec3 from, vec3 direction);

void main()
{
    FragColor = vec4(0, 0, 0, 1);
    FragColor.rgb += indirectDiffuse();
}

vec3 indirectDiffuse(){
    // singular cone in direction of the normal
    vec3 ret = voxelTraceCone(pos_fs, nrm_fs);
    return ret;
}

vec3 voxelTraceCone(const vec3 origin, vec3 dir) {
    float max_dist = 1.0f;
    dir = normalize(dir);
    float current_dist = 0.01f;
    float apperture_angle = 0.01f; //Angle in Radians.
    vec3 color = vec3(0.0f);
    float occlusion = 0.0f;
    float vox_size = 128.0f; //voxel map size

    while(current_dist < max_dist && occlusion < 1.0f) {
        //Get cone diameter (tan = cathetus / cathetus)
        float current_coneDiameter = 2.0f * current_dist * tan(apperture_angle * 0.5f);

        //Get mipmap level which should be sampled according to the cone diameter
        float vlevel = log2(current_coneDiameter * vox_size);

        vec3 pos_worldspace = origin + dir * current_dist;
        vec3 pos_texturespace = (pos_worldspace + vec3(1.0f)) * 0.5f; //[-1,1] Coordinates to [0,1]

        vec4 voxel = textureLod(tex3D, pos_texturespace, vlevel); //get voxel
        vec3 color_read = voxel.rgb;
        float occlusion_read = voxel.a;

        color = occlusion * color + (1.0f - occlusion) * occlusion_read * color_read;
        occlusion = occlusion + (1.0f - occlusion) * occlusion_read;

        float dist_factor = 0.3f; //Lower = better results but higher performance hit
        current_dist += current_coneDiameter * dist_factor;
    }
    return color;
}
The tex3D uniform is the voxel 3d-texture.
Under a regular Phong shader (under which the voxel values are calculated) the scene looks like this:
For reference, this is what the voxel map (tex3D) (128x128x128) looks like when visualized:
Now we get to the actual problem I'm having. If I apply the shader above to the scene, I get following results:
For very small cone angles (apperture_angle=0.01) I get roughly what you might expect: The voxelized scene is essentially 'reflected' perpendicularly on each surface:
Now if I increase the apperture angle to, for example, 30 degrees (apperture_angle=0.52), I get this really strange 'wavy'-looking result:
I would have expected a much more similar result to the earlier one, just less specular. Instead I get mostly the outline of each object reflected in a specular manner with some occasional pixels inside the outline. Considering this is meant to be the 'indirect lighting' in the scene, it won't look exactly good even if I add the direct light.
I have tried different values for max_dist, current_dist etc., as well as shooting several cones instead of just one. The result remains similar, if not worse.
Does someone know what I'm doing wrong here, and how to get actual remotely realistic indirect light?
I suspect that the textureLod function somehow yields the wrong result for any LOD levels above 0, but I haven't been able to confirm this.
The mipmaps of the 3D texture were not being generated correctly.
In addition, there was no hard cap on vlevel, so every textureLod call that accessed a mip level above 1 returned a #000000 color.
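So the fix is twofold: generate the mips on the CPU side (for example with glGenerateMipmap(GL_TEXTURE_3D) after the voxelization pass) and clamp vlevel in the shader. A minimal sketch of the shader-side clamp, assuming the 128³ texture has a full mip chain:

float vlevel = log2(current_coneDiameter * vox_size);
vlevel = clamp(vlevel, 0.0, log2(vox_size)); // never sample past the last existing mip level
vec4 voxel = textureLod(tex3D, pos_texturespace, vlevel);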

Uniform point arrays and managing fragment shader coordinates systems

My aim is to pass an array of points to the shader, calculate their distance to the fragment, and paint them with a circle colored with a gradient depending on that computation.
For example:
(From a working example I set up on Shadertoy)
Unfortunately it isn't clear to me how I should calculate and convert the coordinates passed for processing inside the shader.
What I'm currently trying is to pass two arrays of floats - one for the x positions and one for the y positions of each point - to the shader through a uniform. Then inside the shader I iterate through each point like so:
#ifdef GL_ES
precision mediump float;
precision mediump int;
#endif

uniform float sourceX[100];
uniform float sourceY[100];
uniform vec2 resolution;

in vec4 gl_FragCoord;

varying vec4 vertColor;
varying vec2 center;
varying vec2 pos;

void main()
{
    float intensity = 0.0;

    for(int i=0; i<100; i++)
    {
        vec2 source = vec2(sourceX[i],sourceY[i]);
        vec2 position = ( gl_FragCoord.xy / resolution.xy );
        float d = distance(position, source);
        intensity += exp(-0.5*d*d);
    }

    intensity = 3.0*pow(intensity,0.02);

    if (intensity<=1.0)
        gl_FragColor = vec4(0.0,intensity*0.5,0.0,1.0);
    else if (intensity<=2.0)
        gl_FragColor = vec4(intensity-1.0, 0.5+(intensity-1.0)*0.5,0.0,1.0);
    else
        gl_FragColor = vec4(1.0,3.0-intensity,0.0,1.0);
}
But that doesn't work - and I believe it may be because I'm trying to work with the pixel coordinates without properly translating them. Could anyone explain to me how to make this work?
Update:
The current result is:
The sketch's code is:
PShader pointShader;
float[] sourceX;
float[] sourceY;

void setup()
{
    size(1024, 1024, P3D);
    background(255);

    sourceX = new float[100];
    sourceY = new float[100];
    for (int i = 0; i<100; i++)
    {
        sourceX[i] = random(0, 1023);
        sourceY[i] = random(0, 1023);
    }

    pointShader = loadShader("pointfrag.glsl", "pointvert.glsl");
    shader(pointShader, POINTS);
    pointShader.set("sourceX", sourceX);
    pointShader.set("sourceY", sourceY);
    pointShader.set("resolution", float(width), float(height));
}

void draw()
{
    for (int i = 0; i<100; i++) {
        strokeWeight(60);
        point(sourceX[i], sourceY[i]);
    }
}
while the vertex shader is:
#define PROCESSING_POINT_SHADER

uniform mat4 projection;
uniform mat4 transform;

attribute vec4 vertex;
attribute vec4 color;
attribute vec2 offset;

varying vec4 vertColor;
varying vec2 center;
varying vec2 pos;

void main() {
    vec4 clip = transform * vertex;
    gl_Position = clip + projection * vec4(offset, 0, 0);

    vertColor = color;
    center = clip.xy;
    pos = offset;
}
Update:
Based on the comments it seems you have confused two different approaches:
Draw a single full screen polygon, pass in the points and calculate the final value once per fragment using a loop in the shader.
Draw bounding geometry for each point, calculate the density for just one point in the fragment shader and use additive blending to sum the densities of all points.
The other issue is that your points are given in pixels but the code expects a 0 to 1 range, so d is large and the points are black. Fixing this issue as @RetoKoradi describes should address the points being black, but I suspect you'll find ramp clipping issues when many are in close proximity. Passing points into the shader limits scalability and is inefficient unless the points cover the whole viewport.
Of the two, I think sticking with approach 2 is better. To restructure your code for it, remove the loop, don't pass in the array of points, and use center as the point coordinate instead:
//calc center in pixel coordinates
vec2 centerPixels = (center * 0.5 + 0.5) * resolution.xy;
//find the distance in pixels (avoiding aspect ratio issues)
float dPixels = distance(gl_FragCoord.xy, centerPixels);
//scale down to the 0 to 1 range
float d = dPixels / resolution.y;
//write out the intensity
gl_FragColor = vec4(exp(-0.5*d*d));
Draw this to a texture (from comments: opengl-tutorial.org code and this question) with additive blending:
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);
Now that texture will contain intensity as it was after your original loop. In another fragment shader during a full screen pass (draw a single triangle that covers the whole viewport), continue with:
uniform sampler2D intensityTex;
...
float intensity = texture2D(intensityTex, gl_FragCoord.xy/resolution.xy).r;
intensity = 3.0*pow(intensity, 0.02);
...
The code you have shown is fine, assuming you're drawing a full screen polygon so the fragment shader runs once for each pixel. Potential issues are:
resolution isn't set correctly
The point coordinates aren't in the range 0 to 1 on the screen.
Although minor, d will be stretched by the aspect ratio, so you might be better off scaling the points up to pixel coordinates and dividing the distance by resolution.y.
This looks pretty similar to creating a density field for 2D metaballs. For performance you're best off limiting the density function for each point so it doesn't go on forever, then splatting discs into a texture using additive blending. This saves processing those pixels a point doesn't affect (just like in deferred shading). The result is the density field, or in your case per-pixel intensity.
These are a little related:
2D OpenGL ES Metaballs on android (no answers yet)
calculate light volume radius from intensity
gl_PointSize Corresponding to World Space Size
It looks like the point center and fragment position are in different coordinate spaces when you subtract them:
vec2 source = vec2(sourceX[i],sourceY[i]);
vec2 position = ( gl_FragCoord.xy / resolution.xy );
float d = distance(position, source);
Based on your explanation and code, sourceX and sourceY are in window coordinates, meaning that they are in units of pixels. gl_FragCoord is in the same coordinate space. And even though you don't show that directly, I assume that resolution is the size of the window in pixels.
This means that:
vec2 position = ( gl_FragCoord.xy / resolution.xy );
calculates the normalized position of the fragment within the window, in the range [0.0, 1.0] for both x and y. But then on the next line:
float d = distance(position, source);
you subtract source, which is still in window coordinates, from this position in normalized coordinates.
Since it looks like you wanted the distance in normalized coordinates, which makes sense, you'll also need to normalize source:
vec2 source = vec2(sourceX[i],sourceY[i]) / resolution.xy;
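Applied to the loop in the question, that single change is all that's needed (a minimal sketch; everything else stays as in your shader):

for (int i = 0; i < 100; i++)
{
    // bring the point into the same normalized [0,1] space as position
    vec2 source = vec2(sourceX[i], sourceY[i]) / resolution.xy;
    vec2 position = gl_FragCoord.xy / resolution.xy;
    float d = distance(position, source);
    intensity += exp(-0.5*d*d);
}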

Triplanar texturing in glsl

I followed a paper called "GPU Based Algorithms for Terrain Texturing" and it says the following:
The main algorithm to apply triplanar texturing is fairly simple.
First, we check whether the slope is relatively large in the same way
that we do with slope based texturing. These regions with high slope
will be the only regions affected by the algorithm. We then check what
the larger component of the normal is, out of x and z. If x is the
larger component, we use the geometry z coordinate as the texture
coordinate s, and the geometry y coordinate as the texture coordinate
t. If z is the larger component, we use the geometry x coordinate as
the texture coordinate s, and the geometry y coordinate as the texture
coordinate t.
So I tried to implement it. This is my heightmap:
Note that I added white lines in the borders just for the experiment, so now I have maximum-height walls surrounding my map.
Now following the articles, here's the implementation in the vertex shader:
#version 430

uniform mat4 ProjectionMatrix;
uniform mat4 CameraMatrix;
uniform vec3 scale;

layout(location = 0) in vec3 vertex;
layout(location = 1) in vec3 normal;

out vec3 fsVertex;
out vec3 fsNormal;
out vec2 fsUvs;

void main()
{
    fsVertex = vertex;
    fsNormal = normalize(normal);

    if(fsNormal.y < 0.75) {
        if(fsNormal.x > fsNormal.z)
            fsUvs = vertex.zy * scale.zy;
        else
            fsUvs = vertex.xy * scale.xy;
    }
    else
        fsUvs = vertex.xz * scale.xz;

    gl_Position = ProjectionMatrix * CameraMatrix * vec4(vertex * scale, 1.0);
}
Here's the fragment shader, if it helps.
This is what I get:
Here's a further look, for proportion.
The top and left walls (of the heightmap) are rendered OK, but the bottom and right walls still suffer from stretching. I also get these weird stretched spots next to the beginning of the walls.
What could be the cause of this?
If you want to check if the normal's x or z coordinate are longer, you should use the abs function:
if(abs(fsNormal.x) > abs(fsNormal.z))
Furthermore, the 0.75 threshold on y seems like a coarse approximation, which is probably good enough in most cases. Actually, the maximum of abs(x), abs(y), abs(z) gives you the correct plane.
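A hedged sketch of that selection, applied to the question's vertex shader (variable names follow the original code):

vec3 n = abs(fsNormal);
if (n.x >= n.y && n.x >= n.z)
    fsUvs = vertex.zy * scale.zy; // x is the dominant axis: project onto the zy plane
else if (n.z >= n.y)
    fsUvs = vertex.xy * scale.xy; // z dominant: project onto the xy plane
else
    fsUvs = vertex.xz * scale.xz; // y dominant: project onto the xz plane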
Here is a DX11/HLSL implementation I used. GLSL conversion should be easy.
With the exponent value you can tune the blending speed at the borders. I used something like 3.
float3 SampleTriplanarTexture(Texture2D<float4> tex1, Texture2D<float4> tex2, Texture2D<float4> tex3, float3 normal, float3 pos, float exponent)
{
    //triplanar projection
    float mXY = pow(abs(normal.z), exponent);
    float mXZ = pow(abs(normal.y), exponent);
    float mYZ = pow(abs(normal.x), exponent);
    float total = 1.0f / (mXY + mXZ + mYZ);
    mXY *= total;
    mXZ *= total;
    mYZ *= total;

    return tex1.SampleLevel(linearSampler2, pos.xz, 0) * mXZ +
           tex2.SampleLevel(linearSampler2, pos.xy, 0) * mXY +
           tex3.SampleLevel(linearSampler2, pos.yz, 0) * mYZ;
}
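A possible GLSL translation for reference (sampler names are only illustrative; texture() replaces SampleLevel, or use textureLod(..., 0.0) for an exact match):

vec3 sampleTriplanarTexture(sampler2D texTop, sampler2D texFront, sampler2D texSide,
                            vec3 normal, vec3 pos, float exponent)
{
    // blend weights derived from the normal, sharpened by the exponent
    float mXY = pow(abs(normal.z), exponent);
    float mXZ = pow(abs(normal.y), exponent);
    float mYZ = pow(abs(normal.x), exponent);
    float total = 1.0 / (mXY + mXZ + mYZ);
    mXY *= total;
    mXZ *= total;
    mYZ *= total;

    return texture(texTop,   pos.xz).rgb * mXZ +
           texture(texFront, pos.xy).rgb * mXY +
           texture(texSide,  pos.yz).rgb * mYZ;
}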