Artifacts when sampling texture in OpenGL

I've been trying to code a fragment shader such that I can pass it an arbitrary image and it would convert it into a 9-box (a repeating center and static borders). A sample input image would be this one:
My code then creates a single square (2 triangles), with a 1-to-1 orthographic projection that draws images fine, and calls the following fragment shader:
#version 330

in vec2 outTexCoord;
out vec4 fragColor;

uniform sampler2D texture_sampler;
uniform vec4 colour;
uniform vec2 top;
uniform vec2 mid;
uniform vec2 bottom;
uniform ivec2 repeat;

void main()
{
    vec2 boxSize = top + bottom + repeat * mid;
    vec2 boxCoord = outTexCoord * boxSize;
    vec2 textCoord = boxCoord;

    // Fiddle with the X coordinate, for items in the middle or bottom.
    if (boxSize.x - boxCoord.x <= bottom.x)
        textCoord.x = 1.0 - (boxSize.x - boxCoord.x);
    else if (boxCoord.x > top.x) {
        float m = (boxCoord.x - top.x) / mid.x;
        textCoord.x = top.x + mid.x * (m - floor(m));
    }

    // Fiddle with the Y coordinate, for items in the middle or bottom.
    if (boxSize.y - boxCoord.y <= bottom.y)
        textCoord.y = 1.0 - (boxSize.y - boxCoord.y);
    else if (boxCoord.y > top.y) {
        float m = (boxCoord.y - top.y) / mid.y;
        textCoord.y = top.y + mid.y * (m - floor(m));
    }

    fragColor = colour * texture(texture_sampler, textCoord);
}
The uniforms are filled so that the sizes of top, mid and bottom correspond to A, B and C respectively, with top + mid + bottom = (1,1). The shader then maps each output coordinate back to a coordinate in the original texture; the new coordinates should always fall within (0,0) and (1,1). The problem: it works, but for some reason a 2-pixel horizontal distortion appears every time I "repeat":
Another example with a different image and more vertical repeats.
What bothers me enormously is that nowhere in the original image is there any empty or gray pixel to sample from. Even if the coordinates were wrong, the shader should still be sampling from the texture itself (I thought it could be a sampling issue, but the error occurs neither at the edges of the triangles nor at the edges of the texture, nor does it seem to be interpolating from nearby texture pixels). I literally don't know where those color values are coming from! Also, no such problem seems to occur on the X axis, even though the code is equivalent :-(
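One classic source of seams like this (an assumption on my part, since nothing in the question confirms it) is mipmapping: textCoord jumps discontinuously at every repeat boundary, so the implicit derivatives computed for fragment quads that straddle the jump are huge, the sampler drops to the smallest mip level, and the average color it returns need not be visible anywhere in the full-resolution image. A minimal sketch of a fix, passing explicit gradients taken from the continuous coordinate (inside every region textCoord changes at the same rate as boxCoord, so these are the derivatives the sampler should see):

fragColor = colour * textureGrad(texture_sampler, textCoord,
                                 dFdx(boxCoord), dFdy(boxCoord));

If this is the cause, forcing the base level with textureLod(texture_sampler, textCoord, 0.0), or simply disabling mipmaps on the texture, should also make the line disappear.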

Related

Efficiently iterate through neighbouring pixels in fragment shader

For context:
I'm working on a generative 2D animation with OpenFrameworks.
I'm trying to implement a shader that fills some shapes with a color, depending on the orientation of the shapes' edges.
Basically it takes an image like this one:
and spits out something like this:
Note that it intentionally only takes the color from the left side of the shape.
Right now my fragment shader looks like this:
#version 150

out vec4 outputColor;

uniform sampler2DRect fbo;
uniform sampler2DRect mask;

vec2 point;
vec4 col;
float x, i;
float delta = 200.0;

vec4 extrudeColor()
{
    x = gl_FragCoord.x > delta ? gl_FragCoord.x - delta : 0.0;
    for (i = gl_FragCoord.x; i > x; i--)
    {
        point = vec2(i, gl_FragCoord.y);
        if (texture(fbo, point) != vec4(0,0,0,0)) {
            col = texture(fbo, point);
            return vec4(col.r, col.g, col.b, (i - x) / delta);
        }
    }
    return vec4(0,0,0,1);
}

void main()
{
    outputColor = texture(mask, gl_FragCoord.xy) == vec4(1,1,1,1)
               && texture(fbo, gl_FragCoord.xy) == vec4(0,0,0,0)
                ? extrudeColor() : vec4(0,0,0,0);
}
The mask sampler is just a black-and-white version of the second image, which I use to avoid processing pixels outside of the shapes.
The shader I have works but it is slow and I feel like I'm not using proper GPU thinking and coding.
The actual, more general question:
I'm totally new to GLSL and OpenGL. Is there a way to make this kind of iteration through neighbouring pixels more efficient, without this many texture() reads?
Maybe using matrices? IDK!
This is a highly inefficient way to approach this problem. Try to avoid conditionals (if's) and loops (for's) in your shader. I would suggest loading or generating a single texture, and then using an alpha mask to create the shape you need. The texture could remain constant, while the 2 or 8-bit mask could be generated per frame.
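A minimal sketch of that idea (the sampler names are illustrative, assuming the gradient was rendered once into gradientTex and the shape mask is redrawn into maskTex each frame):

#version 150

out vec4 outputColor;

uniform sampler2DRect gradientTex; // constant, precomputed gradient
uniform sampler2DRect maskTex;     // per-frame shape mask, single channel

void main()
{
    vec4 color = texture(gradientTex, gl_FragCoord.xy);
    float m = texture(maskTex, gl_FragCoord.xy).r; // 0 outside the shape
    outputColor = vec4(color.rgb, color.a * m);
}

This replaces the per-fragment loop with two texture reads.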
An alternative method would be to use a few uniforms and upload "per-line" data in an array:
#version 440 core

uniform sampler2D _Texture;  // The texture to draw
uniform vec2 _Position;      // The 'object' position (screen coordinate)
uniform int _RowOffset;      // The offset in the 'object' to start drawing
uniform int _RowLength;      // The length of the 'row' to draw
uniform int _Height;         // The height of the 'object' to draw

in vec2 _TexCoord;           // The texture coordinate passed from the vertex shader
out vec4 _FragColor;         // The output color (gl_FragColor is deprecated)

void main () {
    if (gl_FragCoord.x < (_Position.x + _RowOffset)) discard;
    if (gl_FragCoord.x > (_Position.x + _RowOffset + _RowLength)) discard;
    _FragColor = texture(_Texture, _TexCoord.st); // texture2D is not available in core profiles
}
Or, without sampling a texture at all, you could generate a linear gradient function and sample the color from it using the Y coordinate:
const vec4 _Red   = vec4(1, 0, 0, 1);
const vec4 _Green = vec4(0, 1, 0, 0);

vec4 _GetGradientColor (float _P /* Percentage */) {
    // Linear blend between the two endpoint colors; the weights already sum to 1.
    float _R = _Red.r * _P + _Green.r * (1 - _P);
    float _G = _Red.g * _P + _Green.g * (1 - _P);
    float _B = _Red.b * _P + _Green.b * (1 - _P);
    float _A = _Red.a * _P + _Green.a * (1 - _P);
    return vec4(_R, _G, _B, _A);
}
Then, in your fragment shader:

float _P = (gl_FragCoord.y - _Position.y) / _Height;
_FragColor = _GetGradientColor(_P);
Shader Output
Of course this all could be optimised a bit, and this only generates a 2-color gradient, whereas it looks like you need several colors. A quick Google search for "linear gradient generator" can land you some nicer alternatives. I should also note this simple example will not work for shapes with 'holes' in them, but it can be revised to do so. If the shader math gets too heavy, then choose the texture-with-alpha-mask option.
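For example, a minimal multi-stop sketch (the stop colors are purely illustrative) chains mix() calls:

vec4 _GetGradientColor3 (float _P)
{
    const vec4 _C0 = vec4(1.0, 0.0, 0.0, 1.0); // stop at _P = 0.0
    const vec4 _C1 = vec4(1.0, 1.0, 0.0, 1.0); // stop at _P = 0.5
    const vec4 _C2 = vec4(0.0, 1.0, 0.0, 1.0); // stop at _P = 1.0
    return _P < 0.5 ? mix(_C0, _C1, _P * 2.0)
                    : mix(_C1, _C2, _P * 2.0 - 1.0);
}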

Why is my frag shader casting long shadows horizontally and short shadows vertically?

I have the following fragment shader:
#version 330

layout(location = 0) out vec4 frag_colour;

in vec2 texelCoords;

uniform sampler2D uTexture;          // the color
uniform sampler2D uTextureHeightmap; // the heightmap
uniform float uSunDistance = -10000000.0; // really far away vertically
uniform float uSunInclination;       // height from the heightmap plane
uniform float uSunAzimuth;           // clockwise rotation point
uniform float uQuality;              // used to determine number of steps and step size

void main()
{
    vec4 c = texture(uTexture, texelCoords);
    vec2 textureD = textureSize(uTexture, 0);
    float d = max(textureD.x, textureD.y); // use the largest dimension to determine step size etc.

    // position the sun in the centre of the screen and convert from spherical to cartesian coordinates
    vec3 sunPosition = vec3(textureD.x / 2, textureD.y / 2, 0)
                     + vec3(uSunDistance * sin(uSunInclination) * cos(uSunAzimuth),
                            uSunDistance * sin(uSunInclination) * sin(uSunAzimuth),
                            uSunDistance * cos(uSunInclination));

    float height = texture(uTextureHeightmap, texelCoords).r; // starting height
    vec3 direction = normalize(vec3(texelCoords, height) - sunPosition); // sunlight direction

    float sampleDistance = 0;
    float samples = d * uQuality;
    float stepSize = 1.0 / samples; // == 1.0 / ((samples / d) * d)

    for (int i = 0; i < samples; i++)
    {
        sampleDistance += stepSize; // increase the sample distance
        vec3 newPoint = vec3(texelCoords, height) + direction * sampleDistance; // coord of the next sample point
        float newHeight = texture(uTextureHeightmap, newPoint.xy).r; // height at that sample point

        // put it in shadow if we hit something that is higher than our starting point
        // AND higher than the ray we're casting
        if (newHeight > height && newHeight > newPoint.z)
        {
            c *= 0.5;
            break;
        }
    }
    frag_colour = c;
}
The purpose is for it to cast shadows based on a heightmap. Pretty nifty, and the results look good.
However, there's a problem: the shadows appear longer when they are horizontal compared to vertical. If I make the window size different, with a window that is taller than it is wide, I get the opposite effect, i.e. the shadows cast longer in the longer dimension.
This tells me that it has to do with the way I'm stepping in the above shader, but I can't spot the problem.
To illustrate, here is the result with a uSunAzimuth that produces a horizontally cast shadow:
And here is the exact same code with a uSunAzimuth for a vertical shadow:
It's not very pronounced in these low-resolution images, but at larger resolutions the effect gets more exaggerated. Essentially, the shadow, measured across all 360 degrees of azimuth, sweeps out an ellipse instead of a circle.
The shadow fragment shader operates on a "snapshot" of the viewport. When your scene is rendered and this "snapshot" is generated, the vertex positions are transformed by the projection matrix. The projection matrix describes the mapping from the 3D points of a scene to the 2D points of the viewport, and it takes the aspect ratio of the viewport into account.
(See Both depth buffer and triangle face orientation are reversed in OpenGL,
and Transform the modelMatrix.)
As a result, the height map (uTextureHeightmap) represents a rectangular field of view that depends on the aspect ratio.
But the texture coordinates you use to access the height map describe a square in the range (0, 0) to (1, 1).
This mismatch has to be balanced out by scaling with the aspect ratio.
vec3 direction = ....;
float aspectRatio = textureD.x / textureD.y;
direction.xy *= vec2( 1.0/aspectRatio, 1.0 );
I just needed to adjust the direction slightly.
float aspectCorrection = textureD.x / textureD.y;
...
vec3 direction = normalize(vec3(texelCoords,height) - sunPosition);
direction.y *= aspectCorrection;

GBUFFER Decal Projection and scaling

I have been working on projecting decals on to anything that the decal's bounding box encapsulates. After reading and trying numerous code snippets (usually in HLSL), I have a somewhat working method in GLSL for projecting the decals.
Let me start with trying to explain what I'm doing and how this works (so far).
The code below is now fixed and works!
This is all while in the perspective view mode.
I send 2 uniforms to the fragment shader, "tr" and "bl". These are the 2 corners of the bounding box. I can and will replace these with hard-coded sizes, because they are the size of the decal's original bounding box: tr = vec3(.5, .5, .5) and bl = vec3(-.5, -.5, -.5). I'd prefer to find a way to do the position tests in the decal's transformed state (more about this at the end).
Adding this for clarity: the vertex emitted from the vertex program is the bounding box multiplied by the decal's matrix and then by the model-view-projection matrix. I use this for the next step:
With that vertex, I get the depth value from the depth texture and with it, calculate the position in world space using the inverse of the projection matrix.
Next, I translate this position using the inverse of the decal's matrix (the matrix that scales, rotates and translates the 1,1,1 cube to its world location). I thought that by using the inverse of the decal's transform matrix, the correct size and rotation of the screen point would be handled, but it is not.
Vertex Program:
//Decals color pass.
#version 330 compatibility

out mat4 matPrjInv;
out vec4 positionSS;
out vec4 positionWS;
out mat4 invd_mat;

uniform mat4 decal_matrix;

void main(void)
{
    gl_Position = decal_matrix * gl_Vertex;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Position;
    positionWS = decal_matrix * gl_Vertex;
    positionSS = gl_Position;
    matPrjInv = inverse(gl_ModelViewProjectionMatrix);
    invd_mat = inverse(decal_matrix);
}
Fragment Program:
#version 330 compatibility

layout (location = 0) out vec4 gPosition;
layout (location = 1) out vec4 gNormal;
layout (location = 2) out vec4 gColor;

uniform sampler2D depthMap;
uniform sampler2D colorMap;
uniform sampler2D normalMap;

uniform mat4 matrix;
uniform vec3 tr;
uniform vec3 bl;

in vec2 TexCoords;
in vec4 positionSS; // screen space
in vec4 positionWS; // world space
in mat4 invd_mat;   // inverse decal matrix
in mat4 matPrjInv;  // inverse projection matrix

void clip(vec3 v)
{
    if (v.x > tr.x || v.x < bl.x) { discard; }
    if (v.y > tr.y || v.y < bl.y) { discard; }
    if (v.z > tr.z || v.z < bl.z) { discard; }
}

vec2 postProjToScreen(vec4 position)
{
    vec2 screenPos = position.xy / position.w;
    return 0.5 * (vec2(screenPos.x, screenPos.y) + 1);
}

void main()
{
    // Calculate UVs
    vec2 UV = postProjToScreen(positionSS);

    // Sample the depth from the depth sampler
    float Depth = texture2D(depthMap, UV).x * 2.0 - 1.0;

    // Recreate the world position out of the UV coordinates and the depth sample
    vec4 ScreenPosition;
    ScreenPosition.xy = UV * 2.0 - 1.0;
    ScreenPosition.z = Depth;
    ScreenPosition.w = 1.0;

    // Transform position from screen space to world space
    vec4 WorldPosition = matPrjInv * ScreenPosition;
    WorldPosition.xyz /= WorldPosition.w;
    WorldPosition.w = 1.0;

    // Transform to the decal's original position and size (1 x 1 x 1)
    WorldPosition = invd_mat * WorldPosition;
    clip(WorldPosition.xyz);

    // Get UVs for the textures
    WorldPosition.xy += 0.5;
    WorldPosition.y *= -1.0;

    vec4 bump = texture2D(normalMap, WorldPosition.xy);
    gColor = texture2D(colorMap, WorldPosition.xy);

    //Going to have to do decals in 2 passes..
    //Blend doesn't work with GBUFFER.
    //Lots more to sort out.
    gNormal.xyz = bump.xyz;
    gPosition = positionWS;
}
And here are a couple of images showing what's wrong.
What I get for the projection:
And this is the actual size of the decals.. Much larger than what my shader is creating!
I have tried creating a new matrix using the decal's and the projection matrix to construct a sort of "lookat" matrix and translate the screen position into the decal's post-transformed state. I have not been able to get this working. Somewhere I am missing something, but where? I thought that translating using the inverse of the decal's matrix would deal with the transform and put the screen position in the proper transformed state. Ideas?
Updated the code for the texture UVs.. You may have to fiddle with the y and x depending on whether your texture is flipped on x or y. I also fixed the clip sub so it works correctly. As it is, this code now works. I will update this more if needed so others don't have to go through the pain I did to get it working.
Some issues remain to resolve, such as decals lying over each other: the one on top overwrites the one below. I think I will have to accumulate the colors and normals into the default FBO and then blend (add) them into the GBUFFER textures before or during the lighting pass. Adding more screen-size textures is not a great idea, so I will need to be creative and recycle any textures I can.
I found the solution to decals overlaying each other.
Turn OFF depth masking while drawing the decals and turn it back on afterwards:

glDepthMask(GL_FALSE);
// ... draw the decals ...
glDepthMask(GL_TRUE);
OK.. I'm so excited. I found the issue.
I updated the code above again.
I had a mistake in what I was sending the shader for tr and bl:
Here is the change to clip:
void clip(vec3 v)
{
    if (v.x > tr.x || v.x < bl.x) { discard; }
    if (v.y > tr.y || v.y < bl.y) { discard; }
    if (v.z > tr.z || v.z < bl.z) { discard; }
}

OpenGL Computing Normals and TBN Matrix from Depth Buffer (SSAO implementation)

I'm implementing SSAO in OpenGL, following this tutorial: John Chapman SSAO
Basically the technique described uses a hemispheric kernel which is oriented along the fragment's normal. The view-space z position of the sample is then compared to its screen-space depth buffer value.
If the value in the depth buffer is higher, it means the sample ended up in a geometry so this fragment should be occluded.
The goal of this technique is to get rid of the classic implementation artifact where objects' flat faces are greyed out.
I have the same implementation, with two small differences:
I'm not using a Noise texture to rotate my kernel, so I have banding artifacts, that's fine for now
I don't have access to a buffer with Per-pixel normals, so I have to compute my normal and TBN matrix only using the depth buffer.
The algorithm seems to be working fine; I can see the fragments being occluded. BUT I still have my faces greyed out...
IMO it's coming from the way I'm calculating my TBN matrix. The normals look OK, but something must be wrong, as my kernel doesn't seem to be properly aligned, causing samples to end up in the faces.
Screenshots are with a kernel of 8 samples and a radius of 0.1. The first is only the result of the SSAO pass, and the second one is the debug render of the generated normals.
Here is the code for the function that computes the Normal and TBN Matrix
mat3 computeTBNMatrixFromDepth(in sampler2D depthTex, in vec2 uv)
{
    // Compute the normal and TBN matrix
    float ld = -getLinearDepth(depthTex, uv);
    vec3 x = vec3(uv.x, 0., ld);
    vec3 y = vec3(0., uv.y, ld);
    x = dFdx(x);
    y = dFdy(y);
    x = normalize(x);
    y = normalize(y);
    vec3 normal = normalize(cross(x, y));
    return mat3(x, y, normal);
}
And the SSAO shader
#include "helper.glsl"

in vec2 vertTexcoord;
uniform sampler2D depthTex;

const int MAX_KERNEL_SIZE = 8;
uniform vec4 gKernel[MAX_KERNEL_SIZE];

// Kernel radius in view space (meters)
const float KERNEL_RADIUS = .1;

uniform mat4 cameraProjectionMatrix;
uniform mat4 cameraProjectionMatrixInverse;

out vec4 FragColor;

void main()
{
    // Get the depth of the current pixel from the depth buffer (stored in the red channel)
    float originDepth = texture(depthTex, vertTexcoord).r;

    // Debug linear depth. Depth buffer is in the range [0, 1].
    float oLinearDepth = getLinearDepth(depthTex, vertTexcoord);

    // Compute the view space position of this point from its depth value
    vec4 viewport = vec4(0, 0, 1, 1);
    vec3 originPosition = getViewSpaceFromWindow(cameraProjectionMatrix, cameraProjectionMatrixInverse,
                                                 viewport, vertTexcoord, originDepth);

    mat3 lookAt = computeTBNMatrixFromDepth(depthTex, vertTexcoord);
    vec3 normal = lookAt[2];

    float occlusion = 0.;
    for (int i = 0; i < MAX_KERNEL_SIZE; i++)
    {
        // Align the kernel hemisphere on the fragment normal by multiplying the sample by the TBN
        vec3 samplePosition = lookAt * gKernel[i].xyz;

        // We want the sample position in view space, scaled by the kernel radius
        samplePosition = originPosition + samplePosition * KERNEL_RADIUS;

        // Project the sample position into screen space
        vec4 sampleOffset = vec4(samplePosition.xyz, 1.0);
        sampleOffset = cameraProjectionMatrix * sampleOffset;
        sampleOffset.xyz /= sampleOffset.w;
        sampleOffset.xyz = sampleOffset.xyz * 0.5 + 0.5;

        // Get the linear depth of the depth buffer at the projected sample position
        float sampleOffsetLinearDepth = -getLinearDepth(depthTex, sampleOffset.xy);

        // Range check, so that objects outside the kernel radius are not taken into account
        float rangeCheck = abs(originPosition.z - sampleOffsetLinearDepth) < KERNEL_RADIUS ? 1.0 : 0.0;

        // If the stored depth is in front of the sample, the sample is occluded
        occlusion += (sampleOffsetLinearDepth >= samplePosition.z ? 1.0 : 0.0) * rangeCheck;
    }

    occlusion = 1.0 - (occlusion / MAX_KERNEL_SIZE);
    FragColor = vec4(vec3(occlusion), 1.0);
}
Update 1
This variation of the TBN calculation function gives the same results:
mat3 computeTBNMatrixFromDepth(in sampler2D depthTex, in vec2 uv)
{
    // Compute the normal and TBN matrix
    float ld = -getLinearDepth(depthTex, uv);
    vec3 a = vec3(uv, ld);
    vec3 x = vec3(uv.x + dFdx(uv.x), uv.y, ld + dFdx(ld));
    vec3 y = vec3(uv.x, uv.y + dFdy(uv.y), ld + dFdy(ld));
    //x = dFdx(x);
    //y = dFdy(y);
    //x = normalize(x);
    //y = normalize(y);
    vec3 normal = normalize(cross(x - a, y - a));
    vec3 first_axis = cross(normal, vec3(1.0f, 0.0f, 0.0f));
    vec3 second_axis = cross(first_axis, normal);
    return mat3(normalize(first_axis), normalize(second_axis), normal);
}
I think the problem is probably that you are mixing coordinate systems. You are using texture coordinates in combination with the linear depth. Imagine two vertical surfaces facing slightly to the left of the screen. Both have the same angle from the vertical plane and should thus have the same normal, right?
But let's then imagine that one of these surfaces is much further from the camera. Since the dFdx/dFdy functions basically tell you the difference to the neighboring pixel, the surface far away from the camera will have a greater linear depth difference over one pixel than the surface close to the camera. But the uv.x / uv.y derivatives will have the same value. That means that you will get different normals depending on the distance from the camera.
The solution is to calculate the view coordinate and use the derivative of that to calculate the normal.
vec3 viewFromDepth(in sampler2D depthTex, in vec2 uv, in vec3 view)
{
    float ld = -getLinearDepth(depthTex, uv);
    /// I assume ld is negative for fragments in front of the camera;
    /// not sure how getLinearDepth is implemented
    vec3 z_scaled_view = (view / view.z) * ld;
    return z_scaled_view;
}

mat3 computeTBNMatrixFromDepth(in sampler2D depthTex, in vec2 uv, in vec3 view)
{
    vec3 viewPos = viewFromDepth(depthTex, uv, view);
    vec3 view_normal = normalize(cross(dFdx(viewPos), dFdy(viewPos)));
    vec3 first_axis = cross(view_normal, vec3(1.0f, 0.0f, 0.0f));
    vec3 second_axis = cross(first_axis, view_normal);
    return mat3(view_normal, normalize(first_axis), normalize(second_axis));
}
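For completeness, here is a sketch of one way to produce the view argument (the un-normalized view-space ray through the fragment) in the vertex shader of a full-screen pass. It reuses the question's cameraProjectionMatrixInverse uniform; the other names are illustrative:

in vec2 vertTexcoord;                    // 0..1 across the screen
out vec3 viewRay;
uniform mat4 cameraProjectionMatrixInverse;

void main()
{
    vec2 ndc = vertTexcoord * 2.0 - 1.0; // to normalized device coordinates
    vec4 p = cameraProjectionMatrixInverse * vec4(ndc, 1.0, 1.0);
    viewRay = p.xyz / p.w;               // point on the far plane, in view space
    gl_Position = vec4(ndc, 0.0, 1.0);
}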

Uniform point arrays and managing fragment shader coordinates systems

My aim is to pass an array of points to the shader, calculate their distance to the fragment, and paint them with a circle colored with a gradient depending on that computation.
For example:
(From a working example I set up on shader toy)
Unfortunately it isn't clear to me how I should calculate and convert the coordinates passed for processing inside the shader.
What I'm currently trying is to pass two arrays of floats - one for the x positions and one for the y positions of each point - to the shader through a uniform. Then, inside the shader, I iterate through each point like so:
#ifdef GL_ES
precision mediump float;
precision mediump int;
#endif

uniform float sourceX[100];
uniform float sourceY[100];
uniform vec2 resolution;

// gl_FragCoord is built in; it must not be redeclared
varying vec4 vertColor;
varying vec2 center;
varying vec2 pos;

void main()
{
    float intensity = 0.0;
    for (int i = 0; i < 100; i++)
    {
        vec2 source = vec2(sourceX[i], sourceY[i]);
        vec2 position = gl_FragCoord.xy / resolution.xy;
        float d = distance(position, source);
        intensity += exp(-0.5 * d * d);
    }
    intensity = 3.0 * pow(intensity, 0.02);

    if (intensity <= 1.0)
        gl_FragColor = vec4(0.0, intensity * 0.5, 0.0, 1.0);
    else if (intensity <= 2.0)
        gl_FragColor = vec4(intensity - 1.0, 0.5 + (intensity - 1.0) * 0.5, 0.0, 1.0);
    else
        gl_FragColor = vec4(1.0, 3.0 - intensity, 0.0, 1.0);
}
But that doesn't work - and I believe it may be because I'm trying to work with the pixel coordinates without properly translating them. Could anyone explain to me how to make this work?
Update:
The current result is:
The sketch's code is:
PShader pointShader;
float[] sourceX;
float[] sourceY;

void setup()
{
  size(1024, 1024, P3D);
  background(255);

  sourceX = new float[100];
  sourceY = new float[100];
  for (int i = 0; i < 100; i++)
  {
    sourceX[i] = random(0, 1023);
    sourceY[i] = random(0, 1023);
  }

  pointShader = loadShader("pointfrag.glsl", "pointvert.glsl");
  shader(pointShader, POINTS);
  pointShader.set("sourceX", sourceX);
  pointShader.set("sourceY", sourceY);
  pointShader.set("resolution", float(width), float(height));
}

void draw()
{
  for (int i = 0; i < 100; i++) {
    strokeWeight(60);
    point(sourceX[i], sourceY[i]);
  }
}
while the vertex shader is:
#define PROCESSING_POINT_SHADER

uniform mat4 projection;
uniform mat4 transform;

attribute vec4 vertex;
attribute vec4 color;
attribute vec2 offset;

varying vec4 vertColor;
varying vec2 center;
varying vec2 pos;

void main() {
  vec4 clip = transform * vertex;
  gl_Position = clip + projection * vec4(offset, 0, 0);
  vertColor = color;
  center = clip.xy;
  pos = offset;
}
Update:
Based on the comments it seems you have confused two different approaches:
1. Draw a single full screen polygon, pass in the points and calculate the final value once per fragment using a loop in the shader.
2. Draw bounding geometry for each point, calculate the density for just one point in the fragment shader and use additive blending to sum the densities of all points.
The other issue is that your points are given in pixels while the code expects a 0 to 1 range, so d is large and the points are black. Fixing this issue as #RetoKoradi describes should address the points being black, but I suspect you'll find ramp clipping issues when many points are in close proximity. Passing points into the shader limits scalability and is inefficient unless the points cover the whole viewport.
As below, I think sticking with approach 2 is better. To restructure your code for it, remove the loop, don't pass in the array of points and use center as the point coordinate instead:
//calc center in pixel coordinates
vec2 centerPixels = (center * 0.5 + 0.5) * resolution.xy;
//find the distance in pixels (avoiding aspect ratio issues)
float dPixels = distance(gl_FragCoord.xy, centerPixels);
//scale down to the 0 to 1 range
float d = dPixels / resolution.y;
//write out the intensity
gl_FragColor = vec4(exp(-0.5*d*d));
Draw this to a texture (from comments: opengl-tutorial.org code and this question) with additive blending:
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);
Now that texture will contain intensity as it was after your original loop. In another fragment shader during a full screen pass (draw a single triangle that covers the whole viewport), continue with:
uniform sampler2D intensityTex;
...
float intensity = texture2D(intensityTex, gl_FragCoord.xy/resolution.xy).r;
intensity = 3.0*pow(intensity, 0.02);
...
The code you have shown is fine, assuming you're drawing a full screen polygon so the fragment shader runs once for each pixel. Potential issues are:
resolution isn't set correctly
The point coordinates aren't in the range 0 to 1 on the screen.
Although minor, d will be stretched by the aspect ratio, so you might be better off scaling the points up to pixel coordinates and dividing the distance by resolution.y.
This looks pretty similar to creating a density field for 2D metaballs. For performance you're best off limiting the density function for each point so it doesn't go on forever, then splatting discs into a texture using additive blending. This saves processing the pixels a point doesn't affect (just like in deferred shading). The result is the density field, or in your case per-pixel intensity.
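A minimal sketch of such a finite-support falloff (names illustrative): everything past radius contributes exactly zero, so a disc of that radius is all the bounding geometry a point needs:

float density(vec2 fragPos, vec2 pointPos, float radius)
{
    float d = distance(fragPos, pointPos) / radius; // 0 at the centre, 1 at the edge
    float t = clamp(1.0 - d * d, 0.0, 1.0);         // reaches zero at the radius
    return t * t;                                   // smooth falloff, compact support
}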
These are a little related:
2D OpenGL ES Metaballs on android (no answers yet)
calculate light volume radius from intensity
gl_PointSize Corresponding to World Space Size
It looks like the point center and fragment position are in different coordinate spaces when you subtract them:
vec2 source = vec2(sourceX[i],sourceY[i]);
vec2 position = ( gl_FragCoord.xy / resolution.xy );
float d = distance(position, source);
Based on your explanation and code, sourceX and sourceY are in window coordinates, meaning that they are in units of pixels. gl_FragCoord is in the same coordinate space. And even though you don't show it directly, I assume that resolution is the size of the window in pixels.
This means that:
vec2 position = ( gl_FragCoord.xy / resolution.xy );
calculates the normalized position of the fragment within the window, in the range [0.0, 1.0] for both x and y. But then on the next line:
float d = distance(position, source);
you subtract source, which is still in window coordinates, from this position in normalized coordinates.
Since it looks like you wanted the distance in normalized coordinates, which makes sense, you'll also need to normalize source:
vec2 source = vec2(sourceX[i],sourceY[i]) / resolution.xy;
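Putting it together, the start of the loop body from the question becomes (a sketch; only the first line changes):

vec2 source = vec2(sourceX[i], sourceY[i]) / resolution.xy;
vec2 position = gl_FragCoord.xy / resolution.xy;
float d = distance(position, source);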