Return floats from fragment shader - C++

For an application I need to create a depth image of a mesh. Basically I want an image where each pixel tells the distance between the camera center and the intersection point. I chose to do this task with OpenGL, so that most of the computation can be done on the GPU instead of the CPU.
Here is my code for the vertex shader. It computes the real-world coordinate of the intersection point and stores the coordinates in a varying vec4, so that I have access to it in the fragment shader.
varying vec4 verpos;

void main(void)
{
    gl_Position = ftransform();
    verpos = gl_ModelViewMatrix * gl_Vertex;
}
And here is the code for the fragment shader. In the fragment shader I use these coordinates to compute the Euclidean distance to the origin (0,0,0) using the Pythagorean theorem. To get access to the computed value on the CPU, my plan is to store the distance as a color using gl_FragColor, and extract the color values using glReadPixels(0, 0, width, height, GL_RED, GL_FLOAT, &distances[0]);
varying vec4 verpos;

void main()
{
    float x = verpos.x;
    float y = verpos.y;
    float z = verpos.z;
    float depth = sqrt(x*x + y*y + z*z) / 5000.0;
    gl_FragColor = vec4(depth, depth, depth, 1.0);
}
The problem is that I receive the results only with a precision of 1/255 (0.003921569).
The resulting values contain for instance 0.364705890, 0.364705890, 0.364705890, 0.364705890, 0.368627459, 0.368627459, 0.368627459. Most of the values in a row are exactly the same, and then there is a jump of 1/255.
So somewhere in between gl_FragColor and glReadPixels, OpenGL converts the floats to 8 bits and afterwards converts them back to floats.
How can I extract the distance values as floats without losing precision?

Attach a GL_R32F (or GL_R16F) texture to an FBO and render to it. Then extract the pixels with glGetTexImage(), or with glReadPixels() as before.
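For example, a minimal host-side sketch (assuming a GL 3.0+ context; tex and fbo are placeholder names, error checking is omitted, and you need your GL loader header plus <vector>):

// create a single-channel 32-bit float texture and attach it to an FBO
GLuint tex = 0, fbo = 0;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, width, height, 0, GL_RED, GL_FLOAT, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);
// attach a depth renderbuffer as well if you need depth testing while rendering the mesh

// ... render the mesh with the shaders above ...

// read the float values back; no 8-bit quantization happens with a float attachment
std::vector<float> distances(width * height);
glReadPixels(0, 0, width, height, GL_RED, GL_FLOAT, distances.data());
// alternatively: glGetTexImage(GL_TEXTURE_2D, 0, GL_RED, GL_FLOAT, distances.data());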

Related

Why differs gl_FragCoord.z from ((pos.z / pos.w) + 1.0) * 0.5?

Does anyone know why 'depth' (vertex shader) differs from 'gl_FragCoord.z' (rendered by OpenGL)? Especially with decreasing z the difference becomes larger. Is it possible that 'depth' is more precise at higher z values?
.vsh

out float depth;

void main (void) {
    vec4 pos = mvpMatrix * vertex;
    depth = ((pos.z / pos.w) + 1.0) * 0.5;
    gl_Position = pos;
}

.fsh

in float depth;

void main(void) {
    gl_FragDepth = depth; // or gl_FragCoord.z;
}
There are a couple of issues with your approach; the main points are:
gl_FragCoord.z is the hyperbolically distorted window-space z value. However, the hyperbolic z/w value of each vertex is just interpolated linearly in screen space for each fragment. But when you use a varying like out float depth = pos.z / pos.w, the GL will do a perspective-correct interpolation, which is non-linear. You can fix this by declaring the varying as noperspective out float depth.
pos.z/pos.w doesn't even make sense in all cases. Think about it: if a vertex lies in the plane through the camera, you get pos.w = 0 and no valid result. gl_FragCoord.z does not have this issue because clipping is done before the divide, and the divide is carried out for the new vertices generated on the near plane, which you are never going to see (there is no vertex shader invocation for them).
The issue is also present when vertices lie behind the camera: they end up mirrored in front of the camera. If a primitive has vertices on both sides of the camera plane, the interpolated depth value will be complete garbage, no matter which interpolation qualifier you choose.
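A minimal sketch of that change, assuming GLSL 1.30+ where the noperspective qualifier is available (the w <= 0 problems from the last two points still apply):

// vertex shader - the question's .vsh with the interpolation qualifier added
uniform mat4 mvpMatrix;
in vec4 vertex;
noperspective out float depth;   // interpolated linearly in screen space, like gl_FragCoord.z

void main(void) {
    vec4 pos = mvpMatrix * vertex;
    depth = ((pos.z / pos.w) + 1.0) * 0.5;
    gl_Position = pos;
}

// fragment shader - the question's .fsh
noperspective in float depth;

void main(void) {
    gl_FragDepth = depth;
}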

Why is my frag shader casting long shadows horizontally and short shadows vertically?

I have the following fragment shader:
#version 330
layout(location=0) out vec4 frag_colour;
in vec2 texelCoords;
uniform sampler2D uTexture; // the color
uniform sampler2D uTextureHeightmap; // the heightmap
uniform float uSunDistance = -10000000.0; // really far away vertically
uniform float uSunInclination; // height from the heightmap plane
uniform float uSunAzimuth; // clockwise rotation point
uniform float uQuality; // used to determine number of steps and steps size
void main()
{
    vec4 c = texture(uTexture, texelCoords);
    vec2 textureD = textureSize(uTexture, 0);
    float d = max(textureD.x, textureD.y); // use the largest dimension to determine stepsize etc

    // position the sun in the centre of the screen and convert from spherical to cartesian coordinates
    vec3 sunPosition = vec3(textureD.x / 2, textureD.y / 2, 0)
                     + vec3(uSunDistance * sin(uSunInclination) * cos(uSunAzimuth),
                            uSunDistance * sin(uSunInclination) * sin(uSunAzimuth),
                            uSunDistance * cos(uSunInclination));

    float height = texture2D(uTextureHeightmap, texelCoords).r;          // starting height
    vec3 direction = normalize(vec3(texelCoords, height) - sunPosition); // sunlight direction
    float sampleDistance = 0;
    float samples = d * uQuality;
    float stepSize = 1.0 / ((samples / d) * d);

    for (int i = 0; i < samples; i++)
    {
        sampleDistance += stepSize;                                             // increase the sample distance
        vec3 newPoint = vec3(texelCoords, height) + direction * sampleDistance; // get the coord for the next sample point
        float newHeight = texture2D(uTextureHeightmap, newPoint.xy).r;          // get the height of that sample point

        // put it in shadow if we hit something that is higher than our starting point AND is higher than the ray we're casting
        if (newHeight > height && newHeight > newPoint.z)
        {
            c *= 0.5;
            break;
        }
    }

    frag_colour = c;
}
The purpose is for it to cast shadows based on a heightmap. Pretty nifty, and the results look good.
However, there's a problem: the shadows appear longer when they are horizontal compared to vertical. If I make the window size different, with a window that is taller than wide, I get the opposite effect, i.e. the shadows cast longer along the longer dimension.
This tells me that it has to do with the way I'm stepping in the above shader, but I can't tell what the problem is.
To illustrate, here is the result with a uSunAzimuth that gives a horizontally cast shadow:
And here is the exact same code with a uSunAzimuth for a vertical shadow:
It's not very pronounced in these low-resolution images, but at larger resolutions the effect gets more exaggerated. Essentially, if you measure how the shadow is cast across all 360 degrees of azimuth, it clears out an ellipse instead of a circle.
The shadow fragment shader operates on a "snapshot" of the viewport. When your scene is rendered and this "snapshot" is generated, the vertex positions are transformed by the projection matrix. The projection matrix describes the mapping from the 3D points of a scene to the 2D points of the viewport, and it takes the aspect ratio of the viewport into account.
(See Both depth buffer and triangle face orientation are reversed in OpenGL and Transform the modelMatrix.)
As a result, the height map (uTextureHeightmap) represents a rectangular field of view that depends on the aspect ratio.
But the texture coordinates, which you use to access the height map, describe a square in the range (0, 0) to (1, 1).
This mismatch has to be compensated for by scaling with the aspect ratio.
vec3 direction = ....;
float aspectRatio = textureD.x / textureD.y;
direction.xy *= vec2( 1.0/aspectRatio, 1.0 );
I just needed to adjust the direction slightly.
float aspectCorrection = textureD.x / textureD.y;
...
vec3 direction = normalize(vec3(texelCoords,height) - sunPosition);
direction.y *= aspectCorrection;

Uniform point arrays and managing fragment shader coordinates systems

My aim is to pass an array of points to the shader, calculate their distance to the fragment, and paint them with a circle colored with a gradient depending on that computation.
For example:
(From a working example I set up on shader toy)
Unfortunately it isn't clear to me how I should calculate and convert the coordinates passed for processing inside the shader.
What I'm currently trying is to pass two arrays of floats - one for the x positions and one for the y positions of each point - to the shader through uniforms. Then, inside the shader, I iterate over the points like so:
#ifdef GL_ES
precision mediump float;
precision mediump int;
#endif

uniform float sourceX[100];
uniform float sourceY[100];
uniform vec2 resolution;

in vec4 gl_FragCoord;

varying vec4 vertColor;
varying vec2 center;
varying vec2 pos;

void main()
{
    float intensity = 0.0;

    for (int i = 0; i < 100; i++)
    {
        vec2 source = vec2(sourceX[i], sourceY[i]);
        vec2 position = (gl_FragCoord.xy / resolution.xy);
        float d = distance(position, source);
        intensity += exp(-0.5 * d * d);
    }

    intensity = 3.0 * pow(intensity, 0.02);

    if (intensity <= 1.0)
        gl_FragColor = vec4(0.0, intensity * 0.5, 0.0, 1.0);
    else if (intensity <= 2.0)
        gl_FragColor = vec4(intensity - 1.0, 0.5 + (intensity - 1.0) * 0.5, 0.0, 1.0);
    else
        gl_FragColor = vec4(1.0, 3.0 - intensity, 0.0, 1.0);
}
But that doesn't work - and I believe it may be because I'm trying to work with the pixel coordinates without properly translating them. Could anyone explain to me how to make this work?
Update:
The current result is:
The sketch's code is:
PShader pointShader;
float[] sourceX;
float[] sourceY;

void setup()
{
    size(1024, 1024, P3D);
    background(255);

    sourceX = new float[100];
    sourceY = new float[100];
    for (int i = 0; i < 100; i++)
    {
        sourceX[i] = random(0, 1023);
        sourceY[i] = random(0, 1023);
    }

    pointShader = loadShader("pointfrag.glsl", "pointvert.glsl");
    shader(pointShader, POINTS);
    pointShader.set("sourceX", sourceX);
    pointShader.set("sourceY", sourceY);
    pointShader.set("resolution", float(width), float(height));
}

void draw()
{
    for (int i = 0; i < 100; i++) {
        strokeWeight(60);
        point(sourceX[i], sourceY[i]);
    }
}
while the vertex shader is:
#define PROCESSING_POINT_SHADER

uniform mat4 projection;
uniform mat4 transform;

attribute vec4 vertex;
attribute vec4 color;
attribute vec2 offset;

varying vec4 vertColor;
varying vec2 center;
varying vec2 pos;

void main() {
    vec4 clip = transform * vertex;
    gl_Position = clip + projection * vec4(offset, 0, 0);

    vertColor = color;
    center = clip.xy;
    pos = offset;
}
Update:
Based on the comments it seems you have confused two different approaches:
Draw a single full screen polygon, pass in the points and calculate the final value once per fragment using a loop in the shader.
Draw bounding geometry for each point, calculate the density for just one point in the fragment shader and use additive blending to sum the densities of all points.
The other issue is that your points are given in pixels but the code expects a 0 to 1 range, so d is large and the points are black. Fixing this issue as @RetoKoradi describes should address the points being black, but I suspect you'll find ramp clipping issues when many points are in close proximity. Passing points into the shader also limits scalability and is inefficient unless the points cover the whole viewport.
As noted below, I think sticking with approach 2 is better. To restructure your code for it, remove the loop, don't pass in the array of points and use center as the point coordinate instead:
//calc center in pixel coordinates
vec2 centerPixels = (center * 0.5 + 0.5) * resolution.xy;
//find the distance in pixels (avoiding aspect ratio issues)
float dPixels = distance(gl_FragCoord.xy, centerPixels);
//scale down to the 0 to 1 range
float d = dPixels / resolution.y;
//write out the intensity
gl_FragColor = vec4(exp(-0.5*d*d));
Draw this to a texture (from comments: opengl-tutorial.org code and this question) with additive blending:
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);
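For the accumulation target itself, a hedged host-side sketch in plain OpenGL/C++ (the Processing equivalents differ); the GL_R16F format and the names accumTex/accumFbo are my assumptions, not part of the answer:

GLuint accumTex = 0, accumFbo = 0;
glGenTextures(1, &accumTex);
glBindTexture(GL_TEXTURE_2D, accumTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R16F, width, height, 0, GL_RED, GL_FLOAT, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glGenFramebuffers(1, &accumFbo);
glBindFramebuffer(GL_FRAMEBUFFER, accumFbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, accumTex, 0);

glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClear(GL_COLOR_BUFFER_BIT);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);   // additive: per-point intensities sum up per pixel

// ... draw one disc / bounding quad per point with the splat shader above ...

glDisable(GL_BLEND);
glBindFramebuffer(GL_FRAMEBUFFER, 0);   // accumTex now holds the summed intensity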
Now that texture will contain intensity as it was after your original loop. In another fragment shader during a full screen pass (draw a single triangle that covers the whole viewport), continue with:
uniform sampler2D intensityTex;
...
float intensity = texture2D(intensityTex, gl_FragCoord.xy/resolution.xy).r;
intensity = 3.0*pow(intensity, 0.02);
...
The code you have shown is fine, assuming you're drawing a full screen polygon so the fragment shader runs once for each pixel. Potential issues are:
resolution isn't set correctly
The point coordinates aren't in the range 0 to 1 on the screen.
Although minor, d will be stretched by the aspect ratio, so you might be better off scaling the points up to pixel coordinates and dividing the distance by resolution.y.
This looks pretty similar to creating a density field for 2D metaballs. For performance you're best off limiting the density function of each point so it doesn't go on forever, then splatting discs into a texture using additive blending. This saves processing the pixels a point doesn't affect (just like in deferred shading). The result is the density field, or in your case per-pixel intensity.
These are a little related:
2D OpenGL ES Metaballs on android (no answers yet)
calculate light volume radius from intensity
gl_PointSize Corresponding to World Space Size
It looks like the point center and fragment position are in different coordinate spaces when you subtract them:
vec2 source = vec2(sourceX[i],sourceY[i]);
vec2 position = ( gl_FragCoord.xy / resolution.xy );
float d = distance(position, source);
Based on your explanation and code, sourceX and sourceY are in window coordinates, meaning that they are in units of pixels. gl_FragCoord is in the same coordinate space. And even though you don't show it directly, I assume that resolution is the size of the window in pixels.
This means that:
vec2 position = ( gl_FragCoord.xy / resolution.xy );
calculates the normalized position of the fragment within the window, in the range [0.0, 1.0] for both x and y. But then on the next line:
float d = distance(position, source);
you subtract source, which is still in window coordinates, from this position in normalized coordinates.
Since it looks like you wanted the distance in normalized coordinates, which makes sense, you'll also need to normalize source:
vec2 source = vec2(sourceX[i],sourceY[i]) / resolution.xy;
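For clarity, the start of main() with that one change applied (and position hoisted out of the loop, since it does not depend on i) would look roughly like this:

float intensity = 0.0;
vec2 position = gl_FragCoord.xy / resolution.xy;   // fragment position in the 0..1 range

for (int i = 0; i < 100; i++)
{
    // normalize the point the same way, so both arguments of distance() are in the same space
    vec2 source = vec2(sourceX[i], sourceY[i]) / resolution.xy;
    float d = distance(position, source);
    intensity += exp(-0.5 * d * d);
}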

distortion correction with gpu shader bug

So I have a camera with a wide angle lens. I know the distortion coefficients, the focal length, the optical center. I want to undistort the image I get from this camera. I used OpenCV for the first try (cv::undistort), which worked well, but was way too slow.
Now I want to do this on the gpu. There is a shader doing exactly this documented in http://willsteptoe.com/post/67401705548/ar-rift-aligning-tracking-and-video-spaces-part-5
the formulas can be seen here:
http://en.wikipedia.org/wiki/Distortion_%28optics%29#Software_correction
So I went and implemented my own version as a glsl shader. I am sending a quad with texture coordinates on the corners between 0..1.
I assume the texture coordinates that arrive are the coordinates of the undistorted image. I calculate the coordinates for the distorted point corresponding to my texture coordinates. Then I sample the distorted image texture.
With this shader nothing in the final image changes. The problem I identified through a CPU implementation is that the coefficient term is very close to zero. The numbers get smaller and smaller through the radius squaring etc., so I have a scaling problem - I just can't figure out what to do differently! I tried everything... I guess it is something quite obvious, since this kind of process seems to work for a lot of people.
I left out the tangential distortion correction for simplicity.
#version 330 core

in vec2 UV;
out vec4 color;

uniform sampler2D textureSampler;

void main()
{
    vec2 focalLength = vec2(438.568f, 437.699f);
    vec2 opticalCenter = vec2(667.724f, 500.059f);
    vec4 distortionCoefficients = vec4(-0.035109f, -0.002393f, 0.000335f, -0.000449f);

    const vec2 imageSize = vec2(1280.f, 960.f);

    vec2 opticalCenterUV = opticalCenter / imageSize;
    vec2 shiftedUVCoordinates = (UV - opticalCenterUV);
    vec2 lensCoordinates = shiftedUVCoordinates / focalLength;

    float radiusSquared = sqrt(dot(lensCoordinates, lensCoordinates));
    float radiusQuadrupled = radiusSquared * radiusSquared;

    float coefficientTerm = distortionCoefficients.x * radiusSquared + distortionCoefficients.y * radiusQuadrupled;

    vec2 distortedUV = ((lensCoordinates + lensCoordinates * (coefficientTerm))) * focalLength;
    vec2 resultUV = (distortedUV + opticalCenterUV);

    color = texture2D(textureSampler, resultUV);
}
I see two issues with your solution. The main issue is that you mix two different spaces. You seem to work in [0,1] texture space by converting the optical center to that space, but you did not adjust focalLength. The key point is that for such a distortion model, the focal length is specified in pixels. However, in texture space a pixel is not 1 base unit wide anymore, but 1/width and 1/height units, respectively.
You could add vec2 focalLengthUV = focalLength / imageSize, but you will see that both divisions cancel each other out when you calculate lensCoordinates. It is much more convenient to convert the texture-space UV coordinates to pixel coordinates and work in that space directly:
vec2 lensCoordinates = (UV * imageSize - opticalCenter) / focalLength;
(and also change the calculations for distortedUV and resultUV accordingly).
There is still one issue with the approach I have sketched so far: the conventions of that pixel space. In GL, the origin is the lower-left corner, while in most pixel spaces the origin is at the top left, so you might have to flip the y coordinate when doing the conversion. Another thing is where exactly the pixel centers are located. So far, the code assumes that pixel centers are at integer + 0.5: the texture coordinate (0,0) is not the center of the lower-left pixel, but its corner point. The parameters you use for the distortion might assume pixel centers at integers (I don't know OpenCV's conventions), so instead of the conversion pixelSpace = uv * imageSize, you might need to offset this by half a pixel, like pixelSpace = uv * imageSize - vec2(0.5).
The second issue I see is this line:
float radiusSquared = sqrt(dot(lensCoordinates, lensCoordinates));
That sqrt is not correct here, since dot(a, a) already gives the squared length of the vector a.
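Putting both fixes together, a sketch of the corrected shader could look like the one below (tangential terms are still left out; the half-pixel offset and y flip mentioned above are only hinted at in a comment, since they depend on OpenCV's conventions):

#version 330 core

in vec2 UV;
out vec4 color;

uniform sampler2D textureSampler;

void main()
{
    vec2 focalLength = vec2(438.568, 437.699);      // in pixels
    vec2 opticalCenter = vec2(667.724, 500.059);    // in pixels
    vec4 distortionCoefficients = vec4(-0.035109, -0.002393, 0.000335, -0.000449);
    const vec2 imageSize = vec2(1280.0, 960.0);

    // work in pixel space; depending on OpenCV's conventions you may need
    // an extra "- vec2(0.5)" and/or a flip of the y coordinate here
    vec2 lensCoordinates = (UV * imageSize - opticalCenter) / focalLength;

    float radiusSquared = dot(lensCoordinates, lensCoordinates);   // no sqrt
    float radiusQuadrupled = radiusSquared * radiusSquared;

    float coefficientTerm = distortionCoefficients.x * radiusSquared
                          + distortionCoefficients.y * radiusQuadrupled;

    vec2 distortedPixel = lensCoordinates * (1.0 + coefficientTerm) * focalLength + opticalCenter;
    vec2 resultUV = distortedPixel / imageSize;      // back to texture space for sampling

    color = texture(textureSampler, resultUV);
}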

Per-Vertex Normals from perlin noise?

I'm generating terrain in an OpenGL geometry shader and am having trouble calculating normals for lighting. I'm generating the terrain dynamically each frame with a Perlin noise function implemented in the geometry shader. Because of this, I need an efficient way to calculate normals per vertex based on the noise function alone (no texture or anything). I am able to take the cross product of two sides to get face normals, but because the geometry is generated dynamically I cannot then go back and smooth the face normals into vertex normals. How can I get vertex normals on the fly just using the noise function that generates the height of my terrain along the y axis (the height being between -1 and 1)? I believe I have to sample the noise function four times for each vertex, but I tried something like the following and it didn't work...
vec3 xP1 = vertex + vec3(1.0, 0.0, 0.0);
vec3 xN1 = vertex + vec3(-1.0, 0.0, 0.0);
vec3 zP1 = vertex + vec3(0.0, 0.0, 1.0);
vec3 zN1 = vertex + vec3(0.0, 0.0, -1.0);
float sx = snoise(xP1) - snoise(xN1);
float sz = snoise(zP1) - snoise(zN1);
vec3 n = vec3(-sx, 1.0, sz);
normalize(n);
return n;
The above actually generated lighting that moved around like Perlin noise! So any advice on how I can get the per-vertex normals correctly?
The normal is the vector perpendicular to the tangent (also known as the slope). The slope of a function is its derivative; in n dimensions, its n partial derivatives. So you sample the noise around a center point P and at P ± (δx, 0) and P ± (0, δy), with δx, δy chosen to be as small as possible, but large enough for numerical stability. This yields the tangents in each direction. Then you take their cross product, normalize the result, and you have the normal at P.
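A sketch in GLSL, where heightAt() is a hypothetical stand-in for whatever call yields the terrain height at a ground-plane position p (for the OP's 3D snoise, the remaining coordinate would be held fixed), and eps is the sampling offset:

// central differences of the height field around the ground-plane position p
float hL = heightAt(p - vec2(eps, 0.0));
float hR = heightAt(p + vec2(eps, 0.0));
float hD = heightAt(p - vec2(0.0, eps));
float hU = heightAt(p + vec2(0.0, eps));

// tangents along x and z (height runs along y), crossed to give the upward normal
vec3 tangentX = vec3(2.0 * eps, hR - hL, 0.0);
vec3 tangentZ = vec3(0.0, hU - hD, 2.0 * eps);
vec3 n = normalize(cross(tangentZ, tangentX));   // same as normalize(vec3(hL - hR, 2.0 * eps, hD - hU))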
You didn't say exactly how you were actually generating the positions. So I'm going to assume that you're using the Perlin noise to generate height values in a height map. So, for any position X, Y in the heightmap, you use a 2D noise function to generate the Z value.
So, let's assume that your position is computed as follows:
vec3 CalcPosition(in vec2 loc) {
    float height = MyNoiseFunc2D(loc);
    return vec3(loc, height);
}
This generates a 3D position. But in what space is this position in? That's the question.
Most noise functions expect loc to be two values on some particular floating-point range. How good your noise function is will determine what range you can pass values in. Now, if your model space 2D positions are not guaranteed to be within the noise function's range, then you need to transform them to that range, do the computations, and then transform it back to model space.
In so doing, you now have a 3D position. The transform for the X and Y values is simple (the reverse of the transform to the noise function's space), but what of the Z? Here, you have to apply some kind of scale to the height. The noise function will return a number on the range [0, 1), so you need to scale this range to the same model space that your X and Y values are going to. This is typically done by picking a maximum height and scaling the position appropriately. Therefore, our revised calc position looks something like this:
vec3 CalcPosition(in vec2 modelLoc, const in mat3 modelToNoise, const in mat4 noiseToModel)
{
    vec2 loc = (modelToNoise * vec3(modelLoc, 1.0)).xy; // mat3 * vec3 yields a vec3; keep the 2D part
    float height = MyNoiseFunc2D(loc);
    vec4 modelPos = noiseToModel * vec4(loc, height, 1.0);
    return modelPos.xyz;
}
The two matrices transform to the noise function's space, and then transform back. Your actual code could use less complicated structures, depending on your use case, but a full affine transformation is simple to describe.
OK, now that we have established that, what you need to keep in mind is this: nothing makes sense unless you know what space it is in. Your normal, your positions, nothing matters until you establish what space it is in.
This function returns positions in model space. We need to calculate normals in model space. To do that, we need 3 positions: the current position of the vertex, and two positions that are slightly offset from the current position. The positions we get must be in model space, or our normal will not be.
Therefore, we need to have the following function:
void CalcDeltas(in vec2 modelLoc, const in mat3 modelToNoise, const in mat4 noiseToModel, out vec3 modelXOffset, out vec3 modelYOffset)
{
    vec2 loc = (modelToNoise * vec3(modelLoc, 1.0)).xy; // as above, keep the 2D part
    vec2 xOffsetLoc = loc + vec2(delta, 0.0);
    vec2 yOffsetLoc = loc + vec2(0.0, delta);

    float xOffsetHeight = MyNoiseFunc2D(xOffsetLoc);
    float yOffsetHeight = MyNoiseFunc2D(yOffsetLoc);

    modelXOffset = (noiseToModel * vec4(xOffsetLoc, xOffsetHeight, 1.0)).xyz;
    modelYOffset = (noiseToModel * vec4(yOffsetLoc, yOffsetHeight, 1.0)).xyz;
}
Obviously, you can merge these two functions into one.
The delta value is a small offset in the space of the noise texture's input. The size of this offset depends on your noise function; it needs to be big enough to return a height that is significantly different from the one returned by the actual current position. But it needs to be small enough that you aren't pulling from random parts of the noise distribution.
You should get to know your noise function.
Now that you have the three positions (the current position, the x-offset, and the y-offset) in model space, you can compute the vertex normal in model space:
vec3 modelXGrad = modelXOffset - modelPosition;
vec3 modelYGrad = modelYOffset - modelPosition;
vec3 modelNormal = normalize(cross(modelXGrad, modelYGrad));
From here, do the usual things. But never forget to keep track of the spaces of your various vectors.
Oh, and one more thing: this should be done in the vertex shader. There's no reason to do this in a geometry shader, since none of the computations affect other vertices. Let the GPU's parallelism work for you.
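To tie the pieces together, a hedged vertex-shader sketch; CalcPosition and CalcDeltas are the functions above, while the attribute and uniform names (modelLoc, modelViewProjection, ...) are placeholders rather than part of the answer's code:

in vec2 modelLoc;

uniform mat3 modelToNoise;
uniform mat4 noiseToModel;
uniform mat4 modelViewProjection;

out vec3 modelNormal;

void main()
{
    vec3 modelPosition = CalcPosition(modelLoc, modelToNoise, noiseToModel);

    vec3 modelXOffset, modelYOffset;
    CalcDeltas(modelLoc, modelToNoise, noiseToModel, modelXOffset, modelYOffset);

    vec3 modelXGrad = modelXOffset - modelPosition;
    vec3 modelYGrad = modelYOffset - modelPosition;
    modelNormal = normalize(cross(modelXGrad, modelYGrad));

    gl_Position = modelViewProjection * vec4(modelPosition, 1.0);
}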