Convert gl_FragCoord coordinate to screen positions - opengl

I'm strictly talking about a 2d environment (in fact this is for a 2d game).
In a fragment shader, how can I convert gl_FragCoord.x and gl_FragCoord.y to screen coordinates so that the top-left pixel is 0, 0 and the bottom right is the screen resolution (for example 800, 600)? Thanks

By default the origin is at the bottom left and pixels are centered at half-integer coordinates (the bottom-left pixel is at (0.5, 0.5)). One way to achieve what you want is to redeclare gl_FragCoord with a layout qualifier:
layout(origin_upper_left) in vec4 gl_FragCoord;
or
layout(origin_upper_left, pixel_center_integer) in vec4 gl_FragCoord;
if you want the pixels to be centered on integer coordinates.
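For example, a minimal fragment shader using the redeclaration might look like this (GLSL 1.50 or the ARB_fragment_coord_conventions extension is required; the 800x600 resolution is just the example figure from the question):
#version 150
layout(origin_upper_left) in vec4 gl_FragCoord;
out vec4 fragColor;

void main() {
    // gl_FragCoord.xy now starts at the top-left, with the top-left
    // pixel centered at (0.5, 0.5) and values increasing down and right.
    fragColor = vec4(gl_FragCoord.xy / vec2(800.0, 600.0), 0.0, 1.0);
}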
Another way is to pass in the screen resolution in a uniform variable and do a bit of math:
vec2 pos = vec2(gl_FragCoord.x, resolution.y - gl_FragCoord.y);
or to get integer values:
vec2 pos = vec2(gl_FragCoord.x, resolution.y - gl_FragCoord.y) - 0.5;
Note that if you want the pixels centered on integer coordinates then the pixel at the corner opposite the origin will be resolution - 1.0 not resolution.
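Putting the uniform-based option together, a minimal sketch of a complete fragment shader (the resolution uniform has to be set from the application, e.g. to 800x600):
#version 150
uniform vec2 resolution; // screen size in pixels, e.g. vec2(800.0, 600.0)
out vec4 fragColor;

void main() {
    // Top-left origin, pixel centers at half-integers (0.5 ... resolution - 0.5).
    vec2 pos = vec2(gl_FragCoord.x, resolution.y - gl_FragCoord.y);
    // Subtract 0.5 if you want integer pixel coordinates, 0 ... resolution - 1.
    vec2 ipos = pos - 0.5;
    fragColor = vec4(ipos / (resolution - 1.0), 0.0, 1.0);
}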

Related

How does the coordinate system work for 3D textures in OpenGL?

I am attempting to write to and read from a 3D texture, but it seems my mapping is wrong. I have used RenderDoc to check the textures and they look OK.
A random layer of this volumetric texture looks like:
So just some blue to denote absence and some green values to denote presence.
The coordinates I calculate when I write to each layer are calculated in the vertex shader as:
pos.x = (2.f*pos.x-width+2)/(width-2);
pos.y = (2.f*pos.y-depth+2)/(depth-2);
pos.z -= level;
pos.z *= 1.f/voxel_size;
gl_Position = pos;
Since the texture itself looks ok it seems these coordinates are good to achieve my goal.
It's important to note that right now voxel_size is 1 and the scale of the texture is supposed to be 1 to 1 with the scene dimensions. In essence, each pixel in the texture represents a 1x1x1 voxel in the scene.
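(As a point of reference, a 1:1 world-to-texture mapping is usually just a division by the volume's size; worldToVoxelUVW and sceneSize below are hypothetical names, not part of the question's code.)
// Hypothetical helper: with one texel per 1x1x1 voxel, a world position in
// [0, sceneSize) maps to normalized 3D texture coordinates by dividing by that
// size; the +0.5 assumes samples should land on texel centers.
vec3 worldToVoxelUVW(vec3 worldPos, vec3 sceneSize) {
    return (worldPos + 0.5) / sceneSize;
}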
Next I attempt to fetch the texture values as follows:
vec3 pos = vertexPos;
pos.x = (2.f*pos.x-width+2)/(width-2);
pos.y = (2.f*pos.y-depth+2)/(depth-2);
pos.z *= 1.f/(4*16);
outColor = texture(voxel_map, pos);
Where vertexPos is the global vertex position in the scene. The z coordinate may be completely wrong however (I am not sure whether I am supposed to normalize the depth component or not), but that is not the only issue. If you look at the final result:
There is a horizontal scale problem. Since each texel represents a voxel, the color of a cube should always be a fixed color. But as you can see, I am getting multiple colors for a single cube on the top faces. So my horizontal scale is wrong.
What am I doing wrong when fetching the texels from the texture?

OpenGL GLSL bloom effect bleeds on edges

I have a framebuffer called "FBScene" that renders to a texture TexScene.
I have a framebuffer called "FBBloom" that renders to a texture TexBloom.
I have a framebuffer called "FBBloomTemp" that renders to a texture TexBloomTemp.
First I render all my blooming / glowing objects to FBBloom and thus into TexBloom. Then I play ping-pong with FBBloom and FBBloomTemp, alternately blurring horizontally and vertically to get a nice bloom texture.
Then I pass the final "TexBloom" texture and the TexScene to a screen shader that draws a screen filling quad with both textures:
gl_FragColor = texture(TexBloom, uv) + texture(TexScene, uv);
The problem is:
While blurring the images, the bloom effect bleeds into the opposite edges of the screen if the glowing object is too close to the screen border.
This is my blur shader:
vec4 color = vec4(0.0);
vec2 off1 = vec2(1.3333333333333333) * direction;
vec2 off1DivideByResolution = off1 / resolution;
vec2 uvPlusOff1 = uv + off1DivideByResolution;
vec2 uvMinusOff1 = uv - off1DivideByResolution;
color += texture(image, uv) * 0.29411764705882354;
color += texture(image, uvPlusOff1) * 0.35294117647058826;
color += texture(image, uvMinusOff1) * 0.35294117647058826;
gl_FragColor = color;
I think I need to prevent uvPlusOff1 and uvMinusOff1 from being outside of the -1 to +1 uv range. But I don't know how to do that.
I tried to clamp the uv values at the gap in the code above with:
float px = clamp(uvPlusOff1.x, -1, 1);
float py = clamp(uvPlusOff1.y, -1, 1);
float mx = clamp(uvMinusOff1.x, -1, 1);
float my = clamp(uvMinusOff1.y, -1, 1);
uvPlusOff1 = vec2(px, py);
uvMinusOff1 = vec2(mx, my);
But it did not work as expected. Any help is highly appreciated.
Bleeding to the other side of the screen usually happens when the wrap-mode is set to GL_REPEAT. Set it to GL_CLAMP_TO_EDGE and it shouldn't happen anymore.
Edit - To explain a little bit more why this happens in your case: a texture coordinate of [1,1] is the outer corner of the corner texel. When linear filtering is enabled, this location reads the four texels around that corner. With a repeating texture, three of them come from the opposite edges of the texture, which is why the bloom shows up on the other side of the screen. If you want to prevent the problem manually, you have to clamp to the range [0 + 1/texture_size, 1 - 1/texture_size].
I'm also not sure why you even clamp to [-1, 1], because texture coordinates usually range from [0, 1]. Negative values will be outside of the texture and are handled by the wrap mode.
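If you do want the manual route, a sketch of that clamp applied to the blur shader from the question (uniform and varying names are taken from the snippet above; textureSize needs GLSL 1.30+):
#version 130
uniform sampler2D image;
uniform vec2 resolution;
uniform vec2 direction;
in vec2 uv;

void main() {
    // Stay one texel away from the borders so linear filtering never
    // reads texels from the opposite edge of the texture.
    vec2 margin = 1.0 / vec2(textureSize(image, 0));
    vec2 off1 = vec2(1.3333333333333333) * direction / resolution;

    vec4 color = vec4(0.0);
    color += texture(image, uv) * 0.29411764705882354;
    color += texture(image, clamp(uv + off1, margin, 1.0 - margin)) * 0.35294117647058826;
    color += texture(image, clamp(uv - off1, margin, 1.0 - margin)) * 0.35294117647058826;
    gl_FragColor = color;
}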

OpenGL point sprites not always rendered front to back

I'm working on a game engine with LWJGL3 in which all objects are point sprites. It uses an orthographic camera and I wrote a vertex shader that calculates the 3D position of each sprite (which also causes the fish-eye lens effect). I calculate the distance to the camera and use that value as the depth value for each point sprite. The data for these sprites is stored in chunks of 16x16 in VBOs.
The issue that I'm having is that the sprites are not always rendered front to back. When looking away from the origin, the depth testing works as intended, but when looking in the direction of the origin, sprites are rendered from back to front which causes a big performance drop.
This might seem like depth testing is not enabled, but when I disable depth testing, sprites in the back are drawn on top of the ones in front, so that is not the case.
Here's the full vertex shader:
#version 330 core
#define M_PI 3.1415926535897932384626433832795
uniform mat4 camRotMat; // Virtual camera rotation
uniform vec3 camPos; // Virtual camera position
uniform vec2 fov; // Virtual camera field of view
uniform vec2 screen; // Screen size (pixels)
in vec3 pos;
out vec4 vColor;
void main() {
// Compute distance and rotated delta position to camera
float dist = distance(pos, camPos);
vec3 dXYZ = (camRotMat * vec4(camPos - pos, 0)).xyz;
// Compute angles of this 3D position relative center of camera FOV
// Distance is never negative, so negate it manually when behind camera
vec2 rla = vec2(atan(dXYZ.x, length(dXYZ.yz)),
atan(dXYZ.z, length(dXYZ.xy) * sign(-dXYZ.y)));
// Find sprite size and coordinates of the center on the screen
float size = screen.y / dist * 2; // Sprites become smaller based on their distance
vec2 px = -rla / fov * 2; // Find pixel position on screen of this object
// Output
vColor = vec4((1 - (dist * dist) / (64 * 64)) + 0.5); // Brightness
gl_Position = vec4(px, dist / 1000, 1.0); // Position on the screen
gl_PointSize = size; // Sprite size
}
In the first image, you can see how the game normally looks. In the second one, I've disabled alpha-testing, so you can see sprites are rendered front to back. But in the third image, when looking in the direction of the origin, sprites are being drawn back to front.
Edit:
I am almost 100% certain the depth value is set correctly. The size of the sprites is directly linked to the distance, and they resize correctly when moving around. I also set the color to be brighter when the distance is lower, which works as expected.
I also set the following flags (and of course clear the frame and depth buffer):
GL11.glEnable(GL11.GL_DEPTH_TEST);
GL11.glDepthFunc(GL11.GL_LEQUAL);
Edit2:
Here's a gif of what it looks like when you rotate around: https://i.imgur.com/v4iWe9p.gifv
Edit3:
I think I misunderstood how depth testing works. Here is a video of how the sprites are drawn over time: https://www.youtube.com/watch?v=KgORzkM9U2w
That explains the initial problem, so now I just need to find a way to render them in a different order depending on the camera rotation.

distortion correction with gpu shader bug

So I have a camera with a wide angle lens. I know the distortion coefficients, the focal length, the optical center. I want to undistort the image I get from this camera. I used OpenCV for the first try (cv::undistort), which worked well, but was way too slow.
Now I want to do this on the gpu. There is a shader doing exactly this documented in http://willsteptoe.com/post/67401705548/ar-rift-aligning-tracking-and-video-spaces-part-5
the formulas can be seen here:
http://en.wikipedia.org/wiki/Distortion_%28optics%29#Software_correction
So I went and implemented my own version as a glsl shader. I am sending a quad with texture coordinates on the corners between 0..1.
I assume the texture coordinates that arrive are the coordinates of the undistorted image. I calculate the coordinates for the distorted point corresponding to my texture coordinates. Then I sample the distorted image texture.
With this shader nothing in the final image changes. The problem I identified through a CPU implementation is that the coefficient term is very close to zero. The numbers get smaller and smaller through radius squaring etc., so I have a scaling problem - I can't figure out what to do differently! I tried everything... I guess it is something quite obvious, since this kind of process seems to work for a lot of people.
I left out the tangential distortion correction for simplicity.
#version 330 core
in vec2 UV;
out vec4 color;
uniform sampler2D textureSampler;
void main()
{
vec2 focalLength = vec2(438.568f, 437.699f);
vec2 opticalCenter = vec2(667.724f, 500.059f);
vec4 distortionCoefficients = vec4(-0.035109f, -0.002393f, 0.000335f, -0.000449f);
const vec2 imageSize = vec2(1280.f, 960.f);
vec2 opticalCenterUV = opticalCenter / imageSize;
vec2 shiftedUVCoordinates = (UV - opticalCenterUV);
vec2 lensCoordinates = shiftedUVCoordinates / focalLength;
float radiusSquared = sqrt(dot(lensCoordinates, lensCoordinates));
float radiusQuadrupled = radiusSquared * radiusSquared;
float coefficientTerm = distortionCoefficients.x * radiusSquared + distortionCoefficients.y * radiusQuadrupled;
vec2 distortedUV = ((lensCoordinates + lensCoordinates * (coefficientTerm))) * focalLength;
vec2 resultUV = (distortedUV + opticalCenterUV);
color = texture2D(textureSampler, resultUV);
}
I see two issues with your solution. The main issue is that you mix two different spaces. You seem to work in [0,1] texture space by converting the optical center to that space, but you did not adjust focalLength. The key point is that for such a distortion model, the focal length is given in pixels. However, now a pixel is not 1 base unit wide anymore, but 1/width and 1/height units, respectively.
You could add vec2 focalLengthUV = focalLength / imageSize, but you will see that the two divisions cancel each other out when you calculate lensCoordinates. It is much more convenient to convert the texture-space UV coordinates to pixel coordinates and use that space directly:
vec2 lensCoordinates = (UV * imageSize - opticalCenter) / focalLength;
(and also respectively changing the calculation for distortedUV and resultUV).
There is still one issue with the approach I have sketched so far: the conventions of that pixel space I mentioned earlier. In GL, the origin will be the lower left corner, while in most pixel spaces, the origin is at the top left. You might have to flip the y coordinate when doing the conversion. Another thing is where exactly pixel centers are located. So far, the code assumes that pixel centers are at integer + 0.5. The texture coordinate (0,0) is not the center of the lower left pixel, but the corner point. The parameters you use for the distortion might (I don't know OpenCV's conventions) assume the pixel centers at integers, so that instead of the conversion pixelSpace = uv * imageSize, you might need to offset this by half a pixel like pixelSpace = uv * imageSize - vec2(0.5).
The second issue I see is
float radiusSquared = sqrt(dot(lensCoordinates, lensCoordinates));
That sqrt is not correct here, as dot(a,a) already gives the squared length of vector a.
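Putting both fixes together (and leaving aside the possible y-flip and half-pixel offset, which depend on the calibration conventions), a sketch of the corrected shader might look like this:
#version 330 core
in vec2 UV;
out vec4 color;
uniform sampler2D textureSampler;

void main()
{
    vec2 focalLength = vec2(438.568f, 437.699f);
    vec2 opticalCenter = vec2(667.724f, 500.059f);
    vec4 distortionCoefficients = vec4(-0.035109f, -0.002393f, 0.000335f, -0.000449f);
    const vec2 imageSize = vec2(1280.f, 960.f);

    // Work entirely in pixel space; focal length and optical center are in pixels.
    vec2 lensCoordinates = (UV * imageSize - opticalCenter) / focalLength;

    // r^2 is just the dot product - no sqrt here.
    float radiusSquared = dot(lensCoordinates, lensCoordinates);
    float radiusQuadrupled = radiusSquared * radiusSquared;

    float coefficientTerm = distortionCoefficients.x * radiusSquared
                          + distortionCoefficients.y * radiusQuadrupled;

    // Back to pixel coordinates, then to [0,1] texture space for the lookup.
    vec2 distortedPixel = lensCoordinates * (1.0 + coefficientTerm) * focalLength + opticalCenter;
    vec2 resultUV = distortedPixel / imageSize;

    color = texture(textureSampler, resultUV); // texture() instead of the deprecated texture2D()
}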

Knowing which pixel or UV you are on with GLSL?

Right now I can obtain the color of the neighbouring pixel by doing
color = texture2D(backBuffer, vec2(gl_TexCoord[0].x + i, gl_TexCoord[0].y + j));
But how can I know what pixel that is or at least the current uv of that pixel on the texture?
The pixel of the fragment. The UV / ST is a number from 0 to 1 representing the whole texture.
I want to calculate a pixel's brightness based on its distance from a point.
gl_TexCoord[0].x gives you the s texture coordinate, while gl_TexCoord[0].y gives you the t texture coordinate.
If you are writing a fragment shader, the pixel position shouldn't matter. I haven't tried, but maybe you can get it using gl_in, which is defined as:
in gl_PerVertex {
vec4 gl_Position;
float gl_PointSize;
float gl_ClipDistance[];
} gl_in[];
but I am not sure if it is available in a pixel shader.
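For what it's worth, here is a sketch of the distance-based brightness in old-style GLSL matching the snippet above; texSize and point are illustrative uniform names, not from the question:
#version 120
uniform sampler2D backBuffer;
uniform vec2 texSize; // texture resolution in pixels (illustrative name)
uniform vec2 point;   // reference point in pixels (illustrative name)

void main() {
    // UV of the current fragment on the texture (0..1).
    vec2 uv = gl_TexCoord[0].st;
    // The corresponding pixel on the texture...
    vec2 texPixel = uv * texSize;
    // ...or the pixel this fragment covers on the screen.
    vec2 screenPixel = gl_FragCoord.xy;

    // Example: brightness falls off with distance from 'point'.
    float brightness = 1.0 / (1.0 + distance(texPixel, point));
    gl_FragColor = texture2D(backBuffer, uv) * brightness;
}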