Right now I can obtain the color of the neighbouring pixel by doing
color = texture2D(backBuffer, vec2(gl_TexCoord[0].x + i, gl_TexCoord[0].y + j));
But how can I know what pixel that is or at least the current uv of that pixel on the texture?
Which pixel do you mean? The UV / ST coordinates are numbers from 0 to 1 spanning the whole texture.
I want to calculate a pixel's brightness based on its distance from a point.
gl_TexCoord[0].x gives you the s texture coordinate, while gl_TexCoord[0].y gives you the t texture coordinate.
If you are writing a fragment shader, the pixel position shouldn't matter. I haven't tried it, but maybe you can get it using gl_in, which is defined as:
in gl_PerVertex {
    vec4 gl_Position;
    float gl_PointSize;
    float gl_ClipDistance[];
} gl_in[];
but I am not sure if it is available in a pixel shader.
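For the brightness-from-distance part, a fragment shader can read its own window-space position from gl_FragCoord. A minimal sketch, assuming a hypothetical point uniform given in pixels and an arbitrary 100-pixel falloff radius:
#version 120

uniform sampler2D backBuffer;
uniform vec2 point; // assumed: reference point in window-space pixels

void main()
{
    // gl_FragCoord.xy is this fragment's window-space position in pixels.
    float dist = distance(gl_FragCoord.xy, point);

    // Fade brightness from 1 at the point to 0 at 100 pixels away.
    float brightness = clamp(1.0 - dist / 100.0, 0.0, 1.0);

    vec4 color = texture2D(backBuffer, gl_TexCoord[0].st);
    gl_FragColor = vec4(color.rgb * brightness, color.a);
}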
In the custom effect docs it says to calculate relative offsets for pixels using this formula:
float2 sampleLocation =
    texelSpaceInput0.xy     // Sample position for the current output pixel.
    + float2(0, -10)        // An offset from which to sample the input, specified in pixels.
    * texelSpaceInput0.zw;  // Multiplier that converts the pixel offset to the input's texel space.
For my video sequencer I have included custom effects and transitions, many of them converted to HLSL from GLSL code on ShaderToy. The GLSL code uses normalized coordinates [0...1], and many calculations rely on the absolute xy position rather than relative offsets, so I have to find a way to use absolute texture coordinates in my HLSL code.
So I use that zw multiplier to find the UV of the bottom-right sample:
float2 FindLast(float2 MV)
{
    float2 LastSample = float2(0, 0) + float2(WI, he) * MV;
    return LastSample;
}
The width and height are passed to the effect as a constant buffer.
After that, the normalized coordinates are:
float2 GetNormalized(float2 UV, float2 MV)
{
    float2 s = FindLast(MV);
    return UV / s;
}
This works: code from ShaderToy that works in normalized coordinates runs fine in my effects, and D2DSampleInput with that UV input returns the correct color.
The question is whether my solution is viable. For example, I have assumed that the first, top-left pixel is at UV (0, 0); is that correct and viable?
I'm new to HLSL and shaders, so I would appreciate your help.
I am attempting to write to and read from a 3D texture, but it seems my mapping is wrong. I have used RenderDoc to check the textures and they look OK.
A random layer of this volumetric texture looks like:
So just some blue to denote absence and some green values to denote presence.
The coordinates I use when writing to each layer are calculated in the vertex shader as:
pos.x = (2.f*pos.x-width+2)/(width-2);
pos.y = (2.f*pos.y-depth+2)/(depth-2);
pos.z -= level;
pos.z *= 1.f/voxel_size;
gl_Position = pos;
Since the texture itself looks OK, it seems these coordinates are good for achieving my goal.
It's important to note that right now voxel_size is 1 and the scale of the texture is supposed to be 1 to 1 with the scene dimensions. In essence, each pixel in the texture represents a 1x1x1 voxel in the scene.
Next I attempt to fetch the texture values as follows:
vec3 pos = vertexPos;
pos.x = (2.f*pos.x-width+2)/(width-2);
pos.y = (2.f*pos.y-depth+2)/(depth-2);
pos.z *= 1.f/(4*16);
outColor = texture(voxel_map, pos);
Here vertexPos is the global vertex position in the scene. The z coordinate may be completely wrong (I am not sure whether I am supposed to normalize the depth component or not), but that is not the only issue. If you look at the final result:
There is a horizontal scale problem. Since each texel represents a voxel, the color of a cube should always be a single fixed color, but as you can see I am getting multiple colors for a single cube on the top faces. So my horizontal scale is wrong.
What am I doing wrong when fetching the texels from the texture?
So I have a camera with a wide-angle lens. I know the distortion coefficients, the focal length, and the optical center. I want to undistort the image I get from this camera. I used OpenCV for a first try (cv::undistort), which worked well but was way too slow.
Now I want to do this on the GPU. There is a shader doing exactly this documented at http://willsteptoe.com/post/67401705548/ar-rift-aligning-tracking-and-video-spaces-part-5
The formulas can be seen here:
http://en.wikipedia.org/wiki/Distortion_%28optics%29#Software_correction
So I went and implemented my own version as a glsl shader. I am sending a quad with texture coordinates on the corners between 0..1.
I assume the texture coordinates that arrive are the coordinates of the undistorted image. I calculate the coordinates for the distorted point corresponding to my texture coordinates. Then I sample the distorted image texture.
With this shader, nothing in the final image changes. The problem I identified through a CPU implementation is that the coefficient term is very close to zero; the numbers get smaller and smaller through the radius squaring and so on. So I have a scaling problem and I can't figure out what to do differently. I tried everything... I guess it is something quite obvious, since this kind of process seems to work for a lot of people.
I left out the tangential distortion correction for simplicity.
#version 330 core

in vec2 UV;
out vec4 color;
uniform sampler2D textureSampler;

void main()
{
    vec2 focalLength = vec2(438.568f, 437.699f);
    vec2 opticalCenter = vec2(667.724f, 500.059f);
    vec4 distortionCoefficients = vec4(-0.035109f, -0.002393f, 0.000335f, -0.000449f);
    const vec2 imageSize = vec2(1280.f, 960.f);

    vec2 opticalCenterUV = opticalCenter / imageSize;
    vec2 shiftedUVCoordinates = (UV - opticalCenterUV);
    vec2 lensCoordinates = shiftedUVCoordinates / focalLength;

    float radiusSquared = sqrt(dot(lensCoordinates, lensCoordinates));
    float radiusQuadrupled = radiusSquared * radiusSquared;

    float coefficientTerm = distortionCoefficients.x * radiusSquared + distortionCoefficients.y * radiusQuadrupled;

    vec2 distortedUV = ((lensCoordinates + lensCoordinates * (coefficientTerm))) * focalLength;
    vec2 resultUV = (distortedUV + opticalCenterUV);

    color = texture2D(textureSampler, resultUV);
}
I see two issues with your solution. The main issue is that you mix two different spaces. You seem to work in [0,1] texture space by converting the optical center to that space, but you did not adjust focalLength accordingly. The key point is that for such a distortion model, the focal length is given in pixels. In texture space, however, a pixel is no longer 1 base unit wide, but 1/width and 1/height units, respectively.
You could add vec2 focalLengthUV = focalLength / imageSize, but you will see that the two divisions cancel each other out when you calculate lensCoordinates. It is much more convenient to convert the texture-space UV coordinates to pixel coordinates and work in that space directly:
vec2 lensCoordinates = (UV * imageSize - opticalCenter) / focalLength;
(and changing the calculations for distortedUV and resultUV accordingly).
There is still one issue with the approach I have sketched so far: the conventions of that pixel space I mentioned earlier. In GL, the origin is the lower-left corner, while in most pixel spaces the origin is at the top left, so you might have to flip the y coordinate when doing the conversion. Another thing is where exactly the pixel centers are located. So far, the code assumes that pixel centers are at integer + 0.5: the texture coordinate (0,0) is not the center of the lower-left pixel, but its corner point. The parameters you use for the distortion might (I don't know OpenCV's conventions) assume pixel centers at integer coordinates, so instead of the conversion pixelSpace = uv * imageSize, you might need to offset this by half a pixel, like pixelSpace = uv * imageSize - vec2(0.5).
The second issue I see is
float radiusSquared = sqrt(dot(lensCoordinates, lensCoordinates));
That sqrt is not correct here, as dot(a, a) already gives the squared length of vector a.
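Putting both fixes together, a minimal sketch of the corrected shader (same constants as in the question; the y-flip and half-pixel adjustments discussed above are left out, so treat it as a starting point rather than a drop-in solution):
#version 330 core

in vec2 UV;
out vec4 color;
uniform sampler2D textureSampler;

void main()
{
    vec2 focalLength = vec2(438.568, 437.699);
    vec2 opticalCenter = vec2(667.724, 500.059);
    vec4 distortionCoefficients = vec4(-0.035109, -0.002393, 0.000335, -0.000449);
    const vec2 imageSize = vec2(1280.0, 960.0);

    // Work in pixel space: the focal length and optical center are given in pixels.
    vec2 lensCoordinates = (UV * imageSize - opticalCenter) / focalLength;

    // dot(a, a) is already the squared length; no sqrt.
    float radiusSquared = dot(lensCoordinates, lensCoordinates);
    float radiusQuadrupled = radiusSquared * radiusSquared;

    float coefficientTerm = distortionCoefficients.x * radiusSquared
                          + distortionCoefficients.y * radiusQuadrupled;

    // Apply the radial distortion, go back to pixel space, then to texture space.
    vec2 distortedPixel = lensCoordinates * (1.0 + coefficientTerm) * focalLength + opticalCenter;
    vec2 resultUV = distortedPixel / imageSize;

    color = texture(textureSampler, resultUV);
}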
How would I render a 2D radial field in OpenGL? I know I can render it pixel by pixel, but I'm wondering if there are more efficient solutions. I don't mind if it requires OpenGL 3+ functionality.
How familiar are you with shaders? I'm thinking an easy-ish answer would be to render a quad and then write a fragment shader that colors the quad based on how far each pixel is from the center.
Pseudocode:
vertex shader:
vec2 center = vec2((x1 + x2) / 2, (y1 + y2) / 2); // pass this to the fragment shader
fragment shader:
float dist = distance(pos, center); // "pos" is the interpolated position of the fragment, passed in from the vertex shader
// Now that we have the distance between each fragment and the center, we can do all kinds of stuff:
gl_FragColor = vec4(1, 1, 1, dist); // Assuming you're drawing a unit square, this makes each pixel's transparency smoothly vary from 1 (right next to the center) to 0 (on the edge of the square)
gl_FragColor = vec4(dist, dist, dist, 1.0); // Vary each pixel's color from black at the center to white at the edges
// etc, etc
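If it helps, here is a fuller GLSL sketch of the same idea (names like u_center and in_position are just placeholders I made up):
// vertex shader
#version 330 core
layout(location = 0) in vec2 in_position; // quad corners, e.g. a unit square
out vec2 v_pos;
void main()
{
    v_pos = in_position; // pass the position through to the fragment shader
    gl_Position = vec4(in_position, 0.0, 1.0);
}

// fragment shader
#version 330 core
in vec2 v_pos;
uniform vec2 u_center; // center of the radial field
out vec4 fragColor;
void main()
{
    float dist = distance(v_pos, u_center);
    // Alpha equals the distance, so the quad is fully transparent at the center
    // and becomes opaque toward the edges, as in the pseudocode above.
    fragColor = vec4(1.0, 1.0, 1.0, clamp(dist, 0.0, 1.0));
}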
Let me know if you need more detail
In my fragment shader I can load a texture, then do this:
uniform sampler2D tex;

void main(void) {
    vec4 color = texture2D(tex, gl_TexCoord[0].st);
    gl_FragColor = color;
}
That sets the current pixel to the color value of the texture. I can modify these, etc., and it works well.
But a few questions. How do I tell "which" pixel I am? For example, say I want to set pixel 100,100 (x,y) to red and everything else to black. How do I do an "if currentSelf.Position() == (100,100) then color = red; else color = black"?
I know how to set colors, but how do I get "my" location?
Secondly, how do I get values from a neighbor pixel?
I tried this:
vec4 nextColor = texture2D(tex, gl_TexCoord[1].st);
But it's not clear what that is returning. If I'm pixel 100,100, how do I get the values from 101,100 or 100,101?
How do I tell "which" pixel I am?
You're not a pixel. You're a fragment. There's a reason OpenGL calls them "fragment shaders": they aren't pixels yet. Indeed, not only may they never become pixels (via discard or depth tests or whatever), but thanks to multisampling, multiple fragments can combine to form a single pixel.
If you want to tell where your fragment shader is in window-space, use gl_FragCoord. Fragment positions are floating-point values, not integers, so you have to test with a range instead of a single "100, 100" value.
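For instance, a minimal sketch of the "pixel 100,100 turns red" check using gl_FragCoord (with the default conventions, the fragment covering that pixel has gl_FragCoord.xy around (100.5, 100.5)):
#version 120

void main()
{
    // gl_FragCoord.xy is the window-space position; pixel centers sit at half-integers.
    // Test whether this fragment lies inside the pixel whose lower-left corner is (100, 100).
    if (all(greaterThanEqual(gl_FragCoord.xy, vec2(100.0))) &&
        all(lessThan(gl_FragCoord.xy, vec2(101.0))))
    {
        gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); // red
    }
    else
    {
        gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0); // black
    }
}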
Secondly, how do I get values from a neighbor pixel?
If you're talking about the neighboring framebuffer pixel, you don't. Fragment shaders cannot arbitrarily read from the framebuffer, either in their own position or in a neighboring one.
If you're talking about accessing a neighboring texel from the one you accessed, then that's just a matter of biasing the texture coordinate you pass to texture2D. You have to get the size of the texture (since you're not using GLSL 1.30 or above, you have to manually pass this in), invert the size and either add or subtract these sizes from the S and T component of the texture coordinate.
Easy peasy.
Just compute the size of a pixel based on resolution. Then look up +1 and -1.
vec2 onePixel = vec2(1.0, 1.0) / u_textureSize;
gl_FragColor = (
    texture2D(u_image, v_texCoord) +
    texture2D(u_image, v_texCoord + vec2(onePixel.x, 0.0)) +
    texture2D(u_image, v_texCoord + vec2(-onePixel.x, 0.0))) / 3.0;
There's a good example here