In the custom effect docs it says to calculate relative offsets for pixels using this formula:
float2 sampleLocation =
texelSpaceInput0.xy // Sample position for the current output pixel.
+ float2(0,-10) // An offset from which to sample the input, specified in pixels.
* texelSpaceInput0.zw; // Multiplier that converts the offset from pixels to the input's texel space.
For my video sequencer I have included custom effects and transitions, many of them converted to HLSL from GLSL shaders found on ShaderToy. The GLSL code uses normalized coordinates [0...1], and many calculations rely on the absolute xy position rather than a relative one, so I have to find a way to use absolute texture coordinates in my HLSL code.
So I use that zw multiplier to find the UV of the bottom-right sample:
float2 FindLast(float2 MV)
{
    float2 LastSample = float2(0, 0) + float2(WI, he) * MV;
    return LastSample;
}
The width and height are passed to the effect as a constant buffer.
After that, the normalized coordinates are:
float2 GetNormalized(float2 UV, float2 MV)
{
    float2 s = FindLast(MV);
    return UV / s;
}
This works. Code from ShaderToy that operates in normalized coordinates works fine in my effects, and D2DSampleInput with that UV input returns the correct color.
The question is whether my solution is viable. For example, I have assumed that the first (top-left) pixel is at UV (0,0); is that assumption correct, and is the approach viable?
I'm new to HLSL and shaders, so I would appreciate your help.
Related
I'm fairly new to Shadertoy and GLSL in general. I have successfully duplicated numerous Shadertoy shaders into Blender without actually knowing how it all works. I have looked for tutorials but I'm more of a visual learner.
If someone could explain, or even better provide some images, showing the difference between fragCoord, iResolution, and fragColor, that would be great!
I'm mainly interested in the numbers. Because I use Blender, I'm used to the canvas going from 0 to 1, or from -1 to 1.
This one in particular has me a bit confused.
vec2 u = (fragCoord - iResolution.xy * .5) / iResolution.y * 8.;
I can't reproduce the remaining code in Blender without knowing the coordinate system.
Any help would be greatly appreciated!
That's normal: you cannot reproduce this code in Blender without knowing the coordinate system.
The Shadertoy documentation states:
Image shaders implement the mainImage() function to generate
procedural images by calculating a color for each pixel in the image.
This function is invoked once in each pixel and the host application
must provide the appropriate input data and retrieve the output color
to assign it to the corresponding pixel on the screen. The signature
of this function is:
void mainImage( out vec4 fragColor, in vec2 fragCoord);
where fragCoord contains the coordinates of the pixel for which the
shader must calculate a color. These coordinates are counted in pixels
with values from 0.5 to resolution-0.5 over the entire rendering
surface and the resolution of this surface is transmitted to the
shader via the uniform iResolution variable.
Let me explain.
The iResolution variable is a uniform vec3 which contains the dimensions of the window; it is sent to the shader with some OpenGL code.
The fragCoord variable is a built-in variable that contains the coordinates of the pixel where the shader is being applied.
More concretely, for a 640×360 window:
fragCoord: a vec2 that goes from 0 to 640 on the X axis and from 0 to 360 on the Y axis
iResolution: a vec3 whose X value is 640 and whose Y value is 360
A quick note on how vectors work in OpenGL: if you also have a hard time understanding how vectors work in OpenGL, I highly recommend reading Homan's answer below, a very useful introduction to OpenGL swizzling.
This image was calculated with the following code:
// Normalized pixel coordinates (between 0 and 1)
vec2 uv = fragCoord/iResolution.xy;
// Set R and G values based on position
vec3 col = vec3(uv.x,uv.y,0);
// Output to screen
fragColor = vec4(col,1.0);
The output ranges from (0,0) in the lower-left to (1,1) in the upper-right. This is the default lower-left window space set by OpenGL.
This image was calculated with the following code:
// Normalized pixel coordinates (between -0.5 and 0.5)
vec2 uv = (fragCoord - iResolution.xy * 0.5)/iResolution.xy;
// Set R and G values based on position
vec3 col = vec3(uv.x,uv.y,0);
// Output to screen
fragColor = vec4(col,1.0);
The output ranges from (-0.5, -0.5) in the lower-left to (0.5, 0.5) in the upper-right, because in the first step we subtract half of the window size (iResolution.xy * 0.5) from each pixel coordinate (fragCoord) before dividing by the resolution. You can see the effect in the way the red and green values don't kick into visibility until much later.
You might also want to normalize only the y axis by changing the first step to
vec2 uv = (fragCoord - iResolution.xy * 0.5)/iResolution.y;
Depending on your purpose, the image can seem strange if you normalize both axes, so this is a possible strategy.
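As a side note on the specific line from your question: it does exactly this y-only normalization, recentres on the middle of the screen, and then scales by 8, so (if I read it right) the coordinates run from roughly -4 to 4 vertically and from -4*aspect to 4*aspect horizontally:
// y in [-4, 4], x in [-4*aspect, 4*aspect], where aspect = iResolution.x / iResolution.y
vec2 u = (fragCoord - iResolution.xy * .5) / iResolution.y * 8.;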
This image was calculated with the following code:
// Normalized pixel coordinates (between -0.5 to 0.5)
vec2 uv = (fragCoord - iResolution.xy * 0.5)/iResolution.xy;
// Set R and G values based on position using ceil() function
// The ceil() function returns the smallest integer that is greater than or equal to the uv value
vec3 col = vec3(ceil(uv.x),ceil(uv.y),0);
// Output to screen
fragColor = vec4(col,1.0);
The ceil() function allows us to see that the center of the image is at (0, 0).
As for the second part of the shadertoy documentation:
The output color is returned in fragColor as a four-component vector,
the last component being ignored by the client. The result is
retrieved in an "out" variable in anticipation of the future addition
of several rendering targets.
Really, all this means is that fragColor contains four values that are shipped to the next stage in the rendering pipeline. You can find more about in and out variables here.
The values in fragColor determine the color of the pixel where the shader is being applied.
If you want to learn more about shaders these are some good starting places:
The Book of Shaders - uniforms
Learn OpenGL - Shaders
Not to take away from the accepted answer, which is very thorough. But in case anyone else was confused about the types, iResolution is a 'uniform highp 3-component vector of float'... so actually a vec3? That's why we see in examples that fragCoord (actually a vec2) is divided by iResolution.xy (the .xy gives us a vec2). But what is the '.xy' thing? Is it a method? An attribute or property? With some random googling I found out that the '.xy' tacked on at the end is called 'swizzling'
https://www.khronos.org/opengl/wiki/Data_Type_(GLSL)#Vectors
(for convenience the gist of it is here below)
Swizzling
You can access the components of vectors using the following syntax:
vec4 someVec;
someVec.x + someVec.y;
This is called swizzling. You can use x, y, z, or w, referring to the
first, second, third, and fourth components, respectively.
The reason it has that name "swizzling" is because the following syntax is entirely valid:
vec2 someVec;
vec4 otherVec = someVec.xyxx;
vec3 thirdVec = otherVec.zyy;
You can use any combination of up to 4 of the letters to create a vector (of the same basic type) of that length. So otherVec.zyy is a vec3, which is how we can initialize a vec3 value with it. Any combination of up to 4 letters is acceptable, so long as the source vector actually has those components. Attempting to access the 'w' component of a vec3 for example is a compile-time error.
Swizzling also works on l-values (left values?):
vec4 someVec;
someVec.wzyx = vec4(1.0, 2.0, 3.0, 4.0); // Reverses the order.
someVec.zx = vec2(3.0, 5.0); // Sets the 3rd component of someVec to 3.0 and the 1st component to 5.0
However, when you use a swizzle as a way of setting component values, you cannot use the same swizzle component twice. So someVec.xx = vec2(4.0, 4.0); is not allowed.
Additionally, there are 3 sets of swizzle masks. You can use xyzw, rgba (for colors), or stpq (for texture coordinates). These three sets have no actual difference; they're just syntactic sugar. You cannot combine names from different sets in a single swizzle operation. So ".xrs" is not a valid swizzle mask.
In OpenGL 4.2 or ARB_shading_language_420pack, scalars can be swizzled as well. They obviously only have one source component, but it is legal to do this:
float aFloat;
vec4 someVec = aFloat.xxxx;
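As a quick illustration of the three equivalent mask sets (my own sketch, not from the linked page):
vec4 c = vec4(1.0, 0.5, 0.25, 1.0);
vec2 a = c.xy;   // position-style names
vec2 b = c.rg;   // color-style names, reads the same two components
vec2 d = c.st;   // texture-coordinate-style names, again the same components
// a, b and d all hold (1.0, 0.5); mixing sets in one mask, e.g. c.xg, will not compile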
// -1 to 1
vec2 uv = (2.0 * fragCoord - iResolution.xy) / iResolution.xy;
vec3 col = vec3(uv.x, uv.y, 0.0);
fragColor = vec4(col, 1.0);
I'm trying to implement a GPU-based height map in the simplest (and fastest) way I know how. I passed a texture (a .png loaded with D3DX11CreateShaderResourceViewFromFile()) into the shader, and I'm attempting to sample it for the current pixel value. Since the sampled value is a float4, I'm currently assigning one channel's color value as an offset to the y value.
Texture2D colorMap : register(t0);
SamplerState colorSampler : register(s0);
...
VOut VShader(float4 position : POSITION, float2 Texture : TEXCOORD0, float3 Normal : NORMAL)
{
    VOut output;
    float4 colors = colorMap.SampleLevel(colorSampler, float4(position.x*0.001, position.z*0.001, 0, 0), 0);
    position.y = colors.x*128;
    output.position = mul(position, WVP);
    output.texture0 = Texture;
    output.normal = Normal;
    return output;
}
The texture is imported correctly, and I've inserted another texture and was able to successfully blend the texture with another texture (through multiplication of values), so I know that the float4 struct contains values of a sort capable of having arithmetic performed on it.
In the vertex function, attempting to extract the values yields nothing on the grid.
The concept seemed simple enough on paper...
Since you're using a Texture2D, the Location parameter needs to be a float2.
Also, make sure that location goes from (0,0) to (1,1). For your mapping to be correct, the grid would need to be placed from (0,0,0) to (1000,0,1000).
If this is the case then this should work:
SampleLevel(colorSampler, position.xz*0.001, 0);
Edit
I'm curious as to how you're testing this. I tried compiling your code, with added definitions for VOut and WVP, and it fails. One of the errors is the location parameter, which is a float4 but should be a float2. The other error I get is the name of the function; it should be main.
If you happen to be using Visual Studio, I strongly recommend using the Graphics Debugging tools and check all the variables. I suspect the colorMap texture might be bound to the pixel shader but not the vertex shader.
So I have a camera with a wide angle lens. I know the distortion coefficients, the focal length, the optical center. I want to undistort the image I get from this camera. I used OpenCV for the first try (cv::undistort), which worked well, but was way too slow.
Now I want to do this on the gpu. There is a shader doing exactly this documented in http://willsteptoe.com/post/67401705548/ar-rift-aligning-tracking-and-video-spaces-part-5
the formulas can be seen here:
http://en.wikipedia.org/wiki/Distortion_%28optics%29#Software_correction
So I went and implemented my own version as a GLSL shader. I am drawing a quad with texture coordinates at the corners ranging from 0 to 1.
I assume the texture coordinates that arrive are the coordinates of the undistorted image. I calculate the coordinates for the distorted point corresponding to my texture coordinates. Then I sample the distorted image texture.
With this shader, nothing in the final image changes. The problem I identified through a CPU implementation is that the coefficient term is very close to zero. The numbers get smaller and smaller through the radius squaring etc. So I have a scaling problem and I can't figure out what to do differently! I tried everything... I guess it is something quite obvious, since this kind of process seems to work for a lot of people.
I left out the tangential distortion correction for simplicity.
#version 330 core
in vec2 UV;
out vec4 color;
uniform sampler2D textureSampler;
void main()
{
    vec2 focalLength = vec2(438.568f, 437.699f);
    vec2 opticalCenter = vec2(667.724f, 500.059f);
    vec4 distortionCoefficients = vec4(-0.035109f, -0.002393f, 0.000335f, -0.000449f);
    const vec2 imageSize = vec2(1280.f, 960.f);
    vec2 opticalCenterUV = opticalCenter / imageSize;
    vec2 shiftedUVCoordinates = (UV - opticalCenterUV);
    vec2 lensCoordinates = shiftedUVCoordinates / focalLength;
    float radiusSquared = sqrt(dot(lensCoordinates, lensCoordinates));
    float radiusQuadrupled = radiusSquared * radiusSquared;
    float coefficientTerm = distortionCoefficients.x * radiusSquared + distortionCoefficients.y * radiusQuadrupled;
    vec2 distortedUV = ((lensCoordinates + lensCoordinates * (coefficientTerm))) * focalLength;
    vec2 resultUV = (distortedUV + opticalCenterUV);
    color = texture2D(textureSampler, resultUV);
}
I see two issues with your solution. The main issue is that you mix two different spaces. You seem to work in [0,1] texture space by converting the optical center to that space, but you did not adjust focalLength. The key point is that for such a distortion model, the focal length is specified in pixels. However, in texture space a pixel is no longer 1 base unit wide, but 1/width and 1/height units, respectively.
You could add vec2 focalLengthUV = focalLength / imageSize, but you will see that both divisions cancel each other out when you calculate lensCoordinates. It is much more convenient to convert the texture space UV coordinates to pixel coordinates and use that space directly:
vec2 lensCoordinates = (UV * imageSize - opticalCenter) / focalLength;
(and also respectively changing the calculation for distortedUV and resultUV).
There is still one issue with the approach I have sketched so far: the conventions of that pixel space I mentioned earlier. In GL, the origin will be the lower left corner, while in most pixel spaces, the origin is at the top left. You might have to flip the y coordinate when doing the conversion. Another thing is where exactly pixel centers are located. So far, the code assumes that pixel centers are at integer + 0.5. The texture coordinate (0,0) is not the center of the lower left pixel, but the corner point. The parameters you use for the distortion might (I don't know OpenCV's conventions) assume the pixel centers at integers, so that instead of the conversion pixelSpace = uv * imageSize, you might need to offset this by half a pixel like pixelSpace = uv * imageSize - vec2(0.5).
The second issue I see is
float radiusSquared = sqrt(dot(lensCoordinates, lensCoordinates));
That sqrt is not correct here, as dot(a,a) already gives the squared length of vector a.
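Putting the two fixes together, the corrected shader could look roughly like this (an untested sketch that keeps your hard-coded parameters and ignores the y-flip and pixel-center questions discussed above):
#version 330 core
in vec2 UV;
out vec4 color;
uniform sampler2D textureSampler;
void main()
{
    vec2 focalLength = vec2(438.568, 437.699);
    vec2 opticalCenter = vec2(667.724, 500.059);
    vec4 distortionCoefficients = vec4(-0.035109, -0.002393, 0.000335, -0.000449);
    const vec2 imageSize = vec2(1280.0, 960.0);

    // Work in pixel space: UV * imageSize is the position in pixels.
    vec2 lensCoordinates = (UV * imageSize - opticalCenter) / focalLength;

    // dot(a, a) is already the squared length; no sqrt here.
    float radiusSquared = dot(lensCoordinates, lensCoordinates);
    float radiusQuadrupled = radiusSquared * radiusSquared;
    float coefficientTerm = distortionCoefficients.x * radiusSquared
                          + distortionCoefficients.y * radiusQuadrupled;

    // Back to pixel space, then to [0,1] texture space for the lookup.
    vec2 distortedPixel = lensCoordinates * (1.0 + coefficientTerm) * focalLength + opticalCenter;
    vec2 resultUV = distortedPixel / imageSize;
    color = texture(textureSampler, resultUV);
}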
Right now I can obtain the color of the neighbouring pixel by doing
color = texture2D(backBuffer, vec2(gl_TexCoord[0].x + i, gl_TexCoord[0].y + j));
But how can I know what pixel that is or at least the current uv of that pixel on the texture?
Which pixel, the fragment's? The UV / ST coordinates are numbers from 0 to 1 spanning the whole texture.
I want to calculate a pixel's brightness based on its distance from a point.
gl_TexCoord[0].x gives you the s texture coordinate, while gl_TexCoord[0].y gives you the t texture coordinate.
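Since you said you want brightness based on distance from a point, here is a minimal untested sketch along those lines; "center" is a hypothetical uniform supplied by your application, expressed in the same 0-to-1 UV space:
uniform sampler2D backBuffer;
uniform vec2 center;   // hypothetical uniform: the point of interest, in [0,1] UV space

void main()
{
    vec2 uv = gl_TexCoord[0].st;           // this fragment's UV, from 0 to 1
    float dist = distance(uv, center);     // distance in UV units
    float brightness = clamp(1.0 - dist, 0.0, 1.0);
    gl_FragColor = texture2D(backBuffer, uv) * brightness;
}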
If you are writing a fragment shader, the pixel position shouldn't matter. I haven't tried it, but maybe you can get it using gl_in, which is defined as:
in gl_PerVertex {
    vec4 gl_Position;
    float gl_PointSize;
    float gl_ClipDistance[];
} gl_in[];
but I am not sure if it is available in a fragment shader.
I can render triangular gradient with simply just one triangle and using glColor for each corner.
But how do I render a perfect rectangular gradient? I tried with one quad, but the middle gets an ugly seam. I also tried with a 2x2 texture, and it behaved like it should: proper blending from each corner, but the texture sampling becomes imprecise when stretched too much (I started to see pixels bigger than 1x1).
Is there some way of calculating this in a shader perhaps?
--
Edit: Links to the images were broken (removed).
Indeed, the kind of gradient you want relies on 4 colors at each pixel, where OpenGL typically only interpolates input over triangles (so 3 inputs). Getting the perfect gradient is not possible just with the standard interpolants.
Now, as you mentioned, a 2x2 texture can do it. If you did see precision issues, I suggest switching the format of the texture to something that typically requires more precision (like a float texture).
Last, and as you also mentioned in your question, you can solve this with a shader. Say you pass an extra per-vertex attribute that corresponds to (u,v) = (0,0) (0,1) (1,0) (1,1) all the way to the pixel shader (with the vertex shader just doing a pass-through).
You can do the following in the pixel shader (note, the idea here is sound, but I did not test the code):
Vertex shader snippet:
varying vec2 uv;
attribute vec2 uvIn;
void main() {
    uv = uvIn;
    gl_Position = ftransform();
}
Fragment shader:
uniform vec3 color0;
uniform vec3 color1;
varying vec2 uv;
void main()
{
    // from wikipedia on bilinear interpolation on the unit square:
    // f(x,y) = f(0,0)(1-x)(1-y) + f(1,0)x(1-y) + f(0,1)(1-x)y + f(1,1)xy
    // applied here, with color0 at corners (0,0) and (1,1) and color1 at (1,0) and (0,1):
    // result = color0 * ((1-x)*(1-y) + x*y) + color1 * (x*(1-y) + (1-x)*y)
    //        = color0 * (1 - x - y + 2*x*y) + color1 * (x + y - 2*x*y)
    // after simplification:
    // float temp = x + y - 2.0 * x * y;
    // result = color0 * (1.0 - temp) + color1 * temp;
    gl_FragColor = vec4(mix(color0, color1, uv.x + uv.y - 2.0 * uv.x * uv.y), 1.0);
}
The problem is that you use a quad. The quad is drawn using two triangles, but the triangles are not in the orientation that you need.
If I define the quad vertices as:
A: bottom left vertex
B: bottom right vertex
C: top right vertex
D: top left vertex
I would say that the quad is composed by the following triangles:
A B D
D B C
The colors assigned to each vertex are:
A: yellow
B: red
C: yellow
D: red
Keeping in mind the geometry (the two triangles), the pixels between D and B are the result of interpolating between red and red: indeed, red!
The solution would be a geometry with two triangles, but oriented in a different way:
A B C
A C D
But you will probably not get the exact gradient, since in the middle of the quad you will get full yellow instead of yellow mixed with red. So, I suppose you can achieve the exact result using 4 triangles (or a triangle fan), in which the center vertex is the interpolation between the yellow and the red.
Wooop! Indeed, the result is not what I was expecting. I thought the gradient was produced by linear interpolation between colors, but evidently it is not (I really need to set up the LCD color space!). The most scalable solution is indeed rendering with fragment shaders.
Keep the solution proposed by Bahbar. I would advise starting with the implementation of a pass-through vertex/fragment shader (specifying only vertices and colors, you should get the previous result); then start playing with the mix function and the texture coordinates passed to the vertex shader.
You really need to understand the rendering pipeline with programmable shaders: the vertex shader is called once per vertex, the fragment shader is called once per fragment (without multisampling, a fragment is a pixel; with multisampling, a pixel is composed of many fragments which are combined to get the pixel color).
The vertex shader takes the input parameters (uniforms and inputs): uniforms are constant for all vertices issued between glBegin/glEnd; inputs are characteristic of each vertex shader instance (4 vertices, 4 vertex shader instances).
A fragment shader takes as input the vertex shader outputs that produced the fragment (due to the rasterization of triangles, lines and points). In Bahbar's answer, the only such output is the uv variable (common to both shader sources).
In your case, the vertex shader outputs the vertex texture coordinates UV (passed as-is). These UV coordinates are available to each fragment; they are computed by interpolating the values output by the vertex shader, depending on the fragment position.
Once you have those coordinates, you only need two colors: the red and the yellow in your case (corresponding to the color0 and color1 uniforms in Bahbar's answer). Then mix those colors depending on the UV coordinates of the specific fragment. (*)
(*) Here is the power of shaders: you can specify different interpolation methods by simply modifying the shader source. Linear, bilinear or spline interpolation can all be implemented, with any additional parameters passed to the fragment shader as uniforms.
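For example (a small untested variation on Bahbar's fragment shader), swapping the linear blend for a smoothed one only touches the last line:
float t = uv.x + uv.y - 2.0 * uv.x * uv.y;
gl_FragColor = vec4(mix(color0, color1, smoothstep(0.0, 1.0, t)), 1.0);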
Good practice!
Do all of your vertices have the same depth (Z) value, and are all of your triangles completely on-screen? If so, then you should have no problem getting a "perfect" color gradient over a quad made from two triangles with glColor. If not, then it's possible that your OpenGL implementation treats colors poorly.
This leads me to suspect that you may have a very old or strange OpenGL implementation. I recommend that you tell us what platform you're using, and what version of OpenGL you have...?
Without any more information, I recommend you attempt writing a shader, and avoid telling OpenGL that you want a "color." If possible, tell it that you want a "texcoord" but treat it like a color anyway. This trick has worked in some cases where color accuracy is too low.