I'm trying to display a mouse touch indicator as a circle.
So far I have been able to produce a vertical and a horizontal bar (as x and y touch indicators) which cross each other, forming a cross shape. I would like to form a circle where they intersect.
...
float touchX = u_TouchX;
float touchY = u_ResolutionX - u_TouchY;
float smoothTouchXS = smoothstep( u_ResolutionY - touchX - 300.0, u_ResolutionY- touchX, gl_FragCoord.y );
float smoothTouchXE = smoothstep( u_ResolutionY - touchX, u_ResolutionY - touchX + 300.0, gl_FragCoord.y );
float smoothTouchYS = smoothstep( u_ResolutionX - touchY - 300.0, u_ResolutionX- touchY, gl_FragCoord.x );
float smoothTouchYE = smoothstep( u_ResolutionX - touchY, u_ResolutionX - touchY + 300.0, gl_FragCoord.x );
float finalC = ( smoothTouchXS - smoothTouchXE ) + ( smoothTouchYS - smoothTouchYE );
photo.r += finalC;
gl_FragColor = photo;
This line is wrong:
float finalC = ( smoothTouchXS - smoothTouchXE ) + ( smoothTouchYS - smoothTouchYE );
But I can't seem to figure out how to combine the two bars so that an output color is produced only where both of them contain color, leaving just the region at the center where the bars overlap.
What is the proper way to combine the two bars so that a result is produced only where they overlap? Thank you!
I would like to form a circle from where they intersect.
Starting from your code, the easiest solution is to multiply the "smooth" touch values from the x and y axes instead of summing them:
float finalC = (smoothTouchXS - smoothTouchXE) * (smoothTouchYS - smoothTouchYE);
See the preview, where (smoothTouchXS - smoothTouchXE) * (smoothTouchYS - smoothTouchYE) is shown on top of (smoothTouchXS - smoothTouchXE) + (smoothTouchYS - smoothTouchYE).
A further possibility is to calculate the "smooth" offset from the fragment to the touch point on the x and y axes, then take the maximum of the two offsets and use the inverse result (1.0 - offset). This solution forms a rectangle:
const float max_dist = 300.0;
vec2 touch = vec2(u_TouchX, u_ResolutionY - u_TouchY);
vec2 touch_dist = abs(touch - gl_FragCoord.xy);
vec2 smoothTouch = smoothstep(0.0, max_dist, touch_dist);
float finalC = max(0.0, 1.0 - max(smoothTouch.x, smoothTouch.y));
Another possibility is to use the distance to the "touch" point, which would form a circle:
const float max_dist = 300.0;
vec2 touch = vec2(u_TouchX, u_ResolutionY - u_TouchY);
vec2 touch_dist = abs(touch - gl_FragCoord.xy);
float smoothTouch = smoothstep(0.0, max_dist, length(touch_dist));
float finalC = max(0.0, 1.0 - smoothTouch);
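For completeness, here is a minimal sketch of how the circular finalC could be fed back into the output code from your question; it assumes, as in your snippet, that photo holds the color sampled from your texture:
// minimal sketch, assuming "photo" is the color sampled from the texture
photo.rgb += vec3(finalC);   // brighten the area around the touch point
gl_FragColor = photo;        // gl_FragColor is the writable output (gl_FragCoord is read-only)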
I'm trying to replicate the automatic bilinear filtering algorithm of Unity3D with the following code:
fixed4 GetBilinearFilteredColor(float2 texcoord)
{
fixed4 s1 = SampleSpriteTexture(texcoord + float2(0.0, _MainTex_TexelSize.y));
fixed4 s2 = SampleSpriteTexture(texcoord + float2(_MainTex_TexelSize.x, 0.0));
fixed4 s3 = SampleSpriteTexture(texcoord + float2(_MainTex_TexelSize.x, _MainTex_TexelSize.y));
fixed4 s4 = SampleSpriteTexture(texcoord);
float2 TexturePosition = float2(texcoord)* _MainTex_TexelSize.z;
float fu = frac(TexturePosition.x);
float fv = frac(TexturePosition.y);
float4 tmp1 = lerp(s4, s2, fu);
float4 tmp2 = lerp(s1, s3, fu);
return lerp(tmp1, tmp2, fv);
}
fixed4 frag(v2f IN) : SV_Target
{
fixed4 c = GetBilinearFilteredColor(IN.texcoord) * IN.color;
c.rgb *= c.a;
return c;
}
I thought I was using the correct algorithm, because it is the only one I have seen out there for bilinear filtering. But I tested it in Unity with the same texture duplicated:
1st texture: Point filtering, using the custom bilinear shader (made from the default sprite shader).
2nd texture: Bilinear filtering, with the default sprite shader.
And this is the result:
You can see that they are different, and there is also some displacement in my custom shader that makes the sprite appear off-center when rotating around the Z axis.
Any idea what I'm doing wrong?
Any idea what Unity3D is doing differently?
Is there another algorithm that matches Unity3D's default filtering?
Solution
Updated with the complete code solution, based on Nico's answer, for other people who search for it here:
fixed4 GetBilinearFilteredColor(float2 texcoord)
{
fixed4 s1 = SampleSpriteTexture(texcoord + float2(0.0, _MainTex_TexelSize.y));
fixed4 s2 = SampleSpriteTexture(texcoord + float2(_MainTex_TexelSize.x, 0.0));
fixed4 s3 = SampleSpriteTexture(texcoord + float2(_MainTex_TexelSize.x, _MainTex_TexelSize.y));
fixed4 s4 = SampleSpriteTexture(texcoord);
float2 TexturePosition = float2(texcoord)* _MainTex_TexelSize.z;
float fu = frac(TexturePosition.x);
float fv = frac(TexturePosition.y);
float4 tmp1 = lerp(s4, s2, fu);
float4 tmp2 = lerp(s1, s3, fu);
return lerp(tmp1, tmp2, fv);
}
fixed4 frag(v2f IN) : SV_Target
{
fixed4 c = GetBilinearFilteredColor(IN.texcoord - 0.498 * _MainTex_TexelSize.xy) * IN.color;
c.rgb *= c.a;
return c;
}
And the image test with the result:
Why not subtract exactly 0.5?
If you test it, you will see some edge cases where it jumps to (pixel - 1).
Let's take a closer look at what you are actually doing. I will stick to the 1D case because it is easier to visualize.
You have an array of pixels and a texture position. I assume, _MainTex_TexelSize.z is set in a way, such that it gives pixel coordinates. This is what you get (the boxes represent pixels, numbers in boxes the pixel number and numbers below the pixel space coordinates):
With your sampling (assuming nearest-point sampling), you will get pixels 2 and 3. However, you can see that the interpolation coordinate for lerp is actually wrong. You will pass the fractional part of the texture position (i.e. 0.8), but it should be 0.3 (= 0.8 - 0.5). The reasoning behind this is quite simple: if you land at the center of a pixel, you want to use that pixel's value. If you land right in the middle between two pixels, you want to use the average of both pixel values (i.e. an interpolation value of 0.5). Right now, you basically have an offset of half a pixel to the left.
When you solve the first problem, there is a second one:
In this case, you actually want to blend between pixel 1 and 2. But because you always go to the right in your sampling, you will blend between 2 and 3. Again, with a wrong interpolation value.
The solution should be quite simple: Subtract half of the pixel width from the texture coordinate before doing anything with it, which is probably just the following (assuming that your variables hold the things I think):
fixed4 c = GetBilinearFilteredColor(IN.texcoord - 0.5 * _MainTex_TexelSize.xy) * IN.color;
Another reason why the results are different could be that Unity actually uses a different filter, e.g. bicubic (but I don't know). Also, the usage of mipmaps could influence the result.
I can't find any documentation of different behavior, so this is just a sanity check that I'm not doing anything wrong...
I've created some helper functions in GLSL to output float/vec/mat comparisons as a color:
note: pretty sure there aren't any errors here, just including it so you know exactly what I'm doing...
//returns true or false if floats are eq (within some epsilon)
bool feq(float a, float b)
{
float c = a-b;
return (c > -0.05 && c < 0.05);
}
//returns true or false if vecs are eq
bool veq(vec4 a, vec4 b)
{
return
(
feq(a.x, b.x) &&
feq(a.y, b.y) &&
feq(a.z, b.z) &&
feq(a.w, b.w) &&
true
);
}
//returns color indicating where first diff lies between vecs
//white for "no diff"
vec4 cveq(vec4 a, vec4 b)
{
if(!feq(a.x, b.x)) return vec4(1.,0.,0.,1.);
else if(!feq(a.y, b.y)) return vec4(0.,1.,0.,1.);
else if(!feq(a.z, b.z)) return vec4(0.,0.,1.,1.);
else if(!feq(a.w, b.w)) return vec4(1.,1.,0.,1.);
else return vec4(1.,1.,1.,1.);
}
//returns true or false if mats are eq
bool meq(mat4 a, mat4 b)
{
return
(
veq(a[0],b[0]) &&
veq(a[1],b[1]) &&
veq(a[2],b[2]) &&
veq(a[3],b[3]) &&
true
);
}
//returns color indicating where first diff lies between mats
//white means "no diff"
vec4 cmeq(mat4 a, mat4 b)
{
if(!veq(a[0],b[0])) return vec4(1.,0.,0.,1.);
else if(!veq(a[1],b[1])) return vec4(0.,1.,0.,1.);
else if(!veq(a[2],b[2])) return vec4(0.,0.,1.,1.);
else if(!veq(a[3],b[3])) return vec4(1.,1.,0.,1.);
else return vec4(1.,1.,1.,1.);
}
So I have a model mat, a view mat, and a proj mat. I'm rendering a rectangle on screen (that is correctly projected/transformed...), and setting its color based on how well each step of the calculations matches my CPU-calculated equivalents.
uniform mat4 model_mat;
uniform mat4 view_mat;
uniform mat4 proj_mat;
attribute vec4 position;
varying vec4 var_color;
void main()
{
//this code works (at least visually)- the rect is transformed as expected
vec4 model_pos = model_mat * position;
gl_Position = proj_mat * view_mat * model_pos;
//this is the test code that does the same as above, but tests its results against CPU calculated equivalents
mat4 m;
//test proj
//compares the passed in uniform 'proj_mat' against a hardcoded rep of 'proj_mat' as printf'd by the CPU
m[0] = vec4(1.542351,0.000000,0.000000,0.000000);
m[1] = vec4(0.000000,1.542351,0.000000,0.000000);
m[2] = vec4(0.000000,0.000000,-1.020202,-1.000000);
m[3] = vec4(0.000000,0.000000,-2.020202,0.000000);
var_color = cmeq(proj_mat,m); //THIS PASSES (the rect is white)
//view
//compares the passed in uniform 'view_mat' against a hardcoded rep of 'view_mat' as printf'd by the CPU
m[0] = vec4(1.000000,0.000000,-0.000000,0.000000);
m[1] = vec4(-0.000000,0.894427,0.447214,0.000000);
m[2] = vec4(0.000000,-0.447214,0.894427,0.000000);
m[3] = vec4(-0.000000,-0.000000,-22.360680,1.000000);
var_color = cmeq(view_mat,m); //THIS PASSES (the rect is white)
//projview
mat4 pv = proj_mat*view_mat;
//proj_mat*view_mat
//compares the result of GPU computed proj*view against a hardcoded rep of proj*view **<- NOTE ORDER** as printf'd by the CPU
m[0] = vec4(1.542351,0.000000,0.000000,0.000000);
m[1] = vec4(0.000000,1.379521,-0.689760,0.000000);
m[2] = vec4(0.000000,-0.456248,-0.912496,20.792208);
m[3] = vec4(0.000000,-0.447214,-0.894427,22.360680);
var_color = cmeq(pv,m); //THIS FAILS (the rect is green)
//view_mat*proj_mat
//compares the result of GPU computed proj*view against a hardcoded rep of view*proj **<- NOTE ORDER** as printf'd by the CPU
m[0] = vec4(1.542351,0.000000,0.000000,0.000000);
m[1] = vec4(0.000000,1.379521,0.456248,0.903462);
m[2] = vec4(0.000000,0.689760,21.448183,-1.806924);
m[3] = vec4(0.000000,0.000000,-1.000000,0.000000);
var_color = cmeq(pv,m); //THIS FAILS (the rect is green)
//view_mat_t*proj_mat_t
//compares the result of GPU computed proj*view against a hardcoded rep of view_t*proj_t **<- '_t' = transpose, also note order** as printf'd by the CPU
m[0] = vec4(1.542351,0.000000,0.000000,0.000000);
m[1] = vec4(0.000000,1.379521,-0.456248,-0.447214);
m[2] = vec4(0.000000,-0.689760,-0.912496,-0.894427);
m[3] = vec4(0.000000,0.000000,20.792208,22.360680);
var_color = cmeq(pv,m); //THIS PASSES (the rect is white)
}
And here are my CPU vector/matrix calcs (matrices are col-order [m.x is first column, not first row]):
fv4 matmulfv4(fm4 m, fv4 v)
{
return fv4
{ m.x[0]*v.x+m.y[0]*v.y+m.z[0]*v.z+m.w[0]*v.w,
m.x[1]*v.x+m.y[1]*v.y+m.z[1]*v.z+m.w[1]*v.w,
m.x[2]*v.x+m.y[2]*v.y+m.z[2]*v.z+m.w[2]*v.w,
m.x[3]*v.x+m.y[3]*v.y+m.z[3]*v.z+m.w[3]*v.w };
}
fm4 mulfm4(fm4 a, fm4 b)
{
return fm4
{ { a.x[0]*b.x[0]+a.y[0]*b.x[1]+a.z[0]*b.x[2]+a.w[0]*b.x[3], a.x[0]*b.y[0]+a.y[0]*b.y[1]+a.z[0]*b.y[2]+a.w[0]*b.y[3], a.x[0]*b.z[0]+a.y[0]*b.z[1]+a.z[0]*b.z[2]+a.w[0]*b.z[3], a.x[0]*b.w[0]+a.y[0]*b.w[1]+a.z[0]*b.w[2]+a.w[0]*b.w[3] },
{ a.x[1]*b.x[0]+a.y[1]*b.x[1]+a.z[1]*b.x[2]+a.w[1]*b.x[3], a.x[1]*b.y[0]+a.y[1]*b.y[1]+a.z[1]*b.y[2]+a.w[1]*b.y[3], a.x[1]*b.z[0]+a.y[1]*b.z[1]+a.z[1]*b.z[2]+a.w[1]*b.z[3], a.x[1]*b.w[0]+a.y[1]*b.w[1]+a.z[1]*b.w[2]+a.w[1]*b.w[3] },
{ a.x[2]*b.x[0]+a.y[2]*b.x[1]+a.z[2]*b.x[2]+a.w[2]*b.x[3], a.x[2]*b.y[0]+a.y[2]*b.y[1]+a.z[2]*b.y[2]+a.w[2]*b.y[3], a.x[2]*b.z[0]+a.y[2]*b.z[1]+a.z[2]*b.z[2]+a.w[2]*b.z[3], a.x[2]*b.w[0]+a.y[2]*b.w[1]+a.z[2]*b.w[2]+a.w[2]*b.w[3] },
{ a.x[3]*b.x[0]+a.y[3]*b.x[1]+a.z[3]*b.x[2]+a.w[3]*b.x[3], a.x[3]*b.y[0]+a.y[3]*b.y[1]+a.z[3]*b.y[2]+a.w[3]*b.y[3], a.x[3]*b.z[0]+a.y[3]*b.z[1]+a.z[3]*b.z[2]+a.w[3]*b.z[3], a.x[3]*b.w[0]+a.y[3]*b.w[1]+a.z[3]*b.w[2]+a.w[3]*b.w[3] } };
}
A key thing to notice is that the view_mat_t * proj_mat_t on the CPU matched the proj_mat * view_mat on the GPU. Does anyone know why? I've done tests on matrices on the CPU and compared them to results of online matrix multipliers, and they seem correct...
I know that the GPU does things between the vertex shader and the fragment shader (I think it divides gl_Position by gl_Position.w or something?)... Is there something else going on here, in just the vertex shader, that I'm not taking into account? Is something being auto-transposed at some point?
You may wish to consider GLM for CPU-side Matrix instantiation and calculations. It'll help reduce possible sources of errors.
Secondly, GPUs and CPUs do not perform identical calculations. The IEEE 754 standard for floating-point arithmetic has relatively rigorous requirements for how these calculations have to be performed and how accurate they have to be, but:
It's still possible for numbers to come up different in the least significant bit (and more than that depending on the specific operation/function being used)
Some GPU vendors opt out of ensuring strict IEEE compliance in the first place (Nvidia has been known in the past to prioritize speed over strict IEEE compliance)
I would finally note that your CPU-side computations leave a lot of room for rounding errors, which can add up. The usual advice for these kinds of questions, then, is to include tolerance in your code for small deviations. Code that checks for 'equality' of two floating-point numbers usually presumes that abs(x-y) < 0.000001 means x and y are essentially equal. Naturally, the specific number will have to be calibrated for your own use.
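For example, a minimal CPU-side sketch of such a tolerance check (the epsilon here is just a placeholder you would calibrate yourself):
#include <math.h>
/* sketch of a tolerance-based float comparison; 0.000001 is an arbitrary placeholder epsilon */
int nearly_equal(float x, float y)
{
    return fabsf(x - y) < 0.000001f;
}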
And of course, you'll want to check to make sure that all your matrices/uniforms are being passed in correctly.
Ok. I've found an answer. There is nothing special about matrix operations from within a single shader. There are, however, a couple of things you should be aware of:
:1: OpenGL (GLSL) uses column-major matrices. So to construct the matrix that would be visually represented in a mathematical context as this:
1 2 3 4
5 6 7 8
9 10 11 12
13 14 15 16
you would, from within GLSL use:
mat4 m = mat4(
vec4( 1, 5, 9,13),
vec4( 2, 6,10,14),
vec4( 3, 7,11,15),
vec4( 4, 8,12,16)
);
:2: If you instead use row-major matrices on the CPU, make sure to set the "transpose" flag to true when uploading the matrix uniforms to the shader, and make sure to set it to false if you're using col-major matrices.
So long as you are aware of these two things, you should be good to go.
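For the second point, here is a minimal upload sketch (program, m_row_major and m_col_major are placeholder names for your program object and float[16] matrix data, not from the question):
GLint loc = glGetUniformLocation(program, "proj_mat");
/* row-major CPU data: let OpenGL transpose it during the upload */
glUniformMatrix4fv(loc, 1, GL_TRUE, m_row_major);
/* column-major CPU data: upload as-is */
glUniformMatrix4fv(loc, 1, GL_FALSE, m_col_major);
Note that OpenGL ES 2.0 requires the transpose parameter to be GL_FALSE, in which case you have to transpose row-major data on the CPU before uploading.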
My particular problem above was that I was in the middle of switching from row-major to col-major in my CPU implementation and wasn't thorough in ensuring that implementation was taken into account across all my CPU matrix operations.
Specifically, here is my now-correct mat4 multiplication implementation, assuming col-major matrices:
fm4 mulfm4(fm4 a, fm4 b)
{
return fm4
{ { a.x[0]*b.x[0] + a.y[0]*b.x[1] + a.z[0]*b.x[2] + a.w[0]*b.x[3], a.x[1]*b.x[0] + a.y[1]*b.x[1] + a.z[1]*b.x[2] + a.w[1]*b.x[3], a.x[2]*b.x[0] + a.y[2]*b.x[1] + a.z[2]*b.x[2] + a.w[2]*b.x[3], a.x[3]*b.x[0] + a.y[3]*b.x[1] + a.z[3]*b.x[2] + a.w[3]*b.x[3] },
{ a.x[0]*b.y[0] + a.y[0]*b.y[1] + a.z[0]*b.y[2] + a.w[0]*b.y[3], a.x[1]*b.y[0] + a.y[1]*b.y[1] + a.z[1]*b.y[2] + a.w[1]*b.y[3], a.x[2]*b.y[0] + a.y[2]*b.y[1] + a.z[2]*b.y[2] + a.w[2]*b.y[3], a.x[3]*b.y[0] + a.y[3]*b.y[1] + a.z[3]*b.y[2] + a.w[3]*b.y[3] },
{ a.x[0]*b.z[0] + a.y[0]*b.z[1] + a.z[0]*b.z[2] + a.w[0]*b.z[3], a.x[1]*b.z[0] + a.y[1]*b.z[1] + a.z[1]*b.z[2] + a.w[1]*b.z[3], a.x[2]*b.z[0] + a.y[2]*b.z[1] + a.z[2]*b.z[2] + a.w[2]*b.z[3], a.x[3]*b.z[0] + a.y[3]*b.z[1] + a.z[3]*b.z[2] + a.w[3]*b.z[3] },
{ a.x[0]*b.w[0] + a.y[0]*b.w[1] + a.z[0]*b.w[2] + a.w[0]*b.w[3], a.x[1]*b.w[0] + a.y[1]*b.w[1] + a.z[1]*b.w[2] + a.w[1]*b.w[3], a.x[2]*b.w[0] + a.y[2]*b.w[1] + a.z[2]*b.w[2] + a.w[2]*b.w[3], a.x[3]*b.w[0] + a.y[3]*b.w[1] + a.z[3]*b.w[2] + a.w[3]*b.w[3] } };
}
Again, the above implementation is for column-major matrices. That means that a.x is the first column of the matrix, not the first row.
A key thing to notice is that the view_mat_t * proj_mat_t on the CPU matched the proj_mat * view_mat on the GPU. Does anyone know why?
The reason for this is that for two matrices A and B: A * B = (B' * A')', where ' indicates the transpose operation. As you already pointed out yourself, your math code (as well as popular math libraries such as GLM) uses a row-major representation of matrices, while OpenGL (by default) uses a column-major representation. What this means is that the matrix A,
(a b c)
A = (d e f)
(g h i)
in your CPU math library is stored in memory as [a, b, c, d, e, f, g, h, i], whereas defined in a GLSL shader, it would be stored as [a, d, g, b, e, h, c, f, i]. So if you upload the data [a, b, c, d, e, f, g, h, i] of the GLM matrix with glUniformMatrix3fv with the transpose parameter set to GL_FALSE, then the matrix you will see in GLSL is
(a d g)
A' = (b e h)
(c f i)
which is the transposed original matrix. Having realized that changing the interpretation of the matrix data between row-major and column-major leads to a transposed version of the original matrix, you can now explain why suddenly the matrix multiplication works the other way around. Your view_mat_t and proj_mat_t on the CPU are interpreted as view_mat_t' and proj_mat_t' in your GLSL shader, so uploading the pre-calculated view_mat_t * proj_mat_t to the shader will lead to the same result as uploading both matrices separately and then calculating proj_mat_t * view_mat_t.
In my previous question, it was established that, when texturing a quad, the face is broken down into triangles and the texture coordinates interpolated in an affine manner.
Unfortunately, I do not know how to fix that. The provided link was useful, but it doesn't give the desired effect. The author concludes: "Note that the image looks as if it's a long rectangular quad extending into the distance. . . . It can become quite confusing . . . because of the "false depth perception" that this produces."
What I would like to do is to have the texturing preserve the original scaling of the texture. For example, in the trapezoidal case, I want the vertical spacing of the texels to be the same (example created with paint program):
Notice that, by virtue of the vertical spacing being identical, yet the quad's obvious distortion, straight lines in texture space are no longer straight lines in world space. Thus, I believe the required mapping to be nonlinear.
The question is: is this even possible in the fixed function pipeline? I'm not even sure exactly what the "right answer" is for more general quads; I imagine that the interpolation functions could get very complicated very fast, and I realize that "preserve the original scaling" isn't exactly an algorithm. World-space triangles are no longer linear in texture space.
As an aside, I do not really understand the 3rd and 4th texture coordinates; if someone could point me to a resource, that would be great.
The best approach to a solution that works with modern GPU APIs can be found on Nathan Reed's blog.
But you end up with a problem similar to the original perspective problem :( I'm trying to solve it and will post a solution once I have it.
Edit:
There is a working sample of a quad texture on Shadertoy.
Here is a simplified, modified version I made; all credit due to Inigo Quilez:
float cross2d( in vec2 a, in vec2 b )
{
return a.x*b.y - a.y*b.x;
}
// given a point p and a quad defined by four points {a,b,c,d}, return the bilinear
// coordinates of p in the quad. Returns (-1,-1) if the point is outside of the quad.
vec2 invBilinear( in vec2 p, in vec2 a, in vec2 b, in vec2 c, in vec2 d )
{
vec2 e = b-a;
vec2 f = d-a;
vec2 g = a-b+c-d;
vec2 h = p-a;
float k2 = cross2d( g, f );
float k1 = cross2d( e, f ) + cross2d( h, g );
float k0 = cross2d( h, e );
float k2u = cross2d( e, g );
float k1u = cross2d( e, f ) + cross2d( g, h );
float k0u = cross2d( h, f);
float v1, u1, v2, u2;
float w = k1*k1 - 4.0*k0*k2;
if( w<0.0 ) return vec2(-1.0);   // no real solution: the point is outside the quad
w = sqrt( w );
v1 = (-k1 - w)/(2.0*k2);
u1 = (-k1u - w)/(2.0*k2u);
bool b1 = v1>0.0 && v1<1.0 && u1>0.0 && u1<1.0;
if( b1 )
return vec2( u1, v1 );
v2 = (-k1 + w)/(2.0*k2);
u2 = (-k1u + w)/(2.0*k2u);
bool b2 = v2>0.0 && v2<1.0 && u2>0.0 && u2<1.0;
if( b2 )
return vec2( u2, v2 )*.5;
return vec2(-1.0);
}
float sdSegment( in vec2 p, in vec2 a, in vec2 b )
{
vec2 pa = p - a;
vec2 ba = b - a;
float h = clamp( dot(pa,ba)/dot(ba,ba), 0.0, 1.0 );
return length( pa - ba*h );
}
vec3 hash3( float n )
{
return fract(sin(vec3(n,n+1.0,n+2.0))*43758.5453123);
}
//added for dithering
bool even(float a)
{
return fract(a/2.0) <= 0.5;
}
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
vec2 p = (-iResolution.xy + 2.0*fragCoord.xy)/iResolution.y;
// background
vec3 bg = vec3(sin(iTime+(fragCoord.y * .01)),sin(3.0*iTime+2.0+(fragCoord.y * .01)),sin(5.0*iTime+4.0+(fragCoord.y * .01)));
vec3 col = bg;
// move points
vec2 a = sin( 0.11*iTime + vec2(0.1,4.0) );
vec2 b = sin( 0.13*iTime + vec2(1.0,3.0) );
vec2 c = cos( 0.17*iTime + vec2(2.0,2.0) );
vec2 d = cos( 0.15*iTime + vec2(3.0,1.0) );
// area of the quad
vec2 uv = invBilinear( p, a, b, c, d );
if( uv.x>-0.5 )
{
col = texture( iChannel0, uv ).xyz;
}
//mesh or screen door or dithering like in many sega saturn games
fragColor = vec4( col, 1.0 );
}
Wow, this is quite a complex problem. Basically, you aren't bringing perspective into the equation. Your final x,y coords are divided by the z value you get when you rotate. This means you get a smaller change in texture space (s,t) the further you get from the camera.
However, by doing this you effectively interpolate linearly as z increases. I wrote an ANCIENT demo that did this back in 1997. This is called "affine" texture mapping.
The thing is, because you are dividing x and y by z, you actually need to interpolate your values using "1/z". This properly takes into account the perspective that is applied when you divide x and y by z. Hence you end up with "perspective-correct" texture mapping.
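A minimal sketch of that idea for a single texture coordinate (software-rasterizer style, with placeholder names; s0/s1 are the coordinate values at the two endpoints, z0/z1 their depths, and t the screen-space interpolation factor in [0,1]):
/* sketch: perspective-correct interpolation of one texture coordinate */
float perspective_correct(float s0, float z0, float s1, float z1, float t)
{
    float s_over_z   = (1.0f - t) * (s0 / z0) + t * (s1 / z1);     /* interpolate s/z linearly */
    float one_over_z = (1.0f - t) * (1.0f / z0) + t * (1.0f / z1); /* interpolate 1/z linearly */
    return s_over_z / one_over_z;                                  /* recover s at this pixel */
}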
I wish I could go into more detail than that but over the last 15 odd years of alcohol abuse my memory has got a bit hazy ;) One of these days I'd love to re-visit software rendering as I'm convinced that with the likes of OpenCL taking off it will soon end up being a better way to write a rendering engine!
Anyway, I hope that's some help, and apologies that I can't be of more help.
(As an aside, when I was figuring all that rendering out 15 years ago, I'd have loved to have had all the resources available to me now; the internet makes my job and life so much easier. Working everything out from first principles was incredibly painful. I recall initially trying the 1/z interpolation, but it made things smaller as they got closer, and I gave up on it when I should've inverted things... to this day, as my knowledge has increased exponentially, I STILL really wish I'd written a "slow" but fully perspective-correct renderer... one day I'll get round to it ;))
Good luck!