I'm trying to create a vertex shader that produces a waving flag. I got the vertex positions going fine, but I'm having trouble adjusting the NORMALS to be what they should be in a waving flag. I did a little kludge, but it works from only SOME angles.
Here is the pertinent vertex shader code:
vec4 aPos=input_XYZ;
float aTime=(uniform_timer/75.)*7.;
aPos.y+=sin(aTime)*.15f*input_XYZ.x; // The wave
output_XYZ=aPos*uniform_ComboMatrix; // Into scene space
vec4 aWorkNormal=input_Normal;
aWorkNormal.y+=sin(aTime)*.25f; // <-- Here's the kludge to inexpensively tip the normal into the "wave" pattern
output_Normal=aWorkNormal*uniform_NormalizedWorldMatrix; // Put into same space as the light's vector
So I want to lose the kludge and actually give it the correct normal for flag waving... what's the right way to transform/rotate a normal on a waving flag so it points correctly after the y position is modified by sin?
The transformation you apply to aPos is a linear one, which can be described by the matrix:
    [ 1  0  0 ]
M = [ C  1  0 ]
    [ 0  0  1 ]
C = sin(aTime)*0.15
The normals then need to be transformed by the matrix W = transpose(inverse(M)):
    [ 1 -C  0 ]
W = [ 0  1  0 ]
    [ 0  0  1 ]
Turning it back into code:
vec4 aWorkNormal = input_Normal;
aWorkNormal.x -= sin(aTime)*.15f*aWorkNormal.y;
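If it helps, here is a minimal sketch of the whole thing using the names from your snippet (I'm assuming your attributes/uniforms behave exactly as shown in the question; adjust to your actual declarations):
// wave the position
vec4 aPos = input_XYZ;
float aTime = (uniform_timer/75.)*7.;
float C = sin(aTime)*.15;                      // same factor as the wave
aPos.y += C*input_XYZ.x;                       // the wave
output_XYZ = aPos*uniform_ComboMatrix;         // into scene space
// shear the normal with W = transpose(inverse(M))
vec4 aWorkNormal = input_Normal;
aWorkNormal.x -= C*aWorkNormal.y;
output_Normal = aWorkNormal*uniform_NormalizedWorldMatrix;
Since the shear changes the normal's length slightly, you may also want to renormalize it before (or after) the lighting calculation.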
I am trying to draw a very simple curve in just a fragment shader where there is a horizontal section, a transition section, then another horizontal section. It looks like the following:
My approach:
Rather than using Bezier curves (which would make handling thickness more complicated), I tried to take a shortcut. Basically, I just use one smoothstep to transition between the horizontal segments, which gives a decent curve. To compute the thickness of the curve, for any given fragment x I compute the y, which gives the coordinate (x, y) of where on the line we should be. Unfortunately, this isn't computing the shortest distance to the curve, as seen below.
Below is a diagram to help explain the function I am having trouble with.
// Start is a 2D point where the line will start
// End is a 2D point where the line will end
// transition_x is the "x" position where we use a smoothstep to transition between points
float CurvedLine(vec2 start, vec2 end, float transition_x) {
    // Setup variables for positioning the line
    float curve_width_frac = bendWidth; // How wide should we make the S bend
    float thickness = abs(end.x - start.x) * curve_width_frac; // normalize
    float start_blend = transition_x - thickness;
    float end_blend = transition_x + thickness;

    // For the current fragment, if you draw a line straight up, what's the first point it hits?
    float progress_along_line = smoothstep(start_blend, end_blend, frag_coord.x);
    vec2 point_on_line_from_x = vec2(frag_coord.x, mix(start.y, end.y, progress_along_line)); // given an x, this is the y

    // Convert to application-specific stuff since units are a little odd
    vec2 nearest_coord = point_on_line_from_x * dimensions;
    vec2 rad_as_coord = rad * dimensions;

    // Return pseudo distance function where 1 is inside and 0 is outside
    return 1.0 - smoothstep(lineWidth * dimensions.y, lineWidth * 1.2 * dimensions.y, distance(nearest_coord, rad_as_coord));
    // return mix(vec4(1.0), vec4(0.0), s));
}
I am familiar with computing the shortest distance to a line or line segment, but I am not sure how to tackle it for this curved segment. Any suggestions would be greatly appreciated.
I would do this in 2 passes:
1. render the thin curve
do not use the target colors yet, but BW/grayscale instead ... black background, white lines, to make the next step easier.
2. smooth the rendered image and threshold
so simply use any FIR smoothing or Gaussian blur that will bleed the color up to half of your thickness distance. After this just threshold the result against the background and recolor to the wanted colors. The smoothing needs the rendered image from #1 as input. You can use a simple convolution with a circular mask:
0 0 0 1 1 1 0 0 0
0 0 1 1 1 1 1 0 0
0 1 1 1 1 1 1 1 0
1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1
0 1 1 1 1 1 1 1 0
0 0 1 1 1 1 1 0 0
0 0 0 1 1 1 0 0 0
btw. the color intensity after a convolution like this will be a function of distance from the center, so it can be used as a texture coordinate or shading parameter if you want ...
Instead of a convolution matrix you can also use 2 nested for loops:
// convolution (r = blur radius in pixels)
vec4 col = vec4(0.0);
for (int y = -r; y <= r; y++)
 for (int x = -r; x <= r; x++)
  if ((x*x) + (y*y) <= r*r) // circular mask
   col += texture2D(sampler, vec2(x0 + float(x)*mx, y0 + float(y)*my));
// threshold & recolor
if (col.r > threshold) col = col_curve; // assuming 1st pass has red channel used
else                   col = col_background;
where x0,y0 is your fragment position in texture coordinates and mx,my scale from pixels to texture coordinates. You also need to handle the edge cases where x0+x*mx and y0+y*my fall outside your texture.
Beware: the thicker the curve, the slower this gets ... For higher thicknesses it is faster to apply a smaller-radius smoothing a few times (more passes).
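For completeness, here is a minimal full-fragment-shader sketch of pass #2 (blur + threshold). The uniform names (tex, texel, r, threshold, col_curve, col_background) are placeholders I just made up, so rename them to whatever your framework uses:
// pass #2: smooth the BW image from pass #1 and threshold it
uniform sampler2D tex;        // rendered output of pass #1 (white curve on black)
uniform vec2 texel;           // size of one pixel in texture coordinates (1.0 / resolution)
uniform int r;                // blur radius in pixels (~ half the wanted thickness)
uniform float threshold;      // e.g. 0.05
uniform vec4 col_curve;       // final curve color
uniform vec4 col_background;  // final background color
varying vec2 uv;              // fragment position in texture coordinates

void main()
{
    vec4 col = vec4(0.0);
    int n = 0;
    for (int y = -r; y <= r; y++)
        for (int x = -r; x <= r; x++)
            if ((x*x) + (y*y) <= r*r)               // circular mask
            {
                col += texture2D(tex, uv + vec2(float(x), float(y))*texel);
                n++;
            }
    col /= float(n);                                // average over the mask
    gl_FragColor = (col.r > threshold) ? col_curve : col_background;
}
Here the edge cases mentioned above are left to the sampler's wrap mode; set it to clamp-to-edge so the border does not bleed in.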
Here are some related QAs that cover some of the steps:
OpenGL Scale Single Pixel Line for multi pass (old api)
How to implement 2D raycasting light effect in GLSL scanning input texture
I am quite new to OpenCV so please forgive me if I am asking something obvious.
I have a program that gives me the position and rotation of a moving camera. But to be sure my program works correctly, I want to draw those results in a 3D coordinate system.
I also have the camera projection matrix:
Camera matrix:
[ 1135.52        0  1139.49 ]
[       0  1023.50   543.50 ]
[       0        0        1 ]
Example of how my result looks (calculated camera position):
Position = [  0.92725    0.041710  -0.372177   0.0803997
             -0.0279857 -0.983288  -0.1179896 -0.0466907
              0.373459  -0.177219   0.910561   1.19969
              0          0          0          1         ]
I'm writing a Solar System simulator in OpenGL. One of the components in the Solar System is the Earth's orbit (a simple circle around the Sun, created by the gluDisk function).
I was wondering how I can retrieve the normal vector of this disk, because I need to use it as a rotation vector for the camera (which needs to follow the Earth's spin around the Sun).
This is the code (Java) that creates the orbit; it works well.
Using this code, how could I retrieve the normal of this disk? (as the disk is contained in some plane, which is defined by some normal).
gl.glPushMatrix();
// if tilt is 0, align the orbit with the xz plane
gl.glRotated(90d - orbitTilt, 1, 0, 0);
gl.glTranslated(0, 0, 0); // orbit is around the origin
gl.glColor3d(0.5, 0.5, 0.5); // gray
// draw orbit
glu.gluQuadricDrawStyle(quad, GLU.GLU_SILHOUETTE);
glu.gluDisk(quad, 0, position.length(), slices, 1);
gl.glPopMatrix();
Before transformation, the disk is in the xy-plane. This means that the normals point in the z-direction, (0, 0, 1). In fact, gluDisk() will emit these normals during rendering, and the transformations you specify will be applied to them, so for rendering you don't have to do anything.
To calculate the transformed normal yourself, you only need to apply the rotation. The translation is not used for transforming normals.
So we need to apply a rotation of 90 - t (where t = orbitTilt) around the x-axis to the vector (0, 0, 1). Applying the corresponding rotation matrix gives:
[ 1      0           0     ]   [ 0 ]   [     0      ]
[ 0  cos(90-t)  -sin(90-t) ] * [ 0 ] = [ -sin(90-t) ]
[ 0  sin(90-t)   cos(90-t) ]   [ 1 ]   [  cos(90-t) ]
Applying a couple of trigonometric identities:
cos(90 - t) = sin(t)
sin(90 - t) = cos(t)
gives:
[     0      ]   [    0    ]
[ -sin(90-t) ] = [ -cos(t) ]
[  cos(90-t) ]   [  sin(t) ]
When you apply the sin() and cos() functions in your code, keep in mind that the angles are specified in radians. So if your orbitTilt angle is currently in degrees, the normal would be:
xNormal = 0.0;
yNormal = -cos(orbitTilt * (M_PI / 180.0));
zNormal = sin(orbitTilt * (M_PI / 180.0));
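As a quick sanity check of that formula: for orbitTilt = 0 it gives
(xNormal, yNormal, zNormal) = (0, -cos(0), sin(0)) = (0, -1, 0)
which is perpendicular to the xz-plane, matching the comment in your code that a zero tilt puts the orbit in the xz-plane.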
What I am doing in the vertex shader is:
shadowCoord = shadowVP * mMatrix * vec4(vertex_position,1.0);
Now to get it back in the range [-1, 1] I did this in the fragment shader:
vec3 proj = shadowCoord.xyz / shadowCoord.w;
But if I test the z value of such a point, I get a value bigger than 1.
The perspective matrix I use is obtained via:
glm::perspective(FOV, aspectRatio, near, far);
And it results in:
[ 2.4142  0        0       0
  0       2.4142   0       0
  0       0       -1.02   -1
  0       0       -0.202   0 ]
and the shadowVP is:
shadow_Perp * shadow_View
Shouldn't proj.z be in the range [-1,1]?
Shouldn't proj.z be in the range [-1,1]?
No. It is in the range [-1,1] if the point lies inside the frustum. The frustum is defined as -w <= x,y,z <= w for any vertex in clip space (and that w varies per vertex). But you don't do any clipping, so any value can result here. Note two things:
While I said the implication "v inside the frustum" => "NDC coords in [-1,1]" holds true, the converse does not. That means you can get NDC coords inside [-1,1] for points which lie outside of the frustum (they might even lie behind the "viewing position").
You might also get a division by 0 here (when shadowCoord.w is 0).
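A minimal sketch of how you could guard against both cases before sampling the shadow map; the function name, the 0.005 bias, and the "treat as lit" choices are my assumptions, not something your code requires:
float shadowFactor(vec4 shadowCoord, sampler2D shadowMap)
{
    // behind (or on) the shadow camera's plane: w <= 0, no valid projection
    if (shadowCoord.w <= 0.0)
        return 1.0;                               // treat as lit (assumption)

    vec3 proj = shadowCoord.xyz / shadowCoord.w;  // in [-1,1] only inside the frustum
    vec3 uvz = proj * 0.5 + 0.5;                  // remap to [0,1] for the lookup

    // outside the shadow map: also treat as lit (assumption)
    if (any(lessThan(uvz, vec3(0.0))) || any(greaterThan(uvz, vec3(1.0))))
        return 1.0;

    float depth = texture2D(shadowMap, uvz.xy).r;
    return (uvz.z - 0.005 > depth) ? 0.0 : 1.0;   // small constant bias, hard shadow
}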
I'm working on a computer vision problem which requires rendering a 3d model using a calibrated camera. I'm writing a function that breaks the calibrated camera matrix into a modelview matrix and a projection matrix, but I've run into an interesting phenomenon in opengl that defies explanation (at least by me).
The short description is that negating the projection matrix results in nothing being rendered (at least in my experience). I would expect that multiplying the projection matrix by any scalar would have no effect, because it transforms homogeneous coordinates, which are unaffected by scaling.
Below is my reasoning why I find this to be unexpected; maybe someone can point out where my reasoning is flawed.
Imagine the following perspective projection matrix, which gives correct results:
    [ a  b  c  0 ]
P = [ 0  d  e  0 ]
    [ 0  0  f  g ]
    [ 0  0  h  0 ]
Multiplying this by camera coordinates gives homogeneous clip coordinates:
[x_c] [ a b c 0 ] [X_e]
[y_c] = [ 0 d e 0 ] * [Y_e]
[z_c] [ 0 0 f g ] [Z_e]
[w_c] [ 0 0 h 0 ] [W_e]
Finally, to get normalized device coordinates, we divide x_c, y_c, and z_c by w_c:
[x_n] [x_c/w_c]
[y_n] = [y_c/w_c]
[z_n] [z_c/w_c]
Now, if we negate P, the resulting clip coordinates should be negated, but since they are homogeneous coordinates, multiplying by any scalar (e.g. -1) shouldn't have any effect on the resulting normalized device coordinates. However, in OpenGL, negating P results in nothing being rendered. I can multiply P by any positive scalar and get the exact same rendered results, but as soon as I multiply by a negative scalar, nothing renders. What is going on here??
Thanks!
Well, the gist of it is that the clip test is:
-w_c <= x_c <= w_c
-w_c <= y_c <= w_c
-w_c <= z_c <= w_c
Multiplying by a negative value breaks this test.
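A concrete example of how the sign flip breaks it: take a vertex with x_c = 1 and w_c = 2, which passes because -2 <= 1 <= 2. Negating the matrix gives x_c = -1 and w_c = -2, and the test becomes 2 <= -1 <= -2, which fails, so the vertex is clipped away even though the NDC value is unchanged:
1/2 = (-1)/(-2) = x_n
So the homogeneous-coordinate reasoning is right about the division, but clipping happens before the division, on the signed clip coordinates.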
I just found this tidbit, which makes progress toward an answer:
From Red book, appendix G:
Avoid using negative w vertex coordinates and negative q texture coordinates. OpenGL might not clip such coordinates correctly and might make interpolation errors when shading primitives defined by such coordinates.
Negating the projection matrix will result in a negative w clip coordinate, and apparently OpenGL doesn't like this. But can anyone explain WHY OpenGL doesn't handle this case?
reference: http://glprogramming.com/red/appendixg.html
Reasons I can think of:
By negating the projection matrix, the coordinates will no longer be within your zNear and zFar planes of the view frustum (which are necessarily greater than 0).
To create window coordinates, the normalized device coordinates are translated/scaled by the viewport. So, if you've used a negative scalar for the clip coordinates, the normalized device coordinates (now inverted) translate the viewport to window coordinates that are... off of your window (to the left and below, if you will)
Also, since you mentioned using a camera matrix and that you have negated the projection matrix, I have to ask... which parts of the camera matrix are you applying to which OpenGL matrices? Operating on the projection matrix other than through near/far/fovy/aspect causes all sorts of problems in the depth buffer, including anything that uses z (depth testing, face culling, etc.).
The OpenGL FAQ section on transformations has some more details.