I'm confused about the bias matrix in shadow mapping. According to this question: bias matrix in shadow mapping, the bias matrix is used to scale down and translate to [0..1] in x and [0..1] in y. So I imagine that if we don't use the bias matrix, the texture would only be filled with a quarter of the scene? Is that true? Or is there some magic here?
Not entirely, but the result is the same. As the answer from the question you linked says, after the w divide your coordinates are in NDC space, i.e. in the range [-1, 1] (x, y and z). Now when you're sampling from a texture, the coordinates you should give are in 'texture space', and OpenGL defines that space to be in the range [0, 1] (at least for 2D textures): x=0 y=0 is the bottom left of the texture, and x=1 y=1 the top right of the texture.
This means that when you are going to sample from your rendered depth texture, you have to transform your calculated texture coordinates from [-1, 1] to [0, 1]. If you don't, the texture itself will be fine, but only a quarter of your coordinates will fall in the range you actually want to sample from.
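For example, the lookup in the fragment shader ends up looking something like this (just a sketch; v_shadowCoord and shadowMap are assumed names for the varying holding the light-space clip coordinates and for the depth texture):

varying vec4 v_shadowCoord;   // light-space clip coordinates from the vertex shader
uniform sampler2D shadowMap;  // the rendered depth texture

void main()
{
    vec3 ndc = v_shadowCoord.xyz / v_shadowCoord.w; // perspective divide -> [-1, 1]
    vec3 shadowUV = ndc * 0.5 + 0.5;                // the bias: [-1, 1] -> [0, 1]
    float storedDepth = texture2D(shadowMap, shadowUV.xy).r;
    gl_FragColor = vec4(vec3(storedDepth), 1.0);    // just visualising the stored depth here
}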
You don't want to bias the objects that are rendered into the depth texture, because OpenGL already transforms the coordinates from NDC to window coordinates for you (the window being your texture in this case; use glViewport to get the correct transformation).
To apply the bias to your texture coordinates you can use a texture bias matrix, and multiply it by your projection matrix, so the shaders don't have to worry about it. The post you linked already gave that matrix:
const GLdouble bias[16] = {
    0.5, 0.0, 0.0, 0.0,
    0.0, 0.5, 0.0, 0.0,
    0.0, 0.0, 0.5, 0.0,
    0.5, 0.5, 0.5, 1.0};
Provided your matrices are column-major, this matrix transforms [-1, 1] to [0, 1]: it first multiplies by 0.5 and then adds 0.5. If your matrices are row-major, simply transpose the matrix and you're good to go.
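As a sketch of that pre-multiplication in the vertex shader (all names here are placeholders, not from the original post; folding biasMatrix into lightMVP on the CPU works just as well):

uniform mat4 biasMatrix;   // the matrix above
uniform mat4 lightMVP;     // light's projection * view * model
uniform mat4 cameraMVP;    // camera's projection * view * model
attribute vec4 a_position;
varying vec4 v_shadowCoord;

void main()
{
    // After the perspective divide, v_shadowCoord.xyz lands directly in [0, 1]
    v_shadowCoord = biasMatrix * lightMVP * a_position;
    gl_Position = cameraMVP * a_position;
}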
Hope this helped.
Related
I know that I can perform a transformation (rotation, scale) in the vertex shader. However, I wonder if I can do a similar thing in the fragment shader.
shadertoy.com - transformation_in_fragment_shade
I tried to apply a transformation in the fragment shader by multiplying the uv, but I'm confused by the result.
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    // Normalized pixel coordinates (from 0 to 1)
    vec2 uv = fragCoord/iResolution.xy;

    uv = fract(uv * 2.);

    vec4 img = texture(iChannel0, uv);

    // Output to screen
    fragColor = img;
}
I expected the image to be scaled up 2x, but instead it zooms out. I'm also confused by the meaning of fract here: why does applying this function split the image into 4 chunks?
So the question is: can I apply rotation or scaling in the fragment shader here? Or should I use a different approach for rotation and scaling?
Texture coordinates are in the range [0.0, 1.0]: (0, 0) is the bottom left and (1, 1) is the top right. The texture function looks up the texel at a particular coordinate and returns its color.
If you multiply the texture coordinates by 2, the coordinates will be in the range [0.0, 2.0]. The coordinates from 1.0 to 2.0 are not on the texture; with a clamping wrap mode they are clamped to 1.0.
The fract function returns the fractional part and discards the integer part; for example, fract(1.5) is 0.5. So uv = fract(uv * 2.) maps the coordinate range [0.0, 1.0] onto two repetitions of [0.0, 1.0] along each axis, which is why the image appears as 4 tiles.
If you want to zoom in (2x) towards the center, you need to map the view onto the texture region between the coordinates (0.25, 0.25) and (0.75, 0.75):
uv = uv / 2.0 + 0.25; // maps [0.0, 1.0] to [0.25, 0.75]
You can do the same with a 3x3 matrix:
mat3 zoom = mat3(
    vec3(0.5,  0.0,  0.0),
    vec3(0.0,  0.5,  0.0),
    vec3(0.25, 0.25, 1.0));
uv = (zoom * vec3(uv, 1.0)).xy;
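To answer the rotation part: yes, you can rotate in the fragment shader in exactly the same way, by transforming the texture coordinates. A minimal Shadertoy-style sketch (the angle value here is just an arbitrary example):

void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec2 uv = fragCoord / iResolution.xy;

    float angle = 0.5;                        // rotation in radians (example value)
    mat2 rot = mat2( cos(angle), sin(angle),  // columns of a 2D rotation matrix
                    -sin(angle), cos(angle));

    uv = rot * (uv - 0.5) + 0.5;              // rotate around the image center (0.5, 0.5)

    fragColor = texture(iChannel0, uv);
}

Keep in mind that you are transforming the lookup coordinates, not the geometry, so the visible image moves the opposite way: rotating the uv by +angle makes the picture appear rotated by -angle, just like scaling the uv up makes the picture appear smaller.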
What's the easiest way to render objects when the vertices and normals of the object are given?
Draw the vertices (position, normal, colour), then draw the points at vertex + normal (assuming the normal is unit length: position + normal, normal, colour) and give them a colour.
For normal mapping, colour RGB = normal XYZ * 0.5 + (0.5, 0.5, 0.5).
Halving and then adding (0.5, 0.5, 0.5) effectively scales and shifts each component of the normal from [-1, 1] into [0, 1], so there are no negative values in the colour components.
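As a small sketch of that colour encoding in a fragment shader (vNormal is an assumed varying carrying the interpolated normal):

varying vec3 vNormal;   // interpolated normal from the vertex shader

void main()
{
    vec3 n = normalize(vNormal);   // re-normalize after interpolation
    vec3 rgb = n * 0.5 + 0.5;      // map each component from [-1, 1] to [0, 1]
    gl_FragColor = vec4(rgb, 1.0);
}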
Might be a weird question, but I'm fairly inexperienced with OpenGL's 3D side, so can somebody please tell me how to draw a simple 2D box (C++ preferred) when:
GL_PROJECTION_MATRIX = [1.125, 0.00, 0.00, 0.0]
[0.000, 2.00, 0.00, 0.0]
[0.000, 0.00, -1.0, 0.0]
[0.000, -1.0, 0.00, 1.0]
GL_MODELVIEW_MATRIX = [1.0, 0.0, 0.0, 0.0]
[0.0, 1.0, 0.0, 0.0]
[0.0, 0.0, 1.0, 0.0]
[0.0, 0.0, 0.0, 1.0]
Changing these two is not possible due to external code.
What fixed function GL will do is multiply each vertex first by modelview, then by the projection matrix and finally divide by the (clip space) w component to reach NDC space. In NDC space, the viewing volume is represented by the cube [-1,1] along all 3 dimensions.
So, in general, knowing the matrices that are used, you can project the viewing volume back into eye or object space by inverting that chain of transformations and transforming the corner points of the NDC cube back (assuming the matrices can be inverted, which is usually the case).
Assuming the typical matrix storage order of fixed-function GL, this projection matrix is some sort of orthographic projection, so there is no perspective distortion and the viewing volume will be a cuboid in eye space / object space.
If one uses the matrices you specified, then everything drawn with x in [-0.8889, 0.8889] (left/right), y in [0, 1] (bottom/top) and z in [-1, 1] (far!/near) should be visible.
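To see where those numbers come from: the printout above is the column-major array, so the -1 is really the y translation in the fourth column, and with the identity modelview (and clip w = 1, so NDC = clip) each vertex is mapped as

x_ndc = 1.125 * x
y_ndc = 2 * y - 1
z_ndc = -z

Requiring each of these to lie in [-1, 1] gives exactly x in [-1/1.125, 1/1.125] ≈ [-0.8889, 0.8889], y in [0, 1] and z in [-1, 1].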
I am using a simple form of the Phong shading model, where the scene is lit by a directional light shining in the direction -y with monochromatic light of intensity 1. The viewpoint is infinitely far away, looking along the direction given by the vector (-1, 0, -1).
In this case, the shading equation is given by
I = k_d*L(N dot L)+k_s*L(R dot V)^n_s
where L is the directional light source, k_d and k_s are both 0.5, and n_s = 50.
In this case, how can I compute the R vector?
I am confused because for computing finite vectors we need finite coordinates, and in the case of the directional light, it's infinitely far away in the -y direction.
The reflection vector can be calculated with the reflect function in GLSL:
vec3 toEye = normalize(vec3(0.0) - vVaryingPos);         // from the surface point towards the eye
vec3 lightRef = normalize(reflect(-light, normal));      // reflect the incoming light direction about the normal
float spec = pow(max(dot(lightRef, toEye), 0.0), 64.0);  // clamp before pow: a negative base is undefined
specularColor = vec3(1.0) * spec;
The calculations are done in eye space, so the eye position is vec3(0.0).
Those equations work with direction vectors, not positions. So in your case, using a directional light simply means that L is constant: it equals [0, 1, 0] if L points from the surface towards the light, or [0, -1, 0] if it points along the light's direction (check which convention your equation uses).
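Putting that together, a minimal GLSL sketch of the shading for this particular setup (the names are placeholders; the constants follow the question's light and view directions, with k_d = k_s = 0.5 and n_s = 50):

vec3 N = normalize(normal);               // surface normal, assumed available
vec3 L = vec3(0.0, 1.0, 0.0);             // towards the light (it shines along -y)
vec3 V = normalize(vec3(1.0, 0.0, 1.0));  // towards the viewer (who looks along (-1, 0, -1))

vec3 R = normalize(reflect(-L, N));       // same as 2.0 * dot(N, L) * N - L
float I = 0.5 * max(dot(N, L), 0.0)
        + 0.5 * pow(max(dot(R, V), 0.0), 50.0);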
I've created a basic orthographic shader that displays sprites from textures. It works great.
I've added a "zoom" factor to it to allow the sprite to scale to become larger or smaller. Assuming that the texture is anchored with its origin in the "lower left", what it does is shrink towards that origin point, or expand from it towards the upper right. What I actually want is to shrink or expand "in place" to stay centered.
So, one way of achieving that would be to figure out how many pixels I'll shrink or expand by, and compensate. I'm not quite sure how I'd do that, and I also know that's not the best way. I fooled around with the order of my translates and scales, thinking I could scale first and then place, but I just get various bad results. I can't wrap my head around a way to solve the issue.
Here's my shader:
// Set up orthographic projection (960 x 640)
mat4 projectionMatrix = mat4( 2.0/960.0, 0.0,       0.0, -1.0,
                              0.0,       2.0/640.0, 0.0, -1.0,
                              0.0,       0.0,      -1.0,  0.0,
                              0.0,       0.0,       0.0,  1.0);

void main()
{
    // Set position
    gl_Position = a_position;

    // Translate by the uniforms for offsetting
    gl_Position.x += translateX;
    gl_Position.y += translateY;

    // Apply our (pre-computed) zoom factor to the X and Y of our matrix
    projectionMatrix[0][0] *= zoomFactorX;
    projectionMatrix[1][1] *= zoomFactorY;

    // Translate
    gl_Position *= projectionMatrix;

    // Pass right along to the frag shader
    v_texCoord = a_texCoord;
}
mat4 projectionMatrix =
Matrices in GLSL are constructed column-wise. For a mat4, the first 4 values are the first column, then the next 4 values are the second column and so on.
You transposed your matrix.
Also, what are those -1's for?
For the rest of your question: scaling is not something the projection matrix should be dealing with, at least not the kind of scaling you're talking about. Scales should be applied to the positions before you multiply them with the projection matrix, just like for 3D objects.
You didn't post what your sprite's vertex data is, so there's no way to know for sure. But the way it ought to work is that the vertex positions for the sprite should be centered at the sprite's center (which is wherever you define it to be).
So if you have a 16x24 sprite and you want the sprite's origin to sit 8 pixels from its left edge and 8 pixels from its bottom edge, then your sprite rectangle should go from (-8, -8) to (8, 16) (in a bottom-left coordinate system).
Then, if you scale it, it will scale around the center of the sprite's coordinate system.
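A minimal sketch of that order of operations in the vertex shader (uniform and attribute names follow the question; the projection matrix is written column-major here, which is why it differs from the transposed one above):

uniform float zoomFactorX, zoomFactorY;  // zoom factors
uniform float translateX, translateY;    // where the sprite goes on screen, in pixels
attribute vec4 a_position;               // sprite-local position, centered on the sprite's origin
attribute vec2 a_texCoord;
varying vec2 v_texCoord;

// Column-major orthographic projection for a 960 x 640 screen
const mat4 projectionMatrix = mat4( 2.0/960.0, 0.0,       0.0, 0.0,
                                    0.0,       2.0/640.0, 0.0, 0.0,
                                    0.0,       0.0,      -1.0, 0.0,
                                   -1.0,      -1.0,       0.0, 1.0);

void main()
{
    vec4 pos = a_position;

    pos.xy *= vec2(zoomFactorX, zoomFactorY);    // 1. scale around the sprite's local origin
    pos.xy += vec2(translateX, translateY);      // 2. then translate into place

    gl_Position = projectionMatrix * pos;        // 3. finally project (matrix on the left)

    v_texCoord = a_texCoord;
}

Because the scale happens before the translate, and the vertex positions are centered on the sprite's origin, the sprite grows and shrinks in place instead of sliding towards its lower-left corner.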