If I have a matrix, then the point O(0, 0, 0) will be translated to some point P(x, y, z). So rotating a matrix about its current position effectively means multiplying the matrix by a rotation matrix about P.
So I want a function like:
mat4 rotate(mat4 matrix, vec3 axis, float angle);
my current idea is:
vec4 p = {0, 0, 0, 1};
p = p * matrix;
generate translation matrix T, from point p
generate rotation matrix R, from axis and angle
return matrix * T * R * -T;
but I feel like there should be a more efficient way to do this...
Yep, that's how I'd do it. But one subtle correction: reverse the order of -T and T:
return matrix * -T * R * T;
You want to first 'undo' the translational origin of matrix, then rotate, then re-apply the translational origin. This is easier to see if you take, for example, a traditional scale/rotate/translate matrix (S * R2 * T) and expand it; then you can see that
(S * R2 * T) * -T * R * T
is doing what you want.
EDIT: With respect to efficiency, it totally depends on usage. No, this is not 'great' -- usually you have more information about matrix that will allow you to do this in a less round-about way. E.g., if the matrix is constructed from S * R2 * T above, obviously we could simply have changed the way the matrix is constructed in the first place -- S * R2 * R * T, injecting the new rotation where it should be without having to 'undo' anything.
But unless you are doing this in real time on 10K+ matrices that need to be recomputed each time, it should not be a problem.
If matrix comes from an unknown source and you need to modify it ex post facto, then indeed there really isn't any other choice.
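For completeness, here is a minimal C++/GLM sketch of such a function (the name rotateAboutSelf is mine; note that GLM uses column vectors, so the multiplication order is reversed relative to the row-vector notation used in this question):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Rotate `matrix` about its own translational origin P.
// With column vectors this is T * R * T^-1 * matrix, the mirror
// image of matrix * -T * R * T in row-vector notation.
glm::mat4 rotateAboutSelf(const glm::mat4& matrix, const glm::vec3& axis, float angle)
{
    glm::vec3 p(matrix[3]);                                   // current origin P
    glm::mat4 T    = glm::translate(glm::mat4(1.0f),  p);     // re-apply P
    glm::mat4 invT = glm::translate(glm::mat4(1.0f), -p);     // move P to the origin
    glm::mat4 R    = glm::rotate(glm::mat4(1.0f), angle, axis);
    return T * R * invT * matrix;
}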
In general, a transformation matrix (OpenGL/GLSL/GLM) is defined like this:
mat4 m44 = mat4(
vec4( Xx, Xy, Xz, 0.0), // x-axis
vec4( Yx, Yy, Yz, 0.0), // y-axis
vec4( Zx, Zy, Zz, 0.0), // z-axis
vec4( Tx, Ty, Tz, 1.0) ); // translation
A translation matrix looks like this:
mat4 translate = mat4(
vec4( 1.0, 0.0, 0.0, 0.0),
vec4( 0.0, 1.0, 0.0, 0.0),
vec4( 0.0, 0.0, 1.0, 0.0),
vec4( Tx, Ty, Tz, 1.0) );
And a rotation matrix (e.g. around Y-Axis) looks like this:
float angle;
mat4 rotate = mat4(
vec4( cos(angle), 0, sin(angle), 0 ),
vec4( 0, 1, 0, 0 ),
vec4( -sin(angle), 0, cos(angle), 0 ),
vec4( 0, 0, 0, 1 ) );
A matrix multiplication C = A * B works like this:
mat4 A, B, C;
// C = A * B
for ( int k = 0; k < 4; ++ k )
for ( int j = 0; j < 4; ++ j )
C[k][j] = A[0][j] * B[k][0] + A[1][j] * B[k][1] + A[2][j] * B[k][2] + A[3][j] * B[k][3];
This means that the result of translate * rotate is:
mat4 m = mat4(
vec4( cos(angle), 0, sin(angle), 0 ),
vec4( 0, 1, 0, 0 ),
vec4( -sin(angle), 0, cos(angle), 0 ),
vec4( Tx, Ty, Tz, 1 ) );
This means that if you want to rotate a matrix M around its own origin, you have to split the matrix into an "orientation" matrix and a "translation" matrix: rotate the orientation matrix, then restore the translation:
mat4 M, R;
float Tx = M[3][0];
float Ty = M[3][1];
float Tz = M[3][2];
M[3][0] = 0.0; M[3][1] = 0.0; M[3][2] = 0.0;
mat4 MR = R * M;
MR[3][0] = Tx; MR[3][1] = Ty; MR[3][2] = Tz;
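Packaged with the signature asked for in the question, a C++/GLM sketch of this split-rotate-restore approach could look like this (glm::rotate builds R from the axis and angle; the rest is the column save/restore shown above):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 rotate(glm::mat4 M, glm::vec3 axis, float angle)
{
    glm::mat4 R = glm::rotate(glm::mat4(1.0f), angle, axis);
    glm::vec4 T = M[3];                        // save the translation column
    M[3] = glm::vec4(0.0f, 0.0f, 0.0f, 1.0f);  // strip the translation
    glm::mat4 MR = R * M;                      // rotate the orientation part
    MR[3] = T;                                 // restore the translation
    return MR;
}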
I tried scaling an orthographic projection matrix, but it doesn't seem to scale its dimensions. I am using an orthographic projection for directional lighting, but if the main camera is above the ground I want to make my ortho matrix range bigger.
For example, if the camera is at zero, the ortho matrix is
glm::ortho(-10.0f , 10.0f , -10.0f , 10.0f , 0.01f, 4000.0f)
but if the camera moves to 400 in the y direction, I want this matrix to be like
glm::ortho(-410.0f , 410.0f , -410.0f , 410.0f , 0.01f, 4000.0f)
but I want to do this in the shader, with matrix multiplication or addition.
The Orthographic Projection matrix can be computed by:
r = right, l = left, b = bottom, t = top, n = near, f = far
x:  2/(r-l)       0             0             0
y:  0             2/(t-b)       0             0
z:  0             0            -2/(f-n)       0
t: -(r+l)/(r-l)  -(t+b)/(t-b)  -(f+n)/(f-n)   1
In GLSL, for instance:
float l = -410.0f;
float r = 410.0f;
float b = -410.0f;
float t = 410.0f;
float n = 0.01f;
float f = 4000.0f;
mat4 projection = mat4(
vec4(2.0/(r-l), 0.0, 0.0, 0.0),
vec4(0.0, 2.0/(t-b), 0.0, 0.0),
vec4(0.0, 0.0, -2.0/(f-n), 0.0),
vec4(-(r+l)/(r-l), -(t+b)/(t-b), -(f+n)/(f-n), 1.0)
);
Furthermore you can scale the orthographic projection matrix:
C++:
glm::mat4 ortho = glm::ortho(-10.0f, 10.0f, -10.0f, 10.0f, 0.01f, 4000.0f);
float scale = 10.0f / 410.0f;
glm::mat4 scale_mat = glm::scale(glm::mat4(1.0f), glm::vec3(scale, scale, 1.0f));
glm::mat4 ortho_new = ortho * scale_mat;
GLSL:
float scale = 10.0 / 410.0;
mat4 scale_mat = mat4(
vec4(scale, 0.0, 0.0, 0.0),
vec4(0.0, scale, 0.0, 0.0),
vec4(0.0, 0.0, 1.0, 0.0),
vec4(0.0, 0.0, 0.0, 1.0)
);
mat4 ortho_new = ortho * scale_mat;
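As a quick sanity check (up to floating-point precision), the scaled matrix should match one built directly with the larger bounds; this works here because the bounds are symmetric, so all the translation terms are zero:

glm::mat4 direct = glm::ortho(-410.0f, 410.0f, -410.0f, 410.0f, 0.01f, 4000.0f);
// direct[0][0] == 2/820 == (2/20) * (10/410) == ortho_new[0][0],
// and likewise for the y component; the z column is untouched
// because the scale matrix has 1.0 in z.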
I am having trouble implementing ray picking in a fragment shader. I understand I must start from my mouse coordinates, but I am not sure what to multiply my origin by.
I have tried creating a 3-component vector, with x and y as my mouse coordinates divided by my resolution, and for z I have tried using my p (the point in space, calculated as rayOrigin + rayDirection * t), with no luck.
Here is a Shadertoy that tries what I am looking for.
float ray( vec3 ro, vec3 rd, out float d )
{
float t = 0.0; d = 0.0;
for( int i = 0; i < STEPS; ++i )
{
vec3 p = ro + rd * t;
d = map( p );
if( d < EPS || t > FAR ) break;
t += d;
}
return t;
}
vec3 shad( vec3 ro, vec3 rd, vec2 uv )
{
float t = 0.0, d = 0.0;
t = ray( ro, rd, d );
float x = ( 2.0 * iMouse.x ) / iResolution.x - 1.0;
float y = 1.0 - ( 2.0 * iMouse.y ) / iResolution.y;
float z = 1.0;
vec3 p = ro + rd * t;
vec3 n = nor( p );
vec3 lig = ( vec3( x, -y, z ) );
lig += ro + rd;
lig = normalize( lig );
vec3 ref = reflect( rd, n );
float amb = 0.5 + 0.5 * n.y;
float dif = max( 0.0, dot( n, lig ) );
float spe = pow( clamp( dot( ref, lig ), 0.0, 1.0 ), 16.0 );
vec3 col = vec3( 0 );
col += 0.1 * amb;
col += 0.2 * dif;
col += spe;
return col;
}
I expect to get a light that moves as if I were shooting a ray from my mouse coordinates at the SDF.
This is the correct code:
// Our sphere-tracing algorithm.
float ray( vec3 ro, vec3 rd, out float d )
{
float t = 0.0; d = 0.0;
for( int i = 0; i < STEPS; ++i )
{
vec3 p = ro + rd * t;
d = map( p );
if( d < EPS || t > FAR ) break;
t += d;
}
return t;
}
// Here we compute all our lighting calculations.
vec3 shad( vec3 ro, vec3 rd, vec2 uv )
{
float t = 0.0, d = 0.0;
t = ray( ro, rd, d );
vec3 p = ro + rd * t;
vec3 n = nor( p );
// The values of the variable lig are not random: they are the same as our rayOrigin for the sphere-tracing algorithm, which goes in main's body.
vec3 lig = ( vec3( 0, 0, 2 ) );
// Here is where we "shoot" our ray from the mouse position. Our ray's origin.
vec2 uvl = ( -iResolution.xy + 2.0 * iMouse.xy ) / iResolution.y;
// This is our ray's direction.
vec3 lir = normalize( vec3( uvl, -1 ) );
// Here we get our SDF (dO) and our incrementing value (tO).
float dO = 0.0, tO = ray( lig, lir, dO );
// Now we update our vector with the direction and incrementing steps.
lig += lir * tO;
// We must normalize lig as it is used as a direction; a non-unit magnitude skews the lighting calculations.
lig = normalize( lig );
vec3 ref = reflect( rd, n );
float amb = 0.5 + 0.5 * n.y;
float dif = max( 0.0, dot( n, lig ) );
float spe = pow( clamp( dot( ref, lig ), 0.0, 1.0 ), 16.0 );
vec3 col = vec3( 0 );
col += 0.1 * amb;
col += 0.2 * dif;
col += spe;
return col;
}
// Last step, here we create the origin and direction of our rays that we shoot against the SDF.
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
//Normalize the coordinates.
vec2 uv = ( -iResolution.xy + 2.0 * fragCoord.xy ) / iResolution.y;
// This is our ray's origin. We must use the same values for our lig's origin.
vec3 ro = vec3( 0, 0, 2 );
// This is our ray's direction.
vec3 rd = normalize( vec3( uv, -1 ) );
// Our SDF(d) and our incrementing steps(t), we only need our SDF(d) to bail the shading calculations according to our epsilon(EPS).
float t = 0.0, d = 0.0;
t = ray( ro, rd, d );
vec3 col = d < EPS ? shad( ro, rd, uv ) : vec3( 0 );
fragColor = vec4( col, 1 );
}
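For reference, the same mouse-to-ray mapping can be mirrored on the CPU side; a minimal C++/GLM sketch (the function name is mine) reproducing the uvl/lir lines from the shader:

#include <glm/glm.hpp>

// Maps a mouse position in pixels to a view-space ray direction,
// exactly as the shader computes uvl and lir above.
glm::vec3 mouseRayDir(glm::vec2 mouse, glm::vec2 resolution)
{
    glm::vec2 uvl = (-resolution + 2.0f * mouse) / resolution.y;
    return glm::normalize(glm::vec3(uvl, -1.0f));
}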
GLM matrices don't seem to work without transpose
glm::mat4 proj = glm::ortho(0.0f,960.0f,0.0f,540.0f,-1.0f, 1.0f);
GL_TRUE has to be set:
glUniformMatrix4fv(GetUniformLocation(name), 1, GL_TRUE, &matrix[0][0]);
Isn't GLM already supposed to be in column-major form?
If you don't want to transpose the matrix, then the vector has to be multiplied by the matrix from the right in the shader code:
mat4 transformation;
vec4 vertexPosition;
gl_Position = transformation * vertexPosition;
Explanation:
See GLSL Programming/Vector and Matrix Operations:
Furthermore, the *-operator can be used for matrix-vector products of the corresponding dimension, e.g.:
vec2 v = vec2(10., 20.);
mat2 m = mat2(1., 2., 3., 4.);
vec2 w = m * v; // = vec2(1. * 10. + 3. * 20., 2. * 10. + 4. * 20.)
Note that the vector has to be multiplied by the matrix from the right.
Thus, multiplying a vector from the left of a matrix corresponds to multiplying it from the right of the transposed matrix:
vec2 v = vec2(10., 20.);
mat2 m = mat2(1., 2., 3., 4.);
vec2 w = v * m; // = vec2(1. * 10. + 2. * 20., 3. * 10. + 4. * 20.)
This means:
If a matrix is defined like this:
mat4 m44 = mat4(
vec4( Xx, Xy, Xz, 0.0),
vec4( Yx, Yy, Yz, 0.0),
vec4( Zx, Zy, Zz, 0.0),
vec4( Tx, Ty, Tz, 1.0) );
And the matrix uniform mat4 transformation is set like this (see glUniformMatrix4fv):
glUniformMatrix4fv( ...., 1, GL_FALSE, &(m44[0][0]) );
Then the vector has to be multiplied by the matrix from the right:
gl_Position = transformation * vertexPosition;
But of course, the matrix can be set up transposed:
mat4 m44 = mat4(
vec4( Xx, Yx, Zx, Tx),
vec4( Xy, Yy, Zy, Ty),
vec4( Xz, Yz, Zz, Tz),
vec4( 0.0, 0.0, 0.0, 1.0) );
Or it can be transposed when set to the uniform variable:
glUniformMatrix4fv( ...., 1, GL_TRUE, &(m44[0][0]) );
Then the vector has to be multiplied by the matrix from the left:
gl_Position = vertexPosition * transformation;
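Putting the two options side by side, a minimal C++ sketch (assuming an existing GLuint program object and a glm::mat4 m44):

GLint loc = glGetUniformLocation(program, "transformation");

// Option 1: upload as-is (column-major); in GLSL multiply from the right:
//   gl_Position = transformation * vertexPosition;
glUniformMatrix4fv(loc, 1, GL_FALSE, &m44[0][0]);

// Option 2: let GL transpose on upload; in GLSL multiply from the left:
//   gl_Position = vertexPosition * transformation;
glUniformMatrix4fv(loc, 1, GL_TRUE, &m44[0][0]);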
Note that the GLM API documentation refers to The OpenGL Shading Language specification 4.20.
I am trying to create a procedural water puddle in WebGL, with "water ripples" created by vertex displacement.
The problem I'm having is that I get a noise I can't explain.
Below is the first-pass vertex shader, where I calculate the vertex positions that I later render to a texture, which I then use in the second pass.
void main() {
float damping = 0.5;
vNormal = normal;
// wave radius
float timemod = 0.55;
float ttime = mod(time , timemod);
float frequency = 2.0*PI/waveWidth;
float phase = frequency * 0.21;
vec4 v = vec4(position,1.0);
// Loop through array of start positions
for(int i = 0; i < 200; i++){
float cCenterX = ripplePos[i].x;
float cCenterY = ripplePos[i].y;
vec2 center = vec2(cCenterX, cCenterY) ;
if(center.x == 0.0 && center.y == 0.0)
center = normalize(center);
// wave width
float tolerance = 0.005;
radius = sqrt(pow( uv.x - center.x , 2.0) + pow( uv.y -center.y, 2.0));
// Creating a ripple
float w_height = (tolerance - (min(tolerance,pow(ripplePos[i].z-radius*10.0,2.0)) )) * (1.0-ripplePos[i].z/timemod) *5.82;
// -2.07 at the end keeps the plane at the right height. Trial-and-error solution.
v.z += waveHeight*(1.0+w_height/tolerance) / 2.0 - 2.07;
vNormal = normal+v.z;
}
vPosition = v.xyz;
gl_Position = projectionMatrix * modelViewMatrix * v;
}
And here is the first-pass fragment shader that writes to the texture:
void main()
{
vec3 p = normalize(vPosition);
p.x = (p.x+1.0)*0.5;
p.y = (p.y+1.0)*0.5;
gl_FragColor = vec4( normalize(p), 1.0);
}
The second vertex shader is a standard passthrough.
The second-pass fragment shader is where I try to calculate the normals to be used for lighting calculations.
void main() {
float w = 1.0 / 200.0;
float h = 1.0 / 200.0;
// Nearest neighbours
vec3 p0 = texture2D(rttTexture, vUV).xyz;
vec3 p1 = texture2D(rttTexture, vUV + vec2(-w, 0)).xyz;
vec3 p2 = texture2D(rttTexture, vUV + vec2( w, 0)).xyz;
vec3 p3 = texture2D(rttTexture, vUV + vec2( 0, h)).xyz;
vec3 p4 = texture2D(rttTexture, vUV + vec2( 0, -h)).xyz;
vec3 nVec1 = p2 - p0;
vec3 nVec2 = p3 - p0;
vec3 vNormal = cross(nVec1, nVec2);
vec3 N = normalize(vNormal);
float theZ = texture2D(rttTexture, vUV).r;
//gl_FragColor = vec4(1.,.0,1.,1.);
//gl_FragColor = texture2D(tDiffuse, vUV);
gl_FragColor = vec4(vec3(N), 1.0);
}
The result is this:
The image displays the normal map; the noise I'm referring to is the inconsistency of the blue.
Here is a live demonstration:
http://oskarhavsvik.se/jonasgerling_water_ripple/waterRTT-clean.html
I appreciate any tips and pointers, not only fixes for this problem but for the code in general; I'm here to learn.
After a brief look, it seems like your problem is in storing the x/y positions.
gl_FragColor = vec4(vec3(p0*0.5+0.5), 1.0);
You don't need to store them anyway, because the texel position implicitly gives the x/y value. Just change your normal points to something like this...
vec3 p2 = vec3(1, 0, texture2D(rttTexture, vUV + vec2(w, 0)).z);
Rather than 1, 0 you will want to use a scale appropriate to the size of your displayed quad relative to the wave height. Anyway, the result now looks like this.
The height/z seems to be scaled by distance from the centre, so I went looking for a normalize() and removed it...
vec3 p = vPosition;
gl_FragColor = vec4(p*0.5+0.5, 1.0);
The normals now look like this...
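For reference, the central-difference idea behind those neighbour samples can be written as a small helper; a C++/GLM sketch of the same math (the function name and the texel-size parameter are assumptions):

#include <glm/glm.hpp>

// Builds a normal from four neighbouring height samples using central
// differences; texelWorld is the world-space distance between texels.
glm::vec3 normalFromHeights(float left, float right,
                            float down, float up, float texelWorld)
{
    glm::vec3 tx(2.0f * texelWorld, 0.0f, right - left); // tangent along x
    glm::vec3 ty(0.0f, 2.0f * texelWorld, up - down);    // tangent along y
    return glm::normalize(glm::cross(tx, ty));
}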
I am making an OpenGL C++ application that tracks the user's location in relation to the screen and then updates the rendered scene to the perspective of the user. This is known as "desktop VR", or you can think of the screen as a diorama or fish tank. I am rather new to OpenGL and have only defined a very simple scene thus far, just a cube, and it is initially rendered correctly.
The problem is that when I start moving and want to rerender the cube scene, the projection plane seems translated and I don't see what I think I should. I want this plane fixed. If I were writing a ray tracer, my window would always be fixed, but my eye is allowed to wander. Can someone please explain to me how I can achieve the effect I desire (pinning the viewing window) while having my camera/eye wander at a non-origin coordinate?
All of the examples I find demand that the camera/eye be at the origin, but this is not conceptually convenient for me. Also, because this is a "fish tank", I am setting my d_near to be the xy-plane, where z = 0.
In screen/world space, I assign the center of the screen to (0,0,0) and its 4 corners to:
TL(-44.25, 25, 0)
TR( 44.25, 25, 0)
BR( 44.25,-25, 0)
BL(-44.25,-25, 0)
These values are in cm for a 16x9 display.
I then calculate the user's eye (actually a web cam on my face) using POSIT to be usually somewhere in the range of (+/-40, +/-40, 40-250). My POSIT method is accurate.
I am defining my own matrices for the perspective and viewing transforms and using shaders.
I initialize as follows:
float right = 44.25;
float left = -44.25;
float top = 25.00;
float bottom = -25.00;
vec3 eye = vec3(0.0, 0.0, 100.0);
vec3 view_dir = vec3(0.0, 0.0, -1.0);
vec3 up = vec3(0.0, 1.0, 0.0);
vec3 n = normalize(-view_dir);
vec3 u = normalize(cross(up, n));
vec3 v = normalize(cross(n, u));
float d_x = -(dot(eye, u));
float d_y = -(dot(eye, v));
float d_z = -(dot(eye, n));
float d_near = eye.z;
float d_far = d_near + 50;
// perspective transform matrix
mat4 P = mat4((2.0*d_near)/(right-left ), 0, (right+left)/(right-left), 0,
0, (2.0*d_near)/(top-bottom), (top+bottom)/(top-bottom), 0,
0, 0, -(d_far+d_near)/(d_far-d_near), -(2.0*d_far*d_near)/(d_far-d_near),
0, 0, -1.0, 0);
// viewing transform matrix
mat4 V = mat4(u.x, u.y, u.z, d_x,
v.x, v.y, v.z, d_y,
n.x, n.y, n.z, d_z,
0.0, 0.0, 0.0, 1.0);
mat4 MV = C * V;
//MV = V;
From what I gather looking on the web, my view_dir and up are to remain fixed. This means that I need only to update d_near and d_far as well as d_x, d_y, and d_z? I do this in my glutIdleFunc( idle );
void idle (void) {
hBuffer->getFrame(hFrame);
if (hFrame->goodH && hFrame->timeStamp != timeStamp) {
timeStamp = hFrame->timeStamp;
std::cout << "(" << hFrame->eye.x << ", " <<
hFrame->eye.y << ", " <<
hFrame->eye.z << ") \n";
eye = vec3(hFrame->eye.x, hFrame->eye.y, hFrame->eye.z);
d_near = eye.z;
d_far = eye.z + 50;
P = mat4((2.0*d_near)/(right-left), 0, (right+left)/(right-left), 0,
0, (2.0*d_near)/(top-bottom), (top+bottom)/(top-bottom), 0,
0, 0, -(d_far+d_near)/(d_far-d_near), -(2.0*d_far*d_near)/(d_far-d_near),
0, 0, -1.0, 0);
d_x = -(dot(eye, u));
d_y = -(dot(eye, v));
d_z = -(dot(eye, n));
C = mat4(1.0, 0.0, 0.0, eye.x,
0.0, 1.0, 0.0, eye.y,
0.0, 0.0, 1.0, 0.0,
0.0, 0.0, 0.0, 1.0);
V = mat4(u.x, u.y, u.z, d_x,
v.x, v.y, v.z, d_y,
n.x, n.y, n.z, d_z,
0.0, 0.0, 0.0, 1.0);
MV = C * V;
//MV = V;
glutPostRedisplay();
}
}
Here is my shader code:
#version 150
uniform mat4 MV;
uniform mat4 P;
in vec4 vPosition;
in vec4 vColor;
out vec4 color;
void
main()
{
gl_Position = P * MV * vPosition;
color = vColor;
}
Ok, I made some changes to my code, but without success. When I use V in place of MV in the vertex shader, everything looks as I want it to, the perspective is correct and the objects are the right size, however, the scene is translated by the displacement of the camera.
When using C and V to obtain MV, my scene is rendered from the perspective of an observer straight on and the rendered scene fills the window as it should, but the perspective of the eye/camera is lost.
Really, what I want is to translate the 2D pixels, the projection plane, by the appropriate x and y values of the eye/camera, so as to keep the center of the object (whose xy center is (0,0)) in the center of the rendered image. I am guided by the examples in the textbook "Interactive Computer Graphics: A Top-Down Approach with Shader-Based OpenGL (6th Edition)". Using the files paired with the book, freely available on the web, I am continuing with the row-major approach.
The following images are taken when not using the matrix C to create MV. When I do use C to create MV, all scenes look like the first image below. I desire no translation in z and so I leave that as 0.
Because the projection plane and my camera plane are parallel, the conversion from one to the other coordinate system is simply a translation and inv(T) ~ -T.
Here is my image for the eye at (0,0,50):
Here is my image for the eye at (56,-16,50):
The solution is to update d_x, d_y, and d_z, accounting for the new eye position, but to never change u, v, or n. In addition, one must update the matrix P with new values for left, right, top and bottom, as they relate to the new position of the camera/eye.
I initialize with this:
float screen_right = 44.25;
float screen_left = -44.25;
float screen_top = 25.00;
float screen_bottom = -25.00;
float screen_front = 0.0;
float screen_back = -150;
And now my idle function looks like this, note the calculations for top, bottom, right, and left:
void idle (void) {
hBuffer->getFrame(&hFrame);
if (hFrame.goodH && hFrame.timeStamp != timeStamp) {
timeStamp = hFrame.timeStamp;
//std::cout << "(" << hFrame.eye.x << ", " <<
// hFrame.eye.y << ", " <<
// hFrame.eye.z << ") \n";
eye = vec3(hFrame.eye.x, hFrame.eye.y, hFrame.eye.z);
d_near = eye.z;
d_far = eye.z + abs(screen_back) + 1;
float top = screen_top - eye.y;
float bottom = top - 2*screen_top;
float right = screen_right - eye.x;
float left = right - 2*screen_right;
P = mat4((2.0 * d_near)/(right - left ), 0.0, (right + left)/(right - left), 0.0,
0.0, (2.0 * d_near)/(top - bottom), (top + bottom)/(top - bottom), 0.0,
0.0, 0.0, -(d_far + d_near)/(d_far - d_near), -(2.0 * d_far * d_near)/(d_far - d_near),
0.0, 0.0, -1.0, 0.0);
d_x = -(dot(eye, u));
d_y = -(dot(eye, v));
d_z = -(dot(eye, n));
V = mat4(u.x, u.y, u.z, d_x,
v.x, v.y, v.z, d_y,
n.x, n.y, n.z, d_z,
0.0, 0.0, 0.0, 1.0);
glutPostRedisplay();
}
}
This redefinition of the perspective matrix keeps the rendered image from translating. I still have camera-capture and screen-grabbing synchronization issues, but I am able to create images like the following, updated in real time for the position of the user:
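If you use GLM (which is column-major, unlike the row-major book code above), the same off-axis frustum can be built with glm::frustum; a minimal sketch of the idle-function math:

#include <cmath>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Off-axis projection for an eye in front of a screen spanning
// [-screen_right, screen_right] x [-screen_top, screen_top] at z = 0,
// as defined in the question.
glm::mat4 offAxisProjection(const glm::vec3& eye, float screen_right,
                            float screen_top, float screen_back)
{
    float d_near = eye.z;
    float d_far  = eye.z + std::abs(screen_back) + 1.0f;
    float top    = screen_top   - eye.y;
    float bottom = top   - 2.0f * screen_top;
    float right  = screen_right - eye.x;
    float left   = right - 2.0f * screen_right;
    return glm::frustum(left, right, bottom, top, d_near, d_far);
}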
All of the examples I find demand that the camera/eye be at the origin
This is not a demand. OpenGL doesn't define a camera; it all boils down to transformations. Those are usually called projection and modelview (model and view transform combined). The camera, i.e. the view transform, is simply the inverse of the positioning of the camera in world space. So say you built yourself a matrix for your camera, C; then you'd premultiply its inverse on the modelview.
mat4 MV = inv(C) * V
On a side note, your shader code is wrong:
gl_Position = P * ( (V * vPosition ) / vPosition.w );
^^^^^^^^^^^^^^
The homogeneous divide is not to be done in the shader, as it is hardwired into the render pipeline.
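Remove the divide and simply write gl_Position = P * MV * vPosition; the pipeline performs clipping and the perspective divide after the vertex shader runs.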