I have a basic square. How can I rotate it?
let vertices = vec![
x, y, uv1.0, uv1.1, layer, // top left
// ---
x + w, y, uv2.0, uv2.1, layer, // top right
// ---
x + w, y + h, uv3.0, uv3.1, layer, // bottom right
// ---
x, y + h, uv4.0, uv4.1, layer // bottom left
];
This is my orthographic projection matrix.
let c_str_vert = CString::new("MVP".as_bytes()).unwrap();
let modelLoc = gl::GetUniformLocation(shaderProgram, c_str_vert.as_ptr());
let model = cgmath::ortho(0.0, SCR_WIDTH as f32, SCR_HEIGHT as f32, 0.0, -1.0, 1.0);
gl::UniformMatrix4fv(modelLoc, 1, gl::FALSE, model.as_ptr());
#version 430 core
layout(location = 0) in vec2 position;
// ...
uniform mat4 MVP;
void main() {
gl_Position = MVP * vec4(position.x, position.y, 0.0, 1.0);
// ...
}
I have a lot of squares but I don't want to rotate them all.
The width and height are 100px; how can I turn my square to make it look like this?
I know that I can achieve this by using transformation matrices. I did it in web development while working on SVG, but I have no idea how I can get this working in OpenGL.
I would like to know how I can multiply a matrix and a vector in OpenGL.
You are already multiplying a matrix with a vector:
gl_Position = MVP * vec4(position.x, position.y, 0.0, 1.0);
All you have to do is multiply your MVP matrix with your rotation matrix and then with your vector:
gl_Position = MVP * rotationMatrix * vec4(position.x, position.y, 0.0, 1.0);
The rotation matrix should also be a 4x4 matrix.
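For rotating a square around its own center, and only for the squares you pick, a common approach is to build a per-square rotation matrix on the CPU and upload it as its own uniform (identity for the squares that should stay put). A minimal sketch in C++ with GLM; nalgebra-glm mirrors these translate/rotate calls, as the follow-up code below shows, and "center" and "angleRadians" are values you supply per square:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Rotate around the square's center: effectively move the pivot to the origin,
// rotate around the Z axis, then move the pivot back.
glm::mat4 squareRotation(glm::vec2 center, float angleRadians)
{
    glm::mat4 m(1.0f);
    m = glm::translate(m, glm::vec3(center, 0.0f));
    m = glm::rotate(m, angleRadians, glm::vec3(0.0f, 0.0f, 1.0f));
    m = glm::translate(m, glm::vec3(-center, 0.0f));
    return m;
}

For the 100px square above, center would be (x + 50, y + 50). The result is then used exactly as in the shader line above, i.e. MVP * rotationMatrix * vertex.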
I tried the nalgebra and nalgebra-glm libraries; the cgmath API confuses me.
let mut ortho = glm::ortho(0.0, SCR_WIDTH as f32, SCR_HEIGHT as f32, 0.0, -1.0, 1.0);
ortho = glm::translate(&ortho, &glm::vec3(0.0, 50.0, 0.0));
let rad = 30.0 * (PI / 180.0); // 30 degrees in radians
ortho = glm::rotate(&ortho, rad, &glm::vec3(0.0, 0.0, 1.0));
ortho = glm::translate(&ortho, &glm::vec3(0.0, -50.0, 0.0) );
Thank you for all the answers
Related
I want to move from basic shadow mapping on to adaptive biased shadow mapping.
I found a paper which describes how to do it, but I am not sure how to achieve a certain step in the process:
The idea is to have a plane P (defined by the current fragment's surface normal in the fragment shader stage) and the world space position of the fragment (F1 in the picture above).
In order to calculate the correct bias (to fight shadow acne) I need to find the world space position of F2, which I can get by shooting a ray from the light source through the center of the shadow map texel. This ray eventually hits the plane P, which yields the needed point F2.
With F1 and F2 now known, I can then calculate the distance between F1 and F2 along the light ray (I guess) and thus get the ideal bias to fight shadow acne.
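For reference, the ray/plane intersection that produces F2 is a single formula once the plane is written as dot(n, X) = dot(n, F1): with L the light position and dir the normalized ray direction from L through the texel center, F2 = L + t * dir where t = (dot(n, F1) - dot(n, L)) / dot(n, dir). A minimal CPU-side sketch (C++/GLM, names are mine, just to illustrate the step):

#include <glm/glm.hpp>

// Intersect the ray "lightPos + t * rayDir" with the plane through F1 with normal n.
// rayDir is the normalized direction from the light through the texel center.
glm::vec3 intersectRayWithFragmentPlane(glm::vec3 lightPos, glm::vec3 rayDir,
                                        glm::vec3 n, glm::vec3 F1)
{
    float d = glm::dot(n, F1);                                   // plane constant
    float t = (d - glm::dot(n, lightPos)) / glm::dot(n, rayDir); // distance along the ray
    return lightPos + rayDir * t;                                // = F2
}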
Right now my basic shader code looks like this:
Vertex shader:
in vec3 aLocalObjectPos;
uniform mat4 modelMatrix; // model -> world transform
uniform mat4 viewProjShadowMap; // light's view-projection matrix
out vec4 vShadowCoord;
out vec3 vF1;
// to shift the coordinates from [-1;1] to [0;1]
const mat4 biasMatrix = mat4(
0.5, 0.0, 0.0, 0.0,
0.0, 0.5, 0.0, 0.0,
0.0, 0.0, 0.5, 0.0,
0.5, 0.5, 0.5, 1.0
);
void main()
{
// get the vertex position in the light's view space:
vShadowCoord = (biasMatrix * viewProjShadowMap * modelMatrix) * vec4(aLocalObjectPos, 1.0);
vF1 = (modelMatrix * vec4(aLocalObjectPos, 1.0)).xyz;
// (gl_Position is set as usual; omitted here)
}
Helper method in fragment shader:
uniform sampler2DShadow uTextureShadowMap;
float calculateShadow(float bias)
{
// vShadowCoord is a shader input, so apply the bias to a local copy:
vec4 shadowCoord = vShadowCoord;
shadowCoord.z -= bias;
return textureProjOffset(uTextureShadowMap, shadowCoord, ivec2(0, 0));
}
My problem now is:
How do I get the light ray that goes from the light source through the shadow map's texel center?
I already found this topic: Adaptive Depth Bias for Shadow Maps Ray Casting
Unfortunately there is no answer and I don't quite get all the things the author is talking about :-/
So, I think I have figured it out myself. I followed the directions in this paper:
http://cwyman.org/papers/i3d14_adaptiveBias.pdf
Vertex Shader (not much going on there):
const mat4 biasMatrix = mat4(
0.5, 0.0, 0.0, 0.0,
0.0, 0.5, 0.0, 0.0,
0.0, 0.0, 0.5, 0.0,
0.5, 0.5, 0.5, 1.0
);
in vec4 aPosition; // vertex in model's local space (not modified in any way)
uniform mat4 uVPShadowMap; // light's view-projection matrix
uniform mat4 uModelMatrix; // model matrix of the rendered object
out vec4 vShadowCoord;
out vec3 vFragmentWorldSpace; // fragment position in world space, used by the fragment shader below
void main()
{
// ...
vShadowCoord = (biasMatrix * uVPShadowMap * uModelMatrix) * aPosition;
vFragmentWorldSpace = (uModelMatrix * aPosition).xyz;
// ...
}
Fragment Shader:
#version 450
in vec3 vFragmentWorldSpace; // fragment position in World space
in vec4 vShadowCoord; // texture coordinates for shadow map lookup (see vertex shader)
uniform sampler2DShadow uTextureShadowMap;
uniform vec4 uLightPosition; // Light's position in world space
uniform vec2 uLightNearFar; // Light's zNear and zFar values
uniform float uK; // variable offset factor to tweak the computed bias a little bit
uniform mat4 uVPShadowMap; // light's view-projection matrix
const vec4 corners[2] = vec4[]( // frustum diagonal points in light's view space normalized [-1;+1]
vec4(-1.0, -1.0, -1.0, 1.0), // left bottom near
vec4( 1.0, 1.0, 1.0, 1.0) // right top far
);
float calculateShadowIntensity(vec3 fragmentNormal)
{
// get fragment's position in light space:
vec4 fragmentLightSpace = uVPShadowMap * vec4(vFragmentWorldSpace, 1.0);
vec3 fragmentLightSpaceNormalized = fragmentLightSpace.xyz / fragmentLightSpace.w; // range [-1;+1]
vec3 fragmentLightSpaceNormalizedUV = fragmentLightSpaceNormalized * 0.5 + vec3(0.5, 0.5, 0.5); // range [ 0; 1]
// get shadow map's texture size:
ivec2 textureDimensions = textureSize(uTextureShadowMap, 0);
vec2 delta = vec2(textureDimensions.x, textureDimensions.y);
// get width of every texel:
vec2 textureStep = vec2(1.0 / textureDimensions.x, 1.0 / textureDimensions.y);
// get the UV coordinates of the texel center:
vec2 fragmentLightSpaceUVScaled = fragmentLightSpaceNormalizedUV.xy * delta;
vec2 texelCenterUV = floor(fragmentLightSpaceUVScaled) * textureStep + textureStep / 2;
// convert range for texel center in light space in range [-1;+1]:
vec2 texelCenterLightSpaceNormalized = 2.0 * texelCenterUV - vec2(1.0, 1.0);
// recreate light ray in world space:
vec4 recreatedVec4 = vec4(texelCenterLightSpaceNormalized.x, texelCenterLightSpaceNormalized.y, -uLightNearFar.x, 1.0);
mat4 vpShadowMapInversed = inverse(uVPShadowMap);
vec4 texelCenterWorldSpace = vpShadowMapInversed * recreatedVec4;
vec3 lightRayNormalized = normalize(texelCenterWorldSpace.xyz - uLightPosition.xyz);
// compute scene scale for epsilon computation:
vec4 frustum1 = vpShadowMapInversed * corners[0];
frustum1 = frustum1 / frustum1.w;
vec4 frustum2 = vpShadowMapInversed * corners[1];
frustum2 = frustum2 / frustum2.w;
float ln = uLightNearFar.x;
float lf = uLightNearFar.y;
// compute light ray intersection with fragment plane:
float dotLightRayFragmentNormal = dot(fragmentNormal, lightRayNormalized);
float d = dot(fragmentNormal, vFragmentWorldSpace);
float x = (d - dot(fragmentNormal, uLightPosition.xyz)) / dotLightRayFragmentNormal;
vec4 intersectionWorldSpace = vec4(uLightPosition.xyz + lightRayNormalized * x, 1.0);
// compute bias:
vec4 texelInLightSpace = uVPShadowMap * intersectionWorldSpace;
float intersectionDepthTexelCenterUV = (texelInLightSpace.z / texelInLightSpace.w) / 2.0 + 0.5;
float fragmentDepthLightSpaceUV = fragmentLightSpaceNormalizedUV.z;
float bias = intersectionDepthTexelCenterUV - fragmentDepthLightSpaceUV;
float depthCompressionResult = pow(lf - fragmentLightSpaceNormalizedUV.z * (lf - ln), 2.0) / (lf * ln * (lf - ln));
float epsilon = depthCompressionResult * length(frustum1.xyz - frustum2.xyz) * uK;
bias += epsilon;
vec4 shadowCoord = vShadowCoord;
shadowCoord.z -= bias;
float shadowValue = textureProj(uTextureShadowMap, shadowCoord);
return max(shadowValue, 0.0);
}
Please note that this is a very verbose method (you could optimise several steps, I know) to better explain what I did to make it work.
All my shadow casting lights use perspective projection.
I tested the results on the CPU side in a separate project (only C# with the math structs from the OpenTK package) and they seem reasonable. I used several light positions, texture sizes, etc. The bias values looked OK in all my tests. Of course, this is no proof, but I have a good feeling about this.
In the end:
The benefits were very small. The visual results are good (especially for shadow maps with >= 2048 samples per dimension), but I still had to tweak the offset value (uniform float uK in the fragment shader) for each of my scenes. I found values from 0.01 to 0.03 to deliver usable results.
I lost about 10% performance (fps-wise) compared to my previous approach (slope-scaled bias) and gained maybe 1% of visual fidelity when it comes to shadows (acne, peter panning). The 1% is not measured - only felt by me :-)
I wanted this approach to be the "one-solution-to-all-problems". But I guess, there is no "fire-and-forget" solution when it comes to shadow mapping ;-/
So the short of it is that I'm trying to switch from old OpenGL's glClipPlane function to gl_ClipDistance[0].
For glClipPlane I could intuitively do (pseudocode)
pushMatrix()
glRotatef(camera.xRot, 1,0,0)
glRotatef(camera.yRot + 180, 0,1,0)
glTranslate(x-camera.x, y-camera.y, z-camera.z)
glClipPlane(plane_equation)
popMatrix()
and this would translate the plane to the correct location, and face it in the right direction.
For the life of me I cannot get the plane to translate with GLSL - I've tried passing various model matrices, model/view matrices, and altering the plane equation, but no matter what I do the plane is attached to the "Camera" instead of being attached to the "object". As in, moving the camera also moves the portion of the object being clipped, which is less than ideal.
Here are some things I've tried in my vertex shader based on random google searches:
vec4 modelPos = ModelMat * vec4( Position, 1.0 );
gl_Position = ProjMat * ModelViewMat * vec4(Position, 1.0);
gl_ClipDistance[0] = dot(modelPos,uClipPlane);
or:
// vec4 modelPos = ModelMat * vec4( Position, 1.0 );
gl_Position = ProjMat * ModelViewMat * vec4(Position, 1.0);
gl_ClipDistance[0] = dot(gl_Position,uClipPlane);
or:
vec4 modelPos = ModelMat * vec4( Position, 1.0 );
gl_Position = ProjMat * ModelViewMat * vec4(Position, 1.0);
gl_ClipDistance[0] = dot(uClipPlane, ModelMat);
Is it just that I don't understand how to properly calculate the model matrix, or is there some obvious plane translation step that I'm missing that could solve my problems?
I tried scaling the orthographic projection matrix, but it seems it doesn't scale its dimensions. I am using an orthographic projection for directional lighting, but if the main camera is above the ground I want to make my ortho matrix range bigger.
For example, if the camera is at zero, the ortho matrix is
glm::ortho(-10.0f , 10.0f , -10.0f , 10.0f , 0.01f, 4000.0f)
but if the camera goes to 400 in the y direction, I want this matrix to be like
glm::ortho(-410.0f , 410.0f , -410.0f , 410.0f , 0.01f, 4000.0f)
But I want to do this in the shader, with matrix multiplication or addition.
The Orthographic Projection matrix can be computed by:
r = right, l = left, b = bottom, t = top, n = near, f = far
x:  2/(r-l)        0              0              0
y:  0              2/(t-b)        0              0
z:  0              0              -2/(f-n)       0
t:  -(r+l)/(r-l)   -(t+b)/(t-b)   -(f+n)/(f-n)   1
In GLSL, for instance:
float l = -410.0f;
float r = 410.0f;
float b = -410.0f;
float t = 410.0f;
float n = 0.01f;
float f = 4000.0f;
mat4 projection = mat4(
vec4(2.0/(r-l), 0.0, 0.0, 0.0),
vec4(0.0, 2.0/(t-b), 0.0, 0.0),
vec4(0.0, 0.0, -2.0/(f-n), 0.0),
vec4(-(r+l)/(r-l), -(t+b)/(t-b), -(f+n)/(f-n), 1.0)
);
Furthermore you can scale the orthographic projection matrix:
C++:
glm::mat4 ortho = glm::ortho(-10.0f, 10.0f, -10.0f, 10.0f, 0.01f, 4000.0f);
float scale = 10.0f / 410.0f;
glm::mat4 scale_mat = glm::scale(glm::mat4(1.0f), glm::vec3(scale, scale, 1.0f));
glm::mat4 ortho_new = ortho * scale_mat;
GLSL:
float scale = 10.0 / 410.0;
mat4 scale_mat = mat4(
vec4(scale, 0.0, 0.0, 0.0),
vec4(0.0, scale, 0.0, 0.0),
vec4(0.0, 0.0, 1.0, 0.0),
vec4(0.0, 0.0, 0.0, 1.0)
);
mat4 ortho_new = ortho * scale_mat;
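If the goal is to drive that scale from the camera height, as in the question (y = 0 -> ±10, y = 400 -> ±410), one way is to derive the scale factor from the height and multiply it onto the fixed base matrix. A sketch in C++ (the same multiplication can be done in the shader; cameraY here is an assumed input, the camera's height above the ground):

// Base projection covers ±10; widen it to ±(10 + cameraY) by scaling x and y.
float halfExtent = 10.0f + cameraY;       // ±10 at y = 0, ±410 at y = 400
float scale = 10.0f / halfExtent;
glm::mat4 scale_mat = glm::scale(glm::mat4(1.0f), glm::vec3(scale, scale, 1.0f));
glm::mat4 ortho_new = ortho * scale_mat;  // ortho: the fixed glm::ortho(-10, 10, -10, 10, 0.01, 4000) from above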
I am making an OpenGL C++ application that tracks the user's location in relation to the screen and then updates the rendered scene to the perspective of the user. This is known as "desktop VR", or you can think of the screen as a diorama or fish tank. I am rather new to OpenGL and have only defined a very simple scene thus far, just a cube, and it is initially rendered correctly.
The problem is that when I start moving and want to rerender the cube scene, the projection plane seems translated and I don't see what I think I should. I want this plane fixed. If I were writing a ray tracer, my window would always be fixed, but my eye is allowed to wander. Can someone please explain to me how I can achieve the effect I desire (pinning the viewing window) while having my camera/eye wander at a non-origin coordinate?
All of the examples I find demand that the camera/eye be at the origin, but this is not conceptually convenient for me. Also, because this is a "fish tank", I am setting my d_near to be the xy-plane, where z = 0.
In screen/world space, I assign the center of the screen to (0,0,0) and its 4 corners to:
TL(-44.25, 25, 0)
TR( 44.25, 25, 0)
BR( 44.25,-25, 0)
BL(-44.25,-25, 0)
These values are in cm for a 16x9 display.
I then calculate the user's eye (actually a web cam on my face) using POSIT to be usually somewhere in the range of (+/-40, +/-40, 40-250). My POSIT method is accurate.
I am defining my own matrices for the perspective and viewing transforms and using shaders.
I initialize as follows:
float right = 44.25;
float left = -44.25;
float top = 25.00;
float bottom = -25.00;
vec3 eye = vec3(0.0, 0.0, 100.0);
vec3 view_dir = vec3(0.0, 0.0, -1.0);
vec3 up = vec3(0.0, 1.0, 0.0);
vec3 n = normalize(-view_dir);
vec3 u = normalize(cross(up, n));
vec3 v = normalize(cross(n, u));
float d_x = -(dot(eye, u));
float d_y = -(dot(eye, v));
float d_z = -(dot(eye, n));
float d_near = eye.z;
float d_far = d_near + 50;
// perspective transform matrix
mat4 P = mat4((2.0*d_near)/(right-left ), 0, (right+left)/(right-left), 0,
0, (2.0*d_near)/(top-bottom), (top+bottom)/(top-bottom), 0,
0, 0, -(d_far+d_near)/(d_far-d_near), -(2.0*d_far*d_near)/(d_far-d_near),
0, 0, -1.0, 0);
// viewing transform matrix
mat4 V = mat4(u.x, u.y, u.z, d_x,
v.x, v.y, v.z, d_y,
n.x, n.y, n.z, d_z,
0.0, 0.0, 0.0, 1.0);
mat4 MV = C * V;
//MV = V;
From what I gather looking on the web, my view_dir and up are to remain fixed. This means that I need only to update d_near and d_far as well as d_x, d_y, and d_z? I do this in my glutIdleFunc( idle );
void idle (void) {
hBuffer->getFrame(hFrame);
if (hFrame->goodH && hFrame->timeStamp != timeStamp) {
timeStamp = hFrame->timeStamp;
std::cout << "(" << hFrame->eye.x << ", " <<
hFrame->eye.y << ", " <<
hFrame->eye.z << ") \n";
eye = vec3(hFrame->eye.x, hFrame->eye.y, hFrame->eye.z);
d_near = eye.z;
d_far = eye.z + 50;
P = mat4((2.0*d_near)/(right-left), 0, (right+left)/(right-left), 0,
0, (2.0*d_near)/(top-bottom), (top+bottom)/(top-bottom), 0,
0, 0, -(d_far+d_near)/(d_far-d_near), -(2.0*d_far*d_near)/(d_far-d_near),
0, 0, -1.0, 0);
d_x = -(dot(eye, u));
d_y = -(dot(eye, v));
d_z = -(dot(eye, n));
C = mat4(1.0, 0.0, 0.0, eye.x,
0.0, 1.0, 0.0, eye.y,
0.0, 0.0, 1.0, 0.0,
0.0, 0.0, 0.0, 1.0);
V = mat4(u.x, u.y, u.z, d_x,
v.x, v.y, v.z, d_y,
n.x, n.y, n.z, d_z,
0.0, 0.0, 0.0, 1.0);
MV = C * V;
//MV = V;
glutPostRedisplay();
}
}
Here is my shader code:
#version 150
uniform mat4 MV;
uniform mat4 P;
in vec4 vPosition;
in vec4 vColor;
out vec4 color;
void
main()
{
gl_Position = P * MV * vPosition;
color = vColor;
}
Ok, I made some changes to my code, but without success. When I use V in place of MV in the vertex shader, everything looks as I want it to, the perspective is correct and the objects are the right size, however, the scene is translated by the displacement of the camera.
When using C and V to obtain MV, my scene is rendered from the perspective of an observer straight on and the rendered scene fills the window as it should, but the perspective of the eye/camera is lost.
Really, what I want is to translate the 2D pixels, the projection plane, by the appropriate x and y values of the eye/camera, so as to keep the center of the object (whose xy center is (0,0)) in the center of the rendered image. I am guided by the examples in the textbook "Interactive Computer Graphics: A Top-Down Approach with Shader-Based OpenGL (6th Edition)". Using the files paired with the book freely available on the web, I am continuing with the row major approach.
The following images are taken when not using the matrix C to create MV. When I do use C to create MV, all scenes look like the first image below. I desire no translation in z and so I leave that as 0.
Because the projection plane and my camera plane are parallel, the conversion from one to the other coordinate system is simply a translation and inv(T) ~ -T.
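For example (GLM sketch, using the eye offset from the second image below):

glm::mat4 T = glm::translate(glm::mat4(1.0f), glm::vec3(56.0f, -16.0f, 0.0f)); // camera offset in x and y, z kept at 0
glm::mat4 Tinv = glm::inverse(T);  // identical to translating by (-56, 16, 0)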
Here is my image for the eye at (0,0,50):
Here is my image for the eye at (56,-16,50):
The solution is to update d_x, d_y, and d_z, accounting for the new eye position, but to never change u, v, or n. In addition, one must update the matrix P with new values for left, right, top and bottom, as they relate to the new position of the camera/eye.
I initialize with this:
float screen_right = 44.25;
float screen_left = -44.25;
float screen_top = 25.00;
float screen_bottom = -25.00;
float screen_front = 0.0;
float screen_back = -150;
And now my idle function looks like this, note the calculations for top, bottom, right, and left:
void idle (void) {
hBuffer->getFrame(&hFrame);
if (hFrame.goodH && hFrame.timeStamp != timeStamp) {
timeStamp = hFrame.timeStamp;
//std::cout << "(" << hFrame.eye.x << ", " <<
// hFrame.eye.y << ", " <<
// hFrame.eye.z << ") \n";
eye = vec3(hFrame.eye.x, hFrame.eye.y, hFrame.eye.z);
d_near = eye.z;
d_far = eye.z + abs(screen_back) + 1;
float top = screen_top - eye.y;
float bottom = top - 2*screen_top;
float right = screen_right - eye.x;
float left = right - 2*screen_right;
P = mat4((2.0 * d_near)/(right - left ), 0.0, (right + left)/(right - left), 0.0,
0.0, (2.0 * d_near)/(top - bottom), (top + bottom)/(top - bottom), 0.0,
0.0, 0.0, -(d_far + d_near)/(d_far - d_near), -(2.0 * d_far * d_near)/(d_far - d_near),
0.0, 0.0, -1.0, 0.0);
d_x = -(dot(eye, u));
d_y = -(dot(eye, v));
d_z = -(dot(eye, n));
V = mat4(u.x, u.y, u.z, d_x,
v.x, v.y, v.z, d_y,
n.x, n.y, n.z, d_z,
0.0, 0.0, 0.0, 1.0);
glutPostRedisplay();
}
}
This redefinition of the perspective matrix keeps the rendered image from translating. I still have camera capture and screen grabbing synchronization issues, but I am able to create images like the following, updated in real time for the position of the user:
All of the examples I find demand that the camera/eye be at the origin
This is not a demand. OpenGL doesn't define a camera; it all boils down to transformations. Those are usually called projection and modelview (model and view transform combined). The camera, i.e. the view transform, is simply the inverse of the positioning of the camera in world space. So say you built yourself a matrix for your camera, C; then you'd premultiply its inverse on the modelview.
mat4 MV = inv(C) * V
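As a concrete sketch with GLM (column-major, unlike the row-major matrices in the question; here C is just a translation to the tracked eye position):

glm::mat4 C = glm::translate(glm::mat4(1.0f), glm::vec3(eye.x, eye.y, eye.z)); // camera placement in world space
glm::mat4 MV = glm::inverse(C) * V;  // V: the rest of the modelview, as in the line above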
On a side note, your shader code is wrong:
gl_Position = P * ( (V * vPosition ) / vPosition.w );
^^^^^^^^^^^^^^
The homogeneous divide is not to be done in the shader, as it is hardwired into the render pipeline.
I have implemented shadow mapping with an FBO and GLSL.
It is used on a heightfield; that is, some objects (trees, plants, ...) cast shadows on the heightfield.
The problem I have is that the shadows are only visible on the ground of the heightfield, that is, where the heightfield's height = 0. As soon as there is some height involved, the shadows disappear. If I look at the shadow map itself, everything looks fine: objects that are closer to the light are darker.
Here is my GLSL vertex shader:
uniform mat4 lightView, lightProjection;
const mat4 biasMatrix = mat4( 0.5, 0.0, 0.0, 0.0,
0.0, 0.5, 0.0, 0.0,
0.0, 0.0, 0.5, 0.0,
0.5, 0.5, 0.5, 1.0); //bias from [-1, 1] to [0, 1]
void main()
{
gl_Position = ftransform();
mat4 shadowMatrix = biasMatrix * lightProjection * lightView;
shadowTexCoord = shadowMatrix * gl_Vertex;
}
Fragment shader:
uniform sampler2DShadow shadowmap;
varying vec4 shadowTexCoord;
void main()
{
vec4 shadow = shadow2DProj(shadowmap, shadowTexCoord, 0.0);
float colorshadow = shadow.r < 0.1 ? 0.5 : 1.0;
vec4 color = vec4(1,1,1,1);
gl_FragColor = vec4( color*colorshadow, color.w );
}
Thanks a lot for any help on this!
I think there might be some confusion between the different spaces here. As written, it looks like your code would only work if gl_ModelViewMatrix for the ground contains only camera transformations. This is because ftransform basically goes
gl_Position = gl_ProjectionMatrix * (gl_ModelViewMatrix * gl_Vertex)
That means that gl_Vertex is specified in object coordinates. However, typically the view matrix of the light maps from world coordinates to the light's view space, so this code would only work if object space = world space. So basically, let's say you scale the terrain; then object space doesn't equal world space anymore. Because of this, you need to separate gl_ModelViewMatrix into two parts: the camera view matrix and the modeling transform (e.g. object -> world space).
I haven't tested this code, but I would try something like this:
uniform mat4 lightView, lightProjection;
uniform mat4 camView, camProj, modelTrans;
const mat4 biasMatrix = mat4( 0.5, 0.0, 0.0, 0.0,
0.0, 0.5, 0.0, 0.0,
0.0, 0.0, 0.5, 0.0,
0.5, 0.5, 0.5, 1.0); //bias from [-1, 1] to [0, 1]
void main()
{
mat4 modelViewProjMatrix = camProj * camView * modelTrans;
gl_Position = modelViewProjMatrix * gl_Vertex;
mat4 shadowMatrix = biasMatrix * lightProjection * lightView * modelTrans;
shadowTexCoord = shadowMatrix * gl_Vertex;
}
Technically it's faster to multiply the matrices on the CPU and only pass the exact ones you need, but for getting things working it's sometimes easier to do it this way.
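For example, something along these lines on the C++ side, computing each composite once per object instead of per vertex (the matrix variables mirror the uniforms in the shader above; the uniform names "uModelViewProj" and "uShadowMatrix" are placeholders, not from the code above, and glm plus a valid GL context are assumed):

#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

// Composites built once per object on the CPU:
glm::mat4 modelViewProj = camProj * camView * modelTrans;
glm::mat4 shadowMatrix  = biasMatrix * lightProjection * lightView * modelTrans;

// Upload them as two uniforms; the vertex shader then only does two mat4 * vec4 multiplies.
glUniformMatrix4fv(glGetUniformLocation(program, "uModelViewProj"), 1, GL_FALSE, glm::value_ptr(modelViewProj));
glUniformMatrix4fv(glGetUniformLocation(program, "uShadowMatrix"),  1, GL_FALSE, glm::value_ptr(shadowMatrix));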
Maybe you just missed it copy-pasting, but I don't see shadowTexCoord as varying in the vertex shader. This should result in a compilation error, though.