I don't know why, but it seems my z-axis is bugged (it looks like the value is being doubled or something).
This is supposed to be a cube
https://scontent-mad1-1.xx.fbcdn.net/hphotos-xtp1/v/t34.0-12/11998230_879538255460372_61658668_n.jpg?oh=dd08fee0a66e37bf8f3aae2fba107fa1&oe=55F6170C
However, it looks "bigger in depth" than it should, which seems wrong.
My pr_Matrix:
mat4 mat4::prespective(float fov, float aspectRatio, float near, float far){
    mat4 result;
    float yScale = 1.0f / tan(toRadians(fov/2.0f));
    float xScale = yScale / aspectRatio;
    float frustumLength = far - near;

    result.elements[0 + 0 * 4] = xScale;
    result.elements[1 + 1 * 4] = yScale;
    result.elements[2 + 2 * 4] = -(far + near) / frustumLength;
    result.elements[3 + 2 * 4] = -1.0f;
    result.elements[2 + 3 * 4] = -(2.0f * far * near) / frustumLength;
    return result;
}
My ml_Matrix:
maths::mat4 ProjectionMatrix(){ // returning by value; returning a reference to the local would dangle
    maths::mat4 m_ProjM = maths::mat4::identity();
    m_ProjM *= maths::mat4::translation(m_Position);
    m_ProjM *= maths::mat4::rotation(m_Rotation.x, maths::vec3(1, 0, 0));
    m_ProjM *= maths::mat4::rotation(m_Rotation.y, maths::vec3(0, 1, 0));
    m_ProjM *= maths::mat4::rotation(m_Rotation.z, maths::vec3(0, 0, 1));

    maths::mat4 scale_matrix = maths::mat4(m_Scale);
    scale_matrix.elements[3 + 3 * 4] = 1.0f;
    m_ProjM *= scale_matrix;
    return m_ProjM;
}
My vw_matrix (camera):
void update(){
    maths::mat4 newMatrix = maths::mat4::identity();
    newMatrix *= maths::mat4::rotation(m_Pitch, maths::vec3(1, 0, 0));
    newMatrix *= maths::mat4::rotation(m_Yaw, maths::vec3(0, 1, 0));
    newMatrix *= maths::mat4::translation(maths::vec3(-m_Pos.x, -m_Pos.y, -m_Pos.z));
    m_ViewMatrix = newMatrix;
}
My matrix multiplication in the GLSL code:
vec4 worldPosition = ml_matrix * vec4(position, 1.0);
vec4 positionRelativeToCamera = vw_matrix * worldPosition;
gl_Position = pr_matrix * positionRelativeToCamera;
Edit: I think I got it! (I'll double-check the maths once I get time.) Most tutorials (and therefore the source of my code) use a mat4::prespective(float fov, float aspectRatio, float near, float far); the thing they don't say is that "fov" means "fovy" (that is, "vertical field of view"). The results seem to replicate the "usual fov in games" now, with this simple change:
float xScale = 1.0f / tan(toRadians(fov/2.0f));
float yScale = xScale * aspectRatio;
Thanks for pointing it out as a FOV problem, Henk De Boer.
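For reference, the usual relation between the two angles follows from the scale factors above (tan(fovx/2) = tan(fovy/2) * aspectRatio); here is a minimal sketch of the conversion, with function names of my own choosing:

#include <cmath>

// Horizontal FOV implied by a vertical FOV at a given aspect ratio
// (angles in radians). Follows from xScale = yScale / aspectRatio.
float fovxFromFovy(float fovy, float aspectRatio) {
    return 2.0f * std::atan(std::tan(fovy / 2.0f) * aspectRatio);
}

// The inverse, for feeding a "game-style" horizontal FOV into a
// perspective function that expects fovy.
float fovyFromFovx(float fovx, float aspectRatio) {
    return 2.0f * std::atan(std::tan(fovx / 2.0f) / aspectRatio);
}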
This is simply a result of FOV; the FOV you supply greatly alters the way an image looks. It's a matter of personal taste what you choose. For a natural-feeling scene anything between 45 and 60 degrees is good, but you can raise it to make the scene feel more immediate or the action more fast-paced. Observe how the following image shows the exact same geometry twice, but the dock on the right protrudes much further into the viewport with a 90-degree FOV.
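If you want to see the effect numerically rather than visually, here is a quick sketch comparing the projection's vertical scale factor at a few common angles (plain C++, nothing from the thread assumed):

#include <cmath>
#include <cstdio>

int main() {
    // yScale = 1 / tan(fov/2) is the factor the projection applies vertically.
    // Going from 45 to 90 degrees shrinks it from ~2.414 to 1.0, so the same
    // geometry covers far less of the viewport and appears to recede.
    for (float fov : {45.0f, 60.0f, 90.0f}) {
        float yScale = 1.0f / std::tan(fov * 3.14159265f / 360.0f);
        std::printf("fov %5.1f -> yScale %.3f\n", fov, yScale);
    }
}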
My guess would be that in declaring:
mat4 result;
you're invoking a default constructor that's initialising the matrix to identity.
Then, in the subsequent code that sets up the matrix, you need to add:
result.elements[3 + 3 * 4] = 0.0f;
Also, for what it's worth, in my code the lines:
result.elements[3 + 2 * 4] = -1.0f;
result.elements[2 + 3 * 4] = -(2.0f * far * near) / frustumLength;
are not negated. To be honest, I'm not sure whether that's anything to worry about.
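Putting both suggestions together, here is a sketch of the function with the matrix explicitly zeroed first (assuming elements is a column-major float[16], as the indexing in the question suggests; the negations are left as the asker had them):

mat4 mat4::prespective(float fov, float aspectRatio, float near, float far){
    mat4 result;
    // Zero everything explicitly so a default identity constructor
    // can't leave a stray 1.0 in elements[3 + 3 * 4].
    for (int i = 0; i < 16; i++)
        result.elements[i] = 0.0f;

    float yScale = 1.0f / tan(toRadians(fov/2.0f));
    float xScale = yScale / aspectRatio;
    float frustumLength = far - near;

    result.elements[0 + 0 * 4] = xScale;
    result.elements[1 + 1 * 4] = yScale;
    result.elements[2 + 2 * 4] = -(far + near) / frustumLength;
    result.elements[3 + 2 * 4] = -1.0f;
    result.elements[2 + 3 * 4] = -(2.0f * far * near) / frustumLength;
    return result;
}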
Related
I'm trying to create a fish-eye effect but only in a small radius around the mouse position. I've been able to modify this code to work about the mouse position (demo) but I can't figure out where the zooming is coming from. I'm expecting the output to warp the image similarly to this (ignore the color inversion for the sake of this question):
Relevant code:
// Check if within given radius of the mouse
vec2 diff = myUV - u_mouse - 0.5;
float distance = dot(diff, diff); // square of distance, saves a square root

// Add fish-eye
if (distance <= u_radius_squared) {
    vec2 xy = 2.0 * (myUV - u_mouse) - 1.0;
    float d = length(xy * maxFactor);
    float z = sqrt(1.0 - d * d);
    float r = atan(d, z) / PI;
    float phi = atan(xy.y, xy.x);
    myUV.x = d * r * cos(phi) + 0.5 + u_mouse.x;
    myUV.y = d * r * sin(phi) + 0.5 + u_mouse.y;
}

vec3 tex = texture2D(tMap, myUV).rgb;
gl_FragColor.rgb = tex;
This is my first shader, so other improvements besides fixing this issue are also welcome.
Compute the vector from the current fragment to the mouse and the length of the vector:
vec2 diff = myUV - u_mouse;
float distance = length(diff);
The new texture coordinate is the sum of the mouse position and the scaled direction vector:
myUV = u_mouse + normalize(diff) * u_radius * f(distance/u_radius);
For instance:
uniform float u_radius;
uniform vec2 u_mouse;

void main()
{
    vec2 diff = myUV - u_mouse;
    float distance = length(diff);

    if (distance <= u_radius)
    {
        float scale = (1.0 - cos(distance/u_radius * PI * 0.5));
        myUV = u_mouse + normalize(diff) * u_radius * scale;
    }

    vec3 tex = texture2D(tMap, myUV).rgb;
    gl_FragColor = vec4(tex, 1.0);
}
I'm having some difficulty implementing a procedural checkerboard texture. Here is what I need to get:
Here is what I get:
It's close, but my texture is rotated with respect to what I need.
Here is the code of my shader:
#version 330

in vec2 uv;
out vec3 color;

uniform sampler1D colormap;

void main() {
    float sx = sin(10*3.14*uv.x)/2 + 0.5;
    float sy = sin(10*3.14*uv.y)/2 + 0.5;
    float s = (sx + sy)/2;
    if(true){
        color = texture(colormap,s).rgb;
    }
}
colormap is a mapping from 0 to 1, where 0 corresponds to red and 1 to green.
I think the problem comes from the formula I use, (sx+sy)/2. I need the squares aligned with the borders of the big square, not rotated.
Does anyone have an idea for the correct formula?
Thanks.
You can add a rotation operation to "uv":
mat2 R(float degrees){
    float alpha = radians(degrees);
    mat2 m = mat2(1);
    m[0][0] = cos(alpha);
    m[0][1] = sin(alpha);
    m[1][0] = -sin(alpha);
    m[1][1] = cos(alpha);
    return m;
}

void main(){
    vec2 new_uv = R(45) * uv;
    float sx = sin(10*3.14*new_uv.x)/2 + 0.5;
    float sy = sin(10*3.14*new_uv.y)/2 + 0.5;
    float s = (sx + sy)/2;
    color = texture(colormap,s).rgb;
}
Maybe something like this:
float sx = sin(10.0 * M_PI * uv.x);
float sy = sin(10.0 * M_PI * uv.y);
float s = sx * sy / 2.0 + 0.5;
Example (without texture):
https://www.shadertoy.com/view/4sdSzn
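As a quick sanity check of that formula outside a shader, here is a small sketch that evaluates it on a coarse grid and prints the pattern (plain C++; the thresholding stands in for the red/green colormap lookup):

#include <cmath>
#include <cstdio>

int main() {
    const float PI = 3.14159265f;
    // Sample uv in [0,1]^2; '#' where s > 0.5, '.' elsewhere.
    // The sign of sin(10*PI*u) * sin(10*PI*v) flips exactly on the lines
    // u = k/10 and v = k/10, so the squares come out axis-aligned.
    for (int j = 0; j < 20; j++) {
        for (int i = 0; i < 20; i++) {
            float u = (i + 0.5f) / 20.0f;
            float v = (j + 0.5f) / 20.0f;
            float s = std::sin(10.0f * PI * u) * std::sin(10.0f * PI * v) / 2.0f + 0.5f;
            std::putchar(s > 0.5f ? '#' : '.');
        }
        std::putchar('\n');
    }
}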
I have created a simple 2D area using OpenGL, comprised of tiles. These tiles have been stretched relative to the screen's aspect ratio by default. To fix this I have attempted to use an orthographic projection matrix. Here is how I created it:
public void createProjectionMatrix() {
    float left = 0;
    float right = DisplayManager.getScreenWidth();
    float top = 0;
    float bottom = DisplayManager.getScreenHeight();
    float near = 1;
    float far = -1;

    projectionMatrix.m00 = 2 / (right - left);
    projectionMatrix.m11 = 2 / (top - bottom);
    projectionMatrix.m22 = -2 / (far - near);
    projectionMatrix.m30 = - (right + left) / (right - left);
    projectionMatrix.m31 = - (top + bottom) / (top - bottom);
    projectionMatrix.m32 = - (far + near) / (far - near);
    projectionMatrix.m33 = 1;
}
The problem probably lies here, but I just can't find it. I then call this method when creating my renderer, store the result in a uniform variable, and use it in the vertex shader like so:
vec4 worldPosition = transformationMatrix * vec4(position, 0, 1);
gl_Position = projectionMatrix * viewMatrix * worldPosition;
Where projectionMatrix is a mat4 which corresponds to the previously created orthographic projection matrix.
Right now absolutely nothing except for the clear color renders.
EDIT:
The orthographic projection matrix is created and loaded into the shaders right after the renderer and the shaders are created.
public Renderer() {
    createOrthoMatrix();

    terrainShader.start();
    terrainShader.loadProjectionMatrix(projectionMatrix);
    terrainShader.stop();

    GL11.glEnable(GL13.GL_MULTISAMPLE);
    GL11.glClearColor(0, 0, 0.5f, 1);
}
The rest of the matrices are passed in at each render with the loadUniforms() method.
for(Terrain t : batch) {
    loadUniforms(t, terrainManager, camera, lights);
    GL11.glDrawElements(GL11.GL_TRIANGLES, model.getModel().getVertexCount(), GL11.GL_UNSIGNED_INT, 0);
}

private void loadUniforms(Terrain t, TerrainManager tm, Camera camera, List<Light> lights) {
    Matrix4f matrix = Maths.createTransformationMatrix(t.getPosition(), 0, 0, 0, 1);
    terrainShader.loadTransformationMatrix(matrix);
    terrainShader.loadViewMatrix(camera);
    terrainShader.loadNumberOfRows(tm.getNumberOfRows());
    terrainShader.loadOffset(t.getOffset());
    terrainShader.loadLights(lights);
}
Finally this is what the vertex shader looks like:
#version 400 core

in vec2 position;

uniform mat4 transformationMatrix;
uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;

void main(void) {
    vec4 worldPosition = transformationMatrix * vec4(position, 0, 1);
    gl_Position = projectionMatrix * viewMatrix * worldPosition;
}
It has been a long and arduous task (at the risk of sounding incompetent), but I have found a solution to my problem. There is probably a better way to solve it, but this is how I did it.
I changed createProjectionMatrix() to look like this:
public void createProjectionMatrix() {
    float width = Display.getWidth();
    float height = Display.getHeight();
    float left = -width;
    float right = width * 1f;
    float top = height * 1f;
    float bottom = -height;
    float near = 0;
    float far = 10;

    projectionMatrix.m00 = (2f / (right - left)) * 1000;
    projectionMatrix.m11 = (2f / (top - bottom)) * 1000;
    projectionMatrix.m22 = 2f / (far - near);
    projectionMatrix.m30 = - (right + left) / (right - left);
    projectionMatrix.m31 = - (top + bottom) / (top - bottom);
    projectionMatrix.m32 = -(far + near) / (far - near);
    projectionMatrix.m33 = 1;
}
Multiplying m00 and m11 by a large number is the only way I am able to see anything besides the clear color. If I remember correctly, that is because everything was otherwise being rendered at less than a pixel in size. This idea was presented to me by @NicoSchertler, so thank you very much! The shaders look the same and it now runs well. If anyone has a less bootleg solution, please share it; I would be glad to see how it was solved. Here is a link that was very helpful to me: OpenGL 3+ with orthographic projection of directional light.
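For what it's worth, if the terrain vertices are in pixel units, the standard orthographic matrix that maps [0, width] x [0, height] straight to clip space removes the need for the x1000 fudge entirely. A sketch in C-style code (column-major float[16] rather than LWJGL's Matrix4f, so treat the layout as an assumption):

// Maps x in [0,width] to [-1,1] and y in [0,height] to [1,-1]
// (top-left origin), with z mapped from [near,far] to [-1,1].
void createPixelOrtho(float *P, float width, float height, float near, float far) {
    for (int i = 0; i < 16; i++) P[i] = 0.0f;
    P[0]  =  2.0f / width;             // 2 / (right - left), with left = 0
    P[5]  = -2.0f / height;            // 2 / (top - bottom), with top = 0, bottom = height
    P[10] = -2.0f / (far - near);
    P[12] = -1.0f;                     // -(right + left) / (right - left)
    P[13] =  1.0f;                     // -(top + bottom) / (top - bottom)
    P[14] = -(far + near) / (far - near);
    P[15] =  1.0f;
}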
This should solve your problem with the aspect ratio. Try it out:
public void createProjectionMatrix() {
    float srcaspect = 4f / 3f; /* Default aspect ratio to scale ortho; can be other than the 4:3 display origin. */
    float dstaspect = DisplayManager.getScreenWidth() / DisplayManager.getScreenHeight();
    float yscale = (dstaspect < (1f / 1f) ? dstaspect : 1f / 1f) / (1f / 1f);
    float scale = 0.5f * (DisplayManager.getScreenHeight());
    float top = scale - (scale / yscale);
    float bottom = scale + (scale / yscale);
    float left = scale - (scale * dstaspect / yscale) - (scale - (scale * srcaspect));
    float right = scale + (scale * dstaspect / yscale) - (scale - (scale * srcaspect));
    float near = 10;
    float far = 1000;

    projectionMatrix.m00 = 2 / (right - left);
    projectionMatrix.m11 = 2 / (top - bottom);
    projectionMatrix.m22 = -2 / (far - near);
    projectionMatrix.m30 = - (right + left) / (right - left);
    projectionMatrix.m31 = - (top + bottom) / (top - bottom);
    projectionMatrix.m32 = - (far + near) / (far - near);
    projectionMatrix.m33 = 1;
}
Currently I am learning 3D rendering theory with the book "Learning Modern 3D Graphics Programming" and am right now stuck on one of the "Further Study" activities in the review of chapter four, specifically the last one.
The third activity was answered in this question; I understood it with no problem. However, this last activity asks me to do all of that again, this time using only matrices.
I have a partially working solution, but it feels like quite a hack to me, and is probably not the correct way to do it.
My solution to the third question involved oscillating the x, y, and z components of a 3D vector E over an arbitrary range, which produced a cube zooming in and out (growing from the bottom-left, per OpenGL's origin point). I wanted to do this again using matrices; it looked like this:
However, I get these results with matrices (ignoring the background color change):
Now to the code...
The matrix is a float[16] called theMatrix, representing a 4x4 matrix with its data written in column-major order. Everything but the following elements is initialized to zero:
float fFrustumScale = 1.0f;
float fzNear = 1.0f;
float fzFar = 3.0f;
theMatrix[0] = fFrustumScale;
theMatrix[5] = fFrustumScale;
theMatrix[10] = (fzFar + fzNear) / (fzNear - fzFar);
theMatrix[14] = (2 * fzFar * fzNear) / (fzNear - fzFar);
theMatrix[11] = -1.0f;
Then the rest of the code stays the same as in the matrixPerspective tutorial lesson until we get to the void display() function:
// Hacked-up variables pretending to be a single vector (E)
float x = 0.0f, y = 0.0f, z = -1.0f;

// Variables used for the oscillating zoom-in-out
int counter = 0;
float increment = -0.005f;
int steps = 250;

void display()
{
    glClearColor(0.15f, 0.15f, 0.2f, 0.0f);
    glClear(GL_COLOR_BUFFER_BIT);

    glUseProgram(theProgram);

    // Oscillating values
    while (counter <= steps)
    {
        x += increment;
        y += increment;
        z += increment;

        counter++;
        if (counter >= steps)
        {
            counter = 0;
            increment *= -1.0f;
        }
        break;
    }

    // Introduce the new data to the array before sending it as a 4x4 matrix to the shader
    theMatrix[0] = -x * -z;
    theMatrix[5] = -y * -z;

    // Update the matrix with the new values after processing with E
    glUniformMatrix4fv(perspectiveMatrixUniform, 1, GL_FALSE, theMatrix);

    /*
    cube rendering code omitted for simplification
    */

    glutSwapBuffers();
    glutPostRedisplay();
}
And here is the vertex shader code that uses the matrix:
#version 330

layout(location = 0) in vec4 position;
layout(location = 1) in vec4 color;

smooth out vec4 theColor;

uniform vec2 offset;
uniform mat4 perspectiveMatrix;

void main()
{
    vec4 cameraPos = position + vec4(offset.x, offset.y, 0.0, 0.0);
    gl_Position = perspectiveMatrix * cameraPos;
    theColor = color;
}
What am I doing wrong, or what am I confusing? Thanks for taking the time to read all of this.
In OpenGL there are three major matrices that you need to be aware of:
The Model Matrix D: Maps vertices from an object's local coordinate system into the world's coordinate system.
The View Matrix V: Maps vertices from the world's coordinate system to the camera's coordinate system.
The Projection Matrix P: Maps (or, more suitably, projects) vertices from the camera's space onto the screen.
Multiplied together, the model and the view matrix give us the so-called Model-View Matrix M, which maps the vertices from the object's local coordinates to the camera's coordinate system.
Altering specific elements of the model-view matrix results in certain affine transformations of the camera.
For example, the 3 matrix elements of the rightmost column are for the translation transformation, and the diagonal elements are for the scaling transformation. Altering the elements of the upper-left 3x3 sub-matrix appropriately gives the rotation transformations about the camera's X, Y and Z axes.
The above transformations in C++ code are quite simple and are displayed below:
void translate(GLfloat const dx, GLfloat const dy, GLfloat dz, GLfloat *M)
{
    M[12] = dx; M[13] = dy; M[14] = dz;
}

void scale(GLfloat const sx, GLfloat sy, GLfloat sz, GLfloat *M)
{
    M[0] = sx; M[5] = sy; M[10] = sz;
}

void rotateX(GLfloat const radians, GLfloat *M)
{
    M[5] = std::cosf(radians); M[6] = -std::sinf(radians);
    M[9] = -M[6]; M[10] = M[5];
}

void rotateY(GLfloat const radians, GLfloat *M)
{
    M[0] = std::cosf(radians); M[2] = std::sinf(radians);
    M[8] = -M[2]; M[10] = M[0];
}

void rotateZ(GLfloat const radians, GLfloat *M)
{
    M[0] = std::cosf(radians); M[1] = std::sinf(radians);
    M[4] = -M[1]; M[5] = M[0];
}
Now you have to define the projection matrix P.
Orthographic projection:
// These parameters are lens properties.
// The "near" and "far" create the Depth of Field.
// The "left", "right", "bottom" and "top" represent the rectangle formed
// by the near area, this rectangle will also be the size of the visible area.
GLfloat near = 0.001, far = 100.0;
GLfloat left = 0.0, right = 320.0;
GLfloat bottom = 480.0, top = 0.0;
// First Column
P[0] = 2.0 / (right - left);
P[1] = 0.0;
P[2] = 0.0;
P[3] = 0.0;
// Second Column
P[4] = 0.0;
P[5] = 2.0 / (top - bottom);
P[6] = 0.0;
P[7] = 0.0;
// Third Column
P[8] = 0.0;
P[9] = 0.0;
P[10] = -2.0 / (far - near);
P[11] = 0.0;
// Fourth Column
P[12] = -(right + left) / (right - left);
P[13] = -(top + bottom) / (top - bottom);
P[14] = -(far + near) / (far - near);
P[15] = 1;
Perspective Projection:
// These parameters are about lens properties.
// The "near" and "far" create the Depth of Field.
// The "angleOfView", as the name suggests, is the angle of view.
// The "aspectRatio" is the cool thing about this matrix. OpenGL doesn't
// have any information about the screen you are rendering for, so the
// results could seem stretched. This variable puts things onto the
// right path. The aspect ratio is your device screen (or desired area) width
// divided by its height. This gives you a number < 1.0 if the area
// has more vertical space and a number > 1.0 if the area has more horizontal
// space. An aspect ratio of 1.0 represents a square area.
GLfloat near = 0.001;
GLfloat far = 100.0;
GLfloat angleOfView = 0.25 * 3.1415;
GLfloat aspectRatio = 0.75;
// Some calculus before the formula.
GLfloat size = near * std::tanf(0.5 * angleOfView);
GLfloat left = -size;
GLfloat right = size;
GLfloat bottom = -size / aspectRatio;
GLfloat top = size / aspectRatio;
// First Column
P[0] = 2.0 * near / (right - left);
P[1] = 0.0;
P[2] = 0.0;
P[3] = 0.0;
// Second Column
P[4] = 0.0;
P[5] = 2.0 * near / (top - bottom);
P[6] = 0.0;
P[7] = 0.0;
// Third Column
P[8] = (right + left) / (right - left);
P[9] = (top + bottom) / (top - bottom);
P[10] = -(far + near) / (far - near);
P[11] = -1.0;
// Fourth Column
P[12] = 0.0;
P[13] = 0.0;
P[14] = -(2.0 * far * near) / (far - near);
P[15] = 0.0;
Then your shader will become:
#version 330
layout(location = 0) in vec4 position;
layout(location = 1) in vec4 color;
smooth out vec4 theColor;
uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
void main()
{
    gl_Position = projectionMatrix * modelViewMatrix * position;
    theColor = color;
}
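Uploading both matrices from the C++ side then looks roughly like this (a sketch; theProgram, M and P are assumed to be your linked program and the two float[16] arrays built above):

GLint mvLoc = glGetUniformLocation(theProgram, "modelViewMatrix");
GLint pLoc  = glGetUniformLocation(theProgram, "projectionMatrix");

glUseProgram(theProgram);
// GL_FALSE: the arrays are already column-major, so no transpose is needed.
glUniformMatrix4fv(mvLoc, 1, GL_FALSE, M);
glUniformMatrix4fv(pLoc, 1, GL_FALSE, P);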
Bibliography:
http://blog.db-in.com/cameras-on-opengl-es-2-x/
http://www.songho.ca/opengl/gl_transform.html
Sampling from a depth buffer in a shader returns values between 0 and 1, as expected.
Given the near- and far- clip planes of the camera, how do I calculate the true z value at this point, i.e. the distance from the camera?
From http://web.archive.org/web/20130416194336/http://olivers.posterous.com/linear-depth-in-glsl-for-real
// == Post-process frag shader ===========================================
uniform sampler2D depthBuffTex;
uniform float zNear;
uniform float zFar;
varying vec2 vTexCoord;

void main(void)
{
    float z_b = texture2D(depthBuffTex, vTexCoord).x;
    float z_n = 2.0 * z_b - 1.0;
    float z_e = 2.0 * zNear * zFar / (zFar + zNear - z_n * (zFar - zNear));
}
[edit] So here's the explanation (with the two mistakes Christian pointed out now corrected):
An OpenGL perspective matrix has two important terms in its third row: A = -(far + near) / (far - near) and B = -(2 * far * near) / (far - near).
When you multiply this matrix by a homogeneous point [x, y, z, 1], it gives you: [don't care, don't care, Az+B, -z].
OpenGL next does the perspective division: it divides this vector by its w component. This operation is not done in shaders (except in special cases like shadow mapping) but in hardware; you can't control it. Since w = -z, the Z value becomes -A - B/z.
We are now in Normalized Device Coordinates, where the Z value is between -1 and 1. For storage in the depth buffer it is moved to the [0,1] range (just like x and y): a scaling and offset is applied.
This final value is then stored in the buffer.
The above code does the exact opposite:
z_b is the raw value stored in the buffer
z_n linearly transforms z_b from [0,1] back to [-1,1]
z_e is the same relation z_n = -A - B/z, but solved for z; strictly this gives the (negative) eye-space z = -B / (z_n + A), and the code returns its magnitude, the positive distance to the camera plane. A and B should be computed on the CPU and sent as uniforms, btw.
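Computing A and B on the CPU and handing them over could look like this (a sketch; program and the uniform names are mine):

// Third-row terms of the standard perspective matrix.
float A = -(zFar + zNear) / (zFar - zNear);
float B = -2.0f * zFar * zNear / (zFar - zNear);

glUseProgram(program);
glUniform1f(glGetUniformLocation(program, "uA"), A);
glUniform1f(glGetUniformLocation(program, "uB"), B);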
The opposite function is:
varying float depth; // Linear depth, in world units

void main(void)
{
    float A = gl_ProjectionMatrix[2].z;
    float B = gl_ProjectionMatrix[3].z;
    gl_FragDepth = 0.5 * (-A * depth + B) / depth + 0.5;
}
I know this is an old, old question, but I've found myself back here more than once on various occasions, so I thought I'd share my code that does the forward and reverse conversions.
This is based on @Calvin1602's answer. These work in GLSL or plain old C code.
uniform float zNear = 0.1;
uniform float zFar = 500.0;

// depthSample from depthTexture.r, for instance
float linearDepth(float depthSample)
{
    depthSample = 2.0 * depthSample - 1.0;
    float zLinear = 2.0 * zNear * zFar / (zFar + zNear - depthSample * (zFar - zNear));
    return zLinear;
}

// result suitable for assigning to gl_FragDepth
float depthSample(float linearDepth)
{
    float nonLinearDepth = (zFar + zNear - 2.0 * zNear * zFar / linearDepth) / (zFar - zNear);
    nonLinearDepth = (nonLinearDepth + 1.0) / 2.0;
    return nonLinearDepth;
}
I ended up here trying to solve a similar problem when Nicol Bolas's comment on this page made me realize what I was doing wrong. If you want the distance to the camera and not the distance to the camera plane, you can compute it as follows (in GLSL):
float GetDistanceFromCamera(float depth,
                            vec2 screen_pixel,
                            vec2 resolution) {
    float fov = ...
    float near = ...
    float far = ...

    float distance_to_plane = near / (far - depth * (far - near)) * far;

    vec2 center = resolution / 2.0f - 0.5;
    float focal_length = (resolution.y / 2.0f) / tan(fov / 2.0f);
    float diagonal = length(vec3(screen_pixel.x - center.x,
                                 screen_pixel.y - center.y,
                                 focal_length));
    return distance_to_plane * (diagonal / focal_length);
}
(Source) Thanks to GitHub user cassfalg:
https://github.com/carla-simulator/carla/issues/2287