I have a simple vertex shader
#version 330 core
uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;
uniform mat4 modelMatrix;
in vec3 in_Position;
out vec3 pass_Color;
void main(void)
{
//gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(in_Position, 1.0);
gl_Position = vec4(in_Position, 1.0);
pass_Color = vec3(1,1,1);
}
In my client code I have:
glm::vec4 vec1(-1,-1,0,1);//first
glm::vec4 vec2(0,1,0,1);//second
glm::vec4 vec3(1,-1,0,1);//third
glm::mat4 m = projectionMatrix * viewMatrix * modelMatrix;
//translate on client side
vec1 = m * vec1;
vec2 = m * vec2;
vec3 = m * vec3;
//first vertex
vertices[0] = vec1.x;
vertices[1] = vec1.y;
vertices[2] = vec1.z;
//second
vertices[3] = vec2.x;
vertices[4] = vec2.y;
vertices[5] = vec2.z;
//third
vertices[6] = vec3.x;
vertices[7] = vec3.y;
vertices[8] = vec3.z;
Now my question: if I use no matrix multiplication in the shader and none in the client code, this renders a nice triangle that stretches across the whole screen, so I take it the vertex shader maps the coordinates it is given to the screen in a coordinate system with x = -1..1 and y = -1..1.
If I do the matrix multiplication in the shader, everything works nicely. But if I comment out the code in the shader as shown and do the multiplication on the client, I get odd results. Shouldn't it yield the same result?
Have I gotten it wrong in thinking that the output of the vertex shader, gl_Position, is 2D coordinates despite being a vec4?
Thanks for any help. I would really like to understand what exactly the output of the vertex shader is in terms of vertex position.
The problem is in your shader, as it accepts only three components of the position. It is fine to set the fourth coordinate to 1 (like you do) as long as the coordinate is not in projection (clip) space yet.
When you do the transformation on the client side, the results are correct 4-component homogeneous vectors. You just need to use them as-is in your vertex shader:
in vec4 in_Position;
...
gl_Position = in_Position;
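For illustration, a minimal sketch of the adjusted shader (the client must then also upload four floats per vertex, i.e. vec1.x through vec1.w rather than just x/y/z, and the corresponding glVertexAttribPointer size changes from 3 to 4):
#version 330 core
in vec4 in_Position;   // already projection * view * model transformed on the client
out vec3 pass_Color;
void main(void)
{
    // gl_Position is a clip-space position; the fixed-function stages divide by w
    // and map the result to the viewport, so the value is passed through untouched.
    gl_Position = in_Position;
    pass_Color = vec3(1.0, 1.0, 1.0);
}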
I have a set of 3D points and I want to render each of these points as a rectangle (for simplicity). I want these rectangles to behave like 3D objects in the sense that they maintain their aspect ratio with respect to the camera. Basically, I want them to do something like this:
Here is what I do: in the vertex shader I basically do nothing and just pass the vertex down the pipeline:
gl_Position = vec4(vtx_position, 1.0);
In the geometry shader I try to generate these rectangles by transforming the input vertex into modelview (view) space, generating four output vertices with the same offsets from it, and emitting each one after multiplying it by the projection matrix:
// layout declarations and main() assumed from the description (point input, one quad out);
// the original excerpt omitted them:
layout ( points ) in;
layout ( triangle_strip, max_vertices = 4 ) out;
uniform mat4 MV;
uniform mat4 PROJ;
uniform float size;
void main()
{
vec4 position = MV * gl_in[0].gl_Position;
gl_Position = position;
gl_Position.xy += vec2(-size, -size);
gl_Position = PROJ * gl_Position;
EmitVertex();
gl_Position = position;
gl_Position.xy += vec2(-size, size);
gl_Position = PROJ * gl_Position;
EmitVertex();
gl_Position = position;
gl_Position.xy += vec2(size, -size);
gl_Position = PROJ * gl_Position;
EmitVertex();
gl_Position = position;
gl_Position.xy += vec2(size, size);
gl_Position = PROJ * gl_Position;
EmitVertex();
EndPrimitive();
}
Finally, in the fragment shader I just fill them with color. However, on output I get something like this:
While each rectangle is positioned correctly, their sizes are off. What did I do wrong? What should be done to achieve a result like in the first picture?
As it turned out, while I was searching for the problem in the shaders, I was also multiplying the size uniform by the aspect ratio in the C++ code.
I'm a new OpenGL programmer. I am plotting a height map composed of thousands of triangles in a 3D graph format. They are scaled so that they are plotted between -1 and +1 on all three axes. I am able to zoom in on the X axis only, and I am able to translate along the X axis as well by applying the appropriate scale and translation matrices. This effectively allows me to zoom right into the data and move it in the X direction as I choose.
The problem is, once I zoom, the data in the X direction now extends outside the -1 to +1 region that forms the boundary of the graph. I want this data not to be shown.
How is this done in modern OpenGL?
Thank you
Edit:
The matrices are as follows:
plottingProgram["projection_matrix"].SetValue(Matrix4.CreatePerspectiveFieldOfView(0.45f, (float)width / height, 0.1f, 1000f));
plottingProgram["view_matrix"].SetValue(Matrix4.LookAt(new Vector3(0, 0, 10), Vector3.Zero, new Vector3(0, 1, 0)));
and the vertex shader is
public static string VertexShader = @"
#version 130
in vec3 vertexPosition;
out vec2 textureCoordinate;
uniform mat4 projection_matrix;
uniform mat4 view_matrix;
uniform mat4 model_matrix;
void main(void)
{
textureCoordinate = vertexPosition.xy;
gl_Position = projection_matrix * view_matrix * model_matrix * vec4(vertexPosition, 1);
}
";
Here is a link to the graph:
http://va2fsq.com/wp-content/uploads/graph.jpg
Thanks
I solved this. After trying many things like glScissor, I happened upon gl_ClipDistance. My initial try placed uniforms in the shader which were set to:
in vec3 vertexPosition;
uniform vec4 plane0 = vec4(-1, 0, 0, 1);
uniform vec4 plane1 = vec4(1, 0, 0, 1);
void main(void)
{
gl_ClipDistance[0] = dot(vec4(vertexPosition, 1), plane0);
gl_ClipDistance[1] = dot(vec4(vertexPosition, 1), plane1);
Now the problem with this is that the clip boundaries are effectively scaled and translated along with the data by any model-matrix scaling or transformation, because the test is applied to the untransformed vertex positions. So that wouldn't work as-is. The solution is to set the clip vectors from outside the vertex shader and apply the inverse of the scaling to them, as follows:
Vector4 clip1 = new Vector4(-1, 0, 0, 1.0f / scale);
Vector4 clip2 = new Vector4(1, 0, 0, 1.0f / scale);
plottingProgram["model_matrix"].SetValue(Matrix4.CreateScaling(new Vector3(scale,1,1)) * Matrix4.CreateTranslation(new Vector3(0,0,0)) *Matrix4.CreateRotationY(yangle) * Matrix4.CreateRotationX(xangle));
plottingProgram["plane0"].SetValue(clip1);
plottingProgram["plane1"].SetValue(clip2);
and the complete vertex shader is given by
public static string VertexShader = @"
#version 130
in vec3 vertexPosition;
out vec2 textureCoordinate;
uniform mat4 projection_matrix;
uniform mat4 view_matrix;
uniform mat4 model_matrix;
uniform vec4 plane0;
uniform vec4 plane1;
void main(void)
{
textureCoordinate = vertexPosition.xy;
gl_ClipDistance[0] = dot(vec4(vertexPosition,1), plane0);
gl_ClipDistance[1] = dot(vec4(vertexPosition,1), plane1);
gl_Position = projection_matrix * view_matrix * model_matrix * vec4(vertexPosition, 1);
}
";
You can also translate in the same way.
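For example, if the model transform pans the data by tx along X after the scaling (i.e. graph-space x = scale * x + tx; adjust the signs if your matrix order differs), the pan can be folded into the plane constants the same way the scale is. A sketch, with tx being a hypothetical pan amount:
float tx = 0.25f; // hypothetical pan along X, in graph units
// keep scale * x + tx inside [-1, +1]  =>  x <= (1 - tx) / scale  and  x >= (-1 - tx) / scale
Vector4 clip1 = new Vector4(-1, 0, 0, (1.0f - tx) / scale);
Vector4 clip2 = new Vector4(1, 0, 0, (1.0f + tx) / scale);
plottingProgram["plane0"].SetValue(clip1);
plottingProgram["plane1"].SetValue(clip2);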
I'm working on an OpenGL application using the Qt 5 GUI framework. However, I'm not an expert in OpenGL and I'm facing a couple of issues when trying to simulate directional light. I'm using almost the same algorithm I used in a WebGL application, which works just fine.
The application is used to render multiple adjacent cells of a large gridblock (each of which is represented by 8 independent vertices), meaning that some vertices of the whole gridblock are duplicated in the VBO. The normals are calculated per face in the geometry shader, as shown below in the code.
QOpenGLWidget paintGL() body.
void OpenGLWidget::paintGL()
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);
glEnable(GL_CULL_FACE);
m_camera = camera.toMatrix();
m_world.setToIdentity();
m_program->bind();
m_program->setUniformValue(m_projMatrixLoc, m_proj);
m_program->setUniformValue(m_mvMatrixLoc, m_camera * m_world);
QMatrix3x3 normalMatrix = (m_camera * m_world).normalMatrix();
m_program->setUniformValue(m_normalMatrixLoc, normalMatrix);
QVector3D lightDirection = QVector3D(1,1,1);
lightDirection.normalize();
QVector3D directionalColor = QVector3D(1,1,1);
QVector3D ambientLight = QVector3D(0.2,0.2,0.2);
m_program->setUniformValue(m_lightDirectionLoc, lightDirection);
m_program->setUniformValue(m_directionalColorLoc, directionalColor);
m_program->setUniformValue(m_ambientColorLoc, ambientLight);
geometries->drawGeometry(m_program);
m_program->release();
}
Vertex Shader
#version 330
layout(location = 0) in vec4 vertex;
uniform mat4 projMatrix;
uniform mat4 mvMatrix;
void main()
{
gl_Position = projMatrix * mvMatrix * vertex;
}
Geometry Shader
#version 330
layout ( triangles ) in;
layout ( triangle_strip, max_vertices = 3 ) out;
out vec3 transformedNormal;
uniform mat3 normalMatrix;
void main()
{
vec3 A = gl_in[2].gl_Position.xyz - gl_in[0].gl_Position.xyz;
vec3 B = gl_in[1].gl_Position.xyz - gl_in[0].gl_Position.xyz;
gl_Position = gl_in[0].gl_Position;
transformedNormal = normalMatrix * normalize(cross(A,B));
EmitVertex();
gl_Position = gl_in[1].gl_Position;
transformedNormal = normalMatrix * normalize(cross(A,B));
EmitVertex();
gl_Position = gl_in[2].gl_Position;
transformedNormal = normalMatrix * normalize(cross(A,B));
EmitVertex();
EndPrimitive();
}
Fragment Shader
#version 330
in vec3 transformedNormal;
out vec4 fColor;
uniform vec3 lightDirection;
uniform vec3 ambientColor;
uniform vec3 directionalColor;
void main()
{
highp float directionalLightWeighting = max(dot(transformedNormal, lightDirection), 0.0);
vec3 vLightWeighting = ambientColor + directionalColor * directionalLightWeighting;
highp vec3 color = vec3(1, 1, 0.0);
fColor = vec4(color*vLightWeighting, 1.0);
}
The 1st issue is that the lighting on the faces seems to change whenever the camera angle changes (the camera location doesn't affect it, only the angle). You can see this behavior in the following snapshot. My guess is that I'm doing something wrong when calculating the normal matrix, but I can't figure out what it is.
The 2nd issue (the one causing me headaches) is that whenever the camera is moved, the edges of the cells show blocky, jagged lines that flicker as the camera moves around. This effect gets really nasty when many cells are clustered together.
The model used in the snapshot is just a sample slab of 10 cells, to better illustrate the faulty effects. The actual models (gridblocks) contain up to 200K cells stacked together.
EDIT: 2nd issue solution.
I was using znear/zfar values of 0.01f and 50000.0f respectively; when I changed znear to 1.0f, the effect disappeared. According to the OpenGL Wiki, this is caused by a zNear clipping plane value that is too close to 0.0: as the zNear clipping plane is set increasingly closer to 0.0, the effective precision of the depth buffer decreases dramatically.
EDIT2: I tried debug-drawing the normals as suggested in the comments, and I quickly realized that I probably shouldn't calculate them based on gl_Position (i.e. after the MVP matrix multiplication in the VS); instead I should use the original vertex locations, so I modified the shaders as follows:
Vertex Shader (UPDATED)
#version 330
layout(location = 0) in vec4 vertex;
out vec3 vert;
uniform mat4 projMatrix;
uniform mat4 mvMatrix;
void main()
{
vert = vertex.xyz;
gl_Position = projMatrix * mvMatrix * vertex;
}
Geometry Shader (UPDATED)
#version 330
layout ( triangles ) in;
layout ( triangle_strip, max_vertices = 3 ) out;
in vec3 vert [];
out vec3 transformedNormal;
uniform mat3 normalMatrix;
void main()
{
vec3 A = vert[2].xyz - vert[0].xyz;
vec3 B = vert[1].xyz - vert[0].xyz;
gl_Position = gl_in[0].gl_Position;
transformedNormal = normalize(normalMatrix * normalize(cross(A,B)));
EmitVertex();
gl_Position = gl_in[1].gl_Position;
transformedNormal = normalize(normalMatrix * normalize(cross(A,B)));
EmitVertex();
gl_Position = gl_in[2].gl_Position;
transformedNormal = normalize(normalMatrix * normalize(cross(A,B)));
EmitVertex();
EndPrimitive();
}
But even after this modification the normals of the surface still change with the camera angle, as shown below in the screenshot. I don't know if the normal calculation is wrong, the normal matrix calculation is wrong, or maybe both...
EDIT3: 1st issue solution: changing the normal calculation in the GS from
transformedNormal = normalize(normalMatrix * normalize(cross(A,B)));
to
transformedNormal = normalize(cross(A,B));
seems to solve the problem. Omitting the normalMatrix from the calculation fixed the issue, and the normals no longer change with the viewing angle.
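The likely explanation is that lightDirection is uploaded as a world/model-space vector, so the normals have to stay in that same space for the dot product in the fragment shader to make sense; normalMatrix (built from camera * world) moves the normals into eye space while the light direction stays put, which is why the lighting followed the camera. If eye-space normals are ever needed again (e.g. for specular terms), a sketch of the alternative is to transform the light direction into eye space as well (the viewMatrix uniform here is an assumed name for the same camera * world matrix the normal matrix is derived from):
#version 330
// fragment shader sketch: do the lighting with both vectors in eye space
uniform mat4 viewMatrix;      // assumed: the camera * world matrix from the client code
uniform vec3 lightDirection;  // world-space direction, as before
in vec3 transformedNormal;    // eye-space normal (normalMatrix applied in the GS)
out vec4 fColor;
void main()
{
    // w = 0.0 so only the rotational part of the matrix affects the direction
    vec3 eyeLightDir = normalize((viewMatrix * vec4(lightDirection, 0.0)).xyz);
    float weight = max(dot(normalize(transformedNormal), eyeLightDir), 0.0);
    fColor = vec4(vec3(1.0, 1.0, 0.0) * (vec3(0.2) + vec3(1.0) * weight), 1.0);
}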
If I missed any important/relevant information, please notify me in a comment.
Depth buffer precision
The depth buffer is usually stored as a 16- or 24-bit buffer. It is a hardware implementation of a float normalized to a specific range, so there are very few bits for the mantissa/exponent in comparison to a standard float.
If I oversimplify things and assume integer values instead of floats, then a 16-bit buffer gives you 2^16 values. If you have znear = 0.1 and zfar = 50000.0, you have only 65536 values over the full range. And since the depth values are nonlinear, you get higher accuracy near the znear plane and much, much lower accuracy near the zfar plane, so the depth values jump in bigger and bigger steps, causing accuracy problems wherever two polygons are close to each other.
I empirically arrived at this rule of thumb for setting the planes in my views:
(zfar - znear) / desired_accuracy_step > 0.3 * (2^n)
where n is the depth-buffer bit width and desired_accuracy_step is the resolution I want along the Z axis. Sometimes I have seen it written with znear in place of (zfar - znear).
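To put numbers on that nonlinearity, here is a small stand-alone C++ sketch (my own illustration, not part of the rule above) that estimates the size of one depth-buffer step at a given eye-space distance for a standard perspective projection and an n-bit fixed-point depth buffer; it makes it obvious why znear = 0.01 with zfar = 50000 behaves so much worse than znear = 1.0:
#include <cstdio>
#include <cmath>

// Window-space depth for a perspective projection is d = (1/znear - 1/z) / (1/znear - 1/zfar),
// so one quantization step of size 1/(2^bits - 1) corresponds to a world-space step of roughly
// dz = z^2 * (1/znear - 1/zfar) / (2^bits - 1) at eye distance z.
double depthStep(double z, double znear, double zfar, int bits)
{
    double steps = std::pow(2.0, bits) - 1.0;
    return z * z * (1.0 / znear - 1.0 / zfar) / steps;
}

int main()
{
    const double distances[] = { 10.0, 100.0, 1000.0 };
    for (double z : distances)
    {
        std::printf("z = %7.1f   step(znear = 0.01) = %.4f   step(znear = 1.0) = %.6f\n",
                    z, depthStep(z, 0.01, 50000.0, 24), depthStep(z, 1.0, 50000.0, 24));
    }
    return 0;
}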
I've got this vertex shader:
#version 400
layout(location=0) in vec4 vertPosition;
layout(location=1) in vec2 vertUV;
layout(location=2) in vec3 vertNormal;
uniform mat4 MVP;
out vec3 fragPosition;
out vec2 fragUV;
out vec3 fragNormal;
void main(void){
gl_Position = MVP * vertPosition;
fragPosition = (MVP * vertPosition).rgb;
fragUV = vertUV;
fragNormal = (MVP * vec4(vertNormal,0)).rgb;
}
As soon as I'm multiplying vertPosition by MVP the object to render disappears, even if MVP is an identity matrix.
Here's the code where I'm setting MVP:
mat4 model = translate(mat4(), vec3(0,0,0));
mat4 view = translate(mat4(), vec3(0,0,-10));
mat4 proj = perspective(90.0f, (float)1280/720, 0.001f, 1000.0f);
mat4 MVP = model * view * proj;
GLint uniform_loc = glGetUniformLocation(this->_shaderID, "MVP");
glUniformMatrix4fv(uniform_loc, 1, GL_FALSE, value_ptr(MVP));
I tried different multiplication orders and multiplying in the shader instead.
You've got the order in which you multiply MVP reversed. OpenGL matrix multiplication is column-major and right-associative, hence it should be
mat4 MVP = proj * view * model;
A near clipping plane distance of 0.001 is far too small. Essentially you're compressing all of the depth-buffer resolution into a very small range. Make the near plane as large as possible.
I don't get why you're selecting .rgb for the fragPosition and fragNormal.
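For reference, the more common pattern (a sketch only; the modelView and normalMatrix uniforms are hypothetical here, the posted shader does not have them) is to keep the interpolated position in view space via the .xyz swizzle and to transform the normal with a normal matrix rather than the full MVP:
// hypothetical uniforms, not in the original shader:
uniform mat4 modelView;       // view * model
uniform mat3 normalMatrix;    // transpose(inverse(mat3(modelView)))

gl_Position  = MVP * vertPosition;                      // clip space, for rasterization only
fragPosition = (modelView * vertPosition).xyz;          // view-space position for lighting
fragUV       = vertUV;
fragNormal   = normalize(normalMatrix * vertNormal);    // view-space normal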
From the look of your example you have forgotten to set the shader program before setting the uniform. Try using glUseProgram(this->program); (depending on where you've stored your program) before setting the matrix!
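Putting those points together, the client-side upload could look roughly like this (a sketch; this->_shaderID is the program handle from the question, and recent GLM versions expect the field of view in radians):
// needs <glm/gtc/matrix_transform.hpp> and <glm/gtc/type_ptr.hpp>
mat4 model = translate(mat4(1.0f), vec3(0, 0, 0));
mat4 view  = translate(mat4(1.0f), vec3(0, 0, -10));
mat4 proj  = perspective(radians(90.0f), (float)1280 / 720, 0.1f, 1000.0f);  // larger near plane
mat4 MVP   = proj * view * model;                       // projection * view * model

glUseProgram(this->_shaderID);                          // make the program current first
GLint uniform_loc = glGetUniformLocation(this->_shaderID, "MVP");
glUniformMatrix4fv(uniform_loc, 1, GL_FALSE, value_ptr(MVP));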
I'm working on an implementation of normal mapping, calculating the tangent vectors via the ASSIMP library.
The normal mapping seems to work perfectly on objects that have a model matrix close to the identity matrix. But as soon as I start translating and scaling, my lighting seems off. As you can see in the picture, the normal mapping works perfectly on the container cube, but the lighting fails on the large floor (the direction of the specular light should be towards the player, not towards the container).
I get the feeling it somehow has to do with the position of the light (currently traversing from x = -10 to x = 10 over time) not being properly included in the calculations once I start changing the model matrix (via translations/scaling). I'm posting all the relevant code and hope you can spot something I'm missing, since I've been staring at my code for days.
Vertex shader
#version 330
layout(location = 0) in vec3 position;
layout(location = 1) in vec3 normal;
layout(location = 2) in vec3 tangent;
layout(location = 3) in vec3 color;
layout(location = 4) in vec2 texCoord;
// fragment pass through
out vec3 Position;
out vec3 Normal;
out vec3 Tangent;
out vec3 Color;
out vec2 TexCoord;
out vec3 TangentSurface2Light;
out vec3 TangentSurface2View;
uniform vec3 lightPos;
// vertex transformation
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
void main()
{
mat3 normalMatrix = transpose(mat3(inverse(view * model)));
Position = vec3((view * model) * vec4(position, 1.0));
Normal = normalMatrix * normal;
Tangent = tangent;
Color = color;
TexCoord = texCoord;
gl_Position = projection * view * model * vec4(position, 1.0);
vec3 light = vec3(view * vec4(lightPos, 1.0));
vec3 n = normalize(normalMatrix * normal);
vec3 t = normalize(normalMatrix * tangent);
vec3 b = cross(n, t);
mat3 mat = mat3(t.x, b.x ,n.x, t.y, b.y ,n.y, t.z, b.z ,n.z);
vec3 vector = normalize(light - Position);
TangentSurface2Light = mat * vector;
vector = normalize(-Position);
TangentSurface2View = mat * vector;
}
Fragment shader
#version 330
in vec3 Position;
in vec3 Normal;
in vec3 Tangent;
in vec3 Color;
in vec2 TexCoord;
in vec3 TangentSurface2Light;
in vec3 TangentSurface2View;
out vec4 outColor;
uniform vec3 lightPos;
uniform mat4 view;
uniform sampler2D texture0;
uniform sampler2D texture_normal; // normal
uniform float repeatFactor = 1;
void main()
{
vec4 texColor = texture(texture0, TexCoord * repeatFactor);
vec3 light = vec3(view * vec4(lightPos, 1.0));
float dist = length(light - Position);
float att = 1.0 / (1.0 + 0.01 * dist + 0.001 * dist * dist);
// Ambient
vec4 ambient = vec4(0.2);
// Diffuse
vec3 surface2light = normalize(TangentSurface2Light);
vec3 norm = normalize(texture(texture_normal, TexCoord * repeatFactor).xyz * 2.0 - 1.0);
float contribution = max(dot(norm, surface2light), 0.0);
vec4 diffuse = contribution * vec4(0.8);
// Specular
vec3 surf2view = normalize(TangentSurface2View);
vec3 reflection = reflect(-surface2light, norm); // reflection vector
float specContribution = pow(max(dot(surf2view, reflection), 0.0), 32);
vec4 specular = vec4(0.6) * specContribution;
outColor = (ambient + (diffuse * att)+ (specular * pow(att, 3))) * texColor;
}
OpenGL Drawing Code
void Render()
{
...
glm::mat4 view, projection; // Model will be done via MatrixStack
view = glm::lookAt(position, position + direction, up); // cam pos, look at (eye pos), up vec
projection = glm::perspective(45.0f, (float)width/(float)height, 0.1f, 1000.0f);
glUniformMatrix4fv(glGetUniformLocation(basicShader.shaderProgram, "view"), 1, GL_FALSE, glm::value_ptr(view));
glUniformMatrix4fv(glGetUniformLocation(basicShader.shaderProgram, "projection"), 1, GL_FALSE, glm::value_ptr(projection));
// Lighting
lightPos.x = 0.0 + sin(time / 125) * 10;
glUniform3f(glGetUniformLocation(basicShader.shaderProgram, "lightPos"), lightPos.x, lightPos.y, lightPos.z);
// Objects (use bump mapping on this cube)
bumpShader.Use();
glUniformMatrix4fv(glGetUniformLocation(bumpShader.shaderProgram, "view"), 1, GL_FALSE, glm::value_ptr(view));
glUniformMatrix4fv(glGetUniformLocation(bumpShader.shaderProgram, "projection"), 1, GL_FALSE, glm::value_ptr(projection));
glUniform3f(glGetUniformLocation(bumpShader.shaderProgram, "lightPos"), lightPos.x, lightPos.y, lightPos.z);
MatrixStack::LoadIdentity();
MatrixStack::Scale(2);
MatrixStack::ToShader(glGetUniformLocation(bumpShader.shaderProgram, "model"));
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, resources.GetTexture("container"));
glUniform1i(glGetUniformLocation(bumpShader.shaderProgram, "img"), 0);
glActiveTexture(GL_TEXTURE1); // Normal map
glBindTexture(GL_TEXTURE_2D, resources.GetTexture("container_normal"));
glUniform1i(glGetUniformLocation(bumpShader.shaderProgram, "normalMap"), 1);
glUniform1f(glGetUniformLocation(bumpShader.shaderProgram, "repeatFactor"), 1);
cubeNormal.Draw();
MatrixStack::LoadIdentity();
MatrixStack::Translate(glm::vec3(0.0f, -22.0f, 0.0f));
MatrixStack::Scale(glm::vec3(200.0f, 20.0f, 200.0f));
MatrixStack::ToShader(glGetUniformLocation(bumpShader.shaderProgram, "model"));
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, resources.GetTexture("floor"));
glActiveTexture(GL_TEXTURE1); // Normal map
glBindTexture(GL_TEXTURE_2D, resources.GetTexture("floor_normal"));
glUniform1f(glGetUniformLocation(bumpShader.shaderProgram, "repeatFactor"), 100);
cubeNormal.Draw();
MatrixStack::LoadIdentity();
glActiveTexture(GL_TEXTURE0);
...
}
EDIT
I now load my objects using the ASSIMP library with the 'aiProcess_CalcTangentSpace' flag enabled and changed my shaders accordingly to adapt to the new changes. Since ASSIMP now automatically calculates the correct tangent vectors, I should have valid tangent vectors and my problem should be solved (as noted by Nicol Bolas), but I still have the same issue with the specular lighting acting strangely and the diffuse lighting not really showing up. I guess there is still something else that is not working correctly. I have unmarked your answer as the correct answer, Nicol Bolas (for now), and updated my code accordingly, since there is still something I'm missing.
It probably has something to do with translation. As soon as I add a translation (-22.0f in the y direction) to the model matrix, the lighting reacts strangely. As long as the floor (which is actually a cube) has no translation, the lighting looks fine.
calculating the tangent vectors in the vertex shader
Well there's your problem. That's not possible for an arbitrary surface.
The tangent and bitangent are not arbitrary vectors that are perpendicular to one another. They are model-space direction vectors that point in the direction of the texture coordinates. The tangent points in the direction of the S texture coordinate, and the bitangent points in the direction of the T texture coordinate (or U and V for the tex coords, if you prefer).
This effectively computes the orientation of the texture relative to each vertex on the surface. You need this orientation, because the way the texture is mapped to the surface matters when you want to make sense of a tangent-space vector.
Remember: tangent-space is the space perpendicular to a surface. But you need to know how that surface is mapped to the object in order to know where "up" is, for example. Take a square surface. You could map a texture so that the +Y part of the square is oriented along the +T direction of the texture. Or it could be along the +X of the square. You could even map it so that the texture is distorted, or rotated at an arbitrary angle.
The tangent and bitangent vectors are intended to correct for this mapping. They point in the S and T directions in model space. So, combined with the normal, they form a transformation matrix to transform from tangent-space into whatever space the 3 vectors are in (you generally transform the NBT to camera space or whatever space you use for lighting before using them).
You cannot compute them by just taking the normal and crossing it with some arbitrary vector. That produces a perpendicular normal, but not the right one.
In order to correctly compute the tangent/bitangent, you need access to more than one vertex. You need to be able to see how the texture coordinates change over the surface of the mesh, which is how you compute the S and T directions relative to the mesh.
Vertex shaders cannot access more than one vertex. Geometry shaders can't (generally) access enough vertices to do this either. Compute the tangent/bitangent off-line on the CPU.
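For reference, the standard per-triangle computation (done offline on the CPU, typically then averaged over the triangles sharing each vertex and re-orthogonalized against the normal) looks roughly like this; this is the textbook formula rather than code from the question, and the glm types are just an assumption:
#include <glm/glm.hpp>

// Per-triangle tangent/bitangent from positions p0..p2 and texture coordinates uv0..uv2.
// These point along the S and T directions of the texture in model space.
void computeTangentBasis(const glm::vec3& p0, const glm::vec3& p1, const glm::vec3& p2,
                         const glm::vec2& uv0, const glm::vec2& uv1, const glm::vec2& uv2,
                         glm::vec3& tangent, glm::vec3& bitangent)
{
    glm::vec3 edge1 = p1 - p0;
    glm::vec3 edge2 = p2 - p0;
    glm::vec2 dUV1  = uv1 - uv0;
    glm::vec2 dUV2  = uv2 - uv0;

    float r = 1.0f / (dUV1.x * dUV2.y - dUV1.y * dUV2.x);   // degenerate UVs need a guard
    tangent   = glm::normalize((edge1 * dUV2.y - edge2 * dUV1.y) * r);
    bitangent = glm::normalize((edge2 * dUV1.x - edge1 * dUV2.x) * r);
}
This is essentially what ASSIMP's aiProcess_CalcTangentSpace flag does for you.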
mat3 mat = mat3(t.x, b.x ,n.x, t.y, b.y ,n.y, t.z, b.z ,n.z);
Is wrong. In order to use the TBN matrix correctly, you must transpose it, like so:
mat3 mat = transpose(mat3(t.x, b.x ,n.x, t.y, b.y ,n.y, t.z, b.z ,n.z));
then use it to transform your light and view vectors into tangent space. Alternatively (and less efficiently), pass the untransposed TBN matrix to the fragment shader, and use it to transform the sampled normal into view space. It's an easy thing to miss, but very important. See http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-13-normal-mapping/ for more info.
On a side note, a minor optimisation you can make for your vertex shader is to calculate the normal matrix on the CPU, once per mesh, since it is the same for all vertices of the mesh; this avoids redundant per-vertex work.
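That could look roughly like this on the client side (a sketch with glm, assuming the current model matrix is available as model inside Render(); the vertex shader would then declare uniform mat3 normalMatrix and use it in place of the inline transpose(mat3(inverse(view * model))) computation):
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

// Inside Render(), once per mesh, instead of recomputing the normal matrix per vertex:
glm::mat3 normalMat = glm::transpose(glm::inverse(glm::mat3(view * model)));
glUniformMatrix3fv(glGetUniformLocation(bumpShader.shaderProgram, "normalMatrix"),
                   1, GL_FALSE, glm::value_ptr(normalMat));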