I'm trying to rotate a bunch of objects on their x axis.
This is how I calculate an object's transform:
glm::mat4 GameObject::getTransform(float angle) {
    glm::mat4 model = glm::mat4(1.0f);
    model = glm::translate(model, position);
    model = glm::rotate(model, angle, glm::vec3(1.0f, 0.0f, 0.0f));
    model = glm::scale(model, scaleValue);
    return model;
}
I've tried putting the translate, rotate and scale calls in different orders, to no avail; it only produces strange behaviour.
This is how I iterate over objects and draw them:
for (auto row : objectRows) {
    for (auto object : row) {
        glm::mat4 model = object->getTransform(glfwGetTime());
        glm::mat4 mvp = projection * view * model;
        mainShader.setMat4("model", model);
        mainShader.setMat4("mvp", mvp);
        mainShader.setVec3("objectColour", object->colour);
        object->mesh.draw(mainShader);
    }
}
The vertex shader:
#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 aNormal;
out vec3 fragPos;
out vec3 normal;
uniform mat4 model;
uniform mat4 mvp;
void main()
{
    fragPos = vec3(model * vec4(aPos, 1.0));
    normal = mat3(transpose(inverse(model))) * aNormal;
    gl_Position = mvp * vec4(fragPos, 1.0f);
}
And the result:
As you can see, the objects at the top rotate only around themselves, but the lower down the objects are, the more they seem to rotate around what I think is the world origin point.
I've read many similar-looking posts explaining the order of matrix multiplication, but nothing seems to help, and I can't help thinking it is something stupidly simple that I'm overlooking.
Turns out the problem was in the vertex shader.
void main()
{
    fragPos = vec3(model * vec4(aPos, 1.0));
    normal = mat3(transpose(inverse(model))) * aNormal;
    gl_Position = mvp * vec4(fragPos, 1.0f);
}
I was accidentally multiplying by the model matrix twice. fragPos is the result of multiplying model with the vertex position, and two lines below I multiply mvp with fragPos, so the effective calculation is projection * view * model * model.
To fix this I split mvp up, set each matrix as its own uniform in the shader, and changed the line gl_Position = mvp * vec4(fragPos, 1.0f); to gl_Position = projection * view * vec4(fragPos, 1.0);
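For reference, the corrected vertex shader then looks roughly like this (a sketch based on the description above; the only guaranteed change is that mvp is replaced by separate view and projection uniforms):

#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 aNormal;
out vec3 fragPos;
out vec3 normal;
uniform mat4 model;
uniform mat4 view;        // mvp split into separate uniforms
uniform mat4 projection;
void main()
{
    fragPos = vec3(model * vec4(aPos, 1.0));
    normal = mat3(transpose(inverse(model))) * aNormal;
    // model is now applied exactly once, via fragPos
    gl_Position = projection * view * vec4(fragPos, 1.0);
}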
Related
I want to scale a triangle with a model matrix. I have this code:
void Triangle::UpdateTransform()
{
    mView = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f));
    mModel = glm::scale(glm::mat4(1.0f), glm::vec3(2.f));
    mModel = glm::translate(glm::mat4(1.0f), mLocation);
    mMVP = mProj*mView*mModel;
}
With this code I get no result. But if I change the order of the scale and translation:
mView = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f));
mModel = glm::translate(glm::mat4(1.0f), mLocation);
mModel = glm::scale(glm::mat4(1.0f), glm::vec3(2.f));
mMVP = mProj*mView*mModel;
I get a very weird result (the triangle should be at the center).
I have no idea what is causing this; maybe it has something to do with the order of the operations.
I'd really appreciate some help.
My vertex shader:
#version 410 core
layout(location = 0) in vec4 position;
uniform mat4 u_MVP;
void main()
{
    gl_Position = u_MVP * position;
};
The 1st argument of glm::scale and glm::translate is the input matrix. These functions create a new scale/translation matrix and multiply the input matrix by it.
In both cases you pass the identity matrix (glm::mat4(1.0f)) as the input matrix, so each call throws away the previous result. You have to pass mModel as the input matrix instead, e.g.:
mModel = glm::translate(glm::mat4(1.0f), mLocation);
mModel = glm::scale(mModel, glm::vec3(2.f)); // <-- mModel instead of glm::mat4(1.0f)
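Putting that fix back into the method from the question might look like this (a sketch reusing the member names mView, mModel, mProj, mMVP and mLocation from the question):

void Triangle::UpdateTransform()
{
    mView = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f));
    mModel = glm::translate(glm::mat4(1.0f), mLocation); // start from the translation ...
    mModel = glm::scale(mModel, glm::vec3(2.f));         // ... then scale the already-translated matrix
    mMVP = mProj * mView * mModel;                       // vertices are scaled first, then translated
}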
I have a set of 3D points and I want to render each of these points as a rectangle (for simplicity). I want these rectangles to behave like 3D objects in the sense that they keep their aspect ratio with respect to the camera. Basically I want them to do something like this:
Here is what I do: in the vertex shader I basically do nothing and just pass the vertex down the pipeline:
gl_Position = vec4(vtx_position, 1.0);
In the geometry shader I try to generate these rectangles by transforming the input vertices into view space with MV, then generating 4 output vertices at fixed offsets from the input point and emitting them after multiplying them by the projection matrix:
uniform mat4 MV;
uniform mat4 PROJ;
uniform float size;
position = MV * gl_in[0].gl_Position;
gl_Position = position;
gl_Position.xy += vec2(-size, -size);
gl_Position = PROJ * gl_Position;
EmitVertex();
gl_Position = position;
gl_Position.xy += vec2(-size, size);
gl_Position = PROJ * gl_Position;
EmitVertex();
gl_Position = position;
gl_Position.xy += vec2(size, -size);
gl_Position = PROJ * gl_Position;
EmitVertex();
gl_Position = position;
gl_Position.xy += vec2(size, size);
gl_Position = PROJ * gl_Position;
EmitVertex();
Finally, in the fragment shader I just fill them with color. However, the output I get looks something like this:
While each rectangle is positioned correctly, their sizes are off. What did I do wrong? What should be done to achieve a result like in the first picture?
As it turned out, while I was searching for the problem in the shaders, I was also multiplying the size uniform by the aspect ratio in the C++ code.
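In other words, the quad half-size should be passed to the shader unmodified, since the PROJ matrix applied in the geometry shader already accounts for the aspect ratio. A hypothetical sketch of the corrected uniform upload (program and halfSize are placeholder names, not taken from the original code):

// Upload the half-extent as-is; do NOT pre-multiply by the aspect ratio here,
// because the projection matrix in the geometry shader already handles it.
glUniform1f(glGetUniformLocation(program, "size"), halfSize);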
So I am encountering this small problem where my camera is not fixed on the player correctly.
The blue sprite in the upper left corner is the player, but it is supposed to be in the center of the screen. All the other threads on this matter were using the fixed-function pipeline, while I use the VBO-based one.
My matrices are as followed:
Transform matrix:
glm::vec2 position = glm::vec2(x, y);
glm::vec2 size = glm::vec2(width, height);
this->transform = glm::translate(this->transform, glm::vec3(position, 0.0f));
this->transform = glm::scale(this->transform, glm::vec3(size, 1.0f));
Projection matrix:
glm::mat4 Screen::projection2D = glm::ortho(0.0f, (float)800, (float)600, 0.0f, -1.0f, 1.0f);
View matrix (where translation is the translation of the player):
Screen::view = glm::lookAt(translation, translation+glm::vec3(0,0,-1), glm::vec3(0,1,0));
And the vertex shader:
#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec2 aTexCoord;
uniform mat4 transform;
uniform mat4 projection;
uniform mat4 view;
out vec2 TexCoord;
void main()
{
    gl_Position = projection * view * transform * vec4(aPos.xy, 0.0, 1.0);
    TexCoord = aTexCoord;
}
So what is going wrong here? Is there something I did not understand about the way this works, or did I make a minor mistake somewhere?
So I found the answer myself XD.
The translation has to be centered by subtracting half of the screen width and height.
glm::vec3 cameraPos = glm::vec3(translation.x-Screen::width*0.5f, translation.y-Screen::height*0.5f, translation.z);
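Plugging that centered position into the original lookAt call would then look roughly like this (just the question's view-matrix line with cameraPos substituted for translation):

// Camera sits half a screen up and to the left of the player, so the player lands in the middle.
Screen::view = glm::lookAt(cameraPos, cameraPos + glm::vec3(0, 0, -1), glm::vec3(0, 1, 0));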
SOLUTION:
Thanks to Rabbid76: I needed to transform the Normal vector in the vertex shader with the model matrix. Updated vertex shader:
#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 aNormal;
out vec3 FragPos;
out vec3 Normal;
uniform mat4 projection;
uniform mat4 view;
uniform mat4 model;
uniform float scale;
void main()
{
    FragPos = vec3(model * vec4(aPos, 1.0));
    Normal = vec3(model * vec4(aNormal, 0.0));
    gl_Position = projection * view * vec4(FragPos, 1.0);
}
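One caveat, added here as a note rather than as part of the original solution: multiplying aNormal by the full model matrix (with w = 0) is only safe while the model matrix contains rotation, translation and uniform scale. If a non-uniform scale ever creeps in, the inverse-transpose form from the accepted answer below is the robust variant:

Normal = mat3(transpose(inverse(model))) * aNormal; // also handles non-uniform scale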
QUESTION
I am trying to correctly load a Collada (dae) file with Assimp, but the normals seem to come out wrong. I would like help figuring this out. I have a feeling it has to do with how I am handling the transformation matrix. As an example, here's a screenshot of the OpenGL application loading an obj file:
In the above screenshot, the light is positioned directly above the models at x=0 and z=0. The normals are displaying correctly. When I load a dae file, I get the following:
The light position seems to be coming from the -z side.
Here is the code I currently have to load the models.
Load the model file and call the processNode() method, passing an initial aiMatrix4x4() (identity) as the transformation:
void Model::loadModel(std::string filename)
{
    Assimp::Importer importer;
    const aiScene *scene = importer.ReadFile(filename, aiProcess_Triangulate | aiProcess_FlipUVs | aiProcess_CalcTangentSpace | aiProcess_GenBoundingBoxes);
    if (!scene || !scene->mRootNode) {
        std::cout << "ERROR::ASSIMP Could not load model: " << importer.GetErrorString() << std::endl;
    }
    else {
        this->directory = filename.substr(0, filename.find_last_of('/'));
        this->processNode(scene->mRootNode, scene, aiMatrix4x4());
    }
}
processNode() is a recursive method which primarily iterates over node->mMeshes; here I multiply in the node's transformation:
void Model::processNode(aiNode* node, const aiScene* scene, aiMatrix4x4 transformation)
{
    for (unsigned int i = 0; i < node->mNumMeshes; i++) {
        aiMesh* mesh = scene->mMeshes[node->mMeshes[i]];
        // only apply the transformation to meshes, not to entities such as lights or cameras.
        transformation *= node->mTransformation;
        this->meshes.push_back(processMesh(mesh, scene, transformation));
    }
    for (unsigned int i = 0; i < node->mNumChildren; i++)
    {
        processNode(node->mChildren[i], scene, transformation);
    }
}
processMesh() handles collecting all mesh data (vertices, indices etc)
Mesh Model::processMesh(aiMesh* mesh, const aiScene* scene, aiMatrix4x4 transformation)
{
    glm::vec3 extents;
    glm::vec3 origin;
    std::vector<Vertex> vertices = this->vertices(mesh, extents, origin, transformation);
    std::vector<unsigned int> indices = this->indices(mesh);
    std::vector<Texture> textures = this->textures(mesh, scene);
    return Mesh(
        vertices,
        indices,
        textures,
        extents,
        origin,
        mesh->mName
    );
}
Next the vertices() method is called to get all the vertices. It receives the transformation matrix, and here I multiply the vertices with the matrix (transformation * mesh->mVertices[i]). I have a strong feeling that I am not doing something right here and am missing something.
std::vector<Vertex> Model::vertices(aiMesh* mesh, glm::vec3& extents, glm::vec3& origin, aiMatrix4x4 transformation)
{
    std::vector<Vertex> vertices;
    for (unsigned int i = 0; i < mesh->mNumVertices; i++) {
        Vertex vertex;
        glm::vec3 vector3;
        aiVector3D v = transformation * mesh->mVertices[i];
        // Vertices
        vector3.x = v.x;
        vector3.y = v.y;
        vector3.z = v.z;
        vertex.position = vector3;
        // Normals
        if (mesh->mNormals) {
            vector3.x = mesh->mNormals[i].x;
            vector3.y = mesh->mNormals[i].y;
            vector3.z = mesh->mNormals[i].z;
            vertex.normal = vector3;
        }
        // Texture coordinates
        if (mesh->mTextureCoords[0]) {
            glm::vec2 vector2;
            vector2.x = mesh->mTextureCoords[0][i].x;
            vector2.y = mesh->mTextureCoords[0][i].y;
            vertex.texCoord = vector2;
        }
        else {
            vertex.texCoord = glm::vec2(0, 0);
        }
        if (mesh->mTangents) {
            vector3.x = mesh->mTangents[i].x;
            vector3.y = mesh->mTangents[i].y;
            vector3.z = mesh->mTangents[i].z;
            vertex.tangent = vector3;
        }
        // Bitangent
        if (mesh->mBitangents) {
            vector3.x = mesh->mBitangents[i].x;
            vector3.y = mesh->mBitangents[i].y;
            vector3.z = mesh->mBitangents[i].z;
            vertex.bitangent = vector3;
        }
        vertices.push_back(vertex);
    }
    glm::vec3 min = glm::vec3(mesh->mAABB.mMin.x, mesh->mAABB.mMin.y, mesh->mAABB.mMin.z);
    glm::vec3 max = glm::vec3(mesh->mAABB.mMax.x, mesh->mAABB.mMax.y, mesh->mAABB.mMax.z);
    extents = (max - min) * 0.5f;
    origin = glm::vec3((min.x + max.x) / 2.0f, (min.y + max.y) / 2.0f, (min.z + max.z) / 2.0f);
    printf("%f,%f,%f\n", origin.x, origin.y, origin.z);
    return vertices;
}
As an added note, if it is helpful, here is the fragment shader I am using on the model:
#version 330 core
out vec4 FragColor;
in vec3 Normal;
in vec3 FragPos;
uniform vec3 lightPos;
uniform vec3 viewPos;
vec3 lightColor = vec3(1, 1, 1);
vec3 objectColor = vec3(0.6, 0.6, 0.6);
uniform float shininess = 32.0f;
uniform vec3 material_specular = vec3(0.1f, 0.1f, 0.1f);
uniform vec3 light_specular = vec3(0.5f, 0.5f, 0.5f);
void main()
{
    // ambient
    float ambientStrength = 0.2;
    vec3 ambient = ambientStrength * lightColor;
    // diffuse
    vec3 norm = normalize(Normal);
    vec3 lightDir = normalize(lightPos - FragPos);
    float diff = max(dot(norm, lightDir), 0.0);
    vec3 diffuse = diff * lightColor;
    // specular
    vec3 viewDir = normalize(viewPos - FragPos);
    vec3 reflectDir = reflect(-lightDir, norm);
    float spec = pow(max(dot(viewDir, reflectDir), 0.0), shininess);
    vec3 specular = light_specular * (spec * material_specular);
    vec3 result = (ambient + diffuse + specular) * objectColor;
    FragColor = vec4(result, 1.0);
}
Here is the vertex shader:
#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 aNormal;
out vec3 FragPos;
out vec3 Normal;
uniform mat4 projection;
uniform mat4 view;
uniform mat4 model;
uniform float scale;
void main()
{
    FragPos = vec3(model * vec4(aPos, 1.0));
    Normal = aNormal;
    gl_Position = projection * view * vec4(FragPos, 1.0);
}
FragPos is a position in world space, because it is the vertex position transformed by the model matrix. lightPos and viewPos seem to be positions in world space, too.
So you have to transform the normal vector aNormal from model space to world space, too.
You have to transform the normal vector by the transposed inverse of the upper-left 3*3 of the 4*4 model matrix:
Normal = transpose(inverse(mat3(model))) * aNormal;
Possibly it is sufficient to transform by the upper-left 3*3 of the 4*4 model matrix
(see In which cases is the inverse matrix equal to the transpose?):
Normal = mat3(model) * aNormal;
See also:
Why is the transposed inverse of the model view matrix used to transform the normal vectors?
Why transforming normals with the transpose of the inverse of the modelview matrix?
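If evaluating the inverse per vertex in the shader is a concern, the same matrix can be computed once per draw call on the CPU with GLM and passed as an extra uniform. A minimal sketch, assuming a mat3 uniform named normalMatrix that is not part of the original shaders:

#include <glm/gtc/matrix_inverse.hpp>   // glm::inverseTranspose
#include <glm/gtc/type_ptr.hpp>         // glm::value_ptr

// C++ side: compute the normal matrix once per draw call.
glm::mat3 normalMatrix = glm::inverseTranspose(glm::mat3(model));
glUniformMatrix3fv(glGetUniformLocation(program, "normalMatrix"), 1, GL_FALSE, glm::value_ptr(normalMatrix));

// GLSL side then becomes: Normal = normalMatrix * aNormal;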
I have a simple vertex shader
#version 330 core
uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;
uniform mat4 modelMatrix;
in vec3 in_Position;
out vec3 pass_Color;
void main(void)
{
    //gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(in_Position, 1.0);
    gl_Position = vec4(in_Position, 1.0);
    pass_Color = vec3(1,1,1);
}
In my client code I have:
glm::vec4 vec1(-1,-1,0,1);//first
glm::vec4 vec2(0,1,0,1);//second
glm::vec4 vec3(1,-1,0,1);//third
glm::mat4 m = projectionMatrix * viewMatrix * modelMatrix;
//translate on client side
vec1 = m * vec1;
vec2 = m * vec2;
vec3 = m * vec3;
//first vertex
vertices[0] = vec1.x;
vertices[1] = vec1.y;
vertices[2] = vec1.z;
//second
vertices[3] = vec2.x;
vertices[4] = vec2.y;
vertices[5] = vec2.z;
//third
vertices[6] = vec3.x;
vertices[7] = vec3.y;
vertices[8] = vec3.z;
Now my question: if I use no matrix multiplication in the shader and none in the client code, this renders a nice triangle which stretches across the whole screen, so I take it the vertex shader maps the coordinates it is given to the screen in a coordinate system with x = -1..1 and y = -1..1.
If I do the matrix multiplication in the shader, everything works nicely. But if I comment out the code in the shader as shown and do it on the client, I get odd results. Shouldn't it yield the same result?
Have I got it wrong in thinking that the output of the vertex shader, gl_Position, is a 2D coordinate despite being a vec4?
Thanks for any help. I would really like to understand what exactly the output of the vertex shader is in terms of vertex position.
The problem is in your shader, as it accepts only 3 components of the position. It is OK to set the fourth coordinate to 1 (like you do) if the coordinate is not in projection space yet.
When you do the transformation on the client side, the results are correct 4-component homogeneous vectors. You just need to use them as-is in your vertex shader:
in vec4 in_Position;
...
gl_Position = in_Position;
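The client side has to match that, i.e. upload four floats per vertex instead of three. A sketch of what that could look like with the arrays from the question (the attribute index and stride are assumptions, since the buffer setup code was not shown):

// keep the w component of each transformed vertex
vertices[0] = vec1.x; vertices[1] = vec1.y; vertices[2]  = vec1.z; vertices[3]  = vec1.w;
vertices[4] = vec2.x; vertices[5] = vec2.y; vertices[6]  = vec2.z; vertices[7]  = vec2.w;
vertices[8] = vec3.x; vertices[9] = vec3.y; vertices[10] = vec3.z; vertices[11] = vec3.w;
// and describe the attribute as 4 floats per vertex
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 4 * sizeof(float), (void*)0);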