OpenGL object not scaling properly - c++

I want to scale a triangle with a model matrix. I have this code:
void Triangle::UpdateTransform()
{
    mView = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f));
    mModel = glm::scale(glm::mat4(1.0f), glm::vec3(2.f));
    mModel = glm::translate(glm::mat4(1.0f), mLocation);
    mMVP = mProj*mView*mModel;
}
With this code I get no results. But if I change the order of the scale and the translation:
mView = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f));
mModel = glm::translate(glm::mat4(1.0f), mLocation);
mModel = glm::scale(glm::mat4(1.0f), glm::vec3(2.f));
mMVP = mProj*mView*mModel;
I get a very weird result (the triangle should be at the center).
I have no idea what is causing this; maybe it has something to do with the order of the operations.
I'd really appreciate some help.
My vertex shader:
#version 410 core
layout(location = 0) in vec4 position;
uniform mat4 u_MVP;
void main()
{
    gl_Position = u_MVP * position;
};

The 1st argument of glm::scale and glm::translate is the input matrix. These functions define a scaling (respectively translation) matrix and return the input matrix multiplied by it.
In both cases you specify the identity matrix (glm::mat4(1.0f)) as the input matrix, so the second assignment simply overwrites the first. You have to pass mModel as the input matrix, e.g.:
mModel = glm::translate(glm::mat4(1.0f), mLocation);
mModel = glm::scale(mModel, glm::vec3(2.f)); // <-- mModel instead of glm::mat4(1.0f)
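For reference, a minimal sketch of the corrected UpdateTransform(), chaining the second call through mModel (member names taken from the question; translating first and then scaling keeps the triangle scaled around its own origin):

void Triangle::UpdateTransform()
{
    mView = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f));

    // Build the model matrix by chaining: start from the translation, then scale that matrix.
    mModel = glm::translate(glm::mat4(1.0f), mLocation);
    mModel = glm::scale(mModel, glm::vec3(2.f));

    mMVP = mProj * mView * mModel;
}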

Related

GLSL convert valid glm::mat4 matrix to nan matrix

I am following the tutorial http://www.mbsoftworks.sk/tutorials/opengl3/ and trying to compile the 10th example.
Everything works fine except the place where I send the (projection matrix * modelview matrix) product to the shader.
Here is the place where I send the matrix:
//...
// render.cpp
glm::mat4 projectionMatrix = *(oglControl->getProjectionMatrix());
glm::mat4 cam = glm::translate(mModelView, cCamera.vEye);
auto newM = projectionMatrix * cam;
spDirectionalLight.setUniform("projectionMatrixMulModelViewMatrix",&newM);
//...
//...
// setUniform implementation
void CShaderProgram::setUniform(string sName, glm::mat4* mMatrices, int iCount)
{
    int iLoc = glGetUniformLocation(uiProgram, sName.c_str());
    glUniformMatrix4fv(iLoc, iCount, FALSE, (GLfloat*)mMatrices);
}
//...
and at the end mMatrices contains the matrix shown in the attached screenshot.
Shader code
#version 330 core
uniform mat4 projectionMatrixMulModelViewMatrix;
uniform mat4 normalMatrix;
layout (location = 0) in vec3 inPosition;
layout (location = 1) in vec2 inCoord;
layout (location = 2) in vec3 inNormal;
out vec2 texCoord;
smooth out vec3 vNormal;
void main()
{
    gl_Position = projectionMatrixMulModelViewMatrix*vec4(inPosition, 1.0);
    texCoord = inCoord;
    vec4 vRes = normalMatrix*vec4(inNormal, 0.0);
    vNormal = vRes.xyz;
}
The result is a blank screen. The RenderDoc debugger tells me that gl_Position is completely NaN.
Renderdoc screenshot
When I send glm::mat4(1) I get a valid result.
Why does the shader get a NaN vector after the multiplication?
It seems like VS2017 tricked me. I hadn't initialized a matrix deep inside the code:
glm::mat4 mModelView = cCamera.look();
glm::mat4 CFlyingCamera::look()
{
    glm::mat4 result = glm::mat4(1.0f);
    result = glm::lookAt(vEye, vView, vUp);
    return result;
}
and here is my mistake:
void CFlyingCamera::update()
{
    // Change camera view direction
    rotateWithMouse();

    // Get view direction
    glm::vec3 vMove = vView-vEye;
    vMove = glm::normalize(vMove);
    vMove *= fSpeed;

    // Get normal to view direction vector
    glm::vec3 vStrafe = glm::cross(vView-vEye, vUp);
    vStrafe = glm::normalize(vStrafe);
    vStrafe *= fSpeed;

    int iMove = 0;

    ///// ERROR HERE, vMoveBy isn't initialized
    glm::vec3 vMoveBy;

    // Get vector of move
    if(Keys::key(iForw))vMoveBy += vMove*appMain.sof(1.0f);
    if(Keys::key(iBack))vMoveBy -= vMove*appMain.sof(1.0f);
    if(Keys::key(iLeft))vMoveBy -= vStrafe*appMain.sof(1.0f);
    if(Keys::key(iRight))vMoveBy += vStrafe*appMain.sof(1.0f);

    vEye += vMoveBy; vView += vMoveBy;
}
But the debugger showed me a valid matrix anyway. Do you know why that happened?
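For completeness, a minimal sketch of the fix: value-initialize the vector so the += / -= accumulation in update() starts from zero (recent GLM versions leave default-constructed vectors uninitialized unless GLM_FORCE_CTOR_INIT is defined):

glm::vec3 vMoveBy(0.0f); // all components explicitly zeroed, so the accumulation below is well-defined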

Loading a Collada (dae) model from Assimp shows incorrect normals

SOLUTION:
Thanks to Rabbid76, I needed to transform the normal vector in the vertex shader by the model matrix. Updated vertex shader:
#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 aNormal;
out vec3 FragPos;
out vec3 Normal;
uniform mat4 projection;
uniform mat4 view;
uniform mat4 model;
uniform float scale;
void main()
{
    FragPos = vec3(model * vec4(aPos, 1.0));
    Normal = vec3(model * vec4(aNormal, 0.0));
    gl_Position = projection * view * vec4(FragPos, 1.0);
}
QUESTION
I am trying to correctly load a collada (dae) file in Assimp, but the normals seem to come out wrong. I would like help with figuring this out. I have a feeling it is to do with how I am handling the transformation matrix. As an example, here's a screenshot of the OpenGL application loading an obj file:
In the above screenshot, the light is positioned directly above the models at x=0 and z=0. The normals are displaying correctly. When I load a dae file, I get the following:
The light position seems to be coming from the -z side.
Here is the code I currently have to load the models:
Load the model file and call the processNode() method, passing in an initial aiMatrix4x4():
void Model::loadModel(std::string filename)
{
    Assimp::Importer importer;
    const aiScene *scene = importer.ReadFile(filename, aiProcess_Triangulate | aiProcess_FlipUVs | aiProcess_CalcTangentSpace | aiProcess_GenBoundingBoxes);

    if (!scene || !scene->mRootNode) {
        std::cout << "ERROR::ASSIMP Could not load model: " << importer.GetErrorString() << std::endl;
    }
    else {
        this->directory = filename.substr(0, filename.find_last_of('/'));
        this->processNode(scene->mRootNode, scene, aiMatrix4x4());
    }
}
processNode() is a recursive method which primarily iterates over node->mMeshes; this is where I multiply the transformation.
void Model::processNode(aiNode* node, const aiScene* scene, aiMatrix4x4 transformation)
{
    for (unsigned int i = 0; i < node->mNumMeshes; i++) {
        aiMesh* mesh = scene->mMeshes[node->mMeshes[i]];
        // only apply transformation on meshes, not entities such as lights or cameras.
        transformation *= node->mTransformation;
        this->meshes.push_back(processMesh(mesh, scene, transformation));
    }
    for (unsigned int i = 0; i < node->mNumChildren; i++)
    {
        processNode(node->mChildren[i], scene, transformation);
    }
}
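As a side note (this is not part of the accepted answer below): node transforms are usually accumulated once per node rather than once per mesh, so that a node with several meshes does not get its own transform applied repeatedly, and so that nodes without meshes still pass their transform down to their children. A sketch of that traversal, assuming the rest of the pipeline stays the same:

void Model::processNode(aiNode* node, const aiScene* scene, aiMatrix4x4 transformation)
{
    // Accumulate the node's transform once; it then applies to all of this
    // node's meshes and is inherited by its children.
    transformation *= node->mTransformation;

    for (unsigned int i = 0; i < node->mNumMeshes; i++) {
        aiMesh* mesh = scene->mMeshes[node->mMeshes[i]];
        this->meshes.push_back(processMesh(mesh, scene, transformation));
    }

    for (unsigned int i = 0; i < node->mNumChildren; i++)
    {
        processNode(node->mChildren[i], scene, transformation);
    }
}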
processMesh() handles collecting all mesh data (vertices, indices etc)
Mesh Model::processMesh(aiMesh* mesh, const aiScene* scene, aiMatrix4x4 transformation)
{
    glm::vec3 extents;
    glm::vec3 origin;

    std::vector<Vertex> vertices = this->vertices(mesh, extents, origin, transformation);
    std::vector<unsigned int> indices = this->indices(mesh);
    std::vector<Texture> textures = this->textures(mesh, scene);

    return Mesh(
        vertices,
        indices,
        textures,
        extents,
        origin,
        mesh->mName
    );
}
Next the vertices() method is called to get all the vertices. The transformation matrix is passed along. Here, I multiply the vertices by the matrix (transformation * mesh->mVertices[i]). I have a strong feeling that I am not doing something right here, and that I am missing something.
std::vector<Vertex> Model::vertices(aiMesh* mesh, glm::vec3& extents, glm::vec3 &origin, aiMatrix4x4 transformation)
{
    std::vector<Vertex> vertices;

    for (unsigned int i = 0; i < mesh->mNumVertices; i++) {
        Vertex vertex;
        glm::vec3 vector3;

        aiVector3D v = transformation * mesh->mVertices[i];

        // Vertices
        vector3.x = v.x;
        vector3.y = v.y;
        vector3.z = v.z;
        vertex.position = vector3;

        // Normals
        if (mesh->mNormals) {
            vector3.x = mesh->mNormals[i].x;
            vector3.y = mesh->mNormals[i].y;
            vector3.z = mesh->mNormals[i].z;
            vertex.normal = vector3;
        }

        // Texture coordinates
        if (mesh->mTextureCoords[0]) {
            glm::vec2 vector2;
            vector2.x = mesh->mTextureCoords[0][i].x;
            vector2.y = mesh->mTextureCoords[0][i].y;
            vertex.texCoord = vector2;
        }
        else {
            vertex.texCoord = glm::vec2(0, 0);
        }

        if (mesh->mTangents) {
            vector3.x = mesh->mTangents[i].x;
            vector3.y = mesh->mTangents[i].y;
            vector3.z = mesh->mTangents[i].z;
            vertex.tangent = vector3;
        }

        // Bitangent
        if (mesh->mBitangents) {
            vector3.x = mesh->mBitangents[i].x;
            vector3.y = mesh->mBitangents[i].y;
            vector3.z = mesh->mBitangents[i].z;
            vertex.bitangent = vector3;
        }

        vertices.push_back(vertex);
    }

    glm::vec3 min = glm::vec3(mesh->mAABB.mMin.x, mesh->mAABB.mMin.y, mesh->mAABB.mMin.z);
    glm::vec3 max = glm::vec3(mesh->mAABB.mMax.x, mesh->mAABB.mMax.y, mesh->mAABB.mMax.z);

    extents = (max - min) * 0.5f;
    origin = glm::vec3((min.x + max.x) / 2.0f, (min.y + max.y) / 2.0f, (min.z + max.z) / 2.0f);
    printf("%f,%f,%f\n", origin.x, origin.y, origin.z);

    return vertices;
}
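As another aside (the accepted fix transforms the normals in the vertex shader instead): since the node transformation is baked into the positions at load time, the normals could hypothetically be baked the same way, using the inverse transpose of the transformation's upper-left 3x3. A sketch meant to sit inside the same per-vertex loop (the matrix setup would normally be hoisted out of the loop):

        // Hypothetical load-time variant: rotate the normal by the inverse transpose
        // of the node transformation's upper-left 3x3. Assimp's Inverse()/Transpose()
        // modify the matrix in place.
        aiMatrix3x3 normalMat(transformation);
        normalMat.Inverse().Transpose();

        if (mesh->mNormals) {
            aiVector3D n = normalMat * mesh->mNormals[i];
            vertex.normal = glm::normalize(glm::vec3(n.x, n.y, n.z));
        }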
As an added note, if it is helpful, here is the fragment shader I am using on the model:
#version 330 core
out vec4 FragColor;
in vec3 Normal;
in vec3 FragPos;
uniform vec3 lightPos;
uniform vec3 viewPos;
vec3 lightColor = vec3(1,1,1);
vec3 objectColor = vec3(0.6, 0.6, 0.6);
uniform float shininess = 32.0f;
uniform vec3 material_specular = vec3(0.1f, 0.1f, 0.1f);
uniform vec3 light_specular = vec3(0.5f, 0.5f, 0.5f);
void main()
{
    // ambient
    float ambientStrength = 0.2;
    vec3 ambient = ambientStrength * lightColor;

    // diffuse
    vec3 norm = normalize(Normal);
    vec3 lightDir = normalize(lightPos - FragPos);
    float diff = max(dot(norm, lightDir), 0.0);
    vec3 diffuse = diff * lightColor;

    // specular
    vec3 viewDir = normalize(viewPos - FragPos);
    vec3 reflectDir = reflect(-lightDir, norm);
    float spec = pow(max(dot(viewDir, reflectDir), 0.0), shininess);
    vec3 specular = light_specular * (spec * material_specular);

    vec3 result = (ambient + diffuse + specular) * objectColor;
    FragColor = vec4(result, 1.0);
}
Here is the vertex shader:
#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 aNormal;
out vec3 FragPos;
out vec3 Normal;
uniform mat4 projection;
uniform mat4 view;
uniform mat4 model;
uniform float scale;
void main()
{
    FragPos = vec3(model * vec4(aPos, 1.0));
    Normal = aNormal;
    gl_Position = projection * view * vec4(FragPos, 1.0);
}
FragPos is a position in world space, because it is the vertex position transformed by the model matrix. lightPos and viewPos seem to be positions in world space, too.
So you have to transform the normal vector aNormal from model space to world space, too.
You have to transform the normal vector by the inverse transpose of the upper-left 3*3 of the 4*4 model matrix:
Normal = transpose(inverse(mat3(model))) * aNormal;
Possibly it is sufficient to transform it by the upper-left 3*3 of the 4*4 model matrix alone; that works when the inverse of that 3*3 matrix equals its transpose, e.g. when the model matrix contains no non-uniform scaling:
(See In which cases is the inverse matrix equal to the transpose?)
Normal = mat3(model) * aNormal;
See also:
Why is the transposed inverse of the model view matrix used to transform the normal vectors?
Why transforming normals with the transpose of the inverse of the modelview matrix?
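If computing inverse(model) per vertex in the shader is a concern, the normal matrix can also be computed once per draw call on the CPU with GLM and uploaded as its own uniform. A sketch, assuming the shader declares uniform mat3 normalMatrix and that program and model are the linked shader program and the model matrix:

#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

// Inverse transpose of the upper-left 3x3 of the model matrix, computed once per object.
glm::mat3 normalMatrix = glm::transpose(glm::inverse(glm::mat3(model)));
glUniformMatrix3fv(glGetUniformLocation(program, "normalMatrix"),
                   1, GL_FALSE, glm::value_ptr(normalMatrix));

In the vertex shader, Normal = normalMatrix * aNormal then replaces the inverse/transpose expression.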

GLM rotates objects around origin and around the object itself

I'm trying to rotate a bunch of objects on their x axis.
This is how I calculate an object's transform:
glm::mat4 GameObject::getTransform(float angle) {
    glm::mat4 model = glm::mat4(1.0f);
    model = glm::translate(model, position);
    model = glm::rotate(model, angle, glm::vec3(1.0f, 0.0f, 0.0f));
    model = glm::scale(model, scaleValue);
    return model;
}
I've tried putting the translate, rotate and scale calls in different orders, to no avail. Only strange behaviour.
This is how I iterate over objects and draw them:
for (auto row : objectRows) {
    for (auto object : row) {
        glm::mat4 model = object->getTransform(glfwGetTime());
        glm::mat4 mvp = projection * view * model;

        mainShader.setMat4("model", model);
        mainShader.setMat4("mvp", mvp);
        mainShader.setVec3("objectColour", object->colour);

        object->mesh.draw(mainShader);
    }
}
The vertex shader:
#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 aNormal;
out vec3 fragPos;
out vec3 normal;
uniform mat4 model;
uniform mat4 mvp;
void main()
{
    fragPos = vec3(model * vec4(aPos, 1.0));
    normal = mat3(transpose(inverse(model))) * aNormal;
    gl_Position = mvp * vec4(fragPos, 1.0f);
}
And the result:
As you can see, the objects at the top rotate only around themselves, and the lower the other objects are, the more they rotate around what I think is the world origin.
I've read many similar-looking posts explaining the order of multiplying the matrices, but nothing seems to help, and I can't help but think it is something stupidly simple that I'm overlooking.
Turns out the problem was in the vertex shader.
void main()
{
    fragPos = vec3(model * vec4(aPos, 1.0));
    normal = mat3(transpose(inverse(model))) * aNormal;
    gl_Position = mvp * vec4(fragPos, 1.0f);
}
I was accidentally multiplying by the model matrix twice. fragPos is the result of multiplying model with the vertex. Two lines below I multiply mvp by fragPos, so the effective calculation is projection * view * model * model.
To fix this I split up mvp, set each matrix as its own uniform in the shader, and changed the line gl_Position = mvp * vec4(fragPos, 1.0f); to gl_Position = projection * view * vec4(fragPos, 1.0);
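A minimal sketch of the corresponding client-side change, reusing the setMat4 helper from the question (the uniform names "view" and "projection" are assumptions that match the edited shader line):

for (auto row : objectRows) {
    for (auto object : row) {
        glm::mat4 model = object->getTransform(glfwGetTime());

        // Upload the three matrices separately instead of a premultiplied mvp.
        mainShader.setMat4("model", model);
        mainShader.setMat4("view", view);
        mainShader.setMat4("projection", projection);

        mainShader.setVec3("objectColour", object->colour);
        object->mesh.draw(mainShader);
    }
}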

OpenGL Projection Matrix showing Orthographic

I got an orthographic camera working; however, I wanted to try to implement a perspective camera so I can do some parallax effects later down the line. I am having some issues when trying to implement it; it seems like the depth is not working correctly. I am rotating a 2D image along the x-axis to simulate it lying somewhat down, so I can see the projection matrix working. It still shows an orthographic projection though.
Here is some of my code:
CameraPersp::CameraPersp() :
_camPos(0.0f,0.0f,0.0f), _modelMatrix(1.0f), _viewMatrix(1.0f), _projectionMatrix(1.0f)
A function called init() to set up the matrix variables:
void CameraPersp::init(int screenWidth, int screenHeight)
{
    _screenHeight = screenHeight;
    _screenWidth = screenWidth;

    _modelMatrix = glm::translate(_modelMatrix, glm::vec3(0.0f, 0.0f, 0.0f));
    _modelMatrix = glm::rotate(_modelMatrix, glm::radians(-55.0f), glm::vec3(1.0f, 0.0f, 0.0f));
    _viewMatrix = glm::translate(_viewMatrix, glm::vec3(0.0f, 0.0f, -3.0f));
    _projectionMatrix = glm::perspective(glm::radians(45.0f), static_cast<float>(_screenWidth) / _screenHeight, 0.1f, 100.0f);
}
Initializing a texture to be loaded in with x,y,z,width,height,src
_sprites.back()->init(-0.5f, -0.5f, 0.0f, 1.0f, 1.0f, "src/content/sprites/DungeonCrawlStoneSoupFull/monster/deep_elf_death_mage.png");
Sending the matrices to the vertex shader:
GLint mLocation = _colorProgram.getUniformLocation("M");
glm::mat4 mMatrix = _camera.getMMatrix();
//glUniformMatrix4fv(mLocation, 1, GL_FALSE, &(mMatrix[0][0]));
glUniformMatrix4fv(mLocation, 1, GL_FALSE, glm::value_ptr(mMatrix));
GLint vLocation = _colorProgram.getUniformLocation("V");
glm::mat4 vMatrix = _camera.getVMatrix();
//glUniformMatrix4fv(vLocation, 1, GL_FALSE, &(vMatrix[0][0]));
glUniformMatrix4fv(vLocation, 1, GL_FALSE, glm::value_ptr(vMatrix));
GLint pLocation = _colorProgram.getUniformLocation("P");
glm::mat4 pMatrix = _camera.getPMatrix();
//glUniformMatrix4fv(pLocation, 1, GL_FALSE, &(pMatrix[0][0]));
glUniformMatrix4fv(pLocation, 1, GL_FALSE, glm::value_ptr(pMatrix));
Here is my vertex shader:
#version 460
//The vertex shader operates on each vertex
//input data from VBO. Each vertex is 2 floats
in vec3 vertexPosition;
in vec4 vertexColor;
in vec2 vertexUV;
out vec3 fragPosition;
out vec4 fragColor;
out vec2 fragUV;
//uniform mat4 MVP;
uniform mat4 M;
uniform mat4 V;
uniform mat4 P;
void main() {
    //Set the x,y position on the screen
    //gl_Position.xy = vertexPosition;
    gl_Position = M * V * P * vec4(vertexPosition, 1.0);

    //the z position is zero since we are 2d
    //gl_Position.z = 0.0;

    //indicate that the coordinates are nomalized
    gl_Position.w = 1.0;

    fragPosition = vertexPosition;
    fragColor = vertexColor;

    // opengl needs to flip the coordinates
    fragUV = vec2(vertexUV.x, 1.0 - vertexUV.y);
}
I can see the image "squish" a little because it is still rendering the perspective as orthographic. If I remove the rotation on the x-axis, it is no longer squished because it isn't lying down at all. Any thoughts on what I am doing wrong? I can supply more info upon request, but I think I put in most of the meat of things.
Picture:
You shouldn't modify gl_Position.w
gl_Position = M * V * P * vec4(vertexPosition, 1.0); // gl_Position is good
//indicate that the coordinates are nomalized < not true
gl_Position.w = 1.0; // Now perspective divisor is lost, projection isn't correct

OpenGL Vertex shader transformation and output

I have a simple vertex shader
#version 330 core
uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;
uniform mat4 modelMatrix;
in vec3 in_Position;
out vec3 pass_Color;
void main(void)
{
    //gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(in_Position, 1.0);
    gl_Position = vec4(in_Position, 1.0);
    pass_Color = vec3(1,1,1);
}
In my client code I have:
glm::vec4 vec1(-1,-1,0,1);//first
glm::vec4 vec2(0,1,0,1);//second
glm::vec4 vec3(1,-1,0,1);//third
glm::mat4 m = projectionMatrix * viewMatrix * modelMatrix;
//translate on client side
vec1 = m * vec1;
vec2 = m * vec2;
vec3 = m * vec3;
//first vertex
vertices[0] = vec1.x;
vertices[1] = vec1.y;
vertices[2] = vec1.z;
//second
vertices[3] = vec2.x;
vertices[4] = vec2.y;
vertices[5] = vec2.z;
//third
vertices[6] = vec3.x;
vertices[7] = vec3.y;
vertices[8] = vec3.z;
Now my question: if I use no matrix multiplication in the shader and none in the client code, this renders a nice triangle which stretches across the whole screen, so I take it the vertex shader maps the coordinates it is given to the screen in a coordinate system with x = -1..1 and y = -1..1.
If I do the matrix multiplication in the shader, everything works nicely. But if I comment out the code in the shader as shown and do it on the client, I get odd results. Shouldn't it yield the same result?
Have I gotten it wrong in thinking that the output of the vertex shader, gl_Position, is a 2D coordinate despite being a vec4?
Thanks for any help. I'd really like to understand what exactly the output of the vertex shader is in terms of vertex position.
The problem is in your shader, as it accepts only 3 components of the position. It is OK to set the fourth coordinate to 1 (like you do) if the coordinate is not in projection space yet.
When you are doing the transformation in client space, the results are correct 4-component homogeneous vectors. You just need to use them as is in your vertex shader:
in vec4 in_Position;
...
gl_Position = in_Position;