I'm trying to multiply my cube's position by a uniform model matrix, but it's making my model invisible :(
Here's my vertex shader:
#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec2 aTex;
uniform mat4 model;
out vec2 aTexture;
void main() {
gl_Position = model * vec4(aPos, 1.0f);
aTexture = aTex;
}
And the cube render method:
void Cube::render(const GLSLShader& shader) const {
shader.use();
if (_texture != nullptr) {
_texture->use();
}
glm::mat4 m = glm::translate(m, glm::vec3(0.0f, 0.0f, 0.0f));
shader.addMat4Uniform("model", m);
glBindVertexArray(_vao);
glDrawArrays(GL_TRIANGLES, 0, 36);
}
addMat4Uniform method:
void GLSLShader::addMat4Uniform(const std::string& name, glm::mat4 val) const {
glUniformMatrix4fv(glGetUniformLocation(_shader_program, name.c_str()), 1,
GL_FALSE, glm::value_ptr(val));
}
When I multiply by an empty mat4 like this:
glm::mat4 m; //= glm::translate(m, glm::vec3(0.0f, 0.0f, 0.0f));
shader.addMat4Uniform("model", m);
Everything is fine, but when I uncomment the glm::translate call, the cube becomes invisible :(
In the line
glm::mat4 m = glm::translate(m, glm::vec3(0.0f, 0.0f, 0.0f));
the variable m is not initialized when it is passed as a parameter to glm::translate, so the result is undefined.
Initialize the variable m first, then use it:
glm::mat4 m(1.0f); // identity matrix
m = glm::translate(m, glm::vec3(0.0f, 0.0f, 0.0f));
Note further that an identity matrix is initialized by the single parameter 1.0:
glm::mat4 m(1.0f);
See the glm API documentation which refers to The OpenGL Shading Language specification 4.20.
5.4.2 Vector and Matrix Constructors
If there is a single scalar parameter to a vector constructor, it is used to initialize all components of the constructed vector to that scalar’s value. If there is a single scalar parameter to a matrix constructor, it is used to initialize all the components on the matrix’s diagonal, with the remaining components initialized to 0.0.
My scene: (the video is blurry because I had to convert this to a GIF)
There are two other objects that should be rendered here!
I am writing a program with GLFW/OpenGL. Essentially, what I am trying to do is render a bunch of independent objects that can all move around freely. To do this, I create a shader, a VAO, a VBO, and an EBO for each model that I want to render. static_models is a vector of class Model, and class Model is just a way to organize my vertices, indices, colors, and normals.
First is creating the vector of Models: (I know this class works as it should, because I use the exact same class for different shaders and buffer objects and things render well)
std::vector<Model> static_models;
// Model(file,              scale,                  color)
Model plane("models/plane.ply", { 1.0f, 1.0f, 1.0f }, { 1.0f, 1.0f, 1.0f });
Model tetrahedron("models/tetrahedron.ply", { 1.0f, 1.0f, 1.0f }, { 0.2f, 1.0f, 1.0f });
static_models.emplace_back(plane);
static_models.emplace_back(tetrahedron);
The code for generating the shader objects, VAOs, VBOs, and EBOs:
for (int i = 0; i < static_models.size(); i++)
{
Shader tempShader("plane.vert", "plane.frag");
// create a shader program for each model (in case we need to rotate them or transform them in some way they will be independent)
static_model_shaders.emplace_back(tempShader);
VAOS_static.emplace_back();
VAOS_static.back().Bind();
VBO tempVBO(&static_models.at(i).vertices.front(), static_models.at(i).vertices.size() * sizeof(GLfloat));
EBO tempEBO(&static_models.at(i).indices.front(), static_models.at(i).indices.size() * sizeof(GLuint));
VAOS_static.back().LinkAttrib(tempVBO, 0, 3, GL_FLOAT, 11 * sizeof(float), (void*)0);
VAOS_static.back().LinkAttrib(tempVBO, 1, 3, GL_FLOAT, 11 * sizeof(float), (void*)(3 * sizeof(float)));
VAOS_static.back().LinkAttrib(tempVBO, 2, 2, GL_FLOAT, 11 * sizeof(float), (void*)(6 * sizeof(float)));
VAOS_static.back().LinkAttrib(tempVBO, 3, 3, GL_FLOAT, 11 * sizeof(float), (void*)(8 * sizeof(float)));
VAOS_static.back().Unbind();
tempVBO.Unbind();
tempEBO.Unbind();
}
Then the code to create the positions and mat4 matrices for each model:
// static model vectors for position and matrix
std::vector<glm::vec3> staticModelPositions;
std::vector<glm::mat4> staticModels;
// initialize all static_model object positions
for (int i = 0; i < static_models.size(); i++)
{
staticModelPositions.emplace_back();
staticModelPositions.back() = glm::vec3(0.0f, 1.0f, 0.0f);
staticModels.emplace_back();
staticModels.back() = glm::translate(staticModels.back(), staticModelPositions.back());
}
Then I set some initial values for the uniforms:
std::vector<Texture> textures;
//static objects
for (int i = 0; i < static_models.size(); i++)
{
//activate first before setting uniforms
static_model_shaders.at(i).Activate();
// static model load model, then load lightColor, then load lightPos for each static_model
glUniformMatrix4fv(glGetUniformLocation(static_model_shaders.at(i).ID, "model"), 1, GL_FALSE, glm::value_ptr(staticModels.at(i)));
glUniform4f(glGetUniformLocation(static_model_shaders.at(i).ID, "lightColor"), lightColor.x, lightColor.y, lightColor.z, 1.0f);
glUniform3f(glGetUniformLocation(static_model_shaders.at(i).ID, "lightPos"), lightPos.x, lightPos.y, lightPos.z);
//create texture objects
textures.emplace_back(Texture("brick.png", GL_TEXTURE_2D, GL_TEXTURE0, GL_RGBA, GL_UNSIGNED_BYTE));
textures.back().texUnit(static_model_shaders.at(i), "tex0", 0);
}
Then drawing the models in the game loop (the game loop itself is not shown; this is a big program):
//draw all static models (each with a different shader and matrix)
for (int i = 0; i < static_model_shaders.size(); i++)
{
//activate shader for current model
// Tells OpenGL which Shader Program we want to use
static_model_shaders.at(i).Activate();
// Exports the camera Position to the Fragment Shader for specular lighting
glUniform3f(glGetUniformLocation(static_model_shaders.at(i).ID, "camPos"), camera.Position.x, camera.Position.y, camera.Position.z);
glUniformMatrix4fv(glGetUniformLocation(static_model_shaders.at(i).ID, "model"), 1, GL_FALSE, glm::value_ptr(staticModels.at(i)));
glUniform4f(glGetUniformLocation(static_model_shaders.at(i).ID, "lightColor"), lightColor.x, lightColor.y, lightColor.z, 1.0f);
// Export the camMatrix to the Vertex Shader of the pyramid
camera.Matrix(static_model_shaders.at(i), "camMatrix");
// Binds the texture so that it appears in rendering
textures.at(i).Bind();
VAOS_static.at(i).Bind();
glDrawElements(GL_TRIANGLES, static_models.at(i).indices.size(), GL_UNSIGNED_INT, 0);
VAOS_static.at(i).Unbind();
}
My vertex shader:
#version 330 core
// Positions/Coordinates
layout (location = 0) in vec3 aPos;
// Colors
layout (location = 1) in vec3 aColor;
// Texture Coordinates
layout (location = 2) in vec2 aTex;
// Normals (not necessarily normalized)
layout (location = 3) in vec3 aNormal;
// Outputs the color for the Fragment Shader
out vec3 color;
// Outputs the texture coordinates to the Fragment Shader
out vec2 texCoord;
// Outputs the normal for the Fragment Shader
out vec3 Normal;
// Outputs the current position for the Fragment Shader
out vec3 crntPos;
// Imports the camera matrix from the main function
uniform mat4 camMatrix;
// Imports the model matrix from the main function
uniform mat4 model;
void main()
{
// calculates current position
crntPos = vec3(model * vec4(aPos, 1.0f));
// Outputs the positions/coordinates of all vertices
gl_Position = camMatrix * vec4(crntPos, 1.0);
// Assigns the colors from the Vertex Data to "color"
color = aColor;
// Assigns the texture coordinates from the Vertex Data to "texCoord"
texCoord = aTex;
// Assigns the normal from the Vertex Data to "Normal"
Normal = aNormal;
}
And fragment shader:
#version 330 core
// Outputs colors in RGBA
out vec4 FragColor;
// Imports the color from the Vertex Shader
in vec3 color;
// Imports the texture coordinates from the Vertex Shader
in vec2 texCoord;
// Imports the normal from the Vertex Shader
in vec3 Normal;
// Imports the current position from the Vertex Shader
in vec3 crntPos;
// Gets the Texture Unit from the main function
uniform sampler2D tex0;
// Gets the color of the light from the main function
uniform vec4 lightColor;
// Gets the position of the light from the main function
uniform vec3 lightPos;
// Gets the position of the camera from the main function
uniform vec3 camPos;
void main()
{
// ambient lighting
float ambient = 0.40f;
// diffuse lighting
vec3 normal = normalize(Normal);
vec3 lightDirection = normalize(lightPos - crntPos);
float diffuse = max(dot(normal, lightDirection), 0.0f);
// specular lighting
float specularLight = 0.50f;
vec3 viewDirection = normalize(camPos - crntPos);
vec3 reflectionDirection = reflect(-lightDirection, normal);
float specAmount = pow(max(dot(viewDirection, reflectionDirection), 0.0f), 8);
float specular = specAmount * specularLight;
// outputs final color
FragColor = texture(tex0, texCoord) * lightColor * (diffuse + ambient + specular);
}
I have other objects in the scene, and they render and update well. There are no errors in the code and everything runs fine; the objects in static_models are just not being rendered. Does anyone have any idea why nothing is showing?
I fixed this after spending a very long time on it. The issue was this block of code:
// static model vectors for position and matrix
std::vector<glm::vec3> staticModelPositions;
std::vector<glm::mat4> staticModels;
// initialize all static_model object positions
for (int i = 0; i < static_models.size(); i++)
{
staticModelPositions.emplace_back();
staticModelPositions.back() = glm::vec3(0.0f, 1.0f, 0.0f);
staticModels.emplace_back();
staticModels.back() = glm::translate(staticModels.back(), staticModelPositions.back());
}
There is a line missing here. After calling staticModels.emplace_back(), we must initialize the new matrix to the identity. This code allows the program to function as intended:
// static model vectors for position and matrix
std::vector<glm::vec3> staticModelPositions;
std::vector<glm::mat4> staticModels;
// initialize all static_model object positions
for (int i = 0; i < static_models.size(); i++)
{
staticModelPositions.emplace_back();
staticModelPositions.back() = glm::vec3(0.0f, 1.0f, 0.0f);
staticModels.emplace_back();
staticModels.at(i) = glm::mat4(1.0f);
staticModels.back() = glm::translate(staticModels.back(), staticModelPositions.back());
}
Recently, I have been trying to render a triangle (as in Figure 1) in my window content view (an OS X NSView) using OpenGL. I set up an orthographic projection with the GLM function glm::ortho, but after rendering, the vertices of the triangle are all in the wrong place; they seem to be offset relative to the window content view.
I have two questions:
Have I misunderstood glm::ortho (based on the following code)?
When the window resizes (zoom in, zoom out), how do I keep the triangle in the same place in the window (i.e. the top vertex at the middle of the width, and the bottom vertices at the corners)?
The following is the result:
my render function:
- (void)render
{
float view_width = self.frame.size.width;
float view_height = self.frame.size.height;
glViewport(0, 0, view_width, view_height);
glClearColor(0.0, 0.0, 0.0, 1.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Using Orthographic Projection Matrix, vertex position
// using view coordinate(pixel coordinate)
float positions[] = {
0.0f, 0.0f, 0.0f, 1.0f,
view_width, 0.0f, 0.0f, 1.0f,
view_width/(float)2.0, view_height, 0.0f, 1.0f,
};
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(positions), positions);
glm::mat4 p = glm::ortho(0.0f, view_width, 0.0f, view_height);
glm::mat4 v = glm::lookAt(glm::vec3(0, 0, 1), glm::vec3(0, 0, 0), glm::vec3(0, 1, 0));
glm::mat4 m = glm::mat4(1.0f);
// upload uniforms to shader
glUniformMatrix4fv(_projectionUniform, 1, GL_FALSE, &p[0][0]);
glUniformMatrix4fv(_viewUniform, 1, GL_FALSE, &v[0][0]);
glUniformMatrix4fv(_modelUniform, 1, GL_FALSE, &m[0][0]);
glDrawElements(GL_TRIANGLE_STRIP, sizeof(positions) / sizeof(positions[0]),GL_UNSIGNED_SHORT, 0);
[_openGLContext flushBuffer];
}
my vertex shader:
#version 410
in vec4 position;
uniform highp mat4 projection;
uniform highp mat4 view;
uniform highp mat4 model;
void main (void)
{
gl_Position = position * projection * view * model;
}
A glm matrix is initialized in the same way as GLSL matrix. See The OpenGL Shading Language 4.6, 5.4.2 Vector and Matrix Constructors, page 101 for further information.
A vector has to be multiplied by the matrix from the right.
See GLSL Programming/Vector and Matrix Operations:
Note that the vector has to be multiplied to the matrix from the right.
If a vector is multiplied to a matrix from the left, the result corresponds to multiplying a row vector from the left to the matrix. This corresponds to multiplying a column vector to the transposed matrix from the right.
This means you have to change the vertex transformation in the vertex shader from
gl_Position = position * projection * view * model;
to
gl_Position = projection * view * model * position;
@Rabbid76 has answered my first question, and it works! Thanks a lot.
As for the second question: on OS X, when a window (containing an OpenGL view) is resized, the NSOpenGLContext should be updated, like this:
- (void)setFrameSize:(NSSize)newSize {
[super setFrameSize:newSize];
// update the _openGLContext object
[_openGLContext update];
// reset viewport
glViewport(0, 0, newSize.width*2, newSize.height*2);
// render
[self render];
}
I got an orthographic camera working, but I wanted to try to implement a perspective camera so I can do some parallax effects later down the line. I am having some issues with the implementation: it seems like the depth is not working correctly. I am rotating a 2D image along the x-axis to simulate it lying somewhat flat, so I can see the projection matrix working, but it still renders as an orthographic projection.
Here is some of my code:
CameraPersp::CameraPersp() :
_camPos(0.0f,0.0f,0.0f), _modelMatrix(1.0f), _viewMatrix(1.0f), _projectionMatrix(1.0f)
A function called init sets up the matrix variables:
void CameraPersp::init(int screenWidth, int screenHeight)
{
_screenHeight = screenHeight;
_screenWidth = screenWidth;
_modelMatrix = glm::translate(_modelMatrix, glm::vec3(0.0f, 0.0f, 0.0f));
_modelMatrix = glm::rotate(_modelMatrix, glm::radians(-55.0f), glm::vec3(1.0f, 0.0f, 0.0f));
_viewMatrix = glm::translate(_viewMatrix, glm::vec3(0.0f, 0.0f, -3.0f));
_projectionMatrix = glm::perspective(glm::radians(45.0f), static_cast<float>(_screenWidth) / _screenHeight, 0.1f, 100.0f);
}
Initializing a texture to be loaded with (x, y, z, width, height, src):
_sprites.back()->init(-0.5f, -0.5f, 0.0f, 1.0f, 1.0f, "src/content/sprites/DungeonCrawlStoneSoupFull/monster/deep_elf_death_mage.png");
Sending the matrices to the vertex shader:
GLint mLocation = _colorProgram.getUniformLocation("M");
glm::mat4 mMatrix = _camera.getMMatrix();
//glUniformMatrix4fv(mLocation, 1, GL_FALSE, &(mMatrix[0][0]));
glUniformMatrix4fv(mLocation, 1, GL_FALSE, glm::value_ptr(mMatrix));
GLint vLocation = _colorProgram.getUniformLocation("V");
glm::mat4 vMatrix = _camera.getVMatrix();
//glUniformMatrix4fv(vLocation, 1, GL_FALSE, &(vMatrix[0][0]));
glUniformMatrix4fv(vLocation, 1, GL_FALSE, glm::value_ptr(vMatrix));
GLint pLocation = _colorProgram.getUniformLocation("P");
glm::mat4 pMatrix = _camera.getPMatrix();
//glUniformMatrix4fv(pLocation, 1, GL_FALSE, &(pMatrix[0][0]));
glUniformMatrix4fv(pLocation, 1, GL_FALSE, glm::value_ptr(pMatrix));
Here is my vertex shader:
#version 460
//The vertex shader operates on each vertex
//input data from VBO. Each vertex is 2 floats
in vec3 vertexPosition;
in vec4 vertexColor;
in vec2 vertexUV;
out vec3 fragPosition;
out vec4 fragColor;
out vec2 fragUV;
//uniform mat4 MVP;
uniform mat4 M;
uniform mat4 V;
uniform mat4 P;
void main() {
//Set the x,y position on the screen
//gl_Position.xy = vertexPosition;
gl_Position = M * V * P * vec4(vertexPosition, 1.0);
//the z position is zero since we are 2d
//gl_Position.z = 0.0;
//indicate that the coordinates are normalized
gl_Position.w = 1.0;
fragPosition = vertexPosition;
fragColor = vertexColor;
// opengl needs to flip the coordinates
fragUV = vec2(vertexUV.x, 1.0 - vertexUV.y);
}
I can see the image "squish" a little because it is still rendering the perspective as orthographic. If I remove the rotation on the x-axis, it is no longer squished because it isn't lying down at all. Any thoughts on what I am doing wrong? I can supply more info upon request, but I think I put in most of the meat of things.
Picture:
You shouldn't modify gl_Position.w:
gl_Position = M * V * P * vec4(vertexPosition, 1.0);
//indicate that the coordinates are normalized  <- not true
gl_Position.w = 1.0; // now the perspective divisor is lost, so the projection isn't correct
(Note also that for column vectors the conventional multiplication order is P * V * M * vec4(vertexPosition, 1.0).)
I'm learning OpenGL using the learnopengl tutorials, and I'm in the transformations chapter. I understood everything the author did and the theory (maths) behind it, but when I tried to practice, my object wouldn't show. I copied and pasted his code, and still nothing changed!
Here is my vertex shader:
#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec2 aTexCoord;
out vec2 TexCoord;
uniform mat4 transform;
void main()
{
gl_Position = transform * vec4(aPos, 1.0);
TexCoord = vec2(aTexCoord.x, aTexCoord.y);
}
My rendering:
// bind textures on corresponding texture units
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture1);
// create transformations
glm::mat4 transform;
transform = glm::rotate(transform, glm::radians((float)glfwGetTime()), glm::vec3(0.0f, 0.0f, 1.0f));
// get matrix's uniform location and set matrix
ourShader.use();
unsigned int transformLoc = glGetUniformLocation(ourShader.ID, "transform");
glUniformMatrix4fv(transformLoc, 1, GL_FALSE, glm::value_ptr(transform));
// render container
glBindVertexArray(VAO);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);
When I remove transform from the multiplication in the vertex shader, everything works fine.
You have to initialize the matrix variable glm::mat4 transform.
The glm API documentation refers to The OpenGL Shading Language specification 4.20.
5.4.2 Vector and Matrix Constructors
If there is a single scalar parameter to a vector constructor, it is used to initialize all components of the constructed vector to that scalar’s value. If there is a single scalar parameter to a matrix constructor, it is used to initialize all the components on the matrix’s diagonal, with the remaining components initialized to 0.0.
This means that an identity matrix can be initialized by the single parameter 1.0:
glm::mat4 transform(1.0f);
I've been following a tutorial on modern OpenGL with the GLM library
I'm on a segment where we introduce matrices for transforming models, positioning the camera, and adding perspective.
I've got a triangle:
const GLfloat vertexBufferData[] = {
-1.0f, -1.0f, 0.0f,
1.0f, -1.0f, 0.0f,
0.0f, 1.0f, 0.0f,
};
I've got my shaders:
GLuint programID = loadShaders("testVertexShader.glsl",
"testFragmentShader.glsl");
I've got a model matrix that does no transformations:
glm::mat4 modelMatrix = glm::mat4(1.0f); /* Identity matrix */
I've got a camera matrix:
glm::mat4 cameraMatrix = glm::lookAt(
glm::vec3(4.0f, 4.0f, 3.0f), /*Camera position*/
glm::vec3(0.0f, 0.0f, 0.0f), /*Camera target*/
glm::vec3(0.0f, 1.0f, 0.0f) /*Up vector*/
);
And I've got a projection matrix:
glm::mat4 projectionMatrix = glm::perspective(
90.0f, /*FOV in degrees*/
4.0f / 3.0f, /*Aspect ratio*/
0.1f, /*Near clipping distance*/
100.0f /*Far clipping distance*/
);
Then I multiply all the matrices together to get the final matrix for the triangle I want to draw:
glm::mat4 finalMatrix = projectionMatrix
* cameraMatrix
* modelMatrix;
Then I send the matrix to GLSL (I think?):
GLuint matrixID = glGetUniformLocation(programID, "MVP");
glUniformMatrix4fv(matrixID, 1, GL_FALSE, &finalMatrix[0][0]);
Then I do shader stuff I don't understand very well:
/*vertex shader*/
#version 330 core
in vec3 vertexPosition_modelspace;
uniform mat4 MVP;
void main(){
vec4 v = vec4(vertexPosition_modelspace, 1);
gl_Position = MVP * v;
}
/*fragment shader*/
#version 330 core
out vec3 color;
void main(){
color = vec3(1, 1, 0);
}
Everything compiles and runs, but I see no triangle. I've moved the triangle and camera around, thinking maybe the camera was pointed the wrong way, but with no success. I was able to successfully get a triangle on the screen before we introduced matrices, but now, no triangle. The triangle should be at origin, and the camera is a few units away from origin, looking at origin.
Turns out, you need to send the matrix to the shader after you've bound the shader.
In other words, you call glUniformMatrix4fv() after glUseProgram().
Lots of things could be your problem - try outputting a vec4 color instead, with alpha explicitly set to 1. One thing I often do as a sanity check is to have the vertex shader ignore all inputs, and to just output vertices directly, e.g. something like:
void main(){
if (gl_VertexID == 0) {
gl_Position = vec4(-1, -1, 0, 1);
} else if (gl_VertexID == 1) {
gl_Position = vec4(1, -1, 0, 1);
} else if (gl_VertexID == 2) {
gl_Position = vec4(0, 1, 0, 1);
}
}
If that works, then you can try adding your vertex position input back in. If that works, you can add your camera or projection matrices back in, etc.
More generally, remove things until something works and you understand why it works, then add parts back in until things stop working, so you can see exactly which part breaks it. Quite often I've been off by a sign, or in the order of multiplication.