I'm learning OpenGL with the learnopengl tutorials and I'm in the transformations chapter. I understood everything the author did and the theory (maths) behind it, but while trying to practice, my object isn't showing. I even copied and pasted his code and still nothing changed!
Here is my vertex shader:
#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec2 aTexCoord;
out vec2 TexCoord;
uniform mat4 transform;
void main()
{
gl_Position = transform * vec4(aPos, 1.0);
TexCoord = vec2(aTexCoord.x, aTexCoord.y);
}
My rendering:
// bind textures on corresponding texture units
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture1);
// create transformations
glm::mat4 transform;
transform = glm::rotate(transform, glm::radians((float)glfwGetTime()), glm::vec3(0.0f, 0.0f, 1.0f));
// get matrix's uniform location and set matrix
ourShader.use();
unsigned int transformLoc = glGetUniformLocation(ourShader.ID, "transform");
glUniformMatrix4fv(transformLoc, 1, GL_FALSE, glm::value_ptr(transform));
// render container
glBindVertexArray(VAO);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);
When I remove transform from the multiplication in the vertex shader, everything works fine.
You have to initialize the matrix variable glm::mat4 transform. In recent GLM versions, a default-constructed matrix is left uninitialized rather than set to the identity.
The glm API documentation refers to The OpenGL Shading Language specification 4.20.
5.4.2 Vector and Matrix Constructors
If there is a single scalar parameter to a vector constructor, it is used to initialize all components of the constructed vector to that scalar’s value. If there is a single scalar parameter to a matrix constructor, it is used to initialize all the components on the matrix’s diagonal, with the remaining components initialized to 0.0.
This means that an identity matrix can be initialized with the single parameter 1.0:
glm::mat4 transform(1.0f);
Related
So I'm trying to render a rectangle in OpenGL using index buffers; however, instead I'm getting a triangle with one vertex at the origin (even though no vertex in my rectangle is supposed to be at the origin).
void Renderer::drawRect(int x,int y,int width, int height)
{
//(Ignoring method arguments for debugging)
float vertices[12] = {200.f, 300.f, 0.f,
200.f, 100.f, 0.f,
600.f, 100.f, 0.f,
600.f, 300.f, 0.f};
unsigned int indices[6] = {0,1,3,1,2,3};
glBindVertexArray(this->flat_shape_VAO);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER,this->element_buffer);
glBufferData(GL_ELEMENT_ARRAY_BUFFER,sizeof(indices),indices,GL_DYNAMIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER,this->render_buffer);
glBufferData(GL_ARRAY_BUFFER,sizeof(vertices),vertices,GL_DYNAMIC_DRAW);
glEnableVertexAttribArray(0);
glUseProgram(this->shader_program);
glUniformMatrix4fv(this->model_view_projection_uniform,1,GL_FALSE,glm::value_ptr(this->model_view_projection_mat));
glUniform3f(this->color_uniform,(float) this->color.r,(float)this->color.g,(float)this->color.b);
glDrawElements(GL_TRIANGLES,6,GL_UNSIGNED_INT,nullptr);
}
My projection matrix is working fine; I can still render a triangle at the correct screen coordinates. I suspect maybe I did the index buffering wrong? Transformation matrices also work fine, at least on my triangles.
Edit:
The VAO's attributes are set up in the class constructor with glVertexAttribPointer();
Edit 2:
I disabled shaders completely and something interesting happened.
Here is the shader source code:
(vertex shader)
#version 330 core
layout (location = 0) in vec3 aPos;
uniform mat4 mvp;
uniform vec3 aColor;
out vec3 color;
void main()
{
gl_Position = mvp * vec4(aPos, 1.0);
color = aColor;
}
(fragment shader)
#version 330 core
in vec3 color;
out vec4 FragColor;
void main()
{
FragColor = vec4(color,1.0f);
}
My projection matrix shouldn't work with shaders disabled, yet I still see a triangle rendering on the screen?
What is the stride argument of glVertexAttribPointer? stride specifies the byte offset between consecutive generic vertex attributes. In your case it should be 0 or 12 (3 * sizeof(float)), but judging from your images it seems to be 24, because the rendered triangle has the 1st (200, 300) and 3rd (600, 100) vertices plus one extra vertex at the coordinate (0, 0).
Presumably wrong (stride of 6 floats, 24 bytes, skips every other vertex):
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(float), nullptr);
Correct for tightly packed vec3 positions:
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), nullptr);
Can I use glColor3f(), glVertex3f() or other API functions with a shader? I wrote a shader to draw a colorful cube and it works fine.
My vertex shader and fragment shader look like this
# vertex shader
#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 aColor;
uniform mat4 model;
uniform mat4 view;
uniform mat4 proj;
out vec4 vertexColor;
void main()
{
gl_Position = proj * view * model * vec4(aPos.x, aPos.y, aPos.z, 1.0);
vertexColor = vec4(aColor, 1.0);
}
#fragment shader
#version 330 core
in vec4 vertexColor;
out vec4 FragColor;
void main(){
FragColor = vertexColor;
}
Now I try to use GL functions along with my colorful cube. Let's say I have some draw code like this.
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(0, 0, -1, 0, 0, 0, 0, 1, 0);
glColor3f(1.0, 0.0, 0.0);
glLineWidth(3);
glBegin(GL_LINES);
glVertex3f(-1, -1, 0);
glVertex3f(1, 1, 0);
glEnd();
Since I used glUseProgram() to use my own shader, the above GL functions don't seem to work as expected (coordinates and color are both wrong). How does a function like glVertex3f() pass vertices to the shader? And what should the shaders look like when drawing with these functions?
Can I use glColor3f(), glVertex3f() or other API functions with shader?
Yes you can.
However, you need to use a Compatibility profile OpenGL Context and you are limited to a GLSL 1.20 vertex shader and the Vertex Shader Built-In Attributes (e.g. gl_Vertex, gl_Color). You can combine a GLSL 1.20 vertex shader with your fragment shader. The matrices in the fixed function matrix stack can be accessed with Built-In Uniforms like gl_ModelViewProjectionMatrix.
All attributes and uniforms are specified in detail in the OpenGL Shading Language 1.20 Specification.
A suitable vertex shader can look like this:
#version 120
varying vec4 vertexColor;
void main()
{
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
vertexColor = gl_Color;
}
It can be combined with the question's fragment shader, which stays unchanged:
#version 330
in vec4 vertexColor;
out vec4 FragColor;
void main(){
FragColor = vertexColor;
}
The glBegin()/glEnd() directives belong to the compatibility profile of OpenGL, as opposed to the more modern core profile. However, you are compiling your shaders for the core profile using the line #version 330 core.
Even if the shaders were not compiled for the core profile, I don't think they would work, since you can't pass vertex attributes declared with location indices (aPos, aColor) using glVertex3f.
I would recommend using core OpenGL for render calls. That means you should not use glBegin()...glEnd() and pass vertex coordinates every render cycle. Instead, upload the cube coordinates to the GPU beforehand and let your shaders access those values:
Create VertexBuffer objects using glGenBuffers().
Store your vertex data in the buffer using glBufferData().
Extract the aPos and aColor attributes from the buffer and assign them indices of 0 and 1 respectively using glVertexAttribPointer().
This should work and no changes to your shader code would be necessary.
EDIT:
For rendering in the compatibility profile, the data provided within glBegin/glEnd is run through a default shader pipeline. You can't customize that pipeline with explicit shader code (as you did here), but you can modify some basic things in it (such as color, Phong lighting, and texturing). So if you want to get the results your shader code represents, you need to do something like this:
glBegin(GL_TRIANGLES);
glColor3f(1.0f, 0.0f, 0.0f); glVertex3f(-1.0f,-0.25f,0.0f); //First vertex
glColor3f(0.0f, 1.0f, 0.0f); glVertex3f(-0.5f,-0.25f,0.0f); // Second vertex
...
glEnd();
This way of sending the full object details during the render call is called immediate mode. For lighting, you need glEnable(GL_LIGHTING), normal data for each vertex, and a bunch of other state.
If you use the core profile, you can define your own shaders, but you can't use immediate mode during render calls; you need to pass the vertex data before your rendering loop. glBegin, glEnd and glVertex3f are simply not supported in the core profile, so use the three points above to store the data on your graphics device before you render anything (the draw itself is then done with glDrawArrays()). This tutorial provides a good introduction to these concepts and can help you draw the cube you want using the core profile.
My scene: (the video is blurry because I had to convert this to a GIF)
There are two other objects that should be rendered here!
I am writing a program with GLFW/OpenGL. Essentially, I want to render a bunch of independent objects that can all move freely around. To do this, I create a shader, a VAO, a VBO, and an EBO for each model I want to render. static_models is a vector of class Model, and class Model is just a way to organize my vertices, indices, colors, and normals.
First is creating the vector of Models: (I know this class works as it should, because I use the exact same class for different shaders and buffer objects and things render well)
std::vector<Model> static_models; // scale // color
Model plane("models/plane.ply", { 1.0f, 1.0f, 1.0f }, { 1.0f, 1.0f, 1.0f });
Model tetrahedron("models/tetrahedron.ply", { 1.0f, 1.0f, 1.0f }, { 0.2f, 1.0f, 1.0f });
static_models.emplace_back(plane);
static_models.emplace_back(tetrahedron);
The code for generating the shader objects, VAOS, VBOS, and EBOS:
for (int i = 0; i < static_models.size(); i++)
{
Shader tempShader("plane.vert", "plane.frag");
// create a shader program for each model (in case we need to rotate them or transform them in some way they will be independent)
static_model_shaders.emplace_back(tempShader);
VAOS_static.emplace_back();
VAOS_static.back().Bind();
VBO tempVBO(&static_models.at(i).vertices.front(), static_models.at(i).vertices.size() * sizeof(GLfloat));
EBO tempEBO(&static_models.at(i).indices.front(), static_models.at(i).indices.size() * sizeof(GLuint));
VAOS_static.back().LinkAttrib(tempVBO, 0, 3, GL_FLOAT, 11 * sizeof(float), (void*)0);
VAOS_static.back().LinkAttrib(tempVBO, 1, 3, GL_FLOAT, 11 * sizeof(float), (void*)(3 * sizeof(float)));
VAOS_static.back().LinkAttrib(tempVBO, 2, 2, GL_FLOAT, 11 * sizeof(float), (void*)(6 * sizeof(float)));
VAOS_static.back().LinkAttrib(tempVBO, 3, 3, GL_FLOAT, 11 * sizeof(float), (void*)(8 * sizeof(float)));
VAOS_static.back().Unbind();
tempVBO.Unbind();
tempEBO.Unbind();
}
Then the code to create the positions and mat4 matrices for each model:
// static model vectors for position and matrix
std::vector<glm::vec3> staticModelPositions;
std::vector<glm::mat4> staticModels;
// initialize all static_model object positions
for (int i = 0; i < static_models.size(); i++)
{
staticModelPositions.emplace_back();
staticModelPositions.back() = glm::vec3(0.0f, 1.0f, 0.0f);
staticModels.emplace_back();
staticModels.back() = glm::translate(staticModels.back(), staticModelPositions.back());
}
Then I set some initial values for the uniforms:
std::vector<Texture> textures;
//static objects
for (int i = 0; i < static_models.size(); i++)
{
//activate first before setting uniforms
static_model_shaders.at(i).Activate();
// static model load model, then load lightColor, then load lightPos for each static_model
glUniformMatrix4fv(glGetUniformLocation(static_model_shaders.at(i).ID, "model"), 1, GL_FALSE, glm::value_ptr(staticModels.at(i)));
glUniform4f(glGetUniformLocation(static_model_shaders.at(i).ID, "lightColor"), lightColor.x, lightColor.y, lightColor.z, 1.0f);
glUniform3f(glGetUniformLocation(static_model_shaders.at(i).ID, "lightPos"), lightPos.x, lightPos.y, lightPos.z);
//create texture objects
textures.emplace_back(Texture("brick.png", GL_TEXTURE_2D, GL_TEXTURE0, GL_RGBA, GL_UNSIGNED_BYTE));
textures.back().texUnit(static_model_shaders.at(i), "tex0", 0);
}
Then drawing the models in the game loop (game loop not shown; this is a big program):
//draw all static models (each with a different shader and matrix)
for (int i = 0; i < static_model_shaders.size(); i++)
{
//activate shader for current model
// Tells OpenGL which Shader Program we want to use
static_model_shaders.at(i).Activate();
// Exports the camera Position to the Fragment Shader for specular lighting
glUniform3f(glGetUniformLocation(static_model_shaders.at(i).ID, "camPos"), camera.Position.x, camera.Position.y, camera.Position.z);
glUniformMatrix4fv(glGetUniformLocation(static_model_shaders.at(i).ID, "model"), 1, GL_FALSE, glm::value_ptr(staticModels.at(i)));
glUniform4f(glGetUniformLocation(static_model_shaders.at(i).ID, "lightColor"), lightColor.x, lightColor.y, lightColor.z, 1.0f);
// Export the camMatrix to the Vertex Shader of the pyramid
camera.Matrix(static_model_shaders.at(i), "camMatrix");
// Binds texture so that is appears in rendering
textures.at(i).Bind();
VAOS_static.at(i).Bind();
glDrawElements(GL_TRIANGLES, static_models.at(i).indices.size(), GL_UNSIGNED_INT, 0);
VAOS_static.at(i).Unbind();
}
My vertex shader:
#version 330 core
// Positions/Coordinates
layout (location = 0) in vec3 aPos;
// Colors
layout (location = 1) in vec3 aColor;
// Texture Coordinates
layout (location = 2) in vec2 aTex;
// Normals (not necessarily normalized)
layout (location = 3) in vec3 aNormal;
// Outputs the color for the Fragment Shader
out vec3 color;
// Outputs the texture coordinates to the Fragment Shader
out vec2 texCoord;
// Outputs the normal for the Fragment Shader
out vec3 Normal;
// Outputs the current position for the Fragment Shader
out vec3 crntPos;
// Imports the camera matrix from the main function
uniform mat4 camMatrix;
// Imports the model matrix from the main function
uniform mat4 model;
void main()
{
// calculates current position
crntPos = vec3(model * vec4(aPos, 1.0f));
// Outputs the positions/coordinates of all vertices
gl_Position = camMatrix * vec4(crntPos, 1.0);
// Assigns the colors from the Vertex Data to "color"
color = aColor;
// Assigns the texture coordinates from the Vertex Data to "texCoord"
texCoord = aTex;
// Assigns the normal from the Vertex Data to "Normal"
Normal = aNormal;
}
And fragment shader:
#version 330 core
// Outputs colors in RGBA
out vec4 FragColor;
// Imports the color from the Vertex Shader
in vec3 color;
// Imports the texture coordinates from the Vertex Shader
in vec2 texCoord;
// Imports the normal from the Vertex Shader
in vec3 Normal;
// Imports the current position from the Vertex Shader
in vec3 crntPos;
// Gets the Texture Unit from the main function
uniform sampler2D tex0;
// Gets the color of the light from the main function
uniform vec4 lightColor;
// Gets the position of the light from the main function
uniform vec3 lightPos;
// Gets the position of the camera from the main function
uniform vec3 camPos;
void main()
{
// ambient lighting
float ambient = 0.40f;
// diffuse lighting
vec3 normal = normalize(Normal);
vec3 lightDirection = normalize(lightPos - crntPos);
float diffuse = max(dot(normal, lightDirection), 0.0f);
// specular lighting
float specularLight = 0.50f;
vec3 viewDirection = normalize(camPos - crntPos);
vec3 reflectionDirection = reflect(-lightDirection, normal);
float specAmount = pow(max(dot(viewDirection, reflectionDirection), 0.0f), 8);
float specular = specAmount * specularLight;
// outputs final color
FragColor = texture(tex0, texCoord) * lightColor * (diffuse + ambient + specular);
}
I have other objects in the scene, and they render and update well. There are no errors and everything runs fine; the objects in static_models are just not being rendered. Does anyone have any ideas as to why nothing is showing?
I fixed this after spending a very long time on it. The issue was this block of code:
// static model vectors for position and matrix
std::vector<glm::vec3> staticModelPositions;
std::vector<glm::mat4> staticModels;
// initialize all static_model object positions
for (int i = 0; i < static_models.size(); i++)
{
staticModelPositions.emplace_back();
staticModelPositions.back() = glm::vec3(0.0f, 1.0f, 0.0f);
staticModels.emplace_back();
staticModels.back() = glm::translate(staticModels.back(), staticModelPositions.back());
}
There is a line missing here. After staticModels.emplace_back(), the new matrix must be set to the identity (a default-constructed glm::mat4 is left uninitialized in recent GLM versions). This code allows the program to function as intended:
// static model vectors for position and matrix
std::vector<glm::vec3> staticModelPositions;
std::vector<glm::mat4> staticModels;
// initialize all static_model object positions
for (int i = 0; i < static_models.size(); i++)
{
staticModelPositions.emplace_back();
staticModelPositions.back() = glm::vec3(0.0f, 1.0f, 0.0f);
staticModels.emplace_back();
staticModels.at(i) = glm::mat4(1.0f);
staticModels.back() = glm::translate(staticModels.back(), staticModelPositions.back());
}
I'm trying to multiply my cube's position by a uniform model matrix, but it's making my model invisible :(
Here's my vertex shader:
#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec2 aTex;
uniform mat4 model;
out vec2 aTexture;
void main() {
gl_Position = model * vec4(aPos, 1.0f);
aTexture = aTex;
}
And the cube render method:
void Cube::render(const GLSLShader& shader) const {
shader.use();
if (_texture != nullptr) {
_texture->use();
}
glm::mat4 m = glm::translate(m, glm::vec3(0.0f, 0.0f, 0.0f));
shader.addMat4Uniform("model", m);
glBindVertexArray(_vao);
glDrawArrays(GL_TRIANGLES, 0, 36);
}
addMat4Uniform method:
void GLSLShader::addMat4Uniform(const std::string& name, glm::mat4 val) const {
glUniformMatrix4fv(glGetUniformLocation(_shader_program, name.c_str()), 1,
GL_FALSE, glm::value_ptr(val));
}
When I'm multiplying by an "empty" (default-constructed) mat4 like:
glm::mat4 m; //= glm::translate(m, glm::vec3(0.0f, 0.0f, 0.0f));
shader.addMat4Uniform("model", m);
Everything is fine, but when I uncomment the glm::translate call, the cube becomes invisible :(
In the line
glm::mat4 m = glm::translate(m, glm::vec3(0.0f, 0.0f, 0.0f));
the variable m is not initialized when it is used as a parameter for glm::translate.
Initialize the variable m first and then use it:
glm::mat4 m(1.0f); // identity matrix
m = glm::translate(m, glm::vec3(0.0f, 0.0f, 0.0f));
Further note that an identity matrix should be initialized with the single parameter 1.0:
glm::mat4 m(1.0f);
See the glm API documentation which refers to The OpenGL Shading Language specification 4.20.
5.4.2 Vector and Matrix Constructors
If there is a single scalar parameter to a vector constructor, it is used to initialize all components of the constructed vector to that scalar’s value. If there is a single scalar parameter to a matrix constructor, it is used to initialize all the components on the matrix’s diagonal, with the remaining components initialized to 0.0.
I can't seem to get my square into the correct viewing matrix in order to manipulate it using glm functions.
This is basically my main.cpp. It consists of init(), which loads a texture and the GLSL vert/frag files and constructs the square from the ground class, plus reshape() and display() functions; display() calls drawGround(), which renders the actual square.
Inside drawGround() I've added the model/view matrices and done a small translation, but it doesn't work. I've been playing with it for hours and can't seem to get it going.
void display()
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
drawGround();
glUseProgram(0);
}
void drawGround(){
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glUseProgram(myShader.handle());
GLuint matLocation = glGetUniformLocation(myShader.handle(), "ProjectionMatrix");
glUniformMatrix4fv(matLocation, 1, GL_FALSE, &ProjectionMatrix[0][0]);
glm::mat4 viewingMatrix = glm::translate(glm::mat4(1.0),glm::vec3(0,0,-1));
ModelViewMatrix = glm::translate(viewingMatrix,glm::vec3(15.0,0.0,0));
glUniformMatrix4fv(glGetUniformLocation(myShader.handle(), "ModelViewMatrix"), 1, GL_FALSE, &ModelViewMatrix[0][0]);
ground.render(texName, &myShader);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
glUseProgram(0);
}
Yet in my other program I have the following function, renderSky(), which works just fine.
Please help me figure out where I'm going wrong...
If you need to see the Ground Class, let me know.
void renderSky(){
glUseProgram(mySkyShader.handle());
GLuint matLocation = glGetUniformLocation(mySkyShader.handle(), "ProjectionMatrix");
glUniformMatrix4fv(matLocation, 1, GL_FALSE, &ProjectionMatrix[0][0]);
glm::mat4 viewingMatrix = glm::translate(glm::mat4(1.0),glm::vec3(0,0,-1));
ModelViewMatrix = glm::translate(viewSkyMatrix,glm::vec3(15,0,0));
glUniformMatrix4fv(glGetUniformLocation(mySkyShader.handle(), "ModelViewMatrix"), 1, GL_FALSE, &ModelViewMatrix[0][0]);
skyBox.render();
glUseProgram(0);
}
This is the vertex shader:
#version 150
in vec3 in_Position;
in vec4 in_Color;
out vec4 ex_Color;
in vec2 in_TexCoord;
out vec2 ex_TexCoord;
void main(void)
{
gl_Position = vec4(in_Position, 1.0);
ex_Color = in_Color;
ex_TexCoord = in_TexCoord;
}
In order to get the object into the correct model/view space, the matrices have to be applied to in_Position in the vertex shader.
gl_Position = vec4(in_Position, 1.0); becomes this:
gl_Position = ProjectionMatrix * ModelViewMatrix * vec4(in_Position, 1.0);
Also, uniforms for the model/view and projection matrices need to be added to the shader:
uniform mat4 ModelViewMatrix;
uniform mat4 ProjectionMatrix;
The vertex shader works with this additional code and allows the matrices to be manipulated by any subsequent glm functions.
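Putting both changes together, the complete vertex shader could look like this (a sketch assuming the original in/out names; untested against the rest of the program):

```glsl
#version 150

uniform mat4 ProjectionMatrix;
uniform mat4 ModelViewMatrix;

in vec3 in_Position;
in vec4 in_Color;
in vec2 in_TexCoord;

out vec4 ex_Color;
out vec2 ex_TexCoord;

void main(void)
{
    // Transform the vertex by the full matrix chain so the glm
    // translations applied in drawGround() actually take effect.
    gl_Position = ProjectionMatrix * ModelViewMatrix * vec4(in_Position, 1.0);
    ex_Color = in_Color;
    ex_TexCoord = in_TexCoord;
}
```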