How do I apply the drawing position in the world via shaders?
My vertex shader looks like this:
in vec2 position;
uniform mat4x4 model;
uniform mat4x4 view;
uniform mat4x4 projection;
void main() {
gl_Position = projection * view * model * vec4(position, 0.0, 1.0);
}
Here position holds the vertex positions of the triangles.
I'm binding the matrices as follows.
view:
glm::mat4x4 view = glm::lookAt(
glm::vec3(0.0f, 1.2f, 1.2f), // camera position
glm::vec3(0.0f, 0.0f, 0.0f), // camera target
glm::vec3(0.0f, 0.0f, 1.0f)); // camera up axis
GLint view_uniform = glGetUniformLocation(shader, "view");
glUniformMatrix4fv(view_uniform, 1, GL_FALSE, glm::value_ptr(view));
projection:
glm::mat4x4 projection = glm::perspective(80.0f, 640.0f/480.0f, 0.1f, 100.0f);
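// (note: depending on the GLM version, glm::perspective expects the field of view
// in radians, so glm::radians(80.0f) might be intended here)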
GLint projection_uniform = glGetUniformLocation(shader, "projection");
glUniformMatrix4fv(projection_uniform, 1, GL_FALSE, glm::value_ptr(projection));
model transformation:
glm::mat4x4 model;
model = glm::translate(model, glm::vec3(1.0f, 0.0f, 0.0f));
model = glm::rotate(model, static_cast<float>((glm::sin(currenttime)) * 360.0), glm::vec3(0.0, 0.0, 1.0));
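// (note: newer GLM versions expect this angle in radians, so the degree-style
// value above may need to be wrapped in glm::radians(...))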
GLint trans_uniform = glGetUniformLocation(shader, "model");
glUniformMatrix4fv(trans_uniform, 1, GL_FALSE, glm::value_ptr(model));
So this way I have to compute the position transformation each frame on the CPU. Is this the recommended way or is there a better one?
Yes. Calculating the transform once per mesh on the CPU and then applying it to all of that mesh's vertices in the vertex shader is not a slow operation, and it does need to be done every frame.
In the render() method you usually do the following things:
create matrix for camera (once per frame usually)
for each object in the scene:
create transformation (position) matrix
draw object
The projection matrix can be created once per window resize, or together with the camera matrix (see the sketch below).
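A rough C++ sketch of that per-frame flow (computeCameraMatrix, scene, viewLoc and modelLoc are illustrative names, not from the question):

void render() {
    glm::mat4 view = computeCameraMatrix();          // once per frame
    glUniformMatrix4fv(viewLoc, 1, GL_FALSE, glm::value_ptr(view));
    for (Object& obj : scene) {
        glm::mat4 model = obj.computeModelMatrix();  // per-object position/rotation
        glUniformMatrix4fv(modelLoc, 1, GL_FALSE, glm::value_ptr(model));
        obj.draw();
    }
}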
Answer: your code is fine; this is the basic way to draw/update objects.
You could move to a framework/system that manages this automatically, but you should not worry (right now) about the performance of those matrix-creation procedures; they are very fast. Drawing is the expensive part.
As jozxyqk wrote in one comment, you can build a combined ModelViewProjMatrix and send that single matrix instead of three separate ones.
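A sketch of that combined approach, reusing the matrices from the question (the "MVP" uniform name is an assumption):

glm::mat4 mvp = projection * view * model;  // combine once per object on the CPU
GLint mvp_uniform = glGetUniformLocation(shader, "MVP");
glUniformMatrix4fv(mvp_uniform, 1, GL_FALSE, glm::value_ptr(mvp));

The vertex shader then needs only a single uniform: gl_Position = MVP * vec4(position, 0.0, 1.0);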
Related
Recently, I have been trying to render a triangle (as in Figure 1) in my window content view (an OS X NSView) using OpenGL. I set up an orthographic projection with the GLM library function glm::ortho, but after rendering, the vertices of the triangle all end up in the wrong place; they seem to be offset relative to the window content view.
I have 2 questions:
Have I misunderstood glm::ortho (based on the following code)?
When the window resizes (zoom in, zoom out), how do I keep the triangle in the same place in the window (i.e. the top vertex at the middle of the width, and the bottom vertices at the corners)?
my render function:
- (void)render
{
float view_width = self.frame.size.width;
float view_height = self.frame.size.height;
glViewport(0, 0, view_width, view_height);
glClearColor(0.0, 0.0, 0.0, 1.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Using Orthographic Projection Matrix, vertex position
// using view coordinate(pixel coordinate)
float positions[] = {
0.0f, 0.0f, 0.0f, 1.0f,
view_width, 0.0f, 0.0f, 1.0f,
view_width/(float)2.0, view_height, 0.0f, 1.0f,
};
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(positions), positions);
glm::mat4 p = glm::ortho(0.0f, view_width, 0.0f, view_height);
glm::mat4 v = glm::lookAt(glm::vec3(0, 0, 1), glm::vec3(0, 0, 0), glm::vec3(0, 1, 0));
glm::mat4 m = glm::mat4(1.0f);
// upload uniforms to shader
glUniformMatrix4fv(_projectionUniform, 1, GL_FALSE, &p[0][0]);
glUniformMatrix4fv(_viewUniform, 1, GL_FALSE, &v[0][0]);
glUniformMatrix4fv(_modelUniform, 1, GL_FALSE, &m[0][0]);
glDrawElements(GL_TRIANGLE_STRIP, sizeof(positions) / sizeof(positions[0]),GL_UNSIGNED_SHORT, 0);
[_openGLContext flushBuffer];
}
my vertex shader:
#version 410
in vec4 position;
uniform highp mat4 projection;
uniform highp mat4 view;
uniform highp mat4 model;
void main (void)
{
gl_Position = position * projection * view * model;
}
A GLM matrix is initialized in the same way as a GLSL matrix. See The OpenGL Shading Language 4.6, 5.4.2 Vector and Matrix Constructors, page 101 for further information.
The vector has to be multiplied by the matrix from the right.
See GLSL Programming/Vector and Matrix Operations:
Note that the vector has to be multiplied to the matrix from the right.
If a vector is multiplied to a matrix from the left, the result corresponds to multiplying a row vector from the left to the matrix. This corresponds to multiplying a column vector to the transposed matrix from the right.
This means you have to change the vertex transformation in the vertex shader from
gl_Position = position * projection * view * model;
to
gl_Position = projection * view * model * position;
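As a quick illustration of the rule (a sketch; M and v stand for any mat4 and vec4):

vec4 a = M * v; // v treated as a column vector: the intended transformation
vec4 b = v * M; // v treated as a row vector: the same as transpose(M) * v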
@Rabbid76 has answered my first question; it works! Thanks a lot.
For the second question: in OS X, when resizing a window (containing an OpenGL view), the NSOpenGLContext should be updated, like this:
- (void)setFrameSize:(NSSize)newSize {
[super setFrameSize:newSize];
// update the _openGLContext object
[_openGLContext update];
// reset viewport
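// (note: the *2 below assumes a 2x Retina backing scale factor; querying
// [self convertRectToBacking:] for the size in pixels would be more robust)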
glViewport(0, 0, newSize.width*2, newSize.height*2);
// render
[self render];
}
I got an orthographic camera working, but I wanted to try to implement a perspective camera so I can do some parallax effects later down the line. I am having some issues trying to implement it. It seems like the depth is not working correctly. I am rotating a 2D image along the x-axis to simulate it lying somewhat flat, so I can see the projection matrix at work. It still renders as an orthographic projection, though.
Here is some of my code:
CameraPersp::CameraPersp() :
_camPos(0.0f, 0.0f, 0.0f), _modelMatrix(1.0f), _viewMatrix(1.0f), _projectionMatrix(1.0f)
{
}
A function called init to set up the matrix variables:
void CameraPersp::init(int screenWidth, int screenHeight)
{
_screenHeight = screenHeight;
_screenWidth = screenWidth;
_modelMatrix = glm::translate(_modelMatrix, glm::vec3(0.0f, 0.0f, 0.0f));
_modelMatrix = glm::rotate(_modelMatrix, glm::radians(-55.0f), glm::vec3(1.0f, 0.0f, 0.0f));
_viewMatrix = glm::translate(_viewMatrix, glm::vec3(0.0f, 0.0f, -3.0f));
_projectionMatrix = glm::perspective(glm::radians(45.0f), static_cast<float>(_screenWidth) / _screenHeight, 0.1f, 100.0f);
}
Initializing a sprite whose texture is loaded with x, y, z, width, height, src:
_sprites.back()->init(-0.5f, -0.5f, 0.0f, 1.0f, 1.0f, "src/content/sprites/DungeonCrawlStoneSoupFull/monster/deep_elf_death_mage.png");
Sending the matrices to the vertex shader:
GLint mLocation = _colorProgram.getUniformLocation("M");
glm::mat4 mMatrix = _camera.getMMatrix();
//glUniformMatrix4fv(mLocation, 1, GL_FALSE, &(mMatrix[0][0]));
glUniformMatrix4fv(mLocation, 1, GL_FALSE, glm::value_ptr(mMatrix));
GLint vLocation = _colorProgram.getUniformLocation("V");
glm::mat4 vMatrix = _camera.getVMatrix();
//glUniformMatrix4fv(vLocation, 1, GL_FALSE, &(vMatrix[0][0]));
glUniformMatrix4fv(vLocation, 1, GL_FALSE, glm::value_ptr(vMatrix));
GLint pLocation = _colorProgram.getUniformLocation("P");
glm::mat4 pMatrix = _camera.getPMatrix();
//glUniformMatrix4fv(pLocation, 1, GL_FALSE, &(pMatrix[0][0]));
glUniformMatrix4fv(pLocation, 1, GL_FALSE, glm::value_ptr(pMatrix));
Here is my vertex shader:
#version 460
//The vertex shader operates on each vertex
//input data from the VBO; each vertex position is 3 floats
in vec3 vertexPosition;
in vec4 vertexColor;
in vec2 vertexUV;
out vec3 fragPosition;
out vec4 fragColor;
out vec2 fragUV;
//uniform mat4 MVP;
uniform mat4 M;
uniform mat4 V;
uniform mat4 P;
void main() {
//Set the x,y position on the screen
//gl_Position.xy = vertexPosition;
gl_Position = M * V * P * vec4(vertexPosition, 1.0);
//the z position is zero since we are 2d
//gl_Position.z = 0.0;
//indicate that the coordinates are normalized
gl_Position.w = 1.0;
fragPosition = vertexPosition;
fragColor = vertexColor;
// opengl needs to flip the coordinates
fragUV = vec2(vertexUV.x, 1.0 - vertexUV.y);
}
I can see the image "squish" a little because it is still rendering the perspective as orthographic. If I remove the rotation on the x-axis, it is no longer squished, because it isn't lying down at all. Any thoughts on what I am doing wrong? I can supply more info upon request, but I think I've included the meat of it.
You shouldn't modify gl_Position.w:
gl_Position = M * V * P * vec4(vertexPosition, 1.0); // note: the order should be P * V * M (see the sketch below)
//indicate that the coordinates are normalized  <- not true
gl_Position.w = 1.0; // now the perspective divisor is lost, and the projection isn't correct
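A corrected main() as a sketch (it also applies the P * V * M order, matching the column-vector convention discussed earlier):

void main() {
    // transform with the full chain and leave gl_Position.w alone;
    // the hardware performs the perspective divide after the vertex shader
    gl_Position = P * V * M * vec4(vertexPosition, 1.0);
    fragPosition = vertexPosition;
    fragColor = vertexColor;
    fragUV = vec2(vertexUV.x, 1.0 - vertexUV.y); // flip V for OpenGL's texture origin
}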
I've been following a tutorial on modern OpenGL with the GLM library. I'm on a segment where we introduce matrices for transforming models, positioning the camera, and adding perspective.
I've got a triangle:
const GLfloat vertexBufferData[] = {
-1.0f, -1.0f, 0.0f,
1.0f, -1.0f, 0.0f,
0.0f, 1.0f, 0.0f,
};
I've got my shaders:
GLuint programID = loadShaders("testVertexShader.glsl",
"testFragmentShader.glsl");
I've got a model matrix that does no transformations:
glm::mat4 modelMatrix = glm::mat4(1.0f); /* Identity matrix */
I've got a camera matrix:
glm::mat4 cameraMatrix = glm::lookAt(
glm::vec3(4.0f, 4.0f, 3.0f), /*Camera position*/
glm::vec3(0.0f, 0.0f, 0.0f), /*Camera target*/
glm::vec3(0.0f, 1.0f, 0.0f) /*Up vector*/
);
And I've got a projection matrix:
glm::mat4 projectionMatrix = glm::perspective(
90.0f, /*FOV in degrees*/
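// (note: newer GLM versions expect the field of view in radians, so
// glm::radians(90.0f) may be needed here depending on your GLM version)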
4.0f / 3.0f, /*Aspect ratio*/
0.1f, /*Near clipping distance*/
100.0f /*Far clipping distance*/
);
Then I multiply all the matrices together to get the final matrix for the triangle I want to draw:
glm::mat4 finalMatrix = projectionMatrix
* cameraMatrix
* modelMatrix;
Then I send the matrix to GLSL (I think?):
GLuint matrixID = glGetUniformLocation(programID, "MVP");
glUniformMatrix4fv(matrixID, 1, GL_FALSE, &finalMatrix[0][0]);
Then I do shader stuff I don't understand very well:
/*vertex shader*/
#version 330 core
in vec3 vertexPosition_modelspace;
uniform mat4 MVP;
void main(){
vec4 v = vec4(vertexPosition_modelspace, 1);
gl_Position = MVP * v;
}
/*fragment shader*/
#version 330 core
out vec3 color;
void main(){
color = vec3(1, 1, 0);
}
Everything compiles and runs, but I see no triangle. I've moved the triangle and camera around, thinking maybe the camera was pointed the wrong way, but with no success. I was able to successfully get a triangle on the screen before we introduced matrices, but now, no triangle. The triangle should be at origin, and the camera is a few units away from origin, looking at origin.
Turns out, you need to send the matrix to the shader after you've bound the shader.
In other words, call glUniformMatrix4fv() after glUseProgram().
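A minimal sketch of the working order, reusing the names from the question:

glUseProgram(programID);                                        // bind the shader first
GLuint matrixID = glGetUniformLocation(programID, "MVP");
glUniformMatrix4fv(matrixID, 1, GL_FALSE, &finalMatrix[0][0]);  // the uniform now goes to the bound program
// ... issue draw calls ...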
Lots of things could be your problem - try outputting a vec4 color instead, with alpha explicitly set to 1. One thing I often do as a sanity check is to have the vertex shader ignore all inputs, and to just output vertices directly, e.g. something like:
void main(){
if (gl_VertexID == 0) {
gl_Position = vec4(-1, -1, 0, 1);
} else if (gl_VertexID == 1) {
gl_Position = vec4(1, -1, 0, 1);
} else if (gl_VertexID == 2) {
gl_Position = vec4(0, 1, 0, 1);
}
}
If that works, then you can try adding your vertex position input back in. If that works, you can add your camera or projection matrices back in, etc.
More generally, remove things until something works and you understand why it works, then add parts back in until it stops working, so you can see exactly which addition broke it. Quite often I've been off by a sign, or in the order of multiplication.
I'm trying to port an old SDL game I wrote some time ago to 3.3+ OpenGL, and I'm having a few issues getting a proper orthographic projection matrix and its corresponding view.
glm::mat4 proj = glm::ortho( 0.0f, static_cast<float>(win_W), static_cast<float>(win_H), 0.0f,-5.0f, 5.0f);
If I simply use this projection and try to render say a quad inside the boundaries of win_W and win_H, it works. However, if instead I try to simulate a camera using:
glm::mat4 view = glm::lookAt(
glm::vec3(0.0f, 0.0f, 1.0f),//cam pos
glm::vec3(0.0f, 0.0f, 0.0f),//looking at
glm::vec3(0.0f, 0.0f, 1.0f)//floored
);
I get nothing, even when positioning the quad in the center. Same if instead I center the view:
glm::mat4 view = glm::lookAt(
glm::vec3(static_cast<float>(win_W), static_cast<float>(win_H), 1.0f),//cam pos
glm::vec3(static_cast<float>(win_W), static_cast<float>(win_H), 0.0f),//looking at
glm::vec3(0.0f, 0.0f, 1.0f)//floored
);
Considering that SDL usually subtracts the camera values from the vertices in the game to simulate a view, is it simply better to replace my MVP operation in the shader like this:
#version 150
in vec2 position;
in vec3 color;
out vec3 Color;
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
uniform vec4 camera; // SDL-style camera offset (vec4 so it can be subtracted below)
void main()
{
Color = color;
//faulty view
//gl_Position = projection *view * model * vec4(position, 0.0, 1.0);
//new SDL style camera
vec4 newPosition = projection * model * vec4(position, 0.0, 1.0);
gl_Position = newPosition - camera;
}
Unfortunately the source I'm learning from (the ArcSynthesis book) doesn't cover a view that involves a coordinate system other than the default NDC, and for some reason even today most discussion about OpenGL concerns deprecated versions.
So in short, what's the best way of setting up the view for an orthographic projection matrix for simple 2D rendering in modern OpenGL?
Your lookAt call actually does not make sense at all, since your up vector (0,0,1) is collinear with the viewing direction (0,0,-1). This will not produce a usable matrix.
You probably want (0,1,0) as the up vector. However, even if you change that, you might be surprised by the result: in combination with the projection matrix you set up, the point you specified as the lookAt target will not appear in the center, but at the corner of the screen. Your projection does not map (0,0) to the center, which one typically assumes when using some LookAt function.
I agree with Colonel Thirty Two's comment: don't use LookAt in this scenario unless you have a very good reason to. A sketch of the usual alternative follows.
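A minimal sketch of a 2D camera without LookAt (camPos is a hypothetical glm::vec2 holding the camera's pixel position; the rest reuses names from the question):

glm::mat4 proj = glm::ortho(0.0f, static_cast<float>(win_W), static_cast<float>(win_H), 0.0f, -5.0f, 5.0f);
glm::mat4 view = glm::translate(glm::mat4(1.0f), glm::vec3(-camPos.x, -camPos.y, 0.0f)); // move the world opposite to the camera
// then keep the standard chain in the shader: projection * view * model * position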
So I can draw a spinning cube using OpenGL 3.2+ and translate it away from the origin (0,0,0) and to the left, but when I try to draw a second one (towards the right), it doesn't render...
This is my display function:
void display()
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glUseProgram(myShader.handle());
GLuint matLocation = glGetUniformLocation(myShader.handle(), "ProjectionMatrix");
glUniformMatrix4fv(matLocation, 1, GL_FALSE, &ProjectionMatrix[0][0]);
spinY+=0.03;
if(spinY>360) spinY = 0;
glm::mat4 viewMatrix;
viewMatrix = glm::translate(glm::mat4(1.0),glm::vec3(0,0,-100)); //viewing matrix
ModelViewMatrix = glm::translate(viewMatrix,glm::vec3(-30,0,0)); //translate object from the origin
ModelViewMatrix = glm::rotate(ModelViewMatrix,spinY, glm::vec3(0,1,0)); //rotate object about y axis
glUniformMatrix4fv(glGetUniformLocation(myShader.handle(), "ModelViewMatrix"), 1, GL_FALSE, &ModelViewMatrix[0][0]); //pass matrix to shader
//Add the following line just before the line to draw the cube to
//check that the origin of the cube in eye space is (-30, 0, -100);
result = glm::vec3(ModelViewMatrix * glm::vec4(0,0,0,1));
std::cout<<glm::to_string(result)<<std::endl; //print matrix to get coordinates.
myCube.render();
glUseProgram(0);
}
I want to be able to use the same Cube class / size etc, but just render it again (I assume that's the most efficient / best way to do it).
I tried this
void display()
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glUseProgram(myShader.handle());
GLuint matLocation = glGetUniformLocation(myShader.handle(), "ProjectionMatrix");
glUniformMatrix4fv(matLocation, 1, GL_FALSE, &ProjectionMatrix[0][0]);
spinY+=0.03;
if(spinY>360) spinY = 0;
glm::mat4 viewMatrix;
viewMatrix = glm::translate(glm::mat4(1.0),glm::vec3(0,0,-100)); //viewing matrix
ModelViewMatrix = glm::translate(viewMatrix,glm::vec3(-30,0,0)); //translate object from the origin
ModelViewMatrix = glm::rotate(ModelViewMatrix,spinY, glm::vec3(0,1,0)); //rotate object about y axis
glUniformMatrix4fv(glGetUniformLocation(myShader.handle(), "ModelViewMatrix"), 1, GL_FALSE, &ModelViewMatrix[0][0]); //pass matrix to shader
//Add the following line just before the line to draw the cube to
//check that the origin of the cube in eye space is (-30, 0, -100);
result = glm::vec3(ModelViewMatrix * glm::vec4(0,0,0,1));
std::cout<<glm::to_string(result)<<std::endl; //print matrix to get coordinates.
myCube.render();
glm::mat4 viewMatrix_TWO;
viewMatrix_TWO = glm::translate(glm::mat4(1.0),glm::vec3(0,0,-100)); //viewing matrix
ModelViewMatrix_TWO = glm::translate(viewMatrix_TWO,glm::vec3(30,0,0)); //translate object from the origin
ModelViewMatrix_TWO = glm::rotate(ModelViewMatrix_TWO,spinY, glm::vec3(0,1,0)); //rotate object about y axis
glUniformMatrix4fv(glGetUniformLocation(myShader.handle(), "ModelViewMatrix_TWO"), 1, GL_FALSE, &ModelViewMatrix[0][0]); //pass matrix to shader
myCube.render();
glUseProgram(0);
}
Obviously, I've implemented it wrong... How can I get a cube on either side of the screen? Thanks.
UPDATE
I realised I hadn't created a second cube object, but with that now implemented, it still doesn't work... Am I confusing how the view/model matrices interact? I've created a new one for each object.
New Code:
myCube.render();
spinX+=0.03;
if(spinX>360) spinX = 0;
glm::mat4 viewMatrix_Two,ModelViewMatrix_Two;
viewMatrix_Two = glm::translate(glm::mat4(1.0),glm::vec3(0,0,-100)); //viewing matrix
ModelViewMatrix_Two = glm::translate(viewMatrix_Two,glm::vec3(30,0,0)); //translate object from the origin
ModelViewMatrix_Two = glm::rotate(ModelViewMatrix_Two,spinX, glm::vec3(0,1,0)); //rotate object about y axis
glUniformMatrix4fv(glGetUniformLocation(myShader.handle(), "ModelViewMatrix_Two"), 1, GL_FALSE, &ModelViewMatrix_Two[0][0]); //pass matrix to shader
myCube_Two.render();
UPDATE
Shader:
uniform mat4 ModelViewMatrix;
//uniform mat4 ModelViewMatrix_Two; //NOT NEEDED - USED SAME SHADER OBJECT
uniform mat4 ProjectionMatrix;
in vec3 in_Position; // Position coming in
in vec3 in_Color; // colour coming in
out vec3 ex_Color; // colour leaving the vertex, this will be sent to the fragment shader
void main(void)
{
gl_Position = ProjectionMatrix * ModelViewMatrix * vec4(in_Position, 1.0);
//gl_Position = ProjectionMatrix * ModelViewMatrix_Two * vec4(in_Position, 1.0);
ex_Color = in_Color;
}
In the end, I created a second Cube object and a second viewing matrix, and used them with the already established model-view uniform in my shader, since each cube is rendered with its own draw call.
The correct code is:
glm::mat4 viewMatrix_Two, ModelViewMatrix_Two;
viewMatrix_Two = glm::translate(glm::mat4(1.0),glm::vec3(0,0,-200));
ModelViewMatrix = glm::translate(viewMatrix_Two,glm::vec3(30,0,0));
ModelViewMatrix = glm::rotate(ModelViewMatrix,spinX, glm::vec3(1,0,0));
glUniformMatrix4fv(glGetUniformLocation(myShader.handle(), "ModelViewMatrix"), 1, GL_FALSE, &ModelViewMatrix[0][0]); //pass matrix to shader
myCube_Two.render();
Unless your shader has a uniform called ModelViewMatrix_Two, this won't work. I don't see a reason why your shader would need another uniform for the model-view matrix, since you are not drawing both cubes in the same draw call. If that's not the problem, can you post your shader code?
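For reference, a sketch of that reuse pattern (one ModelViewMatrix uniform, set once per draw call; names taken from the question):

GLint mvLoc = glGetUniformLocation(myShader.handle(), "ModelViewMatrix");
glUniformMatrix4fv(mvLoc, 1, GL_FALSE, &ModelViewMatrix[0][0]);     // first cube's transform
myCube.render();
glUniformMatrix4fv(mvLoc, 1, GL_FALSE, &ModelViewMatrix_Two[0][0]); // second cube's transform
myCube_Two.render();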