I am trying to move objects within my 3D world in different ways, but I can't move one object without affecting the entire scene. I tried using a second shader with different uniform names and got some very strange results, like objects disappearing and other odd behaviour.
I tried linking and unlinking programs, but everything seems to translate together when I apply different matrices to the different shaders in the hope of seeing them move differently.
The TRANSLATE matrix is just a rotation * scale * translation matrix.
Edit - here is how I set my uniforms:
//All of my mat4's
// Sorry for not initialising any of the vec3s or mat4s - I don't want the code to be too lengthy
perspectiveproj = glm::perspective(glm::radians(95.0f), static_cast<float>(width)/height , 0.01f, 150.0f);
views = glm::lookAt(position, position + viewdirection, UP);
trans1 = glm::rotate(trans1, 0.0f, glm::vec3(0.0f, 1.0f, 0.0f));
trans1 = glm::scale(trans1, glm::vec3(0.0f, 0.0f, 0.0f));
trans1 = glm::translate(trans1, glm::vec3(1.0f, 0.0f, 1.0f));
//These are the uniforms for my perspective matrix per shader
int persp = glGetUniformLocation(shader_one, "perspective");
glUniformMatrix4fv(persp, 1, GL_FALSE, glm::value_ptr(perspectiveproj));
int persp2 = glGetUniformLocation(shader_two, "perspective");
glUniformMatrix4fv(persp2, 1, GL_FALSE, glm::value_ptr(perspectiveproj));
//These are the uniforms for my lookAt matrix per shader
int Look = glGetUniformLocation(shader_one, "lookAt");
glUniformMatrix4fv(Look, 1, GL_FALSE, glm::value_ptr(views));
int Look2 = glGetUniformLocation(shader_two, "perspective");
glUniformMatrix4fv(Look2, 1, GL_FALSE, glm::value_ptr(views));
//This is the one uniform for my TRANSLATE matrix, sent only to shader two so
//that it moves shader two's objects differently than shader one
int Moveoneshader = glGetUniformLocation(shader_two, "TRANSLATE");
glUniformMatrix4fv(Moveoneshader, 1, GL_FALSE, glm::value_ptr(trans1));
shader one:
gl_Position = perspective * lookAt * vec4(position.x, position.y, position.z, 1.0);
shader two:
gl_Position = perspective * lookAt * TRANSLATE * vec4(position.x, position.y, position.z, 1.0);
linking and drawing:
glUseProgram(shader_one);
glBindVertexArray(vao_one);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, nullptr);
glDeleteProgram(shader_one);
glUseProgram(shader_two);
glBindVertexArray(vao_two);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, nullptr);
glDeleteProgram(shader_two);
It seems that you are having trouble understanding the mechanics behind using a shader.
A shader is supposed to be a set of instructions that can run on multiple inputs, e.g. objects.
Let's first call the TRANSLATE matrix the model matrix, since it holds all transformations that affect our model directly. The model matrix can have different values for different objects, so instead of using different shaders, you can use one generalized shader that calculates:
gl_Position = perspective * view * model * vec4(position, 1.0);
where view equals lookAt. I have renamed your matrices to follow common naming conventions; I advise you to use these names so that you can find more information when researching.
When creating a model matrix, you have to be careful about the order of matrix multiplication as well. In most cases, you want your model matrix to be composed like this
model = translate * rotate * scale
to avoid distortions of your object.
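To see why the order matters, here is a minimal sketch using a hand-rolled row-major 4x4 matrix rather than GLM, so it can stand alone (all names here are illustrative, not part of any real API): translate * scale moves the already-scaled object, while scale * translate also scales the translation itself.

```cpp
#include <array>

// Minimal row-major 4x4 matrix, just enough to demonstrate
// composition order.
using Mat4 = std::array<std::array<float, 4>, 4>;
using Vec4 = std::array<float, 4>;

Mat4 identity() {
    Mat4 m{};
    for (int i = 0; i < 4; ++i) m[i][i] = 1.0f;
    return m;
}

// r = a * b (standard matrix product).
Mat4 mul(const Mat4& a, const Mat4& b) {
    Mat4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                r[i][j] += a[i][k] * b[k][j];
    return r;
}

Mat4 translate(float x, float y, float z) {
    Mat4 m = identity();
    m[0][3] = x; m[1][3] = y; m[2][3] = z;
    return m;
}

Mat4 scale(float s) {
    Mat4 m = identity();
    m[0][0] = m[1][1] = m[2][2] = s;
    return m;
}

// Transform a point by a matrix.
Vec4 apply(const Mat4& m, const Vec4& v) {
    Vec4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            r[i] += m[i][j] * v[j];
    return r;
}

// model = translate(1,0,0) * scale(2): the point (1,0,0) is scaled
// to (2,0,0), then moved by (1,0,0) -> x == 3.
// scale(2) * translate(1,0,0) scales the translation too -> x == 4.
```

The same reasoning carries over directly to glm::translate/glm::rotate/glm::scale, which multiply new transforms onto the matrix you pass in.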
To be able to render multiple objects with their own respective model matrix, you have to loop over all objects and update the matrix value in the shader before drawing the object. A simplified example would be:
std::string name = "model";
for (const Object& obj : objects)
{
    glUniformMatrix4fv(glGetUniformLocation(shaderID, name.c_str()), 1,
                       GL_FALSE, glm::value_ptr(obj.modelMatrix));
    // draw object
}
You can read more about this here https://learnopengl.com/Getting-started/Coordinate-Systems.
Related to your problem: objects can appear to vanish when you draw them with multiple shaders. This comes down to how draw calls write their data to your screen. Each draw call writes to the pixels covered by the geometry it renders, so when you switch shaders and draw again, the second draw can overwrite the first one's result wherever the geometry overlaps (especially if depth testing is disabled or the depth buffer is not cleared correctly).
To combine multiple images, you can use Framebuffers. Instead of writing directly on your screen, you can use them to write into images first. Later, these images can be combined in a third shader.
However, this will cost way too much memory and will be too computationally inefficient to consider for your scenario. These techniques are usually applied when rendering post-processing effects.
Related
I have two classes, each with their own model coordinates, colors, etc. I also have two shader programs that are logically the same. First I execute one shader program, edit the uniforms with the traditional view and projection matrices, then have the first class set its model matrix uniquely and draw its primitives. Immediately afterwards, I do the exact same thing with the second shader program: edit the uniforms again and have the second class draw its primitives with its own unique model matrix coordinates.
In the second class I translate the model matrix each iteration, but not in the first class. For some reason it translates the model matrix in the first class as well, and I don't know why.
Source code:
//First shader program: update view and proj matrix, and have the first class draw its vertices
executable.Execute();
GLuint viewMatrix = glGetUniformLocation(executable.getComp(), "viewMatrix");
glUniformMatrix4fv(viewMatrix, 1, GL_FALSE, glm::value_ptr(freeView.getFreeView()));
GLuint projMatrix = glGetUniformLocation(executable.getComp(), "projectionMatrix");
glUniformMatrix4fv(projMatrix, 1, GL_FALSE, glm::value_ptr(projectionMatrix.getProjectionMatrix()));
temp.useClass(executable);
//Second shader program: update view and proj matrix, and have the second class draw its vertices
executable2.Execute();
viewMatrix = glGetUniformLocation(executable2.getComp(), "viewMatrix");
glUniformMatrix4fv(viewMatrix, 1, GL_FALSE, glm::value_ptr(freeView.getFreeView()));
projMatrix = glGetUniformLocation(executable2.getComp(), "projectionMatrix");
glUniformMatrix4fv(projMatrix, 1, GL_FALSE, glm::value_ptr(projectionMatrix.getProjectionMatrix()));
temp2.useClass(executable2);
VertexShader:
#version 330 core
layout(location = 0) in vec3 positions;
layout(location = 1) in vec3 colors;
uniform mat4 modelMatrix;
uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;
out vec3 color;
void main()
{
gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(positions, 1.0f);
color = colors;
}
The second vertex shader is logically the same, just with different variable names, and the fragment shader simply outputs the color.
useClass function (from class one):
glBindVertexArray(tempVAO);
glm::mat4 modelMat;
modelMat = glm::mat4();
GLuint modelMatrix = glGetUniformLocation(exe.getComp(), "modelMatrix");
glUniformMatrix4fv(modelMatrix, 1, GL_FALSE, glm::value_ptr(modelMat));
glDrawArrays(GL_TRIANGLES, 0, 3);
glBindVertexArray(0);
useClass function (from class two):
glBindVertexArray(tempVAO);
for(GLuint i = 0; i < 9; i++)
{
model[i] = glm::translate(model[i], gravity);
GLuint modelMatrix = glGetUniformLocation(exe.getComp(), "modelMatrix");
glUniformMatrix4fv(modelMatrix, 1, GL_FALSE, glm::value_ptr(model[i]));
glDrawArrays(GL_POINTS, 0, 1);
}
glBindVertexArray(0);
Both classes have data protection, and I just don't understand how translating the model matrix in one class makes the model matrix in the other class get translated as well when using two shader programs. When I use one shader program for both classes, the translation works out fine, but not when I use two shader programs (one for each class)...
EDIT: After working on my project a little more, I figured out that the same problem happens when I compile and link two different shader programs with the exact same vertex and fragment shaders, and just use each shader program before I draw from each class. So now my question is more along the lines of: why does using two identical shader programs between draws cause all of the vertices/model matrices to get translated?
I figured out what the problem was. Basically, since there is not really a way to directly exit the execution of a shader, my program was getting confused when I passed shaders getting executed through functions into other parts of the program. For some reason the program was thinking two shader programs were getting executed at the same time, hence why the model matrix was not getting reset consistently. To fix this issue, I limited the scope of each individual shader. Instead of having shaders executed in the same function and then passed through to other classes, I put each shader in the respective class that it gets used in.
So I'm an OpenGL beginner attempting to draw 'Bones' recursively.
I can draw my mesh fine within my 'do' loop; however, when I try to pass the 'Bone' object to a function to draw the mesh, it doesn't draw.
void drawBone(Bone &bone, mat4 ProjectionMatrix, mat4 ViewMatrix)
{
ModelMatrix = bone.getBoneModel();
MVP = ProjectionMatrix * ViewMatrix * ModelMatrix;
glUniformMatrix4fv(MatrixID, 1, GL_FALSE, &MVP[0][0]);
glDrawArrays(GL_TRIANGLES, 0, vertices.size());
}
If I paste those 4 lines back into my 'do' loop in place of 'drawBone()' the mesh draws just fine.
Any help would be appreciated! :)
You have a few global variables, like MatrixID and vertices.
Make sure these are globally accessible and that you are not redeclaring them within your "do" loop.
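Alternatively, you can avoid the globals entirely by passing everything the function reads as parameters. A sketch under the assumption that the uniform location and vertex count are supplied by the caller (this needs a live GL context, so treat it as illustrative):

```cpp
// Hypothetical self-contained signature: nothing depends on globals
// being visible at the call site.
void drawBone(Bone& bone,
              const glm::mat4& projectionMatrix,
              const glm::mat4& viewMatrix,
              GLint matrixID,        // uniform location, queried once by the caller
              GLsizei vertexCount)   // number of vertices in the bone mesh
{
    glm::mat4 modelMatrix = bone.getBoneModel();
    glm::mat4 mvp = projectionMatrix * viewMatrix * modelMatrix;
    glUniformMatrix4fv(matrixID, 1, GL_FALSE, &mvp[0][0]);
    glDrawArrays(GL_TRIANGLES, 0, vertexCount);
}
```

Passing state explicitly also makes it obvious when the wrong shader program is bound or the wrong uniform location is used, which is a common source of "it draws in the loop but not in the function" bugs.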
I'm creating a basic OpenGL scene and I have a problem with manipulating my objects. Each object has its own transformation matrix, and there is also a modelview/translation/scaling matrix for the whole scene.
How do I bind this data to my object before executing the calculations in the vertex shader? I've read about gl(Push|Pop)Matrix(), but from what I understand these functions are deprecated.
A bit of my code. Position from vertex shader:
gl_Position = gl_ProjectionMatrix * gl_ModelViewMatrix * gl_Vertex;
And C++ function to display objects:
// Clear etc...
mat4 lookAt = glm::lookAt();
glLoadMatrixf(&lookAt[0][0]);
mat4 combined = lookAt * (mat4) sceneTranslation * (mat4) sceneScale;
glLoadMatrixf(&combined[0][0]);
mat4 objectTransform(1.0);
// Transformations...
// No idea if it works, but objects are affected by camera position but not individually scaled, moved etc.
GLuint gl_ModelViewMatrix = glGetUniformLocation(shaderprogram, "gl_ModelViewMatrix");
glUniformMatrix4fv(gl_ModelViewMatrix, 1, GL_FALSE, &objectTransform[0][0]);
// For example
glutSolidCube(1.0);
glutSwapBuffers();
Well, you don't have to use glLoadMatrix and the other built-in matrix functions; they can be even more difficult than handling your own matrices.
A simple example of a static camera (no controls):
glm::mat4x4 view_matrix = glm::lookAt(
cameraPosition,
cameraPosition+directionVector, //or the focus point the camera is pointing to
upVector);
It returns a 4x4 matrix; this is the view matrix.
glm::mat4x4 projection_matrix =
glm::perspective(60.0f, float(screenWidth)/float(screenHeight), 1.0f, 1000.0f);
This is the projection matrix.
So now you have the view and projection matrices, and you can send them to the shader:
gShader->bindShader();
gShader->sendUniform4x4("view_matrix",glm::value_ptr(view_matrix));
gShader->sendUniform4x4("projection_matrix",glm::value_ptr(projection_matrix));
bindShader() is simply glUseProgram(shaderprog), and the uniform helper is:
void sendUniform4x4(const string& name, const float* matrix, bool transpose=false)
{
GLuint location = getUniformLocation(name);
glUniformMatrix4fv(location, 1, transpose, matrix);
}
Your model matrix is individual for each of your objects:
glm::mat4x4 model_matrix = glm::mat4(1); //this is the identity matrix
model_matrix= glm::rotate(model_matrix,
rotationdegree,
vec3(axis)); //same as opengl function.
This creates a model matrix, and you can send it to your shader too:
gShader->bindShader();
gShader->sendUniform4x4("model_matrix",glm::value_ptr(model_matrix));
glm::value_ptr(...) returns a pointer to the matrix's underlying float data, which is what glUniformMatrix4fv expects.
In your shader code, don't use gl_ModelViewMatrix and gl_ProjectionMatrix;
the matrices are sent via uniforms.
uniform mat4 projection_matrix;
uniform mat4 view_matrix;
uniform mat4 model_matrix;
void main(){
gl_Position = projection_matrix*view_matrix*model_matrix*gl_Vertex;
//gl_Vertex is used here because of glutSolidTeapot.
}
I've never used this built-in mesh function, so I don't know exactly how it works; assuming it sends the vertices to the shader in immediate mode, use gl_Vertex.
If you create your own meshes, use a VBO with glVertexAttribPointer and glDrawArrays/glDrawElements.
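For reference, a minimal VBO/VAO setup of that kind might look like the following sketch (it assumes a modern context, a linked program `shaderprog`, and a vec3 attribute at location 0; it needs a live GL context to run):

```cpp
// One triangle, positions only.
float vertices[] = {
    -0.5f, -0.5f, 0.0f,
     0.5f, -0.5f, 0.0f,
     0.0f,  0.5f, 0.0f,
};

GLuint vao, vbo;
glGenVertexArrays(1, &vao);
glGenBuffers(1, &vbo);

glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

// Attribute 0: three floats per vertex, tightly packed.
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);
glEnableVertexAttribArray(0);

// Later, per frame: bind the shader, send the uniforms, then draw.
glUseProgram(shaderprog);
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, 3);
```

With this path the vertex shader reads the position from the attribute (e.g. `layout(location = 0) in vec3 position;`) instead of gl_Vertex.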
Don't forget to bind the shader before sending uniforms.
So with a complete example:
glm::mat4x4 view_matrix = glm::lookAt(glm::vec3(2,4,2), glm::vec3(-1,-1,-1), glm::vec3(0,1,0));
glm::mat4x4 projection_matrix =
glm::perspective(60.0f, float(screenWidth)/float(screenHeight), 1.0f, 10.0f);
glm::mat4x4 model_matrix= glm::mat4(1); //this remains unchanged
glm::mat4x4 teapot_model_matrix= glm::rotate(model_matrix, //this is the teapots model matrix, apply transformations to this
45,
glm::vec3(1,1,1));
teapot_model_matrix = glm::scale(teapot_model_matrix, glm::vec3(2,2,2));
gShader->bindShader();
gShader->sendUniform4x4("model_matrix",glm::value_ptr(teapot_model_matrix));
gShader->sendUniform4x4("view_matrix",glm::value_ptr(view_matrix));
gShader->sendUniform4x4("projection_matrix",glm::value_ptr(projection_matrix));
glutSolidCube(1.0); //the argument is the cube's edge length
glutSwapBuffers();
///////////////////////////////////
in your shader:
uniform mat4 projection_matrix; //these are the matrixes you've sent
uniform mat4 view_matrix;
uniform mat4 model_matrix;
void main(){
gl_Position = projection_matrix*view_matrix*model_matrix*vec4(gl_Vertex.xyz,1);
}
Now you should have a camera positioned at (2,4,2), focusing on (-1,-1,-1), with the up vector pointing up :)
A teapot is rotated by 45 degrees around the (1,1,1) vector, and scaled by 2 in every direction.
After changing a model matrix, send it to the shader; if you have more objects to render, send each object's matrix before drawing it so that each mesh gets its own transformation.
A pseudocode for this looks like:
camera.lookat(camerapostion,focuspoint,updirection); //sets the view
camera.project(fov,aspect ratio,near plane, far plane) //and projection matrix
camera.sendviewmatrixtoshader;
camera.sendprojectionmatrixtoshader;
obj1.rotate(45 degrees, 1,1,1); //these functions should transform the model matrix of the object. Make sure each one has its own.
obj1.sendmodelmatrixtoshader;
obj2.scale(2,1,1);
obj2.sendmodelmatrixtoshader;
If it doesn't work try it with a vertexBuffer, and a simple triangle or cube created by yourself.
You should use a math library; I recommend GLM. It has matrix functions just like OpenGL's and uses column-major matrices, so you can calculate your own and apply them to objects.
First, you should have a matrix class for your scene which calculates your view matrix and projection matrix (glm::lookAt and glm::perspective). They work the same as in OpenGL. You can send them as uniforms to the vertex shader.
For the objects, you calculate your own matrices and send them as the model matrix to the shader(s).
In the shader or on the CPU you calculate the VP matrix:
vp = proj * view
You send your individual model matrices to the shader and calculate the final position:
gl_Position = vp*m*vec4(vertex.xyz,1);
MODEL MATRIX
With glm, you can easily calculate and transform your matrices. You create a simple identity matrix:
glm::mat4x4(1) //identity
you can translate, rotate, scale it.
glm::scale
glm::rotate
glm::translate
They work like the equivalent immediate-mode OpenGL functions.
After you have your matrix, send it via the uniform.
MORE MODEL MATRIX
shader->senduniform("proj", camera.projectionmatrix);
shader->senduniform("view", camera.viewmatrix);
glm::mat4 model(1);
obj1.modelmatrix = glm::translate(model,vec3(1,2,1));
shader->senduniform("model", obj1.modelmatrix);
objectloader.render(obj1);
obj2.modelmatrix = glm::rotate(model,obj2.degrees,vec3(obj2.rotationaxis));
shader->senduniform("model", obj2.modelmatrix);
objectloader.render(obj2);
This is just one way to do it. You can write a class for push/pop matrix calculations, or automate the method above like this:
obj1.rotate(degrees,vec3(axis)); //this calculates the obj1.modelmatrix for example rotating the identity matrix.
obj1.translate(vec3(x,y,z))//more transform
obj1.render();
//continue with object 2
VIEW MATRIX
The view matrix is almost the same as a model matrix. Use it to control the camera, a kind of global "model matrix": it transforms your scene globally, while each object still has its own individual model matrix.
In my camera class I calculate this with glm::lookAt (the same as in OpenGL), then send it via uniform to all the shaders I use.
Then when I render something I can manipulate its model matrix, rotating or scaling it, but the view matrix is global.
If you want a static object, you don't have to use a model matrix for it; you can calculate the position with only:
gl_Position = projmatrix*viewmatrix*staticobjectvertex;
GLOBAL MODEL MATRIX
You can have a global model matrix too.
Use it like
renderer.globmodel.rotate(axis,degree);
renderer.globmodel.scale(x,y,z);
Send it as uniform too, and apply it after the objects' model matrix.
(I've used it to render ocean reflections to texture.)
To sum up:
create a global view(camera) matrix
create a model matrix for each of your scenes, meshes or objects
transform the objects' matrixes individually
send the projection, model and view matrixes via uniforms to the shader
calculate the final position: proj*camera*model*vertex
move your objects, and move your camera
I'm not saying there aren't any better way to do this, but this works for me well.
PS: if you'd like some camera class tuts I have a pretty good one;).
How do I apply the drawing position in the world via shaders?
My vertex shader looks like this:
in vec2 position;
uniform mat4x4 model;
uniform mat4x4 view;
uniform mat4x4 projection;
void main() {
gl_Position = projection * view * model * vec4(position, 0.0, 1.0);
}
Where position is the positions of the vertexes of the triangles.
I'm binding the matrices as follows.
view:
glm::mat4x4 view = glm::lookAt(
glm::vec3(0.0f, 1.2f, 1.2f), // camera position
glm::vec3(0.0f, 0.0f, 0.0f), // camera target
glm::vec3(0.0f, 0.0f, 1.0f)); // camera up axis
GLint view_uniform = glGetUniformLocation(shader, "view");
glUniformMatrix4fv(view_uniform, 1, GL_FALSE, glm::value_ptr(view));
projection:
glm::mat4x4 projection = glm::perspective(80.0f, 640.0f/480.0f, 0.1f, 100.0f);
GLint projection_uniform = glGetUniformLocation(shader, "projection");
glUniformMatrix4fv(projection_uniform, 1, GL_FALSE, glm::value_ptr(projection));
model transformation:
glm::mat4x4 model;
model = glm::translate(model, glm::vec3(1.0f, 0.0f, 0.0f));
model = glm::rotate(model, static_cast<float>((glm::sin(currenttime)) * 360.0), glm::vec3(0.0, 0.0, 1.0));
GLint trans_uniform = glGetUniformLocation(shader, "model");
glUniformMatrix4fv(trans_uniform, 1, GL_FALSE, glm::value_ptr(model));
So this way I have to compute the position transformation each frame on the CPU. Is this the recommended way or is there a better one?
So this way I have to compute the position transformation each frame on the CPU. Is this the recommended way or is there a better one?
Yes. Calculating a transform once per mesh on the CPU, then applying it to all of the mesh's vertices in the vertex shader, is not a slow operation, and it does need to be done every frame.
In the render() method you usually do the following things
create matrix for camera (once per frame usually)
for each object in the scene:
create transformation (position) matrix
draw object
The projection matrix can be created once per window resize, or together with the camera matrix.
Answer: Your code is good, it is a basic way to draw/update objects.
You could move to a framework/system that manages this automatically. You should not worry (right now) about the performance of those matrix creation procedures... they are very fast. Drawing is the more expensive part.
As jozxyqk wrote in one comment, you can create a combined ModelViewProjection matrix and send that one matrix instead of three different ones.
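A sketch of that combined approach, assuming GLM and a uniform named "MVP" in the shader (the names are illustrative, and the values are taken from the question's matrices):

```cpp
// Build the three matrices as before...
glm::mat4 model = glm::translate(glm::mat4(1.0f), glm::vec3(1.0f, 0.0f, 0.0f));
glm::mat4 view = glm::lookAt(glm::vec3(0.0f, 1.2f, 1.2f),
                             glm::vec3(0.0f, 0.0f, 0.0f),
                             glm::vec3(0.0f, 0.0f, 1.0f));
glm::mat4 projection = glm::perspective(80.0f, 640.0f / 480.0f, 0.1f, 100.0f);

// ...combine them once per object on the CPU...
glm::mat4 mvp = projection * view * model;

// ...and upload a single uniform. The vertex shader then only does:
//   gl_Position = MVP * vec4(position, 0.0, 1.0);
GLint mvp_uniform = glGetUniformLocation(shader, "MVP");
glUniformMatrix4fv(mvp_uniform, 1, GL_FALSE, glm::value_ptr(mvp));
```

This saves two uniform uploads and two matrix multiplications per vertex, at the cost of re-uploading the combined matrix whenever any of the three inputs changes.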
I'm attempting to create a Camera class for a 3D OpenGL project. However I cannot figure out how to actually apply the camera to my scene. I have these Camera functions (amongst others):
void Camera::update(){
glm::vec3 direction(cos(_verticalAngle) * sin(_horizontalAngle), sin(_verticalAngle), cos(_verticalAngle) * cos(_horizontalAngle));
glm::vec3 right = glm::vec3(sin(_horizontalAngle - 3.14f/2.0f), 0, cos(_horizontalAngle - 3.14f/2.0f));
glm::vec3 up = glm::cross(right, direction);
_projectionMatrix = glm::perspective(_FoV, float(VIEWPORT_X) / float(VIEWPORT_Y), 0.1f, 250.0f);
_viewMatrix = glm::lookAt(_position, _position + direction, up);
}
glm::mat4 Camera::getProjectionMatrix(){
return _projectionMatrix;
}
glm::mat4 Camera::getViewMatrix(){
return _viewMatrix;
}
They were created from a tutorial, I'm not sure if they work though since I can't test them. What I want to do is get OpenGL to use the view and projection matrices to simulate a camera. How exactly do I tell OpenGL to use those projection and view matrices, so that it properly simulates a camera separate from model's transformations? I'm aware OpenGL will not accept glm matrices by default, but I have seen this type of thing in a few tutorials:
glm::mat4 ProjectionMatrix = getProjectionMatrix();
glm::mat4 ViewMatrix = getViewMatrix();
glm::mat4 ModelMatrix = glm::mat4(1.0);
glm::mat4 MVP = ProjectionMatrix * ViewMatrix * ModelMatrix;
glUniformMatrix4fv(MatrixID, 1, GL_FALSE, &MVP[0][0]);
but glUniformMatrix4fv (which I think applies the camera transforms?) makes no sense to me. It always has something to do with shaders, which I have none of. I simply have a wireframe test mesh currently. Could someone provide me a code snippet for this problem?
Use glLoadMatrixf() if you are not using shaders. If you just want to multiply the current matrix, use glMultMatrixf(); you can switch the current matrix mode with glMatrixMode(GL_PROJECTION or GL_MODELVIEW). For example (this is your code):
glm::mat4 ProjectionMatrix = getProjectionMatrix();
//setup projection matrix for opengl
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glMultMatrixf(glm::value_ptr(ProjectionMatrix));
or:
glMultMatrixf(&ProjectionMatrix[0][0]);
EDIT:
If you want to apply a transform to your model (model and view are combined in the fixed-function pipeline):
glMatrixMode(GL_MODELVIEW);
glLoadMatrixf(&viewMatrix[0][0]);
glMultMatrixf(&modelTransform[0][0]); //model * view
Draw_your_model();
You might need to set your transform like this:
modelTransform = glm::translate(modelTransform, glm::vec3(-10,-10,-10));
(If this translation belongs to the view, moving the view by (-10,-10,-10) effectively places the camera at (10,10,10).)
I don't know about using GLM, but I can help with the regular OpenGL part.
glUniformMatrix4fv updates a 4x4 uniform matrix at the location specified by MatrixID in a particular shader program.
I recommend working through Learning Modern 3D Graphics Programming, which is excellent as both a reference and guide.
For a discussion of how these uniforms are used within the GLSL shader program see:
Learning Modern 3D Graphics Programming - Chapter 3
Based on your code, you should do the following:
glm::mat4 ProjectionMatrix = getProjectionMatrix();
glm::mat4 ViewMatrix = inverse(getViewMatrix());//invert only if getViewMatrix() returns the camera's world transform; glm::lookAt already returns the inverse, so skip this in that case
glm::mat4 ModelMatrix = glm::mat4(1.0);
glm::mat4 MVP = ProjectionMatrix * ViewMatrix * ModelMatrix;
glUniformMatrix4fv(MatrixID, 1, GL_FALSE, value_ptr(MVP));//use value_ptr method to pass matrix pointer
Also, to set up a proper camera matrix, I would suggest using GLM's built-in lookAt() method: give it the eye, target and up vectors and it composes the final view matrix for you.
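For instance, a hedged sketch of that flow (assuming MatrixID was obtained with glGetUniformLocation for the shader's "MVP" uniform, and that the eye/target values are placeholders):

```cpp
glm::mat4 ProjectionMatrix = getProjectionMatrix();

// glm::lookAt composes eye/target/up into a view matrix for you,
// so it can be used as-is -- no inverse() required.
glm::mat4 ViewMatrix = glm::lookAt(
    glm::vec3(4.0f, 3.0f, 3.0f),   // eye: camera position (example values)
    glm::vec3(0.0f, 0.0f, 0.0f),   // target: point the camera looks at
    glm::vec3(0.0f, 1.0f, 0.0f));  // up direction

glm::mat4 ModelMatrix = glm::mat4(1.0);
glm::mat4 MVP = ProjectionMatrix * ViewMatrix * ModelMatrix;
glUniformMatrix4fv(MatrixID, 1, GL_FALSE, glm::value_ptr(MVP));
```

Note that this needs a shader program bound with glUseProgram before the uniform upload; in a fixed-function setup you would instead load these matrices with glLoadMatrixf as shown earlier.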