Consider the following vertex shader:
attribute vec4 a_Position;
uniform mat4 u_ModelMatrix;
void main() {
    gl_Position = u_ModelMatrix * a_Position;
    gl_PointSize = 3.0;
}
In my JavaScript program I manipulate u_ModelMatrix to apply a rotation and a translation. This works on a triangle that I draw. But I noticed that if I draw a second object with its own vertex buffer object:
var vertexBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
gl.bufferData(gl.ARRAY_BUFFER, point, gl.STATIC_DRAW);
gl.uniform4f(u_FragColor, 1,1,0,1);
gl.drawArrays(gl.POINTS, 0, 1);
Then the translations and rotations don't apply to this object. I thought they would, since gl_Position in the GLSL program is the point multiplied by the matrix. This is what I would like to happen; I'm just curious as to why this is the case.
Buffers get bound to vertex attributes when you call gl.vertexAttribPointer. Whatever buffer is bound to gl.ARRAY_BUFFER at the time you call gl.vertexAttribPointer is the one attached to that attribute. Above, you're creating a new buffer, but since there is no call to gl.vertexAttribPointer, your attribute still points to whatever buffer you previously attached.
Whether you want to replace the contents of the previous already existing buffer or create a new buffer is up to you.
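For example, if you keep the new buffer, a minimal sketch of the fix (assuming a_PositionLoc was obtained earlier with gl.getAttribLocation, and that point holds four floats per vertex):
var vertexBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
gl.bufferData(gl.ARRAY_BUFFER, point, gl.STATIC_DRAW);

// Point a_Position at the buffer currently bound to gl.ARRAY_BUFFER.
// Without this call, the attribute keeps reading from the previously attached buffer.
gl.vertexAttribPointer(a_PositionLoc, 4, gl.FLOAT, false, 0, 0);
gl.enableVertexAttribArray(a_PositionLoc);

gl.uniform4f(u_FragColor, 1, 1, 0, 1);
gl.drawArrays(gl.POINTS, 0, 1);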
I'm having a little problem with glDrawArraysInstanced().
Right now I'm trying to draw a chess board with pieces.
I have all the models loaded in properly.
I've tried drawing only pawns with instanced drawing, and it worked. I would send an array of transformation vec3s to the shader through a uniform and step through the array with gl_InstanceID.
That would be done with this for loop (individual draw call for each model):
for (auto& i : this->models) {
    i->draw(this->shaders[0], count);
}
which eventually leads to:
glDrawArraysInstanced(GL_TRIANGLES, 0, vertices.size(), count);
where the vertex shader is:
#version 460
layout(location = 0) in vec3 vertex_pos;
layout(location = 1) in vec2 vertex_texcoord;
layout(location = 2) in vec3 vertex_normal;
out vec3 vs_pos;
out vec2 vs_texcoord;
out vec3 vs_normal;
flat out int InstanceID;
uniform mat4 modelMatrix;
uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;
uniform vec3 offsets[16];
void main(void){
    vec3 offset = offsets[gl_InstanceID]; // saving the transformation in the offset
    InstanceID = gl_InstanceID; // unimportant
    vs_pos = vec4(modelMatrix * vec4(vertex_pos + offset, 1.f)).xyz; // using the offset
    vs_texcoord = vec2(vertex_texcoord.x, 1.f - vertex_texcoord.y);
    vs_normal = mat3(transpose(inverse(modelMatrix))) * vertex_normal;
    gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(vertex_pos + offset, 1.f); // using the offset
}
Now my problem is that I don't know how to draw multiple objects this way and change their transformations, since gl_InstanceID starts from 0 on each draw call, so my transformation array would be read from the beginning again (which would just draw the next pieces on the pawns' positions).
Any help will be appreciated.
You've got two problems. Or rather, you have one problem, but the natural solution will create a second problem for you.
The natural solution to your problem is to use one of the base-instance rendering functions, like glDrawElementsInstancedBaseInstance. These allow you to specify a starting instance for your instanced rendering calls.
This will precipitate a second problem: gl_InstanceID does not respect the base instance. It will always be in the range [0, instancecount). Only instance arrays respect the base instance. So instead of using a uniform to provide your per-instance data, you must use instanced array rendering. This means storing the per-instance data in a buffer object (which you should have done anyway) and accessing it via a VS input whose VAO specifies that the particular attribute is instanced.
This also has the advantage of not restricting your instance count to uniform limitations.
OpenGL 4.6/ARB_shader_draw_parameters allows access to the gl_BaseInstance vertex shader input, which provides the base instance value specified by the draw command. So if you don't want to (or can't) use instanced arrays (for example, if the amount of per-instance data is too big for the attribute limitations), you will have to rely on that extension/4.6 functionality. Recent desktop GL drivers offer this functionality, so if your hardware is decently new, you should be able to use it.
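As a rough sketch of the instanced-array approach (offsetBuffer, attribute location 3, buildAllPieceOffsets, and baseInstance are placeholders, not names from the code above): put every piece's offset into one buffer, make it an instanced attribute, and give each model a different base instance:
// One tightly packed vec3 per instance, all models' offsets back to back.
// buildAllPieceOffsets() is a hypothetical helper that fills this vector.
std::vector<glm::vec3> allOffsets = buildAllPieceOffsets();

GLuint offsetBuffer = 0;
glGenBuffers(1, &offsetBuffer);
glBindBuffer(GL_ARRAY_BUFFER, offsetBuffer);
glBufferData(GL_ARRAY_BUFFER, allOffsets.size() * sizeof(glm::vec3),
             allOffsets.data(), GL_STATIC_DRAW);

// For each model, with its VAO bound:
glBindBuffer(GL_ARRAY_BUFFER, offsetBuffer);
glEnableVertexAttribArray(3);
glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, sizeof(glm::vec3), (void*)0);
glVertexAttribDivisor(3, 1); // advance this attribute once per instance, not per vertex

// Draw call: baseInstance selects where this model's offsets start in the buffer.
glDrawArraysInstancedBaseInstance(GL_TRIANGLES, 0, vertices.size(),
                                  count, baseInstance);
On the shader side, the uniform array and the gl_InstanceID lookup are then replaced by a plain instanced input, e.g. layout(location = 3) in vec3 offset;.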
I have two classes, each with its own model coordinates, colors, etc. I also have two shader programs that are logically the same. First I execute one shader program, edit the uniforms with the traditional view and projection matrices, then call the class to edit the model matrix uniquely and draw its primitives. Immediately afterwards, I do the exact same thing but with the second shader program: edit the uniforms again, and call the second class to draw its primitives with its own unique model matrix coordinates.
In the second class I translate the model matrix on each iteration, but not in the first class. For some reason it translates the model matrix in the first class as well, and I don't know why.
Source code:
//First shader program: update view and proj matrix, and have the first class draw its vertices
executable.Execute();
GLuint viewMatrix = glGetUniformLocation(executable.getComp(), "viewMatrix");
glUniformMatrix4fv(viewMatrix, 1, GL_FALSE, glm::value_ptr(freeView.getFreeView()));
GLuint projMatrix = glGetUniformLocation(executable.getComp(), "projectionMatrix");
glUniformMatrix4fv(projMatrix, 1, GL_FALSE, glm::value_ptr(projectionMatrix.getProjectionMatrix()));
temp.useClass(executable);
//Second shader program: update view and proj matrix, and have the second class draw its vertices
executable2.Execute();
viewMatrix = glGetUniformLocation(executable2.getComp(), "viewMatrix");
glUniformMatrix4fv(viewMatrix, 1, GL_FALSE, glm::value_ptr(freeView.getFreeView()));
projMatrix = glGetUniformLocation(executable2.getComp(), "projectionMatrix");
glUniformMatrix4fv(projMatrix, 1, GL_FALSE, glm::value_ptr(projectionMatrix.getProjectionMatrix()));
temp2.useClass(executable2);
Vertex shader:
#version 330 core
layout(location = 0) in vec3 positions;
layout(location = 1) in vec3 colors;
uniform mat4 modelMatrix;
uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;
out vec3 color;
void main()
{
    gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(positions, 1.0f);
    color = colors;
}
The second vertex shader is logically the same, with just different variable names, and the fragment shader just outputs color.
useClass function (from class one):
glBindVertexArray(tempVAO);
glm::mat4 modelMat(1.0f); // explicitly identity; a default-constructed glm::mat4 is not guaranteed to be initialized
GLuint modelMatrix = glGetUniformLocation(exe.getComp(), "modelMatrix");
glUniformMatrix4fv(modelMatrix, 1, GL_FALSE, glm::value_ptr(modelMat));
glDrawArrays(GL_TRIANGLES, 0, 3);
glBindVertexArray(0);
useClass function (from class two):
glBindVertexArray(tempVAO);
for (GLuint i = 0; i < 9; i++)
{
    model[i] = glm::translate(model[i], gravity);
    GLuint modelMatrix = glGetUniformLocation(exe.getComp(), "modelMatrix");
    glUniformMatrix4fv(modelMatrix, 1, GL_FALSE, glm::value_ptr(model[i]));
    glDrawArrays(GL_POINTS, 0, 1);
}
glBindVertexArray(0);
Both classes have data protection, and I just don't understand how translating the model matrix in one class makes the model matrix in the other class get translated as well when using two shader programs. When I use one shader program for both classes, the translation works out fine, but not when I use two shader programs (one for each class)...
EDIT: After working on my project a little more, I figured out that the same problem happens when I compile and link two different shader programs from the exact same vertex and fragment shaders and just use each shader program before I draw from each class. So now my question is more along the lines of: why does using two identical shader programs between draws cause all of the vertices/model matrices to get translated?
I figured out what the problem was. Uniform calls such as glUniformMatrix4fv always apply to whichever program is currently active, so when I passed shader programs through functions into other parts of the program, the wrong program was sometimes active when a uniform was updated, and the model matrix was not getting reset consistently. To fix this issue, I limited the scope of each individual shader. Instead of activating shaders in one function and then passing them through to other classes, I put each shader in the respective class that it gets used in, so each class makes its own program current immediately before setting its uniforms and drawing.
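In pattern form (a sketch; prog1/prog2, the uniform locations, and the draw helpers are placeholders), the safe ordering looks like this, because glUniform* calls only ever affect the currently active program:
glUseProgram(prog1); // prog1 is now the active program
glUniformMatrix4fv(modelLoc1, 1, GL_FALSE, glm::value_ptr(modelMat1));
drawFirstObject();   // hypothetical draw call for the first class

glUseProgram(prog2); // switch programs before touching prog2's uniforms
glUniformMatrix4fv(modelLoc2, 1, GL_FALSE, glm::value_ptr(modelMat2));
drawSecondObject();  // hypothetical draw call for the second class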
I am trying to render an object using GLM for matrix transformations, but I'm getting this:
EDIT: Forgot to mention that the object I'm trying to render is a simple Torus.
I did a lot of digging around, and one thing I noticed is that glGetUniformLocation(program, "mvp") returns -1. The docs say it will return -1 if the uniform variable isn't used in the shader, even if it is declared. As you can see below, it is declared and is being used in the vertex shader. I've checked program to make sure it is valid, and so on.
So my questions are:
Question 1:
Why is glGetUniformLocation(program, "mvp") returning -1 even though it is declared and is being used in the vertex shader?
Question 2: (Which I think may be related to Q1)
Another thing I'm not particularly clear on (which I think may be related to Q1): my GameObject class has a struct called Mesh with members GLuint vao and GLuint vbo[4] (a vertex array object and vertex buffer objects). I am using Assimp, and my GameObject class is based on this tutorial. The meshes are rendered in the same way as in the tutorial, using:
glBindVertexArray(vao);
glDrawElements(GL_TRIANGLES, elementCount, GL_UNSIGNED_INT, NULL);
I'm not sure how VAOs and VBOs work. What I've found is that VAOs are used if you want access to the vertex arrays throughout your program, and VBOs are used if you just want to send the data to the graphics card and not touch it again (correct me if I'm wrong here). So why does the tutorial mix them? In the constructor for a mesh, it creates and binds a VAO and then doesn't touch it for the rest of the constructor (unless creating and binding VBOs has an effect on the currently bound VAO). It then goes on to create and bind VBOs for the vertex buffer, normal buffer, texture coordinate buffer, and index buffer. To render the object, it binds the VAO and calls glDrawElements. What I'm confused about is how and where OpenGL accesses the VBOs, and if it can't with the setup in the tutorial (which I'm pretty sure it can), what needs to change?
Source
void GameObject::render() {
    GLuint program = material->shader->program;
    glUseProgram(program);

    glm::mat4 mvp = Game::camera->mvpMatrix(this->position);
    GLuint mvpLoc = glGetUniformLocation(program, "mvp");
    printf("MVP Location: %d\n", mvpLoc); // prints -1
    glUniformMatrix4fv(mvpLoc, 1, GL_FALSE, &mvp[0][0]);

    for (unsigned int i = 0; i < meshes.size(); i++) {
        meshes.at(i)->render(); // renders the element array for each mesh in the GameObject
    }
}
Vertex shader (simple unlit red color):
#version 330 core
layout(location = 0) in vec3 position;
uniform mat4 mvp;
out vec3 vertColor;
void main(void) {
    gl_Position = mvp * vec4(position, 1);
    vertColor = vec3(1, 0, 0);
}
Fragment shader:
#version 330 core
in vec3 vertColor;
out vec3 color;
void main(void) {
    color = vertColor;
}
Question 1
You've pretty much answered this one yourself. glGetUniformLocation(program, name) returns the location of the uniform name in the shader program program, and returns -1 if the uniform is not declared or not actively used (if you don't use it, the compiler optimizes it away). Your shader does declare and use mvp, which strongly suggests there is an issue with the program itself. Are you sure you are using this shader in the program?
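One quick way to check (a sketch, not from the posted code) is to query the link status and info log right after linking:
GLint linked = GL_FALSE;
glGetProgramiv(program, GL_LINK_STATUS, &linked);
if (linked != GL_TRUE) {
    char log[1024];
    glGetProgramInfoLog(program, sizeof(log), NULL, log);
    printf("Program link failed:\n%s\n", log);
}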
Question 2
A VBO stores the data values that the GPU will use. These could be colour values, normals, texture coordinates, whatever you like.
A VAO is used to express the layout of your VBOs - think of it like a map, indicating to your program where to find the data in the VBOs.
The example program does touch the VAO whenever it calls glVertexAttribPointer, e.g.
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, NULL);
This is not related to your missing uniform.
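For reference, a typical setup sketch (placeholder names, not the tutorial's exact code) showing how the VAO records which VBO each attribute pulls from:
glGenVertexArrays(1, &vao);
glBindVertexArray(vao); // subsequent attribute/index state is recorded in this VAO

glGenBuffers(1, &positionVbo);
glBindBuffer(GL_ARRAY_BUFFER, positionVbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(positions), positions, GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, NULL); // attribute 0 now reads from positionVbo
glEnableVertexAttribArray(0);

glGenBuffers(1, &indexVbo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexVbo); // the element array binding is part of the VAO too
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

glBindVertexArray(0);

// Later, rendering needs only:
glBindVertexArray(vao);
glDrawElements(GL_TRIANGLES, elementCount, GL_UNSIGNED_INT, NULL);
This is why the tutorial can bind just the VAO at draw time: the VBO associations were baked into it during construction.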
Alright, so I've got basic models loading and rendering in an OpenGL engine. I had animations working for one model. However, when I tried adding multiple animated models to a scene, I got a bunch of weird behaviour: the last model animated incorrectly.
In trying to isolate the issue, I believe I've happened upon something that might be related: when rendering a model, if I 'zero out' the bone data in OpenGL (that is, send in a bunch of identity matrices) and THEN send the actual bone data, I get weird 'stuttering' in the model's animation. It looks like there is a gap in the animation, where the model suddenly goes back to its neutral position, then quickly returns to the animation on the next frame.
I'm using Debian 7 64bit with the proprietary NVidia graphics drivers installed (GeForce GTX 560M with 3GB VRAM).
I have a video of this happening here: http://jarrettchisholm.com/static/videos/wolf_model_animation_problem_1.ogv
It's a bit hard to see in the video (it doesn't capture all of the frames, I guess). You can see it more clearly when the wolf is on its side. This happens throughout the animation.
My model render code:
for ( glm::detail::uint32 i = 0; i < meshes_.size(); i++ )
{
    if ( textures_[i] != nullptr )
    {
        // TODO: bind to an actual texture position (for multiple textures per mesh, which we currently don't support...maybe at some point we will??? Why would we need multiple textures?)
        textures_[i]->bind();
        //shader->bindVariable( "Texture", textures_[i]->getBindPoint() );
    }

    if ( materials_[i] != nullptr )
    {
        materials_[i]->bind();
        shader->bindVariable( "Material", materials_[i]->getBindPoint() );
    }

    if (currentAnimation_ != nullptr)
    {
        // This is when I send the identity matrices to the shader
        emptyAnimation_->bind();
        shader->bindVariable( "Bones", emptyAnimation_->getBindPoint() );

        glw::Animation* a = currentAnimation_->getAnimation();
        a->setAnimationTime( currentAnimation_->getAnimationTime() );

        // This generates the new bone matrices
        a->generateBoneTransforms(globalInverseTransformation_, rootBoneNode_, meshes_[i]->getBoneData());

        // This sends the new bone matrices to the shader, and also binds the buffer
        a->bind();

        // This sets the bind point to the Bone uniform matrix in the shader
        shader->bindVariable( "Bones", a->getBindPoint() );
    }
    else
    {
        // Zero out the animation data
        // TODO: Do we need to do this?
        // TODO: find a better way to load 'empty' bone data in the shader
        emptyAnimation_->bind();
        shader->bindVariable( "Bones", emptyAnimation_->getBindPoint() );
    }

    meshes_[i]->render();
}
The shader binding code:
void GlslShaderProgram::bindVariable(std::string varName, GLuint bindPoint)
{
    GLuint uniformBlockIndex = glGetUniformBlockIndex(programId_, varName.c_str());
    glUniformBlockBinding(programId_, uniformBlockIndex, bindPoint);
}
Animation code:
...
// This gets called when we create an Animation object
void Animation::setupAnimationUbo()
{
    bufferId_ = openGlDevice_->createBufferObject(GL_UNIFORM_BUFFER, 100 * sizeof(glm::mat4), &currentTransforms_[0]);
}

void Animation::loadIntoVideoMemory()
{
    glBindBuffer(GL_UNIFORM_BUFFER, bufferId_);
    glBufferSubData(GL_UNIFORM_BUFFER, 0, currentTransforms_.size() * sizeof(glm::mat4), &currentTransforms_[0]);
}
/**
 * Will stream the latest transformation matrices into OpenGL memory, and will then bind the data to a bind point.
 */
void Animation::bind()
{
    loadIntoVideoMemory();
    bindPoint_ = openGlDevice_->bindBuffer( bufferId_ );
}
...
My OpenGL Wrapper code:
...
GLuint OpenGlDevice::createBufferObject(GLenum target, glmd::uint32 totalSize, const void* dataPointer)
{
    GLuint bufferId = 0;
    glGenBuffers(1, &bufferId);
    glBindBuffer(target, bufferId);
    glBufferData(target, totalSize, dataPointer, GL_DYNAMIC_DRAW);
    glBindBuffer(target, 0);

    bufferIds_.push_back(bufferId);
    return bufferId;
}
...
GLuint OpenGlDevice::bindBuffer(GLuint bufferId)
{
    // TODO: Do I need a better algorithm here?
    GLuint bindPoint = bindPoints_[currentBindPoint_];
    currentBindPoint_++;
    if ( currentBindPoint_ > bindPoints_.size() )
        currentBindPoint_ = 1;

    glBindBufferBase(GL_UNIFORM_BUFFER, bindPoint, bufferId);
    return bindPoint;
}
...
My Vertex shader:
#version 150 core
uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;
uniform mat4 modelMatrix;
uniform mat4 pvmMatrix;
uniform mat3 normalMatrix;
in vec3 in_Position;
in vec2 in_Texture;
in vec3 in_Normal;
in ivec4 in_BoneIds;
in vec4 in_BoneWeights;
out vec2 textureCoord;
out vec3 normalDirection;
out vec3 lightDirection;
struct Light {
    vec4 ambient;
    vec4 diffuse;
    vec4 specular;
    vec4 position;
    vec4 direction;
};

layout(std140) uniform Lights
{
    Light lights[ 2 ];
};

layout(std140) uniform Bones
{
    mat4 bones[ 100 ];
};

void main() {
    // Calculate the transformation on the vertex position based on the bone weightings
    mat4 boneTransform = bones[ in_BoneIds[0] ] * in_BoneWeights[0];
    boneTransform += bones[ in_BoneIds[1] ] * in_BoneWeights[1];
    boneTransform += bones[ in_BoneIds[2] ] * in_BoneWeights[2];
    boneTransform += bones[ in_BoneIds[3] ] * in_BoneWeights[3];

    vec4 tempPosition = boneTransform * vec4(in_Position, 1.0);
    gl_Position = pvmMatrix * tempPosition;

    vec4 lightDirTemp = viewMatrix * lights[0].direction;
    textureCoord = in_Texture;
    normalDirection = normalize(normalMatrix * in_Normal);
    lightDirection = normalize(vec3(lightDirTemp));
}
I apologize if I haven't included enough information - I put in what I thought would be useful. If you want/need to see more, you can get all of the code at https://github.com/jarrettchisholm/glr under the master_animation_work branch.
It isn't really OpenGL-specific.
When an exporter exports a model, some exporters include the "skin parade" pose, i.e. the pose in which the "bone modifier" was initially applied.
In your case, it is probably one of these:
Either your exporter exported this "skin parade" pose as the very first frame (and the animation loops over it),
or your animation framework can't loop properly: it can't find the next frame when it is on the last animation key, and uses the "skin parade" pose as the default key.
The problem is probably in the routine that calculates transforms for animations.
Here's how you debug it.
Render a debug bone hierarchy (using the dumbest shader possible, or even fixed-function OpenGL). A debug bone hierarchy could look like this:
In the picture, orange lines show the current positions of the animation bones. The floating coordinate systems (the ones that are not connected) show default locations; the triangle and square are debug geometry for other purposes and are not related to the animation system.
Visually check whether the bone hierarchy moves correctly.
If this "default frame" appears in the debug hierarchy (i.e. the bones themselves take the "skin parade" pose once in a while), it is either an animation framework problem (purely mathematical, with nothing to do with OpenGL itself) or an exporter problem (an extra frame).
If it does not appear there (i.e. the bones move around properly BUT the geometry stands in the "skin parade" pose), it is a shader problem.
The debug animation skeleton should be rendered without any bone weights: just calculate the world-space position of each bone and connect the bones with simple lines. Use the dumbest shader possible, or fixed-function.
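A rough sketch of such a debug pass (BoneNode and every other name here are assumptions, not from the linked project):
#include <vector>
#include <glm/glm.hpp>

struct BoneNode {
    glm::mat4 localTransform;          // this bone's transform relative to its parent
    std::vector<BoneNode*> children;
};

// Walk the hierarchy and emit one line segment per parent-child pair.
void collectBoneLines(const BoneNode* node, const glm::mat4& parentWorld,
                      std::vector<glm::vec3>& lines)
{
    glm::mat4 world = parentWorld * node->localTransform;
    lines.push_back(glm::vec3(parentWorld[3])); // parent origin in world space
    lines.push_back(glm::vec3(world[3]));       // this bone's origin in world space
    for (const BoneNode* child : node->children)
        collectBoneLines(child, world, lines);
}

// Upload `lines` to a buffer and draw it with GL_LINES and a trivial shader.
// No bone weights are involved, so if the "skin parade" pose still flashes here,
// the problem is in the transform calculation, not in the skinning shader.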
My program was meant to draw a simple textured cube on screen, but I cannot get it to render anything other than the clear color. This is my draw function:
void testRender() {
    glClearColor(.25f, 0.35f, 0.15f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);

    glUniformMatrix4fv(resources.uniforms.m4ModelViewProjection, 1, GL_FALSE, (const GLfloat*)resources.modelviewProjection.modelViewProjection);

    glEnableVertexAttribArray(resources.attributes.vTexCoord);
    glEnableVertexAttribArray(resources.attributes.vVertex);

    // deal with vTexCoord first
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, resources.hiBuffer);
    glBindBuffer(GL_ARRAY_BUFFER, resources.htcBuffer);
    glVertexAttribPointer(resources.attributes.vTexCoord, 2, GL_FLOAT, GL_FALSE, sizeof(GLfloat) * 2, (void*)0);

    // now the other one
    glBindBuffer(GL_ARRAY_BUFFER, resources.hvBuffer);
    glVertexAttribPointer(resources.attributes.vVertex, 3, GL_FLOAT, GL_FALSE, sizeof(GLfloat) * 3, (void*)0);

    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, resources.htextures[0]);
    glUniform1i(resources.uniforms.colorMap, 0);

    glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_SHORT, (void*)0);

    // clean up a bit
}
In addition, here is the vertex shader:
#version 330

in vec3 vVertex;
in vec2 vTexCoord;

uniform mat4 m4ModelViewProjection;

smooth out vec2 vVarryingTexCoord;

void main(void) {
    vVarryingTexCoord = vTexCoord;
    gl_Position = m4ModelViewProjection * vec4(vVertex, 1.0);
}
and the fragment shader (I have given up on textures for now):
#version 330

uniform sampler2D colorMap;

in vec2 vVarryingTexCoord;
out vec4 vVaryingFragColor;

void main(void) {
    vVaryingFragColor = texture(colorMap, vVarryingTexCoord);
    vVaryingFragColor = vec4(1.0, 1.0, 1.0, 1.0);
}
The vertex buffer for the position coordinates makes a simple cube (all coordinates ±0.25), while the modelview projection is just the inverse camera matrix (moved back by a factor of two) applied to a perspective matrix. However, even without the matrix transformation, I am unable to see anything on screen. Originally I had two different buffers that needed two different element index lists, but now both buffers (containing the vertex and texture coordinate data) are the same length and in the same order. The code itself is derived from the Durian Software tutorial and the latest OpenGL SuperBible. The rest of the code is here.
By this point, I have tried nearly everything I can think of. Is this code even remotely close? If so, why can't I get anything to render onscreen?
You're looking pretty good so far.
The only thing that I see right now is that you've got GL_DEPTH_TEST enabled, but you don't clear the depth buffer. Even if the buffer were initialized to a good value, you would be drawing empty scenes on every frame after the first, because the depth buffer is not being cleared.
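For example, clearing both buffers at the top of the draw function:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // clear depth along with color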
If that does not help, can you make sure that glGetError() reports no errors? You may have to clean up your unused texturing attributes/uniforms to get the error state clean, but that would be my next step.
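For example, a small helper you could call after suspect GL calls (a sketch; requires <cstdio>):
void checkGlErrors(const char* where) {
    for (GLenum err = glGetError(); err != GL_NO_ERROR; err = glGetError())
        fprintf(stderr, "GL error 0x%x at %s\n", err, where);
}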