glBufferData and glBufferSubData Offset - c++

I'm attempting to render Suzanne (from Blender) in OpenGL 3.3, but the buffer data doesn't seem to be correct. I get this:
https://gyazo.com/ab82f9acb6854a49fccc527ed96cc4e8
I also tried to render a sphere with a simple texture:
https://gyazo.com/85c1e87fcc4eab128ca37b1a0cb1deaa
My importer inserts the vertex data into a std::vector as single floats:
if(line.substr(0,2) == "v ")
{
/** Vertex position */
std::istringstream s(line.substr(2));
float v[3];
s >> v[0]; s >> v[1]; s >> v[2];
this->vertices.push_back(v[0]);
this->vertices.push_back(v[1]);
this->vertices.push_back(v[2]);
}
I set up the array buffer as follows:
glGenBuffers(1, &this->vbo);
glBindBuffer(GL_ARRAY_BUFFER, this->vbo);
glBufferData(GL_ARRAY_BUFFER,
sizeof(float)*(this->vertices.size()+this->textures.size()+this->normals.size()),
NULL,
GL_STATIC_DRAW);
And then I insert the actual data using glBufferSubData
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(float)*this->vertices.size(), this->vertices.data());
glBufferSubData(GL_ARRAY_BUFFER, sizeof(float)*this->vertices.size(), sizeof(float)*this->textures.size(), this->textures.data());
glBufferSubData(GL_ARRAY_BUFFER, sizeof(float)*(this->vertices.size()+this->textures.size()), sizeof(float)*this->normals.size(), this->normals.data());
I also insert the indices in the same way (GL_ELEMENT_ARRAY_BUFFER of course).
I then point to the information:
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3*sizeof(float), (GLvoid*)0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3*sizeof(float), (GLvoid*)(sizeof(float)*this->v.size()));
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3*sizeof(float), (GLvoid*)(sizeof(float)*this->v.size()+this->vt.size()));
My vertex shader takes in the data like this:
layout(location = 0) in vec3 position;
layout(location = 1) in vec2 texCoord;
layout(location = 2) in vec3 normals;
Am I screwing up the offsets?
EDIT:
I figured out the biggest issue! I wrote an external Lua program to convert the obj files to a format that was easier to import, but it ended up messing with the data and duplicating the indices on the "f #/#/# #/#/# #/#/#" lines, so the data looked like (x->y->x) instead of (x->y->z).
Also fixed a few other errors thanks to the responses below!

Without seeing your shaders I can't be 100% sure if your calls to glVertexAttribPointer are legit. I also can't tell if you want interleaved vertex data in a single VBO or not. What you currently have packs all of the vertex positions in first, then all of the texture coordinates, and finally all of the normals.
To interleave the data you would first want to put all of it into a single array (or vector) so that each vertex repeats the PPPTTNNN pattern, where PPP are the three position floats, TT the two texture-coordinate floats, and NNN the three normal floats.
It would look something like this (using bogus types, values, and spacing to help illustrate the pattern):
float vertices[] = {
/* pX, pY, pZ, tX, tY, nX, nY, nZ */
1, 1, 1, 0, 0, 1, 1, 1, // vertex 1
0, 0, 0, 1, 1, 0, 0, 0, // vertex 2
1, 1, 1, 0, 0, 1, 1, 1, // vertex 3
...
};
Let's say you put it all into a single vector called vertices; then you can upload it with a single call:
glBufferData(GL_ARRAY_BUFFER, sizeof(float) * this->vertices.size(), this->vertices.data(), GL_STATIC_DRAW);
You could also put each attribute into its own VBO. How you decide to store the data on the GPU is ultimately up to you. If you are purposely storing the data the way you have it, let me know in the comments and I'll update the answer.
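For reference, here is a minimal sketch of the one-VBO-per-attribute variant, assuming the same vertices, textures, and normals vectors from the question and a VAO already bound:
GLuint vbos[3];
glGenBuffers(3, vbos);
// Positions -> attribute 0
glBindBuffer(GL_ARRAY_BUFFER, vbos[0]);
glBufferData(GL_ARRAY_BUFFER, sizeof(float) * vertices.size(), vertices.data(), GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (GLvoid*)0);
// Texture coordinates -> attribute 1
glBindBuffer(GL_ARRAY_BUFFER, vbos[1]);
glBufferData(GL_ARRAY_BUFFER, sizeof(float) * textures.size(), textures.data(), GL_STATIC_DRAW);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, (GLvoid*)0);
// Normals -> attribute 2
glBindBuffer(GL_ARRAY_BUFFER, vbos[2]);
glBufferData(GL_ARRAY_BUFFER, sizeof(float) * normals.size(), normals.data(), GL_STATIC_DRAW);
glEnableVertexAttribArray(2);
glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, 0, (GLvoid*)0);
With each attribute tightly packed in its own buffer, stride and offset are simply 0 for every attribute.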
Ok, now the shader bits.
Let's say you've got a vertex shader that looks like this:
#version 330 core
layout(location = 0) in vec3 position;
layout(location = 1) in vec2 texcoord;
layout(location = 2) in vec3 normal;
out vec2 uv;
void main() {
gl_Position = vec4(position, 1);
uv = texcoord;
}
And a fragment shader that looks like this:
#version 330 core
in vec2 uv;
uniform sampler2D image;
out vec4 color;
void main() {
color = texture(image, uv);
}
Then you would want the following glVertexAttribPointer calls:
int stride = (3 + 2 + 3) * sizeof(float);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride, (GLvoid*)0);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, stride, (GLvoid*)(3 * sizeof(float)));
glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, stride, (GLvoid*)(5 * sizeof(float)));
Notice how the first parameter to each call is a different number. This corresponds to position, texcoord, normal respectively in the vertex shader.
Also texture coordinates are typically only a pair of floats (e.g. vec2 texcoord) so I changed the second parameter to 2 for the texcoord call.
Finally, the last parameter is a byte offset: when using an interleaved array it is the offset of that attribute within a single vertex. Thus we get 0, 3 * sizeof(float), and 5 * sizeof(float) for the position, texcoord, and normal offsets respectively.
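One thing not shown in the question (so it may already be handled elsewhere in your code): each of those attribute arrays also has to be enabled once:
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glEnableVertexAttribArray(2);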
Hopefully this gets you where you want to be.
Check out the docs.gl page on glVertexAttribPointer for more info.

Related

Using texture2D on OpenGL 3.3

So I've been fiddling with an old University project done in OpenGL 3.3 (FreeGLUT + GLEW) and I've run into some problems.
Right at the start, I run the program and I'm getting an error compiling the BumpMap Fragment Shader:
#version 330 core
#define lightCount 10
in vec4 vertPos;
in vec4 eyeModel;
in vec3 normalInterp;
in vec4 ambient;
in vec4 color;
in vec4 spec;
in vec2 texuv;
in vec4 lightPosition[lightCount];
struct LightSource {
vec4 Position;
vec4 Direction;
vec4 Color;
float CutOff;
float AmbientIntensity;
float DiffuseIntensity;
float SpecularIntensity;
float ConstantAttenuation;
float LinearAttenuation;
float ExponentialAttenuation;
int lightType;
};
layout(std140) uniform LightSources {
LightSource lightSource[10];
};
uniform sampler2D diffuse_tex;
uniform sampler2D normal_tex;
out vec4 out_Color;
void main() {
out_Color = vec4(0);
for(int i=0; i<lightCount; i++) {
if(lightSource[i].lightType == 0)
continue;
vec3 NormalMap = texture2D(normal_tex, texuv).rgb;
vec3 normal = normalize(NormalMap * 2.0 - 1.0); //normalize(normalInterp);
vec4 LightDirection = vertPos - lightSource[i].Position;
float Distance = length(LightDirection);
LightDirection = normalize(LightDirection);
vec4 ambientColor = ambient * lightSource[i].Color * lightSource[i].AmbientIntensity;
vec4 diffuseColor = vec4(0, 0, 0, 0);
vec4 dColor = texture2D(diffuse_tex, texuv);
vec4 specularColor = vec4(0, 0, 0, 0);
float DiffuseFactor = dot(normal, vec3(-LightDirection));
if (DiffuseFactor > 0) {
diffuseColor = dColor * lightSource[i].Color * lightSource[i].DiffuseIntensity * DiffuseFactor;
vec3 VertexToEye = normalize(vec3(eyeModel - vertPos));
vec3 LightReflect = normalize(reflect(vec3(LightDirection), normal));
float SpecularFactor = dot(VertexToEye, LightReflect);
SpecularFactor = pow(SpecularFactor, 255);
if(SpecularFactor > 0.0){
//SpecularFactor = pow( max(SpecularFactor,0.0), 255);
specularColor = spec * lightSource[i].Color * lightSource[i].SpecularIntensity * SpecularFactor;
}
}
out_Color += ambientColor + diffuseColor + specularColor;
}
}
ERROR: 0:55: 'function' : is removed in Forward Compatible context texture2D
ERROR: 0:55: 'texture2D' : no matching overloaded function found (using implicit conversion)
So I looked the problem up, and even though I thought it was weird that I was getting this problem on a project I knew had been in working condition, I switched the texture2D call for a texture call and now the shader compiles. But I get a different error when creating the buffer object for the first object in the scene:
//Consts defined here for readability
#define VERTICES 0
#define COLORS 1
#define NORMALS 2
#define TEXUVS 3
#define AMBIENTS 4
#define TANGENTS 5
#define SPECULARS 6
#define SPECULARS_CONSTANTS 7
#define NOISE_SCALE 8
void BufferObject::createBufferObject() {
glGenVertexArrays(1, &_vertexArrayObjectID);
glBindVertexArray(_vertexArrayObjectID);
glGenBuffers(1, &_vertexBufferObjectID);
glBindBuffer(GL_ARRAY_BUFFER, _vertexBufferObjectID);
glBufferData(GL_ARRAY_BUFFER, sizeof(Vertex)*_vertexCount, _vertices, GL_STATIC_DRAW);
glEnableVertexAttribArray(VERTICES);
glVertexAttribPointer(VERTICES, 4, GL_FLOAT, GL_FALSE, sizeof(Vertex), 0);
glEnableVertexAttribArray(COLORS);
glVertexAttribPointer(COLORS, 4, GL_FLOAT, GL_FALSE, sizeof(Vertex), (GLvoid*)sizeof(_vertices[0].XYZW));
glEnableVertexAttribArray(NORMALS);
glVertexAttribPointer(NORMALS, 4, GL_FLOAT, GL_FALSE, sizeof(Vertex), (GLvoid*)(sizeof(_vertices[0].XYZW)+sizeof(_vertices[0].RGBA)));
glEnableVertexAttribArray(TEXUVS);
glVertexAttribPointer(TEXUVS, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (GLvoid*)(sizeof(_vertices[0].XYZW)+sizeof(_vertices[0].RGBA)+sizeof(_vertices[0].NORMAL)));
glEnableVertexAttribArray(AMBIENTS);
glVertexAttribPointer(AMBIENTS, 4, GL_FLOAT, GL_FALSE, sizeof(Vertex), (GLvoid*)(sizeof(_vertices[0].XYZW)+sizeof(_vertices[0].RGBA)+sizeof(_vertices[0].NORMAL)+sizeof(_vertices[0].TEXUV)));
glEnableVertexAttribArray(TANGENTS);
glVertexAttribPointer(TANGENTS, 4, GL_FLOAT, GL_FALSE, sizeof(Vertex), (GLvoid*)(sizeof(_vertices[0].XYZW)+sizeof(_vertices[0].RGBA)+sizeof(_vertices[0].NORMAL)+sizeof(_vertices[0].TEXUV)+sizeof(_vertices[0].AMBIENT)));
glEnableVertexAttribArray(SPECULARS);
glVertexAttribPointer(SPECULARS, 4, GL_FLOAT, GL_FALSE, sizeof(Vertex), (GLvoid*)(sizeof(_vertices[0].XYZW)+sizeof(_vertices[0].RGBA)+sizeof(_vertices[0].NORMAL)+sizeof(_vertices[0].TEXUV)+sizeof(_vertices[0].AMBIENT)+sizeof(_vertices[0].TANGENT)));
glEnableVertexAttribArray(SPECULARS_CONSTANTS);
glVertexAttribPointer(SPECULARS_CONSTANTS, 1, GL_FLOAT, GL_FALSE, sizeof(Vertex), (GLvoid*)(sizeof(_vertices[0].XYZW)+sizeof(_vertices[0].RGBA)+sizeof(_vertices[0].NORMAL)+sizeof(_vertices[0].TEXUV)+sizeof(_vertices[0].AMBIENT)+sizeof(_vertices[0].TANGENT)+sizeof(_vertices[0].SPECULAR)));
glBindVertexArray(0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_UNIFORM_BUFFER, 0);
glDisableVertexAttribArray(VERTICES);
glDisableVertexAttribArray(COLORS);
glDisableVertexAttribArray(NORMALS);
glDisableVertexAttribArray(TEXUVS);
glDisableVertexAttribArray(AMBIENTS);
glDisableVertexAttribArray(TANGENTS);
glDisableVertexAttribArray(SPECULARS);
glDisableVertexAttribArray(SPECULARS_CONSTANTS);
Utility::checkOpenGLError("ERROR: Buffer Object creation failed.");
}
OpenGL ERROR [Invalid Operation] = 1282
And that's all the info I'm getting. I've moved the checkOpenGLError around and figured out the line glDisableVertexAttribArray(VERTICES) is giving the error.
After a bit more digging I found out that you're not supposed to call glBindVertexArray(0) before glDisableVertexAttribArray (from what I remember, we bound 0 so we wouldn't accidentally affect anything we didn't want to).
With that change, the error moves to where we're drawing one of the scene objects. At this point I've hit a bit of a wall and don't know where to go next. I guess my question is whether there is some configuration that needs to be set when running the project, or whether just running this on a more recent graphics card could account for the different behaviour. As a final note, this is running on Windows from Visual Studio 10 (or 15; I switched to 10 when I reverted all changes and didn't retarget the solution), and the program configuration is as follows:
//GLUT Init
glutInit(&argc, argv);
glutInitContextVersion(3, 3);
glutInitContextFlags(GLUT_FORWARD_COMPATIBLE);
glutInitContextProfile(GLUT_CORE_PROFILE);
glutSetOption(GLUT_ACTION_ON_WINDOW_CLOSE,GLUT_ACTION_GLUTMAINLOOP_RETURNS);
glutInitWindowSize(windowWidth, windowHeight);
glutInitDisplayMode(GLUT_DEPTH | GLUT_DOUBLE | GLUT_RGBA);
windowHandle = glutCreateWindow(CAPTION);
//GLEW Init
glewExperimental = GL_TRUE;
GLenum result = glewInit();
//GLUT Init
std::cerr << "CONTEXT: OpenGL v" << glGetString(GL_VERSION) << std::endl;
glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);
glDepthMask(GL_TRUE);
glDepthRange(0.0,1.0);
glClearDepth(1.0);
glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);
glFrontFace(GL_CCW);
with the context above being:
CONTEXT: OpenGL v3.3.0 - Build 22.20.16.4749
Let me know if any additional information is required; I didn't want to add any more unnecessary clutter, and the project is too big to just paste it all here...
In your shader you are using GLSL version 330 core, which means texture2D() is deprecated (and removed in a forward-compatible context); you should use texture() instead.
As for your INVALID_OPERATION error, the problem is that you unbound the VAO with glBindVertexArray(0); and then called glDisableVertexAttribArray(VERTICES);, which operates on the currently bound VAO. You should move glBindVertexArray(0); below those calls.
First let me refer to the specification, OpenGL 4.6 API Core Profile Specification; 10.3.1 Vertex Array Objects; page 347 :
The name space for vertex array objects is the unsigned integers, with zero reserved by the GL.
...
A vertex array object is created by binding a name returned by GenVertexArrays with the command
void BindVertexArray( uint array );
array is the vertex array object name. The resulting vertex array object is a new state vector, comprising all the state and with the same initial values listed in tables 23.3 and 23.4.
BindVertexArray may also be used to bind an existing vertex array object. If the bind is successful no change is made to the state of the bound vertex array object, and any previous binding is broken.
Tables 23.3, Vertex Array Object State
VERTEX_ATTRIB_ARRAY_ENABLED, VERTEX_ATTRIB_ARRAY_SIZE, VERTEX_ATTRIB_ARRAY_STRIDE, VERTEX_ATTRIB_ARRAY_TYPE, VERTEX_ATTRIB_ARRAY_NORMALIZED, VERTEX_ATTRIB_ARRAY_INTEGER, VERTEX_ATTRIB_ARRAY_LONG, VERTEX_ATTRIB_ARRAY_DIVISOR, VERTEX_ATTRIB_ARRAY_POINTER
Table 23.4, Vertex Array Object State
ELEMENT_ARRAY_BUFFER_BINDING, VERTEX_ATTRIB_ARRAY_BUFFER_BINDING, VERTEX_ATTRIB_BINDING, VERTEX_ATTRIB_RELATIVE_OFFSET, VERTEX_BINDING_OFFSET, VERTEX_BINDING_STRIDE, VERTEX_BINDING_DIVISOR, VERTEX_BINDING_BUFFER.
This means that a vertex array object collects all the information necessary to draw an object. It stores the information about the locations and formats of the vertex attributes.
Furthermore, the vertex array object "knows" whether each attribute is enabled or disabled.
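(As a small illustration of that per-object state, you can query it with glGetVertexAttribiv while a vertex array object is bound:)
GLint enabled = 0;
glBindVertexArray(_vertexArrayObjectID);
glGetVertexAttribiv(VERTICES, GL_VERTEX_ATTRIB_ARRAY_ENABLED, &enabled);
// 'enabled' reflects the state stored in this vertex array object, not a global flag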
If you do
glBindVertexArray(0);
glDisableVertexAttribArray( .... );
this causes an INVALID_OPERATION error when you use a core profile OpenGL context, because vertex array object 0 is not a valid vertex array object there.
If you used a compatibility profile context, this would not cause an error, because there vertex array object 0 is the default vertex array object and is valid.
If you do
glBindVertexArray(_vertexArrayObjectID);
glEnableVertexAttribArray( ..... );
glVertexAttribPointer( ..... );
glDisableVertexAttribArray( ..... );
glBindVertexArray(0);
then the draw call will fail. You have made the effort to define all the arrays of generic vertex attribute data and to enable them correctly, but right after doing that you disable them again, so the vertex array object ends up storing the "disabled" state for all the attributes.
The correct procedure to define a vertex array object is:
Generate the vertex buffer, and create and initialize the buffer object's data store (this step can also be done after creating and binding the vertex array object):
glGenBuffers(1, &_vertexBufferObjectID);
glBindBuffer(GL_ARRAY_BUFFER, _vertexBufferObjectID);
glBufferData(GL_ARRAY_BUFFER, sizeof(Vertex)*_vertexCount, _vertices, GL_STATIC_DRAW);
Generate and bind the vertex array object:
glGenVertexArrays(1, &_vertexArrayObjectID);
glBindVertexArray(_vertexArrayObjectID);
Define and enable the arrays of generic vertex attribute data (this has to be done after binding the vertex array object):
glBindBuffer(GL_ARRAY_BUFFER, _vertexBufferObjectID);
glEnableVertexAttribArray(VERTICES);
glVertexAttribPointer(VERTICES, 4, GL_FLOAT, GL_FALSE, sizeof(Vertex), 0);
....
glBindBuffer(GL_ARRAY_BUFFER, 0);
If you use an element buffer (GL_ELEMENT_ARRAY_BUFFER), then you have to specify it now, because the name of (reference to) the element buffer object is stored in the vertex array object, and that vertex array object has to be currently bound when the element buffer object gets bound.
glGenBuffers(1, &ibo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, ..... );
Finally, you can call glBindVertexArray(0), but there is no reason to do so. It is sufficient to bind a new vertex array object before you specify a new mesh, or to bind the proper vertex array object before you draw a mesh.
Furthermore, there is no need for glDisableVertexAttribArray as long as you don't want to change the vertex array object specification. The "enabled" state is stored in the vertex array object and kept there. If you bind a new vertex array object, then that object and all of its state become current.
Now drawing is simple:
glBindVertexArray(_vertexArrayObjectID);
glDrawArrays( .... );
Again, there is no need for glBindVertexArray(0) after the draw call (especially with a core profile context).
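Putting the steps together, a condensed sketch using the names from the question (the GL_TRIANGLES and _vertexCount draw parameters below are just placeholders for whatever your draw call actually uses):
// one-time setup
glGenBuffers(1, &_vertexBufferObjectID);
glBindBuffer(GL_ARRAY_BUFFER, _vertexBufferObjectID);
glBufferData(GL_ARRAY_BUFFER, sizeof(Vertex)*_vertexCount, _vertices, GL_STATIC_DRAW);

glGenVertexArrays(1, &_vertexArrayObjectID);
glBindVertexArray(_vertexArrayObjectID);

glBindBuffer(GL_ARRAY_BUFFER, _vertexBufferObjectID);
glEnableVertexAttribArray(VERTICES);
glVertexAttribPointer(VERTICES, 4, GL_FLOAT, GL_FALSE, sizeof(Vertex), 0);
// ... same pattern for COLORS, NORMALS, TEXUVS, ...; no glDisableVertexAttribArray
glBindBuffer(GL_ARRAY_BUFFER, 0);

// per frame
glBindVertexArray(_vertexArrayObjectID);
glDrawArrays(GL_TRIANGLES, 0, _vertexCount);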

OpenGL VAO breaks during draw call

Currently I have a system that contains two buffers: one created CPU-side and uploaded to a buffer object, and one that is an SSBO populated by another shader.
Here is the struct for the ssbo data:
struct ssbo_data_t
{
int maxcount = 1024;
float baz[1024];
} ssbo_data;
Then the VAO is built like this:
//Build buffer
glGenBuffers(1, &buffer);
glBindBuffer(GL_ARRAY_BUFFER, buffer);
glBufferData(GL_ARRAY_BUFFER, arraysize * sizeof(DATATYPE), dataarr, GL_DYNAMIC_DRAW);
//Build vao
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
//index,size,type,norm,stride,pointerToFirst
glEnableVertexAttribArray(0); //Position of light
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(1); //Color
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, (void*)(sizeof(DATATYPE) * 2));
glEnableVertexAttribArray(2); //Intensity
glVertexAttribPointer(2, 1, GL_FLOAT, GL_FALSE, 0, (void*)(sizeof(DATATYPE) * 5));
glEnableVertexAttribArray(3); //Layer
glVertexAttribPointer(3, 1, GL_FLOAT, GL_FALSE, 0, (void*)(sizeof(DATATYPE) * 6));
//Try to bind ssbo for lighting data.
glBindBuffer(GL_ARRAY_BUFFER, ssbo);
glEnableVertexAttribArray(4); //MaxSize
glVertexAttribPointer(4, 1, GL_INT, GL_FALSE, 0, (void*)offsetof(ssbo_data_t,maxcount));
glEnableVertexAttribArray(5); //Corner Data
glVertexAttribPointer(5, 2, GL_FLOAT, GL_FALSE, sizeof(float) * 2, (void*)offsetof(ssbo_data_t, baz));
My question: if I set no stride for one buffer, and then a stride for another part of the buffer, should that work? Or is there something I'm missing?
Because when I actually perform the draw call, the first line is drawn, but all others have a position of -1,-1, as if the first buffer bound to the VAO was reset.
I am, however, receiving the correct positions from the other buffer, which shows it is working. It's like the first buffer is being unbound on the next draw call.
Due to how large the project is, I can't really post a working example.
As you can see here, I place the primitive to the left and the other object to the right.
The primitive is drawn, and sets its corner positions to the SSBO.
The second object is then drawn and creates 4 line segments from its position.
The first vertex works as expected and creates a line segment from its position that terminates on the corner of the box specified by the VAO.
The next draws don't read the position correctly, but they do take the correct information from the SSBO. So... ?
Might be helpful if I posted the vertex shader:
#version 430 core
layout(location = 0) in vec2 aPos;
layout(location = 1) in vec3 aColor;
layout(location = 2) in float intensity;
layout(location = 3) in float layer;
layout(location = 4) in int maxcount;
layout(location = 5) in vec2 loc;
uniform vec2 screensize;
out VS_OUT{
vec3 color;
float intensity;
vec4 targ;
} vs_out;
void main()
{
vec2 npos = ((aPos - (screensize / 2.0)) / screensize) * 2; //Convert mouse coords to screen coords.
gl_Position = vec4(npos, layer, 1.0);
vs_out.targ = vec4(loc, layer, 1.0);
vs_out.color = aColor;
vs_out.intensity = intensity;
}
To answer your question, yes you can mix and match strides and offsets in the same buffer and share vertex attribute data with an SSBO (Shader Storage Buffer Object).
Vertex stride
I'm not sure how you're trying to use the contents of the SSBO there. However, the binding of vertex attributes #4 and #5 looks fishy.
glVertexAttribPointer(4, 1, GL_INT, GL_FALSE, 0, (void*)offsetof(ssbo_data_t,maxcount));
The first vertex will have maxcount as expected in the x component. However, a stride of 0 means that consecutive vertex attributes are packed. The second vertex will therefore read 32 bits from baz[0] and treat that as an integer. The third will read baz[1] and so on. In short, only the first vertex will have the correct maxcount value and the rest will have bogus values.
To fix this, use instanced arrays (also known as vertex divisor). The function to use is glVertexAttribDivisor(). Another option is to bind your SSBO directly and read it as an SSBO (buffer type in GLSL). See the SSBO OpenGL wiki page for an example.
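A sketch of both options, reusing the names from the question (the SSBO binding index 0 in the second option is an arbitrary choice and has to match the binding declared in the shader):
// Option 1: keep the vertex attribute, but give it a divisor so it advances
// per instance instead of per vertex; in a non-instanced draw every vertex
// then reads the first element.
glBindBuffer(GL_ARRAY_BUFFER, ssbo);
glEnableVertexAttribArray(4);
// since the shader declares `in int maxcount`, the integer variant avoids
// the int-to-float conversion done by glVertexAttribPointer
glVertexAttribIPointer(4, 1, GL_INT, 0, (void*)offsetof(ssbo_data_t, maxcount));
glVertexAttribDivisor(4, 1);

// Option 2: drop the attribute and read the data through a `buffer` block
// in GLSL instead, binding the buffer object as an SSBO.
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssbo);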
Finally:
glVertexAttribPointer(5, 2, GL_FLOAT, GL_FALSE, sizeof(float) * 2, (void*)offsetof(ssbo_data_t, baz));
You can use a stride of 0 here because your attribute values are tightly packed. sizeof(float) * 2 gives the same result.
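In other words, this call is equivalent:
glVertexAttribPointer(5, 2, GL_FLOAT, GL_FALSE, 0, (void*)offsetof(ssbo_data_t, baz));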
The (wrong) solution
I wrongly stated that glBindVertexArray() resets the current GL_ARRAY_BUFFER binding. This is not the case. It does reset the indexed vertex buffer bindings, but you had already set those.

GLM mat4x4 to layout qualifier

I am currently learning OpenGL for a hobby project and I reached the point where I want to do some particle effects with instancing.
To give each of my particles its own transformation, I am building a vector of transformation matrices and use its values to fill the buffer:
glBindBuffer(GL_ARRAY_BUFFER, particleEffect->getTransformationBufferID());
std::vector<glm::mat4x4> particleTransformations;
const std::vector<Particle>& particles = particleEffect->getParticles();
for(std::vector<Particle>::const_iterator particleIt = particles.cbegin();
particleIt != particles.cend();
++ particleIt)
{
particleTransformations.push_back(particleIt->getTransformation());
}
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(particleTransformations[0]) * particles.size(), particleTransformations.data());
glEnableVertexAttribArray(4);
glVertexAttribPointer(4, 16, GL_FLOAT, GL_FALSE, 0, 0);
I am initializing my transformation buffer this way:
glGenBuffers(1, &transformationBufferID_);
glBindBuffer(GL_ARRAY_BUFFER, transformationBufferID_);
glBufferData(GL_ARRAY_BUFFER, sizeof(glm::mat4x4) * particles_.size(), nullptr, GL_STREAM_DRAW);
And I use this simple draw code:
glVertexAttribDivisor(0, 0);
glVertexAttribDivisor(1, 0);
glVertexAttribDivisor(2, 0);
glVertexAttribDivisor(3, 0);
glVertexAttribDivisor(4, 1); //(4, 1), because each particle has its own transformation matrix at (layout location = 4)?!
glDrawArraysInstanced(GL_TRIANGLE_STRIP, 0, particleEffect->getVertices().size(), particleEffect->getParticles().size());//getVertices() are the same vertices for each particle
glDisableVertexAttribArray(0);
glDisableVertexAttribArray(1);
glDisableVertexAttribArray(2);
glDisableVertexAttribArray(3);
glDisableVertexAttribArray(4);
My vertex shader header looks like this:
#version 430 core
layout(location = 0) in vec3 vertexPosition;
layout(location = 1) in vec3 normals;
layout(location = 2) in vec4 colors;
layout(location = 3) in vec2 uvCoordinates;
layout(location = 4) in mat4 transformation;
This leads to a crash with glError 1281, but only if I use glEnableVertexAttribArray(4); otherwise it doesn't crash. I am also able to draw my simple particle mesh when I use an identity matrix instead of my transformation matrix.
I used glGetAttribLocation(programID, "transformation") to find out if the location is correct. Yep, it is correct.
Your problem is not actually the layout qualifier at all; you need to understand some nuances of vertex attributes.
On GPUs, every vertex attribute is a 4-component vector, whether you declare it float, vec2, vec3 or vec4. That works fine for all of the types I just mentioned: the unused components are filled in with 0, 0, and 1 respectively (missing y and z default to 0, a missing w defaults to 1).
For mat4, however, the story is very different. A mat4 is effectively an array of four vec4s, and this means that layout(location = 4) in mat4 transformation; actually occupies 4 different locations (all sequential).
transformation is composed of four 4-component vectors assigned to locations 4, 5, 6 and 7. glVertexAttribPointer(4, 16, GL_FLOAT, GL_FALSE, 0, 0); is wrong, and this needs to be split into multiple calls like so:
// Setup 4 different vertex attributes (one for each column of your matrix).
// Each matrix has a stride of 64-bytes and each column of the matrix advances 16-bytes
glVertexAttribPointer (4, 4, GL_FLOAT, GL_FALSE, 64, 0);
glVertexAttribPointer (5, 4, GL_FLOAT, GL_FALSE, 64, 16);
glVertexAttribPointer (6, 4, GL_FLOAT, GL_FALSE, 64, 32);
glVertexAttribPointer (7, 4, GL_FLOAT, GL_FALSE, 64, 48);
You will also need to enable, disable, and set up a vertex attribute divisor for all four of those attribute locations to make your code work correctly.
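A sketch of those extra calls, mirroring the per-column pointers above (expressed with the glm types already used in the question):
for (int i = 0; i < 4; ++i)
{
    glEnableVertexAttribArray(4 + i); // one enabled array per matrix column
    glVertexAttribPointer(4 + i, 4, GL_FLOAT, GL_FALSE,
                          sizeof(glm::mat4x4), (GLvoid*)(sizeof(glm::vec4) * i));
    glVertexAttribDivisor(4 + i, 1);  // advance one matrix per instance
}
// ... glDrawArraysInstanced(...) ...
for (int i = 0; i < 4; ++i)
    glDisableVertexAttribArray(4 + i);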

Getting maximum/minimum luminance of texture OpenGL

I'm starting with OpenGL, and I want to create a tone mapping algorithm.
I know that my first step is to get the max/min luminance value of the HDR image.
I have the image in a texture in FBO, and I'm not sure how to start.
I think the best way is to pass texture coords to a fragment shader and then go through all the pixels, generating progressively smaller textures.
But I don't know how to do the downsampling manually until I have a 1x1 texture; do I need a lot of FBOs, and where do I create each new texture?
I searched for a lot of information, but almost nothing is clear to me yet.
I would appreciate some help getting oriented and started.
EDIT 1: Here are my shaders, and how I pass texture coords to the vertex shader.
To pass texture coords and vertex positions, I draw a quad using a VBO:
void drawQuad(Shaders* shad){
// coords: vertex (3) + texture (2)
std::vector<GLfloat> quadVerts = {
-1, 1, 0, 0, 0,
-1, -1, 0, 0, 1,
1, 1, 0, 1, 0,
1, -1, 0, 1, 1};
GLuint quadVbo;
glGenBuffers(1, &quadVbo);
glBindBuffer(GL_ARRAY_BUFFER, quadVbo);
glBufferData(GL_ARRAY_BUFFER, 4 * 5 * sizeof(GLfloat), &quadVerts[0], GL_STATIC_DRAW);
// Shader attributes
GLuint vVertex = shad->getLocation("vVertex");
GLuint vUV = shad->getLocation("vUV");
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 3 * sizeof(GLfloat), NULL);
// Set attribs
glEnableVertexAttribArray(vVertex);
glVertexAttribPointer(vVertex, 3, GL_FLOAT, GL_FALSE, sizeof(GLfloat) * 5, 0);
glEnableVertexAttribArray(vUV);
glVertexAttribPointer(vUV, 2, GL_FLOAT, GL_FALSE, sizeof(GLfloat) * 5, (void*)(3 * sizeof(GLfloat)));
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4); // Draw
glBindBuffer(GL_ARRAY_BUFFER, 0);
glDisableVertexAttribArray(vVertex);
glDisableVertexAttribArray(vUV);
}
Vertex shader:
#version 420
in vec2 vUV;
in vec4 vVertex;
smooth out vec2 vTexCoord;
uniform mat4 MVP;
void main()
{
vTexCoord = vec2(vUV.x * 1024,vUV.y * 512);
gl_Position = MVP * vVertex;
}
And fragment shader:
#version 420
smooth in vec2 vTexCoord;
layout(binding=0) uniform sampler2D texHDR; // Tex image unit binding
layout(location=0) out vec4 color; //Frag data output location
vec4[4] col;
void main(void)
{
for(int i=0;i<=1;++i){
for(int j=0;j<=1;++j){
col[(2*i+j)] = texelFetch(texHDR, ivec2(2*vTexCoord.x+i,2*vTexCoord.y+j),0);
}
}
color = (col[0]+col[1]+col[2]+col[3])/4;
}
In this test code, I have a texture of size 1024x512. My idea is to render to a texture attached to GL_COLOR_ATTACHMENT0 in an FBO (layout(location=0)) using these shaders and the texture bound to GL_TEXTURE0, which holds the image (layout(binding=0)).
My goal is to get the image from texHDR into my FBO texture, reducing its size by a factor of two.
For downsampling, all you need to do in the fragment shader is multiple texture lookups, then combine them for the output fragment. For example, you could do 2x2 lookups, so each pass reduces the resolution in x and y by a factor of 2.
Let's say you want to reduce a 1024x1024 image. Then you would render a quad into a 512x512 image. Set it up so your vertex shader simply generates values for x and y between 0 and 511. The fragment shader then calls texelFetch(tex, ivec2(2*x+i, 2*y+j), 0), where i and j loop from 0 to 1. Cache those four values and output the min and max in your texture.
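On the FBO side, you don't need one FBO per level; a single FBO whose color attachment you swap each pass is enough. A rough sketch of the reduction loop, reusing the question's drawQuad() and assuming reductionShader is the program with the 2x2-fetch fragment shader and texA/texB are two float textures, with texA holding the 1024x512 HDR image:
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

int w = 1024, h = 512;
GLuint src = texA, dst = texB;
while (w > 1 || h > 1)
{
    w = std::max(1, w / 2);
    h = std::max(1, h / 2);
    // write this pass into 'dst' at the reduced size
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, dst, 0);
    glViewport(0, 0, w, h);
    // read the previous (larger) result
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, src);
    drawQuad(reductionShader); // fragment shader outputs the 2x2 min/max
    std::swap(src, dst);       // this pass's result becomes the next pass's input
}
// 'src' now refers to a 1x1 texture holding the min/max luminance
glBindFramebuffer(GL_FRAMEBUFFER, 0);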

Glsl skinning obstacle, who can jump that?

I'm currently trying to set up GPU skinning (with GLSL) but it's not working the way I would like :) Actually it's not working at all. My mesh disappears when I try this GLSL code:
layout(location = 0) in vec3 vertexPos;
layout(location = 1) in vec2 vertexUv;
layout(location = 2) in vec3 vertexNor;
layout(location = 5) in ivec4 joints_influences;
layout(location = 6) in vec4 weights_influences;
uniform mat4 ViewProj, View, Proj, Model;
out vec3 vertexPosEye;
out vec3 vertexNorEye;
const int MAX_INFLUENCES = 4;
const int MAX_BONES = 50;
uniform mat4 animation_matrices[MAX_BONES];
uniform mat4 inv_bind_matrices[MAX_BONES];
void main()
{
vertexPosEye = (View * Model * vec4(vertexPos, 1)).xyz; // Position
vertexNorEye = (View * Model * vec4(vertexNor, 0)).xyz; // Normal matrix
vec4 final_v = vec4(0, 0, 0, 0);
for (int i = 0; i < MAX_INFLUENCES; i++)
{
vec4 v = vec4(vertexPos, 1)
* inv_bind_matrices[joints_influences[i]]
* animation_matrices[joints_influences[i]]
* weights_influences[i];
final_v += v;
}
gl_Position = ViewProj * Model * final_v;
}
when I try this :
gl_Position = ViewProj * Model * vertexPos;
My mesh is back :) but no animations anymore of course...
Here's my application (c++) code when I set VBO attributes :
// Vertex position
glGenBuffers(1, &buffer[0]);
glBindBuffer(GL_ARRAY_BUFFER, buffer[0]);
glBufferData(GL_ARRAY_BUFFER, vertices.pos.size() * sizeof(bVector3), &vertices.pos[0], GL_STATIC_DRAW);
// Ibid for uv, normals, tangents and bitangents.
// Skinning : joints index
glGenBuffers(1, &buffer[5]);
glBindBuffer(GL_ARRAY_BUFFER, buffer[5]);
glBufferData(GL_ARRAY_BUFFER, vertices.joints.size() * sizeof(SkinningJoints), &vertices.joints[0], GL_STATIC_DRAW);
// Skinning : weights
glGenBuffers(1, &buffer[6]);
glBindBuffer(GL_ARRAY_BUFFER, buffer[6]);
glBufferData(GL_ARRAY_BUFFER, vertices.weights.size() * sizeof(SkinningWeights), &vertices.weights[0], GL_STATIC_DRAW);
// Indices
glGenBuffers(1, &buffer[7]);
glBindBuffer(GL_ARRAY_BUFFER, buffer[7]);
glBufferData(GL_ARRAY_BUFFER, vertices.indices.size() * sizeof(bUshort), &vertices.indices[0], GL_STATIC_DRAW);
In the main loop :
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, m->GetBuffer(0));
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(0));
glEnableVertexAttribArray(for uv, normals, tangents and bitangents)...
glEnableVertexAttribArray(5);
glBindBuffer(GL_ARRAY_BUFFER, m->GetBuffer(5));
glVertexAttribPointer(5, 4, GL_INT, GL_FALSE, 0, BUFFER_OFFSET(0));
glEnableVertexAttribArray(6);
glBindBuffer(GL_ARRAY_BUFFER, m->GetBuffer(6));
glVertexAttribPointer(6, 4, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(0));
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, m->GetBuffer(7));
glDrawElements(GL_TRIANGLES, m->vertices.indices.size(), GL_UNSIGNED_SHORT, BUFFER_OFFSET(0));
Here is my RenderingVertices struct (after Barr's recommendations):
struct RenderingVertices
{
// std::vector<Vec3>
vVec3 pos, nor, tang, btan;
vVec2 uv;
vUshort indices;
vector<SkinningJoints> joints;
vector<SkinningWeights> weights;
};
And here is my SkinningJoints struct :
struct SkinningJoints
{
int j[MAX_BONES_PER_VERT];
SkinningJoints(Vertex::Weights weights[MAX_BONES_PER_VERT])
{
for (bUint i = 0; i < MAX_BONES_PER_VERT; i++)
j[i] = weights[i].jid;
}
};
My SkinningWeights struct is almost the same, with an array of float instead of int.
Now when I try to debug the joint indices, weight values and final vertex as colors, here is what I get:
// Shader
color_debug = joints_influences;
http://www.images-host.fr/view.php?img=00021800pop.jpg
color_debug = weights_influences;
http://www.images-host.fr/view.php?img=00021800pop2.jpg
Another interesting thing: when I try this:
vec4 pop = vec4(vertexPos, 1) * animation_matrices[1] * inv_bind_matrices[1] * 1.0;
gl_Position = ViewProj * Model * pop;
My whole mesh is actually rotating, which means that my animation_matrices uniform is good.
Can anyone see what I'm doing wrong here?
I finally got it working. For those who may be interested, here is what I was doing wrong:
When I send the joint indices array to GLSL, instead of doing this:
glEnableVertexAttribArray(5);
glBindBuffer(GL_ARRAY_BUFFER, m->GetBuffer(5));
glVertexAttribPointer(5, 4, GL_INT, GL_FALSE, 0, BUFFER_OFFSET(0));
I needed to do this:
glEnableVertexAttribArray(5);
glBindBuffer(GL_ARRAY_BUFFER, m->GetBuffer(5));
glVertexAttribIPointer(5, 4, GL_INT, 0, BUFFER_OFFSET(0));
You have to look closely to find the difference. Instead of calling glVertexAttribPointer(), I needed to call glVertexAttribIPointer() because joints indices are int.
Hope this will help someone someday.
Did you try debugging your skinning attributes? Output the vertex weight as colors so that you can confirm you have meaningful values? If everything is black you'll know where to look.
From a quick glance at your RenderingVertices I can spot a first problem. You are passing a Vector of pointers to GL which I don't think is what you want to do.
Most of the time you will limit skinning influences to 4 joint/weight pairs per vertex. So you can get away with a simple array (ie. SkinningJoints joints[4];).
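For illustration only (the names here are not from the question), a fixed-size per-vertex layout and the matching attribute pointers could look like this:
struct VertexSkin
{
    int   joints[4];   // four joint indices per vertex
    float weights[4];  // four matching weights per vertex
};

// with the VAO and a buffer of VertexSkin data bound:
glEnableVertexAttribArray(5);
glVertexAttribIPointer(5, 4, GL_INT, sizeof(VertexSkin),
                       (GLvoid*)offsetof(VertexSkin, joints));  // integer attribute
glEnableVertexAttribArray(6);
glVertexAttribPointer(6, 4, GL_FLOAT, GL_FALSE, sizeof(VertexSkin),
                      (GLvoid*)offsetof(VertexSkin, weights));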