I am porting this sample (site) to JOGL, but I noticed something isn't right in the image: there are some artefacts on the floor and the shapes are not exactly square, as you can see (don't mind the color, it varies), and the floor doesn't look good either:
So I tried to render only the floor first (easy to try yourself: switch SQRT_BUILDING_COUNT from 100 to 0), and there I already hit the first problem: the floor is supposed to be a square made of two triangles, but I see only one of them.
My vertex structure:
public float[] position = new float[3];
public byte[] color = new byte[4];
public float[] attrib0 = new float[4];
public float[] attrib1 = new float[4];
public float[] attrib2 = new float[4];
public float[] attrib3 = new float[4];
public float[] attrib4 = new float[4];
public float[] attrib5 = new float[4];
public float[] attrib6 = new float[4];
attrib0-6 are unused at the moment
My VS inputs:
// Input attributes
layout(location=0) in vec4 iPos;
layout(location=1) in vec4 iColor;
layout(location=2) in PerMeshUniforms* bindlessPerMeshUniformsPtr;
layout(location=3) in vec4 iAttrib3;
layout(location=4) in vec4 iAttrib4;
layout(location=5) in vec4 iAttrib5;
layout(location=6) in vec4 iAttrib6;
layout(location=7) in vec4 iAttrib7;
I am declaring iPos with 3 components, so I guess it will be padded to vec4(iPos, 1) in the VS.
I transfer the data to the GPU:
gl4.glNamedBufferData(vertexBuffer[0], Vertex.size() * vertices.size(),
GLBuffers.newDirectFloatBuffer(verticesArray), GL4.GL_STATIC_DRAW);
gl4.glNamedBufferData(indexBuffer[0], GLBuffers.SIZEOF_SHORT * indices.size(),
GLBuffers.newDirectShortBuffer(indicesArray), GL4.GL_STATIC_DRAW);
Then before I render I call:
gl4.glEnableVertexArrayAttrib(0, 0);
gl4.glEnableVertexArrayAttrib(0, 1);
Then I render; the original code is:
// Set up attribute 0 for the position (3 floats)
glVertexArrayVertexAttribOffsetEXT(0, m_vertexBuffer, 0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), Vertex::PositionOffset);
// Set up attribute 1 for the color (4 unsigned bytes)
glVertexArrayVertexAttribOffsetEXT(0, m_vertexBuffer, 1, 4, GL_UNSIGNED_BYTE, GL_TRUE, sizeof(Vertex), Vertex::ColorOffset);
I substituted it with:
// Set up attribute 0 for the position (3 floats)
gl4.glVertexArrayVertexBuffer(0, 0, vertexBuffer[0], Vertex.positionOffset,
Vertex.size());
gl4.glVertexArrayAttribFormat(0, 0, 3, GL4.GL_FLOAT, false, Vertex.size());
// Set up attribute 1 for the color (4 unsigned bytes)
gl4.glVertexArrayVertexBuffer(0, 1, vertexBuffer[0], Vertex.colorOffset,
Vertex.size());
gl4.glVertexArrayAttribFormat(0, 1, 4, GL4.GL_UNSIGNED_BYTE, true, Vertex.size());
And then I finish the render:
// Reset state
gl4.glDisableVertexArrayAttrib(0, 0);
gl4.glDisableVertexArrayAttrib(0, 1);
I admit I have never used DSA before; I have always used GL3 with the usual VBO, VAO and IBO, binding and unbinding.
Culling is off.
What's wrong then?
Solved: the problem was that I hadn't implemented DSA properly.
glEnableVertexArrayAttrib(vao, 0);
glEnableVertexArrayAttrib(vao, 1);
// Setup the formats
glVertexArrayAttribFormat(vao, 0, 3, GL_FLOAT, GL_FALSE, 0);
glVertexArrayAttribFormat(vao, 1, 2, GL_FLOAT, GL_FALSE, 0);
// Setup the buffer sources
glVertexArrayVertexBuffer(vao, 0, buffers[0], 0, 0); // Note the 2nd argument here is a 'binding index', not the attribute index
glVertexArrayVertexBuffer(vao, 1, buffers[1], 0, 0);
// Link them up
glVertexArrayAttribBinding(vao, 0, 0); // Associate attrib 0 (first 0) with binding 0 (second 0).
glVertexArrayAttribBinding(vao, 1, 1);
plus glVertexArrayElementBuffer if you have indexed rendering
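For completeness, here is a minimal sketch (C-style GL 4.5 calls, assuming vao, vertexBuffer and indexBuffer exist and the Vertex offsets are the ones from the original sample) of the same DSA setup applied to the interleaved position/color layout from the question. Note that the last argument of glVertexArrayAttribFormat is a relative offset within one vertex, not the stride; the stride belongs to glVertexArrayVertexBuffer:
// Enable the two attributes on the VAO.
glEnableVertexArrayAttrib(vao, 0);
glEnableVertexArrayAttrib(vao, 1);
// Describe the formats; the last parameter is the byte offset inside one vertex.
glVertexArrayAttribFormat(vao, 0, 3, GL_FLOAT, GL_FALSE, Vertex::PositionOffset);
glVertexArrayAttribFormat(vao, 1, 4, GL_UNSIGNED_BYTE, GL_TRUE, Vertex::ColorOffset);
// One interleaved buffer feeds one binding point; the stride goes here.
glVertexArrayVertexBuffer(vao, 0, vertexBuffer, 0, sizeof(Vertex));
// Both attributes read from binding point 0.
glVertexArrayAttribBinding(vao, 0, 0);
glVertexArrayAttribBinding(vao, 1, 0);
// Index buffer for indexed rendering.
glVertexArrayElementBuffer(vao, indexBuffer);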
I am using GLEW32 with GLFW and writing code in C++. I have encountered some problems passing values to a shader. I have successfully passed vec2, uvec3, etc. Now I want to pass multiple values for each vertex:
uvec2 (or vec2 - not very important) - X and Y position;
uvec4 - RGBA color. I could also use an int and decode RGBA from an int32, but uvec4 would be more convenient :)
But there's another problem: I can set different attribute types using glVertexAttribIPointer() or glVertexAttribPointer():
glVertexAttribIPointer(0, 2, GL_UNSIGNED_SHORT, ...);
glEnableVertexAttribArray(0);
glVertexAttribIPointer(1, 4, GL_UNSIGNED_BYTE, ...);
glEnableVertexAttribArray(1);
But I cannot pass values of different types to glBufferData():
glBufferData(GL_ARRAY_BUFFER, sizeof(array), array, GL_DYNAMIC_DRAW); // Just one array!!!
I tried to do this using uniforms, but the code was so bulky and inefficient that I gave it up. Is there a way to do such manipulations "properly"?
I found a solution in Kai Burjack's comment: I made a structure to hold the data and send it. The code is:
struct dataToSend
{
GLushort x, y;
GLubyte r, g, b, a;
};
...
glVertexAttribIPointer(0, 2, GL_UNSIGNED_SHORT, sizeof(dataToSend), (GLvoid *) 0);
glEnableVertexAttribArray(0);
glVertexAttribIPointer(1, 4, GL_UNSIGNED_BYTE, sizeof(dataToSend), (GLvoid *) (2 * sizeof(GLushort)));
glEnableVertexAttribArray(1);
...
// (pixels) x y r g b a
dataToSend data [4] = {{width, height, 255, 255, 0, 255}, // Corner 1 (Up-Right)
{width, 0, 255, 0, 255, 255}, // Corner 2 (Down-Right)
{0, 0, 0, 255, 0, 0}, // Corner 3 (Down-Left)
{0, height, 0, 255, 255, 255}}; // Corner 4 (Up-Left)
glBufferData(GL_ARRAY_BUFFER, sizeof(data), (GLvoid *) data, GL_DYNAMIC_DRAW);
It was a kind of weird experience to use GLvoid and point to a structure instead of an array, but this lets one pass almost any data to shaders. The resulting image is here:
Rendered image
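As a small follow-up (a sketch, not part of the original code): the hard-coded 2 * sizeof(GLushort) offset can be replaced with offsetof from <cstddef>, which stays correct if the struct layout ever changes:
#include <cstddef> // offsetof
glVertexAttribIPointer(0, 2, GL_UNSIGNED_SHORT, sizeof(dataToSend), (GLvoid *) offsetof(dataToSend, x));
glEnableVertexAttribArray(0);
glVertexAttribIPointer(1, 4, GL_UNSIGNED_BYTE, sizeof(dataToSend), (GLvoid *) offsetof(dataToSend, r));
glEnableVertexAttribArray(1);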
I have a set of vertices and normals stored in a buffer. I want to display most of the vertices as points, but I want to draw lines for the remaining few. They are all stored in one vector, with the point data at the front, and I know how far into the buffer the data to be displayed as points extends. I also know the element count for each draw call. I am using only one VAO and one buffer object for this task.
I initialize the GLWidget with the following code:
void GLWidget::initializeGL()
{
connect(context(), &QOpenGLContext::aboutToBeDestroyed, this, &GLWidget::cleanup);
initializeOpenGLFunctions();
glClearColor(0, 0, 0, m_transparent ? 0 : 1);
m_program = new QOpenGLShaderProgram;
m_program->addShaderFromSourceCode(QOpenGLShader::Vertex, m_core ? vertexShaderSourceCore : vertexShaderSource);
m_program->addShaderFromSourceCode(QOpenGLShader::Fragment, m_core ? fragmentShaderSourceCore : fragmentShaderSource);
m_program->bindAttributeLocation("vertex", 0);
m_program->bindAttributeLocation("normal", 1);
m_program->link();
m_program->bind();
m_projMatrixLoc = m_program->uniformLocation("projMatrix");
m_mvMatrixLoc = m_program->uniformLocation("mvMatrix");
m_normalMatrixLoc = m_program->uniformLocation("normalMatrix");
m_lightPosLoc = m_program->uniformLocation("lightPos");
m_vao.create();
QOpenGLVertexArrayObject::Binder vaoBinder(&m_vao);
m_obj.create();
setupBuffer();
setupVertexAttribs();
m_camera.setToIdentity();
QVector3D camPos = QVector3D(0.0, 0.0, 15.0);
m_camera.translate(-camPos);
QVector3D camTarget = QVector3D(0.0, 0.0, 0.0);
QVector3D camDirection = QVector3D(camPos - camTarget).normalized();
QVector3D worldUp = QVector3D(0.0, 1.0, 0.0);
QVector3D camRight = QVector3D::crossProduct(worldUp, camDirection).normalized();
QVector3D camUp = QVector3D::crossProduct(camDirection, camRight);
m_camera.lookAt(camPos, camTarget, camUp);
// Light position is fixed.
m_program->setUniformValue(m_lightPosLoc, QVector3D(0, 0, 200));
m_program->release();
}
The functions setupBuffer() and setupVertexAttribs() do what their names imply. The vertices are laid out in the buffer as the xyz position of each vertex followed by the xyz of its associated normal. They are implemented as follows:
void GLWidget::setupBuffer()
{
m_obj.bind();
m_obj.allocate(vertices.constData(), vertices.size() * sizeof(GLfloat));
}
void GLWidget::setupVertexAttribs()
{
m_obj.bind();
QOpenGLFunctions *f = QOpenGLContext::currentContext()->functions();
f->glEnableVertexAttribArray(0);
f->glEnableVertexAttribArray(1);
f->glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(GLfloat), reinterpret_cast<void *>(0));
f->glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(GLfloat), reinterpret_cast<void *>(3 * sizeof(GLfloat)));
m_obj.release();
}
Now, the QVector vertices is the buffer that is passed to OpenGL. The last few entries in this vector are the vertices that need to be drawn using GL_LINES.
My paintGL() function looks something like this:
void GLWidget::paintGL()
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);
glEnable(GL_CULL_FACE);
glEnable(GL_POINT_SIZE);
glEnable(GL_LINE_WIDTH);
glPointSize(2);
glLineWidth(10);
m_world.setToIdentity();
m_world.rotate(180.0f - (m_xRot / 16.0f), 1, 0, 0);
m_world.rotate(m_yRot / 16.0f, 0, 1, 0);
m_world.rotate(m_zRot / 16.0f, 0, 0, 1);
m_world.scale(m_dispScale);
QOpenGLVertexArrayObject::Binder vaoBinder(&m_vao);
m_program->bind();
m_program->setUniformValue(m_projMatrixLoc, m_proj);
m_program->setUniformValue(m_mvMatrixLoc, m_camera * m_world);
QMatrix3x3 normalMatrix = m_world.normalMatrix();
m_program->setUniformValue(m_normalMatrixLoc, normalMatrix);
glDrawArrays(GL_POINTS, 0, vertices.size() - camVertices.size());
// Draw camera frustums
glDrawArrays(GL_LINES, vertices.size() - camVertices.size(), camVertices.size());
//glDrawElements(GL_POINTS, vecIndices.size(), GL_UNSIGNED_INT, 0);
m_program->release();
}
QVector camVertices is another vector containing the points that need to be drawn as lines. The data in camVertices is appended to the end of the vector vertices before rendering. As seen in the code above, I call glDrawArrays twice: first starting from index 0 of the buffer, then starting from where the previous call ended, to display the remaining points.
The problem is that the points are displayed fine; however, the second call only displays points and does not draw any lines.
Here's a link to a screenshot of the displayed output - https://drive.google.com/open?id=1i7CjO1qkBALw78KKYGvBteydhfAWh3wh
The picture shows an example of the displayed output; the bright green points at the top, set apart from the rest (the box of many points), are the ones that should be drawn with lines. However, I only see points and no lines.
I did a simple test and I'm able to draw points and lines using the following code:
glDrawArrays(GL_POINTS, 0, verticesCount() - 10);
glDrawArrays(GL_LINES, 10, 10);
This is not very different from yours except for the variables. I'm also using one VAO, so it's definitely possible to draw lines after points, as we would expect.
Could you try the same (using plain integers instead of your variables)?
Can you show the debug information on your vertices?
Can you upload a minimal compilable example?
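One thing worth double-checking (a sketch under the assumption that vertices and camVertices store raw floats, six per vertex as in setupVertexAttribs()): glDrawArrays takes its first and count arguments in vertices, not in floats, so the split would look like this:
// Counts expressed in vertices; 6 floats (position + normal) per vertex in this layout.
const int floatsPerVertex = 6;
const GLsizei pointCount = (vertices.size() - camVertices.size()) / floatsPerVertex;
const GLsizei lineCount = camVertices.size() / floatsPerVertex;
glDrawArrays(GL_POINTS, 0, pointCount);
glDrawArrays(GL_LINES, pointCount, lineCount);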
I have a lot of points that I need to draw in a batch. I have been trying for two days and I can't seem to make any progress with glDrawArrays. I have tried DrawNode and drawing each individual point for testing, and that works correctly... but I can't seem to get glDrawArrays to give any visual result.
Here is my drawing code (I changed a few variable names):
auto glProgram = getGLProgram();
if (glProgram == nullptr) {
setGLProgramState(GLProgramState::getOrCreateWithGLProgramName(
GLProgram::SHADER_NAME_POSITION_COLOR));
glProgram = getGLProgram();
if (glProgram == nullptr) {
return;
}
}
glProgram->use();
glProgram->setUniformsForBuiltins();
GL::enableVertexAttribs(GL::VERTEX_ATTRIB_FLAG_POSITION | GL::VERTEX_ATTRIB_FLAG_COLOR);
GLfloat *vertices = new GLfloat[myStruct->data.size()*2];
GLfloat *colors = new GLfloat[myStruct->data.size()*4];
int vIndex = 0;
int cIndex = 0;
for (std::vector<myPointStruct*>::iterator it = myStruct->data.begin(); it != myStruct->data.end(); ++it) {
vertices[vIndex++] = (*it)->pos.x;
vertices[vIndex++] = (*it)->pos.y;
colors[cIndex++] = (*it)->color.r;
colors[cIndex++] = (*it)->color.g;
colors[cIndex++] = (*it)->color.b;
colors[cIndex++] = (*it)->color.a;
glLineWidth(10);
glVertexAttribPointer(GLProgram::VERTEX_ATTRIB_POSITION, 2, GL_FLOAT, GL_FALSE, sizeof(GLfloat), &vertices[0]);
glVertexAttribPointer(GLProgram::VERTEX_ATTRIB_COLOR, 4, GL_FLOAT, GL_FALSE, sizeof(GLfloat), &colors[0]);
glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
glDrawArrays(GL_POINTS, 0, (GLsizei) myStruct->data.size());
CC_INCREMENT_GL_DRAWN_BATCHES_AND_VERTICES(1, (GLsizei) myStruct->data.size());
And here is how I call the method:
_renderTexture->begin();
myMethodForDrawing();
_renderTexture->end();
Director::getInstance()->getRenderer()->render();
I have also tried:
_renderTexture->begin();
_customCommand.init(_renderTexture->getGlobalZOrder());
_customCommand.func = CC_CALLBACK_0(MyClass:: myMethodForDrawing,this);
auto renderer = Director::getInstance()->getRenderer();
renderer->addCommand(&_customCommand);
_renderTexture->end();
The 5th parameter of glVertexAttribPointer specifies the byte offset between consecutive generic vertex attributes. If the stride is 0, the generic vertex attributes are understood to be tightly packed in the array.
Since your vertices and colors are tightly packed, you do not need to set the stride parameter. Note that sizeof(GLfloat) is wrong anyway; in your case it would be 2 * sizeof(GLfloat) for vertices and 4 * sizeof(GLfloat) for colors.
Change your code like this (focus on the 0 for the 5th parameter):
glVertexAttribPointer(GLProgram::VERTEX_ATTRIB_POSITION, 2, GL_FLOAT, GL_FALSE, 0, &vertices[0]);
glVertexAttribPointer(GLProgram::VERTEX_ATTRIB_COLOR, 4, GL_FLOAT, GL_FALSE, 0, &colors[0]);
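Equivalently (a sketch of the explicit-stride variant mentioned above), you can spell the strides out instead of relying on 0:
// Explicit strides: 2 floats per position, 4 floats per color, each in its own tightly packed array.
glVertexAttribPointer(GLProgram::VERTEX_ATTRIB_POSITION, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(GLfloat), &vertices[0]);
glVertexAttribPointer(GLProgram::VERTEX_ATTRIB_COLOR, 4, GL_FLOAT, GL_FALSE, 4 * sizeof(GLfloat), &colors[0]);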
I have drawn a box. I added a basic class that deals with the VAO and VBO.
class BasicObject_V2
{
public:
BasicObject_V2(GLuint typeOfPrimitives = 0){
this->typeOfPrimitives = typeOfPrimitives;
switch (this->typeOfPrimitives){
case GL_QUADS: numVertsPerPrimitive = 4; break;
}}
void allocateMemory(){
glGenVertexArrays(1, &vaoID);
glBindVertexArray(vaoID);
glGenBuffers(1, &vboID);
glBindBuffer(GL_ARRAY_BUFFER, vboID);
glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(vertexAttributes), &vertices[0], GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 48, (const void*)0);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 48, (const void*)12);
glVertexAttribPointer(2, 4, GL_FLOAT, GL_FALSE, 48, (const void*)24);
glVertexAttribPointer(3, 2, GL_FLOAT, GL_FALSE, 48, (const void*)40);
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glEnableVertexAttribArray(2);
glEnableVertexAttribArray(3);
}
GLuint vboID;
GLuint vaoID;
GLuint typeOfPrimitives;
GLuint numVertsPerPrimitive;
int getNumberOfVertices(){
return vertices.size();
}
struct vertexAttributes{
glm::vec3 pos;
glm::vec3 normal;
glm::vec4 color;
glm::vec2 texCoord;
};
std::vector<vertexAttributes> vertices;
};
Here is my Box class.
class Box_V2 : public BasicObject_V2 {
public:
Box_V2(float width = 1, float height = 1, float length = 1) : BasicObject_V2(GL_QUADS) {
float wHalf = width / 2.0f;
float hHalf = height / 2.0f;
float lHalf = length / 2.0f;
vertexAttributes va;
glm::vec3 corners[8];
corners[0] = glm::vec3(-wHalf, -hHalf, lHalf);
corners[1] = (...)
glm::vec3 normals[6];
normals[0] = glm::vec3(0, 0, 1); // front
//(...)
// FRONT
va.pos = corners[0];
va.normal = normals[0];
va.texCoord = glm::vec2(0.0, 0.0);
vertices.push_back(va);
//...
allocateMemory();
}
};
I have also set up a texture for it (according to the OpenGL SuperBible example), but it does not work at all.
static GLubyte checkImage[128][128][4];
void makeCheckImage(void)
{
int i, j, c;
for (i = 0; i<128; i++) {
for (j = 0; j<128; j++) {
c = ((((i & 0x8) == 0) ^ ((j & 0x8)) == 0)) * 255;
checkImage[i][j][0] = (GLubyte)c;
checkImage[i][j][1] = (GLubyte)c;
checkImage[i][j][2] = (GLubyte)c;
checkImage[i][j][3] = (GLubyte)255;
}}}
void init(){
glClearColor(0.7, 0.4, 0.6, 1);
//Create a Texture
makeCheckImage();
//Texure
glGenTextures(4, tex);
//glCreateTextures(GL_TEXTURE_2D, 4, tex);//4.5
glTextureStorage2D(tex[0], 0, GL_RGBA32F, 2, 2);
glBindTexture(GL_TEXTURE_2D, tex[0]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 2, 2, 0, GL_RGBA, GL_FLOAT, checkImage);
//glTextureSubImage2D(tex[0], 0, 0, 0, 2, 2, GL_RGBA, GL_FLOAT, checkImage);//4.5
//load the shader
programID = loadShaders("vertex.vsh", "fragment.fsh");
glUseProgram(programID);
//create a box
box_v2 = new Box_V2();
glBindTexture(GL_TEXTURE_2D, tex[0]);
}}
void drawing(BasicObject* object){
glBindVertexArray(object->vaoID);
glDrawArrays(object->typeOfPrimitives, 0, object->getNumberOfVertices() * object->numVertsPerPrimitive);
glBindVertexArray(0);
}
These are my shaders:
VS:
#version 450 core
layout(location = 0) in vec3 pos;
layout(location = 1) in vec3 normal;
layout(location = 2) in vec4 color;
layout(location = 3) in vec2 texCoord;
uniform mat4 rotate;(...)
out vec4 v_normal;
out vec4 v_color;
out vec4 v_pos;
out vec2 v_texCoord;
void main(){
v_normal = rotate*vec4(normal,1);//*model;
v_color=color;
v_texCoord=texCoord;
gl_Position = rotate*scale*trans*vec4(pos.xyz, 1);
}
And FS:
#version 450 core
in vec4 v_color;
in vec2 v_texCoord;
uniform sampler2D ourTexture;
out vec4 outColor;
void main(){
//outColor=texelFetch(ourTexture, ivec2(gl_FragCoord.xy), 0); //4.5
outColor=texture(ourTexture,v_texCoord*vec2(3.0, 1.0));
}
Right now it draws a black box without any texture on it. I'm new to OpenGL and I can't find where the problem is.
There are a lot of things wrong with your code. It looks like you tried to write it one way, then tried to write it another way, leading to some kind of Frankenstein's code where it's not really one way or the other.
glGenTextures(4, tex);
//glCreateTextures(GL_TEXTURE_2D, 4, tex);//4.5
glTextureStorage2D(tex[0], 0, GL_RGBA32F, 2, 2);
This is a perfect example. You use the non-DSA glGenTextures call, but you immediately turn around and use the DSA glTextureStorage2D call on it. You cannot do that. Why? Because tex[0] hasn't been created yet.
glGenTextures only allocates the name for the texture, not its state data. Only by binding the texture does it get contents. That's why DSA introduced the glCreate* functions; they allocate both the name and state for the object. This is also why glCreateTextures takes a target; that target is part of the texture object's state.
In short, the commented out call was correct all along. Looking at your code as written, the glTextureStorage2D call fails with an OpenGL error (FYI: turn on KHR_debug to see when errors happen).
So let's look at what your code would look like if we take out the failure parts:
glGenTextures(4, tex);
glBindTexture(GL_TEXTURE_2D, tex[0]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 2, 2, 0, GL_RGBA, GL_FLOAT, checkImage);
There are two problems here. First, as BDL mentioned, your data is described incorrectly. You're passing GLubytes, but you told OpenGL you were passing GLfloats. Don't lie to OpenGL.
Second, there's the internal format. Because you used an unsized internal format (and because you're not in OpenGL ES land), the format your implementation chooses will be unsigned normalized. It won't be a 32-bit floating point format. If you want one of those, you must explicitly ask for it, with GL_RGBA32F.
Furthermore, because you allocated mutable storage for your texture (ie: because glTextureStorage2D failed), you now have a problem. You only allocated one mipmap level, but the default filtering mode will try to fetch from multiple mipmaps. And you never set the texture's mipmap range to only sample from the first. So your texture is not complete. An immutable storage texture would have made the mipmap range match what you allocated, but mutable storage textures aren't so nice.
You should always set your mipmap range when creating mutable storage textures:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0); // It's a closed range.
Lastly, you bind the texture to unit 0 for rendering (mostly by accident, since you never call glActiveTexture). But you never bother to inform your shader of this, by setting the ourTexture uniform value to 0.
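Putting these points together, here is a minimal sketch of the GL 4.5 DSA path (an illustration, assuming you keep the GLubyte checker data and upload the full 128x128 image; the debug-output calls need a 4.3+ context):
// Optional: surface GL errors as they happen (KHR_debug).
glEnable(GL_DEBUG_OUTPUT);
glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);
GLuint tex;
glCreateTextures(GL_TEXTURE_2D, 1, &tex); // creates the name *and* its state
glTextureStorage2D(tex, 1, GL_RGBA8, 128, 128); // one mip level, unsigned normalized format
glTextureSubImage2D(tex, 0, 0, 0, 128, 128, GL_RGBA, GL_UNSIGNED_BYTE, checkImage); // the data really is GLubyte
glTextureParameteri(tex, GL_TEXTURE_MIN_FILTER, GL_LINEAR); // only one mip level exists
glBindTextureUnit(0, tex); // texture unit 0
glUniform1i(glGetUniformLocation(programID, "ourTexture"), 0); // programID must be in use (glUseProgram)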
There are several problems with your code:
Data type doesn't match data
The data comes in an unsigned byte format (GLubyte checkImage[128][128][4]) with a size of 4 bytes per pixel (1 byte per channel), but you tell OpenGL that it is in GL_FLOAT format, which would require 16 bytes per pixel (4 bytes per channel).
Data storage
See answer from 246Nt.
Mipmaps/Filters
By default OpenGL has the minification filter set to GL_LINEAR_MIPMAP_LINEAR, which requires the user to supply valid mipmaps for the texture. One can either change this filter to GL_LINEAR or GL_NEAREST, which do not require mipmaps (see glTexParameteri), or generate the mipmaps (glGenerateMipmap).
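For example (a minimal sketch of the filter fix, assuming the texture is currently bound to GL_TEXTURE_2D):
// Either drop mipmap filtering...
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
// ...or keep the default filter and generate the missing mipmaps instead:
// glGenerateMipmap(GL_TEXTURE_2D);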
I am trying to use a 4x4 matrix as a vertex attribute, using this code:
Mat4 matrices[numVerts];
int mtxBoneID = glGetAttribLocation(hProgram, "aMtxBone");
glEnableVertexAttribArray(mtxBoneID + 0);
glEnableVertexAttribArray(mtxBoneID + 1);
glEnableVertexAttribArray(mtxBoneID + 2);
glEnableVertexAttribArray(mtxBoneID + 3);
glVertexAttribPointer(mtxBoneID + 0, 4, GL_FLOAT, GL_FALSE, sizeof(Mat4), ((Vec4*)matrices) + 0);
glVertexAttribPointer(mtxBoneID + 1, 4, GL_FLOAT, GL_FALSE, sizeof(Mat4), ((Vec4*)matrices) + 1);
glVertexAttribPointer(mtxBoneID + 2, 4, GL_FLOAT, GL_FALSE, sizeof(Mat4), ((Vec4*)matrices) + 2);
glVertexAttribPointer(mtxBoneID + 3, 4, GL_FLOAT, GL_FALSE, sizeof(Mat4), ((Vec4*)matrices) + 3);
// shader:
// ...
attribute mat4 aMtxBone;
// ...
But all I get on the screen is garbage.
So, my answer was deleted for reasons unknown.
Here I go again; I'll format it differently this time.
I had the EXACT same problem this question describes: things were drawn quite messed up. At least that's the description.
What fixed it for me was calling
glDrawElementsInstancedBaseVertex(GL_TRIANGLES,
d->drawCount,
GL_UNSIGNED_INT,
0,
p_objects.size(),
0);
To draw my stuff, just this and nothing more.
HOWEVER!
This MIGHT not be the only issue you have if your stuff is drawn incorrectly. The list of things you NEED to get right is immense. So I'm going to put a link here to a great tutorial on how to draw lots of stuff. You can also get the source code and try it out yourself.
http://ogldev.atspace.co.uk/www/tutorial33/tutorial33.html
Sources here: http://ogldev.atspace.co.uk/ogldev-source.zip
Downvote me again and delete my answer; its content hasn't changed at all from what it was.
Ban me from the site, your loss.
You may try something like this.
In your shader, use an explicit layout:
layout(location=x) in mat4 <name>;
x won't necessarily be what glGetAttribLocation would return; you must maintain it yourself. It corresponds to the index you pass to glVertexAttribPointer.
Example:
layout(location=0) in vec4 in_Position;
layout(location=1) in vec4 in_Color;
layout(location=2) in vec4 in_Normal;
glVertexAttribPointer(0, xxxxxxxx);
glVertexAttribPointer(1, xxxxxx);
glVertexAttribPointer(2, xxxxxx);
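Applied to the mat4 case, a sketch (assuming the shader declares layout(location = 3) in mat4 aMtxBone, so it occupies locations 3 to 6, and a VBO holding one Mat4 per vertex is currently bound to GL_ARRAY_BUFFER): each matrix column is set up as its own vec4 attribute:
// One vec4 column per attribute slot; the last argument is the byte offset of column i.
for (int i = 0; i < 4; ++i) {
    glEnableVertexAttribArray(3 + i);
    glVertexAttribPointer(3 + i, 4, GL_FLOAT, GL_FALSE, sizeof(Mat4),
                          (const void *) (sizeof(float) * 4 * i));
}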
First you need to create a VBO (glGenBuffers) for your matrices, then bind it (glBindBuffer) as the current GL_ARRAY_BUFFER, then use
// The last parameter is now a byte offset into the bound buffer; each vec4 column is 16 bytes.
glVertexAttribPointer(mtxBoneID + 0, 4, GL_FLOAT, GL_FALSE, sizeof(Mat4), (const void *) 0);
glVertexAttribPointer(mtxBoneID + 1, 4, GL_FLOAT, GL_FALSE, sizeof(Mat4), (const void *) 16);
glVertexAttribPointer(mtxBoneID + 2, 4, GL_FLOAT, GL_FALSE, sizeof(Mat4), (const void *) 32);
glVertexAttribPointer(mtxBoneID + 3, 4, GL_FLOAT, GL_FALSE, sizeof(Mat4), (const void *) 48);
instead of your glVertexAttribPointer calls.
It looks like your offsets are off. It should be
((Vec4*)matrices) + sizeof(Vec4)*i
instead of
((Vec4*)matrices) + i