I thought that this was impossible, but I'm seeing it with my software. I have built a wrapper object to manage my buffer objects (I am working with shared contexts, so I can't use VAOs), and the VBO side of things was working fine until I started testing it with IBOs (glDrawElements(); I'm using a pure OpenGL 3+ environment).
Here is the code for adding a buffer to my object (Sy_GLObject):
QList< uint > Sy_GLObject::addBuffers( uint numBuffers, GLenum target,
                                       GLenum numericType, GLenum usage )
{
    uint* adds = new uint[numBuffers];
    glGenBuffers( numBuffers, adds );
    QList< uint > l;
    for ( uint i = 0; i < numBuffers; ++i ) {
        Buffer buffer( target, adds[i], 0, numericType, usage );
        buffers_ << buffer;
        l << i;
    }
    delete[] adds;
    Sy_GL::checkError();
    return l;
}
And the buffer names returned by this function are fine, until it is called by this code:
void Sy_BVH::initialiseGLObject()
{
    Sy_application::getMainWindow()->getActiveProject(
        )->getModelContext()->makeCurrent();
    GLuint vLoc = Sy_settings::get( "shader/flat/vertexLoc" ).toUInt();
    drawBBs_ = Sy_GLObject::createObject();
    // Add vertex array.
    drawBBs_->addBuffers( 1 );
    drawBBs_->buffers()[0].setVertexPointer( vLoc );
    // Add indices array.
    drawBBs_->addBuffers( 1, GL_ELEMENT_ARRAY_BUFFER, GL_UNSIGNED_INT );
}
For some reason the indices array and vertex array names are both the same! setVertexPointer() does not actually call glVertexAttribPointer(); it just stores the parameters for it in a POD class, so no OpenGL calls are made between the two addBuffers() calls. The vertex buffer name is 'correct' in that it is one higher than the previous glGenBuffers() result, but from addBuffers()'s point of view there should be no difference between the two calls.
Are there circumstances where glGenBuffers can possibly return the name of a buffer already in use!?
Thanks!
Update
To make sure threading was not a factor, I wrapped a static mutex around the glGenBuffers() block.
QMutexLocker locker( &mutex_ ); // mutex_ is a QMutex static class member.
uint* adds = new uint[numBuffers];
glGenBuffers( numBuffers, adds );
locker.unlock();
But it had absolutely no effect...
Thanks to Ilian Dinev over at the OpenGL.org forums for pointing out this stupid error. I created my Buffer object on the stack and conveniently had its destructor call glDeleteBuffers(). Fantastic bit of design.
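For anyone hitting the same symptom, here is a minimal sketch of the pitfall; the real Buffer class isn't shown in the question, so the members and container below are illustrative only:
struct Buffer
{
    GLuint name;
    explicit Buffer( GLuint n ) : name( n ) {}
    ~Buffer() { glDeleteBuffers( 1, &name ); } // frees the GL name on every destruction
};
QList< Buffer > buffers_;
void addBuffer()
{
    GLuint n = 0;
    glGenBuffers( 1, &n );
    Buffer local( n );  // stack instance
    buffers_ << local;  // the container stores a copy...
}                       // ...and 'local' is destroyed here, deleting the name,
                        // so the next glGenBuffers() call is free to hand it out again.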
Related
I understand how to create the vertex and fragment shaders and how to create vertex arrays and put them in the buffer, but how do I link the two?
Meaning - when I run my program, how does it know that the vertex array that is in the currently active buffer should be "fed" to the vertex shader?
Is that being done simply by using glVertexAttribPointer?
how does it know that the vertices array that is on the currently active buffer should be "fed" to the vertex shader
The "currently active buffer", i.e. GL_ARRAY_BUFFER, isn't used during the draw call. It's only purpose is to tell the glVertexAttribPointer functions which buffer to bind to the VAO. Once this connection is established GL_ARRAY_BUFFER can be unbound.
... but how do I link the two?
The link between your vertex arrays and the vertex shader is the currently active vertex array object (VAO). The pipeline will fetch the vertex attributes that the shader needs from the buffers that were bound to the VAO.
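A minimal sketch of that link in classic (non-DSA) GL follows; the attribute location 0 and all the names here are placeholders for illustration, not something taken from the question:
void setupAndDraw( const float* positions, GLsizeiptr bytes, GLsizei vertexCount )
{
    GLuint vao = 0, vbo = 0;
    glGenVertexArrays( 1, &vao );
    glGenBuffers( 1, &vbo );
    glBindVertexArray( vao );
    glBindBuffer( GL_ARRAY_BUFFER, vbo );                          // only consumed by glVertexAttribPointer below
    glBufferData( GL_ARRAY_BUFFER, bytes, positions, GL_STATIC_DRAW );
    glVertexAttribPointer( 0, 3, GL_FLOAT, GL_FALSE, 0, nullptr ); // the VAO now remembers vbo for attribute 0
    glEnableVertexAttribArray( 0 );
    glBindBuffer( GL_ARRAY_BUFFER, 0 );                            // safe: the attribute -> buffer link lives in the VAO
    glBindVertexArray( vao );                                      // at draw time only the bound VAO matters
    glDrawArrays( GL_TRIANGLES, 0, vertexCount );
    glBindVertexArray( 0 );
}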
It may help to summarize VAO state with the following pseudo-C definitions:
struct VertexArrayObject
{
    // VertexArrayElementBuffer
    uint element_buffer;
    struct Binding {
        // VertexArrayVertexBuffers
        uint buffer;
        intptr offset;
        sizei stride;
        // VertexArrayBindingDivisor
        uint divisor;
    } bindings[];
    struct Attrib {
        // VertexArrayAttribBinding
        uint binding; // This is an index into bindings[]
        // EnableVertexArrayAttrib
        bool enabled;
        // VertexArrayAttrib*Format
        int size;
        enum type;
        boolean normalized;
        boolean integer;
        boolean long;
        uint relativeoffset;
    } attribs[];
};
The comments mention the corresponding OpenGL 4.5 DSA function that can be used to set the respective state.
When you use glVertexAttribPointer, it essentially does the following on the currently bound VAO:
vao.attribs[index].binding = index;
vao.attribs[index].size = size;
vao.attribs[index].type = type;
vao.attribs[index].normalized = normalized;
vao.attribs[index].relativeoffset = 0;
vao.bindings[index].buffer = current ARRAY_BUFFER;
vao.bindings[index].offset = pointer;
vao.bindings[index].stride = stride; // if stride == 0 computes based on size and type
Notice that it's the glVertexAttribPointer call that binds the 'active' buffer to the VAO. If instead you set up your VAO using the direct state access API (DSA), you don't even need to ever 'activate' any buffer.
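For comparison, a rough GL 4.5 DSA sketch; attribute 0, binding index 0, and the data array are assumptions for illustration only:
const float data[] = { 0.f, 0.f, 0.f /* ... interleaved vec3 positions ... */ };
GLuint vao = 0, vbo = 0;
glCreateVertexArrays( 1, &vao );
glCreateBuffers( 1, &vbo );
glNamedBufferData( vbo, sizeof( data ), data, GL_STATIC_DRAW );
glVertexArrayVertexBuffer( vao, 0, vbo, 0, 3 * sizeof( float ) ); // binding 0 <- vbo, stride of one vec3
glEnableVertexArrayAttrib( vao, 0 );
glVertexArrayAttribFormat( vao, 0, 3, GL_FLOAT, GL_FALSE, 0 );
glVertexArrayAttribBinding( vao, 0, 0 );                          // attribute 0 reads from binding 0
// No glBindBuffer(GL_ARRAY_BUFFER, ...) call was ever needed.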
I was originally using glDrawElementsInstancedBaseVertex to draw the scene meshes. All the meshes' vertex attributes are interleaved in a single buffer object. In total there are only 30 unique meshes, so I've been calling draw 30 times with instance counts, etc., but now I want to batch the draw calls into one using glMultiDrawElementsIndirect. Since I have no experience with this function, I've been reading articles here and there to understand the implementation, with little success. (For testing purposes, all meshes are instanced only once.)
The command structure, from the OpenGL reference page:
struct DrawElementsIndirectCommand
{
    GLuint vertexCount;
    GLuint instanceCount;
    GLuint firstVertex;
    GLuint baseVertex;
    GLuint baseInstance;
};

DrawElementsIndirectCommand commands[30];

// Populate commands.
for (size_t index { 0 }; index < 30; ++index)
{
    const Mesh* mesh{ m_meshes[index] };
    commands[index].vertexCount   = mesh->elementCount;
    commands[index].instanceCount = 1; // Just testing with 1 instance, ATM.
    commands[index].firstVertex   = mesh->elementOffset();
    commands[index].baseVertex    = mesh->verticeIndex();
    commands[index].baseInstance  = 0; // Shouldn't impact testing?
}

// Create and populate the GL_DRAW_INDIRECT_BUFFER buffer... bla bla
Then later down the line, after setup I do some drawing.
// Some prep before drawing like bind VAO, update buffers, etc.

// Draw?
if (RenderMode == MULTIDRAW)
{
    // Bind, Draw, Unbind
    glBindBuffer(GL_DRAW_INDIRECT_BUFFER, m_indirectBuffer);
    glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT, nullptr, 30, 0);
    glBindBuffer(GL_DRAW_INDIRECT_BUFFER, 0);
}
else
{
    for (size_t index { 0 }; index < 30; ++index)
    {
        const Mesh* mesh { m_meshes[index] };
        glDrawElementsInstancedBaseVertex(
            GL_TRIANGLES,
            mesh->elementCount,
            GL_UNSIGNED_INT,
            reinterpret_cast<GLvoid*>(mesh->elementOffset()),
            1,
            mesh->verticeIndex());
    }
}
The glDrawElements... path still works fine as before when I switch back to it. But glMultiDraw... gives indistinguishable meshes; when I set firstVertex to 0 for all commands, the meshes look almost correct (at least distinguishable) but are still largely wrong in places. I feel I'm missing something important about indirect multi-drawing.
//Indirect data
commands[index].firstVertex = mesh->elementOffset();
//Direct draw call
reinterpret_cast<GLvoid*>(mesh->elementOffset()),
That's not how it works for indirect rendering. The firstVertex is not a byte offset; it's the first vertex index. So you have to divide the byte offset by the size of the index to compute firstVertex:
commands[index].firstVertex = mesh->elementOffset() / sizeof(GLuint);
The result of that should be a whole number. If it wasn't, then you were doing unaligned reads, which probably hurt your performance. So fix that ;)
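Folding that back into the question's fill loop, it would look something like this (assuming GL_UNSIGNED_INT indices, as in the draw call):
for (size_t index { 0 }; index < 30; ++index)
{
    const Mesh* mesh{ m_meshes[index] };
    commands[index].vertexCount   = mesh->elementCount;
    commands[index].instanceCount = 1;
    commands[index].firstVertex   = static_cast<GLuint>(mesh->elementOffset() / sizeof(GLuint)); // index count, not bytes
    commands[index].baseVertex    = mesh->verticeIndex();
    commands[index].baseInstance  = 0;
}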
I've got a training app written in WinAPI.
So, I've got GL initialized there, and I've got a node-based system that can be described by a couple of classes:
class mesh
{
    GLuint vbo_index; // this is for having a unique vbo
    float *vertex_array;
    float *normal_array;
    unsigned int vertex_count;
    // etc. ... all those mesh things.
};
class node
{
    bool is_mesh;     // the node may or may not represent a mesh
    mesh * mesh_ptr;  // if it does then this pointer is a valid address
};
I've also got 2 global variables for keeping a record of the renderable meshes:
mesh **mesh_table;
unsigned int mesh_count;
Right now I'm experimenting with 2 objects, so I create 2 nodes of type mesh::cube with a customizable number of x, y and z segments. The expected behaviour of my app is to let the user click between the 2 nodes CUBE0 and CUBE1 and show their customizable attributes - segments x, segments y, segments z. The user tweaks both objects' parameters, and they are rendered on top of each other in wireframe mode so we can see the changes in their topology in real time.
When the node is created for the first time, if the node type is mesh, then the mesh object is generated, its mesh_ptr is written into mesh_table, and mesh_count increments. After that, my OpenGL window class creates a unique vertex buffer object for the new mesh and stores its index in mesh_ptr->vbo_index:
void window_glview::add_mesh_to_GPU(mesh* mesh_data)
{
    glGenBuffers(1, &mesh_data->vbo_index);
    glBindBuffer(GL_ARRAY_BUFFER, mesh_data->vbo_index);
    glBufferData(GL_ARRAY_BUFFER, mesh_data->vertex_count*3*4, mesh_data->vertex_array, GL_DYNAMIC_DRAW);
    glVertexAttribPointer(5, 3, GL_FLOAT, GL_FALSE, 0, NULL); //set vertex attrib (0)
    glEnableVertexAttribArray(5);
}
After that the user is able to tweak the parameters. Each time a parameter value changes, the object's mesh information is re-evaluated based on the new values (while it remains the same mesh instance), and then the VBO data is updated by
void window_glview::update_vbo(mesh *_mesh)
{
    glBindBuffer(GL_ARRAY_BUFFER, _mesh->vbo_vertex);
    glBufferData(GL_ARRAY_BUFFER, _mesh->vertex_count*12, _mesh->vertex_array, GL_DYNAMIC_DRAW);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}
and the whole scene redrawn by
for (unsigned short i = 0; i < mesh_count; i++)
    draw_mesh(mesh_table[i], GL_QUADS, false);
SwapBuffers(hDC);
The function for drawing a single mesh is:
bool window_glview::draw_mesh(mesh* mesh_data, unsigned int GL_DRAW_METHOD, bool indices)
{
    glUseProgram(id_program);
    glBindBuffer(GL_ARRAY_BUFFER, mesh_data->vbo_index);
    GLuint id_matrix_loc = glGetUniformLocation(id_program, "in_Matrix");
    glUniformMatrix4fv(id_matrix_loc, 1, GL_TRUE, cam.matrixResult.get());
    GLuint id_color_loc = glGetUniformLocation(id_program, "uColor");
    glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
    glUniform3f(id_color_loc, mesh_color[0], mesh_color[1], mesh_color[2]);
    glDrawArrays(GL_DRAW_METHOD, 0, mesh_data->vertex_count);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glUseProgram(0);
    return true;
}
The problem is that only the last object created is drawn correctly that way; the other object's points are all at (0, 0, 0), so in the viewport one cube is rendered with proper parameters and the other cube as just a DOT.
QUESTION: Where did I go wrong?
You have a fundamental misunderstanding of what glBindBuffer(GL_ARRAY_BUFFER,mesh_data->vbo_vertex); does.
That sets the bound array buffer, which is actually only used by a handful of commands (mostly glVertexAttrib{I|L}Pointer (...)); binding the buffer by itself is not going to do anything useful.
What you need to do is something along the lines of this:
bool window_glview::draw_mesh(mesh* mesh_data, unsigned int GL_DRAW_METHOD, bool indices)
{
    glUseProgram(id_program);

    //
    // Setup Vertex Pointers in addition to binding a VBO
    //
    glBindBuffer(GL_ARRAY_BUFFER, mesh_data->vbo_vertex);
    glVertexAttribPointer(5, 3, GL_FLOAT, GL_FALSE, 0, NULL); //set vertex attrib (0)
    glEnableVertexAttribArray(5);

    GLuint id_matrix_loc = glGetUniformLocation(id_program, "in_Matrix");
    glUniformMatrix4fv(id_matrix_loc, 1, GL_TRUE, cam.matrixResult.get());
    GLuint id_color_loc = glGetUniformLocation(id_program, "uColor");
    glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
    glUniform3f(id_color_loc, mesh_color[0], mesh_color[1], mesh_color[2]);
    glDrawArrays(GL_DRAW_METHOD, 0, mesh_data->vertex_count);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glUseProgram(0);
    return true;
}
Now, if you really want to make this simple and be able to do this just by changing a single object binding, I would suggest you look into Vertex Array Objects. They will persistently store the vertex pointer state.
In your draw, glBindBuffer(GL_ARRAY_BUFFER, mesh_data->vbo_index); doesn't actually do anything; the information about the vertex attribute is not bound to the buffer at all. It is set by the glVertexAttribPointer(5, 3, GL_FLOAT, GL_FALSE, 0, NULL); call, which gets overwritten each time a new mesh is uploaded.
Either create and use a VAO, or move that call from add_mesh_to_GPU to draw_mesh.
For the VAO you would do:
void window_glview::add_mesh_to_GPU(mesh* mesh_data)
{
    glGenVertexArrays(1, &mesh_data->vao_index); // new GLuint field
    glBindVertexArray(mesh_data->vao_index);

    glGenBuffers(1, &mesh_data->vbo_index);
    glBindBuffer(GL_ARRAY_BUFFER, mesh_data->vbo_index);
    glBufferData(GL_ARRAY_BUFFER, mesh_data->vertex_count*3*4, mesh_data->vertex_array, GL_DYNAMIC_DRAW);
    glVertexAttribPointer(5, 3, GL_FLOAT, GL_FALSE, 0, NULL); //set vertex attrib (0)
    glEnableVertexAttribArray(5);

    glBindVertexArray(0);
}

bool window_glview::draw_mesh(mesh* mesh_data, unsigned int GL_DRAW_METHOD, bool indices)
{
    glBindVertexArray(mesh_data->vao_index);
    glUseProgram(id_program);

    GLuint id_matrix_loc = glGetUniformLocation(id_program, "in_Matrix");
    glUniformMatrix4fv(id_matrix_loc, 1, GL_TRUE, cam.matrixResult.get());
    GLuint id_color_loc = glGetUniformLocation(id_program, "uColor");
    glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
    glUniform3f(id_color_loc, mesh_color[0], mesh_color[1], mesh_color[2]);
    glDrawArrays(GL_DRAW_METHOD, 0, mesh_data->vertex_count);

    glUseProgram(0);
    glBindVertexArray(0);
    return true;
}
Using DirectX 9, I am trying to create and then fill in an LPDIRECT3DTEXTURE9 texture in the following way.
First, I create the texture with IDirect3DDevice9::CreateTexture:
LPDIRECT3DTEXTURE9 pTexture;
if ( FAILED( pd3dDevice->CreateTexture( MAX_IMAGE_WIDTH,
                                        MAX_IMAGE_HEIGHT,
                                        1,
                                        0,               // D3DUSAGE_DYNAMIC,
                                        D3DFMT_A8R8G8B8,
                                        D3DPOOL_MANAGED, // D3DPOOL_DEFAULT,
                                        &pTexture,
                                        NULL ) ) )
{
    // Handle error case
}
Then, I try and lock a rectangle on the texture as follows:
unsigned int uiSize = GetTextureSize();
D3DLOCKED_RECT rect;
ARGB BlackColor = { (char)0xFF, (char)0xFF, (char)0xFF, (char)0x00 };
::ZeroMemory( &rect, sizeof( D3DLOCKED_RECT ) );

// Lock outline texture to rect, and then cast rect to bits and use bits as outlineTexture access point
if ( pTexture == NULL )
{
    return ERROR_NOT_INITIALIZED;
}

pTexture->LockRect( 0, &rect, NULL, D3DLOCK_NOSYSLOCK ); // Consider ?
ARGB* bits = (ARGB*)rect.pBits;
for ( unsigned int uiPixel = 0; uiPixel < uiSize; ++uiPixel )
{
    // Copy all black pixels only
    if ( compositeMask[uiPixel] == BlackColor )
    {
        bits[uiPixel] = compositeMask[uiPixel];
    }
}
pTexture->UnlockRect( 0 );
return ERROR_SUCCESS;
ARGB is just a struct defined as follows:
struct ARGB
{
    char b;
    char g;
    char r;
    char a;

    bool operator==( ARGB& comp )
    {
        if ( a == comp.a &&
             r == comp.r &&
             g == comp.g &&
             b == comp.b )
            return TRUE;
        else
            return FALSE;
    }

    bool operator!=( ARGB& comp )
    {
        return !( this->operator==( comp ) );
    }
};
What I want to do is pre-calculate an array of pixel data (a black outline) depending on an in-application algorithm, and then only write the pure black pixels from that set of pixel data onto my LPDIRECT3DTEXTURE9 to be rendered later.
The application currently throws an ACCESS_VIOLATION exception (0xC0000005) at the LockRect call. Can anyone explain why?
Here's the exact exception detail:
Unhandled exception at 0x0132F261 in TestApp.exe: 0xC0000005: Access violation reading location 0x00000001.
The location varied between 0x00000000 and 0x00000001... Does that hint at anything?
Also, if there's a better way to do what I am trying to do, then I'd be all ears :)
Like the other commentators on your question, I can't see anything wrong in principle with the way that you create and lock the texture. I have done the same myself - creating a texture in D3DPOOL_MANAGED and using LockRect to update the content.
However, there are three areas that concern me. I'm posting as an answer because there's far too much for a comment, so please bear with me...
1. Using the D3DLOCK_NOSYSLOCK flag when locking. I have found that this can cause conflicts when the D3D device has been created for multithreaded operation.
2. The way you access the locked bits takes no account of the stride. I appreciate that the error apparently occurs before this code, but it's worth mentioning anyway.
3. You are casting to your own struct for pixel access and it's unclear what the actual size of the struct may be because I can't see your packing options for the project.
So, I suggest three things that you can do to identify if any of the above are causing a problem:
First, just use the default zero flag for the locking call
pTexture->LockRect( 0, &rect, NULL, 0 );
Second, verify that your ARGB structure really is 4 bytes
ASSERT(sizeof(ARGB) == 4);
Finally, do nothing except lock and unlock the texture and see if you still get a runtime error, but also check the return code
HRESULT hr = pTexture->LockRect( 0, &rect, NULL, 0 );
ASSERT(SUCCEEDED(hr));
hr = pTexture->UnlockRect( 0 );
ASSERT(SUCCEEDED(hr));
In any case, when updating the texture bits, you must do it on a row-by-row basis, taking account of the stride reported back from the LockRect call in D3DLOCKED_RECT.Pitch.
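For illustration only, a pitch-aware copy could look roughly like this; it ignores the black-pixel filtering from your loop and just shows the row addressing (MAX_IMAGE_WIDTH, MAX_IMAGE_HEIGHT and compositeMask are taken from the question):
D3DLOCKED_RECT rect;
HRESULT hr = pTexture->LockRect( 0, &rect, NULL, 0 );
if ( SUCCEEDED( hr ) )
{
    BYTE* dst = static_cast<BYTE*>( rect.pBits );
    const ARGB* src = compositeMask;
    for ( unsigned int y = 0; y < MAX_IMAGE_HEIGHT; ++y )
    {
        // Each destination row starts Pitch bytes after the previous one;
        // Pitch may be larger than MAX_IMAGE_WIDTH * sizeof( ARGB ).
        memcpy( dst + y * rect.Pitch, src + y * MAX_IMAGE_WIDTH,
                MAX_IMAGE_WIDTH * sizeof( ARGB ) );
    }
    pTexture->UnlockRect( 0 );
}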
Perhaps you could update your question with the results of the above and I can amend this answer as necessary.
This was mind-numbingly stupid. Sorry everyone.
I followed the texture pointer all the way through the code. The LPDIRECT3DTEXTURE9 pointers are actually stored inside another custom Texture class with extra contextual data attached, and those wrapper objects were members of yet another class that was being copied and used all over the place with no assignment operator or copy constructor written for it. At some point, out of the huge list of textures being processed, one of the textures sent from the container class was found to be invalid because it genuinely was invalid: it was supposed to contain a copy of another texture, but contained only an invalid pointer.
Sorry for the unfortunate amateur error everyone, but thank you all for the great pointers and assurance
I've been delving into unmanaged DirectX 11 for the first time (bear with me), and there's an issue that, although asked several times over the forums, still leaves me with questions.
I am developing an app in which objects are added to the scene over time. On each render loop I want to collect all vertices in the scene and render them, reusing a single vertex and index buffer for performance and best practice. My question is regarding the usage of dynamic vertex and index buffers. I haven't been able to fully understand their correct usage when scene content changes.
vertexBufferDescription.Usage = D3D11_USAGE_DYNAMIC;
vertexBufferDescription.BindFlags = D3D11_BIND_VERTEX_BUFFER;
vertexBufferDescription.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
vertexBufferDescription.MiscFlags = 0;
vertexBufferDescription.StructureByteStride = 0;
Should I create the buffers when the scene is initialized and somehow update their content in every frame? If so, what ByteWidth should I set in the buffer description? And what do I initialize it with?
Or should I create it the first time the scene is rendered (frame 1) using the current vertex count as its size? If so, when I add another object to the scene, don't I need to recreate the buffer and change the buffer description's ByteWidth to the new vertex count? If my scene keeps updating its vertices on each frame, a single dynamic buffer would lose its purpose this way...
I've been testing initializing the buffer the first time the scene is rendered, and from there on using Map/Unmap on each frame. I start by filling a vector list with all the scene objects and then update the resource like so:
void Scene::Render()
{
    (...)

    std::vector<VERTEX> totalVertices;
    std::vector<int> totalIndices;
    int totalVertexCount = 0;
    int totalIndexCount = 0;

    for (shapeIterator = models.begin(); shapeIterator != models.end(); ++shapeIterator)
    {
        Model* currentModel = (*shapeIterator);
        // totalVertices gets filled here...
    }

    // At this point totalVertices and totalIndices have all scene data
    if (isVertexBufferSet)
    {
        // This is where it copies the new vertices to the buffer,
        // but it's causing flickering in the entire screen...
        D3D11_MAPPED_SUBRESOURCE resource;
        context->Map(vertexBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &resource);
        memcpy(resource.pData, &totalVertices[0], sizeof(totalVertices));
        context->Unmap(vertexBuffer, 0);
    }
    else
    {
        // This is run in the first frame. But what if new vertices are added to the scene?
        vertexBufferDescription.ByteWidth = sizeof(VERTEX) * totalVertexCount;

        UINT stride = sizeof(VERTEX);
        UINT offset = 0;

        D3D11_SUBRESOURCE_DATA resourceData;
        ZeroMemory(&resourceData, sizeof(resourceData));
        resourceData.pSysMem = &totalVertices[0];

        device->CreateBuffer(&vertexBufferDescription, &resourceData, &vertexBuffer);
        context->IASetVertexBuffers(0, 1, &vertexBuffer, &stride, &offset);

        isVertexBufferSet = true;
    }
At the end of the render loop, while keeping track of the buffer position of the vertices for each object, I finally invoke Draw():
    context->Draw(objectVertexCount, currentVertexOffset);
}
My current implementation is causing my whole scene to flicker. But no memory leaks. Wonder if it has anything to do with the way I am using the Map/Unmap API?
Also, in this scenario, when would it be ideal to invoke buffer->Release()?
Tips or code sample would be great! Thanks in advance!
At the memcpy into the vertex buffer you do the following:
memcpy(resource.pData, &totalVertices[0], sizeof(totalVertices));
sizeof( totalVertices ) is just asking for the size of a std::vector< VERTEX > which is not what you want.
Try the following code:
memcpy(resource.pData, &totalVertices[0], sizeof( VERTEX ) * totalVertices.size() );
Also, you don't appear to be calling IASetVertexBuffers when isVertexBufferSet is true. Make sure you do so.
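For what it's worth, the Map branch with both changes folded in might look something like this (variable names follow your code):
D3D11_MAPPED_SUBRESOURCE resource;
context->Map(vertexBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &resource);
memcpy(resource.pData, &totalVertices[0], sizeof(VERTEX) * totalVertices.size()); // copy the actual vertex data
context->Unmap(vertexBuffer, 0);

UINT stride = sizeof(VERTEX);
UINT offset = 0;
context->IASetVertexBuffers(0, 1, &vertexBuffer, &stride, &offset); // rebind the buffer before drawing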