glMapBufferRange Access Violation - C++

I want to store some particles in a shader storage buffer. I use the glMapBufferRange() function to set the particles' values, but I always get an access violation whenever this function is called.
glGenBuffers(1, &bufferID);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, bufferID);
glBufferData(GL_SHADER_STORAGE_BUFFER, numParticles * sizeof(Particle), NULL, GL_STATIC_DRAW);

struct Particle* particles = (struct Particle*) glMapBufferRange(GL_SHADER_STORAGE_BUFFER, 0, numParticles * sizeof(Particle), GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);
for (int i = 0; i < numParticles; ++i) {
    //.. Do something with particles..//
}
glUnmapBuffer(GL_SHADER_STORAGE_BUFFER);
When I use glMapBuffer() instead, everything works fine. I already made sure that I have created an OpenGL context with GLFW and initialized GLEW properly.

OK, I finally found the problem. When I designed my GLFW window class, I used the GLFW_OPENGL_FORWARD_COMPAT hint to create a forward-compatible OpenGL context. I don't know why I did this, but when I don't use this hint everything works fine. :)
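For what it's worth, a common culprit behind crashes like this in forward-compatible or core contexts is GLEW failing to resolve some core entry points, leaving function pointers such as glMapBufferRange null; setting glewExperimental before glewInit() is the usual workaround. A minimal sketch under that assumption (the window parameters are placeholders, not from the question):

// Sketch: requesting a core, forward-compatible context with GLFW 3 and
// loading entry points with GLEW. glewExperimental = GL_TRUE makes GLEW use
// a core-profile-safe method to resolve functions; without it, a call through
// an unresolved (null) pointer crashes with an access violation.
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3); // shader storage buffers need GL 4.3+
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
GLFWwindow* window = glfwCreateWindow(800, 600, "Particles", NULL, NULL);
glfwMakeContextCurrent(window);
glewExperimental = GL_TRUE;
if (glewInit() != GLEW_OK) { /* handle the error */ }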


Can I call `glDrawArrays` multiple times while updating the same `GL_ARRAY_BUFFER`?

In a single frame, is it "allowed" to update the same GL_ARRAY_BUFFER continuously and keep calling glDrawArrays after each update?
I know this is probably not the best and most recommended way to do it, but my question is: can I do this and expect the GL_ARRAY_BUFFER to be updated before every call to glDrawArrays?
Code example would look like this:
// set up a single buffer and bind it
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);

while (!renderStack.empty())
{
    SomeObjectClass * my_object = renderStack.back();
    renderStack.pop_back();

    // calculate the current buffer size for the data to be drawn in this iteration
    SomeDataArrays * subArrays = my_object->arrayData();
    unsigned int totalBufferSize = subArrays->bufferSize();
    unsigned int vertCount = my_object->vertexCount();

    // initialise the buffer to the desired size and content
    glBufferData(GL_ARRAY_BUFFER, totalBufferSize, NULL, GL_STREAM_DRAW);

    // actually transfer some data to the GPU through glBufferSubData
    for (int j = 0; j < subArrays->size(); ++j)
    {
        unsigned int subBufferOffset = subArrays->get(j)->bufferOffset();
        unsigned int subBufferSize = subArrays->get(j)->bufferSize();
        void * subBufferData = subArrays->get(j)->bufferData();
        glBufferSubData(GL_ARRAY_BUFFER, subBufferOffset, subBufferSize, subBufferData);

        unsigned int subAttributeLocation = subArrays->get(j)->attributeLocation();

        // set some vertex attribute pointers
        glVertexAttribPointer(subAttributeLocation, ...);
        glEnableVertexAttribArray(subAttributeLocation);
    }

    glDrawArrays(GL_POINTS, 0, (GLsizei)vertCount);
}
You may ask why I would want to do that rather than just preload everything onto the GPU at once. The obvious answer: because I can't, when there is too much data to fit into a single buffer.
My problem is that I can only see the result of one of the glDrawArrays calls (I believe the first one); in other words, it appears as if the GL_ARRAY_BUFFER is not updated before each glDrawArrays call, which brings me back to my question of whether this is even possible.
I am using an OpenGL 3.2 Core Profile (under OS X) and link with GLEW for OpenGL setup, as well as Qt 5 for window creation.
Yes, this is legal OpenGL code. It is in no way something that anyone should ever actually do. But it is legal. Indeed, it makes even less sense in your case, because you're calling glVertexAttribPointer for every object.
If you can't fit all your vertex data into memory, or need to generate it on the GPU, then you should stream the data with proper buffer streaming techniques.
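One widely used streaming technique is buffer orphaning: re-specifying the buffer's data store with a NULL pointer before each batch of writes, which lets the driver hand out fresh memory while the GPU may still be reading the old store. A rough sketch, where vbo, dataSize, vertCount, moreBatches() and fillVertexData() are placeholders rather than anything from the question:

glBindBuffer(GL_ARRAY_BUFFER, vbo);
while (moreBatches())
{
    // "Orphan" the previous data store: the driver can allocate a new block
    // instead of stalling until the GPU finishes reading the old one.
    glBufferData(GL_ARRAY_BUFFER, dataSize, NULL, GL_STREAM_DRAW);
    glBufferSubData(GL_ARRAY_BUFFER, 0, dataSize, fillVertexData());
    glDrawArrays(GL_POINTS, 0, vertCount);
}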

Accessing Buffers in OpenGL

So I am aware that you can generate buffers using:
GLuint Buffer = 0;
glGenBuffers (1, &Buffer);
I am told that this will generate 1 buffer name at the address of Buffer. I am also told that if I do this:
GLuint Buffer = 0;
glGenBuffers (2, &Buffer);
Then it will generate 2 buffer names starting at the address of Buffer. So how do I access this second buffer?
You should pass an array or vector to glGenBuffers (with a count of 2, it writes two names starting at the given address, so a single GLuint isn't enough room), for example as follows:
std::vector<GLuint> buffers(2); //or std::array<GLuint, 2> buffers;
glGenBuffers(2, &buffers[0]);
...
// Access buffer elements at buffers[0] and buffers[1]
...
glDeleteBuffers(2, &buffers[0]);
While some people consider plain arrays to be obsolete in C++ (and I don't mean to start a holy war), it's worth pointing out that this also works without using any C++ containers. An old-style array will work just fine:
GLuint buffers[2];
glGenBuffers(2, buffers);
Then use buffers[0] and buffers[1] to reference the two buffer names you generated.

Getting data from vector in an std::map

I'm trying to write a class that renders models from .3ds files, and I'm running into a really annoying issue. I have a map from integers to vectors of doubles,
map<int, vector<double> >
which I use to group the vertices by material properties. After that I try to iterate through all of the keys and have OpenGL render them like so:
glPushMatrix();
glPushClientAttrib(GL_CLIENT_ALL_ATTRIB_BITS);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
for (map<int, vector<double> >::iterator iter = myMaterialVertices.begin(); iter != myMaterialVertices.end(); ++iter)
{
    vector<double> test = iter->second;
    glVertexPointer(3, GL_DOUBLE, 0, test.data());
    // get the texture coords here
    glDrawArrays(GL_TRIANGLES, 0, iter->second.size() / 3);
}
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glPopClientAttrib();
glPopMatrix();
Unfortunately this gives me an error on the glDrawArrays call every single time telling me that I am trying to read address 0. I interpreted this to mean there was a null pointer issue, so I put in the test vector to make sure the data was there. The vector gets loaded correctly but still gives the same error. What am I doing wrong?
Since it was suggested in the comments, I will put up an answer. The call to
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
causes OpenGL to expect a pointer to an array of texture coordinates. Since none was specified, glDrawArrays read through a null pointer (hence the attempt to read address 0x00000000). The lesson here: don't enable a client state unless you also supply the corresponding pointer.
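Concretely, the fix is either to drop the unused state or to feed it. A sketch based on the question's loop; texCoords is hypothetical and would come from the .3ds loader:

// Option 1: enable only the arrays you actually point at.
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_DOUBLE, 0, test.data());
glDrawArrays(GL_TRIANGLES, 0, (GLsizei)(test.size() / 3));
glDisableClientState(GL_VERTEX_ARRAY);

// Option 2: if texturing is wanted, supply the coordinates before drawing.
// texCoords (two doubles per vertex) is an assumed placeholder.
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(2, GL_DOUBLE, 0, texCoords.data());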

OpenGL - Vertex Buffer Object Is Not Drawn To Screen

I am trying to go from displaying my GL_QUADS with glBegin() to displaying them with a VBO. Unfortunately, after my change the program does not display the quads anymore. All I see is a black screen. This is what I am doing:
I define two VBOs. One for vertex data, one for indices, and allocate some space in them:
glGenBuffersARB(2, pboIds);
glBindBufferARB(GL_ARRAY_BUFFER_ARB, pboIds[0]);
glBufferDataARB(GL_ARRAY_BUFFER_ARB, 50000*sizeof(GLfloat), 0, GL_STREAM_DRAW_ARB);
glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, pboIds[1]);
glBufferDataARB(GL_ELEMENT_ARRAY_BUFFER_ARB, 50000*sizeof(GLint), 0, GL_STREAM_DRAW_ARB);
Then I map them and write information to them:
float *ptr = (float*)glMapBufferARB(GL_ARRAY_BUFFER_ARB, GL_READ_WRITE_ARB);
int *ptrind = (int*)glMapBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, GL_READ_WRITE_ARB);
int nVertexCount = 0;
for (...) {
    *ptr = vertX; ++ptr; *ptrind = nVertexCount; ++ptrind; nVertexCount++;
    *ptr = vertY; ++ptr; *ptrind = nVertexCount; ++ptrind; nVertexCount++;
    *ptr = vertZ; ++ptr; *ptrind = nVertexCount; ++ptrind; nVertexCount++;
}
glUnmapBufferARB(GL_ARRAY_BUFFER_ARB);
glUnmapBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB);
Finally, I try to display them:
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, 0);
glDrawElements(GL_QUADS, nVertexCount, GL_INT, (pboIds+1));
glDisableClientState(GL_VERTEX_ARRAY);
My program compiles just fine, and I don't get any crashes when executing it. But all I see is a black screen. I use the same routine to determine the vertex coordinates as I did with glBegin(GL_QUADS); back then I could see white quads on the screen. Maybe I am doing something wrong in my VBO code?
EDIT:
Unfortunately I am under Windows. Do I have to do something about these definitions to stop using the ARB extension? I have them from some example code:
#ifdef _WIN32
PFNGLGENBUFFERSARBPROC pglGenBuffersARB = 0;
PFNGLBINDBUFFERARBPROC pglBindBufferARB = 0;
PFNGLBUFFERDATAARBPROC pglBufferDataARB = 0;
PFNGLBUFFERSUBDATAARBPROC pglBufferSubDataARB = 0;
PFNGLDELETEBUFFERSARBPROC pglDeleteBuffersARB = 0;
PFNGLGETBUFFERPARAMETERIVARBPROC pglGetBufferParameterivARB = 0;
PFNGLMAPBUFFERARBPROC pglMapBufferARB = 0;
PFNGLUNMAPBUFFERARBPROC pglUnmapBufferARB = 0;
#define glGenBuffersARB pglGenBuffersARB
#define glBindBufferARB pglBindBufferARB
#define glBufferDataARB pglBufferDataARB
#define glBufferSubDataARB pglBufferSubDataARB
#define glDeleteBuffersARB pglDeleteBuffersARB
#define glGetBufferParameterivARB pglGetBufferParameterivARB
#define glMapBufferARB pglMapBufferARB
#define glUnmapBufferARB pglUnmapBufferARB
#endif
glDrawElements(GL_QUADS, nVertexCount, GL_INT, (pboIds+1));
You don't pass the buffer object name to this function. It uses whatever buffer object happens to be bound to GL_ELEMENT_ARRAY_BUFFER at the time. The last parameter is the byte offset from the beginning of that buffer; it specifies where the first index is. (Note also that GL_INT is not a valid index type for glDrawElements; indices must be GL_UNSIGNED_BYTE, GL_UNSIGNED_SHORT, or GL_UNSIGNED_INT.)
Also, stop using the ARB extension version of buffer objects. Just use the core stuff.
Unfortunately I am under windows.
I don't see what that has to do with it. Use an OpenGL loading library if you can't load the function pointers yourself.
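Putting the advice together, a corrected version using the core (non-ARB) buffer entry points might look like the sketch below; pboIds and nVertexCount are carried over from the question, and the buffers are assumed to have been created with the core glGenBuffers/glBufferData:

glBindBuffer(GL_ARRAY_BUFFER, pboIds[0]);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, pboIds[1]); // glDrawElements reads indices from here
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, 0); // 0 = byte offset into the bound GL_ARRAY_BUFFER
// Pass a byte offset (0 = first index), not a pointer to the buffer name,
// and use an unsigned index type (GL_INT is not legal here).
glDrawElements(GL_QUADS, nVertexCount, GL_UNSIGNED_INT, 0);
glDisableClientState(GL_VERTEX_ARRAY);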

Learning to use VBOs properly

So I've been trying to teach myself to use VBOs, in order to boost the performance of my OpenGL project and learn more advanced stuff than fixed-function rendering. But I haven't found much in the way of a decent tutorial; the best ones I've found so far are Songho's tutorials and the stuff at OpenGL.org, but I seem to be missing some kind of background knowledge to fully understand what's going on, though I can't tell exactly what it is I'm not getting, save the usage of a few parameters.
In any case, I've forged on ahead and come up with some cannibalized code that, at least, doesn't crash, but it leads to bizarre results. What I want to render is this (rendered using fixed-function; it's supposed to be brown and the background grey, but all my OpenGL screenshots seem to adopt magenta as their favorite color; maybe it's because I use SFML for the window?).
What I get, though, is this:
I'm at a loss. Here's the relevant code I use, first for setting up the buffer objects (I allocate lots of memory as per this guy's recommendation to allocate 4-8MB):
GLuint WorldBuffer;
GLuint IndexBuffer;
...
glGenBuffers(1, &WorldBuffer);
glBindBuffer(GL_ARRAY_BUFFER, WorldBuffer);
int SizeInBytes = 1024 * 2048;
glBufferData(GL_ARRAY_BUFFER, SizeInBytes, NULL, GL_STATIC_DRAW);
glGenBuffers(1, &IndexBuffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, IndexBuffer);
SizeInBytes = 1024 * 2048;
glBufferData(GL_ELEMENT_ARRAY_BUFFER, SizeInBytes, NULL, GL_STATIC_DRAW);
Then for uploading the data into the buffer. Note that CreateVertexArray() fills the vector at the passed location with vertex data, with each vertex contributing 3 floats for position and 3 floats for normal (one of the most confusing things about the various tutorials was what format I should store and transfer my actual vertex data in; this seemed like a decent approximation):
std::vector<float>* VertArray = new std::vector<float>;
pWorld->CreateVertexArray(VertArray);
unsigned short Indice = 0;
for (int i = 0; i < VertArray->size(); ++i)
{
    std::cout << (*VertArray)[i] << std::endl;
    glBufferSubData(GL_ARRAY_BUFFER, i * sizeof(float), sizeof(float), &((*VertArray)[i]));
    glBufferSubData(GL_ELEMENT_ARRAY_BUFFER, i * sizeof(unsigned short), sizeof(unsigned short), &Indice);
    ++Indice;
}
delete VertArray;
Indice -= 1;
After that, in the game loop, I use this code:
glBindBuffer(GL_ARRAY_BUFFER, WorldBuffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, IndexBuffer);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, 0);
glNormalPointer(GL_FLOAT, 0, 0);
glDrawElements(GL_TRIANGLES, Indice, GL_UNSIGNED_SHORT, 0);
glDisableClientState(GL_VERTEX_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
I'll be totally honest - I'm not sure I understand what the third parameter of glVertexPointer() and glNormalPointer() ought to be (stride is the offset in bytes, but Songho uses an offset of 0 bytes between values - what?), or what the last parameter of either of those is. The initial value is said to be 0; but it's supposed to be a pointer. Passing a null pointer in order to get the first coordinate/normal value of the array seems bizarre. This guy uses BUFFER_OFFSET(0) and BUFFER_OFFSET(12), but when I try that, I'm told that BUFFER_OFFSET() is undefined.
Plus, the last parameter of glDrawElements() is supposed to be an address, but again, Songho uses an address of 0. If I use &IndexBuffer instead of 0, I get a blank screen without anything rendering at all, except the background.
Can someone enlighten me, or at least point me in the direction of something that will help me figure this out? Thanks!
The initial value is said to be 0; but it's supposed to be a pointer.
The context (not meaning the OpenGL one) matters. If one of the gl*Pointer functions is called with no buffer object bound to GL_ARRAY_BUFFER, the parameter is a pointer into the client process's address space. If a buffer object is bound to GL_ARRAY_BUFFER, it's an offset into the currently bound buffer object (you may think of the BO as forming a virtual address space, and the parameter to gl*Pointer is then a pointer into that server-side address space).
Now let's have a look at your code:
std::vector<float>* VertArray = new std::vector<float>;
You shouldn't really mix STL containers and new; learn about the RAII pattern.
pWorld->CreateVertexArray(VertArray);
This is problematic, since you'll delete VertArray later on, leaving you with a dangling pointer. Not good.
unsigned short Indice = 0;
for (int i = 0; i < VertArray->size(); ++i)
{
    std::cout << (*VertArray)[i] << std::endl;
    glBufferSubData(GL_ARRAY_BUFFER, i * sizeof(float), sizeof(float), &((*VertArray)[i]));
You should submit large batches of data with glBufferSubData, not individual data points.
    glBufferSubData(GL_ELEMENT_ARRAY_BUFFER, i * sizeof(unsigned short), sizeof(unsigned short), &Indice);
You're just passing incrementing indices into the GL_ELEMENT_ARRAY_BUFFER, thus enumerating the vertices. Why? You can get the same effect, without the extra work, by using glDrawArrays instead of glDrawElements.
    ++Indice;
}
delete VertArray;
You're deleting VertArray here, leaving a dangling pointer behind.
Indice -= 1;
Why didn't you just use the loop counter i?
So how to fix this? Like this:
std::vector<float> VertexArray;
pWorld->LoadVertexArray(VertexArray); // World::LoadVertexArray(std::vector<float> &);
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(float) * VertexArray.size(), &VertexArray[0]);
And then use glDrawArrays; of course, if you're not just enumerating vertices but have a list of face→vertex indices, using glDrawElements is mandatory.
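For completeness, the matching draw call might look like this sketch; the divisor of 6 assumes the layout the question describes (three floats of position plus three of normal per vertex):

// Each vertex occupies 6 floats (position + normal), so the vertex count is
// the float count divided by 6; GL_TRIANGLES matches the original draw call.
glDrawArrays(GL_TRIANGLES, 0, (GLsizei)(VertexArray.size() / 6));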
Don't call glBufferSubData for each vertex; that misses the point of VBOs. You are supposed to create one big buffer of your vertex data and then pass it to OpenGL in a single go.
Read http://www.opengl.org/sdk/docs/man/xhtml/glVertexPointer.xml
When using VBOs, those pointers are relative to the start of the VBO's data. That's why the value is usually 0 or a small offset.
stride = 0 means the data is tightly packed, so OpenGL can calculate the stride from the other parameters.
I usually use VBO like this:
struct Vertex
{
    vec3f position;
    vec3f normal;
};

Vertex data[size];
...
glBufferData(GL_ARRAY_BUFFER, size * sizeof(Vertex), data, GL_STATIC_DRAW);
...
glVertexPointer(3, GL_FLOAT, sizeof(Vertex), (const GLvoid*)offsetof(Vertex, position));
glNormalPointer(GL_FLOAT, sizeof(Vertex), (const GLvoid*)offsetof(Vertex, normal));
Just pass a single chunk of vertex data, and then use gl*Pointer to describe how the data is packed using the offsetof macro.
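Regarding the BUFFER_OFFSET macro the question couldn't find: it isn't part of OpenGL or GLEW; tutorials define it themselves, typically something like this:

// Converts an integer byte offset into the pointer-typed argument expected by
// gl*Pointer / glDrawElements when a buffer object is bound.
#define BUFFER_OFFSET(i) ((char*)NULL + (i))

glVertexPointer(3, GL_FLOAT, sizeof(Vertex), BUFFER_OFFSET(offsetof(Vertex, position)));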
For more about the offset in the last parameter, see this post:
What's the "offset" parameter in GLES20.glVertexAttribPointer/glDrawElements, and where does ptr/indices come from?