I have trouble rendering some geometry using a vertex buffer object. I intend to draw a plane of points, so basically one vertex at every discrete position in my space. However, I cannot render that plane: every time I call glDrawElements(...), the application crashes with an access violation exception. I guess there must be some mistake in the initialization.
This is what I have so far:
#define SPACE_X 512
#define SPACE_Z 512
typedef struct{
    GLfloat x, y, z;    // position
    GLfloat nx, ny, nz; // normals
    GLfloat r, g, b, a; // colors
} Vertex;
typedef struct{
    GLuint i; // index
} Index;
// create vertex buffer
GLuint vertexBufferObject;
glGenBuffers(1, &vertexBufferObject);
// create index buffer
GLuint indexBufferObject;
glGenBuffers(1, &indexBufferObject);
// determine number of vertices / primitives
const int numberOfVertices = SPACE_X * SPACE_Z;
const int numberOfPrimitives = numberOfVertices; // As I'm going to render GL_POINTS, number of primitives is the same as number of vertices
// create vertex array
Vertex* vertexArray = new Vertex[numberOfVertices];
// create index array
Index* indexArray = new Index[numberOfPrimitives];
// create planes (vertex array)
// color of the vertices is red for now
int index = -1;
for(GLfloat x = -SPACE_X / 2; x < SPACE_X / 2; x++) {
    for(GLfloat z = -SPACE_Z / 2; z < SPACE_Z / 2; z++) {
        index++;
        vertexArray[index].x = x;
        vertexArray[index].y = 0.0f;
        vertexArray[index].z = z;
        vertexArray[index].nx = 0.0f;
        vertexArray[index].ny = 0.0f;
        vertexArray[index].nz = 1.0f;
        vertexArray[index].r = 1.0;
        vertexArray[index].g = 0.0;
        vertexArray[index].b = 0.0;
        vertexArray[index].a = 1.0;
    }
}
// bind vertex buffer
glBindBuffer(GL_ARRAY_BUFFER, vertexBufferObject);
// buffer vertex array
glBufferData(GL_ARRAY_BUFFER, numberOfVertices * sizeof(Vertex), vertexArray, GL_DTREAM_DRAW);
// bind vertex buffer again
glBindBuffer(GL_ARRAY_BUFFER, vertexBufferObject);
// enable attrib index 0 (positions)
glEnableVertexAttribArray(0);
// pass positions in
glVertexAttribPointer((GLuint)0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), vertexArray);
// enable attribute index 1 (normals)
glEnableVertexAttribArray(1);
// pass normals in
glVertexAttribPointer((GLuint)1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), &vertexArray[0].nx);
// enable attribute index 2 (colors)
glEnableVertexAttribArray(2);
// pass colors in
glVertexAttribPointer((GLuint)2, 4, GL_FLOAT, GL_FALSE, sizeof(Vertex), &vertexArray[0].r);
// create index array
for(GLuint i = 0; i < numberOfPrimitives; i++) {
    indexArray[i].i = i;
}
// bind buffer
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBufferObject);
// buffer indices
glBufferData(GL_ELEMENT_ARRAY_BUFFER, numberOfPrimitives * sizeof(Index), indexArray, GL_STREAM_DRAW);
// bind buffer again
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBufferObject);
// AND HERE IT CRASHES!
// draw plane of GL_POINTS
glDrawElements(GL_POINTS, numberOfPrimitives, GL_UNSIGNED_INT, indexArray);
// bind default buffers
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
// delete vertex / index buffers
glDeleteBuffers(1, &vertexBufferObject);
glDeleteBuffers(1, &indexBufferObject);
delete[] vertexArray;
vertexArray = NULL;
delete[] indexArray;
indexArray = NULL;
When you are using buffer objects, the last parameter of the gl*Pointer calls and the fourth parameter of glDrawElements are no longer addresses in main memory (yours still are!), but byte offsets into the bound buffer objects. Make sure to compute these offsets in bytes! The offsetof macro is very helpful there.
Look at the second example on this page and compare it to what you did: http://www.opengl.org/wiki/VBO_-_just_examples
And you have one typo: GL_DTREAM_DRAW.
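Putting both fixes together, here is a minimal sketch of what the calls could look like, assuming the Vertex/Index structs and buffer names from the question (offsetof lives in <cstddef>):
// With the VBO bound, the attribute pointers take byte offsets, not addresses.
glBindBuffer(GL_ARRAY_BUFFER, vertexBufferObject);
glBufferData(GL_ARRAY_BUFFER, numberOfVertices * sizeof(Vertex), vertexArray, GL_STREAM_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (GLvoid*)offsetof(Vertex, x));
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (GLvoid*)offsetof(Vertex, nx));
glEnableVertexAttribArray(2);
glVertexAttribPointer(2, 4, GL_FLOAT, GL_FALSE, sizeof(Vertex), (GLvoid*)offsetof(Vertex, r));
// Likewise, the last argument of glDrawElements is a byte offset into the
// bound GL_ELEMENT_ARRAY_BUFFER (0 = start of the index buffer).
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBufferObject);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, numberOfPrimitives * sizeof(Index), indexArray, GL_STREAM_DRAW);
glDrawElements(GL_POINTS, numberOfPrimitives, GL_UNSIGNED_INT, (GLvoid*)0);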
The method glEnableClientState(...) is deprecated! Sorry, for some reason I had overlooked that fact.
I am having difficulty drawing vertices from a structure-of-arrays-like data structure. I think the problem might be the way I am using the stride and pointer arguments in the glVertexAttribPointer call. I have a structure like this:
struct RadarReturn_t
{
    float32_t x;
    float32_t y;
    float32_t z;
    float32_t prob;
};
And I am using RadarReturn_t in another struct like this:
struct Detections_t
{
    uint32_t currentScanNum;
    std::array<RadarReturn_t, 64> detections;
};
Let's assume I want to draw 100 of these Detections_t. I have created one VBO to pack all of this information, like this:
glGenBuffers(1, &mRadarVbo);
for (uint32_t iScan = 0; iScan < mMaxNumScans; ++iScan)
{
    // Set the scan number to 0U
    mPersistentDetections.at(iScan).currentScanNum = 0U;
    for (uint32_t iDet = 0; iDet < 64; ++iDet)
    {
        RadarReturn_t& detection = mPersistentDetections.at(iScan).detections.at(iDet);
        detection.x = 0.0F;
        detection.y = 0.0F;
        detection.z = 0.0F;
        detection.prob = 0.0F;
    }
}
// Bind the VBO and copy the initial data to the graphics card
glBindBuffer(GL_ARRAY_BUFFER, mRadarVbo);
glBufferData(GL_ARRAY_BUFFER,
             mMaxNumScans * sizeof(Detections_t),
             &mPersistentDetections,
             GL_DYNAMIC_DRAW);
Where mPersistentDetections is:
std::array<Detections_t, mMaxNumScans> mPersistentDetections;
Later on in my code, I update the buffer with new incoming data like this for currentScanNum:
// Offset is: 64 radar returns plus one scan number
uint32_t offset = scanNum * ((64 * sizeof(RadarReturn_t)) + 1 * sizeof(GLuint));
glBufferSubData(GL_ARRAY_BUFFER, offset, sizeof(GLuint), &mPersistentDetections.at(scanNum).currentScanNum);
and like this for detections:
uint32_t dataSize = 64 * sizeof(RadarReturn_t);
glBufferSubData(GL_ARRAY_BUFFER,
                offset + sizeof(GLuint),
                dataSize,
                &mPersistentDetections.at(scanNum).detections);
This is how I set up the VAO:
// Bind the VAO
glBindVertexArray(mRadarVao);
// Specify the layout of the scan number data
glVertexAttribIPointer(0,
                       1,
                       GL_UNSIGNED_INT,
                       sizeof(Detections_t),
                       (GLvoid*) 0);
// Specify the layout of the radar return data
glVertexAttribPointer(1,
                      4,
                      GL_FLOAT,
                      GL_FALSE,
                      sizeof(Detections_t),
                      (GLvoid*) (sizeof(GLuint)));
And finally the draw call:
glDrawArrays(GL_POINTS, 0, mMaxNumScans * 64);
If I am drawing this for mMaxNumScans = 100, I am not able to draw 100x64 vertices for some reason. Can you please point out where I am going wrong?
EDIT:
As per the suggestion from @Rabbid76, I have modified the Detections_t struct as follows:
struct Detections_t
{
    std::array<RadarReturn_t, 64> detections;
    std::array<uint32_t, 64> scanNumbers;
};
I have also modified the glBufferData and glBufferSubData calls appropriately. And here is where I am still having an issue: I am not able to get the correct stride argument.
glVertexAttribPointer(0,
                      mDetectionAttributeSize,
                      GL_FLOAT,
                      GL_FALSE,
                      sizeof(RadarReturn_t),
                      reinterpret_cast<void*>(offsetof(Detections_t, detections)));
glVertexAttribIPointer(1,
                       mTimestampAttributeSize,
                       GL_INT,
                       sizeof(RadarReturn_t),
                       reinterpret_cast<void*>(offsetof(Detections_t, scanNumbers)));
If I set the stride of attribute 0 to sizeof(Detections_t), not all the points will be drawn. Only sizeof(RadarReturn_t) draws all the points.
And if I set the stride of attribute 1 to sizeof(Detections_t), the color (I am using the scan number to vary the alpha value) of the detections becomes transparent after only a few scans.
I would appreciate it if someone could tell me what the stride value is supposed to be for attribute 0 and attribute 1 in this case.
I am going to answer my own question. I was able to get it working properly by changing the RadarReturn_t struct like this:
struct RadarReturn_t
{
    float32_t x;
    float32_t y;
    float32_t z;
    float32_t prob;
    int32_t scanCounter;
};
And then using the attributes like this:
glVertexAttribPointer(0,
                      4,
                      GL_FLOAT,
                      GL_FALSE,
                      sizeof(RadarReturn_t),
                      nullptr);
glVertexAttribIPointer(1,
                       1,
                       GL_INT,
                       sizeof(RadarReturn_t),
                       (void*) (4 * sizeof(GLfloat)));
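As a side note, the byte offset for attribute 1 can also be computed with offsetof (from <cstddef>) instead of counting floats by hand, which stays correct if the struct layout ever changes; a small sketch using the RadarReturn_t above:
// Equivalent attribute setup with offsetof; the stride is one interleaved
// RadarReturn_t per point.
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, sizeof(RadarReturn_t),
                      reinterpret_cast<void*>(offsetof(RadarReturn_t, x)));
glVertexAttribIPointer(1, 1, GL_INT, sizeof(RadarReturn_t),
                       reinterpret_cast<void*>(offsetof(RadarReturn_t, scanCounter)));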
I have an array of 131072 values to draw in OpenGL with shaders. The coordinates of each point are calculated from the index of the value, but I can't draw them. Right now I get an error in the glDrawArrays call.
This is the part of my code that sets up the VAO and VBO; imagen is a CGfloat pointer with the data:
int pixels = 131072;
// Create vertex array objects
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
// Create vertex buffers
glGenBuffers(1, &vbo);
// VBO for coordinates of first square
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER,
             pixels * sizeof(GLfloat),
             imagen,
             GL_STATIC_DRAW);
glVertexAttribPointer(0, pixels, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(0);
and this is my display function:
void display(void) {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glBindVertexArray(vao);
    glDrawArrays(GL_POINTS, 0, 1);
    glBindVertexArray(0);
    glutSwapBuffers();
    glutPostRedisplay();
}
If I pass an array to the shader, how can I handle the array to calculate the coordinates from the index of each value?
Edit
This is how I calculate the coordinates of each point from the index of the array; if I have a cube of 64x64x32 pixels, I do this:
XX = 64;
YY = 64;
ZZ = 32;
x = index % XX;
y = (index / XX) % YY;
z = (int) floor((double) index / (XX * YY));
And with the value of each element of the array I calculate the color of that point.
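For reference, a minimal CPU-side sketch of that mapping, assuming imagen holds the XX*YY*ZZ values (the points vector here is a hypothetical interleaved x/y/z/value buffer, and <vector> is assumed included); the same arithmetic can be done per vertex in a shader from the vertex index:
const int XX = 64, YY = 64, ZZ = 32;
const int pixels = XX * YY * ZZ; // 64 * 64 * 32 = 131072
std::vector<GLfloat> points;
points.reserve(pixels * 4);
for (int index = 0; index < pixels; ++index) {
    points.push_back((GLfloat)(index % XX));        // x
    points.push_back((GLfloat)((index / XX) % YY)); // y
    points.push_back((GLfloat)(index / (XX * YY))); // z
    points.push_back((GLfloat)imagen[index]);       // value used to color the point
}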
Edit 2
This is the image that I get when I draw all the points; I need to fill this object and get a volume.
I am trying to approximate a curved surface using quadrilateral patches. I did it using straightforward rendering with GL_QUADS, specifying the four vertices of each quad patch.
Now I am trying to gain some performance using vertex buffers and an interleaved array (verNor) of vertices and normals. The problem is that I get some random shapes, but not the correct shape I got previously.
Here is my code:
GLenum err = glewInit();
if (GLEW_OK != err){
    std::cout<<"Failed to initialize GLEW :: "<<glewGetErrorString(err)<<std::endl;
}
verNor = new GLfloat [NA*NP*6]; // NA and NP are the number of points in, let's say, the x and y axes
indices = new GLuint [(NA)*(NP)*4]; // When the tube is cut and spread out.
// VBOs
glGenBuffers(1, &vbo_tube); // Ask the GPU driver for a buffer array. "vbo" now has the ID
glGenBuffers(1, &ibo_indices);
// For Vertices and Normals which are interleaved
glBindBuffer(GL_ARRAY_BUFFER, vbo_tube);
glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat) * 6*NA*NP, NULL, GL_STATIC_DRAW);
// Obtaining the pointer to the memory in graphics buffer
buffer_verNor = glMapBuffer(GL_ARRAY_BUFFER,GL_WRITE_ONLY);
// For Indices
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo_indices);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(int) * 4*(NA-1)*(NP-1), NULL, GL_STATIC_DRAW);
buffer_indices = glMapBuffer(GL_ELEMENT_ARRAY_BUFFER,GL_WRITE_ONLY);
// Calculate the vertices of the points around the tube. Correctness guaranteed because I can draw exactly what I wanted
// using normal straightforward GL_QUADS, that is, drawing quad by quad and without any VBOs.
// Calculated vertices are stored in vPoints.
for (int i=0; i<NP; i++) {
    for (int j=0; j<NA; j++) {
        // Calculate the normals of each and every point above and store them in v3
        // Storing the vertices
        verNor[6*( (i)*NA+(j) )+0] = (GLfloat)vPoints[i*NA+j].GetX();
        verNor[6*( (i)*NA+(j) )+1] = (GLfloat)vPoints[i*NA+j].GetY();
        verNor[6*( (i)*NA+(j) )+2] = (GLfloat)vPoints[i*NA+j].GetZ();
        // Storing the Normals
        verNor[6*((i-1)*NA+(j-1))+3] = (GLfloat)v3.GetX();
        verNor[6*((i-1)*NA+(j-1))+4] = (GLfloat)v3.GetY();
        verNor[6*((i-1)*NA+(j-1))+5] = (GLfloat)v3.GetZ();
        // Calculating the indices which form the quad
        indices[4*((i)*NA+(j))+0] = (GLuint) (i)*NA+j ;
        indices[4*((i)*NA+(j))+1] = (GLuint) (i+1)*NA+j ;
        indices[4*((i)*NA+(j))+2] = (GLuint) (i+1)*NA+j+1 ;
        indices[4*((i)*NA+(j))+3] = (GLuint) (i)*NA+j+1 ;
    }
}
memcpy(buffer_verNor, verNor, 6*(NA)*(NP));
glUnmapBuffer(GL_ARRAY_BUFFER); // Unmapping the buffer
memcpy(buffer_indices, indices, 4*(NA-1)*(NP-1));
glUnmapBuffer(GL_ELEMENT_ARRAY_BUFFER);
glEnable(GL_LIGHTING);
// Performing the Vertex Buffer Stuff
// For Vertices and Normals
glBindBuffer(GL_ARRAY_BUFFER, vbo_tube);
glVertexPointer( 3, GL_FLOAT, 6*sizeof(GLfloat), (GLvoid*)((char*)NULL + 0*sizeof(GLfloat)) );
glNormalPointer( GL_FLOAT, 6*sizeof(GLfloat), (GLvoid*)(((char*)NULL)+3*sizeof(GLfloat)) );
// For Indices
// Mapping the indices_vbo memory here
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo_indices);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(GLuint)*4*(NA-1)*(NP-1), indices, GL_STATIC_DRAW);
// Enabling all the buffers and drawing the quad patches
glBindBuffer(GL_ARRAY_BUFFER, vbo_tube);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo_indices);
// Enabling normals and vertices to draw
glEnableClientState (GL_NORMAL_ARRAY);
glEnableClientState(GL_VERTEX_ARRAY);
// Drawing the patches
glDrawElements(GL_QUADS, (NA-1)*(NP-1), GL_UNSIGNED_INT,(GLvoid*)((char*)NULL));
// Disabling the buffer objects for safety
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);
glDeleteBuffers(1, &vbo_tube);
glDeleteBuffers(1, &ibo_indices);
The grid has NA by NP points, so I have to draw (NP-1)*(NA-1) quads.
Also, I can only get something (but not the correct shape) drawn when I give wrong offsets and strides in the glVertexPointer() and glNormalPointer() functions. The correct ones, I think, are:
glVertexPointer :: stride - 6*sizeof(GLfloat), offset - 0 (last argument)
glNormalPointer :: stride - 6*sizeof(GLfloat), offset - 3*sizeof(GLfloat)
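In code, with the interleaved verNor layout above, those values would look like this (just a sketch restating the stride/offset pairs):
// 6 floats per vertex: x, y, z followed by nx, ny, nz.
glVertexPointer(3, GL_FLOAT, 6 * sizeof(GLfloat), (GLvoid*)0);
glNormalPointer(GL_FLOAT, 6 * sizeof(GLfloat), (GLvoid*)(3 * sizeof(GLfloat)));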
I've been following the tutorial listed here.
I want to draw a single triangle using an Index Buffer Object, a Vertex Buffer Object, my own Vertex and Fragment Shader, and my own vertex structure.
My problem is that nothing shows up when I draw. I'm not sure what I'm doing wrong. My shaders work fine; I've tested them without the use of IBOs/VBOs and they are fine.
Here is my code:
GLuint vao[1], vbo_vertex[1], index_buffer[1];
typedef struct{
    GLfloat x,y,z; // Vertex.
    GLfloat r,g,b; // Colors.
} spicyVertex;
void initializeBuffers(){
    spicyVertex* simple_triangle = new spicyVertex[3];
    // V0 - bottom
    simple_triangle[0].x = 0.0f;
    simple_triangle[0].y = -0.5f;
    simple_triangle[0].z = 0.0f;
    simple_triangle[0].r = 1.0f;
    simple_triangle[0].g = 0.0f;
    simple_triangle[0].b = 0.0f;
    // V1 - top right
    simple_triangle[0].x = 0.5f;
    simple_triangle[0].y = 0.5f;
    simple_triangle[0].z = 0.0f;
    simple_triangle[0].r = 1.0f;
    simple_triangle[0].g = 0.0f;
    simple_triangle[0].b = 0.0f;
    // V2 - top left
    simple_triangle[0].x = -0.5f;
    simple_triangle[0].y = 0.5f;
    simple_triangle[0].z = 0.0f;
    simple_triangle[0].r = 1.0f;
    simple_triangle[0].g = 0.0f;
    simple_triangle[0].b = 0.0f;
    // Setup the vertex buffer data.
    glGenBuffers(1, &vbo_vertex[0]);
    glBindBuffer(GL_ARRAY_BUFFER, vbo_vertex[0]);
    glBufferData(GL_ARRAY_BUFFER, 3*sizeof(spicyVertex), simple_triangle, GL_STATIC_DRAW);
    // Index setup
    GLushort *indices = new GLushort[3];
    indices[0]=0;
    indices[1]=1;
    indices[2]=2;
    glGenBuffers(1, &index_buffer[0]);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, index_buffer[0]);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, 3*sizeof(GLushort), indices, GL_STATIC_DRAW);
    // By this point all of our data should be on the graphics device.
    // VAO setup.
    glGenVertexArrays(1, &vao[0]);
    glBindVertexArray(vao[0]);
    // Bind the vertex buffer and setup pointers for the VAO.
    glBindBuffer(GL_ARRAY_BUFFER, vbo_vertex[0]);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(spicyVertex), BUFFER_OFFSET(0));
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(spicyVertex), BUFFER_OFFSET(sizeof(spicyVertex)*3));
    glEnableVertexAttribArray(0);
    glEnableVertexAttribArray(1);
    glDisableVertexAttribArray(2);
    glDisableVertexAttribArray(3);
    // Bind the index buffer for the VAO.
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, index_buffer[0]);
    // Cleanup.
    delete [] simple_triangle;
    delete [] indices;
    glBindVertexArray(0);
    glDisableVertexAttribArray(0);
    glBindBuffer(GL_ARRAY_BUFFER,0);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER,0);
}
void Draw_indexed_Vao(){
    glBindVertexArray(vao[0]); // select first VAO
    glDrawRangeElements(GL_TRIANGLES, 0, 3, 3, GL_UNSIGNED_SHORT, NULL);
    glBindVertexArray(0);
}
static void display(void){
    glUseProgramObjectARB( programObj );
    Draw_indexed_Vao();
}
I'm not performing any view transformations; when I use more basic means of drawing, everything shows up just fine right in front of the camera. I really do think it's something about the way I'm declaring these buffers.
EDIT 1: The application is double buffered.
EDIT 2: SOLVED. The 3 vertices V0, V1 and V2 were all modifying the same array element. That is, I wasn't using simple_triangle[0], simple_triangle[1], simple_triangle[2]; I was only working with simple_triangle[0]. Thank you again for catching my silly error.
Adding an actual answer: V1 and V2 are both modifying simple_triangle[0], so there is only ever one vertex.
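For clarity, the initialization needs to write three distinct elements, for example:
// Each vertex gets its own array element.
simple_triangle[0].x =  0.0f; simple_triangle[0].y = -0.5f; simple_triangle[0].z = 0.0f; // V0 - bottom
simple_triangle[1].x =  0.5f; simple_triangle[1].y =  0.5f; simple_triangle[1].z = 0.0f; // V1 - top right
simple_triangle[2].x = -0.5f; simple_triangle[2].y =  0.5f; simple_triangle[2].z = 0.0f; // V2 - top left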
You might need to call glFlush() in order to push the contents of the buffers you've drawn to onto the screen. Also, depending on whether you're using double buffering, the call required may be glutSwapBuffers() (if you're using GLUT) or some other swap call.
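Since the application is double buffered (see EDIT 1), the display callback would end with a swap; a sketch assuming GLUT and the functions from the question:
static void display(void){
    glUseProgramObjectARB( programObj );
    Draw_indexed_Vao();
    glutSwapBuffers(); // present the back buffer
}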
You might want to change
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(spicyVertex), BUFFER_OFFSET(sizeof(spicyVertex)*3));
to
glColorAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(spicyVertex), BUFFER_OFFSET(sizeof(spicyVertex)*3));
But I'm not 100% sure.
I'm trying to draw a terrain with GL_TRIANGLE_STRIP and glDrawElements but I'm having a really hard time understanding the indices thing behind glDrawElements...
Here's what I have so far:
void Terrain::GenerateVertexBufferObjects(float ox, float oy, float oz) {
    float startWidth, startLength, *vArray;
    int vCount, vIndex = -1;
    // width = length = 256
    startWidth = (width / 2.0f) - width;
    startLength = (length / 2.0f) - length;
    vCount = 3 * width * length;
    vArray = new float[vCount];
    for(int z = 0; z < length; z++) {
        // vIndex == vIndex + width * 3 || width * 3 = 256 * 3 = 768
        for(int x = 0; x < width; x++) {
            vArray[++vIndex] = ox + startWidth + (x * stepWidth);
            vArray[++vIndex] = oy + heights[z][x];
            vArray[++vIndex] = oz + startLength + (z * stepLength);
        }
    }
    glGenBuffers(1, &vertexBuffer);
    glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
    glBufferData(GL_ARRAY_BUFFER, sizeof(float) * vCount, vArray, GL_STATIC_DRAW);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}
void Terrain::DrawVBO(unsigned int texID, float ox, float oy, float oz) {
    float terrainLight[] = { 1.0f, 1.0f, 1.0f, 1.0f };
    if(!generatedVBOs) {
        GenerateVertexBufferObjects(ox, oy, oz);
        generatedVBOs = true;
    }
    unsigned int indices[] = { 0, 768, 3, 771 };
    glGenBuffers(1, &indexBuffer);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuffer);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(unsigned int) * 4, indices, GL_STATIC_DRAW);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
    glEnableClientState(GL_VERTEX_ARRAY);
    glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
    glVertexPointer(3, GL_FLOAT, 0, 0);
    glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuffer);
    glMaterialfv(GL_FRONT_AND_BACK, GL_AMBIENT_AND_DIFFUSE, terrainLight);
    glDrawElements(GL_TRIANGLE_STRIP, 4, GL_UNSIGNED_INT, 0);
    glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
    glDisableClientState(GL_VERTEX_ARRAY);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}
I believe my vArray is correct; I use the same values when drawing with glBegin(GL_TRIANGLE_STRIP)/glEnd(), which works just fine.
My guess was to use just the index of the x coordinate for each vertex. But I have no idea if that's the right way to use indices with glDrawElements.
0: Index of the x coordinate from the first vertex of the triangle. Location: (-128, -128).
768: Index of the x coordinate from the second vertex of the triangle. Location: (-128, -127)
3: Index of the x coordinate from the third vertex of the triangle. Location: (-127, -128)
771: Index of the x coordinate from the fourth vertex, which will draw a second triangle. Location: (-127, -127).
I think everything is making sense so far?
What's not working is that the location values above (which I double-checked against vArray, and they are correct) are not the ones glDrawElements is using. Two triangles are drawn, but they are a lot bigger than they should be. It starts correctly at (-128, -128), but it goes to something like (-125, -125) instead of (-127, -127).
I can't understand what I'm doing wrong here...
Using something like the following solves my problem:
unsigned int indices[] = { 0, 256, 1, 257 };
I think it's safe to assume that the index refers to a whole vertex, whose x coordinate comes first and is expected by OpenGL to be followed by y and z; we shouldn't multiply by 3 ourselves, the server does it for us.
And now that I think about it, glDrawElements has the word element in it: in this case an element is a vertex with the 3 coordinates specified in glVertexPointer, and we need to pass indices to elements, not to individual coordinates.
I feel so dumb now...
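For reference, a minimal sketch of building the indices for one whole strip row under that scheme, assuming width vertices per row, a current row z, and <vector> included (stripIndices is a hypothetical name):
// Two vertex indices per column: one from row z, one from row z + 1.
// For width = 256 and z = 0 this yields the { 0, 256, 1, 257, ... } pattern above.
std::vector<unsigned int> stripIndices;
for (unsigned int x = 0; x < width; x++) {
    stripIndices.push_back(z * width + x);       // vertex in the current row
    stripIndices.push_back((z + 1) * width + x); // vertex in the next row
}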