C++ OpenGL: Mesh only appears on bottom half of the window

I have just begun learning OpenGL, and I think there is a problem with my index array formula.
I'm trying to render a square terrain using an IBO. When I draw with glDrawElements, the result appears only on the bottom half of the screen, all tightly packed into a rectangular shape, while glDrawArrays works out perfectly, giving a square, centered mesh.
I load my vertex height values from a grayscale image. Here is how I load the vertices and create the indices:
For vertices: right to left, bottom to top
int numVertices = image.width() * image.height() * 3;
float rowResize = image.width() / 2;   // row = image.width()
float colResize = image.height() / 2;  // col = image.height()
GLfloat* vertexData;
vertexData = new GLfloat[numVertices];
int counter = 0;
for (float j = 0; j < col; j++) {
    for (float i = 0; i < row; i++) {
        vertexData[counter++] = (i - rowResize) / rowResize;
        vertexData[counter++] = (j - colResize) / colResize;
        vertexData[counter++] = image.getColor(i, j) / 255.0f;
    }
}
For indices, I'm trying to follow the order {0, 1, 2, 1, 3, 2, ...}:
2     3
-------
|\    |
| \   |
|  \  |
|   \ |
|    \|
-------
0     1
int numIndices = (row - 1) * (col - 1) * 2 * 3;
unsigned short* indexData = new unsigned short[numIndices];
counter = 0;
for (short y = 0; y < col - 1; y++) {
    for (short x = 0; x < row - 1; x++) {
        // lower triangle
        short L_first = y * row + x;
        short L_second = L_first + 1;
        short L_third = L_first + row;
        // upper triangle
        short U_first = L_first + 1;
        short U_second = U_first + row;
        short U_third = U_second - 1;
        indexData[counter++] = L_first;
        indexData[counter++] = L_second;
        indexData[counter++] = L_third;
        indexData[counter++] = U_first;
        indexData[counter++] = U_second;
        indexData[counter++] = U_third;
    }
}
I initialize the VAO, VBO and IBO: generate, bind, and upload data for each buffer object, then unbind everything.
In the game loop I have:
glBindVertexArray(VAO);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(GLfloat) * 3, 0);
//glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glDrawArrays(GL_POINTS, 0, numVertices);
//glDrawElements(GL_TRIANGLE_STRIP, numIndices, GL_UNSIGNED_SHORT, 0);
glBindVertexArray(0);
glfwSwapBuffers(window);
Since drawing from vertices works and drawing from indices doesn't, what could be wrong with my indices generation?
Thank you for your help!
(Weird thing: I just tried with another grayscale image, and it worked well both when drawing from vertices with GL_POINTS and from indices with GL_TRIANGLE_STRIP... welp)
Pictures (screenshots omitted): one using glDrawArrays, one using glDrawElements.
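A likely culprit, though not confirmed in the thread: the indices here are signed shorts, so any vertex index above 32767 overflows (and GL_UNSIGNED_SHORT tops out at 65535), which would explain why a smaller grayscale image works while a larger one doesn't. Also note the index list is built in GL_TRIANGLES order, not strip order. A minimal sketch of the same generation with 32-bit indices (buildGridIndices is a made-up helper name; row/col are the image dimensions as above):

```cpp
#include <vector>
#include <cstdint>

// Hypothetical helper: builds the same two triangles per grid cell as the
// question's code, but with 32-bit indices so grids with more than 32767
// vertices don't overflow the way `short` does. Draw the result with
// glDrawElements(GL_TRIANGLES, ..., GL_UNSIGNED_INT, 0).
std::vector<std::uint32_t> buildGridIndices(int row, int col) {
    std::vector<std::uint32_t> idx;
    idx.reserve(static_cast<std::size_t>(row - 1) * (col - 1) * 6);
    for (int y = 0; y < col - 1; ++y) {
        for (int x = 0; x < row - 1; ++x) {
            std::uint32_t first  = static_cast<std::uint32_t>(y) * row + x; // 0
            std::uint32_t second = first + 1;                               // 1
            std::uint32_t third  = first + row;                             // 2
            // lower triangle {0, 1, 2}, upper triangle {1, 3, 2}
            idx.insert(idx.end(), {first, second, third,
                                   second, third + 1, third});
        }
    }
    return idx;
}
```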

Related

How to fix texturing a terrain?

I have coded a flat terrain made up of triangles, but it seems like some mirroring occurs. I want it to imitate grass, but it is blurred in some spots. Do I need to add some parameters in glTexParameteri? Or is it an error in the drawing code?
A function which reads in a texture:
GLuint Terrain::write_model_texture(const char* filename)
{
    GLuint tex;
    // Activate texture unit 0
    glActiveTexture(GL_TEXTURE0);
    // Read into CPU memory
    std::vector<unsigned char> image;
    unsigned width, height;
    // Read the image
    unsigned error = lodepng::decode(image, width, height, filename);
    // Import into graphics card memory
    glGenTextures(1, &tex); // Initialize one handle
    glBindTexture(GL_TEXTURE_2D, tex); // Activate handle
    // Copy image to graphics card memory represented by the active handle
    glTexImage2D(GL_TEXTURE_2D, 0, 4, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, (unsigned char*)image.data());
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    return tex;
}
Also code which draws triangles:
this->P = glm::mat4(1.0f);
this->V = glm::mat4(1.0f);
this->M = glm::mat4(1.0f);
P = in.P;
V = in.V;
//M = glm::rotate(M, glm::radians(90.0f), glm::vec3(1.0f, 0.0f, 0.0f));
M = glm::translate(M, glm::vec3(-5.0f, -5.0f, 0.0f));
//M = glm::scale(M, glm::vec3(10.0f, 10.0f, 10.0f));
for (int row = 0; row < terrain_height; row++)
{
    int col;
    // adding a row of vertices
    for (col = 0; col < terrain_width; col++) {
        // x, y, z, 1
        //std::cout << random_num << std::endl;
        terrain_verts.emplace_back(col, row, 0, 1);
        terrain_norms.emplace_back(0.0f, 0.0f, 1.0f, 0);
    }
    // adding a row of indices
    for (col = 0; col < terrain_width; col++)
    {
        terrain_indices.emplace_back(col + row * terrain_width);
        terrain_indices.emplace_back(col + row * terrain_width + 1);
        terrain_indices.emplace_back(col + terrain_width * (row + 1) - 1);
    }
    for (col = terrain_width - 1; col >= 0; col--)
    {
        terrain_indices.emplace_back(col + row * terrain_width);
        terrain_indices.emplace_back(col + terrain_width * (row + 1) - 1);
        terrain_indices.emplace_back(col + terrain_width * (row + 1));
    }
    // adding a row of texture coordinates
    if (row % 2 == 0)
    {
        for (col = 0; col < terrain_width; col += 2)
        {
            terrain_texture_coordinates.emplace_back(0, 0);
            terrain_texture_coordinates.emplace_back(1, 0);
        }
    }
    else
    {
        for (col = 0; col < terrain_width; col += 2)
        {
            terrain_texture_coordinates.emplace_back(0, 1);
            terrain_texture_coordinates.emplace_back(1, 1);
        }
    }
}
spTextured->use();
glUniformMatrix4fv(spTextured->u("P"), 1, false, glm::value_ptr(P));
glUniformMatrix4fv(spTextured->u("V"), 1, false, glm::value_ptr(V));
glEnableVertexAttribArray(spTextured->a("vertex"));
glEnableVertexAttribArray(spTextured->a("texCoord"));
glEnableVertexAttribArray(spTextured->a("normal"));
glUniformMatrix4fv(spTextured->u("M"), 1, false, glm::value_ptr(M));
glVertexAttribPointer(spTextured->a("vertex"), 4, GL_FLOAT, false, 0, terrain_verts.data());
glVertexAttribPointer(spTextured->a("texCoord"), 2, GL_FLOAT, false, 0, terrain_texture_coordinates.data());
glVertexAttribPointer(spTextured->a("normal"), 4, GL_FLOAT, false, 0, terrain_norms.data());
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex);
glUniform1i(spTextured->u("tex"), 0);
glDrawElements(GL_TRIANGLES, terrain_indices_count(), GL_UNSIGNED_INT, terrain_indices.data());
glDisableVertexAttribArray(spTextured->a("vertex"));
glDisableVertexAttribArray(spTextured->a("color"));
glDisableVertexAttribArray(spTextured->a("normal"));
Your indices do not make the slightest sense:
for (col = 0; col < terrain_width; col++)
{
    terrain_indices.emplace_back(col + row * terrain_width);
    terrain_indices.emplace_back(col + row * terrain_width + 1);
    terrain_indices.emplace_back(col + terrain_width * (row + 1) - 1);
}
for (col = terrain_width - 1; col >= 0; col--)
{
    terrain_indices.emplace_back(col + row * terrain_width);
    terrain_indices.emplace_back(col + terrain_width * (row + 1) - 1);
    terrain_indices.emplace_back(col + terrain_width * (row + 1));
}
If you look at your grid of data:
row = 0:  0 --- 1 --- 2 --- 3
          |     |     |     |
          |     |     |     |
row = 1:  4 --- 5 --- 6 --- 7
          ^     ^     ^     ^
        col=0   1     2     3
For row 0, column 0, you generate two triangles with the following vertices:
0, 1, 3, which is a degenerate triangle with zero area (all three vertices lie in row 0)
0, 3, 4, which goes completely across your grid.
For inner cells like row=0, col=1, you get:
1, 2, 4, which crosses two grid cells
1, 4, 5, which at least belongs to a grid cell (although not the one it should)
You can actually see these patterns in your screenshot; you just have to take into account that you are drawing lots of weirdly overlapping triangles.
Your tex coords also won't work that way. You generate tex coords for alternating rows like this:
row = 0:  (0,0) --- (1,0)
            |         |
            |         |
row = 1:  (0,1) --- (1,1)
            |         |
            |         |
row = 2:  (0,0) --- (1,0)
If you map texcoords that way, the cells between row 1 and row 2 will have the image vertically mirrored relative to those between row 0 and row 1. You can't share a row's vertices between the grid cells below and above it that way; you would have to duplicate the row vertices with different texcoords to make that work.
However, that is not necessary: you can use the GL_REPEAT texture wrap mode and simply use texcoords outside the [0,1] range, like:
row = 0:  (0,0) --- (1,0) --- (2,0)
            |         |         |
            |         |         |
row = 1:  (0,1) --- (1,1) --- (2,1)
            |         |         |
            |         |         |
row = 2:  (0,2) --- (1,2) --- (2,2)
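The fix implied above can be sketched as plain index math: give each cell two triangles built from its own four corners, and let texcoords simply run (col, row) so GL_REPEAT tiles the texture. buildGrid and GridData are made-up names for illustration; the real code uses terrain_indices and terrain_texture_coordinates.

```cpp
#include <vector>

// Sketch of per-cell indexing for a width x height vertex grid, as
// described above: each cell gets two triangles that stay inside the
// cell. Texcoords run (col, row) per vertex; with GL_REPEAT wrapping,
// values outside [0,1] tile the texture without mirroring.
struct GridData {
    std::vector<unsigned> indices;
    std::vector<float>    texcoords; // 2 floats per vertex
};

GridData buildGrid(int width, int height) {
    GridData g;
    for (int row = 0; row < height; ++row)
        for (int col = 0; col < width; ++col) {
            g.texcoords.push_back(static_cast<float>(col));
            g.texcoords.push_back(static_cast<float>(row));
        }
    for (int row = 0; row + 1 < height; ++row)
        for (int col = 0; col + 1 < width; ++col) {
            unsigned i0 = row * width + col; // top-left corner of the cell
            unsigned i1 = i0 + 1;            // top-right
            unsigned i2 = i0 + width;        // bottom-left
            unsigned i3 = i2 + 1;            // bottom-right
            g.indices.insert(g.indices.end(), {i0, i1, i3}); // triangle 1
            g.indices.insert(g.indices.end(), {i0, i3, i2}); // triangle 2
        }
    return g;
}
```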

OpenGL textures are being rendered with weird interruptions

I am working on an OpenGL engine and my textures are being rendered weirdly. The textures are mostly full and working, but they have little weird interruptions. Here's what it looks like.
The bottom right corner shows what the textures are supposed to look like; there are also randomly colored squares of blue peppered in there. The solid (untextured) squares do not have these interruptions.
I can provide code, but I'm not sure what to show because I've checked everywhere and I don't know where the problem is from.
I am working on a Java and a C++ version. Here is the renderer in Java (If you want to see something else just ask):
public class BatchRenderer2D extends Renderer2D {
    private static final int MAX_SPRITES = 60000;
    private static final int VERTEX_SIZE = Float.BYTES * 3 + Float.BYTES * 2 + Float.BYTES * 1 + Float.BYTES * 1;
    private static final int SPRITE_SIZE = VERTEX_SIZE * 4;
    private static final int BUFFER_SIZE = SPRITE_SIZE * MAX_SPRITES;
    private static final int INDICES_SIZE = MAX_SPRITES * 6;
    private static final int SHADER_VERTEX_INDEX = 0;
    private static final int SHADER_UV_INDEX = 1;
    private static final int SHADER_TID_INDEX = 2;
    private static final int SHADER_COLOR_INDEX = 3;

    private int VAO;
    private int VBO;
    private IndexBuffer IBO;
    private int indexCount;
    private FloatBuffer buffer;
    private List<Integer> textureSlots = new ArrayList<Integer>();

    public BatchRenderer2D() {
        init();
    }

    public void destroy() {
        IBO.delete();
        glDeleteBuffers(VBO);
        glDeleteVertexArrays(VAO);
        glDeleteBuffers(VBO);
    }

    public void init() {
        VAO = glGenVertexArrays();
        VBO = glGenBuffers();
        glBindVertexArray(VAO);
        glBindBuffer(GL_ARRAY_BUFFER, VBO);
        glBufferData(GL_ARRAY_BUFFER, BUFFER_SIZE, GL_DYNAMIC_DRAW);
        glEnableVertexAttribArray(SHADER_VERTEX_INDEX);
        glEnableVertexAttribArray(SHADER_UV_INDEX);
        glEnableVertexAttribArray(SHADER_TID_INDEX);
        glEnableVertexAttribArray(SHADER_COLOR_INDEX);
        glVertexAttribPointer(SHADER_VERTEX_INDEX, 3, GL_FLOAT, false, VERTEX_SIZE, 0);
        glVertexAttribPointer(SHADER_UV_INDEX, 2, GL_FLOAT, false, VERTEX_SIZE, 3 * 4);
        glVertexAttribPointer(SHADER_TID_INDEX, 1, GL_FLOAT, false, VERTEX_SIZE, 3 * 4 + 2 * 4);
        glVertexAttribPointer(SHADER_COLOR_INDEX, 4, GL_UNSIGNED_BYTE, true, VERTEX_SIZE, 3 * 4 + 2 * 4 + 1 * 4);
        glBindBuffer(GL_ARRAY_BUFFER, 0);

        int[] indices = new int[INDICES_SIZE];
        int offset = 0;
        for (int i = 0; i < INDICES_SIZE; i += 6) {
            indices[i]     = offset + 0;
            indices[i + 1] = offset + 1;
            indices[i + 2] = offset + 2;
            indices[i + 3] = offset + 2;
            indices[i + 4] = offset + 3;
            indices[i + 5] = offset + 0;
            offset += 4;
        }
        IBO = new IndexBuffer(indices, INDICES_SIZE);
        glBindVertexArray(0);
    }

    @Override
    public void begin() {
        glBindBuffer(GL_ARRAY_BUFFER, VBO);
        buffer = glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY).asFloatBuffer();
    }

    @Override
    public void submit(Renderable2D renderable) {
        Vector3f position = renderable.getPosition();
        Vector2f size = renderable.getSize();
        Vector4f color = renderable.getColor();
        List<Vector2f> uv = renderable.getUV();
        float tid = renderable.getTID();
        float c = 0;
        float ts = 0.0f;
        if (tid > 0) {
            boolean found = false;
            for (int i = 0; i < textureSlots.size(); i++) {
                if (textureSlots.get(i) == tid) {
                    ts = (float) (i + 1);
                    found = true;
                    break;
                }
            }
            if (!found) {
                if (textureSlots.size() >= 32) {
                    end();
                    flush();
                    begin();
                }
                textureSlots.add((int) tid);
                ts = (float) textureSlots.size();
            }
        } else {
            int r = (int) (color.x * 255);
            int g = (int) (color.y * 255);
            int b = (int) (color.z * 255);
            int a = (int) (color.w * 255);
            c = Float.intBitsToFloat((r << 0) | (g << 8) | (b << 16) | (a << 24));
        }
        transformationBack.multiply(position).store(buffer);
        uv.get(0).store(buffer);
        buffer.put(ts);
        buffer.put(c);
        transformationBack.multiply(new Vector3f(position.x, position.y + size.y, position.z)).store(buffer);
        uv.get(1).store(buffer);
        buffer.put(ts);
        buffer.put(c);
        transformationBack.multiply(new Vector3f(position.x + size.x, position.y + size.y, position.z)).store(buffer);
        uv.get(2).store(buffer);
        buffer.put(ts);
        buffer.put(c);
        transformationBack.multiply(new Vector3f(position.x + size.x, position.y, position.z)).store(buffer);
        uv.get(3).store(buffer);
        buffer.put(ts);
        buffer.put(c);
        indexCount += 6;
    }

    @Override
    public void end() {
        glUnmapBuffer(GL_ARRAY_BUFFER);
        glBindBuffer(GL_ARRAY_BUFFER, 0);
    }

    @Override
    public void flush() {
        for (int i = 0; i < textureSlots.size(); i++) {
            glActiveTexture(GL_TEXTURE0 + i);
            glBindTexture(GL_TEXTURE_2D, textureSlots.get(i));
        }
        glBindVertexArray(VAO);
        IBO.bind();
        glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, NULL);
        IBO.unbind();
        glBindVertexArray(0);
        indexCount = 0;
    }
}
You didn't provide your shader code, but I'm pretty sure I know the reason (I had the same problem; following The Cherno's tutorial? ;)). Just as information, what is your GPU? (It seems AMD has more problems.) Linking my thread as a source.
Important part:
Fragment Shader:
#version 330 core
if (fs_in.tid > 0.0) {
    int tid = int(fs_in.tid - 0.5);
    texColor = texture(textures[tid], fs_in.uv);
}
What you try to do here is not allowed per the GLSL 3.30 specification, which states:
Samplers aggregated into arrays within a shader (using square brackets [ ]) can only be indexed with integral constant expressions (see section 4.3.3 “Constant Expressions”).
Your tid is not a constant, so this will not work.
In GL 4, this constraint has been somewhat relaxed to (quote is from GLSL 4.50 spec):
When aggregated into arrays within a shader, samplers can only be indexed with a dynamically uniform integral expression, otherwise results are undefined.
And your input isn't dynamically uniform either, so you will get undefined results there too.
(Thanks derhass)
One "simple" solution (not pretty, and I believe it has a small performance impact):
switch (tid) {
    case 0: textureColor = texture(textures[0], fs_in.uv); break;
    ...
    case 31: textureColor = texture(textures[31], fs_in.uv); break;
}
Also, as a small note: you're doing a lot of matrix multiplications for the squares. You could multiply just the first vertex and then derive the others by adding the offsets; it boosted my performance by around 200 FPS (in your example: multiply, then add y, then add x, then subtract y again).
Edit:
Clearly my algebra is not where it should be; what I said you could do (now struck through) is completely wrong, sorry.

C++ OpenGL Large Mesh is missing triangles

So I'm putting together a height-map renderer that will do most of the work in the vertex shader, but first of course I generate a mesh to render. At the moment I'm playing around with the upper limits of OpenGL and C++ to see how dense a mesh I can render (so I later have something to go by in terms of LoD mesh dividing).
ANYWAY! To cut to the issue:
I noticed the issue after testing mesh resolutions of 32 and 64: at 128 I experienced runtime crashes. I stopped those by using a self-made class, indexFace, which holds 6 indices, to lower the array length. The problem is that at 128 resolution only a third of the mesh actually displays. I was wondering if there is a limit to how many indices OpenGL can render or hold using one set of buffer objects, or if it's an issue with my handling of the C++ side of things.
I'm generating the mesh via the following:
void HeightMapMesh::GenerateMesh(GLfloat meshScale, GLushort meshResolution)
{
    GLushort vertexCount = (meshResolution + 1) * (meshResolution + 1);
    Vertex_Texture* vertexData = new Vertex_Texture[vertexCount];
    GLushort indexCount = (meshResolution * meshResolution) * 6;
    // indexFace holds 6 GLushorts in an attempt to overcome the array size limit
    indexFace* indexData = new indexFace[meshResolution * meshResolution];
    GLfloat scalar = meshScale / ((GLfloat)meshResolution);
    GLfloat posX = 0;
    GLfloat posY = 0;
    for (int x = 0; x <= meshResolution; x++)
    {
        posX = ((GLfloat)x) * scalar;
        for (int y = 0; y <= meshResolution; y++)
        {
            posY = ((GLfloat)y) * scalar;
            vertexData[y + (x * (meshResolution + 1))] = Vertex_Texture(posX, posY, 0.0f, x, y);
        }
    }
    GLint indexPosition;
    GLint TL, TR, BL, BR;
    for (int x = 0; x < meshResolution; x++)
    {
        for (int y = 0; y < meshResolution; y++)
        {
            indexPosition = (y + (x * (meshResolution)));
            BL = y + (x * (meshResolution + 1));
            TL = y + 1 + (x * (meshResolution + 1));
            BR = y + ((x + 1) * (meshResolution + 1));
            TR = y + 1 + ((x + 1) * (meshResolution + 1));
            indexData[indexPosition] = indexFace(
                BL, TR, TL,
                BL, BR, TR
            );
        }
    }
    mesh.Fill(vertexData, vertexCount, (void*)indexData, indexCount, GL_STATIC_DRAW, GL_STATIC_DRAW);
    delete[] vertexData;
    delete[] indexData;
}

// This is for mesh.Fill()
void Fill(T* vertData, GLushort vertCount, void* indData, GLushort indCount, GLenum vertUsage, GLenum indUsage)
{
    indexCount = indCount;
    vertexCount = vertCount;
    glBindBuffer(GL_ARRAY_BUFFER, vertexBufferObjectID);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBufferObjectID);
    glBufferData(GL_ARRAY_BUFFER, sizeof(T) * vertexCount, vertData, vertUsage);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(GLushort) * indexCount, indData, indUsage);
}
It's because you made your indices shorts.
For example this: GLushort indexCount = (meshResolution * meshResolution) * 6; is hitting USHRT_MAX at a value of 105 for meshResolution. (105*105*6 = 66150 > 65535)
Use ints as indices. So change your indices everywhere to unsigned ints and do the final draw call like this:
glDrawElements( GL_QUADS, indCount, GL_UNSIGNED_INT, indices); //do this
//glDrawElements( GL_QUADS, indCount, GL_UNSIGNED_SHORT, indices); //instead of this
//also GL_QUADS is deprecated but it seems your data is in that format so I left it that way
You could save a bunch of indices if you drew GL_TRIANGLE_STRIPs instead, or better yet do tessellation on the GPU, since this is pretty much the perfect use case for it.
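A sketch of the suggested change, keeping the question's BL/TL/BR/TR layout but with 32-bit counts and indices throughout (buildMeshIndices is a made-up helper name; the result would be drawn with GL_UNSIGNED_INT):

```cpp
#include <vector>
#include <cstdint>

// Sketch of the fix suggested above: keep every count and index 32-bit.
// At meshResolution = 128, indexCount = 128 * 128 * 6 = 98304, which no
// longer fits in a GLushort (max 65535) -- the wrapped-around counts are
// consistent with only part of the mesh being drawn.
std::vector<std::uint32_t> buildMeshIndices(std::uint32_t meshResolution) {
    std::vector<std::uint32_t> idx;
    idx.reserve(meshResolution * meshResolution * 6u);
    for (std::uint32_t x = 0; x < meshResolution; ++x)
        for (std::uint32_t y = 0; y < meshResolution; ++y) {
            std::uint32_t BL = y + x * (meshResolution + 1);       // bottom-left
            std::uint32_t TL = BL + 1;                             // top-left
            std::uint32_t BR = y + (x + 1) * (meshResolution + 1); // bottom-right
            std::uint32_t TR = BR + 1;                             // top-right
            // same winding as the question's indexFace(BL, TR, TL, BL, BR, TR)
            idx.insert(idx.end(), {BL, TR, TL, BL, BR, TR});
        }
    return idx; // pass to glDrawElements with GL_UNSIGNED_INT
}
```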

Drawing many circles using GL_LINE_LOOP

I have a problem rendering many circles on the screen using this code.
float degree = 0;
unsigned int ctr = 0;
for (int xi = -3200; xi < 3200; xi += 2 * r)
{
    for (int yi = 4800; yi > -4800; yi -= 2 * r)
    {
        for (int i = 0; i < 360; ++i)
        {
            vertices.push_back(xi + r * cos(float(degree)));
            vertices.push_back(yi + r * sin(float(degree)));
            vertices.push_back(-8);
            indices.push_back(i + ctr);
            ++degree;
        }
        ctr += 360;
        degree = 0;
    }
}
unsigned int i = 0;
for (i = 0; i < indices.size() / 360; ++i)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, &vertices[i * 360]);
    glLineWidth(1);
    glDrawElements(GL_LINE_LOOP, 360, GL_UNSIGNED_INT, &indices[i * 360]);
    glDisableClientState(GL_VERTEX_ARRAY);
}
Here is the result
In addition, the program crashes when I change the xi range to [-6400, 6400].
Leaving aside the questionable nature of this technique, you look to be accessing the indices incorrectly.
glVertexPointer(3, GL_FLOAT, 0, &vertices[i*360]);
glDrawElements(GL_LINE_LOOP, 360, GL_UNSIGNED_INT, &indices[i*360]);
The indices passed to glDrawElements specify offsets from the array you gave to glVertexPointer. You've defined the indices as relative to the start of the vertex buffer:
indices.push_back(i+ctr);
But you're also moving the buffer offset for each circle you draw. So in your indices buffer, the second circle starts at index 360, but when you draw the second circle, you also move the vertex pointer so that its 0th element is already vertex 360.
Then when you try to access index 360, you're actually accessing element 720 (index 360 on top of a pointer that already starts at element 360).
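In index math, the two consistent addressing schemes look like this (function names are made up for illustration; note each vertex is 3 floats, so even the rebased pointer would need &vertices[i * 360 * 3], not &vertices[i * 360]):

```cpp
#include <cstddef>

// Two consistent ways to address circle `circle` (360 vertices, 3 floats
// each). Both return the first float of the same vertex; the bug above
// comes from mixing scheme A's absolute indices with scheme B's moved
// vertex pointer.

// Scheme A: glVertexPointer(3, GL_FLOAT, 0, &vertices[0]) once,
// index value = circle * 360 + vertexInCircle (what the code stores).
std::size_t absoluteFloatOffset(std::size_t circle, std::size_t vertexInCircle) {
    std::size_t index = circle * 360 + vertexInCircle;
    return index * 3; // first float of that vertex
}

// Scheme B: glVertexPointer(3, GL_FLOAT, 0, &vertices[circle * 360 * 3])
// per circle, index values 0..359 only.
std::size_t rebasedFloatOffset(std::size_t circle, std::size_t vertexInCircle) {
    return circle * 360 * 3 + vertexInCircle * 3;
}
```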

Hard time understanding indices with glDrawElements

I'm trying to draw a terrain with GL_TRIANGLE_STRIP and glDrawElements, but I'm having a really hard time understanding how the indices for glDrawElements work...
Here's what I have so far:
void Terrain::GenerateVertexBufferObjects(float ox, float oy, float oz) {
    float startWidth, startLength, *vArray;
    int vCount, vIndex = -1;
    // width = length = 256
    startWidth = (width / 2.0f) - width;
    startLength = (length / 2.0f) - length;
    vCount = 3 * width * length;
    vArray = new float[vCount];
    for (int z = 0; z < length; z++) {
        // each row advances vIndex by width * 3 = 256 * 3 = 768
        for (int x = 0; x < width; x++) {
            vArray[++vIndex] = ox + startWidth + (x * stepWidth);
            vArray[++vIndex] = oy + heights[z][x];
            vArray[++vIndex] = oz + startLength + (z * stepLength);
        }
    }
    glGenBuffers(1, &vertexBuffer);
    glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
    glBufferData(GL_ARRAY_BUFFER, sizeof(float) * vCount, vArray, GL_STATIC_DRAW);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}

void Terrain::DrawVBO(unsigned int texID, float ox, float oy, float oz) {
    float terrainLight[] = { 1.0f, 1.0f, 1.0f, 1.0f };
    if (!generatedVBOs) {
        GenerateVertexBufferObjects(ox, oy, oz);
        generatedVBOs = true;
    }
    unsigned int indices[] = { 0, 768, 3, 771 };
    glGenBuffers(1, &indexBuffer);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuffer);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(unsigned int) * 4, indices, GL_STATIC_DRAW);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
    glEnableClientState(GL_VERTEX_ARRAY);
    glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
    glVertexPointer(3, GL_FLOAT, 0, 0);
    glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuffer);
    glMaterialfv(GL_FRONT_AND_BACK, GL_AMBIENT_AND_DIFFUSE, terrainLight);
    glDrawElements(GL_TRIANGLE_STRIP, 4, GL_UNSIGNED_INT, 0);
    glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
    glDisableClientState(GL_VERTEX_ARRAY);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}
I believe my vArray is correct, I use the same values when drawing with glBegin(GL_TRIANGLE_STRIP)/glEnd which works just fine.
My guess was to use just the index of the x coordinate for each vertex. But I have no idea if that's the right way to use indices with glDrawElements.
0: Index of the x coordinate from the first vertex of the triangle. Location: (-128, -128).
768: Index of the x coordinate from the second vertex of the triangle. Location: (-128, -127)
3: Index of the x coordinate from the third vertex of the triangle. Location: (-127, -128)
771: Index of the x coordinate from the fourth vertex, which will draw a second triangle. Location: (-127, -127).
I think everything is making sense so far?
What's not working: the location values above (which I double-checked against vArray, and they are correct) are not the ones glDrawElements uses. Two triangles are drawn, but they are a lot bigger than they should be. The strip starts correctly at (-128, -128) but goes to something like (-125, -125) instead of (-127, -127).
I can't understand what I'm doing wrong here...
Using something like the following solves my problem:
unsigned int indices[] = { 0, 256, 1, 257 };
I think it's safe to assume that the index refers to the whole vertex: OpenGL expects the x coordinate to be followed by y and z, but we shouldn't multiply by 3 ourselves; the server does that for us.
And now that I think about it, glDrawElements has the word element in it, which in this case is a vertex with 3 coordinates, as specified in glVertexPointer; we need to pass indices of elements, not of individual floats.
I feel so dumb now...
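The takeaway in miniature, as plain index math (helper names are made up): indices count whole vertices, and OpenGL multiplies by the component count from glVertexPointer internally.

```cpp
#include <cstddef>

// Indices passed to glDrawElements count whole vertices (elements), not
// floats: with glVertexPointer(3, GL_FLOAT, ...), vertex n occupies
// floats [n*3, n*3 + 3). So for a 256-wide grid, the vertex one row down
// from vertex 0 has index 256, even though its x coordinate sits at
// float offset 768 (= 256 * 3) in vArray.
std::size_t gridVertexIndex(std::size_t x, std::size_t z, std::size_t width) {
    return z * width + x; // what glDrawElements expects
}

std::size_t firstFloatOfVertex(std::size_t vertexIndex) {
    return vertexIndex * 3; // where that vertex's x coordinate lives in vArray
}
```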