Drawing a model from an OBJ file - C++

I'm trying to write an OBJ viewer with OpenGL. The program only has to draw the lines of the model's faces, so I need to load:
vertices: sometimes in 3D and sometimes in 4D;
faces: index lists of arbitrary length.
For now I only load OBJ files with 3 elements per face, so I can draw the elements in GL_TRIANGLES mode, but I'm running into trouble with some models:
http://people.sc.fsu.edu/~jburkardt/data/obj/icosahedron.obj
The loading phase seems to work fine; I think the problem is in the render() function:
static void render(void)
{
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glClear(GL_COLOR_BUFFER_BIT);

    glBindBuffer(GL_ARRAY_BUFFER, g_resources.vertex_buffer);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(
        3,                  /* size */
        GL_FLOAT,           /* type */
        3*sizeof(GLfloat),  /* stride */
        (void*)0            /* array buffer offset */
    );

    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, g_resources.element_buffer);
    glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
    glEnable(GL_CULL_FACE);
    glCullFace(GL_BACK);
    glDrawElements(
        GL_TRIANGLES,           /* mode */
        theModel->face.size(),  /* count */
        GL_UNSIGNED_INT,        /* type */
        (void*)0                /* element array buffer offset */
    );

    glDisableClientState(GL_VERTEX_ARRAY);
    glutSwapBuffers();
}
I also have some questions:
Do the indices start from 1 or 0?
What about the index ordering? Is it clockwise?
Is triangulating the faces that have more than 3 indices a good solution?

The indices are 1-based.
Counter-clockwise.
Yes.

If you are having issues only with some models, try disabling GL_CULL_FACE to see the difference; it may be a vertex-order (winding) problem.
Indices in OBJ are 1-based, GL ones are 0-based, so you need to subtract 1 when building the buffer.
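
To illustrate both answers together, here is a minimal sketch of face loading (the helper name and the faceIndices/elementBuffer containers are illustrative, not from the original code; the fan split assumes convex faces):

#include <cstdint>
#include <vector>

// Converts one OBJ "f" line with N >= 3 vertex indices into triangles
// via a fan, fixing the 1-based offset at the same time.
void appendTriangulatedFace(const std::vector<uint32_t>& faceIndices,
                            std::vector<uint32_t>& elementBuffer)
{
    // Fan triangulation: (v0, v1, v2), (v0, v2, v3), ...
    for (size_t i = 1; i + 1 < faceIndices.size(); ++i) {
        elementBuffer.push_back(faceIndices[0]     - 1); // OBJ indices are 1-based,
        elementBuffer.push_back(faceIndices[i]     - 1); // GL element buffers are
        elementBuffer.push_back(faceIndices[i + 1] - 1); // 0-based
    }
}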

Related

Writing and reading from the same texture for an iterative DE solver on OpenGL

I am trying to write a fluid simulator that requires iteratively solving some differential equations (Lattice-Boltzmann Method). I want it to be a real-time graphical visualisation using OpenGL. I ran into a problem. I use a shader to perform the relevant calculations on the GPU. What I want is to pass the texture describing the state of the system at time t into the shader, have the shader perform the calculation and return the state of the system at time t+dt, render the texture on a quad, and then pass the texture back into the shader. However, I found that I cannot read and write to the same texture at the same time. But I am sure I have seen implementations of such calculations on the GPU. How do they work around it? I think I saw a few discussions on ways of working around the fact that OpenGL cannot read from and write to the same texture, but I could not quite understand them and adapt them to my case. To render to texture I use: glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, renderedTexture, 0);
Here is my rendering routine:
do {
    // count frames
    frame_counter++;

    // Render to our framebuffer
    glBindFramebuffer(GL_FRAMEBUFFER, FramebufferName);
    glViewport(0, 0, windowWidth, windowHeight); // Render on the whole framebuffer, complete from the lower left corner to the upper right

    // Clear the screen
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // Use our shader
    glUseProgram(programID);

    // Bind our texture in Texture Unit 0
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, renderTexture);
    glUniform1i(TextureID, 0);

    printf("Inv Width: %f", (float)1.0/windowWidth);

    // Pass inverse widths (put outside of the cycle in future)
    glUniform1f(invWidthID, (float)1.0/windowWidth);
    glUniform1f(invHeightID, (float)1.0/windowHeight);

    // 1st attribute buffer : vertices
    glEnableVertexAttribArray(0);
    glBindBuffer(GL_ARRAY_BUFFER, quad_vertexbuffer);
    glVertexAttribPointer(
        0,        // attribute 0. No particular reason for 0, but must match the layout in the shader.
        3,        // size
        GL_FLOAT, // type
        GL_FALSE, // normalized?
        0,        // stride
        (void*)0  // array buffer offset
    );

    // Draw the triangles!
    glDrawArrays(GL_TRIANGLES, 0, 6); // 2*3 indices starting at 0 -> 2 triangles
    glDisableVertexAttribArray(0);

    // Render to the screen
    glBindFramebuffer(GL_FRAMEBUFFER, 0);

    // Render on the whole framebuffer, complete from the lower left corner to the upper right
    glViewport(0, 0, windowWidth, windowHeight);

    // Clear the screen
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // Use our shader
    glUseProgram(quad_programID);

    // Bind our texture in Texture Unit 0
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, renderedTexture);

    // Set our "renderedTexture" sampler to use Texture Unit 0
    glUniform1i(texID, 0);
    glUniform1f(timeID, (float)(glfwGetTime()*10.0f));

    // 1st attribute buffer : vertices
    glEnableVertexAttribArray(0);
    glBindBuffer(GL_ARRAY_BUFFER, quad_vertexbuffer);
    glVertexAttribPointer(
        0,        // attribute 0. No particular reason for 0, but must match the layout in the shader.
        3,        // size
        GL_FLOAT, // type
        GL_FALSE, // normalized?
        0,        // stride
        (void*)0  // array buffer offset
    );

    // Draw the triangles!
    glDrawArrays(GL_TRIANGLES, 0, 6); // 2*3 indices starting at 0 -> 2 triangles
    glDisableVertexAttribArray(0);

    glReadBuffer(GL_BACK);
    glBindTexture(GL_TEXTURE_2D, sourceTexture);
    glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 0, 0, windowWidth, windowHeight, 0);

    // Swap buffers
    glfwSwapBuffers(window);
    glfwPollEvents();
}
What happens now is that when I render to the framebuffer, the texture I get as an input is empty, I think. But when I render the same texture on screen, it successfully renders what I expect.
Okay, I think I've managed to figure something out. Instead of rendering to a framebuffer, what I can do is use glCopyTexImage2D to copy whatever got rendered on the screen into a texture. Now, however, I have another issue: I can't figure out whether glCopyTexImage2D will work with a framebuffer. It works with onscreen rendering, but I am failing to get it to work when I am rendering to a framebuffer. Not sure if this is even possible in the first place. I made a separate question on this:
Does glCopyTexImage2D work when rendering offscreen?
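
For reference, the standard workaround the question alludes to is "ping-ponging" between two textures: read the state at time t from one texture while writing the state at t+dt into a second texture attached to a framebuffer, then swap the two roles each iteration. A minimal sketch, assuming texA/texB are equally sized textures attached to framebuffers fboA/fboB, and drawFullScreenQuad() is a hypothetical helper that issues the quad draw from the code above:

#include <utility> // std::swap

GLuint tex[2] = { texA, texB };
GLuint fbo[2] = { fboA, fboB };
int src = 0, dst = 1;

for (int step = 0; step < numSteps; ++step) {
    glBindFramebuffer(GL_FRAMEBUFFER, fbo[dst]); // write into the *other* texture
    glUseProgram(programID);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, tex[src]);      // read the previous state
    glUniform1i(TextureID, 0);
    drawFullScreenQuad();
    std::swap(src, dst);                         // next step reads what we just wrote
}
// A final pass renders tex[src] to the default framebuffer for display.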

OpenGL Height Map from text segmentation fault

I am trying to create a heightmap from 25 float values like so:
#define HEIGHT_VERTS 5
#define VALS_PER_VERT_HEIGHT 5
float heightmapVerts[ HEIGHT_VERTS*VALS_PER_VERT_HEIGHT ] = {
    //5
    -0.9, -0.6, -0.4, -0.6, -0.9,
    -0.2,  0.1,  0.3,  0.1, -0.3,
       0,  0.4,  0.8,  0.4,    0,
    -0.2,  0.1,  0.3,  0.1, -0.3,
     0.5, -0.6, -0.4, -0.6, -0.9,
};
I am getting a segmentation fault when calling:
glDrawArrays(GL_TRIANGLES, 0, HEIGHT_VERTS);
It has been suggested to me that this is because the size argument of glVertexAttribPointer() must be 1, 2, 3, or 4, while I pass 5 with:
glVertexAttribPointer(vertLocHeight, VALS_PER_VERT_HEIGHT, GL_FLOAT, GL_FALSE, 0, 0);
But if I make this value smaller (e.g. #define VALS_PER_VERT_HEIGHT 3), I get another error saying that I have too many initializers:
error: too many initializers for 'float [15]'
I have attached the rest of my code for some context. I am very new to OpenGL, so I apologize if the code is messy.
#include <stdio.h>
// GLEW loads OpenGL extensions. Required for all OpenGL programs.
#include <GL/glew.h>
#ifdef __APPLE__
#include <GLUT/glut.h>
#else
#include <GL/glut.h>
#endif
// Utility code to load and compile GLSL shader programs
#include "shader.hpp"
#include <iostream>
#include <fstream>
#include <vector>
#include "glm/glm.hpp"
#define WINDOW_WIDTH 400
#define WINDOW_HEIGHT 400
#define VALS_PER_VERT_HEIGHT 5//5
#define VALS_PER_COLOUR_HEIGHT 4
#define HEIGHT_VERTS 5 //height map vertices per line
using namespace std;
// Handle to our VAO generated in setShaderData method
//heightmap
unsigned int vertexVaoHandleHeight;
// Handle to our shader program
unsigned int programID;
/**
 * Sets the shader uniforms and vertex data.
 * This happens ONCE only, before any frames are rendered.
 * @param id Shader program object to use
 * @returns 0 for success, error otherwise
 */
int setShaderData(const unsigned int &id)
{
    float heightmapVerts[ HEIGHT_VERTS*VALS_PER_VERT_HEIGHT ] = {
        //5
        -0.9, -0.6, -0.4, -0.6, -0.9,
        -0.2,  0.1,  0.3,  0.1, -0.3,
           0,  0.4,  0.8,  0.4,    0,
        -0.2,  0.1,  0.3,  0.1, -0.3,
         0.5, -0.6, -0.4, -0.6, -0.9,
    };

    // Colours for each vertex; red, green, blue and alpha
    // This data is indexed the same order as the vertex data, but reads 4 values
    // Alpha will not be used directly in this example program
    float heightColours[ HEIGHT_VERTS*VALS_PER_COLOUR_HEIGHT ] = {
        0.8f, 0.7f, 0.5f, 1.0f,
        0.3f, 0.7f, 0.1f, 1.0f,
        0.8f, 0.2f, 0.5f, 1.0f,
    };

    // heightmap stuff ##################################################
    // Generate storage on the GPU for our triangle and make it current.
    // A VAO is a set of data buffers on the GPU
    glGenVertexArrays(1, &vertexVaoHandleHeight);
    glBindVertexArray(vertexVaoHandleHeight);

    // Generate new buffers in our VAO
    // A single data buffer store for generic, per-vertex attributes
    unsigned int bufferHeight[2];
    glGenBuffers(2, bufferHeight);

    // Allocate GPU memory for our vertices and copy them over
    glBindBuffer(GL_ARRAY_BUFFER, bufferHeight[0]);
    glBufferData(GL_ARRAY_BUFFER, sizeof(float)*HEIGHT_VERTS*VALS_PER_VERT_HEIGHT, heightmapVerts, GL_STATIC_DRAW);

    // Do the same for our vertex colours
    glBindBuffer(GL_ARRAY_BUFFER, bufferHeight[1]);
    glBufferData(GL_ARRAY_BUFFER, sizeof(float)*HEIGHT_VERTS*VALS_PER_COLOUR_HEIGHT, heightColours, GL_STATIC_DRAW);

    // Now we tell OpenGL how to interpret the data we just gave it
    // Tell OpenGL what shader variable it corresponds to
    // Tell OpenGL how it's formatted (floating point, 3 values per vertex)
    int vertLocHeight = glGetAttribLocation(id, "a_vertex");
    glBindBuffer(GL_ARRAY_BUFFER, bufferHeight[0]);
    glEnableVertexAttribArray(vertLocHeight);
    glVertexAttribPointer(vertLocHeight, VALS_PER_VERT_HEIGHT, GL_FLOAT, GL_FALSE, 0, 0);

    // Do the same for the vertex colours
    int colourLocHeight = glGetAttribLocation(id, "a_colour");
    glBindBuffer(GL_ARRAY_BUFFER, bufferHeight[1]);
    glEnableVertexAttribArray(colourLocHeight);
    glVertexAttribPointer(colourLocHeight, VALS_PER_COLOUR_HEIGHT, GL_FLOAT, GL_FALSE, 0, 0);
    // heightmap stuff ##################################################

    // An argument of zero un-binds all VAO's and stops us
    // from accidentally changing the VAO state
    glBindVertexArray(0);

    // The same is true for buffers, so we un-bind it too
    glBindBuffer(GL_ARRAY_BUFFER, 0);

    return 0; // return success
}
/**
* Renders a frame of the state and shaders we have set up to the window
* Executed each time a frame is to be drawn.
*/
void render()
{
    // Clear the previous pixels we have drawn to the colour buffer (display buffer)
    // Called each frame so we don't draw over the top of everything previous
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glUseProgram(programID);

    // HEIGHT MAP STUFF ###################################
    // Make the VAO with our vertex data buffer current
    glBindVertexArray(vertexVaoHandleHeight);
    // Send command to GPU to draw the data in the current VAO as triangles
    // CRASHES HERE
    glDrawArrays(GL_TRIANGLES, 0, HEIGHT_VERTS);
    glBindVertexArray(0); // Un-bind the VAO
    // HEIGHT MAP STUFF ###################################

    glutSwapBuffers(); // Swap the back buffer with the front buffer, showing what has been rendered
    glFlush(); // Guarantees previous commands have been completed before continuing
}
You are messing things up.
1) With glVertexAttribPointer() you set up vertex attributes, and those are almost always vectors of some kind. For vertex position, if you need to draw the scene in 2D, pass size = 2 (because each vertex has x and y coordinates); for 3D, pass 3 (x, y, z).
2) I think your interpretation of the heightmap is also incorrect. You filled the array only with height values (in 3D space, those are the Y coordinates). But where are X and Z? You need to render vertices, so you need to pass all x, y and z coordinates, so OpenGL can know where each point should be rendered.
Your program crashes because you send too little data and OpenGL tries to read from memory that doesn't belong to you.
I assume that you want a heightmap which is a 5x5 grid? Initialize the data this way:
float heightmapVerts[25] =
{
    //leave this as it is right now
};

vec3 vertices[5][5];
for (int z_num = 0; z_num < 5; ++z_num)
{
    for (int x_num = 0; x_num < 5; ++x_num)
    {
        vertices[z_num][x_num].x = x_num * 0.5f;
        vertices[z_num][x_num].z = z_num * 0.5f;
        vertices[z_num][x_num].y = heightmapVerts[z_num * 5 + x_num];
    }
}
Then you can call:
glVertexAttribPointer(vertLocHeight, 3, GL_FLOAT, GL_FALSE, 0, 0);
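
For completeness, the rebuilt grid would then replace the original upload (a sketch, assuming vec3 is a tightly packed struct of three floats such as glm::vec3, so sizeof(vertices) is 25 * 3 * sizeof(float)):

glBindBuffer(GL_ARRAY_BUFFER, bufferHeight[0]);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
// Note: the count passed to glDrawArrays must then be the number of
// vertices actually in the buffer (25 here), not HEIGHT_VERTS (5).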
Update:
vec3 stands for a 3-dimensional vector. I wrote it as pseudocode to
illustrate the concept, but for the sake of simplicity, you may want to
use this great library: OpenGL Mathematics.
Update 2:
One more thing: the color data is also set improperly. You probably want
the color to be an RGB value, so every vertex needs 3 additional floats to
represent its color. Also, position and color should be placed in a
single VBO; there is no need to separate them. See the sketch below.
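
An interleaved single-VBO layout, as suggested above, could look like this sketch (assuming 3 position floats followed by 3 color floats per vertex, with vertLocHeight and colourLocHeight as in the question):

// Interleaved layout: x, y, z, r, g, b per vertex -> stride is 6 floats.
const GLsizei stride = 6 * sizeof(float);
glBindBuffer(GL_ARRAY_BUFFER, bufferHeight[0]);
glEnableVertexAttribArray(vertLocHeight);
glVertexAttribPointer(vertLocHeight, 3, GL_FLOAT, GL_FALSE, stride, (void*)0);
glEnableVertexAttribArray(colourLocHeight);
glVertexAttribPointer(colourLocHeight, 3, GL_FLOAT, GL_FALSE, stride,
                      (void*)(3 * sizeof(float))); // color starts after the position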
I am not sure if you have all the basics required to do even simple
drawing. You may want to read these articles:
OpenGL Wiki
This nice tutorial
Lighthouse 3D

Display latency using OpenGL Quad Buffer with nvidia stereoscopic 3D

I need to achieve real-time performance (60 fps) with my stereoscopic 3D video-rendering application, written in C++ using the OpenGL quad buffer. The application runs on Xubuntu 14.04 LTS.
My hardware setup is composed of two GoPro cameras, an NVIDIA Quadro K4000 card and a 120 Hz display.
I'm experiencing image latency when watching the displayed stereo video.
There is no significant delay when displaying from only one camera in a mono setup, without using the quad-buffer functionality.
When measuring the execution times of the OpenGL display function, it takes less than 10 ms on most cycles, but there are spikes of more than 60 ms that are probably the origin of the latency.
I've tested both with and without the sync-to-vblank option on the graphics card, and the difference is very small, resulting in the same delayed image on the screen.
The code below is part of a more complex application, so I hope the parts I provide at least shed a light on what I am trying to accomplish.
EDIT 1: This is the initialization code, outside the display function.
createTexture(&textureID, width, height);
createPBO(&pbo, width, height);

// map OpenGL buffer object for writing from CUDA
cudaGLMapBufferObject((void **)&Img, pbo);
// Unmap Buffers
cudaGLUnmapBufferObject(pbo);

// Paths for the stereo shaders' files
const std::string vertex_file_path = "VertexShader_stereo.vertexshader";
const std::string fragment_file_path = "FragmentShader_stereo.fragmentshader";

GLfloat vertices [] = {
    // Position (2 elements)   Texcoords (2 elements)
    -1.0f,  1.0f,              0.0f, 0.0f, // Top-left
     1.0f,  1.0f,              1.0f, 0.0f, // Top-right
     1.0f, -1.0f,              1.0f, 1.0f, // Bottom-right
    -1.0f, -1.0f,              0.0f, 1.0f  // Bottom-left
};
GLushort elements [] = {
    0, 1, 2, // indices of the first triangle
    2, 3, 0  // indices of the second triangle
};

// Create a pixel buffer object for the right view
createPBO(&pbo_Right, width, height);
// map OpenGL buffer object for writing from CUDA
cudaGLMapBufferObject((void **)&Img_Right, pbo_Right);
// Unmap Buffers
cudaGLUnmapBufferObject(pbo_Right);

// Create a vertex buffer object
createVBO(&vbo, vertices, sizeof(vertices));
// Create an element buffer object
createEBO(&ebo, elements, sizeof(elements));

// Create shader program from the shaders
shaderProgram = LoadShader(vertex_file_path, fragment_file_path);
posAttrib = glGetAttribLocation(shaderProgram, "position");
texAttrib = glGetAttribLocation(shaderProgram, "texcoord");
}
Edit 2: The createPBO function goes like this:
void createPBO(GLuint* pbo_, unsigned int w, unsigned int h) {
    // set up vertex data parameter
    int num_texels = w * h;
    int num_values = num_texels * 4;
    int size_tex_data = sizeof(GLubyte) * num_values;

    // Generate a buffer ID called a PBO (Pixel Buffer Object)
    glGenBuffers(1, pbo_);
    // Make this the current UNPACK buffer (OpenGL is state-based)
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, *pbo_);
    // Allocate data for the buffer. 4-channel 8-bit image
    glBufferData(GL_PIXEL_UNPACK_BUFFER, size_tex_data, NULL, GL_DYNAMIC_COPY);
    cudaGLRegisterBufferObject(*pbo_);
}
Below is the part of my display function related to the OpenGL environment. I am using several glFinish() calls as an experiment, since they seem to improve the execution times.
// Make this the current array buffer
glBindBuffer(GL_ARRAY_BUFFER, vbo);
// Make this the current element array buffer
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
// Activate shader program
glUseProgram(shaderProgram);

// Specify the layout of the vertex data
glEnableVertexAttribArray(posAttrib);
glVertexAttribPointer(posAttrib,           // index of attribute in array
                      2,                   // size
                      GL_FLOAT,            // type
                      GL_FALSE,            // normalized
                      4 * sizeof(GLfloat), // stride
                      0);                  // offset in the array
glEnableVertexAttribArray(texAttrib);
glVertexAttribPointer(texAttrib,           // index of attribute in array
                      2,                   // size
                      GL_FLOAT,            // type
                      GL_FALSE,            // normalized
                      4 * sizeof(GLfloat), // stride
                      (void*)(2 * sizeof(GLfloat))); // offset in the array
glFinish();

// Specifies the target to which the buffer object is bound. GL_PIXEL_UNPACK_BUFFER affects the glTexSubImage2D command.
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
// Draw Left Back Buffer
glDrawBuffer(GL_BACK_LEFT);
// Specifies the target (GL_TEXTURE_2D) to which the texture is bound
glBindTexture(GL_TEXTURE_2D, textureID);
// Note: NULL indicates the data resides in device memory
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
// Draw a rectangle from the 2 triangles using 6 indices
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, 0);

// Specifies the target to which the buffer object is bound. GL_PIXEL_UNPACK_BUFFER affects the glTexSubImage2D command.
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo_Right);
// Draw Right Back Buffer
glDrawBuffer(GL_BACK_RIGHT);
// Note: NULL indicates the data resides in device memory
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
// Draw a rectangle from the 2 triangles using 6 indices
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, 0);
glFinish();

// Disable and unbind objects
glDisableVertexAttribArray(posAttrib);
glDisableVertexAttribArray(texAttrib);
glUseProgram(0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);

// Swap buffers
glutSwapBuffers();
glutPostRedisplay();
Is it possible to make some optimizations, or is this a known problem in stereoscopic 3D?
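
One note on the measurement methodology: timing the display function on the CPU mostly measures command submission plus the stalls introduced by glFinish(), not when the GPU actually finishes the work. A minimal sketch of GPU-side timing with an OpenGL timer query (core since OpenGL 3.3; query creation shown inline for brevity):

GLuint timerQuery;
glGenQueries(1, &timerQuery);

glBeginQuery(GL_TIME_ELAPSED, timerQuery);
// ... the glTexSubImage2D / glDrawElements calls from the display function ...
glEndQuery(GL_TIME_ELAPSED);

// Reading the result here forces a sync; in production code you would poll
// GL_QUERY_RESULT_AVAILABLE or fetch the result a frame later.
GLuint64 elapsedNs = 0;
glGetQueryObjectui64v(timerQuery, GL_QUERY_RESULT, &elapsedNs);
printf("GPU time: %.3f ms\n", elapsedNs / 1.0e6);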

OpenGL vertex array not rendering

I'm trying to draw some basic triangles using OpenGL, but nothing renders on screen. These are the relevant functions:
glewInit();
glClearColor(0.0, 0.0, 0.0, 1.0);
glFrontFace(GL_CW);
glCullFace(GL_BACK);
glEnable(GL_CULL_FACE);
glEnable(GL_DEPTH_TEST);

Vertex vertices[] = { Vertex(Vector3f(0.0, 1.0, 0.0)),
                      Vertex(Vector3f(-1.0, -1.0, 0.0)),
                      Vertex(Vector3f(1.0, -1.0, 0.0)) };
mesh.addVertices(vertices, 3);
Pastebin links to Vertex.hpp and Vector3f.hpp:
Vertex.hpp
Vector3f.hpp
/*
 * Mesh.cpp:
 */
Mesh::Mesh()
{
    glGenBuffers(1, &m_vbo); // unsigned int Mesh::m_vbo
}

void Mesh::addVertices(Vertex vertices[4], int indexSize)
{
    m_size = indexSize * 3;
    glBindBuffer(GL_ARRAY_BUFFER, m_vbo);
    glBufferData(GL_ARRAY_BUFFER, m_size, vertices, GL_STATIC_DRAW);
}

void Mesh::draw()
{
    glEnableVertexAttribArray(0);
    glBindBuffer(GL_ARRAY_BUFFER, m_vbo);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 4 * sizeof(Vertex), 0);
    glDrawArrays(GL_TRIANGLES, 0, m_size);
    glDisableVertexAttribArray(0);
}
The screen is just black if I call glClear, and otherwise just the random noise of an uncleared default window. I can make it draw a triangle using the most primitive method:
glBegin(GL_TRIANGLES);
    glColor3f(0.4, 0.0, 0.0);
    glVertex2d(0.0, 0.5);
    glVertex2d(-0.5, -0.5);
    glVertex2d(0.5, -0.5);
glEnd();
That works and correctly displays what it should, so I guess that at least says my application is not 100% busted. The tutorial I'm following is in Java, and I'm translating it to C++ with SFML as I go along, so I guess it's possible that something got lost in translation, so to speak, unless I'm just missing something really basic (more likely).
How do we fix this so it uses the vertex list to draw the triangle like it's supposed to?
So many mistakes. There are truly a lot of examples, in any language, so why?
const float pi = 3.141592653589793; is a member field of Vector3f. Do you realise this is a non-static member and it is included in each and every Vector3f you use, so your vectors actually have four elements: x, y, z, and pi? Did you inform GL about it, so it could skip this garbage data? I don't think so.
You are using glVertexAttribPointer, but don't have an active shader. There is no guarantee that position is in slot 0. Either use glVertexPointer, or use a shader with the position attribute bound to 0.
void Mesh::addVertices(Vertex vertices[4], int indexSize): what is the [4] supposed to mean here? While it is not an error, it is at least misleading.
glBufferData(GL_ARRAY_BUFFER, m_size, vertices, GL_STATIC_DRAW); m_size is 3*3 in your example, while the documentation says it should be the array size in bytes, which is sizeof(Vertex) * indexSize.
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 4 * sizeof(Vertex), 0); why is the stride parameter 4*sizeof(Vertex)? Either set it to 0 or use the correct stride, which is sizeof(Vertex).
glDrawArrays(GL_TRIANGLES, 0, m_size); m_size is already [incorrectly] set as the "vertex buffer size", while DrawArrays expects the number of vertices to draw, which is m_size / sizeof(Vertex) (given m_size is calculated correctly). A corrected sketch follows.
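
Putting those fixes together, the Mesh methods might look like this sketch (assuming Vertex ends up as exactly three tightly packed floats once the stray pi member is removed or made static, and a shader with its position attribute bound to location 0 is active when drawing):

void Mesh::addVertices(Vertex* vertices, int vertexCount)
{
    m_size = vertexCount;                      // store the vertex COUNT
    glBindBuffer(GL_ARRAY_BUFFER, m_vbo);
    glBufferData(GL_ARRAY_BUFFER,
                 vertexCount * sizeof(Vertex), // size in BYTES
                 vertices, GL_STATIC_DRAW);
}

void Mesh::draw()
{
    glEnableVertexAttribArray(0);
    glBindBuffer(GL_ARRAY_BUFFER, m_vbo);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE,
                          sizeof(Vertex), 0);  // stride of one whole Vertex
    glDrawArrays(GL_TRIANGLES, 0, m_size);     // count = number of vertices
    glDisableVertexAttribArray(0);
}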

OpenGL: Is enabling all vertex attributes needed?

I read this tutorial (Arcsynthesis, vertex attributes, chapter 2, "Playing with Colors") and decided to play around with the code. To sum up, the tutorial explains how to pass vertex colors and coordinates to the vertex and fragment shaders to put some color on a triangle.
Here is the code of the display function (from the tutorial, with some changes) that works as expected:
void display()
{
    // cleaning screen
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glClear(GL_COLOR_BUFFER_BIT);

    // using expected program and binding to expected buffer
    glUseProgram(theProgram);
    glBindBuffer(GL_ARRAY_BUFFER, vertexBufferObject);
    glDisableVertexAttribArray(0);
    glDisableVertexAttribArray(1);

    // setting data to attrib 0
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, 0);
    //glDisableVertexAttribArray(0); // if uncommenting this, it does not work anymore

    // setting data to attrib 1
    glEnableVertexAttribArray(1);
    glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, 0, (void*)48);

    // cleaning and rendering
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glDisableVertexAttribArray(0);
    glDisableVertexAttribArray(1);
    glUseProgram(0);
    glfwSwapBuffers();
}
Now if I uncomment the line
//glDisableVertexAttribArray(0);
before setting the data for attribute 1, it does not work anymore. Why is that? Also, I don't get why attribute 0 can be set without 1 enabled but not the other way around. By the way, what is the point of enabling/disabling vertex attributes? I mean, you (at least I) will probably end up enabling all vertex attributes, so why are they off by default?
It's at the glDrawArrays call that the currently enabled attributes are read and passed to the renderer. If you disable them before that call, they won't get passed from your buffer.
There can be a lot of potential attributes available (query the limit with glGetIntegerv(GL_MAX_VERTEX_ATTRIBS, &result); it is at least 16), and most applications don't need that many, which is why they are off by default.
The position attribute in the shader is at index 0, and if it doesn't get assigned, the shader gets all points with the same location (typically 0,0,0,1). Index 1, on the other hand, is the color data, and if that is missing it's not a big deal.
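
For completeness, a minimal sketch of that query (the GL context must already be current when it runs):

// Ask the driver how many generic vertex attribute slots are available.
GLint maxAttribs = 0;
glGetIntegerv(GL_MAX_VERTEX_ATTRIBS, &maxAttribs); // the spec guarantees >= 16
printf("GL_MAX_VERTEX_ATTRIBS = %d\n", maxAttribs);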