Relationship of offset in glMapBufferRange(...) and first in glDrawArraysInstanced(...) - c++

I'm struggling to understand the relationship between the offset variables in the two functions, and how the offset value affects the gl_VertexID and gl_InstanceID variables in the shader.
From reading the documentation, I think glMapBufferRange expects offset to be the number of bytes from the start of the buffer, whereas glDrawArraysInstanced expects first to be the number of strides as specified by glVertexAttribPointer.
However, that doesn't seem to be the case: the code below doesn't work if offsetVerts has a value other than 0. For 0 it renders 3 squares on the screen, as I expected.
The other possible error source would be the value of gl_VertexID. I'd expect it to be 0,1,2,3 for the 4 vertex shader calls per instance, regardless of the offset value.
Just to make sure, I also tried using a first value that is a multiple of 4, with vertices[int(mod(gl_VertexID,4))] for the position lookup, without success.
How can I alter the code to make it work with offsets other than 0?
glGetError() calls are omitted here to shorten the code; it returns 0 throughout. GL version is 3.3.
Init code:
GLuint buff_id, v_id;
GLint bytesPerVertex = 2*sizeof(GLfloat); //8
glGenBuffers( 1, &buff_id );
glBindBuffer( GL_ARRAY_BUFFER, buff_id );
glGenVertexArrays( 1, &v_id );
glBufferData( GL_ARRAY_BUFFER, 1024, NULL, GL_STREAM_DRAW );
glBindVertexArray( v_id );
glEnableVertexAttribArray( posLoc );
glVertexAttribPointer( posLoc, 2, GL_FLOAT, GL_FALSE, bytesPerVertex, (void *)0 );
glVertexAttribDivisor( posLoc, 1 );
glBindVertexArray( 0 );
glBindBuffer( GL_ARRAY_BUFFER, 0 );
float *data_ptr = nullptr;
int numVerts = 3;
int offsetVerts = 0;
Render code:
glBindBuffer( GL_ARRAY_BUFFER, buff_id );
data_ptr = (float *)glMapBufferRange( GL_ARRAY_BUFFER,
bytesPerVertex * offsetVerts,
bytesPerVertex * numVerts,
GL_MAP_WRITE_BIT );
data_ptr[0] = 50;
data_ptr[1] = 50;
data_ptr[2] = 150;
data_ptr[3] = 50;
data_ptr[4] = 250;
data_ptr[5] = 50;
glUnmapBuffer( GL_ARRAY_BUFFER );
glBindBuffer( GL_ARRAY_BUFFER, 0 );
glBindVertexArray( v_id );
glDrawArraysInstanced( GL_TRIANGLE_STRIP, offsetVerts, 4, 3 );
glBindVertexArray( 0 );
Vertex shader:
#version 330
uniform mat4 proj;
in vec2 pos;
void main() {
vec2 vertices[4]= vec2[4](
vec2(pos.x, pos.y),
vec2(pos.x + 10.0f, pos.y),
vec2(pos.x, pos.y + 10.0f ),
vec2(pos.x + 10.0f, pos.y + 10.0f )
);
gl_Position = proj * vec4(vertices[gl_VertexID], 1, 1);
}
Fragment shader:
#version 330
out vec4 LFragment;
void main() {
LFragment = vec4( 1.0f, 1.0f, 1.0f, 1.0f );
}

The other possible error source would be the value of gl_VertexID. I'd expect it to be 0,1,2,3 for the 4 vertex shader calls per instance, regardless of the offset value.
There is no offset value in glDrawArrays*.
The base function for this is glDrawArrays(type, first, count), and it simply generates primitives from a consecutive sub-array of the specified vertex attribute arrays, from index first to first+count-1. Hence, gl_VertexID will be in the range [first, first+count-1].
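For example, a hypothetical call
glDrawArrays( GL_TRIANGLES, 3, 6 ); // first = 3, count = 6
runs the vertex shader for vertices 3..8, so gl_VertexID takes the values 3,4,5,6,7,8.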
You are actually not using any per-vertex attribute array here: you turned your attribute into a per-instance attribute, and the first parameter will not introduce an offset into per-instance attributes. You can either adjust your attribute pointer to include the offset, or use glDrawArraysInstancedBaseInstance to specify the offset you need.
Note that gl_InstanceID will not reflect the base instance you set there; it will still count from 0 relative to the beginning of the draw call. But the actual instance values fetched from the array will use the offset.
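A minimal sketch of both approaches, reusing the questioner's names (buff_id, v_id, posLoc, bytesPerVertex, offsetVerts). Note that glDrawArraysInstancedBaseInstance requires GL 4.2 or ARB_base_instance, so on the GL 3.3 context from the question only the first option is guaranteed to be available:
// Option 1: bake the byte offset into the attribute pointer (works on GL 3.3).
glBindVertexArray( v_id );
glBindBuffer( GL_ARRAY_BUFFER, buff_id );
glVertexAttribPointer( posLoc, 2, GL_FLOAT, GL_FALSE, bytesPerVertex,
    (void *)(bytesPerVertex * offsetVerts) ); // offset in bytes, not in strides
glDrawArraysInstanced( GL_TRIANGLE_STRIP, 0, 4, 3 ); // first stays 0
// Option 2: keep the pointer at offset 0 and pass a base instance instead
// (GL 4.2+ or ARB_base_instance); the divisor-1 attribute is then fetched
// starting at element offsetVerts.
glDrawArraysInstancedBaseInstance( GL_TRIANGLE_STRIP, 0, 4, 3, offsetVerts );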

Related

GLSL: Rendering a 2D texture

I was following LazyFoo's tutorial on GLSL 2D texturing (http://lazyfoo.net/tutorials/OpenGL/34_glsl_texturing/index.php), and I was able to get most parts working.
However, the program renders the texture zoomed in very close. Is this an issue with the vertex data, or with the texture lookup? Below is the vertex shader I was using in my implementation:
texCoord = LTexCoord;
gl_Position = gl_ProjectionMatrix * gl_ModelViewMatrix * vec4( LVertexPos2D.x, LVertexPos2D.y, 0.0, 1.0 );
And below is the fragment shader I was using:
gl_FragColor = texture( textureID, texCoord );
As for the render function, I deviate from the tutorial by using OpenGL's fixed-pipeline matrices (so I don't need to update matrices):
//If the texture exists
if( mTextureID != 0 )
{
//Texture coordinates
GLfloat texTop = 0.f;
GLfloat texBottom = (GLfloat)mImageHeight / (GLfloat)mTextureHeight;
GLfloat texLeft = 0.f;
GLfloat texRight = (GLfloat)mImageWidth / (GLfloat)mTextureWidth;
//Vertex coordinates
GLfloat quadWidth = mImageWidth;
GLfloat quadHeight = mImageHeight;
//Set vertex data
LVertexData2D vData[ 4 ];
//Texture coordinates
vData[ 0 ].texCoord.s = texLeft; vData[ 0 ].texCoord.t = texTop;
vData[ 1 ].texCoord.s = texRight; vData[ 1 ].texCoord.t = texTop;
vData[ 2 ].texCoord.s = texRight; vData[ 2 ].texCoord.t = texBottom;
vData[ 3 ].texCoord.s = texLeft; vData[ 3 ].texCoord.t = texBottom;
//Vertex positions
vData[ 0 ].position.x = 0.f; vData[ 0 ].position.y = 0.f;
vData[ 1 ].position.x = quadWidth; vData[ 1 ].position.y = 0.f;
vData[ 2 ].position.x = quadWidth; vData[ 2 ].position.y = quadHeight;
vData[ 3 ].position.x = 0.f; vData[ 3 ].position.y = quadHeight;
glEnable(GL_TEXTURE_2D);
glBindTexture( GL_TEXTURE_2D, mTextureID );
glContext.textureShader->bind();
glContext.textureShader->setTextureID( mTextureID );
glContext.textureShader->enableVertexPointer();
glContext.textureShader->enableTexCoordPointer();
glBindBuffer( GL_ARRAY_BUFFER, mVBOID );
glBufferSubData( GL_ARRAY_BUFFER, 0, 4 * sizeof(LVertexData2D), vData );
glContext.textureShader->setTexCoordPointer( sizeof(LVertexData2D), (GLvoid*)offsetof( LVertexData2D, texCoord ) );
glContext.textureShader->setVertexPointer( sizeof(LVertexData2D), (GLvoid*)offsetof( LVertexData2D, position ) );
glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, mIBOID );
glDrawElements( GL_TRIANGLE_FAN, 4, GL_UNSIGNED_INT, NULL );
glContext.textureShader->disableVertexPointer();
glContext.textureShader->disableTexCoordPointer();
glContext.textureShader->unbind();
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
glBindTexture( GL_TEXTURE_2D, NULL );
glDisable(GL_TEXTURE_2D); // disable texture 2d
}
}
In response to Koradi, the vertex and texture coordinates are set up as shown below:
void TextureShader::setVertexPointer( GLsizei stride, const GLvoid* data )
{
glVertexAttribPointer( mVertexPosLocation, 2, GL_FLOAT, GL_FALSE, stride, data );
}
void TextureShader::setTexCoordPointer( GLsizei stride, const GLvoid* data )
{
glVertexAttribPointer( mTexCoordLocation, 2, GL_FLOAT, GL_FALSE, stride, data );
}
It is rendered in the main loop with the following code:
glPushMatrix();
glTranslatef( glContext.gFBOTexture->imageWidth() / -2.f, glContext.gFBOTexture->imageHeight() / -2.f, 0.f );
glContext.gFBOTexture->render();
glPopMatrix();
Is there something obvious that I am overlooking? I am new to GLSL.
Edit: Added more code
After mulling over it for a few days, I found the issue was with how to send sampler2D uniforms into GLSL:
glBindTexture( GL_TEXTURE_2D, mTextureID );
glContext.textureShader->bind();
glContext.textureShader->setTextureID( mTextureID );
was corrected to:
glBindTexture( GL_TEXTURE_2D, mTextureID );
glContext.textureShader->bind();
glContext.textureShader->setTextureID( 0 );
setTextureID() sets the sampler2D uniform variable. Once the texture is bound, the sampler2D uniform should be set to the texture unit index (0 for GL_TEXTURE0), not the texture name.
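A minimal sketch of the corrected order, assuming setTextureID() just wraps glUniform1i() on the shader's sampler2D location:
glActiveTexture( GL_TEXTURE0 );             // select texture unit 0 (the default)
glBindTexture( GL_TEXTURE_2D, mTextureID ); // bind the texture to that unit
glContext.textureShader->bind();
glContext.textureShader->setTextureID( 0 ); // the sampler gets the unit index, not the texture name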

OpenGL drawing meshes incorrectly

I'm attempting to make an OpenGL engine in C++, but cannot render meshes correctly. Meshes, when rendered, create faces that connect two random points on the mesh, or a random point on the mesh with the origin (0,0,0).
The problem can be seen here:
(I made it a wireframe to see the problem more clearly)
Code:
// Render all meshes (Graphics.cpp)
for( int curMesh = 0; curMesh < numMesh; curMesh++ ) {
// Save pointer of buffer
meshes[curMesh]->updatebuf();
Buffer buffer = meshes[curMesh]->buffer;
// Update model matrix
glm::mat4 mvp = Proj*View*(meshes[curMesh]->model);
// Initialize vertex array
glBindBuffer( GL_ARRAY_BUFFER, vertbuffer );
glBufferData( GL_ARRAY_BUFFER, sizeof(GLfloat)*buffer.numcoords*3, meshes[curMesh]->verts, GL_STATIC_DRAW );
// Pass information to shader
GLuint posID = glGetAttribLocation( shader, "s_vPosition" );
glVertexAttribPointer( posID, 3, GL_FLOAT, GL_FALSE, 0, (void*)0 );
glEnableVertexAttribArray( posID );
// Check if texture applicable
if( meshes[curMesh]->texID != NULL && meshes[curMesh]->uvs != NULL ) {
// Initialize uv array
glBindBuffer( GL_ARRAY_BUFFER, uvbuffer );
glBufferData( GL_ARRAY_BUFFER, sizeof(GLfloat)*buffer.numcoords*2, meshes[curMesh]->uvs, GL_STATIC_DRAW );
// Pass information to shader
GLuint uvID = glGetAttribLocation( shader, "s_vUV" );
glVertexAttribPointer( uvID, 2, GL_FLOAT, GL_FALSE, 0, (void*)(0) );
glEnableVertexAttribArray( uvID );
// Set mesh texture
glActiveTexture( GL_TEXTURE0 );
glBindTexture( GL_TEXTURE_2D, meshes[curMesh]->texID );
GLuint texID = glGetUniformLocation( shader, "Sampler" );
glUniform1i( texID, 0 );
}
// Activate shader
glUseProgram( shader );
// Set MVP matrix
GLuint mvpID = glGetUniformLocation( shader, "MVP" );
glUniformMatrix4fv( mvpID, 1, GL_FALSE, &mvp[0][0] );
// Draw vertices on screen
bool wireframe = true;
if( wireframe )
for(int i = 0; i < buffer.numcoords; i += 3)
glDrawArrays(GL_LINE_LOOP, i, 3);
else
glDrawArrays( GL_TRIANGLES, 0, buffer.numcoords );
}
// Mesh Class (Graphics.h)
class mesh {
public:
mesh();
void updatebuf();
Buffer buffer;
GLuint texID;
bool updated;
GLfloat* verts;
GLfloat* uvs;
glm::mat4 model;
};
My Obj loading code is here: https://www.dropbox.com/s/tdcpg4vok11lf9d/ObjReader.txt (It's pretty crude and isn't organized, but should still work)
This looks like a primitive restart issue to me. It's hard to tell what exactly the problem is without seeing some code. It would help a lot to see roughly 20 lines above and below the drawing calls that render the teapot, i.e. the 20 lines before the corresponding glDrawArrays, glDrawElements or glBegin call and the 20 lines after.
Subtract 1 from the indices before you use them: OBJ indices are 1-based, and you will almost certainly need 0-based indices.
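A sketch of that fix, where objFaceIndices is a hypothetical name for the raw indices parsed from the OBJ file:
#include <vector>
// Convert OBJ's 1-based face indices to the 0-based indices OpenGL expects.
std::vector<GLuint> toZeroBased( const std::vector<GLuint> &objFaceIndices )
{
    std::vector<GLuint> indices;
    indices.reserve( objFaceIndices.size() );
    for ( GLuint idx : objFaceIndices )
        indices.push_back( idx - 1 ); // 1..N in the file -> 0..N-1 for GL
    return indices;
}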
This is because your triangles are not connected, which is why the wireframe doesn't look right. If the triangles are not connected, you should construct an index buffer.
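As a sketch, sharing vertices through an element array buffer would look like this (indices being the 0-based list built above):
GLuint ibo;
glGenBuffers( 1, &ibo );
glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, ibo );
glBufferData( GL_ELEMENT_ARRAY_BUFFER, indices.size() * sizeof( GLuint ),
    indices.data(), GL_STATIC_DRAW );
// each shared vertex is now referenced by index instead of being duplicated
glDrawElements( GL_TRIANGLES, (GLsizei)indices.size(), GL_UNSIGNED_INT, (void *)0 );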

Drawing With a Shader Storage Object Not Working

With all of my objects that are to be rendered, I use glDrawElements. However, my venture into compute shaders has left me with a setup that uses glDrawArrays. As with many who are broaching the topic, I used this PDF as a basis. The problem is that when it is rendered, nothing appears.
#include "LogoTail.h"
LogoTail::LogoTail(int tag1) {
tag = tag1;
needLoad = false;
shader = LoadShaders("vertex-shader[LogoTail].txt","fragment-shader[LogoTail].txt");
shaderCompute = LoadShaders("compute-shader[LogoTail].txt");
for( int i = 0; i < NUM_PARTICLES; i++ )
{
points[ i ].x = 0.0f;
points[ i ].y = 0.0f;
points[ i ].z = 0.0f;
points[ i ].w = 1.0f;
}
glGenBuffers( 1, &posSSbo);
glBindBuffer( GL_SHADER_STORAGE_BUFFER, posSSbo );
glBufferData( GL_SHADER_STORAGE_BUFFER, sizeof(points), points, GL_STATIC_DRAW );
glUnmapBuffer(GL_SHADER_STORAGE_BUFFER);
for( int i = 0; i < NUM_PARTICLES; i++ )
{
times[ i ].x = 0.0f;
}
glGenBuffers( 1, &birthSSbo);
glBindBuffer( GL_SHADER_STORAGE_BUFFER, birthSSbo );
glBufferData( GL_SHADER_STORAGE_BUFFER, sizeof(times), times, GL_STATIC_DRAW );
glUnmapBuffer(GL_SHADER_STORAGE_BUFFER);
for( int i = 0; i < NUM_PARTICLES; i++ )
{
vels[ i ].vx = 0.0f;
vels[ i ].vy = 0.0f;
vels[ i ].vz = 0.0f;
vels[ i ].vw = 0.0f;
}
glGenBuffers( 1, &velSSbo );
glBindBuffer( GL_SHADER_STORAGE_BUFFER, velSSbo );
glBufferData( GL_SHADER_STORAGE_BUFFER, sizeof(vels), vels, GL_STATIC_DRAW );
glUnmapBuffer(GL_SHADER_STORAGE_BUFFER);
}
void LogoTail::Update(const double dt, float sunTime,glm::vec3 sunN) {
position=glm::translate(glm::mat4(), glm::vec3(4.5f,0,0));
}
void LogoTail::Draw(shading::Camera& camera){
shaderCompute->use();
glBindBufferBase( GL_SHADER_STORAGE_BUFFER, 4, posSSbo );
glBindBufferBase( GL_SHADER_STORAGE_BUFFER, 5, velSSbo );
glBindBufferBase( GL_SHADER_STORAGE_BUFFER, 6, birthSSbo );
glDispatchCompute( NUM_PARTICLES / WORK_GROUP_SIZE, 1, 1 );
glMemoryBarrier( GL_VERTEX_ATTRIB_ARRAY_BARRIER_BIT );
shaderCompute->stopUsing();
shader->use();
shader->setUniform("camera", camera.matrix());
shader->setUniform("model",position);
glBindBuffer( GL_ARRAY_BUFFER, posSSbo );
glVertexPointer( 4, GL_FLOAT, 0, (void *)0 );
glEnableClientState( GL_VERTEX_ARRAY );
glDrawArrays( GL_POINTS, 0, NUM_PARTICLES );
glDisableClientState( GL_VERTEX_ARRAY );
glBindBuffer( GL_ARRAY_BUFFER, 0 );
shader->stopUsing();
}
The header contains the needed structures and other variables so they do not fall out of scope for the specific object.
Here is the compute shader itself.
#version 430 core
#extension GL_ARB_compute_shader : enable
#extension GL_ARB_shader_storage_buffer_object : enable
layout( std140, binding=4 ) buffer Pos
{
vec4 Positions[ ]; // array of vec4 structures
};
layout( std140, binding=5 ) buffer Vel
{
vec4 Velocities[ ]; // array of vec4 structures
};
layout( std140, binding=6 ) buffer Tim
{
float BirthTimes[ ]; // array of structures
};
layout( local_size_x = 128, local_size_y = 1, local_size_z = 1 ) in;
const vec3 G = vec3( 0., -0.2, 0. );
const float DT = 0.016666;
void main() {
uint gid = gl_GlobalInvocationID.x; // the .y and .z are both 1
vec3 p = Positions[ gid ].xyz;
vec3 v = Velocities[ gid ].xyz;
vec3 pp = p + v*DT + .5*DT*DT*G;
vec3 vp = v + G*DT;
Positions[ gid ].xyz = pp;
Velocities[ gid ].xyz = vp;
}
For testing purposes I lowered the gravity.
I believe that nothing is out of scope, nor is there a missing bind, yet it eludes me why the particles are not drawing.
In addition, I also added a geometry shader that constructs a quad around each point but it did not solve anything.
These last lines seem problematic to me:
glBindBuffer( GL_ARRAY_BUFFER, posSSbo );
glVertexPointer( 4, GL_FLOAT, 0, (void *)0 );
glEnableClientState( GL_VERTEX_ARRAY );
glDrawArrays( GL_POINTS, 0, NUM_PARTICLES );
glDisableClientState( GL_VERTEX_ARRAY );
glBindBuffer( GL_ARRAY_BUFFER, 0 );
My guess is you are trying to use the old way of doing things in a programmable pipeline. I am not sure how it is stated in the OpenGL specs, but it seems that in the newer versions (GL 4.2) you are forced to bind your vertex buffers to a VAO (maybe that is a vendor-specific rule?). Once I needed to implement OIT and tried Cyril Crassin's demo, which was using buffers with element draws, just like you. I am using GL 4.2 and NVIDIA cards. Nothing was showing up. I then bound the buffers to a VAO and the issue was gone. So that is what I suggest you try.
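A minimal sketch of that suggestion, treating posSSbo as an ordinary vertex buffer attached to a VAO in place of the deprecated glVertexPointer client state (attribute location 0 is an assumption about the vertex shader):
// One-time setup:
GLuint vao;
glGenVertexArrays( 1, &vao );
glBindVertexArray( vao );
glBindBuffer( GL_ARRAY_BUFFER, posSSbo );
glEnableVertexAttribArray( 0 );
glVertexAttribPointer( 0, 4, GL_FLOAT, GL_FALSE, 0, (void *)0 );
glBindVertexArray( 0 );
// In Draw(), replacing the glVertexPointer/glEnableClientState block:
glBindVertexArray( vao );
glDrawArrays( GL_POINTS, 0, NUM_PARTICLES );
glBindVertexArray( 0 );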

Common pitfalls/causes of bad vertex data?

So, I've been working on this .OBJ/.MTL mesh parser for the past week and a half. Within this time I've been tracking down/fixing a lot of bugs, cleaning up code, documenting it, etc etc.
The problem is that, with every bug I fix, there still is this issue which crops up, and since a picture is worth a thousand words...
Using GL_LINE_LOOP
(NOTE: the pyramid on the right tipping outward from the sphere is the problem here)
Using GL_TRIANGLES
What's even more interesting is that this "bad" vertex data appears to move with the camera when floating around the scene...except that it scales and sticks outside of the mesh.
The odd thing here is that while I'm sure the issue has something to do with memory, I've been checking for problems that would show whether or not the parsing algorithm works properly. After some unit tests, it appears to be working fine.
So, I thought it may be a Linux nVidia driver issue. I updated the driver to the next version, restarted, and still no dice.
After some heavy thinking, I've been trying to find errors in the following code.
//! every 3 indices represent a triangle, therefore we'll want to
//! use them to grab their corresponding vertices. Since the cross product
//! of two sides of every triangle (where one side = Vn - Vm, 'n' and 'm' being in the range 1..3)
//! yields the surface normal, we first grab the three vertices and then compute the normal using their differences.
const uInt32 length = mesh->vertices.size();
//! declare a pointer to the vector so we can perform simple
//! memory copies to get the indices for each triangle within the
//! iteration.
GLuint* const pIndexBuf = &mesh->indices[ 0 ];
for ( uInt32 i = 0; i < length; i += 3 )
{
GLuint thisTriIndices[ 3 ];
memcpy( thisTriIndices, pIndexBuf + i, sizeof( GLuint ) * 3 );
vec3 vertexOne = vec3( mesh->vertices[ thisTriIndices[ 0 ] ] );
vec3 vertexTwo = vec3( mesh->vertices[ thisTriIndices[ 1 ] ] );
vec3 vertexThree = vec3( mesh->vertices[ thisTriIndices[ 2 ] ] );
vec3 sideOne = vertexTwo - vertexOne;
vec3 sideTwo = vertexThree - vertexOne;
vec3 surfaceNormal = glm::cross( sideOne, sideTwo );
mesh->normals.push_back( surfaceNormal );
}
The current one shown in the picture doesn't even have normal data, so the idea is to compute surface normals for it, hence the above code. While I've made some checks to see if the index data was being loaded properly within the loop, I haven't been able to find anything yet.
I think the way I'm laying out my memory might have problems too, but I can't quite put my finger on what the problem would be. In case I've missed something, I'll throw in my glVertexAttribPointer calls:
//! Gen some buf handles
glGenBuffers( NUM_BUFFERS_PER_MESH, mesh->buffers );
//! Load the respective buffer data for the mesh
__LoadVec4Buffer( mesh->buffers[ BUFFER_VERTEX ], mesh->vertices ); //! positons
__LoadVec4Buffer( mesh->buffers[ BUFFER_COLOR ], mesh->colors ); //! material colors
__LoadVec3Buffer( mesh->buffers[ BUFFER_NORMAL ], mesh->normals ); //! normals
__LoadIndexBuffer( mesh->buffers[ BUFFER_INDEX ], mesh->indices ); //! indices
//! assign the vertex array a value
glGenVertexArrays( 1, &mesh->vertexArray );
//! Specify the memory layout for each attribute
glBindVertexArray( mesh->vertexArray );
//! Position and color are both stored in BUFFER_VERTEX.
glBindBuffer( GL_ARRAY_BUFFER, mesh->buffers[ BUFFER_VERTEX ] );
glEnableVertexAttribArray( meshProgram->attributes[ "position" ] );
glVertexAttribPointer( meshProgram->attributes[ "position" ], //! index
4, //! num vals
GL_FLOAT, GL_FALSE, //! value type, normalized?
sizeof( vec4 ), //! number of bytes until next value in the buffer
( void* ) 0 ); //! offset of the memory in the buffer
glBindBuffer( GL_ARRAY_BUFFER, mesh->buffers[ BUFFER_COLOR ] );
glEnableVertexAttribArray( meshProgram->attributes[ "color" ] );
glVertexAttribPointer( meshProgram->attributes[ "color" ],
4,
GL_FLOAT, GL_FALSE,
sizeof( vec4 ),
( void* ) 0 );
//! Now we specify the layout for the normals
glBindBuffer( GL_ARRAY_BUFFER, mesh->buffers[ BUFFER_NORMAL ] );
glEnableVertexAttribArray( meshProgram->attributes[ "normal" ] );
glVertexAttribPointer( meshProgram->attributes[ "normal" ],
3,
GL_FLOAT, GL_FALSE,
sizeof( vec3 ),
( void* )0 );
//! Include the index buffer within the vertex array
glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, mesh->buffers[ BUFFER_INDEX ] );
glBindVertexArray( 0 );
Any kind of pointer in the right direction would be appreciated: I have no idea what the common causes of these issues are.
Edit: posted draw code on request
glBindVertexArray( mMeshes[ i ]->vertexArray );
UBO::LoadMatrix4( UBO::MATRIX_MODELVIEW, modelView.top() );
UBO::LoadMatrix4( UBO::MATRIX_PROJECTION, camera.projection() );
glDrawElements( GL_TRIANGLES, mMeshes[ i ]->indices.size(), GL_UNSIGNED_INT, ( void* )0 );
glBindVertexArray( 0 );
I found the final culprit. In conjunction with #radical7's suggestions, the following fixed the issue for the most part.
// round mesh->indices.size() down if it's not already divisible by 3.
// the rounded value is stored in numTris
std::vector< vec4 > newVertices;
uInt32 indicesLen = Math_FloorForMultiple( mesh->indices.size(), 3 );
// declare a pointer to the vector so we can perform simple
// memory copies to get the indices for each triangle within the
// iteration.
newVertices.reserve( indicesLen );
const GLuint* const pIndexBuf = &mesh->indices[ 0 ];
for ( uInt32 i = 0; i < indicesLen; i += 3 )
{
const GLuint* const thisTriIndices = pIndexBuf + i;
vec4 vertexOne = mesh->vertices[ thisTriIndices[ 0 ] - 1 ];
vec4 vertexTwo = mesh->vertices[ thisTriIndices[ 1 ] - 1 ];
vec4 vertexThree = mesh->vertices[ thisTriIndices[ 2 ] - 1 ];
vec4 sideOne = vertexTwo - vertexOne;
vec4 sideTwo = vertexThree - vertexOne;
vec3 surfaceNormal = glm::cross( vec3( sideOne ), vec3( sideTwo ) );
mesh->normals.push_back( surfaceNormal );
mesh->normals.push_back( surfaceNormal + vec3( sideOne ) );
mesh->normals.push_back( surfaceNormal + vec3( sideTwo ) );
newVertices.push_back( vertexOne );
newVertices.push_back( vertexTwo );
newVertices.push_back( vertexThree );
}
mesh->vertices.clear();
mesh->vertices = newVertices;
Note that when the vertices are grabbed in the loop, via the call to mesh->vertices[ thisTriIndices[ x ] - 1 ], the - 1 is extremely important: OBJ mesh files store their face indices as 1..N, as opposed to 0..N-1.
The indices themselves also shouldn't be used to draw the mesh, but rather as a means to obtain a new buffer of vertices from the temporary buffer of parsed vertices: you use each index to look up a vertex in the temporary buffer, and append that vertex to a new buffer. This way you get the vertices duplicated in the correct draw order, so you can draw them with plain vertex arrays instead of indexed drawing.
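With the de-indexed buffer, the draw call becomes a plain glDrawArrays; a sketch, assuming mesh->vertexArray has been rebuilt from newVertices:
glBindVertexArray( mesh->vertexArray );
glDrawArrays( GL_TRIANGLES, 0, (GLsizei)mesh->vertices.size() );
glBindVertexArray( 0 );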

glVertexAttribPointer() working only with the first stream

I am trying to use glVertexAttribPointer() to give some data to my vertex shader. The thing is that it's working only with the FIRST attribute...
Here is my OpenGL code:
struct Flag_vertex
{
GLfloat position_1[ 8 ];
GLfloat position_2[ 8 ];
};
Flag_vertex flag_vertex;
... // fill some data to flag_vertex
GLuint vertexbuffer_id;
glGenBuffers( 1, &vertexbuffer_id );
glBindBuffer( GL_ARRAY_BUFFER, vertexbuffer_id );
glBufferData( GL_ARRAY_BUFFER, sizeof(flag_vertex), &flag_vertex, GL_STATIC_DRAW );
glEnableVertexAttribArray( 0 );
glEnableVertexAttribArray( 1 );
glBindBuffer( GL_ARRAY_BUFFER, vertexbuffer_id );
glVertexAttribPointer( 0, 2, GL_FLOAT, GL_FALSE, 0, (void*)offsetof(Flag_vertex, position_1) );
glVertexAttribPointer( 1, 2, GL_FLOAT, GL_FALSE, 0, (void*)offsetof(Flag_vertex, position_2) );
and my shader is something like:
#version 420 core
layout(location = 0) in vec2 in_position_1;
layout(location = 1) in vec2 in_position_2;
out vec2 texcoord;
void main()
{
gl_Position = vec4(in_position_X, 0.0, 1.0);
texcoord = in_position_X * vec2(0.5) + vec2(0.5);
}
If I use "in_position_1" my texture RENDERS PERFECTLY, but if I use in_position_2 nothing happens...
Tip: before linking my shaders I am doing:
glBindAttribLocation( programID, 0, "in_position_1");
glBindAttribLocation( programID, 1, "in_position_2");
Why does it work only with the first stream? I need more data going into my vertex shader: I need to send color, etc. Any hint?
glVertexAttribPointer( 0, 2, GL_FLOAT, GL_FALSE, 0, (void*)offsetof(Flag_vertex, position_1) );
glVertexAttribPointer( 1, 2, GL_FLOAT, GL_FALSE, 0, (void*)offsetof(Flag_vertex, position_2) );
These lines don't make sense. At least, not with how Flag_vertex is defined. If Flag_vertex is really supposed to be a vertex (and not a quad), then it makes no sense for it to have 8 floats. If each Flag_vertex defines a full quad, then you named it wrong; it's not a vertex at all, it's Flag_quad.
So it's hard to know what you're even trying to accomplish here.
Also:
If I use "in_position_1" my texture RENDERS PERFECTLY, but if I use in_position_2 nothing happens...
Of course it does. Your position data is in attribute 0. Your position data is therefore not in attribute 1. If you pretend attribute 1 has your position data when it clearly doesn't, you will not get reasonable results.
Your problem is that you're always using attribute 0 when you should be using both of them. You shouldn't be picking one or the other. Use in_position_1 for the position and in_position_2 for the texture coordinate. And try to name them reasonably, based on what they do (like position and texture_coord or something). Don't use numbers for them.
Tip: before linking my shaders I am doing:
That is the exact same thing as the layout(location=#) setting in the shader. If you want it in the shader, then put it in the shader. If you want it in your OpenGL code, then put it in your OpenGL code. Don't put it in both places.
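A minimal sketch of the layout that advice leads to; the names (Vertex, position, texcoord) are illustrative, not from the question, and offsetof requires <cstddef>:
struct Vertex
{
    GLfloat position[ 2 ];
    GLfloat texcoord[ 2 ];
};
// Four Vertex entries describe the quad; after uploading them with glBufferData:
glVertexAttribPointer( 0, 2, GL_FLOAT, GL_FALSE, sizeof( Vertex ),
    (void *)offsetof( Vertex, position ) );
glVertexAttribPointer( 1, 2, GL_FLOAT, GL_FALSE, sizeof( Vertex ),
    (void *)offsetof( Vertex, texcoord ) );
// Matching shader inputs, using only the layout qualifier (no glBindAttribLocation):
//   layout(location = 0) in vec2 position;
//   layout(location = 1) in vec2 texcoord;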