Multiple images of same mesh without duplicate triangle transfers - c++

I take multiple images of the same mesh using OpenGL, GLEW and GLFW. The mesh (triangles) doesn't change in each shot, only the ModelViewMatrix does.
Here's the important code of my mainloop:
for (int i = 0; i < number_of_images; i++) {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    /* set GL_MODELVIEW matrix depending on i */
    glBegin(GL_TRIANGLES);
    for (Triangle &t : mesh) {
        for (Point &p : t) {
            glVertex3f(p.x, p.y, p.z);
        }
    }
    glEnd();
    glReadPixels(/*...*/); // get picture and store it somewhere
    glfwSwapBuffers();
}
As you can see, I set/transfer the triangle vertices for each shot I want to take. Is there a solution in which I only need to transfer them once? My mesh is quite large, so this transfer takes quite some time.

In the year 2016 you must not use glBegin/glEnd. No way. Use Vertex Array Objects instead, and use custom vertex and/or geometry shaders to reposition and modify your vertex data. Using these techniques, you will upload your data to the GPU once, and then you'll be able to draw the same mesh with various transformations.
Here is an outline of what your code may look like:
// 1. Initialization.
// Object handles:
GLuint vao;
GLuint verticesVbo;
// Generate and bind vertex array object.
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
// Generate a buffer object.
glGenBuffers(1, &verticesVbo);
// Enable vertex attribute number 0, which
// corresponds to vertex coordinates in older OpenGL versions.
const GLuint ATTRIBINDEX_VERTEX = 0;
glEnableVertexAttribArray(ATTRIBINDEX_VERTEX);
// Bind buffer object.
glBindBuffer(GL_ARRAY_BUFFER, verticesVbo);
// Mesh geometry. In your actual code you probably will generate
// or load these data instead of hard-coding.
// This is an example of a single triangle.
GLfloat vertices[] = {
0.0f, 0.0f, -9.0f,
0.0f, 0.1f, -9.0f,
1.0f, 1.0f, -9.0f
};
// Determine vertex data format.
glVertexAttribPointer(ATTRIBINDEX_VERTEX, 3, GL_FLOAT, GL_FALSE, 0, 0);
// Pass actual data to the GPU.
glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat)*3*3, vertices, GL_STATIC_DRAW);
// Initialization complete - unbinding objects.
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
// 2. Draw calls.
while(/* draw calls are needed */) {
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glBindVertexArray(vao);
// Set transformation matrix and/or other
// transformation parameters here using glUniform* calls.
glDrawArrays(GL_TRIANGLES, 0, 3);
glBindVertexArray(0); // Unbinding just as an example in case if some other code will bind something else later.
}
And a vertex shader may look like this:
#version 330 core
layout(location = 0) in vec3 vertex_pos;
uniform mat4 viewProjectionMatrix; // Assuming you set this before glDrawArrays.
void main(void) {
    gl_Position = viewProjectionMatrix * vec4(vertex_pos, 1.0);
}
Also take a look at this page for a good modern accelerated graphics book.

@BDL already commented that you should abandon the immediate mode drawing calls (glBegin … glEnd) and switch to Vertex Array drawing (glDrawElements, glDrawArrays) that fetch their data from Vertex Buffer Objects (VBOs). @Sergey mentioned Vertex Array Objects in his answer, but those are actually state containers for VBOs.
A very important thing you have to understand – and the way you asked your question suggests you're not aware of it yet – is that OpenGL does not deal with "meshes", "scenes" or the like. OpenGL is just a drawing API. It draws points… lines… and triangles… one at a time… with no connection between them whatsoever. That's it. So when you show multiple views of the "same" thing, you must draw it several times. There's no way around this.
Most recent versions of OpenGL support multiple viewport rendering, but it still takes a geometry shader to multiply the geometry into several pieces to be drawn.
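For completeness, a rough sketch of such a geometry shader (GL 4.1+ viewport arrays; the uniform name and the fixed count of 4 views are assumptions, not anything from the question):
#version 410 core
layout(triangles, invocations = 4) in;        // run once per view
layout(triangle_strip, max_vertices = 3) out;
uniform mat4 viewProjection[4];               // one matrix per view (assumed name)
void main() {
    for (int i = 0; i < 3; ++i) {
        gl_ViewportIndex = gl_InvocationID;   // route this copy to viewport gl_InvocationID
        gl_Position = viewProjection[gl_InvocationID] * gl_in[i].gl_Position;
        EmitVertex();
    }
    EndPrimitive();
}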

Related

I don't understand how glGenVertexArrays and glBindVertexArray works [duplicate]

I am just starting to learn OpenGL today from this tutorial: http://openglbook.com/the-book/
I got to chapter 2, where I draw a triangle, and I understand everything except VAOs (is this acronym OK?). The tutorial has this code:
glGenVertexArrays(1, &VaoId);
glBindVertexArray(VaoId);
While I understand that the code is necessary, I have no clue what it does. Although I never use VaoId past this point (except to destroy it), the code does not function without it. I am assuming this is because it is required to be bound, but I don't know why. Does this exact code just need to be part of every OpenGL program? The tutorial explains VAOs as:
A Vertex Array Object (or VAO) is an object that describes how the vertex attributes are stored in a Vertex Buffer Object (or VBO). This means that the VAO is not the actual object storing the vertex data, but the descriptor of the vertex data. Vertex attributes can be described by the glVertexAttribPointer function and its two sister functions glVertexAttribIPointer and glVertexAttribLPointer, the first of which we’ll explore below.
I don't understand how the VAO describes the vertex attributes. I have not described them in any way. Does it get the information from the glVertexAttribPointer? I guess this must be it. Is the VAO simply a destination for the information from glVertexAttribPointer?
On a side note, is the tutorial I am following acceptable? Is there anything I should watch out for or a better tutorial to follow?
"Vertex Array Object" is brought to you by the OpenGL ARB Subcommittee for Silly Names.
Think of it as a geometry object. (As an old time SGI Performer programmer, I call them geosets.) The instance variables/members of the object are your vertex pointer, normal pointer, color pointer, attrib N pointer, ...
When a VAO is first bound, you assign these members by calling
glEnableClientState(GL_VERTEX_ARRAY); glVertexPointer...;
glEnableClientState(GL_NORMAL_ARRAY); glNormalPointer...;
and so on. Which attributes are enabled and the pointers you supply are stored in the VAO.
After that, when you bind the VAO again, all those attributes and pointers also become current. So one glBindVertexArray call is equivalent to all the code previously needed to set up all the attributes. It's handy for passing geometry around between functions or methods without having to create your own structs or objects.
(One time setup, multiple use is the easiest way to use VAOs, but you can also change attributes just by binding it and doing more enable/pointer calls. VAOs are not constants.)
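In code, the "set up once, use many times" pattern is simply this (a sketch; meshVao and vertexCount are placeholder names):
glBindVertexArray(meshVao);                   // restores every enable/pointer recorded in the VAO
glDrawArrays(GL_TRIANGLES, 0, vertexCount);
glBindVertexArray(0);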
More info in response to Patrick's questions:
The default for a newly created VAO is that it's empty (AFAIK). No geometry at all, not even vertexes, so if you try to draw it, you'll get an OpenGL error. This is reasonably sane, as in "initialize everything to False/NULL/zero".
You only need to glEnableClientState when you set things up. The VAO remembers the enable/disable state for each pointer.
Yes, the VAO will store the glEnableVertexAttribArray state and the glVertexAttribPointer settings. The old vertex, normal, color, ... arrays are the same as attribute arrays, vertex == #0 and so on.
I always think about VAO as an array of data buffers used by OpenGL. Using modern OpenGL you will create a VAO and Vertex Buffer Objects.
// vao, vbo, vbo1 and vbo2 are int buffers/arrays (JOGL-style code)
glGenVertexArrays(3, vao);          // creates three VAOs
glBindVertexArray(vao.get(0));
glGenBuffers(vbo.length, vbo, 0);   // VBOs for the first VAO
glBindVertexArray(vao.get(1));
glGenBuffers(vbo1.length, vbo1, 0); // VBOs for the second VAO
glBindVertexArray(vao.get(2));
glGenBuffers(vbo2.length, vbo2, 0); // VBOs for the third VAO
The next step is to bind data to a buffer:
glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
glBufferData(GL_ARRAY_BUFFER,vertBuf.limit()*4, vertBuf, GL_STATIC_DRAW); //vertf buf is a floatbuffer of vertices
At this point OpenGL sees the data only as a raw buffer. (The original answer includes a diagram here illustrating this; it is not reproduced in this text.)
Now we can use glVertexAttribPointer to tell OpenGL what the data in the buffer represents:
glBindBuffer(GL_ARRAY_BUFFER, vbo[0]); // make sure VBO 0 is the bound array buffer
glVertexAttribPointer(0, 3, GL_FLOAT, false, 0, 0); // each vertex has 3 components of type GL_FLOAT with 0 stride (space) between them, and the first component starts at offset 0 (start of data)
OpenGL now has the data in the buffer and knows how the data is organized into vertices. The same process can be applied to texture coordinates, except that each vertex has two values and the data goes to a different attribute index:
glBindBuffer(GL_ARRAY_BUFFER, vbo[1]);
glBufferData(GL_ARRAY_BUFFER, coordBuf.limit()*4, coordBuf, GL_STATIC_DRAW); // coordBuf is a float buffer of texture coordinates
glVertexAttribPointer(1, 2, GL_FLOAT, false, 0, 0); // attribute 1: two GL_FLOAT components per vertex
Next you can bind a texture and draw the arrays; you will also want to create a vertex and fragment shader, compile them and attach them to a program (not included here).
glActiveTexture(GL_TEXTURE0); // select texture unit 0
glBindTexture(GL_TEXTURE_2D, textureID); // bind our texture to it
glDrawArrays(GL_TRIANGLES, 0, 6); // 6 vertices form the two triangles of a square
Vertex Array Objects are like macros in word processing programs and the like. A good description is found here.
Macros just remember the actions you did, such as activate this attribute, bind that buffer, etc. When you call glBindVertexArray( yourVAOId ), it simply replays those attribute pointer bindings and buffer bindings.
So your next call to draw uses whatever was bound by the VAO.
VAO's don't store vertex data. No. The vertex data is stored in a vertex buffer or in an array of client memory.
VAO is an object that represents the vertex fetch stage of the OpenGL pipeline and is used to supply input to the vertex shader.
You can create vertex array object like this
GLuint vao;
glCreateVertexArrays(1, &vao);
glBindVertexArray(vao);
First, let's do a simple example. Consider an input parameter like this in shader code:
layout (location = 0) in vec4 offset; // input vertex attribute
To fill in this attribute we can use
glVertexAttrib4fv(0, attrib); // updates the value of input attribute 0
Although the vertex array object stores these static attribute values for
you, it can do a lot more.
After creating the vertex array object we can start filling in its state. We will ask OpenGL to fill it automatically using the data stored in a buffer object that we supply. Each vertex attribute gets to fetch data from a buffer bound to one of several vertex buffer bindings. To this end we use glVertexArrayAttribBinding(GLuint vao, GLuint attribindex, GLuint bindingindex). We also use the glVertexArrayVertexBuffer() function to bind a buffer to one of the vertex buffer bindings, and the glVertexArrayAttribFormat() function to describe the layout and format of the data; finally, we enable automatic filling of the attribute by calling glEnableVertexArrayAttrib().
When a vertex attribute is enabled, OpenGL will feed data to the vertex shader based on the format and location information you’ve provided with
glVertexArrayVertexBuffer() and glVertexArrayAttribFormat(). When
the attribute is disabled, the vertex shader will be provided with the static information you provide with a call to glVertexAttrib*().
// First, bind a vertex buffer to the VAO
glVertexArrayVertexBuffer(vao, 0, buffer, 0, sizeof(vmath::vec4));
// Now, describe the data to OpenGL, tell it where it is, and turn on automatic
// vertex fetching for the specified attribute
glVertexArrayAttribFormat(vao, 0, 4, GL_FLOAT, GL_FALSE, 0);
glEnableVertexArrayAttrib(vao, 0);
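One piece mentioned above but not shown in this snippet is mapping the attribute to the vertex buffer binding it should fetch from; with the objects above that would be (assuming binding index 0, the binding used in glVertexArrayVertexBuffer):
// Attribute 0 fetches from vertex buffer binding 0 (the binding set up above)
glVertexArrayAttribBinding(vao, 0, 0);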
And code in a shader
layout (location = 0) in vec4 position;
When you are done with the object, you need to call glDeleteVertexArrays(1, &vao).
You can read OpenGL SuperBible to understand it better.
I was trying to understand this as well and now that I think I do, it would be prudent to post a code example aimed at
people less familiar with OpenGL architecture, as I found the previous examples not very illuminating and most tutorials
just tell you to copy paste the code without explaining it.
(This is in C++ but the code can be easily translated to C)
In this example, we'll be rendering a rectangle, which has 4 vertices. Each vertex has a position (vec3, xyz), texture coordinate (vec2, uv) and color attribute (vec4, rgba).
I think it's cleanest to separate each attribute into their own array:
float positions[] = {
+0.5, +0.5, 0,
+0.5, -0.5, 0,
-0.5, -0.5, 0,
-0.5, +0.5, 0
};
float colors[] = {
1, 1, 1, 1,
1, 1, 1, 1,
1, 1, 1, 1,
1, 1, 1, 1
};
float tex_coords[] = {
0, 0,
0, 1,
1, 1,
1, 0
};
Our vertex array object will describe the four vertices with these properties.
First, we need to create the vertex array:
GLuint vertex_array;
glGenVertexArrays(1, &vertex_array);
Each vertex array has a number of buffers; these can be thought of as properties of the array. Each vertex array has an arbitrary number of "slots" for the buffers. Along with which buffer is in which slot, it stores where the data for that buffer comes from and how that data is formatted. We need to make OpenGL aware of which slot to use, where the data is, and how it is formatted.
The buffer slots are indexed, so the first buffer is index 0, the second is 1, etc.
These locations correspond to the layout defined in the vertex shader:
// vertex shader
std::string _noop_vertex_shader_source = R"(
#version 420
layout (location = 0) in vec3 _position_3d; // slot 0: xyz
layout (location = 1) in vec4 _color_rgba; // slot 1: rgba
layout (location = 2) in vec2 _tex_coord; // slot 2: uv
out vec2 _vertex_tex_coord;
out vec4 _vertex_color_rgba;
void main()
{
gl_Position = vec4(_position_3d.xy, 1, 1); // write the clip-space position (this 2D example drops z)
_vertex_color_rgba = _color_rgba; // forward color to fragment shader
_vertex_tex_coord = _tex_coord; // forward tex coord to fragment shader
}
)";
We see that the position property is at location 0, the color property at 1 and the tex coords at 2. We'll store these
for clarity:
// property locations from our shader
const auto vertex_pos_location = 0;
const auto vertex_color_location = 1;
const auto vertex_tex_coord_location = 2;
We now need to tell OpenGL the information about each buffer outlined above:
// bind the array; this makes OpenGL aware that we are modifying it with future calls
glBindVertexArray(vertex_array);
// create the position buffer
GLuint position_buffer;
glGenBuffers(1, &position_buffer);
// bind the buffer so OpenGL knows we are currently operating on it
glBindBuffer(GL_ARRAY_BUFFER, position_buffer);
// upload the position data into the buffer
glBufferData(GL_ARRAY_BUFFER, sizeof(positions), positions, GL_STATIC_DRAW);
// tell OpenGL how the data is formatted
glVertexAttribPointer(vertex_pos_location, 3, GL_FLOAT, GL_FALSE, 0, (void*) 0);
// tell OpenGL that this slot should be used
glEnableVertexAttribArray(vertex_pos_location);
Here, we generate a buffer that will hold our position data. For glVertexAttribPointer, we pass the correct location, 3 components (as the positions are xyz coordinates), and no stride or offset. Because each property lives in its own tightly packed array, we can leave both of those as 0.
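For contrast, if all three properties were interleaved in one buffer, the stride and offsets would be non-zero. A hypothetical sketch (this layout is not used in the rest of this example):
// Hypothetical interleaved layout: x y z | r g b a | u v  (9 floats per vertex)
const GLsizei stride = 9 * sizeof(float);
glVertexAttribPointer(vertex_pos_location,       3, GL_FLOAT, GL_FALSE, stride, (void*) 0);
glVertexAttribPointer(vertex_color_location,     4, GL_FLOAT, GL_FALSE, stride, (void*) (3 * sizeof(float)));
glVertexAttribPointer(vertex_tex_coord_location, 2, GL_FLOAT, GL_FALSE, stride, (void*) (7 * sizeof(float)));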
Similar to the position, we generate and fill the buffers for the color and tex coord property:
// color
GLuint color_buffer;
glGenBuffers(1, &color_buffer); // generate
glBindBuffer(GL_ARRAY_BUFFER, color_buffer); // bind
glBufferData(GL_ARRAY_BUFFER, sizeof(colors), colors, GL_STATIC_DRAW); // upload data
glVertexAttribPointer(vertex_color_location, 4, GL_FLOAT, GL_FALSE, 0, (void*) 0); // set data format
glEnableVertexAttribArray(vertex_color_location); // enable slot
// tex coords
GLuint tex_coord_buffer;
glGenBuffers(1, &tex_coord_buffer);
glBindBuffer(GL_ARRAY_BUFFER, tex_coord_buffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(tex_coords), tex_coords, GL_STATIC_DRAW);
glVertexAttribPointer(vertex_tex_coord_location, 2, GL_FLOAT, GL_FALSE, 0, (void*) 0);
glEnableVertexAttribArray(vertex_tex_coord_location);
We chose 4 components for the colors because they are in RGBA format, and 2 for the tex coords for obvious reasons.
The last thing we need to render the vertex array is an element buffer. It can be thought of as a list of indices that defines which vertices are rendered in which order. We want to render the rectangle as two triangles, so we choose the following element buffer:
// vertex order
static uint32_t indices[] = {
0, 1, 2, 1, 2, 3
};
GLuint element_buffer;
glGenBuffers(1, &element_buffer); // generate
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, element_buffer); // bind
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW); // upload indices
We do not need to enable a slot for the element buffer; it is not a vertex attribute, and its binding is remembered by the VAO itself. We also don't have to specify the format of the indices here; that is done by the glDrawElements call in the render step.
So why all this? All these functions tell OpenGL where to look for the data for the vertices and how that data is laid out. With the correct buffer data and its layout recorded, we can now bind the vertex array during a render step:
glUseProgram(shader.get_program_id()); // shader program with our vertex shader
glBindVertexArray(vertex_array);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, element_buffer);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);
Where 6 is the number of elements in the element buffer.
This is all that's needed to correctly update the in values in the vertex shader. OpenGL will move the data from
our CPU-side positions, colors and tex_coords into the correct locations 0, 1 and 2 of the vertex shader respectively.
We don't need to bind anything else; the vertex array remembers what we gave it and does it for us, which is why it's convenient and should be preferred in modern OpenGL.
In summary:
Each vertex array has n buffers for arbitrary properties and 1 element buffer. For each property / buffer, we need to
a) generate it (glGenBuffers)
b) bind it (glBindBuffer(GL_ARRAY_BUFFER, ...))
c) upload its data (glBufferData)
d) tell OpenGL how the data is formatted (glVertexAttribPointer)
e) tell OpenGL to use that slot (glEnableVertexAttribArray)
For the element buffer, we only need to generate it, bind it to GL_ELEMENT_ARRAY_BUFFER, and upload the index data. (A compact helper bundling steps a) through e) is sketched below.)
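The compact helper referenced above might look like this (a sketch; the function name is made up and it assumes tightly packed float data):
// Hypothetical helper bundling steps a) through e) for one float attribute buffer.
static GLuint make_attribute_buffer(GLuint location, GLint components,
                                    const void* data, GLsizeiptr size_bytes)
{
    GLuint buffer;
    glGenBuffers(1, &buffer);                                                      // a) generate
    glBindBuffer(GL_ARRAY_BUFFER, buffer);                                         // b) bind
    glBufferData(GL_ARRAY_BUFFER, size_bytes, data, GL_STATIC_DRAW);               // c) upload
    glVertexAttribPointer(location, components, GL_FLOAT, GL_FALSE, 0, (void*) 0); // d) format
    glEnableVertexAttribArray(location);                                           // e) enable
    return buffer;
}
// Usage (with the VAO bound): position_buffer = make_attribute_buffer(vertex_pos_location, 3, positions, sizeof(positions));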
Hopefully that helped shed some light on things. I'm almost positive there will be factual errors in this post as
I'm also mostly new to OpenGL but this was the way I conceptualized it to get my code working.

Can't draw things with EBOs

In a C++ application I am writing, I am trying to draw a quad using an EBO (element buffer object), but I can't get that quad to draw at all. What am I doing wrong?
code:
//vertices and indices
GLfloat vertices[]={
//position texture coordinate
-0.005f,0.02f,0.0f, 0.0f,1.0f,
0.02f,0.02f,0.0f, 1.0f,1.0f,
0.02f,-0.02f,0.0f, 1.0f,0.0f,
-0.005f,-0.02f,0.0f, 0.0f,0.0f,
};
GLfloat indices[]={
0,1,3,
2,3,1
};
//initialization
glCreateVertexArrays(1,&VAO);
glBindVertexArray(VAO);
glCreateBuffers(1,&VBO);
glCreateBuffers(1,&EBO);
glBindBuffer(GL_ARRAY_BUFFER,VBO);
glBufferData(GL_ARRAY_BUFFER,sizeof(vertices),vertices,GL_STATIC_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER,EBO);
glBufferData(GL_ELEMENT_ARRAY_BUFFER,sizeof(indices),indices,GL_STATIC_DRAW);
glVertexAttribPointer(0,3,GL_FLOAT,GL_FALSE,5*sizeof(GLfloat),(GLvoid*)nullptr);
glEnableVertexAttribArray(0);
glVertexAttribPointer(1,2,GL_FLOAT,GL_FALSE,5*sizeof(GLfloat),(GLvoid*)(3*sizeof(GLfloat)));
glEnableVertexAttribArray(1);
glBindVertexArray(0);
//drawing commands
transformLocation=glGetUniformLocation(textureProgram,"transform");
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D,woodTexture);
glUseProgram(textureProgram);
glUniformMatrix4fv(transformLocation,1,GL_FALSE,glm::value_ptr(transform));
glBindVertexArray(bowHandleVAO);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER,bowHandleEBO);
glDrawElements(GL_TRIANGLES,6,GL_UNSIGNED_INT,nullptr);
This works with the glDrawArrays equivalent of this code, but whenever I try to use EBOs it won't draw anything. Comment if you need more information.
The most immediate error that I can see is a type mismatch between your indices definition and its usage in the glDrawElements call.
Suggestion: change GLfloat to GLuint, i.e., define your indices as:
GLuint indices[]={ //...
In addition to what Amadeus says about changing your indices array from float to GLuint, you seem to be using the wrong VAO and EBO. In the code you show us you buffer all your data into a buffer object in VAO and indices to EBO, but then when you try to draw you're drawing with bowHandleVAO and bowHandleEBO.
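Putting both observations together, the relevant lines would look roughly like this (assuming VAO and EBO are the objects actually filled in your initialization code):
// Indices must be an unsigned integer type matching GL_UNSIGNED_INT in glDrawElements
GLuint indices[]={
    0,1,3,
    2,3,1
};
// ...and at draw time, bind the VAO that was set up above (its element buffer binding comes with it):
glBindVertexArray(VAO);
glDrawElements(GL_TRIANGLES,6,GL_UNSIGNED_INT,nullptr);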

Can I switch from glDrawArrays to using Vertex Buffer Objects?

I have an OpenGL ES 1.1 2D sprite engine that's based on one GL_TRIANGLE_FAN per sprite. The main rendering code that gets called per-sprite, per-frame is as follows:
void drawTexture(BitmapImage* aImage, short* vertices, float* texCoords,
ColorMap &colorMap, TInt xDest, TInt yDest, TInt aAlpha)
{
glPushMatrix();
glLoadIdentity();
glBindTexture(GL_TEXTURE_2D, textureId);
glVertexPointer(3, GL_SHORT, 0, vertices);
glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
glColorPointer(RGBA_BYTES, GL_UNSIGNED_BYTE, 0, colorMap.GetMap());
TFloat scaleX, scaleY;
aImage->getScale(scaleX, scaleY);
glTranslatef((float)xDest, (float)yDest, 0.0f);
glScalef(scaleX, scaleY, 1.0f);
glRotatef(aImage->getRotAngle(), 0.0f, 0.0f, 1.0f);
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
glPopMatrix();
}
I've been told that switching to Vertex Buffer Objects (VBOs) will significantly increase the performance of rendering, so I'd like to do that. My research thus far has led me to several examples showing how to set up individual vertex, color, and texture offset buffers, but good examples of how to interleave this data have been more elusive.
For example, I'm pretty sure this is how I'd set up to render with my vertex data in a VBO:
glGenBuffers(1, &batchBufferHandle);
glBindBuffer(GL_ARRAY_BUFFER, batchBufferHandle);
glBufferData(GL_ARRAY_BUFFER, dataSize, data, GL_STATIC_DRAW);
glVertexPointer(3, GL_SHORT, 0, 0);
glDrawElements(..., 0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glDeleteBuffers(1, &batchBufferHandle);
Apparently I'd generate and bind similar buffers for texture coordinates and vertex color data, though I'm not 100% clear on how setting those up would differ.
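As a rough illustration (not the poster's actual data), interleaving for the fixed-function ES 1.1 pointers amounts to one VBO plus a non-zero stride; the struct, vertexCount and vertexData below are hypothetical:
// Hypothetical interleaved vertex layout for the ES 1.1 fixed-function pipeline.
struct SpriteVertex {
    GLshort pos[3];      // xyz
    GLfloat uv[2];       // texture coordinates
    GLubyte rgba[4];     // color
};
glBindBuffer(GL_ARRAY_BUFFER, batchBufferHandle);
glBufferData(GL_ARRAY_BUFFER, vertexCount * sizeof(SpriteVertex), vertexData, GL_STATIC_DRAW);
glVertexPointer(3, GL_SHORT, sizeof(SpriteVertex), (void*) offsetof(SpriteVertex, pos));     // offsetof from <cstddef>
glTexCoordPointer(2, GL_FLOAT, sizeof(SpriteVertex), (void*) offsetof(SpriteVertex, uv));
glColorPointer(4, GL_UNSIGNED_BYTE, sizeof(SpriteVertex), (void*) offsetof(SpriteVertex, rgba));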
My understanding is that the speed boost would come from rendering a bunch of these triangle fans in one "draw call", but what is a "draw call" in this context? DrawElements() gets called multiple times using this methodology, so that can't be it...?
Whatever the case, it would mean that I'd have to generate a VBO (or three) that contain all the data in series for a bunch of sprites. That can be difficult enough on its own given the legacy code I'm dealing with, but I also need to translate, scale, and rotate each individual sprite. Where does that data go in the VBO(s)?
My conclusion thus far is that using VBOs is only helpful in the case of a SINGLE but complex object. It would appear that what I want to do is not possible -- provide OpenGL with a list of sprites to render (including all vertex, color, texture map, scale, rotation, and translation information for each).
Is my assessment correct or is there a way to do this (using OpenGL ES 1.1)?

GLSL, combining 2D and 3D textures

I am trying to blend a 3D texture with a 2D one to make a terrain. The 3D texture has moss, sand, snow and the like, interpolated to enhance the illusion of heights. The 2D texture currently only has an orange line across meant to be a "road". This is my fragment shader:
# version 420
uniform sampler3D mainTexture;
uniform sampler2D roadTexture;
void main() {
vec4 diffuse3D = texture3D(mainTexture, gl_TexCoord[0].stp);
vec4 diffuse2D = texture2D(roadTexture, gl_TexCoord[1].st);
// Yes, I am aware I am only returning the 2D texture value
// However this is for testing purposes only
// Doing gl_FragColor = diffuse3D + diffuse2D;
// Or any other operation returns the 3D texture only
gl_FragColor = diffuse2D;
}
And this is my drawing call:
void Terrain::Draw() {
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, sizeof(glm::vec3), &v[0].x);
glEnableClientState(GL_NORMAL_ARRAY);
glNormalPointer(GL_FLOAT, sizeof(glm::vec3), &n[0].x);
s.enable(); // simple glUseProgram call within my Shader object
glClientActiveTexture(GL_TEXTURE0);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glEnable(GL_TEXTURE_3D);
glBindTexture(GL_TEXTURE_3D, id_texture);
s.setSampler("mainTexture",0); // Calls to glGetUniformLocation and glUniform1i
glTexCoordPointer(3, GL_FLOAT, sizeof(glm::vec3), &t[0].x);
glClientActiveTexture(GL_TEXTURE1);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, id_texture_road);
s.setSampler("roadTexture",1); // Same as above
glTexCoordPointer(2, GL_FLOAT, sizeof(glm::vec2), &t2[0].x);
glPushMatrix();
glScalef(scalex,scaley,scalez);
glDrawElements(GL_TRIANGLES, sizei, GL_UNSIGNED_INT, index);
glPopMatrix();
s.disable(); // glUseProgram(0)
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisable(GL_TEXTURE_3D);
glDisable(GL_TEXTURE_2D);
}
Here is the code for my setSampler() method:
void Shader::setSampler(std::string name, GLint value)
{
GLuint loc = glGetUniformLocation(program, name.c_str());
if (loc>0)
{
glUniform1i(loc, value);
}
}
The result is a solid black color upon the whole terrain. I have sadly been unable to find information on sampler3D, but the diffuse3D variable in my fragment shader does compute to the correct texture, and my texture coordinates for the 2D texture are being correctly sent to the fragment shader (I know this because I used them to color the terrain for testing and got a smooth gradient from green to red, which is what you would expect using only the first 2 coordinates). I also checked the values passed to my setSampler() method and I do get the 0 and 1, and the 1 and 2 locations corresponding to them.
All of the help I can find on this issue is in the vicinity of the advice provided here, which I have already implemented.
Can anybody assist?
EDIT: So, just for kicks, I swapped my texture units so the 2D texture became unit 0 and the 3D became unit 1. Now only the 2D texture is rendered. But my texture units are passed correctly (at least in appearence) to the shader. Any clues?
vec4 diffuse3D = texture3D(mainTexture, gl_TexCoord[0].stp);
vec4 diffuse2D = texture2D(roadTexture, gl_TexCoord[1].st);
gl_FragColor = diffuse2D;
Let's pretend that this wasn't using shaders. Let's pretend you were just writing a function in C++ that returns a value.
int FuncName(int val1, int val2)
{
int test1 = Compute(val1);
int test2 = Compute(val2);
return test2;
}
What will this function return? Obviously, it returns Compute(val2), completely ignoring the value of test1. It won't magically combine test1 and test2. They're separate values, and therefore, they remain separate unless you explicitly combine them.
Just like your fragment shader.
Shaders aren't magic; they're programming. They only do what you tell them to. So if you say, "get a value from a texture and then don't do anything with it", it will dutifully do exactly that. Though odds are good that the compiler will optimize out the texture fetch entirely.
If you want a "blend" of two textures, you must blend them. You must fetch from each texture, then use both values to compute a new color.
How exactly you do that depends entirely on you. Maybe your 2D texture has some alpha that represents how much of the 2D texture to show. I don't know; you didn't describe what your texture looks like or how exactly you plan to show the road in some places and not in others.
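As a purely illustrative example (assuming the road texture's alpha channel marks where the road is, which the question does not confirm), such a blend could be as simple as:
vec4 terrain = texture3D(mainTexture, gl_TexCoord[0].stp);
vec4 road    = texture2D(roadTexture, gl_TexCoord[1].st);
gl_FragColor = mix(terrain, road, road.a); // road alpha = 1 shows road, 0 shows terrain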
The reason you get a black color is simply that you don't set the proper uniform variables.
# version 420
uniform sampler3D mainTexture;
uniform sampler2D roadTexture;
void main() {
vec4 diffuse3D = texture3D(mainTexture, gl_TexCoord[0].stp);
vec4 diffuse2D = texture2D(roadTexture, gl_TexCoord[1].st);
gl_FragColor = diffuse2D;
}
What this shader is doing is looking up the value of 'roadTexture' and displaying it. Unfortunately, it has no clue which texture unit 'roadTexture' is currently bound to, and thus will access texture unit 0, where your 3D texture is bound - so you're trying to access a 3D texture with 2D texcoords, which may well return all black. You'll need to query the uniform locations of your textures with glGetUniformLocation and then set them to the correct texture units ( 0/1, respectively ) with glUniform1i.
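In plain calls, that amounts to the following (assuming program is the linked program object, as in the poster's Shader class):
// Sketch: associate each sampler uniform with the texture unit its texture is bound to.
GLint loc3d = glGetUniformLocation(program, "mainTexture");
GLint loc2d = glGetUniformLocation(program, "roadTexture");
glUseProgram(program);
glUniform1i(loc3d, 0); // 3D terrain texture lives on texture unit GL_TEXTURE0
glUniform1i(loc2d, 1); // 2D road texture lives on texture unit GL_TEXTURE1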
EDIT: also, you're using deprecated functionality, so your shader version directive should be changed to #version 420 compatibility - the default is core
You need to call glEnableClientState(GL_TEXTURE_COORD_ARRAY); again after you have made the second texture unit active with glClientActiveTexture(GL_TEXTURE1);
from http://www.opengl.org/sdk/docs/man2/xhtml/glEnableClientState.xml
enabling and disabling GL_TEXTURE_COORD_ARRAY affects the active client texture unit.
Just solved this problem. Apparently you still need glActiveTexture() in addition to glClientActiveTexture(). This was the code that worked, for anyone who gets the same problem:
glClientActiveTexture(GL_TEXTURE0);
glActiveTexture(GL_TEXTURE0);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glBindTexture(GL_TEXTURE_3D, id_texture);
s.setSampler("mainTexture",0); // Calls to glGetUniformLocation and glUniform1i
glTexCoordPointer(3, GL_FLOAT, sizeof(glm::vec3), &t[0].x);
glClientActiveTexture(GL_TEXTURE1);
glActiveTexture(GL_TEXTURE1);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glBindTexture(GL_TEXTURE_2D, id_texture_road);
s.setSampler("roadTexture",1); // Same as above
glTexCoordPointer(2, GL_FLOAT, sizeof(glm::vec2), &t2[0].x);
// Drawing Calls
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glClientActiveTexture(GL_TEXTURE0);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glActiveTexture(GL_TEXTURE0);
Thanks for reading.
