I'm trying to read some example code our teacher gave us about the use of VAOs and VBOs in OpenGL, but I'm having a hard time understanding it. I commented each line to show what I understood. Can someone explain to me what's happening here?
glGenVertexArrays( 2, vao ); // create a vao of size 2
glBindVertexArray( vao[0] ); // we state that we're going to work on the first element in the vao
// p0, p1, ... have been defined somewhere else
GLfloat sommetsCube[] = { p0, p4, p1, p5, p2, p6, p3, p7,
p1, p2, p0, p3, p4, p7, p5, p6 };
glGenBuffers( 1, &vboCube ); // we generate a buffer using data from vboCube (which have been defined somewhere else)
// I really don't get what those lines do
// though I think this has something to do with sending data to the GPU
glBindBuffer( GL_ARRAY_BUFFER, vboCube );
glBufferData( GL_ARRAY_BUFFER, sizeof(sommetsCube), sommetsCube, GL_STATIC_DRAW );
glVertexAttribPointer( locVertex, 3, GL_FLOAT, GL_FALSE, 0, 0 );
glEnableVertexAttribArray(locVertex);
glBindVertexArray(0); // we're no longer working with the first element in the vao
Also, I understand the nature of a VBO, but I'm not so sure about the nature of a VAO. Are they arrays of VBOs? Are they something else?
Your understanding of the first line is wrong. It generates 2 VAO objects and stores their names in the vao array. Compare with the glGenBuffers line, which generates one VBO and stores it in a single variable; passing &vboCube treats vboCube as an array of length 1.
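For comparison, a minimal sketch of the two patterns side by side (using the same vao and vboCube names as in your code):

GLuint vao[2]; // room for two VAO names
glGenVertexArrays(2, vao); // fills vao[0] and vao[1]

GLuint vboCube; // a single buffer name
glGenBuffers(1, &vboCube); // &vboCube is treated as an array of length 1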
Beyond that, a VAO can be thought of as a geometry node in the scene graph, a collection of vertices, texture coordinates, etc.
The two buffer calls do indeed send data to the GPU. The two attrib calls define what that data will be used for in the geometry.
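To make that concrete, here are the same four lines again with comments describing what each call does (a sketch assuming locVertex holds the location of your shader's position attribute):

glBindBuffer(GL_ARRAY_BUFFER, vboCube); // make vboCube the buffer currently bound to GL_ARRAY_BUFFER
glBufferData(GL_ARRAY_BUFFER, sizeof(sommetsCube), sommetsCube, GL_STATIC_DRAW); // allocate GPU storage and copy the vertex data into it
glVertexAttribPointer(locVertex, 3, GL_FLOAT, GL_FALSE, 0, 0); // attribute locVertex reads 3 floats per vertex, tightly packed, starting at offset 0 of the bound buffer
glEnableVertexAttribArray(locVertex); // enable that attribute; the currently bound VAO records all of this state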
Setting up VBO/VAO data is a bit repetitive and ugly in OpenGL. The good news is that these few lines are all you really need to know, and you'll soon be able to recognise them everywhere.
Oh and if you're serious about learning OpenGL, buy the OpenGL SuperBible.
Hope this helps.
I am working on a 3D mesh I am storing in an array: each element of the array is a 4D point in homogeneous coordinates (x, y, z, w). I use OpenCL to do some calculations on these data, which I later want to visualise, so I set up an OpenCL/GL interop context. I have created a shared buffer between OpenCL and OpenGL by using the clCreateFromGLBuffer function on a GL_ARRAY_BUFFER:
...
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, size_of_data, data, GL_DYNAMIC_DRAW);
vbo_buff = clCreateFromGLBuffer(ctx, CL_MEM_READ_WRITE, vbo, &err);
...
In the vertex shader, I access data this way:
layout (location = 0) in vec4 data;
out VS_OUT
{
vec4 vertex;
} vs_out;
void main(void)
{
vs_out.vertex = data;
}
Then in the geometry shader I do something like this:
layout (points) in;
layout (triangle_strip, max_vertices = MAX_VERT) out;
in VS_OUT
{
vec4 vertex;
} gs_in[];
void main()
{
gl_Position = gs_in[0].vertex;
EmitVertex();
...etc...
}
This gives me the ability to generate geometry based on the position of each point stored in the data array.
This way, the geometry I can generate is based only on the current point being processed by the geometry shader: e.g. I am able to construct a small cube (voxel) around each point.
Now I would like to be able to access the position of other points in the data array within the geometry shader: e.g. I would like to be able to retrieve the coordinates of another point (indexed by another shared buffer of arbitrary length) besides the one which is currently being processed, in order to draw a line connecting them.
The problem I have is that in the geometry shader gs_in[0].vertex gives me the position of each point, but I don't know which one it is at the time (which index?). Moreover, I don't know how to access the position of other points besides that one at the same time.
In hypothetical pseudo-code I would like to be able to do something like this:
point_A = gs_in[0].vertex[index_A];
point_B = gs_in[0].vertex[index_B];
draw_line_between_A_and_B(point_A, point_B);
It is not clear to me whether this is possible or not, or how to achieve this within a geometry shader. I would like to stick to this approach because the calculations I do in the OpenCL kernels implement a cellular automata, hence it is convenient for me to organise my code (neutrino) in terms of central nodes and related neighbours.
All suggestions are welcome.
Thanks.
but I don't know which one it is at the time (which index?)
See gl_PrimitiveIDIn
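A minimal sketch of how that looks in your geometry shader (for GL_POINTS input, gl_PrimitiveIDIn is also the index of the point within the draw call):

void main()
{
    int index = gl_PrimitiveIDIn; // which input point this invocation is processing
    vec4 current = gs_in[0].vertex; // same value as before, but now you know its index
    // ...emit geometry as before...
}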
I don't know how to access the position of other points besides that one at the same time.
You can bind the same source buffer twice: once as a vertex source and once as GL_TEXTURE_BUFFER. If your OpenGL implementation supports it, you'll then be able to read the data from there in the shader.
Unlike in Direct3D, in GL the support for the feature is optional; the spec says GL_MAX_GEOMETRY_SHADER_STORAGE_BLOCKS can be zero.
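A hedged host-side sketch of that idea (dataTex is a made-up name; on the shader side you would declare a samplerBuffer and read it with texelFetch):

GLuint dataTex;
glGenTextures(1, &dataTex);
glBindTexture(GL_TEXTURE_BUFFER, dataTex);
// Expose the same vbo that feeds the vertex attributes as a buffer texture;
// each texel is one vec4 point, since the data is x, y, z, w floats.
glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32F, vbo);

// In the geometry shader (sketch):
//   uniform samplerBuffer points;
//   vec4 p = texelFetch(points, someIndex);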
"This gives me the ability of generating geometry based on the position of each point the stored in the data array."
No, it does not. The input to the geometry shader is not all the vertex attributes in the buffer. Let me quote the Geometry Shader wiki page:
Geometry shaders take a primitive as input; each primitive is composed of some number of vertices, as defined by the input primitive type in the shader.
A primitive is a single point, line, or triangle. For instance, if the primitive type is GL_POINTS, then the size of the input array is 1 and you can only access the vertex coordinates of that single point; if the primitive type is GL_TRIANGLES, then the size of the input array is 3 and you can only access the attributes of the 3 vertices (corners) which form the triangle.
If you want to access more data, then you have to use a Shader Storage Buffer Object (or a texture).
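For instance, a minimal sketch of a shader storage block in the geometry shader (the binding point and block name are made up; the host side would attach the point buffer with glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, vbo)):

layout (std430, binding = 0) buffer PointData
{
    vec4 points[]; // the same x, y, z, w points that feed the vertex attribute
};

void main()
{
    vec4 point_A = points[gl_PrimitiveIDIn]; // the point currently being processed
    vec4 point_B = points[index_B]; // any other point, looked up by index
    // ...emit a line from point_A to point_B...
}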
I have read high and low and thought I understood C++ and OpenGL vertex data layouts, but I must be wrong somewhere...
I have a struct to create a Line object. Therefore it has two points (each of 3 floats to represent a vector position). It must also have an object ID to allow me to track the specific object on creation for collisions, etc later on in the application. The struct is shown below.
struct Point
{
Vector position = { 0.0f, 0.0f, 0.0f };
};
struct Line
{
Point B = { 0.0f,0.0f,0.0f };
Point C = { 0.0f,0.0f,0.0f };
int ID = 0;
};
I then create a simple C++ STL vector of Lines and push back two line objects:
vector<Line> lines;
Line w0;
w0.B = { 2.0f,2.0f, 0.0f };
w0.C = { 8.0f,2.0f, 0.0f };
w0.ID = 0;
lines.push_back(w0);
Line w1;
w1.B = { 10.0f,4.0f, 0.0f };
w1.C = { 18.0f,4.0f, 0.0f };
w1.ID = 1;
lines.push_back(w1);
Further on I specify the glVertexAttribPointer as follows:
glVertexAttribPointer(4, 3, GL_FLOAT, GL_FALSE, 0, (void*)(0));
glEnableVertexAttribArray(4);
This draws only 1 line and not the two I create objects for(!). If I remove the ID int variable from my struct, I get both lines showing correctly.
It occurred to me later that I may not have specified the glVertexAttribPointer correctly, so I changed it logically as follows:
glVertexAttribPointer(4, 3, GL_FLOAT, GL_FALSE, sizeof(Line), (void*)offsetof(Line, B));
It then drew only 1 line, at completely different coordinates! Different combinations of offsets etc. didn't help. Can I ultimately keep an int value, different from the rest of the struct, and pass only the floats over to OpenGL? I really need the ID of the object and use it later in the application. There must be a way - I must be missing something... please help.
Just because you need to represent lines in a certain way inside your application doesn't mean that you have to feed exactly that data in exactly that way to OpenGL for drawing. OpenGL doesn't need this ID field. There seems to be no reason to upload this ID data to the GPU. Besides that, there's no way to make OpenGL vertex attribute arrays use a memory layout like the one you have with your array of Line structs. Think about what multiple Lines look like in memory:
B1 C1 ID1 B2 C2 ID2 B3 C3 ID3 …
Note how the gap between consecutive vertex positions is not fixed but is either 0 between two points of the same line segment or sizeof(int) between the end vertex of one line and the start vertex of the next line. There is no way to describe such a vertex attribute array with just a stride and base offset. And all of this is ignoring the fact that compilers are free to add padding bytes between struct members in whatever way they see fit. So your memory layout is not even guaranteed to look like that and, at least in theory, is subject to change depending on which version of which compiler you're using with which compile options.
I would suggest to let go of the idea that the Line struct is a given and every aspect of your application must absolutely work with that exact data representation. You have to upload the data to the GPU for drawing at some point anyways. When you do, simply copy just the start and end points and skip the id. Apart from that, consider the fact that you could also just generally switch from the Array of Structures approach you have now (where you keep an array of Line structures) to a Structure of Arrays approach, i.e., have one array for all the line start points, one for all the end points, and one for the IDs. Depending on how exactly you process your data, this is often beneficial even on the CPU. Finally, there would be the option to upload the data to a Shader Storage Buffer and manually look up the vertex attributes in the vertex shader. I don't think I would recommend going that way here though…
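A minimal sketch of the "copy just the points" approach, assuming your Vector type exposes x, y, z members and keeping attribute location 4 from your code:

// Flatten the lines into a tightly packed float array, skipping the ID field.
std::vector<GLfloat> positions;
positions.reserve(lines.size() * 6);
for (const Line& line : lines)
{
    positions.insert(positions.end(), { line.B.position.x, line.B.position.y, line.B.position.z,
                                        line.C.position.x, line.C.position.y, line.C.position.z });
}

glBufferData(GL_ARRAY_BUFFER, positions.size() * sizeof(GLfloat), positions.data(), GL_STATIC_DRAW);
// The positions are now tightly packed, so stride 0 and offset 0 are correct again.
glVertexAttribPointer(4, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
glEnableVertexAttribArray(4);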
I use oglplus - it's a C++ wrapper for OpenGL.
I have a problem with defining instanced data for my particle renderer - positions work fine but something goes wrong when I want to instance a bunch of ints from the same VBO.
I am going to skip some of the implementation details so as not to make this problem more complicated. Assume that I bind the VAO and VBO before the operations described below.
I have an array of structs (called "Particle") that I upload like this:
glBufferData(GL_ARRAY_BUFFER, sizeof(Particle) * numInstances, newData, GL_DYNAMIC_DRAW);
Definition of the struct:
struct Particle
{
float3 position;
//some more attributes, 9 floats in total
//(...)
int fluidID;
};
I use a helper function to define the OpenGL attributes like this:
void addInstancedAttrib(const InstancedAttribDescriptor& attribDesc, GLSLProgram& program, int offset=0)
{
//binding and some implementation details
//(...)
oglplus::VertexArrayAttrib attrib(program, attribDesc.getName().c_str());
attrib.Pointer(attribDesc.getPerVertVals(), attribDesc.getType(), false, sizeof(Particle), (void*)offset);
attrib.Divisor(1);
attrib.Enable();
}
I add attributes for positions and fluidids like this:
InstancedAttribDescriptor posDesc(3, "InstanceTranslation", oglplus::DataType::Float);
this->instancedData.addInstancedAttrib(posDesc, this->program);
InstancedAttribDescriptor fluidDesc(1, "FluidID", oglplus::DataType::Int);
this->instancedData.addInstancedAttrib(fluidDesc, this->program, (int)offsetof(Particle,fluidID));
Vertex shader code:
uniform vec3 FluidColors[2];
in vec3 InstanceTranslation;
in vec3 VertexPosition;
in vec3 n;
in int FluidID;
out float lightIntensity;
out vec3 sphereColor;
void main()
{
//some typical MVP transformations
//(...)
sphereColor = FluidColors[FluidID];
gl_Position = projection * vertexPosEye;
}
This code as a whole produces the following output: the particles are arranged in the way I wanted them to be, which means that the "InstanceTranslation" property is set up correctly. The group of particles to the left has a FluidID value of 0 and the ones to the right a value of 1. The second set of particles has the proper positions but indexes improperly into the FluidColors array.
What I know:
It's not a problem with the way I set up the FluidColors uniform. If I hard-code the color selection in the shader like this:
sphereColor = FluidID == 0 ? FluidColors[0] : FluidColors[1];
I get the expected result.
OpenGL returns GL_NO_ERROR from glGetError so there's no problem with the enums/values I provide
It's not a problem with the offsetof macro. I tried using hard-coded values and they didn't work either.
It's not a compatibility issue with GLint; I use simple 32-bit ints (checked this with sizeof(int))
I need to use FluidID as an instanced attribute that indexes into the color array because otherwise, if I were to set the color for a particle group as a simple vec3 uniform, I'd have to batch the same particle types (with the same FluidID) together first, which means sorting them, and that would be too costly an operation.
To me, this seems to be an issue of how you set up the fluidID attribute pointer. Since you use the type int in the shader, you must use glVertexAttribIPointer() to set up the attribute pointer. Attributes you set up with the normal glVertexAttribPointer() function work only for float-based attribute types. They accept integer input, but the data will be converted to float when the shader accesses them.
In oglplus, you apparently have to use VertexArrayAttrib::IPointer() instead of VertexArrayAttrib::Pointer() if you want to work with integer attributes.
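In terms of the raw GL calls, a minimal sketch of what the fluidID setup should look like (fluidIdLocation is a made-up name for the location of the FluidID attribute; the rest mirrors your existing setup):

// Note the 'I' variant and the absence of a normalized parameter: the ints
// are passed through to the shader unconverted.
glVertexAttribIPointer(fluidIdLocation, 1, GL_INT, sizeof(Particle),
                       (void*)offsetof(Particle, fluidID));
glVertexAttribDivisor(fluidIdLocation, 1); // one value per instance, as before
glEnableVertexAttribArray(fluidIdLocation);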
I want to pass a single float or unsigned int type variable to the vertex shader, but you can only pass a vec or a struct as an attribute variable. So I used a vec2 type attribute variable and later used it to access the content.
glBindAttribLocation(program, 0, "Bid");
glEnableVertexAttribArray(0);
glVertexAttribIPointer(0, 1, GL_UNSIGNED_INT, sizeof(strideStructure), (const GLvoid*)0);
The vertex shader contains this code:
attribute ivec2 Bid;
void main()
{
int x = Bid.x;
int y = Bid.y;
}
So, when I pass a value each time, doesn't the value get stored in the x-component of the vec2 Bid? In the second run of the loop, will the passed data be stored in the x-component of a different vector attribute? Also, if I change the size parameter to 2, for example, in what order will the data be stored in the vector attribute?
You can use scalar types for attributes. From the GLSL 1.50 spec (which corresponds to OpenGL 3.2):
Vertex shader inputs can only be float, floating-point vectors, matrices, signed and unsigned integers and integer vectors. Vertex shader inputs can also form arrays of these types, but not structures.
No matter if you use vector or scalar values, the types have to match. In your example, you're specifying GL_UNSIGNED_INT as the attribute type, but the type in the shader is ivec2, which is a signed value. It should be uvec2 to match the specified attribute type.
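For instance, a minimal matching pair in the GLSL 1.50-style syntax the quoted spec refers to, keeping your stride and location:

// Vertex shader: an unsigned integer vector, matching GL_UNSIGNED_INT
in uvec2 Bid;

// Application code: the integer attribute pointer, as in your snippet
glVertexAttribIPointer(0, 1, GL_UNSIGNED_INT, sizeof(strideStructure), (const GLvoid*)0);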
Yes, if you declare the type in the shader as uvec2 Bid, but pass only one value, that value will be in Bid.x. Bid.y will be 0. If you pass two values per vertex, the first one will be in Bid.x, and the second one in Bid.y.
You sound a little unclear about how vertex shaders are invoked, particularly when you talk about "run of the loop". There is no looping here. The vertex shader is invoked once for each vertex, and the corresponding attribute values for this specific vertex will be passed in the attribute variables. They will be in the same attribute variables, and in the same place within these variables, for each vertex.
I guess you can picture a "loop" in the sense of the vertex shader being invoked for each vertex. In reality, a lot of processing on the GPU will happen in parallel. This means that the vertex shader for a bunch of vertices will be invoked at the same time. Each one has its own variable instances, so they will not get in each other's way, and the attributes for each one will be passed in exactly the same way.
An additional note on your code. You need to be careful with this call:
glBindAttribLocation(program, 0, "Bid");
glBindAttribLocation() needs to be called before linking the shader program. Otherwise it will have no effect.
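A minimal sketch of the required order (program creation and shader attachment omitted):

glBindAttribLocation(program, 0, "Bid"); // must happen before linking
glLinkProgram(program); // the binding takes effect here
glUseProgram(program);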
I want to be able to input a bunch of vertices to my graphics program and then I want to be able to do the following on them:
Use them in the graphics part of OpenGL, especially in the Vertex Shader.
Do physics calculations on them in a Compute Shader.
Given these requirements, I figured that I need some structure in which to store my vertices and access them correctly. I thought of the following:
ArrayBuffers
Textures (as in storing the information, not for texturing itself)
However, I've thought about it and come up with drawbacks of both variants:
ArrayBuffers:
I'm unsure how my Compute Shader can read, let alone modify, the vertices. Yet I do know how to draw them.
Textures:
I know how to modify them in Compute Shaders, however I am unsure how to draw from a texture. More specifically, the number of elements that need to be drawn depends on the number of written (non-zero) elements in the texture.
I might have overlooked some other important feature that would satisfy my needs, so here is the real question:
How do I create vertices that reside on the GPU and which I can access both in the Vertex Shader and in the Compute Shader?
Hopefully this will clear up a few misconceptions, and give you a little bit better understanding of how general purpose shader storage is setup.
What you have to understand is how buffer objects really work in GL. You often hear people distinguish between things like "Vertex Buffer Objects" and "Uniform Buffer Objects". In reality, there is no fundamental distinction – a buffer object is treated the same way no matter what it stores. It is just a generic data store, and it only takes on special meaning while it is bound to a specific point (e.g. GL_ARRAY_BUFFER or GL_UNIFORM_BUFFER).
Do not think of special-purpose vertex buffers residing on the GPU; think more generally – it is actually unformatted memory that you can read/write if you know the structure. Calls like glVertexAttribPointer (...) describe the data structure of the buffer object sufficiently for glDrawArrays (...) to meaningfully pull vertex attributes from the buffer object's memory for each vertex shader invocation.
You need to do the same thing yourself for compute shaders, as demonstrated below. You need to familiarize yourself with the rules discussed in 7.6.2.2 - Standard Uniform Block Layout to fully understand the following data structure.
Describing a vertex data structure using Shader Storage Blocks can be done like so:
// Compute Shader SSB Data Structure and Buffer Definition
struct VtxData {
vec4 vtx_pos; // 4N [GOOD] -- Largest base alignment
vec3 vtx_normal; // 3N [BAD]
float vtx_padding7; // N (such that vtx_st begins on a 2N boundary)
vec2 vtx_st; // 2N [BAD]
vec2 vtx_padding10; // 2N (in order to align the entire thing to 4N)
}; // ^^ 12 * sizeof (GLfloat) per-vtx
// std140 is pretty important here, it is the only way to guarantee the data
// structure is aligned as described above and that the stride between
// elements in verts[] is 0.
layout (std140, binding = 1) buffer VertexBuffer {
VtxData verts [];
};
This allows you to use an interleaved vertex buffer in a compute shader, with the data structure defined above. You have to be careful with data alignment when you do this... you could haphazardly use any alignment/stride you wanted for an interleaved vertex array ordinarily, but here you want to conform to the std140 layout rules. This means using 3-component vectors is not always a wise use of memory; you need things to be aligned on N (float), 2N (vec2) or 4N (vec3/vec4) boundaries and this often necessitates the insertion of padding and/or clever packing of data. In the example above, you could fit an entire 3-component vector worth of data in all the space wasted by alignment padding.
Pseudo-code showing how the buffer would be created and bound for dual-use:
struct Vertex {
GLfloat pos [4];
GLfloat normal [3];
GLfloat padding7;
GLfloat st [2];
GLfloat padding10 [2];
} *verts;
[... code to allocate and fill verts ...]
GLuint vbo;
glGenBuffers (1, &vbo);
glBindBuffer (GL_ARRAY_BUFFER, vbo);
glBufferData (GL_ARRAY_BUFFER, sizeof (Vertex) * num_verts, verts, GL_STATIC_DRAW);
glVertexAttribPointer (0, 4, GL_FLOAT, GL_FALSE, 48, (void *)0);  // Vertex Attrib. 0 (vtx_pos)
glVertexAttribPointer (1, 3, GL_FLOAT, GL_FALSE, 48, (void *)16); // Vertex Attrib. 1 (vtx_normal)
glVertexAttribPointer (2, 2, GL_FLOAT, GL_FALSE, 48, (void *)32); // Vertex Attrib. 2 (vtx_st)
glBindBufferBase (GL_SHADER_STORAGE_BUFFER, 1, vbo); // Buffer Binding 1
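To complete the picture, a minimal sketch of a compute shader touching the same buffer through the block declared earlier (the local size and the particular update are made up; the VtxData struct and VertexBuffer block from above are declared in this shader as well):

layout (local_size_x = 128) in;

void main (void)
{
    uint idx = gl_GlobalInvocationID.x;
    if (idx >= uint (verts.length ()))
        return;
    // Read and write vertices in place; issue an appropriate memory barrier
    // before the buffer is used as a vertex source for drawing again.
    verts [idx].vtx_pos.xyz += verts [idx].vtx_normal * 0.01;
}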