I am trying to use glVertexAttribPointer with a structure of Eigen objects, similar to this:
struct Vertex {
Eigen::Vector3f position;
Eigen::Vector3f normal;
};
The problem is setting the offset for glVertexAttribPointer. Since there is no public access to the m_data member that Eigen uses internally to store the data, offsetof cannot be applied to it directly.
It seems like there is no nice way to do this. My current approach is something like:
(void*)((char*)vertices[0].position.data() - (char*)(&vertices[0]))
where vertices is a std::vector<Vertex>.
This is by no means nice (especially in modern C++). I doubt there is a truly nice solution, but what would be a safer way of doing this? Or at least, how can this operation be isolated as much as possible, so I don't have to write it for every call to glVertexAttribPointer?
The Eigen documentation guarantees that the layout of an Eigen::Vector3f = Eigen::Matrix<float,3,1> is as follows:
struct {
float data[Rows*Cols]; // with (size_t(data)%A(Rows*Cols*sizeof(T)))==0
};
In other words, the float[3] is at offset 0 of the Eigen::Vector3f structure. You can therefore pass offsetof(Vertex, position) and offsetof(Vertex, normal) as-is to your glVertexAttribPointer calls for the offsets, and sizeof(Vertex) for the stride.
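For example, a minimal sketch (attribute locations 0 and 1 and the buffer name vbo are assumed here; note that offsetof on a non-standard-layout type is only conditionally supported by the standard, but it works with Eigen types on common compilers):

glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(Vertex), vertices.data(), GL_STATIC_DRAW);
// position -> location 0, normal -> location 1, interleaved with stride sizeof(Vertex)
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, position));
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, normal));
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);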
I have read high and low and thought I understood C++ and OpenGL vertex data layouts, but I must be wrong somewhere...
I have a struct to create a Line object. Therefore it has two points (each of 3 floats to represent a vector position). It must also have an object ID to allow me to track the specific object on creation for collisions, etc later on in the application. The struct is shown below.
struct Point
{
Vector position = { 0.0f, 0.0f, 0.0f };
};
struct Line
{
Point B = { 0.0f,0.0f,0.0f };
Point C = { 0.0f,0.0f,0.0f };
int ID = 0;
};
I then create a simple C++ std::vector of Lines and push back two Line objects:
vector<Line> lines;
Line w0;
w0.B = { 2.0f,2.0f, 0.0f };
w0.C = { 8.0f,2.0f, 0.0f };
w0.ID = 0;
lines.push_back(w0);
Line w1;
w1.B = { 10.0f,4.0f, 0.0f };
w1.C = { 18.0f,4.0f, 0.0f };
w1.ID = 1;
lines.push_back(w1);
Further on I specify the glVertexAttribPointer as follows:
glVertexAttribPointer(4, 3, GL_FLOAT, GL_FALSE, 0, (void*)(0));
glEnableVertexAttribArray(4);
This draws only one line, not the two I created objects for(!). If I remove the ID int variable from my struct, I get both lines showing correctly.
It appeared later that I may not have specified the glVertexAttribPointer correctly, so I changed it logically as follows:
glVertexAttribPointer(4, 3, GL_FLOAT, GL_FALSE, sizeof(Line), (void*)offsetof(Line, B));
It then drew only one line, at completely different coordinates! Different combinations of offsets etc. didn't help. Can I ultimately keep an int value alongside the rest of the struct and pass only the floats over to OpenGL? I really need the ID of the object so I can use it later in the application. There must be a way - I must be missing something... please help.
Just because you need to represent lines in a certain way inside your application doesn't mean that you have to feed exactly that data in exactly that way to OpenGL for drawing. OpenGL doesn't need this ID field. There seems to be no reason to upload this ID data to the GPU. Besides that, there's no way to make OpenGL vertex attribute arrays use a memory layout like the one you have with your array of Line structs. Think about what multiple Lines look like in memory:
B1 C1 ID1 B2 C2 ID2 B3 C3 ID3 …
Note how the gap between consecutive vertex positions is not fixed but is either 0 between two points of the same line segment or sizeof(int) between the end vertex of one line and the start vertex of the next line. There is no way to describe such a vertex attribute array with just a stride and base offset. And all of this is ignoring the fact that compilers are free to add padding bytes between struct members in whatever way they see fit. So your memory layout is not even guaranteed to look like that and, at least in theory, is subject to change depending on which version of which compiler you're using with which compile options.
I would suggest letting go of the idea that the Line struct is a given and that every aspect of your application must absolutely work with that exact data representation. You have to upload the data to the GPU for drawing at some point anyway. When you do, simply copy just the start and end points and skip the ID (see the sketch below). Apart from that, consider that you could also switch from the Array of Structures approach you have now (where you keep an array of Line structures) to a Structure of Arrays approach, i.e., have one array for all the line start points, one for all the end points, and one for the IDs. Depending on how exactly you process your data, this is often beneficial even on the CPU. Finally, there would be the option of uploading the data to a Shader Storage Buffer and manually looking up the vertex attributes in the vertex shader. I don't think I would recommend going that way here though…
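A minimal sketch of that copy-on-upload idea, assuming Point really is just three tightly packed floats:

// gather just the endpoints, dropping the IDs, right before uploading
std::vector<Point> points;
points.reserve(lines.size() * 2);
for (const Line& l : lines)
{
    points.push_back(l.B);
    points.push_back(l.C);
}
glBufferData(GL_ARRAY_BUFFER, points.size() * sizeof(Point), points.data(), GL_STATIC_DRAW);
// now a tight array of positions: stride 0 (i.e. tightly packed) and offset 0 are correct
glVertexAttribPointer(4, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
glEnableVertexAttribArray(4);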
I have a shader storage block in the vertex shader, like this:
layout(std430,binding=0) buffer buf {mat3 rotX, rotY, rotZ; } b;
I initialized those 3 matrices with identity matrix like this:
float mats[]={ 1,0,0,0,1,0,0,0,1,
1,0,0,0,1,0,0,0,1,
1,0,0,0,1,0,0,0,1 };
GLuint ssbos;
glGenBuffers(1,&ssbos);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER,0,ssbos);
glBufferData(GL_SHADER_STORAGE_BUFFER,sizeof(mats),mats,GL_DYNAMIC_DRAW);
But it doesn't seem to work (I'm using the OpenGL 4.3 core profile). Am I doing something wrong?
glBindBufferBase(GL_SHADER_STORAGE_BUFFER,0,ssbos);
glBufferData(GL_SHADER_STORAGE_BUFFER,sizeof(mats),mats,GL_DYNAMIC_DRAW);
glBindBufferBase binds the entire range of the buffer. But it's not a magic "bind whatever the buffer happens to store" function. It binds the entire range of the buffer as it currently exists.
And since you haven't allocated any storage for that buffer object, its current state is empty: a size of 0. And that's what you bind: a range of 0 bytes of memory.
Oh sure, in the next statement, you give the buffer memory. But that doesn't change the fact that it didn't have memory when you bound it.
So you need to create storage for the buffer before binding a range of it.
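In other words, something like this ordering (a sketch based on the code from the question):

GLuint ssbos;
glGenBuffers(1, &ssbos);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbos);                                // create the buffer object
glBufferData(GL_SHADER_STORAGE_BUFFER, sizeof(mats), mats, GL_DYNAMIC_DRAW);  // allocate its storage first
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssbos);                         // now the bound range is non-empty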
Also, don't use vec3 or any types related to vec3 in buffer-backed interface blocks. And you really shouldn't be passing axial rotation matrices like that.
The std430 layout is essentially std140 with tighter packing of structs and arrays. The data you are supplying does not respect the layout rules.
From section 7.6.2.2 Standard Uniform Block Layout of the OpenGL spec:
If the member is an array of scalars or vectors, the base alignment and array stride are set to match the base alignment of a single array element, according to rules (1), (2), and (3), and rounded up to the base alignment of a vec4. The array may have padding at the end; the base offset of the member following the array is rounded up to the next multiple of the base alignment.
If the member is a column-major matrix with C columns and R rows, the matrix is stored identically to an array of C column vectors with R components each, according to rule (4).
So your mat3 matrices are treated as 3 vec3 each (one for each column). According to the rule (4), a vec3 is padded to occupy the same memory as a vec4.
In essence, when using a mat3 in an SSBO, you need to supply the same amount of data as if you were using a mat3x4, with the added benefit of a more confusing memory layout. Therefore, it is best to use mat3x4 (or mat4) in an SSBO and only use its relevant portions in the shader. Similar advice also stands for vec3, by the way.
It is easy to get smaller matrices from a larger one:
A wide range of other possibilities exist, to construct a matrix from vectors and scalars, as long as enough components are present to initialize the matrix. To construct a matrix from a matrix:
mat3x3(mat4x4); // takes the upper-left 3x3 of the mat4x4
mat2x3(mat4x2); // takes the upper-left 2x2 of the mat4x2, last row is 0,0
mat4x4(mat3x3); // puts the mat3x3 in the upper-left, sets the lower right
// component to 1, and the rest to 0
This should give you proper results:
float mats[]={ 1,0,0,0, 0,1,0,0, 0,0,1,0,
1,0,0,0, 0,1,0,0, 0,0,1,0,
1,0,0,0, 0,1,0,0, 0,0,1,0, };
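Alternatively, a sketch of the mat3x4 route: keep the padded 12-floats-per-matrix data above, declare the block with mat3x4, and truncate in the shader (the matrix-from-matrix constructor takes the upper-left portion, as quoted above):

layout(std430,binding=0) buffer buf { mat3x4 rotX, rotY, rotZ; } b;
// ...
mat3 rx = mat3(b.rotX); // upper-left 3x3: the actual rotation, padding discarded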
I use oglplus - it's a C++ wrapper for OpenGL.
I have a problem with defining instanced data for my particle renderer - positions work fine but something goes wrong when I want to instance a bunch of ints from the same VBO.
I am going to skip some of the implementation details to not make this problem more complicated. Assume that I bind VAO and VBO before described operations.
I have an array of structs (called "Particle") that I upload like this:
glBufferData(GL_ARRAY_BUFFER, sizeof(Particle) * numInstances, newData, GL_DYNAMIC_DRAW);
Definition of the struct:
struct Particle
{
float3 position;
//some more attributes, 9 floats in total
//(...)
int fluidID;
};
I use a helper function to define the OpenGL attributes like this:
void addInstancedAttrib(const InstancedAttribDescriptor& attribDesc, GLSLProgram& program, int offset=0)
{
//binding and some implementation details
//(...)
oglplus::VertexArrayAttrib attrib(program, attribDesc.getName().c_str());
attrib.Pointer(attribDesc.getPerVertVals(), attribDesc.getType(), false, sizeof(Particle), (void*)offset);
attrib.Divisor(1);
attrib.Enable();
}
I add attributes for positions and fluid IDs like this:
InstancedAttribDescriptor posDesc(3, "InstanceTranslation", oglplus::DataType::Float);
this->instancedData.addInstancedAttrib(posDesc, this->program);
InstancedAttribDescriptor fluidDesc(1, "FluidID", oglplus::DataType::Int);
this->instancedData.addInstancedAttrib(fluidDesc, this->program, (int)offsetof(Particle,fluidID));
Vertex shader code:
uniform vec3 FluidColors[2];
in vec3 InstanceTranslation;
in vec3 VertexPosition;
in vec3 n;
in int FluidID;
out float lightIntensity;
out vec3 sphereColor;
void main()
{
//some typical MVP transformations
//(...)
sphereColor = FluidColors[FluidID];
gl_Position = projection * vertexPosEye;
}
This code as a whole produces the following output: the particles are arranged in the way I wanted them to be, which means that the InstanceTranslation attribute is set up correctly. The group of particles to the left should have a FluidID value of 0 and the ones to the right a value of 1. The second group of particles has the proper positions but indexes improperly into the FluidColors array.
What I know:
It's not a problem with the way I set up the FluidColors uniform. If I hard-code the color selection in the shader like this:
sphereColor = FluidID == 0 ? FluidColors[0] : FluidColors[1];
I get the expected colors.
OpenGL returns GL_NO_ERROR from glGetError so there's no problem with the enums/values I provide
It's not a problem with the offsetof macro. I tried using hard-coded values and they didn't work either.
It's not a compatibility issue with GLint; I use plain 32-bit ints (checked this with sizeof(int)).
I need to use FluidID as an instanced attribute that indexes into the color array because otherwise, if I were to set the color for a particle group as a simple vec3 uniform, I'd have to batch the same particle types (those with the same FluidID) together first, which means sorting them, and that would be too costly an operation.
To me, this seems to be an issue of how you set up the fluidID attribute pointer. Since you use the type int in the shader, you must use glVertexAttribIPointer() to set up the attribute pointer. Attributes you set up with the normal glVertexAttribPointer() function work only for float-based attribute types. They accept integer input, but the data will be converted to float when the shader accesses them.
In oglplus, you apparently have to use VertexArrayAttrib::IPointer() instead of VertexArrayAttrib::Pointer() if you want to work with integer attributes.
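A sketch of the fix with the raw GL calls (program here stands for the raw GL program name, not the oglplus wrapper object):

GLint location = glGetAttribLocation(program, "FluidID");
// note: the integer variant has no "normalized" parameter
glVertexAttribIPointer(location, 1, GL_INT, sizeof(Particle), (void*)offsetof(Particle, fluidID));
glVertexAttribDivisor(location, 1);
glEnableVertexAttribArray(location);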
A question about coding style:
When you're going to reconstruct a virtual scene containing plenty of objects (using JOGL), is it always good to define a Vec3f class and a Face class representing the vertices, normals, and faces, rather than directly using the float[] type? Any ideas?
Many people go a step further and create a Vertex POD object of the type:
struct Vertex {
    vec4 position;
    vec4 normal;
    vec2 texture;
};
Then the stride is simply sizeof(Vertex), and the offsets can be extracted using the offsetof macro. This leads to a more robust setup when passing the data.
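For instance, a sketch of the resulting setup (attribute locations 0-2 are assumed):

glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, position));
glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, normal));
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, texture));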
Hey.
I'm new to OpenGL ES, but I've had my share of experience with normal OpenGL.
I've been told that using interleaved arrays for the vertex buffers is a lot faster because it avoids cache misses.
I've developed a vertex format that I will use, which looks like this:
struct SVertex
{
float x,y,z;
float nx,ny,nz;
float tx,ty,tz;
float bx,by,bz;
float tu1,tv1;
float tu2,tv2;
};
Then I used "glVertexAttribPointer(index,3,GL_FLOAT,GL_FALSE,stride,v);" to point to the vertex array. The index is that of the attribute I want to use, and everything else is OK except the stride. It worked before I decided to add this into the equation. I passed the stride both as sizeof(SVertex) and as 13*4, but neither of them seemed to work.
If it has any importance, I draw the primitives like this: glDrawElements(GL_TRIANGLES,surface->GetIndexCount()/3,GL_UNSIGNED_INT,surface->IndPtr()).
In the OpenGL specs it's written that the stride should be the size in bytes from the end of the attribute (in this case z) to the next attribute of the same kind (in this case x). So by my calculations this should be 13 (nx,ny,nz,tx,ty,...,tu2,tv2) times 4 (the size of a float).
Oh, and one more thing: the display is just empty.
Could anyone please help me with this?
Thanks a lot.
If you have a structure like this, then the stride is just sizeof(SVertex), and it's the same for every attribute. There's nothing complicated here.
If this didn't work, look for your error somewhere else.
For example here:
surface->GetIndexCount()/3
This parameter should be the number of indices to be sent, not the number of primitives - hence I'd say that the division by three is wrong. Leave it as:
surface->GetIndexCount()
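Putting both points together, a sketch of the corrected setup (the attribute indices are illustrative):

// the stride is the same sizeof(SVertex) for every attribute
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(SVertex), (void*)offsetof(SVertex, x));
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(SVertex), (void*)offsetof(SVertex, nx));
glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, sizeof(SVertex), (void*)offsetof(SVertex, tx));
glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, sizeof(SVertex), (void*)offsetof(SVertex, bx));
glVertexAttribPointer(4, 2, GL_FLOAT, GL_FALSE, sizeof(SVertex), (void*)offsetof(SVertex, tu1));
glVertexAttribPointer(5, 2, GL_FLOAT, GL_FALSE, sizeof(SVertex), (void*)offsetof(SVertex, tu2));
// count = number of indices, not primitives
glDrawElements(GL_TRIANGLES, surface->GetIndexCount(), GL_UNSIGNED_INT, surface->IndPtr());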
Then I used "glVertexAttribPointer(index,3,GL_FLOAT,GL_FALSE,stride,v);" to point to the vertex array. The index is that of the attribute I want to use, and everything else is OK except the stride
This does not work for the texcoords (you have 2x 2 floats, or 1x 4 floats, not 3).
About the stride, like Kos said, I think you should pass a stride of 16 * sizeof(float) (the size of your SVertex).
Also, another thing worth mentioning: you say you want to optimize for performance. Why don't you compress your vertex to the max and suppress redundant values? This would save a lot of bandwidth.
x, y, z are OK, but nx and ny are self-sufficient if your normals are normalized (which may be the case): you can reconstruct nz in the vertex shader (assuming you have shader capabilities). The same thing applies to tx and ty. You don't need bx, by, bz at all, since you know the bitangent is the cross product of the normal and the tangent.
struct SPackedVertex
{
    float x,y,z,w;         // pack position on a vector4
    float nx,ny,tx,ty;     // normal and tangent, 2 components each
    float tu1,tv1,tu2,tv2;
};
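And a sketch of the shader-side reconstruction in GLSL (nx, ny, tx, ty stand for the corresponding attribute components; this assumes the dropped z components are positive, otherwise their signs need to be stashed somewhere, e.g. in w):

vec3 normal    = vec3(nx, ny, sqrt(max(0.0, 1.0 - nx*nx - ny*ny)));
vec3 tangent   = vec3(tx, ty, sqrt(max(0.0, 1.0 - tx*tx - ty*ty)));
vec3 bitangent = cross(normal, tangent); // bx, by, bz never need to be stored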