OpenGL - instanced attributes - C++

I use oglplus - it's a C++ wrapper for OpenGL.
I have a problem with defining instanced data for my particle renderer - positions work fine, but something goes wrong when I want to use a bunch of ints from the same VBO as an instanced attribute.
I am going to skip some implementation details to keep the problem simple. Assume that I bind the VAO and VBO before the operations described below.
I have an array of structs (called "Particle") that I upload like this:
glBufferData(GL_ARRAY_BUFFER, sizeof(Particle) * numInstances, newData, GL_DYNAMIC_DRAW);
Definition of the struct:
struct Particle
{
float3 position;
//some more attributes, 9 floats in total
//(...)
int fluidID;
};
I use a helper function to define the OpenGL attributes like this:
void addInstancedAttrib(const InstancedAttribDescriptor& attribDesc, GLSLProgram& program, int offset=0)
{
//binding and some implementation details
//(...)
oglplus::VertexArrayAttrib attrib(program, attribDesc.getName().c_str());
attrib.Pointer(attribDesc.getPerVertVals(), attribDesc.getType(), false, sizeof(Particle), (void*)offset);
attrib.Divisor(1);
attrib.Enable();
}
I add attributes for positions and fluid IDs like this:
InstancedAttribDescriptor posDesc(3, "InstanceTranslation", oglplus::DataType::Float);
this->instancedData.addInstancedAttrib(posDesc, this->program);
InstancedAttribDescriptor fluidDesc(1, "FluidID", oglplus::DataType::Int);
this->instancedData.addInstancedAttrib(fluidDesc, this->program, (int)offsetof(Particle,fluidID));
Vertex shader code:
uniform vec3 FluidColors[2];
in vec3 InstanceTranslation;
in vec3 VertexPosition;
in vec3 n;
in int FluidID;
out float lightIntensity;
out vec3 sphereColor;
void main()
{
//some typical MVP transformations
//(...)
sphereColor = FluidColors[FluidID];
gl_Position = projection * vertexPosEye;
}
This code as a whole produces this output:
As you can see, the particles are arranged the way I wanted them to be, which means that the "InstanceTranslation" attribute is set up correctly. The group of particles on the left has a FluidID value of 0 and the ones on the right a value of 1. The second group of particles has proper positions but indexes improperly into the FluidColors array.
What I know:
It's not a problem with the way I set up the FluidColors uniform. If I hard-code the color selection in the shader like this:
sphereColor = FluidID == 0 ? FluidColors[0] : FluidColors[1];
I get:
OpenGL returns GL_NO_ERROR from glGetError, so there's no problem with the enums/values I provide.
It's not a problem with the offsetof macro. I tried hard-coded values and they didn't work either.
It's not a compatibility issue with GLint; I use plain 32-bit ints (checked with sizeof(int)).
I need to use FluidID as an instanced attribute that indexes into the color array. Otherwise, if I were to set the color for a particle group as a simple vec3 uniform, I'd first have to batch particles of the same type (same FluidID) together, which means sorting them, and that would be too costly an operation.

To me, this looks like an issue with how you set up the fluidID attribute pointer. Since you use the type int in the shader, you must use glVertexAttribIPointer() to set up the attribute pointer. Attributes set up with the regular glVertexAttribPointer() function only work for float-based attribute types. They accept integer input, but the data is converted to float when the shader accesses it.
In oglplus, you apparently have to use VertexArrayAttrib::IPointer() instead of VertexArrayAttrib::Pointer() if you want to work with integer attributes.
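For reference, a minimal sketch of the fix in raw OpenGL (which the oglplus IPointer call wraps) might look like this; programId stands for the GL name of the linked program and is a placeholder, not part of the original code:
// Look up the attribute and set it up with the integer variant of the pointer call.
GLint fluidIdLoc = glGetAttribLocation(programId, "FluidID");

// glVertexAttribIPointer keeps the data as integers; the plain Pointer variant
// would have converted the values to float, which is what breaks the indexing.
glVertexAttribIPointer(fluidIdLoc, 1, GL_INT, sizeof(Particle),
                       (void*)offsetof(Particle, fluidID));
glVertexAttribDivisor(fluidIdLoc, 1);   // one value per instance
glEnableVertexAttribArray(fluidIdLoc);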

Related

How to generate geometry to link neighbour nodes in a geometry shader with OpenCL/GL interop?

I am working on a 3D mesh I am storing in an array: each element of the array is a 4D point in homogeneous coordinates (x, y, z, w). I use OpenCL to do some calculations on these data, which later I want to visualise, therefore I set up an OpenCL/GL interop context. I have created a shared buffer between OpenCL and OpenGL by using the clCreateFromGLBuffer function on a GL_ARRAY_BUFFER:
...
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, size_of_data, data, GL_DYNAMIC_DRAW);
vbo_buff = clCreateFromGLBuffer(ctx, CL_MEM_READ_WRITE, vbo, &err);
...
In the vertex shader, I access data this way:
layout (location = 0) in vec4 data;
out VS_OUT
{
vec4 vertex;
} vs_out;
void main(void)
{
vs_out.vertex = data;
}
Then in the geometry shader I do something like this:
layout (points) in;
layout (triangle_strip, max_vertices = MAX_VERT) out;
in VS_OUT
{
vec4 vertex;
} gs_in[];
void main()
{
gl_Position = gs_in[0].vertex;
EmitVertex();
...etc...
}
This gives me the ability to generate geometry based on the position of each point stored in the data array.
This way, the geometry I can generate is based only on the current point being processed by the geometry shader: e.g. I am able to construct a small cube (voxel) around each point.
Now I would like to be able to access the position of other points in the data array within the geometry shader: e.g. I would like to retrieve the coordinates of another point (indexed by another shared buffer of arbitrary length) besides the one currently being processed, in order to draw a line connecting them.
The problem is that in the geometry shader gs_in[0].vertex gives me the position of each point, but I don't know which one it is at the time (which index?). Moreover, I don't know how to access the positions of other points besides that one at the same time.
In hypothetical pseudo-code I would like to be able to do something like this:
point_A = gs_in[0].vertex[index_A];
point_B = gs_in[0].vertex[index_B];
draw_line_between_A_and_B(point_A, point_B);
It is not clear to me whether this is possible or not, or how to achieve it within a geometry shader. I would like to stick to this approach because the calculations I do in the OpenCL kernels implement a cellular automaton, hence it is convenient for me to organise my code (neutrino) in terms of central nodes and related neighbours.
All suggestions are welcome.
Thanks.
but I don't know which one it is at the time (which index?)
See gl_PrimitiveIDIn
I don't know how to access the positions of other points besides that one at the same time.
You can bind the same source buffer twice: as a vertex source and as a GL_TEXTURE_BUFFER. If your OpenGL implementation supports it, you'll then be able to read from it there.
Unlike Direct3D, in GL support for this feature is optional; the spec says GL_MAX_GEOMETRY_SHADER_STORAGE_BLOCKS can be zero.
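As a hedged sketch, assuming buffer textures are supported, exposing the same vbo to the geometry shader could look like this (the texture unit and the samplerBuffer uniform name are placeholders):
GLuint points_tex;
glGenTextures(1, &points_tex);
glBindTexture(GL_TEXTURE_BUFFER, points_tex);
glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32F, vbo);  // one RGBA32F texel per (x, y, z, w) point

glActiveTexture(GL_TEXTURE0);                     // texture unit 0 is an arbitrary choice
glBindTexture(GL_TEXTURE_BUFFER, points_tex);
// Geometry shader side (sketch): declare "uniform samplerBuffer points;" and read
// any point by index with "vec4 p = texelFetch(points, some_index);".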
"This gives me the ability of generating geometry based on the position of each point the stored in the data array."
No it does not. The input to the geometry shader are not all the vertex attributes in the buffer. Let me quote the Geometry Shader wiki page:
Geometry shaders take a primitive as input; each primitive is composed of some number of vertices, as defined by the input primitive type in the shader.
Primitives are single points, line primitives or triangle primitives. For instance, if the primitive type is GL_POINTS, then the size of the input array is 1 and you can only access the vertex coordinates of that point; if the primitive type is GL_TRIANGLES, then the size of the input array is 3 and you can only access the 3 vertex attributes (respectively corners) which form the triangle.
If you want to access more data, then you have to use a Shader Storage Buffer Object (or a texture).
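For the SSBO route, a minimal sketch under the assumption that storage binding point 0 is free could be:
// Expose the same data buffer to the geometry shader as a shader storage buffer,
// so it can index into the full point array. Binding index 0 must match the block
// declaration in the shader, e.g.
//   layout(std430, binding = 0) buffer Points { vec4 all_points[]; };
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, vbo);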

Using OpenGL with Eigen for storing vertex data and glVertexAttribPointer

I am trying to use glVertexAttribPointer with a structure of Eigen objects, similar to this:
struct Vertex {
Eigen::Vector3f position;
Eigen::Vector3f normal;
};
The problem is setting the offset for glVertexAttribPointer. Since there is no public access to the m_data member that Eigen uses to store the data internally, the offsetof macro cannot be used directly.
It seems like there is no nice way to do this. My current approach is something like:
(void*)((char*)vertices[0].position.data() - (char*)(&vertices[0]))
where vertices is a std::vector<Vertex>.
This is by no means nice (especially in modern C++). I doubt there is a nice solution, but what would be a safer way of doing this, or at least how can this operation be isolated as much as possible, so that I don't have to write it for every call to glVertexAttribPointer?
The Eigen documentation guarantees that the layout of an Eigen::Vector3f = Eigen::Matrix<float,3,1> is as follows:
struct {
float data[Rows*Cols]; // with (size_t(data)%A(Rows*Cols*sizeof(T)))==0
};
In other words, the float[3] is at offset 0 of the Eigen::Vector3f structure. You are allowed to pass offsets of position and normal as-is (offsetof(Vertex, position) and offsetof(Vertex, normal)) to your glVertexAttrib calls for the offsets, and sizeof(Eigen::Vector3f) for the sizes.
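Based on that guarantee, a minimal sketch of the two attribute calls could look like this (the attribute indices 0 and 1 are assumptions, not taken from the question):
// offsetof is only formally guaranteed for standard-layout types, but with the
// documented Eigen layout it lands on the start of the float[3] in each member.
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                      (void*)offsetof(Vertex, position));
glEnableVertexAttribArray(0);

glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                      (void*)offsetof(Vertex, normal));
glEnableVertexAttribArray(1);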

glUniformLocation of arrays

I am trying to save the uniform locations of an array, to a std::map in my Shader class.
My vertex shader has a uniform array of mat4:
uniform mat4 bone_matrices[32];
In the following code the bone matrices name shows up as bone_matrices[0], so I am only getting the location of the first array member. [I think the others are sequential.]
class Shader
{
...
private:
GLuint mi_program;
std::map<std::string, GLint> m_maplocations;
};
void Shader::mapLocations()
{
GLint numUniforms = 0;
const GLenum properties[5] = {GL_BLOCK_INDEX, GL_TYPE, GL_NAME_LENGTH, GL_LOCATION, GL_ARRAY_SIZE};
glGetProgramInterfaceiv(mi_program, GL_UNIFORM, GL_ACTIVE_RESOURCES, &numUniforms);
for(int unif = 0; unif < numUniforms; ++unif)
{
GLint values[5];
glGetProgramResourceiv(mi_program, GL_UNIFORM, unif, 5, properties, 5, NULL, values);
// Skip any uniforms that are in a block.
if(values[0] != -1)
continue;
std::string nameData;
nameData.resize(values[2]);
if(values[4] > 1)
{
// **have an array here**
}
glGetProgramResourceName(mi_program, GL_UNIFORM, unif, nameData.size(), NULL, &nameData[0]);
std::string name(nameData.begin(), nameData.end() - 1);
m_maplocations.insert(std::pair<std::string, GLint>(name, values[3]));
}
}
How can I iterate the bone_matrices array, get their names:
bone_matrices[0], bone_matrices[1],
...
and locations.
Thanks..
How can I iterate the bone_matrices array, get their names: bone_matrices[0], bone_matrices[1], ... and locations.
There is one uniform: bone_matrices. This uniform is an array of basic types, so it is considered a single resource. Each array element has a location, but there is still only one uniform.
If you want to test whether a uniform is arrayed, get the size of the uniform using the GL_ARRAY_SIZE property. If this value is greater than 1, you can then query the location of each array element in a loop.
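As a sketch, something along these lines could replace the "// have an array here" placeholder, provided it runs after the glGetProgramResourceName call so that name is available (it reuses values[4], mi_program and m_maplocations from the question's code):
if(values[4] > 1) // GL_ARRAY_SIZE > 1: the uniform is an array
{
    // The reported resource name is "bone_matrices[0]"; strip the "[0]" and
    // query each element's location by its indexed name.
    std::string baseName = name.substr(0, name.find('['));
    for(GLint i = 0; i < values[4]; ++i)
    {
        std::string elementName = baseName + "[" + std::to_string(i) + "]";
        GLint loc = glGetUniformLocation(mi_program, elementName.c_str());
        m_maplocations.insert(std::make_pair(elementName, loc));
    }
}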
All that being said:
I am trying to save the uniform locations of an array, to a std::map in my Shader class.
Please don't do this. First, your std::map-based code will not be faster than glGetUniformLocation or glGetProgramResourceLocation. So if you're going to constantly query uniforms by string name, you may as well just use the existing API.
Second, since you're using program resources, a GL 4.3 feature, I have to assume you have access to explicit uniform locations. You should therefore assign uniforms to specific locations in your shader, then you don't have to query anything at all.
Even if you're writing a program that has to work with whatever the user gives you, it's still better for them to provide specific locations.
What I mean is this. Let's say you have an expectation that the user will provide an array of matrices called bone_matrices. So change your expectation to be that the user will provide an array of matrices at some particular uniform location X.
Yes, this means you have to partition out your uniform space. But it's a lot better than running through each shader to query its data. You don't have to have a map from a string to a location; you simply have the location where it expects it to be.

OpenGL Bindless Textures: Bind to uniform sampler2D array

I am looking into using bindless textures to rapidly display a series of images. My reference is the OpenGL 4.5 redbook. The book says I can sample bindless textures in a shader with this fragment shader:
#version 450 core
#extension GL_ARB_bindless_texture : require
in FS_INPUTS {
vec2 i_texcoord;
flat int i_texindex;
};
layout (binding = 0) uniform ALL_TEXTURES {
sampler2D fs_textures[200];
};
out vec4 color;
void main(void) {
color = texture(fs_textures[i_texindex], i_texcoord);
};
I created a vertex shader that looks like this:
#version 450 core
in vec2 vert;
in vec2 texcoord;
uniform int texindex;
out FS_INPUTS {
vec2 i_texcoord;
flat int i_texindex;
} tex_data;
void main(void) {
tex_data.i_texcoord = texcoord;
tex_data.i_texindex = texindex;
gl_Position = vec4(vert.x, vert.y, 0.0, 1.0);
};
As you may notice, my grasp of what's going on is a little weak.
In my OpenGL code, I create a bunch of textures, get their handles, and make them resident. The function I am using to get the texture handles is 'glGetTextureHandleARB'. There is another function that could be used instead, 'glGetTextureSamplerHandleARB' where I can pass in a sampler location. Here is what I did:
Texture* textures = new Texture[load_limit];
GLuint64* tex_handles = new GLuint64[load_limit];
for (int i=0; i<load_limit; ++i)
{
textures[i].bind();
textures[i].data(new CvImageFile(image_names[i]));
tex_handles[i] = glGetTextureHandleARB(textures[i].id());
glMakeTextureHandleResidentARB(tex_handles[i]);
textures[i].unbind();
}
My question is how do I bind my texture handles to the ALL_TEXTURES uniform attribute of the fragment shader? Also, what should I use to update the vertex attribute 'texindex' - an actual index into my texture handle array or a texture handle?
It's bindless texturing. You do not "bind" such textures to anything.
In bindless texturing, the data value of a sampler is a number. Specifically, the number returned by glGetTextureHandleARB. Texture handles are 64-bit unsigned integers.
In a shader, values of sampler types in buffer-backed interface blocks (UBOs and SSBOs) are 64-bit unsigned integers. So an array of samplers is equivalent in structure to an array of 64-bit unsigned integers.
So in C++, a struct equivalent to your ALL_TEXTURES block would be:
struct AllTextures
{
GLuint64 textures[200];
};
Well, assuming you properly use std140 layout, of course. Otherwise, you'd have to query the layout of the structure.
At this point, you treat the buffer as no different from any other UBO usage. Build the data for the shader by sticking an AllTextures into a buffer object, then bind that buffer as a UBO to binding 0. You just need to fill the array in with the actual texture handles.
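A hedged sketch of that upload, reusing tex_handles and load_limit from the question (and assuming load_limit does not exceed 200):
AllTextures all_textures = {};
for (int i = 0; i < load_limit; ++i)
    all_textures.textures[i] = tex_handles[i];

GLuint ubo;
glGenBuffers(1, &ubo);
glBindBuffer(GL_UNIFORM_BUFFER, ubo);
glBufferData(GL_UNIFORM_BUFFER, sizeof(AllTextures), &all_textures, GL_STATIC_DRAW);

// Bind to UBO binding point 0, matching "layout (binding = 0)" in the fragment shader.
glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo);
// If the implementation pads the sampler array differently from this tightly packed
// struct, query the actual offsets/stride with glGetProgramResourceiv instead.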
Also, what should I use to update the vertex attribute 'texindex' - an actual index into my texture handle array or a texture handle?
Well, neither one will work. Not the way you've written it.
See, ARB_bindless_texture does not allow you to access any texture you want in any way at any time from any shader invocation. Unless you are using NV_gpu_shader5, the code leading to the texture access must be based on dynamically uniform expressions.
So unless every vertex in your rendering command gets the same index or handle... you cannot use them to pick which texture to use. Even instancing will not save you, since dynamically uniform expressions don't care about instancing.
If you want to render a bunch of quads without having to change uniforms between them (and without having to rely on an NVIDIA extension), then you have a few options. Most hardware that supports bindless texture also supports ARB_shader_draw_parameters. This gives you access to gl_DrawID, which represents the current index of a rendering command within a glMultiDraw-style command. And that extension explicitly declares that gl_DrawID is dynamically uniform.
So you could use that to select which texture to render. You simply need to issue a multi-draw command where you render the same mesh data over and over, but it gets a different gl_DrawID index in each case.
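A sketch of such a multi-draw, under the assumption that the quad is the same 4-vertex GL_TRIANGLE_STRIP each time and num_quads is the number of images to show (requires <vector>):
// One glMultiDrawArrays call: every sub-draw uses the same 4 vertices, but
// gl_DrawID counts 0, 1, 2, ... across the sub-draws, so the vertex shader can
// forward it (instead of the texindex uniform) to index into fs_textures.
std::vector<GLint>   firsts(num_quads, 0);
std::vector<GLsizei> counts(num_quads, 4);
glMultiDrawArrays(GL_TRIANGLE_STRIP, firsts.data(), counts.data(), num_quads);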

How are attributes passed to vertex shader in GLSL?

I want to pass a single float or unsigned int type variable to the vertex shader, but you can only pass a vec or struct as an attribute variable. So I used a vec2 type attribute variable and later used it to access the content.
glBindAttribLocation(program, 0, "Bid");
glEnableVertexAttribArray(0);
glVertexAttribIPointer(0, 1, GL_UNSIGNED_INT, sizeof(strideStructure), (const GLvoid*)0);
The vertex shader contains this code:
attribute ivec2 Bid;
void main()
{
int x = Bid.x;
int y = Bid.y;
}
So, when I pass a value each time, doesn't the value get stored in the x-component of the vec2 Bid? In the second run of the loop, will the passed data be stored in the x-component of a different vector attribute? Also, if I change the size parameter to 2, for example, in what order would the data be stored in the vector attribute?
You can use scalar types for attributes. From the GLSL 1.50 spec (which corresponds to OpenGL 3.2):
Vertex shader inputs can only be float, floating-point vectors, matrices, signed and unsigned integers and integer vectors. Vertex shader inputs can also form arrays of these types, but not structures.
No matter if you use vector or scalar values, the types have to match. In your example, you're specifying GL_UNSIGNED_INT as the attribute type, but the type in the shader is ivec2, which is a signed value. It should be uvec2 to match the specified attribute type.
Yes, if you declare the type in the shader as uvec2 Bid, but pass only one value, that value will be in Bid.x. Bid.y will be 0. If you pass two values per vertex, the first one will be in Bid.x, and the second one in Bid.y.
You sound a little unclear about how vertex shaders are invoked, particularly when you talk about "run of the loop". There is no looping here. The vertex shader is invoked once for each vertex, and the corresponding attribute values for this specific vertex will be passed in the attribute variables. They will be in the same attribute variables, and in the same place within these variables, for each vertex.
I guess you can picture a "loop" in the sense of the vertex shader being invoked for each vertex. In reality, a lot of processing on the GPU will happen in parallel. This means that the vertex shader for a bunch of vertices will be invoked at the same time. Each one has its own variable instances, so they will not get in each others way, and the attributes for each one will be passed in exactly the same way.
An additional note on your code. You need to be careful with this call:
glBindAttribLocation(program, 0, "Bid");
glBindAttribLocation() needs to be called before linking the shader program. Otherwise it will have no effect.
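A minimal sketch of the required ordering, with placeholder shader objects vs and fs that are assumed to be compiled already:
GLuint program = glCreateProgram();
glAttachShader(program, vs);
glAttachShader(program, fs);

glBindAttribLocation(program, 0, "Bid");  // must happen before the link to take effect
glLinkProgram(program);

// Attribute setup as in the question, with GL_UNSIGNED_INT matching a uvec2/uint input:
glEnableVertexAttribArray(0);
glVertexAttribIPointer(0, 1, GL_UNSIGNED_INT, sizeof(strideStructure), (const GLvoid*)0);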