Is it always good to use Vec3f / Vec4f class defined by yourself? - opengl

A question about coding style:
When you're going to reconstruct a virtural scene containing plenty of objects (Using JOGL), is it always good to define a Vec3f class and face class representing the vertices, normals, and faces rather than to directly use float[] type? Any ideas?

Many people go a step further and create a Vertex POD object of the type:
struct Vertex {
    vec4 position;
    vec4 normal;
    vec2 texture;
};
Then the stride is simply sizeof(Vertex), and the offsets can be extracted using the offsetof macro. This leads to a more robust setup when passing the data.
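For instance, a minimal sketch of the matching attribute setup (the locations 0..2 and the already-bound VBO are illustrative assumptions, not part of the answer above; offsetof comes from <cstddef>):
// Interleaved attribute setup for the Vertex struct above.
// Assumes a VBO holding an array of Vertex is bound to GL_ARRAY_BUFFER
// and the shader declares its inputs at locations 0, 1 and 2.
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, position));
glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, normal));
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, texture));
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glEnableVertexAttribArray(2);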

Related

How to generate geometry to link neighbour nodes in a geometry shader with OpenCL/GL interop?

I am working on a 3D mesh that I store in an array: each element of the array is a 4D point in homogeneous coordinates (x, y, z, w). I use OpenCL to do some calculations on these data, which I later want to visualise, so I set up an OpenCL/GL interop context. I have created a shared buffer between OpenCL and OpenGL by using the clCreateFromGLBuffer function on a GL_ARRAY_BUFFER:
...
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, size_of_data, data, GL_DYNAMIC_DRAW);
vbo_buff = clCreateFromGLBuffer(ctx, CL_MEM_READ_WRITE, vbo, &err);
...
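Around each kernel launch I acquire and release the shared buffer, roughly like this (a sketch rather than my exact code; it assumes a command queue queue, a kernel kernel and a work size global_size already exist):
glFinish();                                      // let GL finish with the buffer first
clEnqueueAcquireGLObjects(queue, 1, &vbo_buff, 0, NULL, NULL);
clSetKernelArg(kernel, 0, sizeof(cl_mem), &vbo_buff);
clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global_size, NULL, 0, NULL, NULL);
clEnqueueReleaseGLObjects(queue, 1, &vbo_buff, 0, NULL, NULL);
clFinish(queue);                                 // let CL finish before GL draws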
In the vertex shader, I access data this way:
layout (location = 0) in vec4 data;

out VS_OUT
{
    vec4 vertex;
} vs_out;

void main(void)
{
    vs_out.vertex = data;
}
Then in the geometry shader I do something like this:
layout (points) in;
layout (triangle_strip, max_vertices = MAX_VERT) out;

in VS_OUT
{
    vec4 vertex;
} gs_in[];

void main()
{
    gl_Position = gs_in[0].vertex;
    EmitVertex();
    ...etc...
}
This gives me the ability to generate geometry based on the position of each point stored in the data array.
This way, the geometry I can generate is based only on the current point being processed by the geometry shader: e.g. I am able to construct a small cube (voxel) around each point.
Now I would like to be able to access the positions of other points in the data array within the geometry shader: e.g. I would like to be able to retrieve the coordinates of another point (indexed by another shared buffer of arbitrary length) besides the one currently being processed, in order to draw a line connecting them.
The problem is that in the geometry shader gs_in[0].vertex gives me the position of each point, but I don't know which point it is (which index?). Moreover, I don't know how to access the positions of other points besides that one at the same time.
In hypothetical pseudo-code, I would like to be able to do something like this:
point_A = gs_in[0].vertex[index_A];
point_B = gs_in[0].vertex[index_B];
draw_line_between_A_and_B(point_A, point_B);
It is not clear to me whether this is possible, or how to achieve it within a geometry shader. I would like to stick to this approach because the calculations I do in the OpenCL kernels implement a cellular automaton, hence it is convenient to organise my code (neutrino) in terms of central nodes and related neighbours.
All suggestions are welcome.
Thanks.
but I don't know which point it is (which index?)
See gl_PrimitiveIDIn
I don't know how to access the positions of other points besides that one at the same time.
You can bind the same source buffer twice: as a vertex source and as a shader storage buffer (the GL_TEXTURE_BUFFER route works too). If your OpenGL implementation supports it, you'll then be able to read arbitrary elements from it in the geometry shader.
Note that, unlike in Direct3D, support for storage blocks in geometry shaders is optional in GL: the spec allows GL_MAX_GEOMETRY_SHADER_STORAGE_BLOCKS to be zero.
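A minimal sketch of that second binding (the binding index 0 is an arbitrary illustrative choice):
// Expose the same VBO to the geometry shader a second time,
// as a shader storage buffer at binding point 0.
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, vbo);
Inside the shader, gl_PrimitiveIDIn then tells you which point is currently being processed, and the storage block gives you every other point by index.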
"This gives me the ability of generating geometry based on the position of each point the stored in the data array."
No it does not. The input to the geometry shader are not all the vertex attributes in the buffer. Let me quote the Geometry Shader wiki page:
Geometry shaders take a primitive as input; each primitive is composed of some number of vertices, as defined by the input primitive type in the shader.
Primitives are a single point, line primitive or triangle primitive. For instance, If the primitive type is GL_POINTS, then the size of the input array is 1 and you can only access the vertex coordinate of the point, or if the primitive type is GL_TRIANGLES, the the size of the input array is 3 and you can only access the 3 vertex attributes (respectively corners) which form the triangle.
If you want to access more data, the you have to use a Shader Storage Buffer Object (or a texture).
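For example, a sketch of how such a storage block could look in the geometry shader (the std430 layout, binding 0, and the index_A/index_B names are illustrative assumptions):
layout (std430, binding = 0) buffer PointBuffer
{
    vec4 points[];   // the same vec4 data the VBO holds
};

...

vec4 point_A = points[index_A];   // any point in the mesh, by index
vec4 point_B = points[index_B];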

Using OpenGL with Eigen for storing vertex data and glVertexAttribPointer

I am trying to use glVertexAttribPointer with a structure of Eigen objects, similar to this:
struct Vertex {
    Eigen::Vector3f position;
    Eigen::Vector3f normal;
};
The problem is setting the offset parameter of glVertexAttribPointer. Since there is no public access to the m_data member Eigen uses to store the data internally, offsetof cannot be used on it.
It seems like there is no nice way to do this. My current approach is something like:
(void*)((char*)vertices[0].position.data() - (char*)(&vertices[0]))
, where vertices is a std::vector<Vertex>.
This is by no means nice (especially in modern C++). I doubt there is a nice solution, but what would be a safer way of doing this, or at least how can this operation be isolated as much as possible, so that I don't have to write it for every call to glVertexAttribPointer?
The Eigen documentation guarantees that the layout of an Eigen::Vector3f = Eigen::Matrix<float,3,1> is as follows:
struct {
    float data[Rows*Cols]; // with (size_t(data)%A(Rows*Cols*sizeof(T)))==0
};
In other words, the float[3] is at offset 0 of the Eigen::Vector3f structure. You can therefore pass the offsets of position and normal as-is (offsetof(Vertex, position) and offsetof(Vertex, normal)) to your glVertexAttrib* calls, and sizeof(Eigen::Vector3f) for the sizes.
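For example (a sketch relying on that documented layout guarantee; note that offsetof on a type with non-trivial members like Eigen::Vector3f is only conditionally supported by the C++ standard, although common compilers accept it):
// offsetof from <cstddef>; valid here because the float[3] sits at
// offset 0 of each Eigen::Vector3f member.
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, position));
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, normal));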

Using Unions vs Multiple structs

I am quite new to C++, and sometimes I am not sure which way is better for performance/memory. My problem is that I need a struct with exactly two pointers: one to a vec3 (3 floats), and one to either a vec3 or a vec2.
Right now I am trying to decide whether to use:
- a union with two constructors, one for vec3 and one for vec2
- two separate structs, one containing a vec2 and one a vec3
struct vec3
{
    float x, y, z;
};

struct vec2
{
    float x, y;
};

struct Vertex
{
    template <typename F>
    Vertex(vec3 *Vertices, F Frag)
        : m_fragment(Frag), m_vertices(Vertices)
    {}

    union Fragment
    {
        Fragment(vec3 *Colors)
            : colors(Colors)
        {}
        Fragment(vec2 *Texcoords)
            : texcoords(Texcoords)
        {}

        vec3 *colors;
        vec2 *texcoords;
    } m_fragment;

    vec3 *m_vertices;
};
This code works well, but I am quite worried about performance, as I intend to use the Vertex struct very often; my program might have thousands of instances of it.
If every Vertex can have either colors or texcoords, but never both, then a union (or better yet, a std::variant<vec3, vec2>) makes sense.
If a Vertex can have both colors and texcoords, then a union won't work, since only one member of a union can be active at a time.
As for performance, profile, profile, profile. Build your interface in such a way that the choice of union or separate members is invisible to the caller. Then implement it both ways and test to see which performs better (or if there's a perceptible difference at all).
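For illustration, a minimal sketch of the std::variant route mentioned above (C++17; storing the values rather than pointers here is my assumption, not part of the question):
#include <variant>

struct vec3 { float x, y, z; };
struct vec2 { float x, y; };

struct Vertex
{
    vec3 position;                      // always present
    std::variant<vec3, vec2> fragment;  // either colors (vec3) or texcoords (vec2)
};

int main()
{
    Vertex v{{0.f, 0.f, 0.f}, vec2{0.5f, 0.5f}};
    // The variant itself tracks which alternative is active:
    if (auto* tc = std::get_if<vec2>(&v.fragment))
    {
        // this vertex carries texture coordinates: tc->x, tc->y
    }
}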

Opengl - instanced attributes

I use oglplus - it's a C++ wrapper for OpenGL.
I have a problem with defining instanced data for my particle renderer - positions work fine, but something goes wrong when I want to instance a bunch of ints from the same VBO.
I am going to skip some of the implementation details so as not to make this problem more complicated. Assume that I bind the VAO and VBO before the described operations.
I have an array of structs (called "Particle") that I upload like this:
glBufferData(GL_ARRAY_BUFFER, sizeof(Particle) * numInstances, newData, GL_DYNAMIC_DRAW);
Definition of the struct:
struct Particle
{
    float3 position;
    //some more attributes, 9 floats in total
    //(...)
    int fluidID;
};
I use a helper function to define the OpenGL attributes like this:
void addInstancedAttrib(const InstancedAttribDescriptor& attribDesc, GLSLProgram& program, int offset = 0)
{
    //binding and some implementation details
    //(...)
    oglplus::VertexArrayAttrib attrib(program, attribDesc.getName().c_str());
    attrib.Pointer(attribDesc.getPerVertVals(), attribDesc.getType(), false, sizeof(Particle), (void*)offset);
    attrib.Divisor(1);
    attrib.Enable();
}
I add the attributes for positions and fluid IDs like this:
InstancedAttribDescriptor posDesc(3, "InstanceTranslation", oglplus::DataType::Float);
this->instancedData.addInstancedAttrib(posDesc, this->program);
InstancedAttribDescriptor fluidDesc(1, "FluidID", oglplus::DataType::Int);
this->instancedData.addInstancedAttrib(fluidDesc, this->program, (int)offsetof(Particle,fluidID));
Vertex shader code:
uniform vec3 FluidColors[2];

in vec3 InstanceTranslation;
in vec3 VertexPosition;
in vec3 n;
in int FluidID;

out float lightIntensity;
out vec3 sphereColor;

void main()
{
    //some typical MVP transformations
    //(...)
    sphereColor = FluidColors[FluidID];
    gl_Position = projection * vertexPosEye;
}
This code as a whole produces the following output: the particles are arranged in the way I wanted them to be, which means the "InstanceTranslation" attribute is set up correctly. The group of particles on the left has a FluidID value of 0 and the group on the right a value of 1. The second group has proper positions but indexes improperly into the FluidColors array.
What I know:
It's not a problem with the way I set up the FluidColors uniform. If I hard-code the color selection in the shader like this:
sphereColor = FluidID == 0 ? FluidColors[0] : FluidColors[1];
I get the output I expect (screenshot omitted).
OpenGL returns GL_NO_ERROR from glGetError, so there's no problem with the enums/values I provide.
It's not a problem with the offsetof macro. I tried using hard-coded values and they didn't work either.
It's not a compatibility issue with GLint; I use plain 32-bit ints (checked with sizeof(int)).
I need to use FluidID as an instanced attribute that indexes into the color array because otherwise, if I were to set the color for a particle group as a simple vec3 uniform, I'd first have to batch particles of the same type (with the same FluidID) together, which means sorting them, and that would be too costly an operation.
To me, this seems to be an issue with how you set up the fluidID attribute pointer. Since you use the type int in the shader, you must use glVertexAttribIPointer() to set up the attribute pointer. Attributes set up with the normal glVertexAttribPointer() function work only for float-based attribute types. They accept integer input, but the data will be converted to float when the shader accesses it.
In oglplus, you apparently have to use VertexArrayAttrib::IPointer() instead of VertexArrayAttrib::Pointer() if you want to work with integer attributes.
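In raw GL terms, the fix would look roughly like this (fluidIdLoc and program are illustrative names for the queried attribute location and the raw program handle):
// Note: glVertexAttribIPointer has no "normalized" parameter;
// the I variant keeps the data integral instead of converting to float.
GLint fluidIdLoc = glGetAttribLocation(program, "FluidID");
glVertexAttribIPointer(fluidIdLoc, 1, GL_INT, sizeof(Particle), (void*)offsetof(Particle, fluidID));
glVertexAttribDivisor(fluidIdLoc, 1);
glEnableVertexAttribArray(fluidIdLoc);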

OpenGL Shaders - Structuring blocks of data of similar types

I'm having a bit of a structural problem with a shader of mine. Basically I want to be able to handle multiple lights of potentially different types, but I'm unsure what the best way to implement this would be. So far I've been using uniform blocks:
layout (std140) uniform LightSourceBlock
{
    int type;
    vec3 position;
    vec4 color;

    // Spotlights / Point Lights
    float dist;

    // Spotlights
    vec3 direction;
    float cutoffOuter;
    float cutoffInner;
    float attenuation;
} LightSources[12];
It works, but there are several problems with this:
A light can be one of 3 types (spotlight, point light, directional light), which require different attributes (not all of which are needed by every type)
Every light needs a sampler2DShadow (samplerCubeShadow for point lights), which can't be used in uniform blocks.
The way I'm doing it works, but surely there must be a better way of handling something like this? How is this usually done?