Using Unions vs Multiple structs - C++

I am quite new to C++, and sometimes I am not sure which way is better for performance/memory. My problem is that I need a struct with exactly two pointers: one to a vec3 (3 floats), and one to either a vec3 or a vec2.
Right now I am trying to decide whether to:
- Use a union with two constructors, one for vec3 and one for vec2
- Create two structs, one that will contain a vec2 and one a vec3
struct vec3
{
    float x, y, z;
};

struct vec2
{
    float x, y;
};

struct Vertex
{
    template <typename F>
    Vertex(vec3 *Vertices, F Frag)
        : m_fragment(Frag), m_vertices(Vertices)
    {}

    union Fragment
    {
        Fragment(vec3 *Colors)
            : colors(Colors)
        {}
        Fragment(vec2 *Texcoords)
            : texcoords(Texcoords)
        {}

        vec3 *colors;     // active when the fragment carries colors
        vec2 *texcoords;  // active when the fragment carries texture coordinates
    } m_fragment;

    vec3 *m_vertices;
};
This code works well, but I am quite worried about performance, as I intend to use the Vertex struct very often; my program might have thousands of instances of it.

If every Vertex can have either colors or texcoords, but never both, then a union (or better yet, a std::variant<vec3, vec2>) makes sense.
If a Vertex can have both colors and texcoords, then a union won't work, since only one member of a union can be active at a time.
As for performance, profile, profile, profile. Build your interface in such a way that the choice of union or separate members is invisible to the caller. Then implement it both ways and test to see which performs better (or if there's a perceptible difference at all).
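If the either/or case applies, a minimal sketch of the std::variant approach (C++17) might look like this; the pointer alternatives mirror the union above, and the names are only illustrative:
#include <variant>

// vec3 / vec2 as defined in the question above.
struct VertexV
{
    vec3 *m_vertices;
    std::variant<vec3*, vec2*> m_fragment; // holds exactly one of the two

    VertexV(vec3 *vertices, vec3 *colors)
        : m_vertices(vertices), m_fragment(colors) {}
    VertexV(vec3 *vertices, vec2 *texcoords)
        : m_vertices(vertices), m_fragment(texcoords) {}
};

void use(const VertexV &v)
{
    // std::get_if tells you which alternative is currently active.
    if (auto colors = std::get_if<vec3*>(&v.m_fragment))
    {
        // *colors is the vec3* with per-vertex colors
    }
    else if (auto texcoords = std::get_if<vec2*>(&v.m_fragment))
    {
        // *texcoords is the vec2* with texture coordinates
    }
}
Unlike the raw union, std::variant stores a discriminator alongside the value, so it is marginally larger, but it tracks which member is active for you instead of leaving that bookkeeping to the caller.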

Related

For OpenGL Shaders, how would you write a Uniform Function in C++ that accepts all types?

Say I have a Shader class, and I want a Uniform function that sends the data I pass to the loaded shader program:
class Shader {
    unsigned int programid;

    template<typename Type>
    void Uniform(unsigned int location, Type object) {
        // Some logic that passes the typed data to the uniform using glUniform[?]()
    }
};
How would I write the Uniform function (using templates in C++) to accept any type (primitive OR object) and pass it to the Shader?
Examples:
GLSL: uniform float Amount;
C++: shader.Uniform(somefloat);
GLSL: uniform vec3 Position;
C++:
template<typename Type, size_t Size>
struct Vector { Type data[Size]; };

Vector<float, 3> position = {0.0f, 1.0f, 1.0f};
shader.Uniform(position);
GLSL:
struct Light
{
    vec3 position;
    vec4 rotation;
    float luminosity;
    bool status;
};
uniform Light object;
C++:
struct Light {
    Vector<float, 3> position;
    Vector<float, 4> rotation;
    float luminosity;
    bool status;
};
Light object = {{1.0f,0.0f,0.0f},{0.0f,0.0f,0.0f},0.75f,true};
shader.Uniform(object);
First, C++ and GLSL are statically typed languages, not dynamically typed like JavaScript or Python, so there is no actual way to write a single C++ function that accepts any type. What your C++ template function does is, essentially, a text substitution: every time the compiler sees the template used, e.g. Vector<float, 3>, it takes the original template declaration and makes a new copy with Type and Size replaced by float and 3 respectively, and it generates a unique mangled name for that instantiation to prevent linker errors, something like __Vector_TypeFOO_SizeBAR...
(For the sake of completeness, yes, it is possible to implement your own dynamic typing in C/C++ with unions and/or pointer casts. But since you can't do either of those in GLSL, it doesn't help answer the question.)
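As a minimal illustration of that instantiation step (purely for demonstration, not part of the original answer):
#include <cstddef>
#include <type_traits>

template<typename Type, std::size_t Size>
struct Vector { Type data[Size]; };

// Each combination of template arguments is stamped out as its own,
// completely unrelated type at compile time.
static_assert(!std::is_same<Vector<float, 3>, Vector<float, 2>>::value,
              "each instantiation is a distinct type");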
So, since GLSL doesn't have templates to do the text replacement, you'll have to implement it yourself. Load the source of your shader from file or whatever. Before you pass it to glCompileShader, use your favourite string processing library to insert actual strings into placeholder text.
E.g. in your shader you could write something like:
<TYPE><SIZE> position;
and your main program would do something like
src = loadShaderCode("example.template");
src.replace("<TYPE>", "vec");
src.replace("<SIZE>", "3");
shader = compileShader(src, ...);
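A concrete sketch of that preprocessing step in C++ could look like the following; loadShaderCode and replaceAll are hypothetical helpers written for illustration, not an existing API:
#include <cstddef>
#include <fstream>
#include <sstream>
#include <string>

// Read the whole shader template into a string.
std::string loadShaderCode(const std::string &path)
{
    std::ifstream file(path);
    std::stringstream buffer;
    buffer << file.rdbuf();
    return buffer.str();
}

// Replace every occurrence of a placeholder with the actual text.
void replaceAll(std::string &src, const std::string &placeholder, const std::string &value)
{
    for (std::size_t pos = src.find(placeholder); pos != std::string::npos;
         pos = src.find(placeholder, pos + value.size()))
    {
        src.replace(pos, placeholder.size(), value);
    }
}

// Usage:
//   std::string src = loadShaderCode("example.template");
//   replaceAll(src, "<TYPE>", "vec");
//   replaceAll(src, "<SIZE>", "3");
//   then hand src to glShaderSource / glCompileShader as usual.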
Hope this helps.

Is it always good to use Vec3f / Vec4f class defined by yourself?

A question about coding style:
When you're going to reconstruct a virtual scene containing plenty of objects (using JOGL), is it always good to define a Vec3f class and a Face class representing the vertices, normals, and faces, rather than using the float[] type directly? Any ideas?
Many people go a step further and create a Vertex POD object of the form:
struct Vertex {
    vec4 position;
    vec4 normal;
    vec2 texture;
};
Then the stride is simply sizeof(Vertex), and the offsets can be extracted using the offsetof macro. This leads to a more robust setup when passing the data.
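As a rough sketch of what that looks like when describing the buffer layout to OpenGL (shown in C++; the attribute locations 0-2, the vec4/vec2 definitions, and the GL loader header are assumptions for illustration):
#include <cstddef>          // offsetof
// #include <glad/glad.h>   // or whichever OpenGL loader/header you use

struct vec4 { float x, y, z, w; };
struct vec2 { float x, y; };

struct Vertex {
    vec4 position;
    vec4 normal;
    vec2 texture;
};

void describeVertexLayout()
{
    // Stride and offsets come straight from the struct layout, so they stay
    // correct if the struct is rearranged or extended.
    glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                          (const void *)offsetof(Vertex, position));
    glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                          (const void *)offsetof(Vertex, normal));
    glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                          (const void *)offsetof(Vertex, texture));
}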

How to get a value from vec3 in vertex shader? OpenGL 3.3

I have the following vertex shader:
#version 330

layout (location = 0) in vec3 Position;

uniform mat4 gWVP;

out vec4 Color;

void main()
{
    gl_Position = gWVP * vec4(Position, 1.0);
}
How can I get, for example, the third value of the vec3? My first thought was: "Maybe I can get it by multiplying this vector (Position) by something?" But I am not sure that something like a "vertical vector type" exists.
So, what is the best way? I need this value to set the color of the pixel.
There are at least 4 options:
You can access vector components with component names x, y, z, w. This is mostly used for vectors that represent points/vectors. In your example, that would be Position.z.
You can use component names r, g, b, a. This is mostly used for vectors that represent colors. In your example, you could use Position.b, even though that would not be very readable. On the other hand, Color.b would be a good option for the other variable.
You can use component names s, t, p, q. This is mostly used for vectors that represent texture coordinates. In your example, Position.p would also give you the 3rd component.
You can use subscript notation with 0-based indices. In your example, Position[2] also gives you the 3rd element.
Each vector has overloaded access to elements. In this case, using Position.z should work.

OpenGL Shaders - Structuring blocks of data of similar types

I'm having a bit of a structural problem with a shader of mine. Basically I want to be able to handle multiple lights of potentially different types, but I'm unsure what the best way of implementing this would be. So far I've been using uniform blocks:
layout (std140) uniform LightSourceBlock
{
    int type;
    vec3 position;
    vec4 color;

    // Spotlights / Point Lights
    float dist;

    // Spotlights
    vec3 direction;
    float cutoffOuter;
    float cutoffInner;
    float attenuation;
} LightSources[12];
It works, but there are several problems with this:
A light can be one of 3 types (spotlight, point light, directional light), which require different attributes (which aren't necessarily required by all types)
Every light needs a sampler2DShadow (samplerCubeShadow for point lights), which can't be used in uniform blocks.
The way I'm doing it works, but surely there must be a better way of handling something like this? How is this usually done?

GLSL shader in/out variable packing

Does the order and/or size of shader in/out variables make any difference in memory use or performance? For example, are these:
// vert example:
out vec4 colorRadius;
// tess control example:
out vec4 colorRadius[];
// frag example:
in smooth vec4 colorRadius;
equivalent to these:
// vert example:
out vec3 color;
out float radius;
// tess control example:
out vec3 color[];
out float radius[];
// frag example:
in smooth vec3 color;
in smooth float radius;
Is there any additional cost with the second form or will the compiler pack them together in memory and treat them exactly the same?
The compiler could pack these things together. But it doesn't have to, and there's little evidence that compilers commonly do this. So the top version will at least be no slower than the bottom version.
At the same time, this is more of a micro-optimization, so unless you know that this is a bottleneck, just let it go. It's better to write clear, easily understood code and optimize it once you know where your problems are than to optimize without knowing whether it's going to be a concern at all.