C++ data structures: converting from TNT Vector to GL vec3

I'm using a data structure (not written by me) that returns a realVec. This is the declaration of realVec (in typedef.h):
typedef TNT::Vector<PCReal> realVec;
For the definition of TNT Vector please see: http://calicam.sourceforge.net/doxygen/tnt__vector_8h_source.html
Definition of PCReal:
typedef double PCReal;
I need to convert this realVec into the following vec3:
struct vec3 {
    GLfloat x;
    GLfloat y;
    GLfloat z;
    //
    // --- Constructors and Destructors ---
    //
    vec3( GLfloat s = GLfloat(0.0) ) :
        x(s), y(s), z(s) {}
    vec3( GLfloat _x, GLfloat _y, GLfloat _z ) :
        x(_x), y(_y), z(_z) {}
    vec3( const vec3& v ) { x = v.x; y = v.y; z = v.z; }
    vec3( const vec2& v, const float f ) { x = v.x; y = v.y; z = f; }
...
I'm very new to C++, so my confusion probably lies in using TNT::Vector's iterator and converting the values it returns. I'm thinking of something like the code below; tell me if it makes sense. It seems to compile (no make errors):
realVec normal = this->meshPoints.getNormalForVertex(i);
PCReal* iter = normal.begin();
vec3(*iter++, *iter++, *iter++);
I need this because I'm doing GL programming, and it's convenient for my shaders to take vec3's as input.

What you have might work, but be aware that the order in which function arguments are evaluated is unspecified in C++, so the three *iter++ expressions are not guaranteed to be read in x, y, z order. You could improve it with some safety checks, and you don't really have to use iterators at all: it appears realVec provides operator[] like most other vector classes.
realVec normal = this->meshPoints.getNormalForVertex(i);
if (normal.size() >= 3)
{
    vec3 x(normal[0], normal[1], normal[2]);
}
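If you do this conversion in more than one place, it might be worth wrapping it in a small helper. This is just a minimal sketch: the name toVec3 is made up, and it assumes realVec exposes size() and operator[] as used above.
vec3 toVec3(const realVec& v)
{
    // Fall back to a zero vector when fewer than three components exist.
    if (v.size() < 3)
        return vec3(GLfloat(0.0));
    // PCReal is double, so narrow explicitly to GLfloat.
    return vec3(GLfloat(v[0]), GLfloat(v[1]), GLfloat(v[2]));
}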

Related

Using neon/simd to optimize Vector3 class

I'd like to know if it is worth optimizing my Vector3 class's operations with NEON/SIMD, as I did for my Vector2 class.
As far as I know, SIMD can only handle two or four floats at a time, so for my Vector3 we would need something like this:
Vector3 Vector3::operator * (const Vector3& v) const
{
#if defined(__ARM_NEON__)
    // extra step: allocate a fourth float
    const float v4A[4] = {x, y, z, 0};
    const float v4B[4] = {v.x, v.y, v.z, 0};
    // note: the 4-lane multiply is vmulq_f32; vmul_f32 only takes 2-lane vectors
    float32x4_t r = vmulq_f32(*(float32x4_t*)v4A, *(float32x4_t*)v4B);
    return *(Vector3*)&r;
#else
    return Vector3(x * v.x, y * v.y, z * v.z);
#endif
}
Is this safe? Would this extra step still be faster than non-SIMD code in most scenarios (on arm64, for instance)?
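As a point of comparison: the pointer casts above are not safe, since they violate strict aliasing, and float32x4_t generally assumes stricter alignment than a plain float array provides. A safer variant would go through the explicit load/store intrinsics, which only require normal float alignment. This is a minimal sketch under the same assumed Vector3 layout; whether it actually beats the scalar path still needs profiling:
#include <arm_neon.h>

Vector3 Vector3::operator * (const Vector3& v) const
{
#if defined(__ARM_NEON__)
    // Pad to four lanes, then multiply through explicit NEON load/store,
    // avoiding the aliasing and alignment problems of the pointer casts.
    const float a[4] = { x, y, z, 0.0f };
    const float b[4] = { v.x, v.y, v.z, 0.0f };
    float out[4];
    vst1q_f32(out, vmulq_f32(vld1q_f32(a), vld1q_f32(b)));
    return Vector3(out[0], out[1], out[2]);
#else
    return Vector3(x * v.x, y * v.y, z * v.z);
#endif
}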

fast dirty approximation of the center of a list of 3D vertices that forms a very shallow convex hull

I want to find the XY of the center (red) of a convex-hull point set (orange circles) that results from collision detection.
Using the separating-axis technique, I know for sure that the convex shape (pink) is relatively thin in the Z-axis.
In >90% of my use cases, the number of vertices is not more than 8.
My poor algorithm (AABB)
I tried to implement it by calculating the center point of the AABB.
However, when I use it in a real physics simulation, the collision point (red) is not accurate enough for box-stack stability.
MCVE
Here is the test case (the vertices are extruded in +y and -y to create volume):
int main(){
    std::vector<Vec3> hullPoints;
    hullPoints.push_back(Vec3(-0.5,-0.5,-0.1));
    hullPoints.push_back(Vec3(-0.5,-0.5,0.1));
    hullPoints.push_back(Vec3(-0.5,0.5,-0.1));
    hullPoints.push_back(Vec3(-0.5,0.5,0.1));
    hullPoints.push_back(Vec3(0.5,-0.5,-0.2));
    hullPoints.push_back(Vec3(0.5,-0.5,0.2));
    hullPoints.push_back(Vec3(0.5,0.5,-0.2));
    hullPoints.push_back(Vec3(0.5,0.5,0.2));
    //^^^^ INPUT
    Vec3 centerOfHull; // approximate
    Vec3 centerMax=Vec3(-100000,-100000,-100000);
    Vec3 centerMin=Vec3(100000,100000,100000);
    for(unsigned int n=0;n<hullPoints.size();n++){
        Vec3 hullPoint=hullPoints[n];
        for(unsigned int m3=0;m3<3;m3++){
            centerMax[m3]=std::max( centerMax[m3],hullPoint[m3]);
            centerMin[m3]=std::min( centerMin[m3],hullPoint[m3]);
        }
    }
    centerOfHull=centerMax*0.5 + centerMin*0.5;
    std::cout<<"centerOfHull="<< centerOfHull.toString()<<std::endl;
    //it prints (0,0,0)
}
I wish it to return something like Vec3(a value between 0.05 and 0.45, 0, don't care).
References
I want a very fast algorithm that doesn't have to be very accurate.
There are some algorithms on the internet, e.g.:
Skeleton (so unrelated): Better "centerpoint" than centroid
Just average all hull points. Its accuracy is too bad (e.g. the result for my example = Vec3(0,0,0)).
It is even worse for unevenly distributed vertices.
Generate the whole convex hull (and all faces). It is too slow for unnecessarily high precision.
Answers don't need to contain any C++ code.
Just a rough suggestion can be very useful.
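One such rough suggestion, as a self-contained sketch: weight each hull point by its distance from the mean Z, so the side of the shallow hull with more Z-extent pulls the estimate toward it. The helper approxHullCenter and the weighting scheme are illustrative guesses, not an established algorithm; on the test case above this yields roughly (0.167, 0, 0), which falls in the desired range.
#include <vector>
#include <array>
#include <cmath>
#include <cstdio>

// Illustrative heuristic: points with a larger Z-extent (relative to the
// mean Z) get a larger weight, biasing the center toward the thicker side.
std::array<float,3> approxHullCenter(const std::vector<std::array<float,3>>& pts)
{
    float meanZ = 0.0f;
    for (const auto& p : pts) meanZ += p[2];
    meanZ /= pts.size();

    float wSum = 0.0f, cx = 0.0f, cy = 0.0f;
    for (const auto& p : pts) {
        float w = std::fabs(p[2] - meanZ); // Z-extent as weight
        wSum += w;
        cx += w * p[0];
        cy += w * p[1];
    }
    if (wSum <= 0.0f) { // degenerate (perfectly flat): plain average
        for (const auto& p : pts) { cx += p[0]; cy += p[1]; }
        return { cx / pts.size(), cy / pts.size(), meanZ };
    }
    return { cx / wSum, cy / wSum, meanZ };
}

int main(){
    std::vector<std::array<float,3>> pts = {
        {-0.5f,-0.5f,-0.1f}, {-0.5f,-0.5f,0.1f},
        {-0.5f, 0.5f,-0.1f}, {-0.5f, 0.5f,0.1f},
        { 0.5f,-0.5f,-0.2f}, { 0.5f,-0.5f,0.2f},
        { 0.5f, 0.5f,-0.2f}, { 0.5f, 0.5f,0.2f},
    };
    std::array<float,3> c = approxHullCenter(pts);
    std::printf("(%g, %g, %g)\n", c[0], c[1], c[2]); // ~ (0.166667, 0, 0)
}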
Appendix (Vec3 library)
It is provided only for MCVE completeness.
#include <vector>
#include <iostream>
#include <string>
#include <algorithm> // std::max, std::min (used in main above)
struct Vec3{
    //modified from https://www.flipcode.com/archives/Faster_Vector_Math_Using_Templates.shtml
    float x, y, z;
    inline Vec3( void ) {}
    inline Vec3( const float x, const float y, const float z )
    { this->x = x; this->y = y; this->z = z; }
    inline Vec3 operator + ( const Vec3& A ) const {
        return Vec3( x + A.x, y + A.y, z + A.z );
    }
    inline Vec3 operator * ( const float& A ) const {
        return Vec3( x*A, y*A, z*A );
    }
    inline float Dot( const Vec3& A ) const {
        return A.x*x + A.y*y + A.z*z;
    }
    inline float& operator[]( int arr ) {
        switch(arr){
            case 0: return x;
            case 1: return y;
            case 2: return z;
        }
        std::cout<<"error"<<std::endl;
        return x;
    }
    std::string toString( ) const {
        return "("+std::to_string(x)+","+std::to_string(y)+","+std::to_string(z)+")";
    }
};

Step vs. comparison operator in HLSL?

As an HLSL enthusiast, I've been in the habit of using (float)(x >= y), usually for 0/1 multiplications to avoid branches. I just revisited my intrinsics list and saw step(x, y). They sound equivalent in output to me.
Are there any reasons to prefer one of these styles over the other?
I think they're equivalent. This shader:
inline float test1( float x, float y )
{
    return (float)( x >= y );
}
inline float test2( float x, float y )
{
    return step( x, y );
}
float2 main( float4 c: COLOR0 ): SV_Target
{
    float2 res;
    res.x = test1( c.x, c.y );
    res.y = test2( c.z, c.w );
    return res;
}
Compiles into the following DXBC instructions:
ps_4_0
dcl_input_ps linear v0.xyzw
dcl_output o0.xy
dcl_temps 1
ge r0.xy, v0.xwxx, v0.yzyy // If the comparison is true, then 0xFFFFFFFF is returned for that component.
and o0.xy, r0.xyxx, l(0x3f800000, 0x3f800000, 0, 0) // Component-wise logical AND, 0x3f800000 = 1.0f
ret
As you can see, the compiler treated both inline functions as equivalent; it even merged them together into a single two-lane vector comparison. One caveat on argument order: in the ge instruction, the operands of the step lane appear swapped (v0.w >= v0.z), reflecting that step(x, y) computes y >= x. So (float)(x >= y) is equivalent to step(y, x), not step(x, y).

no default constructor exists

I'm having some trouble with a class that was working fine and now doesn't seem to want to work at all.
The error is "No appropriate default constructor available"
I am using the class in two places: I'm making a list of them, and I'm initializing them and then adding them to the list.
Vertice3f.h
#pragma once
#include "Vector3f.h"
// Vertice3f holds 3 floats for an xyz position and 3 Vector3f's
// (which each contain 3 floats) for uv, normal and color
class Vertice3f{
private:
    float x,y,z;
    Vector3f uv, normal, color;
public:
    // If you don't want to use a UV, Normal or Color
    // just pass in a Vector3f with 0,0,0 values
    Vertice3f(float _x, float _y, float _z, Vector3f _uv,
        Vector3f _normal, Vector3f _color);
    ~Vertice3f();
};
Vertice3f.cpp
#include "Vertice3f.h"
Vertice3f::Vertice3f(float _x, float _y, float _z,
    Vector3f _uv, Vector3f _normal, Vector3f _color){
    x = _x;
    y = _y;
    z = _z;
    uv = _uv;
    normal = _normal;
    color = _color;
}
It is being used in my OBJModelLoader class as follows:
list<Vertice3f> vert3fList;
Vertice3f tvert = Vertice3f(
    x = (float)atof(
        vertList[i].substr(
            vertList[i].find("v") + 1,
            vertList[i].find(" ", vertList[i].find("v") + 2, 10)
        ).c_str()
    ),
    y = (float)atof(
        vertList[i].substr(
            vertList[i].find(" ", vertList[i].find("v") + 4, 10) + 1,
            vertList[i].find(" ", vertList[i].find("v") + 13, 10)
        ).c_str()
    ),
    z = (float)atof(
        vertList[i].substr(
            vertList[i].find(" ", vertList[i].find("v") + 13, 10) + 1,
            vertList[i].find(" ", vertList[i].find("v") + 23, 10)
        ).c_str()
    ),
    ::Vector3f(0.0f,0.0f,0.0f), ::Vector3f(0.0f,0.0f,0.0f), ::Vector3f(0.0f,0.0f,0.0f)
);
vert3fList.push_back(tvert);
I have tried defining a default constructor myself, so in the .h I put
Vertice3f();
and in the .cpp
Vertice3f::Vertice3f(){
    x = 0.0f;
    y = 0.0f;
    z = 0.0f;
    uv = Vector3f(0.0f,0.0f,0.0f);
    normal = Vector3f(0.0f,0.0f,0.0f);
    color = Vector3f(0.0f,0.0f,0.0f);
}
So, I'm not sure why it can't find a default constructor or how to appease the compiler. I'm sure it's user error because the compiler probably knows what it's doing.
Any help is greatly appreciated, I will answer any other questions you have, just ask.
I'd guess that the missing default constructor is the default constructor of the Vector3f class, not of the Vertice3f class. Any member not listed in a constructor's member initializer list is default-constructed before the constructor body runs, so your constructor of Vertice3f attempts to default-construct its Vector3f members, and that is what leads to the error.
This is why your attempts to provide a default constructor for Vertice3f don't change anything. The problem lies, again, with Vector3f.
To fix it, either provide all the necessary default constructors (assuming that agrees with your design), or rewrite the constructor of Vertice3f to use a member initializer list instead of in-body assignment:
Vertice3f::Vertice3f(float _x, float _y, float _z,
    Vector3f _uv, Vector3f _normal, Vector3f _color) :
    x(_x), y(_y), z(_z), uv(_uv), normal(_normal), color(_color)
{}
This version no longer attempts to default-construct anything. Using a member initializer list instead of in-body assignment is a good idea in any case.
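If you go with the first option instead, here is a minimal sketch of what Vector3f might need. The actual definition of Vector3f isn't shown in the question, so the members here are assumptions:
// Hypothetical Vector3f: the real class isn't shown in the question.
// The only point is that it needs a default constructor for members
// that are assigned (rather than initialized) in Vertice3f's constructor.
class Vector3f {
public:
    Vector3f() : x(0.0f), y(0.0f), z(0.0f) {} // default constructor
    Vector3f(float _x, float _y, float _z) : x(_x), y(_y), z(_z) {}
private:
    float x, y, z;
};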

Connecting the C++ class with the Lua table

I am trying to connect my 3D engine to a Lua (5.1) parser.
For example, I have a Lua class for a vec3 and a C++ class for a vec3. I want them to work with each other.
This is (part) of my C++ class:
class vec3
{
public:
    vec3() {}
    vec3(float _x, float _y, float _z) : x(_x), y(_y), z(_z) {}
    vec3 operator+(const vec3 &b)
    {
        return vec3(x + b.x, y + b.y, z + b.z);
    }
    float dot(const vec3 &b)
    {
        return x * b.x + y * b.y + z * b.z;
    }
    float x, y, z;
};
This is the (limited) Lua version:
vec3 = {};
vec3.__index = vec3;
local mt = {}
mt.__call = function(class_tbl, ...)
    local obj = {}
    setmetatable(obj, vec3);
    vec3.init(obj, ...);
    return obj;
end
vec3.init = function(obj, x, y, z)
    obj.x, obj.y, obj.z = x, y, z;
end
setmetatable(vec3, mt);
function vec3:__tostring()
    return "(" .. self.x .. ", " .. self.y .. ", " .. self.z .. ")";
end
function vec3:__add(b)
    return vec3(self.x + b.x, self.y + b.y, self.z + b.z);
end
function vec3:dot(b)
    return self.x * b.x + self.y * b.y + self.z * b.z;
end
I think the question is quite obvious: I want to be able to use vec3's in my C++ code, for example to position nodes and other things, and then make them available in Lua, where the Lua programmer can do math with the vec3's and send them back to C++. So I also want to be able to construct a vec3 in Lua and send it to C++, where it is understood as the vec3 class.
To achieve this, I think I need to construct the above Lua table in C instead of in Lua, and I need to create "push" and "pop" functions to send vec3's to Lua and retrieve them from Lua.
But all my attempts fail.
Can anyone help me get this to work?
Dirk.
What you need to do is create a userdata on the Lua stack in C++ and use that as the object. You can fairly simply placement-new into it and arrange the metatable from C++. Of course, this is hideously type-unsafe, amongst the other huge holes in the Lua system.
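A minimal sketch of that approach with the Lua 5.1 C API, using the vec3 class above. The metatable name "vec3.meta" and the function names are choices made here for illustration, not fixed by anything:
#include <lua.hpp>
#include <new>

static const char* VEC3_MT = "vec3.meta";

// Allocate a userdata, placement-new a vec3 into it, attach the metatable.
vec3* push_vec3(lua_State* L, const vec3& v)
{
    void* mem = lua_newuserdata(L, sizeof(vec3));
    vec3* p = new (mem) vec3(v);
    luaL_getmetatable(L, VEC3_MT);
    lua_setmetatable(L, -2);
    return p;
}

// Retrieve a vec3 argument, raising a Lua error on type mismatch.
vec3* check_vec3(lua_State* L, int idx)
{
    return static_cast<vec3*>(luaL_checkudata(L, idx, VEC3_MT));
}

// __add metamethod: vec3 + vec3 -> vec3
static int l_vec3_add(lua_State* L)
{
    vec3* a = check_vec3(L, 1);
    vec3* b = check_vec3(L, 2);
    push_vec3(L, *a + *b);
    return 1;
}

// Register the metatable once, e.g. during engine startup.
void register_vec3(lua_State* L)
{
    luaL_newmetatable(L, VEC3_MT);
    lua_pushcfunction(L, l_vec3_add);
    lua_setfield(L, -2, "__add");
    lua_pop(L, 1);
}
Note that vec3 is trivially destructible here, so skipping a __gc metamethod is fine; a type with a real destructor would need one.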
Why not try to use C++ packages like luabind or luabridge? In those you you can access any lua data from C++ and vice versa.