C++ Convert 3D Velocity Vector To Speed Value

In a game I am working on I get the velocity of a game world object like so
void getObjectVelocity(int objectID, vec3_t *velocityOut);
So if I were to call this function like this
vec3_t storeObjectVelocity;
getObjectVelocity(21/* just an example ID */, &storeObjectVelocity);
The velocity of the object with the ID of 21 would be stored in storeObjectVelocity.
For testing purposes I am trying to print the speed of this object, based on its velocity, in the middle of the game screen.
Here's an example just to give you a better idea of what I'm trying to accomplish
int convertVelocityToSpeed(vec3_t velocity)
{
//This is where I am having issues.
//Converting the object's 3D velocity vector to a speed value
}
int testHUDS()
{
char velocityToSpeedBuffer[32] = { 0 };
vec3_t storeObjectVelocity;
getObjectVelocity(21, &storeObjectVelocity);
strcpy(velocityToSpeedBuffer, "Speed: ");
strcat(velocityToSpeedBuffer, system::itoa(convertVelocityToSpeed(storeObjectVelocity), 10));
render::text(SCREEN_CENTER_X, SCREEN_CENTER_Y, velocityToSpeedBuffer, FONT_SMALL);
}
Here is my vec3_t struct in case you were wondering
struct vec3_t
{
float x, y, z;
};

Length of a vector is calculated as
√( x² + y² + z²)
So in your program, something like this will work:
std::sqrt( velocity.x * velocity.x + velocity.y * velocity.y + velocity.z * velocity.z )
As #Nelfeal commented, the last approach can overflow. Using std::hypot avoids this problem. Since it is safer and clearer, it should be the first option if C++17 is available, even though it is less efficient. Remember to avoid premature micro-optimizations.
std::hypot(velocity.x, velocity.y, velocity.z)
Also, you should think about passing velocity as a const reference to the function.
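Putting those two points together, a minimal sketch of the helper might look like this (assuming C++17 for the three-argument std::hypot, and a float return type rather than the int in the question, since the components are floats):

#include <cmath>

// Speed is the magnitude of the velocity vector.
float convertVelocityToSpeed(const vec3_t& velocity)
{
    // std::hypot avoids intermediate overflow/underflow; the 3-argument
    // overload requires C++17.
    return std::hypot(velocity.x, velocity.y, velocity.z);
}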

Speed is a scalar quantity given by the magnitude of the velocity vector, |velocity|. The magnitude of a 3D vector is computed as:
√( x² + y² + z²)
So in your code you want to implement your method as:
int convertVelocityToSpeed(vec3_t velocity)
{
return std::sqrt(velocity.x * velocity.x + velocity.y * velocity.y + velocity.z * velocity.z);
}
You may need to include the math header, #include <cmath>. I have assumed your vec3_t holds int values, although this is unusual for velocities in physics simulations; they are usually floating-point types. If not, you need to check your return type.

#include <cmath>
using namespace std;
sqrt(pow(velocity.x,2) + pow(velocity.y,2) + pow(velocity.z,2));
Use sqrt and pow from <cmath>.
EDIT: fixed the mistyped expression as pointed out in the comments.

Related

Eigen C++: Best Way To Convert 2D Polar To Cartesian

I am currently doing:
Eigen::Vector2d polar(2.5, 3 * M_PI / 4);
Eigen::Vector2d cartesian = polar.x() * Vector2d(cos(polar.y()), sin(polar.y()));
but I'm not sure if this is the correct way to use Eigen or if there is some better built in way.
Thanks!
That looks right to me if you're wanting to stick with using Eigen.
Generally, though, since the polar representation has angles, it might be good to avoid using Eigen::Vector2d for it, just to reduce mistakes that may be made in the future (like adding multiple angles together and not dealing with the fact that 0 == 2*PI). Maybe you could do it with structs instead:
#include <cmath>

struct Polar { double range; double angle; };
struct Cartesian { double x; double y; };

Cartesian to_cartesian(const Polar& p) {
    double c = std::cos(p.angle);
    double s = std::sin(p.angle);
    return {p.range * c, p.range * s};
}

Polar to_polar(const Cartesian& c) {
    return {std::sqrt(c.x * c.x + c.y * c.y), std::atan2(c.y, c.x)};
}
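For completeness, a small usage sketch of these helpers (the values in the comments are approximate):

#include <iostream>

int main() {
    Polar p{2.5, 3 * M_PI / 4};
    Cartesian c = to_cartesian(p);   // roughly (-1.768, 1.768)
    Polar back = to_polar(c);        // recovers range 2.5 and angle 3*pi/4
    std::cout << c.x << ", " << c.y << "\n";
    std::cout << back.range << ", " << back.angle << "\n";
}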

Pre-Collision Object Staging

I am making a billiards game. Currently, when one ball collides with another at high speed, the collision is not always calculated correctly. I know what the issue is, but I'm not 100% sure how to fix it.
Say two balls are traveling with these velocities:
More often than not, when the collision is detected, the balls will have some overlap between them that looks like this:
Currently, my physics engine will handle the collision at this moment in time. This will not give the desired result since this is NOT where the balls collide in reality - balls don't go through one another. So we need to back up the balls to where they really collide. That would look like this:
I am looking for an efficient algorithm that would help me do this. Currently, I have a very naive and inefficient method - I move both balls to their locations just before the collision and take very small steps toward the moment of collision. Of course, this is very inefficient. Here is what it looks like:
void CBallCollision::StageCollision()
{
double sumOfRadii = mBall1->GetRadius() + mBall2->GetRadius();
mBall1->SetCenter(mBall1->GetLastLocationOnTable().first, mBall1->GetLastLocationOnTable().second);
mBall2->SetCenter(mBall2->GetLastLocationOnTable().first, mBall2->GetLastLocationOnTable().second);
double timeStep = 0.008;
double tolerance = 0.1 * min(mBall1->GetRadius(), mBall2->GetRadius());
int iter = 0;
while (GetDistance() > sumOfRadii)
{
double xGoal1 = mBall1->GetX() + mBall1->GetVelocityX() * timeStep;
double yGoal1 = mBall1->GetY() + mBall1->GetVelocityY() * timeStep;
pair<double, double> newCoords1 = mBall1->LinearInterpolate(xGoal1, yGoal1);
double xGoal2 = mBall2->GetX() + mBall2->GetVelocityX() * timeStep;
double yGoal2 = mBall2->GetY() + mBall2->GetVelocityY() * timeStep;
pair<double, double> newCoords2 = mBall2->LinearInterpolate(xGoal2, yGoal2);
double dist = (pow(newCoords1.first - newCoords2.first, 2) + pow(newCoords1.second - newCoords2.second, 2));
if (abs(dist - sumOfRadii) > tolerance)
{
timeStep *= 0.5;
}
else
{
mBall1->SetX(newCoords1.first);
mBall1->SetY(newCoords1.second);
mBall2->SetX(newCoords2.first);
mBall2->SetY(newCoords2.second);
}
iter++;
if (iter > 1000)
{
break;
}
}
}
If I don't put an upper bound on the number of iterations, the program crashes. I'm sure there is a much more efficient way of going about this. Any help is appreciated.
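Not from the original thread, but for reference: because both balls move at (effectively) constant velocity during the frame, the contact time can be found in closed form by solving |Δp + Δv·t|² = (r₁ + r₂)² for the smallest non-negative t, instead of stepping. A rough sketch, with hypothetical names:

#include <cmath>
#include <optional>

// Relative position (px, py) = p2 - p1 and relative velocity (vx, vy) = v2 - v1
// at the start of the frame. Returns the earliest t >= 0 at which the balls
// touch, or std::nullopt if they never do.
std::optional<double> timeOfCollision(double px, double py,
                                      double vx, double vy,
                                      double sumOfRadii)
{
    double a = vx * vx + vy * vy;
    double b = 2.0 * (px * vx + py * vy);
    double c = px * px + py * py - sumOfRadii * sumOfRadii;
    if (a == 0.0)
        return std::nullopt;                        // no relative motion
    double disc = b * b - 4.0 * a * c;
    if (disc < 0.0)
        return std::nullopt;                        // the paths never get close enough
    double t = (-b - std::sqrt(disc)) / (2.0 * a);  // smaller root = first contact
    return t >= 0.0 ? std::optional<double>(t) : std::nullopt;
}

Rewinding both balls to the start of the frame and advancing them by t places them exactly at the moment of contact, with no iteration and no tolerance.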

Wrote some perlin noise kind of code, it looks blocky

The previously answered question "Blocky" Perlin noise doesn't seem to solve my problem.
I tried to simplify my code as much as I could to make it readable and understandable.
I don't use the permutation table; instead I use the mt19937 generator.
I use SFML.
#include <SFML/Graphics.hpp>
#include <random>
#include <vector>
using namespace std;
using namespace sf;
typedef Vector2f Vec2;
Sprite spr;
Texture tx;
// dot product
float prod(Vec2 a, Vec2 b) { return a.x*b.x + a.y*b.y; }
// linear interpolation
float interp(float start,float end,float coef){return coef*(end-start)+start;}
// get the noise of a certain pixel, giving its relative value vector in the square with [0.0 1.0] values
float getnoise(Vec2&A, Vec2&B, Vec2&C, Vec2&D, Vec2 rel){
float
dot_a=prod(A ,Vec2(rel.x ,rel.y)),
dot_b=prod(B ,Vec2(rel.x-1 ,rel.y)),
dot_c=prod(C ,Vec2(rel.x ,rel.y-1)),
dot_d=prod(D ,Vec2(rel.x-1 ,rel.y-1));
return interp
(interp(dot_a,dot_b,rel.x),interp(dot_c,dot_d,rel.x),rel.y);
// return interp
// (interp(da,db,rel.x),interp(dc,dd,rel.x),rel.y);
}
// calculate the [0.0 1.0] relative value of a pixel
Vec2 getrel(int i, int j, float cellsize){
return Vec2
(float
(i // which pixel
-(i/int(cellsize))//which cell
*cellsize)// floor() equivalent
/cellsize,// [0,1] range
float(j-(j/int(cellsize))*cellsize)/cellsize
);
}
// generates an array of random float values
vector<float> seeded_rand_float(unsigned int seed, int many){
vector<float> ret;
std::mt19937 rr;
std::uniform_real_distribution<float> dist(0, 1.0);
rr.seed(seed);
for(int j = 0 ; j < many; ++j)
ret.push_back(dist(rr));
return ret;
}
// use above function to generate an array of random vectors with [0.0 1.0] values
vector<Vec2>seeded_rand_vec2(unsigned int seed, int many){
auto coeffs1 = seeded_rand_float(seed, many*2);
// auto coeffs2 = seeded_rand_float(seed+1, many); //bad choice !
vector<Vec2> pushere;
for(int i = 0; i < many; ++i)
pushere.push_back(Vec2(coeffs1[2*i],coeffs1[2*i+1]));
// pushere.push_back(Vec2(coeffs1[i],coeffs2[i]));
return pushere;
}
// here we make the perlin noise
void make_perlin()
{
int seed = 43;
int pixels = 400; // how many pixels
int divisions = 10; // cell squares
float cellsize = float(pixels)/divisions; // size of a cell
auto randv = seeded_rand_vec2(seed,(divisions+1)*(divisions+1));
// makes the vectors be in [-1.0 1.0] range
for(auto&a:randv)
a = a*2.0f-Vec2(1.f,1.f);
Image img;
img.create(pixels,pixels,Color(0,0,0));
for(int j=0;j<=pixels;++j)
{
for(int i=0;i<=pixels;++i)
{
int ii = int(i/cellsize); // cell index
int jj = int(j/cellsize);
// those are the nearest gradient vectors for the current pixel
Vec2
A = randv[divisions*jj +ii],
B = randv[divisions*jj +ii+1],
C = randv[divisions*(jj+1) +ii],
D = randv[divisions*(jj+1) +ii+1];
float val = getnoise(A,B,C,D,getrel(i,j,cellsize));
val = 255.f*(.5f * val + .7f);
img.setPixel(i,j,Color(val,val,val));
}
}
tx.loadFromImage(img);
spr.setPosition(Vec2(10,10));
spr.setTexture(tx);
};
Here are the results, I included the resulted gradients vector (I multiplied them by cellsize/2).
My question is: why are there white artifacts? You can somehow see the squares...
PS: it has been solved, I posted the fixed source here http://pastebin.com/XHEpV2UP
Don't make the mistake of applying a smooth interp on the result instead of the coefficient. Normalizing vectors or adding an offset to avoid zeroes doesn't seem to improve anything. Here is the colorized result:
The human eye is sensitive to discontinuities in the spatial derivative of luminance (brightness). The linear interpolation you're using here is sufficient to make brightness continuous, but it does not make the derivative of the brightness continuous.
Perlin recommends using eased interpolation to get smoother results. You could use 3*t^2 - 2*t^3 (as suggested in the linked presentation) right in your interpolation function. That should solve the immediate issue.
That would look something like
// interpolation
float linear(float start,float end,float coef){return coef*(end-start)+start;}
float poly(float coef){return 3*coef*coef - 2*coef*coef*coef;}
float interp(float start,float end,float coef){return linear(start, end, poly(coef));}
But note that evaluating a polynomial for every interpolation is needlessly expensive. Usually (including here) this noise is being evaluated over a grid of pixels, with squares being some integer (or rational) number of pixels large; this means that rel.x, rel.y, rel.x-1, and rel.y-1 are quantized to particular possible values. You can make a lookup table for values of the polynomial ahead of time at those values, replacing the "poly" function in the code snippet provided. This technique lets you use even smoother (e.g. degree 5) easing functions at very little additional cost.
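As a rough illustration of that lookup-table idea (assuming, as in the question, that a cell spans a whole number of pixels):

#include <vector>

// One table entry per pixel offset inside a cell; poly(rel.x) then becomes
// a table lookup indexed by the pixel's offset within its cell.
std::vector<float> makeEaseTable(int cellsize) {
    std::vector<float> table(cellsize);
    for (int i = 0; i < cellsize; ++i) {
        float t = float(i) / float(cellsize);
        table[i] = t * t * (3.f - 2.f * t);   // 3t^2 - 2t^3
    }
    return table;
}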
Although Jerry is correct in his above answer (I would have simply commented above, but I'm still pretty new to StackOverflow and I have insufficient reputation to comment at the moment)...
And his solution of using:
(3*coef*coef) - (2*coef*coef*coef)
to smooth/curve the interpolation factor works.
The slightly better solution is to simplify the equation to:
(3 - (2*coef)) * coef*coef
the resulting curve is virtually identical (there are slight differences, but they are tiny), and there are two fewer multiplications (and still only a single subtraction) per interpolation, resulting in less computational effort.
This reduction in computation can really add up over time, especially when using the noise function a lot, for instance if you start generating noise in more than 2 dimensions.

find closest position of asteroids c++

In the above image I would like to know how to find the shortest possible way to get to the asteroid. The ship can wrap around the screen, so the closest way is going through the top corner instead of turning around and going back. I am not looking for code, just the pseudo code of how to get to it.
Any help is appreciated.
The game asteroid is played on the surface of a torus.
Well, since you can wrap around any edge of the screen, there are always 4 straight lines between the asteroid and the ship (up and left, up and right, down and left, and down and right). I would just calculate the length of each and take the smallest result.
int dx1 = abs(ship_x - asteroid_x);
int dx2 = screen_width - dx1;
int dy1 = abs(ship_y - asteroid_y);
int dy2 = screen_height - dy1;
// Now calculate the pseudo-distances as Pete suggests:
int pseudo1 = (dx1 * dx1) + (dy1 * dy1);
int pseudo2 = (dx2 * dx2) + (dy1 * dy1);
int pseudo3 = (dx1 * dx1) + (dy2 * dy2);
int pseudo4 = (dx2 * dx2) + (dy2 * dy2);
This shows how to calculate the various distances involved. There is a little complication around mapping each one to the appropriate direction.
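Not part of the original answer, but one way to finish that snippet off is to take the minimum, do a single square root if you actually need the distance, and remember which combination won so you know whether to wrap on each axis:

#include <algorithm>
#include <cmath>

int best = std::min({pseudo1, pseudo2, pseudo3, pseudo4});
double distance = std::sqrt(double(best));            // only one sqrt needed, if at all

bool wrap_x = (best == pseudo2 || best == pseudo4);   // shorter to cross a vertical screen edge
bool wrap_y = (best == pseudo3 || best == pseudo4);   // shorter to cross a horizontal screen edge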
I would recommend the A* search algorithm
#include <iostream>
template<typename Scalar>
struct vector2d {
Scalar x;
Scalar y;
};
template<typename Scalar>
struct position2d {
Scalar x;
Scalar y;
};
template<typename S>
S absolute( S in ) {
if (in < S())
return -in;
return in;
}
template<typename S>
S ShortestPathScalar( S ship, S rock, S wrap ) {
S direct = rock-ship;
S indirect = (direct > S()) ? direct - wrap : direct + wrap; // go the other way around the torus
if (absolute( direct ) > absolute( indirect ) ) {
return indirect;
}
return direct;
}
template<typename S>
vector2d<S> ShortestPath( position2d<S> ship, position2d<S> rock, position2d<S> wrap ) {
vector2d<S> retval;
retval.x = ShortestPathScalar( ship.x, rock.x, wrap.x );
retval.y = ShortestPathScalar( ship.y, rock.y, wrap.y );
return retval;
}
int main() {
position2d<int> world = {1000, 1000};
position2d<int> rock = {10, 10};
position2d<int> ship = {500, 900};
vector2d<int> path = ShortestPath( ship, rock, world );
std::cout << "(" << path.x << "," << path.y << ")\n";
}
No point in doing crap with squaring stuff in a simple universe like that.
Scalar works for any type that supports a < b and default-constructs to zero, like double, int, or long long.
Note that copy/pasting the above code and handing it in as an assignment at the level of course where you are playing with that problem will get you looked at strangely. However, the algorithm should be pretty easy to follow.
Find the sphere in reference to the ship.
To avoid decimals in my example, let the range of x & y be [0 ... 511], where 511 wraps around to 0.
Lets make the middle the origin.
So subtract vec2(256,256) from both the sphere and the ship's position
sphere.position(-255,255) = sphere.position(1 - 256 ,511 - 256);
ship.position(255,-255) = ship.position(511 - 256, 1 - 256)
firstDistance(-510,510) = sphere.position(-255,255) - ship.position(255,-255)
wrappedPosition(254,-254) = wrapNewPositionToScreenBounds(firstDistance(-510,510)) // under flow / over flow using origin offset of 256
secondDistance(1,-1) = ship.position(255,-255) - wrappedPosition(254,-254)
If you need the shortest way to the asteroid, you don't need to calculate the actual distance to it. If I understand you correctly, you need the direction of the shortest way, not the length of the shortest path.
This, I think, is computationally the least expensive method to do that:
Let the meteor's position be (Mx, My) and the ship position (Sx, Sy).
The width of the viewport is W and the height is H. Now,
dx = Mx - Sx,
dy = My - Sy.
if abs(dx) > W/2 (which is 256 in this case) your ship needs to go LEFT,
if abs(dx) < W/2 your ship needs to go RIGHT.
IMPORTANT - Invert your result if dx was negative. (Thanks to #Toad for pointing this out!)
Similarly, if
abs(dy) > H/2 ship goes UP,
abs(dy) < H/2 ship goes DOWN.
As with dx, flip your result if dy is negative.
This takes wrapping into account and should work for every case. No squares or Pythagoras' theorem involved; I doubt it can be done any cheaper. Also, if you HAVE to find the actual shortest distance, you'll only have to apply it once now (since you already know which one of the four possible paths you need to take). #Peter's post gives an elegant way to do that while taking wrapping into account.
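A minimal sketch of that decision logic, with the invert-on-negative step included (names are illustrative, not from the answer):

#include <cstdlib>   // std::abs

// Returns -1 to step toward smaller coordinates (left/up) or +1 toward larger
// coordinates (right/down) along one axis of the wrapping screen.
int stepToward(int ship, int target, int extent)
{
    int d = target - ship;
    int dir = (std::abs(d) > extent / 2) ? -1 : +1;   // wrap if more than half a screen away
    return (d < 0) ? -dir : dir;                      // invert when the raw difference is negative
}

// Usage: int dxStep = stepToward(Sx, Mx, W); int dyStep = stepToward(Sy, My, H);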

Determining if a point is inside a polyhedron

I'm attempting to determine if a specific point lies inside a polyhedron. In my current implementation, the method I'm working on takes the point we're looking for and an array of the faces of the polyhedron (triangles in this case, but it could be other polygons later). I've been trying to work from the info found here: http://softsurfer.com/Archive/algorithm_0111/algorithm_0111.htm
Below, you'll see my "inside" method. I know that the nrml/normal thing is kind of weird .. it's the result of old code. When I was running this it seemed to always return true no matter what input I gave it. (This is solved, please see my answer below -- this code is working now.)
bool Container::inside(Point* point, float* polyhedron[3], int faces) {
Vector* dS = Vector::fromPoints(point->X, point->Y, point->Z,
100, 100, 100);
int T_e = 0;
int T_l = 1;
for (int i = 0; i < faces; i++) {
float* polygon = polyhedron[i];
float* nrml = normal(&polygon[0], &polygon[1], &polygon[2]);
Vector* normal = new Vector(nrml[0], nrml[1], nrml[2]);
delete nrml;
float N = -((point->X-polygon[0][0])*normal->X +
(point->Y-polygon[0][1])*normal->Y +
(point->Z-polygon[0][2])*normal->Z);
float D = dS->dot(*normal);
if (D == 0) {
if (N < 0) {
return false;
}
continue;
}
float t = N/D;
if (D < 0) {
T_e = (t > T_e) ? t : T_e;
if (T_e > T_l) {
return false;
}
} else {
T_l = (t < T_l) ? t : T_l;
if (T_l < T_e) {
return false;
}
}
}
return true;
}
This is in C++ but as mentioned in the comments, it's really very language agnostic.
The link in your question has expired and I could not understand the algorithm from your code. Assuming you have a convex polyhedron with counterclockwise oriented faces (seen from outside), it should be sufficient to check that your point is behind all faces. To do that, you can take the vector from the point to each face and check the sign of the scalar product with the face's normal. If it is positive, the point is behind the face; if it is zero, the point is on the face; if it is negative, the point is in front of the face.
Here is some complete C++11 code that works with 3-point faces or faces with more points (only the first 3 points are considered). You can easily change bound to exclude the boundaries.
#include <vector>
#include <cassert>
#include <iostream>
#include <cmath>
#include <cstdlib>
struct Vector {
double x, y, z;
Vector operator-(Vector p) const {
return Vector{x - p.x, y - p.y, z - p.z};
}
Vector cross(Vector p) const {
return Vector{
y * p.z - p.y * z,
z * p.x - p.z * x,
x * p.y - p.x * y
};
}
double dot(Vector p) const {
return x * p.x + y * p.y + z * p.z;
}
double norm() const {
return std::sqrt(x*x + y*y + z*z);
}
};
using Point = Vector;
struct Face {
std::vector<Point> v;
Vector normal() const {
assert(v.size() > 2);
Vector dir1 = v[1] - v[0];
Vector dir2 = v[2] - v[0];
Vector n = dir1.cross(dir2);
double d = n.norm();
return Vector{n.x / d, n.y / d, n.z / d};
}
};
bool isInConvexPoly(Point const& p, std::vector<Face> const& fs) {
for (Face const& f : fs) {
Vector p2f = f.v[0] - p; // f.v[0] is an arbitrary point on f
double d = p2f.dot(f.normal());
d /= p2f.norm(); // for numeric stability
constexpr double bound = -1e-15; // use 1e-15 to exclude boundaries
if (d < bound)
return false;
}
return true;
}
int main(int argc, char* argv[]) {
assert(argc == 3+1);
char* end;
Point p;
p.x = std::strtod(argv[1], &end);
p.y = std::strtod(argv[2], &end);
p.z = std::strtod(argv[3], &end);
std::vector<Face> cube{ // faces with 4 points, last point is ignored
Face{{Point{0,0,0}, Point{1,0,0}, Point{1,0,1}, Point{0,0,1}}}, // front
Face{{Point{0,1,0}, Point{0,1,1}, Point{1,1,1}, Point{1,1,0}}}, // back
Face{{Point{0,0,0}, Point{0,0,1}, Point{0,1,1}, Point{0,1,0}}}, // left
Face{{Point{1,0,0}, Point{1,1,0}, Point{1,1,1}, Point{1,0,1}}}, // right
Face{{Point{0,0,1}, Point{1,0,1}, Point{1,1,1}, Point{0,1,1}}}, // top
Face{{Point{0,0,0}, Point{0,1,0}, Point{1,1,0}, Point{1,0,0}}}, // bottom
};
std::cout << (isInConvexPoly(p, cube) ? "inside" : "outside") << std::endl;
return 0;
}
Compile it with your favorite compiler
clang++ -Wall -std=c++11 code.cpp -o inpoly
and test it like
$ ./inpoly 0.5 0.5 0.5
inside
$ ./inpoly 1 1 1
inside
$ ./inpoly 2 2 2
outside
If your mesh is concave, and not necessarily watertight, that’s rather hard to accomplish.
As a first step, find the point on the surface of the mesh closest to the query point. You need to keep track of the location and the specific feature: whether the closest point is in the middle of a face, on an edge of the mesh, or at one of the vertices of the mesh.
If the feature is a face, you're lucky: you can use the winding to find whether the point is inside or outside. Compute the normal to the face (you don't even need to normalize it; non-unit length will do), then compute dot( normal, pt - tri[0] ), where pt is your point and tri[0] is any vertex of the face. If the faces have consistent winding, the sign of that dot product will tell you if it's inside or outside.
If the feature is an edge, compute the normals of both adjacent faces (by normalizing a cross product), add them together, use that as the normal to the mesh, and compute the same dot product.
The hardest case is when a vertex is the closest feature. To compute the mesh normal at that vertex, you need to compute the sum of the normals of the faces sharing that vertex, weighted by the 2D angle of each face at that vertex. For example, for a vertex of a cube with 3 neighboring triangles, the weights will be Pi/2. For a vertex of a cube with 6 neighboring triangles the weights will be Pi/4. And for real-life meshes the weights will be different for each face, in the range [ 0 .. +Pi ]. This means you're going to need some inverse trigonometry code for this case to compute the angle, probably acos().
If you want to know why that works, see e.g. “Generating Signed Distance Fields From Triangle Meshes” by J. Andreas Bærentzen and Henrik Aanæs.
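As a rough sketch of the vertex case above, reusing the Vector struct from the convex-polyhedron answer earlier (this code is illustrative, not from the original answer):

#include <algorithm>
#include <cmath>
#include <utility>
#include <vector>

// Angle-weighted pseudo-normal at vertex v. Each incident triangle is given by
// its two other vertices (p, q), listed in the mesh's consistent winding order.
Vector vertexPseudoNormal(Vector const& v, std::vector<std::pair<Vector, Vector>> const& fans)
{
    Vector sum{0, 0, 0};
    for (auto const& pq : fans) {
        Vector e1 = pq.first - v;
        Vector e2 = pq.second - v;
        Vector n = e1.cross(e2);                       // face normal, unnormalized
        double nlen = n.norm();
        if (nlen == 0)
            continue;                                  // skip degenerate triangles
        double cosA = e1.dot(e2) / (e1.norm() * e2.norm());
        cosA = std::max(-1.0, std::min(1.0, cosA));    // clamp before acos
        double angle = std::acos(cosA);                // 2D angle of the face at v
        sum.x += angle * n.x / nlen;                   // weight the unit normal by that angle
        sum.y += angle * n.y / nlen;
        sum.z += angle * n.z / nlen;
    }
    return sum;
}

The same dot( pseudoNormal, pt - v ) test as in the face case then decides inside versus outside.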
I already answered this question a couple of years ago, but since then I've discovered a much better algorithm. It was invented in 2018; here's the link.
The idea is rather simple. Given the query point, compute the sum of the signed solid angles of all faces of the polyhedron as viewed from that point. If the point is outside, that sum will be zero. If the point is inside, that sum will be ±4·π steradians; + or - depends on the winding order of the faces of the polyhedron.
That particular algorithm packs the polyhedron into a tree, which dramatically improves performance when you need multiple inside/outside queries for the same polyhedron. The algorithm only computes solid angles for individual faces when the face is very close to the query point. For large sets of faces far away from the query point, it instead uses an approximation of those sets, based on some numbers kept in the nodes of the BVH tree built from the source mesh.
With the limited precision of FP math, and the losses from the approximation when using that BVH tree, the sum will never be exactly 0 nor exactly ±4·π. But still, the 2·π threshold works rather well in practice, at least in my experience: if the absolute value of that sum of solid angles is less than 2·π, consider the point to be outside.
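A brute-force sketch of that winding-number test (without the BVH acceleration described above), using the Van Oosterom-Strackee formula for the signed solid angle of a triangle and reusing the Vector struct from the earlier answer:

#include <array>
#include <cmath>
#include <vector>

// Signed solid angle of triangle (p0, p1, p2) as seen from q.
double solidAngle(Vector const& q, Vector const& p0, Vector const& p1, Vector const& p2)
{
    Vector a = p0 - q, b = p1 - q, c = p2 - q;
    double la = a.norm(), lb = b.norm(), lc = c.norm();
    double numer = a.dot(b.cross(c));   // scalar triple product
    double denom = la * lb * lc + a.dot(b) * lc + b.dot(c) * la + c.dot(a) * lb;
    return 2.0 * std::atan2(numer, denom);
}

// Sum over all faces: ~0 outside, ~±4π inside; 2π is a robust threshold.
bool insideByWinding(Vector const& q, std::vector<std::array<Vector, 3>> const& tris)
{
    double sum = 0.0;
    for (auto const& t : tris)
        sum += solidAngle(q, t[0], t[1], t[2]);
    return std::fabs(sum) > 2.0 * 3.141592653589793;
}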
It turns out that the problem was my reading of the algorithm referenced in the link above. I was reading:
N = - dot product of (P0-Vi) and ni;
as
N = - dot product of S and ni;
Having changed this, the code above now seems to work correctly. (I'm also updating the code in the question to reflect the correct solution).