Implementing Table-Lookup-Based Trig Functions - C++

For a video game I'm writing in my spare time, I've tried implementing my own versions of sinf(), cosf(), and atan2f() using lookup tables. The intent is to have implementations that are faster, albeit less accurate.
My initial implementation is below. The functions work, and return good approximate values. The only problem is that they are slower than calling the standard sinf(), cosf(), and atan2f() functions.
So, what am I doing wrong?
// Geometry.h includes definitions of PI, TWO_PI, etc., as
// well as the prototypes for the public functions
#include "Geometry.h"
namespace {
// Number of entries in the sin/cos lookup table
const int SinTableCount = 512;
// Angle covered by each table entry
const float SinTableDelta = TWO_PI / (float)SinTableCount;
// Lookup table for Sin() results
float SinTable[SinTableCount];
// This object initializes the contents of the SinTable array exactly once
class SinTableInitializer {
public:
SinTableInitializer() {
for (int i = 0; i < SinTableCount; ++i) {
SinTable[i] = sinf((float)i * SinTableDelta);
}
}
};
static SinTableInitializer sinTableInitializer;
// Number of entries in the atan lookup table
const int AtanTableCount = 512;
// Interval covered by each Atan table entry
const float AtanTableDelta = 1.0f / (float)AtanTableCount;
// Lookup table for Atan() results
float AtanTable[AtanTableCount];
// This object initializes the contents of the AtanTable array exactly once
class AtanTableInitializer {
public:
AtanTableInitializer() {
for (int i = 0; i < AtanTableCount; ++i) {
AtanTable[i] = atanf((float)i * AtanTableDelta);
}
}
};
static AtanTableInitializer atanTableInitializer;
// Lookup result in table.
// Preconditions: y > 0, x > 0, y < x
static float AtanLookup2(float y, float x) {
assert(y > 0.0f);
assert(x > 0.0f);
assert(y < x);
const float ratio = y / x;
const int index = (int)(ratio / AtanTableDelta);
return AtanTable[index];
}
}
float Sin(float angle) {
// If angle is negative, reflect around X-axis and negate result
bool mustNegateResult = false;
if (angle < 0.0f) {
mustNegateResult = true;
angle = -angle;
}
// Normalize angle so that it is in the interval [0, TWO_PI)
while (angle >= TWO_PI) {
angle -= TWO_PI;
}
const int index = (int)(angle / SinTableDelta);
const float result = SinTable[index];
return mustNegateResult? (-result) : result;
}
float Cos(float angle) {
return Sin(angle + PI_2);
}
float Atan2(float y, float x) {
// Handle x == 0 or x == -0
// (See atan2(3) for specification of sign-bit handling.)
if (x == 0.0f) {
if (y > 0.0f) {
return PI_2;
}
else if (y < 0.0f) {
return -PI_2;
}
else if (signbit(x)) {
return signbit(y)? -PI : PI;
}
else {
return signbit(y)? -0.0f : 0.0f;
}
}
// Handle y == 0, x != 0
if (y == 0.0f) {
return (x > 0.0f)? 0.0f : PI;
}
// Handle y == x
if (y == x) {
return (x > 0.0f)? PI_4 : -(3.0f * PI_4);
}
// Handle y == -x
if (y == -x) {
return (x > 0.0f)? -PI_4 : (3.0f * PI_4);
}
// For other cases, determine quadrant and do appropriate lookup and calculation
bool right = (x > 0.0f);
bool top = (y > 0.0f);
if (right && top) {
// First quadrant
if (y < x) {
return AtanLookup2(y, x);
}
else {
return PI_2 - AtanLookup2(x, y);
}
}
else if (!right && top) {
// Second quadrant
const float posx = fabsf(x);
if (y < posx) {
return PI - AtanLookup2(y, posx);
}
else {
return PI_2 + AtanLookup2(posx, y);
}
}
else if (!right && !top) {
// Third quadrant
const float posx = fabsf(x);
const float posy = fabsf(y);
if (posy < posx) {
return -PI + AtanLookup2(posy, posx);
}
else {
return -PI_2 - AtanLookup2(posx, posy);
}
}
else { // right && !top
// Fourth quadrant
const float posy = fabsf(y);
if (posy < x) {
return -AtanLookup2(posy, x);
}
else {
return -PI_2 + AtanLookup2(x, posy);
}
}
return 0.0f;
}

"Premature optimization is the root of all evil" - Donald Knuth
Nowadays compilers provide very efficient intrinsics for trigonometric functions that get the best out of modern processors (SSE etc.), which explains why you can hardly beat the built-in functions. Don't spend too much time on these parts; instead, concentrate on the real bottlenecks that you can spot with a profiler.

Remember you have a co-processor ... you would have seen an increase in speed if it were 1993; today, however, you will struggle to beat the native intrinsics.
Try viewing the disassembly of sinf.

Someone has already benchmarked this, and it looks as though the built-in trig functions are already heavily optimized and will be faster than any lookup table you can come up with:
http://www.tommti-systems.de/go.html?http://www.tommti-systems.de/main-Dateien/reviews/languages/benchmarks.html
(They didn't use anchors on the page so you have to scroll about 1/3 of the way down)

I'm worried by this part:
// Normalize angle so that it is in the interval [0, TWO_PI)
while (angle >= TWO_PI) {
angle -= TWO_PI;
}
But you can: add timers to all functions, write dedicated performance tests, run them, and print a report of the timings. I think you will know the answer after these tests.
You could also use a profiling tool such as AQTime.
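For example, here is a minimal timing sketch using std::chrono. It assumes the question's table-based Sin() is visible (e.g. via Geometry.h); the helper name and iteration count are otherwise made up:
#include "Geometry.h"   // assumed to declare the table-based Sin() from the question
#include <chrono>
#include <cmath>
#include <cstdio>

// Times n calls of a float->float function and returns milliseconds.
// The volatile sink keeps the compiler from optimizing the loop away.
template <typename F>
static double TimeMillis(F f, int n) {
    volatile float sink = 0.0f;
    const auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < n; ++i) {
        sink = sink + f((float)i * 0.001f);
    }
    const auto stop = std::chrono::steady_clock::now();
    (void)sink;
    return std::chrono::duration<double, std::milli>(stop - start).count();
}

void BenchmarkSin() {
    const int N = 10000000;
    std::printf("table Sin     : %.2f ms\n", TimeMillis([](float a) { return Sin(a); }, N));
    std::printf("std::sin float: %.2f ms\n", TimeMillis([](float a) { return std::sin(a); }, N));
}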

The built-in functions are very well optimized already, so it's going to be REALLY tough to beat them. Personally, I'd look elsewhere for places to gain performance.
That said, one optimization I can see in your code:
// Normalize angle so that it is in the interval [0, TWO_PI)
while (angle >= TWO_PI) {
angle -= TWO_PI;
}
Could be replaced with:
angle = fmodf(angle, TWO_PI);
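For reference, here is a sketch of what the question's Sin() looks like with that single change applied; everything else is taken unchanged from the original:
float Sin(float angle) {
    // If angle is negative, reflect around the X-axis and negate the result
    bool mustNegateResult = false;
    if (angle < 0.0f) {
        mustNegateResult = true;
        angle = -angle;
    }
    // Reduce the angle into [0, TWO_PI) in a single step instead of looping
    angle = fmodf(angle, TWO_PI);
    const int index = (int)(angle / SinTableDelta);
    const float result = SinTable[index];
    return mustNegateResult ? -result : result;
}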

Related

C++ get parameters before a for loop or not

What is the best practice in this case:
Should I fetch values into local variables before running a for loop, like this:
void Map::render(int layer, Camera* pCam)
{
int texture_index(m_tilesets[layer]->getTextureIndex());
int tile_width(m_size_of_a_tile.getX());
int tile_height(m_size_of_a_tile.getY());
int camera_x(pCam->getPosition().getX());
int camera_y(pCam->getPosition().getY());
int first_tile_x(pCam->getDrawableArea().getX());
int first_tile_y(pCam->getDrawableArea().getY());
int map_max_x( (640 / 16) + first_tile_x );
int map_max_y( (360 / 16) + first_tile_y );
if (map_max_x > 48) { map_max_x = 48; }
if (map_max_y > 28) { map_max_y = 28; }
Tile* t(nullptr);
for (int y(first_tile_y); y < map_max_y; ++y) {
for (int x(first_tile_x); x < map_max_x; ++x) {
// move map relative to camera
m_dst_rect.x = (x * tile_width) + camera_x;
m_dst_rect.y = (y * tile_height) + camera_y;
t = getTile(layer, x, y);
if (t) {
pTextureManager->draw(texture_index, getTile(layer, x, y)->src, m_dst_rect);
}
}
}
}
Or is it better to call the getters directly in the loop, like this (in this case the code is shorter but less readable):
void Map::render(int layer, Camera* pCam)
{
int first_tile_x(pCam->getDrawableArea().getX());
int first_tile_y(pCam->getDrawableArea().getY());
for (int y(first_tile_y); y < (360 / 16) + first_tile_y; ++y) {
for (int x(first_tile_x); x < (640 / 16) + first_tile_x; ++x) {
// move map relative to camera
m_dst_rect.x = (x * m_size_of_a_tile.getX()) + pCam->getPosition().getX();
m_dst_rect.y = (y * m_size_of_a_tile.getY()) + pCam->getPosition().getY();
Tile* t(getTile(layer, x, y));
if (t) {
pTextureManager->draw(m_tilesets[layer]->getTextureIndex(), getTile(layer, x, y)->src, m_dst_rect);
}
}
}
}
Is there an impact on performance using one method over another?
Syntactically the second version is to be preferred, as it keeps each value in the scope where it is used instead of leaking it into a wider context. Performance-wise you will need to profile, but I'd be surprised if there was any difference at all, because a compiler will often notice that the results don't change, at least for simple functions, and do this optimization for you.
For functions that are more complex or potentially dynamic, but that you know will not change their result during the loop, it makes sense to hoist them out before the loop, as in the sketch below.
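A minimal, self-contained sketch of that middle ground; expensiveLookup, draw and renderRow are made-up names for illustration:
#include <cstdio>
#include <vector>

// Placeholder for a call that is loop-invariant but not trivially cheap.
static int expensiveLookup() { return 42; }

static void draw(int texture, int x) { std::printf("texture %d at x=%d\n", texture, x); }

// Hoist the loop-invariant call once; leave cheap expressions inline, since
// the compiler will typically handle those on its own.
static void renderRow(const std::vector<int>& tiles) {
    const int texture = expensiveLookup();           // hoisted out of the loop
    for (std::size_t i = 0; i < tiles.size(); ++i) {
        draw(texture, tiles[i] * 16);                // cheap arithmetic left inline
    }
}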

C++ Collision Detection causing objects to disappear

I am currently working on some basic 2D rigid-body physics and have run into an issue. I have a function that checks for collision between a circle and an AABB, but sometimes the circle (in this case the player) will collide and then disappear, and if I print out the position when this happens I just see "nan".
bool Game::Physics::RigidBody2D::CircleAABB(RigidBody2D& body)
{
sf::Vector2f diff = m_Position - body.m_Position;
sf::Vector2f halfExtents = sf::Vector2f(body.m_Size.x / 2.0f, body.m_Size.y / 2.0f);
sf::Vector2f diffContrained = diff;
if (diff.x > halfExtents.x)
{
diffContrained.x = halfExtents.x;
}
else if (diff.x < -halfExtents.x)
{
diffContrained.x = -halfExtents.x;
}
if (diff.y > halfExtents.y)
{
diffContrained.y = halfExtents.y;
}
else if (diff.y < -halfExtents.y)
{
diffContrained.y = -halfExtents.y;
}
sf::Vector2f colCheck = diff - diffContrained;
sf::Vector2f VDirNorm = NormVector(colCheck);
sf::Vector2f colToPlayer = NormVector(m_Position - (diffContrained + body.m_Position));
float dist = getMagnitude(colCheck) - m_fRadius;
//std::cout << dist << std::endl;
if (dist < 0)
{
OnCollision((diffContrained + body.m_Position) - m_Position);
m_Position += (VDirNorm * abs(dist));
body.m_Position -= (VDirNorm * abs(dist))* (1.0f - body.m_fMass);
return true; //Collision has happened
}
return false;
}
This happens randomly and with almost no clear pattern, although it seems to happen more often when the circle is moving fast; it can also happen when it is moving slowly, or once or twice when it is not moving at all.
An added note is that I apply gravity to the Y velocity and, on collision, set the velocity of the corresponding axis to 0.
So my question is, is something clearly wrong here to those with more physics experience than me?
Note: I'm using SFML for drawing and its Vector2 class; the physics code is all mine.
EDIT: The OnCollision function checks the side of the collision so that objects that inherit from it can use this (e.g. check if the collision was below to trigger an "isGrounded" boolean). In this case the player checks the side, sets the velocity on that axis to 0, and also triggers an isGrounded boolean when the collision is below.
void Game::GamePlay::PlayerController::OnCollision(sf::Vector2f vDir)
{
if (abs(vDir.x) > abs(vDir.y))
{
if (vDir.x > 0.0f)
{
//std::cout << "Right" << std::endl;
//Collision on the right
m_Velocity.x = 0.0f;
}
if (vDir.x < 0.0f)
{
//std::cout << "Left" << std::endl;
//Collision on the left
m_Velocity.x = 0.0f;
}
return;
}
else
{
if (vDir.y > 0.0f)
{
//std::cout << "Below" << std::endl;
//Collision below
m_Velocity.y = 0.0f;
if (!m_bCanJump && m_RecentlyCollidedNode != nullptr)
{
m_RecentlyCollidedNode->ys += 3.f;
}
m_bCanJump = true;
}
if (vDir.y < 0.0f)
{
//std::cout << "Above" << std::endl;
//Collision above
m_Velocity.y = 0.0f;
}
}
}
Debugging the velocity and position has not revealed an obvious cause.
inline sf::Vector2f NormVector(sf::Vector2f vec)
{
float mag = getMagnitude(vec);
return vec / mag;
}
Solution:
if (colCheck.x == 0 && colCheck.y == 0)
{
std::cout << "Zero Vector" << std::endl;
float impulse = m_Velocity.x + m_Velocity.y;
m_Velocity.x = 0;
m_Velocity.y = 0;
m_Velocity += NormVector(diff)*impulse;
}
else
{
VDirNorm = NormVector(colCheck);
dist = getMagnitude(colCheck) - m_fRadius;
}
One issue I see is NormVector with a zero vector. You'll divide by zero, generating NaNs in your returned vector. This can happen in your existing code when diff and diffContrained are the same, so colCheck will be (0,0) causing VDirNorm to have NaNs in it, which will propagate into m_position.
Typically, a normalized zero length vector should stay a zero length vector (see this post), but in this case, since you're using the normalized vector to offset your bodies after the collision, you'll need to add code to handle it in a reasonable fashion.
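For example, here is a hedged sketch of such a guard, reusing the question's getMagnitude helper; the epsilon and the fallback direction are arbitrary choices:
// Returns the normalized vector, or a caller-supplied fallback direction when
// the input is (near) zero length, so no NaNs can propagate into positions.
inline sf::Vector2f SafeNormVector(const sf::Vector2f& vec,
                                   const sf::Vector2f& fallback = sf::Vector2f(0.f, -1.f))
{
    const float mag = getMagnitude(vec);
    if (mag < 1e-6f)          // treat tiny magnitudes as zero: avoid dividing by ~0
        return fallback;
    return vec / mag;
}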

Optimization issues using Barnes-Hut for graph layout

I've been trying to work out a force-directed graph layout with Barnes-Hut in my graph visualization app. So far I've checked the octree creation, and it looks correct (the tree is represented by boxes and the circles are my graph nodes):
The fields in my Quadtree class are the following:
class Quadtree
{
public:
int level;
Quadtree* trees[2][2][2];
glm::vec3 vBoundriesBox[8];
glm::vec3 center;
bool leaf;
float combined_weight = 0;
std::vector<Element*> objects;
//Additional methods/fields
private:
//Additional methods/fields
protected:
}
This is how I am adding elements recursively to my quadtree:
#define MAX_LEVELS 5
void Quadtree::AddObject(Element* object)
{
this->objects.push_back(object);
}
void Quadtree::Update()
{
if(this->objects.size()<=1 || level > MAX_LEVELS)
{
for(Element* Element:this->objects)
{
Element->parent_group = this;
this->combined_weight += Element->weight;
}
return;
}
if(leaf)
{
GenerateChildren();
leaf = false;
}
while (!this->objects.empty())
{
Element* obj = this->objects.back();
this->objects.pop_back();
if(contains(trees[0][0][0],obj))
{
trees[0][0][0]->AddObject(obj);
trees[0][0][0]->combined_weight += obj->weight;
} else if(contains(trees[0][0][1],obj))
{
trees[0][0][1]->AddObject(obj);
trees[0][0][1]->combined_weight += obj->weight;
} else if(contains(trees[0][1][0],obj))
{
trees[0][1][0]->AddObject(obj);
trees[0][1][0]->combined_weight += obj->weight;
} else if(contains(trees[0][1][1],obj))
{
trees[0][1][1]->AddObject(obj);
trees[0][1][1]->combined_weight += obj->weight;
} else if(contains(trees[1][0][0],obj))
{
trees[1][0][0]->AddObject(obj);
trees[1][0][0]->combined_weight += obj->weight;
} else if(contains(trees[1][0][1],obj))
{
trees[1][0][1]->AddObject(obj);
trees[1][0][1]->combined_weight += obj->weight;
} else if(contains(trees[1][1][0],obj))
{
trees[1][1][0]->AddObject(obj);
trees[1][1][0]->combined_weight += obj->weight;
} else if(contains(trees[1][1][1],obj))
{
trees[1][1][1]->AddObject(obj);
trees[1][1][1]->combined_weight += obj->weight;
}
}
for(int i=0;i<2;i++)
{
for(int j=0;j<2;j++)
{
for(int k=0;k<2;k++)
{
trees[i][j][k]->Update();
}
}
}
}
bool Quadtree::contains(Quadtree* child, Element* object)
{
if(object->pos[0] >= child->vBoundriesBox[0][0] && object->pos[0] <= child->vBoundriesBox[1][0] &&
object->pos[1] >= child->vBoundriesBox[4][1] && object->pos[1] <= child->vBoundriesBox[0][1] &&
object->pos[2] >= child->vBoundriesBox[3][2] && object->pos[2] <= child->vBoundriesBox[0][2])
return true;
return false;
}
As you can see in the picture, the nodes are very clustered. I've been trying to figure out how to fix my repulsion force calculations, but it is still not working and the result stays the same.
Here is how I'm calculating it:
First, in my main file, I run a loop through all graph nodes:
for(auto& n_el:graph->node_vector)
{
tree->CheckNode(&n_el);
}
Next, in my Quadtree class (tree is an object of this class), I have this recursive method:
void Quadtree::CheckNode(Node* node)
{
glm::vec3 diff = this->center - node->pos;
double distance_sqr = (diff.x * diff.x) + (diff.y*diff.y) + (diff.z*diff.z);
double width_sqr = (vBoundriesBox[1][0] - vBoundriesBox[0][0]) * (vBoundriesBox[1][0] - vBoundriesBox[0][0]);
if(width_sqr/distance_sqr < 10.0f || leaf)
{
if(leaf)
{
for(auto& n: objects)
{
n->Repulse(&objects);
}
}
else
{
node->RepulseWithGroup(this);
}
}
else
{
for(int i=0; i<2; i++)
{
for(int j=0; j<2; j++)
{
for(int k=0; k<2; k++)
{
trees[i][j][k]->CheckNode(node);
}
}
}
}
}
Finally, I have two methods that calculate the repulsion force, depending on whether it acts between a group and a node or between two nodes:
void Node::Repulse(std::vector<Node*>* nodes)
{
double dx;
double dy;
double dz;
double force = 0.0;
double distance_between;
double delta_weights;
double temp;
for(auto& element_node:*nodes)
{
if(this->name == element_node->name)
{
continue;
}
if(!element_node->use) continue;
delta_weights = 0.5 + abs(this->weight - element_node->weight);
dx = this->pos[0] - element_node->pos[0];
dy = this->pos[1] - element_node->pos[1];
dz = this->pos[2] - element_node->pos[2];
distance_between = dx * dx + dy * dy + dz * dz;
force = 0.19998 * delta_weights/(distance_between * distance_between);
temp = std::min(1.0, force);
if(temp<0.0001)
{
temp = 0;
}
double mx = temp * dx;
double my = temp * dy;
double mz = temp * dz;
this->pos[0] += mx;
this->pos[1] += my;
this->pos[2] += mz;
element_node->pos[0] -= mx;
element_node->pos[1] -= my;
element_node->pos[2] -= mz;
}
}
void Node::RepulseWithGroup(Quadtree* tree)
{
double dx;
double dy;
double dz;
double force = 0.0;
double distance_between;
double delta_weights;
double temp;
delta_weights = 0.5 + abs(this->weight - tree->combined_weight);
dx = this->pos[0] - tree->center.x;
dy = this->pos[1] - tree->center.y;
dz = this->pos[2] - tree->center.z;
distance_between = dx * dx + dy * dy + dz * dz;
force = 0.19998 * delta_weights/(distance_between * distance_between);
temp = std::min(1.0, force);
if(temp<0.0001)
{
temp = 0;
}
double mx = temp * dx;
double my = temp * dy;
double mz = temp * dz;
this->pos[0] += mx + this->parent_group->repulsion_force.x;
this->pos[1] += my + this->parent_group->repulsion_force.y;
this->pos[2] += mz + this->parent_group->repulsion_force.z;
}
In case this idea:
if(width_sqr/distance_sqr < 10.0f || leaf)
{
if(leaf)
{
for(auto& n: objects)
{
n->Repulse(&objects);
}
}
else
{
node->RepulseWithGroup(this);
}
}
is not clear: it is because I've figured out that there might actually be multiple elements in one tree leaf. That can happen when the maximum level has already been reached and elements still end up in the same box. Then I also need to calculate the force between the node and the other nodes inside that box.
What's bothering me even more is the speed of this approach (which suggests that the octree is not working correctly). This is a simple plot of time versus number of nodes:
As far as I know, the original force-directed graph algorithm has complexity O(n^2), but with Barnes-Hut it should be O(n log n). Yet the plot is not even close to n log n.
Can someone tell me what I am doing wrong here? I've been looking at this code for quite a while now, and I don't see what I am missing.
EDIT:
Based on Ilmari Karonen's answer I've run tests for MAX_LEVELS 5, 20, 50 and 100. The results are below. As it looks, there is no meaningful difference, I'd say (unfortunately).
Just off the top of my head,
#define MAX_LEVELS 5
seems awfully low. You may simply be running out of depth in your octree, causing your algorithm to degenerate into O(n²) direct summing. You may want to try increasing MAX_LEVELS to a significantly higher value (at least, say, 10 or 20) and seeing if that improves the performance.
I haven't tested your code, so I can't be sure if this is the real issue, or the only one. But it's definitely what I'd check first.
Looking a bit more closely at your code, I'm seeing a couple of other potential issues, too. These might not, strictly speaking, affect performance, but they might affect the correctness of the results.
First, you have a center vector in your Quadtree class, presumably representing the center of mass of the nodes within the subtree, but you never seem to update that vector when adding nodes into the tree. Since you do use that vector in your calculations, you might be getting bogus results because of that.
(In fact, since one thing you're using the center vector for is calculating the distance between a node and a subtree, and so deciding whether to descend deeper into the subtree, that might also be messing up your performance.)
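A minimal sketch of how such a running center of mass could be maintained while inserting nodes; the struct and its names are illustrative and not part of the question's Quadtree:
#include <glm/glm.hpp>

// Keeps a weighted running sum so centerOfMass() always reflects everything
// inserted so far; a tree node would call add() for each element placed below it.
struct MassAccumulator {
    glm::vec3 weighted_sum = glm::vec3(0.0f);
    float     total_weight = 0.0f;

    void add(const glm::vec3& pos, float weight) {
        weighted_sum += pos * weight;
        total_weight += weight;
    }
    glm::vec3 centerOfMass() const {
        // Fall back to the origin if nothing has been inserted yet.
        return total_weight > 0.0f ? weighted_sum / total_weight : glm::vec3(0.0f);
    }
};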
Also, you seem to be updating the positions directly while traversing the tree, which means that the trajectories generated by your algorithm will depend on the order in which the nodes are traversed and the tree expanded. For more consistent and reproducible results, you may want to first calculate the displacement of each node during the current iteration of the algorithm, storing it in a separate vector, and then run a second pass over the nodes to add the displacement to their position (and reset it for the next iterations).
Also, surely I can't be the only one who finds the fact that you have a class named Quadtree that implements an octree annoying, can I? :)

Calculating a curve through any 3 points in a normalized matrix using C++

I have a simple 2-D matrix with normalized values, the x-axis between 0 and 1 and the y-axis between 0 and 1, and I have 3 points in this matrix, e.g. P1 (x=0.2, y=0.9), P2 (x=0.5, y=0.1) and P3 (x=0.9, y=0.4).
How can I simply calculate a curve through these points, i.e. have a function that gives me y for any x?
I know that there are any number of possible curves through 3 points. But you know what I mean: I want a smooth curve through them, usable for audio sample interpolation, for calculating a volume fade curve, or for calculating a monster's walking path in a game.
I have now searched the net on this question for about 3 days, and I cannot believe that there is no usable solution for this task. All the texts about Catmull-Rom splines, Bézier curves and all that theoretical stuff have at least one point that makes them unusable for me. For example, Catmull-Rom splines need a fixed distance between the control points (I would use this code and set the 4th point's y to the 3rd point's y):
void CatmullRomSpline(float *x,float *y,float x1,float y1,float x2,float y2,float x3,float y3,float x4,float y4,float u)
{
//x,y are calculated from x1,y1 ... x4,y4; u is the normalized distance (0-1) relative to the distance between x2 and x3 for the desired point
float u3,u2,f1,f2,f3,f4;
u3=u*u*u;
u2=u*u;
f1=-0.5f * u3 + u2 -0.5f *u;
f2= 1.5f * u3 -2.5f * u2+1.0f;
f3=-1.5f * u3 +2.0f * u2+0.5f*u;
f4=0.5f*u3-0.5f*u2;
*x=x1*f1+x2*f2+x3*f3+x4*f4;
*y=y1*f1+y2*f2+y3*f3+y4*f4;
}
But I don't see that x1 to x4 have any effect on the calculation of y, so I think x1 to x4 must have the same distance?
...
Or the Bézier code doesn't calculate the curve through the points. The points (at least the 2nd point) seem to act only as a pulling force on the line.
typedef struct Point2D
{
double x;
double y;
} Point2D;
class bezier
{
std::vector<Point2D> points;
public:
bezier();
void PushPoint2D( Point2D point );
Point2D GetPoint( double time );
~bezier();
};
void bezier::PushPoint2D(Point2D point)
{
points.push_back(point);
}
Point2D bezier::GetPoint( double x )
{
int i;
Point2D p;
p.x=0;
p.y=0;
if( points.size() == 1 ) return points[0];
if( points.size() == 0 ) return p;
bezier b;
for (i=0;i<(int)points.size()-1;i++)
{
p.x = ( points[i+1].x - points[i].x ) * x + points[i].x;
p.y = ( points[i+1].y - points[i].y ) * x + points[i].y;
if (points.size()<=2) return p;
b.PushPoint2D(p);
}
return b.GetPoint(x);
}
double GetLogicalYAtX(double x)
{
bezier bz;
Point2D p;
p.x=0.2;
p.y=0.9;
bz.PushPoint2D(p);
p.x=0.5;
p.y=0.1;
bz.PushPoint2D(p);
p.x=0.9;
p.y=0.4;
bz.PushPoint2D(p);
p=bz.GetPoint(x);
return p.y;
}
This is better than nothing, but it is 1. very slow (recursive) and 2. as I said, it doesn't really run the line through the 2nd point.
Is there a mathematical brain out there who could help me?
Thank you TOCS (Scott) for providing your code; I will also try it when I have some time. But what I have tested now is the hint by INFACT (answer 3): these "Lagrange polynomials" are very, very close to what I am searching for:
I have renamed my class bezier to curve, because I have added some code for Lagrange interpolation. I have also added 3 pictures showing graphically what the code calculates.
In picture 1 you can see the loose middle point of the old Bézier function.
In picture 2 you can now see the result of the Lagrange interpolation, which goes through all points.
In picture 3 you can see the only problem, or should I say the thing that still needs to be solved (anyway, it's the best solution so far): if I move the middle point, the curve heads too fast, too steeply, toward the upper or lower boundary. I would like it to approach the upper and lower bounds more smoothly, more like a logarithm function, so that it doesn't exceed the y boundaries between 0 and 1 too soon.
Now my code looks like this:
curve::curve(void)
{
}
void curve::PushPoint2D(Point2D point)
{
points.push_back(point);
}
Point2D curve::GetPoint( double x )
{
//GetPoint y for x with bezier-mathematics...
//was the only calculating function in old class "bezier"
//now the class is renamed "curve"
int i;
Point2D p;
p.x=0;
p.y=0;
if( points.size() == 1 ) return points[0];
if( points.size() == 0 ) return p;
curve b;
for (i=0;i<(int)points.size()-1;i++)
{
p.x = ( points[i+1].x - points[i].x ) * x + points[i].x;
p.y = ( points[i+1].y - points[i].y ) * x + points[i].y;
if (points.size()<=2) return p;
b.PushPoint2D(p);
}
return b.GetPoint(x);
}
//THIS IS NEW AND VERY VERY COOL
double curve::LagrangeInterpolation(double x)
{
double y = 0;
for (int i = 0; i <= (int)points.size()-1; i++)
{
double numerator = 1;
double denominator = 1;
for (int c = 0; c <= (int)points.size()-1; c++)
{
if (c != i)
{
numerator *= (x - points[c].x);
denominator *= (points[i].x - points[c].x);
}
}
y += (points[i].y * (numerator / denominator));
}
return y;
}
curve::~curve(void)
{
}
double GetLogicalYAtX(double x)
{
curve cv;
Point2D p;
p.x=0; //always left edge
p.y=y1; //now by var
cv.PushPoint2D(p);
p.x=x2; //now by var
p.y=y2; //now by var
cv.PushPoint2D(p);
p.x=1; //always right edge
p.y=y3; //now by var
cv.PushPoint2D(p);
//p=cv.GetPoint(x);
//return p.y;
return cv.LagrangeInterpolation(x);
}
Do you have any ideas how I could make the new solution a little bit "softer", so that I can move the 2nd point over a larger area without the curve going out of bounds? Thank you.
static bezier From3Points(const Point2D &a, const Point2D &b, const Point2D &c)
{
bezier result;
result.PushPoint2D(a);
Point2D middle;
middle.x = 2*b.x - a.x/2 - c.x/2;
middle.y = 2*b.y - a.y/2 - c.y/2;
result.PushPoint2D(middle);
result.PushPoint2D(c);
return result;
}
Untested, but should return a bezier curve where at t=0.5 the curve passes through point 'b'.
Additionally (more untested code ahead), you can calculate your points using Bernstein basis polynomials like so.
static int binomialcoefficient (int n, int k)
{
if (k < 0 || k > n)
return 0;
// Multiplicative formula: C(n,k) = prod_{i=1..k} (n - k + i) / i
int result = 1;
for (int i = 1; i <= k; ++i)
{
result = result * (n - (k - i)) / i;
}
return result;
}
static double bernstein (int v, int n, double t)
{
return binomialcoefficient(n,v) * pow(t,v) * pow(1 - t,n - v);
}
Point2D GetPoint (double t)
{
Point2D result;
result.x = 0;
result.y = 0;
for (int i = 0; i < points.size(); ++i)
{
double coeff = bernstein (i,(int)points.size()-1,t); // the curve's degree is (number of points - 1)
result.x += coeff * points[i].x;
result.y += coeff * points[i].y;
}
return result;
}
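A small usage sketch for the helpers above, built from the question's three points; it assumes From3Points and GetPoint are public members of the bezier class:
#include <cstdio>

int main()
{
    Point2D a = {0.2, 0.9}, b = {0.5, 0.1}, c = {0.9, 0.4};
    bezier bz = bezier::From3Points(a, b, c);
    // Sample the curve; at t = 0.5 it should pass (approximately) through b.
    for (double t = 0.0; t <= 1.0; t += 0.25) {
        Point2D p = bz.GetPoint(t);
        std::printf("t=%.2f -> (%.3f, %.3f)\n", t, p.x, p.y);
    }
    return 0;
}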

Find all points of a grid within a circle, ordered by norm

How would you solve the problem of finding the points of a (integer) grid within a circle centered on the origin of the axis, with the results ordered by norm, as in distance from the centre, in C++?
I wrote an implementation that works (yeah, I know, it is extremely inefficient, but for my problem anything more would be overkill). I'm extremely new to C++, so my biggest problem was finding a data structure capable of
being sortable;
being able to hold an array in one of its elements,
rather than the implementation of the algorithm. My code is as follows. Thanks in advance, everyone!
typedef std::pair<int, int[2]> norm_vec2d;
bool norm_vec2d_cmp (norm_vec2d a, norm_vec2d b)
{
bool bo;
bo = (a.first < b.first ? true: false);
return bo;
}
int energy_to_momenta_2D (int energy, std::list<norm_vec2d> *momenta)
{
int i, j, norm, n=0;
int energy_root = (int) std::sqrt(energy);
norm_vec2d temp;
for (i=-energy_root; i<=energy_root; i++)
{
for (j =-energy_root; j<=energy_root; j++)
{
norm = i*i + j*j;
if (norm <= energy)
{
temp.first = norm;
temp.second[0] = i;
temp.second[1] = j;
(*momenta).push_back (temp);
n++;
}
}
}
(*momenta).sort(norm_vec2d_cmp);
return n;
}
How would you solve the problem of finding the points of a (integer) grid within a circle centered on the origin of the axis, with the results ordered by norm, as in distance from the centre, in C++?
I wouldn't use a std::pair to hold the points. I'd create my own more descriptive type.
struct Point {
int x;
int y;
int square() const { return x*x + y*y; }
Point(int x = 0, int y = 0)
: x(x), y(y) {}
bool operator<(const Point& pt) const {
if( square() < pt.square() )
return true;
if( pt.square() < square() )
return false;
if( x < pt.x )
return true;
if( pt.x < x)
return false;
return y < pt.y;
}
friend std::ostream& operator<<(std::ostream& os, const Point& pt) {
return os << "(" << pt.x << "," << pt.y << ")";
}
};
This data structure is (probably) exactly the same size as two ints, it is less-than comparable, it is assignable, and it is easily printable.
The algorithm walks through all of the valid points that satisfy x=[0,radius] && y=[0,x] && (x,y) inside circle:
std::set<Point>
GetListOfPointsInsideCircle(double radius = 1) {
std::set<Point> result;
// Only examine bottom half of quadrant 1, then
// apply symmetry 8 ways
for(Point pt(0,0); pt.x <= radius; pt.x++, pt.y = 0) {
for(; pt.y <= pt.x && pt.square()<=radius*radius; pt.y++) {
result.insert(pt);
result.insert(Point(-pt.x, pt.y));
result.insert(Point(pt.x, -pt.y));
result.insert(Point(-pt.x, -pt.y));
result.insert(Point(pt.y, pt.x));
result.insert(Point(-pt.y, pt.x));
result.insert(Point(pt.y, -pt.x));
result.insert(Point(-pt.y, -pt.x));
}
}
return result;
}
I chose a std::set to hold the data for two reasons:
It is stored in sorted order, so I don't have to std::sort it, and
It rejects duplicates, so I don't have to worry about points whose reflection are identical
Finally, using this algorithm is dead simple:
int main () {
std::set<Point> vp = GetListOfPointsInsideCircle(2);
std::copy(vp.begin(), vp.end(),
std::ostream_iterator<Point>(std::cout, "\n"));
}
It's always worth adding a point class for such geometric problems, since you usually have more than one to solve. But I don't think it's a good idea to overload the less-than operator to satisfy the first need you encounter, because:
Specifying the comparator where you sort makes it clear what order you want there.
Specifying the comparator allows you to change it easily without affecting your generic point class.
Distance to the origin is not a bad order, but for a grid it's probably better to use rows and columns (sort by x first, then y).
Such a comparator is slower and will thus slow down any other set of points where you don't even care about the norm.
Anyway, here is a simple solution using a specific comparator and trying to optimize a bit:
struct v2i{
int x,y;
v2i(int px, int py) : x(px), y(py) {}
int norm() const {return x*x+y*y;}
};
bool r_comp(const v2i& a, const v2i& b)
{ return a.norm() < b.norm(); }
std::vector<v2i> result;
for(int x = -r; x <= r; ++x) {
int my = r*r - x*x;
for(int y = 0; y*y <= my; ++y) {
result.push_back(v2i(x,y));
if(y > 0)
result.push_back(v2i(x,-y));
}
}
std::sort(result.begin(), result.end(), r_comp);
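A self-contained usage sketch wrapping that snippet in a function; the function name and the printing in main are illustrative:
#include <algorithm>
#include <cstdio>
#include <vector>

// v2i and r_comp as defined above.
std::vector<v2i> gridPointsInCircle(int r)
{
    std::vector<v2i> result;
    for (int x = -r; x <= r; ++x) {
        int my = r * r - x * x;                 // remaining squared radius for this column
        for (int y = 0; y * y <= my; ++y) {
            result.push_back(v2i(x, y));
            if (y > 0)                          // avoid inserting (x, 0) twice
                result.push_back(v2i(x, -y));
        }
    }
    std::sort(result.begin(), result.end(), r_comp);
    return result;
}

int main()
{
    for (const v2i& p : gridPointsInCircle(2))
        std::printf("(%d,%d) norm=%d\n", p.x, p.y, p.norm());
}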