Initializing a box with N particles arranged in a specific pattern - c++

I'm new to C++, and as an exercise I'm trying to reproduce what was done by Metropolis et al. (Metropolis Monte Carlo).
What I have done thus far - Made 2 classes: Vector and Atom
#include <cmath> // for sqrt

class Vector {
public:
    double x;
    double y;
    Vector() {
    }
    Vector(double x_, double y_) {
        x = x_;
        y = y_;
    }
    double len() {
        return sqrt(x*x + y*y);
    }
    double lenSqr() {
        return x*x + y*y;
    }
};

class Atom {
public:
    Vector pos;
    Vector vel;
    Vector force;
    Atom(double x_, double y_) {
        pos = Vector(x_, y_);
        vel = Vector(0, 0);
        force = Vector(0, 0);
    }
    double KE() {
        return .5 * vel.lenSqr();
    }
};
I am not certain that the way I have defined the class Atom is... the best way to go about things since I will not be using a random number generator to place the atoms in the box.
My problem:
I need to initialize a box of length L (in my case L=1) and load it with 224 atoms/particles in an offset lattice (I have included a picture). I have done some reading and I was wondering if maybe an array would be appropriate here.
One thing that I am confused about is how I could normalize the array to get the appropriate distance between the particles and what would happen to the array once the particles begin to move. I am also not sure how an array could give me the x and y position of each and every atom in the box.
Metropolis offset (hexagonal) lattice

Well, it seems that you generally don't need an array to represent the lattice itself. In practice, representing the lattice as an array only really makes sense if your atoms can move only between discrete cells (like pieces on a chessboard). Your atoms, however, can move in any direction (which already rules out such a rigid structure, since a 2D array naturally offers only 4 or 8 directions to move in) and by an arbitrarily small step (also bad for arrays, because you would need an almost countless number of cells to represent the minimal distance step).
So basically all you need is an array as storage for your 224 atoms, setting each one's position in the lattice via its pos member.
std::vector<Atom> atoms;
// initialize atoms to be in trigonal lattice
const double x_shift = 1. / 14;
const double y_shift = 1. / 16;
double x_offset = 0;
for (double y = 0; y < 1; y += y_shift){
    for (double x = x_offset; x < 1; x += x_shift){
        // create atom in position (x, y)
        // and store it in array of atoms
        atoms.push_back(Atom(x, y));
    }
    // every new row flip offset 0 -> 1/28 -> 0 -> 1/28...
    if (x_offset == 0){
        x_offset = x_shift / 2;
    }
    else{
        x_offset = 0;
    }
}
Afterwards you just need to iterate over this array of atoms and update their positions, velocities and whatever else you need according to the algorithm.
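For the Metropolis step itself, a minimal sketch of one sweep might look like the following. This is only an illustration: pairEnergy() and the inverse temperature beta are hypothetical stand-ins that do not appear in the question, and periodic boundary conditions are ignored.
#include <cmath>    // exp
#include <cstddef>  // size_t
#include <cstdlib>  // rand, RAND_MAX
#include <vector>

// Hypothetical pair potential -- not part of the original question.
double pairEnergy(const Atom& a, const Atom& b);

// Energy of atom i against all other atoms.
double energyOf(const std::vector<Atom>& atoms, std::size_t i)
{
    double e = 0;
    for (std::size_t j = 0; j < atoms.size(); ++j)
        if (j != i)
            e += pairEnergy(atoms[i], atoms[j]);
    return e;
}

// One Metropolis sweep: try a random displacement of every atom and
// accept it with probability min(1, exp(-beta * dE)).
void metropolisSweep(std::vector<Atom>& atoms, double maxStep, double beta)
{
    for (std::size_t i = 0; i < atoms.size(); ++i)
    {
        Vector old = atoms[i].pos;
        double eOld = energyOf(atoms, i);

        // propose a displacement in [-maxStep, maxStep] on each axis
        atoms[i].pos.x += maxStep * (2.0 * rand() / RAND_MAX - 1.0);
        atoms[i].pos.y += maxStep * (2.0 * rand() / RAND_MAX - 1.0);

        double eNew = energyOf(atoms, i);

        // reject the move (restore the old position) if it is not accepted
        if (std::exp(-beta * (eNew - eOld)) < rand() / (double)RAND_MAX)
            atoms[i].pos = old;
    }
}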

Related

Performing a cubic fit to a set of four points to extrapolate a "local" path, Or working alternatives?

Problem: Generate an extrapolated local path which provides path points ahead of the maximum FOV.
Situation: A car moves around an unknown looped track of varying shape using a limited field of view, so it can only reliably provide 3 points ahead of the car plus the car's current position. For more information: the track is defined by cone gates, and the information provided about the locations of those gates is 2D (x, y).
Background: I have successfully generated a vector of midpoints between gates, but I wish to generate an extrapolated path for the motion control algorithm to use. The format of this path needs to be a sequence of PathPoint(s) which contain (x, y, velocity, gravity). Note that gravity is just used to cap the maximum acceleration and is not important to the situation, nor is velocity, as this post is only concerned with generating the respective (x, y) coordinates.
Attempted solution methodology: Fit two cubic functions, f(x) for the X positions and g(y) for the Y positions, using the set of four points. These functions are then evaluated as the desired (f(x), g(y)) positions as the look-ahead distance is varied to supply 20 path points.
Question: I do not believe this method to be correct in either theory or implementation. Can anyone think of an easy/simple methodology that gives the position on the x axis and the position on the y axis as functions of the overall distance from the car?
double PathPlanningClass::squared(double arg)
{
    return arg * arg;
}
double PathPlanningClass::cubed(double arg)
{
    return arg * arg * arg;
}
//https://eigen.tuxfamily.org/dox/group__TutorialLinearAlgebra.html
void PathPlanningClass::Coeffs()
{
    // Fit a cubic through the car position and three midpoints.
    // Each row needs a leading 1 for the constant coefficient so the
    // matrix is actually 4x4.
    Eigen::Matrix4f Aone;
    Eigen::Vector4f bone;
    Aone << 1, _x, squared(_x), cubed(_x),
            1, _midpoints[0].getX(), squared(_midpoints[0].getX()), cubed(_midpoints[0].getX()),
            1, _midpoints[1].getX(), squared(_midpoints[1].getX()), cubed(_midpoints[1].getX()),
            1, _midpoints[_midpoints.size()-1].getX(), squared(_midpoints[_midpoints.size()-1].getX()), cubed(_midpoints[_midpoints.size()-1].getX());
    bone << _y, _midpoints[0].getY(), _midpoints[1].getY(), _midpoints[_midpoints.size()-1].getY();
    Eigen::Vector4f x = Aone.colPivHouseholderQr().solve(bone);
    // Eigen vectors are zero-indexed
    _Ax = x(0);
    _Bx = x(1);
    _Cx = x(2);
    _Dx = x(3);

    Eigen::Matrix4f Atwo;
    Eigen::Vector4f btwo;
    Atwo << 1, _y, squared(_y), cubed(_y),
            1, _midpoints[0].getY(), squared(_midpoints[0].getY()), cubed(_midpoints[0].getY()),
            1, _midpoints[1].getY(), squared(_midpoints[1].getY()), cubed(_midpoints[1].getY()),
            1, _midpoints[_midpoints.size()-1].getY(), squared(_midpoints[_midpoints.size()-1].getY()), cubed(_midpoints[_midpoints.size()-1].getY());
    btwo << _x, _midpoints[0].getX(), _midpoints[1].getX(), _midpoints[_midpoints.size()-1].getX();
    // solve the second system, not the first one again
    Eigen::Vector4f y = Atwo.colPivHouseholderQr().solve(btwo);
    _Ay = y(0);
    _By = y(1);
    _Cy = y(2);
    _Dy = y(3);
    return;
}
void PathPlanningClass::extrapolate()
{
    // number of desired points
    int numOfpoints = 20;
    // distance to be extrapolated from car's location
    double distance = 10;
    // the argument for g(y) and f(x)
    double arg = distance/numOfpoints;
    for (int i = 0 ; i < numOfpoints; i++)
    {
        double farg = _Ax + _Bx*arg*i + _Cx*squared(arg*i) + _Dx*cubed(arg*i);
        double garg = _Ay + _By*arg*i + _Cy*squared(arg*i) + _Dy*cubed(arg*i);
        PathPoint newPoint(farg, garg, velocity(_x, _y, _yaw), 9.8);
        _path.push_back(newPoint);
    }
    return;
}
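For what it's worth, one common alternative to fitting y against x (not taken from the post above) is to parameterise both coordinates by the cumulative chord length s along the four known points and fit x(s) and y(s) as separate cubics, so both positions become functions of the distance travelled. A rough sketch using Eigen, with a hypothetical Pt struct standing in for the midpoint class:
#include <Eigen/Dense>
#include <cmath>

struct Pt { double x, y; };  // stand-in for the post's point type

// Fit x(s) and y(s) as cubics in the cumulative chord length s of four points.
void fitByChordLength(const Pt pts[4],
                      Eigen::Vector4d& xCoeffs, Eigen::Vector4d& yCoeffs)
{
    double s[4] = { 0, 0, 0, 0 };
    for (int i = 1; i < 4; ++i)
        s[i] = s[i - 1] + std::hypot(pts[i].x - pts[i - 1].x,
                                     pts[i].y - pts[i - 1].y);

    Eigen::Matrix4d A;
    Eigen::Vector4d bx, by;
    for (int i = 0; i < 4; ++i)
    {
        A.row(i) << 1.0, s[i], s[i] * s[i], s[i] * s[i] * s[i];
        bx(i) = pts[i].x;
        by(i) = pts[i].y;
    }
    xCoeffs = A.colPivHouseholderQr().solve(bx);
    yCoeffs = A.colPivHouseholderQr().solve(by);
}

// Evaluate the fitted path at distance s along the chord parameter.
Pt evalPath(const Eigen::Vector4d& xc, const Eigen::Vector4d& yc, double s)
{
    Pt p;
    p.x = xc(0) + xc(1) * s + xc(2) * s * s + xc(3) * s * s * s;
    p.y = yc(0) + yc(1) * s + yc(2) * s * s + yc(3) * s * s * s;
    return p;
}
Extrapolating is then just a matter of evaluating at values of s beyond the last fitted point, bearing in mind that any polynomial extrapolation degrades quickly with distance.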

C++ - How to generate every possible combination of n 3D coordinates by incrementing x/y/z by a given value x

As part of a larger program I need to generate every possible set of 3D coordinate points contained within the rectangular prism formed by the origin and the point (Y1, Y2, Y3), given the number of points, n, that will be in the set, and the value by which the x/y/z values are to be incremented.
This was what I initially wrote, which does the job of cycling through all possible coordinates correctly for an individual point, but does not correctly generate all the overall combinations of points needed.
In the program I created a point object, and created a vector of point objects with default x/y/z values of zero.
void allPoints(double Y1, double Y2, double Y3, double increment, vector<Point> pointset)
{
    int count = pointset.size()-1;
    while (count>=0)
    {
        while (pointset.at(count).getX()<Y1)
        {
            while (pointset.at(count).getY()<Y2)
            {
                while (pointset.at(count).getZ()<Y3)
                {
                    //insert intended statistical test to be run on each possible set here
                }
                pointset.at(count).setZ(0);
                pointset.at(count).incY(increment);
            }
            pointset.at(count).setY(0);
            pointset.at(count).incX(increment);
        }
        count--;
    }
}
I am new to coding and may be approaching this entirely wrong, and am just looking for help getting in the right direction. If using a point object isn't the way to go, it's not needed in the rest of the program - I could use 3d arrays instead.
Thanks!
Let's assume you have a class Point3d which represents a point and a class Vec3d which represents a vector that can translate points (with the proper operators defined).
In that case it should go like this:
std::vector<Point3d> CrystalNet(
    size_t size,
    const Point3d& origin,
    const Vec3d& a = { 1, 0, 0 },
    const Vec3d& b = { 0, 1, 0 },
    const Vec3d& c = { 0, 0, 1 })
{
    std::vector<Point3d> result;
    result.reserve(size * size * size);
    for (size_t i = 0; i < size; ++i)
        for (size_t j = 0; j < size; ++j)
            for (size_t k = 0; k < size; ++k) {
                result.emplace_back(origin + a * i + b * j + c * k);
            }
    return result;
}
Defining Point3d and Vec3d is quite standard and I'm sure there is a ready-made library which can do it.
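If you don't want to pull in a library, a minimal sketch of the two types (just enough for the snippet above, with the multiply and translate operators it relies on) could look like this:
struct Vec3d {
    double x, y, z;
};

struct Point3d {
    double x, y, z;
};

// scale a vector by an index (or any scalar)
inline Vec3d operator*(const Vec3d& v, double s)
{
    return { v.x * s, v.y * s, v.z * s };
}

// translate a point by a vector
inline Point3d operator+(const Point3d& p, const Vec3d& v)
{
    return { p.x + v.x, p.y + v.y, p.z + v.z };
}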
The chief problem appears to be that your textual description is about creating a pointset. The count isn't known up front. The example code takes an already created pointset. That just doesn't work.
That's also why you end up with the // insert test here - that's not the location for a test, that's where you would add a new point to the pointset you have to create.
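To make that concrete, here is a hedged sketch of one way to structure it: first build the set of candidate grid points inside the prism, then walk over every n-element combination of those candidates with an index vector. The GridPoint type and the callback are assumptions for illustration, not taken from the post.
#include <vector>

// Assumed minimal point type; the post's Point class would do as well.
struct GridPoint { double x, y, z; };

// Step 1: every grid position inside the prism [0,Y1] x [0,Y2] x [0,Y3].
std::vector<GridPoint> allGridPoints(double Y1, double Y2, double Y3, double inc)
{
    std::vector<GridPoint> pts;
    for (double x = 0; x < Y1; x += inc)
        for (double y = 0; y < Y2; y += inc)
            for (double z = 0; z < Y3; z += inc)
                pts.push_back({ x, y, z });
    return pts;
}

// Step 2: visit every n-element combination of those candidates.
template <class F>
void forEachCombination(const std::vector<GridPoint>& pts, int n, F&& test)
{
    if (n <= 0 || n > (int)pts.size())
        return;
    std::vector<int> idx(n);
    for (int i = 0; i < n; ++i) idx[i] = i;        // first combination 0,1,...,n-1
    while (true)
    {
        std::vector<GridPoint> set;
        for (int i : idx) set.push_back(pts[i]);
        test(set);                                  // run the statistical test here

        // advance to the next combination (standard "odometer" step)
        int i = n - 1;
        while (i >= 0 && idx[i] == (int)pts.size() - n + i) --i;
        if (i < 0) break;
        ++idx[i];
        for (int j = i + 1; j < n; ++j) idx[j] = idx[j - 1] + 1;
    }
}
Be aware that the number of combinations explodes very quickly, so running a statistical test on every one is only feasible for small grids and small n.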

Drawing circle, OpenGL style

I have a 13 x 13 array of pixels, and I am using a function to draw a circle onto them. (The screen is 13 * 13, which may seem strange, but it's an array of LEDs, so that explains it.)
unsigned char matrix[13][13];
const unsigned char ON = 0x01;
const unsigned char OFF = 0x00;
Here is the first implementation I thought up. (It's inefficient, which is a particular problem as this is an embedded systems project, 80 MHz processor.)
// Draw a circle
// mode is 'ON' or 'OFF'
inline void drawCircle(float rad, unsigned char mode)
{
    for(int ix = 0; ix < 13; ++ix)
    {
        for(int jx = 0; jx < 13; ++jx)
        {
            float r; // Radial
            float s; // Angular ("theta")
            matrix_to_polar(ix, jx, &r, &s); // Converts the index coordinates
                                             // specified by ix and jx to polar
                                             // coordinates r and s, where
                                             // s is the angle.
                                             // This function just translates
                                             // by 6.0 to cartesian and then
                                             // converts to polar.
            if(r < rad)
            {
                matrix[ix][jx] = mode; // Turn pixel in matrix 'ON' or 'OFF'
            }
        }
    }
}
I hope that's clear. It's pretty simple, but then I programmed it so I know how it's supposed to work. If you'd like more info / explanation then I can add some more code / comments.
Drawing several circles, e.g. 4 to 6, is very slow... hence I'm asking for advice on a more efficient algorithm to draw the circles.
EDIT: Managed to double the performance by making the following modification:
The function calling the drawing used to look like this:
for(;;)
{
    clearAll(); // Clear matrix
    for(int ix = 0; ix < 6; ++ix)
    {
        rad[ix] += rad_incr_step;
        drawRing(rad[ix], rad[ix] - rad_width);
    }
    if(rad[5] >= 7.0)
    {
        for(int ix = 0; ix < 6; ++ix)
        {
            rad[ix] = rad_space_step * (float)(-ix);
        }
    }
    writeAll(); // Write
}
I added the following check:
if(rad[ix] - rad_width < 7.0)
drawRing(rad[ix], rad[ix] - rad_width);
This increased the performance by a factor of about 2, but ideally I'd like to make the circle drawing more efficient to increase it further. This checks to see if the ring is completely outside of the screen.
EDIT 2: Similarly adding the reverse check increased performance further.
if(rad[ix] >= 0.0)
drawRing(rad[ix], rad[ix] - rad_width);
Performance is now pretty good, but again I have made no modifications to the actual drawing code of the circles and this is what I was intending to focus on with this question.
Edit 3: Matrix to polar:
inline void matrix_to_polar(int i, int j, float* r, float* s)
{
float x, y;
matrix_to_cartesian(i, j, &x, &y);
calcPolar(x, y, r, s);
}
inline void matrix_to_cartesian(int i, int j, float* x, float* y)
{
*x = getX(i);
*y = getY(j);
}
inline void calcPolar(float x, float y, float* r, float* s)
{
*r = sqrt(x * x + y * y);
*s = atan2(y, x);
}
inline float getX(int xc)
{
return (float(xc) - 6.0);
}
inline float getY(int yc)
{
return (float(yc) - 6.0);
}
In response to Clifford: that's actually a lot of function calls if they are not inlined.
Edit 4: drawRing just draws 2 circles, firstly an outer circle with mode ON and then an inner circle with mode OFF. I am fairly confident that there is a more efficient method of drawing such a shape too, but that distracts from the question.
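For context, a plausible shape of that drawRing (not the poster's actual code, just what the description above implies) would be:
// Draw a ring by painting the outer disc ON and then "erasing" the
// inner disc with OFF. Illustrative only.
inline void drawRing(float outerRad, float innerRad)
{
    drawCircle(outerRad, ON);
    drawCircle(innerRad, OFF);
}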
You're doing a lot of calculations that aren't really needed. For example, you're calculating the angle of the polar coordinates but never using it. The square root can also easily be avoided by comparing the squares of the values.
Without doing anything fancy, something like this should be a good start:
int intRad = (int)rad;
int intRadSqr = (int)(rad * rad);
for (int ix = 0; ix <= intRad; ++ix)
{
    for (int jx = 0; jx <= intRad; ++jx)
    {
        if (ix * ix + jx * jx <= intRadSqr)
        {
            matrix[6 - ix][6 - jx] = mode;
            matrix[6 - ix][6 + jx] = mode;
            matrix[6 + ix][6 - jx] = mode;
            matrix[6 + ix][6 + jx] = mode;
        }
    }
}
This does all the math in integer format, and takes advantage of the circle symmetry.
Variation of the above, based on feedback in the comments:
int intRad = (int)rad;
int intRadSqr = (int)(rad * rad);
for (int ix = 0; ix <= intRad; ++ix)
{
    for (int jx = 0; ix * ix + jx * jx <= intRadSqr; ++jx)
    {
        matrix[6 - ix][6 - jx] = mode;
        matrix[6 - ix][6 + jx] = mode;
        matrix[6 + ix][6 - jx] = mode;
        matrix[6 + ix][6 + jx] = mode;
    }
}
Don't underestimate the cost of even basic arithmetic using floating point on a processor with no FPU. It seems unlikely that floating point is necessary, but the details of its use are hidden in your matrix_to_polar() implementation.
Your current implementation considers every pixel as a candidate - that is also unnecessary.
Using the equation y = cy ± √(rad² − (x − cx)²), where (cx, cy) is the centre (6, 6 in this case, since the 13 x 13 matrix is indexed 0 to 12), and a suitable integer square root implementation, the circle can be drawn thus:
void drawCircle( int rad, unsigned char mode )
{
    int r2 = rad * rad ;
    for( int x = 6 - rad; x <= 6 + rad; x++ )
    {
        int dx = x - 6 ;
        int dy = isqrt( r2 - dx * dx ) ;
        matrix[x][6 - dy] = mode ;
        matrix[x][6 + dy] = mode ;
    }
}
In my test I used the isqrt() below, based on code from here, but given that the maximum r2 necessary is 169 (13²), you could implement a 16 or even 8 bit optimised version if necessary. If your processor is 32 bit, this is probably fine.
uint32_t isqrt(uint32_t n)
{
    uint32_t root = 0, bit, trial;
    bit = (n >= 0x10000) ? 1<<30 : 1<<14;
    do
    {
        trial = root+bit;
        if (n >= trial)
        {
            n -= trial;
            root = trial+bit;
        }
        root >>= 1;
        bit >>= 2;
    } while (bit);
    return root;
}
All that said, on such a low resolution device, you will probably get better quality circles and faster performance by hand generating bitmap lookup tables for each radius required. If memory is an issue, then a single circle needs only 7 bytes to describe a 7 x 7 quadrant that you can reflect into the other three quadrants, or for greater performance you could use 7 x 16 bit words to describe a semi-circle (since reversing bit order is more expensive than reversing array access - unless you are using an ARM Cortex-M with bit-banding). Using semi-circle look-ups, 13 circles would need 13 x 7 x 2 bytes (182 bytes); quadrant look-ups would be 13 x 7 bytes (91 bytes). You may find that is fewer bytes than the code space required to calculate the circles.
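As an illustration of the quadrant look-up idea, a hedged sketch (the table itself would be generated offline; the bit layout chosen here - bit n of each byte being the pixel n columns from the centre - is an assumption, not something from the answer above):
// Each radius has a 7-byte bitmap describing the lower-right 7x7
// quadrant; the other three quadrants are mirrored. Table contents
// are assumed to be pre-generated offline.
extern const unsigned char circleQuadrant[13][7];

inline void drawCircleLUT(int rad, unsigned char mode)
{
    const unsigned char* quad = circleQuadrant[rad];
    for (int row = 0; row < 7; ++row)
    {
        for (int col = 0; col < 7; ++col)
        {
            if (quad[row] & (1u << col))
            {
                matrix[6 + row][6 + col] = mode;   // lower-right quadrant
                matrix[6 + row][6 - col] = mode;   // lower-left  (mirror in x)
                matrix[6 - row][6 + col] = mode;   // upper-right (mirror in y)
                matrix[6 - row][6 - col] = mode;   // upper-left
            }
        }
    }
}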
For a slow embedded device with only a 13x13 element display, you should really just make a look-up table. For example:
struct ComputedCircle
{
    float rMax;
    char col[13][2];
};
Where the draw routine uses rMax to determine which LUT element to use. For example, if you have 2 elements with one rMax = 1.4f, the other = 1.7f, then any radius between 1.4f and 1.7f will use that entry.
The column elements would specify zero, one, or two line segments per row, which can be encoded in the lower and upper 4 bits of each char. -1 can be used as a sentinel value for nothing-at-this-row. It is up to you how many look-up table entries to use, but with a 13x13 grid you should be able to encode every possible outcome of pixels with well under 100 entries, and a reasonable approximation using only 10 or so. You can also trade off compression for draw speed as well, e.g. putting the col[13][2] matrix in a flat list and encoding the number of rows defined.
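A hedged sketch of the draw routine that description implies (the table itself, its size, and the exact nibble packing are assumptions made for illustration):
// Sketch only, not MooseBoy's actual code. Assumes a table of entries
// sorted by ascending rMax; col[row][k] packs one horizontal segment,
// start column in the low nibble, end column in the high nibble,
// with (char)-1 (0xFF) meaning "no segment on this row".
extern const ComputedCircle circleLut[];
extern const int circleLutSize;

inline void drawCircleLut(float rad, unsigned char mode)
{
    // pick the first entry whose rMax covers the requested radius
    int e = 0;
    while (e < circleLutSize - 1 && circleLut[e].rMax < rad)
        ++e;

    for (int row = 0; row < 13; ++row)
    {
        for (int k = 0; k < 2; ++k)
        {
            unsigned char seg = (unsigned char)circleLut[e].col[row][k];
            if (seg == 0xFF)
                continue;                    // sentinel: nothing here
            int first = seg & 0x0F;
            int last  = seg >> 4;
            for (int x = first; x <= last; ++x)
                matrix[row][x] = mode;       // light this segment
        }
    }
}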
I would accept MooseBoy's answer if only he explained the method he proposes better. Here's my take on the lookup table approach.
Solve it with a lookup table
The 13x13 display is quite small, and if you only need circles which are fully visible within this pixel count, you will get by with quite a small table. Even if you need larger circles, it should still be better than any algorithmic approach if you need it to be fast (and have the ROM to store it).
How to do it
You basically need to define what each possible circle looks like on the 13x13 display. It is not sufficient to just produce snapshots of the 13x13 display, as you will likely want to plot the circles at arbitrary positions. My take on a table entry would look like this:
struct circle_entry_s{
    unsigned int diameter;
    unsigned int offset;
};
The entry would map a given diameter in pixels to offsets in a large byte table containing the shape of the circles. For example for diameter 9, the byte sequence would look like this:
0x1CU, 0x00U, /* 000111000 */
0x63U, 0x00U, /* 011000110 */
0x41U, 0x00U, /* 010000010 */
0x80U, 0x80U, /* 100000001 */
0x80U, 0x80U, /* 100000001 */
0x80U, 0x80U, /* 100000001 */
0x41U, 0x00U, /* 010000010 */
0x63U, 0x00U, /* 011000110 */
0x1CU, 0x00U, /* 000111000 */
The diameter specifies how many bytes of the table belong to the circle: one row of pixels is generated from (diameter + 7) >> 3 bytes, and the number of rows corresponds to the diameter. The output code for this can be made quite fast, while the lookup table is compact enough that you could even define circles larger than the 13x13 display in it if needed.
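A hedged sketch of that output code (the table and the entry list are assumed to be generated offline; the MSB-first bit order matches the example bytes above):
// Blit one stored circle shape with its top-left corner at (left, top).
// circle_table is the big byte array described above; entries[] maps a
// diameter to its offset in that array.
// Usage (hypothetical): blit_circle(&entries[k], left, top, ON);
extern const unsigned char circle_table[];
extern const struct circle_entry_s entries[];

static void blit_circle(const struct circle_entry_s* e,
                        int left, int top, unsigned char mode)
{
    unsigned int bytes_per_row = (e->diameter + 7U) >> 3;
    const unsigned char* p = &circle_table[e->offset];

    for (unsigned int row = 0U; row < e->diameter; row++)
    {
        for (unsigned int col = 0U; col < e->diameter; col++)
        {
            // bit 7 of each byte is the leftmost pixel of that byte
            unsigned char byte = p[row * bytes_per_row + (col >> 3)];
            if (byte & (0x80U >> (col & 7U)))
            {
                int x = left + (int)col;
                int y = top + (int)row;
                if (x >= 0 && x < 13 && y >= 0 && y < 13)
                    matrix[y][x] = mode;
            }
        }
    }
}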
Note that defining circles this way for odd and even diameters may or may not appeal to you when they are output relative to a centre location. The odd diameter circles will appear to have their centre in the "middle" of a pixel, while the even diameter circles will appear to have their centre on the "corner" of a pixel.
You may also find it useful later to refine the overall method so that there are multiple circles of different apparent sizes but the same pixel radius. It depends on what your goal is: if you want some kind of smooth animation, you may get there eventually.
I think algorithmic solutions will mostly perform poorly here, since on such a limited display surface every pixel's state really counts for the appearance.

Artefacts in Interpolated Value Noise

I'm trying to create a basic value noise function. I've reached the point where it's producing output, but within that output there are unexpected artefacts popping up, such as diagonal discontinuous lines and blurs. I just can't seem to find what's causing them. Could somebody please take a look to see if I'm going wrong somewhere?
First off, here are three images that it's outputting, with greater magnification on each one.
//data members
float m_amplitude, m_frequency;
int m_period; //controls the tile size of the noise
vector<vector<float>> m_points; //2D array to store the lattice

//The constructor generates the 2D square lattice and populates it.
Noise2D(int period, float frequency, float amplitude)
    : m_amplitude(amplitude), m_frequency(frequency), m_period(period) //members must be set before the lattice is sized below
{
    //initialize the lattice to the appropriate NxN size
    m_points.resize(m_period);
    for (int i = 0; i < m_period; ++i)
        m_points[i].resize(m_period);
    //populates the lattice with values between 0 and 1
    int seed = 209;
    srand(seed);
    for(int i = 0; i < m_period; i++)
    {
        for(int j = 0; j < m_period; j++)
        {
            m_points[i][j] = abs(rand()/(float)RAND_MAX);
        }
    }
}
//Evaluates a position
float Evaluate(float x, float y)
{
    x *= m_frequency;
    y *= m_frequency;
    //Gets the integer values from each component
    int xFloor = (int) x;
    int yFloor = (int) y;
    //Gets the decimal data in the range of [0:1] for each of the components for interpolation
    float tx = x - xFloor;
    float ty = y - yFloor;
    //Finds the appropriate boundary lattice array indices using the modulus technique to ensure periodic noise.
    int xPeriodLower = xFloor % m_period;
    int xPeriodUpper;
    if(xPeriodLower == m_period - 1)
        xPeriodUpper = 0;
    else
        xPeriodUpper = xPeriodLower + 1;
    int yPeriodLower = yFloor % m_period;
    int yPeriodUpper;
    if(yPeriodLower == m_period - 1)
        yPeriodUpper = 0;
    else
        yPeriodUpper = yPeriodLower + 1;
    //The four random values at each boundary. The naming convention follows a single 2d coord system: 00 for bottom left, 11 for top right
    const float& random00 = m_points[xPeriodLower][yPeriodLower];
    const float& random10 = m_points[xPeriodUpper][yPeriodLower];
    const float& random01 = m_points[xPeriodLower][yPeriodUpper];
    const float& random11 = m_points[xPeriodUpper][yPeriodUpper];
    //Remap the weighting of each t dimension here if you wish to use an s-curve profile.
    float remappedTx = tx;
    float remappedTy = ty;
    return MyMath::Bilinear<float>(remappedTx, remappedTy, random00, random10, random01, random11) * m_amplitude;
}
Here are the two interpolation functions that it relies on.
template <class T1>
static T1 Bilinear(const T1 &tx, const T1 &ty, const T1 &p00, const T1 &p10, const T1 &p01, const T1 &p11)
{
    return Lerp( Lerp(p00, p10, tx),
                 Lerp(p01, p11, tx),
                 ty);
}

template <class T1> //linear interpolation aka Mix
static T1 Lerp(const T1 &a, const T1 &b, const T1 &t)
{
    return a * (1 - t) + b * t;
}
Some of the artifacts are the result of linear interpolation. Using a higher order interpolation method would help, but it will only solve part of the problem. Crudely put, sharp transitions in the signal can lead to artifacts.
Additional artifacts result from distributing the starting noise values (i.e. the values you are interpolating among) at equal intervals - in this case, a grid. The highest and lowest values will only ever occur at these grid points, at least when using linear interpolation. Roughly speaking, patterns in the signal can lead to artifacts. Two potential ways I know of for addressing this part of the problem are using a nonlinear interpolation and/or randomly nudging the coordinates of the starting noise values to break up their regularity.
Libnoise has an explanation of generating coherent noise which covers these problems and solutions in greater depth, with some nice illustrations. You could also peek at the source if you need to see how it deals with these problems. And as richard-tingle already mentioned, simplex noise was designed to correct the artifact problems inherent in Perlin noise; it's a little tougher to get your head around, but it's a solid technique.
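For the nonlinear interpolation part, one common remedy (not in the original code, but it slots straight into the remappedTx/remappedTy hook in Evaluate()) is to pass tx and ty through a smoothstep-style s-curve before interpolating:
// Classic fade curves. Feeding tx and ty through one of these before
// calling Bilinear() hides the creases at the lattice cell boundaries
// that plain linear interpolation produces.
inline float SmoothStep(float t)   // 3t^2 - 2t^3
{
    return t * t * (3.0f - 2.0f * t);
}

inline float QuinticFade(float t)  // Perlin's 6t^5 - 15t^4 + 10t^3
{
    return t * t * t * (t * (t * 6.0f - 15.0f) + 10.0f);
}

// inside Evaluate():
//   float remappedTx = SmoothStep(tx);
//   float remappedTy = SmoothStep(ty);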

C++ Data Structure for storing 3 dimensions of floats

I've implemented a 3D strange attractor explorer which gives float XYZ outputs in the range 0-100, I now want to implement a colouring function for it based upon the displacement between two successive outputs.
I'm not sure which data structure to use to store the colour values for each point; using a 3D array I'm limited to rounding to the nearest int, which gives a very coarse colour scheme.
I'm vaguely aware of octrees; are they suitable in this situation?
EDIT: A little more explanation:
To generate the points I'm repeatedly running this (a, b, c, d are random floats in the range -3 to 3):
x = x2;
y = y2;
z = z2;
x2 = sin(a * y) - z * cos(b * x);
y2 = z2 * sin(c * x) - cos(d * y);
z2 = sin(x);
parr[i][0]=x;
parr[i][1]=y;
parr[i][2]=z;
This generates new positions for each axis on every run. To colour the render I need to take the distance between two successive results; if I just do a distance calculation between each run, the colours fade back and forth in equilibrium, so I need to take a running average for each point and store it. Using a 3-dimensional array gives too coarse a colouring, and I'm looking for advice on how to store the values at much smaller increments.
Maybe you could drop the 2-dim array and use a 1-dim array of
struct ColoredPoint {
    int x;
    int y;
    int z;
    float color;
};
so that the code would look like
...
parr[i].x = x;
parr[i].y = y;
parr[i].z = z;
parr[i].color = some_computed_color;
(you may also wish to encapsulate the fields and use class ColoredPoint with access methods)
I'd probably think about some kind of 3-d binary search tree.
template <class KEY, class VALUE>
class BinaryTree
{
    // some implementation, probably available in libraries
public:
    VALUE* Find(const KEY& key) const
    {
        // real implementation is needed here
        return NULL;
    }
};

// this tree's nodes will actually hold the color
class BinaryTree1 : public BinaryTree<double, int>
{
};

class BinaryTree2 : public BinaryTree<double, BinaryTree1>
{
};

class BinaryTree3 : public BinaryTree<double, BinaryTree2>
{
};
And your function to retrieve the color from this tree would look like this:
bool GetColor(const BinaryTree3& tree, double dX, double dY, double dZ, int& color)
{
    BinaryTree2* pYTree = tree.Find(dX);
    if( NULL == pYTree )
        return false;
    BinaryTree1* pZTree = pYTree->Find(dY);
    if( NULL == pZTree )
        return false;
    int* pCol = pZTree->Find(dZ);
    if( NULL == pCol )
        return false;
    color = *pCol;
    return true;
}
Of course you will need to write the function that adds a color to this tree, given the 3 coordinates X, Y and Z.
std::map appears to be a good candidate for the base class.
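A hedged sketch of the same idea without the class hierarchy, using a single std::map keyed on quantised coordinates (the 0.01 cell size and all names here are arbitrary choices for illustration, not from the answer above):
#include <cmath>
#include <cstdint>
#include <map>
#include <tuple>

// Quantise each axis to a chosen resolution (0.01 here, i.e. 10000
// cells per axis for the 0-100 range) and key a map on the cell.
using Cell = std::tuple<std::int32_t, std::int32_t, std::int32_t>;

struct CellColor {
    float sum = 0.0f;   // accumulated displacement
    int   count = 0;    // number of samples, for the running average
};

std::map<Cell, CellColor> colorMap;

Cell cellOf(double x, double y, double z, double res = 0.01)
{
    return Cell{ (std::int32_t)std::floor(x / res),
                 (std::int32_t)std::floor(y / res),
                 (std::int32_t)std::floor(z / res) };
}

void addSample(double x, double y, double z, float displacement)
{
    CellColor& c = colorMap[cellOf(x, y, z)];
    c.sum += displacement;
    ++c.count;
}

float averageAt(double x, double y, double z)
{
    auto it = colorMap.find(cellOf(x, y, z));
    return (it != colorMap.end() && it->second.count > 0)
               ? it->second.sum / it->second.count
               : 0.0f;
}
The finer the cell size, the closer this gets to colouring individual attractor points, at the cost of memory; an octree would only be worth the extra complexity if sparse storage over large empty regions becomes a real bottleneck.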