How do I simulate 2D spherical waves from a point source? - c++

I'm trying to simulate waves by numerically integrating the wave equation using Euler integration (just until I get the kinks worked out, then I'll switch to Runge-Kutta). I'm using an array of floats as a grid. I create a disturbance by changing the value of the grid at one point. Instead of radiating in all directions away from this point, the wave only travels in one direction, towards the upper left, i.e. towards decreasing x and y. So my question is: how do I make the wave radiate outwards?
Here's my code
void Wave::dudx(float *input, float *output) //calculate du/dx
{
    for (int y = 0; y < this->height; y++)
    {
        for (int x = 0; x < this->width; x++)
        {
            output[x + y*this->width] = (this->getPoint((x+1) % this->width, y) - this->getPoint(x, y)); //getPoint returns the value of the grid at (x,y)
        }
    }
}
void Wave::dudy(float *input, float *output) //calculate du/dy
{
    for (int x = 0; x < this->width; x++)
    {
        for (int y = 0; y < this->height; y++)
        {
            output[x + y*this->width] = (this->getPoint(x, (y+1) % this->height) - this->getPoint(x, y));
        }
    }
}
void Wave::simulate(float dt)
{
    float c = 6.0f;
    //calculate the spatial derivatives
    this->dudx(this->points, this->buffer);
    this->dudx(this->buffer, this->d2udx2);
    this->dudy(this->points, this->buffer);
    this->dudy(this->buffer, this->d2udy2);
    for (int y = 0; y < this->height; y++)
    {
        for (int x = 0; x < this->width; x++)
        {
            this->points[x + y*this->width] += c*c*(this->d2udx2[x + y*this->width] + this->d2udy2[x + y*this->width])*dt*dt; //I know that I can calculate c*c and dt*dt once, but I want to make it clear what I'm doing.
        }
    }
}

Just for the sake of somebody else coming here for the same problem. The usual way to convert the Laplacian to a finite difference expression on a regular grid is:
∆u(x,y) -> idx2*[u(x+1,y) + u(x-1,y) - 2*u(x,y)] + idy2*[u(x,y+1) + u(x,y-1) - 2*u(x,y)]
where idx2 and idy2 are the inverse squares of the grid spacing in dimension x and y respectively. In the case when the grid spacing in both dimensions is the same, this simplifies to:
∆u(x,y) -> igs2*[u(x+1,y) + u(x-1,y) + u(x,y+1) + u(x,y-1) - 4*u(x,y)]
The multiplicative coefficient can be removed by hiding it inside other coefficients, e.g. c, by changing their units of measurement:
∆u(x,y) -> u(x+1,y) + u(x-1,y) + u(x,y+1) + u(x,y-1) - 4*u(x,y)
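For illustration, here is a minimal sketch of what an update loop using this stencil could look like. It is a sketch only: it reuses the question's getPoint accessor with periodic wrapping, and it assumes two extra time-step buffers, prev and next, that are not in the original class (the wave equation is second order in time, so two time levels are needed).
// Sketch of a second-order-in-time update using the 5-point Laplacian above.
// prev, points and next are hypothetical width*height float buffers holding
// u at t-dt, t and t+dt; getPoint(x,y) reads this->points as in the question.
void Wave::step(float dt)
{
    const float c = 6.0f;
    const float k = c * c * dt * dt; // grid spacing folded into c, as described above
    for (int y = 0; y < height; y++)
    {
        for (int x = 0; x < width; x++)
        {
            float lap = getPoint((x + 1) % width, y)
                      + getPoint((x - 1 + width) % width, y)
                      + getPoint(x, (y + 1) % height)
                      + getPoint(x, (y - 1 + height) % height)
                      - 4.0f * getPoint(x, y);
            // u_next = 2*u - u_prev + k*laplacian (leapfrog in time)
            next[x + y * width] = 2.0f * points[x + y * width]
                                - prev[x + y * width]
                                + k * lap;
        }
    }
    // afterwards, rotate the buffers: prev <- points, points <- next
}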
By the way, there cannot be 2D spherical waves since spheres are 3D objects. 2D waves are called circular waves.

Related

Adding graphs together in c++ (to generate fractal noise)

I am trying to make a 1D fractal noise function. I have a function generating every single individual graph, but am struggling with how to add them together. I am following this tutorial
https://web.archive.org/web/20160530124230/http://freespace.virgin.net/hugo.elias/models/m_perlin.htm
Here is my code for my final noise function
(I am using SFML, which is what the sf::Vector2f is. It's just a vector of two floats, representing a coordinate.)
void fractalNoise() {
    std::vector<sf::Vector2f> allGraphs;
    std::vector<sf::Vector2f> singleNoise;
    float persistance = 0.8; //represents the decrease of amplitude with frequency.
    //The closer to one, the less the amplitude decreases each iteration
    int nOOPM1 = 10; //number of iterations
    for (int i = 0; i < nOOPM1; i++) {
        float frequency = pow(2, i);
        float amplitude = pow(persistance, i);
        //generate random plots of noise, equidistant on the x, and random on the y.
        //the 3 is the interpolation method (ignore this), and the 1000 is how many
        //points to draw between each point
        singleNoise = this->interpolateNoise(
            this->generateNoise(frequency, 300 * amplitude), 3, 1000);
        allGraphs.insert(allGraphs.end(), singleNoise.begin(), singleNoise.end());
    }
    this->noiseGenerated = allGraphs;
    //every pixel stored in noiseGenerated is rendered to a window
};
I understand that the allGraphs.insert is just appending the next graph after the current one, but I am unsure how to add the graphs together. Because of the nature of fractal noise, and the fact that my frequencies are always changing, I can't just add the noise points before interpolating them, as they will mostly have different x values.
Any help would be appreciated.
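A minimal sketch of what "adding the graphs together" could look like, assuming each octave is interpolated to the same number of evenly spaced samples so the y values can be summed index by index (the helper name and signature are mine, not from the question's code):
#include <vector>
#include <SFML/System/Vector2.hpp>

// Hypothetical helper: sum octaves sample-by-sample. Assumes every octave was
// interpolated to the same count of evenly spaced x positions, so graphs[k][j]
// shares its x with graphs[0][j] and only the y values need to be accumulated.
std::vector<sf::Vector2f> sumOctaves(const std::vector<std::vector<sf::Vector2f>>& graphs) {
    if (graphs.empty()) return {};
    std::vector<sf::Vector2f> result = graphs.front();          // start from the first octave
    for (std::size_t k = 1; k < graphs.size(); ++k) {
        for (std::size_t j = 0; j < result.size() && j < graphs[k].size(); ++j) {
            result[j].y += graphs[k][j].y;                      // add amplitudes at the shared x
        }
    }
    return result;
}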

Fast, good quality pixel interpolation for extreme image downscaling

In my program, I am downscaling an image of 500px or larger to an extreme level of approx 16px-32px. The source image is user-specified so I do not have control over its size. As you can imagine, few pixel interpolations hold up and inevitably the result is heavily aliased.
I've tried bilinear, bicubic and square average sampling. The square average sampling actually provides the most decent results but the smaller it gets, the larger the sampling radius has to be. As a result, it gets quite slow - slower than the other interpolation methods.
I have also tried an adaptive square average sampling so that the smaller it gets the greater the sampling radius, while the closer it is to its original size, the smaller the sampling radius. However, it produces problems and I am not convinced this is the best approach.
So the question is: What is the recommended type of pixel interpolation that is fast and works well on such extreme levels of downscaling?
I do not wish to use a library so I will need something that I can code by hand and isn't too complex. I am working in C++ with VS 2012.
Here's some example code I've tried as requested (hopefully without errors from my pseudo-code cut and paste). This performs a 7x7 average downscale and although it's a better result than bilinear or bicubic interpolation, it also takes quite a hit:
// Sizing control
ctl(0): "Resize",Range=(0,800),Val=100
// Variables
float fracx,fracy;
int Xnew,Ynew,p,q,Calc;
int x,y,z,p1,q1,i,j;
//New image dimensions
Xnew=image->width*ctl(0)/100;
Ynew=image->height*ctl(0)/100;
for (y=0; y<image->height; y++){        // rows
    for (x=0; x<image->width; x++){     // columns
        p1=(int)x*image->width/Xnew;
        q1=(int)y*image->height/Ynew;
        for (z=0; z<3; z++){            // channels
            Calc=0;                     // reset the accumulator per pixel/channel
            for (i=-3;i<=3;i++) {
                for (j=-3;j<=3;j++) {
                    Calc += (int)(src(p1-i,q1-j,z));
                } //j
            } //i
            Calc /= 49;
            pset(x, y, z, Calc);
        } // channels
    } // columns
} // rows
Thanks!
The first point is to use pointers to your data. Never use indexed access for every pixel. When you write src(p1-i,q1-j,z) or pset(x, y, z, Calc), how much computation is being done behind the scenes? Use pointers to the data and manipulate those directly.
Second: your algorithm is wrong. You don't want an average filter; you want to lay a grid over your source image and, for every grid cell, compute the average and put it in the corresponding pixel of the output image.
The specific solution should be tailored to your data representation, but it could be something like this:
std::vector<uint32_t> accum(Xnew);
std::vector<uint32_t> count(Xnew);
uint32_t *paccum, *pcount;
uint8_t* pin = /*pointer to input data*/;
uint8_t* pout = /*pointer to output data*/;
for (int dr = 0, sr = 0, w = image->width, h = image->height; sr < h; ++dr) {
    memset(paccum = accum.data(), 0, Xnew*4);
    memset(pcount = count.data(), 0, Xnew*4);
    while (sr * Ynew / h == dr) {
        paccum = accum.data();
        pcount = count.data();
        for (int dc = 0, sc = 0; sc < w; ++sc) {
            *paccum += *pin;   // accumulate the source pixel into the current output column
            *pcount += 1;
            ++pin;
            if (sc * Xnew / w > dc) {
                ++dc;
                ++paccum;
                ++pcount;
            }
        }
        sr++;
    }
    std::transform(begin(accum), end(accum), begin(count), pout, std::divides<uint32_t>());
    pout += Xnew;
}
This was written using my own library (still in development) and it seems to work, but I changed the variable names afterwards to make it simpler here, so I don't guarantee anything!
The idea is to have a local buffer of 32-bit ints which can hold the partial sums of all the source pixels that fall into one row of the output image. Then you divide by the cell counts and write the result to the final image.
The first thing you should do is to set up a performance evaluation system to measure how much any change impacts the performance.
As said previously, you should use pointers rather than indexes for a (probably) substantial speed-up, and you should not simply average, since a basic averaging of pixels is essentially a blur filter.
I would highly advise you to rework your code to use "kernels". A kernel is the matrix representing the weight of each contributing pixel. That way, you will be able to test different strategies and optimize quality.
Example of kernels:
https://en.wikipedia.org/wiki/Kernel_(image_processing)
Upsampling/downsampling kernel:
http://www.johncostella.com/magic/
Note: from the code it seems you actually apply a 7x7 box kernel (all ones, divided by 49). The 3x3 equivalent would be:
[1 1 1]
[1 1 1] * 1/9
[1 1 1]
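For reference, a minimal sketch of applying such a normalized kernel to one output pixel, assuming the src(x, y, z) accessor from the question's pseudo-code (the kernel values and the helper name are illustrative):
// Illustrative only: apply a 3x3 normalized box kernel centered on (cx, cy), channel z.
float applyKernel3x3(int cx, int cy, int z) {
    static const float kernel[3][3] = {
        {1.0f/9, 1.0f/9, 1.0f/9},
        {1.0f/9, 1.0f/9, 1.0f/9},
        {1.0f/9, 1.0f/9, 1.0f/9},
    };
    float sum = 0.0f;
    for (int i = -1; i <= 1; ++i)
        for (int j = -1; j <= 1; ++j)
            sum += kernel[i + 1][j + 1] * src(cx + i, cy + j, z);
    return sum; // no division needed: the weights already add up to 1
}
Swapping the kernel table for, say, a Gaussian or one of the kernels linked above is then just a data change.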

cocos2dx detect intersection with polygon sprite

I am using cocos2d-x 3.8.
I am trying to create two polygon sprites with the following code.
I know we can detect intersections with boundingBox, but that is too rough.
I also know we can use the cocos2d-x C++ physics engine to detect collisions, but doesn't that waste a lot of the mobile device's resources? The game I am developing does not need a physics engine.
Is there a way to detect the intersection of polygon sprites?
Thank you.
auto pinfoTree = AutoPolygon::generatePolygon("Tree.png");
auto treeSprite = Sprite::create(pinfoTree);
treeSprite->setPosition(width / 4 * 3 - 30, height / 2 - 200);
this->addChild(treeSprite);

auto pinfoBird = AutoPolygon::generatePolygon("Bird.png");
auto Bird = Sprite::create(pinfoBird); // was pinfoTree, presumably a typo
Bird->setPosition(width / 4 * 3, height / 2);
this->addChild(Bird);
This is a bit more complicated: AutoPolygon gives you a bunch of triangles, while PhysicsBody::createPolygon requires a convex polygon with clockwise winding… so these are two different things. The vertex count might even be limited; I think Box2D's maximum for one polygon is 8.
If you want to try this you'll have to merge the triangles to form polygons. An option would be to start with one triangle and add more as long as the whole thing stays convex. If you can't add any more triangles, start a new polygon. Add all the polygons as PhysicsShapes to your physics body to form a compound object.
I would propose that you don't follow this path, because:
- AutoPolygon is optimized for rendering, not for best-fitting physics; that is a difference. A polygon traced with AutoPolygon will always be bigger than the original sprite, otherwise you would see rendering artifacts.
- You have close to no control over the generated polygons.
- Tracing the shape in the app will increase your startup time.
- Triangle meshes and physics outlines are two different things.
I would try a different approach: generate the collision shapes offline. This gives you a bunch of advantages:
- You can generate and tweak the polygons in a visual editor, e.g. by using PhysicsEditor.
- Loading the prepared polygons is way faster.
- You can set additional parameters like mass etc.
- The solution is battle proven and works out of the box.
But if you want to know how polygon intersection works, you can look at this code:
// Calculate the projection of a polygon on an axis
// and return it as a [min, max] interval
public void ProjectPolygon(Vector axis, Polygon polygon, ref float min, ref float max) {
    // To project a point on an axis use the dot product
    float dotProduct = axis.DotProduct(polygon.Points[0]);
    min = dotProduct;
    max = dotProduct;
    for (int i = 0; i < polygon.Points.Count; i++) {
        float d = polygon.Points[i].DotProduct(axis);
        if (d < min) {
            min = d;
        } else if (d > max) {
            max = d;
        }
    }
}

// Calculate the distance between [minA, maxA] and [minB, maxB]
// The distance will be negative if the intervals overlap
public float IntervalDistance(float minA, float maxA, float minB, float maxB) {
    if (minA < minB) {
        return minB - maxA;
    } else {
        return minA - maxB;
    }
}

// Check if polygon A is going to collide with polygon B.
public boolean PolygonCollision(Polygon polygonA, Polygon polygonB) {
    boolean result = true;
    int edgeCountA = polygonA.Edges.Count;
    int edgeCountB = polygonB.Edges.Count;
    Vector edge;

    // Loop through all the edges of both polygons
    for (int edgeIndex = 0; edgeIndex < edgeCountA + edgeCountB; edgeIndex++) {
        if (edgeIndex < edgeCountA) {
            edge = polygonA.Edges[edgeIndex];
        } else {
            edge = polygonB.Edges[edgeIndex - edgeCountA];
        }

        // ===== Find if the polygons are currently intersecting =====
        // Find the axis perpendicular to the current edge
        Vector axis = new Vector(-edge.Y, edge.X);
        axis.Normalize();

        // Find the projection of both polygons on the current axis
        float minA = 0; float minB = 0; float maxA = 0; float maxB = 0;
        ProjectPolygon(axis, polygonA, ref minA, ref maxA);
        ProjectPolygon(axis, polygonB, ref minB, ref maxB);

        // If the projections do not overlap on this axis, the polygons do not intersect
        if (IntervalDistance(minA, maxA, minB, maxB) > 0) {
            result = false;
            break;
        }
    }
    return result;
}
The function can be used this way
boolean result = PolygonCollision(polygonA, polygonB);
I once had to program a collision detection algorithm where a ball was to collide with a rotating polygon obstacle. In my case the obstacles were arcs with a certain thickness, moving around an origin; basically each one was rotating in an orbit. The ball was also rotating around an orbit about the same origin and could move between orbits. To check the collision I only had to check whether the ball's angle with respect to the origin was between the lower and upper bound angles of the arc obstacle, and whether the ball and the obstacle were in the same orbit.
In other words, I used the various constraints and properties of the objects involved in the collision to make the test more efficient. So use the properties of your objects to check for the collision; try a similar approach depending on your objects.
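For illustration only, a minimal sketch of that kind of angle/orbit check (the types and names are hypothetical, not cocos2d-x API):
#include <cmath>

// A ball and an arc obstacle, both described in polar coordinates around the same origin.
struct Ball   { float angle; int orbit; };                       // angle in radians
struct ArcObs { float startAngle; float endAngle; int orbit; };  // startAngle < endAngle

// True if the ball is on the arc's orbit and inside its angular span.
bool collides(const Ball& ball, const ArcObs& arc) {
    if (ball.orbit != arc.orbit)
        return false;
    // normalize the ball's angle into [0, 2*pi) before comparing
    const float twoPi = 2.0f * 3.14159265f;
    float a = std::fmod(ball.angle, twoPi);
    if (a < 0.0f) a += twoPi;
    return a >= arc.startAngle && a <= arc.endAngle;
}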

Isometric Collision - 'Diamond' shape detection

My project uses an isometric perspective. For the time being I am showing the co-ordinates in grid format above each tile for debugging. However, when it comes to collision/grid-locking of the player, I have an issue.
Due to the nature of sprite drawing, my maths is creating some issues with the 'triangular' empty corner areas of the textures. I think the issue is something like below (blue is what I think is the way my tiles are being detected, whereas red is how they ideally should be detected for accurate roaming movement on the tiles):
As you can see, the boolean that checks the tile I am standing on (which takes the pixel central to the player's feet; the player will later be a car and take a pixel based on the direction of movement) is returning false and denying movement in several scenarios, as well as letting the player move in some places that shouldn't be allowed.
I think it's because the cut-off corner areas of each texture are (I think) being considered part of the grid area, so when the player is in one of these corner areas it is not truly checking the correct tile, and so returns the wrong results.
The code I'm using for creating the grid is this:
int VisualComponent::TileConversion(Tile* tileToConvert, bool xOrY)
{
    int X = (tileToConvert->x - tileToConvert->y) * 64; //change 64 to TILE_WIDTH_HALF
    int Y = (tileToConvert->x + tileToConvert->y) * 25;
    /*int X = (tileToConvert->x * 128 / 2) + (tileToConvert->y * 128 / 2) + 100;
    int Y = (tileToConvert->y * 50 / 2) - (tileToConvert->x * 50 / 2) + 100;*/
    if (xOrY)
    {
        return X;
    }
    else
    {
        return Y;
    }
}
and the code for checking the player's movement is:
bool Clsentity::CheckMovementTile(int xpos, int ypos, ClsMapData* mapData) //check if the movement will end on a legitimate road tile UNOPTIMISED AS RUNS EVERY FRAME FOR EVERY TILE
{
    int x = xpos + 7; //get the center bottom pixel as this is more suitable than the first on an iso grid (more realistic 'foot' placement)
    int y = ypos + 45;
    int mapX = (x / 64 + y / 25) / 2; //64 is TILE_WIDTH_HALF and 25 is TILE HEIGHT
    int mapY = (y / 25 - (x / 64)) / 2;
    for (int i = 0; i < mapData->tilesList.size(); i++) //for each tile of the map
    {
        if (mapData->tilesList[i]->x == mapX && mapData->tilesList[i]->y == mapY) //if there is an existing tile that will be entered
        {
            if (mapData->tilesList[i]->movementTile)
            {
                HAPI->DebugText(std::to_string(mapX) + " is the x and the y is " + std::to_string(mapY));
                return true;
            }
        }
    }
    return false;
}
I'm a little stuck on progressing with the game-loop side of things until this is fixed. If anyone recognizes the issue from this, or might be able to help, that would be great and I would appreciate it. For reference, my tile textures are 128x64 pixels and the maths behind drawing them to the screen treats them as 128x50 (so they link together cleanly).
Rather than writing specific routines for rendering and click mapping, seriously consider thinking of these as two views on the data, which can be transformed in terms of matrix transformations of a coordinate space. You can have two coordinate spaces - one is a nice rectangular grid that you use for positioning and logic. The other is the isometric view that you use for display and input.
If you're not familiar with linear algebra, it'll take a little bit to wrap your head around it, but once you do, it makes everything trivial.
So, how does that work? Your isometric view is merely a rotation of a bog standard grid view, right? Well, close. Isometric view also changes the dimensions if you're starting with a square grid. Anyhow: can we just do a simple coordinate transformation?
Logical coordinate system -> display system (e.g. for rendering)
Texture point => Rotate 45 degrees => Scale by sqrt(2) because a 45 degree rotation changes the dimension of the block by sqrt(1 * 1 + 1 * 1)
Display system -> logical coordinate system (e.g. for mapping clicks into logical space)
Click point => descale by sqrt(2) to unsquish => unrotate by 45 degrees
Why?
If you can do coordinate transformations, then you'd be dealing with a pretty bog-standard rectangular grid for everything else you write, which will make any other logic MUCH simpler. Your calculations there won't involve computing angles or slopes. E.g. now your "can I move 'down'" logic is much simpler.
Let's say you have 64 x 64 tiles, for simplicity. Now transforming a screen space click to a logical tile is simply:
(int, int) whichTile(clickX, clickY) {
    logicalX, logicalY = transform(clickX, clickY)
    return (logicalX / 64, logicalY / 64)
}
You can do checks like seeing whether (x0, y0) and (x1, y1) are on the same tile, in the logical space, with something as simple as:
bool isSameTile(x0, y0, x1, y1) {
    return floor(x0/64) == floor(x1/64) && floor(y0/64) == floor(y1/64)
}
Everything gets much simpler once you define the transforms and work in the logical space.
http://en.wikipedia.org/wiki/Rotation_matrix
http://en.wikipedia.org/wiki/Scaling_%28geometry%29#Matrix_representation
http://www.alcove-games.com/advanced-tutorials/isometric-tile-picking/
If you don't want to deal with some matrix library, you can do the equivalent math pretty straightforwardly, but if you separate concerns of logic management from display / input through these transformations, I suspect you'll have a much easier time of it.
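For concreteness, here is a minimal sketch of the two transforms, assuming the question's constants (tiles drawn 128 pixels wide and treated as 50 pixels tall, i.e. half-width 64 and half-height 25); the struct and function names are mine, not part of the question's code:
struct ScreenPos  { float x, y; };
struct LogicalPos { float x, y; };

// logical (grid) space -> display (isometric) space, used for rendering
ScreenPos toScreen(const LogicalPos& p) {
    return { (p.x - p.y) * 64.0f, (p.x + p.y) * 25.0f };
}

// display space -> logical (grid) space, used for mapping clicks and collision checks
LogicalPos toLogical(const ScreenPos& s) {
    return { (s.x / 64.0f + s.y / 25.0f) * 0.5f,
             (s.y / 25.0f - s.x / 64.0f) * 0.5f };
}
With these two helpers, movement and collision logic can stay in the rectangular logical space; only rendering and input mapping ever touch the isometric view.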

Drawing a crescent shape in OpenGL

How can I draw a 2D crescent or moon shape in OpenGL? I have tried using sin and cos like I did for drawing circles, but because a crescent has a "cut" inside it, sin and cos alone aren't enough. I couldn't figure out how to do an intersection between two polygons either. So I'm wondering if there is a mathematical formula for drawing the crescent?
This isn't mathematically correct, but it may be close enough to meet your needs:
void drawCrescentLine(float step,float scale,float fullness) {
    float angle=0.0f;
    while (angle<M_PI) {
        glVertex2f(scale*sinf(angle),scale*cosf(angle));
        angle+=step;
    }
    while (angle<(2.0f*M_PI)) {
        glVertex2f(fullness*scale*sinf(angle),scale*cosf(angle));
        angle+=step;
    }
    glVertex2f(0.0f,scale);
}
or
void drawCrescentTriStrip(float step,float scale,float fullness) {
    glVertex2f(0.0f,scale);
    float angle=step;
    while (angle<M_PI) {
        float sinAngle=sinf(angle);
        float cosAngle=cosf(angle);
        glVertex2f(scale*sinAngle,scale*cosAngle);
        glVertex2f(-fullness*scale*sinAngle,scale*cosAngle);
        angle+=step;
    }
    glVertex2f(0.0f,-scale);
}
At fullness=1 it will draw a circle of size scale, while at fullness=-0.99f it will draw a very thin crescent. You could use two different fullness values, rightFullness and leftFullness, and always set one of them to 1.0f so you can change the direction of the crescent.
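A minimal usage sketch, assuming legacy immediate-mode OpenGL: the glVertex2f calls above have to sit between glBegin and glEnd, and the parameter values here are just examples.
// Illustrative usage: emit the crescent as a triangle strip.
// step controls smoothness, scale the size, fullness the thinness of the crescent.
void renderCrescent() {
    glColor3f(1.0f, 1.0f, 0.8f);              // pale yellow
    glBegin(GL_TRIANGLE_STRIP);
    drawCrescentTriStrip(0.05f, 1.0f, -0.5f); // step, scale, fullness
    glEnd();
}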
You can draw two perpendicular ellipses that intersect each other. A crescent is formed by the space that is cut out of one of the ellipses. The intersection can be removed by using a bitwise NAND logical operation when drawing:
glEnable(GL_COLOR_LOGIC_OP);
drawEllipse1();
glLogicOp(GL_NAND);
drawEllipse2();
The long way of doing it is to specify a bunch of vertices that form a skeleton for the shape that you want. You can then 'connect the dots' with GL_LINES to draw your shape. If you want a smoother shape, you can use the vertices as control points for a Bezier/Catmull-Rom spline that would draw a smooth curve joining all your vertices.
You can try this:
Vertex outside [N+1]; // Fill in N with the precision you want
Vertex inside [N+1];  // Fill in N with the precision you want

double neg_size = sqrt (1 + NEG_DIST); // Size of the intersecting circle.
                                       // NEG_DIST is the distance between their centers
                                       // Greater NEG_DIST => wider crescent
double start_angle = atan (1 / NEG_DIST); // Start angle for the inside edge
double arc = M_PI - (2 * start_angle);    // Arc of the inside edge

for (int i = 0; i <= N; i++)
{
    // Outside edge
    outside [i].x = cos ((M_PI / N) * i) * SIZE;
    outside [i].y = sin ((M_PI / N) * i) * SIZE;
    // Inside edge
    inside [i].x = (cos (start_angle + ((arc / N) * i)) * neg_size) * SIZE;
    inside [i].y = (sin (start_angle + ((arc / N) * i)) * neg_size - NEG_DIST) * SIZE;
}
This produces the intersected-polygons version of a crescent. It gives you arrays of coordinates for the inside and outside arcs of the crescent, which you can then feed through your favorite draw method.
NOTE: the endpoints of inside and outside overlap (I did this so that I wouldn't have +/- 1s all over the place). I'm pretty sure a GL program will be fine with it, but if you get a fencepost error with this, that may be where it came from.