Make an object follow an A* path smoothly - C++

I am making a top-down 2D game in SDL2 with C++. It has a tiled map where each tile is 32x32 px.
I have used the A* search algorithm so the enemy can find the player on the map. The path is currently traced correctly: after performing A*, the search returns a stack of SDL_Point, which are just x and y values on the map. But I can't figure out how to make the enemy follow this path smoothly over time rather than just hopping between the x and y points in the stack.
Below is the move function, which is constantly called in the main game loop:
void Gunner::move(std::array<Tile*, MAP_LENGTH>& map, double deltaTime) {
    // calculate A* path from current pos to player pos
    Astar astar(target, this);
    stack<SDL_Point> path = astar.astar(map);
    if (path.size() != 0) {
        SDL_SetRenderDrawColor(renderer, 0, 0, 0, 255);
        int xP = path.top().x;
        int yP = path.top().y;
        SDL_Rect r = { xP, yP, 32, 32 };
        /*
            Make the enemy follow path
        */
        // debugging purpose
        SDL_RenderFillRect(renderer, &r);
        path.pop();
    }
}
This is the path generated, where e is the enemy and p is the player.

The keyword you're looking for is "path smoothing". There are lots of ways to implement this, too many to cover in depth in a single Stack Overflow answer:
I believe the most popular option is string pulling, which, as the name suggests, is like pulling on your path as though it were a string to make it taut.
You could also use the grid points to generate a spline.
You could use a steering algorithm to have your unit approximate the path.
Another option that has become more popular in recent years is to use an "any-angle" path-finding algorithm, which generates smoothed paths from the get-go. Theta* is the most popular one, to my knowledge.
All of these options produce near-optimal results. If for some reason you need optimal results, this paper was released a few years ago. I don't know much about it, but I assume it's slower than the other options.
Here is a GitHub page with a lot more options for path smoothing.
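Whichever smoothing option you choose, the "hopping" in the posted move function comes from treating each stack entry as an instant teleport instead of moving toward it at a finite speed. Here is a hedged, minimal sketch of waypoint following with deltaTime; the float position members and the speed value are assumptions, not part of the posted code:

// Sketch: advance toward the next waypoint at a fixed speed (pixels/second).
// Assumes the enemy stores its position as floats (posX, posY) and that the
// A* path is kept between frames rather than recomputed every frame.
#include <cmath>
#include <stack>

struct Point { float x, y; };

void followPath(std::stack<Point>& path, float& posX, float& posY,
                float speed, double deltaTime) {
    if (path.empty()) return;

    Point target = path.top();
    float dx = target.x - posX;
    float dy = target.y - posY;
    float dist = std::sqrt(dx * dx + dy * dy);
    float step = speed * static_cast<float>(deltaTime);

    if (dist <= step) {
        // Close enough: snap to the waypoint and move on to the next one.
        posX = target.x;
        posY = target.y;
        path.pop();
    } else {
        // Move a fraction of the way along the normalized direction vector.
        posX += dx / dist * step;
        posY += dy / dist * step;
    }
}

Note that recomputing A* and popping a node every frame throws the path away almost immediately; keeping the computed path until the player moves to a different tile is cheaper and is what makes smooth following possible.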

Related

Implementation of breadth first search in PacMan

I am currently working on a C++ project to make a PacMan clone. Basically I have done almost everything that the game does. But I have not yet figured out how to implement breadth first search in order for the ghosts to chase pacman. In the last few days, I have read a lot about BFS. I know what it is and what it does. I also know I have to use a queue for this purpose. But still, I am unable to actually implement this algorithm in my game. I have a 2d grid of 36*28 tiles. But I am really unsure about how to implement it in my xy-coordinate system, what to push to the queue and how to manipulate the neighbouring tiles. I'm stuck at this point. I'm not asking for actual code. I just need a clear and simple explanation about the actual implementation of BFS and which things to keep in mind while working on BFS in this 2d game grid.
Your explanation will be really helpful. Thanks.
I assume you want to do the BFS every time a ghost makes a move. What you could do is start a BFS from PacMan and run it until it has found all ghosts. Note that you don't actually need the complete route a ghost will take; you only need the next move. While doing the BFS, you can store for each cell the distance from PacMan to that cell. When the BFS is done, each ghost can look at its adjacent cells and pick the one with the lowest number. Note that you should initialize all cells with a large number.
To do your BFS you can use some tricks, like mapping your (x, y) coordinate to a single number. This number can be placed in your queue. Note that you should check for walls before putting something in your queue. When you pull something out of the queue, run a for-loop of length 4 (the number of adjacent cells).
int dx[] = {0, 1, 0, -1};
int dy[] = {1, 0, -1, 0};

void do_bfs() {
    std::queue<int> queue;
    // initialize grid
    // add starting position of pacman to queue
    while (!queue.empty()) {
        // remove and access first element
        int cur_place = queue.front(); queue.pop();
        int cur_x, cur_y;
        map_to_coordinate(cur_x, cur_y, cur_place);
        int cur_distance = grid[cur_x][cur_y];
        for (int i = 0; i < 4; i++) {
            if (cur_x + dx[i] >= 0 && /* more checks */) {
                queue.push(map_to_number(cur_x + dx[i], cur_y + dy[i]));
                grid[cur_x + dx[i]][cur_y + dy[i]] = cur_distance + 1;
            }
        }
    }
    // now grid is filled, so you should find out for each ghost how to move
}
As an exercise for the reader, I tried to leave as much open as possible while making my point.
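For completeness, here is a hedged sketch of the two helpers the snippet assumes (map_to_number and map_to_coordinate); the WIDTH constant is an assumption standing in for the grid's column count:

// Sketch of the assumed helpers: pack an (x, y) cell into a single int
// and unpack it again. WIDTH is an assumed column-count constant
// (28, if the question's 36*28 grid is rows*columns).
const int WIDTH = 28;

int map_to_number(int x, int y) {
    return y * WIDTH + x;
}

void map_to_coordinate(int& x, int& y, int number) {
    x = number % WIDTH;
    y = number / WIDTH;
}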

C++ collision detection for a turning rectangle

I have some collision detection working when my player hits an object, but this only works when my player's x and y coordinates hit my marker (which is the centre of my character).
Would making a method that returns a vector of all of the coordinates the player's texture covers work, and what is the best way to implement this?
This is being done in C++, creating a top-down game.
There are many ways to do it; the following is probably the simplest (depending on your use of classes etc.).
It is nowhere near the best, or in fact very good at all. This way means changing your "marker" to the bottom left of the rectangle.
void collisions()
{
    // check if the x-coord is between the furthest left and furthest right x-coords of the object
    if (rect.Getx() > someObject.Getx() && rect.Getx() < someObject.Getx() + someObject.GetWidth())
    {
        rect.SetMoveSpeed(0);
    }
    // same check for the y-coord against the object's top and bottom
    if (rect.Gety() > someObject.Gety() && rect.Gety() < someObject.Gety() + someObject.GetHeight())
    {
        rect.SetMoveSpeed(0);
    }
}
You would then have to set the move speed back to normal when the player is not colliding. That could be done with an else after each if, setting the move speed again. This is a quick fix and is not recommended for use in a game you plan to distribute anywhere.
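A still-simple but more robust starting point is a full axis-aligned bounding box (AABB) test, which requires overlap on both axes at once rather than checking them independently. A hedged sketch; the Rect struct is an assumed stand-in for whatever rectangle type the game uses:

// Sketch: AABB overlap test. Two rectangles collide only if they
// overlap on the x-axis AND the y-axis simultaneously.
struct Rect { float x, y, w, h; };

bool intersects(const Rect& a, const Rect& b)
{
    return a.x < b.x + b.w && a.x + a.w > b.x &&
           a.y < b.y + b.h && a.y + a.h > b.y;
}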

SDL and C++ - More efficient way of leaving a trail behind the player?

So I'm fairly new to SDL, and I'm trying to make a little snowboarding game. When the player is moving down the hill, I want to leave a trail of off-coloured snow behind him. Currently, the way I have this working is I have an array (with 1000 elements) that stores the player's last positions. Then each frame, I have a for loop that loops 1000 times to draw the trail texture at all of these last 1000 positions of the player...
I feel this is extremely inefficient, and I'm looking for some better alternatives!
The Code:
void Player::draw()
{
    if (posIndex >= 1000)
    {
        posIndex = 0;
    }
    for (int i = 0; i < 1000; i++) // Loop through all the 1000 past positions of the player
    {
        // pastPlayerPos is an array of SDL_Rects that stores the player's last 1000 positions
        // This line calculates the location to draw the trail texture
        SDL_Rect trailRect = { pastPlayerPos[i].x, pastPlayerPos[i].y, 32, 8 };
        // This draws the trail texture
        SDL_RenderCopy(Renderer, Images[IMAGE_TRAIL], NULL, &trailRect);
    }
    // This draws the player
    SDL_Rect drawRect = { (int)x, (int)y, 32, 32 };
    SDL_RenderCopy(Renderer, Images[0], NULL, &drawRect);
    // This is storing the past position
    SDL_Rect tempRect = { (int)x, (int)y, 0, 0 };
    pastPlayerPos[posIndex] = tempRect;
    posIndex++; // This is to cycle through the array to store the new position
}
This is the result, which is exactly what I'm trying to accomplish, but I'm just looking for a more efficient way. If there isn't one, I will stick with this.
There are multiple solutions. I'll give you two.
1.
Create a screen-size surface and fill it with alpha. On each player move, draw the player's current position into this surface, so each movement adds extra data to this would-be mask. Then blit this surface to the screen (beware of blit order). In your case it could be improved by disabling alpha, initially filling the surface with white, and blitting it first, before anything else. With that approach you can also skip clearing the screen after each flip.
I recommend starting with this one; a sketch follows.
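In SDL2 terms (the question already uses SDL_RenderCopy, so the renderer API), the persistent surface maps to a render-target texture: stamp each new trail mark into it once, then copy the whole texture each frame. A hedged sketch; Renderer, Images, and IMAGE_TRAIL come from the question, while screenWidth/screenHeight and the function names are assumptions:

// Sketch: persistent trail texture (SDL2). The renderer must be created
// with the SDL_RENDERER_TARGETTEXTURE flag for render targets to work.
SDL_Texture* trail = NULL;

void initTrail(int screenWidth, int screenHeight)
{
    trail = SDL_CreateTexture(Renderer, SDL_PIXELFORMAT_RGBA8888,
                              SDL_TEXTUREACCESS_TARGET,
                              screenWidth, screenHeight);
}

// When the player moves, stamp the trail mark into the texture once:
void stampTrail(int px, int py)
{
    SDL_SetRenderTarget(Renderer, trail);   // draw into the texture
    SDL_Rect mark = { px, py, 32, 8 };
    SDL_RenderCopy(Renderer, Images[IMAGE_TRAIL], NULL, &mark);
    SDL_SetRenderTarget(Renderer, NULL);    // back to the default target
}

// Each frame, draw the whole accumulated trail with a single call:
void drawTrail()
{
    SDL_RenderCopy(Renderer, trail, NULL, NULL);
}

Per frame this is one texture copy instead of 1000, and the stamping cost is paid only once per movement.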
2.
Not an easy one, but it may be more efficient (it depends). Save only the points where the player actually changed movement direction, then draw a chain of lines between those points. Note that SDL 1.x has no built-in line drawing (SDL_gfx provides it), but SDL2, which your renderer code implies, does: SDL_RenderDrawLine / SDL_RenderDrawLines. This approach may be better if you'll use an OpenGL backend later on; with SDL (or any other ordinary 2D drawing library) it's not too useful.
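A hedged sketch of that second approach using the SDL2 API; turnPoints is an assumed container that you would append to whenever the player's direction changes:

// Sketch: draw the trail as a polyline through direction-change points.
#include <vector>

std::vector<SDL_Point> turnPoints;

void drawTrailLines(SDL_Renderer* renderer)
{
    if (turnPoints.size() < 2) return;
    SDL_SetRenderDrawColor(renderer, 200, 200, 220, 255); // off-coloured snow
    SDL_RenderDrawLines(renderer, turnPoints.data(), (int)turnPoints.size());
}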

Problems with moving the camera in OpenGL

Ok, I'd like to start off by saying that I know I'm not actually moving the camera, but it's easier to explain that way.
My problem is that I'm trying to move the camera with my character in a top-down 2D RPG, and I can't find the correct way to do it. I know about glTranslate(), but then I can only use a speed instead of an x and y coordinate. I'm not sure how to move the camera while keeping the delta in mind. I don't even know if glTranslate() is the method I should be using.
In case I'm not making any sense (which is very likely), here's some of my code.
My test while loop:
while (!Keyboard.isKeyDown(Keyboard.KEY_ESCAPE) && !Display.isCloseRequested())
{
    glClear(GL11.GL_COLOR_BUFFER_BIT);
    delta = getDelta();
    update(delta);
    glTranslatef(speedx, speedy, 0);
    level1.checkCurrent(x, y);
    level1.draw();
    Display.update();
    Display.sync(60);
}
Here is where I set the speed:
if (Keyboard.isKeyDown(Keyboard.KEY_DOWN))
{
    y += 0.5 * delta;
    screenY += 0.5 * delta;
    speedy = (int) (-0.5 * delta);
    direction = 2;
}
else if (Keyboard.isKeyDown(Keyboard.KEY_UP))
{
    y -= 0.5 * delta;
    screenY -= 0.5 * delta;
    speedy = (int) (0.5 * delta);
    direction = 8;
}
else
    speedy = 0;
Right now you're treating OpenGL as if it were a scene graph, but OpenGL is only meant to draw things on the screen. Whatever you do, you should always think about your problem as if all the rest of the infrastructure weren't there.
You want to accelerate an object? Well, then you need to increment some velocity variable over time, and that velocity multiplied by time adds to the position. In essence, Newton's laws of motion for constant acceleration:
a = dv/dt => v = a*t + v_0
v = dr/dt => r = (1/2)*a*t² + v_0*t + r_0
You evaluate this for each of your objects. Then, when drawing the animation, you use that state to place the object geometry accordingly.
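To connect this to the camera question: integrate the player's position in world coordinates each frame, reset the modelview matrix, and translate by the negated position, so translations don't accumulate the way repeated glTranslatef(speedx, speedy, 0) calls do. A hedged C++-style sketch (the GL calls look the same in LWJGL); playerX/playerY, SPEED, and screenWidth/screenHeight are assumptions:

// Sketch: per-frame camera follow. Positions live in world space;
// the "camera" is just a translation by the negated player position.
float playerX = 0.0f, playerY = 0.0f;
const float SPEED = 0.5f; // world units per millisecond, as in the question

void frame(float delta, bool downHeld, bool upHeld)
{
    // 1. integrate the player's position from its velocity
    if (downHeld) playerY += SPEED * delta;
    if (upHeld)   playerY -= SPEED * delta;

    // 2. reset the modelview matrix so translations don't accumulate
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    // 3. "move the camera": shift the world so the player stays centered
    glTranslatef(-playerX + screenWidth / 2.0f,
                 -playerY + screenHeight / 2.0f, 0.0f);

    // 4. draw the level and the player at their world coordinates
}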

Is it possible to speed up MATLAB plotting by calling C/C++ code from MATLAB?

It is generally very easy to call MEX files (written in C/C++) from MATLAB to speed up certain calculations. In my experience, however, the true bottleneck in MATLAB is data plotting. Creating handles is extremely expensive, and even if you only update handle data (e.g., XData, YData, ZData), this can take ages. Even worse, since MATLAB is a single-threaded program, it is impossible to update multiple plots at the same time.
Therefore my question: is it possible to write a MATLAB GUI and call C++ (or some other parallelizable code) to take care of the plotting / visualization? I'm looking for a cross-platform solution that will work on Windows, Mac and Linux, but any solution that gets me started on any OS is greatly appreciated!
I found a C++ library that seems to use MATLAB's plot() syntax, but I'm not sure whether this would speed things up, since I'm afraid that if I plot into MATLAB's figure() window, things might get slowed down again.
I would appreciate any comments and feedback from people who have dealt with this kind of situation before!
EDIT: obviously, I've already profiled my code and the bottleneck is the plotting (dozens of panels with lots of data).
EDIT2: for you to get the bounty, I need a real-life, minimal working example of how to do this - suggestive answers won't help me.
EDIT3: regarding the data to plot: in the most simplistic case, think about 20 line plots that need to be updated each second with something like 1000000 data points.
EDIT4: I know that this is a huge amount of points to plot, but I never said that the problem was easy. I cannot just leave out certain data points, because there's no way of assessing which points are important before actually plotting them (the data is sampled at sub-ms time resolution). As a matter of fact, my data is acquired using a commercial data acquisition system which comes with a data viewer (written in C++). That program has no problem visualizing up to 60 line plots with even more than 1000000 data points.
EDIT5: I don't like where the current discussion is going. I'm aware that sub-sampling my data might speed things up - however, this is not the question. The question here is how to get a C / C++ / Python / Java interface to work with MATLAB in order to (hopefully) speed up plotting by talking directly to the hardware (or using any other trick / way).
Did you try the trivial solution of changing the renderer to OpenGL?
opengl hardware;
set(gcf,'Renderer','OpenGL');
Warning!
There will be some things that disappear in this mode, and it will look a bit different, but generally plots will run much faster, especially if you have a hardware accelerator.
By the way, are you sure that you will actually gain a performance increase?
For example, in my experience, WPF graphics in C# are considerably slower than MATLAB's, especially scatter plots and circles.
Edit: I thought about the fact that the number of points actually drawn to the screen can't be that large. Basically it means that you need to interpolate at the places where there is a pixel on the screen. Check out this object:
classdef InterpolatedPlot < handle
    properties(Access=private)
        hPlot;
    end

    methods(Access=public)
        function this = InterpolatedPlot(x,y,varargin)
            this.hPlot = plot(0,0,varargin{:});
            this.setXY(x,y);
        end
    end

    methods
        function setXY(this,x,y)
            parent = get(this.hPlot,'Parent');
            set(parent,'Units','Pixels');
            sz = get(parent,'Position');
            width = sz(3); % Actual width in pixels
            subSampleX = linspace(min(x(:)),max(x(:)),width);
            subSampleY = interp1(x,y,subSampleX);
            set(this.hPlot,'XData',subSampleX,'YData',subSampleY);
        end
    end
end
And here is an example of how to use it:
function TestALotOfPoints()
    x = rand(10000,1);
    y = rand(10000,1);
    ip = InterpolatedPlot(x,y,'color','r','LineWidth',2);
end
Another possible improvement:
Also, if your x data is sorted, you can use interp1q instead of interp1, which will be much faster.
classdef InterpolatedPlot < handle
    properties(Access=private)
        hPlot;
    end

%     properties(Access=public)
%         XData;
%         YData;
%     end

    methods(Access=public)
        function this = InterpolatedPlot(x,y,varargin)
            this.hPlot = plot(0,0,varargin{:});
            this.setXY(x,y);
%             this.XData = x;
%             this.YData = y;
        end
    end

    methods
        function setXY(this,x,y)
            parent = get(this.hPlot,'Parent');
            set(parent,'Units','Pixels');
            sz = get(parent,'Position');
            width = sz(3); % Actual width in pixels
            subSampleX = linspace(min(x(:)),max(x(:)),width);
            subSampleY = interp1q(x,y,transpose(subSampleX));
            set(this.hPlot,'XData',subSampleX,'YData',subSampleY);
        end
    end
end
And the use case:
function TestALotOfPoints()
    x = rand(10000,1);
    y = rand(10000,1);
    x = sort(x);
    ip = InterpolatedPlot(x,y,'color','r','LineWidth',2);
end
Since you want maximum performance, you should consider writing a minimal OpenGL viewer. Dump all the points to a file and launch the viewer using the "system" command in MATLAB. The viewer can be really simple; here is one implemented using GLUT, compiled for Mac OS X. The code is cross-platform, so you should be able to compile it for all the platforms you mention. It should be easy to tweak this viewer for your needs.
If you are able to integrate this viewer more closely with MATLAB, you might be able to get away with not having to write to and read from a file (= much faster updates). However, I'm not experienced in the matter. Perhaps you can put this code in a MEX file?
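On that last point, the MEX gateway itself is small. A hedged sketch of an entry point that receives the x and y data as double vectors; the hand-off to the viewer is left as a placeholder:

// Sketch: minimal MEX gateway receiving x and y as double vectors.
// Compile from MATLAB with: mex glview_mex.cpp   (filename is hypothetical)
#include "mex.h"

void mexFunction(int nlhs, mxArray* plhs[], int nrhs, const mxArray* prhs[])
{
    if (nrhs < 2)
        mexErrMsgTxt("Expected two inputs: x and y.");
    const double* x = mxGetPr(prhs[0]);
    const double* y = mxGetPr(prhs[1]);
    size_t n = mxGetNumberOfElements(prhs[0]);
    // ... hand the n points at (x, y) to the viewer here, instead of
    // round-tripping them through a file ...
}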
EDIT: I've updated the code to draw a line strip from a CPU memory pointer.
// On Mac OS X, compile using: g++ -O3 -framework GLUT -framework OpenGL glview.cpp
// The file "input" is assumed to contain a line for each point:
// 0.1 1.0
// 5.2 3.0
#include <vector>
#include <sstream>
#include <fstream>
#include <iostream>
#include <GLUT/glut.h>

using namespace std;

struct float2 { float2() {} float2(float x, float y) : x(x), y(y) {} float x, y; };

static vector<float2> points;
static float2 minPoint, maxPoint;
typedef vector<float2>::iterator point_iter;

static void render() {
    glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(minPoint.x, maxPoint.x, minPoint.y, maxPoint.y, -1.0f, 1.0f);
    glColor3f(0.0f, 0.0f, 0.0f);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(2, GL_FLOAT, sizeof(points[0]), &points[0].x);
    glDrawArrays(GL_LINE_STRIP, 0, points.size());
    glDisableClientState(GL_VERTEX_ARRAY);
    glutSwapBuffers();
}

int main(int argc, char* argv[]) {
    ifstream file("input");
    string line;
    while (getline(file, line)) {
        istringstream ss(line);
        float2 p;
        ss >> p.x;
        ss >> p.y;
        if (ss)
            points.push_back(p);
    }
    if (!points.size())
        return 1;
    minPoint = maxPoint = points[0];
    for (point_iter i = points.begin(); i != points.end(); ++i) {
        float2 p = *i;
        minPoint = float2(minPoint.x < p.x ? minPoint.x : p.x, minPoint.y < p.y ? minPoint.y : p.y);
        maxPoint = float2(maxPoint.x > p.x ? maxPoint.x : p.x, maxPoint.y > p.y ? maxPoint.y : p.y);
    }
    float dx = maxPoint.x - minPoint.x;
    float dy = maxPoint.y - minPoint.y;
    maxPoint.x += dx*0.1f; minPoint.x -= dx*0.1f;
    maxPoint.y += dy*0.1f; minPoint.y -= dy*0.1f;
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE);
    glutInitWindowSize(512, 512);
    glutCreateWindow("glview");
    glutDisplayFunc(render);
    glutMainLoop();
    return 0;
}
EDIT: Here is new code based on the discussion below. It renders a sin function consisting of 20 vbos, each containing 100k points. 10k new points are added each rendered frame. This makes a total of 2M points. The performance is real-time on my laptop.
// On Mac OS X, compile using: g++ -O3 -framework GLUT -framework OpenGL glview.cpp
#include <vector>
#include <sstream>
#include <fstream>
#include <iostream>
#include <cmath>
#include <GLUT/glut.h>

using namespace std;

struct float2 { float2() {} float2(float x, float y) : x(x), y(y) {} float x, y; };

struct Vbo {
    GLuint i;
    Vbo(int size) { glGenBuffersARB(1, &i); glBindBufferARB(GL_ARRAY_BUFFER, i); glBufferDataARB(GL_ARRAY_BUFFER, size, 0, GL_DYNAMIC_DRAW); } // could try GL_STATIC_DRAW
    void set(const void* data, size_t size, size_t offset) { glBindBufferARB(GL_ARRAY_BUFFER, i); glBufferSubData(GL_ARRAY_BUFFER, offset, size, data); }
    ~Vbo() { glDeleteBuffers(1, &i); }
};

static const int vboCount = 20;
static const int vboSize = 100000;
static const int pointCount = vboCount*vboSize;
static float endTime = 0.0f;
static const float deltaTime = 1e-3f;
static std::vector<Vbo*> vbos;
static int vboStart = 0;

static void addPoints(float2* points, int pointCount) {
    while (pointCount) {
        if (vboStart == vboSize || vbos.empty()) {
            if (vbos.size() >= vboCount+2) { // remove and reuse vbo
                Vbo* first = *vbos.begin();
                vbos.erase(vbos.begin());
                vbos.push_back(first);
            }
            else { // create new vbo
                vbos.push_back(new Vbo(sizeof(float2)*vboSize));
            }
            vboStart = 0;
        }
        int pointsAdded = pointCount;
        if (pointsAdded + vboStart > vboSize)
            pointsAdded = vboSize - vboStart;
        Vbo* vbo = *vbos.rbegin();
        vbo->set(points, pointsAdded*sizeof(float2), vboStart*sizeof(float2));
        pointCount -= pointsAdded;
        points += pointsAdded;
        vboStart += pointsAdded;
    }
}

static void render() {
    // generate and add 10000 points
    const int count = 10000;
    float2 points[count];
    for (int i = 0; i < count; ++i) {
        float2 p(endTime, std::sin(endTime*1e-2f));
        endTime += deltaTime;
        points[i] = p;
    }
    addPoints(points, count);
    // render
    glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(endTime-deltaTime*pointCount, endTime, -1.0f, 1.0f, -1.0f, 1.0f);
    glColor3f(0.0f, 0.0f, 0.0f);
    glEnableClientState(GL_VERTEX_ARRAY);
    for (size_t i = 0; i < vbos.size(); ++i) {
        glBindBufferARB(GL_ARRAY_BUFFER, vbos[i]->i);
        glVertexPointer(2, GL_FLOAT, sizeof(float2), 0);
        if (i == vbos.size()-1)
            glDrawArrays(GL_LINE_STRIP, 0, vboStart);
        else
            glDrawArrays(GL_LINE_STRIP, 0, vboSize);
    }
    glDisableClientState(GL_VERTEX_ARRAY);
    glutSwapBuffers();
    glutPostRedisplay();
}

int main(int argc, char* argv[]) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE);
    glutInitWindowSize(512, 512);
    glutCreateWindow("glview");
    glutDisplayFunc(render);
    glutMainLoop();
    return 0;
}
As a number of people have mentioned in their answers, you do not need to plot that many points. I think it is important to repeat Andrey's comment:
that is a HUGE amount of points! There aren't enough pixels on the screen to plot that amount.
Rewriting plotting routines in different languages is a waste of your time. A huge number of hours have gone into writing MATLAB; what makes you think you can write a significantly faster plotting routine (in a reasonable amount of time)? Whilst your routine may be less general, and would therefore skip some of the checks that the MATLAB code performs, your real "bottleneck" is that you are trying to plot so much data.
I strongly recommend one of two courses of action:
Sample your data: you do not need 20 x 1000000 points on a figure; the human eye won't be able to distinguish between all the points, so plotting them all is a waste of time. Try binning your data, for example.
If you maintain that you need all those points on the screen, I would suggest using a different tool. VisIt or ParaView are two examples that come to mind. They are parallel visualisation programs designed to handle extremely large datasets (I have seen VisIt handle datasets containing petabytes of data).
There is no way you can fit 1000000 data points on a small plot. How about you choose one in every 10000 points and plot those?
You can consider calling imresize on the large vector to shrink it, but manually building a vector by omitting 99% of the points may be faster.
@memyself The sampling operations are already occurring. MATLAB is choosing what data to include in the graph. Why do you trust MATLAB? It looks to me like the graph you showed significantly misrepresents the data. The dense regions should indicate that the signal is at a constant value, but in your graph a dense region could mean that the signal was at that value half the time, or that it was at that value at least once during the interval corresponding to that pixel.
Would it be possible to use an alternative architecture? For example, use MATLAB to generate the data and use a fast library or application (gnuplot?) to handle the plotting?
It might even be possible to have MATLAB write the data to a stream as the plotter consumes it. Then the plot would be updated as MATLAB generates the data.
This approach would avoid MATLAB's ridiculously slow plotting and divide the work between two separate processes. The OS/CPU would probably assign the processes to different cores as a matter of course.
I think it's possible, but likely to require writing the plotting code (at least the parts you use) from scratch, since anything you could reuse is exactly what's slowing you down.
To test feasibility, I'd start with testing that any Win32 GUI works from MEX (call MessageBox), then proceed to creating your own window, test that window messages arrive to your WndProc. Once all that's going, you can bind an OpenGL context to it (or just use GDI), and start plotting.
However, the savings are likely to come from simpler plotting code and the use of newer OpenGL features such as VBOs, rather than from threading. Everything is already parallel on the GPU, and more threads don't help transfer commands/data to the GPU any faster.
I did a very similar thing many many years ago (2004?). I needed an oscilloscope-like display for kilohertz sampled biological signals displayed in real time. Not quite as many points as the original question has, but still too many for MATLAB to handle on its own. IIRC I ended up writing a Java component to display the graph.
As other people have suggested, I also ended up down-sampling the data. For each pixel on the x-axis, I calculated the minimum and maximum values taken by the data, then drew a short vertical line between those values. The entire graph consisted of a sequence of short vertical lines, each immediately adjacent to the next.
Actually, I think the implementation ended up writing the graph to a bitmap that scrolled continuously using bitblt, with only new points being drawn... or maybe the bitmap was static and the viewport scrolled along it... anyway, it was a long time ago and I might not be remembering it right.
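That min/max-per-pixel scheme is straightforward to sketch in C++. This is a hedged version; the samples vector and the pixelWidth parameter are assumptions, and it assumes at least one sample per pixel column:

// Sketch: min/max binning. For each screen column, find the extremes of
// all samples that map to it, then draw one vertical line per column.
#include <algorithm>
#include <vector>

struct Extent { float lo, hi; };

std::vector<Extent> binMinMax(const std::vector<float>& samples, int pixelWidth)
{
    std::vector<Extent> columns(pixelWidth);
    size_t perColumn = samples.size() / pixelWidth; // samples per pixel column
    for (int c = 0; c < pixelWidth; ++c) {
        auto first = samples.begin() + c * perColumn;
        auto last  = first + perColumn;
        auto mm = std::minmax_element(first, last);
        columns[c] = { *mm.first, *mm.second };
    }
    return columns;
}

Unlike naive decimation, this never hides a spike: any extreme value in a column shows up in that column's vertical line.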
To quote EDIT4 from the question: "I know that this is a huge amount of points to plot but I never said that the problem was easy. I can not just leave out certain data points, because there's no way of assessing what points are important, before actually plotting them."
This is incorrect. There is a way to know which points to leave out; MATLAB is already doing it. Something is going to have to do it at some point no matter how you solve this. I think you need to redirect your problem to "how do I determine which points I should plot?".
Based on the screenshot, the data looks like a waveform. You might want to look at the code of Audacity, an open-source audio editing program. It displays plots representing the waveform in real time, and they look identical in style to the one in your lowest screenshot. You could borrow some sampling techniques from them.
What you are looking for is the creation of a MEX file.
Rather than me explaining it, you would probably benefit more from reading this: Creating C/C++ and Fortran Programs to be Callable from MATLAB (MEX-Files) (a documentation article from MathWorks).
Hope this helps.