A* Pathfinding without diagonal movement - C++

I'm currently trying to implement A* in C++ using: Link
However, for the first version I decided not to include diagonal movement.
In the summary section in point c where it loops checking the neighbours of the current point:
for each neighbour of the current square (above, below, left, right)
    if neighbour on closed list or not walkable {
        continue
    }
    if neighbour not in open list {
        add to open list
        set parent of neighbour to current square
        update F, G, H values
    } else if neighbour is on open list {
        check to see if this path to that square is better,
        using G cost as the measure. A lower G cost means that this is a better path.
        If so, change the parent of the square to the current square,
        and recalculate the G and F scores of the square.
    }
If I am only allowing 4-direction movement, do I still need to check the G cost to see if the path to that square is better? For example, starting at the start point, all 4 neighbours of the start point will have the same G.

The point of checking for the lower G cost is to re-route the path to that square if a better one is eventually found. You may initially find a path from A to a square C. But later in the search, you may reach C again through a different neighbour. If C is not on the closed list, the G cost of the new path to C may be lower than the first, so you want to update C with the better G (and thus F) value.
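For concreteness, a minimal C++ sketch of that update step (often called relaxation); the Node struct and the helper name are illustrative assumptions, not part of the linked article:

// Hypothetical node type for a 4-connected grid; only the fields needed
// for the G-cost update are shown.
struct Node {
    int x, y;
    int g, h, f;     // g = cost from start, h = heuristic, f = g + h
    Node* parent;
};

// Relaxation step for a neighbour that is already on the open list.
void relaxNeighbour(Node* current, Node* neighbour, int stepCost) {
    int tentativeG = current->g + stepCost;          // cost of reaching neighbour via current
    if (tentativeG < neighbour->g) {                 // found a cheaper path to neighbour
        neighbour->parent = current;                 // re-parent the square
        neighbour->g = tentativeG;
        neighbour->f = neighbour->g + neighbour->h;  // h does not change
    }
}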

Let's analyze the situation. The basic point is that every edge in the graph has the same weight.
Assume that you are currently at vertex c and you analyze neighbor n. Then the updated distance for neighbor n would be distance(c) + w, where w is the uniform edge length. Now the question is whether n can already have a larger distance than that via a different path.
Assume n has a previous parent p that leads to a longer path. Then n has the distance distance(p) + w. For that expression to be larger than the updated cost from c, the following must hold:
distance(p) + w > distance(c) + w
distance(p) > distance(c)
So p would have to be farther away from the start than c. If you used Dijkstra's algorithm, this would not be possible because this would mean that p would be fixed after c. But you use A* and determine the order with the heuristic. Hence, the following two conditions need to hold:
distance(p) > distance(c)
distance(p) + heuristic(p) <= distance(c) + heuristic(c)
<=> heuristic(p) <= heuristic(c) - (distance(p) - distance(c))
So it comes down to your heuristic. The widely used Manhattan distance heuristic allows this. So if you use this heuristic, you have to check for a lower cost. If your heuristic does not allow the above condition (for example a constant heuristic as in the case of Dijkstra), you don't.
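For reference, the Manhattan heuristic mentioned above might be written like this in C++ (the Point struct is an assumption for illustration):

#include <cstdlib>

struct Point { int x, y; };

// Manhattan distance from a to the goal: the usual heuristic for 4-direction movement.
int manhattan(const Point& a, const Point& goal) {
    return std::abs(a.x - goal.x) + std::abs(a.y - goal.y);
}

As the analysis shows, with such a heuristic the two conditions above can both hold, so the G-cost comparison for squares already on the open list should stay.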


Given n points, how can I find the number of pairs of points with a given distance

I have an input of n unique points (X,Y) that are between 0 and 2^32 inclusive. The coordinates are integers.
I need to create an algorithm that finds the number of pairs of points with a distance of exactly 2018.
I have thought of checking every point against every other point, but it would be O(n^2) and I have to make it more efficient. I also thought of using a set or a vector and sorting it using a comparator based on the distance from the origin point, but it wouldn't help at all.
So how can I do it efficiently?
There is one Pythagorean triple with a hypotenuse of 2018: 1118^2 + 1680^2 = 2018^2.
Since all coordinates are integers, the only possible differences between the coordinates (both X and Y) of the two points are 0, 1118, 1680, and 2018.
Finding all pairs of points with a given difference between X (or Y) coordinates is a simple n log n operation.
Numbers other than 2018 might need a bit more work because they might be members of more than one Pythagorean triple (for example, 2015 is the hypotenuse of 4 triples). If the number is not given as a constant but provided at run time, you will have to generate all triples with this hypotenuse. This may require some sqrt(N) effort (N is the hypotenuse, not the number of points). One can find a recipe on the math stackexchange, e.g. here (there are many others).
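To make this concrete, here is a minimal counting sketch under the assumptions above (the function name and point representation are illustrative, not from the question); it looks up the handful of possible offset vectors for each point in a sorted set, which is the n log n approach described:

#include <cstdint>
#include <set>
#include <utility>
#include <vector>

// Since 1118^2 + 1680^2 = 2018^2 is the only decomposition, every valid pair
// differs by one of twelve fixed offset vectors; look each one up in O(log n).
long long countPairsAtDistance2018(const std::vector<std::pair<int64_t, int64_t>>& pts) {
    const std::set<std::pair<int64_t, int64_t>> seen(pts.begin(), pts.end());
    const std::pair<int64_t, int64_t> offsets[] = {
        {2018, 0}, {-2018, 0}, {0, 2018}, {0, -2018},
        {1118, 1680}, {1118, -1680}, {-1118, 1680}, {-1118, -1680},
        {1680, 1118}, {1680, -1118}, {-1680, 1118}, {-1680, -1118}};
    long long ordered = 0;
    for (const auto& p : pts)
        for (const auto& d : offsets)
            if (seen.count({p.first + d.first, p.second + d.second}))
                ++ordered;                        // each unordered pair is counted twice
    return ordered / 2;
}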
You could try using a Quadtree. First you start sorting your points into the quadtree. You should specify a lower limit for the cell size of e.g. 2048, which is a power of 2. Then iterate through the points and calculate distances to the points in the same cell and to the points in adjacent cells. That way you should be able to decrease the number of distance calculations drastically.
The main difficulty will probably be implementing the tree structure. You also have to find a way to find adjacent cells (you must include the possibility of traversing upwards in the tree).
The complexity of this is probably O(n*log(n)) in the best case but don't pin me down on that.
One additional word on the distance calculation: you will probably be much faster if you don't do
dx = p1x - p2x;
dy = p1y - p2y;
if ( sqrt(dx*dx + dy*dy) == 2018 ) {
    ...
}
but
dx = p1x - p2x;
dy = p1y - p2y;
if ( dx*dx + dy*dy == 2018*2018 ) {
    ...
}
Squaring is faster than taking the square root, so just compare the square of the distance with the square of 2018.

OpenCV estimateRigidTransform()

After reading several posts about getting the 2D transformation of 2D points from one image to another, estimateRigidTransform() seems to be the recommendation. I'm trying to use it. I modified the source code to change the RANSAC parameters, because they are hardcoded and the hardcoded parameters are not very good (the source code for this function is in lkpyramid.cpp). I have read up on how RANSAC works, and am trying to understand the steps in estimateRigidTransform().
// choose random 3 non-complanar points from A & B
...
// additional check for non-complanar vectors
a[0] = pA[idx[0]];
a[1] = pA[idx[1]];
a[2] = pA[idx[2]];
b[0] = pB[idx[0]];
b[1] = pB[idx[1]];
b[2] = pB[idx[2]];
double dax1 = a[1].x - a[0].x, day1 = a[1].y - a[0].y;
double dax2 = a[2].x - a[0].x, day2 = a[2].y - a[0].y;
double dbx1 = b[1].x - b[0].x, dby1 = b[1].y - b[0].y;
double dbx2 = b[2].x - b[0].x, dby2 = b[2].y - b[0].y;
const double eps = 0.01;
if( fabs(dax1*day2 - day1*dax2) < eps*std::sqrt(dax1*dax1+day1*day1)*std::sqrt(dax2*dax2+day2*day2) ||
    fabs(dbx1*dby2 - dby1*dbx2) < eps*std::sqrt(dbx1*dbx1+dby1*dby1)*std::sqrt(dbx2*dbx2+dby2*dby2) )
    continue;
Is it a typo that it uses non-coplanar vectors? I mean the 2D points are all on the same plane right?
My second question is what is that if condition doing? I know that the left hand side (gives the area of triangle times 2) would be zero or near zero if the points are collinear, and the right hand side is the multiplication of the lengths of 2 sides of the triangle.
Collinearity is preserved by affine transformations (such as the one you are probably estimating), but these transformations also account for changes in the point of view (as if you rotated the object in a 3D world). After such a change the points remain collinear, so for the algorithm the problem may not have a unique solution. Look at the pictures:
Imagine selecting the 3 center points of the black squares in the first row of the first image, and mapping them to the same centers in the next image. It may generate a mapping to that solution, but also a mapping to a zoomed-in version of the first one. The same may happen with the third one, except that this time it may map to a zoomed-out version of the first one (without any other change). However, if the points are not collinear, for example the centers of 3 corner squares, it will find a unique mapping.
I hope this helps you to clarify your doubts. If not, leave a comment

How to find all equals paths in degenerate tree from specific start vertex? [duplicate]

I have a degenerate tree (it looks like an array or doubly linked list). For example, it is this tree:
Each edge has some weight. I want to find all equal paths which start at each vertex.
In other words, I want to get all tuples (v1, v, v2) where v1 and v2 are an arbitrary ancestor and descendant such that c(v1, v) = c(v, v2).
Let edges have the following weights (it is just an example):
a-b = 3
b-c = 1
c-d = 1
d-e = 1
Then:
The vertex A does not have any equal paths (there is no vertex on the left side).
The vertex B has one equal pair. The path B-A equals the path B-E (3 == 3).
The vertex C has one equal pair. The path C-B equals the path C-D (1 == 1).
The vertex D has one equal pair. The path D-C equals the path D-E (1 == 1).
The vertex E does not have any equal paths (there is no vertex on the right side).
I implemented a simple algorithm which works in O(n^2), but it is too slow for me.
You write, in comments, that your current approach is:
It seems I am looking for a way to decrease the constant in O(n^2). I choose some vertex. Then I create two sets. Then I fill these sets with partial sums while iterating from this vertex to the start of the tree and to the end of the tree. Then I find the set intersection and get the number of paths from this vertex. Then I repeat the algorithm for all other vertices.
There is a simpler and, I think, faster O(n^2) approach, based on the so-called two-pointers method.
For each vertex v, go in both possible directions at the same time. Have one "pointer" to a vertex (vl) moving in one direction and another (vr) moving in the other direction, and try to keep the distance from v to vl as close to the distance from v to vr as possible. Each time these distances become equal, you have a pair of equal paths.
for v in vertices
    vl = prev(v)
    vr = next(v)
    while (vl is still inside the tree)
          and (vr is still inside the tree)
        if dist(v,vl) < dist(v,vr)
            vl = prev(vl)
        else if dist(v,vr) < dist(v,vl)
            vr = next(vr)
        else // dist(v,vr) == dist(v,vl)
            ans = ans + 1
            vl = prev(vl)
            vr = next(vr)
(By precalculating the prefix sums, you can find dist in O(1).)
It's easy to see that no equal pair will be missed provided that you do not have zero-length edges.
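Here is a minimal C++ sketch of that pseudocode, assuming the chain is given as a list of consecutive integer edge weights (function and variable names are illustrative):

#include <vector>

// w[i] is the weight of the edge between vertex i and vertex i+1.
long long countEqualPaths(const std::vector<long long>& w) {
    const int n = static_cast<int>(w.size()) + 1;
    std::vector<long long> pre(n, 0);                  // pre[i] = dist(vertex 0, vertex i)
    for (int i = 1; i < n; ++i) pre[i] = pre[i - 1] + w[i - 1];

    long long ans = 0;
    for (int v = 0; v < n; ++v) {
        int vl = v - 1, vr = v + 1;
        while (vl >= 0 && vr < n) {
            long long dl = pre[v] - pre[vl];           // dist(v, vl)
            long long dr = pre[vr] - pre[v];           // dist(v, vr)
            if (dl < dr)      --vl;
            else if (dr < dl) ++vr;
            else { ++ans; --vl; ++vr; }                // equal pair found
        }
    }
    return ans;
}

For the example weights 3, 1, 1, 1 this returns 3, matching the pairs listed for B, C and D.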
Regarding a faster solution, if you want to list all pairs, then you can't do it faster, because the number of pairs will be O(n^2) in the worst case. But if you need only the number of these pairs, there might exist faster algorithms.
UPD: I came up with another algorithm for calculating that number, which might be faster in case your edges are rather short. If you denote the total length of your chain (the sum of all edge weights) as L, then the algorithm runs in O(L log L). However, it is much more advanced conceptually and harder to code, too.
Firstly, some theoretical reasoning. Consider some vertex v. Let us have two arrays, a and b; not C-style zero-indexed arrays, but arrays indexed from -L to L.
Let us define
for i>0, a[i]=1 iff there is a vertex at distance exactly i to the right of v, otherwise a[i]=0
for i=0, a[i]≡a[0]=1
for i<0, a[i]=1 iff there is a vertex at distance exactly -i to the left of v, otherwise a[i]=0
A simple understanding of this array is as follows. Stretch your graph and lay it along the coordinate axis so that each edge has length equal to its weight, and vertex v lies at the origin. Then a[i]=1 iff there is a vertex at coordinate i.
For your example and for vertex "b" chosen as v:
a--------b--c--d--e
--|--|--|--|--|--|--|--|--|-->
-4 -3 -2 -1 0 1 2 3 4
a: ... 0 1 0 0 1 1 1 1 0 ...
For another array, array b, we define the values in a symmetrical way with respect to origin, as if we have inverted the direction of the axis:
for i>0, b[i]=1 iff there is a vertex at distance exactly i to the left of v, otherwise b[i]=0
for i=0, b[i]≡b[0]=1
for i<0, b[i]=1 iff there is a vertex at distance exactly -i to the right of v, otherwise b[i]=0
Now consider a third array c such that c[i]=a[i]*b[i], where the asterisk stands for ordinary multiplication. Obviously c[i]=1 iff the path of length abs(i) to the left ends in a vertex, and the path of length abs(i) to the right ends in a vertex. So for i>0 each position in c that has c[i]=1 corresponds to a path you need. There are also negative positions (c[i]=1 with i<0), which just mirror the positive positions, and one more position where c[i]=1, namely position i=0.
Calculate the sum of all elements in c. This sum will be sum(c)=2P+1, where P is the total number of paths which you need with v being their center. So if you know sum(c), you can easily determine P.
Let us now consider more closely the arrays a and b and how they change when we change the vertex v. Let us denote by v0 the leftmost vertex (the root of your tree) and by a0 and b0 the corresponding a and b arrays for that vertex.
For arbitrary vertex v denote d=dist(v0,v). Then it is easy to see that for vertex v the arrays a and b are just arrays a0 and b0 shifted by d:
a[i]=a0[i+d]
b[i]=b0[i-d]
It is obvious if you remember the picture with the tree stretched along a coordinate axis.
Now let us consider one more array, S (one array for all vertices), and for each vertex v let us put the value of sum(c) into the S[d] element (d and c depend on v).
More precisely, let us define array S so that for each d
S[d] = sum_over_i(a0[i+d]*b0[i-d])
Once we know the S array, we can iterate over vertices and for each vertex v obtain its sum(c) simply as S[d] with d=dist(v,v0), because for each vertex v we have sum(c)=sum(a0[i+d]*b0[i-d]).
But the formula for S is very simple: S is just the convolution of the a0 and b0 sequences. (The formula does not exactly follow the definition, but is easy to modify to the exact definition form.)
So what we now need is: given a0 and b0 (which we can calculate in O(L) time and space), calculate the S array. After this, we can iterate over the S array and simply extract the number of paths from S[d]=2P+1.
Direct application of the formula above is O(L^2). However, the convolution of two sequences can be calculated in O(L log L) by applying the Fast Fourier transform algorithm. Moreover, you can apply a similar Number theoretic transform (don't know whether there is a better link) to work with integers only and avoid precision problems.
So the general outline of the algorithm becomes
calculate a0 and b0 // O(L)
calculate S = corrected_convolution(a0, b0) // O(L log L)
v0 = leftmost vertex (root)
for v in vertices:
d = dist(v0, v)
ans = ans + (S[d]-1)/2
(I call it corrected_convolution because S is not exactly a convolution, but a very similar object for which a similar algorithm can be applied. Moreover, you can even define S'[2*d]=S[d]=sum(a0[i+d]*b0[i-d])=sum(a0[i]*b0[i-2*d]), and then S' is the convolution proper.)
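To make the outline concrete, here is a small sketch (names and the integer-weight assumption are mine). Because v0 is the leftmost vertex, b0[i] = a0[-i], so S'[2d] is simply the self-convolution of a0 evaluated at 2d. The sketch uses the direct O(L^2) double loop for clarity; replacing it with an FFT- or NTT-based convolution gives the O(L log L) behaviour described above.

#include <vector>

// w[i] is the integer weight of the edge between vertex i and vertex i+1.
long long countEqualPathsByConvolution(const std::vector<int>& w) {
    const int n = static_cast<int>(w.size()) + 1;
    std::vector<int> pos(n, 0);                        // pos[v] = dist(root, v)
    for (int i = 1; i < n; ++i) pos[i] = pos[i - 1] + w[i - 1];
    const int L = pos[n - 1];

    std::vector<long long> a(L + 1, 0);                // a[p] = 1 iff a vertex sits at coordinate p
    for (int v = 0; v < n; ++v) a[pos[v]] = 1;

    std::vector<long long> c(2 * L + 1, 0);            // c = a convolved with a (direct O(L^2) loop)
    for (int i = 0; i <= L; ++i)
        for (int j = 0; j <= L; ++j)
            c[i + j] += a[i] * a[j];

    long long ans = 0;
    for (int v = 0; v < n; ++v)
        ans += (c[2 * pos[v]] - 1) / 2;                // S[d] = 2P + 1, so P = (S[d] - 1) / 2
    return ans;
}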

C++: Finding all combinations of array items divisible into two groups

I believe this is more of an algorithmic question but I also want to do this in C++.
Let me illustrate the question with an example.
Suppose I have N objects (not programming objects), each with a different weight. And I have two vehicles to carry them. Each vehicle is big enough to carry all the objects by itself. The two vehicles have their own mileage and different levels of fuel in the tank, and the mileage depends on the weight carried.
The objective is to bring these N objects as far as possible. So I need to distribute the N objects in a certain way between the two vehicles. Note that I do not need to bring them the 'same' distance, but rather as far as possible. So for example, I want the two vehicles to go 5 km and 6 km, rather than one going 2 km and the other going 7 km.
I cannot think of a theoretical closed-form calculation to determine which weights should be loaded into each vehicle, because remember that I need to carry all N objects, so the total weight is fixed.
So as far as I can think, I need to try all the combinations.
Could someone advise an efficient algorithm to try all the combinations?
For example I would have the following:
int weights[5] = {1,4,2,7,5}; // can be more values than 5
float vehicelONEMileage(int totalWeight);
float vehicleTWOMileage(int totalWeight);
How could I efficiently try all the combinations of weights[] with the two functions?
The two functions can be assumed to be linear. I.e., the return values of the two mileage functions are linear functions with (different) negative slopes and (different) offsets.
So what I need to find is something like:
MAX(MIN(vehicleONEMileage(x), vehicleTWOMileage(sum(weights) - x)));
Thank you.
This should be on the cs or the math site.
Simplification: Instead of an array of objects, let's say we can distribute weight linearly.
The function we want to optimize is the minimum of both travel distances. Finding the maximum of the minimum is the same as finding the maximum of the product (Without proof. But to see this, think of the relationship between perimeter and area of rectangles. The rectangle with the biggest area given a perimeter is a square, which also happens to have the largest minimum side length).
In the following, we will scale the sum of all weights to 1. So, a distribution like (0.7, 0.3) means that 70% of all weights is loaded on vehicle 1. Let's call the load of vehicle 1 x, so the load of vehicle 2 is 1-x.
Given the two linear functions f = a x + b and g = c x + d, where f is the mileage of vehicle 1 when loaded with weight x, and g the same for vehicle 2, we want to maximize
(a*x+b)*(c*(1-x)+d)
Let's ask Wolfram Alpha to do the hard work for us: www.wolframalpha.com/input/?i=derive+%28%28a*x%2Bb%29*%28c*%281-x%29%2Bd%29%29
It tells us that there is an extremum at
x_opt = (a * c + a * d - b * c) / (2 * a * c)
That's all you need to solve your problem efficiently.
The complete algorithm:
find a, b, c, d:
    b = vehicleONEMileage(0)
    a = (vehicleONEMileage(1) - b) * sum_of_all_weights
    same for c and d
calculate x_opt as above.
if x_opt < 0, load all weight onto vehicle 2
if x_opt > 1, load all weight onto vehicle 1
else, try to load tgt_load = x_opt*sum_of_all_weights onto vehicle 1, the rest onto vehicle 2.
The rest is a knapsack problem. See http://en.wikipedia.org/wiki/Knapsack_problem#0.2F1_Knapsack_Problem
How to apply this? Use the dynamic programming algorithm described there twice.
for maximizing a load up to tgt_load
for maximizing a load up to (sum_of_all_weights - tgt_load)
The first one, if loaded onto vehicle one, gives you a distribution with slightly less than expected on vehicle one.
The second one, if loaded onto vehicle two, gives you a distribution with slightly more than expected on vehicle two.
One of those is the best solution. Compare them and use the better one.
I leave the C++ part to you. ;-)
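As a starting point, here is a minimal sketch of the subset-sum style DP step (assuming non-negative integer weights; the helper name is hypothetical):

#include <vector>

// Largest total weight <= capacity that a subset of the items can reach;
// this is the "load up to tgt_load" step from the outline above.
int bestLoadUpTo(const std::vector<int>& weights, int capacity) {
    std::vector<char> reachable(capacity + 1, 0);
    reachable[0] = 1;
    for (int w : weights)
        for (int c = capacity; c >= w; --c)            // go downwards: each item used at most once
            if (reachable[c - w]) reachable[c] = 1;
    for (int c = capacity; c >= 0; --c)
        if (reachable[c]) return c;
    return 0;
}

Call it once with tgt_load and once with sum_of_all_weights - tgt_load, then compare the two resulting distributions as described above.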
I can suggest the following solution:
The total number of combinations is 2^(number of weights). Using bit logic we can loop through all the combinations and calculate maxDistance. The bits in the combination value show which weight goes to which vehicle.
Note that the algorithm complexity is exponential and that int has a limited number of bits!
float maxDistance = 0.f;
for (int combination = 0; combination < (1 << ARRAYSIZE(weights)); ++combination)
{
    int weightForVehicleONE = 0;
    int weightForVehicleTWO = 0;
    for (int i = 0; i < ARRAYSIZE(weights); ++i)
    {
        if (combination & (1 << i)) // bit is set to 1: this weight goes to vehicleTWO
        {
            weightForVehicleTWO += weights[i];
        }
        else // bit is set to 0: this weight goes to vehicleONE
        {
            weightForVehicleONE += weights[i];
        }
    }
    maxDistance = max(maxDistance, min(vehicelONEMileage(weightForVehicleONE), vehicleTWOMileage(weightForVehicleTWO)));
}

Very fast 3D distance check?

Is there a way to do a quick and dirty 3D distance check where the results are rough, but it is very very fast? I need to do depth sorting. I use STL sort like this:
bool sortfunc(CBox* a, CBox* b)
{
    return a->Get3dDistance(Player.center, a->center) <
           b->Get3dDistance(Player.center, b->center);
}

float CBox::Get3dDistance( Vec3 c1, Vec3 c2 )
{
    //(Dx*Dx+Dy*Dy+Dz*Dz)^.5
    float dx = c2.x - c1.x;
    float dy = c2.y - c1.y;
    float dz = c2.z - c1.z;
    return sqrt((float)(dx * dx + dy * dy + dz * dz));
}
Is there possibly a way to do it without a square root or possibly without multiplication?
You can leave out the square root because, for non-negative numbers x and y, sqrt(x) < sqrt(y) if and only if x < y. Since you're summing squares of real numbers, every term is non-negative, so both sums are non-negative and the condition applies.
You cannot eliminate the multiplication, however, without changing the algorithm. Here's a counterexample: if x is (3, 1, 1) and y is (4, 0, 0), |x| < |y| because sqrt(1*1+1*1+3*3) < sqrt(4*4+0*0+0*0) and 1*1+1*1+3*3 < 4*4+0*0+0*0, but 1+1+3 > 4+0+0.
Since modern CPUs can compute a dot product faster than they can actually load the operands from memory, it's unlikely that you would have anything to gain by eliminating the multiply anyway (I think the newest CPUs have a special instruction that can compute a dot product every 3 cycles!).
I would not consider changing the algorithm without doing some profiling first. Your choice of algorithm will heavily depend on the size of your dataset (does it fit in cache?), how often you have to run it, and what you do with the results (collision detection? proximity? occlusion?).
What I usually do is first filter by Manhattan distance
float CBox::Within3DManhattanDistance( Vec3 c1, Vec3 c2, float distance )
{
    float dx = abs(c2.x - c1.x);
    float dy = abs(c2.y - c1.y);
    float dz = abs(c2.z - c1.z);
    if (dx > distance) return 0; // too far in x direction
    if (dy > distance) return 0; // too far in y direction
    if (dz > distance) return 0; // too far in z direction
    return 1; // we're within the cube
}
Actually you can optimize this further if you know more about your environment. For example, in an environment where there is a ground like a flight simulator or a first person shooter game, the horizontal axis is very much larger than the vertical axis. In such an environment, if two objects are far apart they are very likely separated more by the x and y axis rather than the z axis (in a first person shooter most objects share the same z axis). So if you first compare x and y you can return early from the function and avoid doing extra calculations:
float CBox::Within3DManhattanDistance( Vec3 c1, Vec3 c2, float distance )
{
    float dx = abs(c2.x - c1.x);
    if (dx > distance) return 0; // too far in x direction
    float dy = abs(c2.y - c1.y);
    if (dy > distance) return 0; // too far in y direction
    // since x and y distance are likely to be larger than
    // z distance most of the time we don't need to execute
    // the code below:
    float dz = abs(c2.z - c1.z);
    if (dz > distance) return 0; // too far in z direction
    return 1; // we're within the cube
}
Sorry, I didn't realize the function is used for sorting. You can still use Manhattan distance to get a very rough first sort:
float CBox::ManhattanDistance( Vec3 c1, Vec3 c2 )
{
    float dx = abs(c2.x - c1.x);
    float dy = abs(c2.y - c1.y);
    float dz = abs(c2.z - c1.z);
    return dx + dy + dz;
}
After the rough first sort you can then take the topmost results, say the top 10 closest players, and re-sort using proper distance calculations.
Here's an equation that might help you get rid of both sqrt and multiply:
max(|dx|, |dy|, |dz|) <= distance(dx,dy,dz) <= |dx| + |dy| + |dz|
This gets you a range estimate for the distance which pins it down to within a factor of 3 (the upper and lower bounds can differ by at most 3x). You can then sort on, say, the lower number. You then need to process the array until you reach an object which is 3x farther away than the first obscuring object. You are then guaranteed to not find any object that is closer later in the array.
By the way, sorting is overkill here. A more efficient way would be to make a series of buckets with different distance estimates, say [1-3], [3-9], [9-27], .... Then put each element in a bucket. Process the buckets from smallest to largest until you reach an obscuring object. Process 1 additional bucket just to be sure.
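A rough sketch of that bucketing using the max-coordinate lower bound from the inequality above (reusing the Vec3 type from the question; the function name is illustrative):

#include <algorithm>
#include <cmath>

// Bucket k collects objects whose cheap lower-bound estimate is below 3^(k+1),
// i.e. the [1-3], [3-9], [9-27], ... scheme described above (anything nearer
// than 1 also lands in bucket 0).
int bucketIndex(const Vec3& player, const Vec3& p) {
    float lower = std::max({std::fabs(p.x - player.x),
                            std::fabs(p.y - player.y),
                            std::fabs(p.z - player.z)});  // max(|dx|,|dy|,|dz|) <= true distance
    int k = 0;
    for (float edge = 3.0f; edge <= lower; edge *= 3.0f) ++k;
    return k;
}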
By the way, floating point multiply is pretty fast nowadays. I'm not sure you gain much by converting it to absolute value.
I'm disappointed that the great old mathematical tricks seem to be getting lost. Here is the answer you're asking for. Source is Paul Hsieh's excellent web site: http://www.azillionmonkeys.com/qed/sqroot.html . Note that you don't care about distance; you will do fine for your sort with square of distance, which will be much faster.
In 2D, we can get a crude approximation of the distance metric without a square root; the exact formula, distanceapprox(x, y), is given on the page linked above, and it will deviate from the true answer by at most about 8%. A similar derivation for 3 dimensions leads to a distanceapprox(x, y, z) with a maximum error of about 16%.
However, something that should be pointed out is that often the distance is only required for comparison purposes. For example, in the classical Mandelbrot set (z ← z^2 + c) calculation, the magnitude of a complex number is typically compared to a boundary radius of 2. In these cases, one can simply drop the square root by essentially squaring both sides of the comparison (since distances are always non-negative). That is to say:
√(Δx^2 + Δy^2) < d is equivalent to Δx^2 + Δy^2 < d^2, if d ≥ 0
I should also mention that Chapter 13.2 of Richard G. Lyons's "Understanding Digital Signal Processing" has an incredible collection of 2D distance algorithms (a.k.a complex number magnitude approximations). As one example:
Max = x > y ? x : y;
Min = x < y ? x : y;
if ( Min < 0.04142135 * Max )
    |V| = 0.99 * Max + 0.197 * Min;
else
    |V| = 0.84 * Max + 0.561 * Min;
which has a maximum error of 1.0% from the actual distance. The penalty of course is that you're doing a couple of branches; but even the "most accepted" answer to this question has at least three branches in it.
If you're serious about doing a super fast distance estimate to a specific precision, you could do so by writing your own simplified fsqrt() estimate using the same basic method as the compiler vendors do, but at a lower precision, by doing a fixed number of iterations. For example, you can eliminate the special-case handling for extremely small or large numbers, and/or reduce the number of Newton-Raphson iterations. This was the key strategy underlying the so-called "Quake 3" fast inverse square root implementation -- it's the classic Newton algorithm with exactly one iteration.
Do not assume that your fsqrt() implementation is slow without benchmarking it and/or reading the sources. Most modern fsqrt() library implementations are branchless and really damned fast. Here for example is an old IBM floating point fsqrt implementation. Premature optimization is, and always will be, the root of all evil.
Note that for 2 (non-negative) distances A and B, if sqrt(A) < sqrt(B), then A < B. Create a specialized version of Get3DDistance() (GetSqrOf3DDistance()) that does not call sqrt() and that is used only for the sortfunc().
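A sketch of that, reusing the types from the question (the new member and its name are hypothetical):

// Same as Get3dDistance() but without the sqrt(); the squared distance is
// monotone in the true distance, which is all the sort needs.
float CBox::GetSqrOf3DDistance( Vec3 c1, Vec3 c2 )
{
    float dx = c2.x - c1.x;
    float dy = c2.y - c1.y;
    float dz = c2.z - c1.z;
    return dx * dx + dy * dy + dz * dz;
}

bool sortfunc(CBox* a, CBox* b)
{
    return a->GetSqrOf3DDistance(Player.center, a->center) <
           b->GetSqrOf3DDistance(Player.center, b->center);
}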
If you worry about performance, you should also take care of the way you send your arguments:
float Get3dDistance( Vec3 c1, Vec3 c2 );
implies two copies of Vec3 structure. Use references instead:
float Get3dDistance( Vec3 const & c1, Vec3 const & c2 );
You could compare squares of distances instead of the actual distances, since d^2 = (x1-x2)^2 + (y1-y2)^2 + (z1-z2)^2. It doesn't get rid of the multiplication, but it does eliminate the square root operation.
How often are the input vectors updated and how often are they sorted? Depending on your design, it might be quite efficient to extend the "Vec3" class with a pre-calculated distance and sort on that instead. Especially relevant if your implementation allows you to use vectorized operations.
Other than that, see the flipcode.com article on approximating distance functions for a discussion on yet another approach.
Depending slightly on the number of points you are comparing against, what is below is pretty much guaranteed to get the list of points in approximate order, assuming all points change at every iteration.
1) Rewrite the array into a single list of Manhattan distances with
out[ i ] = abs( posn[ i ].x - player.x ) + abs( posn[ i ].y - player.y ) + abs( posn[ i ].z - player.z );
2) Now you can use radix sort on floating point numbers to order them.
Note that in practice this is going to be a lot faster than sorting the list of 3D positions, because it significantly reduces the memory bandwidth requirements of the sort operation, which is where most of the time will be spent and where unpredictable accesses and writes occur. This will run in O(N) time.
If many of the points are stationary at each iteration, there are far faster algorithms, like using KD-trees, although implementation is quite a bit more complex and it is much harder to get good memory access patterns.
If this is simply a value for sorting, then you can swap the sqrt() for an abs(). If you need to compare distances against set values, get the square of that value.
E.g. instead of checking sqrt(...) against a, you can compare abs(...) against a*a.
You may want to consider caching the distance between the player and the object as you calculate it, and then use that in your sortfunc. This would depend upon how many times your sort function looks at each object, so you might have to profile to be sure.
What I'm getting at is that your sort function might do something like this:
compare(a,b);
compare(a,c);
compare(a,d);
and you would calculate the distance between the player and 'a' every time.
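A sketch of the caching idea, assuming a hypothetical cachedDistSq field on CBox (and reusing the Vec3/Player names from the question):

#include <algorithm>
#include <vector>

// Compute each object's squared distance to the player once, then sort on
// the cached value; sqrt is not needed for ordering.
void sortByDistance(std::vector<CBox*>& boxes)
{
    for (CBox* b : boxes) {
        float dx = b->center.x - Player.center.x;
        float dy = b->center.y - Player.center.y;
        float dz = b->center.z - Player.center.z;
        b->cachedDistSq = dx * dx + dy * dy + dz * dz;
    }
    std::sort(boxes.begin(), boxes.end(),
              [](const CBox* a, const CBox* b) {
                  return a->cachedDistSq < b->cachedDistSq;
              });
}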
As others have mentioned, you can leave out the sqrt in this case.
If you could center your coordinates around the player, use spherical coordinates? Then you could sort by the radius.
That's a big if, though.
If your operation happens a lot, it might be worth putting it into some 3D data structure. You probably need the distance sorting to decide which object is visible, or for some similar task. In order of complexity you can use:
Uniform (cubic) subdivision
Divide the used space into cells, and assign the objects to the cells. Fast element access, neighbours are trivial, but empty cells take up a lot of space.
Quadtree
Given a threshold, divide the used space recursively into four quads until fewer than the threshold number of objects are inside. Logarithmic element access if objects don't stack upon each other, neighbours are not hard to find, a space-efficient solution.
Octree
Same as Quadtree, but divides into 8, optimal even if objects are above each other.
Kd tree
Given some heuristic cost function, and a threshold, split space into two halves with a plane where the cost function is minimal. (E.g.: the same number of objects on each side.) Repeat recursively until the threshold is reached. Always logarithmic, neighbours are harder to get, space-efficient (and works in all dimensions).
Using any of the above data structures, you can start from a position, and go from neighbour to neighbour to list the objects in increasing distance. You can stop at desired cut distance. You can also skip cells that cannot be seen from the camera.
For the distance check, you can do one of the above-mentioned routines, but ultimately they won't scale well with an increasing number of objects. These structures can be used to display data that takes hundreds of gigabytes of hard disc space.