I'm currently implementing the paper by Revelles, Urena and Lastra, "An Efficient Parametric Algorithm for Octree Traversal". In Ray - Octree intersection algorithms someone implemented it and posted their code. My implementation should be the same, except that I used some vector classes for the computation.
However, using this octree only the upper right part of the image is rendered; for the rest of the image the octree isn't traversed at all. The check whether to traverse or not happens in the following method:
bool Octnode::intersect( Ray r, SurfaceData *sd )
{
unsigned int a = 0;
v3d o = r.origin();
v3d d = r.direction();
if ( r.direction()[0] < 0. ) {
o[0] = _size[0] - r.origin()[0];
d[0] = -r.direction()[0];
a |= 4;
}
if ( r.direction()[1] < 0. ) {
o[1] = _size[1] - r.origin()[1];
d[1] = -r.direction()[1];
a |= 2;
}
if ( r.direction()[2] < 0. ) {
o[2] = _size[2] - r.origin()[2];
d[2] = -r.direction()[2];
a |= 1;
}
v3d t0 = ( _min - o ) / d;
v3d t1 = ( _max - o ) / d;
scalar t = std::numeric_limits<double>::max();
// traversal -- if any -- starts here
if ( t0.max() < t1.min() ) {
return processSubtree( t0, t1, r, &t, sd, a );
} else {
return false;
}
}
[Edit] The above method implements the function
void ray_parameter( octree *oct, ray r )
from the paper. As C. Urena pointed out there is an error in the paper that causes the traversal to be incorrect. Unfortunately traversal is skipped before this error could come into play.
In the Google group that can be found by following C. Urena's link, the size of an octree node seems to be computed differently. I did:
_size = _max - _min;
versus
_size = ( _max - _min ) / 2.;
in the Google group. I'll test that and post another update. [/Edit]
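For reference, the mirroring step can also be written in terms of the node's center instead of its size, which sidesteps the question of how _size is defined. This is just a sketch: center is not a member of my class, it is simply the midpoint of _min and _max.
v3d center = ( _min + _max ) / 2.;
if ( r.direction()[0] < 0. ) {
    o[0] = 2. * center[0] - o[0]; // reflect the origin about the node's center plane
    d[0] = -r.direction()[0];
    a |= 4;
}
// the y and z axes are handled the same way with masks 2 and 1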
[Edit 2] Applying the fix that Carlos mentioned and reducing the size by half brought me this far:
The spheres still aren't rendered completely, but at least not all rays for the upper left quarter are rejected anymore. [/Edit 2]
[Edit 3] Using different data sets I get seemingly better results; it looks like I'll have to investigate some other parts of the code.
[/Edit 3]
I have no time for a detailed review of your code, but perhaps you should check for an error in the original paper which may also be present in your code: you can see it described here: http://lsi.ugr.es/curena/inves/wscg00/ -- there's a pointer to a Google group with the discussion.
Hope this helps,
Carlos.
Background
For a computer vision assignment I've been given the task of implementing RANSAC to fit a plane to a given set of points and filter that input list of points by the consensus model using Eigenvalue Decomposition.
I have spent days trying to tweak my code to achieve correct plane filtering behavior on an input set of test data. All you algorithm junkies, this one's for you.
My implementation uses a vector of a ROS data structure (Point32) as inputs, but this is transparent to the problem at hand.
What I've done
When I test for expected plane filtering behavior (correct elimination of outliers >95-99% of the time), I see in my implementation that I only eliminate outliers and extract the main plane of a test point cloud ~30-40% of the time. Other times, I filter a plane that ~somewhat~ fits the expected model, but leaves a lot of obvious outliers inside the consensus model. The fact that this works at all suggests that I'm doing some things right, and some things wrong.
I've tweaked my constants (distance threshold, max iterations, estimated % points fit) to London and back, and I only see small differences in the consensus model.
Implementation (long)
const float RANSAC_ESTIMATED_FIT_POINTS = .80f; // % points estimated to fit the model
const size_t RANSAC_MAX_ITER = 500; // max RANSAC iterations
const size_t RANDOM_MAX_TRIES = 100; // max RANSAC random point tries per iteration
const float RANSAC_THRESHOLD = 0.0000001f; // threshold to determine what constitutes a close point to a plane
/*
Helper to randomly select an item from a STL container, from stackoverflow.
*/
template <typename I>
I random_element(I begin, I end)
{
const unsigned long n = std::distance(begin, end);
const unsigned long divisor = ((long)RAND_MAX + 1) / n;
unsigned long k;
do { k = std::rand() / divisor; } while (k >= n);
std::advance(begin, k);
return begin;
}
bool run_RANSAC(const std::vector<Point32> all_points,
Vector3f *out_p0, Vector3f *out_n,
std::vector<Point32> *out_inlier_points)
{
for (size_t iterations = 0; iterations < RANSAC_MAX_ITER; iterations ++)
{
Point32 p1,p2,p3;
Vector3f v1;
Vector3f v2;
Vector3f n_hat; // keep track of the current plane model
Vector3f P0;
std::vector<Point32> points_agree; // list of points that agree with model within
bool found = false;
// try RANDOM_MAX_TRIES times to get random 3 points
for (size_t tries = 0; tries < RANDOM_MAX_TRIES; tries ++) // try to get unique random points 100 times
{
// get 3 random points
p1 = *random_element(all_points.begin(), all_points.end());
p2 = *random_element(all_points.begin(), all_points.end());
p3 = *random_element(all_points.begin(), all_points.end());
v1 = Vector3f (p2.x - p1.x,
p2.y - p1.y,
p2.z - p1.z ); //Vector P1P2
v2 = Vector3f (p3.x - p1.x,
p3.y - p1.y,
p3.z - p1.z); //Vector P1P3
if (std::abs(v1.dot(v2)) != 1.f) // dot product != 1 means we've found 3 nonlinear points
{
found = true;
break;
}
} // end try random element loop
if (!found) // could not find 3 random nonlinear points in 100 tries, go to next iteration
{
ROS_ERROR("run_RANSAC(): Could not find 3 random nonlinear points in %ld tries, going on to iteration %ld", RANDOM_MAX_TRIES, iterations + 1);
continue;
}
// nonlinear random points exist past here
// fit a plane to p1, p2, p3
Vector3f n = v1.cross(v2); // calculate normal of plane
n_hat = n / n.norm();
P0 = Vector3f(p1.x, p1.y, p1.z);
// at some point, the original p0, p1, p2 will be iterated over and added to agreed points
// loop over all points, find points that are inliers to plane
for (std::vector<Point32>::const_iterator it = all_points.begin();
it != all_points.end(); it++)
{
Vector3f M (it->x - P0.x(),
it->y - P0.y(),
it->z - P0.z()); // M = (P - P0)
float d = M.dot(n_hat); // calculate distance
if (d <= RANSAC_THRESHOLD)
{ // add to inlier points list
points_agree.push_back(*it);
}
} // end points loop
ROS_DEBUG("run_RANSAC() POINTS AGREED: %li=%f, RANSAC_ESTIMATED_FIT_POINTS: %f", points_agree.size(),
(float) points_agree.size() / all_points.size(), RANSAC_ESTIMATED_FIT_POINTS);
if (((float) points_agree.size()) / all_points.size() > RANSAC_ESTIMATED_FIT_POINTS)
{ // if points agree / total points > estimated % points fitting
// fit to points_agree.size() points
size_t n = points_agree.size();
Vector3f sum(0.0f, 0.0f, 0.0f);
for (std::vector<Point32>::iterator iter = points_agree.begin();
iter != points_agree.end(); iter++)
{
sum += Vector3f(iter->x, iter->y, iter->z);
}
Vector3f centroid = sum / n; // calculate centroid
Eigen::MatrixXf M(points_agree.size(), 3);
for (size_t row = 0; row < points_agree.size(); row++)
{ // build distance vector matrix
Vector3f point(points_agree[row].x,
points_agree[row].y,
points_agree[row].z);
for (size_t col = 0; col < 3; col ++)
{
M(row, col) = point(col) - centroid(col);
}
}
Matrix3f covariance_matrix = M.transpose() * M;
Eigen::EigenSolver<Matrix3f> eigen_solver;
eigen_solver.compute(covariance_matrix);
Vector3f eigen_values = eigen_solver.eigenvalues().real();
Matrix3f eigen_vectors = eigen_solver.eigenvectors().real();
// find eigenvalue that is closest to 0
size_t idx;
// find minimum eigenvalue, get index
float closest_eval = eigen_values.cwiseAbs().minCoeff(&idx);
// find corresponding eigenvector
Vector3f closest_evec = eigen_vectors.col(idx);
std::stringstream logstr;
logstr << "Closest eigenvalue : " << closest_eval << std::endl <<
"Corresponding eigenvector : " << std::endl << closest_evec << std::endl <<
"Centroid : " << std::endl << centroid;
ROS_DEBUG("run_RANSAC(): %s", logstr.str().c_str());
Vector3f all_fitted_n_hat = closest_evec / closest_evec.norm();
// invoke copy constructors for outbound
*out_n = Vector3f(all_fitted_n_hat);
*out_p0 = Vector3f(centroid);
*out_inlier_points = std::vector<Point32>(points_agree);
ROS_DEBUG("run_RANSAC():: Success, total_size: %li, inlier_size: %li, %% agreement %f",
all_points.size(), out_inlier_points->size(), (float) out_inlier_points->size() / all_points.size());
return true;
}
} // end iterations loop
return false;
}
Pseudocode from Wikipedia for reference:
Given:
data – a set of observed data points
model – a model that can be fitted to data points
n – minimum number of data points required to fit the model
k – maximum number of iterations allowed in the algorithm
t – threshold value to determine when a data point fits a model
d – number of close data points required to assert that a model fits well to data
Return:
bestfit – model parameters which best fit the data (or nul if no good model is found)
iterations = 0
bestfit = nul
besterr = something really large
while iterations < k {
maybeinliers = n randomly selected values from data
maybemodel = model parameters fitted to maybeinliers
alsoinliers = empty set
for every point in data not in maybeinliers {
if point fits maybemodel with an error smaller than t
add point to alsoinliers
}
if the number of elements in alsoinliers is > d {
% this implies that we may have found a good model
% now test how good it is
bettermodel = model parameters fitted to all points in maybeinliers and alsoinliers
thiserr = a measure of how well model fits these points
if thiserr < besterr {
bestfit = bettermodel
besterr = thiserr
}
}
increment iterations
}
return bestfit
The only difference between my implementation and the Wikipedia pseudocode is that I don't do the following part:
thiserr = a measure of how well model fits these points
if thiserr < besterr {
bestfit = bettermodel
besterr = thiserr
}
My guess is that I need to do something along the lines of comparing closest_eval with some sentinel value for the expected minimum eigenvalue of a plane that fits the model well. However, this was not covered in class and I have no idea where to start figuring out what's wrong.
Heh, it's funny how thinking about how to present the problem to others can actually solve the problem I'm having.
Solved by simply implementing this with a best-fit eigenvalue that starts at std::numeric_limits<float>::max(). This is because the plane extracted on any given iteration of RANSAC is not guaranteed to be THE best-fit plane and may still have a large consensus error among its constituent points, so I need to keep the best fit seen so far and converge on it across iterations. Whoops. A sketch of that bookkeeping follows the quoted pseudocode below.
thiserr = a measure of how well model fits these points
if thiserr < besterr {
bestfit = bettermodel
besterr = thiserr
}
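In code, the change amounts to the bookkeeping below. This is only a sketch: fit_once stands in for the body of my existing iteration loop (sampling points, building the consensus set and running the eigen decomposition), and the real version would also stash n_hat, the centroid and the inlier list alongside the error before copying the winner to the out parameters.
#include <cstddef>
#include <functional>
#include <limits>

// Sketch only: fit_once runs one RANSAC iteration and, when a consensus set was
// found, reports the residual eigenvalue (closest_eval above) through error_out.
bool keep_best_fit(std::size_t max_iterations,
                   const std::function<bool(float *error_out)> &fit_once,
                   float *best_error_out)
{
    float best_error = std::numeric_limits<float>::max(); // "something really large"
    bool found = false;
    for (std::size_t i = 0; i < max_iterations; ++i)
    {
        float this_error = 0.0f;
        if (!fit_once(&this_error))  // no consensus this iteration, keep going
            continue;
        if (this_error < best_error) // strictly better than anything seen so far
        {
            best_error = this_error;
            found = true;
        }
    }
    *best_error_out = best_error;
    return found;
}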
I'm implementing a recursive ray tracer with reflection. The problem is that shadows are missing from reflections: areas that are in shadow show up lit when seen in a reflective surface, and I don't know why. The shadow aspect of the ray tracer works as expected when the reflective code is commented out, so I don't think that's the issue.
Vec Camera::shade(Vec accumulator,
Ray ray,
vector<Surface*>surfaces,
vector<Light*>lights,
int recursion_depth) {
if (recursion_depth == 0) return Vec(0,0,0);
double closestIntersection = numeric_limits<double>::max();
Surface* cs;
for(unsigned int i=0; i < surfaces.size(); i++){
Surface* s = surfaces[i];
double intersection = s->intersection(ray);
if (intersection > EPSILON && intersection < closestIntersection) {
closestIntersection = intersection;
cs = s;
}
}
if (closestIntersection < numeric_limits<double>::max()) {
Point intersectionPoint = ray.origin + ray.dir*closestIntersection;
Vec intersectionNormal = cs->calculateIntersectionNormal(intersectionPoint);
Material materialToUse = cs->material;
for (unsigned int j=0; j<lights.size(); j++) {
Light* light = lights[j];
Vec dirToLight = (light->origin - intersectionPoint).norm();
Vec dirToCamera = (this->eye - intersectionPoint).norm();
bool visible = true;
for (unsigned int k=0; k<surfaces.size(); k++) {
Surface* s = surfaces[k];
double t = s->intersection(Ray(intersectionPoint, dirToLight));
if (t > EPSILON && t < closestIntersection) {
visible = false;
break;
}
}
if (visible) {
accumulator = accumulator + this->color(dirToLight, intersectionNormal,
intersectionPoint, dirToCamera, light, materialToUse);
}
}
//Reflective ray
//Vec r = d − 2(d · n)n
if (materialToUse.isReflective()) {
Vec d = ray.dir;
Vec r_v = d-intersectionNormal*2*intersectionNormal.dot(d);
Ray r(intersectionPoint+intersectionNormal*EPSILON, r_v);
//km is the ideal specular component of the material, and mult is component-wise multiplication
return this->shade(accumulator, r, surfaces, lights, recursion_depth--).mult(materialToUse.km);
}
else
return accumulator;
}
else
return accumulator;
}
Vec Camera::color(Vec dirToLight,
Vec intersectionNormal,
Point intersectionPoint,
Vec dirToCamera,
Light* light,
Material material) {
//kd I max(0, n · l) + ks I max(0, n · h)^p
Vec I(light->r, light->g, light->b);
double dist = (intersectionPoint-light->origin).magnitude();
I = I/(dist*dist);
Vec h = (dirToLight + dirToCamera)/((dirToLight + dirToCamera).magnitude());
Vec kd = material.kd;
Vec ks = material.ks;
Vec diffuse = kd*I*fmax(0.0, intersectionNormal.dot(dirToLight));
Vec specular = ks*I*pow(fmax(0.0, intersectionNormal.dot(h)), material.r);
return diffuse+specular;
}
I've provided my output and the expected output. The lighting looks a bit different because mine was originally an .exr file and the other is a .png, but I've drawn arrows in my output where the surface should be reflecting shadows but isn't.
A couple of things to check:
The visibility check in the inner for loop might be returning a false positive (i.e. it's calculating that all surfaces[k] are not closer to lights[j] than your intersection point, for some j). This would cause it to incorrectly add that light[j]'s contribution to your accumulator. This would result in missing shadows, but it ought to happen everywhere, including your top recursion level, whereas you're only seeing missing shadows in reflections.
There might be an error in the color() method that's returning some wrong value that's then being added into the accumulator. Although without seeing that code, it's hard to know for sure.
You're using postfix decrement on recursion_depth inside the materialToUse.isReflective() check. Can you verify that the decremented value of recursion_depth is actually being passed to the shade() method call? (And if not, try changing to prefix decrement.)
return this->shade(... recursion_depth--)...
EDIT: Can you also verify that recursion_depth is just a parameter to the shade() method, i.e. that there isn't a global / static recursion_depth anywhere. Assuming that there isn't (and there shouldn't be), you can change the call above to
return this->shade(... recursion_depth - 1)...
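A minimal, self-contained illustration of the pitfall (nothing here is taken from your code):
#include <iostream>

void f(int depth) { std::cout << "f received " << depth << "\n"; }

int main() {
    int depth = 5;
    f(depth--);   // prints "f received 5": the value is passed before the decrement happens
    f(depth - 1); // prints "f received 3" and leaves depth itself unchanged
}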
EDIT 2: A couple of other things to look at:
In color(), I don't understand why you're including the direction to the camera in your calculations. The color of intersections other than the first one, per pixel, ought to be independent of where the camera is. But I doubt that's the cause of this issue.
Verify that return this->shade(accumulator, r, surfaces, lights, recursion_depth--).mult(materialToUse.km); is doing the right thing with that component-wise multiplication. Why are you multiplying by materialToUse.km?
Verify that materialToUse.km is constant per surface (i.e. it doesn't change over the geometry of the surface, the depth of iteration, or anything else).
Break up the statement return this->shade(accumulator, r, surfaces, lights, recursion_depth--).mult(materialToUse.km); into its component objects, so you can see the intermediate results in the debugger:
Vec reflectedColor = this->shade(accumulator, r, surfaces, lights, recursion_depth - 1);
Vec multipliedColor = reflectedColor.mult(materialToUse.km);
return multipliedColor;
Determine the image (x, y) coordinates of one of your problematic pixels. Set a conditional breakpoint that's triggered when rendering that pixel, and then step through your shade() method. Assuming you pick the pixel pointed to by the bottom right arrow in your example image, you ought to see one recursion into shade(). Stepping through that first recursion, you'll see that your code is incorrectly adding the light contribution from the floor, when it should be in shadow.
To answer my own question: I was not checking that t should be less than the distance from the intersection point to the light position.
Instead of:
if (t > EPSILON && t < closestIntersection) {
visible = false;
break;
}
it should be:
if (t > EPSILON && t < max_t) {
visible = false;
break;
}
where max_t is
double max_t = dirToLight.magnitude();
before dirToLight has been normalized.
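Putting the pieces together, the inner visibility loop now looks roughly like this (a sketch using the same types as above; toLight is the unnormalized vector, so its magnitude is the distance from the intersection point to the light):
Vec toLight = light->origin - intersectionPoint;
double max_t = toLight.magnitude();  // distance from the intersection point to the light
Vec dirToLight = toLight.norm();
bool visible = true;
for (unsigned int k = 0; k < surfaces.size(); k++) {
    double t = surfaces[k]->intersection(Ray(intersectionPoint, dirToLight));
    if (t > EPSILON && t < max_t) {  // only occluders between the point and the light cast a shadow
        visible = false;
        break;
    }
}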
I'm currently lost trying to figure out how to implement an equivalent version of MATLAB's hilbert() function in C++. I'm very new to signal processing, but, ultimately, I would like to figure out a way to phase shift any given signal by 90 degrees. I was attempting to follow the method suggested in this question on MATLAB central, which appears to work based on tests using GNU Octave.
I have what I believe to be a working implementation of both the FFT and the inverse FFT, and I have tried implementing the method described in the answer to this post in order to compute the analytic signal: apply the FFT, set the upper half of the array to zero, and then apply the inverse FFT. However, based on graphs I made of the output from a test, there must be a problem with the way I am computing the analytic signal.
What would be a suitable way to implement the hilbert() function from MATLAB in C++ given a working implementation of FFT and inverse FFT? Is there a better way of achieving the 90 degree phase shift?
Checking the MATLAB implementation, the following should return the same result as the hilbert() function. You'll obviously have to modify it to match your specific implementation. I'm assuming a signal class of some sort exists.
signal hilbert(const signal &x)
{
int limit1, limit2;
signal xfreq = fft(x);
if (x.numel % 2 == 0) {
limit1 = x.numel/2;
limit2 = limit1 + 1;
} else {
limit1 = (x.numel + 1)/2;
limit2 = limit1;
}
// multiply the first half by 2 (except the first element)
for (int i = 1; i < limit1; ++i) {
xfreq[i].real *= 2;
xfreq[i].imag *= 2;
}
for (int i = limit2; i < x.numel; ++i) {
xfreq[i].real = 0;
xfreq[i].imag = 0;
}
return ifft(xfreq);
}
Edit: Forgot to set the second half to zeros.
Edit2: Fixed a logical error. I coded the following up in MATLAB which matches hilbert.
function h = hil(x)
n = numel(x);
if (mod(n,2) == 0)
limit1 = n/2;
limit2 = limit1 + 2;
else
limit1 = (n+1)/2;
limit2 = limit1+1;
end
xfreq = fft(x);
for i = 2:limit1
xfreq(i) = xfreq(i)*2;
end
for i = limit2:n
xfreq(i) = 0;
end
h = ifft(xfreq);
end
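As for the 90 degree phase shift you're ultimately after: hilbert() returns the analytic signal x + i*H{x}, so its imaginary part is the Hilbert transform of x, i.e. a copy of x with every frequency component phase-shifted by 90 degrees. A rough usage sketch against the same assumed signal class (members real/imag and numel as above):
signal phase_shift_90(const signal &x)
{
    signal analytic = hilbert(x);
    signal shifted = x;  // reuse x just to get a container of the right size
    for (int i = 0; i < x.numel; ++i) {
        shifted[i].real = analytic[i].imag;  // Im{analytic} is the phase-shifted copy of x
        shifted[i].imag = 0;
    }
    return shifted;
}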
I've been working on this for several weeks but have been unable to get my algorithm working properly and I'm at my wit's end. Here's an illustration of what I have achieved:
If everything were working I would expect a perfect circle/oval at the end.
My sample points (in white) are recalculated every time a new control point (in yellow) is added. At 4 control points everything looks perfect; again as I add a 5th on top of the 1st things look alright, but then on the 6th it starts to go off to the side and on the 7th it jumps up to the origin!
Below I'll post my code, where calculateWeightForPointI contains the actual algorithm. And for reference, here is the information I'm trying to follow. I'd be so grateful if someone could take a look for me.
void updateCurve(const std::vector<glm::vec3>& controls, std::vector<glm::vec3>& samples)
{
int subCurveOrder = 4; // = k = I want to break my curve into cubics
// De Boor, 1st attempt
if(controls.size() >= subCurveOrder)
{
createKnotVector(subCurveOrder, controls.size());
samples.clear();
for(int steps=0; steps<=20; steps++)
{
// use steps to get a 0-1 range value for progression along the curve
// then get that value into the range [k-1, n+1]
// k-1 = subCurveOrder-1
// n+1 = always the number of total control points
float t = ( steps / 20.0f ) * ( controls.size() - (subCurveOrder-1) ) + subCurveOrder-1;
glm::vec3 newPoint(0,0,0);
for(int i=1; i <= controls.size(); i++)
{
float weightForControl = calculateWeightForPointI(i, subCurveOrder, controls.size(), t);
newPoint += weightForControl * controls.at(i-1);
}
samples.push_back(newPoint);
}
}
}
//i = the weight we're looking for, i should go from 1 to n+1, where n+1 is equal to the total number of control points.
//k = curve order = power/degree +1. eg, to break whole curve into cubics use a curve order of 4
//cps = number of total control points
//t = current step/interp value
float calculateWeightForPointI( int i, int k, int cps, float t )
{
//test if we've reached the bottom of the recursive call
if( k == 1 )
{
if( t >= knot(i) && t < knot(i+1) )
return 1;
else
return 0;
}
float numeratorA = ( t - knot(i) );
float denominatorA = ( knot(i + k-1) - knot(i) );
float numeratorB = ( knot(i + k) - t );
float denominatorB = ( knot(i + k) - knot(i + 1) );
float subweightA = 0;
float subweightB = 0;
if( denominatorA != 0 )
subweightA = numeratorA / denominatorA * calculateWeightForPointI(i, k-1, cps, t);
if( denominatorB != 0 )
subweightB = numeratorB / denominatorB * calculateWeightForPointI(i+1, k-1, cps, t);
return subweightA + subweightB;
}
//returns the knot value at the passed in index
//if i = 1 and we want Xi then we have to remember to index with i-1
float knot(int indexForKnot)
{
// When getting the index for the knot function i remember to subtract 1 from i because of the difference caused by us counting from i=1 to n+1 and indexing a vector from 0
return knotVector.at(indexForKnot-1);
}
//calculate the whole knot vector
void createKnotVector(int curveOrderK, int numControlPoints)
{
int knotSize = curveOrderK + numControlPoints;
for(int count = 0; count < knotSize; count++)
{
knotVector.push_back(count);
}
}
Your algorithm seems to work for any inputs I tried it on. Your problem might be that a control point is not where it is supposed to be, or that they haven't been initialized properly. It looks like there are two control points half the height below the bottom left corner.
In the above image I would like to know how to find the shortest possible way to get to the asteroid. The ship can wrap around the screen, so the closest way is going through the top corner instead of turning around and going back. I am not looking for code, just the pseudocode of how to get to it.
Any help is appreciated
The game Asteroids is played on the surface of a torus.
Well, since you can wrap around any edge of the screen, there are always 4 straight lines between the asteroid and the ship (up and left, up and right, down and left, and down and right). I would just calculate the length of each and take the smallest result.
int dx1 = abs(ship_x - asteroid_x);
int dx2 = screen_width - dx1;
int dy1 = abs(ship_y - asteroid_y);
int dy2 = screen_height - dy1;
// Now calculate the pseudo-distances as Pete suggests:
int pseudo1 = (dx1 * dx1) + (dy1 * dy1);
int pseudo2 = (dx2 * dx2) + (dy1 * dy1);
int pseudo3 = (dx1 * dx1) + (dy2 * dy2);
int pseudo4 = (dx2 * dx2) + (dy2 * dy2);
This shows how to calculate the various distances involved. There is a little complication around mapping each one to the appropriate direction.
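One way to handle that mapping, as a sketch built on the variables above (step_x/step_y of +1 mean moving toward larger coordinates, -1 toward smaller ones; both names are made up for this example):
// Pick the smallest pseudo-distance, then decide whether each axis wraps.
int best = pseudo1;
if (pseudo2 < best) best = pseudo2;
if (pseudo3 < best) best = pseudo3;
if (pseudo4 < best) best = pseudo4;
bool wrap_x = (best == pseudo2 || best == pseudo4);  // these two were built from dx2
bool wrap_y = (best == pseudo3 || best == pseudo4);  // these two were built from dy2

// Direct: head straight toward the asteroid on that axis.
// Wrapped: head the other way and cross the screen edge instead.
int toward_x = (asteroid_x >= ship_x) ? 1 : -1;
int toward_y = (asteroid_y >= ship_y) ? 1 : -1;
int step_x = wrap_x ? -toward_x : toward_x;
int step_y = wrap_y ? -toward_y : toward_y;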
I would recommend the A* search algorithm
#include <iostream>
template<typename Scalar>
struct vector2d {
Scalar x;
Scalar y;
};
template<typename Scalar>
struct position2d {
Scalar x;
Scalar y;
};
template<typename S>
S absolute( S in ) {
if (in < S())
return -in;
return in;
}
template<typename S>
S ShortestPathScalar( S ship, S rock, S wrap ) {
S direct = rock-ship;
S indirect = (direct < S()) ? direct + wrap : direct - wrap; // wrapping shifts the move by a full screen length in the opposite direction
if (absolute( direct ) > absolute( indirect ) ) {
return indirect;
}
return direct;
}
template<typename S>
vector2d<S> ShortestPath( position2d<S> ship, position2d<S> rock, position2d<S> wrap ) {
vector2d<S> retval;
retval.x = ShortestPathScalar( ship.x, rock.x, wrap.x );
retval.y = ShortestPathScalar( ship.y, rock.y, wrap.y );
return retval;
}
int main() {
position2d<int> world = {1000, 1000};
position2d<int> rock = {10, 10};
position2d<int> ship = {500, 900};
vector2d<int> path = ShortestPath( ship, rock, world );
std::cout << "(" << path.x << "," << path.y << ")\n";
}
No point in doing crap with squaring stuff in a simple universe like that.
Scalar can be any type that supports a < b and default-constructs to zero, like double or int or long long.
Note that copy/pasting the above code and handing it in as an assignment at the level of course where you are playing with that problem will get you looked at strangely. However, the algorithm should be pretty easy to follow.
Find the sphere's position relative to the ship.
To avoid decimals in my example, let the range of x and y be [0 ... 511], where 511 == 0 when wrapped.
Let's make the middle the origin, so subtract vec2(256, 256) from both the sphere's and the ship's position:
sphere.position(-255, 255) = sphere.position(1 - 256, 511 - 256)
ship.position(255, -255) = ship.position(511 - 256, 1 - 256)
firstDistance(-510, 510) = sphere.position(-255, 255) - ship.position(255, -255)
wrappedPosition(254, -254) = wrapNewPositionToScreenBounds(firstDistance(-510, 510)) // underflow / overflow using an origin offset of 256
secondDistance(1, -1) = ship.position(255, -255) - wrappedPosition(254, -254)
If you need the shortest way to the asteroid, you don't need to calculate the actual distance to it. If I understand you correctly, you need the direction of the shortest way, not the length of the shortest path.
This, I think, is computationally the least expensive method to do that:
Let the meteor's position be (Mx, My) and the ship position (Sx, Sy).
The width of the viewport is W and the height is H. Now,
dx = Mx - Sx,
dy = My - Sy.
if abs(dx) > W/2 (which is 256 in this case) your ship needs to go LEFT,
if abs(dx) < W/2 your ship needs to go RIGHT.
IMPORTANT - Invert your result if dx was negative. (Thanks to @Toad for pointing this out!)
Similarly, if
abs(dy) > H/2 ship goes UP,
abs(dy) < H/2 ship goes DOWN.
Like with dx, flip your result if dy is -ve.
This takes wrapping into account and should work for every case. No squares or Pythagoras involved, so I doubt it can be done any cheaper. Also, if you HAVE to find the actual shortest distance, you'll only have to apply it once now (since you already know which one of the four possible paths you need to take). @Peter's post gives an elegant way to do that while taking wrapping into account.
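A small sketch of the horizontal rule above in code (the names and the -1/+1 convention are just for illustration; -1 means LEFT, +1 means RIGHT, and the vertical rule is identical with dy and H):
#include <cstdlib>

int horizontal_step(int Sx, int Mx, int W)
{
    int dx = Mx - Sx;
    int step = (std::abs(dx) > W / 2) ? -1 : 1;  // more than half a screen away: wrap instead of going direct
    return (dx < 0) ? -step : step;              // invert the result when dx is negative
}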