Recognize pattern in 3D environment - c++

I'm currently developing a third-person building game (as a bachelor's thesis). I need to recognize constructed patterns so I can mark the corresponding structure as some building (so the player can start using that building).
I have the following rules:
3 types of building blocks (all based on a cube), with more types to come in the future
any block can be scaled up to a k-multiple in each axis (k = 20)
blocks can be rotated around any axis (but stay in the 3D grid)
Problem definition:
4 cubes of base size (1,1,1) in a 2x2 grid should be equivalent to 1
box of size (2,2,1), so all possible variants (differing mainly in rotation) should be evaluated as valid constructions of the pattern.
I expect that my patterns will be up to a 30x30x30 multiple of the base size.
For example, I'd like to recognize structure like this: (currently placed in level by hand)
Size is 21x21x22 and it is constructed from multiple objects.
As a limitation, I will bound the pattern search to start from an exact point (let's say the control console for that structure). The current limit is around a 50x50x50 multiple of the base cube. (The limit is subject to change based on the answers here.)
I searched for over 20 hours and found only papers on recognizing 3D structure from 2D images, or on recognizing an exact structure in 2D.
My problem is that with growing k (in each of X, Y, Z), the number of structures that should be accepted as a correct construction of the pattern grows exponentially.
Question: What algorithm (+ heuristic) should I use?
The following 3 images contain visualizations of structures that are considered correct variants of the pattern in image 4, so they should be found and accepted.
All of them have the same final shape (and the same material in the same places). I simplified the problem to a 2D shape, but extending it to 3D space is straightforward.
Thank you for all answers / comments.

If every building block whose axes can be scaled can be subdivided into smaller 1x1x1 building blocks, and if the following conditions adequately describe whether a built structure should match a given template:
Arbitrarily subdividing a template building block should not cause a mismatch
Arbitrarily "extruding through" any axis-aligned plane (imagine slicing through the world in some axis-aligned plane and then pulling the two halves apart, while new matter continuously fills in the gaps between points that were originally touching) should not cause a mismatch
Any other difference forces a mismatch
then it should be possible to efficiently recognise built instances of a template structure in time O(b + t^2), where b is the voxel volume of the built structure and t is the (typically very small; see below) voxel volume of the template. The basic idea is to transform any built structure into a canonical form in which any "extruded range" is compacted down to a single voxel in length.
Atomise then canonicalise
First, atomise all building blocks in the built structure down to their equivalent 1x1x1-building-block forms. The next step, compacting extruded ranges, is essentially the same algorithm as for eliminating duplicates from a sorted list, but in 3D:
For each axis d (X, Y, Z):
Set j = 1.
For i from 1 to the maximum co-ord in axis d:
Is the planar voxel "slice" of the built structure at co-ord i on axis d (e.g., if d is Y, then the set of voxels (x, i, z) for all x and z) identical to the immediately preceding "slice" at co-ord i-1?
If not, copy the slice at co-ord i on top of the slice at co-ord j, and then increment j.
This will produce a canonical version of the built structure in which all adjacent slices are different; typically this version will be much smaller, since all "long" features are collapsed to length 1. The role of j here is to point to the earliest location where we can put the next non-identical slice. Since j <= i always, we never have to worry about overwriting a slice we haven't processed yet. Also note that it doesn't matter in which order the directions are processed, the final result is the same.
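The compaction step above can be sketched in C++ as an out-of-place version of the same idea (the `Grid`, `compactX`, and `transpose` names are illustrative, not from any existing codebase):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Voxel grid of material ids; 0 = empty. Indexed as g[x][y][z].
using Grid = std::vector<std::vector<std::vector<int>>>;

// Compact runs of identical slices along the X axis: keep a slice only
// if it differs from the one before it (the "sorted-list dedup" idea).
Grid compactX(const Grid& g) {
    Grid out;
    for (std::size_t i = 0; i < g.size(); ++i)
        if (i == 0 || g[i] != g[i - 1])   // whole-slice comparison via vector==
            out.push_back(g[i]);
    return out;
}

// Cycle the axes so the X-compaction can be reused for Y and Z:
// transpose(g)[y][z][x] == g[x][y][z].
Grid transpose(const Grid& g) {
    std::size_t nx = g.size(), ny = g[0].size(), nz = g[0][0].size();
    Grid t(ny, std::vector<std::vector<int>>(nz, std::vector<int>(nx)));
    for (std::size_t x = 0; x < nx; ++x)
        for (std::size_t y = 0; y < ny; ++y)
            for (std::size_t z = 0; z < nz; ++z)
                t[y][z][x] = g[x][y][z];
    return t;
}

// Canonical form: compact along each of the three axes once.
Grid canonicalise(Grid g) {
    for (int axis = 0; axis < 3; ++axis)
        g = transpose(compactX(g));   // compact current X, then cycle axes
    return g;
}
```

For example, a solid 3x2x2 block of one material collapses to a single 1x1x1 voxel, as expected for an extruded cube.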
The same canonicalisation process should have been applied to each template structure at the outset as preprocessing. Now the two canonical forms (template and built structure) can be compared directly via a brute-force voxel-by-voxel comparison (basically like strstr(), but looking for a cuboid inside a cuboid instead of a string inside a string). Any rotations or flips that should be considered valid transformations should also be tried at this point.
Features and caveats
Given the template
X.X
XXX
it will recognise e.g. the following as matches,
X.X
X.X XXX XX
X.X XXXXXXX
XXX XXXXXXX
but not e.g.
X..
X.X
X.X
XXX
But if you want to detect such U-shaped structures with different-length legs, you need only supply an additional template:
X..
X.X
XXX
This template will match all U-shaped structures with different-length legs (but not U-shaped structures with equal-length legs!). Depending on which rotations you consider, you may also need the mirror image.
Structures whose AABBs intersect won't be handled correctly. These can be separated easily enough in a prior step.
Interestingly enough, this algorithm is capable of recognising structures that comprise more than one connected component. For example, the template
X.X.X
will recognise three equal-size cuboids in a row (or column, if rotations are allowed).

I would write a function to check whether a 1x1x1 area has a specific shape (full block, half block + rotation, edge block + rotation) and then define and check the pattern in this manner.

Related

What's the difference between "BB regression algorithms used in R-CNN variants" vs "BB in YOLO" localization techniques?

Question:
What's the difference between the bounding box (BB) produced by "BB regression algorithms in region-based object detectors" and the "bounding box in single-shot detectors"? And can they be used interchangeably; if not, why?
While studying the variants of the R-CNN and YOLO algorithms for object detection, I came across the two major techniques for performing object detection, i.e., region-based (R-CNN) and sliding-window based (YOLO).
Both use different variants (from complicated to simple), but in the end they are just localizing objects in the image using bounding boxes! Below I focus only on localization (assuming classification is happening), since that is more relevant to the question, and briefly explain my understanding:
Region-based:
Here, we let the neural network predict continuous variables (the BB coordinates) and refer to that as regression.
The regression that is defined (which is not linear at all) is just a CNN or one of its variants (all layers differentiable); the outputs are four values (r, c, h, w), where (r, c) specify the position of the corner and (h, w) the height and width of the BB.
To train this NN, a smooth L1 loss is used to learn a precise BB by penalizing outputs of the NN that are very different from the labeled (r, c, h, w) in the training set.
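For concreteness, the smooth L1 penalty applied to each coordinate residual looks like this (a minimal sketch; real implementations vectorize it over all four outputs and all boxes):

```cpp
#include <cassert>
#include <cmath>

// Smooth L1 (Huber-like) loss used for bounding-box regression in
// Fast/Faster R-CNN: quadratic near zero for stable gradients, linear
// for large residuals so outlier boxes don't dominate training.
double smoothL1(double x) {
    double a = std::fabs(x);
    return a < 1.0 ? 0.5 * x * x : a - 0.5;
}
```

The total box loss is then the sum of `smoothL1` over the four residuals (predicted minus labeled r, c, h, w).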
Sliding-window (convolutionally implemented!) based:
First, we divide the image into, say, a 19x19 grid of cells.
An object is assigned to a grid cell by selecting the object's midpoint and assigning the object to whichever single grid cell contains that midpoint. So each object, even if it spans multiple grid cells, is assigned to only one of the 19x19 grid cells.
Now, you take the coordinates of that grid cell and calculate the precise BB (bx, by, bh, bw) for the object.
(bx, by, bh, bw) are relative to the grid cell: bx and by are the center point, and bh and bw are the height and width of the precise BB, specified as fractions of the grid cell's size (so bh and bw can be > 1).
There are multiple ways of calculating the precise BB, as specified in the paper.
Both Algorithms:
output precise bounding boxes!
work in supervised learning settings, using labeled datasets where the labels are bounding boxes (manually marked by an annotator using tools like labelImg) stored for each image in a JSON/XML file.
I am trying to understand the two localization techniques on a more abstract level (as well as to get an in-depth idea of both techniques!) to get more clarity on:
in what sense are they different?, &
why were the two created; I mean, what are the failure/success points of one versus the other?,
and can they be used interchangeably? If not, then why?
Please feel free to correct me if I am wrong somewhere; feedback is highly appreciated! Citing a particular section of a research paper would be even more rewarding!
The essential differences are that two-stage, Faster R-CNN-like detectors are more accurate, while single-stage, YOLO/SSD-like detectors are faster.
In two-stage architectures, the first stage usually does region proposal, while the second stage does classification and more accurate localization. You can think of the first stage as similar to the single-stage architectures, the difference being that the region proposal only separates "object" from "background", while the single-stage detector distinguishes between all object classes. More explicitly, in the first stage, also in a sliding-window-like fashion, an RPN says whether an object is present or not, and if there is one, roughly gives the region (bounding box) in which it lies. This region is used by the second stage for classification and bounding-box regression (for better localization): it first pools the relevant features from the proposed region, and then goes through a Fast R-CNN-like architecture (which does the classification + regression).
Regarding your question about interchanging between them - why would you want to do so? Usually you would choose an architecture by your most pressing needs (e.g. latency/power/accuracy), and you wouldn't want to interchange between them unless there's some sophisticated idea which will help you somehow.

How do you store voxel data?

I've been looking online and I'm impressed by the capabilities of voxel data, especially for terrain building and manipulation. The problem is that none of the sites I visited clearly explains what voxels are or how to use/implement them. All I find is that voxels are volumetric data. Please provide a more complete answer: what is volumetric data? It may seem like a simple question, but I'm still unsure.
Also, how would you implement voxel data? (I aim to implement this in a C++ program.) What sort of data type would you use to store the voxel data so that the contents can be modified at run time as fast as possible? I have looked online and couldn't find anything that explained how to store the data: lists of objects, arrays, etc.
How do you use voxels?
EDIT:
Since I'm just beginning with voxels, I'll probably start by using them only to model simple objects, but I will eventually use them for rendering terrain and world objects.
In essence, voxels are a three-dimensional extension of pixels ("volumetric pixels"), and they can indeed be used to represent volumetric data.
What is volumetric data
Mathematically, volumetric data can be seen as a three-dimensional function F(x,y,z). In many applications this function is a scalar function, i.e., it has one scalar value at each point (x,y,z) in space. For instance, in medical applications this could be the density of certain tissues. To represent this digitally, one common approach is to simply make slices of the data: imagine images in the (X,Y)-plane, and shifting the z-value to obtain a stack of images. If the slices are close to each other, the images can be displayed in a video sequence, as for instance seen on the wiki page for MRI scans (https://upload.wikimedia.org/wikipedia/commons/transcoded/4/44/Structural_MRI_animation.ogv/Structural_MRI_animation.ogv.360p.webm). As you can see, each point in space has one scalar value which is represented as a grayscale.
Instead of slices or a video, one can also represent this data using voxels. Instead of dividing a 2D plane in a regular grid of pixels, we now divide a 3D area in a regular grid of voxels. Again, a scalar value can be given to each voxel. However, visualizing this is not as trivial: whereas we could just give a gray value to pixels, this does not work for voxels (we would only see the colors of the box itself, not of its interior). In fact, this problem is caused by the fact that we live in a 3D world: we can look at a 2D image from a third dimension and completely observe it; but we cannot look at a 3D voxel space and observe it completely as we have no 4th dimension to look from (unless you count time as a 4th dimension, i.e., creating a video).
So we can only look at parts of the data. One way, as indicated above, is to make slices. Another way is to look at so-called "iso-surfaces": we create surfaces in the 3D space for which each point has the same scalar value. For a medical scan, this allows one to extract, for instance, the brain from the volumetric data (not just as a slice, but as a 3D model).
Finally, note that surfaces (meshes, terrains, ...) are not volumetric; they are 2D shapes bent, twisted, stretched and deformed to be embedded in the 3D space. Ideally they represent the border of a volumetric object, but not necessarily (e.g., terrain data will probably not be a closed mesh). A way to represent surfaces using volumetric data is by making sure the surface is an iso-surface of some function. As an example: F(x,y,z) = x^2 + y^2 + z^2 - R^2 can represent a sphere with radius R, centered around the origin. For all points (x',y',z') on the sphere, F(x',y',z') = 0. Moreover, for points inside the sphere, F < 0, and for points outside of the sphere, F > 0.
A way to "construct" such a function is by creating a distance map, i.e., creating volumetric data such that every point F(x,y,z) indicates the distance to the surface. Of course, the surface is the collection of all the points for which the distance is 0 (so, again, the iso-surface with value 0 just as with the sphere above).
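As a small illustration of the sphere example and its distance-map counterpart (function names are mine):

```cpp
#include <cassert>
#include <cmath>

// Implicit function for a sphere of radius R centred at the origin:
// F < 0 inside, F == 0 on the surface, F > 0 outside.
double sphereImplicit(double x, double y, double z, double R) {
    return x * x + y * y + z * z - R * R;
}

// The corresponding signed *distance* to the surface; a distance map
// stores exactly this value at every voxel, and the surface is its
// zero iso-surface.
double sphereDistance(double x, double y, double z, double R) {
    return std::sqrt(x * x + y * y + z * z) - R;
}
```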
How to implement
As mentioned by others, this indeed depends on the usage. In essence, the data can be given in a 3D matrix. However, this is huge! If you want the resolution doubled, you need 8x as much storage, so in general this is not an efficient solution. This will work for smaller examples, but does not scale very well.
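A minimal dense-grid sketch in C++: a flat `std::vector` indexed as x + nx*(y + ny*z) (all names illustrative). It is simple and cache-friendly, but its footprint is exactly the nx*ny*nz scaling described above.

```cpp
#include <cassert>
#include <cstdint>
#include <cstddef>
#include <vector>

// Dense voxel grid stored as one flat array. Memory grows as
// nx*ny*nz (8x per doubling of resolution), which is why octrees
// are preferred at scale.
class VoxelGrid {
public:
    VoxelGrid(int nx, int ny, int nz)
        : nx_(nx), ny_(ny), data_(std::size_t(nx) * ny * nz, 0) {}

    // One byte per voxel; swap in a wider type for richer data.
    std::uint8_t& at(int x, int y, int z) {
        return data_[std::size_t(x) + nx_ * (std::size_t(y) + ny_ * z)];
    }

private:
    int nx_, ny_;
    std::vector<std::uint8_t> data_;
};
```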
An octree structure is, afaik, the most common structure to store this. Many implementations and optimizations for octrees exist, so have a look at what can be (re)used. As pointed out by Andreas Kahler, sparse voxel octrees are a recent approach.
Octrees allow easier navigation to neighbouring cells, parent cells, child cells, ... (I am assuming the concepts of octrees, or quadtrees in 2D, are known.) However, if many leaf cells are located at the finest resolutions, this data structure comes with a huge overhead! So, is this better than a 3D array? It somewhat depends on what volumetric data you want to work with, and what operations you want to perform.
If the data is used to represent surfaces, octrees will in general be much better: as stated before, surfaces are not really volumetric, hence will not require many voxels to have relevant data (hence: "sparse" octrees). Referring back to the distance maps, the only relevant data are the points having value 0. The other points can have any value, but they do not matter (in some cases the sign is still considered, to denote "interior" and "exterior", but the value itself is not required if only the surface is needed).
How to use
If by "use", you are wondering how to render them, then you can have a look at "marching cubes" and its optimizations. MC will create a triangle mesh from volumetric data, to be rendered in any classical way. Instead of translating to triangles, you can also look at volume rendering to render a "3D sampled data set" (i.e., voxels) as such (https://en.wikipedia.org/wiki/Volume_rendering). I have to admit that I am not that familiar with volume rendering, so I'll leave it at just the wiki-link for now.
Voxels are just 3D pixels, i.e. 3D space regularly subdivided into blocks.
How do you use them? It really depends on what you are trying to do. A ray casting terrain game engine? A medical volume renderer? Something completely different?
Plain 3D arrays might be the best for you, but it is memory intensive. As BWG pointed out, octree is another popular alternative. Search for Sparse Voxel Octrees for a more recent approach.
In popular usage during the 90's and 00's, 'voxel' could mean somewhat different things, which is probably one reason you have been finding it hard to find consistent information. In technical imaging literature, it means 3D volume element. Oftentimes, though, it is used to describe what is somewhat-more-clearly termed a high-detail raycasting engine (as opposed to the low-detail raycasting engine in Doom or Wolfenstein). A popular multi-part tutorial lives in the Flipcode archives. Also check out this brief one by Jacco.
There are many old demos you can find out there that should run under emulation. They are good for inspiration and dissection, but tend to use a lot of assembly code.
You should think carefully about what you want to support with your engine: car-racing, flying, 3D objects, planets, etc., as these constraints can change the implementation of your engine. Oftentimes, there is not a data structure, per se, but the terrain heightfield is represented procedurally by functions. Otherwise, you can use an image as a heightfield. For performance, when rendering to the screen, think about level-of-detail, in other words, how many actual pixels will be taken up by the rendered element. This will determine how much sampling you do of the heightfield. Once you get something working, you can think about ways you can blend pixels over time and screen space to make them look better, while doing as little rendering as possible.

'Stable' multi-dimensional scaling algorithm

I have a wireless mesh network of nodes, each of which is capable of reporting its 'distance' to its neighbors, measured in (simplified) signal strength to them. The nodes are geographically in 3D space, but because of radio interference, the distances between nodes need not be geometrically consistent. I.e., given nodes A, B and C, the distance between A and B might be 10, between A and C also 10, yet between B and C 100.
What I want to do is visualize the logical network layout in terms of connectedness of nodes, i.e., include the logical distance between nodes in the visualization.
So far my research has shown that multidimensional scaling (MDS) is designed for exactly this sort of thing. Given that my data can be directly expressed as a 2D distance matrix, it's even a simpler form of the more general MDS.
Now, there seem to be many MDS algorithms, see e.g. http://homepage.tudelft.nl/19j49/Matlab_Toolbox_for_Dimensionality_Reduction.html and http://tapkee.lisitsyn.me/ . I need to do this in C++ and I'm hoping I can use a ready-made component, i.e. not have to re-implement an algo from a paper. So, I thought this: https://sites.google.com/site/simpmatrix/ would be the ticket. And it works, but:
The layout is not stable, i.e. every time the algorithm is re-run, the position of the nodes changes (see differences between image 1 and 2 below - this is from having been run twice, without any further changes). This is due to the initialization matrix (which contains the initial location of each node, which the algorithm then iteratively corrects) that is passed to this algorithm - I pass an empty one and then the implementation derives a random one. In general, the layout does approach the layout I expected from the given input data. Furthermore, between different runs, the direction of nodes (clockwise or counterclockwise) can change. See image 3 below.
The 'solution' I thought was obvious, was to pass a stable default initialization matrix. But when I put all nodes initially in the same place, they're not moved at all; when I put them on one axis (node 0 at 0,0 ; node 1 at 1,0 ; node 2 at 2,0 etc.), they are moved along that axis only. (see image 4 below). The relative distances between them are OK, though.
So it seems like this algorithm only changes distance between nodes, but doesn't change their location.
Thanks for reading this far - my questions are (I'd be happy to get just one or a few of them answered as each of them might give me a clue as to what direction to continue in):
Where can I find more information on the properties of each of the many MDS algorithms?
Is there an algorithm that derives the complete location of each node in a network, without having to pass an initial position for each node?
Is there a solid way to estimate the location of each point so that the algorithm can then correctly scale the distance between them? I have no geographic location of each of these nodes, that is the whole point of this exercise.
Are there any algorithms to keep the 'angle' at which the network is derived constant between runs?
If all else fails, my next option is going to be to use the algorithm I mentioned above, increase the number of iterations to keep the variability between runs at around a few pixels (I'd have to experiment with how many iterations that would take), then 'rotate' each node around node 0 to, for example, align nodes 0 and 1 on a horizontal line from left to right; that way, I would 'correct' the location of the points after their relative distances have been determined by the MDS algorithm. I would have to correct for the order of connected nodes (clockwise or counterclockwise) around each node as well. This might become hairy quite quickly.
Obviously I'd prefer a stable algorithmic solution - increasing iterations to smooth out the randomness is not very reliable.
Thanks.
EDIT: I was referred to cs.stackexchange.com and some comments have been made there; for algorithmic suggestions, please see https://cs.stackexchange.com/questions/18439/stable-multi-dimensional-scaling-algorithm .
Image 1 - with random initialization matrix:
Image 2 - after running with same input data, rotated when compared to 1:
Image 3 - same as previous 2, but nodes 1-3 are in another direction:
Image 4 - with the initial layout of the nodes on one line, their position on the y axis isn't changed:
Most scaling algorithms effectively set "springs" between nodes, where the resting length of the spring is the desired length of the edge. They then attempt to minimize the energy of the system of springs. When you initialize all the nodes on top of each other though, the amount of energy released when any one node is moved is the same in every direction. So the gradient of energy with respect to each node's position is zero, so the algorithm leaves the node where it is. Similarly if you start them all in a straight line, the gradient is always along that line, so the nodes are only ever moved along it.
(That's a flawed explanation in many respects, but it works for an intuition)
Try initializing the nodes to lie on the unit circle, on a grid or in any other fashion such that they aren't all co-linear. Assuming the library algorithm's update scheme is deterministic, that should give you reproducible visualizations and avoid degeneracy conditions.
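An evenly spaced unit-circle initializer is a few lines to sketch (a hypothetical helper, not part of the SimpleMatrix API): no three of the points are collinear for n >= 3, so the gradient is never confined to a line, and every run starts from the same configuration.

```cpp
#include <cassert>
#include <cmath>
#include <utility>
#include <vector>

// Deterministic, non-degenerate starting layout for an MDS/spring
// solver: place the n nodes evenly on the unit circle.
std::vector<std::pair<double, double>> circleInit(int n) {
    const double kPi = 3.14159265358979323846;
    std::vector<std::pair<double, double>> pos(n);
    for (int i = 0; i < n; ++i) {
        double a = 2.0 * kPi * i / n;   // evenly spaced angles
        pos[i] = {std::cos(a), std::sin(a)};
    }
    return pos;
}
```

Feeding this as the initialization matrix (instead of an empty one) should make the solver's output reproducible, provided its update scheme is deterministic.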
If the library is non-deterministic, either find another library which is deterministic, or open up the source code and replace the randomness generator with a PRNG initialized with a fixed seed. I'd recommend the former option though, as other, more advanced libraries should allow you to set edges you want to "ignore" too.
I have read the code of the "SimpleMatrix" MDS library and found that it uses a random permutation matrix to decide the order of points. After fixing the permutation order (just use srand(12345) instead of srand(time(0))), the result for the same data is unchanged.
Obviously there's no exact solution in general to this problem; with just 4 nodes ABCD and distances AB=BC=AC=AD=BD=1 CD=10 you cannot clearly draw a suitable 2D diagram (and not even a 3D one).
What those algorithms do is just place springs between the nodes and then simulate repulsion/attraction (depending on whether the spring is shorter or longer than the prescribed distance), probably also adding spatial friction to avoid resonance and explosion.
To keep a "stable" diagram, just build a solution once and then only update the distances, re-using the current positions from the previous solution as the starting point. Picking two fixed nodes and aligning them seems a good idea to prevent slow drift, but I'd say that spring forces never end up creating rotational momentum, and thus I'd expect that just scaling and centering the solution should be enough anyway.

Collision detection between two general hexahedrons

I have 2 six-faced solids. The only guarantee is that each has 8 vertex3f's (vertices with x, y and z components). Given this, how can I find out whether they are colliding?
I'm hesitant to answer after you deleted your last question while I was trying to answer it and made me lose my post. Please don't do that again. Anyway:
Not necessarily optimal, but obviously correct, based on constructive solid geometry:
Represent the two solids each as an intersection of 6 half-spaces. Note that this depends on convexity but nothing else, and extends to solids with more sides. My preferred representation for halfspaces is to choose a point on each surface (for example, a vertex) and the outward-pointing unit normal vector to that surface.
Intersect the two solids by treating all 12 half-spaces as the defining half-spaces for a new solid. (This step is purely conceptual and might not involve any actual code.)
Compute the surface/edge representation of the new solid and check that it's non-empty. One approach to doing this is to initially populate your surface/edge representation with one surface for each of the 12 half-spaces with edges outside the bounds of the 2 solids, then intersect its edges with each of the remaining 11 half-spaces.
It sounds like a bit of work, but there's nothing complicated. Only dot products, cross products (to get the initial representation), and projections.
It seems I'm too dumb to quit.
Consider this: if any edge of solid 1 intersects any face of solid 2, you have a collision. That's not quite comprehensive, because there are cases where one solid is fully contained in the other, which you can test for by determining whether the center of either is contained in the other.
Checking edge-face intersection works like this:
Define the edge as a vector starting from one vertex and running to the other. Take note of the length, L, of the edge.
Define the plane segments by a vertex, a normal, an in-plane basis, and the positions of the remaining vertices in that basis.
Find the intersection of the line and the plane. In the usual formulation you will be able to get both the length along the line, and the in-plane coordinates of the intersection in the basis that you have chosen.
The intersection must lie at a length in [0, L] along the edge, and must lie inside the figure in the plane. That last part is a little harder, but has a well-known general solution.
This will work. For elegance, I rather prefer R..'s solution. If you need speed... well, you'll just have to try them and see.
Suppose one of your hexahedrons H1 has vertices (x_1, y_1, z_1), (x_2, y_2, z_2), .... Find the maximum and minimum in each coordinate: x_min = min(x_1, x_2, ...), x_max = max(x_1, x_2,...), and so on. Do the same for the other hexahedron H2.
If the interval [x_min(H1), x_max(H1)] and the interval [x_min(H2), x_max(H2)] do not intersect (that is, either x_max(H1) < x_min(H2) or x_max(H2) < x_min(H1)), then the hexahedrons cannot possibly collide. Repeat this for the y and z coordinates. Qualitatively, this is like looking at the shadow of each hexahedron on the x-axis. If they don't overlap, the polyhedrons can't collide.
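The per-axis interval ("shadow") test can be sketched as follows (struct and function names are illustrative):

```cpp
#include <algorithm>
#include <cassert>
#include <cfloat>

struct Aabb { double min[3], max[3]; };

// Axis-aligned bounding box of a hexahedron's 8 vertices:
// per-axis min and max.
Aabb boundsOf(const double verts[8][3]) {
    Aabb b;
    for (int a = 0; a < 3; ++a) { b.min[a] = DBL_MAX; b.max[a] = -DBL_MAX; }
    for (int v = 0; v < 8; ++v)
        for (int a = 0; a < 3; ++a) {
            b.min[a] = std::min(b.min[a], verts[v][a]);
            b.max[a] = std::max(b.max[a], verts[v][a]);
        }
    return b;
}

// Cheap rejection: the boxes can overlap only if their intervals
// overlap on all three axes.
bool aabbOverlap(const Aabb& p, const Aabb& q) {
    for (int a = 0; a < 3; ++a)
        if (p.max[a] < q.min[a] || q.max[a] < p.min[a]) return false;
    return true;
}
```

A `false` here guarantees no collision; a `true` only means the precise test below is still needed.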
If any of the intervals do overlap, you'll have to move on to more precise collision detection. This will be a lot trickier. The obvious brute force method is to check if any of the edges of one intersects any of the faces of the other, but I imagine you can do a lot better than that.
The brute-force way to check whether an edge intersects a face: first you'd find the intersection of the line defined by the edge with the plane defined by the face (see Wikipedia, for example). Then you have to check whether that point is actually on the edge and on the face. The edge is easy: just see whether the coordinates are between the coordinates of the two vertices defining the edge. The face is trickier, especially with no guarantee that it's convex. In the general case, you'll have to see which side of the half-plane defined by each edge the point is on. If it's on the inside half-plane for all of them, it's inside the face. I unfortunately don't have time to type all that up now, but I bet googling could aid you there. But of course, this is all brute force, and there may be a better way. (And dmckee points out a special case that this doesn't handle.)
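The line-plane step can be sketched as a segment-plane intersection; the in-face containment check described above is a separate step (names are mine):

```cpp
#include <cassert>
#include <cmath>

struct V3 { double x, y, z; };
V3 sub(V3 a, V3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
double dot(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Intersect the segment p0->p1 with the plane through point q with
// normal n. Returns true and sets t in [0,1] if the segment crosses
// the plane; the hit point is then p0 + t*(p1 - p0).
bool segmentPlane(V3 p0, V3 p1, V3 q, V3 n, double& t) {
    V3 d = sub(p1, p0);
    double denom = dot(d, n);
    if (std::fabs(denom) < 1e-12) return false;   // segment parallel to plane
    t = dot(sub(q, p0), n) / denom;
    return t >= 0.0 && t <= 1.0;
}
```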

Select all points in a matrix within 30m of another point

So if you look at my other posts, it's no surprise I'm building a robot that can collect data in a forest, and stick it on a map. We have algorithms that can detect tree centers and trunk diameters and can stick them on a cartesian XY plane.
We're planning to use certain 'key' trees as natural landmarks for localizing the robot, using triangulation and trilateration among other methods, but programming this and keeping data straight and efficient is getting difficult using just Matlab.
Is there a technique for sub-setting an array or matrix of points? Say I have 1000 trees stored over 1km (1000m), is there a way to say, select only points within 30m radius of my current location and work only with those?
I would just use a GIS, but I'm doing this in Matlab and I'm unaware of any GIS plugins for Matlab.
I forgot to mention, this code is going online, meaning it's going on a robot for real-time execution. I don't know if, as the map grows to several miles, using a different data structure will help or if calculating every distance to a random point is what a spatial database is going to do anyway.
I'm thinking of mirroring the array of trees into two arrays, one sorted by X and the other by Y, then bubble sorting to determine the 30m range in each. I'd do the same for both arrays, X and Y, and then have a third cross-link table that selects the individual values. But I don't know what that's called or how to program it, and I'm sure someone already has, so I don't want to reinvent the wheel.
Cartesian Plane
GIS
You are looking for a spatial database like a quadtree or a kd-tree. I found two kd-tree implementations here and here, but didn't find any quadtree implementations for Matlab.
The simple solution of calculating all the distances and scanning through seems to run almost instantaneously:
lim = 1;
num_trees = 1000;
trees = randn(num_trees,2); %# list of trees as Nx2 matrix
cur = randn(1,2); %# current point as 1x2 vector
dists = hypot(trees(:,1) - cur(1), trees(:,2) - cur(2)); %# distance from all trees to current point
nearby = trees((dists <= lim),:); %# find the nearby trees, pull them from the original matrix
On a 1.2 GHz machine, I can process 1 million trees (1 MTree?) in < 0.4 seconds.
Are you running the Matlab code directly on the robot? Are you using the Real-Time Workshop or something? If you need to translate this to C, you can replace hypot with sqr(trees[i].x - pos.x) + sqr(trees[i].y - pos.y), and replace the limit check with < lim^2. If you really only need to deal with 1 KTree, I don't know that it's worth your while to implement a more complicated data structure.
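That C translation of the distance check might look like this: comparing squared distances against lim^2 avoids the square root entirely (names are illustrative):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct Pt { double x, y; };

// Brute-force range query without sqrt: a tree is within `lim` metres
// of `cur` iff its squared distance is <= lim*lim.
std::vector<std::size_t> withinRadius(const std::vector<Pt>& trees,
                                      Pt cur, double lim) {
    std::vector<std::size_t> idx;
    double lim2 = lim * lim;
    for (std::size_t i = 0; i < trees.size(); ++i) {
        double dx = trees[i].x - cur.x, dy = trees[i].y - cur.y;
        if (dx * dx + dy * dy <= lim2) idx.push_back(i);
    }
    return idx;
}
```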
You can transform your Cartesian coordinates into polar coordinates with CART2POL. Then selecting points inside a certain radius will be straightforward.
[THETA,RHO] = cart2pol(X-X0,Y-Y0);
selected = RHO < 30;
where X0, Y0 are coordinates of the current location.
My guess is that trees are distributed roughly evenly through the forest. If that is the case, simply use 30x30 (or 15x15) grid blocks as hash keys into a closed hash table. Look up the keys for all blocks intersecting the search circle, and check all hash entries starting at that key until one is flagged as the last in its "bucket."
0---------10---------20---------30--------40---------50----- address # line
(0,0) (0,30) (0,60) (30,0) (30,30) (30,60) hash key values
(1,3) (10,15) (3,46) (24,9.) (23,65.) (15,55.) tree coordinates + "." flag
For example, to get the trees in (0,0)…(30,30), map (0,0) to the address 0 and read entries (1,3), (10,15), reject (3,46) because it's out of bounds, read (24,9), and stop because it's flagged as the last tree in that sector.
To get trees in (0,60)…(30,90), map (0,60) to address 20. Skip (24, 9), read (23, 65), and stop as it's last.
This will be quite memory efficient as it avoids storing pointers, which would otherwise be of considerable size relative to the actual data. Nevertheless, closed hashing requires leaving some empty space.
The illustration isn't "to scale" as in reality there would be space for several entries between the hash key markers. So you shouldn't have to skip any entries unless there are more trees than average in a local preceding sector.
This does use hash collisions to your advantage, so it's not as random as a hash function typically is. (Not every entry corresponds to a distinct hash value.) However, as dense sections of forest will often be adjacent, you should randomize the mapping of sectors to "buckets," so a given dense sector will hopefully overflow into a less dense one, or the next, or the next.
Additionally, there is the issue of empty sectors and terminating iteration. You could insert a dummy tree into each sector to mark it as empty, or some other simple hack.
Sorry for the long explanation. This kind of thing is simpler to implement than to document. But the performance and the footprint can be excellent.
Use some sort of spatially partitioned data structure. A simple solution would be to simply create a 2d array of lists containing all objects within a 30m x 30m region. Worst case is then that you only need to compare against the objects in four of those lists.
Plenty of more complex (and potentially beneficial) solutions could also be used - something like bi-trees are a bit more complex to implement (not by much though), but could get more optimum performance (especially in cases where the density of objects varies considerably).
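The simple cell-bucket idea can be sketched like this (assumes non-negative coordinates, since integer division truncates toward zero; all names are illustrative):

```cpp
#include <cassert>
#include <cstddef>
#include <unordered_map>
#include <vector>

// Spatial hash of 30m x 30m cells: each point goes into the cell
// containing it, and a radius-30 query only needs to visit the cells
// overlapping the query circle's bounding square.
struct GridIndex {
    static constexpr int kCell = 30;
    std::unordered_map<long long, std::vector<int>> cells;

    static long long key(int cx, int cy) {
        return (long long)cx * 1000003 + cy;   // collision-free for small grids
    }

    void insert(int id, double x, double y) {
        cells[key(int(x / kCell), int(y / kCell))].push_back(id);
    }

    // Candidate set for a query at (x, y): the 3x3 block of cells
    // around the query's cell. An exact distance check on these
    // candidates is still required afterwards.
    std::vector<int> candidates(double x, double y) const {
        std::vector<int> out;
        int cx = int(x / kCell), cy = int(y / kCell);
        for (int dx = -1; dx <= 1; ++dx)
            for (int dy = -1; dy <= 1; ++dy) {
                auto it = cells.find(key(cx + dx, cy + dy));
                if (it != cells.end())
                    out.insert(out.end(), it->second.begin(), it->second.end());
            }
        return out;
    }
};
```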
You could look at the Voronoi diagram support in Matlab:
http://www.mathworks.com/access/helpdesk/help/techdoc/ref/voronoi.html
If you base the Voronoi polygons on your key trees, and cluster the neighbouring trees into those polygons, that would partition your search space by proximity (finding the enclosing polygon for a given non-key point is fast), but ultimately you're going to get down to computing key-to-non-key distances by Pythagoras or trig and comparing them.
For a few thousand points (trees), brute force might be fast enough if you have a reasonable processor on board. Compute the distance of every other tree from tree n, then select those within 30m. This is the same as having all trees in the same Voronoi polygon.
It's been a few years since I worked in GIS, but I found the following useful: 'Computational Geometry in C', Joseph O'Rourke, ISBN 0-521-44592-2, paperback.