Dimensionality reduction issues in the self-organizing map (SOM) - data mining

A self-organizing map is claimed to be able to visualize/cluster high-dimensional data in a smaller-dimensional space. I have some difficulty understanding this statement.
Consider a six-dimensional data set: the codebook/reference vectors are also six-dimensional. According to the SOM algorithm, the updates of these reference vectors are also carried out in the six-dimensional vector space. If we are considering a two-dimensional map, how should I understand the mapping between the six-dimensional data space and the two-dimensional map space?

The map between the N-dimensional input space and the 2D SOM space is a non-linear projection that preserves as much of the topology as possible.
This means that information about distances and angles is lost in the process, but that the proximity relationship between points is preserved (i.e. two points which are close to one another in the input space should also be close in the SOM space).
I got my best insight into "what does a SOM do?" by using it on the 3D RGB color space: the work of the SOM can easily be visualized in that case, and it should help you grasp the concept.
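To make the six-dimensional question above concrete, here is a minimal sketch (not a full SOM implementation, and with arbitrary illustrative grid size, learning rate and neighborhood radius) of one training step on a 2D grid of 6-D codebook vectors:

```cpp
#include <array>
#include <cmath>
#include <vector>

// Minimal sketch of one SOM training step: a 2D grid of nodes, each holding
// a 6-D codebook vector. Grid size, learning rate and neighborhood radius
// are illustrative values, not recommendations.
constexpr int GRID_W = 10, GRID_H = 10, DIM = 6;
using Vec = std::array<double, DIM>;

double sqDist(const Vec& a, const Vec& b) {
    double s = 0.0;
    for (int i = 0; i < DIM; ++i) s += (a[i] - b[i]) * (a[i] - b[i]);
    return s;
}

void trainStep(std::vector<Vec>& codebook, const Vec& x,
               double learningRate, double radius) {
    // 1. Find the best-matching unit (BMU) in the 6-D data space.
    int bmu = 0;
    for (int i = 1; i < GRID_W * GRID_H; ++i)
        if (sqDist(codebook[i], x) < sqDist(codebook[bmu], x)) bmu = i;
    int bx = bmu % GRID_W, by = bmu / GRID_W;

    // 2. Update every node: the neighborhood is measured on the 2D grid,
    //    but the update itself happens in the 6-D data space.
    for (int y = 0; y < GRID_H; ++y) {
        for (int xg = 0; xg < GRID_W; ++xg) {
            double gridDist2 = (xg - bx) * (xg - bx) + (y - by) * (y - by);
            double h = std::exp(-gridDist2 / (2.0 * radius * radius));
            Vec& w = codebook[y * GRID_W + xg];
            for (int d = 0; d < DIM; ++d)
                w[d] += learningRate * h * (x[d] - w[d]);
        }
    }
}
```

The key point is that the low-dimensional "map" lives only in the grid coordinates of the nodes; the codebook vectors themselves never leave the six-dimensional input space.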

The 2D self-organizing map (SOM) distributes the input vectors over a 2D plane. Mathematically, the SOM is a 3D matrix whose third dimension has the same length as your input vectors. To visualize the SOM it is usual to compute the U-matrix, which gives, for each neuron of the SOM, the mean Euclidean distance between that neuron and its neighbors.
The resulting 2D matrix allows the visualization of the high-dimensional space on a 2D plane: high values form barriers between clusters, which appear as deep blue valleys in a U-matrix plot. In the example figures, the U-matrix comes from training on a 3D data set, shown alongside the same U-matrix values mapped back into the original 3D space.
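For completeness, here is a minimal sketch of the U-matrix computation described above, assuming the codebook vectors are stored row-major on a rectangular grid; only the 4-neighborhood is used, for brevity:

```cpp
#include <array>
#include <cmath>
#include <vector>

// Sketch: U-matrix of a GRID_W x GRID_H SOM whose nodes hold DIM-dimensional
// codebook vectors (row-major storage). Uses the 4-neighborhood for brevity.
constexpr int GRID_W = 10, GRID_H = 10, DIM = 6;
using Vec = std::array<double, DIM>;

double dist(const Vec& a, const Vec& b) {
    double s = 0.0;
    for (int i = 0; i < DIM; ++i) s += (a[i] - b[i]) * (a[i] - b[i]);
    return std::sqrt(s);
}

std::vector<double> uMatrix(const std::vector<Vec>& codebook) {
    std::vector<double> u(GRID_W * GRID_H, 0.0);
    const int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};
    for (int y = 0; y < GRID_H; ++y) {
        for (int x = 0; x < GRID_W; ++x) {
            double sum = 0.0; int n = 0;
            for (int k = 0; k < 4; ++k) {
                int nx = x + dx[k], ny = y + dy[k];
                if (nx < 0 || nx >= GRID_W || ny < 0 || ny >= GRID_H) continue;
                sum += dist(codebook[y * GRID_W + x], codebook[ny * GRID_W + nx]);
                ++n;
            }
            u[y * GRID_W + x] = sum / n;  // mean distance to the grid neighbors
        }
    }
    return u;
}
```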

You cannot really visualize it, but it is possible to use it, so you can try to think of it as a discrete function that can map, for example, a 4-D vector space to a 1-D vector. Most importantly, such a function is usually some sort of recursion. An L-system, for example, uses recursion or repetition a lot. A better description of these monster curves can be found at Nick's spatial index hilbert curve blog.
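The Hilbert curve itself is somewhat involved to code; as a simpler illustration of such a discrete multi-dimensional-to-1D mapping, here is a sketch of the related Morton (Z-order) code, which interleaves the bits of the coordinates (its locality guarantees are weaker than the Hilbert curve's):

```cpp
#include <cstdint>

// Sketch: Morton (Z-order) code, a simpler relative of the Hilbert curve.
// It maps a 2D coordinate to a single 1D index by interleaving the bits of
// x and y; nearby cells tend to get nearby indices.
uint64_t morton2D(uint32_t x, uint32_t y) {
    uint64_t code = 0;
    for (int bit = 0; bit < 32; ++bit) {
        code |= (uint64_t)((x >> bit) & 1u) << (2 * bit);
        code |= (uint64_t)((y >> bit) & 1u) << (2 * bit + 1);
    }
    return code;
}
// Example: morton2D(3, 5) interleaves binary 011 and 101 into 100111 = 39.
```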

Related

Why do 100x100 images form a 10'000-dimension space?

While reading a paper, I came across the claim that, when viewed as vectors of pixel values, face images are extremely high-dimensional. For example, 100x100 images form a 10'000-dimension space.
How is that possible? I don't seem to understand it.
A vector has only one dimension, so if you convert the 2D array into 1D (known as Flatten in neural-network terms), the result you get is a vector of 100*100 = 10,000 values. Each pixel value becomes one coordinate, so every image is a single point in a 10,000-dimensional space. So, basically, you are collapsing a 2D quantity into 1D.
If you need more info on this topic, you can learn the concept of Flatten from YouTube; it will help you get a pictorial understanding of the concept.
Hope this helps clear your doubt.
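As a minimal sketch of what flattening does: each pixel becomes one coordinate, so a 100x100 grayscale image becomes a single vector, i.e. one point in a 10,000-dimensional space.

```cpp
#include <vector>

// Sketch: flatten a 100x100 grayscale image (2D array of pixel values)
// into one 10,000-dimensional vector. Each pixel is one coordinate, so the
// whole image is a single point in a 10,000-dimensional space.
std::vector<double> flatten(const std::vector<std::vector<double>>& image) {
    std::vector<double> v;
    v.reserve(image.size() * image[0].size());   // 100 * 100 = 10,000
    for (const auto& row : image)
        for (double pixel : row)
            v.push_back(pixel);
    return v;
}
```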

Searching for geometric shapes on a Cartesian plane by coordinates

I have an algorithmic problem on a Cartesian plane. I need to efficiently search for geometric shapes that intersect with a given point. There are several shapes (rectangle, circle, triangle and polygon), but those are not important, because determining the actual point inclusion is not a problem here; I will implement those on my own. The problem lies in determining which shapes need to be checked for inclusion of the given point. Iterating through all of my shapes on the plane and running the point-inclusion method on each one of them is inefficient, as the number of shape instances will be quite large. My first idea was to divide the plane into segments (the plane is finite, but too large for any kind of dense array), and when adding a shape to the database I would determine which segments it intersects and save them within the shape object. Then, when a point is given for inclusion verification, I would only need to determine the segment in which the point is located and then verify the inclusion only against objects which intersect that segment.
Is that the way to go? I don't know if the method I described is optimal or if I am missing something. Any help would be appreciated.
Thanks in advance.
P.S.: I will be writing this in C++. That is not really relevant, as it is more of an algorithmic problem, but I wanted to mention it in case someone was curious.
The gridding approach can be used here.
See the plane as a raster image where you draw all your shapes using a scan conversion algorithm, making sure that all pixels even partially covered are filled. For every image pixel, keep a list of the shapes that filled it.
A query is then straightforward: find the pixel where the query point falls in time O(1) and check every shape in the list, in time O(K), where K is the list length, approximately equal to the number of intersecting shapes.
If your image is made of N² pixels and you have M objects with an average area of A pixels, you will need to store N² + M·A list elements (a shape identifier plus a link to the next). Choose the pixel size to achieve a good compromise between accuracy and storage cost. In any case, you should keep N² < Q·M, where Q is the total number of queries, otherwise the cost of just initializing the image could exceed the total query time.
In case your scene is very sparse (more voids than shapes), you can use a compressed representation of the image, using a quadtree.
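A minimal sketch of the gridding idea, assuming a hypothetical Shape interface with a bounding box and an exact point-inclusion test; for brevity, shapes are registered in every cell overlapped by their bounding box, which is a conservative superset of the exact scan conversion described above:

```cpp
#include <vector>

// Sketch of the gridding approach. "Shape" is a hypothetical interface with a
// bounding box and an exact point-inclusion test.
struct Shape {
    virtual ~Shape() = default;
    virtual void boundingBox(double& x0, double& y0, double& x1, double& y1) const = 0;
    virtual bool contains(double x, double y) const = 0;
};

class Grid {
public:
    Grid(double worldSize, int cells)
        : cells_(cells), cellSize_(worldSize / cells), lists_(cells * cells) {}

    // Register a shape in every cell overlapped by its bounding box.
    void add(const Shape* s) {
        double x0, y0, x1, y1;
        s->boundingBox(x0, y0, x1, y1);
        for (int cy = cellOf(y0); cy <= cellOf(y1); ++cy)
            for (int cx = cellOf(x0); cx <= cellOf(x1); ++cx)
                lists_[cy * cells_ + cx].push_back(s);
    }

    // O(1) to find the cell, then O(K) over the K candidate shapes in its list.
    std::vector<const Shape*> query(double x, double y) const {
        std::vector<const Shape*> hits;
        for (const Shape* s : lists_[cellOf(y) * cells_ + cellOf(x)])
            if (s->contains(x, y)) hits.push_back(s);
        return hits;
    }

private:
    int cellOf(double v) const {
        int c = static_cast<int>(v / cellSize_);
        return c < 0 ? 0 : (c >= cells_ ? cells_ - 1 : c);
    }
    int cells_;
    double cellSize_;
    std::vector<std::vector<const Shape*>> lists_;
};
```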

How do you store voxel data?

I've been looking online and I'm impressed by the capabilities of voxel data, especially for terrain building and manipulation. The problem is that voxels are never clearly explained on any site that I visited, nor how to use/implement them. All I find is that voxels are volumetric data. Please provide a more complete answer: what is volumetric data? It may seem like a simple question but I'm still unsure.
Also, how would you implement voxel data? (I aim to implement this in a C++ program.) What sort of data type would you use to store the voxel data so that the contents can be modified at run time as fast as possible? I have looked online and I couldn't find anything which explained how to store the data. Lists of objects, arrays, etc.
How do you use voxels?
EDIT:
Since I'm just beginning with voxels, I'll probably start by using it to only model simple objects but I will eventually be using it for rendering terrain and world objects.
In essence, voxels are a three-dimensional extension of pixels ("volumetric pixels"), and they can indeed be used to represent volumetric data.
What is volumetric data
Mathematically, volumetric data can be seen as a three-dimensional function F(x,y,z). In many applications this function is a scalar function, i.e., it has one scalar value at each point (x,y,z) in space. For instance, in medical applications this could be the density of certain tissues. To represent this digitally, one common approach is to simply make slices of the data: imagine images in the (X,Y)-plane, and shift the z-value to obtain a stack of images. If the slices are close to each other, the images can be displayed in a video sequence, as seen for instance on the wiki page for MRI scans (https://upload.wikimedia.org/wikipedia/commons/transcoded/4/44/Structural_MRI_animation.ogv/Structural_MRI_animation.ogv.360p.webm). As you can see, each point in space has one scalar value which is represented as a grayscale.
Instead of slices or a video, one can also represent this data using voxels. Instead of dividing a 2D plane into a regular grid of pixels, we now divide a 3D region into a regular grid of voxels. Again, a scalar value can be given to each voxel. However, visualizing this is not as trivial: whereas we could just give a gray value to pixels, this does not work for voxels (we would only see the colors of the box itself, not of its interior). In fact, this problem is caused by the fact that we live in a 3D world: we can look at a 2D image from a third dimension and completely observe it, but we cannot look at a 3D voxel space and observe it completely, as we have no 4th dimension to look from (unless you count time as a 4th dimension, i.e., creating a video).
So we can only look at parts of the data. One way, as indicated above, is to make slices. Another way is to look at so-called "iso-surfaces": we create surfaces in the 3D space for which every point has the same scalar value. For a medical scan, this allows one to extract, for instance, the brain from the volumetric data (not just as a slice, but as a 3D model).
Finally, note that surfaces (meshes, terrains, ...) are not volumetric; they are 2D shapes bent, twisted, stretched and deformed to be embedded in the 3D space. Ideally they represent the border of a volumetric object, but not necessarily (e.g., terrain data will probably not be a closed mesh). A way to represent surfaces using volumetric data is by making sure the surface is again an iso-surface of some function. As an example: F(x,y,z) = x^2 + y^2 + z^2 - R^2 can represent a sphere with radius R, centered around the origin. For all points (x',y',z') of the sphere, F(x',y',z') = 0. Moreover, for points inside the sphere F < 0, and for points outside the sphere F > 0.
A way to "construct" such a function is by creating a distance map, i.e., creating volumetric data such that every point F(x,y,z) indicates the distance to the surface. Of course, the surface is the collection of all the points for which the distance is 0 (so, again, the iso-surface with value 0 just as with the sphere above).
How to implement
As mentioned by others, this indeed depends on the usage. In essence, the data can be stored in a 3D matrix. However, this is huge! If you want the resolution doubled, you need 8x as much storage, so in general this is not an efficient solution. It will work for smaller examples, but it does not scale well.
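For illustration, here is a minimal sketch of the straightforward dense storage: one flat array indexed by (x, y, z). The cubic growth mentioned above is visible directly in the allocation size.

```cpp
#include <cstddef>
#include <vector>

// Sketch: dense voxel storage as one flat array. Memory grows as N^3, so
// doubling the resolution needs 8x the storage, as noted above.
struct VoxelGrid {
    std::size_t nx, ny, nz;
    std::vector<float> data;

    VoxelGrid(std::size_t nx_, std::size_t ny_, std::size_t nz_)
        : nx(nx_), ny(ny_), nz(nz_), data(nx_ * ny_ * nz_, 0.0f) {}

    // Row-major style indexing: x varies fastest, then y, then z.
    float& at(std::size_t x, std::size_t y, std::size_t z) {
        return data[x + nx * (y + ny * z)];
    }
};
```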
An octree structure is, afaik, the most common structure to store this. Many implementations and optimizations for octrees exist, so have a look at what can be (re)used. As pointed out by Andreas Kahler, sparse voxel octrees are a recent approach.
Octrees allow easier navigation to neighbouring cells, parent cells, child cells, ... (I am assuming here that the concept of octrees (or quadtrees in 2D) is known?) However, if many leaf cells are located at the finest resolution, this data structure comes with a huge overhead! So, is this better than a 3D array? It somewhat depends on what volumetric data you want to work with, and what operations you want to perform.
If the data is used to represent surfaces, octrees will in general be much better: as stated before, surfaces are not really volumetric, hence they will not require many voxels to hold relevant data (hence: "sparse" octrees). Referring back to the distance maps, the only relevant data are the points having value 0. The other points can have any value, but they do not matter (in some cases the sign is still considered, to denote "interior" and "exterior", but the value itself is not required if only the surface is needed).
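A minimal sketch of an octree node for sparse data; real implementations add node pooling, compact child masks and so on, but the recursive eight-way split is the core idea:

```cpp
#include <array>
#include <memory>

// Sketch of a sparse octree node: a cell either stores a value directly
// (leaf) or is split into eight children covering its octants. Empty regions
// simply have null children, which is where the memory savings come from.
struct OctreeNode {
    float value = 0.0f;                                   // payload for leaves
    std::array<std::unique_ptr<OctreeNode>, 8> children;  // null = empty octant

    bool isLeaf() const {
        for (const auto& c : children)
            if (c) return false;
        return true;
    }
};
```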
How to use
If by "use", you are wondering how to render them, then you can have a look at "marching cubes" and its optimizations. MC will create a triangle mesh from volumetric data, to be rendered in any classical way. Instead of translating to triangles, you can also look at volume rendering to render a "3D sampled data set" (i.e., voxels) as such (https://en.wikipedia.org/wiki/Volume_rendering). I have to admit that I am not that familiar with volume rendering, so I'll leave it at just the wiki-link for now.
Voxels are just 3D pixels, i.e. 3D space regularly subdivided into blocks.
How do you use them? It really depends on what you are trying to do. A ray casting terrain game engine? A medical volume renderer? Something completely different?
Plain 3D arrays might be the best for you, but they are memory intensive. As BWG pointed out, an octree is another popular alternative. Search for sparse voxel octrees for a more recent approach.
In popular usage during the 90's and 00's, 'voxel' could mean somewhat different things, which is probably one reason you have been finding it hard to find consistent information. In technical imaging literature, it means 3D volume element. Oftentimes, though, it is used to describe what is somewhat-more-clearly termed a high-detail raycasting engine (as opposed to the low-detail raycasting engine in Doom or Wolfenstein). A popular multi-part tutorial lives in the Flipcode archives. Also check out this brief one by Jacco.
There are many old demos you can find out there that should run under emulation. They are good for inspiration and dissection, but tend to use a lot of assembly code.
You should think carefully about what you want to support with your engine: car-racing, flying, 3D objects, planets, etc., as these constraints can change the implementation of your engine. Oftentimes, there is not a data structure, per se, but the terrain heightfield is represented procedurally by functions. Otherwise, you can use an image as a heightfield. For performance, when rendering to the screen, think about level-of-detail, in other words, how many actual pixels will be taken up by the rendered element. This will determine how much sampling you do of the heightfield. Once you get something working, you can think about ways you can blend pixels over time and screen space to make them look better, while doing as little rendering as possible.
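As a hedged illustration of representing terrain procedurally rather than as a stored structure, here is a sketch of a heightfield defined purely by a function; the sum-of-sines formula and the level-of-detail scaling are arbitrary placeholders for whatever noise function and sampling policy you actually choose:

```cpp
#include <cmath>

// Sketch: a terrain heightfield represented procedurally by a function rather
// than a stored data structure. The sum-of-sines below is just a placeholder;
// real engines typically use fractal noise or sample an image heightmap.
double terrainHeight(double x, double z) {
    return 4.0 * std::sin(0.05 * x) * std::cos(0.07 * z)
         + 1.0 * std::sin(0.31 * x + 0.17 * z);
}

// Level-of-detail idea from the text: sample the heightfield more coarsely
// the fewer screen pixels the terrain will cover at that distance.
double sampleStepForDistance(double distanceToCamera) {
    return 1.0 + distanceToCamera * 0.01;  // illustrative scaling only
}
```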

Fit a circle or a spline into a bunch of 3D Points

I have some 3D points that roughly, but clearly, form a segment of a circle. I now have to determine the circle that best fits all the points. I think there has to be some sort of least-squares best fit, but I can't figure out how to start.
The points are sorted the way they would be situated on the circle. I also have an estimated curvature at each point.
I need the radius and the plane of the circle.
I have to work in C/C++ or use an external script.
You could use a Principal Component Analysis (PCA) to map your coordinates from three dimensions down to two dimensions.
Compute the PCA and project your data onto the first two principal components. You can then use any 2D algorithm to find the centre of the circle and its radius. Once these have been found/fitted, you can project the centre back into 3D coordinates.
Since your data is noisy, there will still be some variance in the third dimension you squeezed out, but bear in mind that the PCA chooses this dimension so as to minimize the amount of information lost, i.e. by maximizing the amount of variance that is represented in the first two components, so you should be safe.
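A sketch of that PCA projection step, assuming the Eigen library is available: compute the centroid and covariance of the 3D points, take the two dominant eigenvectors as the plane of the circle, and return 2D coordinates ready for any 2D circle fit.

```cpp
#include <Eigen/Dense>
#include <vector>

// Sketch (assumes the Eigen library): project 3D points onto the plane of
// their two dominant principal components, so a 2D circle fit can be run.
std::vector<Eigen::Vector2d> projectToPlane(const std::vector<Eigen::Vector3d>& pts,
                                            Eigen::Vector3d& centroid,
                                            Eigen::Vector3d& axis1,
                                            Eigen::Vector3d& axis2) {
    // Centroid and 3x3 covariance matrix of the point cloud.
    centroid = Eigen::Vector3d::Zero();
    for (const auto& p : pts) centroid += p;
    centroid /= static_cast<double>(pts.size());

    Eigen::Matrix3d cov = Eigen::Matrix3d::Zero();
    for (const auto& p : pts) {
        Eigen::Vector3d d = p - centroid;
        cov += d * d.transpose();
    }

    // Eigen sorts eigenvalues in ascending order, so columns 2 and 1 hold the
    // two dominant directions: they span the plane of the circle.
    Eigen::SelfAdjointEigenSolver<Eigen::Matrix3d> solver(cov);
    axis1 = solver.eigenvectors().col(2);
    axis2 = solver.eigenvectors().col(1);

    std::vector<Eigen::Vector2d> out;
    out.reserve(pts.size());
    for (const auto& p : pts) {
        Eigen::Vector3d d = p - centroid;
        out.emplace_back(d.dot(axis1), d.dot(axis2));
    }
    return out;
}
```

A 2D circle centre (u, v) found on this plane then maps back to 3D as centroid + u·axis1 + v·axis2.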
A good algorithm for such data fitting is RANSAC (Random sample consensus). You can find a good description in the link so this is just a short outline of the important parts:
In your special case the model would be the 3D circle. To build it up, pick three random non-collinear points from your set, compute the plane they span (via the cross product), project the random points onto the plane and then apply the usual 2D circle fitting. With this you get the circle centre, radius and the plane equation. Now it's easy to check the support given by each of the remaining points. The support may be expressed as the distance from the circle, which consists of two parts: the orthogonal distance from the plane and the distance from the circle boundary inside the plane.
Edit:
The reason why I would prefer RANSAC over ordinary least squares (LS) is its superior stability in the presence of heavy outliers. The following image shows an example comparison of LS vs. RANSAC: the ideal model line is found by RANSAC, while the dashed line is produced by LS.
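A hedged sketch of that RANSAC loop, again assuming Eigen: the three sampled points define the plane (via the cross product) and the exact circle through them, and the support of each point combines its distance to the plane with its in-plane distance to the circle boundary. The iteration count and inlier threshold are illustrative values only, and a robust version would also reject nearly collinear samples:

```cpp
#include <Eigen/Dense>
#include <cmath>
#include <cstdlib>
#include <vector>

// Hedged sketch of RANSAC circle fitting in 3D (assumes Eigen and >= 3 points).
struct Circle3D {
    Eigen::Vector3d center, normal;
    double radius = 0.0;
};

// Exact circle through three points: plane normal from the cross product,
// then the 2D circumcenter computed in an in-plane basis anchored at a.
static Circle3D circleFromThreePoints(const Eigen::Vector3d& a,
                                      const Eigen::Vector3d& b,
                                      const Eigen::Vector3d& c) {
    Circle3D circ;
    circ.normal = (b - a).cross(c - a).normalized();
    Eigen::Vector3d u = (b - a).normalized();
    Eigen::Vector3d v = circ.normal.cross(u);
    Eigen::Vector2d p1(0.0, 0.0);
    Eigen::Vector2d p2((b - a).dot(u), (b - a).dot(v));
    Eigen::Vector2d p3((c - a).dot(u), (c - a).dot(v));
    // Circumcenter of the 2D triangle p1-p2-p3 (d ~ 0 means a degenerate,
    // nearly collinear sample; not handled here for brevity).
    double d = 2.0 * (p1.x() * (p2.y() - p3.y()) + p2.x() * (p3.y() - p1.y())
                    + p3.x() * (p1.y() - p2.y()));
    double ux = (p1.squaredNorm() * (p2.y() - p3.y())
               + p2.squaredNorm() * (p3.y() - p1.y())
               + p3.squaredNorm() * (p1.y() - p2.y())) / d;
    double uy = (p1.squaredNorm() * (p3.x() - p2.x())
               + p2.squaredNorm() * (p1.x() - p3.x())
               + p3.squaredNorm() * (p2.x() - p1.x())) / d;
    circ.center = a + ux * u + uy * v;
    circ.radius = (circ.center - a).norm();
    return circ;
}

// Distance of a point to the circle: out-of-plane part plus in-plane
// distance to the circle boundary, combined as a Euclidean norm.
static double distanceToCircle(const Circle3D& c, const Eigen::Vector3d& p) {
    Eigen::Vector3d d = p - c.center;
    double distPlane = d.dot(c.normal);
    Eigen::Vector3d inPlane = d - distPlane * c.normal;
    double distRadial = inPlane.norm() - c.radius;
    return std::sqrt(distPlane * distPlane + distRadial * distRadial);
}

Circle3D ransacCircle(const std::vector<Eigen::Vector3d>& pts,
                      int iterations = 200, double threshold = 0.05) {
    Circle3D best;
    int bestSupport = -1;
    for (int it = 0; it < iterations; ++it) {
        int i = std::rand() % pts.size();
        int j = std::rand() % pts.size();
        int k = std::rand() % pts.size();
        if (i == j || j == k || i == k) continue;  // need three distinct points
        Circle3D cand = circleFromThreePoints(pts[i], pts[j], pts[k]);
        int support = 0;
        for (const auto& p : pts)
            if (distanceToCircle(cand, p) < threshold) ++support;
        if (support > bestSupport) { bestSupport = support; best = cand; }
    }
    return best;
}
```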
The arguably easiest approach is least-squares curve fitting.
You may want to check the math,
or look at similar questions, such as polynomial least squares for image curve fitting
However I'd rather use a library for doing it.

Bilinear interpolation to enlarge bitmap images

I'm a student, and I've been tasked to optimize bilinear interpolation of images by invoking parallelism from CUDA.
The image is given as a 24-bit .bmp format. I already have a reader for the .bmp and have stored the pixels in an array.
Now I need to perform bilinear interpolation on the array. I do not understand the math behind it (even after going through the wiki article and other Google results). Because of this I'm unable to come up with an algorithm.
Is there anyone who can help me with a link to an existing bilinear interpolation algorithm on a 1-D array? Or perhaps a link to an open-source image processing library that uses bilinear and bicubic interpolation for scaling images?
The easiest way to understand bilinear interpolation is to understand linear interpolation in 1D.
This first figure should give you flashbacks to middle school math. Given some location a at which we want to know f(a), we take the neighboring "known" values and fit a line between them.
So we just used the old middle-school equations y=mx+b and y-y1=m(x-x1). Nothing fancy.
We basically carry over this concept to 2-D in order to get bilinear interpolation. We can attack the problem of finding f(a,b) for any a,b by doing three interpolations. Study the next figure carefully. Don't get intimidated by all the labels. It is actually pretty simple.
For bilinear interpolation, we again use the neighboring points. Now there are four of them, since we are in 2D. The trick is to attack the problem one dimension at a time.
We project our (a,b) to the sides and first compute two (one dimensional!) interpolating lines.
f(a,yj) where yj is held constant
f(a,yj+1) where yj+1 is held constant.
Now there is just one last step. You take the two points you calculated, f(a,yj) and f(a,yj+1), and fit a line between them. That's the blue one going left to right in the diagram, passing through f(a,b). Interpolating along this last line gives you the final answer.
I'll leave the math for the 2-D case for you. It's not hard if you work from the diagram. And going through it yourself will help you really learn what's going on.
One last little note, it doesn't matter which sides you pick for the first two interpolations. You could have picked the top and bottom, and then done the third interpolation line between those two instead. The answer would have been the same.
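A minimal sketch of the procedure just described, for a single-channel image stored row-major in a flat (1D) array: two interpolations along x, then one along y. The border handling here is a simple clamp, which is one of several valid choices:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Sketch: bilinear sample of a single-channel image stored row-major in a
// flat 1D array. Two linear interpolations along x, then one along y.
double bilinearSample(const std::vector<double>& img, int width, int height,
                      double x, double y) {
    int x0 = std::clamp(static_cast<int>(std::floor(x)), 0, width - 2);
    int y0 = std::clamp(static_cast<int>(std::floor(y)), 0, height - 2);
    double tx = x - x0, ty = y - y0;             // fractional offsets in [0,1]

    double f00 = img[y0 * width + x0];           // the four neighboring pixels
    double f10 = img[y0 * width + x0 + 1];
    double f01 = img[(y0 + 1) * width + x0];
    double f11 = img[(y0 + 1) * width + x0 + 1];

    double top    = f00 + tx * (f10 - f00);      // interpolate along x at y0
    double bottom = f01 + tx * (f11 - f01);      // interpolate along x at y0+1
    return top + ty * (bottom - top);            // final interpolation along y
}
```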
When you enlarge an image by scaling the sides by an integral factor, you may treat the result as the original image with extra pixels inserted between the original pixels.
See the pictures in IMAGE RESIZE EXAMPLE.
The f(x,y)=... formula in this article in Wikipedia gives you a method to compute the color f of an inserted pixel:
For every inserted pixel you combine the colors of the four original pixels (Q11, Q12, Q21, Q22) surrounding it. The combination depends on the distance between the inserted pixel and the surrounding original pixels: the closer it is to one of them, the closer their colors are.
In the illustration, the original pixels are shown in red and the inserted pixel in green.
That's the idea.
If you scale the sides by a non-integral factor, the formulas still hold, but now you need to recalculate all pixel colors, as you cannot just take the original pixels and simply insert extra pixels between them.
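A hedged sketch of that whole-image case: every destination pixel is mapped back to fractional source coordinates and filled with a bilinear mix of the four surrounding source pixels. The sampler from the earlier sketch is repeated inline so this snippet stands alone, and it works for integral and non-integral factors alike:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Sketch: enlarge a single-channel image from sw x sh to dw x dh (assumes all
// sizes >= 2). Each destination pixel maps back to fractional source
// coordinates and takes a bilinear mix of the four surrounding source pixels.
std::vector<double> enlarge(const std::vector<double>& src, int sw, int sh,
                            int dw, int dh) {
    std::vector<double> dst(static_cast<std::size_t>(dw) * dh);
    for (int dy = 0; dy < dh; ++dy) {
        for (int dx = 0; dx < dw; ++dx) {
            // Fractional source position of this destination pixel.
            double sx = dx * (sw - 1.0) / (dw - 1.0);
            double sy = dy * (sh - 1.0) / (dh - 1.0);
            int x0 = std::min(static_cast<int>(sx), sw - 2);
            int y0 = std::min(static_cast<int>(sy), sh - 2);
            double tx = sx - x0, ty = sy - y0;
            double f00 = src[y0 * sw + x0],       f10 = src[y0 * sw + x0 + 1];
            double f01 = src[(y0 + 1) * sw + x0], f11 = src[(y0 + 1) * sw + x0 + 1];
            double top = f00 + tx * (f10 - f00), bottom = f01 + tx * (f11 - f01);
            dst[dy * dw + dx] = top + ty * (bottom - top);
        }
    }
    return dst;
}
```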
Don't get hung up on the fact that 2D arrays in C are really 1D arrays. It's an implementation detail. Mathematically, you'll still need to think in terms of 2D arrays.
Think about linear interpolation on a 1D array. You know the value at 0, 1, 2, 3, ... Now suppose I ask you for the value at 1.4. You'd give me a weighted mix of the values at 1 and 2: (1 - 0.4)*A[1] + 0.4*A[2]. Simple, right?
Now you need to extend to 2D. No problem. 2D interpolation can be decomposed into two 1D interpolations, in the x-axis and then y-axis. Say you want (1.4, 2.8). Get the 1D interpolants between (1, 2)<->(2,2) and (1,3)<->(2,3). That's your x-axis step. Now 1D interpolate between them with the appropriate weights for y = 2.8.
This should be simple to make massively parallel. Just calculate each interpolated pixel separately. With shared memory access to the original image, you'll only be doing reads, so no synchronization issues.