What's the relationship between Hamming distance and Simple Matching Coefficient? - data-mining

I'm doing the exercises in Introduction to Data Mining, and got stuck on the following question:
Which approach, Jaccard or Hamming distance, is more similar to the
Simple Matching Coefficient, and which approach is more similar to the
cosine measure? Explain. (Note: The Hamming measure is a distance,
while the other three measures are similarities, but don’t let this confuse
you.)
I think that the Hamming distance is similar to the SMC, since both of them look at the whole dataset and compare whether data points are similar or dissimilar. But the book's solution is the following:
The Hamming distance is similar to the SMC. In fact, SMC = Hamming
distance / number of bits.
Did the solution make a mistake? I don't think the Hamming distance and the SMC are equal to each other; rather, the Hamming distance (divided by the number of bits) plus the SMC equals 1.

Hamming / length = 1 - SMC
is a very strong relationship. Because of this, the two are equivalent in their capabilities.
Your argument of "looking at the whole data set" is wrong; each is defined on a pair of objects.
The point of this exercise is to practise your basic math skills, and transform equations into one another. This is a skill you will need frequently:
- you don't need to explore equivalent functions; one is enough
- of equivalent functions, one may be more efficient to compute than another
- of equivalent functions, one may be more precise than another due to floating-point math
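To make the relationship concrete, here is a minimal C++ sketch (the helper names and toy vectors are my own) that computes both measures on a pair of binary vectors and checks that SMC = 1 - Hamming / length:

#include <cassert>
#include <cstddef>
#include <vector>

// Hamming distance: number of positions where two binary vectors disagree.
std::size_t hamming(const std::vector<int>& a, const std::vector<int>& b) {
    std::size_t d = 0;
    for (std::size_t i = 0; i < a.size(); ++i)
        if (a[i] != b[i]) ++d;
    return d;
}

// Simple matching coefficient: fraction of positions that agree.
double smc(const std::vector<int>& a, const std::vector<int>& b) {
    std::size_t matches = a.size() - hamming(a, b);
    return static_cast<double>(matches) / a.size();
}

int main() {
    std::vector<int> a{1, 0, 0, 1, 1, 0, 1, 0};
    std::vector<int> b{1, 1, 0, 0, 1, 0, 1, 0};
    // Hamming distance is 2 over 8 bits, so SMC = 1 - 2/8 = 0.75.
    // The denominators are powers of two, so the comparison is exact in floating point.
    assert(smc(a, b) == 1.0 - static_cast<double>(hamming(a, b)) / a.size());
}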

What is the fastest algorithm to find the point from a set of points, which is closest to a line?

I have:
- a set of points of known size (in my case, only 6 points)
- a line characterized by x = s + t * r, where x, s and r are 3D vectors
I need to find the point closest to the given line. The actual distance does not matter to me.
I had a look at several different questions that seem related (including this one) and I know how to solve this on paper from my high school math classes. But I cannot find a solution without calculating every distance, and I am sure there has to be a better/faster way. Performance is absolutely crucial in my application.
One more thing: All numbers are integers (coordinates of points and elements of s and r vectors). Again, for performance reasons I would like to keep the floating-point math to a minimum.
You have to process every point at least once to know its distance. Unless you want to repeat the process many times with different lines, simply computing the distance of every point is unavoidable, so the algorithm has to be at least O(n).
Since you don't care about the actual distance, we can make some simplifications to the point-to-line distance computation. The exact distance is computed by (source):
d^2 = |r⨯(p-s)|^2 / |r|^2
where ⨯ is the cross product and |r|^2 is the squared length of vector r. Since |r|^2 is constant for all points, we can omit it from the distance computation without changing the result:
d^2 = |r⨯(p-s)|^2
Compare these relative squared distances and keep the minimum. The advantage of this formula is that you can do everything with integers, since you mentioned that all coordinates are integers.
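A minimal C++ sketch under those assumptions (the type and function names are my own); even with 64-bit integers, watch for overflow if the coordinates can be large:

#include <cstddef>
#include <cstdint>
#include <vector>

// Integer-valued 3D vector, as in the question.
struct Vec3 { std::int64_t x, y, z; };

Vec3 sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a.y * b.z - a.z * b.y,
            a.z * b.x - a.x * b.z,
            a.x * b.y - a.y * b.x};
}

std::int64_t squaredNorm(const Vec3& v) { return v.x * v.x + v.y * v.y + v.z * v.z; }

// Index of the point closest to the line x = s + t * r, comparing |r x (p - s)|^2,
// i.e. the squared distance scaled by the constant factor |r|^2.
// Assumes points is non-empty.
std::size_t closestToLine(const std::vector<Vec3>& points, const Vec3& s, const Vec3& r) {
    std::size_t best = 0;
    std::int64_t bestD2 = squaredNorm(cross(r, sub(points[0], s)));
    for (std::size_t i = 1; i < points.size(); ++i) {
        std::int64_t d2 = squaredNorm(cross(r, sub(points[i], s)));
        if (d2 < bestD2) { bestD2 = d2; best = i; }
    }
    return best;
}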
I'm afraid you can't get away with computing fewer than 6 distances (if you could, at least one point would be left out -- possibly the nearest one).
See if it makes sense to preprocess: Is the line fixed and the points vary? Consider rotating coordinates to make the line horizontal.
As there are few points, it is doubtful that this is your bottleneck. Measure where the hot spots are, redesign algorithms/data representation, turn up compiler optimization, then compile to assembly and hand-tune ("bum") that. Strictly in that order.
Jon Bentley's "Writing Efficient Programs" (sadly long out of print) and "Programming Pearls" (2nd edition) are full of advice on practical programming.

How to implement k-means algorithm on string data

I am trying to implement the k-means algorithm on the data set below. It's straightforward to calculate the distance between any two numeric attributes, but how do I calculate the distance between two strings, and how do I sum up all the distances (i.e. the distance between the string attributes and the distance between the numeric attributes)? Please kindly advise me. Thank you.
K-means is designed for Euclidean distance. You cannot just plug in arbitrary other distance functions. This may cause k-means to no longer converge.
The required property is that the mean must minimize the variances. If you cannot guarantee this property (and what is the mean of a string anyway?) then you lose guaranteed convergence.
Technically, k-means isn't even based on Euclidean distance, but it minimizes variances, which happen to be the same as squared Euclidean distances; and if you minimize these squares, you also minimize Euclidean distance. But what the algorithm really aims at minimizing is Var(Attribute 1, Cluster 1) + Var(Attribute 2, Cluster 1) + ... + Var(Attribute n, Cluster k).
You might want to look into k-medoids, which, by using a medoid instead of the mean, both avoids the need to be able to compute a mean and, as far as I know, gives convergence guarantees for arbitrary distances.
Alternatively, you might want to look into truly distance-based algorithms, including the various density-based clustering algorithms, which operate directly on a distance function.
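For reference, a small C++ sketch (function and parameter names are my own) of the objective k-means actually minimizes: the sum of squared Euclidean distances from each point to its assigned cluster mean, which is the summed-variance quantity described above. There is no analogous well-defined "mean" for a string attribute, which is the whole problem:

#include <cstddef>
#include <vector>

// Squared-error criterion: sum over all points of the squared Euclidean
// distance to the mean of the cluster the point is assigned to. This is the
// quantity that decreases in every k-means iteration.
double kmeansObjective(const std::vector<std::vector<double>>& points,
                       const std::vector<std::vector<double>>& means,
                       const std::vector<int>& assignment) {
    double total = 0.0;
    for (std::size_t i = 0; i < points.size(); ++i) {
        const std::vector<double>& m = means[assignment[i]];
        for (std::size_t d = 0; d < m.size(); ++d) {
            double diff = points[i][d] - m[d];
            total += diff * diff;  // contribution of attribute d, point i
        }
    }
    return total;
}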
To calculate the distance between strings, you can use the Levenshtein distance (a.k.a. edit distance).
To normalize the values between the string and numeric attributes, you can try to state the attributes as percentages: find the min and max value of each type of attribute, and then, for a given data instance, calculate its percentage within the respective range.
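As an illustration, a minimal C++ sketch of the classic dynamic-programming Levenshtein distance, plus the min-max scaling suggested above (the names are my own, and unit-cost edits are assumed):

#include <algorithm>
#include <cstddef>
#include <string>
#include <vector>

// Levenshtein distance with unit costs for insertion, deletion, substitution.
std::size_t levenshtein(const std::string& a, const std::string& b) {
    std::vector<std::size_t> prev(b.size() + 1), cur(b.size() + 1);
    for (std::size_t j = 0; j <= b.size(); ++j) prev[j] = j;
    for (std::size_t i = 1; i <= a.size(); ++i) {
        cur[0] = i;
        for (std::size_t j = 1; j <= b.size(); ++j) {
            std::size_t subst = prev[j - 1] + (a[i - 1] == b[j - 1] ? 0 : 1);
            cur[j] = std::min({prev[j] + 1, cur[j - 1] + 1, subst});
        }
        std::swap(prev, cur);
    }
    return prev[b.size()];
}

// Min-max scaling to [0, 1], as suggested for mixing string and numeric attributes.
double normalize(double value, double minValue, double maxValue) {
    return maxValue > minValue ? (value - minValue) / (maxValue - minValue) : 0.0;
}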

Compare similarity algorithms

I want to use string similarity functions to find corrupted data in my database.
I came upon several of them:
Jaro,
Jaro-Winkler,
Levenshtein,
Euclidean and
Q-gram.
I wanted to know what is the difference between them and in what situations they work best?
Expanding on my wiki-walk comment in the errata, and noting some of the foundational literature on the comparability of algorithms that apply to similar problem spaces, let's explore the applicability of these algorithms before determining whether they are numerically comparable.
From Wikipedia, Jaro-Winkler:
In computer science and statistics, the Jaro–Winkler distance
(Winkler, 1990) is a measure of similarity between two strings. It is
a variant of the Jaro distance metric (Jaro, 1989, 1995) and
mainly[citation needed] used in the area of record linkage (duplicate
detection). The higher the Jaro–Winkler distance for two strings is,
the more similar the strings are. The Jaro–Winkler distance metric is
designed and best suited for short strings such as person names. The
score is normalized such that 0 equates to no similarity and 1 is an
exact match.
Levenshtein distance:
In information theory and computer science, the Levenshtein distance
is a string metric for measuring the amount of difference between two
sequences. The term edit distance is often used to refer specifically
to Levenshtein distance.
The Levenshtein distance between two strings is defined as the minimum
number of edits needed to transform one string into the other, with
the allowable edit operations being insertion, deletion, or
substitution of a single character. It is named after Vladimir
Levenshtein, who considered this distance in 1965.
Euclidean distance:
In mathematics, the Euclidean distance or Euclidean metric is the
"ordinary" distance between two points that one would measure with a
ruler, and is given by the Pythagorean formula. By using this formula
as distance, Euclidean space (or even any inner product space) becomes
a metric space. The associated norm is called the Euclidean norm.
Older literature refers to the metric as Pythagorean metric.
And Q- or n-gram encoding:
In the fields of computational linguistics and probability, an n-gram
is a contiguous sequence of n items from a given sequence of text or
speech. The items in question can be phonemes, syllables, letters,
words or base pairs according to the application. n-grams are
collected from a text or speech corpus.
The two core
advantages of n-gram models (and algorithms that use
them) are relative simplicity and the ability to scale up – by simply
increasing n a model can be used to store more context with a
well-understood space–time tradeoff, enabling small experiments to
scale up very efficiently.
The trouble is that these algorithms solve different problems and have different applicability within the space of all possible algorithms for the longest-common-subsequence problem, to your data or to grafting a usable metric onto it. In fact, not all of these are even metrics, as some of them don't satisfy the triangle inequality.
Instead of going out of your way to define a dubious scheme to detect data corruption, do this properly: use checksums and parity bits for your data. Don't try to solve a much harder problem when a simpler solution will do.
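For illustration only, a self-contained bitwise CRC-32 sketch in C++ (the common reflected polynomial 0xEDB88320); in practice you would likely use a vetted library routine, and the function name here is my own:

#include <cstddef>
#include <cstdint>

// Bitwise CRC-32 (reflected polynomial 0xEDB88320). Store the checksum next to
// each record and recompute it on read; a mismatch signals corruption.
std::uint32_t crc32(const std::uint8_t* data, std::size_t length) {
    std::uint32_t crc = 0xFFFFFFFFu;
    for (std::size_t i = 0; i < length; ++i) {
        crc ^= data[i];
        for (int bit = 0; bit < 8; ++bit)
            crc = (crc & 1u) ? (crc >> 1) ^ 0xEDB88320u : (crc >> 1);
    }
    return ~crc;
}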
String similarity helps in a lot of different ways. For example:
Google's "did you mean" results are calculated using string similarity.
String similarity is used to correct OCR errors.
String similarity is used to correct keyboard entry errors.
String similarity is used to find the best-matching alignment of two DNA sequences in bioinformatics.
But one size does not fit all. Every string similarity algorithm is designed for a specific usage, though most of them are similar. For example, the Levenshtein distance counts how many single characters you have to change to make two strings equal.
kitten → sitten
Here the distance is 1 character change. You may give different weights to deletion, insertion, and substitution. For example, OCR-error and keyboard-error models give lower weight to certain changes: with OCR, some characters look very similar to others; on a keyboard, some characters are physically close to each other. Bioinformatics string similarity allows a lot of insertions.
As your quote notes, the "Jaro–Winkler distance metric is designed and best suited for short strings such as person names".
Therefore you should keep your particular problem in mind.
I want to use string similarity functions to find corrupted data in my database.
How is your data corrupted? Is it user error, similar to keyboard input errors? Is it similar to OCR errors? Or something else entirely?

Looking for C/C++ library calculating max of Gaussian curve using discrete values

I have some discrete values and the assumption that these values lie on a Gaussian curve.
There should be an algorithm for max-calculation using only 3 discrete values.
Do you know any library or code in C/C++ implementing this calculation?
Thank you!
P.S.:
The original task is auto-focus implementation. I move a (microscope) camera and capture pictures at different positions. The position whose picture has the most distinct colors should have the best focus.
EDIT
This was a long time ago :-(
I just wanted to remove this question, but left it out of respect for the good answer.
You have three points that are supposed to lie on a Gaussian curve; this means that they satisfy the function:
f(x) = a * exp( -(x - b)^2 / (2 c^2) )
If you take the logarithm of this function, you get:
ln f(x) = ln a - (x - b)^2 / (2 c^2)
which is just a simple second-degree polynomial, i.e. a parabola with a vertical axis of symmetry:
y = A x^2 + B x + C
with
A = -1 / (2 c^2)
B = b / c^2
C = ln a - b^2 / (2 c^2)
So, if you know the three coefficients of the parabola, you can derive the parameters of the Gaussian curve; incidentally, the only parameter of the Gaussian function that is of interest to you is b, since it tells you where the center of the distribution is, i.e. where its maximum lies. It's immediate to find out that
b = -B / (2 A)
All that remains to do is to fit the parabola (with the "original" x values and the logarithm of your values). Now, if you had more points, a polynomial fit would be involved, but, since you have just three points, the situation is really simple: there's one and only one parabola that goes through three points.
You now just have to write the equation of the parabola for each of your points and solve the system:
y1 = A x1^2 + B x1 + C
y2 = A x2^2 + B x2 + C
y3 = A x3^2 + B x3 + C
(with y_i = ln z_i, where the z's are the actual values read at the corresponding x_i)
This can be solved by hand (with some time), with some CAS or... by looking on StackOverflow :) ; the solution thus is:
A = ( y1 (x2 - x3) + y2 (x3 - x1) + y3 (x1 - x2) ) / ( (x1 - x2) (x1 - x3) (x2 - x3) )
B = ( y1 (x3^2 - x2^2) + y2 (x1^2 - x3^2) + y3 (x2^2 - x1^2) ) / ( (x1 - x2) (x1 - x3) (x2 - x3) )
So using these last equations (remember: the ys are the logarithm of your "real" values) and the other relations you can easily write a simple algebraic formula to get the parameter b of your Gaussian curve, i.e. its maximum.
(I may have made some mistakes in the calculations; double-check them before using the results. The procedure itself, however, should be correct.)
(thanks at http://www.codecogs.com/latex/eqneditor.php for the LaTeX equations)
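Putting the result into code, a small C++ sketch (the function name is my own) that returns the estimated peak position b from three samples (x1, z1), (x2, z2), (x3, z3), where the z's are the measured values and must be positive and distinct in x:

#include <cmath>

// Fits a parabola to (x, ln z) and returns the x of its vertex, which equals
// the Gaussian center b, i.e. the position of the maximum.
double gaussianPeak(double x1, double z1, double x2, double z2, double x3, double z3) {
    double y1 = std::log(z1), y2 = std::log(z2), y3 = std::log(z3);
    double denom = (x1 - x2) * (x1 - x3) * (x2 - x3);
    double A = (y1 * (x2 - x3) + y2 * (x3 - x1) + y3 * (x1 - x2)) / denom;
    double B = (y1 * (x3 * x3 - x2 * x2) + y2 * (x1 * x1 - x3 * x3) + y3 * (x2 * x2 - x1 * x1)) / denom;
    return -B / (2.0 * A);  // vertex of the parabola = center of the Gaussian
}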

minimum distance between 2 points in c++

I'm given m places (x,y coordinates).
I have n requests to find the closest place to a given point P(x, y) (the minimum Euclidean distance).
How can I solve this problem in better than O(n*m), where n is the number of requests and m is the number of places? I could use squared Euclidean distances, but it's still O(n*m).
Try a kd-tree. A high performance library implementation can be found here.
Note: I'm pointing you to an approximate nearest-neighbors search which is optimized for high dimensions. This may be slightly overkill for your application.
Edit:
For a 2d kd-tree, the build time would be O(m*log(m)) and the query time would be O(n*sqrt(m)). This should end up being a net win over the naive solution if your number of queries, n, exceeds log(m).
Using the library means you don't have to implement it yourself, so the complexity shouldn't be an issue.
If you want to generalize to high dimension extremely fast querying, check out locality sensitive hashing.
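If you do want to roll your own for the 2D case, here is a rough, self-contained C++ sketch of a median-split 2-d tree with nearest-neighbour search over integer coordinates (all names are my own; this is not the library referenced above):

#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <limits>
#include <vector>

struct Point { std::int64_t x, y; };

struct KdTree {
    // The points vector is reordered so that the median of each subrange sits
    // at its midpoint; subranges alternate split axis (x, then y, ...).
    std::vector<Point> pts;

    explicit KdTree(std::vector<Point> p) : pts(std::move(p)) { build(0, pts.size(), 0); }

    void build(std::size_t lo, std::size_t hi, int axis) {
        if (hi - lo <= 1) return;
        std::size_t mid = (lo + hi) / 2;
        std::nth_element(pts.begin() + lo, pts.begin() + mid, pts.begin() + hi,
                         [axis](const Point& a, const Point& b) {
                             return axis == 0 ? a.x < b.x : a.y < b.y;
                         });
        build(lo, mid, 1 - axis);
        build(mid + 1, hi, 1 - axis);
    }

    static std::int64_t sq(std::int64_t v) { return v * v; }
    static std::int64_t dist2(const Point& a, const Point& b) {
        return sq(a.x - b.x) + sq(a.y - b.y);
    }

    void nearest(const Point& q, std::size_t lo, std::size_t hi, int axis,
                 std::int64_t& best, Point& bestPt) const {
        if (lo >= hi) return;
        std::size_t mid = (lo + hi) / 2;
        std::int64_t d2 = dist2(q, pts[mid]);
        if (d2 < best) { best = d2; bestPt = pts[mid]; }
        std::int64_t delta = axis == 0 ? q.x - pts[mid].x : q.y - pts[mid].y;
        // Visit the side of the splitting line containing q first; only cross
        // over if the splitting line is closer than the best match so far.
        if (delta < 0) {
            nearest(q, lo, mid, 1 - axis, best, bestPt);
            if (sq(delta) < best) nearest(q, mid + 1, hi, 1 - axis, best, bestPt);
        } else {
            nearest(q, mid + 1, hi, 1 - axis, best, bestPt);
            if (sq(delta) < best) nearest(q, lo, mid, 1 - axis, best, bestPt);
        }
    }

    // Returns the stored point nearest to q; assumes at least one point.
    Point query(const Point& q) const {
        std::int64_t best = std::numeric_limits<std::int64_t>::max();
        Point bestPt = pts.front();
        nearest(q, 0, pts.size(), 0, best, bestPt);
        return bestPt;
    }
};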
Interesting. To reduce the effect of n, I wonder if perhaps it would help to save the result of each request as you handle it. A clever result table might shortcut the need to calculate sqrt(x^2 + y^2) when solving subsequent requests.
The nearest-neighbor problem, eh? I found Robert Sedgewick's standard book very useful in these cases. He describes nearest-neighbor search, too.