image difference algorithm based on visual perception - c++

I have been searching Google for this for quite some time with no luck finding what I need. I have a few different versions of the same image, each derived from the original by dithering it to a palette. The image is split into 8x8 tiles, and each tile uses one of n palettes; I want to pick which palette looks best for each tile. There are a few questions on Stack Overflow about image difference, but they suggest a simple metric, Σ ((R1n - R2n)^2 + (G1n - G2n)^2 + (B1n - B2n)^2), and picking the candidate with the lowest sum. Some suggest that a better color-difference formula improves this; I tried CIEDE2000 but still found the results unsatisfactory, and in many instances I can manually pick a better tile. Note that I did verify my CIEDE2000 implementation against precomputed data and used this library (http://www.getreuer.info/home/colorspace) for the RGB to CIELab conversion, so I doubt my disappointment with the results is due to a bug; most of the time it does an okay job. What I want is a better algorithm than a plain per-pixel delta for picking the best tile. Does anyone know of a good algorithm for this, or better terminology, so that my searches turn up more relevant information?
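For concreteness, here is a minimal sketch of that per-tile sum-of-squared-differences comparison; the Pixel struct, the tile layout, and the function names are my own assumptions rather than code from the question.

#include <cstdint>
#include <cstddef>
#include <vector>

struct Pixel { uint8_t r, g, b; };            // one RGB sample

// Sum of squared channel differences over a tile of n pixels.
static uint64_t tileSSD(const Pixel *a, const Pixel *b, size_t n)
{
    uint64_t sum = 0;
    for (size_t i = 0; i < n; ++i) {
        int dr = int(a[i].r) - int(b[i].r);
        int dg = int(a[i].g) - int(b[i].g);
        int db = int(a[i].b) - int(b[i].b);
        sum += uint64_t(dr * dr + dg * dg + db * db);
    }
    return sum;
}

// Pick the candidate tile (one per palette) closest to the original tile.
static size_t bestPaletteForTile(const Pixel *original,
                                 const std::vector<const Pixel *> &candidates,
                                 size_t tilePixels = 8 * 8)
{
    size_t best = 0;
    uint64_t bestScore = UINT64_MAX;
    for (size_t p = 0; p < candidates.size(); ++p) {
        uint64_t score = tileSSD(original, candidates[p], tilePixels);
        if (score < bestScore) { bestScore = score; best = p; }
    }
    return best;
}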

Related

What's the best way for complex shape matching

I need to know the best way to match a certain shape (template) in an image.
I know there are several ways, but some of them do not lead to very good results and others need a lot of processing time. Has anyone tried a good and fast way to do the matching with a short processing time?
For example this is the template...
I have a sample and I want to compare it with the template, returning true if the sample is similar to the template and false otherwise.
Note: I tried contour matching, cascade classification, and SURF, but none of them is very good, or the processing time is not so good.
Matching things with each other can be a rather difficult task, mainly because different techniques have very different characteristics and can yield almost perfect results on some categories and very bad results on others.
This said, I don't think you'll ever get an answer to your question, at least not one that says "Use the xyz method from [cited paper], that will solve all your problems". I'll try to point out some examples for you, hoping that it helps.
Template matching operator: compare a template with a sliding window on your image. It can achieve very good results if your template is very similar to the object you are looking for in the image, no matter how complex it is, and it can be very fast. However, it's not invariant to basically anything, so if you expect rotations, significant changes in lighting or anything else, this is probably not going to work for you. Here you can find some code. Watch out for which color space you are using; different color spaces can give very different results if used right (e.g. for face analysis, HSV can be better than RGB in some cases).
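For reference, a minimal OpenCV sketch of that sliding-window template matching; the file names and the 0.8 acceptance threshold are placeholder assumptions.

#include <opencv2/opencv.hpp>

int main()
{
    // File names are placeholders.
    cv::Mat image = cv::imread("scene.png", cv::IMREAD_GRAYSCALE);
    cv::Mat templ = cv::imread("template.png", cv::IMREAD_GRAYSCALE);

    // Slide the template over the image and score every position.
    cv::Mat result;
    cv::matchTemplate(image, templ, result, cv::TM_CCOEFF_NORMED);

    // The best match is the location with the highest score.
    double minVal, maxVal;
    cv::Point minLoc, maxLoc;
    cv::minMaxLoc(result, &minVal, &maxVal, &minLoc, &maxLoc);

    bool found = maxVal > 0.8;                 // acceptance threshold is arbitrary
    cv::Rect where(maxLoc, templ.size());      // where the template matched best
    return found ? 0 : 1;
}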
Keypoint matching like SIFT or SURF: I used this a lot, with very good results. You'll need to decide which descriptor and which matcher to use. OpenCV has some nice examples; here you can find one. It's not going to be the fastest way to match your object, since these descriptors can take some time to extract, but it's good if you don't know much about the conditions you'll be working in: it's usually robust to scale, rotation and lighting changes, as long as keypoints can be correctly found on both the template and the image.
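A minimal sketch of such keypoint matching with OpenCV, assuming OpenCV 4.4 or newer where SIFT is available in the core features2d module (older versions keep it in xfeatures2d); the 0.75 ratio-test threshold is a common but arbitrary choice.

#include <opencv2/opencv.hpp>
#include <vector>

// Count "good" SIFT matches between a template and a sample image.
int countGoodMatches(const cv::Mat &templ, const cv::Mat &sample)
{
    cv::Ptr<cv::SIFT> sift = cv::SIFT::create();

    std::vector<cv::KeyPoint> kp1, kp2;
    cv::Mat desc1, desc2;
    sift->detectAndCompute(templ, cv::noArray(), kp1, desc1);
    sift->detectAndCompute(sample, cv::noArray(), kp2, desc2);

    // k-nearest-neighbor matching plus Lowe's ratio test.
    cv::BFMatcher matcher(cv::NORM_L2);
    std::vector<std::vector<cv::DMatch>> knn;
    matcher.knnMatch(desc1, desc2, knn, 2);

    int good = 0;
    for (const auto &m : knn)
        if (m.size() == 2 && m[0].distance < 0.75f * m[1].distance)
            ++good;
    return good;   // compare against a threshold of your choosing
}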
Shape matching: I was rather surprised when, in an image classification competition I participated in, I was able to use a simple HOG descriptor to obtain very discriminating information about my images. Histograms of Oriented Gradients are a rather powerful tool for describing the shape of an object; they use edge orientation and magnitude to describe your image. They can be fast to compute (OpenCV has a GPU implementation, I think) and are configurable (you can decide how fine your grid is and how many cells it has, resulting in very different information). HOGs are not invariant to rotation: seeing the object from a different angle will likely produce a different histogram. But they are very robust to lighting changes, because they don't use color.
HOGs are just one example; there are a lot of shape and contour descriptors, but they basically offer pretty much the same thing, I think.
Histogram matching: not my first choice, but it can be useful if you know something about the object and the rest of the image. For example, if you know you are looking for your pink flower in a jungle image where it's the only pink thing there, a simple color histogram matching will do just fine. Pick a sliding window, run it over your image, compare your histograms and you'll be done. It's very fast and very simple, and it doesn't use the shape at all, so no matter how complex your object is, you'll find it. Not using shape makes it robust to rotations; watch out for lighting changes though. A very big limitation of this method is that if there are other pink things in your jungle, you won't be able to distinguish them.
Hybrid approaches: here is where you can get the best out of the techniques cited above. As you have seen, most of them work well in a certain environment and quite badly in others. You can use a combination of the techniques you know and obtain something much better than the sum of the parts. I worked a lot with HOGs and head pose estimation, and a real breakthrough came when we started extracting HOGs not in a dense way but around certain keypoints. You'll need to know your problem, find out what you need and adapt a bunch of methods to it. In general, hybrid methods can work a lot better, but also a lot slower.
Hope this helps you a bit; I don't think that, given the information you gave us, I could give you a much better answer... (probably someone else can, that's why I'm still a student :) )

FFT based image registration (optionally using OpenCV) in cpp?

I'm trying to align two images taken from a handheld camera.
At first, I was trying to use the OpenCV warpPerspective method based on SIFT/SURF feature points. The problem is that the feature extraction & matching process can be extremely slow when the image resolution is high (3000x4000). I tried scaling the images down before finding feature points, but the result is not as good as before. (The Mat generated from findHomography shouldn't be affected by scaling down the image, right?) And sometimes, due to a lack of good feature point matches, the result is quite strange.
After searching on this topic, it seems that solving the problem in Fourier domain will speed up the registration process. And I've found this question which leads me to the code here.
The only problem is that the code is written in Python with NumPy (not even using OpenCV), which makes it quite hard to rewrite in C++ with OpenCV (in OpenCV I can only find dft; there is no fftshift or other FFT helpers, I'm not very familiar with NumPy, and I'm not brave enough to simply ignore the missing methods). So I'm wondering why there isn't such a Fourier-domain image registration implementation in C++?
Can you give me some suggestions on how to implement one, a link to an already implemented C++ version, or help turning the Python code into C++?
Big thanks!
I'm fairly certain that the FFT method can only recover a similarity transform, that is, only a (2d) rotation, translation and scale. Your results might not be that great using a handheld camera.
This is not quite a direct answer to your question, but, as a suggestion for a speed improvement, have you tried using a faster feature detector and descriptor? In OpenCV SIFT/SURF are some of the slowest methods they have for feature extraction/matching. You could try testing some of their other methods first, they all work quite well and are faster than SIFT/SURF. Especially if you use their FLANN-based matcher.
I've had to do this in the past with similar sized imagery, and using the binary descriptors OpenCV has increases the speed significantly.
If you only need translation (shift), you can use OpenCV's phaseCorrelate.
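For example, a minimal sketch using cv::phaseCorrelate; the file names are placeholders, and the sign convention of the returned shift should be checked against your OpenCV version.

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    // phaseCorrelate wants single-channel floating-point input of equal size.
    cv::Mat a = cv::imread("frame1.png", cv::IMREAD_GRAYSCALE);
    cv::Mat b = cv::imread("frame2.png", cv::IMREAD_GRAYSCALE);

    cv::Mat a32, b32;
    a.convertTo(a32, CV_32F);
    b.convertTo(b32, CV_32F);

    // A Hanning window reduces edge effects in the correlation.
    cv::Mat window;
    cv::createHanningWindow(window, a32.size(), CV_32F);

    double response = 0.0;
    cv::Point2d shift = cv::phaseCorrelate(a32, b32, window, &response);
    std::cout << "shift: " << shift << "  response: " << response << "\n";

    // The recovered translation can then be undone with warpAffine
    // (verify the sign convention for your version).
    cv::Mat M = (cv::Mat_<double>(2, 3) << 1, 0, -shift.x, 0, 1, -shift.y);
    cv::Mat aligned;
    cv::warpAffine(b, aligned, M, a.size());
    return 0;
}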

remove gradient of a image without a comparison image

Currently I am having much difficulty thinking of a good method to remove the gradient from an image I received.
The image is a picture taken by a microscope camera that has a light glare in the middle. The image has a pattern that goes throughout it. However, I am supposed to remove the light glare created by the camera's light.
Unfortunately, due to the nature of the camera, it is not possible to take a picture on a black background with the light on to find the gradient distribution. Nor do I have a comparison image without the gradient. (Note: the location of the light glare will always be consistent when the picture is taken.)
In easier terms, it's like having a photo with a flash in it and wanting to get rid of the flash. The only problem is I have no way of obtaining the image without the flash to compare to, or even obtaining a black image with just the flash on it.
My current thought is to run edge detection, take samples at specific locations away from the edges (because of the color differences) and use those to estimate the gradient distribution, since those areas are supposed to have nearly identical colors. However, I was wondering if there is an easier and better way to do this.
If needed I will post an example of the image later.
At the moment I would prefer to solve this in C++ using OpenCV, if that makes it easier.
Thanks in advance for any ideas. If there is a link, tutorial, or post that may solve my problem, I would greatly appreciate it.
As you can tell from the white spot, there is a light being shone on the image, and the top is lighter than the bottom because of it. The color inside the oval is actually different when the picture is taken in color. However, the color between the box and the oval should be consistent. My original idea was to somehow sample only those areas and build a profile that I can use to remove the light, but I am unsure how effective that would be or whether there is a better way.
EDIT :
Well, I tried out Roger's suggestion and the results were surprisingly good: a Gaussian blur with a 110-pixel kernel to estimate the illumination, followed by CLAHE on top of that (both done in OpenCV).
However, my colleague told me that the image doesn't look perfectly uniform and pointed out that the area where the light used to be is slightly brighter. He suggested trying a selective Gaussian blur, where areas above a certain threshold pixel value are not blurred while the rest of the image is.
Does anyone have opinions on this, and perhaps a link, tutorial, or example of something like this being done? Most of the things I find tend to be selective blur for programs like Photoshop and GIMP.
EDIT2 :
It is difficult to tell by eye, but I believe I have achieved a reasonably uniform image by using a simple plane-fitting algorithm: fit a plane to the pixel samples (x, y, z), where z is the pixel value, and evaluate it as z = (-A*x - B*y) / C. I think this could perhaps be improved by fitting a sine function instead? I am unsure. But I am relatively happy with the results. Many thanks to Roger for the great ideas.
I believe using a bunch of pictures and averaging them would have been another good method (also suggested by Roger), but unfortunately I was not able to implement this, since I was not supplied with multiple pictures and the machine is under modification, so I was unable to use it.
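For anyone curious, here is a minimal sketch of that plane-fitting idea on a single-channel OpenCV image; the sampling step, the choice of dividing (rather than subtracting) the plane out, and the function name are my own assumptions, not the poster's exact code.

#include <opencv2/opencv.hpp>
#include <vector>

// Fit z ~ a*x + b*y + c to the image intensities and flatten the illumination.
cv::Mat flattenIllumination(const cv::Mat &gray8, int step = 8)
{
    std::vector<cv::Vec3f> rowsA;      // (x, y, 1) for each sample
    std::vector<float> rowsB;          // pixel value for each sample
    for (int y = 0; y < gray8.rows; y += step)
        for (int x = 0; x < gray8.cols; x += step) {
            rowsA.push_back(cv::Vec3f((float)x, (float)y, 1.0f));
            rowsB.push_back((float)gray8.at<uchar>(y, x));
        }

    cv::Mat A((int)rowsA.size(), 3, CV_32F, rowsA.data());
    cv::Mat b((int)rowsB.size(), 1, CV_32F, rowsB.data());
    cv::Mat p;                                   // p = (a, b, c)
    cv::solve(A, b, p, cv::DECOMP_SVD);          // least-squares plane fit

    // Rebuild the plane over the whole image and divide it out
    // (subtracting it is the other common option).
    cv::Mat plane(gray8.size(), CV_32F);
    for (int y = 0; y < plane.rows; ++y)
        for (int x = 0; x < plane.cols; ++x)
            plane.at<float>(y, x) =
                p.at<float>(0) * x + p.at<float>(1) * y + p.at<float>(2);

    cv::Mat grayF, flat, out;
    gray8.convertTo(grayF, CV_32F);
    cv::divide(grayF, plane, flat, cv::mean(plane)[0]);  // rescale to original range
    flat.convertTo(out, CV_8U);
    return out;
}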
I have done some work in this area previously and found that a large Gaussian blur kernel can produce a reasonable approximation to the background illumination. I will try to get something working on your example image but, in the meantime, here is an example of your image after Gaussian blur with radius 50 pixels, which may help you decide if it's worth progressing.
UPDATE
Just playing with this image, you can actually get a reasonable improvement using adaptive histogram equalisation (I used CLAHE) - see comparison below - any use?
I will update this answer with more details as I progress.
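For reference, a minimal OpenCV sketch of the approach described in this answer (a very large Gaussian blur as an illumination estimate, divided out, followed by CLAHE); the sigma, clip limit, and file names are assumptions rather than the exact values used above.

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat gray = cv::imread("microscope.png", cv::IMREAD_GRAYSCALE);  // placeholder name

    // A very large Gaussian blur approximates the slowly varying background illumination.
    cv::Mat illum;
    cv::GaussianBlur(gray, illum, cv::Size(0, 0), 50.0);  // kernel derived from sigma

    // Divide the illumination out; the scale keeps the result near the original brightness.
    cv::Mat grayF, illumF, flat, flat8;
    gray.convertTo(grayF, CV_32F);
    illum.convertTo(illumF, CV_32F);
    cv::divide(grayF, illumF, flat, cv::mean(illumF)[0]);
    flat.convertTo(flat8, CV_8U);

    // CLAHE for local contrast; clip limit and tile grid are tunable.
    cv::Ptr<cv::CLAHE> clahe = cv::createCLAHE(2.0, cv::Size(8, 8));
    cv::Mat result;
    clahe->apply(flat8, result);

    cv::imwrite("flattened.png", result);
    return 0;
}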
I would like to point you to this paper: http://www.cs.berkeley.edu/~ravir/dirtylens.pdf, but, in my opinion, without any sort of calibration/comparison image taken a priori, it is difficult to mine out the ground truth from the flared image.
However, if you are trying to just present the image minus the lens flare, disregarding the actual scientific data behind the flared part, then you switch into the domain of image inpainting. Criminisi's algorithm, as described in this paper: http://research.microsoft.com/pubs/67276/criminisi_tip2004.pdf and explained/simplified in these two links: http://cs.brown.edu/courses/csci1950-g/results/final/eboswort/ http://www.cc.gatech.edu/~sooraj/inpainting/, will do a very good job of restoring texture information to the flared-up regions. (If you'd really like to pursue this approach, do mention that; more comprehensive help can be provided.)
However, given the fact that we're dealing with microscopic data, I doubt if you'd like to lose the scientific data contained in a particular region of an image. In that case, I really think you need to find a workaround to determine the flare model of the flash/light source w.r.t the lens you're using.
I hope someone else can shed more light on this.

Comparing SIFT features stored in a mysql database

I'm currently extending an image library used to categorize images, and I want to find duplicate images, transformed images, and images that contain or are contained in other images.
I have tested the SIFT implementation from OpenCV and it works very well, but it would be rather slow for multiple images. To speed it up, I thought I could extract the features and save them in a database, as a lot of other image-related metadata is already held there.
What would be the fastest way to compare the features of a new image to the features in the database?
Usually the comparison is done by calculating the Euclidean distance using kd-trees, FLANN, or the Pyramid Match Kernel that I found in another thread here on SO but haven't looked into much yet.
Since I don't know of a way to save and search a kd-tree in a database efficiently, I'm currently only seeing three options:
* Let MySQL calculate the Euclidean distance to every feature in the database, although I'm sure that will take an unreasonable amount of time for more than a few images.
* Load the entire dataset into memory at the beginning and build the kd-tree(s) (a FLANN-based sketch of this option follows below). This would probably be fast, but very memory intensive, and all the data would need to be transferred from the database.
* Save the generated trees in the database and load all of them; this would be the fastest method, but it would also generate a lot of traffic, since the kd-trees would have to be rebuilt and sent to the server whenever new images are added.
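Here is a minimal sketch of that in-memory option using OpenCV's FLANN wrapper; the matrix layout (one CV_32F descriptor per row) and the function name are assumptions.

#include <opencv2/opencv.hpp>
#include <opencv2/flann.hpp>
#include <vector>

// allDescriptors: one SIFT descriptor per row (CV_32F), gathered from every database image.
// queryDescriptors: descriptors of the new image.
// Returns, for each query descriptor, the row index of its nearest database descriptor.
std::vector<int> nearestNeighbors(const cv::Mat &allDescriptors,
                                  const cv::Mat &queryDescriptors)
{
    // Build a set of randomized kd-trees over the database descriptors (in memory).
    cv::flann::Index index(allDescriptors, cv::flann::KDTreeIndexParams(4));

    cv::Mat indices, dists;
    index.knnSearch(queryDescriptors, indices, dists, /*knn=*/1,
                    cv::flann::SearchParams(/*checks=*/64));

    std::vector<int> result(queryDescriptors.rows);
    for (int i = 0; i < queryDescriptors.rows; ++i)
        result[i] = indices.at<int>(i, 0);
    return result;
}

// A separate mapping from descriptor row to image id lets you vote for the
// database image with the most matched descriptors.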
I'm using the SIFT implementation of OpenCV, but I'm not dead set on it. If there is a feature extractor more suitable for this task (and roughly equally robust), I'd be glad if someone could suggest one.
So I basically did something very similar to this a few years ago. The algorithm you want to look into was proposed a few years ago by David Nister; the paper is "Scalable Recognition with a Vocabulary Tree". They pretty much have an exact solution to your problem that can scale to millions of images.
Here is a link to the abstract; you can find a download link by googling the title.
http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=1641018
The basic idea is to build a tree with a hierarchical k-means algorithm to model the features, and then leverage the sparse distribution of features in that tree to quickly find your nearest neighbors... or something like that; it's been a few years since I worked on it. You can find a PowerPoint presentation on the author's webpage here: http://www.vis.uky.edu/~dnister/Publications/publications.html
A few other notes:
I wouldn't bother with the pyramid match kernel, it's really more for improving object recognition than duplicate/transformed image detection.
I would not store any of this feature stuff in an SQL database. Depending on your application it is sometimes more effective to compute your features on the fly since their size can exceed the original image size when computed densely. Histograms of features or pointers to nodes in a vocabulary tree are much more efficient.
SQL databases are not designed for doing massive floating point vector calculations. You can store things in your database, but don't use it as a tool for computation. I tried this once with SQLite and it ended very badly.
If you decide to implement this, read the paper in detail and keep a copy handy while implementing it, as there are many minor details that are very important to making the algorithm work efficiently.
The key, I think, is that this isn't a SIFT question. It is a question about approximate nearest neighbor search. Like image matching, this too is an open research problem. You can try googling "approximate nearest neighbor search" and see what types of methods are available. If you need exact results, try "exact nearest neighbor search".
The performance of all these geometric data structures (such as kd-trees) degrades as the number of dimensions increases, so the key, I think, is that you may need to represent your SIFT descriptors in a lower number of dimensions (say 10-30 instead of the 128 of a standard SIFT descriptor) to have really efficient nearest neighbor searches (use PCA, for example).
Once you have this I think it will become secondary if the data is stored in MySQL or not.
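A minimal sketch of that PCA reduction with OpenCV; the target of 20 dimensions is an arbitrary assumption.

#include <opencv2/opencv.hpp>

// descriptors: one 128-dimensional SIFT descriptor per row, CV_32F.
// Returns the same descriptors projected down to 'dims' dimensions.
cv::Mat reduceDescriptors(const cv::Mat &descriptors, int dims = 20)
{
    // Learn the projection from the data itself (a representative sample is fine).
    cv::PCA pca(descriptors, cv::Mat(), cv::PCA::DATA_AS_ROW, dims);

    // Project every descriptor into the lower-dimensional space.
    cv::Mat reduced = pca.project(descriptors);
    return reduced;   // keep pca.eigenvectors and pca.mean to project future queries
}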
I think speed is not the main issue here. The main issue is how to use the features to get the results you want.
If you want to categorize the images (e.g. person, car, house, cat), then the Pyramid Match Kernel is definitely worth looking at. It is actually a histogram of the local feature descriptors, so there is no need to compare individual features to each other. There is also a class of algorithms known as "bag of words", which try to cluster the local features to form a "visual vocabulary". Again, in this case, once you have your "visual words" you do not need to compute distances between all pairs of SIFT descriptors, but instead determine which cluster each feature belongs to. On the other hand, if you want to get point correspondences between pairs of images, such as to decide whether one image is contained in another, or to compute the transformation between the images, then you do need to find the exact nearest neighbors.
Also, there are local features other than SIFT. For example, SURF features are similar to SIFT, but they are faster to extract and have been shown to perform better for certain tasks.
If all you want to do is to find duplicates, you can speed up your search considerably by using a global image descriptor, such as a color histogram, to prune out images that are obviously different. Comparing two color histograms is orders of magnitude faster than comparing two sets each containing hundreds of SIFT features. You can create a short list of candidates using color histograms, and then refine your search using SIFT.
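A minimal OpenCV sketch of such a color-histogram pre-filter; the bin counts and the 0.5 correlation threshold are arbitrary assumptions.

#include <opencv2/opencv.hpp>

// Normalized HSV color histogram, usable as a cheap global descriptor.
cv::Mat colorHistogram(const cv::Mat &bgr)
{
    cv::Mat hsv;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);

    int histSize[] = {30, 32};                       // hue and saturation bins
    float hRange[] = {0, 180}, sRange[] = {0, 256};
    const float *ranges[] = {hRange, sRange};
    int channels[] = {0, 1};

    cv::Mat hist;
    cv::calcHist(&hsv, 1, channels, cv::Mat(), hist, 2, histSize, ranges);
    cv::normalize(hist, hist, 1.0, 0.0, cv::NORM_L1);
    return hist;
}

// Only run the expensive SIFT comparison when the histograms are reasonably close.
bool roughlySimilar(const cv::Mat &histA, const cv::Mat &histB)
{
    double corr = cv::compareHist(histA, histB, cv::HISTCMP_CORREL);
    return corr > 0.5;                               // pruning threshold is arbitrary
}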
I have some tools in Python you can play with here. Basically it's a package that uses SIFT-transformed vectors and then computes a nearest-lattice hashing of each 128-d SIFT vector. The hashing is the important part, as it is locality sensitive, which simply means that vectors near each other in R^n space result in equivalent hash collision probabilities. The work I provide is an extension of Andoni's, providing a query-adaptive heuristic for pruning the LSH exact search lists, as well as an optimized CUDA implementation of the hashing function. I also have a small app that does image database search with nice visual feedback, all under BSD (the exception is SIFT, which has some additional restrictions).

How do I do high quality scaling of a image?

I'm writing some code to scale a 32-bit RGBA image in C/C++. I have written a few attempts that have been somewhat successful, but they're slow and, most importantly, the quality of the scaled image is not acceptable.
I compared the same image scaled by OpenGL (i.e. my video card) and my routine and it's miles apart in quality. I've Google Code Searched, scoured source trees of anything I thought would shed some light (SDL, Allegro, wxWidgets, CxImage, GD, ImageMagick, etc.) but usually their code is either convoluted and scattered all over the place or riddled with assembler and little or no comments. I've also read multiple articles on Wikipedia and elsewhere, and I'm just not finding a clear explanation of what I need. I understand the basic concepts of interpolation and sampling, but I'm struggling to get the algorithm right. I do NOT want to rely on an external library for one routine and have to convert to their image format and back. Besides, I'd like to know how to do it myself anyway. :)
I have seen a similar question asked on Stack Overflow before, but it wasn't really answered this way. I'm hoping there's someone out there who can help nudge me in the right direction. Maybe point me to some articles or pseudocode... anything to help me learn and do this.
Here's what I'm looking for:
No assembler (I'm writing very portable code for multiple processor types).
No dependencies on external libraries.
I am primarily concerned with scaling DOWN, but will also need to write a scale up routine later.
Quality of the result and clarity of the algorithm are most important (I can optimize it later).
My routine essentially takes the following form:
void DrawScaled(uint32 *src, uint32 *dst,
                int src_x, int src_y, int src_w, int src_h,
                int dst_x, int dst_y, int dst_w, int dst_h);
Thanks!
UPDATE: To clarify, I need something more advanced than a box resample for downscaling, which blurs the image too much. I suspect what I want is some kind of bicubic (or other) filter that is somewhat the reverse of a bicubic upscaling algorithm (i.e. each destination pixel is computed from all contributing source pixels, combined with a weighting algorithm that keeps things sharp).
Example
Here's an example of what I'm getting from the wxWidgets BoxResample algorithm vs. what I want on a 256x256 bitmap scaled to 55x55.
www.free_image_hosting.net/uploads/1a25434e0b.png
And finally:
www.free_image_hosting.net/uploads/eec3065e2f.png
the original 256x256 image
I've found the wxWidgets implementation fairly straightforward to modify as required. It is all C++ so no problems with portability there. The only difference is that their implementation works with unsigned char arrays (which I find to be the easiest way to deal with images anyhow) with a byte order of RGB and the alpha component in a separate array.
If you refer to the "src/common/image.cpp" file in the wxWidgets source tree there is a down-sampler function which uses a box sampling method "wxImage::ResampleBox" and an up-scaler function called "wxImage::ResampleBicubic".
A fairly simple and decent algorithm to resample images is Bicubic interpolation, wikipedia alone has all the info you need to get this implemented.
Is it possible that OpenGL is doing the scaling in the vector domain? If so, there is no way that any pixel-based scaling is going to be near it in quality. This is the big advantage of vector based images.
The bicubic algorithm can be tuned for sharpness vs. artifacts - I'm trying to find a link, I'll edit it in when I do.
Edit: It was the Mitchell-Netravali work that I was thinking of, which is referenced at the bottom of this link:
http://www.cg.tuwien.ac.at/~theussl/DA/node11.html
You might also look into Lanczos resampling as an alternative to bicubic.
Now that I see your original image, I think that OpenGL is using a nearest neighbor algorithm. Not only is it the simplest possible way to resize, but it's also the quickest. The only downside is that it looks very rough if there's any detail in your original image.
The idea is to take evenly spaced samples from your original image; in your case, 55 out of 256, or one out of every 4.6545. Just round the number to get the pixel to choose.
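A minimal sketch of that nearest-neighbor sampling for the packed 32-bit RGBA buffers described in the question; the function and parameter names are mine, not the asker's DrawScaled signature.

#include <cstdint>

// Nearest-neighbor scale of a packed 32-bit RGBA buffer.
// src is src_w x src_h, dst is dst_w x dst_h, both row-major with no padding.
void ScaleNearest(const uint32_t *src, int src_w, int src_h,
                  uint32_t *dst, int dst_w, int dst_h)
{
    for (int y = 0; y < dst_h; ++y) {
        // Map the destination row back to the nearest source row.
        int sy = (int)((y + 0.5) * src_h / dst_h);
        if (sy >= src_h) sy = src_h - 1;
        for (int x = 0; x < dst_w; ++x) {
            int sx = (int)((x + 0.5) * src_w / dst_w);
            if (sx >= src_w) sx = src_w - 1;
            dst[y * dst_w + x] = src[sy * src_w + sx];
        }
    }
}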
Try using the Adobe Generic Image Library ( http://opensource.adobe.com/wiki/display/gil/Downloads ) if you want something ready and not only an algorithm.
Extract from: http://www.catenary.com/howto/enlarge.html#c
Enlarge or Reduce - the C Source Code
Requires Victor Image Processing Library for 32-bit Windows v 5.3 or higher.
int enlarge_or_reduce(imgdes *image1)
{
    imgdes timage;
    int dx, dy, rcode, pct = 83; // 83% percent of original size

    // Allocate space for the new image
    dx = (int)(((long)(image1->endx - image1->stx + 1)) * pct / 100);
    dy = (int)(((long)(image1->endy - image1->sty + 1)) * pct / 100);
    if((rcode = allocimage(&timage, dx, dy,
                           image1->bmh->biBitCount)) == NO_ERROR) {
        // Resize Image into timage
        if((rcode = resizeex(image1, &timage, 1)) == NO_ERROR) {
            // Success, free source image
            freeimage(image1);
            // Assign timage to image1
            copyimgdes(&timage, image1);
        }
        else // Error in resizing image, release timage memory
            freeimage(&timage);
    }
    return(rcode);
}
This example resizes an image area and replaces the original image with the new image.
Intel has the IPP libraries, which provide high-speed interpolation algorithms optimized for Intel family processors. They are very good, but not free. Take a look at the following link:
Intel IPP
A generic article from our beloved host: Better Image Resizing, discussing the relative qualities of various algorithms (and it links to another CodeProject article).
It sounds like what you're really having difficulty understanding is the discrete -> continuous -> discrete flow involved in properly resampling an image. A good tech report that might help give you the insight into this that you need is Alvy Ray Smith's A Pixel Is Not A Little Square.
Take a look at ImageMagick, which does all kinds of rescaling filters.
As a follow-up, Jeremy Rudd posted this article above. It implements filtered two-pass resizing. The source is C#, but it looks clear enough that I can port it and give it a try. I found very similar C code yesterday that was much harder to understand (very bad variable names). I got it to sort of work, but it was very slow and did not produce good results, which led me to believe there was an error in my adaptation. I may have better luck writing it from scratch with this as a reference, which I'll try.
But considering how the two-pass algorithm works, I wonder if there isn't a faster way of doing it, perhaps even in one pass?