calculate blurriness and sharpness of an image [duplicate] - c++

Possible Duplicate:
Is there a way to detect if an image is blurry?
How can I calculate the blurriness and sharpness of a given image using OpenCV? Are there any functions in OpenCV to do it? If not, how can I implement it myself? Any ideas would be great.
The input will be an image and the output should be the blurriness and sharpness of the image.

I recommend you do a frequency analysis of the image. Energy in the high band tells you the image is fairly sharp, while energy concentrated in the low band usually means the image is blurry. For computing the spectrum, you can use the FFTW library.
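If you would rather stay within OpenCV than pull in FFTW, one common proxy for that high-frequency energy is the variance of the Laplacian. A minimal sketch of that idea (you would still need to pick a blurry/sharp cutoff empirically for your own images):
#include <opencv2/opencv.hpp>

// Variance of the Laplacian as a rough sharpness score: a blurry image has
// few strong intensity transitions, so the variance comes out low.
double laplacianSharpness(const cv::Mat& gray)   // expects a single-channel image
{
    cv::Mat lap;
    cv::Laplacian(gray, lap, CV_64F);
    cv::Scalar mean, stddev;
    cv::meanStdDev(lap, mean, stddev);
    return stddev[0] * stddev[0];   // higher value means a sharper image
}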

I don't know about opencv.
If I were trying to get an approximate measurement of where an image is on the sharp-to-blurry spectrum, I'd start from the observation that the sharpness of parts of an image is evident from the contrast between adjacent pixels - something like max(c1 * abs(r1 - r2), c2 * abs(g1 - g2), c3 * abs(b1 - b2)), where c1-c3 weigh the perceptual importance of the red, green and blue channels, and the two pixels are (r1,g1,b1) and (r2,g2,b2).
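A minimal sketch of that per-pixel metric (the channel weights below are arbitrary placeholders, not tuned values):
#include <algorithm>
#include <cstdlib>

// Contrast between two neighbouring pixels, taking the strongest weighted
// per-channel change; c1-c3 are placeholder perceptual weights.
double pixelContrast(int r1, int g1, int b1, int r2, int g2, int b2)
{
    const double c1 = 0.30, c2 = 0.59, c3 = 0.11;
    return std::max({ c1 * std::abs(r1 - r2),
                      c2 * std::abs(g1 - g2),
                      c3 * std::abs(b1 - b2) });
}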
Many tweaks are possible, such as raising each colour's contribution to a power to emphasise changes at the dark (power < 1) or bright (power > 1) end of the brightness scale. Note that the max() approach considers sharpness for each colour channel separately: a change from, say, (255,255,255) to (0,255,255) is very dramatic despite only one channel changing.
You may find it better to convert from RGB to another colour representation, such as Hue/Saturation/Value (there are lots of sites online explaining the HSV space, and formulas for conversions).
Photographically, we're usually interested in knowing that the in-focus part of the image is sharp (foreground/background blur/bokeh due to shallow depth of field is a normal and frequently desirable quality) - the clearest indication of that is high contrast in some part of the image, suggesting you want the maximum value of adjacent-pixel contrasts. That said, some focused pictures can still have very low local contrast (e.g. a picture of a solid coloured surface). Further, damaged pixel elements on the sensor, dirt on the lens/sensor, and high-ISO / long-exposure noise may all manifest as spots of extremely high contrast. So the validity of your result is always going to be questionable, but it might be ball-park right a useful percentage of the time.

Related

How to detect anomalies in opencv (c++) if threshold is not good enough?

I have grayscale images like this:
I want to detect anomalies in these kinds of images. In the first image (upper-left) I want to detect three dots; in the second (upper-right) there is a small dot and a "foggy area" (at the bottom-right); and in the last one there is also a slightly smaller dot somewhere in the middle of the image.
Normal static thresholding doesn't work well for me, and Otsu's method isn't always the best choice either. Is there any better, more robust or smarter way to detect anomalies like this? In Matlab I was using something like Frangi filtering (eigenvalue filtering). Can anybody suggest a good processing algorithm to solve anomaly detection on surfaces like this?
EDIT: Added another image with marked anomalies:
Using @Tapio's top-hat filtering and contrast adjustment.
Since @Tapio provided us with a great idea for increasing the contrast of anomalies on surfaces like the ones I asked about at the beginning, here are some of my results. I have an image like this:
Here is the code I use for top-hat filtering and contrast adjustment:
// Top-hat filtering with a small elliptical kernel (3 iterations), then a linear contrast boost
Mat kernel = getStructuringElement(MORPH_ELLIPSE, Size(3, 3), Point(0, 0));
Mat imgFiltered, imgAdjusted;
morphologyEx(inputImage, imgFiltered, MORPH_TOPHAT, kernel, Point(0, 0), 3);
imgAdjusted = imgFiltered * 7.2;   // contrast adjustment
The result is here:
There is still the question of how to segment the anomalies from the last image. So if anybody has an idea how to solve it, go for it! :)
You should take a look at bottom-hat filtering. It's defined as the difference between the morphological closing of the image and the original image, and it makes small details such as the ones you are looking for stand out.
I adjusted the contrast to make both images visible. The anomalies are much more pronounced when looking at the intensities and are much easier to segment out.
Let's take a look at the first image:
The histogram values don't represent reality due to scaling by the visualization tools I'm using, but the relative distances do. The thresholding range is now much larger; the target changed from a window to a barn door.
Global thresholding (intensity > 15):
Otsu's method worked poorly here. It segmented all the small details to the foreground.
After removing noise by morphological opening :
I also assumed that the black spots are the anomalies you are interested in. By setting the threshold lower you include more of the surface details. For example, the third image does not have any particularly interesting features to my eye, but that's for you to judge. Like m3h0w said, it's a good heuristic: if something is hard for your eye to judge, it's probably impossible for the computer too.
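Putting the steps above together, a rough sketch might look like the following (the kernel sizes and the threshold of 15 are starting points to tune, not definitive values):
#include <opencv2/opencv.hpp>
using namespace cv;

// Bottom-hat (black-hat) filtering, a fixed global threshold, then a
// morphological opening to remove small noise specks.
Mat segmentAnomalies(const Mat& gray)
{
    Mat kernel = getStructuringElement(MORPH_ELLIPSE, Size(15, 15));
    Mat blackhat;
    morphologyEx(gray, blackhat, MORPH_BLACKHAT, kernel);

    Mat mask;
    threshold(blackhat, mask, 15, 255, THRESH_BINARY);

    Mat smallKernel = getStructuringElement(MORPH_ELLIPSE, Size(3, 3));
    morphologyEx(mask, mask, MORPH_OPEN, smallKernel);
    return mask;
}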
@skoda23, I would try unsharp masking with fine-tuned parameters for the blurring part, so that the high frequencies get emphasized, and test it thoroughly so that no important information is lost in the process. Remember that it is usually not a good idea to expect the computer to do super-human work: if a human has doubts about where the anomalies are, the computer will too. Thus it is important to first preprocess the image so that the anomalies are obvious to the human eye. An alternative to unsharp masking (or an addition to it) might be CLAHE. But again: remember to fine-tune it very carefully - it might bring out the texture of the board too much and interfere with your task.
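A minimal unsharp-masking sketch (the sigma and the blend weights are placeholder values that need the careful fine-tuning mentioned above):
#include <opencv2/opencv.hpp>
using namespace cv;

// Unsharp masking: subtract a blurred copy to emphasise high frequencies.
Mat unsharpMask(const Mat& src)
{
    Mat blurred, sharpened;
    GaussianBlur(src, blurred, Size(0, 0), 3.0);         // sigma = 3.0, to tune
    addWeighted(src, 1.5, blurred, -0.5, 0, sharpened);  // src*1.5 - blurred*0.5
    return sharpened;
}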
An alternative to basic thresholding or Otsu's method would be adaptiveThreshold(), which might be a good idea since there is a difference in intensity between the different regions you want to find.
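For example (the block size and constant below are guesses to experiment with, assuming dark anomalies on a brighter surface):
#include <opencv2/opencv.hpp>
using namespace cv;

// Adaptive threshold against the local neighbourhood mean; THRESH_BINARY_INV
// keeps the pixels that are darker than their surroundings.
Mat adaptiveMask(const Mat& gray)   // 8-bit single-channel input
{
    Mat mask;
    adaptiveThreshold(gray, mask, 255, ADAPTIVE_THRESH_GAUSSIAN_C,
                      THRESH_BINARY_INV, 51, 10);   // blockSize 51, C = 10: tune both
    return mask;
}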
My second guess would be to first use fixed-value thresholding for the darkest dots and then try Sobel or Canny. There should exist an optimal neighbourhood size at which the texture of the board does not stand out as much but the anomalies do. You can also try blurring before edge detection (if you've already detected the small defects with the thresholding).
Again: it is vital to experiment a lot at every step of this approach, because fine-tuning the parameters will be crucial for eventual success. I'd recommend making friends with the trackbar to speed up the process - a minimal setup is sketched below. Good luck!
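Something like this (the window, file and variable names are just examples):
#include <opencv2/opencv.hpp>
using namespace cv;

static Mat gray;             // image under inspection
static int threshValue = 15;

static void onTrackbar(int, void*)
{
    Mat mask;
    threshold(gray, mask, threshValue, 255, THRESH_BINARY);
    imshow("mask", mask);
}

int main()
{
    gray = imread("surface.png", IMREAD_GRAYSCALE);   // hypothetical file name
    namedWindow("mask");
    createTrackbar("thresh", "mask", &threshValue, 255, onTrackbar);
    onTrackbar(0, nullptr);
    waitKey(0);
    return 0;
}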
You're basically dealing with the unfortunate fact that reality is analog. A threshold is a method to turn an analog range into a discrete (binary) range. Any threshold will do that. So what exactly do you mean with a "good enough" threshold?
Let's park that thought for a second. I see lots of anomalies - sort of thin grey worms. Apparently, you ignore them. I'm applying a different threshold than you are. This may be reasonable, but you're applying domain knowledge that I don't have.
I suspect these grey worms will be throwing off your fixed value thresholding. That's not to say the idea of a fixed threshold is bad. You can use it to find some artifacts and exclude those. Somewhat darkish patches will be missed, but can be brought out by replacing each pixel with the median value of its neighborhood, using a neighborhood size that's bigger than the width of those worms. In the dark patch, this does little, but it wipes out small local variations.
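In OpenCV that can be as simple as the following sketch (the kernel size of 9 is a guess; it just needs to be odd and wider than the worms):
#include <opencv2/opencv.hpp>
using namespace cv;

// Replace each pixel with the median of its neighbourhood to wipe out thin
// "worm" artifacts while keeping larger dark patches largely intact.
Mat suppressWorms(const Mat& gray)
{
    Mat flattened;
    medianBlur(gray, flattened, 9);   // kernel wider than the worms
    return flattened;
}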
I don't pretend these two types of abnormalities are the only two, but that is really an application-domain question and not about techniques. E.g. you don't appear to have lighting artifacts (reflections), at least not in these 3 samples.

Infrared image segmentation using OpenCV

Let's say I have a series of infrared pictures and the task is to isolate the human body from other objects in the picture. The problem is noise from other relatively hot objects like lamps and their 'hot' shades.
Simple thresholding methods like binary and/or Otsu didn't give good results on difficult (noisy) pictures, so I've decided to do it manually.
Here are some samples
The results are not terrible, but I think they can be improved. Here I simply select pixels by hue value in HSV. More or less, hot pixels are located in this range: hue < 50, hue > 300. My main concern here is the pink pixels, which are sometimes noise from lamps but sometimes parts of the human body, so I can't simply discard them without causing significant damage to the results: e.g. in the left picture this would 'destroy' half of the left hand, and so on.
As a last resort I could use some strong filtering and erosion, but I still believe there's a way to somehow tell OpenCV: hey, I don't need these pink areas unless they are part of a large hot cluster.
Any ideas, keywords, techniques, good articles? Thanks in advance.
FIR data is presumably monotonically related (if not linear) to temperature, so it should yield a grayscale image.
Your examples are colorized with a color map - the color only conveys a single channel of actual information. It would be best if you could work directly on the grayscale image (maybe remap the images to grayscale).
Then, see if you can linearize the images to an actual temperature scale, such that the pixel value represents the temperature. Once you do this, you should be able to clamp your image to the temperature range in which you expect a person to appear. Check the datasheet of your camera/imager for the conversion formula.
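Once you have such a temperature-mapped image, clamping to the expected body range could look like the sketch below (the 30-40 degree bounds and the names are assumptions; take the real mapping from the datasheet):
#include <opencv2/opencv.hpp>
using namespace cv;

// temperatureImage: CV_32F image where each pixel value is a temperature in degrees C
Mat clampToBodyRange(const Mat& temperatureImage)
{
    Mat bodyMask;
    inRange(temperatureImage, Scalar(30.0), Scalar(40.0), bodyMask);  // rough skin-temperature range
    return bodyMask;   // 255 where the pixel falls inside the expected range
}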

Calculating dispersion in Human Tracking

I am currently trying to track human heads from CCTV footage. I am currently using colour histogram and LBP histogram comparison to check the affinity between bounding boxes. However, sometimes these are not enough.
I was reading through a paper in the following link: paper, where a dispersion metric is described. However, I still cannot clearly get it. For example, I cannot understand what p_ij refers to in the equation. Can someone kindly and clearly explain how I can find the dispersion between bounding boxes in separate frames, please?
Your assistance is much appreciated :)
This paper tackles the tracking problem using a background model, as most CCTV tracking methods do. The BG model produces a foreground mask, and the aforementioned p_ij relates to this mask after some morphology. Specifically, they try to separate foreground blobs into components, based on thresholds on allowed 'gaps' in FG mask holes. The end result of this procedure is a set of binary masks, one for each hypothesized object. These masks are then used for tracking using spatial and temporal consistency. In my opinion, this is an old fashioned way of processing video sequences, only relevant if you're limited in processing power and the scenes are not crowded.
To answer your question, if O is the mask related to one of the hypothesized objects, then p_ij is the binary pixel at location (i,j) within the mask. Thus, c_x and c_y are the centre of mass of the binary shape, and the dispersion is simply the average distance from the centre of mass over the shape (it is larger for larger objects). This enforces scale consistency in tracking, but in a very weak manner. You can do much better if you have a calibrated camera.
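In code, that description maps to something like the following sketch (it mirrors the explanation above, not necessarily the paper's exact formulation):
#include <cmath>
#include <opencv2/opencv.hpp>
using namespace cv;

// Dispersion of a binary object mask: mean distance of its pixels from the
// centre of mass (c_x, c_y); p_ij is simply whether mask(i, j) is on or off.
double dispersion(const Mat& mask)   // CV_8U, non-zero pixels belong to the object
{
    Moments m = moments(mask, true);
    if (m.m00 == 0) return 0.0;
    double cx = m.m10 / m.m00, cy = m.m01 / m.m00;

    double sum = 0.0;
    int count = 0;
    for (int y = 0; y < mask.rows; ++y)
        for (int x = 0; x < mask.cols; ++x)
            if (mask.at<uchar>(y, x)) {
                sum += std::hypot(x - cx, y - cy);
                ++count;
            }
    return sum / count;
}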

Algorithm to zoom images clearly

I know images can be zoomed with the help of image pyramids, and I know the OpenCV pyrUp() method can zoom images. But beyond a certain point, the image is no longer clear. For example, if we zoom a small image to 15 times its original size, it is definitely not going to be sharp.
Is there any method in OpenCV to zoom an image but keep it as clear as the original? Or, failing that, any algorithm to do this?
One thing to remember: you can't pull extra resolution out of nowhere. When you scale up an image, you can have either a blurry, smooth image, or a sharp, blocky image, or something in between. Better algorithms, which appear to perform better on specific types of subjects, make certain assumptions about the contents of the image; if those assumptions hold, they can yield higher apparent quality, but they will fail badly if the assumptions prove false. You are trading accuracy for sharpness.
There are several good algorithms out there for zooming specific types of subjects, including pixel art,
faces, or text.
More general algorithms for sharpening images include unsharp masking, edge enhancement, and others; however, all of these assume specific things about the contents of the image, for instance that the image contains text, or that a noisy area would still be noisy (or not) at a higher resolution.
A low-resolution polka-dot pattern, or a sandy beach's gritty pattern, will not go over very well, and the computer may turn your seascape into something more reminiscent of a mosh pit. Every zoom algorithm or sharpening filter has a number of costs associated with it.
In order to correctly select a zoom or sharpening algorithm, more context, including sample images, is absolutely necessary.
OpenCV has the Super Resolution module. I haven't had a chance to try it yet so not too sure how well it works.
You should check out Super-Resolution From a Single Image:
Methods for super-resolution (SR) can be broadly classified into two families of methods: (i) The classical multi-image super-resolution (combining images obtained at subpixel misalignments), and (ii) Example-Based super-resolution (learning correspondence between low and high resolution image patches from a database). In this paper we propose a unified framework for combining these two families of methods.
You most likely want to experiment with different interpolation schemes for your images. OpenCV provides the resize function, which can be used with various interpolation schemes (docs). You will likely be trading off blurriness (e.g., in bicubic or bilinear interpolation schemes) against jagged aliasing effects (for example, in nearest-neighbour interpolation). I'd recommend experimenting with the different schemes it provides and seeing which ones give you the best results.
The supported interpolation schemes are listed as:
INTER_NEAREST nearest-neighbor interpolation
INTER_LINEAR bilinear interpolation (used by default)
INTER_AREA resampling using pixel area relation. It may be the preferred method for image decimation, as it gives moire-free results. But when the image is zoomed, it is similar to the INTER_NEAREST method
INTER_CUBIC bicubic interpolation over 4x4 pixel neighborhood
INTER_LANCZOS4 Lanczos interpolation over 8x8 pixel neighborhood
Wikimedia commons provides this nice comparison image for nearest-neighbour, bilinear, and bicubic interpolation:
You can see that you are unlikely to get the same sharpness as the original image when zoomed, but you can trade off "smoothness" for aliasing effects (i.e., jagged edges).
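A small sketch for comparing a few of these flags on your own images (the 4x factor and the window names are arbitrary):
#include <opencv2/opencv.hpp>
using namespace cv;

// Upscale the same image with different interpolation flags and show the
// results side by side, so the blur-vs-aliasing trade-off is visible.
void compareInterpolations(const Mat& src)
{
    const int flags[] = { INTER_NEAREST, INTER_LINEAR, INTER_CUBIC, INTER_LANCZOS4 };
    const char* names[] = { "nearest", "linear", "cubic", "lanczos4" };
    for (int i = 0; i < 4; ++i) {
        Mat dst;
        resize(src, dst, Size(), 4.0, 4.0, flags[i]);   // 4x zoom
        imshow(names[i], dst);
    }
    waitKey(0);
}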
Take a look at quick image scaling algorithms.
First, I will discuss a simple algorithm, dubbed "smooth Bresenham" that can best be described as nearest neighbour interpolation on a zoomed grid, using a Bresenham algorithm. The algorithm is quick, it produces a quality equivalent to that of linear interpolation and it can zoom up and down, but it is only suitable for a zoom factor that is within a fairly small range. To offset this, I next develop a directional interpolation algorithm that can only magnify (scale up) and only with a factor of 2×, but that does so in a way that keeps edges sharp. This directional interpolation method is quite a bit slower than the smooth Bresenham algorithm, and it is therefore practical to cache those 2× images, once computed. Caching images with relative sizes that are powers of 2, combined with simple interpolation, is actually a third image zooming technique: MIP-mapping.
A related question is Image scaling and rotating in C/C++. Also, you can use CImg.
What you're asking for goes against physics: there are simply not enough bits in the original image to represent 15x15 times more detail. No algorithm can invent the "right" information that is not there; it can only find a suitable interpolation. It will never increase the detail.
Despite what happens in a lot of police fiction, getting a picture of a fingerprint on a car door handle starting from a panoramic view of a city is definitely a fake.
You can easily zoom in or zoom out an image in OpenCV using the following two functions.
For Zoom In
pyrUp(tmp, dst, Size(tmp.cols * 2, tmp.rows * 2));
For Zoom Out
pyrDown(tmp, dst, Size(tmp.cols / 2, tmp.rows / 2));
You can get details about the method in the following link:
Image Zoom Out and Zoom In using OpenCV

Detect if images are different in real-time

I am working on a microscope that streams live images via a built-in video camera to a PC, where further image processing can be performed on the streamed image. Any processing done on the streamed image must be done in "real-time" (minimal frames dropped).
We take the average of a series of static images to counter random noise from the camera to improve the output of some of our image processing routines.
My question is: how do I know when the image is no longer static - either the sample under inspection has moved or rotated, or the camera has zoomed in or out - so I can reset the image series used for averaging?
I looked through some of the threads, and some ideas that seemed interesting:
Note: using Windows, C++ and Intel IPP. With IPP the image is a byte array (Ipp8u).
1. Hash the images, and compare the hashes (normal hash or perceptual hash?)
2. Use normalized cross correlation (IPP has many variations - which to use?)
Which do you guys think is suitable for my situation (speed)?
If your camera doesn't shake, you can, as inVader said, subtract images. Then the sum of the absolute values of all pixels of the difference image is sometimes enough to tell whether the images are the same or different. However, if your noise, lighting level, etc. vary, this will not give you a good enough S/N ratio.
And in noisy conditions normal hashes are even more useless.
The best approach would be to identify that some feature of your object has changed, like its boundary (if it's regular) or its centre of mass (if it's irregular). If you have a boundary position, you'll only need to analyze a single line of pixels, perpendicular to that boundary, to tell that the boundary has moved.
The centre-of-mass position may be subject to frequent false-negative responses, but adding the total mass and/or moment of inertia may help.
If the camera shakes, you may have to align images before comparing (depending on comparison method and required accuracy, a single pixel misalignment might be huge), and that's where cross-correlation helps.
Furthermore, you don't have to analyze every image. You can skip one, and if the next one differs, discard both of them. That gives you twice as much time to analyze an image.
And if you are averaging images, you might just define an optimal number of images you need and compare only the first and the last image in the sequence.
So, the simplest thing to try would be to take subsequent images, subtract them from each other and have a look at the difference. Then define some rules, including local and global thresholds for the difference, under which two images are considered equal. Simple subtraction of the bitmap/array data, looking for maxima and calculating the average difference across the whole thing, should be no problem to do in real time.
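Since the frames arrive as plain Ipp8u byte arrays, a straightforward reference implementation of that difference sum could look like this (IPP ships optimized primitives for this kind of operation, so treat this only as a readable sketch of the idea):
#include <cstddef>
#include <cstdint>
#include <cstdlib>

// Sum of absolute differences between two equally sized 8-bit images.
uint64_t sumAbsDiff(const uint8_t* a, const uint8_t* b, size_t pixelCount)
{
    uint64_t sad = 0;
    for (size_t i = 0; i < pixelCount; ++i)
        sad += static_cast<uint64_t>(std::abs(int(a[i]) - int(b[i])));
    return sad;
}

// Decision rule: compare sad / pixelCount against a threshold calibrated on
// known-static footage, so the camera's noise floor is accounted for.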
If there are varying light conditions or something moving in a predictable way (like a door opening and closing), then something more powerful, albeit slower, like Gaussian mixture models for background modelling, might be worth looking into. It is quite compute-intensive, but can be parallelized pretty easily.
Motion detection algorithms are what is typically used for this.
http://www.codeproject.com/Articles/10248/Motion-Detection-Algorithms
http://www.codeproject.com/Articles/22243/Real-Time-Object-Tracker-in-C
First of all I would take a series of images at a slow fps rate and downsample those images to make them smaller, not too much but enough to speed up the process.
Now you have several options:
You could compute the sum of absolute differences of the two images by subtracting them, and use a threshold to decide whether the image has changed.
If you want to speed it up even further, I would suggest doing a progressive SAD using a small kernel, moving from the top of the image to the bottom. You can track the cumulative amount of difference during the process and stop early once you are satisfied.
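A rough sketch of that progressive, early-exit idea (the block height and the difference threshold are placeholders to tune):
#include <algorithm>
#include <opencv2/opencv.hpp>

// Scan row blocks top to bottom, accumulate the SAD, and stop as soon as the
// accumulated difference already proves the two frames differ.
bool imagesDiffer(const cv::Mat& a, const cv::Mat& b,
                  double maxTotalDiff = 1e6, int blockRows = 32)
{
    double total = 0.0;
    for (int y = 0; y < a.rows; y += blockRows) {
        int h = std::min(blockRows, a.rows - y);
        total += cv::norm(a.rowRange(y, y + h), b.rowRange(y, y + h), cv::NORM_L1);
        if (total > maxTotalDiff)
            return true;   // early exit: clearly different already
    }
    return false;
}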