I am processing some images using the ImageMagick library. As part of the processing I want to minimize the number of colors if this doesn't affect image quality (too much).
For this I have tried to use the MagickQuantizeImage function. Can someone explain how I should choose the parameters?
treedepth:
Normally, this integer value is zero or one. A zero or one tells Quantize to choose an optimal tree depth of Log4(number_colors). A tree of this depth generally allows the best representation of the reference image with the least amount of memory and the fastest computational speed. In some cases, such as an image with low color dispersion (a small number of colors), a value other than Log4(number_colors) is required. To expand the color tree completely, use a value of 8.
dither:
A value other than zero distributes the difference between an original image and the corresponding color-reduced image to neighboring pixels along a Hilbert curve.
measure_error:
A value other than zero measures the difference between the original and quantized images. This difference is the total quantization error. The error is computed by summing over all pixels in an image the distance squared in RGB space between each reference pixel value and its quantized value.
PS: I have made some tests, but sometimes the image quality is severely affected, and I don't want to find a result by trial and error.
This is a really good description of the algorithm:
http://www.imagemagick.org/www/quantize.html
They are referencing the command-line version, but the concepts are the same.
The measure_error parameter is meant to give you an indication of how good an answer you got. Set it to non-zero, then look at the Image object's mean_error_per_pixel field after you quantize to see how good a quantization you got.
If it's not good enough, increase the number of colors.
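For concreteness, here is a rough sketch of that check using the MagickWand C API. It assumes the ImageMagick 6 style signature of MagickQuantizeImage (where the dither argument is a plain boolean; in IM7 it is a DitherMethod) and reads the error fields from the underlying Image struct, so treat the exact calls as assumptions to verify against your ImageMagick version.

    // Sketch only: assumes the IM6 MagickWand header/signature; verify against your version.
    #include <wand/MagickWand.h>   // <MagickWand/MagickWand.h> on ImageMagick 7
    #include <cstdio>

    int main()
    {
        MagickWandGenesis();
        MagickWand *wand = NewMagickWand();
        MagickReadImage(wand, "input.png");

        // treedepth = 0 lets the quantizer pick Log4(number_colors) itself;
        // dither on, measure_error on so the error info gets filled in.
        size_t colors = 64;
        MagickQuantizeImage(wand, colors, RGBColorspace, 0, MagickTrue, MagickTrue);

        // measure_error populates the error info of the underlying Image.
        Image *image = GetImageFromMagickWand(wand);
        printf("mean error per pixel:  %g\n", image->error.mean_error_per_pixel);
        printf("normalized max error:  %g\n", image->error.normalized_maximum_error);

        // If the error is too high for your needs, redo the quantization
        // from the original image with a larger number of colors.

        MagickWriteImage(wand, "quantized.png");
        wand = DestroyMagickWand(wand);
        MagickWandTerminus();
        return 0;
    }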
I'm developing image manipulation software in which the user can adjust the brightness, contrast and local contrast/"clarity" of an image.
Adjustments are made using OpenCV's convertTo for brightness and contrast, and CLAHE for local contrast.
I want to know the order in which I should apply these adjustments to the image. Is there a rule of thumb for this? I get vastly different results when changing the order, and I can't find anything in the documentation.
Actually, the answer to this question lies in the definitions of those terms.
Brightness: Brightness is directly related to the intensity value of each pixel. A pixel (and therefore an image) becomes brighter as it gets closer to 255 (white) and darker as it gets closer to 0 (black). Brightness is increased or decreased by adding or subtracting a constant to/from each pixel.
Contrast: For brightness we add a constant to each pixel; for contrast we multiply each pixel by a constant. This widens the gaps between pixel values across the whole image.
CLAHE: I see CLAHE as an intelligent form of histogram equalization. In some images the pixel values are concentrated in a narrow interval within 0-255; CLAHE is the tool to stretch that narrow interval over the whole 0-255 range.
Coming back to your question:
If you set brightness and contrast first, CLAHE can effectively undo those adjustments.
If you apply CLAHE first, it makes more sense.
Note: In my experience, CLAHE is mostly used as a pre-processing step. Small brightness and contrast adjustments are fine in either order, but for large adjustments I prefer to apply them after CLAHE.
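A minimal OpenCV sketch of that order (CLAHE first, then brightness/contrast via convertTo); the clip limit, tile size, alpha and beta below are just illustrative values, not recommendations.

    #include <opencv2/opencv.hpp>

    int main()
    {
        cv::Mat gray = cv::imread("input.png", cv::IMREAD_GRAYSCALE);

        // 1) Local contrast first.
        cv::Ptr<cv::CLAHE> clahe = cv::createCLAHE(2.0, cv::Size(8, 8));
        cv::Mat local;
        clahe->apply(gray, local);

        // 2) Then global contrast (alpha) and brightness (beta): dst = alpha*src + beta.
        double alpha = 1.2;   // contrast gain
        double beta  = 10.0;  // brightness offset
        cv::Mat out;
        local.convertTo(out, -1, alpha, beta);

        cv::imwrite("adjusted.png", out);
        return 0;
    }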
Let's say I have a series of infrared pictures and the task is to isolate the human body from other objects in the picture. The problem is noise from other relatively hot objects, like lamps and their 'hot' shades.
Simple thresholding methods like binary and/or Otsu didn't give good results on difficult (noisy) pictures, so I've decided to do it manually.
Here are some samples
The results are not terrible, but I think they can be improved. Here I simply select pixels by their HSV hue value. More or less, hot pixels are located in this range: hue < 50 or hue > 300. My main concern is these pink pixels, which are sometimes noise from lamps but sometimes parts of the human body, so I can't simply discard them without causing significant damage to the results: e.g. on the left picture this would 'destroy' half of the left hand, and so on.
As a last resort I could use some strong filtering and erosion, but I still believe there's a way to somehow tell OpenCV: hey, I don't need these pink areas unless they are part of a large hot cluster.
Any ideas, keywords, techniques, good articles? Thanks in advance.
FIR data is presumably monotonically proportional (if not linear) to temperature, and this should yield a grayscale image.
Your examples are colorized with a color map - the color only conveys a single channel of actual information. It would be best if you could work directly on the grayscale image (maybe remap the images to grayscale).
Then, see if you can linearize the images to an actual temperature scale such that the pixel value represents the temperature. Once you do this, you should be able to clamp your image to the temperature range in which you expect a person to appear. Check the datasheets of your camera/imager for the conversion formula.
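As a sketch of what that clamping could look like once you have a grayscale image whose values track temperature (and, tying back to the question, with a size filter so only large hot clusters survive); the intensity bounds and the area threshold are made-up numbers you would derive from your camera's datasheet and your own data.

    #include <opencv2/opencv.hpp>

    int main()
    {
        cv::Mat gray = cv::imread("ir_gray.png", cv::IMREAD_GRAYSCALE);

        // Keep only the band of values where a person is expected to appear.
        cv::Mat body;
        cv::inRange(gray, 120, 200, body);   // placeholder bounds

        // Drop small hot clusters (lamps, reflections) by connected-component area.
        cv::Mat labels, stats, centroids;
        int n = cv::connectedComponentsWithStats(body, labels, stats, centroids);
        cv::Mat mask = cv::Mat::zeros(body.size(), CV_8U);
        for (int i = 1; i < n; ++i) {                       // label 0 is the background
            if (stats.at<int>(i, cv::CC_STAT_AREA) > 500)   // placeholder area threshold
                mask.setTo(255, labels == i);
        }

        cv::imwrite("mask.png", mask);
        return 0;
    }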
I have a black and white image after binarization, like the one below:
How can I remove the small lines parallel to the long curves using OpenCV? I can remove them by removing all small objects, but I want to remove only the small parallel lines.
This looks like a Canny artifact (or some kind of ringing artifact) to me. There are several ways to remove them.
An empirical but not too computationally intensive method would be to locate all small features and superimpose them with the same image shifted by [+/-]X, [+/-]Y. If the feature is completely coincident with the shifted image, i.e., all pixels in the white feature are also white in the shifted image, then you are probably looking at an artifact.
To evaluate the "smallness" of a feature, you can use a basic flood fill. This method is cheap because you can simulate shifting with pointers, without really allocating four shifted images. It is prone to false positives wherever you really have small parallel lines, and to false negatives if the artifacts are very large.
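A rough OpenCV sketch of that shift-coincidence test (without the pointer trick, for clarity): a small component is flagged as an artifact when all of its pixels are also white in the image shifted by some (dx, dy). The shift amounts and the area cutoff are assumptions to tune.

    #include <opencv2/opencv.hpp>

    // True if every pixel of the component mask is white in the image shifted by (dx, dy).
    static bool coincidesWithShift(const cv::Mat &bin, const cv::Mat &comp, int dx, int dy)
    {
        cv::Mat shifted;
        cv::Mat M = (cv::Mat_<double>(2, 3) << 1, 0, dx, 0, 1, dy);
        cv::warpAffine(bin, shifted, M, bin.size());

        cv::Mat uncovered;
        cv::bitwise_and(comp, ~shifted, uncovered);   // component pixels not covered after the shift
        return cv::countNonZero(uncovered) == 0;
    }

    int main()
    {
        cv::Mat bin = cv::imread("binary.png", cv::IMREAD_GRAYSCALE);

        cv::Mat labels, stats, centroids;
        int n = cv::connectedComponentsWithStats(bin, labels, stats, centroids);

        cv::Mat cleaned = bin.clone();
        for (int i = 1; i < n; ++i) {
            if (stats.at<int>(i, cv::CC_STAT_AREA) > 200) continue;   // only small features
            cv::Mat comp = (labels == i);
            // Try a few shift directions; erase the component if any of them coincides.
            if (coincidesWithShift(bin, comp, 3, 0) || coincidesWithShift(bin, comp, -3, 0) ||
                coincidesWithShift(bin, comp, 0, 3) || coincidesWithShift(bin, comp, 0, -3))
                cleaned.setTo(0, comp);
        }
        cv::imwrite("cleaned.png", cleaned);
        return 0;
    }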
Another method would be to posterize the original image twice with different thresholds. While the "real" lines will stay together, the ringing artifacts will have a different strength. At that point you evaluate the image difference and consider as "artifact" all features that are farther than a given threshold from the image track. This is a bit more computationally intensive and yields better results, but it depends on what you have as an original image, i.e. what your workflow is.
It is possible that reevaluating the workflow (altering the edge detection phase) could avoid the creation of the artifacts altogether.
Use the cvBlobsLib library to detect the white patches as blobs. cvBlobsLib provides functions that let you compute different features of the blobs, like area and ellipticity. So if you want only the smaller patches parallel to the long curve, then:
Get the long curve on the basis of the area covered by the blob or its perimeter, i.e. the contour length of the blob.
Get the ellipticity or the orientation of the major axis of the long curve after fitting an ellipse (cvBlobsLib will do that for you).
Filter out all blobs that are below a threshold in area or contour length and have the same orientation as the long curve.
Hope this works.
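I can't vouch for the exact cvBlobsLib calls, so here is the same idea sketched with plain OpenCV (findContours plus fitEllipse); the area and angle thresholds are placeholders.

    #include <opencv2/opencv.hpp>
    #include <vector>
    #include <cmath>

    int main()
    {
        cv::Mat bin = cv::imread("binary.png", cv::IMREAD_GRAYSCALE);

        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(bin.clone(), contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_NONE);
        if (contours.empty()) return 0;

        // Long curve = contour with the largest arc length.
        size_t longest = 0;
        for (size_t i = 1; i < contours.size(); ++i)
            if (cv::arcLength(contours[i], false) > cv::arcLength(contours[longest], false))
                longest = i;
        float longAngle = cv::fitEllipse(contours[longest]).angle;

        cv::Mat cleaned = bin.clone();
        for (size_t i = 0; i < contours.size(); ++i) {
            if (i == longest || contours[i].size() < 5) continue;   // fitEllipse needs >= 5 points
            double area  = cv::contourArea(contours[i]);
            float  angle = cv::fitEllipse(contours[i]).angle;
            // Small and roughly parallel to the long curve -> treat it as noise.
            if (area < 150 && std::abs(angle - longAngle) < 15)     // placeholder thresholds
                cv::drawContours(cleaned, contours, (int)i, cv::Scalar(0), cv::FILLED);
        }
        cv::imwrite("cleaned.png", cleaned);
        return 0;
    }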
If you know the orientation of your line in advance, you can do a morphological closing with a custom structuring element adapted to your needs.
See mathematical morphology on Wikipedia.
See the OpenCV documentation on morphological operations.
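A small sketch of such a closing; the horizontal line element below is just an example, and for an arbitrary orientation you can draw your own kernel (the 45-degree example is an assumption about the shape you might need).

    #include <opencv2/opencv.hpp>

    int main()
    {
        cv::Mat bin = cv::imread("binary.png", cv::IMREAD_GRAYSCALE);

        // A horizontal line-shaped structuring element...
        cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(15, 1));

        // ...or draw your own kernel for an arbitrary orientation, e.g. a 45-degree line.
        cv::Mat diag = cv::Mat::zeros(15, 15, CV_8U);
        cv::line(diag, cv::Point(0, 14), cv::Point(14, 0), cv::Scalar(1));

        cv::Mat closed;
        cv::morphologyEx(bin, closed, cv::MORPH_CLOSE, kernel);   // or use 'diag'

        cv::imwrite("closed.png", closed);
        return 0;
    }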
Perhaps similar to what the others said, but in simpler words: since the small lines seem to have roughly half the thickness of the long ones, and if you don't really care about preserving the long lines exactly as they are, you could apply a simple algorithm that "makes the lines thinner" several times, until the small ones disappear. What you need to do is scan the image pixel by pixel, and when you detect a white pixel above, below, to the left or to the right of a black pixel, store its coordinates in a vector. After you traverse the entire image, make all the pixels at the coordinates in the vector black. You could define the number of iterations of this algorithm empirically.
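Since turning every white pixel that touches a black pixel (above, below, left or right) black is exactly a 3x3 cross-shaped erosion, a few iterations of cv::erode do the same job as the per-pixel scan; this is only a sketch, and the iteration count is the empirical threshold mentioned above.

    #include <opencv2/opencv.hpp>

    int main()
    {
        cv::Mat bin = cv::imread("binary.png", cv::IMREAD_GRAYSCALE);

        // 4-connected "peel one layer" step.
        cv::Mat kernel = cv::getStructuringElement(cv::MORPH_CROSS, cv::Size(3, 3));

        cv::Mat thinned;
        cv::erode(bin, thinned, kernel, cv::Point(-1, -1), 2);   // 2 iterations, tune empirically

        cv::imwrite("thinned.png", thinned);
        return 0;
    }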
Here are steps exploiting the fact that parallel lines increase edge density.
1) Apply an adaptive threshold on the gray image to get many edges.
2) Apply a 3x3 erosion (or experiment, but keep it small).
3) Take the logical NOT to get the edge density.
4) Apply a dilation with a 3x3 or 5x5 kernel. It will dilate the edges so they merge into a region.
5) Now apply a 7x7 erosion (or experiment with something larger than the last dilation). It will remove most of the unwanted content: long lines and small stray areas.
The output is a mask of the regions to remove. You can apply contour detection on the original image and remove the contour objects at matching positions in the mask for high-precision removal.
Or, if you don't need a high-precision result, simply AND the image with the NOT of the mask.
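A sketch of these steps in OpenCV; the adaptiveThreshold parameters and all kernel sizes are just the starting points named above, to be tuned on your data.

    #include <opencv2/opencv.hpp>

    int main()
    {
        cv::Mat gray = cv::imread("input.png", cv::IMREAD_GRAYSCALE);

        // 1) Adaptive threshold on the gray image to get many edges.
        cv::Mat edges;
        cv::adaptiveThreshold(gray, edges, 255, cv::ADAPTIVE_THRESH_MEAN_C,
                              cv::THRESH_BINARY, 11, 5);

        // 2) Small (3x3) erosion.
        cv::Mat k3 = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3));
        cv::erode(edges, edges, k3);

        // 3) Logical NOT to get the edge density.
        cv::bitwise_not(edges, edges);

        // 4) Dilate (3x3 or 5x5) so nearby edges merge into a region.
        cv::dilate(edges, edges, k3);

        // 5) Larger (7x7) erosion: removes long lines and small stray areas,
        //    leaving a mask of the regions to remove.
        cv::Mat k7 = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(7, 7));
        cv::Mat mask;
        cv::erode(edges, mask, k7);

        // Low-precision variant: simply AND the image with the NOT of the mask.
        cv::Mat keep, result;
        cv::bitwise_not(mask, keep);
        cv::bitwise_and(gray, keep, result);

        cv::imwrite("result.png", result);
        return 0;
    }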
Why not do something like:
Find the long curves (using findContours and filtering by size).
Find the small curves.
For each long curve, calculate the minimal distance between each point of every small curve and the long curve.
Calculate the mean and the standard deviation of these minimal distances.
Reject small curves for which either the mean minimal distance to the long curve is too large or the standard deviation of the minimal distances is large.
The result will probably be better (and faster) if you skeletonize the image first.
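A sketch of that contour-distance filter (removing the small curves that stay close and parallel to a long one); the length cutoff and the mean/standard-deviation thresholds are placeholders, and pointPolygonTest with measureDist=true is used here to get point-to-contour distances.

    #include <opencv2/opencv.hpp>
    #include <vector>
    #include <cmath>

    int main()
    {
        cv::Mat bin = cv::imread("binary.png", cv::IMREAD_GRAYSCALE);

        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(bin.clone(), contours, cv::RETR_LIST, cv::CHAIN_APPROX_NONE);

        // Split into long and small curves by contour length.
        std::vector<size_t> longCurves, smallCurves;
        for (size_t i = 0; i < contours.size(); ++i)
            (cv::arcLength(contours[i], false) > 200 ? longCurves : smallCurves).push_back(i);

        cv::Mat cleaned = bin.clone();
        for (size_t s : smallCurves) {
            for (size_t l : longCurves) {
                // Minimal distance from each point of the small curve to the long curve.
                std::vector<double> d;
                for (const cv::Point &p : contours[s])
                    d.push_back(std::abs(cv::pointPolygonTest(contours[l], cv::Point2f(p), true)));

                double mean = 0.0, var = 0.0;
                for (double v : d) mean += v;
                mean /= d.size();
                for (double v : d) var += (v - mean) * (v - mean);
                double stddev = std::sqrt(var / d.size());

                // Close to the long curve and running parallel to it -> remove.
                if (mean < 10.0 && stddev < 3.0) {
                    cv::drawContours(cleaned, contours, (int)s, cv::Scalar(0), cv::FILLED);
                    break;
                }
            }
        }
        cv::imwrite("cleaned.png", cleaned);
        return 0;
    }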
Good luck with it,
I am working on a microscope that streams live images via a built-in video camera to a PC, where further image processing can be performed on the streamed image. Any processing done on the streamed image must be done in "real-time" (minimal frames dropped).
We take the average of a series of static images to counter random noise from the camera to improve the output of some of our image processing routines.
My question is: how do I know when the image is no longer static - either the sample under inspection has moved or rotated, or the camera has zoomed in or out - so that I can reset the image series used for averaging?
I looked through some of the threads, and some ideas that seemed interesting:
Note: using Windows, C++ and Intel IPP. With IPP the image is a byte array (Ipp8u).
1. Hash the images, and compare the hashes (normal hash or perceptual hash?)
2. Use normalized cross correlation (IPP has many variations - which to use?)
Which do you guys think is suitable for my situation (speed)?
If your camera doesn't shake, you can, as inVader said, subtract images. Then the sum of the absolute values of all pixels of the difference image is sometimes enough to tell whether the images are the same or different. However, if your noise, lighting level, etc. varies, this will not give you a good enough S/N ratio.
And in noisy conditions normal hashes are even less useful.
The best approach would be to identify that some feature of your object has changed, like its boundary (if it's regular) or its center of mass (if it's irregular). If you have a boundary position, you only need to analyze a single line of pixels, perpendicular to that boundary, to tell that the boundary has moved.
The center-of-mass position may be subject to frequent false-negative responses, but adding the total mass and/or moment of inertia may help.
If the camera shakes, you may have to align images before comparing (depending on the comparison method and required accuracy, a single-pixel misalignment might be huge), and that's where cross-correlation helps.
Further, you don't have to analyze every image. You can skip one, and if the next one differs, discard both of them. That gives you twice as much time to analyze an image.
And if you are averaging images, you might just define an optimal number of images you need and compare only the first and the last image in the sequence.
So, the simplest thing to try would be to take subsequent images, subtract them from each other and have a look at the difference. Then define some rules, including local and global thresholds for the difference, under which two images are considered equal. Simple subtraction of the bitmap/array data, looking for maxima and calculating the average difference across the whole thing should be no problem to do in real time.
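A minimal sketch of that check with OpenCV (the same can be done with IPP's absolute-difference and statistics primitives); the local and global thresholds below are placeholders you would tune against your camera's noise level.

    #include <opencv2/opencv.hpp>

    // True if 'current' is considered the same scene as 'reference'.
    bool isStatic(const cv::Mat &reference, const cv::Mat &current)
    {
        cv::Mat diff;
        cv::absdiff(reference, current, diff);

        double maxDiff;
        cv::minMaxLoc(diff, nullptr, &maxDiff);   // local (per-pixel) criterion
        double meanDiff = cv::mean(diff)[0];      // global criterion

        return maxDiff < 40 && meanDiff < 2.0;    // placeholder thresholds
    }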
If there are varying light conditions or something moving in a predictable way (like a door opening and closing), then something more powerful, albeit slower, like Gaussian mixture models for background modeling, might be worth looking into. It is quite compute-intensive, but can be parallelized pretty easily.
Motion detection algorithms are what is used here.
http://www.codeproject.com/Articles/10248/Motion-Detection-Algorithms
http://www.codeproject.com/Articles/22243/Real-Time-Object-Tracker-in-C
First of all, I would take a series of images at a slow fps rate and downsample those images to make them smaller; not too much, but enough to speed up the process.
Now you have several options:
You could compute the sum of absolute differences (SAD) of the two images by subtracting them, and use a threshold to decide whether the image has changed.
If you want to speed it up even further, I would suggest doing a progressive SAD using a small kernel and moving from the top of the image to the bottom. You can accumulate the overall amount of difference during the process and stop early once you are satisfied.
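A sketch of that progressive SAD on downsampled frames, accumulating the difference block by block from top to bottom and exiting early once a "changed" threshold is exceeded; the scale factor, block height and threshold are assumptions.

    #include <opencv2/opencv.hpp>
    #include <algorithm>

    bool hasChanged(const cv::Mat &prevSmall, const cv::Mat &currSmall,
                    double threshold = 5000.0, int blockRows = 16)
    {
        double sad = 0.0;
        for (int y = 0; y < prevSmall.rows; y += blockRows) {
            int h = std::min(blockRows, prevSmall.rows - y);
            cv::Mat a = prevSmall.rowRange(y, y + h);
            cv::Mat b = currSmall.rowRange(y, y + h);
            sad += cv::norm(a, b, cv::NORM_L1);   // sum of absolute differences for this block
            if (sad > threshold)
                return true;                      // early exit: already clearly different
        }
        return false;
    }

    // Downsample before comparing, e.g.:
    //   cv::Mat small; cv::resize(frame, small, cv::Size(), 0.25, 0.25, cv::INTER_AREA);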
I have two images, both taken at the same time from the same detector.
Both images have 11-bit resolution (yes, it's odd, but that is the case here). The difference between the two images is that one image has been amplified by a factor of 1 and the other has been amplified by a factor of 10.
How can I take these two 11-bit images and combine their pixel values to get a single 16-bit image? Basically, this increases the dynamic range of the final image.
I am fairly new to image processing. I know there is a solution for this, since other systems do this on the fly, pixel by pixel, in an FPGA. I was just hoping to be able to do it in Matlab as post-processing instead of live. I know doing bitwise operations in Matlab can be kinda difficult, but we do have an educational license with every toolbox available.
As mentioned below, this looks an awful lot like HDR processing. The goal isn't artistic, but rather data preservation. This is eventually going to be ported to C++ and flown on an autonomous flight computer, and running standard bloated HDR software on the fly would kill our timing requirements.
Thanks for the help!
As a side note, I'd like to be able to do this for any combination of gains, e.g. 2x and 30x, 4x and 8x, etc. In my gut I feel like this is a deceptively simple algorithm or interpolation, but I just don't know where to start.
Gains
Since there is some confusion about what the gains mean, I'll try to explain. The image sensor (CMOS) being used in our custom camera has the capability to simultaneously output two separate images, both taken from the same exposure. It can do this because the sensor has two different electrical amplifiers along its data path.
In photography terms, it would be like your DSLR being able to take a picture using 2 different ISO values at the same time.
Sorry for the confusion
The problem you pose is known as "High Dynamic Range Imaging" and "Tone Mapping". I suggest you start with those Wikipedia articles, then drill down into the bibliography cited therein.
You don't provide enough details about your imagery to give a more specific answer. What is the "gain" you mention? Did you crank up the sensor's gain (to what ISO-equivalent number?), or did you use a longer exposure time? Are the 11-bit pixel values linear or already gamma-compressed?
To upscale an 11-bit range to a 16-bit range, multiply by (2^16-1)/(2^11-1).
(This assumes you want a linear scaling, which is reasonable when scaling up.)
If the gain was discrete (applied within the 11-bit range), then you have two 11-bit images, which may have some saturated values.
If the gain was applied in a continuous (analog) or floating-point range, then your values can go beyond the original 11 bits. In that case the values were probably also scaled to another range first, e.g. [0,1] (by dividing by 2^11-1).
If the values were scaled to another range, you will have to divide by the maximum of the new range instead of by (2^11-1).
Either way (whether the gain was in the 11-bit range or not), due to the gain and due to the addition, the resulting values may be larger than the original range. In this case, you need to decide how you want to scale them:
Do you want to scale the original 11-bit range to 16 bits (possibly causing saturation)?
If so, multiply by (2^16-1)/(2^11-1).
Do you want to scale the maximum possible value to 2^16-1?
If so, multiply by (2^16-1)/((2^11-1) * (G1+G2)).
Do you want to scale the actual maximum value to 2^16-1?
If so, multiply by (2^16-1)/max(I1+I2).
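For illustration, a C++ sketch of one possible combination under the assumption that each image is the true signal multiplied by its gain and then clipped to 11 bits: divide by the gain to put both on a common scale, prefer the finer-quantized high-gain sample where it is not saturated, and stretch to 16 bits. The saturation cutoff and the choice of scaling (the first option above) are assumptions.

    #include <cstdint>
    #include <vector>
    #include <algorithm>

    // lowGainImg was amplified by G1 (e.g. 1x), highGainImg by G2 (e.g. 10x), G1 < G2.
    std::vector<uint16_t> combine(const std::vector<uint16_t> &lowGainImg,
                                  const std::vector<uint16_t> &highGainImg,
                                  double G1, double G2)
    {
        const double maxIn  = 2047.0;    // 2^11 - 1
        const double maxOut = 65535.0;   // 2^16 - 1

        std::vector<uint16_t> out(lowGainImg.size());
        for (size_t i = 0; i < lowGainImg.size(); ++i) {
            // Use the high-gain pixel (finer quantization) unless it is near saturation.
            double value = (highGainImg[i] < 2000)          // saturation cutoff, placeholder
                         ? highGainImg[i] / G2
                         : lowGainImg[i]  / G1;
            // 'value' now lives on the unamplified 0..maxIn/G1 scale; stretch it to 16 bits.
            double scaled = value * G1 * (maxOut / maxIn);
            out[i] = (uint16_t)std::min(maxOut, std::max(0.0, scaled));
        }
        return out;
    }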
Edit:
Since you do not want to add the images, but rather use the different details in them, perhaps this article will help you:
Digital Photography with Flash and No-Flash Image Pairs