Improving output quality of 'ImageChops' - computer-vision

I've used 'ImageChops' to find the difference between two images. The differences are pointed out accurately in the output, but the picture lacks clarity, and sometimes the contents of the output overlap.
Is it possible to point out the differences so that the whole image is displayed and the differences are highlighted, similar to how 'matchTemplate()' displays the complete image and lets us use a bounding box to highlight the matched areas?
Main image:
Image to be compared with:
Output:
My code:
from PIL import Image, ImageChops

img1 = Image.open("C:/ImageComparison/Images/img1.png")
img2 = Image.open("C:/ImageComparison/Images/img2.png")

# Pixel-wise absolute difference of the two images.
diff = ImageChops.difference(img1, img2).convert('RGB')
diff.show()
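One possible approach (a sketch only, not the only way): threshold the 'ImageChops' difference and use OpenCV to draw bounding boxes on the full first image, similar to the 'matchTemplate()' workflow mentioned above. OpenCV, the threshold value of 30, and the output file name are assumptions that are not part of the original code:

import cv2
import numpy as np
from PIL import Image, ImageChops

img1 = Image.open("C:/ImageComparison/Images/img1.png")
img2 = Image.open("C:/ImageComparison/Images/img2.png")

# Grey-scale difference, then a binary mask of the changed pixels.
diff = ImageChops.difference(img1, img2).convert('L')
_, mask = cv2.threshold(np.array(diff), 30, 255, cv2.THRESH_BINARY)  # 30 is a guess; tune it

# Draw a red box around each connected changed region on top of the full first image.
annotated = cv2.cvtColor(np.array(img1.convert('RGB')), cv2.COLOR_RGB2BGR)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4.x
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    cv2.rectangle(annotated, (x, y), (x + w, y + h), (0, 0, 255), 2)

cv2.imwrite("C:/ImageComparison/Images/diff_boxes.png", annotated)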

Related

How to produce glare on an image with OpenCV

Is there a way to produce a glare on an image? Given an image with an object, I want to produce a glare on a portion of the image. If I have an image that is 256x256, I want to produce glare on the first 64x64 patch. Is there a function in OpenCV I can use for that? If not, what is a good way to go about this problem?
I think this example does what you need. Each time it saves a face, it gives a flash in the part of the screen where the face was recognised, so the glare changes place and size every time.
You can find it here:
https://github.com/MasteringOpenCV/code/tree/master/Chapter8_FaceRecognition
Look for this part in main.cpp:
// Make a white flash on the face, so the user knows a photo has been taken.
Mat displayedFaceRegion = displayedFrame(faceRect);
displayedFaceRegion += CV_RGB(90,90,90);
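The same idea, translated into a Python/OpenCV sketch for the 64x64 patch from the question (file names are placeholders):

import cv2
import numpy as np

# Hypothetical input file; any 256x256 (or larger) image works.
img = cv2.imread("input.png")

# Brighten the top-left 64x64 patch, similar to the += CV_RGB(90,90,90) above.
patch = img[0:64, 0:64]
img[0:64, 0:64] = cv2.add(patch, np.full_like(patch, 90))  # saturating add, no wrap-around

cv2.imwrite("glare.png", img)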

Photoshop-like image difference with Python

I want to compare two images of the same size with some text on them.
Let's say the two words are: 'google' and 'gooogle'.
Before measuring the image difference in PS, I blur the images with a Gaussian blur.
The neat thing in PS is that no matter how you arrange the layers - gooogle on top or google on top - the difference of the layers stays the same.
You get a black background and the difference as (more or less) white pixels.
I am unable to reproduce this functionality in Python.
How did PS manage to get commutativity in there?
I was able to find the solution:
You need to take the absolute value. The problem is that you first need to convert the RGB image (uint8) to a larger datatype.
After that you can subtract the images and take the absolute value. In the end you convert it back to RGB, i.e. uint8.
import numpy as np

def ps_like_diff(img1, img2):
    # Use a signed integer type so the subtraction can go negative without wrapping.
    img1_ = img1.astype(int)
    img2_ = img2.astype(int)
    diff = img1_ - img2_
    # The absolute value makes the result independent of the layer order.
    return np.abs(diff).astype('uint8')
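A short usage sketch (file names are hypothetical, and the two files are assumed to have the same size and mode) showing that the result is indeed commutative:

import numpy as np
from PIL import Image, ImageFilter

# Blur first, as described above, then diff.
a = np.array(Image.open("google.png").filter(ImageFilter.GaussianBlur(2)))
b = np.array(Image.open("gooogle.png").filter(ImageFilter.GaussianBlur(2)))

# The absolute difference is symmetric, so the layer order does not matter.
assert np.array_equal(ps_like_diff(a, b), ps_like_diff(b, a))
Image.fromarray(ps_like_diff(a, b)).show()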

Weird HSV conversion with OpenCV

I am working on a project that detects hematomas on skin. I am having an issue with the colors after conversion from RGB to HSV. My algorithm detects a hematoma by its color.
With some images I have good results like here:
Original img: http://imgur.com/WHiOWdj
Result img: http://imgur.com/PujbnHa
But with some images I get bad results like this:
Original img: http://imgur.com/OshB99r
Result img: http://imgur.com/CuNzAId
The same original image after conversion to HSV: http://imgur.com/lkVwtCs
Do you have any ideas how to fix it?
Thanks
Looking at your result image, I think you are only using the H channel of the original image in your algorithm. The false positive detections can come from the fact that some parts of the healthy skin have almost the same H value as the hematoma. You can see on the grey-scale image of the H channel that both parts have similar values:
The difference between the two parts is the saturation value. On the following image you can see the S channel of the original image, and it shows perfectly that at the hematoma the saturation is much higher than at the other parts of the arm:
This was expected, because the hematoma has a much stronger color than the healthy skin.
So I suggest you use both the H and S channels in your algorithm, i.e. take into account only those parts of the H image where the S image contains high saturation values. A possible and simple way to do that is to binarize both the H and S images and combine them with an AND operation (see the sketch after the images below):
H image after binarisation:
S image after binarisation:
Image after H&S operation:
You can see that on the result image only the hematoma part is white (except for some noise, which you can eliminate easily, for example by size or by morphological filtering).
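The H&S binarization described above as a Python/OpenCV sketch; the file name and the two threshold values are placeholders and depend on your images:

import cv2

bgr = cv2.imread("arm.png")
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
h = hsv[:, :, 0]
s = hsv[:, :, 1]

# Binarize H and S separately, then keep only the pixels that pass both tests.
_, h_bin = cv2.threshold(h, 120, 255, cv2.THRESH_BINARY)
_, s_bin = cv2.threshold(s, 90, 255, cv2.THRESH_BINARY)
mask = cv2.bitwise_and(h_bin, s_bin)

# Optional clean-up of small noise with a morphological opening.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

cv2.imwrite("hematoma_mask.png", mask)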
EDIT
It is important to note that binarization is one of the most important (and sometimes very complicated) steps in object detection algorithms, since binarization is what first highlights the objects to detect.
If the external conditions (lighting, color of the objects, etc.) do not change significantly from image to image, you can use fixed binarization thresholds. If this constant environment cannot be ensured, you have to use more sophisticated methods. There are a lot of possibilities; here you can read about some examples:
Wikipedia - Thresholding
Wikipedia - Balanced histogram thresholding
Several solutions are based on histogram analysis: histograms of images containing objects always have several local maxima, whose positions can vary depending on the environment, and if you find them you can adapt the binarization threshold easily.
For example, the histogram of the H channel of the original image is the following:
The first maximum belongs to the background, the second to the skin, and the last to the hematoma. It can be assumed that these three maxima can be found in each image; only their positions vary, depending on the lighting or other conditions. Putting a threshold between the 2nd and 3rd local maxima can therefore be a good choice to highlight the hematoma.
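A rough sketch of that idea (the smoothing window and the deliberately simple peak detection are assumptions; a real implementation would need something more robust):

import cv2
import numpy as np

def adaptive_h_threshold(h_channel):
    # Histogram of the H channel, smoothed so small bumps do not count as peaks.
    hist = cv2.calcHist([h_channel], [0], None, [256], [0, 256]).ravel()
    hist = np.convolve(hist, np.ones(9) / 9, mode='same')

    # Very simple local-maximum detection: bins larger than both neighbours.
    peaks = [i for i in range(1, 255) if hist[i - 1] < hist[i] >= hist[i + 1]]
    if len(peaks) < 2:
        return None  # not enough structure; fall back to a fixed threshold

    # Place the threshold halfway between the last two peaks (skin and hematoma).
    thresh = (peaks[-2] + peaks[-1]) // 2
    _, mask = cv2.threshold(h_channel, thresh, 255, cv2.THRESH_BINARY)
    return mask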
Finally, I suggest you read the following article about thresholding in OpenCV:
OpenCV - Thresholding

How to zoom an image and keep its clarity?

I have an image. I resized it and obtained another one that is larger than the first.
resize(roi, zoom, Size(2274, 70), 0.0, 0.0, 3);  // interpolation 3 == CV_INTER_AREA
I also tested all the interpolation methods, but none of them gives a good result:
CV_INTER_NN (default)
CV_INTER_LINEAR
CV_INTER_CUBIC
CV_INTER_AREA
The image contains text; when I zoom it, it becomes so fuzzy that I cannot recognize the text.
I am asking for an algorithm or method to make the image clearer.
Thanks for the help
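For reference, the equivalent resize calls with those interpolation flags in OpenCV's Python API (the file name is a placeholder, and the target size matches the C++ call above):

import cv2

roi = cv2.imread("roi.png")

zoom_nn     = cv2.resize(roi, (2274, 70), interpolation=cv2.INTER_NEAREST)
zoom_linear = cv2.resize(roi, (2274, 70), interpolation=cv2.INTER_LINEAR)
zoom_cubic  = cv2.resize(roi, (2274, 70), interpolation=cv2.INTER_CUBIC)
zoom_area   = cv2.resize(roi, (2274, 70), interpolation=cv2.INTER_AREA)

cv2.imwrite("zoom_cubic.png", zoom_cubic)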

How to detect image location before stitching with OpenCV / C++

I'm trying to merge/stitch 2 images together but found that the default stitcher class in OpenCV could not handle my images.
So I started to write my own.
Unfortunately the images are too large to attach to this message (they are both 12600x9000 pixels in size), so I'll try to explain as well as possible.
The two images are not pictures taken by a camera but TIFF files extracted from a PDF file.
The images themselves are actually CAD drawings, so there are not many gradients in them, which I think is why the default stitcher class could not handle them.
So far, I managed to extract the features and match them.
Also I used the following well known example to stitch them together:
Mat WarpedImage;
// Warp img_2 into img_1's coordinate frame, on a canvas twice the size of img_2.
cv::warpPerspective(img_2, WarpedImage, homography, cv::Size(2 * img_2.cols, 2 * img_2.rows));
// Copy img_1 into the top-left corner of the warped canvas.
Mat half(WarpedImage, Rect(0, 0, img_1.cols, img_1.rows));
img_1.copyTo(half);
I sort of made it fit, but my problem is that in my case the two images could be aligned vertically or horizontally.
By default, all stitch examples on the internet assume the first image is the left image and the 2nd image is the right image.
So my first question would be:
How can I detect if the image is to the left, right, above or below the first image and create a proper sized new image?
Secondly..
Currently I'm getting the proper image; however, because I don't have decent code to determine the ideal width and height of the new image, I end up with a lot of black/empty space in it.
What would be the best C++ code to remove those black areas?
(I'm seeing a lot of Python scripts on the net, but no C++ examples of this, and I have 0 Python skills....)
Thank you very much in advance for your help.
Greetings,
Floris.
You can reproject the corners of the second image with perspectiveTransform. With the transformed points you can find the relative position of your image and calculate the new image size that will fit both images. This will also let you deal with the black areas, since you have the boundaries of the two images.
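A rough sketch of that idea using OpenCV's Python bindings (perspectiveTransform and warpPerspective exist with the same names in the C++ API); the function and variable names here are made up for illustration:

import numpy as np
import cv2

def warp_and_fit(img_1, img_2, homography):
    h1, w1 = img_1.shape[:2]
    h2, w2 = img_2.shape[:2]

    # Reproject the corners of the second image with the homography.
    corners_2 = np.float32([[0, 0], [w2, 0], [w2, h2], [0, h2]]).reshape(-1, 1, 2)
    warped_corners = cv2.perspectiveTransform(corners_2, homography)

    # Combine with the first image's corners to find the bounding box of both images.
    corners_1 = np.float32([[0, 0], [w1, 0], [w1, h1], [0, h1]]).reshape(-1, 1, 2)
    all_corners = np.concatenate((corners_1, warped_corners), axis=0)
    x_min, y_min = np.floor(all_corners.min(axis=0).ravel()).astype(int)
    x_max, y_max = np.ceil(all_corners.max(axis=0).ravel()).astype(int)

    # Translation so that regions left of or above img_1 still land on the canvas.
    shift = np.array([[1, 0, -x_min],
                      [0, 1, -y_min],
                      [0, 0, 1]], dtype=np.float64)

    canvas_size = (int(x_max - x_min), int(y_max - y_min))
    result = cv2.warpPerspective(img_2, shift @ homography, canvas_size)
    result[-y_min:h1 - y_min, -x_min:w1 - x_min] = img_1
    return result

Because the canvas is sized from the actual corner positions, the warped result contains no more black border than necessary, which addresses the second question as well.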