Finding difference in an image - C++

I have an image as follows:
I want to detect the 5 dials for processing. Hough circles detects all the other, irrelevant circles as well. To solve this, I created a plain image and generated the absolute difference with this one. It gave this image:
I drew a box around it, and the final image is:
My code is as follows:
// note: imread expects an IMREAD_* flag; COLOR_BGR2GRAY is a cvtColor code.
// Reading in color here, since the loop below treats the diff as 3-channel.
Mat img1 = imread(image_path1, IMREAD_COLOR);
Mat img2 = imread(image_path2, IMREAD_COLOR);

cv::Mat diffImage;
cv::absdiff(img2, img1, diffImage);

// the mask is written as single-channel below, so create it as CV_8UC1
cv::Mat foregroundMask = cv::Mat::zeros(diffImage.rows, diffImage.cols, CV_8UC1);

float threshold = 30.0f;
float dist;

for (int j = 0; j < diffImage.rows; ++j)
{
    for (int i = 0; i < diffImage.cols; ++i)
    {
        cv::Vec3b pix = diffImage.at<cv::Vec3b>(j, i);
        dist = (pix[0]*pix[0] + pix[1]*pix[1] + pix[2]*pix[2]);
        dist = sqrt(dist);

        if (dist > threshold)
        {
            foregroundMask.at<unsigned char>(j, i) = 255;
        }
    }
}

cvtColor(diffImage, diffImage, COLOR_BGR2GRAY);
Mat1b img = diffImage.clone();

// Binarize image
Mat1b bin = img > 70;

// Find non-black points
vector<Point> points;
findNonZero(bin, points);

// Get bounding rect
Rect box = boundingRect(points);

// Draw (in color)
rectangle(img1, box, Scalar(0,255,0), 3);

// Show
imshow("Result", img1);
Now the issue is that I can't compare the plain image with any other image of a different size. Any pointer in the right direction would be very helpful.
Regards,
Saghir A. Khatr
Edit
My plain image is as follows:
I want to create a standard sample plain image that can be used with any image to detect that portion...
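One pointer in that direction, as a minimal sketch: before taking the difference, rescale the plain template to the size of whichever image you are processing, so absdiff always sees equal dimensions. This assumes the dials sit at roughly the same relative position in every image (plainImage and inputImage are placeholder names):

#include <opencv2/opencv.hpp>

cv::Mat diffWithTemplate(const cv::Mat& inputImage, const cv::Mat& plainImage)
{
    // rescale the template to match the input image's dimensions
    cv::Mat scaledTemplate;
    cv::resize(plainImage, scaledTemplate, inputImage.size());

    cv::Mat diffImage;
    cv::absdiff(inputImage, scaledTemplate, diffImage);
    return diffImage;
}

If the dial block can move around in the frame, matching a cropped sample of it with cv::matchTemplate would be a more robust direction than a whole-image difference.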

Related

OpenCV error: assertion failed in warpPerspective

I'm trying to make an AR app using ArUco and OpenCV (I'm a newbie). It detects an ArUco marker and puts an image on it. I have tried to use the warpPerspective() function; however, something is wrong: it returns the OpenCV error "assertion failed ((M0.type() == CV_32F || M0.type() == CV_64F))" in warpPerspective. Please give me a way to solve it.
int main() {
    cv::VideoCapture inputVideo;
    inputVideo.open("gal.mp4");
    cv::Ptr<cv::aruco::Dictionary> dictionary = cv::aruco::getPredefinedDictionary(cv::aruco::DICT_4X4_50);
    cv::Mat sq = imread("zhuz.jpg", CV_LOAD_IMAGE_UNCHANGED);

    while (inputVideo.grab()) {
        vector<Point2f> sqPoints;
        vector<Point2f> p;

        sqPoints.push_back(Point2f(0, 0));
        sqPoints.push_back(Point2f(sq.cols, 0));
        sqPoints.push_back(Point2f(sq.cols, sq.rows));
        sqPoints.push_back(Point2f(0, sq.rows));

        cv::Mat image, warp_matrix;
        inputVideo.retrieve(image);

        Mat cpy_img(image.rows, image.cols, image.type());
        Mat neg_img(image.rows, image.cols, image.type());
        Mat gray;
        Mat blank(sq.rows, sq.cols, sq.type());

        std::vector<int> ids;
        std::vector<std::vector<cv::Point2f>> corners;
        cv::aruco::detectMarkers(image, dictionary, corners, ids);

        if (ids.size() > 0) {
            p.push_back(corners[0][0]);
            p.push_back(corners[0][1]);
            p.push_back(corners[0][2]);
            p.push_back(corners[0][3]);

            Mat wrap_matrix = getPerspectiveTransform(sqPoints, p);

            blank = Scalar(0);
            neg_img = Scalar(0); // initialize to black (all-zero pixels)
            cpy_img = Scalar(0); // initialize to black (all-zero pixels)
            bitwise_not(blank, blank);

            warpPerspective(sq, neg_img, warp_matrix, Size(neg_img.cols, neg_img.rows)); // Transform overlay image into position - [ITEM1]
            warpPerspective(blank, cpy_img, warp_matrix, Size(cpy_img.cols, cpy_img.rows)); // Transform a blank overlay image into position
            bitwise_not(cpy_img, cpy_img); // Invert the blank overlay from white to black
            bitwise_and(cpy_img, image, cpy_img); // Cut a "hole" in the frame to create a clipping mask - [ITEM2]
            bitwise_or(cpy_img, neg_img, image); // Finally merge both items [ITEM1 & ITEM2]
        }

        cv::imshow("out", image);
        cv::waitKey(1); // needed so imshow actually refreshes each frame
    }
}
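A likely cause, assuming the posted code is exactly what is compiled: the transform is computed into wrap_matrix, but the variable actually passed to warpPerspective() is warp_matrix, which is declared next to image and never assigned. An empty Mat fails warpPerspective's type check, which is exactly the assertion quoted above. Using one name throughout should fix it:

// compute the transform into the same variable that is passed to warpPerspective
cv::Mat warp_matrix = cv::getPerspectiveTransform(sqPoints, p);
cv::warpPerspective(sq, neg_img, warp_matrix, neg_img.size());
cv::warpPerspective(blank, cpy_img, warp_matrix, cpy_img.size());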

Improve Text Binarization / OCR Preprocessing with OpenCV

I am building a scanner feature for my app and binarizing the photo of the document with OpenCV:
// convert to greyscale
cv::Mat converted, blurred, blackAndWhite;
converted = cv::Mat(inputMatrix.rows, inputMatrix.cols, CV_8UC1);
cv::cvtColor(inputMatrix, converted, CV_BGR2GRAY);

// remove noise
cv::GaussianBlur(converted, blurred, cv::Size(3,3), 0);

// adaptive threshold (the input must be the blurred image, not the
// still-empty output matrix)
cv::adaptiveThreshold(blurred, blackAndWhite, 255, cv::ADAPTIVE_THRESH_GAUSSIAN_C, cv::THRESH_BINARY, 15, 9);
The result is okay, but the scans from other scanner apps are much better, especially for very small text:
Processed with opencv
Scanned With DropBox
What can I do to improve my result?
Maybe the apps are using anti-aliasing to make their binarized output look nicer. To obtain a similar effect, I first tried binarizing the image, but the result didn't look very nice with all the jagged edges. Then I applied pyramid upsampling followed by downsampling to the result, and the output was better.
I didn't use adaptive thresholding, however. I segmented the text-like regions and processed only those regions, then pasted them together to form the final images. It is a kind of local thresholding, using either the Otsu method or k-means (using combinations of thr_roi_otsu, thr_roi_kmeans and proc_parts in the code below). Below are some results.
Apply Otsu threshold to all text regions, then upsample followed by downsample:
Some text:
Full image:
Upsample input image, apply Otsu threshold to individual text regions, downsample the result:
Some text:
Full image:
/*
apply Otsu threshold to the region in mask
*/
Mat thr_roi_otsu(Mat& mask, Mat& im)
{
    Mat bw = Mat::ones(im.size(), CV_8U) * 255;

    vector<unsigned char> pixels(countNonZero(mask));
    int index = 0;
    // gather the masked pixels into a flat array
    for (int r = 0; r < mask.rows; r++)
    {
        for (int c = 0; c < mask.cols; c++)
        {
            if (mask.at<unsigned char>(r, c))
            {
                pixels[index++] = im.at<unsigned char>(r, c);
            }
        }
    }
    // threshold pixels
    threshold(pixels, pixels, 0, 255, CV_THRESH_BINARY | CV_THRESH_OTSU);
    // paste pixels
    index = 0;
    for (int r = 0; r < mask.rows; r++)
    {
        for (int c = 0; c < mask.cols; c++)
        {
            if (mask.at<unsigned char>(r, c))
            {
                bw.at<unsigned char>(r, c) = pixels[index++];
            }
        }
    }
    return bw;
}
/*
apply k-means to the region in mask
*/
Mat thr_roi_kmeans(Mat& mask, Mat& im)
{
    Mat bw = Mat::ones(im.size(), CV_8U) * 255;

    vector<float> pixels(countNonZero(mask));
    int index = 0;
    // gather the masked pixels into a flat array
    for (int r = 0; r < mask.rows; r++)
    {
        for (int c = 0; c < mask.cols; c++)
        {
            if (mask.at<unsigned char>(r, c))
            {
                pixels[index++] = (float)im.at<unsigned char>(r, c);
            }
        }
    }
    // cluster pixels by gray level
    int k = 2;
    Mat data(pixels.size(), 1, CV_32FC1, &pixels[0]);
    vector<float> centers;
    vector<int> labels(countNonZero(mask));
    kmeans(data, k, labels, TermCriteria(CV_TERMCRIT_EPS+CV_TERMCRIT_ITER, 10, 1.0), k, KMEANS_PP_CENTERS, centers);
    // examine cluster centers to see which label corresponds to the dark pixels
    int label0 = centers[0] > centers[1] ? 1 : 0;
    // paste pixels
    index = 0;
    for (int r = 0; r < mask.rows; r++)
    {
        for (int c = 0; c < mask.cols; c++)
        {
            if (mask.at<unsigned char>(r, c))
            {
                bw.at<unsigned char>(r, c) = labels[index++] != label0 ? 255 : 0;
            }
        }
    }
    return bw;
}
/*
apply procfn to each connected component in the mask,
then paste the results to form the final image
*/
Mat proc_parts(Mat& mask, Mat& im, Mat (procfn)(Mat&, Mat&))
{
    Mat tmp = mask.clone();
    vector<vector<Point>> contours;
    vector<Vec4i> hierarchy;
    findContours(tmp, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE, Point(0, 0));

    Mat byparts = Mat::ones(im.size(), CV_8U) * 255;
    for (int idx = 0; idx >= 0; idx = hierarchy[idx][0])
    {
        Rect rect = boundingRect(contours[idx]);
        Mat msk = mask(rect);
        Mat img = im(rect);
        // process the rect
        Mat roi = procfn(msk, img);
        // paste it to the final image
        roi.copyTo(byparts(rect));
    }
    return byparts;
}
int _tmain(int argc, _TCHAR* argv[])
{
    Mat im = imread("1.jpg", 0);

    // detect text regions
    Mat morph;
    Mat kernel = getStructuringElement(MORPH_ELLIPSE, Size(3, 3));
    morphologyEx(im, morph, CV_MOP_GRADIENT, kernel, Point(-1, -1), 1);

    // prepare a mask for text regions
    Mat bw;
    threshold(morph, bw, 0, 255, THRESH_BINARY | THRESH_OTSU);
    morphologyEx(bw, bw, CV_MOP_DILATE, kernel, Point(-1, -1), 10);

    Mat bw2x, im2x;
    pyrUp(bw, bw2x);
    pyrUp(im, im2x);

    // apply Otsu threshold to all text regions, then upsample followed by downsample
    Mat otsu1x = thr_roi_otsu(bw, im);
    pyrUp(otsu1x, otsu1x);
    pyrDown(otsu1x, otsu1x);

    // apply k-means to all text regions, then upsample followed by downsample
    Mat kmeans1x = thr_roi_kmeans(bw, im);
    pyrUp(kmeans1x, kmeans1x);
    pyrDown(kmeans1x, kmeans1x);

    // upsample input image, apply Otsu threshold to all text regions, downsample the result
    Mat otsu2x = thr_roi_otsu(bw2x, im2x);
    pyrDown(otsu2x, otsu2x);

    // upsample input image, apply k-means to all text regions, downsample the result
    Mat kmeans2x = thr_roi_kmeans(bw2x, im2x);
    pyrDown(kmeans2x, kmeans2x);

    // apply Otsu threshold to individual text regions, then upsample followed by downsample
    Mat otsuparts1x = proc_parts(bw, im, thr_roi_otsu);
    pyrUp(otsuparts1x, otsuparts1x);
    pyrDown(otsuparts1x, otsuparts1x);

    // apply k-means to individual text regions, then upsample followed by downsample
    Mat kmeansparts1x = proc_parts(bw, im, thr_roi_kmeans);
    pyrUp(kmeansparts1x, kmeansparts1x);
    pyrDown(kmeansparts1x, kmeansparts1x);

    // upsample input image, apply Otsu threshold to individual text regions, downsample the result
    Mat otsuparts2x = proc_parts(bw2x, im2x, thr_roi_otsu);
    pyrDown(otsuparts2x, otsuparts2x);

    // upsample input image, apply k-means to individual text regions, downsample the result
    Mat kmeansparts2x = proc_parts(bw2x, im2x, thr_roi_kmeans);
    pyrDown(kmeansparts2x, kmeansparts2x);

    return 0;
}

Drawing Rectangle around difference area

I have a question which I am unable to resolve. I am taking the difference of two images using OpenCV and getting the output in a separate Mat. The difference method used is absdiff. Here is the code.
char imgName[32]; // the original [15] was too small for the path below
// note: imread expects an IMREAD_* flag; COLOR_BGR2GRAY is a cvtColor code.
// Reading in color here, since the loop below treats the diff as 3-channel.
Mat img1 = imread(image_path1, IMREAD_COLOR);
Mat img2 = imread(image_path2, IMREAD_COLOR);
/*cvtColor(img1, img1, CV_BGR2GRAY);
cvtColor(img2, img2, CV_BGR2GRAY);*/

cv::Mat diffImage;
cv::absdiff(img2, img1, diffImage);

// the mask is written as single-channel below, so create it as CV_8UC1
cv::Mat foregroundMask = cv::Mat::zeros(diffImage.rows, diffImage.cols, CV_8UC1);

float threshold = 30.0f;
float dist;

for (int j = 0; j < diffImage.rows; ++j)
{
    for (int i = 0; i < diffImage.cols; ++i)
    {
        cv::Vec3b pix = diffImage.at<cv::Vec3b>(j, i);
        dist = (pix[0]*pix[0] + pix[1]*pix[1] + pix[2]*pix[2]);
        dist = sqrt(dist);

        if (dist > threshold)
        {
            foregroundMask.at<unsigned char>(j, i) = 255;
        }
    }
}

sprintf(imgName, "D:/outputer/d.jpg");
imwrite(imgName, diffImage);
I want to bound the difference part in a rectangle. findContours is drawing too many contours, but I only need a particular portion. My diff image is:
I want to draw a single rectangle around all five dials.
Please point me in the right direction.
Regards,
I would search for the highest i index that gives a non-black pixel; that's the right border.
The lowest non-black i is the left border. Similarly for j.
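A minimal sketch of that scan, assuming bin is the binarized, single-channel diff image:

// track the extreme coordinates of all non-black pixels
int left = bin.cols, right = -1, top = bin.rows, bottom = -1;
for (int j = 0; j < bin.rows; ++j)
{
    for (int i = 0; i < bin.cols; ++i)
    {
        if (bin.at<unsigned char>(j, i))
        {
            left   = std::min(left, i);
            right  = std::max(right, i);
            top    = std::min(top, j);
            bottom = std::max(bottom, j);
        }
    }
}
cv::Rect box(left, top, right - left + 1, bottom - top + 1);

This is only valid if at least one pixel survives the threshold; the findNonZero approach below does the same thing more concisely.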
You can:
binarize the image with a threshold; the background will be 0;
use findNonZero to retrieve all the points that are not 0, i.e. all the foreground points;
use boundingRect on the retrieved points.
Result:
Code:
#include <opencv2/opencv.hpp>
using namespace cv;
using namespace std; // needed for the unqualified vector below

int main()
{
    // Load image (grayscale)
    Mat1b img = imread("path_to_image", IMREAD_GRAYSCALE);

    // Binarize image
    Mat1b bin = img > 70;

    // Find non-black points
    vector<Point> points;
    findNonZero(bin, points);

    // Get bounding rect
    Rect box = boundingRect(points);

    // Draw (in color)
    Mat3b out;
    cvtColor(img, out, COLOR_GRAY2BGR);
    rectangle(out, box, Scalar(0,255,0), 3);

    // Show
    imshow("Result", out);
    waitKey();

    return 0;
}
Find contours; it will output a set of contours as std::vector<std::vector<cv::Point>>. Let us call it contours:
std::vector<cv::Point> all_points;
size_t points_count{0};
for (const auto& contour : contours) {
    points_count += contour.size();
    all_points.reserve(points_count); // was reserve(all_points), which doesn't compile
    std::copy(contour.begin(), contour.end(),
              std::back_inserter(all_points));
}
auto bounding_rectangle = cv::boundingRect(all_points);
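An equivalent and arguably simpler variant (a sketch, not from the original answer) relies on cv::Rect's union operator instead of concatenating all the points:

// grow one rectangle to the union of every contour's bounding box
cv::Rect bounding_rectangle = contours.empty() ? cv::Rect()
                                               : cv::boundingRect(contours[0]);
for (size_t i = 1; i < contours.size(); ++i) {
    bounding_rectangle |= cv::boundingRect(contours[i]);
}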

OpenCV: Why is the ROI Mat distorted?

I'm having an issue where the ROI of a Mat gets distorted. I think the image below describes the problem best. The image is first converted to gray with cv::cvtColor, then I use cv::threshold on the gray Mat, then Canny on the thresholded Mat, and then I find the contours. I go through all the contours and find the biggest one based on its area. I then use drawContours to use it as a mask on the thresholded Mat. And that's where I use the following code:
cv::Mat inMat = [self cvMatFromUIImage: inputImage];
cv::Mat grayMat, threshMat, canny_output;
cv::cvtColor(inMat, grayMat, CV_BGR2GRAY);
cv::threshold(grayMat, threshMat, 128, 255, CV_THRESH_BINARY | CV_THRESH_OTSU);
Canny(threshMat, canny_output, 100, 100, 3);

cv::Mat drawing = cv::Mat::zeros(canny_output.size(), CV_8UC3);
std::vector<cv::Vec4i> hierarchy;
std::vector<std::vector<cv::Point> > contours;
cv::findContours(canny_output, contours, hierarchy, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);

NSMutableArray *contourAreasArray = [NSMutableArray new];
// Sorts the contour areas from greatest to smallest
for (int i = 0; i < contours.size(); ++i) {
    NSNumber *contourArea = [NSNumber numberWithDouble:cv::contourArea(contours[i])];
    [contourAreasArray addObject:contourArea];
}
NSSortDescriptor *highestToLowest = [NSSortDescriptor sortDescriptorWithKey:@"self" ascending:NO];
[contourAreasArray sortUsingDescriptors:[NSArray arrayWithObject:highestToLowest]];

cv::Rect maskRect;
Mat mask;
for (int i = 0; i < contours.size(); ++i)
{
    NSNumber *contourArea = [NSNumber numberWithDouble:cv::contourArea(contours[i])];
    cv::Rect brect = cv::boundingRect(contours[i]);
    if ([contourArea doubleValue] == [[contourAreasArray objectAtIndex:0] doubleValue]) {
        //NSLog(@"Drawing Contour");
        maskRect = cv::boundingRect(contours[i]);
        cv::drawContours(drawing, contours, i, cvScalar(255,255,255), CV_FILLED);
        cv::rectangle(drawing, brect, cvScalar(255));
    }
}

cv::Rect region_of_interest = maskRect;
cv::Mat roiImage = cv::Mat(threshMat, region_of_interest);

// final result is converted to UIImage for display
UIImage *threshUIImage = [self UIImageFromCVMat:drawing];
UIImage *resultUIImage = [self UIImageFromCVMat:roiImage];
self.plateImageView.image = resultUIImage;
self.threshImageView.image = threshUIImage;
Where am I going wrong here? If you need more code, let me know; I'll be more than happy to post it... (the left image is the extracted ROI Mat and the right one is the mask that I obtain from drawContours).
And this is the picture I'm trying to process to get the license plate.
Alright, well, I finally figured it out... The solution was to do the following, essentially just copying the ROI image into another Mat:
cv::Rect region_of_interest = maskRect;
cv::Mat roiImage = cv::Mat(threshMat, region_of_interest);
cv::Mat finalROIImage;
roiImage.copyTo(finalROIImage);
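A plausible explanation for why the copy fixes it (the answer doesn't spell it out, so this is an inference): cv::Mat(threshMat, region_of_interest) creates only a header pointing into threshMat's pixel buffer, so its rows are spaced by the parent's stride rather than tightly packed. A UIImage conversion routine that assumes packed rows will then shear the image. Copying produces a continuous buffer:

cv::Mat roiImage = cv::Mat(threshMat, region_of_interest);
// roiImage.isContinuous() is false here: each row still uses threshMat's stride
cv::Mat finalROIImage = roiImage.clone(); // clone() yields a compact, continuous copy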

How to find rhombus corners in OpenCV and crop a smaller rectangle out of it?

I am trying to get a plain rectangle out of a warped image.
For example, once I have this sort of image:
... I would like to crop the area corresponding to the following rectangle:
... BUT my code is extracting this bigger frame:
My code is shown below:
using namespace cv;  // needed for the unqualified names below
using namespace std;

int main(int argc, char** argv) {
    cv::Mat img = cv::imread(argv[1]);

    // Convert RGB Mat to GRAY
    cv::Mat gray;
    cv::cvtColor(img, gray, CV_BGR2GRAY);

    // Store the set of points in the image before assembling the bounding box
    std::vector<cv::Point> points;
    cv::Mat_<uchar>::iterator it = gray.begin<uchar>();
    cv::Mat_<uchar>::iterator end = gray.end<uchar>();
    for (; it != end; ++it) {
        if (*it)
            points.push_back(it.pos());
    }

    // Compute minimal bounding box
    Rect box = cv::boundingRect(cv::Mat(points));

    // Draw bounding box in the original image (debug purposes)
    cv::Point2f vertices[4];
    vertices[0] = Point2f(box.x, box.y + box.height);
    vertices[1] = Point2f(box.x, box.y);
    vertices[2] = Point2f(box.x + box.width, box.y);
    vertices[3] = Point2f(box.x + box.width, box.y + box.height);
    for (int i = 0; i < 4; ++i) {
        cv::line(img, vertices[i], vertices[(i + 1) % 4], cv::Scalar(0, 255, 0), 1, CV_AA);
        cout << "==== vertices (x, y) = " << vertices[i].x << ", " << vertices[i].y << endl;
    }

    cv::imshow("box", img);
    cv::imwrite("box.png", img);
    waitKey(0);
    return 0;
}
Any ideas on how to find the rhombus corners and reduce them to a smaller rectangle?
The most difficult part of this problem is actually finding the locations of the rhombus corners. If the images in your actual usage are much different from your example, this particular procedure for finding the rhombus corners may not work. Once this is achieved, you can sort the corner points by their distance from the center of the image. You are looking for points closest to the image center.
First, you must define a functor for the sort comparison (this could be a lambda if you can use C++11):
struct CenterDistance
{
    CenterDistance(cv::Point2f pt) : center(pt) {}

    template <typename T>
    bool operator() (cv::Point_<T> p1, cv::Point_<T> p2) const
    {
        return cv::norm(p1 - center) < cv::norm(p2 - center);
    }

    cv::Point2f center;
};
This doesn't actually need to be a templated operator(), but it makes it work for any cv::Point_ type.
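For reference, the C++11 lambda mentioned above would look something like this (equivalent to the functor, not part of the original answer):

cv::Point2f center(img.size().width / 2.f, img.size().height / 2.f);
std::sort(points.begin(), points.end(),
          [center](const cv::Point2f& p1, const cv::Point2f& p2) {
              return cv::norm(p1 - center) < cv::norm(p2 - center);
          });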
For your example image, the image corners are very well defined, so you can use a corner detector like FAST. Then, you can use cv::convexHull() to get the exterior points, which should be only the rhombus corners.
int main(int argc, char** argv) {
    cv::Mat img = cv::imread(argv[1]);

    // Convert RGB Mat to GRAY
    cv::Mat gray;
    cv::cvtColor(img, gray, CV_BGR2GRAY);

    // Detect image corners
    std::vector<cv::KeyPoint> kpts;
    cv::FAST(gray, kpts, 50);
    std::vector<cv::Point2f> points;
    cv::KeyPoint::convert(kpts, points);
    cv::convexHull(points, points);

    cv::Point2f center(img.size().width / 2.f, img.size().height / 2.f);
    CenterDistance centerDistance(center);
    std::sort(points.begin(), points.end(), centerDistance);

    // The two points with minimum distance are what we want
    cv::rectangle(img, points[0], points[1], cv::Scalar(0,255,0));

    cv::imshow("box", img);
    cv::imwrite("box.png", img);
    cv::waitKey(0);
    return 0;
}
Note that you can use cv::rectangle() instead of constructing the drawn rectangle from lines. The result is: