I'm trying to isolate small objects (the black specks) in semolina, like in the image below. I'm using OpenCV 3.1 in C++.
I tried different methods, and the best one I found is below, but the result is not as good as I would like.
I only want to keep the black objects in the final image.
Original image:
So I convert my image to grayscale, then apply a GaussianBlur followed by a large box blur to estimate the background:
GaussianBlur(img_gray, img_inter, Size(21, 21), 0, 0);
blur(img_inter, img_blur, Size(51, 51), Point(-1,-1), 0);
Background image:
Then I subtract the background from my original image:
Subtracted image:
And the final image after an opening operation, a threshold, and another opening:
Result image:
The full code:
cv::Mat img_gray, img_inter, img_blur, img;

cvtColor(*img_in, img_gray, CV_BayerGB2GRAY);              // raw Bayer frame to grayscale
GaussianBlur(img_gray, img_inter, Size(21, 21), 0, 0);
blur(img_inter, img_blur, Size(51, 51), Point(-1,-1), 0);  // background estimate

img = (255 - img_gray) - (255 - img_blur); // Background subtraction

morphologyEx(img, img_inter, cv::MORPH_OPEN, cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(3, 3)), Point(-1,-1), 1);
threshold(img_inter, img, 50, 255, CV_THRESH_BINARY);
morphologyEx(img, img_inter, cv::MORPH_OPEN, cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(3, 3)), Point(-1, -1), 1);
If you have any ideas to improve the result, thank you in advance.
I would like to extract the square shape from the right-hand image above. But when I try, the result also includes other protruding parts because they have a similar color. Is there any way to get a result like the one below? (The square's edges are not 100% straight; they are slightly distorted.)
This is the code I wrote.
cv::Mat img_gray, img, clahe_img, threshold_img, bitwise_img, morph_img;
cv::Mat rectified_CCD_img = cv::imread("img.png");
cv::Mat kernel = cv::Mat::ones(99, 99, CV_8U);
cv::Ptr<cv::CLAHE> clahe = cv::createCLAHE(10, cv::Size(100, 100));
cv::cvtColor(rectified_CCD_img, img_gray, cv::COLOR_BGR2GRAY);
cv::medianBlur(img_gray, img, 33);
clahe->apply(img, clahe_img);
cv::threshold(clahe_img, threshold_img, 0, 255, cv::THRESH_OTSU);
cv::bitwise_not(threshold_img, bitwise_img);
cv::morphologyEx(bitwise_img, morph_img, cv::MORPH_OPEN, kernel);
That's the original image:
Google Drive link
For this specific image my pipeline would be very simple:
Binary threshold the image with a fixed threshold. The rectangle is quite dark compared to the rest of the image.
Morphological opening with a large rectangular kernel to get rid of the "noise".
To get a perfect rectangle, determine the bounding rectangle of the remaining part, and draw a white rectangle.
That'd be the whole code:
// Read image
cv::Mat img = cv::imread("OTH61.png", cv::IMREAD_GRAYSCALE);
// Binary threshold image at fixed threshold
cv::Mat img_thr;
cv::threshold(img, img_thr, 32, 255, cv::THRESH_BINARY_INV);
// Morphological opening with large rectangular kernel
cv::Mat img_mop;
cv::morphologyEx(img_thr, img_mop, cv::MORPH_OPEN, cv::Mat::ones(51, 51, CV_8UC1));
// Draw rectangle w.r.t. the bounding rectangle of the remaining part
cv::rectangle(img_mop, cv::boundingRect(img_mop), 255, cv::FILLED);
The thresholded image:
The morphological opened image:
The cleaned image:
I'm currently working on a project which reads an image and applies a number of filters, with the goal of placing a bounding rect around regions of interest.
I have an image of handwritten text on lined paper as my input:
string imageLocation = "location of image file";
src = imread(imageLocation, 1);
I then convert the image to grayscale and apply adaptive thresholding:
cvtColor(src, newsrc, CV_BGR2GRAY);
adaptiveThreshold(~newsrc, dst, 255, CV_ADAPTIVE_THRESH_MEAN_C, THRESH_BINARY, 15, -2);
I then use morphological operations to attempt to eliminate the horizontal lines from the image:
Mat horizontal = dst.clone();
int horizontalSize = dst.cols / 30;
Mat horizontalStructure = getStructuringElement(MORPH_RECT, Size(horizontalSize,1));
erode(horizontal, horizontal, horizontalStructure, Point(-1, -1));
dilate(horizontal, horizontal, horizontalStructure, Point(-1, -1));
cv::resize(horizontal, horizontal, cv::Size(), 0.5, 0.5, CV_INTER_CUBIC);
imshow("horizontal", horizontal);
Which produces the following (so far so good):
I then try to use the same erode & dilate methods to figure out the vertical:
Mat vertical = dst.clone();
int verticalsize = dst.rows / 30;
Mat verticalStructure = getStructuringElement(MORPH_RECT, Size(1, verticalsize));
erode(vertical, vertical, verticalStructure, Point(-1, -1));
dilate(vertical, vertical, verticalStructure, Point(-1, -1));
cv::resize(vertical, vertical, cv::Size(), 0.5, 0.5, CV_INTER_CUBIC);
imshow("vertical", vertical);
I'm following OpenCV's example, which can be found here
But the output I'm getting for the vertical is:
My question is: how would I go about removing these horizontal lines from the image?
Sorry for the lengthy question (I wanted to explain as much as I could) and thanks in advance for any advice.
You can try to do this in the frequency domain, as shown here:
http://lifeandprejudice.blogspot.ru/2012/07/activity-6-enhancement-in-frequency_25.html
http://www.fmwconcepts.com/imagemagick/fourier_transforms/fourier.html
Working with the FFT is very effective for adding/removing regular grids in an image.
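As a rough, untested sketch of that idea (not the linked tutorials' code; the file name and the number of low frequencies kept are placeholder assumptions): horizontal lines repeated down the page show up as peaks along the vertical frequency axis of the DFT, so zeroing that column away from DC suppresses them.
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat img = cv::imread("lined_paper.png", cv::IMREAD_GRAYSCALE);
    if (img.empty()) return -1;

    cv::Mat f;
    img.convertTo(f, CV_32F);

    // Forward DFT (complex output, DC at the top-left corner)
    cv::Mat spectrum;
    cv::dft(f, spectrum, cv::DFT_COMPLEX_OUTPUT);

    // Zero the vertical-frequency column except near DC; the ruled lines
    // contribute most of their energy there.
    const int keep = 5;
    for (int y = keep; y < spectrum.rows - keep; ++y)
        spectrum.at<cv::Vec2f>(y, 0) = cv::Vec2f(0.f, 0.f);

    // Inverse DFT back to a displayable image
    cv::Mat restored, result;
    cv::dft(spectrum, restored, cv::DFT_INVERSE | cv::DFT_REAL_OUTPUT | cv::DFT_SCALE);
    restored.convertTo(result, CV_8U);
    cv::imwrite("lines_suppressed.png", result);
    return 0;
}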
I'm currently working on a project where I need to display a processed live video capture. Therefore, I'm using something similar to this:
cv::VideoCapture cap(0);
if (!cap.isOpened())
return -1;
cap.set(CV_CAP_PROP_FRAME_WIDTH, 1280);
cap.set(CV_CAP_PROP_FRAME_HEIGHT, 720);
cv::namedWindow("Current Capture");
for (;;)
{
cv::Mat frame;
cap >> frame;
cv::Mat mirrored;
cv::flip(frame, mirrored, 1);
cv::imshow("Current Capture", process_image(mirrored));
if (cv::waitKey(30) >= 0) break;
}
The problem I have is that process_image, which performs circle detection on the image, takes some time to finish and causes the display to be more of a slideshow than a video.
My question is: how can I speed up the processing without modifying the process_image function?
I thought about performing the image processing in another thread, but I'm not really sure how to start. Do you have any ideas other than this?
PS.: I'm not expecting you to write code for me, I only need a point to start from ;)
EDIT:
OK, if there is nothing I can do about the performance while capturing, I will need to change the process_image function.
cv::Mat process_image(cv::Mat img)
{
cv::Mat hsv;
cv::medianBlur(img, img, 7);
cv::cvtColor(img, hsv, cv::COLOR_BGR2HSV);
cv::Mat lower_hue_range; // lower and upper hue range in case of red color
cv::Mat upper_hue_range;
cv::inRange(hsv, cv::Scalar(LOWER_HUE1, 100, 100), cv::Scalar(UPPER_HUE1, 255, 255), lower_hue_range);
cv::inRange(hsv, cv::Scalar(LOWER_HUE2, 100, 100), cv::Scalar(UPPER_HUE2, 255, 255), upper_hue_range);
/// Combine the above two images
cv::Mat hue_image;
cv::addWeighted(lower_hue_range, 1.0, upper_hue_range, 1.0, 0.0, hue_image);
/// Reduce the noise so we avoid false circle detection
cv::GaussianBlur(hue_image, hue_image, cv::Size(13, 13), 2, 2);
/// store all found circles here
std::vector<cv::Vec3f> circles;
cv::HoughCircles(hue_image, circles, CV_HOUGH_GRADIENT, 1, hue_image.rows / 8, 100, 20, 0, 0);
for (size_t i = 0; i < circles.size(); i++)
{
/// circle center
cv::circle(hsv, cv::Point(circles[i][0], circles[i][1]), 3, cv::Scalar(0, 255, 0), -1, 8, 0);
/// circle outline
cv::circle(hsv, cv::Point(circles[i][0], circles[i][1]), circles[i][2], cv::Scalar(0, 0, 255), 3, 8, 0);
}
cv::Mat newI;
cv::cvtColor(hsv, newI, cv::COLOR_HSV2BGR);
return newI;
}
Is there a big performance issue here that I can do anything about?
If you are sure that the process_image function is what is causing the bottleneck in your program, but you can't modify it, then there's not really a lot you can do. If that function takes longer to execute than the duration of a video frame, then you will never get what you need.
How about reducing the quality of the video capture, or reducing its size? At the moment I can see you have it set to 1280×720. If the process_image function has less data to work with, it should execute faster.
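For example (the exact numbers are just an illustration, reusing the cap, mirrored and process_image names from the question), either ask the camera for smaller frames or shrink each frame before processing it:
// Request a smaller capture size up front ...
cap.set(CV_CAP_PROP_FRAME_WIDTH, 640);
cap.set(CV_CAP_PROP_FRAME_HEIGHT, 360);

// ... or downscale each frame just before the slow call
cv::Mat small;
cv::resize(mirrored, small, cv::Size(), 0.5, 0.5, cv::INTER_AREA);
cv::imshow("Current Capture", process_image(small));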
Source image:
Destination image:
Code:
cv::Mat sharpenedLena;
cv::Mat kernel = (cv::Mat_<float>(3, 3) << 0, -1, 0, -1, 5, -1, 0, -1, 0);
cv::filter2D(matGrey, sharpenedLena, matGrey.depth(), kernel);
cv::adaptiveThreshold(sharpenedLena, matBinary, 255, cv::ADAPTIVE_THRESH_GAUSSIAN_C, CV_THRESH_BINARY, 55, 30);
cv::Mat dst_img1;
//cv::GaussianBlur(matBinary, dst_img1, cv::Size(3,3), 0, 0);
cv::medianBlur(matBinary, dst_img1, 3);
UIImage *addrUIImage = [ImageUtil UIImageFromCVMat:dst_img1];
[self recognizeImageWithTesseract:addrUIImage withLauange:1];
Result:
三胡南省慈利昙龙三覃河镇文
I think the problem is with the image preprocessing. Below is the kind of result others have achieved. How can I achieve this effect?
Target image:
Here are my results and code snippet:
Mat mSource_Bgr,mSource_Gray,mSource_Hsv,mThreshold;
mSource_Bgr= imread(FileName_S.c_str(),1);
namedWindow("Source Image",WINDOW_AUTOSIZE);
imshow("Source Image",mSource_Bgr);
cvtColor(mSource_Bgr,mSource_Hsv,COLOR_BGR2HSV);
mSource_Hsv = mSource_Hsv + Scalar(0,0,-25); // Subtract 25 from the V (brightness) channel
cvtColor(mSource_Hsv,mSource_Bgr,COLOR_HSV2BGR); // Back to BGR, just for debug purposes
imshow("Improved Darkness",mSource_Bgr);
imwrite(FileName_S+"_Res.bmp",mSource_Bgr);
cvtColor(mSource_Bgr,mSource_Gray,COLOR_BGR2GRAY); // for Adaptive Thresholding the input Image
adaptiveThreshold(mSource_Gray,mThreshold,255,ADAPTIVE_THRESH_GAUSSIAN_C,THRESH_BINARY,59,10);
imshow("Adaptive Thres",mThreshold);
imwrite(FileName_S+"_Thres.bmp",mThreshold);
You can remove the noise (i.e., the small dots) by using contour area or by morphological processing; a rough sketch of the contour-area version is below. Hope this helps!
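The sketch below is only an illustration of that idea, continuing from mThreshold above (the area limit is an assumption to tune): after inverting, the dark speckles become white blobs, and any blob smaller than minBlobArea is painted out.
Mat mInverted;
bitwise_not(mThreshold, mInverted);            // dark speckles become white blobs

vector<vector<Point> > contours;
findContours(mInverted, contours, RETR_LIST, CHAIN_APPROX_SIMPLE);

double minBlobArea = 30.0;                     // assumption: tune to the dot size
for (size_t i = 0; i < contours.size(); i++)
{
    if (contourArea(contours[i]) < minBlobArea)
        drawContours(mThreshold, contours, (int)i, Scalar(255), FILLED);
}
imshow("Cleaned Thres", mThreshold);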
You can try using adaptiveThreshold() or algorithms such as MSER in OpenCV.
These will perform better; MSER in particular, and its variant CSER, are designed to detect text-like structures.
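A minimal, untested sketch of the MSER suggestion (default parameters, and the input file name is an assumption): detect stable regions on the grayscale image and draw their bounding boxes, which tend to land on text-like structures.
cv::Mat gray = cv::imread("input.png", cv::IMREAD_GRAYSCALE);

cv::Ptr<cv::MSER> mser = cv::MSER::create();
std::vector<std::vector<cv::Point> > regions;
std::vector<cv::Rect> boxes;
mser->detectRegions(gray, regions, boxes);

// Visualize the candidate text regions
cv::Mat vis;
cv::cvtColor(gray, vis, cv::COLOR_GRAY2BGR);
for (size_t i = 0; i < boxes.size(); i++)
    cv::rectangle(vis, boxes[i], cv::Scalar(0, 255, 0), 1);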
You can try using binary thresholding with an OPEN morphology operation, for example:
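A minimal sketch of that suggestion (the threshold value, kernel size and file name are assumptions to tune for the photo): a fixed binary threshold followed by a morphological opening to knock out small specks before running OCR.
cv::Mat gray = cv::imread("card.png", cv::IMREAD_GRAYSCALE);

cv::Mat bin;
cv::threshold(gray, bin, 100, 255, cv::THRESH_BINARY);

cv::Mat opened;
cv::morphologyEx(bin, opened, cv::MORPH_OPEN,
                 cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3)));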
I'm getting started with OpenCV (for a software project at university) and found a tutorial for colored circle detection, which I adapted and tested. It was written with OpenCV 1 in C, so I tried to convert it to the OpenCV 2 C++ API. Everything went fine, but I ran into one problem:
The C function cvHoughCircles produces different results than the C++ function HoughCircles.
The C version finds my test circle and has a low rate of false positives, but the C++ version has a significantly higher false-positive rate.
//My C implementation
IplImage *img = cvQueryFrame( capture );
CvSize size = cvGetSize(img);
IplImage *hsv = cvCreateImage(size, IPL_DEPTH_8U, 3);
cvCvtColor(img, hsv, CV_BGR2HSV);
CvMat *mask = cvCreateMat(size.height, size.width, CV_8UC1);
cvInRangeS(hsv, cvScalar(107, 61, 0, 0), cvScalar(134, 255, 255, 0), mask);
/* Copy mask into a grayscale image */
IplImage *hough_in = cvCreateImage(size, 8, 1);
cvCopy(mask, hough_in, NULL);
cvSmooth(hough_in, hough_in, CV_GAUSSIAN, 15, 15, 0, 0);
cvShowImage("mask",hough_in);
/* Run the Hough function */
CvMemStorage *storage = cvCreateMemStorage(0);
CvSeq *circles = cvHoughCircles(hough_in, storage, CV_HOUGH_GRADIENT,
4, size.height/4, 100, 40, 0, 0);
// ... iterating over all found circles
This works pretty well.
//My C++ implementation
cv::Mat img;
cap.read(img);
cv::Size size(img.cols,img.rows);
cv::Mat hsv(size, CV_8UC3);
cv::cvtColor(img, hsv, CV_BGR2HSV);
cv::Mat mask(size.height, size.width, CV_8UC1);
cv::inRange(hsv, cv::Scalar(107, 61, 0, 0), cv::Scalar(134, 255, 255, 0), mask);
GaussianBlur( mask, mask, cv::Size(15, 15), 0, 0 );
/* Run the Hough function */
imshow("mask",mask);
vector<cv::Vec3f> circles;
cv::HoughCircles(mask, circles, CV_HOUGH_GRADIENT,
4, size.height/4, 100, 140, 0, 0);
// ... iterating over all found circles
As you can see, I use the same arguments in all calls. I tested this with a webcam and a static sample object. One requirement is to use the OpenCV 2 C++ API.
Does anybody know why I get such different results under equivalent conditions?
Edit
The different threshold values were just a mistake I made while testing to make the results more comparable. These screenshots were taken with the threshold set to 40 for both versions:
Screenshots: (Sorry, cannot yet post images)
C and C++ version
I see the Hough parameters in the C version as "..., 100, 40, 0, 0);" while in the C++ version they are "..., 100, 140, 0, 0);". This difference in the accumulator threshold probably explains the difference in results.
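For comparison, a minimal sketch of the C++ call with the accumulator threshold matched to the C version (40 instead of 140); everything else is exactly as in the question's code.
cv::HoughCircles(mask, circles, CV_HOUGH_GRADIENT,
                 4, size.height/4, 100, 40, 0, 0);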