I am doing some ellipse recognition in an image and in order to do so I am opening a simple image:
img = imread("M:/Desktop/PsEyeRight.jpg", CV_LOAD_IMAGE_COLOR);
selecting a ROI (that's the only way I could see to set a ROI in OpenCV 2.4.6; the old C API had cvSetImageROI() and cvResetImageROI(), which I think were simpler):
Mat roi(img, Rect(Point(205, 72), Point(419,285)));
changing its color space with cvtColor:
cvtColor(roi, roi, CV_BGR2GRAY);
applying Threshold:
threshold(roi, roi, 150, 255, THRESH_BINARY);
Then I call findContours on a cloned image, since findContours modifies the image passed to it, and convert the ROI back to the BGR color space:
cvtColor(roi, roi, CV_GRAY2BGR);
And draw all the found ellipses in roi.
When I show roi I can see that everything worked 100%, but I expected that showing the original image would show it with the thresholded ROI and the drawings inside it. Instead I just get the original image, as if nothing had changed. I believe this happens because cvtColor copies roi, so it no longer "points" to img.
What's the best way (or recommended) to do this same processing and have the ROI inside the original image, showing the progress of the algorithm?
The main problem is that you can't have an image that is partly 3-channel (BGR) and partly 1-channel (gray).
My solution would be to work on a copy of the ROI in the first place, then convert it back to BGR and paste it into the original image.
img = imread("M:/Desktop/PsEyeRight.jpg", CV_LOAD_IMAGE_COLOR); // original
Mat roi(img, Rect(Point(205, 72), Point(419,285)));
Mat work = roi.clone();
cvtColor(work, work, CV_BGR2GRAY);
threshold(work, work, 150, 255, THRESH_BINARY);
// findContours(work, ...);
cvtColor(work, roi, CV_GRAY2BGR); // here's the trick
Related
I want to detect blobs using OpenCV's SimpleBlobDetector. With that class:
cv::Ptr<cv::SimpleBlobDetector> detector = cv::SimpleBlobDetector::create(parameters);
detector->detect( inputImage, keypoints);
This works fine, until I want to introduce a mask so that the detector only looks for blobs within the mask.
detector->detect( inputImage, keypoints, zmat );
The documentation (link) says:
Mask specifying where to look for keypoints (optional). It must be a
8-bit integer matrix with non-zero values in the region of interest.
My understanding is that the detect algorithm will search only the non-zero elements of the mask matrix. So I created a mask and populated it this way:
cv::Mat zmat = cv::Mat::zeros(inputImage.size(), CV_8UC1);
cv::Scalar color(255,255,255);
cv::Rect rect(x,y,w,h);
cv::rectangle(zmat, rect, color, CV_FILLED);
However, it seems that the mask is not doing anything and the detect algorithm is detecting everything. I am using OpenCV 3.2.
I also tried a simple ROI, but the detector still detects things all over:
cv::Mat roi(zmat, cv::Rect(10,10,600,600));
roi = cv::Scalar(255, 255, 255);
// match keypoints of connected components with blob detection
detector->detect( inputImage, keypoints, zmat);
Sorry it's not better news.
Using my installed version of OpenCV (a 3.1.0 dev version, built September 2016 - I really don't want to reinstall that thing!), I have this problem too. SimpleBlobDetector just ignores the mask data. There's a quick-and-dirty workaround using a Mat copy with a ROI (mostly your code, but declare zmat with 3 channels):
cv::Mat zmat = cv::Mat::zeros(gImg.size(), CV_8UC3);
cv::Scalar color(255,255,255);
cv::Rect rect(x,y,w,h);
cv::rectangle(zmat, rect, color, CV_FILLED);
inputImage.copyTo(zmat, zmat);
detector->detect(zmat, keypoints);
So you end up with your input image in zmat, but with the "uninteresting" areas blacked (zeroed) out. Technically it isn't using much more memory than declaring your mask, and it doesn't interfere with your input image either.
The only other thing worth checking is that your rectangle rect does specify something that isn't the complete frame - I obviously substituted in my own values for that for testing.
I have a color image that I want to copy to a white image, but only certain pixels. I want to keep the original colors of those pixels in the white image. I'm using a mask and the copyTo function, but the resulting image shows up in only one color. What am I doing wrong? The input is a photo of a lined paper sheet with a hand-drawn figure; I'm trying to retain only the figure and the lines, with their colors, in a white image (to get rid of the noise, shadows and wrinkles).
Mat src = imread("D:/Images/test.jpg", 1);
imshow("input", src);
Mat gray, binary;
Mat dst_eq = adaptiveHistogramEqualization(src); //adaptive histogram equalization to enhance contrast
imshow("adaptive histogram", dst_eq);
cvtColor(dst_eq, gray, CV_BGR2GRAY);
GaussianBlur(gray, gray, Size(5, 5), 0, 0);
adaptiveThreshold(~gray, binary, 255, CV_ADAPTIVE_THRESH_GAUSSIAN_C, THRESH_BINARY_INV, 3, -2); //binary mask
imshow("binary mask", ~binary);
Mat white2 = Mat(src.size(), src.type());
white2 = cv::Scalar(255, 255, 255);
src.copyTo(white2, ~binary);
imshow("Result", white2);
If this can be done differently, please tell me. BTW, does anyone know how to get rid of the lines in the paper? Keep in mind that the photos will be taken with a phone, so there will be significant lighting variations, the sheet could have a squared or lined background, and the lines could be of different colors (not necessarily blue). I tried color segmentation (supposing blue lines) with no success due to the lighting variations and noise. I also tried morphological transformations. In both cases, it was impossible to remove the background and keep the figure intact in all my test images.
I have a source image. I need a particular portion to be segmented from it and save it as another image. I have the canny outline of the portion I need to be segmented out,but how do I use it to cut the portion from the source image? I have attached both the source image and the canny edge outline. Please help me and suggest me a solution.
EDIT-1: Alexander Kondratskiy, is this what you meant by filling the boundary?
EDIT-2: Following Kannat's suggestion, I have done this.
Now how do I separate the regions that are outside and inside of the contour into two separate images?
EDIT-3: I thought of AND-ing the mask and the contour-lined source image. Since I am using C, I am having a little difficulty.
This is the code I use to AND them:
hsv_gray = cvCreateImage( cvSize(seg->width, seg->height), IPL_DEPTH_8U, 1 );
cvCvtColor( seg, hsv_gray, CV_BGR2GRAY );
hsv_mask=cvCloneImage(hsv_gray);
IplImage* contourImg =cvCreateImage( cvSize(hsv_mask->width, hsv_mask->height), IPL_DEPTH_8U, 3 );
IplImage* newImg=cvCreateImage( cvSize(hsv_mask->width, hsv_mask->height), IPL_DEPTH_8U, 3 );
cvAnd(contourImg, hsv_mask,newImg,NULL);
I always get an error of mismatch size or Type. I adjusted the size but I can't seem to adjust the type,since one(hsv_mask) is 1 channel and the others are 3 channels.
@kanat - I also tried your boundingRect, but could not seem to get it right in the C API.
Use cv::findContours on your second image to find the contour of the segment, then cv::boundingRect to get the bounding box of that segment. After that you can create a matrix and save the cropped bounding box from your second image into it (as I see, it is a binary image). To crop the needed region, use this:
cv::getRectSubPix(your_image, BB_size, cv::Point(BB.x + BB.width/2,
BB.y + BB.height/2), new_image);
Then you can save new_image using cv::imwrite. That's it.
EDIT:
If you found only one contour, use this (otherwise iterate over the found contours). The following code shows the steps, but sorry, I can't test it right now:
std::vector<std::vector<cv::Point>> contours;
// cv::findContours(..., contours, ...);
cv::Rect BB = cv::boundingRect(cv::Mat(contours[0]));
cv::Mat new_image;
cv::getRectSubPix(your_image, BB.size(), cv::Point(BB.x + BB.width/2,
BB.y + BB.height/2), new_image);
cv::imwrite("new_image_name.jpg", new_image);
You could fill the boundary created by the canny-edge detector, and use that as an alpha mask on the original image.
I'm having a bit of trouble with an image that I'm converting for colour recognition.
The function looks like this:
void PaintHSVWindow(cv::Mat img){
cv::Mat HSV, threshold;
cvtColor(img, HSV, COLOR_BGR2HSV);
inRange(HSV, cv::Scalar(HMin, SMin, VMin), cv::Scalar(HMax, SMax, VMax), threshold);
Mat erodeElement = getStructuringElement(MORPH_RECT, cv::Size(3, 3));
Mat dilateElement = getStructuringElement(MORPH_RECT, cv::Size(8, 8));
erode(threshold, threshold, erodeElement);
dilate(threshold, threshold, dilateElement);
cv::resize(threshold, threshold, cv::Size(360, 286));
MyForm::setHSVWindow(threshold);
}
And the output looks as follows:
On the left is the input. On the right is supposed to be the same image, converted to HSV, filtered between the given thresholds to find the yellow ball, eroded and dilated to remove the smaller contours, and displayed at half the size of the original image. Instead, it squashes three copies of the expected image into the same space.
Any guesses as to why this would happen?
UPDATE 1:
OK - since running findContours on the right-hand image still gives me the proper output (the contours from the distorted, three-times-copied right-side image can be pasted into the right position on the left-side input image), I've decided to just take the distorted image and crop it for display purposes. It will only ever be used to find the contours of a given HSV range in an image, and if it serves that purpose, I'm happy.
As @Nallath comments, this is apparently a channel issue. According to the documentation, the output of inRange() should be a 1-channel CV_8U image: the logical AND of the per-channel range checks.
Your result means that somewhere along the way, threshold is being treated like a 3-channel, plane-order image.
What version of OpenCV are you using?
I suggest that you show threshold between every step to find the place where this conversion happens. This might be a bug that should be reported.
OK, sorry for asking pretty much the same question again, but I've tried many methods and I still can't do what I'm trying to do, and I'm not even sure it's possible with OpenCV alone.
I have rotated an image and I want to copy it inside another image. The problem is that no matter how I crop the rotated image, it always gets copied into the second image with a non-rotated square around it, as can be seen in the image below. (Forget the white part, that's OK.) I just want to remove the striped part.
I believe my problem is with my ROI that I copy the image to as this ROI is a rect and not a RotatedRect. As can be seen in the code below.
cv::Rect roi(Pt1.x, Pt1.y, ImageAd.cols, ImageAd.rows);
ImageAd.copyTo(ImageABC(roi));
But I can't copyTo with a RotatedRect, as in the code below...
cv::RotatedRect roi(cent, sizeroi, angled);
ImageAd.copyTo(ImageABC(roi));
So is there a way of doing what I want in opencv?
Thanks!
After using the mask method below, I get this image, which as you can see is cut off by the ROI that I use to specify where in the image to copy my rotated image. Basically, now that I've masked the image, how can I choose where to put this masked image in my second image? At the moment I use a Rect, but that won't work since my image is no longer a rect but a rotated rect. The code below shows my current (wrong) approach (it cuts off, and if I make the rect bigger an exception is thrown).
cv::Rect roi(Pt1.x, Pt1.y, creditcardimg.cols, creditcardimg.rows);
creditcardimg.copyTo(imagetocopyto(roi),mask);
Instead of a ROI, you can copy using a mask:
First create a mask from the rotated rect.
Then copy your source image to the destination image using this mask.
See the C++ code below.
Here is your rotated rect, which I calculated manually:
RotatedRect rRect = RotatedRect(Point2f(140,115),Size2f(115,80),192);
Create the mask by drawing a filled contour:
Point2f vertices[4];
rRect.points(vertices);
Mat mask(src.rows, src.cols, CV_8UC1, cv::Scalar(0));
vector< vector<Point> > co_ordinates;
co_ordinates.push_back(vector<Point>());
co_ordinates[0].push_back(vertices[0]);
co_ordinates[0].push_back(vertices[1]);
co_ordinates[0].push_back(vertices[2]);
co_ordinates[0].push_back(vertices[3]);
drawContours(mask, co_ordinates, 0, Scalar(255), CV_FILLED, 8);
Finally, copy the source to the destination using the mask above.
Mat dst;
src.copyTo(dst,mask);