Hi,
I am using OpenCV 2.4.4 to develop the following application:
read an image in Mat format;
transform it to grayscale;
detect the face, if it exists;
crop the face, extracting the Region Of Interest;
save the cropped image to file.
My question is: could I possibly set a standard dimension (width and height) for the saved image?
I tried to use the resize function, but it doesn't do what I actually want, because it saves only a part of the face.
cv::cvtColor(croppedImage, greyMat, CV_BGR2GRAY);
greyMat.resize(100);
imwrite("result.jpg", greyMat);
Mat::resize() only changes the number of rows (it adds or removes rows at the bottom), which is why you only keep part of the face. Use cv::resize instead:
cv::cvtColor(croppedImage, greyMat, CV_BGR2GRAY);
cv::Mat result;
cv::resize(greyMat, result, cv::Size(100,100));
imwrite("result.jpg", result);
See the documentation for details.
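If you also want to preserve the face's aspect ratio instead of stretching it to 100x100, one possible approach (just a sketch, not tested) is to scale by a single factor and pad the remainder with a black border:
// Sketch: fit the cropped face into 100x100 without distortion.
// Assumes greyMat is the cropped grayscale face from the code above.
double scale = 100.0 / std::max(greyMat.cols, greyMat.rows);
cv::Mat scaled;
cv::resize(greyMat, scaled, cv::Size(), scale, scale);
cv::Mat result;
cv::copyMakeBorder(scaled, result, 0, 100 - scaled.rows, 0, 100 - scaled.cols,
                   cv::BORDER_CONSTANT, cv::Scalar(0)); // pad to exactly 100x100
imwrite("result.jpg", result);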
I have a source image. I need a particular portion to be segmented from it and saved as another image. I have the Canny outline of the portion I need segmented out, but how do I use it to cut that portion from the source image? I have attached both the source image and the Canny edge outline. Please suggest a solution.
EDIT 1: Alexander Kondratskiy, is this what you meant by filling the boundary?
EDIT 2: Following Kanat's suggestion, I have done this.
Now how do I separate the regions inside and outside of the contour into two separate images?
EDIT 3: I thought of AND-ing the mask and the contour-lined source image. Since I am using C, I am having a little difficulty.
This is the code I use for the AND:
hsv_gray = cvCreateImage( cvSize(seg->width, seg->height), IPL_DEPTH_8U, 1 );
cvCvtColor( seg, hsv_gray, CV_BGR2GRAY );
hsv_mask = cvCloneImage(hsv_gray);
IplImage* contourImg = cvCreateImage( cvSize(hsv_mask->width, hsv_mask->height), IPL_DEPTH_8U, 3 );
IplImage* newImg = cvCreateImage( cvSize(hsv_mask->width, hsv_mask->height), IPL_DEPTH_8U, 3 );
cvAnd(contourImg, hsv_mask, newImg, NULL);
I always get a size or type mismatch error. I adjusted the size, but I can't seem to adjust the type, since one image (hsv_mask) is single-channel and the others are 3-channel.
@kanat - I also tried your boundingRect suggestion but could not seem to get it right with the C API.
Use cv::findContours on your second image to find the contour of the segment. Then use cv::boundingRect to find the bounding box of this segment. After that you can create a matrix and save the cropped bounding box from your second image into it (as far as I can see, it is a binary image). To crop the needed region use this:
cv::getRectSubPix(your_image, BB_size, cv::Point(BB.x + BB.width/2,
BB.y + BB.height/2), new_image).
Then you can save new_image using cv::imwrite. That's it.
EDIT:
If you found only one contour then use this (otherwise, iterate through the found contours). The following code shows the steps, but sorry, I can't test it right now:
std::vector<std::vector<cv::Point>> contours;
// cv::findContours(..., contours, ...);
cv::Rect BB = cv::boundingRect(cv::Mat(contours[0]));
cv::Mat new_image;
cv::getRectSubPix(your_image, BB.size(), cv::Point(BB.x + BB.width/2,
BB.y + BB.height/2), new_image);
cv::imwrite("new_image_name.jpg", new_image);
You could fill the boundary created by the Canny edge detector, and use that as an alpha mask on the original image.
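For reference, here is a rough sketch of that idea (untested; it assumes the Canny outline forms a closed contour, and uses the filled contour as a copy mask rather than a true alpha channel):
// edges: the Canny outline (CV_8UC1), original: the source image (CV_8UC3)
std::vector<std::vector<cv::Point> > contours;
cv::findContours(edges.clone(), contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
cv::Mat mask = cv::Mat::zeros(edges.size(), CV_8UC1);
cv::drawContours(mask, contours, -1, cv::Scalar(255), CV_FILLED); // fill the boundary
cv::Mat segmented;
original.copyTo(segmented, mask); // pixels outside the contour stay black
cv::imwrite("segmented.jpg", segmented);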
I am implementing an algorithm in which I have to use multi-resolution analysis. What the paper says is that I have to perform some processing at a lower scale, find some pixel locations, and then remap those pixels to the original scale. I really don't understand the remapping function in OpenCV. If anyone could help me, that would be great.
Thanks.
If you want to resize a picture in OpenCV, you can do this:
Mat img = imread(picturePath, CV_LOAD_IMAGE_GRAYSCALE); // load as a single-channel image
Mat resized(NEW_HEIGHT, NEW_WIDTH, CV_32FC1);
img.convertTo(img, CV_32FC1);
resize(img, resized, resized.size());
If you want to access a specific pixel, use:
img.at<var>(row, col)
In CV_32FC1 format replace "var" with "float"; for CV_8UC1 replace it with "uchar".
Hope this will help.
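As for remapping pixel locations found at the lower scale back to the original scale (not covered by the code above, just a sketch): if the small image came from a plain resize, the coordinates simply scale linearly:
// Sketch: map a point found in the downscaled image back to the original image.
cv::Point2f toOriginalScale(cv::Point2f p, cv::Size small, cv::Size orig)
{
    float sx = (float)orig.width  / small.width;
    float sy = (float)orig.height / small.height;
    return cv::Point2f(p.x * sx, p.y * sy);
}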
I have written an Android app which uses OpenCV to manipulate images. I'm using the code below to write a cv::Mat object to a JPG file.
cv::imwrite("<sd card path>/img.jpg", <some mat object>);
I do see the image being saved on my SD card; however, the colors are not right. It has a bluish tint all over the image.
Does anyone know what I'm missing here?
The above comment by Haris resolved this issue. I modified the code to change the color space as below:
cv::Mat mat;
// Initialize mat
cv::cvtColor(mat, mat, CV_BGR2RGB);
cv::imwrite("<sd card path>/img.jpg", mat);
I was able to see the right image being saved with this.
I'm having a bit of trouble with an image that I'm converting for colour recognition.
The function looks like this:
void PaintHSVWindow(cv::Mat img){
cv::Mat HSV, threshold;
cvtColor(img, HSV, COLOR_BGR2HSV);
inRange(HSV, cv::Scalar(HMin, SMin, VMin), cv::Scalar(HMax, SMax, VMax), threshold);
Mat erodeElement = getStructuringElement(MORPH_RECT, cv::Size(3, 3));
Mat dilateElement = getStructuringElement(MORPH_RECT, cv::Size(8, 8));
erode(threshold, threshold, erodeElement);
dilate(threshold, threshold, dilateElement);
cv::resize(threshold, threshold, cv::Size(360, 286));
MyForm::setHSVWindow(threshold);
}
And the output looks as follows:
On the left is the input. On the right is supposed to be the same image, converted to HSV, filtered between the given thresholds to find the yellow ball, eroded and dilated to remove the smaller contours, and displayed in half the size of the original image. Instead, it takes the expected image and squashes 3 of them in the same space.
Any guesses as to why this would happen?
UPDATE 1:
OK, since it appears that running findContours on the image on the right-hand side still gives me the proper output, i.e. the contours from the distorted, three-times-copied right-side image can be pasted into the right position on the left-side input image, I've decided to just take the distorted image and crop it for display purposes. It will only ever be used to find the contours of a given HSV range in an image, and if it serves that purpose, I'm happy.
As @Nallath comments, this is apparently a channel issue. According to the documentation, the output of inRange() should be a 1-channel CV_8U image that is the logical AND of the per-channel range checks.
Your result means that somewhere along the way threshold is being treated like a 3-channel plane-order image.
What version of OpenCV are you using?
I suggest that you display threshold after every step to find the place where this conversion happens. This might be a bug that should be reported.
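For example, a quick (hypothetical) check you could drop in after each step:
// Sketch: inRange() should give channels() == 1 and type() == CV_8UC1.
std::cout << "channels: " << threshold.channels()
          << " type: " << threshold.type()
          << " size: " << threshold.cols << "x" << threshold.rows << std::endl;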
I am doing some ellipse recognition in an image and in order to do so I am opening a simple image:
img = imread("M:/Desktop/PsEyeRight.jpg", CV_LOAD_IMAGE_COLOR);
selecting a ROI (that's the only way I could see to set a ROI in OpenCV 2.4.6; the old C library had cvSetImageROI() and cvResetImageROI(), which I think were simpler):
Mat roi(img, Rect(Point(205, 72), Point(419,285)));
changing its color space with cvtColor:
cvtColor(roi, roi, CV_BGR2GRAY);
applying Threshold:
threshold(roi, roi, 150, 255, THRESH_BINARY);
Then I run findContours on a cloned image, since findContours modifies the image passed to it, and then I convert the ROI back to the BGR color space:
cvtColor(roi, roi, CV_GRAY2BGR);
And draw all the found ellipses in roi.
When I show roi I can see that everything worked 100%, but I was expecting that when I showed the original image, it would be the original image with the ROI thresholded and the drawings inside of it. Instead I just get the original image itself, as if nothing has changed. I believe this is happening because cvtColor is copying roi, so it doesn't "point" to img any more.
What's the best way (or recommended) to do this same processing and have the ROI inside the original image, showing the progress of the algorithm?
The main problem is that you can't have an image that is partly 3-channel/RGB and partly 1-channel/gray.
My solution would be to work on a copy of the ROI in the first place, and later convert it back to RGB and paste it into the original image.
img = imread("M:/Desktop/PsEyeRight.jpg", CV_LOAD_IMAGE_COLOR); // original
Mat roi(img, Rect(Point(205, 72), Point(419, 285)));
Mat work = roi.clone();
cvtColor(work, work, CV_BGR2GRAY);
threshold(work, work, 150, 255, THRESH_BINARY);
// findContours(work, ...);
cvtColor(work, roi, CV_GRAY2BGR); // here's the trick: roi still shares its pixels with img,
                                  // so writing into roi updates the original image