OpenCV copyTo() mask error - C++

I'm trying to paste a smaller image into a larger image, using masks in OpenCV 2.4 via C++.
Without a mask, I copy the small image to the larger image with the following code:
smallImage.copyTo(largeImage(cv::Rect(pt, smallImage.size())));
where pt is of type cv::Point2f. It works perfectly. However, if I apply a mask:
smallImage.copyTo(largeImage(cv::Rect(pt, smallImage.size())), mask);
I get an error from Mat::create (see documentation):
CV_Assert(!fixedType() || (CV_MAT_CN(type) == m.channels() && ((1 << CV_MAT_TYPE(flags)) & fixedDepthMask) != 0));
If I remove the cv::Rect from my code, simplifying it to:
smallImage.copyTo(largeImage, mask);
it works, although it doesn't copy to the correct location. How do I solve this?

The following code works without any error.
Mat large_img = imread("C:\\Koala.jpg");
Mat small_img;
resize(large_img, small_img, Size(100,100), 1);
small_img.copyTo(large_img(Rect(100,100,100,100)));
imshow("Result", large_img);
waitKey(0);
The small image is a resized version of the large image, and it is copied into the region between (100,100) and (200,200) of the large image. You can adapt these lines to your requirements.
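If you also need the mask, one pattern that is often suggested (just a sketch, not verified against OpenCV 2.4 here) is to take the ROI as a named Mat first and then pass the mask to copyTo, reusing the question's largeImage, smallImage, pt and mask:
cv::Mat roi = largeImage(cv::Rect(pt, smallImage.size()));
smallImage.copyTo(roi, mask); // mask: single-channel CV_8U, same size as smallImage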

To paste an image scaledImage into resultMat:
scaledImage.copyTo(resultMat);
But I don't think you can select an ROI in Java to copy into a particular region.

Related

OpenCV, get image information

I am playing around with an open-source OpenCV application. With the provided image sets it works great, but when I attempt to pass it a live camera stream, or even recorded frames from that camera stream, it crashes. I assume that this has to do with the cv::Mat type, differing image channels, or some conversion that I am not doing.
The provided dataset is grey-scale, 8 bit, and so are my images.
The application expects grayscale (CV_8U).
My question is:
Given one of the (working) provided images, and one of my recorded (not working) images, what is the best way to compare them using OpenCV, to find out what the difference might be that is causing my crashes?
Thank you.
I have tried:
Commenting out this code (which gave assertion errors):
if (mImGray.channels() == 3)
{
    cvtColor(mImGray, mImGray, CV_BGR2GRAY);
    cvtColor(imGrayRight, imGrayRight, CV_BGR2GRAY);
}
else if (mImGray.channels() == 4)
{
    cvtColor(mImGray, mImGray, CV_BGRA2GRAY);
    cvtColor(imGrayRight, imGrayRight, CV_BGRA2GRAY);
}
And replacing it with:
cv::Mat TempL;
mImGray.convertTo(TempL, CV_8U);
cvtColor(TempL, mImGray, CV_BayerGR2BGR);
cvtColor(mImGray, mImGray, CV_BGR2GRAY);
And the program crashes with no error...
You can try this code:
if (mImGray.depth() != CV_8U)
    mImGray.convertTo(mImGray, CV_8U);
if (mImGray.channels() == 3)
{
    cvtColor(mImGray, mImGray, COLOR_BGR2GRAY);
}
Or you can define a new Mat with the create() function and use that.
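To pin down what actually differs between a working dataset image and one of your recorded frames, a small diagnostic like the following may help (a sketch; the file names are placeholders):
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::Mat a = cv::imread("dataset_image.png", CV_LOAD_IMAGE_UNCHANGED); // known-good image
    cv::Mat b = cv::imread("my_frame.png", CV_LOAD_IMAGE_UNCHANGED);      // your recorded frame

    std::cout << "size:     " << a.cols << "x" << a.rows << " vs " << b.cols << "x" << b.rows << std::endl;
    std::cout << "channels: " << a.channels() << " vs " << b.channels() << std::endl;
    std::cout << "depth:    " << a.depth() << " vs " << b.depth() << std::endl; // CV_8U == 0
    std::cout << "type:     " << a.type() << " vs " << b.type() << std::endl;   // CV_8UC1 == 0
    return 0;
}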

Segmentation of the source image in OpenCV based on a Canny edge outline attained from processing the said source image

I have a source image. I need a particular portion to be segmented from it and saved as another image. I have the Canny outline of the portion I need segmented out, but how do I use it to cut that portion from the source image? I have attached both the source image and the Canny edge outline. Please help me and suggest a solution.
EDIT-1: Alexander Kondratskiy, is this what you meant by filling the boundary?
EDIT-2: According to Kannat, I have done this.
Now how do I separate the regions that are outside and inside of the contour into two separate images?
EDIT-3: I thought of AND-ing the mask and the contour-lined source image. Since I am using C, I am having a little difficulty.
This is the code I use to AND them:
hsv_gray = cvCreateImage( cvSize(seg->width, seg->height), IPL_DEPTH_8U, 1 );
cvCvtColor( seg, hsv_gray, CV_BGR2GRAY );
hsv_mask=cvCloneImage(hsv_gray);
IplImage* contourImg =cvCreateImage( cvSize(hsv_mask->width, hsv_mask->height), IPL_DEPTH_8U, 3 );
IplImage* newImg=cvCreateImage( cvSize(hsv_mask->width, hsv_mask->height), IPL_DEPTH_8U, 3 );
cvAnd(contourImg, hsv_mask,newImg,NULL);
I always get a size or type mismatch error. I adjusted the size, but I can't seem to adjust the type, since one (hsv_mask) is single-channel and the others are 3-channel.
#kanat - I also tried your boundingRect suggestion but could not seem to get it right in the C API.
Use cv::findContours on your second image to find the contour of the segment. Then use cv::boundingRect to find the bounding box for this segment. After that you can create a matrix and save into it the cropped bounding box from your second image (as I see, it is a binary image). To crop the needed region, use this:
cv::getRectSubPix(your_image, BB_size, cv::Point(BB.x + BB.width/2, BB.y + BB.height/2), new_image);
Then you can save new_image using cv::imwrite. That's it.
EDIT:
If you found only one contour then use this (otherwise, iterate through the elements of the found contours). The following code shows the steps, but sorry, I can't test it now:
std::vector<std::vector<cv::Point>> contours;
// cv::findContours(..., contours, ...);
cv::Rect BB = cv::boundingRect(cv::Mat(contours[0]));
cv::Mat new_image;
cv::getRectSubPix(your_image, BB.size(), cv::Point(BB.x + BB.width/2, BB.y + BB.height/2), new_image);
cv::imwrite("new_image_name.jpg", new_image);
You could fill the boundary created by the canny-edge detector, and use that as an alpha mask on the original image.
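A rough sketch of that idea in the C++ API (assuming edges holds the Canny outline and source holds the original image; this also splits the inside and outside regions asked about in EDIT-2):
std::vector<std::vector<cv::Point> > contours;
cv::Mat edgesCopy = edges.clone(); // findContours modifies its input
cv::findContours(edgesCopy, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

cv::Mat mask = cv::Mat::zeros(source.size(), CV_8UC1);
cv::drawContours(mask, contours, -1, cv::Scalar(255), CV_FILLED); // fill the outline

cv::Mat inside, outside;
source.copyTo(inside, mask);   // pixels inside the contour
source.copyTo(outside, ~mask); // pixels outside the contour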

OpenCV: findContours exception

My MATLAB code is:
h = fspecial('average', filterSize);
imageData = imfilter(imageData, h, 'replicate');
bwImg = im2bw(imageData, grayThresh);
cDist=regionprops(bwImg, 'Area');
cDist=[cDist.Area];
My OpenCV code is:
cv::blur(dst, dst,cv::Size(filterSize,filterSize));
dst = im2bw(dst, grayThresh);
cv::vector<cv::vector<cv::Point> > contours;
cv::vector<cv::Vec4i> hierarchy;
cv::findContours(dst,contours,hierarchy,CV_RETR_CCOMP, CV_CHAIN_APPROX_NONE);
Here is my image-to-black-and-white (im2bw) function:
cv::Mat AutomaticMacbethDetection::im2bw(cv::Mat src, double grayThresh)
{
    cv::Mat dst;
    cv::threshold(src, dst, grayThresh, 1, CV_THRESH_BINARY);
    return dst;
}
I'm getting an exception in findContours(): C++ exception: cv::Exception at memory location 0x0000003F6E09E0A0.
Can you please explain what I am doing wrong?
dst is a cv::Mat that I have used all along; it holds my original values.
UPDATE: Here is my matrix written into a *.txt file:
http://www.filedropper.com/gili
UPDATE 2:
I have added dst.convertTo(dst, CV_8U); as Micka suggested, and I no longer get an exception. However, the values are nothing like expected.
Take a look at this question which has a similar problem to what you're encountering: Matlab and OpenCV calculate different image moment m00 for the same image.
Basically, the OP in the linked post is trying to find the zeroth image moment of all closed contours - which is actually just the area - using findContours in OpenCV and regionprops in MATLAB. In MATLAB, that quantity can be accessed through the Area property from regionprops, and judging from your MATLAB code, you wish to find the same quantity.
From the post, there is most certainly a difference between how OpenCV and MATLAB find contours in an image. This boils down to the way the two platforms decide what a "connected pixel" is: OpenCV only uses a four-pixel neighbourhood while MATLAB uses an eight-pixel neighbourhood.
As such, there is nothing wrong with your implementation, and converting to 8UC1 is good. However, the areas (and ultimately the total number of connected components and the contours themselves) found with MATLAB and OpenCV are not the same. The only way for you to get exactly the same result is to manually draw the contours found by findContours on a black image and use the cv::moments function directly on this image.
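For reference, a rough sketch of that draw-and-measure idea, reusing dst and contours from your OpenCV code (untested):
cv::Mat filled = cv::Mat::zeros(dst.size(), CV_8UC1);
for (size_t i = 0; i < contours.size(); i++)
{
    filled.setTo(0);                                                   // clear between contours
    cv::drawContours(filled, contours, (int)i, cv::Scalar(255), CV_FILLED);
    cv::Moments m = cv::moments(filled, true);                         // binaryImage = true
    double area = m.m00;                                               // m00 = pixel count = area
    std::cout << "contour " << i << " area: " << area << std::endl;
}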
However, because of the differing implementations of cv::blur() in comparison to fspecial with an averaging mask that is even, you still may not be able to get the same results along the borders of the image. If there are no important contours around the borders of your image, then hopefully this will give you the right result.
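If the mismatch is only along the image border, one tweak worth trying (a sketch; it addresses the padding only, not the even-kernel anchoring) is to match imfilter's 'replicate' padding, since cv::blur defaults to reflected borders:
cv::blur(dst, dst, cv::Size(filterSize, filterSize), cv::Point(-1, -1), cv::BORDER_REPLICATE);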
Good luck!

How to thin an image borders with specific pixel size? OpenCV

I'm trying to thin an image by making the 16x24-pixel blocks along the border become 0. I'm not trying to get the skeletal image; I'm just trying to reduce the size of the white area. Are there any methods that I could use? Enlighten me, please.
This is the sample image that I'm trying to thin. It is made of 16x24 white blocks.
EDIT
I tried to use this
cv::Mat img=cv::imread("image.bmp", CV_LOAD_IMAGE_GRAYSCALE);//image is in binary
cv::Mat mask = img > 0;
Mat kernel = Mat::ones( 16, 24, CV_8U );
erode(mask,mask,kernel);
But the result i got was this
which is not exactly what I wanted. I want to maintain the exact same shape with just 16x24 pixels of white shaved off the border. Any idea what went wrong?
You want to Erode your image.
Another Description
Late answer, but you should erode your image using a kernel which is twice the size you want to get rid of plus one, like:
Mat kernel = Mat::ones( 24*2+1, 16*2+1, CV_8U );
Notice I swapped the height and width of the block: Mat::ones takes rows (height) first, then columns (width). I only know OpenCV from Python, but I am pretty sure the argument order is the same as in Python.
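Plugged back into the snippet from the question, that would look roughly like this (a sketch; it assumes the white blocks are 16 pixels wide and 24 pixels tall):
cv::Mat img  = cv::imread("image.bmp", CV_LOAD_IMAGE_GRAYSCALE);
cv::Mat mask = img > 0;
// rows = 24*2+1, cols = 16*2+1: erosion removes (kernel-1)/2 pixels per side,
// i.e. 24 pixels top/bottom and 16 pixels left/right of each white region
cv::Mat kernel = cv::Mat::ones(24 * 2 + 1, 16 * 2 + 1, CV_8U);
cv::erode(mask, mask, kernel);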

cv::findContours is modifying source image OpenCV 2.3

From the OpenCV documentation, the source image in cv::findContours is acquired as const, but something strange is going on in my application. I'm using the cv::inRange function to get a thresholded image over a specific color; after that, using cv::moments, I can get the center of the white pixels in the thresholded image, and this works fine.
In addition, I would like to implement code for finding the biggest contour and locating the central moment of that contour. After adding just cv::findContours to the code, I spotted strange behavior in the output, so I wanted to check what is going on with the source image using this code:
cv::Mat contourImage;
threshedImage.copyTo(contourImage); // threshedImage is the output from inRange
cv::findContours(threshedImage, contours, CV_RETR_LIST, CV_CHAIN_APPROX_NONE, cv::Point(0,0));
cv::Mat temp;
cv::absdiff(threshedImage, contourImage, temp);
cv::namedWindow("absdiff");
cv::imshow("absdiff",temp);
After this, the output shows that there is a difference between threshedImage and contourImage. How is this possible? Does anyone get similar results with cv::findContours?
Wrong! The docs clearly state that:
Source image is modified by this function.
So if you need the original image intact, make a copy of this image and pass the copy to cv::findContours().
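A minimal sketch of that workaround, using the variable names from the question:
std::vector<std::vector<cv::Point> > contours;
cv::Mat scratch = threshedImage.clone(); // throwaway copy; findContours will modify it
cv::findContours(scratch, contours, CV_RETR_LIST, CV_CHAIN_APPROX_NONE, cv::Point(0, 0));
// threshedImage is still the unmodified output of cv::inRange here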