I have four corners extracted from an image:
std::vector<cv::Point2f> ecken;
// the four extreme corners of the detected grid (corners and size come from earlier detection code)
ecken.push_back(corners[0]);
ecken.push_back(corners[size.height-1]);
ecken.push_back(corners[size.area()-size.height-1]);
ecken.push_back(corners[size.area()-1]);
They are warped into a second image:
std::vector<cv::Point2f> quad_pts;
quad_pts.push_back(cv::Point2f(0, 0));
quad_pts.push_back(cv::Point2f(quad.cols, 0));
quad_pts.push_back(cv::Point2f(quad.cols, quad.rows));
quad_pts.push_back(cv::Point2f(0, quad.rows));
// Get transformation matrix
cv::Mat transmtx = cv::getPerspectiveTransform(ecken, quad_pts);
cv::warpPerspective(src, quad, transmtx, quad.size(), cv::INTER_LINEAR); // the literal 1 is cv::INTER_LINEAR
I want to go back to the original image from the result I get in quad. This is what I've tried:
cv::Mat trans = cv::getPerspectiveTransform(quad_pts, ecken);
cv::perspectiveTransform(quad, test, trans); // I'm not sure this is correct, and the program crashes here
and here is the error message from the console:
OpenCV Error: Assertion failed (scn + 1 == m.cols && (depth == CV_32F || depth == CV_64F))
in unknown function, file ..\..\..\opencv\modules\core\src\matmul.cpp, line 1926
It didn't work. Any ideas?
Your two Mat objects have different depths. Check by printing Mat::depth() that both have the same image type; it could be that one of them is grayscale and the other colour. Note also that this assertion comes from cv::perspectiveTransform, which transforms arrays of points (2- or 3-channel CV_32F/CV_64F matrices), not images, so passing the image quad to it will fail regardless of depth.
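To actually map quad back onto the original frame, a minimal sketch (reusing transmtx and src from the snippets above) is to let warpPerspective apply the inverse transform:

cv::Mat test;
// WARP_INVERSE_MAP makes warpPerspective apply the inverse of transmtx,
// so this projects quad back into the original image plane.
cv::warpPerspective(quad, test, transmtx, src.size(),
                    cv::INTER_LINEAR | cv::WARP_INVERSE_MAP);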
Related
Problem: Watershed algorithm
I started an app project for image processing, using OpenCV 4.5.3 and Swift (with C++). I've been fighting with the watershed algorithm for a really long time, and I have no clue what I did wrong.
Error:
libc++abi.dylib: terminating with uncaught exception of type cv::Exception: OpenCV(4.5.3)
/Volumes/build-storage/build/master_iOS-mac/opencv/modules/imgproc/src/segmentation.cpp:161:
error: (-215:Assertion failed) src.type() == CV_8UC3 && dst.type() == CV_32SC1 in function 'watershed'
In the definition of OpenCV's watershed we can find:
@param image Input 8-bit 3-channel image.
@param markers Input/output 32-bit single-channel image (map) of markers. It should have the same size as image.
Code
+(UIImage *) watershed:(UIImage *)src{
    cv::Mat img, mask;
    UIImageToMat(src, img);
    // Change the background from white to black, since that will help later
    // to extract better results during the use of Distance Transform
    cv::inRange(img, cv::Scalar(255,255,255), cv::Scalar(255,255,255), mask);
    img.setTo(cv::Scalar(0,0,0), mask);
    // Create a kernel that we will use to sharpen our image:
    // an approximation of the second derivative, a quite strong kernel
    cv::Mat kernel = (cv::Mat_<float>(3,3) <<
                      1,  1, 1,
                      1, -8, 1,
                      1,  1, 1);
    // Do the Laplacian filtering as it is.
    // We need to convert everything to something deeper than CV_8U,
    // because the kernel has some negative values, and we can expect
    // in general to have a Laplacian image with negative values,
    // BUT an 8-bit unsigned int (the one we are working with) can only
    // hold values from 0 to 255, so any negative number would be truncated.
    cv::Mat lapl;
    cv::filter2D(img, lapl, CV_32F, kernel);
    cv::Mat sharp;
    img.convertTo(sharp, CV_32F);
    cv::Mat result = sharp - lapl;
    // Convert back to 8 bits
    result.convertTo(result, CV_8UC3);
    lapl.convertTo(lapl, CV_8UC3);
    cv::Mat bw;
    cv::cvtColor(result, bw, cv::COLOR_BGR2GRAY);
    cv::threshold(bw, bw, 40, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);
    // Perform the distance transform algorithm
    cv::Mat dist;
    cv::distanceTransform(bw, dist, cv::DIST_L2, cv::DIST_MASK_3);
    // Normalize the distance image to the range {0.0, 1.0}
    // so we can visualize and threshold it
    cv::normalize(dist, dist, 0, 1.0, cv::NORM_MINMAX);
    // Threshold to obtain the peaks;
    // these will be the markers for the foreground objects
    cv::threshold(dist, dist, 0.4, 1.0, cv::THRESH_BINARY);
    // Dilate the dist image a bit
    cv::Mat kernel1 = cv::Mat::ones(3, 3, CV_8U);
    dilate(dist, dist, kernel1);
    // Create the CV_8U version of the distance image;
    // it is needed for findContours()
    cv::Mat dist_8u;
    dist.convertTo(dist_8u, CV_8U);
    // Find total markers
    std::vector<std::vector<cv::Point> > contours;
    findContours(dist_8u, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    // Create the marker image for the watershed algorithm
    cv::Mat markers = cv::Mat::zeros(dist.size(), CV_32S);
    // Draw the foreground markers
    for (size_t i = 0; i < contours.size(); i++)
    {
        drawContours(markers, contours, static_cast<int>(i), cv::Scalar(static_cast<int>(i)+1), -1);
    }
    // Draw the background marker
    circle(markers, cv::Point(5,5), 3, cv::Scalar(255), -1);
    cv::Mat markers8u;
    markers.convertTo(markers8u, CV_8U, 10); // (markers8u is for visualization; it is not used below)
    // Perform the watershed algorithm
    watershed(result, markers);
    return MatToUIImage(result);
}
You can clearly see that the variables have the proper types, as in the description of the function:
result.convertTo(result, CV_8UC3);
cv::Mat markers = cv::Mat::zeros(dist.size(), CV_32S);
convertTo cannot add channels, nor can it reduce/convert an image to one with fewer channels.
The key in this case is to use:
cvtColor(src, src, COLOR_BGRA2BGR); // change 4 channels to 3
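In context, a minimal sketch of the fix (assuming, as is typical on iOS, that UIImageToMat produces a 4-channel BGRA Mat):

UIImageToMat(src, img);
cv::cvtColor(img, img, cv::COLOR_BGRA2BGR); // drop the alpha channel: 4 -> 3 channels
// ... the rest of the pipeline stays unchanged; result then remains CV_8UC3,
// so watershed(result, markers) passes the type assertion.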
I want to superimpose an image onto an object previously detected with Haar cascade classifiers, using warpPerspective. The image seems properly scaled, but the rest of the destination image becomes black.
I have tried modifying the transparency of the source and destination images, but I don't know whether that was successful.
image without warpPerspective
image with warpPerspective
Rect r = boundingRects.carRects.at(i).rect; // bounding box of the detected object in the image
vector<Point2d> imagePoints;
imagePoints.push_back(Point2d(r.x, r.y));
imagePoints.push_back(Point2d(r.x + r.width, r.y));
imagePoints.push_back(Point2d(r.x + r.width, r.y + r.height));
imagePoints.push_back(Point2d(r.x, r.y + r.height));
dimensions.push_back(Point2d(0, 0)); // corners of the source image
dimensions.push_back(Point2d(899, 0));
dimensions.push_back(Point2d(899, 539));
dimensions.push_back(Point2d(0, 539));
H = findHomography(carAlertObject.dimensions, imagePoints); // get the homography matrix
warpPerspective(carAlertObject.img, img, H, img.size()); // carAlertObject holds the source image and its dimensions; img is the destination image
The problem was that I had to use a mask image when applying warpPerspective, and then combine the warped image with the destination image using that mask to get a correct result.
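A minimal sketch of that masked overlay, reusing the names from the snippet above (warped, srcMask and overlayMask are hypothetical temporaries):

cv::Mat warped, overlayMask;
cv::warpPerspective(carAlertObject.img, warped, H, img.size());
// Warp a solid-white mask of the source's size with the same homography,
// so it covers exactly the projected quadrilateral.
cv::Mat srcMask(carAlertObject.img.size(), CV_8U, cv::Scalar(255));
cv::warpPerspective(srcMask, overlayMask, H, img.size());
// Copy the warped pixels only where the mask is set; the rest of img is kept.
warped.copyTo(img, overlayMask);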
I am attempting to scale a grayscale image of type CV_8UC1 by 1.0f/255 with the following operation:
image.convertTo(image,CV_32F,1.0f/255,0); //convert and scale
After inspecting the output of the above, I find that all values are too close to zero. For instance, at a point where the value should be 0.2784, I'm getting 1.23417e-06.
So I tried to see whether I could undo the scaling and get the input back, i.e. multiply the result from above by 255, using
cv::imwrite("undo_scaling.jpg",image*255); //rescale and write to disk
Strangely, the input image can be reconstructed.
Where am I going wrong with the scaling operation?
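For reference, convertTo with scale 1.0f/255 should map an 8-bit value v to v/255.0f; a minimal standalone check of just that step (the file name is hypothetical):

#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    cv::Mat image = cv::imread("input.jpg", cv::IMREAD_GRAYSCALE); // hypothetical input
    std::cout << "before: " << (int)image.at<uchar>(100, 100) << "\n";
    image.convertTo(image, CV_32F, 1.0f / 255, 0);
    // Expected: exactly the value above divided by 255, e.g. 71 -> 0.278431
    std::cout << "after:  " << image.at<float>(100, 100) << "\n";
    return 0;
}

If this check holds in isolation, the unexpected values are being introduced somewhere outside convertTo itself.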
EDIT
The following is my attempt to preprocess an image. It involves the following steps:
1. Apply a mask
2. Crop the result
3. Scale the pixel values by 1/255
I use the following code:
cv::Mat maskCrop(std::string imageName, std::string maskName)
{
    cv::Mat image, mask, final_image;
    image = cv::imread(imageName, CV_LOAD_IMAGE_GRAYSCALE);
    mask = cv::imread(maskName, CV_LOAD_IMAGE_GRAYSCALE);
    cv::resize(image, image, mask.size()); // make the sizes of mask and image the same
    cv::bitwise_and(image, mask, final_image); // apply the mask
    // Define a rectangular window for cropping
    int offset_x = 1250; // top-left corner of the crop
    int offset_y = 1550;
    cv::Rect roi;
    roi.x = offset_x;
    roi.y = offset_y;
    roi.width = 550;
    roi.height = 650;
    // Crop the masked image to the defined ROI
    cv::Mat crop = final_image(roi); // note: was dstImage(roi), which is undefined here
    crop.convertTo(crop, CV_32F, 1.0f/255, 0);
    return crop;
}
Below is the input image:
The following is the mask to be applied on it:
I've just started to learn OpenCV. I have defined a vector like:
vector<Point2f> cornersB;
and after that I have done some calculations (goodFeaturesToTrack, cornerSubPix and calcOpticalFlowPyrLK) using cornersB.
Now I want to display cornersB to see the points that have been found. My code is:
pointmat = Mat(cornersB);
imshow("Window", pointmat);
But I got an error saying bad number of channels (Source image must have 1, 3 or 4 channels) in cvConvertImage.
Can anyone show me how to display the points of cornersB in an image?
I just want to see the points (points in white and the background in black).
The simplest way is to use cv::drawKeypoints:
drawKeypoints(InputArray image, const std::vector<KeyPoint>& keypoints, InputOutputArray outImage,
              const Scalar& color = Scalar::all(-1), int flags = DrawMatchesFlags::DEFAULT);
In your case, let's define a black image as image:
cv::Mat image(512, 512, CV_8U);
image.setTo(0);
Then convert cornersB to a std::vector<cv::KeyPoint> kp_cornerB and define the colour as white with CV_RGB(255, 255, 255):
std::vector<cv::KeyPoint> kp_cornerB ;
// TODO convert cornersB to kp_cornerB
cv::Mat pointmat;
cv::drawKeypoints(image, kp_cornerB, pointmat, CV_RGB(255, 255, 255));
imshow("Window", pointmat);
The conversion can be done with a for loop over the vector:
for (vector<Point2f>::const_iterator it = cornersB.begin(); it != cornersB.end(); ++it) {
    cv::KeyPoint kp(*it, 8);
    kp_cornerB.push_back(kp);
}
Here, the value '8' is the 'size' of the keypoint.
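Alternatively (a sketch, not using drawKeypoints), you can draw the points directly with cv::circle and skip the KeyPoint conversion entirely:

cv::Mat pointmat = cv::Mat::zeros(512, 512, CV_8UC3); // black canvas; size assumed
for (size_t i = 0; i < cornersB.size(); i++)
    cv::circle(pointmat, cornersB[i], 2, CV_RGB(255, 255, 255), -1); // filled white dot
imshow("Window", pointmat);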
Sorry if the title gave you the wrong idea; I tried to make it as brief as possible. In short, what I am trying to do is detect a face with the Viola-Jones algorithm (already implemented), save it in a separate image, convert that image to grayscale, then put the grayscale image back in its original position, resulting in a webcam display with all faces (and any false positives, I suppose) colored gray and surrounded by a green rectangle. However, I get the following error message:
Unhandled exception at 0x771115de in proba.exe: Microsoft C++ exception: cv::Exception at memory location 0x003ef2c8.
This is my code (the relevant part); any suggestions or advice would be appreciated:
face_cascade.detectMultiScale(frame_gray, faces, 1.1, 2, 0 | CV_HAAR_SCALE_IMAGE, Size(30, 30));
for (int i = 0; i < faces.size(); i++)
{
    Point pt1(faces[i].x + faces[i].width, faces[i].y + faces[i].height); // bottom-right corner
    Point pt2(faces[i].x, faces[i].y);                                   // top-left corner
    Rect myROI(pt1, pt2);
    Mat croppedImage;
    Mat(frame, myROI).copyTo(croppedImage);
    cvtColor(croppedImage, croppedImage, CV_BGR2GRAY); // the last four lines process the image
    croppedImage.copyTo(frame(Rect(pt1, croppedImage.size()))); // this should copy the image back into its original location
    rectangle(frame, pt1, pt2, cvScalar(0, 255, 0, 0), 1, 8, 0);
}
//-- Show what you got
imshow( window_name, frame );
And sorry if I'm missing the obvious answer.
Your cropped grayscale image croppedImage is a 1-channel image, but you are trying to overlay it onto the 3-channel BGR image frame. In other words, the function copyTo in
croppedImage.copyTo(frame(Rect(pt1, croppedImage.size())));
expects croppedImage to have the same number of channels as frame. This is why you are getting the error.
EDIT: To solve your issue you may try converting your grayscale cropped image back to a 3-channel BGR format (it will still look like the grayscale image). Note also that the copy-back rectangle should be anchored at pt2, the top-left corner; anchoring it at pt1, the bottom-right corner, puts the ROI outside frame and also throws. Something like
cvtColor(croppedImage, croppedImage, CV_BGR2GRAY); // to grayscale
cvtColor(croppedImage, croppedImage, CV_GRAY2BGR); // back to 3 channels (still looks gray)
croppedImage.copyTo(frame(Rect(pt2, croppedImage.size()))); // anchor at the top-left corner
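An equivalent alternative (a sketch, not part of the original answer) builds the 3-channel image explicitly by merging the single gray plane three times:

std::vector<cv::Mat> planes(3, croppedImage); // croppedImage is CV_8UC1 at this point
cv::Mat gray3;
cv::merge(planes, gray3); // CV_8UC3, visually identical to the gray image
gray3.copyTo(frame(Rect(pt2, gray3.size())));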