C++ and OpenCV 4.5.3 - (-215: Assertion failed)

Problem: Watershed algorithm
I started an app project for image processing, using OpenCV 4.5.3 and Swift (with C++). I've been fighting with the watershed algorithm for a really long time, and I have no clue what I did wrong.
Error:
libc++abi.dylib: terminating with uncaught exception of type cv::Exception: OpenCV(4.5.3) /Volumes/build-storage/build/master_iOS-mac/opencv/modules/imgproc/src/segmentation.cpp:161: error: (-215:Assertion failed) src.type() == CV_8UC3 && dst.type() == CV_32SC1 in function 'watershed'
In the documentation of OpenCV's watershed we can find:
@param image Input 8-bit 3-channel image.
@param markers Input/output 32-bit single-channel image (map) of markers. It should have the same size as image.
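A quick way to verify those preconditions at runtime is to assert them just before the call (a hedged sketch; image and markers stand for whatever Mats you pass to watershed):
// Mirror the assertion inside watershed() to see which operand is wrong:
CV_Assert(image.type() == CV_8UC3);    // 8-bit, 3 channels
CV_Assert(markers.type() == CV_32SC1); // 32-bit signed, 1 channel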
Code
+ (UIImage *)watershed:(UIImage *)src {
    cv::Mat img, mask;
    UIImageToMat(src, img);
    // Change the background from white to black, since that will help later
    // to extract better results when using the distance transform
    cv::inRange(img, cv::Scalar(255,255,255), cv::Scalar(255,255,255), mask);
    img.setTo(cv::Scalar(0,0,0), mask);
    // Create a kernel that we will use to sharpen our image:
    // an approximation of the second derivative, a quite strong kernel
    cv::Mat kernel = (cv::Mat_<float>(3,3) <<
                      1,  1, 1,
                      1, -8, 1,
                      1,  1, 1);
    // Do the Laplacian filtering as is.
    // We need to convert everything to something deeper than CV_8U,
    // because the kernel has some negative values and we can expect
    // in general to have a Laplacian image with negative values,
    // BUT an 8-bit unsigned int (the one we are working with) can only
    // hold values from 0 to 255, so negative numbers would be truncated.
    cv::Mat lapl;
    cv::filter2D(img, lapl, CV_32F, kernel);
    cv::Mat sharp;
    img.convertTo(sharp, CV_32F);
    cv::Mat result = sharp - lapl;
    // convert back to 8-bit grayscale
    result.convertTo(result, CV_8UC3);
    lapl.convertTo(lapl, CV_8UC3);
    cv::Mat bw;
    cv::cvtColor(result, bw, cv::COLOR_BGR2GRAY);
    cv::threshold(bw, bw, 40, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);
    // Perform the distance transform algorithm
    cv::Mat dist;
    cv::distanceTransform(bw, dist, cv::DIST_L2, cv::DIST_MASK_3);
    // Normalize the distance image to the range {0.0, 1.0}
    // so we can visualize and threshold it
    cv::normalize(dist, dist, 0, 1.0, cv::NORM_MINMAX);
    // Threshold to obtain the peaks;
    // these will be the markers for the foreground objects
    cv::threshold(dist, dist, 0.4, 1.0, cv::THRESH_BINARY);
    // Dilate the dist image a bit
    cv::Mat kernel1 = cv::Mat::ones(3, 3, CV_8U);
    dilate(dist, dist, kernel1);
    // Create the CV_8U version of the distance image;
    // it is needed for findContours()
    cv::Mat dist_8u;
    dist.convertTo(dist_8u, CV_8U);
    // Find total markers
    std::vector<std::vector<cv::Point> > contours;
    findContours(dist_8u, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    // Create the marker image for the watershed algorithm
    cv::Mat markers = cv::Mat::zeros(dist.size(), CV_32S);
    // Draw the foreground markers
    for (size_t i = 0; i < contours.size(); i++)
    {
        drawContours(markers, contours, static_cast<int>(i), cv::Scalar(static_cast<int>(i)+1), -1);
    }
    // Draw the background marker
    circle(markers, cv::Point(5,5), 3, cv::Scalar(255), -1);
    cv::Mat markers8u;
    markers.convertTo(markers8u, CV_8U, 10);
    // Perform the watershed algorithm
    watershed(result, markers);
    return MatToUIImage(result);
}
You can clearly see that the variables have the proper types, as in the description of the function:
result.convertTo(result, CV_8UC3);
cv::Mat markers = cv::Mat::zeros(dist.size(), CV_32S);

convertTo cannot add channels, nor can it reduce an image to one with fewer channels; it only changes the element depth. Here the Mat coming from UIImageToMat is 4-channel, so the key in this case is to use:
cvtColor(src, src, COLOR_BGRA2BGR); // change 4 to 3 channels
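Applied to the posted function, the conversion goes right after UIImageToMat (a minimal sketch, assuming the UIImage decodes to a 4-channel Mat as described above):
cv::Mat img, mask;
UIImageToMat(src, img);                     // img is CV_8UC4 at this point
cv::cvtColor(img, img, cv::COLOR_BGRA2BGR); // now CV_8UC3, as watershed expects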

Related

Using cv::rgbd::Odometry::compute

I am using C++ and OpenCV in combination with ROS. I use live images from my camera (an Intel RealSense R200), from which I get depth and RGB images. In my C++ code I want to use these images to get odometry data and build a trajectory out of it.
I am trying to use the cv::rgbd::Odometry::compute function for the odometry, but I always get false as the return value (the "isSuccess" value in the code is always 0), and I don't know which part I am doing wrong.
I read my images from the camera using ROS, and then in the callback function I first convert all images to grayscale and then use a SURF function to detect the features. Then I want to use compute to get the transformation between the current and previous frame.
As far as I understood, "Rt" and "initRt" are outputs of the function, so it should be enough to construct them with the correct size.
Can anyone see the problem? Am I missing anything?
boost::shared_ptr<rgbd::Odometry> odom;
Mat Rt = Mat(4, 4, CV_64FC1);
Mat initRt = Mat(4, 4, CV_64FC1);
Mat prevFtrM; // mask matrix of previous image
Mat currFtrM; // mask matrix of current image
Mat tempFtrM;
Mat imgprev;  // previous depth image
Mat imgcurr;  // current depth image
Mat imgprevC; // previous color image
Mat imgcurrC; // current color image

void Surf(Mat img) // detect features of img and fill currFtrM
{
    int minHessian = 400;
    Ptr<SURF> detector = SURF::create( minHessian );
    vector<KeyPoint> keypoints_1;
    currFtrM = Mat::zeros(img.size(), CV_8U); // type of mask is CV_8U
    Mat roi(currFtrM, cv::Rect(0, 0, img.size().width, img.size().height));
    roi = Scalar(255, 255, 255);
    detector->detect( img, keypoints_1, currFtrM );
    Mat img_keypoints_1;
    drawKeypoints( img, keypoints_1, img_keypoints_1, Scalar::all(-1), DrawMatchesFlags::DEFAULT );
    //-- Show detected (drawn) keypoints
    imshow("Keypoints 1", img_keypoints_1 );
}

void Callback(const sensor_msgs::ImageConstPtr& clr, const sensor_msgs::ImageConstPtr& dpt)
{
    if(!imgcurr.data || !imgcurrC.data) // first frame
    {
        // depth image
        imgcurr = cv_bridge::toCvShare(dpt, sensor_msgs::image_encodings::TYPE_32FC1)->image;
        // color image
        imgcurrC = cv_bridge::toCvShare(clr, "bgr8")->image;
        cvtColor(imgcurrC, imgcurrC, COLOR_BGR2GRAY);
        // find features in the image
        Surf(imgcurrC);
        prevFtrM = currFtrM;
        // scale color image to size of depth image
        resize(imgcurrC, imgcurrC, imgcurr.size());
        return;
    }
    odom = boost::make_shared<rgbd::RgbdOdometry>(imgcurrC, Odometry::DEFAULT_MIN_DEPTH(), Odometry::DEFAULT_MAX_DEPTH(), Odometry::DEFAULT_MAX_DEPTH_DIFF(), std::vector<int>(), std::vector<float>(), Odometry::DEFAULT_MAX_POINTS_PART(), Odometry::RIGID_BODY_MOTION);
    // depth image
    imgprev = imgcurr;
    imgcurr = cv_bridge::toCvShare(dpt, sensor_msgs::image_encodings::TYPE_32FC1)->image;
    // color image
    imgprevC = imgcurrC;
    imgcurrC = cv_bridge::toCvShare(clr, "bgr8")->image;
    cvtColor(imgcurrC, imgcurrC, COLOR_BGR2GRAY);
    // scale color image to size of depth image
    resize(imgcurrC, imgcurrC, imgcurr.size());
    cv::imshow("Color resized", imgcurrC);
    tempFtrM = currFtrM;
    // detect new features in imgcurrC and save them in a vector<Point2f>
    Surf(imgcurrC);
    prevFtrM = tempFtrM;
    // set camera matrix to the calibrated intrinsics
    float vals[] = {619.137635, 0., 304.793791, 0., 625.407449, 223.984030, 0., 0., 1.};
    const Mat cameraMatrix = Mat(3, 3, CV_32FC1, vals);
    odom->setCameraMatrix(cameraMatrix);
    bool isSuccess = odom->compute( imgprevC, imgprev, prevFtrM, imgcurrC, imgcurr, currFtrM, Rt, initRt );
    if(isSuccess)
        cout << "isSuccess " << isSuccess << endl;
}
Update: I calibrated my camera and replaced the camera matrix with real values.
A bit late, but it could still be useful for someone.
It seems to me that you are missing the extrinsic calibration in the calculation: in my experiments, the R200 has a translation component between the RGB and depth cameras that you are not taking into account.
Furthermore, looking at the camera parameters, depth and RGB have different intrinsics, and the color frame has a MODIFIED_BROWN_CONRADY lens distortion (though this is minimal); are you undistorting for that?
Obviously, I could be wrong if you already do all those steps and save the registered RGB and depth to files.
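For the distortion point, a minimal sketch of undistorting the color frame before using it might look like this (the coefficient values below are hypothetical placeholders, not from the original post; the real ones come from your calibration):
// Placeholder distortion coefficients (k1, k2, p1, p2, k3) - use your calibration output
cv::Mat distCoeffs = (cv::Mat_<float>(1, 5) << -0.05f, 0.01f, 0.0f, 0.0f, 0.0f);
cv::Mat undistorted;
cv::undistort(imgcurrC, undistorted, cameraMatrix, distCoeffs);
imgcurrC = undistorted;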

OpenCV error: assertion failed in warpPerspective

I'm trying to make an AR app using ArUco and OpenCV (I'm a newbie). It detects an ArUco marker and puts an image on it. I have tried to use the warpPerspective() function; however, something is wrong and it returns OpenCV error: Assertion failed ((M0.type() == CV_32F || M0.type() == CV_64F)) in warpPerspective. Please give me a way to solve it.
int main() {
    cv::VideoCapture inputVideo;
    inputVideo.open("gal.mp4");
    cv::Ptr<cv::aruco::Dictionary> dictionary = cv::aruco::getPredefinedDictionary(cv::aruco::DICT_4X4_50);
    cv::Mat sq = imread("zhuz.jpg", cv::IMREAD_UNCHANGED);
    while (inputVideo.grab()) {
        vector<Point2f> sqPoints;
        vector<Point2f> p;
        sqPoints.push_back(Point2f(0, 0));
        sqPoints.push_back(Point2f(sq.cols, 0));
        sqPoints.push_back(Point2f(sq.cols, sq.rows));
        sqPoints.push_back(Point2f(0, sq.rows));
        cv::Mat image, warp_matrix;
        inputVideo.retrieve(image);
        Mat cpy_img(image.rows, image.cols, image.type());
        Mat neg_img(image.rows, image.cols, image.type());
        Mat gray;
        Mat blank(sq.rows, sq.cols, sq.type());
        std::vector<int> ids;
        std::vector<std::vector<cv::Point2f>> corners;
        cv::aruco::detectMarkers(image, dictionary, corners, ids);
        if (ids.size() > 0) {
            p.push_back(corners[0][0]);
            p.push_back(corners[0][1]);
            p.push_back(corners[0][2]);
            p.push_back(corners[0][3]);
            Mat wrap_matrix = getPerspectiveTransform(sqPoints, p);
            blank = Scalar(0);
            neg_img = Scalar(0); // Image is black when pixel values are zero
            cpy_img = Scalar(0); // Image is black when pixel values are zero
            bitwise_not(blank, blank);
            warpPerspective(sq, neg_img, warp_matrix, Size(neg_img.cols, neg_img.rows)); // Transform overlay image to the marker position - [ITEM1]
            warpPerspective(blank, cpy_img, warp_matrix, Size(cpy_img.cols, neg_img.rows)); // Transform a blank overlay image to the same position
            bitwise_not(cpy_img, cpy_img); // Invert the blank image from white to black
            bitwise_and(cpy_img, image, cpy_img); // Create a "hole" in the image to act as a "clipping" mask - [ITEM2]
            bitwise_or(cpy_img, neg_img, image); // Finally merge both items [ITEM1 & ITEM2]
        }
        cv::imshow("out", image);
    }
}
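A likely cause stands out in the posted code (treat this as a hypothesis rather than a confirmed fix): the result of getPerspectiveTransform is stored in wrap_matrix, while warpPerspective is called with warp_matrix, which was declared at the top of the loop and never assigned. An empty Mat fails the (M0.type() == CV_32F || M0.type() == CV_64F) assertion. A minimal fix sketch is to use one consistent name:
Mat warp_matrix = getPerspectiveTransform(sqPoints, p); // was: wrap_matrix
warpPerspective(sq, neg_img, warp_matrix, Size(neg_img.cols, neg_img.rows));
warpPerspective(blank, cpy_img, warp_matrix, Size(cpy_img.cols, cpy_img.rows));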

Extract the image pixels inside a triangle: error

I'm new to image processing and development. I need to extract the pixels inside a triangle in the image. To do that I used the following code. Unfortunately I obtain unwanted black pixels. To get rid of that problem I tried to remove the background (0-valued) pixels by adding an alpha value (transparent background), but it gives the following error. Any help is appreciated.
My code:
Mat img = cv::imread("/home/fabio/code/lena.jpg", cv::IMREAD_GRAYSCALE);
Mat alpha(img.size(), CV_8UC1, Scalar(0));
//triangle definition (example points)
vector<Point> points;
points.push_back(Point(200, 70));
points.push_back(Point(60, 150));
points.push_back(Point(500, 500));
//apply triangle to mask
fillConvexPoly(alpha, points, Scalar(255));
cv::Mat finalImage = cv::Mat::zeros(img.size(), img.type());
img.copyTo(finalImage, alpha);
imshow("image", finalImage);
Mat dst;
Mat rgb[1];
split(finalImage, rgb);
Mat rgba[2] = { finalImage, alpha };
merge(rgba, 2, dst);
imshow("dst", dst);
Error: OpenCV Error: Bad number of channels (Source image must have 1, 3 or 4 channels) in cvConvertImage, file C:\builds\2_4_PackSlave-win64-vc12-shared\opencv\modules\highgui\src\utils.cpp, line 611
Use this instead of your last block:
std::vector<cv::Mat> channels;
cv::split(finalImage, channels);
if(channels.size() == 0)
{
    std::cout << "unexpected error" << std::endl;
    return 1;
}
// duplicate the gray channel until we reach 3 channels
while(channels.size() < 3)
{
    channels.push_back(channels[0]);
}
// add the alpha channel
channels.push_back(alpha);
cv::merge(channels, dst);
I didn't test it, but this should be what you want.
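One extra note (mine, not part of the original answer): imshow may not render the alpha channel, so to verify the transparency it can be easier to write the 4-channel result to a PNG, which preserves it:
cv::imwrite("triangle_rgba.png", dst); // PNG keeps the 4th (alpha) channel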

OpenCV: how can I interpret the results of inRange?

I am processing video images and I would like to detect if the video contains any pixels of a certain range of red. Is this possible?
Here is the code I am adapting from a tutorial:
#ifdef __cplusplus
- (void)processImage:(Mat&)image;
{
    cv::Mat orig_image = image.clone();
    cv::medianBlur(image, image, 3);
    cv::Mat hsv_image;
    cv::cvtColor(image, hsv_image, cv::COLOR_BGR2HSV);
    cv::Mat lower_red_hue_range;
    cv::Mat upper_red_hue_range;
    cv::inRange(hsv_image, cv::Scalar(0, 100, 100), cv::Scalar(10, 255, 255), lower_red_hue_range);
    cv::inRange(hsv_image, cv::Scalar(160, 100, 100), cv::Scalar(179, 255, 255), upper_red_hue_range);
    // Interpret values here
}
Interpreting values
I would like to detect whether the results from the inRange operations are nil or not. In other words, I want to know whether any pixels in the original image have a colour in range of the given lower and upper red bounds. How can I interpret the results?
First you need to OR the lower and upper masks:
Mat mask = lower_red_hue_range | upper_red_hue_range;
Then you can countNonZero to see if there are non zero pixels (i.e. you found something).
int number_of_non_zero_pixels = countNonZero(mask);
It could be better to first apply morphological erosion or opening to remove small (probably noisy) blobs:
Mat kernel = getStructuringElement(MORPH_ELLIPSE, Size(3, 3));
morphologyEx(mask, mask, MORPH_OPEN, kernel); // or MORPH_ERODE
or find connected components (findContours, connectedComponentsWithStats) and prune / search them according to some criteria:
vector<vector<Point>> contours;
findContours(mask.clone(), contours, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE);
double threshold_on_area = 100.0;
for(int i = 0; i < contours.size(); ++i)
{
    double area = contourArea(contours[i]);
    if(area < threshold_on_area)
    {
        // don't consider this contour
        continue;
    }
    else
    {
        // do something (e.g. draw a bounding box around the contour)
        Rect box = boundingRect(contours[i]);
        rectangle(hsv_image, box, Scalar(0, 255, 255));
    }
}
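Putting it together, a minimal boolean answer to the original question might look like this (a short sketch reusing the mask names from above):
Mat mask = lower_red_hue_range | upper_red_hue_range;
bool containsRed = countNonZero(mask) > 0; // true if any pixel matched either red range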

How to show a vector<Point2f> as an image?

I have just started to learn OpenCV. I have defined a vector like:
vector<Point2f> cornersB;
and after that I have done some calculations (goodFeaturesToTrack, cornerSubPix and calcOpticalFlowPyrLK) using cornersB.
Now I want to show cornersB to see the points that have been found; my code is:
pointmat = Mat(cornersB);
imshow("Window", pointmat);
But I got an error saying bad number of channels (Source image must have 1, 3 or 4 channels) in cvConvertImage.
Can anyone teach me how to show the points of cornersB in an image?
I just want to see the points (points in white and the background in black).
The simplest way is to use cv::drawKeypoints:
drawKeypoints(InputArray image, const std::vector<KeyPoint>& keypoints, InputOutputArray outImage, const Scalar& color = Scalar::all(-1), int flags = DrawMatchesFlags::DEFAULT);
In your case, let's define a black image as image:
cv::Mat image(512, 512, CV_8U);
image.setTo(0);
Then convert cornersB to a std::vector<cv::KeyPoint> kp_cornerB and define the color as white with CV_RGB(255, 255, 255):
std::vector<cv::KeyPoint> kp_cornerB ;
// TODO convert cornersB to kp_cornerB
cv::Mat pointmat;
cv::drawKeypoints(image, kp_cornerB, pointmat, CV_RGB(255, 255, 255));
imshow("Window", pointmat);
The conversion can be done with a for loop over the vector:
for(vector<Point2f>::const_iterator it = cornersB.begin(); it != cornersB.end(); it++) {
    cv::KeyPoint kp(*it, 8);
    kp_cornerB.push_back(kp);
}
Here, the value '8' is the 'size' of the keypoint.
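If going through KeyPoint feels indirect, an alternative sketch (my suggestion, assuming the same 512x512 canvas as above) draws each point directly as a filled white dot on the black background:
cv::Mat pointmat = cv::Mat::zeros(512, 512, CV_8U);
for (vector<Point2f>::const_iterator it = cornersB.begin(); it != cornersB.end(); it++) {
    cv::circle(pointmat, *it, 2, cv::Scalar(255), -1); // filled white dot, radius 2
}
cv::imshow("Window", pointmat);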