OpenCV error: assertion failed in warpPerspective - C++

I'm trying to make an AR app using ArUco and OpenCV (I'm a newbie). It detects an ArUco marker and puts an image on it. I have tried to use the warpPerspective() function, but something is wrong: it returns OpenCV error: assertion failed ((m0.type() == CV_32F || m0.type() == CV_64F)) in warpPerspective. Please give me a way to solve it.
int main() {
    cv::VideoCapture inputVideo;
    inputVideo.open("gal.mp4");
    cv::Ptr<cv::aruco::Dictionary> dictionary = cv::aruco::getPredefinedDictionary(cv::aruco::DICT_4X4_50);
    cv::Mat sq = imread("zhuz.jpg", CV_LOAD_IMAGE_UNCHANGED);
    while (inputVideo.grab()) {
        vector<Point2f> sqPoints;
        vector<Point2f> p;
        sqPoints.push_back(Point2f(0, 0));
        sqPoints.push_back(Point2f(sq.cols, 0));
        sqPoints.push_back(Point2f(sq.cols, sq.rows));
        sqPoints.push_back(Point2f(0, sq.rows));
        cv::Mat image, warp_matrix;
        inputVideo.retrieve(image);
        Mat cpy_img(image.rows, image.cols, image.type());
        Mat neg_img(image.rows, image.cols, image.type());
        Mat gray;
        Mat blank(sq.rows, sq.cols, sq.type());
        std::vector<int> ids;
        std::vector<std::vector<cv::Point2f>> corners;
        cv::aruco::detectMarkers(image, dictionary, corners, ids);
        if (ids.size() > 0) {
            p.push_back(corners[0][0]);
            p.push_back(corners[0][1]);
            p.push_back(corners[0][2]);
            p.push_back(corners[0][3]);
            Mat wrap_matrix = getPerspectiveTransform(sqPoints, p);
            blank = Scalar(0);
            neg_img = Scalar(0); // Image is white when pixel values are zero
            cpy_img = Scalar(0); // Image is white when pixel values are zero
            bitwise_not(blank, blank);
            warpPerspective(sq, neg_img, warp_matrix, Size(neg_img.cols, neg_img.rows)); // Transform overlay Image to the position - [ITEM1]
            warpPerspective(blank, cpy_img, warp_matrix, Size(cpy_img.cols, neg_img.rows)); // Transform a blank overlay image to position
            bitwise_not(cpy_img, cpy_img); // Invert the copy paper image from white to black
            bitwise_and(cpy_img, image, cpy_img); // Create a "hole" in the Image to create a "clipping" mask - [ITEM2]
            bitwise_or(cpy_img, neg_img, image); // Finally merge both items [ITEM1 & ITEM2]
        }
        cv::imshow("out", image);
    }
}
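A likely cause, judging from the posted code: the Mat declared as warp_matrix is never assigned. getPerspectiveTransform stores its result in a new local called wrap_matrix (note the spelling), so the warp_matrix actually passed to warpPerspective is still empty, and an empty Mat fails the (m0.type() == CV_32F || m0.type() == CV_64F) assertion. A minimal sketch of the fix, keeping one consistent name (and using cpy_img.size() in the second call, since Size(cpy_img.cols, neg_img.rows) mixes the two images):

// getPerspectiveTransform returns a CV_64F 3x3 matrix; pass that same Mat on.
cv::Mat warp_matrix = cv::getPerspectiveTransform(sqPoints, p);
cv::warpPerspective(sq, neg_img, warp_matrix, neg_img.size());
cv::warpPerspective(blank, cpy_img, warp_matrix, cpy_img.size());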

Related

C++ and OpenCV 4.5.3 - (-215: Assertion failed)

Problem: watershed algorithm
I started an app project for image processing, using OpenCV 4.5.3 and Swift (with C++). I have been fighting with the watershed algorithm for a really long time, and I have no clue what I did wrong.
Error:
libc++abi.dylib: terminating with uncaught exception of type cv::Exception: OpenCV(4.5.3)
/Volumes/build-storage/build/master_iOS-mac/opencv/modules/imgproc/src/segmentation.cpp:161:
error: (-215:Assertion failed) src.type() == CV_8UC3 && dst.type() == CV_32SC1 in function 'watershed'
In the definition of OpenCV's watershed we can find:
@param image Input 8-bit 3-channel image.
@param markers Input/output 32-bit single-channel image (map) of markers. It should have the same size as image.
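Before debugging further, a quick sanity check right before the watershed call (a minimal assertion mirroring the one in the error; result and markers are the variables from the code below) can confirm whether the inputs actually match these requirements:

// Fails early, with a readable message, if the input types are wrong.
CV_Assert(result.type() == CV_8UC3 && markers.type() == CV_32SC1);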
Code
+(UIImage *) watershed:(UIImage *)src{
    cv::Mat img, mask;
    UIImageToMat(src, img);
    // Change the background from white to black, since that will help later to extract
    // better results during the use of Distance Transform
    cv::inRange(img, cv::Scalar(255,255,255), cv::Scalar(255,255,255), mask);
    img.setTo(cv::Scalar(0,0,0), mask);
    // Create a kernel that we will use to sharpen our image:
    // an approximation of the second derivative, a quite strong kernel
    cv::Mat kernel = (cv::Mat_<float>(3,3) <<
        1,  1, 1,
        1, -8, 1,
        1,  1, 1);
    // Do the Laplacian filtering as it is.
    // We need to convert everything to something deeper than CV_8U,
    // because the kernel has some negative values, and we can expect
    // in general to have a Laplacian image with negative values,
    // BUT an 8-bit unsigned int (the one we are working with) can only
    // hold values from 0 to 255, so negative numbers would be truncated.
    cv::Mat lapl;
    cv::filter2D(img, lapl, CV_32F, kernel);
    cv::Mat sharp;
    img.convertTo(sharp, CV_32F);
    cv::Mat result = sharp - lapl;
    // convert back to 8-bit grayscale
    result.convertTo(result, CV_8UC3);
    lapl.convertTo(lapl, CV_8UC3);
    cv::Mat bw;
    cv::cvtColor(result, bw, cv::COLOR_BGR2GRAY);
    cv::threshold(bw, bw, 40, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);
    // Perform the distance transform algorithm
    cv::Mat dist;
    cv::distanceTransform(bw, dist, cv::DIST_L2, cv::DIST_MASK_3);
    // Normalize the distance image to the range {0.0, 1.0}
    // so we can visualize and threshold it
    cv::normalize(dist, dist, 0, 1.0, cv::NORM_MINMAX);
    // Threshold to obtain the peaks;
    // these will be the markers for the foreground objects
    cv::threshold(dist, dist, 0.4, 1.0, cv::THRESH_BINARY);
    // Dilate the dist image a bit
    cv::Mat kernel1 = cv::Mat::ones(3, 3, CV_8U);
    dilate(dist, dist, kernel1);
    // Create the CV_8U version of the distance image;
    // it is needed for findContours()
    cv::Mat dist_8u;
    dist.convertTo(dist_8u, CV_8U);
    // Find total markers
    std::vector<std::vector<cv::Point> > contours;
    findContours(dist_8u, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    // Create the marker image for the watershed algorithm
    cv::Mat markers = cv::Mat::zeros(dist.size(), CV_32S);
    // Draw the foreground markers
    for (size_t i = 0; i < contours.size(); i++)
    {
        drawContours(markers, contours, static_cast<int>(i), cv::Scalar(static_cast<int>(i)+1), -1);
    }
    // Draw the background marker
    circle(markers, cv::Point(5,5), 3, cv::Scalar(255), -1);
    cv::Mat markers8u;
    markers.convertTo(markers8u, CV_8U, 10);
    // Perform the watershed algorithm
    watershed(result, markers);
    return MatToUIImage(result);
}
You can clearly see that the variables have the proper types, as in the description of the function:
result.convertTo(result, CV_8UC3);
cv::Mat markers = cv::Mat::zeros(dist.size(), CV_32S);
convertTo cannot add channels, nor can it reduce/convert an image to one with fewer channels. The key in this case is to use:
cvtColor(src, src, COLOR_BGRA2BGR); // change 4 to 3 channels
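In context: UIImageToMat typically hands back a 4-channel (RGBA/BGRA) buffer, so img is CV_8UC4 and every Mat derived from it keeps 4 channels, which is why watershed sees something other than CV_8UC3. A minimal sketch of where the conversion would go in the method above:

// Right after converting the UIImage, drop the alpha channel so the
// rest of the pipeline (and watershed) works on a CV_8UC3 image.
UIImageToMat(src, img);
cv::cvtColor(img, img, cv::COLOR_BGRA2BGR); // 4 channels -> 3 channels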

Using cv::rgbd::Odometry::compute

I am using C++ and OpenCV in combination with ROS. I use live images from my camera (an Intel RealSense R200), from which I get depth and RGB images. In my C++ code I want to use these images to get odometry data and build a trajectory out of it.
I am trying to use the cv::rgbd::Odometry::compute function for the odometry, but I always get false as the return value (the "isSuccess" value in the code is always 0), and I don't know which part I am doing wrong.
I read my images from the camera using ROS, and then in the callback function I first convert all images to grayscale and then use the SURF function to detect features. Then I want to use compute to get the transformation between the current and previous frame.
As far as I understood, "Rt" and "initRt" are outputs of the function, so it is enough to construct them with the correct size.
Can anyone see the problem? Am I missing anything?
boost::shared_ptr<rgbd::Odometry> odom;
Mat Rt = Mat(4,4, CV_64FC1);
Mat initRt = Mat(4,4, CV_64FC1);
Mat prevFtrM; // mask matrix of previous image
Mat currFtrM; // mask matrix of current image
Mat tempFtrM;
Mat imgprev;  // previous depth image
Mat imgcurr;  // current depth image
Mat imgprevC; // previous colored image
Mat imgcurrC; // current colored image

void Surf(Mat img) // detect features of the img and fill currFtrM
{
    int minHessian = 400;
    Ptr<SURF> detector = SURF::create( minHessian );
    vector<KeyPoint> keypoints_1;
    currFtrM = Mat::zeros(img.size(), CV_8U); // type of mask is CV_8U
    Mat roi(currFtrM, cv::Rect(0,0,img.size().width,img.size().height));
    roi = Scalar(255, 255, 255);
    detector->detect( img, keypoints_1, currFtrM );
    Mat img_keypoints_1;
    drawKeypoints( img, keypoints_1, img_keypoints_1, Scalar::all(-1), DrawMatchesFlags::DEFAULT );
    //-- Show detected (drawn) keypoints
    imshow("Keypoints 1", img_keypoints_1 );
}

void Callback(const sensor_msgs::ImageConstPtr& clr, const sensor_msgs::ImageConstPtr& dpt)
{
    if(!imgcurr.data || !imgcurrC.data) // first frame
    {
        // depth image
        imgcurr = cv_bridge::toCvShare(dpt, sensor_msgs::image_encodings::TYPE_32FC1)->image;
        // colored image
        imgcurrC = cv_bridge::toCvShare(clr, "bgr8")->image;
        cvtColor(imgcurrC, imgcurrC, COLOR_BGR2GRAY);
        // find features in the image
        Surf(imgcurrC);
        prevFtrM = currFtrM;
        // scale color image to size of depth image
        resize(imgcurrC, imgcurrC, imgcurr.size());
        return;
    }
    odom = boost::make_shared<rgbd::RgbdOdometry>(imgcurrC, Odometry::DEFAULT_MIN_DEPTH(), Odometry::DEFAULT_MAX_DEPTH(), Odometry::DEFAULT_MAX_DEPTH_DIFF(), std::vector< int >(), std::vector< float >(), Odometry::DEFAULT_MAX_POINTS_PART(), Odometry::RIGID_BODY_MOTION);
    // depth image
    imgprev = imgcurr;
    imgcurr = cv_bridge::toCvShare(dpt, sensor_msgs::image_encodings::TYPE_32FC1)->image;
    // colored image
    imgprevC = imgcurrC;
    imgcurrC = cv_bridge::toCvShare(clr, "bgr8")->image;
    cvtColor(imgcurrC, imgcurrC, COLOR_BGR2GRAY);
    // scale color image to size of depth image
    resize(imgcurrC, imgcurrC, imgcurr.size());
    cv::imshow("Color resized", imgcurrC);
    tempFtrM = currFtrM;
    // detect new features in imgcurrC and save in a vector<Point2f>
    Surf(imgcurrC);
    prevFtrM = tempFtrM;
    // set the camera matrix (intrinsics from calibration)
    float vals[] = {619.137635, 0., 304.793791, 0., 625.407449, 223.984030, 0., 0., 1.};
    const Mat cameraMatrix = Mat(3, 3, CV_32FC1, vals);
    odom->setCameraMatrix(cameraMatrix);
    bool isSuccess = odom->compute( imgprevC, imgprev, prevFtrM, imgcurrC, imgcurr, currFtrM, Rt, initRt );
    if(isSuccess)
        cout << "isSuccess " << isSuccess << endl;
}
Update: I calibrated my camera and replaced the camera matrix with real values.
A bit late, but this could still be useful for someone.
It seems to me that you are missing the extrinsic calibration in the calculation: in my experiments, the R200 has a translation component between the RGB and depth cameras that you are not taking into account.
Furthermore, looking at the camera parameters, depth and RGB have different intrinsics, and the color frame has a MODIFIED_BROWN_CONRADY lens distortion (though this is minimal); are you undistorting it?
Obviously, I could be wrong if you already do all those steps and save registered RGB and depth to files.
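For the undistortion part, a minimal sketch (the coefficients here are placeholders; the real Brown-Conrady values come from the camera's calibration, e.g. via librealsense). It would go after the cvtColor calls in the callback, reusing the cameraMatrix from the posted code:

// Hypothetical distortion coefficients (k1, k2, p1, p2, k3) - replace
// with the values reported by your camera's calibration.
float dist_vals[] = {-0.05f, 0.06f, 0.0f, 0.0f, 0.0f};
const cv::Mat distCoeffs = cv::Mat(1, 5, CV_32FC1, dist_vals);
cv::Mat undistorted;
cv::undistort(imgcurrC, undistorted, cameraMatrix, distCoeffs); // dst must differ from src
imgcurrC = undistorted; // feed the undistorted frame to the odometry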

Extract image pixels of triangle Error

I'm new to image processing and development. I need to take the pixels inside a triangle in the image. In order to do that I used the following code. Unfortunately I obtain unwanted black pixels. To get rid of that problem I tried to remove the background pixels by giving them an alpha value (transparent background), but that gives the following error. Any help is appreciated.
My code:
Mat img = cv::imread("/home/fabio/code/lena.jpg", cv::IMREAD_GRAYSCALE);
Mat alpha(img.size(), CV_8UC1, Scalar(0));
//triangle definition (example points)
vector<Point> points;
points.push_back(Point(200, 70));
points.push_back(Point(60, 150));
points.push_back(Point(500, 500));
//apply triangle to mask
fillConvexPoly(alpha, points, Scalar(255));
cv::Mat finalImage = cv::Mat::zeros(img.size(), img.type());
img.copyTo(finalImage, alpha);
imshow("image", finalImage);
Mat dst;
Mat rgb[1];
split(finalImage, rgb);
Mat rgba[2] = { finalImage, alpha };
merge(rgba, 2, dst);
imshow("dst", dst);
Error: OpenCV Error: Bad number of channels (Source image must have 1, 3 or 4 channels) in cvConvertImage, file C:\builds\2_4_PackSlave-win64-vc12-shared\opencv\modules\highgui\src\utils.cpp, line 611
Use this instead of your last block:
std::vector<cv::Mat> channels;
cv::split(finalImage, channels);
if(channels.size() == 0)
{
    std::cout << "unexpected error" << std::endl;
    return 1;
}
// fill up to reach 3 channels
while(channels.size() < 3)
{
    channels.push_back(channels[0]);
}
// add alpha channel
channels.push_back(alpha);
cv::merge(channels, dst);
I didn't test it but this should be what you want?
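One follow-up note: imshow usually ignores the alpha channel when displaying, so to actually keep the transparency it is easier to write the 4-channel result to a PNG (the filename here is just an example):

// PNG supports an alpha channel, so the transparent background survives.
cv::imwrite("triangle_masked.png", dst);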

Drawing Rectangle around difference area

I have a question which I am unable to resolve. I am taking the difference of two images using OpenCV and getting the output in a separate Mat. The difference method used is absdiff. Here is the code.
char imgName[15];
Mat img1 = imread(image_path1, COLOR_BGR2GRAY);
Mat img2 = imread(image_path2, COLOR_BGR2GRAY);
/*cvtColor(img1, img1, CV_BGR2GRAY);
cvtColor(img2, img2, CV_BGR2GRAY);*/
cv::Mat diffImage;
cv::absdiff(img2, img1, diffImage);
cv::Mat foregroundMask = cv::Mat::zeros(diffImage.rows, diffImage.cols, CV_8UC3);
float threshold = 30.0f;
float dist;
for(int j=0; j<diffImage.rows; ++j)
{
    for(int i=0; i<diffImage.cols; ++i)
    {
        cv::Vec3b pix = diffImage.at<cv::Vec3b>(j,i);
        dist = (pix[0]*pix[0] + pix[1]*pix[1] + pix[2]*pix[2]);
        dist = sqrt(dist);
        if(dist>threshold)
        {
            foregroundMask.at<unsigned char>(j,i) = 255;
        }
    }
}
sprintf(imgName,"D:/outputer/d.jpg");
imwrite(imgName, diffImage);
I want to bound the difference part in a rectangle. findContours draws too many contours, but I only need a particular portion. My diff image is
I want to draw a single rectangle around all five dials.
Please point me in the right direction.
Regards,
I would search for the highest value of the i index giving a non-black pixel; that's the right border. The lowest non-black i is the left border. Similarly for j for the top and bottom borders.
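A minimal sketch of that scan (assuming foregroundMask is a single-channel CV_8U mask; in the posted code it is created as CV_8UC3, so it would need to be created with CV_8UC1 for this):

// One pass over the mask, tracking the extreme non-black coordinates.
int left = foregroundMask.cols, right = -1, top = foregroundMask.rows, bottom = -1;
for (int j = 0; j < foregroundMask.rows; ++j)
    for (int i = 0; i < foregroundMask.cols; ++i)
        if (foregroundMask.at<unsigned char>(j, i) > 0)
        {
            if (i < left)   left = i;
            if (i > right)  right = i;
            if (j < top)    top = j;
            if (j > bottom) bottom = j;
        }
cv::Rect box(left, top, right - left + 1, bottom - top + 1); // valid if right >= 0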
You can:
binarize the image with a threshold; the background will be 0.
Use findNonZero to retrieve all points that are not 0, i.e. all foreground points.
Use boundingRect on the retrieved points.
Result:
Code:
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    // Load image (grayscale)
    Mat1b img = imread("path_to_image", IMREAD_GRAYSCALE);
    // Binarize image
    Mat1b bin = img > 70;
    // Find non-black points
    vector<Point> points;
    findNonZero(bin, points);
    // Get bounding rect
    Rect box = boundingRect(points);
    // Draw (in color)
    Mat3b out;
    cvtColor(img, out, COLOR_GRAY2BGR);
    rectangle(out, box, Scalar(0,255,0), 3);
    // Show
    imshow("Result", out);
    waitKey();
    return 0;
}
Find contours; findContours outputs a set of contours as std::vector<std::vector<cv::Point>>. Let us call it contours:
std::vector<cv::Point> all_points;
size_t points_count{0};
for(const auto& contour : contours){
    points_count += contour.size();
    all_points.reserve(points_count);
    std::copy(contour.begin(), contour.end(),
              std::back_inserter(all_points));
}
auto bounding_rectangle = cv::boundingRect(all_points);

Finding difference in an image

I have an image as follows:
I want to detect 5 dials for processing. Hough circles detects all the other irrelevant circles as well. To solve this I created a plain image and generated the absolute difference with this one. It gave this image:
I drew a box around it, and the final image is:
My code is as follows:
Mat img1 = imread(image_path1, COLOR_BGR2GRAY);
Mat img2 = imread(image_path2, COLOR_BGR2GRAY);
cv::Mat diffImage;
cv::absdiff(img2, img1, diffImage);
cv::Mat foregroundMask = cv::Mat::zeros(diffImage.rows, diffImage.cols, CV_8UC3);
float threshold = 30.0f;
float dist;
for(int j=0; j<diffImage.rows; ++j)
{
    for(int i=0; i<diffImage.cols; ++i)
    {
        cv::Vec3b pix = diffImage.at<cv::Vec3b>(j,i);
        dist = (pix[0]*pix[0] + pix[1]*pix[1] + pix[2]*pix[2]);
        dist = sqrt(dist);
        if(dist>threshold)
        {
            foregroundMask.at<unsigned char>(j,i) = 255;
        }
    }
}
cvtColor(diffImage, diffImage, COLOR_BGR2GRAY);
Mat1b img = diffImage.clone();
// Binarize image
Mat1b bin = img > 70;
// Find non-black points
vector<Point> points;
findNonZero(bin, points);
// Get bounding rect
Rect box = boundingRect(points);
// Draw (in color)
rectangle(img1, box, Scalar(0,255,0), 3);
// Show
imshow("Result", img1);
Now the issue is that I can't compare the plain image with any other image of a different size. Any pointer in the right direction will be very helpful.
Regards,
Saghir A. Khatr
Edit
My plain image is as follows
I want to create a standard sample plain image which can be used with any image to detect that portion...
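One way to handle the size mismatch, sketched under the assumption that the plain template can simply be rescaled ("plain.jpg" is a placeholder name): resize the template to each input's size before taking the difference.

// Rescale the plain template to the incoming image's size, then diff.
cv::Mat plain = cv::imread("plain.jpg", cv::IMREAD_GRAYSCALE);
cv::Mat input = cv::imread(image_path1, cv::IMREAD_GRAYSCALE);
cv::Mat plainResized, diffImage;
cv::resize(plain, plainResized, input.size());
cv::absdiff(input, plainResized, diffImage);

Note that plain rescaling only works if the dials keep roughly the same relative position and scale across images; otherwise template matching or feature-based alignment would be needed.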