Create Mat from vector&lt;Point2f&gt; - Assertion failed error - C++

I was trying to write Point2f imagePoints to a Mat image in OpenCV, following the link below.
Create Mat from vector<Point2f>
But I am getting an 'Assertion failed' error. Please help.
Code:
std::vector<cv::Point3d> objectPoints;
std::vector<cv::Point2d> imagePoints;
cv::Mat intrisicMat(3, 3, cv::DataType<double>::type);
intrisicMat.at<double>(0, 0) = param.focalLength.first;
intrisicMat.at<double>(0, 1) = 0;
intrisicMat.at<double>(0, 2) = param.principalPoint.first;
intrisicMat.at<double>(1, 0) = 0;
intrisicMat.at<double>(1, 1) = param.focalLength.second;
intrisicMat.at<double>(1, 2) = param.principalPoint.second;
intrisicMat.at<double>(2, 0) = 0;
intrisicMat.at<double>(2, 1) = 0;
intrisicMat.at<double>(2, 2) = 1;
cv::Mat rVec(3, 1, cv::DataType<double>::type); // Rotation vector
rVec.at<double>(0) = 0;
rVec.at<double>(1) = 0;
rVec.at<double>(2) = 0;
cv::Mat tVec(3, 1, cv::DataType<double>::type); // Translation vector
tVec.at<double>(0) = 0;
tVec.at<double>(1) = 0;
tVec.at<double>(2) = 0;
cv::Mat distCoeffs(5, 1, cv::DataType<double>::type); // Distortion vector
distCoeffs.at<double>(0) = param.distortionRadial.at(0);
distCoeffs.at<double>(1) = param.distortionRadial.at(1);
distCoeffs.at<double>(2) = param.distortionTangential.first;
distCoeffs.at<double>(3) = param.distortionTangential.second;
distCoeffs.at<double>(4) = param.distortionRadial.at(2);
projectPoints(objectPoints, rVec, tVec, intrisicMat, distCoeffs, imagePoints);
Mat depthImage = Mat(imagePoints);
imwrite("E:/softwares/1.8.0.71/bin/depthImage.jpg", depthImage);
cout << "depthImage.channels()=" << depthImage.channels() << endl;
Error:
OpenCV Error: Assertion failed (image.channels() == 1 || image.channels() == 3 || image.channels() == 4) in cv::imwrite_, file E:\softwares\opencv-3.1.0\opencv-3.1.0\modules\imgcodecs\src\loadsave.cpp, line 455
My image has 2 channels, so imwrite() is throwing the assertion-failed error. How can I create a Mat image from the image points, if not like this?

With what you have written in the comments, it seems that you're trying to imwrite your Mat to a file. The problem is that a Mat built from vector<Point2f> is a 2-channel matrix, which is not compatible with any image format imwrite supports (grayscale, RGB, or RGBA).
Moreover, please edit your post to format the code (using markdown) so it is easier to read and to help you.

Related

Forward process fail in ONNX model

I want to use FER+ Emotion Recognition in my project:
string modelPath = "../data/model.onnx";
Mat frame = imread("../data/Smile-Mood.jpg");
Mat gray;
cvtColor(frame, gray, COLOR_BGR2GRAY);
float scale = 1.0;
int inHeight = 64;
int inWidth = 64;
bool swapRB = false;
//Read and initialize network
cv::dnn::Net net = cv::dnn::readNetFromONNX(modelPath);
Mat blob;
//Create a 4D blob from a frame
cv::dnn::blobFromImage(gray, blob, scale, Size(inWidth, inHeight), swapRB, false);
//Set input blob
net.setInput(blob);
//Make forward pass
Mat prob = net.forward();
I get this error on the last line:
Unhandled exception at 0x00007FFDA25AA799 in FERPlusDNNOpenCV.exe: Microsoft C++ exception: cv::Exception at memory location 0x00000063839BE050. occurred
OpenCV(4.1.0) Error: Assertion failed (ngroups > 0 && inpCn % ngroups == 0 && outCn % ngroups == 0) in cv::dnn::ConvolutionLayerImpl::getMemoryShapes
How can I fix this?
UPDATE (following @Micka's comment):
Mat dstImg = Mat(64, 64, CV_32FC1);
resize(gray, dstImg, Size(64, 64));
std::vector<float> array;
if (dstImg.isContinuous())
    array.assign(dstImg.data, dstImg.data + dstImg.total());
else {
    for (int i = 0; i < dstImg.rows; ++i) {
        array.insert(array.end(), dstImg.ptr<float>(i), dstImg.ptr<float>(i) + dstImg.cols);
    }
}
//Set input blob
net.setInput(array);
//Make forward pass
Mat prob = net.forward();
I get this error: vector subscript out of range.
Should I use another function instead of setInput?

How to warp image with predefined homography matrix in OpenCV?

I am trying to set predefined values for the homography and then use the function warpPerspective to warp my image. First I used the findHomography function and displayed the result:
H = findHomography(obj, scene, CV_RANSAC);
for (int i = 0; i < H.rows; i++) {
    for (int j = 0; j < H.cols; j++) {
        printf("H: %d %d: %lf\n", i, j, H.at<double>(i, j));
    }
}
warpPerspective(image1, result, H, cv::Size(image1.cols + image2.cols, image1.rows));
This works as expected and I get these values.
After that I tried to set the values for H manually and call warpPerspective like this:
H.at<double>(0, 0) = 0.766912;
H.at<double>(0, 1) = 0.053191;
H.at<double>(0, 2) = 637.961151;
H.at<double>(1, 0) = -0.118426;
H.at<double>(1, 1) = 0.965682;
H.at<double>(1, 2) = 3.405685;
H.at<double>(2, 0) = -0.000232;
H.at<double>(2, 1) = 0.000019;
H.at<double>(2, 2) = 1.000000;
warpPerspective(image1, result, H, cv::Size(image1.cols + image2.cols, image1.rows));
And now I get a System.NullReferenceException. Do you have any idea why this is failing?
Okay, I got help on the OpenCV forum. My declaration of H was:
cv::Mat H;
This was fine for the findHomography function, but when I wanted to set the values manually, I had to declare H like this:
cv::Mat H(3, 3, CV_64FC1);

OpenCV - Impossible to save or show a SURF descriptor Mat

I'm trying to save my SURF descriptor Mats (from different images) to a file for later use, using the code below:
int offline(int nb) { //creates descriptor images
    Mat img;
    int nfeatures = 25;
    int nOctaveLayers = 2;
    double contrastThreshold = 0.04;
    double edgeThreshold = 10;
    double sigma = 1.6;
    SurfFeatureDetector surfDetector = SURF(edgeThreshold, 2, 2, true, false);
    vector<KeyPoint> keypoints;
    SurfDescriptorExtractor surfExtractor;
    Mat imgDescriptors;
    for (int i = 2; i <= nb; i++) {
        img = imread("images/" + to_string(i) + ".jpg", CV_LOAD_IMAGE_GRAYSCALE);
        if (!img.data)
        {
            return -1;
        }
        surfDetector.detect(img, keypoints);
        surfExtractor.compute(img, keypoints, imgDescriptors);
        imwrite(to_string(i) + ".jgp", imgDescriptors); //impossible to save
        //imshow("lol", imgDescriptors); //impossible to show
    }
    return 0;
}
I'm getting this exception: OpenCV Error: Unspecified error (could not find a writer for the specified extension) in cv::imwrite_, file C:\builds\2_4_PackSlave-win64-vc12-shared\opencv\modules\highgui\src\loadsave.cpp, line 275. [RESOLVED by fixing the extension typo ".jgp" to ".jpg".]
So I thought I should try to show the image instead, and I'm getting another error.
Any clue about this? (second exception)

Right use of initUndistortRectifyMap and remap from OpenCV

I want to undistort a camera image. OpenCV's undistort function is too slow, so I want to split it, as described in the documentation, into the two calls initUndistortRectifyMap (as an init step) and remap (in the render loop).
At first, I tried a test program with the basic approach:
//create source matrix
cv::Mat srcImg(res.first, res.second, cvFormat, const_cast<char*>(pImg));
//fill matrices
cv::Mat cam(3, 3, cv::DataType<float>::type);
cam.at<float>(0, 0) = 528.53618582196384f;
cam.at<float>(0, 1) = 0.0f;
cam.at<float>(0, 2) = 314.01736116032430f;
cam.at<float>(1, 0) = 0.0f;
cam.at<float>(1, 1) = 532.01912214324500f;
cam.at<float>(1, 2) = 231.43930864205211f;
cam.at<float>(2, 0) = 0.0f;
cam.at<float>(2, 1) = 0.0f;
cam.at<float>(2, 2) = 1.0f;
cv::Mat dist(5, 1, cv::DataType<float>::type);
dist.at<float>(0, 0) = -0.11839989180635836f;
dist.at<float>(1, 0) = 0.25425420873955445f;
dist.at<float>(2, 0) = 0.0013269901775205413f;
dist.at<float>(3, 0) = 0.0015787467748277866f;
dist.at<float>(4, 0) = -0.11567938093172066f;
cv::Mat map1, map2;
cv::initUndistortRectifyMap(cam, dist, cv::Mat(), cam, cv::Size(res.second, res.first), CV_32FC1, map1, map2);
cv::remap(srcImg, *m_undistImg, map1, map2, cv::INTER_CUBIC);
The format of my camera image is BGRA. The code compiles and starts, but the resulting image is wrong:
Any ideas what's wrong with my code?
It works now, yes. To be honest, I don't remember exactly what the problem was; I had interchanged width and height or something like that.
This is my running code:
//create source matrix
cv::Mat srcImg(resolution.second, resolution.first, cvFormat, const_cast<unsigned char*>(pSrcImg));
//look if an update of the maps is necessary
if ((resolution.first != m_width) || (m_height != resolution.second))
{
m_width = resolution.first;
m_height = resolution.second;
cv::initUndistortRectifyMap(*m_camData, *m_distData, cv::Mat(), *m_camData, cv::Size(resolution.first, resolution.second), CV_32FC1, *m_undistMap1, *m_undistMap2);
}
//create undistorted image
cv::remap(srcImg, *m_undistortedImg, *m_undistMap1, *m_undistMap2, cv::INTER_LINEAR);
return reinterpret_cast<unsigned char*>(m_undistortedImg->data);

OpenCV running kmeans algorithm on an image

I am trying to run kmeans on a 3-channel color image, but every time I call the function it crashes with the following error:
OpenCV Error: Assertion failed (data.dims <= 2 && type == CV_32F && K > 0) in unknown function, file ..\..\..\OpenCV-2.3.0\modules\core\src\matrix.cpp, line 2271
I've included the code below with some comments to help specify what is being passed in. Any help is greatly appreciated.
// Load in an image
// Depth: 8, Channels: 3
IplImage* iplImage = cvLoadImage("C:/TestImages/rainbox_box.jpg");
// Create a matrix to the image
cv::Mat mImage = cv::Mat(iplImage);
// Create a single channel image to create our labels needed
IplImage* iplLabels = cvCreateImage(cvGetSize(iplImage), iplImage->depth, 1);
// Convert the image to grayscale
cvCvtColor(iplImage, iplLabels, CV_RGB2GRAY);
// Create the matrix for the labels
cv::Mat mLabels = cv::Mat(iplLabels);
// Create the labels
int rows = mLabels.total();
int cols = 1;
cv::Mat list(rows, cols, mLabels.type());
uchar* src;
uchar* dest = list.ptr(0);
for (int i = 0; i < mLabels.size().height; i++)
{
    src = mLabels.ptr(i);
    memcpy(dest, src, mLabels.step);
    dest += mLabels.step;
}
list.convertTo(list, CV_32F);
// Run the algorithm
cv::Mat labellist(list.size(), CV_8UC1);
cv::Mat centers(6, 1, mImage.type());
cv::TermCriteria termcrit(CV_TERMCRIT_EPS+CV_TERMCRIT_ITER, 10, 1.0);
kmeans(mImage, 6, labellist, termcrit, 3, cv::KMEANS_PP_CENTERS, centers);
The error says it all: Assertion failed (data.dims <= 2 && type == CV_32F && K > 0)
These are very simple rules: the function will work only if
mImage's type is CV_32F,
mImage.dims is <= 2,
and K > 0. In this case, you define K as 6.
From what you stated in the question, it seems that:
IplImage* iplImage = cvLoadImage("C:/TestImages/rainbox_box.jpg");
is loading the image as IPL_DEPTH_8U by default, not IPL_DEPTH_32F. This means that mImage is also 8-bit, which is why your code is not working.