I'm trying to learn OpenCV (using version 3.0.0).
Right now I'm trying to see what the point operations do to various images. Everything was going fine until I tried the magnitude operation, which requires inputs in the form
magnitude(InputArray x, InputArray y, OutputArray magnitude)
It also says that x and y should be floating-point arrays of the x/y-coordinates of the vectors, and that they must be the same size.
I've tried making a vector of Mats, splitting the input image into them, and applying the magnitude operator, but this didn't work. So I thought I needed to pass the arguments as rows and columns, but now I'm getting the error
OpenCV Error: Assertion failed (src1.size() == src2.size() && type == src2.type() && (depth == CV_32F || depth == CV_64F)) in magnitude, file /home/<user>/opencv-3.0.0-beta/modules/core/src/mathfuncs.cpp, line 521
terminate called after throwing an instance of 'cv::Exception'
what(): /home/<user>/opencv-3.0.0-beta/modules/core/src/mathfuncs.cpp:521: error: (-215) src1.size() == src2.size() && type == src2.type() && (depth == CV_32F || depth == CV_64F) in function magnitude
Aborted (core dumped)
And I'm not sure why, because I am clearly converting the input Mats to CV_64F.
Am I using the magnitude function wrong, or just passing it the wrong data?
void Magnitude(Mat img, Mat out)
{
    img.convertTo(img, CV_64F);
    out.convertTo(out, CV_64F);

    for (int i = 0; i < img.rows; i++)
        for (int j = 0; j < img.cols; j++)
            cv::magnitude(img.row(i), img.col(j), out.at<cv::Vec2f>(i, j));

    cv::normalize(out, out, 0, 255, cv::NORM_MINMAX);
    cv::convertScaleAbs(out, out);
    cv::imshow("Magnitude", out);
    waitKey();
}
void magnitude(InputArray x, InputArray y, OutputArray magnitude)
where x, y, and magnitude must all have the same size. In your case that means your image would have to be square, since you pass img.row(i) and img.col(j). Is that really what you want?
A sample usage:
cv::Mat gx, gy;
// magnitude() needs floating-point input, so request CV_32F output from Sobel
cv::Sobel(img, gx, CV_32F, 1, 0, 3);
cv::Sobel(img, gy, CV_32F, 0, 1, 3);
cv::Mat mag(gx.size(), gx.type());
cv::magnitude(gx, gy, mag);
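Putting that together, here is a minimal sketch of how the Magnitude function above could be rewritten so the assertion holds; it assumes a single-channel (grayscale) input and that a Sobel gradient magnitude is the intended result:

#include <opencv2/opencv.hpp>

// Sketch: compute and display the gradient magnitude of a grayscale image.
void Magnitude(const cv::Mat& img)
{
    cv::Mat gx, gy, mag;
    cv::Sobel(img, gx, CV_32F, 1, 0, 3);   // x-derivatives, float output
    cv::Sobel(img, gy, CV_32F, 0, 1, 3);   // y-derivatives, float output
    cv::magnitude(gx, gy, mag);            // same size, same float depth: OK

    cv::normalize(mag, mag, 0, 255, cv::NORM_MINMAX);
    cv::convertScaleAbs(mag, mag);         // convert to CV_8U for display
    cv::imshow("Magnitude", mag);
    cv::waitKey();
}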
Related
Hi all, I'm a rookie in C++ and OpenCV as well. Kindly help me out on this.
I have a function, cv::aruco::estimatePoseSingleMarkers(markerCorners, markerLength, camMatrix, distCoeffs, rvecs, tvecs), that gives the pose of the marker w.r.t. the camera.
I used a for loop to print out the values of tvecs from the function:
tvecs: [-0.0240248, 0.0161165, 0.052999]
When I print the size of tvecs it says the size is 1, but I think it's 1x3.
My requirement is to multiply the above-mentioned tvecs by a matrix of size 3x3.
How do I do this?
The following is the piece of code:
// Get frame and convert to OpenCV Mat
int openCVDataType = CV_8UC3;
cv::Mat image(cv::Size(pRequest->imageWidth.read(), pRequest->imageHeight.read()),
              openCVDataType, pRequest->imageData.read(), pRequest->imageLinePitch.read());

// Undistort
cv::remap(image, imageUndist, map1, map2, CV_INTER_LINEAR);

// ArUco detection
cv::aruco::detectMarkers(imageUndist, dictionary, markerCorners, markerIds,
                         detectorParams, rejectedCandidates);

if (markerIds.size() > 0) {
    cv::aruco::estimatePoseSingleMarkers(markerCorners, markerLength, camMatrix,
                                         distCoeffs, rvecs, tvecs);
    for (unsigned int d = 0; d < markerIds.size(); d++) {
        cout << "tvecsss: " << tvecs[d] << endl;
        cout << "tvecs: " << tvecs[d].t() << endl;

        cv::Mat rotMat;
        cv::Rodrigues(rvecs[d], rotMat);
        rotMat = rotMat.t();
        cout << "rotMat: " << rotMat << endl;
        cout << "translation: " << -tvecs[d] * rotMat << endl;
    }
}
There's an error when I multiply tvecs[d]*rotMat. This is the error:
mat.inl.hpp:1274: error: (-215:Assertion failed) data && dims <= 2 &&
(rows == 1 || cols == 1) && rows + cols - 1 == n && channels() == 1 in
function 'operator cv::Vec<_Tp, m>'
You need to change this:
-tvecs[d]*rotMat
into this:
rotMat*-tvecs[d]
tvec is a 3x1 matrix and rotMat is a 3x3 matrix, so only rotMat * tvec is a valid product.
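For illustration, a minimal sketch of the corrected loop body, assuming (as in the question) that tvecs is the std::vector<cv::Vec3d> filled by estimatePoseSingleMarkers:

cv::Mat rotMat;
cv::Rodrigues(rvecs[d], rotMat);   // 3x3 rotation matrix (CV_64F)
rotMat = rotMat.t();               // transpose = inverse for a rotation

// 3x3 times 3x1 is well-defined; wrapping the Vec3d in a cv::Mat makes the
// shapes explicit and avoids the 'operator cv::Vec<>' conversion assert.
cv::Mat translation = rotMat * (-cv::Mat(tvecs[d]));
std::cout << "translation: " << translation << std::endl;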
I'm not familiar with OpenCV, but I need to use the function remap to rectify an image.
I have a 960x1280 image, and a remap file called 'remap.bin' of 9.8 MB (which equals 960x1280x4x2 bytes, i.e. two floats per pixel position (x,y)).
Applies a generic geometrical transformation to an image.
C++: void remap(InputArray src, OutputArray dst, InputArray map1, InputArray map2, int interpolation, int borderMode=BORDER_CONSTANT, const Scalar& borderValue=Scalar())
map1 – The first map of either (x,y) points or just x values having the type CV_16SC2, CV_32FC1, or CV_32FC2. See convertMaps() for details on converting a floating point representation to fixed-point for speed.
map2 – The second map of y values having the type CV_16UC1, CV_32FC1, or none (empty map if map1 is (x,y) points), respectively.
Following that explanation, I wrote this code:
int main(int argc, char* argv[]) {
    if (argc != 3) {
        printf("Please enter one path of image and one path of mapdata!\n");
        return 0;
    }
    std::string image_path = argv[1];
    char* remap_path = argv[2];

    cv::Mat src = cv::imread(image_path);
    cv::Mat dst;
    dst.create(src.size(), src.type());

    cv::Mat map2;
    map2.create(src.size(), CV_32FC1);
    map2.data = NULL;

    cv::Mat mapXY;
    mapXY.create(src.rows, src.cols, CV_64FC1);

    FILE* fp = fopen(remap_path, "rb");
    fread(mapXY.data, sizeof(float), mapXY.cols * mapXY.rows * 2, fp);
    fclose(fp);

    imshow("src", src);
    printf("remap!\n");
    cv::remap(src, dst, mapXY, map2, cv::INTER_LINEAR);
    imshow("dst", dst);
    cv::waitKey(0);
    return 0;
}
But when I run the program I get this error:
OpenCV Error: Assertion failed (((map1.type() == CV_32FC2 || map1.type() == CV_16SC2) && !map2.data) || (map1.type() == CV_32FC1 && map2.type() == CV_32FC1)) in remap, file /home/liliming/opencv-2.4.13/modules/imgproc/src/imgwarp.cpp, line 3262
terminate called after throwing an instance of 'cv::Exception'
what(): /home/liliming/opencv-2.4.13/modules/imgproc/src/imgwarp.cpp:3262: error: (-215) ((map1.type() == CV_32FC2 || map1.type() == CV_16SC2) && !map2.data) || (map1.type() == CV_32FC1 && map2.type() == CV_32FC1) in function remap
Aborted (core dumped)
I have no idea what is wrong.
Could anyone help me or give some sample code?
Thank you very much!
The documentation for OpenCV 3.1 says:
map1 The first map of either (x,y) points or just x values having the type
CV_16SC2 , CV_32FC1, or CV_32FC2.
The assert says that map1 doesn't have the type CV_32FC2.
This is because you are creating and reading it with the type CV_64FC1.
You need to convert it to the correct type: a two-dimensional array of type CV_32FC2 (two 32-bit floats per element).
The documentation goes on to say:
See `convertMaps` for details on converting a
floating point representation to fixed-point for speed.
Documentation can be found here: https://docs.opencv.org/3.1.0/da/d54/group__imgproc__transform.html#gab75ef31ce5cdfb5c44b6da5f3b908ea4
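Applied to the code above, a minimal sketch of the fix, assuming remap.bin really stores interleaved (x, y) pairs of 32-bit floats:

// Read the interleaved (x, y) float pairs directly into a 2-channel float map.
cv::Mat mapXY(src.rows, src.cols, CV_32FC2);   // CV_32FC2 instead of CV_64FC1

FILE* fp = fopen(remap_path, "rb");
fread(mapXY.data, sizeof(float), mapXY.total() * 2, fp);
fclose(fp);

// With a CV_32FC2 map1, map2 must be left empty.
cv::remap(src, dst, mapXY, cv::Mat(), cv::INTER_LINEAR);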
I separated the remap table into two tables, remapX and remapY, like this:
float* data_xy = (float*)malloc(sizeof(float) * 960 * 1280 * 2);

FILE* fp = fopen(remap_path, "rb");
fread(data_xy, sizeof(float), 960 * 1280 * 2, fp);
fclose(fp);

for (int y = 0; y < 1280; ++y) {
    for (int x = 0; x < 960; ++x) {
        map_x.at<float>(y, x) = data_xy[(y * 960 + x) * 2];
        map_y.at<float>(y, x) = data_xy[(y * 960 + x) * 2 + 1];
    }
}
And then I call:
cv::remap(src, dst, map_x, map_y, cv::INTER_LINEAR);
It works well. But I still don't know how to do the remap with only the single map1 parameter.
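One way to get the single-map call (a sketch, reusing map_x and map_y from above) is to merge the two float maps into one CV_32FC2 map and pass an empty map2; cv::convertMaps can instead pack them into a faster fixed-point map:

// Merge the two CV_32FC1 maps into a single 2-channel CV_32FC2 map.
cv::Mat maps[] = { map_x, map_y };
cv::Mat mapXY;
cv::merge(maps, 2, mapXY);

// A CV_32FC2 map1 is paired with an empty map2.
cv::remap(src, dst, mapXY, cv::Mat(), cv::INTER_LINEAR);

// Alternative: pre-convert to fixed point (CV_16SC2 + CV_16UC1) for speed.
cv::Mat fixed1, fixed2;
cv::convertMaps(map_x, map_y, fixed1, fixed2, CV_16SC2);
cv::remap(src, dst, fixed1, fixed2, cv::INTER_LINEAR);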
So I am having basically the same issue described in this post from last year, and am not getting anywhere with solving it. I am calling calibrateCamera and am getting the error "Assertion failed (nimages > 0 && nimages == (int) imagePoints1.total() && (!imgPtMat2 || nimages == (int)imagePoints2.total()) in cv::collectCalibrationData".
The line of code that is getting this error is:
double rms = calibrateCamera(objectPoints, imagePoints, imageSize, cameraMatrix,
distCoeffs, rvecs, tvecs, s.flag | CALIB_FIX_K4 | CALIB_FIX_K5);
I have checked the size of both objectPoints and imagePoints, and they are the same.
Both imagePoints and objectPoints contain points that make sense; they are not empty or filled with incorrect values. I took a look at the collectCalibrationData code, since that is where the assertion fails, and it seems the function calculates the size of each vector in a way that does not give a consistent result. The relevant part of collectCalibrationData is shown below:
static void collectCalibrationData( InputArrayOfArrays objectPoints,
InputArrayOfArrays imagePoints1,
InputArrayOfArrays imagePoints2,
Mat& objPtMat, Mat& imgPtMat1, Mat* imgPtMat2,
Mat& npoints )
{
int nimages = (int)objectPoints.total();
int i, j = 0, ni = 0, total = 0;
CV_Assert(nimages > 0 && nimages == (int)imagePoints1.total() &&
(!imgPtMat2 || nimages == (int)imagePoints2.total()));
for( i = 0; i < nimages; i++ )
{
ni = objectPoints.getMat(i).checkVector(3, CV_32F);
if( ni <= 0 )
CV_Error(CV_StsUnsupportedFormat, "objectPoints should contain vector of vectors of points of type Point3f");
int ni1 = imagePoints1.getMat(i).checkVector(2, CV_32F);
if( ni1 <= 0 )
CV_Error(CV_StsUnsupportedFormat, "imagePoints1 should contain vector of vectors of points of type Point2f");
CV_Assert( ni == ni1 );
total += ni;
}
It seems that the size of each of the vectors is calculated with nimages = (int)objectPoints.total() and nimages == (int)imagePoints.total(). I print out the values these produce with the following two lines (converting the vectors to InputArrayOfArrays because that's what collectCalibrationData does):
cv::InputArrayOfArrays IMGPOINT = imagePoints; std::cout << (int) IMGPOINT.total() << std::endl;
cv::InputArrayOfArrays OBJPOINT = objectPoints; std::cout << (int) OBJPOINT.total() << std::endl;
These statements produce a seemingly random integer every time I run the program; the values are not consistent and are never equal to each other. At this point I'm stuck. I'm not sure why collectCalibrationData is getting the wrong values for the size of my vectors, or why converting the vectors to an InputArrayOfArrays seems to change their size. Any tips? I've seen this problem asked once or twice before, but there's never been an answer.
I am using VS 2013 and OpenCV 3.0.0.
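No answer is recorded for this one, but for reference, calibrateCamera is normally given plain nested vectors, one inner vector per view; a minimal sketch of the expected container types (sizes and names are illustrative):

#include <opencv2/calib3d.hpp>
#include <vector>

// One inner vector per calibration image, kept in lock-step.
std::vector<std::vector<cv::Point3f>> objectPoints;  // known 3D board corners
std::vector<std::vector<cv::Point2f>> imagePoints;   // detected 2D corners

// ... push_back one matched pair of point vectors per processed view ...

// Mirrors the failing assertion: outer sizes must agree and be non-zero.
CV_Assert(!objectPoints.empty() && objectPoints.size() == imagePoints.size());

cv::Size imageSize(640, 480);                        // illustrative value
cv::Mat cameraMatrix, distCoeffs;
std::vector<cv::Mat> rvecs, tvecs;
double rms = cv::calibrateCamera(objectPoints, imagePoints, imageSize,
                                 cameraMatrix, distCoeffs, rvecs, tvecs,
                                 cv::CALIB_FIX_K4 | cv::CALIB_FIX_K5);

With these types, InputArrayOfArrays::total() returns the number of views consistently; wildly varying totals usually point at a container of the wrong element type or at reading uninitialized memory.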
I tried to use the following function in OpenCV (C++)
calcOpticalFlowPyrLK(prev_frame_gray, frame_gray, points[0], points[1], status, err, winSize, 3, termcrit, 0, 0.001);
and I get this error
OpenCV Error: Assertion failed ((npoints = prevPtsMat.checkVector(2, CV_32F, true)) >= 0) in calcOpticalFlowPyrLK,
file /home/rohit/OpenCV_src/opencv-2.4.9/modules/video/src/lkpyramid.cpp, line 845
terminate called after throwing an instance of 'cv::Exception'
what(): /home/rohit/OpenCV_src/opencv-2.4.9/modules/video/src/lkpyramid.cpp:845:
error: (-215) (npoints = prevPtsMat.checkVector(2, CV_32F, true)) >= 0 in function calcOpticalFlowPyrLK
Both of the following calls return -1:
frame_gray.checkVector(2, CV_32F, true)
prev_frame_gray.checkVector(2, CV_32F, true)
I wanted to know what checkVector actually does, because it is leading to the assertion error you can see above.
The official OpenCV documentation says:
cv::Mat::checkVector() returns N if the matrix is 1-channel (N x ptdim) or ptdim-channel (1 x N) or (N x 1); negative number otherwise
OpenCV considers some data layouts equivalent for some functions; e.g. the objectPoints argument of cv::solvePnP() can be:
1xN/Nx1 1-channel cv::Mat
3xN/Nx3 3-channel cv::Mat
std::vector<cv::Point3f>
With checkVector you can make sure that you are passing the correct representation of your data.
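A small illustration of what checkVector returns for a few layouts (a sketch; the results follow from the rule quoted above):

#include <opencv2/core.hpp>
#include <iostream>

int main()
{
    cv::Mat a(10, 1, CV_32FC2);    // 10x1, 2-channel: 10 packed 2D points
    cv::Mat b(10, 2, CV_32FC1);    // 10x2, 1-channel: 10 rows of (x, y)
    cv::Mat c(480, 640, CV_8UC1);  // a plain grayscale image

    std::cout << a.checkVector(2, CV_32F) << "\n";  // 10
    std::cout << b.checkVector(2, CV_32F) << "\n";  // 10
    std::cout << c.checkVector(2, CV_32F) << "\n";  // -1: not a list of 2D points

    // That -1 is exactly why passing whole frames, instead of a vector of
    // points, to calcOpticalFlowPyrLK trips the assertion in the question.
    return 0;
}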
I had a similar issue with the cv2.projectPoints function ((-215:Assertion failed)) because OpenCV was expecting an Nx3 matrix and I was passing a 1D array of length 3. Try:
points[0].reshape(-1,3)
as the argument to the function. It changes the shape (3,) to shape (1,3).
My goal is to pad my segmented image with zeros along the border, as I need to close it (to fill in small holes in my foreground). Here tmp is a CV_8UC3 segmented image Mat obtained from my image frame, in which all background pixels have been blacked out. I manually created a binary mask from tmp and stored it in Mat bM, which has the same size as my image Mat frame.
Mat bM = Mat::zeros(frame.rows, frame.cols, CV_8UC1);
for (int i = 0; i < frame.rows; i++)
{
    for (int j = 0; j < frame.cols; j++)
    {
        // mark a pixel as foreground when all three channels are non-zero
        if (tmp.at<Vec3b>(i,j)[0] != 0 && tmp.at<Vec3b>(i,j)[1] != 0 && tmp.at<Vec3b>(i,j)[2] != 0)
            bM.at<uchar>(i,j) = 255;
    }
}
Mat padded;
int padding = 6;
padded.create(bM.rows + 2*padding, bM.cols + 2*padding, bM.type());
padded.setTo(Scalar::all(0));
bM.copyTo(padded(Rect(padding, padding, bM.rows, bM.cols)));
My execution breaks at the last line in Visual Studio giving the following error:
Assertion failed (0 <= roi.x && 0 <= roi.width && roi.x + roi.width <= m.cols && 0 <= roi.height && roi.y + roi.height <= m.rows)
While I understand what that means, I can't figure out why it would throw this error, as my source image is within the bounds of my target image. I've stepped through my code and am sure it breaks at that specific line. From what I've read, the cv::Rect constructor can be given offsets the way I've passed padding, and these offsets are taken from the top-left corner of the image. Can the copyTo function be used this way, or is my error elsewhere?
The cv::Rect constructor is different from the cv::Mat constructor.
Rect_(_Tp _x, _Tp _y, _Tp _width, _Tp _height);
The cv::Rect parameters are the offset in x, the offset in y, then the width first and the height last.
So when you do this:
bM.copyTo(padded(Rect(padding, padding, bM.rows, bM.cols)));
you create a cv::Rect with width bM.rows and height bM.cols, which is the opposite of what you need.
Change it to:
bM.copyTo(padded(Rect(padding, padding, bM.cols, bM.rows)));
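As a side note, cv::copyMakeBorder does this kind of zero-padding in one call; a sketch using the same names as above:

// Equivalent zero-padding without manual Rect arithmetic.
cv::Mat padded;
cv::copyMakeBorder(bM, padded, padding, padding, padding, padding,
                   cv::BORDER_CONSTANT, cv::Scalar::all(0));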