After pre-processing the image (by thresholding), I find the contours of the image.
I then want to get the discrete Fourier descriptor of each contour (using the dft() function).
My code follows:
vector<Mat> contourLines1;
vector<Mat> contourLines2;
getContourLine(exC1, contourLines1, binThreshold, numOfErosions);
getContourLine(exC2, contourLines2, binThreshold, numOfErosions);
// calculate fourier descriptor
Mat fd1 = makeFD(contourLines1.front());
Mat fd2 = makeFD(contourLines2.front());
/////////////////////////
void getContourLine(Mat& img, vector<Mat>& objList, int thresh, int k){
    threshold(img, img, thresh, 255, THRESH_BINARY);
    erode(img, img, 0, cv::Point(-1,-1), k);
    cv::findContours(img, objList, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);
}
/////////////////////////
Mat makeFD(Mat& contour){
    Mat result;
    dft(contour, result, DFT_ROWS);
    return result;
}
What is the problem? I can't find it. I think the types of the function parameters (such as for cv::findContours or dft) are wrong...
The output of findContours is vector<vector<Point>>. You are providing vector<Mat>. This is a legitimate use (although a bit obscure), but you have to remember that the element type of those matrices is int. dft, on the other hand, works only with matrices of floats. This is what causes the crash. You can use the convertTo function to create matrices of the proper type.
Also, I am not sure that the output will have any meaning for whatever computation you are doing. As far as I know, the Fourier transform is supposed to work on a signal, not on coordinates extracted from it.
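For illustration, a minimal sketch of that fix (the variable names are mine, not from the question): convert the contour to floats first, so that dft reads each point (x, y) as the complex sample x + iy:
Mat makeFD(Mat& contour){
    // contour is an Nx1 CV_32SC2 Mat as produced by findContours
    Mat c;
    contour.convertTo(c, CV_32FC2); // int -> float; the two channels are read as complex
    Mat result;
    dft(c, result); // 1-D DFT down the single column of points
    return result;
}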
Just a stylistic remark: a cleaner way to perform the same thresholding is
img = (img > thresh);
I have this project where we are trying to make an autonomous vehicle using a lidar and a stereo camera. To do this we make two maps with Cartographer and merge them together. However, the data from the stereo camera is not very accurate, and we therefore have to manipulate the map made by Cartographer. To make the camera map we detect lines, read the distance, and turn this into a laser scan which is then sent to Cartographer. Ideally we would be able to convert the map into just the lines. This is what the camera map looks like: (image: camera map)
What I would like to do first is fill the holes in the map to make it easier to find lines and such later. This is where I am struggling. I have written code to convert from nav_msgs::OccupancyGrid to cv::Mat and back, in addition to merging the maps. I have looked over this code and I don't think the problem is there. I have tried different suggestions online but have not gotten close to a solution. This is my code:
cv::Mat fill_cam_mat(cv::Mat mat) {
    int thresh = 50;

    // detect edges
    cv::Mat canny_output;
    cv::Canny(mat, canny_output, thresh, thresh * 2);

    // flood-fill the background from the top-left corner
    cv::Mat mat_floodfill = canny_output.clone();
    cv::floodFill(mat_floodfill, cv::Point(0, 0), cv::Scalar(255));

    // invert the flood-filled image and OR it with the edges,
    // so that enclosed holes come out filled
    cv::Mat mat_floodfill_inv;
    cv::bitwise_not(mat_floodfill, mat_floodfill_inv);
    cv::Mat mat_out = (canny_output | mat_floodfill_inv);
    return mat_out;
}
And my result is as follows when merged with the lidar map: (image: final map)
I have also tried:
cv::Mat fill_cam_mat(cv::Mat mat) {
    int mat_height = mat.rows;
    int mat_width = mat.cols;
    int thresh = 50;

    // detect edges
    cv::Mat canny_output;
    cv::Canny(mat, canny_output, thresh, thresh * 2);

    // collect all edge pixels and compute a convex hull
    // (the same point set is used for every hull[i])
    cv::Mat non_zero;
    cv::findNonZero(canny_output, non_zero);
    std::vector<std::vector<cv::Point>> hull(non_zero.total());
    for (unsigned int i = 0, n = non_zero.total(); i < n; ++i) {
        cv::convexHull(non_zero, hull[i], false);
    }

    // draw the filled hulls
    cv::Mat fill_contours_result(mat_height, mat_width, CV_8UC3, cv::Scalar(0));
    cv::fillPoly(fill_contours_result, hull, 255);
    return fill_contours_result;
}
Which gives the same result. I have also tried using cv::findContours to specify the hull, but that worked even worse.
I am new to OpenCV and I don't understand what is wrong with my output. I would really appreciate any help with the code, or any better suggestions on how to solve the problem. Is it even necessary to fill the holes in order to get useful information from the map?
Thank you in advance!
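For reference, one common alternative for filling small holes in a binary map is a morphological closing (dilate, then erode). A minimal sketch; the function name and the 5x5 kernel size are assumptions to tune:
#include <opencv2/imgproc.hpp>

// close small holes in an 8-bit, single-channel occupancy map
cv::Mat close_holes(const cv::Mat& map8u) {
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(5, 5));
    cv::Mat closed;
    cv::morphologyEx(map8u, closed, cv::MORPH_CLOSE, kernel);
    return closed;
}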
Not quite understanding why this code works:
cv::Mat img = cv::imread("pic.jpg", -1);
cv::Mat padded;
std::uint16_t m = cv::getOptimalDFTSize(img.rows); // This will be 256
std::uint16_t n = cv::getOptimalDFTSize(img.cols); // This will be 256
cv::copyMakeBorder(img, padded, 0, m - img.rows, 0, n - img.cols,
cv::BORDER_CONSTANT, cv::Scalar::all(0)); // With my inputs, this effectively just copies img into padded
cv::Mat planes[] = { cv::Mat_<float>(padded), cv::Mat::zeros(padded.size(), CV_32F) };
cv::Mat dft_img;
cv::merge(planes, 2, dft_img);
cv::dft(dft_img, dft_img);
cv::split(dft_img, planes);
But this breaks with an exception in memory:
cv::Mat img = cv::imread("pic.jpg", -1); // I know this image is 256x256
cv::Mat dft_img = cv::Mat::zeros(256,256,CV_32F); // Hard coding for simplicity atm
cv::dft(img,dft_img);
I'm having trouble understanding the documentation for dft() (https://docs.opencv.org/2.4/modules/core/doc/operations_on_arrays.html#dft), and other functions and classes for that matter.
I think it has something to do with dft_img not being a multichannel array in the second segment, but I'm lost on how to initialize such an array short of copying the first segment of code.
Secondly, when trying to access either planes[0] or planes[1] and modify their values with:
planes[0].at<double>(indexi,indexj) = 0;
I get another memory exception, and I also see a new page saying that mat.inl.hpp was not found. I'm using Visual Studio and OpenCV 3.4.3; I'm a novice with C++ but intermediate with signal processing. Any help is appreciated.
You did not specify which exception you got, but an important point is that the input of the dft function must be floating point, either 32-bit or 64-bit. Another point: try not to use raw arrays if you are not comfortable with C++. I would even suggest that, if using C++ is not mandatory, you prefer Python for OpenCV. Here is a working example of dft code:
// read your image
cv::Mat img = cv::imread("a2.jpg", CV_LOAD_IMAGE_GRAYSCALE); // I know this image is 256x256
// convert it to floating point
//normalization is optional(depends on library and you I guess?)
cv::Mat floatImage;
img.convertTo(floatImage, CV_32FC1, 1.0/255.0);
// create a placeholder Mat variable to hold output of dft
std::vector<cv::Mat> dftOutputs;
dftOutputs.push_back(floatImage);
dftOutputs.push_back(cv::Mat::zeros(floatImage.size(), CV_32F));
cv::Mat dftOutput;
cv::merge(dftOutputs, dftOutput);
// perform dft
cv::dft(dftOutput, dftOutput);
// separate real and complex outputs back
cv::split(dftOutput, dftOutputs);
I changed the code from the tutorial a little to make it easier to understand. If you would like to obtain the magnitude image and such, you can follow the tutorial after the split function.
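For reference, here is the magnitude step from that tutorial, continuing from the dftOutputs vector above (a sketch, not part of the original answer):
// compute the magnitude of the complex output: sqrt(re^2 + im^2)
cv::Mat magnitudeImage;
cv::magnitude(dftOutputs[0], dftOutputs[1], magnitudeImage);
// switch to a logarithmic scale and normalize so the spectrum is visible
magnitudeImage += cv::Scalar::all(1);
cv::log(magnitudeImage, magnitudeImage);
cv::normalize(magnitudeImage, magnitudeImage, 0, 1, cv::NORM_MINMAX);
As for the second exception: planes[0] and planes[1] are CV_32F here, so elements must be accessed with at<float>; calling at<double> on a CV_32F Mat is exactly the kind of type mismatch that trips the assertion in mat.inl.hpp.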
I'm working on code that computes dense SIFT features from a set of images, based on SIFT flow: http://people.csail.mit.edu/celiu/SIFTflow/
I'd like to try building a FLANN index on these images by comparing the "energy" between each image in SIFT flow representation.
I have the code to compute the energy from here: http://richardt.name/publications/video-deanaglyph/
Is there a way to create my own distance function for the indexing?
RELATED NOTE:
I was finally able to get an alternate (but not custom) distance function working with flann::Index. The trick is you need to use flann::GenericIndex like so:
flann::GenericIndex<cvflann::ChiSquareDistance<int>> flannIndex(descriptors, cvflann::KDTreeIndexParams());
But you need to give it CV_32S descriptors.
And if you use knnSearch with a custom distance function, you have to provide a CV_32S results Mat and a CV_32F distances Mat.
Here's my full code in case it's helpful (not a lot of documentation out there):
Mat samples;
loadDescriptors(samples); // loading descriptors from .yml file
samples *= 100000; // scaling up my descriptors to be int
samples.convertTo(samples, CV_32S); // convert float to int
// create flann index
flann::GenericIndex<cvflann::ChiSquareDistance<int>> flannIndex(samples, cvflann::KDTreeIndexParams());
// NOTE lack of distance type in constructor parameters
// (unlike flann::index)
// now try knnSearch
int k=10; // find 10 nearest neighbors
Mat results(1,10,CV_32S), dists(1,10,CV_32F);
// (1,10) Mats for the output, types CV_32S and CV_32F
Mat responseHistogram;
responseHistogram = samples.row(60);
// choose a random row from the descriptors Mat
// to find nearest neighbors
flannIndex.knnSearch(responseHistogram, results, dists, k, cvflann::SearchParams(200));
cout << results << endl;
cout << dists << endl;
flannIndex.save(ofToDataPath("indexChi2.txt"));
Using Chi Squared actually seems to work better for me than L2 distance. My feature vectors are BoW histograms in this case.
The problem is to Fourier transform (cv::dft) a signal using Fourier descriptors, so the Mat should hold complex numbers :(
But my problem is: how can I make a Mat of complex numbers?
Please help me find an example, or anything else that shows me how to store a complex number (RE + IM) in a Mat.
Is there a way to use merge()?
I found an answer saying:
I think you can use the merge() function here; see the documentation.
It says: Composes a multi-channel array from several single-channel arrays.
Reference: How to store complex numbers in OpenCV matrix?
Look at the nice dft sample in the OpenCV repo, and also at the dft tutorial.
So, if you have a Mat real and a Mat imag (both of type CV_32FC1):
Mat planes[] = {real,imag};
Mat complexImg;
merge(planes, 2, complexImg); // complexImg is of type CV_32FC2 now
dft(complexImg, complexImg);
split(complexImg, planes);
// real=planes[0], imag=planes[1];
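If you later need to read or modify a single complex entry, the two-channel result can be addressed with Vec2f (a small usage sketch; i and j are hypothetical indices):
cv::Vec2f z = complexImg.at<cv::Vec2f>(i, j); // z[0] = real part, z[1] = imaginary part
float re = z[0], im = z[1];
complexImg.at<cv::Vec2f>(i, j) = cv::Vec2f(re, -im); // e.g. write back the conjugate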
I am trying to store an IPL_DEPTH_8U, 3-channel image in an array so that I can keep 100 images in memory.
To initialise my 4D array I used the following code (rows,cols,channel,stored):
int size[] = { 324, 576, 3, 100 };
CvMatND* cvImageBucket = cvCreateMatND(4, size, CV_8U);
I then created a matrix and converted the image into it:
CvMat *matImage = cvCreateMat(Image->height,Image->width,CV_8UC3 );
cvConvert(Image, matImage );
How would I access the CvMatND to copy the CvMat into it at the position given by 'stored'?
e.g. (MATLAB-style) cvImageBucket(:,:,:,0) = matImage; // copy the first image into the array
You've tagged this as both C and C++. If you want to work in C++, you could use the (in my opinion) simpler cv::Mat structure to store each of the images, and then use these to populate a vector with all the images.
For example:
std::vector<cv::Mat> imageVector;
cv::Mat newImage;
newImage = getImage(); // where getImage() returns the next image,
                       // or an empty cv::Mat() if there are no more images
while (!newImage.empty())
{
    // Add image to vector
    imageVector.push_back(newImage);
    // get next image
    newImage = getImage();
}
I'm guessing something similar to:
// for the i-th matImage:
memcpy((char*)cvImageBucket->data.ptr + i * size[0] * size[1] * size[2],
       (char*)matImage->data.ptr,
       size[0] * size[1] * size[2]);
Although I agree with @Chris that it is best to use vector<Mat> rather than a 4D matrix, this answer is just meant as a reference for those who really need 4D matrices in OpenCV (even though it is a largely unsupported, undocumented and unexplored feature, with very little about it available online, yet claimed to work just fine!).
So, suppose you filled a vector<Mat> vec with 2D or 3D data which can be CV_8U, CV_32F etc.
One way to create a 4D matrix is
vector<int> dims = {(int)vec.size(), vec[0].rows, vec[0].cols};
Mat m(dims, vec[0].type(), vec[0].data); // wraps the existing data, no copy
However, this method fails when the underlying data is not continuous, which is typically the case for big matrices. If you do this with discontinuous data, you will get a segmentation fault or bad-access error when you try to use the matrix (e.g. copying, cloning, etc.). To overcome this issue, you can copy the matrices of the vector one by one into the 4D matrix as follows:
Mat m2(dims, vec[0].type());
for (int i = 0; i < (int)vec.size(); i++) {
    // wrap the i-th slice of m2 in a 2D header and copy into it
    Mat slice(vec[i].rows, vec[i].cols, vec[i].type(), m2.ptr(i));
    vec[i].copyTo(slice);
}
Notice that both methods require the matrices to be the same resolution. Otherwise, you may get undesired results or errors.
Also, notice that you can always use for loops but it is generally not a good idea to use them when you can vectorize.