I have a matrix cv::Mat CorrespondenceMap where each element contains its own index. I then use these indices to reconstruct the transformed image.
When I perform the operation directly on the image data I get the correct result (the first visual). If I do it through the correspondence map, it only works when I set the interpolation to INTER_NEAREST; otherwise I get the image shown.
std::vector<int> cmap(width*height);
std::iota(std::begin(cmap), std::end(cmap), 0); // fill with incrementing indices
cv::Mat cmapMat(height, width, CV_32SC1, cmap.data());
cmapMat.convertTo(cmapMat, CV_64F); // convert so the warp can interpolate the map
cv::warpAffine(source, transformedimage, lkParams.rowRange(0,2), source.size(), cv::INTER_LINEAR, cv::BORDER_CONSTANT, -1);
cmapMat.convertTo(cmapMat, CV_32S, 1, 0.5); // round the warped map back to integer indices
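For context: linear interpolation blends the integer indices of neighbouring pixels into values that are no longer valid indices, which is why only INTER_NEAREST preserves the map. A minimal sketch of warping the map that way (warpedCmap is a hypothetical name; untested):
// Warp the map with INTER_NEAREST: values are copied, never blended, so every
// output pixel still holds a valid source index (-1 = no correspondence).
cv::Mat warpedCmap;
cv::warpAffine(cmapMat, warpedCmap, lkParams.rowRange(0,2), source.size(),
               cv::INTER_NEAREST, cv::BORDER_CONSTANT, cv::Scalar(-1));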
I am trying to do a transform of an RGB image using cv::remap so that the empty space is extrapolated with the average intensity:
cv::Mat mapX(sz, CV_32FC1), mapY(sz, CV_32FC1);
// filling maps
cv::remap(src, dst, mapX, mapY, cv::INTER_LINEAR, cv::BORDER_CONSTANT, cv::mean(src));
The output is a transformed image, but the empty space is filled with the intensity value of the first row/column of the corresponding column/row, because the default value for the last parameter of cv::remap is 0.
Example:
If I do this (rotate):
cv::warpAffine(src, dst, rotationMatrix, bbox.size(), cv::INTER_LINEAR, cv::BORDER_CONSTANT, cv::mean(src));
I get the correct output:
How can I specify a constant color everywhere?
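One possible workaround, sketched under the assumption that the maps point outside the source for the empty pixels: pre-fill dst with the mean colour and use cv::BORDER_TRANSPARENT, which per the remap documentation leaves destination pixels that correspond to outliers in the source unmodified:
// Pre-fill the destination with the mean intensity, then remap with
// BORDER_TRANSPARENT so out-of-source pixels keep the pre-filled value.
cv::Mat dst(sz, src.type(), cv::mean(src));
cv::remap(src, dst, mapX, mapY, cv::INTER_LINEAR, cv::BORDER_TRANSPARENT);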
I am trying to convert a float image that I get from a simulated depth camera to CV_16UC1. The camera publishes the depth in CV_32FC1 format. I tried many ways but the result was not reasonable.
cv::Mat depth_cv(512, 512, CV_32FC1, depth);
cv::Mat depth_converted;
depth_cv.convertTo(depth_converted,CV_16UC1);
The result is a black image. If I use a scale factor, the image will be white.
I also tried to do it this way:
float depthValueF [512*512];
for (int i = 0; i < resolution[1]; i++) {        // go through the rows (y)
    for (int j = 0; j < resolution[0]; j++) {    // go through the columns (x)
        float depthValueOfPixel = depth[i*resolution[0]+j]; // this is location j/i, i.e. x/y
        depthValueF[i*resolution[0]+j] = depthValueOfPixel * 65535.0f;
    }
}
It was not successful either.
Try using cv::normalize instead, which will not only convert the image to the proper data type but also do the scaling for you under the hood.
Therefore:
cv::Mat depth_cv(512, 512, CV_32FC1, depth);
cv::Mat depth_converted;
cv::normalize(depth_cv, depth_converted, 0, 65535, cv::NORM_MINMAX, CV_16UC1);
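Note that cv::normalize stretches whatever minimum/maximum happens to be in the frame, so absolute depth values are lost. If the simulated camera reports metric depth (an assumption here), a fixed scale via convertTo keeps values comparable across frames, e.g. metres to millimetres:
// Fixed scale (assumes depth is in metres): metres -> millimetres,
// values above 65.535 m saturate at 65535.
cv::Mat depth_mm;
depth_cv.convertTo(depth_mm, CV_16UC1, 1000.0);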
I am attempting to use my own kernel to blur an image (for educational purposes). But my kernel just makes my whole image white. Is my blur kernel correct? I believe the proper name of the blur filter I am trying to apply is a normalised blur.
void blur_img(const Mat& src, Mat& output) {
// src is a 1 channel CV_8UC1
float kdata[] = { 0.0625f, 0.125f, 0.0625f,
                  0.125f,  0.25f,  0.125f,
                  0.0625f, 0.125f, 0.0625f };
//float kdata[] = { -1,-1,-1, -1,8,-1, -1,-1,-1}; // outline filter works fine
Mat kernel(3, 3, CV_32F, kdata);
// results in output being a completely white image
filter2D(src, output, CV_32F, kernel);
}
Your image is not white, it is float. I am sure that you are displaying the image with imshow in a later place, and it looks all white. This is explained in the imshow documentation. Specifically:
If the image is 32-bit floating-point, the pixel values are multiplied
by 255. That is, the value range [0,1] is mapped to [0,255].
This means that if it is float it has to be [0,1] values to be displayed correctly.
Now that we know what causes it, let's see how to solve it. I can think of 3 possible ways:
1) normalize the image to [0,1]
cv::Mat dst;
cv::normalize(outputFromBlur, dst, 0, 1, cv::NORM_MINMAX);
This function normalizes the values, so it may shift the colors... it is not the best one for known images, but rather for depth maps or other matrices with values of unknown ranges.
2) convertTo uchar:
cv::Mat dst;
outputFromBlur.convertTo(dst, CV_8U);
This function does a saturate_cast, so it handles possible overflow/underflow.
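For instance, values outside the uchar range are clamped rather than wrapped:
uchar top = cv::saturate_cast<uchar>(300.f); // 255, clamped at the top
uchar bot = cv::saturate_cast<uchar>(-5.f);  // 0, clamped at the bottom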
3) use filter2D with another output depth:
cv::filter2D(src, output, -1, kernel);
With -1 the desired output will have the same depth as the source (I assume your source is CV_8U).
I hope this helps you; if not, leave a comment.
You can see the result in the image below. The original image is just a grey pixel; the result should be the same but blurred.
OpenCV is not using the immediate neighbouring pixels for the Gaussian blur; I'm guessing it's doing some sort of internal padding. Why it is doing so I have no idea; my initial guess would be that it assumes the vector has more than one channel, which is not the case. Here is how I create the cv::Mats for the calculation and how I call cv::GaussianBlur:
std::vector<float> sobelCopy(sobel);
cv::Mat sobel_mat_copy(height, width, CV_32F, sobelCopy.data()); // source: copy of the data
cv::Mat sobel_mat(height, width, CV_32F, sobel.data());          // destination: original buffer
cv::GaussianBlur(sobel_mat_copy, sobel_mat, cv::Size(3,3), 0.0, 0.0, cv::BORDER_CONSTANT);
Fixed: it all has to do with how I ordered my vector. I had column-major order; cv::Mat assumes row-major ordering.
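For the record, a minimal sketch of that fix (reusing the width/height/sobelCopy names above): wrap the column-major buffer with swapped dimensions, then transpose into the row-major layout cv::Mat expects.
// Column-major data stores element (row r, col c) at c*height + r, so wrapping
// it with rows and cols swapped and transposing yields the intended image.
cv::Mat wrapped(width, height, CV_32F, sobelCopy.data()); // dims swapped on purpose
cv::Mat row_major = wrapped.t();                          // t() copies into row-major order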
I have some problems with OpenCV's flann::Index.
I'm creating an index:
Mat samples = Mat::zeros(vfv_net_quie.size(), 24, CV_32F);
for (int i = 0; i < vfv_net_quie.size(); i++)
{
    for (int j = 0; j < 24; j++)
    {
        samples.at<float>(i,j) = (float)vfv_net_quie[i].vfv[j];
    }
}
cv::flann::Index flann_index(
samples,
cv::flann::KDTreeIndexParams(4),
cvflann::FLANN_DIST_EUCLIDEAN
);
flann_index.save("c:\\index.fln");
After that I'm trying to load it and find nearest neighbours:
cv::flann::Index flann_index(Mat(),
cv::flann::SavedIndexParams("c:\\index.fln"),
cvflann::FLANN_DIST_EUCLIDEAN
);
cv::Mat resps(vfv_reg_quie.size(), K, CV_32F);
cv::Mat nresps(vfv_reg_quie.size(), K, CV_32S);
cv::Mat dists(vfv_reg_quie.size(), K, CV_32F);
flann_index.knnSearch(sample, nresps, dists, K, cv::flann::SearchParams(64));
And I get an access violation in miniflann.cpp at the line
((IndexType*)index)->knnSearch(_query, _indices, _dists, knn,
(const ::cvflann::SearchParams&)get_params(params));
Please help
You should not load the FLANN file into a temporary Mat(), as that matrix is where the index data is stored. A temporary object is destroyed right after the constructor call, which is why the index isn't pointing anywhere useful when you call knnSearch().
I tried the following:
cv::Mat indexMat;
cv::flann::Index flann_index(
indexMat,
cv::flann::SavedIndexParams("c:\\index.fln"),
cvflann::FLANN_DIST_EUCLIDEAN
);
resulting in:
Reading FLANN index error: the saved data size (100, 64) or type (5) is different from the passed one (0, 0), 0
which means that the matrix has to be initialized with the correct dimensions (seems very stupid to me, as I don't necessarily know how many elements are stored in my index).
cv::Mat indexMat(samples.size(), CV_32FC1);
cv::flann::Index flann_index(
indexMat,
cv::flann::SavedIndexParams("c:\\index.fln"),
cvflann::FLANN_DIST_EUCLIDEAN
);
does the trick.
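If you'd rather not hard-code the dimensions, one option (a sketch using cv::FileStorage, which can serialize a cv::Mat) is to persist the sample matrix next to the index file and reload it before constructing the Index:
// At save time: store the samples alongside the index.
cv::FileStorage fs("c:\\samples.yml", cv::FileStorage::WRITE);
fs << "samples" << samples;
fs.release();
// At load time: restore the samples, then load the index against them.
cv::Mat indexMat;
cv::FileStorage fsIn("c:\\samples.yml", cv::FileStorage::READ);
fsIn["samples"] >> indexMat;
fsIn.release();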
The accepted answer does not quite make clear, and is somewhat misleading about, why the input matrix in the cv::flann::Index constructor must have the same dimensions as the matrix used to generate the saved index. I'll elaborate on #Sau's comment with an example.
Suppose a KDTreeIndex was generated using a cv::Mat sample as input and then saved. When you load it, you must provide the same sample matrix to regenerate it, something like this (using the templated GenericIndex interface):
cv::Mat sample(sample_num, sample_size, ... /* other params */);
cv::flann::SavedIndexParams index_params("c:\\index.fln");
cv::flann::GenericIndex<cvflann::L2<float>> flann_index(sample, index_params);
L2 is the usual Euclidean distance (other types can be found in opencv2/flann/dist.h).
Now the index can be used as shown below to find the K nearest neighbours of a query point:
std::vector<float> query(sample_size);
std::vector<int> indices(K);
std::vector<float> distances(K);
flann_index.knnSearch(query, indices, distances, K, cv::flann::SearchParams(64));
The vector indices will contain the locations of the nearest neighbours in the matrix sample, which was used at first to generate the index. That's why you need to load the saved index with the very matrix used to generate it; otherwise the returned vector will contain indices pointing to meaningless "nearest neighbours".
In addition you get a distances vector containing how far the found neighbours are from your query point, which you can later use to perform, for example, some inverse distance weighting.
Please also note that sample_size has to match across sample matrix and query point.
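As a small illustration of the inverse distance weighting mentioned above (a sketch; values is a hypothetical array holding one scalar per row of sample, and note that FLANN's L2 reports squared distances):
// Average the neighbours' associated values, weighting each by the inverse
// of its (squared) distance; the epsilon guards against division by zero.
float num = 0.f, den = 0.f;
for (int k = 0; k < K; ++k) {
    float w = 1.0f / (distances[k] + 1e-6f);
    num += w * values[indices[k]];
    den += w;
}
float interpolated = num / den;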