I am trying to set predefined values for a homography and then use warpPerspective to warp my image. First I used the findHomography function and displayed the result:
H = findHomography(obj, scene, CV_RANSAC);
for (int i = 0; i < H.rows; i++) {
    for (int j = 0; j < H.cols; j++) {
        printf("H: %d %d: %lf\n", i, j, H.at<double>(i, j));
    }
}
warpPerspective(image1, result, H, cv::Size(image1.cols + image2.cols, image1.rows));
This works as it is supposed to, and I get these values.
After that I tried to set the values of H manually and call warpPerspective like this:
H.at<double>(0, 0) = 0.766912;
H.at<double>(0, 1) = 0.053191;
H.at<double>(0, 2) = 637.961151;
H.at<double>(1, 0) = -0.118426;
H.at<double>(1, 1) = 0.965682;
H.at<double>(1, 2) = 3.405685;
H.at<double>(2, 0) = -0.000232;
H.at<double>(2, 1) = 0.000019;
H.at<double>(2, 2) = 1.000000;
warpPerspective(image1, result, H, cv::Size(image1.cols + image2.cols, image1.rows));
And now I get a System.NullReferenceException. Do you have any idea why this is failing?
Okay, I got help on the OpenCV forum. My declaration of H was like this:
cv::Mat H;
This was fine for findHomography, but when I wanted to set the values manually, I had to declare H like this:
cv::Mat H(3, 3, CV_64FC1);
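Equivalently, a fixed 3x3 double matrix can be filled in one go with the cv::Mat_ comma initializer (a small sketch using the same values as above; not required, just more compact than nine separate at<double> assignments):
// 3x3 CV_64F homography initialized directly with the values above.
cv::Mat H = (cv::Mat_<double>(3, 3) <<
     0.766912,  0.053191, 637.961151,
    -0.118426,  0.965682,   3.405685,
    -0.000232,  0.000019,   1.000000);

warpPerspective(image1, result, H, cv::Size(image1.cols + image2.cols, image1.rows));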
I'm trying to convolve an image using the FFT. I use OpenCV, so the images are in Mat containers. I convert the color image to a grayscale image, then add a second, all-zero channel for the imaginary part. Then I take this 2-channel Mat and convolve it with Prewitt's kernel. I get a result that is very different from the one I get with the normal (spatial) convolution algorithm. The left image is the output I get using the FFT and the right image is the output of the normal convolution.
Below is the pseudo-algorithm of how I do the operation:
Convert image Mat and kernel Mat to complex Mats by adding second channel (Result Mat type is CV_32FC2)
Assign all Mat elements to complex vectors
Zero pad the vectors to the same next power of 2 (see the size note after this list)
FFT the vectors
Signal multiply both vectors elementwise and assign result to result vector
Inverse FFT the result vector
Convert result vector to Mat
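One detail I am not sure about in the padding step: padding both vectors to the same next power of two keeps the lengths equal, but for a linear (non-circular) convolution the padded length normally has to be at least N + M - 1 samples, otherwise the product of the DFTs corresponds to a circular convolution. A small sketch of that size computation (my own check, not part of the code below):
#include <cstddef>

// Smallest power of two that can hold the linear convolution of an
// N-sample signal with an M-sample kernel (avoids circular wrap-around).
std::size_t linearConvPaddedSize(std::size_t n, std::size_t m) {
    std::size_t needed = n + m - 1;   // length of the linear convolution
    std::size_t p = 1;
    while (p < needed) p <<= 1;       // round up to a power of two for the FFT
    return p;
}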
I think the FFT algorithm is not the problem, because when I take an image, FFT it, and then inverse FFT it, I get the original image back just fine. But I could be wrong. So here is the FFT algorithm. Notice how there are two of them; I use the second one. I also tried other FFT algorithms and they all produce the same output. FFT'ing and IFFT'ing the same image only skips the signal multiplication step above, so I think that's where the problem is. Here is the code of the operation:
std::vector<cf> signalMultiplication(std::vector<cf> lh, std::vector<cf> rh) {
    std::vector<cf> imVec = lh, kerVec = rh, resultVec;
    resultVec.resize(imVec.size());
    std::transform(imVec.begin(), imVec.end(), kerVec.begin(), resultVec.begin(), std::multiplies<cf>());
    return resultVec;
}
I tried multiplying them using a for loop, but the result was the same. I don't know where the problem is, and I can't post the whole code here since it is too long, so tell me where you think the problem is and I'll post the code of that part.
@Paul, below is the main body of the code:
cv::Mat convolution2D(cv::Mat image, cv::Mat kernel) {
    cv::Mat imMat, kerMat;
    imMat = convertToComplexMat(image);
    kerMat = convertToComplexMat(kernel);
    std::vector<cf> imVec, kerVec, resultVec;
    imVec = matElementsToVector<cf>(imMat);
    kerVec = matElementsToVector<cf>(kerMat);
    float power = log2f(imVec.size());
    if (abs(power - (int)power) == 0)
        power++;
    else
        power = ceil(power);
    zeroPadding(imVec, power);
    zeroPadding(kerVec, power);
    //FFT code I linked takes valarray as argument so I convert vectors to valarray and back
    std::valarray<cf> imCArr(imVec.data(), imVec.size());
    std::valarray<cf> kerCArr(kerVec.data(), kerVec.size());
    fftRosetta(imCArr);
    fftRosetta(kerCArr);
    imVec.assign(std::begin(imCArr), std::end(imCArr));
    kerVec.assign(std::begin(kerCArr), std::end(kerCArr));
    resultVec = signalMultiplication(imVec, kerVec);
    std::valarray<cf> resCArr(resultVec.data(), resultVec.size());
    ifftRosetta(resCArr);
    resultVec.assign(std::begin(resCArr), std::end(resCArr));
    cv::Mat resultMat;
    resultMat = vectorToMatElementsRowMajor(resultVec, imMat.rows, imMat.cols, imMat.type());
    std::vector<cv::Mat> matVec;
    cv::split(resultMat, matVec);
    return matVec[0];
}
These are the custom functions:
convertToComplexMat, matElementsToVector, zeroPadding, fftRosetta, ifftRosetta, signalMultiplication, vectorToMatElementsRowMajor
signalMultiplication is posted above, and fftRosetta and ifftRosetta are linked, so here are the rest of the functions:
using cf = std::complex<float>;
cv::Mat convertToComplexMat(cv::Mat imageMat) {
    cv::Mat matOper;
    if (imageMat.channels() == 3)
        cv::cvtColor(imageMat, matOper, cv::COLOR_BGR2GRAY);
    else
        matOper = imageMat.clone();
    matOper.convertTo(matOper, CV_32FC1);
    cv::Mat compChannel = cv::Mat::zeros(matOper.rows, matOper.cols, CV_32FC1);
    std::vector<cv::Mat> channels;
    channels.push_back(matOper);
    channels.push_back(compChannel);
    cv::merge(channels, matOper);
    return matOper;
}
template <typename T>
std::vector<T> matElementsToVector(cv::Mat operand) {
    std::vector<T> vecOper;
    int cn = operand.channels();
    int lele = operand.total();
    for (int i = 0; i < operand.total(); i++) {
        if (cn == 1)
            vecOper.push_back(operand.at<cv::Vec<T, 1>>(i)[0]);
        else if (cn == 2) {
            if (typeid(T) == typeid(cf)) {
                T xd = operand.at<T>(i);
                vecOper.push_back(xd);
            }
            else
                for (int k = 0; k < cn; k++)
                    vecOper.push_back(operand.at<cv::Vec<T, 2>>(i)[k]);
        }
        else if (cn == 3)
            for (int k = 0; k < cn; k++)
                vecOper.push_back(operand.at<cv::Vec<T, 3>>(i)[k]);
    }
    return vecOper;
}
void zeroPadding(std::vector<cf>& a, int power) {
    int p, ioper;
    if (power == -1)
        p = ceil(log2f(a.size()));
    else
        p = power;
    ioper = pow(2, p);
    int size = a.size();
    for (int i = 0; i < ioper - size; i++) {
        a.push_back(0);
    }
}
template <typename T>
cv::Mat vectorToMatElementsRowMajor(std::vector<T> operand, int mrows, int mcols, int mtype) {
    cv::Mat matoper(mrows, mcols, mtype);
    for (int j = 0; j < matoper.total(); j++) {
        matoper.at<T>(j) = operand[j];
    }
    return matoper;
}
@Cris I tried it again with the OpenCV DFT like you said, following the directions here. I applied the DFT to the image and the kernel, element-wise multiplied them, then applied the IDFT. But the result is something very different now. I can see a resemblance of the original image in there, but there are multiple shadows of it at different angles. I think the problem is how I do the signal multiplication, but I can't find any answers on how to multiply 2D signals. Here is the code; the output image is below it:
cv::Mat convolution2DopenCV(cv::Mat image, cv::Mat kernel) {
    cv::Mat paddedImage, paddedKernel, imgOper, kerOper;
    if (image.channels() == 3)
        cv::cvtColor(image, imgOper, cv::COLOR_BGR2GRAY);
    else
        imgOper = image.clone();
    kerOper = kernel;
    int m = cv::getOptimalDFTSize(imgOper.rows);
    int n = cv::getOptimalDFTSize(imgOper.cols);
    cv::copyMakeBorder(imgOper, paddedImage, 0, m - imgOper.rows, 0, n - imgOper.cols, cv::BORDER_CONSTANT, cv::Scalar::all(0));
    cv::copyMakeBorder(kerOper, paddedKernel, 0, m - kerOper.rows, 0, n - kerOper.cols, cv::BORDER_CONSTANT, cv::Scalar::all(0));
    cv::Mat planesImage[] = { cv::Mat_<float>(paddedImage), cv::Mat::zeros(paddedImage.size(), CV_32F) };
    cv::Mat cmpImgMat;
    cv::merge(planesImage, 2, cmpImgMat);
    cv::dft(cmpImgMat, cmpImgMat);
    cv::Mat planesKernel[] = { cv::Mat_<float>(paddedKernel), cv::Mat::zeros(paddedKernel.size(), CV_32F) };
    cv::Mat cmpKerMat;
    cv::merge(planesKernel, 2, cmpKerMat);
    cv::dft(cmpKerMat, cmpKerMat);
    cv::Mat resultMat = cmpImgMat.mul(cmpKerMat);
    cv::Mat planes[2];
    cv::idft(resultMat, resultMat);
    cv::split(resultMat, planes);
    cv::normalize(planes[0], planes[0], 0, 255, cv::NORM_MINMAX);
    return planes[0];
}
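One thing I suspect but have not verified: cmpImgMat.mul(cmpKerMat) multiplies the two channels independently (real with real, imaginary with imaginary), which is not a complex multiplication. A minimal sketch of the alternative I am considering, using cv::mulSpectrums for the per-element complex product (assuming both spectra are CV_32FC2 of the same size):
// Per-element complex multiplication of the two DFT spectra;
// the last argument (conjB = false) means the kernel spectrum is not conjugated.
cv::Mat spectrumProduct;
cv::mulSpectrums(cmpImgMat, cmpKerMat, spectrumProduct, 0, false);

// Back to the spatial domain; DFT_SCALE divides by the array size so the
// values are on the same scale as a direct convolution.
cv::Mat spatial;
cv::idft(spectrumProduct, spatial, cv::DFT_SCALE);   // then split and take channel 0 as above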
That's everything; if there is something I'm missing, let me know.
I have been trying a lot to get an undistorted image without interpolation, but when I execute the code below I get a weird image. I am using the function initUndistortRectifyMap, which gives mapx and mapy of type CV_16SC2, and then with the convertMaps function I convert mapx and mapy to type CV_32FC1. I have been trying to debug the reason but could not find anything helpful.
The distorted image
image after applying undistort without interpolation
int main()
{
    Mat Cam1MatrixParam, Cam1Distortion;
    Mat cf1;
    cf1 = imread("cam1.distort1.jpg", CV_LOAD_IMAGE_COLOR);
    Size imagesize = cf1.size();
    FileStorage fs1("cameracalibration.xml", FileStorage::READ);
    fs1["camera_matrix"] >> Cam1MatrixParam;
    fs1["distortion_coefficients"] >> Cam1Distortion;
    Mat R = Mat::eye(3, 3, CV_32F) * 1;
    int width = cf1.cols;
    int height = cf1.rows;
    Mat undistorted = Mat(height, width, CV_8UC3);
    Mat mapx = Mat(height, width, CV_32FC1);
    Mat mapy = Mat(height, width, CV_32FC1);
    initUndistortRectifyMap(Cam1MatrixParam, Cam1Distortion, Cam1MatrixParam, R, imagesize, CV_16SC2, mapx, mapy);
    convertMaps(mapx, mapy, mapx, mapy, CV_32FC1, false);
    for (int j = 0; j < height; j++)
    {
        for (int i = 0; i < width; i++)
        {
            undistorted.at<uchar>(mapy.at<float>(j, i), mapx.at<float>(j, i)) = cf1.at<uchar>(j, i);
        }
    }
    imwrite("cam1.undistortimage.png", undistorted);
}
The image with this version of the code:
undistorted.at(j, i) = cf1.at(mapy.at(j, i), mapx.at(j, i));
The image with the undistort function (remap with nearest interpolation)
It looks like instead of undoing the distortion it applies it once more.
mapx and mapy map from the display coordinates to the photo coordinates.
undistorted.at<cv::Vec3b>(j, i) = distort.at<cv::Vec3b>(mapy.at<float>(j, i), mapx.at<float>(j, i));
You can interpret this code as: for each display coordinate {j, i} find its corresponding (distorted) coordinate in the photo and then copy the pixel.
You are using color images (cv::Vec3b), so try instead:
undistorted.at<cv::Vec3b>(mapy.at<float>(j, i), mapx.at<float>(j, i)) = cf1.at<cv::Vec3b>(j, i);
possibly combined with Maxim Egorushkin's answer, if the undistort map is reversed.
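For completeness, a minimal sketch of the gather-style loop (my own rewrite, not code from the question) that copies in the direction cv::remap uses, with rounding and a bounds check since the mapped coordinates can fall outside the photo:
for (int j = 0; j < height; j++)
{
    for (int i = 0; i < width; i++)
    {
        // mapx/mapy give, for each undistorted pixel (j, i), the position
        // to read from in the distorted photo (nearest-neighbour here).
        int srcX = cvRound(mapx.at<float>(j, i));
        int srcY = cvRound(mapy.at<float>(j, i));
        if (srcX >= 0 && srcX < width && srcY >= 0 && srcY < height)
            undistorted.at<cv::Vec3b>(j, i) = cf1.at<cv::Vec3b>(srcY, srcX);
    }
}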
I want to undistort a camera image. The undistort function of OpenCV is too slow, so I want to split it, as mentioned in the documentation, into the two calls initUndistortRectifyMap (as an init step) and remap (in the render loop).
At first, I tried a test program with the basic approach:
//create source matrix
cv::Mat srcImg(res.first, res.second, cvFormat, const_cast<char*>(pImg));
//fill matrices
cv::Mat cam(3, 3, cv::DataType<float>::type);
cam.at<float>(0, 0) = 528.53618582196384f;
cam.at<float>(0, 1) = 0.0f;
cam.at<float>(0, 2) = 314.01736116032430f;
cam.at<float>(1, 0) = 0.0f;
cam.at<float>(1, 1) = 532.01912214324500f;
cam.at<float>(1, 2) = 231.43930864205211f;
cam.at<float>(2, 0) = 0.0f;
cam.at<float>(2, 1) = 0.0f;
cam.at<float>(2, 2) = 1.0f;
cv::Mat dist(5, 1, cv::DataType<float>::type);
dist.at<float>(0, 0) = -0.11839989180635836f;
dist.at<float>(1, 0) = 0.25425420873955445f;
dist.at<float>(2, 0) = 0.0013269901775205413f;
dist.at<float>(3, 0) = 0.0015787467748277866f;
dist.at<float>(4, 0) = -0.11567938093172066f;
cv::Mat map1, map2;
cv::initUndistortRectifyMap(cam, dist, cv::Mat(), cam, cv::Size(res.second, res.first), CV_32FC1, map1, map2);
cv::remap(srcImg, *m_undistImg, map1, map2, cv::INTER_CUBIC);
The format of my camera image is BGRA. The code compiles and starts, but the resulting image is wrong:
Any ideas, what's wrong with my code?
It works, yes. To be honest, I don't remember exactly what the problem was; I interchanged width and height or something like that.
This is my running code:
//create source matrix
cv::Mat srcImg(resolution.second, resolution.first, cvFormat, const_cast<unsigned char*>(pSrcImg));
//look if an update of the maps is necessary
if ((resolution.first != m_width) || (m_height != resolution.second))
{
    m_width = resolution.first;
    m_height = resolution.second;
    cv::initUndistortRectifyMap(*m_camData, *m_distData, cv::Mat(), *m_camData, cv::Size(resolution.first, resolution.second), CV_32FC1, *m_undistMap1, *m_undistMap2);
}
//create undistorted image
cv::remap(srcImg, *m_undistortedImg, *m_undistMap1, *m_undistMap2, cv::INTER_LINEAR);
return reinterpret_cast<unsigned char*>(m_undistortedImg->data);
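For reference, the mix-up was most likely the argument order: the cv::Mat constructor takes (rows, cols), i.e. (height, width), while cv::Size takes (width, height). A tiny sketch with hypothetical dimensions, just to illustrate the convention:
// Hypothetical dimensions, only to show the ordering convention.
int width  = 640;   // number of columns
int height = 480;   // number of rows

cv::Mat img(height, width, CV_8UC4);   // Mat wants (rows, cols)
cv::Size size(width, height);          // Size wants (width, height)
// img.rows == height, img.cols == width, img.size() == size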
I'm new to OpenCV and I'm trying to filter an image with a Gaussian filter in the frequency domain, but there is a runtime error:
"assertion failed (type == srcB.type() && srcA.size() == srcB.size()) in cv::mulSpectrum"
I know it is caused by the return type of my filter: the type doesn't match, and I don't know how to make it right.
Here is the filter function (my guess is that the return value from this function is wrong):
cv::Mat createGaussianHighPassFilter(cv::Size size, double cutoffInPixels) {
    Mat ghpf(size, CV_64F);
    cv::Point center(size.width / 2, size.height / 2);
    for (int u = 0; u < ghpf.rows; u++)
    {
        for (int v = 0; v < ghpf.cols; v++)
        {
            ghpf.at<double>(u, v) = gaussianCoeff(u - center.x, v - center.y, cutoffInPixels); // kernel for the 128x128 Gaussian filter
        }
    }
    return ghpf;
}
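gaussianCoeff is not shown in the question; for a Gaussian high-pass filter it is typically something like the following hypothetical stand-in, with d0 as the cutoff (not necessarily the exact function used here):
#include <cmath>

// Hypothetical helper: standard Gaussian high-pass response with cutoff d0,
// H(u, v) = 1 - exp(-(u^2 + v^2) / (2 * d0^2)).
double gaussianCoeff(double u, double v, double d0) {
    double d2 = u * u + v * v;
    return 1.0 - std::exp(-d2 / (2.0 * d0 * d0));
}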
And this is the main function:
Mat mask = createGaussianHighPassFilter(complexI.size(),16);
shift(mask);
Mat AX[] = {Mat::zeros(complexI.size(), CV_32F), Mat::zeros(complexI.size(), CV_32F)};
Mat kernel_spec;
AX[0] = mask; // real
AX[1] = mask; // imaginary
merge(AX, 2, kernel_spec);
cout<<complexI.type()<<endl<<kernel_spec.type(); //the result is 13 and 14, the type doesn't match
mulSpectrums(complexI, kernel_spec, complexI, DFT_ROWS); // only DFT_ROWS accepted
updateMag(complexI); // show spectrum
updateResult(complexI); // do inverse transform
Well of course they don't match. Your mask (and therefore kernel_spec) is CV_64F, while complexI is CV_32F (type 13 is CV_32FC2, type 14 is CV_64FC2). Do a Mat::convertTo() and it should work.
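A minimal sketch of that fix (assuming complexI really is CV_32FC2, as the printed types suggest): convert the mask to CV_32F before merging, so the filter spectrum matches complexI.
// Build the two-channel filter spectrum in CV_32F to match complexI.
Mat mask32;
mask.convertTo(mask32, CV_32F);

Mat AX32[] = { mask32, mask32 };   // real and imaginary parts
Mat kernel_spec32;
merge(AX32, 2, kernel_spec32);     // CV_32FC2, same type as complexI

mulSpectrums(complexI, kernel_spec32, complexI, DFT_ROWS);   // same flags as above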
HTH
I am trying to use the OpenCV EM algorithm to do color extraction. I am using the following code based on the example in the OpenCV documentation:
cv::Mat capturedFrame ( height, width, CV_8UC3 );
int i, j;
int nsamples = 1000;
cv::Mat samples ( nsamples, 2, CV_32FC1 );
cv::Mat labels;
cv::Mat img = cv::Mat::zeros ( height, height, CV_8UC3 );
img = capturedFrame;
cv::Mat sample ( 1, 2, CV_32FC1 );
CvEM em_model;
CvEMParams params;
samples = samples.reshape ( 2, 0 );
for ( i = 0; i < N; i++ )
{
    //from the training samples
    cv::Mat samples_part = samples.rowRange ( i*nsamples/N, (i+1)*nsamples/N );
    cv::Scalar mean (((i%N)+1)*img.rows/(N1+1),((i/N1)+1)*img.rows/(N1+1));
    cv::Scalar sigma (30,30);
    cv::randn(samples_part,mean,sigma);
}
samples = samples.reshape ( 1, 0 );
//initialize model parameters
params.covs = NULL;
params.means = NULL;
params.weights = NULL;
params.probs = NULL;
params.nclusters = N;
params.cov_mat_type = CvEM::COV_MAT_SPHERICAL;
params.start_step = CvEM::START_AUTO_STEP;
params.term_crit.max_iter = 300;
params.term_crit.epsilon = 0.1;
params.term_crit.type = CV_TERMCRIT_ITER|CV_TERMCRIT_EPS;
//cluster the data
em_model.train ( samples, Mat(), params, &labels );
cv::Mat probs;
probs = em_model.getProbs();
cv::Mat weights;
weights = em_model.getWeights();
cv::Mat modelIndex = cv::Mat::zeros ( img.rows, img.cols, CV_8UC3 );
for ( i = 0; i < img.rows; i++ )
{
    for ( j = 0; j < img.cols; j++ )
    {
        sample.at<float>(0) = (float)j;
        sample.at<float>(1) = (float)i;
        int response = cvRound ( em_model.predict ( sample ) );
        modelIndex.data [ modelIndex.cols*i + j ] = response;
    }
}
My questions here are:
Firstly, I want to extract each model, here five in total, and then store the corresponding pixel values in five different matrices. In this case, I could have five different colors separately. Here I only obtained their indexes; is there any way to get their corresponding colors? To make it easy, I could start by finding the dominant color based on these five GMMs.
Secondly, here my sample data points number 100, and it takes nearly 3 seconds for them, but I want to do all of this in no more than 30 milliseconds. I know the OpenCV background extraction, which uses a GMM, performs really fast, below 20 ms; that means there must be a way for me to do all of this within 30 ms for all 600x800 = 480000 pixels. I found the predict function to be the most time-consuming one.
First Question:
In order to do color extraction you first need to train the EM with your input pixels. After that you simply loop over all the input pixels again and use predict() to classify each of them. I've attached a small example that uses EM for foreground/background separation based on colors. It shows you how to extract the dominant color (mean) of each Gaussian and how to access the original pixel color.
#include <opencv2/opencv.hpp>

int main(int argc, char** argv) {
    cv::Mat source = cv::imread("test.jpg");
    //output images
    cv::Mat meanImg(source.rows, source.cols, CV_32FC3);
    cv::Mat fgImg(source.rows, source.cols, CV_8UC3);
    cv::Mat bgImg(source.rows, source.cols, CV_8UC3);
    //convert the input image to float
    cv::Mat floatSource;
    source.convertTo(floatSource, CV_32F);
    //now convert the float image to column vector
    cv::Mat samples(source.rows * source.cols, 3, CV_32FC1);
    int idx = 0;
    for (int y = 0; y < source.rows; y++) {
        cv::Vec3f* row = floatSource.ptr<cv::Vec3f > (y);
        for (int x = 0; x < source.cols; x++) {
            samples.at<cv::Vec3f > (idx++, 0) = row[x];
        }
    }
    //we need just 2 clusters
    cv::EMParams params(2);
    cv::ExpectationMaximization em(samples, cv::Mat(), params);
    //the two dominating colors
    cv::Mat means = em.getMeans();
    //the weights of the two dominant colors
    cv::Mat weights = em.getWeights();
    //we define the foreground as the dominant color with the largest weight
    const int fgId = weights.at<float>(0) > weights.at<float>(1) ? 0 : 1;
    //now classify each of the source pixels
    idx = 0;
    for (int y = 0; y < source.rows; y++) {
        for (int x = 0; x < source.cols; x++) {
            //classify
            const int result = cvRound(em.predict(samples.row(idx++), NULL));
            //get the according mean (dominant color)
            const double* ps = means.ptr<double>(result, 0);
            //set the according mean value to the mean image
            float* pd = meanImg.ptr<float>(y, x);
            //float images need to be in [0..1] range
            pd[0] = ps[0] / 255.0;
            pd[1] = ps[1] / 255.0;
            pd[2] = ps[2] / 255.0;
            //set either foreground or background
            if (result == fgId) {
                fgImg.at<cv::Point3_<uchar> >(y, x, 0) = source.at<cv::Point3_<uchar> >(y, x, 0);
            } else {
                bgImg.at<cv::Point3_<uchar> >(y, x, 0) = source.at<cv::Point3_<uchar> >(y, x, 0);
            }
        }
    }
    cv::imshow("Means", meanImg);
    cv::imshow("Foreground", fgImg);
    cv::imshow("Background", bgImg);
    cv::waitKey(0);
    return 0;
}
I've tested the code with the following image and it performs quite well.
Second Question:
I've noticed that the maximum number of clusters has a huge impact on the performance, so it's better to set it to a very conservative value instead of leaving it empty or setting it to the number of samples as in your example. Furthermore, the documentation mentions an iterative procedure to repeatedly optimize the model with less-constrained parameters. Maybe this gives you some speed-up. To read more, please have a look at the docs inside the sample code that is provided for train() here.
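For reference, that iterative refinement looks roughly like this with the legacy CvEM API (a sketch adapted from the old documentation example, not tested here; getter names such as get_means(), get_covs() and get_weights() may differ between OpenCV versions):
// First pass: cheap spherical covariances, automatic initialization.
CvEM em_model;
CvEMParams params;
params.nclusters    = N;
params.cov_mat_type = CvEM::COV_MAT_SPHERICAL;
params.start_step   = CvEM::START_AUTO_STEP;
params.term_crit.max_iter = 10;
params.term_crit.epsilon  = 0.1;
params.term_crit.type     = CV_TERMCRIT_ITER | CV_TERMCRIT_EPS;
em_model.train(samples, cv::Mat(), params, &labels);

// Second pass: reuse the first model as the starting point and relax the
// covariance constraint (diagonal instead of spherical).
CvEM em_model2;
params.cov_mat_type = CvEM::COV_MAT_DIAGONAL;
params.start_step   = CvEM::START_E_STEP;
params.means        = em_model.get_means();
params.covs         = (const CvMat**)em_model.get_covs();
params.weights      = em_model.get_weights();
em_model2.train(samples, cv::Mat(), params, &labels);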