What is the best way to represent a mat of mats in OpenCV (C++)? At the moment my code uses a vector of vector<Mat>, but I need the benefits of using a matrix.
As far as my searches go, unfortunately no such thing is defined in OpenCV's C++ API (the Python version has it, though); a vector is all you can get.
However, if all of your images are grayscale and have the same size, you can define an n-channel image and put each of your images into one of the channels, for example:
Mat im1 = Mat::ones(n_row, n_col, CV_8UC1);
Mat im2 = Mat::ones(n_row, n_col, CV_8UC1);
Mat im3 = Mat::ones(n_row, n_col, CV_8UC1);
... // n similar images
Mat img(n_row, n_col, CV_8UC(n));
vector<Mat> vec;
vec.push_back(im1);
vec.push_back(im2);
vec.push_back(im3);
... // push back all n images
merge(vec, img);
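To get one of the original images back out of the merged mat, you can split it again, or copy out a single channel; a minimal sketch, assuming cv::extractChannel (core module) is available in your OpenCV version:

Mat third;
extractChannel(img, third, 2); // copies channel 2 (here: im3) into its own CV_8UC1 mat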
I have a folder of 100+ images and I need to store each image in its own matrix. Is there any way I can do this instead of hard-coding it (as shown below)?
Mat src1 = imread("ts_04-11-21_16-27-00-mod", CV_LOAD_IMAGE_GRAYSCALE);
Mat src2 = imread("ts_04-11-21_16-27-01-mod", CV_LOAD_IMAGE_GRAYSCALE);
Mat src3 = imread("ts_04-11-21_16-27-02-mod", CV_LOAD_IMAGE_GRAYSCALE);
Mat src4 = imread("ts_04-11-21_16-27-03-mod", CV_LOAD_IMAGE_GRAYSCALE);
Mat src5 = imread("ts_04-11-21_16-27-04-mod", CV_LOAD_IMAGE_GRAYSCALE);
I'm using Opencv and C++.
Maybe something like:
vector<Mat> images;
for (int i = 0; i < n; i++)
{
    // note: "..." + i on a string literal is pointer arithmetic in C++, not
    // concatenation, so build the filename with a stream instead
    // (needs <sstream> and <iomanip>)
    std::ostringstream name;
    name << "ts_04-11-21_16-27-" << std::setw(2) << std::setfill('0') << i << "-mod";
    Mat in = imread(name.str(), CV_LOAD_IMAGE_GRAYSCALE);
    images.push_back(in);
}
Beware that reading a lot of images can consume a lot of memory.
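If you'd rather not build the filenames by hand at all, here is a sketch using cv::glob (available in OpenCV 3.x; check whether your build ships it before relying on this, and the pattern here is only an illustration):

vector<String> names;
glob("ts_*-mod*", names); // collects the matching filenames, sorted
vector<Mat> images;
for (size_t i = 0; i < names.size(); i++)
    images.push_back(imread(names[i], CV_LOAD_IMAGE_GRAYSCALE));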
I'm trying to implement color conversion from RGB to LMS and back from LMS to RGB, using reshape for the matrix multiplication, following the answer to this question: Fastest way to apply color matrix to RGB image using OpenCV 3.0?
My ori Mat object comes from an image with 3 channels (RGB), and I need to multiply it by a 1-channel matrix (lms). It seems I have an issue with the matrix types. I've read the reshape docs and the questions related to this issue, like Issues multiplying Mat matrices, and I believe I have followed the instructions.
Here's my code: [UPDATED: convert into a flat image]
void test(const Mat &forreshape, Mat &output, Mat &pic, int rows, int cols)
{
    Mat lms(3, 3, CV_32FC3);
    Mat rgb(3, 3, CV_32FC3);
    Mat intolms(rows, cols, CV_32F);
    lms = (Mat_<float>(3, 3) << 1.4671,  0.1843,  0.0030,
                                3.8671, 27.1554,  3.4557,
                                4.1194, 45.5161, 17.884);
    /* switch the order of the matrix according to the BGR order of color on OpenCV */
    Mat transpose = lms.t(); // transpose of the lms matrix
    pic = forreshape.reshape(1, rows*cols);
    Mat flatFloatImage;
    pic.convertTo(flatFloatImage, CV_32F);
    rgb = flatFloatImage*transpose;
    output = rgb.reshape(3, rows);
}
I define my Mat objects, and I have converted the input into float using convertTo:
Mat ori = imread("ori.png", CV_LOAD_IMAGE_COLOR);
int rows = ori.rows;
int cols = ori.cols;
Mat forreshape;
ori.convertTo(forreshape, CV_32F);
Mat pic(rows, cols, CV_32FC3);
Mat output(rows, cols, CV_32FC3);
The error is:
OpenCV Error: Assertion failed (type == B.type() && (type == CV_32FC1 || type == CV_64FC1 || type == CV_32FC2 || type == CV_64FC2)) ,
so it's a type issue.
I tried changing all the types to either 32FC3 or 32FC1, but that doesn't seem to work. Any suggestions?
I believe what you need is to convert your input to a flat image and then multiply them:
float lms[] = {1.4671,  0.1843,  0.0030,
               3.8671, 27.1554,  3.4557,
               4.1194, 45.5161, 17.884};
Mat lmsMat(3, 3, CV_32F, lms);
Mat flatImage = ori.reshape(1, ori.rows * ori.cols);
Mat flatFloatImage;
flatImage.convertTo(flatFloatImage, CV_32F);
Mat mixedImage = flatFloatImage * lmsMat;
Mat output = mixedImage.reshape(3, ori.rows);
I might have messed up the lms matrix there, but I guess you can pick it up from here.
Also see 3D matrix multiplication in opencv for RGB color mixing
EDIT:
The problem with the distortion is that you get overflow after the float to 8U conversion. This should do the trick:
rgb = flatFloatImage*transpose;
rgb.convertTo(pic, CV_32S);
output = pic.reshape(3, rows);
Output: [output image]
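Alternatively, if you want an 8U image back without saturation, here is a sketch that normalizes into [0,255] first (using the standard cv::normalize):

normalize(rgb, rgb, 0, 255, NORM_MINMAX); // squeeze values into the 8-bit range
rgb.convertTo(pic, CV_8U);
output = pic.reshape(3, rows);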
Also, I'm not sure, but a quick google search gives me a different matrix for LMS, see here. Also note that OpenCV stores colors in B-G-R order instead of R-G-B, so change your mix matrices accordingly.
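For example, a minimal sketch of handling the channel order (it converts the image to R-G-B first so a matrix published in RGB form can be used unchanged; lmsMat is the 3x3 matrix from above, and its values are unverified):

Mat rgbOrder;
cvtColor(ori, rgbOrder, CV_BGR2RGB);   // reorder B-G-R pixels to R-G-B
Mat flat = rgbOrder.reshape(1, rgbOrder.rows * rgbOrder.cols);
Mat flatFloat;
flat.convertTo(flatFloat, CV_32F);
Mat mixed = flatFloat * lmsMat.t();    // one pixel per row, so right-multiply by the transpose
Mat lmsImage = mixed.reshape(3, ori.rows);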
I know that to fill a cv::Mat there is the nice cv::Mat::setTo method, but I don't understand why I don't get the same effect with these pieces of code:
// build the mat
m = cv::Mat::zeros(size, CV_8UC3);
cv::cvtColor(m, m, CV_BGR2BGRA); // add alpha channel
/////////////////////////////////////////////////////////// this works
m.setTo( cv::Scalar(0,144,0,55) );
m = cv::Mat::zeros(size, CV_8UC3);
cv::cvtColor(m, m, CV_BGR2BGRA);
/////////////////////////////////////////////////////////// this does NOT work
m = m + cv::Scalar(0,144,0,55);
m = cv::Mat::ones(size, CV_8UC3);
cv::cvtColor(m, m, CV_BGR2BGRA);
/////////////////////////////////////////////////////////// this does NOT work
m = m.mul( cv::Scalar(0,144,0,55) );
m = cv::Mat::zeros(size, CV_8UC3);
cv::cvtColor(m, m, CV_BGR2BGRA);
/////////////////////////////////////////////////////////// this works too!
cv::rectangle(m,
              cv::Rect(0, 0, m.cols, m.rows),
              cv::Scalar(0,144,0,55),
              -1);
PS: I'm displaying those mats as an OpenGL alpha texture
I guess "not work" means that the output is not the same as using setTo?
When transforming with cv::cvtColor, the alpha-channel is initialized to 255. If you add or multiply anything it will stay at 255.
Why do you use cv::cvtColor to transform instead of just using CV_8UC4 when creating the mat?
You can't use cv::Mat::ones for multichannel initialization: only the first channel is set to 1. Use cv::Mat(x, y, CV_8UC3, CV_RGB(1,1,1)) instead.
For an alpha channel you need to use CV_8UC4, not CV_8UC3.
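A minimal sketch of that suggestion (create the mat as 4-channel from the start, so the alpha channel begins at 0 and the arithmetic behaves like setTo):

cv::Mat m(size, CV_8UC4, cv::Scalar(0, 0, 0, 0)); // all four channels start at zero
m = m + cv::Scalar(0, 144, 0, 55);                // alpha now ends up at 55, not 255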
I'm a beginner in OpenCV and haven't grasped its main concepts in detail yet, so maybe my code is too naive.
Out of curiosity, I want to try the machine learning functions like KNN and ANN.
I have a set of images of size 28*28 pixels, and I want to train a classifier for digit recognition. So first I need to assemble the training set and the training classes:
Mat train_data = Mat(rows, cols, CV_32FC1);
Mat train_classes = Mat(rows, 1, CV_32SC1);
Mat img = imread(image);
Mat float_data(1, cols, CV_32FC1);
img.convertTo(float_data, CV_32FC1);
How do I fill train_data with float_data?
I thought it was something like this:
for (int i = 0; i < train_data.rows; ++i)
{
    ... // image is a string that contains the next image path
    img = imread(image);
    img.convertTo(float_data, CV_32FC1);
    for (int x = 0; x < train_data.cols; x++) {
        train_data.at<float>(i, x) = float_data.at<float>(x);
    }
}
KNearest knn;
knn.train(train_data, train_classes);
but of course it doesn't work...
Please tell me how to do it right, or at least suggest some books for dummies :)
Mat train_data; // initially empty
Mat train_labels; // empty, too.
// for each img in the train set :
Mat img = imread("image_path");
Mat float_data;
img.convertTo(float_data, CV_32FC1); // to float
train_data.push_back( float_data.reshape(1,1) ); // add 1 row (flattened image)
train_labels.push_back( label_for_image ); // add 1 item
KNearest knn;
knn.train(train_data, train_labels);
It's all the same for the other ML algorithms!
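Prediction then follows the same flatten-and-convert pattern; a sketch against the 2.4-era KNearest API (the file name is just a placeholder, and the query image must be preprocessed exactly like the training images):

Mat query = imread("some_digit.png");
Mat query_float;
query.convertTo(query_float, CV_32FC1);
float predicted = knn.find_nearest(query_float.reshape(1, 1), 3); // k = 3 nearest neighbours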
How can I apply a notch filter to an image spectrum using OpenCV 2.4 and C++? I want to calculate the DFT of an image, suppress certain frequencies, and calculate the inverse DFT. Can anyone show me some sample code for applying a notch filter in the frequency domain?
EDIT:
Here is what I tried, but the quadrants of the frequency spectrum are not in order, so the origin of the spectrum is not at the center of the image. That makes it difficult for me to identify the frequencies to suppress. When I swap quadrants so that the origin is at the center, the inverse DFT shows wrong results. Can anyone show me how to do the inverse DFT with swapped quadrants?
I don't understand the number of columns in the frequency images filter1 and filter2 (see code). If I use filter1.cols as u in the for loop, I don't reach the right border of the images. filter1 and filter2 seem to have approx. 5000 columns, but the source image has a resolution of 1280x1024 (grayscale). Any thoughts on that?
Any further comments about my code?
Mat img;
img=imread(filename,CV_LOAD_IMAGE_GRAYSCALE);
int M = getOptimalDFTSize( img.rows );
int N = getOptimalDFTSize( img.cols );
Mat padded;
copyMakeBorder(img, padded, 0, M - img.rows, 0, N - img.cols, BORDER_CONSTANT, Scalar::all(0));
Mat planes[] = {Mat_<float>(padded), Mat::zeros(padded.size(), CV_32F)};
Mat complexImg;
merge(planes, 2, complexImg);
dft(complexImg, complexImg,cv::DFT_SCALE|cv::DFT_COMPLEX_OUTPUT);
split(complexImg, planes);
Mat filter1;
planes[0].copyTo(filter1);
Mat filter2;
planes[1].copyTo(filter2);
for (int i = 0; i < filter1.rows; ++i)
{
    for (int u = 7; u < 15; ++u)
    {
        filter1.at<uchar>(i,u) = 0;
        filter2.at<uchar>(i,u) = 0;
    }
}
Mat inverse[] = {filter1,filter2};
Mat filterspec;
merge(inverse, 2, filterspec);
cv::Mat inverseTransform;
cv::dft(filterspec, inverseTransform,cv::DFT_INVERSE|cv::DFT_REAL_OUTPUT);
cv::Mat finalImage;
inverseTransform.convertTo(finalImage, CV_8U);
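For reference, the usual quadrant swap (the same pattern as in the OpenCV DFT tutorial) looks roughly like this: applying it once moves the origin to the center, and applying it again before the inverse DFT restores the original layout.

void fftShift(Mat &spec)
{
    // swap diagonally opposite quadrants so the DC term moves to the center;
    // calling this a second time undoes the swap
    int cx = spec.cols / 2;
    int cy = spec.rows / 2;
    Mat q0(spec, Rect(0, 0, cx, cy));   // top-left
    Mat q1(spec, Rect(cx, 0, cx, cy));  // top-right
    Mat q2(spec, Rect(0, cy, cx, cy));  // bottom-left
    Mat q3(spec, Rect(cx, cy, cx, cy)); // bottom-right
    Mat tmp;
    q0.copyTo(tmp); q3.copyTo(q0); tmp.copyTo(q3);
    q1.copyTo(tmp); q2.copyTo(q1); tmp.copyTo(q2);
}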