Opencv C++ create a Mat of Mats

I am trying to build a distance matrix between frames in C++ with OpenCV 2.4.10. I think I need a Mat of Mats so I can put all the frames in the first row and column and apply an XOR operator frame by frame. But to do so I need a matrix-like structure that contains another matrix in each position. Is there such a thing as a Mat of Mats? Or can you suggest another solution? I thought of using a vector, but I need more than an array of Mat. Thank you, I am new at this!

If I got it correctly, what you are looking for is a 2-dimensional Mat object, each element of which is another 2-dimensional Mat object. This is equivalent to creating a 4-dimensional Mat object. OpenCV has such functionality; it just involves using one of the less popular and less convenient Mat constructors:
const int num_of_dim = 4;
const int dimensions[num_of_dim] = { a, b, c, d }; // a, b, c, d - desired dimensions defined elsewhere
cv::Mat fourd_mat(num_of_dim, dimensions, CV_32F);
Check the Mat::Mat(int ndims, const int* sizes, int type) constructor in the OpenCV docs:
http://docs.opencv.org/2.4.10/modules/core/doc/basic_structures.html#Mat::Mat(int%20ndims,%20const%20int*%20sizes,%20int%20type)
and search for the phrases "multi-dimensional" and "n-dimensional" on that page to find more examples and docs.
EDIT:
As requested, I'm showing how to load an image into such a structure. It's not pretty, but I guess the easiest way is to copy the image pixel by pixel:
cv::Mat img = cv::imread("path/img.jpg", 1); // note: imread returns 8-bit data; see the type remark below
for (int i = 0; i < 179; ++i)
{
    for (int j = 0; j < img.rows; ++j)
    {
        for (int k = 0; k < img.cols; ++k)
        {
            const int coords1[4] = { i, 0, j, k };
            const int coords2[4] = { 0, i, j, k };
            fourd_mat.at<float>(coords1) = img.at<float>(j, k); // line 1
            fourd_mat.at<float>(coords2) = img.at<float>(j, k); // line 2
        }
    }
}
The line commented as line 1 is equivalent to your struttura[i][0] = img;, and line 2 is equivalent to struttura[0][i] = img;, once the two innermost for loops finish their work.
The code above assumes that your image type is CV_32F; if it's CV_8U, you have to replace float with uchar in the at() calls.
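Since the question also mentions trying a vector, here is a minimal sketch of that route (my own addition, not tested against the asker's data): store the frames in a plain std::vector<cv::Mat> and fill a 2-D distance matrix directly, assuming 8-bit single-channel frames so that countNonZero applies.
#include <opencv2/core/core.hpp>
#include <vector>

// Hypothetical helper: distance = number of pixels that differ between two frames.
cv::Mat1d buildDistanceMatrix(const std::vector<cv::Mat>& frames)
{
    const int n = static_cast<int>(frames.size());
    cv::Mat1d dist(n, n, 0.0);
    for (int i = 0; i < n; ++i)
    {
        for (int j = i + 1; j < n; ++j)
        {
            cv::Mat diff;
            cv::bitwise_xor(frames[i], frames[j], diff); // per-pixel XOR
            dist(i, j) = dist(j, i) = cv::countNonZero(diff);
        }
    }
    return dist;
}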

Related

Combine OpenCV's k-means clustering and Vigra in C++

I am having an issue combining two image processing libraries, namely OpenCV and Vigra. I want to use OpenCV's k-means clustering algorithm for grayscale image binarization. The framework of my image processing pipeline was built earlier and depends strongly on Vigra; that's why I have to combine both libraries.
So basically, I load the image using Vigra functionality, then convert the Vigra object to an OpenCV matrix, run the k-means clustering, re-convert the matrix object to a Vigra object, and finally save the image again using Vigra functionality. Here is a code example:
#include <vigra/impex.hxx>
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>

int main()
{
    std::string InputFilePath = "path/to/image/image_name.tif";
    // Vigra functionality to load an image from a path
    vigra::FImage InputImg;
    const char* cFile = InputFilePath.c_str();
    vigra::ImageImportInfo info(cFile);
    int b = info.width();
    int h = info.height();
    InputImg.resize(b, h);
    vigra::importImage(info, destImage(InputImg));
    vigra::FImage OutputImg(InputImg.width(), InputImg.height());
    // Setting up an OpenCV matrix as a one-channel, 32-bit float grayscale image
    cv::Mat InputMat(InputImg.width(), InputImg.height(), CV_32FC1);
    // my workaround to convert vigra::FImage to cv::Mat
    for (int i = 0; i < InputImg.width(); i++) {
        for (int j = 0; j < InputImg.height(); j++) {
            InputMat.at<float>(j, i) = InputImg(i, j);
        }
    }
    // OpenCV's k-means clustering
    const int singleLineSize = InputMat.rows * InputMat.cols;
    const int k = 2;
    cv::Mat data = InputMat.reshape(1, singleLineSize);
    std::vector<int> labels;
    data.convertTo(data, CV_32FC1);
    cv::Mat1f centers;
    cv::kmeans(data, k, labels, cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 10, 1.0), 2, cv::KMEANS_RANDOM_CENTERS, centers);
    for (int i = 0; i < singleLineSize; i++) {
        data.at<float>(i) = centers(labels[i]);
    }
    cv::Mat OutputMat = data.reshape(1, InputMat.rows);
    OutputMat.convertTo(OutputMat, CV_8UC1);
    // re-convert cv::Mat to vigra::FImage
    for (int i = 0; i < InputImg.width(); i++) {
        for (int j = 0; j < InputImg.height(); j++) {
            OutputImg(i, j) = OutputMat.at<float>(j, i);
        }
    }
    std::string SaveFileName = "path/to/save_location/save_img_name.tif";
    // Vigra functionality to save the image
    const char* cSaveFile = SaveFileName.c_str();
    vigra::ImageExportInfo exinfo(cSaveFile);
    vigra::exportImage(srcImageRange(OutputImg), exinfo.setPixelType("FLOAT")); // pixel type could also be "UINT8"
    // for the sake of comparability
    std::string SaveFileNameOCV = "path/to/save_location/save_mat_name.tif";
    cv::imwrite(SaveFileNameOCV, OutputMat);
    return 0;
}
k-means clustering works fine, and when I save the cv::Mat directly with
cv::imwrite()
everything is good. But when I re-convert the cv::Mat to a vigra::FImage object and save it, the image is corrupted. It looks as if the object (in the image) is mirrored or duplicated four times, although image width and height stay the same. I attached the images (InputImg, OutputImg and OutputMat).
Moreover, if I re-convert InputMat to OutputImg (after the k-means), and save this image, everything is fine (this image is also attached).
And finally, I do not understand why I have to switch the indices when converting from vigra::FImage to cv::Mat and vice versa:
InputMat.at<float>(j,i) = InputImg(i,j);
But if I don't, the resulting image is rotated.
Ok, so I am not quite sure whether anybody uses Vigra AND OpenCV; I guess OpenCV is definitely more common than Vigra. But anyway, if anybody could help, that would be great.
BTW: I am running everything in Code::Blocks on OpenSuSE 15.1. Any library was installed via the official OpenSuSE repositories.
OK, first of all, I did not find out why this happens. But what I do know now is that if I use the type-defined version Mat_ (e.g. Mat1f), I can handle everything properly and the saved results are as expected.
For conversion I wrote 2 methods:
cv::Mat1f convertImg2Mat(vigra::FImage &img){
    int b = img.width();
    int h = img.height();
    cv::Mat1f mat(h, b); // OpenCV: (rows, cols)
    for(int j = 0; j < h; j++){
        for(int i = 0; i < b; i++){
            mat(j, i) = img(i, j); // OpenCV (row, col) <- vigra (x, y)
        }
    }
    return mat;
}
and
vigra::FImage convertMat2Img(const cv::Mat &mat){
    int b = mat.cols; // image width
    int h = mat.rows; // image height
    cv::Mat1f tmp = mat.clone(); // Mat_ assignment converts the type if necessary
    vigra::FImage img(b, h); // vigra: (width, height)
    for(int j = 0; j < h; j++){
        for(int i = 0; i < b; i++){
            img(i, j) = tmp(j, i); // vigra (x, y) <- OpenCV (row, col)
        }
    }
    return img;
}
which both work fine.
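For illustration, a hedged usage sketch of the two helpers inside the pipeline from the question (variable names as above):
cv::Mat1f InputMat = convertImg2Mat(InputImg);       // vigra::FImage -> cv::Mat1f
// ... k-means clustering as in the question ...
vigra::FImage OutputImg = convertMat2Img(OutputMat); // cv::Mat -> vigra::FImage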
A stupid beginner's mistake was the indexing: Vigra follows Fortran order, which is
img(cols, rows)
while OpenCV uses the opposite convention, which is
mat(rows, cols).
So from my side this question is not yet properly answered, but I found a working solution anyway.
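For completeness, a plausible explanation of the original corruption (my reading of the snippet above, not confirmed by the poster): after OutputMat.convertTo(OutputMat, CV_8UC1), the matrix holds uchar data, yet the re-conversion loop still reads it with OutputMat.at<float>(j, i), so every read interprets four 8-bit pixels as one 32-bit float. That would produce exactly the kind of mirrored/quadrupled image described, while cv::imwrite() stays correct because it honours the actual matrix type. (Note also that cv::Mat's constructor takes (rows, cols), so cv::Mat InputMat(InputImg.width(), InputImg.height(), CV_32FC1) is transposed for non-square images.) A minimal sketch of the typed re-conversion:
OutputMat.convertTo(OutputMat, CV_8UC1);
for (int j = 0; j < OutputMat.rows; j++) {       // rows (y)
    for (int i = 0; i < OutputMat.cols; i++) {   // cols (x)
        OutputImg(i, j) = OutputMat.at<uchar>(j, i); // vigra (x, y) <- OpenCV (row, col)
    }
}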

Initialize a Multi-Channel OpenCV Mat

I have to initialize a multi-channel OpenCV matrix. I'm creating the multi-channel matrix like this
cv::Mat A(img.size(), CV_16SC(levels));
where levels, the number of channels in the matrix, can be anywhere from 20 to 300. I cannot initialize this matrix to anything other than zero.
If I initialize the matrix like this
cv::Mat A(img.size(), CV_16SC(levels), Scalar(1000));
I'm getting an error stating "Assertion failed (cn <= 4) in cv::scalarToRawData", which suggests that values can only be initialized for up to 4 channels.
Is there any other method available in OpenCV to initialize this multi-channel matrix or I have to manually initialize the values?
Edit:
I have done the following to initialize this multi-channel matrix. Hope this helps those who come across the same issue
for (int j = 0; j < img.rows; ++j)
{
    for (int i = 0; i < img.cols; ++i)
    {
        short *p = A.ptr<short>(j) + i * levels; // start of pixel (j, i)
        for (int l = 0; l < levels; ++l)
        {
            p[l] = 1000;
        }
    }
}
I was trying to use OpenCV's Vec_ and Mat_ template classes because of this Mat_ constructor. Unfortunately, I couldn't find a working solution; all attempts led to the same error you already came across. So my guess would be that the underlying OpenCV implementation simply does not support such initialization, even on custom derived types.
Certainly, you have your own work-around idea. Nevertheless, I wanted to provide the shortest (and hopefully most efficient) solution I could think of:
const int levels = 20;
const cv::Size size = cv::Size(123, 234);
// single-channel prototype, already initialized to the desired value
const cv::Mat proto = cv::Mat(size, CV_16SC1, cv::Scalar(1000));
std::vector<cv::Mat> channels;
for (int i = 0; i < levels; i++)
    channels.push_back(proto); // copies only the header; merge() copies the data
cv::Mat A;
cv::merge(channels, A);
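As a further sketch (my own, untested by the answerer): because a freshly allocated matrix is continuous, you can also allocate a wide single-channel matrix, initialize it through the ordinary Scalar path, and fold it into levels channels with reshape(), which changes only the header:
// Assumes img and levels as in the question.
cv::Mat A = cv::Mat(img.rows, img.cols * levels, CV_16SC1, cv::Scalar(1000))
                .reshape(levels); // same data, now 'levels' channels per pixel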

Outputting an image back to MATLAB MEX

I am trying to output an image from my MEX file back to MATLAB, but when I open it in MATLAB it is not correct.
The output image within the MEX file is correct.
I have tried switching the orientation of the mwSize, as well as swapping i and j in new_img.at<int>(j, i);
Mat image = imread(mxArrayToString(prhs[0]));
Mat new_img(H, W, image.type(), Scalar(0));
// some operations on new_img
imshow("gmm image", image);    // shows the original image
imshow("gmm1 image", new_img); // shows the output image
waitKey(200);                  // both images are the same size as desired
mwSize nd = 2;
mwSize dims[] = {W, H};
plhs[0] = mxCreateNumericArray(nd, dims, mxUINT8_CLASS, mxREAL);
if (plhs[0] == NULL) {
    mexErrMsgTxt("Could not create mxArray.\n");
}
char* outMat = (char*) mxGetData(plhs[0]);
for (int i = 0; i < H; i++)
{
    for (int j = 0; j < W; j++)
    {
        outMat[i + j*image.rows] = new_img.at<int>(j, i);
    }
}
This is in the MATLAB file:
gmmMask = GmmMex2(imgName,rect);
imshow(gmmMask); % not the same as the output image. somewhat resembles it, but not correct.
Because you have alluded to this being a colour image, you have three slices of the matrix to consider; your code only considers one slice. First off, you need to make sure that you declare the right size of the image. In MATLAB, the first dimension is always the number of rows and the second dimension is the number of columns, and now you have to add the number of channels on top of this. I'm assuming this is an RGB image, so there are three channels.
Therefore, change your dims to:
mwSize nd = 3;
mwSize dims[] = {H, W, nd};
Changing nd to 3 is important, as this will allow you to create a 3D matrix; you currently only have a 2D matrix. Next, make sure that you are accessing the image pixels at the right location in the cv::Mat object. The way you are accessing the image pixels in the nested pair of for loops assumes row-major order (iterating over the columns first, then the rows). As such, you need to interchange i and j, so that i accesses the rows and j accesses the columns. You will also need to access each channel of the colour image, so you'll need another for loop to compensate.
For the grayscale case, you have properly compensated for the column-major memory layout of the MATLAB MEX matrix: j accesses the columns, and you need to skip ahead by rows elements to access the next column. However, to accommodate a colour image, you must also skip ahead by image.rows*image.cols to reach the next layer (channel) of pixels.
Therefore your for loop should now be:
for (int k = 0; k < (int)nd; k++) {
    for (int i = 0; i < H; i++) {
        for (int j = 0; j < W; j++) {
            outMat[k*image.rows*image.cols + i + j*image.rows] = new_img.at<Vec3b>(i, j)[k];
        }
    }
}
Take note that the container of pixels is most likely 8-bit unsigned character, so you must read the pixels through Vec3b (i.e. uchar per channel) rather than int. Reading uchar data through an int template may also explain why your program is crashing.
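As a small defensive addition (my own suggestion, not part of the original answer), you can assert the assumed type and size before copying, so a mismatch fails loudly instead of corrupting the output:
// Guard: new_img must be the 8-bit, 3-channel image the copy loop assumes.
CV_Assert(new_img.type() == CV_8UC3 && new_img.rows == H && new_img.cols == W);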

Assign a 3x1 mat to a 3-channel mat

This question is a continuation of my question in this link. After I get the mat matrix, the 3x1 matrix is multiplied with a 3x3 mat matrix.
for (int i = 0; i < im.rows; i++)
{
    for (int j = 0; j < im.cols; j++)
    {
        for (int k = 0; k < nChannels; k++)
        {
            zay(k) = im.at<Vec3b>(i, j)[k]; // get pixel value and assign it to Vec4b zay
        }
        // convert to mat, so I can easily multiply it
        mat.at<double>(0, 0) = zay[0];
        mat.at<double>(1, 0) = zay[1];
        mat.at<double>(2, 0) = zay[2];
We get a 3x1 mat matrix and multiply it with the filter:
multiply = Filter * mat;
And I get a 3x1 mat matrix back. I want to assign the values into my new 3-channel mat matrix; how do I do that? I want to construct an image using this operation. I'm not using a convolution function because I think the result would be different. I'm working in C++, and I want to change a coloured image into another colour using matrix multiplication. I got the algorithm from this paper. In that paper, several matrices have to be multiplied to get the result.
OpenCV gives you a reshape function to change the number of channels/rows/columns implicitly:
http://docs.opencv.org/modules/core/doc/basic_structures.html#mat-reshape
This is very efficient since no data is copied, only the matrix header is changed.
try:
cv::Mat mat3Channels = mat.reshape(3,1);
I didn't test it, but it should work. It should give you a 1x1 matrix with a 3-channel element (Vec3d). If you want a Vec3b element instead, you have to convert it:
cv::Mat mat3ChannelsVec3b;
mat3Channels.convertTo(mat3ChannelsVec3b, CV_8UC3);
If you just want to write your mat back, it might be better to create a single Vec3b element instead:
cv::Vec3b element3Channels;
element3Channels[0] = multiply.at<double>(0,0);
element3Channels[1] = multiply.at<double>(1,0);
element3Channels[2] = multiply.at<double>(2,0);
But take care in all cases that Vec3b elements can't store values < 0 or > 255.
Edit: After reading your question again, you ask how to assign...
I guess you have another matrix:
cv::Mat outputMatrix = cv::Mat(im.rows, im.cols, CV_8UC3, cv::Scalar(0,0,0));
Now, to assign multiply to an element of outputMatrix, you can do:
cv::Vec3b element3Channels;
element3Channels[0] = multiply.at<double>(0,0);
element3Channels[1] = multiply.at<double>(1,0);
element3Channels[2] = multiply.at<double>(2,0);
outputMatrix.at<Vec3b>(i, j) = element3Channels;
If you need alpha channel too, you can adapt that easily.
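Putting the pieces together, here is a minimal sketch of the whole per-pixel transform (assuming Filter is a 3x3 CV_64F matrix and im is a CV_8UC3 image, as in the question):
cv::Mat outputMatrix(im.rows, im.cols, CV_8UC3, cv::Scalar(0, 0, 0));
cv::Mat mat(3, 1, CV_64F);
for (int i = 0; i < im.rows; i++) {
    for (int j = 0; j < im.cols; j++) {
        cv::Vec3b pixel = im.at<cv::Vec3b>(i, j);
        for (int k = 0; k < 3; k++)
            mat.at<double>(k, 0) = pixel[k];      // 3x1 column vector
        cv::Mat multiply = Filter * mat;          // 3x3 * 3x1 -> 3x1
        cv::Vec3b element3Channels;
        for (int k = 0; k < 3; k++)               // clamp to [0, 255] before narrowing
            element3Channels[k] = cv::saturate_cast<uchar>(multiply.at<double>(k, 0));
        outputMatrix.at<cv::Vec3b>(i, j) = element3Channels;
    }
}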

Understand Mat in OpenCV

I am trying to understand the following piece of code, taken from: Opencv Mat
and more precisely this part:
Mat labels(0, 1, CV_32FC1);
Mat trainingData(0, dictionarySize, CV_32FC1);
From what I understand, labels is equivalent to std::vector<float> and trainingData is equivalent to std::vector<std::vector<float>>, where each inner std::vector<float> has dimension dictionarySize. Is that correct?
I am asking this question because I want to convert bowDescriptor1, which is a Mat, to std::vector<float>.
Convert bowDescriptor1 to a vector:
std::vector<float> data;
for (int r = 0; r < bowDescriptor1.rows; r++)
{
    for (int c = 0; c < bowDescriptor1.cols; c++)
    {
        data.push_back(bowDescriptor1.at<float>(r, c));
    }
}
Without testing:
from the documentation you can see that bowDescriptor1 seems to be a matrix of size 1 x dictionarySize: http://docs.opencv.org/modules/features2d/doc/object_categorization.html#bowimgdescriptorextractor-descriptorsize
so you have to go through that matrix and save each element (a float) to your vector<float>.
try this code:
std::vector<float> currentBowDescriptor;
for (int col = 0; col < bowDescriptor1.cols; ++col)
{
    currentBowDescriptor.push_back(bowDescriptor1.at<float>(0, col));
}
That's it. push_back those currentBowDescriptors into another vector if you want.
If you want to save some computation time, you can even size currentBowDescriptor in advance, since you know the number of descriptor values (dictionarySize), and assign the elements instead of pushing back, as sketched below.
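A minimal sketch of that preallocation (assuming dictionarySize is known, as in the question):
std::vector<float> currentBowDescriptor(dictionarySize); // allocated once
for (int col = 0; col < dictionarySize; ++col)
{
    currentBowDescriptor[col] = bowDescriptor1.at<float>(0, col);
}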
Hope this helps.