Displaying a 3-Dimensional Matrix in OpenCV C++

How can I flatten a 3D Matrix and display it in 2d?
Are there simple ways to display it in 3d?
Edit:
So far I simply tile the images along the 3rd dimension, like this:
void Flatten3DArray(const cv::Mat& In, cv::Mat& Out2d)
{
    CV_Assert(In.dims == 3);
    CV_Assert(In.type() == CV_16U);   // the pointer cast below assumes 16-bit single-channel data

    int rows  = In.size[0];
    int cols  = In.size[1];
    int third = In.size[2];

    int rowTiles = (int)ceil(sqrt((double)third));
    int colTiles = (int)ceil(sqrt((double)third));
    Out2d.create(rowTiles * rows, colTiles * cols, In.type());
    Out2d = Scalar(0);

    int thirdDimIdx = 0;
    for (int i = 0; i < rowTiles; ++i)
    {
        for (int j = 0; j < colTiles; ++j, ++thirdDimIdx)
        {
            if (thirdDimIdx >= third)
            {
                break;
            }
            Mat roi(Out2d(cv::Rect(j * cols, i * rows, cols, rows)));
            // pointer to one rows x cols slice (assumes the slices are stored contiguously)
            uint16_t* ind = (uint16_t*)In.data + thirdDimIdx * rows * cols;
            cv::Mat subMatrix(2, In.size, In.type(), ind);
            subMatrix.copyTo(roi);
        }
    }
}
Is there a better way to do this?
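One possibly simpler variant is to build each tile as a plain 2D header over a slice and let OpenCV do the copying. This is only a sketch: it assumes the slice index is the first dimension (i.e. the cube is laid out as size[0] planes of size[1] x size[2]), that the data is continuous, and the helper name Flatten3DPlanes is just for illustration:
void Flatten3DPlanes(const cv::Mat& In, cv::Mat& Out2d)
{
    CV_Assert(In.dims == 3 && In.isContinuous());
    int planes = In.size[0], rows = In.size[1], cols = In.size[2];
    int tiles = (int)ceil(sqrt((double)planes));
    Out2d.create(tiles * rows, tiles * cols, In.type());
    Out2d = cv::Scalar(0);
    for (int p = 0; p < planes; ++p)
    {
        // 2D header over plane p; In.ptr(p) points at the start of that plane, so no manual pointer arithmetic is needed
        cv::Mat plane(rows, cols, In.type(), (void*)In.ptr(p));
        plane.copyTo(Out2d(cv::Rect((p % tiles) * cols, (p / tiles) * rows, cols, rows)));
    }
}
As far as I know, OpenCV has no built-in volumetric viewer, so for a quick look people usually either tile the slices like this or imshow() them one at a time.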

Related

Converting a vector<tensorflow::Tensor> to tensor of tensors

Let's say I have a vector of image tensors, with each image tensor having the dimensions [frames, height, width, num_channels], and I want to take that vector and convert it into one larger tensor of [num_tracks (the size of the vector), frames, height, width, num_channels]. What's the easiest way to do this with the tensorflow::Tensor API? This is for constructing an input tensor for a graph, not for use inside the graph execution itself.
Thanks!
You can create a new Tensor with the desired shape and fill it by iterating over all dimensions in nested for loops. To access an individual element, use operator() of the Eigen TensorMap that you get from tensor<DataType, DIMS>() on the Tensor:
tensorflow::Tensor concat(const std::vector<tensorflow::Tensor>& in) {
    int frames       = in[0].dim_size(0);
    int height       = in[0].dim_size(1);
    int width        = in[0].dim_size(2);
    int num_channels = in[0].dim_size(3);
    int num_tracks   = in.size();
    tensorflow::Tensor res(tensorflow::DT_FLOAT,
                           tensorflow::TensorShape{num_tracks, frames, height, width, num_channels});
    auto resMap = res.tensor<float, 5>();           // Eigen TensorMap over the destination
    for (int nt = 0; nt < num_tracks; ++nt) {
        auto inMap = in[nt].tensor<float, 4>();     // Eigen TensorMap which has operator()(Indices...)
        for (int f = 0; f < frames; ++f) {
            for (int r = 0; r < height; ++r) {
                for (int c = 0; c < width; ++c) {
                    for (int ch = 0; ch < num_channels; ++ch) {
                        resMap(nt, f, r, c, ch) = inMap(f, r, c, ch);
                    }
                }
            }
        }
    }
    return res;
}
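For what it's worth, a loop-free variant of the same copy (just a sketch: it assumes CPU float tensors and relies on Eigen's chip() being assignable through the TensorMap; the name concat_chip is only for illustration):
tensorflow::Tensor concat_chip(const std::vector<tensorflow::Tensor>& in) {
    int num_tracks = in.size();
    tensorflow::Tensor res(tensorflow::DT_FLOAT,
                           tensorflow::TensorShape{num_tracks, in[0].dim_size(0), in[0].dim_size(1),
                                                   in[0].dim_size(2), in[0].dim_size(3)});
    auto resMap = res.tensor<float, 5>();
    for (int nt = 0; nt < num_tracks; ++nt) {
        resMap.chip(nt, 0) = in[nt].tensor<float, 4>();  // copy the whole 4-D slice at once
    }
    return res;
}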

Subtract OpenCV matrix from 3-channel matrix

I have two matrices:
cv::Mat bgr(rows, cols, CV_16UC3);
cv::Mat ir(rows, cols, CV_16UC1 );
and I want to subtract ir from each channel of bgr element-wise. I haven't found an elegant solution yet.
EDIT
One possible solution might be:
// subtract IR from BGR
Vec3w tmp;   // Vec3w = Vec<ushort, 3>, matching CV_16UC3
for (int i = 0; i < ir.rows; i++) {
    for (int j = 0; j < ir.cols; j++) {
        tmp = bgr.at<Vec3w>(i, j);
        tmp[0] = tmp[0] - ir.at<ushort>(i, j);
        tmp[1] = tmp[1] - ir.at<ushort>(i, j);
        tmp[2] = tmp[2] - ir.at<ushort>(i, j);
        bgr.at<Vec3w>(i, j) = tmp;
    }
}
The question is whether there is a faster solution.
If we're talking about an elegant way, it could be like this:
Mat mat = Mat::ones(2,2,CV_8UC1);
Mat mat1 = Mat::ones(2,2,CV_8UC2)*3;
Mat mats[2];
split(mat1,mats);
mats[0]-=mat;
mats[1]-=mat;
merge(mats,2,mat1);
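Applied to the original CV_16UC3 / CV_16UC1 pair, the same idea also works without an explicit split (a sketch; it assumes bgr and ir are the Mats from the question and that saturating subtraction is what is wanted):
cv::Mat ir3;
cv::merge(std::vector<cv::Mat>{ir, ir, ir}, ir3); // replicate the single IR channel into 3 channels
cv::subtract(bgr, ir3, bgr);                      // element-wise, saturating subtraction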
You shouldn't use at() if you want your code to be more efficient. Use pointers instead, and check the Mats for continuity:
int rows = mat.rows;
int cols = mat.cols;
if (mat.isContinuous() && mat1.isContinuous())
{
    cols *= rows;
    rows = 1;
}
for (int j = 0; j < rows; j++) {
    auto channel2img = mat1.ptr<Vec2b>(j);
    auto channelimg  = mat.ptr<uchar>(j);
    for (int i = 0; i < cols; i++) {
        channel2img[i][0] -= channelimg[i];
        channel2img[i][1] -= channelimg[i];
    }
}
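The same pointer-based loop written for the original CV_16UC3 / CV_16UC1 pair would look roughly like this (a sketch; Vec3w is OpenCV's Vec<ushort, 3>, and the continuity trick above can be applied in the same way):
for (int j = 0; j < bgr.rows; j++) {
    auto bgrRow = bgr.ptr<cv::Vec3w>(j); // one row of the 16-bit, 3-channel image
    auto irRow  = ir.ptr<ushort>(j);     // matching row of the 16-bit, 1-channel image
    for (int i = 0; i < bgr.cols; i++) {
        bgrRow[i][0] -= irRow[i];
        bgrRow[i][1] -= irRow[i];
        bgrRow[i][2] -= irRow[i];
    }
}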

Issues in multiplying two RGB images in OpenCV using C++

I want to multiply one image by its transpose. My image size is n×m.
I do it as follows:
for (int k = 0; k < total_images; k++)
{
    Mat img_tp1 = cv::Mat(imgRows, imgCols, CV_32FC1);
    Mat img_tp2 = cv::Mat(imgRows, imgRows, CV_32FC1);
    subtract(img[k], MeanMat, img_tp1);
    img_tp2 = img_tp1 * img_tp2.t();
    std::ostringstream name;
    name << "sub" << k << ".jpg";
    cv::imwrite(name.str(), img_tp2);
}
and I get this error:
Unhandled exception at 0x000007FEFDB79E5D in Tracking.exe: Microsoft C++ exception: cv::Exception at memory location 0x00000000001E5EE0.
How can I do this multiplication? In fact, I want to compute the covariance matrix of the sequence of images, so I need this multiplication.
Thanks.
Then I decided to implement the multiplication for my RGB image, and I used this code:
for (int i = 0; i < imgRows; i++)
{
    for (int j = 0; j < imgRows; j++)
    {
        Vec3b pix1, pix2;
        int pix[3] = { 0, 0, 0 };
        for (int k = 0; k < imgCols; k++)
        {
            pix1 = img_tp1.at<Vec3b>(i, k);   // pixel from row i
            pix2 = img_tp1.at<Vec3b>(j, k);   // pixel from row j (column j of the transpose)
            pix[0] = (pix1[0] * pix2[0]) + pix[0];
            pix[1] = (pix1[1] * pix2[1]) + pix[1];
            pix[2] = (pix1[2] * pix2[2]) + pix[2];
        }
        CovMat0.at<Vec3b>(i, j) = Vec3b(saturate_cast<uchar>(pix[0]),
                                        saturate_cast<uchar>(pix[1]),
                                        saturate_cast<uchar>(pix[2]));
    }
}
but it takes a lot of time to process. Is there a better way to do this?
(I want to multiply one image by its transpose.)
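For what it's worth, one common way to get this product without hand-written loops is cv::mulTransposed, which computes M * M.t() directly. For a single-channel CV_32FC1 image, cv::mulTransposed(img_tp1, cov, false) alone does the job; for an RGB image, a sketch that processes each channel separately (an assumption about what multiplying a colour image by its transpose should mean here):
std::vector<cv::Mat> channels;
cv::split(img_tp1, channels);                 // B, G, R planes
std::vector<cv::Mat> covs(3);
for (int c = 0; c < 3; ++c) {
    cv::Mat chanF;
    channels[c].convertTo(chanF, CV_32F);     // OpenCV matrix products need float or double data
    cv::mulTransposed(chanF, covs[c], false); // covs[c] = chanF * chanF.t(), an imgRows x imgRows matrix
}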

Weird result from the Kuwahara filter

I am implementing a Kuwahara filter in C++, with OpenCV to help open and display images. The idea is quite straightforward, but somehow I got a weird result from it. Here's the code:
#include "opencv2/opencv.hpp"
#include <iostream>
#include <iomanip>
#include <cmath>
using namespace std;
using namespace cv;
//This class is essentially a struct of 4 Kuwahara regions surrounding a pixel, along with each one's mean, sum and variance.
class Regions {
    int* Area[4];
    int Size[4];
    unsigned long long Sum[4];
    double Var[4];
    int kernel;
public:
    Regions(int _kernel) : kernel(_kernel) {
        for (int i = 0; i < 4; i++) {
            Area[i] = new int[kernel * kernel];
            Size[i] = 0;
            Sum[i] = 0;
            Var[i] = 0.0;
        }
    }
    //Update data: increase the size of the area, update the sum
    void sendData(int area, int data) {
        Area[area][Size[area]] = data;
        Sum[area] += data;
        Size[area]++;
    }
    //Calculate the variance of each area
    double var(int area) {
        int __mean = Sum[area] / Size[area];
        double temp = 0;
        for (int i = 0; i < Size[area]; i++) {
            temp += (Area[area][i] - __mean) * (Area[area][i] - __mean);
        }
        if (Size[area] == 1) return 1.7e38; //If there is only one pixel inside the region, return a huge value
                                            //so that the region will never be picked by minVar() below
        return sqrt(temp / (Size[area] - 1));
    }
    //Call the above function to calc the variances of all 4 areas
    void calcVar() {
        for (int i = 0; i < 4; i++) {
            Var[i] = var(i);
        }
    }
    //Find out which region has the least variance
    int minVar() {
        calcVar();
        int i = 0;
        double __var = Var[0];
        if (__var > Var[1]) { __var = Var[1]; i = 1; }
        if (__var > Var[2]) { __var = Var[2]; i = 2; }
        if (__var > Var[3]) { __var = Var[3]; i = 3; }
        return i;
    }
    //Return the mean of that region
    uchar result() {
        int i = minVar();
        return saturate_cast<uchar>((double)(Sum[i] * 1.0 / Size[i]));
    }
};
class Kuwahara {
private:
    int wid, hei, pad, kernel;
    Mat image;
public:
    Regions getRegions(int x, int y) {
        Regions regions(kernel);
        uchar* data = image.data;
        //Update data for each region, pixels that are outside the image's boundary will be ignored.
        //Area 1 (upper left)
        for (int j = (y - pad >= 0) ? y - pad : 0; j >= 0 && j <= y && j < hei; j++)
            for (int i = (x - pad >= 0) ? x - pad : 0; i >= 0 && i <= x && i < wid; i++) {
                regions.sendData(1, data[(j * wid) + i]);
            }
        //Area 2 (upper right)
        for (int j = (y - pad >= 0) ? y - pad : 0; j <= y && j < hei; j++)
            for (int i = x; i <= x + pad && i < wid; i++) {
                regions.sendData(2, data[(j * wid) + i]);
            }
        //Area 3 (bottom left)
        for (int j = y; j <= y + pad && j < hei; j++)
            for (int i = (x - pad >= 0) ? x - pad : 0; i <= x && i < wid; i++) {
                regions.sendData(3, data[(j * wid) + i]);
            }
        //Area 0 (bottom right)
        for (int j = y; j <= y + pad && j < hei; j++)
            for (int i = x; i <= x + pad && i < wid; i++) {
                regions.sendData(0, data[(j * wid) + i]);
            }
        return regions;
    }
    //Constructor
    Kuwahara(const Mat& _image, int _kernel) : kernel(_kernel) {
        image = _image.clone();
        wid = image.cols; hei = image.rows;
        pad = kernel - 1;
    }
    //Create new image and replace its pixels by the results of Kuwahara filter on the original pixels
    Mat apply() {
        Mat temp;
        temp.create(image.size(), CV_8U);
        uchar* data = temp.data;
        for (int j = 0; j < hei; j++) {
            for (int i = 0; i < wid; i++)
                data[j * wid + i] = getRegions(i, j).result();
        }
        return temp;
    }
};
int main() {
    Mat img = imread("limes.tif", 1);
    Mat gray, dest;
    int kernel = 15;
    gray.create(img.size(), CV_8U);
    cvtColor(img, gray, CV_BGR2GRAY);
    Kuwahara filter(gray, kernel);
    dest = filter.apply();
    imshow("Result", dest);
    imwrite("result.jpg", dest);
    waitKey();
}
And here's the result:
As you can see, it's different from the correct result: the borders of the limes seem to be duplicated and shifted upward. If I apply a 15x15 filter, it gives me a complete mess like this:
I've spent my whole day debugging, but so far I've found nothing. I even did the calculation on small images by hand, compared it with the result, and saw no differences.
Could anyone help me find out what I did wrong?
Many, many thanks.
It turns out that there's nothing wrong with my code; the problem was the way I defined a kernel. My "kernel" is actually one of the four small Kuwahara sections, while the correct definition of a kernel is the whole window over which data is gathered for each pixel, i.e. the area that contains all four sections. So when I talked about a 7x7 "kernel", I actually applied a 15x15 one, and the horrible result came not from a 15x15 kernel, as I thought, but from a 31x31 one. At that size the Kuwahara filter simply doesn't make sense, and bizarre results are inevitable.
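In terms of the posted code, a small sketch of the corrected bookkeeping (assuming the constructor's kernel parameter keeps its current meaning of one sub-region's side, and following the loops in getRegions, which sample a (2*pad + 1) x (2*pad + 1) window):
int window = 7;                   // the size usually meant by "a 7x7 Kuwahara kernel"
int subRegion = (window + 1) / 2; // each of the 4 overlapping sub-regions is then 4x4
Kuwahara filter(gray, subRegion); // pass the sub-region size, not the full window size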

Why does assertion fail here

Why does the assertion fail here when I create a CvMat*? It does not happen with an image I load into a cv::Mat using a pointer.
struct RGB { unsigned char b, g, r; };
cv::Point p;
RGB *data;
CvMat* mat = cvCreateMat(300, 300, CV_32FC1);
for (row = 0; row < mat->rows; ++row)
{
    for (col = 0; col < mat->cols; ++col)
    {
        p.x = row, p.y = col;
        assert((mat->step / mat->cols) == sizeof(RGB));   // ERROR ----->>> this assertion fails
        data = (RGB*)&mat->data;
        data += p.y * mat->cols + p.x;
    }
}
For this code the assertion does not fail:
IplImage* img = cvLoadImage("blah.jpg");
int row = 0, col = 0;
cv::Mat in(img);
cv::Mat* mat = &in;
cv::Point p;
struct RGB { unsigned char b, g, r; };
RGB *data;
for (row = 0; row < mat->rows; ++row)
{
    for (col = 0; col < mat->cols; ++col)
    {
        p.x = row, p.y = col;
        assert((mat->step / mat->cols) == sizeof(RGB));
        data = (RGB*)&mat->data;
        data += p.y * mat->cols + p.x;
        printf("Row=%dxCol=%d b=%u g=%u r=%u\n", row, col, data->b, data->g, data->r);
        wait_for_frame(1);
    }
}
Because sizeof(RGB) != sizeof(float), which is what you filled the matrix with here:
CvMat* mat = cvCreateMat(300,300,CV_32FC1);
CV_32FC1 means 1 channel of 32-bit floating-point data. You probably want CV_8UC3. See the OpenCV reference.
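For illustration, a minimal sketch of the point: with a freshly created, continuous 3-channel 8-bit matrix the element size does match sizeof(RGB), so the assertion holds:
CvMat* mat = cvCreateMat(300, 300, CV_8UC3);    // 3 bytes per element
assert((mat->step / mat->cols) == sizeof(RGB)); // 900 / 300 == 3, so this passes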
You can skip the entire IplImage misery if you use
cv::Mat img = cv::imread("blah.jpg");
Also, it is better to use the row pointer (Mat::ptr) for going through all the pixels.
It knows about the row "jumps" (padding), so you don't have to worry!
From the refman:
If you need to process a whole row of a 2D array, the most efficient
way is to get the pointer to the row first, and then just use the
plain C operator []
Be aware that if you are loading bigger images which have "jumps" in their data, your code will not work.
In your situation
cv::Mat img = cv::imread("blah.jpg");
const cv::Mat& M = img;
for (int i = 0; i < M.rows; i++)
{
    const Vec3b* Mi = M.ptr<Vec3b>(i);
    for (int j = 0; j < M.cols; j++)
    {
        const Vec3b& Mij = Mi[j];
        std::cout << "Row=" << i << " Col=" << j << "\t";
        std::cout << "b=" << (int)Mij[0] << " g=" << (int)Mij[1] << " r=" << (int)Mij[2] << std::endl;  // cast so the bytes print as numbers, not characters
    }
}
is the fastest correct way. Otherwise you could use M.at<Vec3b>(i,j).
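For completeness, the at<> variant would look something like this (a sketch; slower than the row-pointer loop, but bounds-checked in debug builds):
cv::Vec3b px = img.at<cv::Vec3b>(i, j);
std::cout << "b=" << (int)px[0] << " g=" << (int)px[1]
          << " r=" << (int)px[2] << std::endl;  // cast so the bytes print as numbers, not characters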