imshow() producing weird results with OpenCV 3.2 in C++

I am taking in RGB data from my Kinect and trying to put it into an OpenCV matrix. The data is held in "src":
Mat matrixImageRGBA(w, h, CV_8UC4);
memcpy(matrixImageRGBA.data, src, sizeof(byte) * w * h * 4);
However, when I use "imshow" to see the image, it is tiled four times horizontally. I am using the following command:
imshow("Window", matrixImageRGBA);
waitKey(500);
Does anyone have any idea of what the problem may be here? It's driving me nuts.
Thanks!

You have w and h backwards. According to the docs the constructor takes the height as the first argument:
Mat (int rows, int cols, int type)
Also, I would recommend using this constructor:
Mat(int rows, int cols, int type, void *data, size_t step=AUTO_STEP)
instead of copying to the data field (since you are not using any padding at the end of each row, use the default AUTO_STEP for step).
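For example, something along these lines should display correctly (a minimal sketch; w, h, and src are the variables from the question):
Mat matrixImageRGBA(h, w, CV_8UC4, src); // rows = h, cols = w; wraps src without copying
imshow("Window", matrixImageRGBA);
waitKey(500);
Note that imshow interprets 4-channel data as BGRA, so if the red and blue channels look swapped, an additional cvtColor(matrixImageRGBA, matrixImageRGBA, COLOR_RGBA2BGRA) may be needed.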

Related

OpenCV how to encode raw image information for imencode?

I am in C++.
Assume some mysterious function getData() returns all but only the pixel information of an image.
i.e. a char* that points only to the pixel information, with no metadata (no width, length, height, nor channels of any form)
Thus we have:
unsigned char *raw_data = getData();
Then we have another function that returns a structure containing the metadata.
eg:
struct Metadata {
int width;
int height;
int channels;
//other useful fields
};
I now need to prepend the object metadata in the correct way to create a valid image buffer.
So instead of [pixel1, pixel2, pixel3 ...]
I would have, for example [width, height, channels, pixel1, pixel2, pixel3...]
What is the correct order to prepend the metadata and are width, height and channels enough?
You can use the Mat constructor to create an image from the data and metadata:
Mat::Mat(int rows, int cols, int type, void* data, size_t step=AUTO_STEP); // see the documentation
cv::Mat image = cv::Mat(height, width, CV_8UC3, raw_data);
The type argument specifies the data format and the number of channels. For example, typical RGB image data is unsigned char with 3 channels, so its type is CV_8UC3.
Available OpenCV Mat types are defined in cvdef.h.
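A minimal sketch putting the pieces together (getData and the Metadata struct come from the question; getMetadata and the choice of PNG for cv::imencode are assumptions):
Metadata meta = getMetadata();                  // hypothetical accessor for the metadata struct
unsigned char *raw_data = getData();
cv::Mat image(meta.height, meta.width, CV_8UC(meta.channels), raw_data); // wraps the pixels, no copy
std::vector<uchar> encoded;
cv::imencode(".png", image, encoded);           // encoded is now a self-describing image buffer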

grayscale image creation 16 bits

I am using OpenCV for the first time. I am using OpenCV 3 and Xcode to code it. I want to create a 16 bit grayscale image, but the data I have is defined such that 4000 is the pixel value for white and 0 for black. I have the information for these pixels in an array of type int. How can I create a Mat and assign the values in the array to the Mat?
short data[] = { 0,0,4000,4000,0,0,4000, ...};
Mat gray16 = Mat(h, w, CV_16S, data);
Again, the types must match: for 16-bit you need CV_16S and a short array, for 8-bit CV_8U and a uchar array, for float CV_32F and a float array, and so on.
You can create your Mat with
cv::Mat m(rows, cols, CV_16UC1);
but to my knowledge there is no way to define a custom value for "white"; you'll have to multiply m by std::numeric_limits<unsigned short>::max() / 4000. However, this is only necessary when displaying the image.
A lookup-table could do the same (potentially slower), see cv::LUT. However, it apparently only supports 8-bit images.
edit: OK, I missed the part about assigning existing array values; see berak's answer. I hope the answer is still useful.
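Putting both answers together, a minimal sketch for getting such data on screen (h, w, and the 0..4000 range come from the question; the scale factor for display is an assumption based on that range):
short data[] = { 0, 0, 4000, 4000, 0, 0, 4000 /* ... */ };
cv::Mat gray16(h, w, CV_16S, data);               // wraps the existing array, no copy
cv::Mat display;
gray16.convertTo(display, CV_8U, 255.0 / 4000.0); // map 0..4000 to 0..255 just for viewing
cv::imshow("gray16", display);
cv::waitKey(0);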

Way to send OpenCV Mat to MATLAB workspace without copying the data?

When I write MEX files which use OpenCV functions it's easy to pass the data from MATLAB to the MEX environment without copying the data. Is there a way to return the data to MATLAB in the same manner? (That is, without copying the data and without causing MATLAB to crash...)
A simple example:
#include "mex.h"
#include <opencv2/core.hpp>
using namespace cv;
void mexFunction(int nlhs, mxArray *plhs[], int nrhs,const mxArray *prhs[])
{
int Rows = (int)mxGetM(prhs[0]);
int Cols = (int)mxGetN(prhs[0]);
Mat InMat(Cols,Rows,CV_64FC1,mxGetPr(prhs[0]));//Matlab passes data column-wise
// no need to copy data - SUPER!
InMat=InMat.t();//transpose so the matrix is identical to MATLAB's one
//Make some openCV operations on InMat to get OutMat...
//Way of preventing the following code??
plhs[0]=mxCreateDoubleMatrix(OutMat.rows,OutMat.cols,mxREAL);
double *pOut=mxGetPr(plhs[0]);
for (int i(0);i<OutMat.rows;i++)
for (int j(0);j<OutMat.cols;j++)
pOut[i+j*OutMat.rows]=OutMat.at<double>(i,j);
}
Usually I do the input and output just like that, attaching a pointer to deal with the input and looping over elements on output. But, I think the output can be done in a similar manner to the input, although not without a copy of some sort. The only way to avoid a copy is to create the output Mat with a pointer from a mxArray and operate on it inplace. That's not always possible, of course. But you can be graceful about how you copy the data out.
You can exploit the same trick of attaching a buffer to a cv::Mat that you use (me too!) to bring data in from MATLAB, but also to get it out. The twist on the trick for exporting the data is to use copyTo just right so that it will use the existing buffer, the one from the mxArray in plhs[i].
Starting with an input like this:
double *img = mxGetPr(prhs[0]);
cv::Mat src = cv::Mat(ncols, nrows, CV_64FC1, img).t(); // nrows <-> ncols, transpose
You perform some operation, like resizing:
cv::Mat dst;
cv::resize(src, dst, cv::Size(0, 0), 0.5, 0.5, cv::INTER_CUBIC);
To get dst into MATLAB: first transpose the output (for the sake of reordering the data into column-major order), then create an output cv::Mat with the pointer from the plhs[0] mxArray, and finally call copyTo to fill out the wrapper Mat with the transposed data:
plhs[0] = mxCreateDoubleMatrix(dst.rows, dst.cols, mxREAL); // MATLAB output with dst's original dims
double *pOut = mxGetPr(plhs[0]);
dst = dst.t(); // transpose first!
cv::Mat outMatWrap(dst.rows, dst.cols, dst.type(), pOut); // dst.type() or CV_*
dst.copyTo(outMatWrap); // no realloc if dims and type match
It is very important to get the dimensions and data type exactly the same for the following call to copyTo to keep from reallocating outMatWrap.
Note that when outMatWrap is destroyed, the data buffer will not be deallocated because the reference count is 0 (Mat::release() does not deallocate .data).
Possible template (by no means bullet-proof!)
template <typename T>
void cvToMATLAB(cv::Mat mat, T *p)
{
CV_Assert(mat.elemSize1() == sizeof(T));
mat = mat.t();
cv::Mat outMatWrap(mat.rows, mat.cols, mat.type(), p);
mat.copyTo(outMatWrap);
}
This should be good for channels>1, as long as the size of the MATLAB array is in pixel order too (e.g. 3xMxN). Then use permute as needed.
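Hypothetical usage inside mexFunction, assuming OutMat is the CV_64FC1 result from the question's code:
plhs[0] = mxCreateDoubleMatrix(OutMat.rows, OutMat.cols, mxREAL);
cvToMATLAB(OutMat, mxGetPr(plhs[0])); // transposes and copies straight into the mxArray buffer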
Note about copyTo
copyTo will reallocate the destination buffer if the dimensions or the data type do not match:
opencv2\core\mat.hpp line 347 (version 2.4.10), with my comments:
inline void Mat::create(int _rows, int _cols, int _type)
{
_type &= TYPE_MASK;
if( dims <= 2 && rows == _rows && cols == _cols && type() == _type && data )
return; // HIT THIS TO USE EXISTING BUFFER!
int sz[] = {_rows, _cols};
create(2, sz, _type); // realloc!
}
So, just make sure you get the size and data type correct, and the data will end up in the mxArray buffer instead of somewhere else. If you do it right, copyTo will use the buffer you specified, calling memcpy on each row.
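As a quick sanity check (hypothetical, reusing outMatWrap and pOut from the snippet above), the wrapper should still point at the mxArray buffer after the copy:
CV_Assert(outMatWrap.data == reinterpret_cast<uchar*>(pOut)); // copyTo reused the buffer, no realloc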

How to create a Mat object with 3 dimensions?

I used it:
Mat map( img.size(), CV_8UC3, CV_RGB(0,0,0) );
but it does not seem to create a matrix with 3 dimensions!
Could anyone help me?
The CV_8UC3 flag means that you are creating an image that has three channels, where each pixel in each channel is represented as an unsigned char. You should be able to confirm the multiple channels (or 3rd dimension) by seeing the output of
map.channels();
which will return how large the matrix is in the third dimension. If you require more channels, then use something like:
map.create(100,60,CV_8UC(15));
where 15 is the number of channels.
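For instance, a quick check of the channel count (a minimal sketch; the dimensions follow the example above):
Mat map(100, 60, CV_8UC3, Scalar::all(0));
std::cout << map.channels() << std::endl; // prints 3
map.create(100, 60, CV_8UC(15));
std::cout << map.channels() << std::endl; // prints 15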
The right way to do that is to use the appropriate constructor:
Mat::Mat(int ndims, const int* sizes, int type)
For example, if you want to create a 100x60x15 matrix:
int sz[] = {100, 60, 15};
Mat map(3, sz, CV_8U);
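Such a matrix really has three dimensions rather than three channels, so it is indexed with three coordinates. A minimal sketch reusing the map from above:
map.at<uchar>(5, 10, 2) = 255;  // write one element
std::cout << map.dims << " " << (int)map.at<uchar>(5, 10, 2) << std::endl; // prints "3 255"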

how to convert from a two dimensional array to a graylevel image in opencv?

I am using OpenCV in my C++ image processing project.
I have this two dimensional array I[800][600] filled with values between 0 and 255, and I want to put this array in a graylevel "IplImage" so I can view it and process it using OpenCV functions.
Any help will be appreciated.
Thanks in advance.
It's easy with the OpenCV C++ interface; all you need to do is initialize a matrix, see the line below:
cv::Mat img = cv::Mat(800, 600, CV_8UC1, I); // I[800][600]
Now you can do whatever you want; OpenCV treats img as an 8-bit grayscale image.
CvSize image_size;
image_size.height = 800;
image_size.width = 600;
int channels = 1;
IplImage *image = cvCreateImageHeader(image_size, IPL_DEPTH_8U, channels);
cvSetData(image, I, image->widthStep);
This is untested, but the most important thing likely to require fixing is the second parameter to cvSetData(). This needs to be a pointer to unsigned char data, and if you're just using a 2D array that isn't part of a Mat, then you'll have to do something a bit different (possibly a loop, although you should avoid loops in OpenCV as much as possible).
see this post for a highly relevant question
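If I is actually declared as int I[800][600] (the question does not say), the data has to be converted to 8-bit before either approach will display it correctly. A minimal sketch, assuming 32-bit int:
cv::Mat intView(800, 600, CV_32SC1, I);   // wrap the int array without copying
cv::Mat gray8;
intView.convertTo(gray8, CV_8UC1);        // values are already 0..255, so no scaling needed
cv::imshow("gray", gray8);
cv::waitKey(0);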