I used this:
Mat map( img.size(), CV_8UC3, CV_RGB(0,0,0) );
but it does not seem to create a matrix with 3 dimensions!
Could anyone help me?
The CV_8UC3 flag means that you are creating an image with three channels, where each pixel in each channel is stored as an unsigned char. You should be able to confirm the multiple channels (or 3rd dimension) by looking at the output of
map.channels();
which returns the number of channels, i.e. how large the matrix is along that third dimension. If you require more channels, then use something like:
map.create(100,60,CV_8UC(15));
where 15 is the number of channels.
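Going back to the original three-channel case, a minimal sketch of checking this yourself (the file name is just a placeholder); note that channels() reports 3 while Mat::dims stays 2, because channels are not counted as an extra dimension:
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::Mat img = cv::imread("input.png");                // placeholder input image
    cv::Mat map(img.size(), CV_8UC3, cv::Scalar(0,0,0));  // 2-D matrix, 3 channels per pixel
    std::cout << map.channels() << std::endl;             // prints 3
    std::cout << map.dims << std::endl;                   // prints 2
    return 0;
}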
The proper way to do that is to use the appropriate constructor:
Mat::Mat(int ndims, const int* sizes, int type)
For example, if you want to create a 100x60x15 matrix:
int sz[] = {100, 60, 15};
Mat map(3, sz, CV_8U);
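Elements of that truly 3-dimensional Mat can then be read and written with the three-index overload of at; a short usage sketch for the map created above:
map.at<uchar>(10, 20, 5) = 255;                   // write the element at indices (10, 20, 5)
uchar v = map.at<uchar>(10, 20, 5);               // read it back
CV_Assert(map.dims == 3 && map.channels() == 1);  // truly 3-dimensional, single channel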
I faced a problem with the C++ blobFromImage function in OpenCV. I trained a CNN in Keras which takes a 4-d blob as input (common practice, nothing special). The problem is that my blob order is NHWC (where the channel size is always 6), but blobFromImage returns only NCHW. There is no trouble reshaping a numpy blob in Python, but I haven't found any solution for C++.
The input data is two 3-channel images stitched together (along the channel axis) into one blob. For example, if the image resolution is 1280x720, then the blob shape will be (1, 720, 1280, 6).
Is there any way to create an NHWC blob in C++, or to reshape the blobFromImage result to NHWC?
It seems you already received an answer to your question in the OpenCV forum:
Assuming you have two (float) images A and B of equal size, 3 channels each, you could first merge them like this:
vector<Mat> v = {A,B};
Mat C;
merge(v, C);
now C has 6 interleaved channels, and we need to add the "batch" dimension:
int sz[] = {1, A.rows, A.cols, 6};
Mat blob(4, sz, CV_32F, C.data);
But careful! Your blob does NOT hold a deep copy of the data, so C has to be kept alive during the processing!
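If you do need the blob to own its data, one option is to clone it right after wrapping (a minimal sketch):
Mat blob = Mat(4, sz, CV_32F, C.data).clone(); // deep copy, so C may be released afterwards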
EDIT: due to public demand, here's the reverse operation ;)
// extract 2d, 6chan Mat
Mat c2(blob.size[1], blob.size[2], CV_32FC(6), blob.ptr(0)); // rows and cols are size[1] and size[2] of the NHWC blob
// split into channels
vector<Mat> v2;
split(c2,v2);
// merge back into 2 images
Mat a,b;
merge(vector<Mat>(v2.begin(), v2.begin()+3), a);
merge(vector<Mat>(v2.begin()+3, v2.end()), b);
I am taking in RGB data from my Kinect and trying to put it into an OpenCV matrix. The data is held in "src":
Mat matrixImageRGBA(w, h, CV_8UC4);
memcpy(matrixImageRGBA.data, src, sizeof(byte) * w * h * 4);
However, when I use "imshow" to see the image, it is tiled four times horizontally. I am using the following commands:
imshow("Window", matrixImageRGBA);
waitKey(500);
Does anyone have any idea of what the problem may be here? It's driving me nuts.
Thanks!
You have w and h backwards. According to the docs, the constructor takes the height (rows) as the first argument:
Mat (int rows, int cols, int type)
Also, I would recommend using this constructor:
Mat(int rows, int cols, int type, void *data, size_t step=AUTO_STEP)
instead of copying into the data field (since you have no padding at the end of each row, use the default AUTO_STEP for step).
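Putting both fixes together, a minimal sketch (assuming src points to a tightly packed w*h*4 byte RGBA buffer):
Mat matrixImageRGBA(h, w, CV_8UC4, src); // rows = h first, cols = w second; wraps src without copying
imshow("Window", matrixImageRGBA);
waitKey(500);
Keep in mind that this constructor does not copy either: src must stay valid while matrixImageRGBA is in use, or call clone() if you need your own copy.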
I am using OpenCV for the first time, with OpenCV 3 and Xcode. I want to create a 16-bit grayscale image, but the data I have is defined such that 4000 is the pixel value for white and 0 for black. I have the values for these pixels in an array of type int. How can I create a Mat and assign the values in the array to it?
short data[] = { 0,0,4000,4000,0,0,4000, ...};
Mat gray16 = Mat(h, w, CV_16S, data);
Again, the types must match: for 16-bit you need CV_16S and a short array, for 8-bit CV_8U and a uchar array, for float CV_32F and a float array, and so on.
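Since the question says the values are currently in an int array, one option is to wrap that array as CV_32S and convert it (a sketch, assuming a hypothetical int array idata holding w*h values):
int idata[] = { 0, 0, 4000, 4000, 0, 0, 4000 /* ... */ };
Mat gray32(h, w, CV_32S, idata);   // wraps the int buffer, no copy
Mat gray16;
gray32.convertTo(gray16, CV_16U);  // saturate-casts each value into a 16-bit unsigned Mat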
You can create your Mat with
cv::Mat m(rows, cols, CV_16UC1);
but to my knowledge there is no way to define a custom value for "white", so you'll have to multiply m by std::numeric_limits<unsigned short>::max() / 4000. However, this is only necessary when displaying the image.
A lookup table could do the same (potentially slower), see cv::LUT. However, it apparently only supports 8-bit images.
edit: OK, I missed the part about assigning existing array values; see berak's answer. I hope the answer is still useful.
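For display, the scaling mentioned above could look roughly like this (a sketch, assuming a CV_16UC1 Mat named m in which 4000 means white):
cv::Mat display;
m.convertTo(display, CV_16U, 65535.0 / 4000.0); // stretch 0..4000 to the full 16-bit range
cv::imshow("gray16", display);                  // imshow scales 16-bit values down to 0..255 for display
cv::waitKey(0);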
There are a number of questions on here about calcHist in OpenCV, but I couldn't find an answer to mine, and I have read the documentation several times, so here's hoping someone can spot my problem with the following code:
//the setup is that I've got a 1x993 cv::Mat called bestLabels that contains cluster
//labels for 993 features each belonging to 1 of 40 different clusters. I'm just trying
//to histogram these into hist.
cv::Mat hist;
int nbins = 40;
int hsize[] = { nbins };
float range[] = { 0, 39 };
const float *ranges[] = { range };
int chnls[] = { 0 };
cv::calcHist(&bestLabels, 1, chnls, cv::Mat(), hist, 1, hsize, ranges);
This compiles, but when I run it, I get an error:
OpenCV Error: Unsupported format or combination of formats () in cv::calcHist
It was hard just to get this to compile in the first place, and now I'm really not sure what I'm missing. Help please!
Alternatively, I had tried to iterate through the elements of bestLabels and just increment the values in an array that would store my histogram, but using bestLabels.at<int>(0, i) wasn't working either. There's got to be an easier way to pull individual elements out of a cv::Mat object.
Thanks for the help.
What is the type of bestLabels?
I can reproduce your error with CV_32S, but it works fine with CV_8U or CV_32F.
Maybe the easiest way is to convert it to uchar:
bestLabels.convertTo( bestLabels, CV_8U ); // CV_32F for float, might be overkill here
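With the type fixed, the call itself can stay almost the same; the one other detail worth double-checking is the range, since calcHist treats the upper boundary as exclusive (a sketch):
cv::Mat hist;
int hsize[] = { 40 };
float range[] = { 0, 40 };              // upper bound is exclusive, so 40 covers labels 0..39
const float *ranges[] = { range };
int chnls[] = { 0 };
cv::calcHist(&bestLabels, 1, chnls, cv::Mat(), hist, 1, hsize, ranges);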
Besides, a "manual" histogram calculation is not so hard:
Mat bestLabels(1, 993, CV_32S); // assuming 'int' labels here again
Mat hist(1, 40, CV_32S, Scalar(0)); // int bins, so the counts cannot overflow
for (int i = 0; i < bestLabels.cols; i++)
    hist.at<int>(0, bestLabels.at<int>(0, i))++;
This may be rudimentary, but is it possible to know how many channels a cv::Mat has? For example, when we load an RGB image, I know there are 3 channels. I do the following operations, just to get the Laplacian of the image, which is straight from the OpenCV documentation.
#include <opencv2/opencv.hpp>
using namespace cv;

int main(int argc, char **argv)
{
    // ddepth, kernel_size, scale and delta as in the OpenCV Laplacian tutorial
    int kernel_size = 3, scale = 1, delta = 0, ddepth = CV_16S;
    Mat src = imread(argv[1], 1), src_gray, dst_gray, abs_dst_gray;
    cvtColor(src, src_gray, COLOR_BGR2GRAY);
    GaussianBlur(src, src, Size(3,3), 0, 0, BORDER_DEFAULT);
    Laplacian(src_gray, dst_gray, ddepth, kernel_size, scale, delta, BORDER_DEFAULT);
    convertScaleAbs(dst_gray, abs_dst_gray);
    return 0;
}
After converting to grayscale, we should have only one channel. But how can I determine the number of channels of abs_dst_gray in the program? Is there a function to do this, or does it have to be written by the programmer? Please help me here.
Thanks in advance.
Call Mat::channels():
cv::Mat img(1, 1, CV_8U, cv::Scalar(0));
std::cout<<img.channels();
Output:
1
which is the number of channels.
Also, try:
std::cout<<img.type();
Output:
0
which corresponds to CV_8U (see types_c.h around line 542); study that file for each #define.
You might use:
Mat::channels()
http://docs.opencv.org/modules/core/doc/basic_structures.html#mat-channels
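For the code in the question, that would simply be something like (a sketch):
std::cout << abs_dst_gray.channels() << std::endl; // prints 1 for the single-channel result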