Can I determine the number of channels in cv::Mat? OpenCV - C++

This may be rudimentary, but is it possible to know how many channels a cv::Mat has? For example, when we load an RGB image, I know there are 3 channels. I do the following operations, just to get the Laplacian of the image, which is straight from the OpenCV documentation.
#include <opencv2/opencv.hpp>
using namespace cv;

int main(int argc, char **argv)
{
    // parameters for the Laplacian call below (missing from the original snippet)
    int ddepth = CV_16S, kernel_size = 3;
    double scale = 1, delta = 0;

    Mat src = imread(argv[1], 1), src_gray, dst_gray, abs_dst_gray;
    cvtColor(src, src_gray, COLOR_BGR2GRAY);
    GaussianBlur(src, src, Size(3,3), 0, 0, BORDER_DEFAULT);
    Laplacian(src_gray, dst_gray, ddepth, kernel_size, scale, delta, BORDER_DEFAULT);
    convertScaleAbs(dst_gray, abs_dst_gray);
    return 0;
}
After converting to grayscale, we should have only one channel. But how can I determine the number of channels of abs_dst_gray in the program? Is there any function to do this, or does it have to be done through some other method written by the programmer? Please help me here.
Thanks in advance.

Call Mat::channels():
cv::Mat img(1, 1, CV_8U, cv::Scalar(0));
std::cout<<img.channels();
Output:
1
which is the number of channels.
Also, try:
std::cout<<img.type();
Output:
0
which corresponds to CV_8U (see types_c.h around line 542). Study types_c.h for the full list of type defines.
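For a quick sanity check, here is a minimal standalone sketch (using a synthetic Mat rather than a loaded image) comparing a 3-channel and a 1-channel matrix; the expected output is shown in the comments:
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::Mat color(10, 10, CV_8UC3, cv::Scalar(0, 0, 0));
    cv::Mat gray;
    cv::cvtColor(color, gray, cv::COLOR_BGR2GRAY);

    std::cout << color.channels() << " " << color.type() << std::endl; // 3 16 (CV_8UC3)
    std::cout << gray.channels()  << " " << gray.type()  << std::endl; // 1 0  (CV_8U)
    return 0;
}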

You might use:
Mat::channels()
http://docs.opencv.org/modules/core/doc/basic_structures.html#mat-channels

Related

Using custom kernel in opencv 2DFilter - causing crash ... convolution how?

Thought I'd try my hand at a little (auto)correlation/convolution today in OpenCV and make my own 2D filter kernel.
Following OpenCV's 2D Filter Tutorial, I discovered that making your own kernels for OpenCV's filter2D might not be that hard. However, I'm getting unhandled exceptions when I try to use one.
Code with comments relating to the issue here:
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <stdlib.h>
#include <stdio.h>
#include <iostream>
using namespace cv;
using namespace std;
int main(int argc, char** argv) {
    //Loading the source image
    Mat src;
    src = imread( "1.png" );

    //Output image of the same size and the same number of channels as src.
    Mat dst;
    //Mat dst = src.clone(); //didn't help...

    //desired depth of the destination image
    //negative so dst will be the same as src.depth()
    int ddepth = -1;

    //the convolution kernel, a single-channel floating point matrix:
    Mat kernel = imread( "kernel.png" );
    kernel.convertTo(kernel, CV_32F); //<<not working
    //normalize(kernel, kernel, 1.0, 0.0, 4, -1, noArray()); //doesn't help
    //cout << kernel.size() << endl; // ... gives 11, 11
    //however, the example from tutorial that does work:
    //kernel = Mat::ones( 11, 11, CV_32F )/ (float)(11*11);

    //default value (-1,-1) here means that the anchor is at the kernel center.
    Point anchor = Point(-1,-1);
    //value added to the filtered pixels before storing them in dst.
    double delta = 0;

    //alright, let's do this...
    filter2D(src, dst, ddepth, kernel, anchor, delta, BORDER_DEFAULT );
    imshow("Source", src); //<< unhandled exception here
    imshow("Kernel", kernel);
    imshow("Destination", dst);
    waitKey(1000000);
    return 0;
}
As you can see, using the tutorial's kernel works fine, but my image crashes the program. I've tried changing the bit depth, normalizing, checking the size, and commenting out blocks to see where it fails, but I haven't cracked it yet.
The image is '1.png':
And the kernel I want is 'kernel.png':
I'm trying to see if I can get a hotspot in dst at the point where the eye catchlight is (the kernel I've chosen is the catchlight). I know there are other ways to do this, but I'm interested to see how effective convolving the catchlight over itself is (autocorrelation, I think that's called?).
Direct questions:
why the crash?
is the crash indicating a fundamental conceptual mistake?
or (hopefully) is it just some (silly) fault in the code?
Thanks in advance for any help :)
Posting the assertion error would help others answer you, rather than leaving them to guess why it crashes. Anyway, I have posted below the likely error and the solution for filter2D convolution.
Error 1:
OpenCV Error: Assertion failed (src.channels() == 1 && func != 0) in cv::countNonZero, file C:\builds\2_4_PackSlave-win32-vc12-shared\opencv\modules\core\src\stat.cpp, line 549
Solution: Your input image and the kernel should both be grayscale. You can use the flag 0 in imread (e.g. cv::imread("kernel.png", 0) reads the image as grayscale). If you want to apply different kernels to different channels, split the image into separate color planes using split() and process them individually.
I don't see anything other than the above error that could crash. The kernel size should be odd, and your kernel image is 11x11, which is fine. If it still crashes, kindly provide more information so we can help you out.
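For reference, a minimal sketch of that fix (the file names come from the question; the normalization step is my own assumption and is untested against the exact images):
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
using namespace cv;

int main()
{
    Mat src    = imread("1.png", 0);       // 0 -> load as single-channel grayscale
    Mat kernel = imread("kernel.png", 0);  // kernel must be single-channel too
    if (src.empty() || kernel.empty()) return -1;

    kernel.convertTo(kernel, CV_32F);
    kernel = kernel / sum(kernel)[0];      // normalize so the response stays in range

    Mat dst;
    filter2D(src, dst, -1, kernel, Point(-1,-1), 0, BORDER_DEFAULT);

    imshow("Source", src);
    imshow("Destination", dst);
    waitKey(0);
    return 0;
}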

c++, opencv: Is it safe to use the same Mat for both source and destination images in filtering operation?

Filtering operations involve convolutions, and the filtered value at position (x,y) also depends on the intensities of pixels (x-a, y-b) with a,b > 0.
So using the same image directly as the destination could lead to unexpected behavior, because during the calculation I would be reading already-filtered data instead of the original values.
Question
Does OpenCV manage this issue internally in functions like cv::GaussianBlur(), cv::blur, etc.? Is it safe to pass the same Mat as both the src and dst parameters?
thanks
Yes, there will not be any problem if you do so. I have done this several times; OpenCV will automatically take care of it.
I tested the following code and it works perfectly:
int main(int argc, char* argv[])
{
    Mat src;
    src = imread("myImage.jpeg", 1);
    imshow("src", src); //Original src

    cv::blur( src, src, Size(25,25), Point(-1,-1), BORDER_DEFAULT );
    imshow("dst", src); //src after blurring

    waitKey(0);
}

openCV warning component data type mismatch

I am using OpenCV 2.4.4 on a CentOS machine. My code currently loads an image with the warning: component data type mismatch.
Here is the code:
#include <cv.h>
#include <highgui.h>
#include "imglib.h"
using namespace cv;

int main( int argc, char** argv )
{
    Mat image = imread( argv[1], CV_LOAD_IMAGE_ANYDEPTH );
    imwrite("debugwriteout.jp2", image);
    return 0;
}
I pass the name of a .jp2 greyscale file in the args. The image has a 14-bit pixel depth, but when I print out the pixel values I get values over 20000, and my image is now a completely black square. Any advice would be appreciated.
Additional information:
When I change the imread flag to CV_LOAD_IMAGE_GRAYSCALE, it successfully converts the image to an 8-bit pixel depth and prints useful output, so I can tell that the Jasper module is working at least somewhat correctly.
Any advice would be appreciated,
Thanks
SZman,
I solved my problem.
The solution is the position of the high bit.
In 16 bits, for a 14-bit depth, you have xxxxxxxxxxxxxx00 instead of 00xxxxxxxxxxxxxx.
To get the correct value, you must shift right by 2 bits.
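A minimal sketch of that correction, assuming the loaded Mat is CV_16U with the 14-bit data sitting in the top bits (variable name taken from the question's code):
Mat corrected;
image.convertTo(corrected, CV_16U, 1.0 / 4.0); // dividing by 4 == shifting right by 2 bits
imwrite("debugwriteout.jp2", corrected);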
Please read the image using these flags:
Mat image = imread( argv[1], CV_LOAD_IMAGE_ANYDEPTH | CV_LOAD_IMAGE_ANYCOLOR);

Viewing 8 bit RAW image file in openCV

I have a raw file with a 5-byte header: the number of rows and the number of columns take two bytes each, and the 5th byte contains the number of bits per pixel, which is 8 in all cases. The image data follows the header.
Since I am new to OpenCV, I want to ask how to view this RAW image file as a greyscale image using C++.
I know how to read binary data in C++ and have stored the image as a 2-D unsigned char array (since each pixel is 8 bit).
Can anyone please tell me how to view this data as image using openCV ?
I am using the code below, but I am getting a completely weird image:
void openRaw() {
    cv::Mat img(numRows, numCols, CV_8U, &(image[0][0]));
    //img.t();
    cv::imshow("img", img);
    cv::waitKey();
}
Any help will be greatly appreciated.
Thanks,
Rohit
You have to convert it to an IplImage.
If you want to see it as a pure grey-scale image, it's actually rather easy.
Example code I use in one application:
CvSize mSize;
mSize.height = 960;
mSize.width = 1280;
IplImage* image1 = cvCreateImage(mSize, 8, 1);
memcpy( image1->imageData, rawDataPointer, sizeOfImage);
cvNamedWindow( "corners1", 1 );
cvShowImage( "corners1", image1 );
At that point you have a valid IplImage, which you can then display (the last 2 lines of code do that).
If the image is Bayer-tiled, you will have to convert it to RGB.
C++ notation:
cv::Mat img(rows, cols, CV_8U, ptrToDat);
cv::imshow("img", img);
cv::waitKey();
*This assumes the data is stored row-wise; if it is saved column-wise instead, use:
cv::Mat img(cols, rows, CV_8U, ptrToDat);
img = img.t();
cv::imshow("img", img);
cv::waitKey();
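Putting the pieces together, here is a self-contained sketch of reading the format described in the question (2 bytes for rows, 2 bytes for cols, 1 byte for bits per pixel, then the 8-bit pixel data); the field order and little-endian byte order are assumptions, so adjust them to match your files:
#include <opencv2/opencv.hpp>
#include <fstream>
#include <vector>

int main(int argc, char** argv)
{
    std::ifstream in(argv[1], std::ios::binary);

    unsigned char header[5];
    in.read(reinterpret_cast<char*>(header), 5);

    int rows = header[0] | (header[1] << 8);   // assumed order and endianness
    int cols = header[2] | (header[3] << 8);
    int bpp  = header[4];
    if (bpp != 8) return -1;                   // this sketch only handles 8-bit pixels

    std::vector<unsigned char> buffer(rows * cols);
    in.read(reinterpret_cast<char*>(buffer.data()), buffer.size());

    cv::Mat img(rows, cols, CV_8U, buffer.data()); // wraps the buffer, no copy
    cv::imshow("raw image", img);
    cv::waitKey();
    return 0;
}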

Kinect and Opencv, the depth image, how to use it

I've been using the Kinect with OpenCV (in C++). I can get both the RGB and the depth image.
With the RGB image I can "play" as usual: blurring it, using Canny (after converting it to greyscale), etc. But I can't do the same with the depth image; each time I try to do something with it I get exceptions.
I have the following code to get the depth image:
CvMat* depthMetersMat = cvCreateMat(480, 640, CV_16UC1 );
CvMat* imageMetersMat = cvCreateMat(480, 640, CV_16UC1 );
IplImage *kinectDepthImage = cvCreateImage( cvSize(640,480),16,1);
const XnDepthPixel* pDepthMap = depth.GetDepthMap();
for (int y = 0; y < XN_VGA_Y_RES; y++) {
    for (int x = 0; x < XN_VGA_X_RES; x++) {
        depthMetersMat->data.s[y * XN_VGA_X_RES + x] = 10 * pDepthMap[y * XN_VGA_X_RES + x];
    }
}
cvGetImage(depthMetersMat, kinectDepthImage);
The problem is that I can't do anything with kinectDepthImage. I tried to convert it to greyscale and then use Canny, but I don't know how to convert it.
Basically I would like to apply Canny and a Laplacian to the depth image.
The problem was that the output from cvGetImage has 16-bit depth while Canny requires 8-bit, so I need to convert it to 8 bits, something like:
cvConvertScale(depthMetersMat, kinectDepthImage8, 1.0/256.0, 0);
The new OpenCV API encourages using Mat instead of the old image types. The current code for using the OpenNI depth metadata in OpenCV would be:
Mat depthMat16UC1(height, width, CV_16UC1, (void*) g_DepthMD.Data()); // Mat takes (rows, cols), i.e. (height, width)
Mat depthMat8UC1;
depthMat16UC1.convertTo(depthMat8UC1, CV_8U, 1.0/256.0);
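From there, Canny and a Laplacian work like on any other 8-bit single-channel image; a short sketch (the thresholds and kernel size are arbitrary choices, the variable name comes from the snippet above):
Mat edges, lap;
Canny(depthMat8UC1, edges, 50, 150);      // thresholds picked arbitrarily
Laplacian(depthMat8UC1, lap, CV_8U, 3);   // 3x3 kernel, 8-bit output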
What is sizeof(XnDepthPixel)?
Try using cvCreateImageHeader and then calling cvSetData on it with the XnDepth image.
The code at the link below could give you valuable information. Note: it's not my code, but it may give the result you require. Comment out the cvCvtColor(rgbimg, rgbimg, CV_RGB2BGR); line.
http://pastebin.com/e5kHzs84
Regards,
Nagaraju
If you are using OpenNI, have you created the context and production nodes, and started generating data? That's probably your problem.