Kinect and OpenCV: how to use the depth image (C++)

I'm using Kinect and OpenCV (in C++). I can get both the RGB and the depth image.
With the RGB image I can "play" as usual: blurring it, using Canny (after converting it to greyscale), and so on. But I can't do the same with the depth image; every time I try to do something with it I get exceptions.
I have the following code to get the depth image:
CvMat* depthMetersMat = cvCreateMat(480, 640, CV_16UC1);
CvMat* imageMetersMat = cvCreateMat(480, 640, CV_16UC1);
IplImage* kinectDepthImage = cvCreateImage(cvSize(640, 480), 16, 1);
const XnDepthPixel* pDepthMap = depth.GetDepthMap();
for (int y = 0; y < XN_VGA_Y_RES; y++) {
    for (int x = 0; x < XN_VGA_X_RES; x++) {
        depthMetersMat->data.s[y * XN_VGA_X_RES + x] = 10 * pDepthMap[y * XN_VGA_X_RES + x];
    }
}
cvGetImage(depthMetersMat, kinectDepthImage);
The problem is that I can't do anything with kinectDepthImage. I tried to convert it to greyscale and then use Canny, but I don't know how to convert it.
Basically I would like to apply canny and laplacian to the depth image.

The problem was that the output from cvGetImage is 16-bit, while Canny requires 8-bit input, so I need to convert it to 8 bits, something like:
cvConvertScale(depthMetersMat, kinectDepthImage8, 1.0/256.0, 0);

The new OpenCV API encourages using Mat instead of the old image types. The current code for using the OpenNI depth meta data in OpenCV would be:
Mat depthMat16UC1(height, width, CV_16UC1, (void*) g_DepthMD.Data()); // Mat takes rows (height) first
Mat depthMat8UC1;
depthMat16UC1.convertTo(depthMat8UC1, CV_8U, 1.0/256.0);
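Once the depth is in an 8-bit Mat, the Canny and Laplacian the question asks about can be applied directly. A minimal sketch, assuming depthMat8UC1 from the conversion above; the thresholds and kernel size here are arbitrary placeholders to tune for your scene:

#include <opencv2/opencv.hpp>

// depthMat8UC1 is the 8-bit depth image produced by convertTo above
cv::Mat edges, lap;
cv::Canny(depthMat8UC1, edges, 50, 150);      // hysteresis thresholds: placeholders
cv::Laplacian(depthMat8UC1, lap, CV_16S, 3);  // 3x3 Laplacian; CV_16S avoids overflow
cv::convertScaleAbs(lap, lap);                // back to 8-bit for display
cv::imshow("depth edges", edges);
cv::imshow("depth laplacian", lap);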

What is sizeof(XnDepthPixel)?
Try using cvCreateImageHeader and then calling cvSetData on it with the XnDepthPixel buffer.
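A minimal sketch of that approach, assuming XnDepthPixel is a 16-bit value (as it is in OpenNI), so the buffer can be wrapped without copying:

// Wrap the OpenNI depth buffer in an IplImage header; no copy involved.
IplImage* depthHeader = cvCreateImageHeader(cvSize(640, 480), IPL_DEPTH_16U, 1);
cvSetData(depthHeader, (void*)depth.GetDepthMap(), 640 * sizeof(XnDepthPixel));
// ... use depthHeader like any other IplImage ...
cvReleaseImageHeader(&depthHeader);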

Check the code at the link below ... it could give you valuable information. NOTE: it's not my code, but it may give the result you require. Comment out the line cvCvtColor(rgbimg,rgbimg,CV_RGB2BGR);
http://pastebin.com/e5kHzs84
Regards
Nagaraju

If you are using OpenNI, have you created the context and production nodes, and started generating data? That's probably your problem.
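For reference, a minimal OpenNI 1.x setup sketch (error checking omitted; the types are the standard OpenNI ones, but the exact flow depends on your setup):

xn::Context context;
context.Init();                   // create the context
xn::DepthGenerator depth;
depth.Create(context);            // create the depth production node
context.StartGeneratingAll();     // start generating data
context.WaitOneUpdateAll(depth);  // block until a new depth frame arrives
const XnDepthPixel* pDepthMap = depth.GetDepthMap();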

Related

OpenCV Mat CV_8UC1 type (uchar) to unsigned short* (UINT16*)

This is mainly a C++ variable/pointer handling/casting question.
I am trying to apply one of the OpenCV library image filters to a depth image from the Kinect v2 SDK (16-bit greyscale, values between 0 and 8092).
I want to do this after getting the depth image but BEFORE using the Kinect SDK to do RGB-depth registration and conversion to a point cloud. Therefore the final filtered image/array needs to be of the same type as the one I received before filtering, so I can pass it back to the Kinect SDK.
Initial code:
Get the Kinect depth frame as a pointer:
UINT nBufferSize = nDepthFrameHeight * nDepthFrameWidth;
hr = pDepthFrame->CopyFrameDataToArray(nBufferSize, pDepth);
Create two matrices, along with the conversion from 16-bit to 8-bit (the OpenCV functions used below work on 8-bit greyscale):
Mat depthMat(nDepthFrameHeight, nDepthFrameWidth, CV_16UC1, pDepth); // from Kinect
Mat depthf(nDepthFrameHeight, nDepthFrameWidth, CV_8UC1);
depthMat.convertTo(depthf, CV_8UC1, 255.0/2048.0);
imshow("original-depth", depthf);
const unsigned char noDepth = 0; // change to 255 if 'no depth' is encoded as the max value
Mat temp, temp2;
Step 1 - downsize for performance; use a smaller version of the depth image:
Mat small_depthf;
resize(depthf, small_depthf, Size(), 0.2, 0.2);
Step 2 - inpaint only the masked "unknown" pixels:
cv::inpaint(small_depthf, (small_depthf == noDepth), temp, 5.0, INPAINT_TELEA);
Step 3 - upscale to the original size and replace the inpainted regions in the original depth image:
resize(temp, temp2, depthf.size());
temp2.copyTo(depthf, (depthf == noDepth)); // add to the original signal
imshow("depth-inpaint", depthf); // show results
Problematic part:
When I try to reverse the process (even with loss of information, for now):
cv::Mat newDepth(nDepthFrameHeight, nDepthFrameWidth, CV_16UC1);
depthf.convertTo(newDepth, CV_16UC1, 8092.0 / 255.0);
I have found no way to convert these cv::Mat types back to ushort* (UINT16* in this case).
I have tried things like reinterpret_cast, depthf.data and depthf.ptr(), but the final data keeps showing as uchar when hovering over it, unless I force it as in the ptr case above, in which case it crashes.
Any ideas?
P.S.: The code works flawlessly if I don't try to filter the depth. Also, the crash occurs when the SDK tries to map color and depth and uses pDepth in
pCoordinateMapper->MapColorFrameToDepthSpace(nDepthFrameWidth * nDepthFrameHeight, pDepth, nColorFrameWidth * nColorFrameHeight, (DepthSpacePoint*)pDepthSpacePoints);
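For what it's worth, a CV_16UC1 Mat already stores UINT16 pixels, so one way to get a typed pointer back out is Mat::ptr<UINT16>(). A minimal sketch, assuming newDepth is continuous in memory (worth verifying with isContinuous()):

// Sketch: copy the converted 16-bit Mat back into the SDK's UINT16 buffer.
CV_Assert(newDepth.type() == CV_16UC1 && newDepth.isContinuous());
UINT16* pFiltered = newDepth.ptr<UINT16>(0);
memcpy(pDepth, pFiltered, nDepthFrameWidth * nDepthFrameHeight * sizeof(UINT16));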

OpenCV Object Detection (HOGDescriptor) on iOS

I'm trying to get the people detector provided by the OpenCV library running. So far I get decent performance on my iPhone 6, but the detection is very poor and almost never correct, and I'm not really sure why, since you can find example videos using the same default HOG descriptor with far better detection.
Here is the code:
- (void)processImage:(Mat&)image {
    cv::Mat cvImg, result;
    cvtColor(image, cvImg, COLOR_BGR2HSV);
    cv::vector<cv::Rect> found, found_filtered;
    hog.detectMultiScale(cvImg, found, 0, cv::Size(4,4), cv::Size(8,8), 1.5, 0);
    size_t i;
    for (i = 0; i < found.size(); i++) {
        cv::Rect r = found[i];
        rectangle(image, r.tl(), r.br(), Scalar(0,255,0), 2);
    }
}
The video input comes from the iPhone camera itself and "processImage:" is called for every frame. For the HOGDescriptor I use the default people detector:
_hog.setSVMDetector(cv::HOGDescriptor::getDefaultPeopleDetector());
I appreciate any help. :)
I'm new to OpenCV, so take this with a grain of salt:

The line cvtColor(image, cvImg, COLOR_BGR2HSV); converts the image from the BGR color space to the HSV color space. Essentially, it changes each pixel from being represented by how much blue, green, and red it has to being represented by hue (the color), saturation (how much color) and value (how bright). Clearly, the HOGDescriptor acts on a BGR image, not an HSV image.

You need to pass it a CV_8UC3 image: an image with 3 channels per pixel (C3), e.g. BGR, and an 8-bit unsigned number for each channel (8U). This part is less important: what are you passing into processImage()? It should be one of those types. If not, you need to know the type and convert it to CV_8UC3 using cvtColor().
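In other words, the likely fix is simply to drop the HSV conversion and hand the detector the original frame. A minimal sketch of that change, assuming the incoming frame is already 8-bit BGR (the detectMultiScale parameters are kept from the question):

- (void)processImage:(Mat&)image {
    // Feed the BGR frame to the detector directly; no color space conversion.
    cv::vector<cv::Rect> found;
    hog.detectMultiScale(image, found, 0, cv::Size(4,4), cv::Size(8,8), 1.5, 0);
    for (size_t i = 0; i < found.size(); i++) {
        rectangle(image, found[i].tl(), found[i].br(), Scalar(0,255,0), 2);
    }
}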

How to draw rectangle with mask image in opencv?

Mask image
Result image
Expected image
Is there any function to draw a rectangle with a mask image using OpenCV? I have attached the expected image. Please help me; thanks in advance.
I think the question isn't quite clear, but if the first image is your original image (circle), the second one (rectangle) is your binary mask image, and you want to apply that mask to the original image, then you can apply the mask as follows:
inputMat.copyTo(outputMat, maskMat);
Src.: https://stackoverflow.com/a/18161322/4767895
If you haven't already created a binary mask, do it this way:
Create a mask with the same size as your original image (all zeros), and draw a filled rectangle (all ones) of a specific size in it.
cv::Mat mask = cv::Mat::zeros(Rows, Cols, CV_8U); // all 0
mask(Rect(StartX,StartY,Width,Height)) = 1; //make rectangle 1
Src.: https://stackoverflow.com/a/18136171/4767895
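Putting the two snippets together, a minimal end-to-end sketch (the file name and rectangle coordinates here are made up for illustration):

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat input = cv::imread("circle.png");  // hypothetical input image
    cv::Mat mask = cv::Mat::zeros(input.rows, input.cols, CV_8U);
    mask(cv::Rect(50, 50, 200, 100)) = 1;      // rectangle region set to non-zero

    cv::Mat output;
    input.copyTo(output, mask);                // copies only where mask is non-zero
    cv::imwrite("masked.png", output);
    return 0;
}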
Feel free to respond if I misunderstood your question.
Try using the Boolean operations provided in OpenCV.
Please refer to this code (source). I have added all the bitwise operations in case you need any of them.
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    Mat drawing1 = Mat::zeros(Size(400,200), CV_8UC1);
    Mat drawing2 = Mat::zeros(Size(400,200), CV_8UC1);
    // left half of drawing1 set to white
    drawing1(Range(0, drawing1.rows), Range(0, drawing1.cols/2)) = 255;
    imshow("drawing1", drawing1);
    // a white rectangle inside drawing2
    drawing2(Range(100,150), Range(150,350)) = 255;
    imshow("drawing2", drawing2);

    Mat res;
    bitwise_and(drawing1, drawing2, res); imshow("AND", res);
    bitwise_or(drawing1, drawing2, res);  imshow("OR", res);
    bitwise_xor(drawing1, drawing2, res); imshow("XOR", res);
    bitwise_not(drawing1, res);           imshow("NOT", res);

    waitKey(0);
    return 0;
}

OpenCV filter2D: Filtering only part of the matrix/image

I have encountered the following problem.
I need to filter a matrix/image with a linear filter, but I only want to filter those pixels that have a sufficient number of neighbors around them (according to the kernel size). To be concrete, the result of filtering a 32x32 image with a 5x5 kernel should be 28x28.
Is it possible to do such a processing in relatively simple way with OpenCV built-in functions?
int kernel_size = 3;
cv::Mat in_img, out_img;
cv::Mat kernel = cv::Mat::ones(kernel_size, kernel_size, CV_32F) / (float)(kernel_size * kernel_size);
cv::filter2D(in_img, out_img, -1, kernel); // filtering (output has the same size as the input)
int border = kernel_size / 2;              // pixels this close to the edge lack full support
cv::Size size = in_img.size();
cv::Rect roi(border, border, size.width - 2 * border, size.height - 2 * border);
cv::Mat cropped = out_img(roi).clone();    // cropping: 28x28 from 32x32 with a 5x5 kernel
There is a function called cv::filter2D in OpenCV, but the output image will be of the same size as the input image (with border padding during the filtering). There is another image/mathematical library called VXL; there you can find a convolution operator suitable for your requirements.

Depth feed from Kinect not updating

I'm using a combination of the OpenKinect and OpenCV libraries to apply Haar-like feature recognition to both the RGB and depth images.
I can get the live feed and successfully detect objects using the RGB feed; however, the depth feed is giving me massive problems.
After the initial frame, the depth frame does not seem to update at all.
The depth callback function that provides the raw data is as follows:
// depth callback function
void depth_cb(freenect_device *dev, void *v_depth, uint32_t timestamp)
{
    if (got_depth == 0) {
        pthread_mutex_lock(&buf_mutex);
        // copy to OpenCV buffer (640 x 480 pixels, 2 bytes each)
        memcpy(depthMat.data, v_depth, (640*480*2));
        // depthMat.convertTo(depthFrame, CV_8UC1, 256.0/2048.0);
        got_depth++;
        pthread_cond_signal(&frame_cond);
        pthread_mutex_unlock(&buf_mutex);
    }
}
The Mats used are initialised like so:
cv::Mat depthMat(cv::Size(640,480),CV_16UC1);
cv::Mat depthFrame(cv::Size(640,480),CV_8UC1);
And in the main function I try to use them like so:
depthMat.convertTo(depthFrame, CV_8UC1, 255.0/2048.0);
imshow("rgb", rgbMat);
imshow("depth-pre-conversion", depthMat);
imshow("depth", depthFrame);
IplImage depthImage = depthFrame;
IplImage rgbImage = rgbMat;
detect_and_draw(&depthImage);
'depth-pre-conversion' is an almost black frame; you can just about make out the depth image in it. It doesn't update.
'depth' is the lighter version once converted to 8 bits; it also doesn't move.
'rgb' is the live RGB feed, which works with no problem (although it is BGR rather than RGB; I'll get around to fixing that at some point, it's less important right now).
I'd appreciate any advice and help you can offer.
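One thing worth checking, based only on the code shown (an assumption, since the main loop isn't posted): the callback copies a frame only while got_depth == 0 and then increments it, so unless got_depth is reset after each frame is consumed, only the very first frame ever reaches depthMat. A sketch of resetting the flag in the main loop:

// In the main loop, after the current frame has been consumed:
pthread_mutex_lock(&buf_mutex);
depthMat.convertTo(depthFrame, CV_8UC1, 255.0 / 2048.0);
got_depth = 0; // allow depth_cb to copy the next frame
pthread_mutex_unlock(&buf_mutex);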