Depth feed from Kinect not updating - C++

I'm using a combination of OpenKinect and OpenCV libraries to apply Haar-like feature recognition to both RGB and depth images.
I can get the live feed and successfully detect objects using the RGB feed; however, the depth feed is giving me massive problems.
After the initial frame, the depth frame does not seem to update at all.
The depth callback function that provides the raw data is as follows:
//depth callback function
void depth_cb(freenect_device *dev, void *v_depth, uint32_t timestamp)
{
    if (got_depth == 0){
        pthread_mutex_lock(&buf_mutex);
        //copy to OpenCV buffer
        memcpy(depthMat.data, v_depth, (640*480*2));
        // depthMat.convertTo(depthFrame, CV_8UC1, 256.0/2048.0);
        got_depth++;
        pthread_cond_signal(&frame_cond);
        pthread_mutex_unlock(&buf_mutex);
    }
}
The Mats used are initialised like so:
cv::Mat depthMat(cv::Size(640,480),CV_16UC1);
cv::Mat depthFrame(cv::Size(640,480),CV_8UC1);
And in the main function I try to use them like so:
depthMat.convertTo(depthFrame, CV_8UC1, 255.0/2048.0);
imshow("rgb", rgbMat);
imshow("depth-pre-conversion", depthMat);
imshow("depth", depthFrame);
IplImage depthImage = depthFrame;
IplImage rgbImage = rgbMat;
detect_and_draw(&depthImage);
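For reference, the callback above only copies data while got_depth == 0, so in the usual libfreenect producer/consumer pattern the main loop resets that flag (under the same mutex) once it has consumed the frame; otherwise only the first depth frame is ever copied. A rough sketch of that consumer side, reusing the names from the callback (the loop structure and exit condition are assumptions, not code from the question):

// Hypothetical consumer loop: depthMat, depthFrame, buf_mutex, frame_cond
// and got_depth are the globals used by the callback above.
while (cv::waitKey(10) != 27)
{
    pthread_mutex_lock(&buf_mutex);
    while (got_depth == 0)
        pthread_cond_wait(&frame_cond, &buf_mutex);   // wait for a fresh frame

    depthMat.convertTo(depthFrame, CV_8UC1, 255.0/2048.0);
    got_depth = 0;                                    // let the callback copy the next frame
    pthread_mutex_unlock(&buf_mutex);

    cv::imshow("depth", depthFrame);
}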
'Depth-pre-conversion' is an almost black frame; you can just about make out the depth image here. It doesn't update.
'Depth' is the lighter version once converted to 8 bits; it also doesn't move.
'rgb' is the live RGB feed, which works with no problem (although it is actually BGR rather than RGB; I'll get round to fixing that at some point, it's less important right now).
I'd appreciate any advice and help you can offer.

Related

How to increase the saturation values of an image using HSV (in OpenCV using C++)?

I was looking for a way to increase the saturation of some of my images in code and found the strategy of splitting a Mat into its HSV channels and then increasing the S channel by a factor. However, I ran into some issues where the split channels were still in BGR (I think), because the output was just a greener-tinted version of the original.
//Save original image to material
Mat orgImg = imread("sunset.jpg");
//Resize the image to be smaller
resize(orgImg, orgImg, Size(500, 500));
//Display the original image for comparison
imshow("Original Image", orgImg);
Mat g = Mat::zeros(Size(orgImg.cols, orgImg.rows), CV_8UC1);
Mat convertedHSV;
orgImg.convertTo(convertedHSV, COLOR_BGR2HSV);
Mat saturatedImg;
Mat HSVChannels[3];
split(convertedHSV, HSVChannels);
imshow("H", HSVChannels[0]);
imshow("S", HSVChannels[1]);
imshow("V", HSVChannels[2]);
HSVChannels[1] *= saturation;
merge(HSVChannels, 3, saturatedImg);
//Saturate the original image and save it to a new material.
//Display the new, saturated image.
imshow("Saturated", saturatedImg);
waitKey(0);
return 0;
This is my code, and nothing I do makes it actually edit the saturation; all the outputs are just green-tinted photos.
Note: saturation is a public double that is usually set to around 1.5, or whatever you want.
Do not use cv::convertTo() here. It changes the bit depth (and representation, int vs. float) of the image, not the color space, which is what you are trying to change.
Using it like that does not throw a warning or error, though, because both the type indicators (CV_8U, ...) and the color-space indicators (COLOR_BGR2HSV, ...) resolve to integers: one is a #define, the other an old-style enum.
Following the example here, it is possible to do this with cv::cvtColor(). Don't forget to convert back before showing the image; imshow() and imwrite() both expect a BGR format.
// Convert image from BGR -> HSV:
// orgImg.convertTo(convertedHSV, COLOR_BGR2HSV); // <- this is wrong, do not use
cvtColor(orgImg, convertedHSV, COLOR_BGR2HSV); // <- this does the trick instead
// do the split, multiplication, merge
// [...]
// Convert image back HSV -> BGR:
cvtColor(saturatedImg, saturatedImg, COLOR_HSV2BGR);
//Display the new, saturated image.
imshow("Saturated", saturatedImg);
Note that OpenCV does not care about the color representation when working with a 3-channel Mat: it could be RGB, HSV or anything else. Only for displaying (or saving to an image format) does the given color space matter.
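Putting the pieces together, a minimal end-to-end sketch (file name, window names and the 1.5 factor are just example values):

#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    double saturation = 1.5;               // example factor
    Mat orgImg = imread("sunset.jpg");     // placeholder file name
    resize(orgImg, orgImg, Size(500, 500));

    Mat hsv;
    cvtColor(orgImg, hsv, COLOR_BGR2HSV);  // BGR -> HSV

    Mat channels[3];
    split(hsv, channels);
    channels[1] *= saturation;             // scale S; 8-bit values saturate at 255

    Mat saturatedImg;
    merge(channels, 3, saturatedImg);
    cvtColor(saturatedImg, saturatedImg, COLOR_HSV2BGR); // back to BGR before display

    imshow("Original Image", orgImg);
    imshow("Saturated", saturatedImg);
    waitKey(0);
    return 0;
}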

uint8_t buffer to cv::Mat conversion results in distorted image

I have a MIPI camera that captures frames and stores them in the struct buffer that you can see below. Once the frame is stored I want to convert it into a cv::Mat; the thing is that the Mat ends up looking like the first pic.
The var buf.index is just part of the V4L2 API, useful to understand which buffer I'm using.
//The structure where the data is stored
struct buffer {
    void   *start;
    size_t  length;
};
struct buffer *buffers;

//buffer -> Mat
cv::Mat im = cv::Mat(cv::Size(width, height), CV_8UC3, ((uint8_t*)buffers[buf.index].start));
At first I thought that the data might be corrupted but storing the image with lodepng results in a nice image without any distortion.
unsigned char* out_buf = (unsigned char*)malloc(width * height * 3);
for (int pix = 0; pix < width*height; ++pix) {
    memcpy(out_buf + pix*3, ((uint8_t*)buffers[buf.index].start) + 4*pix + 1, 3);
}
lodepng_encode24_file(filename, out_buf, width, height);
I bet it's something really silly.
The picture you post has oddly colored pixels, and the patterns look like there's more information than simply 24 bits per pixel.
After inspecting the data, it appears that V4L gives you four bytes per pixel, and the first byte is always 0xFF (let's call that X). Further, the channel order seems to be XRGB.
Create a cv::Mat using CV_8UC4 to contain the data.
To use the picture in OpenCV, you need BGR order. cv::split the received data into its four color planes, which are X, R, G, B. Use cv::merge to reassemble the B, G, R planes into a picture that OpenCV can handle, or reassemble into R, G, B to create a Mat for other purposes (the other library you seem to use).
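A minimal sketch of that split/merge rearrangement, reusing buffers, buf, width and height from the question (treat it as an outline rather than tested code):

// Wrap the raw XRGB frame as a 4-channel Mat (no copy is made here).
cv::Mat xrgb(cv::Size(width, height), CV_8UC4, (uint8_t*)buffers[buf.index].start);

// Split into the four planes: X, R, G, B.
std::vector<cv::Mat> planes;
cv::split(xrgb, planes);

// Reassemble as B, G, R so OpenCV displays it correctly.
std::vector<cv::Mat> bgrPlanes = { planes[3], planes[2], planes[1] };
cv::Mat bgr;
cv::merge(bgrPlanes, bgr);
cv::imshow("frame", bgr);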

OpenCV Mat CV_8UC1 type (uchar) to *unsigned short (*UINT16)

This is mainly a C++ variable/pointer handling/casting question.
I am trying to apply one of the OpenCV library image filters to a depth image from the Kinect v2 SDK (16-bit grayscale, values between 0 and 8092).
I want to do this after getting the depth image but BEFORE using the Kinect SDK to do RGB-depth registration and conversion to a point cloud. Therefore I want the final filtered image/array to be of the same type as I received before filtering, so I can pass it back to the Kinect SDK.
Initial code:
Get the Kinect depth frame as a pointer:
UINT nBufferSize = nDepthFrameHeight * nDepthFrameWidth;
hr = pDepthFrame->CopyFrameDataToArray(nBufferSize, pDepth);
Create two matrices, along with the conversion between 16-bit and 8-bit (OpenCV inpainting works with 8-bit greyscale):
Mat depthMat(height, width, CV_16UC1, depth); // from kinect
Mat depthf(height, width, CV_8UC1);
depthMat.convertTo(depthf, CV_8UC1, 255.0/2048.0);
imshow("original-depth", depthf);
const unsigned char noDepth = 0; // change to 255 if 'no depth' uses the max value
Mat temp, temp2;
Step 1 - downsize for performance; use a smaller version of the depth image:
Mat small_depthf;
resize(depthf, small_depthf, Size(), 0.2, 0.2);
Step 2 - inpaint only the masked "unknown" pixels:
cv::inpaint(small_depthf, (small_depthf == noDepth), temp, 5.0, INPAINT_TELEA);
Step 3 - upscale to the original size and replace the inpainted regions in the original depth image:
resize(temp, temp2, depthf.size());
temp2.copyTo(depthf, (depthf == noDepth)); // add to the original signal
imshow("depth-inpaint", depthf); // show results
Problematic Part:
When I try to reverse the process (even with loss of information, for now):
cv::Mat newDepth(nDepthFrameHeight, nDepthFrameWidth, CV_16UC1);
depthf.convertTo(newDepth, CV_16UC1, 8092.0 / 255.0);
I have found no way to convert these cv::Mat types back to *ushort (*UINT16 in this case).
I have tried things like reinterpret_cast, depthf.data and depthf.ptr(), but it keeps showing uchar when hovering over the final data, unless I force it as in the ptr case above, in which case it crashes.
Any ideas?
P.S.: The code works flawlessly if I don't try to filter the depth. Also, the crash occurs when the SDK tries to map color and depth and tries to use pDepth in:
pCoordinateMapper->MapColorFrameToDepthSpace(nDepthFrameWidth * nDepthFrameHeight, pDepth, nColorFrameWidth * nColorFrameHeight, (DepthSpacePoint*)pDepthSpacePoints);
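For reference, one way the writeback is often sketched (an untested outline; it assumes newDepth is CV_16UC1, continuous, and the same size as the buffer behind pDepth):

// Convert the filtered 8-bit image back to 16-bit depth values.
cv::Mat newDepth;
depthf.convertTo(newDepth, CV_16UC1, 8092.0 / 255.0);

// A CV_16UC1 Mat already stores UINT16 values; ptr<UINT16>() exposes them
// with the right type, so they can be copied back into the SDK's buffer.
if (newDepth.isContinuous())
{
    memcpy(pDepth, newDepth.ptr<UINT16>(0),
           nDepthFrameWidth * nDepthFrameHeight * sizeof(UINT16));
}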

OpenCV Object Detection (HOGDescriptor) on iOS

I'm trying to get the people detector provided by the OpenCV library running. So far I get decent performance on my iPhone 6, but the detection is super bad and almost never correct, and I'm not really sure why, since you can find example videos using the same default HOG descriptor with much better detection.
Here is the code:
- (void)processImage:(Mat&)image {
    cv::Mat cvImg, result;
    cvtColor(image, cvImg, COLOR_BGR2HSV);

    cv::vector<cv::Rect> found, found_filtered;
    hog.detectMultiScale(cvImg, found, 0, cv::Size(4,4), cv::Size(8,8), 1.5, 0);

    size_t i;
    for (i = 0; i < found.size(); i++) {
        cv::Rect r = found[i];
        rectangle(image, r.tl(), r.br(), Scalar(0,255,0), 2);
    }
}
The video input comes from the iPhone camera itself and "processImage:" is called for every frame. For the HOGDescriptor I use the default people detector:
_hog.setSVMDetector(cv::HOGDescriptor::getDefaultPeopleDetector());
I appreciate any help. :)
I'm new to OpenCV, so take this with a grain of salt:
The line cvtColor(image, cvImg, COLOR_BGR2HSV); converts the image from the BGR color space to the HSV color space. Essentially, it changes each pixel from being represented by how much blue, green and red it has to being represented by hue (which color), saturation (how much color) and value (how bright). The HOGDescriptor acts on a BGR image, not an HSV image. You need to pass it a CV_8UC3 image: an image with 3 channels per pixel (C3), e.g. BGR, and an 8-bit unsigned number for each channel (8U); that last part is less important. What are you passing into the method processImage:? It should be of that type. If not, you need to know its type and convert it to CV_8UC3 using cvtColor().
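In other words, a minimal adjustment (a sketch only, assuming image already arrives as an 8-bit, 3-channel BGR Mat) is to run the detector on the BGR frame directly and skip the HSV conversion:

- (void)processImage:(Mat&)image {
    // image is assumed to be CV_8UC3 (BGR); no HSV conversion needed.
    std::vector<cv::Rect> found;
    hog.detectMultiScale(image, found, 0, cv::Size(4,4), cv::Size(8,8), 1.5, 0);

    for (size_t i = 0; i < found.size(); i++) {
        cv::rectangle(image, found[i].tl(), found[i].br(), cv::Scalar(0,255,0), 2);
    }
}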

Kinect and OpenCV, the depth image, how to use it

I've been using Kinect and OpenCV (I am using C++). I can get both the RGB and the depth image.
With the RGB image I can "play" as usual: blurring it, using Canny (after converting it to greyscale), ... but I can't do the same with the depth image. Each time I want to do something with the depth image I get exceptions.
I have the following code to get the depth image:
CvMat* depthMetersMat = cvCreateMat(480, 640, CV_16UC1);
CvMat* imageMetersMat = cvCreateMat(480, 640, CV_16UC1);
IplImage *kinectDepthImage = cvCreateImage(cvSize(640,480), 16, 1);

const XnDepthPixel* pDepthMap = depth.GetDepthMap();
for (int y = 0; y < XN_VGA_Y_RES; y++) {
    for (int x = 0; x < XN_VGA_X_RES; x++) {
        depthMetersMat->data.s[y * XN_VGA_X_RES + x] = 10 * pDepthMap[y * XN_VGA_X_RES + x];
    }
}
cvGetImage(depthMetersMat, kinectDepthImage);
The problem is that I can't do anything with kinectDepthImage. I tried to convert it to greyscale and then use Canny, but I don't know how to convert it.
Basically I would like to apply Canny and the Laplacian to the depth image.
The problem was that the output from cvGetImage is 16-bit depth while Canny requires 8-bit; therefore I needed to convert it to 8 bits, with something like:
cvConvertScale(depthMetersMat, kinectDepthImage8, 1.0/256.0, 0);
The new OpenCV API encourages using Mat instead of the old image types. The current code for using the OpenNI depth metadata in OpenCV would be:
Mat depthMat16UC1(width, height, CV_16UC1, (void*) g_DepthMD.Data());
Mat depthMat8UC1;
depthMat16UC1.convertTo(depthMat8UC1, CV_8U, 1.0/256.0);
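Since the question asks about Canny and the Laplacian specifically, a short follow-up sketch of applying them to the converted 8-bit image (the thresholds and kernel size are arbitrary example values):

cv::Mat edges, laplacian;

// Canny needs an 8-bit, single-channel input, which depthMat8UC1 now is.
cv::Canny(depthMat8UC1, edges, 50, 150);

// Laplacian of the 8-bit depth image; CV_16S avoids clipping negative responses.
cv::Laplacian(depthMat8UC1, laplacian, CV_16S, 3);
cv::convertScaleAbs(laplacian, laplacian);   // back to 8-bit for display

cv::imshow("depth-edges", edges);
cv::imshow("depth-laplacian", laplacian);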
What is the sizeof(XnDepthPixel)?
Try using cvCreateImageHeader and then doing cvSetData on it with the XnDepth image.
Check the code at the link below ... it could give you valuable information. NOTE: it's not my code, but it may give the result you require. Comment out the //cvCvtColor(rgbimg,rgbimg,CV_RGB2BGR); line.
http://pastebin.com/e5kHzs84
Regards
Nagaraju
If you are using OpenNI, have you created the context and production nodes, and started generating data? That's probably your problem.