I'm new to Kinect. I'm working on an augmented reality project in Visual Studio using C++.
I'm looking for a way to get all the point coordinates (X, Y, Z) from the Kinect, but I'm having a hard time figuring it out. Can anyone help me with it?
How do I get the points from the depth stream?
I'm using a Kinect v1 for Xbox 360.
Assuming a Kinect v2: the simplest way of doing this is to use ICoordinateMapper::MapDepthFrameToCameraSpace (see the docs here).
IKinectSensor *sensor = nullptr;
UINT16 *depthPoints = nullptr;
...
// initialize the sensor, grab a depth frame, and fill depthPoints
...
const UINT nPixels = 512 * 424;   // Kinect v2 depth frame resolution
std::vector<CameraSpacePoint> cameraPoints(nPixels);
ICoordinateMapper *cm = nullptr;
if (SUCCEEDED(sensor->get_CoordinateMapper(&cm)) && cm != nullptr) {
    cm->MapDepthFrameToCameraSpace(nPixels, depthPoints, nPixels, cameraPoints.data());
    cm->Release();
}
// use the XYZ values (in metres) in cameraPoints
Note that this only works while you have a live connection to the device. If you have previously saved raw depth frames to disk and are reading them back, e.g. on another computer, then look at the ICoordinateMapper functions GetDepthCameraIntrinsics or GetDepthFrameToCameraSpaceTable and save that data along with the frames to allow offline conversion to XYZ.
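In case it helps, here is a minimal sketch of that offline route using GetDepthFrameToCameraSpaceTable. The helper names (SaveTable, DepthToXYZ) and the raw binary dump of the table are illustrative choices of mine, not something prescribed by the SDK:

#include <Windows.h>
#include <Kinect.h>
#include <cstdio>
#include <vector>

// Save the depth-to-camera-space lookup table once, while the sensor is connected.
void SaveTable(ICoordinateMapper *cm, const char *path)
{
    UINT32 count = 0;
    PointF *table = nullptr;                    // one (X/Z, Y/Z) ratio pair per depth pixel
    if (SUCCEEDED(cm->GetDepthFrameToCameraSpaceTable(&count, &table))) {
        FILE *f = fopen(path, "wb");
        if (f) {
            fwrite(table, sizeof(PointF), count, f);
            fclose(f);
        }
        CoTaskMemFree(table);                   // the table is allocated by the SDK
    }
}

// Offline conversion: depth values are in millimetres; each table entry holds the X and Y
// components of a unit-Z camera-space ray, so XYZ = (tx * z, ty * z, z).
void DepthToXYZ(const std::vector<UINT16> &depth, const std::vector<PointF> &table,
                std::vector<CameraSpacePoint> &xyz)
{
    xyz.resize(depth.size());
    for (size_t i = 0; i < depth.size(); ++i) {
        float z = depth[i] * 0.001f;            // mm -> metres
        xyz[i].X = table[i].X * z;
        xyz[i].Y = table[i].Y * z;
        xyz[i].Z = z;
    }
}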
Edit:
For easily getting started:
Kinect v1: try using Kinect Common Bridge v1 and the CoordinateMapper::MapDepthFrameToSkeletonFrame method.
Kinect v2: try using Kinect Common Bridge v2 and the KCBMapDepthFrameToCameraSpace function.
Related
I am fairly new to using the Kinect v2.0 (or any Kinect for that matter). I am creating a UWP app with C++ underneath, doing everything in Visual Studio 2017. I am having trouble finding out how to record data using the Kinect camera. So far I have only found how to display the info being read by the camera and how to save a screenshot.
But I haven't found a single video/thread/question about how to record the data being read, nor how to form a video from it.
Is there any documentation that you could point me to?
You can get more information about the Kinect here:
https://developer.microsoft.com/en-us/windows/kinect.
It already says it in the title, and weirdly enough, I'm unable to find information on this.
The Kinect 1 used projected IR patterns to get depth data, which is shown here:
https://www.youtube.com/watch?v=uq9SEJxZiUg
There you had the problem that you couldn't combine overlapping images from two Kinects, because the IR fields weren't distinguishable. This could be fixed by letting one Kinect shake while the other one is still (http://dl.acm.org/citation.cfm?id=2207676.2208335&coll=DL&dl=ACM&CFID=625209301&CFTOKEN=54555397).
Is that still the case for the Kinect 2 or did they change the way this works?
I would like to do skeletal tracking simultaneously from two Kinect cameras in SkeletalViewer and obtain the skeleton results. As far as I understand, Nui_Init() only processes the threads for the first Kinect (which I suppose has index = 0). However, could I have the two skeletal trackers run at the same time, since I would like to output their results into two text files simultaneously?
(e.g. Kinect 0 outputs to "cam0.txt" while Kinect 1 outputs to "cam1.txt")
Does anyone have experience with such a case, or is anyone able to help?
Regards,
Eva
PS: I read this in the Kinect SDK documentation, which states:
If you are using multiple Kinect sensors, skeleton tracking works only on the first device that you initialize. To switch the device being used to track, uninitialize the old one and initialize the new one.
So is it possible to acquire the coordinates simultaneously? Or, even if I acquire them one by one, how should I address each sensor separately? (As I understand it, the index of the active Kinect will always be 0, so I can't differentiate them.)
I assume that you are using the MS SkeletalViewer example. The problem with their SkeletalViewer is that it closely ties the display to the skeleton tracking, which makes it difficult to change.
Using multiple Kinect sensors should be possible; you just need to initialize all the sensors the same way. The best thing to do would be to define a sensor class to wrap the Kinect sensors, as in the sketch below. If you don't need the display, you can just write a new program. That's a bit of work but not that much; you can probably get a fully working program for multiple sensors in under 100 lines. If you need the display, you can rewrite the SkeletalViewer example to use your sensor class, but that's more tedious.
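As a rough illustration (not a drop-in replacement for SkeletalViewer), a wrapper class using the Kinect v1 NuiApi calls could look roughly like the sketch below. The class name, the file naming and the hot polling loop are placeholders, and whether both sensors actually deliver skeletons at the same time still depends on the SDK version, as the note quoted in the question says:

// One Kinect v1 sensor wrapped in a small class that appends tracked joint
// positions to its own text file (cam0.txt, cam1.txt, ...). Error handling trimmed.
#include <Windows.h>
#include <NuiApi.h>
#include <fstream>
#include <string>
#include <vector>

class SkeletonSensor {
public:
    bool Open(int index) {
        if (FAILED(NuiCreateSensorByIndex(index, &sensor_))) return false;
        if (FAILED(sensor_->NuiInitialize(NUI_INITIALIZE_FLAG_USES_SKELETON))) return false;
        if (FAILED(sensor_->NuiSkeletonTrackingEnable(nullptr, 0))) return false;
        out_.open("cam" + std::to_string(index) + ".txt");
        return true;
    }
    void Poll() {
        NUI_SKELETON_FRAME frame = {};
        if (FAILED(sensor_->NuiSkeletonGetNextFrame(0, &frame))) return;
        for (const NUI_SKELETON_DATA &s : frame.SkeletonData) {
            if (s.eTrackingState != NUI_SKELETON_TRACKED) continue;
            for (int j = 0; j < NUI_SKELETON_POSITION_COUNT; ++j) {
                const Vector4 &p = s.SkeletonPositions[j];
                out_ << p.x << ' ' << p.y << ' ' << p.z << '\n';
            }
        }
    }
    ~SkeletonSensor() {
        if (sensor_) { sensor_->NuiShutdown(); sensor_->Release(); }
    }
private:
    INuiSensor *sensor_ = nullptr;
    std::ofstream out_;
};

int main() {
    int count = 0;
    NuiGetSensorCount(&count);
    std::vector<SkeletonSensor> sensors(count);
    for (int i = 0; i < count; ++i) sensors[i].Open(i);
    for (;;) {                       // simple polling loop
        for (auto &s : sensors) s.Poll();
        Sleep(33);                   // roughly 30 Hz
    }
}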
I'm studying the use of multiple cameras for computer vision applications, e.g. a camera in every corner of a room with the task of human tracking. I would like to simulate this kind of environment. What I need is:
the ability to define a dynamic 3D environment, e.g. a room and a moving object;
options to place cameras at different positions and get a simulated data set for each camera.
Does anyone have any experience with that? I checked out Blender (http://www.blender.org), but currently I'm looking for a faster/easier-to-use solution.
Could you give me guidance on similar software/libraries (preferably C++ or MATLAB)?
You may find that ILNumerics fits your needs:
http://ilnumerics.net
If I understand correctly, you are looking to simulate camera feeds from multiple cameras at different positions in an environment.
I don't know of any ready-made solution, but here is how I would proceed:
Procure 3D point clouds of a dynamic environment (see the Kinect 3D SLAM benchmark datasets), or generate one of your own with a Kinect (assuming you have an Xbox Kinect available).
Once you have the Kinect point clouds in PCL point cloud format, you can simulate video feeds from various cameras.
Pseudocode such as this will suffice:
#include <pcl_headers>

// this method just discards the 3D depth information and fills the pixels with RGB values;
// it is like a snapshot in pcl's pcd_viewer (Point Cloud Library)
makeImage(cloud, image) {}

pcd <- read the point cloud
camera_positions[] <- { new CameraPosition(affine transform), ... }

for (camera_position in camera_positions) {
    pcl::transformPointCloud(pcd,
                             cloud_out,
                             camera_position.getAffineTransform());
    // now cloud_out contains the point cloud seen from a different viewpoint
    image <- new Image();
    makeImage(cloud_out, image);
    saveImage(image);
}
PCL provides a function to transform a point cloud given the appropriate parameters: pcl::transformPointCloud().
If you prefer not to use PCL, then you may want to check this post and then follow the remaining steps.
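For reference, a small concrete version of the loop above using real PCL calls might look like this; scene.pcd and the camera pose are made-up placeholders, and the 2D projection step (the makeImage part) is left out:

// Load a cloud, re-express it in each simulated camera's frame with
// pcl::transformPointCloud, then project/save a view per camera.
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/io/pcd_io.h>
#include <pcl/common/transforms.h>
#include <Eigen/Geometry>
#include <vector>

int main() {
    pcl::PointCloud<pcl::PointXYZRGB>::Ptr scene(new pcl::PointCloud<pcl::PointXYZRGB>);
    pcl::io::loadPCDFile("scene.pcd", *scene);          // placeholder file name

    // One rigid transform (camera pose: rotation + translation) per simulated camera.
    std::vector<Eigen::Affine3f, Eigen::aligned_allocator<Eigen::Affine3f> > cameras;
    Eigen::Affine3f cam = Eigen::Affine3f::Identity();
    cam.translate(Eigen::Vector3f(2.0f, 0.0f, 1.5f));
    cam.rotate(Eigen::AngleAxisf(1.57f, Eigen::Vector3f::UnitY()));  // ~90 degrees
    cameras.push_back(cam);

    for (size_t i = 0; i < cameras.size(); ++i) {
        pcl::PointCloud<pcl::PointXYZRGB> view;
        // Map the scene into camera i's coordinate frame (inverse of the camera pose).
        pcl::transformPointCloud(*scene, view, cameras[i].inverse());
        // ...project 'view' to a 2D image and save it (the makeImage/saveImage step above)
    }
}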
I need to convert depth information acquired with a Kinect sensor to real-world 3D coordinates.
I know that the way to do this is by using a DepthGenerator and calling ConvertProjectiveToRealWorld, but this requires the sensor to be connected...
Does anyone know a way to do it without the sensor connected?
How is your depth information stored?
The easiest way would probably be to initialize OpenNI from a depth recording (a .oni file). You can create .oni files using the NiViewer sample bundled with OpenNI (press '?' to see the list of commands; one of them lets you record).
If your data isn't stored in a .oni file, you should be able to create a dummy file with a single depth frame in it. That should be enough for the sensor parameters, the ones used in the projective-to-real-world conversion, to be stored in the .oni file as well.
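As a rough sketch of that route (assuming OpenNI 1.x; older releases open recordings with OpenFileRecording, 1.5+ with OpenFileRecordingEx), something along these lines should work:

// Open a .oni recording instead of a live sensor and use the DepthGenerator that the
// recording provides for ConvertProjectiveToRealWorld. The file name is a placeholder.
#include <XnCppWrapper.h>
#include <vector>

int main() {
    xn::Context context;
    context.Init();
    context.OpenFileRecording("recording.oni");           // or OpenFileRecordingEx(...)

    xn::DepthGenerator depth;
    context.FindExistingNode(XN_NODE_TYPE_DEPTH, depth);  // depth node from the recording
    context.WaitOneUpdateAll(depth);                       // read the first depth frame

    xn::DepthMetaData md;
    depth.GetMetaData(md);

    // Build projective points (pixel x, pixel y, depth in mm) for the whole frame.
    std::vector<XnPoint3D> proj(md.XRes() * md.YRes()), real(proj.size());
    for (XnUInt32 y = 0; y < md.YRes(); ++y)
        for (XnUInt32 x = 0; x < md.XRes(); ++x) {
            XnPoint3D &p = proj[y * md.XRes() + x];
            p.X = (XnFloat)x; p.Y = (XnFloat)y; p.Z = (XnFloat)md(x, y);
        }

    // Same conversion as with a live sensor, but driven by the recording's parameters.
    depth.ConvertProjectiveToRealWorld((XnUInt32)proj.size(), proj.data(), real.data());
    // real[] now holds X, Y, Z in millimetres in the sensor's coordinate frame.
}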