Simulation of computer vision data sets - C++

I'm studying the use of multiple cameras for computer vision applications, e.g. there is a camera in every corner of a room and the task is human tracking. I would like to simulate this kind of environment. What I need is:
The ability to define a dynamic 3D environment, e.g. a room and a moving object.
Options to place cameras at different positions and get a simulated data set for each camera.
Does anyone have any experience with that? I checked out Blender (http://www.blender.org), but currently I'm looking for a faster/easier-to-use solution.
Could you point me to similar software/libraries (preferably C++ or MATLAB)?

You may find that ILNumerics perfectly fits your needs:
http://ilnumerics.net

If I understand correctly, you are looking to simulate camera feeds from multiple cameras placed at different positions in an environment.
I don't know of any ready-made solution, but here is how I would proceed:
Procure 3D point clouds of a dynamic environment (see the Kinect 3D SLAM benchmark datasets) or generate one of your own with a Kinect (assuming you have an Xbox Kinect available).
Once you have the Kinect point clouds in PCL point cloud format, you can simulate video feeds from various cameras.
Pseudocode along these lines will suffice:

#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/common/transforms.h>
#include <opencv2/opencv.hpp>

// makeImage() discards the 3D depth information and fills the pixels with the
// RGB values of the projected points -- like a snapshot in PCL's pcd_viewer.
void makeImage(const pcl::PointCloud<pcl::PointXYZRGB>& cloud, cv::Mat& image);

pcl::PointCloud<pcl::PointXYZRGB>::Ptr pcd(new pcl::PointCloud<pcl::PointXYZRGB>);
pcl::io::loadPCDFile("scene.pcd", *pcd);                 // read the point cloud

std::vector<Eigen::Affine3f, Eigen::aligned_allocator<Eigen::Affine3f>> camera_positions = { /* one affine transform per camera */ };

for (const auto& camera_position : camera_positions)
{
    pcl::PointCloud<pcl::PointXYZRGB> cloud_out;
    pcl::transformPointCloud(*pcd, cloud_out, camera_position);
    // cloud_out now contains the point cloud seen from this camera's viewpoint.
    cv::Mat image;
    makeImage(cloud_out, image);
    cv::imwrite("view.png", image);
}
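For completeness, here is one possible (simplified) implementation of the makeImage() step above: a plain pinhole projection with made-up intrinsics and no z-buffering, so points that land on the same pixel simply overwrite each other.

#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <opencv2/opencv.hpp>

void makeImage(const pcl::PointCloud<pcl::PointXYZRGB>& cloud, cv::Mat& image)
{
    const int width = 640, height = 480;                 // arbitrary image size
    const float fx = 525.0f, fy = 525.0f;                // made-up focal lengths (pixels)
    const float cx = width / 2.0f, cy = height / 2.0f;   // principal point
    image = cv::Mat::zeros(height, width, CV_8UC3);

    for (const auto& p : cloud)
    {
        if (p.z <= 0.0f) continue;                       // skip points behind the camera
        const int u = static_cast<int>(fx * p.x / p.z + cx);
        const int v = static_cast<int>(fy * p.y / p.z + cy);
        if (u < 0 || u >= width || v < 0 || v >= height) continue;
        image.at<cv::Vec3b>(v, u) = cv::Vec3b(p.b, p.g, p.r);   // OpenCV stores BGR
    }
}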
PCL provides a function to transform a point cloud given the appropriate parameters: pcl::transformPointCloud().
If you would rather not use PCL, you may want to check this post and then follow the remaining steps.

Related

Visual Odometry, Camera Parameters

I am studying visual odometry and have watched Prof. Dr. Cyrill Stachniss' video recordings, which are available as a YouTube 2015/16 playlist on Photogrammetry I & II.
First, if I want to create my own dataset (like the KITTI dataset for VO, or the Oxford campus dataset), what should the properties of the images I take with a camera be?
Are they just images, or do they have some special properties? That is, how can I create my own dataset with a monocular or stereo camera?
Thank you.
To get the extrinsic and intrinsic parameters you must have a set of images of a known shape taken from varying views. It's not a trivial task to do on your own, but common CV libraries/solutions have built-in utilities for camera calibration (I have dealt with the OpenCV library and the MATLAB CV toolbox, and they are generally the same). Usually it's done with a black-and-white checkerboard or another simple geometric pattern.
Then, with known camera parameters, you can work with your own dataset.
Matlab camera calibration reference
OpenCV camera calibration tutorials
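To make that concrete, here is a minimal C++ sketch of checkerboard calibration with OpenCV. The file names, the number of views, and the 9x6 inner-corner board size are placeholder assumptions; multiply the object points by your real square size to get metric units.

#include <opencv2/opencv.hpp>
#include <iostream>
#include <string>
#include <vector>

int main()
{
    const cv::Size board(9, 6);                      // inner corners of the checkerboard
    std::vector<std::vector<cv::Point3f>> objectPoints;
    std::vector<std::vector<cv::Point2f>> imagePoints;

    // 3D reference points of the flat board (z = 0), in units of one square.
    std::vector<cv::Point3f> corners3d;
    for (int y = 0; y < board.height; ++y)
        for (int x = 0; x < board.width; ++x)
            corners3d.emplace_back(x, y, 0.0f);

    cv::Size imageSize;
    for (int i = 0; i < 20; ++i)                     // ~20 views of the tilted/rotated board
    {
        cv::Mat img = cv::imread("calib_" + std::to_string(i) + ".png", cv::IMREAD_GRAYSCALE);
        if (img.empty()) continue;
        imageSize = img.size();

        std::vector<cv::Point2f> corners2d;
        if (cv::findChessboardCorners(img, board, corners2d))
        {
            cv::cornerSubPix(img, corners2d, cv::Size(11, 11), cv::Size(-1, -1),
                             cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 30, 0.01));
            imagePoints.push_back(corners2d);
            objectPoints.push_back(corners3d);
        }
    }

    cv::Mat K, dist;                                 // intrinsics and distortion coefficients
    std::vector<cv::Mat> rvecs, tvecs;               // per-view extrinsics
    const double rms = cv::calibrateCamera(objectPoints, imagePoints, imageSize, K, dist, rvecs, tvecs);
    std::cout << "RMS reprojection error: " << rms << "\nK = " << K << "\ndist = " << dist << std::endl;
    return 0;
}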
If you want to benchmark some visual odometry algorithms with your dataset, you will definitely need the intrinsic parameters of your camera as well as its pose.
As said in #f4f's answer, the intrinsic calibration is typically done with some images of a checkerboard that you tilt and rotate (see OpenCV).
This will give you parameters such as the focal length and optical center, but also the distortion coefficients, which can be important depending on your camera.
Getting the pose of the camera (i.e. the extrinsic parameters) at each frame is probably trickier. Usually the ground truth is obtained using information from additional sensors (tracking system, IMU, GPS, ...). You can have a look at the TUM RGB-D SLAM Dataset and the corresponding paper. They explain how they used a motion-capture system to get the ground-truth pose.
Recording the time of acquisition of the camera frames can also be interesting (one timestamp per frame).
Creating your own visual odometry dataset is not trivial. If you just want to create a dataset "for fun" or to do some experiments, and you only have a camera available, I would say you can just try some methods that are known to work well (like ORB-SLAM). This will give you a good approximation of the camera poses (you may have to manually fix the unknown scale).

Vision Framework with ARKit and CoreML

While I have been researching best practices and experimenting with multiple options for an ongoing project (i.e. a Unity3D iOS project in Vuforia with native integration, extracting frames with AVFoundation, then passing the images through cloud-based image recognition), I have come to the conclusion that I would like to use ARKit, the Vision framework, and CoreML; let me explain.
I am wondering how I would be able to capture ARFrames and use the Vision framework to detect and track a given object using a CoreML model.
Additionally, it would be nice to have a bounding box once the object is recognized, with the ability to add an AR object upon a touch gesture, but this is something that could be implemented after getting the solid project down.
This is undoubtedly possible, but I am unsure of how to pass the ARFrames to CoreML via Vision for processing.
Any ideas?
Update: Apple now has a sample code project that does some of these steps. Read on for those you still need to figure out yourself...
Just about all of the pieces are there for what you want to do... you mostly just need to put them together.
You obtain ARFrames either by periodically polling the ARSession for its currentFrame or by having them pushed to your session delegate. (If you're building your own renderer, that's ARSessionDelegate; if you're working with ARSCNView or ARSKView, their delegate callbacks refer to the view, so you can work back from there to the session to get the currentFrame that led to the callback.)
ARFrame provides the current capturedImage in the form of a CVPixelBuffer.
You pass images to Vision for processing using either the VNImageRequestHandler or VNSequenceRequestHandler class, both of which have methods that take a CVPixelBuffer as an input image to process.
You use the image request handler if you want to perform a request that uses a single image — like finding rectangles or QR codes or faces, or using a Core ML model to identify the image.
You use the sequence request handler to perform requests that involve analyzing changes between multiple images, like tracking an object's movement after you've identified it.
You can find general code for passing images to Vision + Core ML attached to the WWDC17 session on Vision, and if you watch that session the live demos also include passing CVPixelBuffers to Vision. (They get pixel buffers from AVCapture in that demo, but if you're getting buffers from ARKit the Vision part is the same.)
One sticking point you're likely to have is identifying/locating objects. Most "object recognition" models people use with Core ML + Vision (including those that Apple provides pre-converted versions of on their ML developer page) are scene classifiers. That is, they look at an image and say, "this is a picture of a (thing)," not something like "there is a (thing) in this picture, located at (bounding box)".
Vision provides easy API for dealing with classifiers — your request's results array is filled in with VNClassificationObservation objects that tell you what the scene is (or "probably is", with a confidence rating).
If you find or train a model that both identifies and locates objects — and for that part, I must stress, the ball is in your court — using Vision with it will result in VNCoreMLFeatureValueObservation objects. Those are sort of like arbitrary key-value pairs, so exactly how you identify an object from those depends on how you structure and label the outputs from your model.
If you're dealing with something that Vision already knows how to recognize, instead of using your own model — stuff like faces and QR codes — you can get the locations of those in the image frame with Vision's API.
If after locating an object in the 2D image, you want to display 3D content associated with it in AR (or display 2D content, but with said content positioned in 3D with ARKit), you'll need to hit test those 2D image points against the 3D world.
Once you get to this step, placing AR content with a hit test is something that's already pretty well covered elsewhere, both by Apple and the community.

How to turn any camera into a Depth Camera?

I want to build a depth camera that can recover depth for any image taken from a particular distance. I have already read the following links:
http://www.i-programmer.info/news/194-kinect/7641-microsoft-research-shows-how-to-turn-any-camera-into-a-depth-camera.html
https://jahya.net/blog/how-depth-sensor-works-in-5-minutes/
But I couldn't clearly understand which hardware is required and how to integrate everything together.
Thanks
Certainly, a depth sensor needs an IR sensor, just like the Kinect or Asus Xtion and other available cameras that provide a depth or range image. However, Microsoft came up with machine-learning techniques and algorithmic modifications, the research for which you can find here. Also, here is a video link which shows a mobile camera that has been modified to produce depth renderings. Some hardware changes might still be necessary to turn a standalone 2D camera into such a device, so I would suggest you also look at the hardware design of existing devices on the market.
One way or the other, you would need two views of the same points to get depth. So search for depth sensors and examples, e.g. Kinect with ROS, or OpenCV, or here.
You could also turn two camera streams into a point cloud, but that's another story.
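To make the two-view idea concrete, here is a rough OpenCV (C++) sketch that computes a disparity map from an already rectified stereo pair and converts it to depth. The file names, focal length, and baseline are made-up assumptions.

#include <opencv2/opencv.hpp>

int main()
{
    // Assumes the pair is already rectified (corresponding points lie on the same row).
    cv::Mat left  = cv::imread("left.png",  cv::IMREAD_GRAYSCALE);
    cv::Mat right = cv::imread("right.png", cv::IMREAD_GRAYSCALE);

    // Block matcher: 64 disparity levels, 21x21 matching window.
    cv::Ptr<cv::StereoBM> bm = cv::StereoBM::create(64, 21);
    cv::Mat disparity16;                             // fixed-point disparity, scaled by 16
    bm->compute(left, right, disparity16);

    cv::Mat disparity;
    disparity16.convertTo(disparity, CV_32F, 1.0 / 16.0);

    // depth = focal_length * baseline / disparity (invalid where disparity <= 0).
    const double focal_px = 700.0, baseline_m = 0.12;   // made-up calibration values
    cv::Mat depth;
    cv::divide(focal_px * baseline_m, disparity, depth);

    // Save a normalized visualization of the disparity map.
    cv::Mat vis;
    cv::normalize(disparity16, vis, 0, 255, cv::NORM_MINMAX, CV_8U);
    cv::imwrite("disparity.png", vis);
    return 0;
}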
Here's what I know:
3D Cameras
RGB-D and stereoscopic cameras are popular for these applications but are not always practical/available. I've prototyped with Kinects (v1, v2) and Intel cameras (R200, D435). Certainly those are preferred even today.
2D Cameras
IF YOU WANT TO USE RGB DATA FOR DEPTH INFO then you need an algorithm that will process the math for each frame; try an RGB SLAM. A good algorithm will not process ALL the data every frame; it will process all the data once and then look for clues to support evidence of changes in your scene. A number of BIG companies have already done this (it's not that difficult if you have a big team with big money): think Google, Apple, MSFT, etc.
Good luck out there, make something amazing!

Is there a way to construct and store a 3D Map from point cloud and depth data?

I am currently working on a SLAM algorithm, and I have succeeded in gathering the depth and RGB data in the form of a point cloud. However, I only display the frames that my Kinect 2.0 receives on the screen, and that is all.
I would like to accumulate those frames so that, as I move the Kinect, I construct a more elaborate map (either 2D or 3D) that will help me with localization or mapping.
My idea of the map construction is just like when we create a panorama image from many single snapshots.
Anyone has a clue, idea or an algorithm to do it?
You can use rtabmap to create a 3D map and localize your device. It's very simple to use and supports different devices.
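If you want to experiment with the underlying idea yourself, the basic building block is aligning each incoming cloud to the map built so far (e.g. with ICP) and accumulating the result. Below is a minimal PCL sketch with hypothetical file names; it has none of the loop closure or graph optimization that a real system like rtabmap adds, so drift will accumulate.

#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/registration/icp.h>
#include <string>

int main()
{
    using Cloud = pcl::PointCloud<pcl::PointXYZ>;
    Cloud::Ptr map(new Cloud);
    pcl::io::loadPCDFile("frame_0.pcd", *map);       // the first frame seeds the map

    Eigen::Matrix4f pose = Eigen::Matrix4f::Identity();
    for (int i = 1; i < 10; ++i)
    {
        Cloud::Ptr frame(new Cloud);
        if (pcl::io::loadPCDFile("frame_" + std::to_string(i) + ".pcd", *frame) < 0) break;

        // Align the new frame against the map built so far, seeded with the last pose.
        pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
        icp.setInputSource(frame);
        icp.setInputTarget(map);
        Cloud aligned;
        icp.align(aligned, pose);
        if (!icp.hasConverged()) continue;

        pose = icp.getFinalTransformation();          // accumulated camera pose for this frame
        *map += aligned;                              // grow the map (downsample in practice)
    }
    pcl::io::savePCDFileBinary("map.pcd", *map);
    return 0;
}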

OpenCV to Identify objects from training video set and then test them against another video

I have been tasked to use OpenCV and C++ to:
Read a set of videos to create a set of images for learning.
Classify the objects seen in the videos.
Label the images.
Test against a series of test videos to check that objects are identified as expected; draw a rectangle around them and label them.
I am new to OpenCV but happy to program in C++ as soon as an approach is formed. I am also planning to write my own functions at a later stage.
I need your help in forming the right solution approach, as I have to identify household objects (cup, soft toy, phone, camera, keyboard) in a video stream and then test on another video stream. The original video has depth information as well, but I am not sure how to use it to my benefit.
Read about support vector machines (SVM), feature extraction (e.g. SIFT/SURF), SVM training, and SVM testing. And, for drawing the rectangle, read about findContours() and drawContours() in OpenCV.
Approach:
Detect objects (e.g. car/plane etc.) and store the points of their contours.
Extract features of each object using SIFT/SURF.
Based on the extracted features, classify the object with the SVM (the input to the SVM is the vector of extracted features).
If the SVM says "Yes, it is a car", draw a rectangle around it using the contour points you stored in the first step (a rough sketch of this pipeline follows below).
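Here is a rough C++ sketch of the training part of that pipeline, using a bag-of-visual-words representation so that the variable number of SIFT descriptors per image becomes a fixed-length vector for the SVM. The training images, labels, and vocabulary size are placeholders; in OpenCV >= 4.4 SIFT lives in the main features2d module (in older versions it is in opencv_contrib's xfeatures2d).

#include <opencv2/opencv.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/ml.hpp>
#include <vector>

int main()
{
    cv::Ptr<cv::SIFT> sift = cv::SIFT::create();

    // 1) Build a visual vocabulary from the descriptors of all training images.
    std::vector<cv::Mat> trainImages;                 // fill with labelled frames
    std::vector<int> trainLabels;                     // e.g. 0 = cup, 1 = phone, ...
    // ... load trainImages / trainLabels here ...

    cv::BOWKMeansTrainer bowTrainer(100);             // 100 visual words
    for (const cv::Mat& img : trainImages)
    {
        std::vector<cv::KeyPoint> kp;
        cv::Mat desc;
        sift->detectAndCompute(img, cv::noArray(), kp, desc);
        if (!desc.empty()) bowTrainer.add(desc);
    }
    cv::Mat vocabulary = bowTrainer.cluster();

    // 2) Describe every training image as a histogram over the vocabulary.
    cv::Ptr<cv::DescriptorMatcher> matcher = cv::DescriptorMatcher::create("FlannBased");
    cv::BOWImgDescriptorExtractor bowExtractor(sift, matcher);
    bowExtractor.setVocabulary(vocabulary);

    cv::Mat samples, labels;
    for (size_t i = 0; i < trainImages.size(); ++i)
    {
        std::vector<cv::KeyPoint> kp;
        sift->detect(trainImages[i], kp);
        cv::Mat hist;
        bowExtractor.compute(trainImages[i], kp, hist);
        samples.push_back(hist);
        labels.push_back(trainLabels[i]);
    }

    // 3) Train the SVM on the histograms.
    cv::Ptr<cv::ml::SVM> svm = cv::ml::SVM::create();
    svm->setKernel(cv::ml::SVM::LINEAR);
    svm->train(samples, cv::ml::ROW_SAMPLE, labels);

    // 4) At test time, build the same histogram for a frame (or a detected region)
    //    and call svm->predict(hist) to get the class label.
    return 0;
}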