Disparity map post-processing in OpenFrameworks - C++

After long hours I finally managed to get a stereo disparity map with a single camera. The result is rather spotty as one would expect, so I would like to apply some filter to improve the quality. The problem is that I'm not using pure OpenCV, but the plugin for OpenFrameworks (ofxCv), meaning I can't use this:
http://docs.opencv.org/3.1.0/d3/d14/tutorial_ximgproc_disparity_filtering.html
There has to be a way to apply the WLS filter, or something similar, in this situation. WLS appears to be implemented in OpenCV, but I can't access it through the plugin, and direct access doesn't seem to work either.
Does anybody know how I can apply that filter, or has any other, general, disparity map post-processing advice?

I'm not sure what OpenCV functionality is available to you, but as a suggestion, you could pull the implementation from OpenCV into your own project. Look at this file: https://raw.githubusercontent.com/opencv/opencv_contrib/master/modules/ximgproc/src/disparity_filters.cpp
Copy any additional files you may need to your project and try building. With basic OpenCV support you might be able to make it work.
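If the ximgproc sources do build in your project, the filtering step itself only needs plain cv::Mat inputs, which ofxCv can hand you via toCv(). Here is a minimal sketch, assuming rectified 8-bit grayscale left/right images and the filter parameters used in the linked tutorial:

#include <opencv2/calib3d.hpp>
#include <opencv2/ximgproc/disparity_filter.hpp>

// left/right are rectified 8-bit grayscale cv::Mat images; with ofxCv you can
// obtain them from your ofImage/ofPixels via ofxCv::toCv() plus cv::cvtColor.
cv::Mat filterDisparity(const cv::Mat& left, const cv::Mat& right)
{
    // Matchers for both views; the WLS filter wants a left and a right disparity map.
    cv::Ptr<cv::StereoBM> leftMatcher = cv::StereoBM::create(64, 15);
    cv::Ptr<cv::StereoMatcher> rightMatcher = cv::ximgproc::createRightMatcher(leftMatcher);

    cv::Mat leftDisp, rightDisp;
    leftMatcher->compute(left, right, leftDisp);
    rightMatcher->compute(right, left, rightDisp);

    // The WLS filter smooths the disparity while respecting edges in the left view.
    cv::Ptr<cv::ximgproc::DisparityWLSFilter> wls =
        cv::ximgproc::createDisparityWLSFilter(leftMatcher);
    wls->setLambda(8000.0);
    wls->setSigmaColor(1.5);

    cv::Mat filtered;
    wls->filter(leftDisp, left, filtered, rightDisp);
    return filtered;
}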

Related

DJI M210 calibrate front stereo cameras for depth perception

I'm trying to get the calibration values to put in the m210_stereo_param.yaml, as suggested on the official developer website (OnboardSDK for Linux). The objective is to have good values to test the depth perception sample. The website suggests different approaches for the calibration, and I chose the OpenCV one.
I found an example of calibration on this Github repository: Opencv - stereo_calibrate_rc (some explanations are given on this link: Stereo Camera Calibration in Opencv 3)
The problem is that after getting the final matrices (in intrinsics.yaml and extrinsics.yaml), I modified the values in the m210_stereo_param.yaml and tried to run the sample. I got this result (which is not correct; even the default values of the m210_stereo_param.yaml gave a better result).
Do you have any idea what is going wrong with the calibration? It's quite hard to find a clear approach for getting the values to put in the yaml.
Solved: the problem was that the xml file containing the list of images needs to look like this:
"data/left01.jpg"
"data/right01.jpg"
"data/left02.jpg"
"data/right02.jpg"
"data/left03.jpg"
...
My xml file looked more like this:
"data/left01.jpg"
"data/left02.jpg"
"data/left03.jpg"
...
"data/right01.jpg"
"data/right02.jpg"
So, if you use this example, check that your xml file alternates left and right pictures.
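If you want to verify the ordering automatically, here is a small sketch that reads the list the same way OpenCV's stereo_calib.cpp sample does. It assumes the list sits in an "imagelist" node and that the file names contain "left"/"right"; adjust both if the repository you used differs.

#include <opencv2/core.hpp>
#include <iostream>
#include <string>
#include <vector>

int main(int argc, char** argv)
{
    cv::FileStorage fs(argc > 1 ? argv[1] : "stereo_calib.xml", cv::FileStorage::READ);
    cv::FileNode node = fs["imagelist"];
    if (!fs.isOpened() || node.type() != cv::FileNode::SEQ)
    {
        std::cerr << "Could not read the image list." << std::endl;
        return 1;
    }

    std::vector<std::string> names;
    for (cv::FileNodeIterator it = node.begin(); it != node.end(); ++it)
        names.push_back((std::string)*it);

    // Left images must sit at even indices, right images at odd indices.
    for (size_t i = 0; i < names.size(); ++i)
    {
        bool expectLeft = (i % 2 == 0);
        bool isLeft = names[i].find("left") != std::string::npos;
        if (expectLeft != isLeft)
            std::cout << "Unexpected order at index " << i << ": " << names[i] << std::endl;
    }
    std::cout << "Checked " << names.size() << " entries." << std::endl;
    return 0;
}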
I asked DJI tech support and they told me that the M210 does not support this calibration. In that case, you should not have these problems; just use the original values.

Face detection and image preview drawing

I'm developing an application that uses DirectShow with C++.
Its main goal is to capture users' faces.
I have reached the phase where I capture an image from my webcam.
The problem is that I need an intelligent renderer: in fact, I need that renderer to be able to detect a face and frame it with a rectangle.
I'm wondering if there is a filter that I can use for this purpose,
or if I need to create my own customized filter.
If so, please enlighten me.
It would look like this:
I need to understand how I can draw a rectangle in my render in the first place, because otherwise, even if I know the algorithm, I will not be able to apply it. This is my main goal now.
I have some ideas, but I don't know if they are correct. I think I need to grab each frame separately and modify some of its pixels, so that the change shows up in the live render.
Have a look at OpenCV
A quick look inside and I found this.
Making your own "filter" that works well is no easy job.
Are you talking about automatic detection of where there is something like a human face in the shot you have taken with the webcam? In this case object detection algorithms like Viola-Jones might be interesting for you.
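As a concrete illustration, here is a minimal sketch of the Viola-Jones approach with OpenCV's CascadeClassifier; it draws a rectangle around every detected face on each captured frame. It uses cv::VideoCapture for brevity, whereas in your case the same per-frame logic would live inside your DirectShow render path; the cascade file is the one shipped in OpenCV's data directory.

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/objdetect.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/videoio.hpp>
#include <vector>

int main()
{
    cv::CascadeClassifier faceDetector("haarcascade_frontalface_default.xml");
    cv::VideoCapture cam(0);                      // webcam
    cv::Mat frame, gray;

    while (cam.read(frame))
    {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        cv::equalizeHist(gray, gray);

        std::vector<cv::Rect> faces;
        faceDetector.detectMultiScale(gray, faces, 1.1, 3, 0, cv::Size(80, 80));

        // Draw a rectangle around every detected face before rendering the frame.
        for (const cv::Rect& face : faces)
            cv::rectangle(frame, face, cv::Scalar(0, 255, 0), 2);

        cv::imshow("preview", frame);
        if (cv::waitKey(1) == 27)                 // ESC to quit
            break;
    }
    return 0;
}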
If a commercial package is an option, you can use the Montivision Filter SDK which includes filters that should do the job out of the box. They offer a free eval which is perfect for experimentation.

Augmented Reality-PC

I recently saw the virtual mirror concept on YouTube. I tried it out and researched it. It seems that the creators have used augmented reality so that people can see the output on their screens. While researching, I found out that normally a pattern is identified onto which a 3D image is superimposed.
Question 1: How are they able to superimpose the jewellery and track the person's face without identifying any pattern?
I also tried to check various libraries that I can use to make a program similar to the one they show. Seems to me that a lot of people are using Android phones and iPhones and making apps that use augmented reality.
Question 2: Is there any way I can use C++ to make a program that uses augmented reality?
Oh, and the most important thing, the link to the application is provided below:
http://www.boutiqueaccessories.com.au/virtual-mirror/w1/i1001664/
Do try it out. It's a good experience. :D
I'm not able to actually try the live demo, but the linked video suggests that they either use some simplified pattern recognition (getting the person's outline), or they simply track you based on the initial image (with your position/texture being determined by the outline being shown).
From the video, it's easy to see that there's no real/advanced AR behind this. The images are simply overlaid or hidden (e.g. when it loses track of one ear because you look to the side) and they're not transformed (no perspective change or resizing happening). They definitely seem to track the head (or features like the ears, neck, etc.). Depending on your background and surroundings, that's actually a rather trivial task.
Question 2: Sure! There are lots of premade toolsets out there, but you could as well use some general image processing library such as OpenCV to do the math. Augmented reality usually uses some kind of pattern (e.g. a card or page with a known pattern) to determine the correct position and transformation for the contents to be added to the image. There are also approaches using the device's orientation and perspective changes in camera images to determine depth/position (I really like this demo).
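To make the pattern idea more concrete, here is a minimal sketch of the pose-estimation step with OpenCV's cv::solvePnP. Everything in it is hypothetical example data: the card size, the detected corner positions and the camera intrinsics would all come from your own detection and calibration code.

#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>
#include <iostream>
#include <vector>

int main()
{
    // Card corners in the card's own coordinate system (a 9 x 6 cm card, in cm).
    std::vector<cv::Point3f> objectCorners = {
        {0.f, 0.f, 0.f}, {9.f, 0.f, 0.f}, {9.f, 6.f, 0.f}, {0.f, 6.f, 0.f}
    };
    // Where those corners were detected in the image (made-up values).
    std::vector<cv::Point2f> imageCorners = {
        {320.f, 200.f}, {480.f, 210.f}, {470.f, 330.f}, {315.f, 320.f}
    };
    // Approximate pinhole intrinsics (fx, fy, cx, cy) for a 640x480 camera.
    cv::Mat K = (cv::Mat_<double>(3, 3) << 600, 0, 320,  0, 600, 240,  0, 0, 1);

    cv::Mat rvec, tvec;
    cv::solvePnP(objectCorners, imageCorners, K, cv::Mat(), rvec, tvec);

    // rvec/tvec now describe the card's pose; a renderer would use them to
    // transform the 3D content so it appears attached to the card.
    std::cout << "rotation: " << rvec.t() << "\ntranslation: " << tvec.t() << std::endl;
    return 0;
}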

Implementing the warp/liquify tool in C++

I'm looking for a way to warp an image similar to how the Liquify/IWarp tool works in Photoshop/GIMP.
I would like to use it to move a few points on an image to make it look wider than it was originally.
Anyone have any ideas on libraries that could be used to do this? I'm currently using OpenCV in the same project, so if there's a way to do it with that it would be easiest, but I'm open to anything really.
Thanks.
EDIT: Here's an example of what I'm looking to do: http://i.imgur.com/wMOzq.png
All I've done there is pull a few points out sideways, and that's what I'm looking to do from inside my application.
From the search 'image warp operator source c++' I get:
..... Added function 'CImg::[get_]warp()' that can warp an image using a deformation .... Added function 'CImg::save_cpp()' allowing to save an image directly as C/C++ source code. ...
so CImg could do well for you.
OpenCV's remap can accomplish this. You only have to provide x and y displacement maps. If you are clever, you can create the displacement map directly, which works well for brush-stroke manipulation similar to Photoshop's Liquify. The mesh-warp and sparse-point-map approach is another option, but it essentially computes the displacement map by interpolation.
You may want to take a look at http://code.google.com/p/imgwarp-opencv/. This library seems to be exactly what you need: image warping based on a sparse grid.
Another option is, of course, to generate the displacements yourself and use OpenCV's cv::remap() function, as in the sketch below.
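A minimal sketch of that displacement-map idea, assuming an arbitrary input image on disk; the brush centre, radius and strength are made-up parameters standing in for a single liquify-style stroke:

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/imgcodecs.hpp>
#include <cmath>

int main()
{
    cv::Mat src = cv::imread("input.png");             // any test image
    if (src.empty())
        return 1;

    cv::Mat mapX(src.size(), CV_32FC1), mapY(src.size(), CV_32FC1);

    const cv::Point2f brush(src.cols * 0.5f, src.rows * 0.5f); // stroke centre
    const float radius = 80.f;                          // brush radius in pixels
    const float strength = 30.f;                        // maximum horizontal pull

    for (int y = 0; y < src.rows; ++y)
        for (int x = 0; x < src.cols; ++x)
        {
            float dx = x - brush.x, dy = y - brush.y;
            float falloff = std::exp(-(dx * dx + dy * dy) / (2.f * radius * radius));
            // Sample from a point shifted left, which visually pushes content to the right.
            mapX.at<float>(y, x) = x - strength * falloff;
            mapY.at<float>(y, x) = static_cast<float>(y);
        }

    cv::Mat warped;
    cv::remap(src, warped, mapX, mapY, cv::INTER_LINEAR);
    cv::imwrite("warped.png", warped);
    return 0;
}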

Creating an SVG image in C++

I want to create an SVG image programmatically, preferably in C++, from some image points. Can anyone help me with that?
simple-svg is a header-only SVG library that is easy to use:
simple_svg_1.0.0
Here is an example how to use it: main_1.0.0.cpp
It is also hosted on GitHub.
You could check out LibBoard. I have no experience with it myself, so I can't vouch for its usefulness, but it does appear to be what you're looking for. I'm not sure how complicated your target image is going to be, but the website states:
For now, LibBoard can handle primitives like lines, rectangles,
triangles, polylines, circles, ellipses and text.
In future releases, bitmap insertion should be supported.
See the TODO file for a list of features that should be added in future releases.
So you'll have basic functionality from it, and you can probably mess around with the basic list of shapes to create some pretty complicated images.
I used GraphViz to do that, using the 'dot' language. Check it out.
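If the geometry is simple, you may not even need a library: an SVG document is plain XML, so a sketch like the following (with hypothetical points and no error handling) writes a polyline from a list of image points using only the standard library.

#include <fstream>
#include <utility>
#include <vector>

int main()
{
    // Example image points; in practice these would come from your own data.
    std::vector<std::pair<double, double>> points = { {10, 10}, {60, 40}, {120, 20}, {180, 90} };

    std::ofstream svg("points.svg");
    svg << "<svg xmlns=\"http://www.w3.org/2000/svg\" width=\"200\" height=\"100\">\n";
    svg << "  <polyline fill=\"none\" stroke=\"black\" stroke-width=\"2\" points=\"";
    for (const auto& p : points)
        svg << p.first << "," << p.second << " ";
    svg << "\"/>\n</svg>\n";
    return 0;
}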