Generating a 3D representation from a 2D picture using OpenCV C++

I don't know if the title explains what I'm trying to do, but after a few days with no sleep spent searching the web, I still pretty much don't know how to do this.
I'm trying to take a point on a 2D image and find the depth of the rest of the points in the selected region relative to that point.
From all the internet searching I've done, it seems this is possible with a single 2D image, but stereo images could be used to increase the accuracy of the triangulation.
Ultimately I am looking into doing this with stereo vision, but if anyone knows how to do it with a single image I would appreciate that too.
I am using OpenCV 2.4.9 in C++ with VC12 (Visual Studio 2013).
Any help is appreciated.
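
For the stereo route: with a calibrated, rectified pair, OpenCV 2.4's block matcher gives a disparity map, and since depth is inversely proportional to disparity (Z = f*B/d), depth *relative* to a chosen point needs no baseline or focal length at all. A minimal sketch; the file names and the two points are placeholders:

```cpp
#include <opencv2/opencv.hpp>
#include <cstdio>

int main()
{
    // Placeholder file names: a rectified grayscale stereo pair.
    cv::Mat left  = cv::imread("left.png",  cv::IMREAD_GRAYSCALE);
    cv::Mat right = cv::imread("right.png", cv::IMREAD_GRAYSCALE);
    if (left.empty() || right.empty()) return 1;

    // Block matcher: 80 disparity levels, 21x21 SAD window (tune for your data).
    cv::StereoBM bm(cv::StereoBM::BASIC_PRESET, 80, 21);
    cv::Mat disp16;
    bm(left, right, disp16, CV_16S);              // fixed-point, 4 fractional bits

    cv::Mat disp;
    disp16.convertTo(disp, CV_32F, 1.0 / 16.0);   // real-valued disparities

    // Z = f*B/d, so Z(p) / Z(ref) = d(ref) / d(p): relative depth without f or B.
    cv::Point ref(320, 240);                      // the user-selected point
    cv::Point p(330, 250);                        // another point in the region
    float dRef = disp.at<float>(ref), dP = disp.at<float>(p);
    if (dRef > 0.f && dP > 0.f)
        std::printf("depth(p) / depth(ref) = %f\n", dRef / dP);
    return 0;
}
```

For a single image there is no triangulation to exploit; monocular depth relies on learned or heuristic cues and is a research topic rather than a library call.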

Related

Perspective transform (warp) of an image

I'm stuck trying to get perspective transformation to work.
I want to draw an image so it fits 4 given points. It's something like the tool in Photoshop used to change the perspective of an image (see image below).
I have an image in byte array and I'm not using any additional libraries.
Everything I've found so far was either for OpenCV or didn't do what I wanted.
I have found an open-source program, PhotoDemon, that does exactly what I want, and its code is available. I spent many hours trying to get it to work, but it gives me completely weird results (second line in the image below).
Could someone provide me with some code, step-by-step math of what to do and how, or even just pseudocode? I'm a little bit sick of it. It seems easy, but I need some help.
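
No library is needed for this: build the 3x3 homography that maps the unit square onto your four points (the closed form below is Heckbert's square-to-quad mapping, which avoids a linear solver), invert it, and inverse-map every destination pixel back into the source. A sketch, assuming a packed 24-bit RGB byte array; the struct and function names are made up:

```cpp
struct Pt { double x, y; };

// Homography H (row-major 3x3) mapping the unit square onto the quad:
// (0,0)->q[0], (1,0)->q[1], (1,1)->q[2], (0,1)->q[3].
static void squareToQuad(const Pt q[4], double H[9])
{
    double px = q[0].x - q[1].x + q[2].x - q[3].x;
    double py = q[0].y - q[1].y + q[2].y - q[3].y;
    double g = 0, h = 0;
    if (px != 0 || py != 0) {                       // true perspective case
        double dx1 = q[1].x - q[2].x, dx2 = q[3].x - q[2].x;
        double dy1 = q[1].y - q[2].y, dy2 = q[3].y - q[2].y;
        double det = dx1 * dy2 - dy1 * dx2;
        g = (px * dy2 - py * dx2) / det;
        h = (dx1 * py - dy1 * px) / det;
    }
    H[0] = q[1].x - q[0].x + g * q[1].x;  H[1] = q[3].x - q[0].x + h * q[3].x;  H[2] = q[0].x;
    H[3] = q[1].y - q[0].y + g * q[1].y;  H[4] = q[3].y - q[0].y + h * q[3].y;  H[5] = q[0].y;
    H[6] = g;                             H[7] = h;                             H[8] = 1.0;
}

// 3x3 inverse via the adjugate; the scale factor is irrelevant for a homography.
static void invert3x3(const double m[9], double inv[9])
{
    inv[0] = m[4]*m[8] - m[5]*m[7];  inv[1] = m[2]*m[7] - m[1]*m[8];  inv[2] = m[1]*m[5] - m[2]*m[4];
    inv[3] = m[5]*m[6] - m[3]*m[8];  inv[4] = m[0]*m[8] - m[2]*m[6];  inv[5] = m[2]*m[3] - m[0]*m[5];
    inv[6] = m[3]*m[7] - m[4]*m[6];  inv[7] = m[1]*m[6] - m[0]*m[7];  inv[8] = m[0]*m[4] - m[1]*m[3];
}

// Draw src (sw x sh, packed RGB) into dst (dw x dh) so it fits the quad q.
void warpToQuad(const unsigned char* src, int sw, int sh,
                unsigned char* dst, int dw, int dh, const Pt q[4])
{
    double H[9], inv[9];
    squareToQuad(q, H);
    invert3x3(H, inv);
    for (int y = 0; y < dh; ++y)
        for (int x = 0; x < dw; ++x) {
            // Map the destination pixel back to unit-square coordinates (s, t).
            double w = inv[6]*x + inv[7]*y + inv[8];
            double s = (inv[0]*x + inv[1]*y + inv[2]) / w;
            double t = (inv[3]*x + inv[4]*y + inv[5]) / w;
            if (s < 0 || s > 1 || t < 0 || t > 1) continue;   // outside the quad
            int sx = (int)(s * (sw - 1) + 0.5);               // nearest neighbour;
            int sy = (int)(t * (sh - 1) + 0.5);               // use bilinear for quality
            for (int c = 0; c < 3; ++c)
                dst[(y*dw + x)*3 + c] = src[(sy*sw + sx)*3 + c];
        }
}
```

The "completely weird results" you describe usually come from the corner order not matching the formula's convention, or from applying the forward map instead of the inverse.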

Visual Odometry in opencv (possibly using RGBD)

I am attempting to implement a visual odometry solution in OpenCV and am running into a few problems. This is quite a broad question, so I apologise in advance; I have a number of questions.
My understanding of the problem currently is:
Obtain some model to represent the correspondence between two successive images, be that optical flow or feature matching.
Obtain the fundamental (and then essential if needed) matrix from these point correspondences.
Calculate [R|t] from that.
I am aware of the findFundamentalMat function in OpenCV, but I think that only takes 2D point matches? In Scaramuzza and Fraundorfer's paper 'Visual Odometry - Part I' they suggest that 3D-to-2D correspondences will be most accurate.
My question, then, is: can the depth data retrieved from a Kinect, which gives me 3D feature points, be used in OpenCV to give me an egomotion estimate?
I've also taken a look at solvePnP, but as far as I'm aware this only solves for a single frame (for when you know the real model-space coordinates of features, as with a fiducial marker).
Although I did consider: if I track 3D points between two frames, solving the perspective in the first frame and then in the second frame with the same points should give me the transformation between the two?
I apologize for this badly formulated question; I am still new to computer vision. Rather than attempting to answer this question if it is too much of a minefield, I would appreciate a pointer to any related literature or OpenCV tutorials on odometry. Thanks.
There is an example, rgbdodometry.cpp, in the opencv\samples\cpp folder.
Have you seen it?
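
On the solvePnP idea in the question: if the Kinect depth gives you the 3D positions of features in frame k, and you match those same features into frame k+1, cv::solvePnPRansac recovers the camera motion between the two frames directly, which is the 3D-to-2D scheme the Scaramuzza and Fraundorfer paper favours. A minimal sketch; the point lists and intrinsics are assumed to come from your matcher and calibration step:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Estimate the motion between two frames from 3D points seen in frame k
// (backprojected with Kinect depth) and their 2D matches in frame k+1.
// objectPoints / imagePoints are assumed to be filled by your feature
// matcher; K and distCoeffs come from camera calibration.
cv::Mat estimateMotion(const std::vector<cv::Point3f>& objectPoints,
                       const std::vector<cv::Point2f>& imagePoints,
                       const cv::Mat& K, const cv::Mat& distCoeffs)
{
    cv::Mat rvec, tvec;
    std::vector<int> inliers;
    // RANSAC copes with the inevitable bad matches.
    cv::solvePnPRansac(objectPoints, imagePoints, K, distCoeffs,
                       rvec, tvec, false, 100, 8.0, 100, inliers);

    cv::Mat R;
    cv::Rodrigues(rvec, R);          // rotation vector -> 3x3 matrix

    // Pack [R|t] as a 4x4 transform of frame k's points into frame k+1's camera.
    cv::Mat Rt = cv::Mat::eye(4, 4, CV_64F);
    R.copyTo(Rt(cv::Rect(0, 0, 3, 3)));
    tvec.copyTo(Rt(cv::Rect(3, 0, 1, 3)));
    return Rt;
}
```

Chaining the resulting 4x4 matrices across successive frame pairs gives the trajectory estimate.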

Real time Object detection: where to learn?

I am working with OpenCV these days and I am capable of doing 99% of the stuff explained in the official OpenCV tutorials. I also managed to do motion tracking manually with background subtraction, something some users claimed was impossible.
However, right now I am working on object detection: I need to track the hand and find whether it has moved to the left or right. Can this be done with the following steps, the ones I used for motion detection? (A sketch follows the list.)
Get two frames of camera video (real time)
Blur them to reduce noise
Threshold to find the hand (or skip this if the blur is enough)
Find the absolute difference between the two images
Get PSR
Find the pixel position of the motion
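
A minimal sketch of those steps (minus the PSR computation, which I leave out) using OpenCV's C++ API; the pixel position of the motion falls out of the image moments of the thresholded difference:

```cpp
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::VideoCapture cap(0);                            // default camera
    if (!cap.isOpened()) return 1;

    cv::Mat prev, cur, gray, diff, mask;
    cap >> prev;
    cv::cvtColor(prev, prev, cv::COLOR_BGR2GRAY);
    cv::GaussianBlur(prev, prev, cv::Size(5, 5), 0);

    for (;;) {
        cap >> cur;
        if (cur.empty()) break;
        cv::cvtColor(cur, gray, cv::COLOR_BGR2GRAY);
        cv::GaussianBlur(gray, gray, cv::Size(5, 5), 0);   // blur to cut noise

        cv::absdiff(gray, prev, diff);                     // difference of 2 frames
        cv::threshold(diff, mask, 30, 255, cv::THRESH_BINARY);

        // Centroid of the changed pixels = rough pixel position of the motion.
        cv::Moments m = cv::moments(mask, true);
        if (m.m00 > 0) {
            double cx = m.m10 / m.m00, cy = m.m01 / m.m00;
            std::cout << "motion at (" << cx << ", " << cy << ")\n";
            // Track cx over time: increasing -> hand moving right, etc.
        }
        prev = gray.clone();
        cv::imshow("mask", mask);
        if (cv::waitKey(30) == 27) break;                  // Esc quits
    }
    return 0;
}
```

For a hand specifically, frame differencing reacts to any motion in the scene; skin-colour thresholding or a tracker (the Kalman filter you mention smooths the estimates) helps once this baseline works.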
However, it seems this is not 100% the same as motion detection, because I have read about things like the Kalman filter and block matching, which I did not use in motion detection. I did, however, find this tutorial:
http://homepages.cae.wisc.edu/~ece734/project/s06/lintangwuReport.pdf
But I really need your advice. Is there a tutorial that teaches me how to do this? I am interested in learning the core theory with an OpenCV (C++) explanation.
Since I am not good at maths (I am working on it; I didn't go to university, they found me and invited me to join the final year for free because of my programming skills, so I missed the maths), anything full of math will not work.
Please help. Thank you.

Implementing the warp/liquify tool in C++

I'm looking for a way to warp an image similar to how the liquify/IWarp tool works in Photoshop/GIMP.
I would like to use it to move a few points on an image to make it look wider than it was originally.
Does anyone have any ideas on libraries that could be used to do this? I'm currently using OpenCV in the same project, so if there's a way using that it would be easiest, but I'm open to anything really.
Thanks.
EDIT: Here's an example of what I'm looking to do: http://i.imgur.com/wMOzq.png
All I've done there is pull a few points out sideways, and that's what I'm looking to do from inside my application.
From the search 'image warp operator source c++' I get:
"... Added function 'CImg<T>::[get_]warp()' that can warp an image using a deformation ... Added function 'CImg<T>::save_cpp()' allowing to save an image directly as a C/C++ source code. ..."
so CImg could do well for you.
OpenCV's remap can accomplish this. You only have to provide x and y displacement maps. If you are clever, you can create the displacement map directly, which works well for brush-stroke manipulation similar to Photoshop's liquify. The mesh-warp and sparse-point-map approaches are another option, but they essentially compute the displacement map by interpolation.
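For instance, here is a sketch of a single liquify-style "push" built directly as a displacement map and fed to cv::remap; the centre, radius, and strength parameters are made up for illustration:

```cpp
#include <opencv2/opencv.hpp>
#include <cmath>

// Push pixels around 'centre' by 'shift' with a Gaussian falloff: the basic
// liquify/IWarp brush. remap uses an inverse map: mapX/mapY say, for every
// destination pixel, where to read from in the source.
cv::Mat liquifyPush(const cv::Mat& src, cv::Point2f centre,
                    cv::Point2f shift, float radius)
{
    cv::Mat mapX(src.size(), CV_32FC1), mapY(src.size(), CV_32FC1);
    for (int y = 0; y < src.rows; ++y)
        for (int x = 0; x < src.cols; ++x) {
            float dx = x - centre.x, dy = y - centre.y;
            float falloff = std::exp(-(dx*dx + dy*dy) / (2 * radius * radius));
            mapX.at<float>(y, x) = x - shift.x * falloff;  // read "behind" the push
            mapY.at<float>(y, x) = y - shift.y * falloff;
        }
    cv::Mat dst;
    cv::remap(src, dst, mapX, mapY, cv::INTER_LINEAR);
    return dst;
}
```

The "make it look wider" case is this push applied with a horizontal shift at points on each side of the region.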
You may want to take a look at http://code.google.com/p/imgwarp-opencv/. This library seems to be exactly what you need: image warping based on a sparse grid.
Another option is, of course, to generate the displacements yourself and use OpenCV's cv::remap() function.

stitching aerial images to create a map

I am working on a project to stitch together around 400 high-resolution aerial images, around 36000x2600, to create a map. I am currently using OpenCV, and so far I have obtained the match points between the images. Now I am at a loss figuring out how to get the matrix transformations of the images so I can begin the stitching process. I have absolutely no background in working with images or graphics, so this is a first time for me. Can I get some advice on how I would approach this?
The images that I received also came with a data sheet showing the longitude, latitude, airplane wing angle, altitude, etc. of each image. I am unsure how accurate these data are, but I am wondering if I can use this information to perform the proper matrix transformations that I need.
Thanks
Do you want to understand the math behind the process, or just have a superficial idea of what's going on and simply use it?
The regular term for "image stitching" is image alignment. Feed Google with it and you'll find tons of sources.
For example, here.
Best regards,
zhengtonic
The recent OpenCV 2.3 release implemented a whole image-stitching pipeline. Maybe it is worth looking at.
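
Concretely, with the match points you already have, cv::findHomography with RANSAC estimates the per-pair transformation matrix and cv::warpPerspective applies it; this pairwise step is the core of what the stitching module automates. A minimal sketch; the matched point vectors are assumed to come from your existing matching code:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// pts1[i] in image 1 matches pts2[i] in image 2 (from your matcher).
cv::Mat stitchPair(const cv::Mat& img1, const cv::Mat& img2,
                   const std::vector<cv::Point2f>& pts1,
                   const std::vector<cv::Point2f>& pts2)
{
    // RANSAC rejects outlier matches while estimating the 3x3 homography
    // that maps image 2's coordinates into image 1's frame.
    cv::Mat H = cv::findHomography(pts2, pts1, cv::RANSAC, 3.0);

    // Canvas big enough for both images (crude: side-by-side extent).
    cv::Mat canvas;
    cv::warpPerspective(img2, canvas, H,
                        cv::Size(img1.cols + img2.cols, img1.rows));
    img1.copyTo(canvas(cv::Rect(0, 0, img1.cols, img1.rows)));
    return canvas;
}
```

Over 400 images, pairwise chaining drifts; the longitude/latitude/altitude metadata can seed the initial placement, and the stitching module's internal bundle adjustment refines it.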