Visual odometry in OpenCV (possibly using RGBD) - C++

I am attempting to implement a visual odometry solution in OpenCV and am running into a few problems. This is quite a broad topic, so I apologise in advance; I have a number of questions.
My understanding of the problem currently is:
Obtain some model to represent the correspondence between two successive images, be that optical flow or feature matching.
Obtain the fundamental (and then essential if needed) matrix from these point correspondences.
Calculate [R|t] from that (see the sketch below).
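For concreteness, this is roughly what the 2D-2D version of steps 2-3 looks like in newer OpenCV (findEssentialMat and recoverPose exist from version 3.0 onwards); the matched point vectors and the intrinsic matrix K are placeholders for whatever your own matching and calibration produce:

```cpp
// Minimal sketch of the 2D-2D pipeline above (OpenCV 3.x+). `pts1`/`pts2`
// are matched image points from two successive frames, `K` is the 3x3
// camera intrinsic matrix -- all of these come from your own code.
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

cv::Mat estimateMotion(const std::vector<cv::Point2f>& pts1,
                       const std::vector<cv::Point2f>& pts2,
                       const cv::Mat& K,
                       cv::Mat& R, cv::Mat& t)
{
    // Essential matrix with RANSAC to reject bad matches
    cv::Mat inliers;
    cv::Mat E = cv::findEssentialMat(pts1, pts2, K, cv::RANSAC, 0.999, 1.0, inliers);

    // Decompose E into R, t. Note: from 2D-2D matches alone, t is only
    // recovered up to an unknown scale.
    cv::recoverPose(E, pts1, pts2, K, R, t, inliers);
    return E;
}
```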
I am aware of the findFundamentalMat function in OpenCV, but I think that only takes 2D point matches? In Scaramuzza and Fraundorfer's paper 'Visual Odometry - Part 1' they suggest that 3D-to-2D correspondences will be most accurate.
I guess my question then is: can the depth data retrieved from a Kinect, giving me 3D feature points, be used in OpenCV to give me an egomotion estimate?
I've also taken a look at solvePnP, but as far as I'm aware this only solves for a single frame (for when you know the real model-space coordinates of features, like with a fiducial marker).
Although I did consider: if I track 3D points between two frames, then solving the pose in the first frame and again in the second frame with the same points should give me the transformation between the two?
I apologize for this badly formulated question; I am still new to computer vision. Rather than attempting to answer this question if it is too much of a minefield, I would appreciate a pointer to any related literature or OpenCV tutorials on odometry. Thanks.

There is an example, rgbdodometry.cpp, in the opencv\samples\cpp folder.
Have you seen it?
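If you want to try the 3D-to-2D route from the Scaramuzza/Fraundorfer paper yourself, a rough sketch is below. It assumes you already have matched features between two frames, the camera intrinsics K, and a Kinect depth value for each frame-1 feature; all variable names are placeholders:

```cpp
// Sketch: estimate camera motion between two frames using 3D points
// back-projected from frame 1's depth and their matched 2D locations in
// frame 2. Assumes a pinhole model with intrinsics fx, fy, cx, cy in K.
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

void egomotionFromDepth(const std::vector<cv::Point2f>& pts1,   // features in frame 1
                        const std::vector<float>& depth1,       // depth at each feature (same units as t)
                        const std::vector<cv::Point2f>& pts2,   // matched features in frame 2
                        const cv::Mat& K,                       // 3x3 intrinsic matrix (CV_64F)
                        cv::Mat& rvec, cv::Mat& tvec)
{
    const double fx = K.at<double>(0, 0), fy = K.at<double>(1, 1);
    const double cx = K.at<double>(0, 2), cy = K.at<double>(1, 2);

    // Back-project frame-1 features to 3D using the depth map
    std::vector<cv::Point3f> pts3d;
    pts3d.reserve(pts1.size());
    for (size_t i = 0; i < pts1.size(); ++i)
    {
        const float z = depth1[i];
        pts3d.emplace_back((pts1[i].x - cx) * z / fx,
                           (pts1[i].y - cy) * z / fy,
                           z);
    }

    // rvec/tvec map frame-1 coordinates into frame 2, i.e. the relative
    // motion between the two frames (RANSAC rejects bad matches).
    cv::solvePnPRansac(pts3d, pts2, K, cv::Mat(), rvec, tvec);
}
```

Because the depth gives you metric 3D points, this avoids the scale ambiguity of the pure 2D-2D essential-matrix approach.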

Raycasting: render 2D chest CT scans to 3D using OpenGL/C++

I am going to split this question into 3 parts.
First, I've been given this problem and I don't know where to start. If you have solved a related problem, could you give me some hints and keywords to help me do some more research?
I have done some research on my own.
So here are some 2D chest CT scans (sorry, due to the reputation rule I can't embed images directly).
All photos are taken from the same angle, so I think I can simply read each photo into a vector of pixels and do some thresholding so that all black and near-black pixels become empty (non-coloured) pixels. Next, I'll create a vector called vector_of_photo holding those vectors. The index of each vector in vector_of_photo then becomes the Z-index.
Now I can render a 3D image from those vectors of pixels, right?
Second, I have trouble understanding the raycasting algorithm.
I think the idea is: once I have a box of voxels, every time I rotate the box, straight lines (rays) are cast from the camera angle through the box. As soon as a ray hits a coloured voxel it stops, and that voxel is rendered (or more specifically, its value is copied to the corresponding location on the image plane).
Did I understand it correctly?
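In code, my mental model of the simplest case (orthographic view, rays cast straight along the Z axis, no rotation yet) is roughly the following; volume, width, height and depth are placeholders:

```cpp
// Sketch: orthographic "first-hit" raycast straight along the Z axis.
// `volume[z][y][x]` holds voxel intensities (0 = empty after thresholding).
// For arbitrary camera angles you would instead march each ray through the
// volume along its own direction (e.g. with a DDA / ray-box intersection).
#include <cstdint>
#include <vector>

std::vector<uint8_t> raycastFirstHit(
    const std::vector<std::vector<std::vector<uint8_t>>>& volume,
    int width, int height, int depth)
{
    std::vector<uint8_t> image(width * height, 0);
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            for (int z = 0; z < depth; ++z)          // march the ray front to back
                if (volume[z][y][x] > 0)             // first non-empty voxel stops the ray
                {
                    image[y * width + x] = volume[z][y][x];
                    break;
                }
    return image;
}
```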
Finally, OpenGL/C++ is just the option I think I'm going to use to solve this problem. I'm not sure whether it is a good idea or not, so please give me some hints about the programming languages, libraries, or modules I should take a look at.
I happen to be working on the same problem in my spare time. Haha :)
Here is one approach to your problem:
Load the images into your application, such that you get the 3D volumetric dataset that you describe
Remove all points that don't fit within some range of values (e.g. 0.4/1.0 to 0.6/1.0 brightness). You may need to apply preprocessing and filtering; a sketch of these first two steps follows the list.
Fit a mesh to the resulting point cloud with open-source software. Here is a good blog post about that
https://towardsdatascience.com/5-step-guide-to-generate-3d-meshes-from-point-clouds-with-python-36bad397d8ba
Take the resulting mesh (probably an STL file) and visualize it in any software you want (Blender 3D, Unity 3D, Cinema 4D, a custom OpenGL application), anything really.
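A rough sketch of the first two steps (loading the slices with OpenCV and thresholding them into a point cloud you can then feed to a meshing tool) is below; the file-naming scheme and threshold band are assumptions you would adapt:

```cpp
// Sketch: load CT slices as greyscale images and keep only voxels whose
// brightness falls within a chosen band, producing an (x, y, z) point cloud.
#include <opencv2/imgcodecs.hpp>
#include <opencv2/core.hpp>
#include <cstdio>
#include <vector>

std::vector<cv::Point3f> slicesToPointCloud(int numSlices)
{
    std::vector<cv::Point3f> cloud;
    for (int z = 0; z < numSlices; ++z)
    {
        char name[64];
        std::snprintf(name, sizeof(name), "slice_%03d.png", z);   // assumed naming scheme
        cv::Mat slice = cv::imread(name, cv::IMREAD_GRAYSCALE);
        if (slice.empty()) continue;

        for (int y = 0; y < slice.rows; ++y)
            for (int x = 0; x < slice.cols; ++x)
            {
                const uchar v = slice.at<uchar>(y, x);
                if (v >= 102 && v <= 153)            // roughly the 0.4-0.6 brightness band
                    cloud.emplace_back((float)x, (float)y, (float)z);
            }
    }
    return cloud;
}
```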
My own approach to this problem is very similar to the one you suggest in your question, and I have already made some headway. Therefore, I thought it would be good to suggest another route.
NOTE: Please be aware that what you are working on is not a trivial problem. It's a large project, and there are many commercial companies that have put years into doing just this. This is a great project for learning OpenGL, rendering, and other concepts. It's perfectly doable, but you may be looking at several months of work, and lots of trial and error. Good luck!
It's not often that two people happen to be working on the same problem, so if you want to discuss further, feel free to contact me over LinkedIn and/or post a comment below. www.linkedin.com/in/michael-sohnen-a2454b1b2

Grasp hand pose estimation with Machine learning

What I'm trying to do is..
Dataset: pictures of a hand holding a stick, where I know the 3D positions of the joints (the 3D pose) for each picture. The pictures are taken from the same position, so the hand is the only thing that moves.
Input: a picture of a hand.
Output: the 3D hand pose.
Is this possible, and if so, how could it be done? Since I'm a newbie to ML, I want to get ideas for a good understanding. Thanks!
It should be possible, but it will be a hard research project.
Because the problem requires such complex output, a machine learning approach to the problem will take a huge number of training examples, more than you could generate by hand. A good approach might be to make a small program that can 3D-render an image of a hand in pose x, with random lighting, random hand size, etc. Then feed millions of those training images to a convolutional neural net with deep learning, where the final output neurons encode the poses.
Using that same program, an alternative approach would be to do gradient descent on the pose, repeatedly rendering poses until you get the best match. That's called a generative model. It doesn't involve a neural net, but it would probably be slow. There are no doubt other approaches too.
If you're interested, Microsoft has been working on this problem to enable new types of Xbox Kinect games: https://www.microsoft.com/en-us/research/project/fully-articulated-hand-tracking/
All in all though, if you're new to computer vision and machine learning, I'd recommend starting with simpler challenges first.

Head pose estimation with OpenCV

I have a specific question with regard to head pose. I am looking into creating a cylindrical model that maps 2D points onto this 3D model, and then comparing points from successive frames using optical flow, so that later on I can perform pose estimation.
I discovered that OpenCV has a cylindrical warper (detail::CylindricalWarper):
http://docs.opencv.org/modules/stitching/doc/warpers.html#detail::CylindricalWarper
I have also read that OpenGL might be used for this.
I think what I am looking for is a texture map, but here is an example of what I am after:
https://www.youtube.com/watch?v=Etj_aktbnwM
Any guidance would be much appreciated,
Thanks.
Here is a good tutorial to get you started:
http://www.morethantechnical.com/2010/03/19/quick-and-easy-head-pose-estimation-with-opencv-w-code/
I have modified this code in one project to create a real-time head pose tracker. The initial feature points were detected automatically and tracked along the frame sequence.
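At its core, that tutorial boils down to a solvePnP call on 2D-3D correspondences. A minimal sketch, assuming you already have tracked 2D feature points and the corresponding points on your 3D (e.g. cylindrical) head model, looks roughly like this:

```cpp
// Sketch: recover head rotation/translation from 2D-3D correspondences.
// `modelPoints` are points on your 3D head (or cylinder) model and
// `imagePoints` are the matching 2D features tracked with optical flow.
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

void estimateHeadPose(const std::vector<cv::Point3f>& modelPoints,
                      const std::vector<cv::Point2f>& imagePoints,
                      const cv::Mat& cameraMatrix,          // 3x3 intrinsics
                      cv::Mat& rvec, cv::Mat& tvec)
{
    // No lens distortion assumed here; pass real distortion coefficients if you have them
    cv::solvePnP(modelPoints, imagePoints, cameraMatrix, cv::Mat(), rvec, tvec);

    // rvec is a Rodrigues rotation vector; convert to a 3x3 matrix if needed
    cv::Mat R;
    cv::Rodrigues(rvec, R);
}
```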

FFT-based image registration (optionally using OpenCV) in C++?

I'm trying to align two images taken from a handheld camera.
At first, I was trying to use the OpenCV warpPerspective method based on SIFT/SURF feature points. The problem is that the feature extraction and matching process can be extremely slow when the image resolution is high (3000x4000). I tried to scale down the image before finding feature points, but the result is not as good as before. (The matrix generated by findHomography shouldn't be affected by scaling down the image, right?) And sometimes, due to a lack of good feature point matches, the result is quite strange.
After searching on this topic, it seems that solving the problem in the Fourier domain will speed up the registration process. I've found this question, which led me to the code here.
The only problem is that the code is written in Python with NumPy (not even using OpenCV), which makes it quite hard to rewrite as C++ code using OpenCV (in OpenCV, I can only find dft, and there's no fftshift or fft; I'm not very familiar with NumPy, and I'm not brave enough to simply ignore the missing methods). So I'm wondering why there isn't such a Fourier-domain image registration implementation in C++?
Can you give me some suggestions on how to implement one, point me to an already implemented C++ version, or help me turn the Python code into C++ code?
Big thanks!
I'm fairly certain that the FFT method can only recover a similarity transform, that is, only a (2D) rotation, translation, and scale. Your results might not be that great using a handheld camera.
This is not quite a direct answer to your question but, as a suggestion for a speed improvement, have you tried using a faster feature detector and descriptor? In OpenCV, SIFT/SURF are some of the slowest methods available for feature extraction/matching. You could try some of the other methods first; they all work quite well and are faster than SIFT/SURF, especially if you use the FLANN-based matcher.
I've had to do this in the past with similarly sized imagery, and using the binary descriptors OpenCV provides increases the speed significantly.
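For example, here is a rough sketch using ORB (a binary descriptor) with a brute-force Hamming matcher and a RANSAC homography; the feature count and thresholds are just typical starting values, not tuned for your images:

```cpp
// Sketch: ORB features + brute-force Hamming matching + RANSAC homography.
#include <opencv2/features2d.hpp>
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

cv::Mat alignWithOrb(const cv::Mat& img1, const cv::Mat& img2)
{
    cv::Ptr<cv::ORB> orb = cv::ORB::create(4000);            // number of features is a tuning knob
    std::vector<cv::KeyPoint> kp1, kp2;
    cv::Mat desc1, desc2;
    orb->detectAndCompute(img1, cv::noArray(), kp1, desc1);
    orb->detectAndCompute(img2, cv::noArray(), kp2, desc2);

    // Hamming distance is the right metric for binary descriptors like ORB;
    // cross-checking discards asymmetric (likely bad) matches.
    cv::BFMatcher matcher(cv::NORM_HAMMING, /*crossCheck=*/true);
    std::vector<cv::DMatch> matches;
    matcher.match(desc1, desc2, matches);

    std::vector<cv::Point2f> pts1, pts2;
    for (const cv::DMatch& m : matches)
    {
        pts1.push_back(kp1[m.queryIdx].pt);
        pts2.push_back(kp2[m.trainIdx].pt);
    }
    return cv::findHomography(pts1, pts2, cv::RANSAC, 3.0);  // maps img1 onto img2
}
```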
If you only need a shift, you can use OpenCV's phaseCorrelate.
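A minimal sketch of that (phaseCorrelate expects same-sized, single-channel float images and recovers only a pure translation):

```cpp
// Sketch: recover the (sub-pixel) translation between two greyscale images
// with phase correlation. Only a pure shift is recovered, not rotation or scale.
#include <opencv2/imgproc.hpp>
#include <opencv2/core.hpp>

cv::Point2d estimateShift(const cv::Mat& img1, const cv::Mat& img2)
{
    cv::Mat f1, f2;
    img1.convertTo(f1, CV_32F);            // phaseCorrelate expects CV_32F/CV_64F, 1 channel
    img2.convertTo(f2, CV_32F);

    // A Hann window reduces edge effects in the FFT
    cv::Mat window;
    cv::createHanningWindow(window, f1.size(), CV_32F);

    return cv::phaseCorrelate(f1, f2, window);
}
```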

Remove the gradient of an image without a comparison image

Currently I am having much difficulty thinking of a good method for removing the gradient from an image I received.
The image is a picture taken by a microscope camera that has a light glare in the middle. The image has a pattern that runs throughout it; however, I am supposed to remove the light glare created by the camera light.
Unfortunately, due to the nature of the camera, it is not possible to take a picture on a black background with the light on to find the gradient distribution, nor do I have a comparison image without the gradient. (Note: the location of the light glare will always be consistent when the picture is taken.)
In simpler terms, it's like having a photo with a flash in it, but I want to get rid of the flash. The only problem is I have no way of obtaining the image without the flash to compare against, or even obtaining a black image with just the flash in it.
My current thought is to conduct edge detection and obtain samples in specific locations away from the edges (due to colour differences) and use those to gauge the distribution of the gradient, since those areas are supposed to have relatively identical colours. However, I was wondering if there is an easier and better way to do this.
If needed, I will post an example of the image later.
At the moment I have a preference for solving this in C++ using OpenCV, if that makes it easier.
Thanks in advance for any possible ideas for this problem. If there is another link, tutorial, or post that may solve my problem, I would greatly appreciate it.
As you can tell, there is a light being shone on the image, as the white spot shows, and the top is lighter than the bottom because of the light. The colour inside the oval is actually different when the picture is taken in colour; however, the colour between the box and the oval should be consistent. My original idea was to perhaps sample only those areas somehow and build a profile that I can use to remove the light, but I am unsure how effective that would be, or whether there is a better way.
EDIT:
Well, I tried out Roger's suggestion and the results were surprisingly good: a Gaussian blur with a 110-pixel kernel to estimate the illumination, with CLAHE applied on top of that (both done in OpenCV).
However, my colleague told me that the image doesn't look perfectly uniform and pointed out that the area where the light used to be is slightly brighter. He suggested trying a selective Gaussian blur, where areas above a certain threshold pixel value are not blurred while the rest of the image is.
Does anyone have opinions regarding this, and perhaps a link, tutorial, or example of something like this being done? Most of the things I find tend to be selective blurs for programs like Photoshop and GIMP.
EDIT 2:
It is difficult to tell just by eye, but I believe I have achieved relatively good uniformity by using a simple plane-fitting algorithm: fit a plane to the points (x, y, z), where z is the pixel value, and evaluate it as (-A*x - B*y) / C. I think this could perhaps be improved by fitting a sine function instead? I am unsure, but I am relatively happy with the results. Many thanks to Roger for the great ideas.
I believe using a bunch of pictures and taking the average would have been another good method (also suggested by Roger), but unfortunately I was not able to implement this since I was not supplied with multiple pictures, and the machine is under modification, so I was unable to use it.
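A sketch of one way to do such a plane fit in OpenCV (a least-squares fit of a plane to the pixel intensities, then dividing the fitted plane out); the details here are an illustration rather than the exact code used:

```cpp
// Sketch: fit a plane z = a*x + b*y + c to the pixel intensities by least
// squares, then divide it out as a low-frequency illumination estimate.
// For large images, you could fit on a subsampled grid of pixels instead.
#include <opencv2/core.hpp>

cv::Mat flattenIllumination(const cv::Mat& gray)          // single-channel 8-bit input
{
    // Build the least-squares system A * [a b c]^T = z
    cv::Mat A(gray.rows * gray.cols, 3, CV_64F);
    cv::Mat z(gray.rows * gray.cols, 1, CV_64F);
    int i = 0;
    for (int y = 0; y < gray.rows; ++y)
        for (int x = 0; x < gray.cols; ++x, ++i)
        {
            A.at<double>(i, 0) = x;
            A.at<double>(i, 1) = y;
            A.at<double>(i, 2) = 1.0;
            z.at<double>(i, 0) = gray.at<uchar>(y, x);
        }

    cv::Mat coeffs;                                        // [a, b, c]
    cv::solve(A, z, coeffs, cv::DECOMP_SVD);

    // Evaluate the fitted plane at every pixel
    cv::Mat plane(gray.size(), CV_64F);
    for (int y = 0; y < gray.rows; ++y)
        for (int x = 0; x < gray.cols; ++x)
            plane.at<double>(y, x) = coeffs.at<double>(0) * x +
                                     coeffs.at<double>(1) * y +
                                     coeffs.at<double>(2);

    // Divide the plane out, scaled by its mean to keep the overall brightness
    cv::Mat grayF, flat;
    gray.convertTo(grayF, CV_64F);
    cv::divide(grayF, plane, flat, cv::mean(plane)[0]);
    flat.convertTo(flat, CV_8U);
    return flat;
}
```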
I have done some work in this area previously and found that a large Gaussian blur kernel can produce a reasonable approximation of the background illumination. I will try to get something working on your example image but, in the meantime, here is an example of your image after a Gaussian blur with a radius of 50 pixels, which may help you decide whether it's worth pursuing.
UPDATE
Just playing with this image, you can actually get a reasonable improvement using adaptive histogram equalisation (I used CLAHE) - see comparison below - any use?
I will update this answer with more details as I progress.
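In case it helps, here is a rough sketch of the two steps above in OpenCV (a large Gaussian blur as the illumination estimate, divide it out, then CLAHE); the blur sigma and CLAHE clip limit are just starting values to tune:

```cpp
// Sketch: estimate the background illumination with a large Gaussian blur,
// divide it out, then apply CLAHE to even out the local contrast.
#include <opencv2/imgproc.hpp>
#include <opencv2/core.hpp>

cv::Mat removeIllumination(const cv::Mat& gray)            // single-channel 8-bit input
{
    // A very large blur approximates the low-frequency illumination field
    cv::Mat background;
    cv::GaussianBlur(gray, background, cv::Size(0, 0), /*sigma=*/50);

    // Divide the image by the estimated background (scaled to keep mean brightness)
    cv::Mat grayF, backF, flat;
    gray.convertTo(grayF, CV_32F);
    background.convertTo(backF, CV_32F);
    cv::divide(grayF, backF, flat, cv::mean(backF)[0]);
    flat.convertTo(flat, CV_8U);

    // Local, clip-limited histogram equalisation
    cv::Ptr<cv::CLAHE> clahe = cv::createCLAHE(2.0, cv::Size(8, 8));
    cv::Mat result;
    clahe->apply(flat, result);
    return result;
}
```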
I would like to point you to this paper: http://www.cs.berkeley.edu/~ravir/dirtylens.pdf, but, in my opinion, without any sort of calibration/comparison image taken a priori, it is difficult to recover the ground truth from the flared image.
However, if you are just trying to present the image minus the lens flare, disregarding the actual scientific data behind the flared part, then you move into the domain of image inpainting. Criminisi's algorithm, as described in this paper: http://research.microsoft.com/pubs/67276/criminisi_tip2004.pdf and explained/simplified in these two links: http://cs.brown.edu/courses/csci1950-g/results/final/eboswort/ http://www.cc.gatech.edu/~sooraj/inpainting/, will do a very good job of restoring texture information to the flared regions. (If you'd really like to pursue this approach, do mention that; more comprehensive help can be provided.)
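If you just want to try the inpainting idea quickly, OpenCV's built-in cv::inpaint gives a fast first look, assuming you can mask out the flare region by hand or by thresholding. Note that it implements the Telea and Navier-Stokes methods rather than Criminisi's exemplar-based algorithm, so it tends to smooth rather than synthesise texture:

```cpp
// Sketch: quick inpainting test with OpenCV. Uses the Telea algorithm,
// not Criminisi's exemplar-based method, so expect smearing rather than
// texture synthesis, but it shows the general effect cheaply.
#include <opencv2/photo.hpp>
#include <opencv2/core.hpp>

cv::Mat inpaintFlare(const cv::Mat& image, const cv::Mat& flareMask)
{
    // flareMask: 8-bit single channel, non-zero where the flare is
    cv::Mat result;
    cv::inpaint(image, flareMask, result, /*inpaintRadius=*/5, cv::INPAINT_TELEA);
    return result;
}
```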
However, given that we're dealing with microscope data, I doubt you would want to lose the scientific data contained in a particular region of the image. In that case, I really think you need to find a workaround to determine the flare model of the flash/light source with respect to the lens you're using.
I hope someone else can shed more light on this.