I'm trying to create a 360° panorama using OpenCV. I found the Stitcher class and I'm now trying to use it. It works well with a normal panorama like this:
This panorama was created from 22 images. The Stitcher class handles that case well; it's not fast, but the result is acceptable.
The problem comes when I try to add the ground and the sky. I know it's very hard for this class to connect all these pictures, because ground and sky images look very similar to each other. I took some pictures in my house, with items on the floor, and it still doesn't work, so I expect outdoor scenes are even harder for this algorithm.
Do you have any ideas how to do this with OpenCV? Or maybe not OpenCV, something else that gives better results? Other languages are acceptable too. I know there are applications that do this, but I don't want to use them. I'm looking for a way to write this myself (using libraries, etc.).
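For reference, the high-level API I'm calling is essentially this (a minimal sketch, assuming OpenCV 3.x/4.x; in 2.x the entry point is Stitcher::createDefault(), and the image paths are placeholders):

```cpp
#include <opencv2/opencv.hpp>
#include <opencv2/stitching.hpp>
#include <string>
#include <vector>
#include <iostream>

int main() {
    std::vector<cv::Mat> imgs;
    for (int i = 0; i < 22; ++i) {
        cv::Mat img = cv::imread("pano_" + std::to_string(i) + ".jpg");
        if (!img.empty()) imgs.push_back(img);
    }

    // PANORAMA mode assumes a camera rotating around its optical center.
    cv::Ptr<cv::Stitcher> stitcher = cv::Stitcher::create(cv::Stitcher::PANORAMA);

    cv::Mat pano;
    cv::Stitcher::Status status = stitcher->stitch(imgs, pano);
    if (status != cv::Stitcher::OK) {
        std::cerr << "Stitching failed, code " << int(status) << std::endl;
        return 1;
    }
    cv::imwrite("pano.jpg", pano);
    return 0;
}
```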
After long hours I finally managed to get a stereo disparity map with a single camera. The result is rather spotty, as one would expect, so I would like to apply a filter to improve its quality. The problem is that I'm not using pure OpenCV but the OpenFrameworks plugin (ofxCv), which means I can't use this:
http://docs.opencv.org/3.1.0/d3/d14/tutorial_ximgproc_disparity_filtering.html
There has to be a way to apply the WLS filter, or something similar, in this situation. WLS appears to be implemented in OpenCV, but I can't access it through the plugin, and direct access doesn't seem to work either.
Does anybody know how I can apply that filter, or have any other general disparity-map post-processing advice?
I'm not sure which OpenCV functionality is available to you, but as a suggestion, maybe pull the implementation from OpenCV into your own project. Look at this file: https://raw.githubusercontent.com/opencv/opencv_contrib/master/modules/ximgproc/src/disparity_filters.cpp
Copy any additional files you may need into your project and try building. With basic OpenCV support you might be able to make it work.
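For reference, here is a rough sketch of how the WLS filter is normally driven through the ximgproc module, assuming you manage to build and link opencv_contrib next to ofxCv (the matcher parameters are just starting values):

```cpp
#include <opencv2/opencv.hpp>
#include <opencv2/ximgproc/disparity_filter.hpp>

// left and right must be 8-bit grayscale images of the same size.
cv::Mat filterDisparity(const cv::Mat& left, const cv::Mat& right) {
    // Left and right matchers must be consistent so the filter can
    // build its confidence map from the two disparity maps.
    cv::Ptr<cv::StereoBM> leftMatcher = cv::StereoBM::create(96, 9);
    cv::Ptr<cv::StereoMatcher> rightMatcher =
        cv::ximgproc::createRightMatcher(leftMatcher);

    cv::Mat leftDisp, rightDisp;
    leftMatcher->compute(left, right, leftDisp);
    rightMatcher->compute(right, left, rightDisp);

    cv::Ptr<cv::ximgproc::DisparityWLSFilter> wls =
        cv::ximgproc::createDisparityWLSFilter(leftMatcher);
    wls->setLambda(8000.0);   // smoothness strength
    wls->setSigmaColor(1.5);  // sensitivity to image edges

    cv::Mat filtered;
    wls->filter(leftDisp, left, filtered, rightDisp);
    return filtered;
}
```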
I am trying to use OpenCV's background subtractor class MOG2 to separate a person moving in front of a camera. I have everything set up and working, but the resulting mask looks something like this:
(default settings)
Now what I would like to get is something like this:
(bad gimp skills :D)
I have already tried to play with the parameters described in the documentation, but all I managed to produce was something that looked like a motion-blur effect...
So I was hoping somebody with a better understanding of the algorithm, or somebody who has already done something similar, might be able to help me!
Thanks in advance, Foaly
I am also working with this algorithm, and what I've seen is that it needs good calibration to achieve that goal. You should be aware that the algorithm tries to push pixels that show no change into the background; for example, on your skin most pixels have nearly the same color, which may be why they drop out of the mask. If you want an application like the one shown in your question, I'd also recommend trying other kinds of methods (e.g. ZNCC-based matching).
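To illustrate, here is a minimal sketch of MOG2 with tuned parameters plus a morphological cleanup pass; the specific values are assumptions to start from, not a calibrated setup:

```cpp
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap(0);
    // Longer history + higher variance threshold = fewer speckles;
    // disabling shadow detection removes the gray shadow pixels.
    cv::Ptr<cv::BackgroundSubtractorMOG2> mog2 =
        cv::createBackgroundSubtractorMOG2(500, 32.0, false);

    cv::Mat frame, mask;
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(5, 5));
    while (cap.read(frame)) {
        // A small fixed learning rate keeps a mostly-static person from
        // being absorbed into the background model too quickly.
        mog2->apply(frame, mask, 0.001);
        cv::morphologyEx(mask, mask, cv::MORPH_OPEN, kernel);   // remove speckles
        cv::morphologyEx(mask, mask, cv::MORPH_CLOSE, kernel);  // fill small holes
        cv::imshow("mask", mask);
        if (cv::waitKey(30) == 27) break;  // Esc to quit
    }
    return 0;
}
```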
So I guess this is where our image processing skills come into play. The first thing I would do is make the lines in the image thicker and join them up. We can do the following (a rough sketch follows the list):
1) Thicken the lines. Use morphological operators (see the OpenCV tutorial on morphological operators) together with Otsu's thresholding. This paper worked for me when I did my ear biometrics: http://www4.comp.polyu.edu.hk/~csajaykr/myhome/papers/PR2011.pdf
2) Fill in connected components using OpenCV and clean the image.
3) Segment the human profile.
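A rough sketch of steps 1 and 2 under those assumptions (the kernel size and the "largest component is the subject" heuristic are guesses you will need to tune; requires OpenCV 3.x for connectedComponentsWithStats):

```cpp
#include <opencv2/opencv.hpp>

// gray must be an 8-bit single-channel image.
cv::Mat extractLargestComponent(const cv::Mat& gray) {
    cv::Mat bin;
    cv::threshold(gray, bin, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);

    // Dilate to thicken thin lines, then close small gaps between them.
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(7, 7));
    cv::dilate(bin, bin, kernel);
    cv::morphologyEx(bin, bin, cv::MORPH_CLOSE, kernel);

    // Keep only the largest connected component (assumed to be the subject).
    cv::Mat labels, stats, centroids;
    int n = cv::connectedComponentsWithStats(bin, labels, stats, centroids);
    int best = 0, bestArea = 0;
    for (int i = 1; i < n; ++i) {  // label 0 is the background
        int area = stats.at<int>(i, cv::CC_STAT_AREA);
        if (area > bestArea) { bestArea = area; best = i; }
    }
    return labels == best;  // 0/255 mask of the largest component
}
```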
I'm using the OpenCV library (C++) to extract feature detectors from two images coming from a video stream taken by an aerial camera, in order to then find the matching points in successive images. I'm wondering which algorithm finds the most robust features in an urban environment?
P.S. I'm currently using SURF, but when the images change only a little (because the camera translates very slowly) the matches between the descriptors become very few!
If you want to try different approaches, give RoboRealm a try; they have a trial version. You just plug in the algorithms and see the results. Even if you end up using OpenCV, it's fine for testing purposes.
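If you prefer to stay within OpenCV itself, one commonly used remedy when matches become sparse is a binary detector such as AKAZE combined with Lowe's ratio test; a sketch follows (the 0.8 ratio is a conventional starting value, not a tuned one):

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

std::vector<cv::DMatch> matchFrames(const cv::Mat& img1, const cv::Mat& img2) {
    cv::Ptr<cv::AKAZE> akaze = cv::AKAZE::create();
    std::vector<cv::KeyPoint> kp1, kp2;
    cv::Mat desc1, desc2;
    akaze->detectAndCompute(img1, cv::noArray(), kp1, desc1);
    akaze->detectAndCompute(img2, cv::noArray(), kp2, desc2);

    // AKAZE descriptors are binary, so match with Hamming distance.
    cv::BFMatcher matcher(cv::NORM_HAMMING);
    std::vector<std::vector<cv::DMatch>> knn;
    matcher.knnMatch(desc1, desc2, knn, 2);

    // Lowe's ratio test discards ambiguous matches.
    std::vector<cv::DMatch> good;
    for (const auto& m : knn)
        if (m.size() == 2 && m[0].distance < 0.8f * m[1].distance)
            good.push_back(m[0]);
    return good;
}
```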
Some background:
Hi all! I have a project which involves cloud imaging. I take pictures of the sky using a camera mounted on a rotating platform, and I then need to compute the amount of cloud present based on some color threshold. I am able to do this for each picture individually. To fully achieve my goal, I need to do the computation on the whole image of the sky, so my problem lies with stitching several images (about 44-56 of them). I've tried running the stitch function on the whole set and on subsets, but it returns an incomplete image (some images were not stitched in). This could be because of a lack of overlap or something, I don't know. Also, the output image is weirdly distorted (I actually expect the output to look like a picture taken with a fish-eye lens).
The actual problem:
So now I'm trying to figure out the opencv stitching pipeline. Here is a link:
http://docs.opencv.org/modules/stitching/doc/introduction.html
Based on my research, I think this is what I want to do: map all the images onto a circular shape, mainly because of the way my camera rotates, or onto something else that uses a fairly simple coordinate transformation. So I think I need some sort of fixed coordinate transform for the images. Is this what they call the homography? If so, does anyone have any idea how I can go about my problem? After this, I believe I need a mask for blending the images. Will I need a fixed mask, like the one I want for my homography?
Am I on a feasible path? I have some background in programming but almost none in image processing. I'm basically lost. T.T
"So I think I need get some sort of fixed coordinate transform thing for the images. Is this what they call the homography?"
Yes, the homography matrix is the transformation matrix between an original image and the ideal result. It warps an image in perspective so it can fit in stitching to the other image.
"If so, does anyone have any idea how I can go about my problem?"
Not with the limited information you have provided. It would ease the problem a lot if you knew the order of the pictures (which one borders which; row and column positions).
If you have no experience in image processing, I would recommend starting with a tutorial that covers stitching with more basic functions in detail. There is some important work going on behind the scenes, and it's not THAT hard to do it yourself.
Start with this example. It stitches two pictures.
http://ramsrigoutham.com/2012/11/22/panorama-image-stitching-in-opencv/
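In case the link goes stale, the core of such a two-image stitch looks roughly like this (a sketch, not production code; ORB stands in for SURF here since SURF lives in the non-free module):

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

cv::Mat stitchPair(const cv::Mat& img1, const cv::Mat& img2) {
    cv::Ptr<cv::ORB> orb = cv::ORB::create(2000);
    std::vector<cv::KeyPoint> kp1, kp2;
    cv::Mat d1, d2;
    orb->detectAndCompute(img1, cv::noArray(), kp1, d1);
    orb->detectAndCompute(img2, cv::noArray(), kp2, d2);

    cv::BFMatcher matcher(cv::NORM_HAMMING, true);  // cross-check matching
    std::vector<cv::DMatch> matches;
    matcher.match(d1, d2, matches);

    std::vector<cv::Point2f> pts1, pts2;
    for (const auto& m : matches) {
        pts1.push_back(kp1[m.queryIdx].pt);
        pts2.push_back(kp2[m.trainIdx].pt);
    }

    // Homography mapping img2 into img1's frame; RANSAC rejects outliers.
    cv::Mat H = cv::findHomography(pts2, pts1, cv::RANSAC, 3.0);

    // Warp img2 onto a canvas wide enough for both, then paste img1 on top.
    cv::Mat pano;
    cv::warpPerspective(img2, pano, H,
                        cv::Size(img1.cols + img2.cols, img1.rows));
    img1.copyTo(pano(cv::Rect(0, 0, img1.cols, img1.rows)));
    return pano;
}
```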
I am working on a project to stitch together around 400 high-resolution aerial images, each around 36000x2600 pixels, to create a map. I am currently using OpenCV, and so far I have obtained the match points between the images. Now I am at a loss figuring out how to compute the transformation matrices of the images so I can begin the stitching process. I have absolutely no background in working with images or graphics, so this is a first for me. Can I get some advice on how to approach this?
The images I received also came with a data sheet giving the longitude, latitude, airplane wing angle, altitude, etc. of each image. I am unsure how accurate these data are, but I am wondering if I can use this information to compute the matrix transformations that I need.
Thanks
Do you want to understand the math behind the process, or just get a superficial idea of what's going on and simply use it?
The usual term for "image stitching" in the literature is image alignment. Feed Google with it and you'll find tons of sources.
For example, here.
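Concretely, once you have match points, a first cut at the pairwise transform could look like the sketch below. For near-nadir aerial imagery a 4-DOF similarity model (rotation, scale, translation) is often more stable than a full homography, and your flight metadata can at least suggest which image pairs overlap. Note that estimateAffinePartial2D requires OpenCV 3.2 or newer:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// ptsImg/ptsRef are your matched points in the image to align and in the
// reference image, respectively; refSize is the output canvas size.
cv::Mat alignToReference(const cv::Mat& img,
                         const std::vector<cv::Point2f>& ptsImg,
                         const std::vector<cv::Point2f>& ptsRef,
                         cv::Size refSize) {
    std::vector<uchar> inliers;
    // RANSAC rejects the bad matches that inevitably occur at this scale.
    cv::Mat A = cv::estimateAffinePartial2D(ptsImg, ptsRef, inliers,
                                            cv::RANSAC, 3.0);
    cv::Mat warped;
    cv::warpAffine(img, warped, A, refSize);
    return warped;
}
```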
Best regards,
zhengtonic
In the recent OpenCV 2.3 release they implemented a whole image-stitching pipeline. Maybe it is worth looking at.