I'm starting out with OpenCV and image processing in general. I need to implement an algorithm, with OpenCV in C++, for image rectification, namely to carry out this transformation in the image:
I know there are 3 types of image rectification:
polar,
cylindrical,
planar
Polar rectification would meet my needs, but I don't know where to start. I know a lot of people here know a lot about OpenCV and image processing. What are the basics of polar rectification, and what are the pitfalls I should avoid?
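One place to start experimenting with the polar remapping itself is OpenCV's cv::warpPolar (available since OpenCV 3.4). The sketch below is only the plain polar coordinate transform around an assumed center, not full epipolar polar rectification; the file names, center, and radius are placeholder assumptions:

```cpp
// Minimal polar remap sketch: maps (x, y) to (radius, angle) around
// the image center. NOT full epipolar polar rectification.
#include <algorithm>
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat src = cv::imread("input.png");   // placeholder file name
    if (src.empty()) return 1;

    // Assumption: remap around the image center with the largest
    // radius that stays inside the image.
    cv::Point2f center(src.cols / 2.0f, src.rows / 2.0f);
    double maxRadius = 0.5 * std::min(src.cols, src.rows);

    cv::Mat polar;
    cv::warpPolar(src, polar, src.size(), center, maxRadius,
                  cv::WARP_POLAR_LINEAR | cv::INTER_LINEAR);
    cv::imwrite("polar.png", polar);
    return 0;
}
```

For true polar rectification of a stereo pair, the resampling has to be done along half-lines through the epipoles rather than around the image center, and that is where most of the pitfalls come from (epipoles at infinity or inside the image, and heavy sampling distortion near the epipole).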
I'm currently trying to "undistort" fisheye imagery using OpenCV in C++. I know the exact lens and camera model, so I figured I would be able to use this information to calculate some parameters and ultimately convert fisheye images to rectilinear images. However, all the tutorials I've found online encourage auto-calibration with checkerboards. Is there a way to calibrate the fisheye camera using just the camera + lens parameters and some math? Or do I have to use the checkerboard calibration technique?
I am trying to avoid the checkerboard calibration technique because I am just receiving images to undistort, and it would be undesirable to have to ask for checkerboard images if that can be avoided. The lens is assumed to retain a constant zoom/focal length for all images.
Thank you so much!
To undistort an image, you need to know the intrinsic parameters of the camera, which describe the distortion.
You can't compute them from datasheet values, because they depend on how the lens is manufactured: two lenses of the same vendor and model might have different distortion coefficients, especially if they are cheap ones.
Some raster graphics editors embed a lens database from which you can query distortion coefficients. But there is no magic: those databases were built by measuring the distortion of real lenses and interpolating between the measurements.
But you can still use an empirical method to correct at least the barrel effect.
There are plenty of shaders that do this, and you can always do your own maths to build a distortion map.
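As a minimal sketch of such an empirical correction in C++, you can feed hand-tuned coefficients into OpenCV's standard undistortion pipeline. The focal length, optical center, and k1/k2 values below are guesses to be tuned by eye, not calibrated values:

```cpp
// Empirical barrel-distortion correction: all camera parameters here
// are rough guesses, not calibration results.
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat src = cv::imread("fisheye.jpg");   // placeholder file name
    if (src.empty()) return 1;

    // Assume the optical center is the image center and pick a
    // plausible focal length in pixels.
    double cx = src.cols / 2.0, cy = src.rows / 2.0;
    double f  = 0.8 * src.cols;                // rough guess, tune it
    cv::Mat K = (cv::Mat_<double>(3, 3) << f, 0, cx,
                                           0, f, cy,
                                           0, 0, 1);
    // k1 < 0 counteracts barrel distortion; tune k1/k2 by eye.
    cv::Mat dist = (cv::Mat_<double>(1, 4) << -0.3, 0.1, 0.0, 0.0);

    cv::Mat map1, map2, dst;
    cv::initUndistortRectifyMap(K, dist, cv::Mat(), K, src.size(),
                                CV_32FC1, map1, map2);
    cv::remap(src, dst, map1, map2, cv::INTER_LINEAR);
    cv::imwrite("undistorted.jpg", dst);
    return 0;
}
```

For lenses with very strong fisheye distortion, OpenCV's cv::fisheye model (e.g. cv::fisheye::undistortImage) usually fits better than the standard polynomial model used above.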
I am trying to do image alignment like the one posted on Adrian's blog, like this image or in this link.
I want to do image alignment on this kind of image. The problem is that I want to automatically detect the four corner points, which are hard to detect in this kind of image with contour detection as in the tutorial.
Right now I can do the alignment just fine with manually entered corner coordinates. Some of my friends suggested detecting the corners with dlib landmark detection, but as far as I can see that is mostly used for shapes for which dlib has a trained predictor to mark the landmarks automatically.
Am I missing something here? Or is there a tutorial, or even a basic guide, on how to do that?
Maybe you can try to detect edges on a Gaussian pyramid. You can find an explanation here: https://en.wikipedia.org/wiki/Pyramid_(image_processing). The basic idea is that by filtering with Gaussian filters of increasing size, the small objects are blurred away, so at some scale we are left with only the edges of the showcase (which may need further processing).
Here is the OpenCV tutorial on image pyramids: https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_pyramids/py_pyramids.html.
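In C++, a minimal sketch of that idea: downsample a couple of pyramid levels (each cv::pyrDown blurs and then halves the resolution) and run an edge detector on the coarse level. The file name, pyramid depth, and Canny thresholds are placeholders to tune:

```cpp
// Edge detection on a coarse Gaussian-pyramid level, so that small
// details are smoothed away before edges are extracted.
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat img = cv::imread("showcase.jpg", cv::IMREAD_GRAYSCALE);
    if (img.empty()) return 1;

    // Two pyrDown steps: Gaussian blur + 2x downsampling each time.
    cv::Mat level = img.clone();
    for (int i = 0; i < 2; ++i) {
        cv::Mat down;
        cv::pyrDown(level, down);
        level = down;
    }

    cv::Mat edges;
    cv::Canny(level, edges, 50, 150);   // thresholds need tuning
    cv::imwrite("coarse_edges.png", edges);
    return 0;
}
```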
I think a wavelet pyramid (applying the wavelet transform several times) may also work for your problem, since wavelets can reduce the detail in an image.
I'm writing a program in OpenCV that stitches aerial images taken by a drone. The problem is that after a certain point the stitched result starts to "curve", so my homography matrix gets messed up and I can't stitch anything more.
Is there a way in OpenCV to do image orthorectification without GPS or DEM parameters? If not in OpenCV, is there a library that integrates easily with OpenCV?
Thanks!
PS: I'm programming in C++
Some background:
Hi all! I have a project which involves cloud imaging. I take pictures of the sky using a camera mounted on a rotating platform, and I then need to compute the amount of cloud present based on some color threshold. I am able to do this individually for each picture. To completely achieve my goal, I need to do the computation on the whole image of the sky, so my problem lies with stitching several images (about 44-56 of them). I've tried using the stitch function on all of them and on some subsets of the image set, but it returns an incomplete image (some images were not stitched), possibly because of a lack of overlap or something. Also, the output image is distorted weirdly (I am actually expecting the output to be something similar to a picture taken with a fisheye lens).
The actual problem:
So now I'm trying to figure out the OpenCV stitching pipeline. Here is a link:
http://docs.opencv.org/modules/stitching/doc/introduction.html
Based on what I have researched, I think this is what I want to do: I want to map all the images onto a circular shape, mainly because of the way my camera rotates, or onto something else that uses a fairly simple coordinate transformation. So I think I need to get some sort of fixed coordinate transform for the images. Is this what they call the homography? If so, does anyone have any idea how I can go about my problem? After this, I believe I need to get a mask for blending the images. Will I need a fixed mask, like the one I want for my homography?
Am I on a feasible path? I have some background in programming but almost none in image processing. I'm basically lost. T.T
"So I think I need get some sort of fixed coordinate transform thing for the images. Is this what they call the homography?"
Yes. The homography matrix is the transformation matrix between an original image and the ideal result: it warps an image in perspective so that it fits onto the other image during stitching.
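As a rough sketch of how a homography is estimated and used in practice (the feature detector, matcher settings, and file names below are arbitrary choices, not the only option):

```cpp
// Estimate a homography between two overlapping images with ORB
// features + RANSAC, then warp one image into the other's frame.
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat img1 = cv::imread("left.jpg",  cv::IMREAD_GRAYSCALE);
    cv::Mat img2 = cv::imread("right.jpg", cv::IMREAD_GRAYSCALE);
    if (img1.empty() || img2.empty()) return 1;

    // Detect keypoints and compute descriptors in both images.
    cv::Ptr<cv::ORB> orb = cv::ORB::create(2000);
    std::vector<cv::KeyPoint> kp1, kp2;
    cv::Mat d1, d2;
    orb->detectAndCompute(img1, cv::noArray(), kp1, d1);
    orb->detectAndCompute(img2, cv::noArray(), kp2, d2);

    // Cross-checked brute-force matching of binary descriptors.
    cv::BFMatcher matcher(cv::NORM_HAMMING, true);
    std::vector<cv::DMatch> matches;
    matcher.match(d1, d2, matches);

    std::vector<cv::Point2f> pts1, pts2;
    for (const cv::DMatch& m : matches) {
        pts1.push_back(kp1[m.queryIdx].pt);
        pts2.push_back(kp2[m.trainIdx].pt);
    }
    if (pts1.size() < 4) return 1;   // homography needs >= 4 pairs

    // RANSAC rejects outlier matches while fitting the 3x3 matrix.
    cv::Mat H = cv::findHomography(pts2, pts1, cv::RANSAC, 3.0);
    if (H.empty()) return 1;

    // Warp img2 into img1's frame on a canvas wide enough for both.
    cv::Mat pano;
    cv::warpPerspective(img2, pano, H,
                        cv::Size(img1.cols + img2.cols, img1.rows));
    img1.copyTo(pano(cv::Rect(0, 0, img1.cols, img1.rows)));
    cv::imwrite("stitched.jpg", pano);
    return 0;
}
```

For a camera that only rotates, as yours does, warping the images onto a cylinder or sphere (the "warper" stage of the OpenCV stitching pipeline you linked) is usually more appropriate than chaining plain homographies.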
"If so, does anyone have any idea how I can go about my problem?"
Not with the limited information you have provided. It would ease the problem a lot if you knew the order of the pictures (which borders which: row and column positions).
If you have no experience in image processing, I would recommend you follow a tutorial that covers stitching using more basic functions in detail. There is some important work going on behind the scenes, and it's not THAT hard to actually do it yourself.
Start with this example. It stitches two pictures.
http://ramsrigoutham.com/2012/11/22/panorama-image-stitching-in-opencv/
I am trying to find a way to assess the correctness of edge detection. I want little markers showing where the program determines the edges to be, with something like x's, dots, or lines. I am looking for something that does this: http://en.wikipedia.org/wiki/File:Corner.png
OpenCV has an edge detector and is usable from C++. As it happens, the image you linked to is used in the article describing (one of) the built-in algorithms.
The image you link to isn't edge detection.
Edge detection is normally just finding abrupt brightness changes in a greyscale image; you do this with differentiation, e.g. the Sobel operator.
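A minimal Sobel sketch in C++ (the file names are placeholders):

```cpp
// Basic edge detection: Sobel derivatives in x and y, combined into
// an approximate gradient magnitude image.
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat gray = cv::imread("input.png", cv::IMREAD_GRAYSCALE);
    if (gray.empty()) return 1;

    // 16-bit signed output avoids clipping negative derivatives.
    cv::Mat gx, gy;
    cv::Sobel(gray, gx, CV_16S, 1, 0);
    cv::Sobel(gray, gy, CV_16S, 0, 1);

    // Approximate gradient magnitude: 0.5*|gx| + 0.5*|gy|.
    cv::Mat ax, ay, edges;
    cv::convertScaleAbs(gx, ax);
    cv::convertScaleAbs(gy, ay);
    cv::addWeighted(ax, 0.5, ay, 0.5, 0, edges);
    cv::imwrite("edges.png", edges);
    return 0;
}
```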
Specifically finding corners is done either with SIFT keypoints or with something like the Laplacian of Gaussian.
That image is not the result of an edge detection operation! It's corner detection. They have entirely different purposes:
"Corner detection is an approach used within computer vision systems to extract certain kinds of features and infer the contents of an image. Corner detection is frequently used in motion detection, image matching, tracking, image mosaicing, panorama stitching, 3D modelling and object recognition. Corner detection overlaps with the topic of interest point detection."
OpenCV has corner detection algorithms. The last link includes a source code example for VS 2008. You can also check this link for another example. Google can provide much more.
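To get the little markers the question asks for, here is a minimal sketch (parameters and file names are placeholder guesses) that detects corners and draws a dot at each one:

```cpp
// Detect corners and mark each with a small circle, similar in spirit
// to the Wikipedia Corner.png example.
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat img = cv::imread("input.png");   // placeholder file name
    if (img.empty()) return 1;

    cv::Mat gray;
    cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);

    // Up to 100 strong corners, at least 10 px apart, scored with the
    // Harris response (the last two arguments).
    std::vector<cv::Point2f> corners;
    cv::goodFeaturesToTrack(gray, corners, 100, 0.01, 10,
                            cv::noArray(), 3, true, 0.04);

    // Draw a marker at each detected corner.
    for (const cv::Point2f& p : corners)
        cv::circle(img, cv::Point(cvRound(p.x), cvRound(p.y)),
                   4, cv::Scalar(0, 0, 255), 2);

    cv::imwrite("corners.png", img);
    return 0;
}
```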