I have an issue with perspective transformation in OpenCV (I'm working with Qt).
I've attached two images; you can see the contour and how part of the image is lost.
What kind of transformation can I use? Note that the images may or may not have this curvature.
Image edited
Issue with curvature
thanks
That's the expected output: an image of the same size but transformed, as can be seen on OpenCV's page.
If you want to preserve the whole image, you have to embed the original one into a larger image (probably centered, with a background color that matches your problem, maybe black) and then apply the transformation to that image. You can later crop-to-fit and scale it back to the original size.
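A minimal sketch of that idea (the padding size and file names are placeholders; H stands for whatever 3x3 matrix you already compute with getPerspectiveTransform or findHomography):

#include <opencv2/opencv.hpp>
using namespace cv;

int main() {
    Mat src = imread("input.jpg");
    int pad = 200;  // illustrative margin; make it large enough for your transform
    // Embed the original, centered, in a larger black canvas
    Mat canvas(src.rows + 2 * pad, src.cols + 2 * pad, src.type(), Scalar::all(0));
    src.copyTo(canvas(Rect(pad, pad, src.cols, src.rows)));
    // Placeholder identity so the sketch compiles; use your own matrix here
    Mat H = Mat::eye(3, 3, CV_64F);
    // Conjugate H with a translation so it operates in the padded coordinate frame
    Mat T = (Mat_<double>(3, 3) << 1, 0, pad, 0, 1, pad, 0, 0, 1);
    Mat warped;
    warpPerspective(canvas, warped, T * H * T.inv(), canvas.size());
    imwrite("warped.jpg", warped);  // crop-to-fit and scale back afterwards
    return 0;
}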
Related
Problem
Is there a built-in function for interpolating single pixels?
Given a normal image as a Mat and a Point, e.g. a sensor anomaly or an outlier, is there some function to repair this Point?
Furthermore, if I have more than one connected Point (say a blob with an area smaller than 10x10 pixels), is there a possibility to fix them too?
Tries, but not really solutions
It seems that interpolation is implemented for the geometric transformations, including resizing images, and for extrapolating pixels outside the image with borderInterpolate, but I haven't found anything for single pixels or small clusters of pixels.
A solution with medianBlur, as suggested here, does not seem appropriate, as it changes the whole image.
Alternative
If there isn't a built-in function, my idea would be to look at all 8-connected surrounding pixels that are not part of the blob and calculate their mean or a weighted mean. Done iteratively, all missing or erroneous pixels should be filled. But this method would depend on the order in which the pixels are corrected. Are there other suggestions?
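For reference, a minimal sketch of that fallback (names are mine; img is CV_8UC1 and mask is non-zero at the broken pixels). Reading neighbours only from the previous pass makes the result independent of the scan order:

#include <opencv2/opencv.hpp>
using namespace cv;

// Hypothetical helper: iteratively replace masked pixels with the mean of
// their valid (unmasked) 8-connected neighbours, shrinking the blob inwards.
void fillByNeighbourMean(Mat& img, Mat mask) {
    bool changed = true;
    while (changed && countNonZero(mask) > 0) {
        changed = false;
        Mat nextMask = mask.clone();
        for (int y = 0; y < img.rows; ++y) {
            for (int x = 0; x < img.cols; ++x) {
                if (!mask.at<uchar>(y, x)) continue;
                int sum = 0, n = 0;
                for (int dy = -1; dy <= 1; ++dy) {
                    for (int dx = -1; dx <= 1; ++dx) {
                        int yy = y + dy, xx = x + dx;
                        if (yy < 0 || yy >= img.rows || xx < 0 || xx >= img.cols) continue;
                        if (mask.at<uchar>(yy, xx)) continue;  // still broken: skip
                        sum += img.at<uchar>(yy, xx);
                        ++n;
                    }
                }
                if (n > 0) {  // at least one valid neighbour from the previous pass
                    img.at<uchar>(y, x) = (uchar)(sum / n);
                    nextMask.at<uchar>(y, x) = 0;
                    changed = true;
                }
            }
        }
        mask = nextMask;
    }
}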
Update
Here is an image to illustrate the problem. On the left, the original image with a contour marking the pixels to fix; on the right, the fixed pixels. I hope to find some sophisticated algorithm to fix the pixels.
The built-in OpenCV function inpaint does the desired interpolation of chosen pixels. Simply create a mask containing all pixels to be repaired.
See the documentation (OpenCV 3.2): the inpaint description and the inpaint function reference.
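A minimal usage sketch (file names and the defect location are placeholders); the mask is 8-bit single-channel and non-zero at the pixels to repair:

#include <opencv2/opencv.hpp>
using namespace cv;

int main() {
    Mat img = imread("image.png");
    // Mark the pixels to be repaired, e.g. a small defective blob
    Mat mask = Mat::zeros(img.size(), CV_8UC1);
    circle(mask, Point(100, 120), 5, Scalar(255), -1);
    Mat repaired;
    inpaint(img, mask, repaired, 3, INPAINT_TELEA);  // 3 px inpainting radius
    imwrite("repaired.png", repaired);
    return 0;
}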
I am using OpenCV and C++. I have a completely dark image with 3 colored points on it, and I need their center coordinates. If there is only one colored point in the dark image, the program will display its center coordinate. However, if I feed it the dark image with the 3 colored points, my program averages those 3 coordinates and returns the common center of the 3 colored points together, which is exactly my problem: I need their individual center coordinates.
Can anyone suggest a method to do that, please? Thanks.
Here is the code http://pastebin.com/RM7chqBE
Found a solution!
- load the original image
- convert it to grayscale
- set a range of intensity values depending on the color that needs to be detected (inRange)
- declare vectors for the contours and hierarchy
- run findContours
- declare vectors for the moments and center points
- iterate through each contour to find its center coordinates
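A sketch of those steps (the file name and the intensity range are placeholders), computing one centroid per contour instead of one average over all points:

#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>
using namespace cv;

int main() {
    Mat img = imread("dots.png");
    Mat gray, mask;
    cvtColor(img, gray, COLOR_BGR2GRAY);
    // Keep only the bright points; tune the range to the color being detected
    inRange(gray, Scalar(200), Scalar(255), mask);

    std::vector<std::vector<Point> > contours;
    std::vector<Vec4i> hierarchy;
    findContours(mask, contours, hierarchy, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE);

    // One centroid per contour, so each dot keeps its own center
    for (size_t i = 0; i < contours.size(); ++i) {
        Moments m = moments(contours[i]);
        if (m.m00 == 0) continue;  // skip degenerate contours
        std::cout << "point " << i << ": ("
                  << m.m10 / m.m00 << ", " << m.m01 / m.m00 << ")\n";
    }
    return 0;
}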
One of the easy ways to do this is to use the findContours and drawContours functions.
In the documentation there is a bit of code that explains how to retrieve the connected components of an image, which is what you are actually trying to do.
For example, you could draw every connected component you find (that is, every dot) on its own image and use the code you already have on each of those images.
This may not be the most efficient way to do it, but it's really simple.
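A brief sketch of that idea (assuming 'mask' and 'contours' come from the usual findContours setup): draw each contour filled on its own blank image, then run the existing single-dot code on it:

// Isolate each connected component on its own image
for (size_t i = 0; i < contours.size(); ++i) {
    Mat single = Mat::zeros(mask.size(), CV_8UC1);
    drawContours(single, contours, (int)i, Scalar(255), -1);  // -1 = filled
    // ... run the existing single-point center code on 'single' ...
}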
Here is how I would do it
http://pastebin.com/y1Ae3e2V
I'm not sure this works, however, as I didn't have time to test it, but you can try it.
I'm trying to remove foreground from two images, here's a sample pair of images:
As you can see, the Budweiser bottle is removed from the scene before the second shot is taken.
These photos were captured with a pinhole camera (an iPhone), and the tricky part is that I'm hand-holding the camera, so it cannot be guaranteed that the images are perfectly aligned pixel by pixel; a simple minus-and-threshold method will not work.
Then I decided to perform image registration using findHomography and warpPerspective from OpenCV. Here's the resulting image:
This image is warped with the matrix I got from findHomography. It improved the alignment quality somewhat, but it's still not aligned well enough for a simple way of removing the foreground.
So, finally, I decided to implement a "fuzzy-minus" algorithm: for every pixel in image1, I look through a 7x7 neighbourhood in image2 (a 7x7 kernel), take the minimal difference in grayscale as the result of the subtraction, and threshold the result into a binary image. Here's what I got:
And the result is still not good. Notice the white holes in the bottle, produced by similar grayscale values of foreground and background. So I'm not sure what to do now.
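A rough sketch of the fuzzy-minus step described above (names are illustrative; a and b are aligned CV_8UC1 images, and the cut-off value is arbitrary):

#include <opencv2/opencv.hpp>
#include <cstdlib>
using namespace cv;

// Minimal difference of each pixel of a against a 7x7 neighbourhood of b,
// thresholded into a binary foreground mask
Mat fuzzyMinus(const Mat& a, const Mat& b, int r = 3) {
    Mat out(a.size(), CV_8UC1);
    for (int y = 0; y < a.rows; ++y) {
        for (int x = 0; x < a.cols; ++x) {
            int best = 255;
            for (int dy = -r; dy <= r; ++dy) {
                for (int dx = -r; dx <= r; ++dx) {
                    int yy = y + dy, xx = x + dx;
                    if (yy < 0 || yy >= b.rows || xx < 0 || xx >= b.cols) continue;
                    int d = std::abs(a.at<uchar>(y, x) - b.at<uchar>(yy, xx));
                    if (d < best) best = d;
                }
            }
            out.at<uchar>(y, x) = (uchar)best;
        }
    }
    threshold(out, out, 30, 255, THRESH_BINARY);  // arbitrary cut-off
    return out;
}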
I can think of two ways to solve the problem: the first is to get a better-aligned pair of images and simply subtract them; the second is to use a more robust way to extract the foreground.
Can anyone give me some advice on how to deal with this kind of problem? I believe there should be some state-of-the-art algorithms or processing pipelines, but after googling around I found nothing.
I'm using OpenCV with C++; it would be fantastic if you could tell me how to do it with these tools in hand.
Big big thanks in advance!
The problem is not in your algorithm. You are having problems because the two scenes were not taken from exactly the same angle, as shown in the animation below. This slight difference highlights the edges in the subtraction.
You need a static camera in order to apply this approach.
I suggest using mathematical morphology on the mask you got, to get rid of the artifacts.
Try applying both opening and closing to get rid of the small black and white regions.
Mathematical Morphology
Mathematical Morphology in OpenCV
The difference between the two pictures is pretty big, so you will need a large structuring element, but I don't think you will be able to get rid of the shadow.
For the two large strips in the background, you may also try a horizontally shaped structuring element.
Edit
Is it possible to produce a grayscale image instead of a binary one? If yes, you may try to experiment with the top-hat method for the shadow, but I am not sure about this point.
This is what I got using two different structuring elements, applying closing THEN opening:
#include <opencv2/opencv.hpp>
using namespace cv;

Mat mask = imread("mask.jpg", CV_LOAD_IMAGE_GRAYSCALE);
// Closing fills small black holes; opening removes small white specks
morphologyEx(mask, mask, MORPH_CLOSE, getStructuringElement(CV_SHAPE_ELLIPSE, Size(50, 10)));
morphologyEx(mask, mask, MORPH_OPEN, getStructuringElement(CV_SHAPE_ELLIPSE, Size(10, 50)));
imshow("open", mask);
waitKey(0);  // needed for the window to actually render
imwrite("maskopenclose.jpg", mask);
I would suggest optical flow for alignment and OpenCV's background subtraction algorithm:
http://docs.opencv.org/trunk/doc/tutorials/video/background_subtraction/background_subtraction.html
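A minimal sketch of that subtractor in the 2.4 API (file names and parameters are illustrative; learn the empty scene first, then detect against it):

#include <opencv2/opencv.hpp>
using namespace cv;

int main() {
    Mat background = imread("empty.jpg");   // aligned scene without the bottle
    Mat withObject = imread("bottle.jpg");  // aligned scene with the bottle

    // history = 1, varThreshold = 16, no shadow detection
    BackgroundSubtractorMOG2 subtractor(1, 16.0f, false);
    Mat fgmask;
    subtractor(background, fgmask, 1.0);  // learn the background frame
    subtractor(withObject, fgmask, 0.0);  // detect the foreground, no learning
    imwrite("foreground.png", fgmask);
    return 0;
}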
I suggest that instead of findHomography you try some of OpenCV's stereo correspondence functions: http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html
There is sample code here: https://github.com/Itseez/opencv/blob/master/samples/cpp/stereo_calib.cpp
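For a quick feel of those functions, a minimal block-matching sketch on an already rectified pair (2.4 API; file names and parameters are illustrative):

#include <opencv2/opencv.hpp>
using namespace cv;

int main() {
    Mat left = imread("left.jpg", IMREAD_GRAYSCALE);
    Mat right = imread("right.jpg", IMREAD_GRAYSCALE);
    StereoBM bm(StereoBM::BASIC_PRESET, 64, 21);  // 64 disparities, 21x21 block
    Mat disp, disp8;
    bm(left, right, disp, CV_16S);
    disp.convertTo(disp8, CV_8U, 255.0 / (64 * 16));  // scale fixed-point disparity
    imwrite("disparity.png", disp8);
    return 0;
}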
Basically, I wrote code that works with two images: a reference image and a background image. So far I have successfully found the matching image using feature recognition, then rotated and resized it to look identical to the reference image. The only remaining problem is that the image has some of the background image on the fringes of the object. The image has been appropriately cropped, so I just need to work with the image below.

The most obvious answer that first came to me was to use an edge detection algorithm (Canny) and use that as a clue to where the background may lie. However, since the images could technically be anything, I feel there would be lots of noise and various unusual errors, so if possible I would rather not take that path. I also saw BackgroundSubtractorMOG, but it seems to work on videos, not on a single still image. In case I was wrong, I tried the following code, but it had no effect:
// OpenCV 2.4 API: history = 3, 4 Gaussian mixtures, background ratio = 0.8
BackgroundSubtractorMOG bs_mog(3, 4, 0.8);
Mat foreground_mog;
bs_mog(cropped_img, foreground_mog, -1.0);  // -1 = automatic learning rate
Perhaps I am doing it wrong. So my question is: other than edge detection, and given that BackgroundSubtractorMOG may only work for moving images, are there any other ideas or options I can look into to remove the fringe background (I want to turn it all into plain white)?
Thank you in advance for your ideas and comments.
EDIT:
Well, I understand the logic already posted by others, but I am unsure what the best way to make a mask for this bottom image would be. It is important to note that the image can technically be anything, not necessarily round in shape. Also, due to changes in the algorithm, the shape must be resized after the object is separated from the background. This means I can't just make a mask from my reference image and use it on this image, due to the difference in size.
Segmentation could work. Try cv::grabCut() with a customized mask, as in the sketch below:
- Set as background all pixels very close to the borders. (red in the image below)
- Set as foreground the center area of your image. (green in the image below)
- Set any intermediate pixels to probable foreground. (gray in the image below)
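A minimal sketch of that mask setup (the file name, border width, and center rectangle are illustrative):

#include <opencv2/opencv.hpp>
using namespace cv;

int main() {
    Mat img = imread("cropped.jpg");
    int b = 10;  // illustrative border width

    // Probable foreground everywhere (gray region)...
    Mat mask(img.size(), CV_8UC1, Scalar(GC_PR_FGD));
    // ...sure background along the borders (red region)...
    rectangle(mask, Point(0, 0), Point(img.cols - 1, img.rows - 1), Scalar(GC_BGD), b);
    // ...sure foreground in the center (green region)
    int cw = img.cols / 4, ch = img.rows / 4;
    mask(Rect(img.cols / 2 - cw / 2, img.rows / 2 - ch / 2, cw, ch)).setTo(Scalar(GC_FGD));

    Mat bgModel, fgModel;
    grabCut(img, mask, Rect(), bgModel, fgModel, 5, GC_INIT_WITH_MASK);

    // Keep sure + probable foreground; paint everything else white
    Mat fg = (mask == GC_FGD) | (mask == GC_PR_FGD);
    Mat result(img.size(), img.type(), Scalar::all(255));
    img.copyTo(result, fg);
    imwrite("no_fringe.jpg", result);
    return 0;
}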
What I want to do is overlay stereo images together.
Given a sample set of stereo images I was able to display rectified images of them.
However, given a set of stereo images taken with the Microsoft Kinect (RGB and infrared), I get really distorted images.
The original and rectified images can be found in the link:
http://img153.imageshack.us/img153/8021/calibration.png
I used the same code for the same set of images. I have tried multiple sets of Kinect "stereo" images and they all came out very distorted.
I am wondering what could be wrong?
The way I am displaying the images is:
I use cvStereoCalibrate() with these two as the last parameters: ...cvTermCriteria(CV_TERMCRIT_ITER + CV_TERMCRIT_EPS, 100, 1e-5), CV_CALIB_FIX_ASPECT_RATIO.
I then use cvStereoRectify(), get mapx and mapy of the RGB camera using cvInitUndistortRectifyMap(), then cvRemap() and display the images.
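For reference, the same pipeline in the 2.4 C++ API (a sketch only; chessboard-corner detection is omitted and the variable names are mine):

#include <opencv2/opencv.hpp>
#include <vector>
using namespace cv;

// objPts: chessboard corners in 3D board coordinates, one vector per view;
// imgPtsRgb / imgPtsIr: the corresponding detected 2D corners in each camera
void rectifyRgb(const std::vector<std::vector<Point3f> >& objPts,
                const std::vector<std::vector<Point2f> >& imgPtsRgb,
                const std::vector<std::vector<Point2f> >& imgPtsIr,
                Size imgSize, const Mat& rgb) {
    Mat M1 = Mat::eye(3, 3, CV_64F), M2 = Mat::eye(3, 3, CV_64F);
    Mat D1, D2, R, T, E, F;
    stereoCalibrate(objPts, imgPtsRgb, imgPtsIr, M1, D1, M2, D2, imgSize,
                    R, T, E, F,
                    TermCriteria(TermCriteria::COUNT + TermCriteria::EPS, 100, 1e-5),
                    CV_CALIB_FIX_ASPECT_RATIO);
    Mat R1, R2, P1, P2, Q;
    stereoRectify(M1, D1, M2, D2, imgSize, R, T, R1, R2, P1, P2, Q);
    Mat mapx, mapy, rgbRect;
    initUndistortRectifyMap(M1, D1, R1, P1, imgSize, CV_32FC1, mapx, mapy);
    remap(rgb, rgbRect, mapx, mapy, INTER_LINEAR);
    imwrite("rgb_rectified.jpg", rgbRect);
}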
I was wondering: do the parameters of cvStereoCalibrate() greatly affect the Kinect "stereo" images?
Thanks,
Tyro
I notice that one of the images in your samples has much less brightness and contrast. While the corners are still found, less brightness and contrast will result in a lot of error in the subpixel accuracy. I struggle a lot with rectification too, and I find that setting everything up perfectly (to the point of needing less rectification) is the only way to get really good results.
You are using too small a pattern for calibration.