Filling Borders after undistort results in unwanted artefacts - c++

I am using the OpenCV function undistort to apply a kind of barrel distortion. Of course, I now have black borders, and I want to fill the black borders with the value 65535, which is required for my subsequent processing pipeline. To achieve this, I execute the following code:
cv::undistort(src,dst,K,dis);
dst.setTo(65535,dst == 0);
where src is the original image, dst the image with the barrel distortion, K the camera matrix, and dis the distortion coefficients. The setTo call sets all pixels that are 0 to 65535.
This results in the following exemplary image:
It can be seen that it is almost everywhere white; the large black bar is intentional. However, there is still an outline left around the image.
These are values that were not caught by setTo since they are not 0. On closer inspection, they show a kind of linear tendency.
So my question is: is it possible to "forbid" OpenCV from producing these smooth edges when using undistort? Or is there any way to get rid of these values without destroying the ground truth? The shown image is only an example; values like those at the edges may also occur in the wanted ground truth.
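For context: undistort() internally builds the same maps as initUndistortRectifyMap() and then remaps with bilinear interpolation, which is what blends the edge pixels with the black border. A rough sketch (in Python for brevity; K, dis and the filename are hypothetical stand-ins for the values above) of doing the remap explicitly, so that the interpolation mode and border value can be chosen:

import cv2
import numpy as np

# hypothetical inputs: a 16-bit source image, camera matrix K and
# distortion coefficients dis, as in the snippet above
src = cv2.imread("depth.png", cv2.IMREAD_UNCHANGED)
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
dis = np.array([-0.3, 0.1, 0, 0, 0])

# build the undistortion maps explicitly...
map1, map2 = cv2.initUndistortRectifyMap(K, dis, None, K,
                                         (src.shape[1], src.shape[0]),
                                         cv2.CV_32FC1)
# ...and remap with nearest-neighbour interpolation so edge pixels are not
# blended with the black border; everything outside is set to 65535 directly
dst = cv2.remap(src, map1, map2, interpolation=cv2.INTER_NEAREST,
                borderMode=cv2.BORDER_CONSTANT, borderValue=65535)

With INTER_NEAREST no pixel is averaged with the border, so the intermediate values along the outline should not appear.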

Related

cv::detail::MultiBandBlender strange white streaks at the end of the photo

I'm working with OpenCV 3.4.8 and C++11 and I'm trying to blend images together.
In this example I have 2 images (their masks are shown in the screenshot below). I have georeferencing, so I can easily calculate the corners of these images in the final image.
The data outside the masks is black.
My code looks something like this:
std::vector<cv::UMat> inputImages;
std::vector<cv::UMat> masks;
std::vector<cv::Point> corners;
std::vector<cv::Size> imgSizes;
/*
  here is code where I load the images, create their masks
  (like in the screen above) and calculate the corners
*/
cv::Ptr<cv::detail::SeamFinder> seamFinder = new cv::detail::DpSeamFinder();
seamFinder->find(inputImages, corners, masks);
cv::Ptr<cv::detail::Blender> blender = new cv::detail::MultiBandBlender(false);
blender->prepare(corners, imgSizes);
for (size_t i = 0; i < inputImages.size(); i++)
{
    blender->feed(inputImages[i], masks[i], corners[i]);
}
cv::UMat blendedImg, outMask;
blender->blend(blendedImg, outMask);
The SeamFinder gives me a result like in the screenshot above. The found seam lines look good and I'm very satisfied with them. But another problem occurs in the next step: the MultiBandBlender produces strange white streaks where the seam line runs along the edge of the data.
This is an example:
When I don't use the blender, but just use the masks to cut the original images and add them together (cv::add()) with an additional alpha channel (made from the masks), I get very good results without any holes or strange colors, but I need a smoother transition :/
Can anyone help me? When I create the MultiBandBlender with a smaller num_bands the white streaks are smaller, and with num_bands = 0 the result looks like just adding the images.
I looked at the feed() and blend() methods of MultiBandBlender and I think the problem is connected with the Gaussian and Laplacian pyramids and the final reconstruction from the Laplacian pyramid in blend().
EDIT1:
When the Gaussian and Laplacian pyramids are created, copyMakeBorder() is used, which prevents the MultiBandBlender from producing these white streaks when the images are completely filled with data. So in my case I think I need to create a blender almost identical to MultiBandBlender, but replace the copyMakeBorder() call in feed() with something that "extends" my image inside the mask, like #AlexanderKondratskiy suggested.
Now I don't know how to achieve a correct "extension" similar to BORDER_REFLECT or BORDER_REFLECT_101.
I suspect your input images contain white pixels outside those masks. The white banding occurs around the areas where the seam follows the mask exactly. With a Laplacian pyramid, for example, pixels outside the mask do influence the final result, as each layer of the pyramid is essentially a blurring kernel applied to the image.
If you have some kind of good data outside the mask, keep it. If you do not, I suggest "extending" your image beyond the mask to maintain a smooth transition.
Edit:
Here are two things you could try, unless someone with more experience with OpenCV comes along.
To prove/disprove my hypothesis, fill the black region with just the average or median color within the mask. This should make the transition to the outside region less sharp, and hopefully reduce the artefacts. If that does not happen, my answer is wrong.
In terms of what is probably a good generalization of BORDER_REFLECT when the edge is arbitrary, you could try something like this (a rough sketch of both suggestions follows the list):
Find the centroid c of the mask polygon
For each pixel p outside the mask, think of the line between it and c
Calculate the point p' along this line that is the same distance inside the mask area as p is from the mask edge (i.e. you're reflecting across the mask edge)
Linearly interpolate the color from the neighbors of p' (as its position may not fall exactly on a pixel center). That's the color of pixel p
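Not a drop-in fix, just a rough (and slow) Python/NumPy sketch of both suggestions, assuming img is a BGR image and mask a non-empty single-channel 0/255 mask: first the mean-fill test, then the centroid-based reflection described in the list above.

import cv2
import numpy as np

def fill_with_mean(img, mask):
    # suggestion 1: replace everything outside the mask with the mean colour
    # inside the mask, to test whether the streaks come from the outside data
    out = img.copy()
    mean = tuple(int(v) for v in cv2.mean(img, mask=mask)[:3])
    out[mask == 0] = mean
    return out

def reflect_extend(img, mask):
    # suggestion 2: for every pixel p outside the mask, walk towards the mask
    # centroid c until the mask is entered at point e, then sample the
    # reflected point p' = 2*e - p (same distance inside the mask)
    m = cv2.moments(mask, True)                      # treat mask as binary
    c = np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])
    out = img.copy()
    ys, xs = np.nonzero(mask == 0)
    for x, y in zip(xs, ys):
        p = np.array([x, y], dtype=float)
        direction = c - p
        dist = np.linalg.norm(direction)
        if dist == 0:
            continue
        step = direction / dist
        q = p.copy()
        # walk towards the centroid until we cross the mask edge
        while mask[int(round(q[1])), int(round(q[0]))] == 0:
            q += step
            if np.linalg.norm(q - p) >= dist:        # reached the centroid
                break
        p_ref = 2 * q - p                            # reflection across the edge
        # bilinear interpolation of the colour at the sub-pixel point p'
        patch = cv2.getRectSubPix(img, (1, 1), (float(p_ref[0]), float(p_ref[1])))
        out[y, x] = patch[0, 0]
    return out

As written, the reflection loops over every outside pixel in Python and is far too slow for real use, but it should be enough to check whether feeding the blender an extended image removes the streaks.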

Python 2.7: Area opening and closing binary image in Python not so accurate

I am using Python 2.7 and I used the following Python and MATLAB functions for removing noise and filling holes in this image.
1. Code to remove noise and fill holes using Python and OpenCV:
img = cv2.imread("binar.png",0)
kernel = np.ones((5,5),np.uint8)
open = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)
close = cv2.morphologyEx(open, cv2.MORPH_CLOSE, kernel)
2. Code used in Python and SciPy using ndimage.binary_closing:
im = cv2.imread("binar.png", cv2.IMREAD_GRAYSCALE)
open_img = ndimage.binary_opening(im)
close_img = ndimage.binary_closing(open_img)
clg = close_img.astype(np.int)
3. Code used in MATLAB: I used imfill and bwareaopen.
The results I got are shown below:
The first image is from using ndimage.binary_closing. My problem is that it doesn't fill all the white blobs completely; minor black portions are still present in between.
The second image is from using cv2.morphologyEx. It has the same problem, as it also leaves some minor unfilled portions between the white blobs. Here I faced one more problem: it converts some white pixels into black that should remain white. I marked those areas with red in image 2. The red-highlighted portions are connected to larger blobs, but they still get converted into black pixels.
The third image is from MATLAB, where imfill works perfectly without converting essential white pixels into black.
So my question is: is there any method for Python 2.7 with which I can remove noise below a certain area and fill the white blobs as accurately as in MATLAB? One more thing: I want to find the centroids and areas of the final processed blobs for further use. I can find these using cv2.connectedComponentsWithStats, but I want the areas and centroids after removing noise and filling the blobs.
Thanks.
(I think this is not a duplicate because I want to do it in Python, not in MATLAB.)
From Matlab's imfill() documentation:
BW2= imfill(BW,locations) performs a flood-fill operation on background pixels of the input binary image BW, starting from the points specified in locations. (...)
BW2= imfill(BW,'holes') fills holes in the input binary image BW. In this syntax, a hole is a set of background pixels that cannot be reached by filling in the background from the edge of the image.
I2= imfill(I) fills holes in the grayscale image I. In this syntax, a hole is defined as an area of dark pixels surrounded by lighter pixels.
The duplicate that I flagged shows ways to accomplish the third variant. However, for many images the second variant will still work fine and is extremely easy to accomplish. The first variant mentions a flood-fill operation, which can be implemented in OpenCV with cv2.floodFill(). The second variant gives a really easy method: just flood fill from the edges, and the pixels left over are the black holes which can't be reached from outside. If you then invert this image, you'll get white pixels for the holes, which you can add to your mask to fill in the holes.
import cv2
import numpy as np
# read image, ensure binary
img = cv2.imread('image.png', 0)
img[img!=0] = 255
# flood fill background to find inner holes
holes = img.copy()
cv2.floodFill(holes, None, (0, 0), 255)
# invert holes mask, bitwise OR with img to fill in holes
holes = cv2.bitwise_not(holes)
filled_holes = cv2.bitwise_or(img, holes)
cv2.imshow('', filled_holes)
cv2.waitKey()
Note that in this case, I just set the starting pixel for the background at (0, 0). However, it's possible that there could be, e.g., a white line going down the center which would cut this flood fill off and stop it from finding the background in the other half of the image. The more robust method would be to go through all of the edge pixels of the image and flood fill every time you come across a black pixel. You can accomplish this more easily with the mask parameter in cv2.floodFill(), which allows you to keep updating the same mask each time.
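A rough sketch of that more robust variant, reusing the same hypothetical 'image.png' as above: flood fill from every black border pixel, sharing one mask so regions that have already been reached are skipped.

import cv2
import numpy as np

img = cv2.imread('image.png', 0)
img[img != 0] = 255

h, w = img.shape
flood = img.copy()
mask = np.zeros((h + 2, w + 2), np.uint8)   # floodFill's mask is 2 px larger

# seed a flood fill at every black border pixel the mask hasn't reached yet
for x in range(w):
    for y in (0, h - 1):
        if flood[y, x] == 0 and mask[y + 1, x + 1] == 0:
            cv2.floodFill(flood, mask, (x, y), 255)
for y in range(h):
    for x in (0, w - 1):
        if flood[y, x] == 0 and mask[y + 1, x + 1] == 0:
            cv2.floodFill(flood, mask, (x, y), 255)

# anything still black is a hole that cannot be reached from the border
holes = cv2.bitwise_not(flood)
filled_holes = cv2.bitwise_or(img, holes)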
To find the centroids of each blob, you could use contour detection and cv2.moments() to find the centroids of each contour, or you could also do cv2.connectedComponentsWithStats() like you mentioned.
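For completeness, a short sketch of the contour/moments route, continuing from the filled_holes image above (cv2.connectedComponentsWithStats would work just as well):

import cv2

# the [-2] index picks the contour list across OpenCV 2.x/3.x/4.x,
# whose findContours return values differ
contours = cv2.findContours(filled_holes, cv2.RETR_EXTERNAL,
                            cv2.CHAIN_APPROX_SIMPLE)[-2]
for cnt in contours:
    m = cv2.moments(cnt)
    if m['m00'] == 0:          # degenerate contour, skip
        continue
    cx = m['m10'] / m['m00']   # centroid x
    cy = m['m01'] / m['m00']   # centroid y
    area = m['m00']            # blob area (same as cv2.contourArea(cnt))
    print(cx, cy, area)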

Rectangle detection / tracking using OpenCV

What I need
I'm currently working on an augmented-reality kind of game. The controller that the game uses (I'm talking about the physical input device here) is a mono-colored, rectangular piece of paper. I have to detect the position, rotation and size of that rectangle in the capture stream of the camera. The detection should be invariant to scale and invariant to rotation along the X and Y axes.
The scale invariance is needed in case the user moves the paper away from or towards the camera. I don't need to know the distance of the rectangle, so scale invariance translates to size invariance.
The rotation invariance is needed in case the user tilts the rectangle along its local X and/or Y axis. Such a rotation changes the shape of the paper from a rectangle to a trapezoid. In this case, the oriented bounding box can be used to measure the size of the paper.
What I've done
At the beginning there is a calibration step. A window shows the camera feed and the user has to click on the rectangle. On click, the color of the pixel the mouse is pointing at is taken as the reference color. The frames are converted into HSV color space to make the color easier to distinguish. I have 6 sliders that adjust the upper and lower thresholds for each channel. These thresholds are used to binarize the image (using OpenCV's inRange function).
After that I erode and dilate the binary image to remove noise and unite nearby chunks (using OpenCV's erode and dilate functions).
The next step is finding contours (using OpenCV's findContours function) in the binary image. These contours are used to detect the smallest oriented rectangles (using OpenCV's minAreaRect function). As the final result I use the rectangle with the largest area.
A short summary of the procedure (a minimal code sketch follows the list):
Grab a frame
Convert that frame to HSV
Binarize it (using the color that the user selected and the thresholds from the sliders)
Apply morph ops (erode and dilate)
Find contours
Get the smallest oriented bounding box of each contour
Take the largest of those bounding boxes as result
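A minimal sketch of that procedure, in Python for brevity (the C++ calls are analogous); the camera index, HSV thresholds and kernel size are hypothetical placeholders for the values coming from the calibration step and the sliders:

import cv2
import numpy as np

cap = cv2.VideoCapture(0)                        # hypothetical camera index
lower = np.array([35, 80, 60], np.uint8)         # hypothetical slider values
upper = np.array([85, 255, 255], np.uint8)
kernel = np.ones((5, 5), np.uint8)

while True:
    ok, frame = cap.read()                               # 1. grab a frame
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)          # 2. convert to HSV
    binary = cv2.inRange(hsv, lower, upper)               # 3. binarize
    binary = cv2.erode(binary, kernel)                    # 4. morph ops
    binary = cv2.dilate(binary, kernel)
    contours = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]   # 5. contours
    if contours:
        # 6./7. smallest oriented box per contour, keep the largest one
        best = max((cv2.minAreaRect(c) for c in contours),
                   key=lambda r: r[1][0] * r[1][1])
        (cx, cy), (w, h), angle = best            # position, size and rotation
    cv2.imshow('mask', binary)
    if cv2.waitKey(1) & 0xFF == 27:               # Esc to quit
        break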
As you may have noticed, I don't take advantage of the knowledge about the actual shape of the paper, simply because I don't know how to use this information properly.
I've also thought about using the tracking algorithms of OpenCV. But there were three reasons that prevented me from using them:
Scale invariance: as far as I have read, some of the algorithms don't support different scales of the object.
Movement prediction: some algorithms use movement prediction for better performance, but the object I'm tracking moves completely randomly and is therefore unpredictable.
Simplicity: I'm just looking for a mono-colored rectangle in an image, nothing fancy like car or person tracking.
Here is a relatively good catch (binary image after erode and dilate)
and here is a bad one
The Question
How can I improve the detection in general and especially to be more resistant against lighting changes?
Update
Here are some raw images for testing.
Can't you just use thicker material?
Yes, I can and I already do (unfortunately I can't access these pieces at the moment). However, the problem remains: even if I use material like cardboard, which isn't bent as easily as paper, one can still bend it.
How do you get the size, rotation and position of the rectangle?
The minAreaRect function of opencv returns a RotatedRect object. This object contains all the data I need.
Note
Because the rectangle is mono-colored, there is no way to distinguish between top and bottom or left and right. This means that the rotation is always in the range [0, 180], which is perfectly fine for my purposes. The ratio of the two sides of the rect is always w:h > 2:1. If the rectangle were a square, the range of rotation would change to [0, 90], but this can be considered irrelevant here.
As suggested in the comments I will try histogram equalization to reduce brightness issues and take a look at ORB, SURF and SIFT.
I will update on progress.
The H channel in HSV space is the hue, and it is not sensitive to lighting changes. Red is roughly in the range [150, 180].
Based on this information, I do the following:
Change into the HSV space, split the H channel, threshold and normalize it.
Apply morph ops (open)
Find contours, filter by some properties (width, height, area, ratio and so on).
PS. I cannot fetch the image you uploaded to Dropbox because of my network, so I just cropped the right side of your second image and used it as input.
imgname = "src.png"
img = cv2.imread(imgname)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
## Split the H channel in HSV, and get the red range
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
h,s,v = cv2.split(hsv)
h[h<150]=0
h[h>180]=0
## normalize, do the open-morp-op
normed = cv2.normalize(h, None, 0, 255, cv2.NORM_MINMAX, cv2.CV_8UC1)
kernel = cv2.getStructuringElement(shape=cv2.MORPH_ELLIPSE, ksize=(3,3))
opened = cv2.morphologyEx(normed, cv2.MORPH_OPEN, kernel)
res = np.hstack((h, normed, opened))
cv2.imwrite("tmp1.png", res)
Now we get this result (h, normed, opened):
Then find contours and filter them.
## find contours (the [-2] index makes this work across OpenCV versions)
contours = cv2.findContours(opened, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)[-2]
print(len(contours))
bboxes = []
rboxes = []
cnts = []
dst = img.copy()
for cnt in contours:
    ## Get the straight bounding rect
    bbox = cv2.boundingRect(cnt)
    x,y,w,h = bbox
    if w<30 or h < 30 or w*h < 2000 or w > 500:
        continue
    ## Draw rect
    cv2.rectangle(dst, (x,y), (x+w,y+h), (255,0,0), 1, 16)
    ## Get the rotated rect
    rbox = cv2.minAreaRect(cnt)
    (cx,cy), (w,h), rot_angle = rbox
    print("rot_angle:", rot_angle)
    ## backup
    bboxes.append(bbox)
    rboxes.append(rbox)
    cnts.append(cnt)
The result is like this:
rot_angle: -2.4540319442749023
rot_angle: -1.8476102352142334
Because of the blue rectangle tag in the source image, the card is split into two parts. But a clean image will have no problem.
I know it's been a while since I asked the question. I recently came back to the topic and solved my problem (although not through rectangle detection).
Changes
Using wood to strengthen my controllers (the "rectangles"), as shown below.
Placed 2 ArUco markers on each controller.
How it works
Convert the frame to grayscale,
downsample it (to increase performance during detection),
equalize the histogram using cv::equalizeHist,
find markers using cv::aruco::detectMarkers (sketched below),
correlate markers (if multiple controllers),
analyze markers (position and rotation),
compute result and apply some error correction.
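A rough sketch of the detection steps, in Python using the older cv2.aruco contrib API (the dictionary and input frame are hypothetical; newer OpenCV versions wrap the same calls in cv2.aruco.ArucoDetector):

import cv2

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
params = cv2.aruco.DetectorParameters_create()

frame = cv2.imread('frame.png')                    # hypothetical captured frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)     # convert to grayscale
small = cv2.pyrDown(gray)                          # downsample for performance
small = cv2.equalizeHist(small)                    # equalize the histogram
corners, ids, rejected = cv2.aruco.detectMarkers(small, dictionary,
                                                 parameters=params)
# one id and four corner points per detected marker; position and rotation
# of each controller follow from the marker corners
print(ids)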
It turned out that the marker detection is very robust to lighting changes and different viewing angles which allows me to skip any calibration steps.
I placed 2 markers on each controller to increase the detection robustness even more. Both markers have to be detected only once (to measure how they correlate). After that, it's sufficient to find only one marker per controller, as the other can be extrapolated from the previously computed correlation.
Here is a detection result in a bright environment:
in a darker environment:
and when hiding one of the markers (the blue point indicates the extrapolated marker position):
Failures
The initial shape detection that I implemented didn't perform well. It was very fragile to lighting changes. Furthermore, it required an initial calibration step.
After the shape detection approach I tried SIFT and ORB in combination with brute-force and kNN matchers to extract and locate features in the frames. It turned out that mono-colored objects don't provide many keypoints (what a surprise). The performance of SIFT was terrible anyway (ca. 10 fps @ 540p).
I drew some lines and other shapes on the controller, which resulted in more keypoints being available. However, this didn't yield huge improvements.

Create mask to select the black area

I have a black area around my image and I want to create a mask using OpenCV C++ that selects just this black area so that I can paint it later. How can I do that without affecting the image itself?
I tried to convert the image to grayscale and then use a threshold to convert it to binary, but this affects my image, since the result contains black pixels from inside the image.
Another question: if I want to crop the image instead of painting it, how can I do that?
Thanks in advance,
I would solve the problem like this (a rough sketch in code follows below):
Inverse-binarize the image with a threshold of 1 (i.e. all pixels with the value 0 are set to 1, all others to 0)
use cv::findContours to find white segments
remove segments that don't touch image borders
use cv::drawContours to draw the remaining segments to a mask.
There is probably a more efficient solution in terms of runtime efficiency, but you should be able to prototype my solution quite quickly.
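A rough sketch of those four steps, in Python for brevity (the C++ calls threshold/findContours/boundingRect/drawContours are the same; the filename is hypothetical):

import cv2
import numpy as np

img = cv2.imread('input.png')                       # hypothetical filename
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# 1. inverse-binarize: pixels with value 0 become white, everything else black
_, inv = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV)

# 2. find the white segments
contours = cv2.findContours(inv, cv2.RETR_EXTERNAL,
                            cv2.CHAIN_APPROX_SIMPLE)[-2]

h, w = gray.shape
mask = np.zeros_like(gray)
for cnt in contours:
    x, y, cw, ch = cv2.boundingRect(cnt)
    # 3. keep only segments that touch an image border
    if x == 0 or y == 0 or x + cw == w or y + ch == h:
        # 4. draw the remaining segments into the mask
        cv2.drawContours(mask, [cnt], -1, 255, cv2.FILLED)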

What happens in GrabCut Algorithm

I want to know what actually happens in the following code.
cv::Rect rectangle(x, y, width, height);
cv::Mat result;
cv::Mat bgModel, fgModel;
cv::grabCut(image,
            result,
            rectangle,
            bgModel, fgModel,
            1,
            cv::GC_INIT_WITH_RECT);
cv::compare(result, cv::GC_PR_FGD, result, cv::CMP_EQ);
// Generate output image
cv::Mat foreground(image.size(), CV_8UC3, cv::Scalar(255, 255, 255));
image.copyTo(foreground, result);
From what I understand, when we define the rectangle, the area outside the rectangle is considered known background and the area inside is unknown foreground.
Then bgModel and fgModel are Gaussian mixture models which model the background and foreground pixel distributions separately.
The parameter we pass as 1 is the number of iterations, i.e. we ask the segmentation process to run only once.
What I can't understand is
cv::compare(result,cv::GC_PR_FGD,result,cv::CMP_EQ);
What actually happens in the above call?
If anyone could explain, that would be a great help.
Thanks.
I've found this book, which says that this code compares the values of the pixels in result to the value GC_PR_FGD: those that are equal are kept (as foreground), and all the other pixels are discarded. Here's a citation from there:
The input/output segmentation image can have one of the four values:
cv::GC_BGD, for pixels certainly belonging to the background (for example, pixels outside the rectangle in our example)
cv::GC_FGD, for pixels certainly belonging to the foreground (none in our example)
cv::GC_PR_BGD, for pixels probably belonging to the background
cv::GC_PR_FGD, for pixels probably belonging to the foreground (that is the initial value for the pixels inside the rectangle in our example).
We get a binary image of the segmentation by extracting the pixels having a value equal to cv::GC_PR_FGD:
// Get the pixels marked as likely foreground
cv::compare(result, cv::GC_PR_FGD, result, cv::CMP_EQ);
// Generate output image
cv::Mat foreground(image.size(), CV_8UC3, cv::Scalar(255, 255, 255));
image.copyTo(foreground, result); // bg pixels are not copied
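To make the comparison step concrete, here is a rough Python/NumPy equivalent of the snippet above (filename and rectangle are hypothetical): the compare call just turns the label image into a binary mask of the probable-foreground pixels, and copyTo with that mask copies only those pixels onto the white canvas.

import cv2
import numpy as np

image = cv2.imread('input.png')                 # hypothetical input
mask = np.zeros(image.shape[:2], np.uint8)
bgModel = np.zeros((1, 65), np.float64)
fgModel = np.zeros((1, 65), np.float64)
rect = (50, 50, 200, 150)                       # hypothetical x, y, width, height

cv2.grabCut(image, mask, rect, bgModel, fgModel, 1, cv2.GC_INIT_WITH_RECT)

# equivalent of cv::compare(result, cv::GC_PR_FGD, result, cv::CMP_EQ):
# 255 where the label equals GC_PR_FGD, 0 everywhere else
fg = np.where(mask == cv2.GC_PR_FGD, 255, 0).astype(np.uint8)

# equivalent of image.copyTo(foreground, result): keep only the
# probable-foreground pixels on a white canvas
foreground = np.full_like(image, 255)
foreground[fg == 255] = image[fg == 255]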