I'm trying to deskew an image that has an element of known size. Given this image:
I can use aruco::estimatePoseBoard, which returns rotation and translation vectors. Is there a way to use that information to deskew everything that's in the same plane as the marker board? (Unfortunately my linear algebra is rudimentary at best.)
Clarification
I know how to deskew the marker board. What I want to be able to do is deskew the other things (in this case, the cloud-shaped object) in the same plane as the marker board. I'm trying to determine whether or not that's possible and, if so, how to do it. I can already put four markers around the object I want to deskew and use the detected corners as input to getPerspectiveTransform along with the known distance between them. But for our real-world application it may be difficult for the user to place markers exactly. It would be much easier if they could place a single marker board in the frame and have the software deskew the other objects.
Since you tagged OpenCV:
From the image I can see that you have already detected the corners of all the black boxes. So just get the four outermost corner points, one way or another:
Then it is like this:
std::vector<cv::Point2f> src_points = {/*Fill your 4 corners here*/};
std::vector<cv::Point2f> dst_points = {cv::Point2f(0,0), cv::Point2f(width,0), cv::Point2f(width,height), cv::Point2f(0,height)};
auto H = cv::getPerspectiveTransform(src_points, dst_points);
cv::Mat cropped_image;
cv::warpPerspective(full_image, cropped_image, H, cv::Size(width,height));
I was stuck on the assumption that the destination points in the call to getPerspectiveTransform had to be the corners of the output image (as they are in Humam's suggestion). Once it dawned on me that the destination points could be somewhere within the output image I had my answer.
float boardX = 1240;
float boardY = 1570;
float boardWidth = 1730;
float boardHeight = 1400;
vector<Point2f> destinationCorners;
destinationCorners.push_back(Point2f(boardX+boardWidth, boardY));
destinationCorners.push_back(Point2f(boardX+boardWidth, boardY+boardHeight));
destinationCorners.push_back(Point2f(boardX, boardY+boardHeight));
destinationCorners.push_back(Point2f(boardX, boardY));
Mat h = getPerspectiveTransform(detectedCorners, destinationCorners);
Mat bigImage(image.size() * 3, image.type(), Scalar(0, 50, 50));
warpPerspective(image, bigImage, h, bigImage.size());
This fixed the perspective of the board and everything in its plane. (The waviness of the board is due to the fact that the paper wasn't lying flat in the original photo.)
I have a camera and a lamp.
The camera takes pictures automatically and the lamp is rigid.
Each of my pictures has a bright spot in the middle and is getting darker on the outside (linear).
Is there an easy way to darken the middle, or brighten the outside to accommodate this (preferably with a gradient)?
I am using OpenCV with the C++ API.
Thank you for the help.
It's hard to say what exactly you want to do without an example. However, let's assume the effect is exactly the same in all images and you want to apply the same transformation to each of them.
You say the effect is linear; let's assume you want to make the center darker by, say, 20% and the pixel furthest from the center brighter by 20%. Let's further assume the optical center is in the center of the image (which needn't be true in practice).
So you have an image cv::Mat img that you want to manipulate, and I assume it contains data of type CV_32F (if it is not float-valued, convert it first; note that for a multi-channel image you would also need to replicate the mask across the channels before multiplying). You create another cv::Mat:
//first, make a mask image to multiply the image with
cv::Mat mask = cv::Mat::zeros(img.rows, img.cols, CV_32F);
float maxdist = std::sqrt(img.rows*img.rows + img.cols*img.cols) / 2;
cv::Point2f center(img.cols*0.5, img.rows*0.5);
for (int j = 0; j < img.rows; ++j)
    for (int i = 0; i < img.cols; ++i)
    {
        cv::Point2f p(i, j);
        cv::Point2f diff(p - center);
        float dist(std::sqrt(diff.dot(diff)));
        float factor(0.8 + 0.4*dist/maxdist);
        mask.at<float>(j, i) = factor;
    }
//apply the transformation, to as many images as you like
img = img.mul(mask);
This doesn't check for overflows; you may or may not want to do that afterwards. But judging from your question, it would be a simple way to do what you want.
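If you eventually need the result back as an 8-bit image, one simple option is a saturating conversion; a minimal sketch continuing the snippet above (assuming the float values are meant to stay in the 0..255 range):
// convertTo() saturate-casts, so values pushed above 255 (or below 0) are clipped
cv::Mat img8u;
img.convertTo(img8u, CV_8U);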
What I need
I'm currently working on an augmented reality kind of game. The controller that the game uses (I'm talking about the physical input device here) is a mono colored, rectangular piece of paper. I have to detect the position, rotation and size of that rectangle in the capture stream of the camera. The detection should be invariant to scale and invariant to rotation about the X and Y axes.
The scale invariance is needed in case that the user moves the paper away or towards the camera. I don't need to know the distance of the rectangle so scale invariance translates to size invariance.
The rotation invariance is needed in case the user tilts the rectangle along its local X and/or Y axis. Such a rotation changes the shape of the paper from a rectangle to a trapezoid. In this case, the oriented bounding box of the object can be used to measure the size of the paper.
What I've done
At the beginning there is a calibration step. A window shows the camera feed and the user has to click on the rectangle. On click, the color of the pixel the mouse is pointing at is taken as reference color. The frames are converted into HSV color space to improve color distinguishing. I have 6 sliders that adjust the upper and lower thresholds for each channel. These thresholds are used to binarize the image (using opencv's inRange function).
After that I'm eroding and dilating the binary image to remove noise and unite nearby chunks (using opencv's erode and dilate functions).
The next step is finding contours (using opencv's findContours function) in the binary image. These contours are used to detect the smallest oriented rectangles (using opencv's minAreaRect function). As final result I'm using the rectangle with the largest area.
A short summary of the procedure (a code sketch follows the list):
Grab a frame
Convert that frame to HSV
Binarize it (using the color that the user selected and the thresholds from the sliders)
Apply morph ops (erode and dilate)
Find contours
Get the smallest oriented bounding box of each contour
Take the largest of those bounding boxes as result
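For illustration, here is a minimal sketch of that pipeline; the HSV thresholds hsvLow/hsvHigh are assumed to come from your calibration step and sliders, and the kernel size and contour retrieval mode are my own guesses:
#include <opencv2/opencv.hpp>

cv::RotatedRect detectController(const cv::Mat& frame,
                                 const cv::Scalar& hsvLow,
                                 const cv::Scalar& hsvHigh)
{
    cv::Mat hsv, mask;
    cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);           // 2. convert to HSV
    cv::inRange(hsv, hsvLow, hsvHigh, mask);                // 3. binarize

    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(5, 5));
    cv::erode(mask, mask, kernel);                          // 4. morph ops
    cv::dilate(mask, mask, kernel);

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL,     // 5. find contours
                     cv::CHAIN_APPROX_SIMPLE);

    cv::RotatedRect best;                                   // 6./7. keep the largest
    float bestArea = 0.f;                                   //       oriented box
    for (const auto& c : contours)
    {
        cv::RotatedRect r = cv::minAreaRect(c);
        float area = r.size.width * r.size.height;
        if (area > bestArea) { bestArea = area; best = r; }
    }
    return best;
}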
As you may have noticed, I don't take advantage of the knowledge about the actual shape of the paper, simply because I don't know how to use this information properly.
I've also thought about using the tracking algorithms of opencv. But there were three reasons that prevented me from using them:
Scale invariance: as far as I have read, some of the algorithms don't support different scales of the object.
Movement prediction: some algorithms use movement prediction for better performance, but the object I'm tracking moves completely random and therefore unpredictable.
Simplicity: I'm just looking for a mono colored rectangle in an image, nothing fancy like car or person tracking.
Here is a - relatively - good catch (binary image after erode and dilate)
and here is a bad one
The Question
How can I improve the detection in general and especially to be more resistant against lighting changes?
Update
Here are some raw images for testing.
Can't you just use thicker material?
Yes I can, and I already do (unfortunately I can't access these pieces at the moment). However, the problem still remains. Even if I use material like cardboard, which isn't bent as easily as paper, one can still bend it.
How do you get the size, rotation and position of the rectangle?
The minAreaRect function of opencv returns a RotatedRect object. This object contains all the data I need.
Note
Because the rectangle is mono colored, there is no possibility to distinguish between top and bottom or left and right. This means that the rotation is always in the range [0, 180], which is perfectly fine for my purposes. The ratio of the two sides of the rect is always w:h > 2:1. If the rectangle were a square, the range of rotation would change to [0, 90], but this can be considered irrelevant here.
As suggested in the comments I will try histogram equalization to reduce brightness issues and take a look at ORB, SURF and SIFT.
I will update on progress.
The H channel in HSV space is the hue, and it is not sensitive to lighting changes. Red is roughly in the range [150, 180].
Based on that information, I did the following:
Change into the HSV space, split the H channel, threshold and normalize it.
Apply morph ops (open)
Find contours, filter by some properties (width, height, area, ratio and so on).
PS. I cannot fetch the images you uploaded to Dropbox because of the network, so I just cropped the right side of your second image as the input.
import cv2
import numpy as np

imgname = "src.png"
img = cv2.imread(imgname)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
## Split the H channel in HSV, and get the red range
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
h,s,v = cv2.split(hsv)
h[h<150]=0
h[h>180]=0
## normalize, then do the open morph op
normed = cv2.normalize(h, None, 0, 255, cv2.NORM_MINMAX, cv2.CV_8UC1)
kernel = cv2.getStructuringElement(shape=cv2.MORPH_ELLIPSE, ksize=(3,3))
opened = cv2.morphologyEx(normed, cv2.MORPH_OPEN, kernel)
res = np.hstack((h, normed, opened))
cv2.imwrite("tmp1.png", res)
Now, we get the result as this (h, normed, opened):
Then find contours and filter them.
contours = cv2.findContours(opened, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)[-2]
print(len(contours))
bboxes = []
rboxes = []
cnts = []
dst = img.copy()
for cnt in contours:
    ## Get the straight bounding rect
    bbox = cv2.boundingRect(cnt)
    x,y,w,h = bbox
    if w<30 or h < 30 or w*h < 2000 or w > 500:
        continue
    ## Draw rect
    cv2.rectangle(dst, (x,y), (x+w,y+h), (255,0,0), 1, 16)
    ## Get the rotated rect
    rbox = cv2.minAreaRect(cnt)
    (cx,cy), (w,h), rot_angle = rbox
    print("rot_angle:", rot_angle)
    ## backup
    bboxes.append(bbox)
    rboxes.append(rbox)
    cnts.append(cnt)
The result is like this:
rot_angle: -2.4540319442749023
rot_angle: -1.8476102352142334
Because of the blue rectangle tag in the source image, the card is split into two parts. But a clean image will pose no problem.
I know it's been a while since I asked the question. I recently continued on the topic and solved my problem (although not through rectangle detection).
Changes
Used wood to strengthen my controllers (the "rectangles"), as shown below.
Placed 2 ArUco markers on each controller.
How it works
Convert the frame to grayscale,
downsample it (to increase performance during detection),
equalize the histogram using cv::equalizeHist,
find markers using cv::aruco::detectMarkers,
correlate markers (if multiple controllers),
analyze markers (position and rotation),
compute the result and apply some error correction (a code sketch of the first steps follows below).
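For reference, a minimal sketch of steps 1-4; the dictionary and the downsampling factor are assumptions, and the correlation/error-correction steps are left out:
#include <opencv2/opencv.hpp>
#include <opencv2/aruco.hpp>

void detectControllerMarkers(const cv::Mat& frame)
{
    cv::Mat gray, small;
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);          // 1. grayscale
    cv::resize(gray, small, cv::Size(), 0.5, 0.5);          // 2. downsample (factor is an assumption)
    cv::equalizeHist(small, small);                         // 3. equalize histogram

    cv::Ptr<cv::aruco::Dictionary> dict =
        cv::aruco::getPredefinedDictionary(cv::aruco::DICT_4X4_50);
    std::vector<int> ids;
    std::vector<std::vector<cv::Point2f>> corners;
    cv::aruco::detectMarkers(small, dict, corners, ids);    // 4. find markers
    // note: the corners are in the coordinates of the downsampled image

    // 5.-7. correlate markers, analyze position/rotation, compute result ...
}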
It turned out that the marker detection is very robust to lighting changes and different viewing angles, which allows me to skip any calibration steps.
I placed 2 markers on each controller to increase the detection robustness even more. Both markers have to be detected together only once (to measure how they correlate). After that, it's sufficient to find only one marker per controller, as the other can be extrapolated from the previously computed correlation.
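A minimal 2D sketch of that correlation/extrapolation idea; the center and angle values are assumed to come from the detected marker corners, and the function names are hypothetical:
#include <opencv2/core.hpp>
#include <cmath>

// Rotate a 2D offset by the given angle (radians).
static cv::Point2f rotate2d(const cv::Point2f& v, float angle)
{
    const float c = std::cos(angle), s = std::sin(angle);
    return { c * v.x - s * v.y, s * v.x + c * v.y };
}

// Calibration: with both markers visible once, store B's offset
// expressed in A's local frame.
cv::Point2f calibrateOffset(const cv::Point2f& centerA, float angleA,
                            const cv::Point2f& centerB)
{
    return rotate2d(centerB - centerA, -angleA);
}

// Later, when only marker A is visible, extrapolate B's position.
cv::Point2f extrapolateB(const cv::Point2f& centerA, float angleA,
                         const cv::Point2f& storedOffset)
{
    return centerA + rotate2d(storedOffset, angleA);
}
In practice the offset could also be stored as a full relative pose between the two markers, but the 2D version already captures the idea.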
Here is a detection result in a bright environment:
in a darker environment:
and when hiding one of the markers (the blue point indicates the extrapolated marker position):
Failures
The initial shape detection that I implemented didn't perform well. It was very sensitive to lighting changes. Furthermore, it required an initial calibration step.
After the shape detection approach I tried SIFT and ORB in combination with brute force and knn matchers to extract and locate features in the frames. It turned out that mono colored objects don't provide many keypoints (what a surprise). The performance of SIFT was terrible anyway (ca. 10 fps @ 540p).
I drew some lines and other shapes on the controller, which resulted in more keypoints being available. However, this didn't yield huge improvements.
I need to detect this ball and find its position and radius using OpenCV. I have downloaded many code samples, but none of them works. Any help is highly appreciated.
I see you have quite a setup installed. As mentioned in the comments, please make sure that you have appropriate lighting to capture the ball, as well as making the ball distinguishable from its surroundings by painting it a different colour.
Once your setup is optimized for detection, you may proceed via different ways to track your ball (stationary or not). A few ways may be:
Feature detection: via Hough circles, detect 2D circles (and their radii) that lie within a certain color range, as explained below.
There are many more ways to detect objects via feature detection, as this clever blog post points out.
Object detection: via SURF, SIFT and many other methods, you may detect your ball, calculate its radius and even predict its motion.
This code uses Hough circles to compute the ball position and radius and display them in real time. I am using Qt 5.4 with OpenCV version 2.4.12.
void Dialog::TrackMe() {
    webcam.read(cim); /* read the live feed from the webcam and store each frame in the OpenCV matrix 'cim' */
    if(cim.empty()==false) /* there is something stored in cim, i.e. the webcam is running and there is some form of input */ {
        cv::inRange(cim, cv::Scalar(0,0,175), cv::Scalar(100,100,256), cproc);
        /* wherever cim lies between the BGR bounds (0,0,175) and (100,100,256), set the binary mask cproc */
        cv::HoughCircles(cproc, veccircles, CV_HOUGH_GRADIENT, 2, cproc.rows/4, 100, 50, 10, 100);
        /* take cproc and write the detected circles to veccircles, using the CV_HOUGH_GRADIENT method */
        for(itcircles=veccircles.begin(); itcircles!=veccircles.end(); itcircles++)
        {
            cv::circle(cim, cv::Point((int)(*itcircles)[0],(int)(*itcircles)[1]), 3, cv::Scalar(0,255,0), CV_FILLED); //draw the center point
            cv::circle(cim, cv::Point((int)(*itcircles)[0],(int)(*itcircles)[1]), (int)(*itcircles)[2], cv::Scalar(0,0,255), 3); //draw the circle
        }
        QImage qimgprocess((uchar*)cproc.data, cproc.cols, cproc.rows, cproc.step, QImage::Format_Indexed8); //convert cv::Mat to QImage
        ui->output->setPixmap(QPixmap::fromImage(qimgprocess));
        /* render the QImage to the screen */
    }
    else
        return; /* no input, return to calling function */
}
How does the processing take place?
Once you start taking in live input of your ball, the captured frame should be able to show where the ball is. To do so, the captured frame is divided into buckets, which are further divided into grids. Within each grid, an edge is detected (if it exists) and thus a circle is detected. However, only those circles that pass through grids lying within the range mentioned above (in cv::Scalar) are considered. Thus, for every circle that passes through a grid in the specified range, a number corresponding to that grid is incremented. This is known as voting.
Each grid then stores its votes in an accumulator grid. Here, 2 is the accumulator ratio, which means that the accumulator matrix has only half the resolution of the input image cproc. After voting, we can find local maxima in the accumulator matrix; their positions correspond to the circle centers in the original space.
cproc.rows/4 is the minimum distance between centers of the detected circles.
100 and 50 are, respectively, the higher and lower thresholds passed to the Canny edge detector, which detects edges only between the mentioned thresholds.
10 and 100 are the minimum and maximum radius to be detected. Anything above or below these values will not be detected.
Now, the for loop processes each circle detected and stored in veccircles. It draws the circle and its center point onto the frame.
For the above, you may visit this link
How do I apply some transformation (e.g. rotation) to a cv::RotatedRect?
I tried using cv::warpAffine, but it won't work, as it is supposed to be applied to a cv::Mat...
You can control rotation, translation and scale directly using the member variables angle, center and size; see the documentation.
More general transformations require getting the vertices using points() and manipulating them yourself (for example with cv::warpAffine), but once you do that you will no longer have a cv::RotatedRect (by definition).
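For the simple cases (rotation about the rect's own center, translation, scaling), a minimal sketch that just edits the member variables; the starting values are arbitrary:
#include <opencv2/core.hpp>

int main()
{
    // Start from some rotated rect (center, size, angle in degrees).
    cv::RotatedRect r(cv::Point2f(100.f, 100.f), cv::Size2f(80.f, 40.f), 15.f);

    r.angle += 30.f;                           // rotate around its own center
    r.center += cv::Point2f(10.f, -5.f);       // translate
    r.size = cv::Size2f(r.size.width * 2.f,    // uniform scale
                        r.size.height * 2.f);
    return 0;
}
Note that RotatedRect::angle is in degrees, and the four vertices can always be recovered afterwards with points().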
If you are planning to do complex operations like affine or perspective transforms, you should deal with the points of the rotated rect, and the result may be a general quad, not a rectangle.
cv::warpAffine works on images. You should use cv::transform and cv::perspectiveTransform instead.
They take an array of points and produce an array of points.
Example:
cv::RotatedRect rect;
//fill rect somehow
cv::Point2f rect_corners[4];
rect.points(rect_corners);
std::vector<cv::Point2f> rect_corners_transformed(4);
cv::Mat M;
//fill M with affine transformation matrix
cv::transform(std::vector<cv::Point2f>(std::begin(rect_corners), std::end(rect_corners)), rect_corners_transformed, M);
// your transformed points are in rect_corners_transformed
TLDR: Create a new rectangle.
I don't know if it will help you, but I solved a similar problem by creating a new rectangle and ignoring the old one. In other words, I calculated the new angle, and then assigned it and the values of the old rectangle (the center point and the size) to the new rectangle:
RotatedRect newRotatedRectangle(oldRectangle.center, oldRectangle.size, newAngle);
I am working with a fisheye camera and need to reverse the distortion before any further calculation.
In this question that is exactly what is done: Correcting fisheye distortion
src = cv.LoadImage(src)
dst = cv.CreateImage(cv.GetSize(src), src.depth, src.nChannels)
mapx = cv.CreateImage(cv.GetSize(src), cv.IPL_DEPTH_32F, 1)
mapy = cv.CreateImage(cv.GetSize(src), cv.IPL_DEPTH_32F, 1)
cv.InitUndistortMap(intrinsics, dist_coeffs, mapx, mapy)
cv.Remap(src, dst, mapx, mapy, cv.CV_INTER_LINEAR + cv.CV_WARP_FILL_OUTLIERS, cv.ScalarAll(0))
The problem with this is that the remap function goes through all the points and creates a new picture out of them; this is too time-consuming to do for every frame.
What I am looking for is a point-to-point translation from fisheye picture coordinates to normal picture coordinates.
The approach we are taking is to do all the calculations on the input frame and just translate the resulting coordinates to world coordinates, so we don't want to go through all the points of a picture and create a new one out of it. (Time is really important for us.)
In the matrices mapx and mapy there are some point-to-point translations, but a lot of points don't have a complete translation.
I tried to interpolate these matrices, but the result was not what I was looking for.
Any help would be much appreciated, even other approaches that are more time-efficient than cv.Remap.
Thanks
I think what you want is cv.UndistortPoints().
Assuming you have detected some point features in your distorted image, you should be able to do something like this:
cv.UndistortPoints(distorted, undistorted, intrinsics, dist_coeffs)
This will allow you to work with undistorted points without generating a new, undistorted image for each frame.
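For reference, a minimal sketch of the same idea with the C++ API; the camera matrix and distortion coefficients below are placeholders, and passing the camera matrix again as the last argument makes the output come back in pixel coordinates rather than normalized coordinates:
#include <opencv2/opencv.hpp>

int main()
{
    // Placeholder intrinsics and distortion coefficients -- use your calibration values.
    cv::Mat K = (cv::Mat_<double>(3, 3) << 600, 0, 320,
                                           0, 600, 240,
                                           0,   0,   1);
    cv::Mat dist = (cv::Mat_<double>(1, 5) << -0.3, 0.1, 0, 0, 0);

    // Points detected in the distorted frame.
    std::vector<cv::Point2f> distorted = { {100.f, 150.f}, {400.f, 300.f} };
    std::vector<cv::Point2f> undistorted;

    // Last argument (P = K) maps the result back into pixel coordinates.
    cv::undistortPoints(distorted, undistorted, K, dist, cv::noArray(), K);
    return 0;
}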