How to find the center of a marker accurately? - c++

I want to get the position of the center of the marker, as in the picture below.
I have tried to compute the center of the marker from its corners. I do get a center point, but it is not as accurate as I need: when the marker is not parallel to the camera, the calculated center is shifted away from the true center of the marker, as in the picture below.
I would appreciate any suggestions on how to find the center of the marker accurately.
I apologize if my question is not clear enough.
Please let me know what I should improve.

Since your image is at an angle, you can use a perspective transform to obtain a bird's eye view of the image. To obtain the center of the marker, you can first find the corners using the Shi-Tomasi Corner Detector or the Harris corner detector and then use geometry calculations based on the found corners to obtain your center point.
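For illustration, a minimal C++ sketch of the corner-plus-geometry idea (not code from the question): detect the four corners with the Shi-Tomasi detector and take the intersection of the quadrilateral's diagonals, which, unlike the plain average of the four corners, is preserved under perspective. The function and variable names are placeholders, and the marker is assumed to be already isolated in a grayscale image.
#include <opencv2/opencv.hpp>
#include <vector>

// Ordered corners c[0..3] = TL, TR, BR, BL (e.g. from goodFeaturesToTrack /
// Harris, then sorted). The intersection of the two diagonals maps to the
// physical center of the square marker even under perspective, because
// projective transforms keep straight lines straight.
cv::Point2f markerCenter(const std::vector<cv::Point2f>& c)
{
    cv::Point2f p = c[0], r = c[2] - c[0];   // diagonal TL -> BR
    cv::Point2f q = c[1], s = c[3] - c[1];   // diagonal TR -> BL
    float cross = r.x * s.y - r.y * s.x;     // assumed non-zero (non-degenerate quad)
    float t = ((q.x - p.x) * s.y - (q.y - p.y) * s.x) / cross;
    return p + t * r;                        // intersection of the diagonals
}

// Corner detection (Shi-Tomasi), assuming 'gray' contains only the marker:
//   std::vector<cv::Point2f> corners;
//   cv::goodFeaturesToTrack(gray, corners, 4, 0.01, 10);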

Related

OpenCV get 3D coordinates from 2D pixel

For my undergraduate paper I am working on an iPhone application using OpenCV to detect domino tiles. The detection works well at close range, but when the camera is angled, the tiles farther away are difficult to detect.
My approach to solving this is to do some spatial calculations: convert a 2D pixel value into world coordinates, calculate a new 3D position with a vector, convert those coordinates back to 2D, and then check the colour/shape at that position.
Additionally, I need the 3D positions for augmented reality additions.
I got the camera matrix through this link: create opencv camera matrix for iPhone 5 solvepnp
I get the rotation matrix of the camera from Core Motion.
Using ArUco markers would be my last resort, as I wouldn't get the desired effect that I need for the paper.
Now my question is: can't I make these calculations when I know the locations and distances of the circles on, let's say, a tile with a 5 on it?
I don't need measurements in mm/inches; I can live with vectors without units.
The camera needs to be able to be rotated freely.
I tried to invert the projection equation s·m' = A·[R|t]·M' in order to compute the 3D coordinates from the 2D ones, but I am stuck at inverting [R|t], even on paper, and I don't know how I would do that in Swift or C++.
I have read so many different posts on forums, in books, etc., and I am completely stuck; I'd appreciate any help/input you can give me. Otherwise I'm screwed.
Thank you so much for your help.
Update:
By using solvePnP, as suggested by Micka, I was able to get the rotation and translation vectors for the camera pose.
This means that if you can identify multiple 2D points in your image and know their respective 3D world coordinates (in mm, cm, inches, ...), then you have the means to project points from known 3D world coordinates onto the corresponding 2D coordinates in your image (using the OpenCV projectPoints function).
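As a minimal, self-contained sketch of that solvePnP/projectPoints round trip (the coordinates, camera matrix and distortion values below are placeholders, not values from the question):
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    // Known 3D world coordinates of four tile points (here in mm, on the Z = 0 plane).
    std::vector<cv::Point3f> objectPoints = { {0, 0, 0}, {40, 0, 0}, {40, 40, 0}, {0, 40, 0} };
    // The same four points as detected in the image (pixel coordinates).
    std::vector<cv::Point2f> imagePoints  = { {310, 220}, {520, 230}, {515, 430}, {300, 425} };

    // Placeholder intrinsics; use the calibrated iPhone camera matrix instead.
    cv::Mat cameraMatrix = (cv::Mat_<double>(3, 3) << 1200, 0, 640, 0, 1200, 360, 0, 0, 1);
    cv::Mat distCoeffs   = cv::Mat::zeros(5, 1, CV_64F);

    cv::Mat rvec, tvec;
    cv::solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs, rvec, tvec);

    // Project any other known 3D point (e.g. the tile centre) into the image.
    std::vector<cv::Point3f> worldPts = { {20.f, 20.f, 0.f} };
    std::vector<cv::Point2f> projected;
    cv::projectPoints(worldPts, rvec, tvec, cameraMatrix, distCoeffs, projected);
    std::cout << "projected: " << projected[0] << std::endl;
    return 0;
}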
What is next for me to solve is the transformation from 2D into 3D coordinates, where I need to follow ozlsn's approach using the inverse of the matrices obtained from solvePnP.
Update 2:
With a top-down view I am getting along quite well and am able to detect the tiles and their positions in the 3D world:
tile from top Down
However, if I angle the view, my calculations no longer work. For example, I check the bottom edge of a 9-dot group and the center of the black division bar for 90° angles. If Corner1 -> Middle Edge -> Bar Center and Corner2 -> Middle Edge -> Bar Center are both 90° angles, then the bar in the middle is found and the position of the tile can be determined.
When the view is angled, these angles are shifted by the perspective to, let's say, 130° and 50°. (I'll provide an image later.)
The idea I have now is to run solvePnP on 4 points (bottom edge plus middle) and then rotate the needed dots and the center bar from their 2D positions to 3D positions (height should be irrelevant?). Then I could check with the transformed points whether the angles are 90° and also do the other needed distance calculations.
Here is an image of what I am trying to accomplish:
Markings for Problem
I first find the 9 dots and arrange them. For each edge I try to find the black bar. As said above, seen from the top, the angle from the blue corner via the green middle edge to the yellow bar center is 90°.
However, as the camera is angled, the angle is no longer 90°. I also cannot simply check whether the two angles add up to 180°, as that would give me false positives.
So I wanted to do the following steps:
Detect the center
Detect the edges (3 dots)
Run solvePnP with those 4 points
Rotate the edge and center points (coordinates) to 3D positions
Measure the angles (check whether both are 90°)
Now I wonder how I can transform the 2D coordinates of those points into 3D. I don't care about absolute distances, as I am only calculating them relative to each other (like 1.4 times the middle-edge distance); although if I could measure the distances in mm, that would be even better and would give me better results.
With solvePnP I get the rvec, which I can convert into a rotation matrix (with Rodrigues(), I believe). To measure the angles, my understanding is that I don't need to apply the translation (tvec) from solvePnP.
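A minimal sketch of those two pieces (not the original code): Rodrigues turns the solvePnP rvec into a 3x3 rotation matrix, and once the three points are expressed in the rectified tile plane the 90° test is a plain 2D angle computation. The names edgeMiddle, corner1 and barCenter in the final comment are placeholders.
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <cmath>

// 3x1 rotation vector from solvePnP -> 3x3 rotation matrix.
cv::Mat rotationFromRvec(const cv::Mat& rvec)
{
    cv::Mat R;
    cv::Rodrigues(rvec, R);
    return R;
}

// Angle (degrees) at 'vertex' between the rays towards 'a' and 'b'.
double angleAtDeg(const cv::Point2f& vertex, const cv::Point2f& a, const cv::Point2f& b)
{
    cv::Point2f u = a - vertex, v = b - vertex;
    double nu = std::sqrt(u.x * u.x + u.y * u.y);
    double nv = std::sqrt(v.x * v.x + v.y * v.y);
    double c  = (u.x * v.x + u.y * v.y) / (nu * nv);
    return std::acos(std::max(-1.0, std::min(1.0, c))) * 180.0 / CV_PI;
}

// e.g. angleAtDeg(edgeMiddle, corner1, barCenter) should come out close to 90
// once the points have been mapped into the tile plane.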
This leads to my last question: when using the iPhone, can't I use the angles from the motion sensors to build the rotation matrix beforehand, and only use that to rotate each tile so it is seen from the top? I feel this would save me a lot of CPU time, since I wouldn't have to run solvePnP for every tile (there can be up to about 100 tiles).
Find Homography
vector<Point2f> tileDots;                    // the four reference points measured in the image (pixels)
tileDots.push_back(corner1);
tileDots.push_back(edgeMiddle);
tileDots.push_back(corner2);
tileDots.push_back(middle.Dot->ellipse.center);

vector<Point2f> realLivePos;                 // the same four points in tile coordinates (mm)
realLivePos.push_back(Point2f(5.5, 19.44));
realLivePos.push_back(Point2f(12.53, 19.44));
realLivePos.push_back(Point2f(19.56, 19.44));
realLivePos.push_back(Point2f(12.53, 12.19));

// Homography that maps image coordinates onto the tile plane.
Mat M = findHomography(tileDots, realLivePos, cv::RANSAC);
cout << "M = " << endl << " " << M << endl << endl;

vector<Point2f> barPerspective;              // points to be mapped into the tile plane
barPerspective.push_back(corner1);
barPerspective.push_back(edgeMiddle);
barPerspective.push_back(corner2);
barPerspective.push_back(middle.Dot->ellipse.center);
barPerspective.push_back(possibleBar.center);

vector<Point2f> barTransformed;
if (M.empty())                               // findHomography returns an empty Mat on failure
{
    cout << "No Homography found" << endl;
}
else
{
    perspectiveTransform(barPerspective, barTransformed, M);
}
This, however, gives me wrong values, and I no longer know where to look (I can't see the forest for the trees anymore).
Image Coordinates https://i.stack.imgur.com/c67EH.png
World Coordinates https://i.stack.imgur.com/Im6M8.png
Points to Transform https://i.stack.imgur.com/hHjBM.png
Transformed Points https://i.stack.imgur.com/P6lLS.png
You see I am even too stupid to post 4 images here??!!?
The 4th index item should be at x 2007 y 717.
I don't know what I am doing wrongly here.
Update 3:
I found the following post, Computing x,y coordinate (3D) from image point, which does exactly what I need. Maybe there is a faster way to do it, but I was not able to find one. At the moment I can do the checks, but I still need to test whether the algorithm is robust enough.
Result with SolvePnP to find bar Center
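For reference, a minimal sketch of that kind of back-projection, assuming the point lies on the tile plane Z = 0 and that rvec, tvec and the camera matrix (all CV_64F) come from solvePnP; this is my own illustration, not code from the post:
#include <opencv2/opencv.hpp>

// Back-project an image pixel onto the world plane Z = const.
cv::Point3d backProjectToPlane(const cv::Point2d& px,
                               const cv::Mat& cameraMatrix,
                               const cv::Mat& rvec,
                               const cv::Mat& tvec,
                               double Z = 0.0)
{
    cv::Mat R;
    cv::Rodrigues(rvec, R);                                     // 3x3 rotation
    cv::Mat uv = (cv::Mat_<double>(3, 1) << px.x, px.y, 1.0);   // homogeneous pixel
    cv::Mat leftSide  = R.inv() * cameraMatrix.inv() * uv;
    cv::Mat rightSide = R.inv() * tvec;
    double s = (Z + rightSide.at<double>(2, 0)) / leftSide.at<double>(2, 0);  // scale factor
    cv::Mat P = R.inv() * (s * cameraMatrix.inv() * uv - tvec); // world point
    return cv::Point3d(P.at<double>(0, 0), P.at<double>(1, 0), P.at<double>(2, 0));
}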
The matrix [R|t] is not square, so by definition you cannot invert it. However, this matrix acts on projective space, which is nothing but an extension of R^n (Euclidean space) with a '1' appended as the (n+1)-st element. For compatibility, the matrices that multiply vectors of the projective space get an extra row with a '1' in their lower-right corner. That is, R becomes
[R|0]
[0|1]
In your case [R|t] becomes
[R|t]
[0|1]
and you can take its inverse, which reads as
[R'|-R't]
[0 |  1 ]
where ' denotes the transpose. The portion that you need is the top block row.
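A minimal OpenCV sketch of exactly that construction (assuming R and t are CV_64F, e.g. R from Rodrigues(rvec) and t = tvec from solvePnP):
#include <opencv2/opencv.hpp>

// Build the inverse of [R|t] appended with the [0 0 0 1] row, i.e. [R'|-R't; 0 1].
cv::Mat invertRt(const cv::Mat& R, const cv::Mat& t)
{
    cv::Mat Rinv = R.t();                      // for a rotation matrix, inverse == transpose
    cv::Mat tinv = -Rinv * t;                  // -R' * t
    cv::Mat T = cv::Mat::eye(4, 4, CV_64F);
    Rinv.copyTo(T(cv::Rect(0, 0, 3, 3)));      // top-left 3x3 block
    tinv.copyTo(T(cv::Rect(3, 0, 1, 3)));      // top-right 3x1 column
    return T;
}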
Since the phone translates in 3D space, you need the depth of the pixel under consideration. This means that the answer to your question about whether you need distances in mm/inches is yes. The answer only changes if you can assume that the ratio of the camera translation to the depth is very small; this is called a weak-perspective camera. The problem you are trying to tackle is not an easy one; people are still researching it at PhD level.

How to find the angle formed by blades of a wind turbine when the yaw is changed?

This is a continuation of the question from here: How to find angle formed by the blades of a wind turbine with respect to a horizontal imaginary axis?
I've decided to use the following methodology for this:
 Get a frame from the camera inside a loop.
 Perform Canny edge detection.
 Perform HoughLinesP to detect lines in the image.
Finding Blade Angle:
 Perform the probabilistic Hough lines transform on the image, restricting the detected lines to the known length of the blades.
 The returned value contains the start and end points of the detected lines. Since there is no background noise, these are the start and end points of the blade lines.
 Now find the angle of each blade line relative to the horizontal, either via the dot product with the vector (1,0) or with atan2 on the line's direction vector (see the sketch after this list).
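A minimal sketch of that last step, assuming `edges` is the Canny output (the Hough parameter values are placeholders):
#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

// Angles (degrees, relative to the horizontal image axis) of the line
// segments found by the probabilistic Hough transform.
std::vector<double> bladeAnglesDeg(const cv::Mat& edges, double minBladeLength)
{
    std::vector<cv::Vec4i> lines;
    cv::HoughLinesP(edges, lines, 1, CV_PI / 180, 50, minBladeLength, 10);

    std::vector<double> angles;
    for (const cv::Vec4i& l : lines)
    {
        double dx = l[2] - l[0], dy = l[3] - l[1];
        angles.push_back(std::atan2(-dy, dx) * 180.0 / CV_PI);   // -dy: image y axis points down
    }
    return angles;
}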
Problem:
When the yaw angle of the turbine is changed and it is not directly facing the camera, how do I calculate the blade angle formed?
The idea is basically to map the angles measured in the rotated view back to what they would be when viewed head-on. From what I've understood, I would find the homography matrix, decompose it to get the rotation, convert that to Euler angles to calculate the shift from the original axis, and then shift all the axes by that angle. However, it's just a vague idea with no concrete plan behind it.
Or should I begin by trying to find the projection matrix and then extract the camera matrix and rotation matrix? I am completely lost on this and feel overwhelmed by the many functions...
Other things I came across were the perspective transform, solvePnP, ...
It would be great if anyone could suggest another way to deal with this. Any links or code snippets would be helpful. I'm not that familiar with OpenCV and would be grateful for any help.
Thanks!
Edit:
[Edit by Spektre]
Assume the tips of the blades plus the center (or the three "roots" of the blades) lie on a common plane.
Fit a homography between those points and the corresponding ones in a reference pose for the turbine (cv::findHomography in OpenCV).
Decompose the homography into rotation and translation using an estimated or assumed camera calibration (cv::decomposeHomographyMat).
Convert the rotation into Euler angles.
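A rough sketch of those steps (my own illustration): K is an estimated camera matrix, refPts/imagePts are the blade tips plus hub in the head-on reference pose and in the current frame, and decomposeHomographyMat returns up to four candidate solutions that still have to be disambiguated.
#include <opencv2/opencv.hpp>
#include <cmath>
#include <iostream>
#include <vector>

void yawFromHomography(const std::vector<cv::Point2f>& refPts,
                       const std::vector<cv::Point2f>& imagePts,
                       const cv::Mat& K)
{
    cv::Mat H = cv::findHomography(refPts, imagePts);
    if (H.empty()) return;

    std::vector<cv::Mat> Rs, ts, ns;
    cv::decomposeHomographyMat(H, K, Rs, ts, ns);     // up to 4 candidate (R, t, n) triples

    for (const cv::Mat& R : Rs)                       // pick the physically plausible candidate
    {
        // ZYX Euler angles of this candidate rotation, in degrees
        double yaw   = std::atan2(R.at<double>(1, 0), R.at<double>(0, 0));
        double pitch = std::asin(-R.at<double>(2, 0));
        double roll  = std::atan2(R.at<double>(2, 1), R.at<double>(2, 2));
        std::cout << yaw * 180 / CV_PI << " " << pitch * 180 / CV_PI << " "
                  << roll * 180 / CV_PI << std::endl;
    }
}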

Grouping different scale bounding boxes

I've created an OpenCV application for human detection in images.
I run my algorithm on the same image over different scales, and when detections are made, at the end I have the bounding box position and the scale at which it was found. I then want to transform that rectangle back to the original scale, given that both its position and its size vary with the scale.
I've been trying to wrap my head around this and have gotten nowhere. It should be rather simple, but at the moment I am clueless.
Help anyone?
Ok, got the answer elsewhere
"What you should do is store the scale where you are at for each detection. Then transforming should be rather easy right. Imagine you have the following.
X and Y coordinates (center of bounding box) at scale 1/2 of the original. This means that you should multiply with the inverse of the scale to get the location in the original, which would be 2X, 2Y (again for the bounxing box center).
So first transform the center of the bounding box, than calculate the width and height of your bounding box in the original, again by multiplying with the inverse. Then from the center, your box will be +-width_double/2 and +-height_double/2."
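A minimal sketch of that mapping (my own helper, not from the original answer):
#include <opencv2/opencv.hpp>

// Map a detection box found at 'scale' (e.g. 0.5) back to the original image.
cv::Rect2f toOriginalScale(const cv::Rect2f& boxAtScale, float scale)
{
    float inv = 1.0f / scale;                                   // e.g. scale 0.5 -> inv 2
    float cx = (boxAtScale.x + boxAtScale.width  / 2.0f) * inv; // center, rescaled
    float cy = (boxAtScale.y + boxAtScale.height / 2.0f) * inv;
    float w  = boxAtScale.width  * inv;                         // size, rescaled
    float h  = boxAtScale.height * inv;
    return cv::Rect2f(cx - w / 2.0f, cy - h / 2.0f, w, h);
}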

android maps tile coordinate

My objective is to rotate the map around a point at the bottom center of the screen. To do this, I need to know the map coordinates of that (screen) point and then do a double transformation: first move the center, then rotate around the new center. But all of that requires knowing the map coordinates of the rotation point.
I have the details of the map: center, orientation, zoom level, but I can't find any reference mapping a screen position to map coordinates.
Can someone point me in the right direction please?
Answered... but be warned, it's not simple.
I had to dig into the arcane world of Google Maps tiling, and with a bit of help from mapTiler and some rewriting of the globalMercatorClass from PHP to Java, this is how I approached it:
Convert the center of the screen (via meters) to pixels at the current zoom (see the conversion sketch after these steps).
Using trigonometry, find the pixel position of the bottom center of the screen from the screen size and the map orientation, relative to the screen center.
Convert that back to lat/lon and use it as the camera position.
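For reference, a rough C++ sketch of the GlobalMercator-style conversions (EPSG:3857, 256-pixel tiles) that these steps rely on; the constants and function names are mine, following the usual mapTiler globalMercator formulas:
#include <cmath>

const double kEarthRadius = 6378137.0;                    // spherical Mercator radius
const double kPi          = 3.14159265358979323846;
const double kOriginShift = kPi * kEarthRadius;           // half the world width in meters

// Meters per pixel at a given zoom level (256-pixel tiles).
double resolution(int zoom) { return 2.0 * kOriginShift / 256.0 / std::pow(2.0, zoom); }

// Lat/lon in degrees -> spherical Mercator meters.
void latLonToMeters(double lat, double lon, double& mx, double& my)
{
    mx = lon * kOriginShift / 180.0;
    my = std::log(std::tan((90.0 + lat) * kPi / 360.0)) * kEarthRadius;
}

// Meters -> global pixel coordinates at a zoom level (TMS-style y axis).
void metersToPixels(double mx, double my, int zoom, double& px, double& py)
{
    double res = resolution(zoom);
    px = (mx + kOriginShift) / res;
    py = (my + kOriginShift) / res;
}

// Pixels -> meters, and meters -> lat/lon, for the way back.
void pixelsToMeters(double px, double py, int zoom, double& mx, double& my)
{
    double res = resolution(zoom);
    mx = px * res - kOriginShift;
    my = py * res - kOriginShift;
}

void metersToLatLon(double mx, double my, double& lat, double& lon)
{
    lon = mx / kOriginShift * 180.0;
    lat = 180.0 / kPi * (2.0 * std::atan(std::exp(my / kEarthRadius)) - kPi / 2.0);
}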
It uses an extraordinary number of CPU cycles, so I have had to slow the refresh rate down to 1.5 seconds, but, hey, it's working.
It would be nice to have some of this functionality available from the API, it would probably reduce the horsepower (and brainpower) required for the job. But I'm delighted that it's possible.

Image Processing - Rotation and Optical Character Recognition

Good morning everybody,
today I want to ask about the topic "Image Manipulation in C++".
So far I am able to filter all the noise out of the picture and convert it to black and white.
But now I have two questions.
First Question:
Below you see a screenshot of the image. What is the best way to find out how to rotate the text? In the end it would be nice if the text were horizontal. Does anybody have a good link or an example?
Second Question:
How should I go on? Do you think I should send the image to an "Optical Character Recognizer" (a) or should I extract each letter myself (b)?
If the answer is (a), what is the smallest OCR lib? All libs I have found so far (like gocr or tesseract) seem to be overpowered and difficult to integrate into an existing project.
If the answer is (b), what is the best way to save each letter as its own image? Should I search for a white pixel and then go from pixel to pixel and save the coordinates in a 2D array? And what about a letter like "i"? ;)
Thanks to everybody who helps me find my way! Sorry for the strange English above; I'm still a language noob :-)
The usual name for the problem in your first question is "skew correction".
You may Google for it (lots of references). A nice paper here, showing for example how to get this:
An easy way to start (though not as good as the methods mentioned above) is to perform a Principal Component Analysis:
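A minimal sketch of that PCA idea (my own illustration): collect the coordinates of the text pixels and take the orientation of the first principal axis as the skew angle.
#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

// 'binary' is a CV_8UC1 image where the text pixels are non-zero (white).
double skewAngleDeg(const cv::Mat& binary)
{
    std::vector<cv::Point> pts;
    cv::findNonZero(binary, pts);

    cv::Mat data(static_cast<int>(pts.size()), 2, CV_64F);     // one (x, y) sample per row
    for (int i = 0; i < data.rows; ++i)
    {
        data.at<double>(i, 0) = pts[i].x;
        data.at<double>(i, 1) = pts[i].y;
    }

    cv::PCA pca(data, cv::Mat(), cv::PCA::DATA_AS_ROW);
    double ex = pca.eigenvectors.at<double>(0, 0);              // first principal axis
    double ey = pca.eigenvectors.at<double>(0, 1);
    return std::atan2(ey, ex) * 180.0 / CV_PI;                  // skew angle in degrees
}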
For your first question:
First, remove any "specks" of noisy white pixels that aren't part of the letter sequence: a gentle low-pass filter (pixel colour = average of the surrounding pixels) followed by clamping the pixel values to pure black or pure white. This should get rid of the little "dot" underneath the "a" character in your image and of any other specks.
Now search for the following pixels:
xMin = white pixel with the lowest x value (white pixel closest to the left edge)
xMax = white pixel with the largest x value (white pixel closest to the right edge)
yMin = white pixel with the lowest y value (white pixel closest to the top edge)
yMax = white pixel with the largest y value (white pixel closest to the bottom edge)
with these four values, form a bounding box from (xMin, yMin) to (xMax, yMax), i.e. Rect(xMin, yMin, xMax - xMin, yMax - yMin);
compute the area of the bounding box and find its center.
using the center of the bounding box as the pivot, rotate the white pixels by N degrees (you can pick N; 1 degree would be an OK value).
Repeat the process of finding xMin, xMax, yMin, yMax and recompute the area.
Continue rotating in steps of N degrees until you've rotated K degrees, and likewise in steps of -N degrees until you've rotated -K degrees (where K is the maximum rotation, say 30 degrees). At each step recompute the area of the bounding box.
The rotation that produces the bounding box with the smallest area is likely the rotation that aligns the letters parallel to the bottom edge (horizontal alignment).
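A minimal OpenCV sketch of this search (my own illustration, assuming a binarised CV_8UC1 image with white text pixels):
#include <opencv2/opencv.hpp>
#include <limits>
#include <vector>

// Rotate the binary image in small steps and keep the angle whose axis-aligned
// bounding box of white pixels has the smallest area.
double bestRotationDeg(const cv::Mat& binary, double maxAngle = 30.0, double step = 1.0)
{
    double bestAngle = 0.0;
    double bestArea  = std::numeric_limits<double>::max();
    cv::Point2f center(binary.cols / 2.0f, binary.rows / 2.0f);

    for (double a = -maxAngle; a <= maxAngle; a += step)
    {
        cv::Mat rot = cv::getRotationMatrix2D(center, a, 1.0);
        cv::Mat rotated;
        cv::warpAffine(binary, rotated, rot, binary.size(), cv::INTER_NEAREST);

        std::vector<cv::Point> pts;
        cv::findNonZero(rotated, pts);                // white pixels after rotation
        if (pts.empty()) continue;

        double area = static_cast<double>(cv::boundingRect(pts).area());
        if (area < bestArea) { bestArea = area; bestAngle = a; }
    }
    return bestAngle;                                 // rotation that best aligns the text
}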
You could measure the height to each white pixel from the bottom and find how much the text is leaning. It's a very simple approach but it worked fine for me when I tried it.
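A short sketch of this baseline idea (my own interpretation): take the lowest white pixel in each column, fit a line through those points with cv::fitLine and read the lean off the line's direction.
#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

// Lean of the text in degrees, estimated from the bottom profile of the white pixels.
double leanFromBaselineDeg(const cv::Mat& binary)   // CV_8UC1, white text pixels
{
    std::vector<cv::Point2f> baseline;
    for (int x = 0; x < binary.cols; ++x)
        for (int y = binary.rows - 1; y >= 0; --y)
            if (binary.at<uchar>(y, x) > 0)
            {
                baseline.push_back(cv::Point2f((float)x, (float)y));  // lowest white pixel in this column
                break;
            }

    if (baseline.size() < 2) return 0.0;
    cv::Vec4f line;
    cv::fitLine(baseline, line, cv::DIST_L2, 0, 0.01, 0.01);   // (vx, vy, x0, y0)
    return std::atan2(line[1], line[0]) * 180.0 / CV_PI;
}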