OpenCV drawCircle and draw Rectangles on the circle line - C++

I want to draw circles with different radii and then draw rectangles on each circle.
It should look like this:
I have tried it with the formula for the circle
y_Circle = Center_Circle.y + sqrt(pow(Radius, 2) - pow(x_Circle - Center_Circle.x, 2));
but this is just for the lower part of the circle. For the upper part I need the same formula, but with a "-" after Center_Circle.y.
The problem is that I'm not getting the rectangles in the positions shown in the image above. It looks like this:
In this image I drew the rectangles on a circle with the formula above. For a better understanding I have drawn two circles by hand to show the problem.
You can see that there is space between the rectangles in the upper part, and in the lower part there is no space between them. Is there another possibility to do this in an easier way? Maybe like this: draw a circle with OpenCV, get access to the coordinates of the circle line, and draw the rectangles on that circle line. But I don't know how to get access to the coordinates of the circle.
Here is my code-snippet:
for (int Radius = Rect_size; Radius < MaxRadius;)
{
    x_Circle = MaxRadius - Radius;
    circumference_half = 2 * 3.1415 * Radius / 2;
    Rectangle_count = circumference_half / Rect_size;
    for (int i = 0; i < Rectangle_count - 1; i++)
    {
        y_Circle = Center_Circle.y + sqrt(pow(Radius, 2) - pow(x_Circle - Center_Circle.x, 2));
        if (y_Circle <= FRAME_Heigth && x_Circle <= FRAME_WIDTH && x_Circle >= 0)
        {
            test = Rect(x_Circle, y_Circle, Rect_size, Rect_size);
            rectangle(RectangePic, test, Scalar(0, 255, 255), 1, 8);
            imshow("testee", RectangePic);
            waitKey();
        }
        x_Circle += Rect_size;
    }
    Radius += Rect_size;
}

Try this script for these results:
import cv2, numpy as np, math

# Parameters for the window
hw = 789
# Parameters for the circle
circle_center = hw/2, hw/2
radius = hw/2.25
circle_thickness = 2
circle_color = (255,0,255)
# Parameters for the boxes
num_boxes = 50
box_size = 30
box_color = (0,255,0)
box_thickness = 2

# Create background image
bg = np.zeros((hw, hw, 3), np.uint8)

# Draw circle
cv2.circle(bg, tuple(np.array(circle_center, int)), int(radius), circle_color, circle_thickness)

# Time to draw some boxes!
for index in range(num_boxes):
    # Compute the angle around the circle
    angle = 2 * math.pi * index / num_boxes
    # Compute the center of the box
    x, y = circle_center[0] + math.sin(angle)*radius, circle_center[1] + math.cos(angle)*radius
    # Compute the corners of the box
    pt1 = x-box_size/2, y-box_size/2
    pt2 = x+box_size/2, y+box_size/2
    # Draw box
    cv2.rectangle(bg, tuple(np.array(pt1, int)), tuple(np.array(pt2, int)), box_color, box_thickness)

cv2.imshow('img', bg)
cv2.waitKey(0)
cv2.destroyAllWindows()
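Since the question is tagged C++, here is a rough OpenCV C++ sketch of the same idea: step around the circle by angle instead of solving for y from x, so the boxes stay evenly spaced on the circle line. The window size, box count, and box size below are just assumed values mirroring the Python script above.

#include <opencv2/opencv.hpp>
#include <cmath>

int main()
{
    const int hw = 789;                                   // assumed window size
    const cv::Point2f center(hw / 2.0f, hw / 2.0f);
    const float radius = hw / 2.25f;
    const int num_boxes = 50;                             // assumed number of rectangles
    const int box_size = 30;                              // assumed rectangle size

    cv::Mat bg = cv::Mat::zeros(hw, hw, CV_8UC3);
    cv::circle(bg, cv::Point(cvRound(center.x), cvRound(center.y)), cvRound(radius),
               cv::Scalar(255, 0, 255), 2);

    for (int i = 0; i < num_boxes; ++i)
    {
        // Angle of this box around the circle
        const double angle = 2.0 * CV_PI * i / num_boxes;
        // Center of the box lies exactly on the circle line
        const float x = center.x + static_cast<float>(std::sin(angle)) * radius;
        const float y = center.y + static_cast<float>(std::cos(angle)) * radius;
        const cv::Rect box(cvRound(x - box_size / 2.0f), cvRound(y - box_size / 2.0f),
                           box_size, box_size);
        cv::rectangle(bg, box, cv::Scalar(0, 255, 0), 2);
    }

    cv::imshow("img", bg);
    cv::waitKey(0);
    return 0;
}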

Related

Fanning out an "arc" of card meshes

I have n cards. Each card is a units in width.
Many popular card games display a hand of cards in the "fanned out" position (see images below), and I would like to do the same. By utilizing the following formula, I'm able to place cards in an arc:
// NOTE: UE4 uses a left-handed, Z-up coordinate system.
// (+X = Forward, +Y = Right, and +Z = Up)
// NOTE: Card meshes have their pivot points in the center of the mesh
// (meshSize * 0.5f = local origin of mesh)
// n = Number of card meshes
// a = Width of each card mesh
const auto arcWidth = 0.8f;
const auto arcHeight = 0.15f;
const auto rotationAngle = 30.f;
const auto deltaAngle = 180.f;
const auto delta = FMath::DegreesToRadians(deltaAngle) / (float)(n);
const auto halfDelta = delta * 0.5f;
const auto halfMeshWidth = a * 0.5f;
const auto radius = halfMeshWidth + (rotationAngle / FMath::Tan(halfDelta));
for (unsigned y = 0; y < n; y++)
{
    auto ArcX = (radius * arcWidth) * FMath::Cos(((float)y * delta) + halfDelta);
    auto ArcY = (radius * arcHeight) * FMath::Sin(((float)y * delta) + halfDelta);
    auto ArcVector = FVector(0.f, ArcX, ArcY);

    // Draw a line from the world origin to the card origin
    DrawDebugLine(GetWorld(), FVector::ZeroVector, ArcVector, FColor::Magenta, true, -1.f, 0, 2.5f);
}
Here's a 5-Card example from Hearthstone:
Here's a 5-Card example from Slay The Spire:
But the results I'm producing are, well... Suboptimal:
No matter how I tweak the variables, the cards on the far left and far right side are getting squashed together into the hand. I imagine this has to do with how the points of a circle are distributed, and then squashed downwards (via arcHeight) to form an ellipse? In any case, you can see that the results are far from similar, even though if you look closely at the example references, you can see that an arc exists from the center of each card (before those cards are rotated in local space).
What can I do to achieve a more evenly spaced arc?
Your distribution does look like an ellipse. What you need is a very large circle, where the center of the circle is way off the bottom of the screen. Something like the circle below, where the black rectangle is the screen area where you're drawing the cards, and the green dots are the card locations. Note that the radius of the circle is large, and the angles between the cards are small.
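A minimal sketch of that idea, reusing n, a, and the debug-draw call from the snippet above; the radius multiplier and the total fan angle are assumed tuning values, not taken from the original post.

// Sketch: place the cards on a large circle whose center sits far below
// the hand, spread over a small total angle, so neighbouring cards stay
// evenly spaced. (Same left-handed, Z-up coordinates as above.)
const float fanDegrees = 30.f;                         // assumed total fan angle (small)
const float radius = 10.f * a;                         // assumed: large relative to the card width
const auto fan = FMath::DegreesToRadians(fanDegrees);
const auto delta = fan / (float)n;                     // small angle between neighbouring cards
const auto startAngle = FMath::DegreesToRadians(90.f) + (fan * 0.5f);
const FVector pivot(0.f, 0.f, -radius);                // circle center well below the screen
for (unsigned y = 0; y < n; y++)
{
    const auto angle = startAngle - ((float)y * delta) - (delta * 0.5f);
    const auto CardY = pivot.Y + radius * FMath::Cos(angle);
    const auto CardZ = pivot.Z + radius * FMath::Sin(angle);
    const FVector CardLocation(0.f, CardY, CardZ);
    // Draw a line from the circle center to each card origin
    DrawDebugLine(GetWorld(), pivot, CardLocation, FColor::Magenta, true, -1.f, 0, 2.5f);
}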

Perspective Transformation for bird's eye view opencv c++

I am interested in perspective transformation to a bird's eye view. So far I have tried getPerspectiveTransform and findHomography and then passed the result to warpPerspective. The results are quite close, but a skew in the TL and BR corners is present. Also, the contour areas are not preserved equally after the transformation.
The contour is a square with multiple shapes inside.
Any suggestions on how to proceed?
Code block of what I have done so far.
std::vector<Point2f> quad_pts;
std::vector<Point2f> squre_pts;

cv::approxPolyDP( Mat(validContours[largest_contour_index]), contours_poly[0], epsilon, true );
if (approx_poly.size() > 4) return false;
for (int i = 0; i < 4; i++)
    quad_pts.push_back(contours_poly[0][i]);
if (!orderRectPoints(quad_pts))
    return false;

float widthTop = (float)distanceBetweenPoints(quad_pts[1], quad_pts[0]); // sqrt( pow(quad_pts[1].x - quad_pts[0].x, 2) + pow(quad_pts[1].y - quad_pts[0].y, 2));
float widthBottom = (float)distanceBetweenPoints(quad_pts[2], quad_pts[3]); // sqrt( pow(quad_pts[2].x - quad_pts[3].x, 2) + pow(quad_pts[2].y - quad_pts[3].y, 2));
float maxWidth = max(widthTop, widthBottom);

float heightLeft = (float)distanceBetweenPoints(quad_pts[1], quad_pts[2]); // sqrt( pow(quad_pts[1].x - quad_pts[2].x, 2) + pow(quad_pts[1].y - quad_pts[2].y, 2));
float heightRight = (float)distanceBetweenPoints(quad_pts[0], quad_pts[3]); // sqrt( pow(quad_pts[0].x - quad_pts[3].x, 2) + pow(quad_pts[0].y - quad_pts[3].y, 2));
float maxHeight = max(heightLeft, heightRight);

int mDist = (int)max(maxWidth, maxHeight);

// transform TO points
const int offset = 50;
squre_pts.push_back(Point2f(offset, offset));
squre_pts.push_back(Point2f(mDist - 1, offset));
squre_pts.push_back(Point2f(mDist - 1, mDist - 1));
squre_pts.push_back(Point2f(offset, mDist - 1));

maxWidth += offset; maxHeight += offset;
Size matSize((int)maxWidth, (int)maxHeight);

Mat transmtx = getPerspectiveTransform(quad_pts, squre_pts);
// Mat homo = findHomography(quad_pts, squre_pts);
warpPerspective(mRgba, mRgba, transmtx, matSize);

return true;
Link to transformed image
Image pre-transformation
corner on pre-transformed image
Corners from CornerSubPix
Your original pre-transformation image is not very good: the squares have different sizes and it looks wavy. The results you get are quite good given the quality of your input.
You could try to calibrate your camera (https://docs.opencv.org/2.4/doc/tutorials/calib3d/camera_calibration/camera_calibration.html) to compensate for lens distortion, and your results may improve.
EDIT: Just to summarize the comments below, approxPolyDP may not locate the corners properly if the square has rounded corners or is blurred. You may need to improve the corner location by other means, such as a sharper original image, different preprocessing (median filter or threshold, as you suggest in the comments), or other algorithms for finer corner location (such as the cornerSubPix function, or detecting the sides with a Hough transform and computing their intersections).
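For reference, a minimal sketch (not the poster's code) of refining the four approxPolyDP corners with cornerSubPix before computing the transform; gray is assumed to be the single-channel source image, and the other names mirror the snippet above.

// Refine the rough quad corners to sub-pixel accuracy on the grayscale image,
// then build the perspective transform from the refined points.
std::vector<cv::Point2f> refined = quad_pts;
cv::cornerSubPix(gray, refined,
                 cv::Size(11, 11),   // half-size of the search window (assumed)
                 cv::Size(-1, -1),   // no dead zone in the middle
                 cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 30, 0.01));
cv::Mat transmtx = cv::getPerspectiveTransform(refined, squre_pts);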

How do I find an object in image/video knowing its real physical dimension?

I have a sample of images and would like to detect an object among others in an image/video, already knowing in advance the real physical dimensions of that object. I have one of the image samples (it's an airplane door) and would like to find the window in the airplane door, knowing its physical dimensions (let's say it has an inner radius of 20 cm and an outer radius of 23 cm) and its real-world position in the door (for example, its minimal distance to the door frame is 15 cm). I also know my camera resolution in advance. Is there any MATLAB or OpenCV C++ code that can do this automatically with image processing?
Here is my image sample
And more complex image with round logos.
I ran the code on the second, more complex image and do not get the same results. Here is the resulting image.
You are looking for a circle in the image, so I suggest you use the Hough circle transform:
Convert the image to grayscale.
Find edges in the image.
Use the Hough circle transform to find circles in the image.
For each candidate circle, sample the values along the circle and accept it if they correspond to the predefined values.
The code:
clear all

% Parameters
minValueWindow = 90;
maxValueWindow = 110;

% Read file
I = imread('image1.jpg');
Igray = rgb2gray(I);
[row,col] = size(Igray);

% Edge detection
Iedge = edge(Igray,'canny',[0 0.3]);

% Hough circle transform
rad = 40:80; % The approximate radius in pixels
detectedCircle = {};
detectedCircleIndex = 1;
for radIndex=1:1:length(rad)
    [y0detect,x0detect,Accumulator] = houghcircle(Iedge,rad(1,radIndex),rad(1,radIndex)*pi/2);
    if ~isempty(y0detect)
        circles = struct;
        circles.X = x0detect;
        circles.Y = y0detect;
        circles.Rad = rad(1,radIndex);
        detectedCircle{detectedCircleIndex} = circles;
        detectedCircleIndex = detectedCircleIndex + 1;
    end
end

% For each detection run a color filter
ang = 0:0.01:2*pi;
finalCircles = {};
finalCircleIndex = 1;
for i=1:1:detectedCircleIndex-1
    rad = detectedCircle{i}.Rad;
    xp = rad*cos(ang);
    yp = rad*sin(ang);
    for detectedPointIndex=1:1:length(detectedCircle{i}.X)
        % Take each detected center and sample the gray image
        samplePointsX = round(detectedCircle{i}.X(detectedPointIndex) + xp);
        samplePointsY = round(detectedCircle{i}.Y(detectedPointIndex) + yp);
        sampleValueInd = sub2ind([row,col],samplePointsY,samplePointsX);
        sampleValueMean = mean(Igray(sampleValueInd));
        % Check if the circle color is good
        if(sampleValueMean > minValueWindow && sampleValueMean < maxValueWindow)
            circle = struct();
            circle.X = detectedCircle{i}.X(detectedPointIndex);
            circle.Y = detectedCircle{i}.Y(detectedPointIndex);
            circle.Rad = rad;
            finalCircles{finalCircleIndex} = circle;
            finalCircleIndex = finalCircleIndex + 1;
        end
    end
end

% Find the main circle by merging close hypotheses together
for finaCircleInd=1:1:length(finalCircles)
    circleCenter(finaCircleInd,1) = finalCircles{finaCircleInd}.X;
    circleCenter(finaCircleInd,2) = finalCircles{finaCircleInd}.Y;
    circleCenter(finaCircleInd,3) = finalCircles{finaCircleInd}.Rad;
end
[ind,C] = kmeans(circleCenter,2);
c = [length(find(ind==1));length(find(ind==2))];
[~,maxInd] = max(c);
xCircle = median(circleCenter(ind==maxInd,1));
yCircle = median(circleCenter(ind==maxInd,2));
radCircle = median(circleCenter(ind==maxInd,3));

% Plot circle
imshow(Igray);
hold on
ang = 0:0.01:2*pi;
xp = radCircle*cos(ang);
yp = radCircle*sin(ang);
plot(xCircle+xp,yCircle+yp,'Color','red', 'LineWidth',5);
The resulting image:
Remarks:
For other images you will still have to fine-tune several parameters, such as the radius range you search over, the color window, the Hough circle threshold, and the Canny edge thresholds.
In the function I searched for circles with radii from 40 to 80 pixels. Here you can use your prior information about the real-world radius of the window and the resolution of the camera. If you know approximately the distance from the camera to the airplane, the resolution of the camera, and the window radius in cm, you can use this to get the radius in pixels and use that for the Hough circle transform.
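As a rough sketch of that conversion (a simple pinhole-camera approximation in C++; the focal length and camera distance below are made-up assumptions, only the 20 cm / 23 cm radii come from the question):

// Pinhole model: radius_px ≈ focal_px * radius_m / distance_m,
// where focal_px = focal_mm / sensor_width_mm * image_width_px.
const double focal_px   = 1000.0;  // assumed focal length in pixels
const double distance_m = 3.0;     // assumed camera-to-door distance in metres
const double inner_r_m  = 0.20;    // 20 cm inner window radius (from the question)
const double outer_r_m  = 0.23;    // 23 cm outer window radius (from the question)

const int minRadPx = (int)(focal_px * inner_r_m / distance_m);     // ~66 px
const int maxRadPx = (int)(focal_px * outer_r_m / distance_m) + 1; // ~77 px
// These bounds could replace the hard-coded rad = 40:80 range in the script above.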
I wouldn't worry too much about the exact geometry and calibration and rather find the window by its own characteristics.
Binarization works relatively well, be it on the whole image or in a large region of interest.
Then you can select the most likely blob based on its approximate area and/or circularity.
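A minimal OpenCV C++ sketch of that blob-selection idea (the threshold choice, minimum area, and circularity bound are all assumptions to tune):

// Binarize, find contours, and keep the blob whose circularity best matches
// a round window. `gray` is assumed to be the grayscale input image.
cv::Mat bin;
cv::threshold(gray, bin, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);

std::vector<std::vector<cv::Point>> contours;
cv::findContours(bin, contours, cv::RETR_LIST, cv::CHAIN_APPROX_SIMPLE);

int best = -1;
double bestArea = 0.0;
for (size_t i = 0; i < contours.size(); ++i)
{
    const double area = cv::contourArea(contours[i]);
    const double perim = cv::arcLength(contours[i], true);
    if (area < 1000.0 || perim <= 0.0) continue;                     // assumed minimum area
    const double circularity = 4.0 * CV_PI * area / (perim * perim); // 1.0 for a perfect circle
    if (circularity > 0.8 && area > bestArea)                        // assumed circularity bound
    {
        bestArea = area;
        best = (int)i;
    }
}
// `best` now indexes the most likely window blob, or stays -1 if nothing qualifies.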

How to draw a segment of a circle in Cocos2d-x?

Context
I am trying to draw a pie chart for statistics in my game. I'm using Cocos2d-x ver. 3.8.1. The size of the game is important, so I don't want to use third-party frameworks to create pie charts.
Problem
I could not find any suitable method in Cocos2d-x for drawing part of a circle.
What I tried
I tried to find a solution to this problem on the Internet, but without success.
As is known, a sector of a circle = a triangle + a segment. So I also tried to use the drawSegment() method from DrawNode.
Although it has a radius parameter ("The segment radius", as written in the API reference), the radius affects only the thickness of the line.
The drawSegment() method draws a simple line whose thickness is set by that parameter.
Question
Please advise: how can I draw a segment or a sector of a circle in Cocos2d-x?
Any advice will be appreciated, thanks.
I think one of the ways to draw a sector of a circle in Cocos2d-x is to use drawPolygon on DrawNode. I wrote a little sample.
void drawSector(cocos2d::DrawNode* node, cocos2d::Vec2 origin, float radius, float angle_degree,
                cocos2d::Color4F fillColor, float borderWidth, cocos2d::Color4F bordercolor,
                unsigned int num_of_points = 100)
{
    if (!node)
    {
        return;
    }

    const cocos2d::Vec2 start = origin + cocos2d::Vec2{radius, 0};
    const auto angle_step = 2 * M_PI * angle_degree / 360.f / num_of_points;

    std::vector<cocos2d::Point> circle;
    circle.emplace_back(origin);
    for (int i = 0; i <= num_of_points; i++)
    {
        auto rads = angle_step * i;
        auto x = origin.x + radius * cosf(rads);
        auto y = origin.y + radius * sinf(rads);
        circle.emplace_back(x, y);
    }

    node->drawPolygon(circle.data(), circle.size(), fillColor, borderWidth, bordercolor);
}
This function calculates the positions of the points on the circle's edge and draws a polygon. If you want to use it, call it like the following:
auto canvas = DrawNode::create();
drawSector(canvas, cocos2d::Vec2(400, 400), 100, 60, cocos2d::Color4F::GREEN, 2, cocos2d::Color4F::BLUE, 100);
this->addChild(canvas);
The result would look like this. I think the code will help with your problem.

My square tetrimino keeps rotating oddly, but my other tetrimonos rotate fine?

So in my Tetris game, I'm working on rotation. I've found an algorithm that works for every piece but the square piece (ironically, the only one that doesn't even need to rotate). I know I could just check whether the piece is a square and only rotate it if it isn't, but that's just cheap. So here's the code:
pieceShape.setTexture(imgr.GetImage("square.png"));
for(int i = 0; i < 4; i++){
    sf::RectangleShape rect;
    if(i < 2)
        rect.setPosition(pieceShape.getPosition().x, i * 10);
    else
        rect.setPosition(pieceShape.getPosition().x + 10, (i - 2) * 10);
    rect.setSize(sf::Vector2f(10, 10));
    rect.setFillColor(sf::Color::Blue);
    pieceRectangles_.push_back(rect);
}
originCount = 10;
Here, I'm creating all four blocks that make up a square piece. 10 is the width of each box (4 boxes per square) in pixels. For all other pieces, I set originCount to 5 so the origin falls in the middle of the first box created. The originCount comes into play in the RotateRight/Left functions:
void GamePiece::RotateRight(){
    int newx, newy;
    sf::Vector2f origin(pieceRectangles_[0].getPosition().x + originCount, pieceRectangles_[0].getPosition().y + originCount);
    for(int i = 0; i < 4; i++){
        newx = (pieceRectangles_[i].getPosition().y + origin.x - origin.y);
        newy = (origin.x + origin.y - pieceRectangles_[i].getPosition().x - 10);
        pieceRectangles_[i].setPosition(newx, newy);
    }
}
In theory, the origin has now been set to the middle of the square's sprite, and the boxes should rotate about that point (i.e. appear to not even move). But the boxes shoot to the left 10 pixels on the first click, go up maybe 2 pixels on click two, etc. I'm clearly missing something, but what?
You calculate origin incorrectly. After the first rotation the coordinates of pieceRectangles_[0] will be (0, 10), so next time origin will be calculated as (10, 20), which is not what you want.
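One way around that (a sketch, not the poster's exact code) is to compute the pivot once when the piece is spawned, store it as a member (here a hypothetical pivot_), and shift it only when the whole piece translates, instead of re-deriving it from pieceRectangles_[0] after every rotation:

// Sketch: rotate each box 90 degrees about a fixed, stored pivot.
// pivot_ is a hypothetical sf::Vector2f member, set once at spawn time, e.g.
//   pivot_ = sf::Vector2f(pieceRectangles_[0].getPosition().x + originCount,
//                         pieceRectangles_[0].getPosition().y + originCount);
// and shifted by the same offset whenever the piece moves.
void GamePiece::RotateRight(){
    for(int i = 0; i < 4; i++){
        const sf::Vector2f p = pieceRectangles_[i].getPosition();
        // Same 90-degree clockwise mapping as the original code,
        // including the 10-pixel box-size compensation.
        const float newx = pivot_.x + (p.y - pivot_.y);
        const float newy = pivot_.y - (p.x - pivot_.x) - 10;
        pieceRectangles_[i].setPosition(newx, newy);
    }
}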