I am trying to draw a rectangle rotated to match the rotation of a line (the rectangle is created from four points).
Basic rectangle
The white overlay in the image was created using a rectangle. I want to rotate it so that it sits above the red rectangle.
Here is my red rectangle code:
std::vector<cv::Point> imagePoints;
imagePoints.push_back(it->rect_tl());
imagePoints.push_back(it->rect_tr());
imagePoints.push_back(it->rect_br());
imagePoints.push_back(it->rect_bl());
imagePoints.push_back(it->rect_tl());
polylines(cam_view, imagePoints, false, Scalar(0, 0, 255), 2);
Thanks for your help.
I assume the red rectangle is already given. So I calculate the angle of the top line of the red rectangle and create a new rotated rectangle with the cv::RotatedRect constructor.
Here is the example code:
#include <iostream>
#include <cmath>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
// Function to calculate the angle from 0 to 180° between two lines
float getClockwiseAngle0to180(cv::Point2f x_axis1, cv::Point2f x_axis2, cv::Point2f tl, cv::Point2f tr) {
    float dot = (x_axis2.x - x_axis1.x) * (tr.x - tl.x) + (tr.y - tl.y) * (x_axis2.y - x_axis1.y);
    float det = (x_axis2.x - x_axis1.x) * (tr.y - tl.y) - (x_axis2.y - x_axis1.y) * (tr.x - tl.x);
    float angle = atan2(det, dot);
    angle = angle * (180 / (float)CV_PI);
    if (angle < 0) {
        angle = angle + 360;
    }
    if (angle >= 180) {
        angle = angle - 180;
    }
    return angle;
}
int main(int argc, char** argv) {
    cv::Mat test_image(400, 400, CV_8UC3, cv::Scalar(0));

    // You created the red rectangle with some detection algorithm, and it seems
    // you already have the topleft (tl), topright (tr), ... coordinates of it
    std::vector<cv::Point2f> red_rect_points;
    cv::Point2f tl(200.0, 200.0);
    cv::Point2f tr(300.0, 150.0);
    cv::Point2f br(350.0, 220.0);
    cv::Point2f bl(250.0, 300.0);
    red_rect_points.push_back(tl);
    red_rect_points.push_back(tr);
    red_rect_points.push_back(br);
    red_rect_points.push_back(bl);

    // Get the angle between the tl and tr points with the function above
    float rotation = getClockwiseAngle0to180(cv::Point2f(0, 0), cv::Point2f(1, 0), tr, tl);
    std::cout << rotation << std::endl;

    // Create a new white rectangle with the same rotation angle.
    // Construct it from center, size and angle.
    cv::RotatedRect white_rectangle(cv::Point2f(200, 150), cv::Size2f(80, 50), rotation);
    cv::Point2f white_vertices[4];
    white_rectangle.points(white_vertices);

    // Draw both rectangles
    for (int i = 0; i < 4; ++i) {
        line(test_image, red_rect_points[i], red_rect_points[(i + 1) % 4], cv::Scalar(0, 0, 255), 1, 8, 0);
        line(test_image, white_vertices[i], white_vertices[(i + 1) % 4], cv::Scalar(255, 255, 255), 1, 8, 0);
    }

    cv::imshow("Rectangles", test_image);
    cv::waitKey(0);
}
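If the white rectangle should sit directly above the red one rather than at a fixed center, one option is to derive its center from the red rectangle's top edge. A minimal sketch under that assumption (the offset value is hypothetical, picked for illustration):

// Sketch: derive the white rectangle's center from the red rectangle's top
// edge (tl, tr from the example above). The offset is a hypothetical gap.
cv::Point2f mid = (tl + tr) * 0.5f;      // midpoint of the top edge
cv::Point2f edge = tr - tl;
cv::Point2f normal(edge.y, -edge.x);     // points "up" relative to the edge (image y grows downwards)
float len = std::sqrt(normal.x * normal.x + normal.y * normal.y);
float offset = 60.0f;                    // hypothetical distance above the red rectangle
cv::Point2f white_center = mid + normal * (offset / len);
cv::RotatedRect white_above(white_center, cv::Size2f(80, 50), rotation);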
I am trying to rotate video frames captured with libcamera-vid (from libcamera-apps) by an arbitrary number of degrees using OpenCV's warpAffine().
The frames are, as far as I understand, in YUV420 planar format.
The gist of what I'm doing is:
#include <opencv2/imgproc.hpp>
#include <opencv2/core.hpp>

cv::Point2f Ycenter = cv::Point2f(width / 2.0f, height / 2.0f);
cv::Point2f UVcenter = cv::Point2f(width / 4.0f, height / 4.0f);
double rotation = 0;
cv::Mat Ytransform = cv::getRotationMatrix2D(Ycenter, rotation, 1.0);
cv::Mat UVtransform = cv::getRotationMatrix2D(UVcenter, rotation, 1.0);
int Uoffset = height * width;
int Voffset = 5 * height * width / 4;
cv::Size Ysize(height, width);
cv::Size UVsize(height / 2, width / 2);

for (unsigned int count = 0; ; count++)
{
    // ...
    // Wait for frame here
    // ...

    // Acquire buffer in which frame is stored:
    uint8_t* buffer = getFrameBuffer(); // Simplification, but not important

    double rot = floor(count / 10);
    if (10 * rot != rotation)
    {
        rotation = 10 * rot;
        Ytransform = cv::getRotationMatrix2D(Ycenter, rotation, 1.0);
        UVtransform = cv::getRotationMatrix2D(UVcenter, rotation, 1.0);
    }

    cv::Mat Y(Ysize.height, Ysize.width, CV_8UC1, buffer);
    cv::Mat U(UVsize.height, UVsize.width, CV_8UC1, buffer + Uoffset);
    cv::Mat V(UVsize.height, UVsize.width, CV_8UC1, buffer + Voffset);

    cv::warpAffine(Y, Y, Ytransform, Ysize);
    cv::warpAffine(U, U, UVtransform, UVsize);
    cv::warpAffine(V, V, UVtransform, UVsize);

    sendFrameToEncoder(buffer); // Also a simplification, also not important as far as I know
}
Where height, width are the height and width of the video frame.
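For reference, this is the YUV420 planar buffer layout that the offsets above assume (my summary, not from the original post):

// YUV420 planar layout for a width x height frame (one byte per sample):
//   Y plane:  width * height bytes,             starting at buffer + 0
//   U plane: (width / 2) * (height / 2) bytes,  starting at buffer + width * height
//   V plane: (width / 2) * (height / 2) bytes,  starting at buffer + 5 * width * height / 4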
However, this produces weird, warped images. Something is clearly being rotated, but not correctly.
rotation = 0, 10, 20 and 30 each produce a visibly different, warped frame (screenshots omitted).
So, it clearly isn't working correctly. Does anyone know what's going wrong here?
I'm using OpenCV version 4.5.1 on a Raspberry Pi Zero W running Raspberry Pi OS Bullseye.
My images: (omitted)
Requirement:
Make the image always horizontal; I am not able to understand how the axis is decided to achieve this.
Algorithm:
Read the image
Find the external contour
Draw the contours
Use the external contour to detect minAreaRect (a bounding box will not help in my case)
Get the rotation matrix and rotate the image
Extract the required patch from the rotated image
My code:
// read the image
cv::Mat img = imread("90.jpeg");

// findContours expects a single-channel 8-bit image
cv::Mat contourOutput;
cv::cvtColor(img, contourOutput, cv::COLOR_BGR2GRAY);

// detect the external contour (images will have noise, although the example images don't)
std::vector<std::vector<cv::Point> > contours;
cv::findContours(contourOutput, contours, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE);

double largest_area = 0;
int largest_contour_index = 0;
for (size_t i = 0; i < contours.size(); i++) {
    double area = contourArea(contours[i]);
    // keep the largest contour index
    if (area > largest_area) {
        largest_area = area;
        largest_contour_index = static_cast<int>(i);
    }
}

// draw the largest contour
drawContours(img, contours, largest_contour_index, Scalar(255, 0, 0), 2);

// detect the minimum-area rect to get the angle and centre
cv::RotatedRect box = cv::minAreaRect(cv::Mat(contours[largest_contour_index]));

// take the box angle
double angle = box.angle;
if (angle < -45) {
    box.angle += 90;
}
angle = box.angle;

// create the rotation matrix
cv::Mat rot_mat = cv::getRotationMatrix2D(box.center, angle, 1);

// apply the transformation
cv::Mat rotated;
cv::warpAffine(img, rotated, rot_mat, img.size(), cv::INTER_CUBIC);

cv::Size box_size = box.size;
if (box.angle < -45.)
    std::swap(box_size.width, box_size.height);

// get the cropped image
cv::Mat cropped;
cv::getRectSubPix(rotated, box_size, box.center, cropped);

// display the image
namedWindow("image2", WINDOW_NORMAL);
imshow("image2", cropped);
waitKey(0);
The idea is to compute the rotated bounding box angle using minAreaRect, then deskew the image with getRotationMatrix2D and warpAffine. One final step is to rotate by 90 degrees if we are working with a vertical image. Here are the results, with before (left) and after (right) images omitted, and the angle of rotation for each:
-39.999351501464844
38.52387619018555
1.6167902946472168
1.9749339818954468
I implemented it in Python, but you can adapt the same approach in C++.
Code
import cv2
import numpy as np

# Load image, grayscale, and Otsu's threshold
image = cv2.imread('4.png')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]

# Compute rotated bounding box
coords = np.column_stack(np.where(thresh > 0))
angle = cv2.minAreaRect(coords)[-1]

# Determine rotation angle
if angle < -45:
    angle = -(90 + angle)
else:
    angle = -angle
print(angle)

# Rotate image to deskew
(h, w) = image.shape[:2]
center = (w // 2, h // 2)
M = cv2.getRotationMatrix2D(center, angle, 1.0)
rotated = cv2.warpAffine(image, M, (w, h), flags=cv2.INTER_CUBIC, borderMode=cv2.BORDER_REPLICATE)

# Vertical image, so rotate to horizontal
h, w, _ = rotated.shape
if h > w:
    rotated = cv2.rotate(rotated, cv2.ROTATE_90_CLOCKWISE)

cv2.imshow('rotated', rotated)
cv2.imwrite('rotated.png', rotated)
cv2.waitKey()
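Since the rest of this thread is C++, here is a rough C++ adaptation of the same approach (my sketch, not the answerer's code; note that the minAreaRect angle convention changed in OpenCV 4.5+, so the angle normalization below may need adjusting for your version):

#include <opencv2/opencv.hpp>

int main() {
    cv::Mat image = cv::imread("4.png");
    cv::Mat gray, thresh;
    cv::cvtColor(image, gray, cv::COLOR_BGR2GRAY);
    cv::threshold(gray, thresh, 0, 255, cv::THRESH_BINARY + cv::THRESH_OTSU);

    // Collect the coordinates of all foreground pixels and fit a rotated rect.
    std::vector<cv::Point> coords;
    cv::findNonZero(thresh, coords);
    float angle = cv::minAreaRect(coords).angle;
    if (angle < -45) angle = -(90 + angle); else angle = -angle;

    // Rotate about the image center to deskew.
    cv::Point2f center(image.cols / 2.0f, image.rows / 2.0f);
    cv::Mat M = cv::getRotationMatrix2D(center, angle, 1.0);
    cv::Mat rotated;
    cv::warpAffine(image, rotated, M, image.size(), cv::INTER_CUBIC, cv::BORDER_REPLICATE);

    // Vertical image, so rotate to horizontal.
    if (rotated.rows > rotated.cols)
        cv::rotate(rotated, rotated, cv::ROTATE_90_CLOCKWISE);
    cv::imwrite("rotated.png", rotated);
}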
I'm trying to get the orientation image of a fingerprint using the method proposed in this paper.
I tried implementing the steps described in Section 3.1.1 of the paper, but I don't get the desired result.
Here is my OpenCV code:
Mat calculate_orientation(Mat img, Mat &coherence) {
    Mat image = img.clone();
    Mat orient_im = Mat::zeros(image.size(), image.type());
    Mat grad_x, grad_y;
    Sobel(image, grad_x, CV_32F, 1, 0, 3, 1, 0, BORDER_DEFAULT);
    Sobel(image, grad_y, CV_32F, 0, 1, 3, 1, 0, BORDER_DEFAULT);

    // Iterate per BLOCKSIZE and use BLOCKSIZE/2 as the center
    for (int i = BLOCKSIZE/2; i <= image.rows - BLOCKSIZE/2; i += BLOCKSIZE) {
        for (int j = BLOCKSIZE/2; j <= image.cols - BLOCKSIZE/2; j += BLOCKSIZE) {
            // Iterate over each pixel in the block
            float vx = 0.0f, vy = 0.0f, angle;

            // Coherence
            float gx = 0.0f, gy = 0.0f, gxy = 0.0f;

            for (int u = i - BLOCKSIZE/2; u < i + BLOCKSIZE/2; u++) {
                for (int v = j - BLOCKSIZE/2; v < j + BLOCKSIZE/2; v++) {
                    gx = 2 * grad_x.at<float>(u,v) * grad_y.at<float>(u,v);
                    gy = pow(grad_x.at<float>(u,v), 2) - pow(grad_y.at<float>(u,v), 2);
                    vx += gx;
                    vy += gy;
                    gxy += sqrt(pow(gx,2) + pow(gy,2));
                }
            }

            if (vy == 0) {
                angle = 90;
            } else {
                angle = 0.5 * atan(vx/vy) * 180.0f/CV_PI;
            }

            // The angle above is perpendicular to the ridge direction
            orient_im.at<float>(i,j) = angle + 90;

            // Coherence
            float coh = sqrt(pow(vx,2) + pow(vy,2)) / gxy;
            coherence.at<float>(i,j) = coh;
        }
    }
    return orient_im;
}
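For reference, this is my reading of what the block loop above computes, in the usual least-squares orientation formulation (a summary of the code, not a quote from the paper):

$$V_x = \sum_{u,v} 2\,G_x(u,v)\,G_y(u,v), \qquad V_y = \sum_{u,v} \bigl(G_x(u,v)^2 - G_y(u,v)^2\bigr)$$

$$\theta = 90^\circ + \tfrac{1}{2}\arctan\!\left(\frac{V_x}{V_y}\right), \qquad \mathrm{coherence} = \frac{\sqrt{V_x^2 + V_y^2}}{\sum_{u,v}\sqrt{g_x^2 + g_y^2}}$$

where $g_x$, $g_y$ are the per-pixel terms being summed and $\arctan$ is taken in degrees.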
This is the input image.
And this is the result. The blue lines are orientations with a coherence value of more than 0.5, and the red lines are orientations with a coherence value of less than 0.5.
Only around half of the orientations seem right.
I know there are already a few questions about this, but I still haven't gotten the correct results, so pardon me for asking. Any help would be appreciated.
I want to fill a circle with a gradient color, like the example shown below. I can't figure out an easy way to do that.
I can draw multiple concentric circles, but the transitions between them are visible.
cv::circle(img, center, circle_radius * 1.5, cv::Scalar(1.0, 1.0, 0.3), CV_FILLED);
cv::circle(img, center, circle_radius * 1.2, cv::Scalar(1.0, 1.0, 0.6), CV_FILLED);
cv::circle(img, center, circle_radius, cv::Scalar(1.0, 1.0, 1.0), CV_FILLED);
All you need to do is create a function which takes in a central point and a new point, calculates the distance, and returns a grayscale value for that point. Alternatively you could just return the distance, store the distance at that point, and then scale the whole thing later with cv::normalize().
So let's say you have the central point as (50, 50) in a (100, 100) image. Here's pseudocode for what you'd want to do:
function euclideanDistance(center, point)  # returns a float
    return sqrt( (center.x - point.x)^2 + (center.y - point.y)^2 )

center = (50, 50)
rows = 100
cols = 100
gradient = new Mat(rows, cols)  # should be of type float

for row < rows:
    for col < cols:
        point = (col, row)
        gradient[row, col] = euclideanDistance(center, point)

normalize(gradient, 0, 255, NORM_MINMAX, uint8)
gradient = 255 - gradient
Note the steps here:
Create the Euclidean distance function to calculate distance
Create a floating point matrix to hold the distance values
Loop through all rows and columns and assign a distance value
Normalize to the range you want (you could stick with a float here instead of casting to uint8, but you do you)
Flip the gradient, since distances farther away will be brighter---but you want the opposite.
Now, your exact example image has the gradient only inside a circle, whereas this method turns the whole image into a gradient. If you want a specific radius, just modify the function which calculates the Euclidean distance: beyond that distance, return 0 (the value at the center of the circle, which will eventually be flipped to white):
function euclideanDistance(center, point, radius)  # returns a float
    distance = sqrt( (center.x - point.x)^2 + (center.y - point.y)^2 )
    if distance > radius:
        return 0
    else:
        return distance
Here is the above in actual C++ code:
#include <opencv2/opencv.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <cmath>

float euclidean_distance(cv::Point center, cv::Point point, int radius){
    float distance = std::sqrt(
        std::pow(center.x - point.x, 2) + std::pow(center.y - point.y, 2));
    if (distance > radius) return 0;
    return distance;
}

int main(){
    int h = 400;
    int w = 400;
    int radius = 100;
    cv::Mat gradient = cv::Mat::zeros(h, w, CV_32F);
    cv::Point center(150, 200);
    cv::Point point;

    for(int row = 0; row < h; ++row){
        for(int col = 0; col < w; ++col){
            point.x = col;
            point.y = row;
            gradient.at<float>(row, col) = euclidean_distance(center, point, radius);
        }
    }

    cv::normalize(gradient, gradient, 0, 255, cv::NORM_MINMAX, CV_8U);
    cv::bitwise_not(gradient, gradient);

    cv::imshow("gradient", gradient);
    cv::waitKey();
}
A completely different method (though it achieves the same thing) is to use distanceTransform(). This function maps each white pixel's distance to the nearest black pixel to a grayscale value, like we were doing above. This code is more concise and does the same thing. However, it can work on arbitrary shapes, not just circles, so that's cool.
#include <opencv2/opencv.hpp>
#include <opencv2/highgui/highgui.hpp>

int main(){
    int h = 400;
    int w = 400;

    cv::Mat gradient = cv::Mat::zeros(h, w, CV_8U);
    cv::rectangle(gradient, cv::Point(115, 100), cv::Point(270, 350), cv::Scalar(255), -1, 8);

    cv::Mat gradient_padding;
    cv::bitwise_not(gradient, gradient_padding);

    cv::distanceTransform(gradient, gradient, cv::DIST_L2, cv::DIST_MASK_PRECISE);
    cv::normalize(gradient, gradient, 0, 255, cv::NORM_MINMAX, CV_8U);
    cv::bitwise_or(gradient, gradient_padding, gradient);

    cv::imshow("gradient-distxform", gradient);
    cv::waitKey();
}
You have to draw many circles. The color of each circle depends on its distance from the center. Here is a simple example:
void printGradient(cv::Mat &_input, const cv::Point &_center, const double radius)
{
    cv::circle(_input, _center, radius, cv::Scalar(0, 0, 0), -1);
    for (double i = 1; i < radius; i++)
    {
        const int color = 255 - int(i / radius * 255); // or some other color calculation
        cv::circle(_input, _center, i, cv::Scalar(color, color, color), 2);
    }
}
And the result:
Another approach not mentioned yet is to precompute a circle gradient image (with one of the approaches mentioned, such as the accepted solution) and use affine warping with linear interpolation to create other such circles (of different sizes). This can be faster if warping and interpolation are optimized and perhaps hardware-accelerated.
The result might be slightly less than perfect.
I once used this to create an individual vignetting-mask circle for each frame in endoscopic imaging. It was faster than computing the distances "manually".
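A minimal sketch of the idea, under my own assumptions (the template size is hypothetical, and cv::resize stands in as the simplest affine warp; none of this is the answerer's code):

#include <opencv2/opencv.hpp>
#include <cmath>

int main() {
    // Precompute one gradient circle template (any of the approaches above works).
    const int base = 256; // hypothetical template size
    cv::Mat tmpl(base, base, CV_8U);
    cv::Point2f c(base / 2.0f, base / 2.0f);
    for (int row = 0; row < base; ++row) {
        for (int col = 0; col < base; ++col) {
            float d = std::hypot(col - c.x, row - c.y);
            tmpl.at<uchar>(row, col) =
                d > base / 2 ? 0 : cv::saturate_cast<uchar>(255 - 255 * d / (base / 2));
        }
    }

    // Circles of other sizes are now just a linear-interpolated resize,
    // instead of recomputing per-pixel distances each time.
    cv::Mat small_circle;
    cv::resize(tmpl, small_circle, cv::Size(100, 100), 0, 0, cv::INTER_LINEAR);

    cv::imshow("resized gradient circle", small_circle);
    cv::waitKey();
}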
I want to get the new location of a cv::Rect (ROI) after rotating the image using the following code:
cv::Point2f center(image.cols/2.0, image.rows/2.0);
cv::Rect ROI = cv::Rect(100, 200, 50, 100);
cv::Mat rot = cv::getRotationMatrix2D(center, angle, 1.0);
cv::Rect bbox = cv::RotatedRect(center, image.size(), angle).boundingRect();

rot.at<double>(0,2) += bbox.width/2.0 - center.x;
rot.at<double>(1,2) += bbox.height/2.0 - center.y;

cv::warpAffine(image, image, rot, bbox.size(), cv::INTER_LINEAR, cv::BORDER_CONSTANT, cv::Scalar(255, 255, 255));
How can I do it?
Since you have the rotation matrix, you can rotate the ROI rectangle using cv::transform function. First of all, you would need an array of points of that rectangle.
vector<Point2f> roi_points = {
    {(float)roi.x, (float)roi.y},
    {(float)(roi.x + roi.width), (float)roi.y},
    {(float)(roi.x + roi.width), (float)(roi.y + roi.height)},
    {(float)roi.x, (float)(roi.y + roi.height)}
};
Then, you can use cv::transform:
vector<Point2f> rot_roi_points;
transform(roi_points, rot_roi_points, rot);
This way, rot_roi_points holds points of the transformed rectangle.
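If you then need an axis-aligned cv::Rect again (for example, to crop the rotated image), one option is cv::boundingRect over the transformed points; a small follow-up sketch:

// Axis-aligned bounding box of the rotated ROI corners (my follow-up, not
// part of the original answer).
cv::Rect new_roi = cv::boundingRect(rot_roi_points);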
In order to get the new location of a cv::Rect (ROI), you have to transform each of its corners using the following function:
cv::Point2f Convert(const cv::Point2f &p, const cv::Mat &t)
{
    float x = p.x * t.at<double>(0, 0) + p.y * t.at<double>(0, 1) + t.at<double>(0, 2);
    float y = p.x * t.at<double>(1, 0) + p.y * t.at<double>(1, 1) + t.at<double>(1, 2);
    return cv::Point2f(x, y);
}
The transformation matrix is the same as you used for image rotation.
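For example, a usage sketch (my addition, reusing the names ROI and rot from the question's snippet) transforming all four corners:

// Transform all four corners of the ROI with the same matrix `rot`
// that was passed to cv::warpAffine in the question.
std::vector<cv::Point2f> corners = {
    Convert(cv::Point2f(ROI.x, ROI.y), rot),
    Convert(cv::Point2f(ROI.x + ROI.width, ROI.y), rot),
    Convert(cv::Point2f(ROI.x + ROI.width, ROI.y + ROI.height), rot),
    Convert(cv::Point2f(ROI.x, ROI.y + ROI.height), rot)
};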