I want to fill a circle with a gradient color, like the example shown below, but I can't figure out an easy way to do that.
I can draw several concentric circles, but the transitions between them are visible:
cv::circle(img, center, circle_radius * 1.5, cv::Scalar(1.0, 1.0, 0.3), CV_FILLED);
cv::circle(img, center, circle_radius * 1.2, cv::Scalar(1.0, 1.0, 0.6), CV_FILLED);
cv::circle(img, center, circle_radius, cv::Scalar(1.0, 1.0, 1.0), CV_FILLED);
All you need to do is create a function which takes in a central point and a new point, calculates the distance, and returns a grayscale value for that point. Alternatively you could just return the distance, store the distance at that point, and then scale the whole thing later with cv::normalize().
So let's say you have the central point as (50, 50) in a (100, 100) image. Here's pseudocode for what you'd want to do:
function euclideanDistance(center, point)   # returns a float
    return sqrt( (center.x - point.x)^2 + (center.y - point.y)^2 )

center = (50, 50)
rows = 100
cols = 100
gradient = new Mat(rows, cols)   # should be of type float

for row < rows:
    for col < cols:
        point = (col, row)
        gradient[row, col] = euclideanDistance(center, point)

normalize(gradient, 0, 255, NORM_MINMAX, uint8)
gradient = 255 - gradient
Note the steps here:
Create the Euclidean distance function to calculate distance
Create a floating point matrix to hold the distance values
Loop through all rows and columns and assign a distance value
Normalize to the range you want (you could stick with a float here instead of casting to uint8, but you do you)
Invert the gradient, since pixels farther from the center will be brighter, but you want the opposite.
Now for your exact example image, there's a gradient in a circle, whereas this method just creates the whole image as a gradient. In your case, if you want a specific radius, just modify the function which calculates the Euclidean distance, and if it's beyond some distance, set it to 0 (the value at the center of the circle, which will be flipped eventually to white):
function euclideanDistance(center, point, radius)   # returns a float
    distance = sqrt( (center.x - point.x)^2 + (center.y - point.y)^2 )
    if distance > radius:
        return 0
    else:
        return distance
Here is the above in actual C++ code:
#include <opencv2/opencv.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <cmath>

float euclidean_distance(cv::Point center, cv::Point point, int radius){
    float distance = std::sqrt(
        std::pow(center.x - point.x, 2) + std::pow(center.y - point.y, 2));
    if (distance > radius) return 0;
    return distance;
}

int main(){
    int h = 400;
    int w = 400;
    int radius = 100;
    cv::Mat gradient = cv::Mat::zeros(h, w, CV_32F);
    cv::Point center(150, 200);
    cv::Point point;

    for(int row = 0; row < h; ++row){
        for(int col = 0; col < w; ++col){
            point.x = col;
            point.y = row;
            gradient.at<float>(row, col) = euclidean_distance(center, point, radius);
        }
    }

    cv::normalize(gradient, gradient, 0, 255, cv::NORM_MINMAX, CV_8U);
    cv::bitwise_not(gradient, gradient);

    cv::imshow("gradient", gradient);
    cv::waitKey();
}
A completely different method (though it produces the same result) is to use distanceTransform(). That function maps each pixel of a white blob to its distance from the nearest black pixel, which is essentially what we were doing above. The code is more concise and does the same thing. As a bonus, it works on arbitrary shapes, not just circles, so that's cool.
#include <opencv2/opencv.hpp>
#include <opencv2/highgui/highgui.hpp>

int main(){
    int h = 400;
    int w = 400;
    int radius = 100;
    cv::Point center(150, 200);

    cv::Mat gradient = cv::Mat::zeros(h, w, CV_8U);
    cv::rectangle(gradient, cv::Point(115, 100), cv::Point(270, 350), cv::Scalar(255), -1, 8);

    cv::Mat gradient_padding;
    cv::bitwise_not(gradient, gradient_padding);

    // (in OpenCV 3/4 the constants are cv::DIST_L2 and cv::DIST_MASK_PRECISE)
    cv::distanceTransform(gradient, gradient, CV_DIST_L2, CV_DIST_MASK_PRECISE);
    cv::normalize(gradient, gradient, 0, 255, cv::NORM_MINMAX, CV_8U);
    cv::bitwise_or(gradient, gradient_padding, gradient);

    cv::imshow("gradient-distxform.png", gradient);
    cv::waitKey();
}
You have to draw many circles, where the color of each circle depends on its distance from the center. Here is a simple example:
void printGradient(cv::Mat &_input, const cv::Point &_center, const double radius)
{
    cv::circle(_input, _center, radius, cv::Scalar(0, 0, 0), -1);
    for(double i = 1; i < radius; i++)
    {
        const int color = 255 - int(i / radius * 255); // or some other color calculation
        cv::circle(_input, _center, i, cv::Scalar(color, color, color), 2);
    }
}
And result:
Another approach not mentioned yet is to precompute a circle gradient image (with one of the approaches mentioned above, such as the accepted solution) and use affine warping with linear interpolation to create other such circles of different sizes. This can be faster if warping and interpolation are optimized and perhaps hardware accelerated.
The result might be slightly less than perfect.
I once used this to create an individual vignetting mask circle for each frame in endoscopic imaging. It was faster than computing the distances "manually".
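A minimal sketch of that idea (the function and parameter names are my own, not from the post): build one gradient circle once, then warp it to whatever radius and position each frame needs.
#include <opencv2/opencv.hpp>

// Sketch: scale a precomputed gradient circle ("base", assumed centered in its
// own image with radius baseRadius) to a new radius and position using an
// affine warp with linear interpolation.
cv::Mat warpGradientCircle(const cv::Mat &base, float baseRadius,
                           float newRadius, cv::Point2f newCenter, cv::Size outSize)
{
    float s = newRadius / baseRadius;
    cv::Point2f baseCenter(base.cols / 2.0f, base.rows / 2.0f);

    // 2x3 affine matrix: scale about the origin, then translate so the
    // base center lands on newCenter.
    cv::Mat M = (cv::Mat_<double>(2, 3) <<
        s, 0, newCenter.x - s * baseCenter.x,
        0, s, newCenter.y - s * baseCenter.y);

    cv::Mat out;
    cv::warpAffine(base, out, M, outSize, cv::INTER_LINEAR);
    return out;
}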
My images:
Requirement:
I am not able to understand how the axis is decided so that the image always ends up horizontal.
Algorithm:
Read the image
Find external contour
Draw the contours
Use the external contour to detect minAreaRect (a plain bounding box will not help here)
Get the rotation matrix and rotate the image
Extract the required patch from the rotated image
My code:
// read the image
cv::Mat img = cv::imread("90.jpeg");
cv::Mat contourOutput = img.clone();

// detect external contour (real images will have noise, although the example images don't)
std::vector<std::vector<cv::Point> > contours;
cv::findContours(contourOutput, contours, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE);
int largest_area = 0;
int largest_contour_index = 0;
for (size_t i = 0; i < contours.size(); i++) {
    double area = contourArea(contours[i]);
    // keep the index of the largest contour
    if (area > largest_area) {
        largest_area = area;
        largest_contour_index = i;
    }
}

// draw contours
drawContours(img, contours, largest_contour_index, Scalar(255, 0, 0), 2);

// detect minimum area rect to get the angle and centre
cv::RotatedRect box = cv::minAreaRect(cv::Mat(contours[largest_contour_index]));

// take the box angle
double angle = box.angle;
if (angle < -45) {
    box.angle += 90;
}
angle = box.angle;

// create rotation matrix
cv::Mat rot_mat = cv::getRotationMatrix2D(box.center, angle, 1);

// Apply the transformation
cv::Mat rotated;
cv::warpAffine(img, rotated, rot_mat, img.size(), cv::INTER_CUBIC);

cv::Size box_size = box.size;
if (box.angle < -45.)
    std::swap(box_size.width, box_size.height);

// get the cropped image
cv::Mat cropped;
cv::getRectSubPix(rotated, box_size, box.center, cropped);

// Display the image
namedWindow("image2", WINDOW_NORMAL);
imshow("image2", cropped);
waitKey(0);
The idea is to compute the rotated bounding box angle using minAreaRect, then deskew the image with getRotationMatrix2D and warpAffine. One final step is to rotate by 90 degrees if we are working with a vertical image. Here are the results, with before (left) and after (right) and the angle of rotation:
-39.999351501464844
38.52387619018555
1.6167902946472168
1.9749339818954468
I implemented it in Python, but you can adapt the same approach to C++ (a rough C++ sketch follows after the Python code below).
Code
import cv2
import numpy as np
# Load image, grayscale, and Otsu's threshold
image = cv2.imread('4.png')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
# Compute rotated bounding box
coords = np.column_stack(np.where(thresh > 0))
angle = cv2.minAreaRect(coords)[-1]
# Determine rotation angle
if angle < -45:
    angle = -(90 + angle)
else:
    angle = -angle
print(angle)
# Rotate image to deskew
(h, w) = image.shape[:2]
center = (w // 2, h // 2)
M = cv2.getRotationMatrix2D(center, angle, 1.0)
rotated = cv2.warpAffine(image, M, (w, h), flags=cv2.INTER_CUBIC, borderMode=cv2.BORDER_REPLICATE)
# Vertical image so rotate to horizontal
h, w, _ = rotated.shape
if h > w:
    rotated = cv2.rotate(rotated, cv2.ROTATE_90_CLOCKWISE)
cv2.imshow('rotated', rotated)
cv2.imwrite('rotated.png', rotated)
cv2.waitKey()
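For reference, here is a rough C++ sketch of the same approach. It is untested against the original images and the file name is just a placeholder; note that cv::findNonZero returns (x, y) points whereas the np.where stack above is (row, col), so the angle branch may need adjusting for your inputs, and cv::rotate requires OpenCV 3 or later.
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    // Load image, grayscale, and Otsu's threshold (file name is a placeholder)
    cv::Mat image = cv::imread("4.png");
    cv::Mat gray, thresh;
    cv::cvtColor(image, gray, cv::COLOR_BGR2GRAY);
    cv::threshold(gray, thresh, 0, 255, cv::THRESH_BINARY + cv::THRESH_OTSU);

    // Fit a rotated bounding box to all foreground pixels
    std::vector<cv::Point> coords;
    cv::findNonZero(thresh, coords);
    double angle = cv::minAreaRect(coords).angle;

    // Same angle correction as in the Python version above
    if (angle < -45)
        angle = -(90 + angle);
    else
        angle = -angle;
    std::cout << angle << std::endl;

    // Rotate the image about its center to deskew
    cv::Point2f center(image.cols / 2.0f, image.rows / 2.0f);
    cv::Mat M = cv::getRotationMatrix2D(center, angle, 1.0);
    cv::Mat rotated;
    cv::warpAffine(image, rotated, M, image.size(),
                   cv::INTER_CUBIC, cv::BORDER_REPLICATE);

    // Vertical image, so rotate to horizontal (OpenCV 3+)
    if (rotated.rows > rotated.cols)
        cv::rotate(rotated, rotated, cv::ROTATE_90_CLOCKWISE);

    cv::imshow("rotated", rotated);
    cv::waitKey();
    return 0;
}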
I am trying to draw a rectangle rotated to match the rotation of a line (the rectangle is created from four points).
Basic rectangle
The white overlay in the image was created using a rectangle. I want to rotate it so that it sits above the red rectangle.
Here is my red rectangle code:
std::vector<cv::Point> imagePoints;
imagePoints.push_back(it->rect_tl());
imagePoints.push_back(it->rect_tr());
imagePoints.push_back(it->rect_br());
imagePoints.push_back(it->rect_bl());
imagePoints.push_back(it->rect_tl());
polylines(cam_view, imagePoints, false, Scalar(0, 0, 255), 2);
Thanks for your help.
I assume you already have the red rectangle given, so I calculate the angle of the top line of the red rectangle and create a new rotated rectangle using the cv::RotatedRect class.
Here is the example code:
#include <iostream>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

// Function to calculate the angle from 0 to 180° between two lines
float getClockwiseAngle0to180(cv::Point2f x_axis1, cv::Point2f x_axis2, cv::Point2f tl, cv::Point2f tr) {
    float dot = (x_axis2.x - x_axis1.x) * (tr.x - tl.x) + (tr.y - tl.y) * (x_axis2.y - x_axis1.y);
    float det = (x_axis2.x - x_axis1.x) * (tr.y - tl.y) - (x_axis2.y - x_axis1.y) * (tr.x - tl.x);
    float angle = atan2(det, dot);
    angle = angle * (180 / (float)CV_PI);
    if (angle < 0) {
        angle = angle + 360;
    }
    if (angle >= 180) {
        angle = angle - 180;
    }
    return angle;
}

int main(int argc, char** argv) {
    cv::Mat test_image(400, 400, CV_8UC3, cv::Scalar(0));

    // You created the red rectangle with some detection algorithm and it seems that
    // you already have the topleft (tl), topright (tr)... coordinates of the red rectangle
    std::vector<cv::Point2f> red_rect_points;
    cv::Point2f tl(200.0, 200.0);
    cv::Point2f tr(300.0, 150.0);
    cv::Point2f br(350.0, 220.0);
    cv::Point2f bl(250.0, 300.0);
    red_rect_points.push_back(tl);
    red_rect_points.push_back(tr);
    red_rect_points.push_back(br);
    red_rect_points.push_back(bl);

    // Get the angle between the tl and tr points with the given function
    float rotation = getClockwiseAngle0to180(cv::Point2f(0, 0), cv::Point2f(1, 0), tr, tl);
    std::cout << rotation << std::endl;

    // Create a new white rectangle with the same rotation angle
    // Construct it using center, size and angle
    cv::RotatedRect white_rectangle(cv::Point2f(200, 150), cv::Size2f(80, 50), rotation);
    cv::Point2f white_vertices[4];
    white_rectangle.points(white_vertices);

    // Draw both rectangles
    for (int i = 0; i < 4; ++i) {
        line(test_image, red_rect_points[i], red_rect_points[(i+1)%4], cv::Scalar(0, 0, 255), 1, 8, 0);
        line(test_image, white_vertices[i], white_vertices[(i+1)%4], cv::Scalar(255, 255, 255), 1, 8, 0);
    }

    cv::imshow("Rectangles", test_image);
    cv::waitKey(0);
}
Let's say I have a QPixmap that has the dimensions of (20 x 100). How can I create a copy of this QPixmap that's rotated a specific amount and also has new dimensions to allocate the new dimensions of the rotated pixmap?
I've found multiple examples of how to rotate using QPainter and QTransform, but none seem to provide a proper way to keep the QPixmap from being cut off.
The best example I've found so far is:
// original = Original QPixmap
QSize size = original.size();
QPixmap newPixmap(size);
newPixmap.fill(QColor::fromRgb(0, 0, 0, 0));
QPainter p(&newPixmap);
p.translate(size.height() / 2, size.height() / 2);
p.rotate(35); // Any rotation, for this example 35 degrees
p.translate(size.height() / -2, size.height() / -2);
p.drawPixmap(0, 0, original);
p.end();
This rotates a QPixmap, and places it on a new QPixmap of the same dimensions. However, I am at a loss on how to modify this to work with new dimensions.
I've even tried simply modifying the initial size of the new pixmap, but that just causes the image to be off center (and still cut off for some reason?)
Any support would be appreciated!
One way to do this would be to calculate the minimum bounding rect for your rotated image and to create a new pixmap with those dimensions, onto which you can render your rotated image, which is now guaranteed to fit. To do this you could take each corner point of your image rectangle and rotate them around the center. The resulting points can then be used to calculate your minimum bounding rectangle by looking at each point and finding both the minimum and maximum x and y values.
For example, in the following hypothetical example we have a 100x100 rectangle. If we use a simple algorithm to rotate each corner point of the rectangle around the center by our angle (in this case 45 degrees) we get the four new corner points (50, -20), (120, 50), (50, 120) and (-20, 50) (rounded). From these points we can see the minimum x value is -20, the minimum y value is -20, the maximum x value is 120 and the maximum y value is 120, so the minimum bounding rect can be described by topLeft:(-20, -20) and bottomRight:(120, 120).
To help you with this here is a function taken from another stackoverflow post for rotating a point around another point:
#include <QtMath> // qSin / qCos (use <qmath.h> on Qt 4)

QPointF getRotatedPoint( QPointF p, QPointF center, qreal angleRads )
{
    qreal x = p.x();
    qreal y = p.y();

    float s = qSin( angleRads );
    float c = qCos( angleRads );

    // translate point back to origin:
    x -= center.x();
    y -= center.y();

    // rotate point
    float xnew = x * c - y * s;
    float ynew = x * s + y * c;

    // translate point back:
    x = xnew + center.x();
    y = ynew + center.y();

    return QPointF( x, y );
}
And here is a function I wrote that uses it to calculate the minimum bounding rect for some rectangle rotated by some angle...
QRectF getMinimumBoundingRect( QRect r, qreal angleRads )
{
    QPointF topLeft     = getRotatedPoint( r.topLeft(),     r.center(), angleRads );
    QPointF bottomRight = getRotatedPoint( r.bottomRight(), r.center(), angleRads );
    QPointF topRight    = getRotatedPoint( r.topRight(),    r.center(), angleRads );
    QPointF bottomLeft  = getRotatedPoint( r.bottomLeft(),  r.center(), angleRads );

    // getMin and getMax just return the min / max of their arguments
    qreal minX = getMin( topLeft.x(), bottomRight.x(), topRight.x(), bottomLeft.x() );
    qreal minY = getMin( topLeft.y(), bottomRight.y(), topRight.y(), bottomLeft.y() );
    qreal maxX = getMax( topLeft.x(), bottomRight.x(), topRight.x(), bottomLeft.x() );
    qreal maxY = getMax( topLeft.y(), bottomRight.y(), topRight.y(), bottomLeft.y() );

    return QRectF( QPointF( minX, minY ), QPointF( maxX, maxY ) );
}
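getMin and getMax are not spelled out in the original post; one possible implementation, using Qt's qMin/qMax, could simply be:
// Trivial helpers assumed by the function above (one possible implementation).
qreal getMin( qreal a, qreal b, qreal c, qreal d ) { return qMin( qMin( a, b ), qMin( c, d ) ); }
qreal getMax( qreal a, qreal b, qreal c, qreal d ) { return qMax( qMax( a, b ), qMax( c, d ) ); }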
So now that we have the minimum bounding rectangle for our rotated image, we can create a new pixmap with its width and height and render our rotated image to it at the center. This is tricky because of the transformation involved, which makes it a bit confusing as to what your source and target rects should be. It's actually not as hard as it might seem. You perform your translation / rotation to rotate the paint device around the center, and then you can simply render your source image onto your destination image exactly as you would if you were rendering the source to the center of the destination.
For example:
QPixmap originalPixmap; // Load this from somewhere

QRectF minimumBoundingRect = getMinimumBoundingRect( originalPixmap.rect(), angleRads );
QPixmap rotatedPixmap( minimumBoundingRect.width(), minimumBoundingRect.height() );
rotatedPixmap.fill( Qt::transparent ); // start from a transparent pixmap so the corners aren't uninitialized

QPainter p( &rotatedPixmap );
p.save();

// Rotate the rotated pixmap paint device around the center...
p.translate( 0.5 * rotatedPixmap.width(), 0.5 * rotatedPixmap.height() );
p.rotate( angleDegrees );
p.translate( -0.5 * rotatedPixmap.width(), -0.5 * rotatedPixmap.height() );

// The render rectangle is simply the originalPixmap rectangle as it would be if
// placed at the center of the rotatedPixmap rectangle...
QRectF renderRect( 0.5 * rotatedPixmap.width() - 0.5 * originalPixmap.width(),
                   0.5 * rotatedPixmap.height() - 0.5 * originalPixmap.height(),
                   originalPixmap.width(),
                   originalPixmap.height() );

p.drawPixmap( renderRect, originalPixmap, originalPixmap.rect() );
p.restore();
And voila, a nicely rotated image with no corners chopped off.
I want to get the new location of a cv::Rect (ROI) after rotating the image using the following code:
cv::Point2f center(image.cols/2.0, image.rows/2.0);
cv::Rect ROI = cv::Rect(100,200,50,100);
cv::Mat rot = cv::getRotationMatrix2D(center, angle, 1.0);
cv::Rect bbox = cv::RotatedRect(center,image.size(), angle).boundingRect();
rot.at<double>(0,2) += bbox.width/2.0 - center.x;
rot.at<double>(1,2) += bbox.height/2.0 - center.y;
cv::warpAffine(image, image, rot, bbox.size(), cv::INTER_LINEAR, cv::BORDER_CONSTANT,
               cv::Scalar(255, 255, 255));
How can I do it?
Since you have the rotation matrix, you can rotate the ROI rectangle using the cv::transform function. First of all, you need an array of the rectangle's corner points.
// ROI is the cv::Rect from your code; cast to float since cv::transform works on Point2f
vector<Point2f> roi_points = {
    {(float)ROI.x,               (float)ROI.y},
    {(float)(ROI.x + ROI.width), (float)ROI.y},
    {(float)(ROI.x + ROI.width), (float)(ROI.y + ROI.height)},
    {(float)ROI.x,               (float)(ROI.y + ROI.height)}
};
Then, you can use cv::transform:
vector<Point2f> rot_roi_points;
transform(roi_points, rot_roi_points, rot);
This way, rot_roi_points holds points of the transformed rectangle.
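If you then want the new location as an axis-aligned cv::Rect again (for example to use it as a ROI), one option, sketched here, is to take the bounding rectangle of the transformed corners:
// Sketch: the up-right bounding box of the rotated corner points.
cv::Rect new_roi = cv::boundingRect(rot_roi_points);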
In order to get the new location of a cv::Rect (ROI) you have to transform each of its corners using the following function:
cv::Point2f Convert(const cv::Point2f & p, const cv::Mat & t)
{
    float x = p.x * t.at<double>(0, 0) + p.y * t.at<double>(0, 1) + t.at<double>(0, 2);
    float y = p.x * t.at<double>(1, 0) + p.y * t.at<double>(1, 1) + t.at<double>(1, 2);
    return cv::Point2f(x, y);
}
The transformation matrix is the same as you used for image rotation.
I have to draw a conical gradient in Qt C++, but I cannot use QConicalGradient. I already have a linear gradient working, but I do not know how to make a conical one. I do not want the finished code, I am only asking for a simple algorithm.
for(int y = 0; y < image.height(); y++){
    QRgb *line = (QRgb *)image.scanLine(y);
    for(int x = 0; x < image.width(); x++){
        QPoint currentPoint(x, y);
        QPoint relativeToCenter = currentPoint - centerPoint;
        float angle = atan2(relativeToCenter.y(), relativeToCenter.x());

        // I have a problem in this line because I don't know how to set a color:
        float hue = map(-M_PI, angle, M_PI, 0, 255);
        line[x] = (red << 16) + (grn << 8) + blue;
    }
}
Can you help me?
Here is some pseudo code:
Given some area to paint on, and a defined center for your gradient...
For each point that you are painting on in the area, calculate the angle to the center of your gradient.
// QPoint currentPoint; // created/populated with a x, y value by two for loops
QPoint relativeToCenter = currentPoint - centerPoint;
angle = atan2(relativeToCenter.y(), relativeToCenter.x());
Then map that angle to a color using your linear gradient, or some sort of mapping function.
float hue = map(-PI, angle, PI, 0, 255); // convert angle in radians to value
// between 0 and 255
Paint that pixel, and repeat for every pixel in your area.
EDIT: Depending on the pattern of the gradient, you will want to create a different QColor pixel. For example if you had a "rainbow" gradient, just going from one hue to the next, you could use a linear mapping function like this:
float map(float x1, float x, float x2, float y1, float y2)
{
    // clamp x into [x1, x2] before interpolating
    if(x < x1)
        x = x1;
    if(x > x2)
        x = x2;

    return y1 + (y2 - y1) / (x2 - x1) * (x - x1);
}
Then you create a QColor object using the outputted value:
float hue = map(-PI, angle, PI, 0, 255); // convert angle in radians to value
// between 0 and 255
QColor c;
c.setHsl( (int) hue, 255, 128 ); // lightness 128 keeps the colors saturated; 255 would always be white
Then use this QColor object with your QPainter or QBrush or QPen that you are using. Or if you are putting a qRgb value back in:
line[x] = c.rgb();
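Putting the pieces together, a minimal sketch of the whole fill might look like this (it assumes the map() function above; the image format and size here are just placeholders):
#include <QImage>
#include <QColor>
#include <QtMath> // qAtan2, M_PI

QImage conicalGradient(int width, int height)
{
    QImage image(width, height, QImage::Format_RGB32);
    QPoint centerPoint(width / 2, height / 2);

    for (int y = 0; y < image.height(); y++) {
        QRgb *line = (QRgb *)image.scanLine(y);
        for (int x = 0; x < image.width(); x++) {
            QPoint relativeToCenter = QPoint(x, y) - centerPoint;
            qreal angle = qAtan2(relativeToCenter.y(), relativeToCenter.x());

            // Map the angle to a hue and build the pixel color from it.
            float hue = map(-M_PI, angle, M_PI, 0, 255);
            QColor c;
            c.setHsl((int) hue, 255, 128);
            line[x] = c.rgb();
        }
    }
    return image;
}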
http://qt-project.org/doc/qt-4.8/qcolor.html
Hope that helps.