opencv: how to fill an ellipse shape with distance from the center - c++

I would like to populate an ellipse shape in OpenCV in such a way that the value it takes is the normalized distance from its center.
Typically in OpenCV, I can fill an image with an elliptical shape as follows:
cv::ellipse(image, e, cv::Scalar(255), CV_FILLED);
However, this fills the ellipse with a constant value, and I would like to vary the value based on the distance from the center.
I guess one way would be to go through the points and compute this manually. I am quite new to OpenCV and am having trouble doing this with the Mat object.

Here is a sample code snippet that finds the distance transform of an ellipse.
You can simply create a mask of the ellipse region and compute the distance transform of that mask.
Mat mEllipse_Bgr(Size(640, 480), CV_8UC3, Scalar(0));
Mat mEllipseMask(mEllipse_Bgr.size(), CV_8UC1, Scalar(0));
// Draw an ellipse outline on the color image and a filled ellipse on the mask
ellipse( mEllipse_Bgr, Point( 200, 200 ), Size( 100, 160 ), 45, 0, 360, Scalar( 255, 0, 0 ), 1, 8 );
ellipse( mEllipseMask, Point( 200, 200 ), Size( 100, 160 ), 45, 0, 360, Scalar( 255 ), -1, 8 );
imshow("Ellipse Image", mEllipse_Bgr);
imshow("Ellipse Mask", mEllipseMask);
// Perform the distance transform algorithm on the mask
Mat mDist;
distanceTransform(mEllipseMask, mDist, CV_DIST_L2, 3);
// Normalize the distance transform image to the range [0.0, 1.0] so it can be viewed
normalize(mDist, mDist, 0, 1., NORM_MINMAX);
imshow("Distance Transform Image", mDist);
waitKey(0);

Related

Fill circle with gradient

I want to fill a circle with a gradient color, as shown at the bottom. I can't figure out an easy way to do that.
I can draw more circles, but the transitions between them are visible.
cv::circle(img, center, circle_radius * 1.5, cv::Scalar(1.0, 1.0, 0.3), CV_FILLED);
cv::circle(img, center, circle_radius * 1.2, cv::Scalar(1.0, 1.0, 0.6), CV_FILLED);
cv::circle(img, center, circle_radius, cv::Scalar(1.0, 1.0, 1.0), CV_FILLED);
All you need to do is create a function which takes in a central point and a new point, calculates the distance, and returns a grayscale value for that point. Alternatively you could just return the distance, store the distance at that point, and then scale the whole thing later with cv::normalize().
So let's say you have the central point as (50, 50) in a (100, 100) image. Here's pseudocode for what you'd want to do:
function euclideanDistance(center, point)  # returns a float
    return sqrt( (center.x - point.x)^2 + (center.y - point.y)^2 )

center = (50, 50)
rows = 100
cols = 100
gradient = new Mat(rows, cols)  # should be of type float
for row < rows:
    for col < cols:
        point = (col, row)
        gradient[row, col] = euclideanDistance(center, point)
normalize(gradient, 0, 255, NORM_MINMAX, uint8)
gradient = 255 - gradient
Note the steps here:
Create the Euclidean distance function to calculate distance
Create a floating point matrix to hold the distance values
Loop through all rows and columns and assign a distance value
Normalize to the range you want (you could stick with a float here instead of casting to uint8, but you do you)
Invert the gradient, since larger distances map to brighter values, but you want the opposite.
Now, in your exact example image the gradient is confined to a circle, whereas this method makes the whole image a gradient. If you want a specific radius, just modify the function which calculates the Euclidean distance: if the distance is beyond the radius, return 0 (the value at the center of the circle, which will eventually be flipped to white):
function euclideanDistance(center, point, radius)  # returns a float
    distance = sqrt( (center.x - point.x)^2 + (center.y - point.y)^2 )
    if distance > radius:
        return 0
    else:
        return distance
Here is the above in actual C++ code:
#include <opencv2/opencv.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <cmath>

float euclidean_distance(cv::Point center, cv::Point point, int radius){
    float distance = std::sqrt(
        std::pow(center.x - point.x, 2) + std::pow(center.y - point.y, 2));
    if (distance > radius) return 0;
    return distance;
}

int main(){
    int h = 400;
    int w = 400;
    int radius = 100;
    cv::Mat gradient = cv::Mat::zeros(h, w, CV_32F);
    cv::Point center(150, 200);
    cv::Point point;
    for(int row = 0; row < h; ++row){
        for(int col = 0; col < w; ++col){
            point.x = col;
            point.y = row;
            gradient.at<float>(row, col) = euclidean_distance(center, point, radius);
        }
    }
    cv::normalize(gradient, gradient, 0, 255, cv::NORM_MINMAX, CV_8U);
    cv::bitwise_not(gradient, gradient);
    cv::imshow("gradient", gradient);
    cv::waitKey();
}
A completely different method (though achieving the same thing) is to use distanceTransform(). For every white pixel in a blob, this function computes the distance to the nearest black pixel and maps it to a grayscale value, like we were doing above. This code is more concise and does the same thing. However, it works on arbitrary shapes, not just circles, so that's cool.
#include <opencv2/opencv.hpp>
#include <opencv2/highgui/highgui.hpp>

int main(){
    int h = 400;
    int w = 400;
    int radius = 100;
    cv::Point center(150, 200);
    cv::Mat gradient = cv::Mat::zeros(h, w, CV_8U);
    cv::rectangle(gradient, cv::Point(115, 100), cv::Point(270, 350), cv::Scalar(255), -1, 8);
    cv::Mat gradient_padding;
    cv::bitwise_not(gradient, gradient_padding);
    cv::distanceTransform(gradient, gradient, CV_DIST_L2, CV_DIST_MASK_PRECISE);
    cv::normalize(gradient, gradient, 0, 255, cv::NORM_MINMAX, CV_8U);
    cv::bitwise_or(gradient, gradient_padding, gradient);
    cv::imshow("gradient-distxform.png", gradient);
    cv::waitKey();
}
You have to draw many circles, where the color of each circle depends on its distance from the center. Here is a simple example:
void printGradient(cv::Mat &_input, const cv::Point &_center, const double radius)
{
    cv::circle(_input, _center, radius, cv::Scalar(0, 0, 0), -1);
    for(double i = 1; i < radius; ++i)
    {
        const int color = 255 - int(i / radius * 255); // or some other color calculation
        cv::circle(_input, _center, i, cv::Scalar(color, color, color), 2);
    }
}
And result:
Another approach not mentioned yet is to precompute a circle gradient image (with one of the approaches above, such as the accepted solution) and then use affine warping with linear interpolation to create other such circles of different sizes. This can be faster if warping and interpolation are optimized and perhaps hardware accelerated.
The result might be slightly less than perfect.
I once used this to create an individual vignetting mask circle for each frame in endoscopic imaging. It was faster than computing the distances "manually".
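A minimal sketch of that idea, assuming a precomputed square gradient disc base (e.g. produced by one of the answers above) that gets rescaled per frame; the function name and sizes are illustrative:
// Rescale a precomputed gradient disc to a new radius with linear interpolation.
// 'base' is assumed to be a square CV_8U image holding a centered gradient circle.
cv::Mat scaledGradient(const cv::Mat &base, int newRadius)
{
    double scale = 2.0 * newRadius / base.cols;
    // Affine matrix for uniform scaling about the origin
    cv::Mat M = (cv::Mat_<double>(2, 3) <<
                 scale, 0,     0,
                 0,     scale, 0);
    cv::Mat out;
    cv::warpAffine(base, out, M, cv::Size(2 * newRadius, 2 * newRadius), cv::INTER_LINEAR);
    return out;
}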

Transform a frame to be as if it was taken from above using OpenCV

I am working on a project for estimating a UAV (quadcopter) location using the optical-flow technique. I currently have code that uses the Farneback algorithm from OpenCV. The current code works fine when the camera is always pointing at the ground.
Now I want to add support for the case when the camera is not pointing straight down, meaning that the quadcopter has some pitch / roll / yaw (Euler angles). The quadcopter's Euler angles are known, and I am searching for a method to compute and apply the transformation needed based on the known current Euler angles, so that the resulting image is as if it were taken from above (see image below).
I found methods that calculate the transformation given 2 sets (source and destination) of 4 corners, via the findHomography or getPerspectiveTransform functions from OpenCV, but I couldn't find any method that can do it knowing only the Euler angles (because I don't know the destination image corners).
So my question is: what method can I use, and how, in order to transform a frame to be as if it was taken from above, using only the Euler angles and the camera height from the ground if necessary?
In order to demonstrate what I need:
The relevant part of my current code is below:
for(;;)
{
    Mat m, disp, warp;
    vector<Point2f> corners;
    // take out frame - still distorted
    cap >> origFrame;
    // undistort the frame using the calibration parameters
    cv::undistort(origFrame, undistortFrame, cameraMatrix, distCoeffs, noArray());
    // lower the processing effort by converting the picture to gray
    cvtColor(undistortFrame, gray, COLOR_BGR2GRAY);
    if( !prevgray.empty() )
    {
        // calculate flow
        calcOpticalFlowFarneback(prevgray, gray, uflow, 0.5, 3/*def 3 */, 10/* def 15*/, 3, 3, 1.2 /* def 1.2*/, 0);
        uflow.copyTo(flow);
        // get average
        calcAvgOpticalFlow(flow, 16, corners);
        // calculate range of view - 2*tan(fov/2)*distance
        rovX = 2*0.44523*distanceSonar*100;  // 2 * tan(48/2) * dist(cm)
        rovY = 2*0.32492*distanceSonar*100;  // 2 * tan(36/2) * dist(cm)
        // calculate final x, y location
        location[0] += (currLocation.x/WIDTH_RES)*rovX;
        location[1] += (currLocation.y/HEIGHT_RES)*rovY;
    }
    // break conditions
    if(waitKey(1) >= 0)
        break;
    if(end_run)
        break;
    std::swap(prevgray, gray);
}
UPDATE:
After successfully adding the rotation, I still need my image to be centered (and not to go outside of the frame window as shown below). I guess I need some kind of translation. I want the center of the source image to be at the center of the destination image. How can I add this as well?
The rotation function that works:
void rotateFrame(const Mat &input, Mat &output, Mat &A, double roll, double pitch, double yaw){
    Mat Rx = (Mat_<double>(3, 3) <<
              1, 0,         0,
              0, cos(roll), -sin(roll),
              0, sin(roll), cos(roll));
    Mat Ry = (Mat_<double>(3, 3) <<
              cos(pitch),  0, sin(pitch),
              0,           1, 0,
              -sin(pitch), 0, cos(pitch));
    Mat Rz = (Mat_<double>(3, 3) <<
              cos(yaw), -sin(yaw), 0,
              sin(yaw), cos(yaw),  0,
              0,        0,         1);
    Mat R = Rx*Ry*Rz;
    Mat trans = A*R*A.inv();
    warpPerspective(input, output, trans, input.size());
}
When I run it with rotateFrame(origFrame, processedFrame, cameraMatrix, 0, 0, 0); I get the image as expected:
But when I run it with a 20-degree roll, rotateFrame(origFrame, processedFrame, cameraMatrix, 20*(M_PI/180), 0, 0);, the image goes outside of the frame window:
If you have the calibration intrinsics matrix A (3x3), and there is no translation between the camera poses, all you need to find the homography H (3x3) is to construct the rotation matrix R (3x3) from the Euler angles and apply the following formula:
H = A * R * A.inv()
where .inv() is matrix inversion.
UPDATED:
If you want to center the image, you should just add a translation this way (this finds the warped position of the center and translates that point back to the center):
| dx |       | 320/2 |
| dy | = H * | 240/2 |
| 1  |       |   1   |

    | 1  0  (320/2 - dx) |
W = | 0  1  (240/2 - dy) | * H
    | 0  0  1            |
W is your final transformation.
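A quick sketch of that centering step in OpenCV, assuming H is a 3x3 CV_64F homography and a 320x240 frame as in the example (with the perspective division included, since H * center is a homogeneous point):
// Warp the image center through H, then translate it back to the center.
cv::Mat centerHomography(const cv::Mat &H, double width = 320, double height = 240)
{
    cv::Mat c = (cv::Mat_<double>(3, 1) << width / 2, height / 2, 1);
    cv::Mat warped = H * c;
    double dx = warped.at<double>(0) / warped.at<double>(2);
    double dy = warped.at<double>(1) / warped.at<double>(2);
    cv::Mat T = (cv::Mat_<double>(3, 3) <<
                 1, 0, width / 2 - dx,
                 0, 1, height / 2 - dy,
                 0, 0, 1);
    return T * H; // the final transformation W
}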
I came to the conclusion that I had to use a 4x4 homography matrix in order to get what I wanted. To find the right homography matrix we need:
The 3D rotation matrix R.
The camera calibration intrinsics matrix A2 and its inverted matrix A1.
The translation matrix T.
We can compose the 3D rotation matrix R by multiplying the rotation matrices around axes X,Y,Z:
Mat R = RZ * RY * RX
In order to apply the transformation to the image and keep it centered, we need to add a translation given by a 4x4 matrix, where dx=0, dy=0, dz=1:
Mat T = (Mat_<double>(4, 4) <<
         1, 0, 0, dx,
         0, 1, 0, dy,
         0, 0, 1, dz,
         0, 0, 0, 1);
Given all these matrices we can compose our homography matrix H:
Mat H = A2 * (T * (R * A1))
With this homography matrix we can then use warpPerspective function from OpenCV to apply the transformation.
warpPerspective(input, output, H, input.size(), INTER_LANCZOS4);
For conclusion and completeness of this solution here is the full code:
void rotateImage(const Mat &input, UMat &output, double roll, double pitch, double yaw,
                 double dx, double dy, double dz, double f, double cx, double cy)
{
    // Camera calibration intrinsics matrix
    Mat A2 = (Mat_<double>(3, 4) <<
              f, 0, cx, 0,
              0, f, cy, 0,
              0, 0, 1,  0);
    // Inverted camera calibration intrinsics matrix
    Mat A1 = (Mat_<double>(4, 3) <<
              1/f, 0,   -cx/f,
              0,   1/f, -cy/f,
              0,   0,   0,
              0,   0,   1);
    // Rotation matrices around the X, Y, and Z axes
    Mat RX = (Mat_<double>(4, 4) <<
              1, 0,         0,          0,
              0, cos(roll), -sin(roll), 0,
              0, sin(roll), cos(roll),  0,
              0, 0,         0,          1);
    Mat RY = (Mat_<double>(4, 4) <<
              cos(pitch),  0, sin(pitch), 0,
              0,           1, 0,          0,
              -sin(pitch), 0, cos(pitch), 0,
              0,           0, 0,          1);
    Mat RZ = (Mat_<double>(4, 4) <<
              cos(yaw), -sin(yaw), 0, 0,
              sin(yaw), cos(yaw),  0, 0,
              0,        0,         1, 0,
              0,        0,         0, 1);
    // Translation matrix
    Mat T = (Mat_<double>(4, 4) <<
              1, 0, 0, dx,
              0, 1, 0, dy,
              0, 0, 1, dz,
              0, 0, 0, 1);
    // Compose the rotation matrix from (RX, RY, RZ)
    Mat R = RZ * RY * RX;
    // Final transformation matrix
    Mat H = A2 * (T * (R * A1));
    // Apply the matrix transformation
    warpPerspective(input, output, H, input.size(), INTER_LANCZOS4);
}
Result:
This is how I do it in Eigen, using 4 corners:
// Desired four corners
std::vector<Eigen::Vector2f> Normalized_Reference_Pattern = { Eigen::Vector2f(0, 0), Eigen::Vector2f(0, 2), Eigen::Vector2f(2, 0), Eigen::Vector2f(2, 2) };
// Current four points
std::vector<Eigen::Vector2f> CurrentCentroids = { /* Whatever four corners you want, but in the same relative sequence as above */ };
// Transform from current to desired
auto Master_Transform = get_perspective_transform(CurrentCentroids, Normalized_Reference_Pattern);
// Ability to use the same transformation for other points (other than the corners) in the image
Eigen::Vector2f Master_Transform_Centroid = (Master_Transform * Eigen::Vector2f(currentX, currentY).homogeneous()).hnormalized();
And here is my black box:
Eigen::Matrix3f get_perspective_transform(const std::vector<Eigen::Vector2f>& points_from, const std::vector<Eigen::Vector2f>& points_to)
{
    cv::Mat transform_cv = cv::getPerspectiveTransform(
        convert::to_cv(points_from),
        convert::to_cv(points_to));
    Eigen::Matrix3f transform_eigen;
    cv::cv2eigen(transform_cv, transform_eigen);
    return transform_eigen;
}

How to Compute the Structure Tensor of an Image using OpenCV

I am trying to implement an application that uses images to search for similar images in a large image database. I am developing an image descriptor to use for this search and I would like to combine color information with some gradient information. I have seen structure tensors used in this domain to find the main gradient direction in images or sub-images.
I would like to take an image, divide it into grid of sub-images, for example, 4x4 grid (in total 16 sub-images) and then find the leading gradient direction of each cell. To find the leading gradient direction I want to see if computing the structure tensor for each cell can give good representation of the image gradient and lead to improved image matching. Is this a good idea or a bad idea? The idea was to get a feature vector similar to the idea in section 3.2 in this paper http://cybertron.cg.tu-berlin.de/eitz/pdf/2009_sbim.pdf
Dividing the image into sub-images (cells) is trivial, and with OpenCV I can compute the partial derivatives using the Sobel function.
Mat dx, dy;
Sobel(im, dx, CV_32F, 1, 0, 3, 1, 0, BORDER_DEFAULT);
Sobel(im, dy, CV_32F, 0, 1, 3, 1, 0, BORDER_DEFAULT);
Computing dx^2, dy^2 and dxy should not be a problem, but I am not sure how I can compute the structure tensor matrix and use the tensor matrix to find the main gradient direction for an image or sub-image. How can I implement this with OpenCV?
EDIT
Okay, this is what I have done.
Mat _im; // Image to compute main gradient direction for.
cvtColor(im, _im, CV_BGR2GRAY);
GaussianBlur(_im, _im, Size(3, 3), 0, 0, BORDER_DEFAULT); //Blur the image to remove unnecessary details.
GaussianBlur(_im, _im, Size(5, 5), 0, 0, BORDER_DEFAULT);
GaussianBlur(_im, _im, Size(7, 7), 0, 0, BORDER_DEFAULT);
// Calculate image derivatives
Mat dx2, dy2, dxy;
Sobel(_im, dx2, CV_32F, 2, 0, 3, 1, 0, BORDER_DEFAULT);
Sobel(_im, dy2, CV_32F, 0, 2, 3, 1, 0, BORDER_DEFAULT);
Sobel(_im, dxy, CV_32F, 1, 1, 3, 1, 0, BORDER_DEFAULT);
Mat t(2, 2, CV_32F); // tensor matrix
// Insert values to the tensor matrix.
t.at<float>(0, 0) = sum(dx2)[0];
t.at<float>(0, 1) = sum(dxy)[0];
t.at<float>(1, 0) = sum(dxy)[0];
t.at<float>(1, 1) = sum(dy2)[0];
// eigen decomposition to get the main gradient direction.
Mat eigVal, eigVec;
eigen(t, eigVal, eigVec);
// This should compute the angle of the gradient direction based on the first eigenvector.
float* eVec1 = eigVec.ptr<float>(0);
float* eVec2 = eigVec.ptr<float>(1);
cout << fastAtan2(eVec1[0], eVec1[1]) << endl;
cout << fastAtan2(eVec2[0], eVec2[1]) << endl;
Is this approach correct?
Using this image the application outputs 44.9905, 135.01.
This gives 0, 90.
When I use a part of a real image I get 342.743, 72.7425, which I find odd. I expected to get an angle along the color change (90ish).
After testing I am not sure if my implementation is correct, so any feedback or comments on this are welcomed.
I believe your problem is that you are computing second-order derivatives instead of squaring the first-order derivatives. It should be something like this instead:
// Calculate image derivatives
Mat dx, dy;
Mat dx2, dy2, dxy;
Sobel(_im, dx, CV_32F, 1, 0);
Sobel(_im, dy, CV_32F, 0, 1);
multiply(dx, dx, dx2);
multiply(dy, dy, dy2);
multiply(dx, dy, dxy);
P.S.
Oh, by the way, there is no need to do Gaussian blurring over and over again. Just use a bigger kernel and blur once.
D.S.
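Putting the pieces together, here is a hedged sketch of the full per-cell computation; it uses the closed-form orientation angle = 0.5 * atan2(2*Jxy, Jxx - Jyy), which is a standard equivalent to taking the dominant eigenvector, and the function name and single-blur kernel size are illustrative:
#include <opencv2/opencv.hpp>
#include <cmath>
using namespace cv;

// Main gradient direction of a (sub-)image via the structure tensor.
// Assumes a BGR input cell; returns the orientation in degrees.
float mainGradientDirection(const Mat& cell)
{
    Mat gray, dx, dy, dx2, dy2, dxy;
    cvtColor(cell, gray, CV_BGR2GRAY);
    GaussianBlur(gray, gray, Size(7, 7), 0, 0); // blur once with a bigger kernel

    // First-order derivatives and their products
    Sobel(gray, dx, CV_32F, 1, 0);
    Sobel(gray, dy, CV_32F, 0, 1);
    multiply(dx, dx, dx2);
    multiply(dy, dy, dy2);
    multiply(dx, dy, dxy);

    // Sum the products over the cell to form the 2x2 structure tensor
    double Jxx = sum(dx2)[0], Jyy = sum(dy2)[0], Jxy = sum(dxy)[0];

    // Orientation of the dominant eigenvector in closed form
    double angle = 0.5 * std::atan2(2.0 * Jxy, Jxx - Jyy);
    return (float)(angle * 180.0 / CV_PI);
}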

draw a rectangle around a detected circle using opencv and c++

Assume that I have a detected circle with center coordinates (center.x, center.y), detected by using this code:
GaussianBlur( dis, dis, Size(3, 3), 2, 2 );
vector<Vec3f> circles;
HoughCircles( dis, circles, CV_HOUGH_GRADIENT, 1, dis.rows/8, 200, 100);
for( size_t i = 0; i < circles.size(); i++ ){
    Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
    cout << "center" << center.x << ", " << center.y << endl;
    // coordinates of center points
    V.push_back(std::make_pair(center.x, center.y));
    int radius = cvRound(circles[i][2]);
    // circle center
    circle( dis, center, 3, 1, -1, 8, 0 );
    // circle outline
    circle( dis, center, radius, 1, 3, 8, 0 );
}
How do I draw a rectangle around this circle such that the center of the circle is in the middle of the rectangle and the distance between the center and each side is "radius + x"?
I am completely new to image processing, sorry for the simple question.
I would appreciate any help.
............... Edited code ..................
cv::rectangle( dis, cvPoint((center.x)-(radius+10), (center.y)-(radius+10)), cvPoint((center.x)+(radius+10), (center.y)+(radius+10)), 1, 1, 8 );
Assuming the centre is at (x, y), you need to draw a rectangle with the following specifications:
top-left corner: (x-(radius+a), y-(radius+a))
bottom-right corner: (x+(radius+a), y+(radius+a))
where a is an arbitrary value that you want to add to the radius.
More generally:
given a centre point (x, y) and a known size LxH of a rectangle, you can draw the rectangle by specifying the top-left point as (x-(L/2), y-(H/2)) and the bottom-right point as (x+(L/2), y+(H/2)).
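As a minimal sketch of the first case in OpenCV, reusing dis, center, and radius from the HoughCircles loop above (the margin a is arbitrary):
// Square box around the detected circle, with a margin 'a' beyond the radius
int a = 10;
cv::Rect box(center.x - (radius + a), center.y - (radius + a),
             2 * (radius + a), 2 * (radius + a));
cv::rectangle(dis, box, cv::Scalar(1), 1, 8);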

projecting light and shadows on a surface

Hi, I am trying to extract the lighting and the shadows from one surface and apply them to another type of surface. I convert the image to HSV and extract the hue component, and plotting it seems to give a good indication of where the lighting and shadows are. However, when I swap the hue component of the original image into my final image, I get all sorts of undesired greens and blues. Are there any other techniques that can be used to project shadows and lighting?
cvtColor( img0, hsv, CV_BGR2HSV );
components[0].create( hsv.size(), 1);
components[1].create( hsv.size(), 1);
components[2].create( hsv.size(), 1);
split(hsv, components);
...
cvtColor( drawing, hsv_output, CV_BGR2HSV );
components_output[0].create( hsv.size(), 1);
components_output[1].create( hsv.size(), 1);
components_output[2].create( hsv.size(), 1);
split(hsv_output, components_output);
components_output[0] = 0.5 * components_output[0] + 0.5 * components[0];
int ch[] = {0 , 0};
mixChannels(&components_output[0], 1, &hsv_output, 1, ch, 1);
cvtColor( hsv_output, drawing, CV_HSV2BGR );
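One thing worth noting: in HSV, brightness lives in the V (value) channel, not in hue, which is why swapping hue produces the stray greens and blues. A hedged variant of the same blending code that mixes the V channel (index 2) instead might look like this:
// Blend the V (value) channels so only brightness/shadow is transferred
components_output[2] = 0.5 * components_output[2] + 0.5 * components[2];
int ch2[] = {0, 2}; // copy the blended single-channel plane into channel 2 of hsv_output
mixChannels(&components_output[2], 1, &hsv_output, 1, ch2, 1);
cvtColor( hsv_output, drawing, CV_HSV2BGR );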