How to draw the axes of an ellipse - C++

I am using fitEllipse from OpenCV in C++, and I'm getting these values:
/// Find the rotated rectangles and ellipses for each contour
vector<RotatedRect> minRect( contours.size() );
vector<RotatedRect> minEllipse( contours.size() );
for( size_t i = 0; i < contours.size(); i++ )
{
    minRect[i] = minAreaRect( Mat(contours[i]) );
    if( contours[i].size() > 5 )
        minEllipse[i] = fitEllipse( Mat(contours[i]) );
    // ...
}
float xc = minEllipse[element].center.x;
float yc = minEllipse[element].center.y;
float a = minEllipse[element].size.width / 2;
float b = minEllipse[element].size.height / 2;
float theta = minEllipse[element].angle;
But given these values, how can I draw the axes of an ellipse, for example the following one?
NOTE: element is the index of an ellipse stored in minEllipse.

You can use minEllipse[element].points() to get the four corners of the rotated bounding rectangle, as described here.
Then you only need to calculate the average of the two points on each side of the rectangle to get the endpoints of the axes...
Point2f vertices[4];
minEllipse[element].points(vertices);
line(image, (vertices[0] + vertices[1])/2, (vertices[2] + vertices[3])/2, Scalar(0,255,0));
line(image, (vertices[1] + vertices[2])/2, (vertices[3] + vertices[0])/2, Scalar(0,255,0));

You are probably looking for these formulas (note that the angle returned by fitEllipse is in degrees, so convert theta to radians before calling cos and sin):
ct = cos(theta)
st = sin(theta)
LongAxis0.x  = xc - a*ct
LongAxis0.y  = yc - a*st
LongAxis1.x  = xc + a*ct
LongAxis1.y  = yc + a*st
ShortAxis0.x = xc - b*st
ShortAxis0.y = yc + b*ct
ShortAxis1.x = xc + b*st
ShortAxis1.y = yc - b*ct
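Expressed as a minimal OpenCV sketch, assuming image is the Mat being drawn on and xc, yc, a, b, theta are the values from the question:
// theta comes from fitEllipse in degrees; cos/sin expect radians
float t = theta * (float)CV_PI / 180.0f;
float ct = cos(t), st = sin(t);

Point2f longAxis0(xc - a*ct, yc - a*st);
Point2f longAxis1(xc + a*ct, yc + a*st);
Point2f shortAxis0(xc - b*st, yc + b*ct);
Point2f shortAxis1(xc + b*st, yc - b*ct);

line(image, longAxis0, longAxis1, Scalar(0, 255, 0));   // axis along the angle direction
line(image, shortAxis0, shortAxis1, Scalar(0, 0, 255)); // perpendicular axis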

But with these values how can I draw the axes of an ellipse?
The axes of the ellipse pass through its centre:
float xc = minEllipse[element].center.x;
float yc = minEllipse[element].center.y;
The start and end points of each axis can be placed at an offset from the centre given by the ellipse's width and height, i.e.:
// horizontal axis start/end point coordinates
int HxStart = xc - minEllipse[element].size.width / 2;
int HyStart = yc;
int HxEnd   = xc + minEllipse[element].size.width / 2;
int HyEnd   = yc;
// points
Point Hstart(HxStart, HyStart);
Point Hend(HxEnd, HyEnd);
// horizontal axis
line(image, Hstart, Hend, Scalar(0, 255, 0));
// vertical axis start/end point coordinates
int VxStart = xc;
int VyStart = yc - minEllipse[element].size.height / 2;
int VxEnd   = xc;
int VyEnd   = yc + minEllipse[element].size.height / 2;
// ----//----
// ----//----
Now you can rotate the axes (the above four points) by the provided angle theta around the centre of the ellipse.
Having the above and knowing how to construct a line from two points, you can build the two axes at any given angle theta; a sketch of that rotation step follows.
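A minimal sketch of the rotation step (assuming theta is in degrees, as fitEllipse returns it; rotatePoint is a hypothetical helper, not an OpenCV function):
// Rotate point p around centre c by angleDeg degrees (hypothetical helper)
Point rotatePoint(const Point& p, const Point& c, double angleDeg)
{
    double rad = angleDeg * CV_PI / 180.0;
    double dx = p.x - c.x, dy = p.y - c.y;
    return Point(cvRound(c.x + dx * cos(rad) - dy * sin(rad)),
                 cvRound(c.y + dx * sin(rad) + dy * cos(rad)));
}

// e.g. draw the rotated horizontal axis:
Point centre(cvRound(xc), cvRound(yc));
line(image, rotatePoint(Hstart, centre, theta), rotatePoint(Hend, centre, theta), Scalar(255, 0, 0));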


Rotate an image without cropping in OpenCV in C++

I'd like to rotate an image, but I can't obtain the rotated image without cropping
My original image:
Now I use this code:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

// Compile with g++ code.cpp -lopencv_core -lopencv_highgui -lopencv_imgproc

int main()
{
    cv::Mat src = cv::imread("im.png", CV_LOAD_IMAGE_UNCHANGED);
    cv::Mat dst;

    cv::Point2f pc(src.cols/2., src.rows/2.);
    cv::Mat r = cv::getRotationMatrix2D(pc, -45, 1.0);

    cv::warpAffine(src, dst, r, src.size()); // what size should I use?

    cv::imwrite("rotated_im.png", dst);
    return 0;
}
And obtain the following image:
But I'd like to obtain this:
My answer is inspired by the following posts / blog entries:
Rotate cv::Mat using cv::warpAffine offsets destination image
http://john.freml.in/opencv-rotation
Main ideas:
Adjusting the rotation matrix by adding a translation to the new image center
Using cv::RotatedRect to rely on existing opencv functionality as much as possible
Code tested with opencv 3.4.1:
#include "opencv2/opencv.hpp"
int main()
{
cv::Mat src = cv::imread("im.png", CV_LOAD_IMAGE_UNCHANGED);
double angle = -45;
// get rotation matrix for rotating the image around its center in pixel coordinates
cv::Point2f center((src.cols-1)/2.0, (src.rows-1)/2.0);
cv::Mat rot = cv::getRotationMatrix2D(center, angle, 1.0);
// determine bounding rectangle, center not relevant
cv::Rect2f bbox = cv::RotatedRect(cv::Point2f(), src.size(), angle).boundingRect2f();
// adjust transformation matrix
rot.at<double>(0,2) += bbox.width/2.0 - src.cols/2.0;
rot.at<double>(1,2) += bbox.height/2.0 - src.rows/2.0;
cv::Mat dst;
cv::warpAffine(src, dst, rot, bbox.size());
cv::imwrite("rotated_im.png", dst);
return 0;
}
Just try the code below; the idea is simple:
You need to create a blank image with the maximum size you're expecting while rotating at any angle. Here you should use Pythagoras' theorem, as mentioned in the comments above.
Now copy the source image to the newly created image and pass it to warpAffine. Here you should use the centre of the newly created image for rotation.
After warpAffine, if you need to crop the exact image, translate the four corners of the source image in the enlarged image using the rotation matrix, as described here.
Find the minimum x and minimum y for the top corner, and the maximum x and maximum y for the bottom corner, from the above result to crop the image.
This is the code:
int theta = 0;
Mat src, frame, frameRotated;
src = imread("rotate.png", 1);
cout << endl << endl << "Press '+' to rotate anti-clockwise and '-' for clockwise, 's' to save" << endl << endl;

int diagonal = (int)sqrt(src.cols*src.cols + src.rows*src.rows);
int newWidth = diagonal;
int newHeight = diagonal;

int offsetX = (newWidth - src.cols) / 2;
int offsetY = (newHeight - src.rows) / 2;
Mat targetMat(newHeight, newWidth, src.type()); // note: Mat takes (rows, cols, type)
Point2f src_center(targetMat.cols/2.0F, targetMat.rows/2.0F);

while(1){
    src.copyTo(frame);
    frame.copyTo(targetMat.rowRange(offsetY, offsetY + frame.rows).colRange(offsetX, offsetX + frame.cols));
    Mat rot_mat = getRotationMatrix2D(src_center, theta, 1.0);
    warpAffine(targetMat, frameRotated, rot_mat, targetMat.size());

    // Calculate the bounding rect for the exact image
    // Reference: https://stackoverflow.com/questions/19830477/find-the-bounding-rectangle-of-rotated-rectangle/19830964#19830964
    Rect bound_Rect(frame.cols, frame.rows, 0, 0);

    int x1 = offsetX;
    int x2 = offsetX + frame.cols;
    int x3 = offsetX;
    int x4 = offsetX + frame.cols;

    int y1 = offsetY;
    int y2 = offsetY;
    int y3 = offsetY + frame.rows;
    int y4 = offsetY + frame.rows;

    Mat co_Ordinate = (Mat_<double>(3,4) << x1, x2, x3, x4,
                                            y1, y2, y3, y4,
                                             1,  1,  1,  1);
    Mat RotCo_Ordinate = rot_mat * co_Ordinate;

    for(int i = 0; i < 4; i++){
        if(RotCo_Ordinate.at<double>(0,i) < bound_Rect.x)
            bound_Rect.x = (int)RotCo_Ordinate.at<double>(0,i); // access smallest x
        if(RotCo_Ordinate.at<double>(1,i) < bound_Rect.y)
            bound_Rect.y = (int)RotCo_Ordinate.at<double>(1,i); // access smallest y
    }

    for(int i = 0; i < 4; i++){
        if(RotCo_Ordinate.at<double>(0,i) > bound_Rect.width)
            bound_Rect.width = (int)RotCo_Ordinate.at<double>(0,i); // access largest x
        if(RotCo_Ordinate.at<double>(1,i) > bound_Rect.height)
            bound_Rect.height = (int)RotCo_Ordinate.at<double>(1,i); // access largest y
    }

    bound_Rect.width = bound_Rect.width - bound_Rect.x;
    bound_Rect.height = bound_Rect.height - bound_Rect.y;

    Mat cropedResult;
    Mat ROI = frameRotated(bound_Rect);
    ROI.copyTo(cropedResult);

    imshow("Result", cropedResult);
    imshow("frame", frame);
    imshow("rotated frame", frameRotated);
    char k = waitKey();
    if(k == '+') theta += 10;
    if(k == '-') theta -= 10;
    if(k == 's') imwrite("rotated.jpg", cropedResult);
    if(k == 27) break;
}
Cropped Image
Thanks Robula!
Actually, you do not need to compute sine and cosine twice.
import cv2

def rotate_image(mat, angle):
    # angle in degrees
    height, width = mat.shape[:2]
    image_center = (width / 2, height / 2)

    rotation_mat = cv2.getRotationMatrix2D(image_center, angle, 1.)

    abs_cos = abs(rotation_mat[0, 0])
    abs_sin = abs(rotation_mat[0, 1])

    bound_w = int(height * abs_sin + width * abs_cos)
    bound_h = int(height * abs_cos + width * abs_sin)

    rotation_mat[0, 2] += bound_w / 2 - image_center[0]
    rotation_mat[1, 2] += bound_h / 2 - image_center[1]

    rotated_mat = cv2.warpAffine(mat, rotation_mat, (bound_w, bound_h))
    return rotated_mat
Thanks @Haris! Here's the Python version:
import math

import cv2
import numpy as np

def rotate_image(image, angle):
    '''Rotate image "angle" degrees.

    How it works:
      - Creates a blank image that fits any rotation of the image. To achieve
        this, set the height and width to be the image's diagonal.
      - Copy the original image to the center of this blank image.
      - Rotate using warpAffine, using the newly created image's center
        (the enlarged blank image center).
      - Translate the four corners of the source image in the enlarged image
        using homogeneous multiplication of the rotation matrix.
      - Crop the image according to these transformed corners.
    '''
    diagonal = int(math.sqrt(pow(image.shape[0], 2) + pow(image.shape[1], 2)))
    offset_x = (diagonal - image.shape[0]) // 2
    offset_y = (diagonal - image.shape[1]) // 2
    dst_image = np.zeros((diagonal, diagonal, 3), dtype='uint8')
    image_center = (diagonal / 2, diagonal / 2)

    R = cv2.getRotationMatrix2D(image_center, angle, 1.0)
    dst_image[offset_x:(offset_x + image.shape[0]),
              offset_y:(offset_y + image.shape[1]),
              :] = image
    dst_image = cv2.warpAffine(dst_image, R, (diagonal, diagonal),
                               flags=cv2.INTER_LINEAR)

    # Calculate the rotated bounding rect
    x0 = offset_x
    x1 = offset_x + image.shape[0]
    x2 = offset_x
    x3 = offset_x + image.shape[0]

    y0 = offset_y
    y1 = offset_y
    y2 = offset_y + image.shape[1]
    y3 = offset_y + image.shape[1]

    corners = np.zeros((3, 4))
    corners[0] = [x0, x1, x2, x3]
    corners[1] = [y0, y1, y2, y3]
    corners[2] = 1

    c = np.dot(R, corners)

    x = int(c[0, 0])
    y = int(c[1, 0])
    left, right, up, down = x, x, y, y
    for i in range(4):
        x = int(c[0, i])
        y = int(c[1, i])
        if x < left: left = x
        if x > right: right = x
        if y < up: up = y
        if y > down: down = y
    h = down - up
    w = right - left

    cropped = np.zeros((w, h, 3), dtype='uint8')
    cropped[:, :, :] = dst_image[left:(left + w), up:(up + h), :]
    return cropped
Increase the image canvas (equally from the center without changing the image size) so that it can fit the image after rotation, then apply warpAffine:
Mat img = imread ("/path/to/image", 1);
double offsetX, offsetY;
double angle = -45;
double width = img.size().width;
double height = img.size().height;
Point2d center = Point2d (width / 2, height / 2);
Rect bounds = RotatedRect (center, img.size(), angle).boundingRect();
Mat resized = Mat::zeros (bounds.size(), img.type());
offsetX = (bounds.width - width) / 2;
offsetY = (bounds.height - height) / 2;
Rect roi = Rect (offsetX, offsetY, width, height);
img.copyTo (resized (roi));
center += Point2d (offsetX, offsetY);
Mat M = getRotationMatrix2D (center, angle, 1.0);
warpAffine (resized, resized, M, resized.size());
After searching around for a clean and easy-to-understand solution, and reading through the answers above trying to understand them, I eventually came up with a solution using trigonometry.
I hope this helps somebody :)
import math

import cv2

def rotate_image(mat, angle):
    height, width = mat.shape[:2]
    image_center = (width / 2, height / 2)

    rotation_mat = cv2.getRotationMatrix2D(image_center, angle, 1)

    radians = math.radians(angle)
    sin = math.sin(radians)
    cos = math.cos(radians)
    bound_w = int((height * abs(sin)) + (width * abs(cos)))
    bound_h = int((height * abs(cos)) + (width * abs(sin)))

    rotation_mat[0, 2] += ((bound_w / 2) - image_center[0])
    rotation_mat[1, 2] += ((bound_h / 2) - image_center[1])

    rotated_mat = cv2.warpAffine(mat, rotation_mat, (bound_w, bound_h))
    return rotated_mat
EDIT: Please refer to @Remi Cuingnet's answer below.
For a Python version that rotates an image while giving you control over the padded black region, you can use scipy.ndimage.rotate. Here is an example:
import cv2
import matplotlib.pyplot as plt
from scipy import ndimage
from skimage import io

image = io.imread('https://www.pyimagesearch.com/wp-content/uploads/2019/12/tensorflow2_install_ubuntu_header.jpg')
io.imshow(image)
plt.show()

rotated = ndimage.rotate(image, angle=234, mode='nearest')
rotated = cv2.resize(rotated, (image.shape[1], image.shape[0]))  # cv2.resize expects (width, height)
# rotated = cv2.cvtColor(rotated, cv2.COLOR_BGR2RGB)
# cv2.imwrite('rotated.jpg', rotated)
io.imshow(rotated)
plt.show()
If you have a rotation and a scaling of the image:
#include "opencv2/opencv.hpp"
#include <functional>
#include <vector>
bool compareCoords(cv::Point2f p1, cv::Point2f p2, char coord)
{
assert(coord == 'x' || coord == 'y');
if (coord == 'x')
return p1.x < p2.x;
return p1.y < p2.y;
}
int main(int argc, char** argv)
{
cv::Mat image = cv::imread("lenna.png");
float angle = 45.0; // degrees
float scale = 0.5;
cv::Mat_<float> rot_mat = cv::getRotationMatrix2D( cv::Point2f( 0.0f, 0.0f ), angle, scale );
// Image corners
cv::Point2f pA = cv::Point2f(0.0f, 0.0f);
cv::Point2f pB = cv::Point2f(image.cols, 0.0f);
cv::Point2f pC = cv::Point2f(image.cols, image.rows);
cv::Point2f pD = cv::Point2f(0.0f, image.rows);
std::vector<cv::Point2f> pts = { pA, pB, pC, pD };
std::vector<cv::Point2f> ptsTransf;
cv::transform(pts, ptsTransf, rot_mat );
using namespace std::placeholders;
float minX = std::min_element(ptsTransf.begin(), ptsTransf.end(), std::bind(compareCoords, _1, _2, 'x'))->x;
float maxX = std::max_element(ptsTransf.begin(), ptsTransf.end(), std::bind(compareCoords, _1, _2, 'x'))->x;
float minY = std::min_element(ptsTransf.begin(), ptsTransf.end(), std::bind(compareCoords, _1, _2, 'y'))->y;
float maxY = std::max_element(ptsTransf.begin(), ptsTransf.end(), std::bind(compareCoords, _1, _2, 'y'))->y;
float newW = maxX - minX;
float newH = maxY - minY;
cv::Mat_<float> trans_mat = (cv::Mat_<float>(2,3) << 0, 0, -minX, 0, 0, -minY);
cv::Mat_<float> M = rot_mat + trans_mat;
cv::Mat warpedImage;
cv::warpAffine( image, warpedImage, M, cv::Size(newW, newH) );
cv::imshow("lenna", image);
cv::imshow("Warped lenna", warpedImage);
cv::waitKey();
cv::destroyAllWindows();
return 0;
}
Thanks to everyone for this post, it has been super useful. However, I found some black lines on the left and top edges (using Rose's Python version) when rotating 90°. The problem seemed to be some int() roundings. In addition, I changed the sign of the angle to make it grow clockwise.
import math

import cv2
import numpy as np

def rotate_image(image, angle):
    '''Rotate image "angle" degrees.

    How it works:
      - Creates a blank image that fits any rotation of the image. To achieve
        this, set the height and width to be the image's diagonal.
      - Copy the original image to the center of this blank image.
      - Rotate using warpAffine, using the newly created image's center
        (the enlarged blank image center).
      - Translate the four corners of the source image in the enlarged image
        using homogeneous multiplication of the rotation matrix.
      - Crop the image according to these transformed corners.
    '''
    diagonal = int(math.ceil(math.sqrt(pow(image.shape[0], 2) + pow(image.shape[1], 2))))
    offset_x = (diagonal - image.shape[0]) // 2
    offset_y = (diagonal - image.shape[1]) // 2
    dst_image = np.zeros((diagonal, diagonal, 3), dtype='uint8')
    image_center = (float(diagonal - 1) / 2, float(diagonal - 1) / 2)

    R = cv2.getRotationMatrix2D(image_center, -angle, 1.0)
    dst_image[offset_x:(offset_x + image.shape[0]), offset_y:(offset_y + image.shape[1]), :] = image
    dst_image = cv2.warpAffine(dst_image, R, (diagonal, diagonal), flags=cv2.INTER_LINEAR)

    # Calculate the rotated bounding rect
    x0 = offset_x
    x1 = offset_x + image.shape[0]
    x2 = offset_x + image.shape[0]
    x3 = offset_x

    y0 = offset_y
    y1 = offset_y
    y2 = offset_y + image.shape[1]
    y3 = offset_y + image.shape[1]

    corners = np.zeros((3, 4))
    corners[0] = [x0, x1, x2, x3]
    corners[1] = [y0, y1, y2, y3]
    corners[2] = 1

    c = np.dot(R, corners)

    x = int(round(c[0, 0]))
    y = int(round(c[1, 0]))
    left, right, up, down = x, x, y, y
    for i in range(4):
        x = c[0, i]
        y = c[1, i]
        if x < left: left = x
        if x > right: right = x
        if y < up: up = y
        if y > down: down = y
    h = int(round(down - up))
    w = int(round(right - left))
    left = int(round(left))
    up = int(round(up))

    cropped = np.zeros((w, h, 3), dtype='uint8')
    cropped[:, :, :] = dst_image[left:(left + w), up:(up + h), :]
    return cropped
Go version (using gocv) of @robula's and @remi-cuingnet's answers:
import (
	"image"
	"math"

	"gocv.io/x/gocv"
)

func rotateImage(mat *gocv.Mat, angle float64) *gocv.Mat {
	height := mat.Rows()
	width := mat.Cols()

	imgCenter := image.Point{X: width / 2, Y: height / 2}
	rotationMat := gocv.GetRotationMatrix2D(imgCenter, -angle, 1.0)

	absCos := math.Abs(rotationMat.GetDoubleAt(0, 0))
	absSin := math.Abs(rotationMat.GetDoubleAt(0, 1))

	boundW := float64(height)*absSin + float64(width)*absCos
	boundH := float64(height)*absCos + float64(width)*absSin

	rotationMat.SetDoubleAt(0, 2, rotationMat.GetDoubleAt(0, 2)+(boundW/2)-float64(imgCenter.X))
	rotationMat.SetDoubleAt(1, 2, rotationMat.GetDoubleAt(1, 2)+(boundH/2)-float64(imgCenter.Y))

	gocv.WarpAffine(*mat, mat, rotationMat, image.Point{X: int(boundW), Y: int(boundH)})

	return mat
}
I rotate the same matrix in memory; make a new matrix if you don't want to alter it.
For anyone using the Emgu.CV or OpenCvSharp wrapper in .NET, here is a C# implementation of Lars Schillingmann's answer:
Emgu.CV:
using System;
using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;

public static class MatExtension
{
    /// <summary>
    /// <see>https://stackoverflow.com/questions/22041699/rotate-an-image-without-cropping-in-opencv-in-c/75451191#75451191</see>
    /// </summary>
    public static Mat Rotate(this Mat src, float degrees)
    {
        degrees = -degrees; // counter-clockwise to clockwise
        var center = new PointF((src.Width - 1) / 2f, (src.Height - 1) / 2f);
        var rotationMat = new Mat();
        CvInvoke.GetRotationMatrix2D(center, degrees, 1, rotationMat);
        var boundingRect = new RotatedRect(new(), src.Size, degrees).MinAreaRect();
        rotationMat.Set(0, 2, rotationMat.Get<double>(0, 2) + (boundingRect.Width / 2f) - (src.Width / 2f));
        rotationMat.Set(1, 2, rotationMat.Get<double>(1, 2) + (boundingRect.Height / 2f) - (src.Height / 2f));
        var rotatedSrc = new Mat();
        CvInvoke.WarpAffine(src, rotatedSrc, rotationMat, boundingRect.Size);
        return rotatedSrc;
    }

    /// <summary>
    /// <see>https://stackoverflow.com/questions/32255440/how-can-i-get-and-set-pixel-values-of-an-emgucv-mat-image/69537504#69537504</see>
    /// </summary>
    public static unsafe void Set<T>(this Mat mat, int row, int col, T value) where T : struct =>
        _ = new Span<T>(mat.DataPointer.ToPointer(), mat.Rows * mat.Cols * mat.ElementSize)
        {
            [(row * mat.Cols) + col] = value
        };

    public static unsafe T Get<T>(this Mat mat, int row, int col) where T : struct =>
        new ReadOnlySpan<T>(mat.DataPointer.ToPointer(), mat.Rows * mat.Cols * mat.ElementSize)
            [(row * mat.Cols) + col];
}
OpenCvSharp:
OpenCvSharp already has a Mat.Set<> method that functions the same as mat.at<> in the original OpenCV, so we don't have to copy these methods from How can I get and set pixel values of an EmguCV Mat image?
using OpenCvSharp;

public static class MatExtension
{
    /// <summary>
    /// <see>https://stackoverflow.com/questions/22041699/rotate-an-image-without-cropping-in-opencv-in-c/75451191#75451191</see>
    /// </summary>
    public static Mat Rotate(this Mat src, float degrees)
    {
        degrees = -degrees; // counter-clockwise to clockwise
        var center = new Point2f((src.Width - 1) / 2f, (src.Height - 1) / 2f);
        var rotationMat = Cv2.GetRotationMatrix2D(center, degrees, 1);
        var boundingRect = new RotatedRect(new(), new Size2f(src.Width, src.Height), degrees).BoundingRect();
        rotationMat.Set(0, 2, rotationMat.Get<double>(0, 2) + (boundingRect.Width / 2f) - (src.Width / 2f));
        rotationMat.Set(1, 2, rotationMat.Get<double>(1, 2) + (boundingRect.Height / 2f) - (src.Height / 2f));
        var rotatedSrc = new Mat();
        Cv2.WarpAffine(src, rotatedSrc, rotationMat, boundingRect.Size);
        return rotatedSrc;
    }
}
Also, you may want to mutate the src param instead of returning a new clone of it during rotation; for that, you can just set the dst param of WarpAffine() to the same Mat as src: c++, opencv: Is it safe to use the same Mat for both source and destination images in filtering operation?
CvInvoke.WarpAffine(src, src, rotationMat, boundingRect.Size);
This is called in-place mode: https://answers.opencv.org/question/24/do-all-opencv-functions-support-in-place-mode-for-their-arguments/
Can the OpenCV function cvtColor be used to convert a matrix in place?
If it is just to rotate 90 degrees, maybe this code could be useful.
Mat img = imread("images.jpg");
Mat rt(img.rows, img.rows, CV_8U);
Point2f pc(img.cols / 2.0, img.rows / 2.0);
Mat r = getRotationMatrix2D(pc, 90, 1);
warpAffine(img, rt, r, rt.size());
imshow("rotated", rt);
waitKey(0); // without this the window closes immediately
Hope it's useful.
By the way, for 90° rotations only, here is a more efficient and accurate function:
import numpy as np

def rotate_image_90(image, angle):
    angle = -angle
    rotated_image = image
    if angle == 0:
        pass
    elif angle == 90:
        rotated_image = np.rot90(rotated_image)
    elif angle == 180 or angle == -180:
        rotated_image = np.rot90(rotated_image, 2)
    elif angle == -90:
        rotated_image = np.rot90(rotated_image, 3)
    return rotated_image

EasyBMP rotating image by any angle

I am trying to rotate a bmp image using EasyBMP. When the angle is between 0 and 90 or between 270 and 360 the rotation is fine, but when it is between 180 and 270 the bounding rectangle is stretched, and for angles between 90 and 180 I get a segmentation fault. I am convinced that the problem arises from:
int width = image.TellWidth();
int height = image.TellHeight();
float sine = sin(angle);
float cosine = cos(angle);

float x1 = -height * sine;
float y1 = height * cosine;
float x2 = width * cosine - height * sine;
float y2 = height * cosine + width * sine;
float x3 = width * cosine;
float y3 = width * sine;

float minx = min(0, min(x1, min(x2, x3)));
float miny = min(0, min(y1, min(y2, y3)));
float maxx = max(x1, max(x2, x3));
float maxy = max(y1, max(y2, y3));

int outWidth;
int outHeight;
outWidth = (int)ceil(fabs(maxx) - minx);
outHeight = (int)ceil(fabs(maxy) - miny);
output.SetSize(outHeight, outWidth);

for (int x = 0; x < outWidth; x++)
{
    for (int y = 0; y < outHeight; y++)
    {
        int srcX = (int)((x + minx) * cosine + (y + miny) * sine);
        int srcY = (int)((y + miny) * cosine - (x + minx) * sine);
        if (srcX >= 0 && srcX < width && srcY >= 0 && srcY < height)
        {
            output.SetPixel(x, y, image.GetPixel(srcX, srcY));
        }
    }
}
The following is how I solved this. The TL;DR: the rotation transform goes around (0, 0), so if your image coordinates put (0, 0) at the bottom left, you need to translate the image to be centered on (0, 0) first. Also, sin and cos expect radians, not degrees, so remember to convert first.
The long way:
I started by creating a simple program that has easily verified answers, to find out where things are going wrong.
The first thing I noticed was that 90.0f wouldn't produce any output. That seemed weird, so I broke in at the "output image size" printf and realized that the output height was being calculated as -87. Clearly that's not right, so let's see why that might happen.
Going up a bit, outHeight=(int)ceil(fabs(maxy)-miny); so let's figure out how we're ending up with a negative output height when subtracting maxy and miny. It appears maxy is -0.896... and miny is 88.503... However, the absolute value of maxy is taken before subtracting miny, meaning we're ending up with 0.896 - 88.503. Whoa, that's not good! Let's try doing the subtraction then taking the absolute value.
Recompiling with both width and height as such:
outWidth=(int)ceil(fabs(maxx-minx));
outHeight=(int)ceil(fabs(maxy-miny));
Gets us much better values. Now outWidth and outHeight are 2 and 90, respectively. This is massively improved, but the height should be 100. We'll address that later.
To figure out where the math is going wrong, I reorganized the terms to go together: x with x, y with y. Next I adjusted spacing and added parentheses to make it more readable and to ensure order of operations (sure beats trying to look at an OoO table ;) ). Since it's clear you're breaking out the rotation matrix multiplication, I'm going to name your variables something a bit more intuitive than x1, x2, etc. From now on, x1 is topLeftTransformedX, x2 is topRightTransformedX, x3 will exist as bottomLeftTransformedX (always 0), and x4 will be bottomRightTransformedX; the same goes for Y. Longer, but much easier to know what you're dealing with.
Using this, at this point, I see the same thing you do... then I remembered something, based on the numbers seen from this cleaner code (same math as yours, but still easier to debug).
Suddenly, my math for X looks like this:
// x = x cos - y sin
float topLeftTransformedX = (-midX * cosine) - (midY * sine);
float topRightTransformedX = (midX * cosine) - (midY * sine);
float bottomLeftTransformedX = (-midX * cosine) - (-midY * sine);
float bottomRightTransformedX = (midX * cosine) - (-midY * sine);
The rotation matrix rotates around the center point. You have to translate the image to be centered around that for a proper rotation.
Then, when trying to figure out why this would be giving the values it is, i recalled something else - angle needs to be in radians.
Suddenly, it almost all works. There's still some more to do, but this should get you 95% of the way there or more. Hope it helps!
// bmprotate.cpp : Defines the entry point for the console application.
//
#include "stdafx.h"
#include <math.h>

// Parenthesized to avoid the usual macro expansion bugs
#define min(x,y) ((x) < (y) ? (x) : (y))
#define max(x,y) ((x) > (y) ? (x) : (y))
#define PI 3.14159

void rotate(int width, int height, float angleInDeg)
{
    float angle = angleInDeg * (PI / 180.0f); // degrees -> radians
    float midX = ((float)width) / 2.0f;
    float midY = ((float)height) / 2.0f;
    float sine = sin(angle);
    float cosine = cos(angle);

    // x' = x cos - y sin
    float topLeftTransformedX = (-midX * cosine) - (midY * sine);
    float topRightTransformedX = (midX * cosine) - (midY * sine);
    float bottomLeftTransformedX = (-midX * cosine) - (-midY * sine);
    float bottomRightTransformedX = (midX * cosine) - (-midY * sine);

    float minx = min( topLeftTransformedX, min(topRightTransformedX, min(bottomLeftTransformedX, bottomRightTransformedX)) );
    float maxx = max( topLeftTransformedX, max(topRightTransformedX, max(bottomLeftTransformedX, bottomRightTransformedX)) );

    // y' = x sin + y cos
    float topLeftTransformedY = (-midX * sine) + (midY * cosine);
    float topRightTransformedY = (midX * sine) + (midY * cosine);
    float bottomLeftTransformedY = (-midX * sine) + (-midY * cosine);
    float bottomRightTransformedY = (midX * sine) + (-midY * cosine);

    float miny = min( topLeftTransformedY, min(topRightTransformedY, min(bottomLeftTransformedY, bottomRightTransformedY)) );
    float maxy = max( topLeftTransformedY, max(topRightTransformedY, max(bottomLeftTransformedY, bottomRightTransformedY)) );

    int outWidth;
    int outHeight;

    printf("(%f,%f) , (%f,%f) , (%f,%f) , (%f,%f)\n",
           topLeftTransformedX, topLeftTransformedY,
           topRightTransformedX, topRightTransformedY,
           bottomLeftTransformedX, bottomLeftTransformedY,
           bottomRightTransformedX, bottomRightTransformedY);

    outWidth = (int)ceil(fabs(maxx) + fabs(minx));
    outHeight = (int)ceil(fabs(maxy) + fabs(miny));

    printf("output image size: (%d,%d)\n", outWidth, outHeight);

    for (int x = 0; x < outWidth; x++)
    {
        for (int y = 0; y < outHeight; y++)
        {
            int srcX = (int)((x + minx) * cosine + (y + miny) * sine);
            int srcY = (int)((y + miny) * cosine - (x + minx) * sine);
            if (srcX >= 0 && srcX < width && srcY >= 0 && srcY < height)
            {
                printf("(x,y) = (%d,%d)\n", srcX, srcY);
            }
        }
    }
}

int _tmain(int argc, _TCHAR* argv[])
{
    rotate(100, 2, 90.0f);
    return 0;
}

Rotating around a sphere using OpenGL and gluLookAt

Alright, so I'm trying to click and drag to rotate around an object using C++ and OpenGL. The way I have it is to use gluLookAt centered at the origin, and I'm getting coordinates for the eye by using parametric equations for a sphere (eyex = 2*cos(theta)*sin(phi); eyey = 2*sin(theta)*sin(phi); eyez = 2*cos(phi);). This mostly works, as I can click and rotate horizontally, but when I try to rotate vertically it makes tight circles instead of rotating vertically. I'm trying to get the up vector by using the position of the camera and a vector at a 90 degree angle along the x-z plane, and taking the cross product of those.
The code I have is as follows:
double dotProduct(double v1[], double v2[]) {
    return v1[0]*v2[0] + v1[1]*v2[1] + v1[2]*v2[2];
}

void mouseDown(int button, int state, int x, int y) {
    if (button == GLUT_LEFT_BUTTON && state == GLUT_DOWN) {
        xpos = x;
        ypos = y;
    }
}

void mouseMovement(int x, int y) {
    diffx = x - xpos;
    diffy = y - ypos;
    xpos = x;
    ypos = y;
}

void camera(void) {
    theta += 2*PI * (-diffy/glutGet(GLUT_SCREEN_HEIGHT));
    phi += PI * (-diffx/glutGet(GLUT_WINDOW_WIDTH));

    eyex = 2* cos(theta) * sin(phi);
    eyey = 2* sin(theta) * sin(phi);
    eyez = 2* cos(phi);

    double rightv[3], rightt[3], eyes[3];
    rightv[0] = 2* cos(theta + 2/PI) * sin(phi);
    rightv[1] = 0;
    rightv[2] = 2* cos(phi);
    rightt[0] = rightv[0];
    rightt[1] = rightv[1];
    rightt[2] = rightv[2];

    rightv[0] = rightv[0] / sqrt(dotProduct(rightt, rightt));
    rightv[1] = rightv[1] / sqrt(dotProduct(rightt, rightt));
    rightv[2] = rightv[2] / sqrt(dotProduct(rightt, rightt));

    eyes[0] = eyex;
    eyes[1] = eyey;
    eyes[2] = eyez;

    upx = (eyey/sqrt(dotProduct(eyes,eyes)))*rightv[2] + (eyez/sqrt(dotProduct(eyes,eyes)))*rightv[1];
    upy = (eyez/sqrt(dotProduct(eyes,eyes)))*rightv[0] + (eyex/sqrt(dotProduct(eyes,eyes)))*rightv[2];
    upz = (eyex/sqrt(dotProduct(eyes,eyes)))*rightv[1] + (eyey/sqrt(dotProduct(eyes,eyes)))*rightv[0];

    diffx = 0;
    diffy = 0;
}
I am somewhat basing things off of this but it doesn't work, so I tried my way instead.
This isn't exactly a solution for the way you are doing it, but I did something similar the other day. I did it by using DX's D3DXMatrixRotationAxis and D3DXVec3TransformCoord. The math behind the D3DXMatrixRotationAxis method can be found at the bottom of the following page: D3DXMatrixRotationAxis Math. Use this if you are unable to use DX. This will allow you to rotate around any axis you pass in. In my object code I keep track of a direction and an up vector, and I simply rotate each of these around the axis of movement (in your case the yaw and pitch).
To implement the fixed-distance camera like this, I would simply do the dot product of the current camera location and the origin location (if this never changes, then you can simply do it once), then move the camera to the origin, rotate it the amount you need, and move it back with its new direction and up values. A sketch of the axis-rotation math is below.
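If you want the same rotate-around-an-axis behaviour without DirectX, here is a minimal sketch of the underlying math (Rodrigues' rotation formula, which is what D3DXMatrixRotationAxis implements; Vec3 and the helpers are my assumptions, not part of any library):
#include <cmath>

struct Vec3 { double x, y, z; };

Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}

double dot(const Vec3& a, const Vec3& b) {
    return a.x*b.x + a.y*b.y + a.z*b.z;
}

// Rotate v around the unit axis k by t radians:
// v' = v cos t + (k x v) sin t + k (k . v)(1 - cos t)
Vec3 rotateAroundAxis(const Vec3& v, const Vec3& k, double t) {
    Vec3 kxv = cross(k, v);
    double kdv = dot(k, v);
    double c = std::cos(t), s = std::sin(t);
    return { v.x*c + kxv.x*s + k.x*kdv*(1.0 - c),
             v.y*c + kxv.y*s + k.y*kdv*(1.0 - c),
             v.z*c + kxv.z*s + k.z*kdv*(1.0 - c) };
}
You would rotate both the eye offset and the up vector around the camera's right axis for pitch, and around the world up axis for yaw.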

OpenCV How to Plot velocity vectors as arrows in using single static image

I am trying to plot velocity vectors like MATLAB's "quiver" function: http://www.mathworks.com/help/techdoc/ref/quiver.html
I need to port the same methodology to C++ using the OpenCV library.
I have heard there are a few optical flow methods, i.e. Lucas and Kanade (cvCalcOpticalFlowLK), Horn and Schunck (cvCalcOpticalFlowHS), or the Block Matching method (cvCalcOpticalFlowBM),
but all of these functions take two images, while I need to use one image because I am working on fingerprints.
Kindly help me...
[Edit]
Solution found
void cvQuiver(IplImage* Image, int x, int y, int u, int v, CvScalar Color,
              int Size, int Thickness)
{
    cv::Mat img = cv::cvarrToMat(Image); // wrap the IplImage so cv::line can draw on it
    cv::Point pt1, pt2;
    double Theta;
    double PI = 3.1416;

    if (u == 0)
        Theta = PI/2;
    else
        Theta = atan2(double(v), (double)(u));

    pt1.x = x;
    pt1.y = y;
    pt2.x = x + u;
    pt2.y = y + v;
    cv::line(img, pt1, pt2, Color, Thickness, 8); // draw the arrow shaft

    Size = (int)(Size * 0.707);
    if (Theta == PI/2 && pt1.y > pt2.y)
    {
        pt1.x = (int)(Size*cos(Theta) - Size*sin(Theta) + pt2.x);
        pt1.y = (int)(Size*sin(Theta) + Size*cos(Theta) + pt2.y);
        cv::line(img, pt1, pt2, Color, Thickness, 8); // one side of the tip

        pt1.x = (int)(Size*cos(Theta) + Size*sin(Theta) + pt2.x);
        pt1.y = (int)(Size*sin(Theta) - Size*cos(Theta) + pt2.y);
        cv::line(img, pt1, pt2, Color, Thickness, 8); // the other side of the tip
    }
    else
    {
        pt1.x = (int)(-Size*cos(Theta) - Size*sin(Theta) + pt2.x);
        pt1.y = (int)(-Size*sin(Theta) + Size*cos(Theta) + pt2.y);
        cv::line(img, pt1, pt2, Color, Thickness, 8); // one side of the tip

        pt1.x = (int)(-Size*cos(Theta) + Size*sin(Theta) + pt2.x);
        pt1.y = (int)(-Size*sin(Theta) - Size*cos(Theta) + pt2.y);
        cv::line(img, pt1, pt2, Color, Thickness, 8); // the other side of the tip
    }
}
I am completing the current answer here, which fails to give the right size for each arrow's tip. MATLAB does it in such a way that when an arrow is nearly a dot it doesn't have any tip, while long arrows show a big tip, as the following image shows.
To get this effect, we need to normalise the tip size of each arrow over the range of the arrows' lengths. The following code does the trick:
double l_max = -10;

// First pass: find the maximum l (longest flow vector)
for (int y = 0; y < img_sz.height; y += 10)
{
    for (int x = 0; x < img_sz.width; x += 10)
    {
        double dx = cvGetReal2D(velx, y, x); // X component of the flow
        double dy = cvGetReal2D(vely, y, x); // Y component of the flow
        double l = sqrt(dx*dx + dy*dy);      // length of the flow vector
        if (l > l_max) l_max = l;
    }
}

// Second pass: draw the arrows, with the tip size normalised by l_max
for (int y = 0; y < img_sz.height; y += 10)
{
    for (int x = 0; x < img_sz.width; x += 10)
    {
        double dx = cvGetReal2D(velx, y, x); // X component of the flow
        double dy = cvGetReal2D(vely, y, x); // Y component of the flow
        CvPoint p = cvPoint(x, y);
        double l = sqrt(dx*dx + dy*dy);      // length of the flow vector
        if (l > 0)
        {
            double spinSize = 5.0 * l / l_max; // factor to normalise the tip size by the arrow length
            CvPoint p2 = cvPoint(p.x + (int)(dx), p.y + (int)(dy));
            cvLine(resultDenseOpticalFlow, p, p2, CV_RGB(0,255,0), 1, CV_AA);

            // Draw the tip of the arrow
            double angle = atan2((double) p.y - p2.y, (double) p.x - p2.x);
            p.x = (int) (p2.x + spinSize * cos(angle + 3.1416 / 4));
            p.y = (int) (p2.y + spinSize * sin(angle + 3.1416 / 4));
            cvLine(resultDenseOpticalFlow, p, p2, CV_RGB(0,255,0), 1, CV_AA, 0);

            p.x = (int) (p2.x + spinSize * cos(angle - 3.1416 / 4));
            p.y = (int) (p2.y + spinSize * sin(angle - 3.1416 / 4));
            cvLine(resultDenseOpticalFlow, p, p2, CV_RGB(0,255,0), 1, CV_AA, 0);
        }
    }
}
And this is an example of what the output of this OpenCV code looks like.
I hope this helps other people Googling for the same issue.
Based on the code from Dan and the suggestion of mkuse, here is a function with the same syntax as cv::line():
static void arrowedLine(InputOutputArray img, Point pt1, Point pt2, const Scalar& color,
                        int thickness = 1, int line_type = 8, int shift = 0, double tipLength = 0.1)
{
    // Factor to normalize the size of the tip depending on the length of the arrow
    const double tipSize = norm(pt1 - pt2) * tipLength;

    line(img, pt1, pt2, color, thickness, line_type, shift);

    const double angle = atan2((double) pt1.y - pt2.y, (double) pt1.x - pt2.x);

    Point p(cvRound(pt2.x + tipSize * cos(angle + CV_PI / 4)),
            cvRound(pt2.y + tipSize * sin(angle + CV_PI / 4)));
    line(img, p, pt2, color, thickness, line_type, shift);

    p.x = cvRound(pt2.x + tipSize * cos(angle - CV_PI / 4));
    p.y = cvRound(pt2.y + tipSize * sin(angle - CV_PI / 4));
    line(img, p, pt2, color, thickness, line_type, shift);
}
We will see if those maintaining the OpenCV repository will like it :-)
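(For what it's worth, OpenCV has since shipped cv::arrowedLine with essentially this signature, so on OpenCV 3.0+ you can call it directly, e.g.:
cv::arrowedLine(img, cv::Point(10, 10), cv::Point(100, 100), cv::Scalar(0, 255, 0), 1, cv::LINE_AA, 0, 0.1);
where img is your cv::Mat.)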
cvCalcOpticalFlowLK does not plot velocity vectors, it computes them. If you do not have these vectors, you must call this function with two images. I guess you already have these vectors and just want to plot them.
In this case, you can use the cv::line function (note that it also needs a colour argument), for example:
cv::line(yourImage, cv::Point(baseX, baseY), cv::Point(endX, endY), cv::Scalar(255, 0, 0));
I hope this will help you!

I need to implement a 2d shape rotate function

This is the formula, but I don't know how to implement it. Can someone please help?
rectangle::rectangle() // rectangle constructor
{
    bl = complex<double>(0, 0); // bottom-left  (std::complex's real()/imag() are not assignable)
    tr = complex<double>(1, 1); // top-right
}

complex<double> rectangle::get_bl() const
{
    return bl;
}

complex<double> rectangle::get_tr() const
{
    return tr;
}

void rectangle::rotate(double angle)
{
    // not sure how to do it: tr = tr.real() * cos(angle) + tr.imag() * cos(angle);
}
main
rectangle r;
r.rotate(90);
expected output (not 100% sure)
0 0 -1 1
Move your shape to (0, 0) temporarily (the formula assumes you are rotating about the origin, so move the bottom-left corner to (0, 0)).
Apply the formula.
Move it back.
double tempX, tempY;
if (tr.real() < bl.real()) {
    tempX = tr.real() - bl.real();
    tempY = tr.imag() - bl.imag();
} else {
    tempX = bl.real() - tr.real();
    tempY = bl.imag() - tr.imag();
}
// std::complex's real()/imag() cannot be assigned to, so rebuild the corner
tr = complex<double>(tempX * cos(theta) - tempY * sin(theta),
                     tempX * sin(theta) + tempY * cos(theta));
The formula is basically saying:
new_x = shape.point[i].x*cos(angle) - shape.point[i].y*sin(angle)
new_y = shape.point[i].x*sin(angle) + shape.point[i].y*cos(angle)
shape.point[i].x = new_x
shape.point[i].y = new_y
angle is in radians; to convert from degrees to radians use
degrees*pi/180, where pi is the constant 3.14...
You will need to do this for each point on the shape to fully rotate the shape by the desired degree.
This formula also assumes that the points are centered around (0,0), i.e. the center of the shape is (0,0) and all points are relative to that center.
One tip, if applicable: try to store shapes as points, going clockwise from the 0th point. For instance, this rectangle would be:
point[0] = {-1, 1}
point[1] = { 1, 1}
point[2] = { 1,-1}
point[3] = {-1,-1}
To convert from tl, br to points you will need to do something similar to:
point[0] = {tl.x, tl.y}
point[1] = {br.x, tl.y}
point[2] = {br.x, br.y}
point[3] = {tl.x, br.y}
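Since the corners are stored as complex<double>, one compact way to implement the whole move/rotate/move-back recipe is complex multiplication: multiplying by polar(1.0, theta) rotates a point about the origin. A minimal sketch, rotating about the bottom-left corner (the pivot choice and the PI constant are my assumptions; it reproduces the expected output above):
#include <cmath>
#include <complex>

void rectangle::rotate(double angle)
{
    const double PI = 3.14159265358979;
    double theta = angle * PI / 180.0;       // degrees -> radians
    complex<double> pivot = bl;              // move the shape so bl sits at the origin
    complex<double> rot = polar(1.0, theta); // multiplying by this rotates about the origin
    bl = pivot + (bl - pivot) * rot;         // bl stays put
    tr = pivot + (tr - pivot) * rot;         // tr is rotated, then moved back
}
With bl = (0, 0), tr = (1, 1) and angle = 90, this gives 0 0 -1 1, matching the expected output (up to floating-point error).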