How do I apply a transformation (e.g. a rotation) to a cv::RotatedRect?
I tried cv::warpAffine, but that doesn't work: it is meant to be applied to a cv::Mat...
You can control rotation, translation, and scale directly through the member variables angle, center, and size; see the documentation.
More general transformations require getting the vertices via points() and manipulating those points, but once you do that you no longer have a cv::RotatedRect (by definition).
If you are planning to do complex operations like affine or perspective transforms, you should work with the points of the rotated rect, and the result may be a quadrilateral rather than a rectangle.
cv::warpAffine works on images; for points you should use cv::transform and cv::perspectiveTransform.
They take an array of points and produce an array of points.
Example:
cv::RotatedRect rect;
//fill rect somehow
cv::Point2f rect_corners[4];
rect.points(rect_corners);
std::vector<cv::Point2f> rect_corners_transformed(4);
cv::Mat M;
//fill M with affine transformation matrix
cv::transform(std::vector<cv::Point2f>(std::begin(rect_corners), std::end(rect_corners)), rect_corners_transformed, M);
// your transformed points are in rect_corners_transformed
TLDR: Create a new rectangle.
I don't know if it will help you, but I solved a similar problem by creating a new rectangle and ignoring the old one. In other words, I calculated the new angle, and then assigned it and the values of the old rectangle (the center point and the size) to the new rectangle:
RotatedRect newRotatedRectangle(oldRectangle.center, oldRectangle.size, newAngle);
Related
I have a camera and a lamp.
The camera takes pictures automatically and the lamp is rigid.
Each of my pictures has a bright spot in the middle and gets darker toward the edges (linearly).
Is there an easy way to darken the middle, or brighten the outside, to compensate for this (preferably with a gradient)?
I am using OpenCV with the C++ API.
Thank you for the help.
It's hard to say what exactly you want to do without an example. However, let's assume the effect is exactly the same in all images and you want to apply the same transformation to each of them.
You say the effect is linear; let's assume you want to make the center darker by, say, 20% and the pixel furthest from the center brighter by 20%. Let's further assume the optical center is in the center of the image (which needn't be true in practice).
So you have an image cv::Mat img; that you want to manipulate, and I assume it contains data of type CV_32F (if it is not float- or double-valued, convert it first; it can have more than one channel). You create another cv::Mat:
// first, make a mask image to multiply the image with
cv::Mat mask = cv::Mat::zeros(img.rows, img.cols, CV_32F);
float maxdist = std::sqrt((float)(img.rows*img.rows + img.cols*img.cols)) / 2;
cv::Point2f center(img.cols * 0.5f, img.rows * 0.5f);
for (int j = 0; j < img.rows; ++j)
    for (int i = 0; i < img.cols; ++i)
    {
        cv::Point2f p(i, j);
        cv::Point2f diff(p - center);
        float dist(std::sqrt(diff.dot(diff)));
        float factor(0.8f + 0.4f * dist / maxdist);
        mask.at<float>(j, i) = factor;
    }
// apply the transformation, to as many images as you like
img = img.mul(mask);
This doesn't check for overflows; you may or may not want to do that afterwards. But given your question, this would be a simple way to do it.
I'm trying to deskew an image that has an element of known size. Given this image:
I can use aruco::estimatePoseBoard, which returns rotation and translation vectors. Is there a way to use that information to deskew everything that's in the same plane as the marker board? (Unfortunately, my linear algebra is rudimentary at best.)
Clarification
I know how to deskew the marker board. What I want to be able to do is deskew the other things (in this case, the cloud-shaped object) in the same plane as the marker board. I'm trying to determine whether or not that's possible and, if so, how to do it. I can already put four markers around the object I want to deskew and use the detected corners as input to getPerspectiveTransform along with the known distance between them. But for our real-world application it may be difficult for the user to place markers exactly. It would be much easier if they could place a single marker board in the frame and have the software deskew the other objects.
Since you tagged OpenCV:
From the image I can see that you have detected the corners of all the black boxes. So just get the four outermost corner points one way or another.
Then it is like this:
std::vector<cv::Point2f> src_points = {/*Fill your 4 corners here*/};
std::vector<cv::Point2f> dst_points = {cv::Point2f(0,0), cv::Point2f(width,0), cv::Point2f(width,height), cv::Point2f(0,height)};
auto H = cv::getPerspectiveTransform(src_points, dst_points);
cv::Mat cropped_image;
cv::warpPerspective(full_image, cropped_image, H, cv::Size(width,height));
I was stuck on the assumption that the destination points in the call to getPerspectiveTransform had to be the corners of the output image (as they are in Humam's suggestion). Once it dawned on me that the destination points could be somewhere within the output image I had my answer.
float boardX = 1240;
float boardY = 1570;
float boardWidth = 1730;
float boardHeight = 1400;
vector<Point2f> destinationCorners;
destinationCorners.push_back(Point2f(boardX+boardWidth, boardY));
destinationCorners.push_back(Point2f(boardX+boardWidth, boardY+boardHeight));
destinationCorners.push_back(Point2f(boardX, boardY+boardHeight));
destinationCorners.push_back(Point2f(boardX, boardY));
Mat h = getPerspectiveTransform(detectedCorners, destinationCorners);
Mat bigImage(image.size() * 3, image.type(), Scalar(0, 50, 50));
warpPerspective(image, bigImage, h, bigImage.size());
This fixed the perspective of the board and everything in its plane. (The waviness of the board is due to the fact that the paper wasn't lying flat in the original photo.)
I have an app that finds an object in a frame and uses warpPerspective to correct the image to be square. In the course of doing so you specify an output image size. However, I want to know how to do so without harming its apparent size. How can I unwarp the 4-corners of the image without changing the size of the image? I don't need the image itself, I just want to measure its height and width in pixels within the original image.
Get a transform matrix that will square up the corners.
std::vector<cv::Point2f> transformedPoints;
cv::Mat M = cv::getPerspectiveTransform(points, objectCorners);
cv::perspectiveTransform(points, transformedPoints, M);
This will square up the image, but in terms of the objectCorners coordinate system, which runs from -0.5f to 0.5f rather than the original image plane.
cv::boundingRect almost does what I want.
cv::Rect boundingRectangle = cv::boundingRect(points);
But as the documentation states
The function calculates and returns the minimal up-right bounding rectangle for the specified point set.
And what I want is the bounding rectangle after the quad has been squared up, not before.
From my understanding of your post, here is something that should help you:
OpenCV perspective transform example.
Update: if this still doesn't help you find the height and width within the image, take the minimum bounding rect of the points:
cv::RotatedRect box = cv::minAreaRect(cv::Mat(points));
As the minAreaRect reference in the OpenCV documentation states:
Finds a rotated rectangle of the minimum area enclosing the input 2D point set.
You can read box.size to get the width and height.
If, in OpenCV with the C++ API, I create a rotation matrix by writing
Mat rot_matrix = getRotationMatrix2D(src_center, angle, 1.0);
then how should I write it in OpenCV with C? What should I write instead of Mat? Is it like this:
CvMat* rot_mat =cv2DRotationMatrix( center, angle, scale, rot );
Is the above declaration correct? If yes, how can I use it in the warp affine function? Is it like this:
cvWarpAffine( src, dst, rot_mat);
I think you are a bit confused, judging from our little chat in the comments section, so I decided to write an answer to make things a bit clearer.
First, for the original question: as you wrote, Mat is indeed the C++ form.
In C you use CvMat, and the function cv2DRotationMatrix() already takes a CvMat* as one of its parameters, so it can be used like this:
cv2DRotationMatrix(center,angle,scale,rot_mat);
where:
center – CvPoint2D32f, the center of the rotation in the source image (width/2, height/2).
angle – the desired rotation angle (in degrees).
scale – isotropic scale factor (1 means the picture keeps its original size).
mapMatrix – pointer to the destination matrix (your 2x3 matrix that will be used as the rotation matrix).
Now rot_mat holds the 2x3 rotation matrix (the exact formula is given in the OpenCV documentation).
Now you would like to calculate the position of each pixel after rotating the whole picture by x degrees (an affine transformation of the image).
At this stage you have the rotation matrix used in the affine transformation, and you want to perform that transformation (rotate the image), so you can use the function cvWarpAffine()
in our case:
cvWarpAffine( src, dst, rot_mat );
where:
src – source image.
dst – destination image.
rot_mat – our mapMatrix (the transformation matrix).
*There is also a fourth parameter, flags, but the default is OK for our case.
What does it do? It transforms the source image using the specified matrix (the formula is in the documentation).
*Or, in simpler words, as described before, it "just" calculates the new position of each pixel after the rotation, using the rotation matrix as input to the affine function.
rot_mat should be a 2x3 matrix; you can create it by calling cvCreateMat().
src and dst are IplImage* (because, as we said, it's C code).
*The technical aspect of the function is from:
http://opencv.willowgarage.com/documentation/geometric_image_transformations.html
It seems like you are porting a C++ program to C. As per Wikipedia, there are C interfaces you can look at for your purpose.
The declarations seem right, provided the parameters are pointers and not references, since C does not support references.
You can use pointers in place of the class objects. The actual C signature is:
CvMat* cv2DRotationMatrix(CvPoint2D32f center, double angle, double scale, CvMat* map_matrix);
As you have pointed out, you need to wrap the C++ functions under pure C functions.
1. Goal
My colleague and I have been trying to render rotated ellipsoids in Qt. The typical solution approach, as we understand it, consists of shifting the center of the ellipsoids to the origin of the coordinate system, doing the rotation there, and shifting back:
http://qt-project.org/doc/qt-4.8/qml-rotation.html
2. Sample Code
Based on the solution outlined in the link above, we came up with the following sample code:
// Constructors and destructors
RIEllipse(QRect rect, RIShape* parent, bool isFilled = false)
: RIShape(parent, isFilled), _rect(rect), _angle(30)
{}
// Main functionality
virtual Status draw(QPainter& painter)
{
const QPen& prevPen = painter.pen();
painter.setPen(getContColor());
const QBrush& prevBrush = painter.brush();
painter.setBrush(getFillBrush(Qt::SolidPattern));
// Get rectangle center
QPoint center = _rect.center();
// Center the ellipse at the origin (0,0)
painter.translate(-center.x(), -center.y());
// Rotate the ellipse around its center
painter.rotate(_angle);
// Move the rotated ellipse back to its initial location
painter.translate(center.x(), center.y());
// Draw the ellipse rotated around its center
painter.drawEllipse(_rect);
painter.setBrush(prevBrush);
painter.setPen(prevPen);
return IL_SUCCESS;
}
As you can see, we have hard coded the rotation angle to 30 degrees in this test sample.
3. Observations
The ellipses come out at wrong positions, oftentimes outside the canvas area.
4. Question
What is wrong about the sample code above?
Best regards,
Baldur
P.S. Thanks in advance for any constructive response!
P.P.S. Prior to posting this message, we searched around quite a bit on stackoverflow.com.
Qt image move/rotation seemed to reflect a solution approach similar to the link above.
In painter.translate(center.x(), center.y()); you shift your object by the amount of the current coordinate, which results in (2*center.x(), 2*center.y()). You may need:
painter.translate(- center.x(), - center.y());
The theory of moving an object back to the origin, rotating, and then restoring the object's position is correct. However, the code you've presented is not translating and rotating the object at all, but translating and rotating the painter. In the example question you referred to, they want to rotate the whole image about an object, which is why they move the painter to the object's centre before rotating.
The easiest way to do rotations about a QGraphicsItem is to initially define the item with its centre in the centre of the object, rather than in its top-left corner. That way, any rotation will automatically be about the object's centre, without any need to translate the object.
To do this, you'd define the item with a bounding rect for x,y,width,height with (-width/2, -height/2, width, height).
Alternatively, assuming your item is inherited from QGraphicsItem or QGraphicsObject, you can use the function setTransformOriginPoint before any rotation.
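The required ordering can be sanity-checked with plain 2-D math, independent of Qt: rotating a point about a center means translate(-center), rotate, translate(+center), and since QPainter applies the last transform set to the drawn point first, the painter calls must come in the order translate(+center), rotate, translate(-center), the reverse of the question's code. A small sketch of that check:

```cpp
#include <cmath>

struct Pt { double x, y; };

// Rotate p about c by deg degrees: translate(-c), rotate, translate(+c).
// The center c itself is a fixed point of this composition.
Pt rotateAbout(Pt p, Pt c, double deg)
{
    const double r = deg * std::acos(-1.0) / 180.0;     // degrees to radians
    const double dx = p.x - c.x, dy = p.y - c.y;        // translate(-c)
    const double rx = dx * std::cos(r) - dy * std::sin(r);  // rotate
    const double ry = dx * std::sin(r) + dy * std::cos(r);
    return {rx + c.x, ry + c.y};                        // translate(+c)
}
```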