How to create circular mask for Mat object in OpenCV / C++? - c++

My goal is to create a circular mask on a Mat object, so e.g. for a Mat looking like this:
0 0 0 0 0
0 0 0 0 0
0 0 0 0 0
0 0 0 0 0
0 0 0 0 0
...modify it such that I obtain a "circular shape" of 1s within it, e.g.
0 0 0 0 0
0 0 1 0 0
0 1 1 1 0
0 0 1 0 0
0 0 0 0 0
I am currently using the following code:
typedef struct {
    double radius;
    Point center;
} Circle;
...
for (Circle c : circles) {
    // get the circle's bounding rect
    Rect boundingRect(c.center.x - c.radius, c.center.y - c.radius, c.radius * 2, c.radius * 2);
    // obtain the image ROI:
    Mat circleROI(stainMask_, boundingRect);
    int radius = floor(radius);
    circle(circleROI, c.center, radius, Scalar::all(1), 0);
}
The problem is that after my call to circle, at most one element of circleROI is set to 1... As I understand it, this code should work: circle is supposed to use the center and radius to modify circleROI so that all points within the circle's area are set to 1. Does anyone have an explanation for what I am doing wrong? Or is my overall approach right but the actual issue somewhere else? (This is very much possible too, since I am a novice to C++ and OpenCV.)
Note that I also tried to modify the last parameter in the circle call (which is the thickness of the circle outline) to 1 and -1, without any effect.

It is because you are filling circleROI using the circle's coordinates in the big Mat. The circle's coordinates inside circleROI should be relative to circleROI, which in your case means: new_center = (c.radius, c.radius), new_radius = c.radius.
Here is a snippet for the loop:
for (Circle c : circles) {
    // get the circle's bounding rect
    Rect boundingRect(c.center.x - c.radius, c.center.y - c.radius, c.radius * 2 + 1, c.radius * 2 + 1);
    // obtain the image ROI:
    Mat circleROI(stainMask_, boundingRect);
    // draw the circle
    circle(circleROI, Point(c.radius, c.radius), c.radius, Scalar::all(1), -1);
}

Take a look at: getStructuringElement
http://docs.opencv.org/modules/imgproc/doc/filtering.html
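As a rough sketch of that idea (how the disc gets combined with your stainMask_, and that the mask is CV_8U, are assumptions on my part; stampCircle is just a hypothetical helper name):

void stampCircle(Mat &mask, const Circle &c)
{
    int r = (int)c.radius;
    // MORPH_ELLIPSE with an odd square size yields a filled disc of 1s (CV_8U)
    Mat disc = getStructuringElement(MORPH_ELLIPSE, Size(2*r + 1, 2*r + 1));
    Rect roi(c.center.x - r, c.center.y - r, 2*r + 1, 2*r + 1);
    disc.copyTo(mask(roi));   // assumes the ROI lies fully inside the mask
}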

Related

Creating a view matrix manually OpenGL

I'm trying to create a view matrix for my program to be able to move and rotate the camera in OpenGL.
I have a camera struct that has the position and rotation vectors in it. From what I understood, to create the view matrix, you need to multiply the transform matrix with the rotation matrix to get the expected result.
So far I tried creating matrices for rotation and for transformation and multiply them like this:
> Transformation Matrix T =
1 0 0 -x
0 1 0 -y
0 0 1 -z
0 0 0 1
> Rotation Matrix Rx =
1 0 0 0
0 cos(-x) -sin(-x) 0
0 sin(-x) cos(-x) 0
0 0 0 1
> Rotation Matrix Ry =
cos(-y) 0 sin(-y) 0
0 1 0 0
-sin(-y) 0 cos(-y) 0
0 0 0 1
> Rotation Matrix Rz =
cos(-z) -sin(-z) 0 0
sin(-z) cos(-z) 0 0
0 0 1 0
0 0 0 1
View matrix = Rz * Ry * Rx * T
Notice that the values are negated, because if we want to move the camera to one side, the entire world moves to the opposite side.
This solution seems to almost be working. The problem I have is that when the camera is not at 0, 0, 0 and I rotate the camera, the position changes. What I think is that if the camera is positioned at, let's say, 0, 0, -20 and I rotate the camera, the position should remain at 0, 0, -20, right?
I feel like I'm missing something but I can't seem to figure out what. Any help?
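For reference, this is a minimal sketch of how I build and combine the matrices described above (row-major 4x4 arrays, column-vector convention; the helper names are just mine for illustration):

#include <cmath>

struct Mat4 { float m[16]; };   // row-major 4x4

Mat4 multiply(const Mat4 &a, const Mat4 &b)
{
    Mat4 r = {};
    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col)
            for (int k = 0; k < 4; ++k)
                r.m[row * 4 + col] += a.m[row * 4 + k] * b.m[k * 4 + col];
    return r;
}

// T with the camera position negated
Mat4 translation(float x, float y, float z)
{
    return {{ 1, 0, 0, -x,
              0, 1, 0, -y,
              0, 0, 1, -z,
              0, 0, 0,  1 }};
}

// Rx with the camera angle negated (Ry and Rz are built the same way)
Mat4 rotationX(float a)
{
    float c = std::cos(-a), s = std::sin(-a);
    return {{ 1, 0,  0, 0,
              0, c, -s, 0,
              0, s,  c, 0,
              0, 0,  0, 1 }};
}

// View matrix = Rz * Ry * Rx * T:
// Mat4 view = multiply(multiply(multiply(Rz, Ry), Rx), T);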
Edit 1:
It's an assignment for university, so I can't use any built-in functions!
Edit 2:
I tried changing the order of the operations and putting the translation on the left side, so T * Rz * Ry * Rx, but then the models rotate around themselves, and not around the camera.

CGAL: Why is halfplane represented by six rays?

I've just started playing with Nef polyhedrons on the plane - the simple program below creates a halfplane, defined by a line y=0, and then this halfplane is explored by the CGAL Explorer.
#include <iostream>
#include <CGAL/Exact_integer.h>
#include <CGAL/Extended_cartesian.h>
#include <CGAL/Nef_polyhedron_2.h>
using Kernel = CGAL::Extended_cartesian<CGAL::Exact_integer>;
using Polyhedron = CGAL::Nef_polyhedron_2<Kernel>;
using Line = Polyhedron::Line;
using std::cout;
using std::endl;
int main()
{
    const Polyhedron p(Line(0, 1, 0), Polyhedron::INCLUDED);
    const auto ex = p.explorer();
    for (auto it = ex.vertices_begin(); it != ex.vertices_end(); ++it)
    {
        if (ex.is_standard(it))
        {
            cout << "Point: " << ex.point(it) << endl;
        }
        else
        {
            cout << "Ray: " << ex.ray(it) << endl;
        }
    }
}
The program output:
Ray: 0 0 -1 -1
Ray: 0 0 -1 0
Ray: 0 0 -1 1
Ray: 0 0 1 -1
Ray: 0 0 1 0
Ray: 0 0 1 1
Why these six rays?
From the documentation for the explorer:
By recursively composing binary and unary operations one can end with a very complex rectilinear structure. To explore that structure there is a data type Nef_polyhedron_2::Explorer that allows read-only exploration of the rectilinear structure.
Therefore the planar subdivision is bounded symbolically by an axis-parallel square box of infimaximal size centered at the origin of our coordinate system. All structures extending to infinity are pruned by the box. Lines and rays have symbolic endpoints on the box. Faces are circularly closed. Infimaximal here means that its geometric extend is always large enough (but finite for our intuition). Assume you approach the box with an affine point, then this point is always inside the box. The same holds for straight lines; they always intersect the box.
Assuming that these vertices are on the box, my best guess is this:
It's a square, so that's why you get the diagonal rays like 0, 0 -> -1, 1 and 0, 0 -> 1, 1. I'm not an expert though.
Edit: drawing is upside-down, the halfplane is y >= 0, not y <= 0.
I'm answering my own question. According to these explanations from the CGAL online manual, each 2D polyhedron is bounded by an infinitely large frame, which is represented by four infinitely remote vertices. These boundary vertices have extended coordinates (+infinity, +infinity), (+infinity, -infinity), (-infinity, +infinity) and (-infinity, -infinity). Such non-standard vertices are represented in CGAL by rays - for example, the point (+infinity, -infinity) is stored as a ray beginning at the origin (0,0) with direction (1,-1).
So, a polyhedron consisting of the single halfplane y>0 will have six non-standard vertices - four belonging to the frame, plus two describing the line y=0. All its faces will look like this:
face 0, marked by 0
* no outer face cycle
face 1, marked by 0
* outer face cycle:
frame halfedge: (0 0 -1 0) => (0 0 -1 -1)
frame halfedge: (0 0 -1 -1) => (0 0 1 -1)
frame halfedge: (0 0 1 -1) => (0 0 1 0)
internal halfedge: (0 0 1 0) => (0 0 -1 0)
face 2, marked by 1
* outer face cycle:
frame halfedge: (0 0 -1 1) => (0 0 -1 0)
internal halfedge: (0 0 -1 0) => (0 0 1 0)
frame halfedge: (0 0 1 0) => (0 0 1 1)
frame halfedge: (0 0 1 1) => (0 0 -1 1)
Also see Figure 17.3 in the CGAL online manual.

Tetris Rotation without arrays

I am writing a Tetris clone; it is almost done, except for the collisions. For example, in order to move the Z piece I use a method:
void PieceZ::movePieceDown()
{
    drawBlock(x1, y1++);
    drawBlock(x2, y2++);
    drawBlock(x3, y3++);
    drawBlock(x4, y4++);
}
and in order to rotate a piece I use a setter (because the coordinates are private). For rotation I use a 90 degree clockwise rotation matrix. For example, if I want to rotate (x1,y1) and (x2,y2) is my origin, then to get the x and y of the new block:
newX = (y1-y2) + x2;
newY = (x2-x1) + y2 + 1;
That works to some extent, it starts out as:
0 0 0 0
0 1 1 0
0 0 1 1
0 0 0 0
Then as planned it rotates to:
0 0 0 1
0 0 1 1
0 0 1 0
0 0 0 0
And then it rotates to Piece S:
0 0 0 0
0 0 1 1
0 1 1 0
0 0 0 0
And then it just alternates between the second and the third stages.
My calculations are wrong but I can't figure out where, I just need a little hint.
Ok here is how it should go (somewhat):
Determine the point you want to rotate the piece around (this could be the upper or lower corner or the center) and call it origin
Calculate the new x: newX = origin.x + (y - origin.y);
Calculate the new y: newY = origin.y - (x - origin.x);
(the offset from the origin is rotated, and then the origin is added back so the piece turns around it)
This should work (I got this idea from Wikipedia and rotation matrices: https://en.wikipedia.org/wiki/Transformation_matrix)
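A minimal sketch of that rotation in C++, following the formulas above (the struct and function names are just illustrative):

struct Block { int x, y; };

// Rotate p 90 degrees about the pivot using (dx, dy) -> (dy, -dx),
// i.e. the same formulas as above; whether this looks clockwise or
// counter-clockwise on screen depends on whether y grows up or down.
Block rotateAroundPivot(Block p, Block pivot)
{
    int dx = p.x - pivot.x;
    int dy = p.y - pivot.y;
    return { pivot.x + dy, pivot.y - dx };
}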

Inverse perspective transformation of a warped image

Iwillnotexist Idonotexist presented his code for image perspective transformation (rotations around 3 axes): link
I'm looking for a function (or math) to make an inverse perspective transformation.
Let's make an assumption, that my "input image" is a result of his warpImage() function, and all angles (theta, phi and gamma), scale and fovy are also known.
I'm looking for a function (or math) to compute the inverse transformation (the black border doesn't matter) to get the original image back.
How can I do this?
The basic idea is you need to find the inverse transformation. In the linked question they have F = P T R1 R2 where P is the projective transformation, T is a translation, and R1, R2 are two rotations.
Denote the inverse transformation by F*. We can write the inverse as F* = R2* R1* T* P*. Note the order changes. Three of these are easy: R1* is just another rotation but with the angle negated. So the first inverse rotation would be
        cos th   sin th   0   0
R1* =  -sin th   cos th   0   0
        0        0        1   0
        0        0        0   1
Note the signs on the two sin terms are reversed.
The inverse of a translation is just a translation in the opposite direction.
1 0 0 0
T*= 0 1 0 0
0 0 1 h
0 0 0 1
You can check these by calculating T* T, which should give the identity matrix.
The trickiest bit is the projective component. We have
cot(fv/2) 0 0 0
P = 0 cot(fv/2) 0 0
0 0 -(f+n)/(f-n) -2 f n / (f-n)
0 0 -1 0
The inverse of this is
tan(fv/2) 0 0 0
P*= 0 tan(fv/2) 0 0
0 0 0 -2
0 0 (n-f)/(f n) (f+n)/(f n)
Wolfram alpha inverse with v=fv
You then need to multiply these together in the reverse order to get the final matrix.
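In code the composition is just a product of the 4x4 matrices in that order; a minimal sketch with cv::Mat (the variable names are my own, and each matrix would be filled in from the formulas above):

// e.g. the inverse translation T* (h as in the forward transform):
Mat Tinv = (Mat_<double>(4, 4) <<
    1, 0, 0, 0,
    0, 1, 0, 0,
    0, 0, 1, h,
    0, 0, 0, 1);
// ... R1inv, R2inv, Pinv built the same way from the matrices above ...
Mat Finv = R2inv * R1inv * Tinv * Pinv;   // F* = R2* R1* T* P*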
I also had issues back-transforming my image.
You need to store the points
ptsInPt2f and ptsOutPt2f
which are computed in the 'warpMatrix' method.
To back-transform, simply use the same method
M = getPerspectiveTransform(ptsOutPt2f, ptsInPt2f);
but with reversed param order (output as first argument, input as second).
Afterwards a simple crop will get rid of all the black.
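Putting that together, a minimal sketch (ptsInPt2f and ptsOutPt2f come from the linked warpMatrix code; the other variable names are my own):

// warped: the already perspective-warped image
Mat Minv = getPerspectiveTransform(ptsOutPt2f, ptsInPt2f);   // reversed argument order
Mat restored;
warpPerspective(warped, restored, Minv, Size(origWidth, origHeight));
// 'restored' now holds the back-transformed image; crop it to remove the black border.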

OpenCV - get grey value from an image

Hello everybody, right now I'm trying to get the grey value of every pixel in an image.
What I mean by grey value is the white or black level of an image, let's say 0 for white and 1 for black. For example, for this image
the values I want would be like
0 0 0 0 0 0
0 1 1 1 0 0
0 0 1 1 0 0
0 0 1 1 0 0
0 0 1 1 0 0
0 0 1 1 0 0
0 0 1 1 0 0
0 0 0 0 0 0
Is this possible? If yes, how can I do it with OpenCV in C? Or if it's impossible with OpenCV, is there any other library that can do this?
What you ask is certainly possible, but how it can be done depends on a lot of things. If you use C++, on SO we generally expect you to use the C++ interface, which means you have a cv::Mat object and loaded the image with something like this (using namespace cv):
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp> // imread and CV_LOAD_IMAGE_GRAYSCALE (OpenCV 2.x)
Mat mat_gray = imread(path, CV_LOAD_IMAGE_GRAYSCALE);
or by
#include <opencv2/imgproc/imgproc.hpp> // cvtColor and CV_BGR2GRAY
Mat mat = imread(path); // and assuming it was originally a color image...
Mat mat_gray;
cvtColor(mat, mat_gray, CV_BGR2GRAY); // ...convert it to grayscale.
Now, if you just want to access pixel values one-by-one, you use _Tp& mat.at<_Tp>(int x,int y);. That is:
for (int x = 0; x < mat_gray.rows; ++x)
    for (int y = 0; y < mat_gray.cols; ++y)
        mat_gray.at<uchar>(x, y); // if mat.type == CV_8U
You can look up your type here, which you should use in place of uchar if the mat.type is other than CV_8U.
As for the pure C interface, you can check this answer. But if you use C++, you should definitely use the C++ interface.
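To get a 0/1 map like the one in the question, a minimal sketch (that darker pixels count as 1 and the cutoff value of 128 are my assumptions):

#include <iostream>

// Print 1 for "black" pixels and 0 for "white" ones, row by row.
for (int x = 0; x < mat_gray.rows; ++x)
{
    for (int y = 0; y < mat_gray.cols; ++y)
    {
        int value = (mat_gray.at<uchar>(x, y) < 128) ? 1 : 0;   // 128 is arbitrary
        std::cout << value << " ";
    }
    std::cout << std::endl;
}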