CGAL: Why is halfplane represented by six rays? - c++

I've just started playing with Nef polyhedra in the plane. The simple program below creates a halfplane, defined by the line y = 0, and then explores that halfplane with the CGAL Explorer.
#include <iostream>
#include <CGAL/Exact_integer.h>
#include <CGAL/Extended_cartesian.h>
#include <CGAL/Nef_polyhedron_2.h>

using Kernel = CGAL::Extended_cartesian<CGAL::Exact_integer>;
using Polyhedron = CGAL::Nef_polyhedron_2<Kernel>;
using Line = Polyhedron::Line;

using std::cout;
using std::endl;

int main()
{
    const Polyhedron p(Line(0, 1, 0), Polyhedron::INCLUDED);
    const auto ex = p.explorer();
    for (auto it = ex.vertices_begin(); it != ex.vertices_end(); ++it)
    {
        if (ex.is_standard(it))
        {
            cout << "Point: " << ex.point(it) << endl;
        }
        else
        {
            cout << "Ray: " << ex.ray(it) << endl;
        }
    }
}
The program output:
Ray: 0 0 -1 -1
Ray: 0 0 -1 0
Ray: 0 0 -1 1
Ray: 0 0 1 -1
Ray: 0 0 1 0
Ray: 0 0 1 1
Why these six rays?

From the documentation for the explorer:
By recursively composing binary and unary operations one can end with a very complex rectilinear structure. To explore that structure there is a data type Nef_polyhedron_2::Explorer that allows read-only exploration of the rectilinear structure.
Therefore the planar subdivision is bounded symbolically by an axis-parallel square box of infimaximal size centered at the origin of our coordinate system. All structures extending to infinity are pruned by the box. Lines and rays have symbolic endpoints on the box. Faces are circularly closed. Infimaximal here means that its geometric extend is always large enough (but finite for our intuition). Assume you approach the box with an affine point, then this point is always inside the box. The same holds for straight lines; they always intersect the box.
Assuming that these vertices are on the box, my best guess is this:
It's a square, so that's why you get the diagonal rays like (0, 0) -> (-1, 1) and (0, 0) -> (1, 1). I'm not an expert though.
Edit: drawing is upside-down, the halfplane is y >= 0, not y <= 0.

I'm answering my own question. According to these explanations from the CGAL online manual, each 2D polyhedron is bounded by an infinitely large frame, which is represented by four vertices at infinity. These boundary vertices have extended coordinates (+infinity, +infinity), (+infinity, -infinity), (-infinity, +infinity) and (-infinity, -infinity). Such non-standard vertices are represented in CGAL by rays: for example, the point (+infinity, -infinity) is stored as a ray starting at the origin (0,0) with direction (1, -1).
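With that convention, the six rays printed above correspond to the following extended points (the four frame corners plus the two endpoints of the line y = 0 on the frame):

Ray (0 0 -1 -1) -> corner (-infinity, -infinity)
Ray (0 0 -1 0)  -> point (-infinity, 0), left endpoint of the line y = 0
Ray (0 0 -1 1)  -> corner (-infinity, +infinity)
Ray (0 0 1 -1)  -> corner (+infinity, -infinity)
Ray (0 0 1 0)   -> point (+infinity, 0), right endpoint of the line y = 0
Ray (0 0 1 1)   -> corner (+infinity, +infinity)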
So, a polyhedron consisting of the single halfplane y >= 0 will have six non-standard vertices: four belonging to the frame, plus two describing the line y = 0. All its faces will look like this:
face 0, marked by 0
* no outer face cycle
face 1, marked by 0
* outer face cycle:
frame halfedge: (0 0 -1 0) => (0 0 -1 -1)
frame halfedge: (0 0 -1 -1) => (0 0 1 -1)
frame halfedge: (0 0 1 -1) => (0 0 1 0)
internal halfedge: (0 0 1 0) => (0 0 -1 0)
face 2, marked by 1
* outer face cycle:
frame halfedge: (0 0 -1 1) => (0 0 -1 0)
internal halfedge: (0 0 -1 0) => (0 0 1 0)
frame halfedge: (0 0 1 0) => (0 0 1 1)
frame halfedge: (0 0 1 1) => (0 0 -1 1)
Also please see Figure 17.3 in the CGAL online manual.

Related

Creating a view matrix manually OpenGL

I'm trying to create a view matrix for my program to be able to move and rotate the camera in OpenGL.
I have a camera struct that has the position and rotation vectors in it. From what I understood, to create the view matrix, you need to multiply the transform matrix with the rotation matrix to get the expected result.
So far I tried creating matrices for rotation and for transformation and multiply them like this:
> Transformation Matrix T =
1 0 0 -x
0 1 0 -y
0 0 1 -z
0 0 0 1
> Rotation Matrix Rx =
1 0 0 0
0 cos(-x) -sin(-x) 0
0 sin(-x) cos(-x) 0
0 0 0 1
> Rotation Matrix Ry =
cos(-y) 0 sin(-y) 0
0 1 0 0
-sin(-y) 0 cos(-y) 0
0 0 0 1
> Rotation Matrix Rz =
cos(-z) -sin(-z) 0 0
sin(-z) cos(-z) 0 0
0 0 1 0
0 0 0 1
View matrix = Rz * Ry * Rx * T
Notice that the values are negated, because if we want to move the camera to one side, the entire world is moving to the opposite side.
This solution seems to almost be working. The problem that I have is that when the camera is not at 0, 0, 0, if I rotate the camera, the position is changed. What I think is that if the camera is positioned at, let's say, 0, 0, -20 and I rotate the camera, the position should remain at 0, 0, -20, right?
I feel like I'm missing something but I can't figure out what. Any help?
Edit 1:
It's an assignment for university, so I can't use any built-in functions!
Edit 2:
I tried changing the order of the operations and putting the translation on the left side, so T * Rz * Ry * Rx, but then the models rotate around themselves, and not around the camera.
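For concreteness, here is a minimal sketch of the construction described in the question, using plain row-major 4x4 float arrays; the matMul/identity helpers and the radian angles are my own illustrative choices, not part of the original post:

#include <cmath>
#include <cstring>

// Row-major 4x4 matrix: element (row, col) lives at m[row * 4 + col].
using Mat4 = float[16];

// C = A * B (standard row-major matrix product)
void matMul(const Mat4 A, const Mat4 B, Mat4 C)
{
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
        {
            float s = 0.0f;
            for (int k = 0; k < 4; ++k)
                s += A[r * 4 + k] * B[k * 4 + c];
            C[r * 4 + c] = s;
        }
}

void identity(Mat4 M)
{
    std::memset(M, 0, sizeof(Mat4));
    M[0] = M[5] = M[10] = M[15] = 1.0f;
}

// View matrix as described above: V = Rz * Ry * Rx * T,
// with the camera position and rotation angles negated.
void viewMatrix(float px, float py, float pz,   // camera position
                float rx, float ry, float rz,   // camera rotation, in radians
                Mat4 V)
{
    Mat4 T, Rx, Ry, Rz, tmp1, tmp2;

    identity(T);
    T[3] = -px;  T[7] = -py;  T[11] = -pz;

    identity(Rx);
    Rx[5] = std::cos(-rx);  Rx[6]  = -std::sin(-rx);
    Rx[9] = std::sin(-rx);  Rx[10] =  std::cos(-rx);

    identity(Ry);
    Ry[0] =  std::cos(-ry); Ry[2]  = std::sin(-ry);
    Ry[8] = -std::sin(-ry); Ry[10] = std::cos(-ry);

    identity(Rz);
    Rz[0] = std::cos(-rz);  Rz[1] = -std::sin(-rz);
    Rz[4] = std::sin(-rz);  Rz[5] =  std::cos(-rz);

    matMul(Rx, T, tmp1);     // Rx * T
    matMul(Ry, tmp1, tmp2);  // Ry * Rx * T
    matMul(Rz, tmp2, V);     // Rz * Ry * Rx * T
}

Note that these matrices are row-major, so when uploading to OpenGL (which defaults to column-major) they need to be transposed, e.g. by passing GL_TRUE for the transpose parameter of glUniformMatrix4fv.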

Create random adjacency matrix with each node having a minimum degree of 'k'

I want to create a random adjacency matrix where each node is connected to 'k' other nodes. The graph represented by the adjacency matrix is undirected.
I started with 7 nodes in the adjacency matrix, and each node is supposed to be connected to at least 3 other nodes. So far I have been able to get this:
0 1 1 0 0 0 0
1 0 1 1 0 0 0
1 1 0 1 1 0 0
0 1 1 0 1 1 0
0 0 1 1 0 1 1
0 0 0 1 1 0 1
0 0 0 0 1 1 0
As can be seen from the matrix, the first and last row have less than three connections.
My implementation so far is:
for (int i = 0; i < 7; i++) {
    for (int j = i + 1; j < 7; j++) {
        if (i == j) {
            topo[i][j] = 0;
        }
        else {
            for (int k = j; k < i + 3 && k < 7; k++) {
                int connectivity = 0;
                while (connectivity < 3) {
                    if (topo[i][k] != 1 && topo[k][i] != 1) {
                        topo[i][k] = 1;
                        topo[k][i] = 1;
                        connectivity++;
                    }
                    else {
                        connectivity++;
                    }
                }
            }
        }
    }
}
I assume we are talking about directed graphs here.
Let's say that v is the number of vertices (nodes), and that d of a vertex A (its degree, your k) is the number of edges drawn from A to other nodes.
If you look closer, you can see that the d value of the k-th node is the number of 1s in the k-th row. So the only thing to do is to draw a vector of 0s and 1s with v-1 elements (we don't connect a node with itself) for each row.
You can draw a random vector of zeros and ones by writing d ones into it and then randomly permuting it, as in the sketch below.
Note for undirected graphs: you can adapt this algorithm to the upper-right triangle of the matrix, dynamically mirroring known values into the lower-left one. Then, for each row from top to bottom, you apply the algorithm to draw the rest of the row.
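A minimal sketch of the row-drawing step described above, for the directed case (the undirected, triangle-based variant is left as described); the function and variable names are illustrative:

#include <algorithm>
#include <iostream>
#include <random>
#include <vector>

// Draw a 0/1 vector of length len containing exactly d ones at random
// positions, by writing d ones and then randomly permuting the vector.
std::vector<int> random_row(int len, int d, std::mt19937& rng)
{
    std::vector<int> row(len, 0);
    std::fill(row.begin(), row.begin() + d, 1);
    std::shuffle(row.begin(), row.end(), rng);
    return row;
}

int main()
{
    std::mt19937 rng(std::random_device{}());
    const int v = 7, d = 3;

    // Directed case: each row gets d ones among the v-1 possible targets;
    // the diagonal stays 0 because a node is not connected to itself.
    for (int i = 0; i < v; ++i)
    {
        std::vector<int> targets = random_row(v - 1, d, rng);
        std::vector<int> row(v, 0);
        for (int j = 0, t = 0; j < v; ++j)
            if (j != i)
                row[j] = targets[t++];

        for (int x : row)
            std::cout << x << ' ';
        std::cout << '\n';
    }
}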

How to create circular mask for Mat object in OpenCV / C++?

My goal is to create a circular mask on a Mat object, so e.g. for a Mat looking like this:
0 0 0 0 0
0 0 0 0 0
0 0 0 0 0
0 0 0 0 0
0 0 0 0 0
...modify it such that I obtain a "circular shape" of 1s within it, e.g.
0 0 0 0 0
0 0 1 0 0
0 1 1 1 0
0 0 1 0 0
0 0 0 0 0
I am currently using the following code:
typedef struct {
    double radius;
    Point center;
} Circle;

...

for (Circle c : circles) {
    // get the circle's bounding rect
    Rect boundingRect(c.center.x - c.radius, c.center.y - c.radius, c.radius * 2, c.radius * 2);
    // obtain the image ROI:
    Mat circleROI(stainMask_, boundingRect);
    int radius = floor(c.radius);
    circle(circleROI, c.center, radius, Scalar::all(1), 0);
}
The problem is that after my call to circle, there is at most one field in circleROI set to 1... According to my understanding, this code should work, because circle is supposed to use the information about the center and the radius to modify circleROI such that all points within the area of the circle are set to 1... does anyone have an explanation of what I am doing wrong? Am I taking the right approach to the problem while the actual issue is somewhere else (this is very much possible too, since I am a novice to C++ and OpenCV)?
Note that I also tried to modify the last parameter in the circle call (which is the thickness of the circle outline) to 1 and -1, without any effect.
It is because you're filling your circleROI using the circle's coordinates in the big Mat. The circle's coordinates inside circleROI should be relative to circleROI, which in your case is: new_center = (c.radius, c.radius), new_radius = c.radius.
Here is a snippet for the loop:
for (Circle c : circles) {
    // get the circle's bounding rect
    Rect boundingRect(c.center.x - c.radius, c.center.y - c.radius, c.radius * 2 + 1, c.radius * 2 + 1);
    // obtain the image ROI:
    Mat circleROI(stainMask_, boundingRect);
    // draw the circle
    circle(circleROI, Point(c.radius, c.radius), c.radius, Scalar::all(1), -1);
}
Take a look at: getStructuringElement
http://docs.opencv.org/modules/imgproc/doc/filtering.html
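As a minimal sketch of the getStructuringElement approach on the 5x5 example from the question (my own illustration; the variable names are arbitrary): getStructuringElement(MORPH_ELLIPSE, ...) builds a filled disk of 1s that can simply be copied into the ROI around the center.

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

using namespace cv;

int main()
{
    Mat mask = Mat::zeros(5, 5, CV_8UC1);   // the 5x5 example from the question

    const int radius = 1;
    const Point center(2, 2);

    // a (2*radius+1) x (2*radius+1) elliptical kernel filled with 1s
    Mat disk = getStructuringElement(MORPH_ELLIPSE, Size(2 * radius + 1, 2 * radius + 1));

    // copy the disk into the region of the mask around the center
    Rect roi(center.x - radius, center.y - radius, 2 * radius + 1, 2 * radius + 1);
    disk.copyTo(mask(roi));

    return 0;
}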

How to create an undirected graph out of an adjacency matrix?

Hello, everywhere there is an explanation by drawings of how to create a graph out of an adjacency matrix. However, I need simple pseudocode or an algorithm for that... I know how to draw it out of the adjacency matrix and don't know why nobody anywhere explains how to actually put it in code. I don't mean actual code, but at least an algorithm... Many say: a 1 means there is an edge, I know that. I have created the adjacency matrix and don't know how to transfer it to a graph. My vertices don't have names, they are just indexes of the matrix. For example, 1-9 are the "names" in my matrix:
  1 2 3 4 5 6 7 8 9
1 0 1 0 0 1 0 0 0 0
2 1 0 1 0 0 0 0 0 0
3 0 1 0 1 0 0 0 0 0
4 0 0 1 0 0 1 0 0 0
5 1 0 0 0 0 0 1 0 0
6 0 0 0 1 0 0 0 0 1
7 0 0 0 0 1 0 0 1 0
8 0 0 0 0 0 0 1 0 0
9 0 0 0 0 0 1 0 0 0
That was originally a maze... I have to mark row 1, col 4 as the start and row 7, col 8 as the end...
Nobody ever told me how to implement a graph out of a matrix (without a pen) :P
Thanks
Nature of symmetry
The adjacency matrix is a representation of a graph. For an undirected graph, the matrix is symmetrical: if there is an edge from vertex i to vertex j, there must also be an edge from vertex j to vertex i. That is actually the same edge.
*
*
* A'
A *
*
*
Algorithm
Noticing this, you can implement your algorithm as simply as:
void drawGraph(vertices[nRows][nCols])
{
    for (unsigned int i = 0; i < nRows; ++i)
    {
        // only the upper triangle (j >= i) needs to be visited
        for (unsigned int j = i; j < nCols; ++j)
        {
            if (vertices[i][j] == 1)
                drawLine(i, j);
        }
    }
}
You can convert a graph from an adjacency matrix representation to a node-based representation like this:
#include <iostream>
#include <vector>
using namespace std;

const int adjmatrix[9][9] = {
    {0,1,0,0,1,0,0,0,0},
    {1,0,1,0,0,0,0,0,0},
    {0,1,0,1,0,0,0,0,0},
    {0,0,1,0,0,1,0,0,0},
    {1,0,0,0,0,0,1,0,0},
    {0,0,0,1,0,0,0,0,1},
    {0,0,0,0,1,0,0,1,0},
    {0,0,0,0,0,0,1,0,0},
    {0,0,0,0,0,1,0,0,0}
};

struct Node {
    vector<Node*> neighbours;
    /* optional additional node information */
};

int main (int argc, char const *argv[])
{
    /* initialize nodes */
    vector<Node> nodes(9);

    /* add pointers to neighbouring nodes */
    int i, j;
    for (i = 0; i < 9; ++i) {
        for (j = 0; j < 9; ++j) {
            if (adjmatrix[i][j] == 0) continue;
            nodes[i].neighbours.push_back(&nodes[j]);
        }
    }

    /* print number of neighbours */
    for (i = 0; i < 9; ++i) {
        cout << "Node " << i
             << " has " << nodes[i].neighbours.size() << " outbound edges." << endl;
    }
    return 0;
}
Here, the graph is represented as an array of nodes with pointers to reachable neighbouring nodes. After setting up the nodes and their neighbour pointers you use this data structure to perform the graph algorithms you want, in this (trivial) example print out the number of outbound directed edges each node has.
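Since the original question mentions a maze with a start and an end cell, here is a minimal sketch (my own addition, not part of the answer above) of how a breadth-first search over this Node structure could test whether one node is reachable from another:

#include <queue>
#include <unordered_set>
#include <vector>

struct Node {
    std::vector<Node*> neighbours;   // same layout as in the answer above
};

// Returns true if goal can be reached from start by following neighbour pointers.
bool reachable(Node* start, Node* goal)
{
    std::queue<Node*> frontier;
    std::unordered_set<Node*> visited;
    frontier.push(start);
    visited.insert(start);
    while (!frontier.empty())
    {
        Node* current = frontier.front();
        frontier.pop();
        if (current == goal)
            return true;
        for (Node* next : current->neighbours)
        {
            if (visited.insert(next).second)   // .second is false if already visited
                frontier.push(next);
        }
    }
    return false;
}

With the nodes vector from the code above, this could be called as, e.g., reachable(&nodes[0], &nodes[7]).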

implementing erosion, dilation in C, C++

I have theoretical understanding of how dilation in binary image is done.
AFAIK, if my SE (structuring element) is this
0 1
1 1.
where . represents the centre, and my image (binary) is this
0 0 0 0 0
0 1 1 0 0
0 1 0 0 0
0 1 0 0 0
0 0 0 0 0
so the result of dilation is
0 1 1 0 0
1 1 1 0 0
1 1 0 0 0
1 1 0 0 0
0 0 0 0 0
I got the above result by shifting the image by 0, by +1 (up), and by -1 (left), according to the SE, and taking the union of all three shifts.
Now, I need to figure out how to implement this in C, C++.
I am not sure how to begin and how to take the union of sets.
I thought of representing original image,three shifted images and final image obtained by taking union; all using matrix.
Is there any place where I can get some sample solution to start with, or any ideas on how to proceed?
Thanks.
There are tons of sample implementations out there.. Google is your friend :)
EDIT
The following is pseudo-code for the process (very similar to doing a convolution in 2D). I'm sure there are more clever ways of doing it:
// grayscale image, binary mask
void morph(inImage, outImage, kernel, type) {
    // half size of the kernel, kernel size is n*n (easier if n is odd)
    sz = (kernel.n - 1) / 2;

    for X in inImage.rows {
        for Y in inImage.cols {
            if ( isOnBoundary(X, Y, inImage, sz) ) {
                // check pixel (X,Y) for boundary cases and deal with it (copy pixel as is)
                // must consider half size of the kernel
                val = inImage(X, Y);    // quick fix
            }
            else {
                list = [];

                // get the neighborhood of this pixel (X,Y)
                for I in kernel.n {
                    for J in kernel.n {
                        if ( kernel(I, J) == 1 ) {
                            list.add( inImage(X+I-sz, Y+J-sz) );
                        }
                    }
                }

                if type == dilation {
                    // dilation: set to one if any 1 is present, zero otherwise
                    val = max(list);
                } else if type == erosion {
                    // erosion: set to zero if any 0 is present, one otherwise
                    val = min(list);
                }
            }

            // set output image pixel
            outImage(X, Y) = val;
        }
    }
}
The above code is based on this tutorial (check the source code at the end of the page).
EDIT2:
list.add( inImage(X+I-sz, Y+J-sz) );
The idea is that we want to superimpose the kernel mask (of size n x n), centered at sz (half the mask size), on the current image pixel located at (X,Y), and then just get the intensities of the pixels where the mask value is one (we are adding them to a list). Once we have extracted all the neighbours of that pixel, we set the output image pixel to the maximum of that list (max intensity) for dilation, and to the minimum for erosion (of course this only works for grayscale images and a binary mask).
The indices of both X/Y and I/J in the statement above are assumed to start from 0.
If you prefer, you can always rewrite the indices of I/J in terms of half the size of the mask (from -sz to +sz) with a small change (the way the tutorial I linked to does it)...
Example:
Consider this 3x3 kernel mask placed and centered on pixel (X,Y), and see how we traverse the neighborhood around it:
--------------------
| | | | sz = 1;
-------------------- for (I=0 ; I<3 ; ++I)
| | (X,Y) | | for (J=0 ; J<3 ; ++J)
-------------------- vect.push_back( inImage.getPixel(X+I-sz, Y+J-sz) );
| | | |
--------------------
Perhaps a better way to look at it is how to produce an output pixel of the dilation. For the corresponding pixel in the image, align the structuring element such that the origin of the structuring element is at that image pixel. If there is any overlap, set the dilation output pixel at that location to 1, otherwise set it to 0.
So this can be done by simply looping over each pixel in the image and testing whether or not the properly shifted structuring element overlaps with the image. This means you'll probably have 4 nested loops: x img, y img, x se, y se. So for each image pixel, you loop over the pixels of the structuring element and see if there is any overlap. This may not be the most efficient algorithm, but it is probably the most straightforward.
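As a rough sketch of that four-nested-loop overlap test (my own illustration for a binary image stored as 0/1 ints; the SE origin is passed in explicitly, and the formal reflection of the SE is ignored, exactly as in the description above):

#include <vector>

using Image = std::vector<std::vector<int>>;   // 0/1 pixels

// Binary dilation by the overlap test: for each image pixel, place the SE
// origin there and set the output to 1 if any SE pixel equal to 1 lands on
// an image pixel equal to 1.
Image dilate(const Image& img, const Image& se, int origin_r, int origin_c)
{
    const int rows = img.size(), cols = img[0].size();
    const int se_rows = se.size(), se_cols = se[0].size();
    Image out(rows, std::vector<int>(cols, 0));

    for (int y = 0; y < rows; ++y)
        for (int x = 0; x < cols; ++x)
            for (int i = 0; i < se_rows && !out[y][x]; ++i)
                for (int j = 0; j < se_cols && !out[y][x]; ++j)
                {
                    const int yy = y + i - origin_r;   // image row hit by SE cell (i, j)
                    const int xx = x + j - origin_c;   // image column hit by SE cell (i, j)
                    if (se[i][j] == 1 &&
                        yy >= 0 && yy < rows && xx >= 0 && xx < cols &&
                        img[yy][xx] == 1)
                        out[y][x] = 1;
                }

    return out;
}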
Also, I think your example is incorrect. The dilation depends on the origin of the structuring element. If the origin is...
at the top left zero: you need to shift the image (-1,-1), (-1,0), and (0,-1) giving:
1 1 1 0 0
1 1 0 0 0
1 1 0 0 0
1 0 0 0 0
0 0 0 0 0
at the bottom right: you need to shift the image (0,0), (1,0), and (0,1) giving:
0 0 0 0 0
0 1 1 1 0
0 1 1 0 0
0 1 1 0 0
0 1 0 0 0
MATLAB uses floor((size(SE)+1)/2) as the origin of the SE so in this case, it will use the top left pixel of the SE. You can verify this using the imdilate MATLAB function.
OpenCV: see the Erosion and Dilation example in the OpenCV documentation.
#include <stdlib.h>
#include <string.h>

#define MAX_N 64   /* maximum order of the square image matrix (input order must be at most MAX_N - 2) */

/* structure of the image variable;
 * n stores the order of the square matrix */
typedef struct image {
    int mat[MAX_N][MAX_N];
    int n;
} image;

/* function receives the image "to_dilate" and returns "dilated";
 * structuring element predefined (centred):
 * 0 1 0
 * 1 1 1
 * 0 1 0
 */
image* dilate(image* to_dilate)
{
    int i, j;
    int offset = 0;
    image* dilated = (image*)malloc(sizeof(image));
    memset(dilated->mat, 0, sizeof(dilated->mat));

    /* check whether there are any 1's on the border;
     * if so, the dilated image grows by one row/column on each side */
    for (i = 0; i < to_dilate->n; i++)
    {
        if (to_dilate->mat[0][i] == 1 || to_dilate->mat[i][0] == 1 ||
            to_dilate->mat[to_dilate->n - 1][i] == 1 ||
            to_dilate->mat[i][to_dilate->n - 1] == 1)
        {
            offset = 1;
            break;
        }
    }

    /* size of the dilated image initialized */
    dilated->n = to_dilate->n + 2 * offset;

    /* dilate by checking every element of to_dilate and filling dilated;
     * offset shifts the indices when the order has grown */
    for (i = 0; i < to_dilate->n; i++)
    {
        for (j = 0; j < to_dilate->n; j++)
        {
            if (to_dilate->mat[i][j] == 1)
            {
                dilated->mat[i + offset    ][j + offset    ] = 1;
                dilated->mat[i + offset - 1][j + offset    ] = 1;
                dilated->mat[i + offset    ][j + offset - 1] = 1;
                dilated->mat[i + offset + 1][j + offset    ] = 1;
                dilated->mat[i + offset    ][j + offset + 1] = 1;
            }
        }
    }

    /* dilated stores the dilated binary image */
    return dilated;
}
/* end of dilation */
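For reference, a minimal usage sketch for the function above (it assumes the struct and dilate() are in the same file; print_image is a hypothetical helper, not part of the original answer):

#include <stdio.h>
#include <string.h>

/* hypothetical helper: print the binary image row by row */
void print_image(const image* img)
{
    int i, j;
    for (i = 0; i < img->n; i++)
    {
        for (j = 0; j < img->n; j++)
            printf("%d ", img->mat[i][j]);
        printf("\n");
    }
}

int main(void)
{
    image in;
    memset(in.mat, 0, sizeof(in.mat));
    in.n = 5;

    /* the 5x5 example image from the question */
    in.mat[1][1] = 1; in.mat[1][2] = 1;
    in.mat[2][1] = 1;
    in.mat[3][1] = 1;

    image* out = dilate(&in);
    print_image(out);
    free(out);
    return 0;
}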