CGAL Alpha_shape_2 extract boundary vertices - C++

I would appreciate it if you could assist me with Alpha_shape_2; I'm new to CGAL.
I'm trying to extract boundaries from 2D data.
Alpha_shape_2 alpha(lp.begin(), lp.end(), FT(1000), Alpha_shape_2::GENERAL);
The Alpha_shape_2 call works perfectly, but I'm confused about how to extract only the boundary vertices.
I would much appreciate an example.

Here is how to get the points, but they are not sorted:
std::vector<Point> result;
for(Alpha_shape_2::Alpha_shape_vertices_iterator it = alpha_shape.alpha_shape_vertices_begin();
    it != alpha_shape.alpha_shape_vertices_end();
    ++it){
  Alpha_shape_2::Vertex_handle handle = *it;
  Point p = handle->point();
  result.push_back(p);
}
You need to start by reading the manual on the official website to understand some of the concepts. The simple examples in CGAL do not have much explanation or functionality, so you need to get familiar with the actual structure of CGAL.
This is how to get the segments from the edges. The segments are not sorted either; you will need to do that yourself.
for(Alpha_shape_2::Alpha_shape_edges_iterator it = alpha_shape.alpha_shape_edges_begin();
    it != alpha_shape.alpha_shape_edges_end();
    ++it){
  Alpha_shape_2::Segment segment = alpha_shape.segment(*it);
  Point p1 = segment.vertex(0);
  Point p2 = segment.vertex(1);
  // here you get p1 and p2 of a segment that is part of the shape
  .....
}
You will get something like this:
That is after my sort function (sorry, I can't share it, but it's not complicated to write):
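That said, here is a minimal sketch of one way such a sort can work (not my actual code; it assumes a single closed boundary, consistently oriented segments, and that you first collect the segments from the edge loop above into a std::vector):
#include <map>
#include <vector>

std::vector<Point> order_boundary(const std::vector<Alpha_shape_2::Segment>& segments)
{
    std::map<Point, Point> next; // maps each segment source to its target
    for (const Alpha_shape_2::Segment& s : segments)
        next[s.vertex(0)] = s.vertex(1);

    std::vector<Point> ordered;
    Point current = next.begin()->first; // start anywhere on the loop
    for (std::size_t i = 0; i < next.size(); ++i) {
        ordered.push_back(current);
        current = next.at(current); // follow the chain to the next vertex
    }
    return ordered;
}
If the segments are not consistently oriented, or if there are several boundary loops, you would need to store both directions and walk each loop separately.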
UPDATE
I found this source; maybe it will be helpful.


How to correctly format input and resize output data while using a TensorRT engine?

I'm trying to implement a deep learning model in the TensorRT runtime. The model conversion step went quite well and I'm fairly confident about it.
The two parts I'm currently struggling with are copying data from host to device (e.g. from OpenCV to TensorRT) and getting the right output shape so I can read the right data. So my questions are:
How does the shape of the input dims relate to the memory buffer? What is the difference when the model input dims are NCHW versus NHWC? When I read an OpenCV image it is NHWC, and the model input is also NHWC; do I have to rearrange the buffer data, and if so, what is the actual consecutive memory layout I have to produce? Or, simply, what format or sequence of data is the engine expecting?
About the output (assuming the input is correctly buffered): how do I get the right result shape for each task (detection, classification, etc.)?
E.g. an array or something similar to what I would get when working with Python.
I have read the Nvidia docs and they are not beginner-friendly at all.
// Let's say I have a model with a dynamic input shape in NHWC format.
auto input_dims = nvinfer1::Dims4{1, 386, 342, 3}; // using fixed H, W for testing
context->setBindingDimensions(input_idx, input_dims);
auto input_size = getMemorySize(input_dims, sizeof(float));
// How do I format an OpenCV Mat to match these dims, and if I encounter a new input dim format, how do I adapt to that?
The expected output dims are something like (1, 32, 53, 8), for example. The output buffer is just a pointer, and I don't know the order of the data needed to reconstruct the expected array shape.
// Run TensorRT inference
void* bindings[] = {input_mem, output_mem};
bool status = context->enqueueV2(bindings, stream, nullptr);
if (!status)
{
    std::cout << "[ERROR] TensorRT inference failed" << std::endl;
    return false;
}
auto output_buffer = std::unique_ptr<int[]>{new int[output_size]};
if (cudaMemcpyAsync(output_buffer.get(), output_mem, output_size, cudaMemcpyDeviceToHost, stream) != cudaSuccess)
{
    std::cout << "ERROR: CUDA memory copy of output failed, size = " << output_size << " bytes" << std::endl;
    return false;
}
cudaStreamSynchronize(stream);
// How do I use this output_buffer to form the right output shape, (1, 32, 53, 8) in this case?
Could you please edit your question and tell us which model you're using, if it's a commonly known NN, perhaps one we can download to test locally?
That said, here is an answer, since most of it does not depend on the model (even though knowing it would help):
How does the shape of the input dims relate to the memory buffer?
If the input is NxCxHxW, you need to allocate N*C*H*W*sizeof(float) memory for that on your CPU and GPU. To be more precise, you need to allocate space on GPU for all the bindings and on CPU for only input and output bindings.
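As a rough illustration (the sizes and names here are placeholders, not taken from your model), allocating for a 1x3x386x342 float input could look like this:
#include <cuda_runtime_api.h>
#include <vector>

const int N = 1, C = 3, H = 386, W = 342;              // example dimensions; the layout only changes the order, not the total size
const size_t input_bytes = size_t(N) * C * H * W * sizeof(float);

std::vector<float> host_input(size_t(N) * C * H * W);  // CPU-side staging buffer
void* device_input = nullptr;
cudaMalloc(&device_input, input_bytes);                // GPU-side binding memory
// ... fill host_input with preprocessed pixels, then:
cudaMemcpy(device_input, host_input.data(), input_bytes, cudaMemcpyHostToDevice);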
When I read an OpenCV image it is NHWC, and the model input is also NHWC; do I have to rearrange the buffer data?
No, you do not have to rearrange the buffer data. If you did have to convert between NHWC and NCHW, you could check this or google 'opencv NHWC to NCHW'.
Full working code example here, especially this function.
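For reference, one common way to turn an OpenCV HWC image into a contiguous NCHW float buffer is cv::dnn::blobFromImage; a rough sketch (the file name and target size are placeholders):
#include <opencv2/opencv.hpp>
#include <opencv2/dnn.hpp>

cv::Mat image = cv::imread("input.jpg");                    // HWC, BGR, 8-bit
cv::Mat blob  = cv::dnn::blobFromImage(image, 1.0 / 255.0,  // scale pixels to [0,1]
                                       cv::Size(342, 386),  // target W, H
                                       cv::Scalar(), true); // swapRB: BGR -> RGB
// blob is a 4D NCHW float Mat; blob.ptr<float>(0) is the contiguous buffer to copy to the GPU.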
Or, simply, what format or sequence of data is the engine expecting?
This depends on how the neural network was trained. In general you should know exactly what kind of preprocessing and image data format were used to train the NN, and if possible you should use the same libraries to load and process the images. It's an open problem in ML: if you try to replicate the results of some paper and use their models, but they haven't open-sourced the preprocessing, you might get worse results. In the "worst" case you can implement both NHWC and NCHW and test which of them works.
About the output (assuming the input is correctly buffered): how do I get the right result shape for each task (detection, classification, etc.)? E.g. an array or something similar to what I would get when working with Python.
Answering this properly would require knowing which NNs you are referring to, but I myself do the following:
Load the TensorRT .engine file in my code like this and deserialize it like this.
Print the bindings like this.
Then I know the size of the input binding or bindings if there are several inputs, and the size of the output binding or bindings if there are several outputs.
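A rough sketch of what that printout can look like with the TensorRT 7/8-era binding API (not tied to any particular model):
#include <NvInfer.h>
#include <iostream>

void printBindings(const nvinfer1::ICudaEngine& engine)
{
    for (int i = 0; i < engine.getNbBindings(); ++i) {
        nvinfer1::Dims d = engine.getBindingDimensions(i);
        std::cout << (engine.bindingIsInput(i) ? "input  " : "output ")
                  << engine.getBindingName(i) << ": (";
        for (int j = 0; j < d.nbDims; ++j)
            std::cout << d.d[j] << (j + 1 < d.nbDims ? ", " : ")");
        std::cout << std::endl;
    }
}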
This way you know the right result shape for each task. I hope this answered your question. If not, please add detailed comments and edit your post to be more precise. Thank you.
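Once you know the output shape, reading the flat buffer is just row-major index arithmetic; a sketch for the (1, 32, 53, 8) example (the dimension names are placeholders for whatever your model's axes mean):
#include <vector>

// element (0, d1, d2, d3) of a contiguous row-major (1, 32, 53, 8) buffer
float at(const std::vector<float>& out, int d1, int d2, int d3)
{
    const int D2 = 53, D3 = 8;  // trailing dimensions of the shape
    return out[(static_cast<size_t>(d1) * D2 + d2) * D3 + d3];
}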
I have read the Nvidia docs and they are not beginner-friendly at all.
Yes, I agree. You're better off searching for TensorRT C++ (or Python) repositories on GitHub and studying their code. Have you seen the TensorRT samples? It doesn't really take many lines of code to implement TensorRT inference.

Parse Individual Curves from General_polygon_set_2 in CGAL

To start, I want to thank everyone who has helped me so far with previous problems I have had while working through the CGAL library; it is greatly appreciated.
Background on myself: I am still very new to C++ and my coding experience is in MATLAB, so there are a lot of concepts that I am learning very quickly and that are therefore very new to me, so please excuse any erroneous language I may use with regard to C++.
The Problem:
I have recently written some code that finds the Minkowski sum of a polyline and a circle (i.e., the buffer of a polyline) using the code found in the documentation on Boolean Set Operations on General Polygons.
Here, a General_polygon_set_2 concept is used in the output, and if the output code from the example above is used, I get the following output for a Polygon_with_holes_2 class:
48 [775.718 -206.547 --> 769.134 -157.991] (769 -157 1 1) [769.134 -157.991 --> 770 -157] (769 -157 1 1) [770 -157 --> 768.866 -156.009] [768.866 -156.009 --> 762.282 -107.453] [762.282 -107.453 --> 703.282 -115.453] [703.282 -115.453 --> 708.072 -150.778] ...
7 15 [549.239 -193.612 --> 569.403 -216.422] ... 3 [456.756 -657.812 --> 657.930 908.153] ...
Here, if I understand correctly, the first integer refers to the number of vertices in the .outer_boundary(), followed by descriptions of the curves for each "edge" of the general polygon. In my problem, the outputs will only consist of line segments and circular arcs.
Linear: [775.718 -206.547 --> 769.134 -157.991]
Circular Arc (x-monotone): (769 -157 1 1) [769.134 -157.991 --> 770 -157]
The linear element is simple: go from this x-y coordinate to the other one along a line. The circular arc is a little different: it says to use the circle described by the arguments in the parentheses () to go from one x-y coordinate to the other one contained in the brackets []. The arguments of the circle are (x, y, radius, orientation).
Next, since we have holes, after the .outer_boundary() has been written out, two more integers are displayed. The first states the number of holes, the second states the number of vertices in the first hole, followed by the vertices of that hole. Once that hole is written out, another integer gives the number of vertices in the next hole, and this continues for all of the holes, completing the description of the polygon.
So with that, my current problem is parsing out each individual curve one at a time so that I can do operations on them.
I have the following functions from the documentation to work with:
.outer_boundary(): returns the general polygon that represents the outer boundary.
.holes_begin(): returns the begin iterator of the holes.
.holes_end(): returns the past-the-end iterator of the holes.
So my thought is to break the General_polygon_set_2 down into General_polygon_2, then break that down into the .outer_boundary() and the different holes. Finally, for each set of curves, break those down into individual curves.
I am not really sure how to go about this, I just know that I need individual curve data so I can do my own operations on them. Any help, will be, as always, greatly appreciated!
Note: I actually deleted this post after reading through the arrangements documentation, thinking that the answer was too obvious, but after some time I still really do not see how to pull this information out properly. I think the biggest issue is my lacking knowledge of C++. Sorry for the noob-ish question.
Solution in Progress:
list<Polygon_with_holes_2> res;
S.polygons_with_holes(back_inserter(res));
list<Polygon_with_holes_2>::iterator i = res.begin();
Polygon_with_holes_2 mink = *i;
auto minkOuter = mink.outer_boundary();
cout << minkOuter << endl;
int numHoles = std::distance(mink.holes_begin(), mink.holes_end());
cout << numHoles << endl;
Now I am working on isolating the holes, followed by breaking those down into each individual curve.
The doc here states that the value_type of a Hole_const_iterator is a General_polygon_2, which means that you can iterate over all the holes using holes_begin() and holes_end(), as you thought. To do that, use the following syntax:
for(auto h_it = mink.holes_begin(); h_it != mink.holes_end(); ++h_it)
{
//here h_it is an iterator with value type General_polygon_2, so *h_it is the polygon describing a hole. Every iteration of this loop gives you another hole.
}
Then, you can iterate the curves of each polygon with curves_begin() and curves_end() the same way.
So to iterate each curve of a polygon_with_holes:
for(auto h_it = mink.holes_begin(); h_it != mink.holes_end(); ++h_it)
{
    for(auto curve_it = h_it->curves_begin(); curve_it != h_it->curves_end(); ++curve_it)
    {
        //*curve_it gives you a curve.
    }
}
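To then tell straight edges from circular arcs inside such a loop: with the Gps_circle_segment_traits_2 used in the Minkowski sum / buffer example, each x-monotone curve exposes predicates for this. A rough sketch (treat the exact member names as an assumption and check them against your traits class):
for (auto c_it = minkOuter.curves_begin(); c_it != minkOuter.curves_end(); ++c_it)
{
    if (c_it->is_linear()) {
        // straight edge: just the two endpoints
        std::cout << "segment: " << c_it->source() << " -> " << c_it->target() << std::endl;
    } else if (c_it->is_circular()) {
        // circular arc: endpoints plus the supporting circle (center and squared radius)
        std::cout << "arc on circle centered at " << c_it->supporting_circle().center()
                  << ": " << c_it->source() << " -> " << c_it->target() << std::endl;
    }
}
The same loop works on *h_it for each hole.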

Problems with implementing approximate (feature-based) Q-learning

I am new to reinforcement learning. I recently learned about approximate Q-learning, or feature-based Q-learning, in which you describe states by features to save space. I have tried to implement this in a simple grid game. Here, the agent is supposed to learn not to go into a firepit (signaled by an f) and instead to eat up as many dots as possible. Here is the grid used:
...A
.f.f
.f.f
...f
Here A signals the agent's starting location. When implementing this, I set up two features. One was 1/((distance to closest dot)^2), and the other was (distance to firepit) + 1. When the agent enters a firepit, the program returns a reward of -100. If it goes to a non-firepit position that was already visited (and thus there is no dot to be eaten), the reward is -50. If it goes to an unvisited dot, the reward is +500. In the above grid, no matter what the initial weights are, the program never learns the correct weight values. Specifically, in the output, the first training session gets a score (how many dots it ate) of 3, but for all other training sessions the score is just 1, and the weights converge to an incorrect value of -125 for weight 1 (distance to firepit) and 25 for weight 2 (distance to unvisited dot). Is there something specifically wrong with my code, or is my understanding of approximate Q-learning incorrect?
I have tried to play around with the rewards that the environment is giving and also with the initial weights. None of these have fixed the problem.
Here is the link to the entire program: https://repl.it/repls/WrongCheeryInterface
Here is what is going on in the main loop:
while(points != NUMPOINTS){
    bool playerDied = false;
    if(!start){
        if(!atFirepit()){
            r = 0;
            if(visited[player.x][player.y] == 0){
                points += 1;
                r += 500;
            }else{
                r += -50;
            }
        }else{
            playerDied = true;
            r = -100;
        }
    }
    //Update visited
    visited[player.x][player.y] = 1;
    if(!start){
        //This is based off the q learning update formula
        pairPoint qAndA = getMaxQAndAction();
        double maxQValue = qAndA.q;
        double sample = r;
        if(!playerDied && points != NUMPOINTS)
            sample = r + (gamma2 * maxQValue);
        double diff = sample - qVal;
        updateWeights(player, diff);
    }
    // checking end game condition
    if(playerDied || points == NUMPOINTS) break;
    pairPoint qAndA = getMaxQAndAction();
    qVal = qAndA.q;
    int bestAction = qAndA.a;
    //update player and q value
    player.x += dx[bestAction];
    player.y += dy[bestAction];
    start = false;
}
I would expect that both weights would still be positive, but one of them is negative (the one based on the distance to the firepit).
I also expected the program to learn over time that it is bad to enter a firepit and also bad, but not as bad, to go back to an already visited square.
Probably not the answer you want to hear, but:
Have you tried to implement the simpler tabular Q-learning before approximate Q-learning? In your setting, with a few states and actions, it will work perfectly. If you are learning, I strongly recommend you start with the simpler cases in order to get a better understanding/intuition of how reinforcement learning works.
Do you know the implications of using approximators instead of learning the exact Q function? In some cases, due to the complexity of the problem (e.g., when the state space is continuous), you should approximate the Q function (or the policy, depending on the algorithm), but this may introduce some convergence problems. Additionally, in your case, you are trying to hand-pick some features, which usually requires deep knowledge of the problem (i.e., the environment) and the learning algorithm.
Do you understand the meaning of the hyperparameters alpha and gamma? You cannot choose them randomly. They are sometimes critical to obtaining the expected results, depending heavily on the problem and the learning algorithm. In your case, looking at the convergence curve of your weights, it's pretty clear that you are using a value of alpha that is too high. As you pointed out, after the first training session your weights remain constant.
Therefore, practical recommendations:
Be sure to solve your grid game with a tabular Q-learning algorithm before trying more complex things (a minimal sketch of the tabular update follows this list).
Experiment with different values of alpha, gamma and rewards.
Read more in depth about approximate RL. A very good and accessible book (starting from zero knowledge) is the classic Sutton and Barto book, Reinforcement Learning: An Introduction, which you can obtain for free and which was updated in 2018.
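For the first recommendation, here is a minimal tabular Q-learning sketch for a 4x4 grid (the state/action encodings are made up for illustration, not taken from your program):
#include <algorithm>

const int NUM_STATES  = 16;  // 4x4 grid, state = y * 4 + x
const int NUM_ACTIONS = 4;   // up, down, left, right
double Q[NUM_STATES][NUM_ACTIONS] = {};

// One Q-learning step: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
void qUpdate(int s, int a, double r, int sNext, bool terminal,
             double alpha = 0.1, double gamma = 0.9)
{
    double maxNext = terminal ? 0.0
                              : *std::max_element(Q[sNext], Q[sNext] + NUM_ACTIONS);
    Q[s][a] += alpha * (r + gamma * maxNext - Q[s][a]);
}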

vtktriangle compute normal from arbitrary points with python

I am using the Python wrappings for VTK. I want my script to let the user pick three arbitrary points and return a triangle with its normal information. In the VTK vtkTriangle reference there is vtkTriangle::ComputeNormal(double v1[3], double v2[3], double v3[3], double n[3]).
I checked the C++ implementation examples for vtkTriangle, but I don't understand how to do this in Python. Does n[3] stand for the normal? If so, what should it be as an input parameter?
@g.stevo I understand that. However, when I give a random value, the method ComputeNormal returns None. To be clearer, you can find the snippet of related code below:
p0 = trianglePolyData.GetPoints().GetPoint(0)
p1 = trianglePolyData.GetPoints().GetPoint(1)
p2 = trianglePolyData.GetPoints().GetPoint(2)
print vtk.vtkTriangle().TriangleArea(p0,p1,p2)
n=[0.0,0.0,0.0]
print vtk.vtkTriangle().ComputeNormal(p0,p1,p2,n)
Your code is working. The result you are looking for is in the array n. The function ComputeNormal returns void, according to the documentation.
Try this:
n=[0.0,0.0,0.0]
vtk.vtkTriangle().ComputeNormal(p0,p1,p2,n)
print n

Which algorithm is used to train/predict Opencv LBPH face recognizer?

I couldn't understand how the training stage and prediction stage work. Is it using another algorithm like SVM or k-nearest neighbour after finding the LBPH features?
If you check: https://github.com/Itseez/opencv_contrib/blob/master/modules/face/src/lbph_faces.cpp
Then you will see that they use 1-nearest neighbour; excerpt from the detect function:
// find 1-nearest neighbor
collector->init((int)_histograms.size(), state);
for (size_t sampleIdx = 0; sampleIdx < _histograms.size(); sampleIdx++) {
    double dist = compareHist(_histograms[sampleIdx], query, HISTCMP_CHISQR_ALT);
    int label = _labels.at<int>((int)sampleIdx);
    if (!collector->collect(label, dist, state)) return;
}
A 1-nearest-neighbour classifier is used since the Local Binary Pattern descriptor is simple enough. For a more in-depth explanation, see the paper "Face Recognition with Local Binary Patterns".
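In other words, prediction boils down to a 1-nearest-neighbour search over the stored histograms, roughly like this simplified sketch (made-up names, not the actual OpenCV implementation):
#include <opencv2/opencv.hpp>
#include <limits>
#include <vector>

int predictLabel(const std::vector<cv::Mat>& trainHists,
                 const std::vector<int>& trainLabels,
                 const cv::Mat& queryHist)
{
    int bestLabel = -1;
    double bestDist = std::numeric_limits<double>::max();
    for (size_t i = 0; i < trainHists.size(); ++i) {
        double d = cv::compareHist(trainHists[i], queryHist, cv::HISTCMP_CHISQR_ALT);
        if (d < bestDist) { bestDist = d; bestLabel = trainLabels[i]; }
    }
    return bestLabel; // label of the closest training histogram
}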
On a side note: this is not really an implementation/practical question and thus does not really belong on this forum. I would suggest using the OpenCV forum.