CGAL Weighted_pointC2 for Epeck kernel - C++

I have a project where CGAL::Exact_predicates_exact_constructions_kernel is used with CGAL::Arrangement_with_history_2. Now there is a task to attach a property to each point that should not affect point operations (it just carries extra data). I read the tutorial in the CGAL Extensible Kernel guide, but it looks strange and difficult to override all geometric operations and classes for all geometric traits. It is also not very clear how to keep support for lazy operations and interval arithmetic, i.e. how this works inside CGAL::Epeck.
Edit: The vertex record in the DCEL can't be used, because not every point has a DCEL vertex (CGAL vertices are the edge endpoints, but the edges have intermediate points, so |Points| >= |Vertices|).
I found Weighted_point_2.h, which defines a Weighted_pointC2 class with a scalar field. Can this type of point be used with the CGAL::Epeck kernel? Is there example code somewhere for creating an Epeck-like kernel around a custom point type?

Related

Difference between the main data objects in the ITK 4.8 toolkit

Can anyone help me understand the differences among the ITK 4.8 data objects? What is the difference between a Vector image, a CovariantVector image, and Spatial Objects?
The itk::VectorImage class is merely an image where each image pixel has a list of values associated with it rather than a single intensity value.
I am not aware of any itk::CovariantVectorImage class or a similar class.
The itk::Vector class represents a mathematical vector with magnitude and direction with operators and methods for vector addition, scalar multiplication, the inner product of two vectors, getting a vector's norm, and so forth. You can also perform linear transformations on them using methods in the itk::AffineTransform, mainly the TransformVector() method. This is not related to C++'s std::vector container object, which is really a dynamic array data structure.
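For instance, the basic itk::Vector operations mentioned above look like this (a small sketch; the values are arbitrary):

    #include <itkVector.h>
    #include <iostream>

    int main()
    {
        typedef itk::Vector<double, 3> VectorType;

        VectorType a, b; // arbitrary example values
        a[0] = 1.0; a[1] = 2.0; a[2] = 3.0;
        b[0] = 4.0; b[1] = 5.0; b[2] = 6.0;

        VectorType sum = a + b;      // vector addition
        VectorType scaled = a * 2.0; // scalar multiplication
        double dot = a * b;          // inner product of two vectors
        double norm = a.GetNorm();   // Euclidean norm

        std::cout << sum << " " << scaled << " "
                  << dot << " " << norm << std::endl;
        return 0;
    }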
The itk::CovariantVector class is similar to itk::Vector, except it represents a covector rather than a vector. Covectors represent (n-1)-dimensional hyperplanes (2D planes in the case of 3D space), and so their components transform in the opposite way that a vector's components do. itk::AffineTransform's TransformCovariantVector() method will transform an itk::CovariantVector object according to covariant transformation laws.
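To see the difference concretely, here is a sketch (values arbitrary) applying a uniform scaling: vector components are multiplied by the scale factor, while covariant vector components are multiplied by its inverse, because covectors transform with the inverse transpose of the matrix:

    #include <itkVector.h>
    #include <itkCovariantVector.h>
    #include <itkAffineTransform.h>
    #include <iostream>

    int main()
    {
        typedef itk::AffineTransform<double, 2> TransformType;
        TransformType::Pointer transform = TransformType::New();
        transform->Scale(2.0); // uniform scaling by 2

        itk::Vector<double, 2> v;
        v[0] = 1.0; v[1] = 0.0;

        itk::CovariantVector<double, 2> cv;
        cv[0] = 1.0; cv[1] = 0.0;

        // Vectors transform with the matrix: (1, 0) -> (2, 0).
        std::cout << transform->TransformVector(v) << std::endl;

        // Covariant vectors transform with the inverse transpose: (1, 0) -> (0.5, 0).
        std::cout << transform->TransformCovariantVector(cv) << std::endl;
        return 0;
    }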
The itk::SpatialObject class allows you to create objects that exist in physical n-dimensional space, such as boxes, ellipses, tubes, planes, and cylinders, and to relate these objects through parent-child relationships. You can read Chapter 5 of the ITK software manual for more information on this topic.
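As a small illustration (assuming the ITK 4.x API, with arbitrary values), a spatial object can answer geometric queries such as whether a point lies inside it:

    #include <itkEllipseSpatialObject.h> // ITK 4.x API assumed
    #include <iostream>

    int main()
    {
        typedef itk::EllipseSpatialObject<2> EllipseType;
        EllipseType::Pointer ellipse = EllipseType::New();
        ellipse->SetRadius(5.0); // circle of radius 5 centered at the origin

        EllipseType::PointType p;
        p[0] = 3.0; p[1] = 3.0;

        std::cout << "inside: " << ellipse->IsInside(p) << std::endl;
        return 0;
    }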

Given 2 points with known speed direction and location, compute a path composed of (circle) arcs

So, I have two points, say A and B, each with a known (x, y) coordinate and a speed vector in the same coordinate system. I want to write a function that generates a set of arcs (radius and angle) leading from state A to state B.
The angle difference is known, since I can get it by comparing the speed unit vectors. Say I move a certain distance along an arc (radius=r, angle=theta); then I am in the same kind of situation again. Does it have a unique solution? I only need one solution, or even an approximation.
Of course I could solve it with a certain circle plus a line (radius=infinity), but that's not what I want to do. I think there's a library that has a function for this, since it's quite a common problem.
A biarc is a smooth curve consisting of two circular arcs. Given two points with tangents, it is almost always possible to construct a biarc passing through them (with correct tangents).
This is a very basic routine in geometric modelling, and it is indispensable for smoothly approximating an arbitrary curve (Bezier, NURBS, etc.) with arcs. Approximation with arcs and lines is heavily used in CAM, because modellers use NURBS without a problem, but machine controllers usually understand only lines and arcs. So I strongly suggest reading up on this topic.
In particular, here is a great article on biarcs; I seriously advise reading it. It even contains some working code, and an interactive demo.
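As a rough illustration of that construction, here is an untested sketch of an equal-chord biarc (the names and the handling of degenerate cases are my own, so treat it as a starting point rather than production code):

    #include <cmath>
    #include <cstdio>

    struct Vec { double x, y; };
    static Vec add(Vec a, Vec b) { return {a.x + b.x, a.y + b.y}; }
    static Vec sub(Vec a, Vec b) { return {a.x - b.x, a.y - b.y}; }
    static Vec mul(Vec a, double s) { return {a.x * s, a.y * s}; }
    static double dot(Vec a, Vec b) { return a.x * b.x + a.y * b.y; }
    static Vec perp(Vec a) { return {-a.y, a.x}; } // left-hand normal

    // One circular arc: center, signed radius and the two endpoints.
    // radius == INFINITY marks the degenerate case of a straight segment.
    struct Arc { Vec center; double radius; Vec from, to; };

    // Circle passing through p with unit tangent t there, and also through q.
    static Arc arcThrough(Vec p, Vec t, Vec q)
    {
        Vec n = perp(t);                 // the center lies on p + r*n
        Vec pq = sub(q, p);
        double denom = 2.0 * dot(n, pq);
        if (std::fabs(denom) < 1e-12)
            return {p, INFINITY, p, q};  // q is straight ahead: use a segment
        double r = dot(pq, pq) / denom;  // from |p + r*n - q|^2 = r^2
        return {add(p, mul(n, r)), r, p, q};
    }

    // Biarc from (p1, t1) to (p2, t2); t1 and t2 must be unit tangents.
    static void biarc(Vec p1, Vec t1, Vec p2, Vec t2, Arc out[2])
    {
        Vec v = sub(p2, p1);
        Vec t = add(t1, t2);
        double vv = dot(v, v), vt = dot(v, t);

        // Equal chord parameter d solves 2(1 - t1.t2) d^2 + 2(v.t) d - v.v = 0.
        double denom = 2.0 * (1.0 - dot(t1, t2));
        double d = (std::fabs(denom) < 1e-12)
                 ? vv / (2.0 * vt)                                  // parallel tangents
                 : (-vt + std::sqrt(vt * vt + denom * vv)) / denom; // positive root

        // Joint point where the two arcs meet with a common tangent.
        Vec pm = mul(add(add(p1, p2), mul(sub(t1, t2), d)), 0.5);

        out[0] = arcThrough(p1, t1, pm);
        out[1] = arcThrough(p2, mul(t2, -1.0), pm); // built backwards from p2
        out[1].from = pm; out[1].to = p2;           // restore the direction
    }

    int main()
    {
        Arc arcs[2];
        biarc({0, 0}, {1, 0}, {4, 2}, {0, 1}, arcs); // heading +x, arrive heading +y
        std::printf("r1 = %g, r2 = %g\n", arcs[0].radius, arcs[1].radius);
        return 0;
    }

If you need each arc in (radius, angle) form, the sweep angle follows from atan2 of the endpoint offsets around the center.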

Is the Boost DE-9IM struct usable?

I want to use DE-9IM to speed up point-within-polygon queries, where the polygon may be used many times. I know that DE-9IM has this functionality, but I can't seem to figure out how the class in Boost even works (geometry/strategies/intersection_result.hpp). Does anyone know if this class is actually functional, and if so, can they provide a simple example of a query for a polygon containing a point?
EDIT: I'm comparing the Boost geometry library to JTS, which has a prepared-geometry class. At this point I'm not 100% sure that use of the DE-9IM is what allows the precomputation, but I am still wondering if Boost has this functionality in it.
I'm not entirely sure what the problem is exactly.
DE-9IM is a model used to describe the spatial relationships of geometrical objects. See http://en.wikipedia.org/wiki/DE-9IM for more info.
I assume you're looking for a way to represent points and polygons and to check whether one is within the other. If that's the case, then yes, Boost.Geometry supports that and much more. For instance, to check if a point is within a polygon you may use:
boost::geometry::model::point<> to represent a Point
boost::geometry::model::polygon<> to represent a Polygon
boost::geometry::within() function to check the spatial relationship
You can find more info in the docs: http://www.boost.org/libs/geometry
E.g. at the bottom of this page:
http://www.boost.org/doc/libs/1_55_0/libs/geometry/doc/html/geometry/reference/algorithms/within/within_2.html
you can find an example showing how to create a point, load a polygon from a WKT string, and check whether one is within the other.
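Along the lines of that documentation example, a minimal sketch (coordinates arbitrary):

    #include <iostream>
    #include <boost/geometry.hpp>
    #include <boost/geometry/geometries/point_xy.hpp>
    #include <boost/geometry/geometries/polygon.hpp>

    namespace bg = boost::geometry;

    int main()
    {
        typedef bg::model::d2::point_xy<double> point_t;
        typedef bg::model::polygon<point_t> polygon_t;

        polygon_t poly;
        // Outer ring, clockwise and closed (Boost.Geometry's default convention).
        bg::read_wkt("POLYGON((0 0,0 10,10 10,10 0,0 0))", poly);

        point_t p(5, 5);
        std::cout << std::boolalpha << bg::within(p, poly) << std::endl; // true
        return 0;
    }

If it is the full DE-9IM matrix you are after, newer Boost.Geometry releases also provide bg::relate() and bg::relation() for working directly with DE-9IM masks and matrices.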

Which object is better for creating a curve: b2EdgeShape vs. b2ChainShape?

I have a function that generates points for a curve. I use these points to create the Box2D body representing the ground.
I have tried the following two ways of doing this:
Generate all the points and store them in an array. Create an edge-shape fixture between each pair of consecutive points.
Generate all the points and store them in an array. Create a b2ChainShape, and create the fixture for the chain shape.
When testing, both the curves look equally smooth (they use the same points after all). According to the time profiler instrument in Xcode, the methods I use to generate the body take approximately the same amount of running time (almost down to the millisecond).
Any reason why I should pick one over the other?
According to the manual:
Chain Shapes
The chain shape provides an efficient way to connect many edges together to construct your static game worlds. Chain shapes automatically eliminate ghost collisions and provide two-sided collision.
From the title of your question, you are choosing between creating a long series of individual edges and creating a single chain shape. The chain shape is the more efficient way to create "lots of edges".
From an implementation standpoint, I can't say whether there is a performance difference for collision detection (my guess is no, because you are still looking for collisions between the individual edges).
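For reference, the two approaches look roughly like this (assuming the Box2D 2.3-era C++ API; the vertex array is illustrative, and in practice you would keep only one of the two options):

    #include <Box2D/Box2D.h> // Box2D 2.3-era API assumed

    int main()
    {
        b2World world(b2Vec2(0.0f, -10.0f));
        b2BodyDef bd; // static body by default
        b2Body* ground = world.CreateBody(&bd);

        const int32 n = 4; // illustrative curve points
        b2Vec2 pts[n] = { b2Vec2(0, 0), b2Vec2(1, 0.5f),
                          b2Vec2(2, 0.7f), b2Vec2(3, 0.6f) };

        // Option 1: one b2EdgeShape fixture per consecutive point pair.
        for (int32 i = 0; i + 1 < n; ++i)
        {
            b2EdgeShape edge;
            edge.Set(pts[i], pts[i + 1]);
            ground->CreateFixture(&edge, 0.0f);
        }

        // Option 2: a single b2ChainShape fixture over all the points
        // (this also eliminates ghost collisions between neighbouring segments).
        b2ChainShape chain;
        chain.CreateChain(pts, n);
        ground->CreateFixture(&chain, 0.0f);
        return 0;
    }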

How to use the condensation algorithm available in OpenCV?

I need to implement a software for tracking of moving objects in image streams using the condensation algorithm and the OpenCV library. I have read that OpenCV includes an implementation of this algorithm, but I did not find examples or tutorials that explain how to use the corresponding functions available in OpenCV.
The cvCreateConDensation function allocates the CvConDensation structure and requires the dimension of the state vector (dynam_params), the dimension of the measurement vector (measure_params) and the number of samples (sample_count).
The dimension of the state vector should refer to the object state. For example, if the state is the center point of the tracked object, then the state vector contains the two coordinates of the center, so its dimension is 2 in this case. In a similar manner, if the state of an object is formed by S points belonging to its shape, then I would specify 2*S as the dynam_params value (i.e. the number of coordinates, which is 2*S). Is this correct?
The number of samples is the number of particles, therefore the parameter sample_count must be set with the number of particles to be used for the tracking of the object.
What about the dimension of the measurement vector? What is the purpose of the measure_params parameter?
The cvConDensInitSampleSet function initializes the sample set for the condensation algorithm. Which rule is used to initialize the sample set? Which distribution is used to initialize the sample set? Given the starting position and the bounding box of the object to be tracked, how does this function initialize the sample set?
What is the function that performs a complete iteration (select, predict and measure) of the algorithm? How are the samples updated?
Is there any tutorial that explains in detail how to use the functions available in OpenCV?
A working example of the condensation algorithm can be found in the Q&A of OpenCV and ROS (same author):
http://answers.ros.org/question/55316/using-the-opencv-particle-filter-condensation/
and
http://answers.opencv.org/question/6985/syntax-for-particle-filter-in-opencv-243/
Here is another implementation of a particle filter that uses the OpenCV and GSL libraries. The source code provided by the author is easy to read; maybe you can learn something from it.
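To summarize what those links show, a skeleton of the legacy C API might look like this (assuming the OpenCV 2.4-era legacy module; the measurement model is a made-up Gaussian weighting around a hypothetical measured position):

    #include <opencv2/core/core.hpp>
    #include <opencv2/legacy/legacy.hpp> // CvConDensation lives in the legacy module
    #include <cmath>
    #include <cstring>

    int main()
    {
        const int dim = 2;        // state = (x, y), so dynam_params = 2
        const int nSamples = 200; // number of particles

        CvConDensation* cond = cvCreateConDensation(dim, dim, nSamples);

        // Spread the initial sample set uniformly over the image area.
        CvMat* lower = cvCreateMat(dim, 1, CV_32FC1);
        CvMat* upper = cvCreateMat(dim, 1, CV_32FC1);
        cvmSet(lower, 0, 0, 0);   cvmSet(lower, 1, 0, 0);
        cvmSet(upper, 0, 0, 640); cvmSet(upper, 1, 0, 480);
        cvConDensInitSampleSet(cond, lower, upper);

        // Identity dynamics: particles move only through the process noise.
        const float dynamics[] = { 1, 0,
                                   0, 1 };
        std::memcpy(cond->DynamMatr, dynamics, sizeof(dynamics));

        const float measuredX = 320.f, measuredY = 240.f; // hypothetical measurement

        for (int frame = 0; frame < 10; ++frame)
        {
            // Measure: weight every particle by its likelihood given the measurement.
            for (int i = 0; i < cond->SamplesNum; ++i)
            {
                float dx = cond->flSamples[i][0] - measuredX;
                float dy = cond->flSamples[i][1] - measuredY;
                cond->flConfidence[i] = std::exp(-(dx * dx + dy * dy) / (2 * 50.f * 50.f));
            }
            // Select + predict: resample and propagate; cond->State holds the estimate.
            cvConDensUpdateByTime(cond);
        }

        cvReleaseConDensation(&cond);
        cvReleaseMat(&lower);
        cvReleaseMat(&upper);
        return 0;
    }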