Difference between the main data objects in the ITK 4.8 toolkit - C++

Can anyone help me understand the differences among the ITK 4.8 data objects? What are the differences between Vector Image, CovariantVector Image, and Spatial Objects?

The itk::VectorImage class is merely an image where each image pixel has a list of values associated with it rather than a single intensity value.
I am not aware of any itk::CovariantVectorImage class or a similar class.
The itk::Vector class represents a mathematical vector with magnitude and direction with operators and methods for vector addition, scalar multiplication, the inner product of two vectors, getting a vector's norm, and so forth. You can also perform linear transformations on them using methods in the itk::AffineTransform, mainly the TransformVector() method. This is not related to C++'s std::vector container object, which is really a dynamic array data structure.
The itk::CovariantVector class is similar to itk::Vector, except it represents a covector rather than a vector. Covectors represent (n-1)-dimensional hyperplanes (2D planes in the case of 3D space), and so their components transform in the opposite way that a vector's components do. itk::AffineTransform's TransformCovariantVector() method will transform an itk::CovariantVector object according to covariant transformation laws.
The itk::SpatialObject class allows you to create objects that exist in physical n-dimensional space, such as boxes, ellipses, tubes, planes, and cylinders, and relate these objects through parent-child relationships. You can read Chapter 5 of the ITK software manual for more information on this topic.
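As a minimal sketch of how these classes fit together (assuming ITK 4.x; the vector length, dimensions, and scale factor below are arbitrary illustrative values):

#include "itkVectorImage.h"
#include "itkVector.h"
#include "itkCovariantVector.h"
#include "itkAffineTransform.h"

int main()
{
    // A 3D image whose pixels each hold 6 values instead of a single intensity
    using VectorImageType = itk::VectorImage<float, 3>;
    VectorImageType::Pointer image = VectorImageType::New();
    image->SetVectorLength(6);

    // A geometric vector and a covariant vector under the same affine transform
    using TransformType = itk::AffineTransform<double, 3>;
    TransformType::Pointer transform = TransformType::New();
    transform->Scale(2.0);

    itk::Vector<double, 3> v;
    v.Fill(1.0);
    itk::CovariantVector<double, 3> c;
    c.Fill(1.0);

    // Vectors transform with the matrix; covariant vectors with its inverse transpose
    itk::Vector<double, 3> vOut = transform->TransformVector(v);
    itk::CovariantVector<double, 3> cOut = transform->TransformCovariantVector(c);

    return 0;
}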


CGAL Weighted_pointC2 for Epeck kernel

I have a project where CGAL::Exact_predicates_exact_constructions_kernel is used for CGAL::Arrangement_with_history_2. Now there is a task to add a property to each point which should not affect point operations (it just provides extra data). I read the CGAL Extensible Kernel guide, but it looks strange and difficult to override all geometric operations and classes for all geometric traits. It is also not clear how to support lazy operations and interval arithmetic the way CGAL::Epeck does.
Edit: the vertex record in the DCEL can't be used because not every point has a DCEL vertex (CGAL vertices are the edge ends, but the edges have intermediate points, so |Points| >= |Vertices|).
I found Weighted_point_2.h, which defines the Weighted_pointC2 class with a scalar field. Can this type of point be used with the CGAL::Epeck kernel? Maybe there is an example?
Is there any example code for creating an Epeck-like kernel for a custom point type?

Boost Polygon with indexed geometry

I need to do some polygon computations in the 2D plane, typically an isInside operation.
I found the boost::Polygon API, but my points are inside a single big array.
That's what I call indexed geometry.
See http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-9-vbo-indexing/
So my best option would be to use boost::Polygon and give it my array plus the indices of the points to use.
The objective is simply to avoid copying my millions of points (because each is shared by at least two polygons).
I don't know if the API allows this (or whether I need to derive my own class :-( ).
Maybe someone knows another API (inside Boost or elsewhere).
Thanks
Documentation: within demo: https://www.boost.org/doc/libs/1_68_0/libs/geometry/doc/html/geometry/reference/algorithms/within/within_2.html
Boost Geometry allows for adapted user-defined data types.
Specifically, C arrays are adapted here: https://www.boost.org/doc/libs/1_68_0/boost/geometry/geometries/adapted/c_array.hpp
I have another answer up where I show how to use Boost Geometry algorithms on a direct C array of structs (in that case I type punned using tuple as the point type): How to calculate the convex hull with boost from arrays instead of setting each point separately? (the other answers show alternatives that may be easier if you can afford to copy some data).
The relevant algorithms would be:
https://www.boost.org/doc/libs/1_68_0/libs/geometry/doc/html/geometry/reference/algorithms/within.html
https://www.boost.org/doc/libs/1_68_0/libs/geometry/doc/html/geometry/reference/algorithms/disjoint.html
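Here is a minimal sketch of within() with Boost.Geometry (the flat coords array and the indices list are hypothetical stand-ins for your data, and this version copies the indexed points into a small polygon rather than adapting a zero-copy indexed ring type):

#include <cstddef>
#include <iostream>
#include <boost/geometry.hpp>
#include <boost/geometry/geometries/point_xy.hpp>
#include <boost/geometry/geometries/polygon.hpp>

namespace bg = boost::geometry;

int main()
{
    // Hypothetical shared flat coordinate array plus the index list of one polygon
    double coords[] = { 0, 0,  4, 0,  4, 4,  0, 4 };
    std::size_t indices[] = { 0, 1, 2, 3, 0 };        // closed ring

    using point_t = bg::model::d2::point_xy<double>;
    using polygon_t = bg::model::polygon<point_t>;

    polygon_t poly;
    for (std::size_t i : indices)
        bg::append(poly.outer(), point_t(coords[2 * i], coords[2 * i + 1]));
    bg::correct(poly);                                // fix ring orientation/closure

    std::cout << std::boolalpha
              << bg::within(point_t(1.0, 1.0), poly)  // true: the point is inside
              << std::endl;
}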

How to use arrays in machine learning classes?

I'm new to C++ and I think a good way for me to jump in is to build some basic models that I've built in other languages. I want to start with just Linear Regression solved using first order methods. So here's how I want things to be organized (in pseudocode).
class LinearRegression:
    tol = <a supplied tolerance or defaulted to 1e-5>
    max_iter = <a supplied max iterations or defaulted to 1000>

    fit(X, y):
        // model learns weights specific to this data set

    _gradient(X, y):
        // compute the gradient

    score(X, y):
        // model uses weights learned from fit to compute the accuracy of
        // y_predicted against the actual y
My question is this: when I use the fit, score, and gradient methods, I don't actually need to copy the arrays (X and y) or store them anywhere, so I want to pass a reference or a pointer to those structures. My problem is that if a method accepts a pointer to a 2D array, I need to supply the second dimension's size ahead of time or use templates. If I use templates, I end up with something like this for every method that accepts a 2D array:
template<std::size_t rows, std::size_t cols>
void fit(double (&X)[rows][cols], double (&y)[rows]){...}
It seems there is likely a better way. I want my regression class to work with any size of input. How is this done in industry? I know that in some situations the array is just flattened into row- or column-major format, where a pointer to the first element is passed, but I don't have enough experience to know what people use in C++.
You raised quite a few points in your question, so here are some points addressing them:
Contemporary C++ discourages working directly with heap-allocated data that you need to manually allocate or deallocate. You can use, e.g., std::vector<double> to represent vectors, and std::vector<std::vector<double>> to represent matrices. Even better would be to use a matrix class, preferably one that is already in mainstream use.
Once you use such a class, you can easily get the dimension at runtime. With std::vector, for example, you can use the size() method. Other classes have other methods. Check the documentation for the one you choose.
You probably really don't want to use templates for the dimensions.
a. If you do so, you will need to recompile each time you get a different input. Your code will be duplicated (by the compiler) to the number of different dimensions you simultaneously use. Lots of bad stuff, with little gain (in this case). There's no real drawback to getting the dimension at runtime from the class.
b. Templates (in your setting) are fitting for the type of the matrix (e.g., is it a matrix of doubles or floats), or possibly the number of dimensions (e.g., for specifying tensors).
Your regressor doesn't need to store the matrix and/or vector. Pass them by const reference. Your interface looks like that of sklearn. If you like, check the source code there. The result of calling fit just causes the class object to store the parameter corresponding to the prediction vector β. It doesn't copy or store the input matrix and/or vector.
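As a rough sketch of that interface (the std::vector-based types, the fixed-step gradient-descent solver, and the mean-squared-error score below are illustrative choices of mine, not sklearn's exact behavior):

#include <cmath>
#include <cstddef>
#include <vector>

using Matrix = std::vector<std::vector<double>>;   // row-major: X[i] is one sample
using Vector = std::vector<double>;

class LinearRegression {
public:
    explicit LinearRegression(double tol = 1e-5, std::size_t max_iter = 1000)
        : tol_(tol), max_iter_(max_iter) {}

    // X and y are taken by const reference: nothing is copied or stored,
    // and the dimensions are read from the containers at run time.
    void fit(const Matrix& X, const Vector& y) {
        const std::size_t d = X.empty() ? 0 : X[0].size();
        weights_.assign(d, 0.0);
        const double lr = 0.01;                     // fixed step size for the sketch
        for (std::size_t it = 0; it < max_iter_; ++it) {
            Vector g = gradient(X, y);
            double step = 0.0;
            for (std::size_t j = 0; j < d; ++j) {
                weights_[j] -= lr * g[j];
                step += std::abs(lr * g[j]);
            }
            if (step < tol_) break;
        }
    }

    // Mean squared error of the fitted model on (X, y)
    double score(const Matrix& X, const Vector& y) const {
        double err = 0.0;
        for (std::size_t i = 0; i < X.size(); ++i) {
            const double diff = predict(X[i]) - y[i];
            err += diff * diff;
        }
        return X.empty() ? 0.0 : err / X.size();
    }

private:
    double predict(const Vector& x) const {
        double s = 0.0;
        for (std::size_t j = 0; j < weights_.size(); ++j) s += weights_[j] * x[j];
        return s;
    }

    // Gradient of the mean-squared-error loss
    Vector gradient(const Matrix& X, const Vector& y) const {
        Vector g(weights_.size(), 0.0);
        for (std::size_t i = 0; i < X.size(); ++i) {
            const double diff = predict(X[i]) - y[i];
            for (std::size_t j = 0; j < g.size(); ++j)
                g[j] += 2.0 * diff * X[i][j] / X.size();
        }
        return g;
    }

    double tol_;
    std::size_t max_iter_;
    Vector weights_;   // the only state fit() keeps
};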

How to use the condensation algorithm available in OpenCV?

I need to implement a software for tracking of moving objects in image streams using the condensation algorithm and the OpenCV library. I have read that OpenCV includes an implementation of this algorithm, but I did not find examples or tutorials that explain how to use the corresponding functions available in OpenCV.
The cvCreateConDensation function allocates the CvConDensation structure and requires the dimension of the state vector (dynam_params), the dimension of the measurement vector (measure_params) and the number of samples (sample_count).
The dimension of the state vector should refer to the object state: for example, if the state is the center point of the tracked object, then the state vector should contain the two coordinates of the center, so the dimension of the state vector would be 2 in this case; similarly, if the state of an object is formed by S points belonging to its shape, then I would specify 2*S as the dynam_params value (i.e., the number of coordinates is 2*S). Is this correct?
The number of samples is the number of particles, therefore the parameter sample_count must be set with the number of particles to be used for the tracking of the object.
What about the dimension of the measurement vector? What is the purpose of the measure_params parameter?
The cvConDensInitSampleSet function initializes the sample set for the condensation algorithm. Which rule is used to initialize the sample set? Which distribution is used to initialize the sample set? Given the starting position and the bounding box of the object to be tracked, how does this function initialize the sample set?
What is the function that performs a complete iteration (select, predict, and measure) of the algorithm? How are the samples updated?
Is there any tutorial that explains in detail how to use the functions available in OpenCV?
A working example of the condensation algorithm can be found in the Q&A of OpenCV and ROS (same author):
http://answers.ros.org/question/55316/using-the-opencv-particle-filter-condensation/
and
http://answers.opencv.org/question/6985/syntax-for-particle-filter-in-opencv-243/
Here is another implementation of a particle filter, built with the OpenCV and GSL libraries. The source code provided by the author is easy to read; maybe you can learn something from it.
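For reference, a minimal sketch of the legacy C API (assuming OpenCV 2.4.x, where CvConDensation lives in the legacy module; the image size, particle count, and the measurement model are placeholders you would replace with your own):

#include <opencv2/legacy/legacy.hpp>   // CvConDensation and the cvConDens* functions

int main()
{
    const int dim = 2;                 // state: (x, y) of the tracked centre -> dynam_params = 2
    const int nParticles = 100;        // sample_count

    CvConDensation* cond = cvCreateConDensation(dim, dim, nParticles);

    // Initialize the sample set uniformly inside the object's bounding box
    CvMat* lower = cvCreateMat(dim, 1, CV_32F);
    CvMat* upper = cvCreateMat(dim, 1, CV_32F);
    cvmSet(lower, 0, 0, 0.0);   cvmSet(lower, 1, 0, 0.0);
    cvmSet(upper, 0, 0, 640.0); cvmSet(upper, 1, 0, 480.0);
    cvConDensInitSampleSet(cond, lower, upper);

    // Identity dynamics: the predicted state equals the previous state plus noise
    for (int i = 0; i < dim * dim; ++i)
        cond->DynamMatr[i] = (i % (dim + 1) == 0) ? 1.0f : 0.0f;

    // One iteration: weight every particle with your own measurement model,
    // then let the filter resample and predict
    for (int i = 0; i < cond->SamplesNum; ++i)
    {
        // cond->flSamples[i][0] and [1] hold this particle's (x, y) hypothesis;
        // score it against the current frame here
        cond->flConfidence[i] = 1.0f;  // placeholder likelihood
    }
    cvConDensUpdateByTime(cond);
    // cond->State now holds the filter's current estimate of (x, y)

    cvReleaseConDensation(&cond);
    cvReleaseMat(&lower);
    cvReleaseMat(&upper);
    return 0;
}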

Making an Eigen::Vector look like a vector of points

I want to represent a 2D shape in such a way that it can be interacted with as if it were a vector of points, in particular I want to be able to call operator[] and at() on it and return references to things that act like 2D points. Currently I just use a class whose only member variable is a vector of points and that has various arithmetic and geometric operations defined pointwise on its elements.
However, in other parts of my code I need to treat a vector of n points as an element of 2n dimensional space and perform basic linear algebra on it (e.g. projecting the vector onto a given subspace of R^2n). Currently I'm creating an Eigen::VectorXd object every time I want to do this, and then converting back after performing these operations. I don't want to do this, as I make the conversion often enough that all the copying is a noticeable source of inefficiency.
If I was storing the data as a flat array of doubles/floats/ints, I could cast a pointer to its nth element to a pointer to a Point (whose members would just be a pair of doubles/floats/ints). However, as I don't know the internal representation that Eigen uses for vectors (and it may well change), this isn't possible.
Is there a sensible way of solving this? I could just use Eigen::Vectors everywhere, but I really want most of the code to be able to pretend that it is dealing with a set of points.
However, as I don't know the internal representation that Eigen uses for vectors (and it may well change), this isn't possible.
Eigen offers the Map classes that allow mapping plain arrays to Eigen structures. For example:
double numbers[2] = { 1.0, 2.0 };
Eigen::Vector2d::Map( numbers ).dot( Eigen::Vector2d::Constant(1) );   // maps the raw double array, no copy
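Applied to your setting, a Map can also view a contiguous std::vector of point structs as one 2n-dimensional Eigen vector, so the linear-algebra code does not need a copy either. A minimal sketch (assuming Point is exactly two packed doubles with no padding or virtual functions; the names are hypothetical):

#include <vector>
#include <Eigen/Dense>

struct Point { double x, y; };

int main()
{
    std::vector<Point> shape = { {0, 0}, {1, 0}, {1, 1}, {0, 1} };

    // View the same memory as a 2n-dimensional vector; no copy is made
    Eigen::Map<Eigen::VectorXd> v(&shape[0].x, 2 * shape.size());

    v *= 2.0;   // any in-place linear-algebra operation acts directly on the points

    // shape[2] is now (2, 2)
    return 0;
}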