UnitTest++: test for multiple possible values - C++

I am currently implementing a simple ray tracer in C++. I have a class named OrthonormalBasis, which generates three orthogonal unit vectors from one or two specified vectors, for example:
void
OrthonormalBasis::init_from_u ( const Vector& u )
{
    Vector n(1,0,0);
    Vector m(0,1,0);
    u_ = unify(u);
    v_ = cross(u_,n);
    if ( v_.length() < ONB_EPSILON )
        v_ = cross(u_,m);
    w_ = cross(u_,v_);
}
I am testing all my methods with the UnitTest++ framework. The problem is that there is more than one possible solution for a valid orthonormal basis. Take this test:
TEST ( orthonormalbasis__should_init_from_u )
{
    Vector u(1,0,0);
    OrthonormalBasis onb;
    onb.init_from_u(u);
    CHECK_EQUAL( Vector( 1, 0, 0 ), onb.u() );
    CHECK_EQUAL( Vector( 0, 0, 1 ), onb.v() );
    CHECK_EQUAL( Vector( 0, 1, 0 ), onb.w() );
}
Sometimes it succeeds and sometimes it fails, because the vectors v and w could also have a negative 1 and still represent a valid orthonormal basis. Is there a way to specify multiple expected values? Or do you know another way to do that?
It is important that I get the actual and expected values printed to stdout in order to debug the methods, so this solution won't do the job:
TEST ( orthonormalbasis__should_init_from_u )
{
    Vector u(1,0,0);
    OrthonormalBasis onb;
    onb.init_from_u(u);
    CHECK_EQUAL( Vector( 1, 0, 0 ), onb.u() );
    CHECK(
        Vector( 0, 0, 1 ) == onb.v() ||
        Vector( 0, 0,-1 ) == onb.v() );
    CHECK(
        Vector( 0, 1, 0 ) == onb.w() ||
        Vector( 0,-1, 0 ) == onb.w() );
}

Surely if all you are testing is whether your basis is orthonormal, then that's what you need to test?
// check orthogonality
CHECK_EQUAL( 0, dot(onb.u(), onb.v()));
CHECK_EQUAL( 0, dot(onb.u(), onb.w()));
CHECK_EQUAL( 0, dot(onb.v(), onb.w()));
// check normality
CHECK_EQUAL( 1, dot(onb.u(), onb.u()));
CHECK_EQUAL( 1, dot(onb.v(), onb.v()));
CHECK_EQUAL( 1, dot(onb.w(), onb.w()));
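Note that exact floating-point equality can be as flaky as the original problem; UnitTest++ also provides CHECK_CLOSE, which takes an explicit tolerance. A minimal sketch, reusing ONB_EPSILON from the question as a plausible tolerance:
// tolerant versions of the checks above
CHECK_CLOSE( 0.0, dot(onb.u(), onb.v()), ONB_EPSILON );
CHECK_CLOSE( 1.0, dot(onb.u(), onb.u()), ONB_EPSILON );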

One possibility is to create your own CHECK_MULTI function:
template <typename TYPE>
void CHECK_MULTI( const TYPE& actual, const std::vector<TYPE>& expected )
{
    for ( const TYPE& element : expected ) {
        if ( element == actual ) {
            // there's a check here so the test count is correct
            CHECK_EQUAL( element, actual );
            return;
        }
    }
    // no candidate matched: fail against the first expected value,
    // which prints both the actual and an expected value
    CHECK_EQUAL( expected.front(), actual );
}
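Called from the failing test above it would look like this (note that, depending on your UnitTest++ version, the CHECK macros may only be usable inside a TEST body):
CHECK_MULTI( onb.v(), { Vector( 0, 0, 1 ), Vector( 0, 0,-1 ) } );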

I'd use a utility function or class so you can do something like this:
CHECK_EQUAL(VectorList(0,0,1)(0,0,-1), onb.v());
Granted, that interpretation of equality is somewhat weird, but it should print all the values you want to see without the need to introduce a custom macro.
If you are worried about EQUAL in that context, a custom macro like CHECK_CONTAINS() shouldn't be too hard to do.
VectorList would be constructed as a temporary, and operator() would be used to insert values into the contained list of Vectors, similar to Boost.Assign.
Basic approach:
#include <algorithm>
#include <vector>

class VectorList {
    std::vector<Vector> data_;
public:
    VectorList(double a, double b, double c) {
        data_.push_back(Vector(a,b,c));
    }
    VectorList& operator()(double a, double b, double c) {
        data_.push_back(Vector(a,b,c));
        return *this;
    }
    bool operator==(const Vector& rhs) const {
        return std::find(data_.begin(), data_.end(), rhs) != data_.end();
    }
};
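For CHECK_EQUAL to print all candidate values on failure, VectorList also needs a stream inserter; a minimal sketch, assuming Vector is itself streamable and a small data() accessor is added to VectorList:
#include <ostream>
// Prints every accepted candidate, so a failed check shows them all.
std::ostream& operator<<( std::ostream& os, const VectorList& vl ) {
    for ( const Vector& v : vl.data() )
        os << v << " ";
    return os;
}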

Related

How to estimate parameter uncertainty using Ceres Solver?

I am using Ceres Solver to perform non-linear curve fits on small data sets. Following the examples, I am able to generate perfectly reasonable fit parameters for models that match my data well. I am also trying to compute the parameter variances, and this is where things are falling apart. The code executes, but the results seem incorrect, often many orders of magnitude larger than the parameter itself. The number of points (x, y) in the data sets I am fitting is similar to the number of parameters in the fit models, e.g. 4 data points, 3 parameters.
I came across a similar SO question here: Ceres: Compute uncertainty on parameter, which was helpful in linking the Ceres wiki on using the covariance class, but the issue was not marked as resolved. I, like the previous poster, looked at the parameter variances produced using the Python lmfit (https://lmfit.github.io/lmfit-py/index.html) package and found that it provides much more reasonable results.
The Ceres description of the covariance class (http://ceres-solver.org/nnls_covariance.html#example-usage) described a potential issue where if the residuals of the cost functor are not scaled correctly, i.e. by the positive semi-definite covariance matrix of the observed data, then the parameter covariance matrix computation can't be trusted. Not being a mathematician, I am not certain how to satisfy this requirement.
Below is some sample code showing the cost function that I've implemented as well as the usage of the covariance class. Any advice would be greatly appreciated.
Cost function:
struct BiExponential1 {
    BiExponential1( double x, double y ) : x_( x ), y_( y ) {}
    template <typename T>
    bool operator()( const T* const a, const T* const b, const T* const c, T* residual ) const {
        // observed - estimated = y - ( a' [exp( -b' * x ) - exp(-c' * x)] )
        residual[0] = y_ - a[0] * ( exp( -b[0] * x_ ) - exp( -c[0] * x_ ) );
        return true;
    }
private:
    const double x_;
    const double y_;
};
Solver/Covariance usage:
double a = init_value_a;
double b = init_value_b;
double c = init_value_c;
double data_x[nData] = {<dummy data>};
double data_y[nData] = {<dummy data>};
for ( int i = 0; i < nData; ++i ) {
    problem.AddParameterBlock( &a, 1 );
    problem.AddParameterBlock( &b, 1 );
    problem.AddParameterBlock( &c, 1 );
    problem.AddResidualBlock(
        new ceres::AutoDiffCostFunction<BiExponential1, 1, 1, 1, 1>(
            new BiExponential1( data_x[i], data_y[i] ) ),
        nullptr,
        &a,
        &b,
        &c );
}
// Run the solver and record the results.
ceres::Solve( solverOptions, &problem, &summary );
// Variance estimates
// Code adapted from: http://ceres-solver.org/nnls_covariance.html#example-usage
Covariance::Options covOptions;
// TESTED non-default algorithm type - no effect.
//covOptions.algorithm_type = ceres::CovarianceAlgorithmType::DENSE_SVD;
Covariance covariance( covOptions );
std::vector<std::pair<const double*, const double*> > covariance_blocks;
covariance_blocks.push_back( std::make_pair( &a, &a ) );
covariance_blocks.push_back( std::make_pair( &b, &b ) );
covariance_blocks.push_back( std::make_pair( &c, &c ) );
covariance_blocks.push_back( std::make_pair( &a, &b ) );
covariance_blocks.push_back( std::make_pair( &a, &c ) );
covariance_blocks.push_back( std::make_pair( &b, &c ) );
CHECK( covariance.Compute( covariance_blocks, &problem ) );
// Get the diagonal variance terms
double covariance_aa[1 * 1];
double covariance_bb[1 * 1];
double covariance_cc[1 * 1];
covariance.GetCovarianceBlock( &a, &a, covariance_aa );
covariance.GetCovarianceBlock( &b, &b, covariance_bb );
covariance.GetCovarianceBlock( &c, &c, covariance_cc );
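Not an answer from the thread, but regarding the residual-scaling issue the Ceres docs describe: a common approach, assuming each observation has a known standard deviation sigma, is to whiten the residual by dividing by sigma, so that the covariance Ceres computes is in meaningful units. A sketch of such a cost functor:
struct BiExponential1Whitened {
    BiExponential1Whitened( double x, double y, double sigma )
        : x_( x ), y_( y ), sigma_( sigma ) {}
    template <typename T>
    bool operator()( const T* const a, const T* const b, const T* const c, T* residual ) const {
        // (observed - estimated) / sigma
        residual[0] = ( y_ - a[0] * ( exp( -b[0] * x_ ) - exp( -c[0] * x_ ) ) ) / sigma_;
        return true;
    }
private:
    const double x_;
    const double y_;
    const double sigma_;
};
Also bear in mind that with, e.g., 4 data points and 3 parameters there is only one degree of freedom, so variance estimates will be poorly constrained however the residuals are scaled.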

Combining subranges of a vector efficiently to iterate through

Combining subranges of a vector efficiently
The Process
I have some numerical data stored in a vector, v. Vector v is composed of many subranges of valid/invalid data with unpredictable lengths according to some predicate, e.g. being above some threshold value. After filtering, these valid ranges are represented by a second vector, f, which contains std::pair<size_t, size_t>s indicating the start index of each range and the index one past its end.
For example, filtering the vector { 1, 5, 3, 12, 10, 21, 19, 14, 5, 9, 3, 7, 2 } for data at or above a threshold of 10 would return { {3, 8} }.
The Data
The data I am using originates from real world measurements of the output power of a laser as it is cycled on and off. The transfer from off to on, and vice versa, is not instantaneous, and noise during the transition can make it difficult to determine the exact start point/end point.
The data produced is treated as immutable and no alterations are applied to v.
The Filter
In addition to the data to be filtered and a threshold value, the filter takes a value, x, representing the number of valid/invalid elements it must encounter before determining that a transition has occurred from a valid subrange to an invalid one, or vice versa.
For example, using the same vector as above, { 1, 5, 3, 12, 10, 21, 19, 14, 5, 9, 3, 7, 2 }, but a threshold of 8 and x = 2:
The filter reaches index 3, recognizing 12 > 8.
It continues x more indices, checking that they are also above the threshold before recognizing a transition has occurred.
The start point is set to 3.
The reverse happens for the transition from above the threshold to below.
The filter reaches index 8, recognizing 5 < 8.
However, at index 9, v[9] = 9 > 8.
As there haven't been x consecutive values below the threshold, the valid subrange continues.
At index 10 the count starts again, this time finding a valid transition.
The end point is set to 10 (one past the end).
The Problem
By only retaining the information about the start and end points of the valid ranges I avoid keeping a copy of all the valid data.
At a later point, I then perform some transformation on the data such as taking the average of each range (nice and simple), or averaging the valid data into a maximum number of n points (which causes my problem).
How can I smoothly iterate through the valid indices of v across subranges?
My first thought was to look at the ranges library provided by the C++ standard; however, I'm very inexperienced with <ranges>, and my simple experiments with it have probably left me more confused than when I started.
I am currently using Visual Studio 2022 and compiling for C++20.
Compiled using:
g++ -Wall -Wextra -pedantic -O3 -std=c++20 example.cpp
example.cpp
#include <vector>
#include <utility>
#include <limits>

std::vector<std::pair<size_t, size_t>>
filter( const std::vector<double>& data,
        const double threshold,
        const size_t x ) {
    std::vector<std::pair<size_t, size_t>> range_indices;
    // continuous_range indicates if currently in a continuous, VALID range.
    bool continuous_range{ false };
    // range_start/end track indices of most recent valid range
    // count helps distinguish between noise & transitions
    // from invalid to valid ranges or vice versa.
    size_t range_start{ 0 }, range_end{ 0 }, count{ 0 };
    for ( size_t i{ 0 }; i < data.size(); ++i ) {
        /* Some logic to decide which switch branch
         * Possible values:
         * 0: data[i] < threshold & !continuous_range
         *    - In non-valid data range, reset count.
         * 1: data[i] >= threshold & !continuous_range
         *    - Found new valid range if count >= x, else incr count
         * 2: data[i] < threshold & continuous_range
         *    - Left a valid range if count >= x, else incr count
         * 3: data[i] >= threshold & continuous_range
         *    - Within continuous range, reset count.
         */
        size_t branch = data[i] >= threshold ? 2 : 1;
        branch += continuous_range ? 1 : -1;
        switch ( branch ) {
        case 0:
            count = 0;
            break;
        case 1:
            count++;
            continuous_range = count >= x;
            if ( continuous_range ) {
                range_start = i - count + 1;
                count = 0;
            }
            break;
        case 2:
            count++;
            // If count == x, no longer in cont. range
            continuous_range = !(count >= x);
            // If not in cont. range
            if ( !continuous_range ) {
                // 1 past the end
                range_end = i - count + 1;
                range_indices.push_back(
                    std::pair<size_t, size_t>{ range_start, range_end }
                );
                count = 0;
            }
            break;
        case 3:
            count = 0;
            break;
        }
    }
    // Handle case where valid range includes the final datapoint
    // (end stays one past the end, i.e. data.size()).
    if ( continuous_range && range_start > range_end ) {
        range_indices.emplace_back( range_start, data.size() );
    }
    return range_indices;
}
double
vector_max( const std::vector<double>& v ) {
    double max{ std::numeric_limits<double>::lowest() };
    for ( const auto& d : v ) {
        if ( max < d ) { max = d; }
    }
    return max;
}
double
mean( const std::vector<double>& data,
      const size_t start, const size_t end ) {
    if ( data.empty() ) {
        return std::numeric_limits<double>::signaling_NaN();
    }
    if ( start >= end || end > data.size() ) {
        return std::numeric_limits<double>::signaling_NaN();
    }
    double sum{ 0.0 };
    for ( size_t i{ start }; i < end; ++i ) {
        sum += data[i];
    }
    return sum / (end - start);
}
std::vector<double>
avg_range( const std::vector<double>& data,
           const std::vector<std::pair<size_t, size_t>>& valid_ranges ) {
    std::vector<double> avg_data;
    avg_data.reserve(valid_ranges.size());
    for ( const auto& [first, last] : valid_ranges ) {
        avg_data.emplace_back(mean(data, first, last));
    }
    return avg_data;
}

std::vector<double>
avg_npoints( const std::vector<double>& data,
             const std::vector<std::pair<size_t, size_t>>& valid_ranges,
             const size_t n ) {
    /*
     * Some method to iterate through the valid ranges in data
     * using valid_indices so they appear as one continuous range.
     * Then average the valid data into n points.
     */
}
int main() {
    /*
     * I would put data here, except in reality the code handles anywhere
     * from a few 100k to a few million datapoints so I'm not sure what to
     * provide instead.
     */
    std::vector<double> data;
    const auto indices = filter(data, 0.8 * vector_max(data), 2);
    const auto range_avgs = avg_range(data, indices);
    const auto npoint_avgs = avg_npoints(data, indices, 1000);
}
You can indeed do this quite elegantly with ranges. Here is a short example:
#include <ranges>
#include <span>
#include <vector>

// Store your subranges as (const element type, since the data is only read)
using Sub = std::span<const double>;
// and return your filtered result as
std::vector<Sub> filter(std::vector<double> const& data, ...);

int main()
{
    std::vector<double> data;
    const auto subs = filter(data, ...);
    // A view of the vector of spans, flattened into a single sequence
    auto view = std::views::join(subs);
}
The spans can be created from a pair of iterators to the data vector, or an iterator and a count, so that will require some modifications to your filter algorithm.
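For example, keeping the existing index-pair filter, the spans could be built in a separate pass; a sketch (to_spans is a hypothetical helper, and data must outlive the spans):
std::vector<Sub> to_spans( std::vector<double> const& data,
                           std::vector<std::pair<size_t, size_t>> const& idx )
{
    std::vector<Sub> subs;
    subs.reserve( idx.size() );
    for ( auto const& [first, last] : idx )
        subs.emplace_back( data.data() + first, last - first ); // span over [first, last)
    return subs;
}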
I guess the ranges library offers ways to write your code in a much simpler way. However, you already have the code to filter, and if we just consider the question
How can I smoothly iterate through the valid indices of v across subranges?
then the answer is rather simple and requires only a few additions to your code.
First I used an alias
using indices_t = std::vector<std::pair<size_t, size_t>>;
Next, your way to find the max can be simplified by using std::max_element:
#include <algorithm>

double vector_max( const std::vector<double>& v ) {
    return *std::max_element( v.begin(), v.end() );
}
(assumes the vector is not empty)
Then you can write a function that takes a callable as a parameter and calls it with every element inside the intervals:
template <typename F>
void apply_to_intervals( F f, const std::vector<double>& v, const indices_t& indices ) {
    for (const auto& interv : indices) {
        for (auto i = interv.first; i < interv.second; ++i) {
            f(v[i]);
        }
    }
}
That's really all you need to smoothly iterate over the filtered elements.
For example to print them:
#include <iostream>

void print( const std::vector<double>& v, const indices_t& indices ) {
    apply_to_intervals( [](double x) { std::cout << x << "\n"; }, v, indices );
}
To calculate the average:
auto avg_range( const std::vector<double>& v, const indices_t& indices ) {
    double sum = 0;
    size_t count = 0;
    auto averager = [&](double x) {
        sum += x;
        ++count;
    };
    apply_to_intervals( averager, v, indices );
    return sum / count;
}
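The same helper also covers the question's avg_npoints. Here is a sketch (my addition, not part of the original answer) that counts the valid elements first and then distributes them over n roughly equal buckets:
#include <algorithm>

std::vector<double> avg_npoints( const std::vector<double>& v,
                                 const indices_t& indices, size_t n ) {
    size_t total = 0;
    apply_to_intervals( [&](double) { ++total; }, v, indices );
    if ( total == 0 || n == 0 ) return {};
    n = std::min( n, total ); // no more buckets than elements
    std::vector<double> sums( n, 0.0 );
    std::vector<size_t> counts( n, 0 );
    size_t seen = 0;
    apply_to_intervals( [&](double x) {
        const size_t bucket = seen * n / total; // running index -> bucket
        sums[bucket] += x;
        ++counts[bucket];
        ++seen;
    }, v, indices );
    for ( size_t i = 0; i < n; ++i )
        sums[i] /= counts[i]; // each bucket is non-empty since n <= total
    return sums;
}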
Complete Code

How to update wxTreeListCtrl in real time if the children count is more than 100,000 in wxWidgets

I am going to display data in wxTreeListCtrl, but I am facing some problems with time: it is taking too long. Could someone please help me out with this?
Here is my code:
wxTreeListCtrl *m_ptreelistctrl = new wxTreeListCtrl(this, TREELISTCNTRL, pos, size,
    wxBORDER_NONE|wxTR_HAS_BUTTONS|wxTR_MULTIPLE, validator, name);
m_ptreelistctrl->SetHeaderBackgroundcolour(colour);
//For Displaying Names
m_ptreelistctrl->AddColumn(_U("Description"), 400, wxALIGN_LEFT /*,DEFAULT_ITEM_WIDTH, wxALIGN_LEFT */);
//For Displaying ID
m_ptreelistctrl->AddColumn(_U("Id"), 50/*30*/, wxALIGN_LEFT, -1, false);
//For Displaying Colour
m_ptreelistctrl->AddColumn(_U("Colour"), DEFAULT_COL_WIDTH, wxALIGN_LEFT/*CENTER */);

wxStopWatch *time = new wxStopWatch();
time->Start();

custTreeItemData* pcusData = new custTreeItemData(-1, TREEITEM_ROOT);
root = m_ptreelistctrl->AddRoot(m_strRootname, -1, -1, pcusData);

pcusData = new custTreeItemData(-1, TREEITEM_ASSMB);
item_assmb = m_ptreelistctrl->AppendItem( root, "Assem", 0, 3, pcusData );
for ( int i = 1; i <= 100000; i++ )
{
    unsigned char r,g,b;
    wxTreeItemId item_assmb_entities;
    custTreeItemData* pcusTrData = new custTreeItemData(i, TREEITEM_ASSMB);
    pcusTrData->SetDataId(10);
    item_assmb_entities = m_ptreelistctrl->AppendItem(item_assmb, "srinvas", 0, 3, pcusTrData);
    FillItems(pcusTrData, item_assmb_entities);
    AppendColorImagetoTree( item_assmb_entities, 2, r, g, b );
    AppendIdtoTree(item_assmb_entities, 1, 10);
    if( true )
    {
        m_ptreelistctrl->SetItemImage( item_assmb_entities, 0, 3, wxTreeItemIcon_Selected);
        m_ptreelistctrl->SetItemImage( item_assmb_entities, 0, 3 );
        pcusTrData->SetCheckStatus(true);
    }
    else
    {
        m_ptreelistctrl->SetItemImage( item_assmb_entities, 0, 2, wxTreeItemIcon_Selected);
        m_ptreelistctrl->SetItemImage( item_assmb_entities, 0, 2 );
        pcusTrData->SetCheckStatus(false);
    }
}

pcusData = new custTreeItemData(-1, TREEITEM_COMPS);
item_comp = m_ptreelistctrl->AppendItem( root, "Comps", 0, 3, pcusData );
for ( int i = 1; i <= 100000; i++ )
{
    unsigned char r,g,b;
    wxTreeItemId item_comp_entities;
    custTreeItemData* pcusTrData = new custTreeItemData(i, TREEITEM_COMPS);
    pcusTrData->SetDataId(10);
    item_comp_entities = m_ptreelistctrl->AppendItem( item_comp, "Comps", 0, 3, pcusTrData);
    FillItems(pcusTrData, item_comp_entities);
    AppendColorImagetoTree( item_comp_entities, 2, r, g, b );
    AppendIdtoTree(item_comp_entities, 1, 10);
    if( true )
    {
        m_ptreelistctrl->SetItemImage( item_comp_entities, 0, 3, wxTreeItemIcon_Selected);
        m_ptreelistctrl->SetItemImage( item_comp_entities, 0, 3 );
        pcusTrData->SetCheckStatus(true);
    }
    else
    {
        m_ptreelistctrl->SetItemImage( item_comp_entities, 0, 2, wxTreeItemIcon_Selected);
        m_ptreelistctrl->SetItemImage( item_comp_entities, 0, 2 );
        pcusTrData->SetCheckStatus(false);
    }
}

time->Pause();
long cc = time->Time();
wxString strda;
strda.Printf("time taken %ld", cc);
wxMessageBox(strda.c_str());
Issue: I am going to display more than 200,000 children, but constructing the tree takes around 17 minutes.
My answer assumes that you have made sure that the bottleneck is indeed the wxTreeListCtrl. It could just as well be in your own code, so make sure of this before proceeding!
That being said, I think in this case you are better off using wxDataViewCtrl with a custom wxDataViewModel. The difference is that wxTreeListCtrl stores the entire tree in memory and offers no way to batch update the model / view, which might lead to the performance problems. In contrast, wxDataViewCtrl is just a view of your own model, which you have to adapt using your own implementation of wxDataViewModel. Note that wxDataViewModel has functions for batch updating the view, e.g. ItemsAdded, ItemsDeleted, ItemsChanged.
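Not part of the original answer, but if you do stay with wxTreeListCtrl, one cheap mitigation worth trying is the standard wxWindow Freeze()/Thaw() pair, which suppresses repainting during the bulk insert:
m_ptreelistctrl->Freeze();   // no repaints while inserting
for ( int i = 1; i <= 100000; i++ )
{
    // ... AppendItem / SetItemImage calls as above ...
}
m_ptreelistctrl->Thaw();     // single repaint at the end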
Hope this helps!

Creating a multi-dimensional vector of any size

I am trying to create a multi-dimensional histogram using multi-dimensional vectors, and I don't know the dimension size ahead of time. Any ideas on how to do this in C++?
Mustafa
Write your own class. For starters, you'll probably want
something along the lines of:
#include <functional>
#include <numeric>
#include <vector>

class MultiDimVector
{
    std::vector<int> myDims;
    std::vector<double> myData;
public:
    MultiDimVector( std::vector<int> dims )
        : myDims( dims )
        , myData( std::accumulate(
              dims.begin(), dims.end(), 1, std::multiplies<int>() ) )
    {
    }
};
For indexing, you'll have to take an std::vector<int> as the
index, and calculate it yourself. Basically something along the
lines of:
#include <cassert>

int MultiDimVector::calculateIndex(
    std::vector<int> const& indexes ) const
{
    int results = 0;
    assert( indexes.size() == myDims.size() );
    for ( size_t i = 0; i != indexes.size(); ++i ) {
        assert( indexes[i] < myDims[i] );
        results = myDims[i] * results + indexes[i];
    }
    return results;
}
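To round this out (my addition, assuming the operator is declared in the class), an element accessor built on calculateIndex might look like:
double& MultiDimVector::operator()( std::vector<int> const& indexes )
{
    return myData[ calculateIndex( indexes ) ];
}
// Usage: a 3 x 4 x 5 histogram, incrementing one bin
MultiDimVector hist( std::vector<int>{ 3, 4, 5 } );
hist( std::vector<int>{ 2, 1, 0 } ) += 1.0;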
You can use std::vector, like:
std::vector<std::vector<yourType> >
(or maybe if you use a framework you can search its documentation for a better integrated array replacement ;) )
std::vector<std::vector<int>> multi_dim_vector_name(num_rows, std::vector<int>(num_cols, default_value));
// You can use this format to further nest to the dimensions you want.
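For instance (my example, assuming the num_* counts are only known at runtime), three dimensions just nest once more:
// A num_x * num_y * num_z block, every element initialised to 0
std::vector<std::vector<std::vector<int>>> v3(
    num_x, std::vector<std::vector<int>>(
        num_y, std::vector<int>( num_z, 0 ) ) );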

Is it possible to apply the breadth-first search algorithm of the Boost library to a matrix?

My task is to find the shortest way in a matrix from one point to another. It is only possible to move in four directions (UP, DOWN, LEFT, RIGHT).
0 0 0 0 1 0 0 0
1 0 0 0 0 0 0 0
0 0 0 1 0 1 F 0
0 1 0 1 0 0 0 0
0 0 0 1 0 0 0 0
0 S 0 1 0 0 1 0
0 0 0 0 0 0 1 0
0 0 0 0 0 0 1 0
S - Start point
F - Destination place (Finish)
0 - free cells (we can move through them)
1 - "walls" (we can't move through them)
It is obvious that a breadth-first search solves this problem optimally.
I know that the Boost library supplies this algorithm, but I haven't used Boost before.
How can I do a breadth-first search in my case using Boost?
As I understand it, the breadth-first search algorithm of Boost is intended only for graphs.
I guess that it isn't a good idea to convert the matrix to a graph with m*n vertices and m*(n-1) + (m-1)*n edges.
Can I apply the breadth-first search algorithm to the matrix (without converting it to a graph), or is it better to implement my own breadth-first search function?
(Apologies in advance for the length of this answer. It's been a while since I've used the BGL and I thought this would make a good refresher. Full code is here.)
The beauty of the Boost Graph Library (and generic programming in general) is that you don't need to use any particular data structure in order to take advantage of a given algorithm. The matrix you've provided along with the rules about traversing it already define a graph. All that's needed is to encode those rules in a traits class that can be used to leverage the BGL algorithms.
Specifically, what we want to do is define a specialization of boost::graph_traits<T> for your graph. Let's assume your matrix is a single array of ints in row-major format. Unfortunately, specializing graph_traits for int[N] won't be sufficient, as it doesn't provide any information about the dimensions of the matrix. So let's define your graph as follows:
namespace matrix
{
    typedef int cell;
    static const int FREE = 0;
    static const int WALL = 1;

    template< size_t ROWS, size_t COLS >
    struct graph
    {
        cell cells[ROWS*COLS];
    };
}
I've used composition for the cell data here but you could just as easily use a pointer if it's to be managed externally. Now we have a type encoded with the matrix dimensions that can be used to specialize graph_traits. But first let's define some of the functions and types we'll need.
Vertex type and helper functions:
namespace matrix
{
    typedef size_t vertex_descriptor;

    template< size_t ROWS, size_t COLS >
    size_t get_row(
        vertex_descriptor vertex,
        graph< ROWS, COLS > const & )
    {
        return vertex / COLS;
    }

    template< size_t ROWS, size_t COLS >
    size_t get_col(
        vertex_descriptor vertex,
        graph< ROWS, COLS > const & )
    {
        return vertex % COLS;
    }

    template< size_t ROWS, size_t COLS >
    vertex_descriptor make_vertex(
        size_t row,
        size_t col,
        graph< ROWS, COLS > const & )
    {
        return row * COLS + col;
    }
}
Types and functions to traverse the vertices:
namespace matrix
{
    typedef const cell * vertex_iterator;

    template< size_t ROWS, size_t COLS >
    std::pair< vertex_iterator, vertex_iterator >
    vertices( graph< ROWS, COLS > const & g )
    {
        return std::make_pair( g.cells, g.cells + ROWS*COLS );
    }

    typedef size_t vertices_size_type;

    template< size_t ROWS, size_t COLS >
    vertices_size_type
    num_vertices( graph< ROWS, COLS > const & g )
    {
        return ROWS*COLS;
    }
}
Edge type:
namespace matrix
{
    typedef std::pair< vertex_descriptor, vertex_descriptor > edge_descriptor;

    bool operator==(
        edge_descriptor const & lhs,
        edge_descriptor const & rhs )
    {
        return
            ( lhs.first == rhs.first && lhs.second == rhs.second ) ||
            ( lhs.first == rhs.second && lhs.second == rhs.first );
    }

    bool operator!=(
        edge_descriptor const & lhs,
        edge_descriptor const & rhs )
    {
        return !(lhs == rhs);
    }
}
And finally, iterators and functions to help us traverse the incidence relationships that exist between the vertices and edges:
namespace matrix
{
    template< size_t ROWS, size_t COLS >
    vertex_descriptor
    source(
        edge_descriptor const & edge,
        graph< ROWS, COLS > const & )
    {
        return edge.first;
    }

    template< size_t ROWS, size_t COLS >
    vertex_descriptor
    target(
        edge_descriptor const & edge,
        graph< ROWS, COLS > const & )
    {
        return edge.second;
    }

    typedef boost::shared_container_iterator< std::vector< edge_descriptor > > out_edge_iterator;

    template< size_t ROWS, size_t COLS >
    std::pair< out_edge_iterator, out_edge_iterator >
    out_edges(
        vertex_descriptor vertex,
        graph< ROWS, COLS > const & g )
    {
        boost::shared_ptr< std::vector< edge_descriptor > > edges( new std::vector< edge_descriptor >() );
        if( g.cells[vertex] == FREE )
        {
            size_t
                row = get_row( vertex, g ),
                col = get_col( vertex, g );
            if( row != 0 )
            {
                vertex_descriptor up = make_vertex( row - 1, col, g );
                if( g.cells[up] == FREE )
                    edges->push_back( edge_descriptor( vertex, up ) );
            }
            if( row != ROWS-1 )
            {
                vertex_descriptor down = make_vertex( row + 1, col, g );
                if( g.cells[down] == FREE )
                    edges->push_back( edge_descriptor( vertex, down ) );
            }
            if( col != 0 )
            {
                vertex_descriptor left = make_vertex( row, col - 1, g );
                if( g.cells[left] == FREE )
                    edges->push_back( edge_descriptor( vertex, left ) );
            }
            if( col != COLS-1 )
            {
                vertex_descriptor right = make_vertex( row, col + 1, g );
                if( g.cells[right] == FREE )
                    edges->push_back( edge_descriptor( vertex, right ) );
            }
        }
        return boost::make_shared_container_range( edges );
    }

    typedef size_t degree_size_type;

    template< size_t ROWS, size_t COLS >
    degree_size_type
    out_degree(
        vertex_descriptor vertex,
        graph< ROWS, COLS > const & g )
    {
        std::pair< out_edge_iterator, out_edge_iterator > edges = out_edges( vertex, g );
        return std::distance( edges.first, edges.second );
    }
}
Now we're ready to define our specialization of boost::graph_traits:
namespace boost
{
    template< size_t ROWS, size_t COLS >
    struct graph_traits< matrix::graph< ROWS, COLS > >
    {
        typedef matrix::vertex_descriptor vertex_descriptor;
        typedef matrix::edge_descriptor edge_descriptor;
        typedef matrix::out_edge_iterator out_edge_iterator;
        typedef matrix::vertex_iterator vertex_iterator;

        typedef boost::undirected_tag directed_category;
        typedef boost::disallow_parallel_edge_tag edge_parallel_category;
        struct traversal_category :
            virtual boost::vertex_list_graph_tag,
            virtual boost::incidence_graph_tag {};

        typedef matrix::vertices_size_type vertices_size_type;
        typedef matrix::degree_size_type degree_size_type;

        static vertex_descriptor null_vertex() { return ROWS*COLS; }
    };
}
And here's how to perform the breadth-first search and find the shortest paths:
int main()
{
    const size_t rows = 8, cols = 8;

    using namespace matrix;
    typedef graph< rows, cols > my_graph;
    my_graph g =
    {
        FREE, FREE, FREE, FREE, WALL, FREE, FREE, FREE,
        WALL, FREE, FREE, FREE, FREE, FREE, FREE, FREE,
        FREE, FREE, FREE, WALL, FREE, WALL, FREE, FREE,
        FREE, WALL, FREE, WALL, FREE, FREE, FREE, FREE,
        FREE, FREE, FREE, WALL, FREE, FREE, FREE, FREE,
        FREE, FREE, FREE, WALL, FREE, FREE, WALL, FREE,
        FREE, FREE, FREE, FREE, FREE, FREE, WALL, FREE,
        FREE, FREE, FREE, FREE, FREE, FREE, WALL, FREE,
    };
    const vertex_descriptor
        start_vertex = make_vertex( 5, 1, g ),
        finish_vertex = make_vertex( 2, 6, g );

    vertex_descriptor predecessors[rows*cols] = { 0 };

    using namespace boost;
    breadth_first_search(
        g,
        start_vertex,
        visitor( make_bfs_visitor( record_predecessors( predecessors, on_tree_edge() ) ) ).
        vertex_index_map( identity_property_map() ) );

    typedef std::list< vertex_descriptor > path;
    path p;
    for( vertex_descriptor vertex = finish_vertex; vertex != start_vertex; vertex = predecessors[vertex] )
        p.push_front( vertex );
    p.push_front( start_vertex );

    for( path::const_iterator cell = p.begin(); cell != p.end(); ++cell )
        std::cout << "[" << get_row( *cell, g ) << ", " << get_col( *cell, g ) << "]\n";

    return 0;
}
Which outputs the cells along the shortest path from start to finish:
[5, 1]
[4, 1]
[4, 2]
[3, 2]
[2, 2]
[1, 2]
[1, 3]
[1, 4]
[1, 5]
[1, 6]
[2, 6]
You can definitely use the Boost Graph library for this! The idea behind how the algorithms in this library are implemented is to abstract from any graph data structure and instead operate in terms of iterators. For example, to move from one node to another node, the algorithms use an adjacency iterator. You would essentially look at a particular algorithm, e.g. BFS, and find out what concepts this algorithm requires: in this case the graph you use with it needs to be a "Vertex List Graph" and an "Incidence Graph". Note that these are not concrete classes but concepts: they specify how the data structure is to be accessed. The algorithm also takes a number of additional arguments like the start node and a property map to mark (color) the nodes already visited.
To use the algorithm with your matrix you would give a "graph view" of your matrix to the algorithm: a node is adjacent to its direct neighbors unless the respective neighbor is set to 1 (and, obviously, you don't walk off the edges of the matrix). Creating a graph view like this isn't entirely trivial, but I think it is very useful for understanding how the Boost Graph library works: even if you don't want to use this particular library, it is a good example of how algorithms can be implemented against abstractions to make them applicable even in entirely unforeseen situations (OK, I'm biased: long before Jeremy created the Boost Graph library, I wrote my diploma thesis on roughly the same thing, and we came up with essentially identical abstractions).
All that said, I think that using breadth first search may not be worth the effort to learn about the Boost Graph library: it is such a trivial algorithm that you might want to just implement it directly. Also, this looks pretty much like a homework assignment in which case you are probably meant to implement the algorithm yourself. Although it might be quite impressive to have used the Boost Graph library for this, your instructor may not consider it that way. What I would consider even more impressive would be to implement BFS in a style independent from the data structure as the Boost Graph library does and then use this. With the guidance from the Boost Graph library this is definitely doable, even as an exercise (although probably more effort than required). BTW, yes, I could post code but, no, I won't. I'm happy to help with concrete problems being posted, though.