I am trying to convert a point cloud to an octree with exactly 32768 leaf nodes and then store the x, y, z coordinates and the occupancy probability of each node. The octree resolution is 0.001.
C++ Code:
void create_octree(const std::string& input_file, const std::string& output_file, double resolution) {
    // Load a pointcloud from a PCD file
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::io::loadPCDFile(input_file, *cloud);
    //double size = 15*resolution;
    // Create an octree with a specified resolution
    octomap::OcTree tree(resolution);
    // Insert the pointcloud into the octree
    for (const auto& point : cloud->points)
    {
        tree.updateNode(octomap::point3d(point.x, point.y, point.z), true);
    }
    size_t leaf_count = tree.getNumLeafNodes();
    size_t node_count = tree.size();
    int desiredLeafNodes = 32768;
    while (leaf_count < desiredLeafNodes)
    {
        tree.prune();
        leaf_count = tree.getNumLeafNodes();
    }
    // Open a file for writing
    std::ofstream output_file_stream(output_file);
    // Iterate through the octree and write the occupied nodes to the file
    for (octomap::OcTree::leaf_iterator it = tree.begin_leafs(); it != tree.end_leafs(); ++it)
    {
        if (tree.isNodeOccupied(*it))
        {
            octomap::point3d pos = it.getCoordinate();
            // Get the occupancy probability of the node
            double occupancy_probability = it->getOccupancy();
            // Write the coordinates and occupancy probability to the file
            output_file_stream << leaf_count << " " << node_count << " " << pos.x() << " " << pos.y()
                               << " " << pos.z() << " " << occupancy_probability << std::endl;
        }
    }
    output_file_stream.close();
}
PYBIND11_MODULE(octomap_module, m) {
m.doc() = "pybind11 octomap_module";
m.def("add", &add, "A function that adds two numbers");
m.def("create_octree", &create_octree, "A function to create octree from point cloud);
}
Python Code:
import open3d as o3d
import octomap_module
#pcd = o3d.io.read_point_cloud('./data/Armadillo.ply')
#o3d.io.write_point_cloud("Armadillo.pcd", pcd)
input_file = "Armadillo.pcd"
output_file = "Armadillo_octree.txt"
resolution = 0.001
octomap_module.create_octree(input_file,output_file,resolution)
I tried checking the number of leaf nodes in a while loop and pruning the tree until the desired number of leaf nodes, 32768 (32×32×32), is reached. tree.prune() does not seem to work; I still get the same 40936 leaf nodes.
Is there a better way to solve this?
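One observation and an alternative sketch (an assumption on my part, not a verified fix): prune() only merges eight children with identical occupancy into their parent, so it can only reduce the leaf count, and with leaf_count already 40936 (> 32768) the loop body above never even runs. If the goal is exactly 32768 = 32^3 output cells, one option is to leave the octree as built and instead sample a fixed 32×32×32 grid over the cloud's bounding box, querying the tree at each cell center. The helper below is hypothetical; it only reuses the tree and cloud built in the function above:

#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/common/common.h>   // pcl::getMinMax3D
#include <octomap/octomap.h>
#include <fstream>

// Sketch: write one line per cell of a fixed 32x32x32 grid spanning the cloud's
// bounding box, querying the octree built at the original resolution.
void write_fixed_grid(const octomap::OcTree& tree,
                      const pcl::PointCloud<pcl::PointXYZ>& cloud,
                      const std::string& output_file)
{
    pcl::PointXYZ min_pt, max_pt;
    pcl::getMinMax3D(cloud, min_pt, max_pt);

    const int n = 32;                                   // 32^3 = 32768 cells
    const double dx = (max_pt.x - min_pt.x) / n;
    const double dy = (max_pt.y - min_pt.y) / n;
    const double dz = (max_pt.z - min_pt.z) / n;

    std::ofstream out(output_file);
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            for (int k = 0; k < n; ++k)
            {
                const double x = min_pt.x + (i + 0.5) * dx;
                const double y = min_pt.y + (j + 0.5) * dy;
                const double z = min_pt.z + (k + 0.5) * dz;
                octomap::OcTreeNode* node = tree.search(octomap::point3d(x, y, z));
                const double occ = node ? node->getOccupancy() : 0.0;  // unknown cells -> 0
                out << x << " " << y << " " << z << " " << occ << "\n";
            }
}

Note that the grid cells are not octree leaves; this only guarantees exactly 32768 output rows of (x, y, z, occupancy).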
The main purpose is to go through all child elements of a STEP model and build a tree view out of them using OpenCascade. Currently I load the STEP model from a given path into a TopoDS_Shape object and then pass this object to an AIS_Shape object in order to display the model in the viewport. I thought there would be a method that takes a model path, or the model itself, as a parameter, recursively goes through all its children, and finally prints them somewhere.
void OcctGtkViewer::onSTEPLoad(std::string filename)
{
    string tempFileName = "";
    STEPControl_Reader reader;
    TopoDS_Shape shape;
    if (filename == "")
        filename = tempFileName;
    const char *encodename = filename.c_str();
    if (reader.ReadFile(encodename) != IFSelect_RetDone)
    {
        cout << "The selected file is not a STEP file!" << endl;
        return;
    }
    Standard_Integer nbr = reader.NbRootsForTransfer();
    for (Standard_Integer n = 1; n <= nbr; n++)
    {
        cout << "STEP: transferring root object " << n << endl;
        reader.TransferRoot(n);
    }
    Standard_Integer nbs = reader.NbShapes();
    shape = reader.OneShape();
    STEPShape = reader.OneShape();
    cout << "STEP: file loaded" << endl;
    //TDocStd_Document document("/home/kirill/Desktop/1.STEP");
    //traverseDocument(document);
    return;
}
{
    // dummy shape for testing
    onSTEPLoad(pathToFile);
    // TopoDS_Shape aBox = BRepPrimAPI_MakeBox(100.0, 50.0, 90.0).Shape();
    Handle(AIS_Shape) aShape = new AIS_Shape(STEPShape);
    myContext->Display(aShape, AIS_Shaded, 0, false);
}
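For the recursive walk itself, a minimal sketch (assuming nothing beyond standard OCCT headers; printChildren is a hypothetical helper, not part of the code above) that visits every direct and nested sub-shape with TopoDS_Iterator could look like this:

#include <TopoDS_Shape.hxx>
#include <TopoDS_Iterator.hxx>
#include <iostream>
#include <string>

// Recursively print every sub-shape of 'shape', indented by nesting depth.
void printChildren(const TopoDS_Shape& shape, int depth = 0)
{
    for (TopoDS_Iterator it(shape); it.More(); it.Next())
    {
        const TopoDS_Shape& child = it.Value();
        std::cout << std::string(depth * 2, ' ')
                  << "sub-shape of type " << child.ShapeType() << std::endl;
        printChildren(child, depth + 1);
    }
}

// Usage after loading, e.g.: printChildren(STEPShape);

The same traversal could feed a tree-view widget instead of std::cout; only the per-node action changes.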
I have a structure named "Particle" and I want to create several objects whose names depend on an int.
As I am inside a for loop the name is going to change as follows: part0, part1, part2.
for (int i = 0; i<num_particles; i++)
{
//double sample_x, sample_y, sample_theta;
string name = "part" + std::to_string(i);
Particle name;
name.id = i;
name.x = dist_x(gen);
name.y = dist_y(gen);
name.theta = dist_theta(gen);
cout << "Sample" << " " << name.x << " " << name.y << " " << name.theta << endl;
}
As you can imagine this approach doesn't work, do you have any solution?
I have updated my question; this is my new approach:
I have created a vector and an int num_particles:
std::vector<Particle> particles;
And the function code:
void ParticleFilter::init(double x, double y, double theta, double std[]) {
    // TODO: Set the number of particles. Initialize all particles to first position (based on estimates of
    //       x, y, theta and their uncertainties from GPS) and all weights to 1.
    // Add random Gaussian noise to each particle.
    // NOTE: Consult particle_filter.h for more information about this method (and others in this file).
    default_random_engine gen;
    normal_distribution<double> dist_x(x, std[0]);
    normal_distribution<double> dist_y(y, std[1]);
    normal_distribution<double> dist_theta(theta, std[2]);
    //for (int i = 0; i<num_particles; i++)
    //{
    //double sample_x, sample_y, sample_theta;
    //string name = "part";
    //+ std::to_string(i);
    //Particle particles;
    particles[num_particles].id = num_particles;
    particles[num_particles].x = dist_x(gen);
    particles[num_particles].y = dist_y(gen);
    particles[num_particles].theta = dist_theta(gen);
    num_particles++;
    cout << "Sample" << " " << particles[num_particles].x << " " << particles[num_particles].y << " " << particles[num_particles].theta << endl;
    //}
}
But it doesn't work yet, it outputs "Segmentation fault".
You can simply use the itoa() function from cstdlib in your code (note that itoa() is non-standard; std::to_string is the portable alternative):
for (int i = 0; i < 10; i++)
{
    char a[16];                          // buffer for the converted number
    string pa = "part_";
    string name = pa + itoa(i, a, 10);   // convert i to a base-10 string
    cout << "Sample" << " " << name << endl;
}
Sample Output:
Sample part_0
Sample part_1
Sample part_2
Sample part_3
Sample part_4
Sample part_5
Sample part_6
Sample part_7
Sample part_8
Sample part_9
This construct exists in C++; it is called std::vector.
// we want to have a bunch of variables of type Particle
// all named particles[i] for i == 0,1,2....
std::vector<Particle> particles;
// create a new particle variable
particles.emplace_back(x, y, theta);
// print the variable number 42
std::cout << particles[42];
Why do you want to go down the messy road of variable naming such as var0, var1, var2 and so on? I'd recommend creating an array or vector.
It's not clear from your code snippet why you need to create variables with different names. Moreover, your code/use case doesn't sit well with the concept of variable scoping.
I am using OpenCV's implementation of Random Forest algorithm (i.e. RTrees) and am facing a little problem when setting parameters.
I have 5 classes and 3 variables and I want to add weight to classes because the samples sizes for each classes vary a lot.
I took a look at the documentation here and here and it seems that the priors array is the solution, but when I try to give it 5 weights (for my 5 classes) it gives me the following error:
OpenCV Error: One of arguments' values is out of range (Every class weight should be positive) in CvDTreeTrainData::set_data, file /home/sguinard/dev/opencv-2.4.13/modules/ml/src/tree.cpp, line 644
terminate called after throwing an instance of 'cv::Exception'
what(): /home/sguinard/dev/opencv-2.4.13/modules/ml/src/tree.cpp:644: error: (-211) Every class weight should be positive in function CvDTreeTrainData::set_data
If I understand correctly, this is due to the fact that the priors array has 5 elements. And when I try to give it only 3 elements (my number of variables) everything works.
According to the documentation, this array should be used to add weight to classes, but it actually seems that it is used to add weight to variables...
So, does anyone know how to add weight to classes with OpenCV's RTrees algorithm? (I'm working with OpenCV 2.4.13 in C++.)
Thanks in advance!
Here is my code :
cv::Mat RandomForest(cv::Mat train_data, cv::Mat response_data, cv::Mat sample_data, int size, int size_predict, float weights[5])
{
#undef CV_TERMCRIT_ITER
#define CV_TERMCRIT_ITER 10
#define ATTRIBUTES_PER_SAMPLE 3
cv::RandomTrees RFTree;
float priors[] = {1,1,1};
CvRTParams RFParams = CvRTParams(25, // max depth
500, // min sample count
0, // regression accuracy: N/A here
false, // compute surrogate split, no missing data
5, // max number of categories (use sub-optimal algorithm for larger numbers)
//priors
weights, // the array of priors (use weights or priors)
true,//false, // calculate variable importance
2, // number of variables randomly selected at node and used to find the best split(s).
100, // max number of trees in the forest
0.01f, // forest accuracy
CV_TERMCRIT_ITER | CV_TERMCRIT_EPS // termination criteria
);
cv::Mat varIdx = cv::Mat();
cv::Mat vartype( train_data.cols + 1, 1, CV_8U );
vartype.setTo(cv::Scalar::all(CV_VAR_NUMERICAL));
vartype.at<uchar>(ATTRIBUTES_PER_SAMPLE, 0) = CV_VAR_CATEGORICAL;
cv::Mat sampleIdx = cv::Mat();
cv::Mat missingdatamask = cv::Mat();
for (int i=0; i!=train_data.rows; ++i)
{
for (int j=0; j!=train_data.cols; ++j)
{
if(train_data.at<float>(i,j)<0
|| train_data.at<float>(i,j)>10000
|| !float(train_data.at<float>(i,j)))
{train_data.at<float>(i,j)=0;}
}
}
// Training
std::cout << "Training ....." << std::flush;
bool train = RFTree.train(train_data,
CV_ROW_SAMPLE,//tflag,
response_data,//responses,
varIdx,
sampleIdx,
vartype,
missingdatamask,
RFParams);
if (train){std::cout << " Done" << std::endl;}
else{std::cout << " Failed" << std::endl;return cv::Mat();}
std::cout << "Variable Importance : " << std::endl;
cv::Mat VI = RFTree.getVarImportance();
for (int i=0; i!=VI.cols; ++i){std::cout << VI.at<float>(i) << " - " << std::flush;}
std::cout << std::endl;
std::cout << "Predicting ....." << std::flush;
cv::Mat predict(1,sample_data.rows,CV_32F);
float max = 0;
for (int i=0; i!=sample_data.rows; ++i)
{
predict.at<float>(i) = RFTree.predict(sample_data.row(i));
if (predict.at<float>(i)>max){max=predict.at<float>(i);/*std::cout << predict.at<float>(i) << "-"<< std::flush;*/}
}
// Personal test due to an error I got (everything predicted as 0)
if (max==0){std::cout << " Failed ... Max value = 0" << std::endl;return cv::Mat();}
std::cout << " Done ... Max value = " << max << std::endl;
return predict;
}
I ran into an annoying problem and I have no idea what is causing it. I hope you can help me find a solution.
Framework: I implemented a sparse_matrix class using the CSR representation and I used this object as the basis for a recommendation system. The class is defined as follows:
class sparse_matrix_csr
{
public:
    sparse_matrix_csr();
    sparse_matrix_csr(const std::vector<int> &row_indices);
    sparse_matrix_csr(const std::vector<int> &row_indices, const size_t nb_columns);
    // other member functions omitted
private:
    std::vector<int> _row_indices;
    std::vector<int> _row_start_indices;
    std::vector<int> _column_indices;
    std::vector<double> _values;
    bool _column_sorted_by_index;
};
The _row_indices vector contains the row indices of the matrix. _row_start_indices contains, for each row, the index of that row's first element in _column_indices (which holds the column indices) and in _values (which holds the matrix elements). In particular, the constructor sparse_matrix_csr(const std::vector<int> &row_indices, const size_t nb_columns) is implemented as follows:
sparse_matrix_csr::sparse_matrix_csr(const std::vector<int> &row_indices,
                                     const size_t nb_columns):
    _row_indices(row_indices),
    _row_start_indices(row_indices.size()),
    _column_indices(row_indices.size() * nb_columns, 0),
    _values(row_indices.size() * nb_columns, 0.0),
    _column_sorted_by_index(true)
{
    for (size_t i = 0; i < _row_start_indices.size(); ++i)
        _row_start_indices[i] = i * nb_columns;
}
This constructor takes the indices of the rows of the sparse matrix and the number of elements that will be contained in each row. In fact, in the application I am considering, I have some matrices that are sparse only with respect to the rows.
Problem: The algorithm is structured as follows
// Instruction block 1
{
    // Do something
}
sparse_matrix_csr mat(list_indices, nb_columns);
// Instruction block 2
{
    // Do something
}
If I run the first block of instructions alone (commenting out all that follows), my algorithm runs smoothly. However, if I uncomment the second part of the algorithm, the first part slows down a lot. I have been able to identify the critical line for this slow-down as the declaration of mat. However, I cannot explain why it affects the first part of the algorithm. The full algorithm is reported at the end of my question.
My considerations: I have never observed such an effect before, so I am a little confused. One possibility I have considered is a problem in memory management that causes the slow-down of the previous part of the algorithm. I am currently working on some matrices that have around 2e6 elements, i.e. around 4e6 values stored in 2 vectors. I read in another post on Stack Overflow that the data contained in a std::vector are allocated on the heap. Is that always true, even if I initialize a vector with a given size as I do in the sparse matrix constructors above?
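To illustrate what I mean by "data allocated on the heap", here is a minimal standalone snippet (not part of my algorithm) comparing the size of the vector object itself with the storage needed for its elements:

#include <iostream>
#include <vector>

int main()
{
    // The vector object is only a small header (typically three pointers);
    // the elements themselves are allocated dynamically, also when a size
    // is passed to the constructor.
    std::vector<double> v(2000000, 0.0);
    std::cout << "sizeof(v) = " << sizeof(v) << " bytes\n";
    std::cout << "element storage = " << v.size() * sizeof(double) << " bytes\n";
}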
If you need some clarification, do not hesitate!
Thanks in advance,
Pierpaolo
Full algorithm:
void collaborative_filtering_mpi (std::string ratings_file, std::string targets_file,
std::string output_file, int k_neighbors, double shrinkage_factor,
bool output_debug_data)
{
std::cout << "COLLABORATIVE FILTERING ALGORITHM - k = " << k_neighbors
<< " , d = " << shrinkage_factor << std::endl;
stopwatch sw_total;
sw_total.start();
//-------------------------------------
// STEP 1: Read data from input files
//-------------------------------------
std::cout << "1) Read input files: ";
stopwatch sw;
sw.start();
//-------------------------------
// 1.1) Read user rating matrix
//-------------------------------
// Initialization
sparse_matrix_csr user_item_rating_matrix(ratings_file, true, true, true); // file is sorted, skip header
//-------------------------------------------------
// 1.2) Read targets (user, item) to be predicted
//-------------------------------------------------
// Initialization
sparse_matrix_csr targets(targets_file, true, false, false); // do not read ratings column
sw.stop();
double time_input = sw.get_duration();
std::cout << time_input << std::endl;
//-----------------------------------------
// STEP 2: Pre-computations
//-----------------------------------------
std::cout << "2) Pre-computations: ";
sw.start();
//-----------------------------------------------------------------------
// 2.1) Sort user_item_rating_matrix and compute relevant sizes of the problem
//-----------------------------------------------------------------------
// Sort: if file is sorted, sort should do nothing.
user_item_rating_matrix.sort_columns_by_index();
// Compute list of users and list of items
std::vector<int> list_users;
user_item_rating_matrix.rows(list_users);
std::vector<int> list_items;
user_item_rating_matrix.columns(list_items);
// [DEBUG]: print user_rating_matrix
if (output_debug_data)
{
std::ofstream ofs_debug("data_debug/debug_user_rating_matrix.txt");
ofs_debug << user_item_rating_matrix;
ofs_debug.close();
}
//-------------------------------------------------------------
// 2.2) Compute inverse user rating matrix (indexed by column)
//-------------------------------------------------------------
// Initialize item_user_rating_matrix: it is a sparse matrix with
// items on the rows, users on the columns and rating as values.
// This variable will be helpful when computing similarities.
sparse_matrix_csr item_user_rating_matrix;
// Compute item_user_rating<_matrix by transposing the user_rating_matrix
transpose (user_item_rating_matrix, item_user_rating_matrix);
// [DEBUG]: print item_user_matrix_on_file
if (output_debug_data)
{
std::ofstream ofs_debug("data_debug/debug_item_user_matrix.txt");
ofs_debug << item_user_rating_matrix;
ofs_debug.close();
}
//-----------------------------------------------
// 2.3) sort targets and compute relevant sizes
//-----------------------------------------------
// Compute list of target items
std::vector<int> list_target_items;
targets.columns(list_target_items);
// [DEBUG]: print targets
if (output_debug_data)
{
std::ofstream ofs_debug("data_debug/debug_targets.txt");
ofs_debug << targets;
ofs_debug.close();
}
//--------------------------------------------------------------
// 2.4) Compute difference between list_items and list_targets
//--------------------------------------------------------------
std::vector<int> list_non_target_items;
compute_difference_vector (list_items, list_target_items, list_non_target_items);
// [DEBUG]
if (output_debug_data)
{
std::ofstream ofs_debug("data_debug/debug_difference_vector.txt");
ofs_debug << "list_items - size: " << list_items.size() << std::endl;
for (std::vector<int>::const_iterator iter = list_items.begin(); iter != list_items.end(); ++iter)
ofs_debug << (*iter) << ",";
ofs_debug << std::endl << std::endl;
ofs_debug << "list_target_items - size: " << list_target_items.size() << std::endl;
for (std::vector<int>::const_iterator iter = list_target_items.begin(); iter != list_target_items.end(); ++iter)
ofs_debug << (*iter) << ",";
ofs_debug << std::endl << std::endl;
ofs_debug << "list_non_target_items - size: " << list_non_target_items.size() << std::endl;
for (std::vector<int>::const_iterator iter = list_non_target_items.begin(); iter != list_non_target_items.end(); ++iter)
ofs_debug << (*iter) << ",";
ofs_debug << std::endl << std::endl;
ofs_debug.close();
}
//--------------------------------------------
// 2.5) Compute average rating for each user
//--------------------------------------------
dictionary<int, double> average_rating_vector;
compute_average_rating(user_item_rating_matrix, average_rating_vector);
if (output_debug_data)
{
std::ofstream ofs_debug("data_debug/debug_average_rating_vector.txt");
for (dictionary<int, double>::const_iterator iter = average_rating_vector.begin();
iter != average_rating_vector.end(); ++iter)
ofs_debug << (*iter).get_key() << ": " << (*iter).get_value() << std::endl;
ofs_debug.close();
}
sw.stop();
std::cout << sw.get_duration() << std::endl;
//-------------------------------------
// STEP 3: Similarity matrix
//-------------------------------------
std::cout << "3) Compute similarity matrix: ";
sw.start();
// Initialize similarity_matrix with target items on the rows.
sparse_matrix_csr similarity_matrix(list_target_items,
list_target_items.size() + list_non_target_items.size());
// compute similarity matrix
compute_similarity_matrix_mpi(similarity_matrix,
item_user_rating_matrix,
average_rating_vector,
list_target_items,
list_non_target_items,
shrinkage_factor);
// [DEBUG]: print similarity matrix sorted by similarity
if (output_debug_data)
{
std::ofstream ofs("data_debug/debug_similarity_matrix.txt");
ofs << similarity_matrix;
ofs.close();
}
sw.stop();
std::cout << sw.get_duration() << std::endl;
//---------------------------------------------------------------
// STEP 4: Find top-K similar elements with positive similarity
//---------------------------------------------------------------
std::cout << "4) Find top-K similar elements:" << std::endl;
sw.start();
if (k_neighbors > 0)
{
//---------------------------------------------------
// 4.1) Sort similarity matrix by rating (row-wise)
//---------------------------------------------------
std::cout << " .... Sort similarity matrix by rating: ";
sw.start();
// Sort all the rows of the similarity_matrix by similarity.
// If two items have the same rating, sort them in descending order of item.
//similarity_matrix.sort_columns_by_value();
similarity_matrix.sort_columns_by_value();
// [DEBUG]: print similarity matrix sorted by similarity
if (output_debug_data)
{
std::ofstream ofs ("data_debug/debug_similarity_matrix_sorted_by_rating.txt");
ofs << similarity_matrix;
ofs.close();
}
sw.stop();
std::cout << sw.get_duration() << std::endl;
//--------------------------------------------------------
// 4.2) Cut the useless columns of the similarity matrix
//--------------------------------------------------------
std::cout << " .... Reduce similarity matrix: ";
sw.start();
sparse_matrix_csr similarity_matrix_reduced(list_target_items,
k_neighbors);
reduce_similarity_matrix_mpi (similarity_matrix,
similarity_matrix_reduced,
k_neighbors);
// [DEBUG]: print similarity matrix sorted by similarity
if (output_debug_data)
{
std::ofstream ofs ("data_debug/debug_similarity_matrix_reduced.txt");
ofs << similarity_matrix_reduced;
ofs.close();
}
sw.stop();
std::cout << sw.get_duration() << std::endl;
//---------------------------------------------------
// 4.3) Sort the reduced similarity matrix by items
//---------------------------------------------------
std::cout << " .... Sort similarity matrix by index: ";
sw.start();
// Sort all the rows of the similarity_matrix by index.
similarity_matrix_reduced.sort_columns_by_index();
// [DEBUG]: print similarity matrix sorted by similarity
if (output_debug_data)
{
std::ofstream ofs ("data_debug/debug_similarity_matrix_sorted.txt");
ofs << similarity_matrix_reduced;
ofs.close();
}
sw.stop();
std::cout << sw.get_duration() << std::endl;
//-----------------------------------------
// STEP 5: Compute predictions for targets
//-----------------------------------------
std::cout << "5) Compute predictions: ";
sw.start();
compute_predicted_ratings_mpi (targets,
user_item_rating_matrix,
similarity_matrix_reduced);
sw.stop();
std::cout << sw.get_duration() << std::endl;
}
else
{
//---------------------------------------------------
// 4.3) Sort the reduced similarity matrix by items
//---------------------------------------------------
std::cout << " .... Sort similarity matrix by index: ";
sw.start();
// Sort all the rows of the similarity_matrix by index.
// similarity_matrix.sort_columns_by_index();
similarity_matrix.sort_columns_by_index();
// [DEBUG]: print similarity matrix sorted by similarity
if (output_debug_data)
{
std::ofstream ofs ("data_debug/debug_similarity_matrix_sorted.txt");
ofs << similarity_matrix;
ofs.close();
}
sw.stop();
std::cout << sw.get_duration() << std::endl;
//-----------------------------------------
// STEP 5: Compute predictions for targets
//-----------------------------------------
std::cout << "5) Compute predictions: ";
sw.start();
compute_predicted_ratings_mpi (targets,
user_item_rating_matrix,
similarity_matrix);
sw.stop();
std::cout << sw.get_duration() << std::endl;
}
//-----------------------------------------------------
// STEP 6: Print the prediction matrix in output file
//-----------------------------------------------------
std::cout << "6) Print predictions on file: ";
sw.start();
std::ofstream ofs_output(output_file);
targets.print(ofs_output);
sw.stop();
double time_output = sw.get_duration();
std::cout << time_output << std::endl;
sw_total.stop();
double time_total = sw_total.get_duration();
std::cout << ">> Total computation time: " << time_total << std::endl;
std::cout << ">> Total computation time - no input/output: " << (time_total - time_input - time_output) << std::flush;
}
UPDATE - 05 February 2015: I don't know what happened, but the code is now running fine on my 64-bit machine. However, it is still very slow on a 32-bit VM that I need to use to run my code. I measured the size of a sparse_matrix object using sizeof and it occupies 52 bytes. I think this might be causing a cache problem (credit goes to @Dark Falcon). Do you have any ideas on how I could solve this problem and make my algorithm run efficiently on the 32-bit VM?
UPDATE - 06 February 2015: Well, the problem comes and goes. I tried to change the implementation of the sparse_matrix_csr class by wrapping the data in a shared_ptr, in the following way:
class sparse_matrix_csr
{
public:
    // public methods omitted
private:
    class sparse_matrix_csr_data
    {
    public:
        sparse_matrix_csr_data() {}
        sparse_matrix_csr_data(const std::vector<int> &row_indices): _row_indices(row_indices) {}
        sparse_matrix_csr_data(const std::vector<int> &row_indices, const size_t nb_columns);
        sparse_matrix_csr_data(const std::string file, bool file_is_sorted, bool skip_header, bool read_values);
        // data
        std::vector<int> _row_indices;
        std::vector<int> _row_start_indices;
        std::vector<int> _column_indices;
        std::vector<double> _values;
        bool _column_sorted_by_index;
    };
    std::shared_ptr<sparse_matrix_csr_data> _data;
};
This modification did not improve things. I am currently having problems both on the 32-bit VM and the 64-bit laptop. I have no idea about what is causing the program to be so slow.
Before you start reading, to help you understand my issue: I have copied the code from this link: Dijkstra Shortest Path with VertexList = ListS in boost graph
So, I am rewriting my program code to use Boost, but now, when 99% of it is ready, I am stuck with my GPS (for a game).
I have a list of nodes, which I added in a way that fortunately was easy to convert to the Boost approach. The only thing I needed to do was create a vertex array like this:
Vertex Vx[MAX_NODES];
I copied the typedefs from the link I have given.
The way I add vertices is this:
stringstream s;
s << i;
Vx[i] = add_vertex(s.str(),dgraph);
Where "i" equals an integer number. (eg int i = 9)
And eges are also easy to add. Now, I have my own structured array called "xNode". and eg :
xNode[i] holds all the information for X Y Z positions (xNode[i].X xNode[i].Y etc) of the nodes.
Now when using the code snippet from the link I have done this:
// Write shortest path
std::cout << "Shortest path from " << startid << " to " << endid << ":" << std::endl;
float totalDistance = 0;
for (PathType::reverse_iterator pathIterator = path.rbegin(); pathIterator != path.rend(); ++pathIterator)
{
    std::cout << source(*pathIterator, dgraph) << " -> " << target(*pathIterator, dgraph)
              << " = " << get(boost::edge_weight, dgraph, *pathIterator) << std::endl;
}
And this is where I am stuck: source(*pathIterator, dgraph) and target(*pathIterator, dgraph) return vertex descriptors (addresses), but I need the vertex indices to access xNode[i], where i is the node ID (or the vertex ID, Vx[i]).
How can I do that?
EDIT:
I tried to do:
for (PathType::reverse_iterator pathIterator = path.rbegin(); pathIterator != path.rend(); ++pathIterator)
{
    for (int i = 0; i < MAX_NODES; ++i)
    {
        if (source(*pathIterator, dgraph) == *((Vertex*)Vx[i]))
        {
            cout << " " << i << " " << endl;
            break;
        }
    }
}
but this just crashes.
With the typedefs from that question, you can use get(boost::vertex_index, dgraph, v) to get the index of v. You can also cache the property map using:
IndexMap vi = get(boost::vertex_index, dgraph);
then use get(vi, v) to get the index for v.
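Applied to the loop from the question, a minimal sketch could look like this (it reuses path, dgraph, xNode and the typedefs IndexMap and PathType from the question and the linked answer, so it is not self-contained on its own):

IndexMap vi = get(boost::vertex_index, dgraph);
for (PathType::reverse_iterator pathIterator = path.rbegin(); pathIterator != path.rend(); ++pathIterator)
{
    // Map the vertex descriptors back to integer node ids usable with xNode
    int srcId = get(vi, source(*pathIterator, dgraph));
    int dstId = get(vi, target(*pathIterator, dgraph));
    std::cout << srcId << " (" << xNode[srcId].X << ", " << xNode[srcId].Y << ", " << xNode[srcId].Z << ")"
              << " -> " << dstId << std::endl;
}

This avoids the linear search over Vx[] entirely, since the index map gives the id directly from the descriptor.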