GICP usage in PCL - C++

I have looked into how to implement Generalized ICP (GICP) with PCL. I found sample code in test/registration/test_registration.cpp in the PCL repository on GitHub. The sample code uses GICP as follows. Could you tell me whether the following procedure is the correct one for using GICP with PCL?
The function "align" belongs to the IterativeClosestPoint class. This makes me suspect that "align" does not take into account the point-to-plane model that A. V. Segal et al. refer to in their paper. I'm wondering whether this is the correct procedure for using GICP with PCL. In addition, I don't know why PCL does not provide sample code that uses estimateRigidTransformationBFGS, the method of the GeneralizedIterativeClosestPoint class.
GeneralizedIterativeClosestPoint<PointT, PointT> reg_guess;
reg_guess.setInputSource (src);
reg_guess.setInputTarget (transformed_tgt);
reg_guess.setMaximumIterations (50);
reg_guess.setTransformationEpsilon (1e-8);
reg_guess.align (output, transform.matrix ());  // the second argument is an initial guess for the transformation

I found the usage! The Autonomous Systems Lab at ETH Zurich publishes it in their GitHub repository. Please check out robust_point_cloud_registration!
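For anyone landing here later: align () calls the virtual computeTransformation (), which GeneralizedIterativeClosestPoint overrides (the override is where estimateRigidTransformationBFGS is used internally), so the GICP cost function is applied even though align () is inherited from the base registration interface. A minimal end-to-end sketch, assuming the clouds are filled elsewhere:

#include <iostream>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/registration/gicp.h>

int main ()
{
  using PointT = pcl::PointXYZ;
  pcl::PointCloud<PointT>::Ptr src (new pcl::PointCloud<PointT>);
  pcl::PointCloud<PointT>::Ptr tgt (new pcl::PointCloud<PointT>);
  // ... fill src and tgt here, e.g. with pcl::io::loadPCDFile ...

  pcl::GeneralizedIterativeClosestPoint<PointT, PointT> gicp;
  gicp.setInputSource (src);
  gicp.setInputTarget (tgt);
  gicp.setMaximumIterations (50);
  gicp.setTransformationEpsilon (1e-8);

  pcl::PointCloud<PointT> aligned;
  gicp.align (aligned);  // an initial guess matrix can be passed as a second argument

  if (gicp.hasConverged ())
    std::cout << "Final transform:\n" << gicp.getFinalTransformation () << std::endl;
  return 0;
}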

(ROS) Failed to create global planner

My setup is: ROS Melodic, Ubuntu 18.04.
I want to simulate a TurtleBot3 moving with my own global planner and have been following this tutorial to get started: http://wiki.ros.org/navigation/Tutorials/Writing%20A%20Global%20Path%20Planner%20As%20Plugin%20in%20ROS#Running_the_Plugin_on_the_Turtlebot. The tutorial seems to be made for ROS Hydro, but as it was the best source of guidance I could find, I hoped it would work.
The error I'm having is:
Failed to create the global_planner/GlobalPlanner planner, are you sure it is properly registered and that the containing library is built? Exception: MultiLibraryClassLoader: Could not create object of class type global_planner::GlobalPlanner as no factory exists for it. Make sure that the library exists and was explicitly loaded through MultiLibraryClassLoader::loadLibrary()
To my knowledge I've followed the tutorial as closely as possible, with only a few things done differently because I wanted to test it, couldn't do what the tutorial asked, or thought it wouldn't impact the results. What I have done differently is:
I use the carrot_planner.h and carrot_planner.cpp files from section 1 of the tutorial to test that everything works before trying my own code, to avoid confusion about where possible errors come from. It's not 'different' from the tutorial to my knowledge, but I figured I'd mention it. They are placed in catkin_ws/src/carrot_planner/src/global_planner/
The ROS package I'm working from is in catkin_ws/src and is called carrot_planner. In tutorial step 1.3 I use add_library(global_planner_lib src/global_planner/carrot_planner.cpp). I would not imagine that affects the results either.
In section 3 of the tutorial it mentions that 'First, you need to copy the package that contains your global planner (in our case global_planner) into the catkin workspace of your Turtlebot (e.g. catkin_ws).' Since my package was already in catkin_ws/src/, I haven't moved it, as I figured I didn't need to.
I've altered the 'move_base.launch' file in '/opt/ros/melodic/share/turtlebot3_navigation/launch/' instead of the 'move_base.launch.xml' in '/opt/ros/hydro/share/turtlebot_navigation/launch/includes/', as there doesn't seem to be a '...turtlebot3_navigation/launch/includes/' destination. There are files in launch, but no includes folder. Maybe that's a difference between Hydro and Melodic, I don't know. There may be a whole lot of things that need to be done differently from the tutorial when using Melodic, or with TurtleBot3, but I don't know.
I haven't made my own launch file for bringup of the TurtleBot, but have instead followed this tutorial (https://emanual.robotis.com/docs/en/platform/turtlebot3/nav_simulation/) to guide me with TurtleBot3. After finishing this step in the global planner tutorial, 'Save and close the move_base.launch.xml. Note that the name of the planner is global_planner/GlobalPlanner the same specified in global_planner_plugin.xml. Now, you are ready to use your new planner', I tested whether it worked by running 'roslaunch turtlebot3_gazebo turtlebot3_world.launch' and then 'roslaunch turtlebot3_navigation turtlebot3_navigation.launch map_file:=$HOME/map.yaml', which led to the error shown above. I have created the map yaml, so there's no misunderstanding about whether that's missing.
I would be very glad for any help, thank you ^^
Edit: My system only had 'navfn' on it, not 'global_planner' or 'carrot_planner', if that makes a difference.
After looking over the code I found a solution. It doesn't make everything work perfectly yet, but it seems to solve the immediate problem.
The problem was that in my 'global_planner_plugin.xml' I had just used the code provided in the tutorial:
<library path="lib/libglobal_planner_lib">
  <class name="global_planner/GlobalPlanner" type="global_planner::GlobalPlanner" base_class_type="nav_core::BaseGlobalPlanner">
    <description>This is a global planner plugin by iroboapp project.</description>
  </class>
</library>
But in the carrot_planner.cpp file it says:
PLUGINLIB_EXPORT_CLASS(carrot_planner::CarrotPlanner, nav_core::BaseGlobalPlanner)
Changing type="global_planner::GlobalPlanner" to type="carrot_planner::CarrotPlanner" and then launching TurtleBot3 no longer produces the same error.
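In other words, the type attribute in the plugin XML has to name exactly the class that is passed to PLUGINLIB_EXPORT_CLASS. A minimal sketch of the registration side (the header name is an assumption):

// carrot_planner.cpp -- plugin registration sketch
#include <pluginlib/class_list_macros.h>
#include "carrot_planner.h"  // assumed header declaring carrot_planner::CarrotPlanner

// The first argument is exactly what the XML's type attribute must contain.
PLUGINLIB_EXPORT_CLASS(carrot_planner::CarrotPlanner, nav_core::BaseGlobalPlanner)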

Custom submodules in pytorch / libtorch C++

Full disclosure: I asked this same question on the PyTorch forums a few days ago and got no reply, so this is technically a repost, but I believe it's still a good question, because I've been unable to find an answer anywhere online. Here goes:
Can you show an example of using register_module with a custom module?
The only examples I've found online register linear layers or convolutional layers as the submodules.
I tried to write my own module and register it with another module, and I couldn't get it to work.
My IDE is telling me no instance of overloaded function "MyModel::register_module" matches the argument list -- argument types are: (const char [14], TreeEmbedding)
(TreeEmbedding is the name of another struct I made which extends torch::nn::Module.)
Am I missing something? An example of this would be very helpful.
Edit: Additional context follows below.
I have a header file "model.h" which contains the following:
struct TreeEmbedding : torch::nn::Module {
    TreeEmbedding();
    torch::Tensor forward(Graph tree);
};

struct MyModel : torch::nn::Module {
    size_t embeddingSize;
    TreeEmbedding treeEmbedding;

    MyModel(size_t embeddingSize=10);
    torch::Tensor forward(std::vector<Graph> clauses, std::vector<Graph> contexts);
};
I also have a cpp file "model.cpp" which contains the following:
MyModel::MyModel(size_t embeddingSize) :
    embeddingSize(embeddingSize)
{
    treeEmbedding = register_module("treeEmbedding", TreeEmbedding{});
}
This setup still produces the same error as above. The code in the documentation does work (using built-in components like linear layers), but using a custom module does not. After tracking down torch::nn::Linear, it looks as though it is a ModuleHolder (whatever that is...).
Thanks,
Jack
I will accept a better answer if anyone can provide more details, but just in case anyone's wondering, I thought I would put up the little information I was able to find:
register_module takes a string as its first argument, and its second argument can either be a ModuleHolder (I don't know what this is...) or a shared_ptr to your module. Note that the member you assign the result to must then be declared as a std::shared_ptr<TreeEmbedding> rather than a plain TreeEmbedding, since register_module returns the shared_ptr. So here's my example:
treeEmbedding = register_module<TreeEmbedding>("treeEmbedding", std::make_shared<TreeEmbedding>());
This has worked for me so far.
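For what it's worth, a ModuleHolder is the wrapper that the TORCH_MODULE macro generates around an Impl class (torch::nn::Linear, for example, wraps torch::nn::LinearImpl), which is why the built-in layers can be passed to register_module directly. Below is a minimal compiling sketch of the shared_ptr approach; forward takes a Tensor instead of a Graph here just to keep it self-contained:

#include <torch/torch.h>
#include <memory>

struct TreeEmbedding : torch::nn::Module {
    torch::Tensor forward(torch::Tensor tree) {
        return tree;  // placeholder body
    }
};

struct MyModel : torch::nn::Module {
    size_t embeddingSize;
    std::shared_ptr<TreeEmbedding> treeEmbedding;  // shared_ptr, not a value member

    explicit MyModel(size_t embeddingSize = 10) : embeddingSize(embeddingSize) {
        // register_module returns the shared_ptr it was given, now owned by MyModel
        treeEmbedding = register_module("treeEmbedding", std::make_shared<TreeEmbedding>());
    }
};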

How to use LinearSvm?

Currently I'm using FastTree for binary classification, but I would like to give SVM a try and compare metrics.
All the docs mention LinearSvm, but I can't find code example anywhere.
mlContext.BinaryClassification.Trainers does not have public SVM trainers. There is a LinearSvm class and a LinearSvm.TrainLinearSvm static method, but they seem to be intended for different things.
What am I missing?
Version: 0.7
For some reason there is no trainer in the runtime API, but there is a linear SVM trainer in the Legacy API (for v0.7), found here. They might be generating a new one for the upcoming API, so my advice is to either use the legacy one or wait for a newer API.
At this stage, ML.NET is very much in development.
Copy-pasting the response I got on GitHub:
I have two answers for you: What the status of the API is, and how to use the LinearSVM in the meantime.
First, we have LinearSVM in the ML.NET codebase, but we do not yet have samples or the API extensions to place it in mlContext.BinaryClassification.Trainers. This is being worked through in issue #1318. I'll link this to that issue, and mark it as a bug.
In the meantime, you can use direct instantiation to get access to LinearSVM:
var arguments = new LinearSvm.Arguments()
{
    NumIterations = 20
};
var linearSvm = new LinearSvm(mlContext, arguments);
var svmTransformer = linearSvm.Fit(trainSet);
var scoredTest = svmTransformer.Transform(testSet);
This will give you an ITransformer, here called svmTransformer that you can use to operate on IDataView objects.

ITK OtsuMultipleThresholdsImageFilter does not process

I am trying to use ITK's OtsuMultipleThresholdsImageFilter in a project, but I get no output.
My aim is to make a simple interface between OpenCV and ITK.
To convert my data from OpenCV's Mat container to itk::Image, I use ITK's bridge to OpenCV, and I have checked that the data are properly passed to ITK.
I am even able to display the image thanks to QuickView.
But when I set up the filter, inspired by this example, the object returned by the GetThresholds() method is empty.
Here is the code I wrote:
typedef itk::Image<uchar,2> image_type;
typedef itk::OtsuMultipleThresholdsImageFilter<image_type, image_type> filter_type;
image_type::Pointer img = itk::OpenCVImageBridge::CVMatToITKImage<image_type>(src);
image_type::SizeType size = img->GetLargestPossibleRegion().GetSize();
filter_type::Pointer filter = filter_type::New();
filter->SetInput(img);
filter->SetNumberOfHistogramBins(256);
filter->SetNumberOfThresholds(K);
filter_type::ThresholdVectorType tmp = filter->GetThresholds();
std::cout<<"CHECK: "<<tmp.size()<<std::endl;
src is OpenCV's Mat of CV_8U(C1) type.
A fundamental concept in ITK is its pipeline architecture. You must connect the inputs and outputs and then update the pipeline.
You have connected the pipeline, but you have not executed it. You must call filter->Update().
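Concretely, with the same typedefs as in the question, the fix is one added call before reading the thresholds:

filter_type::Pointer filter = filter_type::New();
filter->SetInput(img);
filter->SetNumberOfHistogramBins(256);
filter->SetNumberOfThresholds(K);
filter->Update();  // executes the pipeline; without this, GetThresholds() returns an empty vector

filter_type::ThresholdVectorType thresholds = filter->GetThresholds();
std::cout << "CHECK: " << thresholds.size() << std::endl;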
Please read the ITK Software Guide to understand the fundamentals of ITK:
https://itk.org/ItkSoftwareGuide.pdf

Reverse engineering the checksum algorithm

I have an IP camera that receives commands using POST HTTP requests (for example, to call PTZ commands or set various camera settings). The standard way of controlling it is through its own web interface, which is partially an ActiveX plugin and partially standard HTML+JS. Of course, because of the ActiveX part, it only works in IE under Windows.
I'm attempting to change that by figuring out all the commands and writing a small Python or JavaScript program to do the same, so that it is more cross-platform.
I have one major problem. Each POST request contains a calculated "cc" field, which I assume is a checksum. The JS code in the cam interface shows that it is calculated by calling a function inside the plugin:
tt = new Date().Format("yyyyMMddhhmmss");
jo_header["tt"] = tt;
if (getCpPlugin() != null && getCpPlugin().valid) {
    jo_header["cc"] = getCpPlugin().nsstpGetCC(tt, session_id);
}
The nsstpGetCC function evidently calculates the checksum from two parameters, the timestamp and the session_id. Here is a real example (captured with Wireshark):
tt = "20171018231918"
session_id = "30303532646561302D623434612D3131"
cc = "849e586524385e1071caa4023a3df75401e5bb82"
The checksum appears to be 160-bit. I tried both SHA-1 and RIPEMD-160 with every combination of concatenating tt and session_id I could think of, but I can't reproduce the hash the original plugin produces. The plugin DLL seems to be written in C++, and I have almost no experience with decompilation, so diving into the problem from that angle is hard for me.
So my question basically is: can someone figure out how they calculated that cc, or at least give me an idea of which direction to research further? Maybe I'm looking at the wrong hash algorithms or something. Or give me some idea of how I could figure out what the original ActiveX function nsstpGetCC is doing, for example by decompiling it or by monitoring its operation in memory while it runs. What tools should I use?
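For reference, this is the kind of quick harness one can use to test candidate constructions against the captured values (a sketch assuming OpenSSL is available; the combinations shown are guesses, not the camera's confirmed algorithm). HMAC-SHA1 is worth trying too, since it also produces a 160-bit digest:

#include <cstdio>
#include <string>
#include <utility>
#include <vector>
#include <openssl/sha.h>
#include <openssl/hmac.h>
#include <openssl/evp.h>

static std::string toHex(const unsigned char* d, size_t n) {
    std::string out;
    char buf[3];
    for (size_t i = 0; i < n; ++i) {
        snprintf(buf, sizeof buf, "%02x", d[i]);
        out += buf;
    }
    return out;
}

int main() {
    const std::string tt = "20171018231918";
    const std::string sid = "30303532646561302D623434612D3131";
    const std::string target = "849e586524385e1071caa4023a3df75401e5bb82";

    // Plain SHA-1 over a few concatenation orders.
    const std::vector<std::pair<std::string, std::string>> candidates = {
        {"sha1(tt+sid)", tt + sid},
        {"sha1(sid+tt)", sid + tt},
    };
    unsigned char md[SHA_DIGEST_LENGTH];
    for (const auto& c : candidates) {
        SHA1(reinterpret_cast<const unsigned char*>(c.second.data()), c.second.size(), md);
        const std::string h = toHex(md, SHA_DIGEST_LENGTH);
        std::printf("%-14s %s%s\n", c.first.c_str(), h.c_str(), h == target ? "  <-- match" : "");
    }

    // HMAC-SHA1 with the session id as key is another 160-bit possibility.
    unsigned int len = 0;
    HMAC(EVP_sha1(), sid.data(), static_cast<int>(sid.size()),
         reinterpret_cast<const unsigned char*>(tt.data()), tt.size(), md, &len);
    std::printf("%-14s %s\n", "hmac(sid,tt)", toHex(md, len).c_str());
    return 0;
}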