I am trying to use ITK's OtsuMultipleThresholdsImageFilter in a project, but I get no output.
My aim is to build a simple interface between OpenCV and ITK.
To convert my data from OpenCV's Mat container to itk::Image, I use ITK's bridge to OpenCV, and I have verified that the data is passed to ITK correctly.
I am even able to display it thanks to QuickView.
But when I set up the filter, following this example, the vector returned by GetThresholds() is empty.
Here is the code I wrote:
typedef itk::Image<uchar,2> image_type;
typedef itk::OtsuMultipleThresholdsImageFilter<image_type, image_type> filter_type;
image_type::Pointer img = itk::OpenCVImageBridge::CVMatToITKImage<image_type>(src);
image_type::SizeType size = img->GetLargestPossibleRegion().GetSize();
filter_type::Pointer filter = filter_type::New();
filter->SetInput(img);
filter->SetNumberOfHistogramBins(256);
filter->SetNumberOfThresholds(K);
filter_type::ThresholdVectorType tmp = filter->GetThresholds();
std::cout<<"CHECK: "<<tmp.size()<<std::endl;
src is an OpenCV Mat of type CV_8UC1.
A fundamental concept of ITK is its pipeline architecture: you must connect the inputs and outputs and then update the pipeline.
You have connected the pipeline, but you have not executed it. You must call filter->Update().
Please read the ITK Software Guide to understand the fundamentals of ITK:
https://itk.org/ItkSoftwareGuide.pdf
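Applied to the code in the question, the fix is a single Update() call before reading the thresholds (a sketch; image_type, K, and img are taken from the question's code):

```cpp
filter_type::Pointer filter = filter_type::New();
filter->SetInput(img);
filter->SetNumberOfHistogramBins(256);
filter->SetNumberOfThresholds(K);
filter->Update();  // executes the pipeline; without this, GetThresholds() returns an empty vector

filter_type::ThresholdVectorType thresholds = filter->GetThresholds();
std::cout << "CHECK: " << thresholds.size() << std::endl;  // now contains K thresholds
```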
Related
I am currently building a perception pipeline and am having trouble reading the points of my point cloud structure. I read a point cloud into a PointCloud2 structure and want to write out its points in series, or at the very least access them so I can find a way to write them to a file later. The basic code I am working with is here:
void cloud_cropbox_cb(const sensor_msgs::PointCloud2ConstPtr &cloud_msg)
{
// Container for original & filtered data
pcl::PCLPointCloud2 *cloud = new pcl::PCLPointCloud2;
pcl::PCLPointCloud2ConstPtr cloudPtr(cloud);
pcl::PCLPointCloud2 cloud_filtered;
// convert given message into PCL data type
pcl_conversions::toPCL(*cloud_msg, *cloud);
I want to be able to access the x, y, z elements of each point in the cloud and ideally write them all out to a text file. I have tried using pcl::io::savePCDFileASCII(), but I don't think that is quite applicable here. One additional note: the base program runs continuously and never exits until manually closed; I don't know whether that is relevant to writing out to a file. Any help on which functions to use and which steps to take would be much appreciated.
If you literally want to write out the contents of the message to a file, you can just (from a launch file or the command line) use rostopic echo [args] > my_file.txt.
For ways that are compatible in C++, I recommend reading from http://wiki.ros.org/pcl_ros, which describes that:
pcl_ros extends the ROS C++ client library to support message passing with PCL native data types. Simply add the following include to your ROS node source code:
#include <pcl_ros/point_cloud.h>
This header allows you to publish and subscribe pcl::PointCloud objects as ROS messages. These appear to ROS as sensor_msgs/PointCloud2 messages, offering seamless interoperability with non-PCL-using ROS nodes. For example, you may publish a pcl::PointCloud in one of your nodes and visualize it in rviz using a Point Cloud2 display. It works by hooking into the roscpp serialization infrastructure.
This means that you can use PCL internally all you want, and on the wire it is all a ROS PointCloud2 message. But you have to declare the PCL type/interface you want to use in the message/publisher/subscriber that you access the data with. So instead of using the sensor_msgs/PointCloud2 type everywhere, use the pcl::PointCloud<pcl::PointXYZ> type (or whichever point type you need) everywhere.
Publishing:
typedef pcl::PointCloud<pcl::PointXYZ> PointCloud;
ros::Publisher pub = nh.advertise<PointCloud> ("points2", 1);
PointCloud::Ptr msg (new PointCloud);
msg->points.push_back (pcl::PointXYZ(1.0, 2.0, 3.0));
pub.publish (msg);
Subscribing:
typedef pcl::PointCloud<pcl::PointXYZ> PointCloud;
ros::Subscriber sub = nh.subscribe<PointCloud>("points2", 1, callback);
void callback(const PointCloud::ConstPtr& msg){
printf ("Cloud: width = %d, height = %d\n", msg->width, msg->height);
BOOST_FOREACH (const pcl::PointXYZ& pt, msg->points)
printf ("\t(%f, %f, %f)\n", pt.x, pt.y, pt.z);
}
More examples (or links to node source code) in http://wiki.ros.org/pcl_ros, tutorials on other pipeline stuff in http://wiki.ros.org/pcl_ros/Tutorials.
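To get the points into a text file from the callback, one option (a sketch in the pcl::PointCloud subscriber style above; the file name and append mode are my own choices) is:

```cpp
#include <fstream>
#include <pcl_ros/point_cloud.h>
#include <pcl/point_types.h>

typedef pcl::PointCloud<pcl::PointXYZ> PointCloud;

void callback(const PointCloud::ConstPtr& msg)
{
  // open in append mode so data accumulates while the node runs indefinitely;
  // each callback appends one block of points
  std::ofstream out("cloud_points.txt", std::ios::app);
  for (const pcl::PointXYZ& pt : msg->points)
    out << pt.x << " " << pt.y << " " << pt.z << "\n";
  // the stream flushes and closes when it goes out of scope
}
```

Since the program never exits, opening and closing the stream per callback (rather than keeping one global stream) ensures data actually reaches the file even if the node is killed.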
I am new to OMNeT++ and I am trying to implement a drone network whose nodes communicate with each other using direct messages.
I want to visualize my drone network with the 3D visualization in OMNeT using the OsgVisualizer in inet.visualizer.scene package.
In the dronenetwork.ned file, I have used the IntegratedVisualizer and the OsgGeographicCoordinateSystem. Then in the omnetpp.ini file, the map file to be used is defined and so the map loading and mobility of the drones works fine in the 3D visualization of the simulation run.
However, the message transmissions between the drones are not visualized in 3D even though this is properly visualized in the 2D canvas mode.
I tried adding both NetworkNodeOsgVisualizer and NetworkConnectionOsgVisualizer to my drone module as visualizer submodules, and I have also tagged the drones with the @networkNode and @networkConnectionNode properties. But the message transmissions are still not visualized.
Any help or hint regarding this would be highly appreciated.
Code used for visualizations in the simple module drone is as follows
import inet.visualizer.scene.NetworkNodeOsgVisualizer;
import inet.visualizer.scene.NetworkConnectionOsgVisualizer;
module drone
{
    parameters:
        @networkNode;
        @networkConnection;
    submodules:
        networkNodeOsgVisualizer: NetworkNodeOsgVisualizer {
            @display("p=207,50");
            displayModuleName = true;
            visualizationTargetModule = "^.^";
            visualizationSubjectModule = "wirelessInterface.^.^";
        }
        networkConnectionOsgVisualizer: NetworkConnectionOsgVisualizer {
            visualizationTargetModule = "^.^";
            visualizationSubjectModule = "wirelessInterface.^.^";
            displayNetworkConnections = true;
        }
}
Thank you
Message passing and direct message sending visualizations are special cases implemented automatically by Qtenv, and only for the default 2D visualization. You can add custom 2D message visualization (like the one in the aloha example). OMNeT++ does not provide any 3D visualization by default; all such code must be provided by the model (INET in this case). This is also true for any transient visualization. There is an example of this in the osg-earth OMNeT++ sample, where communication between cows is visualized by inflating bubbles.
So you have to implement your own visualization effect. There is something in INET that is pretty close to what you want: DataLinkOsgVisualizer and PhysicalLinkOsgVisualizer, which flash an arrow when communication occurs on the data link or physical layer. This is not the same as message passing, but it is close; alternatively, you can implement your own animation using these visualizers as a starting point.
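For example, assuming your network contains an IntegratedVisualizer submodule named visualizer (the module path below is an assumption about your network layout), enabling the data-link arrows in omnetpp.ini could look like this:

```ini
# show 3D (OSG) data link activity arrows between the drones
*.visualizer.osgVisualizer.dataLinkVisualizer.displayLinks = true
# optionally restrict which packets trigger the arrows
*.visualizer.osgVisualizer.dataLinkVisualizer.packetFilter = "*"
```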
I am using the Nokia HEIF API (github.com/nokiatech/heif) to process HEIC files produced by the iOS betas.
I am able to get the tiles and metadata such as rotation and dimensions, but I am unable to locate the capture date of the image. I found some timestamp functions, but they complain that "Forced FPS not set for meta context", which leads me to think these functions relate to tracks, not items.
Any help would be appreciated.
EDIT:
So there is a typo in the documentation for getReferencedToItemListByType (and getReferencedFromItemListByType): it says the referenceType parameter is "cdcs". It is of course "cdsc" (content description).
So, to get the metadata blob from a still image, as of now you can do the following:
reader.getItemListByType(contextId, "grid", gridItemIds);
ImageFileReaderInterface::IdVector cdscItemIds;
reader.getReferencedToItemListByType(contextId, gridItemIds.at(0), "cdsc", cdscItemIds);
ImageFileReaderInterface::DataVector data;
reader.getItemData(contextId, cdscItemIds.at(0), data);
Then you need to decode the Exif data. You can easily use the ExifTool CLI or an API such as Exiv2.
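For example, to hand the blob to ExifTool you can dump it to a file. Note that in a HEIF file the Exif item payload typically starts with a 4-byte offset field before the TIFF header; skipping exactly 4 bytes is an assumption that matches the files I have seen, so verify it against your input:

```cpp
#include <cassert>
#include <cstdint>
#include <fstream>
#include <vector>

// Write a HEIF Exif item payload to disk so external tools can parse it.
// 'data' corresponds to the DataVector returned by reader.getItemData(...).
void write_exif_blob(const std::vector<std::uint8_t>& data, const char* path)
{
    // first 4 bytes: exif_tiff_header_offset field -- skip past it (assumption)
    const std::size_t skip = 4;
    assert(data.size() > skip);
    std::ofstream out(path, std::ios::binary);
    out.write(reinterpret_cast<const char*>(data.data() + skip),
              static_cast<std::streamsize>(data.size() - skip));
}
```

The resulting file can then be inspected with `exiftool` or parsed programmatically with Exiv2.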
I am trying to transform a vtkPolyData object using vtkTransform.
However, the tutorials I found use the pipeline, for example: http://www.vtk.org/Wiki/VTK/Examples/Cxx/Filters/TransformPolyData
I am using VTK 6.1, which removed the GetOutputPort method for stand-alone data objects, as mentioned here:
http://www.vtk.org/Wiki/VTK/VTK_6_Migration/Replacement_of_SetInput
I have tried to replace the line:
transformFilter->SetInputConnection()
with
transformFilter->SetInputData(polydata_object);
Unfortunately, the data was not read properly (perhaps because the pipeline was not set up correctly?).
Do you know how to correctly transform a stand-alone vtkPolyData without using the pipeline in VTK 6?
Thank you!
GetOutputPort was never a method on a data object; it was always a method on vtkAlgorithm, and it is still present on vtkAlgorithm (and its subclasses). Where is polydata_object coming from? If it is the output of a reader, you have two options:
// update the reader to ensure it executes and reads data.
reader->UpdatePipeline();
// now you can get access to the data object.
vtkSmartPointer<vtkPolyData> data = vtkPolyData::SafeDownCast(reader->GetOutputDataObject(0));
// pass that to the transform filter.
transformFilter->SetInputData(data.GetPointer());
transformFilter->Update();
The second option is to simply connect the pipeline:
transformFilter->SetInputConnection(reader->GetOutputPort());
The key, when not using the pipeline, is to ensure that the data is updated/read before passing it to the transform filter.
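Putting it together for a stand-alone vtkPolyData, a minimal sketch (assuming polydata_object already holds valid geometry; the translation values are arbitrary) is:

```cpp
#include <vtkNew.h>
#include <vtkPolyData.h>
#include <vtkTransform.h>
#include <vtkTransformPolyDataFilter.h>

// polydata_object is a vtkPolyData* that already contains points/cells
vtkNew<vtkTransform> transform;
transform->Translate(1.0, 0.0, 0.0);

vtkNew<vtkTransformPolyDataFilter> transformFilter;
transformFilter->SetInputData(polydata_object);   // stand-alone data object, no pipeline upstream
transformFilter->SetTransform(transform.GetPointer());
transformFilter->Update();                        // executes the filter

vtkPolyData* transformed = transformFilter->GetOutput();
```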
Is there a simple way to save a KNN classifier in OpenCV by using the C++ API?
I have tried to save a KNN classifier as described here, after wrapping the CvKNearest class inside another class.
It saves to disk successfully, but when I read it back, calling the predict method gives me a segmentation fault (core dumped) error.
My wrapper class is as follows:
class KNNWrapper
{
    CvKNearest knn;

public:
    bool train(Mat& traindata, Mat& trainclasses)
    {
    }

    void test(Mat& testdata, Mat& testclasses)
    {
    }
};
I've heard that Boost Serialization library is more robust and safe. Can anyone point me to proper resources where I can get this done with Boost library?
@tisch is totally right, and I'd like to correct myself: CvKNearest doesn't override the load and save functions of CvStatModel.
But since CvKNearest doesn't compute a model, there is no internal state to store. Of course, you will want to store the training and test cv::Mat data you have passed in. You can use the FileStorage class for this; a great description and tutorial is given at:
http://docs.opencv.org/modules/core/doc/xml_yaml_persistence.html
If you want to offer the same API as the other statistical models in OpenCV (which makes sense), I suggest subclassing CvKNearest and offering save and load functions, which serialize the training/test data and deserialize it using FileStorage.
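A minimal sketch of that save/load using FileStorage might look like this (the file path and node names "traindata"/"trainclasses" are my own choices):

```cpp
#include <string>
#include <opencv2/core/core.hpp>

// serialize the training samples and labels to YAML/XML
void save_knn_data(const cv::Mat& traindata, const cv::Mat& trainclasses,
                   const std::string& path)
{
    cv::FileStorage fs(path, cv::FileStorage::WRITE);
    fs << "traindata" << traindata;
    fs << "trainclasses" << trainclasses;
}

// deserialize the samples and labels again
void load_knn_data(cv::Mat& traindata, cv::Mat& trainclasses,
                   const std::string& path)
{
    cv::FileStorage fs(path, cv::FileStorage::READ);
    fs["traindata"] >> traindata;
    fs["trainclasses"] >> trainclasses;
}
```

After loading, call knn.train(traindata, trainclasses) on a fresh CvKNearest; retraining is cheap, because k-NN "training" just stores the samples.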