Project Tango - 3D Reconstruction - c++

I'm trying to use the C API of the 3D Reconstruction Library to get a mesh from the Tango device.
In the Mesh Building Functions documentation there's a summary of the flow to use, which shows that I have to call Tango3DR_update several times and then call Tango3DR_extractFullMesh to get the mesh.
The problem is that Tango3DR_update needs a Tango3DR_PointCloud object, and I don't see how to get one.
I can create an empty Tango3DR_PointCloud using Tango3DR_PointCloud_create, but I don't see anywhere how to fill it with real data.
Does anyone know how to get this object?
Or anyone knows if there's any example / sample code using this library? I didn't find any.
Thanks,
Oren

You should fill the Tango3DR_PointCloud from the TangoXYZij you receive in OnXYZijAvailableRouter. Same thing for the pose struct.
// -- point cloud
Tango3DR_PointCloud cloud;
cloud.num_points = xyz_ij->xyz_count;
cloud.points = new Tango3DR_Vector4[cloud.num_points];
for (int i = 0; i < cloud.num_points; ++i) {
    cloud.points[i][0] = xyz_ij->xyz[i][0];
    cloud.points[i][1] = xyz_ij->xyz[i][1];
    cloud.points[i][2] = xyz_ij->xyz[i][2];
    // last component is the confidence
    cloud.points[i][3] = 1;
}
cloud.timestamp = xyz_ij->timestamp;
(Do not forget to delete [] cloud.points once you're done)
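Filling the pose struct from the TangoPoseData you receive in the pose callback should look roughly like this. This is only a sketch, assuming that Tango3DR_Pose stores translation[3] and orientation[4] the same way TangoPoseData does:
// -- pose (sketch; assumes matching translation/orientation layouts)
Tango3DR_Pose t3dr_pose;
for (int i = 0; i < 3; ++i) {
    t3dr_pose.translation[i] = pose->translation[i];
}
for (int i = 0; i < 4; ++i) {
    t3dr_pose.orientation[i] = pose->orientation[i];
}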
The only official example I could find is in the Unity examples. They use the C API, but call it from C#.

Related

Problem getting OpenCV multitracking to work

I'm in the early stages (VERY alpha) of writing the backend for an open source home video surveillance system. I'm building it on GStreamer as a series of plugins. The idea is to make each step of the process modular, so that anyone can write a custom module and use it in the pipeline. For an example, take a look at "odo-lib-opencv-yolo" in the repo linked below. Using that as a template, anyone can write their own detection lib using any library or neural network model and use it within this pipeline. I also plan to expand that out to include a few "default" libraries that are optimized for different systems (Nvidia GPUs, Raspberry Pis, Coral TPUs, etc.).
The issue I'm running into is the tracking portion. I wanted to start off with a simple and basic tracking system using the tracking API in OpenCV as a "filler" while I flesh out everything else. But it just doesn't seem to work right.
Here's the link to the repo: https://github.com/CeeBeeEh/Odo-Project
Of most importance is this code block in gst/gstodotrack.cpp at line 270:
if (meta->isInferenceFrame) {
    LAST_COUNT = meta->detectionCount;
    TRACKER->clear();
    TRACKER = cv::legacy::MultiTracker::create();
    for (int i = 0; i < LAST_COUNT; i++) {
        LAST_DETECTION[i].class_id = meta->detections[i].class_id;
        LAST_DETECTION[i].confidence = meta->detections[i].confidence;
        strcpy(LAST_DETECTION[i].label, meta->detections[i].label);
        cv::Rect2i rect = cv::Rect2i(meta->detections[i].box.x, meta->detections[i].box.y,
                                     meta->detections[i].box.width, meta->detections[i].box.height);
        TRACKER->add(create_cvtracker(odo), img, rect);
    }
    GST_DEBUG("Added %zu objects to tracker", LAST_COUNT);
}
else {
    meta->detectionCount = LAST_COUNT;
    std::vector<cv::Rect2d> tracked;
    if (!TRACKER->update(img, tracked)) GST_ERROR("Error tracking objects");
    GST_DEBUG("Infer count=%lu, tracked count=%zu", LAST_COUNT, tracked.size());
    for (int i = 0; i < tracked.size(); i++) {
        meta->detections[i].box.x = tracked[i].x;
        meta->detections[i].box.y = tracked[i].y;
        meta->detections[i].box.width = tracked[i].width;
        meta->detections[i].box.height = tracked[i].height;
        meta->detections[i].class_id = LAST_DETECTION[i].class_id;
        meta->detections[i].confidence = LAST_DETECTION[i].confidence;
        strcpy(meta->detections[i].label, LAST_DETECTION[i].label);
    }
}
Variables at top of file:
cv::Ptr<cv::legacy::MultiTracker> TRACKER;
ulong LAST_COUNT = 0;
DetectionData LAST_DETECTION[DETECTION_MAX] = {};
I originally had
TRACKER = cv::legacy::MultiTracker::create();
in the start method to create it when the plugin is loaded, but that caused a segfault.
I've also tried creating a fixed-size array of cv::Ptr<cv::Tracker>, using each one for a single object with the same id within each frame, and then recycling each one on every inference frame. But that caused odd behaviour where every object tracked to the same size and position within the frame.
Right now, in the current configuration, the tracking is dog-slow. It's actually faster to run inference on every frame on the GPU than to run inference on a single frame and track for x additional frames. It should be the opposite.
It seems as though OpenCV's tracking API has changed as of 4.5.1, but I'm not sure if that's the cause of the issue.
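For reference, my understanding of the reworked API is that the old classes now live under cv::legacy and each object is meant to get its own cv::Tracker. A minimal sketch of that per-object pattern (not code from my repo; it assumes TrackerCSRT from the opencv_contrib tracking module, TrackerKCF or TrackerMIL would work the same way):
#include <opencv2/core.hpp>
#include <opencv2/tracking.hpp>
#include <vector>

struct TrackedObject {
    cv::Ptr<cv::Tracker> tracker;
    cv::Rect box;
};

static std::vector<TrackedObject> g_trackers;

// Inference frame: create one tracker per detected box.
void init_trackers(const cv::Mat& img, const std::vector<cv::Rect>& detections) {
    g_trackers.clear();
    for (const cv::Rect& r : detections) {
        TrackedObject t;
        t.tracker = cv::TrackerCSRT::create();   // assumption: opencv_contrib available
        t.tracker->init(img, r);
        t.box = r;
        g_trackers.push_back(t);
    }
}

// Intermediate frames: update each tracker independently.
void update_trackers(const cv::Mat& img) {
    for (TrackedObject& t : g_trackers) {
        bool ok = t.tracker->update(img, t.box); // false means the target was lost
        if (!ok) {
            // Keep the last known box, or mark the object as lost here.
        }
    }
}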
I'm looking for help in sorting out the OpenCV tracking issue. I find the docs very lacking in explaining the what and why of the multi-tracking API, so they are not much help.

Using MATLAB coder to export code from Registration Estimator app to C++

Hi, I'm trying to export "the default code" that is automatically generated by the Registration Estimator app within MATLAB to C++ using the MATLAB Coder tool.
This is a sample I generated today:
function [MOVINGREG] = registerImages(MOVING,FIXED)
%registerImages Register grayscale images using auto-generated code from Registration Estimator app.
% [MOVINGREG] = registerImages(MOVING,FIXED) Register grayscale images
% MOVING and FIXED using auto-generated code from the Registration
% Estimator App. The values for all registration parameters were set
% interactively in the App and result in the registered image stored in the
% structure array MOVINGREG.
% Auto-generated by registrationEstimator app on 21-Jun-2017
%-----------------------------------------------------------
% Convert RGB images to grayscale
FIXED = rgb2gray(FIXED);
MOVING = rgb2gray(MOVING);
% Default spatial referencing objects
fixedRefObj = imref2d(size(FIXED));
movingRefObj = imref2d(size(MOVING));
% Intensity-based registration
[optimizer, metric] = imregconfig('monomodal');
optimizer.GradientMagnitudeTolerance = 1.00000e-04;
optimizer.MinimumStepLength = 1.00000e-05;
optimizer.MaximumStepLength = 6.25000e-02;
optimizer.MaximumIterations = 100;
optimizer.RelaxationFactor = 0.500000;
% Align centers
fixedCenterXWorld = mean(fixedRefObj.XWorldLimits);
fixedCenterYWorld = mean(fixedRefObj.YWorldLimits);
movingCenterXWorld = mean(movingRefObj.XWorldLimits);
movingCenterYWorld = mean(movingRefObj.YWorldLimits);
translationX = fixedCenterXWorld - movingCenterXWorld;
translationY = fixedCenterYWorld - movingCenterYWorld;
% Coarse alignment
initTform = affine2d();
initTform.T(3,1:2) = [translationX, translationY];
% Apply transformation
tform = imregtform(MOVING,movingRefObj,FIXED,fixedRefObj,'similarity',optimizer,metric,'PyramidLevels',3,'InitialTransformation',initTform);
MOVINGREG.Transformation = tform;
MOVINGREG.RegisteredImage = imwarp(MOVING, movingRefObj, tform, 'OutputView', fixedRefObj, 'SmoothEdges', true);
% Store spatial referencing object
MOVINGREG.SpatialRefObj = fixedRefObj;
end
Within the Coder tool, in the Run-Time Issues section, I received a couple of issues, e.g. that Coder needs certain functions declared as extrinsic. So far so good. I added, for instance, coder.extrinsic('imregconfig'); and coder.extrinsic('optimizer');. But I'm still getting errors like:
Attempt to extract field 'GradientMagnitudeTolerance' from 'mxArray'.
Attempt to extract field 'MinimumStepLength' from 'mxArray'.
Attempt to extract field 'MaximumStepLength' from 'mxArray'.
...
Pointing to the line with optimizer.GradientMagnitudeTolerance = 1.00000e-04; (and below).
I found out that this usually means the initialisation of variables is missing. But I don't know how to initialise the property optimizer.GradientMagnitudeTolerance in advance. Can anyone help me with this?
PS: I'm using MATLAB R2017a and Microsoft Visual C++ 2017 (C) Compiler
Based on the list of functions supported for code generation at https://www.mathworks.com/help/coder/ug/functions-supported-for-code-generation--categorical-list.html#bsl0arh-1, imregconfig is not supported for code generation, which explains the first issue you got. Adding coder.extrinsic means that the MATLAB Coder-generated file will call into MATLAB to run that function. You can do this only for a MEX target, which needs MATLAB to run. imregconfig is not going to generate any C code, so standalone C code generation for use from an external application is not possible with this code.
When functions declared as coder.extrinsic call into MATLAB, they return an mxArray. The rest of the code can handle this mxArray only by passing it back to MATLAB, i.e. to other extrinsic functions. From Coder's point of view these are opaque types, hence the errors about attempting to extract fields from an mxArray.

Extract Point Cloud from a pcl::people::PersonCluster<PointT>

I am doing a project for university and I need to extract the point cloud of the detected people to work with it. I made a ROS node adapting the code of the tutorial Ground based rgbd people detection, and now I want to publish the point cloud of the first detected cluster to a topic.
But I am not able to extract that point cloud. The class is defined in person_cluster.h, and there is this public member:
typedef pcl::PointCloud<PointT> PointCloud;
So to convert it in a sensor_msgs::PointCloud2 I do:
pcl_conversions::fromPCL(clusters.at(0).PointCloud,person_msg);
where person_msg is the PointCLoud2 message, clusters is a vector of pcl::people::PersonCluster<PointT>, and I want to publish only the first point cloud because I assume that there is only one person in the scene.
The compiler gives me this error:
error: invalid use of ‘pcl::people::PersonCluster::PointCloud’
pcl_conversions::fromPCL(clusters.at(0).PointCloud,person_msg);
I don't have a lot of knowledge of C++ and I am not able to overcome this error. Googling that error, it seems to appear when you don't "define" a class well, but I doubt that the PCL library defines the class badly.
For those who are interested, I resolved my problem.
In the PCL forum I found a post where the developer of the people detector I used gave the answer.
So basically:
// Get cloud after voxeling and ground plane removal:
PointCloudT::Ptr no_ground_cloud (new PointCloudT);
no_ground_cloud = people_detector.getNoGroundCloud();
// Show pointclouds of every person cluster:
PointCloudT::Ptr cluster_cloud (new PointCloudT);
for (std::vector<pcl::people::PersonCluster<PointT> >::iterator it = clusters.begin(); it != clusters.end(); ++it)
{
    if (it->getPersonConfidence() > min_confidence)
    {
        // Create the pointcloud with points belonging to the current cluster:
        cluster_cloud->clear();
        pcl::PointIndices clusterIndices = it->getIndices();   // cluster indices
        std::vector<int> indices = clusterIndices.indices;
        for (unsigned int i = 0; i < indices.size(); i++)      // fill cluster cloud
        {
            PointT* p = &no_ground_cloud->points[indices[i]];
            cluster_cloud->push_back(*p);
        }
        // Visualization:
        viewer.removeAllPointClouds();
        viewer.removeAllShapes();
        pcl::visualization::PointCloudColorHandlerRGBField<PointT> rgb(cluster_cloud);
        viewer.addPointCloud<PointT> (cluster_cloud, rgb, "cluster_cloud");
        viewer.setPointCloudRenderingProperties (pcl::visualization::PCL_VISUALIZER_POINT_SIZE, 2, "cluster_cloud");
        viewer.spinOnce(500);
    }
}
Actually, I was not able to convert that type of point cloud into a sensor_msgs::PointCloud2 message, even when trying to convert the point cloud to a pcl::PointCloud<pcl::PointXYZ> first.
A working solution was to use a pcl::PointCloud<pcl::PointXYZ> as cluster_cloud and then use a publisher of type pcl::PointCloud<pcl::PointXYZ>, such as:
ros::Publisher person_pub = nh.advertise<PointCloud>("personPointCloud", 1000);
Anyway, it did not publish anything; rviz didn't show anything, but the viewer was displaying the point cloud of the detected person. Since that point cloud was not what I expected (if you move your arm, the algorithm does not give you the whole arm), it is not useful for my project, so I dropped it.
So the problem of publishing it in ROS remains, but the problem of getting the point cloud is resolved.
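For completeness, the standard way to turn a pcl::PointCloud into a sensor_msgs::PointCloud2 is pcl::toROSMsg from pcl_conversions. A rough sketch of how the publishing could look (the pcl::PointXYZRGB point type and the "camera_link" frame id are assumptions, adjust them to your setup):
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>
#include <pcl_conversions/pcl_conversions.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

void publishCluster(const pcl::PointCloud<pcl::PointXYZRGB>::Ptr& cluster_cloud,
                    ros::Publisher& person_pub)
{
    sensor_msgs::PointCloud2 person_msg;
    pcl::toROSMsg(*cluster_cloud, person_msg);    // PCL cloud -> ROS message
    person_msg.header.frame_id = "camera_link";   // assumed sensor frame
    person_msg.header.stamp = ros::Time::now();
    person_pub.publish(person_msg);
}
The publisher would then be advertised as nh.advertise<sensor_msgs::PointCloud2>("personPointCloud", 1000) instead of the pcl::PointCloud template.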

Making files with map or scenario information, such as what resources to load, objects, locations, events

I'm making a simple graphics engine in C++, using Visual C++ and DirectX, and I'm testing out different map layouts.
Currently, I construct "maps" by simply making a C++ source file and writing something like:
SHADOWENGINE ShadowEngine(&settings);
SPRITE_SETTINGS sset;
MODEL_SETTINGS mset;
sset.Name = "Sprite1";
sset.Pivot = TOPLEFT;
sset.Source = "sprite1.png";
sset.Type = STATIC;
sset.Movable = true;
sset.SoundSet = "sprite1.wav";
ShadowEngine->Sprites->Load(sset);
sset.Name = "Sprite2";
sset.Source = "sprite2.png";
sset.Parent = "Sprite1";
sset.Type = ANIMATED;
sset.Frames = 16;
sset.Interval = 1000;
sset.Position = D3DXVECTOR3(0.0f, (ShadowEngine->Resolution->Height/2), 0.0f);
ShadowEngine->Sprites->Load(sset);
mset.Source = "character.sx";
mset.Collision = false;
mset.Type = DYNAMIC;
ShadowEngine->Models->Load(mset);
//Etc..
What I'd like to be able to do is create map files that are loaded into the engine instead, without having to write them into the executable. That way, I can make changes to the maps without having to recompile every damn time.
SHADOWENGINE ShadowEngine(&settings);
ShadowEngine->InitializeMap("Map1.sm");
The only way I can think of is to make it read the file as text and then just parse the information, but it sounds like such a hassle.
Am I thinking the wrong way?
What should I do?
Wouldn't mind an explanation on how others do it, like Warcraft III, Starcraft, Age of Empires, Heroes of Might and Magic...
Would really appreciate some help on this one.
You are not thinking the wrong way; loading your map data from files is definitely desirable. The most common prebuilt solutions are Protocol Buffers and Lua. If you don't already know Lua, I would use Protocol Buffers, as it directly solves your problem, whereas Lua is a scripting language that is flexible enough to do what you need done.
Some people write their data as XML, but this is only a partial solution, as XML is just a markup language: after loading the XML you'll have a DOM tree to parse.
Google's CPP Protobuf Tutorial
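To give an idea of what the Protocol Buffers route looks like, here is a rough sketch; the map.proto layout, the message and field names, and the generated map.pb.h are all hypothetical, just to show the loading pattern:
// Hypothetical map.proto, compiled with protoc --cpp_out:
//   message SpriteDef { string name = 1; string source = 2; bool movable = 3; }
//   message MapFile   { repeated SpriteDef sprites = 1; }
#include <fstream>
#include <iostream>
#include "map.pb.h"   // generated from the hypothetical map.proto above

int main() {
    MapFile map_file;
    std::ifstream in("Map1.sm", std::ios::binary);
    if (!in || !map_file.ParseFromIstream(&in)) { // protobuf's built-in parser
        std::cerr << "Failed to load map file\n";
        return 1;
    }
    for (const SpriteDef& s : map_file.sprites()) {
        // Here you would fill SPRITE_SETTINGS from s and call your Load(),
        // instead of hard-coding the values in a .cpp file.
        std::cout << "sprite: " << s.name() << " <- " << s.source() << "\n";
    }
    return 0;
}
The win over hand-rolled text parsing is that protoc generates the serialization code for you, so editing a map never requires recompiling the engine.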

Draw a multiple lines set with VTK

Can somebody point me in the right direction on how to draw multiple lines that appear connected? I found vtkLine and its SetPoint1 and SetPoint2 functions. Then I found vtkPolyLine, but there doesn't seem to be any add, insert or set function for it. Same for vtkPolyVertex.
Is there a basic function that allows me to just push some point onto the end of its internal data and then simply render it? Or if there's no such function/object, what is the way to go here?
On a related topic: I don't like vtk too much. Is there a visualization toolkit, maybe with limited functionality, that is easier to use?
Thanks in advance
For drawing multiple lines, you should first create a vtkPoints object that contains all the points, and then add connectivity info for the points you would like connected into lines, through either vtkPolyData or vtkUnstructuredGrid (which is your vtkDataSet class; a vtkDataSet contains vtkPoints as well as the connectivity information for those points). Once your vtkDataSet is constructed, you can take the normal route to render it (mapper -> actor -> renderer...).
For example:
vtkPoints *pts = vtkPoints::New();
pts->InsertNextPoint(1,1,1);
...
pts->InsertNextPoint(5,5,5);
vtkPolyData *polydata = vtkPolyData::New();
polydata->Allocate();
vtkIdType connectivity[2];
connectivity[0] = 0;
connectivity[1] = 3;
polydata->InsertNextCell(VTK_LINE,2,connectivity); //Connects the first and fourth point we inserted into a line
vtkPolyDataMapper *mapper = vtkPolyDataMapper::New();
mapper->SetInput(polydata);
// And so on, need actor and renderer now
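If you want one connected polyline rather than separate two-point lines, the vtkPolyLine mentioned in the question works too. A sketch, reusing the pts object from above and assuming it holds at least four points; the point ids are set through the cell's id list, since there is no add/set-point call:
vtkPolyLine *polyLine = vtkPolyLine::New();
polyLine->GetPointIds()->SetNumberOfIds(4);
for (vtkIdType i = 0; i < 4; ++i)
    polyLine->GetPointIds()->SetId(i, i); // i-th id of the cell -> i-th point in pts
vtkCellArray *cells = vtkCellArray::New();
cells->InsertNextCell(polyLine);
vtkPolyData *lineData = vtkPolyData::New();
lineData->SetPoints(pts);
lineData->SetLines(cells);
// Map and render lineData with the same mapper->actor->renderer route as above.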
There are plenty of examples on the documentation site for all the classes.
Here is vtkPoints: http://www.vtk.org/doc/release/5.4/html/a01250.html
If you click on the vtkPoints (Tests) link, you can see the tests associated with the class. It provides a bunch of different sample code.
Also, the VTK mailing list is probably going to be much more useful than Stack Overflow.