Problem getting OpenCV multi-tracking to work - C++

I'm in the early stages (VERY alpha) of writing the backend for an open source home video surveillance system. I'm building it on GStreamer as a series of plugins. The idea is to make each step of the process modular, so that anyone can write a custom module and use it in the pipeline. For an example, take a look at "odo-lib-opencv-yolo" in the repo linked below. Using that as a template, anyone can write their own detection lib using any library or neural network model and use it within this pipeline. I also plan to expand that out to include a few "default" libraries that are optimized for different systems (Nvidia GPUs, Raspberry Pis, Coral TPUs, etc).
The issue I'm running into is the tracking portion. I wanted to start off with a simple and basic tracking system using the tracking API in OpenCV as a "filler" while I flesh out everything else. But it just doesn't seem to work right.
Here's the link to the repo: https://github.com/CeeBeeEh/Odo-Project
Of most importance is this code block in gst/gstodotrack.cpp at line 270:
if (meta->isInferenceFrame) {
    LAST_COUNT = meta->detectionCount;
    TRACKER->clear();
    TRACKER = cv::legacy::MultiTracker::create();
    for (int i = 0; i < LAST_COUNT; i++) {
        LAST_DETECTION[i].class_id = meta->detections[i].class_id;
        LAST_DETECTION[i].confidence = meta->detections[i].confidence;
        strcpy(LAST_DETECTION[i].label, meta->detections[i].label);
        cv::Rect2i rect = cv::Rect2i(meta->detections[i].box.x, meta->detections[i].box.y,
                                     meta->detections[i].box.width, meta->detections[i].box.height);
        TRACKER->add(create_cvtracker(odo), img, rect);
    }
    GST_DEBUG("Added %zu objects to tracker", LAST_COUNT);
}
else {
    meta->detectionCount = LAST_COUNT;
    std::vector<cv::Rect2d> tracked;
    if (!TRACKER->update(img, tracked)) GST_ERROR("Error tracking objects");
    GST_DEBUG("Infer count=%lu, tracked count=%zu", LAST_COUNT, tracked.size());
    for (int i = 0; i < tracked.size(); i++) {
        meta->detections[i].box.x = tracked[i].x;
        meta->detections[i].box.y = tracked[i].y;
        meta->detections[i].box.width = tracked[i].width;
        meta->detections[i].box.height = tracked[i].height;
        meta->detections[i].class_id = LAST_DETECTION[i].class_id;
        meta->detections[i].confidence = LAST_DETECTION[i].confidence;
        strcpy(meta->detections[i].label, LAST_DETECTION[i].label);
    }
}
Variables at top of file:
cv::Ptr<cv::legacy::MultiTracker> TRACKER;
ulong LAST_COUNT = 0;
DetectionData LAST_DETECTION[DETECTION_MAX] = {};
I originally had
TRACKER = cv::legacy::MultiTracker::create();
in the start method to create it when the plugin is loaded, but that caused a segfault.
I've also tried creating a fixed-size array of cv::Ptr<cv::Tracker>, using each one for a single object with the same id within each frame, and recycling each one on each inference frame. But that caused odd behaviour where every object tracked to the same size and position within the frame.
Right now, in the current configuration, the tracking is dog-slow. It's actually faster to run inference on every frame on the GPU than to run inference on a single frame and track for x additional frames. It should be the opposite.
It seems as though OpenCV's tracking API changed as of 4.5.1, but I'm not sure if that's the cause of the issue.
I'm looking for help in sorting out the OpenCV tracking issue. I find the docs very lacking in explaining the what and why of the multi-tracking API, so they haven't been much help.
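For reference, here's my understanding of the per-object API after the 4.5 reorganization (a minimal sketch, not code from the repo; cv::TrackerKCF is just one choice of tracker from opencv_contrib):
#include <opencv2/core.hpp>
#include <opencv2/tracking.hpp> // new-style cv::Tracker; KCF/CSRT come from opencv_contrib
#include <vector>

// One tracker per detection, re-created on every inference frame.
std::vector<cv::Ptr<cv::Tracker>> trackers;

void reset_trackers(const cv::Mat &img, const std::vector<cv::Rect> &boxes) {
    trackers.clear();
    for (const cv::Rect &box : boxes) {
        cv::Ptr<cv::Tracker> t = cv::TrackerKCF::create();
        t->init(img, box);
        trackers.push_back(t);
    }
}

void update_trackers(const cv::Mat &img, std::vector<cv::Rect> &boxes) {
    for (size_t i = 0; i < trackers.size(); i++) {
        cv::Rect box;
        if (trackers[i]->update(img, box)) // returns false when the target is lost
            boxes[i] = box;
    }
}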

Related

OMNeT++ direct message transmission visualizations in 3D

I am new to OMNeT++ and I'm trying to implement a drone network whose nodes communicate with each other using direct messages.
I want to visualize my drone network with the 3D visualization in OMNeT++, using the OsgVisualizer in the inet.visualizer.scene package.
In the dronenetwork.ned file, I have used the IntegratedVisualizer and the OsgGeographicCoordinateSystem. Then in the omnetpp.ini file, the map file to be used is defined and so the map loading and mobility of the drones works fine in the 3D visualization of the simulation run.
However, the message transmissions between the drones are not visualized in 3D even though this is properly visualized in the 2D canvas mode.
I tried adding both NetworkNodeOsgVisualizer and NetworkConnectionOsgVisualizer to my drone module as visualization simple modules, and I have also tagged the drone module with the @networkNode and @networkConnection properties. But it still hasn't been able to visualize the message transmissions.
Any help or hint regarding this would be highly appreciated.
The code used for visualization in the simple module drone is as follows:
import inet.visualizer.scene.NetworkNodeOsgVisualizer;
import inet.visualizer.scene.NetworkConnectionOsgVisualizer;

module drone
{
    parameters:
        @networkNode;
        @networkConnection;
    submodules:
        networkNodeOsgVisualizer: NetworkNodeOsgVisualizer {
            @display("p=207,50");
            displayModuleName = true;
            visualizationTargetModule = "^.^";
            visualizationSubjectModule = "wirelessInterface.^.^";
        }
        networkConnectionOsgVisualizer: NetworkConnectionOsgVisualizer {
            visualizationTargetModule = "^.^";
            visualizationSubjectModule = "wirelessInterface.^.^";
            displayNetworkConnections = true;
        }
}
Thank you
Message passing and direct message sending visualizations are special cases implemented automatically by Qtenv for the 2D (default) visualization only. You can add custom 2D message visualization (like the one in the aloha example). OMNeT++ does not provide any 3D visualization by default; all the code must be provided by the model (INET in this case). This is also true for any transient visualization. There is an example of this in the osg-earth OMNeT++ example, where communication between cows is visualized by inflating bubbles.
So you have to implement your own visualization effect. There is something in INET that is pretty close to what you want: DataLinkOsgVisualizer and PhysicalLinkOsgVisualizer, which flash an arrow when communication has occurred on the data link or physical layer. This is not the same as message passing, but it is close. Alternatively, you can implement your own animation using these visualizers as a sample.

Visualization of dynamically created modules

I am building a dynamic multi-agent simulation in OMNeT++, and for this I have to create new modules at runtime. The module creation is working; however, the modules created at runtime are not appearing in the 3D visualization.
module "node" is created successfully
Does anyone know how to make the module appear in the visualization? Do I have to update the visualization module?
omnet.ini:
[General]
network = AgentNetwork
*.visualizer.osgVisualizer.typename = "IntegratedOsgVisualizer"
*.visualizer.*.mobilityVisualizer.animationSpeed = 1
*.visualizer.osgVisualizer.sceneVisualizer.typename = "SceneOsgEarthVisualizer"
*.visualizer.osgVisualizer.sceneVisualizer.mapFile = "hamburg.earth"
AgentSpawner:
void AgentSpawner::initialize()
{
    cMessage *timer = new cMessage("timer");
    scheduleAt(1.0, timer);
}

void AgentSpawner::handleMessage(cMessage *msg)
{
    cModuleType *moduleType = cModuleType::get("simulations.Agent");
    cModule *module = moduleType->create("node", getParentModule());
    // set up parameters and gate sizes before we set up its submodules
    module->par("osgModel") = "3d/glider.osgb.(20).scale.0,0,180.rot";
    module->getDisplayString().parse("p=200,100;i=misc/aircraft");
    module->finalizeParameters();
    // create internals, and schedule it
    module->buildInside();
    module->callInitialize();
    module->scheduleStart(simTime() + 5.0);
}
The OSG visualization info is maintained totally separately from the actual simulation model's module objects (that's because visualization must ALWAYS be optional in the simulation, so make sure your simulation builds fine with OSG turned off entirely). This means an entirely different data structure is built, at initialization time, from the existing network nodes. As this is done only once, during initialization, dynamically created modules will not get a visualization counterpart data structure.
The code which creates the corresponding objects is here.
The solution would be to look up the NetworkNodeOsgVisualizer module in your AgentSpawner code, then create and add the corresponding data structures (NetworkNodeOsgVisualization objects). The needed methods (create and add) are there, but sadly they are protected, so you may need to modify the INET code and make them public to be able to call them.
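A rough sketch of what that could look like in AgentSpawner::handleMessage() after buildInside() (the module path and the exact method names here are my reading of the INET sources and may differ in your version, so treat them as assumptions to verify):
#include "inet/visualizer/scene/NetworkNodeOsgVisualizer.h"

using namespace inet::visualizer;

// ... inside AgentSpawner::handleMessage(), after module->buildInside() ...
// The relative path assumes the IntegratedOsgVisualizer layout from the ini above.
auto nodeVisualizer = check_and_cast<NetworkNodeOsgVisualizer *>(
        getModuleByPath("^.visualizer.osgVisualizer.networkNodeVisualizer"));
// createNetworkNodeVisualization()/addNetworkNodeVisualization() are protected
// in stock INET; make them public (or add a wrapper) before calling them.
auto vis = nodeVisualizer->createNetworkNodeVisualization(module);
nodeVisualizer->addNetworkNodeVisualization(vis);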

Project Tango - 3D Reconstruction

I'm trying to use the C 3D Reconstruction Library to get a mesh from the Tango device.
In the Mesh Building Functions there's a summary of the flow to use, which shows that I have to call the Tango3DR_update function several times and then call the Tango3DR_extractFullMesh to get the mesh.
The problem is that Tango3DR_update needs a Tango3DR_PointCloud object, and I don't see how to get one.
I can create an empty Tango3DR_PointCloud using Tango3DR_PointCloud_create, but I don't see anywhere how to fill it with real data.
Does anyone know how to get this object?
Or anyone knows if there's any example / sample code using this library? I didn't find any.
Thanks,
Oren
You should fill the Tango3DR_PointCloud from the TangoXYZij you receive in OnXYZijAvailableRouter. Same thing for the pose struct.
// -- point cloud
Tango3DR_PointCloud cloud;
cloud.num_points = xyz_ij->xyz_count;
cloud.points = new Tango3DR_Vector4[cloud.num_points];
for (int i = 0; i < cloud.num_points; ++i) {
    cloud.points[i][0] = xyz_ij->xyz[i][0];
    cloud.points[i][1] = xyz_ij->xyz[i][1];
    cloud.points[i][2] = xyz_ij->xyz[i][2];
    // last is confidence
    cloud.points[i][3] = 1;
}
cloud.timestamp = xyz_ij->timestamp;
(Do not forget to delete[] cloud.points once you're done.)
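For the pose, a minimal sketch along the same lines (this assumes the Tango3DR_Pose layout of translation[3] plus orientation[4] from the 3DR headers, and that tango_pose is the TangoPoseData you receive; verify against your SDK version):
// -- pose, filled from a TangoPoseData (here called tango_pose)
Tango3DR_Pose pose;
for (int i = 0; i < 3; ++i)
    pose.translation[i] = tango_pose->translation[i]; // x, y, z
for (int i = 0; i < 4; ++i)
    pose.orientation[i] = tango_pose->orientation[i]; // quaternion x, y, z, w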
The only official example I could find is in the Unity examples; they use the C API, called from C#.

nodejs native c++ npm module memory error, cairo image processing

I've been bugging TJ on node-canvas about a code speed-up I'm working on in a fork of a node module he authored and maintains.
I found Canvas.toBuffer() to be killing our pipeline's resources and created an alternative that simply converts from Canvas into an Image without going through a PNG buffer/media URL. The problem is that cairo is a mysterious beast, and there's an additional level of concern about keeping memory allocated within node modules from getting GC'd by mother V8. I've added the proper HandleScopes to all required functions that access V8 data.
I was able to test the Canvas.loadImage(image) method thousands of times on my Mac setup (6.18), as well as in standalone tests on our Ubuntu production servers running the same version of node. But when the code is run as a background process/server coordinated by Gearman, I'm getting some "interesting" memory errors/segfaults.
In addition, I'm having trouble calling any of the methods of classes defined in node-canvas that aren't inline within header files. As a side question: what's the best way to create common native source code packages that other node modules can rely on?
I've tried recreating the problem and running it with gdb, node_g, and all the node modules built with symbols and debug flags. But the error crops up in a lib outside of the source I can get a stack trace for.
For reference, here's where I call loadImageData. While it runs locally under a variety of conditions, in our production environment, carefully tucked away within a frame server, it appears to be causing segfaults (I spent yesterday trying to gdb node_g our server code, but the frame servers are kicked off by Gearman... TL;DR: didn't get a root-cause stack trace).
https://github.com/victusfate/node-canvas/blob/master/src/Canvas.cc#L497
Handle<Value>
Canvas::LoadImage(const Arguments &args) {
    HandleScope scope;
    LogStream mout(LOG_DEBUG, "node-canvas.paint.ccode.Canvas.LoadImage");
    mout << "Canvas::LoadImage top " << LogStream::endl;
    Canvas *canvas = ObjectWrap::Unwrap<Canvas>(args.This());
    if (args.Length() < 1) {
        mout << "Canvas::LoadImage Error requires one argument of Image type " << LogStream::endl;
        return ThrowException(Exception::TypeError(String::New("Canvas::LoadImage requires one argument of Image type")));
    }
    Local<Object> obj = args[0]->ToObject();
    Image *img = ObjectWrap::Unwrap<Image>(obj);
    canvas->loadImageData(img);
    return Undefined();
}
void Canvas::loadImageData(Image *img) {
    LogStream mout(LOG_DEBUG, "node-canvas.paint.ccode.Canvas.loadImageData");
    if (this->isPDF()) {
        mout << "Canvas::loadImageData pdf canvas type " << LogStream::endl;
        cairo_surface_finish(this->surface());
        closure_t *closure = (closure_t *) this->closure();
        int w = cairo_image_surface_get_width(this->surface());
        int h = cairo_image_surface_get_height(this->surface());
        img->loadFromDataBuffer(closure->data, w, h);
        mout << "Canvas::loadImageData pdf type, finished loading image" << LogStream::endl;
    }
    else {
        mout << "Canvas::loadImageData data canvas type " << LogStream::endl;
        cairo_surface_flush(this->surface());
        int w = cairo_image_surface_get_width(this->surface());
        int h = cairo_image_surface_get_height(this->surface());
        img->loadFromDataBuffer(cairo_image_surface_get_data(this->surface()), w, h);
        mout << "Canvas::loadImageData image type, finished loading image" << LogStream::endl;
    }
}
And here's what the current method in Image looks like (I removed some commented-out logging info):
https://github.com/victusfate/node-canvas/blob/master/src/Image.cc#L240
/*
 * load from data buffer width*height*4 bytes
 */
cairo_status_t
Image::loadFromDataBuffer(uint8_t *buf, int width, int height) {
    this->clearData();
    int stride = cairo_format_stride_for_width(CAIRO_FORMAT_ARGB32, width); // 4*width + ?
    this->_surface = cairo_image_surface_create_for_data(buf, CAIRO_FORMAT_ARGB32, width, height, stride);
    this->data_mode = DATA_IMAGE;
    this->loaded();
    cairo_status_t status = cairo_surface_status(_surface);
    if (status) return status;
    return CAIRO_STATUS_SUCCESS;
}
Any help, pro tips, assistance, or words of encouragement would be appreciated.
Originally from google groups
Got it!
I was working on another library today that uses cairomm and discovered that surfaces created from data buffers require those buffers to live on as long as the surface does.
http://www.cairographics.org/manual/cairo-Image-Surfaces.html#cairo-image-surface-create-for-data
"Creates an image surface for the provided pixel data. The output buffer must be kept around until the cairo_surface_t is destroyed or cairo_surface_finish() is called on the surface. The initial contents of data will be used as the initial image contents; you must explicitly clear the buffer, using, for example, cairo_rectangle() and cairo_fill() if you want it cleared."
I introduced a surface that was created from a temporary buffer.
Easy solution in node-canvas fork:
There's a member variable called _data to which I can assign a locally malloc'd data buffer, which will live on as long as the cairo surface does.
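A minimal sketch of that change, assuming clearData() frees _data and that buf has the same ARGB32 stride (true when it comes from cairo_image_surface_get_data, as above):
cairo_status_t
Image::loadFromDataBuffer(uint8_t *buf, int width, int height) {
    this->clearData();
    int stride = cairo_format_stride_for_width(CAIRO_FORMAT_ARGB32, width);
    // keep an owned copy of the pixels: cairo does not copy the buffer,
    // so _data must live as long as _surface does
    _data = (uint8_t *) malloc(stride * height);
    memcpy(_data, buf, stride * height);
    this->_surface = cairo_image_surface_create_for_data(_data, CAIRO_FORMAT_ARGB32,
                                                         width, height, stride);
    this->data_mode = DATA_IMAGE;
    this->loaded();
    return cairo_surface_status(_surface);
}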
Solution:
A general way to copy a buffer into a surface is to create a temporary surface from the buffer, then draw from the temporary surface onto an allocated surface and let cairo manage its own memory.
Implemented with the cairo C API, it would look something like this:
cairo_surface_t *pTmp = cairo_image_surface_create_for_data(
        data,
        CAIRO_FORMAT_ARGB32,
        width,
        height,
        cairo_format_stride_for_width(CAIRO_FORMAT_ARGB32, width));
_surface = cairo_image_surface_create(CAIRO_FORMAT_ARGB32, width, height);
cairo_t *cr = cairo_create(_surface);
cairo_set_source_surface(cr, pTmp, x, y);
cairo_paint(cr);
// clean up the context and the temporary surface; _surface keeps its own copy
cairo_destroy(cr);
cairo_surface_destroy(pTmp);
In addition I'm having trouble calling any of the methods of classes defined in node-canvas that aren't inline within header files. As a side question: what's the best way to create common native source code packages that other node modules can rely on?
While I don't have an answer for the memory issue/segfault I was running into in our staging environment, I do have an answer for constructing reusable libraries with native node modules.
I'm using git submodules for all the independent native node modules, and I added a conditional preprocessor definition to each of their wscript or binding.gyp files to specify whether or not to generate a shared object .node module.
Update: alternatively, unique init function names or namespaces can surround the module initialization call (I've since moved to this setup).
In addition, I'll be using this new package to aid in debugging or rewriting code sections (I can't spend too much time debugging the use of several remote libraries).
in wscript or binding.gyp
flags = ['-D_NAME_NODE_MODULE', '-O3', '-Wall', '-D_FILE_OFFSET_BITS=64', '-D_LARGEFILE_SOURCE', '-msse2']
then in an initialization file
#ifdef _NAME_NODE_MODULE
extern "C" {
    static void init(Handle<Object> target) {
        HandleScope scope;
        NODE_SET_METHOD(target, "someFunction", someFunction);
    }
    NODE_MODULE(moduleName, init);
}
#endif
This way the module entry point is only added when the flag is set. Otherwise the code can be linked to normally (like from another node module).

Making files with map or scenario information, such as what resources to load, objects, locations, events

I'm making a simple graphics engine in C++, using Visual C++ and DirectX, and I'm testing out different map layouts.
Currently, I construct "maps" by simply making a C++ source file and writing things like:
SHADOWENGINE ShadowEngine(&settings);

SPRITE_SETTINGS sset;
MODEL_SETTINGS mset;

sset.Name = "Sprite1";
sset.Pivot = TOPLEFT;
sset.Source = "sprite1.png";
sset.Type = STATIC;
sset.Movable = true;
sset.SoundSet = "sprite1.wav";
ShadowEngine->Sprites->Load(sset);

sset.Name = "Sprite2";
sset.Source = "sprite2.png";
sset.Parent = "Sprite1";
sset.Type = ANIMATED;
sset.Frames = 16;
sset.Interval = 1000;
sset.Position = D3DXVECTOR3(0.0f, (ShadowEngine->Resolution->Height / 2), 0.0f);
ShadowEngine->Sprites->Load(sset);

mset.Source = "character.sx";
mset.Collision = false;
mset.Type = DYNAMIC;
ShadowEngine->Models->Load(mset);
//Etc..
What I'd like to be able to do is create map files that are loaded into the engine instead, so the maps aren't written into the executable. That way, I can make changes to the maps without having to recompile every damn time.
SHADOWENGINE ShadowEngine(&settings);
ShadowEngine->InitializeMap("Map1.sm");
The only way I can think of is to make the engine read the file as text and then parse the information, but it sounds like such a hassle.
Am I thinking the wrong way?
What should I do?
Wouldn't mind an explanation on how others do it, like Warcraft III, Starcraft, Age of Empires, Heroes of Might and Magic...
Would really appreciate some help on this one.
You are not thinking the wrong way; loading your map data from files is definitely desirable. The most common prebuilt solutions are Protocol Buffers and Lua. If you don't already know Lua, I would use Protocol Buffers, as they directly solve your problem, whereas Lua is a scripting language that is flexible enough to do what you need done.
Some people write their data as XML, but this is only a partial solution, as XML is just a markup language; after loading the XML you'll have a DOM tree to parse.
Google's CPP Protobuf Tutorial
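For a concrete idea of the Protocol Buffers route, here's a minimal sketch; the map.proto layout and the MapFile/SpriteDef names are made up for illustration, see the tutorial linked above for the real workflow.
// Hypothetical map.proto, compiled with protoc into map.pb.h / map.pb.cc:
//   message SpriteDef { optional string name = 1; optional string source = 2; }
//   message MapFile   { repeated SpriteDef sprites = 1; }
#include <fstream>
#include "map.pb.h"

bool LoadMap(const char *path, MapFile *map) {
    std::ifstream in(path, std::ios::binary);
    if (!in)
        return false;
    return map->ParseFromIstream(&in); // protobuf's binary deserialization
}

// Usage: MapFile map; if (LoadMap("Map1.sm", &map)) { iterate map.sprites()
// and feed each SpriteDef to the engine, e.g. ShadowEngine->Sprites->Load(...) }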