I am new to OMNeT++ and I'm trying to implement a drone network whose nodes communicate with each other using direct messages.
I want to visualize my drone network with the 3D visualization in OMNeT++, using the OsgVisualizer from the inet.visualizer.scene package.
In the dronenetwork.ned file I have used the IntegratedVisualizer and the OsgGeographicCoordinateSystem. The map file to be used is defined in the omnetpp.ini file, so map loading and the mobility of the drones work fine in the 3D visualization of the simulation run.
However, the message transmissions between the drones are not visualized in 3D, even though they are properly visualized in 2D canvas mode.
I tried adding both NetworkNodeOsgVisualizer and NetworkConnectionOsgVisualizer to my drone module as visualization submodules, and I have also tagged the drone module with the @networkNode and @networkConnection properties, but the message transmissions are still not visualized.
Any help or hint regarding this would be highly appreciated.
The visualization code in the drone simple module is as follows:
import inet.visualizer.scene.NetworkNodeOsgVisualizer;
import inet.visualizer.scene.NetworkConnectionOsgVisualizer;

module drone
{
    parameters:
        @networkNode;
        @networkConnection;
    submodules:
        networkNodeOsgVisualizer: NetworkNodeOsgVisualizer {
            @display("p=207,50");
            displayModuleName = true;
            visualizationTargetModule = "^.^";
            visualizationSubjectModule = "wirelessInterface.^.^";
        }
        networkConnectionOsgVisualizer: NetworkConnectionOsgVisualizer {
            visualizationTargetModule = "^.^";
            visualizationSubjectModule = "wirelessInterface.^.^";
            displayNetworkConnections = true;
        }
}
Thank you
Message passing and direct message sending visualizations are special cases implemented automatically by Qtenv for the 2D (default) visualization only. You can add custom 2D message visualization (like the one in the aloha example). OMNeT++ does not provide any 3D visualization by default; all the code must be provided by the model (INET in this case). This is also true for any transient visualization. There is an example for this in the osg-earth OMNeT++ example, where communication between cows is visualized by inflating bubbles.
So you have to implement your own visualization effect. There is something in INET which is pretty close to what you want: DataLinkOsgVisualizer and PhysicalLinkOsgVisualizer, which flash an arrow when communication occurs on the data link or physical layer. This is not the same as message passing, but it is close; alternatively, you can implement your own animation using these visualizers as a sample.
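For example, assuming the network instantiates INET's IntegratedVisualizer under the name visualizer (a sketch only; the exact submodule and parameter names can differ between INET versions), the OSG link arrows can be enabled from omnetpp.ini like this:

*.visualizer.osgVisualizer.dataLinkVisualizer.displayLinks = true
*.visualizer.osgVisualizer.physicalLinkVisualizer.displayLinks = true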
Related
I'm in the early stages (VERY alpha) of writing the backend for an open-source home video surveillance system. I'm building it on GStreamer as a series of plugins. The idea is to make each step of the process modular, so that anyone can write a custom module and use it in the pipeline. For an example, take a look at "odo-lib-opencv-yolo" in the repo linked below. Using that as a template, anyone can write their own detection lib using any library or neural network model and use it within this pipeline. I also plan to expand that out to include a few "default" libraries that are optimized for different systems (Nvidia GPUs, Raspberry Pis, Coral TPUs, etc.).
The issue I'm running into is the tracking portion. I wanted to start off with a simple and basic tracking system using the tracking API in OpenCV as a "filler" while I flesh out everything else. But it just doesn't seem to work right.
Here's the link to the repo: https://github.com/CeeBeeEh/Odo-Project
Of most importance is this code block in gst/gstodotrack.cpp at line 270:
if (meta->isInferenceFrame) {
    // Inference frame: remember the detections and re-seed the multi-tracker
    LAST_COUNT = meta->detectionCount;
    TRACKER->clear();
    TRACKER = cv::legacy::MultiTracker::create();
    for (ulong i = 0; i < LAST_COUNT; i++) {
        LAST_DETECTION[i].class_id = meta->detections[i].class_id;
        LAST_DETECTION[i].confidence = meta->detections[i].confidence;
        strcpy(LAST_DETECTION[i].label, meta->detections[i].label);
        cv::Rect2i rect = cv::Rect2i(meta->detections[i].box.x, meta->detections[i].box.y,
                                     meta->detections[i].box.width, meta->detections[i].box.height);
        TRACKER->add(create_cvtracker(odo), img, rect);
    }
    GST_DEBUG("Added %lu objects to tracker", LAST_COUNT);
}
else {
    // Tracking-only frame: propagate the remembered detections with the tracker
    meta->detectionCount = LAST_COUNT;
    std::vector<cv::Rect2d> tracked;
    if (!TRACKER->update(img, tracked))
        GST_ERROR("Error tracking objects");
    GST_DEBUG("Infer count=%lu, tracked count=%zu", LAST_COUNT, tracked.size());
    for (size_t i = 0; i < tracked.size(); i++) {
        meta->detections[i].box.x = tracked[i].x;
        meta->detections[i].box.y = tracked[i].y;
        meta->detections[i].box.width = tracked[i].width;
        meta->detections[i].box.height = tracked[i].height;
        meta->detections[i].class_id = LAST_DETECTION[i].class_id;
        meta->detections[i].confidence = LAST_DETECTION[i].confidence;
        strcpy(meta->detections[i].label, LAST_DETECTION[i].label);
    }
}
Variables at top of file:
cv::Ptr<cv::legacy::MultiTracker> TRACKER;
ulong LAST_COUNT = 0;
DetectionData LAST_DETECTION[DETECTION_MAX] = {};
I originally had
TRACKER = cv::legacy::MultiTracker::create();
in the start method to create it when the plugin is loaded, but that caused a segfault.
I've also tried creating a fixed-size array of cv::Ptr<cv::Tracker>, using each one for a single object with the same id within each frame and recycling them on each inference frame. But that caused odd behaviour where every object was tracked to the same size and position within the frame.
Right now, in the current configuration, the tracking is dog-slow. It's actually faster to run inference on every frame on the GPU than to run inference on a single frame and track for x additional frames. It should be the opposite.
It seems as though OpenCV's tracking API has changed as of 4.5.1, but I'm not sure if that's the cause of the issue.
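For reference, here is a minimal sketch of the new per-object API as I understand it (TrackerCSRT is an arbitrary choice, and detections, inferenceFrame and nextFrame are placeholder names, not code from the repo):

#include <opencv2/core.hpp>
#include <opencv2/tracking.hpp>
#include <vector>

// Seed one tracker per detection on an inference frame...
std::vector<cv::Ptr<cv::Tracker>> trackers;
for (const cv::Rect& box : detections) {
    cv::Ptr<cv::Tracker> t = cv::TrackerCSRT::create(); // new cv:: API, not cv::legacy
    t->init(inferenceFrame, box);
    trackers.push_back(t);
}
// ...then update each tracker on the in-between frames.
for (cv::Ptr<cv::Tracker>& t : trackers) {
    cv::Rect box;
    if (t->update(nextFrame, box)) {
        // box now holds the tracked position; update() returns false when the target is lost
    }
}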
I'm looking for help in sorting out the OpenCV tracking issue. I find all the docs very lacking in explaining the what and why of the multi-tracking API, so they are not of much help.
I am building a dynamic multi-agent simulation in OMNeT++, and for this I have to create new modules at runtime. The module creation is working; however, the modules created at runtime are not appearing in the 3D visualization.
module "node" is created sucessfully
Does anyone know how to make the module appear in the visualization? Do I have to update the visualization module?
omnetpp.ini:
[General]
network = AgentNetwork
*.visualizer.osgVisualizer.typename = "IntegratedOsgVisualizer"
*.visualizer.*.mobilityVisualizer.animationSpeed = 1
*.visualizer.osgVisualizer.sceneVisualizer.typename = "SceneOsgEarthVisualizer"
*.visualizer.osgVisualizer.sceneVisualizer.mapFile = "hamburg.earth"
AgentSpawner:
void AgentSpawner::initialize()
{
    cMessage *timer = new cMessage("timer");
    scheduleAt(1.0, timer);
}

void AgentSpawner::handleMessage(cMessage *msg)
{
    cModuleType *moduleType = cModuleType::get("simulations.Agent");
    cModule *module = moduleType->create("node", getParentModule());
    // set up parameters and gate sizes before we set up its submodules
    module->par("osgModel") = "3d/glider.osgb.(20).scale.0,0,180.rot";
    module->getDisplayString().parse("p=200,100;i=misc/aircraft");
    module->finalizeParameters();
    // create internals, and schedule it
    module->buildInside();
    module->callInitialize();
    module->scheduleStart(simTime() + 5.0);
}
The OSG visualization info is maintained totally separately from the actual simulation model module objects (that's because visualization must ALWAYS be optional in the simulation, so make sure your simulation builds fine with OSG turned off entirely). This means that an entirely separate data structure is built at initialization time from the existing network nodes. As this is done only once, during initialization, dynamically created modules will not get a visualization counterpart data structure.
The code which created the corresponding objects is here.
The solution would be to look up the NetworkNodeOsgVisualizer module in your AgentSpawner code, then create and add the corresponding data structures (NetworkNodeOsgVisualization objects). The needed methods (create and add) are there, but sadly they are protected, so you may need to modify the INET code and make them public to be able to call them.
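A rough sketch of the AgentSpawner side, assuming the create/add methods have been made public (the visualizer module path and the method names createNetworkNodeVisualization()/addNetworkNodeVisualization() may differ between INET versions, so treat them as placeholders):

#include "inet/visualizer/scene/NetworkNodeOsgVisualizer.h"

using namespace inet::visualizer;

// After module->callInitialize() in AgentSpawner::handleMessage();
// adjust the module path to match your network.
auto nodeVisualizer = check_and_cast<NetworkNodeOsgVisualizer *>(
        getModuleByPath("^.visualizer.osgVisualizer.networkNodeVisualizer"));
// Create the per-node OSG data structure and register it with the visualizer.
auto visualization = nodeVisualizer->createNetworkNodeVisualization(module);
nodeVisualizer->addNetworkNodeVisualization(visualization);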
I'm trying to display large point clouds (~20M pts) with Qt3D.
I first found this library https://github.com/MASKOR/Qt3DPointcloudRenderer, which is a good example, but I rewrote a minimalist example that loads a specific LAS or PCD point cloud and displays it with a subclass of Qt3DRender::QGeometry.
It works and the point cloud is nice, but it lags a lot. I think there is no optimization and all 20M points are displayed all the time.
What can I do to optimize this?
(The same point cloud is fluid on the same laptop in other software such as QuickTerrainReader, Pix4D, or even VTK.)
Currently, at loading time, the point cloud is serialized into two Qt3DRender::QBuffer objects, and I create two attributes from them:
Qt3DRender::QAttribute* vertexAttrib = new Qt3DRender::QAttribute(nullptr);
vertexAttrib->setName(Qt3DRender::QAttribute::defaultPositionAttributeName());
vertexAttrib->setVertexBaseType(Qt3DRender::QAttribute::Float);
vertexAttrib->setVertexSize(3);
vertexAttrib->setAttributeType(Qt3DRender::QAttribute::VertexAttribute);
vertexAttrib->setBuffer(m_vertexBuffer);
vertexAttrib->setByteStride(12);
vertexAttrib->setByteOffset(0);
vertexAttrib->setCount(m_pointcloud->size());
addAttribute(vertexAttrib);
setBoundingVolumePositionAttribute(vertexAttrib);
Qt3DRender::QAttribute* colorAttrib = new Qt3DRender::QAttribute(nullptr);
colorAttrib->setName(Qt3DRender::QAttribute::defaultColorAttributeName());
colorAttrib->setVertexBaseType(Qt3DRender::QAttribute::UnsignedByte);
colorAttrib->setVertexSize(3);
colorAttrib->setAttributeType(Qt3DRender::QAttribute::VertexAttribute);
colorAttrib->setBuffer(m_colorBuffer);
colorAttrib->setByteStride(3);
colorAttrib->setByteOffset(0);
colorAttrib->setCount(m_pointcloud->size());
addAttribute(colorAttrib);
I am trying to use ITK's OtsuMultipleThresholdsImageFilter in a project, but I get no output.
My aim is to make a simple interface between OpenCV and ITK.
To convert my data from OpenCV's Mat container to itk::Image, I use ITK's bridge to OpenCV, and I have checked that the data is properly passed to ITK.
I am even able to display it thanks to QuickView.
But when I set up the filter, inspired by this example, the object returned by the GetThresholds() method is empty.
Here is the code I wrote:
typedef itk::Image<uchar,2> image_type;
typedef itk::OtsuMultipleThresholdsImageFilter<image_type, image_type> filter_type;
image_type::Pointer img = itk::OpenCVImageBridge::CVMatToITKImage<image_type>(src);
image_type::SizeType size = img->GetLargestPossibleRegion().GetSize();
filter_type::Pointer filter = filter_type::New();
filter->SetInput(img);
filter->SetNumberOfHistogramBins(256);
filter->SetNumberOfThresholds(K);
filter_type::ThresholdVectorType tmp = filter->GetThresholds();
std::cout<<"CHECK: "<<tmp.size()<<std::endl;
src is OpenCV's Mat of CV_8U(C1) type.
A fundamental concept of ITK is its pipeline architecture: you must connect the inputs and outputs and then update the pipeline.
You have connected the pipeline, but you have not executed it. You must call filter->Update().
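Applied to the code above:

filter->SetInput(img);
filter->SetNumberOfHistogramBins(256);
filter->SetNumberOfThresholds(K);
filter->Update(); // executes the pipeline; without it GetThresholds() returns an empty vector
filter_type::ThresholdVectorType tmp = filter->GetThresholds();
std::cout << "CHECK: " << tmp.size() << std::endl;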
Please read the ITK Software Guide to understand the fundamentals of ITK:
https://itk.org/ItkSoftwareGuide.pdf
I have an IP camera that receives commands using POST HTTP requests (for example to call PTZ commands or set various camera settings). The standard way of controlling it is through its own web interface, which is partially an ActiveX plugin and partially standard HTML+JS. Of course, because of the ActiveX part, it only works in IE under Windows.
I'm attempting to change that by figuring out all the commands and writing a small Python or JavaScript program to do the same, so that it is more cross-platform.
I have one major problem: each POST request contains a calculated "cc" field, which I assume is a checksum. The JS code in the cam interface shows that it is calculated by calling a function inside the plugin:
tt = new Date().Format("yyyyMMddhhmmss");
jo_header["tt"] = tt;
if (getCpPlugin() != null && getCpPlugin().valid) {
jo_header["cc"] = getCpPlugin().nsstpGetCC(tt, session_id);
}
The nsstpGetCC function obviously calculates the checksum from two parameters, the timestamp and the session_id. A real example (captured with Wireshark):
tt = "20171018231918"
session_id = "30303532646561302D623434612D3131"
cc = "849e586524385e1071caa4023a3df75401e5bb82"
The checksum seems to be 160 bits long. I tried both SHA-1 and RIPEMD-160 and every combination of concatenating tt and session_id I could think of, but I can't seem to get the same hash as the original plugin. The plugin DLL seems to be written in C++, and I have almost no experience with decompilation to dive into the problem from that angle.
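For reference, this is the kind of systematic check I've been running (a minimal sketch using OpenSSL's SHA1; the two plain concatenations shown here already fail, so further candidates such as separators, a hex-decoded session_id, or HMAC variants would be added the same way):

#include <openssl/sha.h>
#include <cstdio>
#include <string>

static std::string sha1_hex(const std::string& data) {
    unsigned char digest[SHA_DIGEST_LENGTH];
    SHA1(reinterpret_cast<const unsigned char*>(data.data()), data.size(), digest);
    char hex[2 * SHA_DIGEST_LENGTH + 1];
    for (int i = 0; i < SHA_DIGEST_LENGTH; i++)
        std::sprintf(hex + 2 * i, "%02x", digest[i]);
    return hex;
}

int main() {
    const std::string tt = "20171018231918";
    const std::string sid = "30303532646561302D623434612D3131";
    const std::string expected = "849e586524385e1071caa4023a3df75401e5bb82";
    for (const std::string& c : { tt + sid, sid + tt })
        std::printf("sha1(%s) = %s%s\n", c.c_str(), sha1_hex(c).c_str(),
                    sha1_hex(c) == expected ? "  <-- MATCH" : "");
}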
So my question basically is: can someone figure out how they calculate that cc, or at least give me an idea of which direction to research further? Maybe I'm looking at the wrong hash algorithms or something. Or give me some idea of how I could figure out what the original ActiveX function nsstpGetCC is doing, for example by decompiling it or by monitoring its operation in memory while it runs. What tools should I use?