VANET Simulation using SUMO - veins

I am trying to use SUMO in my MATLAB VANET simulator. As I understand it, SUMO is a standalone mobility simulator that can simulate different mobility models, such as car following (platooning), lane changing, and traffic intersections.
On the other side, VANET applications are supposed to make use of VANET protocol messages (beacon messages, EMG, or whatever), take actions based on this information, and update the vehicles' mobility as well.
I am already familiar with the Veins and Plexe frameworks, which make use of SUMO via TraCI; however, I can't see the big picture of how they affect mobility in SUMO!
For instance, in the Plexe framework, SUMO configuration files with vehicle routes can be loaded into SUMO to simulate the platooning scenario by itself, so what is the value added by using VANET protocol messages?
The same goes for lane-changing simulation: SUMO will perform lane changes for vehicles based on certain conditions, so what does the VANET simulator have to add using VANET protocol messages?

In the Plexe framework, SUMO configuration files with vehicle routes
can be loaded into SUMO to simulate the platooning scenario by itself,
so what is the value added by using VANET protocol messages?
Cooperative Adaptive Cruise Control (CACC) requires information about what the preceding vehicle plans to do (as opposed to simply observing what the preceding vehicle is doing). This is only possible if the preceding vehicle communicates its plans wirelessly. Plexe makes it possible to simulate the fact that this wireless exchange of information:
takes some time and causes some load on the channel which, in turn, depends on the amount of information exchanged by other vehicles
can lose information if...
vehicles are too far away
vehicles are (partially) hidden behind obstacles
vehicles receive multiple transmissions simultaneously
(and many more effects)
All of these effects are captured by the wireless network simulation provided by the models of Plexe and Veins running in OMNeT++.
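The difference is easy to see in a toy numerical experiment (plain Python, not Plexe code; all numbers are invented): a follower that mirrors the leader's broadcast planned acceleration tracks it perfectly as long as beacons arrive, but a communication blackout (for example, the leader shadowed by an obstacle) lets the spacing error grow.

```python
def simulate(blackout=(), steps=200, dt=0.1):
    """Toy 2-vehicle platoon. The leader broadcasts its planned
    acceleration every step; the follower mirrors the last value it
    received. `blackout` is the set of steps whose beacons are lost
    (e.g. the leader is hidden behind an obstacle)."""
    leader_v = follower_v = 20.0   # m/s
    gap = 10.0                     # desired and initial spacing, m
    last_known_acc = 0.0
    max_gap_error = 0.0
    for step in range(steps):
        leader_acc = -2.0 if 50 <= step < 100 else 0.0  # brake at t=5s..10s
        if step not in blackout:                        # beacon received
            last_known_acc = leader_acc
        leader_v += leader_acc * dt
        follower_v += last_known_acc * dt
        gap += (leader_v - follower_v) * dt
        max_gap_error = max(max_gap_error, abs(gap - 10.0))
    return max_gap_error

# Perfect channel: the follower tracks the leader exactly.
# Losing the beacons around the braking manoeuvre does not.
assert simulate() == 0.0
assert simulate(blackout=range(50, 100)) > 10.0
```

A real Plexe run replaces the `step not in blackout` line with a full PHY/MAC model in OMNeT++; that channel model is exactly the value added on top of SUMO's mobility.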

Granularity of an Akka Actor for IoT Scenario

I'm interested in using Akka for an IoT device scenario, but I'm worried about over-complicating an individual actor. In most industries, a device is not as simple as the 'temperature sensor' you see in most tutorials. A device represents something more complex that can take on the following characteristics:
Many sensors can be represented (temperatures, electrical/fluid flows, power output, on/off values, ...)
Each of the values above can be queried for its current value, and more likely for historical values (trends, histograms, ...)
Alerting rules can be set up for any one of the sensor values
Each device has a fairly complex configuration that must be managed (which sensors, which units of measure)
Many different message types can be sent (sensor reading requests, alerts, configuration updates, ...)
So my general question is: does anyone have good advice on what level of complexity an actor should take on?
Thanks
Steve
Below are a few bullet points one might want to keep in mind when determining what level of complexity an actor should take on:
Akka actors are lightweight and loosely-coupled by design, thus scale well in a distributed environment. On the other hand, each actor can be tasked to handle fairly complex business logic using Akka's functionality-rich API. This results in great flexibility in determining how much workload an actor should bear.
In general, quantity of IoT devices and operational complexity in each device are the two key factors in the design of the device actor. If total device quantity is large, one should consider having some group-device actors each of which handles a set of devices using, for instance, a private key-value collection. On the other hand, if each IoT device involves fairly complex computation or state mutation logic, it might be better to make each actor represent an individual device. It's worth noting that the two strategies aren't mutually exclusive.
For historical data, I would recommend having actors periodically feed their data into a database (e.g. Cassandra, PostgreSQL) for OLAP queries. Actors should be left to answer only simple queries.
Akka actors have a well-defined lifecycle with hooks like preStart(), postRestart(), postStop() for programmatic logic control. Supervisor strategies can be created to manage actors in accordance with specific business rules (send alerts, restart actors, etc).
On customizing attributes (e.g. unit of measure) specific to the type of devices, one could model a device type along with its associated sensor attributes, say, as a case class and make it a parameter of the device actor.
Capability of handling different message types via non-blocking message passing is one of the biggest strengths of Akka actors. The receive partial function in an actor effectively handles various message types via pattern matching. When representing a device with complex state mutation logic, its operational state can be safely hotswapped via context.become.
This blog post about simulating IoT devices as individual actors might be of interest.
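To make the group-actor idea and the `context.become` hotswap concrete, here is a language-agnostic sketch (written in runnable Python for brevity; real Akka code would be Scala or Java actors, and the message shapes and field names below are invented):

```python
class DeviceActor:
    """Toy, single-threaded stand-in for an Akka device actor.
    `become` mimics context.become: the actor swaps its message
    handler to represent a new operational state."""
    def __init__(self, device_id, unit="celsius"):
        self.device_id = device_id
        self.unit = unit            # per-device config, cf. a case class parameter
        self.last_reading = None
        self._behavior = self._normal

    def become(self, behavior):
        self._behavior = behavior

    def receive(self, message):
        return self._behavior(message)

    def _normal(self, message):
        kind = message.get("kind")
        if kind == "reading":
            self.last_reading = message["value"]
            if message["value"] > 100:        # a simple alerting rule
                self.become(self._alerting)
                return "alert-raised"
            return "stored"
        if kind == "query":
            return self.last_reading

    def _alerting(self, message):
        if message.get("kind") == "ack-alert":
            self.become(self._normal)
            return "alert-cleared"
        return "device-in-alert"


class DeviceGroupActor:
    """One actor fronting many simple devices via a private key-value
    map: useful when device count is large but per-device logic is small."""
    def __init__(self):
        self.devices = {}

    def receive(self, message):
        actor = self.devices.setdefault(
            message["device_id"], DeviceActor(message["device_id"]))
        return actor.receive(message)


group = DeviceGroupActor()
assert group.receive({"device_id": "t1", "kind": "reading", "value": 42}) == "stored"
assert group.receive({"device_id": "t1", "kind": "reading", "value": 130}) == "alert-raised"
assert group.receive({"device_id": "t1", "kind": "query"}) == "device-in-alert"
assert group.receive({"device_id": "t1", "kind": "ack-alert"}) == "alert-cleared"
```

The two strategies compose: the group actor handles fan-out by key, while each `DeviceActor` keeps its own config and state-dependent behaviour.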

How to use Node.js to develop a listener that receives tracking data from many vehicles (over TCP)?

We are working on an AVL (automatic vehicle location) project.
We will have about 300,000 vehicles that send their activity info via GSM (SIM card) modems over TCP.
We have a listener developed in C++ that listens on a specific port.
At the moment, we have about 20,000 GPS devices that communicate with the C++ listener on one specific port.
Sometimes many devices have to wait until the port is free. We need a scalable listener.
Is there a better solution for this case? I have seen Node.js used for similar cases.
My questions:
1. What is your opinion? Is Node.js a good approach?
2. How would one design and implement a listener with Node.js?
3. Is there any other solution?
I would look into an actor-model framework; this will allow your application to scale much better and achieve higher throughput (though perhaps at the cost of latency). A listener at one specific endpoint is also potentially a SPOF (single point of failure) and, potentially, a single bottleneck. The right solution depends on your requirements for HA, HR, scaling, performance, and other metrics.
I have no deep experience with actor toolkits for Node, but here is one on GitHub:
https://github.com/benlau/nactor
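Whatever toolkit you choose, Node's core strength here is its event loop: thousands of mostly-idle GPS connections can share one port without a thread per socket. The same shape is sketched below in Python's asyncio purely so the example is runnable (a Node version would use `net.createServer` with `'data'` handlers; the line-based ACK protocol is invented for illustration):

```python
import asyncio

async def handle_device(reader, writer):
    # One coroutine per connection; the event loop multiplexes them,
    # so many mostly-idle devices can share this single listening port.
    while True:
        line = await reader.readline()
        if not line:                      # device disconnected
            break
        # parse/queue the position report here; we just ack it back
        writer.write(b"ACK " + line)
        await writer.drain()
    writer.close()

async def demo():
    server = await asyncio.start_server(handle_device, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]
    # one of the many devices, connecting to the same listener
    reader, writer = await asyncio.open_connection("127.0.0.1", port)
    writer.write(b"POS 35.7,51.4\n")
    await writer.drain()
    reply = await reader.readline()
    writer.close()
    await writer.wait_closed()
    server.close()
    await server.wait_closed()
    return reply

assert asyncio.run(demo()) == b"ACK POS 35.7,51.4\n"
```

With either stack, scaling beyond one machine still needs something in front (DNS round-robin or a TCP load balancer), which also addresses the SPOF concern above.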

DroneKit Android Tower telemetry update frequency

I've just started looking into building something using DroneKit. Before I dive in headfirst (and I fully realize this might be a difficult question to answer): what kind of telemetry update frequency can I expect from the 3DR services underlying an Android app (say I'm using a Pixhawk controller with the 3DR telemetry downlink plugged into my Android device's USB port)? I need something as close to instantaneous as possible: no slower than 1 Hz, and optimally 5-10 Hz (talking about both telemetry update events from the drone and the ability to send commands to the drone). Is it possible to get that kind of speed using this stack?
Sending commands to the drone is not throttled at the moment, so it can be as fast as your logic and hardware allow.
The receiving update frequency is currently locked at 2Hz. We're looking into making it customizable in a future release.
For more information, feel free to ping the dev team directly on the Gitter channel.

Appropriate architecture for event logging in a game

I'm trying to modify a game engine so it records events (like key presses) and stores them in a MySQL database on a remote server. The game engine is written in C++, and I currently have the following straightforward architecture, using mysql++ to directly INSERT records into the appropriate databases:
Unfortunately, there is a very large overhead when connecting to the MySQL server, and the game stops for a significant amount of time. Pushing a batch of X seconds' worth of events to the server causes a significant delay in gameplay (60 s worth of events can take 12 s to synchronise). There are also apparently security concerns with leaving the MySQL port publicly accessible.
I was considering an alternative option, instead sending commands to the server, which can interact with the database in its own time:
Here the game would only send the necessary information (e.g. the table to update and the data to insert). I'm not sure whether the speed increase would be sufficient, or what system would be appropriate for managing the commands sent from the game.
Someone else suggested Log4j, but obviously I need a C++ solution. Is there an appropriate existing framework for accomplishing what I want?
Most applications gathering user-interface interaction data (in your case keystrokes) put it into a local file of some sort.
Then at an appropriate time (for example at the end of the game, or the beginning of another game), they POST that file, often in compressed form, to a publicly accessible web server. The software on the web server decompresses the data and loads it into the analytics system (the MySQL server in your case) for processing.
So, I suggest the following.
stop making your MySQL server's port available to people you don't know and trust.
get your game to gather keystrokes locally somehow.
get it to upload that data in big bunches when your game is not in realtime mode.
write a web service to receive and interpret these files.
That way you'll build a more secure analytics system and a more responsive game.
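The buffer-then-batch idea can be sketched in a few lines (Python for brevity, even though the engine is C++; the web-service URL and the actual POST are omitted since they depend on your stack):

```python
import gzip
import json
import time

class EventLog:
    """Buffer gameplay events locally, then ship them as one
    compressed batch when the game is out of realtime mode."""
    def __init__(self):
        self.events = []

    def record(self, kind, **data):
        # cheap inside the frame loop: just an append, no I/O
        self.events.append({"t": time.time(), "kind": kind, **data})

    def flush_payload(self):
        # one gzip'd JSON blob, ready to POST to the web service
        payload = gzip.compress(json.dumps(self.events).encode())
        self.events = []
        return payload

log = EventLog()
log.record("keypress", key="W")
log.record("keypress", key="SPACE")
blob = log.flush_payload()
batch = json.loads(gzip.decompress(blob))
assert [e["kind"] for e in batch] == ["keypress", "keypress"]
assert log.events == []     # buffer cleared after the flush
```

Keystroke logs compress very well, so one small upload per game replaces thousands of synchronous INSERTs.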

C++: Status and control pattern

I'm writing a C++ background/server application for Linux/Windows. Is there a standard control/profiling/reporting service I should use to expose my application's current status in a standardized way?
If not, what's a good pattern (or library) to use for exposing this kind of data and control?
Specifically, I want to expose the following data:
Relative "usage" of "components" (where usage/components is user-defined)
Any errors/faults
Memory, CPU, other misc process data
Method/class execution profile
Average time spent in method/class
Total calls
I want to expose the following control mechanisms
Start, stop, restart, reload X... (commandesque control)
Parameter tuning
Many Linux systems now have dbus for this sort of stuff. Daemons run and provide information and a control interface on the system bus. Desktop applications communicate with one another via the session bus.
For instance, the bluez bluetoothd daemon uses dbus to provide information about bluetooth devices and services, and a control interface to control those devices.
NetworkManager also uses dbus for status and control purposes.
However, starting and stopping are functions that are usually outside the actual application itself. Perhaps the correct architecture would be for some service supervision framework (upstart, runit...) to provide a dbus interface to control services. That said, dbus itself can be used to start services on demand, but it really is not meant for service supervision. See this for more.
Edit: I've just been reading about upstart some more, and it does have a dbus interface for job control. It is subject to change however.
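For illustration, a daemon exposing status and control over dbus might publish introspection data along these lines (the bus name, interface names, and members below are all hypothetical, not any standard):

```xml
<!-- hypothetical introspection data for such a daemon -->
<node>
  <interface name="com.example.MyDaemon.Status">
    <method name="GetComponentUsage">
      <!-- a{sd}: dict of component name -> relative usage -->
      <arg type="a{sd}" name="usage" direction="out"/>
    </method>
    <signal name="Fault">
      <arg type="s" name="description"/>
    </signal>
  </interface>
  <interface name="com.example.MyDaemon.Control">
    <method name="Reload"/>
    <method name="SetParameter">
      <arg type="s" name="key" direction="in"/>
      <arg type="v" name="value" direction="in"/>
    </method>
  </interface>
</node>
```

Profiling data (call counts, average method times) would fit the same pattern as extra methods on the Status interface, while start/stop stays with the service supervisor as discussed above.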