I currently use TraCIScenarioManagerForker to spawn SUMO for each simulation (the "forker" method). However, the official VEINS documentation recommends launching the SUMO daemon separately using the veins-launchd script and then running simulations (the "launchd" method).
The forker method makes running simulations a one-command job, since SUMO is killed when the simulation ends. With the launchd method, however, one has to take care of setting up the SUMO daemon and killing it once the simulations are done.
What are the advantages and disadvantages of each method? I'm trying to understand the recommended best practices when using VEINS.
Indeed, Veins 5.1 provides three (four, if you count an experimental one) ways of connecting a running OMNeT++ to SUMO:
assuming SUMO is already running and connecting to it directly (TraCIScenarioManager)
running SUMO directly from the process - on Linux: as a fork, on Windows: as a process in the same context (TraCIScenarioManagerForker)
connecting to a Proxy (veins_launchd) that launches an isolated instance of SUMO for every client that connects to it (TraCIScenarioManagerLaunchd)
if you are feeling adventurous, the veins_libsumo fork of Veins offers a fourth option: including the SUMO engine directly in your OMNeT++ simulation and using it via method calls (instead of remote procedure calls via a network socket). Contrast, for example, TraCI based code vs. libsumo based code. This can be orders of magnitude faster with none of the drawbacks discussed below. At the time of writing (Mar 2021) this fork is just a proof of concept, though.
Each of these has unique benefits and drawbacks:
TraCIScenarioManager is the most flexible: you can connect to a long-running instance of SUMO which is only rolled backwards/forwards in time through the use of snapshots, connect multiple clients to a single instance, etc., but
it requires you to manually take care of running exactly as many instances of SUMO as you need, at exactly the time when you need them.
TraCIScenarioManagerForker is very convenient, but
it requires the simulation (as opposed to the person running the simulation) to "know" how to launch SUMO, so a simulation that works on one machine won't work on another because SUMO might be installed in a different path there, etc.;
it launches SUMO in the directory of the simulation, so file output from multiple SUMO instances overwrites each other, and file output is stored in the directory holding the simulation (which might be on a slow or write-protected disk, etc.);
it results in both SUMO and OMNeT++ writing console output into what is potentially the same console window, requiring experience in telling one from the other when debugging crashes (and things get even messier if one wants to debug SUMO).
TraCIScenarioManagerLaunchd does not suffer from any of these problems, but
it requires the user to remember to start the proxy before starting the simulations.
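For reference, the difference mostly shows up in a handful of configuration lines. The following omnetpp.ini sketch is based on the stock Veins example scenario; treat the exact parameter names as assumptions to check against the .ned files of your Veins version (the manager module type itself, TraCIScenarioManagerLaunchd vs. TraCIScenarioManagerForker, is selected in the scenario's .ned file):

```ini
# launchd method: veins_launchd must already be running and listening on this
# port; it forks an isolated SUMO instance for every simulation that connects.
*.manager.updateInterval = 1s
*.manager.host = "localhost"
*.manager.port = 9999
*.manager.autoShutdown = true
*.manager.launchConfig = xmldoc("erlangen.launchd.xml")

# forker method: TraCIScenarioManagerForker instead takes the SUMO command and
# the .sumo.cfg file as parameters, which is exactly the "simulation has to
# know how to launch SUMO" drawback discussed above.
```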
Mostly a thought experiment; hopefully I'll learn something; perhaps even a genuinely useful idea. I'm mixing in a bit of Docker-esque terminology, as I have in mind that something Docker-like might be one approach.
I'm wondering if there is a way to run C++ code / applications so that they have library isolation, but can still use the host UI and GPU as if they were regular host applications.
How do applications running on Linux (in my case Ubuntu) write to the UI? When an application requests access to the X11 unix socket, can the OS then restrict that access so the application can only write to the bits of the UI it owns?
Which bit of docker creates the "isolation" that prevents containers from doing this? Is it a restriction that comes from a cgroups / namespace / chroot change?
Is there a way to reduce the "isolation" level of a docker container so it could use the host UI without having to take additional steps to give the container access to the X11 unix socket?
I would be OK if this reduces other isolation levels of the container; I assume to the equivalent of the application running directly on the host. The only goal is library isolation, and I trust the code I'm running not to be malicious.
I'm not sure, but this might be along the lines of what I'm thinking of: https://chromium.googlesource.com/chromium/src/+/master/docs/linux/using_a_chroot.md#Running-X-apps - as I understand it, it's an approach to resolving some library dependency issues. The script is https://chromium.googlesource.com/chromium/src/+/master/build/install-chroot.sh
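On the first question, it may help to see what "writing to the UI" looks like from the application side: an X11 client simply connects to whatever DISPLAY points at (normally a unix socket under /tmp/.X11-unix) and sends requests over that connection. A minimal Xlib sketch, not tied to the chroot approach linked above:

```cpp
// Minimal Xlib client: connect to the display named by $DISPLAY and map a
// window. This is why giving a container access to the X11 unix socket
// (plus the DISPLAY variable) is the usual way to let it draw on the host UI.
#include <X11/Xlib.h>
#include <unistd.h>
#include <cstdio>

int main() {
    Display* dpy = XOpenDisplay(nullptr);   // connects via the $DISPLAY socket
    if (!dpy) { std::fprintf(stderr, "cannot open display\n"); return 1; }

    Window win = XCreateSimpleWindow(dpy, DefaultRootWindow(dpy),
                                     0, 0, 200, 100, 1,
                                     BlackPixel(dpy, DefaultScreen(dpy)),
                                     WhitePixel(dpy, DefaultScreen(dpy)));
    XMapWindow(dpy, win);                   // request: show the window
    XFlush(dpy);                            // push the request down the socket
    sleep(3);                               // keep the window up briefly
    XCloseDisplay(dpy);
    return 0;
}
```

Classic X11 does very little to stop one connected client from interfering with another client's windows, which is part of why container setups treat access to that socket as a deliberate opt-in.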
I am working on my school project. There is a video duplication detection application written in C++. The application is designed to run on a single machine, and I would like to create a Spark cluster and run that application on the cluster.
Is this possible? Difficult?
Let me try to answer your question:
First, you should figure out which form the C++ application is in:
Is it source code or a binary executable?
Source code
You can reimplement the algorithm in Java/Scala and make full use of the cluster's resources, making your job much faster than the single-machine version.
If your time is limited, you can compile your C++ source code with gcc and follow the next method.
Binary executable
Java/Scala bytecode (.class format) runs on the JVM and is not compatible with native machine code, which is determined by the combination of compiler and operating system.
In that case, the only choice is to spawn a new process to execute the C++ binary and get what you need through inter-process communication, such as a pipe.
In short, you would spawn a new process on the driver node to de-duplicate your data, and use the Spark engine for the subsequent parallel computing to accelerate your project.
Spawning a new process is easy in Scala or Java; please refer to the Process documentation.
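On the C++ side, the simplest way to support that pipe-based IPC is to make the binary a stdin/stdout filter, so whatever spawns it (a Scala/Java Process, or even RDD.pipe) can stream records through it. A rough sketch; the one-record-per-line protocol and the checkDuplicate placeholder are assumptions, not part of the original application:

```cpp
// Hypothetical stdin/stdout wrapper around the de-duplication logic, so a
// JVM-side process can drive it through a pipe. One record per line is an
// assumed protocol for illustration only.
#include <iostream>
#include <string>

// Placeholder for the real duplicate-detection routine in the C++ application.
static bool checkDuplicate(const std::string& record) {
    return false;  // stub
}

int main() {
    std::string line;
    while (std::getline(std::cin, line)) {
        std::cout << line << '\t' << (checkDuplicate(line) ? "dup" : "unique") << '\n';
    }
    return 0;
}
```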
So I need to simulate Isis2 in ns-3. (I also need to modify Isis2 slightly, wrapping it with some C/C++ code, since I need at least quasi-real-time, mission-critical behavior.)
Since I am far from having any of that implemented, it would be interesting to know whether this is a suitable approach. I specifically need to monitor the performance of the consensus during sporadic wifi (ad hoc) behavior.
Would it make sense to virtualize a machine for each instance of Isis2 and then use the tap bridge model and analyze the traffic in the ns-3 channel?
(I also need to log the events on each instance, composing the various data into a unified presentation.)
You need to start by building an Isis2 application program, and this would have to be done using C/CLI or C++/CLI. C++/CLI will be easier because the match with the Isis2 type system is closer. But as I type these words, I'm trying to remember whether Mono actually supports C++/CLI. If there isn't a Mono compiler for C++/CLI, you might be forced to use C# or IronPython. Basically, you have to work with what the compiler will support.
You'll build this and the library on your mono platform and should test it out, which you can do on any Linux system. Once you have it working, that's the thing you'll experiment with on NS/3. Notice that if you work on Windows, you would be able to use C++/CLI (for sure) and then can just make a Windows VM for NS3. So this would mean working on Windows, but not needing to learn C#.
This is because Isis2 is a library for group communication, multicast, file replication and sharing, DHTs and so forth and to access any particular functionality you need an application program to "drive" it. I wouldn't expect performance issues if you follow the recommendations in the video tutorials and the user manual; even for real-time uses the system is probably both fast enough and steady enough in its behavior.
Then yes, I would take a virtual machine with the needed binaries for Mono (Mono is loaded from DLLs so they need to be available at the right virtual file system locations) and your Isis2 test program and run that within NS3. I haven't tried this but don't see any reason it wouldn't work.
Keep in mind that the default timer settings for timeout and retransmission are very slow and tuned for running on Amazon AWS, inside a data center. So once you have this working, but before simulating your wifi setup, you may want to experiment with tuning the system to be more responsive in that setting. I'm thinking that ISIS_DEFAULTTIMEOUT will probably be way too long for you, and the RTDELAY setting may also be too long for you. Amazon AWS is a peculiar environment and what makes Isis2 stable in AWS might not be ideal in a wifi setting with very different goals... but all of those parameters can be tuned by just setting the desired values in the environment, which can be done in bash on the line that launches your test program, or using the bash "export" command.
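If it is ever more convenient than exporting variables in bash, a small native launcher could put the same values into the environment before starting the Mono runtime. This is only a sketch; the numeric value, the program name, and any variable name other than ISIS_DEFAULTTIMEOUT (mentioned above) are placeholders to check against the Isis2 user manual:

```cpp
// Hypothetical launcher: set Isis2 tuning values in the environment, then
// exec the Mono runtime with the Isis2 test program.
#include <cstdlib>
#include <cstdio>
#include <unistd.h>

int main() {
    // Shorter timeout for a lossy ad hoc wifi setting (placeholder value;
    // consult the Isis2 manual for sensible ranges and related variables).
    setenv("ISIS_DEFAULTTIMEOUT", "250", 1);

    execlp("mono", "mono", "IsisTest.exe", static_cast<char*>(nullptr));
    std::perror("execlp");   // only reached if exec fails
    return 1;
}
```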
I'm working on a large program (C++/Qt on Linux) organized in different parts: an inner engine and a number of different UIs (some of them graphical). So far I've organized this division by creating one process per UI plus one for the engine. Every "user" process communicates with the core engine via two pipes (one per direction).
What I would like is for every single process to run stand-alone and not block while communicating with the engine process, instead using an internal custom "message buffer" (already built and tested) to store messages and process them when free.
The solution, I guess, is to design every process to spawn an additional thread which takes care of communicating with the engine process (and another one for the GUI). I am using the pthread.h library (POSIX). Is this right? Could someone provide a simple example of how to achieve communication between a single pair of processes?
Thanks in advance.
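For what it's worth, the pattern described above (a dedicated reader thread that blocks on the pipe and feeds a queue, so the main loop never blocks) might look roughly like the following sketch. A single pipe and a plain std::deque stand in for the two pipes and the custom message buffer:

```cpp
// Sketch only: one pipe between a parent ("UI") process and a child ("engine")
// process. A dedicated pthread in the parent blocks on read() and pushes the
// received chunks into a mutex-protected queue, so the parent's main loop
// never blocks on the pipe.
#include <pthread.h>
#include <unistd.h>
#include <cstdio>
#include <deque>
#include <string>

static std::deque<std::string> g_queue;
static pthread_mutex_t g_mutex = PTHREAD_MUTEX_INITIALIZER;

static void* reader_thread(void* arg) {
    int fd = *static_cast<int*>(arg);
    char buf[256];
    ssize_t n;
    while ((n = read(fd, buf, sizeof(buf))) > 0) {  // the blocking read happens here
        pthread_mutex_lock(&g_mutex);
        g_queue.emplace_back(buf, static_cast<size_t>(n));
        pthread_mutex_unlock(&g_mutex);
    }
    return nullptr;
}

int main() {
    int fds[2];
    if (pipe(fds) != 0) return 1;

    if (fork() == 0) {                  // child: plays the role of the engine
        close(fds[0]);
        const char msg[] = "hello from engine";
        write(fds[1], msg, sizeof(msg) - 1);
        close(fds[1]);
        _exit(0);
    }

    close(fds[1]);                      // parent: plays the role of a UI process
    pthread_t tid;
    pthread_create(&tid, nullptr, reader_thread, &fds[0]);

    for (int i = 0; i < 10; ++i) {      // main loop stays free for UI work
        pthread_mutex_lock(&g_mutex);
        while (!g_queue.empty()) {
            std::printf("got message: %s\n", g_queue.front().c_str());
            g_queue.pop_front();
        }
        pthread_mutex_unlock(&g_mutex);
        usleep(100 * 1000);             // simulate doing other work
    }

    pthread_join(tid, nullptr);
    close(fds[0]);
    return 0;
}
```

In a Qt UI process the same idea is often expressed with a QSocketNotifier or a QThread instead of a raw pthread, but the structure is the same.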
This seems to be a typical application:
1. One part of the program should scan for audio files in the background and write tags to the database.
2. The other part makes search queries and shows results.
The application should be cross-platform.
So, the main search loop, including adding data to the database, is not a problem. The questions are:
1. What is the best way to implement this background working service? Boost (Asio) or Qt (the services framework)?
2. Which is the better approach: making a native service wrapper using the mentioned libraries, or emulating one with a non-GUI application?
3. Should I connect the GUI to the service (and if so, how would they communicate using Boost or Qt?) or directly to the database (could there be locking issues?)?
4. Will the design in point 1 consume all the CPU? How can I avoid that and make the file scanning less CPU-intensive?
I like to use Poco, which has a convenient ServerApplication class that lets the same code run as a normal command-line application, as a Windows service, or as a *nix daemon without having to touch the code.
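A minimal sketch of what that looks like; the class name is made up and the actual scanning thread is omitted:

```cpp
// Skeleton Poco server application: the same binary runs from the command
// line, as a Windows service, or as a Unix daemon.
#include <Poco/Util/ServerApplication.h>
#include <string>
#include <vector>

class TaggerService : public Poco::Util::ServerApplication {
protected:
    int main(const std::vector<std::string>& /*args*/) override {
        // start the background scanning/tagging thread here (omitted)
        waitForTerminationRequest();   // blocks until Ctrl-C or service stop
        // stop the scanner and flush the database here (omitted)
        return Application::EXIT_OK;
    }
};

POCO_SERVER_MAIN(TaggerService)
```

During development you just run it in the foreground; the service/daemon behavior comes from command-line options that ServerApplication itself provides.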
If you use a "real" database (MySQL, PostgreSQL, SQL Server), then querying the database from the GUI application is probably fine and easier to do. If you use another type of database that isn't necessarily multi-user friendly, then you should communicate with the service using loopback sockets or pipes.
As far as CPU usage goes, you could just use a bunch of "sleep" calls within the code that scans for files, to make sure it doesn't hog the CPU and disk I/O. Or use some kind of interval notification to let it search in chunks periodically.
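As an illustration of the sleep-based throttling (the batch size and pause length are arbitrary knobs, and the directory walk stands in for the real tag-reading code):

```cpp
// Hypothetical throttled scan loop: pause between batches so the indexer never
// monopolizes the CPU or the disk while it walks the music library.
#include <chrono>
#include <filesystem>
#include <iostream>
#include <thread>

namespace fs = std::filesystem;

void scanLibrary(const fs::path& root) {
    int processedInBatch = 0;
    for (const auto& entry : fs::recursive_directory_iterator(root)) {
        if (!entry.is_regular_file()) continue;

        // Placeholder for reading the audio tags and writing them to the database.
        std::cout << "indexing " << entry.path() << '\n';

        if (++processedInBatch >= 50) {   // after every batch of 50 files...
            processedInBatch = 0;
            std::this_thread::sleep_for(std::chrono::milliseconds(200));  // ...back off briefly
        }
    }
}

int main() {
    scanLibrary(".");   // example root; real code would take this from configuration
    return 0;
}
```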