Update number of cars in Veins - veins

I was wondering whether I can change the number of cars in Veins, or is it fixed? If so, in which class in the code can I find the declaration of the number of cars?

If you use randomTrips.py to generate the .rou.xml file, you can specify the number of vehicles you want in your simulation. For example, the following command creates a .rou.xml file with 50 vehicles:
<SUMO_HOME>/tools/randomTrips.py -n input_net.net.xml -e 50

Anything to do with vehicle movement in a Veins simulation (e.g., when a car starts, where it starts, how and where it drives, ...) is governed by the road traffic simulator SUMO. The SUMO simulator comes with an excellent tutorial (Hello SUMO) and an extensive online user manual, available on the documentation pages of the SUMO website. In brief, you want to change the .rou.xml file to change how many cars are driving.

In the .rou.xml file, update the number attribute of the flow element. For example, the following setting:
<flow id="flow0" type="vtype0" route="route0" begin="0" period="3" number="2"/>
will change the number of vehicles to 2.
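For context, a flow element normally sits inside a routes file together with a vehicle type and a route. A minimal hypothetical .rou.xml might look like this (the IDs and edge names are made up; the attribute values are only examples):

```xml
<routes>
    <!-- vehicle type: acceleration/deceleration in m/s^2, maxSpeed in m/s -->
    <vType id="vtype0" accel="2.6" decel="4.5" maxSpeed="14" sigma="0.5"/>
    <!-- a route is a sequence of network edges from the .net.xml file -->
    <route id="route0" edges="edge0 edge1 edge2"/>
    <!-- number="2" caps this flow at 2 vehicles, inserted every 3 s -->
    <flow id="flow0" type="vtype0" route="route0" begin="0" period="3" number="2"/>
</routes>
```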

Related

How to create a safe offline single sign-on

I am currently writing firmware for an embedded device which acts as a human-machine interface in an (aftermarket) automotive environment.
The device has a service menu which shall only be accessible to specific personnel. It is secured by a device-specific PIN code, which is generated randomly in production, burned onto the device, and stored in a database for the personnel to retrieve. Within the service menu, the user is able to, for example, manually change states and also overwrite limits for regulation functions.
However, it might be necessary for an ordinary user to get to that menu in an error case, e.g. if they get stuck in a remote place with a faulty sensor. Therefore I would like to create a kind of single sign-on for the devices. My idea is that the device creates a code and displays it to the user. The user then calls the service team, which can generate a PIN valid for this device in its current state (the code displayed).
I don't want it to be so easy that anybody can crack the code-generation algorithm. I cannot use any online functionality because the users, as mentioned, may be in a remote place.
I was thinking about creating a table of random numbers and embedding it in the firmware (say 1-10k PINs; the device displays an index and the service team just looks up the PIN for that index), but I feel there is a better solution to this problem.
My question:
What is a safe algorithm to compute a 4-digit (1000-9999) PIN code from a random number (~6 hexadecimal digits) and (optionally) a 6-byte serial number?
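One common construction for this kind of offline challenge-response is to derive the PIN with an HMAC over the displayed challenge and the serial number, keyed with a secret shared between the firmware and the service team's offline tool. The following is only a sketch under those assumptions; the function name and inputs are illustrative, not from the question:

```python
import hashlib
import hmac

def derive_pin(challenge: bytes, serial: bytes, secret: bytes) -> int:
    """Derive a 4-digit service PIN from the challenge shown on the
    device, the device serial number, and a shared secret key."""
    digest = hmac.new(secret, challenge + serial, hashlib.sha256).digest()
    # Fold the first 4 bytes of the MAC into the range 1000-9999.
    return 1000 + int.from_bytes(digest[:4], "big") % 9000

# Both the device and the service team's tool run the same derivation;
# the device then compares the user-entered PIN against its own result.
pin = derive_pin(bytes.fromhex("a1b2c3"),
                 b"\x00\x01\x02\x03\x04\x05",
                 b"factory-secret")
```

Note that a 4-digit PIN can be brute-forced in at most 9000 attempts, so the firmware should rate-limit entries and generate a fresh challenge after each failure.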

Google Places API: supported types for tram/cable car/light-rail?

I'm taking my first steps with the Google Places API and am currently experimenting with different types. I was wondering which type I have to use if I want to search for tram/cable car/light rail stations.
What I want is to get a list of subway, bus, and tram stations inside a defined radius around an arbitrary coordinate.
Subway and bus seem to be easy (types=subway_station or types=bus_station), but there does not seem to be an equivalent for trams.
Just for experimenting:
Search for the tram station "Agnes-Bernauer-Platz" in Munich (coordinates: 48.1398418,11.496119; a good example because there are no subway or bus stations in the direct vicinity). If you browse Google Maps interactively, the station is found (with a tram icon), but the Places API does not find it:
https://maps.googleapis.com/maps/api/place/nearbysearch/xml?location=48.1398418,11.496119&radius=100&key=....
Any ideas?
Thanks in advance!
Update:
It seems there is already a type that is not yet documented at developers.google.com/places/supported_types: types=light_rail_station does the job.
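For illustration, a Nearby Search request using that type might be built like this. The parameter names follow the request URL shown in the question, and the API key is a placeholder:

```python
from urllib.parse import urlencode

# Parameters for a Places "Nearby Search" around Agnes-Bernauer-Platz.
params = {
    "location": "48.1398418,11.496119",
    "radius": 100,
    "types": "light_rail_station",  # undocumented type that finds tram stops
    "key": "YOUR_API_KEY",          # placeholder, not a real key
}
url = ("https://maps.googleapis.com/maps/api/place/nearbysearch/xml?"
       + urlencode(params))
print(url)
```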

More Pythonic approach to functions in Python

I am making a small (I hope) project and, of course, I use self-made functions. Is it better (more Pythonic) to move them to separate file(s) and import them, or to just leave them in the same file? Details about my project:
It uses NumPy, PyBrain, and PIL, and runs on my home computer. It is basically just experiments with neural networks recognising digits. Current algorithm:
1. Generate a set of pictures containing digits
2. Make them fit my requirements (normalising them)
3. Put them into an NN-friendly form
4. Train my NN with them
5. Take the user's input (a picture with a drawn digit)
6. Same as steps 2+3, but for the input received in step 5
7. Feed it into the NN
8. Observe the results.
Steps 1, 2, 3, and 6 contain functions. About 4-5 functions in total.
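Either way works for a project this size, but once helpers accumulate, moving them into a module named after their role keeps the main script readable. A tiny sketch, with made-up function names standing in for steps 2 and 3 (pure Python here for brevity, though the real code would likely use NumPy arrays):

```python
# preprocessing.py -- hypothetical module collecting the helpers for
# steps 2 and 3 (normalising and converting to an NN-friendly form).

def normalise(pixels):
    """Scale a flat list of pixel values into the range [0.0, 1.0]."""
    lo, hi = min(pixels), max(pixels)
    span = (hi - lo) or 1  # avoid division by zero on flat images
    return [(p - lo) / span for p in pixels]

def to_nn_input(image):
    """Flatten a 2-D image into the 1-D vector a network expects."""
    flat = [p for row in image for p in row]
    return normalise(flat)
```

The main script then only needs `from preprocessing import to_nn_input`, and the import line itself documents where each step lives.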

Search detected faces in a database (OpenCV)

I am working with this code:
Program sample
The above link was programmed with the help of this page:
Servo Magazine
This code can extract a face, learn it, and save the learned face in a database with a label (for example chris_laughing.bmp or chris_sad.bmp). It can then recognise a face that the user saved in the database.
My project is to send an e-mail to the user if the person is not in the database.
I included a function to send an e-mail to the user.
So I have saved two different images of two stars, chris and john. When I click recognise, it shows me the correct star with its label (for example chris_laughing.bmp) from the database.
The problem is that if I extract (detect) a face of another star or person (neither chris nor john), the code shows me the NEAREST star from the database.
What I want is for the program to give me a message box that says: this person is not in the database.
Is this possible with this program (code)?
That program works by assuming that the face images of each person lie in a subspace different from those of other people. This idea can work really well in some situations. The program learns a subspace for each person, and when you input a new image it measures the distance to all subspaces it has previously learned and chooses the nearest one.
The program doesn't seem to have any check that the image is too far from all learned subspaces. However, it would be an interesting exercise to add that feature.
Here is some info about the main idea behind the software: http://en.wikipedia.org/wiki/Eigenface
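As a rough illustration of that missing check, here is a minimal sketch in plain Python, with Euclidean distance standing in for the program's subspace distances and all names hypothetical: if even the nearest stored face is farther away than a threshold, report the person as unknown instead of returning the nearest label.

```python
import math

def recognise(query, known_faces, threshold):
    """Return the label of the nearest stored face vector, or None if
    even the nearest one is farther than `threshold` (unknown person)."""
    best_label, best_dist = None, float("inf")
    for label, vec in known_faces.items():
        dist = math.dist(query, vec)  # Euclidean distance
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label if best_dist <= threshold else None
```

A suitable threshold would have to be tuned on held-out images; too small and known people are rejected, too large and strangers still match.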

Creating custom voice commands (GNU/Linux)

I'm looking for advice for a personal project.
I'm attempting to create software for customised voice commands. The goal is to allow the user (me) to record some audio data (2-3 seconds) to define commands/macros. Then, when the user speaks (recording the same audio data), the command/macro will be executed.
The software must be able to detect a command in less than 1 second of processing time on a low-cost computer (a Raspberry Pi, for example).
I have already searched in two directions:
- Speech recognition (CMU Sphinx, Julius, Simon): there are good open-source solutions, but they often need large database files, and speech recognition is not really what I'm attempting to do. It could also consume too much power for such a small feature.
- Audio fingerprinting (Chromaprint -> http://acoustid.org/chromaprint): this seems to be almost what I'm looking for. The principle is to create a fingerprint from raw audio data, then compare fingerprints to determine whether they are identical. However, this kind of software/library seems to be designed for song identification (like the famous smartphone apps): I have tried to configure a good "comparator", but I think I'm going down the wrong path.
Do you know of any dedicated software or piece of code that does something similar?
Any suggestion would be appreciated.
I had a more or less similar project in which I intended to send voice commands to a robot. Speech-recognition software is too complicated for such a task. I used an FFT implementation in C++ to extract the Fourier components of the sampled voice, and then created a histogram of the major frequencies (the frequencies at which the target voice command has the highest amplitudes). I tried two approaches:
Comparing the similarity between the histogram of the given voice command and those saved in memory, to identify the most probable command.
Using a Support Vector Machine (SVM) to train a classifier to distinguish voice commands. I used LibSVM, and the results were considerably better than with the first approach. However, one problem with the SVM method is that you need a rather large data set for training. Another problem is that when an unknown voice is given, the classifier will output a command anyway (which is obviously a wrong detection). This can be avoided with the first approach, where I had a threshold on the similarity measure.
I hope this helps you to implement your own voice-activated software.
A song fingerprint is not a good idea for this task because command timings can vary, and a fingerprint expects an exact time match. However, it is very easy to implement matching with the DTW algorithm on time series of features extracted with the CMU Sphinx library Sphinxbase. See the Wikipedia entry on DTW for details.
http://en.wikipedia.org/wiki/Dynamic_time_warping
http://cmusphinx.sourceforge.net/wiki/download
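The DTW distance mentioned above can be sketched in a few lines. This is a minimal, illustrative implementation over 1-D feature sequences using absolute difference as the local cost; in practice you would run it over frames of acoustic features (e.g. MFCCs extracted with Sphinxbase), not raw numbers:

```python
def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic-time-warping distance between
    two 1-D feature sequences."""
    inf = float("inf")
    n, m = len(a), len(b)
    # cost[i][j] = best accumulated cost aligning a[:i] with b[:j]
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]
```

Because DTW allows stretching along the time axis, a command spoken slightly slower than the recorded template still matches with low cost, which is exactly what a fixed-timing fingerprint cannot do.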