Increasing the vehicle communication range by increasing its transmission power in Artery with Veins

I am currently working on evaluating the effects of communication load on the generation of Cooperative Awareness Messages (CAMs). For this, I am using Artery with Veins to simulate the 802.11p stack. In my simulation, I observed an effective communication range of only about 100 m with the default communication parameters given in Config veins in the omnetpp.ini file. After going through related questions such as "Change the transmission signal strength for a specific set of vehicles during the run-time", I know that increasing the value of *.node[*].nic.mac1609_4.txPower should increase the communication range. However, I am not noticing any change in the observed communication range when I either increase or decrease this value.
Since I am quite new to using Artery with Veins, I am not sure whether there is something else that needs to be done to increase the communication range of the vehicles.
A bit more detail on how I am calculating the communication range: I have two vehicles, A and B. I have set the speed of vehicle A to 4 m/s and that of vehicle B to 0.01 m/s (so that it is almost at a standstill). Based on the CAM generation conditions, both vehicles at these speeds generate a message every second. I have a straight road segment of 700 m with two lanes, and both vehicles are spawned at the same time in lanes 0 and 1 respectively. Given the speed of vehicle A, it should take 175 s for it to leave the road segment. Looking at the generated .sca file at the end, I see that ReceivedBroadcasts + SNIRLostPackets = 28 for both vehicles, meaning that only for the first 28 s (28 * 4 = 112 m) were the two vehicles in range. I tried different values of txPower and sensitivity but I still get the same result.
Thanks for your help
Update
I have added screenshots of the messages logged during successful packet reception and during packet loss due either to errors or to low power levels.
Successful Reception
Reception with error
No reception
Based on the logs it seems that the packets are lost because the power measured at the receiver side is lower than the minPowerLevel. I have found out that minPowerLevel can be set by changing the value of *.**.nic.phy80211p.minPowerLevel in the omnetpp.ini file, but could someone let me know how the Rx power is calculated (and in which file)?
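For my own sanity checking, I put together the rough C++ sketch below of a Friis-style free-space calculation, to see at which distance the received power would drop below minPowerLevel. The carrier frequency, txPower, minPowerLevel and path-loss exponent in it are just values I picked for illustration, not necessarily the Veins defaults, and this is not meant to be the exact code path Veins uses.

    // Rough sketch only: Friis-style free-space received power vs. distance.
    // All parameter values below are assumptions chosen for illustration.
    #include <cmath>
    #include <cstdio>

    int main() {
        const double PI     = 3.14159265358979323846;
        const double c      = 299792458.0;   // speed of light [m/s]
        const double freq   = 5.89e9;        // carrier frequency [Hz] (assumed)
        const double lambda = c / freq;      // wavelength [m]
        const double alpha  = 2.0;           // path-loss exponent (free space, assumed)

        const double txPower_mW        = 20.0;   // example txPower [mW] (assumed)
        const double minPowerLevel_dBm = -89.0;  // example threshold [dBm] (assumed)

        for (double d = 50.0; d <= 700.0; d += 50.0) {
            // Free-space attenuation factor: (lambda / (4*pi*d))^alpha
            double gain        = std::pow(lambda / (4.0 * PI * d), alpha);
            double rxPower_mW  = txPower_mW * gain;
            double rxPower_dBm = 10.0 * std::log10(rxPower_mW);
            std::printf("d = %5.0f m  rx = %7.2f dBm  %s\n", d, rxPower_dBm,
                        rxPower_dBm >= minPowerLevel_dBm ? "received" : "below minPowerLevel");
        }
        return 0;
    }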

Sandy Bridge QPI bandwidth perf event

I'm trying to find the proper raw perf event descriptor to monitor QPI traffic (bandwidth) on Intel Xeon E5-2600 (Sandy Bridge).
I've found an event that seems relevant here (qpi_data_bandwidth_tx: Number of data flits transmitted. Derived from unc_q_txl_flits_g0.data. Unit: uncore_qpi), but I can't use it on my system, so probably these events refer to a different micro-architecture.
Moreover, I've looked into the "Intel® Xeon® Processor E5-2600 Product Family Uncore Performance Monitoring Guide" and the most relevant reference I found is the following:
To calculate "data" bandwidth, one should therefore do:
data flits * 8B / time (for L0)
or 4B instead of 8B for L0p
The events that monitor the data flits are:
RxL_FLITS_G0.DATA
RxL_FLITS_G1.DRS_DATA
RxL_FLITS_G2.NCB_DATA
Q1: Are those the correct events?
Q2: If yes, should I monitor all these events and add them in order to get the total data flits or just the first?
Q3: I don't quite understand what the 8B and the time refer to.
Q4: Is there any way to validate?
Also, please feel free to suggest alternatives in monitoring QPI traffic bandwidth in case there are any.
Thank you!
A Xeon E5-2600 processor has two QPI ports; each port can send up to one flit and receive up to one flit per QPI domain clock cycle. Not all flits carry data, but all non-idle flits consume bandwidth. It seems to me that you're interested in counting only data flits, which is useful for detecting remote-access bandwidth bottlenecks at the socket level (rather than at a particular agent within a socket).
The event RxL_FLITS_G0.DATA can be used to count the number of data flits received. This is equal to the sum of RxL_FLITS_G1.DRS_DATA and RxL_FLITS_G2.NCB_DATA; you only need to measure the latter two events if you care about the breakdown. Note that there are only 4 event counters per QPI port. The event TxL_FLITS_G0.DATA can be used to count the number of data flits transmitted to other sockets.
Together, RxL_FLITS_G0.DATA and TxL_FLITS_G0.DATA measure the total number of data flits transferred through the specified port, so counting total data flits takes two of the four counters available on each port.
There is no fully accurate way to convert data flits to bytes. A flit may contain up to 8 valid bytes, depending on the type of transaction and the power state of the link direction (power states are per link per direction). A good estimate can be obtained by reasonably assuming that most data flits are part of full cache line packets and are transmitted in the L0 power state, so each flit contains exactly 8 valid bytes. Alternatively, you can just measure port utilization in terms of data flits rather than bytes.
The unit of time is up to you. Ultimately, if you want to determine whether QPI bandwidth is a bottleneck, the bandwidth has to be measured periodically and compared against the theoretical maximum bandwidth. You can, for example, use total QPI clock cycles, which can be counted on one of the free QPI port PMU counters. The QPI frequency is fixed on JKT (Jaketown).
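To make the arithmetic concrete, here is a small sketch. The flit and cycle counts are made-up sample readings, and the 8 bytes per data flit assumes full cache-line packets in L0, as described above.

    // Sketch: converting uncore QPI flit counts into an estimated bandwidth
    // and utilization figure. The counter readings are made-up sample values.
    #include <cstdio>

    int main() {
        const double data_flits     = 1.25e9;  // RxL_FLITS_G0.DATA over the interval (sample)
        const double qpi_cycles     = 4.0e9;   // QPI clock cycles over the same interval (sample)
        const double interval_s     = 1.0;     // measurement interval [s]
        const double bytes_per_flit = 8.0;     // assumes full cache-line data in L0 (use 4 for L0p)

        double bytes     = data_flits * bytes_per_flit;
        double bandwidth = bytes / interval_s;        // bytes per second
        double util      = data_flits / qpi_cycles;   // data flits per QPI cycle (at most 1)

        std::printf("estimated bandwidth: %.2f GB/s\n", bandwidth / 1e9);
        std::printf("port utilization (data flits per cycle): %.2f\n", util);
        return 0;
    }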
For validation, you can write a simple program that allocates a large buffer in remote memory and reads it. The measured number of bytes should be about the same as the size of the buffer in bytes.
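A minimal sketch of such a validation program, assuming Linux with libnuma available (compile with -lnuma), the reading thread pinned to node 0, and the buffer placed on node 1; note that the initial page-touching writes also cross QPI, so the measured byte count will be somewhat above the buffer size:

    // Sketch of the validation idea: allocate a large buffer on a remote NUMA
    // node and read it once; the RxL data bytes measured over the run should be
    // roughly the buffer size (plus the traffic from the initial write pass).
    #include <numa.h>
    #include <cstdio>

    int main() {
        if (numa_available() < 0) { std::fprintf(stderr, "no NUMA support\n"); return 1; }

        numa_run_on_node(0);                       // run on node 0 (assumed local node)
        const size_t size = 1ull << 30;            // 1 GiB buffer
        const int remote_node = 1;                 // assumed remote node
        char* buf = static_cast<char*>(numa_alloc_onnode(size, remote_node));
        if (!buf) { std::perror("numa_alloc_onnode"); return 1; }

        // Touch the pages so they are actually backed on the remote node.
        for (size_t i = 0; i < size; i += 4096) buf[i] = 1;

        // Read the whole buffer; these loads should cross QPI.
        unsigned long long sum = 0;
        for (size_t i = 0; i < size; i += 64) sum += static_cast<unsigned char>(buf[i]);

        std::printf("checksum: %llu\n", sum);      // keeps the reads from being optimized away
        numa_free(buf, size);
        return 0;
    }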

How to calculate throughput for each node

I am working on a network congestion control algorithm and I would like to measure fairness, more specifically Jain's fairness index.
However, I need the throughput of each node (vehicle) to do that.
I know the basic definition of throughput in networks: the amount of data transferred over a period of time.
I am not sure what the term means in a vehicular network, since the network is broadcast-only.
In my current setup, the only message transmitted is the BSM, at a rate of 10 Hz.
Using OMNeT++ 5.1.1, Veins 4.6 and SUMO 0.30.
Thank you.
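For reference, my understanding is that with per-node throughput x_i (for example received bytes divided by the time the node was in the simulation), Jain's index is J = (sum x_i)^2 / (n * sum x_i^2). Below is a rough post-processing sketch I would use once the per-node received-byte counts have been exported (the byte counts and durations are placeholders, not the output of any particular Veins scalar):

    // Sketch: per-node throughput and Jain's fairness index from already
    // extracted statistics. Values are placeholders, not measurements.
    #include <cstdio>
    #include <vector>

    int main() {
        // receivedBytes[i] and activeTime[i] would come from the .sca/.vec export.
        std::vector<double> receivedBytes = {120000, 98000, 131000, 110500};  // placeholders
        std::vector<double> activeTime    = {100.0, 100.0, 95.0, 80.0};       // seconds in simulation

        std::vector<double> throughput;                       // bytes per second per node
        for (size_t i = 0; i < receivedBytes.size(); ++i)
            throughput.push_back(receivedBytes[i] / activeTime[i]);

        double sum = 0.0, sumSq = 0.0;
        for (double x : throughput) { sum += x; sumSq += x * x; }
        double n    = static_cast<double>(throughput.size());
        double jain = (sum * sum) / (n * sumSq);              // 1/n <= J <= 1

        std::printf("Jain's fairness index: %.3f\n", jain);
        return 0;
    }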

How to handle distributing lots of large messages to an Akka cluster on startup?

We have an actor structure in which a cluster-sharded actor calculates a parametric matrix of about 7 MB and has to distribute it to five other nodes for consumption. First of all, there are the following constraints:
The matrix has to be generated in one place.
The matrix is needed on all nodes to handle user load.
The matrix will be periodically generated again to handle changing variables and then sent off to all nodes to handle user load.
Saving the matrix in a database is probably not viable as it merely shifts the networking load and the database would get very large very fast. The database only saves the input parameters.
We changed the Akka maximum message size to 10 MB to accomplish this, which feels a bit odd, but we didn't see another choice. Normally this works fine, even though passing 10 MB messages around distributed pub-sub seems odd to me. However, on startup the system has to start 2000 of these all at once, and as a result the sharding coordinators scream at us about buffered messages. It eventually calms down and life resumes, but I would love to be able to do this without the bloodbath in the logs.
Can someone recommend an alternative strategy for handling the distribution of the parametric matrix that gets it to every node but doesn't cause a shard coordinator complaint bloodbath?

Discrete Event Simulation C++ (array backed heap for priority queue)

First, I understand that being spoon-fed answers will absolutely hurt me in the long run, and that is not what I'm looking for. That being said, here is the main point of the assignment:
"We are going to model a simple island-hopping attack on a small corporate network. The
attacker will compromise a computer in the network and use that as the launching point for other
attacks. Our attack model is simplified so that each attack takes a set amount of time and
succeeds with some probability. Periodically, the attacker and each compromised machine will
attempt to compromise a random machine in the network. Attacks crossing the intrusion
detection system will have a certain percentage chance of being caught. The sysadmin will
react (with some delay) to fix machines with 100% certainty.
The topology of the network is a tree. At the root of the tree is the IDS, with all connected
components as children. The IDS is also the network gateway.
Two switches (not agents) are direct children on the IDS.
The remaining computers are split evenly as children between the two switches.
Every event from the attacker crosses the IDS. Only attacks from computers under one switch
to computers under the other switch can be detected by the IDS
The sysadmin is an agent in the simulation that is not attached to the network. It can only
receive simulation notification from the intrusion detection system."
There are 3 event types: attack, fix, and notify. I know that the events are to be stored in the queue, which is fine, but I'm not sure how to implement them. Create an abstract base class Event and a bunch of subclasses defining all the events? One class for all events? Who knows?
There are also 3 agents that respond to or produce events: the attacker, the computers, and the IDS. Again: should I implement these in separate classes, or would it be sufficient to use one main class?
My program is to be given 3 inputs: number of computers, percent success of the attack, and percent detected across the IDS.
What I'm having real trouble with is the organization of the whole simulation, which makes it rather difficult to begin the design and implementation. I can't seem to wrap my head around the structure of the events, and my coding is rather rusty, I'm afraid to admit. A nudge in the right direction would be greatly appreciated.
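To make the question more concrete, here is the kind of skeleton I have in mind (a rough sketch only; all class and member names are my own guesses, not prescribed by the assignment): an abstract Event with a timestamp and a virtual process() method, concrete AttackEvent / FixEvent / NotifyEvent subclasses, and an array-backed binary min-heap ordered by time as the future-event list.

    // Rough sketch of one possible structure for the events and the
    // array-backed min-heap priority queue keyed by event time.
    #include <cstdio>
    #include <memory>
    #include <utility>
    #include <vector>

    struct Simulation;  // forward declaration; would hold agents and global state

    struct Event {
        double time;                                  // simulation time of the event
        explicit Event(double t) : time(t) {}
        virtual ~Event() = default;
        virtual void process(Simulation& sim) = 0;    // each event type knows what to do
    };

    struct AttackEvent : Event {
        int attacker, target;
        AttackEvent(double t, int a, int v) : Event(t), attacker(a), target(v) {}
        void process(Simulation&) override { std::printf("[%.1f] attack %d -> %d\n", time, attacker, target); }
    };

    struct FixEvent : Event {
        int machine;
        FixEvent(double t, int m) : Event(t), machine(m) {}
        void process(Simulation&) override { std::printf("[%.1f] sysadmin fixes %d\n", time, machine); }
    };

    struct NotifyEvent : Event {
        int machine;
        NotifyEvent(double t, int m) : Event(t), machine(m) {}
        void process(Simulation&) override { std::printf("[%.1f] IDS notifies about %d\n", time, machine); }
    };

    // Array-backed binary min-heap ordered by Event::time.
    class EventQueue {
        std::vector<std::unique_ptr<Event>> heap;
        void siftUp(size_t i) {
            while (i > 0) {
                size_t parent = (i - 1) / 2;
                if (heap[parent]->time <= heap[i]->time) break;
                std::swap(heap[parent], heap[i]);
                i = parent;
            }
        }
        void siftDown(size_t i) {
            for (;;) {
                size_t l = 2 * i + 1, r = 2 * i + 2, smallest = i;
                if (l < heap.size() && heap[l]->time < heap[smallest]->time) smallest = l;
                if (r < heap.size() && heap[r]->time < heap[smallest]->time) smallest = r;
                if (smallest == i) break;
                std::swap(heap[smallest], heap[i]);
                i = smallest;
            }
        }
    public:
        bool empty() const { return heap.empty(); }
        void push(std::unique_ptr<Event> e) { heap.push_back(std::move(e)); siftUp(heap.size() - 1); }
        std::unique_ptr<Event> pop() {                // removes and returns the earliest event; assumes non-empty
            std::unique_ptr<Event> top = std::move(heap.front());
            std::swap(heap.front(), heap.back());
            heap.pop_back();
            if (!heap.empty()) siftDown(0);
            return top;
        }
    };

    struct Simulation {};  // placeholder for the agents (attacker, computers, IDS, sysadmin)

    int main() {
        Simulation sim;
        EventQueue fel;
        fel.push(std::make_unique<AttackEvent>(1.0, 0, 3));
        fel.push(std::make_unique<NotifyEvent>(1.5, 3));
        fel.push(std::make_unique<FixEvent>(4.0, 3));
        while (!fel.empty()) fel.pop()->process(sim);   // main simulation loop
        return 0;
    }

Does something along these lines make sense, or is one class for all events preferable here?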

New MAC protocol employing periodic backoff instead of a random one in a wireless network

I am trying to implement a new MAC protocol based on a 2011 IEEE paper in which the random waiting time of the default wireless MAC (802.11 DCF) is replaced by a period-controlled MAC for higher performance.
I will explain the proposed protocol with a simple scenario: consider 2 transmitting nodes experiencing a collision in a network. After they have each waited a random amount of time, say x and y (so they have different backoff offsets), applying periodic backoff from then on gives backoff times of x+a, y+a, x+2a, y+2a, and so on, which are never equal, preventing the two nodes from ever colliding with each other again.
Also, the backoff period is the same for all nodes in the network ('a' in the example above), and any change made to this period affects all nodes. The change is based on the channel state: the period is adjusted following an Additive Increase Multiplicative Decrease (AIMD) procedure with respect to the channel-idleness threshold set in the protocol algorithm.
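To illustrate what I mean (a rough sketch only, not the paper's actual code; the increase step, decrease factor, idleness threshold and initial values are placeholders I made up):

    // Sketch of periodic backoff with AIMD control of the shared period 'a'.
    // All constants are placeholders, not the values from the paper.
    #include <cstdio>

    struct PeriodController {
        double period;           // shared backoff period 'a'
        double addStep;          // additive increase step
        double mulFactor;        // multiplicative decrease factor (0 < f < 1)
        double idleThreshold;    // target channel idleness

        // Called once per control interval with the measured fraction of idle time.
        void update(double measuredIdleness) {
            if (measuredIdleness < idleThreshold)
                period += addStep;       // channel too busy: spread transmissions out
            else
                period *= mulFactor;     // channel idle enough: tighten the period
        }
    };

    int main() {
        PeriodController ctrl{/*period=*/1.0, /*addStep=*/0.1, /*mulFactor=*/0.9, /*idleThreshold=*/0.3};

        double offsetA = 0.37, offsetB = 0.62;   // x and y: drawn randomly once, then fixed
        double nextA = offsetA, nextB = offsetB; // next transmission times of the two nodes

        for (int k = 0; k < 5; ++k) {
            std::printf("round %d: A at %.2f, B at %.2f (period %.2f)\n", k, nextA, nextB, ctrl.period);
            ctrl.update(/*measuredIdleness=*/0.25);  // placeholder measurement
            nextA += ctrl.period;                    // x + a, x + 2a, ...
            nextB += ctrl.period;                    // y + a, y + 2a, ... never equal to A's times
        }
        return 0;
    }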
Although the author of the above-mentioned IEEE paper refused to help with the code, he did mention that the changes he made to implement the protocol were done in the following files: mac-802.11.cc, mac-timers.cc, mac-802.11.h and mac-timers.h. The code in these files is posted on pastebin.ca:
http://pastebin.ca/2303764; http://pastebin.ca/2303763; pastebin.ca/2303762; pastebin.ca/2303765
Also, the algorithm for the proposed MAC protocol is given in: pastebin.ca/2303772
I would appreciate it if anyone could help me alter this method so that the backoff is calculated periodically instead of randomly. Thanks.
Any advice or suggestion will be much appreciated.
NEVER DO THIS KIND OF THING
Wireless networks are not made up of only your own devices, and a network protocol is something that is expected to be observed by EVERY device that shares the SAME PHYSICAL LAYER.
What you are doing is essentially improving the performance of your devices by "stealing" from everybody else part of their opportunity to access the medium.
A new MAC protocol necessarily requires a distinct physical medium, hence distinct frequencies and channels not already in use and assigned for WLAN; otherwise the devices:
cannot be certified as 802.11
can be deemed "unauthorized interfering devices", with all the legal consequences this may have in any country that regulates how the EM spectrum is used (including imprisonment of the user of such devices, should they damage any public communication!)
When dealing with physical protocols on shared media you cannot make up your own rules. And when the shared medium is a limited resource (as the air is), public laws and regulations apply.