How to calculate throughput for each node - Veins

I am working on a network congestion control algorithm and I would like to get the fairness, more specifically Jain's fairness index.
However I need to get the throughput of each node or vehicle.
I know the basic definition of throughput in networks, which is the amount of data transferred over a period of time.
I am not sure what the term means in a vehicular network, given that the network is broadcast-only.
In my current setup, the only message transmitted is the BSM, at a rate of 10 Hz.
Using: OMNeT++ 5.1.1, Veins 4.6, and SUMO 0.30.
Thank you.
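(In case a concrete starting point helps: below is a minimal offline sketch, not Veins API code, of how per-node throughput and Jain's fairness index could be computed once you have extracted, per vehicle, the total bytes successfully received over the observation window, e.g. from the scalars Veins writes to the .sca file. The byte counts and window length in the example are made up.)

```cpp
// Minimal offline sketch (not part of the Veins API): per-node throughput
// and Jain's fairness index from per-vehicle byte counts.
#include <cstdio>
#include <vector>

int main()
{
    const double observationWindow = 100.0;             // seconds (assumed)
    // Hypothetical per-vehicle byte counts extracted from the .sca file:
    std::vector<double> bytesReceived = {52000, 48000, 50500, 47000};

    std::vector<double> throughput;                      // bit/s per node
    for (double b : bytesReceived)
        throughput.push_back(b * 8.0 / observationWindow);

    // Jain's fairness index: J = (sum x_i)^2 / (n * sum x_i^2)
    double sum = 0.0, sumSq = 0.0;
    for (double x : throughput) { sum += x; sumSq += x * x; }
    double jain = (sum * sum) / (throughput.size() * sumSq);

    std::printf("Jain's fairness index: %.4f\n", jain);
    return 0;
}
```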

Related

Network Adapter Received/Sent bits in C++

Oh, almighty coding gurus, hear my plea. *Bows down*
I am relatively new to C++ and I am trying to make an agent server that sits on a Windows machine and returns statistics when polled by a client program. I have already created the UDP communication (for speed) and have figured out how to report back memory, processor utilization, and disk space. I am having a hard time figuring out how to get the network statistics for a given adapter.
My plan is this: you provide the program with the index number of an interface, and it spits out the network bits per second, both received and sent. This way I can monitor the network utilization and see if an adapter is getting slammed (similar to what you see in Task Manager).
The reason for this is that SNMP on Windows only has 32-bit counters, which is fine if your network bandwidth is 100 Mbps, but when it is gigabit the counter wraps around faster than I can poll it, giving unreliable results.
Is there any way to do this?
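(One possible direction, sketched below: on Windows Vista and later, the IP Helper API's GetIfEntry2() fills a MIB_IF_ROW2 whose InOctets/OutOctets counters are 64-bit, so they do not wrap the way the 32-bit SNMP counters do. The interface index and the sampling interval in the sketch are placeholders.)

```cpp
// Sketch: 64-bit per-interface counters via GetIfEntry2 (IP Helper API).
#include <winsock2.h>
#include <windows.h>
#include <iphlpapi.h>
#include <cstdio>
#pragma comment(lib, "iphlpapi.lib")

// Samples the adapter with the given ifIndex over 'intervalMs' milliseconds
// and returns received/sent bits per second.
bool SampleAdapterBps(unsigned ifIndex, unsigned intervalMs,
                      double& rxBps, double& txBps)
{
    MIB_IF_ROW2 before = {}, after = {};
    before.InterfaceIndex = ifIndex;
    after.InterfaceIndex  = ifIndex;

    if (GetIfEntry2(&before) != NO_ERROR) return false;
    Sleep(intervalMs);
    if (GetIfEntry2(&after) != NO_ERROR) return false;

    double seconds = intervalMs / 1000.0;
    rxBps = (after.InOctets  - before.InOctets)  * 8.0 / seconds;  // 64-bit counters
    txBps = (after.OutOctets - before.OutOctets) * 8.0 / seconds;
    return true;
}

int main()
{
    double rx = 0.0, tx = 0.0;
    // '11' is a placeholder interface index; enumerate yours with GetIfTable2().
    if (SampleAdapterBps(11, 1000, rx, tx))
        std::printf("rx: %.0f bit/s, tx: %.0f bit/s\n", rx, tx);
    return 0;
}
```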

Increasing the vehicle communication range by increasing its transmission power in Artery with Veins

I am currently working on evaluating the effects of communication load on the generation of Cooperative Awareness Messages. For this, I am using Artery with Veins for simulating the 802.11p stack. From my simulation, I observed that I was only getting an effective communication range of about 100 m using the default communication parameters given in Config veins in the omnetpp.ini file. After going through other related questions (such as "Change the transmission signal strength for a specific set of vehicles during the run-time"), I know that increasing the value of *.node[*].nic.mac1609_4.txPower should help in increasing the communication range. However, I am not noticing any change in the observed communication range when either increasing or decreasing this value.
Since I am quite new to using Artery with Veins, I am not sure if there is something else that needs to be done to increase the communication range of the vehicle.
Just a bit more detail on how I am calculating the communication range: I have two vehicles, A and B. I have set the speed of vehicle A to 4 m/s and of vehicle B to 0.01 m/s (so that it is almost at a standstill). Based on the CAM generation conditions, both vehicles at these speeds would generate a message every second. I have a straight road segment of 700 m with two lanes, and both vehicles are generated at the same time in lanes 0 and 1, respectively. Based on the speed of vehicle A, it should take 175 s for vehicle A to leave the road segment. Observing the generated .sca file at the end, I see that ReceivedBroadcasts + SNIRLostPackets = 28 for both vehicles, meaning that the two vehicles were in range only for the first 28 s (28 * 4 = 112 m). I tried different values of txPower and sensitivity, but I am still getting the same result.
Thanks for your help
Update
I have added screenshots of the log messages that I see during successful packet reception and during packet loss due to either errors or low power levels.
Successful Reception
Reception with error
No reception
Based on the logs, it seems that the packets are lost because the power measured at the receiver side is lower than minPowerLevel. I have found out that minPowerLevel can be set by changing the value of *.**.nic.phy80211p.minPowerLevel in the omnetpp.ini file, but could someone let me know how the Rx power is calculated (and in which file)?
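(For reference: in Veins the received power is computed by the analogue models configured for the PHY, e.g. SimplePathlossModel in the Veins source tree, and the decider only processes a frame if that power is above minPowerLevel. The following is a back-of-the-envelope sketch, not the actual Veins code, that estimates the maximum range from txPower and minPowerLevel under a free-space assumption; all parameter values are placeholders.)

```cpp
// Back-of-the-envelope sketch (NOT the actual Veins code): with free-space
// path loss (exponent 2) and unit antenna gains, estimate the distance at
// which the received power falls to minPowerLevel.
#include <cmath>
#include <cstdio>

int main()
{
    const double kPi          = 3.14159265358979323846;
    const double txPower_mW   = 20.0;      // assumed *.nic.mac1609_4.txPower (mW)
    const double minPower_dBm = -89.0;     // assumed minPowerLevel (dBm)
    const double freq_Hz      = 5.89e9;    // 802.11p control channel
    const double lambda       = 299792458.0 / freq_Hz;

    // Friis free space: Prx = Ptx * (lambda / (4*pi*d))^2
    double txPower_dBm = 10.0 * std::log10(txPower_mW);
    double pathloss_dB = txPower_dBm - minPower_dBm;   // budget until Prx == minPowerLevel
    double maxRange_m  = lambda / (4.0 * kPi) * std::pow(10.0, pathloss_dB / 20.0);

    std::printf("approximate maximum range: %.1f m\n", maxRange_m);
    return 0;
}
```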

How to figure out why UDP is only accepting packets at a relatively slow rate?

I'm using Interix on Windows XP to make it easier to port my C++ Linux application to Windows XP. My application sends and receives packets over a socket to and from a nearby machine running Linux. When sending, I'm only getting throughput of around 180 KB/sec, and when receiving I'm getting around 525 KB/sec. The same code running on Linux gets closer to 2,500 KB/sec.
When I attempt to send at a higher rate than 180 KB/sec, packets get dropped to bring the rate back down to about that level.
I feel like I should be able to get better throughput on sending than 180 KB/sec but am not sure how to go about determining what is the cause of the dropped packets.
How might I go about investigating this slowness in the hopes of improving throughput?
--Some More History--
To reach the above numbers, I have already improved the throughput a bit by doing the following (these made no difference on Linux, but helped throughput on Interix):
I changed SO_RCVBUF and SO_SNDBUF from 256 KB to 25 MB; this improved throughput by about 20%.
I ran an optimized build instead of a debug build; this improved throughput by about 15%.
I turned off all logging messages going to stdout and a log file; this doubled throughput.
So it would seem that CPU is a limiting factor on Interix, but not on Linux. Further, I am running on a virtual machine hosted in a hypervisor. The Windows XP guest is given 2 cores and 2 GB of memory.
I notice that the profiler shows the CPU on the two cores never exceeding 50% utilization on average. This even occurs when I have two instances of my application running; they still hover around 50% on both cores. Perhaps my application, which is multi-threaded, with a dedicated thread reading from the UDP socket and a dedicated thread writing to it (only one is active at any given time), is not being scheduled well on Interix, and thus my packets are being dropped?
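(One way to pin down where the loss happens, sketched below with made-up field names: stamp each datagram with a sequence number on the sender and count gaps on the receiver; comparing the gap count against the sender's send count tells you how many datagrams were lost between the two applications.)

```cpp
// Illustrative sketch: detect receive-side gaps by having the sender prepend
// a uint32_t sequence number to every UDP payload.
#include <cstdint>
#include <cstdio>
#include <cstring>

// Call this for every datagram the receive thread pulls off the socket.
void CheckSequence(const char* buf, size_t len, uint32_t& expectedSeq,
                   uint64_t& lostCount)
{
    if (len < sizeof(uint32_t)) return;
    uint32_t seq;
    std::memcpy(&seq, buf, sizeof(seq));
    if (seq > expectedSeq)
        lostCount += seq - expectedSeq;   // gap => datagrams never arrived
    expectedSeq = seq + 1;
}

int main()
{
    uint32_t expected = 0;
    uint64_t lost = 0;
    for (uint32_t seq : {0u, 1u, 3u, 4u}) {          // simulate datagram 2 being lost
        char buf[sizeof(uint32_t)];
        std::memcpy(buf, &seq, sizeof(seq));
        CheckSequence(buf, sizeof(buf), expected, lost);
    }
    std::printf("lost datagrams: %llu\n", static_cast<unsigned long long>(lost));
    return 0;
}
```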
In answering your question, I am making the following assumptions based on your description of the problem:
(1) You are using the exact same program on Linux when achieving the 2,500 KB/sec throughput, apart from the socket library, which is, of course, going to be different between Windows and Linux. If this assumption is correct, we probably shouldn't have to worry about other pieces of your code affecting the throughput.
(2) When using Linux to achieve 2,500 KB/sec throughput, the node is in the exact same location in the network. If this assumption is correct, we don't have to worry about network issues affecting your throughput.
Given these two assumptions, I would say that you likely have a problem in your socket settings on the Windows side. I would suggest checking the size of the send-buffer first. The size of the send-buffer is 8192 bytes by default. If you increase this, you should see an increase in throughput. Use setsockopt() to change this. Here is the usage manual: http://msdn.microsoft.com/en-us/library/windows/desktop/ms740476(v=vs.85).aspx
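(For reference, a minimal Winsock-style sketch of enlarging both socket buffers with setsockopt(); on Interix the POSIX setsockopt() call is analogous apart from the headers and error handling, and the 4 MB value below is only an example to tune.)

```cpp
// Sketch: enlarge the send and receive buffers of a UDP socket.
#include <winsock2.h>
#include <cstdio>
#pragma comment(lib, "ws2_32.lib")

bool EnlargeSocketBuffers(SOCKET s, int bytes)
{
    if (setsockopt(s, SOL_SOCKET, SO_SNDBUF,
                   reinterpret_cast<const char*>(&bytes), sizeof(bytes)) != 0)
        return false;
    if (setsockopt(s, SOL_SOCKET, SO_RCVBUF,
                   reinterpret_cast<const char*>(&bytes), sizeof(bytes)) != 0)
        return false;
    return true;
}

int main()
{
    WSADATA wsa;
    if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0) return 1;

    SOCKET s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
    if (s != INVALID_SOCKET) {
        if (!EnlargeSocketBuffers(s, 4 * 1024 * 1024))   // example size
            std::printf("setsockopt failed: %d\n", WSAGetLastError());
        closesocket(s);
    }
    WSACleanup();
    return 0;
}
```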
EDIT: It looks like I misread your post going through it too quickly the first time. I just noticed you're using Interix, which means you're probably not using a different socket library. Nevertheless, I suggest checking the send buffer size first.

New MAC protocol employing periodic backoff instead of a random one in a wireless network

I am trying to implement a new MAC protocol based on a 2011 IEEE paper, in which the random waiting time of the default wireless MAC (802.11 DCF) is replaced by a period-controlled MAC for higher performance.
I will explain the proposed protocol in terms of a simple scenario: consider two transmitting nodes experiencing a collision in a network. After each has waited a random amount of time, say x and y (implying they have different backoff values), we apply a periodic backoff from then on: their backoffs become x+a and y+a, and so on, which are never equal, preventing them from ever colliding with each other again.
Also, the period of the backoff is the same for all nodes in the network ('a' in the above example), and any change made to this period 'a' affects all nodes in the network. This change is based on the channel state, and the period is altered following an Additive Increase Multiplicative Decrease (AIMD) procedure with respect to the channel-idleness threshold set in the protocol's algorithm.
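(To make the scheme concrete, here is a toy sketch, not the paper's code: each node keeps its initial random offset and thereafter adds the same network-wide period 'a' on every backoff, while 'a' is adapted with AIMD from the measured channel idleness. The threshold and step values, and the direction of the adjustment, are assumptions.)

```cpp
// Toy sketch of a period-controlled backoff (not the paper's implementation).
#include <cstdio>
#include <random>

struct PeriodicBackoff {
    double offset;   // initial random wait, drawn once (x or y in the text)
    double period;   // network-wide period 'a'
    int    round;

    // Subsequent backoffs: offset + a, offset + 2a, ... Two nodes with
    // different offsets therefore never pick the same backoff again.
    double nextBackoff() { return offset + (++round) * period; }
};

// AIMD update of the shared period from measured channel idleness.
// Assumed direction: grow the period additively while the channel looks busy,
// shrink it multiplicatively once it is idle enough.
double updatePeriod(double period, double idleFraction)
{
    const double idleThreshold = 0.2;   // assumed threshold
    const double addStep       = 0.1;   // additive increase (ms)
    const double mulFactor     = 0.5;   // multiplicative decrease
    return (idleFraction < idleThreshold) ? period + addStep
                                          : period * mulFactor;
}

int main()
{
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> rnd(0.0, 1.0);

    PeriodicBackoff nodeA{rnd(rng), 1.0, 0};
    PeriodicBackoff nodeB{rnd(rng), 1.0, 0};
    for (int i = 0; i < 3; ++i)
        std::printf("round %d: A=%.3f ms, B=%.3f ms\n",
                    i, nodeA.nextBackoff(), nodeB.nextBackoff());
    return 0;
}
```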
Although the author of the above-mentioned IEEE paper refused to help with the code, he did mention that the changes he made to implement the protocol were done in the following files:
The code in these files (mac-802.11.cc, mac-timers.cc, mac-802.11.h, mac-timers.h) is posted on pastebin.ca:
http://pastebin.ca/2303764; http://pastebin.ca/2303763; pastebin.ca/2303762;
pastebin.ca/2303765
Also the algorithm for the proposed MAC protocol is given in : pastebin.ca/2303772
I would appreciate it if anyone could help me alter this method so that the random backoff calculation becomes a periodic one. Thanks.
Any advice or suggestions will be much appreciated.
NEVER DO THIS KIND OF THING
Wireless networks are not made up of only your devices, and a network protocol is something that is expected to be observed by EVERY device that shares the SAME PHYSICAL LAYER.
What you are doing is essentially improving the performance of your devices by "stealing" from everybody else part of their opportunity to access the medium.
A new MAC protocol necessarily requires a distinct physical medium, hence distinct frequencies and channels, not already in use and assigned for WLAN; otherwise your devices:
cannot be certified as 802.11
can be claimed to be "unauthorized interfering devices", with all the consequences in terms of legal issues you may face in all the countries that regulate how the EM spectrum is used (including imprisonment of the users of such devices, in case they damage any public communication!)
When dealing with physical protocols on shared media you cannot make up your own rules. And when the shared medium is a limited resource (as the air is), public laws and regulations are established.

Packet Delay Variation (PDV)

I am currently implementing a video streaming application where the goal is to utilize as much of the gigabit Ethernet bandwidth as possible.
The application protocol is built over TCP/IP.
The network library uses the asynchronous IOCP mechanism.
Only streaming over LAN is needed
No need for packets to go through routers
This simplifies many things. Nevertheless, I am experiencing problems with packet delay variation.
This means that a video frame which should arrive, for example, every 20 ms (1280 x 720p, 50 Hz video signal) sometimes arrives delayed by tens of milliseconds. More:
Average frame rate is kept
Maximum video frame delay is dependent on network utilization
The more data on the LAN, the higher the maximum video frame delay
For example, when bandwidth usage is 800 Mbps, PDV is about 45-50 ms.
To my questions:
What are the practical boundaries in lowering that value?
Do you know of any measurement reports available on the internet dealing with this?
I want to know whether there is some subtle error in my application (perhaps excessive locking) or whether there is no way to improve these numbers with current technology.
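(One thing that may help separate application-side delays, e.g. lock contention, from network-induced ones is to timestamp each frame as soon as the receive path completes it and track the deviation from the nominal 20 ms period; a small sketch is below, with a hypothetical onFrameComplete() hook.)

```cpp
// Sketch: track how much each frame's arrival deviates from the nominal
// 20 ms period (50 Hz). Call onFrameComplete() from the receive path
// whenever a full video frame has been assembled.
#include <chrono>
#include <cstdio>
#include <thread>

class PdvMonitor {
public:
    void onFrameComplete()
    {
        const double kNominalMs = 20.0;                  // 50 Hz video
        auto now = std::chrono::steady_clock::now();
        if (haveLast_) {
            double gapMs =
                std::chrono::duration<double, std::milli>(now - last_).count();
            double deviation = gapMs - kNominalMs;       // > 0 means the frame is late
            if (deviation > maxDeviationMs_) maxDeviationMs_ = deviation;
        }
        last_ = now;
        haveLast_ = true;
    }

    double maxDeviationMs() const { return maxDeviationMs_; }

private:
    std::chrono::steady_clock::time_point last_{};
    bool   haveLast_ = false;
    double maxDeviationMs_ = 0.0;
};

int main()
{
    PdvMonitor monitor;
    for (int i = 0; i < 5; ++i) {                        // simulate 5 frame arrivals
        std::this_thread::sleep_for(std::chrono::milliseconds(20 + (i % 2) * 5));
        monitor.onFrameComplete();
    }
    std::printf("max deviation from 20 ms: %.1f ms\n", monitor.maxDeviationMs());
    return 0;
}
```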
For video streaming, I would recommend using UDP instead of TCP, as it has less overhead and packet confirmation is usually not needed, since the retransmitted data would already be obsolete.