New MAC protocol employing periodic backoff instead of a random one in wireless networks - C++

I am trying to implement a new MAC protocol based on a 2011 IEEE paper that changes the random waiting time of the default wireless MAC (802.11 DCF) into a period-controlled MAC for higher performance.
I will explain the proposed protocol with a simple scenario: consider two transmitting nodes in a network that experience a collision. Each waits a random amount of time, say x and y (so they have different backoff values). If we apply periodic backoff from then on, their backoffs become x+a and y+a, and so on; the two values never coincide, which prevents them from ever colliding with each other again.
Also, the backoff period is the same for all nodes in the network ('a' in the above example), and any change made to this period 'a' affects all nodes. The change is driven by the channel state: the period is adjusted with an Additive Increase Multiplicative Decrease (AIMD) procedure relative to the channel-idleness threshold set in the protocol algorithm.
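To make the idea concrete, here is a minimal standalone sketch (this is not the paper's code and is not tied to the mac-802.11.cc / mac-timers.cc code linked below; the slot units, the AIMD constants, and the idleness threshold are placeholders) of how a periodic backoff with a shared, AIMD-controlled period might look:

```cpp
// Standalone illustration only: random initial offsets, then a fixed shared
// period 'a' that is adjusted network-wide by an AIMD rule.
#include <cstdlib>
#include <iostream>

struct PeriodicBackoff {
    double offset;   // the node's initial random wait (x or y above)
    double period;   // the network-wide period 'a', identical for all nodes

    // Hypothetical AIMD step: if the channel is less idle than the threshold,
    // grow 'a' additively; otherwise shrink it multiplicatively.
    void adjustPeriod(double channelIdleness, double idleThreshold) {
        if (channelIdleness < idleThreshold)
            period += 1.0;    // additive increase (placeholder step)
        else
            period *= 0.5;    // multiplicative decrease (placeholder factor)
    }

    // Instead of drawing a fresh random value, reuse the fixed offset plus the
    // shared period, so nodes with different offsets never collide again.
    double nextBackoff() const { return offset + period; }
};

int main() {
    PeriodicBackoff nodeA{static_cast<double>(std::rand() % 32), 16.0};
    PeriodicBackoff nodeB{static_cast<double>(std::rand() % 32), 16.0};

    // The same adjustment is applied to every node, keeping 'a' network-wide.
    nodeA.adjustPeriod(0.2, 0.5);
    nodeB.adjustPeriod(0.2, 0.5);

    std::cout << nodeA.nextBackoff() << " vs " << nodeB.nextBackoff() << '\n';
}
```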
Although the author of the above-mentioned IEEE paper refused to help with the code, he did mention that the changes he made to implement the protocol were in the following files:
The code in these files (mac-802.11.cc, mac-timers.cc, mac-802.11.h, mac-timers.h) is posted on pastebin.ca:
http://pastebin.ca/2303764, http://pastebin.ca/2303763, pastebin.ca/2303762, pastebin.ca/2303765
Also, the algorithm for the proposed MAC protocol is given in: pastebin.ca/2303772
I would appreciate it if anyone could help me alter this method to change the random calculation to periodic. Thanks.
Any advice or suggestion will be much appreciated.

NEVER DO THIS KIND OF THING
A wireless network is not made up of only your devices, and a network protocol is something that is expected to be observed by EVERY device that shares the SAME PHYSICAL LAYER.
What you are doing is essentially improving the performance of your devices by "stealing" from everybody else part of their opportunity to access the medium.
A new MAC protocol necessarily requires a distinct physical medium, hence distinct frequencies and channels not already in use and assigned for WLAN; otherwise your devices:
cannot be certified as 802.11
can be claimed to be "unauthorized interfering devices", with all the legal consequences that follow in every country that regulates how the EM spectrum is used (including imprisonment of the users of such devices if they damage any public communication!)
When dealing with physical protocols on shared media, you cannot make up your own rules. And when the shared medium is a limited resource (as the air is), public laws and regulations apply.

Related

How to build an asymmetric multi-party communication protocol?

I am implementing a C++ application that involves multiple users (for example, 128 users) with asymmetric roles (they all have different jobs). In such a scenario, every user has to communicate with every other user, so each pair of users needs a bidirectional (virtual) communication channel between them.
There are three popular messaging patterns in this application.
Exchange: each user i has a message m_ij to send to every user j != i. The length of m_ij is a public constant. The messages m_ij are independent and have no relation to each other. This is essentially "everyone has something for everyone".
Distribute: a (predetermined) user i_0 has a message m_j for every other user j != i_0. The length of the messages is a public constant. It is somewhat like broadcast, except the receivers do not all receive the same message.
Gather: a (predetermined) user i_0 receives a message m_j from every other user j != i_0. The length of the messages is a public constant. This is very similar to a voting mechanism.
Besides these, there is also a small amount of two-party communication between some of the users.
The application is very sensitive to round-trip cost, so a one-round-trip implementation of these communication patterns is very desirable.
In addition, the bandwidth usage of the application is very high, so a non-blocking implementation is almost a must-have.
I first tried the classic server/client socket approach (https://www.geeksforgeeks.org/socket-programming-cc/) by using multiple ports and deploying a server/client pair between every two users. However, it turned out to be a failure.
I also investigated the ZMQ library, but to my limited understanding I would have to handle the "routing" on my own somehow (see the rough sketch below), which I am not capable of.
Nanomsg is another candidate, but none of the patterns it provides seems to match the requirements.
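For what it is worth, the kind of mesh I have in mind looks roughly like the sketch below (ports, endpoints, and message contents are made up, error handling and cleanup are omitted, and I am not at all sure this is the right way to use ZMQ): every user binds one PULL socket and keeps a dedicated PUSH socket per peer, so Exchange, Distribute, and Gather all become plain sends and receives with no extra routing layer.

```cpp
// Rough sketch only: run one instance per user id (./a.out <id>).
#include <zmq.h>
#include <cstdio>
#include <cstdlib>
#include <string>
#include <vector>

int main(int argc, char** argv) {
    const int n = 4;                               // number of users (128 in practice)
    const int me = argc > 1 ? std::atoi(argv[1]) : 0;
    void* ctx = zmq_ctx_new();

    // Inbound: one PULL socket that every other user connects to.
    void* in = zmq_socket(ctx, ZMQ_PULL);
    zmq_bind(in, ("tcp://*:" + std::to_string(5550 + me)).c_str());

    // Outbound: one PUSH socket per peer, so each message goes to exactly one user.
    std::vector<void*> out(n, nullptr);
    for (int j = 0; j < n; ++j) {
        if (j == me) continue;
        out[j] = zmq_socket(ctx, ZMQ_PUSH);
        zmq_connect(out[j], ("tcp://localhost:" + std::to_string(5550 + j)).c_str());
    }

    // "Exchange": send m_ij to every j != i, then collect the n-1 incoming messages.
    for (int j = 0; j < n; ++j) {
        if (j == me) continue;
        std::string m = "m_" + std::to_string(me) + "_" + std::to_string(j);
        zmq_send(out[j], m.data(), m.size(), 0);
    }
    for (int k = 0; k < n - 1; ++k) {
        char buf[64] = {0};
        zmq_recv(in, buf, sizeof buf - 1, 0);      // blocks; ZMQ_DONTWAIT would make it non-blocking
        std::printf("user %d got %s\n", me, buf);
    }
    return 0;                                       // zmq_close/zmq_ctx_term omitted
}
```

Distribute and Gather would just be the send half or the receive half of the same loop, restricted to i_0. But I do not know whether this scales to 128 users or how to get it down to one round trip, which is why I am asking.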
So, could anyone provide any idea about this challenge? Thanks in advance!

How to calculate throughput for each node.

I am working on a network congestion control algorithm and I would like to measure its fairness, more specifically Jain's fairness index.
To do that, I need the throughput of each node (vehicle).
I know the basic definition of throughput in networks, which is the amount of data transferred over a period of time.
I am not sure what the term means in a vehicular network, since the network is broadcast-only.
In my current setup, the only message transmitted is the BSM, at a rate of 10 Hz.
Using: OMNeT++ 5.1.1, VEINS 4.6, and SUMO 0.30.
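In case it helps to show what I am after: once I have a per-node throughput value (my current working definition is bits of successfully received BSM data divided by the observation time; I am not sure that is the right definition for a broadcast-only network), computing Jain's index itself is straightforward, something like:

```cpp
// Jain's fairness index: (sum x_i)^2 / (n * sum x_i^2), ranging from 1/n (unfair) to 1 (fair).
#include <iostream>
#include <vector>

double jainIndex(const std::vector<double>& throughput) {
    double sum = 0.0, sumSq = 0.0;
    for (double x : throughput) { sum += x; sumSq += x * x; }
    if (sumSq == 0.0) return 0.0;                  // no traffic at all
    return (sum * sum) / (throughput.size() * sumSq);
}

int main() {
    // placeholder values: bits/s of received BSMs per vehicle over the run
    std::vector<double> perNode = {12000.0, 11500.0, 9800.0, 13000.0};
    std::cout << "Jain's fairness index: " << jainIndex(perNode) << '\n';
}
```

What I am missing is how to obtain those per-node numbers in VEINS in the first place.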
Thank you.

Discrete Event Simulation C++ (array backed heap for priority queue)

First, I understand that being spoon-fed answers will absolutely hurt me in the long run, and that is not what I'm looking for. That being said, here is the main point of the assignment:
"We are going to model a simple island-hopping attack on a small corporate network. The
attacker will compromise a computer in the network and use that as the launching point for other
attacks. Our attack model is simplified so that each attack takes a set amount of time and
succeeds with some probability. Periodically, the attacker and each compromised machine will
attempt to compromise a random machine in the network. Attacks crossing the intrusion
detection system will have a certain percentage chance of being caught. The sysadmin will
react (with some delay) to fix machines with 100% certainty.
The topology of the network is a tree. At the root of the tree is the IDS, with all connected
components as children. The IDS is also the network gateway.
Two switches (not agents) are direct children on the IDS.
The remaining computers are split evenly as children between the two switches.
Every event from the attacker crosses the IDS. Only attacks from computers under one switch
to computers under the other switch can be detected by the IDS
The sysadmin is an agent in the simulation that is not attached to the network. It can only
receive simulation notification from the intrusion detection system."
There are 3 event types: attack, fix, and notify. I know that the events are to be stored in the queue, which is fine, but I'm not sure how to implement these events. Create a virtual base class Event and a bunch of subclasses defining all the events? One class for all events? Who knows?
There are also 3 agents that respond to or produce events: the attacker, the computers, and the IDS. Again: should I implement these as separate classes, or would it be sufficient to use one main class?
My program is to be given 3 inputs: number of computers, percent success of the attack, and percent detected across the IDS.
What I'm having real trouble with is the organization of the whole simulation, which makes it rather difficult to begin the design and implementation. I can't seem to wrap my head around the structure of the events, and my coding is rather rusty, I'm afraid to admit. A nudge in the right direction would be greatly appreciated.
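The rough shape I keep circling back to is something like the sketch below (all names are placeholders, not the assignment's required interface): a polymorphic Event base class ordered by timestamp in a min-heap, with std::priority_queue on top of a std::vector serving as the array-backed heap. I honestly do not know whether this is the right direction.

```cpp
#include <iostream>
#include <memory>
#include <queue>
#include <vector>

struct Event {
    double time;                            // simulation time at which the event fires
    explicit Event(double t) : time(t) {}
    virtual ~Event() = default;
    virtual void process() = 0;             // each event type reacts differently
};

struct AttackEvent : Event {
    int targetId;
    AttackEvent(double t, int id) : Event(t), targetId(id) {}
    void process() override { std::cout << time << ": attack on machine " << targetId << '\n'; }
};

struct FixEvent : Event {
    int machineId;
    FixEvent(double t, int id) : Event(t), machineId(id) {}
    void process() override { std::cout << time << ": fix machine " << machineId << '\n'; }
};

// Comparator so the event with the smallest timestamp is popped first (min-heap).
struct LaterFirst {
    bool operator()(const std::unique_ptr<Event>& a, const std::unique_ptr<Event>& b) const {
        return a->time > b->time;
    }
};

int main() {
    std::priority_queue<std::unique_ptr<Event>,
                        std::vector<std::unique_ptr<Event>>, LaterFirst> futureEvents;
    futureEvents.push(std::make_unique<AttackEvent>(3.0, 7));
    futureEvents.push(std::make_unique<FixEvent>(1.5, 7));

    while (!futureEvents.empty()) {
        futureEvents.top()->process();      // process the earliest pending event
        futureEvents.pop();                 // then remove it from the heap
    }
}
```

In the real thing I imagine process() would schedule new events back onto the queue (the attacker queuing the next attack, the IDS queuing a notify, and so on), but that is exactly the part I cannot picture clearly.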

NTPD synchronization with 1PPS signal

I have an AHRS (attitude heading reference system) that interfaces with my C++ application. I receive a 50 Hz stream of messages via Ethernet from the AHRS, and as part of each message I get UTC time. My system will also have NTPD running as the time server for our embedded network. The AHRS also has a 1PPS output that indicates the second rollover time for UTC. I would like to synchronize the NTPD time with this UTC time. After some research, I have found that there are techniques that use a serial port as input for the 1PPS. From what I can find, these techniques use GPSD to read the 1PPS and communicate with NTPD to synchronize the system time. However, GPSD expects an NMEA-formatted message from a GPS, and I don't have that.
The way I see it now, I have a couple of possible approaches:
Don't use GPSD. Write a program that reads the 1PPS and the Ethernet message containing UTC, and then somehow communicates this information to NTPD.
Use GPSD. Write a program that repackages the Ethernet message into something that can be sent to GPSD, and let it handle the interaction with NTPD.
Something else?
Any suggestions would be very much appreciated.
EDIT:
I apologize for this poorly constructed question.
My solution to this problem is as follows:
1 - interface the 1PPS signal to an RS232 port, which, as it turns out, is a standard approach that GPSD handles.
2 - write a custom C++ application to read the Ethernet messages containing UTC, and from that build an NMEA message containing the UTC.
3 - feed the NMEA message to GPSD, which in turn interfaces with NTPD to synchronize the GPS/1PPS information with system time.
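For step 2, the sentence-building part is simple enough; a minimal sketch (I chose GPZDA because it carries only UTC time and date; whether a ZDA-only feed is enough for GPSD in practice, and how the sentence is actually handed to GPSD - serial device, pty, etc. - are separate questions that depend on the setup) looks like this:

```cpp
// Build a $GPZDA sentence from a UTC time/date and append the standard NMEA
// checksum (XOR of every character between '$' and '*').
#include <cstdio>
#include <string>

std::string makeZda(int h, int m, int s, int day, int mon, int year) {
    char body[64];
    std::snprintf(body, sizeof body, "GPZDA,%02d%02d%02d.00,%02d,%02d,%04d,00,00",
                  h, m, s, day, mon, year);
    unsigned char sum = 0;
    for (const char* p = body; *p; ++p) sum ^= static_cast<unsigned char>(*p);
    char sentence[80];
    std::snprintf(sentence, sizeof sentence, "$%s*%02X\r\n", body, static_cast<unsigned>(sum));
    return sentence;
}

int main() {
    // e.g. 12:34:56 UTC on 2013-02-07, as decoded from the AHRS Ethernet message
    std::printf("%s", makeZda(12, 34, 56, 7, 2, 2013).c_str());
}
```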
I don't know why you would want to drive a PPS device with a signal that is delivered via Ethernet frames. Moreover, PPS does not work the way you seem to think it does. There is no timecode in a PPS signal, so you can't sync the time to the PPS signal. The PPS signal is simply used to inform the computer of how long a second is.
There are examples that show how a PPS signal can be read in using a serial port, e.g. by attaching it to an interrupt-capable pin - that might be Ring Indicator (RI) or something else with comparable features. The problem I see there is that any sort of code-driven servicing of an interrupt has its latencies and jitter. This is determined by your system design (and, if you write it yourself, by your own system-tailored interrupt handler routine - on a PC, even the good old ISA-bus NMI handlers could show such effects).
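For reference, on Linux the kernel-timestamped pulses are usually exposed through the RFC 2783 PPS API; a minimal sketch, assuming the pulse has already been bound to a /dev/pps0 device (the device path, the mode bits, and the error handling are assumptions about your setup, and the interrupt latency/jitter caveat above still applies):

```cpp
// Fetch the timestamps the kernel latched for each 1PPS assert (rising) edge.
#include <cstdio>
#include <fcntl.h>
#include <sys/timepps.h>

int main() {
    int fd = open("/dev/pps0", O_RDONLY);
    if (fd < 0) { std::perror("open /dev/pps0"); return 1; }

    pps_handle_t handle;
    if (time_pps_create(fd, &handle) < 0) { std::perror("time_pps_create"); return 1; }

    pps_params_t params;
    time_pps_getparams(handle, &params);
    params.mode |= PPS_CAPTUREASSERT;              // timestamp the assert edge
    time_pps_setparams(handle, &params);

    for (int i = 0; i < 5; ++i) {
        pps_info_t info;
        struct timespec timeout = {3, 0};          // give up after 3 s without a pulse
        if (time_pps_fetch(handle, PPS_TSFMT_TSPEC, &info, &timeout) < 0) break;
        std::printf("PPS assert #%lu at %ld.%09ld\n",
                    static_cast<unsigned long>(info.assert_sequence),
                    static_cast<long>(info.assert_timestamp.tv_sec),
                    static_cast<long>(info.assert_timestamp.tv_nsec));
    }
    time_pps_destroy(handle);
    return 0;
}
```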
To the best of my understanding, people who do time sync on a "computer" use a true hardware timer/counter (with e.g. 64 bits) and a latch that is triggered to sample and hold the timer value on every incoming 1PPS pulse. Folks already do this with PTP over Ethernet, with the small variation that a specific edge of the incoming data is used as the trigger; this way sender and receiver can be synchronized by further program logic that grabs the resulting value from the built-in PTP hardware latch.
see here: https://en.wikipedia.org/wiki/Precision_Time_Protocol
along with e.g. 802.1AS: http://www.ieee802.org/1/pages/802.1as.html
described on Wikipedia in the section "Related initiatives" as:
"IEEE 802.1AS-2011 is part of the IEEE Audio Video Bridging (AVB) group of standards, further extended by the IEEE 802.1 Time-Sensitive Networking (TSN) Task Group. It specifies a profile for use of IEEE 1588-2008 for time synchronization over a virtual bridged local area network (as defined by IEEE 802.1Q). In particular, 802.1AS defines how IEEE 802.3 (Ethernet), IEEE 802.11 (Wi-Fi), and MoCA can all be parts of the same PTP timing domain."
some article (in German): https://www.elektronikpraxis.vogel.de/ethernet-fuer-multimediadienste-im-automobil-a-157124/index4.html
and some presentation: http://www.ieee802.org/1/files/public/docs2008/as-kbstanton-8021AS-overview-for-dot11aa-1108.pdf
My take on your question is:
Yes, it is possible, but it is a precision-limited design due to internal factors like the latency and jitter of the interrupt handler you are forced to use. The achievable overall precision, per pulse and over the long term, is hard to say, but it might range from some 10 ms at startup with a single pulse down to maybe (a guess) 0.1 ms. Doing it means proving it: long-term observations should help you uncover the true practical limits with your specific computer and chosen software environment.

Conceptual question on (a tool like) LoadRunner

I'm using LoadRunner to stress-test a J2EE application.
I have got: 1 MySQL DB server, and 1 JBoss App server. Each is a 16-core (1.8GHz) / 8GB RAM box.
Connection Pooling: The DB server is using max_connections = 100 in my.cnf. The App Server too is using min-pool-size and max-pool-size = 100 in mysql-ds.xml and mysql-ro-ds.xml.
I'm simulating a load of 100 virtual users from a 'regular', single-core PC. This is a 1.8GHz / 1GB RAM box.
The application is deployed and being used on a 100 Mbps ethernet LAN.
I'm using rendezvous points in sections of my stress-testing script to simulate real-world parallel (and not concurrent) use.
Question:
The CPU utilization on this load-generating PC never reaches 100% and memory too, I believe, is available. So, I could try adding more virtual users on this PC. But before I do that, I would like to know 1 or 2 fundamentals about concurrency/parallelism and hardware:
With only a single-core load generator like this one, can I really simulate a parallel load of 100 users (with each user operating from a dedicated PC in real life)? My possibly incorrect understanding is that 100 threads on a single-core PC will run concurrently (that is, interleaved) but not in parallel... which means I cannot really simulate a real-world load of 100 parallel users (on 100 PCs) from just one single-core PC! Is that correct?
Network bandwidth limitations on user parallelism: even assuming I had a 100-core load-generating PC (or, alternatively, 100 single-core PCs sitting on my LAN), won't the way Ethernet works permit only concurrency, and not parallelism, of users on the Ethernet wire connecting the load-generating PC to the server? In fact, it seems this issue (the absence of user parallelism) would persist even in real-world application usage (with 1 PC per user), since the user requests reaching the app server on a multi-core box can only arrive interleaved. That is, the only time the multi-core server could process user requests in parallel would be if each user had her own dedicated physical-layer connection between her PC and the server!
Assuming parallelism is not achievable (due to the above 'issues') and only the next best thing, concurrency, is possible, how would I go about selecting the hardware and network specification for my simulation? For example: (a) How powerful should my load-generating PCs be? (b) How many virtual users should I create on each of these PCs? (c) Does each PC on the LAN have to be connected to the server via a switch (to avoid the broadcast traffic that would occur if a hub were used instead of a switch)?
Thanks in advance,
/HS
Not only are you using Ethernet; assuming you're writing web services, you're talking over HTTP(S), which sits atop TCP sockets, a reliable, ordered protocol with the built-in round trips inherent to reliable protocols. Sockets sit on top of IP; if your IP packets don't line up with your Ethernet frames, you'll never fully utilize your network. Even if you were using UDP, had shaped your datagrams to fit your Ethernet frames, and had 100 load generators and 100 1 Gbit Ethernet cards on your server, they'd still be operating on interrupts and you'd have time multiplexing a little bit further down the stack.
Each level here can be thought of in terms of transactions, but it doesn't make sense to think at every level at once. If you're writing a SOAP application that operates at level 7 of the OSI model, then this is your domain. As far as you're concerned your transactions are SOAP HTTP(S) requests, they are parallel and take varying amounts of time to complete.
Now, to actually get around to answering your question: it depends on your test scripts, the amount of memory they use, and even the speed at which your application responds. 200 or more virtual users should be okay, but finding your bottlenecks is a matter of scientific inquiry. Do the experiments, find them, widen them, repeat until you're happy. Gather system metrics from your load generators and system under test, compare them with OS-provider recommendations, look at the difference between a dying system and a working system, look for graphs that reach a plateau, and so on.
It sounds to me like you're overthinking this a bit. Your servers are fast and new, and are more than suited to handle lots of clients. Your bottleneck (if you have one) is going to be either your application itself or your 100 Mbit network.
1./2. You're testing the server, not the client. In this case, all the client is doing is sending and receiving data - there's no overhead for client processing (rendering HTML, decoding images, executing javascript and whatever else it may be). A recent unicore machine can easily saturate a gigabit link; a 100 mbit pipe should be cake.
Also - The processors in newer/fancier ethernet cards offload a lot of work from the CPU, so you shouldn't necessarily expect a CPU hit.
3. Don't use a hub. There's a reason you can buy a 100m hub for $5 on craigslist.
Without having a better understanding of your application it's tough to answer some of this, but generally speaking you are correct that to achieve a "true" stress test of your server it would be ideal to have 100 cores (using a target of a 100 concurrent users), i.e. 100 PC's. Various issues, though, will probably show this as a no-brainer.
I have a communication engine I built a couple of years back (.NET / C#) that uses asynchronous sockets - we needed the fastest speeds possible, so we had to forgo adding any additional layers on top of the socket like HTTP or any other higher abstractions. Running on a quad-core 3.0 GHz computer with 4 GB of RAM, that server easily handles the traffic of ~2,200 concurrent connections. There's a Gb switch and all the PCs have Gb NICs. Even with all PCs communicating at the same time, it's rare to see processor loads > 30% on that server. I assume this is because of all the latency that is inherent in the "total system."
We have a new requirement to support 50,000 concurrent users that I'm currently implementing. The server has dual quad core 2.8GHz processors, a 64-bit OS, and 12GB of RAM. Our modeling shows this computer is more than enough to handle the 50K users.
Issues like the network latency I mentioned (don't forget the CAT 3 vs. CAT 5 vs. CAT 6 issue), database connections, types of data being stored and mean record sizes, referential issues, backplane and bus speeds, hard drive speeds and sizes, etc., play as much of a role as anything in slowing down a platform "in total." My guess would be that you could have 500, 750, 1,000, or even more users on your system.
The goal in the past was to never leave a thread blocked for too long ... the new goal is to keep all the cores busy.
I have another application that downloads and analyzes the content of ~7,800 URLs daily. Running on a dual quad-core 3.0 GHz machine (Windows 7 Ultimate, 64-bit edition) with 24 GB of RAM, that process used to take ~28 minutes to complete. By simply switching the loop to a Parallel.ForEach(), the entire process now takes < 5 minutes. The processor load we've seen is always less than 20%, with a maximum network load of only 14% (CAT 5 on a Gb NIC through a standard Gb dumb hub and a T-1 line).
Keeping all the cores busy makes a huge difference, especially in applications that spend a lot of time waiting on IO.
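That change isn't specific to .NET; in C++ terms the same "switch the loop to a parallel one" idea looks roughly like the sketch below (fetchAndAnalyze() is just a placeholder for the real download-and-analysis work, not code from either of my applications):

```cpp
// Launch each download/analysis task asynchronously so the cores stay busy
// while other tasks are blocked waiting on IO, then collect the results.
#include <future>
#include <iostream>
#include <string>
#include <vector>

int fetchAndAnalyze(const std::string& url) {
    // placeholder for the real download + analysis work
    return static_cast<int>(url.size());
}

int main() {
    std::vector<std::string> urls = {"http://example.com/a", "http://example.com/b"};

    // Sequential version (the ~28-minute shape):
    //   for (const auto& u : urls) std::cout << fetchAndAnalyze(u) << '\n';

    // Parallel version: one async task per URL, results gathered afterwards.
    std::vector<std::future<int>> pending;
    for (const auto& u : urls)
        pending.push_back(std::async(std::launch::async, fetchAndAnalyze, u));

    for (auto& f : pending)
        std::cout << f.get() << '\n';
}
```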
As you are representing users, disregard the rendezvous unless you either have an engineering requirement to maintain simultaneous behavior, or your agents are processes rather than human users and are governed by a clock tick. Humans are chaotic computing units with varying arrival and departure windows based upon how quickly one can or cannot read, type, converse with friends, etc... A great book on the subject of population behavior is "Chaos" by James Gleick.
The odds of your 100 decoupled users being highly synchronous in their behavior on an instant basis in observable conditions is zero. The odds of concurrent activity within a defined time window however, such as 100 users logging in within 10 minutes after 9:00am on a business morning, can be quite high.
As a side note, a resume with rendezvous emphasized on it is the #1 marker for a person with poor tool understanding and poor performance test process. This comes from a folio of over 1500 interviews conducted over the past 15 years (I started as a Mercury Employee on april 1, 1996)
James Pulley
Moderator
-SQAForums WinRunner, LoadRunner
-YahooGroups LoadRunner, Advanced-LoadRunner
-GoogleGroups lr-LoadRunner
-Linkedin LoadRunner (owner), LoadrunnerByTheHour (owner)
Mercury Alum (1996-2000)
CTO, Newcoe Performance Engineering