Bandwidth monitoring using a C++ program - c++

I am developing a small application that requires a module to check whether there is bandwidth or not. Basically, the module should trigger an event when the bandwidth goes down. Can this be achieved with a C++ program?

Look for your network interface in the /sys/class/net/ directory. My system only has two interfaces, lo and eth0.
There are a lot of files describing the status of the interface to explore.
I would start with operstate, statistics/rx_bytes or statistics/rx_packets.
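If it helps, here is a minimal polling sketch in C++ that reads those files; it assumes the interface is eth0 (adjust the path for your system) and just prints the values, so you would replace the print with whatever event you want to raise when the link drops.

```cpp
// Minimal sketch, assuming the interface is eth0; adjust the path as needed.
#include <chrono>
#include <fstream>
#include <iostream>
#include <string>
#include <thread>

// Read a single line from a /sys file (e.g. operstate or statistics/rx_bytes).
static std::string read_sys_value(const std::string& path) {
    std::ifstream f(path);
    std::string value;
    std::getline(f, value);
    return value;
}

int main() {
    const std::string iface = "/sys/class/net/eth0/";
    while (true) {
        std::string state = read_sys_value(iface + "operstate");            // "up" or "down"
        std::string rx    = read_sys_value(iface + "statistics/rx_bytes");  // total bytes received
        if (state != "up") {
            std::cout << "link is down\n";   // trigger your event here
        }
        std::cout << "state=" << state << " rx_bytes=" << rx << '\n';
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }
}
```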

Yes, two approaches - if you have a router with some sort of logging, you could query its values with SNMP or (harder) scrape the web interface for the stats.
Or if you need the real bandwidth you would have to find a server and download a file - measuring the time taken.
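As a rough sketch of the download-and-measure approach, and assuming you are willing to use libcurl (just one convenient option, not a requirement), something like this fetches a file and reports the measured speed; the URL is a placeholder for a file on a server you control.

```cpp
// Sketch: download a file with libcurl and read back the measured transfer speed.
#include <curl/curl.h>
#include <iostream>

// Discard the downloaded data; we only care about the measured speed.
static size_t discard(char*, size_t size, size_t nmemb, void*) {
    return size * nmemb;
}

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL* curl = curl_easy_init();
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/testfile.bin");  // placeholder
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, discard);
    if (curl_easy_perform(curl) == CURLE_OK) {
        double bytes_per_sec = 0.0;
        curl_easy_getinfo(curl, CURLINFO_SPEED_DOWNLOAD, &bytes_per_sec);
        std::cout << "download speed: " << bytes_per_sec / 1024.0 << " KiB/s\n";
    }
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return 0;
}
```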

Related

Benchmarking an HTTP server

I'm currently building a custom C++-based HTTP server whose purpose is to serve HTTP GET requests as fast as possible.
I'd like to throw some tests at it in order to check that the server behaves correctly when a lot of clients (~1000) are requesting content at the same time, and I'm wondering if there is any tool that could help me in that regard.
I'd like to measure the time my server takes to respond to each request and the time it takes for each client to receive the complete reply. I could build up my own application that does the job, but I was wondering if something like that already existed.
As this is my first network-based application, I was also wondering what kind of limitations to expect, if any, when running that kind of test over a Gigabit network.
You need a load generator. The best configuration is to put the target system (the system you want to test) on one machine and the load generator on another machine, with both machines on the same LAN. If the target system and the load generator share one machine, the load generator could grab resources from the target system.
I would use JMeter or Tsung. JMeter is easy to install and use. The only problem is that it represents virtual clients as threads. Each virtual client means one thread, and that could use a lot of system resources if you want to simulate 1K virtual clients. Tsung simulates many virtual clients with the same thread and can therefore consume fewer resources.
JMeter is a Java-based tool that can be configured to do load testing.

controlling ethernet speeds in lan c++ windows

I am wondering if it is possible to limit/control Ethernet upload and download speeds for specific transport-layer protocols (TCP/UDP) using C++? I am trying to make a simple-to-use program that can control the speeds of any device that the Ethernet is connected to. For example: computer B is connected to computer A via Internet Connection Sharing; I use my program to limit computer B's download or upload speed to 120 kbps (or any number I choose), and I would also like to choose UDP or TCP.
Basically, I want to create my own program similar to NetLimiter and other such software, but I also want to add my own features, many of which those tools lack for my needs. Those other features are easy enough, but I have no idea how to go about the actual limiting process.
The way forward in the general case you ask about would be to create a virtual network adapter and route all the monitored traffic through it. Once that is done, you can monitor streams between hosts or on specific ports.
Not an easy job... A starting point would be the Windows device driver kit.
If you were prepared to limit just one app, and could modify it, the task would be much simpler... wget and curl for example both offer limiting.
HTH, Ruth
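To illustrate the "limit just one app" route: if the app in question can use libcurl, its built-in rate caps do the throttling for you. A minimal sketch, with a placeholder URL and the question's 120 figure taken as kilobytes per second (libcurl wants bytes per second):

```cpp
// Sketch: throttle a single libcurl transfer to roughly 120 KB/s in each direction.
#include <curl/curl.h>

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL* curl = curl_easy_init();
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/largefile.bin");  // placeholder
    // Cap download and upload rates, in bytes per second.
    curl_easy_setopt(curl, CURLOPT_MAX_RECV_SPEED_LARGE, (curl_off_t)(120 * 1024));
    curl_easy_setopt(curl, CURLOPT_MAX_SEND_SPEED_LARGE, (curl_off_t)(120 * 1024));
    curl_easy_perform(curl);   // response body goes to stdout by default
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return 0;
}
```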

Debugging network applications and testing for synchronicity?

If I have a server running on my machine, and several clients running on other networks, what are some concepts of testing for synchronicity between them? How would I know when a client goes out-of-sync?
I'm particularly interested in how network programmers in the field of game design do this (or just any continuous network exchange application), where realtime synchronicity would be a commonly vital aspect of success.
I can see how this may be easily achieved on a LAN via side-by-side comparisons on separate machines... but once you branch out the scenario to include clients from foreign networks, I'm just not sure how it can be done without clogging up your messaging system with debug information, which would effectively change how synchronicity plays out compared to when that debug info isn't being passed over the network.
So what are some ways that people get around this issue?
For example, do they simply induce/simulate latency on the local network before launching to foreign networks, and then hope for the best? I'm hoping there are some more concrete solutions, but this is what I'm doing in the meantime...
When you say synchronized, I believe you are talking about network latency - meaning that a client on a local network may get its gaming information sooner than a client on the other side of the country. Correct?
If so, then I'm sure you can look for books or papers that cover this kind of topic, but I can give you at least one way to detect this latency and provide a way to manage it.
To detect latency, your server can use a type of trace route program to determine how long it takes for data to reach each client. A common Linux program example can be found here http://linux.about.com/library/cmd/blcmdl8_traceroute.htm. While the server is handling client data, it can also continuously collect the latency statistics and provide the data to the clients. For example, the server can update each client on its own network latency and what the longest latency is for the group of clients that are playing each other in a game.
The clients can then use the latency differences to determine when they should process the data they receive from the server. For example, a client is told by the server that its network latency is 50 milliseconds and the maximum latency for its group is 300 milliseconds. The client then knows to wait 250 milliseconds before processing game data from the server. That way, each client processes game data from the server at approximately the same time.
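A minimal sketch of that wait-time calculation, with the two latencies hard-coded for illustration (in a real client they would come from the server):

```cpp
// Sketch of delaying game data so all clients apply it at roughly the same time.
#include <chrono>
#include <iostream>
#include <thread>

int main() {
    auto my_latency  = std::chrono::milliseconds(50);   // reported by the server for this client
    auto max_latency = std::chrono::milliseconds(300);  // slowest client in the group

    auto wait = max_latency - my_latency;                // 250 ms in this example
    std::cout << "delaying game data by " << wait.count() << " ms\n";

    // On receiving a game-state update, hold it for `wait` before applying it.
    std::this_thread::sleep_for(wait);
    // ... apply the update here ...
}
```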
There are many other (and probably better) ways to handle this situation, but that should get you started in the right direction.

Linux C++: Accessing network statistics

I am developing a network statistics program in C++ for Linux.
I would like to access some statistical information about the current network connection.
E.g.:
packet loss,
bytes transferred (upload and download),
current network load (upload and download).
Any idea how to access this kind of information?
So, I have been trying to accomplish my objective using /proc. We can find a lot of information there, but some of the information I need is missing. I am thinking of developing a simple C++ promiscuous-mode application, using libpcap, that captures the network traffic I need and computes the metrics I want.
The con is that I think this is going to be CPU intensive, at least more than needed...
Any thoughts about this?
All this information is spread across the /proc/net files (updated by the kernel). The most important file is /proc/net/netstat. /proc/net/dev contains per-device statistics. You could open and parse these files.
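A minimal C++ sketch of that parsing, assuming the standard /proc/net/dev layout (two header lines, then one line per interface with eight RX counters followed by eight TX counters):

```cpp
// Sketch: extract per-interface RX/TX byte counts from /proc/net/dev.
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>

int main() {
    std::ifstream dev("/proc/net/dev");
    std::string line;
    std::getline(dev, line);   // skip the two header lines
    std::getline(dev, line);
    while (std::getline(dev, line)) {
        auto colon = line.find(':');
        if (colon == std::string::npos) continue;
        std::string iface = line.substr(0, colon);
        iface.erase(0, iface.find_first_not_of(' '));     // trim leading padding
        std::istringstream in(line.substr(colon + 1));    // counters only
        unsigned long long rx_bytes = 0, tx_bytes = 0, skip = 0;
        in >> rx_bytes;                                   // RX bytes is the first field
        for (int i = 0; i < 7; ++i) in >> skip;           // skip the remaining RX fields
        in >> tx_bytes;                                   // TX bytes is the next field
        std::cout << iface << " rx=" << rx_bytes << " tx=" << tx_bytes << '\n';
    }
}
```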
A lot of information is available from the "files" in /proc/net.
/proc/net/netstat would be a good place to start.
AFAIK, it is possible to retrieve statistics programmatically through the rtnetlink interface. See e.g. this mail for examples.
You can access network statistics through /sys/class/net/NAME_OF_DEVICE/statistics.

How to send lots of POST requests QUICKLY

I'm planning to develop a program for our university research that has to send lots of POST requests to different URLs. It must work as quickly as possible (we need to process about 100kk URLs). What language should I use (currently I write in C++, Delphi and a bit of Perl)?
Also, I've heard that it's possible to write a multithreaded app in Perl using prefork that can process about 20-30k per minute. Is that true?
// Sorry for my bad English, but it seems to be the only place where I can get the right answer
Andrew
The 20-30k per minute is completely arbitrary. If you run this on an 8-core machine with a beefy network connection you could probably surpass that.
However, I don't think your choice of programming language / library is going to matter much here. Instead, you're going to run into the number of concurrent TCP connections allowed by the machine, and also the bandwidth of the link itself.
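If you do write it in C++, one option (an assumption on my part, not a requirement) is libcurl's multi interface, which drives many transfers concurrently from a single thread. A rough sketch with a placeholder URL and POST body, and error handling omitted:

```cpp
// Sketch: issue a batch of POST requests concurrently with libcurl's multi interface.
#include <curl/curl.h>
#include <vector>

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURLM* multi = curl_multi_init();
    std::vector<CURL*> handles;

    // Queue a batch of POSTs; in practice you would refill this from your URL list
    // as transfers complete, instead of adding everything up front.
    for (int i = 0; i < 100; ++i) {
        CURL* h = curl_easy_init();
        curl_easy_setopt(h, CURLOPT_URL, "http://example.com/endpoint");  // placeholder
        curl_easy_setopt(h, CURLOPT_POSTFIELDS, "key=value");             // placeholder body
        curl_multi_add_handle(multi, h);
        handles.push_back(h);
    }

    int still_running = 0;
    do {
        curl_multi_perform(multi, &still_running);
        curl_multi_wait(multi, nullptr, 0, 1000, nullptr);  // block until there is activity
    } while (still_running > 0);

    for (CURL* h : handles) {
        curl_multi_remove_handle(multi, h);
        curl_easy_cleanup(h);
    }
    curl_multi_cleanup(multi);
    curl_global_cleanup();
    return 0;
}
```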
Webserver Stress Tool claims to be capable of simulating the HTTP requests generated by up to 10,000 simultaneous users, and it has an entry on Torry's site. Presumably it's written in Delphi or C++ Builder.
My suggestion:
You can write your custom stress tool (an HTTP(S) client) in Delphi (it happens to be my favorite language, so I advocate it) using a lightweight HTTP(S) library such as the RTC SDK, plus OmniThreadLibrary for multithreading.
See this page for a clue/hint.
Edit:
Excerpt from Demos\Readme_Demos.txt in RealThinClient_SDK331.zip
App Client, Server and ISAPI demos can be used to stress-test RTC
component using Remote Functions with strong encryption by opening
hundreds of connections from each client and flooding the
Server/ISAPI with requests.
App Client Demo is ideal for stress-testing RTC remote functions using
multiple connections in multi-threaded mode, visualy showing activity
and stage for each connection in a live graph. Client can choose
between "Proxy" and standard connection components, to see the
difference in bandwidth usage and distribution.
I have heard Erlang is pretty good for such applications, as it is very efficient at spawning many processes quickly. But I think using Python would be fine too; just use the popen module to spawn multiple processes.
After all, you are limited by how many you can run at the same time, which depends on how many processors your machine has. The choice of language may not matter as much, depending on what you do with the data downloaded from these URLs, as that may be more processing-intensive than the cost of spawning.