How to control socket rate? - C++

I want to know how I can control the rate of my network interface. Specifically, I want to receive data at a rate of 32 Kbit/s and send the received data back to the network at a rate of 1 Mbit/s. Do you have any ideas on how to control the interface's rate, or do you know any tricks that could help?
Thanks in advance.

There is a difference between data throughput rate and the baud rate of the connection. Generally, you want the baud rate to be as fast as possible (without errors of course). Some low level drivers or the OS may allow you to control this, but it is fundamentally a low-level hardware/driver issue.
For data throughput rate, throttling sending is easy: just don't call send() as often. This requires that you track how much you are sending per time period and limit it with sleeps.
Receiving can work the same way, but you have to consider that if someone is sending faster than the rate at which you are receiving, the receive buffer will eventually fill up; with TCP the sender will be forced to slow down, while with UDP the excess packets are simply dropped.
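A minimal sketch of that idea for the sending side, assuming a connected POSIX socket descriptor; send_throttled and the chunk size are illustrative names, and real code would also handle EINTR/EAGAIN and partial sends properly:

// Sketch: throttle outgoing data to a target bit rate by pacing calls to send().
#include <sys/socket.h>
#include <sys/types.h>
#include <algorithm>
#include <chrono>
#include <cstddef>
#include <thread>

void send_throttled(int fd, const char* data, size_t len, size_t bits_per_sec)
{
    using clock = std::chrono::steady_clock;
    const size_t chunk = 1024;                 // bytes per send() call
    const auto start = clock::now();
    size_t sent = 0;

    while (sent < len) {
        ssize_t r = ::send(fd, data + sent, std::min(chunk, len - sent), 0);
        if (r <= 0)
            break;                             // real code: handle the error
        sent += static_cast<size_t>(r);

        // Time by which this many bits should have been on the wire.
        auto due = start + std::chrono::duration<double>(sent * 8.0 / bits_per_sec);
        std::this_thread::sleep_until(due);    // no-op if we are already behind
    }
}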

You can do this; you just have to keep track of time and take care not to recv() more or less than 32 Kbit per second (you can control this through the function arguments), and apply the same practice to send().

I've done this "the hard way" (dunno if there is an easier way). Specifically, I did it by controlling the rate at which I called send() and/or recv(), and how much data I indicated I was willing to send/receive in each of those calls. It takes a bit of math to do it right, but it's not impossible.
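For the receive side of the original question (32 Kbit/s), the "bit of math" might look roughly like the sketch below; it assumes a connected POSIX socket, and recv_throttled is an illustrative name:

// Sketch of the "hard way" on the receive side: cap incoming data at roughly
// bits_per_sec by limiting how many bytes each recv() may return per time slice.
#include <sys/socket.h>
#include <sys/types.h>
#include <algorithm>
#include <chrono>
#include <cstddef>
#include <thread>

ssize_t recv_throttled(int fd, char* buf, size_t cap, size_t bits_per_sec)
{
    const auto slice = std::chrono::milliseconds(100);       // accounting period
    const size_t budget = bits_per_sec / 8 / 10;              // bytes allowed per 100 ms

    // Never ask for more than this slice's budget, e.g. 400 bytes at 32 Kbit/s.
    ssize_t n = ::recv(fd, buf, std::min(cap, budget), 0);
    if (n > 0)
        std::this_thread::sleep_for(slice * n / budget);      // use up the matching fraction of the slice
    return n;
}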

Related

UDP transfer is too fast, Apache Mina doesn't handle it

We decided to use UDP to send a lot of data like coordinates between:
client [C++] (using poll)
server [JAVA] [Apache MINA]
My datagrams are only 512 bytes max, to avoid fragmentation during the transfer as much as possible.
Each datagram has a header I added (with an ID inside, see the sketch after this list), so that I can monitor:
how many datagrams are received
which ones are received
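For illustration, such a header might be as simple as the following sketch; the field names and widths are assumptions, not the actual format, and the values should go through htonl() before sending:

// Illustrative per-datagram header with a sequence ID, prepended to each payload.
#include <cstdint>

#pragma pack(push, 1)
struct DatagramHeader {
    uint32_t sequence_id;   // which datagram this is
    uint32_t total_count;   // how many datagrams make up the whole transfer
};
#pragma pack(pop)

static_assert(sizeof(DatagramHeader) == 8, "header must not be padded");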
The problem is that we are sending the datagrams too fast. We receive the first ones, then there is a big loss, then we get some more, then another big loss. The sequence of datagram IDs received is something like [1], [2], [250], [251]...
The problem happens locally too (using localhost, with only one network card).
I do not mind losing some datagrams, but here it is not a matter of simple loss due to the network (which I can deal with).
So my questions here are:
On the client, how can I get the best:
settings, or socket settings?
way to send as much as I can without it being too much?
On the server, Apache MINA seems to say that it manages the size of the socket buffer itself, but are there still some settings to care about?
Is it possible to reach something like 1 MB/s, knowing that our connection already allows at least this bandwidth when downloading regular files?
Right now, when we want to transfer ~4 KB of coordinate info, we have to add so much sleep time that we wait 5 minutes or more for it to finish. That is a big issue for us, since we need to send at least 10 MB of coordinate information every minute.
If you want reliable transport, you should use TCP. This will let you send almost as fast as the slower of the network and the client, with no losses.
If you want a highly optimized low-latency transport, which does not need to be reliable, you need UDP. This will let you send exactly as fast as the network can handle, but it will also let you send faster than the network or the client can cope with, and then you'll lose packets.
If you want reliable highly optimized low-latency transport with fine-grained control, you're going to end up implementing a custom subset of TCP on top of UDP. It doesn't sound like you could or should do this.
... how can I get the best settings, or socket settings
Typically by experimentation.
If the reason you're losing packets is because the client is slow, you need to make the client faster. Larger receive buffers only buy a fixed amount of headroom (say to soak up bursts), but if you're systematically slower any sanely-sized buffer will fill up eventually.
Note however that this only cures excessive or avoidable drops. The various network stack layers (even without leaving a single box) are allowed to drop packets even if your client can keep up, so you still can't treat it as reliable without custom retransmit logic (and we're back to implementing TCP).
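If you do want that extra headroom, enlarging the receive buffer on a POSIX socket is a one-liner, sketched below (the Java/MINA side has its own equivalent settings, and the kernel may clamp the value you ask for, on Linux to net.core.rmem_max):

// Sketch: enlarge the kernel receive buffer so short bursts are absorbed instead
// of dropped. It does not help a receiver that is systematically too slow.
#include <sys/socket.h>

bool set_receive_buffer(int fd, int bytes)
{
    return setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &bytes, sizeof(bytes)) == 0;
}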
... way to send as much as I can without it being too much?
You need some kind of ack/nack/back-pressure/throttling/congestion/whatever message from the receiver back to the source. This is exactly the kind of thing TCP gives you for free, and which is relatively tricky to implement well yourself.
Is it possible to reach something like 1MB/s ...
I just saw 8MB/s using scp over loopback, so I would say yes. That uses TCP and apparently chose AES128 to encrypt and decrypt the file on the fly - it should be trivial to get equivalent performance if you're just sending plaintext.
UDP is only a viable choice when any number of datagrams can be lost without sacrificing QoS. I am not familiar with Apache MINA, but the scenario described resembles a server that handles every datagram sequentially. In that case, datagrams that arrive while one is being serviced will be lost as soon as the socket's receive buffer overflows - beyond that buffer there is no queuing of UDP datagrams. Like I said, I do not know whether MINA can be tuned for parallel datagram processing, but if it can't, it is simply the wrong choice of tool.

Efficiently send a stream of UDP packets

I know how to open a UDP socket in C++, and I also know how to send packets through it. When I send a packet I correctly receive it on the other end, and everything works fine.
EDIT: I also built a fully working acknowledgement system: packets are numbered, checksummed and acknowledged, so at any time I know how many of the packets I sent during, say, the last second were actually received by the other endpoint. The data I am sending will be readable only when ALL the packets have been received, so I really don't care about packet ordering: I just need them all to arrive. They could arrive in any order and it would still be fine, since having them sequentially ordered would be useless anyway.
Now, I have to transfer a big big chunk of data (say 1 GB) and I'd need it to be transferred as fast as possible. So I split the data in say 512 bytes chunks and send them through the UDP socket.
Now, since UDP is connectionless it obviously doesn't provide any speed or transfer efficiency diagnostics. So if I just try to send a ton of packets through my socket, my socket will just accept them, then they will be sent all at once, and my router will send the first couple and then start dropping them. So this is NOT the most efficient way to get this done.
What I did then was making a cycle:
Sleep for a while
Send a bunch of packets
Sleep again and so on
I tried to do some calibration and achieved pretty good transfer rates. I have a thread that continuously sends packets in small bunches, but I have nothing more than an experimental idea of what the interval and the bunch size should be. In principle, sleeping for a very small amount of time and then sending just one packet at a time would be the kindest approach for the router, but it is completely unfeasible in terms of CPU performance (I would probably need to busy-wait, since the time between two consecutive packets would be really small).
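For reference, such a sleep-and-burst cycle paced against an absolute schedule (rather than a fixed interval) might look something like the sketch below; send_packet and the burst size are placeholders for whatever sending code is already in place:

// Sketch: pace UDP sends against a schedule derived from a target byte rate,
// bursting a few packets and then sleeping until the schedule catches up.
#include <chrono>
#include <cstddef>
#include <thread>

void send_packet(std::size_t index);                       // assumed: your existing UDP send

void paced_send(std::size_t packet_count, std::size_t packet_size, double bytes_per_sec)
{
    using clock = std::chrono::steady_clock;
    const std::size_t burst = 16;                           // packets per burst
    const auto start = clock::now();
    std::size_t sent_bytes = 0;

    for (std::size_t i = 0; i < packet_count; ++i) {
        send_packet(i);
        sent_bytes += packet_size;

        if ((i + 1) % burst == 0) {
            // When should this many bytes have been finished at the target rate?
            auto due = start + std::chrono::duration<double>(sent_bytes / bytes_per_sec);
            std::this_thread::sleep_until(due);             // skipped if we are behind schedule
        }
    }
}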
So is there any other solution? Any widely accepted solution? I assume that my router has a buffer or something like that, so that it can accept SOME packets all at once, and then it needs some time to process them. How big is that buffer?
I am not an expert in this so any explanation would be great.
Please note, however, that for technical reasons there is no way at all I can use TCP.
As mentioned in some other comments, what you're describing is a flow control system. The Wikipedia article has a good overview of various ways of doing this:
http://en.wikipedia.org/wiki/Flow_control_%28data%29
The solution that you have in place (sleeping for a hard-coded period between packet groups) will work in principle, but in order to get reasonable performance in a real-world system you need to be able to react to changes in the network. This means implementing some kind of feedback where you automatically adjust both the outgoing data rate and packet size in response to network characteristics, such as throughput and packet loss.
One simple way of doing this is to use the number of re-transmitted packets as an input into your flow control system. The basic idea would be that when you have a lot of re-transmitted packets, you would reduce the packet size, reduce the data rate, or both. If you have very few re-transmitted packets, you would increase packet size & data rate until you see an increase in re-transmitted packets.
That's something of a gross oversimplification, but I think you get the idea.
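In code, that oversimplified feedback loop might look like the following sketch; the thresholds and adjustment factors are placeholders you would tune experimentally:

// Sketch: periodically look at how many packets had to be retransmitted and
// nudge the send rate down (multiplicatively) or up (additively).
#include <cstddef>

struct RateController {
    double bytes_per_sec = 125000.0;   // starting rate, ~1 Mbit/s

    void update(std::size_t sent, std::size_t retransmitted)
    {
        const double loss = sent ? double(retransmitted) / sent : 0.0;
        if (loss > 0.02)
            bytes_per_sec *= 0.8;      // too many retransmits: back off
        else if (loss < 0.005)
            bytes_per_sec += 12500.0;  // almost no loss: probe upward
    }
};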

Will disabling Nagle's algorithm improve performance?

I currently have an application that receives real-time messages at a very high rate, and it needs to display those messages instantly. I read about Nagle's algorithm, and I understand that it combines small messages into one bigger message before transmitting (it is designed to reduce the number of small packets on the wire by holding back small writes until previously sent data has been acknowledged). My question is: will disabling Nagle's algorithm help my application? All my messages need to be displayed in real time as soon as they are received. Any suggestions on this issue would be appreciated.
Update:
Also, I only have control of the receiver. Will disabling Nagle's algorithm on the receiver have any effect, or does it only have an effect when it is disabled on the sender?
Nagle is a sender side algorithm only, so if you can only affect the receiver, you cannot disable it.
Even if you could affect the sender, disabling Nagle is not very effective for one-directional communication. In bidirectional communication, disabling Nagle can improve throughput because the benefits of removing delays accumulate: each node can send its responses slightly sooner, letting the other side respond even sooner than that. In the one-directional case, disabling Nagle can decrease latency by one round trip, but those benefits cannot accumulate, because not delaying packets does not speed up the generation of new packets. You never get ahead by more than one round trip. Over the internet, that's ~20-30 ms; over a LAN, it's usually ~1 ms.
If your system is sufficiently hard real time that a single round-trip latency matters, then TCP is a poor protocol, and you should be using UDP instead. Nagle is a TCP only algorithm, so it would not affect UDP.
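For completeness, here is what disabling Nagle looks like on the sending side of a POSIX TCP socket; it is a sketch only, and as explained above it achieves nothing on the receiver:

// Sketch: disable Nagle on the *sending* side of a TCP socket via TCP_NODELAY.
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

bool disable_nagle(int fd)
{
    int flag = 1;
    return setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &flag, sizeof(flag)) == 0;
}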
Just for fun: pinging a local computer on my LAN takes <1 ms. This means Nagle can only delay my messages by something under 1 ms. Quanta for desktop computer schedulers can be 20-60 ms [1], and even longer for servers, so I would expect removing the Nagle algorithm to have no visible effect on my LAN, dwarfed by the effect of other threads on my computer consuming the CPUs.
[1] http://recoverymonkey.org/2007/08/17/processor-scheduling-and-quanta-in-windows-and-a-bit-about-unixlinux/

wxWidgets Socket Transfer Rate

What is the best way to figure out the transfer rate of a wxWidgets socket? Is there a built-in way to do this, or would I be better off getting the time before a data transfer and after it is done and comparing them?
I ask because I want to be able to limit the transfer rate of my sockets to a user entered value.
Thanks for any help
No, there does not appear to be any mechanism built into wxWidgets to measure socket transfer rate. The best you can do is to measure the rate at which you call wxSocketBase::Write.
Note that this won't be perfectly accurate, however. Just because you write to a socket doesn't mean that the data is already sent, much less already received.
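A rough sketch of that approach, assuming you want to throttle as well as measure: time your wxSocketBase::Write calls with a wxStopWatch and sleep whenever you are ahead of the user's limit. The helper name is illustrative and error handling is reduced to a bail-out:

// Sketch: cap a wxSocketBase's outgoing rate by timing Write() calls and sleeping.
#include <wx/socket.h>
#include <wx/stopwatch.h>
#include <wx/utils.h>

void WriteThrottled(wxSocketBase& sock, const char* data, size_t len, size_t maxBytesPerSec)
{
    wxStopWatch timer;                    // starts counting immediately
    size_t written = 0;

    while (written < len) {
        sock.Write(data + written, len - written);
        if (sock.Error())
            break;                        // real code: report the error
        written += sock.LastCount();      // bytes the socket actually accepted

        // Milliseconds by which we are ahead of the allowed schedule.
        long aheadMs = long(written * 1000.0 / maxBytesPerSec) - timer.Time();
        if (aheadMs > 0)
            wxMilliSleep(aheadMs);
    }
}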

How do I send UDP packets at a specified rate in C++ on Windows?

I'm writing a program that implements the RFC 2544 network test. As the part of the test, I must send UDP packets at a specified rate.
For example, I should send 64-byte packets at 1 Gb/s. That means I should send a UDP packet every 0.5 microseconds. In pseudocode, "sending UDP packets at a specified rate" looks like:
while (true) {
    some_sleep(0.5);    // 0.5 microseconds
    Send_UDP();
}
But I'm afraid there is no some_sleep() function on Windows (or on Linux, for that matter) that can give me 0.5 microsecond resolution.
Is it possible to do this task in C++, and if yes, what is the right way to do it?
Two approaches:
Implement your own sleep by busy-looping on a high-resolution timer such as Windows' QueryPerformanceCounter.
Allow slight variations in rate: insert Sleep(1) whenever you're far enough ahead of the calculated schedule, and use timeBeginPeriod(1) to get 1 ms sleep resolution.
With both approaches, you can't rely on the sleeps being exact. You will need to keep running totals and adjust the sleep period as you get ahead of or behind schedule.
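A Windows-specific sketch combining the two approaches (Send_UDP() is the question's own placeholder, and the ~2 ms threshold is an assumption to tune): keep an absolute schedule in QueryPerformanceCounter ticks, Sleep(1) while well ahead of it, and spin for the final stretch:

// Sketch: send `packets` packets at `packets_per_sec` against an absolute schedule.
#include <windows.h>
#pragma comment(lib, "winmm.lib")                 // for timeBeginPeriod/timeEndPeriod

void Send_UDP();                                  // placeholder from the question

void send_at_rate(long long packets, double packets_per_sec)
{
    LARGE_INTEGER freq, start, now;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&start);
    timeBeginPeriod(1);

    for (long long i = 0; i < packets; ++i) {
        // The tick at which packet i is due, measured from the start of the run.
        const long long due = start.QuadPart + (long long)(i * freq.QuadPart / packets_per_sec);
        for (;;) {
            QueryPerformanceCounter(&now);
            const long long remaining = due - now.QuadPart;
            if (remaining <= 0)
                break;                            // on (or behind) schedule: send now
            if (remaining * 1000 > 2 * freq.QuadPart)
                Sleep(1);                         // more than ~2 ms early: coarse sleep
            // otherwise busy-wait until the deadline
        }
        Send_UDP();
    }
    timeEndPeriod(1);
}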
This might be helpful, but I doubt it's directly portable to anything but Windows. Implement a Continuously Updating, High-Resolution Time Provider for Windows by Johan Nilsson.
However, do keep in mind that for packets that small, the IP and UDP overhead is going to account for a large fraction of the actual on-the-wire data. This may be what you intended, or not. A very quick scan of RFC 2544 suggests that much larger packets are allowed; you may be better off going that route instead. Consistently delaying for as little as 0.5 microseconds between each Send_UDP() call is going to be difficult at best.
To transmit 64-byte Ethernet frames at gigabit line rate, you actually want to send one every 672 ns: a minimum-size frame is 64 bytes plus 8 bytes of preamble and a 12-byte inter-frame gap, i.e. 84 bytes or 672 bits, which takes 672 ns at 1 Gbit/s. I think the only way to do that is to get really friendly with the hardware. You'll be running up against bandwidth limitations with the PCI bus, etc. The system calls to send one packet will take significantly longer than 672 ns. A sleep function is the least of your worries.
I guess you should be able to do it with Boost.Asio's timer functionality. I haven't tried it myself, but I believe that deadline_timer would accept a boost::posix_time::nanosec as well as a boost::posix_time::second.
Check out an example here
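If that route is viable, usage would look roughly like the sketch below; note that although deadline_timer accepts sub-millisecond durations, the accuracy you actually get is still limited by the OS timer resolution, so 0.5 microsecond pacing should not be expected from this alone:

// Sketch: a Boost.Asio deadline_timer with a sub-millisecond expiry.
#include <boost/asio.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>

int main()
{
    boost::asio::io_service io;
    boost::asio::deadline_timer timer(io);

    for (int i = 0; i < 1000; ++i) {
        timer.expires_from_now(boost::posix_time::microseconds(500));
        timer.wait();                 // blocks until the deadline expires
        // Send_UDP();                // the question's placeholder send would go here
    }
}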
Here's a native Windows implementation of nanosleep. If GPL is acceptable you can reuse the code; otherwise you'll have to reimplement it.