I have to make connections from one server to many PCs (~1000). The PCs are connected over a Wi-Fi network in the same building.
Each PC has a dedicated connection with the server. From its IP address, the server knows the specific data to generate for it.
I have to send a dedicated short string over the network to each PC (~30 characters per string).
Each dedicated string is sent at a rate of 30 strings per second to each PC.
The problem is that these data are critical and must be delivered in real time.
Which solution is the fastest and most robust in my case?
I assume you have two PCs connected by Ethernet, Wi-Fi, or a good enough modern Internet connection (both on Earth; no interplanetary links; no pigeon IP per RFC 1149 or 1200-baud analog modem from the 1970s).
Then 30 strings of about 30 characters per second is about a kilobyte per second, which is not a big deal, and certainly not high frequency as you claim.
My current fiber Internet connection at home (near Paris, France) can download a dozen megabytes per second and upload at least a megabyte per second. A few years ago it was ADSL with about one megabyte per second of download. I have never had a home Internet connection for which a kilobyte per second was a high load. (If you are in interplanetary space, or in one of the most remote and desolate places of Africa or Antarctica, then 1 Kbyte/sec might be an issue in 2016, but then you are very unlucky regarding Internet connectivity.)
Your HTTP setup might use WebSockets (so a bit like your second solution). You could use libonion (an HTTP server library) on the server side and libcurl (an HTTP client library) on the client side. Periodic polling (e.g. issuing an HTTP request twenty times per second) would require more resources, but would still be manageable. An HTTP connection would be slower, because HTTP adds some overhead (the headers in HTTP requests and responses).
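For illustration, here is a minimal sketch of the polling variant with libcurl. The URL, port, and iteration count are made up; by default libcurl writes each response body to stdout, and a real client would install a write callback instead:

    #include <curl/curl.h>
    #include <unistd.h>

    int main(void)
    {
        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL *curl = curl_easy_init();
        if (!curl)
            return 1;
        /* hypothetical endpoint on the server */
        curl_easy_setopt(curl, CURLOPT_URL, "http://server.example:8080/data");
        for (int i = 0; i < 100; i++) {
            curl_easy_perform(curl);   /* one full HTTP request/response */
            usleep(50 * 1000);         /* ~20 polls per second */
        }
        curl_easy_cleanup(curl);
        curl_global_cleanup();
        return 0;
    }

Each poll pays for a complete request/response round trip, which is where the HTTP overhead mentioned above comes from.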
Notice that the HTTP protocol sits above TCP/IP, so it will definitely use BSD sockets on operating systems that provide them (Linux, Windows, MacOSX, ...). So a "web solution" is already using sockets.
If you use sockets directly, you'll need to define a protocol on top of them (or use an existing one, like HTTP or JSONRPC).
I'd go for a socket approach, probably something JSON-related like JSONRPC. Be aware, if you code against the socket API, that TCP/IP is a stream protocol without message boundaries. You'll need to buffer on both sides and define some message-boundary convention. You might send JSON terminated by a newline (the ending newline is JSON-compatible and makes delimiting messages easy).
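As a concrete illustration of such a convention, here is a sketch of newline-delimited framing on the receiving side, assuming a hypothetical handle_message() callback:

    #include <string.h>
    #include <unistd.h>

    void handle_message(const char *json);    /* hypothetical application callback */

    /* TCP delivers a byte stream, not messages: buffer what read() returns
       and cut a message at each '\n'. */
    void read_messages(int fd)
    {
        static char buf[4096];
        static size_t used = 0;

        ssize_t n = read(fd, buf + used, sizeof(buf) - used);
        if (n <= 0)
            return;                            /* error or peer closed */
        used += (size_t)n;

        char *start = buf;
        char *nl;
        while ((nl = memchr(start, '\n', used - (size_t)(start - buf))) != NULL) {
            *nl = '\0';
            handle_message(start);             /* one complete JSON text */
            start = nl + 1;
        }
        used -= (size_t)(start - buf);
        memmove(buf, start, used);             /* keep any partial message */
    }

A real implementation would also reject a message that grows larger than the buffer.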
You might also be interested in messaging libraries such as 0mq.
Addendum (after the question was edited)
Your new question is widely different (thousands of PCs, not only two of them; I guess they are in the same building, or at least on the same continent). You need about 1000 * 30 * 30 bytes, i.e. less than a megabyte per second of bandwidth.
I would still suggest using sockets. Probably 0mq is even more relevant. You might make each message some JSON. You need to document the protocol you use very well. You probably want the server to have several threads (e.g. a dozen, not many thousands) that loop emitting messages (on TCP). Perhaps you might want a server with several Ethernet connections, but a megabyte per second fits on a single Ethernet link, even a slow 100 Mbit/s one.
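To give an idea of what the 0mq route could look like, here is a sketch of a PUB server in which each string carries the target PC's identifier as a topic prefix, so each PC subscribes only to its own prefix and receives just its dedicated strings. The endpoint, identifiers, and payload are all placeholders:

    #include <zmq.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        void *ctx = zmq_ctx_new();
        void *pub = zmq_socket(ctx, ZMQ_PUB);
        zmq_bind(pub, "tcp://*:5556");         /* placeholder endpoint */

        char msg[64];
        for (;;) {
            for (int pc = 0; pc < 1000; pc++) {
                /* topic prefix "pc<N> ", then the ~30-character payload;
                   each client subscribes only to its own prefix */
                int n = snprintf(msg, sizeof(msg), "pc%d {\"v\":42}", pc);
                zmq_send(pub, msg, (size_t)n, 0);
            }
            /* sleep here so each PC sees ~30 strings per second */
        }
        zmq_close(pub);
        zmq_ctx_destroy(ctx);
        return 0;
    }

On the client side, each PC would set ZMQ_SUBSCRIBE to its own "pc<N> " prefix, and 0mq handles reconnection and fan-out for you.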
30 bytes, 30 times per second, is 900 bytes per second. That's not fast at all; either method will work fine. And note that an HTTP connection uses a socket anyway.
It sounds like your "socket" option implies keeping a socket connection open all the time, as opposed to HTTP, where (typically) a separate connection is opened for each request. I think what you're really asking is:
Make the client periodically ask the server if there's new data, or
Have the server immediately send the new data as soon as it's available.
That depends entirely on what your program's requirements are, which we don't know.
A thousand bidirectional TCP communications require 1000 sockets (unless you want to open and close a connection for every string sent, but that would be a major performance drain).
That is dangerously close to the customary soft limit on open file descriptors (1024), and it is 25% of the customary hard limit of 4096. Given that, I find TCP not well suited here.
Instead, I suggest going with UDP. With UDP you'd need only a handful of sockets (even one would do, but with several you could scale better). It has a reliability problem, but you can implement some form of reliability on top of UDP.
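A sketch of why one socket suffices: recvfrom() tells you who sent each datagram, and sendto() can target any peer, so no per-client descriptor is needed. The port number is an arbitrary example and error handling is omitted:

    #include <arpa/inet.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    int serve(void)
    {
        int s = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in local = {0};
        local.sin_family = AF_INET;
        local.sin_addr.s_addr = htonl(INADDR_ANY);
        local.sin_port = htons(9000);          /* arbitrary example port */
        bind(s, (struct sockaddr *)&local, sizeof(local));

        char buf[512];
        struct sockaddr_in peer;
        socklen_t peerlen = sizeof(peer);
        for (;;) {
            /* recvfrom() reports who sent the datagram ... */
            ssize_t n = recvfrom(s, buf, sizeof(buf), 0,
                                 (struct sockaddr *)&peer, &peerlen);
            if (n < 0)
                break;
            /* ... and sendto() can answer that same peer, so one socket
               serves any number of clients */
            sendto(s, buf, (size_t)n, 0, (struct sockaddr *)&peer, peerlen);
            peerlen = sizeof(peer);
        }
        return 0;
    }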
Please make yourself familiar with the OSI model.
TCP and UDP are layer-4 (transport) protocols; HTTP is an application protocol that runs on top of TCP, so an HTTP solution is already using a layer-4 protocol underneath.
Related
We decided to use UDP to send a lot of data, such as coordinates, between:
client [C++] (using poll)
server [Java] (Apache MINA)
My datagrams are at most 512 bytes, to minimize fragmentation during transfer.
Each datagram has a header I added (with an ID inside), so that I can monitor:
how many datagrams are received
which ones are received
The problem is that we are sending the datagrams too fast. We receive the first ones, then there is a big loss, then we get some more, then a big loss again. The sequence of received datagram IDs looks like [1], [2], [250], [251], ...
The problem happens locally too (using localhost, with only one network card).
I do not mind losing datagrams, but here it is not simple loss due to the network (which I can deal with).
So my questions here are:
On the client, how can I get the best:
settings, or socket settings?
way to send as much as I can without sending too much?
On the server, Apache MINA seems to say that it manages the socket buffer size itself, but are there still settings to care about?
Is it possible to reach something like 1 MB/s, knowing that our connection already gives us at least that bandwidth when downloading regular files?
Currently, to transfer ~4 KB of coordinate data we have to add sleep time, so we end up waiting 5 minutes or more for it to finish. That is a big issue for us, since we need to send at least 10 MB of coordinate information every minute.
If you want reliable transport, you should use TCP. This will let you send almost as fast as the slower of the network and the client, with no losses.
If you want a highly optimized low-latency transport that does not need to be reliable, you want UDP. It will let you send exactly as fast as the network can handle; but it also lets you send faster than that, or faster than the client can read, and then you'll lose packets.
If you want reliable highly optimized low-latency transport with fine-grained control, you're going to end up implementing a custom subset of TCP on top of UDP. It doesn't sound like you could or should do this.
... how can I get the best settings, or socket settings
Typically by experimentation.
If the reason you're losing packets is that the client is slow, you need to make the client faster. Larger receive buffers only buy a fixed amount of headroom (say, to soak up bursts); if you're systematically slower, any sanely sized buffer will fill up eventually.
Note however that this only cures excessive or avoidable drops. The various network stack layers (even without leaving a single box) are allowed to drop packets even if your client can keep up, so you still can't treat it as reliable without custom retransmit logic (and we're back to implementing TCP).
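For experimentation, here is a sketch of enlarging the socket receive buffer. The 1 MiB figure is an arbitrary starting point; the kernel may clamp or adjust whatever you request:

    #include <stdio.h>
    #include <sys/socket.h>

    void enlarge_rcvbuf(int sockfd)
    {
        int rcvbuf = 1 << 20;   /* 1 MiB: an arbitrary starting point */
        if (setsockopt(sockfd, SOL_SOCKET, SO_RCVBUF,
                       &rcvbuf, sizeof(rcvbuf)) < 0)
            perror("setsockopt(SO_RCVBUF)");
    }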
... way to send as much as I can without sending too much?
You need some kind of ack/nack/back-pressure/throttling/congestion/whatever message from the receiver back to the source. This is exactly the kind of thing TCP gives you for free, and which is relatively tricky to implement well yourself.
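The crudest possible form of such feedback is stop-and-wait: send one datagram, block briefly for an acknowledgement, and retransmit on timeout. A POSIX-style sketch (the timeout, retry count, and one-byte ack format are illustrative; real protocols use a window of outstanding packets rather than one at a time):

    #include <sys/socket.h>
    #include <sys/time.h>

    /* Send one datagram and block for a 1-byte acknowledgement,
       retransmitting on timeout. */
    int send_with_ack(int s, const void *data, size_t len,
                      const struct sockaddr *to, socklen_t tolen)
    {
        struct timeval tv = { 0, 200 * 1000 };     /* 200 ms ack timeout */
        setsockopt(s, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

        char ack;
        for (int attempt = 0; attempt < 5; attempt++) {
            sendto(s, data, len, 0, to, tolen);
            if (recv(s, &ack, 1, 0) == 1)          /* got the ack */
                return 0;
            /* timed out: retransmit */
        }
        return -1;                                 /* receiver unreachable */
    }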
Is it possible to reach something like 1MB/s ...
I just saw 8MB/s using scp over loopback, so I would say yes. That uses TCP and apparently chose AES128 to encrypt and decrypt the file on the fly - it should be trivial to get equivalent performance if you're just sending plaintext.
UDP is only a viable choice when some datagrams can be lost without sacrificing QoS. I am not familiar with Apache MINA, but the scenario described resembles a server that handles every datagram sequentially. In that case, datagrams arriving while one is being serviced pile up in the socket receive buffer, and once it is full they are silently dropped. Like I said, I do not know whether MINA can be tuned for parallel datagram processing, but if it can't, it is simply the wrong choice of tool.
I am currently planning how to develop a man-in-the-middle network application for a TCP server that would transfer data between server and client. It would behave as a regular client to the server and as a server to the remote client, without modifying any data. It will optionally be used to detect and measure how long the server or client is unable to receive data that is ready to be received, in situations when the connection is inactive.
I am planning to use blocking send and recv functions. Before any data transfer I would call setsockopt to set SO_SNDTIMEO and SO_RCVTIMEO to about 10-20 milliseconds, assuming that will force blocking send and recv calls to return early, so that data for another active connection can be routed. Running a thread per connection looks too expensive. I would not use async sockets here because I cannot find a guarantee that they will complete within a fraction of a second, especially when a large amount of data is being sent or received. Long data delays do not look good. I would use very small buffers, but calling a function for each received byte looks like overkill.
My next assumption is that it is safe to call send or recv again if a previous call was terminated by the timeout and less data than requested was transferred.
But I am confused by seemingly contradictory information available on MSDN.
send function
https://msdn.microsoft.com/en-us/library/windows/desktop/ms740149%28v=vs.85%29.aspx
If no error occurs, send returns the total number of bytes sent, which can be less than the number requested to be sent in the len parameter.
SOL_SOCKET Socket Options
https://msdn.microsoft.com/en-us/library/windows/desktop/ms740532%28v=vs.85%29.aspx
SO_SNDTIMEO - The timeout, in milliseconds, for blocking send calls. The default for this option is zero, which indicates that a send operation will not time out. If a blocking send call times out, the connection is in an indeterminate state and should be closed.
Are my assumptions correct, and can I use these functions like this? Or maybe there is a more effective way to do this?
Thanks for any answers.
While you MIGHT implement something along the lines of the ideas in your question, there are preferable alternatives on all major systems.
Namely:
kqueue on FreeBSD and family, and on Mac OS X.
epoll on Linux and related operating systems.
IO completion ports on Windows.
Using those technologies allows you to process traffic on multiple sockets without timeout logic and polling, in an efficient, reactive manner. They can all be considered successors of the ancient select() function in the socket API.
As for the quoted documentation for send() in your question, it is not really confusing or contradictory. Useful network protocols implement a mechanism to create "backpressure" for situations where a sender tries to send more data than the receiver (and/or the transport channel) can accommodate. So an application can only hand more data to send() if the network stack has buffer space ready for it.
If, for example, an application tries to send 3 KB of data and the TCP/IP stack has room for only 800 bytes, send() might succeed and report that it used 800 of the 3K offered bytes.
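Handling such a short send is routine; a POSIX-style sketch (Winsock differs only in types and error reporting):

    #include <sys/socket.h>
    #include <sys/types.h>

    /* Keep calling send() until the stack has accepted the whole buffer. */
    ssize_t send_all(int s, const char *buf, size_t len)
    {
        size_t sent = 0;
        while (sent < len) {
            ssize_t n = send(s, buf + sent, len - sent, 0);
            if (n < 0)
                return -1;         /* caller inspects the error code */
            sent += (size_t)n;
        }
        return (ssize_t)sent;
    }

Note that a loop like this blocks until the stack drains, which is exactly the backpressure described above.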
The basic approach to forwarding data on a connection is: do not read from the incoming socket until you know you can send that data to the outgoing socket. If you read greedily (and buffer at the application layer), you deprive the communication channel of its backpressure mechanism.
So basically, the "send capability" should drive the receive actions.
As for using timeouts for this "middle man", there are 2 major scenarios:
You know the sending behavior of the sender application, i.e. whether it intends to send any data within your chosen receive timeout at any given time. Some applications send only sporadically, and any value you choose for a receive timeout could be wrong. Even if the application is supposed to send at a specific interval, your timeouts will cause trouble as soon as someone debugs the sending application.
You want the "middle man" to work for unknown applications (which, of course, must not use encryption if the middle man is to have a chance). In that case you cannot pick any "adequate" timeout value, because you know nothing about the sending behavior of the application(s) involved.
As a previous poster suggested, I strongly urge you to reconsider the design of your server so that it employs an asynchronous I/O strategy. This may very well require that you spend significant time learning about each operating system's preferred approach. It will be time well spent.
For anything other than a toy application, using blocking I/O in the manner you suggest will not perform well. Even with short timeouts, it sounds as though you won't be able to service new connections until you have completed the work for the current connection. You may also find (with short timeouts) that you burn more CPU time spinning while waiting for work than actually doing work.
A previous poster wisely suggested taking a look at Windows I/O completion ports. Take a look at this article I wrote in 2007 for Dr. Dobb's. It's not perfect, but I try to do a decent job of explaining how you can design a simple server that uses a small thread pool to handle potentially large numbers of connections:
Windows I/O Completion Ports
http://www.drdobbs.com/cpp/multithreaded-asynchronous-io-io-comple/201202921
If you're on Linux/FreeBSD/MacOSX, take a look at libevent:
Libevent
http://libevent.org/
Finally, a good, practical book on writing TCP/IP servers and clients is "TCP/IP Sockets in C: Practical Guide for Programmers" by Michael Donahoo and Kenneth Calvert. You could also check out the W. Richard Stevens texts (which cover the topic completely for UNIX).
In summary, I think you should take some time to learn more about asynchronous socket I/O and the established, best-of-breed approaches for developing servers.
Feel free to private message me if you have questions down the road.
Lots of questions, I am sorry!
I am building a voice-chat (VoIP) application, and I was thinking of doing a custom implementation of the IP and UDP headers, along with a small amount of extra information, mainly a sequence number. That sounds a lot like RTP, yes, but I'm mainly interested in just the sequence number or timestamp, and implementing my own complete RTP sounds like a nightmare given all the complexity involved and the data I'm not likely to use.
The target OS for the application is Windows XP and above. I have read http://msdn.microsoft.com/en-us/library/ms740548%28v=vs.85%29.aspx on the topic of raw sockets in Windows, and now I just want some confirmation.
I also have some general networking questions.
Here are the questions:
1) According to MSDN, you cannot send custom IP packets with a source address that is not on the network list. I understand it from a security point of view, but is there any way around this? My idea was to have, for example, two clients open UDP communication to a non-NAT-protected server, and then have the clients spoof the source header to make it look like packets come from the server instead of from each other, thereby eliminating the need for the server to relay data through NAT, which would improve latency.
I have heard of WinPcap, but I don't want each client to have to install any third-party software. Considering the number of DoS attacks, surely there must be some way around this, like spoofing the network table the OS uses to check whether the source header is legitimate? Would this trigger anti-virus systems?
I feel it would be really fun to actually toy with IP headers and above instead of just using predefined headers.
2) I've had more trouble than I can tolerate getting free RTP libraries like JRTPLIB (which is probably very good anyway; it just doesn't want to work for me) to work, and I am thinking of just writing my own interpretation on top of UDP. Do application-level protocols like RTP simply build their header directly inside the UDP payload, with the actual data after it? I suspect so, considering the encapsulation process, but I just want to make sure.
If so, one does not need to create a raw socket to implement an application-level protocol, just an ordinary UDP socket and then your own payload interpretation on top?
3) RTP does not give any performance boost compared to plain UDP, since it adds more headers; all it does is make sure packets are handled in a roughly correct order based on timestamps and sequence numbers, right?
Is it really that useful to use an RTP implementation for basic VoIP needs instead of adding basic sequencing yourself? I realize that for video conferencing you probably don't want frames to play out of order, but in an audio conversation, would you really notice it?
4) If my solution in #1 is not applicable and I have to use a server as a data relay between clients, would multicast be a good solution to reduce server load? Is multicast supported well enough in routing hardware?
5) This is related to question 1). Why do routers/firewalls allow things like UDP hole punching? For example, two clients first connect to the server, then the server gives each client the port/IP of the other clients, so the clients can talk to each other on those ports.
Why would firewalls allow data to be received from an IP other than the one the connection on that very port was made to? It sounds like a big security hole that should be easily filtered. I understand that source-IP spoofing would trick it, but this?
6) To set up a UDP session between two parties (a client behind NAT, and a server that is not behind NAT), does the client simply have to send a packet to the server for the session to be allowed through the firewall, meaning the client can then receive from the server too?
This is based on the Wikipedia article: http://en.wikipedia.org/wiki/UDP_hole_punching
7) Is SIP dependent on RTP? For some reason I got this impression, but I can't find data to back it up. I may add softphone functionality to my VoIP client in the future and want to make sure I have a good foundation (RTP if I really must, otherwise my own UDP interpretation).
Thanks in advance!
1) Raw sockets seem unnecessary for this application.
2) Yes: application-level headers are simply laid out at the start of the UDP payload, with the data following them (see the sketch after this list).
3) RTP runs on top of UDP, so of course it adds overhead. In many ways RTP (ignoring RTCP) is pretty much the bare minimum already; if you implemented a half-way decent alternative, it would save you a few bytes at best, and you wouldn't be able to use any of the many RTP test tools.
7) SIP is completely independent of RTP. SIP is used to initiate sessions. SDP is the protocol commonly transported by SIP, and it is SDP that negotiates and controls the RTP video/voice streams.
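To make point 2 concrete, here is a sketch of building a packet whose application-level header sits at the front of the UDP payload. The field layout follows the RTP fixed header (RFC 3550); the payload type and SSRC are placeholders, and the POSIX byte-order headers would be winsock2.h on Windows:

    #include <stdint.h>
    #include <string.h>
    #include <arpa/inet.h>

    /* RTP fixed header (RFC 3550): 12 bytes, sent in network byte order. */
    struct rtp_hdr {
        uint8_t  vpxcc;       /* version (2 bits), padding, extension, CSRC count */
        uint8_t  mpt;         /* marker bit + payload type */
        uint16_t seq;         /* sequence number */
        uint32_t timestamp;   /* media timestamp */
        uint32_t ssrc;        /* stream identifier */
    };

    size_t build_packet(char *out, uint16_t seq, uint32_t ts,
                        const char *audio, size_t audio_len)
    {
        struct rtp_hdr h = {0};
        h.vpxcc = 2 << 6;               /* RTP version 2 */
        h.mpt = 0;                      /* placeholder payload type */
        h.seq = htons(seq);
        h.timestamp = htonl(ts);
        h.ssrc = htonl(0x12345678);     /* arbitrary example SSRC */

        memcpy(out, &h, sizeof(h));                  /* header first ... */
        memcpy(out + sizeof(h), audio, audio_len);   /* ... then the data */
        return sizeof(h) + audio_len;   /* pass this buffer to sendto() */
    }

A homegrown protocol with a bare sequence number would work exactly the same way; only the header struct would shrink.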
I'm having problems with low performance using a Windows named pipe. The throughput drops off rapidly as the network latency increases; there is a roughly linear relationship between messages sent per second and round-trip time. It seems that the client must acknowledge each message before the server will send the next one. This leads to very poor performance: I can only send five ~100-byte messages per second over a link with an RTT of 200 ms.
The pipe is asynchronous, using multiple overlapped write operations (and multiple overlapped reads at the client end), but this is not improving throughput. Is it possible to send messages in parallel over a named pipe? The pipe is created using PIPE_TYPE_MESSAGE; would PIPE_READMODE_BYTE work better? Is there any other way I can improve performance?
This is a deployed solution, so I can't simply replace the pipe with a socket connection. (I've read that Windows named pipes aren't recommended for use over a WAN, and I'm wondering if this is why.) I'd be grateful for any help with this matter.
We found that Named Pipes had poor performance from Windows XP onwards.
I don't have a solution for you, but I concur with the notion of named pipes being useless from XP onwards. We changed our software completely (in terms of IPC) because of it.
Is your comms code factored into a separate DLL? Perhaps you could replace the DLL with an interface that looks the same but behaves differently?
I've implemented a workaround: introducing a small (~1 ms) fixed delay to buffer up as much data as possible before writing to the pipe. Over a network link with an RTT of 200 ms, I can send ten times as much data in about a third of the time.
I send a message down the pipe when it first connects, so the client can determine the comms mode supported by the server and send data accordingly.
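For anyone curious what this coalescing looks like, here is a rough sketch. The batch size, the 1 ms delay, and the assumption of a single writer are all illustrative; the point is simply that many small messages become one pipe write, so they share one round trip:

    #include <windows.h>
    #include <string.h>

    #define BATCH_MAX 65536

    static char  g_batch[BATCH_MAX];
    static DWORD g_len = 0;

    /* Queue a small message instead of writing it immediately. */
    void queue_message(const void *msg, DWORD len)
    {
        if (g_len + len <= BATCH_MAX) {
            memcpy(g_batch + g_len, msg, len);
            g_len += len;
        }
        /* a real implementation would flush (or block) on a full batch */
    }

    /* After ~1 ms of accumulation, push everything in one pipe write. */
    BOOL flush_batch(HANDLE hPipe)
    {
        DWORD written = 0;
        Sleep(1);
        if (g_len == 0)
            return TRUE;
        BOOL ok = WriteFile(hPipe, g_batch, g_len, &written, NULL);
        if (ok)
            g_len = 0;
        return ok;
    }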
I would imagine that some of the WAN optimization gear out there would be able to boost performance, as one of the things it does is understand protocols and reduce their chattiness. Given the latency of many WAN links, this alone can boost throughput and reduce timeouts.
The software I'm working on needs to be able to connect to many servers in a short period of time, using TCP/IP. The software runs under Win32. If a server does not respond, I want to be able to quickly continue with the next server in the list.
Sometimes when a remote server does not respond, I get a connection timeout error after roughly 20 seconds; often the timeout comes sooner.
My problem is that these 20 seconds hurt the performance of my software, and I would like it to give up sooner (after, say, 5 seconds). I assume that the TCP/IP stack in Windows automatically adjusts the timeout based on some parameters?
Is it sane to override this timeout in my application, and close the socket if I'm unable to connect within X seconds?
(It's probably irrelevant, but the app is built using C++ and uses I/O completion ports for asynchronous network communication)
If you use I/O completion ports and asynchronous operations, why do you need to wait for a connect to complete before continuing with the next server on the list? Use ConnectEx and pass in an overlapped structure. This way the individual server connect times will not add up; the total connect time is the maximum server connect time, not the sum.
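A sketch of what starting such a connect looks like, assuming the socket has already been associated with your completion port; error handling is omitted and the remote address is supplied by the caller:

    #include <winsock2.h>
    #include <mswsock.h>

    /* Begin an overlapped connect; completion arrives via the IOCP,
       so many connect attempts can be in flight at once. */
    BOOL start_connect(SOCKET s, const struct sockaddr_in *remote, OVERLAPPED *ov)
    {
        GUID guid = WSAID_CONNECTEX;       /* ConnectEx is fetched at run time */
        LPFN_CONNECTEX pConnectEx = NULL;
        DWORD bytes = 0;
        WSAIoctl(s, SIO_GET_EXTENSION_FUNCTION_POINTER,
                 &guid, sizeof(guid), &pConnectEx, sizeof(pConnectEx),
                 &bytes, NULL, NULL);

        struct sockaddr_in local = {0};    /* ConnectEx requires a bound socket */
        local.sin_family = AF_INET;
        bind(s, (struct sockaddr *)&local, sizeof(local));

        /* returns FALSE with WSAGetLastError() == ERROR_IO_PENDING while
           the connect is still in flight */
        return pConnectEx(s, (const struct sockaddr *)remote, sizeof(*remote),
                          NULL, 0, NULL, ov);
    }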
On Linux you can do the following, per socket, to reduce (or increase) the number of SYN retries per connect:

    #include <netinet/tcp.h>   /* for TCP_SYNCNT */

    int syncnt = 1;            /* retry the SYN only once, so connect() gives up quickly */
    setsockopt(sockfd, IPPROTO_TCP, TCP_SYNCNT, &syncnt, sizeof(syncnt));

Unfortunately, it's not portable to Windows.
As for your proposed solution: closing a socket while it is still in connecting state should be fine, and it's probably the easiest way. But since it sounds like you're already using asynchronous completions, can you simply try to open four connections at a time? If all four time out, at least it will only take 20 seconds instead of 80.
All configurable TCP/IP parameters for Windows are here
See TcpMaxConnectRetransmissions
You might consider trying to open many connections at once (each with its own socket), and then working with the one that responds first. The others can be closed.
You could do this with non-blocking connect calls, or with blocking calls and threads. Then the lag waiting for a connection to open shouldn't be any more than is minimally necessary. A sketch of the non-blocking variant follows.
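This POSIX-style sketch puts the socket in non-blocking mode, starts the connect, and waits for writability with a deadline; on Windows the same idea works with ioctlsocket(FIONBIO) instead of fcntl. The timeout value and error handling are simplified:

    #include <fcntl.h>
    #include <sys/select.h>
    #include <sys/socket.h>

    /* Give up on a connect after `seconds`; on timeout the caller closes s. */
    int connect_with_timeout(int s, const struct sockaddr *addr,
                             socklen_t addrlen, int seconds)
    {
        fcntl(s, F_SETFL, fcntl(s, F_GETFL, 0) | O_NONBLOCK);

        if (connect(s, addr, addrlen) == 0)
            return 0;                      /* connected immediately */

        fd_set wfds;
        FD_ZERO(&wfds);
        FD_SET(s, &wfds);
        struct timeval tv = { seconds, 0 };

        /* a connecting socket becomes writable on success or failure */
        if (select(s + 1, NULL, &wfds, NULL, &tv) <= 0)
            return -1;                     /* timed out or select() error */

        int err = 0;
        socklen_t len = sizeof(err);
        getsockopt(s, SOL_SOCKET, SO_ERROR, &err, &len);
        return err == 0 ? 0 : -1;
    }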
You have to be careful when you override the socket timeout. If you are too aggressive and attempt to connect to many servers very quickly, the Windows TCP/IP stack will assume your application is an Internet worm and throttle it down. If this happens, the performance of your application will become even worse.
The details of exactly when the throttling occurs are not advertised, but the timeout you propose (5 seconds) should be OK, in my experience.
The details that are available about this can be found here.