How can I recv TCP socket data in one package without dividing - c++

Since I created a TCP socket, it has been fine when sending small amounts of data: no fragmentation, all the data arrives in one packet. But as the data gets bigger and bigger, the TCP data gets divided into pieces, which is really annoying. Is there any option I can set on the socket so that it will automatically reassemble the pieces into one packet for me?

It's a byte stream. All the bytes will arrive correctly and in the right order, but not necessarily when you want them. If you need to send anything more complex than one byte, you need another protocol on top of TCP. That's why there are all those other TCP/IP protocols like HTTP, SMTP etc.

No, there is not. There are even situations where you might receive just 1 byte.

Consider using higher level messaging libraries like ZMQ. It handles all the message packing and unpacking for you.

TCP provides you reliable bi-directional byte stream. It takes care of sequencing, transport-layer packetization, retransmission, and flow-control. Decades of research went into optimizing its performance. Pretty nifty. The small price you pay for all this convenience is that you have to write and read the stream in a loop, watching for a complete application protocol message you can process when receiving, and flushing yet unbuffered bytes when sending.
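For example, here is a minimal sketch of one common framing choice, a 4-byte length prefix in network byte order (the framing format is an application-level assumption, not something TCP provides):

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <arpa/inet.h>
    #include <cstdint>
    #include <vector>

    // Loop until exactly `len` bytes have arrived; a single recv() may
    // return fewer bytes than requested.
    static bool recv_all(int fd, void* buf, size_t len) {
        char* p = static_cast<char*>(buf);
        while (len > 0) {
            ssize_t n = recv(fd, p, len, 0);
            if (n <= 0) return false;  // error, or peer closed the connection
            p += n;
            len -= static_cast<size_t>(n);
        }
        return true;
    }

    // Read one application-level message: a 4-byte big-endian length prefix,
    // then that many bytes of payload.
    static bool recv_message(int fd, std::vector<char>& payload) {
        uint32_t netlen = 0;
        if (!recv_all(fd, &netlen, sizeof netlen)) return false;
        uint32_t len = ntohl(netlen);
        payload.resize(len);
        return len == 0 || recv_all(fd, payload.data(), len);
    }

The key point is the loop: a single recv() can return anywhere from 1 byte up to the requested length, so you keep reading until the frame is complete.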

Welcome to socket programming!
I'll chime in here and say that there's pretty much nothing you can do to solve your issue without adding extra dependencies on libraries which handle application protocols for you. There are some lower-level message packing libraries (Google's Protocol Buffers, among others) which may help.
It's probably most beneficial to get used to reading and writing TCP data in a loop. It's proven and very portable, even if you pay a small price in writing the streaming codecs yourself.
Try it a few times. It's a useful experience which you can re-use, and it's really not as difficult or annoying once you get the hang of it (like anything else, really).
Furthermore, it's fairly easy to unit-test (compared with dealing with esoteric libraries and uncommon protocols with badly or sparsely documented options).

You can optimize socket reads to return larger chunks, on platforms that support it, by setting a receive low-water mark using setsockopt() and SO_RCVLOWAT. But you will still have to handle the possibility of receiving fewer bytes than the watermark.
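For instance, a sketch of setting the low-water mark; note that recv() can still return early on signals, errors, or connection close, and some platforms restrict or ignore this option:

    #include <sys/socket.h>

    // Ask the kernel not to wake the reader until at least `bytes` are
    // buffered. Returns false where the option is unsupported.
    static bool set_rcvlowat(int sockfd, int bytes) {
        return setsockopt(sockfd, SOL_SOCKET, SO_RCVLOWAT,
                          &bytes, sizeof bytes) == 0;
    }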

I think you want SOCK_SEQPACKET (or possibly SOCK_RDM). See socket(2).

Related

Windows lowest level socket programming?

I'm a little confused about how low-level Winsock is. I want to write a VERY basic client-server program on Windows. I don't really want something bloated like TCP, or even UDP; just something extremely basic and low latency. Would Winsock be ideal for this? Or is Winsock the same as the native Windows networking functions, just packaged up (and possibly slower)? Would I be better off just using P/Invoke on the native Windows networking functions?
Winsock, TCP, UDP, and any well received networking library built on top of these are all going to be comparable performance wise.
Use whichever one is easiest to get your work done.
First of all, it's possible to write a completely new protocol without implementing a network driver; for this you have raw sockets. On desktop Windows they are very limited (search for "limitations").
It's possible, but not recommended. Don't reinvent the wheel; stick with UDP or TCP until you're completely sure you need something more sophisticated (but not simpler).
To send data over a network (as opposed to a direct cable link between two computers) you need the IP protocol. To dispatch your data to the right application you need a transport protocol (UDP, TCP, or others). UDP is almost the simplest possible one, because that was its main design goal. UDP provides additional addressing (a port number on top of the IP address, to deliver your data to the right socket), packet boundaries (a "length" field), and an optional checksum. That's all, and that's the minimum feature list. Take it and implement everything you need on top of UDP.
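For illustration, a minimal UDP exchange might look like the following sketch; the address and port are made up, and unlike TCP, each sendto() here produces exactly one datagram with preserved boundaries:

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);

        sockaddr_in peer{};
        peer.sin_family = AF_INET;
        peer.sin_port = htons(9000);              // hypothetical port
        inet_pton(AF_INET, "127.0.0.1", &peer.sin_addr);

        const char msg[] = "hello";               // one datagram out
        sendto(fd, msg, sizeof msg, 0,
               reinterpret_cast<sockaddr*>(&peer), sizeof peer);

        char buf[1500];                           // one MTU-sized datagram in
        ssize_t n = recvfrom(fd, buf, sizeof buf, 0, nullptr, nullptr);
        if (n > 0) std::printf("got %zd bytes\n", n);

        close(fd);
        return 0;
    }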
Next, if you need to be sure that your packets are delivered instead of silently dropped somewhere along the way (reliability), delivered in the right order, and if you'd like to know that somebody is still listening on the opposite end (connectivity), along with other things implemented by the best specialists, with hardware adapted to the implementation, tested by millions over decades, with lots of documentation, and available on almost every platform: use TCP.

Sockets: large message performance

I have a question regarding the use of Berkeley sockets under Ubuntu. In terms of performance and reliability, which option is best: sending a large number of messages with short lengths, or a small number of messages with large sizes? I do not know which design rule I should follow here.
Thank you all!
In terms of reliability, unless you have very specific requirements it isn't worth worrying about much. If you are talking about TCP, it is going to do a better job than you at managing things, until you come across some edge case that really requires you to fiddle with some knobs, in which case a more specific question would be in order. In terms of packet size, with TCP, unless you circumvent Nagle's algorithm, you don't really have the control you might think.
With UDP, arguably the best thing to do is use path MTU discovery, which TCP does for you automatically, but as a general rule you are fine just using something in the 500-byte range. If you start to get too fancy you will find yourself reinventing parts of TCP.
With TCP, one option is to use the TCP_CORK socket option (see the tcp(7) man page). Set TCP_CORK on the socket, write a batch of small messages, then clear TCP_CORK and they will be transmitted in the minimum number of network packets. This can increase throughput at the cost of increased latency.
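A sketch of that pattern, assuming sockfd is an already-connected TCP socket (TCP_CORK is Linux-specific):

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstddef>

    // Cork the socket, write a batch of small messages, then uncork to
    // flush them in as few packets as the stack can manage.
    static void send_batch(int sockfd, const char* msgs[],
                           const size_t lens[], int count) {
        int on = 1, off = 0;
        setsockopt(sockfd, IPPROTO_TCP, TCP_CORK, &on, sizeof on);
        for (int i = 0; i < count; ++i)
            write(sockfd, msgs[i], lens[i]);      // small writes get coalesced
        setsockopt(sockfd, IPPROTO_TCP, TCP_CORK, &off, sizeof off);
    }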
Each TCP/IP message carries roughly 40 bytes of header overhead (about 20 bytes of IP header plus 20 of TCP), but large messages are harder to route and easier to lose. If you are talking about UDP, the best message size is one that fits in a single Ethernet frame: with a 1500-byte MTU, that leaves about 1472 bytes of UDP payload. If you are using TCP, leave it up to the network layer to decide how much to send at once.
Performance you can find out yourself with iperf: run a few experiments and you will see it for yourself. As for reliability, as far as I understand, if you use TCP the connection guarantees that data will be delivered, provided of course the connection is not broken.
"In terms of performance and reliability which option is best"
On a lossy layer, performance and reliability are almost a straight trade-off against each other, and greater experts than us have put years of work into finding sweet spots, and techniques which beat the straight trade-off and improve both at once.
You have two basic options:
1) Use stream sockets (TCP). Any "messages" that your application knows about are defined at the application layer rather than at the socket layer. For example, you might consider an HTTP request to be a message, and the response to be another in the opposite direction. You should see your job as being to keep the output buffer as full as possible, and the input buffer as empty as possible, at all times. Reliability has pretty much nothing to do with message length, and for a fixed size of data, performance is mostly determined by the number of request-response round-trips performed rather than the number of individual writes on the socket. Obviously if you're sending one byte at a time with TCP_NODELAY then you'd lose performance, but that's pretty extreme.
2) Use datagrams (UDP). "Messages" are socket-layer entities. Performance is potentially better than TCP, but you have to invent your own system for reliability, and potentially this will hammer performance by requiring data to be re-sent. TCP has the same issue, but greater minds, etc. Datagram length can interact very awkwardly with both performance and reliability, hence the MTU discovery mentioned by Duck. If you send a large packet and it's fragmented, then if any fragment goes astray then your message won't arrive. There's a size N, where if you send N-size datagrams they won't fragment, but if you send N+1-size datagrams they will. Hence that +1 doubles the number of failed messages. You don't know N until you know the network route (and maybe not even then). So it's basically impossible to say at compile time what sizes will give good performance: even if you measure it, it'll be different for different users. If you want to optimise, there's no alternative to knowing your stuff.
UDP is also more work than TCP if you need reliability, which is built in to TCP. Potentially UDP has big payoffs, but should probably be thought of as the assembler programming of sockets.
There's also (3): use a protocol for improved UDP reliability, such as RUDP. This isn't part of Berkeley-style sockets APIs, so you'll need a library to help.

Networking Method

Hey guys, I've noticed that when I send a complete packet (collecting its data in a buffer and sending it in one go), it is much slower than sending the packet byte by byte.
Will it be okay if I make an online game using this method?
Sounds like a Nagling-related problem.
You have to disable Nagling for latency-demanding applications (see setsockopt, TCP_NODELAY).
Explanation:
The TCP stack behaves differently for small chunks, trying to combine them in bizarre ways on the way into IP datagrams. This is a performance optimization suggested by John Nagle (hence "Nagling"). Keep in mind that enabling TCP_NODELAY makes every send() call a kernel-mode transition, so if performance is an issue for what you are doing, you may wish to pack streams into chunks yourself, by means of memory copying, before feeding them into send().
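For reference, disabling Nagle's algorithm is a one-line socket option; sockfd is assumed to be an already-connected TCP socket:

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    // Disable Nagle's algorithm so small writes go out immediately.
    static bool disable_nagle(int sockfd) {
        int on = 1;
        return setsockopt(sockfd, IPPROTO_TCP, TCP_NODELAY,
                          &on, sizeof on) == 0;
    }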
I think you need to define your measurement points (what exactly are you measuring). By the way, is this TCP or UDP?
Anyway, Winsock has its own internal buffers that you can modify with calls to setsockopt.
That sounds bizarre. There is much more overhead in sending data byte by byte. Your transport headers will far outweigh the payload! Not to mention O(n) send calls (where n is the number of bytes).
You're doing something wrong if that's what you experience.
Well, I didn't really measure anything; I'm pretty sure it has something to do with sending the data rather than collecting it.
I'm using C# for the server side and C++ for the client side. On the server side I wrapped the socket with a BinaryWriter and BinaryReader, and in the client I just used send and recv to send every byte.

How to delta encode a C/C++ struct for transmission via sockets

I need to send a C struct over the wire (using UDP sockets, and possibly XDR at some point) at a fairly high update rate, several kHz, which potentially causes lots of redundant and unnecessary traffic.
This is because some of the data in the struct may not have changed at times, so I thought that delta-encoding the current struct against the previous one would be a good idea, pretty much like a "diff".
But I am wondering what the best approach is for doing something like this, ideally in a portable manner that also ensures data integrity is maintained. Would it be possible to simply XOR the data and proceed from there?
Similarly, it is important that the approach remain extensible enough that new fields can be added to the struct, or reordered if necessary (padding), which sounds as if it would require versioning information as well.
Any ideas or pointers (are there existing libraries?) would be highly appreciated!
Thanks
EDIT: Thanks to everyone who provided an answer; the level of detail is really appreciated. I realize I probably should not have mentioned UDP, though, because that is in fact not the main problem: there is already a corresponding protocol implemented on top of UDP that accounts for the mentioned difficulties. The question was really meant to be specific to feasible means of delta-encoding a struct, not about using UDP in particular as a transport mechanism.
UDP does not guarantee that a given packet was actually received, so encoding whatever you transmit as a "difference from last time" is problematic -- you can't know that your counterpart has the same idea as you about what the "last time" was. Essentially you'd have to build some overhead on top of UDP to check what packets have been received (tagging each packet with a unique ID) -- everybody who's tried to go this route will agree that more often than not you find yourself more or less duplicating the TCP streaming infrastructure on top of UDP... only, most likely, not as solid and well-developed (although admittedly sometimes you can take advantage of very special characteristics of your payloads in order to gain some modest advantage over plain good old TCP).
Does your transmission need to be one-way, sender to receiver? If that's the case (i.e., it's not acceptable for the receiver to send acknowledgments or retransmits) then there's really not much you can do along these lines. The one thing that comes to mind: if it's OK for the receiver to be out of sync for a while, then the sender could send two kinds of packets -- one with a complete picture of the current value of the struct, and an identifying unique tag, to be sent at least every (say) 5 minutes (so realistically the receiver may be out of sync for up to 15 minutes if it misses two of these "big packets"); one with just an update (diff) from the last "big packet", including the big packet's identifying unique tag and (e.g.) a run-length-encoded version of the XOR you mention.
Of course, once it has prepared the run-length-encoded version, the sender will compare its size against the size of the whole struct, and only send the delta kind of packet if the savings are substantial; otherwise it might as well send the big packet a bit earlier than needed (a gain in reliability). The receiver will keep track of the last big-packet unique tag it has received and only apply deltas which pertain to it (this helps against missing packets and packets delivered out of order, depending on how sophisticated you want to make your client).
The need for versioning etc., depending on what exactly you mean (will senders and receivers with different ideas about the struct's C layout need to communicate regularly? how do they handshake about which versions are known to both? etc.), will add a whole further universe of complications, but that is really another question, and your core question as summarized in the title is already plenty big enough ;-).
If you can afford occasional meta-messages from the receiver back to the sender (acks or requests to resend) then depending on the various numerical parameters in play you may design different strategies. I suspect acks would have to be pretty frequent to do much good, so a request to resend a big-packet (either a specifically identified one or "whatever you have that's freshest") may be the best meta-strategy to cull the options space (which otherwise threatens to explode;-). If so then the sender may be blissfully ignorant of whatever strategy the receiver is using to request big-packet-resends, and you can experiment on the receiver side with various such strategies without needing to redeploy the sender as well.
It's hard to offer much more help without some specifics, i.e., at least ballpark numbers for all the numerical parameters -- packet sizes, frequencies of sending, how long is it tolerable for the sender to be out of sync with the receiver, a bundle of network parameters, etc etc. But I hope this somewhat generic analysis and suggestions still help.
To delta encode:
1) Send "key frames" periodically (e.g. once a second). A key frame is a complete copy (rather than a delta) so that if you lose comms for any reason, you only lose a small amount of data before you can "aquire the signal" again. Use a simple packet header that allows you to detect the start of a packet and know what type of data it contains.
2) Calculate the delta from the previous packet and encode that in a compact form. By examining the type of data you are sending and the way it typically changes, you should be able to devise a pretty compact delta. However, you may need to check the size of the delta - in some cases it may not be an efficient encoding - if it's bigger than a key frame you can just send another key frame instead. You can also decide at this point whether your deltas are lossy or lossless.
3) Add a CRC check to the packet (search for CRC32). This will allow the receiver to verify that the packet has been received intact, allowing them to skip invalid packets.
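Putting steps 1-3 together, here is a minimal sketch of the XOR-diff idea, with a hypothetical State struct and the assumption that both ends share exactly the same struct layout, padding, and endianness (packet headers, sequence IDs, and the CRC are left out):

    #include <cstdint>
    #include <cstddef>
    #include <vector>

    struct State {                 // hypothetical application struct
        float    x, y, z;
        uint32_t flags;
    };

    // Encode the XOR diff as (zero_run, changed_byte) pairs.
    static std::vector<uint8_t> delta_encode(const State& prev, const State& cur) {
        const uint8_t* a = reinterpret_cast<const uint8_t*>(&prev);
        const uint8_t* b = reinterpret_cast<const uint8_t*>(&cur);
        std::vector<uint8_t> out;
        size_t i = 0;
        while (i < sizeof(State)) {
            uint8_t run = 0;       // count unchanged (XOR == 0) bytes, max 255
            while (i < sizeof(State) && (a[i] ^ b[i]) == 0 && run < 255) {
                ++run; ++i;
            }
            out.push_back(run);
            out.push_back(i < sizeof(State) ? uint8_t(a[i] ^ b[i]) : 0);
            if (i < sizeof(State)) ++i;
        }
        return out;
    }

    // Apply the diff on top of the last state the receiver knows about.
    static State delta_decode(const State& prev, const std::vector<uint8_t>& d) {
        State cur = prev;
        uint8_t* p = reinterpret_cast<uint8_t*>(&cur);
        size_t i = 0;
        for (size_t k = 0; k + 1 < d.size() && i < sizeof(State); k += 2) {
            i += d[k];                         // skip the unchanged run
            if (i < sizeof(State)) p[i++] ^= d[k + 1];
        }
        return cur;
    }

A real decoder would also check that the delta's base key-frame tag matches its current key frame before applying it, as described above.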
NOTES:
Be careful about doing this over UDP - it gives no guarantee that your packets will arrive in the same order you sent them. Obviously a delta will only work if packets are in order, so you will need to add some form of sequence ID to each packet (the first packet is "1", the second is "2", etc.) so that you can detect out-of-order receipt. You may even need to keep a buffer of "n" packets in the receiver so that you can reassemble them in the correct order before decoding (though this could introduce some latency). You will probably also miss some packets over UDP, in which case you'll need to wait until the next key frame before you can "re-acquire the signal" - so the key frames must be frequent enough to avoid catastrophic outages in your comms.
Consider using compression (e.g. zip etc.). You may find that a full packet can be built in a compression-friendly manner (e.g. rearrange the data to group bytes that are likely to have similar values, especially zeros, together) and then compressed so well that it is smaller than an uncompressed delta, in which case you won't need to go to all the effort of deltas at all (and you won't have to worry about packet ordering, etc.).
edit
- Always use a version number (or packet type) in your packets so you can add new fields or change the delta encoding in the future! You'll need this for differentiating key/delta frames anyway.
I'm not convinced that delta encoding values on UDP - which is inherently unreliable and out of order - is going to be particularly easy. Instead, I'd send an ID of the field which has changed and its current value. That also doesn't require anything to change if you want to add extra fields to the data structure you're sending. If you want a standard way of doing this, look at SNMP; that may be something you can drop in, or it may be a bit baggy for you (it qualifies the field names globally and uses ASN.1 - both of which give maximum interoperability, but at the cost of some bytes in the packet).
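As a sketch of this field-ID approach, something like the following could build the payload; the field IDs and the fixed 4-byte value size are made-up assumptions:

    #include <cstdint>
    #include <cstring>
    #include <vector>

    enum FieldId : uint8_t { FIELD_X = 1, FIELD_Y = 2, FIELD_FLAGS = 3 };

    // Append one (id, value) pair; only changed fields get appended.
    static void append_field(std::vector<uint8_t>& pkt, FieldId id, uint32_t value) {
        pkt.push_back(id);                     // 1-byte field identifier
        uint8_t raw[4];
        std::memcpy(raw, &value, sizeof raw);  // convert byte order as needed
        pkt.insert(pkt.end(), raw, raw + 4);
    }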
Use an RPC mechanism like CORBA or Protocol Buffers
Use DTLS with a compression option
Use a packed format
Repurpose an existing header-compression library

Communication between processes

I'm looking for some data to help me decide which would be the better/faster for communication between two independent processes on Linux:
TCP
Named Pipes
Which is worse: the system overhead of the pipes, or the TCP stack overhead?
Updated exact requirements:
only local IPC needed
will mostly be a lot of short messages
no cross-platform needed, only Linux
In the past I've used local domain sockets for that sort of thing. My library determined whether the other process was local to the system or remote and used TCP/IP for remote communication and local domain sockets for local communication. The nice thing about this technique is that local/remote connections are transparent to the rest of the application.
Local domain sockets use the same mechanism as pipes for communication and don't have the TCP/IP stack overhead.
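A minimal sketch of the local connection side, assuming a made-up rendezvous path; the server would bind() and listen() on the same path:

    #include <sys/socket.h>
    #include <sys/un.h>
    #include <cstring>
    #include <unistd.h>

    // Connect to a local server listening on a filesystem socket path.
    static int connect_local(const char* path) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0) return -1;
        sockaddr_un addr{};
        addr.sun_family = AF_UNIX;
        std::strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
        if (connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof addr) < 0) {
            close(fd);
            return -1;
        }
        return fd;
    }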
I don't really think you should worry about the overhead (which will be ridiculously low). Did you make sure using profiling tools that the bottleneck of your application is likely to be TCP overhead?
Anyway, as Carl Smotricz said, I would go with sockets because it will be really trivial to separate the applications in the future.
I discussed this in an answer to a previous post. I had to compare socket, pipe, and shared memory communications. Pipes were definitely faster than sockets (maybe by a factor of 2 if I recall correctly ... I can check those numbers when I return to work). But those measurements were just for the pure communication. If the communication is a very small part of the overall work, then the difference will be negligible between the two types of communication.
Edit
Here are some numbers from the test I did a few years ago. Your mileage may vary (particularly if I made stupid programming errors). In this specific test, a "client" and "server" on the same machine echoed 100 bytes of data back and forth. It made 10,000 requests. In the document I wrote up, I did not indicate the specs of the machine, so it is only the relative speeds that may be of any value. But for the curious, the times given here are the average cost per request:
TCP/IP: .067 ms
Pipe with I/O Completion Ports: .042 ms
Pipe with Overlapped I/O: .033 ms
Shared Memory with Named Semaphore: .011 ms
There will be more overhead using TCP - that will involve breaking the data up into packets, calculating checksums and handling acknowledgement, none of which is necessary when communicating between two processes on the same machine. Using a pipe will just copy the data into and out of a buffer.
I don't know if this suits you, but a very common way of doing IPC (inter-process communication) under Linux is shared memory. It's actually extremely fast (I haven't profiled it, but it's just shared data in RAM with very little processing around it).
The main problem with this approach is synchronization: you must build a little system around a semaphore to make sure one process is not writing at the same time another is trying to read.
A very simple starter tutorial is here
This is not as portable as using sockets, but the concept is the same, so if you migrate this to Windows, you will just have to change the shared-memory create/attach layer.
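A bare-bones sketch of that setup, using POSIX shm_open() plus a named semaphore to guard the data; the names are made up and error checks are omitted for brevity:

    #include <fcntl.h>
    #include <semaphore.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include <cstring>

    int main() {
        const size_t kSize = 4096;
        int fd = shm_open("/myapp_shm", O_CREAT | O_RDWR, 0600);
        ftruncate(fd, kSize);
        void* mem = mmap(nullptr, kSize, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);
        sem_t* sem = sem_open("/myapp_sem", O_CREAT, 0600, 1);

        sem_wait(sem);                       // lock before touching shared data
        std::strcpy(static_cast<char*>(mem), "hello from writer");
        sem_post(sem);                       // let the reader process in

        munmap(mem, kSize);
        close(fd);
        sem_close(sem);
        return 0;
    }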
Two things to consider:
Connection setup cost
Continuous Communication cost
On TCP:
(1) more costly - a 3-way handshake is required for a (potentially) unreliable channel.
(2) more costly - IP-level overhead (checksums etc.) plus TCP overhead (sequence numbers, acknowledgements, checksums etc.), pretty much all of which is unnecessary on the same machine, because the channel is supposed to be reliable and not introduce network-related impairments (e.g. packet reordering).
But I would still go with TCP provided it makes sense (i.e. depends on the situation) because of its ubiquity (read: easy cross-platform support) and the overhead shouldn't be a problem in most cases (read: profile, don't do premature optimization).
Update: if cross-platform support isn't required and the emphasis is on performance, then go with named/domain pipes, as I am pretty sure the platform developers will have optimized out any functionality that is only needed for handling network-level impairments.
A Unix domain socket is a very good compromise: it doesn't have the overhead of TCP, but it is more flexible than the pipe solution. A point you did not consider is that sockets are bidirectional, while named pipes are unidirectional.
I think the pipes will be a little lighter, but I'm just guessing.
But since pipes are a local thing, there's probably a lot less complicated code involved.
Other people might tell you to try and measure both to find out. It's hard to go wrong with this answer, but you may not be willing to invest the time. That would leave you hoping my guess is correct ;)