We decided to use UDP to send a large amount of data, such as coordinates, between:
client [C++] (using poll)
server [JAVA] [Apache MINA]
My datagrams are at most 512 bytes, to avoid fragmentation in transit as much as possible.
Each datagram has a header I added (with an ID inside), so that I can monitor:
how many datagrams are received
which ones are received
The problem is that we are sending the datagrams too fast. We receive the first few, then suffer a big loss, then get some more, then another big loss. The sequence of datagram IDs received looks like [1], [2], [250], [251]...
The problem happens locally too (using localhost, with only one network card).
I do not mind losing datagrams, but here it is not a matter of simple loss due to the network (which I can deal with).
So my questions here are:
On the client, how can I get the best:
settings, or socket settings?
way to send as much as I can without it being too much?
On the server, Apache MINA seems to say that it manages the socket buffer size itself, but are there still some settings to care about?
Is it possible to reach something like 1 MB/s, knowing that our connection already allows at least this bandwidth when downloading regular files?
Currently, to transfer ~4 KB of coordinate data we have to add sleep time, so we end up waiting 5 minutes or more for it to finish. That is a big issue for us, since we need to send at least 10 MB of coordinate data every minute.
If you want reliable transport, you should use TCP. This will let you send almost as fast as the slower of the network and the client, with no losses.
If you want a highly optimized low-latency transport, which does not need to be reliable, you need UDP. This will let you send exactly as fast as the network can handle, but you can also send faster, or faster than the client can read, and then you'll lose packets.
If you want reliable highly optimized low-latency transport with fine-grained control, you're going to end up implementing a custom subset of TCP on top of UDP. It doesn't sound like you could or should do this.
... how can I get the best settings, or socket settings
Typically by experimentation.
If the reason you're losing packets is because the client is slow, you need to make the client faster. Larger receive buffers only buy a fixed amount of headroom (say to soak up bursts), but if you're systematically slower any sanely-sized buffer will fill up eventually.
Note however that this only cures excessive or avoidable drops. The various network stack layers (even without leaving a single box) are allowed to drop packets even if your client can keep up, so you still can't treat it as reliable without custom retransmit logic (and we're back to implementing TCP).
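For what it's worth, enlarging the receive buffer is a one-liner. Here is a minimal sketch using the BSD sockets API (it assumes sock is your already-created UDP socket; on Windows the call is identical apart from the winsock2.h include, and the kernel may silently clamp the value, so read it back):

    #include <sys/socket.h>

    // Ask for a 4 MB receive buffer to soak up bursts. The OS may clamp
    // this to its configured maximum (e.g. net.core.rmem_max on Linux),
    // so read the value back to see what you actually got.
    int requested = 4 * 1024 * 1024;
    setsockopt(sock, SOL_SOCKET, SO_RCVBUF, (const char*)&requested, sizeof(requested));

    int actual = 0;
    socklen_t len = sizeof(actual);
    getsockopt(sock, SOL_SOCKET, SO_RCVBUF, (char*)&actual, &len);

Remember: this only widens the bucket, it does not make the client drain it any faster.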
... way to send as much as I can without it being too much?
You need some kind of ack/nack/back-pressure/throttling/congestion/whatever message from the receiver back to the source. This is exactly the kind of thing TCP gives you for free, and which is relatively tricky to implement well yourself.
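To make the idea concrete, here is a hedged sketch of the simplest form of back-pressure: cap the number of unacknowledged datagrams in flight. The names (kMaxInFlight, sendDatagram, waitForAck) are mine for illustration, not from any library:

    #include <cstdint>
    #include <set>

    const size_t kMaxInFlight = 64;   // window size, to be tuned experimentally
    std::set<uint32_t> unacked;       // IDs sent but not yet acknowledged

    void trySend(uint32_t id, const char* data, size_t len) {
        while (unacked.size() >= kMaxInFlight)
            waitForAck();             // hypothetical: blocks until an ACK arrives
        sendDatagram(id, data, len);  // hypothetical: your UDP send with ID header
        unacked.insert(id);
    }

    void onAck(uint32_t id) { unacked.erase(id); }

The receiver then paces the sender automatically: if it falls behind, ACKs slow down, and so does the sending loop.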
Is it possible to reach something like 1MB/s ...
I just saw 8MB/s using scp over loopback, so I would say yes. That uses TCP and apparently chose AES128 to encrypt and decrypt the file on the fly - it should be trivial to get equivalent performance if you're just sending plaintext.
UDP is only a viable choice when any number of datagrams can be lost without sacrificing QoS. I am not familiar with Apache MINA, but the scenario described resembles a server that handles every datagram sequentially. In that case, all datagrams that arrive while one is being serviced will be lost once the socket buffer fills; UDP does not queue datagrams beyond that buffer. As I said, I do not know whether MINA can be tuned for parallel datagram processing, but if it can't, it is simply the wrong choice of tool.
I know how to open a UDP socket in C++, and I also know how to send packets through it. When I send a packet, I correctly receive it on the other end, and everything works fine.
EDIT: I also built a fully working acknowledgement system: packets are numbered, checksummed, and acknowledged, so at any time I know how many of the packets I sent during, say, the last second were actually received by the other endpoint. Now, the data I am sending is readable only once ALL the packets have been received, so I really don't care about packet ordering: I just need them all to arrive. They could arrive in any order and it would still be fine, since having them sequentially ordered would be useless anyway.
Now, I have to transfer a very large chunk of data (say 1 GB) and I need it transferred as fast as possible. So I split the data into 512-byte chunks and send them through the UDP socket.
Now, since UDP is connectionless, it obviously doesn't provide any speed or transfer-efficiency diagnostics. So if I just push a ton of packets through my socket, the socket will accept them, they will all be sent at once, and my router will forward the first couple and then start dropping the rest. So this is NOT the most efficient way to get this done.
What I did then was making a cycle:
Sleep for a while
Send a bunch of packets
Sleep again and so on
I tried some calibration and achieved pretty good transfer rates; I have a thread continuously sending packets in small bunches, but I have nothing more than an experimental idea of what the interval and the bunch size should be. In principle, sleeping for a very short time and then sending a single packet at a time would be the kindest option for the router, but it is completely unfeasible in terms of CPU cost (I would probably need to busy-wait, since the time between two consecutive packets would be so small).
So is there any other solution? Any widely accepted one? I assume my router has a buffer of some kind, so that it can accept SOME packets all at once before it needs time to process them. How big is that buffer?
I am not an expert in this so any explanation would be great.
Please note, however, that for technical reasons there is no way at all I can use TCP.
As mentioned in some other comments, what you're describing is a flow control system. The Wikipedia article has a good overview of the various ways of doing this:
http://en.wikipedia.org/wiki/Flow_control_%28data%29
The solution that you have in place (sleeping for a hard-coded period between packet groups) will work in principle, but in order to get reasonable performance in a real-world system you need to be able to react to changes in the network. This means implementing some kind of feedback where you automatically adjust both the outgoing data rate and packet size in response to network characteristics, such as throughput and packet loss.
One simple way of doing this is to use the number of re-transmitted packets as an input into your flow control system. The basic idea would be that when you have a lot of re-transmitted packets, you would reduce the packet size, reduce the data rate, or both. If you have very few re-transmitted packets, you would increase packet size & data rate until you see an increase in re-transmitted packets.
That's something of a gross oversimplification, but I think you get the idea.
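A hedged sketch of that feedback loop, in the additive-increase/multiplicative-decrease style TCP uses (the 5% and 1% thresholds and the step sizes are invented for illustration, not taken from any standard):

    // Call once per feedback interval (e.g. every second) with the number of
    // packets that had to be re-transmitted and the total number sent.
    double rate_pps = 100.0;   // current send rate in packets per second

    void adjustRate(int resent, int sentTotal) {
        double lossRatio = sentTotal > 0 ? (double)resent / sentTotal : 0.0;
        if (lossRatio > 0.05)
            rate_pps *= 0.5;   // heavy loss: back off hard (multiplicative decrease)
        else if (lossRatio < 0.01)
            rate_pps += 10.0;  // clean interval: probe for more bandwidth (additive increase)
        if (rate_pps < 1.0) rate_pps = 1.0;
    }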
I'm pretty familiar with what Input/Output Completion Ports are for when it comes to TCP.
But what if I am, for example, coding an FPS game, or anything else where the need for low latency can be a deal breaker? I want immediate response for the player to provide the best playing experience, even at the cost of losing some spatial data along the way. It becomes obvious that I should use UDP, and aside from sending coordinate updates frequently, I should also implement a kind of semi-reliable protocol (AFAIK TCP induces packet loss in UDP, so we should avoid mixing the two) to handle events like chat messages or gunshots, where packet loss may be crucial.
Let's say I'm aiming at the kind of performance needed for an MMOFPS that lets hundreds of players meet in one persistent world and, aside from fighting with guns, allows them to communicate through chat messages, etc. Something like this actually exists and works well: check out PlanetSide 2.
Many articles on the net (e.g. those from MSDN) say overlapped sockets are the best and IOCP is a god-tier concept, but they don't seem to distinguish the cases where we use protocols other than TCP.
So there is almost no reliable information about the I/O techniques used when developing such a server. I've looked at this, but the topic seems highly controversial, and I've also seen this, but considering the discussions in the first link, I don't know whether I should follow the assumptions of the second one, whether I should use IOCP with UDP at all, and if not, what the most scalable and efficient I/O concept for UDP is.
Or am I just making another premature optimization, with no thinking ahead required for the moment?
Thought about posting it on gamedev.stackexchange.com, but this question better applies to general-purpose networking I think.
I do not recommend using this, but technically the most efficient way to receive UDP datagrams would be to just block in recvfrom (or WSARecvFrom if you will). Of course, you'll need a dedicated thread for that, or not much will happen otherwise while you block.
Unlike with TCP, there is no connection built into the protocol, and no borderless stream either. That means you get the sender's address with every datagram that comes in, and you get a whole message or nothing. Always. No exceptions.
Now, blocking in recvfrom costs one context switch into the kernel, and one context switch back when something has been received. It won't go any faster by having several overlapped reads in flight either, because only one datagram can arrive on the wire at a time, which is by far the most limiting factor (CPU time is not the bottleneck!). Using an IOCP means at least 4 context switches, two for the receive and two for the notification. Alternatively, an overlapped receive with a completion callback is not much better, because you must NtTestAlert or SleepEx to run the APC queue, so again you have at least 2 extra context switches (though it's only +2 for all notifications together, and you might incidentally be sleeping anyway).
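For reference, a stripped-down sketch of that blocking approach (Winsock flavour; handleDatagram is a hypothetical handler of yours, and error handling is elided):

    // Dedicated receiver thread: block until a datagram arrives.
    char buf[65536];              // large enough for any UDP payload
    for (;;) {
        sockaddr_in from;
        int fromLen = sizeof(from);
        int n = recvfrom(sock, buf, sizeof(buf), 0, (sockaddr*)&from, &fromLen);
        if (n < 0) break;         // socket closed or fatal error
        handleDatagram(buf, n, from);  // one whole message, with sender address
    }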
However:
Using an IOCP and overlapped reads is nevertheless the best way to do it, even if it is not the most efficient one. Completion ports are independent of TCP; they work just fine with UDP too. As long as you use an overlapped read, it does not matter what protocol you use (or even whether it's network or disk, or some other waitable or alertable kernel object).
It also does not really matter for either latency or CPU load whether you burn a few hundred cycles extra for the completion port. We're talking about "nano" versus "milli" here, a factor of one to one million. On the other hand, completion ports are overall a very comfortable, sound, and efficient system.
You can, for example, trivially implement logic for resending when you did not receive an ACK in time (which you must do if a form of reliability is desired; UDP does not do it for you), as well as keepalive.
For keepalive, add a waitable timer (maybe firing after 15 or 20 seconds) that you reset every time you receive anything. If your completion port ever tells you that this timer went off, you know the connection is dead.
For resends, you could e.g. set a timeout on GetQueuedCompletionStatus, and every time you wake up find all packets that are more than so-and-so old and have not been ACKed yet.
The entire logic happens in one place, which is very nice. It's versatile, efficient, and hard to do wrong.
You can even have several threads (and, indeed, more threads than your CPU has cores) block on the completion port. Many threads may sound like an unwise design, but it is in fact the best thing to do.
A completion port wakes up to N threads in last-in-first-out order, N being the number of cores unless you tell it otherwise. If any of these threads blocks, another one is woken to handle outstanding events. This means that in the worst case an extra thread may be running for a short time, but that is tolerable. In the average case, it keeps processor usage close to 100% as long as there is some work to do, and zero otherwise, which is very nice. LIFO waking is favourable for processor caches and keeps thread-context switching low.
This means you can block and wait for an incoming datagram and handle it (decrypt, decompress, perform logic, read something from disk, whatever), and another thread will be immediately ready for the next datagram that might come in the very next microsecond. You can use overlapped disk I/O with the same completion port, too. If you have compute work (such as AI) that can be split into tasks, you can manually post those on the completion port (PostQueuedCompletionStatus) as well, and you get a parallel task scheduler for free. All you have to do is wrap an OVERLAPPED into a structure that has some extra data after it, and use a key that you will recognize. No worrying about thread synchronization; it just magically works (strictly speaking, you don't even need an OVERLAPPED in your custom structure when posting your own notifications, it will work with any structure you pass, but I don't like lying to the operating system, you never know...).
It does not even matter much whether you block, for example when reading from disk. Sometimes that just happens and you can't help it. So what: one thread blocks, but your system still receives messages and reacts to them! The completion port automatically pulls another thread from its pool when necessary.
About TCP inducing packet loss on UDP: this is something I am inclined to call an urban myth (although it is somewhat correct). The way this common mantra is worded is misleading, however. It may once have been true (there is research on the matter, though it is close to a decade old) that routers would drop UDP in favour of TCP, thereby inducing packet loss. That is, however, certainly not the case nowadays.
A more truthful point of view is that anything you send induces packet loss. TCP induces packet loss on TCP, and UDP and TCP induce packet loss on each other; this is a normal condition (it is how TCP implements congestion control, by the way). A router will generally forward one incoming packet if the cable on the other plug is "silent", it will queue a few packets with a hard deadline (buffers are often deliberately small), optionally it may apply some form of QoS, and it will simply and silently drop everything else.
A lot of applications with rather harsh realtime requirements (VoIP, video streaming, you name it) nowadays use UDP, and while they cope well with a lost packet or two, they do not at all like significant, recurring packet loss. Still, they demonstrably work fine on networks that have a lot of TCP traffic. My phone (like the phones of millions of people) works exclusively over VoIP, data going over the same router as internet traffic. There is no way I can provoke a dropout with TCP, no matter how hard I try.
From that everyday observation, one can tell for certain that UDP is definitely not dropped in favour of TCP. If anything, QoS might favour UDP over TCP, but it most certainly doesn't penalize it.
Otherwise, services like VoIP would stutter as soon as you opened a website, and become unavailable altogether whenever you downloaded something the size of a DVD ISO.
EDIT:
To give somewhat of an idea of how simple life with IOCP can be (somewhat stripped down, utility functions missing):
#include <windows.h>

// iocp is the HANDLE returned by CreateIoCompletionPort; TaskStruct,
// CheckAndResendMarkedDgrams, etc. are among the omitted utility pieces.
DWORD n;        // bytes transferred
ULONG_PTR k;    // completion key
OVERLAPPED* o;  // overlapped pointer (possibly your own extended structure)

for(;;)
{
    if(GetQueuedCompletionStatus(iocp, &n, &k, &o, 100) == 0)
    {
        if(o == 0) // ---> timeout, mark and sweep
        {
            CheckAndResendMarkedDgrams(); // resend those from last pass
            MarkUnackedDgrams();          // mark new ones
        }
        else
        {
            // zero return value but lpOverlapped is not null:
            // this means an error occurred
            HandleError(k, o);
        }
        continue;
    }

    if(n == 0 && k == 0 && o == 0)
    {
        // zero size and zero key/overlapped is my termination message;
        // re-post, then break, so all threads on the IOCP will
        // one by one wake up and exit in a controlled manner
        PostQueuedCompletionStatus(iocp, 0, 0, 0);
        break;
    }
    else if(n == (DWORD)-1) // my magic value for "execute user task"
    {
        TaskStruct *t = (TaskStruct*)o;
        t->funcptr(t->arg);
    }
    else
    {
        /* received data or finished file I/O, do whatever you do */
    }
}
Note how the entire logic for handling completion messages, user tasks, and thread control happens in one simple loop: no obscure stuff, no complicated paths, and every thread executes the same, identical loop.
The same code works for 1 thread serving 1 socket, or for 16 threads out of a pool of 50 serving 5,000 sockets, 10 overlapped file transfers, and executing parallel computations.
I've seen the code to many FPS games that use UDP as the networking protocol.
The standard solution is to send all the data you need to update a single game frame in one large UDP packet. That packet should include a frame number, and a checksum. The packet should of course be compressed.
Generally the UDP packet contains the positions and velocities of every entity near the player, any chat messages that were sent, and all recent state changes (e.g. new entity created, entity destroyed, etc.).
Then the client listens for UDP packets. It will use only the packet with the highest frame number. So if out of order packets appear, the older packets are simply ignored.
Any packets with wrong checksums are also ignored.
Each packet should contain all the information to synchronize the client's game state with the server.
Chat messages get sent redundantly over several packets, and each message has a unique message ID. For example, you retransmit the same chat message for, say, a full second's worth of frames. If a client misses a chat message after it has been sent 60 times, then the quality of the network channel is simply too low to play the game. Clients display any message arriving in a UDP packet whose message ID they have not yet displayed.
Similarly for objects being created or destroyed. Every created or destroyed object has a unique object ID set by the server. The client creates or destroys an object only if it has not already acted on that object ID.
So the key here is to send data redundantly, and key all state transitions to unique id's set by the server.
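As a hedged illustration (the field names are invented, not taken from any particular game), a per-frame packet header in that style might look like:

    #include <cstdint>

    #pragma pack(push, 1)
    struct FramePacket {
        uint32_t frameNumber;   // client keeps only the highest it has seen
        uint32_t checksum;      // packet ignored if this fails to verify
        uint16_t entityCount;   // entity states follow: position, velocity, ...
        uint16_t eventCount;    // then events (chat, create/destroy), each with a
                                // unique server-assigned ID, repeated across frames
    };
    #pragma pack(pop)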
Edit: Another poster mentioned that for chat messages you might want to use a different protocol on a different port, and they may be right about that probably being optimal. That is, for message types where latency is not critical but reliability is more important, you might want to open up a different port and use TCP. But I'd leave that as a later exercise. It is certainly easier and cleaner at first for your game to use just one channel, and to figure out the vagaries of multiple ports and multiple channels, with their various failure modes, later. (E.g. what happens if the UDP channel works but the chat channel goes down? What if you succeed in opening one port and not the other?)
When I did this for a client we used ENet as the base reliable UDP protocol and re-implemented this from scratch to use IOCP for the server side whilst using the freely available ENet code for the client side.
IOCP works fine with UDP and integrates nicely with any TCP connections that you might also be handling (we have TCP, WebSocket or UDP client connections in and TCP connections between server nodes and being able to plug all of these into the same thread pool if we want is handy).
If absolute latency and UDP packet processing speed matter most (and it's unlikely they really do), then using the new Server 2012 RIO API might be worth it, but I'm not convinced yet (see here for some preliminary performance tests and some example servers).
You probably want to look at using GetQueuedCompletionStatusEx() for dealing with your inbound data, as it reduces the context switches per datagram by letting you pull multiple datagrams back with a single call.
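A hedged sketch of the batched dequeue (the API itself is standard Win32; handleCompletion is a hypothetical dispatcher of yours):

    // Drain up to 64 completions with one system call instead of one each.
    OVERLAPPED_ENTRY entries[64];
    ULONG removed = 0;
    if (GetQueuedCompletionStatusEx(iocp, entries, 64, &removed, 100, FALSE)) {
        for (ULONG i = 0; i < removed; ++i) {
            // Each entry carries lpCompletionKey, lpOverlapped and
            // dwNumberOfBytesTransferred, i.e. one datagram (or task) apiece.
            handleCompletion(entries[i]);
        }
    }
    // A FALSE return with GetLastError() == WAIT_TIMEOUT just means the
    // 100 ms timeout elapsed with nothing queued.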
A couple things:
1) As a general rule if you need reliability, you are best off just using TCP. A competitive and perhaps even superior solution on top of UDP is possible, but it is extremely difficult to get right and have it perform properly. The main thing people implementing reliability on top of UDP don't bother with is proper flow control. You must have flow control if you intend to send large amounts of data and want it to gracefully take advantage of the bandwidth that is available at the moment (which changes continuously with route conditions). In practice, implementing anything other than essentially the same algorithm TCP uses is likely to be unfriendly to other protocols on the network as well. It's unlikely you will do a better job at implementing that algorithm than TCP does.
2) As for running TCP and UDP in parallel, it is not as big a concern these days as others have noted. At one time I heard that overloaded routers along the way were biased toward dropping UDP packets before TCP packets, which makes sense in some ways, since a dropped TCP packet will just be resent anyway, while a lost UDP packet often isn't. That said, I am skeptical that this actually happens. In particular, dropping a TCP packet causes the sender to throttle back, so it may make more sense to drop the TCP packet.
The one case where TCP may interfere with UDP is that TCP, by the nature of its algorithm, continuously tries to go faster and faster until it reaches a point where it loses packets, then throttles back and repeats the process. As a TCP connection continually bumps against that bandwidth ceiling, it is just as likely to cause UDP loss as TCP loss, which in theory would look as if the TCP traffic were sporadically causing UDP loss.
However, this is a problem you will run into even if you put your own reliable mechanism on top of UDP (assuming you do flow control properly). If you wanted to avoid this condition, you could intentionally throttle the reliable data at the application layer. Typically in a game the reliable data rate is limited to the rate at which the client or server actually needs to send reliable data, which is often well below the bandwidth capabilities of the pipe, and thus the interference never occurs, regardless of whether it is TCP or UDP-reliable based.
Where things get a bit more difficult is if you are making a streaming-asset game. For a game like FreeRealms, which does this, the assets are downloaded from a CDN via HTTP/TCP, and it will attempt to use all available bandwidth, which will increase packet loss on the main game channel (typically UDP). I have generally found the interference low enough that I don't think you should worry about it too much.
3) As for IOCP, my experience with them is very limited, but having done extensive game networking in the past, I am skeptical that they add value in the case of UDP. Typically the server will have a single UDP socket that is handling all incoming data. With hundreds of users connected, the rate at which the data is coming into the server is very high. Having a background thread doing a blocking call on the socket as others have suggested and then quickly moving the data into a queue for the main application thread to pick up is a reasonable solution, but somewhat unnecessary, since in practice the data is coming in so fast when under load that there is not much point in ever sleeping the thread when it blocks.
Let me put this another way: if the blocking socket call polled a single packet and then put the thread to sleep until the next packet came in, it would be context-switching to that thread thousands of times per second once the data rate got high. Either that, or by the time the unblocked thread executed and cleared the data, additional data would already be waiting to be processed as well. Instead, I prefer to put the socket in non-blocking mode and have a background thread spin at around 100 fps processing it (sleeping between polls as needed to achieve that frame rate). In this manner, the socket buffer accumulates incoming packets for 10 ms, then the background thread wakes once and processes all that data in bulk before going back to sleep, thus preventing gratuitous context switches. I then have that same background thread do other send-related processing when it wakes up as well. Being entirely event-driven loses many of its benefits when the data volume gets the least bit high.
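Sketched out (hedged; dispatchToLogicalConnection and doSendSideWork are hypothetical stand-ins for the processing described above), the loop looks like:

    // Background thread; the socket was put in non-blocking mode beforehand
    // (ioctlsocket/FIONBIO on Windows, O_NONBLOCK elsewhere).
    char buf[65536];
    sockaddr_in from;
    for (;;) {
        for (;;) {
            int fromLen = sizeof(from);
            int n = recvfrom(sock, buf, sizeof(buf), 0, (sockaddr*)&from, &fromLen);
            if (n < 0) break;                            // would-block: buffer drained
            dispatchToLogicalConnection(buf, n, from);   // route to logical connection
        }
        doSendSideWork();   // resends, ACKs, other send-related processing
        Sleep(10);          // ~100 wakeups/sec; packets accumulate in the meantime
    }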
In the case of TCP, the story is quite different, since you need an efficient mechanism to figure out which of hundreds of connects the incoming data is coming from and polling them all is very slow, even on a periodic basis.
So, in the case of UDP with a home-grown reliability mechanism on top of it, I typically have a background thread playing the same role the OS plays: whereas the OS gets the data from the network card and distributes it to various logical TCP connections internally for processing, my background thread gets the data from the solitary UDP socket (via periodic polling) and distributes it to my own internal logical connection objects for processing. Those internal logical connections then put the application-level packet data into a thread-safe master queue, flagged with the logical connection it came from. The main application thread then processes that master queue, routing the packets directly to the game-level objects associated with that connection. From the main application thread's point of view, it simply has an event-driven queue to process.
The bottom line is that, given that the poll call to the solitary UDP socket rarely comes up empty, it is difficult to imagine a more efficient way to solve this problem. The only thing you lose with this method is that you wait up to 10 ms to wake up when, in theory, you could wake the instant the data first arrived, but that only matters under extremely light load anyway. Besides, the main application thread isn't going to make use of the data until its next frame cycle anyway, so the difference is moot, and I think overall system performance is enhanced by this technique.
I wouldn't hold a game as old as PlanetSide up as a paragon of modern network implementation. Especially not having seen the insides of their networking library. :)
Different types of communication require different methodologies. One of the answers above talks around the differences between frame/position updates and chat messages, without recognizing that using the same transport for both is probably silly. You should most definitely use a connected TCP socket between your chat implementation and the chat server, for text-style chat. Don't argue, just do it.
So, for your game client doing updates via arriving UDP packets, the most efficient path from the network adapter through the kernel and into your application is (most likely) going to be a blocking recv. Create a thread that rips packets off the network, verifies their validity (chksum match, sequence number increasing, whatever other checks you have), de-serializes the data into an internal object, then queue the object on an internal queue to the application thread that handles those sorts of updates.
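A hedged sketch of that thread-plus-queue handoff (Update and validateAndDeserialize are hypothetical placeholders for your own types and checks):

    #include <mutex>
    #include <queue>

    std::mutex qMutex;
    std::queue<Update*> updates;   // drained by the application/game thread

    void receiverThread(SOCKET sock) {
        char buf[65536];
        for (;;) {
            int n = recv(sock, buf, sizeof(buf), 0);     // blocks for one datagram
            if (n < 0) break;                            // socket closed or fatal error
            Update* u = validateAndDeserialize(buf, n);  // checksum, seq number, decode
            if (!u) continue;                            // invalid packet: drop it
            std::lock_guard<std::mutex> lock(qMutex);
            updates.push(u);
        }
    }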
But don't take my word for it: test it! Write a small program that can receive and deserialize 3 or 4 kinds of packets, using a blocking thread and a queue to deliver the objects, then re-write it using a single thread and IOCPs, with the deserialization and queueing in the completion routine. Pound enough packets through it to get the run time up in the minute range, and test which one is fastest. Make sure something (i.e. some thread) in your test app is consuming the objects off the queue so you get a full picture of the relative performance.
Post back here when you have the two test programs done, and let us know which worked out best, mm'kay? Which was fastest, which would you rather maintain in the future, which took the longest to get it working, etc.
If you want to support many simultaneous connections, you need to use an event-driven networking approach. I know of two good libraries: libev (used by nodeJS) and libevent. They are very portable and easy to use. I have successfully used libevent in an application supporting hundreds of parallel TCP/UDP(DNS) connections.
I believe using event-driven network i/o is not premature optimization in a server - it should be the default design pattern. If you want to do a quick prototype implementation it may be better to start in a higher level language. For JavaScript there is nodeJS and for Python there is Twisted. Both I can personally recommend.
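For a flavour of the event-driven approach with UDP, here is a minimal libevent 2.x sketch (the dispatch comment marks where your own handling would go; sock is assumed to be an already-bound, non-blocking UDP socket):

    #include <event2/event.h>

    static void on_udp_read(evutil_socket_t fd, short what, void* arg) {
        char buf[65536];
        int n = recv(fd, buf, sizeof(buf), 0);   // one datagram per callback
        if (n >= 0) { /* deserialize and dispatch */ }
    }

    // Setup, e.g. in main():
    struct event_base* base = event_base_new();
    struct event* ev = event_new(base, sock, EV_READ | EV_PERSIST, on_udp_read, nullptr);
    event_add(ev, nullptr);
    event_base_dispatch(base);   // run the loop; the callback fires on readiness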
How about NodeJS? It supports UDP and is highly scalable.
I have to send mesh data via TCP from one computer to another... These meshes can be rather large. I'm having a tough time thinking about what the best way to send them over TCP will be as I don't know much about network programming.
Here is my basic class structure that I need to fit into buffers to be sent via TCP:
class PrimitiveCollection
{
    std::vector<Primitive*> primitives;
};

class Primitive
{
    PRIMTYPES primType; // PRIMTYPES is just an enum with values for fan, strip, etc...
    unsigned int numVertices;
    std::vector<Vertex*> vertices;
};

class Vertex
{
    float X;
    float Y;
    float Z;
    float XNormal;
    float ZNormal;
};
I'm using the Boost library and their TCP stuff... it is fairly easy to use. You can just fill a buffer and send it off via TCP.
However, of course this buffer can only be so big and I could have up to 2 megabytes of data to send.
So what would be the best way to get the above class structure into the necessary buffers and sent over the network? I would need to deserialize on the receiving end also.
Any guidance in this would be much appreciated.
EDIT: I realize after reading this again that this is really a more general problem that is not specific to Boost. It's more a problem of chunking the data and sending it. However, I'm still interested to see whether Boost has anything that can abstract this away somewhat.
Have you tried it with Boost's TCP? I don't see why 2 MB would be an issue to transfer. I'm assuming we're talking about a LAN running at 100 Mbps or 1 Gbps, a computer with plenty of RAM, and no need for response times under 20 ms? If your goal is just to get all 2 MB from one computer to another, just send it; TCP will handle chunking it up for you.
I have a TCP latency checking tool that I wrote with Boost, that tries to send buffers of various sizes, I routinely check up to 20MB and those seem to get through without problems.
I guess what I'm trying to say is don't spend your time developing a solution unless you know you have a problem :-)
--------- Solution Implementation --------
Now that I've had a few minutes on my hands, I went through and made a quick implementation of what you were talking about: https://github.com/teeks99/data-chunker There are three big parts:
The serializer/deserializer: Boost has its own, but it's not much better than rolling your own, so I did.
Sender - Connects to the receiver over TCP and sends the data
Receiver - Waits for connections from the sender and unpacks the data it receives.
I've included the .exe(s) in the zip, run Sender.exe/Receiver.exe --help to see the options, or just look at main.
More detailed explanation:
Open two command prompts, and go to DataChunker\Debug in both of them.
Run Receiver.exe in one of them.
Run Sender.exe in the other one (possibly on a different computer, in which case add --remote-host=IP.ADD.RE.SS after the executable name; if you want to try sending more than once, add --num-sends=10 to send ten times).
Looking at the code, you can see what's going on: the receiver and sender ends of the TCP socket are created in the respective main() functions. The sender creates a new PrimitiveCollection and fills it with some example data, then serializes and sends it... the receiver deserializes the data into a new PrimitiveCollection, at which point the collection could be used by someone else, but I just write to the console that it is done.
Edit: Moved the example to github.
Without anything fancy, from what I remember in my network class:
Send a message to the receiver asking what size data chunks it can handle
Take the minimum of that and your own sending capabilities, then reply, saying:
What size you'll be sending, how many you'll be sending
After you get that, just send each chunk. You'll want to wait for an "Ok" reply, so you know you're not wasting time sending to a client that's not there. This is also a good time for the client to send an "I'm canceling" message instead of "Ok".
Send until all chunks have been acknowledged with an "Ok".
The data is transferred.
This works because TCP guarantees in-order delivery. UDP would require packet numbers (for ordering).
Compression is the same, except you're sending compressed data. (Data is data, it all depends on how you interpret it). Just make sure you communicate how the data is compressed :)
As for examples, all I could dig up was this page and this old question. I think what you're doing would work well in tandem with Boost.Serialization.
I would like to add one more point to consider - setting TCP socket buffer size in order to increase socket performance to some extent.
There is a utility, iperf, that lets you test the speed of exchange over a TCP socket. I ran a few tests on Windows in a 100 Mbps LAN. With the default 8 KB TCP window size the speed is 89 Mbit/s, and with a 64 KB TCP window size it is 94 Mbit/s.
In addition to how to chunk and deliver the data, another issue you should consider is platform differences. If the two computers are the same architecture, and the code running on both sides is the same version of the same compiler, then you should, probably, be able to just dump the raw memory structure across the network and have it work on the other side. If everything isn't the same, though, you can run into problems with endianness, structure padding, field alignment, etc.
In general, it's good to define a network format for the data separately from your in-memory representation. That format can be binary, in which case numeric values should be converted to standard forms (mainly, changing endianness to "network order", which is big-endian), or it can be textual. Many network protocols opt for text because it eliminates a lot of formatting issues and because it makes debugging easier. Personally, I really like JSON. It's not too verbose, there are good libraries available for every programming language, and it's really easy for humans to read and understand.
One of the key issues to consider when defining your network protocol is how the receiver knows when it has received all of the data. There are two basic approaches. First, you can send an explicit size at the beginning of the message; the receiver then knows to keep reading until it has gotten that many bytes. The other is to use some sort of end-of-message delimiter. The latter has the advantage that you don't have to know in advance how many bytes you're sending, but the disadvantage that you have to figure out how to make sure the end-of-message delimiter can't appear in the message.
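The size-prefix approach is only a few lines; here is a minimal sketch (readExactly is a hypothetical helper that loops over recv() until it has the requested byte count, since TCP may hand you the message in arbitrary pieces):

    #include <arpa/inet.h>   // htonl/ntohl (winsock2.h on Windows)
    #include <cstdint>
    #include <string>

    // Sender side: 4-byte big-endian ("network order") length, then the payload.
    std::string frame(const std::string& payload) {
        uint32_t len = htonl(static_cast<uint32_t>(payload.size()));
        return std::string(reinterpret_cast<const char*>(&len), 4) + payload;
    }

    // Receiver side: read the length, then exactly that many payload bytes.
    std::string readMessage(int sock) {
        uint32_t lenBE = 0;
        readExactly(sock, &lenBE, 4);          // hypothetical: loops over recv()
        std::string payload(ntohl(lenBE), '\0');
        readExactly(sock, &payload[0], payload.size());
        return payload;
    }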
Once you decide how the data should be structured as it's flowing across the network, then you should figure out a way to convert the internal representation to that format, ideally in a "streaming" way, so you can loop through your data structure, converting each piece of it to network format and writing it to the network socket.
On the receiving side, you just reverse the process, decoding the network format to the appropriate in-memory format.
My recommendation for your case is to use JSON. 2 MB is not a lot of data, so the overhead of generating and parsing won't be large, and you can easily represent your data structure directly in JSON. The resulting text will be self-delimiting, human-readable, easy to stream, and easy to parse back into memory on the destination side.
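As a hedged illustration (the key names are my own invention), the PrimitiveCollection from the question might serialize to something like:

    {
      "primitives": [
        {
          "primType": "STRIP",
          "vertices": [
            { "x": 1.0, "y": 2.0, "z": 0.5, "xNormal": 0.0, "zNormal": 1.0 }
          ]
        }
      ]
    }

Each nested level maps directly onto one of the classes, which is what makes the conversion easy to stream in both directions.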