I have to implement a (client) socket which requires high throughput (> 800 Mbps) and low latency, running on a Windows 7 server. Overlapped I/O seems the way to go for high performance.
I have read some documentation on the subject; as far as I can see, the advantage of overlapped I/O is that you pass structures with buffers to the OS and are notified when they have been filled.
Now I am wondering what the common ways are to combine this with a packet-based protocol (length-delimited packets, where the header contains the size of the data block).
Of course I can just read arbitrary chunks of data and copy the required number of bytes into a message structure. This means an additional copy operation.
A second option might be to pass the message structure as a buffer with the header size; after getting it back, pass the same structure again to read the requested number of data bytes. In this case the first read is small, but the data lands directly in the message structure, and while the data-block read is pending the read of the next header can already be initiated.
Any experience or ideas on how to handle length-delimited packets most efficiently?
Thanks,
Check out scatter/gather I/O if you know the packet sizes.
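For illustration, here is a minimal (non-overlapped) sketch of a scatter read with WSARecv, assuming a fixed, known payload size so that both buffers can be posted up front; the same WSABUF array could equally be handed to an overlapped WSARecv. ScatterReadPacket is just an illustrative helper name.

    #include <winsock2.h>

    #pragma comment(lib, "ws2_32.lib")

    // Read one fixed-size packet into separate header and payload buffers in a
    // single call. As with any stream read, fewer bytes than requested may
    // arrive; a real implementation would loop (or rely on the overlapped
    // completion) to cover short reads.
    bool ScatterReadPacket(SOCKET s, char* header, ULONG headerLen,
                           char* payload, ULONG payloadLen)
    {
        WSABUF bufs[2];
        bufs[0].buf = header;     // header lands here...
        bufs[0].len = headerLen;
        bufs[1].buf = payload;    // ...and the data block goes straight to its target
        bufs[1].len = payloadLen;

        DWORD received = 0;
        DWORD flags = 0;
        int rc = WSARecv(s, bufs, 2, &received, &flags, nullptr, nullptr);
        return rc == 0 && received == headerLen + payloadLen;
    }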
The title probably explains itself best.
Anyway, I have a data buffer received from another source, and I want to send it in a single UDP packet that contains a sequence number (as the first byte), i.e. I want to prepend the sequence number to the given buffer.
Instead of allocating a new buffer, setting its size to size+4, writing the sequence number at the start, and copying the data into it, I would like to just use the scatter/gather mechanism of WSA.
Sadly though, no WSA document explicitly specifies that WSASend guarantees all buffers will be sent as a single packet (the total packet size will be kept under 1500 bytes).
Can I be certain that it will work that way? Or should I re-build the packet?
Best,
Daniel
It is documented in a round-about way:
For message-oriented sockets, do not exceed the maximum message size of the underlying provider, which can be obtained by getting the value of socket option SO_MAX_MSG_SIZE. If the data is too long to pass atomically through the underlying protocol the error WSAEMSGSIZE is returned, and no data is transmitted.
So clearly it combines the data from the buffers to make a single UDP packet. If it didn't, there would be no point in returning the WSAEMSGSIZE error.
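As a concrete illustration, here is a minimal sketch of the gather-send idea, assuming a connected UDP socket and a 4-byte sequence number; SendWithSequenceNumber is a hypothetical helper name.

    #include <winsock2.h>

    #pragma comment(lib, "ws2_32.lib")

    // Prepend a 4-byte sequence number to an existing buffer without copying:
    // both WSABUFs are combined into one UDP datagram. If the total exceeds
    // SO_MAX_MSG_SIZE the call fails with WSAEMSGSIZE and nothing is sent.
    int SendWithSequenceNumber(SOCKET udpSocket, unsigned long seq,
                               char* data, ULONG dataLen)
    {
        unsigned long seqNet = htonl(seq);   // network byte order for the prefix

        WSABUF bufs[2];
        bufs[0].buf = reinterpret_cast<char*>(&seqNet);
        bufs[0].len = sizeof(seqNet);
        bufs[1].buf = data;
        bufs[1].len = dataLen;

        DWORD sent = 0;
        return WSASend(udpSocket, bufs, 2, &sent, 0, nullptr, nullptr);
    }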
I have three components: client, proxy, and server. At times, when the proxy gets heavily loaded, the socket buffers (configured to, say, 1 MB) fill up. Is there a way to read the entire 1 MB buffer in one shot and then process it?
FYI:
The datagrams never exceed the MTU size and are in a predefined structured format that also includes the length of each packet.
The proxy routes data between the client and the server. I tried a producer/consumer thread arrangement, but the problem is not solved.
Short answer: no.
Long answer:
The Berkeley-style socket API lets you receive or send only one packet per call, so it is not possible to read the complete buffered stream in one shot and replay it on the other side.
One reason is that your UDP socket can receive data from several sources. The interface has to pass meta-information, such as the sender's socket address and at least the packet size, to the caller. Such a bulk read would then have to be parsed anyway so that you could pick out the packets that meet your criteria and build the batch of packets to forward.
Since you need to be able to check each packet to see whether it is really the one you expect, you need a function that reads one packet at a time; that function is recvfrom.
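To make that concrete, here is a minimal sketch that drains everything currently queued on the socket one datagram at a time, assuming a non-blocking Winsock UDP socket; DrainSocket and ProcessPacket are hypothetical names.

    #include <winsock2.h>

    #pragma comment(lib, "ws2_32.lib")

    void DrainSocket(SOCKET s, void (*ProcessPacket)(const char*, int, const sockaddr_in&))
    {
        char buf[1500];                         // datagrams never exceed the MTU
        for (;;)
        {
            sockaddr_in from;
            int fromLen = sizeof(from);
            int n = recvfrom(s, buf, (int)sizeof(buf), 0,
                             reinterpret_cast<sockaddr*>(&from), &fromLen);
            if (n == SOCKET_ERROR)
            {
                if (WSAGetLastError() == WSAEWOULDBLOCK)
                    break;                      // nothing left in the receive buffer
                break;                          // real error: handle/log as appropriate
            }
            ProcessPacket(buf, n, from);        // one packet per call, as described above
        }
    }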
I cannot find the answer to this one: what will happen if I read 4 bytes from a socket (I set the limit to 4 bytes) but there are actually 256 bytes waiting to be read? Will they be lost, or will they wait until the next call of the read function?
If it's a TCP socket, then no data will get lost; it'll get queued up.
Bear in mind that you have to be prepared to deal with partial reads, i.e. where you get fewer bytes than requested and have to call read() again to get more.
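For example, a minimal "read exactly n bytes" helper for a blocking BSD-style TCP socket might look like this (recv_exact is just an illustrative name):

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <cstddef>

    // Keep calling recv() until the requested number of bytes has arrived,
    // the peer closes the connection, or an error occurs.
    ssize_t recv_exact(int fd, char* buf, size_t len)
    {
        size_t total = 0;
        while (total < len)
        {
            ssize_t n = recv(fd, buf + total, len - total, 0);
            if (n == 0)  return total;  // peer closed the connection
            if (n < 0)   return -1;     // error (check errno)
            total += static_cast<size_t>(n);
        }
        return static_cast<ssize_t>(total);
    }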
It depends what kind of socket you use. If it is a stream socket (created with SOCK_STREAM), then it carries a stream of data, and you can read it even one byte at a time (though that will not be efficient); on the other hand, you may request 1024 bytes but get only 1. This is almost unrelated to the portions in which the sender put the data into the stream (there is some relationship, but you should not rely on it). So with a stream you need to mark the end of the data with a higher-level protocol: you may send strings terminated by \n, use zero-terminated strings, or send a few bytes giving the size of the coming data before that data.
On the other hand, if you use a datagram socket (created with SOCK_DGRAM), you get data packet by packet, in whatever sizes the sender sent them. If you provide a buffer smaller than the data available, it will be truncated and the remaining data discarded.
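As a small illustration of the higher-level framing mentioned above, here is a sketch that treats '\n' as the message terminator on a stream socket, assuming a blocking BSD-style socket; read_lines and handle_line are hypothetical names.

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <string>

    // Accumulate whatever recv() returns and cut complete lines out of the
    // accumulator, regardless of how the sender's writes were split.
    void read_lines(int fd, void (*handle_line)(const std::string&))
    {
        std::string pending;                       // bytes received but not yet framed
        char buf[4096];

        for (;;)
        {
            ssize_t n = recv(fd, buf, sizeof(buf), 0);
            if (n <= 0) break;                     // connection closed or error
            pending.append(buf, static_cast<size_t>(n));

            size_t pos;
            while ((pos = pending.find('\n')) != std::string::npos)
            {
                handle_line(pending.substr(0, pos));   // one complete message
                pending.erase(0, pos + 1);
            }
        }
    }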
Is there a good method on how to transfer a file from say... a client to a server?
Probably just images, but my professor was asking for any type of files.
I've looked around and am a little confused as to the general idea.
So if we have a large file, we can split that file into segments...? Then send each segment off to the server.
Should I also use a while loop to receive all the files / segments on the server side? Also, how will my server know if all the segments were received without previously knowing how many segments there are?
I was looking on the Cplusplus website and found that there is like a binary transfer of files...
Thanks for all the help =)
If you are using TCP:
You are right, there is no way to "know" how much data you will be receiving. This gives you a few options:
1) Before transmitting the image data, first send the number of bytes to be expected. So your first 4 bytes might be the 4-byte integer "4096". Then the receiver can read the first 4 bytes, "know" that it is expecting 4096 bytes, and malloc(4096) to hold the rest. Then the sender can send() the 4096 bytes' worth of image data.
When you do this, be aware that you might have to recv() multiple times - for one reason or another, you might not have received all 4096 bytes. So you will need to check the return value of recv() to make sure you have gotten everything. (A sketch of the sending side follows after option 2 below.)
2) If you are just sending one file, you could simply have the receiver keep recv()ing from the socket until the sender closes the connection. This is a bit harder - you will have to keep track of how much you have received, and if your buffer fills up you will have to reallocate it. I don't recommend this method, but it would technically accomplish the task.
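Here is a minimal sketch of the sending side of option 1, assuming a connected blocking BSD-style TCP socket and a file small enough to load into memory (send_file is an illustrative name); the receiver reads the 4-byte prefix, allocates that many bytes, and recv()s in a loop until it has them all.

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <arpa/inet.h>
    #include <cstdint>
    #include <fstream>
    #include <iterator>
    #include <vector>

    bool send_file(int fd, const char* path)
    {
        std::ifstream in(path, std::ios::binary);
        std::vector<char> data((std::istreambuf_iterator<char>(in)),
                               std::istreambuf_iterator<char>());

        uint32_t len = htonl(static_cast<uint32_t>(data.size()));
        if (send(fd, &len, sizeof(len), 0) != (ssize_t)sizeof(len))
            return false;                               // length prefix goes first

        size_t sent = 0;
        while (sent < data.size())                      // send() may also be partial
        {
            ssize_t n = send(fd, data.data() + sent, data.size() - sent, 0);
            if (n <= 0) return false;
            sent += static_cast<size_t>(n);
        }
        return true;
    }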
If you are using UDP:
This means that you don't have reliable transfer, so packets might be dropped or arrive out of order. If you are going to use UDP, you must fragment your data into little segments, and both the sender and receiver must agree on how large a segment is (100 bytes? 1000 bytes?).
Not only that, but you must also transmit a sequence number with each packet - that is, label each packet #1, #2, etc. The receiver must be able to tell whether any packets are missing (if it receives packets 1, 2 and 4, it knows #3 is missing) and must make sure they are in order (if it receives 3, 2, then 1, it must still save them to the file in the correct order: 1, 2, then 3).
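A minimal sketch of the fragment-and-number scheme on the sending side might look like this, assuming a 1000-byte segment size agreed by both sides and BSD-style sockets (send_chunks is an illustrative name):

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <cstdint>
    #include <cstring>
    #include <cstddef>

    const size_t CHUNK_SIZE = 1000;

    // Each datagram carries a 4-byte sequence number followed by up to
    // CHUNK_SIZE bytes of file data; the receiver uses the numbers to detect
    // gaps and restore the original order.
    bool send_chunks(int fd, const sockaddr_in& dest, const char* data, size_t len)
    {
        char packet[4 + CHUNK_SIZE];
        uint32_t seq = 0;

        for (size_t off = 0; off < len; off += CHUNK_SIZE, ++seq)
        {
            size_t payload = (len - off < CHUNK_SIZE) ? (len - off) : CHUNK_SIZE;
            uint32_t seqNet = htonl(seq);
            std::memcpy(packet, &seqNet, 4);            // sequence number first
            std::memcpy(packet + 4, data + off, payload);

            if (sendto(fd, packet, 4 + payload, 0,
                       reinterpret_cast<const sockaddr*>(&dest), sizeof(dest)) < 0)
                return false;
        }
        return true;
    }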
So for your assignment, well, it will depend on what protocol you have to/are allowed to use.
If you use a UDP-based transfer protocol, you will have to break the file up into chunks for network transmission. You'll also have to reassemble them in the correct order on the receiving end and verify the results. If you use a TCP-based transfer protocol, all of this will be taken care of under the hood.
You should consult Beej's Guide to Network Programming for how best to send and receive data and use sockets in general. It explains most of the things about which you are asking.
There are many ways of transferring files. If you're transferring files in a lossless manner, then you're basically going to divide the file into chunks, tag each chunk with a sequence number, send the chunks to the other side, and reconstitute the file. Stream-oriented protocols are simpler since lost packets are retransmitted for you. If you're using an unreliable protocol, then you will need to retransmit missing packets yourself and resequence chunks that arrive out of order.
If lossy transfer is acceptable (like transferring video or on-line game data), then use an unreliable protocol. Lossy transfer is simpler because you don't have to retransmit missing chunks. All you need to do is make sure the chunks are processed in the proper sequence.
Many protocols send a terminator packet to indicate the end of transmission. You could use this strategy if you don't want to send the number of chunks to the other side before transmission.
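A sketch of the terminator-packet idea, assuming a chunk format like the one above (a 4-byte sequence number followed by data): a datagram with no payload bytes after the sequence number marks the end of the transfer, so the receiver never needs the chunk count up front. The function name is hypothetical.

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <cstdint>

    // Send a header-only datagram; the receiver treats a 4-byte datagram as
    // "end of transmission".
    void send_terminator(int fd, const sockaddr_in& dest, uint32_t nextSeq)
    {
        uint32_t seqNet = htonl(nextSeq);
        sendto(fd, reinterpret_cast<const char*>(&seqNet), sizeof(seqNet), 0,
               reinterpret_cast<const sockaddr*>(&dest), sizeof(dest));
    }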
I have a C++ application which receives stock data and forwards it to another application via a socket (acting as a server).
The WSASend function returns with error code 10055 after a few seconds, and I found that the corresponding error message is:
"No buffer space available. An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full".
The problem arises only when I run the application after market hours, as we then receive the whole day's data (approximately 130 MB) within a few minutes (I assume this is relatively large).
I am doing so as a robustness test.
I tried to increase the send buffer (SO_SNDBUF) using the setsockopt function, but the same problem is still there.
How can I solve this problem? Is it related to the receiver's buffer?
Sending details:
For each complete message I call the send method, which uses overlapped sockets.
EDIT:
Can someone give general guidelines for handling high-frequency data in C++?
TCP's flow control will cause the internal send buffer to fill up if the receiver is not processing its end of the socket fast enough. It would seem from the error message that you are sending data without regard for how quickly the Winsock stack can process it. It would be helpful if you could state exactly how you are sending the data: are you waiting for all the data to arrive and then sending one big block, or sending it piecemeal?
Are you sending via a non-blocking or overlapped socket? In either case, after each send you should probably wait for a notification that the socket is in a state where it can send more data, either because select()/WaitForMultipleObjects() indicates it can (for non-blocking sockets), or because the overlapped I/O completes, signalling that the data has been successfully copied to the socket's internal send buffers.
You can overlap sends, i.e. queue up more than one buffer at a time - that's what overlapped I/O is for - but you need to pay careful regard to the memory implications of locking large numbers of pages and potentially exhausting the non-paged pool.
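As a minimal illustration of the "send, then wait for the completion" approach, here is a sketch using an event-based WSAOVERLAPPED with exactly one send outstanding at a time; SendBlockAndWait is an illustrative name, and a real high-throughput sender would normally keep a small, bounded number of sends in flight rather than just one.

    #include <winsock2.h>

    #pragma comment(lib, "ws2_32.lib")

    bool SendBlockAndWait(SOCKET s, char* data, ULONG len)
    {
        WSAOVERLAPPED ov = {};
        ov.hEvent = WSACreateEvent();

        WSABUF buf;
        buf.buf = data;
        buf.len = len;

        DWORD sent = 0;
        int rc = WSASend(s, &buf, 1, &sent, 0, &ov, nullptr);
        if (rc == SOCKET_ERROR && WSAGetLastError() != WSA_IO_PENDING)
        {
            WSACloseEvent(ov.hEvent);
            return false;
        }

        // Block until the data has been handed off to the socket's send buffers,
        // so we never queue sends faster than the stack can accept them.
        DWORD flags = 0;
        BOOL ok = WSAGetOverlappedResult(s, &ov, &sent, TRUE /* wait */, &flags);
        WSACloseEvent(ov.hEvent);
        return ok == TRUE;
    }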
Nick's answer pretty much hits the nail on the head; you're most likely exhausting the 'locked pages limit' by starting too many overlapped sends at once. Ideally you need to buffer your data in your own memory buffers and only have a set number of overlapped sends pending at any one time. I talk about how my IOCP framework allows you to deal with this kind of situation here http://www.lenholgate.com/blog/2008/07/write-completion-flow-control.html and the related TCP receive window flow control issues here http://www.lenholgate.com/blog/2008/06/data-distribution-servers.html and here http://www.serverframework.com/asynchronousevents/2011/06/tcp-flow-control-and-asynchronous-writes.html.
My preferred solution is to allow a configurable number of pending overlapped sends at any one time and once this limit is exceeded to start buffering data and then using the completion of the pending overlapped sends to drive the sending of the buffered data. This allows you to strictly control the amount of non-paged pool and the amount of 'locked pages' used and makes it possible to have lots of connections sending as fast as possible yet still control the resources that they use.
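A conceptual sketch of that policy might look like the following; Connection, IssueOverlappedSend, and kMaxPendingSends are hypothetical names, and the actual WSASend/OVERLAPPED wiring is left to whatever IOCP framework you use. The invariant is simply: never more than kMaxPendingSends overlapped sends outstanding, everything else waits in your own queue.

    #include <cstddef>
    #include <deque>
    #include <utility>
    #include <vector>

    const size_t kMaxPendingSends = 4;             // tune per connection / per server

    struct Connection
    {
        size_t pendingSends = 0;                   // overlapped sends currently in flight
        std::deque<std::vector<char>> queued;      // data waiting for a free send slot

        void Send(std::vector<char> data)
        {
            if (pendingSends < kMaxPendingSends)
            {
                ++pendingSends;
                IssueOverlappedSend(std::move(data));
            }
            else
            {
                // Over the limit: buffer in our own memory instead of handing
                // more pages to the kernel to lock.
                queued.push_back(std::move(data));
            }
        }

        // Called from the IOCP completion handler when a WSASend finishes.
        void OnSendCompleted()
        {
            --pendingSends;
            if (!queued.empty())
            {
                ++pendingSends;
                IssueOverlappedSend(std::move(queued.front()));
                queued.pop_front();
            }
        }

        void IssueOverlappedSend(std::vector<char> /*data*/)
        {
            // Post the actual WSASend with an OVERLAPPED here (framework-specific).
        }
    };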