When my client sends a file to the server, should I Sleep(100) or so before sending the next chunk to ensure the server has enough time to download + write the data?
Or does that seem completely unnecessary?
Also, I'm getting would-block errors (10035, WSAEWOULDBLOCK) when sending a chunk, so I'm just looping on send until it succeeds (if send == SOCKET_ERROR goto SendAgain;). Is that OK?
If you're sending your file over TCP, the protocol itself ensures that everything is received, so I wouldn't put a sleep between chunks.
The would-block error means you're handing data to send() faster than it can go out: either the chunk doesn't fit in your socket's output buffer, or you're sending so quickly that the remote side's receive buffer fills up and yours backs up behind it. It's fine to send that chunk again; when send() fails with would-block, nothing has actually been transmitted, so no data is lost.
Here is a small article about your error: Winsock error 10035
In my opinion, using a sleep to wait for something to finish is the wrong approach 99% of the time.
You can never know how long a given operation will take (it can be delayed by load spikes, other I/O problems, or whatever else).
If you want to make sure something important has completed, read about semaphores or similar synchronization primitives, where you explicitly lock/free around the start and end of the work.
Taken from a man-page:
When the message does not fit into the send buffer of the socket, send() normally blocks, unless the socket has been placed in nonblocking I/O mode. In nonblocking mode it would fail with the error EAGAIN or EWOULDBLOCK in this case. The select(2) call may be used to determine when it is possible to send more data.
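For illustration, here is a rough Winsock sketch of that select()-based approach, as an alternative to the goto loop in the question (the helper name send_all is made up, and error handling is kept minimal):

#include <winsock2.h>

/* Send all of buf, waiting with select() whenever the socket's send buffer
   is full, instead of busy-looping on WSAEWOULDBLOCK. */
int send_all(SOCKET sock, const char *buf, int len)
{
    int sent = 0;
    fd_set wfds;

    while (sent < len) {
        int n = send(sock, buf + sent, len - sent, 0);
        if (n != SOCKET_ERROR) {
            sent += n;                       /* send() may accept only part of the buffer */
            continue;
        }
        if (WSAGetLastError() != WSAEWOULDBLOCK)
            return SOCKET_ERROR;             /* a real error, give up */

        /* Send buffer is full: block here until the socket is writable again.
           The first argument to select() is ignored by Winsock. */
        FD_ZERO(&wfds);
        FD_SET(sock, &wfds);
        if (select(0, NULL, &wfds, NULL, NULL) == SOCKET_ERROR)
            return SOCKET_ERROR;
    }
    return sent;
}

The same loop also covers partial sends, since send() may accept fewer bytes than you asked it to.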
Sockets are generally bidirectional, so the same socket can be used both to send and to recv.
If I wanted to send some data (from another thread) while the socket is being read, what would the kernel do? This applies to both sides.
Consider this example: the server is sending you a file, and say it will take a long time (slow uplink or a very big file). The user gets bored and decides to SIGINT you. You catch it and tell the server to stop sending the file (with some kind of message).
Will you be able to send that "stop" message even though you're still reading from the socket? And of course, the same applies on the server side.
Hopefully I've been clear enough.
If I wanted to send some data (from another thread) while the socket is being read, what would the kernel do?
Nothing special... sockets aren't like garden hoses; some metadata is added to each packet that travels between the machines, and reading and writing happen independently. (One minor exception: if one side calls recv() while it still has unsent data sitting in local buffers because of the Nagle algorithm, which coalesces data into sensibly sized packets, the stack may flush that data immediately. That is an implementation-level latency-tuning detail, though, and doesn't change the big picture or the way the client and server use the TCP API.)
Consider this example: the server is sending you a file, and say it will take a long time (slow uplink or a very big file). The user gets bored and decides to SIGINT you. You catch it and tell the server to stop sending the file (with some kind of message). Will you be able to send that "stop" message even though you're still reading from the socket? And of course, the same applies on the server side.
The kernel accepts a limited amount of data for sending, and a limited amount of received data, after which it forces the sending side to wait until some has been consumed before sending more. So if you've sent data to a server, then get a local SIGINT and send an "oh, cancel that" in the same way, the server must read all of the already-sent data before it can see the "oh, cancel that".

If instead you set the Out-Of-Band (OOB) flag while sending the cancel message, the server can (if it's written to do so) detect that there's OOB data and read it before it has finished reading/processing the other data. It will still need to read and discard whatever in-band data you've already sent, but the flow control / buffering mentioned above means that should be a manageable amount, far less than your file size might be. Throughout all this, whatever you recv or the server sends in the other direction is independent and unaffected by the large client-to-server send, any OOB data, etc.
There's a discussion and example code from GNU at http://www.gnu.org/software/libc/manual/html_node/Out_002dof_002dBand-Data.html
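To make that concrete, here is a minimal sketch (the one-byte 'C' cancel message and the helper name are invented for illustration): the client marks the cancel byte as urgent with MSG_OOB, and the server checks select()'s exception set for urgent data between its normal reads.

#include <sys/select.h>
#include <sys/socket.h>

/* Sender side: the cancel request travels as a single urgent (OOB) byte. */
/*     send(sock, "C", 1, MSG_OOB);                                       */

/* Receiver side: returns 1 if the peer has sent the 'C' cancel byte as
   out-of-band data, 0 otherwise. */
int check_for_cancel(int sock)
{
    fd_set efds;
    struct timeval tv = {0, 0};            /* just poll, don't block */
    char oob;

    FD_ZERO(&efds);
    FD_SET(sock, &efds);                   /* the exception set flags urgent data */
    if (select(sock + 1, NULL, NULL, &efds, &tv) > 0 && FD_ISSET(sock, &efds)) {
        if (recv(sock, &oob, 1, MSG_OOB) == 1 && oob == 'C')
            return 1;
    }
    return 0;
}

The server would still read and discard the in-band file data it has already been sent, as described above.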
Thread 1 can safely write to the socket (with send) while thread 2 reads from it (with recv). What you need to be careful about is that the threads are synchronized at the point where you close() the socket; otherwise the file descriptor may be reused for something else, and the other thread (if not synchronized) could end up reading from a descriptor that now refers to something entirely different. One way to achieve this would be for your reading thread to shut down the socket, which should cause the other end to drop the connection and thus make an in-progress send fail.
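A rough sketch of that coordination with POSIX sockets and pthreads (names are illustrative, not a complete program): one thread shuts the socket down to unblock the other thread's recv(), and close() happens only once both threads are finished with the descriptor.

#include <pthread.h>
#include <sys/socket.h>
#include <unistd.h>

static int fd;                        /* the connected socket, shared by both threads */

static void *reader(void *arg)
{
    char buf[4096];
    (void)arg;
    /* recv() returns 0 once the peer closes or the socket is shut down. */
    while (recv(fd, buf, sizeof buf, 0) > 0) {
        /* ... process the data ... */
    }
    return NULL;
}

/* Called by the writing thread when it decides the connection is finished. */
static void finish(pthread_t reader_thread)
{
    shutdown(fd, SHUT_RDWR);          /* unblocks a recv() in progress in the other thread */
    pthread_join(reader_thread, NULL);
    close(fd);                        /* safe: no thread is still using the descriptor */
}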
I read somewhere that every TCP connection has its own 125kB output and input buffer. What happens if this buffer is full and I still continue sending data on Linux?
According to http://www.kernel.org/doc/man-pages/online/pages/man2/send.2.html the packets are just silently dropped, without notifying me. What can I do to stop this from happening? Is there any way to find out if at least some of my data has been sent correctly, so that I can continue at a later point in time?
The short answer is this: send() calls on a TCP socket will just block until the TCP sliding window (or internal queue buffers) opens up as a result of the remote endpoint receiving and consuming data. It's not much different from trying to write bytes to a file faster than the disk can store them.
If your socket is configured for non-blocking mode, send will return EWOULDBLOCK or EAGAIN, until data can be sent. Standard poll, select, and epoll calls will work as expected so you know when to "send" again.
I don't think the packets are dropped. What is more likely is that the program's calls to write() will either block or return a failure.
I have a problem: when I'm sending huge amounts of data through POSIX sockets (it doesn't matter whether it's files or other data), at some point I don't receive what I expect. I used Wireshark to find out what's causing the errors, and exactly at the point where my app breaks there are packets marked in red saying "zero window" or "window full", sent in both directions.
The result is that the application layer does not receive one piece of data sent by the send() function. It does get the next part, though...
Am I doing something wrong?
EDIT:
Let's say I want to send 19232 pieces of data, 1024 bytes each. At some random point (or not at all), instead of the 9344th packet I receive the 9345th. I didn't implement any retransmission protocol because I thought TCP does that for me.
Zero Window / Window Full is an indication that one end of the TCP connection cannot receive any more data until its client application reads some of the data it has already received. In other words, it is one side of the connection telling the other side "do not send any more data until I tell you otherwise".
TCP does handle retransmissions. Your problem is likely that:
The application on the receiving side is not reading data fast enough.
This causes the receiving TCP to report Window Full to the sending TCP.
This in turn causes send() on the sending TCP side to return either 0 (no bytes written), or -1 with errno set to EWOULDBLOCK.
Your sending application is NOT detecting this case, and is assuming that send() sent all the data you asked to send.
This causes the data to get lost. You need to fix the sending side so that it handles send() failing, including the case where it returns a value smaller than the number of bytes you asked it to send. If the socket is non-blocking, this means waiting until select() tells you that the socket is writable before trying again.
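A sketch of what that fix can look like on a non-blocking POSIX socket (the helper name is made up; it loops over both partial writes and EWOULDBLOCK):

#include <errno.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Send the whole buffer, handling partial writes and a full send buffer. */
ssize_t send_all(int sock, const char *buf, size_t len)
{
    size_t sent = 0;
    fd_set wfds;

    while (sent < len) {
        ssize_t n = send(sock, buf + sent, len - sent, 0);
        if (n > 0) {
            sent += (size_t)n;                        /* partial write: keep going */
        } else if (n < 0 && (errno == EWOULDBLOCK || errno == EAGAIN)) {
            FD_ZERO(&wfds);                           /* wait for room in the send buffer */
            FD_SET(sock, &wfds);
            if (select(sock + 1, NULL, &wfds, NULL, NULL) < 0)
                return -1;
        } else {
            return -1;                                /* real error (or zero-length write) */
        }
    }
    return (ssize_t)sent;
}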
First of all, TCP is a byte stream protocol, not a packet-based protocol. Just because you sent a 1024 byte chunk doesn't mean it will be received that way. If you're filling the pipe fast enough to get a zero window condition (i.e., that there is no more room in either a receive buffer or send buffer) then it's very likely that the receiver code will at some point be able to read far more at one time than the size of your "packet".
If you haven't specifically requested non-blocking sockets, then both send and recv will block with a zero window/window full condition rather than return an error.
If you want to paste in the receiver-side code we can take a look, but from what you've described it sounds very likely that your 9344th read is actually getting more bytes than your packet size. Do you check the value returned from recv?
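For illustration, a receive loop that respects the byte-stream nature of TCP would look roughly like this (assuming a blocking socket; the function name is invented):

#include <sys/socket.h>
#include <sys/types.h>

/* Read exactly len bytes, however TCP happens to segment them on the wire. */
int recv_exact(int sock, char *buf, size_t len)
{
    size_t got = 0;
    while (got < len) {
        ssize_t n = recv(sock, buf + got, len - got, 0);
        if (n <= 0)
            return -1;            /* error, or the connection closed early */
        got += (size_t)n;         /* recv() may return fewer bytes than requested */
    }
    return 0;
}

If the real cause is unchecked short reads, reading each 1024-byte record this way should make the "missing packet" disappear.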
Does iperf also fail to send this number of packets of this size on your network? If not, check how it sends that amount of data.
Hm, from what I read on Wikipedia this may be some kind of buffer overflow (receiver reports zero receive window). Just a guess though.
I am facing an issue with the recv() and send() Winsock APIs: recv() hangs while receiving the last packet.
Problem description:
System A's app is writing data over a non-blocking socket and system B's app is receiving data over a blocking socket in chunks of 64k.
It seems that while reading what is probably the last packet, which may be less than or equal to 64k, the receive freezes. I am not sure whether the receive or the send of the last packet is the issue, but I am observing this intermittently in our legacy applications.
Has anyone faced a similar issue before? If so, please share your input.
If not, can you suggest some troubleshooting techniques to narrow down the root cause?
Just for information, these are Win2k3 servers.
Thanks,
Varun
Wireshark is a great tool for troubleshooting networking code. It'll show you exactly what packets are entering and leaving your network interface in near real time.
As to your specific issue: are you saying that the last chunk of data might be shorter than 64k? If so, your protocol should include some message length information so the receiver knows how much data to look for.
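For example (a sketch only; the 4-byte header and the function names are invented for illustration), each chunk could be prefixed with its length in network byte order, and the receiver would first read the header and then exactly that many payload bytes:

#include <winsock2.h>

/* Loop until exactly len bytes have arrived; recv() may return fewer. */
static int read_n(SOCKET s, char *buf, int len)
{
    int got = 0;
    while (got < len) {
        int n = recv(s, buf + got, len - got, 0);
        if (n <= 0)
            return -1;                     /* error or connection closed */
        got += n;
    }
    return 0;
}

/* Read one length-prefixed chunk: 4-byte big-endian length, then payload. */
int recv_chunk(SOCKET s, char *buf, unsigned long bufsize, unsigned long *outlen)
{
    unsigned long header;
    if (read_n(s, (char *)&header, sizeof header) != 0)
        return -1;
    *outlen = ntohl(header);               /* sender wrote the length with htonl() */
    if (*outlen > bufsize)
        return -1;                         /* chunk larger than the caller's buffer */
    return read_n(s, buf, (int)*outlen);
}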
A couple of guesses...
If you are using UDP, perhaps one or more packets are being dropped en route (which UDP is permitted to do whenever it feels like). In that case, your receiver might end up waiting for data that is simply never going to arrive; to fix this you would need either to implement some way of automatically resending the lost data, or (if you don't strictly need all the data) some way for the sender to notify the receiver that it is done transmitting, so the receiver can stop waiting. (Of course, you would also need to handle the case where this notification gets dropped... it can get complicated if you want 100% robustness.)
If you are using TCP, perhaps you are not carefully checking the values returned by send() on the sending side? If you are assuming that send() will always send the number of bytes you asked it to, you might end up thinking send() sent all the bytes when in fact it only sent some (or none) of them... so the sender would think the transmission was complete, while the receiver would be stuck waiting for data that isn't going to arrive.
You might have a problem with the server sending data down the wire faster than the receiver is able to read it. You could try increasing the receive buffer:
int nSocketBuffer = 131072; // 128k
if (setsockopt(m_sSocket,SOL_SOCKET,SO_RCVBUF,(LPCSTR)&nSocketBuffer,sizeof(int)) == SOCKET_ERROR)
{
// socket error
return false;
}
I have created a named pipe with following flags:
PIPE_ACCESS_DUPLEX - both sides have read/write access
PIPE_TYPE_MESSAGE - data is written to the pipe as messages
PIPE_WAIT - blocking reads/writes
From the server side I am calling ConnectNamedPipe and waiting for the clients to connect.
From the client side I am calling CallNamedPipe to connect to server and write data of length N.
On the server side:
After the client connects, PeekNamedPipe is called to find out how large a buffer to allocate for the incoming data.
After getting the exact buffer size (N), I allocate a buffer of length N and call ReadFile to read the data from the pipe.
Problem:
The issue is that on single-processor machines the PeekNamedPipe API returns a buffer length of 0, and because of that the subsequent ReadFile fails.
After some investigation I found that, due to a race condition, PeekNamedPipe gets called before the client has put any data into the pipe.
Any idea how to solve this race condition? I need PeekNamedPipe to get the buffer size, but it cannot be called before the data is available.
I thought of introducing a custom header that carries the buffer length in the message itself, but that sounds like a lot of changes.
Is there a better and more reliable way to get the length of the data to be read from the pipe?
There are a large number of race conditions you can get with named pipes. You have to deal with them in your code. Possibilities:
ConnectNamedPipe() on the server side may fail with ERROR_PIPE_CONNECTED if the client managed to connect right after the CreateNamedPipe() call. Just treat it as connected.
WaitNamedPipe() on the client side does not set the error code when it times out. Assume a timeout.
CreateFile() on the client side may fail with ERROR_PIPE_BUSY if another client managed to grab the pipe first, even after a successful WaitNamedPipe() call. Go back to the WaitNamedPipe() state.
FlushFileBuffers() may fail with ERROR_PIPE_NOT_CONNECTED if the client already saw the message and closed the pipe. Ignore that.
An overlapped ReadFile() call may complete immediately and not return ERROR_IO_PENDING. Consider the read completed.
PeekNamedPipe() may report 0 bytes available if the other end has not written to the pipe yet. Sleep(1) and repeat.
It sounds like you want asynchronous I/O. Just let Windows notify you when data is available, and peek at that moment.
Having a packet size in the header is a good idea in any case; it makes the protocol less dependent on the transport layer.
Alternatively, if the client sends its data and then closes the pipe, you can accumulate it into a buffer with ReadFile until EOF.
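A rough sketch of that read-until-EOF approach (buffer handling simplified, names invented): keep calling ReadFile into a growing buffer until the client closes its end of the pipe (ERROR_BROKEN_PIPE). If the handle happens to be in message-read mode, ERROR_MORE_DATA just means the current message did not fit and you should keep reading.

#include <windows.h>
#include <stdlib.h>

/* Read everything the client writes into a growing buffer, until the client
   closes its end of the pipe (EOF). */
BOOL ReadUntilEof(HANDLE hPipe, char **outBuf, DWORD *outLen)
{
    DWORD cap = 4096, used = 0, got = 0;
    char *buf = (char *)malloc(cap);
    if (buf == NULL)
        return FALSE;

    for (;;) {
        if (used == cap) {                            /* buffer full: double it */
            char *tmp = (char *)realloc(buf, cap * 2);
            if (tmp == NULL) { free(buf); return FALSE; }
            buf = tmp;
            cap *= 2;
        }
        if (ReadFile(hPipe, buf + used, cap - used, &got, NULL)) {
            used += got;                              /* got some data, keep reading */
        } else {
            DWORD err = GetLastError();
            used += got;                              /* a partial read may still carry data */
            if (err == ERROR_MORE_DATA)
                continue;                             /* message bigger than the free space */
            if (err == ERROR_BROKEN_PIPE)
                break;                                /* client closed the pipe: done */
            free(buf);
            return FALSE;                             /* a real error */
        }
    }
    *outBuf = buf;
    *outLen = used;
    return TRUE;
}

Because the ReadFile call blocks (PIPE_WAIT) until the client actually writes something, this also sidesteps the race where the server looks at the pipe before any data has arrived.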