Overlapped message named pipe, ERROR_MORE_DATA and CancelIoEx - c++

I am using overlapped message-mode named pipes for the first time and have come across this problem. Both client and server use overlapped operations, and here is the specific situation I have a problem with.
Client
C1. Connects to the server.
C2. Sends a message bigger than both the pipe buffer and the buffer passed to the overlapped read operation in the server.
C3. Successfully cancels the send operation.
Server
S1. Creates and waits for the client.
S2. When the client is connected, it reads the message.
S2.1. Because the message doesn't fit into the buffer (ERROR_MORE_DATA), it is read part by part.
It seems to me that there is no way to tell when the whole message, as an isolated unit, has been cancelled. In particular, if the client cancels the send operation, the server does not receive the whole message, just a part of it, and the subsequent read operation returns ERROR_IO_PENDING (in my case), which means there is no data to be read and the read operation has been queued. I would expect some means of telling the reader that the message has been cancelled, so that the reader can act upon it.
However, the relevant documentation is scattered across MSDN, so I may well be missing something. I would really appreciate it if anyone could shed some light on this. Thanks.

You are correct, there is no way to tell.
If you cancel the WriteFile partway through, only part of the message will be written, so only that part will be read by the server. There is no "bookkeeping" information sent about how large the message was going to be before you cancelled it - what is sent is just the raw data.
So the answer is: Don't cancel the IO, just wait for it to succeed.
If you do need to cancel IO partway through, you should probably cut the connection and start again from the beginning, just as you would for a network outage.
(You could check your OVERLAPPED structure to find out how much was actually written, and carry on from there, but if you wanted to do that you would probably just not cancel the IO in the first place.)
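For illustration, a minimal sketch of that OVERLAPPED check, assuming hPipe and ov belong to a still-pending overlapped WriteFile (the function name is made up):

#include <windows.h>

// hPipe: pipe handle opened with FILE_FLAG_OVERLAPPED
// ov:    the OVERLAPPED used for the still-pending WriteFile
// Returns the number of bytes actually transferred before the cancel.
DWORD CancelPendingWrite(HANDLE hPipe, OVERLAPPED& ov)
{
    DWORD bytesWritten = 0;
    CancelIoEx(hPipe, &ov);  // request cancellation of this one operation
    // Wait for the operation to finish, whether cancelled or completed.
    if (!GetOverlappedResult(hPipe, &ov, &bytesWritten, TRUE) &&
        GetLastError() == ERROR_OPERATION_ABORTED)
    {
        // Cancelled partway: bytesWritten bytes already reached the pipe,
        // so the reader will see only a truncated message.
    }
    return bytesWritten;
}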
Why did you want to cancel the IO anyway? What set of circumstances triggers this requirement?

Related

Multiple Boost::ASIO async_send_to in one go

I want to increase the throughput of my UDP game server, which uses Boost ASIO.
Right now, every time I need to send a packet, I put it in a queue, then check whether there is a pending async_send_to operation: if yes, do nothing; if not, call async_send_to.
Then I wait for the write handler to be called and call async_send_to for the next packet in the queue, if any.
The documentation says that this is the way to do it "for a TCP socket", but there is NOTHING on the whole internet about UDP sockets.
Try it, search for it on Stack Overflow: you will see nobody talks about this, and the two questions you will find were left unanswered.
Why is it kept a secret?
And for the million-dollar question: can I safely call async_send_to multiple times in a row WITHOUT waiting for the write handler to be called?
Thanks in advance.
This logic is meaningless for the UDP protocol, since it does not need to block the send operation. A datagram is either delivered or lost. UDP does not have to store it in the output buffer and resend it indefinitely until an ACK packet arrives.
No, you cannot safely call async_send_to multiple times in a row WITHOUT waiting for the write handler to be called. See Asynchronous IO with Boost.Asio to see precisely why.
However, Asio supports scatter/gather I/O, so you can call async_send_to with multiple buffers, e.g.:
typedef std::deque<boost::asio::const_buffer> ConstBuffers;

std::string msg_1("Blah");
...
std::string msg_n("Blah");

ConstBuffers buffers;
buffers.push_back(boost::asio::buffer(msg_1));  // wrap each string in a const_buffer
...
buffers.push_back(boost::asio::buffer(msg_n));

// Note: msg_1..msg_n must stay alive until the write handler runs.
socket_.async_send_to(buffers, tx_endpoint_, write_handler);
So you could increase your throughput by double buffering your message queue and using gathered writes...
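For illustration, a minimal sketch of such a double-buffered sender, assuming a single-threaded io_service and the socket_/tx_endpoint_ members from the snippet above; the Sender class and its member names are made up. Note that one gathered async_send_to emits a single datagram containing all the queued messages, so the receiver must be able to split them apart:

#include <boost/asio.hpp>
#include <deque>
#include <string>
#include <vector>

class Sender
{
public:
    Sender(boost::asio::ip::udp::socket& socket,
           boost::asio::ip::udp::endpoint endpoint)
        : socket_(socket), tx_endpoint_(endpoint), sending_(false) {}

    // Queue a message; flush the whole queue in one gathered send
    // when no send is in flight.
    void send(std::string msg)
    {
        queue_.push_back(std::move(msg));
        if (!sending_)
            flush();
    }

private:
    void flush()
    {
        sending_ = true;
        inflight_.swap(queue_);            // double buffer: grab pending messages
        std::vector<boost::asio::const_buffer> buffers;
        for (const auto& m : inflight_)
            buffers.push_back(boost::asio::buffer(m));
        socket_.async_send_to(buffers, tx_endpoint_,
            [this](boost::system::error_code, std::size_t)
            {
                inflight_.clear();
                if (!queue_.empty())
                    flush();               // more messages arrived while sending
                else
                    sending_ = false;
            });
    }

    boost::asio::ip::udp::socket& socket_;
    boost::asio::ip::udp::endpoint tx_endpoint_;
    std::deque<std::string> queue_, inflight_;
    bool sending_;
};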

Socket data race [duplicate]

This question already has answers here:
Are parallel calls to send/recv on the same socket valid?
(3 answers)
Closed 7 years ago.
Sockets are generally bidirectional, so the same socket can be used to send and to recv.
If I wanted to send some data (on another thread) while the socket is being read from, what would the kernel do? This applies to both sides.
Consider this example: the server is sending you a file, and it will take a while (low uplink or a very big file). The user gets bored and decides to SIGINT you. You catch it and tell the server to stop sending the file (with some kind of message).
Will you be able to send, to tell the server to stop sending, even though you're reading from it? And of course, the same applies to the server side.
Hopefully I've been clear enough.
If I wanted to send some data (on another thread) while the socket is getting read, what would the kernel do?
Nothing special... sockets aren't like garden hoses... there's just some meta-data added to each packet that's sent between the machines, so the reading and writing happen independently. (One caveat: if one side calls recv() on a socket that has unsent data in the local buffers due to the Nagle algorithm, which bunches data up into sensibly sized packets, the pending send may time out immediately and go out with whatever it has; but that is an implementation latency-tuning detail that doesn't change the big picture or the way the client and server call the TCP API.)
Consider this example: the server is sending you a file, and it will take a while (low uplink or a very big file). The user gets bored and decides to SIGINT you. You catch it and tell the server to stop sending the file (with some kind of message). Will you be able to send, to tell the server to stop sending, even though you're reading from it? And of course, the same applies to the server side.
The kernel accepts a limited amount of data to be sent, and a limited amount of data received, after which it forces the sending side to wait until some has been consumed before sending more. So, if you've sent data to a server, then get a local SIGINT and send an "oh, cancel that" in the same way, the server must read all the already-sent data before it can see the "oh, cancel that". If instead of sending it "in the same way" you turn on the Out Of Band (OOB) flag while sending the cancel message, then the server can (if it's written to do so) detect that there's OOB data and read it before it's completed reading/processing the other data. It will still need to read and discard whatever in-band data you've already sent, but the flow control / buffering mentioned above means that should be a manageable amount - far less than your file size might be. Throughout all this, whatever you want to recv, or the server sends, is independent of and unaffected by the large client->server send, any OOB data, etc.
There's a discussion and example code from GNU at http://www.gnu.org/software/libc/manual/html_node/Out_002dof_002dBand-Data.html
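As a rough illustration, here is what the OOB path might look like with BSD sockets (fd is a hypothetical connected TCP socket; note that TCP urgent data is effectively limited to a single byte):

#include <sys/select.h>
#include <sys/socket.h>
#include <sys/time.h>

// Client: send a one-byte "cancel" mark as out-of-band data so the server
// can notice it without first draining the in-band file data.
void send_cancel(int fd)
{
    const char mark = 'C';
    send(fd, &mark, 1, MSG_OOB);
}

// Server: poll for pending OOB data before the next in-band read; select()
// reports urgent data via the "exceptional conditions" set.
bool oob_pending(int fd)
{
    fd_set except_set;
    FD_ZERO(&except_set);
    FD_SET(fd, &except_set);
    timeval tv = {0, 0};  // zero timeout: poll without blocking
    return select(fd + 1, nullptr, nullptr, &except_set, &tv) > 0
        && FD_ISSET(fd, &except_set);
}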
Thread 1 can safely write to the socket (with send) whilst thread 2 reads from the socket (with recv). What you need to be careful of is that the threads are synchronised at the point where you close() the socket; otherwise the file descriptor may be reused elsewhere, and the other thread (if not synchronised) could end up reading from a file descriptor now used for something else. One way to achieve this is for your reading thread to shutdown() the socket, which should cause the other end to drop the connection and thus make an in-progress send fail with an error.
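A sketch of that shutdown-based teardown, assuming POSIX sockets and a hypothetical sock_fd shared by the two threads:

#include <sys/socket.h>

// Called from the reading thread once the connection is finished: shut the
// socket down instead of close()-ing it. The descriptor stays valid, so it
// cannot be recycled under the other thread, while any blocked or future
// send()/recv() on it fails instead of touching reused resources.
void stop_connection(int sock_fd)
{
    shutdown(sock_fd, SHUT_RDWR);  // stop both directions
    // close(sock_fd) is done later, after both threads have synchronised.
}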

how to resolve WSAEWOULDBLOCK error

I have a Win7 application where I am sending data between 2 clients over a TCP connection. While testing I found that I was frequently getting the WSAEWOULDBLOCK error on my socket. To resolve this error I put a while loop around it, for example:
do
{
    size_t value = ::send(); /* with proper arguments */
} while (GetLastError() == 10035);
So if I get error 10035 I resend the data.
But now I see that this while loop sometimes runs infinitely and my application goes into a kind of deadlocked state. I tried increasing the size of the socket buffer, but to no avail.
If anybody has any idea how to resolve the WSAEWOULDBLOCK error, please let me know.
WSAEWOULDBLOCK is not really an error but simply tells you that your send buffers are full. This can happen if you saturate the network or if the other side simply doesn't acknowledge the received data. Take a look at the select() function, which allows you to wait until buffer space is available or a timeout occurs. There is also a way to bind a Win32 event to a socket (WSAEventSelect), which then allows its use with WaitForMultipleObjects in case you want to abort waiting early.
BTW: I initially wanted to object to your use of the term "deadlock", but this is also something that could be happening: if you wait to send some response before receiving the next request, while the other side wants to send its next request before receiving your response, your applications are effectively deadlocked. Using select(), you can determine whether you can send data, whether you can receive data, or whether the connection has failed, which allows you to handle these cases correctly when they occur.
Note: I also assume that your code is not really a call to socket() but one to send/recv.
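As an illustration of the select() approach, a minimal Winsock sketch that waits for buffer space instead of spinning (assumes s is a connected non-blocking SOCKET; error handling trimmed):

#include <winsock2.h>

// Send len bytes, blocking in select() whenever the send buffer is full
// rather than busy-looping on WSAEWOULDBLOCK.
bool send_all(SOCKET s, const char* data, int len)
{
    int sent = 0;
    while (sent < len)
    {
        int n = ::send(s, data + sent, len - sent, 0);
        if (n != SOCKET_ERROR)
        {
            sent += n;
            continue;
        }
        if (WSAGetLastError() != WSAEWOULDBLOCK)
            return false;  // a real error, give up
        fd_set writable;
        FD_ZERO(&writable);
        FD_SET(s, &writable);
        // Wait until the socket becomes writable again (the first argument
        // is ignored by Winsock's select).
        if (select(0, nullptr, &writable, nullptr, nullptr) == SOCKET_ERROR)
            return false;
    }
    return true;
}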

File transfer C++

When my client sends a file to the server, should I Sleep(100) or so before sending the next chunk, to ensure the server has enough time to receive and write the data?
Or does that just seem completely unnecessary?
Also, I'm getting would-block errors (#10035) when sending a chunk, so I'm just looping send until it succeeds (if send == SOCKET_ERROR goto SendAgain;) - is that OK?
If you're sending your file via TCP, the protocol itself ensures that everything is received; I wouldn't put a sleep between chunks.
The would-block error means either that you're sending data faster than your output buffer can drain, or that the remote side isn't consuming it fast enough and the local send buffer has filled up. It is OK to send that chunk again: the would-block result means the data was not accepted, so nothing from that call was transmitted and it must be retried.
Here is a small article about your error: Winsock error 10035
In my opinion, using a sleep function to wait for something to complete is the wrong way 99% of the time.
You never know how long a process will need; it can be interrupted by e.g. load spikes, other I/O problems, or whatever.
If you want to make sure something important has executed completely, read about semaphores or similar primitives that let you lock/release on start/end.
Taken from a man-page:
When the message does not fit into the send buffer of the socket,
send() normally blocks, unless the socket has been placed in
nonblocking I/O mode. In nonblocking mode it would fail with the error
EAGAIN or EWOULDBLOCK in this case. The select(2) call may be
used to determine when it is possible to send more data.
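To make the "no Sleep needed" point concrete, here is a sketch of a chunked file send over a blocking TCP socket (a hypothetical helper; with a blocking socket, send() itself waits for buffer space, so no pacing and no would-block loop is required):

#include <fstream>
#include <winsock2.h>

// Stream a file over a blocking TCP socket. TCP's flow control paces the
// transfer to whatever the receiver can accept; Sleep() adds nothing.
bool send_file(SOCKET s, const char* path)
{
    std::ifstream in(path, std::ios::binary);
    if (!in)
        return false;
    char chunk[4096];
    for (;;)
    {
        in.read(chunk, sizeof chunk);          // may read a short final chunk
        int got = static_cast<int>(in.gcount());
        if (got == 0)
            break;                             // nothing left to send
        const char* p = chunk;
        while (got > 0)                        // send() may accept less than asked
        {
            int n = ::send(s, p, got, 0);
            if (n == SOCKET_ERROR)
                return false;                  // blocking socket: a real error
            p += n;
            got -= n;
        }
    }
    return in.eof();                           // whole file read and sent
}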

How to get the length of data to be read (reliably) in named pipes?

I have created a named pipe with following flags:
PIPE_ACCESS_DUPLEX - both sides get read/write access
PIPE_TYPE_MESSAGE - message-type reads
PIPE_WAIT - blocking read/write
From the server side I am calling ConnectNamedPipe and waiting for the clients to connect.
From the client side I am calling CallNamedPipe to connect to server and write data of length N.
On the server side:
After the client connects, PeekNamedPipe is called to get the length of the buffer to allocate for reading the data.
After getting the exact buffer size (N), I allocate a buffer of length N and call ReadFile to read the data from the pipe.
Problem:
The issue is that on single-processor machines the PeekNamedPipe API returns the buffer length as 0, and the later ReadFile therefore fails.
After some investigation I found that, due to a race condition, PeekNamedPipe gets called even before the data is put onto the pipe by the client.
Any idea how to solve this race condition? I need to call PeekNamedPipe to get the buffer size, and PeekNamedPipe cannot be called before the data is available.
I thought of introducing a custom header to indicate the buffer length in the message itself, but that sounds like a lot of changes.
Is there a better, more reliable way to get the length of the data to be read from the pipe?
There are a large number of race conditions you can get with named pipes. You have to deal with them in your code. Possibilities:
ConnectNamedPipe() on the server side may return ERROR_PIPE_CONNECTED if the client managed to connect right after the CreateNamedPipe() call. Just treat it as connected.
WaitNamedPipe() on the client side does not set the error code when it times out. Assume a timeout.
CreateFile() on client side may return ERROR_PIPE_BUSY if another client managed to grab the pipe first, even after a successful WaitNamedPipe() call. Go back to WaitNamedPipe state.
FlushFileBuffers() may return ERROR_PIPE_NOT_CONNECTED if the client already saw the message and closed the pipe. Ignore that.
An overlapped ReadFile() call may complete immediately and not return ERROR_IO_PENDING. Consider the read completed.
PeekNamedPipe() may return 0 if the server has not written to the pipe yet. Sleep(1) and repeat.
It sounds like you want asynchronous I/O. Just let Windows notify you when data is available, and peek at that moment.
Having a packet size in the header is a good idea in any case; it makes the protocol less dependent on the transport layer.
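A sketch of that header idea, assuming a byte-mode pipe (on a message-mode pipe each WriteFile already forms its own message, so the length prefix would arrive as a separate message); the names are illustrative:

#include <windows.h>
#include <vector>

// Helper: read exactly len bytes, looping because a byte-mode pipe may
// return fewer bytes than requested per ReadFile call.
bool ReadExact(HANDLE hPipe, void* buf, DWORD len)
{
    char* p = static_cast<char*>(buf);
    while (len > 0)
    {
        DWORD got = 0;
        if (!ReadFile(hPipe, p, len, &got, nullptr) || got == 0)
            return false;
        p += got;
        len -= got;
    }
    return true;
}

// Writer: prefix each payload with a 4-byte length.
bool WriteMsg(HANDLE hPipe, const std::vector<char>& payload)
{
    DWORD len = static_cast<DWORD>(payload.size());
    DWORD written = 0;
    return WriteFile(hPipe, &len, sizeof len, &written, nullptr)
        && WriteFile(hPipe, payload.data(), len, &written, nullptr);
}

// Reader: length first, then exactly that many bytes - no PeekNamedPipe,
// hence no peek-before-data race.
bool ReadMsg(HANDLE hPipe, std::vector<char>& payload)
{
    DWORD len = 0;
    if (!ReadExact(hPipe, &len, sizeof len))
        return false;
    payload.resize(len);
    return len == 0 || ReadExact(hPipe, payload.data(), len);
}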
Alternatively, if the client sends the data and then closes the pipe, you can accumulate into a buffer with ReadFile until EOF.
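And a sketch of the accumulate-until-done approach on a message-mode pipe, growing the buffer while ReadFile keeps reporting ERROR_MORE_DATA (a blocking handle is assumed):

#include <windows.h>
#include <vector>

// Read one complete message from a blocking, message-mode pipe without
// knowing its size up front: keep reading while ERROR_MORE_DATA says the
// current message has more bytes left.
bool ReadWholeMessage(HANDLE hPipe, std::vector<char>& msg)
{
    msg.clear();
    char chunk[512];
    for (;;)
    {
        DWORD got = 0;
        BOOL ok = ReadFile(hPipe, chunk, sizeof chunk, &got, nullptr);
        msg.insert(msg.end(), chunk, chunk + got);
        if (ok)
            return true;   // final piece read, message complete
        if (GetLastError() != ERROR_MORE_DATA)
            return false;  // pipe closed or a real error
        // ERROR_MORE_DATA: loop for the rest of this message
    }
}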