WSAEventSelect model - C++

Hey, I'm using WSAEventSelect for event notification on sockets. So far everything is working like a charm, but there is one problem.
The client is a .NET application and the server is written in Winsock C++. In the .NET application I'm using the System.Net.Sockets.Socket class for TCP/IP. When I call the Socket.Shutdown() and Socket.Close() methods, I receive the FD_CLOSE event in the server, which I'm pretty sure is fine. The problem occurs when I check the iErrorCode of the WSANETWORKEVENTS structure I passed to WSAEnumNetworkEvents. I check it like this:
if (listenerNetworkEvents.lNetworkEvents & FD_CLOSE)
{
    if (listenerNetworkEvents.iErrorCode[FD_CLOSE_BIT] != 0)
    {
        // it comes here,
        // which means there is an error,
        // and the error I get is WSAECONNABORTED
        printf("FD_CLOSE failed with error %d\n",
               listenerNetworkEvents.iErrorCode[FD_CLOSE_BIT]);
        break;
    }
    closesocket(socketArray[Index]);
}
But it fails with the WSAECONNABORTED error. Why is that so?
EDIT: Btw, I'm running both the client and server on the same computer, is it because of that? Also, I receive the FD_CLOSE event when I do this:
server.Shutdown(SocketShutdown.Both); // in .NET C#, client code

I'm guessing you're calling Shutdown() and then Close() immediately afterward. That will give the symptom you're seeing, because this is "slamming the connection shut". Shutdown() does initiate a graceful disconnect (TCP FIN), but immediately following it with Close() aborts that, sending a TCP RST packet to the remote peer. Your Shutdown(SocketShutdown.Both) call slams the connection shut, too, by the way.
The correct pattern is:
Call Shutdown() with the direction parameter set to "write", meaning we won't be sending any more data to the remote peer. This causes the stack to send the TCP FIN packet.
Go back to waiting for Winsock events. When the remote peer is also done writing, it will call Shutdown("write"), too, causing its stack to send your machine a TCP FIN packet, and for your application to get an FD_CLOSE event. While waiting, your code should be prepared to continue reading from the socket, because the remote peer might still be sending data.
(Please excuse the pseudo-C# above. I don't speak .NET, only C++.)
Both peers are expected to use this same shutdown pattern: each tells the other when it's done writing, and then waits to receive notification that the remote peer is done writing before it closes its socket.
The important thing to realize is that TCP is a bidirectional protocol: each side can send and receive independently of the other. Closing the socket to reading is not a nice thing to do. It's like having a conversation with another person but only talking and being unwilling to listen. The graceful shutdown protocol says, "I'm done talking now. I'm going to wait until you stop talking before I walk away."
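To make the pattern concrete, here is a minimal Winsock C++ sketch of the same idea, reusing socketArray[Index] and listenerNetworkEvents from the question. It assumes the socket was registered with WSAEventSelect for FD_READ | FD_CLOSE; the buffer size and the point at which you decide you are done sending are illustrative, not taken from the original code.

// 1. When this side has nothing more to send, announce it.
//    This sends the TCP FIN but keeps the socket open for reading.
shutdown(socketArray[Index], SD_SEND);

// 2. Keep servicing events; the peer may legitimately still be sending.
if (listenerNetworkEvents.lNetworkEvents & FD_READ)
{
    char buffer[4096];
    int bytes = recv(socketArray[Index], buffer, sizeof(buffer), 0);
    if (bytes > 0)
    {
        // process the remaining data from the peer
    }
}

// 3. FD_CLOSE with no error now means the peer finished its own graceful
//    shutdown, so both directions are done and the socket can be released.
if (listenerNetworkEvents.lNetworkEvents & FD_CLOSE)
{
    if (listenerNetworkEvents.iErrorCode[FD_CLOSE_BIT] == 0)
    {
        closesocket(socketArray[Index]);
    }
}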

Related

Why should I use shutdown() before closing a socket?
On this MSDN page:
Sending and Receiving Data on the Client
It recommends closing the sending side of the socket by using:
shutdown(SOCK_ID, SD_SEND);
Why should I?
Maybe I don't have to, and it's just a recommendation? Maybe it's for saving memory? Maybe for speed?
Does anyone have an idea?
The answer is in the shutdown() documentation:
If the how parameter is SD_SEND, subsequent calls to the send function are disallowed. For TCP sockets, a FIN will be sent after all data is sent and acknowledged by the receiver.
...
To assure that all data is sent and received on a connected socket before it is closed, an application should use shutdown to close the connection before calling closesocket. One method to wait for notification that the remote end has sent all its data and initiated a graceful disconnect uses the WSAEventSelect function as follows:
Call WSAEventSelect to register for FD_CLOSE notification.
Call shutdown with how=SD_SEND.
When FD_CLOSE is received, call recv or WSARecv until the function completes with success and indicates that zero bytes were received. If SOCKET_ERROR is returned, then the graceful disconnect is not possible.
Call closesocket.
Another method to wait for notification that the remote end has sent all its data and initiated a graceful disconnect uses overlapped receive calls, as follows:
Call shutdown with how=SD_SEND.
Call recv or WSARecv until the function completes with success and indicates zero bytes were received. If SOCKET_ERROR is returned, then the graceful disconnect is not possible.
Call closesocket.
...
For more information, see the section on Graceful Shutdown, Linger Options, and Socket Closure.
In other words, at least for TCP, calling shutdown(SD_SEND) notifies the peer that you are done sending any more data, and that you will likely be closing your end of the connection soon. Preferably, the peer will also do the same courtesy for you. This way, both peers can know the connection was closed intentionally on both ends. This is known as a graceful disconnect, and not an abortive or abnormal disconnect.
By default, if you do not call shutdown(SD_SEND), closesocket() will attempt to perform a graceful shutdown for you UNLESS the socket's linger option is disabled. It is best not to rely on this behavior; you should always call shutdown() yourself before calling closesocket(), unless you have a good reason not to.
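As an illustration, here is a minimal sketch of the second (blocking) method quoted above. sock is assumed to be a connected SOCKET with no outstanding data left to send; it is a placeholder name, not something from the question.

shutdown(sock, SD_SEND);              // we are done sending; the FIN goes out

char buffer[4096];
for (;;)
{
    int bytes = recv(sock, buffer, sizeof(buffer), 0);
    if (bytes > 0)
        continue;                     // the peer is still sending; consume or process it
    if (bytes == 0)
        break;                        // the peer completed its graceful shutdown
    printf("recv failed: %d\n", WSAGetLastError());
    break;                            // SOCKET_ERROR: graceful disconnect not possible
}

closesocket(sock);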
It is unnecessary and redundant except in the following cases:
You want to achieve a synchronized close as described in the documentation quoted by Remy Lebeau.
The socket has been duplicated somehow, e.g. it is shared with child or parent processes or via the API, and you want to ensure the FIN is sent now.
Your application protocol requires that the peer receive a shutdown but needs to continue to send. This can arise for example when writing a proxy server.
You have unread data in your socket receive buffer that you want to ignore, and you want the FIN to be sent before the connection reset that closing with unread pending data would otherwise provoke.
These are the only cases I've ever come across in about 30 years: there may be others but I'm not aware of them.
There are no specific resources associated with sending or receiving on a socket; the socket as a whole is either in use or closed. The reason for shutdown is not resource management. Shutting down the socket is an implementation of the so-called graceful shutdown protocol, which allows both sides of the communication to realize that the connection is going down and helps minimize loss of data.

boost::asio::ip::tcp::socket -> how to query the socket state?

I am using the Boost library to create a server application. Only one client is allowed at a time, so once async_accept(...) completes the acceptor is closed.
The only job of my server is to send data periodically to the client (if sending is enabled on the server; otherwise it "just sits there" until sending is enabled). For that I have a Boost message queue: when a message arrives, send() is called on the socket.
My problem is that I cannot tell whether the client is still listening. Normally you would not care; on the next transmission the send would simply yield an error.
But in my case the acceptor is not open while a socket is open. If the socket gets into the CLOSE_WAIT state, I have to close it and reopen the acceptor so that the client can connect again.
Waiting until the next send is not an option either, since sending may be disabled, in which case my server would be stuck.
Question:
How can I determine if a boost::asio::ip::tcp::socket is in a CLOSE_WAIT state?
Here is the code to do what Dmitry Poroh suggests:
typedef asio::detail::socket_option::integer<ASIO_OS_DEF(SOL_SOCKET), SO_ERROR> so_error;

so_error tmp;
your_socket.get_option(tmp);
int value = tmp.value();
// do something with value
You can try to use ip::tcp::socket::get_option to read the error state, with level SOL_SOCKET and option name SO_ERROR. I'm surprised that I have not found a ready-made Boost implementation for it. So you can try to meet the GettableSocketOption requirements and use ip::tcp::socket::get_option to fetch the socket error state.
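For completeness, here is a hedged sketch of a standalone option type that satisfies the GettableSocketOption requirements, as an alternative to relying on the detail namespace. The class name so_error_option is made up for this example; only the requirement signatures (level, name, data, size, resize) come from the Boost.Asio documentation.

#include <boost/asio.hpp>
#include <stdexcept>

// Custom gettable option that reads SO_ERROR from the socket.
class so_error_option
{
public:
    template <typename Protocol>
    int level(const Protocol&) const { return SOL_SOCKET; }

    template <typename Protocol>
    int name(const Protocol&) const { return SO_ERROR; }

    template <typename Protocol>
    int* data(const Protocol&) { return &value_; }

    template <typename Protocol>
    const int* data(const Protocol&) const { return &value_; }

    template <typename Protocol>
    std::size_t size(const Protocol&) const { return sizeof(value_); }

    template <typename Protocol>
    void resize(const Protocol&, std::size_t s)
    {
        if (s != sizeof(value_))
            throw std::length_error("so_error_option resize");
    }

    int value() const { return value_; }

private:
    int value_ = 0;
};

// usage (sock is a connected boost::asio::ip::tcp::socket):
//   so_error_option opt;
//   sock.get_option(opt);
//   if (opt.value() != 0) { /* the socket has a pending error */ }

Keep in mind that SO_ERROR only reports a pending error code; it does not by itself identify the CLOSE_WAIT state, so treating an end-of-file result from a read as the "peer has closed" signal is usually the more reliable check.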

Boost.Asio - Make sure that other party received data

I'm using boost::asio and sending a list to a client and closing the socket when finished. Somehow the client sometimes gets an End Of File error before he has received everything.
I'm guessing this has to do with the server closing the socket right after sending the last list entry. Is there an easy way to solve this, e.g. by making async_send call its handler only after the data has actually been sent?
Or is my End Of File error coming from something else?
Boost.Asio is an operating-system-independent abstraction layer over TCP and UDP sockets. These protocols provide no guarantee that the other application has received and processed the data. You will need to include this logic in your application; you may want to study the OSI model.
If you're closing the socket immediately after async_send() returns, this is incorrect. You should close it only after the completion handler is invoked.
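A minimal sketch of that pattern follows, assuming a connected boost::asio::ip::tcp::socket; the function name send_and_close and the shared_ptr-held reply string are placeholders, not names from the question.

#include <boost/asio.hpp>
#include <memory>
#include <string>

// Write the whole reply, and only shut down and close the socket in the
// completion handler, so the FIN is queued after all of the data.
void send_and_close(boost::asio::ip::tcp::socket& socket,
                    std::shared_ptr<std::string> reply)
{
    // async_write (unlike a single async_send) keeps writing until the whole
    // buffer has been handed to the kernel.
    boost::asio::async_write(socket, boost::asio::buffer(*reply),
        [&socket, reply](const boost::system::error_code& ec, std::size_t /*bytes*/)
        {
            // 'reply' is captured to keep the buffer alive until completion;
            // the caller must keep 'socket' alive as well.
            if (!ec)
            {
                boost::system::error_code ignored;
                // Graceful shutdown: our FIN follows the queued data, so the
                // client reaches end-of-file only after reading everything.
                socket.shutdown(boost::asio::ip::tcp::socket::shutdown_send, ignored);
                socket.close(ignored);
            }
        });
}

On the client side, end-of-file after the last list entry has been read is then the normal way to learn that the transfer is complete, rather than an error to report.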

Thread listening to UDP problems

My program receives some UDP messages, each of them sent on a mouse click by the client. The program has the main thread (the GUI), used only to set some parameters, and a second thread, created with
CreateThread(NULL, 0, MyFunc, &Data, 0, &ThreadTd);
that is listening to UDP packets.
This is MyFunc:
...
sd = socket(AF_INET, SOCK_DGRAM, 0);
if (bind(sd, (struct sockaddr *)&server, sizeof(struct sockaddr_in)) == -1)
    ....
while (true)
{
    bytes_received = recvfrom(sd, buffer, BUFFER_SIZE, 0,
                              (struct sockaddr *)&client, &client_length);
    // parsing of the buffer
}
To show that there is no packet loss: when I use a simple script that listens for the UDP messages sent by my client on the same port, all the packets sent are received by my computer.
When I run my application, as soon as the client does the first mouse click the UDP message is received, but if I try to send other messages (other mouse clicks) the server doesn't receive them (as if it doesn't catch them), and on the client side the user has to click at least twice before the server catches a message.
The main thread isn't busy all the time, the second thread only parses the incoming message and updates some variables, and I haven't assigned any priority to the threads.
Any suggestions?
In addition to Mark's suggestion, you could also use Wireshark/netcat to see when/where the datagrams are sent.
This may be a problem related to socket programming. I would suggest incorporating a call to select() or epoll() with the call to recvfrom(); that is a more standard approach to socket programming. This way the UDP server could receive messages from multiple clients, and it wouldn't block indefinitely.
Also, you should isolate whether the problem is that the client doesn't always send a packet for every click, or that somehow the server doesn't always receive them. Wireshark can help you see when packets are sent.
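A hedged Winsock sketch of the select()-based loop, reusing sd, buffer, BUFFER_SIZE and client from the question; the one-second timeout is an arbitrary choice.

// Wait with select() before each recvfrom() so the loop can wake up
// periodically instead of blocking forever on a single call.
fd_set readSet;
timeval timeout;

while (true)
{
    FD_ZERO(&readSet);
    FD_SET(sd, &readSet);
    timeout.tv_sec = 1;
    timeout.tv_usec = 0;

    int ready = select(0, &readSet, NULL, NULL, &timeout);
    if (ready == SOCKET_ERROR)
        break;                          // log WSAGetLastError() and bail out
    if (ready == 0)
        continue;                       // timeout: check shutdown flags, loop again

    int client_length = sizeof(client); // value-result: must be reset every call
    int bytes_received = recvfrom(sd, buffer, BUFFER_SIZE, 0,
                                  (struct sockaddr *)&client, &client_length);
    if (bytes_received == SOCKET_ERROR)
        continue;                       // log the error and keep listening
    // parse buffer[0 .. bytes_received)
}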
Not enough info to know why there's packet loss. Is it possible there's a delay in the receive thread before reaching the first recvfrom? Debug tracing might point the way. I assume also that the struct sockaddr server was filled in with something sane before calling bind()? You're not showing that part...
If I understood your question correctly, your threaded server app does not receive all the packets when they are sent in quick bursts. One thing you can try is to increase the socket receive buffer on the server side, so more data can be queued when your application is not reading it fast enough. See setsockopt and the SO_RCVBUF option.
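For example, a sketch of enlarging the receive buffer right after the socket() call; the 256 KB figure is an arbitrary illustration, not a recommendation from the answer.

// Give the kernel more room to queue datagrams during short bursts.
int rcvbuf = 256 * 1024;
if (setsockopt(sd, SOL_SOCKET, SO_RCVBUF,
               (const char *)&rcvbuf, sizeof(rcvbuf)) == SOCKET_ERROR)
{
    printf("setsockopt(SO_RCVBUF) failed: %d\n", WSAGetLastError());
}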

Working with sockets in MFC

I'm trying to make an MFC application (client) that connects to a server on "localhost", port 1234; the server replies to the client and the client reads the server's response.
The server is able to receive the data from the client and it sends the reply back to the socket from where it received it, but I am unable to read the reply from within the client.
I am creating one CAsyncSocket to connect to the server and send data, and another CAsyncSocket with overridden OnAccept and OnReceive methods to read the reply from the server.
Please tell me what I'm doing wrong.
class ServerSocket : public CAsyncSocket
{
public:
    void OnAccept(int nErrorCode)
    {
        outputBox->SetWindowTextA("RECEIVED DATA");
        CAsyncSocket::OnAccept(nErrorCode);
    }
};

// in ApplicationDlg I have:
socket.Create();
socket.Connect("127.0.0.1", 1234);
socket.Send(temp, strlen(temp));    // this should be sending the initial message

if (!serverSocket.Create())         // this should get the response, I guess...
    AfxMessageBox("Could not create server socket");
if (!serverSocket.Listen())
    AfxMessageBox("Could not listen to socket");
You should be aware that all network operations are potentially time-consuming. Since you're using MFC's CAsyncSocket class, it performs all the operations asynchronously (it doesn't block you), but a function returning does not mean the operation has already completed.
Let's look at the following lines of code:
socket.Connect("127.0.0.1",1234);
socket.Send(temp,strlen(temp)); //this should be sending the initial message
The first is the call to Connect, which most probably doesn't complete immediately.
Next, you call Send, but your socket isn't connected yet! It certainly returns an error code, but since you don't check its return value, you just happily wait to receive something.
So, the next rule for you, my friend, should be checking every return value for every function that you call, especially when it comes to networking where errors are legitimate and happen frequently.
You should only start sending after OnConnect has been called.
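To illustrate, here is a hedged sketch of a single client-side CAsyncSocket that connects, sends only after OnConnect fires, and reads the reply in OnReceive. The class name ClientSocket, the request text and the buffer handling are illustrative, and the plain string literals assume an MBCS build like the code in the question.

class ClientSocket : public CAsyncSocket
{
public:
    virtual void OnConnect(int nErrorCode)
    {
        CAsyncSocket::OnConnect(nErrorCode);
        if (nErrorCode != 0)
        {
            AfxMessageBox("Connect failed");
            return;
        }
        // Only now is it safe to send.
        const char* request = "hello";
        Send(request, (int)strlen(request));
    }

    virtual void OnReceive(int nErrorCode)
    {
        CAsyncSocket::OnReceive(nErrorCode);
        char buffer[512] = { 0 };
        int bytes = Receive(buffer, sizeof(buffer) - 1);
        if (bytes > 0)
        {
            // The server's reply arrives on this same connected socket;
            // no separate listening socket is needed on the client side.
            // e.g. outputBox->SetWindowTextA(buffer);
        }
    }
};

// In the dialog:
//   ClientSocket socket;
//   socket.Create();
//   socket.Connect("127.0.0.1", 1234);
//   ...then wait for the OnConnect/OnReceive callbacks instead of
//   calling Send() immediately after Connect().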
First, I don't see where you send the data to the client (on the server).
Second, Accept() does not mean data was received. Accept means you have a new incoming connection, for which you need to create another socket, and it is to that socket that the data will be sent.