I'm writing a C++ UDP socket class to handle basic operations such as connect, send, and receive. I'm using the network-events mechanism (WSAEventSelect) for these basic operations on the socket.
When I use WSASend to send data to a UDP destination that actually receives the data, everything works fine.
However, when I use WSASend to send data to a UDP destination that does not exist or is not reachable through the network, the FD_READ event is triggered. This of course causes serious problems, since there is no actual data to receive!
I can't explain why this is happening; any ideas? Maybe I'm doing something wrong. Here are the relevant parts of my code:
WSADATA m_wsaData ;
SOCKET m_Socket ;
WSAEVENT m_SocketEvent ;
if(WSAStartup(MAKEWORD(2,2), &m_wsaData) != 0)
{
// some error
}
// Create a new socket to receive datagrams on
struct addrinfo hints, *res = NULL ;
int rc ;
memset(&hints, 0, sizeof(hints)) ;
hints.ai_family = AF_UNSPEC ;
hints.ai_socktype = SOCK_DGRAM ;
hints.ai_protocol = IPPROTO_UDP ;
rc = getaddrinfo("SomePC", "3030", &hints, &res) ;
if(rc != 0) // any non-zero return from getaddrinfo is an error, not just WSANO_DATA
{
// some error
}
if ((m_Socket = WSASocket(res->ai_family, res->ai_socktype, res->ai_protocol, NULL, 0, 0)) == INVALID_SOCKET)
{
// some error
}
// create event and associate it with the socket
m_SocketEvent = WSACreateEvent() ;
if(m_SocketEvent == WSA_INVALID_EVENT)
{
// some error
}
// associate only the following events: close, read, write
if(SOCKET_ERROR == WSAEventSelect(m_Socket, m_SocketEvent, FD_CLOSE | FD_READ | FD_WRITE)) // combine event flags with bitwise OR
{
// some error
}
// connect to a server
int ConnectRet = WSAConnect(m_Socket, (SOCKADDR*)res->ai_addr, res->ai_addrlen, NULL, NULL, NULL, NULL) ;
if(ConnectRet == SOCKET_ERROR)
{
// some error
}
And then, whenever I try to send some data over the socket to a UDP destination that is not listening or not reachable, FD_READ is always triggered:
char buf[32] ; // some data to send...
WSABUF DataBuf;
DataBuf.len = 32;
DataBuf.buf = buf; // buf already decays to char*
DWORD NumBytesActualSent ;
if( SOCKET_ERROR == WSASend(m_Socket, &DataBuf, 1, &NumBytesActualSent,0,0,0))
{
if(WSAGetLastError() == WSAEWOULDBLOCK) // non-blocking socket - wait for send ok ?
{
// handle WSAEWOULDBLOCK...
}
else
{
// some error
return ;
}
}
int ret = WSAWaitForMultipleEvents(1, &m_SocketEvent, FALSE, INFINITE, FALSE) ;
if(ret == WAIT_OBJECT_0)
{
WSANETWORKEVENTS NetworkEvents ;
ZeroMemory(&NetworkEvents, sizeof(NetworkEvents)) ;
if(SOCKET_ERROR == WSAEnumNetworkEvents(m_Socket, m_SocketEvent, &NetworkEvents))
{
return ; // some error
}
if(NetworkEvents.lNetworkEvents & FD_READ) // Read ?
{
if(NetworkEvents.iErrorCode[FD_READ_BIT] != 0) // read not ok ?
{
// some error
}
else
{
TRACE("Read Event Triggered ! - Why ? ? ? ? ? ? ?\n") ;
}
}
}
Any help or insights would be most appreciated!
Thanks,
Amit C.
The easiest way to see what really happens is to capture the packets with Wireshark. I don't have a Windows PC nearby to provide a complete example, but my guess is that this is normal behaviour: you send a UDP datagram, it gets dropped by a router (host not found) or rejected by the destination host (socket closed), and an ICMP message is sent back to inform you of the failure. That ICMP message is what you "receive" and get the event for. Since there is no actual user data, the underlying stack translates the ICMP message and gives you an appropriate error via the WSARecv() return code (typically WSAECONNRESET for an ICMP port-unreachable). The "false" FD_READ event is necessary because UDP is a connectionless protocol and there is no other way to inform you about the network state. Simply add error handling for WSARecv() and your code should work fine.
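For what it's worth, the same mechanism is easy to reproduce with plain BSD sockets on Linux, where the queued ICMP error surfaces as ECONNREFUSED on a connected UDP socket. A minimal sketch (not Winsock; the probe socket is just a trick to find a local port with no listener):

```c
#include <stdio.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void) {
    /* Find a UDP port that nothing is listening on: bind, read the port back, close. */
    int probe = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    bind(probe, (struct sockaddr *)&addr, sizeof addr);
    socklen_t len = sizeof addr;
    getsockname(probe, (struct sockaddr *)&addr, &len);
    close(probe);

    /* On a connected UDP socket, the ICMP port-unreachable for a failed send
       is reported as an error on a later socket call. */
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    connect(s, (struct sockaddr *)&addr, sizeof addr);
    send(s, "ping", 4, 0);
    usleep(100 * 1000); /* give the kernel time to queue the ICMP error */

    char buf[16];
    ssize_t n = recv(s, buf, sizeof buf, MSG_DONTWAIT);
    if (n < 0 && errno == ECONNREFUSED)
        printf("recv reported ECONNREFUSED\n");
    else
        printf("recv returned %zd (errno %d)\n", n, errno);
    close(s);
    return 0;
}
```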
I'm working on a server implementation on a Chromebook, using a TCP connection between a Windows client and the ChromeOS server. When a connection is made, the server (Chromebook) side sends out 5 packets: first the header, then 3 packets of payload, and finally the footer of the message.
We're using send and recv to exchange the data. After the header is sent, the remaining packets are never received: the client gets error code 10054, "connection reset by peer", before the rest arrive, even though they are sent.
The packet sizes are as follows: the header is 4 bytes, the second packet is 2 bytes, the next one is 1 byte, the next one is 8 bytes, and the footer is 4 bytes. Our suspicion was that perhaps 2 bytes is too small for the OS to send immediately, and that it waits for more data before sending, unlike Windows, where those packets currently go out right away. So we tried SO_LINGER on the socket, but it didn't help. We also tried TCP_NODELAY, but that didn't help either. When we skip writing to the socket's fd with the timeout (using select), the connection is broken after the first header is sent.
We know all the packets are sent, because logging on the sending machine shows all packets as sent, yet only the first one arrives.
The only socket option set is this:
setsockopt(s, SOL_SOCKET, SO_REUSEADDR, (const char *) &n, sizeof(n));
Sending a message:
ret = write_timeout(fd, timeout);
if (ret != OK) {
Logger::LogError(PROTOCOL_ERROR, "Write data to socket failed with error %d, while waiting timeout of %u\n", get_last_comm_error(), timeout);
return PROTOCOL_ERROR;
}
while (size) {
    ret = send(fd, ptr, size, 0);
    if (ret < 0) { // check for errors before advancing the pointer
        Logger::LogError(PROTOCOL_ERROR, "Transport write failed: %d\n", get_last_comm_error());
        return PROTOCOL_ERROR;
    }
    ptr += ret;
    size -= ret;
}
Write_timeout:
int write_timeout(int fd, unsigned int wait_useconds)
{
Logger::LogInfo(__FUNCTION__);
int ret = OK;
if (wait_useconds > 0) {
fd_set write_fdset;
struct timeval timeout;
FD_ZERO(&write_fdset);
FD_SET(fd, &write_fdset);
timeout.tv_sec = 0;
timeout.tv_usec = wait_useconds;
do {
ret = select(fd + 1, NULL, &write_fdset, NULL, &timeout);
} while (ret < 0 && errno == EINTR);
if (ret == OK) {
ret = -1;
errno = ETIMEDOUT;
        } else if (ret == 1)
            return OK;
    }
    return ret; // fall-through: timeout (-1/ETIMEDOUT) or wait_useconds == 0 (OK)
}
The receiving end is similar:
ret = read_timeout(fd, timeout);
if (ret != OK) {
Logger::LogError(PROTOCOL_ERROR, "Error while trying to receive data from the host - timeout\n");
return TIMED_OUT;
}
while (size) {
    ret = recv(fd, ptr, size, 0);
    if (ret == 0) {
        return FAILED_TRANSACTION;
    }
    if (ret < 0) { // check for errors before advancing the pointer
        Logger::LogError(PROTOCOL_ERROR, "Transport read failed: %d\n", get_last_comm_error());
        return UNKNOWN_ERROR;
    }
    ptr += ret;
    size -= ret;
}
return OK;
And timeout:
int read_timeout(int fd, unsigned int wait_useconds)
{
Logger::LogInfo(__FUNCTION__);
int ret = OK;
if (wait_useconds > 0) {
fd_set read_fdset;
struct timeval timeout;
FD_ZERO(&read_fdset);
FD_SET(fd, &read_fdset);
timeout.tv_sec = 0;
timeout.tv_usec = wait_useconds;
do {
ret = select(fd + 1, &read_fdset, NULL, NULL, &timeout);
} while (ret < 0 && errno == EINTR);
if (ret == OK) {
ret = -1;
errno = ETIMEDOUT;
        } else if (ret == 1)
            return OK;
    }
    return ret; // fall-through: timeout (-1/ETIMEDOUT) or wait_useconds == 0 (OK)
}
Our code works on Windows, but (after modifying it accordingly) it unfortunately does not work on ChromeOS.
We're running the server on a Chromebook with version 93 and building the code against that code base as well.
I did try making the second packet 4 bytes as well, but the connection is still reset by peer after the first packet is received correctly.
Does anyone know whether ChromeOS waits for bigger packets before sending? Or whether something else works differently with TCP on that OS and needs to be handled differently than on Windows?
I'm trying to write a port-open test using sockets, and for some reason it reports "port open" for invalid IP addresses. I'm currently plugged into an access point that is not connected to the internet, so it incorrectly reports open ports on external IP addresses.
First I set up the socket; since it is in non-blocking mode, the connect is usually still in progress by the first if statement. Then I poll the socket. However, for a socket on an external IP, I'm getting the POLLOUT event even though that shouldn't be possible...
What am I missing? Why do the poll revents contain POLLOUT? I've tried resetting the pollfd struct before calling poll again, but that didn't change the result.
result = connect(sd, (struct sockaddr*)&serverAddr, sizeof(serverAddr));
if(result == 0) //woohoo! success
{
//SUCCESS!
return true;
}
else if (result < 0 && errno != EINPROGRESS) //real error
{
//FAIL
return false;
}
// poll the socket until it connects
struct pollfd fds[1];
fds[0].fd = sd;
fds[0].events = POLLOUT | POLLRDHUP | POLLERR | POLLNVAL;
fds[0].revents = 0;
while (1)
{
result = poll(fds, 1, 1);
if (result < 1)
{
    // 0 = poll timed out, <0 = poll failed; mark as FAIL
}
else
{
// see which event occurred
if (fds[0].revents & POLLOUT || fds[0].revents & POLLRDHUP)
{
//SUCCESS
}
else if (fds[0].revents & POLLERR || fds[0].revents & POLLNVAL)
{
//FAIL
}
}
}
I needed to check SO_ERROR after receiving the POLLOUT event; POLLOUT by itself does not indicate that the connection succeeded.
//now read the error code of the socket
int errorCode;
socklen_t len = sizeof(errorCode);
result = getsockopt(fds[0].fd, SOL_SOCKET, SO_ERROR, &errorCode, &len);
if (result == 0 && errorCode == 0)
{
    //the connection really succeeded
}
I'm attempting to create a UDP client/server class that relies on IO completion ports using Winsock, but I haven't been able to get the GetQueuedCompletionStatus() function to return when new data is available. This is likely due to some misunderstanding on my part, but examples/documentation on IOCP with UDP instead of TCP are few and far between.
Here are the relevant bits of my server class, with error checking removed for brevity:
Server::Server()
{
m_iocp = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, 0);
}
void Server::StartReceiving()
{
StopReceiving();
m_iocp = CreateIoCompletionPort((HANDLE)m_receiveSocket, m_iocp, (ULONG_PTR)this, 0); // completion key is ULONG_PTR-sized
//WSAEVENT event = WSACreateEvent();
//WSAEventSelect(m_receiveSocket, event, FD_ACCEPT | FD_CLOSE);
// Start up worker thread for receiving data
m_receiveThread.reset(new std::thread(&Server::ReceiveWorkerThread, this));
}
void Server::Host(const std::string& port)
{
if (!m_initialized)
Initialize();
addrinfo hints = {};
hints.ai_family = AF_INET;
hints.ai_socktype = SOCK_DGRAM;
hints.ai_protocol = IPPROTO_UDP;
hints.ai_flags = AI_PASSIVE;
// Resolve the server address and port
const char* portStr = port.empty() ? kDefaultPort.c_str() : port.c_str();
int result;
AddressInfo addressInfo = AddressInfo::Create(nullptr, portStr, &hints, result); // Calls getaddrinfo()
m_receiveSocket = WSASocket(addressInfo.GetFamily(), addressInfo.GetSocketType(), addressInfo.GetProtocol(), nullptr, 0, WSA_FLAG_OVERLAPPED);
// Bind receiving socket
result = bind(m_receiveSocket, addressInfo.GetSocketAddress(), addressInfo.GetAddressLength());
StartReceiving();
}
void Server::ReceiveWorkerThread()
{
SOCKADDR_IN senderAddr;
int senderAddrSize = sizeof(senderAddr);
DWORD bytesTransferred = 0;
OVERLAPPED* pOverlapped = nullptr;
WSABUF wsaBuf = { (ULONG)m_buffer.GetWriteBufferSize(), m_buffer.GetWriteBufferPointer() };
DWORD flags = 0;
DWORD bytesReceived;
int result = WSARecvFrom(m_receiveSocket, &wsaBuf, 1, &bytesReceived, &flags, (sockaddr*)&senderAddr, &senderAddrSize, pOverlapped, nullptr);
// Process packets until signaled to exit
while (true)
{
ULONG_PTR context = 0; // GetQueuedCompletionStatus takes a PULONG_PTR completion key
BOOL success = GetQueuedCompletionStatus(
m_iocp,
&bytesTransferred,
&context,
&pOverlapped,
INFINITE);
wsaBuf.len = (ULONG)m_buffer.GetWriteBufferSize();
wsaBuf.buf = m_buffer.GetWriteBufferPointer();
flags = 0;
result = WSARecvFrom(m_receiveSocket, &wsaBuf, 1, &bytesReceived, &flags, (sockaddr*)&senderAddr, &senderAddrSize, pOverlapped, nullptr);
// Code to process packet would go here
if (m_exiting.load() == true)
break; // Kill worker thread
}
}
When my client sends data to the server, the first WSARecvFrom picks up the data correctly but the server blocks on the call to GetQueuedCompletionStatus and never returns, even if more datagrams are sent. I've also tried putting the socket into non-blocking mode with WSAEventSelect (code for that is commented above), but it made no difference.
From reading this similar post it sounds like there needs to be at least one read on the socket to trigger IOCP, which is why I added the first call to WSARecvFrom outside the main loop. Hopefully I'm correct in assuming that the client code is irrelevant if the server receives the data without IOCP, so I haven't posted it.
I'm sure I'm doing something wrong, but I'm just not seeing it.
You need to check the result code from WSARecvFrom and call GetQueuedCompletionStatus only if WSARecvFrom returned SOCKET_ERROR and WSAGetLastError() is ERROR_IO_PENDING. Otherwise either the operation completed without blocking and you already have the data, or there was an error; in neither case was anything posted to the I/O completion port, so GetQueuedCompletionStatus will never pick it up and the call will block. Note also that you pass pOverlapped as nullptr, which makes WSARecvFrom a plain synchronous call; an overlapped receive needs a valid OVERLAPPED structure that stays alive until the completion is dequeued.
And you should not do all of this in one thread. The common approach is to have a thread that only polls the I/O completion port and calls callbacks on context objects to notify about incoming/outgoing data, while the send/receive calls are issued wherever they are needed.
I'm developing a server-client project based on Winsock in C++. I have designed the server and client sides so that they can send and receive text messages as well as files.
Then I decided to add audio communication between server and client. I actually implemented it, but then realized I had done everything over TCP, and that audio communication is better done over UDP.
I searched the internet and found that it is possible to use TCP and UDP alongside each other, so I tried adding UDP, but didn't make any real progress.
My problem is I use both recv() and recvFrom() in a while loop like this:
while (true)
{
buflen = recv(clientS, buffer, 1024, NULL);
if (buflen > 0)
{
// Send the received buffer
}
else if (buflen == 0)
{
printf("closed\n");
break;
}
buflen = recvfrom(udpS, buffer, 1024, NULL, (struct sockaddr*)&_s, &_size);
But recvfrom() blocks. I think I haven't done the job properly, but I couldn't find out how to do it.
In Server in C accepting UDP and TCP connections I found a similar question, but the answers were only explanations; there was no sample code to demonstrate the point clearly.
I need help understanding clearly how to receive data from both TCP and UDP connections.
Any help is appreciated.
When dealing with multiple sockets at a time, use select() to know which socket has data pending before you read from it, e.g.:
while (true)
{
fd_set rfd;
FD_ZERO(&rfd);
FD_SET(clientS, &rfd);
FD_SET(udpS, &rfd);
struct timeval timeout;
timeout.tv_sec = ...;
timeout.tv_usec = ...;
int ret = select(0, &rfd, NULL, NULL, &timeout);
if (ret == SOCKET_ERROR)
{
// handle error
break;
}
if (ret == 0)
{
// handle timeout
continue;
}
// at least one socket is readable, figure out which one(s)...
if (FD_ISSET(clientS, &rfd))
{
buflen = recv(clientS, buffer, 1024, NULL);
if (buflen == SOCKET_ERROR)
{
// handle error...
printf("error\n");
}
else if (buflen == 0)
{
// handle disconnect...
printf("closed\n");
}
else
{
// handle received data...
}
}
if (FD_ISSET(udpS, &rfd))
{
buflen = recvfrom(udpS, buffer, 1024, NULL, (struct sockaddr*)&_s, &_size);
//...
}
}
I'm writing a TCP server (blocking socket model).
I'm having trouble implementing a clean program exit while the server is blocked waiting for new connection attempts in accept (I use WSAAccept).
The code for the server's listening socket is something like this (I omitted error handling and other irrelevant code):
int ErrCode = WSAStartup(MAKEWORD(2,2), &m_wsaData) ;
// Create a new socket to listen and accept new connection attempts
struct addrinfo hints, *res = NULL, *ptr = NULL ;
int rc, count = 0 ;
memset(&hints, 0, sizeof(hints)) ;
hints.ai_family = AF_UNSPEC ;
hints.ai_socktype = SOCK_STREAM ;
hints.ai_protocol = IPPROTO_TCP ;
hints.ai_flags = AI_PASSIVE ;
CString strPort ;
strPort.Format("%d", Port) ;
getaddrinfo(pLocalIp, strPort.GetBuffer(), &hints, &res) ;
strPort.ReleaseBuffer() ;
ptr = res ;
if ((m_Socket = WSASocket(res->ai_family, res->ai_socktype, res->ai_protocol, NULL, 0, 0)) == INVALID_SOCKET)
{
// some error
}
if(bind(m_Socket, (SOCKADDR *)res->ai_addr, res->ai_addrlen) == SOCKET_ERROR)
{
// some error
}
if (listen(m_Socket, SOMAXCONN) == SOCKET_ERROR)
{
// some error
}
So far so good... Then I implemented the WSAccept call inside a thread like this:
SOCKADDR_IN ClientAddr ;
int ClientAddrLen = sizeof(ClientAddr) ;
SOCKET TempS = WSAAccept(m_Socket, (SOCKADDR*) &ClientAddr, &ClientAddrLen, NULL, NULL);
Of course WSAAccept blocks until a new connection attempt is made, but if I wish to exit the program I need some way to make WSAAccept return. I have tried several different approaches:
Calling shutdown and/or closesocket on m_Socket from another thread failed (the program just hangs).
Using WSAEventSelect does solve this issue, but then WSAAccept delivers only non-blocking sockets, which is not my intention. (Is there a way to make those sockets blocking?)
I read about APCs and tried something like QueueUserAPC(MyAPCProc, m_hThread, 1), but it didn't work either.
What am I doing wrong?
Is there a better way to make this blocking WSAAccept exit?
Use select() with a timeout to detect when a client connection is actually pending before calling WSAAccept() to accept it. This works with blocking sockets without putting them into non-blocking mode, and gives your code regular opportunities to check whether the app is shutting down.
Go with a non-blocking accepting socket (WSAEventSelect, as you mentioned) and use non-blocking WSAAccept. You can switch the socket WSAAccept returns back to blocking mode with ioctlsocket and FIONBIO (see MSDN).
Do all the other shutdown work you absolutely have to (maybe you have DB connections to close, or files to flush?), and then call ExitProcess(0). That will stop your listening thread, no problem.
See the log4cplus source for my take on this issue. I basically wait on two event objects: one is signaled when a connection is being accepted (using WSAEventSelect()), and the other is there to interrupt the waiting. The most relevant parts of the source are below; see ServerSocket::accept().
namespace {
static
bool
setSocketBlocking (SOCKET_TYPE s)
{
u_long val = 0;
int ret = ioctlsocket (to_os_socket (s), FIONBIO, &val);
if (ret == SOCKET_ERROR)
{
set_last_socket_error (WSAGetLastError ());
return false;
}
else
return true;
}
static
bool
removeSocketEvents (SOCKET_TYPE s, HANDLE ev)
{
// Clean up socket events handling.
int ret = WSAEventSelect (to_os_socket (s), ev, 0);
if (ret == SOCKET_ERROR)
{
set_last_socket_error (WSAGetLastError ());
return false;
}
else
return true;
}
static
bool
socketEventHandlingCleanup (SOCKET_TYPE s, HANDLE ev)
{
bool ret = removeSocketEvents (s, ev);
ret = setSocketBlocking (s) && ret;
ret = WSACloseEvent (ev) && ret;
return ret;
}
} // namespace
ServerSocket::ServerSocket(unsigned short port)
{
sock = openSocket (port, state);
if (sock == INVALID_SOCKET_VALUE)
{
err = get_last_socket_error ();
return;
}
HANDLE ev = WSACreateEvent ();
if (ev == WSA_INVALID_EVENT)
{
err = WSAGetLastError ();
closeSocket (sock);
sock = INVALID_SOCKET_VALUE;
}
else
{
assert (sizeof (std::ptrdiff_t) >= sizeof (HANDLE));
interruptHandles[0] = reinterpret_cast<std::ptrdiff_t>(ev);
}
}
Socket
ServerSocket::accept ()
{
int const N_EVENTS = 2;
HANDLE events[N_EVENTS] = {
reinterpret_cast<HANDLE>(interruptHandles[0]) };
HANDLE & accept_ev = events[1];
int ret;
// Create event and prime socket to set the event on FD_ACCEPT.
accept_ev = WSACreateEvent ();
if (accept_ev == WSA_INVALID_EVENT)
{
set_last_socket_error (WSAGetLastError ());
goto error;
}
ret = WSAEventSelect (to_os_socket (sock), accept_ev, FD_ACCEPT);
if (ret == SOCKET_ERROR)
{
set_last_socket_error (WSAGetLastError ());
goto error;
}
do
{
// Wait either for interrupt event or actual connection coming in.
DWORD wsawfme = WSAWaitForMultipleEvents (N_EVENTS, events, FALSE,
WSA_INFINITE, TRUE);
switch (wsawfme)
{
case WSA_WAIT_TIMEOUT:
case WSA_WAIT_IO_COMPLETION:
// Retry after timeout or APC.
continue;
// This is interrupt signal/event.
case WSA_WAIT_EVENT_0:
{
// Reset the interrupt event back to non-signalled state.
ret = WSAResetEvent (reinterpret_cast<HANDLE>(interruptHandles[0]));
// Clean up socket events handling.
ret = socketEventHandlingCleanup (sock, accept_ev);
// Return Socket with state set to accept_interrupted.
return Socket (INVALID_SOCKET_VALUE, accept_interrupted, 0);
}
// This is accept_ev.
case WSA_WAIT_EVENT_0 + 1:
{
// Clean up socket events handling.
ret = socketEventHandlingCleanup (sock, accept_ev);
// Finally, call accept().
SocketState st = not_opened;
SOCKET_TYPE clientSock = acceptSocket (sock, st);
int eno = 0;
if (clientSock == INVALID_SOCKET_VALUE)
eno = get_last_socket_error ();
return Socket (clientSock, st, eno);
}
case WSA_WAIT_FAILED:
default:
set_last_socket_error (WSAGetLastError ());
goto error;
}
}
while (true);
error:;
DWORD eno = get_last_socket_error ();
// Clean up socket events handling.
if (sock != INVALID_SOCKET_VALUE)
{
(void) removeSocketEvents (sock, accept_ev);
(void) setSocketBlocking (sock);
}
if (accept_ev != WSA_INVALID_EVENT)
WSACloseEvent (accept_ev);
set_last_socket_error (eno);
return Socket (INVALID_SOCKET_VALUE, not_opened, eno);
}
void
ServerSocket::interruptAccept ()
{
(void) WSASetEvent (reinterpret_cast<HANDLE>(interruptHandles[0]));
}
A not-so-neat way of solving this problem is to issue a dummy WSAConnect request from the thread that needs to do the shutdown. If the dummy connect fails, you might resort to ExitProcess as suggested by Martin.
void Drain()
{
if (InterlockedIncrement(&drain) == 1)
{
// Make a dummy connection to unblock wsaaccept
SOCKET ConnectSocket = WSASocket(AF_INET, SOCK_STREAM, IPPROTO_TCP, NULL, 0, 0);
if (ConnectSocket != INVALID_SOCKET) {
int iResult = WSAConnect(ConnectSocket, result->ai_addr, result->ai_addrlen, 0, 0, 0, 0);
if (iResult != 0) {
printf("Unable to connect to server! %d\n", WSAGetLastError());
}
else
{
closesocket(ConnectSocket);
}
}
}
}