How to use both TCP and UDP in one application in C++

I'm developing a server-client project based on Winsock in C++. I have designed the server and client sides so that they can send and receive both text messages and files.
I then decided to add audio communication between the server and the client. I actually implemented it, but then realized I had done everything over TCP, and that audio communication is better served by UDP.
Searching the internet, I found that it is possible to use TCP and UDP alongside each other.
I tried to use UDP, but didn't make any major progress.
My problem is that I use both recv() and recvfrom() in a while loop, like this:
while (true)
{
    buflen = recv(clientS, buffer, 1024, 0);
    if (buflen > 0)
    {
        // Send the received buffer
    }
    else if (buflen == 0)
    {
        printf("closed\n");
        break;
    }
    buflen = recvfrom(udpS, buffer, 1024, 0, (struct sockaddr*)&_s, &_size);
}
But the recvfrom() call blocks. I think I haven't set this up properly, but I couldn't figure out how to do it.
I found a similar question, Server in C accepting UDP and TCP connections, but the answers were only explanations and there was no sample code to demonstrate the point clearly.
Now I need you to help me understand clearly how to receive data from both TCP and UDP connections.
Any help is appreciated.

When dealing with multiple sockets at a time, use select() to know which socket has data pending before you read it, e.g.:
while (true)
{
    fd_set rfd;
    FD_ZERO(&rfd);
    FD_SET(clientS, &rfd);
    FD_SET(udpS, &rfd);

    struct timeval timeout;
    timeout.tv_sec = ...;
    timeout.tv_usec = ...;

    int ret = select(0, &rfd, NULL, NULL, &timeout);
    if (ret == SOCKET_ERROR)
    {
        // handle error
        break;
    }
    if (ret == 0)
    {
        // handle timeout
        continue;
    }

    // at least one socket is readable, figure out which one(s)...

    if (FD_ISSET(clientS, &rfd))
    {
        buflen = recv(clientS, buffer, 1024, 0);
        if (buflen == SOCKET_ERROR)
        {
            // handle error...
            printf("error\n");
        }
        else if (buflen == 0)
        {
            // handle disconnect...
            printf("closed\n");
        }
        else
        {
            // handle received data...
        }
    }

    if (FD_ISSET(udpS, &rfd))
    {
        buflen = recvfrom(udpS, buffer, 1024, 0, (struct sockaddr*)&_s, &_size);
        //...
    }
}
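For completeness, here is a minimal sketch of how the two sockets used in that loop might be created. This is an assumption about the surrounding setup, not code from the question: WSAStartup is taken as already called, the name listenS and both port numbers are illustrative, and clientS comes from accept() on the TCP listener:

// Minimal setup sketch (error handling omitted for brevity).
#include <winsock2.h>
#include <ws2tcpip.h>

// TCP listener: accept() yields the clientS used in the select() loop.
SOCKET listenS = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
sockaddr_in tcpAddr = {};
tcpAddr.sin_family = AF_INET;
tcpAddr.sin_port = htons(12345);               // illustrative port
tcpAddr.sin_addr.s_addr = htonl(INADDR_ANY);
bind(listenS, (sockaddr*)&tcpAddr, sizeof(tcpAddr));
listen(listenS, SOMAXCONN);
SOCKET clientS = accept(listenS, NULL, NULL);

// UDP socket: bound to its own port, read with recvfrom() in the loop.
SOCKET udpS = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
sockaddr_in udpAddr = {};
udpAddr.sin_family = AF_INET;
udpAddr.sin_port = htons(12346);               // illustrative port
udpAddr.sin_addr.s_addr = htonl(INADDR_ANY);
bind(udpS, (sockaddr*)&udpAddr, sizeof(udpAddr));

Both sockets can then be handed to the select() loop; select() itself does not care that one is TCP and the other UDP.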

Related

ChromeOS TCP Connectivity with Windows - peer resets

I'm working on a server implementation on a Chromebook, using TCP connectivity between the Windows client and the ChromeOS server. When a connection is made, the server (Chromebook) side sends out 5 packets; the first one is the header, the next 3 are the information being sent, and the last one is the footer of the message.
We're using send and recv for sending and receiving the information. After the header is sent, the rest of the packets are never received: the client gets error code 10054, "connection reset by peer", before the rest arrive, even though they are sent.
The sizes of the packets are as follows: the header is 4 bytes, the second packet is 2 bytes, the next one is 1 byte, the next one is 8 bytes, and the footer is 4 bytes. Our suspicion was that perhaps 2 bytes is too small for the OS to send, and it waits for more data before sending, unlike on Windows where it currently sends them immediately. So we tried using SO_LINGER on the socket, but it didn't help. We also tried TCP_NODELAY, but that didn't help either. When attempting not to write to the socket's fd with the timeout (using select), the connection is broken after the first header is sent.
We know all the packets are sent, because logging the sent packets on the machine shows all of them as sent, yet only the first one arrives.
The only socket flag used is this:
setsockopt(s, SOL_SOCKET, SO_REUSEADDR, (const char *) &n, sizeof(n));
Sending a message:
ret = write_timeout(fd, timeout);
if (ret != OK) {
    Logger::LogError(PROTOCOL_ERROR, "Write data to socket failed with error %d, while waiting timeout of %u\n", get_last_comm_error(), timeout);
    return PROTOCOL_ERROR;
}
while (size) {
    ret = send(fd, ptr, size, 0);
    ptr += ret;
    size -= ret;
    if (ret < 0) {
        Logger::LogError(PROTOCOL_ERROR, "Transport write failed: %d\n", get_last_comm_error());
        return PROTOCOL_ERROR;
    }
}
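As an aside, the loop above advances ptr and decrements size before checking the return value, so a failed send() moves the cursor backwards before the error check fires (the receive loop further down has the same ordering). A more defensive version of the same loop, as a sketch, checks first:

while (size) {
    ret = send(fd, ptr, size, 0);
    if (ret < 0) {
        // check the result before touching the cursor; a negative ret
        // would otherwise move ptr backwards and grow size
        Logger::LogError(PROTOCOL_ERROR, "Transport write failed: %d\n", get_last_comm_error());
        return PROTOCOL_ERROR;
    }
    ptr += ret;
    size -= ret;
}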
Write_timeout:
int write_timeout(int fd, unsigned int wait_useconds)
{
    Logger::LogInfo(__FUNCTION__);
    int ret = OK;
    if (wait_useconds > 0) {
        fd_set write_fdset;
        struct timeval timeout;
        FD_ZERO(&write_fdset);
        FD_SET(fd, &write_fdset);
        timeout.tv_sec = 0;
        timeout.tv_usec = wait_useconds;
        do {
            ret = select(fd + 1, NULL, &write_fdset, NULL, &timeout);
        } while (ret < 0 && errno == EINTR);
        if (ret == OK) {
            ret = -1;
            errno = ETIMEDOUT;
        } else if (ret == 1)
            return OK;
    }
    return ret;
}
The receiving end is similar:
ret = read_timeout(fd, timeout);
if (ret != OK) {
    Logger::LogError(PROTOCOL_ERROR, "Error while trying to receive data from the host - timeout\n");
    return TIMED_OUT;
}
while (size) {
    ret = recv(fd, ptr, size, 0);
    ptr += ret;
    size -= ret;
    if (ret == 0) {
        return FAILED_TRANSACTION;
    }
    if (ret < 0) {
        Logger::LogError(PROTOCOL_ERROR, "Transport read failed: %d\n", get_last_comm_error());
        return UNKNOWN_ERROR;
    }
}
return OK;
And timeout:
int read_timeout(int fd, unsigned int wait_useconds)
{
    Logger::LogInfo(__FUNCTION__);
    int ret = OK;
    if (wait_useconds > 0) {
        fd_set read_fdset;
        struct timeval timeout;
        FD_ZERO(&read_fdset);
        FD_SET(fd, &read_fdset);
        timeout.tv_sec = 0;
        timeout.tv_usec = wait_useconds;
        do {
            ret = select(fd + 1, &read_fdset, NULL, NULL, &timeout);
        } while (ret < 0 && errno == EINTR);
        if (ret == OK) {
            ret = -1;
            errno = ETIMEDOUT;
        } else if (ret == 1)
            return OK;
    }
    return ret;
}
Our code works on Windows, but (after modifying it accordingly) using it on ChromeOS unfortunately does not seem to work.
We're running the server on a Chromebook with version 93 and building the code with that code base as well.
I did try making the second packet 4 bytes as well, but it still does not work, and the connection is reset by the peer after the first packet is received correctly.
Does anyone know if maybe the Chrome OS system waits for bigger packets before sending? Or if something else works a little differently with TCP on that OS, and needs to be done differently than on Windows?

UDP server consumes all processor time in multithreaded program

I'm developing a client/server application. The client and server run on two different machines running Ubuntu 16.04. The client sends two variables that influence the flow of the server part; therefore, I want to decrease the packet loss rate as much as possible. My application is thread-based.
A UDP server runs in one thread. My project has a GUI implemented using Qt. When I tried to implement the UDP server as blocking, the whole program and the GUI froze until a packet was received, and sometimes the program didn't respond even when a packet was received.
Therefore, I thought non-blocking UDP was the best way to do it. I managed to make UDP non-blocking using select(). Now comes the problem. If I set the timeout of recvfrom to 10 ms and the thread is allowed to run every 10 ms, almost no packets are lost, but the UDP thread apparently consumes so much processor time that the program freezes. If I increase the interval at which the thread runs, or reduce the timeout interval, around 80% of the packets are lost. I know that UDP is a connectionless protocol and TCP may be a better option, but I have to use UDP because the client side sends the packets over UDP.
The question is: how can I reduce the packet loss rate without blocking the other threads from executing efficiently?
Following is my code (based on a Stack Overflow answer which, at the moment, I can't find to link here).
void receiveUDP(void)
{
    fd_set readfds;
    static int fd;
    static struct timeval tv;
    tv.tv_sec = 0;
    tv.tv_usec = 10000;
    static char buffer[10];
    static int length;

    if ((fd = socket(AF_INET, SOCK_DGRAM, 0)) < 0) {
        perror("socket failed");
        return;
    }

    struct sockaddr_in serveraddr;
    memset(&serveraddr, 0, sizeof(serveraddr));
    serveraddr.sin_family = AF_INET;
    serveraddr.sin_port = htons(50037);
    serveraddr.sin_addr.s_addr = htonl(INADDR_ANY);

    if (bind(fd, (struct sockaddr *)&serveraddr, sizeof(serveraddr)) < 0)
    {
        perror("bind failed");
        return;
    }

    fcntl(fd, F_SETFL, O_NONBLOCK);

    FD_ZERO(&readfds);
    FD_SET(fd, &readfds);
    int rv = select(fd + 1, &readfds, NULL, NULL, &tv);
    if (rv == -1)
    {
        printf("Error in Select\n");
        _exit(0);
    }
    else if (rv == 0)
    {
        printf("Timeout\n");
    }
    else
    {
        if (FD_ISSET(fd, &readfds))
        {
            length = recvfrom(fd, buffer, sizeof(buffer) - 1, 0, NULL, 0);
            if (length < 0)
            {
                perror("recvfrom failed");
            }
            else
            {
                printf("%d bytes received: \n", length);
            }
        }
    }
    close(fd);
}
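Note that the function above creates, binds, and closes the socket on every call, which is expensive and drops any datagram that arrives between calls. A common alternative, shown here as a sketch rather than a drop-in fix (the name receiveUDPLoop is illustrative, the headers are the same as the original function's, and shutdown handling is elided), is to create the socket once and let a dedicated thread sit in select() with a timeout, so it uses no CPU while idle:

void receiveUDPLoop(void)
{
    // create and bind once, outside the receive loop
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in serveraddr;
    memset(&serveraddr, 0, sizeof(serveraddr));
    serveraddr.sin_family = AF_INET;
    serveraddr.sin_port = htons(50037);
    serveraddr.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(fd, (struct sockaddr *)&serveraddr, sizeof(serveraddr));

    char buffer[10];
    for (;;) {
        fd_set readfds;
        FD_ZERO(&readfds);
        FD_SET(fd, &readfds);
        struct timeval tv = { 0, 10000 };  // re-initialize each pass: Linux select() modifies it

        int rv = select(fd + 1, &readfds, NULL, NULL, &tv);
        if (rv > 0 && FD_ISSET(fd, &readfds)) {
            int length = recvfrom(fd, buffer, sizeof(buffer) - 1, 0, NULL, 0);
            if (length > 0) {
                // hand the data to the rest of the program,
                // e.g. via a Qt queued signal to the GUI thread
            }
        }
        // rv == 0 is just a timeout; the thread sleeps inside select(),
        // not in a busy loop, so it does not starve the other threads
    }
    // close(fd) on shutdown
}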

WSASend for UDP socket triggers FD_READ when no destination available

I'm writing C++ code for UDP socket class to handle basic operations (such as connect, send and receive data). I try using network events mechanism with WSAEventSelect for these basic operations associated with the socket.
When I use WSASend to send data to a (UDP) destination that receives the data, everything goes well.
However, when I use WSASend to send data to a destination that does not exist (UDP) or is not reachable through the network, I get the FD_READ event triggered. This of course causes serious problems, since there is no actual data to receive!
I can't explain why this is happening - any ideas?
Maybe I'm doing something wrong. Here are the relevant parts of my code:
WSADATA m_wsaData;
SOCKET m_Socket;
WSAEVENT m_SocketEvent;

if (WSAStartup(MAKEWORD(2,2), &m_wsaData) != 0)
{
    // some error
}

// Create a new socket to receive datagrams on
struct addrinfo hints, *res = NULL;
int rc;
memset(&hints, 0, sizeof(hints));
hints.ai_family = AF_UNSPEC;
hints.ai_socktype = SOCK_DGRAM;
hints.ai_protocol = IPPROTO_UDP;
rc = getaddrinfo("SomePC", "3030", &hints, &res);
if (rc == WSANO_DATA)
{
    // some error
}
if ((m_Socket = WSASocket(res->ai_family, res->ai_socktype, res->ai_protocol, NULL, 0, 0)) == INVALID_SOCKET)
{
    // some error
}

// create event and associate it with the socket
m_SocketEvent = WSACreateEvent();
if (m_SocketEvent == WSA_INVALID_EVENT)
{
    // some error
}

// associate only the following events: close, read, write
if (SOCKET_ERROR == WSAEventSelect(m_Socket, m_SocketEvent, FD_CLOSE | FD_READ | FD_WRITE))
{
    // some error
}

// connect to a server
int ConnectRet = WSAConnect(m_Socket, (SOCKADDR*)res->ai_addr, res->ai_addrlen, NULL, NULL, NULL, NULL);
if (ConnectRet == SOCKET_ERROR)
{
    // some error
}
And then, whenever I try to send some data over the socket to a (UDP socket) destination that is not listening or not reachable, I always get FD_READ triggered:
char buf[32]; // some data to send...
WSABUF DataBuf;
DataBuf.len = 32;
DataBuf.buf = (char*)&buf;
DWORD NumBytesActualSent;

if (SOCKET_ERROR == WSASend(m_Socket, &DataBuf, 1, &NumBytesActualSent, 0, 0, 0))
{
    if (WSAGetLastError() == WSAEWOULDBLOCK) // non-blocking socket - wait for send ok ?
    {
        // handle WSAEWOULDBLOCK...
    }
    else
    {
        // some error
        return;
    }
}

int ret = WSAWaitForMultipleEvents(1, &m_SocketEvent, FALSE, INFINITE, FALSE);
if (ret == WAIT_OBJECT_0)
{
    WSANETWORKEVENTS NetworkEvents;
    ZeroMemory(&NetworkEvents, sizeof(NetworkEvents));
    if (SOCKET_ERROR == WSAEnumNetworkEvents(m_Socket, m_SocketEvent, &NetworkEvents))
    {
        return; // some error
    }
    if (NetworkEvents.lNetworkEvents & FD_READ) // Read ?
    {
        if (NetworkEvents.iErrorCode[FD_READ_BIT] != 0) // read not ok ?
        {
            // some error
        }
        else
        {
            TRACE("Read Event Triggered ! - Why ? ? ? ? ? ? ?\n");
        }
    }
}
Any help or insights would be most appreciated!
Thanks,
Amit C.
The easiest way to inspect what really happens is to use Wireshark to capture packets. I do not have a Windows PC nearby to provide you a complete example, but my guess is that this is normal behaviour: you try to send a UDP datagram, it gets dropped by a router (host not found) or rejected by the server (socket closed), and an ICMP message is sent back to inform you about the failure. That ICMP message is what you receive and get an event for.

Since there is no actual user data, the underlying stack translates the ICMP message and provides you an appropriate error via the WSARecv() return code. The "false" FD_READ event is necessary since UDP is a connectionless protocol and there are no other means to inform you about the network state. Simply add error handling for WSARecv() and your code should work fine.
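In practice that error handling could look something like the following sketch. On a connected UDP socket, Winsock reports the ICMP failure as WSAECONNRESET from the receive call; the variable names reuse the question's code, and the 1024-byte buffer is illustrative:

if (NetworkEvents.lNetworkEvents & FD_READ)
{
    char rbuf[1024];
    WSABUF RecvBuf = { sizeof(rbuf), rbuf };
    DWORD flags = 0, bytes = 0;
    if (WSARecv(m_Socket, &RecvBuf, 1, &bytes, &flags, NULL, NULL) == SOCKET_ERROR)
    {
        if (WSAGetLastError() == WSAECONNRESET)
        {
            // ICMP "port unreachable" translated by the stack:
            // no real data arrived; log it and carry on
        }
        else
        {
            // genuine receive error
        }
    }
    else
    {
        // 'bytes' bytes of real data are in rbuf
    }
}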

IOCP C++ TCP client

I am having some trouble implementing a TCP IOCP client. I have implemented kqueue on Mac OS X, so I was looking to do something similar on Windows, and my understanding is that IOCP is the closest thing. The main problem is that GetQueuedCompletionStatus never returns and always times out. I assume I am missing something when creating the handle to monitor, but I'm not sure what. This is where I have gotten so far:
My connect routine (some error handling removed for clarity):
struct sockaddr_in server;
struct hostent *hp;
SOCKET sckfd;
WSADATA wsaData;
int iResult = WSAStartup(MAKEWORD(2,2), &wsaData);

if ((hp = gethostbyname(host)) == NULL)
    return NULL;

if ((sckfd = WSASocket(AF_INET, SOCK_STREAM, 0, NULL, 0, WSA_FLAG_OVERLAPPED)) == INVALID_SOCKET)
{
    printf("Error at socket(): Socket\n");
    WSACleanup();
    return NULL;
}

server.sin_family = AF_INET;
server.sin_port = htons(port);
server.sin_addr = *((struct in_addr *)hp->h_addr);
memset(&(server.sin_zero), 0, 8);

// non zero means non blocking. 0 is blocking.
u_long iMode = -1;
iResult = ioctlsocket(sckfd, FIONBIO, &iMode);
if (iResult != NO_ERROR)
    printf("ioctlsocket failed with error: %ld\n", iResult);

HANDLE hNewIOCP = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, ulKey, 0);
CreateIoCompletionPort((HANDLE)sckfd, hNewIOCP, ulKey, 0);

connect(sckfd, (struct sockaddr *)&server, sizeof(struct sockaddr));
//WSAConnect(sckfd, (struct sockaddr *)&server, sizeof(struct sockaddr), NULL, NULL, NULL, NULL);

return sckfd;
Here is the send routine (also with some error handling removed for clarity):
IOPortConnect(int ServerSocket, int timeout, string& data)
{
    char buf[BUFSIZE];
    strcpy(buf, data.c_str());
    WSABUF buffer = { BUFSIZE, buf };
    DWORD bytes_recvd;
    int r;
    ULONG_PTR ulKey = 0;
    OVERLAPPED overlapped;
    OVERLAPPED* pov = NULL;
    HANDLE port;

    HANDLE hNewIOCP = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, ulKey, 0);
    CreateIoCompletionPort((HANDLE)ServerSocket, hNewIOCP, ulKey, 0);

    BOOL get = GetQueuedCompletionStatus(hNewIOCP, &bytes_recvd, &ulKey, &pov, timeout * 1000);
    if (!get)
        printf("waiton server failed. Error: %d\n", WSAGetLastError());
    if (!pov)
        printf("waiton server failed. Error: %d\n", WSAGetLastError());

    port = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, (u_long)0, 0);
    SecureZeroMemory((PVOID)&overlapped, sizeof(WSAOVERLAPPED));

    r = WSASend(ServerSocket, &buffer, 1, &bytes_recvd, NULL, &overlapped, NULL);
    printf("WSA returned: %d WSALastError: %d\n", r, WSAGetLastError());
    if (r != 0)
    {
        printf("WSASend failed %d\n", GetLastError());
        printf("Bytes transfered: %d\n", bytes_recvd);
    }
    if (WSAGetLastError() == WSA_IO_PENDING)
        printf("we are async.\n");

    CreateIoCompletionPort(port, &overlapped.hEvent, ulKey, 0);
    BOOL test = GetQueuedCompletionStatus(port, &bytes_recvd, &ulKey, &pov, timeout * 1000);
    CloseHandle(port);
    return true;
}
Any insight would be appreciated.
You are associating the same socket with multiple I/O completion ports. I'm sure that's not valid. In your IOPortConnect function (where you do the write) you call CreateIoCompletionPort 4 times, passing in one-shot handles.
My advice:
Create a single I/O completion port (that, ultimately, you associate numerous sockets with).
Create a pool of worker threads (by calling CreateThread) that each block on the completion port handle by calling GetQueuedCompletionStatus in a loop (see the sketch after this list).
Create one or more overlapped sockets (WSA_FLAG_OVERLAPPED), and associate each one with the completion port.
Use the WSA socket functions that take an OVERLAPPED* to trigger overlapped operations.
Process the completion of the issued requests as the worker threads return from GetQueuedCompletionStatus with the OVERLAPPED* you passed in to start the operation.
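A minimal worker-thread loop for the second step might look like this sketch (iocp stands for the single completion port created in the first step; shutdown handling is elided):

DWORD WINAPI WorkerThread(LPVOID param)
{
    HANDLE iocp = (HANDLE)param;   // the single completion port
    for (;;)
    {
        DWORD bytes = 0;
        ULONG_PTR key = 0;
        OVERLAPPED* pov = NULL;
        BOOL ok = GetQueuedCompletionStatus(iocp, &bytes, &key, &pov, INFINITE);
        if (!ok && pov == NULL)
        {
            // GetQueuedCompletionStatus itself failed (bad handle, etc.)
            break;
        }
        // pov identifies the operation that completed (see Note2 below);
        // !ok with a non-NULL pov means the I/O operation itself failed
        // ... process the completed read/write here ...
    }
    return 0;
}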
Note: WSASend indicates that an IO completion packet will arrive at GetQueuedCompletionStatus in two ways: by returning 0, or by returning SOCKET_ERROR with WSAGetLastError() reporting WSA_IO_PENDING. Any other error code means you should process the error immediately, as no IO operation was queued and there will be no further callbacks.
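Expressed as code, the check after each overlapped call might look like this sketch (s, wsabuf, and ctx are illustrative names):

int r = WSASend(s, &wsabuf, 1, NULL, 0, &ctx.ovl, NULL);
if (r == SOCKET_ERROR && WSAGetLastError() != WSA_IO_PENDING)
{
    // hard failure: no completion packet will be queued, handle it now
}
// otherwise (r == 0 or WSA_IO_PENDING) the completion arrives
// later via GetQueuedCompletionStatus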
Note2: The OVERLAPPED* passed to the WSASend (or whatever) function is the OVERLAPPED* returned from GetQueuedCompletionStatus. You can use this fact to pass more context information with the call:
struct MYOVERLAPPED {
    OVERLAPPED ovl;
    // ... whatever per-operation context you need ...
};

MYOVERLAPPED ctx;
WSASend(..., &ctx.ovl);
...
OVERLAPPED* pov;
if (GetQueuedCompletionStatus(..., &pov, ...)) {
    MYOVERLAPPED* pCtx = (MYOVERLAPPED*)pov;
}
Chris has dealt with most of the issues and you've probably already looked at plenty of example code, but...
I've got some free IOCP code that's available here: http://www.serverframework.com/products---the-free-framework.html
There are also several of my CodeProject articles on the subject linked from that page.

Example code of libssh2 being used for port forwarding

I'm looking for an example of how to use libssh2 to set up SSH port forwarding. I've looked at the API, but there is very little documentation in the area of port forwarding.
For instance, when using PuTTY's plink there is the remote port to listen on, but also the local port that traffic should be sent to. Is it the developer's responsibility to set this up? Can someone give an example of how to do this?
Also, an example where a remote port is brought to a local port would be useful. Do I use libssh2_channel_direct_tcpip_ex()?
I'm willing to put up a bounty if need be to get a couple of working examples of this.
The key to making libssh2 port forwarding work was discovering that it basically just gives you the data that came in to that port. You have to actually send the data on to a local port that you open:
(Note: this code is not yet complete; there is no error checking, and the thread yielding isn't correct, but it gives a general outline of how to accomplish this.)
void reverse_port_forward(CMainDlg* dlg, addrinfo* hubaddr, std::string username, std::string password, int port)
{
    int iretval;
    unsigned long mode = 1;
    int last_socket_err = 0;
    int other_port = 0;
    fd_set read_set, write_set;

    SOCKET sshsock = socket(AF_INET, SOCK_STREAM, 0);
    iretval = connect(sshsock, hubaddr->ai_addr, hubaddr->ai_addrlen);
    if (iretval != 0)
        ::PostQuitMessage(0);

    LIBSSH2_SESSION* session = NULL;
    session = libssh2_session_init();
    iretval = libssh2_session_startup(session, sshsock);
    if (iretval)
        ::PostQuitMessage(0);

    iretval = libssh2_userauth_password(session, username.c_str(), password.c_str());
    dlg->m_track_status(dlg, 1, 0, "Authorized");

    LIBSSH2_LISTENER* listener = NULL;
    listener = libssh2_channel_forward_listen_ex(session, "127.0.0.1", port, &other_port, 1);
    if (!listener)
        ::PostQuitMessage(0);

    LIBSSH2_CHANNEL* channel = NULL;
    ioctlsocket(sshsock, FIONBIO, &mode);
    libssh2_session_set_blocking(session, 0); // non-blocking

    int err = LIBSSH2_ERROR_EAGAIN;
    while (err == LIBSSH2_ERROR_EAGAIN)
    {
        channel = libssh2_channel_forward_accept(listener);
        if (channel) break;
        err = libssh2_session_last_errno(session);
        boost::this_thread::yield();
    }

    if (channel)
    {
        char buf[MAX_BUF_LEN];
        char* chunk;
        long bytes_read = 0;
        long bytes_written = 0;
        int total_set = 0;

        timeval wait;
        wait.tv_sec = 0;
        wait.tv_usec = 2000;

        sockaddr_in localhost;
        localhost.sin_family = AF_INET;
        localhost.sin_addr.s_addr = inet_addr("127.0.0.1");
        localhost.sin_port = htons(5900);

        SOCKET local_sock = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
        ioctlsocket(local_sock, FIONBIO, &mode);
        iretval = connect(local_sock, (sockaddr*)&localhost, sizeof(localhost));
        if (iretval == SOCKET_ERROR)
            iretval = WSAGetLastError();

        while (1)
        {
            bytes_read = libssh2_channel_read(channel, buf, MAX_BUF_LEN);
            if (bytes_read >= 0)
            {
                FD_ZERO(&read_set);
                FD_ZERO(&write_set);
                FD_SET(local_sock, &write_set);

                // wait until the socket can be written to
                while (select(0, &read_set, &write_set, NULL, &wait) < 1)
                    boost::this_thread::yield();

                if (FD_ISSET(local_sock, &write_set))
                {
                    FD_CLR(local_sock, &write_set);
                    chunk = buf;
                    // everything may not get written in this call because we're non blocking. So
                    // keep writing more data until we've emptied the buffer pointer.
                    while ((bytes_written = send(local_sock, chunk, bytes_read, 0)) < bytes_read)
                    {
                        // if it couldn't write anything because the buffer is full, bytes_written
                        // will be negative which won't help our pointer math much
                        if (bytes_written > 0)
                        {
                            chunk = buf + bytes_written;
                            bytes_read -= bytes_written;
                            if (bytes_read == 0)
                                break;
                        }
                        FD_ZERO(&read_set);
                        FD_ZERO(&write_set);
                        FD_SET(local_sock, &write_set);
                        // wait until the socket can be written to
                        while (select(0, &read_set, &write_set, NULL, &wait) < 1)
                            boost::this_thread::yield();
                    }
                }
            }

            FD_ZERO(&read_set);
            FD_ZERO(&write_set);
            FD_SET(local_sock, &read_set);
            select(0, &read_set, &write_set, NULL, &wait);
            if (FD_ISSET(local_sock, &read_set))
            {
                FD_CLR(local_sock, &read_set);
                bytes_read = recv(local_sock, buf, MAX_BUF_LEN, 0);
                if (bytes_read >= 0)
                {
                    while ((bytes_written = libssh2_channel_write_ex(channel, 0, buf, bytes_read)) == LIBSSH2_ERROR_EAGAIN)
                        boost::this_thread::yield();
                }
            }
            boost::this_thread::yield();
        } // while
    } // if channel
}
P.S. Making this work requires the latest SVN builds of libssh2; there were bugs in prior versions that kept port forwarding from being usable.
The libssh2 source code has for a few years now included a direct_tcpip.c example, which demonstrates how to create direct-tcpip SSH channels, and since last week also a forward-tcpip.c example, which demonstrates how to create forward-tcpip SSH channels.
direct-tcpip is what ssh -L uses, and forward-tcpip is what ssh -R uses.
It is always the responsibility of libssh2 users to deal with the actual data. libssh2 takes care of SSH channels and nothing else. You can benefit significantly from studying the SSH RFCs, in particular RFC 4254, to learn more about what exactly each channel type promises you, and thus what you can expect from libssh2.
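To make the direct-tcpip case concrete, here is a heavily trimmed sketch of the ssh -L flow. Assumptions: sock is an already-connected TCP socket to the SSH server, the credentials and desthost:80 values are illustrative, and error handling plus the local listening socket (which you would relay to, as in the code above) are omitted:

#include <libssh2.h>

// one-time library setup
libssh2_init(0);

LIBSSH2_SESSION* session = libssh2_session_init();
libssh2_session_handshake(session, sock);                // sock: connected TCP socket
libssh2_userauth_password(session, "user", "password");

// ask the server to open a TCP connection to desthost:80,
// tunnelled back to us inside the SSH session (this is what ssh -L does)
LIBSSH2_CHANNEL* channel =
    libssh2_channel_direct_tcpip_ex(session, "desthost", 80,
                                    "127.0.0.1", 0);     // originator host/port, informational

// relay: bytes written to the channel come out at desthost:80,
// and whatever desthost sends back is read from the channel
char buf[4096];
libssh2_channel_write(channel, "GET / HTTP/1.0\r\n\r\n", 18);
ssize_t n = libssh2_channel_read(channel, buf, sizeof(buf));

libssh2_channel_free(channel);
libssh2_session_disconnect(session, "done");
libssh2_session_free(session);
libssh2_exit();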