recv() failed: Bad file descriptor (C++, Linux)

I have a problem. My program runs 10 TCP servers at the same time. As soon as a request from a client is noticed, the appropriate TCP server socket accepts the connection and handles it in a separate thread. I know this is not the most efficient way of solving my actual problem, but okay..
In main I have a for loop that calls a function, StartThread(), on each object of a class called Peer:
for (it = ListOfPeers.begin(); it != ListOfPeers.end(); it++)
{
    (*it).second->StartThread();
}
Of course there are some conditions under which this loop is used, but I wanted to narrow the code down as much as possible.
The function StartThread is called on each Peer object:
void StartThread()
{
    pthread_t threadDoEvent;
    pthread_create(&threadDoEvent, NULL, &DoEvent_helper, this);
    pthread_detach(threadDoEvent);
}

void *DoEvent_helper(void *ptr) // Helper to implement thread
{
    return ((Peer *)ptr)->DoEvent();
}
DoEvent is the function which will handle the request and connection:
void* DoEvent()
{
    unsigned char buffer[1024];
    int rc;
    int close_conn = FALSE;
    int new_sd;

    new_sd = accept(Socket, NULL, NULL);
    if (new_sd < 0)
    {
        if (errno != EWOULDBLOCK)
        {
            perror(" accept() failed");
            close_conn = TRUE;
        }
    }

    do
    {
        rc = recv(new_sd, buffer, sizeof(buffer), 0);
        if (rc < 0)
        {
            if (errno != EWOULDBLOCK)
            {
                perror(" recv() failed");
                close_conn = TRUE;
            }
            break;
        }
        if (rc == 0)
        {
            printf(" Connection closed\n");
            close_conn = TRUE;
            break;
        }

        [DO SOMETHING WITH THE BUFFER]

        rc = send(new_sd, buffer, len, 0);
        if (rc <= 0)
        {
            perror(" send() failed");
            close_conn = TRUE;
            break;
        }
    } while (close_conn == FALSE);

    close(new_sd);
}
My question is: why am I receiving the error recv() failed: Bad file descriptor?
When I add a sleep(1) between pthread_create and pthread_detach, everything works!
Can somebody explain these circumstances to me, or maybe help me solve my problem?
Thanks!

Well, you do try to receive even if accept fails. This leads me to believe that you haven't properly set up the passive Socket file descriptor before trying to accept from it.
If you set up the Socket descriptor in the main thread, it could be that the new thread starts and runs before you do that.
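A minimal sketch of one way to avoid that race, assuming the listening descriptor is a member of the Peer class as in the question: finish socket()/bind()/listen() before StartThread() spawns the worker, and make DoEvent() bail out when accept() fails instead of falling through to recv(). The SetupListener helper and the echo loop are made up for the example.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <pthread.h>
#include <cstdio>

struct Peer
{
    int Socket = -1;

    // Hypothetical setup helper: the descriptor is valid *before* any thread touches it.
    bool SetupListener(unsigned short port)
    {
        Socket = socket(AF_INET, SOCK_STREAM, 0);
        if (Socket < 0) { perror("socket"); return false; }

        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(port);
        if (bind(Socket, (sockaddr*)&addr, sizeof(addr)) < 0) { perror("bind"); return false; }
        if (listen(Socket, 5) < 0) { perror("listen"); return false; }
        return true;
    }

    void StartThread()
    {
        pthread_t threadDoEvent;
        pthread_create(&threadDoEvent, NULL, &DoEvent_helper, this);
        pthread_detach(threadDoEvent);
    }

    static void* DoEvent_helper(void* ptr) { return ((Peer*)ptr)->DoEvent(); }

    void* DoEvent()
    {
        int new_sd = accept(Socket, NULL, NULL);
        if (new_sd < 0)
        {
            perror("accept() failed");
            return NULL;                        // do NOT fall through to recv() on a bad descriptor
        }

        char buffer[1024];
        int rc;
        while ((rc = recv(new_sd, buffer, sizeof(buffer), 0)) > 0)
            send(new_sd, buffer, rc, 0);        // echo the data back, just as a placeholder
        close(new_sd);
        return NULL;
    }
};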

Make sure new_sd is in persistent memory. E.g., if it is allocated by a function that launches the thread and then exits, new_sd will be pointing to junk (at least that was the case for me).
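To illustrate that point (a generic sketch, not code from the question): heap-allocate whatever the thread needs, here just the accepted descriptor, so that it outlives the function that launches the thread.
#include <pthread.h>
#include <unistd.h>
#include <cstdlib>

// The worker gets its own copy of the descriptor through heap memory it then owns.
static void* HandleClient(void* arg)
{
    int client_fd = *(int*)arg;
    free(arg);                       // the launching function may be long gone by now
    // ... recv()/send() on client_fd ...
    close(client_fd);
    return NULL;
}

static void LaunchHandler(int accepted_fd)
{
    int* arg = (int*)malloc(sizeof(int));   // survives after this function returns
    if (arg == NULL)
    {
        close(accepted_fd);
        return;
    }
    *arg = accepted_fd;

    pthread_t thread;
    pthread_create(&thread, NULL, &HandleClient, arg);
    pthread_detach(thread);
}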

Related

How to handle the 10093 (WSANOTINITIALISED) error when invoking the accept method

Recently my code has run into an issue. My programme works as a server and listens for ONE client to connect and send some commands to my programme. I then handle the command and return a value to the client side.
But now I have the issue that the accept() method gets the 10093 (WSANOTINITIALISED) error, and it seems accept() doesn't block there.
It doesn't always happen. I tested the programme; sometimes it works very well: the client side connects to my programme and sends the first command, my programme handles the command and sends back the return value, then stops the connection (closesocket(sClient);). Then the client side connects to my programme again and sends the second command... But from time to time accept() gets the 10093 (WSANOTINITIALISED) error, the client side fails to connect to my programme any more, and the while loop (while (true && !m_bExitThread)) also doesn't block.
My questions are:
Why does it happen? Has someone met the same issue? I believe my code should be correct; otherwise, why does it work well most of the time?
If this 10093 error comes, how should I handle it? Do I need to closesocket and wait for the client side to connect again, or do I need to call WSACleanup() and try to start this socket thread again from scratch?
Below is the code. It is a thread that I start when my programme starts up and stop when the programme is stopped.
UINT CMainFrame::RunSocketThread()
{
m_bExitThread = false;
WORD wSockVersion = MAKEWORD(2, 2);
WSADATA wsaData;
if (WSAStartup(wSockVersion, &wsaData) != 0) // Here always success, no problem
{
LOGL(ILoggingSink::LogLevel::Error, _T("WSAStartup error !"));
return 0;
}
SOCKET slisten = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
if (slisten == INVALID_SOCKET)
{
LOGL(ILoggingSink::LogLevel::Error, _T("socket error !"));
return 0;
}
sockaddr_in stSocketAddr;
stSocketAddr.sin_family = AF_INET;
stSocketAddr.sin_port = htons(7700);
stSocketAddr.sin_addr.S_un.S_addr = INADDR_ANY;
if (bind(slisten, (LPSOCKADDR)&stSocketAddr, sizeof(stSocketAddr)) == SOCKET_ERROR)
{
LOGL(ILoggingSink::LogLevel::Error, _T("bind error !"));
return 0;
}
if (listen(slisten, 5) == SOCKET_ERROR)
{
LOGL(ILoggingSink::LogLevel::Error, _T("listen error !"));
return 0;
}
SOCKET sClient;
sockaddr_in remoteAddr;
int nAddrlen = sizeof(remoteAddr);
char revData[255];
while (true && !m_bExitThread)
{
// Waiting for connection
sClient = accept(slisten, (SOCKADDR*)&remoteAddr, &nAddrlen);
if (sClient == INVALID_SOCKET) // Here I can get error code 10093(WSANOTINITIALISED)
{
LOGL(ILoggingSink::LogLevel::Error, _T("accept error %d!"), WSAGetLastError());
continue;
}
// revice data
int ret = recv(sClient, revData, 255, 0);
if (ret > 0)
{
revData[ret] = 0x00;
ParseJsonCommand(revData);
}
// send data
// Here I wait for programme finished handling the income command and return a value, otherwise just sleep and wait
while (CmdLineInfo::m_eReturn == ReturnTypeEnum::kNull)
{
Sleep(100);
}
const char* sendData;
CString strData;
strData = "{\"Command\":\"";
strData += CmdLineInfo::s_sLFODCommandName;
strData += "\", \"ReturnValue\":\"";
if(CmdLineInfo::m_eReturn == ReturnTypeEnum::kSuccess)
strData +="1\"} ";
else
strData += "0\"} ";
CStringA strAData(strData);
sendData = strAData;
send(sClient, sendData, strlen(sendData), 0);
closesocket(sClient);
}
closesocket(slisten);
WSACleanup();
return 0;
}

C TCP socket non-blocking receive timeout

I am trying to write a client which will try to receive data for up to 3 seconds. I have implemented the connect method using select with the code below.
//socket creation
m_hSocket = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
m_stAddress.sin_family = AF_INET;
m_stAddress.sin_addr.S_un.S_addr = inet_addr(pchIP);
m_stAddress.sin_port = htons(iPort);
m_stTimeout.tv_sec = SOCK_TIMEOUT_SECONDS;
m_stTimeout.tv_usec = 0;
//connecting to server
long iMode = 1;
int iResult = ioctlsocket(m_hSocket, FIONBIO, &iMode);
connect(m_hSocket, (struct sockaddr *)&m_stAddress, sizeof(m_stAddress));
iMode = 0; // back to blocking mode
iResult = ioctlsocket(m_hSocket, FIONBIO, &iMode);
fd_set stWrite;
FD_ZERO(&stWrite);
FD_SET(m_hSocket, &stWrite);
iResult = select(0, NULL, &stWrite, NULL, &m_stTimeout);
if((iResult > 0) && (FD_ISSET(m_hSocket, &stWrite)))
return true;
But I cannot figure out what I am missing for the receive timeout with the code below. It doesn't wait if the server connection gets disconnected; it just returns instantly from the select method.
Also, how can I write a non-blocking socket call with a timeout for socket send?
long iMode = 1;
int iResult = ioctlsocket(m_hSocket, FIONBIO, &iMode);
fd_set stRead;
FD_ZERO(&stRead);
FD_SET(m_hSocket, &stRead);
int iRet = select(0, &stRead, NULL, NULL, &m_stTimeout);
if ((iRet > 0) && (FD_ISSET(m_hSocket, &stRead)))
{
while ((iBuffLen-1) > 0)
{
int iRcvLen = recv(m_hSocket, pchBuff, iBuffLen-1, 0);
if (iRcvLen == SOCKET_ERROR)
{
return false;
}
else if (iRcvLen == 0)
{
break;
}
pchBuff += iRcvLen;
iBuffLen -= iRcvLen;
}
}
The first parameter to select should not be 0.
Correct usage of select can be found here:
http://developerweb.net/viewtopic.php?id=2933
The first parameter should be the maximum value of your socket descriptors + 1, and you should take interrupted system calls into account if the socket is non-blocking:
/* Call select() */
do {
FD_ZERO(&readset);
FD_SET(socket_fd, &readset);
result = select(socket_fd + 1, &readset, NULL, NULL, NULL);
} while (result == -1 && errno == EINTR);
This is just example code; you probably need the timeout parameter as well.
If you can get EINTR, this complicates the required logic, because after an EINTR you have to make the same call again, but with the remaining time still left to wait.
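For instance, a sketch of that retry logic (POSIX-style, matching the example above; the wait_readable name and the fixed time budget are just for illustration):
#include <sys/select.h>
#include <sys/time.h>
#include <errno.h>

/* Wait until socket_fd is readable, retrying select() after EINTR with only
   the time that remains of the original budget. Returns >0 if readable,
   0 on timeout, -1 on any other error. */
int wait_readable(int socket_fd, long timeout_sec)
{
    struct timeval now, deadline, remaining;
    gettimeofday(&now, NULL);
    deadline = now;
    deadline.tv_sec += timeout_sec;

    for (;;)
    {
        gettimeofday(&now, NULL);
        remaining.tv_sec  = deadline.tv_sec  - now.tv_sec;
        remaining.tv_usec = deadline.tv_usec - now.tv_usec;
        if (remaining.tv_usec < 0) { remaining.tv_usec += 1000000; remaining.tv_sec -= 1; }
        if (remaining.tv_sec < 0)
            return 0;                       /* budget already exhausted */

        fd_set readset;
        FD_ZERO(&readset);
        FD_SET(socket_fd, &readset);

        int result = select(socket_fd + 1, &readset, NULL, NULL, &remaining);
        if (result == -1 && errno == EINTR)
            continue;                       /* interrupted: loop again with the new remaining time */
        return result;
    }
}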
I think for non-blocking mode one needs to check for recv() failure along with a timeout value. That means select() first reports whether the socket is ready to receive data or not. If yes, we go forward; otherwise we wait until the timeout elapses on the select() call. But if the receive fails due to some unexpected situation while inside the read loop, we need to manually check for the socket error and the maximum timeout value. If the socket error persists and the timeout elapses, we break out.
I'm done with my receive timeout logic in non-blocking mode.
Please correct me if I am wrong.
bool bReturn = true;
SetNonBlockingMode(true);
//check whether the socket is ready to receive
fd_set stRead;
FD_ZERO(&stRead);
FD_SET(m_hSocket, &stRead);
int iRet = select(0, &stRead, NULL, NULL, &m_stTimeout);
DWORD dwStartTime = GetTickCount();
DWORD dwCurrentTime = 0;
//if socket is not ready this line will be hit after 3 sec timeout and go to the end
//if it is ready control will go inside the read loop and reads data until data ends or
//socket error is getting triggered continuously for more than 3 secs.
if ((iRet > 0) && (FD_ISSET(m_hSocket, &stRead)))
{
while ((iBuffLen-1) > 0)
{
int iRcvLen = recv(m_hSocket, pchBuff, iBuffLen-1, 0);
dwCurrentTime = GetTickCount();
if ((iRcvLen == SOCKET_ERROR) && ((dwCurrentTime - dwStartTime) >= SOCK_TIMEOUT_SECONDS * 1000))
{
bReturn = false;
break;
}
else if (iRcvLen == 0)
{
break;
}
pchBuff += iRcvLen;
iBuffLen -= iRcvLen;
}
}
SetNonBlockingMode(false);
return bReturn;

How to use both TCP and UDP in one application in c++

I'm developing a server-client project based on Winsock in c++. I have designed the server and the client sides so that they can send and receive text messages and files as well.
Then I decided to go for audio communication between server and client. I've actually implemented that; however, I've figured out that I've done everything using the TCP protocol, and that for audio communication it is better to use UDP.
Then I searched the internet and found out that it is possible to use both TCP and UDP alongside each other.
I've tried to use the UDP protocol, but I haven't made any major progress.
My problem is that I use both recv() and recvfrom() in a while loop like this:
while (true)
{
buflen = recv(clientS, buffer, 1024, NULL);
if (buflen > 0)
{
// Send the received buffer
}
else if (buflen == 0)
{
printf("closed\n");
break;
}
buflen = recvfrom(udpS, buffer, 1024, NULL, (struct sockaddr*)&_s, &_size);
But the recvfrom() call blocks. I think I haven't done the job properly, but I couldn't find out how to do it correctly.
Here, Server in C accepting UDP and TCP connections, I found a similar question, but the answers were just explanations and there was no sample code to demonstrate the point clearly.
Now I need you to help me understand clearly how to receive data from both TCP and UDP connections.
Any help is appreciated.
When dealing with multiple sockets at a time, use select() to know which socket has data pending before you read it, eg:
while (true)
{
fd_set rfd;
FD_ZERO(&rfd);
FD_SET(clientS, &rfd);
FD_SET(udpS, &rfd);
struct timeval timeout;
timeout.tv_sec = ...;
timeout.tv_usec = ...;
int ret = select(0, &rfd, NULL, NULL, &timeout);
if (ret == SOCKET_ERROR)
{
// handle error
break;
}
if (ret == 0)
{
// handle timeout
continue;
}
// at least one socket is readable, figure out which one(s)...
if (FD_ISSET(clientS, &rfd))
{
buflen = recv(clientS, buffer, 1024, NULL);
if (buflen == SOCKET_ERROR)
{
// handle error...
printf("error\n");
}
else if (buflen == 0)
{
// handle disconnect...
printf("closed\n");
}
else
{
// handle received data...
}
}
if (FD_ISSET(udpS, &rfd))
{
buflen = recvfrom(udpS, buffer, 1024, NULL, (struct sockaddr*)&_s, &_size);
//...
}
}

Unblocking WSAAccept for blocking TCP server sockets

I'm writing a TCP server (blocking socket model).
I'm having trouble implementing a valid, normal program exit while the server is waiting (blocking) for new connection attempts in accept (I use WSAAccept).
The code for the server's listening socket is something like this (I omitted error handling and other irrelevant code):
int ErrCode = WSAStartup(MAKEWORD(2,2), &m_wsaData) ;
// Create a new socket to listen and accept new connection attempts
struct addrinfo hints, *res = NULL, *ptr = NULL ;
int rc, count = 0 ;
memset(&hints, 0, sizeof(hints)) ;
hints.ai_family = AF_UNSPEC ;
hints.ai_socktype = SOCK_STREAM ;
hints.ai_protocol = IPPROTO_TCP ;
hints.ai_flags = AI_PASSIVE ;
CString strPort ;
strPort.Format("%d", Port) ;
getaddrinfo(pLocalIp, strPort.GetBuffer(), &hints, &res) ;
strPort.ReleaseBuffer() ;
ptr = res ;
if ((m_Socket = WSASocket(res->ai_family, res->ai_socktype, res->ai_protocol, NULL, 0, 0)) == INVALID_SOCKET)
{
// some error
}
if(bind(m_Socket, (SOCKADDR *)res->ai_addr, res->ai_addrlen) == SOCKET_ERROR)
{
// some error
}
if (listen(m_Socket, SOMAXCONN) == SOCKET_ERROR)
{
// some error
}
So far so good... Then I implemented the WSAccept call inside a thread like this:
SOCKADDR_IN ClientAddr ;
int ClientAddrLen = sizeof(ClientAddr) ;
SOCKET TempS = WSAAccept(m_Socket, (SOCKADDR*) &ClientAddr, &ClientAddrLen, NULL, NULL);
Of course WSAAccept blocks until a new connection attempt is made, but if I wish to exit the program then I need some way to cause WSAAccept to return. I have tried several different approaches:
Attempting to call shutdown and/or closesocket on m_Socket from within another thread failed (the program just hangs).
Using WSAEventSelect indeed solves this issue, but then WSAAccept delivers only non-blocking sockets, which is not my intention. (Is there a way to make the sockets blocking again?)
I read about APC and tried to use something like QueueUserAPC(MyAPCProc, m_hThread, 1), but it didn't work either.
What am I doing wrong ?
Is there a better way to cause this blocking WSAccept to exit ?
Use select() with a timeout to detect when a client connection is actually pending before calling WSAAccept() to accept it. It works with blocking sockets without putting them into non-blocking mode. That will give your code more opportunities to check whether the app is shutting down.
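A rough sketch of that loop; the AcceptLoop wrapper and the shuttingDown flag are made up for the example (they stand in for the question's accept thread and whatever shutdown signal the app already has), while the WSAAccept call matches the question's:
#include <winsock2.h>

// Accept loop that can notice a shutdown request: select() with a timeout tells us when a
// connection is pending, so the blocking WSAAccept() only runs when it will return quickly.
void AcceptLoop(SOCKET listenSock, volatile bool& shuttingDown)
{
    while (!shuttingDown)
    {
        fd_set readset;
        FD_ZERO(&readset);
        FD_SET(listenSock, &readset);

        timeval tv;
        tv.tv_sec = 1;                      // wake up once a second to re-check the flag
        tv.tv_usec = 0;

        int ret = select(0, &readset, NULL, NULL, &tv);
        if (ret == SOCKET_ERROR)
            break;                          // log WSAGetLastError() and bail out
        if (ret == 0)
            continue;                       // timeout: nothing pending, just loop

        SOCKADDR_IN ClientAddr;
        int ClientAddrLen = sizeof(ClientAddr);
        SOCKET TempS = WSAAccept(listenSock, (SOCKADDR*)&ClientAddr, &ClientAddrLen, NULL, NULL);
        if (TempS != INVALID_SOCKET)
        {
            // hand TempS off to a worker thread as before...
            closesocket(TempS);             // placeholder so the sketch is self-contained
        }
    }
}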
Go with a non-blocking accepting socket (WSAEventSelect, as you mentioned) and use a non-blocking WSAAccept. You can turn a non-blocking socket that WSAAccept returns back into a blocking socket with ioctlsocket (see MSDN).
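For reference, a small sketch of that conversion (the MakeBlocking helper is just for the example): as I read the MSDN docs, the accepted socket inherits both the event association and the non-blocking mode from the listener, so clear the WSAEventSelect association first and then clear FIONBIO.
#include <winsock2.h>

// Return an accepted socket to blocking mode after the listener used WSAEventSelect().
bool MakeBlocking(SOCKET sock, WSAEVENT hEvent)
{
    // 1. Cancel the event association (lNetworkEvents = 0).
    if (WSAEventSelect(sock, hEvent, 0) == SOCKET_ERROR)
        return false;

    // 2. Clear the non-blocking flag.
    u_long mode = 0;
    if (ioctlsocket(sock, FIONBIO, &mode) == SOCKET_ERROR)
        return false;

    return true;
}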
Do all the other stuff you absolutely have to do on shutdown (maybe you have DB connections to close, or files to flush?), and then call ExitProcess(0). That will stop your listening thread, no problem.
See the log4cplus source for my take on this issue. I basically wait on two event objects: one is signaled when a connection is being accepted (using WSAEventSelect()), and the other is there to interrupt the waiting. The most relevant parts of the source are below; see ServerSocket::accept().
namespace {
static
bool
setSocketBlocking (SOCKET_TYPE s)
{
u_long val = 0;
int ret = ioctlsocket (to_os_socket (s), FIONBIO, &val);
if (ret == SOCKET_ERROR)
{
set_last_socket_error (WSAGetLastError ());
return false;
}
else
return true;
}
static
bool
removeSocketEvents (SOCKET_TYPE s, HANDLE ev)
{
// Clean up socket events handling.
int ret = WSAEventSelect (to_os_socket (s), ev, 0);
if (ret == SOCKET_ERROR)
{
set_last_socket_error (WSAGetLastError ());
return false;
}
else
return true;
}
static
bool
socketEventHandlingCleanup (SOCKET_TYPE s, HANDLE ev)
{
bool ret = removeSocketEvents (s, ev);
ret = setSocketBlocking (s) && ret;
ret = WSACloseEvent (ev) && ret;
return ret;
}
} // namespace
ServerSocket::ServerSocket(unsigned short port)
{
sock = openSocket (port, state);
if (sock == INVALID_SOCKET_VALUE)
{
err = get_last_socket_error ();
return;
}
HANDLE ev = WSACreateEvent ();
if (ev == WSA_INVALID_EVENT)
{
err = WSAGetLastError ();
closeSocket (sock);
sock = INVALID_SOCKET_VALUE;
}
else
{
assert (sizeof (std::ptrdiff_t) >= sizeof (HANDLE));
interruptHandles[0] = reinterpret_cast<std::ptrdiff_t>(ev);
}
}
Socket
ServerSocket::accept ()
{
int const N_EVENTS = 2;
HANDLE events[N_EVENTS] = {
reinterpret_cast<HANDLE>(interruptHandles[0]) };
HANDLE & accept_ev = events[1];
int ret;
// Create event and prime socket to set the event on FD_ACCEPT.
accept_ev = WSACreateEvent ();
if (accept_ev == WSA_INVALID_EVENT)
{
set_last_socket_error (WSAGetLastError ());
goto error;
}
ret = WSAEventSelect (to_os_socket (sock), accept_ev, FD_ACCEPT);
if (ret == SOCKET_ERROR)
{
set_last_socket_error (WSAGetLastError ());
goto error;
}
do
{
// Wait either for interrupt event or actual connection coming in.
DWORD wsawfme = WSAWaitForMultipleEvents (N_EVENTS, events, FALSE,
WSA_INFINITE, TRUE);
switch (wsawfme)
{
case WSA_WAIT_TIMEOUT:
case WSA_WAIT_IO_COMPLETION:
// Retry after timeout or APC.
continue;
// This is interrupt signal/event.
case WSA_WAIT_EVENT_0:
{
// Reset the interrupt event back to non-signalled state.
ret = WSAResetEvent (reinterpret_cast<HANDLE>(interruptHandles[0]));
// Clean up socket events handling.
ret = socketEventHandlingCleanup (sock, accept_ev);
// Return Socket with state set to accept_interrupted.
return Socket (INVALID_SOCKET_VALUE, accept_interrupted, 0);
}
// This is accept_ev.
case WSA_WAIT_EVENT_0 + 1:
{
// Clean up socket events handling.
ret = socketEventHandlingCleanup (sock, accept_ev);
// Finally, call accept().
SocketState st = not_opened;
SOCKET_TYPE clientSock = acceptSocket (sock, st);
int eno = 0;
if (clientSock == INVALID_SOCKET_VALUE)
eno = get_last_socket_error ();
return Socket (clientSock, st, eno);
}
case WSA_WAIT_FAILED:
default:
set_last_socket_error (WSAGetLastError ());
goto error;
}
}
while (true);
error:;
DWORD eno = get_last_socket_error ();
// Clean up socket events handling.
if (sock != INVALID_SOCKET_VALUE)
{
(void) removeSocketEvents (sock, accept_ev);
(void) setSocketBlocking (sock);
}
if (accept_ev != WSA_INVALID_EVENT)
WSACloseEvent (accept_ev);
set_last_socket_error (eno);
return Socket (INVALID_SOCKET_VALUE, not_opened, eno);
}
void
ServerSocket::interruptAccept ()
{
(void) WSASetEvent (reinterpret_cast<HANDLE>(interruptHandles[0]));
}
A not-so-neat way of solving this problem is to issue a dummy WSAConnect request from the thread that needs to do the shutdown. If the dummy connect fails, you might resort to ExitProcess as suggested by Martin.
void Drain()
{
if (InterlockedIncrement(&drain) == 1)
{
// Make a dummy connection to unblock wsaaccept
SOCKET ConnectSocket = WSASocket(AF_INET, SOCK_STREAM, IPPROTO_TCP, NULL, 0, 0);
if (ConnectSocket != INVALID_SOCKET) {
int iResult = WSAConnect(ConnectSocket, result->ai_addr, result->ai_addrlen, 0, 0, 0, 0);
if (iResult != 0) {
printf("Unable to connect to server! %d\n", WSAGetLastError());
}
else
{
closesocket(ConnectSocket);
}
}
}
}

c++ ServerSocket(), FD_CLOEXEC, fork() & execl()

I had issues trying to background a command in my app, so I was told here to double-fork and clear some of the settings; this was my result:
if((pid = fork()) < 0)
perror("Error with Fork()");
else if(pid > 0) {
return "";
}
if (setsid()==-1) {
Log("failed to become a session leader");
}
if (chdir("/") == -1) {
Log("failed to change working directory");
}
umask(0);
close(STDIN_FILENO);
close(STDOUT_FILENO);
close(STDERR_FILENO);
if (open("/dev/null",O_RDONLY) == -1) {
Log("failed to reopen stdin");
}
if (open("/dev/null",O_WRONLY) == -1) {
Log("failed to reopen stdout");
}
if (open("/dev/null",O_RDWR) == -1) {
Log("failed to reopen stderr");
}
signal(SIGHUP, SIG_IGN);
Log("No return, forking..");
if((pid = fork()) < 0)
perror("Error with Fork()");
else if(pid > 0) {
return "";
} else {
if(execl("/bin/bash", "/bin/bash", "-c", cmddo, (char*) 0) < 0) perror("execl()");
exit(0);
}
Double-forking fixed the issue of the execl'd process stopping when its parent is closed, but it left me with the execl'd process holding onto the parent's socket, so when the parent tries to start again it can't.
Here is my parent socket stuff:
ServerSocket server(listenport);
while(true)
{
ServerSocket* new_sock = new ServerSocket();
server.accept (*new_sock);
pthread_t thread;
int rc = pthread_create(&thread, NULL, &LoadThread, (void*)(new_sock));
if (rc) Log_warn("Fatal Error: pthread_create() #%d", rc);
pthread_detach(thread);
}
I was told to set FD_CLOEXEC on the socket in my last question, but I do not understand how to do that, and Google (plus Stack Overflow) isn't giving me much help in that regard.
How do I set FD_CLOEXEC on my ServerSocket() so that when I fork/execl a subprocess it won't hold onto my socket?
Thanks :D
ANSWER:
As advised below, clearing out the fds is what fixed it; my code actually had this, and it worked for me:
struct rlimit rl;
int i;
getrlimit(RLIMIT_NOFILE, &rl);   // query the current fd limit before using rl
if (rl.rlim_max == RLIM_INFINITY)
    rl.rlim_max = 1024;
for (i = 0; (unsigned) i < rl.rlim_max; i++)
    close(i);
FD_CLOEXEC is a flag that can be set on the file descriptor -- its effect is that when a process holding the handle calls exec(), the descriptor is closed.
Use
fcntl(fd, F_SETFD, (long)FD_CLOEXEC);
to set the flag; for this to work, you need to access the actual file descriptor.
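For example (a sketch; the set_cloexec helper and the get_fd() accessor on ServerSocket are made up, since how you reach the raw descriptor depends on your ServerSocket class):
#include <fcntl.h>

// Mark a descriptor close-on-exec so children created by fork()/execl() do not inherit it.
bool set_cloexec(int fd)
{
    int flags = fcntl(fd, F_GETFD);              // read the current descriptor flags
    if (flags == -1)
        return false;
    return fcntl(fd, F_SETFD, flags | FD_CLOEXEC) != -1;
}

// e.g. right after creating the listening socket:
//   set_cloexec(server.get_fd());               // get_fd() is a hypothetical accessor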
Also, setsid() is fully sufficient to disassociate yourself from the parent process group, and while double fork() also works, it does not earn you brownie points with the embedded folks.
And last, there is no guarantee that after closing the first three file descriptors, the next fds opened will be those three; it is better to use
int newstdin = open("/dev/null", O_RDWR);
if (dup2(newstdin, STDIN_FILENO) == -1) { /* handle error */ }
close(newstdin);