I am writing a simple client/server communication with FIFOs, but I am stuck on using a signal handler to process client requests.
The server opens a FIFO in read-only, non-blocking mode, reads the data received, and writes some data back to the client FIFO.
This actually works fine when there is no signal handler on the server side. Here is the main code for both sides.
Server:
int main(int argc, char *argv[])
{
// install handler
struct sigaction action;
action.sa_handler = requestHandler;
sigemptyset(&(action.sa_mask));
action.sa_flags = SA_RESETHAND | SA_RESTART; // note: SA_RESETHAND restores the default disposition after the first signal, so the handler fires only once unless it is reinstalled
sigaction(SIGIO, &action, NULL);
if(!makeFifo(FIFO_READ, 0644))
exit(1);
int rd_fifo = openFifo(FIFO_READ, O_RDONLY | O_NONBLOCK); // non blocking
if(rd_fifo == -1)
exit(1);
// wait for request and answer
while (1) {
qWarning() << "waiting client...";
sleep(1);
QString msg = readFifo(rd_fifo);
qWarning() << "msg = " << msg;
if(msg == "ReqMode") {
int wr_fifo = openFifo(FIFO_WRITE, O_WRONLY); // blocking
writeFifo(wr_fifo, QString("mode"));
break;
} else
qWarning() << "unknow request ..";
}
close(rd_fifo);
unlink(FIFO_READ);
return 0;
}
Client:
int main(int argc, char *argv[])
{
int wr_fifo = openFifo(FIFO_WRITE, O_WRONLY);
if(wr_fifo == -1)
exit(1);
// create a fifo to read server answer
if(!makeFifo(FIFO_READ, 0644))
exit(1);
// ask the server for its mode
writeFifo(wr_fifo, QString("ReqMode"));
// read its answer and print it
int rd_fifo = openFifo(FIFO_READ, O_RDONLY); // blocking
qWarning() << "server is in mode : " << readFifo(rd_fifo);
close(rd_fifo);
unlink(FIFO_READ);
return 0;
}
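For reference, the helper functions are not shown in the question; a hypothetical reconstruction, consistent with the log output further down, could look like this:
// Hypothetical reconstruction of the helpers used above (not the
// original code); error handling is reduced to the bare minimum.
bool makeFifo(const char* path, mode_t mode)
{
    return mkfifo(path, mode) == 0 || errno == EEXIST;
}

int openFifo(const char* path, int flags)
{
    return open(path, flags); // returns -1 on error
}

QString readFifo(int fd)
{
    char buf[64] = {0};
    ssize_t nb_read = read(fd, buf, sizeof(buf) - 1);
    qWarning() << "nb_read = " << nb_read; // matches the server log further down
    return nb_read > 0 ? QString::fromUtf8(buf) : QString();
}

bool writeFifo(int fd, const QString& msg)
{
    QByteArray bytes = msg.toUtf8();
    return write(fd, bytes.constData(), bytes.size()) == bytes.size();
}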
Everything works as expected (even if not all errors are properly handled; this is just sample code to demonstrate that it is possible).
The problem is that the handler (not shown here, but it only prints a message on the terminal with the signal received) is never called when the client writes data to the FIFO. Besides, I have checked that if I send a kill -SIGIO to the server from a shell (or from elsewhere), the signal handler is executed.
Thanks for your help.
Actually, I was missing the following 3 lines on the server side:
fcntl(rd_fifo, F_SETOWN, getpid()); // set the PID of the receiving process
fcntl(rd_fifo, F_SETFL, fcntl(rd_fifo, F_GETFL) | O_ASYNC); // enable asynchronous behaviour
fcntl(rd_fifo, F_SETSIG, SIGIO); // set the signal sent when the kernel tells us there is a read/write on the fifo (F_SETSIG is Linux-specific and needs _GNU_SOURCE)
The last point was important because the default signal sent was 0 in my case, so I had to set it explicitly to SIGIO to make things work. Here is the output of the server side:
waiting client...
nb_read = 0
msg = ""
unknow request ..
waiting client...
signal 29
SIGPOLL
nb_read = 7
msg = "ReqMode"
Now, I guess it's possible to handle the request inside the handler by moving what is inside the while loop into the requestHandler function, along these lines:
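(A sketch, assuming the read FIFO descriptor is exposed to the handler through a global. Only async-signal-safe calls such as read(), write(), open(), and close() strictly belong inside a signal handler, which is why the Qt logging stays out.)
static int g_rd_fifo = -1; // assumed to be set right after openFifo()

void requestHandler(int sig)
{
    (void)sig;
    char buf[64];
    ssize_t n = read(g_rd_fifo, buf, sizeof(buf) - 1);
    if (n <= 0)
        return;
    buf[n] = '\0';
    if (strcmp(buf, "ReqMode") == 0) {
        int wr_fifo = open(FIFO_WRITE, O_WRONLY); // blocking
        if (wr_fifo != -1) {
            write(wr_fifo, "mode", 4);
            close(wr_fifo);
        }
    }
}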
I am working on a game server that uses sockets and implemented a polling function that sends the message "[POLL]" over all player sockets in a lobby every second to notify the player clients that their connection is still alive.
If I disconnect on the client side, the socket is still polled with no errors. However, if I create a new connection with the same client (it gets a new FD and is added to the map as a second player), the whole server crashes without any exceptions/warnings/messages when it attempts to write to the previous socket FD. My call to Write on the socket is wrapped in a try/catch that doesn't catch any exceptions, and when debugging with gdb I am not given any error messaging.
This is the Socket Write function:
int Socket::Write(ByteArray const& buffer)
{
if (!open)
{
return -1;
}
// Convert buffer to a raw char array
char* raw = new char[buffer.v.size()];
for (size_t i = 0; i < buffer.v.size(); i++)
{
    raw[i] = buffer.v[i];
}
// Perform the write operation
int returnValue = write(GetFD(), raw, buffer.v.size()); // <- Crashes program
delete[] raw; // free the temporary copy once written
if (returnValue <= 0)
{
open = false;
}
return returnValue;
}
And this is the Poll function (Players are stored in a map of uint -> Socket*):
/*
Polls all connected players to tell them
to keep their connections alive.
*/
void Lobby::Poll()
{
playerMtx.lock();
for (auto it = players.begin(); it != players.end(); it++)
{
try
{
if (it->second != nullptr && it->second->IsOpen())
{
it->second->Write("[POLL]");
}
}
catch (...)
{
std::cout << "Failed to write to " << it->first << std::endl;
}
}
playerMtx.unlock();
}
I would expect to see the "Failed to write to " message but instead the entire server program exits with no messaging. What could be happening here?
I was unable to find a reason for the program crashing in the call to write but I was able to find a workaround.
I perform a poll operation on the file descriptor prior to calling write and I query the POLLNVAL event. If I receive a nonzero value, the FD is now invalid.
// Check if FD is valid
struct pollfd pollFd;
pollFd.fd = GetFD();
pollFd.events = POLLNVAL; // ignored in events; POLLNVAL is always reported in revents
pollFd.revents = 0;
if (poll(&pollFd, 1, 0) > 0 && (pollFd.revents & POLLNVAL))
{
    open = false;
    return -1;
}
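For completeness, a silent exit on write() that no C++ try/catch can intercept is also the classic signature of SIGPIPE: writing to a socket whose peer has closed raises that signal, and its default action terminates the process before any exception machinery runs. A sketch of the per-call suppression inside Write (MSG_NOSIGNAL is Linux-specific; the surrounding names are from the function above):
// send() behaves like write() here, but with MSG_NOSIGNAL a closed
// peer produces an EPIPE error return instead of raising SIGPIPE.
int returnValue = send(GetFD(), raw, buffer.v.size(), MSG_NOSIGNAL);
if (returnValue <= 0)
{
    open = false; // a dead peer now lands here instead of killing the process
}
Alternatively, calling signal(SIGPIPE, SIG_IGN) once at startup makes every send()/write() in the process report EPIPE instead of terminating it.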
I'm new to epoll.
My code is working fine. epoll stores my file descriptor and waits until the file descriptor is "ready".
But, for some reason, it will not wake up until I press Enter (even though data has already arrived on the fd; after Enter I immediately see all the data that was sent before).
After one Enter it works as expected (no more Enters needed; when the fd is ready again it wakes up again).
Here is the essence of my code:
int nEventCountReady = 0;
epoll_event event, events[EPOLL_MAX_EVENTS];
int epoll_fd = epoll_create1(0);
if(epoll_fd == -1)
{
std::cout << "Error: Failed to create EPoll" << std::endl;
return ;
}
event.events = EPOLLIN;
event.data.fd = myfd;
if(epoll_ctl(epoll_fd, EPOLL_CTL_ADD, 0, &event))
{
fprintf(stderr, "Failed to add file descriptor to epoll\n");
close(epoll_fd);
return ;
}
while(true)
{
std::cout << "Waiting for messages" << std::endl;
nEventCountReady = epoll_wait(epoll_fd, events, EPOLL_MAX_EVENTS, 30000); // <-- stuck here until Enter is pressed (first loop iteration only)
for(int i=0; i<nEventCountReady; i++)
{
msgrcv(events[i].data.fd, oIpCMessageContent, sizeof(SIPCMessageContent), 1, 0);
std::cout << oIpCMessageContent.buff << std::endl;
}
}
This
if(epoll_ctl(epoll_fd, EPOLL_CTL_ADD, 0, &event))
should probably be
if(epoll_ctl(epoll_fd, EPOLL_CTL_ADD, myfd, &event))
In the first line you tell epoll to monitor fd 0, which is typically the standard input. That's why it waits for it, i.e. for your Enter.
Note that your original code works only by coincidence. It just happens that when you press Enter there is data in your myfd (and even if there's none, msgrcv blocks). And once you have pressed Enter it will wake up all the time, since epoll knows that stdin is ready but you never read from it.
Thanks to kamilCuk, I noticed that msgget doesn't return a file descriptor as I thought.
It returns a "System V message queue identifier".
And as freakish said before, System V message queues don't work with selectors like epoll.
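If epoll-style readiness notification is still wanted, one option on Linux is to switch to POSIX message queues: per mq_overview(7), a mqd_t there is implemented as a file descriptor, so it can be monitored with epoll. A rough sketch (the queue name /myqueue and the default attributes are placeholders):
#include <mqueue.h>

// POSIX queue instead of msgget(); NULL attr means default attributes.
mqd_t mq = mq_open("/myqueue", O_RDONLY | O_CREAT | O_NONBLOCK, 0644, NULL);

event.events = EPOLLIN;
event.data.fd = mq; // on Linux this really is a file descriptor
if (epoll_ctl(epoll_fd, EPOLL_CTL_ADD, mq, &event))
{
    fprintf(stderr, "Failed to add message queue to epoll\n");
}
// ...and in the wait loop, mq_receive() replaces msgrcv().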
I am having some problems with inter-process communication in ZMQ between several instances of a program.
I am using Linux.
I am using zeromq/cppzmq, the header-only C++ binding for libzmq.
If I run two instances of this application (say, in two terminals), providing one with an argument to be a listener and the other with an argument to be a sender, the listener never receives a message. I have tried TCP and IPC to no avail.
#include <zmq.hpp>
#include <string>
#include <iostream>
int ListenMessage();
int SendMessage(std::string str);
zmq::context_t global_zmq_context(1);
int main(int argc, char* argv[] ) {
std::string str = "Hello World";
if (atoi(argv[1]) == 0) ListenMessage();
else SendMessage(str);
zmq_ctx_destroy(& global_zmq_context);
return 0;
}
int SendMessage(std::string str) {
assert(global_zmq_context);
std::cout << "Sending \n";
zmq::socket_t publisher(global_zmq_context, ZMQ_PUB);
assert(publisher);
int linger = 0;
int rc = zmq_setsockopt(publisher, ZMQ_LINGER, &linger, sizeof(linger));
assert(rc==0);
rc = zmq_connect(publisher, "tcp://127.0.0.1:4506");
if (rc == -1) {
printf ("E: connect failed: %s\n", strerror (errno));
return -1;
}
zmq::message_t message(static_cast<const void*> (str.data()), str.size());
rc = publisher.send(message);
if (rc == -1) {
printf ("E: send failed: %s\n", strerror (errno));
return -1;
}
return 0;
}
int ListenMessage() {
assert(global_zmq_context);
std::cout << "Listening \n";
zmq::socket_t subscriber(global_zmq_context, ZMQ_SUB);
assert(subscriber);
int rc = zmq_setsockopt(subscriber, ZMQ_SUBSCRIBE, "", 0);
assert(rc==0);
int linger = 0;
rc = zmq_setsockopt(subscriber, ZMQ_LINGER, &linger, sizeof(linger));
assert(rc==0);
rc = zmq_bind(subscriber, "tcp://127.0.0.1:4506");
if (rc == -1) {
printf ("E: bind failed: %s\n", strerror (errno));
return -1;
}
std::vector<zmq::pollitem_t> p = {{subscriber, 0, ZMQ_POLLIN, 0}};
while (true) {
zmq::message_t rx_msg;
// when timeout (the third argument here) is -1,
// then block until ready to receive
std::cout << "Still Listening before poll \n";
zmq::poll(p.data(), 1, -1);
std::cout << "Found an item \n"; // not reaching
if (p[0].revents & ZMQ_POLLIN) {
// received something on the first (only) socket
subscriber.recv(&rx_msg);
std::string rx_str;
rx_str.assign(static_cast<char *>(rx_msg.data()), rx_msg.size());
std::cout << "Received: " << rx_str << std::endl;
}
}
return 0;
}
This code works if I run one instance of the program with two threads:
std::thread t_sub(ListenMessage);
sleep(1); // Slow joiner in ZMQ PUB/SUB pattern
std::thread t_pub(SendMessage, str);
t_pub.join();
t_sub.join();
But I am wondering why, when running two instances of the program, the code above won't work?
Thanks for your help!
In case one has never worked with ZeroMQ, one may enjoy first looking at "ZeroMQ Principles in less than Five Seconds" before diving into further details.
Q : wondering why when running two instances of the program the code above won't work?
This code will never fly - and it has nothing to do with thread-based or process-based [CONCURRENT] processing.
It was caused by a wrong design of the Inter Process Communication.
ZeroMQ can provide for this either one of the supported transport-classes: { ipc:// | tipc:// | tcp:// | norm:// | pgm:// | epgm:// | vmci:// }, plus an even smarter one for in-process comms, the inproc:// transport-class, ready for inter-thread comms, where a stack-less communication may enjoy the lowest-ever latency, being just a memory-mapped policy.
The selection of L3/L2-based networking stack for an Inter-Process-Communication is possible, yet sort of the most "expensive" option.
The Core Mistake :
Given that choice, any single process ( not speaking about a pair of processes ) will collide on an attempt to .bind() its AccessPoint onto the very same TCP/IP address:port#.
The Other Mistake :
Even for the sake of a solo programme launched, each of the spawned threads would attempt to .bind() its private AccessPoint, yet neither attempts to .connect() to a matching "opposite" AccessPoint.
At least one has to successfully .bind(), and
at least one has to successfully .connect(), so as to get a "channel", here of the PUB/SUB Archetype.
ToDo:
decide about a proper, right-enough Transport-Class ( best avoid an overkill to operate the full L3/L2-stack for localhost/in-process IPC )
refactor the Address:port# management ( for 2+ processes not to fail on .bind()-(s) to the same ( hard-wired ) address:port# ) ( see the sketch right after this list )
always detect and handle appropriately the returned {PASS|FAIL}-s from API calls
always set LINGER to zero explicitly ( you never know )
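As a minimal sketch of one workable arrangement between two separate processes (the ipc:// path is arbitrary, and the sleep() is the same slow-joiner allowance the threaded variant in the question already uses):
// Listener process: SUB binds once, then receives.
zmq::socket_t subscriber(global_zmq_context, ZMQ_SUB);
subscriber.setsockopt(ZMQ_SUBSCRIBE, "", 0);
subscriber.bind("ipc:///tmp/pubsub-demo.ipc"); // ipc:// avoids the full TCP stack

// Sender process: PUB connects, waits out the slow-joiner window, sends.
zmq::socket_t publisher(global_zmq_context, ZMQ_PUB);
publisher.connect("ipc:///tmp/pubsub-demo.ipc");
sleep(1); // without this, the first message is often dropped
zmq::message_t message("Hello World", 11);
publisher.send(message);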
This is what my server looks like:
-WorkerThread(s):
calls epoll_wait, accepts connections, sets each fd non-blocking, and registers it with (EPOLLIN | EPOLLOUT | EPOLLET | EPOLLRDHUP)
on an EPOLLIN event: calls recv until EAGAIN and pushes all data to a global RecvBuffer (pthread_mutex synced)
on an EPOLLOUT event: accesses the global SendBuffer and, if there's data to be sent for the currently ready fd, sends it (in a while loop until EAGAIN or until all data is sent; when a whole packet is sent, pops it from SendBuffer)
-IOThread(s):
takes data from the global RecvBuffer and processes it
sends the response by first trying to call send right away; if not all data is sent, pushes the rest of it onto the global SendBuffer to be sent from the WorkerThread
The problem is that the server doesn't send all the queued data (some is left in SendBuffer), and the amount of unsent data grows with the number of clients.
For the sake of testing I'm using only 1 WorkerThread and 1 IOThread, but it doesn't seem to make any difference if I use more.
Access to the global buffers is protected with a pthread_mutex.
Also, my response data size is 130k bytes (it needs at least 3 send calls to send this amount of data). On the other side is a Windows client using blocking sockets.
Thank you very much!
MJ
EDIT:
Yes, by default I'm waiting for EPOLLOUT events even though I have nothing to send. I did it like this for implementation simplicity, following the man page guide. Also, my understanding of it was like this:
even if I "miss" an EPOLLOUT event at a time when I don't want to send anything, it's no problem, because when I do want to send data I'll call send until EAGAIN, and EPOLLOUT should be triggered in the future (and it is, most of the time).
Now I have modified the code to switch between IN/OUT events:
On accept:
event.events = EPOLLIN | EPOLLET | EPOLLRDHUP;
epoll_ctl (pNetServer->m_EventFD, EPOLL_CTL_ADD, infd, &event);
when all data has been sent:
event.events = EPOLLIN | EPOLLET | EPOLLRDHUP;
epoll_ctl (pNetServer->m_EventFD, EPOLL_CTL_MOD, events[i].data.fd, &event);
when I reach EAGAIN by calling send in IOThread:
event.events = EPOLLOUT | EPOLLET | EPOLLRDHUP;
epoll_ctl (pNetServer->m_EventFD, EPOLL_CTL_MOD, events[i].data.fd, &event);
...and I get the same behavior. Also, I tried removing the EPOLLET flag and nothing changed.
One side question: does epoll_ctl with the EPOLL_CTL_MOD flag replace the events member, or just OR it with the given argument?
EDIT3: Updated the IOThread function to send continuously until all data has been sent, or until EAGAIN.
I also tried to send even after I had sent all the data, but most of the time I was getting errno 88, Socket operation on non-socket.
EDIT4: I fixed some bugs in my sending code, so I don't get any queued data left unsent now. But I don't receive as much data as I should :)) The highest amount of 'missed' (not received) data occurs when the client calls recv right away after sending is complete, and it grows with the number of clients. When I put a 2 second delay between the send and recv calls on the client (blocking calls), I lose little to no data on the server, depending on how many clients I'm running (the client test code is a simple for loop with 1 send and 1 recv call after it).
Again, I tried with and without ET mode. Below is the updated WorkerThread function, which is responsible for receiving data.
@Admins/Mods: maybe I should open a new topic now, as the problem is a bit different?
void CNetServer::WorkerThread(void* param)
{
CNetServer* pNetServer =(CNetServer*)param;
struct epoll_event event;
struct epoll_event *events;
int s = 0;
// events = (epoll_event*)calloc (MAXEVENTS, sizeof event);
while (1)
{
int n, i;
// printf ("BLOCKING NOW! epoll_wait thread %d\n",pthread_self());
n = pNetServer->m_epollCtrl.Wait(-1);
// printf ("epoll_wait thread %d\n",pthread_self());
pthread_mutex_lock(&g_mtx_WorkerThd);
for (i = 0; i < n; i++)
{
if ((pNetServer->m_epollCtrl.Event(i)->events & EPOLLERR))
{
// An error has occured on this fd, or the socket is not ready for reading (why were we notified then?)
// g_SendBufferArray.RemoveAll( 0 );
char szFileName[30] = {0};
sprintf( (char*)szFileName,"fd_%d.txt",pNetServer->m_epollCtrl.Event(i)->data.fd );
remove(szFileName);
/* printf( "\n\n\n");
printf( "\tDATA LEFT COUNT:%d\n",g_SendBufferArray.size());
for (int k=0;k<g_SendBufferArray.size();k++)
printf( "\tSD: %d DATA LEFT:%d\n",g_SendBufferArray[i]->sd,g_SendBufferArray[i]->nBytesSent );
*/
// fprintf (stderr, "epoll error\n");
// fflush(stdout);
close (pNetServer->m_epollCtrl.Event(i)->data.fd);
continue;
}
else if (pNetServer->m_ListenSocket == pNetServer->m_epollCtrl.Event(i)->data.fd)
{
// We have a notification on the listening socket, which means one or more incoming connections.
while (1)
{
struct sockaddr in_addr;
socklen_t in_len;
int infd;
char hbuf[NI_MAXHOST], sbuf[NI_MAXSERV];
in_len = sizeof in_addr;
infd = accept (pNetServer->m_ListenSocket, &in_addr, &in_len);
if (infd == -1)
{
if ((errno == EAGAIN) ||
(errno == EWOULDBLOCK))
{
// We have processed all incoming connections.
break;
}
else
{
perror ("accept");
break;
}
}
s = getnameinfo (&in_addr, in_len,
hbuf, sizeof hbuf,
sbuf, sizeof sbuf,
NI_NUMERICHOST | NI_NUMERICSERV);
if (s == 0)
{
printf("Accepted connection on descriptor %d "
"(host=%s, port=%s) thread %d\n", infd, hbuf, sbuf,pthread_self());
}
// Make the incoming socket non-blocking and add it to the list of fds to monitor.
CEpollCtrl::SetNonBlock(infd,true);
if ( !pNetServer->m_epollCtrl.Add( infd, EPOLLIN, NULL ))
{
perror ("epoll_ctl");
abort ();
}
}
continue;
}
if( (pNetServer->m_epollCtrl.Event(i)->events & EPOLLOUT) )
{
pNetServer->DoSend( pNetServer->m_epollCtrl.Event(i)->data.fd );
}
if( pNetServer->m_epollCtrl.Event(i)->events & EPOLLIN )
{
printf("EPOLLIN TRIGGERED FOR SD: %d\n",pNetServer->m_epollCtrl.Event(i)->data.fd);
// We have data on the fd waiting to be read.
int done = 0;
ssize_t count = 0;
char buf[512];
while (1)
{
count = read (pNetServer->m_epollCtrl.Event(i)->data.fd, buf, sizeof buf);
printf("recv sd %d size %d thread %d\n",pNetServer->m_epollCtrl.Event(i)->data.fd,count,pthread_self());
if (count == -1)
{
// If errno == EAGAIN, that means we have read all data. So go back to the main loop.
if ( errno != EAGAIN )
{
perror ("read");
done = 1;
}
break;
}
else if (count == 0)
{
//connection is closed by peer.. do a cleanup and close
done = 1;
break;
}
else if (count > 0)
{
static int nDataCounter = 0;
nDataCounter+=count;
printf("RECVDDDDD %d\n",nDataCounter);
CNetServer::s_pRecvContainer->OnData( pNetServer->m_epollCtrl.Event(i)->data.fd, buf, count );
}
}
if (done)
{
printf ("Closed connection on descriptor %d\n",pNetServer->m_epollCtrl.Event(i)->data.fd);
// Closing the descriptor will make epoll remove it from the set of descriptors which are monitored.
close (pNetServer->m_epollCtrl.Event(i)->data.fd);
}
}
}
//
pNetServer->IOThread( (void*)pNetServer );
pthread_mutex_unlock(&g_mtx_WorkerThd);
}
}
void CNetServer::IOThread(void* param)
{
BYTEARRAY* pbPacket = new BYTEARRAY;
int fd;
struct epoll_event event;
CNetServer* pNetServer =(CNetServer*)param;
printf("IOThread startin' !\n");
for (;;)
{
bool bGotIt = CNetServer::s_pRecvContainer->GetPacket( pbPacket, &fd );
if( bGotIt )
{
//process packet here
printf("Got 'em packet yo !\n");
BYTE* re = new BYTE[128000];
memset((void*)re,0xCC,128000);
buffer_t* responsebuff = new buffer_t( fd, re, 128000 ) ;
pthread_mutex_lock(&g_mtx_WorkerThd);
while( 1 )
{
int s;
int nSent = send( responsebuff->sd, ( responsebuff->pbBuffer + responsebuff->nBytesSent ),responsebuff->nSize - responsebuff->nBytesSent,0 );
printf ("IOT: Trying to send nSent: %d buffsize: %d \n",nSent,responsebuff->nSize - responsebuff->nBytesSent);
if (nSent == -1)
{
if (errno == EAGAIN || errno == EWOULDBLOCK )
{
g_vSendBufferArray.push_back( *responsebuff );
printf ("IOT: now waiting for EPOLLOUT\n");
event.data.fd = fd;
event.events = EPOLLIN | EPOLLOUT | EPOLLET | EPOLLRDHUP;
s = epoll_ctl (pNetServer->m_EventFD, EPOLL_CTL_MOD, fd, &event);
if (s == -1)
{
    perror ("epoll_ctl");
    abort ();
}
break;
}
else
{
printf( "%d\n",errno );
perror ("send");
break;
}
printf ("IOT: WOOOOT\n");
break;
}
else if (nSent == responsebuff->nSize - responsebuff->nBytesSent)
{
printf ("IOT:all is sent! wOOhOO\n");
responsebuff->sd = 0;
responsebuff->nBytesSent += nSent;
delete responsebuff;
break;
}
else if (nSent < responsebuff->nSize - responsebuff->nBytesSent)
{
printf ("IOT: partial send!\n");
responsebuff->nBytesSent += nSent;
}
}
delete [] re; // careful: if the buffer was queued for EPOLLOUT above, the queued copy still points at this allocation
pthread_mutex_unlock(&g_mtx_WorkerThd);
}
}
}
Stop using EPOLLET. It's almost impossible to get right.
Don't ask for EPOLLOUT events if you have nothing to send.
When you have data to send on a connection, follow this logic:
A) If there's already data in your send queue for that connection, just add the new data. You're done.
B) Try to send the data immediately. If you send it all, you're done.
C) Save the leftover data in the send queue for this connection. Now ask for EPOLLOUT for this connection.
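A sketch of that A/B/C logic, using level-triggered epoll as advised above (Connection, sendQueue, and armEpollOut are placeholder names, not taken from the code in the question):
#include <sys/epoll.h>
#include <sys/socket.h>
#include <cerrno>
#include <cstddef>
#include <vector>

struct Connection { int fd; std::vector<char> sendQueue; };

// Ask epoll to report writability for this connection from now on.
static void armEpollOut(int epfd, Connection& conn)
{
    epoll_event ev{};
    ev.events = EPOLLIN | EPOLLOUT; // level-triggered
    ev.data.fd = conn.fd;
    epoll_ctl(epfd, EPOLL_CTL_MOD, conn.fd, &ev);
}

void QueueSend(int epfd, Connection& conn, const char* data, size_t len)
{
    // A) Data already queued: append and return; EPOLLOUT is already armed.
    if (!conn.sendQueue.empty()) {
        conn.sendQueue.insert(conn.sendQueue.end(), data, data + len);
        return;
    }
    // B) No backlog: try to send immediately.
    ssize_t n = send(conn.fd, data, len, MSG_NOSIGNAL);
    if (n < 0) {
        if (errno != EAGAIN && errno != EWOULDBLOCK)
            return; // real error: caller should close this connection
        n = 0;      // nothing went out; fall through to C)
    }
    if ((size_t)n == len)
        return;     // all sent, nothing to queue
    // C) Queue the leftover and ask for EPOLLOUT on this fd.
    conn.sendQueue.assign(data + n, data + len);
    armEpollOut(epfd, conn);
}
The matching EPOLLOUT handler then drains sendQueue and, once it is empty, switches the fd back to EPOLLIN only.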
I'm running a server and a client. I'm testing my program on my computer.
This is the function in the server that sends data to the client:
int sendToClient(int fd, string msg) {
cout << "sending to client " << fd << " " << msg <<endl;
int len = msg.size()+1;
cout << "10\n";
/* send msg size */
if (send(fd,&len,sizeof(int),0)==-1) {
cout << "error sendToClient\n";
return -1;
}
cout << "11\n";
/* send msg */
int nbytes = send(fd,msg.c_str(),len,0); //CRASHES HERE
cout << "15\n";
return nbytes;
}
When the client exits it sends "BYE" to the server, and the server replies with the above function. I connect the client to the server (it's all done on one computer, in 2 terminals), and when the client exits, the server crashes: it never prints the 15.
Any idea why? Any idea how to test why?
Thank you.
EDIT: This is how I close the client:
void closeClient(int notifyServer = 0) {
/** notify server before closing */
if (notifyServer) {
int len = SERVER_PROTOCOL[bye].size()+1;
char* buf = new char[len];
strcpy(buf,SERVER_PROTOCOL[bye].c_str()); //c_str - NEED TO FREE????
sendToServer(buf,len);
delete[] buf;
}
close(_sockfd);
}
By the way, if I skip this code, meaning I just leave the close(_sockfd) without notifying the server, everything is OK: the server doesn't crash.
EDIT 2: This is the end of strace.out:
5211 recv(5, "BYE\0", 4, 0) = 4
5211 write(1, "received from client 5 \n", 24) = 24
5211 write(1, "command: BYE msg: \n", 19) = 19
5211 write(1, "BYEBYE\n", 7) = 7
5211 write(1, "response = ALALA!!!\n", 20) = 20
5211 write(1, "sending to client 5 ALALA!!!\n", 29) = 29
5211 write(1, "10\n", 3) = 3
5211 send(5, "\t\0\0\0", 4, 0) = 4
5211 write(1, "11\n", 3) = 3
5211 send(5, "ALALA!!!\0", 9, 0) = -1 EPIPE (Broken pipe)
5211 --- SIGPIPE (Broken pipe) # 0 (0) ---
5211 +++ killed by SIGPIPE +++
A broken pipe can kill my program?? Why doesn't send() just return -1??
You may want to specify MSG_NOSIGNAL in the flags:
int nbytes = send(fd,msg.c_str(), msg.size(), MSG_NOSIGNAL);
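The failure then arrives as an ordinary error return that the caller can test for; a sketch:
int nbytes = send(fd, msg.c_str(), msg.size(), MSG_NOSIGNAL);
if (nbytes == -1 && errno == EPIPE) {
    // the peer has already closed the connection: clean up, don't crash
    cout << "client " << fd << " disconnected" << endl;
}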
You're getting SIGPIPE because of a "feature" in Unix that raises SIGPIPE when trying to send on a socket that the remote peer has closed. Since you don't handle the signal, the default signal-handler is called, and it aborts/crashes your program.
To get the behavior you want (i.e. make send() return with an error instead of raising a signal), add this to your program's startup routine (e.g. the top of main()):
#include <signal.h>
int main(int argc, char ** argv)
{
[...]
signal(SIGPIPE, SIG_IGN);
Probably the client exits before the server has completed the sending, thus breaking the socket between them and making send crash.
link
This socket was connected but the connection is now broken. In this case, send generates a SIGPIPE signal first; if that signal is ignored or blocked, or if its handler returns, then send fails with EPIPE.
If the client exits before the second send from the server, and the connection is not disposed of properly, your server keeps hanging and this could provoke the crash.
Just a guess, since we don't know what server and client actually do.
I find the following line of code strange because you define int len = msg.size()+1;.
int nbytes = send(fd,msg.c_str(),len,0); //CRASHES HERE
What happens if you define int len = msg.size();?
If you are on Linux, try to run the server inside strace. This will write lots of useful data to a log file.
strace -f -o strace.out ./server
Then have a look at the end of the log file. Maybe it's obvious what the program did and when it crashed, maybe not. In the latter case: Post the last lines here.