c++ socket accept blocks cout

I'm quite new to C++/C in general, so excuse me if this question appears kind of stupid, but I'm really stuck here.
What I'm trying to do is a simple TCP server that gives the user the opportunity to accept/decline incoming connections. I have a function named waitingForConnection() containing the main loop. It's called in the main function after the socket is successfully bound and marked as passive.
What I'd expect is that, after the client connects to the server, the function handleConnection() gets called, causing the main loop to wait until the function has finished and only then to continue.
But what actually seems to happen is that the main loop continues to the next accept() call, which blocks the thread, before handleConnection() is completely executed. The test output "done" only becomes visible when the client connects a second time and the thread wakes up again.
To me it seems like the code is executed completely out of order, which I believe is not possible, since the whole code should run in a single thread.
Console Output after first connection attempt ("n" is user input):
"Accept connection ? (y/n)"n
Console Output after second connection attempt:
"Accept connection ? (y/n)"n
"doneAccept connection ? (y/n)"
Please note that I'm not looking for a workaround using select() or something similar; I'm just trying to understand why the code does what it does and maybe how to fix it by changing the structure.
int soc = socket(AF_INET, SOCK_STREAM, 0);

void handleConnection(int connectionSoc, sockaddr_in client){
    string choice;
    cout << "Accept connection ? (y/n)";
    cin >> choice;
    if(choice == "y" || choice == "Y"){
        //do something here
    }else{
        close(connectionSoc);
    }
    cout << "done";
}

void waitingForConnection(){
    while(running){
        sockaddr_in clientAddress;
        socklen_t length;
        length = sizeof(clientAddress);
        int soc1 = accept(soc, (struct sockaddr*)&clientAddress, &length);
        if(soc1 < 0){
            statusOutput("Connection failed");
        }else{
            handleConnection(soc1, clientAddress);
        }
    }
}

The problem is that output to std::cout is buffered, and you simply do not see it until the buffer is flushed. Because std::cin is tied to std::cout, it flushes std::cout before reading, so you only get your output on the next input. Change your output to:
std::cout << "done" << std::endl;
and you will get the expected behavior.
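For illustration, a minimal sketch of handleConnection() with the flushes made explicit (same signature as in the question; the rest of the server is assumed unchanged):

#include <iostream>
#include <string>
#include <unistd.h>     // close()
#include <netinet/in.h> // sockaddr_in

using std::cin;
using std::cout;
using std::string;

void handleConnection(int connectionSoc, sockaddr_in client) {
    (void)client; // unused in this sketch
    string choice;
    // std::flush pushes the prompt out even without a newline; strictly it is
    // not needed here because cin is tied to cout, but it makes the intent clear.
    cout << "Accept connection ? (y/n)" << std::flush;
    cin >> choice;
    if (choice == "y" || choice == "Y") {
        // handle the accepted connection here
    } else {
        close(connectionSoc);
    }
    // std::endl appends '\n' and flushes, so "done" appears immediately
    // instead of waiting for the next cin prompt.
    cout << "done" << std::endl;
}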

C++ + linux handle SIGPIPE signal

Yes, I understand this issue has been discussed many times.
And yes, I've seen and read these and other discussions (1, 2, 3), and I still can't fix my code myself.
I am writing my own web server. In the loop below, it listens on a socket, accepts each new client, and stores it in a vector.
In my class I have this struct:
struct Connection
{
    int socket;
    std::chrono::system_clock::time_point tp;
    std::string request;
};
along with these data members:
std::mutex connected_clients_mux_;
std::vector<HttpServer::Connection> connected_clients_;
and the loop itself:
//...
bind(listen_socket_, (struct sockaddr *)&addr_, sizeof(addr_));
listen(listen_socket_, 4);
while(1){
    connection_socket_ = accept(listen_socket_, NULL, NULL);
    //...
    Connection connection_;
    //...
    connected_clients_mux_.lock();
    this->connected_clients_.push_back(connection_);
    connected_clients_mux_.unlock();
}
It works: clients connect, send and receive requests.
But the problem is that if the connection is broken (the client hits ^C), my program will not know about it, not even at the moment it writes:
void SendRespons(HttpServer::Connection socket_){
    write(socket_.socket, (socket_.request + std::to_string(socket_.socket)).c_str(), 1024);
}
As the title of this question suggests, my app receives a SIGPIPE signal.
Again, I have seen "solutions":
signal(SIGPIPE, &SigPipeHandler);

void SigPipeHandler(int s) {
    //printf("Caught SIGPIPE\n%d",s);
}
but it does not help. At the moment the signal arrives we know the number of the socket to which the write was made; is it possible to "remember" it and close that particular connection in the handler method?
my system:
Operating System: Ubuntu 20.04.2 LTS
Kernel: Linux 5.8.0-43-generic
g++ --version
g++ (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
As stated in the links you give, the solution is to ignore SIGPIPE and CHECK THE RETURN VALUE of the write calls. The latter is needed for correct operation (short writes) in all but the most trivial, unloaded cases anyway. Also, the fixed write size of 1024 that you are using is probably not what you want: if your response string is shorter, you'll send a bunch of random garbage along with it. You probably really want something like:
void SendRespons(HttpServer::Connection socket_){
    auto data = socket_.request + std::to_string(socket_.socket);
    size_t sent = 0;
    while (sent < data.size()) {
        ssize_t len = write(socket_.socket, &data[sent], data.size() - sent);
        if (len < 0) {
            // there was an error -- might be EPIPE or EAGAIN or EINTR or even a few other
            // obscure corner cases. For EAGAIN or EINTR (which can only happen if your
            // program is set up to allow them), you probably want to try again.
            // Anything else, probably just close the socket and clean up.
            if (errno == EINTR)
                continue;
            close(socket_.socket);
            // should tell someone about it?
            break;
        }
        sent += len;
    }
}
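For reference, a minimal sketch of the two usual ways to stop SIGPIPE from killing the process, as the links in the question also describe (the helper names here are illustrative, not from the original code):

#include <csignal>
#include <cstddef>
#include <sys/socket.h>
#include <sys/types.h>

// Option 1: ignore SIGPIPE for the whole process, once at startup.
// A write() to a broken connection then returns -1 with errno == EPIPE
// instead of killing the process.
void ignore_sigpipe() {
    std::signal(SIGPIPE, SIG_IGN);
}

// Option 2 (Linux-specific): suppress SIGPIPE per call by using send()
// with MSG_NOSIGNAL instead of write(). Illustrative helper.
ssize_t write_nosignal(int fd, const void *buf, size_t len) {
    return send(fd, buf, len, MSG_NOSIGNAL);
}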

Giving Each Child Process a Unique ID with Forking

To learn socket programming with TCP, I'm making a simple server and client. The client will send chunks of a file and the server will read them and write to a file. Client and server work properly without any multiprocessing. I want to make it so that multiple clients can connect simultaneously. I want to give each connected client a unique id, called "client_id". This is a number between 1 and n.
I tried to use fork() in order to spawn a child process, and in the child process I accept the connection and then read in the data and save it to the file. However, the client_id variable is not synchronized across processes so sometimes it will be incremented and sometimes not. I don't fully understand what's going on. The value of client_id should never be repeated, but sometimes I'm seeing numbers appear twice. I believe this is because on forking, the child process gets a copy of everything the parent had but there is no synchronization across parallel processes.
Here is my infinite loop that sits and waits for connecting clients. Within the child process, I transfer the file in another infinite loop that terminates when recv receives 0 bytes.
int client_id = 0;
while(1){
    // accept a new connection
    struct sockaddr_in clientAddr;
    socklen_t clientAddrSize = sizeof(clientAddr);
    //socket file descriptor to use for the connection
    int clientSockfd = accept(sockfd, (struct sockaddr*)&clientAddr, &clientAddrSize);
    if (clientSockfd == -1) {
        perror("accept");
        return 4;
    }
    else{ //handle forking
        client_id++;
        std::cout << "Client id: " << client_id << std::endl;
        pid_t pid = fork();
        if(pid == 0){
            //child process
            std::string client_idstr = std::to_string(client_id);
            char ipstr[INET_ADDRSTRLEN] = {'\0'};
            inet_ntop(clientAddr.sin_family, &clientAddr.sin_addr, ipstr, sizeof(ipstr));
            std::string connection_id = std::to_string(ntohs(clientAddr.sin_port));
            std::cout << "Accept a connection from: " << ipstr << ":" << client_idstr
                      << std::endl;
            // read/write data from/into the connection
            char buf[S_BUFSIZE] = {0};
            std::stringstream ss;
            //Create file stream
            std::ofstream file_to_save;
            FILE *pFile;
            std::string write_dir = filedir + "/" + client_idstr + ".file";
            std::string write_type = "wb";
            pFile = fopen(write_dir.c_str(), write_type.c_str());
            std::cout << "write dir: " << write_dir << std::endl;
            while (1) {
                memset(buf, '\0', sizeof(buf));
                int rec_value = recv(clientSockfd, buf, S_BUFSIZE, 0);
                if (rec_value == -1) {
                    perror("recv");
                    return 5;
                }else if(rec_value == 0){
                    //end of transmission, exit the loop
                    break;
                }
                fwrite(buf, sizeof(char), rec_value, pFile);
            }
            fclose(pFile);
            close(clientSockfd);
        }
        else if(pid > 0){
            //parent process
            continue;
        }else{
            perror("failed to create multiple new threads");
            exit(-1);
        }
    }
}
Here is the server output when I do the following, with the expected file name (client_id.file) in parentheses:
1) connect client 1, transfer file, disconnect client 1 (1.file)
2) connect client 2, transfer file, disconnect client 2 (2.file)
3) connect client 1, transfer file, disconnect client 1 (3.file)
4) connect client 1, transfer file, disconnect client 1 (4.file)
5) connect client 2, transfer file, disconnect client 2 (5.file)
6) connect client 2, transfer file, disconnect client 2 (6.file)
I might be wrong, but something tickles my memory about this counter approach. How is your counter defined: is it a local variable? It should live for the whole lifetime of the process, so it should be either a global variable or a static local; fork() then copies its current value into the spawned process when it copies the memory pages. Local variables are stored as temporaries, and relying on them across a fork like this is something I would not do.
If you need variables that are genuinely shared between the processes, there is a way: shared memory, e.g. mmap() and so on.
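In case the counter really does need to be shared across the processes, here is a minimal sketch of that mmap() idea (the name shared_id is illustrative, not from the original code):

#include <sys/mman.h>
#include <cstdio>

int main() {
    // One counter that the parent and every forked child all see through the
    // same memory page, created before the accept loop.
    int *shared_id = static_cast<int *>(mmap(nullptr, sizeof(int),
                                             PROT_READ | PROT_WRITE,
                                             MAP_SHARED | MAP_ANONYMOUS, -1, 0));
    if (shared_id == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    *shared_id = 0;

    // In the accept loop the parent would then do something like
    //     int client_id = ++*shared_id;   // only the parent increments, so no race
    // before calling fork(); the child observes the same value through the mapping.
    return 0;
}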
Second: fork() creates a copy of the process, which continues executing from the point where fork() was called. And what do you have there? The tail end of an infinite loop iteration, so the loop restarts in the child as well. You should exit the loop (or the child process) when pid is 0, as sketched below.
Here is an example of processes multiplying in a Fibonacci-like fashion inside a for() loop:
Visually what happens to fork() in a For Loop
The parent increased the counter from 0 to 1 and called fork(); now there is a child that printed 1.
The child finished its transfer and went back to the top of the loop, incrementing its counter to 2 when it received a connection. The parent also incremented its counter to 2 when it accepted a connection. Now you have 4 processes.
If you ran more tests in sequence, you would see the number of duplicates grow, and you could actually see those extra processes in your task manager, top, or ps output if you watched that data.
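A minimal sketch of the fix described above: the child terminates with _exit() after handling the connection instead of falling back into the accept loop, and the parent closes its copy of the client descriptor (dispatch_client() and handle_client() are illustrative names, not from the question):

#include <sys/types.h>
#include <unistd.h>   // fork(), close(), _exit()
#include <cstdio>     // perror()

// Stand-in for the question's recv()/fwrite() loop.
static void handle_client(int clientSockfd, int client_id) {
    (void)clientSockfd;
    (void)client_id;
    // ... receive the file and write it to <client_id>.file ...
}

// Body of the accept loop after a successful accept(): fork, and make sure the
// child terminates instead of looping back to the parent's accept().
void dispatch_client(int clientSockfd, int client_id) {
    pid_t pid = fork();
    if (pid == 0) {
        // Child: handle the connection, then end this process so it never
        // re-enters the accept loop.
        handle_client(clientSockfd, client_id);
        close(clientSockfd);
        _exit(0);
    } else if (pid > 0) {
        // Parent: the child has its own copy of the descriptor, so close ours
        // and go back to accept() the next client.
        close(clientSockfd);
    } else {
        perror("fork");
    }
}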

C++ winsockets threading issue

I made a pretty simple C++ socket server. I'm trying to spawn a thread each time a new client connects (so reading can be done in parallel).
void Server::start(void){
    for(;;){
        Logger::Log("Now accepting clients");
        int client;
        struct sockaddr_in client_addr;
        size_t addr_size = sizeof(client_addr);
        client = accept(this->m_socket, (sockaddr*)&client_addr, 0);
        if(client != SOCKET_ERROR){
            Logger::Log("New client connected!");
            StateObject client_object(client, this);
            this->clients.push_back(&client_object);
            std::stringstream stream;
            stream << this->clients.size() << " clients online";
            Logger::Log(const_cast<char*>(stream.str().c_str()));
            std::thread c_thread(std::bind(&StateObject::read, std::ref(client_object)));
            //c_thread.join(); //if I join the child, new clients won't be accepted until the previous thread exits
        }
    }
}
Reading method in client class:
void StateObject::read(){
    Logger::Log("Now reading");
    for(;;){
        int bytesReceived = recv(this->socket, buffer, 255, 0);
        if(bytesReceived > 0){
            Logger::Log(const_cast<char*>(std::string("Received: " + std::string(buffer).substr(0, bytesReceived)).c_str()));
        }else if(bytesReceived == 0){
            Logger::Log("Client gracefully disconnected");
            break;
        }else{
            Logger::Log("Could not receive data from remote host");
            break;
        }
    }
    Server * server = reinterpret_cast<Server*>(parent);
    server->removeClient(this);
}
Currently, after a client connects, an exception is thrown and abort() is called.
Why and when is abort triggered?
Please note that this happens when the child thread has not been joined. In the other case, the flow is, as expected, synchronous (the current client thread has to exit so that the loop can continue to accept the next client).
Notes:
Since I am tied to Windows, I'm unable to fork child processes; I am also not a fan of Cygwin. Asynchronous Win32 methods seem to complicate things, which is why I avoid them.
C++ std::thread reference
Tests have been done through Telnet
You either need to detach the thread or join it before it goes out of scope. Otherwise the std::thread destructor calls std::terminate.
http://www.cplusplus.com/reference/thread/thread/~thread/
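A minimal, self-contained sketch of the two options; read_client() here is just a toy stand-in for StateObject::read(), not from the original code:

#include <chrono>
#include <thread>
#include <vector>

// Toy stand-in for StateObject::read(), just to keep the sketch self-contained.
void read_client(int client) {
    (void)client;
    std::this_thread::sleep_for(std::chrono::milliseconds(10));
}

int main() {
    // Option 1: detach. The std::thread object may then be destroyed while the
    // OS thread keeps running, so its destructor no longer calls terminate().
    std::thread detached(read_client, 1);
    detached.detach();

    // Option 2: keep the threads and join them before they go out of scope,
    // e.g. when the server shuts down.
    std::vector<std::thread> workers;
    workers.emplace_back(read_client, 2);
    workers.emplace_back(read_client, 3);
    for (auto &t : workers) t.join();
    return 0;
}

Detaching is the smallest change to the accept loop above; keeping the threads in a container lets the server join them cleanly on shutdown.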

valgrind/helgrind gets killed on stress test

I'm making a web server on Linux in C++ with pthreads. I tested it with valgrind for leaks and memory problems - all fixed. I tested it with helgrind for threading problems - all fixed. Now I'm trying a stress test, and I get a problem when the program is run under helgrind:
valgrind --tool=helgrind ./chats
It just dies at random places with the text "Killed", as it would if I killed it with kill -9. The only report I sometimes get from helgrind is that the program exits while still holding some locks, which is normal when it gets killed.
When checking for leaks:
valgrind --leak-check=full ./chats
it's more stable, but I managed to make it die once with a few hundred concurrent connections.
I tried running the program alone and couldn't make it crash at all. I tried up to 250 concurrent connections. Each thread delays by 100 ms to make it easier to have multiple connections at the same time. No crash.
In all cases the number of threads as well as connections stays below 10, and I see it crash even with 2 connections, but never with only one connection at a time (which, including the main thread and one helper thread, is 3 threads in total).
Is it possible that the problem only happens when run under helgrind, or does helgrind just make it more likely to show?
What could be the reason that a program gets killed (by the kernel?): allocating too much memory, too many file descriptors?
I tested a bit more and found out that it only dies when the client times out and closes the connection. So here is the code which detects that the client closed the socket:
void *TcpClient::run(){
    int ret;
    struct timeval tv;
    char * buff = (char *)malloc(10001);
    int br;

    colorPrintf(TC_GREEN, "new client starting: %d\n", sockFd);
    while(isRunning()){
        tv.tv_sec = 0;
        tv.tv_usec = 500*1000;
        FD_SET(sockFd, &readFds);
        ret = select(sockFd+1, &readFds, NULL, NULL, &tv);
        if(ret < 0){
            //select error
            continue;
        }else if(ret == 0){
            // no data to read
            continue;
        }
        br = read(sockFd, buff, 10000);
        buff[br] = 0;
        if (br == 0){
            // client disconnected;
            setRunning(false);
            break;
        }
        if (reader != NULL){
            reader->tcpRead(this, std::string(buff, br));
        }else{
            readBuffer.append(buff, br);
        }
        //printf("received: %s\n", buff);
    }
    free(buff);
    sendFeedback((void *)1);
    colorPrintf(TC_RED, "closing client socket: %d\n", sockFd);
    ::close(sockFd);
    sockFd = -1;
    return NULL;
}

// this method writes to the socket
bool TcpClient::write(std::string data){
    int bw;
    int dataLen = data.length();

    bw = ::write(sockFd, data.data(), dataLen);
    if (bw != dataLen){
        return false; // I don't close the socket in this case, maybe I should
    }
    return true;
}
P.S. The threads are:
the main thread: connections are accepted here.
one helper thread which listens for signals and sends signals. It blocks signal delivery for the rest of the app and manually polls the signal queue. The reason is that it's hard to handle signals when using threads. I found this technique here on Stack Overflow and it seems to work fine in other projects.
the client connection threads.
The full code is pretty big, but I can post chunks if someone is interested.
Update:
I managed to trigger the problem with only one connection. It all happens in the client thread. This is what I do:
I read/parse the headers. I put a delay before writing so the client can time out (which causes the problem).
Here the client times out and leaves (probably closing the socket).
I write back the headers.
I write back the HTML code.
Here is how I write back:
bw = ::write(sockFd, data.data(), dataLen);
// bw == dataLen == 108 when writing the headers

// then the second write, for the HTML, kills the program; there is a message before and after write()
bw = ::write(sockFd, data.data(), dataLen); // doesn't get past this point the second time
Update 2: Got it :)
gdb says:
Program received signal SIGPIPE, Broken pipe.
[Switching to Thread 0x41401940 (LWP 10554)]
0x0000003ac2e0d89b in write () from /lib64/libpthread.so.0
Question 1: What should I do to avoid receiving this signal?
Question 2: How do I know that the remote side disconnected while writing? For reads, select() reports that there is data but the read returns 0. What about writes?
Well, I just had to handle the SIGPIPE signal; write() then returned -1, so I close the socket and quit the thread gracefully. Works like a charm.
I guess the easiest way is to set the signal handler of SIGPIPE to SIG_IGN:
signal(SIGPIPE, SIG_IGN);
Note that the first write was successful and didn't kill the program. If you have a similar problem, check whether you are writing once or multiple times. If you are not familiar with gdb, this is how to do it:
gdb ./your-program
> run
and gdb will tell you all about signals and segfaults.
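A minimal sketch of what that looks like once SIGPIPE is set to SIG_IGN: write() fails with errno == EPIPE, which the writer treats as "peer is gone" (write_all() and setup_signals() are illustrative helpers, not from the original code):

#include <cerrno>
#include <csignal>
#include <cstdio>
#include <string>
#include <unistd.h>

// Call once at startup so a write to a closed peer returns -1 with EPIPE
// instead of killing the process with SIGPIPE.
void setup_signals() {
    std::signal(SIGPIPE, SIG_IGN);
}

// Illustrative writer: returns false when the remote side is gone, so the
// caller can close the socket and end the client thread gracefully.
bool write_all(int sockFd, const std::string &data) {
    size_t sent = 0;
    while (sent < data.size()) {
        ssize_t bw = ::write(sockFd, data.data() + sent, data.size() - sent);
        if (bw < 0) {
            if (errno == EINTR)
                continue;      // interrupted, just retry
            if (errno == EPIPE)
                return false;  // peer disconnected while we were writing
            perror("write");
            return false;      // some other error
        }
        sent += static_cast<size_t>(bw);
    }
    return true;
}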

C++ How to exit out of a while loop recvfrom()

I'm trying to create a UDP broadcast program to check for local game servers, but I'm having some trouble with the receiving end. Since the number of servers alive is unknown at all times, you must have a loop that only exits when you stop it. So in this bit of code here:
while(1) // start a while loop
{
    if(recvfrom(sd, buff, BUFFSZ, 0, (struct sockaddr *)&peer, &psz) < 0) // recvfrom() function call
    {
        cout << red << "Fatal: Failed to receive data" << white << endl;
        return;
    }
    else
    {
        cout << green << "Found Server :: " << white;
        cout << yellow << inet_ntoa(peer.sin_addr), htons(peer.sin_port);
        cout << endl;
    }
}
I wish to run this recvfrom() loop until I press Ctrl + C. I've tried setting up handlers and such (from related questions), but they're all either too complicated for me, or just a simple function that exits the program as a demonstration. Here's my problem:
The program hangs on recvfrom() until it receives data (my guess), so there's never a chance for it to wait for input. How can I set up an event that will work into this nicely?
Thanks!
In the Ctrl-C handler, set a flag, and use that flag as the condition of the while loop.
Oh, and if you're not on a POSIX system, where system calls can be interrupted by signals, you might want to make the socket non-blocking and use e.g. select() (with a small timeout) to poll for data. On POSIX, the flag approach can look like the sketch below.
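A minimal POSIX sketch of that flag approach (the handler installation and the loop are split into helper functions; names such as install_sigint_handler(), receive_loop(), and found_server() are illustrative, not from the original answer):

#include <cerrno>
#include <signal.h>      // sigaction, sig_atomic_t (POSIX)
#include <netinet/in.h>  // sockaddr_in
#include <sys/socket.h>  // recvfrom

// Written only from the signal handler, read in the loop.
static volatile sig_atomic_t stop_requested = 0;

static void on_sigint(int) { stop_requested = 1; }

// Install the handler with sigaction() and without SA_RESTART, so a blocking
// recvfrom() is interrupted (errno == EINTR) when Ctrl+C is pressed.
static void install_sigint_handler() {
    struct sigaction sa = {};
    sa.sa_handler = on_sigint;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;                 // deliberately no SA_RESTART
    sigaction(SIGINT, &sa, nullptr);
}

// Receive loop over an already-bound UDP socket sd; found_server() is a
// hypothetical callback standing in for the question's printing code.
static void receive_loop(int sd, void (*found_server)(const sockaddr_in &)) {
    while (!stop_requested) {
        char buff[1500];
        sockaddr_in peer = {};
        socklen_t psz = sizeof(peer);
        ssize_t n = recvfrom(sd, buff, sizeof(buff), 0,
                             (struct sockaddr *)&peer, &psz);
        if (n < 0) {
            if (errno == EINTR)
                continue;            // interrupted by Ctrl+C; the loop condition exits
            break;                   // a real error
        }
        found_server(peer);
    }
}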
Windows has a couple of problems with a scheme like this. The major problem is that function calls cannot be interrupted by the Ctrl-C handler. Instead you have to poll in the loop whether there is anything to receive, while also checking the "exit loop" flag.
It could be done like this:
volatile bool ExitRecvLoop = false;

BOOL WINAPI CtrlHandler(DWORD type)
{
    if (type == CTRL_C_EVENT)
    {
        ExitRecvLoop = true;
        return TRUE;
    }
    return FALSE; // Call next handler
}

// ...

SetConsoleCtrlHandler(CtrlHandler, TRUE);

while (!ExitRecvLoop)
{
    fd_set rs;
    FD_ZERO(&rs);
    FD_SET(sd, &rs);
    timeval timeout = { 0, 1000 }; // One millisecond
    if (select(sd + 1, &rs, NULL, NULL, &timeout) < 0)
    {
        // Handle error
    }
    else
    {
        if (FD_ISSET(sd, &rs))
        {
            // Data to receive, call `recvfrom`
        }
    }
}
You might have to make the socket non-blocking for this to work (see the ioctlsocket function for how to).
Thread off your recvfrom() loop so that your main thread can wait for user input. When the user requests a stop, close the fd from the main thread; the recvfrom() will then return immediately with an error, allowing your recvfrom() thread to exit.
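A minimal sketch of that approach (POSIX sockets with std::thread; note that unblocking a thread stuck in recvfrom() by closing the descriptor from another thread is somewhat platform-dependent, so treat this as an illustration of the idea rather than a guarantee):

#include <iostream>
#include <string>
#include <thread>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

// Receive loop running in its own thread; it exits when recvfrom() fails,
// e.g. after the main thread closes the socket.
void receiver(int sd) {
    char buff[1500];
    sockaddr_in peer = {};
    socklen_t psz = sizeof(peer);
    while (true) {
        ssize_t n = recvfrom(sd, buff, sizeof(buff), 0,
                             (struct sockaddr *)&peer, &psz);
        if (n < 0)
            break;                       // socket closed or other error: leave the loop
        std::cout << "Found Server\n";   // real code would print peer's address here
    }
}

int main() {
    int sd = socket(AF_INET, SOCK_DGRAM, 0);
    // ... bind sd and enable SO_BROADCAST as in the original program ...

    std::thread t(receiver, sd);

    std::string line;
    std::getline(std::cin, line);        // main thread waits for user input
    close(sd);                           // makes the blocked recvfrom() fail
    t.join();
    return 0;
}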