To learn socket programming with TCP, I'm making a simple server and client. The client will send chunks of a file and the server will read them and write to a file. Client and server work properly without any multiprocessing. I want to make it so that multiple clients can connect simultaneously. I want to give each connected client a unique id, called "client_id". This is a number between 1 and n.
I tried to use fork() in order to spawn a child process, and in the child process I accept the connection and then read in the data and save it to the file. However, the client_id variable is not synchronized across processes so sometimes it will be incremented and sometimes not. I don't fully understand what's going on. The value of client_id should never be repeated, but sometimes I'm seeing numbers appear twice. I believe this is because on forking, the child process gets a copy of everything the parent had but there is no synchronization across parallel processes.
Here is my infinite loop that sits and waits for connecting clients. Within the child process, I transfer the file in another infinite loop that terminates when recv receives 0 bytes.
int client_id = 0;
while (1) {
    // accept a new connection
    struct sockaddr_in clientAddr;
    socklen_t clientAddrSize = sizeof(clientAddr);
    // socket file descriptor to use for the connection
    int clientSockfd = accept(sockfd, (struct sockaddr*)&clientAddr, &clientAddrSize);
    if (clientSockfd == -1) {
        perror("accept");
        return 4;
    }
    else { // handle forking
        client_id++;
        std::cout << "Client id: " << client_id << std::endl;
        pid_t pid = fork();
        if (pid == 0) {
            // child process
            std::string client_idstr = std::to_string(client_id);
            char ipstr[INET_ADDRSTRLEN] = {'\0'};
            inet_ntop(clientAddr.sin_family, &clientAddr.sin_addr, ipstr, sizeof(ipstr));
            std::string connection_id = std::to_string(ntohs(clientAddr.sin_port));
            std::cout << "Accept a connection from: " << ipstr << ":" << connection_id
                      << std::endl;
            // read/write data from/into the connection
            char buf[S_BUFSIZE] = {0};
            std::stringstream ss;
            // create file stream
            std::ofstream file_to_save;
            FILE *pFile;
            std::string write_dir = filedir + "/" + client_idstr + ".file";
            std::string write_type = "wb";
            pFile = fopen(write_dir.c_str(), write_type.c_str());
            std::cout << "write dir: " << write_dir << std::endl;
            while (1) {
                memset(buf, '\0', sizeof(buf));
                int rec_value = recv(clientSockfd, buf, S_BUFSIZE, 0);
                if (rec_value == -1) {
                    perror("recv");
                    return 5;
                } else if (rec_value == 0) {
                    // end of transmission, exit the loop
                    break;
                }
                fwrite(buf, sizeof(char), rec_value, pFile);
            }
            fclose(pFile);
            close(clientSockfd);
        }
        else if (pid > 0) {
            // parent process
            continue;
        } else {
            perror("fork failed");
            exit(-1);
        }
    }
}
Here is a test sequence I ran, with the expected file name (client_id.file) for each step in parentheses:
1) connect client 1, transfer file, disconnect client 1 (1.file)
2) connect client 2, transfer file, disconnect client 2 (2.file)
3) connect client 1, transfer file, disconnect client 1 (3.file)
4) connect client 1, transfer file, disconnect client 1 (4.file)
5) connect client 2, transfer file, disconnect client 2 (5.file)
6) connect client 2, transfer file, disconnect client 2 (6.file)
I might be wrong, but something tickles my memory about this counter approach. How is your counter defined? Is it a local variable? It needs to outlive a single iteration of the loop, so make it a global or a static local. When fork() copies the parent's memory pages, the child gets its own copy of the counter's current value; after that point, an increment in one process is invisible to every other process.
If you need a variable that is genuinely shared between processes, there is a way: shared memory, via mmap and friends.
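For illustration, a minimal sketch (my code, not the poster's) of a counter that fork() children share with the parent, placed in an anonymous MAP_SHARED mapping on Linux; it assumes std::atomic<int> is lock-free on the platform:

#include <sys/mman.h>
#include <atomic>
#include <new>

// Returns a counter visible to this process and to all later fork() children.
std::atomic<int>* make_shared_counter() {
    void* mem = mmap(nullptr, sizeof(std::atomic<int>),
                     PROT_READ | PROT_WRITE,
                     MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (mem == MAP_FAILED)
        return nullptr;
    return new (mem) std::atomic<int>(0); // placement-new into the shared page
}

// usage in any process: int id = counter->fetch_add(1) + 1;

That said, for this particular server a shared counter is unnecessary if only the parent increments client_id and the child exits after serving its connection (see below).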
Second: fork() creates a copy of the process, and that copy resumes execution from the point where fork() was called. And what do you have around that point? An infinite loop. So when the child finishes its transfer, it falls out of the if (pid == 0) block and wraps back to the top of the loop, calling accept() itself. You should exit the loop (in fact, exit the process) when pid is 0 and the transfer is done.
For an illustration of how processes multiply when fork() is called inside a loop, see "Visually what happens to fork() in a For Loop".
Walking through your trace: the parent increments the counter from 0 to 1 and forks; the child prints 1. The child finishes its transfer and returns to the beginning of the loop; when it wins the race for the next connection, it increments its own copy of the counter to 2. The parent, when it accepts a connection, also increments its copy to 2. Now you have a duplicated id, and after the next round of forks there are four processes all competing in accept().
If you ran more tests in sequence, you would see the number of duplicates grow, and you could watch the extra processes pile up in top or ps output (or your task manager).
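Putting the two fixes together, a minimal sketch of the corrected accept loop (my reconstruction, reusing the question's identifiers; handle_client is a hypothetical helper wrapping the recv/fwrite loop, and waitpid needs <sys/wait.h>):

while (1) {
    struct sockaddr_in clientAddr;
    socklen_t clientAddrSize = sizeof(clientAddr);
    int clientSockfd = accept(sockfd, (struct sockaddr*)&clientAddr, &clientAddrSize);
    if (clientSockfd == -1) {
        perror("accept");
        return 4;
    }
    client_id++;                    // only the parent's copy matters now
    pid_t pid = fork();
    if (pid == 0) {
        // child: serve exactly one connection, then leave for good
        handle_client(clientSockfd, client_id);   // hypothetical helper
        close(clientSockfd);
        _exit(0);                   // never fall back into accept()
    } else if (pid > 0) {
        close(clientSockfd);        // parent keeps only the listening socket
        while (waitpid(-1, NULL, WNOHANG) > 0) {} // reap finished children
    } else {
        perror("fork");
        exit(-1);
    }
}

Because the child exits instead of looping, the parent is the only process that ever increments client_id, and the ids stay unique.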
I need the simplest, most reliable IPC method from one C++ app running on the RPi to another app.
All I'm trying to do is send a string message of 40 characters from one app to the other.
The first app runs as a service on boot; the other app is started at a later time and is frequently exited and restarted for debugging.
The frequent restarting of the second app is what's causing problems with the IPC methods I've tried so far.
I've tried about 3 different methods, and here is where they failed:
File FIFO: one program hangs while the other program is writing to the file.
Shared memory: cannot initialize on one thread and read from another thread. Also, the frequent exiting while debugging causes GDB crashes with "the following GDB command is taking too long to complete: -stack-list-frames --thread 1".
UDP socket on localhost: same issue as above; in addition, improper exits block the socket, forcing me to reboot the device.
Non-blocking pipe: not getting any messages on the receiving process.
What else can I try? I don't want to pull in the D-Bus library; it seems too complex for this application.
Any simple server and client code, or a link to it, would be helpful.
Here is my non-blocking pipe code, which doesn't work for me. I assume it's because I don't have a reference to the pipe from one app to the other.
Code sourced from here: https://www.geeksforgeeks.org/non-blocking-io-with-pipes-in-c/
char* msg1 = "hello";
char* msg2 = "bye !!";
int p[2], i;

bool InitClient()
{
    // error checking for pipe
    if (pipe(p) < 0)
        exit(1);
    // error checking for fcntl
    if (fcntl(p[0], F_SETFL, O_NONBLOCK) < 0)
        exit(2);
    // read side
    int nread;
    char buf[MSGSIZE];
    // close the write end
    close(p[1]);
    while (1) {
        // read returns -1 when the pipe is empty, because of the
        // O_NONBLOCK set via fcntl above
        nread = read(p[0], buf, MSGSIZE);
        switch (nread) {
        case -1:
            // pipe is empty and errno is set to EAGAIN
            if (errno == EAGAIN) {
                printf("(pipe empty)\n");
                sleep(1);
                break;
            }
        default:
            // by default read returns the number of bytes it read
            printf("MSG = %s\n", buf);
        }
    }
    return true;
}

bool InitServer()
{
    // error checking for pipe
    if (pipe(p) < 0)
        exit(1);
    // error checking for fcntl
    if (fcntl(p[0], F_SETFL, O_NONBLOCK) < 0)
        exit(2);
    // write side: close the read end
    close(p[0]);
    // write "hello" 3 times at 3-second intervals
    for (i = 0; i < 3; i++) {
        write(p[1], msg1, MSGSIZE);
        sleep(3);
    }
    // write "bye" once
    write(p[1], msg2, MSGSIZE);
    return true;
}
Please consider ZeroMQ:
https://zeromq.org/
It is lightweight and has wrappers for all major programming languages.
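For illustration, a minimal sketch of the 40-character exchange over ZeroMQ's ipc:// transport, assuming libzmq with a recent cppzmq binding (zmq.hpp); the endpoint name and the PUSH/PULL choice are my assumptions, not from the answer:

// sender.cpp: the boot-time service
#include <zmq.hpp>
#include <string>

int main() {
    zmq::context_t ctx(1);
    zmq::socket_t out(ctx, zmq::socket_type::push);
    out.bind("ipc:///tmp/myapp.ipc");       // hypothetical endpoint
    std::string msg(40, 'x');               // the 40-character message
    out.send(zmq::buffer(msg), zmq::send_flags::none);
}

// receiver.cpp: the frequently restarted app
#include <zmq.hpp>
#include <iostream>

int main() {
    zmq::context_t ctx(1);
    zmq::socket_t in(ctx, zmq::socket_type::pull);
    in.connect("ipc:///tmp/myapp.ipc");     // reconnects transparently on restart
    zmq::message_t msg;
    if (in.recv(msg, zmq::recv_flags::none))
        std::cout << msg.to_string() << std::endl;
}

The restart-heavy app sits on the connect side, so it can exit and come back at will; ZeroMQ re-establishes the transport underneath, which is exactly what this debugging workflow needs.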
I made a pretty simple C++ socket server. I'm trying to spawn a thread each time a new client connects (so reading can be done in parallel).
void Server::start(void) {
    for (;;) {
        Logger::Log("Now accepting clients");
        int client;
        struct sockaddr_in client_addr;
        size_t addr_size = sizeof(client_addr);
        client = accept(this->m_socket, (sockaddr*)&client_addr, 0);
        if (client != SOCKET_ERROR) {
            Logger::Log("New client connected!");
            StateObject client_object(client, this);
            this->clients.push_back(&client_object);
            std::stringstream stream;
            stream << this->clients.size() << " clients online";
            Logger::Log(const_cast<char*>(stream.str().c_str()));
            std::thread c_thread(std::bind(&StateObject::read, std::ref(client_object)));
            //c_thread.join(); // if I join the child, new clients won't be accepted until the previous thread exits
        }
    }
}
Reading method in client class:
void StateObject::read() {
    Logger::Log("Now reading");
    for (;;) {
        int bytesReceived = recv(this->socket, buffer, 255, 0);
        if (bytesReceived > 0) {
            Logger::Log(const_cast<char*>(std::string("Received: " + std::string(buffer).substr(0, bytesReceived)).c_str()));
        } else if (bytesReceived == 0) {
            Logger::Log("Client gracefully disconnected");
            break;
        } else {
            Logger::Log("Could not receive data from remote host");
            break;
        }
    }
    Server* server = reinterpret_cast<Server*>(parent);
    server->removeClient(this);
}
Currently, after a client connects, the program aborts with an exception. Why and when has abort been triggered?
Please note that this only happens when the child thread has not been joined. In the other case, the flow goes, as expected, synchronously (the current client thread has to exit so that the loop can continue to accept the next client).
Notes:
Since I am tied to Windows, I'm unable to fork child processes, and I am not a fan of Cygwin. Asynchronous Win32 methods seem to complicate things, which is why I avoid them.
C++ std::thread reference
Tests have been done through Telnet.
You either need to detach the thread or join it before it goes out of scope. Otherwise std::thread calls std::terminate in its destructor:
http://www.cplusplus.com/reference/thread/thread/~thread/
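A minimal sketch of the detach approach against the question's accept loop (an assumed rewrite, not the poster's code); note that client_object must also outlive the thread, so the stack-allocated StateObject has to move to the heap:

// inside Server::start(), replacing the stack-allocated StateObject
auto client_object = std::make_shared<StateObject>(client, this);
this->clients.push_back(client_object.get());
std::thread c_thread([client_object]() { client_object->read(); });
c_thread.detach(); // a detached thread's destructor no longer calls std::terminate

The lambda captures the shared_ptr by value, so the StateObject stays alive for as long as the reading thread runs; this also fixes the dangling pointer that the original clients.push_back(&client_object) leaves behind when the loop iteration ends.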
I'm making a web server on Linux in C++ with pthreads. I tested it with valgrind for leaks and memory problems: all fixed. I tested it with helgrind for threading problems: all fixed. Now I'm trying a stress test, and I have a problem when the program is run under helgrind:
valgrind --tool=helgrind ./chats
It just dies in random places with the text "Killed", as it would if I killed it with kill -9. The only report I sometimes get from helgrind is that the program exits while still holding some locks, which is normal when it gets killed.
When checking for leaks:
valgrind --leak-check=full ./chats
it's more stable, but I managed to make it die once with a few hundred concurrent connections.
I tried running the program alone and couldn't make it crash at all. I tried up to 250 concurrent connections. Each thread delays 100 ms to make it easier to have multiple connections open at the same time. No crash.
Under valgrind/helgrind, neither threads nor connections get above 10, and I see it crash even with 2 connections, but never with only one connection at a time (which, counting the main thread and one helper thread, makes 3 threads in total).
Is it possible that the problem only happens when run under helgrind, or does helgrind just make it more likely to show?
What could be the reason for a program to get killed (by the kernel?): allocating too much memory, opening too many file descriptors?
I tested a bit more and I found out that it only dies when the client times out and closes the connection. So here is the code which detects that the client closed the socket:
void *TcpClient::run() {
    int ret;
    struct timeval tv;
    char *buff = (char *)malloc(10001);
    int br;
    colorPrintf(TC_GREEN, "new client starting: %d\n", sockFd);
    while (isRunning()) {
        tv.tv_sec = 0;
        tv.tv_usec = 500 * 1000;
        FD_ZERO(&readFds); // reset the set on every iteration
        FD_SET(sockFd, &readFds);
        ret = select(sockFd + 1, &readFds, NULL, NULL, &tv);
        if (ret < 0) {
            // select error
            continue;
        } else if (ret == 0) {
            // no data to read
            continue;
        }
        br = read(sockFd, buff, 10000);
        if (br <= 0) {
            // client disconnected (or read error)
            setRunning(false);
            break;
        }
        buff[br] = 0;
        if (reader != NULL) {
            reader->tcpRead(this, std::string(buff, br));
        } else {
            readBuffer.append(buff, br);
        }
        //printf("received: %s\n", buff);
    }
    free(buff);
    sendFeedback((void *)1);
    colorPrintf(TC_RED, "closing client socket: %d\n", sockFd);
    ::close(sockFd);
    sockFd = -1;
    return NULL;
}
// this method writes to the socket
bool TcpClient::write(std::string data) {
    int bw;
    int dataLen = data.length();
    bw = ::write(sockFd, data.data(), dataLen);
    if (bw != dataLen) {
        return false; // I don't close the socket in this case; maybe I should
    }
    return true;
}
P.S. The threads are:
main thread: connections are accepted here.
one helper thread which listens for signals and sends signals. It blocks signal reception for the whole app and manually polls the signal queue, because it's hard to handle signals correctly when using threads. I found this technique here on Stack Overflow, and it seems to work fine in other projects; a sketch of it follows below.
client connection threads.
The full code is pretty big, but I can post chunks if someone is interested.
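For reference, a minimal sketch of that signal-helper-thread technique (my reconstruction, not the poster's code): block the signals in every thread, then let one dedicated thread collect them synchronously with sigwait():

#include <csignal>
#include <pthread.h>

void* signal_thread(void*) {
    sigset_t set;
    sigemptyset(&set);
    sigaddset(&set, SIGINT);
    sigaddset(&set, SIGTERM);
    for (;;) {
        int sig = 0;
        if (sigwait(&set, &sig) == 0) {
            // react to sig here, e.g. tell the server to shut down
        }
    }
    return NULL;
}

// In main(), before spawning any other thread, block the same set so the
// mask is inherited by every thread created afterwards:
//   sigset_t set;
//   sigemptyset(&set);
//   sigaddset(&set, SIGINT);
//   sigaddset(&set, SIGTERM);
//   pthread_sigmask(SIG_BLOCK, &set, NULL);
//   pthread_t t;
//   pthread_create(&t, NULL, signal_thread, NULL);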
Update:
I managed to trigger the problem with only one connection. It all happens in the client thread. This is what I do:
1) I read/parse the headers. I put a delay before writing, so the client can time out (which is what causes the problem).
2) Here the client times out and leaves (probably closing the socket).
3) I write back the headers.
4) I write back the HTML code.
Here is how I write back:
bw = ::write(sockFd, data.data(), dataLen);
// bw == dataLen == 108 when writing the headers
// then the second write, for the HTML, kills the program; there is a log
// message before and after write()
bw = ::write(sockFd, data.data(), dataLen); // doesn't get past this point the second time
Update 2: Got it :)
gdb says:
Program received signal SIGPIPE, Broken pipe.
[Switching to Thread 0x41401940 (LWP 10554)]
0x0000003ac2e0d89b in write () from /lib64/libpthread.so.0
Question 1: What should I do to avoid receiving this signal?
Question 2: How can I know that the remote side disconnected while writing? On read, select() reports that there is data, but the data read is 0 bytes. What is the equivalent for write?
Well, I just had to handle the SIGPIPE signal: when write returns -1, I close the socket and quit the thread gracefully. Works like a charm.
I guess the easiest way is to set signal handler of SIGPIPE to SIG_IGN:
signal(SIGPIPE, SIG_IGN);
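An alternative for sockets specifically (on Linux; my addition, not part of the original answer) is to pass MSG_NOSIGNAL to send(), which suppresses SIGPIPE for that one call and makes it fail with EPIPE instead:

#include <sys/socket.h>
#include <cerrno>

// in TcpClient::write(), instead of ::write()
ssize_t bw = send(sockFd, data.data(), dataLen, MSG_NOSIGNAL);
if (bw == -1 && errno == EPIPE) {
    // peer closed the connection: close our end and quit the thread
}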
Note that the first write was successful and didn't kill the program. If you have a similar problem, check whether you are writing once or multiple times. And if you are not familiar with gdb, this is how to do it:
gdb ./your-program
(gdb) run
and gdb will tell you all about signals and segfaults.
I'm using select() in a thread to monitor a datagram socket, but unless data is being sent to the socket before the thread starts, select() will continue to return 0.
I'm mixing a little C and C++; here's the method that starts the thread:
bool RelayStart() {
    sock_recv = socket(AF_INET, SOCK_DGRAM, 0);
    memset(&addr_recv, 0, sizeof(addr_recv));
    addr_recv.sin_family = AF_INET;
    addr_recv.sin_port = htons(18902);
    addr_recv.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(sock_recv, (struct sockaddr*) &addr_recv, sizeof(addr_recv));
    isRelayingPackets = true;
    NSS::Thread::start(VIDEO_SEND_THREAD_ID);
    return true;
}
The method that stops the thread:
bool RelayStop() {
    isSendingVideo = false;
    NSS::Thread::stop();
    close(sock_recv);
    return true;
}
And the method run in the thread:
void Run() {
    fd_set read_fds;
    int select_return;
    struct timeval select_timeout;
    FD_ZERO(&read_fds);
    FD_SET(sock_recv, &read_fds);
    while (isRelayingPackets) {
        select_timeout.tv_sec = 1;
        select_timeout.tv_usec = 0;
        select_return = select(sock_recv + 1, &read_fds, NULL, NULL, &select_timeout);
        if (select_return > 0 && FD_ISSET(sock_recv, &read_fds)) {
            // ...
        }
    }
}
The problem is that if there isn't a process already sending UDP packets to port 18902 before RelayStart() is called, select() will always return 0. So, for example, I can't restart the sender without restarting the thread (in the correct order.)
Everything seems to work fine as long as the sender is started first.
The Run thread only constructs read_fds once.
The select call updates read_fds to have all its bits cleared for all descriptors that did not have data ready, and all its bits set for those that were set before and do have data ready.
Hence, if no descriptor has any data ready and the select call times out (and returns 0), all the bits in read_fds are now cleared. Further calls passing the same all-zero bit-mask will scan no file descriptors.
You can either re-construct the read-set on each trip inside the loop:

while (isRelayingPackets) {
    FD_ZERO(&read_fds);
    FD_SET(sock_recv, &read_fds);
    ...
}
or use an auxiliary variable with a copy of the bit-set:

while (isRelayingPackets) {
    fd_set select_arg = read_fds;
    ... same as before but use &select_arg ...
}
(Or, of course, there are non-select interfaces that are easier to use in some ways.)
How were you expecting it to behave? The point of select() is to sleep to a timeout until data are available to be read; in this case, it will time out after 1 second and return 0. Perhaps you don't actually want a timeout before the start of a stream?
I am trying to port existing Windows C++ code that uses IOCP to Linux. Having decided to use epoll_wait to achieve high concurrency, I am already faced with a theoretical issue about what happens when we try to process received data.
Imagine two threads calling epoll_wait, and two consecutive messages being received, such that Linux unblocks the first thread and, soon after, the second.
Example:
Thread 1 blocks on epoll_wait
Thread 2 blocks on epoll_wait
Client sends a chunk of data 1
Thread 1 unblocks from epoll_wait, performs recv and tries to process the data
Client sends a chunk of data 2
Thread 2 unblocks, performs recv and tries to process the data
Is this scenario conceivable? I.e., can it occur?
Is there a way to prevent it, so as to avoid implementing synchronization in the recv/processing code?
If you have multiple threads reading from the same set of epoll handles, I would recommend you put your epoll handles in one-shot level-triggered mode with EPOLLONESHOT. This will ensure that, after one thread observes the triggered handle, no other thread will observe it until you use epoll_ctl to re-arm the handle.
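For instance, a minimal sketch of registering a connection in that mode (epfd and connfd are placeholder names):

struct epoll_event ev;
ev.events = EPOLLIN | EPOLLONESHOT; // delivered to a single waiter, then disarmed
ev.data.fd = connfd;
epoll_ctl(epfd, EPOLL_CTL_ADD, connfd, &ev);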
If you need to handle read and write paths independently, you may want to completely split up the read and write thread pools; have one epoll handle for read events, and one for write events, and assign threads to one or the other exclusively. Further, have a separate lock for read and for write paths. You must be careful about interactions between the read and write threads as far as modifying any per-socket state, of course.
If you do go with that split approach, you need to put some thought into how to handle socket closures. Most likely you will want an additional shared-data lock, and 'acknowledge closure' flags, set under the shared data lock, for both read and write paths. Read and write threads can then race to acknowledge, and the last one to acknowledge gets to clean up the shared data structures. That is, something like this:
void OnSocketClosed(shareddatastructure *pShared, int writer)
{
    epoll_ctl(myepollhandle, EPOLL_CTL_DEL, pShared->fd, NULL);
    LOCK(pShared->common_lock);
    if (writer)
        pShared->close_ack_w = true;
    else
        pShared->close_ack_r = true;
    bool acked = pShared->close_ack_w && pShared->close_ack_r;
    UNLOCK(pShared->common_lock);
    if (acked)
        free(pShared);
}
I'm assuming here that the situation you're trying to process is something like this:
You have multiple (maybe very many) sockets that you want to receive data from at once;
You want to start processing data from the first connection on Thread A when it is first received and then be sure that data from this connection is not processed on any other thread until you have finished with it in Thread A.
While you are doing that, if some data is now received on a different connection, you want Thread B to pick that data up and process it, while still being sure that no one else can process this connection until Thread B is done with it, etc.
Under these circumstances it turns out that using epoll_wait() with the same epoll fd in multiple threads is a reasonably efficient approach (I'm not claiming that it is necessarily the most efficient).
The trick here is to add the individual connection fds to the epoll fd with the EPOLLONESHOT flag. This ensures that once an fd has been returned from an epoll_wait() it is unmonitored until you specifically tell epoll to monitor it again. This ensures that the thread processing this connection suffers no interference, as no other thread can be processing the same connection until this thread marks the connection to be monitored again.
You can set up the fd to monitor EPOLLIN or EPOLLOUT again using epoll_ctl() and EPOLL_CTL_MOD.
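A sketch of that re-arm step, with the same placeholder names as above:

struct epoll_event ev;
ev.events = EPOLLIN | EPOLLONESHOT; // arm again for the next epoll_wait()er
ev.data.fd = connfd;
epoll_ctl(epfd, EPOLL_CTL_MOD, connfd, &ev);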
A significant benefit of using epoll like this in multiple threads is that when one thread is finished with a connection and adds it back to the epoll monitored set, any other threads still in epoll_wait() are immediately monitoring it even before the previous processing thread returns to epoll_wait(). Incidentally that could also be a disadvantage because of lack of cache data locality if a different thread now picks up that connection immediately (thus needing to fetch the data structures for this connection and flush the previous thread's cache). What works best will sensitively depend on your exact usage pattern.
If you are trying to process messages received subsequently on the same connection in different threads then this scheme to use epoll is not going to be appropriate for you, and an approach using a listening thread feeding an efficient queue feeding worker threads might be better.
Previous answers that point out that calling epoll_wait() from multiple threads is a bad idea are almost certainly right, but I was intrigued enough by the question to try and work out what does happen when it is called from multiple threads on the same handle, waiting for the same socket. I wrote the following test code:
#include <netinet/in.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/epoll.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

struct thread_info {
    int number;
    int socket;
    int epoll;
};

void * thread(struct thread_info * arg)
{
    struct epoll_event events[10];
    int s;
    char buf[512];
    sleep(5 * arg->number);
    printf("Thread %d start\n", arg->number);
    do {
        s = epoll_wait(arg->epoll, events, 10, -1);
        if (s < 0) {
            perror("wait");
            exit(1);
        } else if (s == 0) {
            printf("Thread %d No data\n", arg->number);
            exit(1);
        }
        if (recv(arg->socket, buf, 512, 0) <= 0) {
            perror("recv");
            exit(1);
        }
        printf("Thread %d got data\n", arg->number);
    } while (s == 1);
    printf("Thread %d end\n", arg->number);
    return 0;
}

int main()
{
    pthread_attr_t attr;
    pthread_t threads[2];
    struct thread_info thread_data[2];
    int s;
    int listener, client, epollfd;
    struct sockaddr_in listen_address;
    struct sockaddr_storage client_address;
    socklen_t client_address_len;
    struct epoll_event ev;

    listener = socket(AF_INET, SOCK_STREAM, 0);
    if (listener < 0) {
        perror("socket");
        exit(1);
    }
    memset(&listen_address, 0, sizeof(struct sockaddr_in));
    listen_address.sin_family = AF_INET;
    listen_address.sin_addr.s_addr = INADDR_ANY;
    listen_address.sin_port = htons(6799);
    s = bind(listener,
             (struct sockaddr*)&listen_address,
             sizeof(listen_address));
    if (s != 0) {
        perror("bind");
        exit(1);
    }
    s = listen(listener, 1);
    if (s != 0) {
        perror("listen");
        exit(1);
    }
    client_address_len = sizeof(client_address);
    client = accept(listener,
                    (struct sockaddr*)&client_address,
                    &client_address_len);
    epollfd = epoll_create(10);
    if (epollfd == -1) {
        perror("epoll_create");
        exit(1);
    }
    ev.events = EPOLLIN;
    ev.data.fd = client;
    if (epoll_ctl(epollfd, EPOLL_CTL_ADD, client, &ev) == -1) {
        perror("epoll_ctl: listen_sock");
        exit(1);
    }
    thread_data[0].number = 0;
    thread_data[1].number = 1;
    thread_data[0].socket = client;
    thread_data[1].socket = client;
    thread_data[0].epoll = epollfd;
    thread_data[1].epoll = epollfd;
    s = pthread_attr_init(&attr);
    if (s != 0) {
        perror("pthread_attr_init");
        exit(1);
    }
    s = pthread_create(&threads[0],
                       &attr,
                       (void*(*)(void*))&thread,
                       &thread_data[0]);
    if (s != 0) {
        perror("pthread_create");
        exit(1);
    }
    s = pthread_create(&threads[1],
                       &attr,
                       (void*(*)(void*))&thread,
                       &thread_data[1]);
    if (s != 0) {
        perror("pthread_create");
        exit(1);
    }
    pthread_join(threads[0], 0);
    pthread_join(threads[1], 0);
    return 0;
}
When data arrives and both threads are waiting on epoll_wait(), only one will return; but as subsequent data arrives, the thread that wakes up to handle it is effectively random between the two. I wasn't able to find a way to affect which thread was woken.
It seems likely that a single thread calling epoll_wait makes most sense, with events passed to worker threads to pump the IO.
I believe that high-performance software using epoll with a thread per core creates multiple epoll handles, each handling a subset of all the connections. That way the work is divided, but the problem you describe is avoided.
Generally, epoll is used when you have a single thread listening for data on a single asynchronous source. To avoid busy-waiting (manually polling), you use epoll to let you know when data is ready (much like select does).
It is not standard practice to have multiple threads reading from a single data source, and I, at least, would consider it bad practice.
If you want to use multiple threads, but you only have one input source, then designate one of the threads to listen and queue the data so the other threads can read individual pieces from the queue.
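A minimal sketch of that listener-plus-queue shape (my illustration with made-up names; the listener pushes whatever unit of work fits the protocol, here a std::string):

#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>

class WorkQueue {
    std::queue<std::string> items;
    std::mutex m;
    std::condition_variable cv;
public:
    void push(std::string item) {
        {
            std::lock_guard<std::mutex> lock(m);
            items.push(std::move(item));
        }
        cv.notify_one(); // wake exactly one worker
    }
    std::string pop() {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [this] { return !items.empty(); });
        std::string item = std::move(items.front());
        items.pop();
        return item;
    }
};

// listener thread: queue.push(data_read_from_socket);
// worker threads:  for (;;) { process(queue.pop()); }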