poll() catch thread return value - c++

I have a poll() loop handling a small socket connection. I want to start another program with system() or exec() and I need the return value of that system()/exec() call, but I don't want to stop the main loop while the child process is running. I thought I would start it in a thread, but I am not sure how to set up the pollfd to catch the thread when it is done. I am using C/C++.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>
#include <poll.h>
#include <iostream>
#include <string>
#include <thread>
#include <future>
#define SOCKET_NAME "/tmp/9Lq7BNBnBycd6nxy.socket"
int runProgram(const std::string &programName, const std::string &fileName) {
return system((programName + " " + fileName).c_str());
}
int main(int argc, char *argv[]) {
struct sockaddr_un server;
int sock;
char buf[1024];
unlink(SOCKET_NAME);
sock = socket(AF_UNIX, SOCK_STREAM, 0);
if (sock == -1){
perror("socket");
exit(EXIT_FAILURE);
}
memset(&server, 0, sizeof(struct sockaddr_un));
server.sun_family = AF_UNIX;
strncpy(server.sun_path, SOCKET_NAME, sizeof(server.sun_path) - 1);
if (bind(sock, (struct sockaddr *) &server, sizeof(struct sockaddr_un)) < 0) {
perror("bind");
exit(EXIT_FAILURE);
}
if (listen(sock, 3) < -1) {
perror("listen");
exit(EXIT_FAILURE);
}
struct pollfd fds[2];
fds[0].fd = sock;
fds[0].events = POLLIN;
std::future<int> ret = std::async(&runProgram, "cat", "test.txt");
while (true) {
poll(fds, 2, -1);
if(fds[0].revents & POLLIN) {
int new_sd = accept(fds[0].fd, NULL, NULL);
if (new_sd < 0) {
perror("accept");
}
fds[1].fd = new_sd;
}
if (fds[0].revents & POLLIN) {
int rv = recv(fds[1].fd, buf, 1024, 0);
if (rv < 0)
perror("recv");
else if (rv == 0) {
printf("disconnet\n");
close(fds[1].fd);
} else {
printf("%s\n", buf);
send(fds[1].fd, buf, 1024, 0);
}
memset(buf, 0, 1024);
}
}
close(sock);
return(EXIT_SUCCESS);
}
So I want to add one more entry to the pollfd array and get a POLLIN on fds[2] when my thread is done, so that I can fetch the return value with ret.get(). Here I used cat as an example command, but in my final code the command will need much more time, so I can't just wait for it to finish.

The simplest solution is to create an anonymous pipe (or, since you say that you are on Linux, an eventfd) and write data to one end of the pipe in the runProgram function once the call to system returns. You can then include the read end of the pipe in the set of file descriptors that you are polling.
int process_eventfd = eventfd(0, EFD_CLOEXEC);
if (process_eventfd == -1) exit(1); // change this to handle appropriately
struct pollfd fds[3];
fds[0].fd = sock;
fds[0].events = POLLIN;
fds[1].fd = process_eventfd;
fds[1].events = POLLIN;
// use fds[2] instead of fds[1] for your socket connection, etc.
You can add the eventfd number as an argument to runProgram. It should now look something like:
int runProgram(const std::string &programName, const std::string &fileName, int process_eventfd) {
    int result = system((programName + " " + fileName).c_str());
    uint64_t value = 1;
    write(process_eventfd, &value, 8);
    return result;
}
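In the poll loop you would then watch fds[1] for POLLIN. A small sketch of how the pieces could fit together (my code, not part of the original answer; eventfd() needs <sys/eventfd.h>, and the std::async call gains the extra argument):
std::future<int> ret = std::async(std::launch::async, &runProgram, "cat", "test.txt", process_eventfd);
// ... inside the loop, after poll() returns:
if (fds[1].revents & POLLIN) {
    uint64_t value;
    read(process_eventfd, &value, 8); // clear the eventfd counter
    int status = ret.get();           // the thread has finished, so this no longer blocks
    printf("command returned %d\n", status);
}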
By the way, your current program has a bug: you always pass 2 as the number of file descriptors to poll, even before you have set up the second file descriptor in the array. You should only pass the number of valid descriptors actually present in the array.
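For example (a sketch of one way to follow that advice, not part of the original answer):
nfds_t nfds = 2;            // fds[0] (listening socket) and fds[1] (eventfd) are always valid
// ... after a successful accept():
fds[2].fd = new_sd;
fds[2].events = POLLIN;
nfds = 3;
// ... and in the loop:
poll(fds, nfds, -1);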
However, if you don't need to use system and can use exec, there is no need to create another thread; just perform the following steps:
1. Mask (but don't ignore) the SIGCHLD signal. (You may need to set up a signal handler, even if it does nothing; I can't remember if this is true for Linux or not.)
2. Create your external process via fork/exec.
3. Use ppoll rather than poll, and include SIGCHLD in the signals to be enabled.
4. If the ppoll call returns an EINTR error, use waitpid to obtain the child status.
The child process will run in parallel to your program.
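A hedged sketch of those steps (my code, not the answerer's; it assumes the fds/nfds variables from above, minimal error handling, and Linux, since ppoll() needs _GNU_SOURCE):
#include <errno.h>
#include <signal.h>
#include <sys/wait.h>

static void sigchld_handler(int) {} // exists only so SIGCHLD actually interrupts ppoll

// before the loop:
sigset_t blocked, during_ppoll;
sigemptyset(&blocked);
sigaddset(&blocked, SIGCHLD);
sigprocmask(SIG_BLOCK, &blocked, &during_ppoll); // keep SIGCHLD blocked normally
sigdelset(&during_ppoll, SIGCHLD);               // ...but deliverable while ppoll sleeps

struct sigaction sa;
memset(&sa, 0, sizeof(sa));
sa.sa_handler = sigchld_handler;
sigaction(SIGCHLD, &sa, nullptr);

pid_t child = fork();
if (child == 0) {
    execlp("cat", "cat", "test.txt", (char *)nullptr);
    _exit(127); // only reached if exec fails
}

// inside the loop, instead of poll():
int n = ppoll(fds, nfds, nullptr, &during_ppoll);
if (n == -1 && errno == EINTR) {
    int status;
    if (waitpid(child, &status, WNOHANG) == child && WIFEXITED(status))
        printf("child exited with status %d\n", WEXITSTATUS(status));
}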

Related

Data gets cut off when sent through a TCP socket in C/C++

Long messages get cut off when I send them through a TCP socket. It differs depending on the destination. When sending and receiving locally on my machine, everything goes through. When sending and receiving locally on my server, the data gets cut off after the 21846th byte, consistently. When sending from my machine to the server, it gets cut off at the 1441st byte, consistently. The server is in Stockholm and I'm in the UK. The same problem is also present when the client is on Windows and uses Windows' networking code.
Here the client is supposed to send 29,999 zeros and a null terminator, receive them back, and print them. When I counted the received bytes with wc, I got the figures above. So the figures represent a two-way transfer, but from testing I can say that the problem has the same properties one-way.
The read/write functions are blocking as far as I can tell, so the problem is not that the data has not arrived fully before the functions exit - a problem described in the man page.
How can I fix this?
Go to the bottom to see the solution.
Here's the code that I used to test this:
server/main.cpp
#include <filesystem>
#include <fstream>
#include <iomanip>
#include <iostream>
#include <sstream>
#include <string>
#include "aes/aes.hpp"
#include "network.hpp"
#include <thread>
using namespace std;
int main()
{
int server_socket = InitServerSocket(26969);
std::cout << "Listening..." << std::endl;
while (true) {
// accept incoming connections - blocking
int client_socket = accept(server_socket, NULL, NULL);
if (client_socket < 0) {
std::cerr << "Unable to accept";
close(server_socket);
return 1;
}
char long_text[30000];
read(client_socket, long_text, 30000);
std::cout << long_text << std::endl;
write(client_socket, long_text, 30000);
close(client_socket);
}
return 0;
}
InitServerSocket():
int InitServerSocket(int port)
{
struct sockaddr_in server_address;
server_address.sin_family = AF_INET;
server_address.sin_port = htons(port);
server_address.sin_addr.s_addr = htonl(INADDR_ANY);
int server_socket;
server_socket = socket(AF_INET, SOCK_STREAM, 0);
if (server_socket < 0) {
perror("Unable to create socket");
_exit(1);
}
int result = bind(
server_socket,
(struct sockaddr*)&server_address,
sizeof(server_address));
if (result < 0) {
perror("Unable to bind");
_exit(1);
}
if (listen(server_socket, 1000) < 0) {
perror("Unable to listen");
_exit(1);
}
return server_socket;
}
client/main.cpp
#include <arpa/inet.h>
#include <iostream>
#include <netdb.h>
#include <string.h>
#include <string>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>
#include "network.hpp"
using namespace std;
int main()
{
int socket = ConnectToHost("70.34.195.74", 26969);
char long_text[30000];
for (int i = 0; i < 30000; i++)
long_text[i] = '0';
long_text[29999] = '\0';
write(socket, long_text, 30000);
read(socket, long_text, 30000);
std::cout << long_text << std::endl;
CloseConnection(socket);
return 0;
}
ConnectToHost():
int ConnectToHost(char* IPAddress, int PortNo)
{
// create a socket
int network_socket; // socket descriptor ~= pointer ~= fd
network_socket = socket(AF_INET, SOCK_STREAM, 0);
// specify a destination address
struct sockaddr_in server_address;
server_address.sin_family = AF_INET; // specify protocol
server_address.sin_port = htons(PortNo); // specify port
// server_address.sin_addr.s_addr = a.s_addr; // specify resolved ip
server_address.sin_addr.s_addr = inet_addr(IPAddress);
// connect
int connection_status = connect(
network_socket,
(struct sockaddr*)&server_address,
sizeof(server_address));
if (connection_status == -1)
std::cout << "Failed to conect to remote host\n";
return network_socket;
}
Solution:
Here are the wrapper functions I wrote to fix the problem:
int Send(int soc, char* buffer, int size)
{
    int ret = -1;
    int index = 0;
    do {
        ret = write(soc, buffer + index, size - index);
        if (ret > 0)
            index += ret; // only advance past what was actually written
    } while (ret > 0 && index < size);
    if (ret == -1)
        return -1;
    else
        return 1;
}
int Recv(int soc, char* buffer, int size)
{
    int ret = -1;
    int index = 0;
    do {
        ret = read(soc, buffer + index, size - index);
        if (ret > 0)
            index += ret; // only advance past what was actually read
    } while (ret > 0 && index < size);
    if (ret == -1)
        return -1;
    else
        return 1;
}
write(client_socket, long_text, 30000);
You have no guarantees, whatsoever, that all 30000 bytes get written, even with blocking sockets. You must check the write()'s return value to determine how many bytes were actually written, then implement the required logic to try again, with whatever's left to be written, and proceed in this manner until all 30000 bytes get written to the socket. This is how sockets always work.
read(socket, long_text, 30000);
Same thing here, you must check the returned value. If the socket has only a hundred bytes of unread data waiting you'll get these 100 bytes and read() will return 100. If there's nothing unread from a blocking socket, read() will block. If the socket ends up receiving a packet with 1 byte, read() returns 1, which tells you that only 1 byte was read.
How can I fix this?
You must always check what every read() and write() returns, and proceed accordingly. If you need to read or write more data from the socket, then try again, and do that.
so the problem is not that the data has not arrived fully before
the functions exit - a problem described in the man page.
The man page also describes what the returned value from read() and write() means: a negative value indicates an error, a positive value indicates how many bytes were actually read or written. Only reading and writing to regular files guarantees that the requested number of bytes will be read or written (unless reading reaches the end of the file).
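To make that concrete, here is a minimal sketch of the looping pattern being described (my code, with a hypothetical helper name; the reading side works the same way):
// Keep calling write() until every byte has been handed to the kernel.
ssize_t write_all(int fd, const char *buf, size_t len)
{
    size_t sent = 0;
    while (sent < len) {
        ssize_t n = write(fd, buf + sent, len - sent);
        if (n < 0)
            return -1;        // real error; inspect errno
        sent += (size_t)n;    // partial write: advance and try again
    }
    return (ssize_t)sent;
}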

My epoll server loses some connections. Why?

I'd like to make an epoll server, but my server code loses some connections.
My client spawns 100 threads and each sends the same message. Then my server is supposed to receive and print them with counting numbers.
But the server seems to be losing connections and I don't know why.
I changed EPOLL_SIZE from 50 to 200 and increased the backlog argument of listen() from 5 to 1000, but neither helped.
1.server:
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <sys/epoll.h>
#include <memory>
#include <array>
#define BUF_SIZE 100
#define EPOLL_SIZE 200
void error_handling(const char *buf);
int main(int argc, char *argv[])
{
// Step 1. Initialization
int server_socket, client_socket;
struct sockaddr_in client_addr;
socklen_t addr_size;
int str_len, i;
char buf[BUF_SIZE];
int epfd, event_cnt;
if (argc != 2) {
printf("Usage : %s <port>\n", argv[0]);
exit(1);
}
// Step 2. Creating a socket
server_socket = socket(PF_INET, SOCK_STREAM, 0);
struct sockaddr_in server_addr;
memset(&server_addr, 0, sizeof(server_addr));
server_addr.sin_family = AF_INET;
server_addr.sin_addr.s_addr = htonl(INADDR_ANY);
server_addr.sin_port = htons(atoi(argv[1]));
// Step 3. Binding the server address onto the socket created just right before.
if (bind(server_socket, (struct sockaddr*) &server_addr, sizeof(server_addr)) == -1)
error_handling("bind() error");
// Step 4. Start to listen to the socket.
if (listen(server_socket, 1000) == -1)
error_handling("listen() error");
// Step 5. Create an event poll instance.
epfd = epoll_create(EPOLL_SIZE);
auto epoll_events = (struct epoll_event*) malloc(sizeof(struct epoll_event) * EPOLL_SIZE);
struct epoll_event event;
event.events = EPOLLIN;
event.data.fd = server_socket;
// Step 6. Adding the server socket file descriptor to the event poll's control.
epoll_ctl(epfd, EPOLL_CTL_ADD, server_socket, &event);
int recv_cnt = 0;
while(true)
{
// Step 7. Wait until some event happens
event_cnt = epoll_wait(epfd, epoll_events, EPOLL_SIZE, -1);
if (event_cnt == -1)
{
puts("epoll_wait() error");
break;
}
for (i = 0; i < event_cnt; i++)
{
if (epoll_events[i].data.fd == server_socket)
{
addr_size = sizeof(client_addr);
client_socket = accept(server_socket, (struct sockaddr*)&client_addr, &addr_size);
event.events = EPOLLIN;
event.data.fd = client_socket;
epoll_ctl(epfd, EPOLL_CTL_ADD, client_socket, &event);
//printf("Connected client: %d\n", client_socket);
}
else // client socket?
{
str_len = read(epoll_events[i].data.fd, buf, BUF_SIZE);
if (str_len == 0) // close request!
{
epoll_ctl(epfd, EPOLL_CTL_DEL, epoll_events[i].data.fd, nullptr);
close(epoll_events[i].data.fd);
printf("%d: %s\n", ++recv_cnt, buf);
//printf("closed client: %d \n", epoll_events[i].data.fd);
}
else
{
write(epoll_events[i].data.fd, buf, str_len); // echo!
}
} // end of else()
} // end of for()
} // end of while()
close(server_socket);
close(epfd);
free(epoll_events);
return EXIT_SUCCESS;
}
void error_handling(const char *buf)
{
fputs(buf, stderr);
fputc('\n', stderr);
exit(EXIT_FAILURE);
}
2.client:
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <string>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <thread>
#include <vector>
#include <algorithm>
#include <mutex>
#define BUF_SIZE 100
#define EPOLL_SIZE 50
void error_handling(const char *buf);
int main(int argc, char *argv[])
{
// Step 1. Initialization
int socketfd;
if (argc != 3) {
printf("Usage : %s <ip address> <port>\n", argv[0], argv[1]);
exit(EXIT_FAILURE);
}
std::vector<std::thread> cli_threads;
std::mutex wlock;
for (int i = 0; i < 100; i++) {
cli_threads.push_back(std::thread([&](const char* szIpAddr, const char* szPort) {
// Step 2. Creating a socket
socketfd = socket(PF_INET, SOCK_STREAM, 0);
struct sockaddr_in server_addr;
memset(&server_addr, 0, sizeof(server_addr));
server_addr.sin_family = AF_INET;
server_addr.sin_addr.s_addr = inet_addr(szIpAddr);
server_addr.sin_port = htons(atoi(szPort));
// Step 3. Connecting to the server
if(connect(socketfd, (struct sockaddr*)&server_addr, sizeof(server_addr)) == -1)
error_handling("connect() error");
// Step 4. Writing message to the server
std::string msg = "Hey I'm a client!";
wlock.lock();
auto str_len = write(socketfd, msg.c_str(), msg.size()+1);
wlock.unlock();
close(socketfd);
}, argv[1], argv[2]));
}
std::for_each(cli_threads.begin(), cli_threads.end(),
[](std::thread &t)
{
t.join();
}
);
return EXIT_SUCCESS;
}
void error_handling(const char *buf)
{
fputs(buf, stderr);
fputc('\n', stderr);
exit(EXIT_FAILURE);
}
expected like...
1: Hey I'm a client!
...
100: Hey I'm a client!
but, the result varies, like...
1: Hey I'm a client!
...
n: Hey I'm a client!
where the n is less than 100.
You have undefined behaviour because you pass socketfd by reference into the thread lambda - std::thread([&](.... One socket descriptor variable is modified by all threads concurrently, which causes the problems. Every thread should store its own descriptor.
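A minimal sketch of that fix (my adaptation of the client code, not the answerer's): the descriptor becomes local to the lambda, so each thread owns its own socket.
cli_threads.push_back(std::thread([&wlock](const char* szIpAddr, const char* szPort) {
    int socketfd = socket(PF_INET, SOCK_STREAM, 0);   // per-thread descriptor
    struct sockaddr_in server_addr;
    memset(&server_addr, 0, sizeof(server_addr));
    server_addr.sin_family = AF_INET;
    server_addr.sin_addr.s_addr = inet_addr(szIpAddr);
    server_addr.sin_port = htons(atoi(szPort));
    if (connect(socketfd, (struct sockaddr*)&server_addr, sizeof(server_addr)) == -1)
        error_handling("connect() error");
    std::string msg = "Hey I'm a client!";
    {
        std::lock_guard<std::mutex> guard(wlock);     // serialise the writes, as in the original
        write(socketfd, msg.c_str(), msg.size() + 1);
    }
    close(socketfd);
}, argv[1], argv[2]));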

Select() call not working localhost [closed]

For a project I'm building a "super-server" like inetd. It is supposed to read a set of ports and commands from a config file, and for each port spin up a listener socket. It should then use select() to determine when one or more of these sockets is ready to read from. When select finds a socket, it should use accept() to connect to this socket, and then fork() a child process in which the command will be executed. Unfortunately, select is always either timing out or failing when I try to call "nc -l localhost 12345" to test it (with '12345 echo "hello world"' in the config.txt file).
Can you spot anything I might be doing wrong? Thanks in advance! I've been going crazy trying to figure this out!
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <sys/select.h>
#include <iostream>
#include <sstream>
#include <fstream>
#include <map>
using namespace std;
map<int,string> parse_config_file() {
string line;
ifstream file;
stringstream ss;
int port;
string command;
map<int,string> port_to_command;
file.open("config.txt");
while (getline(file,line)) {
ss = stringstream(line);
ss >> port;
getline(ss,command);
port_to_command[port] = command;
}
file.close();
return port_to_command;
}
void handle_client(int socket, string command) {
dup2(socket, STDIN_FILENO);
dup2(socket, STDOUT_FILENO);
execl("/bin/sh", "/bin/sh", "-c", command.c_str(), NULL);
}
int main(int argc, const char * argv[]) {
int rc;
int readyfd;
int peerfd;
int maxfd = 0;
int port;
pid_t child_pid;
fd_set readfds;
struct timeval tv;
tv.tv_sec = 10;
tv.tv_usec = 0;
struct sockaddr* server_address;
socklen_t server_address_length = sizeof(server_address);
struct sockaddr client_address;
socklen_t client_address_length = sizeof(client_address);
map<int,string> port_to_command = parse_config_file();
map<int,string>::iterator pcitr;
map<int,int> socket_to_port;
map<int,int>::iterator spitr;
// Create, bind, and listen on the sockets:
for (pcitr = port_to_command.begin(); pcitr != port_to_command.end(); pcitr++) {
int sockfd = socket(AF_INET, SOCK_STREAM, 0);
if (sockfd < 0) {
cerr << "ERROR opening socket";
exit(EXIT_FAILURE);
}
port = pcitr->first;
struct sockaddr_in server_address_internet;
bzero((char *) &server_address_internet, sizeof(server_address_internet));
server_address_internet.sin_family = AF_INET;
server_address_internet.sin_addr.s_addr = INADDR_ANY;
server_address_internet.sin_port = htons(port);
server_address = (struct sockaddr *)&server_address_internet;
bind(sockfd, server_address, server_address_length);
rc = listen(sockfd, 10);
if (rc < 0) {
cerr << "listen() failed";
exit(EXIT_FAILURE);
}
socket_to_port[sockfd] = pcitr->first;
if (sockfd > maxfd) {
maxfd = sockfd;
}
}
// Server Loop
while (true) {
// Rebuild the FD set:
FD_ZERO(&readfds);
for (spitr = socket_to_port.begin(); spitr != socket_to_port.end(); spitr++) {
FD_SET(spitr->first, &readfds);
}
// Select
rc = select(maxfd + 1, &readfds, NULL, NULL, &tv);
if (rc == 0) {
// Timeout
continue;
} else if (rc < 0) {
cerr << "select failed" << endl;
exit(EXIT_FAILURE);
}
// Find the socket that is ready to be read:
readyfd = -1;
for (spitr = socket_to_port.begin(); spitr != socket_to_port.end(); spitr++) {
if (FD_ISSET(spitr->first, &readfds)) {
readyfd = spitr->first;
break;
}
}
// Accept
peerfd = accept(readyfd, &client_address, &client_address_length);
if (peerfd < 0) {
cerr << "accept failed" << endl;
exit(EXIT_FAILURE);
}
// Fork to handle request:
child_pid = fork();
if (child_pid == 0) {
port = ((struct sockaddr_in*)&client_address)->sin_port;
handle_client(peerfd, port_to_command[port]);
close(peerfd);
exit(EXIT_SUCCESS);
} else {
close(peerfd);
}
}
return 0;
}
Well, I did spot a few things you were doing wrong.
using namespace std; -- that part is obviously wrong.
parse_config_file() does not validate the configuration file or check its syntax. A typo or a misplaced character would make operator>> fail, and that failure is never detected, so a command's port could end up random, uninitialized, or a copy of the previous command's port.
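A minimal sketch of the kind of validation meant here (my code, not the answerer's):
while (getline(file, line)) {
    stringstream ss(line);
    int port = 0;
    string command;
    if (!(ss >> port) || !getline(ss, command) || command.empty()) {
        cerr << "Ignoring malformed config line: " << line << endl;
        continue; // or bail out, depending on how strict you want to be
    }
    port_to_command[port] = command;
}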
And, finally we come to this:
struct sockaddr* server_address;
socklen_t server_address_length = sizeof(server_address);
Pop quiz: what is sizeof(struct sockaddr *)? Well, it's a pointer, so it's going to be either 4 or 8 bytes, here.
bind(sockfd, server_address, server_address_length);
I'm fairly certain that struct sockaddr_in is larger than that. A quick check confirms that it's 16 bytes long. You were passing either 4 or 8 bytes, as the size of a 16-byte structure.
You had a two-fer here. Getting the size wrong, and failing to check the error code returned by bind(), so you remained completely unaware that the system call was always failing.
You can't assume that a system call will always succeed. Whether it's bind(), socket(), connect(), or accept(). Every system call can fail. Always check the return value from every system call. It might be tedious, or boring, to check the return value of a system call, but it must be done. If you did that, you would've caught the initial bug, with the wrong sizeof().
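A sketch of the corrected bind() call for that loop (my code, not the answerer's): pass the size of the structure itself and check the result.
struct sockaddr_in server_address_internet;
bzero((char *)&server_address_internet, sizeof(server_address_internet));
server_address_internet.sin_family = AF_INET;
server_address_internet.sin_addr.s_addr = INADDR_ANY;
server_address_internet.sin_port = htons(port);

if (bind(sockfd, (struct sockaddr *)&server_address_internet,
         sizeof(server_address_internet)) < 0) {   // 16 bytes, not the size of a pointer
    cerr << "bind() failed";
    exit(EXIT_FAILURE);
}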

Receiving UDP Packets Asynchronously From Multiple File Descriptors

I have a question about using fcntl and sigaction to receive UDP packets asynchronously. In my program I have two sources of UDP traffic that I would like to monitor. I have set up two sockets for the traffic and used this tutorial to make each file descriptor trigger a sigaction whenever I receive a UDP packet.
This works fine with only one source, but when I add the other source, only one of the handlers is triggered whenever either file descriptor receives a packet.
Here is a short program demonstrating the behavior:
#ifndef _GNU_SOURCE
#define _GNU_SOURCE
#endif
#include <stdio.h>
#include <unistd.h>
#include <string.h>
#include <fcntl.h>
#include <signal.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <netinet/in.h>
#include <arpa/inet.h>
int done;
int a_fd;
int b_fd;
int recv_dgram(int fd, char* dgram, int size)
{
struct sockaddr_in addr;
int fromlen = sizeof(addr);
return recvfrom(fd, dgram, size, 0, (struct sockaddr*)&addr, (socklen_t*)&fromlen);
}
void a_handler(int signum)
{
char dgram[256];
int size = recv_dgram(a_fd, dgram, 256);
printf("a recieve size: %d\n", size);
}
void b_handler(int signum)
{
char dgram[256];
int size = recv_dgram(b_fd, dgram, 256);
printf("b recieve size: %d\n", size);
}
void sig_handle(int signum)
{
done = 1;
}
int init_fd(int port, const char* group, const char* interface)
{
int fd = socket(AF_INET, SOCK_DGRAM, 0);
if(fd < 0) {
return -1;
}
int reuse = 1;
if(setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, (char*)&reuse, sizeof(reuse)) < 0) {
close(fd);
return -1;
}
if(fcntl(fd, F_SETFL, O_NONBLOCK) < 0) {
close(fd);
return -1;
}
struct sockaddr_in addr;
memset((char*)&addr, 0, sizeof(addr));
addr.sin_family = AF_INET;
addr.sin_port = htons(port);
addr.sin_addr.s_addr = INADDR_ANY;
if(bind(fd, (struct sockaddr*)&addr, sizeof(addr))) {
close(fd);
return -1;
}
struct ip_mreq mcast_group;
mcast_group.imr_multiaddr.s_addr = inet_addr(group);
mcast_group.imr_interface.s_addr = inet_addr(interface);
if(setsockopt(fd, IPPROTO_IP, IP_ADD_MEMBERSHIP, (char*)&mcast_group, sizeof(mcast_group))) {
close(fd);
return -1;
}
return fd;
}
int main(int argc, const char* argv[])
{
done = 0;
signal(SIGINT, sig_handle);
signal(SIGTERM, sig_handle);
// make sockets and sigactions
a_fd = init_fd([a port], [a multicast group], [a interface]);
if(a_fd < 0) {
return -1;
}
pid_t pid = getpid();
int a_flags = fcntl(a_fd, F_GETFL);
fcntl(a_fd, F_SETFL, a_flags | O_ASYNC);
struct sigaction a_sa;
a_sa.sa_flags = 0;
a_sa.sa_handler = a_handler;
sigemptyset(&a_sa.sa_mask);
sigaction(SIGIO, &a_sa, NULL);
fcntl(a_fd, F_SETOWN, pid);
fcntl(a_fd, F_SETSIG, SIGIO);
b_fd = init_fd([b port], [b multicast group], [b interface]);
if(b_fd < 0) {
close(a_fd);
return -1;
}
int b_flags = fcntl(b_fd, F_GETFL);
fcntl(b_fd, F_SETFL, b_flags | O_ASYNC);
struct sigaction b_sa;
b_sa.sa_flags = 0;
b_sa.sa_handler = b_handler;
sigemptyset(&b_sa.sa_mask);
sigaction(SIGIO, &b_sa, NULL);
fcntl(b_fd, F_SETOWN, pid);
fcntl(b_fd, F_SETSIG, SIGIO);
printf("start\n");
while(!done) { pause(); }
printf("done\n");
close(a_fd);
close(b_fd);
return 0;
}
I can compile this with (you can compile using gcc too):
g++ -c test.cpp
g++ -o test test.o
I'm using g++ 4.6.3 on Ubuntu 12.04 LTS.
When I run this program with two sources of UDP data, b_handler gets triggered when either file descriptors has a packet available. So it will print "b received size: -1" whenever a_handler should receive a packet. a_handler never gets called.
I suspect that this is because getpid() will return the same value for both of them, so one of the sigaction handlers will be overwritten.
Is there any way I can have these two handlers trigger independent of each other?
Thanks for the help.
Use two different signals, say SIGIO and SIGUSR1.
fcntl(descriptor, F_SETSIG, signal_desired);
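For example, a hedged sketch of how the second descriptor's setup could change (my code, not the answerer's; F_SETSIG is Linux-specific and needs _GNU_SOURCE, which the program already defines):
struct sigaction b_sa;
b_sa.sa_flags = 0;
b_sa.sa_handler = b_handler;
sigemptyset(&b_sa.sa_mask);
sigaction(SIGUSR1, &b_sa, NULL);   // b gets its own signal instead of sharing SIGIO

fcntl(b_fd, F_SETOWN, getpid());
fcntl(b_fd, F_SETSIG, SIGUSR1);    // deliver SIGUSR1 when b_fd becomes readable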

How to create a single instance application in C or C++

What would be your suggestion in order to create a single instance application, so that only one process is allowed to run at a time? File lock, mutex or what?
A good way is:
#include <sys/file.h>
#include <errno.h>
int pid_file = open("/var/run/whatever.pid", O_CREAT | O_RDWR, 0666);
int rc = flock(pid_file, LOCK_EX | LOCK_NB);
if(rc) {
if(EWOULDBLOCK == errno)
; // another instance is running
}
else {
// this is the first instance
}
Note that locking allows you to ignore stale pid files (i.e. you don't have to delete them). When the application terminates for any reason the OS releases the file lock for you.
Pid files are not terribly useful because they can be stale (the file exists but the process does not). Hence, the application executable itself can be locked instead of creating and locking a pid file.
A more advanced method is to create and bind a unix domain socket using a predefined socket name. Bind succeeds for the first instance of your application. Again, the OS unbinds the socket when the application terminates for any reason. When bind() fails another instance of the application can connect() and use this socket to pass its command line arguments to the first instance.
Here is a solution in C++. It uses the socket recommendation of Maxim. I like this solution better than the file-based locking solution, because file-based locking fails if the process crashes without deleting the lock file; another user will then not be able to delete the file and lock it. The socket is automatically released when the process exits.
Usage:
int main()
{
SingletonProcess singleton(5555); // pick a port number to use that is specific to this app
if (!singleton())
{
cerr << "process running already. See " << singleton.GetLockFileName() << endl;
return 1;
}
... rest of the app
}
Code:
#include <netinet/in.h>
class SingletonProcess
{
public:
SingletonProcess(uint16_t port0)
: socket_fd(-1)
, rc(1)
, port(port0)
{
}
~SingletonProcess()
{
if (socket_fd != -1)
{
close(socket_fd);
}
}
bool operator()()
{
if (socket_fd == -1 || rc)
{
socket_fd = -1;
rc = 1;
if ((socket_fd = socket(AF_INET, SOCK_DGRAM, 0)) < 0)
{
throw std::runtime_error(std::string("Could not create socket: ") + strerror(errno));
}
else
{
struct sockaddr_in name;
name.sin_family = AF_INET;
name.sin_port = htons (port);
name.sin_addr.s_addr = htonl (INADDR_ANY);
rc = bind (socket_fd, (struct sockaddr *) &name, sizeof (name));
}
}
return (socket_fd != -1 && rc == 0);
}
std::string GetLockFileName()
{
return "port " + std::to_string(port);
}
private:
int socket_fd = -1;
int rc;
uint16_t port;
};
For windows, a named kernel object (e.g. CreateEvent, CreateMutex). For unix, a pid-file - create a file and write your process ID to it.
You can create an "abstract namespace" AF_UNIX socket. This is completely Linux-specific, but has the advantage that no filesystem entry is actually created.
Read the man page for unix(7) for more info.
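A hedged sketch of that idea (my code; the name and error handling are illustrative only): a leading NUL byte in sun_path puts the name in the abstract namespace, so nothing appears on disk and the kernel releases the name when the process exits.
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>

int try_become_single_instance(void)
{
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd == -1)
        return -1;

    struct sockaddr_un addr;
    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    // addr.sun_path[0] stays '\0'; the bytes after it form the abstract name
    strncpy(addr.sun_path + 1, "my-app-single-instance", sizeof(addr.sun_path) - 2);

    // bind() fails with EADDRINUSE if another instance already owns the name
    return bind(fd, (struct sockaddr *)&addr, sizeof(addr));
}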
Avoid file-based locking
It is always good to avoid a file based locking mechanism to implement the singleton instance of an application. The user can always rename the lock file to a different name and run the application again as follows:
mv lockfile.pid lockfile1.pid
where lockfile.pid is the lock file whose existence is checked before the application runs.
So, it is always preferable to use a locking scheme on an object visible only to the kernel. Anything that has to do with the filesystem is not reliable.
So the best option would be to bind to an inet socket. Note that unix domain sockets reside in the filesystem and are not reliable.
Alternatively, you can also do it using DBUS.
It seems not to have been mentioned yet - it is possible to create a mutex in shared memory, but it needs to be marked as process-shared via its attributes (not tested):
pthread_mutexattr_t attr;
pthread_mutexattr_init(&attr);
pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
pthread_mutex_t *mutex = shmat(SHARED_MEMORY_ID, NULL, 0);
pthread_mutex_init(mutex, &attr);
There are also shared-memory (System V) semaphores (but I failed to find out how to lock one; see the sketch below):
int sem_id = semget(SHARED_MEMORY_KEY, 1, 0);
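For what it's worth, a hedged sketch of the locking step (my code, assuming the semaphore was created elsewhere and initialized to 1 with semctl(..., SETVAL, ...)): decrementing it with semop() takes the lock, and SEM_UNDO makes the kernel release it if the process dies.
#include <sys/ipc.h>
#include <sys/sem.h>

struct sembuf lock_op = { 0, -1, SEM_UNDO | IPC_NOWAIT };
if (semop(sem_id, &lock_op, 1) == -1) {
    // EAGAIN here means another instance already holds the semaphore
}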
No one has mentioned it, but sem_open() creates a real named semaphore under modern POSIX-compliant OSes. If you give a semaphore an initial value of 1, it becomes a mutex (as long as it is strictly released only if a lock was successfully obtained).
With several sem_open()-based objects, you can create all of the common equivalent Windows named objects - named mutexes, named semaphores, and named events. Named events with "manual" set to true is a bit more difficult to emulate (it requires four semaphore objects to properly emulate CreateEvent(), SetEvent(), and ResetEvent()). Anyway, I digress.
Alternatively, there is named shared memory. You can initialize a pthread mutex with the "shared process" attribute in named shared memory and then all processes can safely access that mutex object after opening a handle to the shared memory with shm_open()/mmap(). sem_open() is easier if it is available for your platform (if it isn't, it should be for sanity's sake).
Regardless of the method you use, to test for a single instance of your application, use the trylock() variant of the wait function (e.g. sem_trywait()). If the process is the only one running, it will successfully lock the mutex. If it isn't, it will fail immediately.
Don't forget to unlock and close the mutex on application exit.
It will depend on which problem you want to avoid by forcing your application to have only one instance and the scope on which you consider instances.
For a daemon — the usual way is to have a /var/run/app.pid file.
For user application, I've had more problems with applications which prevented me to run them twice than with being able to run twice an application which shouldn't have been run so. So the answer on "why and on which scope" is very important and will probably bring answer specific on the why and the intended scope.
Here is a solution based on sem_open
/*
*compile with :
*gcc single.c -o single -pthread
*/
/*
* run multiple instance on 'single', and check the behavior
*/
#include <stdio.h>
#include <fcntl.h>
#include <sys/stat.h>
#include <semaphore.h>
#include <unistd.h>
#include <errno.h>
#define SEM_NAME "/mysem_911"
int main()
{
sem_t *sem;
int rc;
sem = sem_open(SEM_NAME, O_CREAT, S_IRWXU, 1);
if(sem==SEM_FAILED){
    printf("sem_open: failed errno:%d\n", errno);
    return 1;
}
rc=sem_trywait(sem);
if(rc == 0){
printf("Obtained lock !!!\n");
sleep(10);
//sem_post(sem);
sem_unlink(SEM_NAME);
}else{
printf("Lock not obtained\n");
}
}
One of the comments on a different answer says "I found sem_open() rather lacking". I am not sure about the specifics of what is lacking.
Based on the hints in Maxim's answer, here is my POSIX solution for a dual-role daemon (i.e. a single application that can act as daemon and as a client communicating with that daemon). This scheme has the advantage of providing an elegant solution when the instance started first should be the daemon and all following executions should just offload their work to that daemon. It is a complete example but lacks a lot of things a real daemon should do (e.g. using syslog for logging, fork to put itself into the background correctly, dropping privileges, etc.), but it is already quite long and is fully working as is. I have only tested this on Linux so far, but IIRC it should all be POSIX-compatible.
In the example, each client sends the integer given as its first command line argument (parsed by atoi) over the socket to the daemon, which prints it to stdout. With this kind of socket it is also possible to transfer arrays, structs and even file descriptors (see man 7 unix).
#include <stdio.h>
#include <stddef.h>
#include <stdbool.h>
#include <stdlib.h>
#include <unistd.h>
#include <errno.h>
#include <signal.h>
#include <sys/socket.h>
#include <sys/un.h>
#define SOCKET_NAME "/tmp/exampled"
static int socket_fd = -1;
static bool isdaemon = false;
static bool run = true;
/* returns
* -1 on errors
* 0 on successful server bindings
* 1 on successful client connects
*/
int singleton_connect(const char *name) {
int len, tmpd;
struct sockaddr_un addr = {0};
if ((tmpd = socket(AF_UNIX, SOCK_DGRAM, 0)) < 0) {
printf("Could not create socket: '%s'.\n", strerror(errno));
return -1;
}
/* fill in socket address structure */
addr.sun_family = AF_UNIX;
strcpy(addr.sun_path, name);
len = offsetof(struct sockaddr_un, sun_path) + strlen(name);
int ret;
unsigned int retries = 1;
do {
/* bind the name to the descriptor */
ret = bind(tmpd, (struct sockaddr *)&addr, len);
/* if this succeeds there was no daemon before */
if (ret == 0) {
socket_fd = tmpd;
isdaemon = true;
return 0;
} else {
if (errno == EADDRINUSE) {
ret = connect(tmpd, (struct sockaddr *) &addr, sizeof(struct sockaddr_un));
if (ret != 0) {
if (errno == ECONNREFUSED) {
printf("Could not connect to socket - assuming daemon died.\n");
unlink(name);
continue;
}
printf("Could not connect to socket: '%s'.\n", strerror(errno));
continue;
}
printf("Daemon is already running.\n");
socket_fd = tmpd;
return 1;
}
printf("Could not bind to socket: '%s'.\n", strerror(errno));
continue;
}
} while (retries-- > 0);
printf("Could neither connect to an existing daemon nor become one.\n");
close(tmpd);
return -1;
}
static void cleanup(void) {
if (socket_fd >= 0) {
if (isdaemon) {
if (unlink(SOCKET_NAME) < 0)
printf("Could not remove FIFO.\n");
} else
close(socket_fd);
}
}
static void handler(int sig) {
run = false;
}
int main(int argc, char **argv) {
switch (singleton_connect(SOCKET_NAME)) {
case 0: { /* Daemon */
struct sigaction sa;
sa.sa_handler = &handler;
sigemptyset(&sa.sa_mask);
if (sigaction(SIGINT, &sa, NULL) != 0 || sigaction(SIGQUIT, &sa, NULL) != 0 || sigaction(SIGTERM, &sa, NULL) != 0) {
printf("Could not set up signal handlers!\n");
cleanup();
return EXIT_FAILURE;
}
struct msghdr msg = {0};
struct iovec iovec;
int client_arg;
iovec.iov_base = &client_arg;
iovec.iov_len = sizeof(client_arg);
msg.msg_iov = &iovec;
msg.msg_iovlen = 1;
while (run) {
int ret = recvmsg(socket_fd, &msg, MSG_DONTWAIT);
if (ret != sizeof(client_arg)) {
if (errno != EAGAIN && errno != EWOULDBLOCK) {
printf("Error while accessing socket: %s\n", strerror(errno));
exit(1);
}
printf("No further client_args in socket.\n");
} else {
printf("received client_arg=%d\n", client_arg);
}
/* do daemon stuff */
sleep(1);
}
printf("Dropped out of daemon loop. Shutting down.\n");
cleanup();
return EXIT_FAILURE;
}
case 1: { /* Client */
if (argc < 2) {
printf("Usage: %s <int>\n", argv[0]);
return EXIT_FAILURE;
}
struct iovec iovec;
struct msghdr msg = {0};
int client_arg = atoi(argv[1]);
iovec.iov_base = &client_arg;
iovec.iov_len = sizeof(client_arg);
msg.msg_iov = &iovec;
msg.msg_iovlen = 1;
int ret = sendmsg(socket_fd, &msg, 0);
if (ret != sizeof(client_arg)) {
if (ret < 0)
printf("Could not send device address to daemon: '%s'!\n", strerror(errno));
else
printf("Could not send device address to daemon completely!\n");
cleanup();
return EXIT_FAILURE;
}
printf("Sent client_arg (%d) to daemon.\n", client_arg);
break;
}
default:
cleanup();
return EXIT_FAILURE;
}
cleanup();
return EXIT_SUCCESS;
}
All credit goes to Mark Lakata. I merely did some very minor touch-up.
main.cpp
#include "singleton.hpp"
#include <iostream>
using namespace std;
int main()
{
SingletonProcess singleton(5555); // pick a port number to use that is specific to this app
if (!singleton())
{
cerr << "process running already. See " << singleton.GetLockFileName() << endl;
return 1;
}
// ... rest of the app
}
singleton.hpp
#include <netinet/in.h>
#include <unistd.h>
#include <cerrno>
#include <string>
#include <cstring>
#include <stdexcept>
using namespace std;
class SingletonProcess
{
public:
SingletonProcess(uint16_t port0)
: socket_fd(-1)
, rc(1)
, port(port0)
{
}
~SingletonProcess()
{
if (socket_fd != -1)
{
close(socket_fd);
}
}
bool operator()()
{
if (socket_fd == -1 || rc)
{
socket_fd = -1;
rc = 1;
if ((socket_fd = socket(AF_INET, SOCK_DGRAM, 0)) < 0)
{
throw std::runtime_error(std::string("Could not create socket: ") + strerror(errno));
}
else
{
struct sockaddr_in name;
name.sin_family = AF_INET;
name.sin_port = htons (port);
name.sin_addr.s_addr = htonl (INADDR_ANY);
rc = bind (socket_fd, (struct sockaddr *) &name, sizeof (name));
}
}
return (socket_fd != -1 && rc == 0);
}
std::string GetLockFileName()
{
return "port " + std::to_string(port);
}
private:
int socket_fd = -1;
int rc;
uint16_t port;
};
#include <windows.h>
int main(int argc, char *argv[])
{
// ensure only one running instance
HANDLE hMutexHandle = CreateMutex(NULL, TRUE, L"my.mutex.name");
if (GetLastError() == ERROR_ALREADY_EXISTS)
{
return 0;
}
// rest of the program
ReleaseMutex(hMutexHandle);
CloseHandle(hMutexHandle);
return 0;
}
On Windows you could also create a shared data segment and use an interlocked function to test for the first occurrence, e.g.
#include <Windows.h>
#include <stdio.h>
#include <conio.h>
#pragma data_seg("Shared")
volatile LONG lock = 0;
#pragma data_seg()
#pragma comment(linker, "/SECTION:Shared,RWS")
void main()
{
if (InterlockedExchange(&lock, 1) == 0)
printf("first\n");
else
printf("other\n");
getch();
}
I have just written one and tested it.
#define PID_FILE "/tmp/pidfile"
static void create_pidfile(void) {
int fd = open(PID_FILE, O_RDWR | O_CREAT | O_EXCL, 0);
close(fd);
}
int main(void) {
int fd = open(PID_FILE, O_RDONLY);
if (fd > 0) {
close(fd);
return 0;
}
// make sure only one instance is running
create_pidfile();
}
Just run this code in a separate thread:
void lock() {
while(1) {
ofstream closer("myapplock.locker", ios::trunc);
closer << "locked";
closer.close();
}
}
Run this as your main code:
int main() {
ifstream reader("myapplock.locker");
string s;
reader >> s;
if (s != "locked") {
//your code
}
return 0;
}