libev sets sockets to blocking with no timeout - c++

Rant: I really dislike boost::asio, so I've been looking at alternatives and came across libev, which seems simple enough for me but is doing a few things I cannot understand. If these are too many questions for one thread, please let me know.
1) I set the listening socket to NON_BLOCK, and I also set each accepted incoming connection to NON_BLOCK, yet somewhere in the code the socket(s) turn back into BLOCK.
Ex:
bool Server::Start()
{
    // Setup event loop
    loop = ev_default_loop(EVBACKEND_SELECT); //EVFLAG_AUTO ?
    // Create Socket
    sockfd = socket(PF_INET, SOCK_STREAM, 0);
    addr_len = sizeof(addr);
    // Set Socket to non blocking
    fcntl(sockfd, F_SETFL, fcntl(sockfd, F_GETFL) | O_NONBLOCK);
    if (fcntl(sockfd, F_GETFL) & O_NONBLOCK) std::cout << "Socket is NONBLOCK" << std::endl;
    else std::cout << "Socket is BLOCK" << std::endl;
    if (sockfd < 0) {
        std::cout << "ERROR opening socket" << std::endl;
        return false;
    }
    bzero((char *)&addr, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    addr.sin_addr.s_addr = INADDR_ANY;
    // Bind port to socket
    if (bind(sockfd, (struct sockaddr*)&addr, sizeof(addr)) != 0) {
        std::cout << "bind error" << std::endl;
        return false;
    }
    // Listen
    if (listen(sockfd, 2) < 0) {
        std::cout << "listen error" << std::endl;
        return false;
    }
    // Initialize and start a watcher to accept client requests
    ev_io_init(&w_accept, accept_cb, sockfd, EV_READ);
    ev_io_start(loop, &w_accept);
    return true;
}
I have tried to make the main loop also not to block:
void Server::MainLoop()
{
    // Start infinite loop
    while (1) {
        ev_loop(loop, EVLOOP_NONBLOCK);
    }
}
But it doesn't seem to have made a difference. PLEASE DO NOT redirect me to the documentation (the only available source of documentation on the internet); I have read it.
I do this for the client socket that has been accepted:
void accept_cb(struct ev_loop *loop, struct ev_io *watcher, int revents)
....
c->client_sd = accept(watcher->fd, (struct sockaddr *)&c->client_addr, &c->client_len);
....
ev_io *w_client = (struct ev_io*) malloc (sizeof(struct ev_io));
ev_io_init(w_client, read_cb, c->client_sd, EV_READ);
ev_io_start(loop, w_client);
fcntl(watcher->fd, F_SETFL, fcntl(watcher->fd, F_GETFL) | O_NONBLOCK);
Yet every time my read callback is executed, the socket has magically been set to BLOCK.
2) I have tried setting a timeout for the socket:
struct timeval timeout;
timeout.tv_sec = 10;
timeout.tv_usec = 0;
if (setsockopt(sockfd, SOL_SOCKET, SO_RCVTIMEO, (char *)&timeout, sizeof(timeout)) < 0)
    error("setsockopt failed\n");
if (setsockopt(sockfd, SOL_SOCKET, SO_SNDTIMEO, (char *)&timeout, sizeof(timeout)) < 0)
    error("setsockopt failed\n");
(Taken from here: this question)
It simply doesn't work. Is this because the sockets are reset to BLOCKing mode?
3) I have seen a C++ wrapper for libev. I absolutely hate the fact that I have to make the callbacks static functions; it ruins everything for me. Yet all the examples I have seen use:
signal.loop.break_loop();
and
loop.run(0);
which, funnily enough, produces:
error: ‘struct ev::loop_ref’ has no member named ‘break_loop’
error: ‘struct ev::default_loop’ has no member named ‘run’
on Debian Squeeze.
So, what I am asking is:
What, who, where is the socket changed from NON_BLOCK to BLOCK?
How (if at all) can I set a timeout for the socket (blocking or non-blocking)?
What is wrong with ev++.h and why are those nice people using the wrappers I can't use?
Please bear in mind that I can use the sockets to read and send data, but in a blocking manner, without timeouts. Furthermore, as this is a server, I NEED to keep the code in classes, as I have to save messages per connected client. Making these static or non-class methods simply ruins it, or forces me to take a very different approach.
PS: Any alternatives to libev?

You aren't setting the client FD to non-blocking mode; you are setting the listening socket's FD. In accept_cb, the fcntl call operates on watcher->fd, which is the listening socket, not on the freshly accepted c->client_sd.
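A minimal sketch of the fix inside accept_cb, reusing the names from the question's code:

c->client_sd = accept(watcher->fd, (struct sockaddr *)&c->client_addr, &c->client_len);
// Make the accepted client descriptor non-blocking, not watcher->fd
// (which is the listening socket the watcher was registered on).
if (c->client_sd >= 0)
    fcntl(c->client_sd, F_SETFL, fcntl(c->client_sd, F_GETFL) | O_NONBLOCK);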

Related

how to deal with multiple clients in c++ socket problem?

I need some help with a socket program with multiple clients and one server. To simplify, I create
3 socket clients
1 socket server
For each client, it opens a new connection for sending a new message and closes the connection after a response is received.
For the server, it does not need to deal with connections concurrently; it can deal with messages one by one.
Here is my code (runnable); compile it with /usr/bin/g++ mycode.cpp -g -lpthread -lrt -Wall -o mycode
#include <iostream>
#include <arpa/inet.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>
#include <unordered_map>
#include <thread>

using namespace std;

void Warning(string msg) { std::cout << msg << std::endl; }

namespace mySocket {
class Memcached {
public:
    // start a server
    static void controller(int port=7111) { std::thread(server, port).detach(); }

    // open a new connection to send a message:
    // 1. open a connection
    // 2. send the message
    // 3. read the response
    // 4. close the connection
    std::string sendMessage(string msg, string host, int port=7111) {
        int sock = 0, client_fd;
        struct sockaddr_in serv_addr;
        char buffer[1024] = { 0 };
        if ((sock = socket(AF_INET, SOCK_STREAM, 0)) < 0) {
            std::cout << "Socket creation error, msg: " << msg << ", host: " << host << ", port: " << port << std::endl;
            exit(1);
        }
        serv_addr.sin_family = AF_INET;
        serv_addr.sin_port = htons(port);
        if (inet_pton(AF_INET, host.c_str(), &serv_addr.sin_addr) <= 0) {
            std::cout << "\nInvalid address / Address not supported, msg: " << msg << ", host: " << host << ", port: " << port << std::endl;
            exit(1);
        }
        while ((client_fd = connect(sock, (struct sockaddr*)&serv_addr, sizeof(serv_addr))) < 0) { sleep(10*1000); }
        std::cout << "client sends a message:" << msg << ", msg size:" << msg.size() << std::endl;
        send(sock, msg.c_str(), msg.size(), 0);
        read(sock, buffer, 1024);
        close(client_fd);
        return std::string(buffer, strlen(buffer));
    }

private:
    // start a server
    // 1. open a file descriptor
    // 2. listen on the fd with queue size 10
    // 3. accept one connection at a time
    // 4. deal with the message in the connection
    // 5. accept the next connection
    // 6. repeat from step 3
    static void server(int port) {
        int server_fd, new_socket;
        struct sockaddr_in address;
        int opt = 1;
        int addrlen = sizeof(address);
        char buffer[1024] = { 0 };
        unordered_map<string,string> data;
        if ((server_fd = socket(AF_INET, SOCK_STREAM, 0)) == 0) {
            Warning("socket failed"); exit(1);
        }
        if (setsockopt(server_fd, SOL_SOCKET, SO_REUSEADDR | SO_REUSEPORT, &opt, sizeof(opt))) {
            Warning("setsockopt failed"); exit(1);
        }
        address.sin_family = AF_INET;
        address.sin_addr.s_addr = INADDR_ANY;
        address.sin_port = htons(port);
        if (bind(server_fd, (struct sockaddr*)&address, sizeof(address)) < 0) {
            Warning("bind failed"); exit(1);
        }
        // the queue size is 10 > 3
        if (listen(server_fd, 10) < 0) {
            Warning("listen failed"); exit(1);
        }
        while (1)
        {
            if ((new_socket = accept(server_fd, (struct sockaddr*)&address, (socklen_t*)&addrlen)) < 0) {
                std::cout << "accept failed"; exit(1);
            }
            memset(&buffer, 0, sizeof(buffer)); // clear the buffer
            read(new_socket, buffer, 1024);
            std::string msg = std::string(buffer, strlen(buffer));
            if (msg.size() == 0) {
                std::cout << "I can't believe it" << std::endl;
            }
            std::cout << "received msg from the client:" << msg << ",msg size:" << msg.size() << std::endl;
            std::string results = "response from the server:[" + msg + "]";
            send(new_socket, results.c_str(), results.length(), 0);
            //usleep(10*1000);
        }
        if (close(new_socket) < 0) {
            std::cout << "close error" << std::endl;
        }
        shutdown(server_fd, SHUT_RDWR);
    }
};
}

void operation(int client_id) {
    auto obj = new mySocket::Memcached();
    for (int i = 0; i < 10; i++) {
        int id = client_id*100 + i;
        std::cout << obj->sendMessage(std::to_string(id), "127.0.0.1", 7111) << std::endl << std::endl;
    }
}

int main(int argc, char const* argv[]) {
    // start a socket server
    mySocket::Memcached::controller();
    // start 3 socket clients
    std::thread t1(operation, 1);
    std::thread t2(operation, 2);
    std::thread t3(operation, 3);
    t1.join();
    t2.join();
    t3.join();
}
In the code above, the client always sends a message with a length of 3. However, the server can receive messages with a length of 0, which causes further errors.
I've been struggling with this for several days and can't figure out why it happens. I noticed that:
if I add a short sleep inside the server's while loop, the problem is solved (uncomment usleep(10*1000); in the code), or
if I only use one client, the problem is also solved.
Any thoughts would help.
You are using TCP sockets. You may want to use some application-level protocol like HTTP or WebSockets instead; that will be much easier, because you will not need to worry about how a message is sent or received and in which sequence. If you have to stick with TCP sockets, you first have to understand a few things:
There are two types of TCP sockets you can use: non-blocking and blocking IO (input/output). You are currently using blocking IO. That IO will sometimes block, and you won't be able to do anything with the sockets. With blocking IO, this can be worked around by using one socket per thread on the server side. It's not efficient, but it's relatively easy compared to non-blocking IO. Non-blocking IO doesn't wait for anything: while in blocking IO you wait for data, in non-blocking IO you set up something like events or callbacks that fire when there is data. You should probably read up on both types of IO.
In your server function, it would be better to listen for incoming connections in one thread and, when there is an incoming connection, move it to another thread and function that handles everything else, as sketched below. This may solve your problem with multiple clients connecting at the same time.
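A rough sketch of that hand-off (illustrative only; handleClient is a hypothetical per-connection handler, and server_fd is assumed to be bound and listening as in your code):

#include <sys/socket.h>
#include <unistd.h>
#include <thread>

void handleClient(int fd) {
    char buf[1024];
    ssize_t n;
    while ((n = read(fd, buf, sizeof(buf))) > 0)
        write(fd, buf, n); // echo back; replace with the real message handling
    close(fd);
}

void acceptLoop(int server_fd) {
    while (true) {
        int client = accept(server_fd, nullptr, nullptr);
        if (client < 0) continue;                    // transient error: try again
        std::thread(handleClient, client).detach();  // the worker owns the fd now
    }
}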
In the operation function, instead of allocating memory with a raw pointer, use automatic storage or smart pointers to avoid memory leaks. If you don't want to, then at the very least do delete obj; at the end of the function.
And one last thing: you can use a TCP socket wrapper like sockpp to make things a lot easier. You will have everything TCP sockets have, but in C++ style and a little easier to understand and maintain. If you can't use an application-level protocol, I strongly suggest you at least use some wrapper.
Update
As commenters have pointed out, there are more things you need to know:
TCP sockets are streams. This means that if you send a message of 1024 bytes, it can be divided into several TCP data packets, and you can't know whether it will be divided, or how many packets the other side will receive. You have to read in a while loop using recv() and wait for data. There are some tricks which can help you receive data properly (see the sketch after this list):
You can send the length of your message first, so the other side knows how many bytes it needs to receive.
You can place a terminating symbol or sequence of terminating symbols at the end of your message and read until they are received. This is a little risky, because there is a chance you never receive those symbols and keep reading past the message.
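A minimal sketch of the length-prefix idea (recvAll is a hypothetical helper, not part of your code); it loops because recv() may return fewer bytes than requested:

#include <sys/types.h>
#include <sys/socket.h>

bool recvAll(int fd, char* buf, size_t len) {
    size_t got = 0;
    while (got < len) {
        ssize_t n = recv(fd, buf + got, len - got, 0);
        if (n <= 0) return false;  // error or peer closed the connection
        got += static_cast<size_t>(n);
    }
    return true;
}

Usage: first recvAll() a fixed-size header holding the body length (e.g. a 4-byte network-order integer decoded with ntohl), then recvAll() exactly that many body bytes.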
You have to join the client threads only once you know the server has already started and is listening for incoming connections. You can use a variable as a flag for this, but note that you have to pay a lot of attention when reading and writing a variable from two or more threads. For that you can use a mutex, a mechanism that lets you safely access shared state from several threads; a minimal sketch follows.
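One way to implement that flag, as a sketch (the names are illustrative, not from your code), using a mutex and a condition variable:

#include <condition_variable>
#include <mutex>

std::mutex m;
std::condition_variable cv;
bool serverReady = false;

// Called by the server thread right after listen() succeeds:
void signalReady() {
    { std::lock_guard<std::mutex> lock(m); serverReady = true; }
    cv.notify_all();
}

// Called by each client thread before its first connect():
void waitUntilReady() {
    std::unique_lock<std::mutex> lock(m);
    cv.wait(lock, [] { return serverReady; });
}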

Specifying timeout option with setsockopt() results in subsequent listen error

Right now, I am trying to specify options with setsockopt() using the following code:
// bind socket
// Use setsockopt() function to make sure the port is not in use
int yes = 1;
setsockopt(TCPSocket, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof(int));
setsockopt(TCPSocket, SOL_SOCKET, SO_RCVTIMEO, (char *)&tv, sizeof(struct timeval));
status = bind(TCPSocket, host_info_list->ai_addr, host_info_list->ai_addrlen);
if (status == -1) std::cout << "bind error" << std::endl;
// listen for connections
status = listen(TCPSocket, 5);
if (status == -1) std::cout << "listen error" << std::endl;
int new_sd;
struct sockaddr_storage their_addr;
socklen_t addr_size = sizeof(their_addr);
new_sd = accept(TCPSocket, (struct sockaddr *)&their_addr, &addr_size);
if (new_sd == -1) std::cout << "listen error" << std::endl;
Note that tv is an already-specified timeval.
When I make only the first setsockopt() call, everything works fine. However, with the addition of the second (which does not return any errors), I encounter the second "listen error" specified in the code. I'm not sure why setting the timeout value affects this; can someone explain?
I do not take credit for the code specified; it is modified from the code presented in the tutorial here: http://codebase.eu/tutorial/linux-socket-programming-c/
If you look at a TCP state diagram like this one, you'll see there is a state called TIME_WAIT that is entered when actively closing a socket. This state can take some time to end, up to four minutes according to RFC 793.
While a socket is in the TIME_WAIT state, you cannot bind to an interface using the same address-port pair as the socket that is in the wait state. Setting the SO_REUSEADDR flag on a socket enables other sockets to bind to the address while the current socket (with the flag set) is in the TIME_WAIT state.
The SO_REUSEADDR option is most useful for server (passive, listening) sockets.
As for your problem: after each call to setsockopt, check what it returns, and if it's -1, check errno to see what went wrong. You can use perror or strerror to print, or to get a printable string for, the error, like
if (setsockopt(TCPSocket, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof(int)) < 0)
{
    std::cerr << "Error setting the SO_REUSEADDR: " << strerror(errno) << '\n';
    // Do something appropriate
}
Joachim's solution did a great job of answering my initial question and explaining setsockopt(). To answer my own question after realizing the issue was further down in the code: the timeout affects the server's ability to wait for a connection, because with SO_RCVTIMEO set on the listening socket, accept() itself times out. Say the timeout is only 10 ms; the server must be started, then the client, and a connection must be established within that window. That wasn't happening in my case, hence the resulting error.
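A sketch of how the timed-out accept() can be detected and retried instead of treated as fatal (assuming Linux behaviour, where accept() on a socket with SO_RCVTIMEO fails with EAGAIN/EWOULDBLOCK on timeout):

#include <cerrno>
#include <iostream>
#include <sys/socket.h>

int acceptWithRetry(int listen_fd) {
    for (;;) {
        int fd = accept(listen_fd, nullptr, nullptr);
        if (fd >= 0) return fd;                       // got a connection
        if (errno == EAGAIN || errno == EWOULDBLOCK) {
            std::cerr << "accept timed out, retrying\n";
            continue;                                  // timeout: just try again
        }
        return -1;                                     // a real error
    }
}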

TCP proxy - mutex

I want to write a simple TCP proxy in C++ for university. The proxy works with two threads: one reads from the source port and writes to the destination port, and the other thread does the same in the other direction. The aim is to read and manipulate the packets in the future. When I use a mutex to lock a port for read and write on the same port, I get packet loss. Can you help me locate the problem? I have been trying for a long time now.
thread1 = 0;
thread2 = 0;
// Client
struct sockaddr_in address;
int size;
if ((create_socket = socket(AF_INET, SOCK_STREAM, 0)) > 0)
    printf("Socket was created\n");
address.sin_family = AF_INET;
address.sin_port = htons(PORT);
inet_aton(IP, &address.sin_addr);
if (connect(create_socket, (struct sockaddr *)&address, sizeof(address)) == 0)
    printf("Connection to the server (%s) established\n", inet_ntoa(address.sin_addr));
// Server
socklen_t addrlen;
struct sockaddr_in address2;
const int y = 1;
if ((create_socket2 = socket(AF_INET, SOCK_STREAM, 0)) > 0)
    printf("Socket was created\n");
setsockopt(create_socket2, SOL_SOCKET, SO_REUSEADDR, &y, sizeof(int));
address2.sin_family = AF_INET;
address2.sin_addr.s_addr = INADDR_ANY;
address2.sin_port = htons(PORT2);
if (bind(create_socket2, (struct sockaddr *)&address2, sizeof(address2)) != 0) {
    printf("The port is not free - already in use!\n");
}
listen(create_socket2, 5);
addrlen = sizeof(struct sockaddr_in);
new_socket2 = accept(create_socket2, (struct sockaddr *)&address2, &addrlen);
if (new_socket2 > 0)
    printf("A client (%s) is connected ...\n", inet_ntoa(address2.sin_addr));
thread apm(apm_gcs);
thread gcs(gcs_apm);
apm.join();
gcs.join();
}
inline void apm_gcs()
{
    while (STOP == FALSE)
    {
        {
            lock_guard<mutex> lock(tcp60Mutex);
            res = read(create_socket, buffer2, sizeof(buffer2)); // returns after 5 chars have been input
        }
        {
            lock_guard<mutex> lock(tcp65Mutex);
            write(new_socket2, buffer2, res);
        }
    }
}
inline void gcs_apm()
{
    while (STOP == FALSE)
    {
        {
            lock_guard<mutex> lock(tcp65Mutex);
            res2 = read(new_socket2, buffer, sizeof(buffer)); // returns after 5 chars have been input
        }
        {
            lock_guard<mutex> lock(tcp60Mutex);
            write(create_socket, buffer, res2);
        }
    }
}
Thank you for your help.
Greetings,
Tobi
There are several things to improve.
First of all: it's not clear what exactly you want to protect. I could understand using one mutex to protect one buffer and the other mutex for the other buffer, so each buffer is only ever accessed by one thread at a time. However, that is not what happens: both threads can read and write the same buffer at the same time. Instead, each mutex protects a socket against simultaneous read and write, which is pointless, because sockets handle that perfectly well. You can read and write on the same socket at the same time; sockets have been used that way for more than 30 years.
Once that is changed and your mutexes protect the buffers, you will still run into blocking, though less often. You will find that a thread tries to read or write data while none is available, or that the socket's send buffer is full (which happens if you try to write large amounts of data quickly), and it takes time to transfer the data.
This can be solved with select() or maybe with poll(). Thus the way to go is:
Each thread uses select() or poll() to find out whether it can read or write data. Only if it can does it lock the mutex for the buffer, read or write the data (which won't block, since select() or poll() just made sure of that), and then release the mutex. A sketch of one direction follows.
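For illustration, one direction of the proxy written this way (a sketch, not the original code; each direction keeps its own buffer, so no mutex is needed at all; src and dst are the two sockets):

#include <sys/select.h>
#include <unistd.h>

void forward(int src, int dst) {
    char buf[4096];
    for (;;) {
        fd_set readfds;
        FD_ZERO(&readfds);
        FD_SET(src, &readfds);
        if (select(src + 1, &readfds, nullptr, nullptr, nullptr) <= 0)
            break;                                // select error
        ssize_t n = read(src, buf, sizeof(buf));  // won't block: select said readable
        if (n <= 0)
            break;                                // error or connection closed
        if (write(dst, buf, n) != n)
            break;                                // simplistic: assumes a full write
    }
}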

Improving port scanner performance

So I made a port scanner in C++ this morning and it seems to work alright; I'm just having one rather annoying issue with it: whenever I use it to scan an IP over the network, it takes a good 10-20 seconds PER port.
It seems like the connect() method is what's taking it so long.
Now aside from multi-threading, which I'm sure will speed up the process but not by much, how could I make this faster? Here is the section of code that does the scanning:
for (i = 0; i < a_size(port_no); i++)
{
    sock = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    target.sin_family = AF_INET;
    target.sin_port = htons(port_no[i]);
    target.sin_addr.s_addr = inet_addr(argv[1]);
    if (connect(sock, (SOCKADDR *)&target, sizeof(target)) != SOCKET_ERROR)
        cout << "Port: " << port_no[i] << " - open" << endl;
    else
        cout << "Port: " << port_no[i] << " - closed" << endl;
    closesocket(sock);
}
If you need more, let me know.
Oh, also, I am using the winsock2.h header. Is it because of this that it's so slow?
When you call connect(2), the OS initiates the three-way handshake by sending a SYN packet to the other peer. If no response is received, it waits a little bit and sends a few more SYN packets. If no response is still received after a given timeout, the operation fails and connect(2) returns with the error code ETIMEDOUT.
Ordinarily, if a peer is up but not accepting TCP connections on a given port, it will reply to any SYN packets with a RST packet. This will cause connect(2) to fail much more quickly (one network round-trip time) with the error ECONNREFUSED. However, if the peer has a firewall set up, it'll just ignore your SYN packets and won't send those RST packets, which will cause connect(2) to take a long time to fail.
So, if you want to avoid waiting for that timeout for every port, you need to make multiple connections in parallel. You can do this with multithreading (one synchronous connect(2) call per thread), but this doesn't scale well, since threads take up a fair amount of resources.
The better method is to use non-blocking sockets. To make a socket non-blocking, call fcntl(2) with F_SETFL and the O_NONBLOCK flag. Then connect(2) will return immediately with EINPROGRESS (on Winsock, the equivalent is WSAEWOULDBLOCK), at which point you can use select(2) or poll(2) and friends to monitor a large number of sockets at once.
Try creating an array of non-blocking sockets to queue up a bunch of connection attempts at once.
Read about it here
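A POSIX sketch of that non-blocking connect (illustrative only; the asker's Winsock version appears below). It starts the connect, waits for writability with a timeout, then reads SO_ERROR to learn the outcome:

#include <cerrno>
#include <fcntl.h>
#include <netinet/in.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

bool portOpen(const sockaddr_in& target, int timeout_ms) {
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    if (sock < 0) return false;
    fcntl(sock, F_SETFL, fcntl(sock, F_GETFL) | O_NONBLOCK);
    bool open = false;
    if (connect(sock, (const sockaddr*)&target, sizeof(target)) == 0) {
        open = true;                                  // connected immediately
    } else if (errno == EINPROGRESS) {
        fd_set wfds;
        FD_ZERO(&wfds);
        FD_SET(sock, &wfds);
        timeval tv = { timeout_ms / 1000, (timeout_ms % 1000) * 1000 };
        if (select(sock + 1, nullptr, &wfds, nullptr, &tv) > 0) {
            int err = 0;
            socklen_t len = sizeof(err);
            getsockopt(sock, SOL_SOCKET, SO_ERROR, &err, &len);
            open = (err == 0);                        // 0 means the handshake succeeded
        }
    }
    close(sock);
    return open;
}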
I figured out a solution that works on Windows. First I added:
u_long on = 1;
timeval tv = {0, 1000}; //timeout value in microseconds
fd_set fds;
FD_ZERO(&fds);
Then I changed the code to look like this:
for (i = 0; i < a_size(port_no); i++)
{
    sock = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    FD_ZERO(&fds);    // reset the set each iteration so closed sockets don't linger in it
    FD_SET(sock, &fds);
    ioctlsocket(sock, FIONBIO, &on);
    target.sin_family = AF_INET;
    target.sin_port = htons(port_no[i]);
    target.sin_addr.s_addr = inet_addr(argv[1]);
    connect(sock, (SOCKADDR *)&target, sizeof(target));
    err = select(sock, &fds, &fds, &fds, &tv);
    if (err != SOCKET_ERROR && err != 0)
        cout << "Port: " << port_no[i] << " - open" << endl;
    else
        cout << "Port: " << port_no[i] << " - closed" << endl;
    closesocket(sock);
}
and it seems to function much faster now! I will do some work to optimize it and clean it up a bit, but thank you for your input, everyone who responded! :)

C++ Multithreaded TCP Server Issue

I'm writing a simple TCP server. The model that I thought of is the server accepting client connections in the main thread and handing them over to another thread, so that the server can listen for connections again. The relevant parts of the code I used are posted below:
Accepting connections:
void startServer () {
    int serverSideSocket = 0;
    int clientSideSocket = 0;
    serverSideSocket = socket(AF_INET, SOCK_STREAM, 0);
    if (serverSideSocket < 0) {
        error("ERROR opening socket");
        exit(1);
    }
    clientAddressLength = sizeof(clientAddress);
    memset((char *) &serverAddress, 0, sizeof(serverAddress));
    memset((char *) &clientAddress, 0, clientAddressLength);
    serverAddress.sin_family = AF_INET;
    serverAddress.sin_addr.s_addr = INADDR_ANY;
    serverAddress.sin_port = htons(32000);
    if (bind(serverSideSocket, (struct sockaddr *) &serverAddress, sizeof(serverAddress)) < 0) {
        error("ERROR on binding");
        exit(1);
    }
    listen(serverSideSocket, SOMAXCONN);
    while (true) {
        clientSideSocket = accept(serverSideSocket, (struct sockaddr *) &clientAddress, &clientAddressLength);
        if (clientSideSocket < 0)
            error("ERROR on accept");
        processingThreadGroup->create_thread(boost::bind(process, clientSideSocket, this));
    }
}
Here, the processingThreadGroup is a boost::thread_group instance. In the process method:
void process (int clientSideSocket, DataCollector* collector) {
    int numberOfCharactersRead = 0;
    string buffer;
    do {
        char msgBuffer[1000];
        numberOfCharactersRead = recv(clientSideSocket, msgBuffer, (1000 - 1), 0);
        if (numberOfCharactersRead < 0) {
            //display error
            close(clientSideSocket);
        }
        else if (numberOfCharactersRead == 0)
            close(clientSideSocket);
        else {
            msgBuffer[numberOfCharactersRead] = '\0'; // terminate before printing
            printf("%s", msgBuffer);
        }
    } while (numberOfCharactersRead > 0);
}
However, when I debugged the code, I saw that once the processing thread is invoked, the main thread no longer accepts connections. Data is read inside the process() method only; the main thread seems to be not running anymore. What is the issue with the approach I took, and are there any suggestions to correct it?
EDIT: I think I found the issue here, and have updated it as an answer. Will not accept it since I answered my own question. Thank you for the help everyone!
I think I found the issue. I was using this as a server to accept syslog messages. The code I used for the syslog message generator is as follows:
openlog ("MyProgram", LOG_CONS | LOG_PID | LOG_NDELAY, LOG_LOCAL0);
cout << "opened the log" << endl;
for (int i = 0 ; i < 10 ; i++)
{
syslog (LOG_INFO, "Program started by User %d \n", getuid ());
syslog (LOG_WARNING, "Beware of the WARNING! \n");
syslog (LOG_ERR, "fatal ERROR! \n");
}
closelog ();
cout << "closed the log" << endl;
and I use an entry in the rsyslog.conf file to direct all syslog LOG_LOCAL0 application traffic to the relevant TCP port where the server is listening. Somehow, syslog allows only one connection to be made, not multiple connections. Therefore, it only used one connection in a single thread. If that connection is closed, a new connection is created.
I checked with a normal TCP client; that works fine, with a thread being spawned for each accepted connection.