I'm trying to implement a simple networking game (client - server) that uses UDP to transfer game events over the network, and I have this working well. Now I would like to add chat over TCP to the game, in the same console application. I've tried to implement a multi-client chat using select() and a non-blocking master socket. The chat works as a standalone application, but I'm having problems putting the two together.
Basically my server loop looks like this:
while(true)
{
sendUDPdata()
...
while(true)
{
receiveUDPdata()
}
}
The problem is that when I add the chat to the server's main loop (which handles UDP), like this:
while(true)
{
HandleTCPConnections();
sendUDPdata();
...
while(true)
{
receiveUDPdata();
}
}
the select() call in HandleTCPConnections() blocks the whole server. Is there any way to handle this?
There are two good ways to do this:
Use threads. Have a thread to handle your TCP sockets and a thread to handle your UDP sockets.
Use a reactor. Both the UDP code and the TCP code register their sockets with the reactor. The reactor blocks on all the sockets (typically using poll) and calls into the appropriate code when activity occurs on that socket.
There are lots of libraries out there for both of these options (such as libevent and boost.asio) that you can use if you don't want to reinvent the wheel.
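For the reactor option, here is a minimal sketch of the idea using plain poll(), assuming the question's receiveUDPdata()/HandleTCPConnections() handlers and two already-created sockets (the run_reactor name and parameters are mine):
#include <poll.h>
void receiveUDPdata();        // your existing UDP handler
void HandleTCPConnections();  // your existing chat handler (accept + read)
void run_reactor(int udp_fd, int tcp_listen_fd)
{
    struct pollfd fds[2];
    fds[0].fd = udp_fd;        fds[0].events = POLLIN;
    fds[1].fd = tcp_listen_fd; fds[1].events = POLLIN;
    for (;;) {
        // Sleep until at least one socket is readable, so neither
        // protocol can starve or block the other.
        if (poll(fds, 2, -1) < 0)
            break; // check errno (e.g. EINTR) in real code
        if (fds[0].revents & POLLIN)
            receiveUDPdata();
        if (fds[1].revents & POLLIN)
            HandleTCPConnections();
    }
}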
select() is a blocking call when, as in your case, there's no data available on the sockets; you can also pass it a timeout to bound the wait.
Your chat can either run inside the server loop or in parallel with it: you've already got the first case; for the second, you'd be better off with a separate thread that handles the chat. C++11 has <thread>, which you may want to look into.
A separate thread is easier to implement in this case because the chat uses a separate connection, and therefore separate sockets, which would otherwise need to be guarded for concurrent access.
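As a minimal sketch of the threaded variant with C++11 <thread>, assuming the chat loop is wrapped so it can block in select() without stalling the game (the handler names come from the question; the running flag is mine):
#include <atomic>
#include <thread>
void HandleTCPConnections();  // the existing select()-based chat code
void sendUDPdata();           // the existing game handlers
void receiveUDPdata();
std::atomic<bool> running{true};
int main()
{
    // Chat thread: free to block in select() without affecting the game.
    std::thread chat([] {
        while (running)
            HandleTCPConnections();
    });
    while (running) {  // the game loop, exactly as before
        sendUDPdata();
        receiveUDPdata();
    }
    chat.join();
}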
Related
I've got a C++ application that is using ZeroMQ for some messaging. But it also has to provide an SCGI connection for an AJAX / Comet based web service.
For this I need a normal TCP socket. I could do that with plain POSIX sockets, but to stay cross-platform portable and make my life easier (I hope...) I was thinking of using Boost::ASIO.
But now I have the clash of ZMQ wanting to use its own zmq_poll() and ASIO its io_service.run()...
Is there a way to get ASIO to work together with the 0MQ zmq_poll()?
Or is there another recommended way to achieve such a setup?
Note: I could solve that by using multiple threads - but it's only a little single-core / single-CPU box that'll run this program, with a very low amount of SCGI traffic, so multithreading would be a waste of resources...
After reading the documentation here and here, specifically this paragraph
ZMQ_FD: Retrieve file descriptor associated with the socket The ZMQ_FD
option shall retrieve the file descriptor associated with the
specified socket. The returned file descriptor can be used to
integrate the socket into an existing event loop; the ØMQ library
shall signal any pending events on the socket in an edge-triggered
fashion by making the file descriptor become ready for reading.
I think you can use null_buffers for every zmq_pollitem_t and defer the event loop to an io_service, completely bypassing zmq_poll() altogether. There appear to be some caveats in the aforementioned documentation however, notably
The ability to read from the returned file descriptor does not
necessarily indicate that messages are available to be read from, or
can be written to, the underlying socket; applications must retrieve
the actual event state with a subsequent retrieval of the ZMQ_EVENTS
option.
So when the handler for one of your zmq sockets fires, you'll have to do a little more work before handling the event, I think. Uncompiled pseudo-code is below:
const int fd = getZmqDescriptorSomehow();
boost::asio::posix::stream_descriptor socket( _io_service, fd );
socket.async_read_some(
    boost::asio::null_buffers(),
    [&](const boost::system::error_code& error, std::size_t /*bytes_transferred*/)
    {
        if (!error) {
            // fd is readable: check ZMQ_EVENTS, drain pending messages,
            // then re-arm this null_buffers read (see below)
        }
    }
);
Note that you don't have to use a lambda here; boost::bind to a member function would be sufficient.
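To honor the ZMQ_EVENTS caveat quoted above, the body of the handler might look roughly like this - a sketch assuming libzmq 3.x or later, where zmq_sock is the void* socket the descriptor was retrieved from:
#include <zmq.h>
void on_zmq_readable(void* zmq_sock)
{
    // The fd being readable only means "something happened";
    // query ZMQ_EVENTS for the actual socket state.
    int events = 0;
    size_t len = sizeof(events);
    zmq_getsockopt(zmq_sock, ZMQ_EVENTS, &events, &len);
    // The fd is edge-triggered, so drain every pending message before
    // re-arming the null_buffers read, re-checking ZMQ_EVENTS each time.
    while (events & ZMQ_POLLIN) {
        zmq_msg_t msg;
        zmq_msg_init(&msg);
        zmq_msg_recv(&msg, zmq_sock, ZMQ_DONTWAIT);
        // ... process msg ...
        zmq_msg_close(&msg);
        zmq_getsockopt(zmq_sock, ZMQ_EVENTS, &events, &len);
    }
}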
In the end I figured out there are two possible solutions:
Sam Miller's, where we use the event loop of ASIO
ZeroMQ's event loop, obtained by getting the ASIO file descriptors through the .native() methods of the acceptor and the socket and inserting them into the array of zmq_pollitem_t
I have accepted Sam Miller's answer as that is, for me, the best solution in the SCGI case, where new connections are constantly being created and ended. Handling the thus ever-changing zmq_pollitem_t array is a big hassle that can be avoided by using the ASIO event loop.
Obtaining the file descriptor from ZeroMQ is the smallest part of the battle. ZeroMQ is based on a protocol layered over TCP, so you will have to reimplement ZeroMQ within a custom Boost.Asio io_service if you go this route. I ran into the same problem when creating an asynchronous ENet service using Boost.Asio, by first simply trying to catch traffic from an ENet client with a Boost.Asio UDP service. ENet is a TCP-like protocol layered over UDP, so all I achieved at that point was catching packets in a virtually useless state.
Boost.Asio is template based, and the built-in io_services use templates to basically wrap the system socket library to create the TCP and UDP services. My final solution was to create a custom io_service that wrapped the ENet library rather than the system socket library, allowing it to use ENet's transport functions rather than having to reimplement them on top of the built-in UDP transport.
The same can be done for ZeroMQ, but ZeroMQ is already a very high-performance network library in its own right that already provides async I/O. I think you can create a viable solution by receiving messages using ZeroMQ's existing API and passing the messages into an io_service thread pool. That way messages/tasks will still be handled asynchronously using Boost.Asio's reactor pattern without having to rewrite anything. ZeroMQ will provide the async I/O, Boost.Asio will provide the async task handlers/workers.
The existing io_service can still be coupled to an existing TCP socket as well, allowing the thread pool to handle both TCP (HTTP in your case) and ZeroMQ. It's entirely possible in such a setup for the ZeroMQ task handlers to access the TCP service's session objects, allowing you to send the results of a ZeroMQ message/task back to a TCP client.
The following is just to illustrate the concept.
// Create a pool of threads to run the io_service.
std::vector<boost::shared_ptr<boost::thread> > threads;
for (std::size_t i = 0; i < thread_pool_size_; ++i) {
    boost::shared_ptr<boost::thread> thread(new boost::thread(
        boost::bind(&boost::asio::io_service::run, &io_service_)));
    threads.push_back(thread);
}
while (1) {
    char buffer[256];
    // Blocking receive; this thread does nothing but feed the pool.
    int size = zmq_recv(responder_, buffer, sizeof(buffer), 0);
    if (size < 0)
        continue; // log the error in real code
    // Copy the data out of the stack buffer before posting: the handler
    // runs later, on another thread. Note "this" comes first in bind.
    io_service_.post(boost::bind(&server::handle_zeromq_message, this,
        std::string(buffer, std::min<int>(size, sizeof(buffer)))));
}
Two years after this question was asked, someone posted a project which does exactly this. The project is here: https://github.com/zeromq/azmq. The blog post discussing the design is here: https://rodgert.github.io/2014/12/24/boost-asio-and-zeromq-pt1/.
Here is the sample code copied from the readme:
#include <azmq/socket.hpp>
#include <boost/asio.hpp>
#include <array>
namespace asio = boost::asio;
int main(int argc, char** argv) {
    asio::io_service ios;
    azmq::sub_socket subscriber(ios);
    subscriber.connect("tcp://192.168.55.112:5556");
    subscriber.connect("tcp://192.168.55.201:7721");
    subscriber.set_option(azmq::socket::subscribe("NASDAQ"));
    azmq::pub_socket publisher(ios);
    publisher.bind("ipc://nasdaq-feed");
    std::array<char, 256> buf;
    for (;;) {
        auto size = subscriber.receive(asio::buffer(buf));
        publisher.send(asio::buffer(buf, size)); // forward only the bytes received
    }
    return 0;
}
Looks nice. If you try it, let me know in the comments whether it still works in 2019 [I will probably try in a couple of months and then update this answer] (the repo is stale; the last commit was a year ago).
The solution is to poll your io_service as well, instead of calling run().
Check out this solution for some poll() info.
Using poll instead of run will allow you to poll zmq's connections without any blocking issues.
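A minimal sketch of that combined loop, assuming a single ZeroMQ socket and an io_service set up elsewhere (the names and the 10 ms timeout are mine):
zmq_pollitem_t items[1];
items[0].socket = zmq_socket_;   // the void* ZeroMQ socket
items[0].events = ZMQ_POLLIN;
while (running) {
    io_service_.poll();                // run any ready ASIO handlers, never block
    if (zmq_poll(items, 1, 10) > 0 &&  // wait at most 10 ms for zmq events
        (items[0].revents & ZMQ_POLLIN)) {
        // receive and handle the pending ZeroMQ message(s)
    }
}
The zmq_poll timeout bounds the latency of the ASIO handlers; shrink it, or switch to the ZMQ_FD integration from the accepted answer, if that latency matters.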
Recently I've been learning socket programming, and I finally found some good examples in Beej's Guide to Network Programming.
There's a chat server example using poll under the poll section.
Code:
chatserver.c (the chat server receives a client's message and sends it to all the other clients)
After reading the code line by line and fully understanding the example, I suddenly realized how clever and neat the design is.
Basically, it uses a single poll() to monitor everything: the server's listening socket, which accepts new incoming TCP connections, and all the existing TCP sockets. No extra threads or processes are needed.
Then I started to ask myself:
It seems that I could use multiple processes (or multiple threads, if that's too complicated) to achieve the same effect. Taking the chat server as an example, the design could be:
the main process handles new incoming TCP connections and adds each new connection socket to a global array all_sockets;
for each new connection, the main process forks a child process that blocks in recv(), something like:
//pseudo-code
int bytes_from_client;
while (true) {
    if ((bytes_from_client = recv(xx, xx, xx, xx)) <= 0) {
        if (bytes_from_client == 0)
            client_shutdown();   // peer closed the connection
        else
            error_handle();
    } else {
        // got data from this client: relay it to all the other clients
        for (int i = 0; i < num_sockets; i++) {
            if (all_sockets[i] != this_client)  // don't echo to the sender
                send(xx, xx, xx, xx);
        }
    }
}
OK, then I'd need to handle some synchronization issues for the global variables - use a mutex or something else (the hard part).
So now for the questions:
What exactly do I gain from the poll design, compared with the multi-process/multi-threaded one I described above? No need to handle synchronization - is that the only advantage?
(A more generic but meaningful question) Is it this design pattern that makes the poll-like functions (select, epoll) so different/unique and great? (I'm a newbie, and I ask because I've seen so many people say how great and significant the poll family of functions is, but they never say why, nor give examples or comparisons.)
One basic reason why a simple select() loop works so well is that network packets arrive one at a time and, compared to the blinding speed of any CPU, very slowly. There simply isn't any advantage to a more complicated arrangement.
Often, one thread or process is dedicated to handling the network connection: polling for incoming packets and sending outgoing messages on behalf of everyone. (I've heard it called "the telegrapher.") It then uses queues to distribute the work to a stable of "worker-bee" processes, which retrieve requests from one queue and post the outgoing answers to another queue that is also listened to by the telegrapher.
Sometimes, the telegrapher simply listens for connection-requests, places the handles on a queue, and the worker bees open the connection and do the networking themselves. But, once again, they can simply use polling and it works just fine. Even the fastest communication line is orders of magnitude slower than a CPU.
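For illustration, a sketch of the queue plumbing behind that arrangement (all names hypothetical; C++11):
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>
// Minimal thread-safe work queue.
template <typename T>
class WorkQueue {
public:
    void push(T item) {
        { std::lock_guard<std::mutex> lock(m_); q_.push(std::move(item)); }
        cv_.notify_one();
    }
    T pop() {  // blocks until an item is available
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return !q_.empty(); });
        T item = std::move(q_.front());
        q_.pop();
        return item;
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<T> q_;
};
WorkQueue<std::string> requests, replies;
// A "worker bee": take a request, compute, post the answer back.
void worker() {
    for (;;)
        replies.push("answer to " + requests.pop());
}
The telegrapher thread stays in its select()/poll() loop, pushes parsed requests onto requests, and drains replies back onto the right sockets.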
At work I have been tasked with implementing a TCP server as part of a Modbus slave device. I have done a lot of reading, both here on Stack Exchange and on the internet in general (including the excellent http://beej.us/guide/bgnet/), but I am struggling with a design issue. In summary: my device can accept just 2 connections, and on each connection there will be incoming Modbus requests which I must process in my main controller loop and then reply to with a success or failure status. I have the following ideas for how to implement this.
Have a listener thread that creates, binds, listens and accepts connections, then spawns a new pthread to listen on each connection for incoming data and closes the connection after an idle timeout period. If the number of active threads is currently 2, new connections are instantly closed to ensure only 2 are allowed.
Do not spawn new threads from the listener thread; instead use select() to detect incoming connection requests as well as incoming Modbus requests on active connections (similar to the approach in Beej's guide).
Create 2 listener threads, each of which creates a socket (same IP and port number) that can block on an accept() call, then closes the socket fd and deals with the connection. Here I am (perhaps naively) assuming that this will allow a maximum of 2 connections, which I can deal with using blocking reads.
I have been using C++ for a long time, but I am fairly new to Linux development. I would really welcome any suggestions as to which of the above approaches is best (if any), and whether my inexperience with Linux means that any of them are really bad ideas. I am keen to avoid fork() and stick to pthreads, as incoming Modbus requests are going to be queued and read off by a main controller loop periodically. Thanks in advance for any advice.
The third alternative won't work: you can only bind to the local address and port once.
I would probably use your second alternative, unless you need to do a lot of processing, in which case a combination of the first two alternatives might be useful.
The combination of the first two alternatives I'm thinking of is to have the main thread (the one you always have when a program starts) create two worker threads, then make a blocking accept() call to wait for a new connection. When a new connection arrives, tell one of the threads to start working on it and go back to block in accept(). When the second connection is accepted, tell the other thread to work on that connection. If both connections are already open, either don't accept until one of them is closed, or accept new connections but close them immediately.
None of the designs you propose are very object oriented, and they are all geared more towards C than C++. If your work allows you to use Boost, then the Boost.Asio library is fantastic for making simple (and complex) socket servers. You could take nearly any of their examples and trivially extend it to allow only 2 active connections, closing all others as soon as they are opened.
Off the top of my head, their simple HTTP server could be modified to do this by keeping a static counter in the connection class (incremented in the constructor, decremented in the destructor): when a new connection is created, check the count and decide whether to close it. The connection class could also gain a boost::asio::deadline_timer to keep track of timeouts.
This would most closely resemble your first design choice; Boost can do it in one thread, and in the background it does something similar to select() (usually epoll()). But this is the "C++ way"; in my opinion, using select() and raw pthreads is the C way.
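As an uncompiled sketch of that modification (the class shape loosely follows the Boost HTTP-server example; the counter and limit are mine):
#include <boost/asio.hpp>
#include <atomic>
class connection
{
public:
    explicit connection(boost::asio::io_service& io)
        : socket_(io)
    {
        ++count_;                 // one more live connection
    }
    ~connection() { --count_; }
    boost::asio::ip::tcp::socket& socket() { return socket_; }
    void start()
    {
        if (count_ > 2) {         // already serving two clients
            socket_.close();
            return;
        }
        // ... start async reads; arm a boost::asio::deadline_timer and
        //     close the socket when it expires with no activity ...
    }
private:
    static std::atomic<int> count_;
    boost::asio::ip::tcp::socket socket_;
};
std::atomic<int> connection::count_{0};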
Since you are only dealing with 2 connections, a thread per connection is perfect for this kind of application. Object-oriented approaches using non-blocking or asynchronous I/O would be better if you needed to scale up to thousands of connections. Two listener threads make sense; you don't need to close the accept fd, just come back and accept on it again when the connection is finished. In fact, a variation is to have three threads blocked in accept: if two of the threads are actively handling connections, the third resets the newly created connection (or returns a busy response, whatever is appropriate for your device).
To have all three threads block in accept, you need the main thread to create and bind your socket before the three threads launch to do their accept/handle processing.
The man page for pthreads on Linux indicates that accept is thread-safe. (The section on thread-safe functions lists the functions that are not thread-safe - go figure.)
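A sketch of that layout: the main thread does socket()/bind()/listen() once, then three identical threads block in accept() on the shared fd; whichever thread accepts a third connection while two are busy rejects it (handle_modbus_connection() is a hypothetical per-connection handler):
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <atomic>
#include <thread>
void handle_modbus_connection(int fd); // assumed: service requests until peer closes
std::atomic<int> active{0};
void acceptor(int listen_fd)
{
    for (;;) {
        int fd = accept(listen_fd, nullptr, nullptr);
        if (fd < 0)
            continue;
        if (active.fetch_add(1) >= 2) { // two already busy: reject extras
            active.fetch_sub(1);
            close(fd);
            continue;
        }
        handle_modbus_connection(fd);
        close(fd);
        active.fetch_sub(1);            // back to accept()
    }
}
int main()
{
    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(502); // the standard Modbus/TCP port
    bind(listen_fd, (sockaddr*)&addr, sizeof(addr));
    listen(listen_fd, 2);
    std::thread t1(acceptor, listen_fd), t2(acceptor, listen_fd),
                t3(acceptor, listen_fd);
    t1.join(); t2.join(); t3.join();
}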
I'm writing a daemon that needs both to run in the background, taking care of tasks, and to receive input directly from a frontend. I've been attempting to use sockets for this; however, I can't get it to work properly, since the socket calls pause the program while waiting for a connection. Is there any way to get around this?
I'm using the socket wrappers provided at http://linuxgazette.net/issue74/tougher.html
Thank you for any and all help
You will need to use threads to make the socket operations asynchronous, or use some library that has already implemented it; one of the best known is Boost.Asio.
There are a few ways to handle this problem. The most common is to use an event loop and something like libevent, with non-blocking sockets.
Doing this in an event-driven fashion can require a big shift in your program logic, but doing it with threads has its own complexities and isn't clearly a better choice.
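For the event-loop route, a minimal libevent 2.x sketch; make_listening_socket() is a hypothetical socket()/bind()/listen() helper, and the accept callback body is elided:
#include <event2/event.h>
#include <event2/util.h>
int make_listening_socket(); // hypothetical: socket() + bind() + listen()
// Fired whenever listen_fd is readable, i.e. a client is waiting,
// so the accept() inside will not block.
void on_accept(evutil_socket_t listen_fd, short what, void* arg)
{
    // accept the client, make it non-blocking, and register a read
    // event for it on the same event_base
}
int main()
{
    int listen_fd = make_listening_socket();
    evutil_make_socket_nonblocking(listen_fd);
    struct event_base* base = event_base_new();
    struct event* ev = event_new(base, listen_fd, EV_READ | EV_PERSIST,
                                 on_accept, nullptr);
    event_add(ev, nullptr);
    // The loop: the daemon's other tasks hang off timers and events
    // instead of blocking the whole process on one socket.
    event_base_dispatch(base);
    return 0;
}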
Usually daemons use event loops to avoid the problem of waiting for events.
It's the smartest solution to the problem you present (not blocking on an asynchronous event).
However, the entire daemon is usually built on top of the event loop and its callback architecture, which can mean a partial rewrite; so the quick and dirty solution is often to create a separate thread to handle those events, which usually creates more bugs than it solves. So, use an event loop:
libevent.
glib event loop.
libev.
boost::asio
...
From your description, you have already divided your application into a frontend (receiving input) and a backend (socket handling and tasks). If the input from the frontend is sent over the socket (via the backend) rather than received from the socket, then it seems like you are describing a client, not a server. Client programs are typically not implemented as daemons.
You have created a blocking socket and need either to monitor it in a separate thread of execution (a thread or even a separate process) or to make the socket non-blocking and poll it frequently for updates.
The LinuxGazette link is a basic intro to network programming. If you would like a little more depth, take a look at Beej's Guide to Network Programming, where the various API calls available to you are explained in some detail - and which will perhaps make you appreciate wrapper libraries such as Boost::ASIO even more.
It can be worth retaining control of the event loop yourself - it's not complicated and it provides flexibility down the track.
"C++ pseudo-code" for an event loop.
while (!done)
{
bool workDone = false;
// Loop over each event source or internal worker
for each module
{
// If it has work to do, do some.
if (module.hasWorkToDo())
{
// Generally, do as little work as possible; e.g. process a single event for this module.
// But tinker with this to manage priorities if need be.
// E.g. Maybe allow the GUI to flush its queue.
module.doSomeWork();
workDone = true;
}
}
if (!workDone)
{
// System idle. Sleep for a bit so we have benign idle behaviour.
nanosleep(...);
}
}
Hi, I am working on an assignment writing a multi-threaded client-server.
So far, I have opened a socket on a port and forked two threads for listening and writing to clients. But I need to connect two types of clients to the server and service them differently. My question is: what would be my best approach?
I am handling connections in a class which has an infinite loop to accept connections. Whenever a connection is accepted, the class creates two threads to read from and write to the client. Now, if I want to handle another client of a different type, what should I do?
Do I need to open another port, or is it possible to service both through the same port? Maybe if it is possible to identify the type of client from the socket, I can handle the messages differently.
Or do you suggest something like this?
Fork two threads for the two types of client and monitor inbound connections in each thread on a different port.
When a connection is accepted, each thread spawns another two threads for listening and writing.
Please make a suggestion.
Perhaps you'll get a better answer from a Unix user, but I'll provide what I know.
Your server needs a thread that opens a 'listening' socket and waits for incoming connections. This thread can be the main thread for simplicity, but it can be an alternate thread if you are concerned about UI interaction, for example (in Windows this would be a concern; I'm not sure about Unix). It sounds like you are at least this far.
When the 'listening' socket accepts a connection, you get a 'connected' socket that is connected to the 'client' socket. You would pass this 'connected' socket to a new thread that manages reading from and writing to it. Thus, one change I would suggest is managing the 'connected' socket in a single thread, not two separate threads (one for reading, one for writing) as you have done. Reading and writing on the same socket can be accomplished using the select() system call, as shown here.
When a new client connects, your 'listening' socket will provide a new 'connected' socket, which you will hand off to another thread. At this point, you have two threads - one that is managing the first connection and one that is managing the second connection. As far as the sockets are concerned, there is no distinction between the clients. You simply have two open connections, one to each of your two clients.
At this point, the question becomes what does it mean to "service them differently". If the clients are expected to interact with the server in unique ways, then this has to be determined somehow. The interactions could be determined based on the 'client' socket's IP address, which you can query, but this seems arbitrary and is subject to network changes. It could also be based on the initial block of data received from the 'client' socket which indicates the type of interaction required. In this case, the thread that is managing the 'connected' socket could read the socket for the expected type of interaction and then hand the socket off to a class object that manages that interaction type.
I hope this helps.
You can handle the reads and writes on a single client connection in one thread. The simplest multi-threaded solution would be something like this:
// C++ like pseudo-code
while (server_running)
{
client = server.accept();
ClientHandlingThread* cth = CreateNewClientHandlingThread(client);
cth->start();
}
class ClientHandlingThread
{
void start()
{
std::string header = client->read_protocol_header();
// We get a specific implementation of the abstract ProtocolHandler class
// from a factory, which creates objects by inspecting the protocol header.
ProtocolHandler* handler = ProtocolHandlerFactory::create(header);
if (handler)
handler->read_write(client);
else
log("unknown protocol")
}
};
To scale better, you can use a thread pool instead of spawning a new thread for each client. There are many free thread pool implementations for C++:
while (server_running)
{
    client = server.accept();
    // The pool runs the handler on one of its existing threads;
    // no per-client thread spawn.
    thread_pool->submit(client);
}
The server could be improved further by using a framework that implements the reactor pattern; such frameworks use the select or poll functions under the hood. You can also use these functions directly, but for a production system it is better to use an existing reactor framework. ACE is one of the most widely known C++ toolkits for developing highly scalable concurrent applications.
Different protocols are generally serviced on different ports. However, you could service both types of client over the same port by negotiating the protocol to be used. This can be as simple as the client sending either HELO or EHLO to request one or the other kind of service.
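A sketch of that negotiation on the server side; serve_type_a()/serve_type_b() are hypothetical handlers for the two client types:
#include <string>
#include <sys/socket.h>
#include <unistd.h>
void serve_type_a(int fd); // hypothetical per-type handlers
void serve_type_b(int fd);
// Read the greeting and dispatch. Real code should keep reading until
// the line terminator arrives; a single recv() may return a partial line.
void dispatch(int fd)
{
    char buf[16] = {0};
    ssize_t n = recv(fd, buf, sizeof(buf) - 1, 0);
    if (n <= 0) { close(fd); return; }
    std::string greeting(buf, n);
    if (greeting.compare(0, 4, "HELO") == 0)
        serve_type_a(fd);
    else if (greeting.compare(0, 4, "EHLO") == 0)
        serve_type_b(fd);
    else
        close(fd); // unknown greeting: drop the connection
}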