I'm looking for the best way to modify the Boost Asio HTTP Server 3 example to maintain a list of the currently connected clients.
If I modify server.hpp from the example as:
class server : private boost::noncopyable
{
public:
    typedef std::vector< connection_ptr > ConnectionList;

    // ...

    ConnectionList::const_iterator GetClientList() const
    {
        return connection_list_.begin();
    }

    void handle_accept(const boost::system::error_code& e)
    {
        if (!e)
        {
            connection_list_.push_back( new_connection_ );
            new_connection_->start();
            // ...
        }
    }

private:
    ConnectionList connection_list_;
};
Then I break the lifetime management of the connection object: it never goes out of scope and never disconnects from the client, because the ConnectionList still holds a reference to it.
If instead my ConnectionList is defined as typedef std::vector< boost::weak_ptr< connection > > ConnectionList; then I run the risk of the client disconnecting, and its weak_ptr expiring, while somebody is using it from GetClientList().
Anybody have a suggestion on a good & safe way to do this?
Thanks,
PaulH
HTTP is stateless. That means it's difficult to even define what "currently connected client" means, not to mention keep track of which clients are connected at any given time. The only time there's really a "current client" is from the time a request is received to the time that request is serviced (often only a few milliseconds). A connection is not maintained even for the duration of downloading one page -- rather, each item on the page is requested and sent separately.
The typical method for handling this is to use a fairly simple timeout -- a client is considered "connected" for some arbitrary length of time (a few minutes) after they send in a request. A cookie of some sort is used to identify the client sending in a particular request.
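A minimal sketch of that idea (the class and its names are illustrative, not from any particular library): record the time of each request keyed by cookie, and treat a client as "connected" if it has been seen within the timeout window.

#include <chrono>
#include <string>
#include <unordered_map>

class client_tracker
{
public:
    explicit client_tracker(std::chrono::seconds timeout) : timeout_(timeout) {}

    // Call on every request carrying the client's cookie.
    void touch(const std::string& cookie)
    {
        last_seen_[cookie] = std::chrono::steady_clock::now();
    }

    // A client counts as "connected" if seen within the timeout window.
    bool connected(const std::string& cookie) const
    {
        auto it = last_seen_.find(cookie);
        return it != last_seen_.end()
            && std::chrono::steady_clock::now() - it->second < timeout_;
    }

private:
    std::chrono::seconds timeout_;
    std::unordered_map<std::string,
                       std::chrono::steady_clock::time_point> last_seen_;
};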
The rest of what you're talking about is just a matter of making sure the collection you use to hold connection information is thread-safe. You have one thread that adds connections, one thread that deletes them, and N threads that use the data currently in the list. The standard collections don't guarantee any thread safety, but there are others around that do.
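As for the shared_ptr/weak_ptr dilemma from the question, here is a minimal sketch of one safe middle ground (the mutex and snapshot approach is my own choice, not part of the Asio example): store weak_ptrs under a mutex, and have GetClientList() return a snapshot of the connections that are still alive. Locking a weak_ptr yields a shared_ptr, so a caller pins each connection only for as long as it uses the snapshot.

#include <cstddef>
#include <vector>
#include <boost/shared_ptr.hpp>
#include <boost/weak_ptr.hpp>
#include <boost/thread/mutex.hpp>

class connection {}; // stand-in for the example's connection class
typedef boost::shared_ptr<connection> connection_ptr;

class server
{
public:
    // Record a new connection without extending its lifetime.
    void add_connection(const connection_ptr& c)
    {
        boost::mutex::scoped_lock lock(mutex_);
        connection_list_.push_back(boost::weak_ptr<connection>(c));
    }

    // Return a snapshot of the live connections; expired entries are
    // skipped rather than dereferenced.
    std::vector<connection_ptr> GetClientList()
    {
        boost::mutex::scoped_lock lock(mutex_);
        std::vector<connection_ptr> alive;
        for (std::size_t i = 0; i < connection_list_.size(); ++i)
        {
            if (connection_ptr c = connection_list_[i].lock())
                alive.push_back(c);
        }
        return alive;
    }

private:
    boost::mutex mutex_;
    std::vector< boost::weak_ptr<connection> > connection_list_;
};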
I'm implementing an auctioning system in C++ with Boost.Asio. There is a single centralized auctioneer (the server) and some connecting bidders (the clients). I am implementing this in an asynchronous fashion, and I have implemented the basic communication between the bidder and auctioneer (register, ping, get client list). The skeletal code for the auctioneer looks as follows:
class talkToBidder : public boost::enable_shared_from_this<talkToBidder>
{
    // Code for sending and receiving messages, which works fine
};

void on_round_end()
{
    // Choose the best bid and message the winner
    if (!itemList.empty())
        timer_reset();
}

void timer_reset()
{
    // Send the item information to the bidders
    // When the round ends, call on_round_end()
    auction_timer.expires_from_now(boost::posix_time::millisec(ROUND_TIME));
    auction_timer.async_wait(boost::bind(on_round_end));
}

void handle_accept(...)
{
    // Create new bidder...
    acceptor.async_accept(bidder->sock(), boost::bind(handle_accept, bidder, _1));
}

int main()
{
    // Create new bidder and handle accepting it
    talkToBidder::ptr bidder = talkToBidder::new_();
    acceptor.async_accept(bidder->sock(), boost::bind(handle_accept, bidder, _1));
    service.run();
}
My issue is, I need to wait for at least one bidder to connect before I can start the auction, so I cannot simply call timer_reset() before I use service.run(). What is the Boost.Asio way to go about doing this?
In asynchronous protocol design, it helps to draw Message Sequence Diagrams. Do include your timers.
The code now becomes trivial. You start your timer when the message arrives that should start your timer. Yes, this shifts the problem a bit forward, but the real point is that it's not a Boost.Asio coding problem. In your case, that particular message appears to be the login of the first bidder, implemented as a TCP connect (SYN/ACK), which maps to handle_accept in your code.
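In code terms, that might look like the sketch below, which follows the question's skeleton (auction_started and bidder->start() are assumed names, not part of the original code): arm the timer from handle_accept when the first bidder connects. Since only one thread calls service.run(), no locking is needed.

bool auction_started = false;

void handle_accept(talkToBidder::ptr bidder, const boost::system::error_code& ec)
{
    if (!ec)
    {
        bidder->start(); // begin talking to this bidder
        // The first successful accept is the "message" that starts the clock.
        if (!auction_started)
        {
            auction_started = true;
            timer_reset();
        }
    }
    // Keep accepting further bidders.
    talkToBidder::ptr next = talkToBidder::new_();
    acceptor.async_accept(next->sock(), boost::bind(handle_accept, next, _1));
}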
I'm trying to create a WebSocket Server.
I can establish a connection and everything works fine so far.
In this GitHub example the data is sent within the handleRequest() method, which is called when a client connects.
But can I send data to the client from another class using the established WebSocket connection?
How can I achieve this? Is this even possible?
Thank you.
It is, of course, possible. In the example you referred to, you should keep a member pointer to the WebSocket in the RequestHandlerFactory, e.g.:
class RequestHandlerFactory : public HTTPRequestHandlerFactory
{
    // ...
private:
    shared_ptr<WebSocket> _pwebSocket;
};
pass it to the WebSocketRequestHandler constructor:
return new WebSocketRequestHandler(_pwebSocket);
and WebSocketRequestHandler should look like this:
class WebSocketRequestHandler : public HTTPRequestHandler
{
public:
    // Take the pointer by reference, so that assigning to it in
    // handleRequest() is visible in the long-lived factory.
    WebSocketRequestHandler(shared_ptr<WebSocket>& pWebSocket) : _pWebSocket(pWebSocket)
    {}

    void handleRequest(HTTPServerRequest& request, HTTPServerResponse& response)
    {
        // ...
        _pWebSocket = make_shared<WebSocket>(request, response);
        // ...
    }

private:
    shared_ptr<WebSocket>& _pWebSocket;
};
Now, after the request handler creates it, you will have a pointer to the WebSocket in the factory (which is long-lived, unlike the RequestHandler, which comes and goes with every request). Keep in mind that the handler executes in its own thread, so you should have some kind of locking or notification mechanism to signal when the WebSocket has actually been created by the handler (the bool conversion of _pwebSocket will be true once the WebSocket has been successfully created).
The above example only illustrates the case with a single WebSocket - if you want to have multiple ones, you should have an array or vector of pointers and add/remove them as needed. In any case, the WebSocket pointer(s) need not necessarily reside in the factory - you can either (a) put them elsewhere in your application and propagate them to the factory/handler or (b) have a global facility (with proper multi-thread-access mechanism) holding the WebSocket(s).
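For the multiple-socket case, a minimal sketch of such a facility (the registry class, its names, and the locking scheme are assumptions, not part of the referenced example):

#include <memory>
#include <mutex>
#include <vector>
#include "Poco/Net/WebSocket.h"

using Poco::Net::WebSocket;

// Shared registry of live WebSockets: handlers add sockets as clients
// connect; other parts of the application iterate over it to broadcast.
class WebSocketRegistry
{
public:
    void add(std::shared_ptr<WebSocket> ws)
    {
        std::lock_guard<std::mutex> lock(_mutex);
        _sockets.push_back(ws);
    }

    void broadcast(const char* data, int length)
    {
        std::lock_guard<std::mutex> lock(_mutex);
        for (auto& ws : _sockets)
            ws->sendFrame(data, length, WebSocket::FRAME_TEXT);
    }

private:
    std::mutex _mutex;
    std::vector<std::shared_ptr<WebSocket>> _sockets;
};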
So the only way I know to find out which client I received from is to compare the received endpoint against every client in a loop, and I was wondering if there is a more elegant way of handling this.
In TCP, every client has its own socket, so the server knows instantly which client it received from. If I give every client its own socket in UDP, will it be more or less efficient?
I was also thinking of making one global socket and having every client object listen only to its own endpoint, but I don't think that's possible, or efficient, in Asio.
The application code is responsible for demultiplexing. At a high-level, there are two options:
Use a single endpoint to conceptually function as an acceptor. Upon receiving a handshake message, the client would instantiate a new local endpoint, and inform the client to use the newly constructed endpoint for the remainder of the client's session. This results in a socket per client, and with connected UDP sockets, a client can be guaranteed to only receive messages from the expected remote endpoint. This should be no less efficient than the same approach used with TCP sockets. However, it requires making changes to the application protocol on both the sender and receiver.
Use a single socket. Upon receiving a message, the remote endpoint is used to demultiplex to the client object. If the application depends upon the demultiplex abstraction, then the implementation may be freely changed to best suit the application's usage. This requires no changes to the application protocol.
The first option will more easily support higher concurrency levels, as each client can control the lifetime of its asynchronous call chain. While it is possible to have a call chain per client in the second option, controlling the lifetime introduces complexity, as all asynchronous call chains are bound to the same I/O object.
On the other hand, as concurrency increases, so does memory usage. Hence, the first option is likely to use more memory than the second option. Furthermore, controlling overall memory is easier in the second option, as the concurrency level will not be completely dynamic. In either case, reactor-style operations can be used to mitigate the overall memory usage.
In the end, abstract the application from the implementation whilst keeping the code maintainable. Once the application is working, profile, identify bottlenecks, and make choices based on actual data.
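To illustrate the core of the first option, here is a sketch of the handshake step (assuming the application protocol lets the server tell the client which port to use for the rest of the session):

#include <boost/asio.hpp>

using boost::asio::ip::udp;

// On receiving a handshake datagram on the "acceptor" socket, create a
// dedicated socket for this client and tell the client its new port.
void handle_handshake(boost::asio::io_service& io, const udp::endpoint& remote)
{
    // Bind to an ephemeral local port chosen by the OS.
    udp::socket per_client(io, udp::endpoint(udp::v4(), 0));

    // Connecting a UDP socket filters out datagrams from other peers.
    per_client.connect(remote);

    unsigned short new_port = per_client.local_endpoint().port();
    // ... send new_port back to the client over the acceptor socket, and
    // hand per_client off to a session object that owns its read loop ...
    (void)new_port;
}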
To expand slightly on the second option, here is a complete, minimal example of a basic client_manager that associates endpoints to client objects:
#include <cassert>
#include <memory>
#include <sstream>
#include <unordered_map>

#include <boost/asio.hpp>

namespace ip = boost::asio::ip;

/// @brief Mockup client.
class client
  : public std::enable_shared_from_this<client>
{
public:
  explicit client(ip::udp::endpoint endpoint)
    : endpoint_(endpoint)
  {}

  const ip::udp::endpoint& endpoint() const { return endpoint_; }

private:
  ip::udp::endpoint endpoint_;
};

/// @brief Basic class that manages clients. Given an endpoint, the
///        associated client, if any, can be found.
class client_manager
{
private:
  // The underlying implementation used by the manager.
  using container_type = std::unordered_map<
      ip::udp::endpoint, std::shared_ptr<client>,
      std::size_t (*)(const ip::udp::endpoint&)>;

  /// @brief Return a hash value for the provided endpoint.
  static std::size_t get_hash(const ip::udp::endpoint& endpoint)
  {
    std::ostringstream stream;
    stream << endpoint;
    std::hash<std::string> hasher;
    return hasher(stream.str());
  }

public:
  using key_type = container_type::key_type;
  using mapped_type = container_type::mapped_type;

  /// @brief Constructor.
  client_manager()
    : clients_(0, &client_manager::get_hash)
  {}

  // The public abstraction upon which the application will depend.
public:
  /// @brief Add a client to the manager.
  void add(mapped_type client)
  {
    clients_[client->endpoint()] = client;
  }

  /// @brief Given an endpoint, retrieve the associated client. Return
  ///        an empty shared pointer if one is not found.
  mapped_type get(key_type key) const
  {
    auto result = clients_.find(key);
    return clients_.end() != result
        ? result->second // Found client.
        : mapped_type(); // No client found.
  }

private:
  container_type clients_;
};

int main()
{
  // Unique endpoints.
  ip::udp::endpoint endpoint1(ip::address::from_string("11.11.11.11"), 1111);
  ip::udp::endpoint endpoint2(ip::address::from_string("22.22.22.22"), 2222);
  ip::udp::endpoint endpoint3(ip::address::from_string("33.33.33.33"), 3333);

  // Create a client for each endpoint.
  auto client1 = std::make_shared<client>(endpoint1);
  auto client2 = std::make_shared<client>(endpoint2);
  auto client3 = std::make_shared<client>(endpoint3);

  // Add the clients to the manager.
  client_manager manager;
  manager.add(client1);
  manager.add(client2);
  manager.add(client3);

  // Locate a client based on the endpoint.
  auto client_result = manager.get(endpoint2);
  assert(client1 != client_result);
  assert(client2 == client_result);
  assert(client3 != client_result);
}
Note that as the application only depends upon the client_manager abstraction (i.e. the pre- and post-conditions for client_manager::add() and client_manager::get()), the client_manager implementation can be changed without affecting the application, as long as the implementation maintains those pre- and post-conditions. For instance, instead of using std::unordered_map, it could be implemented with a sequence container, such as std::vector, or an ordered associative container, such as std::map. Choose a container that best fits the expected usage. After profiling, if the container choice is an identified bottleneck, then change the implementation of client_manager to use a more suitable container based on the actual usage.
I'm using Boost.Asio for network operations, and they have to (and actually can, since there are no complex data structures or anything) remain pretty low-level, because I can't afford the luxury of serialization overhead (and the libraries I found that did offer good enough performance seemed badly suited to my case).
The problem is with an async write I'm doing from the client (in Qt, but that should probably be irrelevant here). The callback specified in the async_write doesn't get called, ever, and I'm at a complete loss as to why. The code is:
void SpikingMatrixClient::addMatrix() {
    std::cout << "entered add matrix" << std::endl;
    int action = protocol::Actions::AddMatrix;
    int matrixSize = this->ui->editNetworkSize->text().toInt();
    std::ostream out(&buf);
    out.write(reinterpret_cast<const char*>(&action), sizeof(action));
    out.write(reinterpret_cast<const char*>(&matrixSize), sizeof(matrixSize));
    boost::asio::async_write(*connection.socket(), buf.data(),
        boost::bind(&SpikingMatrixClient::onAddMatrix, this,
                    boost::asio::placeholders::error,
                    boost::asio::placeholders::bytes_transferred));
}
which calls the first write. The callback is
void SpikingMatrixClient::onAddMatrix(const boost::system::error_code& error, size_t bytes_transferred) {
    std::cout << "entered onAddMatrix" << std::endl;
    if (!error) {
        buf.consume(bytes_transferred);
        requestMatrixList();
    } else {
        QString message = QString::fromStdString(error.message());
        this->ui->statusBar->showMessage(message, 15000);
    }
}
The callback never gets called, even though the server receives all the data. Can anyone think of any reason why it might be doing that?
P.S. There was a wrapper for that connection, and yes there will probably be one again. Ditched it a day or two ago because I couldn't find the problem with this callback.
As suggested, posting a solution I found to be the most suitable (at least for now).
The client application is being written in Qt, and I need the IO to be async. For the most part, the client receives calculation data from the server application and has to render various graphical representations of it.
Now, there are some key aspects to consider:
The GUI has to be responsive, it should not be blocked by the IO.
The client can be connected / disconnected.
The traffic is pretty intense; data gets sent / refreshed to the client every few seconds, and the client has to remain responsive (as per item 1).
As per the Boost.Asio documentation,
Multiple threads may call io_service::run() to set up a pool of
threads from which completion handlers may be invoked.
Note that all threads that have joined an io_service's pool are considered equivalent, and the io_service may distribute work across them in an arbitrary fashion.
Note that io_service.run() blocks until the io_service runs out of work.
With this in mind, the clear solution is to run io_service.run() from another thread. The relevant code snippets are
void SpikingMatrixClient::connect() {
    Ui::ConnectDialog ui;
    QDialog *dialog = new QDialog;
    ui.setupUi(dialog);
    if (dialog->exec()) {
        QString host = ui.lineEditHost->text();
        QString port = ui.lineEditPort->text();
        // Create the io_service first, so the connection is bound to it.
        io = boost::shared_ptr<boost::asio::io_service>(new boost::asio::io_service);
        connection = TcpConnection::create(io);
        boost::system::error_code error = connection->connect(host, port);
        if (!error) {
            // The work object keeps io->run() from returning while there is no I/O.
            work = boost::shared_ptr<boost::asio::io_service::work>(new boost::asio::io_service::work(*io));
            io_threads.create_thread(boost::bind(&SpikingMatrixClient::runIo, this, io));
        }
        QString message = QString::fromStdString(error.message());
        this->ui->statusBar->showMessage(message, 15000);
    }
}
for connecting & starting IO, where:
work is a private boost::shared_ptr to the boost::asio::io_service::work object it was passed,
io is a private boost::shared_ptr to a boost::asio::io_service,
connection is a boost::shared_ptr to my connection wrapper class, and the connect() call uses a resolver etc. to connect the socket; there are plenty of examples of that around,
and io_threads is a private boost::thread_group.
Surely it could be shortened with some typedefs if needed.
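The runIo member referenced in the boost::bind call isn't shown above; assuming it only needs to pump the io_service, a minimal version would be:

void SpikingMatrixClient::runIo(boost::shared_ptr<boost::asio::io_service> io) {
    io->run(); // blocks until the work object is reset and pending handlers drain
}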
TcpConnection is my own connection wrapper implementation, which sort of lacks functionality for now, and I suppose I could move the whole thread thing into it when it gets reinstated. This snippet should be enough to get the idea anyway...
The disconnecting part goes like this:
void SpikingMatrixClient::disconnect() {
    work.reset();
    io_threads.join_all();
    boost::system::error_code error = connection->disconnect();
    if (!error) {
        connection.reset();
    }
    QString message = QString::fromStdString(error.message());
    this->ui->statusBar->showMessage(message, 15000);
}
the work object is destroyed, so that the io_service can run out of work eventually,
the threads are joined, meaning that all work gets finished before disconnecting, thus data shouldn't get corrupted,
the disconnect() calls shutdown() and close() on the socket behind the scenes, and if there's no error, destroys the connection pointer.
Note that there's no error handling in this snippet for a failure while disconnecting, but it could very well be added, either by checking the error code (which feels more C-like) or by throwing from disconnect() if the error code it gets back represents an error.
I encountered a similar problem (callbacks not being fired), but the circumstances are different from this question's (the io_service had jobs but still would not fire the handlers). I will post this anyway; maybe it will help someone.
In my program, I set up an async_connect() then followed by io_service.run(), which blocks as expected.
async_connect() goes to on_connect_handler() as expected, which in turn fires async_write().
on_write_complete_handler() does not fire, even though the other end of the connection has received all the data and has even sent back a response.
I discovered that it was caused by me placing program logic in on_connect_handler(). Specifically, after the connection was established and after I called async_write(), I entered an infinite loop to perform arbitrary logic, never allowing on_connect_handler() to exit. I assume this prevents the io_service from executing other handlers, even when their conditions are met, because it is stuck there. (I had many misconceptions, and thought that io_service would automagically spawn threads for each async_x() call.)
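A minimal illustration of the failure mode, and one way out (socket_, buffer_, and io_service_ stand in for the question's objects; this assumes a single thread calls io_service::run()):

#include <boost/asio.hpp>

boost::asio::io_service io_service_;
boost::asio::ip::tcp::socket socket_(io_service_);
boost::asio::streambuf buffer_;

void on_write_complete_handler(const boost::system::error_code&, std::size_t) {}

// Problematic: this completion handler never returns, so the single
// io_service thread can never invoke on_write_complete_handler().
void on_connect_handler(const boost::system::error_code& ec)
{
    boost::asio::async_write(socket_, buffer_, &on_write_complete_handler);
    while (true) { /* program logic */ } // starves the io_service
}

// One fix: do one bounded chunk of work per call, then reschedule it
// through the io_service so I/O handlers get a chance to run in between.
void do_logic_step()
{
    // ... one bounded chunk of program logic ...
    io_service_.post(&do_logic_step);
}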
Hope that helps.
My application has an Asio server socket that must accept connections only from a defined list of IPs.
This filtering must be done by the application (not by the system), because the list can change at any time (I must be able to update it at any time).
The client must receive an access_denied error.
I suppose that by the time the handle_accept callback is called, the SYN/ACK has already been sent, so I don't want to accept and then brutally close the connection when I detect that the connected IP is not allowed. I don't control the client's behavior, and it may not act the same way when the connection is refused as when it is simply closed by the peer, so I want to do everything cleanly.
(But that's what I'm doing for the moment.)
Do you know how I can do that?
My access list is a container of std::strings (but I can convert it to a container of something else...).
Thank you very much
The async_accept method has an overload to obtain the peer endpoint. You can compare that value inside your async_accept handler. If it does not match an entry in your container, let the socket go out of scope. Otherwise, handle it as required by your application.
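A sketch of that approach (allowed_ips, the session hand-off, and the handler wiring are assumptions; with a single-threaded io_service no extra locking is needed around the list):

#include <memory>
#include <set>
#include <string>
#include <boost/asio.hpp>

using boost::asio::ip::tcp;

std::set<std::string> allowed_ips; // updated by the application at runtime

void start_accept(tcp::acceptor& acceptor, boost::asio::io_service& io)
{
    // Keep the socket and endpoint alive until the handler runs.
    auto socket = std::make_shared<tcp::socket>(io);
    auto peer = std::make_shared<tcp::endpoint>();

    acceptor.async_accept(*socket, *peer,
        [&acceptor, &io, socket, peer](const boost::system::error_code& ec)
        {
            if (!ec && allowed_ips.count(peer->address().to_string()))
            {
                // Allowed: hand the socket off to a session object here.
            }
            // Not allowed: this socket goes out of scope and is closed.
            start_accept(acceptor, io); // keep accepting
        });
}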
I don't know the details of your app, but this is how I'd do it.
In the accept handler/lambda
void onAccept(shared_ptr<connection> c, error_code ec)
{
    if (ec) { /* ... */ return; }
    if (isOnBlackList(c->endpoint_))
    {
        // Sockets have no async_write member; use the free function.
        async_write(c->socket_, /* a refusal message */,
            [c](error_code, size_t)
            {
                c->socket_.shutdown(tcp::socket::shutdown_both);
                // c fizzles out of all contexts...
            });
    }
    else
    {
        // successful connection execution path
    }
}