Accessing other threads or data from POCO HTTPRequestHandler - c++

I have a C++ application that reads a variety of sensors and then acts on them as required. Currently the sensors run in their own threads and have get/set methods for their values.
I'm trying to integrate a web socket server using POCO libraries to display the state of the sensors.
How do I go about getting the sensor information into the HTTPRequestHandler?
Should I be using the POCO::Application class and defining the sensors & server as subsystems? Is there another approach that I should be taking?

You can derive from HTTPRequestHandler, override handleRequest(), and give the handler access to the sensor information by, for example, storing a reference to your sensor info object as a member of the derived class.
class SensorStateRequestHandler : public Poco::Net::HTTPRequestHandler
{
public:
    explicit SensorStateRequestHandler(SensorInfo &sensorInfo)
        : sensorInfo_(sensorInfo)
    {}

    virtual void handleRequest(Poco::Net::HTTPServerRequest &request, Poco::Net::HTTPServerResponse &response) override
    {
        // receive request websocket frame
        sensorInfo_.get_state(); // must be thread safe
        // send response websocket frame with sensor state
    }

private:
    SensorInfo &sensorInfo_;
};
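To get the sensor information into each handler, the handler factory (created once, when you set up the HTTPServer) can hold the reference and hand it to every handler it creates. A minimal sketch, assuming your sensor container is the SensorInfo class above and that it outlives the server:

class SensorStateRequestHandlerFactory : public Poco::Net::HTTPRequestHandlerFactory
{
public:
    explicit SensorStateRequestHandlerFactory(SensorInfo &sensorInfo)
        : sensorInfo_(sensorInfo)
    {}

    Poco::Net::HTTPRequestHandler* createRequestHandler(const Poco::Net::HTTPServerRequest &request) override
    {
        return new SensorStateRequestHandler(sensorInfo_); // the server deletes the handler after the request
    }

private:
    SensorInfo &sensorInfo_;
};

// Wiring it up (sketch; the port number is arbitrary):
// SensorInfo sensorInfo;  // owned by your application, shared with the sensor threads
// Poco::Net::HTTPServer server(new SensorStateRequestHandlerFactory(sensorInfo),
//                              Poco::Net::ServerSocket(8080),
//                              new Poco::Net::HTTPServerParams);
// server.start();

With this approach you do not strictly need Poco::Util::Application or subsystems, although they can still be a convenient way to manage startup and shutdown of the sensor threads and the server.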

See how WebEventService in macchina.io is implemented - using Poco::Net::HTTPServer, WebSocket and Poco::NotificationQueue.
The design "in a nutshell" is a pub/sub pattern, client subscribes to notifications and receives them through WebSocket; in-process subscriptions/notifications (using Poco events) are also supported. There is a single short-living thread (HTTP handler) launched at subscription time and the rest of communication is through WebSocket reactor-like functionality, so performance and scalability is reasonably good (although there is room for improvement, depending on target platform).
You may consider using macchina.io itself (Apache license) - it is based on POCO/OSP and targets the type of application you have. WebEvent functionality will be part of Poco::NetEx in the 1.7.0 release (scheduled for September this year).
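For the in-process notification side of such a design, Poco::NotificationQueue is the usual building block: sensor threads enqueue notifications and a consumer thread dequeues them and writes WebSocket frames. A rough sketch, where the SensorNotification class is hypothetical:

#include "Poco/Notification.h"
#include "Poco/NotificationQueue.h"
#include "Poco/AutoPtr.h"
#include <string>

// Hypothetical notification carrying one sensor reading.
class SensorNotification : public Poco::Notification
{
public:
    SensorNotification(const std::string& name, double value)
        : name_(name), value_(value)
    {}
    const std::string& name() const { return name_; }
    double value() const { return value_; }

private:
    std::string name_;
    double value_;
};

// Producer (sensor thread): publish a reading.
// queue.enqueueNotification(new SensorNotification("temperature", 21.5));

// Consumer (e.g. the thread owning the WebSocket): block until something arrives.
// Poco::AutoPtr<Poco::Notification> pNf(queue.waitDequeueNotification());
// if (auto* pSensorNf = dynamic_cast<SensorNotification*>(pNf.get()))
//     ; // format and send a frame from pSensorNf->name() / pSensorNf->value()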

Related

Auctions in Boost ASIO

I'm implementing an auctioning system in C++ with Boost.Asio. There is a single centralized auctioneer (the server) and some connecting bidders (the clients). I am implementing this in an asynchronous fashion, and I have implemented the basic communication between the bidder and auctioneer (register, ping, get client list). The skeletal code for the auctioneer looks as follows:
class talkToBidder : public boost::enable_shared_from_this<talkToBidder>
{
    // Code for sending and receiving messages, which works fine
};

void on_round_end()
{
    // Choose the best bid and message the winner
    if (!itemList.empty())
        timer_reset();
}

void timer_reset()
{
    // Send the item information to the bidders
    // When the round ends, call on_round_end()
    auction_timer.expires_from_now(boost::posix_time::millisec(ROUND_TIME));
    auction_timer.async_wait(boost::bind(on_round_end));
}

void handle_accept(...)
{
    // Create new bidder...
    acceptor.async_accept(bidder->sock(), boost::bind(handle_accept, bidder, _1));
}

int main()
{
    // Create new bidder and handle accepting it
    talkToBidder::ptr bidder = talkToBidder::new_();
    acceptor.async_accept(bidder->sock(), boost::bind(handle_accept, bidder, _1));
    service.run();
}
My issue is, I need to wait for at least one bidder to connect before I can start the auction, so I cannot simply call timer_reset() before I use service.run(). What is the Boost.Asio way to go about doing this?
In asynchronous protocol design, it helps to draw Message Sequence Diagrams. Do include your timers.
The code now becomes trivial. You start your timer when the message arrives that should start your timer. Yes, this is shifting the problem a bit forwards. The real point here is that it's not a Boost Asio coding problem. In your case, that particular message appears to be the login of the first bidder, implemented as a TCP connect (SYN/ACK) which maps to handle_accept in your code.
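In other words, start the timer from the accept handler instead of before service.run(). A minimal sketch in the style of the code above, assuming a simple counter (any flag of your own works just as well):

int bidder_count = 0; // assumed counter; reset it when the auction finishes if needed

void handle_accept(talkToBidder::ptr bidder, const boost::system::error_code& err)
{
    if (!err)
    {
        if (++bidder_count == 1)
            timer_reset(); // the first bidder has connected: start the round timer

        // Keep accepting further bidders as before.
        talkToBidder::ptr next = talkToBidder::new_();
        acceptor.async_accept(next->sock(), boost::bind(handle_accept, next, _1));
    }
}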

POCO WebSocket - sending data from another class

I'm trying to create a WebSocket Server.
I can establish a connection and everything works fine so far.
In this GitHub example the data is sent within the handleRequest() method that is called when a client connects.
But can I send data to the client from another class using the established WebSocket connection?
How can I achieve this? Is this even possible?
Thank you.
It is, of course, possible. In the example you referred to, you would keep a member pointer to the WebSocket in the RequestHandlerFactory, e.g.:
class RequestHandlerFactory: public HTTPRequestHandlerFactory
{
    //...
private:
    shared_ptr<WebSocket> _pWebSocket;
};
pass it to the WebSocketRequestHandler constructor by reference, so the handler can fill it in:
return new WebSocketRequestHandler(_pWebSocket);
and WebSocketRequestHandler should look like this:
class WebSocketRequestHandler: public HTTPRequestHandler
{
public:
    WebSocketRequestHandler(shared_ptr<WebSocket>& pWebSocket) : _pWebSocket(pWebSocket)
    {}

    void handleRequest(HTTPServerRequest& request, HTTPServerResponse& response)
    {
        // ...
        _pWebSocket = make_shared<WebSocket>(request, response);
        // ...
    }

private:
    shared_ptr<WebSocket>& _pWebSocket; // reference to the factory's pointer
};
Now, after the request handler creates it, you will have a pointer to the WebSocket in the factory (which is long lived, unlike the RequestHandler, which comes and goes with every request); because the handler holds a reference to the factory's shared_ptr, the assignment in handleRequest() is visible to the factory. Keep in mind that the handler executes in its own thread, so you should have some kind of locking or notification mechanism to signal when the WebSocket has actually been created by the handler (the bool conversion of _pWebSocket will be true after the WebSocket was successfully created).
The above example only illustrates the case with a single WebSocket - if you want to have multiple ones, you should have an array or vector of pointers and add/remove them as needed. In any case, the WebSocket pointer(s) need not necessarily reside in the factory - you can either (a) put them elsewhere in your application and propagate them to the factory/handler or (b) have a global facility (with proper multi-thread-access mechanism) holding the WebSocket(s).
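Once the pointer has been populated, any other class that can reach it may send frames through it. A minimal sketch, where the mutex and the payload are assumptions of this example and sendFrame() is the regular Poco::Net::WebSocket call:

std::mutex webSocketMutex; // guards _pWebSocket, which is written by the handler thread

void sendToClient(shared_ptr<WebSocket>& pWebSocket, const std::string& payload)
{
    std::lock_guard<std::mutex> lock(webSocketMutex);
    if (pWebSocket) // only send once the handler has created the WebSocket
    {
        pWebSocket->sendFrame(payload.data(),
                              static_cast<int>(payload.size()),
                              WebSocket::FRAME_TEXT);
    }
}

The same lock (or another agreed-upon mechanism) should also be taken by the handler when it assigns _pWebSocket.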

How should I find which client I am receiving from in Boost Asio in UDP?

So the only way I know to find which client I received from is to compare the received endpoint against every client in a loop, and I was wondering if there is a more elegant way of handling this.
In TCP, every client has its own socket, so it is immediately clear which client a message came from. If I make every client have its own socket in UDP, will it be more or less efficient?
I was also thinking of making a global socket and having every client object listen only to its own endpoint, but I don't think that's possible, or efficient, in Asio.
The application code is responsible for demultiplexing. At a high-level, there are two options:
Use a single endpoint to conceptually function as an acceptor. Upon receiving a handshake message, the client would instantiate a new local endpoint, and inform the client to use the newly constructed endpoint for the remainder of the client's session. This results in a socket per client, and with connected UDP sockets, a client can be guaranteed to only receive messages from the expected remote endpoint. This should be no less efficient than the same approach used with TCP sockets. However, it requires making changes to the application protocol on both the sender and receiver.
Use a single socket. Upon receiving a message, the remote endpoint is used to demultiplex to the client object. If the application depends upon the demultiplex abstraction, then the implementation may be freely changed to best suit the application's usage. This requires no changes to the application protocol.
The first option will more easily support higher concurrency levels, as each client can control the lifetime of its asynchronous call chain. While it is possible to have a call chain per client in the second option, controlling the lifetime introduces complexity, as all asynchronous call chains are bound to the same I/O object.
On the other hand, as concurrency increases, so does memory. Hence, the first option is likely to use more memory than the second option. Furthermore, controlling overall memory is easier in the second option, as the concurrency level will not be completely dynamic. In either case, reactor-style operations can be used to mitigate the overall memory usage.
In the end, abstract the application from the implementation whilst keeping the code maintainable. Once the application is working, profile, identify bottlenecks, and make choices based on actual data.
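For the first option, the key ingredient is a connected UDP socket: once connect() has been called, the socket only receives datagrams from that one remote endpoint, so the operating system does the demultiplexing. A rough sketch, assuming client_endpoint has already been learned from a handshake datagram on the "acceptor" socket:

// Per-client socket bound to a fresh local port and connected to the client.
boost::asio::ip::udp::socket per_client_socket(io_service);
per_client_socket.open(boost::asio::ip::udp::v4());
per_client_socket.bind(
    boost::asio::ip::udp::endpoint(boost::asio::ip::udp::v4(), 0)); // port 0 = any free port
per_client_socket.connect(client_endpoint); // datagrams from other endpoints are now rejected

// Tell the client, over the original socket, which local port to use from now on:
// per_client_socket.local_endpoint().port()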
To expand slightly on the second option, here is a complete minimal example of a basic client_manager that associates endpoints to client objects:
#include <cassert>
#include <functional>
#include <memory>
#include <sstream>
#include <string>
#include <unordered_map>

#include <boost/asio.hpp>

namespace ip = boost::asio::ip;

/// @brief Mockup client.
class client:
    public std::enable_shared_from_this<client>
{
public:
    explicit client(ip::udp::endpoint endpoint)
        : endpoint_(endpoint)
    {}

    const ip::udp::endpoint& endpoint() const { return endpoint_; }

private:
    ip::udp::endpoint endpoint_;
};

/// @brief Basic class that manages clients. Given an endpoint, the
///        associated client, if any, can be found.
class client_manager
{
private:
    // The underlying implementation used by the manager.
    using container_type = std::unordered_map<
        ip::udp::endpoint, std::shared_ptr<client>,
        std::size_t (*)(const ip::udp::endpoint&)>;

    /// @brief Return a hash value for the provided endpoint.
    static std::size_t get_hash(const ip::udp::endpoint& endpoint)
    {
        std::ostringstream stream;
        stream << endpoint;
        std::hash<std::string> hasher;
        return hasher(stream.str());
    }

public:
    using key_type = container_type::key_type;
    using mapped_type = container_type::mapped_type;

    /// @brief Constructor.
    client_manager()
        : clients_(0, &client_manager::get_hash)
    {}

    // The public abstraction upon which the application will depend.
public:
    /// @brief Add a client to the manager.
    void add(mapped_type client)
    {
        clients_[client->endpoint()] = client;
    }

    /// @brief Given an endpoint, retrieve the associated client. Return
    ///        an empty shared pointer if one is not found.
    mapped_type get(key_type key) const
    {
        auto result = clients_.find(key);
        return clients_.end() != result
            ? result->second // Found client.
            : mapped_type(); // No client found.
    }

private:
    container_type clients_;
};

int main()
{
    // Unique endpoints.
    ip::udp::endpoint endpoint1(ip::address::from_string("11.11.11.11"), 1111);
    ip::udp::endpoint endpoint2(ip::address::from_string("22.22.22.22"), 2222);
    ip::udp::endpoint endpoint3(ip::address::from_string("33.33.33.33"), 3333);

    // Create a client for each endpoint.
    auto client1 = std::make_shared<client>(endpoint1);
    auto client2 = std::make_shared<client>(endpoint2);
    auto client3 = std::make_shared<client>(endpoint3);

    // Add the clients to the manager.
    client_manager manager;
    manager.add(client1);
    manager.add(client2);
    manager.add(client3);

    // Locate a client based on the endpoint.
    auto client_result = manager.get(endpoint2);
    assert(client1 != client_result);
    assert(client2 == client_result);
    assert(client3 != client_result);
}
Note that as the application only depends upon the client_manager abstraction (i.e. the pre and post conditions for client_manager::add() and client_manager::get()), the client_manager implementation can be changed without affecting the application, as long as the implementation maintains those pre and post conditions. For instance, instead of using std::unordered_map, it could be implemented with a sequence container, such as std::vector, or an ordered associative container, such as std::map. Choose a container that best fits the expected usage. After profiling, if the container choice is an identified bottleneck, then change the implementation of client_manager to use a more suitable container based on the actual usage.

how to pass object of a class created inside client to server?

I have a client and a server class in which I am sending a message from the client to the server using TCP sockets.
I have a class named Employee in client.cpp, consisting of members such as:
int emp_id;
char *emp_name;
float emp_weight;
My question is as follows:
1) How do I send an object of the Employee class from the client side to the server, i.e. how will I pass an employee_object such as the following to the server:
employee_object.emp_id = 10;
employee_object.emp_name = new char[30];
employee_object.emp_weight = 50.2;
Any help will be much appreciated. I am doing this to make clear to myself how to pass objects of different classes from client to server.
You have two main options: directly write the struct or class to the socket, or "serialize" it.
If you do a direct write, it's quite simple, but it requires you take care that your client and server have the same "width" (32 or 64 bit) and "endianness" (little or big). If you're dealing with regular Intel or AMD desktop or server machines only, this isn't much of an issue.
If you want to "serialize," the sky is the limit. Look up Protocol Buffers, Cap'n'Proto, JSON, etc. There are tons of libraries for this, but Stack Overflow is not the site to figure out which one you should use--you'll have to do some research. Some key considerations are whether the format is human-readable (like JSON) and whether it is fast (like Cap'n'Proto, or the direct method mentioned previously).

boost asio: maintaining a list of connected clients

I'm looking for the best way to modify the Boost Asio HTTP Server 3 example to maintain a list of the currently connected clients.
If I modify server.hpp from the example as:
class server : private boost::noncopyable
{
public:
    typedef std::vector< connection_ptr > ConnectionList;

    // ...

    ConnectionList::const_iterator GetClientList() const
    {
        return connection_list_.begin();
    }

    void handle_accept(const boost::system::error_code& e)
    {
        if (!e)
        {
            connection_list_.push_back( new_connection_ );
            new_connection_->start();
            // ...
        }
    }

private:
    ConnectionList connection_list_;
};
Then I mess up the lifetime of the connection object such that it doesn't go out of scope and disconnect from the client because it still has a reference maintained in the ConnectionList.
If instead my ConnectionList is defined as typedef std::vector< boost::weak_ptr< connection > > ConnectionList; then I run the risk of the client disconnecting and nullifying its pointer while somebody is using it from GetClientList().
Anybody have a suggestion on a good & safe way to do this?
Thanks,
PaulH
HTTP is stateless. That means it's difficult to even define what "currently connected client" means, not to mention keep track of which clients are connected at any given time. The only time there's really a "current client" is from the time a request is received to the time that request is serviced (often only a few milliseconds). A connection is not maintained even for the duration of downloading one page; rather, each item on the page is requested and sent separately.
The typical method for handling this is to use a fairly simple timeout -- a client is considered "connected" for some arbitrary length of time (a few minutes) after they send in a request. A cookie of some sort is used to identify the client sending in a particular request.
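A minimal sketch of that timeout bookkeeping, where the session-id cookie and the five-minute window are assumptions of this example:

#include <chrono>
#include <string>
#include <unordered_map>

using clock_type = std::chrono::steady_clock;

std::unordered_map<std::string, clock_type::time_point> last_seen; // session id -> last request time
const auto kTimeout = std::chrono::minutes(5);

// Call this for every request, keyed by the client's cookie / session id.
void touch(const std::string& session_id)
{
    last_seen[session_id] = clock_type::now();
}

// A client counts as "connected" if it made a request within the timeout window.
bool is_connected(const std::string& session_id)
{
    auto it = last_seen.find(session_id);
    return it != last_seen.end() && (clock_type::now() - it->second) < kTimeout;
}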
The rest of what you're talking about is just a matter of making sure the collection you use to hold connection information is thread safe. You have one thread that adds connections, one thread that deletes them, and N threads that use the data currently in the list. The standard collections don't guarantee any thread safety, but there are others around that do.
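Given the weak_ptr concern above, one common compromise is to store weak_ptrs behind a mutex and hand callers a snapshot of locked shared_ptrs, pruning expired entries as you go. A sketch of what that could look like (using std::shared_ptr here; the boost::shared_ptr used by the Asio example behaves the same way):

#include <memory>
#include <mutex>
#include <vector>

class connection; // the connection type from the HTTP Server 3 example
typedef std::shared_ptr<connection> connection_ptr;

class connection_list
{
public:
    void add(const connection_ptr& conn)
    {
        std::lock_guard<std::mutex> lock(mutex_);
        connections_.push_back(conn);
    }

    // Return shared_ptrs to the connections still alive; expired entries are pruned.
    std::vector<connection_ptr> snapshot()
    {
        std::lock_guard<std::mutex> lock(mutex_);
        std::vector<connection_ptr> alive;
        for (auto it = connections_.begin(); it != connections_.end();)
        {
            if (connection_ptr conn = it->lock())
            {
                alive.push_back(conn);
                ++it;
            }
            else
            {
                it = connections_.erase(it); // the client went away; drop the stale entry
            }
        }
        return alive;
    }

private:
    std::mutex mutex_;
    std::vector<std::weak_ptr<connection>> connections_;
};

Callers iterate over the returned snapshot, which keeps those connections alive only for as long as the caller holds the vector, so the list itself no longer pins connection lifetimes.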