Access to class data from handler for boost deadline_timer - c++

I was able to create a handler for a boost deadline_timer (which is a class member)
by declaring the handler static. Unfortunately, this prevents access to non-static member data.
I have a series of timeouts, so my idea was to have a single deadline_timer
while maintaining an ordered list of timeout events.
Whenever the current timeout event fires, the class would re-arm the timer
for the next timeout event in the list, calculating the remaining time for that event.
For this concept to work, the handler would need to manipulate
non-static data. But this is not possible since boost::asio requires a static handler.
Anybody got an idea how to handle this?
class TimerController {
public:
    void setTimer(const eibaddr_t gad, const timesecs_t timedelay);
    void cancelTimer(const eibaddr_t gad);
    bool isRunning(const eibaddr_t gad);
    void setGad(const eibaddr_t gad);
    static void timerHandler(const boost::system::error_code &ec);

private:
    boost::asio::deadline_timer* m_pTimer;

    struct timerList_s
    {
        eibaddr_t gad;
        boost::posix_time::ptime absTimeOut;

        timerList_s(const timerList_s& elem)
            : gad(elem.gad),
              absTimeOut(elem.absTimeOut)
        {
        }

        timerList_s(const eibaddr_t& pgad, const boost::posix_time::ptime pato)
            : gad(pgad),
              absTimeOut(pato)
        {
        }

        timerList_s& operator= (const timerList_s& elem)
        {
            gad = elem.gad;
            absTimeOut = elem.absTimeOut;
            return *this;
        }

        bool operator< (const timerList_s& elem) const
        {
            return (absTimeOut < elem.absTimeOut);
        }

        bool operator== (const timerList_s& elem) const
        {
            return (gad == elem.gad);
        }
    };

    std::list<timerList_s> m_timers;
};

It is possible to use the deadline_timer class with non-static data by using boost::bind in the following way: deadline_.async_wait(boost::bind(&client::check_deadline, this));. Details are available in ASIO's examples, for instance, here.

I have a series of timeouts. So my idea was to have a single
deadline_timer while maintaining an ordered list of timeout events.
Every time the next timeout event would happen, the class would
retrigger the timer with the next timeout event in the class
calculating the remaining time for this timeout event.
This is a very odd design.
For this concept to work the handler would need to manipulate
non-static data. But this is not possible since boost::asio requires a
static handler.
boost::asio does not require a static handler; see the documentation. It requires a handler with the signature:
void handler(
const boost::system::error_code& error // Result of operation.
);
The typical recipe here is to use boost::bind to bind a member function to the handler. The async TCP client example shows one way to do this. The author of the asio library has an excellent blog post describing this concept in detail if you have trouble understanding it.
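Putting that recipe together with the original TimerController, the single-timer design might look like the sketch below. This is illustrative only: the reschedule() name, the reduced timerList_s, and the timer-by-value member are assumptions, not code from the question.

#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
#include <list>

// Reduced version of the question's timerList_s, keeping only the fields used here.
struct timerList_s {
    int gad; // stand-in for eibaddr_t
    boost::posix_time::ptime absTimeOut;
};

class TimerController {
public:
    explicit TimerController(boost::asio::io_service& io) : m_timer(io) {}

    void reschedule() {
        if (m_timers.empty()) return;
        m_timer.expires_at(m_timers.front().absTimeOut);
        // boost::bind turns the member function into a handler object,
        // so no static function is needed.
        m_timer.async_wait(boost::bind(&TimerController::timerHandler, this,
                                       boost::asio::placeholders::error));
    }

private:
    void timerHandler(const boost::system::error_code& ec) {
        if (ec == boost::asio::error::operation_aborted)
            return;                // timer was cancelled or re-armed
        m_timers.pop_front();      // full access to non-static members here
        reschedule();              // re-arm for the next timeout event
    }

    boost::asio::deadline_timer m_timer;
    std::list<timerList_s> m_timers; // kept sorted by absTimeOut
};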

Related

Asynchronous model in grpc c++

My team is designing a scalable solution with a micro-services architecture, and we plan to use gRPC as the transport communication between layers. We've decided to use the async gRPC model. The design that the example (greeter_async_server.cc) provides doesn't seem viable if I scale the number of RPC methods, because then I'll have to create a new class for every RPC method and create their objects in HandleRpcs() like this:
Pastebin (Short example code).
void HandleRpcs() {
    new CallDataForRPC1(&service_, cq_.get());
    new CallDataForRPC2(&service_, cq_.get());
    new CallDataForRPC3(&service_, cq_.get());
    // and so on...
}
It'll be hard-coded, and all the flexibility will be lost.
I have around 300-400 RPC methods to implement, and having 300-400 classes will be cumbersome and inefficient when I have to handle more than 100K RPC requests/sec; this solution is a very bad design. I can't bear the overhead of creating objects this way on every single request. Can somebody kindly provide me a workaround? Can async gRPC in C++ not be as simple as its sync companion?
Edit: To make the situation clearer, and for those who might be struggling to grasp the flow of this async example, I'm writing down what I've understood so far; please correct me if I'm wrong somewhere.
In async gRPC, every time we have to bind a unique tag with the completion queue so that, when we poll, the server can hand it back to us when the particular RPC is hit by the client, and we can infer the type of the call from the returned tag.
service_->RequestRPC2(&ctx_, &request_, &responder_, cq_, cq_, this); Here we're using the address of the current object as the unique tag. This is like registering our RPC call on the completion queue. Then we poll down in HandleRpcs() to see whether the client has hit the RPC; if so, cq_->Next(&tag, &ok) will fill the tag. The polling code snippet:
while (true) {
    GPR_ASSERT(cq_->Next(&tag, &ok));
    GPR_ASSERT(ok);
    static_cast<CallData*>(tag)->Proceed();
}
Since the unique tag that we registered with the queue was the address of the CallData object, we're able to call Proceed(). This was fine for one RPC whose logic lived inside Proceed(). But with more RPCs, we'd have all of them inside CallData, and on polling we'd be calling the single Proceed() containing the logic for (say) RPC1 (postgres calls), RPC2 (mongodb calls), and so on. That is like writing my whole program inside one function. So, to avoid this, I used a GenericCallData class with a virtual void Proceed() and made derived classes out of it, one class per RPC, each with its own logic inside its own Proceed(), as sketched below. This is a working solution, but I want to avoid writing many classes.
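For reference, a minimal sketch of that virtual-Proceed approach (the class and method names here are illustrative, not taken from the actual code):

// Hypothetical base class: the completion-queue tag is cast to this type,
// and virtual dispatch routes to the per-RPC logic.
class GenericCallData {
public:
    virtual ~GenericCallData() = default;
    virtual void Proceed() = 0;
};

class Rpc1CallData : public GenericCallData {
public:
    void Proceed() override { /* RPC1-specific logic, e.g. postgres calls */ }
};

class Rpc2CallData : public GenericCallData {
public:
    void Proceed() override { /* RPC2-specific logic, e.g. mongodb calls */ }
};

// The polling loop stays generic:
//     GPR_ASSERT(cq_->Next(&tag, &ok));
//     static_cast<GenericCallData*>(tag)->Proceed();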
Another solution I tried was keeping all the RPC function logic out of Proceed() and in separate functions, maintaining a global std::map<long, std::function</*some params*/>>. Whenever I register an RPC with a unique tag on the queue, I store its corresponding logic function (which I hard-code into the statement, binding all the required parameters), with the unique tag as the key. On polling, when I get the tag, I look it up in the map and call the corresponding saved function. But there's one more hurdle: I have to do this inside the function logic:
// pseudo code
void function(reply, responder, context, service)
{
    // Register this RPC with another unique tag so new incoming requests of
    // the same type can be served from the completion queue.
    service_->RequestRPC1(/*params*/, new_unique_id);

    // Save this new_unique_id and the current function into the map, so when
    // the tag is returned we can do the lookup.
    map.emplace(new_unique_id, function);

    // Now you're free to do your logic
    // ...
}
As you can see, the code has now spread into another module, and it's per-RPC.
I hope that clears up the situation.
I was wondering whether somebody has implemented this type of server in an easier way.
This post is pretty old by now, but I have not seen any answer or example regarding this, so I will show other readers how I solved it. I have around 30 RPC calls and was looking for a way of reducing the footprint when adding and removing RPC calls. It took me some iterations to figure out a good way to solve it.
So my interface for getting RPC requests from my (g)RPC library is a callback interface that the recipient needs to implement. The interface looks like this:
class IRpcRequestHandler
{
public:
    virtual ~IRpcRequestHandler() = default;

    virtual void onZigbeeOpenNetworkRequest(const smarthome::ZigbeeOpenNetworkRequest& req,
                                            smarthome::Response& res) = 0;
    virtual void onZigbeeTouchlinkDeviceRequest(const smarthome::ZigbeeTouchlinkDeviceRequest& req,
                                                smarthome::Response& res) = 0;
    ...
};
And some code for setting up/registering each RPC method after the gRPC server is started:
void ready()
{
    SETUP_SMARTHOME_CALL("ZigbeeOpenNetwork",                                    // Alias used for debug messages.
                         smarthome::Command::AsyncService::RequestZigbeeOpenNetwork, // Generated gRPC service method for async.
                         smarthome::ZigbeeOpenNetworkRequest,                    // Generated gRPC service request message.
                         smarthome::Response,                                    // Generated gRPC service response message.
                         IRpcRequestHandler::onZigbeeOpenNetworkRequest);        // The callback to invoke when a request arrives.

    SETUP_SMARTHOME_CALL("ZigbeeTouchlinkDevice",
                         smarthome::Command::AsyncService::RequestZigbeeTouchlinkDevice,
                         smarthome::ZigbeeTouchlinkDeviceRequest,
                         smarthome::Response,
                         IRpcRequestHandler::onZigbeeTouchlinkDeviceRequest);
    ...
}
This is all that you need to care about when adding and removing RPC methods.
The SETUP_SMARTHOME_CALL is a home-cooked macro which looks like this:
#define SETUP_SMARTHOME_CALL(ALIAS, SERVICE, REQ, RES, CALLBACK_FUNC) \
    new ServerCallData<REQ, RES>(                                     \
        ALIAS,                                                        \
        std::bind(&SERVICE,                                           \
                  &mCommandService,                                   \
                  std::placeholders::_1,                              \
                  std::placeholders::_2,                              \
                  std::placeholders::_3,                              \
                  std::placeholders::_4,                              \
                  std::placeholders::_5,                              \
                  std::placeholders::_6),                             \
        mCompletionQueue.get(),                                       \
        std::bind(&CALLBACK_FUNC, requestHandler, std::placeholders::_1, std::placeholders::_2))
I think the ServerCallData class looks like the one from gRPC's examples, with a few modifications. ServerCallData is derived from a non-template class with an abstract function void proceed(bool ok) for the CompletionQueue::Next() handling. When ServerCallData is created, it calls the SERVICE method to register itself on the CompletionQueue, and on the first proceed(ok) call it clones itself, which registers another instance. I can post some sample code for that as well if someone is interested.
EDIT: Added some more sample code below.
GrpcServer
class GrpcServer
{
public:
    explicit GrpcServer(std::vector<grpc::Service*> services);
    virtual ~GrpcServer();

    void run(const std::string& sslKey,
             const std::string& sslCert,
             const std::string& password,
             const std::string& listenAddr,
             uint32_t port,
             uint32_t threads = 1);

private:
    virtual void ready();  // Called after the gRPC server is created and before polling the CQ.
    void handleRpcs();     // Polls the CQ; can be run by multiple threads. Casts each tag to CallData and calls CallData::proceed().

    std::unique_ptr<ServerCompletionQueue> mCompletionQueue;
    std::unique_ptr<Server> mServer;
    std::vector<grpc::Service*> mServices;
    std::list<std::shared_ptr<std::thread>> mThreads;
    ...
};
And the main part of the CallData object:
template <typename TREQUEST, typename TREPLY>
class ServerCallData : public ServerCallMethod
{
public:
    explicit ServerCallData(const std::string& methodName,
                            std::function<void(ServerContext*,
                                               TREQUEST*,
                                               ::grpc::ServerAsyncResponseWriter<TREPLY>*,
                                               ::grpc::CompletionQueue*,
                                               ::grpc::ServerCompletionQueue*,
                                               void*)> serviceFunc,
                            grpc::ServerCompletionQueue* completionQueue,
                            std::function<void(const TREQUEST&, TREPLY&)> callback,
                            bool first = false)
        : ServerCallMethod(methodName),
          mResponder(&mContext),
          serviceFunc(serviceFunc),
          completionQueue(completionQueue),
          callback(callback)
    {
        requestNewCall();
    }

    void proceed(bool ok) override
    {
        if (!ok)
        {
            delete this;
            return;
        }

        if (callStatus() == ServerCallMethod::PROCESS)
        {
            callStatus() = ServerCallMethod::FINISH;
            new ServerCallData<TREQUEST, TREPLY>(callMethodName(), serviceFunc, completionQueue, callback);
            try
            {
                callback(mRequest, mReply);
            }
            catch (const std::exception& e)
            {
                mResponder.Finish(mReply, Status::CANCELLED, this);
                return;
            }
            mResponder.Finish(mReply, Status::OK, this);
        }
        else
        {
            delete this;
        }
    }

private:
    void requestNewCall()
    {
        serviceFunc(&mContext, &mRequest, &mResponder, completionQueue, completionQueue, this);
    }

    ServerContext mContext;
    TREQUEST mRequest;
    TREPLY mReply;
    ServerAsyncResponseWriter<TREPLY> mResponder;
    std::function<void(ServerContext*,
                       TREQUEST*,
                       ::grpc::ServerAsyncResponseWriter<TREPLY>*,
                       ::grpc::CompletionQueue*,
                       ::grpc::ServerCompletionQueue*,
                       void*)> serviceFunc;
    std::function<void(const TREQUEST&, TREPLY&)> callback;
    grpc::ServerCompletionQueue* completionQueue;
};
Although the thread is old, I wanted to share a solution I am currently implementing. It mainly consists of templated classes inheriting from CallData to be scalable. This way, each new RPC only requires specializing the templates of the required CallData methods.
CallData header:
class CallData {
protected:
    enum Status { CREATE, PROCESS, FINISH };
    Status status;

    virtual void treat_create() = 0;
    virtual void treat_process() = 0;

public:
    virtual ~CallData() = default; // virtual: Proceed() deletes through a CallData pointer
    void Proceed();
};
CallData Proceed implementation:
void CallData::Proceed() {
    switch (status) {
    case CREATE:
        status = PROCESS;
        treat_create();
        break;
    case PROCESS:
        status = FINISH;
        treat_process();
        break;
    case FINISH:
        delete this;
    }
}
Inheriting from CallData header (simplified):
template <typename Request, typename Reply>
class CallDataTemplated : public CallData {
    static_assert(std::is_base_of<google::protobuf::Message, Request>::value,
                  "Request and reply must be protobuf messages");
    static_assert(std::is_base_of<google::protobuf::Message, Reply>::value,
                  "Request and reply must be protobuf messages");

private:
    // Service, completion queue, context, response writer, ...
    Request request;
    Reply reply;

protected:
    void treat_create() override;
    void treat_process() override;

public:
    ...
};
Then, for specific RPCs, in theory you should be able to do things like:
template <>
void CallDataTemplated<HelloRequest, HelloReply>::treat_process() {
    ...
}
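For completeness, a hedged sketch of what the matching treat_create specialization could look like, borrowing the Greeter service from gRPC's helloworld example; the service_, context_, responder_ and cq_ members stand in for the fields elided above:

template <>
void CallDataTemplated<HelloRequest, HelloReply>::treat_create() {
    // Ask gRPC to deliver the next SayHello call to this object,
    // using "this" as the completion-queue tag.
    service_->RequestSayHello(&context_, &request, &responder_, cq_, cq_, this);
}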
It's a lot of templated methods, but preferable to creating a class per RPC from my point of view.

c++ Expiring map entries thread vs event-loop

I am looking for some advice on a C++ design issue I am having. Some background on the issue...
I have a Runnable class, as shown below:
class Runnable
{
public:
    Runnable();
    virtual ~Runnable();

    void Stop();
    void Start();

    Runnable(Runnable const&) = delete;
    Runnable& operator=(Runnable const&) = delete;

protected:
    virtual void Run() = 0; // main thread function
    std::atomic<bool> mStop;

private:
    static void StaticRun(void*);
    std::thread mThread;
};
Then I have an ExpirationMap that inherits the Runnable class as shown below:
class ExpirationMap : Runnable
{
public:
    explicit ExpirationMap();
    virtual ~ExpirationMap();

    void Init(uint8_t);
    void Run() override;
    virtual void DoExpire(uint8_t) = 0; // Expiry function to be implemented by the derived classes.

private:
    uint8_t mDelay;
};
I have a third class that inherits from the ExpirationMap class. This class encapsulates a std::unordered_map.
template <typename KeyType, typename ValueType>
class MyMap : public ExpirationMap
{
public:
    void DoExpire(uint8_t) override;
    void Init(uint8_t);
    void Add(const KeyType, const ValueType&);
    ValueType Get(const KeyType);
    bool Exists(const KeyType);
    ValueType Remove(const KeyType);
    void Clear();
    ...
private:
    std::unordered_map<KeyType, ValueType> mMap;
    std::shared_ptr<boost::shared_mutex> mLock;
};
MyMap::Init kicks off ExpirationMap::Init, which spawns a thread with MyMap::DoExpire as the thread function. MyMap::DoExpire is basically a never-ending while loop whose job is to scan the elements of MyMap and remove the expired entries. Each element (value) of the map has an expiration time, which is used to check whether an element is a candidate for expiry. All of this is implemented and working well.
Sorry for the long intro, but now on to the real problem.
Now I have a situation where I have to port this code to an event-loop based platform. Since the event-loop system supports timers with callbacks, I could pass the DoExpire function as the callback to the timer. However, I am trying to see if there is a better way to refactor the code so that it works on both platforms, i.e., thread based (what I have now) and event-loop based, while minimizing duplication. When creating MyMap, I want to be able to say: create a map that uses thread-based expiry, or timer+callback-based expiry. Any suggestions or advice are greatly appreciated. Thanks.
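For illustration, the kind of seam I have in mind would be something like the sketch below (the ExpiryScheduler name and its single method are made up for this sketch, not existing code):

#include <cstdint>
#include <functional>

// Sketch: abstract "how DoExpire gets invoked" so MyMap stays agnostic
// to threads vs. event loops.
class ExpiryScheduler {
public:
    virtual ~ExpiryScheduler() = default;
    // Arrange for cb to be invoked every delaySeconds, using whatever
    // mechanism the platform provides (dedicated thread or event-loop timer).
    virtual void Schedule(std::function<void()> cb, uint8_t delaySeconds) = 0;
};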
I think you can do better than either approach -- you can make it so that you do not need to periodically do anything at all, and thus you won't need either event loops or an update-thread.
Since every entry in your map already has an expiration-time associated with it, all you need to do is build an API layer around the map object that pretends the expired object is no longer there, e.g. (pseudocode):
bool ExpirationMap :: Exists(const KeyType & key) const
{
    if (mMap.has_key(key) == false) return false;
    return (mMap[key].mExpirationTime > now);  // expired entries don't count!
}
ValueType ExpirationMap :: Get(const KeyType & key) const
{
    return Exists(key) ? mMap[key] : ValueType();
}
This is sufficient to get the behavior you want; the only remaining issue (which may or may not be an actual problem, depending on your use case) is that the map might become large over time, full of useless old/expired entries. That can be handled in various ways (including just ignoring the problem if memory usage turns out not to be an issue, or removing an entry only when it is looked up and found to be expired), but one close-to-optimal way to handle it would be to keep a second internal data structure (e.g. a std::priority_queue that holds the entries sorted by expiration time); then, any time any method is called, you can do something like:
while (!mEntriesByExpirationTime.empty())
{
    const ByTimeEntry & firstEntry = mEntriesByExpirationTime.top();
    if (firstEntry.mExpirationTime < now)
    {
        mMap.erase(firstEntry.mKey);
        mEntriesByExpirationTime.pop();
    }
    else break;
}
... Since the entries in this priority_queue are held in expiration-time order, this call is as inexpensive as it can be: it never iterates over more than the expired entries that ought to be removed right now.
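Fleshing that out a little, here is a hedged sketch of how the two structures could fit together; ByTimeEntry, the key/value types, and the min-heap comparator are illustrative assumptions:

#include <chrono>
#include <queue>
#include <string>
#include <unordered_map>
#include <vector>

using Clock = std::chrono::steady_clock;

struct ByTimeEntry {
    Clock::time_point mExpirationTime;
    std::string mKey;
    bool operator>(const ByTimeEntry& o) const {
        return mExpirationTime > o.mExpirationTime;
    }
};

class ExpirationMap {
public:
    // Called at the top of every public method: removes only entries
    // that have already expired, never touching live ones.
    void Prune() {
        const auto now = Clock::now();
        while (!mEntriesByExpirationTime.empty() &&
               mEntriesByExpirationTime.top().mExpirationTime < now) {
            mMap.erase(mEntriesByExpirationTime.top().mKey);
            mEntriesByExpirationTime.pop();
        }
    }

private:
    std::unordered_map<std::string, int> mMap; // value type illustrative
    // std::greater + operator> makes this a min-heap: soonest expiration on top.
    std::priority_queue<ByTimeEntry, std::vector<ByTimeEntry>,
                        std::greater<ByTimeEntry>> mEntriesByExpirationTime;
};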
A design that doesn't require your program to wake up at regular intervals is generally preferable over one that does, particularly on power-constrained platforms like laptops and phones. The CPU can't sleep efficiently if your program keeps demanding to be woken up every so often :)

Lifetime issues of std::promise in an async API

I'm wondering how to develop an asynchronous API using promises and futures.
The application uses a single data stream for both unsolicited periodic data and request/reply communication.
For request/reply, blocking until the reply is received is not an option, and I don't want to litter the code with callbacks, so I'd like to write some kind of SendMessage that accepts the id of the expected reply and completes only upon reception. It's up to the caller to read the reply.
A candidate API could be:
std::future<void> sendMessage(Message msg, id expected)
{
    // Write the message.
    auto promise = std::make_shared<std::promise<void>>();
    // Memorize the promise somewhere accessible to the receiving thread.
    return promise->get_future();
}
The worker thread, upon reception of a message, should be able to query a data structure to know whether someone is waiting for it and "release" the future.
Given that promises are not re-usable, what I'm trying to understand is what kind of data structure I should use to manage "in-flight" promises.
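For concreteness, one minimal sketch of such a structure might be (the PendingReplies name, the id type, and the mutex-guarded map are illustrative assumptions):

#include <future>
#include <mutex>
#include <unordered_map>

using MessageId = int; // illustrative id type

// Registry of in-flight promises: sendMessage() registers one,
// the receiving thread fulfills and erases it.
class PendingReplies {
public:
    std::future<void> Register(MessageId id) {
        std::lock_guard<std::mutex> lk(mMutex);
        return mPending[id].get_future(); // one promise per expected reply id
    }

    void Fulfill(MessageId id) {
        std::lock_guard<std::mutex> lk(mMutex);
        auto it = mPending.find(id);
        if (it == mPending.end()) return; // nobody is waiting for this id
        it->second.set_value();           // release the future
        mPending.erase(it);               // promises are single-use
    }

private:
    std::mutex mMutex;
    std::unordered_map<MessageId, std::promise<void>> mPending;
};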
Setting the state of a shared flag can let the worker know whether the other side, say the boss, is still expecting the result.
The shared flag, along with the promise and the future, can be enclosed in a class (template), say Request. The boss sets the flag by destructing his copy of the request, and the worker queries whether the boss is still expecting the request to be done by calling a member function on his own copy of the request.
Simultaneous reading/writing of the flag should probably be synchronized.
The boss may not access the promise, and the worker may not access the future.
There should be at most two copies of the request, because the flag is set on the destruction of a request object. To achieve this, we can declare the corresponding member functions as deleted or private, and provide two copies of the request on construction.
Here follows a simple implementation of Request:
#include <atomic>
#include <future>
#include <memory>

template <class T>
class Request {
public:
    struct Detail {
        std::atomic<bool> is_canceled_{false};
        std::promise<T> promise_;
        std::future<T> future_ = promise_.get_future();
    };

    static auto NewRequest() {
        std::unique_ptr<Request> copy1{new Request()};
        std::unique_ptr<Request> copy2{new Request(*copy1)};
        return std::make_pair(std::move(copy1), std::move(copy2));
    }

    Request(Request &&) = delete;

    ~Request() {
        detail_->is_canceled_.store(true);
    }

    Request &operator=(const Request &) = delete;
    Request &operator=(Request &&) = delete;

    // simple api
    std::promise<T> &Promise(const WorkerType &) {
        return detail_->promise_;
    }

    std::future<T> &Future(const BossType &) {
        return detail_->future_;
    }

    // return value:
    //   true if available, false otherwise
    bool CheckAvailable() {
        return detail_->is_canceled_.load() == false;
    }

private:
    Request() : detail_(new Detail{}) {}
    Request(const Request &) = default;

    std::shared_ptr<Detail> detail_;
};

template <class T>
auto SendMessage() {
    auto result = Request<T>::NewRequest();
    // TODO : send result.second (the other copy) to the worker
    return std::move(result.first);
}
A new request is constructed by the factory function NewRequest; the return value is a std::pair containing two std::unique_ptrs, each holding a copy of the newly created request.
The worker can now use the member function CheckAvailable() to check whether the request has been canceled.
And the shared state is managed properly (I believe) by the std::shared_ptr.
Note on std::promise<T> &Promise(const WorkerType &): the const reference parameter (which should be replaced with a proper type according to your implementation) is there to prevent the boss from calling this function by accident, while the worker should easily be able to provide a proper argument for calling it. The same goes for std::future<T> &Future(const BossType &).
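A possible usage sketch, with WorkerType and BossType written as empty tag types since the answer deliberately leaves them implementation-defined (they would have to be declared before Request itself for this to compile):

// Hypothetical tag types standing in for WorkerType/BossType above.
struct WorkerType {};
struct BossType {};

void example() {
    auto copies = Request<int>::NewRequest();
    auto& bossCopy = copies.first;
    auto& workerCopy = copies.second; // in practice, handed to the worker

    // Worker side: check the flag before doing expensive work.
    if (workerCopy->CheckAvailable())
        workerCopy->Promise(WorkerType{}).set_value(42);

    // Boss side: wait for the result.
    int result = bossCopy->Future(BossType{}).get();
    (void)result;
    // When bossCopy is destroyed, the shared is_canceled_ flag is set,
    // which the worker observes through CheckAvailable().
}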

How and why one would use Boost signals2?

Learning C++ and trying to get familiar with some patterns. The signals2 doc clearly has a vast array of things I can do with slots and signals. What I don't understand is what types of applications (use cases) I should use it for.
I'm thinking along the lines of a state machine dispatching change events. Coming from a managed-language background (C#, Java, etc.), you'd use an event dispatcher, a static ref, or a callback.
Are there difficulties in C++ with using cross-class callbacks? Is that essentially why signals2 exists?
One of the example cases is a document/view. How is this pattern better suited than, say, using a vector of functions and calling each one in a loop, or a lambda that calls state changes in registered listening class instances?
class Document
{
public:
    typedef boost::signals2::signal<void ()> signal_t;

    Document()
    {}

    /* Connect a slot to the signal which will be emitted whenever
       text is appended to the document. */
    boost::signals2::connection connect(const signal_t::slot_type &subscriber)
    {
        return m_sig.connect(subscriber);
    }

    void append(const char* s)
    {
        m_text += s;
        m_sig();
    }

    const std::string& getText() const
    {
        return m_text;
    }

private:
    signal_t m_sig;
    std::string m_text;
};
and
class TextView
{
public:
    TextView(Document& doc) : m_document(doc)
    {
        m_connection = m_document.connect(boost::bind(&TextView::refresh, this));
    }

    ~TextView()
    {
        m_connection.disconnect();
    }

    void refresh() const
    {
        std::cout << "TextView: " << m_document.getText() << std::endl;
    }

private:
    Document& m_document;
    boost::signals2::connection m_connection;
};
Boost.Signals2 is not just "an array of callbacks"; it has a lot of added value. IMO, the most important points are:
Thread-safety: several threads may connect/disconnect/invoke the same signal concurrently, without introducing race conditions. This is especially useful when communicating with an asynchronous subsystem, like an Active Object running in its own thread.
connection and scoped_connection handles that allow disconnection without having direct access to the signal. Note that this is the only way to disconnect incomparable slots, like boost::function (or std::function).
Temporary slot blocking. Provides a clean way to temporarily disable a listening module (e.g. when a user requests to pause receiving messages in a view); see the sketch after the tracking example below.
Automatic slot lifespan tracking: a signal disconnects automatically from "expired" slots. Consider the situation where a slot is a binder referencing a non-copyable object managed by shared_ptrs:
shared_ptr<listener> l = listener::create();
auto slot = bind(&listener::listen, l.get()); // we don't want aSignal_ to affect `listener` lifespan
aSignal_.connect(your_signal_type::slot_type(slot).track(l)); // but do want to disconnect automatically when it gets destroyed
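As promised under "Temporary slot blocking" above, here is a small self-contained sketch (not from the original post) of what blocking looks like with signals2's shared_connection_block:

#include <boost/signals2.hpp>
#include <iostream>

int main() {
    boost::signals2::signal<void()> sig;
    boost::signals2::connection c =
        sig.connect([] { std::cout << "slot invoked\n"; });

    {
        // While this object is alive the slot is suppressed,
        // but the connection itself stays intact.
        boost::signals2::shared_connection_block block(c);
        sig(); // slot NOT invoked
    }

    sig();     // block destroyed: slot invoked again
}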
Certainly, one can re-implement all of the above functionality on one's own "using a vector of functions and calling each one in a loop", etc., but the question is how that would be better than Boost.Signals2. Re-inventing the wheel is rarely a good idea.

Asynchronous write to socket and user values (boost::asio question)

I'm pretty new to boost. I needed a cross-platform, low-level C++ network API, so I chose asio. Now, I've successfully connected and written to a socket, but since I'm using the asynchronous read/write, I need a way to keep track of the requests (to have some kind of IDs, if you will). I've looked at the documentation/reference and found no way to pass user data to my handler; the only option I can think of is creating a special class that acts as a callback, keeps track of its ID, and is passed to the socket as a callback. Is there a better way, or is that the best way to do it?
The async_xxx functions are templated on the type of the completion handler. The handler does not have to be a plain "callback"; it can be anything that exposes the right operator() signature.
You should thus be able to do something like this:
// Warning: Not tested
struct MyReadHandler
{
    MyReadHandler(Whatever ContextInformation) : m_Context(ContextInformation) {}

    void operator()(const boost::system::error_code& error, std::size_t bytes_transferred)
    {
        // Use m_Context
        // ...
    }

    Whatever m_Context;
};

boost::asio::async_read(socket, buffer, MyReadHandler(the_context));
Alternatively, you could also have your handler as a plain function and bind it at the call site, as described in the asio tutorial. The example above would then be:
void HandleRead(const boost::system::error_code& error,
                std::size_t bytes_transferred,
                Whatever context)
{
    //...
}

boost::asio::async_read(socket, buffer, boost::bind(&HandleRead,
    boost::asio::placeholders::error,
    boost::asio::placeholders::bytes_transferred,
    the_context));