How to use callback results in an asynchronous model in C++

I have a C++ API which defines certain functions and their related callbacks.
All these functions are asynchronous in nature.
Now, using this API I want to construct an asynchronous system which sends
multiple requests to the server to collect different data items, and then uses
those data items for further processing.
For example:
void functionA()
{
requestDataForA(); //asynchronous request to the server
//async wait for the callback
processDataForA();
}
void functionB()
{
requestDataForB(); //asynchronous request to the server
//async wait for the callback
processDataForB();
}
void functionC()
{
requestDataForC(); //asynchronous request to the server
//async wait for the callback
processDataForC();
}
Now my question is: when the callback delivers the data item, how do I use it for subsequent processing? It cannot be done in the callback, as the callback doesn't know who will use the data.
Thanks
Shiv

You implicitly have this information; you just need to track it. Let's say that object A calls functionA. You should make A implement a particular interface that accepts the data that comes back as the response to requestDataForA. Let's say this response is DataA; then the interface would be:
class InterfaceADataHandler
{
public:
virtual void handle(DataA const&) = 0; // this is the method that will process the data..
};
class A : public InterfaceADataHandler
{
public:
void handle(DataA const&) {} // do something with data
// Now I want to be called back
void foo()
{
functionA(this); // call function A with this instance
}
};
void functionA(InterfaceADataHandler* pHandler)
{
// store this handler against the request (say, under some id)
requestDataForA();
// wait for the callback
// when the callback fires, look up the handler that requested the data, and call that handler
}
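To make the bookkeeping concrete, here is a minimal sketch of that lookup, assuming the API lets you correlate a callback with its request via some id (the names pending_handlers, next_request_id and onDataAReceived are hypothetical, invented for illustration):
#include <map>

std::map<int, InterfaceADataHandler*> pending_handlers; // request id -> handler
int next_request_id = 0;

void functionA(InterfaceADataHandler* pHandler)
{
    int id = ++next_request_id;
    pending_handlers[id] = pHandler; // remember who asked
    requestDataForA(); // assumed to carry 'id' along so the callback can return it
}

// Invoked by the API when the data arrives (hypothetical signature).
void onDataAReceived(int id, DataA const& data)
{
    std::map<int, InterfaceADataHandler*>::iterator it = pending_handlers.find(id);
    if (it != pending_handlers.end())
    {
        it->second->handle(data); // dispatch to whoever requested this data
        pending_handlers.erase(it);
    }
}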

In most APIs, you, the developer, provide the callback, which the API invokes with the data that has been retrieved. You can then store the data and use it at a later time, or use it within the callback (assuming that you won't take very long to process it and promise not to block on I/O).
The model would look more like:
void functionA()
{
requestDataForA(processDataForA); //asynchronous request to the server
}
void processDataForA(void *someData)
{
// process "someData"
}
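Many C-style asynchronous APIs also let you pass an opaque context pointer that is handed back to your callback, which is how per-request state travels through the API. A minimal sketch of that pattern, assuming a hypothetical overload of requestDataForA that accepts such a pointer and forwards it to a two-argument callback:
struct RequestContext
{
    int request_id;
    // ... anything else the continuation needs
};

void processDataForA(void* someData, void* userContext)
{
    RequestContext* ctx = static_cast<RequestContext*>(userContext);
    // process "someData", knowing which request it belongs to
    delete ctx; // the context was allocated per request
}

void functionA()
{
    RequestContext* ctx = new RequestContext();
    ctx->request_id = 42;
    requestDataForA(processDataForA, ctx); // hypothetical overload taking a context pointer
}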

Related

C++ GRPC ClientAsyncReaderWriter: how to check if data is available for read?

I have a bidirectional streaming async gRPC client that uses ClientAsyncReaderWriter for communication with the server. The RPC code looks like:
rpc Process (stream Request) returns (stream Response)
For simplicity, Request and Response are byte arrays (byte[]). I send several chunks of data to the server, and when the server has accumulated enough data, it processes that data, sends back a response, and continues accumulating data for the next responses. After several responses, the server sends a final response and closes the connection.
For the async client I am using a CompletionQueue. The code looks like:
...
CompletionQueue cq;
std::unique_ptr<Stub> stub;
grpc::ClientContext context;
std::unique_ptr<grpc::ClientAsyncReaderWriter<Request,Response>> responder = stub->AsyncProcess(&context, &cq, handler);
// thread for completion queue
std::thread t(
[&]{
void *handler = nullptr;
bool ok = false;
while (cq.Next(&handler, &ok)) {
if (can_read) {
// how do you know that read data is available?
// Do read
} else {
// do write
...
Request request = prepare_request();
responder->Write(request, handler);
}
}
}
);
...
// wait
What is the proper way to do async reading? Can I try to read if no data is available? Is it a blocking call?
Sequencing Read() calls
Can I try to read if no data is available?
Yep, and it's going to be the case more often than not. Read() will do nothing until data is available, and only then put its passed tag into the completion queue. (See below for details.)
Is it a blocking call?
Nope. Read() and Write() return immediately. However, you can only have one of each in flight at any given moment. If you try to send a second one before the previous has completed, it (the second one) will fail.
What is the proper way to do async reading?
Each time a Read() is done, start a new one. For that, you need to be able to tell when a Read() is done. This is where tags come in!
When you call Read(&msg, tag) or Write(request, tag), you are telling grpc to put tag in the completion queue associated with that responder once that operation has completed. grpc doesn't care what the tag is; it just hands it back.
So the general strategy you will want to go for is:
As soon as you are ready to start receiving messages:
call responder->Read() once with some tag that you will recognize as a "read done".
Whenever cq_.Next() gives you back that tag, and ok == true:
consume the message
Queue up a new responder->Read() with that same tag.
Obviously, you'll also want to do something similar for your calls to Write().
But since you still want to be able to look up the handler instance from a given tag, you'll need a way to pack both a reference to the handler and information about which operation is finishing into a single tag.
Completion queues
Look up the handler instance from a given tag? Why?
The true raison d'ĂȘtre of completion queues is unfortunately not evident from the examples. They allow multiple asynchronous rpcs to share the same thread. Unless your application only ever makes a single rpc call, the handling thread should not be associated with a specific responder. Instead, that thread should be a general-purpose worker that dispatches events to the correct handler based on the content of the tag.
The official examples tend to do that by using a pointer to the handler object as the tag. That works when there's a specific sequence of events to expect, since you can easily predict what a handler is reacting to. You often can't do that with async bidirectional streams, since any given completion event could be a Read() or a Write() finishing.
Example
Here's a general outline of what I personally consider to be a clean way to go about all that:
// Base class for async bidir RPCs handlers.
// This is so that the handling thread is not associated with a specific rpc method.
class RpcHandler {
// This will be used as the "tag" argument to the various grpc calls.
struct TagData {
enum class Type {
start_done,
read_done,
write_done,
// add more as needed...
};
RpcHandler* handler;
Type evt;
};
struct TagSet {
TagSet(RpcHandler* self)
: start_done{self, TagData::Type::start_done},
read_done{self, TagData::Type::read_done},
write_done{self, TagData::Type::write_done} {}
TagData start_done;
TagData read_done;
TagData write_done;
};
public:
RpcHandler() : tags(this) {}
virtual ~RpcHandler() = default;
// The actual tag objects we'll be passing
TagSet tags;
virtual void on_ready() = 0;
virtual void on_recv() = 0;
virtual void on_write_done() = 0;
static void handling_thread_main(grpc::CompletionQueue* cq) {
void* raw_tag = nullptr;
bool ok = false;
while (cq->Next(&raw_tag, &ok)) {
TagData* tag = reinterpret_cast<TagData*>(raw_tag);
if(!ok) {
// Handle error
}
else {
switch (tag->evt) {
case TagData::Type::start_done:
tag->handler->on_ready();
break;
case TagData::Type::read_done:
tag->handler->on_recv();
break;
case TagData::Type::write_done:
tag->handler->on_write_done();
break;
}
}
}
}
};
void do_something_with_response(Response const&);
class MyHandler final : public RpcHandler {
public:
using responder_ptr =
std::unique_ptr<grpc::ClientAsyncReaderWriter<Request, Response>>;
MyHandler(responder_ptr responder) : responder_(std::move(responder)) {
// This lock is needed because StartCall() can
// cause the handler thread to access the object.
std::lock_guard lock(mutex_);
responder_->StartCall(&tags.start_done);
}
~MyHandler() {
// TODO: finish/abort the streaming rpc as appropriate.
}
void send(const Request& msg) {
std::lock_guard lock(mutex_);
if (!sending_) {
sending_ = true;
responder_->Write(msg, &tags.write_done);
} else {
// TODO: add some form of synchronous wait, or outright failure
// if the queue starts to get too big.
queued_msgs_.push(msg);
}
}
private:
// When the rpc is ready, queue the first read
void on_ready() override {
std::lock_guard l(mutex_); // To synchronize with the constructor
responder_->Read(&incoming_, &tags.read_done);
};
// When a message arrives, use it, and start reading the next one
void on_recv() override {
// incoming_ never leaves the handling thread, so no need to lock
// ------ If handling is cheap and stays in the handling thread.
do_something_with_response(incoming_);
responder_->Read(&incoming_, &tags.read_done);
// ------ If handling is expensive or involves another thread.
// Response msg = std::move(incoming_);
// responder_->Read(&incoming_, &tags.read_done);
// do_something_with_response(msg);
};
// When a message has been sent, send the next one if there is any
void on_write_done() override {
std::lock_guard lock(mutex_);
if (!queued_msgs_.empty()) {
responder_->Write(queued_msgs_.front(), &tags.write_done);
queued_msgs_.pop();
} else {
sending_ = false;
}
};
responder_ptr responder_;
// Only ever touched by the handler thread post-construction.
Response incoming_;
bool sending_ = false;
std::queue<Request> queued_msgs_;
std::mutex mutex_; // grpc might be thread-safe, MyHandler isn't...
};
int main() {
// Start the thread as soon as you have a completion queue.
auto cq = std::make_unique<grpc::CompletionQueue>();
std::thread t(RpcHandler::handling_thread_main, cq.get());
// Multiple concurrent RPCs sharing the same handling thread
// (in real code, each call would need its own ClientContext):
MyHandler handler1(serviceA->MethodA(&context, cq.get()));
MyHandler handler2(serviceA->MethodA(&context, cq.get()));
MyHandlerB handler3(serviceA->MethodB(&context, cq.get()));
MyHandlerC handler4(serviceB->MethodC(&context, cq.get()));
}
If you have a keen eye, you will notice that the code above stores a bunch (one per event type) of redundant this pointers in the handler. It's generally not a big deal, and it is possible to do without them via multiple inheritance and downcasting, but that's starting to be somewhat beyond the scope of this question.

How to implement OOBE complete notification in c++?

I want to use the RegisterWaitUntilOOBECompleted Win32 API in my app.
The goal is to detect OOBE completion and perform specific operations.
However, I don't quite understand how to implement it in C++ code.
I spent the past 6 hours looking for a sample implementation, but with no luck.
Can anyone explain how to do it?
Registers a callback to be called once OOBE (Windows Welcome) has been completed.
Syntax (C++):
BOOL RegisterWaitUntilOOBECompleted(
  OOBE_COMPLETED_CALLBACK OOBECompletedCallback,
  PVOID                   CallbackContext,
  PVOID                   *WaitHandle
);
Parameters
OOBECompletedCallback
Pointer to an application-defined callback function that will be called upon completion of OOBE. For more information, see OOBE_COMPLETED_CALLBACK.
CallbackContext
Pointer to the callback context. This value will be passed to the function specified by OOBECompletedCallback. This value can be NULL.
WaitHandle
Pointer to a variable that will receive the handle to the wait callback registration.
For anyone in the future who might be looking for a sample implementation of this API, here's how I did it. (This sample code is not intended to compile as-is, though.)
Header file
#include "Oobenotification.h"
class MainClass
{
// put constructor here
// destructor
~MainClass();
// OOBE notification
OOBE_COMPLETED_CALLBACK OOBECompletedCallback;
PVOID m_OOBEHandle = NULL;
// receive notification once OOBE completes
void OOBERegisterNotification();
static void CALLBACK NotifyOOBEComplete(PVOID CallbackContext);
public:
void Init();
};
CPP file
#include "header.h"
MainClass::~MainClass()
{
if (m_OOBEHandle != NULL)
UnregisterWaitUntilOOBECompleted(m_OOBEHandle);
}
void MainClass::Init()
{
// register to receive oobe complete notification
OOBERegisterNotification();
}
void MainClass::OOBERegisterNotification()
{
OOBECompletedCallback = &NotifyOOBEComplete;
BOOL bRes = ::RegisterWaitUntilOOBECompleted(OOBECompletedCallback, NULL, &m_OOBEHandle);
if (!bRes)
{
// handle failed registration here
}
}
void CALLBACK MainClass::NotifyOOBEComplete(PVOID Context)
{
// called asynchronously once OOBE completes
UNREFERENCED_PARAMETER(Context);
// what you want to do after OOBE
}

Asynchronous model in grpc c++

My team is designing a scalable solution with a micro-services architecture, and we are planning to use gRPC as the transport communication between layers. We've decided to use the async gRPC model. The design that the example (greeter_async_server.cc) provides doesn't seem viable if I scale the number of RPC methods, because then I'll have to create a new class for every RPC method and create their objects in HandleRpcs() like this:
Pastebin (Short example code).
void HandleRpcs() {
new CallDataForRPC1(&service_, cq_.get());
new CallDataForRPC2(&service_, cq_.get());
new CallDataForRPC3(&service_, cq_.get());
// so on...
}
It'll be hard-coded, and all the flexibility will be lost.
I have around 300-400 RPC methods to implement, and having 300-400 classes will be cumbersome and inefficient when I have to handle more than 100K RPC requests/sec; this solution seems like a very bad design. I can't bear the overhead of creating objects this way on every single request. Can somebody kindly provide me a workaround for this? Can async gRPC in C++ not be as simple as its sync counterpart?
Edit: To make the situation clearer, and for those who might be struggling to grasp the flow of this async example, I'm writing down what I've understood so far; please correct me if I'm wrong somewhere.
In async gRPC, every time we have to bind a unique tag to the completion queue, so that when we poll, the server can give it back to us when the particular RPC is hit by the client, and we can infer the type of the call from the returned unique tag.
service_->RequestRPC2(&ctx_, &request_, &responder_, cq_, cq_, this); Here we're using the address of the current object as the unique tag. This is like registering our RPC call on the completion queue. Then we poll down in HandleRpcs() to see if the client has hit the RPC; if so, cq_->Next(&tag, &ok) will fill the tag. The polling code snippet:
while (true) {
GPR_ASSERT(cq_->Next(&tag, &ok));
GPR_ASSERT(ok);
static_cast<CallData*>(tag)->Proceed();
}
Since the unique tag that we registered on the queue was the address of the CallData object, we're able to call Proceed(). This was fine for one RPC with its logic inside Proceed(). But with more RPCs we'd have all of them inside CallData, and on polling we'd be calling the one and only Proceed(), which would contain the logic for (say) RPC1 (postgres calls), RPC2 (mongodb calls), and so on. This is like writing my whole program inside one function. So, to avoid this, I used a GenericCallData class with a virtual void Proceed() and made derived classes out of it, one class per RPC, each with its own logic inside its own Proceed(). This is a working solution, but I want to avoid writing many classes.
Another solution I tried was keeping each RPC's logic out of Proceed() and in its own function, and maintaining a global std::map<long, std::function</*some params*/>>. So whenever I register an RPC with a unique tag on the queue, I store its corresponding logic function (which I hard-code into the statement, binding all the required parameters), with the unique tag as the key. On polling, when I get the tag back, I look up this key in the map and call the corresponding saved function. Now there's one more hurdle: I'll have to do this inside the function logic:
// pseudo code
void function(reply, responder, context, service)
{
// register this RPC with another unique tag so as to serve new incoming requests of the same type on the completion queue
service_->RequestRPC1(/*params*/, new_unique_id);
// now again save this new_unique_id and the current function into the map, so when the tag is returned we can do the lookup
map.emplace(new_unique_id, function);
// now you're free to do your logic
// do your logic
}
As you can see, the code has now spread into another module, and it's per-RPC.
Hope this clarifies the situation.
I was wondering whether somebody has implemented this type of server in an easier way.
This post is pretty old by now, but I have not seen any answer or example regarding this, so I will show other readers how I solved it. I have around 30 RPC calls and was looking for a way to reduce the footprint when adding and removing RPC calls. It took me some iterations to figure out a good way to solve it.
So my interface for getting RPC requests from my (g)RPC library is a callback interface that the recipient needs to implement. The interface looks like this:
class IRpcRequestHandler
{
public:
virtual ~IRpcRequestHandler() = default;
virtual void onZigbeeOpenNetworkRequest(const smarthome::ZigbeeOpenNetworkRequest& req,
smarthome::Response& res) = 0;
virtual void onZigbeeTouchlinkDeviceRequest(const smarthome::ZigbeeTouchlinkDeviceRequest& req,
smarthome::Response& res) = 0;
...
};
And some code for setting up/registering each RPC method after the gRPC server is started:
void ready()
{
SETUP_SMARTHOME_CALL("ZigbeeOpenNetwork", // Alias that is used for debug messages
smarthome::Command::AsyncService::RequestZigbeeOpenNetwork, // Generated gRPC service method for async.
smarthome::ZigbeeOpenNetworkRequest, // Generated gRPC service request message
smarthome::Response, // Generated gRPC service response message
IRpcRequestHandler::onZigbeeOpenNetworkRequest); // The callback method to call when request has arrived.
SETUP_SMARTHOME_CALL("ZigbeeTouchlinkDevice",
smarthome::Command::AsyncService::RequestZigbeeTouchlinkDevice,
smarthome::ZigbeeTouchlinkDeviceRequest,
smarthome::Response,
IRpcRequestHandler::onZigbeeTouchlinkDeviceRequest);
...
}
This is all that you need to care about when adding and removing RPC methods.
The SETUP_SMARTHOME_CALL is a home-cooked macro which looks like this:
#define SETUP_SMARTHOME_CALL(ALIAS, SERVICE, REQ, RES, CALLBACK_FUNC) \
new ServerCallData<REQ, RES>( \
ALIAS, \
std::bind(&SERVICE, \
&mCommandService, \
std::placeholders::_1, \
std::placeholders::_2, \
std::placeholders::_3, \
std::placeholders::_4, \
std::placeholders::_5, \
std::placeholders::_6), \
mCompletionQueue.get(), \
std::bind(&CALLBACK_FUNC, requestHandler, std::placeholders::_1, std::placeholders::_2))
I think the ServerCallData class looks like the one from gRPC's examples, with a few modifications. ServerCallData derives from a non-template class with an abstract function void proceed(bool ok) for the CompletionQueue::Next() handling. When a ServerCallData is created, it calls the SERVICE method to register itself on the CompletionQueue, and on its first proceed(ok) call it clones itself, which registers another instance. I can post some sample code for that as well if someone is interested.
EDIT: Added some more sample code below.
GrpcServer
class GrpcServer
{
public:
explicit GrpcServer(std::vector<grpc::Service*> services);
virtual ~GrpcServer();
void run(const std::string& sslKey,
const std::string& sslCert,
const std::string& password,
const std::string& listenAddr,
uint32_t port,
uint32_t threads = 1);
private:
virtual void ready(); // Called after gRPC server is created and before polling CQ.
void handleRpcs(); // Function that polls from CQ, can be run by multiple threads. Casts object to CallData and calls CallData::proceed().
std::unique_ptr<ServerCompletionQueue> mCompletionQueue;
std::unique_ptr<Server> mServer;
std::vector<grpc::Service*> mServices;
std::list<std::shared_ptr<std::thread>> mThreads;
...
};
And the main part of the CallData object:
template <typename TREQUEST, typename TREPLY>
class ServerCallData : public ServerCallMethod
{
public:
explicit ServerCallData(const std::string& methodName,
std::function<void(ServerContext*,
TREQUEST*,
::grpc::ServerAsyncResponseWriter<TREPLY>*,
::grpc::CompletionQueue*,
::grpc::ServerCompletionQueue*,
void*)> serviceFunc,
grpc::ServerCompletionQueue* completionQueue,
std::function<void(const TREQUEST&, TREPLY&)> callback,
bool first = false)
: ServerCallMethod(methodName),
mResponder(&mContext),
serviceFunc(serviceFunc),
completionQueue(completionQueue),
callback(callback)
{
requestNewCall();
}
void proceed(bool ok) override
{
if (!ok)
{
delete this;
return;
}
if (callStatus() == ServerCallMethod::PROCESS)
{
callStatus() = ServerCallMethod::FINISH;
new ServerCallData<TREQUEST, TREPLY>(callMethodName(), serviceFunc, completionQueue, callback);
try
{
callback(mRequest, mReply);
}
catch (const std::exception& e)
{
mResponder.Finish(mReply, Status::CANCELLED, this);
return;
}
mResponder.Finish(mReply, Status::OK, this);
}
else
{
delete this;
}
}
private:
void requestNewCall()
{
serviceFunc(
&mContext, &mRequest, &mResponder, completionQueue, completionQueue, this);
}
ServerContext mContext;
TREQUEST mRequest;
TREPLY mReply;
ServerAsyncResponseWriter<TREPLY> mResponder;
std::function<void(ServerContext*,
TREQUEST*,
::grpc::ServerAsyncResponseWriter<TREPLY>*,
::grpc::CompletionQueue*,
::grpc::ServerCompletionQueue*,
void*)>
serviceFunc;
std::function<void(const TREQUEST&, TREPLY&)> callback;
grpc::ServerCompletionQueue* completionQueue;
};
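The polling function handleRpcs() isn't shown above; a minimal sketch of what it would look like, assuming ServerCallMethod is the non-template base class with the abstract proceed(bool ok) mentioned earlier:
void GrpcServer::handleRpcs()
{
    void* tag = nullptr;
    bool ok = false;
    // Block until the next completion event, then hand it to the object
    // behind the tag, which decides whether to process, clone, or delete itself.
    while (mCompletionQueue->Next(&tag, &ok))
    {
        static_cast<ServerCallMethod*>(tag)->proceed(ok);
    }
}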
Although the thread is old, I wanted to share a solution I am currently implementing. It mainly consists of templated classes inheriting from CallData, to be scalable. This way, each new RPC only requires specializing the templates of the required CallData methods.
CallData header:
class CallData {
protected:
enum Status { CREATE, PROCESS, FINISH };
Status status;
virtual void treat_create() = 0;
virtual void treat_process() = 0;
public:
virtual ~CallData() = default; // needed: Proceed() does "delete this" polymorphically
void Proceed();
};
CallData Proceed implementation:
void CallData::Proceed() {
switch (status) {
case CREATE:
status = PROCESS;
treat_create();
break;
case PROCESS:
status = FINISH;
treat_process();
break;
case FINISH:
delete this;
}
}
Inheriting from CallData header (simplified):
template <typename Request, typename Reply>
class CallDataTemplated : public CallData {
static_assert(std::is_base_of<google::protobuf::Message, Request>::value,
"Request and reply must be protobuf messages");
static_assert(std::is_base_of<google::protobuf::Message, Reply>::value,
"Request and reply must be protobuf messages");
private:
Service,Cq,Context,ResponseWriter,...
Request request;
Reply reply;
protected:
void treat_create() override;
void treat_process() override;
public:
...
};
Then, for specific RPCs, in theory you should be able to do things like:
template<>
void CallDataTemplated<HelloRequest, HelloReply>::treat_process() {
...
}
It's a lot of templated methods, but from my point of view it's preferable to creating a class per RPC.
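To complete the picture, the treat_create() specialization is presumably where the call registers itself with the service. A sketch under the assumption that service, context, responder and cq are the members elided in the class above (RequestSayHello is the generated method for the canonical HelloRequest/HelloReply example):
template <>
void CallDataTemplated<HelloRequest, HelloReply>::treat_create() {
  // Ask gRPC to deliver the next SayHello call to this object,
  // passing "this" as the completion-queue tag.
  service->RequestSayHello(&context, &request, &responder, cq, cq, this);
}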

How to call a function when a work item is finished in Boost.Asio?

I would like to implement a command queue which handles incoming commands concurrently with a thread pool (so the queue grows temporarily when all threads are working). I would like to post a callback to the callers when a command worker is started and finished. My implementation is based on this example from the Asio website.
Is there a way to hook into these events and signal somehow? I would like to avoid the command functors knowing about the callbacks (since obviously I could call the callbacks inside the command functors).
Pseudocode to illustrate (initialization and error handling omitted for brevity):
class CommandQueue
{
public:
void handle_command(CmdId id, int param)
{
io_service.post(boost::bind(&(dispatch_map[id]), param));
// PSEUDOCODE:
// when one of the worker threads starts on this item, I want to call
callback_site.cmd_started(id, param);
// when the command functor returns and the thread has finished
callback_site.cmd_finished(id, param);
}
private:
boost::asio::io_service io_service;
asio::io_service::work work;
std::map<CmdId, CommandHandler> dispatch_map; // CommandHandler is a functor taking an int parameter
CallbackSite callback_site;
};
Is there a way to do this without having the command functors depend on the CallbackSite?
My initial response would be that std::future is what you want, given that Boost.Asio now even has built-in support for it. However, you have tagged this as C++03, so you will have to make do with boost::future.
Basically, you pass a boost::promise into the task you hand to Asio, but beforehand you call get_future on it and store the future, which shares state with the promise. When the task finishes, you call promise::set_value. In another thread you can check whether this has happened by calling future::is_ready (non-blocking) or future::wait (blocking), and then retrieve the value before calling the appropriate callback functions.
For example, the value set could be a CmdId, as in your example, to determine which callback to call.
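A minimal sketch of that idea, assuming Boost.Thread's futures and the CommandQueue members from the question (run_and_signal is an invented helper name):
#include <boost/bind.hpp>
#include <boost/shared_ptr.hpp>
#include <boost/thread/future.hpp>

// Runs the real handler, then fulfils the promise so waiters wake up.
void run_and_signal(CommandHandler handler, int param, CmdId id,
                    boost::shared_ptr<boost::promise<CmdId> > done)
{
    handler(param);
    done->set_value(id); // signals anyone holding the matching future
}

void CommandQueue::handle_command(CmdId id, int param)
{
    // promise is non-copyable, so share it to get it through boost::bind
    boost::shared_ptr<boost::promise<CmdId> > done(new boost::promise<CmdId>());
    boost::unique_future<CmdId> fut = done->get_future();
    io_service.post(boost::bind(&run_and_signal, dispatch_map[id], param, id, done));
    // Store 'fut' somewhere; another thread can poll fut.is_ready() or block
    // on fut.wait(), then call callback_site.cmd_finished(id, param).
}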
So what you want is to build in something that happens when one of the run() threads starts processing a command, and something that happens when it returns.
Personally, I do this by wrapping the function call:
class CommandQueue
{
public:
void handle_command(CmdId id, int param)
{
io_service.post(boost::bind(&CommandQueue::DispatchCommand, this,id,param));
}
private:
boost::asio::io_service io_service;
asio::io_service::work work;
std::map<CmdId, CommandHandler> dispatch_map; // CommandHandler is a functor taking an int parameter
CallbackSite callback_site;
void DispatchCommand(CmdId id, int param)
{
// when one of the worker threads starts on this item:
callback_site.cmd_started(id, param);
dispatch_map[id](param);
// when the command functor returns and the thread has finished:
callback_site.cmd_finished(id, param);
}
};
This is also the pattern I use when I want to handle exceptions in the dispatched commands. You can also post different events instead of running them inline.
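For completeness, here is a sketch of the exception-handling variant mentioned above. The cmd_failed callback is a hypothetical addition to CallbackSite, invented for illustration:
void DispatchCommand(CmdId id, int param)
{
    callback_site.cmd_started(id, param);
    try
    {
        dispatch_map[id](param);
        callback_site.cmd_finished(id, param);
    }
    catch (const std::exception& e)
    {
        // report failure instead of completion (hypothetical callback)
        callback_site.cmd_failed(id, param, e.what());
    }
}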

request-response system

I have request objects with corresponding response objects. A sender object makes a request and then listens for the response. One sender/listener object may send different requests. Every request goes into a global queue, and after it has been processed, the corresponding response is sent to every listener object.
There are several solutions to your problem. One would be that the transceiver informs all Request objects about its destruction. For this, you would need a method like Transceiver::addRequest() which a Request object uses to register itself. In the destructor of Transceiver you then have to inform all registered Requests. For example:
#include <algorithm>
#include <vector>

class Transceiver
{
public:
virtual ~Transceiver()
{
for (auto request : m_requests)
request->deleteTransceiver(this);
}
void addRequest(Request* r)
{
m_requests.push_back(r);
}
void removeRequest(Request* r)
{
m_requests.erase(std::remove(m_requests.begin(), m_requests.end(), r),
m_requests.end());
}
std::vector<Request*> m_requests;
};
class Request
{
public:
virtual void deleteTransceiver(Transceiver* t) = 0;
virtual void notify() = 0;
};
class RequestImpl : public Request
{
public:
RequestImpl(Transceiver* t)
: m_target(t)
{
if (t)
t->addRequest(this);
}
~RequestImpl()
{
if (m_target)
m_target->removeRequest(this);
}
virtual void deleteTransceiver(Transceiver* t)
{
if (m_target == t)
m_target = 0;
}
virtual void notify()
{
if (m_target)
m_target->process(ResponseType());
}
Transceiver* m_target;
};
A second approach would of course be to prevent the destruction of a Transceiver as long as it is in use. You could use a std::shared_ptr<Transceiver> m_target in the Request class, which means the transceiver lives at least as long as the associated request.
For a bit more flexibility, there is also the possibility of a std::weak_ptr<Transceiver>. Then the transceiver could be destroyed while the request is still alive. However, when you try a std::weak_ptr<Transceiver>::lock() and it fails, you know that the Transceiver is dead.
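A minimal sketch of the weak_ptr variant, reusing notify() and process(ResponseType()) from the sample above (RequestWeak is an invented name):
#include <memory>

class RequestWeak : public Request
{
public:
    explicit RequestWeak(std::weak_ptr<Transceiver> t) : m_target(t) {}

    virtual void deleteTransceiver(Transceiver*) {} // not needed in this variant

    virtual void notify()
    {
        // lock() yields a shared_ptr that keeps the transceiver alive for
        // the duration of the call, or an empty pointer if it has died.
        if (std::shared_ptr<Transceiver> target = m_target.lock())
            target->process(ResponseType());
        // else: the transceiver is gone; drop the response
    }

private:
    std::weak_ptr<Transceiver> m_target;
};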
Edit: Added a method to remove a Request if it is destroyed before its Transceiver.