ASIO proper handling of multiple threads + strand + socket + timer - c++

I'm using the latest ASIO version (1.18.0 as of this writing). I'm currently designing a multithreaded asynchronous TCP server with timers (for timeouts). I have a single io_context with multiple threads calling its run() function. I accept new connections like this:
void Server::AcceptConnection()
{
    acceptor_.async_accept(asio::make_strand(io_context_),
        [this](const asio::error_code& error, asio::ip::tcp::socket peer) {
            if (!error) {
                std::make_shared<Session>(std::move(peer))->run();
            }
            AcceptConnection();
        });
}
And here is a stripped-down version of the Session class:
class Session : public std::enable_shared_from_this<Session>
{
public:
    Session(asio::ip::tcp::socket&& peer) : peer_(std::move(peer)) {}

    void run()
    {
        /*
        asio::async_read(peer_, some_buffers_, some_callback_);
        timeout_timer_.expires_after(some_duration_);
        timeout_timer_.async_wait(some_callback_);
        // etc.
        */
    }

private:
    asio::ip::tcp::socket peer_;
    asio::steady_timer timeout_timer_{peer_.get_executor()};
};
Please note the initialization of the timer.
Also, note that I'm not using strand::wrap() or asio::bind_executor() wrappers for the async handlers of the socket and the timer because, from what I understand, they are no longer required if I construct my objects with the proper executors.
The question here is: Is this a correct way of handling a TCP connection inside a strand, with a timer on the same strand that the TCP connection is using?
Note: The timer is used to abort the TCP connection if the timeout passes.

Yes, this is the way I write things as well, using the new interfaces.
I remember having the same concerns when I started out with the new interface, and I ended up verifying that the various completion handlers do run on the expected strand, which they do.
All in all, the new interfaces significantly simplify library usage.
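For illustration, here is a minimal sketch of how the commented-out body of run() might look with this setup; the buffer_ member, the 30-second deadline, and the handler bodies are illustrative assumptions, not code from the question:

void Session::run()
{
    auto self = shared_from_this();

    // Both handlers run on the strand created in async_accept, because the socket
    // and the timer share that executor; no bind_executor() is needed.
    asio::async_read(peer_, asio::buffer(buffer_),
        [self](const asio::error_code& ec, std::size_t /*bytes_read*/) {
            self->timeout_timer_.cancel();   // the read completed (or failed) in time
            if (!ec) {
                // handle the request here
            }
        });

    timeout_timer_.expires_after(std::chrono::seconds(30));
    timeout_timer_.async_wait([self](const asio::error_code& ec) {
        if (!ec) {                           // not cancelled: the deadline was reached
            asio::error_code ignored;
            self->peer_.close(ignored);      // aborts the outstanding read
        }
    });
}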

How to enable async boost::beast websocket applications to support multiple connections for load balance?

Reference:
https://www.boost.org/doc/libs/1_80_0/libs/beast/example/websocket/client/async-ssl/websocket_client_async_ssl.cpp
My websocket client application was built based on the above example. Now I am hitting a bottleneck: one websocket connection subscribes to too many channels, and the server limits how many messages a single connection can receive. To fix this, I want to load-balance the channels across a few websocket connections. The platform I am using is Linux.
Question> Does the boost::beast framework offer a way to easily add support for multiple websocket connections without locks (i.e. still using the asynchronous operations shown in the example above)? It would be best if there is an example illustrating the basic idea.
class WebSocketConnection {
    ...
    void on_read(beast::error_code ec, std::size_t) {
        if (ec)
            return fail(ec, "read");
        // trigger MainClass::message_callback
        msg_callback(std::move(buffered_message_from_server));
        // continue to read further messages from server
    }
    ...
};

class MainClass {
    void message_callback(std::string msg) {
    }
    ...
    WebSocketConnection websocket_;
};
Update:
One solution I could figure out is as follows, but I am not sure whether it is applicable.
class WebSocketConnection {
    ...
    void on_read(beast::error_code ec, std::size_t) {
        if (ec)
            return fail(ec, "read");
        // trigger MainClass::message_callback
        msg_callback(std::move(buffered_message_from_server));
        // continue to read further messages from server
    }
    ...
};

class MainClass {
    void message_callback(std::string msg) {
    }
    ...
    std::vector<WebSocketConnection> websockets_;
};
Basically, I define a vector of WebSocketConnection within MainClass. Each instance of WebSocketConnection is registered with the same callback function (i.e. MainClass::message_callback). Also, each instance of WebSocketConnection is passed the same net::io_context and net::strand<net::io_context::executor_type>, so that all message_callback invocations from the pool of websockets are sequenced one after another.
Question> Can I do this setup without introducing a race condition?
Thank you
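A minimal sketch of that setup might look like the following; WebSocketConnection's constructor, its run() signature, and the shared_ptr storage are illustrative assumptions rather than code from the Beast example:

namespace net = boost::asio;

class MainClass {
public:
    explicit MainClass(net::io_context& ioc)
        : strand_(net::make_strand(ioc)) {}

    void connect_all(const std::vector<std::string>& hosts) {
        for (const auto& host : hosts) {
            // Every connection is built on the same strand, so all completion
            // handlers (including message_callback) are serialized without locks.
            auto conn = std::make_shared<WebSocketConnection>(
                strand_, [this](std::string msg) { message_callback(std::move(msg)); });
            conn->run(host, "443");
            websockets_.push_back(std::move(conn));
        }
    }

private:
    void message_callback(std::string msg) { /* ... */ }

    net::strand<net::io_context::executor_type> strand_;
    std::vector<std::shared_ptr<WebSocketConnection>> websockets_;
};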

C++ websocket server: how to avoid creating a thread and convert to an idle-loop system

Example websocket code:
std::thread{ wsserver }.detach();

void wsserver()
{
    boost::asio::ip::tcp::socket socket{ ioc };
    acceptor.accept(socket);
}
The thread runs separately from the rest of the server. I think this may lead to erroneous results in some interlinked operations, so I want to put this process into an infinite loop instead. In short, I should not use multiple threads.
What I want, for example:
while (loop());

int loop()
{
    boost::asio::ip::tcp::socket socket{ ioc };
    acceptor.accept(socket);
    return 1;
}
But this does not work as an idle loop, because acceptor.accept() spends all its time waiting for a connection.
How can I use a different call instead of accept()?
How can I also get rid of the std::thread call?
Can I do this in a smarter way?
I hope I have explained it well.
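One common way to avoid both the detached thread and a blocking loop is an asynchronous accept, as in the other Asio examples in this page; the sketch below is illustrative (the port number and the session hand-off are assumptions), not the asker's code:

#include <boost/asio.hpp>

namespace asio = boost::asio;
using asio::ip::tcp;

void do_accept(tcp::acceptor& acceptor)
{
    acceptor.async_accept(
        [&acceptor](boost::system::error_code ec, tcp::socket socket) {
            if (!ec) {
                // hand the socket to a websocket session here
            }
            do_accept(acceptor);   // keep one accept outstanding
        });
}

int main()
{
    asio::io_context ioc;
    tcp::endpoint endpoint{tcp::v4(), 8080};
    tcp::acceptor acceptor{ioc, endpoint};

    do_accept(acceptor);
    ioc.run();   // blocks here and dispatches handlers; no extra thread, no busy loop
}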

How to shutdown gRPC server from Client (using RPC function)

I'm using gRPC for inter-process communication between a C++ app (gRPC server) and a Java app (gRPC client). Everything runs on one machine. I want to give the client the ability to shut down the server. My idea is to add an RPC function to the service in the proto file that would do it.
The C++ implementation would be:
class Service : public grpcGeneratedService
{
public:
    ......

private:
    grpc::Server* m_pServer;
};

grpc::Status Service::ShutDown(grpc::ServerContext* pContext, const ShutDownRequest* pRequest, ShutDownResponse* pResponse)
{
    if (m_pServer)
        m_pServer->Shutdown();

    return grpc::Status(grpc::StatusCode::OK, "");
}
However, Shutdown() blocks until all RPC calls are processed, which results in a deadlock. Is there any elegant way to implement this?
I'm using a std::promise with a method almost exactly like yours.
// Somewhere in the global scope :/
std::promise<void> exit_requested;

// My method looks nearly identical to yours
Status CoreServiceImpl::shutdown(ServerContext *context, const SystemRequest *request, Empty*)
{
    LOG(INFO) << context->peer() << " - Shutdown request acknowledged.";
    exit_requested.set_value();
    return Status::OK;
}
To make this work, I call server->Wait() in a second thread, and in the main thread I wait on the future of the exit_requested promise before calling Shutdown():
auto serveFn = [&]() {
    server->Wait();
};

std::thread serving_thread(serveFn);

auto f = exit_requested.get_future();
f.wait();

server->Shutdown();
serving_thread.join();
Once I had this, I was also able to support a clean shutdown via signal handlers as well:
auto handler = [](int s) {
    exit_requested.set_value();
};

std::signal(SIGINT, handler);
std::signal(SIGTERM, handler);
std::signal(SIGQUIT, handler);
I've been satisfied with this approach so far, and it's kept me within the bounds of gRPC and the standard C++ libraries. Rather than use a globally scoped promise (I have to declare it as an extern in my service implementation source), I should probably think of something more elegant.
One thing to note here is that setting the value of the promise more than once will throw an exception. This could happen if you somehow send the shutdown message and also pkill -2 my_awesome_service at the same time. I actually ran into this when there was a deadlock in my persistence layer preventing shutdown from finishing, when I tried to send a SIGINT again the service aborted instead! For my needs this is still an acceptable solution but I'd love to hear about alternatives that work around or solve that little problem.
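For example, one possible guard against the double set_value() (an illustrative sketch, not part of the answer above) is to funnel every exit request through std::call_once:

#include <future>
#include <mutex>

std::promise<void> exit_requested;
std::once_flag exit_once;

void request_exit()
{
    // set_value() runs at most once, no matter how many shutdown RPCs or signals arrive
    std::call_once(exit_once, [] { exit_requested.set_value(); });
}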
You can create an std::function from the ShutDown() handler and run that function in a separate thread (or threadpool). This will allow decoupling the handling of the RPC from the execution of the shutdown logic and eliminate the deadlock.
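A minimal sketch of that idea, assuming the Service still holds the m_pServer pointer from the question, and using a detached std::thread rather than a thread pool:

#include <thread>

grpc::Status Service::ShutDown(grpc::ServerContext*, const ShutDownRequest*, ShutDownResponse*)
{
    if (m_pServer) {
        grpc::Server* server = m_pServer;
        std::thread([server] {
            // Shutdown() waits for in-flight RPCs (including this one) to finish,
            // but it no longer blocks the RPC handler itself, so there is no deadlock.
            server->Shutdown();
        }).detach();
    }
    return grpc::Status(grpc::StatusCode::OK, "");
}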

In boost::asio, why does the asynchronous accept handler need to restart the asynchronous accept?

In the Daytime.3 tutorial for boost::asio (asynchronous TCP server), the class tcp_server contains the following two methods:
void start_accept()
{
    tcp_connection::pointer new_connection =
        tcp_connection::create(acceptor_.get_io_service());

    acceptor_.async_accept(new_connection->socket(),
        boost::bind(&tcp_server::handle_accept, this, new_connection,
            boost::asio::placeholders::error));
}

void handle_accept(tcp_connection::pointer new_connection,
    const boost::system::error_code& error)
{
    if (!error) new_connection->start(); // ***
    start_accept();
}
My concern is the line marked ***. What if this operation takes a long time to complete? Even if it doesn't, there must be some time gap between the *** line and the call to start_accept, during which the server will fail to accept incoming connections. Wouldn't it make more sense for async_accept to register an OS handler that doesn't halt when it accepts its first connection? Also, is this a real issue and how would I fix it?
The server won't "fail to accept incoming connections"; that's what the second parameter of the listen() function is for in the sockets API. But you are correct that the server can be delayed in handling a client request. A single-threaded application that requires lots of computation will cause issues, which is why this particular example really only performs I/O. If your server does need to perform something CPU-intensive, the handler should hand that work off to a task manager of some sort.
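As an illustration of that second parameter: when you open a Boost.Asio acceptor by hand, the backlog is passed to listen() explicitly (the port matches the daytime examples; the rest of the setup is a sketch, not code from the tutorial):

boost::asio::ip::tcp::acceptor acceptor(io_context);
boost::asio::ip::tcp::endpoint endpoint(boost::asio::ip::tcp::v4(), 13);

acceptor.open(endpoint.protocol());
acceptor.bind(endpoint);
// Connections arriving between handle_accept and the next async_accept wait in
// this OS-level queue instead of being refused.
acceptor.listen(boost::asio::socket_base::max_listen_connections);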

boost asio for sync server keeping TCP session open (with google proto buffers)

I currently have a very simple boost::asio server that sends a status update upon connecting (using google proto buffers):
try
{
    boost::asio::io_service io_service;
    tcp::acceptor acceptor(io_service, tcp::endpoint(tcp::v4(), 13));

    for (;;)
    {
        tcp::socket socket(io_service);
        acceptor.accept(socket);
        ...
        std::stringstream message;
        protoMsg.SerializeToOstream(&message);

        boost::system::error_code ignored_error;
        boost::asio::write(socket, boost::asio::buffer(message.str()), ignored_error);
    }
}
catch (std::exception& e) { }
I would like to extend it to first read after accepting a new connection, check what request was received, and send different messages back depending on this message. I'd also like to keep the TCP connection open so the client doesn't have to re-connect, and would like to handle multiple clients (not many, maybe 2 or 3).
I had a look at a few examples on boost asio, namely the async time tcp server and the chat server, but both are a bit over my head tbh. I don't even understand whether I need an async server. I guess I could just do a read after acceptor.accept(socket), but I guess then I wouldn't keep on listening for further requests. And if I go into a loop I guess that would mean I could only handle one client. So I guess that means I have to go async? Is there a simpler example maybe that isn't 250 lines of code? Or do I just have to bite my way through those examples? Thanks
The examples you mention from the Boost.Asio documentation are actually pretty good for seeing how things work. You're right that at first they might look a bit difficult to understand, especially if you're new to these concepts. However, I would recommend that you start with the chat server example and get it built on your machine. That will let you look at things more closely and start changing them in order to learn how they work. Let me guide you through a few things I find important for getting started.
From your description of what you want to do, it seems that the chat server gives you a good starting point, as it already has pieces similar to what you need. Making the server asynchronous is what you want, since you can then quite easily handle multiple clients with a single thread. Nothing too complicated from the start.
Simplified, asynchronous in this case means that your server works off a queue, taking a handler (task) and executing it. If there is nothing on the queue, it just waits for something to be put on the queue. In your case that could be a connect from a client, a new read of a message from a client, or something like this. In order for this to work, each handler (the function handling the reaction to a particular event) needs to be set up.
Let me explain a bit using code from the chat server example.
In the server source file, you see the chat_server class which calls start_accept in the constructor. Here the accept handler gets set up.
void start_accept()
{
    chat_session_ptr new_session(new chat_session(io_service_, room_)); // 1
    acceptor_.async_accept(new_session->socket(),                       // 2
        boost::bind(&chat_server::handle_accept, this, new_session,     // 3
            boost::asio::placeholders::error));                         // 4
}
Line 1: A chat_session object is created which represents a session between one client and the server. A session is created for the accept (no client has connected yet).
Line 2: An asynchronous accept for the socket...
Line 3: ...bound to call chat_server::handle_accept when it happens. The session is passed along to be used by the first client which connects.
Now, if we look at the handle_accept we see that upon client connect, start is called for the session (this just starts stuff between the server and this client). Lastly a new accept is put outstanding in case other clients want to connect as well.
void handle_accept(chat_session_ptr session,
    const boost::system::error_code& error)
{
    if (!error)
    {
        session->start();
    }

    start_accept();
}
This is what you want to have as well. An outstanding accept for incoming connections. And if multiple clients can connect, there should always be one of these outstanding so the server can handle the accept.
How the server and the client(s) interact is all in the session and you could follow the same design and modify this to do what you want. You mention that the server needs to look at what is sent and do different things. Take a look at chat_session and the start function which was called by the server in handle_accept.
void start()
{
    room_.join(shared_from_this());
    boost::asio::async_read(socket_,
        boost::asio::buffer(read_msg_.data(), chat_message::header_length),
        boost::bind(
            &chat_session::handle_read_header, shared_from_this(),
            boost::asio::placeholders::error));
}
What is important here is the call to boost::asio::async_read. This is what you want too. This puts an outstanding read on the socket, so the server can read what the client sends. There is a handler (function) which is bound to this event chat_session::handle_read_header. This will be called whenever the server reads something on the socket. In this handler function you could start putting your specific code to determine what to do if a specific message is sent and so on.
What is important to know is that when you call these asynchronous boost::asio functions, things will not happen within that call (i.e. the socket is not read just because you call the read function). This is the asynchronous aspect. You just register a handler for something, and your code is called back when that happens. Hence, when this read is called it returns immediately, and you're back in handle_accept for the server (if you follow how things get called). And if you remember, there we also call start_accept to set up another asynchronous accept. At this point you have two outstanding handlers waiting for either another client to connect or the first client to send something. Depending on what happens first, that specific handler will be called.
Also important to understand is that whenever something is run, it will run uninterrupted until everything it needs to do has been done. Other handlers have to wait, even if there are outstanding events that would trigger them.
Finally, in order to run the server you'll need the io_service which is a central concept in Asio.
io_service.run();
This is one line you see in the main function. It just says that the thread (only one in the example) should run the io_service, which is the queue where handlers get enqueued when there is work to be done. When there is nothing to do, the io_service just waits (blocking the main thread there, of course).
I hope this helps you get started with what you want to do. There is a lot of stuff you can do and things to learn. I find it a great piece of software! Good luck!
In case anyone else wants to do this, here is the minimum to get the above going (similar to the tutorials, but a bit shorter and a bit different):
class Session : public boost::enable_shared_from_this<Session>
{
    tcp::socket socket;
    char buf[1000];

public:
    Session(boost::asio::io_service& io_service)
        : socket(io_service) { }

    tcp::socket& SocketRef() { return socket; }

    void Read() {
        boost::asio::async_read(socket, boost::asio::buffer(buf),
            boost::asio::transfer_at_least(1),
            boost::bind(&Session::Handle_Read, shared_from_this(),
                boost::asio::placeholders::error));
    }

    void Handle_Read(const boost::system::error_code& error) {
        if (!error)
        {
            // read from buffer and handle requests
            // if you want to write something, you can do it synchronously here,
            // e.g. boost::asio::write(socket, ..., ignored_error);
            Read();
        }
    }
};

typedef boost::shared_ptr<Session> SessionPtr;

class Server
{
    boost::asio::io_service io_service;
    tcp::acceptor acceptor;

public:
    Server() : acceptor(io_service, tcp::endpoint(tcp::v4(), 13)) { }
    ~Server() { }

    void operator()() { StartAccept(); io_service.run(); }

    void StartAccept() {
        SessionPtr session_ptr(new Session(io_service));
        acceptor.async_accept(session_ptr->SocketRef(),
            boost::bind(&Server::HandleAccept, this, session_ptr,
                boost::asio::placeholders::error));
    }

    void HandleAccept(SessionPtr session, const boost::system::error_code& error) {
        if (!error)
            session->Read();
        StartAccept();
    }
};
From what I gathered through trial and error and reading: I kick things off in operator()() so you can have it run in the background in an additional thread. You run one Server instance. To handle multiple clients, you need an extra class; I called this the session class. For Asio to clean up dead sessions, you need a shared pointer, as pointed out above. Otherwise, the code should get you started.
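For completeness, a short usage sketch of running that Server in a background thread (the surrounding main() is an illustrative assumption, not part of the answer):

#include <functional>
#include <thread>

int main()
{
    Server server;
    std::thread server_thread(std::ref(server));   // invokes operator()(): StartAccept() + io_service.run()
    // ... do other work in the main thread ...
    server_thread.join();
}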