Is waiting for action completion in MQTT async_client necessary? - c++

I am working on a C++ MQTT async_client using the Paho library. I am trying to fully understand the workings of the asynchronous client, but I am not sure how to use its features correctly.
As I understand it, messages with QoS > 0 follow a handshake procedure specified by the standard. The delivery to the server is regarded as complete if either PUBACK (QoS 1) or PUBCOMP (QoS 2) has been received. All of this seems to be handled by the library. The user can interact with the client either through callbacks or through tokens associated with specific actions (e.g. publish).
I have the following questions regarding these tools:
The user can register an mqtt::iaction_listener with an action; its callbacks are invoked when the action fails or succeeds. At the same time, there is the mqtt::callback::delivery_complete callback, which fires once the last message of the handshake (PUBACK or PUBCOMP, respectively) has been received. Aren't the success callbacks redundant?
Waiting for message tokens to complete in an asynchronous client does not make sense to me. If I wait for a message to be published, I block the sending thread. Surely, I could use a separate thread for waiting, but that defeats the purpose of an asynchronous client. So do I just omit the wait, especially when sending messages in rapid succession? (see code below)
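For illustration, the non-waiting variant being asked about would look roughly like the sketch below. This is only a fragment meant to replace the publish loop in the test client that follows; it assumes the surrounding code of that client (cli, TOPIC) plus an additional #include <vector>, and it is not a statement about which variant is correct.

    // Hypothetical non-blocking variant of the publish loop: the tokens are kept so
    // completion can be checked once at the end; alternatively the delivery_complete
    // callback already present in the test client could be used for the same purpose.
    std::vector<mqtt::delivery_token_ptr> tokens;
    for (int i = 0; i < 10; ++i)
    {
        mqtt::message_ptr pubmsg = mqtt::make_message(TOPIC, "TESTMSG");
        pubmsg->set_qos(1);
        tokens.push_back(cli.publish(pubmsg)); // no per-message wait
    }
    // Before disconnecting, make sure every QoS 1 handshake has finished.
    for (const auto& tok : tokens)
        tok->wait();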
Test client code:
#include <iostream>
#include <cstdlib>
#include <string>
#include <cstring>
#include <cctype>
#include <thread>
#include <chrono>
#include "mqtt/async_client.h"
const std::string SERVER_ADDRESS("tcp://localhost:1883");
const std::string CLIENT_ID("client1");
const std::string TOPIC("hello");
class Callback : public virtual mqtt::callback
{
    // the mqtt client
    mqtt::async_client& cli_;

    void delivery_complete(mqtt::delivery_token_ptr tok) override
    {
        std::cout << "delivery_complete for token: "
                  << tok->get_message_id() << std::endl;
    }

public:
    Callback(mqtt::async_client& cli) : cli_(cli) {}
};

int main(int argc, char* argv[])
{
    mqtt::async_client cli(SERVER_ADDRESS, CLIENT_ID);

    mqtt::connect_options connOpts;
    connOpts.set_clean_session(false);

    Callback cb(cli);
    cli.set_callback(cb);

    // Connect; waiting makes sense here as publishing without a connection is nonsense
    try {
        cli.connect(connOpts)->wait();
    }
    catch (const mqtt::exception& exc) {
        std::cerr << "\nERROR: Unable to connect to MQTT server: '"
                  << SERVER_ADDRESS << "' " << exc << std::endl;
        return 1;
    }

    // Press 'm' to send messages or 'q' to exit
    while (true)
    {
        char c = std::tolower(std::cin.get());
        if (c == 'q') {
            break;
        }
        else if (c == 'm')
        {
            for (int i = 0; i < 10; ++i)
            {
                mqtt::message_ptr pubmsg = mqtt::make_message(TOPIC, "TESTMSG");
                pubmsg->set_qos(1);
                auto tok = cli.publish(pubmsg);
                tok->wait(); // how is this done correctly with QoS > 0 ?
            }
        }
    }

    // Disconnect
    try {
        cli.disconnect()->wait();
        std::cout << "OK" << std::endl;
    }
    catch (const mqtt::exception& exc) {
        std::cerr << exc << std::endl;
        return 1;
    }

    return 0;
}

Related

Function which calls a service in gRPC

I have to write a C++ function that calls a shared library built from a gRPC client; the function runs the gRPC client and returns the result.
Are there any samples?
Here are some examples; I hope they are helpful.
#include <iostream>
#include <memory>
#include <string>
#include <grpcpp/grpcpp.h>
#ifdef BAZEL_BUILD
#include "examples/protos/helloworld.grpc.pb.h"
#else
#include "helloworld.grpc.pb.h"
#endif
using grpc::Channel;
using grpc::ClientContext;
using grpc::Status;
using helloworld::HelloRequest;
using helloworld::HelloReply;
using helloworld::Greeter;
class GreeterClient
{
public:
    GreeterClient(std::shared_ptr<Channel> channel)
        : stub_(Greeter::NewStub(channel)) {}

    // Assembles the client's payload, sends it and presents the response back from the server.
    std::string SayHello(const std::string &user)
    {
        // Data we are sending to the server.
        HelloRequest request;
        request.set_name(user);

        // Container for the data we expect from the server.
        HelloReply reply;

        // Context for the client. It could be used to convey extra information
        // to the server and/or tweak certain RPC behaviors.
        ClientContext context;

        // The actual RPC.
        Status status = stub_->SayHello(&context, request, &reply);

        // Act upon its status.
        if (status.ok())
        {
            return reply.message();
        }
        else
        {
            std::cout << status.error_code() << ": " << status.error_message() << std::endl;
            return "RPC failed";
        }
    }

private:
    std::unique_ptr<Greeter::Stub> stub_;
};

int main(int argc, char **argv)
{
    // Instantiate the client. It requires a channel, out of which the actual RPCs are created.
    // This channel models a connection to an endpoint (in this case, localhost at port 50051).
    // We indicate that the channel isn't authenticated (use of InsecureChannelCredentials()).
    GreeterClient greeter(grpc::CreateChannel("localhost:50051", grpc::InsecureChannelCredentials()));
    std::string user("world");
    std::string reply = greeter.SayHello(user);
    std::cout << "Greeter received: " << reply << std::endl;
    return 0;
}
OR
#include <iostream>
#include <string>
#include <cstdio>   // for fputs
#include <cstdlib>  // for exit
#include <dlfcn.h>  // for dlopen, dlsym, dlclose

using namespace std;

int main()
{
    void *handle;           // handle to the library
    string (*func)(string); // function pointer
    char *error;            // error message

    handle = dlopen("/home/user/Desktop/grpc_client/libgrpc_client.so", RTLD_NOW); // open the library
    if (!handle)
    {
        fputs(dlerror(), stderr); // print error message
        exit(1);
    }

    // the symbol must be exported with C linkage (extern "C") so dlsym can find it by name
    func = (string(*)(string)) dlsym(handle, "run_grpc_client"); // get the function pointer
    if ((error = dlerror()) != NULL)
    {
        fputs(error, stderr); // print error message
        exit(1);
    }

    string result = func("hello"); // call the function
    cout << result << endl;

    dlclose(handle); // close the library
    return 0;
}

Using zmq::proxy with REQ/REP pattern

I'm trying to understand how zmq::proxy works, but I'm encountering problems: I'd like to have messages routed to the right worker, but it seems like the identity and the envelopes are ignored. In the example I would like to route messages from client1 to worker2, and messages from client2 to worker1, but it seems like the messages are served on a "first available worker" basis.
Am I doing something wrong, or did I misunderstand how the identity works?
#include <atomic>
#include <cassert>
#include <chrono>
#include <iostream>
#include <thread>
#include <mutex>
#include <zmq.hpp>
#include <zmq_addon.hpp>
using namespace zmq;
std::atomic_bool running;
context_t context(4);
std::mutex mtx;
void client_func(std::string name, std::string target, std::string message)
{
std::this_thread::sleep_for(std::chrono::seconds(1));
socket_t request_socket(context, socket_type::req);
request_socket.connect("inproc://router");
request_socket.setsockopt( ZMQ_IDENTITY, name.c_str(), name.size());
while(running)
{
multipart_t msg;
msg.addstr(target);
msg.addstr("");
msg.addstr(message);
std::cout << name << "sent a message: " << message << std::endl;
msg.send(request_socket);
multipart_t reply;
if(reply.recv(request_socket))
{
std::unique_lock<std::mutex> lock(mtx);
std::cout << name << " received a reply!" << std::endl;
for(size_t i = 0 ; i < reply.size() ; i++)
{
std::string theData(static_cast<char*>(reply[i].data()),reply[i].size());
std::cout << "Part " << i << ": " << theData << std::endl;
}
}
std::this_thread::sleep_for(std::chrono::seconds(1));
}
request_socket.close();
}
void worker_func(std::string name, std::string answer)
{
std::this_thread::sleep_for(std::chrono::seconds(1));
socket_t response_socket(context, socket_type::rep);
response_socket.connect("inproc://dealer");
response_socket.setsockopt( ZMQ_IDENTITY, "resp", 4);
while(running)
{
multipart_t request;
if(request.recv(response_socket))
{
std::unique_lock<std::mutex> lock(mtx);
std::cout << name << " received a request:" << std::endl;
for(size_t i = 0 ; i < request.size() ; i++)
{
std::string theData(static_cast<char*>(request[i].data()),request[i].size());
std::cout << "Part " << i << ": " << theData << std::endl;
}
std::string questioner(static_cast<char*>(request[0].data()),request[0].size());
multipart_t msg;
msg.addstr(questioner);
msg.addstr("");
msg.addstr(answer);
msg.send(response_socket);
}
}
response_socket.close();
}
int main(int argc, char* argv[])
{
running = true;
zmq::socket_t dealer(context, zmq::socket_type::dealer);
zmq::socket_t router(context, zmq::socket_type::router);
dealer.bind("inproc://dealer");
router.bind("inproc://router");
std::thread client1(client_func, "Client1", "Worker2", "Ciao");
std::thread client2(client_func, "Client2", "Worker1", "Hello");
std::thread worker1(worker_func, "Worker1","World");
std::thread worker2(worker_func, "Worker2","Mondo");
zmq::proxy(dealer,router);
dealer.close();
router.close();
if(client1.joinable())
client1.join();
if(client2.joinable())
client2.join();
if(worker1.joinable())
worker1.join();
if(worker2.joinable())
worker2.join();
return 0;
}
From the docs:
When the frontend is a ZMQ_ROUTER socket, and the backend is a ZMQ_DEALER socket, the proxy shall act as a shared queue that collects requests from a set of clients, and distributes these fairly among a set of services. Requests shall be fair-queued from frontend connections and distributed evenly across backend connections. Replies shall automatically return to the client that made the original request.
The proxy handles multiple clients and uses multiple workers to process the requests. The identity is used to send the response to the right client. You cannot use the identity to "select" a specific worker.
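To make the fair-queuing behaviour concrete, here is a minimal sketch of the intended ROUTER/DEALER usage, assuming a reasonably recent cppzmq; the endpoints and payloads are made up. The client sends only the request body (no target frame), the worker simply replies, and the identities added by the REQ/ROUTER sockets make the reply return to the originating client; which worker handles a request is decided by fair queuing.

    #include <zmq.hpp>
    #include <iostream>
    #include <thread>

    int main()
    {
        zmq::context_t context(1);

        zmq::socket_t router(context, zmq::socket_type::router); // frontend: clients connect here
        zmq::socket_t dealer(context, zmq::socket_type::dealer); // backend: workers connect here
        router.bind("inproc://frontend");
        dealer.bind("inproc://backend");

        // zmq::proxy blocks until the context is shut down, so it runs on its own thread.
        std::thread proxy_thread([&] {
            try { zmq::proxy(router, dealer); }
            catch (const zmq::error_t&) { /* ETERM when the context is shut down */ }
        });

        std::thread worker([&] {
            zmq::socket_t rep(context, zmq::socket_type::rep);
            rep.connect("inproc://backend");
            zmq::message_t request;
            (void)rep.recv(request);                                    // body only; the routing envelope is handled for us
            rep.send(zmq::str_buffer("World"), zmq::send_flags::none);
        });

        std::thread client([&] {
            zmq::socket_t req(context, zmq::socket_type::req);
            req.connect("inproc://frontend");
            req.send(zmq::str_buffer("Hello"), zmq::send_flags::none); // no target frame
            zmq::message_t reply;
            (void)req.recv(reply);                                      // the reply comes back to this client automatically
            std::cout << "client got: " << reply.to_string() << std::endl;
        });

        client.join();
        worker.join();
        context.shutdown(); // unblocks zmq::proxy
        proxy_thread.join();
    }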

Thread-safety when accessing data from N-theads in context of an async TCP-server

As the title says, I have a question concerning the following scenario (simplified example):
Assume that I have an object of the Generator class below, which continuously updates its dataChunk member (running in the main thread).
class Generator
{
    void generateData();
    uint8_t dataChunk[999];
};
Furthermore, I have an asynchronous acceptor of TCP connections to which 1-N clients can connect (running in a second thread).
The acceptor starts a new thread for each new client connection, in which an object of the Connection class below receives a request message from the client and provides a fraction of the dataChunk (belonging to the Generator) as an answer, then waits for a new request, and so on...
class Connection
{
    void setDataChunk(uint8_t* dataChunk);
    void handleRequest();
    uint8_t* dataChunk;
};
Finally, the actual question: the desired behaviour is that the Generator object generates a new dataChunk and waits until all 1-N Connection objects have dealt with their client requests before it generates a new dataChunk.
How do I lock the dataChunk for write access by the Generator object while the Connection objects deal with their requests, given that all Connection objects in their respective threads are supposed to have read access at the same time during their request-handling phase?
On the other hand, the Connection objects are supposed to wait for a new dataChunk after dealing with their respective request, without dropping a new client request.
--> I think a single mutex won't do the trick here.
My first idea was to share a struct between the objects with a semaphore for the Generator and a vector of semaphores for the connections. With these, every object could "understand" the state of the full-system and work accordingly.
What do you guys think, what is best practice in cases like this?
Thanks in advance!
There are several ways to solve it.
You can use std::shared_mutex.
void Connection::handleRequest()
{
    while (true)
    {
        std::shared_lock<std::shared_mutex> lock(GeneratorObj.shared_mutex);
        if (GeneratorObj.DataIsAvailable()) // we need to know that data is available
        {
            // Send to client
            break;
        }
    }
}

void Generator::generateData()
{
    std::unique_lock<std::shared_mutex> lock(GeneratorObj.shared_mutex);
    // Generate data
}
Or you can use a boost::lockfree::queue, but the data structures will be different.
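For reference, a very rough illustration of what the boost::lockfree::queue alternative could look like; all names here are made up. Note that a queue hands each chunk to exactly one consumer, so the data flow differs from the shared-chunk design above, which is presumably what "data structures will be different" alludes to.

    #include <boost/lockfree/queue.hpp>
    #include <cstdint>

    struct Chunk { uint8_t data[999]; };

    // Fixed-capacity, lock-free queue of finished chunks (pointers are trivially
    // copyable, as boost::lockfree::queue requires).
    boost::lockfree::queue<Chunk*> ready_chunks(64);

    void producer(Chunk* chunk)          // Generator side
    {
        // ... fill chunk->data ...
        ready_chunks.push(chunk);        // publish the finished chunk
    }

    bool consumer()                      // Connection side
    {
        Chunk* chunk = nullptr;
        if (!ready_chunks.pop(chunk))    // non-blocking; false if nothing is ready yet
            return false;
        // ... send chunk->data to the client, then recycle/delete the chunk ...
        return true;
    }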
How do i lock the dataChunk for writing access of the Generator object while the Connection objects deal with their requests, but all Connection objects in their respective threads are supposed to have reading-access at the same time during their request-handling phase.
I'd make a logical chain of operations that includes the generation.
Here's a sample:
it is completely single threaded
accepts unbounded connections and deals with dropped connections
it uses a deadline_timer object to signal a barrier when waiting for the send of a chunk to (many) connections to complete. This makes it convenient to put the generateData call in an async call chain.
Live On Coliru
#include <boost/asio.hpp>
#include <list>
#include <iostream>
namespace ba = boost::asio;
using ba::ip::tcp;
using boost::system::error_code;
using Clock = std::chrono::high_resolution_clock;
using Duration = Clock::duration;
using namespace std::chrono_literals;
struct Generator {
void generateData();
uint8_t dataChunk[999];
};
struct Server {
Server(unsigned short port) : _port(port) {
_barrier.expires_at(boost::posix_time::neg_infin);
_acc.set_option(tcp::acceptor::reuse_address());
accept_loop();
}
void generate_loop() {
assert(n_sending == 0);
garbage_collect(); // remove dead connections, don't interfere with sending
if (_socks.empty()) {
std::clog << "No more connections; pausing Generator\n";
} else {
_gen.generateData();
_barrier.expires_at(boost::posix_time::pos_infin);
for (auto& s : _socks) {
++n_sending;
ba::async_write(s, ba::buffer(_gen.dataChunk), [this,&s](error_code ec, size_t written) {
assert(n_sending);
--n_sending; // even if failed, decreases pending operation
if (ec) {
std::cerr << "Write: " << ec.message() << "\n";
s.close();
}
std::clog << "Written: " << written << ", " << n_sending << " to go\n";
if (!n_sending) {
// green light to generate next chunk
_barrier.expires_at(boost::posix_time::neg_infin);
}
});
}
_barrier.async_wait([this](error_code ec) {
if (ec && ec != ba::error::operation_aborted)
std::cerr << "Client activity: " << ec.message() << "\n";
else generate_loop();
});
}
}
void accept_loop() {
_acc.async_accept(_accepting, [this](error_code ec) {
if (ec) {
std::cerr << "Accept fail: " << ec.message() << "\n";
} else {
std::clog << "Accepted: " << _accepting.remote_endpoint() << "\n";
_socks.push_back(std::move(_accepting));
if (_socks.size() == 1) // first connection?
generate_loop(); // start generator
accept_loop();
}
});
}
void run_for(Duration d) {
_svc.run_for(d);
}
void garbage_collect() {
_socks.remove_if([](tcp::socket& s) { return !s.is_open(); });
}
private:
ba::io_service _svc;
unsigned short _port;
tcp::acceptor _acc { _svc, { {}, _port } };
tcp::socket _accepting {_svc};
std::list<tcp::socket> _socks;
Generator _gen;
size_t n_sending = 0;
ba::deadline_timer _barrier {_svc};
};
int main() {
Server s(6767);
s.run_for(3s); // COLIRU
}
#include <fstream>
// synchronously generate random data chunks
void Generator::generateData() {
std::ifstream ifs("/dev/urandom", std::ios::binary);
ifs.read(reinterpret_cast<char*>(dataChunk), sizeof(dataChunk));
std::clog << "Generated chunk: " << ifs.gcount() << "\n";
}
Prints (for just the 1 client):
Accepted: 127.0.0.1:60870
Generated chunk: 999
Written: 999, 0 to go
Generated chunk: 999
[... snip ~4000 lines ...]
Written: 999, 0 to go
Generated chunk: 999
Write: Broken pipe
Written: 0, 0 to go
No more connections; pausing Generator

Boost ASIO: Send message to all connected clients

I'm working on a project that involves a boost::beast websocket/http mixed server, which runs on top of boost::asio. I've heavily based my project off the advanced_server.cpp example source.
It works fine, but right now I'm attempting to add a feature that requires the sending of a message to all connected clients.
I'm not very familiar with boost::asio, but right now I can't see any way to have something like "broadcast" events (if that's even the correct term).
My naive approach would be to see if I can have the construction of websocket_session() attach something like an event listener, and the destructor detach the listener. At that point, I could just fire the event and have all the currently valid websocket sessions (to which the lifetime of websocket_session() is scoped) execute a callback.
There is https://stackoverflow.com/a/17029022/268006, which does more or less what I want by (ab)using a boost::asio::steady_timer, but that seems like a kind of horrible hack to accomplish something that should be pretty straightforward.
Basically, given a stateful boost::asio server, how can I do an operation on multiple connections?
First off: You can broadcast UDP, but that's not to connected clients. That's just... UDP.
Secondly, that link shows how to have a condition-variable (event)-like interface in Asio. That's only a tiny part of your problem. You forgot about the big picture: you need to know about the set of open connections, one way or the other:
e.g. keeping a container of session pointers (weak_ptr) to each connection
each connection subscribing to a signal slot (e.g. Boost Signals).
Option 1. is great for performance, option 2. is better for flexibility (decoupling the event source from subscribers, making it possible to have heterogeneous subscribers, e.g. not from connections).
Because I think Option 1. is much simpler w.r.t. threading, better w.r.t. efficiency (you can e.g. serve all clients from one buffer without copying) and you probably don't need to doubly decouple the signal/slots, let me refer to an answer where I already showed as much for pure Asio (without Beast):
How to design proper release of a boost::asio socket or wrapper thereof
It shows the concept of a "connection pool" - which is essentially a thread-safe container of weak_ptr<connection> objects with some garbage collection logic.
Demonstration: Introducing Echo Server
After chatting about things I wanted to take the time to actually demonstrate the two approaches, so it's completely clear what I'm talking about.
First let's present a simple, run-of-the-mill asynchronous TCP server with:
multiple concurrent connections
each connected session reads from the client line-by-line, and echoes the same back to the client
stops accepting after 3 seconds, and exits after the last client disconnects
master branch on github
#include <boost/asio.hpp>
#include <memory>
#include <list>
#include <iostream>
namespace ba = boost::asio;
using ba::ip::tcp;
using boost::system::error_code;
using namespace std::chrono_literals;
using namespace std::string_literals;
static bool s_verbose = false;
struct connection : std::enable_shared_from_this<connection> {
connection(ba::io_context& ioc) : _s(ioc) {}
void start() { read_loop(); }
void send(std::string msg, bool at_front = false) {
post(_s.get_io_service(), [=] { // _s.get_executor() for newest Asio
if (enqueue(std::move(msg), at_front))
write_loop();
});
}
private:
void do_echo() {
std::string line;
if (getline(std::istream(&_rx), line)) {
send(std::move(line) + '\n');
}
}
bool enqueue(std::string msg, bool at_front)
{ // returns true if need to start write loop
at_front &= !_tx.empty(); // no difference
if (at_front)
_tx.insert(std::next(begin(_tx)), std::move(msg));
else
_tx.push_back(std::move(msg));
return (_tx.size() == 1);
}
bool dequeue()
{ // returns true if more messages pending after dequeue
assert(!_tx.empty());
_tx.pop_front();
return !_tx.empty();
}
void write_loop() {
ba::async_write(_s, ba::buffer(_tx.front()), [this,self=shared_from_this()](error_code ec, size_t n) {
if (s_verbose) std::cout << "Tx: " << n << " bytes (" << ec.message() << ")" << std::endl;
if (!ec && dequeue()) write_loop();
});
}
void read_loop() {
ba::async_read_until(_s, _rx, "\n", [this,self=shared_from_this()](error_code ec, size_t n) {
if (s_verbose) std::cout << "Rx: " << n << " bytes (" << ec.message() << ")" << std::endl;
do_echo();
if (!ec)
read_loop();
});
}
friend struct server;
ba::streambuf _rx;
std::list<std::string> _tx;
tcp::socket _s;
};
struct server {
server(ba::io_context& ioc) : _ioc(ioc) {
_acc.bind({{}, 6767});
_acc.set_option(tcp::acceptor::reuse_address());
_acc.listen();
accept_loop();
}
void stop() {
_ioc.post([=] {
_acc.cancel();
_acc.close();
});
}
private:
void accept_loop() {
auto session = std::make_shared<connection>(_acc.get_io_context());
_acc.async_accept(session->_s, [this,session](error_code ec) {
auto ep = ec? tcp::endpoint{} : session->_s.remote_endpoint();
std::cout << "Accept from " << ep << " (" << ec.message() << ")" << std::endl;
session->start();
if (!ec)
accept_loop();
});
}
ba::io_context& _ioc;
tcp::acceptor _acc{_ioc, tcp::v4()};
};
int main(int argc, char** argv) {
s_verbose = argc>1 && argv[1] == "-v"s;
ba::io_context ioc;
server s(ioc);
std::thread th([&ioc] { ioc.run(); }); // todo exception handling
std::this_thread::sleep_for(3s);
s.stop(); // active connections will continue
th.join();
}
Approach 1. Adding Broadcast Messages
So, let's add "broadcast messages" that get sent to all active connections simultaneously. We add two:
one at each new connection (saying "Player ## has entered the game")
one that emulates a global "server event", like you described in the question. It gets triggered from within main:
std::this_thread::sleep_for(1s);
auto n = s.broadcast("random global event broadcast\n");
std::cout << "Global event broadcast reached " << n << " active connections\n";
Note how we do this by registering a weak pointer to each accepted connection and operating on each:
_acc.async_accept(session->_s, [this,session](error_code ec) {
auto ep = ec? tcp::endpoint{} : session->_s.remote_endpoint();
std::cout << "Accept from " << ep << " (" << ec.message() << ")" << std::endl;
if (!ec) {
auto n = reg_connection(session);
session->start();
accept_loop();
broadcast("player #" + std::to_string(n) + " has entered the game\n");
}
});
broadcast is also used directly from main and is simply:
size_t broadcast(std::string const& msg) {
return for_each_active([msg](connection& c) { c.send(msg, true); });
}
using-asio-post branch on github
#include <boost/asio.hpp>
#include <memory>
#include <list>
#include <iostream>
namespace ba = boost::asio;
using ba::ip::tcp;
using boost::system::error_code;
using namespace std::chrono_literals;
using namespace std::string_literals;
static bool s_verbose = false;
struct connection : std::enable_shared_from_this<connection> {
connection(ba::io_context& ioc) : _s(ioc) {}
void start() { read_loop(); }
void send(std::string msg, bool at_front = false) {
post(_s.get_io_service(), [=] { // _s.get_executor() for newest Asio
if (enqueue(std::move(msg), at_front))
write_loop();
});
}
private:
void do_echo() {
std::string line;
if (getline(std::istream(&_rx), line)) {
send(std::move(line) + '\n');
}
}
bool enqueue(std::string msg, bool at_front)
{ // returns true if need to start write loop
at_front &= !_tx.empty(); // no difference
if (at_front)
_tx.insert(std::next(begin(_tx)), std::move(msg));
else
_tx.push_back(std::move(msg));
return (_tx.size() == 1);
}
bool dequeue()
{ // returns true if more messages pending after dequeue
assert(!_tx.empty());
_tx.pop_front();
return !_tx.empty();
}
void write_loop() {
ba::async_write(_s, ba::buffer(_tx.front()), [this,self=shared_from_this()](error_code ec, size_t n) {
if (s_verbose) std::cout << "Tx: " << n << " bytes (" << ec.message() << ")" << std::endl;
if (!ec && dequeue()) write_loop();
});
}
void read_loop() {
ba::async_read_until(_s, _rx, "\n", [this,self=shared_from_this()](error_code ec, size_t n) {
if (s_verbose) std::cout << "Rx: " << n << " bytes (" << ec.message() << ")" << std::endl;
do_echo();
if (!ec)
read_loop();
});
}
friend struct server;
ba::streambuf _rx;
std::list<std::string> _tx;
tcp::socket _s;
};
struct server {
server(ba::io_context& ioc) : _ioc(ioc) {
_acc.bind({{}, 6767});
_acc.set_option(tcp::acceptor::reuse_address());
_acc.listen();
accept_loop();
}
void stop() {
_ioc.post([=] {
_acc.cancel();
_acc.close();
});
}
size_t broadcast(std::string const& msg) {
return for_each_active([msg](connection& c) { c.send(msg, true); });
}
private:
using connptr = std::shared_ptr<connection>;
using weakptr = std::weak_ptr<connection>;
std::mutex _mx;
std::vector<weakptr> _registered;
size_t reg_connection(weakptr wp) {
std::lock_guard<std::mutex> lk(_mx);
_registered.push_back(wp);
return _registered.size();
}
template <typename F>
size_t for_each_active(F f) {
std::vector<connptr> active;
{
std::lock_guard<std::mutex> lk(_mx);
for (auto& w : _registered)
if (auto c = w.lock())
active.push_back(c);
}
for (auto& c : active) {
std::cout << "(running action for " << c->_s.remote_endpoint() << ")" << std::endl;
f(*c);
}
return active.size();
}
void accept_loop() {
auto session = std::make_shared<connection>(_acc.get_io_context());
_acc.async_accept(session->_s, [this,session](error_code ec) {
auto ep = ec? tcp::endpoint{} : session->_s.remote_endpoint();
std::cout << "Accept from " << ep << " (" << ec.message() << ")" << std::endl;
if (!ec) {
auto n = reg_connection(session);
session->start();
accept_loop();
broadcast("player #" + std::to_string(n) + " has entered the game\n");
}
});
}
ba::io_context& _ioc;
tcp::acceptor _acc{_ioc, tcp::v4()};
};
int main(int argc, char** argv) {
s_verbose = argc>1 && argv[1] == "-v"s;
ba::io_context ioc;
server s(ioc);
std::thread th([&ioc] { ioc.run(); }); // todo exception handling
std::this_thread::sleep_for(1s);
auto n = s.broadcast("random global event broadcast\n");
std::cout << "Global event broadcast reached " << n << " active connections\n";
std::this_thread::sleep_for(2s);
s.stop(); // active connections will continue
th.join();
}
Approach 2: The Same Broadcasts, But With Boost Signals2
The Signals approach is a fine example of Dependency Inversion.
Most salient notes:
signal slots get invoked on the thread that invokes the signal ("raises the event")
the scoped_connection is there so subscriptions are automatically removed when the connection is destructed
there's a subtle difference in the wording of the console message from "reached # active connections" to "reached # active subscribers".
The difference is key to understanding the added flexibility: the signal owner/invoker does not know anything about the subscribers. That's the decoupling/dependency inversion we're talking about
using-signals2 branch on github
#include <boost/asio.hpp>
#include <memory>
#include <list>
#include <iostream>
#include <boost/signals2.hpp>
namespace ba = boost::asio;
using ba::ip::tcp;
using boost::system::error_code;
using namespace std::chrono_literals;
using namespace std::string_literals;
static bool s_verbose = false;
struct connection : std::enable_shared_from_this<connection> {
connection(ba::io_context& ioc) : _s(ioc) {}
void start() { read_loop(); }
void send(std::string msg, bool at_front = false) {
post(_s.get_io_service(), [=] { // _s.get_executor() for newest Asio
if (enqueue(std::move(msg), at_front))
write_loop();
});
}
private:
void do_echo() {
std::string line;
if (getline(std::istream(&_rx), line)) {
send(std::move(line) + '\n');
}
}
bool enqueue(std::string msg, bool at_front)
{ // returns true if need to start write loop
at_front &= !_tx.empty(); // no difference
if (at_front)
_tx.insert(std::next(begin(_tx)), std::move(msg));
else
_tx.push_back(std::move(msg));
return (_tx.size() == 1);
}
bool dequeue()
{ // returns true if more messages pending after dequeue
assert(!_tx.empty());
_tx.pop_front();
return !_tx.empty();
}
void write_loop() {
ba::async_write(_s, ba::buffer(_tx.front()), [this,self=shared_from_this()](error_code ec, size_t n) {
if (s_verbose) std::cout << "Tx: " << n << " bytes (" << ec.message() << ")" << std::endl;
if (!ec && dequeue()) write_loop();
});
}
void read_loop() {
ba::async_read_until(_s, _rx, "\n", [this,self=shared_from_this()](error_code ec, size_t n) {
if (s_verbose) std::cout << "Rx: " << n << " bytes (" << ec.message() << ")" << std::endl;
do_echo();
if (!ec)
read_loop();
});
}
friend struct server;
ba::streambuf _rx;
std::list<std::string> _tx;
tcp::socket _s;
boost::signals2::scoped_connection _subscription;
};
struct server {
server(ba::io_context& ioc) : _ioc(ioc) {
_acc.bind({{}, 6767});
_acc.set_option(tcp::acceptor::reuse_address());
_acc.listen();
accept_loop();
}
void stop() {
_ioc.post([=] {
_acc.cancel();
_acc.close();
});
}
size_t broadcast(std::string const& msg) {
_broadcast_event(msg);
return _broadcast_event.num_slots();
}
private:
boost::signals2::signal<void(std::string const& msg)> _broadcast_event;
size_t reg_connection(connection& c) {
c._subscription = _broadcast_event.connect(
[&c](std::string msg){ c.send(msg, true); }
);
return _broadcast_event.num_slots();
}
void accept_loop() {
auto session = std::make_shared<connection>(_acc.get_io_context());
_acc.async_accept(session->_s, [this,session](error_code ec) {
auto ep = ec? tcp::endpoint{} : session->_s.remote_endpoint();
std::cout << "Accept from " << ep << " (" << ec.message() << ")" << std::endl;
if (!ec) {
auto n = reg_connection(*session);
session->start();
accept_loop();
broadcast("player #" + std::to_string(n) + " has entered the game\n");
}
});
}
ba::io_context& _ioc;
tcp::acceptor _acc{_ioc, tcp::v4()};
};
int main(int argc, char** argv) {
s_verbose = argc>1 && argv[1] == "-v"s;
ba::io_context ioc;
server s(ioc);
std::thread th([&ioc] { ioc.run(); }); // todo exception handling
std::this_thread::sleep_for(1s);
auto n = s.broadcast("random global event broadcast\n");
std::cout << "Global event broadcast reached " << n << " active subscribers\n";
std::this_thread::sleep_for(2s);
s.stop(); // active connections will continue
th.join();
}
See the diff between Approach 1. and 2.: Compare View on github
A sample of the output when run against 3 concurrent clients with:
(for a in {1..3}; do netcat localhost 6767 < /etc/dictionaries-common/words > echoed.$a& sleep .1; done; time wait)
The answer from @sehe was amazing, so I'll be brief. Generally speaking, to implement an algorithm which operates on all active connections you must do the following:
Maintain a list of active connections. If this list is accessed by multiple threads, it will need synchronization (std::mutex). New connections should be inserted to the list, and when a connection is destroyed or becomes inactive it should be removed from the list.
To iterate the list, synchronization is required if the list is accessed by multiple threads (i.e. more than one thread calling asio::io_context::run, or if the list is also accessed from threads that are not calling asio::io_context::run)
During iteration, if the algorithm needs to inspect or modify the state of any connection, and that state can be changed by other threads, additional synchronization is needed. This includes any internal "queue" of messages that the connection object stores.
A simple way to synchronize a connection object is to use boost::asio::post to submit a function for execution on the connection object's context, which will be either an explicit strand (boost::asio::strand, as in the advanced server examples) or an implicit strand (what you get when only one thread calls io_context::run). Approach 1 provided by @sehe uses post to synchronize in this fashion.
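As a sketch of that idea with an explicit strand (using make_strand from a recent Boost.Asio; the session type and its members here are made up, not part of the examples above):

    #include <boost/asio.hpp>
    #include <string>
    #include <vector>

    struct session {
        explicit session(boost::asio::io_context& ioc)
            : strand_(boost::asio::make_strand(ioc)) {}

        void deliver(std::string msg) {
            // All access to the outgoing queue goes through the strand, so no mutex
            // is needed even when io_context::run() is called from several threads.
            boost::asio::post(strand_, [this, msg = std::move(msg)]() mutable {
                queue_.push_back(std::move(msg));
                // ... start an async_write here if one is not already in flight ...
            });
        }

        boost::asio::strand<boost::asio::io_context::executor_type> strand_;
        std::vector<std::string> queue_;
    };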
Another way to synchronize the connection object is to "stop the world." That means call io_context::stop, wait for all the threads to exit, and then you are guaranteed that no other threads are accessing the list of connections. Then you can read and write connection object state all you want. When you are finished with the list of connections, call io_context::restart and launch the threads which call io_context::run again. Stopping the io_context does not stop network activity; the kernel and network drivers still send and receive data from internal buffers, and TCP/IP flow control will take care of things, so the application still operates smoothly even though it becomes briefly unresponsive during the "stop the world." This approach can simplify things, but depending on your particular application you will have to evaluate whether it is right for you.
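A minimal sketch of that "stop the world" sequence, assuming a plain vector of connections and of worker threads (the connection type and all names are hypothetical):

    #include <boost/asio.hpp>
    #include <memory>
    #include <string>
    #include <thread>
    #include <vector>

    // Hypothetical connection type: only the part relevant to the sketch.
    struct connection {
        void enqueue_message(const std::string& msg) { outbox.push_back(msg); }
        std::vector<std::string> outbox;
    };

    void broadcast_stop_the_world(boost::asio::io_context& ioc,
                                  std::vector<std::thread>& io_threads,
                                  std::vector<std::shared_ptr<connection>>& connections,
                                  const std::string& msg)
    {
        ioc.stop();                      // ask every io_context::run() to return
        for (auto& t : io_threads)       // wait until no thread is running handlers
            t.join();
        io_threads.clear();

        // No handlers can run now, so connection state may be touched without locks.
        for (auto& c : connections)
            c->enqueue_message(msg);

        ioc.restart();                   // make the io_context runnable again
        for (int i = 0; i < 2; ++i)      // relaunch the worker threads; in the real server
            io_threads.emplace_back([&ioc] { ioc.run(); }); // pending async reads keep run() busy
    }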
Hope this helps!
Thank you @sehe for the amazing answer. Still, I think there is a small but severe bug in Approach 2. IMHO reg_connection should look like this:
size_t reg_connection(std::shared_ptr<connection> c) {
    c->_subscription = _broadcast_event.connect(
        [weak_c = std::weak_ptr<connection>(c)](std::string msg) {
            if (auto c = weak_c.lock())
                c->send(msg, true);
        }
    );
    return _broadcast_event.num_slots();
}
Otherwise you can end up with a race condition leading to a server crash. In case the connection instance is destroyed during the call to the lambda, the reference becomes invalid.
Similarly, connection::send() should look like this, because otherwise this might already be destroyed by the time the lambda is called:
void send(std::string msg, bool at_front = false) {
    post(_s.get_io_service(),
        [self = shared_from_this(), msg = std::move(msg), at_front] {
            if (self->enqueue(std::move(msg), at_front))
                self->write_loop();
        });
}
PS: I would have posted this as a comment on @sehe's answer, but unfortunately I do not have enough reputation.

MHD_resume_connection() of libmicrohttpd not working properly with external select

I encountered some problems with MHD_suspend_connection() and MHD_resume_connection() in libmicrohttpd while using the external event loop. I have written a small example (without error handling) below. My question is: what am I doing wrong, or is it a bug in the library? As far as I understand the manual it should work; using external select with suspend/resume is explicitly allowed.
The problem is that connections are not resumed correctly. Processing the connection does not continue right after calling MHD_resume_connection(). In some versions of my program, it did continue after another request came in. In other versions, later requests were not handled at all (access_handler() was never called). In some of these versions I got a response for the first request while stopping libmicrohttpd. When I enable MHD_USE_SELECT_INTERNALLY and remove my external loop (letting it sleep), everything works.
I tested it on Debian (libmicrohttpd 0.9.37) and Arch (libmicrohttpd 0.9.50). The problem exists on both systems, although the behavior may have differed slightly.
#include <algorithm>
#include <csignal>
#include <cerrno>   // for errno
#include <cstring>
#include <iostream>
#include <vector>
#include <sys/select.h>
#include <microhttpd.h>
using std::cerr;
using std::cout;
using std::endl;
static volatile bool run_loop = true;
static MHD_Daemon *ctx = nullptr;
static MHD_Response *response = nullptr;
static std::vector<MHD_Connection*> susspended;
void sighandler(int)
{
run_loop = false;
}
int handle_access(void *cls, struct MHD_Connection *connection,
const char *url, const char *method, const char *version,
const char *upload_data, size_t *upload_data_size,
void **con_cls)
{
static int second_call_marker;
static int third_call_marker;
if (*con_cls == nullptr) {
cout << "New connection" << endl;
*con_cls = &second_call_marker;
return MHD_YES;
} else if (*con_cls == &second_call_marker) {
cout << "Suspending connection" << endl;
MHD_suspend_connection(connection);
susspended.push_back(connection);
*con_cls = &third_call_marker;
return MHD_YES;
} else {
cout << "Send response" << endl;
return MHD_queue_response(connection, 200, response);
}
}
void myapp()
{
std::signal(SIGINT, &sighandler);
std::signal(SIGINT, &sighandler);
ctx = MHD_start_daemon(MHD_USE_DUAL_STACK //| MHD_USE_EPOLL
| MHD_USE_SUSPEND_RESUME | MHD_USE_DEBUG,
8080, nullptr, nullptr,
&handle_access, nullptr,
MHD_OPTION_END);
response = MHD_create_response_from_buffer(4, const_cast<char*>("TEST"),
MHD_RESPMEM_PERSISTENT);
while (run_loop) {
int max;
fd_set rs, ws, es;
struct timeval tv;
struct timeval *tvp;
max = 0;
FD_ZERO(&rs);
FD_ZERO(&ws);
FD_ZERO(&es);
cout << "Wait for IO activity" << endl;
MHD_UNSIGNED_LONG_LONG mhd_timeout;
MHD_get_fdset(ctx, &rs, &ws, &es, &max);
if (MHD_get_timeout(ctx, &mhd_timeout) == MHD_YES) {
//tv.tv_sec = std::min(mhd_timeout / 1000, 1ull);
tv.tv_sec = mhd_timeout / 1000;
tv.tv_usec = (mhd_timeout % 1000) * 1000;
tvp = &tv;
} else {
//tv.tv_sec = 2;
//tv.tv_usec = 0;
//tvp = &tv;
tvp = nullptr;
}
if (select(max + 1, &rs, &ws, &es, tvp) < 0 && errno != EINTR)
throw "select() failed";
cout << "Handle IO activity" << endl;
if (MHD_run_from_select(ctx, &rs, &ws, &es) != MHD_YES)
throw "MHD_run_from_select() failed";
for (MHD_Connection *connection : susspended) {
cout << "Resume connection" << endl;
MHD_resume_connection(connection);
}
susspended.clear();
}
cout << "Stop server" << endl;
MHD_stop_daemon(ctx);
}
int main(int argc, char *argv[])
{
try {
myapp();
} catch (const char *str) {
cerr << "Error: " << str << endl;
cerr << "Errno: " << errno << " (" << strerror(errno) << ")" << endl;
return EXIT_FAILURE;
}
return EXIT_SUCCESS;
}
I've compiled and run your sample on Windows and am seeing the same behavior w/ 0.9.51.
It's not a bug in microhttpd. The problem is that you are resuming a connection before queuing a response on it. The only code you have that creates a response relies on more activity on the connection so it's a catch-22.
The point of MHD_suspend_connection/MHD_resume_connection is to not block new connections while long-running work is going on. Thus typically after suspending the connection you need to kick off that work on another thread to continue while the listening socket is maintained. When that thread has queued the response it can resume the connection and the event loop will know it is ready to send back to the client.
I'm not sure of your other design requirements but you may not need to be implementing external select. That is to say, suspend/resume does not require it (I've used suspend/resume just fine with MHD_USE_SELECT_INTERNALLY, e.g.).
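To make that pattern concrete, here is a minimal sketch based on the handler from the question (using the int-returning callback signature of the libmicrohttpd versions mentioned there; error handling is omitted and the long-running work is simulated with a sleep):

    #include <chrono>
    #include <cstring>
    #include <thread>
    #include <microhttpd.h>

    int handle_access(void *cls, struct MHD_Connection *connection,
                      const char *url, const char *method, const char *version,
                      const char *upload_data, size_t *upload_data_size,
                      void **con_cls)
    {
        static int first_call_marker;

        if (*con_cls == nullptr) {          // first callback for this request
            *con_cls = &first_call_marker;
            return MHD_YES;
        }

        // Park the connection and hand the work to another thread.
        MHD_suspend_connection(connection);

        std::thread([connection] {
            std::this_thread::sleep_for(std::chrono::seconds(2));    // the long-running work

            static const char body[] = "TEST";
            MHD_Response *response = MHD_create_response_from_buffer(
                std::strlen(body), const_cast<char*>(body), MHD_RESPMEM_PERSISTENT);
            MHD_queue_response(connection, MHD_HTTP_OK, response);   // queue the response first...
            MHD_destroy_response(response);
            MHD_resume_connection(connection);                       // ...then resume the connection
        }).detach();

        return MHD_YES;
    }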
I don't know if it has been mentioned, but you also have a multi-threading bug and perhaps an "intent" bug, since the library may or may not use threads depending on other factors. You can check whether threads are in use by printing the thread id from your functions. Your access handler fills the vector (without mutex protection), and then you immediately look at it and resume, potentially from another thread. This goes against the intent of suspend/resume, since suspending is meant for work that takes a long time. The gotcha is that you don't own the calling code, so you don't know when it is completely done; however, you can age your retry with a timeval so you don't resume too soon, with at least a value of tv_usec + 1. Note that you are using the vector from two or more threads without mutex protection.