std::bind arguments not matching function parameters? - c++

I'm trying to pass a socket along a connection handshake, and use std::bind to do so. The compile issue I'm getting (in one continuous block, which I've added spaces to for readability) is:
'std::_Bind<_Functor(_Bound_args ...)>::_Bind(_Functor&&, _Args&& ...)
[with _Args = {socket_state**, std::function<void(socket_state*)>&, boost::asio::basic_socket_acceptor<boost::asio::ip::tcp, boost::asio::executor>&, boost::asio::io_context&};
_Functor = void (*)(socket_state*, std::function<void(socket_state*)>&, boost::asio::basic_socket_acceptor<boost::asio::ip::tcp>&, boost::asio::io_context&);
_Bound_args = {socket_state**, std::function<void(socket_state*)>, boost::asio::basic_socket_acceptor<boost::asio::ip::tcp, boost::asio::executor>, boost::asio::io_context}]':
My code is below; the error appears to point at the std::bind arguments given to acceptor.async_accept(socket, ...) and at the parameters of the accept_new_client method:
void start_server(std::function<void(socket_state*)>& func, tcp::acceptor& acceptor, boost::asio::io_context& context)
{
    acceptor.listen();
    // Start client connection loop
    networking::wait_for_client(func, acceptor, context);
}

void wait_for_client(std::function<void(socket_state*)>& func, tcp::acceptor& acceptor, boost::asio::io_context& context)
{
    boost::asio::ip::tcp::socket socket(context);

    // socket_state is its own class which links a particular socket with an ID and buffer data
    // it also holds a function to indicate which part of the connection handshake it needs to go to next
    socket_state* state = new socket_state(func, &socket);

    acceptor.async_accept(socket, std::bind(&networking::accept_new_client, state, func, acceptor, context));
}

void accept_new_client(socket_state* state, std::function<void(socket_state*)>& func, tcp::acceptor& acceptor, boost::asio::io_context& context)
{
    state->on_network_action(state);
    wait_for_client(func, acceptor, context);
}
It seems like they would match, but as you can see, the error states that my std::bind arguments are socket_state** instead of socket_state*, and boost::asio::basic_socket_acceptor<boost::asio::ip::tcp, boost::asio::executor>& instead of boost::asio::basic_socket_acceptor<boost::asio::ip::tcp>&.
I have no idea what the "with _Args" vs. "_Bound_args" distinction is either.

There are many problems in this code.
The shared pointer seems to be at the wrong level of abstraction. You would want the entire "connection" type to be of shared lifetime, not just the socket. In your case, socket_state is a good candidate.
Regardless, your socket is a local variable that you pass a stale pointer to inside socket_state. The socket_state looks like it will necessarily be leaked.
So that will never work as it stands.
Next up, the bind is binding all parameters eagerly, leaving a nullary signature. That's not what any overload accepts [no pun intended]. You need to match either AcceptHandler or MoveAcceptHandler.
Let's go for AcceptHandler. Also, let's not bind all the redundant args (func was already in the socket_state, remember; io_context is overshared, etc.).
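For reference, the two handler concepts mentioned above correspond to call signatures along these lines (a sketch; the function names are illustrative, not Asio identifiers):
#include <boost/asio.hpp>

// AcceptHandler: invoked with just the result of the accept operation.
void accept_handler(const boost::system::error_code& ec);

// MoveAcceptHandler: additionally receives the newly accepted socket by value.
void move_accept_handler(const boost::system::error_code& ec,
                         boost::asio::ip::tcp::socket peer);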
In general, it looks like you need to develop confidence in knowing where your state is. E.g. this line is symptomatic:
state->on_network_action(state);
Since on_network_action is a member function of socket_state, there should never be any need to pass the state as an argument (it will be this implicitly). The same thing goes for acceptor and context in all occurrences.
Demo
Fixing all of the above, using std::shared_ptr and bind (which you already did); notice the std::placeholders::_1 used to accept the error_code, etc.
Live On Coliru
#include <boost/asio.hpp>
#include <memory>
#include <iostream>

namespace ba = boost::asio;
using namespace std::chrono_literals;
using boost::system::error_code;
using ba::ip::tcp;

struct socket_state;
using Callback = std::function<void(socket_state&)>;

struct socket_state : std::enable_shared_from_this<socket_state> {
    Callback _callback;
    tcp::socket _socket;

    template <typename Executor>
    socket_state(Callback cb, Executor ex) : _callback(cb)
                                           , _socket(ex)
    {
    }

    void on_network_action() {
        std::cout << __PRETTY_FUNCTION__ << std::endl;
    }
};

struct networking {
    using StatePtr = std::shared_ptr<socket_state>;

    explicit networking(ba::io_context& ctx, Callback callback)
        : context(ctx)
        , callback(callback)
    {
    }

    ba::io_context& context;
    tcp::acceptor acceptor {context, {{}, 8989}};
    Callback callback;

    void start_server()
    {
        std::cout << "start_server" << std::endl;
        acceptor.listen();
        wait_for_client(); // Start client connection loop
    }

    void stop_server() {
        std::cout << "stop_server" << std::endl;
        acceptor.cancel();
        acceptor.close();
    }

    void wait_for_client()
    {
        std::cout << "wait_for_client" << std::endl;

        // socket_state is its own class which links a particular socket with
        // an ID and buffer data it also holds a function to indicate which
        // part of the connection handshake it needs to go to next
        auto state =
            std::make_shared<socket_state>(callback, context.get_executor());

        acceptor.async_accept(state->_socket,
                              std::bind(&networking::accept_new_client, this,
                                        std::placeholders::_1, state));
    }

    void accept_new_client(error_code ec, StatePtr state)
    {
        if (ec) {
            std::cout << "accept_new_client " << ec.message() << std::endl;
            return;
        }
        std::cout << "accept_new_client " << state->_socket.remote_endpoint()
                  << std::endl;

        state->on_network_action();

        wait_for_client();
    }
};

int main() {
    ba::io_context ctx;

    networking server(ctx, [](socket_state&) {
        std::cout << "This is our callback" << std::endl;
    });

    server.start_server();
    ctx.run_for(5s);
    server.stop_server();

    ctx.run();
}
With some random connections:
start_server
wait_for_client
accept_new_client 127.0.0.1:54376
void socket_state::on_network_action()
wait_for_client
accept_new_client 127.0.0.1:54378
void socket_state::on_network_action()
wait_for_client
accept_new_client 127.0.0.1:54380
void socket_state::on_network_action()
wait_for_client
accept_new_client 127.0.0.1:54382
void socket_state::on_network_action()
wait_for_client
stop_server
accept_new_client Operation canceled
Note that this version makes the comments
// socket_state is its own class which links a particular socket with
// an ID and buffer data it also holds a function to indicate which
// part of the connection handshake it needs to go to next
no longer complete lies :)

Related

Cannot store bind_front_handler return value in variable

With these present in other parts of my codebase,
namespace net = boost::asio;
using boost::asio::ip::tcp;
boost::asio::io_context& io_context_;
tcp::acceptor acceptor_;
void server::on_accept(boost::beast::error_code ec, boost::asio::ip::tcp::socket socket);
I have noticed that this piece of code compiles:
auto strand = net::make_strand(io_context_);
std::shared_ptr<server> this_pointer = shared_from_this();
acceptor_.async_accept(
    strand,
    boost::beast::bind_front_handler(&server::on_accept, this_pointer)
);
whereas this does not:
auto strand = net::make_strand(io_context_);
std::shared_ptr<server> this_pointer = shared_from_this();
auto call_next = boost::beast::bind_front_handler(&server::on_accept, this_pointer);
acceptor_.async_accept(
    strand,
    call_next
);
and it fails with the error
/usr/include/boost/beast/core/detail/bind_handler.hpp:251:45: error: cannot convert ‘boost::beast::detail::bind_front_wrapper<void (server::*)(boost::system::error_code, boost::asio::basic_stream_socket<boost::asio::ip::tcp>), std::shared_ptr<server> >’ to ‘void (server::*)(boost::system::error_code, boost::asio::basic_stream_socket<boost::asio::ip::tcp>)’ in initialization
251 | , args_(std::forward<Args_>(args)...)
I am very curious why passing the value returned from bind_front_handler directly to the async_accept would work but storing that value in a variable and then passing that variable would not work.
I also understand very little about Boost and Beast right now, but here it appears to me like I am forgetting something very basic about C++ itself. Why are those two pieces of code not equivalent?
Indeed, you should not be doing that. The bind-front wrapper wants to be a temporary (in that it is move-only). You could "fix" it by doing
acceptor_.async_accept(strand, std::move(call_next));
(after which you will have to remember that call_next may not be used again because it has been moved-from).
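The same constraint can be reproduced with any move-only callable. Here is a minimal sketch (post_like is a made-up stand-in for an initiating function that takes its completion handler by value; it is not a Beast or Asio API):
#include <memory>
#include <utility>

template <typename F>
void post_like(F f) { f(); }   // takes the handler by value, like async_accept does

int main() {
    // A move-only callable (it owns a unique_ptr), analogous to the bind-front wrapper.
    auto handler = [p = std::make_unique<int>(42)] { return *p; };

    // post_like(handler);          // would not compile: needs a copy of a move-only type
    post_like(std::move(handler));  // OK: moved explicitly; handler is now moved-from
}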
I would personally go the other way, as this helper was clearly intended, and write the idiomatic
acceptor_.async_accept(
    make_strand(io_context_),
    bind_front_handler(&server::on_accept, shared_from_this()));
Which replaces the entire function.
Demo
Live On Coliru
#include <boost/beast.hpp>
#include <boost/asio.hpp>
#include <iostream>

namespace net = boost::asio;
namespace beast = boost::beast;
using boost::system::error_code;
using net::ip::tcp;

struct server : std::enable_shared_from_this<server> {
    server() {
        acceptor_.listen();
    }

    void start() {
        using beast::bind_front_handler;

        acceptor_.async_accept(
            make_strand(io_context_),
            bind_front_handler(&server::on_accept, shared_from_this()));
    }

    void wait() {
        work_.reset();
        if (thread_.joinable())
            thread_.join();
    }

private:
    void on_accept(error_code ec, tcp::socket&& socket) {
        std::cout << "Accepted connection from " << socket.remote_endpoint() << "\n";

        //// loop to accept more:
        // start();
    }

    net::io_context io_context_;
    tcp::acceptor acceptor_{io_context_, {{}, 9999}};
    net::executor_work_guard<net::io_context::executor_type> work_{
        io_context_.get_executor()};
    std::thread thread_{[this] { io_context_.run(); }};
};

int main()
{
    auto s = std::make_shared<server>();
    s->start();
    s->wait();
}
With
g++ -std=c++20 -O2 -Wall -pedantic -pthread main.cpp
./a.out& sleep .5; nc 127.0.0.1 9999 <<<'hello world'; wait
Prints e.g.
Accepted connection from 127.0.0.1:36402

Why is this ADL resolution ambiguous

I am trying to use the custom allocators feature of the C++ Asio library (http://think-async.com/Asio/asio-1.10.6/doc/asio/overview/core/allocation.html). My code is all in namespace bb, as is a custom allocation function void* asio_handler_allocate(std::size_t size, ...). I expected ADL to pick my custom version, but for some reason it results in an ambiguity:
c:\mysrv\asio\detail\handler_alloc_helpers.hpp(38): error C2668: 'bb::asio_handler_allocate': ambiguous call to overloaded function
1> c:\mysrv\connection.hpp(16): note: could be 'void *bb::asio_handler_allocate(std::size_t,...)' [found using argument-dependent lookup]
1> c:\mysrv\asio\impl\handler_alloc_hook.ipp(27): note: or 'void *asio::asio_handler_allocate(std::size_t,...)'
1> c:\mysrv\asio\detail\handler_alloc_helpers.hpp(38): note: while trying to match the argument list '(std::size_t, bb::Server::do_accept::<lambda_18e060fa7342c1167c1b66e6dfdfd1b2> *)'
Any explanation as to why the second one also matches and/or as to how to use this feature correctly would be appreciated
Thanks
P.S. I am adding the boost-asio tag since it is supposedly the same library, only in a different namespace. I am actually using the stand-alone C++11 version found here: http://think-async.com/
Here is a simplified example:
#include "asio.hpp"
#include <memory>
#include <iostream>
namespace bb {
void* asio_handler_allocate(std::size_t size, ...) {
std::cerr << 'H' << ' ' << /**h <<*/ ' ' << size << '\n';
return asio::asio_handler_allocate(size);
}
class Connection
: public std::enable_shared_from_this<Connection>
{
public:
Connection(asio::ip::tcp::socket socket)
: socket_(std::move(socket))
{
}
void start()
{
do_read();
}
private:
void do_read()
{
auto self(shared_from_this());
socket_.async_read_some(asio::buffer(data_),
[this, self](std::error_code ec, std::size_t length)
{
if (!ec)
{
do_write(length);
}
});
}
void do_write(std::size_t length)
{
auto self(shared_from_this());
asio::async_write(socket_, asio::buffer(data_, length),
[this, self](std::error_code ec, std::size_t /*length*/)
{
if (!ec)
{
do_read();
}
});
}
asio::ip::tcp::socket socket_;
std::array<char, 1024> data_;
};
class Server
{
public:
Server(asio::io_service& io_service, short port)
: acceptor_(io_service, asio::ip::tcp::endpoint(asio::ip::tcp::v4(), port)),
socket_(io_service)
{
do_accept();
}
private:
void do_accept()
{
acceptor_.async_accept(socket_,
[this](std::error_code ec)
{
if (!ec)
{
std::make_shared<Connection>(std::move(socket_))->start();
}
do_accept();
});
}
asio::ip::tcp::acceptor acceptor_;
asio::ip::tcp::socket socket_;
};
}
int main(int argc, char* argv[])
{
try
{
if (argc != 2)
{
std::cerr << "Usage: server <port>\n";
return 1;
}
asio::io_service io_service;
bb::Server s(io_service, std::atoi(argv[1]));
io_service.run();
}
catch (std::exception& e)
{
std::cerr << "Exception: " << e.what() << "\n";
}
return 0;
}
ADL merely adds the arguments' namespaces to the list of namespaces in which the name is looked up (this is not very precise, but for the purpose of your question it's enough). It does not automatically make an overload in that namespace the preferred one. If two equally ranked matches are found, the call is still ambiguous. In your case, the two are exactly the same.
If you read the Asio documentation you linked, you will see that the correct way to declare the ADL overload is
void* asio_handler_allocate(std::size_t size, Handler* handler);
where Handler is the user-defined handler type (possibly with CV qualifiers), which matches the pointer argument shown in your error message. With that declaration, the two overloads found are one whose second parameter is a concrete (pointer) type and one whose second parameter is ... (a C-style ellipsis, not to be confused with C++11 variadic template arguments). Any legal match against a typed parameter is ranked higher than a match against the ellipsis, so there would be no ambiguity.
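The ranking rule can be seen in isolation with a small sketch; lib::allocate, bb::allocate, and my_handler are invented names for illustration and are not part of Asio:
#include <cstddef>
#include <iostream>

namespace lib {
    // Catch-all fallback, analogous to asio::asio_handler_allocate(std::size_t, ...).
    void* allocate(std::size_t n, ...) { std::cout << "lib\n"; return ::operator new(n); }
}

namespace bb {
    struct my_handler {};
    // Typed second parameter: a match here outranks the ellipsis match above.
    void* allocate(std::size_t n, my_handler*) { std::cout << "bb\n"; return ::operator new(n); }
}

int main() {
    bb::my_handler h;
    using lib::allocate;         // make the fallback visible, like the Asio helper does
    void* p = allocate(16, &h);  // ADL also finds bb::allocate; the typed overload wins
    ::operator delete(p);
    // If bb::allocate were instead declared as (std::size_t, ...), both candidates
    // would rank the same and the call would be ambiguous, which is the error above.
}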

Stopping boost::asio::io_service::run() from concurrent destructor

Can anybody explain me why this program does not terminate (see the comments)?
#include <boost/asio/io_service.hpp>
#include <boost/asio.hpp>
#include <memory>
#include <cstdio>
#include <iostream>
#include <future>

class Service {
public:
    ~Service() {
        std::cout << "Destroying...\n";
        io_service.post([this]() {
            std::cout << "clean and stop\n"; // does not get called
            // do some cleanup
            // ...
            io_service.stop();
            std::cout << "Bye!\n";
        });
        std::cout << "...destroyed\n"; // last printed line, blocks
    }

    void operator()() {
        io_service.run();
        std::cout << "run completed\n";
    }

private:
    boost::asio::io_service io_service;
    boost::asio::io_service::work work{io_service};
};

struct Test {
    void start() {
        f = std::async(std::launch::async, [this]() { service(); std::cout << "exiting thread\n"; });
    }

    std::future<void> f;
    Service service;
};

int main(int argc, char* argv[]) {
    {
        Test test;
        test.start();
        std::string exit;
        std::cin >> exit;
    }
    std::cout << "exiting program\n"; // never printed
}
The real issue is that destruction of io_service is (obviously) not thread-safe.
Just reset the work and join the thread. Optionally, set a flag so your IO operations know shutdown is in progress.
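The optional flag can be as simple as the following sketch (not part of the demo below; on_read is an illustrative handler name, not from the question):
#include <atomic>
#include <cstddef>
#include <boost/system/error_code.hpp>

std::atomic<bool> shutting_down{false}; // set to true in the destructor, before resetting the work

void on_read(const boost::system::error_code& ec, std::size_t /*bytes*/) {
    if (ec || shutting_down)
        return;           // shutdown in progress: do not start another async operation
    // ... process the data, then issue the next async_read ...
}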
Your Test and Service classes are trying to share responsibility for the IO service; that doesn't work. Here's a much simplified version, merging the classes and dropping the unused future.
Live On Coliru
The trick was to make the work object optional<>:
#include <boost/asio.hpp>
#include <boost/optional.hpp>
#include <cassert>
#include <iostream>
#include <thread>

struct Service {
    ~Service() {
        std::cout << "clean and stop\n";
        io_service.post([this]() {
            work.reset(); // let io_service run out of work
        });
        if (worker.joinable())
            worker.join();
    }

    void start() {
        assert(!worker.joinable());
        worker = std::thread([this] { io_service.run(); std::cout << "exiting thread\n"; });
    }

private:
    boost::asio::io_service io_service;
    std::thread worker;
    boost::optional<boost::asio::io_service::work> work{io_service};
};

int main() {
    {
        Service test;
        test.start();

        std::cin.ignore(1024, '\n');
        std::cout << "Start shutdown\n";
    }
    std::cout << "exiting program\n"; // now printed (see output below)
}
Prints
Start shutdown
clean and stop
exiting thread
exiting program
See here: boost::asio hangs in resolver service destructor after throwing out of io_service::run()
I think the trick here is to destroy the worker (the work member) before calling io_service.stop(). I.e. in this case the work could be a unique_ptr, and you could call reset() explicitly before stopping the service.
EDIT: The above helped me some time ago in my case, where io_service::stop() didn't stop and kept waiting for dispatched events that never happened.
However, I reproduced the problem you have on my machine, and it seems to be a race condition inside io_service: a race between io_service::post() and the io_service destruction code (shutdown_service). In particular, if shutdown_service() is triggered before the post() notification wakes up the other thread, the shutdown_service() code removes the operation from the queue (and "destroys" it instead of calling it), so the lambda is never called.
For now it seems to me that you'd need to call io_service.stop() directly in the destructor, not postpone it via post(), as that apparently does not work here because of the race.
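A minimal sketch of that suggestion, applied to the questioner's Service class (the thread running run() still has to be joined or waited for elsewhere):
~Service() {
    std::cout << "Destroying...\n";
    io_service.stop();           // stop directly; a post()ed handler may never run due to the race
    std::cout << "...destroyed\n";
}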
I was able to fix the problem by rewriting your code like so:
class Service {
public:
    ~Service() {
        std::cout << "Destroying...\n";
        work.reset();
        std::cout << "...destroyed\n"; // no longer blocks
    }

    void operator()() {
        io_service.run();
        std::cout << "run completed\n";
    }

private:
    boost::asio::io_service io_service;
    std::unique_ptr<boost::asio::io_service::work> work = std::make_unique<boost::asio::io_service::work>(io_service);
};
However, this is largely a bandaid solution.
The problem lies in your design ethos; specifically, in choosing not to tie the lifetime of the executing thread directly to the io_service object:
struct Test {
    void start() {
        f = std::async(std::launch::async, [this]() { service(); std::cout << "exiting thread\n"; });
    }

    std::future<void> f; // Constructed first, deleted last
    Service service;     // Constructed second, deleted first
};
In this particular scenario, the thread is going to continue trying to execute io_service.run() past the lifetime of the io_service object itself. If more than the basic work object were executing on the service, you would very quickly run into undefined behavior from calling member functions of destroyed objects.
You could reverse the order of the member objects in Test:
struct Test {
    void start() {
        f = std::async(std::launch::async, [this]() { service(); std::cout << "exiting thread\n"; });
    }

    Service service;
    std::future<void> f;
};
But it still represents a significant design flaw.
The way that I usually implement anything which uses io_service is to tie its lifetime to the threads that are actually going to be executing on it.
class Service {
public:
    Service(size_t num_of_threads = 1) :
        work(std::make_unique<boost::asio::io_service::work>(io_service))
    {
        for (size_t thread_index = 0; thread_index < num_of_threads; thread_index++) {
            threads.emplace_back([this] { io_service.run(); });
        }
    }

    ~Service() {
        work.reset();
        for (std::thread & thread : threads)
            thread.join();
    }

private:
    boost::asio::io_service io_service;
    std::unique_ptr<boost::asio::io_service::work> work;
    std::vector<std::thread> threads;
};
Now, if you have any infinite loops active on any of these threads, you'll still need to make sure you properly clean those up, but at least the code specific to the operation of this io_service is cleaned up correctly.
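A scope-based usage sketch of that pattern (assuming the Service class above; the sleep merely stands in for application code that would post work to the service through whatever interface you add):
#include <chrono>
#include <thread>

int main() {
    Service service(4);  // four worker threads call io_service.run()

    // ... hand the service to components that post work to it ...
    std::this_thread::sleep_for(std::chrono::seconds(1));

}   // ~Service(): work.reset() lets run() return, then every worker thread is joined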

Boost async_write callback as a member function of another class instance

I have a protocol structure where one class takes care of protocol states (Protocol) and another class takes care of sending and receiving messages (Comm).
I'm using boost::asio in asynchronous mode.
So I have the following code structure:
#include <string>
#include <iostream>
#include "boost/asio.hpp"
#include "boost/bind.hpp"

class Comm {
public:
    Comm::Comm();
    void SendMessage(std::string message, void (callback) (const boost::system::error_code& errorCode, std::size_t bytesTranferred));

private:
    boost::asio::io_service ioService;
    std::shared_ptr<boost::asio::ip::tcp::socket> mySocket;
};

Comm::Comm()
{
    boost::asio::ip::tcp::resolver resolver(ioService);
    boost::asio::ip::tcp::resolver::query query("192.168.0.1");
    boost::asio::ip::tcp::resolver::iterator iterator = resolver.resolve(query);
    mySocket->connect(*iterator);
}

void Comm::SendMessage(std::string message, void (callback) (const boost::system::error_code& errorCode, std::size_t bytesTranferred))
{
    mySocket->async_send(boost::asio::buffer(message.c_str(), message.length()), boost::bind(&callback)); // <<< ERROR HERE
}

class Protocol {
public:
    void SendMessage(std::string message);
    void SendMessageHandler(const boost::system::error_code& errorCode, std::size_t bytesTranferred);

private:
    Comm channel;
};

void Protocol::SendMessage(std::string message)
{
    channel.SendMessage(message, &SendMessageHandler); // <<< ERROR HERE
}

void Protocol::SendMessageHandler(const boost::system::error_code& errorCode, std::size_t bytesTranferred)
{
    if (!errorCode)
        std::cout << "Send OK" << std::endl;
    else
        std::cout << "Send FAIL." << std::endl;
}
As shown, I need the async_send callback to be a non-static member function of the caller's class, so I have to pass the callback function into SendMessage and use it as a parameter of async_send.
Both of these statements are not compiling. I've tried variations but I can't figure out what's going on here.
Help appreciated.
Try something like this, using binding to a class method:
void Comm::SendMessage(std::string message, boost::function<void(const boost::system::error_code&, std::size_t)> callback)
{
    mySocket->async_send(boost::asio::buffer(message.c_str(), message.length()), callback);
}

// ... later
channel.SendMessage(message, boost::bind(&Protocol::SendMessageHandler, this, _1, _2)); // _1/_2 forward the error code and byte count
Note, more importantly, you have a number of unfixable bugs here:
You are taking std::string message by value several times; it will copy the content.
Comm::SendMessage uses a local message object, which will be destroyed before the async operation completes (boost::asio::buffer will not copy the content).
It will be hard to use 2 or more Comm objects, since each has its own ioService (you will not be able to run them all at the same time).
There is no shared_ptr or any other means to control object lifetime: your SendMessageHandler can be called when Protocol is already destroyed.
Protocol does not control write parallelism, and multiple SendMessage calls can lead to mixed buffers being written into the socket; this can/will send complete garbage over the network.
There are more fatal/minor issues; there is no point in searching for them all.
Consider taking one of the asio examples as a base usage pattern.
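As a sketch of how the buffer-lifetime issue (the second point above) is usually addressed, keep the data in a shared_ptr until the completion handler has run. This assumes the Comm class from the question with mySocket already connected, and the boost::function-based SendMessage signature shown above:
#include <memory>

void Comm::SendMessage(std::string message,
                       boost::function<void(const boost::system::error_code&, std::size_t)> callback)
{
    // The shared_ptr owns the bytes for as long as the async operation needs them.
    auto buffer = std::make_shared<std::string>(std::move(message));

    boost::asio::async_write(*mySocket, boost::asio::buffer(*buffer),
        [buffer, callback](const boost::system::error_code& ec, std::size_t bytes) {
            callback(ec, bytes); // buffer (and the string it owns) is released after this returns
        });
}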

Integrate boost::asio into file descriptor based eventloops (select/poll)

If I want to integrate stuff from boost::asio into an eventloop that is based on file descriptors (select/poll), how can I achieve it? Other libraries with asynchronous functions offer to hand out a file descriptor that becomes readable as soon as there is work to be done, so that you can integrate it into the select/poll of the eventloop and let it call a processing callback of the library (like a single shot event processing).
A great example would be an asynchronous name resolver in a thread pool, like discussed in this question.
Based on the example in this answer I came up with this solution that uses a generic handler, which writes into a wake-up pipe and then posts the handler call into another io_service. The read end of the pipe can be used in a file descriptor based event loop and the callback run_handler() is called from there, which clears the pipe and runs pending handlers in the main thread.
#include <iostream>
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/optional.hpp>
#include <boost/thread.hpp>

// Headers for the POSIX pipe/poll plumbing and std::stringstream used below.
#include <fcntl.h>
#include <poll.h>
#include <unistd.h>
#include <sstream>

/// @brief Type used to emulate asynchronous host resolution with a
/// dedicated thread pool.
class resolver {
public:
    resolver(const std::size_t pool_size)
        : work_(boost::ref(resolver_service_)) {
        // Create wake-up pipe
        pipe(pipe_);
        fcntl(pipe_[0], F_SETFL, O_NONBLOCK);

        // Create pool.
        for (std::size_t i = 0; i < pool_size; ++i)
            threads_.create_thread(boost::bind(&boost::asio::io_service::run,
                                               &resolver_service_));
    }

    ~resolver() {
        work_ = boost::none;
        threads_.join_all();
    }

    template <typename QueryOrEndpoint, typename Handler>
    void async_resolve(QueryOrEndpoint query, Handler handler) {
        resolver_service_.post(boost::bind(
            &resolver::do_async_resolve<QueryOrEndpoint, Handler>, this,
            query, handler));
    }

    // callback for eventloop in main thread
    void run_handler() {
        char c;
        // clear wake-up pipe
        while (read(pipe_[0], &c, 1) > 0);
        // run handler posted from resolver threads
        handler_service_.poll();
        handler_service_.reset();
    }

    // get read end of wake up pipe for polling in eventloop
    int fd() {
        return pipe_[0];
    }

private:
    /// @brief Resolve address and invoke continuation handler.
    template <typename QueryOrEndpoint, typename Handler>
    void do_async_resolve(const QueryOrEndpoint& query, Handler handler) {
        typedef typename QueryOrEndpoint::protocol_type protocol_type;
        typedef typename protocol_type::resolver resolver_type;

        // Resolve synchronously, as synchronous resolution will perform work
        // in the calling thread. Thus, it will not use Boost.Asio's internal
        // thread that is used for asynchronous resolution.
        boost::system::error_code error;
        resolver_type resolver(resolver_service_);
        typename resolver_type::iterator result = resolver.resolve(query, error);

        // post handler callback to service running in main thread
        handler_service_.post(boost::bind(handler, error, result));
        // wake up eventloop in main thread
        write(pipe_[1], "*", 1);
    }

private:
    boost::asio::io_service resolver_service_;
    boost::asio::io_service handler_service_;
    boost::optional<boost::asio::io_service::work> work_;
    boost::thread_group threads_;
    int pipe_[2];
};

template <typename ProtocolType>
void handle_resolve(
    const boost::system::error_code& error,
    typename ProtocolType::resolver::iterator iterator) {
    std::stringstream stream;
    stream << "handle_resolve:\n"
              " " << error.message() << "\n";
    if (!error)
        stream << " " << iterator->endpoint() << "\n";
    std::cout << stream.str();
    std::cout.flush();
}

int main() {
    // Resolver will emulate asynchronous host resolution with a pool of 5
    // threads.
    resolver resolver(5);

    namespace ip = boost::asio::ip;
    resolver.async_resolve(
        ip::udp::resolver::query("localhost", "12345"),
        &handle_resolve<ip::udp>);
    resolver.async_resolve(
        ip::tcp::resolver::query("www.google.com", "80"),
        &handle_resolve<ip::tcp>);
    resolver.async_resolve(
        ip::udp::resolver::query("www.stackoverflow.com", "80"),
        &handle_resolve<ip::udp>);
    resolver.async_resolve(
        ip::icmp::resolver::query("some.other.address", "54321"),
        &handle_resolve<ip::icmp>);

    pollfd fds;
    fds.fd = resolver.fd();
    fds.events = POLLIN;

    // simple eventloop
    while (true) {
        if (poll(&fds, 1, 2000))    // waiting for wakeup call
            resolver.run_handler(); // call resolve handler
        else
            break;
    }
}
Several of the objects in the Boost.Asio library expose a native_handle() for scenarios like this.
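For example (a sketch; native_handle() is the accessor on basic_socket, but whether the raw descriptor is usable for readiness polling depends on the object and the platform):
#include <boost/asio.hpp>
#include <iostream>

int main() {
    boost::asio::io_service service;
    boost::asio::ip::tcp::socket socket(service);
    socket.open(boost::asio::ip::tcp::v4());

    // On POSIX this is the underlying file descriptor, usable with select()/poll().
    auto fd = socket.native_handle();
    std::cout << "descriptor: " << fd << "\n";
}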