Boost.Asio contrived example inexplicably blocking - c++

I've used Boost.Asio extensively, but I've come across a problem with a unit test that I don't understand. I've reduced the problem down to a very contrived example:
#include <string>
#include <chrono>
#include <thread>
#include <mutex>
#include <condition_variable>
#include <boost/asio.hpp>
#define BOOST_TEST_MODULE My_Module
#define BOOST_TEST_DYN_LINK
#include <boost/test/unit_test.hpp>
#include <boost/test/auto_unit_test.hpp>
using namespace std::string_literals;
using namespace std::chrono_literals;
namespace BA = boost::asio;
namespace BAI = BA::ip;
BOOST_AUTO_TEST_CASE(test)
{
    std::mutex m;
    std::condition_variable cv;
    BA::io_service servicer;
    auto io_work = std::make_unique<BA::io_service::work>(servicer);
    auto thread = std::thread{[&]() {
        servicer.run();
    }};

    auto received_response = false;
    auto server_buf = std::array<std::uint8_t, 4096>{};
    auto server_sock = BAI::tcp::socket{servicer};
    auto acceptor = BAI::tcp::acceptor{servicer,
                                       BAI::tcp::endpoint{BAI::tcp::v4(), 20123}};
    acceptor.async_accept(server_sock, [&](auto&& ec) {
        if (ec) {
            BOOST_TEST_MESSAGE(ec.message());
        }
        BOOST_REQUIRE(!ec);
        BOOST_TEST_MESSAGE("Accepted connection from " << server_sock.remote_endpoint() <<
                           ", reading...");
        BA::async_read(server_sock,
                       BA::buffer(server_buf),
                       [&](auto&& ec, auto&& bytes_read) {
                           std::unique_lock<decltype(m)> ul(m);
                           received_response = true;
                           if (ec) {
                               BOOST_TEST_MESSAGE(ec.message());
                           }
                           BOOST_REQUIRE(!ec);
                           const auto str = std::string{server_buf.begin(),
                                                        server_buf.begin() + bytes_read};
                           BOOST_TEST_MESSAGE("Read: " << str);
                           ul.unlock();
                           cv.notify_one();
                       });
    });

    const auto send_str = "hello"s;
    auto client_sock = BAI::tcp::socket{servicer, BAI::tcp::v4()};
    client_sock.async_connect(BAI::tcp::endpoint{BAI::tcp::v4(), 20123},
                              [&](auto&& ec) {
        if (ec) {
            BOOST_TEST_MESSAGE(ec.message());
        }
        BOOST_REQUIRE(!ec);
        BOOST_TEST_MESSAGE("Connected...");
        BA::async_write(client_sock,
                        BA::buffer(send_str),
                        [&](auto&& ec, auto&& bytes_written) {
                            if (ec) {
                                BOOST_TEST_MESSAGE(ec.message());
                            }
                            BOOST_REQUIRE(!ec);
                            BOOST_TEST_MESSAGE("Written " << bytes_written << " bytes");
                        });
    });

    std::unique_lock<decltype(m)> ul(m);
    cv.wait_for(ul, 2s, [&]() { return received_response; });
    BOOST_CHECK(received_response);

    io_work.reset();
    servicer.stop();
    if (thread.joinable()) {
        thread.join();
    }
}
which I compile with:
g++ -std=c++17 source.cc -l boost_unit_test_framework -pthread -l boost_system -ggdb
The output is:
Accepted connection from 127.0.0.1:51688, reading...
Connected...
Written 5 bytes
And then it times out.
Running through the debugger shows that the async_read handler is never called. Pausing execution during the phase where nothing seems to be happening shows the main thread waiting on the condition_variable (cv) and the io_service thread sitting in an epoll_wait.
I seem to be deadlocking but can't see how.

This is how the function is defined to work: it waits until it has read exactly the number of bytes that the buffer has space for (http://www.boost.org/doc/libs/1_62_0/doc/html/boost_asio/reference/async_read/overload1.html).
Try this one instead: http://www.boost.org/doc/libs/1_62_0/doc/html/boost_asio/reference/async_read/overload2.html
You can give a callback that decides whether the read is complete. That could include waiting for and checking a length provided by another channel, either once the writer has written its message (if you've determined a deadlock-free way to do that) or just before the message proper.
Adding this completion condition makes it work:
[&](auto&& ec, auto&& bytes_read) {
    // ask for however many of the 5 expected bytes are still missing
    return bytes_read < 5 ? 5 - bytes_read : 0;
},
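For reference, Asio also ships ready-made completion conditions that express the same idea; here is a sketch of the question's read using boost::asio::transfer_exactly (assuming the 5-byte "hello" payload from the question):
BA::async_read(server_sock,
               BA::buffer(server_buf),
               BA::transfer_exactly(5), // complete once exactly 5 bytes have arrived
               [&](auto&& ec, auto&& bytes_read) {
                   // ... same read handler as in the question ...
               });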

The answer provided by @codeshot is correct, but it is one of several solutions - which is most appropriate depends entirely upon the protocol you're using across the TCP connection.
For example, in a traditional Key-Length-Value style protocol, you would do two reads:
Use boost::asio::async_read (or equivalent) to read into a fixed-length buffer, obtaining the fixed-length header
Use the length specified by the header to create a buffer of the required size, and repeat step 1 using it
There's a good example of this in the chat server example code.
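To make the two steps concrete, here is a minimal sketch of such a length-prefixed read. The 4-byte big-endian length header is an assumed wire format, and read_message is an illustrative name, not something from the chat server example:
#include <arpa/inet.h> // ntohl
#include <cstdint>
#include <memory>
#include <vector>
#include <boost/asio.hpp>

// Step 1: read the fixed-size header; step 2: read exactly the announced
// number of body bytes. sock is assumed to be an established tcp::socket.
void read_message(boost::asio::ip::tcp::socket& sock)
{
    auto header = std::make_shared<std::uint32_t>(0);
    boost::asio::async_read(
        sock, boost::asio::buffer(header.get(), sizeof *header),
        [&sock, header](boost::system::error_code ec, std::size_t) {
            if (ec)
                return;
            // size the body buffer from the header, then read it in full
            auto body = std::make_shared<std::vector<std::uint8_t>>(ntohl(*header));
            boost::asio::async_read(
                sock, boost::asio::buffer(*body),
                [body](boost::system::error_code ec, std::size_t) {
                    if (!ec) { /* the complete value is in *body */ }
                });
        });
}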
If you were using HTTP or RTSP (the latter is what I was trying to do), then you don't know how much data is coming; all you care about is receiving a packet's worth of data (I know this is an oversimplification given the Content-Length header in responses, chunked transfer encoding, etc., but bear with me). For this you need async_read_some (or equivalent), see the HTTP server example.
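In that style the core of the read loop looks roughly like this (a sketch only; parse_some stands in for whatever incremental parser the protocol needs and is not a real API):
#include <array>
#include <boost/asio.hpp>

void parse_some(const char* data, std::size_t n); // hypothetical incremental parser

void do_read(boost::asio::ip::tcp::socket& sock, std::array<char, 4096>& buf)
{
    // take whatever has arrived (anywhere from 1 byte up to the buffer size),
    // hand it to the parser, then re-arm the read for more
    sock.async_read_some(boost::asio::buffer(buf),
        [&sock, &buf](boost::system::error_code ec, std::size_t n) {
            if (ec)
                return;
            parse_some(buf.data(), n);
            do_read(sock, buf);
        });
}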

Related

Boost Asio experimental channel poor performance

I wrote the following code to analyze experimental channel performance in a single-threaded application. On an i7-6700HQ @ 3.2GHz it takes around 1 second to complete, which shows a throughput of around 3M items per second.
The problem might be that, because Asio is in single-threaded mode, the producer has to signal the consumer, which leads to immediate resumption of the consumer coroutine on every call to async_send(). But I don't know how to test whether this is actually the case, nor how to avoid it in real applications. Reducing the channel buffer size, even to 0, has no effect on the throughput, which might be for the same reason.
#include <boost/asio.hpp>
#include <boost/asio/experimental/awaitable_operators.hpp>
#include <boost/asio/experimental/channel.hpp>

namespace asio = boost::asio;
using namespace asio::experimental::awaitable_operators;

using channel_t = asio::experimental::channel<void(boost::system::error_code, uint64_t)>;

asio::awaitable<void> producer(channel_t& ch)
{
    for (uint64_t i = 0; i < 3'000'000; i++)
        co_await ch.async_send(boost::system::error_code{}, i, asio::use_awaitable);
    ch.close();
}

asio::awaitable<void> consumer(channel_t& ch)
{
    for (;;)
        co_await ch.async_receive(asio::use_awaitable);
}

asio::awaitable<void> experiment()
{
    channel_t ch{co_await asio::this_coro::executor, 1000};
    co_await (consumer(ch) && producer(ch));
}

int main()
{
    asio::io_context ctx{};
    asio::co_spawn(ctx, experiment(), asio::detached);
    ctx.run();
}
You can save a little by providing hints about the threading:
- provide the concurrency hint unsafe (BOOST_ASIO_CONCURRENCY_HINT_UNSAFE)
- optionally disable all threading (in practice this will probably not matter; it's only possible as long as you don't need any services that employ internal threads)
- avoid type erasure on the executor; this means replacing any_io_executor with the concrete executor type that you employ
I wrote a side-by-side benchmark with reduced message-count (30k) so that Nonius can sample 100 runs and do statistical analysis on the results:
//#define TWEAKS
#ifdef TWEAKS
#define BOOST_ASIO_DISABLE_THREADS 1
#endif
#include <boost/asio.hpp>
#include <boost/asio/experimental/awaitable_operators.hpp>
#include <boost/asio/experimental/channel.hpp>
#include <iostream>

namespace asio = boost::asio;
using namespace asio::experimental::awaitable_operators;
using boost::system::error_code;
using context = asio::io_context;

#ifdef TWEAKS
using executor_t = context::executor_type;
using channel_t = asio::experimental::channel<executor_t, void(error_code, uint64_t)>;
#else
using executor_t = asio::any_io_executor;
using channel_t = asio::experimental::channel<void(error_code, uint64_t)>;
#endif

asio::awaitable<void> producer(channel_t& ch) {
    for (uint64_t i = 0; i < 30'000; i++)
        co_await ch.async_send(error_code{}, i, asio::use_awaitable);
    ch.close();
}

asio::awaitable<void> consumer(channel_t& ch) {
    for (;;)
        co_await ch.async_receive(asio::use_awaitable);
}

asio::awaitable<void> experiment() {
    asio::any_io_executor ex = co_await asio::this_coro::executor;
    channel_t ch{*ex.target<executor_t>(), 1000};
    co_await (consumer(ch) && producer(ch));
}

void foo() {
    try {
#ifdef TWEAKS
        asio::io_context ctx{BOOST_ASIO_CONCURRENCY_HINT_UNSAFE};
#else
        asio::io_context ctx{1};
#endif
        asio::co_spawn(ctx, experiment(), asio::detached);
        ctx.run();
    } catch (std::exception& e) {
        std::cerr << "Exception: " << e.what() << "\n";
    }
}

#include <nonius/benchmark.h++>
#define NONIUS_RUNNER
#include <nonius/main.h++>

NONIUS_BENCHMARK( //
    "foo", //
    [](nonius::chronometer cm) { cm.measure([] { foo(); }); })
The results per 30k batch (including construction and teardown) are:
Without TWEAKS: mean 12.091 ± 0.233 ms (full data graph)
With TWEAKS defined: mean 8.784 ± 0.097 ms (full data graph)
So ~25% speed increase, and also much reduced variance.
Thoughts
These are just the Asio technical tweaks. I might be missing some still.
I suspect you should be able to get much better throughput with smart buffering. I'm assuming you need the Asio integration for other reasons, making this the right choice.
It turned out the consumer and producer sides are scheduled in the event loop on each send/receive operation; that's why the channel size has no effect on the throughput.
I've changed the code to the following and now it can send 90M per second, which is what I expected from the implementation.
asio::awaitable<void> producer(channel_t& ch)
{
    for (uint64_t i = 0; i < 90'000'000; i++) {
        if (!ch.try_send(boost::system::error_code{}, i))
            co_await ch.async_send(boost::system::error_code{}, i, asio::use_awaitable);
    }
    ch.close();
}

asio::awaitable<void> consumer(channel_t& ch)
{
    for (;;) {
        if (!ch.try_receive([](auto, auto) {}))
            co_await ch.async_receive(asio::use_awaitable);
    }
}
I think the reason this is not the default behavior of channels is that there is no way for awaitables in Asio to return true from await_ready(), so they always have to suspend and initiate an asynchronous operation.

How to make a timeout at receiving in boost::asio udp::socket?

I'm creating a single-threaded application which exchanges with another one via UDP. When the other side disconnects, my socket::receive_from blocks, and I don't know how to solve this problem without turning the entire program multi-threaded or asynchronous.
I thought the following might be a solution:
std::chrono::milliseconds timeout{4};
boost::system::error_code err;
data_t buffer(kPackageMaxSize);
std::size_t size = 0;

const auto status = std::async(std::launch::async, [&] {
    size = socket_.receive_from(boost::asio::buffer(buffer), dst_, 0, err);
}).wait_for(timeout);

switch (status)
{
    case std::future_status::timeout: /*...*/ break;
}
But I ran into a new problem: Qt Creator (GDB 11.1) started crashing while I'm debugging (I don't have the ability to try anything else yet). And even when run without the debugger, the solution doesn't always work.
PS. As for "it doesn't work when debugging": debugging (specifically, breakpoints) obviously changes timing. Also, keep in mind that network operations have varying latency and UDP isn't a guaranteed protocol: messages may not be delivered.
Asio stands for "Asynchronous IO". As you might suspect, this means that asynchronous IO is a built-in feature; it's the entire purpose of the library. See overview/core/async.html: Concurrency Without Threads.
It's not necessary to complicate things with std::async. In your case I'd suggest using async_receive_from with use_future, as it is closest to the model you opted for:
Live On Coliru
#include <boost/asio.hpp>
#include <iostream>
#include <iomanip>
#include <vector>

namespace net = boost::asio;
using net::ip::udp;
using namespace std::chrono_literals;

constexpr auto kPackageMaxSize = 65520;
using data_t = std::vector<char>;

int main() {
    net::thread_pool ioc;
    udp::socket socket_(ioc, udp::v4());
    socket_.bind({{}, 8989});

    udp::endpoint ep;
    data_t buffer(kPackageMaxSize);

    auto fut =
        socket_.async_receive_from(net::buffer(buffer), ep, net::use_future);

    switch (fut.wait_for(4ms)) {
    case std::future_status::ready: {
        buffer.resize(fut.get()); // never blocks here
        std::cout << "Received " << buffer.size() << " bytes: "
                  << std::quoted(std::string_view(buffer.data(), buffer.size()))
                  << "\n";
        break;
    }
    case std::future_status::timeout:
    case std::future_status::deferred: {
        std::cout << "Timeout\n";
        socket_.cancel(); // stop the IO operation
        // fut.get() would throw system_error(net::error::operation_aborted)
        break;
    }
    }

    ioc.join();
}
The Coliru output:
Received 12 bytes: "Hello World
"

Close Boost Websocket from Server side, C++, tcp::acceptor accept() timeout?

UPDATE:
Well, it appears that I need to address my issue with an asynchronous implementation. I will update my posting with a new direction once I've completed testing.
Original:
I'm currently writing a multiserver application that will collect, share, and request information from multiple machines. In some cases, Machine A will request information from Machine B but will need it sent to Machine C, which will reply to A. Without getting too deep into what the application is going to do, I need some help with my client application.
I have my client application designed with two threads. I used this example from Boost as the basis for my design.
Thread one will open a client websocket with Machine-A and stream a series of data points and commands. Here is a stripped-down version of my code:
#include "Poco/Clock.h"
#include "Poco/Task.h"
#include "Poco/Thread.h"
#include <boost/asio.hpp>
#include <boost/beast.hpp>
#include <jsoncons/json.hpp>
namespace beast = boost::beast; // from <boost/beast.hpp>
namespace http = beast::http; // from <boost/beast/http.hpp>
namespace websocket = beast::websocket; // from <boost/beast/websocket.hpp>
namespace net = boost::asio; // from <boost/asio.hpp>
using tcp = net::ip::tcp; // from <boost/asio/ip/tcp.hpp>
class ResponseChannel : public Poco::Runnable {
void do_session(tcp::socket socket)
{
try {
websocket::stream<tcp::socket> ws{std::move(socket)};
ws.set_option(websocket::stream_base::decorator(
[](websocket::response_type& res) {
res.set(http::field::server,
std::string(BOOST_BEAST_VERSION_STRING) +
" websocket-server-sync");
}));
ws.accept();
for (;;) {
beast::flat_buffer buffer;
ws.read(buffer);
if (ws.got_binary()) {
// do something
}
}
} catch (beast::system_error const& se) {
if (se.code() != websocket::error::closed) {
std::cerr << "do_session1 ->: " << se.code().message()
<< std::endl;
return;
}
} catch (std::exception const& e) {
std::cerr << "do_session2 ->: " << e.what() << std::endl;
return;
}
}
virtual void run()
{
auto const address = net::ip::make_address(host);
auto const port = static_cast<unsigned short>(respPort);
try {
net::io_context ioc{1};
tcp::acceptor acceptor{ioc, {address, port}};
tcp::socket socket{ioc};
for (; keep_running;) {
acceptor.accept(socket);
std::thread(&ResponseChannel::do_session, this,
std::move(socket))
.detach();
}
} catch (const std::exception& e) {
std::cout << "run: " << e.what() << std::endl;
}
}
void _terminate() { keep_running = false; }
public:
std::string host;
int respPort;
bool keep_running = true;
int responseCount = 0;
std::vector<long long int> latency_times;
long long int time_sum;
Poco::Clock* responseClock;
};
int main()
{
using namespace std::chrono_literals;
Poco::Clock clock = Poco::Clock();
Poco::Thread response_thread;
ResponseChannel response_channel;
response_channel.responseClock = &clock;
response_channel.host = "0.0.0.0";
response_channel.respPort = 8080;
response_thread.start(response_channel);
response_thread.setPriority(Poco::Thread::Priority::PRIO_HIGH);
// doing some work here. work will vary depending on command-line arguments
std::this_thread::sleep_for(30s);
response_channel.keep_running = false;
response_thread.join();
}
The way I have designed it, the multiple machines work as expected with regard to sending commands to Machine-B and receiving results from Machine-C.
The issue I'm facing is closing out Thread 2, which contains my local response channel.
I went back and forth between Poco::Thread and Poco::Task, but I decided that I do not want to use Task, as it would be a mistake to be able to close the 2nd thread/task from the main thread; I need to know that all packets have been received before closing down the 2nd thread.
So I need to close things down only once I have received a websocket::error::closed flag from Machine-C. Shutting down the websocket's detached thread is no issue, as the flag's arrival takes care of that for me.
However, as part of the loop process for reconnecting after a closed socket, the thread just waits for a new connection.
acceptor.accept(socket);
It's blocking, and from the documentation there doesn't seem to be a timeout feature. I see that there is a close option, but my attempt to use close simply threw an exception, which ultimately added complexity I didn't want.
Ultimately, I want the Server to continuously loop through a series of connections from both Machine-B and Machine-C, but only after my client application has ended. The last thing I do before waiting for the Poco::Thread to complete is to set the flag that I no longer want the Websocket server to run.
I've put that flag check before the blocking accept() call, but this would only work with perfect timing: after the flag goes up, a new connection would have to be opened and then closed before the loop gets back to waiting for the next connection.
Ideally, there would be a timeout so that the loop would come around, first checking whether it timed out, allowing a periodic check of whether I want the thread to remain open.
Has anyone ever run into this?
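For reference, one way to get that periodic check is to make the accept asynchronous and pump the io_context in bounded slices. This is only an untested sketch against the code above (run_for, restart, and the move-accepting async_accept are standard Asio calls; the keep_running wiring is illustrative and assumes the flag becomes a std::atomic<bool>):
// Replaces the blocking accept loop in run(): issue one async accept,
// then run the io_context in 500 ms slices so the flag is re-checked.
net::io_context ioc{1};
tcp::acceptor acceptor{ioc, {address, port}};

while (keep_running) {
    bool done = false;
    acceptor.async_accept([&](boost::system::error_code ec, tcp::socket socket) {
        done = true;
        if (!ec)
            std::thread(&ResponseChannel::do_session, this,
                        std::move(socket)).detach();
    });
    while (keep_running && !done) {
        ioc.run_for(std::chrono::milliseconds(500)); // bounded wait
        ioc.restart();
    }
    if (!done) {
        acceptor.cancel(); // flag dropped: abort the pending accept
        ioc.run();         // let the handler finish with operation_aborted
        ioc.restart();
    }
}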

setting the execution rate of while loop in a C++ code for real time synchronization

I am doing a real-time simulation using a .cpp source. I have to take a sample every 0.2 seconds (200 ms). There is a while loop that takes a sample every time step, and I want to synchronize its execution so that I get a sample every 200 ms. How should I modify the while loop?
while (1){
// get a sample every 200 ms
}
A simple and accurate solution with std::this_thread::sleep_until:
#include "date.h"
#include <chrono>
#include <iostream>
#include <thread>
int
main()
{
using namespace std::chrono;
using namespace date;
auto next = steady_clock::now();
auto prev = next - 200ms;
while (true)
{
// do stuff
auto now = steady_clock::now();
std::cout << round<milliseconds>(now - prev) << '\n';
prev = now;
// delay until time to iterate again
next += 200ms;
std::this_thread::sleep_until(next);
}
}
"date.h" isn't needed for the delay part. It is there to provide the round<duration> function (which is now in C++17), and to make it easier to print out durations. This is all under "do stuff", and doesn't matter for the loop delay.
Just get a chrono::time_point, add your delay to it, and sleep until that time_point. Your loop will on average stay true to your delay, as long as your "stuff" takes less time than your delay. No other thread needed. No timer needed. Just <chrono> and sleep_until.
This example just output for me:
200ms
205ms
200ms
195ms
205ms
198ms
202ms
199ms
196ms
203ms
...
What you are asking is tricky unless you are using a real-time operating system.
However, Boost has a library that supports what you want. (There is, however, no guarantee that you are going to be called exactly every 200 ms.)
The Boost.Asio library is probably what you are looking for; here is code from their tutorial:
//
// timer.cpp
// ~~~~~~~~~
//
// Copyright (c) 2003-2012 Christopher M. Kohlhoff (chris at kohlhoff dot com)
//
// Distributed under the Boost Software License, Version 1.0. (See accompanying
// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
//

#include <iostream>
#include <boost/asio.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>

int main()
{
    boost::asio::io_service io;
    boost::asio::deadline_timer t(io, boost::posix_time::seconds(5));
    t.wait();
    std::cout << "Hello, world!\n";
    return 0;
}
The tutorial is here: link to boost asio.
You could take this code and re-arrange it like this:
#include <iostream>
#include <boost/asio.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>

int main()
{
    boost::asio::io_service io;
    while (1)
    {
        boost::asio::deadline_timer t(io, boost::posix_time::seconds(5));
        // process your IO here - not sure how long your IO takes,
        // so you may need to adjust your timer
        t.wait();
    }
    return 0;
}
There is also a tutorial for handling the IO asynchronously on the next page(s).
The offered answers show that there are tools available in Boost to help you accomplish this. My late offering illustrates how to use setitimer(), which is a POSIX facility for interval timers.
You basically need a change like this:
while (1) {
    // wait until the next 200 ms boundary
    // get a sample
}
With an interval timer, the fired signal will interrupt any blocking system call. So you can just block on something forever; select will do fine for that:
while (1) {
    int select_result = select(0, 0, 0, 0, 0);
    assert(select_result < 0 && errno == EINTR);
    // get a sample
}
To establish an interval timer that fires every 200 ms, use setitimer(), passing in an appropriate interval. In the code below, we set a 200 ms interval, where the first expiration fires 150 ms from now.
struct itimerval it = { { 0, 200000 }, { 0, 150000 } };
if (setitimer(ITIMER_REAL, &it, 0) != 0) {
    perror("setitimer");
    exit(EXIT_FAILURE);
}
Now, you just need to install a signal handler for SIGALRM that does nothing, and the code is complete.
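A minimal sketch of that handler installation (plain POSIX sigaction; the empty body is the whole point):
#include <signal.h>

// The handler deliberately does nothing: it only needs to exist so that
// SIGALRM interrupts the blocking select() with EINTR instead of
// terminating the process.
void sigalarm_handler(int) {}

void install_handler()
{
    struct sigaction sa {};
    sa.sa_handler = sigalarm_handler;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0; // no SA_RESTART: we *want* select() to return EINTR
    sigaction(SIGALRM, &sa, 0);
}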
You can follow the link to see the completed example.
If it is possible for multiple signals to be fired during the program's execution, then instead of relying on the interrupted system call, it is better to block on something that the SIGALRM handler can wake up in a deterministic way. One possibility is to have the while loop block on a read of the read end of a pipe; the signal handler can then write to the write end of that pipe.
void sigalarm_handler (int)
{
    if (write(alarm_pipe[1], "", 1) != 1) {
        char msg[] = "write: failed from sigalarm_handler\n";
        write(2, msg, sizeof(msg)-1);
        abort();
    }
}
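The sampling loop then blocks on the read end of the pipe instead of on select; here is a minimal sketch of that side (assuming alarm_pipe is the int[2] filled in by pipe() before the timer is armed):
while (1) {
    char wakeup;
    ssize_t n = read(alarm_pipe[0], &wakeup, 1); // blocks until the handler writes
    if (n == 1) {
        // get a sample
    } else if (n < 0 && errno != EINTR) {
        perror("read");
        exit(EXIT_FAILURE);
    }
}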
Follow the link to see the completed example.
#include <thread>
#include <chrono>
#include <iostream>

int main() {
    std::thread timer_thread;
    while (true) {
        timer_thread = std::thread([]() {
            std::this_thread::sleep_for(std::chrono::seconds(1));
        });

        // do stuff
        std::cout << "Hello World!" << std::endl;

        // waits until thread has "slept"
        timer_thread.join();
        // will loop every second unless the stuff takes longer than that.
    }
    return 0;
}
Getting absolute precision will be nearly impossible, except perhaps on an embedded system. However, if you only require an approximate frequency, you can get pretty decent performance with a chrono library such as std::chrono (C++11) or boost::chrono, like so:
while (1) {
    const auto start = std::chrono::system_clock::now();
    // run sample
    const auto end = std::chrono::system_clock::now();
    const auto elapsed =
        std::chrono::duration_cast<std::chrono::milliseconds>(end - start);
    // sleep for whatever remains of the 200 ms budget (never negative);
    // note that sleep_for takes a duration, not a raw millisecond count
    const auto sleep_for = std::max(std::chrono::milliseconds{0},
                                    std::chrono::milliseconds{200} - elapsed);
    std::this_thread::sleep_for(sleep_for);
}

using boost:asio with select? blocking on TCP input OR file update

I had intended to have a thread in my program that would wait on two file descriptors: one for a socket, and a second describing the file system (specifically, waiting to see whether a new file is added to a directory). Since I expect to rarely see either a new file or a new TCP message coming in, I wanted one thread waiting for either input and handling whichever is detected when it occurs, rather than bothering with separate threads.
I then (finally!) got permission from the 'boss' to use Boost. So now I want to replace the basic sockets with Boost.Asio. Only I'm running into a small problem: it seems Asio implemented its own version of select rather than providing an FD I could use with select directly. This leaves me uncertain how I can block on both conditions, new file and TCP input, at the same time, when one only works with select and the other doesn't seem to support it. Is there an easy workaround I'm missing?
Asio is best used asynchronously (that's what it stands for): you can set up handlers for both TCP reads and file-descriptor activity, and the handlers will be called for you.
Here's a demo example to get you started (written for Linux with inotify support):
#include <iostream>
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <sys/inotify.h>

namespace asio = boost::asio;

void start_notify_handler();
void start_accept_handler();

// this stuff goes into your class, only global for the simplistic demo
asio::streambuf buf(1024);
asio::io_service io_svc;
asio::posix::stream_descriptor stream_desc(io_svc);
asio::ip::tcp::socket sock(io_svc);
asio::ip::tcp::endpoint end(asio::ip::tcp::v4(), 1234);
asio::ip::tcp::acceptor acceptor(io_svc, end);

// this gets called on file system activity
void notify_handler(const boost::system::error_code&,
                    std::size_t transferred)
{
    buf.commit(transferred); // make the newly-read bytes visible in buf.data()
    size_t processed = 0;
    while (transferred - processed >= sizeof(inotify_event))
    {
        const char* cdata = processed
            + asio::buffer_cast<const char*>(buf.data());
        const inotify_event* ievent =
            reinterpret_cast<const inotify_event*>(cdata);
        processed += sizeof(inotify_event) + ievent->len;
        if (ievent->len > 0 && ievent->mask & IN_OPEN)
            std::cout << "Someone opened " << ievent->name << '\n';
    }
    buf.consume(processed); // discard what we've parsed
    start_notify_handler();
}

// this gets called when someone connects to you on TCP port 1234
void accept_handler(const boost::system::error_code&)
{
    std::cout << "Someone connected from "
              << sock.remote_endpoint().address() << '\n';
    sock.close(); // dropping connection: this is just a demo
    start_accept_handler();
}

void start_notify_handler()
{
    stream_desc.async_read_some(buf.prepare(buf.max_size()),
        boost::bind(&notify_handler, asio::placeholders::error,
                    asio::placeholders::bytes_transferred));
}

void start_accept_handler()
{
    acceptor.async_accept(sock,
        boost::bind(&accept_handler, asio::placeholders::error));
}

int main()
{
    int raw_fd = inotify_init(); // error handling ignored
    stream_desc.assign(raw_fd);
    inotify_add_watch(raw_fd, ".", IN_OPEN);

    start_notify_handler();
    start_accept_handler();
    io_svc.run();
}