I'm new to the boost::asio and boost::process libraries, and I've come across a problem I'm struggling to find a solution for...
Consider that I have a small toy program that does the following:
First, it fork()s itself into a parent-branch and a child-branch.
The child-branch then uses the boost::process::child class to invoke the unix command ls in an asynchronous context.
The child-branch supplies the boost::process::child class with a boost::process::async_pipe to direct std_out to.
The parent-branch wishes to read what has been written to the pipe, line by line, and process it further.
Currently, my implementation of this works up to a point. However, the read_loop() call in the parent-branch does not terminate. It is almost as if it never reaches EOF, or is blocked. Why is this?
Here is my MWE:
#include <boost/process.hpp>
#include <boost/asio.hpp>
#include <iostream>
#include <string>
#include <unistd.h>
void read_loop(boost::process::async_pipe& pipe)
{
static boost::asio::streambuf buffer;
boost::asio::async_read_until(
pipe,
buffer,
'\n',
[&](boost::system::error_code error_code, std::size_t bytes) {
if (!error_code) {
std::istream is(&buffer);
if (std::string line; std::getline(is, line)) {
std::cout << "Read Line: " << line << "\n";
}
read_loop(pipe);
}
else {
std::cout << "Error in read_loop()!\n";
pipe.close();
}
}
);
}
int main(int argc, char* argv[])
{
boost::asio::io_context io_context{};
boost::process::async_pipe pipe{ io_context };
io_context.notify_fork(boost::asio::io_context::fork_prepare);
pid_t pid{ fork() };
if (pid == 0) {
io_context.notify_fork(boost::asio::io_context::fork_child);
boost::process::child child(
boost::process::args({ "/usr/bin/ls", "/etc/" }),
boost::process::std_out > pipe,
boost::process::on_exit([&](int exit, std::error_code error_code) { std::cout << "[Exited with code " << exit << " (" << error_code.message() << ")]\n"; }),
io_context
);
io_context.run();
}
else {
io_context.notify_fork(boost::asio::io_context::fork_parent);
read_loop(pipe);
io_context.run();
}
return 0;
}
This will successfully give the (abridged) output, as expected:
Read Line: adduser.conf
...
[Exited with code 0 (Success)]
...
Read Line: zsh_command_not_found
but will then just hang until it is forcibly killed.
Which leaves the main question: why does my read_loop() function end up blocking/not exiting correctly?
Thanks in advance!
Chasing The Symptom
The process not "seeing" EOF makes me think you have to close the unused ends of the pipe: after fork(), both processes hold both pipe ends, and as long as the parent keeps its copy of the write end open, the read side can never reach EOF. This is somewhat hacky, but works:
Live On Coliru
#include <boost/asio.hpp>
#include <boost/process.hpp>
#include <iostream>
namespace bp = boost::process;
void read_loop(bp::async_pipe& pipe) {
static boost::asio::streambuf buffer;
using boost::system::error_code;
async_read_until( //
pipe, buffer, '\n', [&](error_code ec, [[maybe_unused]] size_t bytes) {
// std::cout << "Handler " << ec.message() << " bytes:" << bytes << " (" <<
// buffer.size() << ")" << std::endl;
if (!ec) {
std::istream is(&buffer);
if (std::string line; std::getline(is, line)) {
std::cout << "Read Line: " << line << "\n";
}
read_loop(pipe);
} else {
std::cout << "Loop exit (" << ec.message() << ")" << std::endl;
pipe.close();
}
});
}
int main() {
boost::asio::io_context ioc{};
bp::async_pipe pipe{ioc};
ioc.notify_fork(boost::asio::io_context::fork_prepare);
pid_t pid{fork()};
if (pid == 0) {
ioc.notify_fork(boost::asio::io_context::fork_child);
bp::child child( //
bp::args({"/usr/bin/ls", "/etc/"}), bp::std_out > pipe, bp::std_in.close(),
bp::on_exit([&](int exit, std::error_code ec) {
std::cout << "[Exited with code " << exit << " (" << ec.message() << ")]\n";
pipe.close();
}),
ioc);
ioc.run();
} else {
ioc.notify_fork(boost::asio::io_context::fork_parent);
std::move(pipe).sink().close();
read_loop(pipe);
ioc.run();
}
}
Side note: I guess it would be nice to have a more unhacky way to specify this, like (bp::std_in < pipe).close() or so.
Fixing The Root Cause
When using Boost Process, the fork is completely redundant. Boost Process literally does the fork for you, complete with correct service notification and file-descriptor handling.
You'll find the code becomes a lot simpler and also handles the closing correctly (likely because of assumptions within Boost Process's implementation details):
Live On Coliru
#include <boost/asio.hpp>
#include <boost/process.hpp>
#include <iostream>
namespace bp = boost::process;
void read_loop(bp::async_pipe& pipe) {
static boost::asio::streambuf buffer;
static std::string line; // re-used because we can
async_read_until( //
pipe, buffer, '\n',
[&](boost::system::error_code ec, size_t /*bytes*/) {
if (ec) {
std::cout << "Loop exit (" << ec.message() << ")" << std::endl;
return;
}
if (getline(std::istream(&buffer), line))
std::cout << "Read Line: " << line << "\n";
read_loop(pipe);
});
}
int main() {
boost::asio::io_context ioc{};
bp::async_pipe pipe{ioc};
bp::child child( //
bp::args({"/bin/ls", "/etc/"}), bp::std_out > pipe,
bp::on_exit([&](int exit, std::error_code ec) {
std::cout << "[Exited with " << exit << " (" << ec.message()
<< ")]\n";
}));
read_loop(pipe);
ioc.run();
}
I have to handle information from 100 ports in parallel for 100ms per second.
I am using Ubuntu OS.
I did some research and saw that the poll() function is a good candidate for avoiding opening 100 threads to handle data arriving over UDP in parallel.
I wrote the main part with Boost and tried to integrate poll() with it.
The problem is that when the client sends data to the server, I receive nothing.
According to Wireshark, the data arrives at the right host (localhost, port 1234).
Did I miss something or get something wrong?
The test code (server) :
#include <deque>
#include <iostream>
#include <chrono>
#include <thread>
#include <sys/poll.h>
#include <boost/optional.hpp>
#include <boost/asio.hpp>
#include <boost/bind/bind.hpp>
using boost::asio::ip::udp;
using namespace boost::asio;
using namespace std::chrono_literals;
std::string ip_address = "127.0.0.1";
template<typename T, size_t N>
size_t arraySize( T(&)[N] )
{
return(N);
}
class UdpReceiver
{
using Resolver = udp::resolver;
using Sockets = std::deque<udp::socket>;
using EndPoint = udp::endpoint;
using Buffer = std::array<char, 100>; // receiver buffer
public:
explicit UdpReceiver()
: work_(std::ref(resolver_context)), thread_( [this]{ resolver_context.run(); })
{ }
~UdpReceiver()
{
work_ = boost::none; // reset the work guard so run() can return
thread_.join();
}
void async_resolve(udp::resolver::query const& query_) {
resolver_context.post([this, query_] { do_resolve(query_); });
}
// callback for event-loop in main thread
void run_handler(int fd_idx) {
// start reading
auto result = read(fd_idx, receive_buf.data(), sizeof(Buffer));
// increment number of received packets
received_packets = received_packets + 1;
std::cout << "Received bytes " << result << " current recorded packets " << received_packets <<'\n';
// run handler posted from resolver threads
handler_context.poll();
handler_context.reset();
}
static void handle_receive(boost::system::error_code error, udp::resolver::iterator const& iterator) {
std::cout << "handle_resolve:\n"
" " << error.message() << "\n";
if (!error)
std::cout << " " << iterator->endpoint() << "\n";
}
// get current file descriptor
int fd(size_t idx)
{
return sockets[idx].native_handle();
}
private:
void do_resolve(boost::asio::ip::udp::resolver::query const& query_) {
boost::system::error_code error;
Resolver resolver(resolver_context);
Resolver::iterator result = resolver.resolve(query_, error);
sockets.emplace_back(udp::socket(resolver_context, result->endpoint()));
// post handler callback to service running in main thread
resolver_context.post(boost::bind(&UdpReceiver::handle_receive, error, result));
}
private:
Sockets sockets;
size_t received_packets = 0;
EndPoint remote_receiver;
Buffer receive_buf {};
io_context resolver_context;
io_context handler_context;
boost::optional<boost::asio::io_context::work> work_;
std::thread thread_;
};
int main (int argc, char** argv)
{
UdpReceiver udpReceiver;
udpReceiver.async_resolve(udp::resolver::query(ip_address, std::to_string(1234)));
//logic
pollfd fds[2] { };
for(int i = 0; i < arraySize(fds); ++i)
{
fds[i].fd = udpReceiver.fd(0);
fds[i].events = 0;
fds[i].events |= POLLIN;
fcntl(fds[i].fd, F_SETFL, O_NONBLOCK);
}
// simple event-loop
while (true) {
if (poll(fds, arraySize(fds), -1)) // waiting for wakeup call. Timeout - inf
{
for(auto &fd : fds)
{
if(fd.revents & POLLIN) // checking if we have something to read
{
fd.revents = 0; // reset kernel message
udpReceiver.run_handler(fd.fd); // call resolve handler. Do read !
}
}
}
}
return 0;
}
This looks like a confused mix of C-style poll code and Asio code. The point is:
you don't need poll(): Asio does it internally (using epoll/select/kqueue/IOCP, whatever is available)
UDP is connectionless, so you don't need more than one socket to receive from all "connections" (senders)
I'd replace it all with a single udp::socket on a single thread. You don't even have to manage the thread/work:
net::thread_pool io(1); // single threaded
udp::socket s{io, {{}, 1234}};
Let's run an asynchronous receive loop for 5s:
std::array<char, 100> receive_buffer;
udp::endpoint sender;
std::function<void(error_code, size_t)> read_loop;
read_loop = [&](error_code ec, size_t bytes) {
if (bytes != size_t(-1)) {
//std::cout << "read_loop (" << ec.message() << ")\n";
if (ec)
return;
received_packets += 1;
unique_senders.insert(sender);
//std::cout << "Received:" << bytes << " sender:" << sender << " recorded:" << received_packets << "\n";
//std::cout << std::string_view(receive_buffer.data(), bytes) << "\n";
}
s.async_receive_from(net::buffer(receive_buffer), sender, read_loop);
};
read_loop(error_code{}, -1); // prime the async pump
// after 5s stop
std::this_thread::sleep_for(5s);
post(io, [&s] { s.cancel(); });
io.join();
At the end, we can report the statistics:
std::cout << "A total of " << received_packets << " were received from "
<< unique_senders.size() << " unique senders\n";
With a simulated load in bash:
function client() { while read a; do echo "$a" > /dev/udp/localhost/1234 ; done < /etc/dictionaries-common/words; }
for a in {1..20}; do client& done; time wait
We get:
A total of 294808 were received from 28215 unique senders
real 0m5,007s
user 0m0,801s
sys 0m0,830s
This is obviously not optimized; the bottleneck here is likely the many bash subshells being launched for the clients.
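If you want to rule out the bash overhead, a plain blocking Asio sender can generate the load from C++ instead. A minimal sketch, not part of the original test (the endpoint and packet count are arbitrary):
#include <boost/asio.hpp>
#include <string>
namespace net = boost::asio;
using net::ip::udp;
int main() {
    net::io_context io;
    udp::socket s(io, udp::v4()); // one socket suffices for sending, too
    udp::endpoint server(net::ip::make_address("127.0.0.1"), 1234);
    for (int i = 0; i < 100000; ++i) {
        std::string msg = "packet " + std::to_string(i);
        s.send_to(net::buffer(msg), server); // a blocking send keeps the sketch simple
    }
}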
Full Listing
#include <boost/asio.hpp>
#include <boost/bind/bind.hpp>
#include <iostream>
#include <set>
namespace net = boost::asio;
using boost::asio::ip::udp;
using boost::system::error_code;
using namespace std::chrono_literals;
int main ()
{
net::thread_pool io(1); // single threaded
udp::socket s{io, {{}, 1234}};
std::set<udp::endpoint> unique_senders;
size_t received_packets = 0;
{
std::array<char, 100> receive_buffer;
udp::endpoint sender;
std::function<void(error_code, size_t)> read_loop;
read_loop = [&](error_code ec, size_t bytes) {
if (bytes != size_t(-1)) {
//std::cout << "read_loop (" << ec.message() << ")\n";
if (ec)
return;
received_packets += 1;
unique_senders.insert(sender);
//std::cout << "Received:" << bytes << " sender:" << sender << " recorded:" << received_packets << "\n";
//std::cout << std::string_view(receive_buffer.data(), bytes) << "\n";
}
s.async_receive_from(net::buffer(receive_buffer), sender, read_loop);
};
read_loop(error_code{}, -1); // prime the async pump
// after 5s stop
std::this_thread::sleep_for(5s);
post(io, [&s] { s.cancel(); });
io.join();
}
std::cout << "A total of " << received_packets << " were received from "
<< unique_senders.size() << " unique senders\n";
}
I'm implementing a TCP client which reads and sends files and strings, and I'm using Boost as my main library. I'd like to continue reading or sending files while I keep sending strings, which in this case are the commands sent to the server. For this purpose I thought about using a thread pool in order not to overload the client. My question is: can I use futures to run callbacks when one of the threads in the pool ends? If I can't, is there any other solution?
I was doing something like this, where pool_ is a boost::asio::thread_pool:
void send_file(std::string const& file_path){
boost::asio::post(pool_, [this, &file_path] {
handle_send_file(file_path);
});
// DO SOMETHING WHEN handle_send_file ENDS
}
void handle_send_file(std::string const& file_path) {
boost::array<char, 1024> buf{};
boost::system::error_code error;
std::ifstream source_file(file_path, std::ios_base::binary | std::ios_base::ate);
if(!source_file) {
std::cout << "[ERROR] Failed to open " << file_path << std::endl;
//TODO handle the error
}
size_t file_size = source_file.tellg();
source_file.seekg(0);
std::string file_size_readable = file_size_to_readable(file_size);
// First send file name and file size in bytes to server
boost::asio::streambuf request;
std::ostream request_stream(&request);
request_stream << file_path << "\n"
<< file_size << "\n\n"; // Consider sending readable version, does it change anything?
// Send the request
boost::asio::write(*socket_, request, error);
if(error){
std::cout << "[ERROR] Send request error:" << error << std::endl;
//TODO throw an exception? Here I'll have to check whether the server is working or not
}
if(DEBUG) {
std::cout << "[DEBUG] " << file_path << " size is: " << file_size_readable << std::endl;
std::cout << "[DEBUG] Start sending file content" << std::endl;
}
long bytes_sent = 0;
float percent = 0;
print_percentage(percent);
while(!source_file.eof()) {
source_file.read(buf.c_array(), (std::streamsize)buf.size());
int bytes_read_from_file = source_file.gcount(); //int is fine because i read at most buf's size, 1024 in this case
if(bytes_read_from_file<=0) {
std::cout << "[ERROR] Read file error" << std::endl;
break;
//TODO handle this error
}
percent = std::ceil((100.0 * bytes_sent) / file_size);
print_percentage(percent);
boost::asio::write(*socket_, boost::asio::buffer(buf.c_array(), source_file.gcount()),
boost::asio::transfer_all(), error);
if(error) {
std::cout << "[ERROR] Send file error:" << error << std::endl;
//TODO throw an exception?
}
bytes_sent += bytes_read_from_file;
}
std::cout << "\n" << "[INFO] File " << file_path << " sent successfully!" << std::endl;
}
The operations posted to the pool end without the threads ending. That's the whole purpose of pooling the threads.
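To see this concretely, here is a tiny standalone sketch (illustrative only, not from your code): three tasks run to completion while the pool's single thread stays alive and gets reused.
#include <boost/asio.hpp>
#include <iostream>
#include <thread>
int main() {
    boost::asio::thread_pool pool(1); // one long-lived thread
    for (int i = 0; i < 3; ++i)
        post(pool, [i] { // each task ends; the thread does not
            std::cout << "task " << i << " on thread " << std::this_thread::get_id() << "\n";
        });
    pool.join(); // waits for the tasks, then stops the pool's thread
}
Back to your snippet: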
void send_file(std::string const& file_path){
post(pool_, [this, &file_path] {
handle_send_file(file_path);
});
// DO SOMETHING WHEN handle_send_file ENDS
}
This has several issues. The largest one is that you should not capture file_path by reference, as the argument is soon out of scope, and the handle_send_file call will run at an unspecified time in another thread. That's a race condition and dangling reference. Undefined Behaviour results.
Then the
// DO SOMETHING WHEN handle_send_file ENDS
is on a line which has no sequence relation with handle_send_file. In fact, it will probably run before that operation ever has a chance to start.
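A minimal sketch of both fixes combined (the full version follows below): take the argument by value so the closure owns its own copy, and move the follow-up code inside the posted task.
void send_file(std::string file_path) { // by value, not by reference
    post(pool_, [this, file_path] { // the closure owns a copy now
        handle_send_file(file_path);
        // DO SOMETHING WHEN handle_send_file ENDS - now correctly sequenced
    });
}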
Simplifying
Here's a simplified version:
#include <boost/array.hpp>
#include <boost/asio.hpp>
#include <fstream>
#include <future> // std::future, std::packaged_task (used below)
#include <iostream>
namespace asio = boost::asio;
using asio::ip::tcp;
static asio::thread_pool pool_;
struct X {
std::unique_ptr<tcp::socket> socket_;
explicit X(unsigned short port) : socket_(new tcp::socket{ pool_ }) {
socket_->connect({ {}, port });
}
void send_file(std::string file_path) {
post(pool_, [=, this] {
send_file_implementation(file_path);
// DO SOMETHING WHEN send_file_implementation ENDS
});
}
// throws system_error exception
void send_file_implementation(std::string file_path) {
std::ifstream source_file(file_path,
std::ios_base::binary | std::ios_base::ate);
size_t file_size = source_file.tellg();
source_file.seekg(0);
write(*socket_,
asio::buffer(file_path + "\n" + std::to_string(file_size) + "\n\n"));
boost::array<char, 1024> buf{};
while (source_file.read(buf.c_array(), buf.size()) ||
source_file.gcount() > 0)
{
int n = source_file.gcount();
if (n <= 0) {
using namespace boost::system;
throw system_error(errc::io_error, system_category());
}
write(*socket_, asio::buffer(buf), asio::transfer_exactly(n));
}
}
};
Now, you can indeed run several of these operations in parallel (assuming several instances of X, so you have separate socket_ connections).
To do something at the end, just put code where I moved the comment:
// DO SOMETHING WHEN send_file_implementation ENDS
If you don't know what to do there and you wish to make a future ready at that point, you can:
std::future<void> send_file(std::string file_path) {
std::packaged_task<void()> task([=, this] {
send_file_implementation(file_path);
});
return post(pool_, std::move(task));
}
This overload of post magically¹ returns the future from the packaged task. That packaged task will set the internal promise with either the (void) return value or the exception thrown.
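Stripped of the file-sending context, the mechanism looks like this. A standalone sketch, assuming Boost 1.66 or later:
#include <boost/asio.hpp>
#include <future>
#include <iostream>
int main() {
    boost::asio::thread_pool pool(2);
    std::packaged_task<int()> task([] { return 6 * 7; });
    std::future<int> fut = post(pool, std::move(task)); // async_result turns the task into a future
    std::cout << fut.get() << "\n"; // blocks until the task has run on the pool; prints 42
    pool.join();
}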
See it in action: Live On Coliru
int main() {
// send two files simultaneously to different connections
X clientA(6868);
X clientB(6969);
std::future<void> futures[] = {
clientA.send_file("main.cpp"),
clientB.send_file("main.cpp"),
};
for (auto& fut : futures) try {
fut.get();
std::cout << "Everything completed without error\n";
} catch(std::exception const& e) {
std::cout << "Error occurred: " << e.what() << "\n";
};
pool_.join();
}
I tested this while running two netcats to listen on 6868/6969:
nc -l -p 6868 | head& nc -l -p 6969 | md5sum&
./a.out
wait
The client prints:
Everything completed without error
Everything completed without error
The netcats print their filtered output:
main.cpp
1907
#include <boost/array.hpp>
#include <boost/asio.hpp>
#include <fstream>
#include <iostream>
#include <future>
namespace asio = boost::asio;
using asio::ip::tcp;
7ecb71992bcbc22bda44d78ad3e2a5ef -
¹ not magic: see https://www.boost.org/doc/libs/1_66_0/doc/html/boost_asio/reference/async_result.html
Using the results of this question (simultaneous read and write to child's stdio using boost.process), I am trying to modify the code so that a file is read, piped through gzip, the output of gzip piped through bzip2, and finally the output of bzip2 written to a file.
My first attempt was
/*
* ProcessPipe.cpp
*
* Created on: Apr 17, 2018
* Author: dbetz
*/
//#define BOOST_ASIO_ENABLE_HANDLER_TRACKING 1
#include <boost/asio.hpp>
#include <boost/asio/high_resolution_timer.hpp>
#include <boost/process.hpp>
#include <boost/process/async.hpp>
#include <iostream>
#include <fstream>
#include <functional>
namespace bp = boost::process;
using boost::system::error_code;
using namespace std::chrono_literals;
using Loop = std::function<void()>;
using Buffer = std::array<char, 500>;
int main() {
boost::asio::io_service svc;
auto gzip_exit = [](int code, std::error_code ec) {
std::cout << "gzip exited " << code << " (" << ec.message() << ")\n";
};
auto bzip2_exit = [](int code, std::error_code ec) {
std::cout << "bzip2 exited " << code << " (" << ec.message() << ")\n";
};
bp::async_pipe file_to_gzip_pipe{svc}, gzip_to_bzip_pipe{svc}, bzip_to_file_pipe{svc};
bp::child process_gzip("/usr/bin/gzip", "-c", bp::std_in < file_to_gzip_pipe, bp::std_out > gzip_to_bzip_pipe, svc, bp::on_exit(gzip_exit));
bp::child process_bzip2("/usr/bin/bzip2", "-c", bp::std_in < gzip_to_bzip_pipe, bp::std_out > bzip_to_file_pipe, svc, bp::on_exit(bzip2_exit));
std::ifstream ifs("src/ProcessPipe2.cpp");
Buffer file_to_gzip_buffer;
Loop file_to_gzip_loop;
file_to_gzip_loop = [&] {
if (!ifs.good())
{
error_code ec;
file_to_gzip_pipe.close(ec);
std::cout << "Read file, write gzip: closed stdin (" << ec.message() << ")\n";
return;
}
ifs.read(file_to_gzip_buffer.data(), file_to_gzip_buffer.size());
boost::asio::async_write(file_to_gzip_pipe, boost::asio::buffer(file_to_gzip_buffer.data(), ifs.gcount()),
[&](error_code ec, size_t transferred) {
std::cout << "Read file, write gzip: " << ec.message() << " sent " << transferred << " bytes\n";
if (!ec) {
file_to_gzip_loop(); // continue writing
}
});
};
Buffer gzip_to_bzip_buffer;
Loop gzip_to_bzip_loop;
gzip_to_bzip_loop=[&] {
gzip_to_bzip_pipe.async_read_some(boost::asio::buffer(gzip_to_bzip_buffer),
[&](error_code ec, size_t transferred){
// duplicate buffer
std::cout << "Read gzip, write bzip: " << ec.message() << " got " << transferred << " bytes\n";
if (!ec)
gzip_to_bzip_loop();
else
gzip_to_bzip_pipe.close();
}
);
};
std::ofstream ofs("src/ProcessPipe2.gz");
Buffer bzip_to_file_buffer;
Loop bzip_to_file_loop;
bzip_to_file_loop = [&] {
bzip_to_file_pipe.async_read_some(boost::asio::buffer(bzip_to_file_buffer),
[&](error_code ec, size_t transferred) {
std::cout << "Read bzip, write file: " << ec.message() << " got " << transferred << " bytes\n";
ofs << std::string(bzip_to_file_buffer.data(),transferred);
if (!ec)
bzip_to_file_loop(); // continue reading
});
};
file_to_gzip_loop(); // async
gzip_to_bzip_loop();
bzip_to_file_loop(); // async
svc.run(); // Await all async operations
}
but this gives an error:
Read gzip, write bzip: Bad file descriptor got 0 bytes
The problem seems to be that gzip_to_bzip_pipe is opened for writing by gzip and for reading by bzip. Any ideas?
I'd write that code simply like:
#include <boost/process.hpp>
#include <iostream>
namespace bp = boost::process;
int main() {
bp::pipe intermediate;
bp::child process_gzip("/bin/gzip", "-c", bp::std_in < "src/ProcessPipe2.cpp", bp::std_out > intermediate);
bp::child process_bzip2("/bin/bzip2", "-c", bp::std_in < intermediate, bp::std_out > "src/ProcessPipe2.gz.bz2");
process_gzip.wait();
process_bzip2.wait();
}
BONUS
You can do without subprocesses entirely and just use boost::iostreams::copy:
#include <boost/iostreams/filtering_stream.hpp>
#include <boost/iostreams/filter/bzip2.hpp>
#include <boost/iostreams/filter/gzip.hpp>
#include <boost/iostreams/copy.hpp>
#include <iostream>
#include <fstream>
namespace io = boost::iostreams;
int main() {
std::ifstream ifs("src/ProcessPipe2.cpp");
io::filtering_stream<io::output> os;
os.push(io::gzip_compressor());
os.push(io::bzip2_compressor());
std::ofstream ofs("src/ProcessPipe2.gz.bz2");
os.push(ofs);
io::copy(ifs, os);
}
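Note that, unlike the subprocess version, this requires linking against Boost.Iostreams and its zlib/bzip2 backends (e.g. -lboost_iostreams -lz -lbz2, depending on how your Boost was built).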
I know that the thread which runs io_service.run() is responsible for executing the completion handlers of asynchronous operations, but I have trouble assigning a thread to an asynchronous operation that is started from the callback of a parent async operation.
For example, consider the program below:
#ifdef WIN32
#define _WIN32_WINNT 0x0501
#include <stdio.h>
#endif
#include <fstream> // for writing to a file
#include <iostream> // for writing to the console
#include <stdlib.h> // atoi (string to integer)
#include <chrono>
#include <boost/thread.hpp> // for multi threading
#include <boost/bind.hpp>
#include <boost/asio.hpp>
#include <boost/shared_ptr.hpp>
#include <boost/enable_shared_from_this.hpp>
#include <signal.h> // For Interrupt Handling (Signal Handling Event)
#include <vector>
#define max_length 46
#define server_ip1 "127.0.0.1"
//#define server_ip2 "127.0.0.1"
#define server_port 4000
#define MEM_FN(x) boost::bind(&self_type::x, shared_from_this())
#define MEM_FN1(x,y) boost::bind(&self_type::x, shared_from_this(),y)
#define MEM_FN2(x,y,z) boost::bind(&self_type::x, shared_from_this(),y,z)
void talk1();
using namespace boost::asio;
io_service service, service2;
std::chrono::time_point<std::chrono::high_resolution_clock> t_start;
ip::udp::socket sock1(service);
ip::udp::endpoint ep1( ip::address::from_string(server_ip1), 4000);
//ip::udp::socket sock2(service);
//ip::udp::endpoint ep2( ip::address::from_string(server_ip2), 4000);
std::chrono::time_point<std::chrono::high_resolution_clock> tc;
int OnCon[2];
class talk_to_svr1 : public boost::enable_shared_from_this<talk_to_svr1>, boost::noncopyable {
typedef talk_to_svr1 self_type;
talk_to_svr1(const std::string & message, ip::udp::endpoint ep) : started_(true), message_(message) {}
void start(ip::udp::endpoint ep) {
do_write(message_);
}
public:
typedef boost::system::error_code error_code;
typedef boost::shared_ptr<talk_to_svr1> ptr;
static ptr start(ip::udp::endpoint ep, const std::string & message) {
ptr new_(new talk_to_svr1(message, ep));
new_->start(ep);
return new_;
}
bool started() { return started_; }
private:
void on_read(const error_code & err, size_t bytes) {
this->t2 = std::chrono::high_resolution_clock::now(); // Time of finished reading
if ( !err) {
auto t0_rel = 1.e-9*std::chrono::duration_cast<std::chrono::nanoseconds>(t0-t_start).count();
auto t1_rel = 1.e-9*std::chrono::duration_cast<std::chrono::nanoseconds>(t1-t_start).count();
auto t2_rel = 1.e-9*std::chrono::duration_cast<std::chrono::nanoseconds>(t2-t_start).count();
std::cout << "Sock1: " << t0_rel << ", " << t1_rel << ", " << t2_rel << std::endl;
std::string msg(read_buffer_, bytes);
std::cout << msg << std::endl;
}
else {
std::cout << "Error occured in reading data from server (Sock1)" << std::endl;
}
}
void on_write(const error_code & err, size_t bytes) {
this->t1 = std::chrono::high_resolution_clock::now(); // Time of finished writing
std::cout << "Sock1 successfully sent " << bytes << " bytes of data" << std::endl;
do_read();
}
void do_read() {
sock1.async_receive_from(buffer(read_buffer_),ep1 ,MEM_FN2(on_read,_1,_2));
}
void do_write(const std::string & msg) {
if ( !started() ) return;
std::copy(msg.begin(), msg.end(), write_buffer_);
this->t0 = std::chrono::high_resolution_clock::now(); // Time of starting to write
sock1.async_send_to( buffer(write_buffer_, msg.size()), ep1, MEM_FN2(on_write,_1,_2) );
}
public:
std::chrono::time_point<std::chrono::high_resolution_clock> t0; // Time of starting to write
std::chrono::time_point<std::chrono::high_resolution_clock> t1; // Time of finished writing
std::chrono::time_point<std::chrono::high_resolution_clock> t2; // Time of finished reading
private:
int indx;
char read_buffer_[max_length];
char write_buffer_[max_length];
bool started_;
std::string message_;
};
void wait_s(int seconds)
{
boost::this_thread::sleep_for(boost::chrono::seconds{seconds});
}
void wait_ms(int msecs) {
boost::this_thread::sleep( boost::posix_time::millisec(msecs));
}
void async_thread() {
service.run();
}
void async_thread2() {
service2.run();
}
void GoOperational(int indx) {
if (indx == 0) {
talk_to_svr1::start(ep1, "Message01");
wait_s(1);
talk_to_svr1::start(ep1, "Message02");
wait_s(2);
}
else if (indx == 1) {
//talk_to_svr2::start(ep2, "Masoud");
wait_s(1);
//talk_to_svr2::start(ep2, "Ahmad");
wait_s(2);
}
else {
std::cout << "Wrong index!." << std::endl;
}
}
void on_connect(const boost::system::error_code & err, int ii) {
std::cout << "Socket "<< ii << " is connected."<< std::endl;
OnCon[ii] = 1;
if ( !err) {
tc = std::chrono::high_resolution_clock::now();
auto ty = 1.e-9*std::chrono::duration_cast<std::chrono::nanoseconds>(tc-t_start).count();
std::cout << "Sock " << ii << " connected at time: " << ty << " seconds" << std::endl;
if ( (OnCon[0] /*+ OnCon[1]*/ ) == 1) {
GoOperational(0);
//GoOperational(1);
}
}
else {
std::cout << "Socket " << ii << "had a problem for connecting to server.";
}
}
int main(int argc, char* argv[]) {
OnCon[0] = 0;
OnCon[1] = 0;
ep1 = ep1;
//ep2 = ep2;
std::cout.precision(9);
std::cout << "///////////////////////" << std::endl;
std::cout << "Socket Number, Time of starting to write, Time of finished writting, time of finished reading" << std::endl;
t_start = std::chrono::high_resolution_clock::now();
sock1.async_connect(ep1, boost::bind(on_connect, boost::asio::placeholders::error, 0));
//sock2.async_connect(ep2, boost::bind(on_connect, boost::asio::placeholders::error, 1));
boost::thread b{boost::bind(async_thread)};
b.join();
}
In this program I have a global UDP socket named sock1 which is connected by calling sock1.async_connect() at line #9 of the main function. In the callback of this asynchronous operation, I create two instances of the talk_to_svr1 class, each of which is responsible for sending a message to the server and then receiving the response asynchronously.
I need to wait 3 seconds before sending the second message, and that is why I call wait_s(1) before creating the second instance of talk_to_svr1. The problem is that wait_s(1), in addition to pausing the main thread, also pauses the asynchronous sending operation, which is not desired.
I would be grateful if anybody could change the above code so that another thread becomes responsible for asynchronously sending the message to the server, so that calling wait_s(1) does not pause the sending operation.
Note: I posted an alternative using coroutines as well (see below).
Asynchronous coding by definition doesn't require you to "control" threads. In fact, you shouldn't need threads. Of course, you can't block inside completion handlers because that will hinder progress.
You can simply use a timer, expiring in 3s, async_wait for it and in its completion handler send the second request.
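In sketch form, under the same io_service as your code (send_second_request is a hypothetical stand-in for whatever comes next; high_resolution_timer lives in <boost/asio/high_resolution_timer.hpp>):
boost::asio::high_resolution_timer timer(service, std::chrono::seconds(3));
timer.async_wait([&](boost::system::error_code ec) {
    if (!ec)
        send_second_request(); // hypothetical: kick off message 2 here instead of sleeping
});
// note: the timer object must stay alive until its handler runs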
Here's a big cleanup of your code. Note that I removed all use of global variables. They were making things very error-prone and leading to a lot of duplication (in fact, talk_to_svr1 hardcoded ep1 and sock1, so it was useless for your second channel, which was largely commented out).
The crux of the change is to have message_operation take a continuation:
template <typename F_>
void async_message(udp::socket& s, std::string const& message, F_&& handler) {
using Op = message_operation<F_>;
boost::shared_ptr<Op> new_(new Op(s, message, std::forward<F_>(handler)));
new_->do_write();
}
When the message/response is completed, handler is called. Now, we can implement the application protocol (basically what you tried to capture in on_connect/GoOperational):
////////////////////////////////////////////////////
// basic protocol (2 messages, 1 delay)
struct ApplicationProtocol {
ApplicationProtocol(ba::io_service& service, udp::endpoint ep, std::string m1, std::string m2, std::chrono::seconds delay = 3s)
: _service(service),
_endpoint(ep),
message1(std::move(m1)), message2(std::move(m2)),
delay(delay), timer(service)
{ }
void go() {
_socket.async_connect(_endpoint, boost::bind(&ApplicationProtocol::on_connect, this, _1));
}
private:
ba::io_service& _service;
udp::socket _socket{_service};
udp::endpoint _endpoint;
std::string message1, message2;
std::chrono::seconds delay;
ba::high_resolution_timer timer;
void on_connect(error_code ec) {
std::cout << _endpoint << " connected at " << relatime() << " ms\n";
if (!ec) {
async_message(_socket, message1, boost::bind(&ApplicationProtocol::on_message1_sent, this, _1, _2));
} else {
std::cout << "Socket had a problem for connecting to server.";
}
}
void on_message1_sent(error_code ec, std::string response) {
if (ec)
std::cout << "Message 1 failed: " << ec.message() << "\n";
else {
std::cout << "Message 1 returned: '" << response << "'\n";
timer.expires_from_now(delay);
timer.async_wait(boost::bind(&ApplicationProtocol::on_delay_complete, this, _1));
}
}
void on_delay_complete(error_code ec) {
if (ec)
std::cout << "Delay faile: " << ec.message() << "\n";
else {
std::cout << "Delay completed\n";
async_message(_socket, message2, boost::bind(&ApplicationProtocol::on_message2_sent, this, _1, _2));
}
}
void on_message2_sent(error_code ec, std::string response) {
if (ec)
std::cout << "Message 2 failed: " << ec.message() << "\n";
else {
std::cout << "Message 2 returned: '" << response << "'\n";
}
}
};
Note how much simpler it becomes to use it:
int main() {
ba::io_service service;
std::cout.precision(2);
std::cout << std::fixed;
ApplicationProtocol
channel1(service, {{}, 4000}, "Message01\n", "Message02\n", 3s),
channel2(service, {{}, 4001}, "Masoud\n", "Ahmad\n", 2s);
channel1.go();
channel2.go();
service.run();
}
When running two udp services like so:
yes first|nl|netcat -ulp 4000& yes second|nl|netcat -ulp 4001& time wait
We get the following output: Live On Coliru
0.0.0.0:4000 connected at 1.87 ms
0.0.0.0:4001 connected at 1.99 ms
127.0.0.1:4000 successfully sent 10 bytes of data
127.0.0.1:4001 successfully sent 7 bytes of data
127.0.0.1:4000: start 1.91, written 2.03, finished 2.25 ms
Message 1 returned: ' 1 first
2 first
3 first
4 '
127.0.0.1:4001: start 2.00, written 2.06, finished 2.34 ms
Message 1 returned: ' 1 second
2 second
3 second
'
Delay completed
127.0.0.1:4001 successfully sent 6 bytes of data
127.0.0.1:4001: start 2002.46, written 2002.49, finished 2002.53 ms
Message 2 returned: '47 second
148 second
149 second
150 s'
Delay completed
127.0.0.1:4000 successfully sent 10 bytes of data
127.0.0.1:4000: start 3002.36, written 3002.39, finished 3002.41 ms
Message 2 returned: 'first
159 first
160 first
161 first
'
And the server side receives the messages in sequence.
Full Code
Live On Coliru
#include <boost/asio.hpp>
#include <boost/asio/high_resolution_timer.hpp>
#include <boost/bind.hpp>
#include <boost/enable_shared_from_this.hpp>
#include <boost/shared_ptr.hpp>
#include <chrono>
#include <iostream>
#define MEM_FN2(x, y, z) boost::bind(&self_type::x, shared_from_this(), y, z)
namespace ba = boost::asio;
using ba::ip::udp;
using boost::system::error_code;
using ba::asio_handler_invoke;
////////////////////////////////////////////////////
// timing stuff
using namespace std::chrono_literals;
using hrclock = std::chrono::high_resolution_clock;
using time_point = hrclock::time_point;
static double relatime(time_point tp = hrclock::now()) {
static const time_point t_start = hrclock::now();
return (tp - t_start)/1.0ms;
}
////////////////////////////////////////////////////
// message operation - with F continuation
template <typename F>
class message_operation : public boost::enable_shared_from_this<message_operation<F> >, boost::noncopyable {
typedef message_operation self_type;
template <typename F_>
friend void async_message(udp::socket&, std::string const&, F_&&);
private:
template <typename F_>
message_operation(udp::socket& s, std::string message, F_&& handler)
: _socket(s), _endpoint(s.remote_endpoint()), handler_(std::forward<F_>(handler)), message_(std::move(message)) {}
using boost::enable_shared_from_this<message_operation>::shared_from_this;
void do_write() {
t0 = hrclock::now(); // Time of starting to write
_socket.async_send_to(ba::buffer(message_), _endpoint, MEM_FN2(on_write, _1, _2));
}
void on_write(const error_code & err, size_t bytes) {
t1 = hrclock::now(); // Time of finished writing
if (err)
handler_(err, "");
else
{
std::cout << _endpoint << " successfully sent " << bytes << " bytes of data\n";
do_read();
}
}
void do_read() {
_socket.async_receive_from(ba::buffer(read_buffer_), _sender, MEM_FN2(on_read, _1, _2));
}
void on_read(const error_code &err, size_t bytes) {
t2 = hrclock::now(); // Time of finished reading
if (!err) {
std::cout << _endpoint
<< ": start " << relatime(t0)
<< ", written " << relatime(t1)
<< ", finished " << relatime(t2)
<< " ms\n";
handler_(err, std::string(read_buffer_, bytes));
} else {
std::cout << "Error occured in reading data from server\n";
}
}
time_point t0, t1, t2; // Time of starting to write, finished writing, finished reading
// params
udp::socket& _socket;
udp::endpoint _endpoint;
F handler_;
// sending
std::string message_;
// receiving
udp::endpoint _sender;
char read_buffer_[46];
};
template <typename F_>
void async_message(udp::socket& s, std::string const& message, F_&& handler) {
using Op = message_operation<F_>;
boost::shared_ptr<Op> new_(new Op(s, message, std::forward<F_>(handler)));
new_->do_write();
}
////////////////////////////////////////////////////
// basic protocol (2 messages, 1 delay)
struct ApplicationProtocol {
ApplicationProtocol(ba::io_service& service, udp::endpoint ep, std::string m1, std::string m2, std::chrono::seconds delay = 3s)
: _service(service),
_endpoint(ep),
message1(std::move(m1)), message2(std::move(m2)),
delay(delay), timer(service)
{ }
void go() {
_socket.async_connect(_endpoint, boost::bind(&ApplicationProtocol::on_connect, this, _1));
}
private:
ba::io_service& _service;
udp::socket _socket{_service};
udp::endpoint _endpoint;
std::string message1, message2;
std::chrono::seconds delay;
ba::high_resolution_timer timer;
void on_connect(error_code ec) {
std::cout << _endpoint << " connected at " << relatime() << " ms\n";
if (!ec) {
async_message(_socket, message1, boost::bind(&ApplicationProtocol::on_message1_sent, this, _1, _2));
} else {
std::cout << "Socket had a problem for connecting to server.";
}
}
void on_message1_sent(error_code ec, std::string response) {
if (ec)
std::cout << "Message 1 failed: " << ec.message() << "\n";
else {
std::cout << "Message 1 returned: '" << response << "'\n";
timer.expires_from_now(delay);
timer.async_wait(boost::bind(&ApplicationProtocol::on_delay_complete, this, _1));
}
}
void on_delay_complete(error_code ec) {
if (ec)
std::cout << "Delay faile: " << ec.message() << "\n";
else {
std::cout << "Delay completed\n";
async_message(_socket, message2, boost::bind(&ApplicationProtocol::on_message2_sent, this, _1, _2));
}
}
void on_message2_sent(error_code ec, std::string response) {
if (ec)
std::cout << "Message 2 failed: " << ec.message() << "\n";
else {
std::cout << "Message 2 returned: '" << response << "'\n";
}
}
};
int main() {
ba::io_service service;
relatime(); // start the clock
std::cout.precision(2);
std::cout << std::fixed;
ApplicationProtocol
channel1(service, {{}, 4000}, "Message01\n", "Message02\n", 3s),
channel2(service, {{}, 4001}, "Masoud\n", "Ahmad\n", 2s);
channel1.go();
channel2.go();
service.run();
}
In addition to the "normal" answer posted before, here's one that does exactly the same but using coroutines:
Live On Coliru
#include <boost/asio.hpp>
#include <boost/asio/spawn.hpp>
#include <boost/asio/high_resolution_timer.hpp>
#include <iostream>
namespace ba = boost::asio;
using ba::ip::udp;
using boost::system::error_code;
////////////////////////////////////////////////////
// timing stuff
using namespace std::chrono_literals;
using hrclock = std::chrono::high_resolution_clock;
using time_point = hrclock::time_point;
static double relatime(time_point tp = hrclock::now()) {
static const time_point t_start = hrclock::now();
return (tp - t_start)/1.0ms;
}
int main() {
ba::io_service service;
relatime(); // start the clock
std::cout.precision(2);
std::cout << std::fixed;
auto go = [&](udp::endpoint ep, std::string const& m1, std::string const& m2, hrclock::duration delay) {
ba::spawn(service, [=,&service](ba::yield_context yield) {
udp::socket sock(service);
time_point t0, t1, t2;
auto async_message = [&](std::string const& message) {
t0 = hrclock::now();
auto bytes = sock.async_send_to(ba::buffer(message), ep, yield);
t1 = hrclock::now();
char read_buffer_[46];
udp::endpoint _sender;
bytes = sock.async_receive_from(ba::buffer(read_buffer_), _sender, yield);
t2 = hrclock::now();
return std::string {read_buffer_, bytes};
};
try {
sock.async_connect(ep, yield);
std::cout << ep << " connected at " << relatime() << " ms\n";
std::cout << "Message 1 returned: '" << async_message(m1) << "'\n";
std::cout << ep << ": start " << relatime(t0) << ", written " << relatime(t1) << ", finished " << relatime(t2) << " ms\n";
ba::high_resolution_timer timer(service, delay);
timer.async_wait(yield);
std::cout << "Message 2 returned: '" << async_message(m2) << "'\n";
std::cout << ep << ": start " << relatime(t0) << ", written " << relatime(t1) << ", finished " << relatime(t2) << " ms\n";
} catch(std::exception const& e) {
std::cout << ep << " error: " << e.what() << "\n";
}
});
};
go({{}, 4000}, "Message01\n", "Message02\n", 3s),
go({{}, 4001}, "Masoud\n", "Ahmad\n", 2s);
service.run();
}
As you can see, coroutines afford the luxury of keeping all coro state "implicitly" on the coro stack. This means: no more ad-hoc classes for async operations with state, and vastly reduced lifetime issues.
Output
0.0.0.0:4000 connected at 0.52 ms
Message 1 returned: '0.0.0.0:4001 connected at 0.64 ms
Message 1 returned: ' 1 first
2 first
3 first
4 '
0.0.0.0:4000: start 0.55, written 0.68, finished 0.86 ms
1 second
2 second
3 second
'
0.0.0.0:4001: start 0.65, written 0.70, finished 0.91 ms
Message 2 returned: '47 second
148 second
149 second
150 s'
0.0.0.0:4001: start 2001.03, written 2001.06, finished 2001.07 ms
Message 2 returned: 'first
159 first
160 first
161 first
'
0.0.0.0:4000: start 3001.10, written 3001.15, finished 3001.16 ms