I have run into a dilemma whilst using boost::asio and boost::asio::io_service.
My classes wrap around the async client example provided by boost for socket connections.
I use another class which encapsulates:
class service_controller
{
...
/// IO service
boost::asio::io_service __io_service;
/// Endpoint Resolver
boost::asio::ip::tcp::resolver::query __query;
/// Resolution for TCP
boost::asio::ip::tcp::resolver __resolver;
};
So, when I construct my clients, the constructor takes references:
asio_service_client (
boost::asio::ip::tcp::resolver::query & query,
boost::asio::ip::tcp::resolver & resolver,
boost::asio::io_service & io_service
);
Everything works fine, but I have to call
io_service.run()
at the end, after creating all my clients.
If I encapsulate separate io_service objects for each client, I essentially lose the async I/O nature, as each one will block until it's finished.
Therefore, I decided to form a type of group, by making all client objects use the same io_service.
io_service::poll() does not appear to work at all (nothing happens), and neither does io_service::run_one().
In fact, the only thing that appears to work is:
// with a callback => the callback will run once finished
rapp::services::asio_service_client c1( ctrl.Query(), ctrl.Resolver(), ctrl.Services() );
// without a callback => asio_service_client::handle_reply will run once finished
rapp::services::asio_service_client c2 ( ctrl.Query(), ctrl.Resolver(), ctrl.Services() );
rapp::services::asio_service_client c3 ( ctrl.Query(), ctrl.Resolver(), ctrl.Services() );
rapp::services::asio_service_client c4 ( ctrl.Query(), ctrl.Resolver(), ctrl.Services() );
// Run services c1, c2
c1.Run( header, post,
[&]( boost::asio::streambuf & buffer )
{
std::string raw ( ( std::istreambuf_iterator<char>( &buffer ) ), std::istreambuf_iterator<char>() );
std::cout << raw << std::endl;
});
c2.Run( header, post );
ctrl.Services().run();
/// Run remaining services ( c3, c4 )
c3.Run( header, post );
c4.Run( header, post );
ctrl.Services().reset();
ctrl.Services().run();
Unless, of course, I request a group to be run all together (e.g., ask for c1, c2, c3 and c4 to Run).
Is there some way, or some class pattern, by which I could automate a queue: I create objects, add them, and they are run asynchronously? Ideally with threads, but it will also work without.
Some kind of stack where, whilst I add objects, they are asynchronously executed as they are added.
If I try something like:
void Scheduler::Execute ( asio_service_client & client )
{
client.Run( ... )
io_service.reset();
io_service.run();
}
I will reset previous running services, and start all over, which is not what I want.
My only obvious option is either to accept and assign a separate io_service to each added asio_service_client, or to force them to be added all together in a job group, which is then executed.
The other solution I can think of is using threads: each asio_service_client would run in its own thread, and thus would not block the other asio_service_clients, executing in parallel.
You probably want to share a single io_service instance and create an io_service::work object on it, so that it stays active even if no client currently has any pending async operations:
boost::asio::io_service io_service;
auto work = boost::make_shared<boost::asio::io_service::work>(io_service);
// any client can post its asynchronous operations on this service object, from any thread
// completion handlers will be invoked on any thread that runs `io_service.run()`
// once you want the `io_service` to empty the queue and return:
work.reset();
// now `run()` will return when it runs out of queued tasks
An example websocket server:
std::thread{ wsserver }.detach();
void wsserver()
{
    boost::asio::ip::tcp::socket socket{ ioc };
    acceptor.accept(socket);
}
The thread runs separately from the rest of the server. I think this may lead to erroneous results in some interlinked operations, so I want to put this process into an infinite loop instead. In short, I should not use multiple threads. For example:
while(loop());
int loop()
{
boost::asio::ip::tcp::socket socket{ ioc };
acceptor.accept(socket);
return 1;
}
But the loop does not work as an idle loop, because acceptor.accept() spends all its time waiting for a connection.
How can I use a different call instead of accept()? And how can I get rid of the thread? Is there a smarter way to do this?
I hope I have explained myself.
I am writing an app using boost::asio.
I have a single io_service::run() thread, and many worker threads. Any of the worker threads may send messages at any time.
Here is how I implement the send_msg().
// Note: send_msg() could be called from any thread.
// 'msg' must be 'malloc'ed; its ownership will be transferred to '_send_q'
//
// NetLibConnection has base classes of tcp::socket and boost::enable_shared_from_this
void NetLibConnection::send_msg(PlainNetLibMsg* msg)
{
AutoLocker __dummy(this->_lock_4_send_q); // _lock_4_send_q is a 'mutex'
bool write_in_progress = ! this->_send_q.empty(); // _send_q is std::deque<PlainNetLibMsg* >,
// the 'send_q' mechanism is learned from boost_asio_example/cpp03/chat
this->_send_q.push_back(msg);
if (write_in_progress)
{
return;
}
this->get_io_service().post( // queue the 'send operation' to the singleton io_service::run() thread
boost::bind(&NetLibConnection::async_send_front_of_q
, boost::dynamic_pointer_cast<NetLibConnection>(shared_from_this())
)
);
}
void NetLibConnection::async_send_front_of_q()
{
boost::asio::async_write(*this
, boost::asio::buffer( this->_send_q.front() , _send_q.front()->header.DataSize + sizeof(NetLibChunkHeader) )
, this->_strand.wrap( // this great post https://stackoverflow.com/questions/12794107/why-do-i-need-strand-per-connection-when-using-boostasio/
// convinced me that I should use strand along with Connection
boost::bind( &NetLibConnection::handle_send
, boost::dynamic_pointer_cast<NetLibConnection>(shared_from_this())
, boost::asio::placeholders::error
)
)
);
}
The code works fine, but I am not satisfied with its verbosity. I feel that the send_q plays the same role as a strand.
Since
all real async_write call happen in a single io_service::run() thread
all real async_write are queued one-by-one via the send_q
Do I still need the strand?
Yes, indeed. The documentation details this here:
Threads And Boost Asio
By only calling io_service::run() from a single thread, the user's code can avoid the development complexity associated with synchronisation. For example, a library user can implement scalable servers that are single-threaded (from the user's point of view).
Thinking a bit more broadly, your scenario is the simplest form of having a single logical strand. There are other ways in which you can maintain logical strands (by chaining handlers), see this most excellent answer on the subject: Why do I need strand per connection when using boost::asio?
Referring to HTTP Server - Single-threaded Implementation:
I am trying to explicitly control the lifetime of a server instance.
My Requirements are:
1) I should be able to explicitly destroy the server
2) I need to keep multiple Server Instances alive which should listen to different ports
3) A Manager class maintains a list of all active server instances; it should be able to create and destroy server instances via create and drop methods
I am trying to implement requirement 1, and I have come up with this code:
void server::stop()
{
DEBUG_MSG("Stopped");
io_service_.post(boost::bind(&server::handle_stop, this));
}
where handle_stop() is
void server::handle_stop()
{
// The server is stopped by cancelling all outstanding asynchronous
// operations. Once all operations have finished the io_service::run() call
// will exit.
acceptor_.close();
connection_manager_.stop_all();
}
I try to call it from main() as:
try
{
http::server::server s("127.0.0.1","8973");
// Run the server until stopped.
s.run();
boost::this_thread::sleep_for(boost::chrono::seconds(3));
s.stop();
}
catch (std::exception& e)
{
std::cerr << "exception: " << e.what() << "\n";
}
Question 1)
I am not able to get server::handle_stop() called.
I suppose io_service_.run() is blocking my s.stop() call.
void server::run()
{
// The io_service::run() call will block until all asynchronous operations
// have finished. While the server is running, there is always at least one
// asynchronous operation outstanding: the asynchronous accept call waiting
// for new incoming connections.
io_service_.run();
}
How do I proceed?
Question 2:
For requirement 2), where I need to have multiple server instances, I think I will need to create an io_service instance in main and pass that same instance to all server instances. Am I right?
Is it mandatory to have only one io_service instance per process or can I have more than one ?
EDIT
My aim is to implement a class which can control multi server instances:
How do I design this? I am confused about the io_service, and about how to cleanly call mng.create() and mng.drop().
Something along the lines below is what I am trying to implement (incorrect code, just giving a view):
class Manager {
public:
    void createServer(std::string ip, int port)
    {
        list_.insert(make_shared<Server>(ip, port));
    }
    void drop(ServerPtr server)
    {
        list_.erase(server);
    }
private:
    io_service io_;
    set<ServerPtr> list_;
};
int main()
{
    io_service io;
    Manager mng(io);
    mng.createServer(ip1, port1);
    mng.createServer(ip2, port2);
    io.run();
    mng.drop(ip1, port1);
}
I am not able to call server::handle_stop().
As you say, run() won't return until the service is stopped or runs out of work. There's no point calling stop() after that.
In a single-threaded program, you can call stop() from an I/O handler - for your example, you could use a deadline_timer to call it after three seconds. Or you could do something complicated with poll() rather than run(), but I wouldn't recommend that.
In a multi-threaded program, you could call it from another thread than the one calling run(), as long as you make sure it's thread-safe.
For [multiple servers] I think I will need to create an io_service instance in main
Yes, that's probably the best thing to do.
Is it mandatory to have only one io_service instance per process or can I have more than one?
You can have as many as you like. But I think you can only run one at a time on a single thread, so it would be tricky to have more than one in a single-threaded program. I'd have a single instance that all the servers can use.
You are right: it is not working because you call stop() after the blocking run(), and run() blocks until there are no unhandled callbacks left. There are multiple ways to solve this, and it depends on which part of the program stop() will be called from:
If you can call it from another thread, then run each instance of server in separate thread.
If you need to stop the server after some I/O operation, you can simply post it as you have tried (io_service_.post(boost::bind(&server::handle_stop, this));), but it has to be registered from another thread or from another callback in the current thread.
You can use io_service::poll(). It is the non-blocking version of run(), so you create a loop in which you call poll() until you need to stop the server.
You can do it both ways. Even with the link you provided you can take a look at:
HTTP Server 3 - An HTTP server using a single io_service and a thread pool
and HTTP Server 2 - An HTTP server using an io_service-per-CPU design
I am using boost:asio with multiple io_services to keep different forms of blocking I/O separate. E.g. I have one io_service for blocking file I/O, and another for long-running CPU-bound tasks (and this could be extended to a third for blocking network I/O, etc.) Generally speaking I want to ensure that one form of blocking I/O cannot starve the others.
The problem I am having is that since tasks running in one io_service can post events to other io_service (e.g. a CPU-bound task may need to start a file I/O operation, or a completed file I/O operation may invoke a CPU-bound callback), I don't know how to keep both io_services running until they are both out of events.
Normally with a single I/O service, you do something like:
shared_ptr<asio::io_service> io_service (new asio::io_service);
shared_ptr<asio::io_service::work> work (
new asio::io_service::work(*io_service));
// Create worker thread(s) that call io_service->run()
io_service->post(/* some event */);
work.reset();
// Join worker thread(s)
However if I simply do this for both io_services, the one into which I did not post an initial event finishes immediately. And even if I post initial events to both, if the initial event on io_service B finishes before the task on io_service A posts a new event to B, io_service B will finish prematurely.
How can I keep io_service B running while io_service A is still processing events (because one of the queued events in service A might post a new event to B), and vice-versa, while still ensuring that both io_services exit their run() methods if they are ever both out of events at the same time?
Figured out a way to do this, so documenting it for the record in case anyone else finds this question in a search:
Create each N cross-communicating io_services, create a work object for each of them, and then start their worker threads.
Create a "master" io_service object which will not run any worker threads.
Do not allow posting events directly to the services. Instead, create accessor functions to the io_services which will:
Create a work object on the master thread.
Wrap the callback in a function that runs the real callback, then deletes the work.
Post this wrapped callback instead.
In the main flow of execution, once all of the N io_services have started and you have posted work to at least one of them, call run() on the master io_service.
When the master io_service's run() method returns, delete all of the initial work on the N cross-communicating io_services, and join all worker threads.
Having the master io_service's thread own work on each of the other io_services ensures that they will not terminate until the master io_service runs out of work. Having each of the other io_services own work on the master io_service for every posted callback ensure that the master io_service will not run out of work until every one of the other io_services no longer has any posted callbacks left to process.
An example (this could be encapsulated in a class):
boost::shared_ptr<boost::asio::io_service> master_io_service;
void RunWorker(boost::shared_ptr<boost::asio::io_service> io_service) {
io_service->run();
}
void RunCallbackAndDeleteWork(boost::function<void()> callback,
boost::asio::io_service::work* work) {
callback();
delete work;
}
// All new posted callbacks must come through here, rather than being posted
// directly to the io_service object.
void PostToService(boost::shared_ptr<boost::asio::io_service> io_service,
boost::function<void()> callback) {
io_service->post(boost::bind(
&RunCallbackAndDeleteWork, callback,
new boost::asio::io_service::work(*master_io_service)));
}
int main() {
vector<boost::shared_ptr<boost::asio::io_service> > io_services;
vector<boost::shared_ptr<boost::asio::io_service::work> > initial_work;
boost::thread_group worker_threads;
master_io_service.reset(new boost::asio::io_service);
const int kNumServices = X;
const int kNumWorkersPerService = Y;
for (int i = 0; i < kNumServices; ++i) {
boost::shared_ptr<boost::asio::io_service> io_service(new boost::asio::io_service);
io_services.push_back(io_service);
initial_work.push_back(boost::shared_ptr<boost::asio::io_service::work>(
    new boost::asio::io_service::work(*io_service)));
for (int j = 0; j < kNumWorkersPerService; ++j) {
worker_threads.create_thread(boost::bind(&RunWorker, io_service));
}
}
// Use PostToService to start initial task(s) on at least one of the services
master_io_service->run();
// At this point, there is no real work left in the services, only the work
// objects in the initial_work vector.
initial_work.clear();
worker_threads.join_all();
return 0;
}
The HTTP server example 2 does something similar that you may find useful. It uses the concept of an io_service pool that retains a vector of shared_ptr<boost::asio::io_service> and a shared_ptr<boost::asio::io_service::work> for each io_service, and it uses a thread pool to run each service.
That example uses round-robin scheduling for doling out work to the I/O services. I don't think that will apply in your case, since you have specific tasks for io_service A and io_service B.
I am building an HTTP client based on the HTTP server example given on the Boost website. Now, the difference between that code and mine is that the example uses the server constructor to start the asynchronous operations. This makes sense, since a server is supposed to listen all the time. In my client, on the other hand, I want to first construct the object and then have a send() function that starts off by connecting to the endpoint, later sends a request, and finally listens for the reply. This makes sense too, doesn't it?
When I create my object (client) I do it in the same manner as in the server example (winmain.cpp). It looks like this:
client c("www.boost.org");
c.start(); // starts the io_service in a thread
c.send(msg_);
The relevant parts of the code are these:
void enabler::send(common::geomessage& msg_)
{
new_connection_.reset(new connection(io_service_,
connection_manager_,
message_manager_, msg_
));
boost::asio::ip::tcp::resolver resolver(io_service_);
boost::asio::ip::tcp::resolver::query query(host_address, "http");
resolver.async_resolve(query, boost::bind(
&enabler::handle_resolve,
boost::ref(*this),
boost::asio::placeholders::error,
boost::asio::placeholders::iterator
));
}
void enabler::run()
{
io_service_.run();
}
The problem with this is that the program gets stuck somewhere here. The last thing that prints is "Resolving host"; after that the program ends. I don't know why, because the io_service should block until all async operations have returned to their callbacks. If, however, I change the order in which I call the functions, it works: if I call run() just after the call to async_resolve(), and also omit calling start() in my main program, it works!
In this scenario, io_service blocks as it should and I can see that I get a response from the server.
It has something to do with the fact that I call run() from inside the same class as where I call async_resolve(). Could this be true? Then I suppose I need to give a reference from the main program when I call run(), is that it?
I have struggled with getting io_service::work to work, but the program just gets stuck, with similar problems to the ones above. So it does not really help.
So, what can I do to get this right? As I said earlier, what I want is to be able to create the client object and have the io_service running all the time in a separate thread inside the client class. Secondly to have a function, send(), that sends requests to the server.
You need to start at least some work before calling run(), as it returns when there is no more work to do.
If you call it before you start the async resolve, it won't have any work so it returns.
If you don't expect to have some work at all times, to keep the io_service busy, you should construct an io_service::work object in some scope which can be exited without io_service::run() having to return first. If you're running the io_service in a separate thread, I would imagine you wouldn't have a problem with that.
It's sort of hard to know what you're trying to do with those snippets of code. I imagine that you'd want to do something along these lines:
struct client
{
io_service io_service_;
io_service::work* w_;
pthread_t main_thread_;
client(): w_(new io_service::work(io_service_)) { ... }
void start() { pthread_create(&main_thread_, 0, main_thread, this); }
static void* main_thread(void* arg) { ((client*)arg)->io_service_.run(); return 0; }
// release the io_service and allow run() to return
void stop() { delete w_; w_ = 0; pthread_join(main_thread_, 0); }
};