I have followed the documentation and examples provided by the Boost.Asio implementation, but I am not having any luck after connecting my client to the server. Regardless of success or failure, the handler is never called. I have verified that the server is receiving and accepting the connection from the client, but nothing happens on the client's end to indicate success.
void ssl_writer::main_thread() {
    using namespace std::placeholders;
    using namespace asio::ip;

    tcp::resolver resolver(io_context);
    tcp::resolver::query query("192.168.170.115", "8591");
    tcp::resolver::iterator endpointer_iterator = resolver.resolve(query);

    io_context.run();

    std::cout << "connecting...";
    asio::async_connect(socket.lowest_layer(), endpointer_iterator,
                        std::bind(&ssl_writer::handle_connect, this, _1));
}
//...
void ssl_writer::handle_connect(const std::error_code& error) {
    if (!error) {
        std::cout << "connected!";
    }
    else {
        std::cout << "failed!";
    }
}
io_context::run() processes handlers until there is no more work to do. Because you call run() before starting any asynchronous operation, there is no outstanding work and run() returns immediately, so the async_connect you start afterwards is never processed.
In this simple example you need to call io_context::run() after async_connect. In more complex programs you would normally create a worker thread to call io_context::run() and create an instance of boost::asio::executor_work_guard to prevent the io_context from running out of work.
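For reference, here is a minimal sketch of the corrected ordering, using a plain TCP socket instead of the SSL stream and the host/port from the question; the only structural change is that run() is called after the asynchronous operation has been started:

#include <asio.hpp>
#include <iostream>

int main()
{
    asio::io_context io_context;
    asio::ip::tcp::resolver resolver(io_context);
    asio::ip::tcp::socket socket(io_context);

    // Resolve and start the asynchronous connect first...
    auto endpoints = resolver.resolve("192.168.170.115", "8591");
    std::cout << "connecting...";
    asio::async_connect(socket, endpoints,
        [](const std::error_code& error, const asio::ip::tcp::endpoint&)
        {
            std::cout << (error ? "failed!" : "connected!") << std::endl;
        });

    // ...then run the context; it now has queued work, so the handler will
    // be invoked when the connect attempt completes or fails.
    io_context.run();
}

In a longer-lived program you would instead keep a worker thread calling io_context.run() and hold a work guard (asio::make_work_guard(io_context)) so the context does not run out of work between operations.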
Related
I am unable to get this example working
http://www.boost.org/doc/libs/1_61_0/doc/html/boost_asio/tutorial/tutdaytime3/src.html
I have changed the port from 13 to 1163 so that I don't need to be a root user to start listening.
And I am running the io_service in a separate thread.
int main()
{
    try
    {
        boost::asio::io_service io_service;
        tcp_server server(io_service);
        boost::thread t(boost::bind(&boost::asio::io_service::run, &io_service));
        t.detach();
    }
    catch (std::exception& e)
    {
        std::cerr << e.what() << std::endl;
    }

    string wait;
    cin >> wait;
    return 0;
}
When testing the above server with http://www.boost.org/doc/libs/1_61_0/doc/html/boost_asio/tutorial/tutdaytime1/src.html client, it says connection refused.
netstat --listen didn't show any open ports on 1163
I couldn't figure out how to use boost::asio::async_result<typename Handler>; I am confused about the Handler.
Working modification
int main()
{
    try
    {
        boost::asio::io_service io_service;
        tcp_server server(io_service);
        boost::thread t(boost::bind(&boost::asio::io_service::run, &io_service));
        t.detach();

        string wait;
        cin >> wait;
    }
    catch (std::exception& e)
    {
        std::cerr << e.what() << std::endl;
    }
    return 0;
}
If the wait is inside the try block, the code works!
If asio can't listen on the port, the creation or binding of the acceptor will already fail. In your case you are creating the acceptor with acceptor_(io_service, tcp::endpoint(tcp::v4(), 13)), which is this overload: http://www.boost.org/doc/libs/1_62_0/doc/html/boost_asio/reference/basic_socket_acceptor/basic_socket_acceptor/overload3.html
This will directly try to bind the socket and throw an exception if that fails. The alternative (which is described at the bottom of that page) is to create the acceptor without an assigned endpoint and call open and bind on it yourself, where bind will fail with an error code (e.g. if the address is already in use). In either case you won't need to wait for accept/async_accept to see an error.
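If you want the failure as an error code rather than an exception, a minimal sketch of that open/bind/listen sequence could look like this (using port 1163 from the question):

#include <boost/asio.hpp>
#include <iostream>

int main()
{
    using boost::asio::ip::tcp;
    boost::asio::io_service io_service;

    // Construct the acceptor without binding it, then open/bind/listen
    // explicitly so failures surface as error codes instead of exceptions.
    tcp::acceptor acceptor(io_service);
    boost::system::error_code ec;

    acceptor.open(tcp::v4(), ec);
    if (!ec) acceptor.bind(tcp::endpoint(tcp::v4(), 1163), ec);
    if (!ec) acceptor.listen(boost::asio::socket_base::max_connections, ec);

    if (ec)
        std::cerr << "acceptor setup failed: " << ec.message() << '\n';
    else
        std::cout << "listening on port 1163" << std::endl;
}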
I guess your issue is in the client program, which still tries to connect to port 13. Have you changed it to use port 1163? In the example code the port is not written directly; a well-known service name is used instead: tcp::resolver::query query(argv[1], "daytime");. The "daytime" service name tells the resolver to use port 13.
Update: Now that I see the actual code with the thread, it's a totally different error:
If the wait is not inside the try block, the asio io_service and tcp_server will go out of scope almost immediately, which means their destructors are called. This will stop all communication. Even worse, the detached thread now operates on dangling pointers. As a general rule, asio objects (the io_service event loop, sockets, etc.) should all live in the thread that uses them. The lifetime of the io_service should be tied to, or be shorter than, the lifetime of that thread. The lifetime of the sockets should be shorter than that of the event loop (io_service) that runs them. Other usage scenarios might be possible by using shared_ptrs or by putting a lot of thought into the design, but that's not what I would recommend.
Everything I've read in the Boost ASIO docs and here on StackOverflow suggests I can stop an async_accept operation by calling close on the acceptor socket. However, I get an intermittent not_socket error in the async_accept handler when I try to do this. Am I doing something wrong or does Boost ASIO not support this?
(Related questions: here and here.)
(Note: I'm running on Windows 7 and using the Visual Studio 2015 compiler.)
The core problem I face is a race condition between the async_accept operation accepting an incoming connection and my call to close. This happens even when using a strand, explicit or implicit.
Note my call to async_accept strictly happens before my call to close. I conclude the race condition is between my call to close and the under-the-hood code in Boost ASIO that accepts the incoming connection.
I've included code demonstrating the problem. The program repeatedly creates an acceptor, connects to it, and immediately closes the acceptor. It expects the async_accept operation to either complete successfully or else be canceled. Any other error causes the program to abort, which is what I'm seeing intermittently.
For synchronization the program uses an explicit strand. Nevertheless, the call to close is unsynchronized with the effect of the async_accept operation, so sometimes the acceptor closes before it accepts the incoming connection, sometimes it closes afterward, and sometimes the close lands somewhere in the middle of the accept, hence the problem.
Here's the code:
#include <algorithm>
#include <boost/asio.hpp>
#include <cstdlib>
#include <future>
#include <iostream>
#include <memory>
#include <thread>
int main()
{
boost::asio::io_service ios;
auto work = std::make_unique<boost::asio::io_service::work>(ios);
const auto ios_runner = [&ios]()
{
boost::system::error_code ec;
ios.run(ec);
if (ec)
{
std::cerr << "io_service runner failed: " << ec.message() << '\n';
abort();
}
};
auto thread = std::thread{ios_runner};
const auto make_acceptor = [&ios]()
{
boost::asio::ip::tcp::resolver resolver{ios};
boost::asio::ip::tcp::resolver::query query{
"localhost",
"",
boost::asio::ip::resolver_query_base::passive |
boost::asio::ip::resolver_query_base::address_configured};
const auto itr = std::find_if(
resolver.resolve(query),
boost::asio::ip::tcp::resolver::iterator{},
[](const boost::asio::ip::tcp::endpoint& ep) { return true; });
assert(itr != boost::asio::ip::tcp::resolver::iterator{});
return boost::asio::ip::tcp::acceptor{ios, *itr};
};
for (auto i = 0; i < 1000; ++i)
{
auto acceptor = make_acceptor();
const auto saddr = acceptor.local_endpoint();
boost::asio::io_service::strand strand{ios};
boost::asio::ip::tcp::socket server_conn{ios};
// Start accepting.
std::promise<void> accept_promise;
strand.post(
[&]()
{
acceptor.async_accept(
server_conn,
strand.wrap(
[&](const boost::system::error_code& ec)
{
accept_promise.set_value();
if (ec.category() == boost::asio::error::get_system_category()
&& ec.value() == boost::asio::error::operation_aborted)
return;
if (ec)
{
std::cerr << "async_accept failed (" << i << "): " << ec.message() << '\n';
abort();
}
}));
});
// Connect to the acceptor.
std::promise<void> connect_promise;
strand.post(
[&]()
{
boost::asio::ip::tcp::socket client_conn{ios};
{
boost::system::error_code ec;
client_conn.connect(saddr, ec);
if (ec)
{
std::cerr << "connect failed: " << ec.message() << '\n';
abort();
}
connect_promise.set_value();
}
});
connect_promise.get_future().get(); // wait for connect to finish
// Close the acceptor.
std::promise<void> stop_promise;
strand.post([&acceptor, &stop_promise]()
{
acceptor.close();
stop_promise.set_value();
});
stop_promise.get_future().get(); // wait for close to finish
accept_promise.get_future().get(); // wait for async_accept to finish
}
work.reset();
thread.join();
}
Here's the output from a sample run:
async_accept failed (5): An operation was attempted on something that is not a socket
The number in parentheses denotes how many successful iterations the program ran.
UPDATE #1: Based on Tanner Sansbury's answer, I've added a std::promise for signaling the completion of the async_accept handler. This has no effect on the behavior I'm seeing.
UPDATE #2: The not_socket error originates from a call to setsockopt, from call_setsockopt, from socket_ops::setsockopt in the file boost\asio\detail\impl\socket_ops.ipp (Boost version 1.59). Here's the full call:
socket_ops::setsockopt(new_socket, state,
    SOL_SOCKET, SO_UPDATE_ACCEPT_CONTEXT,
    &update_ctx_param, sizeof(SOCKET), ec);
Microsoft's documentation for setsockopt says about SO_UPDATE_ACCEPT_CONTEXT:
Updates the accepting socket with the context of the listening socket.
I'm not sure what exactly this means, but it sounds like something that fails if the listening socket is closed. This suggests that, on Windows, one cannot safely close an acceptor that is currently running a completion handler for an async_accept operation.
I hope someone can tell me I'm wrong and that there is a way to safely close a busy acceptor.
The example program will not cancel the async_accept operation. Once the connection has been established, the async_accept operation will be posted internally for completion. At this point, the operation is no longer cancelable and will not be affected by acceptor.close().
The issue being observed is the result of undefined behavior. The program fails to meet a lifetime requirement for async_accept's peer parameter:
The socket into which the new connection will be accepted. Ownership of the peer object is retained by the caller, which must guarantee that it is valid until the handler is called.
In particular, the peer socket, server_conn, has automatic scope within the for loop. The loop may begin a new iteration while the async_accept operation is outstanding, causing server_conn to be destroyed and violate the lifetime requirement. Consider extending server_conn's lifetime by either:
setting a std::promise within the accept handler and waiting on the related std::future before continuing to the next iteration of the loop, or
managing server_conn via a smart pointer and passing ownership to the accept handler (a sketch of this approach follows).
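Here is a minimal, self-contained sketch of the smart-pointer option (Boost io_service-era API as in the question; the loopback endpoint and names are only illustrative). The completion handler takes shared ownership of the peer socket, so the socket remains valid until the handler has run, even after the initiating scope has ended:

#include <boost/asio.hpp>
#include <iostream>
#include <memory>

int main()
{
    boost::asio::io_service ios;
    boost::asio::ip::tcp::acceptor acceptor{
        ios,
        boost::asio::ip::tcp::endpoint{boost::asio::ip::address_v4::loopback(), 0}};

    // The handler's shared_ptr copy keeps the peer socket alive until the
    // accept completes, regardless of what the caller does afterwards.
    auto server_conn = std::make_shared<boost::asio::ip::tcp::socket>(ios);
    acceptor.async_accept(
        *server_conn,
        [server_conn](const boost::system::error_code& ec)
        {
            std::cout << "accept: " << ec.message() << '\n';
        });

    // Connect a client so the accept can complete, then run the handlers.
    boost::asio::ip::tcp::socket client{ios};
    client.connect(acceptor.local_endpoint());
    ios.run();
}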
asio::io_service ioService;
asio::ip::tcp::socket* socket = new asio::ip::tcp::socket(ioService);
socket->async_connect(endpoint, handler);
delete socket;
The socket's destructor should close the socket. But can the asynchronous backend handle this? Will it cancel the asynchronous operation and call the handler? Probably not?
When the socket is destroyed, it invokes destroy on its service. When a SocketService's destroy() function is invoked, it cancels asynchronous operations by calling a non-throwing close(). Handlers for cancelled operations will be posted for invocation within io_service with a boost::asio::error::operation_aborted error.
Here is a complete example demonstrating the documented behavior:
#include <iostream>
#include <boost/asio.hpp>

void handle_connect(const boost::system::error_code& error)
{
    std::cout << "handle_connect: " << error.message() << std::endl;
}

int main()
{
    namespace ip = boost::asio::ip;
    using ip::tcp;

    boost::asio::io_service io_service;

    // Create socket with a scoped life.
    {
        tcp::socket socket(io_service);
        socket.async_connect(
            tcp::endpoint(ip::address::from_string("1.2.3.4"), 12345),
            &handle_connect);
    }

    io_service.run();
}
And its output:
handle_connect: Operation canceled
Why did you create the socket using new? It won't go through the normal process.
If you really want to create the socket using new, you have to close and delete it at the end of your program.
Here is a simple sample:
io_service service_;
ip::tcp::socket sock(service_);
sock.async_connect(ep, connect_handler);

// Guard the connect with a timeout.
deadline_timer t(service_, boost::posix_time::seconds(5));
t.async_wait(timeout_handler);

service_.run();
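Sketched below, under the same assumptions as the earlier example (the unreachable 1.2.3.4:12345 endpoint is only illustrative), is the usual alternative to raw new/delete: give the completion handler shared ownership of the socket so it cannot be destroyed while the operation is outstanding.

#include <boost/asio.hpp>
#include <iostream>
#include <memory>

int main()
{
    boost::asio::io_service io_service;

    // The handler's shared_ptr copy keeps the socket alive at least until
    // the handler runs, so no manual delete is needed.
    auto socket = std::make_shared<boost::asio::ip::tcp::socket>(io_service);

    boost::asio::ip::tcp::endpoint endpoint(
        boost::asio::ip::address::from_string("1.2.3.4"), 12345);

    socket->async_connect(endpoint,
        [socket](const boost::system::error_code& error)
        {
            std::cout << "connect: " << error.message() << std::endl;
        });

    io_service.run();
}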
I am working on a project where I need to use a few persistent connections to talk to different servers over long periods of time. This server will have a fairly high throughput. I am having trouble figuring out a way to set up the persistent connections correctly. The best way I could think of to do this is to create a persistent connection class. Ideally I would connect to the server one time, do async_writes as information comes in to me, and read information as it comes back to me. I don't think I am structuring my class correctly, though.
Here is what I have built right now:
persistent_connection::persistent_connection(std::string ip, std::string port):
io_service_(), socket_(io_service_), strand_(io_service_), is_setup_(false), outbox_()
{
boost::asio::ip::tcp::resolver resolver(io_service_);
boost::asio::ip::tcp::resolver::query query(ip,port);
boost::asio::ip::tcp::resolver::iterator iterator = resolver.resolve(query);
boost::asio::ip::tcp::endpoint endpoint = *iterator;
socket_.async_connect(endpoint, boost::bind(&persistent_connection::handler_connect, this, boost::asio::placeholders::error, iterator));
io_service_.poll();
}
void persistent_connection::handler_connect(const boost::system::error_code &ec, boost::asio::ip::tcp::resolver::iterator endpoint_iterator)
{
if(ec)
{
std::cout << "Couldn't connect" << ec << std::endl;
return;
}
else
{
boost::asio::socket_base::keep_alive option(true);
socket_.set_option(option);
boost::asio::async_read_until(socket_, buf_ ,"\r\n\r\n", boost::bind(&persistent_connection::handle_read_headers, this, boost::asio::placeholders::error));
}
}
void persistent_connection::write(const std::string &message)
{
write_impl(message);
//strand_.post(boost::bind(&persistent_connection::write_impl, this, message));
}
void persistent_connection::write_impl(const std::string &message)
{
outbox_.push_back(message);
if(outbox_.size() > 1)
{
return;
}
this->write_to_socket();
}
void persistent_connection::write_to_socket()
{
std::string message = "GET /"+ outbox_[0] +" HTTP/1.0\r\n";
message += "Host: 10.1.10.120\r\n";
message += "Accept: */*\r\n";
boost::asio::async_write(socket_, boost::asio::buffer(message.c_str(), message.size()), strand_.wrap(
boost::bind(&persistent_connection::handle_write, this, boost::asio::placeholders::error, boost::asio::placeholders::bytes_transferred)));
}
void persistent_connection::handle_write(const boost::system::error_code& ec, std::size_t bytes_transfered)
{
outbox_.pop_front();
if(ec)
{
std::cout << "Send error" << boost::system::system_error(ec).what() << std::endl;
}
if(!outbox_.empty())
{
this->write_to_socket();
}
boost::asio::async_read_until(socket_, buf_ ,"\r\n\r\n",boost::bind(&persistent_connection::handle_read_headers, this, boost::asio::placeholders::error));
}
The first message I send from this seems to go out fine; the server gets it and responds with a valid response. Unfortunately, I see a few problems:
1) My handle_write is never called after the async_write call, and I have no clue why.
2) The program never reads the response. I am guessing this is related to #1, since async_read_until is not issued until that handler runs.
3) I was also wondering if someone could tell me why my commented-out strand_.post call would not work.
I am guessing most of this has to do with my lack of knowledge of how I should be using my io_service, so any pointers would be greatly appreciated. If you need any additional information, I would be glad to provide more.
Thank you
Edit: the calls to write:
int main()
{
    persistent_connection p("10.1.10.220", "80");
    p.write("100");
    p.write("200");
    barrier b(1,30000); //Timed mutex, waits for 300 seconds.
    b.wait();
}
and
void persistent_connection::handle_read_headers(const boost::system::error_code &ec)
{
    std::istream is(&buf_);
    std::string read_stuff;
    std::getline(is, read_stuff);
    std::cout << read_stuff << std::endl;
}
The behavior described is the result of the io_service_'s event loop no longer being processed.
The constructor invokes io_service::poll(), which will run handlers that are ready to run and will not block waiting for work to finish, whereas io_service::run() will block until all work has finished. Thus, when polling, if the other side of the connection has not written any data, then no handlers may be ready to run, and execution will return from poll().
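A minimal, standalone sketch of keeping the event loop alive on a worker thread instead of polling in the constructor (this is not the poster's class, just an illustration): the io_service::work object prevents run() from returning while there happens to be no pending operation.

#include <boost/asio.hpp>
#include <iostream>
#include <memory>
#include <thread>

int main()
{
    boost::asio::io_service io_service;

    // Without outstanding work, run() returns immediately, and poll() only
    // executes handlers that are already ready. The work object keeps run()
    // blocked so handlers queued later are still processed.
    auto work = std::make_unique<boost::asio::io_service::work>(io_service);
    std::thread runner([&io_service]() { io_service.run(); });

    io_service.post([]() { std::cout << "handler ran on the worker thread\n"; });

    work.reset();   // let run() return once the queued handlers have finished
    runner.join();
}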
With regards to threading, if each connection will have its own thread and the communication is a half-duplex protocol, such as HTTP, then the application code may be simpler if it is written synchronously. On the other hand, if each connection will have its own thread but the code is written asynchronously, then consider handling exceptions thrown from within the event loop; it may be worth reading Boost.Asio's documentation on the effect of exceptions thrown from handlers.
Also, persistent_connection::write_to_socket() introduces undefined behavior. When invoking boost::asio::async_write(), it is documented that the caller retains ownership of the buffer and must guarantee that the buffer remains valid until the handler is called. In this case, the message buffer is an automatic variable, whose lifespan may end before the persistent_connection::handle_write handler is invoked. One solution could be to change the lifespan of message to match that of persistent_connection by making it a member variable.
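For instance, a hypothetical rewrite of write_to_socket() along those lines might build the request into a member (current_request_ below is an assumed std::string member, not part of the original code) so the buffer outlives the asynchronous write:

void persistent_connection::write_to_socket()
{
    // Build the request into a member so it stays valid until handle_write runs.
    current_request_  = "GET /" + outbox_[0] + " HTTP/1.0\r\n";
    current_request_ += "Host: 10.1.10.120\r\n";
    current_request_ += "Accept: */*\r\n";
    current_request_ += "\r\n";   // blank line terminates the request headers

    boost::asio::async_write(socket_,
        boost::asio::buffer(current_request_),
        strand_.wrap(
            boost::bind(&persistent_connection::handle_write, this,
                boost::asio::placeholders::error,
                boost::asio::placeholders::bytes_transferred)));
}

Because the class only starts the next write from handle_write once the previous one has completed, a single member buffer is sufficient for this sketch.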
I'm working on a multithreaded application in which one thread acts as a tcp server which receives commands from a client. The thread uses a Boost socket and acceptor to wait for a client to connect, receives a command from the client, passes the command to the rest of the application, then waits again. Here's the code:
void ServerThreadFunc()
{
    using boost::asio::ip::tcp;
    boost::asio::io_service io_service;
    tcp::acceptor acceptor(io_service, tcp::endpoint(tcp::v4(), port_no));

    for (;;)
    {
        // listen for command connection
        tcp::socket socket(io_service);
        acceptor.accept(socket);

        // connected; receive command
        boost::array<char,256> msg_buf;
        socket.receive(boost::asio::buffer(msg_buf));

        // do something with received bytes here
    }
}
This thread spends most of its time blocked on the call to acceptor.accept(). At the moment, the thread only gets terminated when the application exits. Unfortunately, this causes a crash after main() returns - I believe because the thread tries to access the app's logging singleton after the singleton has been destroyed. (It was like that when I got here, honest guv.)
How can I shut this thread down cleanly when it's time for the application to exit? I've read that a blocking accept() call on a raw socket can be interrupted by closing the socket from another thread, but this doesn't appear to work on a Boost socket. I've tried converting the server logic to asynchronous i/o using the Boost asynchronous tcp echo server example, but that just seems to exchange a blocking call to acceptor::accept() for a blocking call to io_service::run(), so I'm left with the same problem: a blocked call which I can't interrupt. Any ideas?
In short, there are two options:
Change code to be asynchronous (acceptor::async_accept() and async_read), run within the event loop via io_service::run(), and cancel via io_service::stop().
Force blocking calls to interrupt with lower level mechanics, such as signals.
I would recommend the first option, as it is more likely to be portable and easier to maintain. The important concept to understand is that io_service::run() only blocks as long as there is pending work. When io_service::stop() is invoked, it will try to cause all threads blocked on io_service::run() to return as soon as possible; it will not interrupt synchronous operations, such as acceptor::accept() and socket::receive(), even if the synchronous operations are invoked within the event loop. It is important to note that io_service::stop() is a non-blocking call, so synchronization with threads that were blocked on io_service::run() must use another mechanism, such as thread::join().
Here is an example that will run for 10 seconds and listen on port 8080:
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/shared_ptr.hpp>
#include <boost/thread.hpp>
#include <iostream>
void StartAccept( boost::asio::ip::tcp::acceptor& );
void ServerThreadFunc( boost::asio::io_service& io_service )
{
using boost::asio::ip::tcp;
tcp::acceptor acceptor( io_service, tcp::endpoint( tcp::v4(), 8080 ) );
// Add a job to start accepting connections.
StartAccept( acceptor );
// Process event loop.
io_service.run();
std::cout << "Server thread exiting." << std::endl;
}
void HandleAccept( const boost::system::error_code& error,
boost::shared_ptr< boost::asio::ip::tcp::socket > socket,
boost::asio::ip::tcp::acceptor& acceptor )
{
// If there was an error, then do not add any more jobs to the service.
if ( error )
{
std::cout << "Error accepting connection: " << error.message()
<< std::endl;
return;
}
// Otherwise, the socket is good to use.
std::cout << "Doing things with socket..." << std::endl;
// Perform async operations on the socket.
// Done using the socket, so start accepting another connection. This
// will add a job to the service, preventing io_service::run() from
// returning.
std::cout << "Done using socket, ready for another connection."
<< std::endl;
StartAccept( acceptor );
};
void StartAccept( boost::asio::ip::tcp::acceptor& acceptor )
{
using boost::asio::ip::tcp;
boost::shared_ptr< tcp::socket > socket(
new tcp::socket( acceptor.get_io_service() ) );
// Add an accept call to the service. This will prevent io_service::run()
// from returning.
std::cout << "Waiting on connection" << std::endl;
acceptor.async_accept( *socket,
boost::bind( HandleAccept,
boost::asio::placeholders::error,
socket,
boost::ref( acceptor ) ) );
}
int main()
{
using boost::asio::ip::tcp;
// Create io service.
boost::asio::io_service io_service;
// Create server thread that will start accepting connections.
boost::thread server_thread( ServerThreadFunc, boost::ref( io_service ) );
// Sleep for 10 seconds, then shutdown the server.
std::cout << "Stopping service in 10 seconds..." << std::endl;
boost::this_thread::sleep( boost::posix_time::seconds( 10 ) );
std::cout << "Stopping service now!" << std::endl;
// Stopping the io_service is a non-blocking call. The threads that are
// blocked on io_service::run() will try to return as soon as possible, but
// they may still be in the middle of a handler. Thus, perform a join on
// the server thread to guarantee a block occurs.
io_service.stop();
std::cout << "Waiting on server thread..." << std::endl;
server_thread.join();
std::cout << "Done waiting on server thread." << std::endl;
return 0;
}
While running, I opened two connections. Here is the output:
Stopping service in 10 seconds...
Waiting on connection
Doing things with socket...
Done using socket, ready for another connection.
Waiting on connection
Doing things with socket...
Done using socket, ready for another connection.
Waiting on connection
Stopping service now!
Waiting on server thread...
Server thread exiting.
Done waiting on server thread.
When you receive an event telling you it's time to exit, you can call acceptor.cancel(), which will cancel the pending accept (with an error code of operation_canceled). On some systems you might also have to close() the acceptor to be safe.
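A minimal sketch of that approach (io_service-era API; the cancel and close are posted so they run on the thread executing the event loop):

#include <boost/asio.hpp>
#include <iostream>

int main()
{
    boost::asio::io_service io_service;
    boost::asio::ip::tcp::acceptor acceptor(
        io_service,
        boost::asio::ip::tcp::endpoint(boost::asio::ip::tcp::v4(), 0));
    boost::asio::ip::tcp::socket peer(io_service);

    acceptor.async_accept(peer,
        [](const boost::system::error_code& ec)
        {
            // Expect "Operation canceled" here.
            std::cout << "accept handler: " << ec.message() << '\n';
        });

    // Cancel (and close, for good measure) from within the event loop.
    io_service.post([&acceptor]()
    {
        boost::system::error_code ignored;
        acceptor.cancel(ignored);
        acceptor.close(ignored);
    });

    io_service.run();
}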
If it comes to it, you could open a temporary client connection to it on localhost - that will wake it up. You could even send it a special message so that you can shut down your server from the pub - there should be an app for that:)
Simply call shutdown() with the native handle and the SHUT_RD option to cancel the existing receive (accept) operation.
The accepted answer is not exactly correct. In fact, #JohnYu answered correctly.
Using the blocking API of ASIO is much like using the BSD sockets API that the ASIO library wraps in its classes.
The problem is that the boost::asio::ip::tcp::acceptor class does not provide shutdown() functionality, so you must do it using the "old" sockets API.
Additional note: Make sure the acceptor, socket, and io_service are not deleted before all threads using them exit. In the following code, std::shared_ptr is used to keep the shared resources alive so that the user of the ApplicationContext class can delete the ApplicationContext object and avoid a SEGFAULT crash.
Additional note: pay attention to the Boost documentation; there are overloaded methods that raise exceptions and ones that return an error code. The original poster's code used acceptor->accept(socket); without try/catch, which would cause the program to exit instead of a normal thread-routine exit and cleanup (a sketch of the non-throwing variant follows the code below).
Here is the solution description:
#include <sys/socket.h> // ::shutdown() and SHUT_RDWR
// other includes ...
using boost::asio::ip::tcp;
using boost::asio::io_service;
class ApplicationContext {
    // Use shared pointers to extend the life of the resources after the
    // ApplicationContext is deleted, so running threads can still keep
    // using the shared resources.
    std::shared_ptr<tcp::acceptor> acceptor;
    std::shared_ptr<io_service> ioservice;

    // called `ServerThreadFunc` in the question's code example
    void AcceptLoopThreadRoutine(int port_no) {
        ioservice = std::make_shared<io_service>();
        acceptor = std::make_shared<tcp::acceptor>(*ioservice, tcp::endpoint(tcp::v4(), port_no));

        try {
            for (;;) {
                // listen for a client connection
                tcp::socket socket(*ioservice);

                // Note: boost::system::system_error is raised when using this overload
                acceptor->accept(socket);

                // connected; receive some data ...
                // // boost::array<char,256> msg_buf;
                // // socket.receive(boost::asio::buffer(msg_buf));

                // do something with received bytes here
            }
        } catch (std::exception const& exception) {
            // boost::system::system_error here indicates a clean exit ;)
        }
    }

    void StopAcceptThread() {
        if (acceptor) {
            // boost::asio::ip::tcp::acceptor does not have shutdown() functionality
            // exposed, so we need to do it with this low-level approach
            int shutdown_status = shutdown(acceptor->native_handle(), SHUT_RDWR);
        }
    }
};
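Regarding the overload note above, a hypothetical variant of the accept loop using the non-throwing accept() overload (same member names as the class sketch, error handling reduced to a bare minimum) could look like this:

// Drop-in replacement for the loop body of AcceptLoopThreadRoutine that
// reports errors through an error code instead of an exception.
for (;;) {
    tcp::socket socket(*ioservice);

    boost::system::error_code ec;
    acceptor->accept(socket, ec);   // non-throwing overload
    if (ec) {
        // e.g. set after shutdown()/close() from StopAcceptThread()
        break;
    }

    // do something with the connected socket here
}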
Also note that using signals to unblock the accept thread is a very nasty implementation, and a temporary client connection on localhost to unblock the accept thread is very awkward.
ASIO is here to help you accomplish everything in a single thread with callbacks. If you are mixing threads and ASIO, chances are your design is bad.
Additional note: Do not confuse shutdown() and close(). Some systems may allow you to use close() on the accepting socket to unblock the accept loop, but this is not portable.