I am trying to make precise time measurements on Windows when UDP datagrams arrive. Reading Microsoft's documentation, I decided to use QueryPerformanceCounter. The same documentation suggests setting the affinity and priority of the timing thread. I am using Boost and C++11 to implement my async UDP server:
void receiveEvent()
{
    listening_socket.async_receive_from(
        boost::asio::buffer(event_buffer, max_length), signaling_endpoint,
        [this](boost::system::error_code ec, std::size_t bytes_recvd)
        {
            if (!ec && bytes_recvd > 0)
            {
                LARGE_INTEGER prectime;
                ::QueryPerformanceCounter(&prectime);
                std::cout << prectime.QuadPart << std::endl;
            } else if (ec == boost::asio::error::operation_aborted) {
                std::cout << "socket canceled" << std::endl;
                return;
            }
            receiveEvent();
        });
}
listening_socket(io, boost::asio::ip::udp::endpoint(boost::asio::ip::udp::v4(), port));
receiveEvent();
async_thread = std::thread(boost::bind(&boost::asio::io_service::run, &io));
// TODO set thread affinity using SetProcessAffinityMask
// TODO set thread priority using SetPriorityClass
How do I use std::thread async_thread with SetProcessAffinityMask and SetPriorityClass?
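For reference, note that SetProcessAffinityMask and SetPriorityClass operate on the whole process; the per-thread equivalents are SetThreadAffinityMask and SetThreadPriority, which take the Win32 HANDLE returned by std::thread::native_handle() (on MSVC's standard library the native handle is the thread HANDLE). A minimal sketch, where the helper name, the choice of core, and the priority level are illustrative assumptions, not part of the question:

#include <windows.h>
#include <thread>

// Hypothetical helper: pin the io_service thread to one core and raise its
// priority so QueryPerformanceCounter samples come from one CPU under
// consistent scheduling.
void tuneTimingThread(std::thread& t)
{
    HANDLE h = static_cast<HANDLE>(t.native_handle());

    // Restrict this thread to CPU 1 (affinity mask bit 1).
    ::SetThreadAffinityMask(h, (DWORD_PTR)1 << 1);

    // Raise only this thread's priority; SetPriorityClass(GetCurrentProcess(), ...)
    // would instead change the priority class of the entire process.
    ::SetThreadPriority(h, THREAD_PRIORITY_TIME_CRITICAL);
}

// Usage, after starting the io thread:
// async_thread = std::thread(boost::bind(&boost::asio::io_service::run, &io));
// tuneTimingThread(async_thread);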
I am using Boost.Beast for making WebSocket connections. A process of mine may have a large number of streams/connections, and while terminating the process I call a blocking close on each WebSocket in the destructor using:
if (m_ws.next_layer().next_layer().is_open())
{
    boost::system::error_code ec;
    m_ws.close(boost::beast::websocket::close_code::normal, ec);
}
Having a lot of WebSockets makes terminating the process block for a long time. Is there a way to force-terminate (delete) a connection and free up the underlying resources faster? Thanks in advance.
As I told you in the comments, closing the connection should be a fast operation on sockets; it shouldn't take long or block the thread. I don't know how much work your program does to close each socket, but keep in mind that when your main thread ends, meaning the process has ended, the OS releases all the resources it was using without closing each socket. I use this technique, and the WebSocket clients detect the end of the connections; I only close a socket explicitly when I have a problem with the protocol or the remote endpoint has disconnected abruptly.
It would be useful if you shared more of your code, to see what other work it is doing.
By the way, let me share my code, where I have no problem closing WebSockets:
void wscnx::close(beast::websocket::close_code reason, const boost::system::error_code &ec)
{
    // std::cout << "\nwscnx::close\n";
    if (is_ssl && m_wss->is_open())
        m_wss->close(reason);
    else if (!is_ssl && m_ws->is_open())
        m_ws->close(reason); // note: the plain stream is closed here, not m_wss

    if (!m_server.expired())
        m_server.lock()->cnx_closed(m_cnx_info, ec);
}
In my case, I'm using asynchronous methods to read and synchronous methods to write. I'm not using an asynchronous method to write in order to avoid the scenario of two simultaneous write operations. Also, it's important to note that I accept new connections asynchronously.
Here is the code to accept the socket connection, where you can set the timeouts for reading and writing instead of using timers:
void wscnx::accept_tcp()
{
    m_ws->set_option(
        websocket::stream_base::timeout::suggested(
            beast::role_type::server));

    m_ws->set_option(websocket::stream_base::decorator(
        [](websocket::response_type &res)
        {
            res.set(http::field::server,
                    std::string(BOOST_BEAST_VERSION_STRING) +
                        " websocket-server-async");
        }));

    // std::cout << "wscnx::[" << m_id << "]:: TCP async_handshake\n";
    m_ws->async_accept(
        [self{shared_from_this()}](const boost::system::error_code &ec)
        {
            if (ec)
            {
                self->close(beast::websocket::close_code::protocol_error, ec);
                return;
            }
            // self->read_tcp();
            self->read();
        });
}
The code to read:
void wscnx::read()
{
    if (!is_ssl && !m_ws->is_open())
        return;
    else if (is_ssl && !m_wss->is_open())
        return;

    auto f_read = [self{shared_from_this()}](const boost::system::error_code &ec, std::size_t bytes_transferred)
    {
        boost::ignore_unused(bytes_transferred);

        // This indicates that the session was closed.
        if (ec == websocket::error::closed)
        {
            self->close(beast::websocket::close_code::normal, ec);
            return;
        }
        if (ec)
        {
            self->close(beast::websocket::close_code::abnormal, ec);
            return;
        }

        std::string data = beast::buffers_to_string(self->m_rx_buffer.data());
        self->m_rx_buffer.consume(bytes_transferred);

        if (!self->m_server.expired())
        {
            std::string_view vdata(data.c_str());
            self->m_server.lock()->on_data_rx(self->m_cnx_info.id, vdata, self->cnx_info());
        }

        self->read();
    }; // lambda

    if (!is_ssl)
        m_ws->async_read(m_rx_buffer, f_read);
    else
        m_wss->async_read(m_rx_buffer, f_read);
}
The code to write:
void wscnx::write(std::string_view data, bool close_on_write)
{
    std::unique_lock<std::mutex> u_lock(m_mtx_write);

    if ((!is_ssl && !m_ws->is_open()) || (is_ssl && !m_wss->is_open()))
        return;

    boost::system::error_code ec;
    size_t bytes_transferred{0};

    if (is_ssl)
        bytes_transferred = m_wss->write(net::buffer(data), ec);
    else
        bytes_transferred = m_ws->write(net::buffer(data), ec);

    boost::ignore_unused(bytes_transferred);

    // This indicates that the session was closed.
    if (ec == websocket::error::closed)
    {
        // std::cout << "[wscnx::[" << m_id << "]::on write] Error: " << ec.message() << "\n";
        close(beast::websocket::close_code::normal, ec);
        return;
    }
    if (ec)
    {
        // std::cout << "[wscnx::[" << m_id << "]::on write] Error: " << ec.message() << "\n";
        close(beast::websocket::close_code::abnormal, ec);
        return;
    }

    if (close_on_write)
        close(beast::websocket::close_code::normal, ec);
}
If you want to see the whole code, here is the link. The project is still in the development phase, but it works.
I'm trying to write a TCP client based on several different examples, using Asio from Boost 1.60. The connection works properly for about 30 seconds, but then disconnects with the error:
The network connection was aborted by the local system
I've attempted to set up a ping/pong scheme to keep the connection alive, but it still terminates. The previous Stack Overflow answers I've found suggested using Boost's shared_from_this and a shared pointer, which I've adapted my code to use, but the problem persists.
Setting up the Connection object and its thread:
boost::asio::io_service ios;
boost::asio::ip::tcp::resolver res(ios);
boost::shared_ptr<Connection> conn = boost::shared_ptr<Connection>(new Connection(ios));
conn->Start(res.resolve(boost::asio::ip::tcp::resolver::query("myserver", "10635")));
boost::thread t(boost::bind(&boost::asio::io_service::run, &ios));
Here are the relevant portions of the Connection class (I made sure to use shared_from_this() everywhere else, too):
class Connection : public boost::enable_shared_from_this<Connection>
{
public:
    Connection(boost::asio::io_service &io_service)
        : stopped_(false),
          socket_(io_service),
          deadline_(io_service),
          heartbeat_timer_(io_service)
    {
    }

    void Start(tcp::resolver::iterator endpoint_iter)
    {
        start_connect(endpoint_iter);
        deadline_.async_wait(boost::bind(&Connection::check_deadline, shared_from_this()));
    }

private:
    void start_read()
    {
        deadline_.expires_from_now(boost::posix_time::seconds(30));
        boost::asio::async_read_until(socket_, input_buffer_, 0x1f,
            boost::bind(&Connection::handle_read, shared_from_this(), _1));
    }

    void handle_read(const boost::system::error_code& ec)
    {
        if (stopped_)
            return;

        if (!ec)
        {
            std::string line;
            std::istream is(&input_buffer_);
            std::getline(is, line);
            if (!line.empty())
            {
                std::cout << "Received: " << line << "\n";
            }
            start_read();
        }
        else
        {
            // THIS IS WHERE THE ERROR IS LOGGED
            std::cout << "Error on receive: " << ec.message() << "\n";
            Stop();
        }
    }

    void check_deadline()
    {
        if (stopped_)
            return;

        if (deadline_.expires_at() <= deadline_timer::traits_type::now())
        {
            socket_.close();
            deadline_.expires_at(boost::posix_time::pos_infin);
        }

        deadline_.async_wait(boost::bind(&Connection::check_deadline, shared_from_this()));
    }
};
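For context, the ping side of such a setup (the heartbeat_timer_ member declared above is unused in this excerpt) typically looks like the following sketch, modeled on Asio's canonical timeout-client example; these would be additional members of Connection, and the 10-second interval and newline message are illustrative guesses:

void start_write()
{
    if (stopped_)
        return;

    // Send a heartbeat so the server (and any middleboxes) see traffic.
    boost::asio::async_write(socket_, boost::asio::buffer("\n", 1),
        boost::bind(&Connection::handle_write, shared_from_this(), _1));
}

void handle_write(const boost::system::error_code& ec)
{
    if (stopped_)
        return;

    if (!ec)
    {
        // Schedule the next heartbeat.
        heartbeat_timer_.expires_from_now(boost::posix_time::seconds(10));
        heartbeat_timer_.async_wait(boost::bind(&Connection::start_write, shared_from_this()));
    }
    else
    {
        std::cout << "Error on heartbeat: " << ec.message() << "\n";
        Stop();
    }
}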
The issue turned out to be on the server's end. The server wasn't sending the "pong" response to the client's ping properly, so the async_read_until() call never finished and consequently never reset the deadline timer.
I'm converting an application from Juce asynchronous I/O to Asio. The first part is to rewrite the code that receives traffic from another application on the same machine (it's a Lightroom Lua plugin that sends \n-delimited messages on port 58764). Whenever I successfully connect to that port with my C++ program, I get a series of error codes, all the same:
An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full.
Can someone point out my error? I can see that the socket is successfully opened. I've reduced this from my full program to a minimal example. I also tried it with connect instead of async_connect and had the same problem.
#include <iostream>
#include "asio.hpp"

asio::io_context io_context_;
asio::ip::tcp::socket socket_{io_context_};

void loop_me()
{
    asio::streambuf streambuf{};
    while (true) {
        if (!socket_.is_open()) {
            return;
        }
        else {
            asio::async_read_until(socket_, streambuf, '\n',
                [&streambuf](const asio::error_code& error_code, std::size_t bytes_transferred) {
                    if (error_code) {
                        std::cerr << "Socket error " << error_code.message() << std::endl;
                        return;
                    }
                    // Extract up to the first delimiter.
                    std::string command{buffers_begin(streambuf.data()),
                                        buffers_begin(streambuf.data()) + bytes_transferred};
                    std::cout << command << std::endl;
                    streambuf.consume(bytes_transferred);
                });
        }
    }
}
int main()
{
    auto work_{asio::make_work_guard(io_context_)};
    std::thread io_thread_;
    std::thread run_thread_;

    io_thread_ = std::thread([] { io_context_.run(); });
    socket_.async_connect(asio::ip::tcp::endpoint(asio::ip::address_v4::loopback(), 58764),
        [&run_thread_](const asio::error_code& error) {
            if (!error) {
                std::cout << "Socket connected in LR_IPC_In\n";
                run_thread_ = std::thread(loop_me);
            }
            else {
                std::cerr << "LR_IPC_In socket connect failed " << error.message() << std::endl;
            }
        });

    std::this_thread::sleep_for(std::chrono::seconds(1));
    socket_.close();
    io_context_.stop();
    if (io_thread_.joinable())
        io_thread_.join();
    if (run_thread_.joinable())
        run_thread_.join();
}
You are trying to start an infinite number of asynchronous read operations at the same time. You shouldn't start a new asynchronous read until the previous one has finished.
async_read_until returns immediately, even though the data hasn't been received yet. That's the point of "async".
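A minimal sketch of the fix under that constraint: move the buffer next to the socket so it outlives each operation, and let the completion handler start the next read instead of the while loop. The function name read_loop is my own:

asio::streambuf streambuf_; // at namespace scope, alongside socket_

void read_loop()
{
    asio::async_read_until(socket_, streambuf_, '\n',
        [](const asio::error_code& error_code, std::size_t bytes_transferred) {
            if (error_code) {
                std::cerr << "Socket error " << error_code.message() << std::endl;
                return;
            }
            // Extract up to and including the first delimiter.
            std::string command{asio::buffers_begin(streambuf_.data()),
                                asio::buffers_begin(streambuf_.data()) + bytes_transferred};
            std::cout << command << std::endl;
            streambuf_.consume(bytes_transferred);
            read_loop(); // chain the next read only after this one has completed
        });
}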
I have two objects A and B. A uses B underneath, which is a TCP client object. Object B is created in a separate thread in object A's constructor. Within the constructor of B, I use Boost.Asio for the socket and a deadline timer. I use async calls for the socket connect and for the wait on the timer. Here is the code for object B's constructor:
B::B(const boost::asio::ip::address& ipaddr,
     std::uint16_t port) : io_service_(),
                           socket_(io_service_),
                           endpoint_(ipaddr, port),
                           connected_(false) {
    boost::asio::deadline_timer dt(io_service_);

    socket_.async_connect(endpoint_, [this, &dt](const boost::system::error_code& ec) {
        if (ec) {
            std::cout << ec.message() << std::endl;
        }
        else {
            dt.cancel();
            std::cout << "Connected before timer expired" << std::endl;
            connected_ = true;
            socket_.set_option(boost::asio::socket_base::keep_alive(true));
        }
    });

    dt.expires_from_now(boost::posix_time::seconds(3));
    dt.async_wait([this](const boost::system::error_code& ec) {
        if (ec) {
            std::cout << ec.message() << std::endl;
        }
        else {
            std::cout << "Timer expired before connection" << std::endl;
            io_service_.stop();
        }
    });

    io_service_.run();
}
When the machine the TCP client wants to connect to is on, this works. When the machine is off, the expectation is that the deadline timer will expire after 3 seconds and the connected flag will not be set. Instead, I get an abort call during execution. I am sure it is something to do with the deadline timer, but I cannot pin down exactly what.
Does anyone see what may be wrong here, or have any other suggestions? Is there something I am missing with Boost's deadline_timer?
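For comparison, the usual connect-with-timeout pattern keeps the timer alive for as long as its handler can run (here as a data member named timer_, a name of my choosing) and closes the socket on timeout rather than stopping the whole io_service. A sketch under those assumptions, reusing the members from the constructor above:

// Assumed members: io_service_, socket_, endpoint_, connected_, and
// boost::asio::deadline_timer timer_{io_service_};
timer_.expires_from_now(boost::posix_time::seconds(3));
timer_.async_wait([this](const boost::system::error_code& ec) {
    // ec is operation_aborted when a successful connect cancelled the
    // timer; only act when the timer really expired.
    if (!ec) {
        std::cout << "Timer expired before connection" << std::endl;
        socket_.close(); // completes async_connect with operation_aborted
    }
});

socket_.async_connect(endpoint_, [this](const boost::system::error_code& ec) {
    timer_.cancel(); // stop the timeout in either outcome
    if (!ec) {
        connected_ = true;
        socket_.set_option(boost::asio::socket_base::keep_alive(true));
    } else {
        std::cout << ec.message() << std::endl;
    }
});

io_service_.run(); // returns once both handlers have completed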
I'm trying to create a server that receives connections via domain sockets. I can start the server and I can see the socket being created on the filesystem. But whenever I try to connect to it via socat I get the following error:
2015/03/02 14:00:10 socat[62720] E connect(3, LEN=19 AF=1 "/var/tmp/rpc.sock", 19): Connection refused
This is my Asio code (only the .cpp files). Despite the post title, I'm using the Boost-free version of Asio, but I don't think that should be a problem.
namespace myapp {

DomainListener::DomainListener(const string& addr) : socket{this->service}, Listener{addr} {
    remove(this->address.c_str());
    stream_protocol::endpoint ep(this->address);
    stream_protocol::acceptor acceptor(this->service, ep);
    acceptor.async_accept(this->socket, ep, bind(&DomainListener::accept_callback, this, _1));
}

DomainListener::~DomainListener() {
    this->service.stop();
    remove(this->address.c_str());
}

void DomainListener::accept_callback(const error_code& ec) noexcept {
    this->socket.async_read_some(asio::buffer(this->data), bind(&DomainListener::read_data, this, _1, _2));
}

void DomainListener::read_data(const error_code& ec, size_t length) noexcept {
    //std::cerr << "AAA" << std::endl;
    //std::cerr << this->data[0] << std::endl;
    //std::cerr << "BBB" << std::endl;
}

}
Listener::Listener(const string& addr) : work{asio::io_service::work(this->service)} {
    this->address = addr;
}

void Listener::listen() {
    this->service.run();
}

Listener::~Listener() {
}
In the code that uses these classes I call listen() whenever I want to start listening to the socket for connections.
I've managed to get this to work with libuv, and changed to Asio because I thought it would make for more readable code, but I'm finding the documentation very ambiguous.
The issue is most likely the lifetime of the acceptor.
The acceptor is an automatic variable in the DomainListener constructor. When the constructor completes, the acceptor is destroyed, which closes it and cancels outstanding operations such as the async_accept operation. Cancelled operations are given an error code of asio::error::operation_aborted and scheduled for deferred invocation within the io_service. Hence, there may not be an active listener when attempting to connect to the domain socket. For more details on the effects of IO object destruction, see this answer.
DomainListener::DomainListener(const string&) : /* ... */
{
    // ...
    stream_protocol::acceptor acceptor(...);
    acceptor.async_accept(..., bind(accept_callback, ...));
} // acceptor destroyed, and accept_callback likely cancelled
To resolve this, consider extending the lifetime of the acceptor by making it a data member of DomainListener. Additionally, checking the error_code provided to asynchronous operations can give more insight into the asynchronous call chains.
Here is a complete minimal example demonstrating using domain sockets with Asio.
#include <array>
#include <cstdio>
#include <iostream>
#include <boost/asio.hpp>
#include <boost/bind.hpp>
/// @brief server demonstrates using domain sockets to accept
/// and read from a connection.
class server
{
public:
    server(
        boost::asio::io_service& io_service,
        const std::string& file)
        : io_service_(io_service),
          acceptor_(io_service_,
                    boost::asio::local::stream_protocol::endpoint(file)),
          client_(io_service_)
    {
        std::cout << "start accepting connection" << std::endl;
        acceptor_.async_accept(client_,
            boost::bind(&server::handle_accept, this,
                        boost::asio::placeholders::error));
    }

private:
    void handle_accept(const boost::system::error_code& error)
    {
        std::cout << "handle_accept: " << error.message() << std::endl;
        if (error) return;

        std::cout << "start reading" << std::endl;
        client_.async_read_some(boost::asio::buffer(buffer_),
            boost::bind(&server::handle_read, this,
                        boost::asio::placeholders::error,
                        boost::asio::placeholders::bytes_transferred));
    }

    void handle_read(
        const boost::system::error_code& error,
        std::size_t bytes_transferred)
    {
        std::cout << "handle_read: " << error.message() << std::endl;
        if (error) return;

        std::cout << "read: ";
        std::cout.write(buffer_.data(), bytes_transferred);
        std::cout.flush();
    }

private:
    boost::asio::io_service& io_service_;
    boost::asio::local::stream_protocol::acceptor acceptor_;
    boost::asio::local::stream_protocol::socket client_;
    std::array<char, 1024> buffer_;
};
int main(int argc, char* argv[])
{
    if (argc != 2)
    {
        std::cerr << "Usage: <file>\n";
        return 1;
    }

    // Remove file on startup and exit.
    std::string file(argv[1]);
    struct file_remover
    {
        file_remover(std::string file): file_(file) { std::remove(file.c_str()); }
        ~file_remover() { std::remove(file_.c_str()); }
        std::string file_;
    } remover(file);

    // Create and run the server.
    boost::asio::io_service io_service;
    server s(io_service, file);
    io_service.run();
}
Coliru does not have socat installed, so the following commands use OpenBSD netcat to write "asio domain socket example" to the domain socket:
export SOCKFILE=$PWD/example.sock
./a.out $SOCKFILE &
sleep 1
echo "asio domain socket example" | nc -U $SOCKFILE
Which outputs:
start accepting connection
handle_accept: Success
start reading
handle_read: Success
read: asio domain socket example