I am developing a Qt 6 Widgets-based UDP audio application that repeatedly sends a single UDP audio frame (a 4K-byte sine-wave tone) to a remote UDP echo server at a predetermined rate (right now the echo server is hosted locally).
The UDP echo server is based on the asynchronous UDP echo server sample written by the asio author (not me). It is shown below, slightly modified to use a hard-coded 4K block for testing purposes. The application is launched with a port parameter of 1234, so it listens on port 1234 for the incoming audio packets that it echoes back to the client.
//
// async_udp_echo_server.cpp
// ~~~~~~~~~~~~~~~~~~~~~~~~~
//
// Copyright (c) 2003-2022 Christopher M. Kohlhoff (chris at kohlhoff dot com)
//
// Distributed under the Boost Software License, Version 1.0. (See accompanying
// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
//
#include <cstdlib>
#include <iostream>
#include <asio/ts/buffer.hpp>
#include <asio/ts/internet.hpp>
using asio::ip::udp;
class server {
public:
server(asio::io_context& io_context, short port)
: socket_(io_context, udp::endpoint(udp::v4(), port)) {
do_receive();
}
void do_receive() {
socket_.async_receive_from(
asio::buffer(data_, max_length), sender_endpoint_,
[this](std::error_code ec, std::size_t bytes_recvd) {
if (!ec && bytes_recvd > 0) {
do_send(bytes_recvd);
} else {
do_receive();
}
});
}
void do_send(std::size_t length) {
socket_.async_send_to(
asio::buffer(data_, length), sender_endpoint_,
[this](std::error_code /*ec*/, std::size_t /*bytes_sent*/) {
do_receive();
});
}
private:
udp::socket socket_;
udp::endpoint sender_endpoint_;
enum { max_length = 4096 };
char data_[max_length]{};
};
int main(int argc, char* argv[]) {
try {
if (argc != 2) {
std::cerr << "Usage: async_udp_echo_server <port>\n";
return 1;
}
asio::io_context io_context;
server s(io_context, std::atoi(argv[1]));
io_context.run();
} catch (std::exception& e) {
std::cerr << "Exception: " << e.what() << "\n";
}
return 0;
}
I currently have this working successfully in the client as a stand-alone asio worker thread. However, since I need to graphically display the returned audio packets, I cannot keep the stand-alone asio thread approach; I need to use Qt with its signals/slots async machinery instead.
For the purposes of illustration, I also include my working asio client code that runs in a separate joinable thread. This client thread uses an asio::steady_timer that repeatedly fires an asynchronous 4K UDP packet at the echo server. The code also successfully compares the echoed-back contents against the outgoing audio sample.
void
RTPClient::start() {
mpSendEndpoint = std::make_unique<ip::udp::endpoint>(
ip::address::from_string(mConfig.mHostName),
mConfig.mPortNum);
mpSocket = std::make_unique<ip::udp::socket>(
mIOContext, mpSendEndpoint->protocol());
mpSocketTimer = std::make_unique<steady_timer>(
mIOContext);
mWorker = std::thread([this]() {
mIOContext.run();
});
if (!mShutdownFlag) {
// kick off the async chain by immediate timeout
mpSocketTimer->expires_after(std::chrono::seconds(0));
mpSocketTimer->async_wait([this]<typename T0>(T0&& ec) {
handle_timeout(std::forward<T0>(ec));
});
}
}
void
RTPClient::handle_timeout(const error_code& ec)
{
if (!ec && !mShutdownFlag) {
if (!mpAudioOutput) {
// check to see if there is new audio test data waiting in queue
if (const auto audioData = mIPCQueue->try_pop(); audioData) {
// new audio waiting, copy the data to mpAudioTXData and allocate an identically
// sized receive buffer to receive the echo replies from the server
mpAudioInput = std::make_unique<AudioDatagram>(audioData->first.size());
mpAudioOutput = std::make_unique<AudioDatagram>(std::move(audioData->first));
mAudioBlockUSecs = audioData->second;
} else {
mpSocketTimer->expires_after(seconds(1));
mpSocketTimer->async_wait([this]<typename T0>(T0&& ec) {
handle_timeout(std::forward<T0>(ec));
});
// nothing to send as waveform data not received from GUI.
// short circuit return with a 1 sec poll
return;
}
}
// buffer the datagram's contents, not the owning object itself
// (assumes AudioDatagram is a contiguous container with data()/size())
mpSocket->async_send_to(asio::buffer(
mpAudioOutput->data(), mpAudioOutput->size()),
*mpSendEndpoint, [this]<typename T0, typename T1>(T0&& ec, T1&& bytes_transferred) {
handle_send_to(std::forward<T0>(ec), std::forward<T1>(bytes_transferred));
});
}
}
void
RTPClient::handle_send_to(const error_code& ec, std::size_t bytes_transferred) {
if (!ec && bytes_transferred > 0 && !mShutdownFlag) {
mpSocketTimer->expires_after(microseconds(mAudioBlockUSecs));
mpSocketTimer->async_wait([this]<typename T0>(T0&& ec) {
handle_timeout(std::forward<T0>(ec));
});
mpSocket->async_receive_from(asio::buffer(
mpAudioInput->data(), mpAudioInput->size()), *mpSendEndpoint,
[this]<typename T0, typename T1>(T0&& ec, T1&& bytes_transferred) {
handle_receive(std::forward<T0>(ec), std::forward<T1>(bytes_transferred));
});
}
}
void
RTPClient::handle_receive(const error_code& ec, std::size_t bytes_transferred) {
if (!ec && bytes_transferred > 0) {
double foo = 0.0; // checksum of the transmitted block
for (const auto next : *mpAudioOutput) {
foo += (double)next;
}
double bar = 0.0; // checksum of the echoed block
for (const auto next : *mpAudioInput) {
bar += (double)next;
}
if (foo != bar)
{
// breakpoint anchor: the echo does not match what was sent
auto baz = 0;
(void)baz;
}
}
}
/**
* Shutdown the protocol instance by shutting down the IPC
* queue and closing the socket and associated timers etc.
*
* <p>This is achieved by setting a flag which is read by the
* busy loop as an exit condition.
*/
void
RTPClient::shutdown() {
// set the shared shutdown flag
mShutdownFlag = true;
// wake up any locked threads so they can see the above flag
if (mIPCQueue) {
mIPCQueue->shutdown();
}
// stop the socket timer - do not reset it
// as there are some time sensitive parts in the code
// where mpSocketTimer is dereferenced
if (mpSocketTimer) {
mpSocketTimer->cancel();
}
std::error_code ignoredError;
// close the socket if we created & opened it, making
// sure that we close down both ends of the socket.
if (mpSocket && mpSocket->is_open()) {
mpSocket->shutdown(ip::udp::socket::shutdown_both, ignoredError);
// reset so we will reallocate and then reopen
// via boost::async_connect(...) later.
mpSocket.reset();
}
// wait for any other detached threads to see mShutdownFlag,
// as it is running in a detached mWorkerThread which sleeps
// for 50 ms between CDU key polling requests.
std::this_thread::sleep_for(milliseconds(200));
}
I need to replace this separate asio client thread with QUdpSocket-based client code that does the equivalent, as I need to use signals/slots to notify the GUI when the blocks arrive and to display the returned waveform in a widget. To this end I have the following Qt worker thread. I can see that the asio echo server receives the datagram; however, I do not know how to receive the echoed contents back in the client. Is there some bind or connect call that I need to make on the client side? I am totally confused about when to call bind and when to call connect on UDP sockets.
// SYSTEM INCLUDES
//#include <..>
// APPLICATION INCLUDES
#include "RTPSession.h"
// DEFINES
// EXTERNAL FUNCTIONS
// EXTERNAL VARIABLES
// CONSTANTS
// STRUCTS
// FUNCTIONS
// NAMESPACE USAGE
using namespace std::chrono;
// STATIC VARIABLE INITIALIZATIONS
std::mutex RTPSession::gMutexGuard;
RTPSession::RTPSession(QObject* parent)
: QObject(parent)
, mpSocket{ std::make_unique<QUdpSocket>(parent) }
{
mpSocket->bind(45454, QUdpSocket::DefaultForPlatform);
connect(mpSocket.get(), &QUdpSocket::readyRead,
this, &RTPSession::processPendingDatagrams);
}
/**
* Thread function that listens for RTP session updates.
*
* <p>The implementation polls for shutdown every second.
*
* @param rRTPInfo [in] Qt thread parameters.
*/
void
RTPSession::doWork(
const std::tuple<int32_t, int32_t, int32_t>& /*rRTPInfo*/)
{
try {
// just dispatched, so reset exit flag
mExitWorkLoop = false;
int frameCounter = 0;
while (!mExitWorkLoop) {
constexpr auto gPollMillis = 1000;
// poll using shortest (non zero) interval in schedule
std::unique_lock<std::mutex> lk(gMutexGuard);
mCondVariable.wait_for(lk, milliseconds(gPollMillis),
[this] { return mExitWorkLoop; });
QByteArray datagram = "Broadcast message " + QByteArray::number(frameCounter++);
mpSocket->writeDatagram(datagram.data(), datagram.size(),
QHostAddress::LocalHost, 1234);
if (mpSocket->hasPendingDatagrams()) {
//mpSocket->readDatagram()
int t = 0;
(void)t;
}
// update GUI with the audio stats - add more later
emit updateProgress(frameCounter++);
}
} catch (const std::exception& rEx) {
// exit thread with the exception details
emit finishWork(tr("exiting worker, error:") + rEx.what());
}
// exit thread with status bar message
emit finishWork(tr("finished"));
}
void
RTPSession::shutdown()
{
// Critical section.
std::scoped_lock<std::mutex> lock(gMutexGuard);
mExitWorkLoop = true;
// Notify the potentially sleeping thread that is
// waiting for up to 1 second
mCondVariable.notify_one();
}
void
RTPSession::processPendingDatagrams() {
QByteArray datagram;
while (mpSocket->hasPendingDatagrams()) {
datagram.resize(int(mpSocket->pendingDatagramSize()));
mpSocket->readDatagram(datagram.data(), datagram.size());
//statusLabel->setText(tr("Received datagram: \"%1\"")
// .arg(datagram.constData()));
}
}
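For reference on the bind/connect confusion: a UDP request/reply client like this does not strictly need either call. The first writeDatagram() implicitly binds the socket to an ephemeral local port, and the server's echo comes back to that same port on the same socket; an explicit bind() is only needed to receive on a well-known local port, while connectToHost() merely fixes a default peer so plain read()/write() can be used. Note also that readyRead is emitted by the event loop of the thread that owns the socket, so a blocking loop like doWork() above can starve it. A minimal sketch of the usual pattern (the class, slot, and signal names are illustrative, not from the project):
#include <QObject>
#include <QUdpSocket>
#include <QHostAddress>

class EchoClient : public QObject {
    Q_OBJECT
public:
    explicit EchoClient(QObject* parent = nullptr)
        : QObject(parent), mSocket(new QUdpSocket(this)) {
        // No explicit bind(): the first writeDatagram() binds an ephemeral port.
        connect(mSocket, &QUdpSocket::readyRead,
                this, &EchoClient::onReadyRead);
    }
    void sendBlock(const QByteArray& block) {
        mSocket->writeDatagram(block, QHostAddress::LocalHost, 1234);
    }
signals:
    void blockEchoed(const QByteArray& block); // feed the waveform widget
private slots:
    void onReadyRead() {
        while (mSocket->hasPendingDatagrams()) {
            QByteArray datagram;
            datagram.resize(int(mSocket->pendingDatagramSize()));
            mSocket->readDatagram(datagram.data(), datagram.size());
            emit blockEchoed(datagram); // delivered via the owning thread's event loop
        }
    }
private:
    QUdpSocket* mSocket;
};
One design note: with this shape, the blocking worker loop becomes unnecessary; a QTimer living in the same thread can drive the periodic sends so the event loop keeps spinning and readyRead keeps firing.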
UPDATE:
Well, it appears that I need to address my issue with an asynchronous implementation. I will update my posting with a new direction once I've completed testing.
Original:
I'm currently writing a multiserver application that will collect, share, and request information from multiple machines. In some cases, Machine A will request information from Machine B but will need to send it to Machine C, which will reply to A. Without getting too deep into what the application is going to do, I need some help with my client application.
I have my client application designed with two threads. I used this example from Boost as the basis for my design.
Thread one will open a client websocket with Machine-A; it will stream a series of data points and commands. Here is a stripped-down version of my code:
#include "Poco/Clock.h"
#include "Poco/Task.h"
#include "Poco/Thread.h"
#include <boost/asio.hpp>
#include <boost/beast.hpp>
#include <jsoncons/json.hpp>
namespace beast = boost::beast; // from <boost/beast.hpp>
namespace http = beast::http; // from <boost/beast/http.hpp>
namespace websocket = beast::websocket; // from <boost/beast/websocket.hpp>
namespace net = boost::asio; // from <boost/asio.hpp>
using tcp = net::ip::tcp; // from <boost/asio/ip/tcp.hpp>
class ResponseChannel : public Poco::Runnable {
void do_session(tcp::socket socket)
{
try {
websocket::stream<tcp::socket> ws{std::move(socket)};
ws.set_option(websocket::stream_base::decorator(
[](websocket::response_type& res) {
res.set(http::field::server,
std::string(BOOST_BEAST_VERSION_STRING) +
" websocket-server-sync");
}));
ws.accept();
for (;;) {
beast::flat_buffer buffer;
ws.read(buffer);
if (ws.got_binary()) {
// do something
}
}
} catch (beast::system_error const& se) {
if (se.code() != websocket::error::closed) {
std::cerr << "do_session1 ->: " << se.code().message()
<< std::endl;
return;
}
} catch (std::exception const& e) {
std::cerr << "do_session2 ->: " << e.what() << std::endl;
return;
}
}
virtual void run()
{
auto const address = net::ip::make_address(host);
auto const port = static_cast<unsigned short>(respPort);
try {
net::io_context ioc{1};
tcp::acceptor acceptor{ioc, {address, port}};
tcp::socket socket{ioc};
for (; keep_running;) {
acceptor.accept(socket);
std::thread(&ResponseChannel::do_session, this,
std::move(socket))
.detach();
}
} catch (const std::exception& e) {
std::cout << "run: " << e.what() << std::endl;
}
}
void _terminate() { keep_running = false; }
public:
std::string host;
int respPort;
bool keep_running = true;
int responseCount = 0;
std::vector<long long int> latency_times;
long long int time_sum;
Poco::Clock* responseClock;
};
int main()
{
using namespace std::chrono_literals;
Poco::Clock clock = Poco::Clock();
Poco::Thread response_thread;
ResponseChannel response_channel;
response_channel.responseClock = &clock;
response_channel.host = "0.0.0.0";
response_channel.respPort = 8080;
response_thread.start(response_channel);
response_thread.setPriority(Poco::Thread::Priority::PRIO_HIGH);
// doing some work here. work will vary depending on command-line arguments
std::this_thread::sleep_for(30s);
response_channel.keep_running = false;
response_thread.join();
}
The multi-machine design works as expected with regard to sending commands to Machine-B and receiving results from Machine-C.
The issue I'm facing is closing out Thread 2, which contains my local response channel.
I went back and forth between Poco::Thread and Poco::Task, but I decided that I do not want to use Task, as it would be a mistake to be able to close the second thread/task from the main thread. I need to know that all packets have been received before closing down the second thread.
So I need to close things down only once I have received a websocket::error::closed flag from Machine-C. Shutting down the detached websocket thread is no issue, as the arrival of that flag takes care of it for me.
However, as part of the loop that reconnects after a closed socket, the thread just waits for a new connection.
acceptor.accept(socket);
It's blocking, and from the documentation there doesn't seem to be a timeout feature. I see that there is a close option, but my attempt to use close simply threw an exception, which ultimately added complexity I didn't want.
Ultimately, I want the server to continuously loop through a series of connections from both Machine-B and Machine-C, and to stop only after my client application has ended. The last thing I do before waiting for the Poco::Thread to complete is to set the flag that I no longer want the websocket server to run.
I've put a check of that flag before the blocking accept() call. This only works with perfect timing: after the flag goes up, a new connection has to be opened and then closed before the loop gets back to the check instead of waiting for a new connection.
Ideally, there would be a timeout so that the loop would come around periodically, first checking whether it timed out, allowing me to decide whether the thread should remain open.
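A sketch of the kind of loop I am after, switching the blocking accept() to async_accept so that keep_running can be checked periodically (it reuses the names from my code above and is untested):
#include <chrono>
#include <functional>

// replaces ResponseChannel::run
virtual void run()
{
    try {
        net::io_context ioc{1};
        tcp::acceptor acceptor{ioc, {net::ip::make_address(host),
                                     static_cast<unsigned short>(respPort)}};
        std::function<void()> do_accept = [&] {
            acceptor.async_accept(
                [&](boost::system::error_code ec, tcp::socket socket) {
                    if (!ec)
                        std::thread(&ResponseChannel::do_session, this,
                                    std::move(socket)).detach();
                    if (keep_running)
                        do_accept(); // re-arm for the next connection
                });
        };
        do_accept();
        // Wake every 250 ms to re-check the flag instead of blocking forever.
        while (keep_running)
            ioc.run_for(std::chrono::milliseconds(250));
    } catch (const std::exception& e) {
        std::cout << "run: " << e.what() << std::endl;
    }
}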
Has anyone ever run into this?
I'm using a strand to avoid concurrent writes on a TCP server using Boost.Asio. But it seems it only prevents concurrent execution of handlers.
Indeed, if I do two successive async_write calls, one with a very big packet and the other with a very small one, Wireshark shows interleaving. As async_write is composed of multiple calls to async_write_some, it seems that the intermediate handlers of my second write are allowed to execute between two intermediate handlers of the first call, which is very bad for me.
Wireshark output: [Packet 1.1] [Packet 1.2] [Packet 2] [Packet 1.3] ... [Packet 1.x]
struct Command
{
// Header
uint64_t ticket_id; // UUID
uint32_t data_size; // size of data
// data
std::vector<unsigned char> m_internal_buffer;
};
typedef std::shared_ptr<Command> command_type;
void tcp_server::write(command_type cmd)
{
boost::asio::async_write(m_socket, boost::asio::buffer(cmd->getData(), cmd->getTotalPacketSize()),
boost::asio::bind_executor(m_write_strand,
[this, cmd](const boost::system::error_code& error, std::size_t bytes_transferred)
{
if (error)
{
// report
}
}
)
);
}
and the main:
int main()
{
tcp_server.write(big_packet); // Packet 1 = 10 MBytes !
tcp_server.write(small_packet); // Packet 2 = 64 kbytes
}
Is the strand not appropriate in my case?
P.S.: I saw that close topic here, but it does not cover the same use case in my opinion.
You have to make sure your async operation is initiated from the strand. Your code currently doesn't show this to be the case. Hopefully this helps; otherwise, post an MCVE.
So e.g.
void tcp_server::write(command_type cmd)
{
post(m_write_strand, [this, cmd] { this->do_write(cmd); });
}
Making up a MCVE from your question code:
Live On Coliru
#include <boost/asio.hpp>
using boost::asio::ip::tcp;
using Executor = boost::asio::thread_pool::executor_type;
struct command {
char const* getData() const { return ""; }
size_t getTotalPacketSize() const { return 1; }
};
using command_type = command*;
struct tcp_server {
tcp_server(Executor ex) : m_socket(ex), m_write_strand(ex)
{
// more?
}
void write(command_type cmd);
void do_write(command_type cmd);
tcp::socket m_socket;
boost::asio::strand<Executor> m_write_strand;
};
void tcp_server::write(command_type cmd)
{
post(m_write_strand, [this, cmd] { this->do_write(cmd); });
}
void tcp_server::do_write(command_type cmd)
{
boost::asio::async_write(
m_socket,
boost::asio::buffer(cmd->getData(), cmd->getTotalPacketSize()),
bind_executor(m_write_strand,
[/*this, cmd*/](boost::system::error_code error,
size_t bytes_transferred) {
if (error) {
// report
}
}));
}
int main() {
boost::asio::thread_pool ioc;
tcp_server tcp_server(ioc.get_executor());
command_type big_packet{}, small_packet{};
tcp_server.write(big_packet); // Packet 1 = 10 MBytes !
tcp_server.write(small_packet); // Packet 2 = 64 kbytes
ioc.join();
}
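One complementary point, plainly a different technique from the post() above: even on a strand, two outstanding composed async_write operations can interleave their underlying async_write_some chunks, since Asio requires that no other write be started on the stream until an outstanding async_write completes. The usual remedy is an outgoing queue so only one async_write is in flight at a time. A rough sketch, assuming an added std::deque<command_type> m_queue member (#include <deque>) and a hypothetical write_next() helper; everything below runs on the strand:
void tcp_server::do_write(command_type cmd)
{
    m_queue.push_back(cmd);  // safe: do_write() is only entered via the strand
    if (m_queue.size() == 1) // no write in flight, start one
        write_next();
}

void tcp_server::write_next()
{
    command_type cmd = m_queue.front();
    boost::asio::async_write(
        m_socket,
        boost::asio::buffer(cmd->getData(), cmd->getTotalPacketSize()),
        bind_executor(m_write_strand,
            [this, cmd](boost::system::error_code error, size_t /*n*/) {
                m_queue.pop_front();
                if (!error && !m_queue.empty())
                    write_next(); // chain the next queued write
            }));
}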
I'm using OpenSSL 1.1.1b and Boost 1.68 to create a simple server using HTTPS.
I followed the examples provided by Boost.Beast, in particular the advanced-server-flex example.
The application seems to work properly: I can accept HTTPS sessions and also WSS sessions.
The problem is that when I exit the application, Visual Leak Detector finds 16 memory leaks that point at:
c:\openssl-1.1.1b\crypto\mem.c (233): abc.exe!CRYPTO_zalloc
c:\openssl-1.1.1b\crypto\err\err.c (716): abc.exe!ERR_get_state + 0x17 bytes
c:\openssl-1.1.1b\crypto\err\err.c (443): abc.exe!ERR_clear_error + 0x5 bytes
c:\usr\work\abc_repo\ext\boost_1_68_0\boost\asio\ssl\detail\impl\engine.ipp (235): abc.exe!boost::asio::ssl::detail::engine::perform
c:\usr\work\abc_repo\ext\boost_1_68_0\boost\asio\ssl\detail\impl\engine.ipp (137): abc.exe!boost::asio::ssl::detail::engine::handshake
I modified the pattern of the HTTP session from the original Boost.Beast code, but it should do exactly the same things.
I've tried to determine whether the memory leaks increase with the number of connections, but they seem not to. I don't understand how to get rid of this problem.
The code I used follows.
First, a base HTTP session class:
class CApplicationServerBaseHttpSession
{
public:
std::shared_ptr<CApplicationServerSharedState> m_state = nullptr;
CApplicationServerHttpQueue m_queue;
// The parser is stored in an optional container so we can
// construct it from scratch at the beginning of each new message.
boost::optional<boost::beast::http::request_parser<boost::beast::http::string_body>> parser_;
protected:
boost::asio::steady_timer m_timer;
boost::beast::flat_buffer buffer_;
boost::log::sources::severity_channel_logger<boost::log::trivial::severity_level> m_Logger{boost::log::keywords::channel = LOG_APPLICATION_SERVER_CHANNEL_ID};
boost::asio::strand<boost::asio::io_context::executor_type> m_strand;
public:
// Construct the session
CApplicationServerBaseHttpSession(
boost::asio::io_context& ioc,
boost::beast::flat_buffer buffer,
std::shared_ptr<CApplicationServerSharedState> const& state)
: m_state(state)
, m_strand(ioc.get_executor())
, m_timer(ioc,
(std::chrono::steady_clock::time_point::max)()
)
, m_queue(*this)
, buffer_(std::move(buffer))
{
}
void DoRead();
void OnRead(boost::system::error_code ec);
void OnWrite(boost::system::error_code ec, bool close);
virtual void WriteRequestStringBody(boost::beast::http::response<boost::beast::http::string_body> & msg) = 0;
virtual void WriteRequestFileBody(boost::beast::http::response<boost::beast::http::file_body> & msg) = 0;
protected:
virtual void ReadRequest() = 0;
virtual void DoEof() = 0;
virtual std::string GetRemoteAddress() = 0;
virtual void MakeWebSocketSession(boost::beast::http::request<boost::beast::http::string_body> req) = 0;
};
Here is the implementation:
void CApplicationServerBaseHttpSession::DoRead()
{
// Set the timer
m_timer.expires_after(std::chrono::seconds(OCV_HTTP_SESSION_TIMER_EXPIRE_AFTER));
// Construct a new parser for each message
parser_.emplace();
// Apply a reasonable limit to the allowed size
// of the body in bytes to prevent abuse.
parser_->body_limit(HTTP_BODY_LIMIT);
this->ReadRequest();
}
void CApplicationServerBaseHttpSession::OnRead(boost::system::error_code ec)
{
// Happens when the timer closes the socket
if(ec == boost::asio::error::operation_aborted)
return;
// This means they closed the connection
if(ec == http::error::end_of_stream)
return this->DoEof();
if(ec == boost::asio::ssl::error::stream_truncated){
// "stream truncated" means that the other end closed the connection abruptly.
return warning(ec, "Http read", m_Logger);
}
if(ec)
return fail(ec, "Http read", m_Logger);
// See if it is a WebSocket Upgrade
if(websocket::is_upgrade(parser_->get())) {
// Get a websocket request handler to execute operation as authentication and authorization
// If these steps are allowed than the websocket session will be started
std::shared_ptr<CApplicationServerWsApiBase> endpointWs = m_state->GetEndpointWs(parser_->get().target().to_string());
if(endpointWs) {
int endpointErrorDefault = endpointWs->HandleRequest(parser_->get());
if(endpointErrorDefault > 0) { // Success Auth
// Make timer expire immediately, by setting expiry to time_point::min we can detect
// the upgrade to websocket in the timer handler
m_timer.expires_at((std::chrono::steady_clock::time_point::min)());
// Transfer the stream to a new WebSocket session
return MakeWebSocketSession(parser_->release());
} else {
// Authentication or Authorization failed
m_queue(endpointWs->GetResponseError(parser_->get(), endpointErrorDefault));
return;
}
} else {
// Wrong endpoint called: BadRequest
std::shared_ptr<CApplicationServerApiBase> endpoint = m_state->GetEndpoint(ApiURI::REQUEST_NOT_IMPLEMENTED);
if(endpoint) {
endpoint->HandleRequest(m_state->GetDocRoot(), parser_->release(), m_queue);
}
return;
}
}
BOOST_LOG_SEV(m_Logger, boost::log::trivial::trace) <<
"Request From: " <<
this->GetRemoteAddress() <<
" Request Target: " <<
parser_->get().target().to_string();
std::shared_ptr<CApplicationServerApiBase> endpoint = m_state->GetEndpoint(parser_->get().target().to_string());
if(endpoint) {
endpoint->HandleRequest(m_state->GetDocRoot(), parser_->release(), m_queue);
}
// If we aren't at the queue limit, try to pipeline another request
if(!m_queue.IsFull()) {
DoRead();
}
}
void CApplicationServerBaseHttpSession::OnWrite(boost::system::error_code ec, bool close)
{
// Happens when the timer closes the socket
if(ec == boost::asio::error::operation_aborted)
return;
if(ec)
return fail(ec, "write", m_Logger);
if(close) {
// This means we should close the connection, usually because
// the response indicated the "Connection: close" semantic.
return this->DoEof();
}
// Inform the queue that a write completed
if(m_queue.OnWrite()) {
// Read another request
DoRead();
}
}
The https session:
class COcvApplicationServerHttpSessionSSL
: public std::enable_shared_from_this<COcvApplicationServerHttpSessionSSL>
, public CApplicationServerBaseHttpSession
{
public:
COcvApplicationServerHttpSessionSSL(boost::asio::ip::tcp::socket&& socket,boost::asio::ssl::context& ctx, boost::beast::flat_buffer&& buffer, std::shared_ptr<CApplicationServerSharedState> const& state);
~COcvApplicationServerHttpSessionSSL();
// Called by the base class
boost::beast::ssl_stream<boost::asio::ip::tcp::socket>& Stream();
boost::beast::ssl_stream<boost::asio::ip::tcp::socket> ReleaseStream();
void DoTimeout();
// Start the asynchronous operation
void Run();
void OnHandshake(boost::system::error_code ec, std::size_t bytes_used);
void OnShutdown(boost::system::error_code ec);
void OnTimer(boost::system::error_code ec);
private:
public:
boost::beast::ssl_stream<boost::asio::ip::tcp::socket> m_stream;
bool m_eof = false;
protected:
// Inherited via COcvApplicationServerBaseHttpSession
virtual void ReadRequest() override;
virtual void WriteRequestStringBody(boost::beast::http::response<boost::beast::http::string_body> & msg) override;
virtual void WriteRequestFileBody(boost::beast::http::response<boost::beast::http::file_body> & msg) override;
virtual void DoEof() override;
virtual std::string GetRemoteAddress() override;
virtual void MakeWebSocketSession(boost::beast::http::request<boost::beast::http::string_body> req) override;
};
And finally, the implementation:
COcvApplicationServerHttpSessionSSL::COcvApplicationServerHttpSessionSSL(tcp::socket&& socket, ssl::context & ctx, beast::flat_buffer&& buffer, std::shared_ptr<CApplicationServerSharedState> const & state)
: CApplicationServerBaseHttpSession(
socket.get_executor().context(),
std::move(buffer),
state)
, m_stream(std::move(socket), ctx)
{
}
COcvApplicationServerHttpSessionSSL::~COcvApplicationServerHttpSessionSSL()
{
}
beast::ssl_stream<tcp::socket> & COcvApplicationServerHttpSessionSSL::Stream()
{
return m_stream;
}
beast::ssl_stream<tcp::socket> COcvApplicationServerHttpSessionSSL::ReleaseStream()
{
return std::move(m_stream);
}
void COcvApplicationServerHttpSessionSSL::DoTimeout()
{
// If this is true it means we timed out performing the shutdown
if(m_eof)
return;
// Start the timer again
m_timer.expires_at(
(std::chrono::steady_clock::time_point::max)());
OnTimer({});
DoEof();
}
std::string COcvApplicationServerHttpSessionSSL::GetRemoteAddress()
{
return Stream().next_layer().remote_endpoint().address().to_string();
}
void COcvApplicationServerHttpSessionSSL::MakeWebSocketSession(boost::beast::http::request<boost::beast::http::string_body> req)
{
std::make_shared<CApplicationServerWebSocketSessionSSL>(
std::move(m_stream), m_state)->Run(std::move(req));
}
void COcvApplicationServerHttpSessionSSL::Run()
{
// Make sure we run on the strand
if(!m_strand.running_in_this_thread())
return boost::asio::post(
boost::asio::bind_executor(
m_strand,
std::bind(
&COcvApplicationServerHttpSessionSSL::Run,
shared_from_this())));
// Run the timer. The timer is operated
// continuously, this simplifies the code.
OnTimer({});
// Set the timer
m_timer.expires_after(std::chrono::seconds(OCV_HTTP_SESSION_TIMER_EXPIRE_AFTER));
// Perform the SSL handshake
// Note, this is the buffered version of the handshake.
m_stream.async_handshake(
ssl::stream_base::server,
buffer_.data(),
boost::asio::bind_executor(
m_strand,
std::bind(
&COcvApplicationServerHttpSessionSSL::OnHandshake,
shared_from_this(),
std::placeholders::_1,
std::placeholders::_2)));
}
void COcvApplicationServerHttpSessionSSL::OnHandshake(boost::system::error_code ec, std::size_t bytes_used)
{
// Happens when the handshake times out
if(ec == boost::asio::error::operation_aborted)
return;
if(ec)
return fail(ec, "handshake", m_Logger);
// Consume the portion of the buffer used by the handshake
buffer_.consume(bytes_used);
DoRead();
}
void COcvApplicationServerHttpSessionSSL::OnShutdown(boost::system::error_code ec)
{
// Happens when the shutdown times out
if(ec == boost::asio::error::operation_aborted || ec == boost::asio::ssl::error::stream_truncated)
return;
if(ec)
return fail(ec, "shutdown HTTPS", m_Logger);
// At this point the connection is closed gracefully
}
void COcvApplicationServerHttpSessionSSL::OnTimer(boost::system::error_code ec)
{
if(ec && ec != boost::asio::error::operation_aborted)
return fail(ec, "timer", m_Logger);
// Check if this has been upgraded to Websocket
if(m_timer.expires_at() == (std::chrono::steady_clock::time_point::min)())
return;
// Verify that the timer really expired since the deadline may have moved.
if(m_timer.expiry() <= std::chrono::steady_clock::now())
return DoTimeout();
// Wait on the timer
m_timer.async_wait(
boost::asio::bind_executor(
m_strand,
std::bind(
&COcvApplicationServerHttpSessionSSL::OnTimer,
shared_from_this(),
std::placeholders::_1)));
}
void COcvApplicationServerHttpSessionSSL::ReadRequest()
{
// Read a request
http::async_read(
Stream(),
buffer_,
*parser_,
boost::asio::bind_executor(
m_strand,
std::bind(
&CApplicationServerBaseHttpSession::OnRead,
shared_from_this(),
std::placeholders::_1)));
}
void COcvApplicationServerHttpSessionSSL::WriteRequestStringBody(boost::beast::http::response<boost::beast::http::string_body> & msg)
{
boost::beast::http::async_write(
Stream(),
msg,
boost::asio::bind_executor(
m_strand,
std::bind(
&CApplicationServerBaseHttpSession::OnWrite,
shared_from_this(),
std::placeholders::_1,
msg.need_eof()
)
)
);
}
void COcvApplicationServerHttpSessionSSL::WriteRequestFileBody(boost::beast::http::response<boost::beast::http::file_body> & msg)
{
boost::beast::http::async_write(
Stream(),
msg,
boost::asio::bind_executor(
m_strand,
std::bind(
&CApplicationServerBaseHttpSession::OnWrite,
shared_from_this(),
std::placeholders::_1,
msg.need_eof()
)
)
);
}
void COcvApplicationServerHttpSessionSSL::DoEof()
{
m_eof = true;
// Set the timer
m_timer.expires_after(std::chrono::seconds(OCV_HTTP_SESSION_TIMER_EXPIRE_DO_EOF));
// Perform the SSL shutdown
m_stream.async_shutdown(
boost::asio::bind_executor(
m_strand,
std::bind(
&COcvApplicationServerHttpSessionSSL::OnShutdown,
shared_from_this(),
std::placeholders::_1)));
}
The Visual Leak Detector gives me the following:
c:\openssl-1.1.1b\crypto\mem.c (233): abc.exe!CRYPTO_zalloc
c:\openssl-1.1.1b\crypto\err\err.c (716): abc.exe!ERR_get_state + 0x17 bytes
c:\openssl-1.1.1b\crypto\err\err.c (443): abc.exe!ERR_clear_error + 0x5 bytes
c:\usr\work\abc_repo\ext\boost_1_68_0\boost\asio\ssl\detail\impl\engine.ipp (235): abc.exe!boost::asio::ssl::detail::engine::perform
c:\usr\work\abc_repo\ext\boost_1_68_0\boost\asio\ssl\detail\impl\engine.ipp (137): abc.exe!boost::asio::ssl::detail::engine::handshake
c:\usr\work\abc_repo\ext\boost_1_68_0\boost\asio\ssl\detail\buffered_handshake_op.hpp (70): abc.exe!boost::asio::ssl::detail::buffered_handshake_op<boost::asio::const_buffer>::process<boost::asio::const_buffer const * __ptr64> + 0x1F bytes
c:\usr\work\abc_repo\ext\boost_1_68_0\boost\asio\ssl\detail\buffered_handshake_op.hpp (48): abc.exe!boost::asio::ssl::detail::buffered_handshake_op<boost::asio::const_buffer>::operator()
c:\usr\work\abc_repo\ext\boost_1_68_0\boost\asio\ssl\detail\io.hpp (136): abc.exe!boost::asio::ssl::detail::io_op<boost::asio::basic_stream_socket<boost::asio::ip::tcp>,boost::asio::ssl::detail::buffered_handshake_op<boost::asio::const_buffer>,boost::asio::executor_binder<std::_Binder<std::_Unforced,void (__cdecl CabcApplicationServerH + 0x50 bytes
c:\usr\work\abc_repo\ext\boost_1_68_0\boost\asio\ssl\detail\io.hpp (333): abc.exe!boost::asio::ssl::detail::async_io<boost::asio::basic_stream_socket<boost::asio::ip::tcp>,boost::asio::ssl::detail::buffered_handshake_op<boost::asio::const_buffer>,boost::asio::executor_binder<std::_Binder<std::_Unforced,void (__cdecl CabcApplicationServ + 0x87 bytes
c:\usr\work\abc_repo\ext\boost_1_68_0\boost\asio\ssl\stream.hpp (505): abc.exe!boost::asio::ssl::stream<boost::asio::basic_stream_socket<boost::asio::ip::tcp> >::async_handshake<boost::asio::const_buffer,boost::asio::executor_binder<std::_Binder<std::_Unforced,void (__cdecl CabcApplicationServerHttpSessionSSL::*)(boost::system::erro + 0x5E bytes
c:\usr\work\abc_repo\ext\boost_1_68_0\boost\beast\experimental\core\ssl_stream.hpp (485): abc.exe!boost::beast::ssl_stream<boost::asio::basic_stream_socket<boost::asio::ip::tcp> >::async_handshake<boost::asio::const_buffer,boost::asio::executor_binder<std::_Binder<std::_Unforced,void (__cdecl CabcApplicationServerHttpSessionSSL::*)(boost::system::erro
c:\usr\work\abc_repo\util\capplicationserverhttpsession.cpp (343): abc.exe!CabcApplicationServerHttpSessionSSL::Run + 0x154 bytes
In some of the Leaks I also have:
c:\usr\work\abc_repo\ext\boost_1_68_0\boost\asio\ssl\detail\impl\engine.ipp (290): abc.exe!boost::asio::ssl::detail::engine::do_accept
This of course seems related to the SSL handshake, but I checked the session shutdown and it seems OK.
Thank you in advance.
Every thread that uses async_handshake() leaks memory. I added OPENSSL_thread_stop() at the end of my thread procedure and it solved the issue.
Took it from here: https://github.com/openssl/openssl/issues/3033#issuecomment-289838302
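For reference, a minimal sketch of that fix, assuming worker threads that call io_context::run() (the ioc name here is illustrative); OPENSSL_thread_stop() is declared in <openssl/crypto.h> in OpenSSL 1.1.0 and later:
#include <openssl/crypto.h> // OPENSSL_thread_stop
#include <thread>

std::thread worker([&ioc] {
    ioc.run();
    // Free this thread's OpenSSL error-queue state (the ERR_get_state
    // allocation that Visual Leak Detector reports) before the thread exits.
    OPENSSL_thread_stop();
});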
I am following ASIO's async_tcp_echo_server.cpp example to write a server.
My server logic looks like this (.cpp part):
1. Server startup:
bool Server::Start()
{
mServerThread = std::thread(&Server::ServerThreadFunc, this, std::ref(ios));
// ios is an asio::io_service member
return true; // report that the server thread was started
}
2. Init the acceptor and listen for incoming connections:
void Server::ServerThreadFunc(io_service& service)
{
tcp::endpoint endp{ address::from_string(LOCAL_HOST),MY_PORT };
mAcceptor = acceptor_ptr(new tcp::acceptor{ service,endp });
// Add a job to start accepting connections.
StartAccept(*mAcceptor);
// Process the event loop. Hangs here until the service is terminated.
service.run();
std::cout << "Server thread exiting." << std::endl;
}
3. Accept a connection and start reading from the client:
void Server::StartAccept(tcp::acceptor& acceptor)
{
acceptor.async_accept([&](std::error_code err, tcp::socket socket)
{
if (!err)
{
std::make_shared<Connection>(std::move(socket))->StartRead(mCounter);
StartAccept(acceptor);
}
else
{
std::cerr << "Error:" << "Failed to accept new connection" << err.message() << std::endl;
return;
}
});
}
void Connection::StartRead(uint32_t frameIndex)
{
asio::async_read(mSocket, asio::buffer(&mHeader, sizeof(XHeader)), std::bind(&Connection::ReadHandler, shared_from_this(), std::placeholders::_1, std::placeholders::_2, frameIndex));
}
So the Connection instance finally triggers the ReadHandler callback, where I perform the actual read and write:
void Connection::ReadHandler(const asio::error_code& error, size_t bytes_transfered, uint32_t frameIndex)
{
if (bytes_transfered == sizeof(XHeader))
{
uint32_t reply;
if (mHeader.code == 12345)
{
reply = (uint32_t)12121;
size_t len = asio::write(mSocket, asio::buffer(&reply, sizeof(uint32_t)));
}
else
{
reply = (uint32_t)0;
size_t len = asio::write(mSocket, asio::buffer(&reply, sizeof(uint32_t)));
this->mSocket.shutdown(tcp::socket::shutdown_both);
return;
}
}
while (mSocket.is_open())
{
XPacket packet;
packet.dataSize = rt->buff.size();
packet.data = rt->buff.data();
std::vector<asio::const_buffer> buffers;
buffers.push_back(asio::buffer(&packet.dataSize,sizeof(uint64_t)));
buffers.push_back(asio::buffer(packet.data, packet.dataSize));
auto self(shared_from_this());
asio::async_write(mSocket, buffers,
[this, self](const asio::error_code error, size_t bytes_transfered)
{
if (error)
{
ERROR(200, "Error sending packet");
ERROR(200, error.message().c_str());
}
}
);
}
}
Now, here is the problem. The server receives data from the client and replies using synchronous asio::write just fine. But when it comes to asio::async_read or asio::async_write inside the while loop, the lambda callback never gets triggered unless I put io_context().run_one(); immediately after it. I don't understand why I see this behaviour. I do call io_service.run() right after the acceptor init, so it blocks there until the server exits. The only difference between my code and the asio example, as far as I can tell, is that I run my logic from a custom thread.
Your callback isn't returning, preventing the event loop from executing other handlers.
In general, if you want an asynchronous flow, you would chain callbacks e.g. callback checks is_open(), and if true calls async_write() with itself as the callback.
In either case, the callback returns.
This allows the event loop to run, calling your callback, and so on.
In short, you should make sure your asynchronous callbacks always return in a reasonable time frame.
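A rough sketch of such a chain, reworking the while loop from the question; mSendBuffer is a hypothetical member (e.g. a std::vector<unsigned char>) standing in for the rt->buff data:
void Connection::DoWrite()
{
    auto self(shared_from_this());
    asio::async_write(mSocket, asio::buffer(mSendBuffer),
        [this, self](const asio::error_code& error, std::size_t /*bytes*/)
        {
            if (!error && mSocket.is_open())
                DoWrite(); // re-arm the next write; the handler itself returns
        });
}
Each handler schedules the next operation and returns immediately, so the io_service can keep dispatching completions instead of being starved by a spinning loop.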
Problem
I am using boost::asio for a project where two processes on the same machine communicate using TCP/IP. One generates data to be read by the other, but I am encountering a problem where intermittently no data is being sent through the connection. I've boiled this down to a very simple example below, based on the async tcp echo server example.
The processes (source code below) start out fine, delivering data at a fast rate from the sender to the receiver. Then all of a sudden, no data at all is delivered for about five seconds. Then data is delivered again until the next inexplicable pause. During these five seconds, the processes eat 0% CPU and no other processes seem to do anything in particular. The pause is always the same length - five seconds.
I am trying to figure out how to get rid of these stalls and what causes them.
CPU usage during an entire run:
Notice how there are three dips of CPU usage in the middle of the run - a "run" is a single invocation of the server process and the client process. During these dips, no data was delivered. The number of dips and their timing differs between runs - some times no dips at all, some times many.
I am able to affect the "probability" of these stalls by changing the size of the read buffer - for instance, if I make the read buffer a multiple of the send chunk size, the problem almost goes away, but not entirely.
Source and test description
I've compiled the below code with Visual Studio 2005, using Boost 1.43 and Boost 1.45. I have tested on Windows Vista 64 bit (on a quad-core) and Windows 7 64 bit (on both a quad-core and a dual-core).
The server accepts a connection and then simply reads and discards data. Whenever a read is performed a new read is issued.
The client connects to the server, then puts a bunch of packets into a send queue. After this it writes the packets one at a time. Whenever a write completes, the next packet in the queue is written. A separate thread monitors the queue size and prints it to stdout every second. During the I/O stalls, the queue size remains exactly the same.
I have tried to use scatter-gather I/O (writing multiple packets in one system call), but the result is the same. If I disable I/O completion ports in Boost using BOOST_ASIO_DISABLE_IOCP, the problem appears to go away, but at the price of significantly lower throughput.
// Example is adapted from async_tcp_echo_server.cpp which is
// Copyright (c) 2003-2010 Christopher M. Kohlhoff (chris at kohlhoff dot com)
//
// Start program with -s to start as the server
#ifndef _WIN32_WINNT
#define _WIN32_WINNT 0x0501
#endif
#include <iostream>
#include <tchar.h>
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/thread.hpp>
#define PORT "1234"
using namespace boost::asio::ip;
using namespace boost::system;
class session {
public:
session(boost::asio::io_service& io_service) : socket_(io_service) {}
void do_read() {
socket_.async_read_some(boost::asio::buffer(data_, max_length),
boost::bind(&session::handle_read, this, _1, _2));
}
boost::asio::ip::tcp::socket& socket() { return socket_; }
protected:
void handle_read(const error_code& ec, size_t bytes_transferred) {
if (!ec) {
do_read();
} else {
delete this;
}
}
private:
tcp::socket socket_;
enum { max_length = 1024 };
char data_[max_length];
};
class server {
public:
explicit server(boost::asio::io_service& io_service)
: io_service_(io_service)
, acceptor_(io_service, tcp::endpoint(tcp::v4(), atoi(PORT)))
{
session* new_session = new session(io_service_);
acceptor_.async_accept(new_session->socket(),
boost::bind(&server::handle_accept, this, new_session, _1));
}
void handle_accept(session* new_session, const error_code& ec) {
if (!ec) {
new_session->do_read();
new_session = new session(io_service_);
acceptor_.async_accept(new_session->socket(),
boost::bind(&server::handle_accept, this, new_session, _1));
} else {
delete new_session;
}
}
private:
boost::asio::io_service& io_service_;
boost::asio::ip::tcp::acceptor acceptor_;
};
class client {
public:
explicit client(boost::asio::io_service &io_service)
: io_service_(io_service)
, socket_(io_service)
, work_(new boost::asio::io_service::work(io_service))
{
io_service_.post(boost::bind(&client::do_init, this));
}
~client() {
packet_thread_.join();
}
protected:
void do_init() {
// Connect to the server
tcp::resolver resolver(io_service_);
tcp::resolver::query query(tcp::v4(), "localhost", PORT);
tcp::resolver::iterator iterator = resolver.resolve(query);
socket_.connect(*iterator);
// Start packet generation thread
packet_thread_.swap(boost::thread(
boost::bind(&client::generate_packets, this, 8000, 5000000)));
}
typedef std::vector<unsigned char> packet_type;
typedef boost::shared_ptr<packet_type> packet_ptr;
void generate_packets(long packet_size, long num_packets) {
// Add a single dummy packet multiple times, then start writing
packet_ptr buf(new packet_type(packet_size, 0));
write_queue_.insert(write_queue_.end(), num_packets, buf);
queue_size = num_packets;
do_write_nolock();
// Wait until all packets are sent.
while (long queued = InterlockedExchangeAdd(&queue_size, 0)) {
std::cout << "Queue size: " << queued << std::endl;
Sleep(1000);
}
// Exit from run(), ignoring socket shutdown
work_.reset();
}
void do_write_nolock() {
const packet_ptr &p = write_queue_.front();
async_write(socket_, boost::asio::buffer(&(*p)[0], p->size()),
boost::bind(&client::on_write, this, _1));
}
void on_write(const error_code &ec) {
if (ec) { throw system_error(ec); }
write_queue_.pop_front();
if (InterlockedDecrement(&queue_size)) {
do_write_nolock();
}
}
private:
boost::asio::io_service &io_service_;
tcp::socket socket_;
boost::shared_ptr<boost::asio::io_service::work> work_;
long queue_size;
std::list<packet_ptr> write_queue_;
boost::thread packet_thread_;
};
int _tmain(int argc, _TCHAR* argv[]) {
try {
boost::asio::io_service io_svc;
bool is_server = argc > 1 && 0 == _tcsicmp(argv[1], _T("-s"));
std::auto_ptr<server> s(is_server ? new server(io_svc) : 0);
std::auto_ptr<client> c(is_server ? 0 : new client(io_svc));
io_svc.run();
} catch (std::exception& e) {
std::cerr << "Exception: " << e.what() << "\n";
}
return 0;
}
So my question is basically:
How do I get rid of these stalls?
What causes this to happen?
Update: There appears to be some correlation with disk activity, contrary to what I stated above: if I start a large directory copy on the disk while the test is running, the frequency of the IO stalls increases. Could this indicate that Windows IO prioritization is kicking in? Since the pauses are always the same length, it does sound somewhat like a timeout somewhere in the OS IO code...
Adjust boost::asio::socket_base::send_buffer_size and receive_buffer_size.
Adjust max_length to a larger number. Since TCP is stream-oriented, don't think of it as receiving single packets. This is most likely causing some sort of "gridlock" between the TCP send/receive windows.
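A minimal sketch of the second suggestion against the session class above; 64000 is an assumed value, picked as a multiple of the question's 8000-byte send chunks:
enum { max_length = 64000 }; // was 1024; a multiple of the 8000-byte chunks
char data_[max_length];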
I recently encountered a very similar-sounding problem, and have a solution that works for me. I have an asynchronous server/client written in asio that sends and receives video (and small request structures), and I was seeing frequent 5-second stalls, just as you describe.
Our fix was to increase the size of the socket buffers on each end, and to disable the Nagle algorithm.
pSocket->set_option(boost::asio::ip::tcp::no_delay(true));
pSocket->set_option(boost::asio::socket_base::send_buffer_size(s_SocketBufferSize));
pSocket->set_option(boost::asio::socket_base::receive_buffer_size(s_SocketBufferSize));
It might be that only one of the above options is critical, but I've not investigated this further.