I have a C++17 project using the non-Boost version of ASIO because I need to connect to, read from, and write to a TCP socket. The application has a read thread and a write thread that run periodically and share a mutex, so my reading thread has a time slot of 20 milliseconds in which it needs to read as much as it can and exit.
My problem is that I can't figure out how to get ASIO to read and then stop reading gracefully until another read is requested. There are no read-with-timeout functions, and I couldn't find any examples of such behaviour.
The closest thing I've found seems to mostly work, but not quite, and I have no idea why. My current code is something like this:
ErrorCode Read(uint8_t* buf, unsigned int maxAmountOfBytesToRead, unsigned int& nRead)
{
    std::lock_guard<std::mutex> tcpSocketLock(m_TCPSocketMutex);
    asio::error_code asioError;
    std::size_t amountOfBytesInBuffer = 0;
    m_TCPConnectionSocket.async_read_some(asio::buffer(buf, maxAmountOfBytesToRead),
        [&](const asio::error_code& errorCode, std::size_t result_n)
        {
            asioError = errorCode;
            amountOfBytesInBuffer = result_n;
        });
    RunIOContextWithTimeOut(std::chrono::milliseconds(20));
    nRead = amountOfBytesInBuffer;
    // finish up and exit.
}
void RunIOContextWithTimeOut(std::chrono::steady_clock::duration timeout)
{
    // Restart the io_context, as it may have been left in the "stopped" state
    // by a previous operation.
    m_TCPioContext.restart();
    // Block until the asynchronous operation has completed, or timed out. If
    // the pending asynchronous operation is a composed operation, the deadline
    // applies to the entire operation, rather than individual operations on
    // the socket.
    m_TCPioContext.run_for(timeout);
    // If the asynchronous operation completed successfully then the io_context
    // would have been stopped due to running out of work. If it was not
    // stopped, then the io_context::run_for call must have timed out.
    if (!m_TCPioContext.stopped())
    {
        m_TCPioContext.stop();
        // Run the io_context again until the operation completes.
        m_TCPioContext.run();
    }
}
But when running this code, I notice that the incoming data is not exactly correct and that chunks of it are missing. After adding logs and debugging, I see that when run_for returns because of a timeout, it never finishes the async read's callback handler, which makes me suspect that when run_for doesn't finish on its own and is asked to stop, it abandons whatever data it has read and exits.
But I thought that was what the subsequent run() call was for: to make the thread go back in and finish the read before exiting. Apparently not? I don't understand how to make it just read and, when it's time to stop, copy over everything it has read and stop gracefully. All the other examples have you closing sockets and cancelling everything, but I want to keep the socket open and the connection established, and just stop reading.
I can't let it read for as long as it wants, because there is a write thread waiting for the read to finish so that it can execute. I would also prefer not to use a solution with an additional thread that reads continuously, because this solution will be scaled up, which would mean roughly 40 extra threads on a system with limited resources; we want to be as efficient as possible with our CPU.
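For illustration, here is a sketch (untested, reusing the member names above) of a variant that cancels the socket's outstanding operation instead of stopping the context. Cancellation lets run() still deliver the pending handler, which completes with asio::error::operation_aborted if nothing had arrived, while any data already buffered by the OS stays queued for the next read:

// Sketch only, reusing the member names from the question. The difference
// from the code above: cancel the socket's outstanding operation rather than
// stopping the io_context, then run() until the handler has actually fired.
void RunIOContextWithTimeOut(std::chrono::steady_clock::duration timeout)
{
    m_TCPioContext.restart();
    m_TCPioContext.run_for(timeout);
    if (!m_TCPioContext.stopped())
    {
        // Ask the pending async_read_some to complete as soon as possible.
        // Its handler is invoked with asio::error::operation_aborted if no
        // data had arrived; data already buffered by the OS is not lost and
        // will be returned by the next read.
        m_TCPConnectionSocket.cancel();
        // Run until the context runs out of work, i.e. until the handler
        // has executed and recorded its results.
        m_TCPioContext.run();
    }
}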
Related
I'm trying to connect to a server via Boost.Asio and Beast. I need to send heartbeats to the server every 40 seconds, but when I try to, my write requests get stuck in a queue and never get executed unless the server sends something first.
I have this code to look for new messages that come in.
this->ioContext.run();
thread heartbeatThread(&client::heartbeatCycle, this);
while (this->p->is_socket_open()) {
    this->ioContext.restart();
    this->p->asyncQueue("", true);
    this->ioContext.run();
}
The asyncQueue function just calls async_read and blocks the io_context. The heartbeatCycle tries to send heartbeats, but they get stuck in the queue. If I force one to send anyway, I get
Assertion failed: (id_ != T::id), function try_lock, file soft_mutex.hpp, line 89.
When the server sends a message, the queue is unblocked, and all the queued messages go through, until there is no more work, and the io_context starts blocking again.
So my main question is, is there any way to unblock the io context without having the server send a message? If not, is there a way to emulate the server sending a message?
Thanks!
EDIT:
I have this function, called asyncQueue, that queues messages to be sent.
void session::asyncQueue(const string& payload, const bool& madeAfterLoop)
{
    if (!payload.empty())
    {
        queue_.emplace_back(payload);
    }
    if (payload.empty() && madeAfterLoop)
    {
        queue_.emplace_back("KEEPALIVE");
    }
    // If there is something to write, write it.
    if (!currentlyQueued_ && !queue_.empty() && queue_.at(0) != "KEEPALIVE")
    {
        currentlyQueued_ = true;
        ws_.async_write(
            net::buffer(queue_.at(0)),
            beast::bind_front_handler(
                &session::on_write,
                shared_from_this()));
        queue_.erase(queue_.begin());
    }
    // If there is nothing to write, read the buffer to keep stream alive
    if (!currentlyQueued_ && !queue_.empty())
    {
        currentlyQueued_ = true;
        ws_.async_read(
            buffer_,
            beast::bind_front_handler(
                &session::on_read,
                shared_from_this()));
        queue_.erase(queue_.begin());
    }
}
The problem is that when the code has no work left to do, it calls async_read and gets stuck until the server sends something.
In the function where I initialized the io_context, I also created a separate thread to send heartbeats every x seconds.
void client::heartbeatCycle()
{
    while (this->p->is_socket_open())
    {
        this->p->asyncQueue(bot::websocket::sendEvents::getHeartbeatEvent(cache_), true);
        this_thread::sleep_for(chrono::milliseconds(10000));
    }
}
Lastly, I have these two lines in my on_read function, which runs whenever an async read completes.
currentlyQueued_ = false;
asyncQueue();
Once there is no more work to do, the program calls async_read but currentlyQueued_ is never set to false.
The problem is the io_context is stuck looking for something to read. What can I do to stop the io_context from blocking the heartbeats from sending?
The only thing I have found that stops the io_context from blocking is the server sending me a message. When it does, currentlyQueued_ is set to false and the queue is able to drain.
That is the reason I'm looking for something that can emulate the server sending me a message. Is there a function in Asio/Beast that can do that? Or am I going about this the wrong way?
Thanks so much for your help.
The idea is to run the io_service elsewhere (on a thread, or in main, after starting an async chain).
Right now you're calling restart() on it which simply doesn't afford continuous operation. Why stop() or let it run out of work at all?
Note, manually starting threads is atypical and unsafe.
I would give examples, but lots already exist (also on this site). I'd need to see question code with more detail to give concrete suggestions.
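That said, a minimal sketch of the shape I mean, with hypothetical names (nothing here is taken from the question's code): the heartbeat becomes an Asio timer living on the same io_context as the socket, so there is no restart() loop and no manually started sleeping thread:

// Minimal sketch, names hypothetical: drive the heartbeat from an Asio timer
// on the same io_context as the reads/writes, and let run() block once.
#include <boost/asio.hpp>
#include <chrono>
#include <functional>
#include <iostream>

namespace net = boost::asio;

class Heartbeat
{
public:
    Heartbeat(net::io_context& ioc, std::chrono::seconds interval,
              std::function<void()> onBeat)
        : timer_(ioc), interval_(interval), onBeat_(std::move(onBeat)) {}

    void start() { schedule(); }
    void stop()  { timer_.cancel(); }

private:
    void schedule()
    {
        timer_.expires_after(interval_);
        timer_.async_wait([this](const boost::system::error_code& ec)
        {
            if (ec) return;     // cancelled, or the context is shutting down
            onBeat_();          // e.g. hand a KEEPALIVE to the write queue
            schedule();         // rearm for the next beat
        });
    }

    net::steady_timer timer_;
    std::chrono::seconds interval_;
    std::function<void()> onBeat_;
};

int main()
{
    net::io_context ioc;
    Heartbeat hb(ioc, std::chrono::seconds(40),
                 [] { std::cout << "send heartbeat\n"; });
    hb.start();
    // In the real client, the session's async read chain would be started
    // here too; run() then blocks until all work is done or stop() is called.
    ioc.run();
}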
While sending some data to a client (multiple chunks of data), if the client stops reading after some packets, the server gets stuck in boost::asio::write(), which results in unwanted behavior in the product.
We thought of shifting to async_write() with a timer over it, so that if this condition occurs we could fall back to the original good state; but due to design faults we could not run the io_service after async_write (because of high concurrency), which meant we never got the callbacks needed to stop the timer.
So, is there any way (without using the io_service) to unblock the write() API?
Something like executing the write() API on a separate thread and terminating it through some timer. But here the question arises: is there any way to clear out the Boost buffers that already hold pending write data?
Any help would be appreciated.
Thanks.
Eventually went with boost::asio::async_write(), but with io_service::poll(), poll() being non-blocking.
run() was not an option as the system is highly concurrent and read/write had to share the same io_service.
Pseudo code looks something like this:
data_to_write = size of data
current_bytes_transferred = 0    // updated from the async_write() callback
timeout_occurred = false         // set by a separate deadline timer

while ((data_to_write != current_bytes_transferred) && !timeout_occurred)
{
    // poll() is used instead of run() as the system has high concurrency
    // and read and write operations share the same io_service
    io_service.poll();
}

if (data_to_write == current_bytes_transferred)
{
    // SUCCESS write logic
}
else
{
    // timeout logic
}
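For concreteness, here is a hedged sketch of how those two completion paths might be wired up in real code; the free-function shape and the names are assumptions, not the poster's actual implementation:

// Sketch only: pair async_write with a steady_timer on the same io_service,
// and pump both with non-blocking poll() calls as the answer describes.
#include <boost/asio.hpp>
#include <chrono>
#include <vector>

void write_with_timeout(boost::asio::io_service& ios,
                        boost::asio::ip::tcp::socket& socket,
                        const std::vector<char>& data,
                        std::chrono::milliseconds timeout)
{
    std::size_t bytes_transferred = 0;
    bool timeout_occurred = false;

    boost::asio::async_write(socket, boost::asio::buffer(data),
        [&](const boost::system::error_code&, std::size_t n)
        {
            bytes_transferred = n;   // runs inside ios.poll()
        });

    boost::asio::steady_timer timer(ios, timeout);
    timer.async_wait([&](const boost::system::error_code& ec)
    {
        if (!ec)                     // fired (not cancelled)
            timeout_occurred = true;
    });

    while (bytes_transferred != data.size() && !timeout_occurred)
        ios.poll();                  // non-blocking; runs any ready handlers

    if (bytes_transferred == data.size())
        timer.cancel();              // success: stop the deadline
    else
        socket.cancel();             // timeout: abort the stuck write

    ios.poll();                      // drain the cancelled handler so the
                                     // by-reference captures cannot dangle
}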
I'm using boost::asio for serial communications and I'd like to listen for incoming data on a certain port. So, I register a ReadHandler using serial_port::async_read_some() and then create a separate thread to process the async handlers (it calls io_service::run()). My ReadHandler re-registers itself at its end by calling async_read_some() again, which seems to be a common pattern.
This all works, and my example can print data to stdout as it's received - except that I've noticed that data received while the ReadHandler is running will not be 'read' until the ReadHandler is done executing and new data is received after that happens. That is to say, when data is received while ReadHandler is running, although async_read_some is called at the conclusion of ReadHandler, it will not immediately invoke ReadHandler again for that data. ReadHandler will only be called again if additional data is received after the initial ReadHandler is completed. At this point, the data received while ReadHandler was running will be correctly in the buffer, alongside the 'new' data.
Here's my minimum-viable-example - I had initially put it in Wandbox but realized it won't help to compile it online because it requires a serial port to run anyway.
// Include standard libraries
#include <iostream>
#include <string>
#include <memory>
#include <thread>
#include <chrono>     // std::chrono timestamps and sleeps
#include <functional> // std::bind / std::placeholders
// Include ASIO networking library
#include <boost/asio.hpp>
class SerialPort
{
public:
    explicit SerialPort(const std::string& portName) :
        m_startTime(std::chrono::system_clock::now()),
        m_readBuf(new char[bufSize]),
        m_ios(),
        m_ser(m_ios)
    {
        m_ser.open(portName);
        m_ser.set_option(boost::asio::serial_port_base::baud_rate(115200));
        auto readHandler = [&](const boost::system::error_code& ec, std::size_t bytesRead) -> void
        {
            // Need to pass lambda as an input argument rather than capturing because we're using auto storage class
            // so use trick mentioned here: http://pedromelendez.com/blog/2015/07/16/recursive-lambdas-in-c14/
            // and here: https://stackoverflow.com/a/40873505
            auto readHandlerImpl = [&](const boost::system::error_code& ec, std::size_t bytesRead, auto& lambda) -> void
            {
                if (!ec)
                {
                    const auto elapsed = std::chrono::duration_cast<std::chrono::milliseconds>(std::chrono::system_clock::now() - m_startTime);
                    std::cout << elapsed.count() << "ms: " << std::string(m_readBuf.get(), m_readBuf.get() + bytesRead) << std::endl;
                    // Simulate some kind of intensive processing before re-registering our read handler
                    std::this_thread::sleep_for(std::chrono::seconds(5));
                    //m_ser.async_read_some(boost::asio::buffer(m_readBuf.get(), bufSize), lambda);
                    m_ser.async_read_some(boost::asio::buffer(m_readBuf.get(), bufSize), std::bind(lambda, std::placeholders::_1, std::placeholders::_2, lambda));
                }
            };
            readHandlerImpl(ec, bytesRead, readHandlerImpl);
        };
        m_ser.async_read_some(boost::asio::buffer(m_readBuf.get(), bufSize), readHandler);
        m_asioThread = std::make_unique<std::thread>([this]()
        {
            this->m_ios.run();
        });
    }

    ~SerialPort()
    {
        m_ser.cancel();
        m_asioThread->join();
    }

private:
    const std::chrono::system_clock::time_point m_startTime;
    static const std::size_t bufSize = 512u;
    std::unique_ptr<char[]> m_readBuf;
    boost::asio::io_service m_ios;
    boost::asio::serial_port m_ser;
    std::unique_ptr<std::thread> m_asioThread;
};

int main()
{
    std::cout << "Type q and press enter to quit" << std::endl;
    SerialPort port("COM1");
    while (std::cin.get() != 'q')
    {
        std::this_thread::sleep_for(std::chrono::milliseconds(200));
    }
    return 0;
}
(Don't mind the weird lambda stuff going on)
This program just prints data to stdout as it's received, along with a timestamp (milliseconds since program started). By connecting a virtual serial device to a virtual serial port pair, I can send data to the program (just typing in RealTerm, really). I can see the problem when I type a short string.
In this case, I typed 'hi', and the 'h' was printed immediately. I had typed the 'i' very shortly after, but at computer speeds it was quite a while, so it wasn't part of the initial data read into the buffer. At this point, the ReadHandler executes, which takes 5 seconds. During that time, the 'i' was received by the OS. But the 'i' does not get printed after the 5 seconds is up - the next async_read_some ignores it until I then type a 't', at which point it suddenly prints both the 'i' and the 't'.
Example program output
Here's a clearer description of this test and what I want:
Test: Start program, wait 1 second, type hi, wait 9 seconds, type t
What I want to happen (printed to stdout by this program):
1000ms: h
6010ms: i
11020ms: t
What actually happens:
1000ms: h
10000ms: it
It seems very important that the program has a way to recognize data that was received between reads. I know there is no way to check if data is available (in the OS buffer) using ASIO serial ports (without using the native_handle, anyway). But I don't really need to, as long as the read call returns. One solution to this issue might be to just make sure ReadHandler finishes running as quickly as possible - obviously the 5-second delay in this example is contrived. But that doesn't strike me as a good solution; no matter how fast I make ReadHandler, it will still be possible to 'miss' data (in that it will not be seen until some new data is received later). Is there any way to ensure that my handler will read all data within some short time of it being received, without depending on the receipt of further data?
I've done a lot of searching on SO and elsewhere, but everything I've found so far is just discussing other pitfalls that cause the system to not work at all.
As an extreme measure, it looks like it may be possible to have my worker thread call io_service::run_for() with a timeout, rather than run(), and then every short while have that thread somehow trigger a manual read. I'm not sure what form that would take yet - it could just call serial_port::cancel() I suppose, and then re-call async_read_some. But this sounds hacky to me, even if it might work - and it would require a newer version of boost, to boot.
I'm building with boost 1.65.1 on Windows 10 with VS2019, but I really hope that's not relevant to this question.
Answering the question in the title: You can't. By the nature of async_read_some you're asking for a partial read and a call to your handler as soon as anything is read. You're then sleeping for a long time before another async_read_some is called.
no matter how fast I make ReadHandler, it will still be possible to 'miss' data (in that it will not be seen until some new data is received later)
If I'm understanding your concern correctly, it doesn't matter: you won't miss anything. The data is still there, waiting in the socket/port buffer, until you next read it.
If you only want to begin processing once a read is complete, you need one of the async_read overloads instead. This essentially performs multiple read_somes on the stream until some condition is met. That could just mean reading everything available on the port/socket, or you can provide a custom CompletionCondition. This is called after each read_some until it returns 0, at which point the read is considered complete and the ReadHandler is called.
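A sketch of that overload, reusing the question's member names; the newline check is just an example of a custom CompletionCondition, not something the question asked for:

// Sketch: a composed async_read on the question's serial port. The
// CompletionCondition lambda runs after every internal read_some; returning 0
// completes the operation, any other value caps the next read_some's size.
boost::asio::async_read(m_ser,
    boost::asio::buffer(m_readBuf.get(), bufSize),
    [this](const boost::system::error_code& ec, std::size_t n) -> std::size_t
    {
        if (ec || n == bufSize)
            return 0;                 // error or buffer full: done
        if (n > 0 && m_readBuf[n - 1] == '\n')
            return 0;                 // example condition: stop at newline
        return bufSize - n;           // otherwise keep reading
    },
    [this](const boost::system::error_code& ec, std::size_t bytesRead)
    {
        // Invoked once the condition above returned 0; bytesRead bytes are
        // now in m_readBuf, ready for (quick) processing.
    });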
How should QLocalSocket/QDataStream be read?
I have a program that communicates with another via named pipes using QLocalSocket and QDataStream. The receiveMessage() slot below is connected to the QLocalSocket's readyRead() signal.
void MySceneClient::receiveMessage()
{
    qint32 msglength;
    (*m_stream) >> msglength;
    char* msgdata = new char[msglength];
    int read = 0;
    while (read < msglength) {
        read += m_stream->readRawData(&msgdata[read], msglength - read);
    }
    ...
}
I find that the application sometimes hangs in readRawData(). That is, it successfully reads the 4-byte header but then never returns from readRawData().
If I add...
if (m_socket->bytesAvailable() < 5)
    return;
...to the start of this function, the application works fine (with the short test message).
I am guessing then (the documentation is very sparse) that some sort of deadlock is occurring, and that I must use bytesAvailable() to gradually build up the buffer rather than blocking.
Why is this? And what is the correct approach to reading from QLocalSocket?
Your loop blocks the event loop, so you will never receive more data if it didn't all arrive in the first read; I think that is what causes your problem.
The correct approach is to use signals and slots (the readyRead() signal here): just read the available data in your slot, and if there's not enough, buffer it and return, then read more when you get the next signal.
Be careful with this alternative approach: if you are absolutely sure all the data you expect will arrive promptly (perhaps not unreasonable with a local socket where you control both client and server), or if the whole thing runs in a thread that does nothing else, then it may be OK to use the waitForReadyRead() method. But the event loop will remain blocked until data arrives, freezing the GUI, for example (if in the GUI thread), and it is generally troublesome.
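A minimal sketch of that buffered pattern, assuming the question's 4-byte length prefix; m_buffer (a QByteArray), m_msglength (a qint32 initialized to 0) and handleMessage() are hypothetical members, not from the original code:

// Sketch of a non-blocking readyRead() slot: accumulate bytes in a member
// QByteArray and only consume a message once the whole frame has arrived.
void MySceneClient::receiveMessage()
{
    m_buffer.append(m_socket->readAll());       // take whatever has arrived

    for (;;) {
        if (m_msglength == 0) {
            if (m_buffer.size() < int(sizeof(qint32)))
                return;                         // header incomplete: wait
            QDataStream header(m_buffer);       // read-only stream over buffer
            header >> m_msglength;
            m_buffer.remove(0, sizeof(qint32));
        }
        if (m_buffer.size() < m_msglength)
            return;                             // body incomplete: wait for
                                                // the next readyRead()
        const QByteArray msg = m_buffer.left(m_msglength);
        m_buffer.remove(0, m_msglength);
        m_msglength = 0;
        handleMessage(msg);                     // consume one whole message
    }
}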
EDIT: I have now edited my code a bit to give a rough idea of "all" the code. Maybe this will help identify the problem ;)
I have integrated the following simple code fragment, which cancels the timer if data is read from the TCP socket, or otherwise cancels the socket read:
// file tcp.cpp
void CheckTCPSocket()
{
    TRequestStatus iStatus;
    TSockXfrLength len;
    int timeout = 1000;
    RTimer timer;
    TRequestStatus timerstatus;
    TPtr8 buff;

    iSocket.RecvOneOrMore(buff, 0, iStatus, len);
    timer.CreateLocal();
    timer.After(timerstatus, timeout);

    // Wait for two requests – if timer completes first, we have a timeout.
    User::WaitForRequest(iStatus, timerstatus);
    if (timerstatus.Int() != KRequestPending)
    {
        iSocket.CancelRead();
    }
    else
    {
        timer.Cancel();
    }
    timer.Close();
}
// file main.cpp
void TestActiveObject::RunL()
{
    TUint Data;
    MQueue.ReceiveBlocking(Data);
    CheckTCPSocket();
    SetActive();
}
This part is executed within an active object, and since integrating the code piece above I always get the panic:
E32User-CBase 46: This panic is raised by an active scheduler, a CActiveScheduler. It is caused by a stray signal.
I never had any problem with my code until now, when this piece of code is executed. The code itself runs fine: data is read from the socket, and then the timer is cancelled and closed. I do not understand how the timer object has any influence here on the AO.
Would be great if someone could point me in the right direction.
Thanks
This could be a problem with another active object completing (not one of these two), or SetActive() not being called. See Forum Nokia. Hard to say without seeing all your code!
BTW User::WaitForRequest() is nearly always a bad idea. See why here.
Never mix active objects and User::WaitForRequest().
(Well, almost never. When you know exactly what you are doing it can be ok, but the code you posted suggests you still have some learning to do.)
You get the stray signal panic when the thread request semaphore is signalled with RThread::RequestComplete() by the asynchronous service provider, and the active scheduler that was waiting on the semaphore with User::WaitForAnyRequest() tries to find the active object whose request completed, so that its RunL() could be called, but cannot find any in its list of active objects.
In this case you have two ongoing requests, neither of which is controlled by the active scheduler (for example, not using CActive::iStatus as the TRequestStatus; issuing SetActive() on an object whose CActive::iStatus is not involved in an async request is another error in your code, though not the reason for the stray signal). You wait for either one of them to complete with WaitForRequest(), but you don't wait for the other to complete at all. The other request's completion signal goes to the active scheduler's WaitForAnyRequest(), resulting in the stray signal. If you cancel a request, you still need to wait on the thread request semaphore.
The best solution is to make the timeout timer an active object as well. Have a look at the CTimer class.
Another solution is just to add another WaitForRequest on the request not yet completed.
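For the first suggestion, a rough sketch of a CTimer-derived timeout active object; the observer interface is hypothetical and error handling (RunError, cancellation) is omitted:

// Sketch of a timeout as a proper active object, so the completion goes
// through the active scheduler instead of User::WaitForRequest().
// MTimeoutObserver is a hypothetical callback interface, not Symbian API.
class MTimeoutObserver
{
public:
    virtual void HandleTimeout() = 0;   // e.g. call iSocket.CancelRead()
};

class CTimeoutTimer : public CTimer
{
public:
    static CTimeoutTimer* NewL(MTimeoutObserver& aObserver)
    {
        CTimeoutTimer* self = new (ELeave) CTimeoutTimer(aObserver);
        CleanupStack::PushL(self);
        self->ConstructL();             // calls CTimer::ConstructL()
        CleanupStack::Pop(self);
        return self;
    }

    void Start(TTimeIntervalMicroSeconds32 aTimeout)
    {
        After(aTimeout);                // queues the request and sets us active
    }

private:
    CTimeoutTimer(MTimeoutObserver& aObserver)
        : CTimer(EPriorityStandard), iObserver(aObserver)
    {
        CActiveScheduler::Add(this);
    }

    void RunL()                         // scheduler calls this when we fire
    {
        iObserver.HandleTimeout();
    }

    MTimeoutObserver& iObserver;
};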
You are calling TestActiveObject::SetActive() but there is no call to any method that sets TestActiveObject::iStatus to KRequestPending. This will create the stray signal panic.
The only iStatus variable in your code is local to the CheckTCPSocket() method.