Implementing a communication timeout - C++

I'm implementing a class that talks to a motor controller over a USB device. I have everything working except for a way to indicate whether a parameter fetched over the comm link is "fresh" or not. What I have so far:
class MyCommClass
{
public:
    bool getSpeed( double *speed );
private:
    void rxThread();

    struct MsgBase
    { /* .. */ };
    struct Msg1 : public MsgBase
    { /* .. */ };
    struct Msg2 : public MsgBase
    { /* .. */ };
    /* .. */
    struct MsgN : public MsgBase
    { /* .. */ };

    Msg1 msg1;
    Msg2 msg2;
    /* .. */
    MsgN msgn;

    std::map< unsigned long, MsgBase * > messages; // keyed by message ID
};
rxThread() is an infinite loop running in a separate thread, checking the USB device for available messages. Each message has a unique identifier which rxThread() uses to stick it into the right msgX object. What I need is for getSpeed() to be able to tell whether the current speed value is "fresh" or "stale", i.e. whether the msgX object that contains the speed value was updated within a specified timeout period. So each message object needs to implement its own timeout (since timeouts vary per message).
All messages are transmitted periodically by the motor controller, but there are also some that get transmitted as soon as their contents change (but they will also be transmitted periodically if the contents do not change). This means that receiving a message at more than the nominal rate is OK, but it should appear at least once within the maximum timeout period.
The USB device provides timestamps along with the messages, so I have access to that information. The timestamp does not reflect the current time; it is an unsigned long number with microsecond resolution that the device updates every time a message is received. I suspect the device just starts incrementing this from 0 from the time I call its initialization functions. A couple of different ways I can think of implementing this are:
1. Each message object launches a thread that runs an infinite loop, waiting (WaitForSingleObject) for the timeout period. After each wait it checks whether a counter variable (cached before the wait) has been incremented. If not, it sets a flag marking the message as stale. The counter would be incremented every time rxThread() updates that message object.
2. rxThread(), in addition to stuffing messages, also iterates through the list of messages and checks the timestamp at which each was last updated. If the age exceeds the timeout, it flags the message as stale. This method might have a problem with the amount of processing required. It probably wouldn't be a problem on most machines, but this code needs to run on a piggishly slow 'industrial computer'.
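Roughly, the second option would look something like this (just a sketch; lastUpdateTick, timeoutMs and the stale flag would be members I'd add to MsgBase, with GetTickCount() from <windows.h> as the clock):
// Sketch: periodic scan over all messages, run from rxThread()'s loop.
void MyCommClass::checkStale()
{
    DWORD now = GetTickCount();
    for (std::map< unsigned long, MsgBase * >::iterator it = messages.begin();
         it != messages.end(); ++it)
    {
        MsgBase *msg = it->second;
        // unsigned subtraction stays correct across GetTickCount() wrap-around
        if (now - msg->lastUpdateTick > msg->timeoutMs)
            msg->stale = true;
    }
}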
I'd really appreciate your thoughts and suggestions on how to implement this. I'm open to ideas other than the two I've mentioned. I'm using Visual Studio 2005 and cross-platform portability is not a big concern as the USB device drivers are Windows only. There are currently about 8 messages I'm monitoring, but it would be nice if the solution were lightweight enough that I could add several (maybe another 8) more without running into processing horsepower limitations.
Thanks in advance,
Ashish.

If you don't need to do something "right away" when a message becomes stale, I think you can skip using timers if you store both the computer's time and the device's timestamp with each message:
#include <ctime>
#include <climits>

class TimeStamps {
public:
    std::time_t sys_time() const;   // in seconds
    unsigned long dev_time() const; // in ms
    /* .. */
};

class MyCommClass {
    /* .. */
private:
    struct MsgBase {
        TimeStamps time;
        /* .. */
    };
    TimeStamps most_recent_time;

    bool msg_stale(MsgBase const& msg, unsigned long ms_timeout) const {
        if (most_recent_time.sys_time() - msg.time.sys_time() > ULONG_MAX/1000)
            return true; // device timestamps have wrapped around
        // Note the subtraction may "wrap".
        return most_recent_time.dev_time() - msg.time.dev_time() >= ms_timeout;
    }
    /* .. */
};
Of course, TimeStamps can be another nested class in MyCommClass if you prefer.
Finally, rxThread() should set the appropriate message's TimeStamps object and the most_recent_time member each time a message is received. All this won't detect a message as stale if it became stale after the last message of any other type was received, but your second possible solution in the question would have the same issue, so maybe that doesn't matter. If it does matter, something like this could still work, if msg_stale() also compares the current time.
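To make that concrete, getSpeed() might then look something like this (a sketch only; speed_msg, its speed field, and the 100 ms figure are placeholders, and real code would also need whatever locking rxThread() uses):
bool MyCommClass::getSpeed( double *speed )
{
    // speed_msg is whichever MsgX object carries the speed value
    if (msg_stale(speed_msg, 100)) // this message's timeout, in ms
        return false;              // stale: don't report the value
    *speed = speed_msg.speed;
    return true;                   // fresh
}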

How about storing the timestamp in the message, and having getSpeed() check the timestamp?
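For example (a sketch; lastUpdateTick would be stamped by rxThread() when the speed message arrives, and SPEED_TIMEOUT_MS is a made-up constant):
// In rxThread(), when the speed message is received:
speedMsg.lastUpdateTick = GetTickCount();

// In getSpeed():
// unsigned subtraction is safe across GetTickCount() wrap-around
if (GetTickCount() - speedMsg.lastUpdateTick > SPEED_TIMEOUT_MS)
    return false; // stale
*speed = speedMsg.speed;
return true;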

Related

Wait simulation time Omnet++

I need to modify UdpEchoApp (from the INET package) so that before it sends back the packet it waits "x" seconds of simulation time. I tried doing something like:
simtime_t before;
//something to calculate
simtime_t after;
if (after-before > x) {continue}
else {do something and then recalculate after}
but this crashes Qtenv. Is there something I can do to resolve this problem?
I also post the function that sends back the received packet:
void UdpEchoApp::socketDataArrived(UdpSocket *socket, Packet *pk)
{
    // determine its source address/port
    L3Address remoteAddress = pk->getTag<L3AddressInd>()->getSrcAddress();
    int srcPort = pk->getTag<L4PortInd>()->getSrcPort();
    pk->clearTags();
    pk->trim();

    // statistics
    numEchoed++;
    emit(packetSentSignal, pk);

    // send back
    socket->sendTo(pk, remoteAddress, srcPort);
}
Thank you
Your code is wrong: simulation time is advanced by the simulation environment according to incoming events. In other words, simulation time is modified outside the standard methods that define the behavior of a module.
To simulate a delay during the simulation one has to use a self-message.
In short:
In socketDataArrived():
remember the packet to send and the remoteAddress in a buffer,
schedule a self-message x seconds later (using scheduleAt()).
In handleMessageWhenUp(), when your self-message arrives, take the packet from the buffer and send it, as sketched below.
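A minimal sketch of those three steps (delayMsg, pendingPk, remoteAddr and srcPort are members you would add to the module yourself; x is your delay):
void UdpEchoApp::socketDataArrived(UdpSocket *socket, Packet *pk)
{
    // 1. remember the packet and where it came from
    remoteAddr = pk->getTag<L3AddressInd>()->getSrcAddress();
    srcPort = pk->getTag<L4PortInd>()->getSrcPort();
    pendingPk = pk;

    // 2. fire a self-message x seconds of simulation time later
    scheduleAt(simTime() + x, delayMsg); // delayMsg = new cMessage(...) in initialize()
}

void UdpEchoApp::handleMessageWhenUp(cMessage *msg)
{
    if (msg == delayMsg) {
        // 3. the delay has elapsed: echo the buffered packet now
        pendingPk->clearTags();
        pendingPk->trim();
        socket.sendTo(pendingPk, remoteAddr, srcPort);
        pendingPk = nullptr;
    }
    else
        socket.processMessage(msg); // what UdpEchoApp normally does here
}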

How can I read all available data with boost::asio's async_read_some() without waiting for new data to arrive?

I'm using boost::asio for serial communications and I'd like to listen for incoming data on a certain port. So, I register a ReadHandler using serialport::async_read_some() and then create a separate thread to process async handlers (calls io_service::run()). My ReadHandler re-registers itself at its end by again calling async_read_some(), which seems to be a common pattern.
This all works, and my example prints data to stdout as it's received - except that I've noticed that data received while ReadHandler is running is not 'read' until ReadHandler finishes executing and new data arrives after that. That is to say, although async_read_some is called at the conclusion of ReadHandler, it does not immediately invoke ReadHandler again for data that arrived in the meantime; ReadHandler is only called again once additional data is received after the previous ReadHandler completed. At that point, the data received while ReadHandler was running is correctly in the buffer, alongside the 'new' data.
Here's my minimum-viable-example - I had initially put it in Wandbox but realized it won't help to compile it online because it requires a serial port to run anyway.
// Include standard libraries
#include <iostream>
#include <string>
#include <memory>
#include <thread>
#include <chrono>
#include <functional>

// Include ASIO networking library
#include <boost/asio.hpp>

class SerialPort
{
public:
    explicit SerialPort(const std::string& portName) :
        m_startTime(std::chrono::system_clock::now()),
        m_readBuf(new char[bufSize]),
        m_ios(),
        m_ser(m_ios)
    {
        m_ser.open(portName);
        m_ser.set_option(boost::asio::serial_port_base::baud_rate(115200));

        auto readHandler = [&](const boost::system::error_code& ec, std::size_t bytesRead)->void
        {
            // Need to pass the lambda as an input argument rather than capturing it
            // because we're using the auto storage class, so use the trick mentioned
            // here: http://pedromelendez.com/blog/2015/07/16/recursive-lambdas-in-c14/
            // and here: https://stackoverflow.com/a/40873505
            auto readHandlerImpl = [&](const boost::system::error_code& ec, std::size_t bytesRead, auto& lambda)->void
            {
                if (!ec)
                {
                    const auto elapsed = std::chrono::duration_cast<std::chrono::milliseconds>(std::chrono::system_clock::now() - m_startTime);
                    std::cout << elapsed.count() << "ms: " << std::string(m_readBuf.get(), m_readBuf.get() + bytesRead) << std::endl;

                    // Simulate some kind of intensive processing before re-registering our read handler
                    std::this_thread::sleep_for(std::chrono::seconds(5));

                    //m_ser.async_read_some(boost::asio::buffer(m_readBuf.get(), bufSize), lambda);
                    m_ser.async_read_some(boost::asio::buffer(m_readBuf.get(), bufSize), std::bind(lambda, std::placeholders::_1, std::placeholders::_2, lambda));
                }
            };
            readHandlerImpl(ec, bytesRead, readHandlerImpl);
        };
        m_ser.async_read_some(boost::asio::buffer(m_readBuf.get(), bufSize), readHandler);

        m_asioThread = std::make_unique<std::thread>([this]()
        {
            this->m_ios.run();
        });
    }

    ~SerialPort()
    {
        m_ser.cancel();
        m_asioThread->join();
    }

private:
    const std::chrono::system_clock::time_point m_startTime;
    static const std::size_t bufSize = 512u;
    std::unique_ptr<char[]> m_readBuf;
    boost::asio::io_service m_ios;
    boost::asio::serial_port m_ser;
    std::unique_ptr<std::thread> m_asioThread;
};

int main()
{
    std::cout << "Type q and press enter to quit" << std::endl;
    SerialPort port("COM1");
    while (std::cin.get() != 'q')
    {
        std::this_thread::sleep_for(std::chrono::milliseconds(200));
    }
    return 0;
}
(Don't mind the weird lambda stuff going on)
This program just prints data to stdout as it's received, along with a timestamp (milliseconds since program started). By connecting a virtual serial device to a virtual serial port pair, I can send data to the program (just typing in RealTerm, really). I can see the problem when I type a short string.
In this case, I typed 'hi', and the 'h' was printed immediately. I had typed the 'i' very shortly after, but at computer speeds it was quite a while, so it wasn't part of the initial data read into the buffer. At this point, the ReadHandler executes, which takes 5 seconds. During that time, the 'i' was received by the OS. But the 'i' does not get printed after the 5 seconds is up - the next async_read_some ignores it until I then type a 't', at which point it suddenly prints both the 'i' and the 't'.
[Screenshot: example program output]
Here's a clearer description of this test and what I want:
Test: Start program, wait 1 second, type hi, wait 9 seconds, type t
What I want to happen (printed to stdout by this program):
1000ms: h
6010ms: i
11020ms: t
What actually happens:
1000ms: h
10000ms: it
It seems very important that the program has a way to recognize data that was received between reads. I know there is no way to check if data is available (in the OS buffer) using ASIO serial ports (without using the native_handle, anyway). But I don't really need to, as long as the read call returns. One solution to this issue might be to just make sure ReadHandler finishes running as quickly as possible - obviously the 5-second delay in this example is contrived. But that doesn't strike me as a good solution; no matter how fast I make ReadHandler, it will still be possible to 'miss' data (in that it will not be seen until some new data is received later). Is there any way to ensure that my handler will read all data within some short time of it being received, without depending on the receipt of further data?
I've done a lot of searching on SO and elsewhere, but everything I've found so far is just discussing other pitfalls that cause the system to not work at all.
As an extreme measure, it looks like it may be possible to have my worker thread call io_service::run_for() with a timeout, rather than run(), and then every short while have that thread somehow trigger a manual read. I'm not sure what form that would take yet - it could just call serial_port::cancel() I suppose, and then re-call async_read_some. But this sounds hacky to me, even if it might work - and it would require a newer version of boost, to boot.
I'm building with boost 1.65.1 on Windows 10 with VS2019, but I really hope that's not relevant to this question.
Answering the question in the title: You can't. By the nature of async_read_some you're asking for a partial read and a call to your handler as soon as anything is read. You're then sleeping for a long time before another async_read_some is called.
no matter how fast I make ReadHandler, it will still be possible to 'miss' data (in that it will not be seen until some new data is received later)
If I'm understanding your concern correctly, it doesn't matter - you won't miss anything. The data is still there, waiting in a socket/port buffer, until you next read it.
If you only want to begin processing once a read is complete, you need one of the async_read overloads instead. This will essentially perform multiple read_somes on the stream until some condition is met. That could just mean everything currently on the port/socket, or you can provide a custom CompletionCondition. This is called after each read_some until it returns 0, at which point the read is considered complete and the ReadHandler is then called.
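For example, with a custom condition the handler only runs once your framing rule is satisfied (a sketch against the names in your code; the newline rule is just an illustration):
// Keep issuing read_somes until a newline arrives or the buffer fills;
// returning 0 marks the read complete and invokes readHandler once.
auto untilNewline = [this](const boost::system::error_code& ec,
                           std::size_t bytesRead) -> std::size_t
{
    if (ec || (bytesRead > 0 && m_readBuf[bytesRead - 1] == '\n'))
        return 0;               // complete: call the handler now
    return bufSize - bytesRead; // max bytes to attempt on the next read_some
};

boost::asio::async_read(m_ser,
                        boost::asio::buffer(m_readBuf.get(), bufSize),
                        untilNewline,
                        readHandler);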

Best way to send packet with RakNet

I was wondering how to send packet to client in client-server architecture with RakNet. In this sample code we have this line:
peer->Send(&bsOut,HIGH_PRIORITY,RELIABLE_ORDERED,0,packet->systemAddress,false);
However, the prototype is the following (from the interface class):
virtual uint32_t Send( const RakNet::BitStream * bitStream,
                       PacketPriority priority,
                       PacketReliability reliability,
                       char orderingChannel,
                       const AddressOrGUID systemIdentifier,
                       bool broadcast,
                       uint32_t forceReceiptNumber = 0 ) = 0;
As you can see, the 5th parameter takes an AddressOrGUID, which means we can pass a SystemAddress as in the sample, but also the unique GUID of a connected machine.
There is a function called:
RakNet::GetSystemAddressFromGUID();
But I'm not sure if RakNet uses it to convert the GUID we pass as a parameter (I didn't find any use of this method in RakPeer, the implementation of RakPeerInterface, and I'm not able to find how buffered packets are sent each tick).
The problem is the following:
The sample code replies directly to the received packet. However, in a game, the server has to send information without first receiving a packet from the client. So I don't have access to something like
packet->systemAddress
because there is no received packet.
So I will have to store something in my Player class that tells me how to send them packets: a SystemAddress or a RakNetGUID. A RakNetGUID is simpler and lighter to store than a SystemAddress.
But if RakNet calls GetSystemAddressFromGUID(), it's not worth it, because that is an O(log n) lookup.
Do I need to store the SystemAddress for each Player myself, or does RakNet::Send() avoid that lookup when given a RakNetGUID?
Thank you!
OK, I just made a mistake by not following a condition statement correctly the first time I tried to understand the source code. Because this question is really specific, I think it's worth looking at the source code directly.
The simple answer is: Yes, store RakNetGUID in the Player class.
Details here:
First, the only file concerned is RakPeer.cpp. The starting point is:
uint32_t RakPeer::Send( const RakNet::BitStream * bitStream,
                        PacketPriority priority,
                        PacketReliability reliability,
                        char orderingChannel,
                        const AddressOrGUID systemIdentifier,
                        bool broadcast,
                        uint32_t forceReceiptNumber ) // Line 1366
Then, we have this line where SendBuffered is called:
SendBuffered((const char*)bitStream->GetData(),
             bitStream->GetNumberOfBitsUsed(),
             priority,
             reliability,
             orderingChannel,
             systemIdentifier, // This is the initial AddressOrGUID
             broadcast,
             RemoteSystemStruct::NO_ACTION,
             usedSendReceipt); // Line 1408
Inside SendBuffered(), we find the name of the buffer variable:
bufferedCommands.Push(bcs); // Line 4216
And by searching every place where bufferedCommands is used, we find a meaningful method name:
bool RakPeer::RunUpdateCycle(BitStream &updateBitStream ) // Line 5567
We can find a loop that sends every buffered message here:
callerDataAllocationUsed=SendImmediate((char*)bcs->data,
                                       bcs->numberOfBitsToSend,
                                       bcs->priority,
                                       bcs->reliability,
                                       bcs->orderingChannel,
                                       bcs->systemIdentifier, // Initial AddressOrGUID
                                       bcs->broadcast,
                                       true,
                                       timeNS,
                                       bcs->receipt); // Line 5630
RakPeer::SendImmediate() will ask RakPeer::GetSystemIndexFromGuid() to find the appropriate Index:
else if (systemIdentifier.rakNetGuid!=UNASSIGNED_RAKNET_GUID)
    remoteSystemIndex=GetSystemIndexFromGuid(systemIdentifier.rakNetGuid); // Line 4300
Finally, this last method will store the index directly in the RakNet::RakNetGUID when found:
unsigned int i;
for ( i = 0; i < maximumNumberOfPeers; i++ )
{
    if (remoteSystemList[ i ].guid == input )
    {
        // Set the systemIndex so future lookups will be fast
        remoteSystemList[i].guid.systemIndex = (SystemIndex) i;
        return i;
    }
} // Line 2440
If we call Send() with a RakNetGUID, it checks whether RakNetGUID::systemIndex is set. If it is, no search is needed. Otherwise there is a linear search, O(n) with n = maximumNumberOfPeers, but only for the first packet sent to that system (the index is cached afterwards, as shown above).
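So on the sending side it is enough to keep the GUID; for instance (Player here is your own class, not RakNet API, and the GUID would be captured from packet->guid when the client connects):
struct Player
{
    RakNet::RakNetGUID guid; // saved from packet->guid at connection time
    /* .. */
};

// Later, with no received packet in hand:
RakNet::BitStream bsOut;
bsOut.Write((RakNet::MessageID)ID_USER_PACKET_ENUM);
peer->Send(&bsOut, HIGH_PRIORITY, RELIABLE_ORDERED, 0, player.guid, false);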
I wrote this to help people understand how it works if they have the same question in mind.
From the doc: http://www.raknet.net/raknet/manual/systemaddresses.html
It is preferred that you refer to remote systems by RakNetGUID, instead of SystemAddress. RakNetGUID is a unique identifier for an instance of RakPeer, while SystemAddress is not. And it is necessary to exclusively use RakNetGUID if you plan to use the Router2 plugin.
SystemAddress is only the combination of the IP and the port

avoiding collisions when collapsing infinity lock-free buffer to circular-buffer

I'm solving the two-feed arbitration problem of the FAST protocol.
Don't worry if you're not familiar with it; my question is pretty general, actually. But I'm adding the problem description for those who are interested (you can skip it).
Data in all UDP Feeds are disseminated in two identical feeds (A and B) on two different multicast IPs. It is strongly recommended that client receive and process both feeds because of possible UDP packet loss. Processing two identical feeds allows one to statistically decrease the probability of packet loss.
It is not specified in what particular feed (A or B) the message appears for the first time. To arbitrate these feeds one should use the message sequence number found in Preamble or in tag 34-MsgSeqNum. Utilization of the Preamble allows one to determine message sequence number without decoding of FAST message.
Processing messages from feeds A and B should be performed using the following algorithm:
Listen feeds A and B
Process messages according to their sequence numbers.
Ignore a message if one with the same sequence number was already processed before.
If the gap in sequence number appears, this indicates packet loss in both feeds (A and B). Client should initiate one of the Recovery process. But first of all client should wait a reasonable time, perhaps the lost packet will come a bit later due to packet reordering. UDP protocol can’t guarantee the delivery of packets in a sequence.
// tcp recover algorithm further
I wrote a very simple class for this. It preallocates all required objects, and then the first thread to receive a particular seqNum can process it; another thread receiving the same seqNum will drop it later:
class MsgQueue
{
public:
    MsgQueue();
    ~MsgQueue(void);
    bool Lock(uint32_t msgSeqNum);
    Msg& Get(uint32_t msgSeqNum);
    void Commit(uint32_t msgSeqNum);
private:
    void Process();
    static const int QUEUE_LENGTH = 1000000;

    // 0 - available for use; 1 - processing; 2 - ready
    std::atomic<uint16_t> status[QUEUE_LENGTH];
    Msg updates[QUEUE_LENGTH];
};
Implementation:
MsgQueue::MsgQueue()
{
    memset(status, 0, sizeof(status));
}

MsgQueue::~MsgQueue(void)
{
}

// For the same msgSeqNum should return true to only one thread
bool MsgQueue::Lock(uint32_t msgSeqNum)
{
    uint16_t expected = 0;
    return status[msgSeqNum].compare_exchange_strong(expected, 1);
}

void MsgQueue::Commit(uint32_t msgSeqNum)
{
    status[msgSeqNum] = 2;
    Process();
}

// this method probably should be combined with "Lock" but please ignore! :)
Msg& MsgQueue::Get(uint32_t msgSeqNum)
{
    return updates[msgSeqNum];
}

void MsgQueue::Process()
{
    // ready packets must be processed,
}
Usage:
if (!msgQueue.Lock(seq)) {
    return;
}
Msg& msg = msgQueue.Get(seq); // reference, so we fill in the queue's slot, not a copy
msg.Ticker = "HP";
msg.Bid = 100;
msg.Offer = 101;
msgQueue.Commit(seq);
This works fine if we assume that QUEUE_LENGTH is infinite, because in that case one msgSeqNum maps to exactly one updates array item.
But I have to make the buffer circular, because it is not possible to store the entire history (many millions of packets) and there is no reason to do so. Actually I only need to buffer enough packets to reconstruct the session, and once the session is reconstructed I can drop them.
But having a circular buffer significantly complicates the algorithm. For example, assume that we have a circular buffer of length 1000, and at the same time we try to process seqNum = 10 000 and seqNum = 11 000 (this is VERY unlikely but still possible). Both of these packets map to index 0 of the updates array, so a collision occurs. In such a case the buffer should 'drop' old packets and process new packets.
It's trivial to implement what I want using locks, but writing lock-free code on a circular buffer used from different threads is really complicated. So I welcome any suggestions and advice on how to do that. Thanks!
I don't believe you can use a plain ring buffer. A hashed index can be used in the status[] array, i.e. hash = seq % 1000. The issue is that the sequence number is dictated by the network and you have no control over its ordering. You wish to lock based on this sequence number. Your array doesn't need to be infinite, just as large as the range of the sequence number; but that is probably larger than practical.
I am not sure what happens while a sequence number is locked. Does this mean another thread is processing it? If so, you must maintain a sub-list for hash collisions to resolve the particular sequence number.
You may also consider an array size that is a power of 2. For example, 1024 will allow hash = seq & 1023, which should be quite efficient.
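A sketch of that indexing, with the owning sequence number kept per slot so a wrap-around collision can be detected (seqInSlot is an assumed extra array alongside status[]):
static const uint32_t QUEUE_LENGTH = 1024;          // power of two
std::atomic<uint32_t> seqInSlot[QUEUE_LENGTH];      // which seqNum currently owns each slot

uint32_t slot = msgSeqNum & (QUEUE_LENGTH - 1);     // same as msgSeqNum % 1024, but cheaper

if (seqInSlot[slot].load(std::memory_order_acquire) != msgSeqNum)
{
    // collision: a different (older or newer) sequence number owns this slot;
    // here you must decide whether to evict the old entry or drop the new packet
}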

How can I set tens of thousands of tasks to each trigger at a different defined time?

I'm constructing a data visualisation system that visualises over 100,000 data points (visits to a website) across a time period. The time period (say 1 week) is then converted into simulation time (1 week = 2 minutes in simulation), and a task is performed on each and every piece of data at the specific time it happens in simulation time (the time each visit occurred during the week in real time). With me? =p
In other programming languages (e.g. Java) I would simply set a timer for each datapoint. After each timer completes it triggers a callback that allows me to display that datapoint in my app. I'm new to C++ and unfortunately it seems that timers with callbacks aren't built in. Another method I would have used in ActionScript, for example, is custom events that are triggered after a specific timeframe. But then again I don't think C++ has support for custom events either.
In a nutshell: say I have 1000 pieces of data that span a 60-second period. Each piece of data has its own time in relation to that 60-second period. For example, one needs to trigger something at 1 second, another at 5 seconds, etc.
Am I going about this the right way, or is there a much easier way to do this?
Ps. I'm using Mac OS X, not Windows
I would not use timers to do that. It sounds like you have too many events, and they may lie too close to each other; performance and accuracy may be bad with timers.
A simulation is normally done like this:
You simply run a loop (iterations), and on every iteration you add either a measured (for real time) or a constant (non-real-time) amount to your simulation time.
Then you manually check all your events and execute them if they are due.
In your case it would help to have them sorted by execution time, so you don't have to loop through all of them on every iteration.
Time measurement can be done with the gettimeofday() C function for low accuracy, or with better functions for higher accuracy, e.g. QueryPerformanceCounter() on Windows - I don't know the equivalent for Mac.
Just make a "timer" mechanism yourself, that's the best, fastest and most flexible way.
-> make an array of events (linked to each object event happens to) (std::vector in c++/STL)
-> sort the array on time (std::sort in c++/STL)
-> then just loop on the array and trigger the object action/method upon time inside a range.
Roughly that gives in C++:
#include <vector>
#include <algorithm>

// action upon data + data itself
class Object {
public:
    Object(Data d) : data(d) {}
    void Action() { display(data); } // display() = however you render a data point
    Data data;
};

// event time + object upon which the event acts
class Event {
public:
    Event(double t, Object o) : time(t), object(o) {}
    // useful for std::sort
    bool operator<(const Event& e) const { return time < e.time; }
    double time;
    Object object;
};

// init
std::vector<Event> myEvents;
myEvents.push_back(Event(1.0, Object(data0)));
//...
myEvents.push_back(Event(54.0, Object(data10000)));

// could be removed if push_back() is guaranteed to be in the correct order
std::sort(myEvents.begin(), myEvents.end());

// the way you handle time... period is for some fuzziness/animation ?
const double period = 0.5;
const double endTime = 60;
std::vector<Event>::iterator itLastFirstEvent = myEvents.begin();
for (double currtime = 0.0; currtime < endTime; currtime += 0.1)
{
    for (std::vector<Event>::iterator itEvent = itLastFirstEvent; itEvent != myEvents.end(); ++itEvent)
    {
        if (itEvent->time < currtime - period)
            itLastFirstEvent = itEvent; // already past: so that the next loop's start is optimised
        else if (itEvent->time < currtime + period)
            itEvent->object.Action(); // action speaks louder than words
        else
            break; // as it's sorted, there won't be any more ticks this loop
    }
}
PS: About custom events, you might want to read up on delegates in C++ and function/method pointers.
If you are using native C++, you should look at the Timers section of the Windows API on the MSDN website. They should tell you exactly what you need to know.