Wait simulation time OMNeT++ - C++

I need to modify UdpEchoApp (from the INET package) so that it waits "x" seconds of simulation time before it sends back the packet. I tried doing something like:
simtime_t before;
//something to calculate
simtime_t after;
if (after-before > x) {continue}
else {do something and then recalculate after}
but this crashes Qtenv. Is there something I can do to resolve this problem?
I also post the function that sends back the received packet:
void UdpEchoApp::socketDataArrived(UdpSocket *socket, Packet *pk)
{
    // determine its source address/port
    L3Address remoteAddress = pk->getTag<L3AddressInd>()->getSrcAddress();
    int srcPort = pk->getTag<L4PortInd>()->getSrcPort();
    pk->clearTags();
    pk->trim();

    // statistics
    numEchoed++;
    emit(packetSentSignal, pk);

    // send back
    socket->sendTo(pk, remoteAddress, srcPort);
}
Thank you

Your code is wrong: simulation time is advanced by the simulation environment according to incoming events. In other words, simulation time changes outside the standard methods that define a module's behavior.
To simulate a delay during the simulation, one has to use a self-message.
In short:
In socketDataArrived():
remember the packet to send and the remoteAddress in a buffer,
schedule a self-message "x" seconds later (using scheduleAt()).
In handleMessageWhenUp(), when your self-message arrives, take the packet from the buffer and send it; a sketch of this is below.
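For illustration, here is a minimal sketch of that approach inside UdpEchoApp, assuming (as in the stock INET 4.x class) that handleMessageWhenUp() otherwise just forwards messages to the socket. The members delayedPk, delayedAddr, delayedPort, echoDelay and delayTimer are hypothetical additions, not part of INET: delayTimer would be a cMessage created in initialize(), and echoDelay the configured delay.
void UdpEchoApp::socketDataArrived(UdpSocket *socket, Packet *pk)
{
    // remember what to echo instead of sending it back immediately
    delayedPk = pk;
    delayedAddr = pk->getTag<L3AddressInd>()->getSrcAddress();
    delayedPort = pk->getTag<L4PortInd>()->getSrcPort();

    // wake up "echoDelay" seconds of simulation time later
    scheduleAt(simTime() + echoDelay, delayTimer);
}

void UdpEchoApp::handleMessageWhenUp(cMessage *msg)
{
    if (msg == delayTimer) {
        // the delay has elapsed: echo the buffered packet now
        delayedPk->clearTags();
        delayedPk->trim();
        numEchoed++;
        emit(packetSentSignal, delayedPk);
        socket.sendTo(delayedPk, delayedAddr, delayedPort);
        delayedPk = nullptr;
    }
    else
        socket.processMessage(msg);  // original behavior for everything else
}
This handles only one outstanding packet at a time; for back-to-back packets you would keep a queue of (packet, address, port) entries instead of single members.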

Related

How to add a delay time to start a new reading?

I'm trying to develop a smart lock with an RFID module, an ESP8266, and integration with SinricPro (which bridges the lock to Alexa and Google Home).
It turns out that I'm having a very annoying problem, and I would like your help to solve it!
In this function, I execute what needs to be executed after the card passes the RFID module:
void handleRFID() {
    if (RFID_card_is_not_present()) return;

    String card_id = get_RFID_card_ID();
    bool RFID_card_is_valid = validate_RFID_card(card_id);

    if (RFID_card_is_valid) {
        Serial.printf("The RFID card \"%s\" is valid.\r\n", card_id.c_str());
        unlock_with_auto_relock();
        send_lock_state(false);
        // Insert a timeout here, to start reading the card again only after TEMP_AUTOLOCK is over
    } else {
        Serial.printf("The RFID card \"%s\" is not valid.\r\n", card_id.c_str());
        // Insert a delay time here, to start reading the card again only after X time (something like 3 seconds)
    }
}
If I run the code as it is, my serial monitor is spammed with the valid/not-valid message and a shower of requests is sent to the SinricPro API, because nothing limits how often the RFID module is read for X time, the way a delay() call would.
But unfortunately I can't use delay(), so that's out of the question.
So basically what I want to do is limit the speed at which the cards are read by inserting some wait time where I put the comments in the code. Can someone help me?
For better understanding, I'll make my code available, and the RFID module library I'm using!
Project code: https://github.com/ogabrielborges/smartlock-rfid-iot
MFRC522 library: https://github.com/miguelbalboa/rfid
My serial monitor is spammed with messages saying the card is registered/not registered because I don't know how to limit how often the module reads the tag.
[Screenshot: serial monitor showing the repeated valid/not-valid messages while the tag is held on the sensor]
There is a "blink without delay" example in the Arduino environment (or there used to be?).
You can find a similar example here: https://docs.arduino.cc/built-in-examples/digital/BlinkWithoutDelay
Basically what you do is remember the current time in a global and check if enough time has passed for the next check:
These are your globals:
unsigned long previousMillis = 0; // will store last time you did your magic
const long interval = 1000; // interval at which to do your magic (milliseconds, 1000 = 1 sec)
Then do this somewhere in a function:
unsigned long currentMillis = millis();
if (currentMillis - previousMillis >= interval) {
    // save the last time you did your magic
    previousMillis = currentMillis;
    // Do your magic here
}
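Applied to the handleRFID() above, a minimal sketch could look like this. lastReadMillis and readInterval are new globals added for illustration, and it is assumed that TEMP_AUTOLOCK is expressed in milliseconds:
unsigned long lastReadMillis = 0;  // last time a card was handled
unsigned long readInterval = 0;    // how long to ignore the reader after a card

void handleRFID() {
    // rollover-safe check: skip the reader until the chosen interval has elapsed
    if (millis() - lastReadMillis < readInterval) return;
    if (RFID_card_is_not_present()) return;

    String card_id = get_RFID_card_ID();
    if (validate_RFID_card(card_id)) {
        Serial.printf("The RFID card \"%s\" is valid.\r\n", card_id.c_str());
        unlock_with_auto_relock();
        send_lock_state(false);
        lastReadMillis = millis();
        readInterval = TEMP_AUTOLOCK;  // ignore cards until the auto-relock period is over
    } else {
        Serial.printf("The RFID card \"%s\" is not valid.\r\n", card_id.c_str());
        lastReadMillis = millis();
        readInterval = 3000;           // ignore cards for about 3 seconds
    }
}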

Serial Communication Timeout in Qt with Arduino

I want to implement a timeout mechanism such that if the Arduino doesn't read the command within one second, it results in a timeout, the new command is discarded, and the program keeps running fine.
But right now, the program hangs if any new command is sent during the execution of the old one.
This is the timeout section of my code:
QByteArray requestData = myRequest.toLocal8Bit();
serial.write(requestData);

if (serial.waitForBytesWritten(waitTime)) {
    if (serial.waitForReadyRead(myWaitTimeout)) {
        QByteArray responseData = serial.readAll();
        while (serial.waitForReadyRead(10))
            responseData += serial.readAll();
        QString response(responseData);
        emit this->response(response);
    } else {
        emit timeout(tr("Wait Read Request Timed Out %1")
                         .arg(QTime::currentTime().toString()));
    }
} else {
    emit timeout(tr("Wait Write Request Timed Out %1")
                     .arg(QTime::currentTime().toString()));
}
The timeout signal is connected to a slot that just prints the timeout message and does nothing.
How can I fix this so that I can achieve what I target?
You are using a blocking approach to transmit data via the serial port. Unless you are using threads, I don't see any possibility to send additional data while the previous loop is still executing.
BTW: your program will, for example, block indefinitely if the Arduino manages to keep sending something within 10 ms periods.
Add a couple of qDebug() << "I'm here"; lines to check where your code gets stuck; it is possible that you are blocking somewhere outside the code you pasted here. An alternative is to use a debugger.
What if the previous 'command' you tried to send is still in the buffer? You'll end up filling the output buffer. Check with qDebug() how many bytes are in the output buffer before writing more data to it; it should be empty (qint64 QIODevice::bytesToWrite() const).
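As a rough illustration, a check like this could go at the top of the serial::write() from the question before queueing more data (a sketch only, reusing the question's serial and requestData; qDebug() needs <QDebug>):
// Don't queue a new command while the previous one is still in the output buffer.
qDebug() << "bytes still waiting to be written:" << serial.bytesToWrite();
if (serial.bytesToWrite() > 0) {
    emit timeout(tr("Previous command still pending at %1")
                     .arg(QTime::currentTime().toString()));
    return;
}
serial.write(requestData);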

How to simulate time delay in a network

Let's say that we need to send the message "Hello World" over UDP between two PCs, A and B. Computer A will send the message to B with some time delay (constant or time-varying). My first attempt to simulate this scenario is to use a sleep function, but that freezes the entire application. Another solution is to use multiple threads: call sleep() in the thread responsible for getting the data, store the data in a global variable, and access that variable from another thread. In this solution there might be difficulties synchronizing the threads, so to get around that I write the received data to a txt file and read it from the other thread. My question is: what is the proper way to carry out this trivial experiment? I would appreciate an answer with some C++ pseudocode.
Edit:
My attempt to solve it is as follows. For the Master side (client), the pseudocode is:
Master masterObj;

int main()
{
    masterObj.initialize();
    masterObj.connect();

    while (masterObj.isConnected() == true) {
        get currentTime and data;   // currentTime here is sendTime
        datagram = currentTime + data;
        masterObj.send(datagram);
    }
}
For the Slave side (server), the pseudocode is:
Slave slaveObj;

int main()
{
    slaveObj.initialize();
    slaveObj.connect();
    slaveObj.slaveThreadInit();

    while (slaveObj.isConnected() == true) {
        slaveObj.getData();
    }
}

Slave::receive()
{
    get currentTime and call it receivedTime;
    get datagram from Master;
    this->slaveThread(receivedTime + datagram);
}

Slave::slaveThread(info)
{
    sleep(1 msec);
    info = receivedTime + datagram;
    get time delay;
    time delay = sendTime - receivedTime;
    extract data from datagram;
    insert data and time delay in txt file (call it txtSlaveData);
}

Slave::getData()
{
    read from txtSlaveData;
}
As you can see, I'm using an independent thread which inside it, I'm using sleep(). I'm not sure if this approach is applicable.
A simple way to simulate sending UDP datagrams from one computer to another is to send the datagrams through the loopback interface to another - or the same - process on the same computer. That will function exactly like the real thing except for the delay.
You can simulate the delay either when sending or receiving. Once you've implemented it one way, the other should be trivial. I think delaying the sending side is the more natural option. Here is an approach for the more general problem of simulating network delay. See the last paragraph for a trivial experiment of sending only one datagram.
In case you choose delaying on send, what you could do is, instead of sending, store the datagram in a queue, along with the time it should be sent (target = now + delay).
Then, in another thread, wait for a datagram to become available, then sleep for max(target - now, 0). After sleeping, send the datagram and move on to the next one. Wait if the queue is empty.
To simulate jitter, randomize the delay. To let jitter simulation send the datagrams in non-sequential order, use a priority queue, sorted by the target send-time.
Remember to synchronize the access to the queue.
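A rough sketch of that delayed-send queue, using std::thread, a mutex and a condition variable; sendUdp() is a hypothetical placeholder for the real sendto() call:
#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <vector>

using Clock = std::chrono::steady_clock;

struct Delayed {
    Clock::time_point target;   // when the datagram should really be sent
    std::string payload;
    bool operator>(const Delayed& other) const { return target > other.target; }
};

std::priority_queue<Delayed, std::vector<Delayed>, std::greater<Delayed>> pending;
std::mutex mtx;
std::condition_variable cv;

void sendUdp(const std::string& payload)   // placeholder for the real sendto()
{
    std::cout << "sending: " << payload << "\n";
}

// Called by the application instead of sending directly.
void delayedSend(std::string payload, std::chrono::milliseconds delay)
{
    std::lock_guard<std::mutex> lock(mtx);
    pending.push({Clock::now() + delay, std::move(payload)});
    cv.notify_one();   // wake the sender in case this datagram is due earlier
}

// Sender thread: sleeps until the earliest target time, then really sends.
void senderLoop()
{
    for (;;) {
        std::unique_lock<std::mutex> lock(mtx);
        cv.wait(lock, [] { return !pending.empty(); });
        auto target = pending.top().target;
        if (cv.wait_until(lock, target) == std::cv_status::timeout) {
            Delayed d = pending.top();   // deadline reached: send the earliest datagram
            pending.pop();
            lock.unlock();
            sendUdp(d.payload);
        }
        // otherwise a new datagram arrived; loop and re-check the earliest target
    }
}
Usage would be something like std::thread(senderLoop).detach(); followed by delayedSend("Hello World", std::chrono::milliseconds(50)); randomizing the delay gives jitter, and the priority queue keeps an earlier-due datagram from being stuck behind a later-due one.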
For a single datagram, you can do it much more simply: just start a new thread, sleep for the delay, send, and end the thread. No need for synchronization. Here's C++ code for that:
std::thread([]{
    std::this_thread::sleep_for(delay);
    send("foo");
}).detach();

UDP real time sending and receiving on Linux on command from control computer

I am currently working on a project written in C++ involving UDP real time connection. I receive UDP packets from a control computer containing commands to start/stop an infinite while loop that reads data from an IMU and sends that data to the control computer.
My problem is the following: first I implemented an exit condition from the loop using recvfrom() and read(), but the control computer sends a UDP packet every second, which was delaying the whole loop and made sending the data in the desired 5 ms interval impossible.
I tried to fix this problem by using fcntl(fd, F_SETFL, O_NONBLOCK); and using only read(), which actually works fine, but I am unsure whether this is a wise idea, since I am not checking for errors anymore. Is there any elegant way to solve this problem? I thought about using Pthreads or something like that; however, I have never worked with threads or parallel programming, so I would have to spend some time learning that.
I appreciate any advice on that problem you could give me.
Here is a code example:
// include
...

int main() {
    RNet cmd;                 // RNet: struct that contains all the information of the UDP header and the command
    RNet* pCmd = &cmd;
    ssize_t b;
    int fd2;
    struct sockaddr_in snd;   // sender is the control computer
    socklen_t length;

    // further declaration of variables, connecting to socket, etc...
    ...

    fcntl(fd2, F_SETFL, O_NONBLOCK);

    while (1)
    {
        // read messages from control computer
        if ((b = read(fd2, pCmd, 19)) > 0) {
            memcpy(&cmd, pCmd, b);
        }

        // transmission
        while (cmd.CLout.MotionCommand == 1)   // MotionCommand: 1 - send messages; 0 - do nothing
        {
            if (time_elapsed >= 5)   // elapsed time in ms
            {
                // update sensor values
                ...
                // sendto()
                ...
                // update control time, timestamp, etc.
                ...
            }
            if (recvfrom(fd2, pCmd, (int)sizeof(pCmd), 0, (struct sockaddr*) &snd, &length) < 0) {
                perror("error receiving data");
                return 0;
            }
            // checking Control Model Command
            if ((b = read(fd2, pCmd, 19)) > 0) {
                memcpy(&cmd, pCmd, b);
            }
        }
    }
}
I really like the "blocking calls on multiple threads" design. It enables you to have distinct independent tasks, and you don't have to worry about how each task can disturb another. It can have some drawbacks but it is usually a good fit for many needs.
To do that, just use pthread_create to create a new thread for each task (you may keep the main thread for one task). In your case, you should have one thread to receive commands and another one to send your data. You also need the receiving thread to notify the sending thread of the commands. To do that, you can use some synchronization tool, like a mutex.
Overall, you should have your receiving thread blocking on recvfrom, and the sending thread waiting for a signal from the mutex (waiting for the mutex to be freed, technically). When the receiving thread receives a start command, it signals the mutex and goes back to recvfrom (optionally you can set a variable to provide more information to the other thread).
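For illustration, a minimal sketch of that design, assuming the RNet struct and the UDP socket fd2 from the question are already set up; sendSensorData() is a hypothetical placeholder for the IMU read plus sendto():
#include <pthread.h>
#include <sys/socket.h>
#include <unistd.h>

static pthread_mutex_t cmdLock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cmdChanged = PTHREAD_COND_INITIALIZER;
static int motionCommand = 0;     // last command received from the control computer

extern int fd2;                   // UDP socket, created and bound as in the question
void sendSensorData();            // hypothetical: read the IMU and sendto() the control computer

// Receiving thread: blocks on recvfrom() and updates the shared command.
void* receiverThread(void*)
{
    RNet cmd;
    for (;;) {
        if (recvfrom(fd2, &cmd, sizeof(cmd), 0, nullptr, nullptr) > 0) {
            pthread_mutex_lock(&cmdLock);
            motionCommand = cmd.CLout.MotionCommand;
            pthread_cond_signal(&cmdChanged);   // wake the sender if it is waiting
            pthread_mutex_unlock(&cmdLock);
        }
    }
    return nullptr;
}

// Sending thread: waits until the command is 1, then streams data every 5 ms.
void* senderThread(void*)
{
    for (;;) {
        pthread_mutex_lock(&cmdLock);
        while (motionCommand != 1)
            pthread_cond_wait(&cmdChanged, &cmdLock);
        pthread_mutex_unlock(&cmdLock);

        sendSensorData();
        usleep(5000);   // 5 ms period; a clock-based wait would drift less
    }
    return nullptr;
}
Both threads would be started from main() with pthread_create(), and the socket can stay blocking, so the O_NONBLOCK workaround and its silent error handling are no longer needed.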
As a comment, remember that UDP are 1-to-many, thus your code here will react to any packet sent to you (even from some random or malicious host). You may want to filter with the remote sockaddr after recvfrom, or use connect + recv. It depends on what you want.
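For the filtering variant, a small sketch; the control computer's address and port here are hypothetical placeholders:
#include <arpa/inet.h>
#include <netinet/in.h>

// Restrict the UDP socket to the control computer; stray datagrams are then
// dropped by the kernel and plain recv() can be used instead of recvfrom().
struct sockaddr_in ctrl{};
ctrl.sin_family = AF_INET;
ctrl.sin_port = htons(5000);                           // hypothetical control port
inet_pton(AF_INET, "192.168.1.10", &ctrl.sin_addr);    // hypothetical control-PC address
connect(fd2, (struct sockaddr*)&ctrl, sizeof(ctrl));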

What does blocking mean for boost::asio::write?

I'm using boost::asio::write() to write data from a buffer to a COM port. It's a serial port with a baud rate of 115200, which means (as far as my understanding goes) that I can effectively write 11520 bytes/s, or 11.52 KB/s, to the port.
Now I have a quite big chunk of data (10015 bytes) which I want to write. I think this should take a little less than a second to really write to the port. But boost::asio::write() returns only 300 microseconds after the call, reporting 10015 bytes transferred. I think this is impossible at that baud rate?
So my question is: what is it actually doing? Really writing it to the port, or to some other kind of buffer, which later writes it to the port?
I'd like write() to only return after all the bytes have really been written to the port.
EDIT with code example:
The problem is that I always run into the timeout for the future/promise, because sending the message alone takes more than 100 ms, but I think the timer should only start after the last byte is sent, because write() is supposed to block?
void serial::write(std::vector<uint8_t> message) {
    // create a new promise for the request
    promise = new boost::promise<deque<uint8_t>>;
    boost::unique_future<deque<uint8_t>> future = promise->get_future();

    // --- Write message to serial port --- //
    boost::asio::write(serial_, boost::asio::buffer(message));

    // wait for data or timeout
    if (future.wait_for(boost::chrono::milliseconds(100)) == boost::future_status::timeout) {
        cout << "ACK timeout!" << endl;
        // delete pointer and set it to 0
        delete promise;
        promise = nullptr;
    }
    // delete pointer and set it to 0 after getting a message
    delete promise;
    promise = nullptr;
}
How can I achieve this?
Thanks!
In short, boost::asio::write() blocks until all data has been written to the stream; it does not block until all data has been transmitted. To wait until data has been transmitted, consider using tcdrain().
Each serial port has both a receive and transmit buffer within kernel space. This allows the kernel to buffer received data if a process cannot immediately read it from the serial port, and allows data written to a serial port to be buffered if the device cannot immediately transmit it. To block until the data has been transmitted, one could use tcdrain(serial_.native_handle()).
These kernel buffers allow for the write and read rates to exceed that of the transmit and receive rates. However, while the application may write data at a faster rate than the serial port can transmit, the kernel will transmit at the appropriate rates.
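As a rough sketch against the question's serial::write(), assuming a POSIX system where serial_ is a boost::asio::serial_port:
#include <termios.h>   // tcdrain()

// Copies the data into the kernel's transmit buffer (returns almost immediately).
boost::asio::write(serial_, boost::asio::buffer(message));

// Blocks until the kernel has actually clocked every byte out of the port,
// so the 100 ms ACK timer only starts after transmission is complete.
::tcdrain(serial_.native_handle());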