Arduino locks up - C++

The intention of the program below is to periodically output a data frame over serial. The period is set by a timed interrupt that fires every second.
The code worked on Arduino IDE version 0022, but on 1.0 I can't get it working. When using the timer routine and maxFrameLength is set to 0x40 or higher, the controller locks up. When using 0x39 or lower, the program keeps running (indicated by the flashing LED).
What's going wrong here and why? Is it a bug? Am I doing something wrong?
I'm using http://code.google.com/p/arduino-timerone/downloads/detail?name=TimerOne-v9.zip for the timer routine on a Mega1280.
#include "TimerOne.h"

#define LED 13
#define maxFrameLength 0x40

boolean stateLED = true;
byte frame[ maxFrameLength ];

void sendFrame() {
  digitalWrite( LED, stateLED );
  stateLED = !stateLED;
  Serial.write( frame, maxFrameLength ); // ptr + number of bytes to send
}

void setup() {
  pinMode( LED, OUTPUT );
  Timer1.initialize( 1000000 ); // initialize timer1 with a 1 second period
  Timer1.attachInterrupt( sendFrame );
  Serial.begin( 9600 );
}

void loop() {
}

There are a number of issues that may or may not be causing the problem, but they should be fixed in any case. These comments are general in nature; I am not familiar with Arduino or its library specifically.
It is almost certainly inappropriate to issue a Serial.write() call in an interrupt handler (ISR). If the Serial object is interrupt driven, it will have an associated buffer. If that buffer is not large enough to take all the data, the function may block, which is a no-no in an interrupt handler. Moreover, if the timer interrupt is a higher priority than the serial interrupt, you will cause a deadlock when Serial.write() blocks. 0x40 (64 bytes) seems like a likely buffer size for serial output, so that is likely the primary cause. If you can increase the buffer size, that may make it work, but it remains a bad idea to perform potentially blocking operations in an ISR.
Even if serial output is polled rather than interrupt driven, your interrupt handler will take rather long, which is also a bad idea (though probably not the issue in this case): at 9600,n,8,1, each character takes 10 bit times, so 64 characters take 64 × 10 / 9600 ≈ 67 milliseconds to clear the transmit register.
stateLED and frame are shared variables (between interrupt and main contexts) and should therefore be declared volatile.
It is not shown in your fragment how and where frame is updated, but since the interrupt will occur asynchronously, any update to frame should be in a critical section - with at least the timer1 interrupt disabled.
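As an illustration of both points, here is a minimal sketch (untested on your hardware) of the usual pattern: the ISR only sets a flag, and loop() copies the shared buffer inside a brief critical section before doing the slow Serial.write():

#include "TimerOne.h"

#define LED 13
#define maxFrameLength 0x40

volatile boolean frameReady = false; // handshake flag shared between ISR and loop()
boolean stateLED = true;             // only touched by the ISR
byte frame[ maxFrameLength ];

void tick() {                        // ISR: keep it short, never block here
  digitalWrite( LED, stateLED );
  stateLED = !stateLED;
  frameReady = true;                 // defer the slow work to loop()
}

void setup() {
  pinMode( LED, OUTPUT );
  Timer1.initialize( 1000000 );      // 1 second period
  Timer1.attachInterrupt( tick );
  Serial.begin( 9600 );
}

void loop() {
  if ( frameReady ) {
    byte snapshot[ maxFrameLength ];
    noInterrupts();                  // brief critical section around the shared buffer
    memcpy( snapshot, frame, maxFrameLength );
    frameReady = false;
    interrupts();
    Serial.write( snapshot, maxFrameLength ); // may busy-wait, but that is harmless outside the ISR
  }
}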
Update
In the light of A.H.'s response I downloaded the source code and took a look. Serial is a static object of class HardwareSerial defined in \arduino-1.0\hardware\arduino\cores\arduino\HardwareSerial.cpp/.h. The transmit buffer length is indeed 64 bytes, and the HardwareSerial::write() function does "busy-wait" if the buffer is full. You will need to modify and re-build the source to extend the buffer or to add a non-blocking version of write().
This, however, is certainly the cause of the lock-up: the buffer never empties, because the transmit interrupt cannot be serviced while the timer1 interrupt handler is running.

The release notes for 1.0 tell you:
Serial transmission is now asynchronous - that is, calls to Serial.print(), etc. add data to an outgoing buffer which is transmitted in the background. Also, the Serial.flush() command has been repurposed to wait for outgoing data to be transmitted, rather than dropping received incoming data.
Therefore your code worked before 1.0, because HardwareSerial::write(uint8_t) (which is the foundation for all output) had no buffer and returned only after the byte had been transmitted.
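If you need the old blocking behaviour somewhere outside an interrupt handler, the repurposed Serial.flush() restores it (a two-line sketch; do not do this inside the ISR, for the reasons given above):

Serial.write( frame, maxFrameLength ); // returns as soon as the data is buffered
Serial.flush();                        // 1.0 semantics: wait until it has actually been transmitted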
I find it astonishing that the reference page for Serial does not mention this behaviour.

Related

Is HAL_UARTEx_RxEventCallback Size parameter calculated programmatically or by hardware

I'm implementing UART DMA with the STM HAL library and I want to know whether the message size is counted by hardware (counting clock ticks until the line is idle, for example) or by some software method (something like strlen). So, if Size in
HAL_UARTEx_RxEventCallback(UART_HandleTypeDef *huart, uint16_t Size)
is counted by hardware, I can send data in pure HEX format; but if it is calculated by something like strlen, I may run into problems when the data contains 0x00 and would have to send data in ASCII.
I've tried to do some research in the generated code in Keil but failed (maybe I didn't try hard enough), so maybe somebody can help me.
If you are using UART DMA, it is calculated by hardware.
If you check the call hierarchy of HAL_UARTEx_RxEventCallback using your IDE, you can see how the Size variable is calculated.
The function is executed in the following flow (depending on the version of the HAL driver, it may differ slightly):
A UART idle interrupt occurs.
HAL_UART_IRQHandler() is called.
If DMA mode is enabled, HAL_UARTEx_RxEventCallback(huart, (huart->RxXferSize - huart->RxXferCount)) is called.
Therefore, the Size parameter is calculated as (huart->RxXferSize - huart->RxXferCount).
huart->RxXferSize is the value set when the RX DMA transfer is initialized.
huart->RxXferCount is (huart->hdmarx)->Instance->NDTR.
NDTR is maintained by hardware: it counts down as the DMA transfers data to memory, so it reflects the remaining buffer size regardless of the data's content!
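For illustration, here is a minimal sketch of the usual idle-line reception setup (the buffer name rx_buf and handler process_frame are placeholders of mine, and HAL_UARTEx_ReceiveToIdle_DMA requires a reasonably recent HAL version):

#include "stm32f4xx_hal.h" // adjust to your device family

#define RX_BUF_LEN 64
static uint8_t rx_buf[RX_BUF_LEN];
extern UART_HandleTypeDef huart1;

void process_frame(uint8_t *data, uint16_t len); /* placeholder for your handler */

void start_reception(void)
{
    /* Arm a DMA reception that ends when the buffer fills or the line
       goes idle; this sets huart->RxXferSize to RX_BUF_LEN. */
    HAL_UARTEx_ReceiveToIdle_DMA(&huart1, rx_buf, RX_BUF_LEN);
}

void HAL_UARTEx_RxEventCallback(UART_HandleTypeDef *huart, uint16_t Size)
{
    /* Size = RxXferSize - RxXferCount, taken from the DMA's NDTR register,
       so 0x00 bytes in the payload are counted like any others. */
    process_frame(rx_buf, Size);
    HAL_UARTEx_ReceiveToIdle_DMA(huart, rx_buf, RX_BUF_LEN); /* re-arm for the next frame */
}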

QSerialPort continuous reading accumulative delay

I am trying to communicate between a Qt application and an Arduino. The flow is like this: the Qt application sends a '1' and the Arduino is expected to respond with some data (the data string is long, around 300 characters). The Qt application sends '1' at a rate of around 5 Hz (every 200 ms).
The problem I am facing is an accumulating delay in the Arduino-to-Qt communication. That is, the data I receive from the Arduino is not recent data, although the frequency of data coming from the Arduino is still 5 Hz (as expected); the data is just not recent. This delay keeps increasing with time. I believe there is some problem with a buffer somewhere.
What I tried (QSerialPort serialPort; is my device port):
serialPort.clear() and serialPort.flush(), to clear the serial communication buffer, but the issue still persists.
Increasing and decreasing the baud rate on both ends.
Reducing the message length on the Arduino side; the delay shrinks significantly, but the accumulated delay still shows up after a long time.
Here is my code snippet:
connect(timer_getdat, SIGNAL(timeout()), this, SLOT(Rec()));
timer_getdat->start(200);
where Rec() is the function where I do communication part.
In Rec():
serialPort.write("1", 2);
// serialPort.waitForBytesWritten(100);
long long bytes_available = serialPort.bytesAvailable();
if (bytes_available >= 1)
{
serialPort.readLine(temp, 500);
serialPort.flush(); // no change
serialPort.clear(); // no change by .clear() also
}
I have been stuck on this issue for quite a long time. The above code snippet is what I think is relevant, but if anyone needs more clarification, I can reveal more of the code.
I also encountered the same issue, and yes, QSerialPort.clear() and QSerialPort.flush() don't help. Try doing readAll().
So change the part in your Rec() function to something like this:
serialPort.write("1", 2);
long long bytes_available = serialPort.bytesAvailable();
if (bytes_available >= 1)
{
serialPort.readLine(temp, 500);
serialPort.readAll(); // This reads all the data in buffer at once and clears the queue.
}
Even on the Qt forums I didn't find an answer to this; I was playing with all the functions available in the QSerialPort class, and readAll() seems to work.
About readAll(), the Qt documentation says:
Reads all remaining data from the device, and returns it as a byte array.
My explanation for the resolution is that readAll() captures all of the data in the communication buffer and empties it.
This should be the job of the clear() function, but apparently readAll() is what works.
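If you want to avoid the problem rather than patch it, one option (a sketch, untested against your setup; MyClass and handleResponse are names of mine) is to let Qt drive the reads through the readyRead() signal instead of polling on a timer, so stale responses can never pile up in the OS buffer:

// In the class that owns serialPort:
connect(&serialPort, &QSerialPort::readyRead, this, &MyClass::onReadyRead);

void MyClass::onReadyRead()
{
    while (serialPort.canReadLine())             // consume every complete line
    {
        QByteArray line = serialPort.readLine(); // so nothing stale is left behind
        handleResponse(line);                    // placeholder for your processing
    }
}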

C++ streaming UDP data into a queue?

I am streaming data as a string over UDP, into a Socket class inside Unreal engine. This is threaded, and runs in the background.
My read function is:
float translate;

void FdataThread::ReceiveUDP()
{
    uint32 Size;
    TArray<uint8> ReceivedData;
    if (ReceiverSocket->HasPendingData(Size))
    {
        int32 Read = 0;
        ReceivedData.SetNumUninitialized(FMath::Min(Size, 65507u));
        ReceiverSocket->RecvFrom(ReceivedData.GetData(), ReceivedData.Num(), Read, *targetAddr);
    }
    FString str = FString(bytesRead, UTF8_TO_TCHAR((const UTF8CHAR *)ReceivedData));
    translate = FCString::Atof(*str);
}
I then read the translate variable from another class, on a Tick or a timer.
My test case sends an incrementing number from another application.
If I print this number from inside the above Read function, it looks as expected, counting up incrementally.
When I print it from the other thread, it is missing some of the numbers.
I believe this is because I read it on the Tick, so some data is missed due to processing time.
My question is:
Is there a way to queue the incoming data, so that when I pull a value, it is the next incremental value and not just the most recent one? What is the best way to go about this?
Thank you, please let me know if I have not been clear.
Is this the complete code? ReceivedData isn't used after it's filled with data from the socket. Instead, an (in this code) undefined variable 'buffer' is being used.
Also, it seems that the while loop could run multiple times, overwriting old data in the ReceivedData buffer. Add some debugging messages to see whether RecvFrom actually reads all bytes from the socket. I believe it reads only one 'packet'.
Finally, especially when you're using UDP sockets over the network, note that the UDP protocol isn't guaranteed to actually deliver its packets. However, I doubt this is causing your problems if you're using it on a single computer or a local network.
Your read loop doesn't make sense. You are reading and throwing away all datagrams but the last in any given sequence that happen to be in the socket receive buffer at the same time. The translate call should be inside the loop, and the loop should be while(true), or while (running), or similar.
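On the queueing question itself: Unreal ships a lock-free TQueue that is safe for one producer and one consumer, which matches a receive thread feeding the game thread. A sketch under that assumption (the member name TranslateQueue is mine):

// Shared member, e.g. on FdataThread:
TQueue<float, EQueueMode::Spsc> TranslateQueue;

// Receive thread: enqueue every parsed value instead of overwriting one float.
void FdataThread::ReceiveUDP()
{
    // ... RecvFrom into ReceivedData and build str, as before ...
    TranslateQueue.Enqueue(FCString::Atof(*str));
}

// Game thread (Tick): drain whatever has arrived, in order.
float Value;
while (TranslateQueue.Dequeue(Value))
{
    // each Value is the next received datagram; none are skipped
}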

What does "blocking" mean for boost::asio::write?

I'm using boost::asio::write() to write data from a buffer to a COM port. It's a serial port with a baud rate of 115200, which means (as far as my understanding goes) that I can effectively write 11520 bytes/s to the port.
Now I have a fairly big chunk of data (10015 bytes) that I want to write. I think this should take a little less than a second to actually be written to the port. But boost::asio::write() returns about 300 microseconds after the call, reporting 10015 bytes transferred. I think this is impossible at that baud rate?
So my question is: what is it actually doing? Really writing to the port, or perhaps to some other kind of buffer, which is later written out to the port?
I'd like the write() to only return after all the bytes have really been written to the port.
EDIT with code example:
The problem is that I always run into the timeout for the future/promise, because sending the message alone takes more than 100 ms; but I think the timer should only start after the last byte is sent, because write() is supposed to block?
void serial::write(std::vector<uint8_t> message) {
    // create a new promise for the request
    promise = new boost::promise<deque<uint8_t>>;
    boost::unique_future<deque<uint8_t>> future = promise->get_future();

    // --- Write message to serial port --- //
    boost::asio::write(serial_, boost::asio::buffer(message));

    // wait for data or timeout
    if (future.wait_for(boost::chrono::milliseconds(100)) == boost::future_status::timeout) {
        cout << "ACK timeout!" << endl;
        // delete pointer and set it to 0
        delete promise;
        promise = nullptr;
    }
    // delete pointer and set it to 0 after getting a message
    delete promise;
    promise = nullptr;
}
How can I achieve this?
Thanks!
In short, boost::asio::write() blocks until all data has been written to the stream; it does not block until all data has been transmitted. To wait until data has been transmitted, consider using tcdrain().
Each serial port has both a receive and transmit buffer within kernel space. This allows the kernel to buffer received data if a process cannot immediately read it from the serial port, and allows data written to a serial port to be buffered if the device cannot immediately transmit it. To block until the data has been transmitted, one could use tcdrain(serial_.native_handle()).
These kernel buffers allow for the write and read rates to exceed that of the transmit and receive rates. However, while the application may write data at a faster rate than the serial port can transmit, the kernel will transmit at the appropriate rates.
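A sketch of the combination (POSIX-only, since tcdrain() comes from termios; the function name write_and_drain is mine):

#include <termios.h> // tcdrain()

void serial::write_and_drain(std::vector<uint8_t> message)
{
    // Returns once the data is in the kernel's transmit buffer...
    boost::asio::write(serial_, boost::asio::buffer(message));
    // ...then block until the UART has actually clocked it out.
    ::tcdrain(serial_.native_handle());
}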

IRQ 8 isn't working... HW or SW?

First, I program for vintage computer groups. What I write is specifically for MS-DOS and not Windows, because that's what people are running. My current program is for later systems and not the 8086 line, so the plan was to use IRQ 8. This allows me to set the interrupt rate in binary values from 2/second to 8192/second (2, 4, 8, 16, etc...).
Only, for some reason, on the newer old systems (OK, that sounds weird), it doesn't seem to be working. In emulation, and on the 386 system I have access to, it works just fine, but on the P3 system I have (GA-6BXC MB w/P3 800 CPU), it just doesn't work.
The code
setting up the interrupt
disable();
oldrtc = getvect(0x70);      // read the vector for IRQ 8
setvect(0x70, countdown);    // set the vector for IRQ 8 to our handler
outportb(0x70, 0x8a);
y = inportb(0x71) & 0xf0;
outportb(0x70, 0x8a);
outportb(0x71, y | _MRATE_); // adjustable value, set for 64 interrupts per second
outportb(0x70, 0x8b);
y = inportb(0x71);
outportb(0x70, 0x8b);
outportb(0x71, y | 0x40);
enable();
at the end of the interrupt
outportb(0x70, 0x0c);
inportb(0x71);        // reading register C resets the interrupt
outportb(0xa0, 0x20); // EOI to the slave PIC (IRQ 8 lives on the slave)
outportb(0x20, 0x20); // EOI to the master PIC; there are 2 PICs on AT machines and later
When closing program down
disable();
outportb(0x70, 0x8b);
y = inportb(0x71);
outportb(0x70, 0x8b);
outportb(0x71, y & 0xbf);
setvect(0x70, oldrtc);
enable();
I don't see anything in the code that could be causing the problem, but it just doesn't seem to make sense. While I don't completely trust the information, MSD "does" report IRQ 8 as the RTC counter and says it is present and working just fine. Is it possible that later systems have moved the vector? Everything I find says that IRQ 8 is vector 0x70, but the interrupt never triggers on my Pentium III system. Is there some way to find out whether the vectors have been changed?
It's been a LONG time since I've done any MS-DOS code and I don't think I ever worked with this particular interrupt (I'm pretty sure you can just read a memory location to fetch the time too, and IRQ 0 can be used to trigger you at an interval as well, so maybe that's better). Anyway, given my rustiness, forgive me for kinda link dumping.
http://wiki.osdev.org/Real_Time_Clock - the bottom of that page has someone saying they've had problems on some machines too. RBIL suggests it might be a BIOS thing: http://www.ctyme.com/intr/rb-7797.htm
Without DOS, I'd just capture IRQ 0 itself, remap all of the IRQs to my own interrupt numbers, and change the timing as needed. I've done that somewhat recently! I think that's a bad idea on DOS though; this looks more recommended for that: http://www.ctyme.com/intr/rb-2443.htm
Anyway though, I betcha it has to do with the BIOS thing:
"Notes: Many BIOSes turn off the periodic interrupt in the INT 70h handler unless in an event wait (see INT 15/AH=83h,INT 15/AH=86h).. May be masked by setting bit 0 on I/O port A1h "