Error in ffmpeg when reading from UDP stream - c++

I'm trying to process frames from a UDP stream using ffmpeg. Everything runs fine for a while, but av_read_frame() always eventually returns either AVERROR_EXIT (Immediate exit requested) or -5 (Error number -5 occurred), even though the stream should still be running fine. Right before the error it always prints the following messages to the console:
[mpeg2video # 0caf6600] ac-tex damaged at 14 10
[mpeg2video # 0caf6600] Warning MVs not available
[mpeg2video # 0caf6600] concealing 800 DC, 800 AC, 800 MV errors in I frame
(the numbers in the message vary from run to run)
I suspect the error is related to calling av_read_frame() too quickly. If I let it run as fast as possible, I usually get an error within 10-20 frames, but if I put a sleep before reading, it runs fine for a minute or so and then exits with an error. I realize this is hacky and assume there is a better solution. Bottom line: is there a way to dynamically check whether av_read_frame() is ready to be called, or a way to suppress the error?
Pseudocode of what I'm doing is below. Thanks in advance for the help!
void getFrame()
{
    // wait here?? seems hacky...
    // boost::this_thread::sleep(boost::posix_time::milliseconds(25));
    int av_read_frame_error = av_read_frame(m_input_format_context, &m_input_packet);
    if (av_read_frame_error == 0) {
        // DO STUFF - this all works fine when it gets here
    }
    else {
        // error
        char errorBuf[AV_ERROR_MAX_STRING_SIZE];
        av_make_error_string(errorBuf, AV_ERROR_MAX_STRING_SIZE, av_read_frame_error);
        cout << "FFMPEG Input Stream Exit Code: " << av_read_frame_error << " Message: " << errorBuf << endl;
    }
}

Incoming frames need to be handled in a callback function, so the mechanism should be such that a callback gets called whenever there is a new frame. That way there is no need to manually fine-tune the delay.
Disclaimer: I have not used the ffmpeg APIs.

Related

How can I mock a serial port (UART) on Linux?

Preface
So basically I'm doing a project for an extracurricular activity, and it involves having a microcontroller read some data from a CAN bus and then send that data over a UART serial connection to a Banana Pi Zero M2 that's currently running Arch Linux.
The microcontroller is probably an Arduino of some kind (most likely a modified version of one). The problem is the constant change of the project, and since I want my code to survive longer than a year, part of that is creating tests. I've been looking for a way to emulate the serial connection made from the Banana Pi (on device file /dev/ttyS0) to the microcontroller, so that I don't have to constantly compile the code for the Banana Pi and set everything up just to check that "hello" is being sent correctly over the serial line. The thing is, I haven't found a way to successfully virtualize a serial port.
Attempts
So I've looked at the options a bit and I found socat. Apparently it can redirect sockets and all kinds of connections, and in particular emulate baud rates (although personally that's not really relevant; what matters most is giving credence to the tests in the eyes of my colleagues). So I spent an evening trying to learn three things at once, and after a lot of problems and a lot of learning I came to this:
void Tst_serialport::sanityCheck(){
    socat.startDetached("socat -d -d pty,rawer,b115200,link=/tmp/banana pty,rawer,b115200,link=/tmp/tango");
    sleep(1);
    _store = new store("/tmp/banana");
    QCOMPARE(_store->dev, "/tmp/banana");
}
void Tst_serialport::checkSendMessage(){
    QSerialPort tango;
    tango.setPortName("/tmp/tango");
    tango.setBaudRate(QSerialPort::Baud115200);
    tango.setDataBits(QSerialPort::Data8);
    tango.setParity(QSerialPort::NoParity);
    tango.setStopBits(QSerialPort::OneStop);
    tango.setFlowControl(QSerialPort::NoFlowControl);
    tango.open(QIODevice::ReadWrite);
    tango.write("Hello");
    tango.waitForBytesWritten();
    tango.close();
    QCOMPARE(_store->lastMessage, "Hello");
}
void Tst_serialport::closeHandle(){
    socat.close();
}
QTEST_MAIN(Tst_serialport)
The intent here is that in sanityCheck a fake serial device is created on /tmp/banana and /tmp/tango, with I/O redirected between them, so that when _store starts listening on banana and I send a message to tango, that same message is received inside the store object.
The thing is, the function that waits for messages isn't triggering, even though I've managed to make it work when I had an Arduino plugged directly into my computer.
Before continuing, I'm sorry that the code is kind of a mess. I'm new to both Qt and C++, although I have some experience with C, which made me use a lot of C constructs where I should have stuck with Qt. Unfortunately I haven't had much time to refactor everything into a cleaner version of the code.
Here's the other side
int store::setupSerial() {
    QSerialPort* serial = new QSerialPort();
    serial->setPortName(this->dev);
    serial->setBaudRate(QSerialPort::Baud115200);
    serial->setDataBits(QSerialPort::Data8);
    serial->setStopBits(QSerialPort::OneStop);
    serial->setParity(QSerialPort::NoParity);
    serial->setFlowControl(QSerialPort::NoFlowControl);
    if (!serial->open(QIODevice::ReadOnly)) {
        qDebug() << "Can't open " << this->dev << ", error code" << serial->error();
        return 1;
    }
    this->port = serial;
    connect(this->port, &QSerialPort::readyRead, this, &store::handleReadyRead);
    connect(this->port, &QSerialPort::errorOccurred, this, &store::handleError);
    return 0;
}
store::store( char * dev, QObject *parent ): QObject(parent){
    if (dev == nullptr){
        // TODO: fix this (use a better function, preferably one handled by Qt)
        int len = sizeof(char)*strlen(DEFAULT_DEVICE)+1;
        this->dev = (char*)malloc(len);
        strcpy(this->dev, DEFAULT_DEVICE);
    }
    // copy dev to this->dev
    else{
        int len = sizeof(char)*strlen(dev)+1;
        this->dev = (char*)malloc(len);
        strcpy(this->dev, dev);
    }
    setupSerial();
}
void store::handleReadyRead(){
    bufferMessage = port->readAll();
    serialLog.append(bufferMessage);
    // can be optimized using pointers, or a variable as a "bookmark", whether an int or a pointer
    lastMessage.append(bufferMessage);
    uint32_t size = (int)lastMessage[0] | (int)lastMessage[1] << 8 | (int)lastMessage[2] << 16 | (int)lastMessage[3] << 24;
    int8_t eof = 0x00;
    if((bool)((long unsigned int)lastMessage.size() == size+sizeof(size)+sizeof(eof)) && ((bool) lastMessage[lastMessage.size()-1] == eof)){
        parseJson();
        // clear lastMessage
        lastMessage.clear();
    }
}
//... some other code here
If you're wondering what the output looks like, well:
11:23:40: Starting /home/micron/sav/Trabalhos/2022-2023/FormulaStudent/VolanteAlphaQT/build-VolanteAlphaQT-Desktop-Testing/bin/VolanteAlphaQT_testes...
********* Start testing of Tst_serialport *********
Config: Using QtTest library 5.15.8, Qt 5.15.8 (x86_64-little_endian-lp64 shared (dynamic) release build; by GCC 12.2.1 20230201), arch unknown
PASS : Tst_serialport::initTestCase()
2023/02/15 11:23:40 socat[6248] N PTY is /dev/pts/2
2023/02/15 11:23:40 socat[6248] N PTY is /dev/pts/3
2023/02/15 11:23:40 socat[6248] N starting data transfer loop with FDs [5,5] and [7,7]
PASS : Tst_serialport::sanityCheck()
FAIL! : Tst_serialport::checkSendMessage() Compared values are not the same
Actual (_store->lastMessage): ""
Expected ("Hello") : Hello
Loc: [../VolanteAlphaQT_1/test/tst_serialport.cpp(35)]
PASS : Tst_serialport::closeHandle()
PASS : Tst_serialport::cleanupTestCase()
Totals: 4 passed, 1 failed, 0 skipped, 0 blacklisted, 1005ms
********* Finished testing of Tst_serialport *********
11:23:41: /home/micron/sav/Trabalhos/2022-2023/FormulaStudent/VolanteAlphaQT/build-VolanteAlphaQT-Desktop-Testing/bin/VolanteAlphaQT_testes exited with code 1
As usual, per most of my questions, it's not very descriptive: the readyRead signal simply never triggers, which in turn leaves lastMessage blank.
Conclusion / TL;DR
So what am I doing wrong? Why is the readyRead signal not being triggered? Is there a better way to simulate/mock a serial connection?
Well, I found the solution.
Apparently it wasn't a socat problem. The readyRead signal is much slower than I had in mind, and when I slept, I actually froze the process, event loop included, so the signal could not be delivered. Because the signal takes some time to arrive even after the buffer itself is ready, the QCOMPARE ran right after the "unfreeze", making the stall useless.
The actual solution was rather simple: I placed a _store->waitForReadyRead(); so I could wait for the signal to be delivered without freezing the process.

Timing inconsistent for CAN message transmission

I am attempting to write a program in C++ that does some video processing using OpenCV and then uses the information from the video to send a message onto a CAN bus using PCAN-basic.
When the CAN bus code runs by itself, the timing of the messages is pretty good, i.e. the system I am talking to does not complain. However, when the OpenCV part of the program is introduced, the cycle time intermittently increases to an unacceptable value, which causes issues.
I am using chrono::high_resolution_clock to compare a start time and the time now. If this comparison is >10ms then I am sending a CAN message and restarting the clock.
I have tried the following:
Updated OpenCV to latest version (in hope that it would run faster/free up resources)
Set the thread priority of the thread that the CAN message function lives in, to be of a higher priority. Set as 0, which I assume is the highest priority.
Lowered the comparison to send out the message at 8ms, this was intended as a workaround, not a fix.
// Every 10ms send a CAN signal
chrono::duration<double, milli> xyTimeDifference = timeNow - xyTimer;
xyTimerCompare = xyTimeDifference.count();
if (xyTimerCompare > 10)
{
    if (xyTimerCompare > 16)
    {
        cout << "xyTimerCompare went over by: " << xyTimerCompare << endl;
    }
    result = CAN_Write(PCAN_USBBUS1, &joystickXY);
    // Reset the timer
    xyTimer = chrono::high_resolution_clock::now();
    if (result != PCAN_ERROR_OK)
    {
        break;
    }
}
Is there a better method to obtain a reliable signal to within +/- 1ms?

Unix socket hangs on recv, until I place/remove a breakpoint anywhere

[TL;DR version: the code below hangs indefinitely on the second recv() call both in Release and Debug mode. In Debug, if I place or remove a breakpoint anywhere in the code, it makes the execution continue and everything behaves normally]
I'm coding a simple client-server communication using UNIX sockets. The server is in C++ while the client is in python. The connection (TCP socket on localhost) gets established no problem, but when it comes to receiving data on the server side, it hangs on the recv function. Here is the code where the problem happens:
bool server::readBody(int csock) // csock is the socket file descriptor
{
    int bytecount;
    // protobuf-related variables
    google::protobuf::uint32 siz;
    kinMsg::request message;
    // if the code is working, the client will send false
    // I initialize to true to be sure that the message is actually read
    message.set_endconnection(true);
    // First, read the 4-character header carrying the data size
    char buffer_hdr[5];
    if((bytecount = recv(csock, buffer_hdr, 4, MSG_WAITALL)) == -1)
        ::std::cerr << "Error receiving data " << ::std::endl;
    buffer_hdr[4] = '\0';
    siz = atoi(buffer_hdr);
    // Second, read the data. The code hangs here !!
    char buffer[siz];
    if((bytecount = recv(csock, (void *)buffer, siz, MSG_WAITALL)) == -1)
        ::std::cerr << "Error receiving data " << errno << ::std::endl;
    // Finally, process the protobuf message
    google::protobuf::io::ArrayInputStream ais(buffer, siz);
    google::protobuf::io::CodedInputStream coded_input(&ais);
    google::protobuf::io::CodedInputStream::Limit msgLimit = coded_input.PushLimit(siz);
    message.ParseFromCodedStream(&coded_input);
    coded_input.PopLimit(msgLimit);
    if (message.has_endconnection())
        return !message.endconnection();
    return false;
}
As can be seen in the code, the protocol is such that the client will first send the number of bytes in the message in a 4-character array, followed by the protobuf message itself. The first recv call works well and does not hang. Then, the code hangs on the second recv call, which should be recovering the body of the message.
Now, for the interesting part. When run in Release mode, the code hangs indefinitely and I have to kill either the client or the server. It does not matter whether I run it from my IDE (qtcreator), or from the CLI after a clean build (using cmake/g++).
When I run the code in Debug mode, it also hangs at the same recv() call. Then, if I place or remove a breakpoint ANYWHERE in the code (before or after that line), it starts again and works perfectly: the server receives the data and reads the correct message.endconnection() value before returning out of readBody(). The breakpoint I place to trigger this behavior is not necessarily hit. Since readBody() is in a loop (my C++ server waits for requests from the python client), the same thing happens again on the next iteration, and I have to place or remove a breakpoint anywhere in the code, not necessarily one that is hit, in order to get past that recv() call. The loop looks like this:
bool connection = true;
// server waiting for client connection
if (!waitForConnection(connectionID)) std::cerr << "Error accepting connection" << ::std::endl;
// main loop
while(connection)
{
    if((bytecount = recv(connectionID, buffer, 4, MSG_PEEK)) == -1)
    {
        ::std::cerr << "Error receiving data " << ::std::endl;
    }
    else if (bytecount == 0)
        break;
    try
    {
        if(readBody(connectionID))
        {
            sendResponse(connectionID);
        }
        // if the client is requesting disconnection, break out of the loop
        else
        {
            std::cout << "Disconnection requested by client. Exiting ..." << std::endl;
            connection = false;
        }
    }
    catch(...)
    {
        std::cerr << "Error receiving message from client" << std::endl;
    }
}
Finally, as you can see, when the program returns from readBody(), it sends back another message to the client, which processes it and prints in the standard output (python code working, not shown because the question is already long enough). From this last behavior, I can conclude that the protocol and client code are OK. I tried to put sleep instructions at many points to see whether it was a timing problem, but it did not change anything.
I searched all over Google and SO for a similar problem, but did not find anything. Help would be much appreciated !
The solution is to not use any flags: call recv() with 0 for the flags, or just use read() instead of recv().
You are asking the socket for data that is not there. The recv() expects 10 bytes, but the client only sent 6. MSG_WAITALL states clearly that the call should block until 10 bytes are available in the stream.
If you don't use any flags, the call will succeed with a bytecount of 6, which has the exact same effect as MSG_DONTWAIT, without the potential side effects of non-blocking calls.
I did the test on the GitHub project; it works.
The solution is to replace MSG_WAITALL with MSG_DONTWAIT in the recv() calls. It now works fine. To summarize, it makes the recv() calls non-blocking, which makes the whole code work.
However, this still raises many questions, the first of which being: why was it working with this weird breakpoint-changing trick?
If the socket was blocking in the first place, one could assume it was because there was no data on the socket. Let's consider both situations:
There is no data on the socket, which is the reason the blocking recv() call was not returning. Changing it to a non-blocking recv() call would then, in the same situation, trigger an error. If not, the protobuf deserialization would afterwards fail when trying to deserialize from an empty buffer. But it does not...
There is data on the socket. Then why on earth would it block in the first place?
Obviously there is something I don't get about sockets in C, and I'd be very happy if somebody has an explanation for this behavior!

OpenCV hangs on VideoCapture grab()

I am trying to write a program to capture images from 2 webcams simultaneously (or near simultaneously), but sometimes when I run my program it starts to hang. What I mean by that is that the FPS drops so low that there is a good 5-10 seconds between each image capture. I decided to make a much sparser program using only the code I thought might be causing the problem, so I could isolate the source. Sure enough, my small program causes the same problem, but I am stumped as to what is causing it. Most of the time it runs without fault, but sometimes it exhibits the same hanging symptoms anywhere from 10 seconds to 1 minute into running. No errors are raised, but from the output of my program I am confident that VideoCapture's grab() is the line slowing down.
I am running this in OS X, with two external webcams through a USB hub, OpenCV version 10.4.11_1, and in C++. I don't think the USB hub is causing the problem. Quite frankly, it is so slow to tell when it will and will not freeze that it is difficult to troubleshoot. I would get rid of the USB hub, but I need it in the end and I know bandwidth is not the issue. I can run multiple (I have tried 4) instances of a different OpenCV test program that captures from a single camera with all cameras attached through the USB hub.
I wonder if there is an internal buffer in the VideoCapture class that is filling up, or some other internal issue because I can't seem to find much documentation on VideoCapture's grab() function and find out what it is actually taking so long to do.
Thanks for reading my lengthy description. Here is my code:
int main(){
    VideoCapture vc1(1);
    VideoCapture vc2(2);
    Timer tmr;
    Mat img1;
    Mat img2;
    namedWindow("WINDOW1", CV_WINDOW_NORMAL);
    namedWindow("WINDOW2", CV_WINDOW_NORMAL);
    waitKey(1);
    int count = 0;
    while (true){
        tmr.reset();
        vc1.grab();
        vc2.grab();
        cout << "Double grab time(" << ++count << "): " << tmr.elapsed() << endl;
        tmr.reset();
        vc1.retrieve(img1);
        vc2.retrieve(img2);
        cout << "Double retrieve time: " << tmr.elapsed() << endl;
        imshow("WINDOW1", img1);
        imshow("WINDOW2", img2);
        if (waitKey(25) == 27){
            cout << "Quit" << endl;
            break;
        }
    }
    return 0;
}
using this timer class from a SO post:
class Timer
{
public:
    Timer() : beg_(clock_::now()) {}
    void reset() { beg_ = clock_::now(); }
    double elapsed() const {
        return std::chrono::duration_cast<second_>
            (clock_::now() - beg_).count();
    }
private:
    typedef std::chrono::high_resolution_clock clock_;
    typedef std::chrono::duration<double, std::ratio<1> > second_;
    std::chrono::time_point<clock_> beg_;
};
and compiled with:
clang++ `pkg-config --libs --cflags opencv` -o test test.cpp
I just can't imagine I am the only one who has or will run into this, so if I find anything out I will be sure to post it. In the meantime I would be eternally grateful for some help.
Thanks
I have a partial solution in case anyone else runs into this problem. I was able to stop my program from freezing by using different webcams. Originally I used two webcams called "Creative Live! Cam Chat HD, 5.7MP" which otherwise seem to work perfectly; after replacing them with two Logitech C920s I was able to get it to work. (Or so it seems. I have been using them for about 1.5 months now, and the one time I saw it freeze like before was while I was adding code to resize the video based on CLI input in a multithreaded program. I was also getting segfaults, so it is not exactly strong evidence.)
If I find out why the Logitech cameras work when the others didn't I will post a reply, but my advice would be to try using different webcams if anyone runs into a similar problem.

Boost asio exits with code 0 for no reason. Setting a breakpoint AFTER the problematic statement solves it

I'm writing a TCP server-client pair with boost asio. It's very simple and synchronous.
The server is supposed to transmit a large amount of binary data through several recursive calls to a function that transmits a packet of data over TCP. The client does the analogue, reading and appending the data through a recursive function that reads incoming packets from the socket.
However, in the middle of receiving this data, most times (around 80%) the client just stops recursion suddenly, always before one of the read calls (shown below). It shouldn't be able to do this, given that there are several other statements and function calls after the recursion.
size_t bytes_transferred = m_socket.read_some(boost::asio::buffer(m_fileReadBuffer, m_fileReadBuffer.size()));
m_fileReadBuffer is a boost::array of char, with size 4096 (although I have tried other buffer formats as well with no success).
There is absolutely no way I can conceive of deducing why this is happening.
The program exits immediately, so I can't pass an error code to read_some and read any error messages, since that would need to happen after the read_some statement
No exceptions are thrown
No errors or warnings on compile/runtime
If I put breakpoints inside the recursive function, the problem never happens (transfer completes successfully)
If I put breakpoints after the transfer, or trap the execution in a while loop after the transfer, the problem never happens and there is no sign of anything wrong
Also, it's important to note that the server ALWAYS successfully sends all the data. On top of that, the problem always happens at the very end of transmissions: I can send 8000 bytes and it will exit when around 6000 or 7000 bytes have been transferred, and I can send 8000000 bytes and it will exit when something like 7996000 bytes have been transferred.
I can provide any code necessary, I just have no idea of where the problem could be. Below is the recursive read function on the client:
void TCP_Client::receive_volScan_message()
{
    try
    {
        // If the transfer is complete, exit this loop
        if(m_rollingSum >= (std::streamsize)m_fileSize)
        {
            std::cout << "File transfer complete!\n";
            std::cout << m_fileSize << " " << m_fileData.size() << "\n\n";
            return;
        }
        boost::system::error_code error;
        // Transfer isn't complete, so we read some more
        size_t bytes_transferred = m_socket.read_some(boost::asio::buffer(m_fileReadBuffer, m_fileReadBuffer.size()));
        std::cout << "Received " << (std::streamsize)bytes_transferred << " bytes\n";
        // Copy bytes_transferred into the m_fileData vector. Only copies up to m_fileSize bytes
        if(bytes_transferred + m_rollingSum > m_fileSize)
        {
            //memcpy(&m_fileData[m_rollingSum], &m_fileReadBuffer, m_fileSize-m_rollingSum);
            m_rollingSum += m_fileSize - m_rollingSum;
        }
        else
        {
            //memcpy(&m_fileData[m_rollingSum], &m_fileReadBuffer, bytes_transferred);
            m_rollingSum += (std::streamsize)bytes_transferred;
        }
        std::cout << "rolling sum: " << m_rollingSum << std::endl;
        this->receive_volScan_message();
    }
    catch(...)
    {
        std::cout << "whoops";
    }
}
As a suggestion, I have tried changing the recursive loops to for loops on both the client and server. The problem persists, somehow. The only difference is that now instead of exiting 0 before the previously mentioned read_some call, it exits 0 at the end of one of the for loop blocks, just before it starts executing another for loop pass.
EDIT: As it turns out, the error doesn't occur when I build the client in debug mode in my IDE.
I haven't completely understood the problem, however I have managed to fix it entirely.
The root of the issue was that on the client, the boost::asio::read calls made main exit with code 0 if the server messages hadn't arrived yet. That means that a simple
while(m_socket.available() == 0)
{
    ;
}
before all read calls completely prevented the problem. Both in debug and release mode.
This is very strange because, as I understand it, these functions should just block until there is something to read, and even if they encountered errors they should return zero.
I think the debug/release discrepancy happened because m_readBuffer wasn't initialized to anything when the read calls took place. This made the read call return some form of silent error. In debug builds, uninitialized variables get automatically set to NULL, stealthily fixing my problem.
I have no idea why adding a while loop after the transfer prevented the issue, though. Neither why it normally happened on the end of transfers, after the m_readBuffer had been set and successfully used several times.
On top of that, I have never seen this type of "crash" before, where the program simply exits with code 0 in a random place, with no errors or exceptions thrown.