RtAudio + Qt : duplex not working with RME Fireface on Linux - c++

This is my first post on Stackoverflow, I hope I'm doing this right.
I'm new to C++.
I've been playing with RtAudio and Qt (on linux, desktop and raspberry pi).
Backend is ALSA.
Audio output worked fine both on my desktop computer (RME Fireface UCX in Class Compliant mode) and on the Raspberry Pi 3 (with HifiBerry and PiSound).
Lately, I tried to add audio input support to my program.
I read the duplex tutorial on RtAudio website, and tried to implement it inside my code.
As soon as I added the input StreamParameters to openStream, I got very crackly sound.
Even so, the StreamStatus is OK in the callback...
I tried to create an empty C++ project, and simply copy the RtAudio tutorial.
Sadly, the problem remains...
I added this to my project file in Qt Creator
LIBS += -lpthread -lasound
I think my issue is similar to this one, but I couldn't find how (or if) it was solved
I tried different buffer sizes (from 64 to 4096 and more); the cracks become less audible as the buffer size increases, but they are still present.
Do you know of anything specific to RtAudio in duplex mode that might solve this? It seems the buffer size is not handled the same way in duplex mode.
Edit:
Out of curiosity (and despair), I tried even lower buffer sizes with the canonical example from the RtAudio help: it turns out that buffer sizes of 1, 2, 4 and 8 frames remove the cracks...
As soon as I use 16 frames, the sound is awful.
Even 15 frames works; I really don't get what's going on.
Code Sample :
RtAudio::StreamOptions options;
options.flags |= RTAUDIO_SCHEDULE_REALTIME;
RtAudio::StreamParameters params_in, params_out;
params_in.deviceId = 3;
params_in.nChannels = 2;
params_out.deviceId = 3;
params_out.nChannels = 2;
With output only, it works:
try {
    audio.openStream(
        &params_out,
        NULL,
        RTAUDIO_SINT16,
        48000,
        &buffer_frames,
        &inout,
        (void *) &buffer_bytes,
        &options
    );
}
catch (RtAudioError& e) {
    std::cout << "Error while opening stream" << std::endl;
    e.printMessage();
    exit(0);
}
Cracks appear when changing NULL to &params_in :
try {
    audio.openStream(
        &params_out,
        &params_in,
        RTAUDIO_SINT16,
        48000,
        &buffer_frames,
        &inout,
        (void *) &buffer_bytes,
        &options
    );
}
catch (RtAudioError& e) {
    std::cout << "Error while opening stream" << std::endl;
    e.printMessage();
    exit(0);
}
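For reference, the inout callback used in both calls above is the pass-through callback from the RtAudio duplex tutorial, roughly as below (buffer_bytes holds frames * channels * bytes per sample, recomputed after openStream since buffer_frames may be adjusted):

int inout(void *outputBuffer, void *inputBuffer, unsigned int /*nBufferFrames*/,
          double /*streamTime*/, RtAudioStreamStatus status, void *data)
{
    // Duplex pass-through: copy the input buffer straight to the output buffer.
    // "data" points to buffer_bytes; this is only valid when an input stream is open.
    if (status) std::cout << "Stream over/underflow detected." << std::endl;
    unsigned int *bytes = (unsigned int *) data;
    memcpy(outputBuffer, inputBuffer, *bytes);
    return 0;
}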
Thank you for your help

Answering my own question.
I redid my tests from scratch on the Raspberry Pi 3 / PiSound.
It turns out I must have done something wrong the first time. The canonical example from RtAudio (and the input implementation I did for my program) work fine at buffer sizes of 64, 128, etc.
The desktop build still has crackly sound, but works with odd buffer sizes (like 25, 30 or 27). The problem most likely comes from the Fireface UCX, which is not well supported on Linux (even in Class Compliant mode).
Thank you for your help, and sorry if I wasted your time.


How can I mock a serial port UART on Linux

Preface
So basically I'm doing a project for an extracurricular activity, and it involves having a microcontroller read some data from a CAN bus and then send that data over a UART serial connection to a Banana Pi Zero M2 that's currently running Arch Linux.
The microcontroller is probably an Arduino of some kind (most likely a modified version of one). The problem is that the project changes constantly, and since I want my code to survive longer than a year, part of that is writing tests. I've been looking for a way to emulate the serial connection made from the Banana Pi (on the device file /dev/ttyS0) to the microcontroller, so that I don't have to constantly compile the code for the Banana Pi and set everything up just to check that "hello" is being sent correctly over the serial line. The thing is, I haven't found a way to successfully virtualize a serial port.
Attempts
So I've looked a bit at the options and found socat. Apparently it can redirect sockets and all kinds of connections, and can even emulate baud rates (although personally that's not really relevant; giving my colleagues confidence in the tests is what matters most). So I spent an evening trying to learn three things at once, and after a lot of problems and a lot of learning I came up with this:
void Tst_serialport::sanityCheck(){
    socat.startDetached("socat -d -d pty,rawer,b115200,link=/tmp/banana, pty,rawer,b115200,link=/tmp/tango");
    sleep(1);
    _store = new store("/tmp/banana");
    QCOMPARE(_store->dev, "/tmp/banana");
}

void Tst_serialport::checkSendMessage(){
    QSerialPort tango;
    tango.setPortName("/tmp/tango");
    tango.setBaudRate(QSerialPort::Baud115200);
    tango.setDataBits(QSerialPort::Data8);
    tango.setParity(QSerialPort::NoParity);
    tango.setStopBits(QSerialPort::OneStop);
    tango.setFlowControl(QSerialPort::NoFlowControl);
    tango.open(QIODevice::ReadWrite);
    tango.write("Hello");
    tango.waitForBytesWritten();
    tango.close();
    QCOMPARE(_store->lastMessage, "Hello");
}

void Tst_serialport::closeHandle(){
    socat.close();
}

QTEST_MAIN(Tst_serialport)
The intent here is that sanityCheck creates fake serial devices at /tmp/banana and /tmp/tango that redirect I/O between each other, so that when _store starts listening on banana and I send a message to tango, I should receive that same message inside the store object.
The thing is, the function that waits for messages isn't triggering, even though I've managed to make it work when I had an Arduino plugged directly into my computer.
Before continuing, I'm sorry the code is a bit of a mess; I'm new to both Qt and C++, and although I have some experience with C (which made me use a lot of C idioms where I should have stuck with Qt), I haven't had much time to refactor everything into a cleaner version.
Here's the other side
int store::setupSerial() {
    QSerialPort* serial = new QSerialPort();
    serial->setPortName(this->dev);
    serial->setBaudRate(QSerialPort::Baud115200);
    serial->setDataBits(QSerialPort::Data8);
    serial->setStopBits(QSerialPort::OneStop);
    serial->setParity(QSerialPort::NoParity);
    serial->setFlowControl(QSerialPort::NoFlowControl);
    if (!serial->open(QIODevice::ReadOnly)) {
        qDebug() << "Can't open " << this->dev << ", error code" << serial->error();
        return 1;
    }
    this->port = serial;
    connect(this->port, &QSerialPort::readyRead, this, &store::handleReadyRead);
    connect(this->port, &QSerialPort::errorOccurred, this, &store::handleError);
    return 0;
}

store::store( char * dev, QObject *parent ): QObject(parent){
    if (dev == nullptr){
        // TODO: fix this (use a better function, preferably one handled by Qt)
        int len = sizeof(char)*strlen(DEFAULT_DEVICE)+1;
        this->dev = (char*)malloc(len);
        strcpy(this->dev, DEFAULT_DEVICE);
    }
    //copy dev to this->dev
    else{
        int len = sizeof(char)*strlen(dev)+1;
        this->dev = (char*)malloc(len);
        strcpy(this->dev, dev);
    }
    setupSerial();
}

void store::handleReadyRead(){
    bufferMessage = port->readAll();
    serialLog.append(bufferMessage);
    //can be optimized using pointers or even a variable as a "bookmark", whether an int or a pointer
    lastMessage.append(bufferMessage);
    uint32_t size = (int)lastMessage[0] | (int)lastMessage[1] << 8 | (int)lastMessage[2] << 16 | (int)lastMessage[3] << 24;
    int8_t eof = 0x00;
    if((bool)((long unsigned int)lastMessage.size() == size+sizeof(size)+sizeof(eof)) && ((bool) lastMessage[lastMessage.size()-1] == eof)){
        parseJson();
        //clear lastMessage
        lastMessage.clear();
    }
}
//... some other code here
If you're wondering what the output / result is, well:
11:23:40: Starting /home/micron/sav/Trabalhos/2022-2023/FormulaStudent/VolanteAlphaQT/build-VolanteAlphaQT-Desktop-Testing/bin/VolanteAlphaQT_testes...
********* Start testing of Tst_serialport *********
Config: Using QtTest library 5.15.8, Qt 5.15.8 (x86_64-little_endian-lp64 shared (dynamic) release build; by GCC 12.2.1 20230201), arch unknown
PASS : Tst_serialport::initTestCase()
2023/02/15 11:23:40 socat[6248] N PTY is /dev/pts/2
2023/02/15 11:23:40 socat[6248] N PTY is /dev/pts/3
2023/02/15 11:23:40 socat[6248] N starting data transfer loop with FDs [5,5] and [7,7]
PASS : Tst_serialport::sanityCheck()
FAIL! : Tst_serialport::checkSendMessage() Compared values are not the same
Actual (_store->lastMessage): ""
Expected ("Hello") : Hello
Loc: [../VolanteAlphaQT_1/test/tst_serialport.cpp(35)]
PASS : Tst_serialport::closeHandle()
PASS : Tst_serialport::cleanupTestCase()
Totals: 4 passed, 1 failed, 0 skipped, 0 blacklisted, 1005ms
********* Finished testing of Tst_serialport *********
11:23:41: /home/micron/sav/Trabalhos/2022-2023/FormulaStudent/VolanteAlphaQT/build-VolanteAlphaQT-Desktop-Testing/bin/VolanteAlphaQT_testes exited with code 1
As with most of my questions, it's not very descriptive: it basically just never triggers the readyRead signal, which in turn leaves lastMessage blank.
Conclusion / TL;DR
So what am I doing wrong? Why is the readyRead signal not being triggered? Is there a better way to simulate/mock a serial connection?
Well, I found the solution.
Apparently it wasn't a socat problem; the readyRead signal is way slower than I had in mind, and when I slept, I actually froze the whole process. Because the signal takes some time to arrive even after the buffer itself is ready, the QCOMPARE ran right after the "unfreeze", making the stall useless.
The actual solution was rather simple: I placed a _store->waitForReadyRead(); call so I could wait for the signal to be delivered without freezing the process.
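Roughly, the fixed test looks like this (a sketch, assuming store forwards waitForReadyRead() to its QSerialPort or exposes the port directly):

void Tst_serialport::checkSendMessage(){
    QSerialPort tango;
    tango.setPortName("/tmp/tango");
    tango.setBaudRate(QSerialPort::Baud115200);
    tango.setDataBits(QSerialPort::Data8);
    tango.setParity(QSerialPort::NoParity);
    tango.setStopBits(QSerialPort::OneStop);
    tango.setFlowControl(QSerialPort::NoFlowControl);
    tango.open(QIODevice::ReadWrite);
    tango.write("Hello");
    tango.waitForBytesWritten();

    // Block until the receiving port has data and readyRead() has been handled,
    // instead of sleeping for a fixed amount of time.
    _store->waitForReadyRead(); // assumed to forward to QSerialPort::waitForReadyRead()

    tango.close();
    QCOMPARE(_store->lastMessage, "Hello");
}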

FFmpeg - RTCP BYE packets

I'm working on a C++ project which depends on a RAK5206 Wi-Fi electronic board. I'm using the ffmpeg library to obtain the video and audio stream, and I have an issue where I can start and stop the stream four times, but when I want to start it for the fifth time I get an error. The error description is "Invalid data found when processing input"; it happens when I call the avformat_open_input function, and then I need to restart the electronic board, reconnect to Wi-Fi, etc.
Using Wireshark, I figured out that VLC works, and that it sends RTCP BYE packets when TEARDOWN is called. I wonder if the error is related to them, because my application is not sending any. How can I set things up to force ffmpeg to send BYE packets?
I found some declarations in the rtpenc.h file for options to set, and tried them when connecting, but obviously without success.
The code that I used for setting options and opening input:
AVDictionary* stream_opts = 0;
av_dict_set(&stream_opts, "rtpflags", "send_bye", 0);
avformat_open_input(&format_ctx, url.c_str(), NULL, &stream_opts);
Make sure you are calling the av_write_trailer function from your application.
If not, please debug and check it.
/* Write the trailer, if any. The trailer must be written before you
* close the CodecContexts open when you wrote the header; otherwise
* av_write_trailer() may try to use memory that was freed on
* av_codec_close(). */
av_write_trailer(oc);
Call flow code snippet from the ffmpeg source:
av_write_trailer ->
....
ret = s->oformat->write_trailer(s);
} else {
s->oformat->write_trailer(s);
}
...
.write_trailer = rtp_write_trailer ->
...
if (s1->pb && (s->flags & FF_RTP_FLAG_SEND_BYE))
rtcp_send_sr(s1, ff_ntp_time(), 1)
I resolved the issue by adding flag 16 (binary: 10000) to the AVFormatContext object's flags field.
formatCtx->flags = formatCtx->flags | 16;
According to rtpenc.h:
#define FF_RTP_FLAG_SEND_BYE 16
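For readability, the same resolution can be written with the symbolic name; a sketch, noting that FF_RTP_FLAG_SEND_BYE lives in ffmpeg's internal rtpenc.h, so its value is repeated here:

#define FF_RTP_FLAG_SEND_BYE 16 // value copied from ffmpeg's internal rtpenc.h

AVFormatContext* formatCtx = avformat_alloc_context();
formatCtx->flags |= FF_RTP_FLAG_SEND_BYE; // same effect as formatCtx->flags = formatCtx->flags | 16
avformat_open_input(&formatCtx, url.c_str(), NULL, &stream_opts);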

Record in Linux with QAudioInput and play it in Windows

For my purpose, I want to record sound in raw format (samples only), 8 kHz, 16-bit (little endian), 1 channel. Then I would like to transfer those samples to Windows and play them with QAudioOutput. So I have two separate programs: one records voice with QAudioInput, and the other takes a file containing the samples and plays it with QAudioOutput. Below is my source code for creating the QAudioInput and QAudioOutput.
//Initialize audio
void AudioBuffer::initializeAudio()
{
    m_format.setFrequency(8000); //set frequency to 8000 Hz
    m_format.setChannels(1); //set channels to mono
    m_format.setSampleSize(16); //set sample size to 16 bit
    m_format.setSampleType(QAudioFormat::UnSignedInt); //sample type as unsigned integer
    m_format.setByteOrder(QAudioFormat::LittleEndian); //byte order
    m_format.setCodec("audio/pcm"); //set codec as plain audio/pcm

    QAudioDeviceInfo infoIn(QAudioDeviceInfo::defaultInputDevice());
    if (!infoIn.isFormatSupported(m_format))
    {
        //Default format not supported - trying to use nearest
        m_format = infoIn.nearestFormat(m_format);
    }

    QAudioDeviceInfo infoOut(QAudioDeviceInfo::defaultOutputDevice());
    if (!infoOut.isFormatSupported(m_format))
    {
        //Default format not supported - trying to use nearest
        m_format = infoOut.nearestFormat(m_format);
    }

    createAudioInput();
    createAudioOutput();
}

void AudioBuffer::createAudioOutput()
{
    m_audioOutput = new QAudioOutput(m_Outputdevice, m_format, this);
}

void AudioBuffer::createAudioInput()
{
    if (m_input != 0) {
        disconnect(m_input, 0, this, 0);
        m_input = 0;
    }
    m_audioInput = new QAudioInput(m_Inputdevice, m_format, this);
}
These programs work well on Windows and Linux separately. However, there is a lot of noise when I record a voice in Linux and play it in Windows.
I figured out that the captured samples in Windows and Linux are different. The first picture shows the sound captured in Linux and the second one the sound captured in Windows.
Captured sound in Linux:
Captured sound in Windows:
One more detail: the silence in Windows and Linux looks different. I tried many things, including swapping bytes, even though I set little endian on both platforms.
Now I suspect the ALSA configuration. Are there any settings I have missed?
Do you think it would be better to record the voice directly, without using QAudioInput?
The format is set to UnSignedInt, but the sample values are both negative and positive! It seems the capture format does not match the data. Change QAudioFormat::UnSignedInt to QAudioFormat::SignedInt.
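For example, the format setup becomes (a minimal sketch; the rest of AudioBuffer::initializeAudio() stays the same):

m_format.setFrequency(8000);                       // 8 kHz
m_format.setChannels(1);                           // mono
m_format.setSampleSize(16);                        // 16-bit samples
m_format.setSampleType(QAudioFormat::SignedInt);   // signed, matching the captured data
m_format.setByteOrder(QAudioFormat::LittleEndian); // same byte order on both platforms
m_format.setCodec("audio/pcm");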

How to use hardware acceleration with ffmpeg

I need ffmpeg to decode my video (e.g. h264) using hardware acceleration. I'm decoding frames the usual way: read packet -> decode frame. I'd like ffmpeg to speed up decoding, so I've built it with --enable-vaapi and --enable-hwaccel=h264, but I don't really know what I should do next. I've tried using avcodec_find_decoder_by_name("h264_vaapi"), but it returns nullptr.
Anyway, I might want to use other APIs and not just VA-API. How is one supposed to speed up ffmpeg decoding?
P.S. I couldn't find any examples on the Internet that use ffmpeg with hwaccel.
After some investigation I was able to implement the necessary HW-accelerated decoding on OS X (VDA) and Linux (VDPAU). I will update the answer when I get my hands on a Windows implementation as well.
So let's start with the easiest:
Mac OS X
To get HW acceleration working on Mac OS you should just use the following:
avcodec_find_decoder_by_name("h264_vda");
Note, however, that on Mac OS you can only accelerate h264 videos with FFmpeg.
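Using it is the same as with any other ffmpeg decoder; a rough sketch (filling the context from the stream's codec parameters is omitted):

AVCodec *codec = avcodec_find_decoder_by_name("h264_vda");
if (!codec)
    codec = avcodec_find_decoder(AV_CODEC_ID_H264); // fall back to software decoding
AVCodecContext *codecCtx = avcodec_alloc_context3(codec);
// ... fill codecCtx from the stream's codec parameters as usual ...
if (avcodec_open2(codecCtx, codec, NULL) < 0) {
    // handle the error
}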
Linux VDPAU
On Linux things are much more complicated (who is surprised?). FFmpeg has two HW accelerators on Linux, VDPAU (Nvidia) and VAAPI (Intel), and only one HW decoder: for VDPAU. It may seem perfectly reasonable to use the vdpau decoder like in the Mac OS example above:
avcodec_find_decoder_by_name("h264_vdpau");
You might be surprised to find out that it doesn't change anything and you get no acceleration at all. That's because it is only the beginning; you have to write much more code to get the acceleration working. Happily, you don't have to come up with a solution on your own: there are at least two good examples of how to achieve that, libavg and FFmpeg itself. libavg has a VDPAUDecoder class which is perfectly clear and which I've based my implementation on. You can also consult ffmpeg_vdpau.c for another implementation to compare against. In my opinion the libavg implementation is easier to grasp, though.
The only thing both aforementioned examples lack is proper copying of the decoded frame to main memory. Both examples use VdpVideoSurfaceGetBitsYCbCr, which killed all the performance I had gained on my machine. That's why you might want to use the following procedure to extract the data from the GPU:
bool VdpauDecoder::fillFrameWithData(AVCodecContext* context, AVFrame* frame)
{
    VdpauDecoder* vdpauDecoder = static_cast<VdpauDecoder*>(context->opaque);
    VdpOutputSurface surface;
    vdp_output_surface_create(m_VdpDevice, VDP_RGBA_FORMAT_B8G8R8A8,
                              frame->width, frame->height, &surface);
    auto renderState = reinterpret_cast<vdpau_render_state*>(frame->data[0]);
    VdpVideoSurface videoSurface = renderState->surface;
    auto status = vdp_video_mixer_render(vdpauDecoder->m_VdpMixer,
                                         VDP_INVALID_HANDLE,
                                         nullptr,
                                         VDP_VIDEO_MIXER_PICTURE_STRUCTURE_FRAME,
                                         0, nullptr,
                                         videoSurface,
                                         0, nullptr,
                                         nullptr,
                                         surface,
                                         nullptr, nullptr, 0, nullptr);
    if(status == VDP_STATUS_OK)
    {
        auto tmframe = av_frame_alloc();
        tmframe->format = AV_PIX_FMT_BGRA;
        tmframe->width  = frame->width;
        tmframe->height = frame->height;
        if(av_frame_get_buffer(tmframe, 32) >= 0)
        {
            VdpStatus status = vdp_output_surface_get_bits_native(surface, nullptr,
                reinterpret_cast<void * const *>(tmframe->data),
                reinterpret_cast<const uint32_t *>(tmframe->linesize));
            if(status == VDP_STATUS_OK && av_frame_copy_props(tmframe, frame) == 0)
            {
                av_frame_unref(frame);
                av_frame_move_ref(frame, tmframe);
                av_frame_free(&tmframe);             // free the (now empty) temporary frame
                vdp_output_surface_destroy(surface); // the RGBA surface is no longer needed
                return true;
            }
        }
        av_frame_free(&tmframe);
    }
    vdp_output_surface_destroy(surface);
    return false;
}
While it uses some "external" objects, you should be able to understand it once you have implemented the "get buffer" part (for which the aforementioned examples are a great help). Also, I've used the BGRA format, which was more suitable for my needs; maybe you will choose another.
The problem with all of this is that you can't just get it working with FFmpeg alone; you need to understand at least the basics of the VDPAU API. I hope that my answer will aid someone in implementing HW acceleration on Linux. I spent a lot of time on it myself before I realized that there is no simple, one-line way of implementing HW-accelerated decoding on Linux.
Linux VA-API
Since my original question was about VA-API, I can't leave it unanswered.
First of all, there is no decoder for VA-API in FFmpeg, so avcodec_find_decoder_by_name("h264_vaapi") doesn't make any sense: it returns nullptr.
I don't know how much harder (or maybe simpler?) it is to implement decoding via VA-API, since all the examples I've seen were quite intimidating. So I chose not to use VA-API at all, even though I needed acceleration for an Intel card. Fortunately for me, there is a VDPAU library (driver?) which works on top of VA-API, so you can use VDPAU on Intel cards!
I used the following link to set it up on my Ubuntu machine.
Also, you might want to check the comments on the original question, where @Timothy_G mentioned some links regarding VA-API.

FFMPEG with C++ accessing a webcam

I have searched all around and cannot find any examples or tutorials on how to access a webcam using ffmpeg in C++. Any sample code or help pointing me to some documentation would be greatly appreciated.
Thanks in advance.
I have been working on this for months now. Your first "issue" is that ffmpeg (libavcodec and the other ffmpeg libs) does NOT access webcams, or any other device.
For a basic USB webcam or audio/video capture card, you first need driver software to access that device. For Linux, these drivers fall under the Video4Linux (V4L2, as it is known) category, which are modules that are part of most distros. If you are working with MS Windows, then you need to get an SDK that allows you to access the device. MS may have something for accessing generic devices (but from my experience they are not very capable, if they work at all). If you've made it this far, then you now have raw frames (video and/or audio).
THEN you get to the ffmpeg part, libavcodec, which takes the raw frames (audio and/or video) and encodes them into streams, which ffmpeg can then mux into your final container.
I have searched, but have found very few examples of all of these, and most are piecemeal.
If you don't need to actually code this yourself, the command-line ffmpeg, as well as VLC, can access these devices, capture, save to files, and even stream.
That's the best I can do for now.
ken
For Windows, use dshow.
For Linux (e.g. Ubuntu), use Video4Linux (V4L2).
FFmpeg can take input from V4L2 and do the processing.
To find the USB video device path, type: ls /dev/video*
E.g. /dev/video(n) where n = 0 / 1 / 2 ...
AVInputFormat – struct which holds the information about the input device format / media device format.
av_find_input_format("v4l2") [Linux]
avformat_open_input(AVFormatContext, "/dev/video(n)", AVInputFormat, NULL)
If the return value is != 0, it is an error.
Now you have accessed the camera using FFmpeg and can continue the operation.
sample code is below.
int CaptureCam()
{
    avdevice_register_all(); // register device-handling formats (v4l2, etc.)
    avcodec_register_all();
    av_register_all();

    const char *dev_name = "/dev/video0"; // here mine is video0, it may vary

    AVInputFormat *inputFormat = av_find_input_format("v4l2");
    AVDictionary *options = NULL;
    av_dict_set(&options, "framerate", "20", 0);

    AVFormatContext *pAVFormatContext = NULL;
    // check video source (pass &options so the framerate setting is actually used)
    if(avformat_open_input(&pAVFormatContext, dev_name, inputFormat, &options) != 0)
    {
        std::cout << "\nOops, couldn't open video source\n\n";
        return -1;
    }
    else
    {
        std::cout << "\n Success !";
    }
    return 0;
} // end function
Note: the header file <libavdevice/avdevice.h> must be included.
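After that, reading packets from the camera works like any other ffmpeg input; a minimal sketch (finding the video stream and opening a decoder are omitted):

AVPacket packet;
av_init_packet(&packet);
while (av_read_frame(pAVFormatContext, &packet) >= 0)
{
    // packet.stream_index tells you which stream the packet belongs to;
    // hand video packets to your decoder here.
    av_packet_unref(&packet);
}
avformat_close_input(&pAVFormatContext);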
This doesn't really answer the question, as I don't have a pure ffmpeg solution for you. However, I personally use Qt for webcam access. It is C++ and has a much better API for accomplishing this. It does add a very large dependency to your code, however.
It definitely depends on the webcam - for example, at work we use IP cameras that deliver a stream of jpeg data over the network. USB will be different.
You can look at the DirectShow samples, e.g. PlayCap (they include AmCap and DVCap samples too). Once you have a DirectShow input device (chances are whatever device you have provides this natively), you can hook it up to ffmpeg via the dshow input device.
And having spent 5 minutes browsing the ffmpeg site to get those links, I see this...
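On the ffmpeg side, opening a dshow device looks much like the v4l2 example above; a rough sketch, where "Integrated Camera" is only a placeholder (real device names can be listed with ffmpeg -list_devices true -f dshow -i dummy):

avdevice_register_all();
AVInputFormat *inputFormat = av_find_input_format("dshow");
AVFormatContext *ctx = NULL;
// "video=<device name>" selects the capture device; the name is machine-specific.
if (avformat_open_input(&ctx, "video=Integrated Camera", inputFormat, NULL) != 0)
{
    // couldn't open the DirectShow device
}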