input delay with PortAudio callback and ASIO sdk - c++

I'm trying to get the input from my guitar to play through my computer using the PortAudio library and the ASIO SDK.
I have been following some of the tutorials on the official website to get the basics set up. Currently I have it working so that PortAudio listens to the right input and output device, and I have the callback set up to simply pass the input straight through to the output, like this:
static int paTestCallback(const void *inputBuffer, void *outputBuffer, unsigned long framesPerBuffer, const PaStreamCallbackTimeInfo* timeInfo, PaStreamCallbackFlags statusFlags, void *userData)
{
    float *out = (float*)outputBuffer;
    const float *in = (const float*)inputBuffer; /* input buffer is const */

    for (unsigned long i = 0; i < framesPerBuffer; i++)
    {
        *out++ = *in++; /* left */
        *out++ = *in++; /* right */
    }
    return paContinue;
}
The callback is set up by calling:
PaError error = Pa_OpenDefaultStream(&stream, 2, 2, paFloat32, 44100, paFramesPerBufferUnspecified, paTestCallback, &data);
Pa_StartStream(stream);
Now, this works, but there is a lot of delay (about 0.5 s) between striking a string on my guitar and hearing it through the monitors.
Is there a way to solve this delay? Do I need to rewrite the callback method?
EDIT:
So, I got the delay to be a lot better using this code instead of the basic Pa_OpenDefaultStream():
int defaultIn = Pa_GetDefaultInputDevice();
int defaultOut = Pa_GetDefaultOutputDevice();

PaStreamParameters *inParam = new PaStreamParameters();
inParam->channelCount = 2;
inParam->device = defaultIn;
inParam->sampleFormat = paFloat32;
inParam->suggestedLatency = 0.05;

PaStreamParameters *outParam = new PaStreamParameters();
outParam->channelCount = 2;
outParam->device = defaultOut;
outParam->sampleFormat = paFloat32;
outParam->suggestedLatency = 0;

error = Pa_OpenStream(&stream, inParam, outParam, 44100, paFramesPerBufferUnspecified, paNoFlag, paTestCallback, &data);
if (error != paNoError) {
    Logger::log("[PortAudioManager] Could not open default stream. Exiting function...");
    return;
}
Pa_StartStream(stream);
There is still a little bit of delay, though, mostly noticeable when playing more than a single note.
EDIT:
I figured out, with the help of Ross Bencina, that the Windows default input and output devices say nothing about which PortAudio host API you end up using; I had apparently been using MME all this time. I did the following to get the right index for the ASIO host API:
int hostNr = Pa_GetHostApiCount();
std::vector<const PaHostApiInfo*> infoVertex;
for (int t = 0; t < hostNr; ++t) {
    infoVertex.push_back(Pa_GetHostApiInfo(t));
}
Then I just checked which entry is the ASIO one, set suggestedLatency in both PaStreamParameters to 0, and the delay is now gone and the sound is good (although it's mono for now).

You are on the right track using paFramesPerBufferUnspecified.
The ASIO latency behavior depends on the driver. There are two possibilities:
The ASIO driver lets the code (i.e. PortAudio) request a latency (possibly with some constraints). PortAudio finds the best match between a supported driver buffer size and the latency that you request.
The other possibility is that your audio interface does not provide programmatic control over latency settings. Instead, latency is only selectable from the driver's ASIO control panel UI (and the driver will force a fixed buffer size on PortAudio). In this case, you should investigate the driver control panel UI to set the lowest workable latency.
In either case, your approach with Pa_OpenStream is close to optimal, but you should request zero latency for both input and output (in your edit you're requesting 50ms input latency, zero output latency). The end result will be that PortAudio selects the lowest available ASIO buffer size. If this turns out to be unstable (audio glitches) then you'll need to increase the requested latency.
include/pa_asio.h exposes a host-API-specific interface for querying the ASIO buffer sizes allowed by the driver (be aware that this can change if you change settings in the control panel). It also provides a function to display the driver's control panel UI.
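As a hedged illustration of those pa_asio.h facilities, here is a minimal sketch (the helper name is mine, and asioDevice is assumed to be a valid ASIO device index):

#include "portaudio.h"
#include "pa_asio.h"
#include <cstdio>

// Sketch: query the ASIO driver's buffer-size constraints for a device,
// then pop up the driver's own control panel so the user can adjust latency.
void inspectAsioDevice(PaDeviceIndex asioDevice)
{
    long minSize, maxSize, preferredSize, granularity;
    if (PaAsio_GetAvailableBufferSizes(asioDevice, &minSize, &maxSize,
                                       &preferredSize, &granularity) == paNoError) {
        printf("ASIO buffers: min=%ld max=%ld preferred=%ld granularity=%ld\n",
               minSize, maxSize, preferredSize, granularity);
    }
    // Second argument is a platform-specific window handle; NULL is acceptable.
    PaAsio_ShowControlPanel(asioDevice, NULL);
}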
EDIT: Note that Pa_GetDefaultInputDevice() and Pa_GetDefaultOutputDevice() will only return ASIO devices if you built PortAudio for ASIO only. If you included any other more common APIs in the build (e.g. WMME or DirectSound) they will be given priority as the (lowest common denominator) default device. You could add a check that you are actually accessing the ASIO device:
assert(Pa_GetHostApiInfo(Pa_GetDeviceInfo(Pa_GetDefaultOutputDevice())->hostApi)->type == paASIO);
If PortAudio is compiled with support for multiple host APIs: To get the default ASIO device: enumerate host APIs using Pa_GetHostApiCount and Pa_GetHostApiInfo to find the ASIO host API. Then pull the default ASIO device indices from the returned PaHostApiInfo struct.
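A minimal sketch of that enumeration (the helper name is illustrative, not from the question's code):

#include "portaudio.h"

// Sketch: find the ASIO host API and return its default output device,
// or paNoDevice if PortAudio was built without ASIO support.
PaDeviceIndex findDefaultAsioOutput()
{
    PaHostApiIndex apiCount = Pa_GetHostApiCount();
    for (PaHostApiIndex i = 0; i < apiCount; ++i) {
        const PaHostApiInfo *info = Pa_GetHostApiInfo(i);
        if (info != NULL && info->type == paASIO)
            return info->defaultOutputDevice; // defaultInputDevice works the same way
    }
    return paNoDevice;
}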

Accessing assets from C++ plugin through Flutter

I'm trying to use Google Oboe for a 3D audio processing app because of its low latency. The app will have a C++ backend, which does the processing, and the frontend is done with Flutter. I'm running a couple of tests to see if it'll work, but I'm having issues loading assets from Flutter into Oboe. I checked the RhythmGame example in Oboe's repo, done in Java, but couldn't quite find a way of doing that straight from Dart to C++.
The connection between frontend and backend is through dart:ffi.
Here's what I've tried so far. Based on the example published by Richard Heap here, I changed the noise variable from just a sine wave to a short fragment of a song in a WAV file:
class _MyAppState extends State<MyApp> {
  final stream = OboeStream();
  var noise = Float32List(512);
  Timer t;

  @override
  void initState() {
    super.initState();
    // for (var i = 0; i < noise.length; i++) {
    //   noise[i] = sin(8 * pi * i / noise.length);
    // }
    _loadSound();
  }

  void _loadSound() async {
    final ByteData data = await rootBundle.load('assets/song_cut.wav');
    noise = data.buffer.asFloat32List();
  }

  (...)
Then this function in Dart calls the Dart wrapper of the native library:
void start() {
  stream.start();
  var interval = (512000 / stream.getSampleRate()).floor() + 1;
  t = Timer.periodic(Duration(milliseconds: interval), (_) {
    stream.write(noise);
  });
}
The wrapper in Dart is:
void write(Float32List original) {
  var length = original.length;
  var copy = allocate<Float>(count: length)
    ..asTypedList(length).setAll(0, original);
  FfiGoogleOboe()._streamWrite(_nativeInstance, copy, length);
  free(copy);
}
_streamWrite is the native function in C++:
EXTERNC void stream_write(void* ptr, void* data, int32_t size) {
    auto stream = static_cast<OboeFfiStream*>(ptr);
    auto dataToWrite = static_cast<float*>(data);
    stream->write(dataToWrite, size);
}

void OboeFfiStream::write(float *data, int32_t size) {
    managedStream->write(data, size, 1000000);
}
Now I can hear the song, but it comes out with too much distortion. When trying the sine wave I could hear it too, but it also had some distortion. I'm not yet using Oboe's callback mode, since I wanted to see whether this approach worked first.
1 - What format is your WAV file in? Is it 32-bit floats? Don't forget that WAV files have a header, so you should discard the first few tens of bytes (up to the data chunk). Be sure that you start reading the audio data on a float boundary (which may not be a multiple of 4 if the header isn't). If necessary, use a hex editor to ascertain the offset of the float data and start reading there. Or truncate the header and rename your asset to song_cut.raw; Audacity should be able to produce a header-less raw audio file. (See the header-skipping sketch after this list.)
2 - What sample rate is your audio clip recorded at? Does that match the sample rate of the device? (Note that iOS devices are normally 44.1k, but Android devices are frequently 48k. When using an Android emulator on macOS, who knows what the reported sample rate will be! Expect pitch distortion if your rates don't match - or use a resampler. I think Oboe has one. Alternatively, the sample repo associated with the talk contains one you can use.)
3 - note that the timer interval is finely tuned (for demo purposes) to the approximate time taken to deliver 512 samples at the sound card rate. This might be ok for demos, but isn't for real life. Also, your wav file probably doesn't have exactly 512 samples in it. Either adjust your audio loop to 512 samples, or adjust the 512000 constant to match the number of samples in your loop.
4a - You aren't using the callback method yet, but you probably should switch to it as soon as possible. One method I've had success with is a lock-free circular buffer: the Oboe callback tries to empty the buffer, while the Dart timer routine tries to fill it. The bigger the buffer, the less chance of an underflow, but the worse the latency. (A minimal ring-buffer sketch follows after this list.)
4b - The ideal solution would be to have the Oboe callback call up into Dart, but I haven't found a way to do that as C->Dart calls must be on the main Dart thread, but the Oboe callbacks are surely on a high-priority IO thread.
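For point 1, a hedged C++ sketch of locating the 'data' chunk in a RIFF/WAV buffer; real files can contain extra chunks, so treat this as a sketch rather than a full parser:

#include <cstdint>
#include <cstring>
#include <cstddef>

// Sketch: return the byte offset of the first sample in a WAV buffer,
// or -1 if no 'data' chunk is found. Assumes a well-formed RIFF file.
long findWavDataOffset(const uint8_t *buf, size_t len)
{
    if (len < 12 || memcmp(buf, "RIFF", 4) != 0 || memcmp(buf + 8, "WAVE", 4) != 0)
        return -1;
    size_t pos = 12; // first chunk starts after the RIFF header
    while (pos + 8 <= len) {
        uint32_t chunkSize;
        memcpy(&chunkSize, buf + pos + 4, 4); // chunk sizes are little-endian
        if (memcmp(buf + pos, "data", 4) == 0)
            return (long)(pos + 8); // samples begin right after the chunk header
        pos += 8 + chunkSize + (chunkSize & 1); // chunks are word-aligned
    }
    return -1;
}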
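And for point 4a, a minimal sketch of a single-producer/single-consumer lock-free ring buffer (the class name and design are illustrative of the approach, not from the question's code); the Dart-driven feeder would call push() and the Oboe callback pop():

#include <atomic>
#include <cstddef>
#include <algorithm>

// Sketch: SPSC lock-free float ring buffer. One thread pushes (the feeder),
// one thread pops (the audio callback); no locks, so it is callback-safe.
class SpscRingBuffer {
public:
    explicit SpscRingBuffer(size_t capacity) : size_(capacity + 1), buf_(new float[size_]) {}
    ~SpscRingBuffer() { delete[] buf_; }

    // Producer side: returns how many samples were actually written.
    size_t push(const float *src, size_t n) {
        size_t w = write_.load(std::memory_order_relaxed);
        size_t r = read_.load(std::memory_order_acquire);
        size_t freeSlots = (r + size_ - w - 1) % size_;
        n = std::min(n, freeSlots);
        for (size_t i = 0; i < n; ++i)
            buf_[(w + i) % size_] = src[i];
        write_.store((w + n) % size_, std::memory_order_release);
        return n;
    }

    // Consumer side (audio callback): fills 'dst', zero-padding on underflow.
    size_t pop(float *dst, size_t n) {
        size_t r = read_.load(std::memory_order_relaxed);
        size_t w = write_.load(std::memory_order_acquire);
        size_t avail = (w + size_ - r) % size_;
        size_t toRead = std::min(n, avail);
        for (size_t i = 0; i < toRead; ++i)
            dst[i] = buf_[(r + i) % size_];
        for (size_t i = toRead; i < n; ++i)
            dst[i] = 0.0f; // underflow: output silence rather than garbage
        read_.store((r + toRead) % size_, std::memory_order_release);
        return toRead;
    }

private:
    const size_t size_;
    float *buf_;
    std::atomic<size_t> write_{0};
    std::atomic<size_t> read_{0};
};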

WASAPI: Identify non-active channels on loopback recording

I have DSP software which captures the audio currently playing, using the WASAPI API in shared loopback mode.
hr = _pAudioClient->Initialize(AUDCLNT_SHAREMODE_SHARED, AUDCLNT_STREAMFLAGS_LOOPBACK, 0, 0, _pFormat, 0);
This part works fine, but now I want to be able to detect the number of channels actually playing. In other words, how would I detect whether the audio playing is stereo, 5.1, or 7.1?
The problem is:
* Since loopback capture has to use shared mode, there could be multiple sources playing.
* This analysis has to be done in real time; I can't wait until playback is done.
* I need to distinguish between a channel not used at all by any playback source and a channel that is temporarily silent.
The best solution in my mind would be if I could retrieve a list of all playback sources/sub-mixes and query each for its number of channels. That way I don't have to analyse the audio data stream itself.
Loopback recording takes place in the mix format defined on the endpoint, so regardless of what the original audio format was, you get the data in the mix format: mixed from possibly multiple playing sources and converted to that shared format.
Device Formats
Loopback Recording
WASAPI loopback contains the mix of all audio being played...
The GetMixFormat method retrieves the stream format that the audio engine uses for its internal processing of shared-mode streams...
After an application has used GetMixFormat or IsFormatSupported to find an appropriate format for a shared-mode or exclusive-mode stream, the application can call the Initialize method to initialize a stream with that format. An application that attempts to initialize a shared-mode stream with a format that is not identical to the mix format obtained from the GetMixFormat method, but that has the same number of channels and the same sample rate as the mix format, is likely to succeed. Before calling Initialize, the application can call IsFormatSupported to verify that Initialize will accept the format.
That is, even though WASAPI offers some flexibility in audio format, channel configuration and sample rate are defined by shared format when it comes to loopback capture.
As you are getting the mix, you cannot really identify "non-active" channels: this information is lost when mixing into the shared format.
Also, the actual shared format can be configured interactively via the endpoint's properties in the Control Panel.
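A minimal sketch of querying that shared mix format programmatically (error handling elided, link against ole32; a sketch, not production code):

#include <windows.h>
#include <mmdeviceapi.h>
#include <audioclient.h>
#include <cstdio>

// Sketch: query the shared-mode mix format of the default render endpoint.
// Loopback capture will deliver audio in exactly this format.
void printMixFormat()
{
    CoInitialize(NULL);
    IMMDeviceEnumerator *enumerator = NULL;
    CoCreateInstance(__uuidof(MMDeviceEnumerator), NULL, CLSCTX_ALL,
                     __uuidof(IMMDeviceEnumerator), (void**)&enumerator);
    IMMDevice *device = NULL;
    enumerator->GetDefaultAudioEndpoint(eRender, eConsole, &device);
    IAudioClient *client = NULL;
    device->Activate(__uuidof(IAudioClient), CLSCTX_ALL, NULL, (void**)&client);

    WAVEFORMATEX *mix = NULL;
    client->GetMixFormat(&mix); // channel count / sample rate of the shared engine
    printf("Mix format: %u channels @ %lu Hz\n", mix->nChannels, mix->nSamplesPerSec);

    CoTaskMemFree(mix);
    client->Release();
    device->Release();
    enumerator->Release();
    CoUninitialize();
}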
OK, I now have a solution to my problem. As far as I know you cannot detect sub-mixes in the shared mix, so the only option was to analyze the audio stream/capture buffer.
First, during my main capture loop, I set the current timestamp for every channel that is playing.
const time_t now = Date::getCurrentTimeMillis();

//Iterate all captured frames
for (i = 0; i < numFramesAvailable; ++i) {
    for (j = 0; j < _nChannelsIn; ++j) {
        //Identify which channels are playing (assuming interleaved samples,
        //frame i / channel j lives at index i * _nChannelsIn + j)
        if (pCaptureBuffer[i * _nChannelsIn + j] != 0) {
            _pUsedChannels[j] = now;
        }
    }
}
Then, every second, I call this function, which evaluates whether each channel has played during the last second. Based upon which channels are playing, I can do conditional routing.
void checkUsedChannels() {
    const time_t now = Date::getCurrentTimeMillis();
    //Compare now against the last-used timestamp and determine active channels
    for (size_t i = 0; i < _nChannelsIn; ++i) {
        if (now - _pUsedChannels[i] > 1000) {
            _pUsedChannels[i] = 0;
        }
    }
    //Update conditional routing
    for (const Input *pInput : _inputs) {
        pInput->evalConditions();
    }
}
Very simple solution but it appears to be working.

How do I open a ttyMFD device on the Intel-Edison [C++]?

I have a /dev/ttyUSB device and a /dev/ttyMFD device that I need to stream to logfiles. For the USB device I could use termios and configure it through that. This was pretty straight forward and there was a bit of documentation for this as well.
I can't seem to find any for the MFD though. Some places call it a MultiFunctionDevice and others call it the Medfield High Speed UART device.
Which is correct in the first place?
And secondly, can I open it in the same way that I open up a regular ttyUSB device?
Here is the snippet I use to open USB devices.
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>
#include <string.h>

// O_NONBLOCK matches the non-blocking mode the TODO below refers to;
// O_NOCTTY stops the port from becoming the controlling terminal.
int fd = open(USBDEVICE0, O_RDWR | O_NOCTTY | O_NONBLOCK);

struct termios io;
memset(&io, 0, sizeof(io));
io.c_iflag = 0;
io.c_oflag = 0;
io.c_cflag = CS8|CREAD|CLOCAL; // 8n1, see termios.h for more information
io.c_lflag = 0;
// TODO -- Since we are operating in non-blocking mode, confirm VMIN and VTIME settings have no effect on the duration of the read() call.
io.c_cc[VMIN] = 1;
io.c_cc[VTIME] = 5;

speed_t speedSymbol = B921600;
cfsetospeed(&io, speedSymbol);
cfsetispeed(&io, speedSymbol);

int retVal;
retVal = tcsetattr(fd, TCSANOW, &io);
tcflush(fd, TCIOFLUSH);
usleep(100);
EDIT
For anyone who comes across this, there is one caveat: you must open the device in raw mode and dump everything into a log file; parsing must be done afterwards. Everything will come out as raw data, but if you try to do any kind of configuration, the device's buffer will not be able to capture all the data, hold it, and process it in time before more data comes along.
MFD in the Linux kernel is a common abbreviation for Multi-Function Device. The legacy serial driver for the Edison abuses that and uses its own interpretation, as you mentioned: Medfield. In the upstream kernel the abbreviation MID is used, referring to Mobile Internet Device; in particular, the serial driver is drivers/tty/serial/8250_mid.c. See https://en.wikipedia.org/wiki/Mobile_Internet_device.
Yes, you may do the same operations as you did on top of /dev/ttyUSBx.
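As a hedged sketch combining this with the raw-mode caveat from the question's EDIT (the device node /dev/ttyMFD1 and the helper name are illustrative; check which ttyMFD node carries your UART):

#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

// Sketch: open the Edison high-speed UART in raw mode, exactly as one would
// a ttyUSB device. cfmakeraw() disables all line-discipline processing so
// bytes arrive unmodified for post-hoc parsing.
int openRawUart(const char *dev, speed_t baud)
{
    int fd = open(dev, O_RDWR | O_NOCTTY | O_NONBLOCK);
    if (fd < 0)
        return -1;

    struct termios io;
    tcgetattr(fd, &io);
    cfmakeraw(&io);               // raw mode: no echo, no CR/LF translation
    io.c_cflag |= CREAD | CLOCAL; // enable receiver, ignore modem control lines
    cfsetispeed(&io, baud);
    cfsetospeed(&io, baud);
    tcsetattr(fd, TCSANOW, &io);
    tcflush(fd, TCIOFLUSH);
    return fd;
}

// Usage (illustrative): int fd = openRawUart("/dev/ttyMFD1", B115200);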

Record in linux with QAudioInput and play it in windows

For my purposes, I want to record sound in raw format (samples only), 8 kHz, 16-bit (little endian), 1 channel. Then I would like to transfer those samples to Windows and play them with QAudioOutput. So I have two separate programs: one records voice with QAudioInput, and the other takes the resulting file of samples and plays it with QAudioOutput. Below is my source code for creating the QAudioInput and QAudioOutput.
//Initialize audio
void AudioBuffer::initializeAudio()
{
    m_format.setFrequency(8000); //set frequency to 8000 Hz
    m_format.setChannels(1); //set channels to mono
    m_format.setSampleSize(16); //set sample size to 16 bit
    m_format.setSampleType(QAudioFormat::UnSignedInt); //sample type as unsigned integer
    m_format.setByteOrder(QAudioFormat::LittleEndian); //byte order
    m_format.setCodec("audio/pcm"); //set codec as plain audio/pcm

    QAudioDeviceInfo infoIn(QAudioDeviceInfo::defaultInputDevice());
    if (!infoIn.isFormatSupported(m_format))
    {
        //Default format not supported - trying to use nearest
        m_format = infoIn.nearestFormat(m_format);
    }

    QAudioDeviceInfo infoOut(QAudioDeviceInfo::defaultOutputDevice());
    if (!infoOut.isFormatSupported(m_format))
    {
        //Default format not supported - trying to use nearest
        m_format = infoOut.nearestFormat(m_format);
    }

    createAudioInput();
    createAudioOutput();
}
void AudioBuffer::createAudioOutput()
{
    m_audioOutput = new QAudioOutput(m_Outputdevice, m_format, this);
}

void AudioBuffer::createAudioInput()
{
    if (m_input != 0) {
        disconnect(m_input, 0, this, 0);
        m_input = 0;
    }

    m_audioInput = new QAudioInput(m_Inputdevice, m_format, this);
}
These programs work well on Windows and Linux separately. However, there is a lot of noise when I record a voice on Linux and play it back on Windows.
I found that the samples captured on Windows and Linux are different. The first screenshot shows the sound captured on Linux, the second the sound captured on Windows.
Captured sound in Linux: (waveform screenshot)
Captured sound in Windows: (waveform screenshot)
One more detail: the silence level differs between Windows and Linux. I tried many things, including swapping bytes, even though I set little endian on both platforms.
Now I suspect the ALSA configuration. Are there any settings I have missed?
Do you think it would be better to record the voice directly, without using QAudioInput?
Your format is UnSignedInt, but the sample values are both negative and positive, so the capture is being misinterpreted. Change QAudioFormat::UnSignedInt to QAudioFormat::SignedInt.
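A minimal sketch of the corrected format setup, using the same Qt 4-era setters as the question (only the sample type changes; the helper name is illustrative):

#include <QAudioFormat>

// Sketch: 8 kHz / mono / 16-bit little-endian PCM, with *signed* samples so
// that both platforms interpret the raw data identically.
QAudioFormat makeRecordingFormat()
{
    QAudioFormat format;
    format.setFrequency(8000);                     // Qt 5: setSampleRate(8000)
    format.setChannels(1);                         // Qt 5: setChannelCount(1)
    format.setSampleSize(16);
    format.setSampleType(QAudioFormat::SignedInt); // was UnSignedInt
    format.setByteOrder(QAudioFormat::LittleEndian);
    format.setCodec("audio/pcm");
    return format;
}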

Boost asio serial port initially sending bad data

I am attempting to use Boost Asio for serial communication. I am currently working on Windows, but will eventually move the code to Linux. Whenever I restart my computer, the data sent from the program is not what it should be (for example, I send a null followed by a carriage return and get "00111111 10000011" in binary), and it is not consistent (multiple nulls yield different binary).
However, as soon as I use any other program to send any data to the serial port and then run my program again, it works perfectly. I think I must be missing something in the initialization of the port, but my research has not turned anything up.
Here is how I am opening the port:
// An io_service to drive the serial port
boost::asio::io_service *io;
// The serial port itself
boost::shared_ptr<boost::asio::serial_port> port;

// Constructor functions
void Defaults() {
    io = new boost::asio::io_service();
    // Set default commands
    command.prefix = 170;
    command.address = 3;
    command.xDot[0] = 128;
    command.xDot[1] = 128;
    command.xDot[2] = 128;
    command.throtle = 0;
    command.button8 = 0;
    command.button16 = 0;
    command.checkSum = 131;
}

void Defaults(char *portName, int baud) { // renamed from 'port', which shadowed the global above
    Defaults();
    // Setup the serial port
    port.reset(new boost::asio::serial_port(*io, portName));
    port->set_option( boost::asio::serial_port_base::baud_rate( baud ));
    // This is for testing
    printf("portTest: %i\n", (int)port->is_open());
    port->write_some(boost::asio::buffer((void*)"\0", 1));
    boost::asio::write(*port, boost::asio::buffer((void*)"\0", 1));
    boost::asio::write(*port, boost::asio::buffer((void*)"\r", 1));
    boost::asio::write(*port, boost::asio::buffer((void*)"\r", 1));
    Sleep(2000);
}
Edit: In an attempt to remove unrelated code, I accidentally deleted the line where I set the baud rate; I have added it back. Also, I am checking the output with a null modem and Docklight. Aside from the baud rate, I am using all of the default serial settings specified for a Boost serial port (I have also tried setting them explicitly, with no effect).
You haven't said how you're checking what's being sent, but it's probably a baud rate mismatch between the two ends.
It looks like you're missing this:
port->set_option( boost::asio::serial_port_base::baud_rate( baud ) );
Number of data bits, parity, and start and stop bits will also need configuring if they're different to the default values.
If you still can't sort it out, stick an oscilloscope on the output and compare the sender's and receiver's waveforms.
This is the top search result when looking for this problem, so I thought I'd give what I believe is the correct answer:
Boost Asio is bugged as of this writing and does not set default values for ports on the Windows platform. Instead, it grabs the current values on the port and then bakes them right back in. After a fresh reboot, the port hasn't been used yet, so its Windows-default values are likely not useful for communication (byte size typically defaults to 9, for example). After you use the port with a program or library that sets the values correctly, the values Asio picks up afterward are correct.
To solve this until Asio incorporates the fix, just set everything on the port explicitly, ie:
using boost::asio::serial_port;
using boost::asio::serial_port_base;

void setup_port(serial_port &serial, unsigned int baud)
{
    serial.set_option(serial_port_base::baud_rate(baud));
    serial.set_option(serial_port_base::character_size(8));
    serial.set_option(serial_port_base::flow_control(serial_port_base::flow_control::none));
    serial.set_option(serial_port_base::parity(serial_port_base::parity::none));
    serial.set_option(serial_port_base::stop_bits(serial_port_base::stop_bits::one));
}