Can I pause the callback from within itself? - c++

I am using SDL audio to play sounds.
The documentation for SDL_LockAudio says:
Do not call this from the callback function or you will cause deadlock.
But the documentation for SDL_PauseAudio doesn't say that; instead it says:
This function pauses and unpauses the audio callback processing
My mixer callback looks like this:
void AudioPlaybackCallback( void *, core::bty::UInt8 *stream, int len )
{
    // number of bytes left to play in the current sample
    const int thisSampleLeft = currentSample.dataLength - currentSample.dataPos;
    // number of bytes that will be sent to the audio stream
    const int amountToPlay = std::min( thisSampleLeft, len );

    if ( amountToPlay > 0 )
    {
        SDL_MixAudio( stream,
                      currentSample.data + currentSample.dataPos,
                      amountToPlay,
                      currentSample.volume );
        // update the current sample
        currentSample.dataPos += amountToPlay;
    }
    else
    {
        if ( PlayingQueue::QueueHasElements() )
        {
            // update the current sample
            currentSample = PlayingQueue::QueuePop();
        }
        else
        {
            // since the current sample finished, and there are no more samples to
            // play, pause the playback
            SDL_PauseAudio( 1 );
        }
    }
}
PlayingQueue is a class which provides access to a static std::queue object. Nothing fancy.
This worked fine until we decided to update the SDL and ALSA libraries (there is no turning back now). Since then I see this in my log:
ALSA lib pcm.c:7316:(snd_pcm_recover) underrun occurred
If I assume there are no bugs in the SDL or ALSA libraries (most likely wrong, judging by what googling this message turns up), it should be possible to change my code to fix, or at least avoid, the underrun.
So, the question is: can I pause the callback from within itself? Could it be causing the underruns I am seeing?

Finally I figured it out.
When SDL_PauseAudio( 1 ) is called from the callback, SDL switches to another callback (which just writes zeros into the audio stream). The current callback still finishes its execution after the call.
Therefore, it is safe to call this function from the callback.
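For completeness, here is a rough sketch of how playback can be resumed from the thread that feeds the queue. The Sample type and PlayingQueue::QueuePush() helper are hypothetical names; only QueueHasElements() and QueuePop() appear in the code above.

// Sketch only: Sample and PlayingQueue::QueuePush() are hypothetical,
// mirroring the QueuePop()/QueueHasElements() used by the callback.
void EnqueueSample( const Sample &sample )
{
    SDL_LockAudio();                     // keep the callback away from the queue
    PlayingQueue::QueuePush( sample );
    SDL_UnlockAudio();

    SDL_PauseAudio( 0 );                 // resume playback if the callback paused itself
}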

Related

Getting External Headphones (8): EXC_BAD_ACCESS (code=1, address=0x0) error when I am trying to use maximilian

I am testing out using the maximilian library with JUCE. I am trying to use the maxiSample feature and I have implemented it exactly how the example code says to. Whenever I run the standalone app, I get the error "External Headphones (8): EXC_BAD_ACCESS (code=1, address=0x0)" and it gives me a breakpoint at line 747 of maximilian.cpp. It's not my headphones as it does the same thing with any playback device. Truly at a loss.
I've attached my MainComponent.cpp below. Any advice would be great, thank you!
#include "MainComponent.h"
#include "maximilian.h"
//==============================================================================
MainComponent::MainComponent()
{
// Make sure you set the size of the component after
// you add any child components.
setSize (800, 600);
// Some platforms require permissions to open input channels so request that here
if (juce::RuntimePermissions::isRequired (juce::RuntimePermissions::recordAudio)
&& ! juce::RuntimePermissions::isGranted (juce::RuntimePermissions::recordAudio))
{
juce::RuntimePermissions::request (juce::RuntimePermissions::recordAudio,
[&] (bool granted) { setAudioChannels (granted ? 2 : 0, 2); });
}
else
{
// Specify the number of input and output channels that we want to open
setAudioChannels (2, 2);
}
}
MainComponent::~MainComponent()
{
// This shuts down the audio device and clears the audio source.
shutdownAudio();
sample1.load("/Users/(username)/JuceTestPlugins/maxiSample/Source/kick.wav");
}
//==============================================================================
void MainComponent::prepareToPlay (int samplesPerBlockExpected, double sampleRate)
{
// This function will be called when the audio device is started, or when
// its settings (i.e. sample rate, block size, etc) are changed.
// You can use this function to initialise any resources you might need,
// but be careful - it will be called on the audio thread, not the GUI thread.
// For more details, see the help for AudioProcessor::prepareToPlay()
}
void MainComponent::getNextAudioBlock (const juce::AudioSourceChannelInfo& bufferToFill)
{
// Your audio-processing code goes here!
// For more details, see the help for AudioProcessor::getNextAudioBlock()
// Right now we are not producing any data, in which case we need to clear the buffer
// (to prevent the output of random noise)
//bufferToFill.clearActiveBufferRegion();
for(int sample = 0; sample < bufferToFill.buffer->getNumSamples(); ++sample){
//float sample2 = sample1.
//float wave = tesOsc.sinewave(200);
//double sample2 = sample1.play();
// leftSpeaker[sample] = (0.25 * wave);
// rightSpeaker[sample] = leftSpeaker[sample];
double *output;
output[0] = sample1.play();
output[1] = output[0];
}
}
void MainComponent::releaseResources()
{
// This will be called when the audio device stops, or when it is being
// restarted due to a setting change.
// For more details, see the help for AudioProcessor::releaseResources()
}
//==============================================================================
void MainComponent::paint (juce::Graphics& g)
{
// (Our component is opaque, so we must completely fill the background with a solid colour)
g.fillAll (getLookAndFeel().findColour (juce::ResizableWindow::backgroundColourId));
// You can add your drawing code here!
}
void MainComponent::resized()
{
// This is called when the MainContentComponent is resized.
// If you add any child components, this is where you should
// update their positions.
}
Can't say for sure, but a couple of things catch my attention.
In getNextAudioBlock() you are dereferencing invalid pointers:
double *output;
output[0] = sample1.play();
output[1] = output[0];
The pointer variable output is uninitialised and will probably be filled with garbage or zeros, which will make the program write to invalid memory. This is most likely the cause of the EXC_BAD_ACCESS. This method is called from the realtime audio thread, so you get the crash on a non-main thread (in this case the thread of External Headphones (8)).
It's also not clear to me what exactly you're trying to do here, so it's hard for me to say what it should look like. What I can say is that assigning the result of sample1.play() to a double value like this looks suspicious.
Normally, when dealing with juce::AudioSourceChannelInfo you would get pointers to the audio buffers like so:
auto** bufferPointer = bufferToFill.buffer->getArrayOfWritePointers()
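Putting it together, here is a minimal sketch of how that loop might look, assuming sample1.play() returns one mono sample per call as a double, the output bus has at least two channels, and the 0.25 gain from your commented-out code is what you want:

void MainComponent::getNextAudioBlock (const juce::AudioSourceChannelInfo& bufferToFill)
{
    auto** channels = bufferToFill.buffer->getArrayOfWritePointers();

    for (int sample = 0; sample < bufferToFill.numSamples; ++sample)
    {
        // maxiSample::play() is assumed to return the next mono sample as a double
        auto value = 0.25f * static_cast<float> (sample1.play());

        channels[0][bufferToFill.startSample + sample] = value;  // left
        channels[1][bufferToFill.startSample + sample] = value;  // right
    }
}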
Further, you are loading a file inside the destructor of MainComponent. This is suspicious at the very least: why would you load a file during destruction?
MainComponent::~MainComponent()
{
    // This shuts down the audio device and clears the audio source.
    shutdownAudio();

    sample1.load("/Users/(username)/JuceTestPlugins/maxiSample/Source/kick.wav");
}
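A more natural home for the load is the constructor (or prepareToPlay()), so the sample exists before the audio device starts asking for blocks. A trimmed sketch, keeping the same hard-coded path and leaving out the permissions branch shown earlier:

MainComponent::MainComponent()
{
    setSize (800, 600);

    // Load the sample before audio starts; the path is the same placeholder as above.
    sample1.load ("/Users/(username)/JuceTestPlugins/maxiSample/Source/kick.wav");

    setAudioChannels (2, 2);
}

MainComponent::~MainComponent()
{
    // This shuts down the audio device and clears the audio source.
    shutdownAudio();
}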

PortAudio UI control thread vs. audio thread

I'm working on a personal command-line driven sound application https://github.com/aparks5/synthcastle and I've run into some trouble.
Question: How can I have a control thread communicate to an audio thread without interruption of the audio?
I want to be able to pass commands from the command line to the audio thread and return back to the command line for further sound manipulation.
The problem is that, without multithreading, if I invoke a command such as "loop 4" (to repeat a pattern 4 times), my "loop" command just waits until the loop finishes before returning execution to the user.
I tried another paradigm: creating a control thread and an audio thread, where invoking "loop" returns immediately, caching the passed data and enqueuing a command to a lock-free queue.
In main.cxx:
userData.server = server;
std::shared_ptr<MixerStream> stream = std::make_shared<MixerStream>(SAMPLE_RATE, &userData);
std::thread control(controlThread, stream);
// todo: move graphics thread out of audio thread
std::thread audio(audioThread, stream);
audio.detach();
control.join();
However, when I poll the queue from the audio thread, the data is corrupted and doesn't play in its entirety; only the last bit is played.
int MixerStream::paCallbackMethod(const void* inputBuffer, void* outputBuffer,
    ...

    for (size_t sampIdx = 0; sampIdx < framesPerBuffer; sampIdx++)
    {
        ...
        output = m_mixer();
        output = clip(output);
        *out++ = output;
        *out++ = output;
        writeBuff[sampIdx] = output;
        g_buffer[sampIdx] = static_cast<float>(output*1.0);
    }
    g_ready = true;

    while (m_callbackData->commandsToAudioCallback->size_approx()) {
        Commands cmd = *(m_callbackData->commandsToAudioCallback->peek());
        m_callbackData->commandsToAudioCallback->pop();
        switch (cmd) {
        case Commands::START_LOOP:
            m_bLoop = true;
            while (m_loopTimes > 0) {
                playPattern(m_pattern, m_bpm);
                m_loopTimes--;
            }
            break;
        case Commands::START_RECORDING:
            m_callbackData->server->openWrite();
            break;
        case Commands::STOP_RECORDING:
            m_callbackData->server->closeWrite();
            break;
        }
        m_callbackData->commandsFromAudioCallback->enqueue(cmd); // return command to main thread for deletion
    }
    ...
}
It's clear that nothing should be invoked in paCallbackMethod except for the generation of audio.
It's almost as if I need a third thread in between the UI and the audio, but this doesn't make sense to me.
I'm totally new to multithreading so any pointers as to how I can make control appear seamless would be greatly appreciated.
Update: It's not going to be a perfect solution, but the conclusion at the end of my question (creating a third thread, so UI -> Control -> Audio) actually produced the desired result. I can make loops in the UI, the control thread continuously polls the stream to see whether it's ready to play and issues the commands, while the UI stays ready to accept new commands. I'm just waiting for it to produce glitches once I try continuous control.
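For reference, a minimal sketch of the underlying rule that made this work: the control thread only publishes a request, and the audio callback only reads it and renders one buffer at a time, never looping over a whole pattern inside a single callback. The names below are mine, not the classes from the repository:

#include <atomic>

// Shared state between the control thread and the audio callback.
struct LoopRequest
{
    std::atomic<int> loopsRemaining { 0 };
};

// Control (or UI) thread: publish the request and return immediately.
void requestLoop (LoopRequest& req, int times)
{
    req.loopsRemaining.store (times, std::memory_order_release);
}

// Audio thread, once per buffer (e.g. from paCallbackMethod): render the next
// slice of the pattern and decrement the counter only when a pass completes.
void renderBuffer (LoopRequest& req, float* out, int frames, bool patternFinishedThisBuffer)
{
    const int loops = req.loopsRemaining.load (std::memory_order_acquire);

    for (int i = 0; i < frames; ++i)
        out[i] = 0.0f;   // real code would mix the pattern here when loops > 0

    if (loops > 0 && patternFinishedThisBuffer)
        req.loopsRemaining.fetch_sub (1, std::memory_order_acq_rel);
}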

pjsip capture and play pcm data

I have some embedded devices that have no audio device by default. They communicate with each other via an FPGA. So my question is: how do I capture/play back audio from pjsip as PCM in order to send/receive it with the FPGA?
I know that there are pjmedia_mem_player_create() and pjmedia_mem_capture_create(), but I can't seem to find any good info on using these functions.
I tried the following piece of code, but an assertion fails because one of the function's parameters is "empty".
Error:
pjmedia_mem_capture_create: Assertion `pool && buffer && size && clock_rate && channel_count && samples_per_frame && bits_per_sample && p_port' failed.
Note: I'm mainly using pjsua2 for everything else like registrations, transports etc. Also the default audio is set to null with ep.audDevManager().setNullDev(); as without this, making/receiving a call would simply fail?!
void MyCall::onCallMediaState(OnCallMediaStateParam &prm)
{
    CallInfo ci = getInfo();

    pj_caching_pool_init(&cp, &pj_pool_factory_default_policy, 0);
    pj_pool_t *pool = pj_pool_create(&cp.factory, "POOLNAME", 2000, 2000, NULL);

    void *buffer;
    pjmedia_port *prt;

    #define CLOCK_RATE 8000
    #define CHANELS 1
    #define SAMPLES_PER_FRAME 480
    #define BITS_PER_SAMPLE 16

    pjmedia_mem_capture_create( pool,               // Pool
                                buffer,             // Buffer
                                2000,               // Buffer Size
                                CLOCK_RATE,
                                CHANELS,
                                SAMPLES_PER_FRAME,
                                BITS_PER_SAMPLE,
                                0,                  // Options
                                &prt );             // The return port
}
UPDATE
The assertion failed because the buffer variable didn't have any memory allocated to it. Allocating twice the number of samples per frame (480 samples at 16 bits each, i.e. 960 bytes) gives it sufficient memory.
buffer = pj_pool_zalloc(pool, 960);
Also, a callback needs to be registered with pjmedia_mem_capture_set_eof_cb2() (the 2 at the end is necessary for PJSIP 2.10 or later). Apparently the buffer can be used from there. It's just that my implementation doesn't execute the callback at the moment.
It looks like I found the solution. I modified your code and wrote some simple C code with the pjsua API to dump every frame to a file. Sorry for the mess, I'm not proficient in C:
pjsua_call_info ci;
pjsua_call_get_info(call_id, &ci);

pjsua_conf_port_info cpi;
pjsua_conf_get_port_info(ci.conf_slot, &cpi);

pj_pool_t *pool = pjsua_pool_create("POOLNAME", 2000, 2000);
pjmedia_port *prt;
uint buf_size = cpi.bits_per_sample*cpi.samples_per_frame/8;
void *buffer = pj_pool_zalloc(pool, buf_size);
pjsua_conf_port_id port_id;

pjmedia_mem_capture_create( pool,
                            buffer,
                            buf_size,
                            cpi.clock_rate,
                            cpi.channel_count,
                            cpi.samples_per_frame,
                            cpi.bits_per_sample,
                            0,
                            &prt);

pjmedia_mem_capture_set_eof_cb(prt, buffer, dump_incoming_frames);

pjsua_conf_add_port(pool, prt, &port_id);
pjsua_conf_connect(ci.conf_slot, port_id); // connect port with conference

/////// dumping frames ///
static pj_status_t dump_incoming_frames(pjmedia_port * port, void * usr_data)
{
    pj_size_t buf_size = pjmedia_mem_capture_get_size(port);
    char * data = usr_data;
    ...
    fwrite(data, sizeof(data[0]), buf_size, fptr);
    ...
}
The documentation says pjmedia_mem_capture_set_eof_cb is deprecated, but I couldn't get pjmedia_mem_capture_set_eof_cb2 to work: buf_size is 0 on every call of dump_incoming_frames, so I stayed with the deprecated function. I also achieved the same result by creating a custom port.
I hope you can adapt it easily to your C++/pjsua2 code.
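For the playback direction the question also asks about, I would expect the player side to mirror this with pjmedia_mem_player_create(), connecting the port the other way round (player into the call's conference slot). A sketch only, not tested against a live call; filling play_buffer from the FPGA is up to you:

/* Sketch: play PCM from a memory buffer into the call. Reuses the pool and
   format values from the capture code above. */
void *play_buffer = pj_pool_zalloc(pool, buf_size);

pjmedia_port *player_port;
pjmedia_mem_player_create( pool,
                           play_buffer,
                           buf_size,
                           cpi.clock_rate,
                           cpi.channel_count,
                           cpi.samples_per_frame,
                           cpi.bits_per_sample,
                           0,              /* options */
                           &player_port );

pjsua_conf_port_id player_id;
pjsua_conf_add_port(pool, player_port, &player_id);
pjsua_conf_connect(player_id, ci.conf_slot); /* player -> call */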
UPD:
I have modified PJSIP and packed the audio in/out streaming into proper PJSUA2/Media classes so it can be called from Python. The full code is here.

How to access Audio data from JUCE Demo Audio Plugin Host?

I am working on a project which requires me to record audio data as .wav files(of 1 second each) from a MIDI Synth plugin loaded in the JUCE Demo Audio Plugin host. Basically, I need to create a dataset automatically (corresponding to different parameter configurations) from the MIDI Synth.
Will I have to send MIDI Note On/Off messages to generate audio data? Or is there a better way of getting audio data?
AudioBuffer<FloatType> getBusBuffer (AudioBuffer<FloatType>& processBlockBuffer) const
Is this the function which will solve my needs? If yes, how would I store the data? If not, could someone please guide me to the right function/solution.
Thank you.
I'm not exactly sure what you're asking, so I'm going to guess:
You need to programmatically trigger some MIDI notes in your synth, then write all the audio to a .wav file, right?
Assuming you already know JUCE, it would be fairly trivial to make an app that opens your plugin, sends MIDI, and records audio, but it's probably just easier to tweak the AudioPluginHost project.
Let's break it into a few simple steps (first open the AudioPluginHost project):
Programmatically send MIDI
Look at GraphEditorPanel.h, specifically the class GraphDocumentComponent. It has a private member variable: MidiKeyboardState keyState;. This collects incoming MIDI Messages and then inserts them into the incoming Audio & MIDI buffer that is sent to the plugin.
You can simply call keyState.noteOn (midiChannel, midiNoteNumber, velocity) and keyState.noteOff (midiChannel, midiNoteNumber, velocity) to trigger a note on and off.
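For example, a small sketch of triggering one note programmatically (channel, note number and velocities are placeholder values):

// Sketch: play middle C on the keyState member mentioned above.
void triggerTestNote (juce::MidiKeyboardState& keyState)
{
    keyState.noteOn  (1, 60, 1.0f);   // channel 1, MIDI note 60, full velocity
    // ...let the synth render for as long as you need (e.g. one second)...
    keyState.noteOff (1, 60, 0.0f);   // release
}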
Record Audio Output
This is a fairly straightforward thing to do in JUCE — you should start by looking at the JUCE Demos. The following example records output audio in the background, but there are plenty of other ways to do it:
class AudioRecorder : public AudioIODeviceCallback
{
public:
    AudioRecorder (AudioThumbnail& thumbnailToUpdate)
        : thumbnail (thumbnailToUpdate)
    {
        backgroundThread.startThread();
    }

    ~AudioRecorder()
    {
        stop();
    }

    //==============================================================================
    void startRecording (const File& file)
    {
        stop();

        if (sampleRate > 0)
        {
            // Create an OutputStream to write to our destination file...
            file.deleteFile();
            ScopedPointer<FileOutputStream> fileStream (file.createOutputStream());

            if (fileStream.get() != nullptr)
            {
                // Now create a WAV writer object that writes to our output stream...
                WavAudioFormat wavFormat;
                auto* writer = wavFormat.createWriterFor (fileStream.get(), sampleRate, 1, 16, {}, 0);

                if (writer != nullptr)
                {
                    fileStream.release(); // (passes responsibility for deleting the stream to the writer object that is now using it)

                    // Now we'll create one of these helper objects which will act as a FIFO buffer, and will
                    // write the data to disk on our background thread.
                    threadedWriter.reset (new AudioFormatWriter::ThreadedWriter (writer, backgroundThread, 32768));

                    // Reset our recording thumbnail
                    thumbnail.reset (writer->getNumChannels(), writer->getSampleRate());
                    nextSampleNum = 0;

                    // And now, swap over our active writer pointer so that the audio callback will start using it..
                    const ScopedLock sl (writerLock);
                    activeWriter = threadedWriter.get();
                }
            }
        }
    }

    void stop()
    {
        // First, clear this pointer to stop the audio callback from using our writer object..
        {
            const ScopedLock sl (writerLock);
            activeWriter = nullptr;
        }

        // Now we can delete the writer object. It's done in this order because the deletion could
        // take a little time while remaining data gets flushed to disk, so it's best to avoid blocking
        // the audio callback while this happens.
        threadedWriter.reset();
    }

    bool isRecording() const
    {
        return activeWriter != nullptr;
    }

    //==============================================================================
    void audioDeviceAboutToStart (AudioIODevice* device) override
    {
        sampleRate = device->getCurrentSampleRate();
    }

    void audioDeviceStopped() override
    {
        sampleRate = 0;
    }

    void audioDeviceIOCallback (const float** inputChannelData, int numInputChannels,
                                float** outputChannelData, int numOutputChannels,
                                int numSamples) override
    {
        const ScopedLock sl (writerLock);

        if (activeWriter != nullptr && numInputChannels >= thumbnail.getNumChannels())
        {
            activeWriter->write (inputChannelData, numSamples);

            // Create an AudioBuffer to wrap our incoming data, note that this does no allocations or copies, it simply references our input data
            AudioBuffer<float> buffer (const_cast<float**> (inputChannelData), thumbnail.getNumChannels(), numSamples);
            thumbnail.addBlock (nextSampleNum, buffer, 0, numSamples);
            nextSampleNum += numSamples;
        }

        // We need to clear the output buffers, in case they're full of junk..
        for (int i = 0; i < numOutputChannels; ++i)
            if (outputChannelData[i] != nullptr)
                FloatVectorOperations::clear (outputChannelData[i], numSamples);
    }

private:
    AudioThumbnail& thumbnail;
    TimeSliceThread backgroundThread { "Audio Recorder Thread" }; // the thread that will write our audio data to disk
    ScopedPointer<AudioFormatWriter::ThreadedWriter> threadedWriter; // the FIFO used to buffer the incoming data
    double sampleRate = 0.0;
    int64 nextSampleNum = 0;

    CriticalSection writerLock;
    AudioFormatWriter::ThreadedWriter* volatile activeWriter = nullptr;
};
Note that the actual audio callbacks that contain the audio data from your plugin are happening inside the AudioProcessorGraph inside FilterGraph. There is an audio callback happening many times a second where the raw audio data is passed in. It would probably be very messy to change that inside AudioPluginHost unless you know what you are doing — it would probably be simpler to use something like the above example or create your own app that has its own audio flow.
The function you asked about:
AudioBuffer<FloatType> getBusBuffer (AudioBuffer<FloatType>& processBlockBuffer) const
is irrelevant here. Once you're already in the audio callback, it would give you the audio being sent to a bus of your plugin (for example, if your synth had a side chain). What you want to do instead is take the audio coming out of the callback and pass it to an AudioFormatWriter, or preferably an AudioFormatWriter::ThreadedWriter, so that the actual writing happens on a different thread.
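If you do end up with a block of samples in your own code, the writing side can be as small as this sketch. It follows the older-JUCE style used in the recorder above, where the writer takes ownership of the stream; the file path and buffer are whatever you captured:

// Sketch: write one AudioBuffer<float> out as a 16-bit WAV file.
void writeBufferToWav (const AudioBuffer<float>& buffer, double sampleRate, const File& file)
{
    file.deleteFile();

    WavAudioFormat wavFormat;
    ScopedPointer<FileOutputStream> stream (file.createOutputStream());

    if (stream.get() != nullptr)
    {
        ScopedPointer<AudioFormatWriter> writer (
            wavFormat.createWriterFor (stream.get(), sampleRate,
                                       (unsigned int) buffer.getNumChannels(), 16, {}, 0));

        if (writer.get() != nullptr)
        {
            stream.release();   // the writer now owns (and will delete) the stream
            writer->writeFromAudioSampleBuffer (buffer, 0, buffer.getNumSamples());
        }
    }
}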
If you're not at all familiar with C++ or JUCE, Max/MSP or Pure Data might be easier for you to quickly whip something up.

Libevent writes to the socket only after second buffer_write

Libevent is great and I love it so far. However, on an echo server, the write only goes out to the socket after a second write. My writes come from another thread, a pump thread that talks to a DB and does some minimal data massaging.
I verified this by setting up a callback for the write:
bufferevent_setcb( GetBufferEvent(), DataAvailable, DataWritten, HandleSocketError, this );
Calling bufferevent_flush( m_bufferEvent, EV_READ|EV_WRITE, BEV_NORMAL ) doesn't seem to have any effect.
Here is the setup, just in case I blew it somewhere. I have dramatically simplified the overhead in my code base in order to obtain some help. This includes initialization of sockets, my thread init, etc. This is a multi-threaded app, so there may be some problem there. I start with this:
m_LibEventInstance = event_base_new();
evthread_use_windows_threads();

m_listener = evconnlistener_new_bind( m_LibEventInstance,
                                      OnAccept,
                                      this,
                                      LEV_OPT_CLOSE_ON_FREE | LEV_OPT_CLOSE_ON_EXEC | LEV_OPT_REUSEABLE,
                                      -1, // no maximum number of backlog connections
                                      (struct sockaddr*)&ListenAddress, socketSize );

if (!m_listener) {
    perror("Couldn't create listener");
    return false;
}

evconnlistener_set_error_cb( m_listener, OnSystemError );
AFAIK, this is copy and paste from samples so it should work. My OnAccept does the following:
void OnAccept( evconnlistener* listenerObj, evutil_socket_t newConnectionId, sockaddr* ClientAddr, int socklen, void* context )
{
    // We got a new connection! Set up a bufferevent for it.
    struct event_base* base = evconnlistener_get_base( listenerObj );
    struct bufferevent* bufferEvent = bufferevent_socket_new( base, newConnectionId, BEV_OPT_CLOSE_ON_FREE );

    bufferevent_setcb( GetBufferEvent(), DataAvailable, DataWritten,
                       HandleSocketError, this );

    // We have to enable it before our callbacks will be called.
    bufferevent_enable( GetBufferEvent(), EV_READ | EV_WRITE );

    DisableNagle( m_connectionId );
}
Now, I simply respond to data coming in and store it in a buffer for later processing. This is a multi-threaded application, so I will process the data later, massage it, or return a response to the client.
void DataAvailable( struct bufferevent* bufferEventObj, void* arg )
{
    const U32 MaxBufferSize = 8192;
    MyObj* This = (MyObj*) arg;
    U8 data[ MaxBufferSize ];
    size_t numBytesreceived;

    /* Read 8k at a time and send it to all connected clients. */
    while( 1 )
    {
        numBytesreceived = bufferevent_read( bufferEventObj, data, sizeof( data ) );
        if( numBytesreceived <= 0 ) // nothing to send
        {
            break;
        }

        if( This )
        {
            This->OnDataReceived( data, numBytesreceived );
        }
    }
}
The last thing that happens: once I look up my data and package it into a buffer, then on a threaded timeslice I do this:
bufferevent_write( m_bufferEvent, buffer, bufferOffset );
It never, ever sends the first time. To get it to send, I have to send a second buffer full of data.
This behavior is killing me and I have spent a lot of hours on it. Any ideas?
//-------------------------------------------------------
I finally gave up and used this hack instead... there just was not enough info to tell me why libevent wasn't writing to the socket. This works just fine.
int result = send( m_connectionId, (const char* )buffer, bufferOffset, 0 );
I ran into this problem too! I spent a day on it and finally solved it.
The thread that calls event_base_dispatch sleeps until something wakes it up. So if you call bufferevent_write from another thread while it is asleep, the bufferevent's fd gets added to the event list, but it won't be polled (epoll'd) until the next time the loop wakes up. You therefore have to wake the dispatch thread after calling bufferevent_write. One way is to set up an event bound to a socket pair and add it to the event_base, then send 1 byte to it whenever you need to wake the dispatch thread.
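A sketch of that wake-up mechanism using plain libevent calls (the names are mine; platform socket headers and error handling are omitted):

#include <event2/event.h>
#include <event2/util.h>

// One end lives in the event loop; the other is written to by the thread
// that has just queued data with bufferevent_write().
static evutil_socket_t wakeupPair[2];

static void OnWakeup( evutil_socket_t fd, short, void* )
{
    char drain[16];
    recv( fd, drain, sizeof( drain ), 0 );  // drain the byte; the loop is now awake
}

bool SetupWakeup( event_base* base )
{
    if ( evutil_socketpair( AF_UNIX, SOCK_STREAM, 0, wakeupPair ) != 0 )
        return false;

    evutil_make_socket_nonblocking( wakeupPair[0] );

    event* ev = event_new( base, wakeupPair[0], EV_READ | EV_PERSIST, OnWakeup, nullptr );
    return ev != nullptr && event_add( ev, nullptr ) == 0;
}

// Call this from the pump thread right after bufferevent_write().
void WakeDispatchThread()
{
    send( wakeupPair[1], "x", 1, 0 );
}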