I've noticed that, as documented, IMFTransform::ProcessOutput() for a resampler can only output one sample per call! I guess it's more oriented toward large-frame-size video coding. Given that all the code I have been looking at as reference for related audio playback allocates one IMFMediaBuffer per call of ProcessOutput, this seems a little insane and like terrible architecture, unless I am missing something?
It is especially bad from the point of view of media buffer usage. For example, a SourceReader decoding my test MP3 gives me chunks of about 64 KB in one sample with one buffer, which is sensible. But GetOutputStreamInfo() requests a media buffer of just 24 bytes per ProcessOutput() call.
64 KB chunks chopped into many 24-byte chunks for further processing seems like very daft overhead: the resampler pays its per-call cost for every 24 bytes, and imposes that overhead further down the pipeline if the output is not consolidated.
From https://learn.microsoft.com/en-us/windows/win32/api/mftransform/nf-mftransform-imftransform-processoutput
It says:
The MFT cannot return more than one sample per stream in a single call to ProcessOutput
The MFT writes the output data to the start of the buffer, overwriting any data that already exists in the buffer
So it is not even the case that it can append to the end of a partially full buffer attached to the sample.
I could create my own pooling object that implements the media buffer interface but bumps a pointer into a vanilla locked media buffer, I guess. The only other option is seemingly to lock/copy those 24 bytes into another, larger buffer for processing. But this all seems excessive, and at the wrong granularity.
What is the best way to deal with this?
Here is a simplified sketch of my test so far:
...
status = transform->ProcessInput(0, sample, 0);
sample->Release();
while (1)
{
    MFT_OUTPUT_STREAM_INFO outDetails{};
    MFT_OUTPUT_DATA_BUFFER outData{};
    IMFMediaBuffer* outBuffer;
    IMFSample* outSample;
    DWORD outStatus;
    status = transform->GetOutputStreamInfo(0, &outDetails);
    status = MFCreateAlignedMemoryBuffer(outDetails.cbSize, outDetails.cbAlignment, &outBuffer);
    status = MFCreateSample(&outSample);
    status = outSample->AddBuffer(outBuffer);
    outBuffer->Release();
    outData.pSample = outSample;
    status = transform->ProcessOutput(0, 1, &outData, &outStatus);
    if (status == MF_E_TRANSFORM_NEED_MORE_INPUT)
        break;
    ...
}
I wrote some code to prove that the audio resampler is capable of processing large audio blocks at once, which is the good, efficient processing style:
winrt::com_ptr<IMFTransform> Transform;
winrt::check_hresult(CoCreateInstance(CLSID_CResamplerMediaObject, nullptr, CLSCTX_ALL, IID_PPV_ARGS(Transform.put())));
WAVEFORMATEX InputWaveFormatEx { WAVE_FORMAT_PCM, 1, 44100, 44100 * 2, 2, 16 };
WAVEFORMATEX OutputWaveFormatEx { WAVE_FORMAT_PCM, 1, 48000, 48000 * 2, 2, 16 };
winrt::com_ptr<IMFMediaType> InputMediaType;
winrt::check_hresult(MFCreateMediaType(InputMediaType.put()));
winrt::check_hresult(MFInitMediaTypeFromWaveFormatEx(InputMediaType.get(), &InputWaveFormatEx, sizeof InputWaveFormatEx));
winrt::com_ptr<IMFMediaType> OutputMediaType;
winrt::check_hresult(MFCreateMediaType(OutputMediaType.put()));
winrt::check_hresult(MFInitMediaTypeFromWaveFormatEx(OutputMediaType.get(), &OutputWaveFormatEx, sizeof OutputWaveFormatEx));
winrt::check_hresult(Transform->SetInputType(0, InputMediaType.get(), 0));
winrt::check_hresult(Transform->SetOutputType(0, OutputMediaType.get(), 0));
MFT_OUTPUT_STREAM_INFO OutputStreamInfo { };
winrt::check_hresult(Transform->GetOutputStreamInfo(0, &OutputStreamInfo));
_A(!(OutputStreamInfo.dwFlags & MFT_OUTPUT_STREAM_SINGLE_SAMPLE_PER_BUFFER));
DWORD const InputMediaBufferSize = InputWaveFormatEx.nAvgBytesPerSec;
winrt::com_ptr<IMFMediaBuffer> InputMediaBuffer;
winrt::check_hresult(MFCreateMemoryBuffer(InputMediaBufferSize, InputMediaBuffer.put()));
winrt::check_hresult(InputMediaBuffer->SetCurrentLength(InputMediaBufferSize));
winrt::com_ptr<IMFSample> InputSample;
winrt::check_hresult(MFCreateSample(InputSample.put()));
winrt::check_hresult(InputSample->AddBuffer(InputMediaBuffer.get()));
winrt::check_hresult(Transform->ProcessInput(0, InputSample.get(), 0));
DWORD const OutputMediaBufferCapacity = OutputWaveFormatEx.nAvgBytesPerSec;
winrt::com_ptr<IMFMediaBuffer> OutputMediaBuffer;
winrt::check_hresult(MFCreateMemoryBuffer(OutputMediaBufferCapacity, OutputMediaBuffer.put()));
winrt::check_hresult(OutputMediaBuffer->SetCurrentLength(0));
winrt::com_ptr<IMFSample> OutputSample;
winrt::check_hresult(MFCreateSample(OutputSample.put()));
winrt::check_hresult(OutputSample->AddBuffer(OutputMediaBuffer.get()));
MFT_OUTPUT_DATA_BUFFER OutputDataBuffer { 0, OutputSample.get() };
DWORD Status;
winrt::check_hresult(Transform->ProcessOutput(0, 1, &OutputDataBuffer, &Status));
DWORD OutputMediaBufferSize = 0;
winrt::check_hresult(OutputMediaBuffer->GetCurrentLength(&OutputMediaBufferSize));
You can see that after feeding one second of input, the output holds [almost] one second of data as expected.
I am implementing a file system on SPI flash memory using a w25qxx chip and an STM32F4xx in STM32CubeIDE. I have successfully created the basic I/O for the w25 over SPI and can write and read sectors at a time.
In my user_diskio.c I have implemented all of the needed i/o methods and have verified that they are properly linked and being called.
In my main.cpp I format the drive using f_mkfs(), then get the free space, and finally open and close a file. However, f_mkfs() keeps returning FR_MKFS_ABORTED. (FF_MAX_SS is set to 16384.)
fresult = FR_NO_FILESYSTEM;
if (fresult == FR_NO_FILESYSTEM)
{
    BYTE work[FF_MAX_SS]; // Formats the drive if it has yet to be formatted
    fresult = f_mkfs("0:", FM_ANY, 0, work, sizeof work);
}
f_getfree("", &fre_clust, &pfs);
total = (uint32_t)((pfs->n_fatent - 2) * pfs->csize * 0.5);
free_space = (uint32_t)(fre_clust * pfs->csize * 0.5);
fresult = f_open(&fil, "file67.txt", FA_OPEN_ALWAYS | FA_READ | FA_WRITE);
f_puts("This data is from the FILE1.txt. And it was written using ...f_puts... ", &fil);
fresult = f_close(&fil);
fresult = f_open(&fil, "file67.txt", FA_READ);
f_gets(buffer, f_size(&fil), &fil);
f_close(&fil);
Upon investigating my ff.c, it seems that the code is halting on line 5617:
if (fmt == FS_FAT12 && n_clst > MAX_FAT12) return FR_MKFS_ABORTED; /* Too many clusters for FAT12 */
n_clst is calculated a few lines up before some conditional logic, on line 5594:
n_clst = (sz_vol - sz_rsv - sz_fat * n_fats - sz_dir) / pau;
Here is what the debugger reads the variables going in as:
This results in n_clst being set to 4294935040, as it is unsigned; the actual result of the calculation would be -32256 if the variable were signed. As you can imagine, this does not seem to be an accurate calculation.
The device I am using has 16 Mbit (2 MB) of storage organized in 512 sectors of 4 KB each. The minimum erasable block size is 32 KB. If you need more info on the flash chip I am using, page 5 of this pdf outlines all of the specs.
This is what my USER_ioctl() looks like:
DRESULT USER_ioctl (
    BYTE pdrv,    /* Physical drive number (0..) */
    BYTE cmd,     /* Control code */
    void *buff    /* Buffer to send/receive control data */
)
{
    /* USER CODE BEGIN IOCTL */
    UINT* result = (UINT*)buff;
    HAL_GPIO_WritePin(GPIOE, GPIO_PIN_11, GPIO_PIN_SET);
    switch (cmd) {
    case GET_SECTOR_COUNT:
        result[0] = 512;    // number of sectors
        return RES_OK;
    case GET_SECTOR_SIZE:
        result[0] = 4096;   // bytes per sector
        return RES_OK;
    case GET_BLOCK_SIZE:
        result[0] = 32768;  // erase block size in bytes
        return RES_OK;
    }
    return RES_ERROR;
    /* USER CODE END IOCTL */
}
I have tried monkeying around with the parameters to f_mkfs(), swapping FM_ANY out for FM_FAT, FM_FAT32, and FM_EXFAT (along with enabling exFAT in my ffconf.h). I have also tried using several values for au rather than the default. For deeper documentation on the f_mkfs() method I am using, check here; there are a few variations of this method floating around out there.
Here:
fresult = f_mkfs("0:", FM_ANY, 0, work, sizeof work);
The second argument is not valid. It should be a pointer to a MKFS_PARM structure or NULL for default options, as described at http://elm-chan.org/fsw/ff/doc/mkfs.html.
You should have something like:
MKFS_PARM fmt_opt = {FM_ANY, 0, 0, 0, 0};
fresult = f_mkfs("0:", &fmt_opt, 0, work, sizeof work);
except that it is unlikely for your media (SPI flash) that the default options are appropriate: the filesystem cannot obtain formatting parameters from the media as it can for an SD card, for example. You have to provide the necessary formatting information.
Given your erase block size I would guess:
MKFS_PARM fmt_opt = {FM_ANY, 0, 32768, 0, 0};
but to be clear, I have never used the ELM FatFs (which STM32Cube incorporates) with SPI flash; there may be additional issues. I also do not use STM32CubeMX. It is possible, I suppose, that its version has a different interface, but I would recommend using the latest code from ELM rather than ST's possibly fossilised version.
Another consideration is that FatFs is not particularly suitable for your media due to wear-levelling issues. Also, ELM FatFs has no journalling or check/repair function, so it is not power-fail safe. That is particularly important for non-removable media that you cannot easily back up or repair.
You might consider a file system specifically designed for SPI NOR flash such as SPIFFS, or the power-fail safe LittleFS. Here is an example of LittleFS in STM32: https://uimeter.com/2018-04-12-Try-LittleFS-on-STM32-and-SPI-Flash/
OK, I think the real problem was that the IOCTL call GET_BLOCK_SIZE was returning the sector size instead of the number of sectors in a block, which is usually 1 for SPI flash.
I'm building a graphics engine, and I need to write the resulting image to a .bmp file. I'm storing the pixels in a vector<Color>, along with the width and height of the image. Currently I'm writing the image as follows (I didn't write this code myself):
std::ostream &img::operator<<(std::ostream &out, EasyImage const &image) {
//temporarily enable exceptions on output stream
enable_exceptions(out, std::ios::badbit | std::ios::failbit);
//declare some struct-vars we're going to need:
bmpfile_magic magic;
bmpfile_header file_header;
bmp_header header;
uint8_t padding[] =
{0, 0, 0, 0};
//calculate the total size of the pixel data
unsigned int line_width = image.get_width() * 3; //3 bytes per pixel
unsigned int line_padding = 0;
if (line_width % 4 != 0) {
line_padding = 4 - (line_width % 4);
}
//lines must be aligned to a multiple of 4 bytes
line_width += line_padding;
unsigned int pixel_size = image.get_height() * line_width;
//start filling the headers
magic.magic[0] = 'B';
magic.magic[1] = 'M';
file_header.file_size = to_little_endian(pixel_size + sizeof(file_header) + sizeof(header) + sizeof(magic));
file_header.bmp_offset = to_little_endian(sizeof(file_header) + sizeof(header) + sizeof(magic));
file_header.reserved_1 = 0;
file_header.reserved_2 = 0;
header.header_size = to_little_endian(sizeof(header));
header.width = to_little_endian(image.get_width());
header.height = to_little_endian(image.get_height());
header.nplanes = to_little_endian(1);
header.bits_per_pixel = to_little_endian(24);//3bytes or 24 bits per pixel
header.compress_type = 0; //no compression
header.pixel_size = pixel_size;
header.hres = to_little_endian(11811); //11811 pixels/meter or 300dpi
header.vres = to_little_endian(11811); //11811 pixels/meter or 300dpi
header.ncolors = 0; //no color palette
header.nimpcolors = 0;//no important colors
//okay that should be all the header stuff: let's write it to the stream
out.write((char *) &magic, sizeof(magic));
out.write((char *) &file_header, sizeof(file_header));
out.write((char *) &header, sizeof(header));
//okay let's write the pixels themselves:
//they are arranged left->right, bottom->top, b,g,r
// this is the main bottleneck
for (unsigned int i = 0; i < image.get_height(); i++) {
//loop over all lines
for (unsigned int j = 0; j < image.get_width(); j++) {
//loop over all pixels in a line
//we cast &color to char*. since the color fields are ordered blue,green,red they should be written automatically
//in the right order
out.write((char *) &image(j, i), 3 * sizeof(uint8_t));
}
if (line_padding > 0)
out.write((char *) padding, line_padding);
}
//okay we should be done
return out;
}
As you can see, the pixels are being written one by one. This is quite slow, I put some timers in my program, and found that the writing was my main bottleneck.
I tried to write entire (horizontal) lines, but I did not find how to do it (the best I found was this).
Secondly, I wanted to write to the file using multithreading (not sure if I need to use threading or processing), using OpenMP. But that means I need to specify which byte address to write to, I think, which I couldn't solve.
Lastly, I thought about immediately writing to the file whenever I drew an object, but then I had the same issue with writing to specific locations in the file.
So, my question is: what's the best (fastest) way to tackle this problem? (Compiling this for Windows and Linux.)
The fastest method to write to a file is to use hardware assist. Write your output to memory (a.k.a. buffer), then tell the hardware device to transfer from memory to the file (disk).
The next fastest method is to write all the data to a buffer then block write the data to the file. If you want other tasks or threads to execute during your writing, then create a thread that writes the buffer to the file.
When writing to a file, the more data per transaction, the more efficient the write will be. For example, 1 write of 1024 bytes is faster than 1024 writes of one byte.
The idea is to keep the data streaming. Slowing down the transfer rate may be faster than a burst write, delay, burst write, delay, etc.
Remember that the disk is essentially a serial device (unless you have a special hard drive). Bits are laid down on the platters using a bit stream. Writing data in parallel will have adverse effects because the head will have to be moved between the parallel activities.
Remember that if you use more than one core, there will be more traffic on the data bus. The transfer to the file will have to pause while other threads/tasks are using the data bus. So, if you can, block all tasks, then transfer your data. :-)
I've written programs that copy from slow memory to fast memory, then transferred from fast memory to the hard drive. That was also using interrupts (threads).
Summary
Fast writing to a file involves:
Keep the data streaming; minimize the pauses.
Write in binary mode (no translations, please).
Write in blocks (format into memory as necessary before writing the block).
Maximize the data in a transaction.
Use separate writing thread, if you want other tasks running "concurrently".
The hard drive is a serial device, not parallel. Bits are written to the platters in a serial stream.
I am working on a C++ project to read/process/play raw audio from a microphone array system, with its own C++ API. I am using Qt to program the software.
From this post about Real Time Streaming With QAudioOutput (Qt), I wanted to follow up and ask for advice about what to do if the raw audio data comes from a function call that takes about 1000 ms (1 s) to process. How would I still be able to achieve real-time audio playback?
It takes about a second to process because I had read that, when writing to the QIODevice returned by QAudioOutput::start(), it is advisable to write a period's worth of bytes to prevent buffer underrun/overrun. http://cell0907.blogspot.sg/2012/10/qt-audio-output.html
I have set up a QByteArray and QDataStream to stream the data received from the function call.
The API is CcmXXX()
Reading the data from the microphone array returns an array of 32 bit integers
Each 32-bit integer holds 24 bits of resolution, with the 8 LSBs padded with zeros.
It comes in block sizes (set at 1024 samples) x 40 microphones
Each chunk writes about one block, till the number of bytes written reaches close to the period size / free amount of bytes.
Tested: connected my slot to a notify() of about 50 ms to write one period's worth of bytes; used the QByteArray in circular-buffer style; added a mutex lock/unlock at the read/write portions.
Result: only split milliseconds of actual audio played, with lots of jitter and unrecorded sounds.
Please do offer feedback on how I could improve my code.
Setting up QAudioFormat
void MainWindow::init_audio_format(){
m_format.setSampleRate(48000); //(8000, 11025, 16000, 22050, 32000, 44100, 48000, 88200, 96000, 192000
m_format.setByteOrder(QAudioFormat::LittleEndian);
m_format.setChannelCount(1);
m_format.setCodec("audio/pcm");
m_format.setSampleSize(32); //(8, 16, 24, 32, 48, 64)
m_format.setSampleType(QAudioFormat::SignedInt); //(SignedInt, UnSignedInt, Float)
m_device = QAudioDeviceInfo::defaultOutputDevice();
QAudioDeviceInfo info(m_device);
if (!info.isFormatSupported(m_format)) {
qWarning() << "Raw audio format not supported by backend, cannot play audio.";
return;
}
}
Initialising Audio and QByteArray/Datastream
void MainWindow::init_audio_output(){
m_bytearray.resize(65536);
mstream = new QDataStream(&m_bytearray,QIODevice::ReadWrite);
mstream->setByteOrder(QDataStream::LittleEndian);
audio = new QAudioOutput(m_device,m_format,this);
audio->setBufferSize(131072);
audio->setNotifyInterval(50);
m_audiodevice = audio->start();
connect(audio,SIGNAL(notify()),this,SLOT(slot_writedata()));
read_frames();
}
Slot:
void MainWindow::slot_writedata(){
QMutex mutex;
mutex.lock();
read_frames();
mutex.unlock();
}
To read the frames:
void MainWindow::read_frames(){
qint32* buffer;
int frameSize, byteCount=0;
DWORD tdFrames, fdFrames;
float fvalue = 0;
qint32 q32value;
frameSize = 40 * mBlockSize; //40 mics
buffer = new int[frameSize];
int periodBytes = audio->periodSize();
int freeBytes = audio->bytesFree();
int chunks = qMin(periodBytes/mBlockSize,freeBytes/mBlockSize);
CcmStartInput();
while(chunks){
CcmReadFrames(buffer,NULL,frameSize,0,&tdFrames,&fdFrames,NULL,CCM_WAIT);
if(tdFrames==0){
break;
}
int diffBytes = periodBytes - byteCount;
if(diffBytes>=(int)sizeof(q32value)*mBlockSize){
for(int x=0;x<mBlockSize;x++){
q32value = (quint32)buffer[x]/256;
*mstream << (qint32)fvalue;
byteCount+=sizeof(q32value);
}
}
else{
for(int x=0;x<(diffBytes/(int)sizeof(q32value));x++){
q32value = (quint32)buffer[x]/256;
*mstream << (qint32) fvalue;
byteCount+=sizeof(q32value);
}
}
--chunks;
}
CcmStopInput();
mPosEnd = mPos + byteCount;
write_frames();
mPos += byteCount;
if(mPos >= m_bytearray.length()){
mPos = 0;
mstream->device()->seek(0); //change mstream pointer back to bytearray start
}
}
To write the frames:
void MainWindow::write_frames()
{
int len = m_bytearray.length() - mPos;
int bytesWritten = mPosEnd - mPos;
if(len>=audio->periodSize()){
m_audiodevice->write(m_bytearray.data()+mPos, bytesWritten);
}
else{
w_data.replace(0,qAbs(len),m_bytearray.data()+mPos);
w_data.replace(qAbs(len),audio->periodSize()-abs(len),m_bytearray.data());
m_audiodevice->write(w_data.data(),audio->periodSize());
}
}
Audio support in Qt is actually quite rudimentary. The goal is to have media playback at the lowest possible implementation and maintenance cost. The situation is especially bad on Windows, where I think the ancient MME API is still employed for audio playback.
As a result, the Qt audio API is very far from real time, making it particularly ill-suited for such applications. I recommend using PortAudio or RtAudio, which you can still wrap in Qt-style I/O devices if you wish. This will give you access to better-performing platform audio APIs and much better playback performance at very low latency.
Using the C API of ffmpeg, I have created a C++ application that reads frames from a file and writes them to a new file. Everything works fine as long as I write the frames immediately to the output. In other words, the following structure of the program outputs the correct result (I give only pseudocode for now; if needed I can also post some real snippets, but the classes I have created for handling the ffmpeg functionality are quite large):
AVFrame* frame = av_frame_alloc();
int got_frame;
// readFrame returns 0 if file is ended, got frame = 1 if
// a complete frame has been extracted
while(readFrame(inputfile,frame, &got_frame)) {
if (got_frame) {
// I actually do some processing here
writeFrame(outputfile,frame);
}
}
av_frame_free(&frame);
The next step was to parallelize the application and, as a consequence, frames are no longer written immediately after they are read (I do not want to go into the details of the parallelization). In this case problems arise: there is some flickering in the output, as if some frames were repeated randomly. However, the number of frames and the duration of the output video remain correct.
What I am trying to do now is to separate the reading from the writing completely in the serial implementation, in order to understand what is going on. I am creating a queue of pointers to frames:
std::queue<AVFrame*> queue;
int ret = 1, got_frame;
while (ret) {
AVFrame* frame = av_frame_alloc();
ret = readFrame(inputfile,frame,&got_frame);
if (got_frame)
queue.push(frame);
}
To write frames to the output file I do:
while (!queue.empty()) {
frame = queue.front();
queue.pop();
writeFrame(outputFile,frame);
av_frame_free(&frame);
}
The result in this case is an output video with the correct duration and number of frames, but which is only a repetition of the last 3 (I think) frames of the video.
My guess is that something might go wrong because in the first case I always use the same memory location for reading frames, while in the second case I allocate many different frames.
Any suggestions on what could be the problem?
Ah, so I'm assuming that readFrame() is a wrapper around libavformat's av_read_frame() and libavcodec's avcodec_decode_video2(), is that right?
From the documentation:
When AVCodecContext.refcounted_frames is set to 1, the frame is
reference counted and the returned reference belongs to the caller.
The caller must release the frame using av_frame_unref() when the
frame is no longer needed.
and:
When
AVCodecContext.refcounted_frames is set to 0, the returned reference
belongs to the decoder and is valid only until the next call to this
function or until closing or flushing the decoder.
From this it follows that you need to set AVCodecContext.refcounted_frames to 1. The default is 0, so my gut feeling is that setting it to 1 will fix your problem. Don't forget to use av_frame_unref() on the pictures after use to prevent memory leaks, and also don't forget to free your AVFrame in this loop if got_frame is 0, again to prevent memory leaks:
while (ret) {
AVFrame* frame = av_frame_alloc();
ret = readFrame(inputfile,frame,&got_frame);
if (got_frame)
queue.push(frame);
else
av_frame_free(&frame);
}
(Or alternatively, you could implement some cache for frame so you only reallocate it if the previous object was pushed into the queue.)
There's nothing obviously wrong with your pseudocode. The problem almost certainly lies in how you lock the queue between threads.
Your memory allocation seems fine to me. Do you maybe do something else in between reading and writing the frames?
Is queue the same queue in the routines that read and write the frames?
I've been trying for several months to figure out how this works. I have a program that I'm developing: an MP3 file goes in, and out comes PCM that goes to ALSA for playback, using the mpg123 library. The main code is this:
while (mpg123_read (mh, buffer, buffer_size, & done) == MPG123_OK)
sendoutput (dev, buffer, done);
Now, my attempts have been based on using avutil/avcodec library calls on the buffer to reduce/increase the number of samples per second. The result is awful and isn't audible. In a previous question someone advised me to increase my PC's performance, but if a simple program like VLC can do this on old computers, why can't I?
And as for the problem of the position in the audio file, how can I achieve that?
Edit
I add some pieces of code to try to explain.
SampleConversion.c
#define LENGTH_MS 1000 // how many milliseconds of speech to store; 0.5 s : x = 1 : 44100, so x = 22050 samples to store
#define RATE 44100 // the sampling rate (input)
struct AVResampleContext* audio_cntx = 0;
//(LENGTH_MS*RATE*16*CHANNELS)/8000
void inizializeResample(int inRate, int outRate)
{
audio_cntx = av_resample_init( outRate, //out rate
inRate, //in rate
16, //filter length
10, //phase count
0, //linear FIR filter
0.8 ); //cutoff frequency
assert( audio_cntx && "Failed to create resampling context!");
}
void resample(char dataIn[],char dataOut[],int nsamples)
{
int samples_consumed;
int samples_output = av_resample( audio_cntx, //resample context
(short*)dataOut, //buffout
(short*)dataIn, //buffin
&samples_consumed, //&consumed
nsamples, //nb_samples
sizeof(dataOut)/2,//lenout sizeof(out_buffer)/2 (Right?)
0);//is_last
assert( samples_output > 0 && "Error calling av_resample()!" );
}
void endResample()
{
av_resample_close( audio_cntx );
}
My edited play function (Mpg123.c)
if (isPaused==0 && mpg123_read(mh, buffer, buffer_size, &done) == MPG123_OK)
{
int i=0; char * resBuffer=malloc(sizeof(buffer));
//resBuffer=&buffer[0];
resample(buffer,resBuffer,44100);
if((ao_play(dev, (char*)resBuffer, done)==0)){
return 1;
}
}
Both pieces of code were written by me, so I cannot ask anybody about them; someone only ever suggested improvements, as in the previous question (although I do not know if they are right, sigh).
Edit2: Updated with changes
In the call to av_resample, samples_consumed is never read, so any unconsumed frames are skipped.
Furthermore, nsamples is the constant value 44100 instead of the actual number of frames read (done from mpg123_read).
sizeof(dataOut) is wrong; it's the size of a pointer.
is_last is passed as 0, which is wrong for the final block of input.
In the play function, sizeof(buffer) is likely to be wrong, depending on the definition of buffer.