I get an AudioBufferList from a WAV file (the sample rate is 44100 Hz and the length is 2 seconds). However, I don't get 44100 * 2 = 88200 samples; instead I get an AudioBufferList whose mNumberBuffers is 512. How can I get the samples from the AudioBufferList?
You get the number of packets (for linear PCM, one packet is one frame) in the file using AudioFileGetProperty (kAudioFilePropertyAudioDataPacketCount is an audio file property, not an AudioUnit property):
AudioFileGetProperty(audioFileID, kAudioFilePropertyAudioDataPacketCount, &propertySize, &packetCount)
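A minimal sketch of that call (the helper name and url variable are placeholders), assuming a file opened with AudioFileOpenURL. For linear PCM, one packet is one frame, so a 2-second 44100 Hz file should report 88200 packets:

#include <AudioToolbox/AudioToolbox.h>

UInt64 GetPacketCount(CFURLRef url)
{
    AudioFileID fileID = nullptr;
    UInt64 packetCount = 0;
    UInt32 size = sizeof(packetCount);

    if (AudioFileOpenURL(url, kAudioFileReadPermission, 0, &fileID) == noErr)
    {
        // kAudioFilePropertyAudioDataPacketCount reports the total packet count as a UInt64.
        AudioFileGetProperty(fileID, kAudioFilePropertyAudioDataPacketCount,
                             &size, &packetCount);
        AudioFileClose(fileID);
    }
    return packetCount;
}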
I'm attempting to write a simple Windows Media Foundation command line tool that uses IMFSourceReader and IMFSinkWriter to load a video, read the video and audio as uncompressed streams and re-encode them to H.264/AAC with some specific hard-coded settings.
The simple program Gist is here
sample video 1
sample video 2
sample video 3
(Note: the videos I've been testing with are all stereo, 48000 Hz sample rate)
The program works; however, in some cases when comparing the newly output video to the original in an editing program, I see that the copied video streams match, but the audio stream of the copy is prefixed with some amount of silence and the audio is offset, which is unacceptable in my situation.
audio samples:
original - |[audio1] [audio2] [audio3] [audio4] [audio5] ... etc
copy - |[silence] [silence] [silence] [audio1] [audio2] [audio3] ... etc
In cases like this, the first video frames coming in have a non-zero timestamp but the first audio frames do have a 0 timestamp.
I would like to be able to produce a copied video whose first video and audio frames start at 0, so I first attempted to subtract that initial timestamp (videoOffset) from all subsequent video frames. This produced the video I wanted, but resulted in this situation with the audio:
original - |[audio1] [audio2] [audio3] [audio4] [audio5] ... etc
copy - |[audio4] [audio5] [audio6] [audio7] [audio8] ... etc
The audio track is shifted now in the other direction by a small amount and still doesn't align. This can also happen sometimes when a video stream does have a starting timestamp of 0 yet WMF still cuts off some audio samples at the beginning anyway (see sample video 3)!
I've been able to fix this sync alignment and offset the video stream to start at 0 with the following code inserted at the point of passing the audio sample data to the IMFSinkWriter:
//inside read sample while loop
...

// LONGLONG llDuration has the currently read sample duration
// DWORD audioOffset has the global audio offset, starts as 0
// LONGLONG audioFrameTimeStamp has the currently read sample timestamp

// add some random amount of silence in intervals of 1024 samples
static bool runOnce{ false };
if (!runOnce)
{
    size_t numberOfSilenceBlocks = 1; // how to derive how many I need!? It's arbitrary
    size_t samples = 1024 * numberOfSilenceBlocks;
    audioOffset = samples * 10000000 / audioSamplesPerSecond; // samples -> 100-ns units
    std::vector<uint8_t> silence(samples * audioChannels * bytesPerSample, 0);
    WriteAudioBuffer(silence.data(), silence.size(), audioFrameTimeStamp, audioOffset);
    runOnce = true;
}

LONGLONG audioTime = audioFrameTimeStamp + audioOffset;
WriteAudioBuffer(dataPtr, dataSize, audioTime, llDuration);
Oddly, this creates an output video file that matches the original.
original - |[audio1] [audio2] [audio3] [audio4] [audio5] ... etc
copy - |[audio1] [audio2] [audio3] [audio4] [audio5] ... etc
The solution was to insert extra silence, in blocks of 1024 samples, at the beginning of the audio stream. It doesn't matter what audio chunk sizes IMFSourceReader provides; the padding is in multiples of 1024.
My problem is that there seems to be no detectable reason for the silence offset. Why do I need it? How do I know how much I need? I stumbled across the 1024-sample silence block solution after days of fighting this problem.
Some videos seem to only need 1 padding block, some need 2 or more, and some need no extra padding at all!
My questions here are:
Does anyone know why this is happening?
Am I using Media Foundation incorrectly in this situation to cause this?
If I am using it correctly, how can I use the video metadata to determine whether I need to pad an audio stream and how many 1024-sample blocks of silence need to be in the pad?
EDIT:
For the sample videos above:
sample video 1: the video stream starts at 0 and needs no extra blocks; pass-through of the original data works fine.
sample video 2: the video stream starts at 834166 (hns) and needs one 1024-sample block of silence to sync.
sample video 3: the video stream starts at 0 and needs two 1024-sample blocks of silence to sync.
UPDATE:
Other things I have tried:
Increasing the duration of the first video frame to account for the offset: Produces no effect.
I wrote another version of your program to handle the NV12 format correctly (yours was not working):
EncodeWithSourceReaderSinkWriter
I use Blender as my video editing tool. Here are my results with Tuning_against_a_window.mov:
From the bottom to the top:
Original file
Encoded file
I changed the original file by setting the number of entries of the "elst" atoms to 0 (I used the Visual Studio hex editor).
Like Roman R. said, the Media Foundation MP4 source doesn't use the "edts/elst" atoms. But Blender and your video editing tools do. The "tmcd" track is also ignored by the MP4 source.
"edts/elst" :
Edits Atom ( 'edts' )
Edit lists can be used for hint tracks...
MPEG-4 File Source
The MPEG-4 file source silently ignores hint tracks.
So in fact, the encoding is good. I think there is no audio stream sync offset compared to the real audio/video data. For example, you can add "edts/elst" to the encoded file to get the same result.
PS: on the encoded file, I added "edts/elst" for both the audio and video tracks. I also increased the size of the trak atoms and the moov atom. I confirm that Blender shows the same waveform for both the original and the encoded file.
EDIT
I tried to understand the relation between the mvhd/tkhd/mdhd/elst atoms in the 3 video samples. (Yes I know, I should read the spec. But I'm lazy...)
You can use an MP4 explorer tool to get the atom values, or use the MP4 parser from my H264Dxva2Decoder project:
H264Dxva2Decoder
Tuning_against_a_window.mov
elst (media time) from tkhd video : 20689
elst (media time) from tkhd audio : 1483
GREEN_SCREEN_ANIMALS__ALPACA.mp4
elst (media time) from tkhd video : 2002
elst (media time) from tkhd audio : 1024
GOPR6239_1.mov
elst (media time) from tkhd video : 0
elst (media time) from tkhd audio : 0
As you can see, with GOPR6239_1.mov the media time from elst is 0. That's why there is no video/audio sync problem with this file.
For Tuning_against_a_window.mov and GREEN_SCREEN_ANIMALS__ALPACA.mp4, I tried to calculate the video/audio offset.
I modified my project to take this into account:
EncodeWithSourceReaderSinkWriter
For now, I haven't found a generic calculation that works for all files.
I just found the video/audio offset needed to encode both files correctly.
For Tuning_against_a_window.mov, I begin encoding after (movie time - video/audio mdhd time).
For GREEN_SCREEN_ANIMALS__ALPACA.mp4, I begin encoding after the video/audio elst media time.
It's OK, but I still need to find the right single calculation for all files.
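To make the second heuristic concrete, here is a rough sketch, assuming the audio track's mdhd timescale equals the sample rate (so the elst media time is expressed in samples) and reusing the audioSamplesPerSecond name from the earlier snippet. It only illustrates the arithmetic, not the general calculation I'm still looking for:

#include <cstdint>

struct PaddingEstimate
{
    int64_t offsetHns;      // offset in 100-ns units, as used by the IMFSinkWriter timestamps
    int64_t silenceBlocks;  // number of 1024-sample blocks to prepend (rounded up)
};

PaddingEstimate EstimatePadding(int64_t elstMediaTime, int64_t audioSamplesPerSecond)
{
    PaddingEstimate e{};
    e.offsetHns = elstMediaTime * 10000000LL / audioSamplesPerSecond;
    e.silenceBlocks = (elstMediaTime + 1023) / 1024;  // round up to whole 1024-sample blocks
    return e;
}

// Example: GREEN_SCREEN_ANIMALS__ALPACA.mp4 has an audio elst media time of 1024,
// which at 48000 Hz is one 1024-sample block (about 213333 hns) of silence.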
So you have 2 options:
encode the file and add the elst atom
encode the file using the right offset calculation
It depends on your needs:
The first option lets you keep the original file, but you have to add the elst atom.
With the second option you have to read the atoms from the file before encoding, and the encoded file will lose a few original frames.
If you choose the first option, I will explain how I add the elst atom.
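For reference, here is a sketch of the version-0 "elst" box layout from ISO/IEC 14496-12 (the structure only, not the exact bytes I wrote). All fields are stored big-endian, and the enclosing "edts", "trak" and "moov" sizes must also be increased, as mentioned above:

#include <cstdint>

#pragma pack(push, 1)
struct ElstEntryV0
{
    uint32_t segmentDuration;   // edit duration, in movie (mvhd) timescale units
    int32_t  mediaTime;         // start time in media (mdhd) timescale units, -1 = empty edit
    int16_t  mediaRateInteger;  // normally 1
    int16_t  mediaRateFraction; // normally 0
};

struct ElstBoxV0
{
    uint32_t size;              // total box size in bytes
    char     type[4];           // "elst"
    uint8_t  version;           // 0
    uint8_t  flags[3];          // all 0
    uint32_t entryCount;        // number of ElstEntryV0 entries that follow
    // ElstEntryV0 entries[entryCount];
};
#pragma pack(pop)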
PS: I'm interested in this question because, in my H264Dxva2Decoder project, the edts/elst atom is on my todo list.
I parse it, but I don't use it...
PS2: this link sounds interesting:
Audio Priming - Handling Encoder Delay in AAC
How do I calculate the size of a compressed Opus frame (number of bytes)? I have read the OggS page and the TOC header. The next bytes should belong to the compressed frame, but how do I get the number of bytes?
You're inside an Ogg file, I assume. Why can't you read it from the lacing table like any other data packet?
The first Ogg page is OpusHead, the second is OpusTags; every page following that should just be the Opus packets laced together, no special formatting or anything. It's in the spec here: https://wiki.xiph.org/OggOpus
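A minimal sketch of reading packet sizes from the segment table, assuming the whole page (27-byte header plus segment table) is already in memory. Each lacing value adds to the current packet, and a value below 255 terminates it; a packet whose final lacing value is exactly 255 continues on the next page:

#include <cstdint>
#include <vector>

std::vector<size_t> PacketSizesInPage(const uint8_t* page)
{
    std::vector<size_t> sizes;
    int segmentCount = page[26];        // page_segments: number of lacing values
    const uint8_t* lacing = page + 27;  // segment table follows the 27-byte header

    size_t current = 0;
    for (int i = 0; i < segmentCount; ++i)
    {
        current += lacing[i];
        if (lacing[i] < 255)            // a lacing value < 255 ends the packet
        {
            sizes.push_back(current);
            current = 0;
        }
    }
    // If current is non-zero here, the last packet continues on the next page.
    return sizes;
}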
I have searched for an answer to this question for several hours. I have already removed the 44-byte header and have transferred the data using an ofstream. The input stereo WAV file is 16-bit PCM at a 44.1 kHz sample rate.
ifstream ssn(f_infile, ios::binary);
ssn.seekg(0, ssn.end);
int szm = ssn.tellg();      // file size in bytes
ssn.seekg(0, ssn.beg);

char* buff = new char[szm]; // allocate the buffer after the size is known
ssn.read(buff, szm);
ssn.close();

ofstream sso(f_outfile, ios::binary);
for (int i = 0; i < szm; i++)
{
    if (i > 44)             // skip the 44-byte WAV header
    {
        word_w(file, buff[i], 1);
        word_w(file, 0 - (buff[i]), 1);
    }
}
sso.close();
file.close();
I got the size of the file and read the data into a buffer. I know a RAW data file is just binary data, and I thought this simple technique would work. However, I got mixed results.
This first one worked like a charm. It was the original sample I wanted to convert. It is a side-by-side comparison of the original WAV file [top] and the raw data [bottom] imported into Audacity at 44.1 kHz.
This next one distorted the right channel for some reason, and doubled the length of the file. It is also a stereo WAV file, 16-bit PCM, 44.1 kHz sample rate.
This third one is completely distorted, and the length has increased even more than the previous one.
Why did it work on the first file but not the others, when they are all in exactly the same format (16-bit, 44.1 kHz sample rate, 2 channels)?
It's the first time I'm working with WAV files.
The problem is that I don't exactly understand how to properly read the stored data. My code for reading:
uint8_t* buffer = new uint8_t[BUFFER_SIZE];
std::cout << "Buffering data... " << std::endl;
while ((bytesRead = fread(buffer, sizeof buffer[0], BUFFER_SIZE / (sizeof buffer[0]), wavFile)) > 0)
{
//do sth with buffer data
}
The sample file's header tells me the data is PCM (1 channel) with 8 bits per sample and a sampling rate of 11025 Hz.
The output data gives me (after updates) values from 0 to 255, so the values are proper 8-bit PCM values. But any idea what BUFFER_SIZE would be preferable to correctly read those values?
WAV file I'm using: http://www.wavsource.com/movies/2001.htm (daisy.wav)
TXT output: https://paste.ee/p/pXGvm
You've got two common situations. The first is where the WAV file represents a short audio sample and you want to read the whole thing into memory and manipulate it. In that case BUFFER_SIZE is a variable: basically you seek to the end of the file to get its size, then load it, as sketched below.
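A minimal sketch of that first case (the path argument and helper name are placeholders):

#include <fstream>
#include <vector>

std::vector<char> loadWholeFile(const char* path)
{
    std::ifstream in(path, std::ios::binary);
    in.seekg(0, std::ios::end);
    std::streamsize size = in.tellg();   // here BUFFER_SIZE is simply the file size
    in.seekg(0, std::ios::beg);

    std::vector<char> data(static_cast<size_t>(size));
    in.read(data.data(), size);
    return data;
}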
The second common situation is that the WAV file represents a fairly long audio recording, and you want to process it piecewise, often by writing to an output device in real time. So BUFFER_SIZE needs to be large enough to hold a bite-sized chunk, but not so large that you require excessive memory. Often the size of a "frame" of audio is given by the output device itself; for example, it expects 25 frames per second to synchronise with video, or something similar. You generally need a double buffer to ensure that you can always meet the demand for more samples when the DAC (digital-to-analogue converter) runs out. Then, on giving out a sample, you load the next chunk of data from disk. Sometimes there isn't a "right" value for the chunk size; you've just got to go with something fairly sensible that balances memory footprint against the number of calls.
If you need to do an FFT, it's normal to use a buffer size that is a power of two, to make the fast transform simpler. The size you need depends on the lowest frequency you are interested in, as in the sketch below.
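A small sketch of that rule of thumb, assuming the window should cover at least one full period of the lowest frequency of interest:

#include <cstddef>

size_t fftSizeFor(double sampleRate, double lowestFrequencyHz)
{
    double samplesPerPeriod = sampleRate / lowestFrequencyHz;
    size_t n = 1;
    while (n < samplesPerPeriod)  // round up to the next power of two
        n <<= 1;
    return n;
}

// Example: 44100 Hz with a 30 Hz lower bound needs 1470 samples, so use 2048.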
To play my test wave file, I set the following format fmt:
fmt.setChannelCount(2);
fmt.setCodec("audio/pcm");
fmt.setByteOrder(QAudioFormat::LittleEndian);
fmt.setSampleType(QAudioFormat::SignedInt);
fmt.setSampleRate(44100);
fmt.setSampleSize(16);
It also works with these settings:
fmt.setSampleRate(22050);
fmt.setSampleSize(32);
Those settings are meant for a QAudioOutput:
player = new QAudioOutput(fmt);
file = new QFile(fileName);
file->open(QIODevice::ReadOnly);
player->start(file);
With these settings I can play my test WAV file correctly.
But I want to detect the format settings by reading the header.
When I analyse it, it says:
Opening WAV file at: "C:/Deep Purple - Anthology (Disc 2) - 09 - Hold On.wav"
The size of the WAV file is: 53994908
WAV File Header read:
File Type: "RIFFWAVE"
File Size: 53994900
WAV Marker: "WAVE"
Format Name: "fmt??("
Format Length: 4128
Format Type: 256
Number of Channels: 512
Sample Rate: 11289600
Sample Rate * Bits/Sample * Channels / 8: 45158400
Bits per Sample * Channels / 8.1: 1024
Bits per Sample: 4096
Data Header: ""
Data Size: 937783393
If I divide the sample rate by the number of channels, I get a sample rate per channel of 22050. But why do I have to set 44100 to make it sound good? And why are there 512 channels? Opening the file with Audacity, there are only 2 (Audacity says: Stereo, 44100Hz, 32-bit float).
Here are a bunch of links that helped me out a ton when I was working on this kind of a project.
https://github.com/visore/QAudioCoder
http://qt-project.org/forums/viewthread/6899
http://doc.qt.digia.com/qt-maemo/demos-spectrum-app-wavfile-cpp.html
http://fledisplace.com/QtMultimediaExample2.html
I did find a typo in one of the examples: it wasn't reading one of the parameters with the correct endianness. Let me double-check which one it was, and I'll get back to you.
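In the meantime, here is a rough sketch of reading the "fmt " chunk as little-endian with QDataStream (the field layout assumed is the canonical PCM header, and the chunk walking is simplified). The 512 channels and format type 256 in your dump look like byte-swapped readings of 2 and 1:

#include <QFile>
#include <QDataStream>

struct WavFmt
{
    quint16 audioFormat, channels;
    quint32 sampleRate, byteRate;
    quint16 blockAlign, bitsPerSample;
};

bool readFmt(const QString& fileName, WavFmt& fmt)
{
    QFile f(fileName);
    if (!f.open(QIODevice::ReadOnly))
        return false;

    QDataStream in(&f);
    in.setByteOrder(QDataStream::LittleEndian);  // RIFF/WAV fields are little-endian

    char tag[5] = {};
    quint32 chunkSize = 0;
    in.readRawData(tag, 4);   // "RIFF"
    in >> chunkSize;
    in.readRawData(tag, 4);   // "WAVE"

    // Walk the chunks until "fmt " is found.
    while (in.readRawData(tag, 4) == 4)
    {
        in >> chunkSize;
        if (qstrncmp(tag, "fmt ", 4) == 0)
        {
            in >> fmt.audioFormat >> fmt.channels >> fmt.sampleRate
               >> fmt.byteRate >> fmt.blockAlign >> fmt.bitsPerSample;
            return true;
        }
        in.skipRawData(chunkSize + (chunkSize & 1));  // chunks are padded to even sizes
    }
    return false;
}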
Hope that helps.