I'm creating an application that will read a custom format containing a sound "bank" and offsets at which each sound must be played.
Imagine something like..
Sound bank: (ID on the left-hand and file name on the right-hand side)
0 kick.wav
1 hit.wav
2 flute.wav
And the offsets: (Time in ms on the left-hand and sound ID on the right-hand side)
1000 0
2000 1
3000 2
And the application will generate a new sound file (i.e. wav, for later conversion to other formats) that plays a kick at the first second, a hit at the second, and a flute at the third.
I have no idea where to begin.
I usually use FMOD for audio playback, but I've never done anything like this before.
I'm using C++ and wxWidgets on a MSVC++ Express Edition environment, and LGPL libraries would be fine.
If I understand correctly, you want to generate a new wave file by mixing wavs from a soundbank. You may not need a sound API at all for this, especially if all your input wavs are in the same format.
Simply load each wav file into a buffer. Then, for SampleRate * secondsUntilStartTime samples, for each buffer in the ActiveList, add buffer[bufferIdx++] into the output buffer. If bufferIdx == bufferLen, remove that buffer from the ActiveList. At each StartTime, add the next buffer to the ActiveList, and repeat.
If FMOD supports output to a file instead of the sound hardware, you can do this same thing with the streaming API. Just keep track of elapsed samples in the StreamCallback, and start mixing in new files whenever you reach their start offsets.
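A minimal sketch of that mixing loop, assuming mono 16-bit PCM sounds that all share one sample rate (the function name `mixAt` and the clamping policy are my own illustration, not an existing API):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Mix a sound into the output buffer at a given start offset (in samples).
// The sum is clamped to the int16 range to avoid wrap-around distortion.
void mixAt(std::vector<int16_t>& out, const std::vector<int16_t>& snd,
           size_t startSample)
{
    // Grow the output with silence if this sound extends past its end.
    if (out.size() < startSample + snd.size())
        out.resize(startSample + snd.size(), 0);
    for (size_t i = 0; i < snd.size(); ++i) {
        int32_t sum = int32_t(out[startSample + i]) + snd[i];
        out[startSample + i] = int16_t(std::clamp<int32_t>(sum, -32768, 32767));
    }
}
```

The offset in samples for an entry like `1000 0` would be SampleRate * 1000 / 1000; after mixing every entry this way, the output buffer is written out with a wav header.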
I am writing a simple C++ synthesizer with MIDI playback. I've already implemented playback, but some MIDI files are missing the PPQ or SMPTE information (or the data is invalid, e.g. all data bytes are 0). If I use a "default" PPQ value (e.g. 24) and the tempo from the tempo event (these files contain only one tempo event), playback is too slow or too fast, and I have to correct the value by hand. But if I import the same MIDI file into any DAW, it reads the file correctly and plays the melody at the target BPM.
How do I correctly convert event ticks to real time in this case? What am I missing, and what do DAWs do here?
The ticks-per-quarter-note value is part of the header chunk, so it is present in every file.
If this value is zero, then the file is invalid and cannot be played at all.
For tempo and time signature, the default values are defined in the SMF specification:
All MIDI Files should specify tempo and time signature. If they don't, the time signature is assumed to be 4/4, and the tempo 120 beats per minute.
(120 BPM is the same as a tempo value of 500,000 microseconds per quarter note.)
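The conversion those two values feed into can be sketched like this (the function name is my own; it assumes a constant tempo between tempo events):

```cpp
#include <cstdint>

// Convert a MIDI delta-time in ticks to microseconds, given the
// ticks-per-quarter-note value from the header chunk and the current
// tempo in microseconds per quarter note (500000 == 120 BPM default).
int64_t ticksToMicroseconds(int64_t ticks, int ticksPerQuarter,
                            int64_t usPerQuarter = 500000)
{
    return ticks * usPerQuarter / ticksPerQuarter;
}
```

With the spec's defaults, a file with 480 ticks per quarter note plays 480 ticks in exactly half a second.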
I would like to use Overtone to play a longer continuous audio file from disk.
I know Overtone has facilities for loading in samples into memory, but seeing as these files will be long and large (possibly on the order of hours), this is not the method I want to use.
SuperCollider - which Overtone uses as its audio engine - however, also has another way to load and stream files, namely using DiskIn, which Overtone also seems to have, but I wasn't able to find (docs, github) a corresponding Buffer.cueSoundFile() function.
Does Overtone have cueSoundFile at all? Is there another way I can use?
cueSoundFile is a convenience wrapper for the equivalent OSC message /b_read, which you'll find in Overtone as overtone.sc.buffer/buffer-cue.
As a 5 second demo, this plays a 2-channel file from disk.
(demo (disk-in 2 (buffer-cue "~/Music/10mb.wav")))
And the doc for disk-in (SC DiskIn)
user=> (doc disk-in)
-------------------------
overtone.live/disk-in
([numChannels bufnum loop])
stream audio in from disk file
[numChannels :none, bufnum :none, loop 0]
numChannels - Number of channels in the incoming
audio.
bufnum - Id of buffer
loop - Soundfile will loop if 1 otherwise
not.
Continuously play a longer soundfile from disk. This
requires a buffer to be preloaded with one buffer size of
sound. If loop is set to 1, the soundfile will loop.
Categories: InOut, Buffer
Rates: [ :ar ]
Default rate: :ar
I'm attempting to write a simple Windows Media Foundation command-line tool that uses IMFSourceReader and IMFSinkWriter to load a video, read the video and audio as uncompressed streams, and re-encode them to H.264/AAC with some specific hard-coded settings.
The simple program Gist is here
sample video 1
sample video 2
sample video 3
(Note: the videos I've been testing with are all stereo, 48000 Hz sample rate)
The program works; however, in some cases, when comparing the newly output video to the original in an editing program, I see that the copied video streams match, but the audio stream of the copy is prefixed with some amount of silence and the audio is offset, which is unacceptable in my situation.
audio samples:
original - |[audio1] [audio2] [audio3] [audio4] [audio5] ... etc
copy - |[silence] [silence] [silence] [audio1] [audio2] [audio3] ... etc
In cases like this the first video frames coming in have a non-zero timestamp, but the first audio frames do have a 0 timestamp.
I would like to produce a copied video whose first video and audio frames start at 0, so I first attempted to subtract that initial timestamp (videoOffset) from all subsequent video frames. That produced the video I wanted, but resulted in this situation with the audio:
original - |[audio1] [audio2] [audio3] [audio4] [audio5] ... etc
copy - |[audio4] [audio5] [audio6] [audio7] [audio8] ... etc
The audio track is now shifted in the other direction by a small amount and still doesn't align. This can also happen even when a video stream does have a starting timestamp of 0: WMF still cuts off some audio samples at the beginning anyway (see sample video 3)!
I've been able to fix this sync alignment and offset the video stream to start at 0 with the following code inserted at the point of passing the audio sample data to the IMFSinkWriter:
// inside read sample while loop
...
// LONGLONG llDuration has the currently read sample duration
// DWORD audioOffset has the global audio offset, starts at 0
// LONGLONG audioFrameTimeStamp has the currently read sample timestamp

// add some amount of silence in intervals of 1024 samples
static bool runOnce{ false };
if (!runOnce)
{
    size_t numberOfSilenceBlocks = 1; // how to derive how many I need!? It's arbitrary
    size_t samples = 1024 * numberOfSilenceBlocks;
    audioOffset = samples * 10000000 / audioSamplesPerSecond;
    std::vector<uint8_t> silence(samples * audioChannels * bytesPerSample, 0);
    WriteAudioBuffer(silence.data(), silence.size(), audioFrameTimeStamp, audioOffset);
    runOnce = true;
}

LONGLONG audioTime = audioFrameTimeStamp + audioOffset;
WriteAudioBuffer(dataPtr, dataSize, audioTime, llDuration);
Oddly, this creates an output video file that matches the original.
original - |[audio1] [audio2] [audio3] [audio4] [audio5] ... etc
copy - |[audio1] [audio2] [audio3] [audio4] [audio5] ... etc
The solution was to insert extra silence in 1024-sample blocks at the beginning of the audio stream. It doesn't matter what audio chunk sizes IMFSourceReader provides; the padding is in multiples of 1024.
My problem is that there seems to be no detectable reason for the silence offset. Why do I need it? How do I know how much I need? I stumbled across the 1024-sample silence block solution after days of fighting this problem.
Some videos seem to only need 1 padding block, some need 2 or more, and some need no extra padding at all!
My questions here are:
Does anyone know why this is happening?
Am I using Media Foundation incorrectly in this situation to cause this?
If I am using it correctly, how can I use the video metadata to determine whether I need to pad an audio stream, and how many 1024-sample blocks of silence the pad needs?
EDIT:
For the sample videos above:
sample video 1: the video stream starts at 0 and needs no extra blocks; a passthrough of the original data works fine.
sample video 2: the video stream starts at 834166 (hns) and needs one 1024-sample block of silence to sync.
sample video 3: the video stream starts at 0 and needs two 1024-sample blocks of silence to sync.
UPDATE:
Other things I have tried:
Increasing the duration of the first video frame to account for the offset: Produces no effect.
I wrote another version of your program to handle the NV12 format correctly (yours was not working):
EncodeWithSourceReaderSinkWriter
I use Blender as my video editing tool. Here are my results with Tuning_against_a_window.mov:
From bottom to top:
Original file
Encoded file
I changed the original file by setting the "elst" atoms' number-of-entries value to 0 (I used the Visual Studio hex editor)
Like Roman R. said, the Media Foundation mp4 source doesn't use the "edts/elst" atoms. But Blender and your video editing tools do. The "tmcd" track is also ignored by the mp4 source.
"edts/elst" :
Edits Atom ( 'edts' )
Edit lists can be used for hint tracks...
MPEG-4 File Source
The MPEG-4 file source silently ignores hint tracks.
So in fact, the encoding is good. I think there is no audio stream sync offset compared to the real audio/video data. For example, you could add "edts/elst" atoms to the encoded file to get the same result.
PS: on the encoded file, I added "edts/elst" for both the audio and video tracks. I also increased the size of the trak atoms and the moov atom. I can confirm that Blender shows the same waveform for both the original and the encoded file.
EDIT
I tried to understand the relation between the mvhd/tkhd/mdhd/elst atoms in the 3 video samples. (Yes, I know, I should read the spec. But I'm lazy...)
You can use an mp4 explorer tool to get the atoms' values, or use the mp4 parser from my H264Dxva2Decoder project:
H264Dxva2Decoder
Tuning_against_a_window.mov
elst (media time) from tkhd video : 20689
elst (media time) from tkhd audio : 1483
GREEN_SCREEN_ANIMALS__ALPACA.mp4
elst (media time) from tkhd video : 2002
elst (media time) from tkhd audio : 1024
GOPR6239_1.mov
elst (media time) from tkhd video : 0
elst (media time) from tkhd audio : 0
As you can see, with GOPR6239_1.mov the media time from elst is 0. That's why there is no video/audio sync problem with this file.
For Tuning_against_a_window.mov and GREEN_SCREEN_ANIMALS__ALPACA.mp4, I tried to calculate the video/audio offset.
I modified my project to take this into account:
EncodeWithSourceReaderSinkWriter
So far, I haven't found a generic calculation that works for all files.
I just found the video/audio offset needed to encode both of these files correctly.
For Tuning_against_a_window.mov, I begin encoding after (movie time - video/audio mdhd time).
For GREEN_SCREEN_ANIMALS__ALPACA.mp4, I begin encoding after the video/audio elst media time.
It's OK, but I still need to find the single correct calculation for all files.
So you have 2 options :
encode the file and add elst atom
encode the file using right offset calculation
It depends on your needs:
The first option lets you keep the original file, but you have to add the elst atom.
With the second option you have to read the atoms from the file before encoding, and the encoded file will lose a few of the original frames.
If you choose the first option, I will explain how to add the elst atom.
PS: I'm interested in this question, because in my H264Dxva2Decoder project the edts/elst atom is on my todo list.
I parse it, but I don't use it yet...
PS2: this link looks interesting:
Audio Priming - Handling Encoder Delay in AAC
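If the elst media time for the audio track is expressed in the audio track's timescale (i.e. in samples, as it appears to be for these files), the padding block count could be estimated with a simple ceiling division. This is my own guess at the relationship, not a confirmed formula; as noted above, no generic calculation has been found for all files:

```cpp
#include <cstdint>

// Estimate how many 1024-sample AAC priming blocks cover a given audio
// edit-list media time, assuming that media time is already expressed
// in samples (the audio track timescale).
int64_t primingBlocks(int64_t elstMediaTimeSamples)
{
    const int64_t kBlock = 1024;                        // AAC frame size in samples
    return (elstMediaTimeSamples + kBlock - 1) / kBlock; // round up
}
```

This matches the GREEN_SCREEN_ANIMALS__ALPACA.mp4 case (elst audio media time 1024 -> one block) but would need verification against the other samples before relying on it.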
I'm having trouble with SDL_Mixer (my lack of experience). Chunks and Music play just fine (using Mix_PlayChannel and Mix_PlayMusic), and playing two different chunks simultaneously isn't an issue.
My problem is that I would like to play some chunk1 and then play a second iteration of chunk1 overlapping the first. I am trying to play a single chunk in rapid succession, but it instead plays the sound repeatedly at a much longer interval (not as quickly as I want). I've tested console output, and my method of playing/looping is not at fault, since I can see console messages printing, looped at the right speed.
I have an array of Chunks that I periodically load during initialization, using Mix_LoadWAV();
Mix_Chunk *sounds[32];
I also have a function reserved for playing these chunks:
void PlaySound(int snd_id)
{
    if (snd_id >= 0 && snd_id < 32)
    {
        if (Mix_PlayChannel(-1, sounds[snd_id], 0) == -1)
        {
            printf("Mix_PlayChannel: %s\n", Mix_GetError());
        }
    }
}
Attempting to play a single sound several times in rapid succession (say, a 100 ms delay, 10 plays per second), I get the sound playing at a set, slower interval (some 500 ms or so, 2 per second) despite the function being called at the faster rate.
I already called Mix_AllocateChannels(16); to ensure I have allocated channels (let me know if I'm using that incorrectly), and still a single chunk from the array refuses to play at a certain rate.
Any ideas/help is appreciated, as well as critique on how I posted this question.
As said in the documentation of SDL_mixer (https://www.libsdl.org/projects/SDL_mixer/docs/SDL_mixer_28.html):
"... -1 for the first free unreserved channel."
So if your chunk is longer than 1.6 seconds (16 channels * 100 ms), you'll run out of channels after 1.6 seconds, and you won't be able to play new chunks until one of the channels finishes playing.
So there are basically 2 solutions :
Allocate more channels (more than ChunkDuration (in sec) / Delay (in sec)).
Stop a channel so you can reuse it. To do this properly, don't pass -1 as the channel; instead use a variable that you increment each time you play a chunk (and don't forget to reset it to 0 when it equals your number of channels).
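Both options boil down to simple arithmetic; here is a sketch in plain C++ with the actual SDL_mixer call left as a comment (`channelsNeeded` and `nextChannel` are my own names, not SDL_mixer API):

```cpp
#include <cmath>

// Option 1: channels needed so chunks fired every delaySec seconds
// never exhaust the pool while earlier copies are still playing.
int channelsNeeded(double chunkSec, double delaySec)
{
    return static_cast<int>(std::ceil(chunkSec / delaySec));
}

// Option 2: rotate through channels explicitly instead of passing -1,
// so a new play takes over the oldest channel rather than being dropped.
int nextChannel(int& counter, int numChannels)
{
    int ch = counter;                      // channel to use for this play
    counter = (counter + 1) % numChannels; // wrap back to 0 at the end
    return ch;                             // e.g. Mix_PlayChannel(ch, chunk, 0);
}
```

With option 2, playing on a channel that is still busy cuts off the oldest copy of the sound, which is usually the desired behavior for rapid-fire effects.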
I am reading a .wav file in C and then I am trying to play the audio file using some of the QT functions. Here is how I read the file:
FILE *fhandle=fopen("myAudioFile.wav","rb");
fread(ChunkID,1,4,fhandle);
fread(&ChunkSize,4,1,fhandle);
fread(Format,1,4,fhandle);
fread(Subchunk1ID,1,4,fhandle);
fread(&Subchunk1Size,4,1,fhandle);
fread(&AudioFormat,2,1,fhandle);
fread(&NumChannels,2,1,fhandle);
fread(&SampleRate,4,1,fhandle);
fread(&ByteRate,4,1,fhandle);
fread(&BlockAlign,2,1,fhandle);
fread(&BitsPerSample,2,1,fhandle);
fread(&Subchunk2ID,1,4,fhandle);
fread(&Subchunk2Size,4,1,fhandle);
Data=new quint16 [Subchunk2Size/(BitsPerSample/8)];
fread(Data,BitsPerSample/8,Subchunk2Size/(BitsPerSample/8),fhandle);
fclose(fhandle);
So my audio file is inside Data, and each element of Data is an unsigned 16-bit integer.
To play the sound, I divide each 16-bit unsigned integer into two characters, and then every 3 ms (using a timer) I send 256 characters to the audio card.
Assuming myData is a character array of 256 characters, I do this (every 3 ms) to play the sound:
m_output->write(myData, 256);
Also m_output is defined as:
m_output = m_audioOutput->start();
and m_audioOutput is defined as:
m_audioOutput = new QAudioOutput(m_Outputdevice, m_format, this);
And the audio format is set correctly as:
m_format.setFrequency(44100);
m_format.setChannels(2);
m_format.setSampleSize(16);
m_format.setSampleType(QAudioFormat::UnSignedInt );
m_format.setByteOrder(QAudioFormat::LittleEndian);
m_format.setCodec("audio/pcm");
However, when I run the code I hear noise that is very different from the real audio file.
Is there anything I am doing wrong?
Thanks,
TJ
I think the problem is that you are using QTimer. QTimer is absolutely not going to allow you to run code every three milliseconds exactly, regardless of the platform you're using. And if you're off by just one sample, your audio is going to sound horrible. According to the QTimer docs:
...they are not guaranteed to time out at the exact value specified. In
many situations, they may time out late by a period of time that
depends on the accuracy of the system timers.
and
...the accuracy of the timer will not equal [1 ms] in many real-world situations.
As much as I love Qt, I wouldn't try to use it for signal processing. I would use another framework such as JUCE.
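As a side check on the timing budget (my own arithmetic, an aside rather than part of the answer above): at 44100 Hz, stereo, 16-bit, the stream consumes 176400 bytes per second, so even a perfectly accurate 3 ms tick would need to deliver roughly 529 bytes, not 256, to keep up.

```cpp
#include <cstdint>

// Bytes of PCM the audio device consumes per timer tick, for a given
// format and tick interval in milliseconds.
int64_t bytesPerTick(int sampleRate, int channels, int bytesPerSample,
                     int tickMs)
{
    return static_cast<int64_t>(sampleRate) * channels * bytesPerSample
           * tickMs / 1000;
}
```

This is why pacing audio from a wall-clock timer is fragile: the write size has to be derived from the format and the actual elapsed time, not chosen independently.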