How to convert a WAV file to RAW Audio in C++?

I have searched for an answer to this question for several hours. I have already removed the 44-byte header and have transferred the data using an ofstream. The input stereo WAV file is 16-bit PCM at a 44.1 kHz sample rate.
ifstream ssn(f_infile, ios::binary);
ssn.seekg(0, ssn.end);
int szm = ssn.tellg();              // total file size in bytes
ssn.seekg(0, ssn.beg);
char* buff = new char[szm];         // allocate only after the size is known
ssn.read(buff, szm);
ssn.close();
ofstream sso(f_outfile, ios::binary);
for (int i = 0; i < szm; i++)
{
    if (i >= 44)                    // skip the 44-byte header (bytes 0..43)
    {
        word_w(sso, buff[i], 1);        // write the sample byte...
        word_w(sso, 0 - (buff[i]), 1);  // ...followed by its negation
    }
}
sso.close();
delete[] buff;
I got the size of the file and read the data into a buffer. I know a RAW data file is just the bare binary sample data, and I thought this simple technique would work. However, I got mixed results.
This first one worked like a charm. It was the original sample I wanted to convert. It is a side-by-side comparison of the original WAV file [top] and the raw data [bottom] imported into Audacity at 44.1 kHz.
This next one distorted the right channel for some reason and doubled the length of the file. It is also a stereo WAV file, 16-bit PCM, 44.1 kHz sample rate.
This third one is completely distorted, and the length has increased even more than the previous one.
Why did it work on the first file, but not the other ones, when they are all in the exact same file format (16-bit, 44.1 kHz sample rate, 2 channels)?
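For comparison, here is a sketch of a plain byte-for-byte copy of everything after the header. It assumes the canonical 44-byte header (which real WAV files are not guaranteed to have) and writes each byte exactly once, rather than the byte plus its negation as in the loop above:
#include <fstream>
#include <iterator>
#include <vector>

// Sketch: copy the PCM bytes after a fixed 44-byte header, unchanged.
void StripWavHeader(const char *f_infile, const char *f_outfile)
{
    std::ifstream in(f_infile, std::ios::binary);
    std::vector<char> data((std::istreambuf_iterator<char>(in)),
                           std::istreambuf_iterator<char>());

    std::ofstream out(f_outfile, std::ios::binary);
    if (data.size() > 44)
        out.write(data.data() + 44, static_cast<std::streamsize>(data.size() - 44));
}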

Related

MPEG 2 and 2.5 - problems calculating frame sizes in bytes

I have a console program which I have used for years, for (among other things) displaying info about certain audio-file formats, including mp3. I used data from the mpeghdr site to calculate the frame sizes, in order to further calculate playing time for the tracks. The equation that I got from mpeghdr was:
// Read the BitRate, SampleRate and Padding of the frame header.
// For Layer I files use this formula:
//
// FrameLengthInBytes = (12 * BitRate / SampleRate + Padding) * 4
//
// For Layer II & III files use this formula:
//
// FrameLengthInBytes = 144 * BitRate / SampleRate + Padding
This works well for most mp3 files, but there has always been a small subset for which this equation failed. Recently, I've been looking at a set of very small mp3 files and have found that for these files this formula fails much more often, so I'm trying to finally nail down what is going on. All of these mp3 files were generated using Lame V3.100, with default settings, on Windows 7 64-bit.
In all cases, I can successfully find the first frame header, but when I use the above formula to calculate the offset to the next frame header, it is sometimes not correct.
As an example, I have a file 'wolf howl.mp3'; analysis tools such as MPEGAudioInfo show the frame size as 288 bytes. When I run my program, though, it shows the length of the first frame as 576 bytes (2 * 288). When I look at the mp3 file in a hex editor, with the first frame at 0x154, I can see that the next frame is indeed at 0x154 + 288 bytes, yet the formula above does in fact give 576 bytes...
File info:
mpegV2.5, layer III
frame: bitrate=32, sample_rate=8000, pad=0, bytes=576
mtemp->frame_length_in_bytes =
(144 * (mtemp->bitrate * 1000) / mtemp->sample_rate) + mtemp->padding_bit;
which equals 576
I've looked at numerous other references, and they all show this equation...
At first I thought it was an issue with MPEG 2.5, which is an unofficial standard, but I have seen this with MPEG 2 files as well. It only happens with small files, though.
Does anyone have any insights on what I am missing here??
//**************************************
Later notes:
I thought maybe audio format would be relevant to this issue, so I dumped channel_mode and mode_extension for each of my test files (3 calculate properly, 2 don't). Sadly, all of them are cmode=3, mode_ext=0
(i.e., last byte of the header is 0xC4)... so that doesn't help...
Okay, I found the answer to this question... it was in the MPEGAudioInfo program on the CodeProject site. Here is the vital key:
//*************************************************************************************
// This reference data is from MPEGAudioInfo app
// Samples per Frame / 8
static const u32 m_dwCoefficients[2][3] =
{
    { // MPEG 1
        12,     // Layer1 (must be multiplied with 4, because of slot size)
        144,    // Layer2
        144     // Layer3
    },
    { // MPEG 2, 2.5
        12,     // Layer1 (must be multiplied with 4, because of slot size)
        144,    // Layer2
        72      // Layer3
    }
};
It is unfortunate that none of the reference pages mention this detail!
My program now successfully calculates frame sizes for all of my mp3 files, including the small ones.
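Putting that table to work, here is a sketch of the corrected calculation (the function name and parameters are illustrative; bitrate is in bits per second and padding has already been read from the header):
// Sketch: frame length in bytes, selecting the coefficient by MPEG
// version and layer as in the table above.
static const int kCoefficients[2][3] =
{
    { 12, 144, 144 },   // MPEG 1:      Layer I, Layer II, Layer III
    { 12, 144, 72 }     // MPEG 2, 2.5: Layer I, Layer II, Layer III
};

int FrameLengthInBytes(bool isMpeg2or25, int layer /* 1..3 */,
                       int bitrate, int sampleRate, int padding)
{
    int length = kCoefficients[isMpeg2or25 ? 1 : 0][layer - 1] * bitrate / sampleRate + padding;
    if (layer == 1)
        length *= 4;    // Layer I counts in 4-byte slots
    return length;
}
// e.g. MPEG 2.5 Layer III, 32000 bit/s, 8000 Hz, no padding -> 288 bytes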
I had the same problem. Some documents I've read don't include the division by 2 in the frame-size formula for MPEG 2.5 Layer III, but some source code I've encountered does.
It's hard to find any authoritative proof.
I have nothing better than this link:
https://link.springer.com/chapter/10.1007/978-1-4615-0327-9_12
(it would be better to share that link as a comment, but I don't have enough reputation)

Media Foundation video re-encoding producing audio stream sync offset

I'm attempting to write a simple Windows Media Foundation command line tool that uses IMFSourceReader and IMFSinkWriter to load a video, read the video and audio as uncompressed streams, and re-encode them to H.264/AAC with some specific hard-coded settings.
The simple program Gist is here
sample video 1
sample video 2
sample video 3
(Note: the videos I've been testing with are all stereo, with a 48000 Hz sample rate)
The program works; however, in some cases, when comparing the newly output video to the original in an editing program, I see that the copied video streams match, but the audio stream of the copy is prefixed with some amount of silence and the audio is offset, which is unacceptable in my situation.
audio samples:
original - |[audio1] [audio2] [audio3] [audio4] [audio5] ... etc
copy - |[silence] [silence] [silence] [audio1] [audio2] [audio3] ... etc
In cases like this the first video frames coming in have a non-zero timestamp but the first audio frames do have a 0 timestamp.
I would like to produce a copied video whose first video and audio frames start at 0, so I first attempted to subtract that initial timestamp (videoOffset) from all subsequent video frames, which produced the video I wanted but resulted in this situation with the audio:
original - |[audio1] [audio2] [audio3] [audio4] [audio5] ... etc
copy - |[audio4] [audio5] [audio6] [audio7] [audio8] ... etc
The audio track is now shifted in the other direction by a small amount and still doesn't align. This can also happen sometimes when a video stream does have a starting timestamp of 0, yet WMF still cuts off some audio samples at the beginning anyway (see sample video 3)!
I've been able to fix this sync alignment and offset the video stream to start at 0 with the following code inserted at the point of passing the audio sample data to the IMFSinkWriter:
//inside read sample while loop
...
// LONGLONG llDuration has the currently read sample duration
// DWORD audioOffset has the global audio offset, starts as 0
// LONGLONG audioFrameTimeStamp has the currently read sample timestamp

//add some random amount of silence in intervals of 1024 samples
static bool runOnce{ false };
if (!runOnce)
{
    size_t numberOfSilenceBlocks = 1; //how to derive how many I need!? It's arbitrary
    size_t samples = 1024 * numberOfSilenceBlocks;
    audioOffset = samples * 10000000 / audioSamplesPerSecond;
    std::vector<uint8_t> silence(samples * audioChannels * bytesPerSample, 0);
    WriteAudioBuffer(silence.data(), silence.size(), audioFrameTimeStamp, audioOffset);
    runOnce = true;
}

LONGLONG audioTime = audioFrameTimeStamp + audioOffset;
WriteAudioBuffer(dataPtr, dataSize, audioTime, llDuration);
Oddly, this creates an output video file that matches the original.
original - |[audio1] [audio2] [audio3] [audio4] [audio5] ... etc
copy - |[audio1] [audio2] [audio3] [audio4] [audio5] ... etc
The solution was to insert extra silence in block sizes of 1024 samples at the beginning of the audio stream. It doesn't matter what audio chunk sizes IMFSourceReader provides; the padding is in multiples of 1024.
My problem is that there seems to be no detectable reason for the silence offset. Why do I need it? How do I know how much I need? I stumbled across the 1024-sample silence block solution after days of fighting this problem.
Some videos seem to only need 1 padding block, some need 2 or more, and some need no extra padding at all!
My questions here are:
Does anyone know why this is happening?
Am I using Media Foundation incorrectly in this situation, causing this?
If I am correct, how can I use the video metadata to determine whether I need to pad an audio stream and how many 1024-sample blocks of silence need to be in the pad?
EDIT:
For the sample videos above:
sample video 1: the video stream starts at 0 and needs no extra blocks; passing the original data through works fine.
sample video 2: the video stream starts at 834166 (hns) and needs one 1024-sample block of silence to sync.
sample video 3: the video stream starts at 0 and needs two 1024-sample blocks of silence to sync.
UPDATE:
Other things I have tried:
Increasing the duration of the first video frame to account for the offset: produces no effect.
I wrote another version of your program to handle the NV12 format correctly (yours was not working):
EncodeWithSourceReaderSinkWriter
I use Blender as my video editing tool. Here are my results with Tuning_against_a_window.mov:
From the bottom to the top:
Original file
Encoded file
I changed the original file by setting the "elst" atoms' number of entries to 0 (I used the Visual Studio hex editor)
Like Roman R. said, the Media Foundation mp4 source doesn't use the "edts/elst" atoms. But Blender and your video editing tools do. Also, the "tmcd" track is ignored by the mp4 source.
"edts/elst" :
Edits Atom ( 'edts' )
Edit lists can be used for hint tracks...
MPEG-4 File Source
The MPEG-4 file source silently ignores hint tracks.
So in fact, the encoding is good. I think there is no audio stream sync offset compared to the real audio/video data. For example, you can add "edts/elst" to the encoded file to get the same result.
PS: on the encoded file, I added "edts/elst" for both the audio and video tracks. I also increased the size of the trak atoms and the moov atom. I confirm that Blender shows the same waveform for both the original and the encoded file.
EDIT
I tried to understand the relation between the mvhd/tkhd/mdhd/elst atoms in the 3 video samples. (Yes, I know, I should read the spec, but I'm lazy...)
You can use an mp4 explorer tool to get the atoms' values, or use the mp4 parser from my H264Dxva2Decoder project:
H264Dxva2Decoder
Tuning_against_a_window.mov
elst (media time) from tkhd video : 20689
elst (media time) from tkhd audio : 1483
GREEN_SCREEN_ANIMALS__ALPACA.mp4
elst (media time) from tkhd video : 2002
elst (media time) from tkhd audio : 1024
GOPR6239_1.mov
elst (media time) from tkhd video : 0
elst (media time) from tkhd audio : 0
As you can see, with GOPR6239_1.mov, the media time from elst is 0. That's why there is no video/audio sync problem with this file.
For Tuning_against_a_window.mov and GREEN_SCREEN_ANIMALS__ALPACA.mp4, I tried to calculate the video/audio offset.
I modified my project to take this into account:
EncodeWithSourceReaderSinkWriter
For now, I haven't found a generic calculation for all files.
I've just found the video/audio offset needed to encode both files correctly.
For Tuning_against_a_window.mov, I begin encoding after (movie time - video/audio mdhd time).
For GREEN_SCREEN_ANIMALS__ALPACA.mp4, I begin encoding after the video/audio elst media time.
It's OK, but I still need to find the right single calculation that works for all files.
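As an illustration of the elst-based approach, here is a sketch with names of my own; elstMediaTime and mediaTimescale are assumed to come from your own atom parser, since the elst media time is expressed in the track's mdhd timescale:
#include <cstdint>

// Sketch: convert an elst media time (in mdhd timescale units) into the
// 100-ns units used by IMFSinkWriter timestamps.
int64_t MediaTimeToHns(int64_t elstMediaTime, uint32_t mediaTimescale)
{
    return elstMediaTime * 10000000LL / static_cast<int64_t>(mediaTimescale);
}
For example, if the audio mdhd timescale equals the 48000 Hz sample rate, an elst media time of 1024 gives about 213333 hns (~21.3 ms), i.e. exactly one 1024-sample block.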
So you have two options:
encode the file and add the elst atom
encode the file using the right offset calculation
It depends on your needs:
The first option lets you keep the original file, but you have to add the elst atom.
With the second option you have to read the atoms from the file before encoding, and the encoded file will lose a few of the original frames.
If you choose the first option, I will explain how I add the elst atom.
PS: I'm interested in this question, because in my H264Dxva2Decoder project the edts/elst atom is on my todo list.
I parse it, but I don't use it...
PS2: this link sounds interesting:
Audio Priming - Handling Encoder Delay in AAC

webRTC : How to apply webRTC's VAD on audio through samples obtained from WAV file

Currently, I am parsing WAV files and storing the samples in std::vector<int16_t> sample. Now, I want to apply VAD (Voice Activity Detection) to this data to find the "regions" of voice, and more specifically the start and end of words.
The parsed WAV files are 16 kHz, 16-bit PCM, mono. My code is in C++.
I have searched a lot about this but could not find proper documentation for webRTC's VAD functions.
From what I have found, the function I need to use is WebRtcVad_Process(). Its prototype is written below:
int WebRtcVad_Process(VadInst* handle, int fs, const int16_t* audio_frame,
size_t frame_length)
From what I found here : https://stackoverflow.com/a/36826564/6487831
Each frame of audio that you send to the VAD must be 10, 20 or 30 milliseconds long.
Here's an outline of an example that assumes audio_frame is 10 ms (320 bytes) of audio at 16000 Hz:
int is_voiced = WebRtcVad_Process(vad, 16000, audio_frame, 160);
It makes sense:
1 sample = 2 bytes = 16 bits
Sample rate = 16000 samples/sec = 16 samples/ms
For 10 ms, number of samples = 160
So, based on that, I have implemented this:
const int16_t *temp = sample.data();
for (int i = 0, ms = 0; i < sample.size(); i += 160, ms++)
{
    int isActive = WebRtcVad_Process(vad, 16000, temp, 160); //10 ms window
    std::cout << ms << " ms : " << isActive << std::endl;
    temp = temp + 160; // processed 160 samples
}
Now, I am not really sure if this is correct. I am also unsure whether this gives me the correct output or not.
So,
Is it possible to use the samples parsed directly from the WAV files, or do they need some processing?
Am I looking at the correct function to do the job?
How to use the function to properly perform VAD on the audio stream?
Is it possible to distinguish between the spoken words?
What is the best way to check if the output I am getting is correct?
If not, what is the best way to do this task?
I'll start by saying that no, I don't think you will be able to segment an utterance into individual words using VAD. From the article on speech segmentation in Wikipedia:
One might expect that the inter-word spaces used by many written languages like English or Spanish would correspond to pauses in their spoken version, but that is true only in very slow speech, when the speaker deliberately inserts those pauses. In normal speech, one typically finds many consecutive words being said with no pauses between them, and often the final sounds of one word blend smoothly or fuse with the initial sounds of the next word.
That said, I'll try to answer your other questions.
You need to decode the WAV file, which could be compressed, into raw PCM audio data before running VAD. See e.g. Reading and processing WAV file data in C/C++. Alternatively, you could use something like sox to convert the WAV file to raw audio before running your code. This command will convert a WAV file of any format to 16 kHz, 16-bit, mono PCM in the format that WebRTC VAD expects:
sox my_file.wav -r 16000 -b 16 -c 1 -e signed-integer -B my_file.raw
It looks like you are using the right function. To be more specific, you should be doing this:
#include "webrtc/common_audio/vad/include/webrtc_vad.h"
// ...
VadInst *vad;
WebRtcVad_Create(&vad);
WebRtcVad_Init(vad);
const int16_t * temp = sample.data();
for(int i = 0, ms = 0; i < sample.size(); i += 160, ms += 10)
{
int isActive = WebRtcVad_Process(vad, 16000, temp, 160); //10 ms window
std::cout << ms << " ms : " << isActive << std::endl;
temp = temp + 160; // processed 160 samples (320 bytes)
}
To see if it's working, you can run it on known files and check that you get the results you expect. For example, you could start by processing silence and confirming that you never (or rarely; this algorithm is not perfect) see a voiced result come back from WebRtcVad_Process. Then you could try a file that is all silence except for one short utterance in the middle, etc. If you want to compare against an existing test, the py-webrtcvad module has a unit test that does this; see the test_process_file function.
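For instance, here is a minimal sketch of that silence check, using the same create/init/process calls as above (the helper name is mine; with all-zero input you would expect few or no voiced frames):
#include <cstdint>
#include <vector>
#include "webrtc/common_audio/vad/include/webrtc_vad.h"

// Sketch: run the VAD over one second of silence and count voiced frames.
int CountVoicedFramesInSilence()
{
    VadInst *vad = nullptr;
    WebRtcVad_Create(&vad);
    WebRtcVad_Init(vad);

    std::vector<int16_t> silence(16000, 0);                       // 1 s of silence at 16 kHz
    int voiced = 0;
    for (size_t i = 0; i + 160 <= silence.size(); i += 160)       // 10 ms frames
        voiced += (WebRtcVad_Process(vad, 16000, silence.data() + i, 160) == 1);

    WebRtcVad_Free(vad);
    return voiced;                                                // ideally 0
}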
To do word-level segmentation, you will probably need to find a speech recognition library that does it or gives you access to the information that you need to do it. E.g. this thread on the Kaldi mailing list seems to talk about how to segment by words.

Read data from wav file before applying FFT

It's the first time I'm working with WAV files.
The problem is that I don't exactly understand how to properly read the stored data. My code for reading:
uint8_t* buffer = new uint8_t[BUFFER_SIZE];
std::cout << "Buffering data... " << std::endl;
while ((bytesRead = fread(buffer, sizeof buffer[0], BUFFER_SIZE / (sizeof buffer[0]), wavFile)) > 0)
{
    // do something with the buffer data
}
The sample file header tells me that the data is PCM (1 channel) with 8 bits per sample and a sampling rate of 11025 Hz.
The output data gives me (after updates) values from 0 to 255, so the values are proper PCM values for 8-bit modulation. But any idea what BUFFER_SIZE would be preferable to correctly read those values?
WAV file I'm using: http://www.wavsource.com/movies/2001.htm (daisy.wav)
TXT output: https://paste.ee/p/pXGvm
You've got two common situations. The first is where the WAV file represents a short audio sample and you want to read the whole thing into memory and manipulate it. In that case BUFFER_SIZE is simply variable: basically you seek to the end of the file to get its size, then load it, as in the sketch below.
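Here is a sketch of that first case using the same stdio calls as your read loop (error handling omitted):
#include <cstdint>
#include <cstdio>
#include <vector>

// Sketch: load a whole (small) WAV file into memory in one go.
std::vector<uint8_t> ReadWholeFile(std::FILE *wavFile)
{
    std::fseek(wavFile, 0, SEEK_END);
    long fileSize = std::ftell(wavFile);              // size in bytes
    std::fseek(wavFile, 0, SEEK_SET);

    std::vector<uint8_t> data(static_cast<size_t>(fileSize));
    std::fread(data.data(), 1, data.size(), wavFile);
    return data;
}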
The second common situation is that the WAV file represents a fairly long audio recording, and you want to process it piecewise, often by writing to an output device in real time. So BUFFER_SIZE needs to be large enough to hold a bite-sized chunk, but not so large that you require excessive memory. Often the size of a "frame" of audio is given by the output device itself; it expects, say, 25 frames per second to synchronise with video, or something similar. You generally need a double buffer to ensure that you can always meet the demand for more samples when the DAC (digital-to-analogue converter) runs out. Then, on giving out a sample, you load the next chunk of data from disk. Sometimes there isn't a "right" value for the chunk size; you've just got to go with something fairly sensible that balances memory footprint against the number of calls.
If you need to do an FFT, it's normal to use a buffer size that is a power of two, to make the fast transform simpler. The size you need depends on the lowest frequency you are interested in.
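For instance, a sketch of one common way to pick the size: choose a power of two whose bin spacing (sample rate / N) is at least as fine as the lowest frequency of interest (the 20 Hz target below is just an illustrative choice):
#include <cstddef>

// Round n up to the next power of two (FFT-friendly size).
static size_t NextPowerOfTwo(size_t n)
{
    size_t p = 1;
    while (p < n)
        p <<= 1;
    return p;
}

// Smallest power-of-two window whose resolution (sampleRate / N)
// is no coarser than lowestHz.
static size_t FftSizeFor(double sampleRate, double lowestHz)
{
    return NextPowerOfTwo(static_cast<size_t>(sampleRate / lowestHz + 0.5));
}
// e.g. FftSizeFor(11025.0, 20.0) == 1024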

Noise after changing volume in QAudioOutput

I am trying to play sound using QAudioOutput and a WAV in "raw format". After the timer's timeout (every 50 ms) I do the following:
QByteArray TempSBuffer;
short int *hi;
// Check if wav has reached its end and reset its position to the beginning if yes
if ((m_timerStepNum + 1) * m_audioOutput->periodSize() >= m_soundBuffer.size()) {
    m_timerStepNum = 0;
}
// 2. Write the buffer data for the next timecycle into a temporary QByteArray TempSBuffer
TempSBuffer = m_soundBuffer.mid(m_timerStepNum * m_audioOutput->periodSize(), m_audioOutput->periodSize());
// 3. Apply the volume by scaling the samples as 16-bit signed integers
hi = (short int *)TempSBuffer.data();
for (int i = 0; i < m_audioOutput->periodSize() / 2; i++) { hi[i] *= m_audioOutput->volume(); }
// 4. Play the resulting buffer
m_ioDevice->write(TempSBuffer, m_audioOutput->periodSize());
m_timerStepNum++;
Everything plays OK, but when I try to change the volume in QAudioOutput to, say, 0.2 (with my master volume at 100%), I get horrible noise. I should admit that this happens only for one of my WAV files, which has this format:
bitsPerSample: 8
channels: 1
frequency: 16000
Other files play OK, as I said. Format examples of the WAVs that play correctly:
bitsPerSample: 16
channels: 1
frequency: 22050
bitsPerSample: 16
channels: 2
frequency: 22050
bitsPerSample: 16
channels: 2
frequency: 22050
Well, according to The ABCs of PCM (Uncompressed) Digital Audio, in its final notes:
For some reason, WAV files don't support signed 8-bit format, so when reading and writing WAV files, be aware that 8-bits means unsigned, but in virtually all other cases it's safe to assume integers are signed.
For now, I solved my problem by converting my raw WAV to 16-bit format.
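For reference, here is a sketch of scaling unsigned 8-bit PCM in place without converting it first, by re-centering around the 128 midpoint instead of reinterpreting the bytes as signed 16-bit words (buf, sampleCount and the function name are illustrative):
#include <cstdint>

// Unsigned 8-bit PCM is centred on 128, so subtract the midpoint,
// scale, add it back, and clamp to guard against overflow.
void ApplyVolumeU8(uint8_t *buf, int sampleCount, double volume)
{
    for (int i = 0; i < sampleCount; ++i) {
        int centred = static_cast<int>(buf[i]) - 128;             // -128..127
        int scaled = static_cast<int>(centred * volume) + 128;
        if (scaled < 0) scaled = 0;
        if (scaled > 255) scaled = 255;
        buf[i] = static_cast<uint8_t>(scaled);
    }
}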