Recording and playing MP3 audio

I have followed the various threads on how to record and play MP3, but I still always get this exception when trying to play the MP3 files I have recorded:
mp3filereader does not support sample rate changes
So here is my recording code:
waveInStream = new WaveIn();
waveInStream.WaveFormat = new WaveFormat(8000, 16, 1);
writer = new WaveFileWriter(outputfileName, waveInStream.WaveFormat);
waveInStream.DataAvailable += new EventHandler<WaveInEventArgs>(waveInStream_DataAvailable);
waveInStream.StartRecording();
The waveInStream_DataAvailable handler is:
void waveInStream_DataAvailable(object sender, WaveInEventArgs e)
{
    writer.Write(e.Buffer, 0, e.BytesRecorded);
}
At this point the recorded file should be uncompressed PCM, right?
Do I need to transcode it to MP3 before being able to play it?
My playing code:
WaveChannel32 inputStream;
WaveStream mp3Reader = new Mp3FileReader(fileName);
var pStream = NAudio.Wave.WaveFormatConversionStream.CreatePcmStream(mp3Reader);
inputStream = new WaveChannel32(mp3Reader);
volumeStream = inputStream;
return volumeStream;
The exception occurs every time at the call of Mp3FileReader and says something like:
Got a frame at sample rate 44100, in a MP3 sample rate 32000
Mp3FileReader does not support sample rate change

Yes, you have saved a WAV file, not an MP3 file. Either convert to MP3 using something like LAME.exe, or just use the WaveFileReader instead of the Mp3FileReader. MP3 doesn't really support low sample rates like 8kHz in any case, which is typically only used for telephony.

ffmpeg audio frame from directshow sampleCB imediasample

I use the ISampleGrabber SampleCB callback to get audio samples. I can get the buffer and its length from the IMediaSample, and I use avcodec_fill_audio_frame(frame, ost->enc->channels, ost->enc->sample_fmt, (uint8_t *)buffer, length, 0) to make an AVFrame, but this frame does not produce any audio in my muxed file. I think the length is much smaller than frame_size.
Can anyone help me, or give me an example if possible?
Thank you.
This is my SampleCB code:
HRESULT AudioSampleGrabberCallBack::SampleCB(double Time, IMediaSample *pSample)
{
    BYTE *pBuffer;
    pSample->GetPointer(&pBuffer);
    long BufferLen = pSample->GetActualDataLength();
    muxer->PutAudioFrame(pBuffer, BufferLen);
    return S_OK;
}
And this is the Sample Grabber pin media type:
AM_MEDIA_TYPE pmt2;
ZeroMemory(&pmt2, sizeof(AM_MEDIA_TYPE));
pmt2.majortype = MEDIATYPE_Audio;
pmt2.subtype = FOURCCMap(0x1602);
pmt2.formattype = FORMAT_WaveFormatEx;
hr = pSampleGrabber_audio->SetMediaType(&pmt2);
After that I use the FFmpeg muxing example to process frames, and I think I only need to change the signal-generating part of the code:
AVFrame *Muxing::get_audio_frame(OutputStream *ost, BYTE *buffer, long length)
{
    AVFrame *frame = ost->tmp_frame;
    int j, i, v;
    uint16_t *q = (uint16_t *)frame->data[0];
    int buffer_size = av_samples_get_buffer_size(NULL, ost->enc->channels,
                                                 ost->enc->frame_size,
                                                 ost->enc->sample_fmt, 0);
    // uint8_t *sample = (uint8_t *) av_malloc(buffer_size);
    av_samples_alloc(&frame->data[0], frame->linesize, ost->enc->channels,
                     ost->enc->frame_size, ost->enc->sample_fmt, 1);
    avcodec_fill_audio_frame(frame, ost->enc->channels, ost->enc->sample_fmt,
                             frame->data[0], buffer_size, 1);
    frame->pts = ost->next_pts;
    ost->next_pts += frame->nb_samples;
    return frame;
}
The code snippets suggest you are getting AAC data using the Sample Grabber and trying to write it into a file using FFmpeg's libavformat. This can work.
You initialize your Sample Grabber to receive audio data in WAVE_FORMAT_AAC_LATM format. This format is not widespread, so you should review your filter graph and make sure the upstream connection on the Sample Grabber is what you expect. There is a chance that some odd chain of filters pretends to produce AAC-LATM while the data is actually invalid (or never even reaches the grabber callback). So review the filter graph (see Loading a Graph From an External Process and Understanding Your DirectShow Filter Graph), then step through your callback with a debugger to make sure you receive the data and that it makes sense.
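To inspect the running graph from GraphEdit or GraphStudioNext, as described in Loading a Graph From an External Process, your application has to register its graph in the Running Object Table. A minimal sketch of the usual helper, assuming your graph-building code keeps the graph's IUnknown/IGraphBuilder pointer around (the function name AddGraphToRot is only illustrative):
#include <dshow.h>

// Registers the filter graph in the Running Object Table so that GraphEdit /
// GraphStudioNext ("Connect to remote graph") can inspect it from outside.
HRESULT AddGraphToRot(IUnknown *pUnkGraph, DWORD *pdwRegister)
{
    IRunningObjectTable *pROT = NULL;
    HRESULT hr = GetRunningObjectTable(0, &pROT);
    if (FAILED(hr))
        return hr;

    WCHAR wsz[128];
    swprintf_s(wsz, L"FilterGraph %08x pid %08x",
               (DWORD)(DWORD_PTR)pUnkGraph, GetCurrentProcessId());

    IMoniker *pMoniker = NULL;
    hr = CreateItemMoniker(L"!", wsz, &pMoniker);
    if (SUCCEEDED(hr))
    {
        // Keep the registration alive until IRunningObjectTable::Revoke is called.
        hr = pROT->Register(ROTFLAGS_REGISTRATIONKEEPSALIVE, pUnkGraph,
                            pMoniker, pdwRegister);
        pMoniker->Release();
    }
    pROT->Release();
    return hr;
}
Call it right after the graph is built and revoke the registration on shutdown; you can then attach GraphEdit to the live graph and verify what is actually connected upstream of the Sample Grabber.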
Next, you are expected to initialize the AVFormatContext and AVStream to indicate that you will be writing data in AAC (LATM) format. The provided code does not show that you are doing this correctly; the sample you are referring to uses the default codecs.
Related reading: Support LATM AAC in MP4 container
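As a rough sketch of that initialization, assuming the same FFmpeg API generation as the snippets above (where AVStream::codec is still used) and assuming the grabbed data really is AAC; the file name "output.ts", the sample rate and the channel count are placeholders and must match what the Sample Grabber actually delivers:
extern "C" {
#include <libavformat/avformat.h>
}

av_register_all();

AVFormatContext *oc = NULL;
avformat_alloc_output_context2(&oc, NULL, NULL, "output.ts");

// Declare an audio stream that will carry already-compressed AAC packets;
// no encoder is opened, because the Sample Grabber already delivers compressed data.
AVStream *ast = avformat_new_stream(oc, NULL);
ast->codec->codec_type  = AVMEDIA_TYPE_AUDIO;
ast->codec->codec_id    = AV_CODEC_ID_AAC;
ast->codec->sample_rate = 48000;   // placeholder: must match the grabbed format
ast->codec->channels    = 2;       // placeholder: must match the grabbed format
ast->time_base          = AVRational{ 1, ast->codec->sample_rate };
// If the packets carry no ADTS headers, the decoder-specific config
// (AudioSpecificConfig) generally has to be supplied via codec->extradata.

if (!(oc->oformat->flags & AVFMT_NOFILE))
    avio_open(&oc->pb, "output.ts", AVIO_FLAG_WRITE);
avformat_write_header(oc, NULL);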
Then you need to make sure that the incoming data and your FFmpeg output setup agree on whether the data does or does not have ADTS headers; the provided code does not shed any light on this.
Furthermore, I am afraid you might be preparing your audio data incorrectly. The sample in question generates raw audio data and applies an encoder to produce compressed content using avcodec_encode_audio2; a packet with compressed audio is then sent for writing with av_interleaved_write_frame. The way you attached your code snippets to the question makes me think you are doing it wrong. For starters, you still don't show the relevant code, which suggests you have trouble identifying which code is actually relevant. Then you are handling your AAC data as if it were raw PCM audio in the get_audio_frame snippet, whereas you should review the FFmpeg sample code keeping in mind that you already have compressed AAC data; the sample only reaches this point after returning from the avcodec_encode_audio2 call. This is where you are supposed to merge your code and the sample.
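In other words, the buffer coming out of SampleCB is already compressed, so it should be wrapped in an AVPacket and handed straight to the muxer rather than being pushed through get_audio_frame and avcodec_encode_audio2 again. A minimal sketch of what Muxing::PutAudioFrame could look like under that assumption (oc, audio_st, audio_pts and sample_rate stand for your own muxer context, audio stream, running sample counter and sample rate; 1024 is only the nominal AAC samples-per-frame value):
// Writes one already-compressed AAC buffer (as delivered by SampleCB) straight
// to the muxer; no avcodec_encode_audio2 call is involved for this data.
void Muxing::PutAudioFrame(BYTE *buffer, long length)
{
    AVPacket pkt;
    av_init_packet(&pkt);
    pkt.data = buffer;               // compressed AAC frame from the Sample Grabber
    pkt.size = (int)length;
    pkt.stream_index = audio_st->index;

    // Express the timestamp in the stream's time base; audio_pts counts
    // samples written so far.
    pkt.pts = pkt.dts = av_rescale_q(audio_pts,
                                     AVRational{ 1, sample_rate },
                                     audio_st->time_base);
    audio_pts += 1024;               // nominal number of samples per AAC frame

    av_interleaved_write_frame(oc, &pkt);
}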

ffmpeg c++ API encode mpegts with KLV data stream

I need to encode an mpegts video using the ffmpeg C++ API. The output video shall have two streams: the first one shall be of type AVMEDIA_TYPE_VIDEO; the second one shall be of type AVMEDIA_TYPE_DATA and shall contain a set of KLV data.
I have written my own KLV library to manage the KLV format.
However, I'm not able to create a new video "from scratch" by combining the two streams. Following the implementation in FFMPEG C api h.264 encoding / MPEG2 ts streaming problems, I can successfully encode an mpegts video with a single video stream.
However, I'm not able to add a new AVMEDIA_TYPE_DATA stream to the output: as soon as I add a data stream using methods like avformat_new_stream(...), neither the data stream nor the video stream is produced and the output file is empty.
Can anyone suggest a tutorial page or a sample showing how to properly add a data stream to an mpegts output video?
Thanks a lot!
I was able to get a KLV stream added to a muxed output by starting with the "muxing.c" example that comes with the FFmpeg source, and modifying it as follows.
First, I created the AVStream as follows, where "oc" is the AVFormatContext (muxer) variable:
AVStream *klv_stream = avformat_new_stream(oc, NULL);
klv_stream->codec->codec_type = AVMEDIA_TYPE_DATA;
klv_stream->codec->codec_id = AV_CODEC_ID_TIMED_ID3;
klv_stream->time_base = AVRational{ 1, 30 };
klv_stream->id = oc->nb_streams - 1;
Then, during the encoding/muxing loop:
AVPacket pkt;
av_init_packet(&pkt);
pkt.data = (uint8_t*)GetKlv(pkt.size);
auto res = write_frame(oc, &video_st.st->time_base, klv_stream, &pkt);
free(pkt.data);
(The GetKlv() function returns a malloc()'ed array of binary data that would be replaced by whatever you're using to get your encoded KLV. It sets pkt.size to the length of the data.)
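For reference, a hypothetical stand-in for GetKlv() could look like the sketch below: it builds a single dummy KLV triplet (16-byte key, short-form BER length, value) and reports the total size through the out-parameter, mirroring how pkt.size is set above. The signature, key bytes and payload are purely illustrative; your own KLV library replaces all of this.
#include <cstdint>
#include <cstdlib>
#include <cstring>

// Hypothetical stand-in: returns a malloc()'ed buffer holding one dummy KLV
// triplet and writes its total length into 'size'.
static uint8_t *GetKlv(int &size)
{
    static const uint8_t key[16] = {                    // illustrative 16-byte UL key
        0x06, 0x0E, 0x2B, 0x34, 0x02, 0x0B, 0x01, 0x01,
        0x0E, 0x01, 0x03, 0x01, 0x01, 0x00, 0x00, 0x00
    };
    const uint8_t value[] = { 0xDE, 0xAD, 0xBE, 0xEF }; // dummy payload

    size = (int)(sizeof(key) + 1 + sizeof(value));      // key + 1-byte BER length + value
    uint8_t *buf = (uint8_t *)malloc(size);
    memcpy(buf, key, sizeof(key));
    buf[sizeof(key)] = (uint8_t)sizeof(value);          // short-form BER length
    memcpy(buf + sizeof(key) + 1, value, sizeof(value));
    return buf;
}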
With this modification, and specifying a ".ts" target file, I get a three-stream file that plays just fine in VLC. The KLV stream has a stream_type of 0x15, indicating synchronous KLV.
Note the codec_id value of AV_CODEC_ID_TIMED_ID3. According to the libavformat source file "mpegtsenc.c", a value of AV_CODEC_ID_OPUS should result in stream_type 6, for asynchronous KLV (no accompanying PTS or DTS). This is actually important for my application, but I'm unable to get it to work -- the call to avformat_write_header() throws a division by zero error. If I get that figured out, I'll add an update here.

Stream live audio live555

I am writing because I could not find the answer in previous topics. I am using live555 to stream live video (H.264) and audio (G.723) recorded by a web camera. The video part is already done and works perfectly, but I have no clue about the audio part.
From what I have read, I have to create a ServerMediaSession to which I should add two subsessions: one for the video and one for the audio. For the video part I created a subclass of OnDemandServerMediaSubsession, a subclass of FramedSource and an Encoder class, but for the audio part I do not know which classes I should base the implementation on.
The web camera records and delivers audio frames in G.723 format separately from the video. I would say the audio is raw: when I try to play it in VLC it says it could not find any start code, so I suppose what the web cam records is the raw audio stream.
I was wondering if someone could give me a hint.
For an audio stream, your override of OnDemandServerMediaSubsession::createNewRTPSink should create a SimpleRTPSink.
Something like:
RTPSink* YourAudioMediaSubsession::createNewRTPSink(Groupsock* rtpGroupsock,
                                                    unsigned char rtpPayloadTypeIfDynamic,
                                                    FramedSource* inputSource)
{
    return SimpleRTPSink::createNew(envir(), rtpGroupsock,
                                    4,
                                    frequency,
                                    "audio",
                                    "G723",
                                    channels);
}
The frequency and the number of channels should come from the inputSource.
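As a rough sketch of where those values could live, assuming a subsession class of your own modelled on the video one (the class name, the frequency/channels members and YourG723Source are illustrative; for standard G.723.1 the RTP clock rate is 8000 Hz and the stream is mono):
#include <liveMedia.hh>

// Illustrative skeleton: the subsession keeps the audio parameters as members
// and reuses them when the RTPSink is built.
class YourAudioMediaSubsession : public OnDemandServerMediaSubsession
{
public:
    YourAudioMediaSubsession(UsageEnvironment& env, Boolean reuseFirstSource)
        : OnDemandServerMediaSubsession(env, reuseFirstSource),
          frequency(8000),   // G.723.1 always uses an 8 kHz RTP clock
          channels(1) {}     // and a single channel

protected:
    virtual FramedSource* createNewStreamSource(unsigned clientSessionId,
                                                unsigned& estBitrate)
    {
        estBitrate = 6;      // kbps, roughly G.723.1's 6.3 kbit/s mode
        // YourG723Source stands for your own FramedSource subclass that reads
        // audio frames from the camera.
        return YourG723Source::createNew(envir());
    }

    virtual RTPSink* createNewRTPSink(Groupsock* rtpGroupsock,
                                      unsigned char rtpPayloadTypeIfDynamic,
                                      FramedSource* inputSource); // as shown above

    unsigned frequency;
    unsigned channels;
};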

Visualise audio waveform from a local video file using QMediaPlayer

I am trying, without success, to plot a waveform using QMediaPlayer and a QAudioProbe object to get the QAudioBuffer, but it always fails when I try:
player = new QMediaPlayer;
audio = new QAudioProbe;
QAudioRecorder *recorder = new QAudioRecorder();
if (audio->setSource(player))
{
    // Probing succeeded, audioProbe->isValid() should be true.
    std::cout << "probing succeeded" << std::endl;
    connect(audio, SIGNAL(audioBufferProbed(QAudioBuffer)), this,
            SLOT(processBuffer(QAudioBuffer)));
}
This line:
if (audio->setSource(player))
always returns false!
When I replace QMediaPlayer with QAudioRecorder, the setSource function works well.
Do you have any idea how to do that, or am I going in the wrong direction? Otherwise, is there another way to extract the audio from a video file?
Thanks a lot.
From the documentation on QMediaPlayer, I would gather that since the audioAvailable property can change, it defaults to false.
If there is no audio available, the documentation of setSource states that
"If the media object does not support monitoring audio, this function
will return false."
Try loading an actual piece of media that has audio available (check that first) before trying to set the source.
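A rough sketch of that order of operations with Qt 5's QMediaPlayer and QAudioProbe: load the media first, wait until the player reports that audio is available, and only then attach the probe. MyClass::processBuffer and the file path are placeholders for your own slot and media file.
#include <QMediaPlayer>
#include <QAudioProbe>
#include <QUrl>
#include <QDebug>

player = new QMediaPlayer(this);
audio  = new QAudioProbe(this);

// Attach the probe only once the loaded media actually reports audio.
connect(player, &QMediaPlayer::audioAvailableChanged, this,
        [this](bool available) {
    qDebug() << "audio available:" << available;
    if (available && audio->setSource(player)) {
        connect(audio, &QAudioProbe::audioBufferProbed,
                this, &MyClass::processBuffer);
        player->play();
    }
});

player->setMedia(QUrl::fromLocalFile("/path/to/video.mp4")); // placeholder path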

How to play an audio file with a continuously updated QBuffer with Phonon?

I’m using Phonon player to play the audio files.
Scenarios:
Files played from the local drive: plays properly.
Files played from a remote drive: as the audio files are on a USB device, I have to keep updating the buffer (QBuffer) and simultaneously play the file. But for some reason the file does not play in the Phonon player. Can anyone please tell me the right way to play an audio file while the buffer is still being updated?
//Code
Phonon::MediaObject* m_pMediaObject = new Phonon::MediaObject(this);
Phonon::AudioOutput* audioOutput = new Phonon::AudioOutput(Phonon::MusicCategory, this);
Phonon::Path path = Phonon::createPath(m_pMediaObject, audioOutput);
QBuffer *m_pBufferLoop = new QBuffer(this);
m_pBufferLoop->open(QIODevice::Append | QIODevice::ReadWrite);
functionToUpdateBuffer(); // updates the buffer dynamically
m_pMediaObject->setCurrentSource(m_pBufferLoop);
m_pMediaObject->play();
Nothing happens after I call play(). But if I give the complete buffer then the same code works fine.