When I load and play an .ogg file I hear sound, no problem. When I change no code and specify a file that ends with .mp3, I get an SdlException: "Unrecognized sound file type". The autocomplete text says it supports wave, mp3, ogg and others, but it appears mp3 isn't actually supported. I tried more than one mp3 file.
How do I load mp3s? Here is my quick test code in a WinForms app:
Video.SetVideoMode(320, 240, 32, false, false, false, true);
//var s = new Sound(@"a.ogg");
var s = new Sound(@"a.mp3");
Task.Run(() =>
{
var c = s.Play();
//System.Threading.Thread.Sleep(3000);
Task.Delay(3000).Wait();
c.Pause();
System.Threading.Thread.Sleep(500);
c.Resume();
//s.Stop();
});
The official documentation says:
"SDL.NET does not support MP3 format because the underlying C library, SDL_mixer, does not officially support it"
Source
http://sourceforge.net/apps/mediawiki/cs-sdl/index.php?title=Audio
SDL_mixer (if you would like to implement it yourself):
http://www.libsdl.org/projects/SDL_mixer/
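If you would rather go through native SDL_mixer yourself, and your SDL_mixer build was compiled with MP3 support (e.g. via SMPEG), a minimal sketch could look roughly like this - note this is the plain C API, not SDL.NET:

// Minimal native SDL_mixer sketch; assumes an SDL_mixer build with MP3 support.
#include "SDL.h"
#include "SDL_mixer.h"
int main(int argc, char* argv[])
{
    SDL_Init(SDL_INIT_AUDIO);
    if (Mix_OpenAudio(44100, MIX_DEFAULT_FORMAT, 2, 4096) != 0)   // 44.1 kHz stereo, 4 KB chunks
        return 1;
    Mix_Music* music = Mix_LoadMUS("a.mp3");   // NULL here usually means no MP3 support was compiled in
    if (music == NULL)
        return 1;                              // Mix_GetError() tells you why
    Mix_PlayMusic(music, -1);                  // -1 = loop until stopped
    SDL_Delay(3000);                           // let it play for 3 seconds
    Mix_HaltMusic();
    Mix_FreeMusic(music);
    Mix_CloseAudio();
    SDL_Quit();
    return 0;
}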
Related
How can I add chapters to MP4 files with GStreamer 1.18?
I have a Visual Studio 2015 C++ project that writes a video (H264) and audio (AAC) stream to disk using mp4mux. Now I would like to add chapters to the MP4 file that are compatible with regular video players like VLC.
I have tried to follow the documentation to create a GstToc and a dummy GstTocEntry, but it doesn't appear to be written to the file:
GstToc* toc = gst_toc_new(GstTocScope::GST_TOC_SCOPE_CURRENT);
GstTocEntry* new_entry = gst_toc_entry_new(GstTocEntryType::GST_TOC_ENTRY_TYPE_CHAPTER, "some_uid");
gst_toc_entry_set_start_stop_times(new_entry, 0, 50);
gst_toc_append_entry(toc, new_entry);
I then also tried to generate a new toc event and pass it to the video GstElement vin:
gboolean result = gst_element_send_event(vin, gst_event_new_toc(toc, true));
Did I miss anything in order to map the GstToc to the video stream? Do I need to tell mp4mux to process the toc? Or is this not supported?
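Would it also help to hand the TOC directly to the muxer element? The following is an untested sketch, assuming mp4mux actually implements the GstTocSetter interface; the pipeline variable and the element name "mux" are placeholders for my setup:

// Untested sketch: set the TOC on the muxer if it exposes GstTocSetter.
GstElement* muxer = gst_bin_get_by_name(GST_BIN(pipeline), "mux");
if (muxer != NULL && GST_IS_TOC_SETTER(muxer))
{
    gst_toc_setter_set_toc(GST_TOC_SETTER(muxer), toc);
}
if (muxer != NULL)
    gst_object_unref(muxer);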
The GStreamer documentation seems to imply that there is GstToc support in mp4mux (below, a hollow bullet point "o" indicates no support and a filled bullet indicates that the feature is handled):
MP4: * elst
The elst atom contains a list of edits. Each edit consists of (length, start, play-back speed).
I didn't find much on the elst atom or what an edit is. I tried using GST_TOC_ENTRY_TYPE_EDITION instead of GST_TOC_ENTRY_TYPE_CHAPTER, but that didn't change anything.
This page mentions "preliminary code in MP4 supporting chapters" in GStreamer.
I have seen ways to inject metadata files into existing MP4 files with ffmpeg and other tools, which unfortunately are not available on our system. I could try to inject the chapters into the MP4 file header manually, but I'd very much like to avoid this post-processing step for obvious reasons. Any help would be greatly appreciated!
Hi, I'm trying to create a "speech to text" app that can transcribe any audio/video file. I've created an app based on this post and it works great for WAV files. But if I use an MP3 file, the line
hr = cpInputStream->BindToFile(wInputFileName.c_str(), SPFM_OPEN_READONLY, &sInputFormat.FormatId(), sInputFormat.WaveFormatExPtr(), SPFEI_ALL_EVENTS);
returns "The parameter is incorrect".
The question is: can I use MP3 files as input for SAPI? And if yes, how do I determine the correct format for the call to hr = sInputFormat.AssignFormat(SPSF_16kHz16BitStereo)? SPSF_16kHz16BitStereo will certainly not be correct for every file, and I don't think we should hardcode it.
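One workaround I am considering, shown here only as an untested sketch (it is not a documented SAPI path, and "input.mp3" is a placeholder): decode the MP3 to plain PCM first with a Media Foundation Source Reader, query the actual format instead of hardcoding it, write the PCM into a temporary WAV, and feed that WAV to SAPI exactly as in the working case.

// Untested sketch: open the MP3 with a Source Reader, ask for PCM output,
// then query the real format instead of hardcoding SPSF_16kHz16BitStereo.
// (Needs <mfapi.h>, <mfidl.h>, <mfreadwrite.h>, mfplat.lib, mfreadwrite.lib; error checks omitted.)
IMFSourceReader* pReader = NULL;
IMFMediaType* pPartial = NULL;
IMFMediaType* pFull = NULL;
MFStartup(MF_VERSION);
MFCreateSourceReaderFromURL(L"input.mp3", NULL, &pReader);
MFCreateMediaType(&pPartial);
pPartial->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Audio);
pPartial->SetGUID(MF_MT_SUBTYPE, MFAudioFormat_PCM);              // the reader inserts the MP3 decoder
pReader->SetCurrentMediaType(MF_SOURCE_READER_FIRST_AUDIO_STREAM, NULL, pPartial);
pReader->GetCurrentMediaType(MF_SOURCE_READER_FIRST_AUDIO_STREAM, &pFull);
UINT32 rate = 0, channels = 0, bits = 0;
pFull->GetUINT32(MF_MT_AUDIO_SAMPLES_PER_SECOND, &rate);          // these are the values a WAV header
pFull->GetUINT32(MF_MT_AUDIO_NUM_CHANNELS, &channels);            // (or a WAVEFORMATEX) would be built from
pFull->GetUINT32(MF_MT_AUDIO_BITS_PER_SAMPLE, &bits);
// IMFSourceReader::ReadSample in a loop then yields the PCM buffers to write
// into the temporary WAV that gets passed to BindToFile.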
I am using a Node.js library, naudiodon (link), to record sound from 2 microphones (4 channels of audio in total, each microphone being stereo). This library spits out a .raw file with the following specs: 16-bit, 48000 Hz sample rate, 4 channels.
// var portAudio = require('../index.js');
var portAudio = require('naudiodon');
var fs = require('fs');
//Create a new instance of Audio Input, which is a ReadableStream
var ai = new portAudio.AudioInput({
channelCount: 4,
sampleFormat: portAudio.SampleFormat16Bit,
sampleRate: 48000,
deviceId: 13
});
ai.on('error', console.error);
//Create a write stream to write out to a raw audio file
var ws = fs.createWriteStream('rawAudio_final.raw');
//Start streaming
ai.pipe(ws);
ai.start();
process.once('SIGINT', ai.quit);
Instead of the .raw file, I am trying to convert this to two individual .wav files. With the above encoding and information, what would be the best way to do so? I tried to dig around for easy ways to deinterleave the audio and get .wav files, but I seem to be hitting a wall.
The addon is a wrapper around a C++ library called PortAudio, which according to its documentation supports writing to a WAV file.
What you could do is extend the addon and bind a Node.js function to the underlying C++ function that writes to WAV.
This will give you good performance, if that is a concern.
If you want something easier, you could look up utilities that do the conversion and call them from within your script, e.g. like this.
This looks similar to this question.
You may also take a look here to see how to create a WAV file from JavaScript.
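For the deinterleaving itself, here is a minimal native sketch of what the conversion could look like, assuming the .raw file is interleaved little-endian 16-bit samples, 4 channels at 48000 Hz exactly as recorded above, with one stereo microphone on channels 0/1 and the other on channels 2/3 (the output file names are made up):

// Untested sketch: split a 4-channel interleaved 16-bit .raw capture into two
// stereo WAV files. Assumes a little-endian host (x86/x64).
#include <cstdint>
#include <cstdio>

static void writeWavHeader(FILE* f, uint32_t dataBytes, uint16_t channels,
                           uint32_t sampleRate, uint16_t bitsPerSample)
{
    uint16_t blockAlign = channels * bitsPerSample / 8;
    uint32_t byteRate   = sampleRate * blockAlign;
    uint32_t riffSize   = 36 + dataBytes;
    uint32_t fmtSize    = 16;
    uint16_t pcm        = 1;                              // WAVE_FORMAT_PCM
    fwrite("RIFF", 1, 4, f); fwrite(&riffSize, 4, 1, f);
    fwrite("WAVE", 1, 4, f);
    fwrite("fmt ", 1, 4, f); fwrite(&fmtSize, 4, 1, f);
    fwrite(&pcm, 2, 1, f); fwrite(&channels, 2, 1, f);
    fwrite(&sampleRate, 4, 1, f); fwrite(&byteRate, 4, 1, f);
    fwrite(&blockAlign, 2, 1, f); fwrite(&bitsPerSample, 2, 1, f);
    fwrite("data", 1, 4, f); fwrite(&dataBytes, 4, 1, f);
}

int main()
{
    const uint32_t sampleRate = 48000;
    FILE* in   = fopen("rawAudio_final.raw", "rb");
    FILE* outA = fopen("micA.wav", "wb");                 // channels 0/1
    FILE* outB = fopen("micB.wav", "wb");                 // channels 2/3
    if (!in || !outA || !outB) return 1;

    writeWavHeader(outA, 0, 2, sampleRate, 16);           // placeholder headers, patched below
    writeWavHeader(outB, 0, 2, sampleRate, 16);

    int16_t frame[4];                                     // one frame = 4 interleaved samples
    uint32_t bytesPerFile = 0;
    while (fread(frame, sizeof(int16_t), 4, in) == 4)
    {
        fwrite(&frame[0], sizeof(int16_t), 2, outA);
        fwrite(&frame[2], sizeof(int16_t), 2, outB);
        bytesPerFile += 2 * sizeof(int16_t);
    }

    fseek(outA, 0, SEEK_SET); writeWavHeader(outA, bytesPerFile, 2, sampleRate, 16);
    fseek(outB, 0, SEEK_SET); writeWavHeader(outB, bytesPerFile, 2, sampleRate, 16);

    fclose(in); fclose(outA); fclose(outB);
    return 0;
}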
I have a file with an .amr extension, and I want to get its sample rate and number of channels using Microsoft Media Foundation. Further, I want to decode it and get the uncompressed data.
I can successfully get those from .aac, .mp4 and other file types, but not from an .amr file (or a .3gp file which contains an AMR track).
So, for other types I do:
IMFSourceReader *m_pReader;
IMFMediaType *m_pAudioType;
MFCreateSourceReaderFromURL(filePath, NULL, &m_pReader);
m_pReader->SetStreamSelection(MF_SOURCE_READER_ALL_STREAMS, false);
m_pReader->SetStreamSelection(MF_SOURCE_READER_FIRST_AUDIO_STREAM, true);
m_pReader->GetCurrentMediaType(MF_SOURCE_READER_FIRST_AUDIO_STREAM, &m_pAudioType);
UINT32 numChannels,sampleRate;
m_pAudioType->GetUINT32(MF_MT_AUDIO_NUM_CHANNELS, &numChannels);
m_pAudioType->GetUINT32(MF_MT_AUDIO_SAMPLES_PER_SECOND, &sampleRate);
Assume there are no errors during this code.
For .amr files, garbage ends up in numChannels and sampleRate.
Does anyone have experience with this and know how to recognize and/or get the proper channel count and sample rate for further decoding?
BTW, Windows Media Player plays this file with no problems.
Thanks in advance.
So I found out that Media Foundation supports decoding .amr files, but not encoding them.
Just before we get these properties:
UINT32 numChannels,sampleRate;
m_pAudioType->GetUINT32(MF_MT_AUDIO_NUM_CHANNELS, &numChannels);
m_pAudioType->GetUINT32(MF_MT_AUDIO_SAMPLES_PER_SECOND, &sampleRate);
We have to set a new media type on our Source Reader:
m_pAudioType->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Audio);
m_pAudioType->SetGUID(MF_MT_SUBTYPE, MFAudioFormat_Float);  // ask the reader to decode to float PCM
m_pReader->SetCurrentMediaType(MF_SOURCE_READER_FIRST_AUDIO_STREAM, NULL, m_pAudioType);
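Once SetCurrentMediaType succeeds, the completed media type can be re-read from the reader, and the two GetUINT32 calls above then return proper values - a small sketch:

// Re-query the (now complete) media type after SetCurrentMediaType.
IMFMediaType* pFullType = NULL;
m_pReader->GetCurrentMediaType(MF_SOURCE_READER_FIRST_AUDIO_STREAM, &pFullType);
pFullType->GetUINT32(MF_MT_AUDIO_NUM_CHANNELS, &numChannels);
pFullType->GetUINT32(MF_MT_AUDIO_SAMPLES_PER_SECOND, &sampleRate);
pFullType->Release();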
Thanks for taking some time to read my question.
I'm developing a C++ application using Qt and the Windows API.
I'm recording the microphone output in small 10-second raw audio files, and I want to convert them to AAC format.
I have tried to read as many things as I could, and thought it would be a good idea to start from the Windows Media Foundation transcode API.
The problem is, I can't seem to use a .raw or .pcm file in the CreateObjectFromURL function, so I'm pretty much stuck here for the moment. It keeps failing; the hr return code equals 3222091460. I have tried to pass an .mp3 file to the function and of course it works, so no URL typo on my part.
MF_OBJECT_TYPE ObjectType = MF_OBJECT_INVALID;
IMFSourceResolver* pSourceResolver = NULL;
IUnknown* pUnkSource = NULL;
// Create the source resolver.
hr = MFCreateSourceResolver(&pSourceResolver);
if (FAILED(hr))
{
qDebug() << "Failed !";
}
// Use the source resolver to create the media source.
hr = pSourceResolver->CreateObjectFromURL(
sURL, // URL of the source.
MF_RESOLUTION_MEDIASOURCE, // Create a source object.
NULL, // Optional property store.
&ObjectType, // Receives the created object type.
&pUnkSource // Receives a pointer to the media source.
);
The MFCreateSourceResolver call works fine, but CreateObjectFromURL does not succeed :(
So I have two questions for you folks:
Is it possible to encode raw audio files to AAC files using Windows Media Foundation?
If yes, what should I read to accomplish what I want?
I want to point out that I can't just use ffmpeg or libav, because I can't afford any license for my software and don't want it to be under the GPL license. But if there are alternatives to Windows Media Foundation for encoding raw audio files to AAC, I would be glad to hear them.
And finally, sorry for my bad English; this is obviously not my native language, and I'm sorry if I made your eyes bleed (and happy if I made you laugh).
Have a nice day.
The hr return code equals 3222091460
That is an HRESULT code. Use the "ShowHresult" tool to have it conveniently decoded for you. The code means 0xC00D36C4 MF_E_UNSUPPORTED_BYTESTREAM_TYPE: "The byte stream type of the given URL is unsupported."
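(If you don't have the tool handy, printing the value in hex gives you the same code to look up; a one-liner using the Qt types already in the question:)

// 3222091460 printed as hex is 0xC00D36C4
qDebug() << QString("hr = 0x%1").arg(static_cast<quint32>(hr), 8, 16, QChar('0'));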
The problem is basically that there is no support for these raw files; .WAV is a good container for raw audio - the file holds both the format descriptor and the payload.
You can obviously read the data from the raw audio file yourself and compress it into AAC using Media Foundation's AAC Encoder via its IMFTransform interface. This is reasonably easy, and you get AAC data on the output to, e.g., write into a raw .aac file.
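A rough sketch of the media-type setup for that route; the concrete values (44.1 kHz, 16-bit stereo PCM in, 12000 bytes/second AAC out, i.e. roughly 96 kbps) are assumptions to adjust to your actual capture format:

// Sketch: find the AAC encoder MFT and configure PCM-in / AAC-out media types.
// (Needs <mfapi.h>, <mftransform.h>, mfplat.lib; error checks omitted.)
IMFActivate** ppActivate = NULL;
UINT32 count = 0;
MFT_REGISTER_TYPE_INFO inInfo  = { MFMediaType_Audio, MFAudioFormat_PCM };
MFT_REGISTER_TYPE_INFO outInfo = { MFMediaType_Audio, MFAudioFormat_AAC };
MFTEnumEx(MFT_CATEGORY_AUDIO_ENCODER, MFT_ENUM_FLAG_ALL, &inInfo, &outInfo,
          &ppActivate, &count);

IMFTransform* pEncoder = NULL;
ppActivate[0]->ActivateObject(IID_PPV_ARGS(&pEncoder));            // assumes at least one match

IMFMediaType* pIn = NULL;                                          // the raw PCM you read yourself
MFCreateMediaType(&pIn);
pIn->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Audio);
pIn->SetGUID(MF_MT_SUBTYPE, MFAudioFormat_PCM);
pIn->SetUINT32(MF_MT_AUDIO_SAMPLES_PER_SECOND, 44100);
pIn->SetUINT32(MF_MT_AUDIO_NUM_CHANNELS, 2);
pIn->SetUINT32(MF_MT_AUDIO_BITS_PER_SAMPLE, 16);

IMFMediaType* pOut = NULL;                                         // compressed AAC
MFCreateMediaType(&pOut);
pOut->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Audio);
pOut->SetGUID(MF_MT_SUBTYPE, MFAudioFormat_AAC);
pOut->SetUINT32(MF_MT_AUDIO_SAMPLES_PER_SECOND, 44100);
pOut->SetUINT32(MF_MT_AUDIO_NUM_CHANNELS, 2);
pOut->SetUINT32(MF_MT_AUDIO_BITS_PER_SAMPLE, 16);
pOut->SetUINT32(MF_MT_AUDIO_AVG_BYTES_PER_SECOND, 12000);          // one of the fixed byte rates the encoder accepts

// The order can matter for some MFTs; swap these two calls if one of them fails.
pEncoder->SetOutputType(0, pOut, 0);
pEncoder->SetInputType(0, pIn, 0);

// From here: wrap PCM chunks in IMFSample/IMFMediaBuffer, push them with
// ProcessInput, and drain compressed AAC frames with ProcessOutput.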
Alternative options to Media Foundation are DirectShow (there are suitable codecs, though it might not be so easy to get started), libfaac, and FFmpeg's libavcodec (available under LGPL, not GPL).