I have a question about GStreamer. I want to use GStreamer to write the buffer to a file descriptor while receiving audio/video. I've looked through the GstBuffer functions, but I cannot decode the resulting binary file to audio. (I've been using Audacity to import the raw binary file as media.) Please suggest how I can implement this.
PS. Sorry for my bad English, I'll keep practicing. :)
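One likely cause is that the dumped bytes are headerless raw PCM, which Audacity can only open via "Import Raw Data". A minimal sketch of one approach (assuming GStreamer 1.x and an audio source; element names and the output path are illustrative): put wavenc in front of fdsink so the bytes written to the file descriptor form a playable WAV file.

```cpp
// Hedged sketch: capture audio and write it to a file descriptor via fdsink.
// wavenc adds the RIFF header, so the dump is a playable .wav rather than
// headerless raw PCM that Audacity would have to import as raw data.
#include <gst/gst.h>
#include <fcntl.h>

int main(int argc, char **argv) {
    gst_init(&argc, &argv);

    // "capture.wav" is just an example target for the descriptor.
    int fd = open("capture.wav", O_WRONLY | O_CREAT | O_TRUNC, 0644);

    GError *err = NULL;
    GstElement *pipeline = gst_parse_launch(
        "autoaudiosrc ! audioconvert ! wavenc ! fdsink name=out", &err);
    if (!pipeline) {
        g_printerr("parse error: %s\n", err->message);
        return 1;
    }

    // Point fdsink at our descriptor before starting the pipeline.
    GstElement *sink = gst_bin_get_by_name(GST_BIN(pipeline), "out");
    g_object_set(sink, "fd", fd, NULL);
    gst_object_unref(sink);

    gst_element_set_state(pipeline, GST_STATE_PLAYING);
    // ... run a GMainLoop, send EOS, then set NULL state and unref ...
    return 0;
}
```

For video, the same idea applies: put a muxer (e.g. matroskamux) before fdsink so the stream is self-describing.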
I need software that can stream audio in MP3 format. The audio will come from the microphone at the same time.
I have a program that can stream sound with the A-law and µ-law codecs.
And I have another program that can stream a recorded MP3 file, not capture from the microphone.
I can stream with the Vlc.DotNet wrapper, but I didn't succeed with DirectShow (namely, the microphone).
Here is my Vlc.DotNet code:
myVlcControl.Play("dshow://");
myVlcControl.Play(new Uri("dshow://"));
It did not work with either of these calls, and I don't know what causes the problem.
My second program can stream sound captured from the microphone in real time, but its codec is A-law, not MP3. I did not find any converter that converts linear PCM to MP3; I found one that converts linear PCM to A-law. Here is the link: Linear to Alaw Codec.
I know about LAME and NAudio, but they convert a WAV file to MP3. I need linear PCM to MP3 (like in the link).
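For what it's worth, LAME's C API does accept raw linear PCM directly; the WAV step is only what the command-line frontend does. A minimal sketch, assuming interleaved 16-bit PCM and a 128 kbps CBR target (both assumptions, adjust to your capture format):

```cpp
// Hedged sketch: encode raw linear PCM (no WAV file involved) to MP3 bytes
// with libmp3lame. `frames` is the number of samples per channel.
#include <lame/lame.h>
#include <vector>

std::vector<unsigned char> pcm_to_mp3(const short *pcm, int frames,
                                      int rate, int channels) {
    lame_t gf = lame_init();
    lame_set_in_samplerate(gf, rate);
    lame_set_num_channels(gf, channels);
    lame_set_brate(gf, 128);                 // assumption: 128 kbps
    lame_init_params(gf);

    // Worst-case output size suggested by the LAME documentation.
    std::vector<unsigned char> mp3(7200 + frames * 5 / 4);
    int n = lame_encode_buffer_interleaved(gf, (short *)pcm, frames,
                                           mp3.data(), (int)mp3.size());
    n += lame_encode_flush(gf, mp3.data() + n, (int)mp3.size() - n);
    mp3.resize(n);
    lame_close(gf);
    return mp3;
}
```

Calling this on each captured A-law block (after expanding A-law to linear with your existing converter) would give you MP3 frames to stream.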
I am very confused and really do not know which way to go:
1. Find a linear-PCM-to-MP3 codec (which seems very complicated)?
2. Learn VLC DirectShow usage on .NET?
Thank you so much in advance.
*VLC.DotNet, axVLCPlugin21, LAME, ffmpeg....
I have since solved this problem with the Vlc.DotNet wrapper. The problem was compiling for the x64 architecture; when I compiled for x86, the problem was solved.
I'm developing an app which sends an MPEG-2 TS stream using the FFmpeg API (avio_open, avformat_new_stream, etc.).
The problem is that the app already receives AAC-LC audio, so the audio frames do not need to be encoded; my app just passes through the data received from a socket buffer.
To open and send MPEG-TS using FFmpeg, I must have an AVFormatContext, which as far as I know is created by the FFmpeg API for an encoder.
Can I create an AVFormatContext manually for already-encoded AAC-LC data, or should I decode and re-encode the data? The information I have is the sample rate, codec, and bitrate.
Any help will be greatly appreciated. Thanks in advance.
Yes, you can use the encoded data as-is if your container supports it. There are two steps involved here: encoding and muxing. Encoding compresses the data; muxing interleaves it into the output file so the packets are properly ordered. The muxing example in the FFmpeg distribution helped me with this.
You might also take a look at the following class: https://sourceforge.net/p/karlyriceditor/code/HEAD/tree/src/ffmpegvideoencoder.cpp. This file is from one of my projects and contains a video encoder. Starting from line 402 you'll see the setup for non-converted audio. It is a somewhat hackish way, but it worked. Unfortunately, I still ended up re-encoding the audio, because for my formats it was not possible to achieve the frame-perfect synchronization I needed.
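The stream-setup part can be sketched without an encoder at all: fill in the stream's codec parameters by hand from what you already know (sample rate, codec, bitrate). A minimal sketch, assuming an FFmpeg 4.x-era API, stereo audio, and ADTS-framed AAC-LC arriving from the socket:

```cpp
// Hedged sketch: create an MPEG-TS muxing context for pre-encoded AAC-LC
// without ever opening an encoder. Packets from the socket are then fed to
// av_interleaved_write_frame() as-is.
extern "C" {
#include <libavformat/avformat.h>
}

AVFormatContext *make_ts_muxer(const char *url, int sample_rate, int bit_rate) {
    AVFormatContext *oc = NULL;
    avformat_alloc_output_context2(&oc, NULL, "mpegts", url);

    AVStream *st = avformat_new_stream(oc, NULL);   // no AVCodec required
    st->codecpar->codec_type  = AVMEDIA_TYPE_AUDIO;
    st->codecpar->codec_id    = AV_CODEC_ID_AAC;
    st->codecpar->sample_rate = sample_rate;
    st->codecpar->bit_rate    = bit_rate;
    st->codecpar->channels    = 2;                  // assumption: stereo

    avio_open(&oc->pb, url, AVIO_FLAG_WRITE);
    avformat_write_header(oc, NULL);
    return oc;
}
```

Each received AAC frame then becomes one AVPacket; you set its pts/dts yourself (e.g. 1024 samples per AAC frame at the known sample rate, rescaled to the stream time base).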
I'm a reasonably advanced C++ programmer, as a bit of background. At this point, I want to experiment a bit with sound. Rather than use a library to load and play files, I want to figure out how to actually do that myself, for the understanding. For this application, I would like to read in a .wav file (I already have that part down), then output that data to the speakers. How do I push a waveform, or the data from the file, to the speakers on my computer? I'm on Windows, by the way.
You can read this article about how to set up the audio device and how to stream data into the device for playback on Windows. If using this library is too high-level for you and you'd like to go deeper and write your own decoding of WAV files and outputting that to a sound card, you have far more research to do than what's appropriate for an answer here.
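To make the shape of that work concrete, here is a minimal sketch using the legacy Windows waveOut API (one of several options; WASAPI is the modern one). It assumes you have already parsed the .wav file into 16-bit PCM samples:

```cpp
// Hedged sketch: push one buffer of 16-bit PCM to the default output
// device with the legacy waveOut API. Link with winmm.lib.
#include <windows.h>
#include <mmsystem.h>
#include <vector>

void play_pcm(const std::vector<short> &samples, int rate, int channels) {
    WAVEFORMATEX fmt = {};
    fmt.wFormatTag      = WAVE_FORMAT_PCM;
    fmt.nChannels       = (WORD)channels;
    fmt.nSamplesPerSec  = rate;
    fmt.wBitsPerSample  = 16;
    fmt.nBlockAlign     = fmt.nChannels * fmt.wBitsPerSample / 8;
    fmt.nAvgBytesPerSec = fmt.nSamplesPerSec * fmt.nBlockAlign;

    HWAVEOUT dev;
    waveOutOpen(&dev, WAVE_MAPPER, &fmt, 0, 0, CALLBACK_NULL);

    WAVEHDR hdr = {};
    hdr.lpData         = (LPSTR)samples.data();
    hdr.dwBufferLength = (DWORD)(samples.size() * sizeof(short));
    waveOutPrepareHeader(dev, &hdr, sizeof(hdr));
    waveOutWrite(dev, &hdr, sizeof(hdr));           // playback is asynchronous

    while (!(hdr.dwFlags & WHDR_DONE)) Sleep(10);   // crude wait for completion
    waveOutUnprepareHeader(dev, &hdr, sizeof(hdr));
    waveOutClose(dev);
}
```

Real code would use a callback and multiple queued buffers instead of the busy wait, but this shows the minimum path from samples in memory to the speakers.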
I am working in C++ with Ogg/Vorbis.
I have an array with raw PCM data decoded from a Vorbis file (.ogg). The .ogg file has been decoded with libvorbis using vorbis_synthesis_pcmout, which fills a multidimensional array with the raw PCM for each channel.
I'm sure GStreamer is capable of reading pure PCM, but I have searched for the plugin to no avail. I'm sure I am just overlooking something simple here.
You might be looking for appsrc.
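A minimal sketch of the appsrc route, assuming the per-channel float output of vorbis_synthesis_pcmout has already been interleaved and converted to 16-bit samples (the caps string must describe the buffer layout exactly):

```cpp
// Hedged sketch: push interleaved S16LE PCM from memory into a GStreamer
// pipeline via appsrc and play it on the default audio sink.
#include <gst/gst.h>
#include <gst/app/gstappsrc.h>

void push_pcm(const short *pcm, size_t nsamples, int rate, int channels) {
    GstElement *pipeline = gst_parse_launch(
        "appsrc name=src ! audioconvert ! autoaudiosink", NULL);
    GstElement *src = gst_bin_get_by_name(GST_BIN(pipeline), "src");

    GstCaps *caps = gst_caps_new_simple("audio/x-raw",
        "format",   G_TYPE_STRING, "S16LE",
        "layout",   G_TYPE_STRING, "interleaved",
        "rate",     G_TYPE_INT,    rate,
        "channels", G_TYPE_INT,    channels, NULL);
    gst_app_src_set_caps(GST_APP_SRC(src), caps);
    gst_caps_unref(caps);

    size_t bytes = nsamples * sizeof(short);
    GstBuffer *buf = gst_buffer_new_allocate(NULL, bytes, NULL);
    gst_buffer_fill(buf, 0, pcm, bytes);
    gst_app_src_push_buffer(GST_APP_SRC(src), buf);   // takes ownership of buf
    gst_app_src_end_of_stream(GST_APP_SRC(src));

    gst_element_set_state(pipeline, GST_STATE_PLAYING);
    // ... wait for EOS on the bus, then set NULL state and unref ...
}
```

If you would rather keep libvorbis's planar float output, caps of format F32LE with layout non-interleaved should also work, but I have not needed that path myself.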
A PCM file is generally stored in .wav format, so you can use the wavparse plugin to play raw PCM.
I learned how to encode a WAV file into an MP3 using lame_encode_buffer_interleaved from this question: Is there any LAME c++ wraper\simplifier (working on Linux Mac and Win from pure code)?
Now I want to decode the MP3 back into a WAV. I know there's lame_decode, but I don't know how to use it, since it fills two PCM buffers (pcm_l and pcm_r), and I don't understand how to put them together into a well-formed WAV file, because I don't really know how they work.
Now, can someone provide a simple working example of decoding an MP3 into a WAV using LAME in C/C++?
Thanks.
Take a look at the lame frontend source code. Start at the lame_decoder() function in the .../frontend/lame_main.c file; it decodes an MP3 file and writes the WAV header.