how to use DirectShow to render audio in C++

I am just starting to learn DirectShow with C++. I need to use DirectShow to record audio and write it to a WAV file on disk. I have heard from other people that Windows 7 does not allow rendering audio using DirectShow.
In addition, I would like to know how I should start with recording audio using DirectShow in C++. If there is sample source code, that would be great.
Thanks in advance.

I think you may have misunderstood these other people. Windows Media Foundation is intended to be the successor to DirectShow, but DirectShow is still a perfectly valid technology on Windows 7.
The easiest way to accomplish what you want is to get it working with the GraphEdit tool first (I assume you ultimately want to do this programmatically).
Create a graph that contains your audio device, a WavDestFilter, and a file writer.
Source -> WavDest -> File Writer
Play the graph, then stop it, and you should have a .wav file containing the recorded audio. Once you can get this right, you need to do the whole thing programmatically.
There are a couple of samples in the SDK that show you how to programmatically add filters to a graph and connect them; those should be enough to get you started. A rough sketch of the idea follows below.
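To give a concrete starting point, here is a minimal sketch of that Source -> WavDest -> File Writer graph built in code (error handling and most cleanup omitted). Everything here is illustrative, not authoritative: the WavDest CLSID is the one I believe the DirectX 9 SDK sample registers (verify it against the wavdest.cpp you actually built), the code just grabs the first audio input device it finds, and C:\capture.wav is a placeholder path.

    #include <dshow.h>
    #pragma comment(lib, "strmiids.lib")
    #pragma comment(lib, "ole32.lib")

    // CLSID of the WavDest sample filter. This must match the GUID in the
    // wavdest.cpp you built and registered with regsvr32; the value below
    // is from the DirectX 9 SDK sample.
    static const GUID CLSID_WavDest =
    { 0x3c78b8e2, 0x6c4d, 0x11d1, { 0xad, 0xe2, 0x00, 0x00, 0xf8, 0x75, 0x4b, 0x99 } };

    int main()
    {
        CoInitialize(NULL);

        IGraphBuilder *pGraph = NULL;
        ICaptureGraphBuilder2 *pBuild = NULL;
        CoCreateInstance(CLSID_FilterGraph, NULL, CLSCTX_INPROC_SERVER,
                         IID_IGraphBuilder, (void**)&pGraph);
        CoCreateInstance(CLSID_CaptureGraphBuilder2, NULL, CLSCTX_INPROC_SERVER,
                         IID_ICaptureGraphBuilder2, (void**)&pBuild);
        pBuild->SetFiltergraph(pGraph);

        // 1. Audio capture source: take the first device in the
        //    audio input category.
        IBaseFilter *pSrc = NULL;
        ICreateDevEnum *pDevEnum = NULL;
        IEnumMoniker *pEnum = NULL;
        CoCreateInstance(CLSID_SystemDeviceEnum, NULL, CLSCTX_INPROC_SERVER,
                         IID_ICreateDevEnum, (void**)&pDevEnum);
        pDevEnum->CreateClassEnumerator(CLSID_AudioInputDeviceCategory, &pEnum, 0);
        IMoniker *pMoniker = NULL;
        if (pEnum && pEnum->Next(1, &pMoniker, NULL) == S_OK) {
            pMoniker->BindToObject(NULL, NULL, IID_IBaseFilter, (void**)&pSrc);
            pGraph->AddFilter(pSrc, L"Audio Capture");
            pMoniker->Release();
        }

        // 2. WavDest: wraps the PCM stream in a WAV/RIFF header.
        IBaseFilter *pWavDest = NULL;
        CoCreateInstance(CLSID_WavDest, NULL, CLSCTX_INPROC_SERVER,
                         IID_IBaseFilter, (void**)&pWavDest);
        pGraph->AddFilter(pWavDest, L"WavDest");

        // 3. File writer, pointed at the output file (placeholder path).
        IBaseFilter *pWriter = NULL;
        IFileSinkFilter *pSink = NULL;
        CoCreateInstance(CLSID_FileWriter, NULL, CLSCTX_INPROC_SERVER,
                         IID_IBaseFilter, (void**)&pWriter);
        pWriter->QueryInterface(IID_IFileSinkFilter, (void**)&pSink);
        pSink->SetFileName(L"C:\\capture.wav", NULL);
        pGraph->AddFilter(pWriter, L"File Writer");

        // Let the capture graph builder connect source -> WavDest -> writer.
        pBuild->RenderStream(&PIN_CATEGORY_CAPTURE, &MEDIATYPE_Audio,
                             pSrc, pWavDest, pWriter);

        // Record for ten seconds; the WAV header is finalized on Stop().
        IMediaControl *pControl = NULL;
        pGraph->QueryInterface(IID_IMediaControl, (void**)&pControl);
        pControl->Run();
        Sleep(10000);
        pControl->Stop();

        // Release remaining interfaces (abbreviated here).
        pControl->Release(); pSink->Release(); pWriter->Release();
        pWavDest->Release(); pSrc->Release();
        pDevEnum->Release(); pBuild->Release(); pGraph->Release();
        CoUninitialize();
        return 0;
    }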
WRT the WavDest filter: IIRC it is not in all versions of the SDK, so you may have to hunt down an appropriate version. You also need to build it and regsvr32 it so that it shows up in your list of available filters in GraphEdit.
If this all seems a bit much, I would read through the DirectShow documentation on MSDN to at least get an overview of DirectShow.

Related

What is the path from BITMAP[+WAVE(s)] to RTSP (Twitch) via C/C++ in Windows?

So I'm trying to build a basic tool that outputs video/audio to Twitch. I'm new to this (AV) side of programming, so I'm not even sure what to look for. I'm trying to use mainly Windows infrastructure, and third-party code where that's not available.
What are the steps for getting raw bitmap and wave data into a codec, then into an RTSP client, and finally showing up on Twitch? I'm not looking for code; I'm looking for concepts I can search for, as I'm not entirely sure what to search for. I'd rather not go through the OBS source code to figure it out, and will use that only as a last resort.
So I capture the monitor via Output Duplication, the system sound as one wave, and the microphone as another wave. I'm trying to push this to Twitch. I know there's Media Foundation on Windows, but I don't know how far it can get towards streaming, as I assume it has no network code integrated? There's also the libav* collection from FFmpeg.
What are the basic steps for sending bitmap/wave data to Twitch via any of the above libraries, or even others, as long as they work on Windows? Please don't add code; I just need a not-very-long conceptual explanation and I'll take it from there. Try to also cover how bitrate and framerate get regulated (do I have to do it, or does the codec do it)?
Assume absolute noob level in this area (concept-wise not code-wise).

Analysing audio data for attributes at time intervals

I've been wanting to play around with audio parsing for a while now but I haven't really been able to find the correct library for what I want to do.
I basically just want to parse through a sound file and get amplitudes/frequencies and other relevant information at certain times during the song (like every 10 ms or so) so I can graph the data for example where the song speeds up a lot and where it gets really loud.
I've looked at OpenAL quite a bit, but it doesn't look like it provides this ability, and beyond that I have not had much luck finding out where to start. If anyone has done this or used a library which can do it, a pointer in the right direction would be greatly appreciated. Thanks!
For parsing and decoding audio files I had good results with libsndfile, which runs on Windows/OSX/Linux and is open source (LGPL license). This library does not support mp3 (the author wants to avoid licensing issues), but it does support FLAC and Ogg/Vorbis.
If working with closed source libraries is not a problem for you, then an interesting option could be the Quicktime SDK from Apple. This SDK is available for OSX and Windows and is free for registered developers (you can register as an Apple developer for free as well). With the QT SDK you can parse all the file formats that the Quicktime Player supports, and that includes .mp3. The SDK gives you access to all the codecs installed by QuickTime, so you can read .mp3 files and have them decoded to PCM on the fly. Note that to use this SDK you have to have the free QuickTime Player installed.
As for signal processing libraries, I honestly can't recommend any, as I have written my own functions (for speech recognition, in case you are curious). There are a few open source projects that seem interesting listed on this page.
I recommend that you start simple, for example by analyzing amplitude data, which is readily available from the PCM samples without any further processing. Being able to visualize the data is very useful; I have found Audacity to be an excellent visualization tool, and since it is open source you can build your own tests inside it.
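To make the amplitude idea concrete, here is a small sketch using libsndfile (my illustration, assuming libsndfile is installed and linked with -lsndfile; input.flac is a placeholder file name). It computes one RMS value per 10 ms window, which is exactly the kind of series the question wants to graph:

    #include <sndfile.h>
    #include <cmath>
    #include <cstdio>
    #include <vector>

    int main()
    {
        SF_INFO info = {};
        SNDFILE *snd = sf_open("input.flac", SFM_READ, &info);
        if (!snd) { std::fprintf(stderr, "open failed\n"); return 1; }

        // 10 ms worth of frames (one frame = one sample per channel).
        const sf_count_t window = info.samplerate / 100;
        std::vector<float> buf(window * info.channels);

        sf_count_t frames;
        double t = 0.0;
        while ((frames = sf_readf_float(snd, buf.data(), window)) > 0) {
            // RMS over all samples in this window, channels interleaved.
            double sum = 0.0;
            for (sf_count_t i = 0; i < frames * info.channels; ++i)
                sum += buf[i] * buf[i];
            double rms = std::sqrt(sum / (frames * info.channels));
            std::printf("%.3f s: RMS %.4f\n", t, rms);
            t += double(frames) / info.samplerate;
        }
        sf_close(snd);
        return 0;
    }

Frequency content needs an FFT on top of this, but amplitude alone already shows where a song gets loud.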
Good luck!

combining separate audio and video files into one file C++

I am working on a C++ project with OpenCV. It is a simple web cam application with basic features like capturing pictures and videos. I have already been able to save video (without audio). Since OpenCV does not support audio processing, I was wondering if there is any way I can record audio separately in a different file and later combine the two to get one video file.
While searching on the internet, I did hear something about using ffmpeg with OpenCV, but I just can't figure out how to do it exactly.
Can you guys help me? I would be very grateful. Thank you!
P.S. I have used OpenCV and Qt (for the GUI).
As you said, OpenCV doesn't deal with audio by itself. However, once you have a separate audio file and a separate video file, you can combine them using a technique called muxing. There are many, many ways to do this. I use VirtualDub for most of my muxing needs, although it is Windows-only (not sure if that's a problem). I know ffmpeg is also capable of muxing via its command-line interface, though I can't recall the exact command. There's also MPlayer and a multitude of other programs out there that can do this.
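For reference, the usual ffmpeg invocation for this job looks something like the following (file names are placeholders; the copy flags remux the existing streams into one container without re-encoding):

    ffmpeg -i video.avi -i audio.wav -vcodec copy -acodec copy output.avi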
As far as I know, OpenCV is aimed at video/image processing. For audio processing you can use other libraries, e.g. PortAudio or Csound.

c++ convert/play videos and images

I'm looking for a built-in library for converting videos/images. I heard something about DirectShow. Do you know any library you have used to convert videos/images?
For transcoding (converting one video format to another), using DirectShow is a bit tricky; you want to use Media Foundation for this job.
Media Foundation offers the Transcode API to achieve this task. The MSDN documentation for the Transcode API has more details, tutorials, and samples to get you started.
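To sketch the Transcode API flow (purely illustrative, with error handling omitted; input.mp3 and output.wma are placeholder names, and a real program should build the audio attributes from MFTranscodeGetAudioOutputAvailableTypes rather than setting only the subtype as done here):

    #include <mfapi.h>
    #include <mfidl.h>
    #pragma comment(lib, "mfplat.lib")
    #pragma comment(lib, "mf.lib")
    #pragma comment(lib, "mfuuid.lib")
    #pragma comment(lib, "ole32.lib")
    #pragma comment(lib, "uuid.lib")

    int main()
    {
        CoInitializeEx(NULL, COINIT_MULTITHREADED);
        MFStartup(MF_VERSION);

        // Resolve the input file into a media source.
        IMFSourceResolver *pResolver = NULL;
        IUnknown *pUnk = NULL;
        IMFMediaSource *pSource = NULL;
        MF_OBJECT_TYPE objType = MF_OBJECT_INVALID;
        MFCreateSourceResolver(&pResolver);
        pResolver->CreateObjectFromURL(L"input.mp3", MF_RESOLUTION_MEDIASOURCE,
                                       NULL, &objType, &pUnk);
        pUnk->QueryInterface(IID_IMFMediaSource, (void**)&pSource);

        // Transcode profile: target audio subtype plus container type.
        IMFTranscodeProfile *pProfile = NULL;
        IMFAttributes *pAudio = NULL, *pContainer = NULL;
        MFCreateTranscodeProfile(&pProfile);
        MFCreateAttributes(&pAudio, 1);
        pAudio->SetGUID(MF_MT_SUBTYPE, MFAudioFormat_WMAudioV8);
        pProfile->SetAudioAttributes(pAudio);
        MFCreateAttributes(&pContainer, 1);
        pContainer->SetGUID(MF_TRANSCODE_CONTAINERTYPE, MFTranscodeContainerType_ASF);
        pProfile->SetContainerAttributes(pContainer);

        // Topology + session: the session pulls from the source, encodes,
        // and writes the output file.
        IMFTopology *pTopology = NULL;
        IMFMediaSession *pSession = NULL;
        MFCreateTranscodeTopology(pSource, L"output.wma", pProfile, &pTopology);
        MFCreateMediaSession(NULL, &pSession);
        pSession->SetTopology(0, pTopology);

        // Drive the session by pumping its event queue.
        bool done = false;
        while (!done) {
            IMFMediaEvent *pEvent = NULL;
            MediaEventType type = MEUnknown;
            pSession->GetEvent(0, &pEvent);      // blocking call
            pEvent->GetType(&type);
            if (type == MESessionTopologySet) {
                PROPVARIANT var;
                PropVariantInit(&var);
                pSession->Start(&GUID_NULL, &var);   // start at the beginning
            } else if (type == MESessionEnded) {
                pSession->Close();
            } else if (type == MESessionClosed) {
                done = true;
            }
            pEvent->Release();
        }

        pSession->Shutdown();
        pSource->Shutdown();
        // (Release of the remaining COM pointers omitted for brevity.)
        MFShutdown();
        CoUninitialize();
        return 0;
    }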
You can use DirectShow to grab images from a video stream. For that you must create your own filter. It is a complex task, because a filter is a COM object that works within a chain (the DirectShow filter graph) of other filters, i.e. the codecs, and after creating it you need to register it with the system. I still think it is worth trying, because you can use all the codecs registered on the system and receive the decompressed, final image in your filter. As another option, you could reuse modules from an open source media player, for example VideoLAN, but as far as I know it is a big codebase and not easy to use.
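As an alternative to writing a filter from scratch (my suggestion, not part of the answer above), the stock SampleGrabber filter can hand you decoded frames with far less work. A hedged sketch follows; note that qedit.h was removed from newer Windows SDKs, so you may need an older SDK or the usual copy-the-declarations workaround:

    #include <dshow.h>
    #include <qedit.h>   // CLSID_SampleGrabber / ISampleGrabber (older SDKs)
    #include <vector>
    #pragma comment(lib, "strmiids.lib")

    // Drop a SampleGrabber into an existing graph and configure it for
    // uncompressed RGB24. Wiring the chain
    // (source -> decoder -> grabber -> renderer) is left to the caller;
    // ICaptureGraphBuilder2::RenderStream can do it.
    HRESULT AddGrabber(IGraphBuilder *pGraph, ISampleGrabber **ppGrab)
    {
        IBaseFilter *pFilter = NULL;
        HRESULT hr = CoCreateInstance(CLSID_SampleGrabber, NULL,
                                      CLSCTX_INPROC_SERVER, IID_IBaseFilter,
                                      (void**)&pFilter);
        if (FAILED(hr)) return hr;
        pGraph->AddFilter(pFilter, L"Sample Grabber");
        pFilter->QueryInterface(IID_ISampleGrabber, (void**)ppGrab);
        pFilter->Release();

        // Force the upstream decoder to deliver RGB24, so the buffer is
        // a plain bitmap.
        AM_MEDIA_TYPE mt = {};
        mt.majortype = MEDIATYPE_Video;
        mt.subtype = MEDIASUBTYPE_RGB24;
        (*ppGrab)->SetMediaType(&mt);
        (*ppGrab)->SetBufferSamples(TRUE);  // keep a copy of the latest frame
        return S_OK;
    }

    // After the graph has run: fetch the most recent decoded frame.
    std::vector<BYTE> GrabFrame(ISampleGrabber *pGrab)
    {
        long size = 0;
        pGrab->GetCurrentBuffer(&size, NULL);        // first call queries size
        std::vector<BYTE> buf(size);
        pGrab->GetCurrentBuffer(&size, (long*)buf.data());
        return buf;
    }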
Good luck!

Video mixer filter

I need to find a video filter in order to mix multiple video streams (let's say, maximum 4).
I've found a video mixer filter from MediaLooks which is OK, but the problem is that I'm trying to use it in a school project (for the entire semester), so the 30-day trial is kind of unacceptable.
So my question to you is: are you aware of a free DirectShow filter that could help? If not, it means I must write one, and the problem there is that I don't know where to start.
If you need output to the display, you can use the VMR. If you need output to a file, then I think you will need to write something. The standard solution to this is to write an allocator/presenter plugin for the VMR that allows you to get back the mixed video and then save it somewhere. This is more efficient than a fully software-only mixer filter.
I finally ended up by implementing my own filter.
The Video Mixing Renderer 9 (and 7) will do the trick for you. You can set the opacity and the area of each video stream going into the VMR9. I suggest playing with it from within GraphEdit.
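As a rough illustration of that suggestion (a sketch only; it assumes a graph is already being built around it, and the stream count, alpha, and rectangle values are arbitrary examples):

    #include <dshow.h>
    #include <d3d9.h>
    #include <vmr9.h>
    #pragma comment(lib, "strmiids.lib")

    HRESULT ConfigureMixer(IGraphBuilder *pGraph)
    {
        IBaseFilter *pVmr = NULL;
        HRESULT hr = CoCreateInstance(CLSID_VideoMixingRenderer9, NULL,
                                      CLSCTX_INPROC_SERVER, IID_IBaseFilter,
                                      (void**)&pVmr);
        if (FAILED(hr)) return hr;
        pGraph->AddFilter(pVmr, L"VMR9");

        // Mixing mode (with the stream count) must be enabled before the
        // input pins are connected.
        IVMRFilterConfig9 *pConfig = NULL;
        pVmr->QueryInterface(IID_IVMRFilterConfig9, (void**)&pConfig);
        pConfig->SetNumberOfStreams(4);
        pConfig->Release();

        // ... connect up to four source streams to the VMR9 here ...

        // Per-stream blending: stream 1 at half opacity, placed in the
        // top-left quadrant of the output (coordinates are normalized 0..1).
        IVMRMixerControl9 *pMixer = NULL;
        pVmr->QueryInterface(IID_IVMRMixerControl9, (void**)&pMixer);
        pMixer->SetAlpha(1, 0.5f);
        VMR9NormalizedRect rect = { 0.0f, 0.0f, 0.5f, 0.5f };
        pMixer->SetOutputRect(1, &rect);
        pMixer->Release();

        pVmr->Release();
        return S_OK;
    }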
I would also like to suggest skipping all that altogether. If you use WPF, you will get far more media capabilities with much less effort.
If you want low-level DirectShow support, you can try my project, WPF MediaKit. I have a control called MediaUriElement that is similar to WPF's MediaElement.