I'm struggling to mix two audio streams into a single output stream. MFNode has an AudioMixerMFT, but TopoEdit crashes when I try to build a topology like this and execute it:
Note: I tried the TopoEdit that comes with Windows SDK 7.1 and also the one with a few fixes by the author of "Developing Microsoft® Media Foundation Applications".
I thought it could be an issue with TopoEdit, so I built the topology in code (by modifying the code from Chapter 9 of "Developing Microsoft® Media Foundation Applications"), but it still failed with 'E_UNEXPECTED Catastrophic failure' from mediaEvent->GetStatus(&hrStatus) inside HRESULT CPlayer::ProcessEvent(CComPtr<IMFMediaEvent>& mediaEvent) on the session start event.
At this point I suspected an issue with AudioMixerMFT itself, so I wrote a custom MFT with two inputs that acts as a simple pass-through (it only forwards the first input and ignores the second). I built a topology with it in TopoEdit like this, and it worked:
But when I connected 'Audio 2.wav' to the MFT, it crashed. I then tried this custom MFT in my own code: again it worked with a single input, but failed with 'E_UNEXPECTED Catastrophic failure' with two inputs.
Unsure what the problem could be, I started to doubt whether multi-input MFTs are supported at all, and I came across a post (yes, it is quite old) that says they are not: http://social.msdn.microsoft.com/Forums/windowsdesktop/en-US/21596e11-c4e2-480a-b28f-9e2f5fa8820d/mutlinput-and-multioutput
Has anyone out there been able to run the AudioMixerMFT from MFNode successfully? Are there any alternatives to Microsoft Media Foundation? Any hint would be appreciated. Thanks.
MFNode is my open source project.
If you read MFNode's documentation, you will see that TopoEdit does not handle more than one input stream on an MFT. And yes, TopoEdit crashes. You can fix the bug in the TopoEdit source code; it is just a null pointer that TopoEdit does not check. But unfortunately, that does not solve the problem: TopoEdit never calls ProcessInput on both input streams before calling ProcessOutput.
You have to provide a custom media session to make it work (implement IMFMediaSession).
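For reference, the call order the session must drive for a two-input MFT looks roughly like this (a minimal sketch, eliding error handling; it assumes caller-allocated output samples, and the variable names are made up):

    // Feed BOTH input streams before requesting mixed output;
    // TopoEdit only ever feeds stream 0 before calling ProcessOutput.
    hr = pMixerMFT->ProcessInput(0, pSampleFromSource1, 0);
    hr = pMixerMFT->ProcessInput(1, pSampleFromSource2, 0);

    MFT_OUTPUT_DATA_BUFFER outBuffer = {};
    outBuffer.dwStreamID = 0;
    outBuffer.pSample = pOutputSample;  // caller-allocated IMFSample
    DWORD dwStatus = 0;
    hr = pMixerMFT->ProcessOutput(0, 1, &outBuffer, &dwStatus);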
In a future update of the MFNode project, I will provide a player that exercises all of the MFNode components, and especially the MFNode audio mixer.
EDIT: in tededit.cpp, TopoEdit crashes in CTedEditorVisualObjectEventHandler::NotifyObjectDeleted:
...
CTedTopologyNode* pNode = m_pEditor->FindNode(pConn->GetOutputNodeID());
...
pNode can be a null pointer, and TopoEdit does not check for that.
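A minimal guard there (a sketch against the TopoEdit source; the surrounding code stays unchanged) is enough to stop the crash:

    CTedTopologyNode* pNode = m_pEditor->FindNode(pConn->GetOutputNodeID());
    if (NULL == pNode)
        return;  // the connector refers to a node that is already gone
    ...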
EDIT
I've updated my project. Check MFNodePlayer. I use a custom MediaSession to handle the wave mixer topology.
It works well, but it is not perfect, for two reasons. First, if you stop the topology and then replay, it fails (because I must stop all sources, and perhaps reset the presentation clock and bytestream). Second, there is a function which handles IMFTransform recursively, and it is hard to debug.
I will fix this later.
PS: Special thanks to the "Developing Microsoft Media Foundation Applications" book. It helped me a lot in creating a custom MediaSession.
Related
I need your advice. I'd like to develop an app for audio/video splitting with a Metro interface.
Usually I use DirectShow for this, with the following scheme: create a grabber, add it to the DirectShow graph, capture the audio/video streams with it, and pass them to my AVStream drivers for splitting. But in the new program I want to use Media Foundation inside UWP.
Here is how I see my new app. It must have a Metro interface for the common controls: choosing sources, setting parameters, changing modes, etc. I'd like to use the MediaCapture class to capture the streams and render them, too. I don't see any problems here; MSDN has many samples for it. But I have no idea how to insert a grabber between the source and the renderer.
The operations the grabber must perform:
Receive the input stream from MediaCapture.
Convert the stream (YUV -> RGB), add effects, etc. (the core conversion step is sketched after this list).
Send the output stream for rendering (MediaCapture) and to my AVStream driver for splitting among other apps (Skype, Adobe Flash Player, Edge, ...).
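Step 2 is the only computational part, and the per-pixel conversion itself is standard. A minimal sketch using the integer BT.601 coefficients from the MSDN YUV-to-RGB documentation (the function names are mine):

    #include <stdint.h>

    static uint8_t Clip(int v) { return (uint8_t)(v < 0 ? 0 : (v > 255 ? 255 : v)); }

    // One 8-bit YUV pixel to RGB (BT.601, video range).
    static void YuvToRgb(uint8_t y, uint8_t u, uint8_t v,
                         uint8_t* r, uint8_t* g, uint8_t* b)
    {
        int c = y - 16, d = u - 128, e = v - 128;
        *r = Clip((298 * c + 409 * e + 128) >> 8);
        *g = Clip((298 * c - 100 * d - 208 * e + 128) >> 8);
        *b = Clip((298 * c + 516 * d + 128) >> 8);
    }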
As for how to make the grabber, I found three ways in MSDN:
Sample Grabber Sink (https://msdn.microsoft.com/en-us/library/windows/desktop/hh184779(v=vs.85).aspx). Receiving/controlling/sending the stream in an MF DLL is no problem, but I don't know how to link that DLL with MediaCapture.
Source Reader (https://msdn.microsoft.com/en-us/library/windows/desktop/dd940436(v=vs.85).aspx). The same problem, plus the Source Reader doesn't work for playback.
A custom MFT? In any case, MediaCapture can attach an MFT via AddEffectAsync() (see the sketch after this list).
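For the custom MFT route, the wiring on the MediaCapture side is small. A hedged sketch in C++/CX ("MyGrabber.GrabberEffect" is a made-up activatable class ID for the MFT packaged as a Windows Runtime component):

    // Insert the custom MFT into the preview stream; MediaCapture activates
    // it by its Windows Runtime activatable class ID.
    auto capture = ref new Windows::Media::Capture::MediaCapture();
    concurrency::create_task(capture->InitializeAsync())
        .then([capture]()
    {
        return concurrency::create_task(capture->AddEffectAsync(
            Windows::Media::Capture::MediaStreamType::VideoPreview,
            "MyGrabber.GrabberEffect",  // hypothetical class ID
            nullptr));
    });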
My environment: MS Windows 10, MS Visual Studio Community 2015.
Thank you for any ideas.
This question, and UWP in general, is no longer relevant for me. I found the following:
"Some apps can work intensively in background, for example it maybe video converting, online financial data processing and more.
Now UWP application will suspended when it go offscreen."
https://wpdev.uservoice.com/forums/110705-universal-windows-platform/suggestions/9950598-exclude-suspending-in-desktop
So if the user minimizes the program window, the program stops the video stream.
I have a DirectShow video renderer derived from CBaseVideoRenderer. The renderer is used in a graph that receives data from a live source (BDA). It looks like the connections are established properly, but the video rendering ends immediately because there is no sample. However, audio rendering works, i.e. I can hear the sound while DoRenderSample of my renderer is never called.
Stepping through the code in the debugger, I found out that in CBaseRenderer::StartStreaming the stream ends immediately, because the member m_pMediaSample is NULL. If I replace my renderer with the EVR, it shows frames; i.e. the stream does not end before the first frame for the EVR, only for my renderer.
Why is that, and how can I fix it? Following the sample from http://www.codeproject.com/Articles/152317/DirectShow-Filters-Development-Part-Video-Render, I implemented what I understand to be the basic interface (CheckMediaType, SetMediaType and DoRenderSample), so I do not see any way to influence what is happening here...
Edit: This is the graph as seen from the ROT:
What I am basically trying to do is capture a DVB stream that uses VIDEOINFOHEADER2, which is not supported by the standard Sample Grabber. The channel is a public German TV channel without encryption, but could this be a DRM issue?
Edit 2: I have attached my renderer to another source (a Blackmagic Intensity Shuttle). It seems that the source causes the issue, because I get samples in the other graph.
Edit 3: Following Roman's suggestion, I have created a transform filter. The graph looks like
and unfortunately has the same problem, i.e. I do not get any samples (Transform is not called).
You have presumably chosen the wrong way to fetch video frames out of the media pipeline. That is, you are implementing a "network renderer": something that terminates the pipeline and sends the data onward to the network.
A renderer which accepts the feed sounds appropriate. Implementing a custom renderer, however, is an atypical task, and there is not much information around on it. Additionally, a fully featured renderer typically has a sample scheduling part and end-of-stream delivery, things that are relatively easy to break when you customize through inheriting from the base classes. That is, while the approach sounds good, you might want to compare it to another option you have, which is...
A combination of Sample Grabber + Null Renderer, two standard filters to which you can attach your callback and get frames while the pipeline is properly terminated. The problem here is that the standard Sample Grabber does not support VIDEOINFOHEADER2. With another video decoder you could possibly have the feed decoded into VIDEOINFOHEADER, which is one option. Improving the Sample Grabber itself is another solution: DirectX SDK Extras February 2005 (dxsdk_feb2005_extras.exe) included a filter similar to the standard Sample Grabber, called Grabber, under \DirectShow\Samples\C++\DirectShow\Filters\Grabber. It is/was available in source code and came with a good description text file. It is relatively easy to extend it to accept VIDEOINFOHEADER2 and make the payload data available to your application this way.
The easiest way to get data out of a DirectShow graph, if you're not going to use MultiMedia Streaming, is probably to write your own TransInPlace filter, a sub-variety of a Transform filter. Then connect this filter to the desired stream of data you wish to monitor, and then run, pause, seek, or otherwise control the graph. The data, as it passes through the transform filter, can be manipulated however you want. We call this kind of filter a "sample grabber". Microsoft released a limited-functionality sample grabber with DX8.0. This filter is limited because it doesn't deal with DV Data or mediatypes with a format of VideoInfo2. It doesn't allow the user to receive prerolled samples. (What's a preroll sample? See the DX8.1 docs) Its "OneShot" mode also has some problems.
To add to this, the Grabber sample is pretty simple itself: perhaps 1000 lines of code altogether, including comments.
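The extension itself is small. A minimal sketch of the type check (assuming a TransInPlace-style filter; CGrabber and the exact method in the sample may differ):

    // Accept FORMAT_VideoInfo2 in addition to FORMAT_VideoInfo so the filter
    // can connect to VIDEOINFOHEADER2 sources such as DVB video.
    HRESULT CGrabber::CheckInputType(const CMediaType* pmt)
    {
        if (*pmt->Type() != MEDIATYPE_Video)
            return VFW_E_INVALID_MEDIATYPE;
        if (*pmt->FormatType() != FORMAT_VideoInfo &&
            *pmt->FormatType() != FORMAT_VideoInfo2)  // the added case
            return VFW_E_INVALID_MEDIATYPE;
        return S_OK;
    }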
Looks like your decoder or splitter isn't demuxing the video frames. Look further up the chain to see what filters are supplying your renderer pin with data; chances are it's only recognising audio.
Try dropping the file into Graphedit (there's a better one on the web BTW) and see what filters it creates.
Then look at the samples in the DirectShow SDK.
This is my first question on Stack Overflow, after several days of looking for an explanation. Please be gentle with me, because I know my problem is a bit too bizarre to be a general one.
I made an MF video capture application, based on the Microsoft sample 'CaptureToFile'. It worked on Windows 7 x64, and I upgraded to Visual Studio 2013 without problems. Problems arose when I tried to move all the development to a Windows 8.1 x64 machine.
The app compiles and executes without error, but it is UNABLE to capture samples using m_pReader->ReadSample() in asynchronous mode. Only the first two samples arrive at the OnReadSample method, and those must be 'control' samples, because the IMFSample is null in both of them. After that, the app hangs waiting for data.
I've tried the original MFCaptureToFile sample with the same sad results.
Of course, I think the hardware and software are similar (the same capture card with the same driver version; both are desktop PCs...).
Do you know any possible reason for this behaviour? In Windows 7 everything works flawlessly! Or at least, could you give me some pointers toward new paths for finding out what's happening?
Thanks in advance
UPDATE: There is another 'player' in the game. Looking into the threads, I see that a worker thread is in 'RTWorkQ.dll', the real-time work queue container, which is specific to Windows 8. I will keep investigating. In the meantime, if you have any ideas or something to share, I'll be glad to hear from you.
UPDATE 2: I've modified the MFCaptureToFile sample to get the video samples synchronously, because I thought the problem could be due to the asynchronous behaviour related to the queues. I have to say that the problem persists even with this change: the second time it tries to read a sample, the application hangs waiting for a read that never completes.
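For reference, the synchronous test boils down to a loop around a call like this (a sketch; the reader is created without the MF_SOURCE_READER_ASYNC_CALLBACK attribute):

    // Blocks until the next sample or stream event arrives; in my case the
    // second call never returns.
    DWORD streamIndex = 0, streamFlags = 0;
    LONGLONG timestamp = 0;
    CComPtr<IMFSample> pSample;
    HRESULT hr = m_pReader->ReadSample(
        (DWORD)MF_SOURCE_READER_FIRST_VIDEO_STREAM, 0,
        &streamIndex, &streamFlags, &timestamp, &pSample);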
UPDATE 3: I've tried the CaptureEngine sample application, which uses another MF way to capture video (MFCaptureEngine). It builds and runs flawlessly, but it doesn't show any images when starting the 'preview' and doesn't record anything useful, only non-playable files.
UPDATE 4: I've installed Visual Studio 2010 Ultimate on Windows 8 Pro. The MFCaptureToFile sample fails again: it is unable to read a second sample from the frame grabber. I'm starting to think this may be an incompatibility between the capture card (Datapath VisionRGB-E1S) and Windows 8 Pro, even though the driver claims to work fine on this platform and the vendor's test program shows images. Tomorrow I'm going to run the test with an external USB webcam.
Finally, I have figured out the reason for this problem.
With the Windows 8.1 release, Microsoft introduced New AVStream Interfaces for Windows 8.1.
There is a small but very important change in the KS_FRAME_INFO structure: the new FrameCompletionNumber member.
An identifying sequence number for the frame in the completed queue. This number is used to verify proper frame order. When this value is 0, the frame was cancelled. This member is available starting with Windows 8.1.
DirectShow doesn't care about this number, but Media Foundation does.
So you cannot just fix this on the user-mode side; the manufacturer's developers must release a driver update. By the way, I have two webcams, a Logitech C270 and a Creative Live Socialize HD: the Logitech supports Metro while the Creative does not.
I have successfully updated my driver with only a few lines of code (to set up FrameCompletionNumber properly).
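The change sits on the frame-completion path of the AVStream minidriver. A hedged sketch (it assumes the avshws-style layout where KS_FRAME_INFO follows the stream header, and m_FrameNumber is a per-pin counter I added):

    // Stamp a non-zero, increasing sequence number on every completed frame;
    // 0 is reserved for cancelled frames.
    PKS_FRAME_INFO FrameInfo = (PKS_FRAME_INFO)(StreamHeader + 1);
    FrameInfo->FrameCompletionNumber = ++m_FrameNumber;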
UPD: a similar thread: http://www.osronline.com/showthread.cfm?link=255004
It must be a problem with the Datapath VisionRGB-E1S frame grabber. I've tried with a brand-new USB webcam (LifeCam Studio), and everything worked fine.
I will leave the question of why this behaviour differs between Windows 8 and Windows 7 for a future thread, but it could be something related to user-mode access...
I had the same kind of issue:
IMFSourceReader was obtained successfully
reader->SetCurrentMediaType() reported no error.
reader->ReadSample() was successful.
then OnReadSample() was called only once, with the hrStatus argument set to 0x80070491.
For me, the issue was that I modified the native video IMFMediaType's subtype and then applied it to the reader as the current media type.
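A common way to avoid this (a sketch, not necessarily the only fix) is to build a fresh media type containing only the attributes you want, instead of mutating the native type:

    // Create an empty type, set the major type and desired subtype, and let
    // the source reader insert converters as needed.
    CComPtr<IMFMediaType> pType;
    HRESULT hr = MFCreateMediaType(&pType);
    hr = pType->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video);
    hr = pType->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_RGB32);
    hr = reader->SetCurrentMediaType(
        (DWORD)MF_SOURCE_READER_FIRST_VIDEO_STREAM, NULL, pType);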
I've stumbled through some code to enumerate my microphone devices (with some help), and am able to grab the "friendly name" and "clsid" information from each device.
I've done some tinkering with GraphEd.exe to try to figure out how I can take audio from DirectShow and write it to a file (I'm not currently concerned about the format; WAV should be fine), but I can't seem to find the right combination.
One of the articles I've read linked to this Windows SDK sample, but when I examined the code I ended up pretty confused about how to use it, i.e. setting the output file or specifying which audio capture device to use.
I also came across a codeguru article that has a nicely featured audio recorder, but it does not have an interface for selecting the audio device, and I can't seem to find where it statically picks which recording device to use.
I think I'd be most interested in figuring out how to use the Windows SDK sample, but any explanation on either of the two approaches would be fantastic.
Edit: I should mention my knowledge and ability as a win32 COM programmer is very low on the scale, so if this is easy, just explain it to me like I'm five, please.
Recording audio to a file with DirectShow requires building the right filter graph, as you have probably figured out already. The parts include:
The capture device itself, which you instantiate via its moniker (not its CLSID!); its output is typically PCM.
A multiplexer component that converts the stream into a container format.
The File Writer filter, which takes the file-compatible stream and writes it to a file.
The tricky part is #2, since there is no standard component available. The Windows SDK samples, however, contain the missing piece: the WavDest Filter Sample. Once you build it and register it, you can build a graph that records from the device into a .WAV file.
Your graph will look like this, and it's built easily programmatically as well:
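A hedged sketch of the programmatic build (error handling elided; CLSID_WavDest is whatever the built sample registers, and pMoniker is the capture device moniker you enumerated):

    // Graph: audio capture device -> WavDest -> File Writer.
    CComPtr<IGraphBuilder> pGraph;
    pGraph.CoCreateInstance(CLSID_FilterGraph);
    CComPtr<ICaptureGraphBuilder2> pBuild;
    pBuild.CoCreateInstance(CLSID_CaptureGraphBuilder2);
    pBuild->SetFiltergraph(pGraph);

    CComPtr<IBaseFilter> pSource;   // the device, instantiated via its moniker
    pMoniker->BindToObject(NULL, NULL, IID_IBaseFilter, (void**)&pSource);
    pGraph->AddFilter(pSource, L"Audio Capture");

    CComPtr<IBaseFilter> pWavDest;  // the WavDest sample multiplexer
    pWavDest.CoCreateInstance(CLSID_WavDest);
    pGraph->AddFilter(pWavDest, L"WavDest");

    CComPtr<IBaseFilter> pWriter;   // standard File Writer filter
    pWriter.CoCreateInstance(CLSID_FileWriter);
    pGraph->AddFilter(pWriter, L"File Writer");
    CComQIPtr<IFileSinkFilter> pSink(pWriter);
    pSink->SetFileName(L"C:\\capture.wav", NULL);

    pBuild->RenderStream(NULL, NULL, pSource, pWavDest, pWriter);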
I noticed that I have a variation of WavDest installed with Google Earth - for the case you have troubles building it yourself and you will be looking for prebuilt binary.
You can instruct ffmpeg to record from a DirectShow device and output to a file, for example:
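    ffmpeg -list_devices true -f dshow -i dummy
    ffmpeg -f dshow -i audio="Microphone (Realtek High Definition Audio)" out.wav

The first command lists the available DirectShow device names; the quoted microphone name is an example and will differ on your machine.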
I need to split a PCM audio stream with up to 16 channels into several stereo streams.
As I haven't found anything capable of doing that, I'm trying to write my first DirectShow filter.
Anything capable of splitting the audio would be very welcome, but assuming I must do it myself, here's what I've done:
At first, I tried to create a filter based on ITransformFilter. However, it seems to be designed for filters with exactly one input pin and one output pin. As I need several output pins, I set it aside; perhaps it can be adapted more easily than I thought, though, so any advice is highly appreciated.
Then I started from IBaseFilter and managed to get somewhere: I create the necessary output pins when the input pin gets connected, and destroy them when the input gets disconnected. However, when I connect any output pin to an ACM Wrapper (just to test), the input tries to reconnect, destroying all my output pins.
I tried to just not destroy them, but then I checked the media type of my input pin and it had changed to a stereo stream. I'm not calling QueryAccept from my code.
How could I avoid the reconnection, or what's the right way to do a demuxer filter?
Edit 2010-07-09:
I've come back to ITransformFilter, but now creating the necessary pins myself. However, I've encountered the same problem as with IBaseFilter: when I connect my output pin to an ACM Wrapper, the input pin changes its media type to 2 channels.
Not sure how to proceed now...
You can take a look at the DMOSample in the Windows Server 2003 R2 Platform SDK. It is also included in older DirectX SDKs, but not in newer Windows SDKs. You can find it under Samples\Multimedia\DirectShow\DMO\DMOSample. Here is the documentation for this sample.
I have seen someone create a filter based on this which had a stereo input and two mono outputs. Unfortunately, I cannot post the source code.
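Whichever filter model you end up with, the per-buffer splitting step at its heart is simple. A minimal sketch (plain C++; the names are mine) of pulling one stereo pair out of interleaved 16-bit PCM:

    #include <stddef.h>
    #include <stdint.h>

    // Copy channels (pair*2) and (pair*2 + 1) out of an interleaved buffer
    // holding `frames` frames of `totalChannels` 16-bit samples each.
    void ExtractStereoPair(const int16_t* in, size_t frames, int totalChannels,
                           int pair, int16_t* out)
    {
        for (size_t f = 0; f < frames; ++f) {
            out[f * 2]     = in[f * totalChannels + pair * 2];
            out[f * 2 + 1] = in[f * totalChannels + pair * 2 + 1];
        }
    }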