DirectShow video stream ends immediately (m_pMediaSample is NULL) - C++

I have a DirectShow video renderer derived from CBaseVideoRenderer. The renderer is used in a graph that receives data from a live source (BDA). The connections appear to be established properly, but video rendering ends immediately because there is no sample. Audio rendering works, however, i.e. I can hear the sound, while DoRenderSample of my renderer is never called.
Stepping through the code in the debugger, I found that in CBaseRenderer::StartStreaming the stream ends immediately because the member m_pMediaSample is NULL. If I replace my renderer with the EVR, it shows frames, i.e. the stream does not end before the first frame for the EVR, only for my renderer.
Why is that, and how can I fix it? Following the sample from http://www.codeproject.com/Articles/152317/DirectShow-Filters-Development-Part-Video-Render, I implemented what I understand to be the basic interface (CheckMediaType, SetMediaType and DoRenderSample), so I do not see how I could influence what is happening here...
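For context, a minimal sketch of the renderer skeleton just described (class name and CLSID handling are illustrative, not my actual code):

#include <streams.h>   // DirectShow base classes (CBaseVideoRenderer)

class CMyVideoRenderer : public CBaseVideoRenderer
{
public:
    CMyVideoRenderer(LPUNKNOWN pUnk, HRESULT *phr)
        : CBaseVideoRenderer(GUID_NULL /* put your own CLSID here */,
                             NAME("My Video Renderer"), pUnk, phr) {}

    // Accept only VIDEOINFOHEADER2 video; if this check is too strict
    // (or too loose) the upstream filter may connect differently than
    // expected, or not deliver at all.
    HRESULT CheckMediaType(const CMediaType *pmt)
    {
        if (pmt->majortype != MEDIATYPE_Video ||
            pmt->formattype != FORMAT_VideoInfo2)
            return VFW_E_TYPE_NOT_ACCEPTED;
        return S_OK;
    }

    // Called once per delivered frame while the graph runs.
    HRESULT DoRenderSample(IMediaSample *pSample)
    {
        BYTE *pData = NULL;
        if (SUCCEEDED(pSample->GetPointer(&pData)))
        {
            // consume pData / pSample->GetActualDataLength() here
        }
        return S_OK;
    }
};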
Edit: This is the graph as seen from the ROT:
What I am basically trying to do is capture a DVB stream that uses VIDEOINFOHEADER2, which is not supported by the standard Sample Grabber. Although the channel is a public German TV channel without encryption, could this be a DRM issue?
Edit 2: I have attached my renderer to another source (a Blackmagic Intensity Shuttle). It seems that the source causes the issue, because I do get samples in the other graph.
Edit 3: Following Roman's suggestion, I have created a transform filter. The graph looks like
and unfortunately has the same problem, i.e. I do not get any samples (Transform is never called).

You have presumably chosen the wrong path for fetching video frames out of the media pipeline. In effect you are implementing a "network renderer": something that terminates the pipeline in order to send the data onward, e.g. to the network.
A renderer which accepts the feed sounds appropriate. Implementing a custom renderer, however, is an atypical task, and there is not much information around on it. Additionally, a fully featured renderer is typically equipped with a sample scheduling part and end-of-stream delivery - things that are relatively easy to break when you customize it by inheriting from the base classes. That is, while the approach sounds good, you might want to compare it to the other option you have, which is...
A combination of Sample Grabber + Null Renderer, two standard filters to which you can attach your callback and get frames while having the pipeline properly terminated. The problem here is that the standard Sample Grabber does not support VIDEOINFOHEADER2. With another video decoder you could possibly have the feed decoded into VIDEOINFOHEADER, which is one option. Improving the Sample Grabber itself is another solution: the DirectX SDK Extras of February 2005 (dxsdk_feb2005_extras.exe) included a filter similar to the standard Sample Grabber, called Grabber (\DirectShow\Samples\C++\DirectShow\Filters\Grabber). It is/was available in source code and came with a good description text file. It is relatively easy to extend it to accept VIDEOINFOHEADER2 and make the payload data available to your application this way.
The easiest way to get data out of a DirectShow graph, if you're not going to use MultiMedia Streaming, is probably to write your own TransInPlace filter, a sub-variety of a Transform filter. Then connect this filter to the desired stream of data you wish to monitor, and then run, pause, seek, or otherwise control the graph. The data, as it passes through the transform filter, can be manipulated however you want. We call this kind of filter a "sample grabber". Microsoft released a limited-functionality sample grabber with DX8.0. This filter is limited because it doesn't deal with DV Data or mediatypes with a format of VideoInfo2. It doesn't allow the user to receive prerolled samples. (What's a preroll sample? See the DX8.1 docs) Its "OneShot" mode also has some problems.
To add to this, the Grabber sample is pretty simple itself - perhaps 1000 lines of code altogether, including comments.
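To give a feel for the change involved, here is a sketch of how the sample's input-type check might be relaxed to also accept VIDEOINFOHEADER2 (class and method names follow the base-class conventions and are assumptions here, not the sample's actual code):

// Relaxed input check for a TransInPlace grabber filter:
HRESULT CGrabber::CheckInputType(const CMediaType *pmt)
{
    if (pmt->majortype != MEDIATYPE_Video)
        return VFW_E_TYPE_NOT_ACCEPTED;

    // Accept both the classic and the extended video format header,
    // so DVB video using VIDEOINFOHEADER2 can connect.
    if (pmt->formattype == FORMAT_VideoInfo ||
        pmt->formattype == FORMAT_VideoInfo2)
        return S_OK;

    return VFW_E_TYPE_NOT_ACCEPTED;
}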

Looks like your decoder or splitter isn't demuxing the video frames. Look further up the chain to see which filters are supplying your renderer pin with data; chances are it's only recognising audio.
Try dropping the file into GraphEdit (there's a better one on the web, BTW) and see which filters it creates.
Then look at the samples in the DirectShow SDK.

Grabber for splitting in UWP

I need your advice. I'd like to develop an app for audio/video splitting with a Metro interface.
Usually I use DirectShow for this with the following scheme: create a grabber, add it to the DS graph, capture the audio/video streams with it, and pass them to my AVStream drivers for splitting. But in the new program I want to use Media Foundation and host it in UWP.
Here is how I see my new app. It must have a Metro interface for the common controls: choosing sources, setting parameters, changing modes, etc. I'd like to use the MediaCapture class both to capture the streams and to render them. I don't see any problems here; MSDN has many samples for this. But I have no idea how to insert a grabber between the source and the renderer.
The operations the grabber will perform:
1. Receive the input stream from MediaCapture.
2. Convert the stream: YUV -> RGB, add effects, etc.
3. Send the output stream for rendering (MediaCapture) and to my AVStream driver for splitting with other apps (Skype, Adobe Flash Player, Edge, ...).
How to make the grabber? On MSDN I found three ways:
1. Sample Grabber Sink (https://msdn.microsoft.com/en-us/library/windows/desktop/hh184779(v=vs.85).aspx). No problem receiving/controlling/sending the stream in an MF DLL, but I don't know how to link that DLL with MediaCapture.
2. Source Reader (https://msdn.microsoft.com/en-us/library/windows/desktop/dd940436(v=vs.85).aspx). The same problem, plus the Source Reader doesn't work for playback.
3. A custom MFT? In any case, MediaCapture allows connecting an MFT via AddEffectAsync() - see the sketch below.
My environment: MS Windows 10, MS Visual Studio Community 2015.
Thank you for any ideas.
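For option 3, a hedged C++/CX sketch of attaching a custom MFT to MediaCapture via AddEffectAsync(); "MyCompany.GrabberEffect" is a hypothetical activatable class ID under which the MFT would have to be registered in the app manifest:

#include <ppltasks.h>

using namespace concurrency;
using namespace Windows::Media;
using namespace Windows::Media::Capture;

void AttachGrabberEffect(MediaCapture^ capture)
{
    // "MyCompany.GrabberEffect" is an assumption, not a real component;
    // MediaCapture loads the MFT by its activatable class ID.
    create_task(capture->AddEffectAsync(
            MediaStreamType::VideoPreview,
            "MyCompany.GrabberEffect",
            nullptr /* no effect settings */))
        .then([](IMediaExtension^ effect)
    {
        // The MFT now sits between the capture source and the preview
        // sink, where it could convert YUV -> RGB and tee frames off.
    });
}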
This question, and UWP in general, are no longer relevant for me. I found the following:
"Some apps can work intensively in the background, for example video converting, online financial data processing and more. Now a UWP application will be suspended when it goes offscreen."
https://wpdev.uservoice.com/forums/110705-universal-windows-platform/suggestions/9950598-exclude-suspending-in-desktop
So if the user minimizes the program window, the program's video stream stops.

How to feed video data into a DirectShow filter to compress/encode and save to file?

First of all, here is what I'm trying to accomplish:
We'd like to add the capability to our commercial application to generate a video file to visualize data. It should be saved in a reasonably compressed format. It is important that the encoding library/codecs are licensed such that we're allowed to use and sell our software without restriction. It's also important that the media format can easily be played by a customer, i.e. can be played by Windows Media Player without requiring a codec pack to be installed.
I'm trying to use DirectShow in C++ by creating a source filter with one output pin that generates the video. I'm closely following the DirectShow samples called Bouncing Ball and Push Source. In GraphEdit I can successfully connect to a video renderer and see the video play. I have also managed to connect to AVI Mux and then File Writer to write an uncompressed AVI file. The only issue with this is the huge file size. However, I have not been able to save the video in a compressed format. This problem also occurs with the Ball and Push Source samples.
I can connect the output pin to a WM ASF Writer, but when I click play I get "This graph can't play. Unspecified error (Return code: 0x80004005)."
I can't even figure out how to connect to the WMVideo9 Encoder DMO ("These filters cannot agree on a connection"). I could successfully save to MJPEG, but the compression was not very substantial.
Please let me know if I'm doing something wrong in GraphEdit or if my source filter code needs to be modified. Alternatively, if there is another (non-DirectShow) option that would work for me I'm open to suggestions. Thanks.
You don't give enough detail to help with your modification of the filters; however, the Ball sample generates output which can be written to a file.
Your choice of the WM ASF Writer filter is okay - it is a stock filter and it is more or less easy to deal with. There is however a caveat: you need to select a video-only profile on the filter first, and then connect the video input. The WM ASF Writer won't run with an unconnected input pin, and in its default state it also has an audio input. Of course this can also be done programmatically.
The graph can be as simple as this, and it can be run and generates a playable file.
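Programmatically, the profile selection can look like this (a sketch; WMProfile_V80_256Video is an illustrative stock profile - for a strictly video-only profile you may need to author one with IWMProfileManager):

#include <dshow.h>
#include <dshowasf.h>   // IConfigAsfWriter
#include <wmsysprf.h>   // predefined system profile GUIDs

// Configure the WM ASF Writer before connecting pins; in its default
// state it exposes an audio input that would otherwise have to be
// connected for the graph to run.
HRESULT ConfigureAsfWriter(IBaseFilter *pAsfWriter)
{
    IConfigAsfWriter *pConfig = NULL;
    HRESULT hr = pAsfWriter->QueryInterface(IID_IConfigAsfWriter,
                                            (void**)&pConfig);
    if (FAILED(hr))
        return hr;

    // Pick a profile (illustrative choice, see note above).
    hr = pConfig->ConfigureFilterUsingProfileGuid(WMProfile_V80_256Video);

    pConfig->Release();
    return hr;
}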

DirectShow - how to determine if stream is valid (C++)

I have been tasked with fixing a bug in a medical application that, among other things, can capture snapshots from intra-oral video cameras. It uses a DirectShow SampleGrabber for this task. I must make the disclaimer that I have not worked with DirectShow so I'm trying to get up to speed as I go. I understand basically how the various components work together.
Anyway, the bug itself is seemingly trivial, but I can't figure out a workaround. Due to the modular nature of this system, the preview window is part of a separate graph from the one created by the SampleGrabber (it's a long story, but this is due to legacy code supporting previous devices). When the camera is active we can take snapshots and everything is happy. When the camera is turned off, the SampleGrabber takes a dark frame, but DirectShow crashes when releasing the IAMStreamConfig interface created in the preview module (access violation). It seems like the SampleGrabber graph is somehow corrupting the graph built in the preview module. Due to the nature of this application, I cannot show any source here, but essentially here's what I want to accomplish:
I need to be able to detect whether the camera is actually on or not. The problem I'm having is that when the camera is plugged in (USB), it looks to the system like it is on and returning a video stream; it's just that the stream contains no real data. When I check the state of the capture filter with the GetState method, it claims it is running, and when I check the video format properties it returns the correct properties. It seems to me like the button on the camera simply turns the camera sensor itself on and off, and the device still returns a blank stream when the camera is off. Something must be different, though, because it doesn't crash when the sensor is actually on and returning live video.
Does anybody have an idea of how I could determine if the stream is blank or has live video? IE, are there any exposed interfaces or methods I could call to determine this? I have looked through all of the various interfaces in MSDN's DirectShow documentation but haven't been able to find a way to do this.
If you don't want the callback function of your sample grabber to be called, you may consider adding a special transform filter between the sample grabber and the source filter (or right after the source filter); what this transform filter does is check whether the input sample is corrupted and block corrupted samples. This basically requires you to implement your own Transform() function:
HRESULT CRleFilter::Transform(IMediaSample *pSource, IMediaSample *pDest)
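For illustration, a sketch of such a Transform() that blocks blank samples; IsBlankFrame is a hypothetical helper, and returning S_FALSE from a CTransformFilter-derived Transform() makes the base class drop the sample:

#include <streams.h>

bool IsBlankFrame(const BYTE *pData, long cbData);  // hypothetical helper

HRESULT CRleFilter::Transform(IMediaSample *pSource, IMediaSample *pDest)
{
    BYTE *pIn = NULL, *pOut = NULL;
    HRESULT hr = pSource->GetPointer(&pIn);
    if (FAILED(hr)) return hr;
    hr = pDest->GetPointer(&pOut);
    if (FAILED(hr)) return hr;

    long cb = pSource->GetActualDataLength();
    if (IsBlankFrame(pIn, cb))
        return S_FALSE;               // S_FALSE = drop, don't deliver

    CopyMemory(pOut, pIn, cb);        // pass good frames through
    return pDest->SetActualDataLength(cb);
}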
In the filter you connect after the source filter (or the earliest filter you have access to), check the IMediaSample it receives in its Receive() function:
HRESULT Receive(IMediaSample *pSample);
In case you're using ISampleGrabber, you should set its callback function using ISampleGrabber::SetCallback:
HRESULT SetCallback(
  ISampleGrabberCB *pCallback,
  long WhichMethodToCallback
);
This requires you to implement a class that implements ISampleGrabberCB. After that, you can check each received sample in the SampleCB function:
HRESULT SampleCB(
  double SampleTime,
  IMediaSample *pSample
);
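A minimal sketch of such a callback class (assuming SampleCB mode, i.e. WhichMethodToCallback == 0):

#include <dshow.h>
#include <qedit.h>   // ISampleGrabberCB

class CGrabberCB : public ISampleGrabberCB
{
public:
    // IUnknown - the usual shortcut for a callback whose lifetime is
    // managed by the application rather than by COM reference counts.
    STDMETHODIMP_(ULONG) AddRef()  { return 2; }
    STDMETHODIMP_(ULONG) Release() { return 1; }
    STDMETHODIMP QueryInterface(REFIID riid, void **ppv)
    {
        if (riid == IID_ISampleGrabberCB || riid == IID_IUnknown)
        {
            *ppv = static_cast<ISampleGrabberCB*>(this);
            return S_OK;
        }
        *ppv = NULL;
        return E_NOINTERFACE;
    }

    // Called for each sample when WhichMethodToCallback == 0.
    STDMETHODIMP SampleCB(double SampleTime, IMediaSample *pSample)
    {
        BYTE *pData = NULL;
        if (SUCCEEDED(pSample->GetPointer(&pData)))
        {
            long cb = pSample->GetActualDataLength();
            // inspect pData/cb here (e.g. blank-frame test)
        }
        return S_OK;
    }

    // Called instead when WhichMethodToCallback == 1.
    STDMETHODIMP BufferCB(double, BYTE*, long) { return E_NOTIMPL; }
};

// Hooking it up (pGrabber is the filter's ISampleGrabber interface):
// CGrabberCB cb;
// pGrabber->SetCallback(&cb, 0);   // 0 = use SampleCB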
There is no universal way to tell whether the camera is connected or whether the stream is blank. You typically have one of the following scenarios:
you stop receiving any samples when the camera is off
you receive samples with all pixels zeroed out, a fully blue picture, or something of that sort
Some cameras have a signal-loss notification, but it is model specific, as is the notification method.
So in the first case you simply stop having your callback called. To cover the second one, you need to check whether the frame is entirely filled with a single solid color. When you capture raw (uncompressed) video, this is a fairly simple thing to do.
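For packed 24-bit RGB frames, for example, the check can be a straight scan over the buffer (a sketch; it ignores row padding, which for some widths differs from the pixel data):

// Returns true if every 24-bit pixel equals the first one, i.e. the
// frame is a single solid color. pData/cbData come straight from
// IMediaSample::GetPointer / GetActualDataLength.
bool IsSolidColorRGB24(const BYTE *pData, long cbData)
{
    if (cbData < 3)
        return true;
    const BYTE b = pData[0], g = pData[1], r = pData[2];
    for (long i = 3; i + 2 < cbData; i += 3)
    {
        if (pData[i] != b || pData[i + 1] != g || pData[i + 2] != r)
            return false;
    }
    return true;
}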

How to split audio or write demuxer filter in directshow?

I need to split a PCM audio stream with up to 16 channels into several stereo streams.
As I haven't found anything capable of doing that, I'm trying to write my first DirectShow filter.
Anything capable of splitting the audio would be very welcome, but assuming I have to do it myself, here's what I've done so far:
At first, I tried to create a filter based on CTransformFilter. However, it seems to be designed for filters with exactly one input pin and one output pin. As I need several output pins, I set it aside; perhaps it can be adapted more easily than I thought, though, so any advice is highly appreciated.
Then I started over, basing my filter on IBaseFilter. I managed to get somewhere: I create the necessary output pins when the input pin gets connected, and destroy them when the input gets disconnected. However, when I connect any output pin to an ACM Wrapper (just to test it), the input tries to reconnect, destroying all my output pins.
I tried simply not destroying them, but then I checked the media type of my input pin and it had changed to a stereo stream. I'm not calling QueryAccept from my code.
How can I avoid the reconnection, or what is the right way to write a demuxer filter?
Edit 2010-07-09:
I've come back to CTransformFilter, but now I create the necessary pins myself. However, I've encountered the same problem as with IBaseFilter: when I connect my output pin to an ACM Wrapper, the input pin changes its media type to 2 channels.
Not sure how to proceed now...
You can take a look at the DMOSample in the Windows Server 2003 R2 Platform SDK. It is also included in older DirectX SDKs, but not in newer Windows SDKs. You can locate it in Samples\Multimedia\DirectShow\DMO\DMOSample. Here is the documentation of this sample.
I have seen someone create a filter based on this which had a stereo input and two mono outputs. Unfortunately I cannot post the source code.
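As a generic illustration, though, here is a sketch of the kind of media type each stereo output pin could propose, derived from the N-channel PCM input (the helper name is an assumption; this is roughly what an output pin's GetMediaType() would build):

#include <streams.h>
#include <mmreg.h>

// Build a 2-channel PCM media type that mirrors the input's sample
// rate and bit depth.
HRESULT BuildStereoType(const WAVEFORMATEX *pInWfx, CMediaType *pmt)
{
    WAVEFORMATEX wfx = *pInWfx;       // copy rate and bit depth
    wfx.wFormatTag = WAVE_FORMAT_PCM;
    wfx.nChannels = 2;                // one stereo pair per output pin
    wfx.nBlockAlign = wfx.nChannels * wfx.wBitsPerSample / 8;
    wfx.nAvgBytesPerSec = wfx.nSamplesPerSec * wfx.nBlockAlign;
    wfx.cbSize = 0;

    pmt->SetType(&MEDIATYPE_Audio);
    pmt->SetSubtype(&MEDIASUBTYPE_PCM);
    pmt->SetFormatType(&FORMAT_WaveFormatEx);
    pmt->SetTemporalCompression(FALSE);
    pmt->SetSampleSize(wfx.nBlockAlign);
    if (pmt->SetFormat((BYTE*)&wfx, sizeof(wfx)) == NULL)
        return E_OUTOFMEMORY;
    return S_OK;
}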

Frame accurate synchronizing of subtitle files with MPEG video using DirectShow

This is a problem I have been dealing with for a while, and haven't been able to get a good answer (even from Microsoft). I'm using the generic dump filter to write hardware compressed MPEG files out to disk. In the graph, I also have a SampleGrabber filter that gets called on every frame. From the SampleGrabber callback, I get a subtitle, along with the DirectShow timestamp and write them out to a SAMI (.smi) subtitle file. This all seems to be working, as the SAMI file contains the correct subtitles for every frame. However, I have a few problems:
1. The first few (usually 3 or 4) DirectShow timestamps are all 0. If I'm getting callbacks from the SampleGrabber, shouldn't these timestamps be incrementing?
2. When I begin playback, the first timestamp shown is about 10-20 subtitles into the SAMI file. I'd assume the first frame would show the first timestamp in the file.
3. This is probably related to #2, but the subtitles are not synchronized to the appropriate frames in the file. They can sometimes be up to 40 frames late.
I'm using DirectShow via C++, capturing with a Hauppauge HVR-1800 under Windows XP SP3 (with latest drivers 09/08/2008), and playing back under Media Player Classic 6.4.9.0. Any ideas are welcome.
Are you calling the incoming IMediaSample's GetTime or GetMediaTime? GetTime is what you want, as it represents the stream's presentation time.
Be sure to also check the incoming IMediaSample's IsPreroll method. Preroll samples should be ignored, as they will be output again during playback. Another thing I would do is make sure that your sample grabber is as far downstream in your filter graph as it can be, preferably after any demuxers and decoders.
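Put together in a grabber callback, the two checks might look like this (a sketch; the class name is an assumption, and note that GetTime can also return VFW_S_NO_STOP_TIME, which still provides a valid start time):

// Inside an ISampleGrabberCB implementation:
STDMETHODIMP CGrabberCB::SampleCB(double SampleTime, IMediaSample *pSample)
{
    // Preroll samples are delivered again during playback - don't
    // write subtitle entries for them.
    if (pSample->IsPreroll() == S_OK)
        return S_OK;

    REFERENCE_TIME tStart = 0, tStop = 0;
    HRESULT hr = pSample->GetTime(&tStart, &tStop);
    if (hr == VFW_E_SAMPLE_TIME_NOT_SET)
        return S_OK;  // no presentation time attached to this sample

    // tStart is the presentation time in 100 ns units relative to the
    // stream start - this is the value to write into the SAMI file.
    return S_OK;
}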
Also see the article on TimeStamps in the DirectShow documentation. It outlines the other caveats of using timestamps.
Of course, even after all of the tips above, there is still no absolute guarantee as to how a particular DirectShow filter is going to behave.