Saving frames in a DirectShow filter - C++

I have an application that relies on a C++ DirectShow transform filter. To analyze what is going on step by step, I want to save each frame from the camera that the filter processes. What would be the simplest way to achieve this in the filter itself?

Can you insert a Sample Grabber filter into the graph?
Beware though: the frame will be in some pixel format, and if it's not RGB24 you'll have a hard time analyzing it. If possible, configure your input source for RGB24 and you'll be on your way.
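If you do want to capture frames from inside the filter itself, here is a minimal sketch of dumping each sample to disk from a CTransformFilter-derived filter's Transform method. CMyFilter and m_frameIndex are hypothetical names, and this assumes the input is already RGB24 (otherwise you'd dump whatever format was negotiated):

```cpp
// Minimal sketch: write every media sample to its own raw file from inside
// a CTransformFilter-derived filter. Frame dimensions and pixel format come
// from the negotiated media type, so the files here are headerless raw data.
#include <streams.h>   // DirectShow base classes
#include <cstdio>

HRESULT CMyFilter::Transform(IMediaSample *pIn, IMediaSample *pOut)
{
    BYTE *pData = nullptr;
    if (SUCCEEDED(pIn->GetPointer(&pData)))
    {
        // One file per frame; m_frameIndex is a hypothetical member counter.
        char name[MAX_PATH];
        sprintf_s(name, "frame_%06ld.raw", m_frameIndex++);

        FILE *f = nullptr;
        if (fopen_s(&f, name, "wb") == 0 && f)
        {
            fwrite(pData, 1, pIn->GetActualDataLength(), f);
            fclose(f);
        }
    }
    // ... the filter's normal transform work continues here ...
    return S_OK;
}
```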
Expand your question if you need more info.

Related

Playing DirectShow streams asynchronously into composited output

I'm a DirectShow newbie. I'm trying to get DirectShow to play back a set of media files, but NOT simultaneously.
I've tried allocating one graph and using RenderFile to add each file into it, but when I invoke IMediaControl::Run, they ALL begin playing at the same time.
I've tried allocating one graph and one IMediaControl per file and then calling Run at different times on each. This works: the streams play independently.
How do I combine the streams to an output window?
Is it possible to have a master surface on which the other streams are rendered into rectangles?
Since the streams are not in the same graph, can it be done?
What do I use for a surface or output?
Thanks
All filters in a graph are expected to change state together, so you do indeed need a separate graph for every file you want to play back independently of the others.
If you are going to play the files simply side by side, without effects, overlapping etc., the easiest option is to use separate video renderers, treating them as controls and positioning them properly in your UI.
If you instead want something more sophisticated, there are two ways to choose from: either you take the decompressed video/audio out of the DirectShow filter graph using Sample Grabber or a similar filter, and you are then responsible for presenting the data yourself with other APIs; or you implement a custom allocator/presenter (also known as the renderless mode of operation of the video renderer) and finely control the video output, which in particular lets you get a frame into a texture or offscreen surface while leaving presentation itself to you.
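For the side-by-side option, here is a minimal sketch of building one graph per file and parenting each video renderer window into your own UI. hwndParent, the target RECT, and the Player struct are assumptions supplied by your own code:

```cpp
// One graph per file, each renderer embedded as a borderless child window.
#include <dshow.h>
#pragma comment(lib, "strmiids.lib")

struct Player {
    IGraphBuilder *graph   = nullptr;
    IMediaControl *control = nullptr;
    IVideoWindow  *window  = nullptr;
};

HRESULT CreatePlayer(const wchar_t *file, HWND hwndParent, RECT rc, Player &p)
{
    HRESULT hr = CoCreateInstance(CLSID_FilterGraph, nullptr,
                                  CLSCTX_INPROC_SERVER,
                                  IID_IGraphBuilder, (void **)&p.graph);
    if (FAILED(hr)) return hr;

    hr = p.graph->RenderFile(file, nullptr);   // build the playback chain
    if (FAILED(hr)) return hr;

    p.graph->QueryInterface(IID_IMediaControl, (void **)&p.control);
    p.graph->QueryInterface(IID_IVideoWindow, (void **)&p.window);

    // Embed the renderer's window into a rectangle of our own window.
    p.window->put_Owner((OAHWND)hwndParent);
    p.window->put_WindowStyle(WS_CHILD | WS_CLIPSIBLINGS);
    p.window->SetWindowPosition(rc.left, rc.top,
                                rc.right - rc.left, rc.bottom - rc.top);
    return S_OK;
}

// Each player can then be started independently: p.control->Run();
```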

Write to DirectShow source filter

I have a DirectShow source filter which is based on http://tmhare.mvps.org/downloads/vcam.zip. I want to write the webcam frames that have been manipulated using OpenCV by my (separate) application to this virtual webcam (DirectShow filter). How can I do this?
Any helpful code snippets please?
A common practice for getting at frames in DirectShow is adding a Sample Grabber filter after your source filter.
The Sample Grabber's purpose is to give you access to the frames passing through the graph.
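A minimal sketch of receiving frames through the Sample Grabber's callback interface follows. Note that qedit.h is deprecated and missing from newer SDKs, so you may need to supply the ISampleGrabberCB declaration yourself:

```cpp
// Sketch of a Sample Grabber callback: BufferCB hands you a copy of every
// frame as it flows through the graph.
#include <dshow.h>
#include <qedit.h>   // deprecated; may need to be copied from an older SDK

class FrameCallback : public ISampleGrabberCB
{
public:
    // Called with a copy of each buffer the grabber sees.
    STDMETHODIMP BufferCB(double sampleTime, BYTE *pBuffer, long bufferLen) override
    {
        // Inspect the frame or hand it to your OpenCV code here.
        return S_OK;
    }
    STDMETHODIMP SampleCB(double, IMediaSample *) override { return E_NOTIMPL; }

    // Trivial COM plumbing for an object with static lifetime.
    STDMETHODIMP QueryInterface(REFIID riid, void **ppv) override
    {
        if (riid == IID_IUnknown || riid == IID_ISampleGrabberCB)
        { *ppv = this; return S_OK; }
        return E_NOINTERFACE;
    }
    STDMETHODIMP_(ULONG) AddRef() override  { return 2; }
    STDMETHODIMP_(ULONG) Release() override { return 1; }
};

// After inserting the Sample Grabber filter into the graph:
//   pGrabber->SetCallback(&g_callback, 1);   // 1 selects BufferCB
```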

Using Async_reader and Wave Parser in DirectShow filter graph results in video seeking issues

Some background:
I am attempting to create a DirectShow source filter based on the PushSource example from the DirectShow SDK. It essentially outputs a set of bitmaps, each of which can be displayed for a long time (for example 30 seconds), as a video. I have set up a filter graph which uses an Async_reader with a Wave Parser for the audio and my new filter to push the video (the filter is a CSourceStream, and I populate my frames in the FillBuffer function). These are both connected to a WMASFWriter to output a WMV.
The problem:
When I attempt to seek through the resulting video, I have to wait until a bitmap's start time occurs before it is displayed. For example, if I'm currently seeing bitmap 4 and skip back to the time at which bitmap 2 is displayed, the video output will not change until the third bitmap starts. Initially I wondered if I wasn't allowing FillBuffer to be called often enough (at the moment it's only called once per bitmap); however, I have since noted that when the audio track is very short (perhaps just a second long), I can seek through the video as expected. Is there another way I should be introducing audio into the filter graph? Do I need to perform some kind of indexing once the WMV has been rendered? I'm at a bit of a loss...
You may need to do indexing as a post-processing step. Try indexing the file with Windows Media File Editor from the Windows Media Encoder SDK and see if this improves seeking.
Reducing the key frame interval in the encoder profile may improve seeking. This can be done in Windows Media Profile Editor from the SDK. Note that this will increase the file size.
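If you would rather index programmatically than run the editor by hand, here is a hedged sketch using the Windows Media Format SDK's WMCreateIndexer. Indexing runs asynchronously, and the exact completion status to wait for is an assumption worth verifying against the SDK documentation:

```cpp
// Sketch: index a finished WMV with IWMIndexer so seeking works properly.
#include <wmsdk.h>
#pragma comment(lib, "wmvcore.lib")

class IndexStatus : public IWMStatusCallback
{
public:
    HANDLE done = CreateEvent(nullptr, TRUE, FALSE, nullptr);

    STDMETHODIMP OnStatus(WMT_STATUS status, HRESULT, WMT_ATTR_DATATYPE,
                          BYTE *, void *) override
    {
        if (status == WMT_CLOSED)   // assumed completion signal; verify in docs
            SetEvent(done);
        return S_OK;
    }
    STDMETHODIMP QueryInterface(REFIID riid, void **ppv) override
    {
        if (riid == IID_IUnknown || riid == IID_IWMStatusCallback)
        { *ppv = this; return S_OK; }
        return E_NOINTERFACE;
    }
    STDMETHODIMP_(ULONG) AddRef() override  { return 2; }
    STDMETHODIMP_(ULONG) Release() override { return 1; }
};

HRESULT IndexWmv(const wchar_t *path)
{
    IWMIndexer *pIndexer = nullptr;
    HRESULT hr = WMCreateIndexer(&pIndexer);
    if (FAILED(hr)) return hr;

    IndexStatus status;
    hr = pIndexer->StartIndexing(path, &status, nullptr);
    if (SUCCEEDED(hr))
        WaitForSingleObject(status.done, INFINITE);

    pIndexer->Release();
    return hr;
}
```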

DirectShow capture filter "wrapper"

I need to write a DirectShow capture filter that wraps the "real" video device (filter) and deinterlaces the captured video. From the interface perspective, this has to be a separate video device, available in the enumerator, and when chosen it connects to the real video device and inserts a transform filter (deinterlace) between the video device's output pin and its own output pin. My question is: is my approach correct? I want to simply develop a DirectShow capture video filter, instantiate a transform filter within it and connect the pins from my filter automatically. Is there a better way to "inject" a transform filter between a real video device and the application that uses it?
Regards
Dominik Tomczak
To deinterlace without a wrapper, you can create a transform filter and give it a very high merit; that way it can be automatically added (injected) into graphs. See MatrixMixer, which does something similar for audio.
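A minimal sketch of the high-merit registration, assuming the usual DirectShow base-classes setup tables (CLSID_MyDeinterlacer and sudMyFilterPins are hypothetical names for your filter's CLSID and pin table):

```cpp
// Register the transform filter with a merit above normal so intelligent
// connect picks it up automatically.
#include <streams.h>

STDAPI DllRegisterServer()
{
    HRESULT hr = AMovieDllRegisterServer2(TRUE);   // standard COM registration
    if (FAILED(hr)) return hr;

    IFilterMapper2 *pMapper = nullptr;
    hr = CoCreateInstance(CLSID_FilterMapper2, nullptr, CLSCTX_INPROC_SERVER,
                          IID_IFilterMapper2, (void **)&pMapper);
    if (FAILED(hr)) return hr;

    REGFILTER2 rf2 = {};
    rf2.dwVersion = 1;
    rf2.dwMerit   = MERIT_PREFERRED + 1;   // higher than ordinary filters
    rf2.cPins     = 2;
    rf2.rgPins    = sudMyFilterPins;       // hypothetical pin setup table

    hr = pMapper->RegisterFilter(CLSID_MyDeinterlacer,   // hypothetical CLSID
                                 L"My Deinterlacer",
                                 nullptr,                // default device moniker
                                 &CLSID_LegacyAmFilterCategory,
                                 nullptr,
                                 &rf2);
    pMapper->Release();
    return hr;
}
```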
If you really need a wrapper, create a second graph with the original video device and the transform filter, then transfer the output into the graph your wrapper filter is in. See GMFBridge for an example of how to use the output of graph A as the input of graph B.

How to overlay Direct3D in DirectShow

I am looking for a tutorial or documentation on how to overlay Direct3D on top of a video (webcam) feed in DirectShow.
I want to provide a virtual webcam (a virtual device that looks like a webcam to the system, i.e. so that it can be used wherever a normal webcam could be used, such as in IM video chats).
I want to capture a video feed from a webcam attached to the computer.
I want to overlay a 3d model on top of the video feed and provide that as the output.
I had planned on doing this in DirectShow only because it looked possible to do there. If you have any ideas about possible alternatives, I'm all ears.
I am writing c++ using visual studio 2008.
Use the Video Mixing Renderer filter to render the video to a texture, then render that texture to the scene as a full-screen quad. After that you can render the rest of the 3D content on top and then present the scene.
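For the texture route, here is a hedged sketch of switching VMR-9 into renderless mode, so that video arrives as Direct3D surfaces you control. pMyAllocator is assumed to be your own implementation of IVMRSurfaceAllocator9 and IVMRImagePresenter9:

```cpp
// Wire up VMR-9 in renderless mode with a custom allocator/presenter.
#include <dshow.h>
#include <vmr9.h>

HRESULT SetupRenderlessVmr9(IGraphBuilder *pGraph,
                            IVMRSurfaceAllocator9 *pMyAllocator,
                            IDirect3DDevice9 *pDevice, HMONITOR hMonitor)
{
    IBaseFilter *pVmr = nullptr;
    HRESULT hr = CoCreateInstance(CLSID_VideoMixingRenderer9, nullptr,
                                  CLSCTX_INPROC_SERVER, IID_IBaseFilter,
                                  (void **)&pVmr);
    if (FAILED(hr)) return hr;
    pGraph->AddFilter(pVmr, L"VMR9");

    // Switch the renderer out of windowed mode.
    IVMRFilterConfig9 *pConfig = nullptr;
    pVmr->QueryInterface(IID_IVMRFilterConfig9, (void **)&pConfig);
    pConfig->SetRenderingMode(VMR9Mode_Renderless);
    pConfig->Release();

    // Hand the VMR our allocator/presenter and complete the handshake.
    IVMRSurfaceAllocatorNotify9 *pNotify = nullptr;
    pVmr->QueryInterface(IID_IVMRSurfaceAllocatorNotify9, (void **)&pNotify);
    pNotify->SetD3DDevice(pDevice, hMonitor);
    pNotify->AdviseSurfaceAllocator(0xABCD, pMyAllocator); // arbitrary user id
    pMyAllocator->AdviseNotify(pNotify);
    pNotify->Release();

    pVmr->Release();
    return S_OK;
}
```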
Are you after a filter that sits somewhere in the graph and renders D3D content over the video?
If so, then you need to look at deriving a filter from CTransformFilter. Something like the EZRGB example will give you a starting point. Basically, once you have this sorted, your filter needs to do the Direct3D rendering and, literally, insert the resulting image into the DirectShow stream. Alas, you can't render Direct3D directly to a DirectShow video frame, so you will have to do your rendering, then lock the front/back buffer, copy the 3D data out and put it into the DirectShow stream. This isn't ideal, as it WILL be quite slow (compared to standard D3D rendering), but it's the best you can do, to my knowledge.
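A sketch of that copy-out step, assuming a D3DFMT_X8R8G8B8 render target matching an RGB32 media type, with pSysMemSurf created beforehand via CreateOffscreenPlainSurface in D3DPOOL_SYSTEMMEM:

```cpp
// Pull the render target into system memory and copy it into the sample.
#include <d3d9.h>
#include <dshow.h>
#include <cstring>

HRESULT CopyFrameToSample(IDirect3DDevice9 *pDevice,
                          IDirect3DSurface9 *pRenderTarget,
                          IDirect3DSurface9 *pSysMemSurf,
                          IMediaSample *pSample, int width, int height)
{
    // GPU -> system memory; this stalls the pipeline, hence the slowness.
    HRESULT hr = pDevice->GetRenderTargetData(pRenderTarget, pSysMemSurf);
    if (FAILED(hr)) return hr;

    D3DLOCKED_RECT lr;
    hr = pSysMemSurf->LockRect(&lr, nullptr, D3DLOCK_READONLY);
    if (FAILED(hr)) return hr;

    BYTE *pOut = nullptr;
    pSample->GetPointer(&pOut);

    const int stride = width * 4;
    for (int y = 0; y < height; ++y)
    {
        // DirectShow RGB bitmaps are bottom-up, so flip rows while copying.
        memcpy(pOut + (height - 1 - y) * stride,
               (BYTE *)lr.pBits + y * lr.Pitch, stride);
    }

    pSysMemSurf->UnlockRect();
    pSample->SetActualDataLength(height * stride);
    return S_OK;
}
```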
Edit: In light of your update, what you want is quite complicated. You need to create a source filter to begin with (look at the PushSource example). Once you've done that, you will need to register it as a video capture source. Basically you do this with the IFilterMapper2::RegisterFilter call in your DllRegisterServer function, passing CLSID_VideoInputDeviceCategory as the category. Adding the Direct3D is as I stated above.
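A minimal sketch of registering the source filter under the capture device category (CLSID_MyVirtualCam is a hypothetical CLSID for your filter):

```cpp
// Register the source filter so it is enumerated as a video capture device.
#include <streams.h>

STDAPI DllRegisterServer()
{
    HRESULT hr = AMovieDllRegisterServer2(TRUE);
    if (FAILED(hr)) return hr;

    IFilterMapper2 *pMapper = nullptr;
    hr = CoCreateInstance(CLSID_FilterMapper2, nullptr, CLSCTX_INPROC_SERVER,
                          IID_IFilterMapper2, (void **)&pMapper);
    if (FAILED(hr)) return hr;

    REGFILTER2 rf2 = {};
    rf2.dwVersion = 1;
    rf2.dwMerit   = MERIT_DO_NOT_USE + 1;  // capture sources are found by
    rf2.cPins     = 0;                     // category, not by merit

    hr = pMapper->RegisterFilter(CLSID_MyVirtualCam,   // hypothetical CLSID
                                 L"My Virtual Cam",
                                 nullptr,
                                 &CLSID_VideoInputDeviceCategory,
                                 nullptr, &rf2);
    pMapper->Release();
    return hr;
}
```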
All round, you want to spend as much time as possible reading through the DirectShow samples in the Windows SDK and modifying them to do what YOU want them to do.