DirectShow: Video-Preview and Image (with working code) - c++

Questions / Issues
If someone can recommend a good free hosting site, I can provide the whole project file.
As mentioned in the text below, the TakePicture() method is not working properly on the HTC HD 2 device. It would be nice if someone could look at the code below and tell me whether what I'm doing is right or wrong.
Introduction
I recently asked a question about displaying a video preview, taking a camera image and rotating a video stream with DirectShow. The tricky thing about the topic is that good examples are hard to find, and both the documentation and the framework itself are hard to understand for someone new to Windows programming and C++ in general.
Nevertheless, I managed to create a class that implements most of these features and probably works on most mobile devices. "Probably", because the DirectShow implementation depends a lot on the device itself. I could only test it with the HTC HD and HTC HD2, which are known to be quite incompatible.
HTC HD
Working: Video preview, writing photo to file
Not working: Set video resolution (CRASH), set photo resolution (LOW quality)
HTC HD 2
Working: Set video resolution, set photo resolution
Problematic: Video Preview rotated
Not working: Writing photo to file
To make it easier for others, I decided to share everything I have so far as a working example below. I removed all of the error handling for the sake of simplicity. As far as documentation goes, I can recommend reading the MSDN documentation; after that the code below is pretty straightforward.
void Camera::Init()
{
    CreateComObjects();
    _captureGraphBuilder->SetFiltergraph(_filterGraph);
    InitializeVideoFilter();
    InitializeStillImageFilter();
}
Display a video preview (works on every tested handheld):
void Camera::DisplayVideoPreview(HWND windowHandle)
{
    IVideoWindow *_vidWin;
    _filterGraph->QueryInterface(IID_IMediaControl, (void **) &_mediaControl);
    _filterGraph->QueryInterface(IID_IVideoWindow, (void **) &_vidWin);
    _videoCaptureFilter->QueryInterface(IID_IAMVideoControl,
        (void**) &_videoControl);
    _captureGraphBuilder->RenderStream(&PIN_CATEGORY_PREVIEW,
        &MEDIATYPE_Video, _videoCaptureFilter, NULL, NULL);
    CRect rect;
    long width, height;
    GetClientRect(windowHandle, &rect);
    _vidWin->put_Owner((OAHWND)windowHandle);
    _vidWin->put_WindowStyle(WS_CHILD | WS_CLIPSIBLINGS);
    _vidWin->get_Width(&width);
    _vidWin->get_Height(&height);
    height = rect.Height();
    _vidWin->put_Height(height);
    _vidWin->put_Width(rect.Width());
    _vidWin->SetWindowPosition(0, 0, rect.Width(), height);
    _mediaControl->Run();
}
HTC HD2: if SetPhotoResolution() has been called, FindPin returns E_FAIL. If it hasn't, TakePicture() creates a file full of null bytes. HTC HD: works.
void Camera::TakePicture(WCHAR *fileName)
{
    CComPtr<IFileSinkFilter> fileSink;
    CComPtr<IPin> stillPin;
    CComPtr<IUnknown> unknownCaptureFilter;
    CComPtr<IAMVideoControl> videoControl;
    _imageSinkFilter.QueryInterface(&fileSink);
    fileSink->SetFileName(fileName, NULL);
    _videoCaptureFilter.QueryInterface(&unknownCaptureFilter);
    _captureGraphBuilder->FindPin(unknownCaptureFilter, PINDIR_OUTPUT,
        &PIN_CATEGORY_STILL, &MEDIATYPE_Video, FALSE, 0, &stillPin);
    _videoCaptureFilter.QueryInterface(&videoControl);
    videoControl->SetMode(stillPin, VideoControlFlag_Trigger);
}
Set resolution: works great on the HTC HD2. The HTC HD won't allow SetVideoResolution() and offers only one low-resolution photo mode:
void Camera::SetVideoResolution(int width, int height)
{
    SetResolution(true, width, height);
}
void Camera::SetPhotoResolution(int width, int height)
{
    SetResolution(false, width, height);
}
void Camera::SetResolution(bool video, int width, int height)
{
    IAMStreamConfig *config;
    config = NULL;
    if (video)
    {
        _captureGraphBuilder->FindInterface(&PIN_CATEGORY_PREVIEW,
            &MEDIATYPE_Video, _videoCaptureFilter, IID_IAMStreamConfig,
            (void**) &config);
    }
    else
    {
        _captureGraphBuilder->FindInterface(&PIN_CATEGORY_STILL,
            &MEDIATYPE_Video, _videoCaptureFilter, IID_IAMStreamConfig,
            (void**) &config);
    }
    int resolutions, size;
    VIDEO_STREAM_CONFIG_CAPS caps;
    config->GetNumberOfCapabilities(&resolutions, &size);
    for (int i = 0; i < resolutions; i++)
    {
        AM_MEDIA_TYPE *mediaType;
        if (config->GetStreamCaps(i, &mediaType,
            reinterpret_cast<BYTE*>(&caps)) == S_OK)
        {
            int maxWidth = caps.MaxOutputSize.cx;
            int maxHeight = caps.MaxOutputSize.cy;
            if (maxWidth == width && maxHeight == height)
            {
                VIDEOINFOHEADER *info =
                    reinterpret_cast<VIDEOINFOHEADER*>(mediaType->pbFormat);
                info->bmiHeader.biWidth = maxWidth;
                info->bmiHeader.biHeight = maxHeight;
                info->bmiHeader.biSizeImage = DIBSIZE(info->bmiHeader);
                config->SetFormat(mediaType);
                DeleteMediaType(mediaType);
                break;
            }
            DeleteMediaType(mediaType);
        }
    }
}
Other methods used to build the filter graph and create the COM objects:
void Camera::CreateComObjects()
{
    CoInitialize(NULL);
    CoCreateInstance(CLSID_CaptureGraphBuilder, NULL, CLSCTX_INPROC_SERVER,
        IID_ICaptureGraphBuilder2, (void **) &_captureGraphBuilder);
    CoCreateInstance(CLSID_FilterGraph, NULL, CLSCTX_INPROC_SERVER,
        IID_IGraphBuilder, (void **) &_filterGraph);
    CoCreateInstance(CLSID_VideoCapture, NULL, CLSCTX_INPROC,
        IID_IBaseFilter, (void**) &_videoCaptureFilter);
    CoCreateInstance(CLSID_IMGSinkFilter, NULL, CLSCTX_INPROC,
        IID_IBaseFilter, (void**) &_imageSinkFilter);
}
void Camera::InitializeVideoFilter()
{
    _videoCaptureFilter->QueryInterface(&_propertyBag);
    wchar_t deviceName[MAX_PATH] = L"\0";
    GetDeviceName(deviceName);
    CComVariant comName = deviceName;
    CPropertyBag propertyBag;
    propertyBag.Write(L"VCapName", &comName);
    _propertyBag->Load(&propertyBag, NULL);
    _filterGraph->AddFilter(_videoCaptureFilter,
        L"Video Capture Filter Source");
}
void Camera::InitializeStillImageFilter()
{
    _filterGraph->AddFilter(_imageSinkFilter, L"Still image filter");
    _captureGraphBuilder->RenderStream(&PIN_CATEGORY_STILL,
        &MEDIATYPE_Video, _videoCaptureFilter, NULL, _imageSinkFilter);
}
void Camera::GetDeviceName(WCHAR *deviceName)
{
    HRESULT hr = S_OK;
    HANDLE handle = NULL;
    DEVMGR_DEVICE_INFORMATION di;
    GUID guidCamera = { 0xCB998A05, 0x122C, 0x4166, 0x84, 0x6A, 0x93, 0x3E,
        0x4D, 0x7E, 0x3C, 0x86 };
    di.dwSize = sizeof(di);
    handle = FindFirstDevice(DeviceSearchByGuid, &guidCamera, &di);
    StringCchCopy(deviceName, MAX_PATH, di.szLegacyName);
}
Full header file:
#ifndef __CAMERA_H__
#define __CAMERA_H__
class Camera
{
public:
    void Init();
    void DisplayVideoPreview(HWND windowHandle);
    void TakePicture(WCHAR *fileName);
    void SetVideoResolution(int width, int height);
    void SetPhotoResolution(int width, int height);
private:
    CComPtr<ICaptureGraphBuilder2> _captureGraphBuilder;
    CComPtr<IGraphBuilder> _filterGraph;
    CComPtr<IBaseFilter> _videoCaptureFilter;
    CComPtr<IPersistPropertyBag> _propertyBag;
    CComPtr<IMediaControl> _mediaControl;
    CComPtr<IAMVideoControl> _videoControl;
    CComPtr<IBaseFilter> _imageSinkFilter;
    void GetDeviceName(WCHAR *deviceName);
    void InitializeVideoFilter();
    void InitializeStillImageFilter();
    void CreateComObjects();
    void SetResolution(bool video, int width, int height);
};
#endif
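For reference, the class can be driven from application code roughly like this (a minimal sketch; the window handle, resolutions and file path are placeholders, not values from the original project):
Camera camera;
camera.Init();
// Pick resolutions the device actually reports (see SetResolution above).
camera.SetVideoResolution(640, 480);
camera.SetPhotoResolution(2048, 1536);
camera.DisplayVideoPreview(previewWindowHandle);  // HWND of the preview window
camera.TakePicture(L"\\My Documents\\photo.jpg"); // triggers the still pin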

Unfortunately I can't share the solution here for legal reasons.
Nevertheless I can tell you that video and image capturing with full resolution support is possible on the HTC HD 2 without using the HTC HD specific libraries.
Hint: You will probably need a NULL renderer.
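For illustration, terminating the capture stream with a null renderer could look roughly like this (a sketch only; CLSID_NullRenderer is the stock desktop DirectShow Null Renderer declared in qedit.h, so check what your device's SDK actually provides):
CComPtr<IBaseFilter> nullRenderer;
CoCreateInstance(CLSID_NullRenderer, NULL, CLSCTX_INPROC_SERVER,
    IID_IBaseFilter, (void**) &nullRenderer);
_filterGraph->AddFilter(nullRenderer, L"Null Renderer");
// Connect the capture pin to the null renderer so the graph can run
// without a real renderer on that stream.
_captureGraphBuilder->RenderStream(&PIN_CATEGORY_CAPTURE, &MEDIATYPE_Video,
    _videoCaptureFilter, NULL, nullRenderer);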

I recently ran into a problem using an approach like this, where the snapshot would work the first time around but fail the second. The problem was similar to yours, where setting the resolution would cause FindPin to fail, but it only did it the second time.
The fix was to release the config object at the end of SetResolution!
config->Release();
After that, it worked every time.
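Equivalently, holding the interface in a smart pointer avoids the manual Release() entirely (a sketch of the same lookup, not the original poster's code):
CComPtr<IAMStreamConfig> config;
_captureGraphBuilder->FindInterface(&PIN_CATEGORY_STILL, &MEDIATYPE_Video,
    _videoCaptureFilter, IID_IAMStreamConfig, (void**) &config);
// ... use config exactly as before; it is released automatically when it
// goes out of scope, so FindPin keeps working on subsequent snapshots.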

Related

Transparent glitches while drawing GIF

I wrote a solution to draw GIFs, based on the Direct2D GIF sample:
// somewhere in render loop
IWICBitmapFrameDecode* frame = nullptr;
IWICFormatConverter* converter = nullptr;
gifDecoder->GetFrame(current_frame, &frame);
if (frame)
{
    d2dWICFactory->CreateFormatConverter(&converter);
    if (converter)
    {
        ID2D1Bitmap* temp = nullptr;
        converter->Initialize(frame, GUID_WICPixelFormat32bppPBGRA,
            WICBitmapDitherTypeNone, NULL, 0.f,
            WICBitmapPaletteType::WICBitmapPaletteTypeCustom);
        tar->CreateBitmapFromWicBitmap(converter, NULL, &temp);
        scale->SetValue(D2D1_SCALE_PROP_SCALE, D2D1::Vector2F(1, 1));
        scale->SetInput(0, temp);
        target->DrawImage(scale);
        SafeRelease(&temp);
    }
}
SafeRelease(&frame);
SafeRelease(&converter);
Where gifDecoder is obtained this way:
d2dWICFactory->CreateDecoderFromFilename(pathToGif, NULL, GENERIC_READ, WICDecodeMetadataCacheOnLoad, &gifDecoder);
But in my program, with almost all GIFs, every frame except the first has horizontal transparent lines for some reason. Obviously, when I view the same GIF in a browser or in another program, those holes are not there.
I tried changing the pixel format and the palette type, but the glitches are still there.
What am I doing wrong?
Basic steps:
On image load, in the case of a GIF, don't cache all frames as ready-to-draw ID2D1Bitmaps; for caching purposes store only the image's decoder. Decode the frame, get its bitmap and draw it each time. (At least during the first animation loop, the frame bitmaps and per-frame metadata can be cached.)
For each GIF create a compatible render target with the size of the GIF screen (obtained from the metadata) for composing.
Compose the animation on render: get the metadata for the current frame -> if the current frame has disposal set, clear the compose target -> draw the current frame to the compose target (even if disposal is set) -> draw the compose target to the render target.
Note that frames may have different sizes and even offsets.
Approximate code:
ID2D1DeviceContext *renderTarget;
class GIF
{
public:
ID2D1BitmapRenderTarget *compose;
IWICBitmapDecoder *decoder; // pre-caching all frames costs too much time; it is faster to obtain each frame on demand (and maybe cache it during the first loop)
float naturalWidth; // gif screen width
float naturalHeight; // gif screen height
int framesCount;
int currentFrame;
unsigned int lastFrameEpoch;
int x;
int y;
int width; // image width needed to be drawn
int height; // image height
float fw; // scale factor width;
float fh; // scale factor height
GIF(const wchar_t *path, int x, int y, int width, int height)
{
// Init, using WIC obtain here image's decoder, naturalWidth, naturalHeight
// framesCount and other optional stuff like loop information
// using IWICMetadataQueryReader obtained from the whole image,
// not from zero frame
compose = nullptr;
currentFrame = 0;
lastFrameEpoch = epoch(); // implement Your millisecond function
Resize(width, height);
}
void Resize(int width, int height)
{
this->width = width;
this->height = height;
// calculate scale factors
fw = width / naturalWidth;
fh = height / naturalHeight;
if(compose == nullptr)
{
renderTarget->CreateCompatibleRenderTarget(D2D1::SizeF(naturalWidth, naturalHeight), &compose);
compose->Clear(D2D1::ColorF(0,0,0,0));
}
}
void Render()
{
IWICBitmapFrameDecode* frame = nullptr;
decoder->GetFrame(currentFrame, &frame);
IWICFormatConverter* converter = nullptr;
d2dWICFactory->CreateFormatConverter(&converter);
converter->Initialize(frame, GUID_WICPixelFormat32bppPBGRA,
WICBitmapDitherTypeNone, NULL, 0.f, WICBitmapPaletteType::WICBitmapPaletteTypeCustom);
IWICMetadataQueryReader* reader = nullptr;
frame->GetMetadataQueryReader(&reader);
PROPVARIANT propValue;
PropVariantInit(&propValue);
char disposal = 0;
float l = 0, t = 0; // frame offsets (may have)
HRESULT hr = S_OK;
reader->GetMetadataByName(L"/grctlext/Delay", &propValue);
if (SUCCEEDED((propValue.vt == VT_UI2 ? S_OK : E_FAIL)))
{
UINT frameDelay = 0;
// The GIF delay is stored in hundredths of a second; convert to milliseconds.
UIntMult(propValue.uiVal, 10, &frameDelay);
if (frameDelay < 16)
{
frameDelay = 16;
}
if(epoch() - lastFrameEpoch < frameDelay)
{
SafeRelease(&reader);
SafeRelease(&frame);
SafeRelease(&converter);
return;
}
}
if (SUCCEEDED(reader->GetMetadataByName(
L"/grctlext/Disposal",
&propValue)))
{
hr = (propValue.vt == VT_UI1) ? S_OK : E_FAIL;
if (SUCCEEDED(hr))
{
disposal = propValue.bVal;
}
}
PropVariantClear(&propValue);
{
hr = reader->GetMetadataByName(L"/imgdesc/Left", &propValue);
if (SUCCEEDED(hr))
{
hr = (propValue.vt == VT_UI2 ? S_OK : E_FAIL);
if (SUCCEEDED(hr))
{
l = static_cast<FLOAT>(propValue.uiVal);
}
PropVariantClear(&propValue);
}
}
{
hr = reader->GetMetadataByName(L"/imgdesc/Top", &propValue);
if (SUCCEEDED(hr))
{
hr = (propValue.vt == VT_UI2 ? S_OK : E_FAIL);
if (SUCCEEDED(hr))
{
t = static_cast<FLOAT>(propValue.uiVal);
}
PropVariantClear(&propValue);
}
}
ID2D1Bitmap* temp = nullptr;
renderTarget->CreateBitmapFromWicBitmap(converter, NULL, &temp);
compose->BeginDraw();
if (disposal == 2)
compose->Clear(D2D1::ColorF(0, 0, 0, 0));
auto ss = temp->GetSize();
compose->DrawBitmap(temp, D2D1::RectF(l, t, l + ss.width, t + ss.height));
compose->EndDraw();
ID2D1Bitmap* composition = nullptr;
compose->GetBitmap(&composition);
// You need to create a scale effect to get antialiased resizing
scale->SetValue(D2D1_SCALE_PROP_SCALE, D2D1::Vector2F(fw, fh));
scale->SetInput(0, composition);
auto p = D2D1::Point2F(x, y);
renderTarget->DrawImage(scale, p);
SafeRelease(&temp);
SafeRelease(&composition);
SafeRelease(&reader);
SafeRelease(&frame);
SafeRelease(&converter);
}
}*gif[100] = {nullptr};
inline void Render()
{
renderTarget->BeginDraw();
for(int i = 0; i < 100 && gif[i] != nullptr; i++)
gif[i]->Render();
renderTarget->EndDraw();
}

Media Foundation: Cannot change FPS on a webcam

I am trying to replace the DirectShow ("DS") code in my app with Media Foundation ("MF") and ran into one problem: I cannot set the frame rate I need on a webcam using MF. MF only allowed me to set 30 fps. If I try to set 25 fps, I always get the error 0xc00d5212 from SetCurrentMediaType(). In DS I could change that parameter.
My code:
ASSERT(m_pReader); //IMFSourceReader *m_pReader;
IMFMediaType *pNativeType = NULL;
IMFMediaType *pType = NULL;
UINT32 w = 1280;
UINT32 h = 720;
UINT32 fps = 25; // or 30
DWORD dwStreamIndex = MF_SOURCE_READER_FIRST_VIDEO_STREAM;
// Find the native format of the stream.
HRESULT hr = m_pReader->GetNativeMediaType(dwStreamIndex, 0, &pNativeType);
if (FAILED(hr))
{
//error
}
GUID majorType, subtype;
// Find the major type.
hr = pNativeType->GetGUID(MF_MT_MAJOR_TYPE, &majorType);
if (FAILED(hr))
{
//error
}
// Define the output type.
hr = MFCreateMediaType(&pType);
if (FAILED(hr))
{
//error
}
hr = pType->SetGUID(MF_MT_MAJOR_TYPE, majorType);
if (FAILED(hr))
{
//error
}
// Select a subtype.
if (majorType == MFMediaType_Video)
{
subtype= MFVideoFormat_RGB24;
}
else
{
//error
}
hr = pType->SetGUID(MF_MT_SUBTYPE, subtype);
if (FAILED(hr))
{
//error
}
hr = MFSetAttributeSize(pType, MF_MT_FRAME_SIZE, w, h);
if (FAILED(hr))
{
//error
}
hr = MFSetAttributeSize(pType, MF_MT_FRAME_RATE, fps, 1);
if (FAILED(hr))
{
//error
}
hr = m_pReader->SetCurrentMediaType(dwStreamIndex, NULL, pType);
if (FAILED(hr))
{// hr = 0xc00d5212
//!!!!!error - if fps == 25
}
return hr;
Thanks for any help.
It may be that the camera does not support arbitrary frame rate values and can only work with a fixed set, for example 10, 15, 20, 24 or 30 fps. You should be able to enumerate the supported media types and choose the one that works for you; those media types typically include the frame rate options.
Even though Media Foundation and DirectShow video capture eventually end up in the same backend, there can be discrepancies in behavior. Specifically, you are working with the higher-level Media Foundation API, which internally interfaces to a media source, and it may happen that the requested frame rate leads to 0xC00D5212 (MF_E_TOPO_CODEC_NOT_FOUND, "No suitable transform was found to encode or decode the content") even though the driver can technically capture in that mode.
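A rough sketch of that enumeration with IMFSourceReader (illustrative only; error handling omitted):
// Walk the native media types of the first video stream and read the
// frame size and frame rate each one advertises.
for (DWORD i = 0; ; i++)
{
    IMFMediaType *pNative = NULL;
    HRESULT hr = m_pReader->GetNativeMediaType(
        MF_SOURCE_READER_FIRST_VIDEO_STREAM, i, &pNative);
    if (FAILED(hr))   // MF_E_NO_MORE_TYPES ends the enumeration
        break;
    UINT32 w = 0, h = 0, num = 0, den = 0;
    MFGetAttributeSize(pNative, MF_MT_FRAME_SIZE, &w, &h);
    MFGetAttributeRatio(pNative, MF_MT_FRAME_RATE, &num, &den);
    // e.g. 1280x720 @ 30/1 fps; pick a native type whose rate suits you and
    // pass it to SetCurrentMediaType instead of building one from scratch.
    pNative->Release();
}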
See also:
Get all supported FPS values of a camera in Microsoft Media Foundation
Media Foundation Video/Audio Capture Capabilities
I've added a timer to my code to imitate fps control: at the start I set 30 fps, and then I skip some frames according to the fps scale my app needs.
Thank you for the help.
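For illustration, that kind of frame skipping could look roughly like this (a sketch only, assuming the reader delivers 30 fps and the app wants about 25 fps; the surrounding sample callback is not shown):
// Drop one frame out of every six to turn a 30 fps stream into ~25 fps.
static int frameIndex = 0;
bool dropThisFrame = (frameIndex % 6) == 5;
frameIndex++;
if (dropThisFrame)
    return S_OK; // skip processing this sample and wait for the next one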

How To Fix A Vector Changing The Variables Of All The Objects?

I've created an Image class that loads a bitmap and then draws and handles it. But when I create a vector of this Image class and load the bitmaps of 3 objects, the last (3rd) object sets all of the objects to its bitmap.
Loading Bitmaps:
PlayerCard[0].GetBitmap(L"program files/Creature Cards/Mana_Wyrm.png");
PlayerCard[1].GetBitmap(L"program files/Creature Cards/Snowchugger.png");
PlayerCard[2].GetBitmap(L"program files/Creature Cards/Water_Elemental.png");
What renders in the window is 3 Water Elementals.
GetBitmap(wchar_t* filename), an Image member function:
CoCreateInstance(CLSID_WICImagingFactory, 0, CLSCTX_INPROC_SERVER, IID_IWICImagingFactory, (LPVOID*)&wicFactory);
wicFactory->CreateDecoderFromFilename(filename, 0, GENERIC_READ, WICDecodeMetadataCacheOnLoad, &wicDecoder);
wicDecoder->GetFrame(0, &wicFrame);
wicFactory->CreateFormatConverter(&wicConverter);
wicConverter->Initialize(wicFrame, GUID_WICPixelFormat32bppPBGRA, WICBitmapDitherTypeNone, 0, 0.0, WICBitmapPaletteTypeCustom);
graphics.GetRenderTarget()->CreateBitmapFromWicBitmap(wicConverter, 0, &bmp);
if (wicFactory) wicFactory->Release();
if (wicDecoder) wicDecoder->Release();
if (wicFrame) wicFrame->Release();
if (wicConverter) wicConverter->Release();
Image header (Image.h):
public:
void PassGraphics(Graphics gfx);
void Init();
void Unload();
void GetBitmap(wchar_t* filename);
void Draw(float x, float y);
private:
IWICImagingFactory *wicFactory;
IWICBitmapDecoder *wicDecoder;
IWICBitmapFrameDecode *wicFrame;
IWICFormatConverter *wicConverter;
ID2D1Bitmap* bmp;
Graphics graphics;
The for loop that was used to draw the vector images was the problem: the nested loops draw all three cards at every x position, so the last card drawn (the Water Elemental) ends up covering the other two each time.
for (float i = 0; i <= 769; i += 384)
{
for (int c = 0; c < 3; c++)
{
PlayerCard[c].Draw(i, 100);
}
}
I replaced this for loop with manual draw calls:
PlayerCard[0].Draw(0, 100);
PlayerCard[1].Draw(384, 100);
PlayerCard[2].Draw(769, 100);
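Equivalently, a single loop that gives each card its own x offset avoids the overdraw (a sketch based on the spacing used above):
for (int c = 0; c < 3; c++)
{
    // Each card gets its own horizontal position instead of all three
    // being drawn on top of each other at every position.
    PlayerCard[c].Draw(c * 384.0f, 100);
}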

How can I render 3D graphics in a DirectShow source filter?

I need to render a simple texture-mapped model as the output of a DirectShow source filter. The 3D rendering doesn't need to come from Direct3D, but that would be nice. OpenGL or any other provider would be fine, assuming I can fit it into the context of the DirectShow source filter.
Visual Studio 2008, C++
With Direct3D I have found that you can call GetRenderTargetData on the D3D device to get access to the raw image bytes, which you can then copy into the source filter's image buffer.
Here is example code showing how to grab the D3D render target:
void CaptureRenderTarget(IDirect3DDevice9* pdev)
{
    IDirect3DSurface9* pTargetSurface = NULL;
    HRESULT hr = pdev->GetRenderTarget(0, &pTargetSurface);
    if (SUCCEEDED(hr))
    {
        D3DSURFACE_DESC desc;
        hr = pTargetSurface->GetDesc(&desc);
        if (SUCCEEDED(hr))
        {
            IDirect3DTexture9* pTempTexture = NULL;
            hr = pdev->CreateTexture(desc.Width, desc.Height, 1, 0, desc.Format,
                D3DPOOL_SYSTEMMEM, &pTempTexture, NULL);
            if (SUCCEEDED(hr))
            {
                IDirect3DSurface9* pTempSurface = NULL;
                hr = pTempTexture->GetSurfaceLevel(0, &pTempSurface);
                if (SUCCEEDED(hr))
                {
                    hr = pdev->GetRenderTargetData(pTargetSurface, pTempSurface);
                    if (SUCCEEDED(hr))
                    {
                        //D3DXSaveTextureToFile(L"Output.png", D3DXIFF_PNG, pTempTexture, NULL);
                        D3DLOCKED_RECT data;
                        hr = pTempTexture->LockRect(0, &data, NULL, 0);
                        if (SUCCEEDED(hr))
                        {
                            BYTE *d3dPixels = (BYTE*)data.pBits;
                        }
                        pTempTexture->UnlockRect(0);
                    }
                    pTempSurface->Release();
                }
                pTempTexture->Release();
            }
        }
        pTargetSurface->Release();
    }
}
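To feed this into the source filter, the locked bytes can then be copied into the output media sample, for example inside a CSourceStream::FillBuffer override (a rough sketch; CMyPushPin, m_width and m_height are hypothetical names, the render target is assumed to be a 32-bit format matching the output media type, and d3dPixels/data come from the LockRect call above):
HRESULT CMyPushPin::FillBuffer(IMediaSample *pSample)
{
    BYTE *pOut = NULL;
    pSample->GetPointer(&pOut);
    // Copy row by row because the D3D surface pitch may be larger than width * 4.
    const int bytesPerRow = m_width * 4;
    for (int y = 0; y < m_height; y++)
    {
        memcpy(pOut + y * bytesPerRow, d3dPixels + y * data.Pitch, bytesPerRow);
    }
    pSample->SetActualDataLength(m_height * bytesPerRow);
    return S_OK;
}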

IMediaControl::Run fails on Windows XP?

Actually, it only fails the second time it's called. I'm using a windowless control to play video content, where the video being played could change while the control is still on screen. Once the graph is built the first time, we switch media by stopping playback, replacing the SOURCE filter, and running the graph again. This works fine under Vista, but when running on XP, the second call to Run() returns E_UNEXPECTED.
The initialization goes something like this:
// Get the interface for DirectShow's GraphBuilder
mGB.CoCreateInstance(CLSID_FilterGraph, NULL, CLSCTX_INPROC_SERVER);
// Create the Video Mixing Renderer and add it to the graph
ATL::CComPtr<IBaseFilter> pVmr;
pVmr.CoCreateInstance(CLSID_VideoMixingRenderer9, NULL, CLSCTX_INPROC);
mGB->AddFilter(pVmr, L"Video Mixing Renderer 9");
// Set the rendering mode and number of streams
ATL::CComPtr<IVMRFilterConfig9> pConfig;
pVmr->QueryInterface(IID_IVMRFilterConfig9, (void**)&pConfig);
pConfig->SetRenderingMode(VMR9Mode_Windowless);
pVmr->QueryInterface(IID_IVMRWindowlessControl9, (void**)&mWC);
And here's what we do when we decide to play a movie. RenderFileToVideoRenderer is borrowed from dshowutil.h in the DirectShow samples area.
// Release the source filter, if it exists, so we can replace it.
IBaseFilter *pSource = NULL;
if (SUCCEEDED(mpGB->FindFilterByName(L"SOURCE", &pSource)) && pSource)
{
    mpGB->RemoveFilter(pSource);
    pSource->Release();
    pSource = NULL;
}
// Render the file.
hr = RenderFileToVideoRenderer(mpGB, mPlayPath.c_str(), FALSE);
// QueryInterface for DirectShow interfaces
hr = mpGB->QueryInterface(&mMC);
hr = mpGB->QueryInterface(&mME);
hr = mpGB->QueryInterface(&mMS);
// Read the default video size
hr = mpWC->GetNativeVideoSize(&lWidth, &lHeight, NULL, NULL);
if (hr != E_NOINTERFACE)
{
    if (FAILED(hr))
    {
        return hr;
    }
    // Play video at native resolution, anchored at top-left corner.
    RECT r;
    r.left = 0;
    r.top = 0;
    r.right = lWidth;
    r.bottom = lHeight;
    hr = mpWC->SetVideoPosition(NULL, &r);
}
// Run the graph to play the media file
if (mMC)
{
    hr = mMC->Run();
    if (FAILED(hr))
    {
        // We get here the second time this code is executed.
        return hr;
    }
    mState = Running;
}
if (mME)
{
    mME->SetNotifyWindow((OAHWND)m_hWnd, WM_GRAPHNOTIFY, 0);
}
Anybody know what's going on here?
Try calling IMediaControl::StopWhenReady before removing the source filter.
Why are you calling QueryInterface directly? You can use CComQIPtr<> to wrap the QI for you. This way you won't have to call Release, as it will be called automatically.
The syntax looks like this: CComQIPtr<IMediaControl> mediaControl = pGraph;
In FindFilterByName(), instead of passing a raw pointer, pass a CComPtr, again so you won't have to call Release explicitly.
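For example (a sketch of the suggested smart-pointer usage, reusing the member names from the question):
CComQIPtr<IMediaControl> mediaControl(mpGB); // QI happens in the constructor
CComPtr<IBaseFilter> source;
if (SUCCEEDED(mpGB->FindFilterByName(L"SOURCE", &source)) && source)
{
    mpGB->RemoveFilter(source);
}
// source and mediaControl release themselves when they go out of scope.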
Never got a resolution on this. The production solution was to just call IGraphBuilder::Release and rebuild the entire graph from scratch. There's a CPU spike and a slight redraw delay when switching videos, but it's less pronounced than we'd feared.