Scanning pages using WIA or TWAIN - C++

Edit: Are there any tutorials on how to use WIA or TWAIN in C++ that explain how to scan pages, adjust settings (DPI, automatic feeder, etc.), and save them as PNG files?
I'd like to use WIA to scan pages and store them as PNG files. If the scanner supports an automatic document feeder, I'd also like to use that feature. Currently I am following the steps of this tutorial and am stuck at the section Transferring Image Data in WIA 2.0.
So far my scanner has been found, I am able to create the device, and an IWiaItem2* has been created. How can I use it to scan at 300 DPI and store the result as a PNG file?
The tutorial is not clear about how to start the scan process or how to set the DPI for scanning, so I hope someone can help me with the code.
This is essentially the code for getting all local devices:
bool init(IWiaDevMgr2** devMgr)
{
//creating the device manager; the parameter must be a double pointer
//so the created instance is returned to the caller
*devMgr = 0;
CoCreateInstance( CLSID_WiaDevMgr2, 0, CLSCTX_LOCAL_SERVER, IID_IWiaDevMgr2, (void**)devMgr);
//enumerating wia devices
IEnumWIA_DEV_INFO* enumDevInfo = 0;
HRESULT hr = (*devMgr)->EnumDeviceInfo( WIA_DEVINFO_ENUM_LOCAL, &enumDevInfo);
if(SUCCEEDED(hr))
{
//loop until an error occurs or end of list
while(hr == S_OK)
{
IWiaPropertyStorage* storage = 0;
hr = enumDevInfo->Next( 1, &storage, 0);
if(hr == S_OK)
{
readProperties(storage);
storage->Release();
storage = 0;
}
}
//set hr to ok, so no error code is returned
if(hr == S_FALSE) hr = S_OK;
enumDevInfo->Release();
enumDevInfo = 0;
}
return SUCCEEDED(hr);
}
void readProperties(IWiaPropertyStorage* storage)
{
PROPSPEC propSpec[2] = {0};
PROPVARIANT propVar[2] = {0};
const ULONG propCount = sizeof(propSpec) / sizeof(propSpec[0]);
propSpec[0].ulKind = PRSPEC_PROPID;
propSpec[0].propid = WIA_DIP_DEV_ID;
propSpec[1].ulKind = PRSPEC_PROPID;
propSpec[1].propid = WIA_DIP_DEV_NAME;
HRESULT hr = storage->ReadMultiple(propCount, propSpec, propVar);
if(SUCCEEDED(hr))
{
Device* dev = new Device(propVar[0].bstrVal, propVar[1].bstrVal);
devices.push_back( dev );
FreePropVariantArray( propCount, propVar );
}
}
Afterwards a device is initialized like this:
bool createDevice(BSTR id, IWiaItem2** item)
{
*item = 0;
HRESULT hr = devMgr->CreateDevice( 0, id, item);
return SUCCEEDED(hr);
}
Then the items are enumerated:
bool enumerateItems(IWiaItem2* item)
{
LONG itemType = 0;
HRESULT hr = item->GetItemType(&itemType);
if(SUCCEEDED(hr))
{
if(itemType & WiaItemTypeFolder || itemType & WiaItemTypeHasAttachments)
{
IEnumWiaItem2* enumItem = 0;
hr = item->EnumChildItems(0, &enumItem );
while(hr == S_OK)
{
IWiaItem2* child = 0;
hr = enumItem->Next( 1, &child, 0 );
if(hr == S_OK)
{
hr = enumerateItems( child );
child->Release();
child = 0;
}
}
if(hr == S_FALSE) hr = S_OK;
//guard against EnumChildItems having failed, in which case enumItem is null
if(enumItem) enumItem->Release();
enumItem = 0;
}
}
return SUCCEEDED(hr);
}
Now that everything has been initialized, I'd like to implement a scan function. However, the code provided in the tutorial is for transferring files and folders, not for scanning images.
void scanAndSaveAsPNG(IWiaItem2* item, unsigned int dpi, std::string targetPath)
{
}
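Based on the documented WIA 2.0 interfaces, the missing body could be sketched roughly as follows. This is an untested outline, not a drop-in implementation: WritePropertyLong and MyTransferCallback are illustrative names introduced here, not WIA APIs; error handling and the IWiaTransferCallback implementation are omitted; and feeder selection (writing the FEEDER flag to the WIA_IPS_DOCUMENT_HANDLING_SELECT property on the appropriate item) is only hinted at.

```cpp
//rough sketch only - not compiled here; assumes the IWiaItem2* for the
//flatbed/feeder item obtained during enumeration
static HRESULT WritePropertyLong(IWiaPropertyStorage* props, PROPID id, LONG value)
{
PROPSPEC spec = {0};
spec.ulKind = PRSPEC_PROPID;
spec.propid = id;
PROPVARIANT var;
PropVariantInit(&var);
var.vt = VT_I4;
var.lVal = value;
return props->WriteMultiple(1, &spec, &var, WIA_IPA_FIRST);
}

void scanAndSaveAsPNG(IWiaItem2* item, unsigned int dpi, std::string targetPath)
{
IWiaPropertyStorage* props = 0;
if(FAILED(item->QueryInterface(IID_IWiaPropertyStorage, (void**)&props)))
return;
//1) resolution in dots per inch
WritePropertyLong(props, WIA_IPS_XRES, (LONG)dpi);
WritePropertyLong(props, WIA_IPS_YRES, (LONG)dpi);
//2) output format: PNG (WIA_IPA_FORMAT is a VT_CLSID property)
PROPSPEC spec = {0};
spec.ulKind = PRSPEC_PROPID;
spec.propid = WIA_IPA_FORMAT;
PROPVARIANT var;
PropVariantInit(&var);
var.vt = VT_CLSID;
var.puuid = (CLSID*)&WiaImgFmt_PNG;
props->WriteMultiple(1, &spec, &var, WIA_IPA_FIRST);
props->Release();
//3) transfer: IWiaTransfer::Download drives the actual scan; your
//IWiaTransferCallback::GetNextStream returns the destination stream,
//e.g. one created from targetPath with SHCreateStreamOnFileEx
//(which needs a wide-character path)
IWiaTransfer* transfer = 0;
if(SUCCEEDED(item->QueryInterface(IID_IWiaTransfer, (void**)&transfer)))
{
//MyTransferCallback implements IWiaTransferCallback (not shown)
//transfer->Download(0, myCallback);
transfer->Release();
}
}
```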
EDIT:
I installed the latest available version of the scanner driver (WIA and TWAIN) and, after checking the supported commands using this code,
void printCommands(IWiaItem2* item)
{
IEnumWIA_DEV_CAPS* caps = 0;
HRESULT h = item->EnumDeviceCapabilities(WIA_DEVICE_COMMANDS, &caps);
if(SUCCEEDED(h))
{
ULONG count = 0;
caps->GetCount(&count);
if(count > 0)
{
WIA_DEV_CAP* cap = new WIA_DEV_CAP[ count ];
ULONG fetched = 0;
caps->Next(count, cap, &fetched);
for(ULONG i = 0; i < fetched; i++)
{
std::cout << bstr_t( cap[i].bstrName ) << "\n";
}
delete[] cap;
}
caps->Release();
}
}
I noticed it only lists the WIA Synchronize command. I am not sure whether I failed to initialize the device correctly or whether the device simply doesn't support all WIA commands even though the driver is installed.
So unless this problem is solved, I am alternatively also looking for the same code based on TWAIN.

You want to use IWiaItem2::DeviceCommand, which sends a command to the image capture device. The commands you can send are listed here.
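For illustration, issuing one of those commands on an existing item looks roughly like this (untested sketch; WIA_CMD_SYNCHRONIZE is the one command the device above reports):

```cpp
//sketch: send a command to the device through an existing IWiaItem2*
IWiaItem2* resultItem = 0;
HRESULT hr = item->DeviceCommand(0, &WIA_CMD_SYNCHRONIZE, &resultItem);
if(SUCCEEDED(hr) && resultItem)
resultItem->Release();
```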

Related

AvSetMmThreadCharacteristicsW for UWP

I'm working on a WASAPI UWP audio application with C++/WinRT that needs to take audio from an input and send it to an output after processing.
I want to set my audio thread characteristics with AvSetMmThreadCharacteristicsW(L"Pro Audio", &taskIndex), but I just noticed that this function (and most of avrt.h) is limited to WINAPI_PARTITION_DESKTOP and WINAPI_PARTITION_GAMES.
I think I need this because when my code is integrated into my UWP app, the audio input is full of discontinuities, whereas my test code, which uses the avrt API, has no such issue.
Is there another way to configure my thread for audio processing?
Edit: here is my test program: https://github.com/loics2/test-wasapi. The interesting part happens in the AudioStream class. I can't share my UWP app, but I can copy these classes as-is into a Windows Runtime Component.
Edit 2: here's the audio thread code:
void AudioStream::StreamWorker()
{
WAVEFORMATEX* captureFormat = nullptr;
WAVEFORMATEX* renderFormat = nullptr;
RingBuffer<float> captureBuffer;
RingBuffer<float> renderBuffer;
BYTE* streamBuffer = nullptr;
unsigned int streamBufferSize = 0;
unsigned int bufferFrameCount = 0;
unsigned int numFramesPadding = 0;
unsigned int inputBufferSize = 0;
unsigned int outputBufferSize = 0;
DWORD captureFlags = 0;
winrt::hresult hr = S_OK;
// m_inputClient is a winrt::com_ptr<IAudioClient3>
if (m_inputClient) {
hr = m_inputClient->GetMixFormat(&captureFormat);
// m_audioCaptureClient is a winrt::com_ptr<IAudioCaptureClient>
if (!m_audioCaptureClient) {
hr = m_inputClient->Initialize(
AUDCLNT_SHAREMODE_SHARED,
AUDCLNT_STREAMFLAGS_EVENTCALLBACK,
0,
0,
captureFormat,
nullptr);
hr = m_inputClient->GetService(__uuidof(IAudioCaptureClient), m_audioCaptureClient.put_void());
hr = m_inputClient->SetEventHandle(m_inputReadyEvent.get());
hr = m_inputClient->Reset();
hr = m_inputClient->Start();
}
}
hr = m_inputClient->GetBufferSize(&inputBufferSize);
// multiplying the buffer size by the number of channels
inputBufferSize *= 2;
// m_outputClient is a winrt::com_ptr<IAudioClient3>
if (m_outputClient) {
hr = m_outputClient->GetMixFormat(&renderFormat);
// m_audioRenderClient is a winrt::com_ptr<IAudioRenderClient>
if (!m_audioRenderClient) {
hr = m_outputClient->Initialize(
AUDCLNT_SHAREMODE_SHARED,
AUDCLNT_STREAMFLAGS_EVENTCALLBACK,
0,
0,
captureFormat,
nullptr);
hr = m_outputClient->GetService(__uuidof(IAudioRenderClient), m_audioRenderClient.put_void());
hr = m_outputClient->SetEventHandle(m_outputReadyEvent.get());
hr = m_outputClient->Reset();
hr = m_outputClient->Start();
}
}
hr = m_outputClient->GetBufferSize(&outputBufferSize);
// multiplying the buffer size by the number of channels
outputBufferSize *= 2;
while (m_isRunning)
{
// ===== INPUT =====
// waiting for the capture event
WaitForSingleObject(m_inputReadyEvent.get(), INFINITE);
// getting the input buffer data
hr = m_audioCaptureClient->GetNextPacketSize(&bufferFrameCount);
while (SUCCEEDED(hr) && bufferFrameCount > 0) {
m_audioCaptureClient->GetBuffer(&streamBuffer, &bufferFrameCount, &captureFlags, nullptr, nullptr);
if (bufferFrameCount != 0) {
captureBuffer.write(reinterpret_cast<float*>(streamBuffer), bufferFrameCount * 2);
hr = m_audioCaptureClient->ReleaseBuffer(bufferFrameCount);
if (FAILED(hr)) {
m_audioCaptureClient->ReleaseBuffer(0);
}
}
else
{
m_audioCaptureClient->ReleaseBuffer(0);
}
hr = m_audioCaptureClient->GetNextPacketSize(&bufferFrameCount);
}
// ===== CALLBACK =====
auto size = captureBuffer.size();
float* userInputData = (float*)calloc(size, sizeof(float));
float* userOutputData = (float*)calloc(size, sizeof(float));
captureBuffer.read(userInputData, size);
OnData(userInputData, userOutputData, size / 2, 2, 48000);
renderBuffer.write(userOutputData, size);
free(userInputData);
free(userOutputData);
// ===== OUTPUT =====
// waiting for the render event
WaitForSingleObject(m_outputReadyEvent.get(), INFINITE);
// getting information about the output buffer
hr = m_outputClient->GetBufferSize(&bufferFrameCount);
hr = m_outputClient->GetCurrentPadding(&numFramesPadding);
// adjust the frame count with the padding
bufferFrameCount -= numFramesPadding;
if (bufferFrameCount != 0) {
hr = m_audioRenderClient->GetBuffer(bufferFrameCount, &streamBuffer);
auto count = (bufferFrameCount * 2);
if (renderBuffer.read(reinterpret_cast<float*>(streamBuffer), count) < count) {
// captureBuffer is not full enough, we should fill the remainder with 0
}
hr = m_audioRenderClient->ReleaseBuffer(bufferFrameCount, 0);
if (FAILED(hr)) {
m_audioRenderClient->ReleaseBuffer(0, 0);
}
}
else
{
m_audioRenderClient->ReleaseBuffer(0, 0);
}
}
exit:
// Cleanup code
}
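On the underrun comment in the output section above ("fill the remainder with 0"): when renderBuffer cannot supply a full device buffer, the remaining samples should be zeroed so the device never plays stale data. A portable sketch of that idea, where RingBuffer is a minimal std::deque stand-in for the asker's class (which isn't shown in the question):

```cpp
#include <algorithm>
#include <cstring>
#include <deque>

// Minimal stand-in for the RingBuffer<float> used in the question.
struct RingBuffer {
    std::deque<float> q;
    void write(const float* src, size_t n) { q.insert(q.end(), src, src + n); }
    size_t read(float* dst, size_t n) {
        size_t got = std::min(n, q.size());
        for (size_t i = 0; i < got; ++i) { dst[i] = q.front(); q.pop_front(); }
        return got;  // may be smaller than n on underrun
    }
};

// Read up to `wanted` samples; pad any shortfall with silence so the
// device buffer never keeps stale data from the previous iteration.
void readOrSilence(RingBuffer& rb, float* dst, size_t wanted) {
    size_t got = rb.read(dst, wanted);
    if (got < wanted)
        std::memset(dst + got, 0, (wanted - got) * sizeof(float));
}
```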
I removed the error handling code for clarity; most of it is:
if (FAILED(hr))
goto exit;
@IInspectable was right, there's something wrong with my code: the audio processing is done by a library which then calls callbacks with some results.
In my callback, I try to raise a winrt::event, but that sometimes takes more than 50 ms. When it happens, it blocks the audio thread and creates the discontinuity...

Media Foundation - How to select input from crossbar?

Good Evening
I'm trying to put together a little video capture DLL using Media Foundation, largely adapted from the SimpleCapture and CaptureEngine SDK samples. The whole thing works great when the capture source is my built-in webcam.
The problem is that when I try using an external USB frame grabber with multiple inputs (S-Video, composite & stereo audio), there is no preview stream. I have a function that uses the DirectShow interface and seems to successfully change the selected input on the crossbar (verified by looking at the options in AMCAP after running the function). However, I have not been able to get the device to preview at all.
Is there a Media Foundation way of selecting inputs from the crossbar? Or if not, can you suggest what may be wrong with the crossbar input selection code below:
HRESULT CaptureManager::doSelectInputUsingCrossbar(std::wstring deviceName, long input)
{
IGraphBuilder *pGraph = NULL;
ICaptureGraphBuilder2 *pBuilder = NULL;
IBaseFilter* pSrc = NULL;
ICreateDevEnum *pDevEnum = NULL;
IEnumMoniker *pClassEnum = NULL;
IMoniker *pMoniker = NULL;
bool bCameraFound = false;
IPropertyBag *pPropBag = NULL;
bool crossbarSet = false;
HRESULT hr = S_OK;
if (!(input == PhysConn_Video_Composite || input == PhysConn_Video_SVideo))
{
return S_FALSE;
}
// Create the Filter Graph Manager.
hr = CoCreateInstance(CLSID_FilterGraph, NULL,
CLSCTX_INPROC_SERVER, IID_IGraphBuilder, (void **)&pGraph);
if (SUCCEEDED(hr))
{
// Create the Capture Graph Builder.
hr = CoCreateInstance(CLSID_CaptureGraphBuilder2, NULL,
CLSCTX_INPROC_SERVER, IID_ICaptureGraphBuilder2,
(void **)&pBuilder);
if (SUCCEEDED(hr))
{
pBuilder->SetFiltergraph(pGraph);
}
else return hr;
}
else return hr;
//////////////////////////
// chooses the default camera filter
hr = CoCreateInstance(CLSID_SystemDeviceEnum, NULL, CLSCTX_INPROC,
IID_ICreateDevEnum, (void **)&pDevEnum);
if (FAILED(hr))
{
return E_FAIL;
}
// Create an enumerator for video capture devices.
if (FAILED(pDevEnum->CreateClassEnumerator(CLSID_VideoInputDeviceCategory, &pClassEnum, 0)))
{
return E_FAIL;
}
if (pClassEnum == NULL)
{
CheckHR(hr, "Class enumerator is null - no input devices detected?");
pDevEnum->Release();
return E_FAIL;
}
while (!bCameraFound && (pClassEnum->Next(1, &pMoniker, NULL) == S_OK))
{
HRESULT hr = pMoniker->BindToStorage(0, 0, IID_IPropertyBag, (void**)(&pPropBag));
if (FAILED(hr))
{
pMoniker->Release();
continue; // Skip this one, maybe the next one will work.
}
// Find the description or friendly name.
VARIANT varName;
VariantInit(&varName);
hr = pPropBag->Read(L"Description", &varName, 0);
if (FAILED(hr))
{
hr = pPropBag->Read(L"FriendlyName", &varName, 0);
}
if (SUCCEEDED(hr))
{
if (0 == wcscmp(varName.bstrVal, deviceName.c_str()))
{
pMoniker->BindToObject(0, 0, IID_IBaseFilter, (void**)&pSrc);
bCameraFound = true;
break;
}//if
VariantClear(&varName);
}
pPropBag->Release();
pMoniker->Release();
}//while
pClassEnum->Release();
pDevEnum->Release();
if (!bCameraFound)//else
{
CheckHR(hr, "Error: Get device Moniker, No device found");
goto done;
}
hr = pGraph->AddFilter(pSrc, L"video capture adapter");
if (FAILED(hr))
{
CheckHR(hr, "Can't add capture device to graph");
goto done;
}
///////////////
IAMCrossbar *pxBar = NULL;
hr = pBuilder->FindInterface(&PIN_CATEGORY_CAPTURE, &MEDIATYPE_Interleaved, pSrc, IID_IAMCrossbar, (void **)&pxBar);
if (FAILED(hr))
{
hr = pBuilder->FindInterface(&PIN_CATEGORY_CAPTURE, &MEDIATYPE_Video, pSrc, IID_IAMCrossbar, (void **)&pxBar);
if (FAILED(hr))
{
CheckHR(hr, "Failed to get crossbar filter");
goto done;
}
}//if
//note: no else branch here - when the first FindInterface succeeds we must
//fall through to the routing code below, not bail out
LONG lInpin, lOutpin;
hr = pxBar->get_PinCounts(&lOutpin, &lInpin);
BOOL IPin = TRUE; LONG pIndex = 0, pRIndex = 0, pType = 0;
while (pIndex < lInpin)
{
hr = pxBar->get_CrossbarPinInfo(IPin, pIndex, &pRIndex, &pType);
if (pType == input)
{
break;
}
pIndex++;
}
BOOL OPin = FALSE; LONG pOIndex = 0, pORIndex = 0, pOType = 0;
while (pOIndex < lOutpin)
{
hr = pxBar->get_CrossbarPinInfo(OPin, pOIndex, &pORIndex, &pOType);
if (pOType == PhysConn_Video_VideoDecoder)
{
break;
}
pOIndex++;
}
hr = pxBar->Route(pOIndex, pIndex);
done:
SafeRelease(&pPropBag);
SafeRelease(&pMoniker);
SafeRelease(&pxBar);
SafeRelease(&pGraph);
SafeRelease(&pBuilder);
SafeRelease(&pSrc);
return hr;
}
USB frame grabbers are not supported by Microsoft Media Foundation. Media Foundation supports live sources through the USB Video Class (UVC) device driver; modern webcams ship with such a driver, but USB frame grabbers generally do not.
USB frame grabbers instead ship with DirectShow filters, which are based on a push-model pipeline: sources push media samples into the graph. Microsoft Media Foundation uses a pull-model pipeline: sinks request new media samples, and sources respond to those requests.
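The push/pull distinction drawn above can be illustrated with toy types; these are not DirectShow or Media Foundation interfaces, just a sketch of which side drives the data flow:

```cpp
#include <vector>

// Push pipeline (DirectShow style): the SOURCE drives, delivering samples
// downstream to whatever sink is connected.
struct PushSink {
    std::vector<int> received;
    void onSample(int s) { received.push_back(s); }
};
struct PushSource {
    PushSink* sink;
    void produce(int n) { for (int s = 0; s < n; ++s) sink->onSample(s); }
};

// Pull pipeline (Media Foundation style): the SINK drives, requesting a
// sample, and the source answers each request.
struct PullSource {
    int next = 0;
    int requestSample() { return next++; }
};
struct PullSink {
    std::vector<int> received;
    void drain(PullSource& src, int n) {
        for (int i = 0; i < n; ++i) received.push_back(src.requestSample());
    }
};
```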

How to get ITimeTrigger interface for a Task Scheduler task?

I'm trying to get the "random delay" value for an arbitrary task scheduler task. I came up with the following C++ code:
//IRegisteredTask* pRegisteredTask for task:
// Name: "Regular Maintenance"
// Folder: "\Microsoft\Windows\TaskScheduler"
ITaskDefinition* pTaskDef = NULL;
HRESULT hr = pRegisteredTask->get_Definition(&pTaskDef);
if(SUCCEEDED(hr))
{
//Get triggers
ITriggerCollection* pTriggerCol = NULL;
hr = pTaskDef->get_Triggers(&pTriggerCol);
if(SUCCEEDED(hr))
{
//Get number of triggers
LONG nTriggerCnt = 0;
hr = pTriggerCol->get_Count(&nTriggerCnt);
if(SUCCEEDED(hr))
{
//Look through all triggers
for(LONG t = 0; t < nTriggerCnt; t++)
{
ITrigger* pTrigger = NULL;
hr = pTriggerCol->get_Item(t + 1, &pTrigger);
if(SUCCEEDED(hr))
{
//Get time trigger interface
ITimeTrigger *pTimeTrigger = NULL;
hr = pTrigger->QueryInterface(IID_ITimeTrigger, (void**)&pTimeTrigger);
if(SUCCEEDED(hr))
{
BSTR bstrRndDelay = NULL;
hr = pTimeTrigger->get_RandomDelay(&bstrRndDelay);
if(SUCCEEDED(hr))
{
//Check random delay
}
SysFreeString(bstrRndDelay);
}
else
{
//0x80004002 (E_NOINTERFACE) error happens here
}
if(pTimeTrigger)
pTimeTrigger->Release();
}
if(pTrigger)
pTrigger->Release();
}
}
}
if(pTriggerCol)
pTriggerCol->Release();
}
if(pTaskDef)
pTaskDef->Release();
I'm testing the code snippet above on a task that I know has a random delay set up, i.e., \Microsoft\Windows\TaskScheduler->Regular Maintenance on my Windows 8.1:
but for some reason, when I try to get the ITimeTrigger interface, I get error 0x80004002 (E_NOINTERFACE), as marked in the code above.
Any idea what I am doing wrong?
You can't get the random delay through ITimeTrigger for these tasks: ITimeTrigger represents only one-time (TASK_TRIGGER_TIME) triggers, so QueryInterface returns E_NOINTERFACE for daily, weekly, and other trigger types. You have to query each trigger for its concrete interface, for example IDailyTrigger or IWeeklyTrigger, and read the random delay from that:
//these pointers must first be obtained via pTrigger->QueryInterface after
//checking the trigger type with ITrigger::get_Type
IDailyTrigger* lpDailyTrigger;
lpDailyTrigger->get_RandomDelay(&lcRandomDelay);
IWeeklyTrigger* lpWeeklyTrigger;
lpWeeklyTrigger->get_RandomDelay(&lcRandomDelay);
You have to do this for each trigger type.
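As an untested sketch, dispatching on the trigger type before querying the matching interface might look like this (error handling omitted; only two types shown):

```cpp
//sketch: query the concrete trigger interface that matches the type
TASK_TRIGGER_TYPE2 type;
if(SUCCEEDED(pTrigger->get_Type(&type)))
{
BSTR bstrRndDelay = NULL;
if(type == TASK_TRIGGER_TIME)
{
ITimeTrigger* pTime = NULL;
if(SUCCEEDED(pTrigger->QueryInterface(IID_ITimeTrigger, (void**)&pTime)))
{
pTime->get_RandomDelay(&bstrRndDelay);
pTime->Release();
}
}
else if(type == TASK_TRIGGER_DAILY)
{
IDailyTrigger* pDaily = NULL;
if(SUCCEEDED(pTrigger->QueryInterface(IID_IDailyTrigger, (void**)&pDaily)))
{
pDaily->get_RandomDelay(&bstrRndDelay);
pDaily->Release();
}
}
//...TASK_TRIGGER_WEEKLY -> IWeeklyTrigger, and so on
SysFreeString(bstrRndDelay);
}
```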

DirectShow and openCV. Read video file and process

I searched on StackOverflow, but I didn't find the answer. I'm developing an application that uses OpenCV, but I need to work with different videos (mostly *.avi), so I decided to use DirectShow. I was able to create a simple application, but I can't find any explanation of how to get frames from an *.avi without creating a video window. In fact, I only need to read the video with DirectShow, and then I'll use OpenCV to process and display it.
Any help appreciated. Thanks in advance!
Create a graph with a null renderer. Also look at the Sample Grabber example in the DirectShow SDK; it shows how to grab a frame from a graph. You can then pass the frame on to OpenCV for processing.
Basically you want to connect something like this:
Source -> Sample Grabber -> Null Renderer
Download GraphEdit or GraphEdit+ and you can visually represent these filters. As an example I went ahead and built a graph from my local webcam to a sample grabber connected to a null renderer. The C# code generated by GraphEdit+ is this:
//Don't forget to add reference to DirectShowLib in your project.
using System;
using System.Collections.Generic;
using System.Runtime.InteropServices.ComTypes;
using System.Runtime.InteropServices;
using DirectShowLib;
namespace graphcode
{
class Program
{
static void checkHR(int hr, string msg)
{
if (hr < 0)
{
Console.WriteLine(msg);
DsError.ThrowExceptionForHR(hr);
}
}
static void BuildGraph(IGraphBuilder pGraph)
{
int hr = 0;
//graph builder
ICaptureGraphBuilder2 pBuilder = (ICaptureGraphBuilder2)new CaptureGraphBuilder2();
hr = pBuilder.SetFiltergraph(pGraph);
checkHR(hr, "Can't SetFiltergraph");
Guid CLSID_SampleGrabber = new Guid("{C1F400A0-3F08-11D3-9F0B-006008039E37}"); //qedit.dll
Guid CLSID_NullRenderer = new Guid("{C1F400A4-3F08-11D3-9F0B-006008039E37}"); //qedit.dll
//add Integrated Camera
IBaseFilter pIntegratedCamera = CreateFilter(@"device:pnp:\\?\usb#vid_04f2&pid_b221&mi_00#7&34997cec&0&0000#{65e8773d-8f56-11d0-a3b9-00a0c9223196}\global");
hr = pGraph.AddFilter(pIntegratedCamera, "Integrated Camera");
checkHR(hr, "Can't add Integrated Camera to graph");
//add SampleGrabber
IBaseFilter pSampleGrabber = (IBaseFilter)Activator.CreateInstance(Type.GetTypeFromCLSID(CLSID_SampleGrabber));
hr = pGraph.AddFilter(pSampleGrabber, "SampleGrabber");
checkHR(hr, "Can't add SampleGrabber to graph");
//add Null Renderer
IBaseFilter pNullRenderer = (IBaseFilter)Activator.CreateInstance(Type.GetTypeFromCLSID(CLSID_NullRenderer));
hr = pGraph.AddFilter(pNullRenderer, "Null Renderer");
checkHR(hr, "Can't add Null Renderer to graph");
//connect Integrated Camera and SampleGrabber
hr = pGraph.ConnectDirect(GetPin(pIntegratedCamera, "Capture"), GetPin(pSampleGrabber, "Input"), null);
checkHR(hr, "Can't connect Integrated Camera and SampleGrabber");
//connect SampleGrabber and Null Renderer
hr = pGraph.ConnectDirect(GetPin(pSampleGrabber, "Output"), GetPin(pNullRenderer, "In"), null);
checkHR(hr, "Can't connect SampleGrabber and Null Renderer");
}
static void Main(string[] args)
{
try
{
IGraphBuilder graph = (IGraphBuilder)new FilterGraph();
Console.WriteLine("Building graph...");
BuildGraph(graph);
Console.WriteLine("Running...");
IMediaControl mediaControl = (IMediaControl)graph;
IMediaEvent mediaEvent = (IMediaEvent)graph;
int hr = mediaControl.Run();
checkHR(hr, "Can't run the graph");
bool stop = false;
int n = 0;
while (!stop)
{
System.Threading.Thread.Sleep(500);
Console.Write(".");
EventCode ev;
IntPtr p1, p2;
if (mediaEvent.GetEvent(out ev, out p1, out p2, 0) == 0)
{
if (ev == EventCode.Complete || ev == EventCode.UserAbort)
{
Console.WriteLine("Done!");
stop = true;
}
else
if (ev == EventCode.ErrorAbort)
{
Console.WriteLine("An error occurred: HRESULT={0:X}", p1);
mediaControl.Stop();
stop = true;
}
mediaEvent.FreeEventParams(ev, p1, p2);
}
// stop after 10 seconds
n++;
if (n > 20)
{
Console.WriteLine("stopping..");
mediaControl.Stop();
stop = true;
}
}
}
catch (COMException ex)
{
Console.WriteLine("COM error: " + ex.ToString());
}
catch (Exception ex)
{
Console.WriteLine("Error: " + ex.ToString());
}
}
public static IBaseFilter CreateFilter(string displayName)
{
int hr = 0;
IBaseFilter filter = null;
IBindCtx bindCtx = null;
IMoniker moniker = null;
try
{
hr = CreateBindCtx(0, out bindCtx);
Marshal.ThrowExceptionForHR(hr);
int eaten;
hr = MkParseDisplayName(bindCtx, displayName, out eaten, out moniker);
Marshal.ThrowExceptionForHR(hr);
Guid guid = typeof(IBaseFilter).GUID;
object obj;
moniker.BindToObject(bindCtx, null, ref guid, out obj);
filter = (IBaseFilter)obj;
}
finally
{
if (bindCtx != null) Marshal.ReleaseComObject(bindCtx);
if (moniker != null) Marshal.ReleaseComObject(moniker);
}
return filter;
}
[DllImport("ole32.dll")]
public static extern int CreateBindCtx(int reserved, out IBindCtx ppbc);
[DllImport("ole32.dll")]
public static extern int MkParseDisplayName(IBindCtx pcb, [MarshalAs(UnmanagedType.LPWStr)] string szUserName, out int pchEaten, out IMoniker ppmk);
static IPin GetPin(IBaseFilter filter, string pinname)
{
IEnumPins epins;
int hr = filter.EnumPins(out epins);
checkHR(hr, "Can't enumerate pins");
IntPtr fetched = Marshal.AllocCoTaskMem(4);
IPin[] pins = new IPin[1];
while (epins.Next(1, pins, fetched) == 0)
{
PinInfo pinfo;
pins[0].QueryPinInfo(out pinfo);
bool found = (pinfo.name == pinname);
DsUtils.FreePinInfo(pinfo);
if (found)
return pins[0];
}
checkHR(-1, "Pin not found");
return null;
}
}
}
You'd still need to actually capture the sample frame, but as mentioned in the post above, you can look at the Sample Grabber documentation on MSDN to find out how to do that.

multiple web cams in AMCap

I built the AMCap sample from DirectShow in the SDK. It is able to handle 2 or more webcams. How do I modify the program to use them simultaneously, e.g. press one button that says 'start capture' so that all the cameras start capturing, and one button that says 'stop capture' to stop all the cameras? I want the frames from the different cameras to be saved in different files. I am new to C++ and any kind of help is appreciated! Thanks for your time!
Use the GraphEdit tool. There you can build your own graph with all video input devices connected. Please see http://en.wikipedia.org/wiki/GraphEdit
If you're just starting out with DirectShow programming, this might be a bit of a hill climb, but I hope it's of some help, or points you in the right direction.
MSDN has a page describing how to select a capture device. It's a little thin on examples, so I've provided a simple implementation below.
The theory is that you need to enumerate all devices in the CLSID_VideoInputDeviceCategory, and attempt to create a render graph for each valid item in the category.
First I'll show you the code to iterate the names of the devices in a category. Below are three static functions you can use to do this. Once you've added these functions to your project, you can list the devices in the Video Input Device category by calling the following function:
ListDevicesInCategory(CLSID_VideoInputDeviceCategory);
Okay, here are the three functions. ListDevicesInCategory() is where the work happens; it depends on the other two functions, FindDeviceInCategory() and CountDevicesInCategory().
#define RFAIL(x) { HRESULT hr = x; if(FAILED(hr)) {return hr;} }
static HRESULT FindDeviceInCategory(IBaseFilter** pSrc, const IID& cls, wstring& wFilterName,int devNum)
{
CComPtr<ICreateDevEnum> spDevEnum;
CComPtr<IEnumMoniker> spEnum;
int i;
RFAIL( spDevEnum.CoCreateInstance(CLSID_SystemDeviceEnum) );
RFAIL( spDevEnum->CreateClassEnumerator(cls, &spEnum, 0) );
if(spEnum == 0)
return E_FAIL;
for(i = 0; ; i++) //loop until Next() stops returning monikers
{
CComPtr<IMoniker> spiMoniker;
if( spEnum->Next(1, &spiMoniker, 0) != S_OK )
return E_FAIL;
if( devNum == i)
{
CComVariant varName;
CComPtr<IPropertyBag> spiPropBag;
RFAIL(spiMoniker->BindToStorage(0, 0, IID_IPropertyBag,reinterpret_cast<void**>(&spiPropBag)));
RFAIL(spiPropBag->Read(L"FriendlyName", &varName, 0));
RFAIL(spiMoniker->BindToObject(0, 0, IID_IBaseFilter, reinterpret_cast<void**>(pSrc)));
wFilterName = V_BSTR(&varName);
varName.Clear();
return S_OK;
}
}
return E_FAIL;
}
static HRESULT CountDevicesInCategory( int *pCount, const IID& categoryClass )
{
// pass in a category class like CLSID_VideoInputDeviceCategory, writes the count of the number of filters in that category
// available on the local machine
HRESULT hr = S_OK;
*pCount = 0; //make sure the count starts at zero
CComPtr<ICreateDevEnum> spIDevEnum;
CComPtr<IEnumMoniker> spIEnum;
CComPtr<IMoniker> spIMoniker;
hr = CoCreateInstance(CLSID_SystemDeviceEnum, 0, CLSCTX_INPROC_SERVER, IID_ICreateDevEnum, reinterpret_cast<void**>(&spIDevEnum));
if( ! SUCCEEDED(hr) || hr == S_FALSE )
{
*pCount = 0;
return hr;
}
hr = spIDevEnum->CreateClassEnumerator( categoryClass, &spIEnum, 0 );
if( ! SUCCEEDED(hr) || hr == S_FALSE )
{
*pCount = 0;
return hr;
}
if( spIEnum )
{
while( spIEnum->Next(1, &spIMoniker, 0) == S_OK )
{
(*pCount) ++;
spIMoniker.Release(); //release so the smart pointer can receive the next moniker (Detach would leak)
}
}
return S_OK;
}
static HRESULT ListDevicesInCategory( const GUID & cls )
{
CComPtr<IBaseFilter> spSource;
wstring * psNextFilter = NULL;
int nDeviceNum = 0;
int nTotalNumDevices = 0;
HRESULT hr = S_OK;
bool bComplete = false;
if( FAILED(CountDevicesInCategory( &nTotalNumDevices, (IID)cls )) )
bComplete = TRUE;
if( nTotalNumDevices == 0 )
bComplete = TRUE;
while( ! bComplete )
{
psNextFilter = new std::wstring;
hr = FindDeviceInCategory( &spSource, (IID)cls, *psNextFilter, nDeviceNum++ );
if( SUCCEEDED(hr) && spSource )
{
if ( !psNextFilter->empty() )
{
wcout << *psNextFilter << endl;
}
delete psNextFilter;
psNextFilter = NULL;
spSource.Release();
spSource = NULL;
}
else
bComplete = TRUE;
}
return S_OK;
}
Once you've identified an item in a category that you're interested in, you can add it to a graph with the IGraphBuilder::AddFilter call.
To add a filter to your graph, you'll first need to get an IBaseFilter* to that filter. I've got one more function for you to do this with.
Define an IBaseFilter smart pointer:
CComPtr<IBaseFilter> spSource;
Attach to the filter:
spSource.Attach(
GetFilter(CLSID_VideoInputDeviceCategory, CComBSTR(L"Osprey-450e Video Device 1A"))
);
Here's that last function - GetFilter:
static IBaseFilter * GetFilter( REFCLSID clsidDeviceClass, CComBSTR & sName )
{
HRESULT hr;
IBaseFilter * pRetFilter = NULL;
ICreateDevEnum * pSysDevEnum = NULL;
IEnumMoniker * pEnum = NULL;
IMoniker * pMoniker = NULL;
int nSameSrcCounter = 0;
hr = CoCreateInstance(CLSID_SystemDeviceEnum, NULL, CLSCTX_INPROC_SERVER, IID_ICreateDevEnum, (void**)&pSysDevEnum);
if( pSysDevEnum )
hr = pSysDevEnum->CreateClassEnumerator(clsidDeviceClass, &pEnum, 0);
if (hr != S_OK)
return NULL;
USES_CONVERSION;
while ( pEnum->Next(1, &pMoniker, NULL) == S_OK )
{
IPropertyBag *pPropBag = NULL;
VARIANT var;
VariantInit(&var);
pMoniker->BindToStorage(0, 0, IID_IPropertyBag, (void **)&pPropBag);
hr = pPropBag->Read(L"FriendlyName", &var, 0);
if (SUCCEEDED(hr))
{
if(sName == OLE2T(var.bstrVal))
{
hr = pMoniker->BindToObject(NULL, NULL, IID_IBaseFilter, (void**)&pRetFilter);
if (FAILED(hr))
pRetFilter = NULL;
VariantClear(&var);
pPropBag->Release();
pMoniker->Release();
break;
}
}
VariantClear(&var);
pPropBag->Release();
pMoniker->Release();
}
pSysDevEnum->Release();
pEnum->Release();
return pRetFilter;
}