WASAPI: Choosing a wave format for exclusive output - c++

I'm trying to open an exclusive stream with an output device using WASAPI. I'm having trouble choosing an acceptable format, since there appear to be no hints as to what formats are accepted by a given device.
In my case, IAudioClient::GetMixFormat(), which would otherwise return a sort of default format for the device, returns a format that can't be used in exclusive mode (IAudioClient::IsFormatSupported() returns AUDCLNT_E_UNSUPPORTED_FORMAT). I don't know where to go from there. There's a ridiculous number of combinations of wave format parameters - do I literally have to iterate through every one of them until something works?

Well, I asked the MSDN forums and they came up with a good answer.
You need to check the device's default format via IMMDevice::OpenPropertyStore() and then IPropertyStore::GetValue(PKEY_AudioEngine_DeviceFormat), not IAudioClient::GetMixFormat(). Here is the code that retrieves an acceptable WAVEFORMATEX structure:
// CoInitialize/enumerate devices as usual; 'device' is the IMMDevice, 'audioClient' the IAudioClient.
IPropertyStore* store = nullptr;
hr = device->OpenPropertyStore(STGM_READ, &store);
if (FAILED(hr)) {
    ExitProcess(1);
}

// PKEY_AudioEngine_DeviceFormat holds the device's default format as a serialized WAVEFORMATEX.
PROPVARIANT prop;
PropVariantInit(&prop);
hr = store->GetValue(PKEY_AudioEngine_DeviceFormat, &prop);
if (FAILED(hr)) {
    ExitProcess(2);
}

hr = device->Activate(
    __uuidof(IAudioClient),
    CLSCTX_ALL,
    NULL,
    (void**)&audioClient
);
device->Release();
device = nullptr;
if (FAILED(hr)) {
    ExitProcess(3);
}

// The blob payload is the WAVEFORMATEX to test against exclusive mode.
hr = audioClient->IsFormatSupported(
    AUDCLNT_SHAREMODE_EXCLUSIVE,
    (PWAVEFORMATEX)prop.blob.pBlobData,
    NULL
);
if (FAILED(hr)) {
    ExitProcess(4);
}
The final value of hr is S_OK.
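For completeness, once IsFormatSupported() accepts the format, the next step is to open the exclusive stream with IAudioClient::Initialize(). Below is only a minimal sketch of that step, reusing the variables from the snippet above; the period choice and the lack of error handling are simplifications, not part of the original answer.

    // Sketch: open the exclusive-mode stream using the format from the property store.
    REFERENCE_TIME defaultPeriod = 0, minimumPeriod = 0;
    hr = audioClient->GetDevicePeriod(&defaultPeriod, &minimumPeriod);

    hr = audioClient->Initialize(
        AUDCLNT_SHAREMODE_EXCLUSIVE,
        0,                                   // no stream flags; add AUDCLNT_STREAMFLAGS_EVENTCALLBACK for event-driven mode
        defaultPeriod,                       // buffer duration
        defaultPeriod,                       // periodicity (commonly the same value in exclusive mode)
        (PWAVEFORMATEX)prop.blob.pBlobData,  // the format IsFormatSupported just accepted
        NULL);                               // default audio session

    PropVariantClear(&prop);                 // free the blob once the format is no longer needed
    store->Release();

Note that Initialize() can still return AUDCLNT_E_BUFFER_SIZE_NOT_ALIGNED on some devices, in which case the buffer duration has to be recomputed from IAudioClient::GetBufferSize() and the call retried.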

Related

WPD get media dimensions

I want to get the width and height of an image file on a WPD device via IPortableDeviceValues.
According to the Windows Dev Center, every object whose type is WPD_CONTENT_TYPE_IMAGE (which mine are) is required to provide WPD_MEDIA_WIDTH/WPD_MEDIA_HEIGHT, but I always get an error.
HRESULT MyPortableDevice::getIntValue(IPortableDeviceProperties* properties, PCWSTR objectID, const PROPERTYKEY& key, DWORD* value)
{
    ComPtr<IPortableDeviceValues>        objectProperties;
    ComPtr<IPortableDeviceKeyCollection> propertiesToRead;

    HRESULT hr = CoCreateInstance(CLSID_PortableDeviceKeyCollection,
                                  nullptr,
                                  CLSCTX_INPROC_SERVER,
                                  IID_PPV_ARGS(&propertiesToRead));
    if (SUCCEEDED(hr)) {
        hr = propertiesToRead->Add(key);
    }
    if (SUCCEEDED(hr)) {
        hr = properties->GetValues(objectID,
                                   propertiesToRead.Get(),
                                   &objectProperties);
    }
    if (SUCCEEDED(hr)) {
        ULONG intValue = 0;
        hr = objectProperties->GetUnsignedIntegerValue(key, &intValue);
        if (SUCCEEDED(hr)) {
            *value = intValue;   // write through the out parameter
        }
    }
    return hr;
}
I always get an error value from
hr = objectProperties->GetUnsignedIntegerValue(key, &intValue);
hr is 0x80070490, and I can't find this error code here.
Does anyone know what's wrong?
The error you got is:
Error code: (HRESULT) 0x80070490 (2147943568) - Element not found.
The most probable reason you got this error is that phone app developers often just ignore some of these properties.
I connected my phone to my PC and checked some images with the WPD Information Tool, and I only get these fields for a .jpg screenshot:
I think in most cases you need to read the content of the picture into a stream and check its parameters directly. For some formats you may only need to read the "header" part and get the WIDTH and HEIGHT from there.
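A minimal sketch of that fallback, assuming you already hold the IPortableDeviceContent for the device (content, objectID and the buffer size are placeholders; the actual header parsing is format-specific and omitted):

    // Sketch: stream the object's default resource and inspect the image header yourself.
    ComPtr<IPortableDeviceResources> resources;
    HRESULT hr = content->Transfer(&resources);          // 'content' is an assumed IPortableDeviceContent*
    if (SUCCEEDED(hr)) {
        DWORD optimalBufferSize = 0;
        ComPtr<IStream> stream;
        hr = resources->GetStream(objectID,
                                  WPD_RESOURCE_DEFAULT,  // the object's main content
                                  STGM_READ,
                                  &optimalBufferSize,
                                  &stream);
        if (SUCCEEDED(hr)) {
            BYTE header[4096];
            ULONG bytesRead = 0;
            hr = stream->Read(header, sizeof(header), &bytesRead);
            // Decode width/height from the JPEG/PNG header in 'header' here.
        }
    }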

How to play a sound (mp3 / wav) on Windows using native API on a selected output device

I simply want to be able to create options for my program, so that the user can pick which output device will be used to play sounds, like this option in MS Lync:
I originally created my program in Qt, and I asked a similar (but not identical) question here: Qt5+ How to set default audio device for QMediaPlayer
I figured out that Qt is too buggy for this and that it's impossible there, so I lowered my requirements and will use the native Windows API, as that is probably the only solution here. This unfortunately requires rewriting some parts of my program, and now I am following this guide on MSDN: https://msdn.microsoft.com/en-us/library/windows/desktop/dd371455%28v=vs.85%29.aspx
I basically want to be able to do the following:
List all available output devices and display them on the preferences form - I already have working code for that using IMMDeviceEnumerator
Let the user pick a device they want to use for the output of my program - I already have that part
Create a function, let's call it PlaySound(string path), that when called with the path of a .wav or .mp3 file would use the preferred IMMDevice and play the file through it - this is what I need help with
Because I have been using Qt so far and have pretty much no idea about MS Windows internals, I don't know how one could take a file stored somewhere on disk and play it using the Windows APIs, especially through the selected IMMDevice which the user set in their preferences. I have been googling and searching through the documentation, but I could only find extremely complex and convoluted solutions, such as https://msdn.microsoft.com/en-us/library/windows/desktop/dd316756%28v=vs.85%29.aspx
I even found some examples that play an mp3 file using an MCI device, but those didn't really explain how to change the preferred output device, so they aren't very useful for my case.
I understand that a low-level API is probably not going to offer a simple "playmyfile" function, but it would be nice to have at least an example of a super-simple solution, or some tutorial that plays media files using a selected output device on Windows, that I could use as a starting reference. I have a working, active IMMDevice; now I just need to make it possible to play mp3 / wav files through it.
NOTE: This is not a generic "how to play a sound on Windows" question. I need to be able to play that sound on a selected audio output device, for my program only (just like MS Lync, VLC media player or any other advanced audio program can). I don't want to change global system preferences (default device etc.).
I managed to do that, surprisingly, using the native Windows library called "DirectShow", which is primarily designed for video rendering but can handle audio as well.
How to:
Enumerate output devices
This function iterates over all audio devices detected by the OS and stores them in a list.
void Options::Initialize()
{
#ifdef WIN
    HRESULT hr;
    ICreateDevEnum *pSysDevEnum = NULL;
    hr = CoCreateInstance(CLSID_SystemDeviceEnum, NULL, CLSCTX_INPROC_SERVER, IID_ICreateDevEnum, (void **)&pSysDevEnum);
    if (FAILED(hr))
        return;
    IEnumMoniker *pEnumCat = NULL;
    hr = pSysDevEnum->CreateClassEnumerator(CLSID_AudioRendererCategory, &pEnumCat, 0);
    if (hr == S_OK)
    {
        // Enumerate the monikers.
        IMoniker *pMoniker = NULL;
        ULONG cFetched;
        while (pEnumCat->Next(1, &pMoniker, &cFetched) == S_OK)
        {
            IPropertyBag *pPropBag;
            hr = pMoniker->BindToStorage(0, 0, IID_IPropertyBag, (void **)&pPropBag);
            if (SUCCEEDED(hr))
            {
                // To retrieve the filter's friendly name, do the following:
                VARIANT varName;
                VariantInit(&varName);
                hr = pPropBag->Read(L"FriendlyName", &varName, 0);
                if (SUCCEEDED(hr))
                {
                    OutputDevice device;
                    device.Name = QString((QChar*)varName.bstrVal, wcslen(varName.bstrVal));
                    Options::devices.append(device);
                }
                VariantClear(&varName);
                pPropBag->Release();
            }
            pMoniker->Release();
        }
        pEnumCat->Release();
    }
    pSysDevEnum->Release();
#endif
}
Create a filter for the device that the user selected
Iterate over all devices once more and create a filter for the one the user selected
HRESULT hr;
ICreateDevEnum *pSysDevEnum = NULL;
hr = CoCreateInstance(CLSID_SystemDeviceEnum, NULL, CLSCTX_INPROC_SERVER, IID_ICreateDevEnum, (void **)&pSysDevEnum);
if (FAILED(hr))
{
    Error("Failed SystemDeviceEnum");
    return;
}
IEnumMoniker *pEnumCat = NULL;
QSettings s;
hr = pSysDevEnum->CreateClassEnumerator(CLSID_AudioRendererCategory, &pEnumCat, 0);
IBaseFilter *pFilter = NULL;
if (hr == S_OK)
{
    // Enumerate the monikers.
    IMoniker *pMoniker = NULL;
    ULONG cFetched;
    int i = 0;
    while (pEnumCat->Next(1, &pMoniker, &cFetched) == S_OK)
    {
        IPropertyBag *pPropBag;
        hr = pMoniker->BindToStorage(0, 0, IID_IPropertyBag, (void **)&pPropBag);
        if (SUCCEEDED(hr))
        {
            // retrieve the filter's friendly name now
            VARIANT varName;
            VariantInit(&varName);
            hr = pPropBag->Read(L"FriendlyName", &varName, 0);
            if (SUCCEEDED(hr))
            {
                QString name = QString((QChar*)varName.bstrVal, wcslen(varName.bstrVal));
                if (s.value("d:" + name).toBool())
                {
                    hr = pMoniker->BindToObject(NULL, NULL, IID_IBaseFilter, (void**)&pFilter);
                    // now we got the filter in pFilter so we can play sound using that filter
                    PlayWin(pFilter, path);
                }
            }
            VariantClear(&varName);
            pPropBag->Release();
        }
        pMoniker->Release();
    }
    pEnumCat->Release();
}
pSysDevEnum->Release();
Play the sound using the filter for our device
In this function, device is the pFilter from the previous function.
HRESULT hr = CoCreateInstance(CLSID_FilterGraph, NULL, CLSCTX_INPROC_SERVER, __uuidof(IGraphBuilder), (void **)&x->pGraph);
if (FAILED(hr))
{
    Error("ERROR - Could not create the Filter Graph Manager.");
    return;
}
hr = x->pGraph->QueryInterface(IID_IBasicAudio, (void**)&x->pOutput);
if (FAILED(hr))
{
    Error("ERROR - Could not create the IBasicAudio.");
    return;
}
x->pFlx = device;
if (device)
    x->pGraph->AddFilter(device, L"fd");
hr = x->pGraph->QueryInterface(__uuidof(IMediaControl), (void **)&x->pControl);
hr = x->pGraph->QueryInterface(__uuidof(IMediaEvent), (void **)&x->pEvent);
// Build the graph.
hr = x->pGraph->RenderFile(path, NULL);
if (SUCCEEDED(hr))
{
    // Run the graph.
    hr = x->pControl->Run();
}
else
{
    Error("Unable to play: " + QString::fromWCharArray(path));
}
This code by itself is of course not going to compile out of the box, but it gives you an idea of how to do it. In a nutshell:
Retrieve the list of all devices and store it somewhere, so that we can create a dialog for the user
Before we play a sound, we check which device the user selected and create a filter for it
We apply the filter to DirectShow's BasicAudio, which is itself able to play any media file supported by the system codecs.
Documentation on msdn: https://msdn.microsoft.com/en-us/library/windows/desktop/dd407292%28v=vs.85%29.aspx
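One detail the snippets above leave out is what to do after Run(): the IMediaEvent that was queried is never used. A small sketch of how playback completion and cleanup could look, reusing the member names from the code above (the infinite wait is just for illustration):

    // Sketch: block until the file has finished playing, then release the graph objects.
    long evCode = 0;
    x->pEvent->WaitForCompletion(INFINITE, &evCode);  // waits for EC_COMPLETE; use a finite timeout in real code
    x->pControl->Stop();

    x->pControl->Release();
    x->pEvent->Release();
    x->pOutput->Release();
    x->pGraph->Release();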

Capture preview to Enhanced Video Renderer

I am basically trying to render a preview from a capture card (720p, from a PS3) to the Enhanced Video Renderer.
Ideally, I would like something like this:
I used to do this:
hr = m_pCapture->RenderStream (&PIN_CATEGORY_CAPTURE, &MEDIATYPE_Video, m_pSrcFilter, NULL, NULL);
But I find that it only renders to the old default renderer, which is not adequate for stretching the image to 1080p (the image becomes pixelated). [http://msdn.microsoft.com/en-us/library/aa930715.aspx]
I want to use the Enhanced Video Renderer as the sink, but I have no idea how. I viewed the tutorials here: http://msdn.microsoft.com/en-us/library/windows/desktop/ff625867%28v=vs.85%29.aspx
and tried to put my code in, but it would not render.
Here is a snippet of the code that sets the source. Assume that setResolution will set the AM_MEDIA_TYPE format and that getVideoSourceByKeyword will get the AVermedia capture card device.
HRESULT DShowPlayer::SetPreviewDevice(PCWSTR keyname)
{
    IBaseFilter *pSource = NULL;
    // Create a new filter graph. (This also closes the old one, if any.)
    HRESULT hr = CoCreateInstance(CLSID_CaptureGraphBuilder2, NULL,
        CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&m_pCapture));
    if (FAILED(hr))
    {
        goto done;
    }
    hr = InitializeGraph();
    if (FAILED(hr))
    {
        goto done;
    }
    // Add the source filter to the graph.
    hr = getVideoSourceByKeyword(keyname, &pSource);
    if (FAILED(hr))
    {
        goto done;
    }
    hr = m_pGraph->AddFilter(pSource, L"Source filter");
    if (FAILED(hr))
    {
        goto done;
    }
    setResolution(pSource, 1280, 720);
    // Try to render the streams.
    hr = RenderStreams(pSource);
    if (FAILED(hr))
    {
        goto done;
    }
    hr = m_pControl->Run();

done:
    if (FAILED(hr))
    {
        TearDownGraph();
    }
    SafeRelease(&pSource);
    return hr;
}
When the code runs RenderStreams, this is the code (from http://msdn.microsoft.com/en-us/library/windows/desktop/ff625878%28v=vs.85%29.aspx):
// Enumerate the pins on the source filter.
hr = pSource->EnumPins(&pEnum);
if (FAILED(hr))
{
    goto done;
}
// Loop through all the pins
IPin *pPin;
while (S_OK == pEnum->Next(1, &pPin, NULL))
{
    PIN_INFO pInfo;
    pPin->QueryPinInfo(&pInfo);
    // Try to render this pin.
    // It's OK if we fail some pins, if at least one pin renders.
    HRESULT hr2 = pGraph2->RenderEx(pPin, AM_RENDEREX_RENDERTOEXISTINGRENDERERS, NULL);
    pPin->Release();
    if (SUCCEEDED(hr2))
    {
        bRenderedAnyPin = TRUE;
    }
}
In Visual Studio I debugged at the pin to get the source name (the "Capture" pin of the AVermedia capture card). It said it attached to the renderer successfully at RenderEx; however, at
hr = m_pControl->Run();
it fails, and the error is that the device is not connected.
I also tried to get the EVR renderer directly and tried to render the stream:
IBaseFilter* render;
m_pVideo->getRender(&render);
m_pGraph->AddFilter(render, L"EVR Filter");
hr = m_pCapture->RenderStream(&PIN_CATEGORY_CAPTURE, &MEDIATYPE_Video, pSource, NULL, render);
if (FAILED(hr))
{
    goto done;
}
But it fails with VFW_E_NOT_IN_GRAPH.
What I am asking: I am still pretty new to DirectShow, and I would like to be able to preview the capture card with the EVR. I have found no comprehensive tutorials or source code for this. If you need any more information, I can add it.
Thanks in advance.
The EVR can be used programmatically in very much the same way as VMR-7/9. The only difference is that the EVR needs "windowless" mode, while the earlier renderers also supported "windowed" mode, which needs only minimal initialization of the renderer.
I suppose you can see video on the EVR in GraphEdit? You should be able to; just use the Preview pin, not Capture, or connect Capture through a Smart Tee filter and its preview output.
The error codes suggest that you are not building the graph correctly. In particular, VFW_E_NOT_IN_GRAPH says your filter is not in the graph, hence the invalid argument. You don't need to use getRender; just CoCreateInstance the EVR the usual and straightforward way. The moment you get an error, you want to put everything on hold and review the filter graph topology you have at that point.
The Windows SDK samples contain \Samples\multimedia\directshow\vmr9\windowless, which shows VMR-9 in windowless mode; this is probably the closest starting point - just switch from VMR-9 to the EVR.
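A minimal sketch of "the usual and straightforward way", reusing m_pGraph, m_pCapture and pSource from the question (error handling trimmed; whether your card exposes a preview pin is an assumption):

    #include <evr.h>  // CLSID_EnhancedVideoRenderer

    // Sketch: create the EVR directly, add it to the graph, and let the capture
    // graph builder render the preview stream into it.
    IBaseFilter *pEVR = NULL;
    HRESULT hr = CoCreateInstance(CLSID_EnhancedVideoRenderer, NULL, CLSCTX_INPROC_SERVER,
                                  IID_PPV_ARGS(&pEVR));
    if (SUCCEEDED(hr))
    {
        hr = m_pGraph->AddFilter(pEVR, L"EVR");
    }
    if (SUCCEEDED(hr))
    {
        // Prefer PIN_CATEGORY_PREVIEW; the capture graph builder inserts a Smart Tee
        // automatically if the device only has a capture pin.
        hr = m_pCapture->RenderStream(&PIN_CATEGORY_PREVIEW, &MEDIATYPE_Video,
                                      pSource, NULL, pEVR);
    }

The EVR still needs its video window set up before Run() (IMFGetService with MR_VIDEO_RENDER_SERVICE to obtain IMFVideoDisplayControl, then SetVideoWindow/SetVideoPosition), which the sketch does not show.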

Capturing Video to an AVI File with DirectShow

I'm trying to write a C++ application with DirectShow that saves captured video to a file.
The steps in the code are:
1. Create the Capture Graph Builder
2. Create the System Device Enumerator
3. Create the System Device Enumerator - in order to get the capture filter
4. Create an enumerator for the video capture category
5. Create a query to capture the video
Attaching the code:
// gets the device filter
HRESULT getDeviceFilter(REFCLSID clsid, int order, IBaseFilter **pCap)
{
    ICreateDevEnum *pDevEnum = NULL;
    IEnumMoniker *pEnum = NULL;
    // Create the System Device Enumerator.
    HRESULT hr = CoCreateInstance(CLSID_SystemDeviceEnum, NULL,
        CLSCTX_INPROC_SERVER, IID_ICreateDevEnum,
        reinterpret_cast<void**>(&pDevEnum));
    if (SUCCEEDED(hr))
    {
        // Create an enumerator for the video capture category.
        hr = pDevEnum->CreateClassEnumerator(clsid, &pEnum, 0);
    }
    IMoniker *pMoniker = NULL;
    if (pEnum->Next(1, &pMoniker, NULL) == S_OK)
        hr = pMoniker->BindToObject(0, 0, IID_IBaseFilter, (void**)pCap);
    return hr;
}
int main()
{
    IGraphBuilder *pGraph = 0;
    ICaptureGraphBuilder2 *pBuild = 0;
    IBaseFilter *pCap = 0;
    HRESULT hr = CoInitialize(NULL);

    // Create the Capture Graph Builder.
    hr = CoCreateInstance(CLSID_CaptureGraphBuilder2,
                          NULL,
                          CLSCTX_INPROC_SERVER,
                          IID_ICaptureGraphBuilder2,
                          (void**)&pBuild);

    ICreateDevEnum *pDevEnum = NULL;
    IEnumMoniker *pEnum = NULL;
    // Create the System Device Enumerator.
    hr = CoCreateInstance(CLSID_SystemDeviceEnum,
                          NULL,
                          CLSCTX_INPROC_SERVER,
                          IID_ICreateDevEnum,
                          reinterpret_cast<void**>(&pDevEnum));

    IBaseFilter *pMux = 0;
    hr = pBuild->SetOutputFileName(&MEDIASUBTYPE_Avi,  // Specifies AVI for the target file.
                                   L"C:\\Example.avi", // File name.
                                   &pMux,              // Receives a pointer to the mux.
                                   NULL);              // (Optional) Receives a pointer to the file sink.

    // gets the first device, VDM tv card
    hr = getDeviceFilter(CLSID_VideoInputDeviceCategory, 0, &pCap);

    hr = pBuild->RenderStream(&PIN_CATEGORY_CAPTURE, // Pin category.
                              &MEDIATYPE_Video,      // Media type.
                              pCap,                  // Capture filter.
                              NULL,                  // Intermediate filter (optional).
                              pMux);                 // Mux or file sink filter.

    // Release the mux filter.
    pMux->Release();

    IConfigAviMux *pConfigMux = NULL;
    hr = pMux->QueryInterface(IID_IConfigAviMux, (void**)&pConfigMux);
    if (SUCCEEDED(hr))
    {
        pConfigMux->SetMasterStream(1);
        pConfigMux->Release();
    }
    return 0;
}
However, upon calling RenderStream I get an E_INVALIDARG error
Any suggestions?
Thanks
Take a look at this topic. It seems you have missed some steps.
First of all, you are not using pGraph anywhere. You should create a graph manager, and then initialize the graph builder by providing it with a pointer to the graph manager using SetFiltergraph.
// Create the Filter Graph Manager.
hr = CoCreateInstance(CLSID_FilterGraph, 0, CLSCTX_INPROC_SERVER,
                      IID_IGraphBuilder, (void**)&pGraph);
if (SUCCEEDED(hr))
{
    // Initialize the Capture Graph Builder.
    pBuild->SetFiltergraph(pGraph);
    // ...
}
Second, you are using filters that are not managed by a graph manager. Quoting from here:
All of the filters specified by pSource, pIntermediate, and pSink must be added to the graph prior to calling the method.
You will have to add the filters pCap and pMux to the graph manager you created earlier, using AddFilter. You should do this before calling RenderStream. This is so because RenderStream eventually calls connection methods on the manager.
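A small sketch of that addition, using the variable names from the question (treat it as an illustration of where the AddFilter calls go, not a drop-in patch):

    // Make sure the filters live in the graph the builder manages before RenderStream.
    hr = pGraph->AddFilter(pCap, L"Capture Filter");  // the capture device
    hr = pGraph->AddFilter(pMux, L"AVI Mux");         // the mux, if SetOutputFileName has not already added it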
If the above steps do not solve your problem, there are several other things you can try.
Device Filter. You are using the first device of CLSID_VideoInputDeviceCategory, but are you sure this is the correct device? Webcams and such are also included in this category. Make sure that there are no other devices of the same category connected, and try again.
Connection. Every device is different. It could be that your device can't be directly connected to the mux. In this case, we will have to figure out why, and determine whether you need to connect additional filters (like decoders). GraphEdit is a very fast way to find these filters.
Pin Category/Media Type. In my experience, E_INVALIDARG is 90% of the time caused by the first 2 parameters of RenderStream. Try setting the pin category or the media type to NULL (see the sketch after this list).
System Device Enumerator: As you described yourself, you are creating a system device enumerator twice. This seems odd to me; why not use one for both purposes?
If your code still does not work, you should provide me with more information. Have you achieved your goal using GraphEdit? What does your VDM TV card filter look like (pins, media types)?
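For the pin category/media type point above, the relaxed call would look something like this (a sketch only; whether it helps depends on the device):

    // Let the capture graph builder pick any suitable output pin and media type.
    hr = pBuild->RenderStream(
        NULL,   // no specific pin category
        NULL,   // no specific media type
        pCap,   // capture filter
        NULL,   // no intermediate filter
        pMux);  // mux / file sink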

How to render and save captured Video/Audio into a custom file/filter format in DirectShow?

Basically, I want to capture audio/video, run it through an mp4 muxer, and save it to a file on disk. Before, I used ICaptureGraphBuilder2, but it seems unusable when saving to custom formats.
My code so far:
I enumerate video/audio devices. In this sample I only try to capture audio. I get the correct device and use GetPin to enumerate the filter's pins to get its output pin.
hr = pMoniker2->BindToObject(0, 0, IID_IBaseFilter, (void**)&pSrc2);
hr = pGraph->AddFilter(pSrc2, L"AudioCap");
hr = GetPin(pSrc2, PINDIR_OUTPUT, &outPin);
This is the custom filter, an MP4 muxer. It loads properly, and I can get the input pin and connect it to my output pin. So far so good.
HRESULT hr = CreateObjectFromPath(TEXT("c:\\filters\\mp4mux.dll"), clsid, &pUnk);
if (SUCCEEDED(hr))
{
    IBaseFilterPtr pFilter = pUnk;
    HRESULT hr = pGraph->AddFilter(pFilter, L"Private Filter");
    hr = GetPin(pFilter, PINDIR_INPUT, &inPin);
}
hr = pGraph->Connect(outPin, inPin);
This is where I get lost; I can't figure out how to take the next steps to render and save the output to a file on disk. Any help with the next steps would be gratefully appreciated - thanks in advance!
EDIT: Filesink code
AM_MEDIA_TYPE mType;
mType.majortype = MEDIATYPE_Video;
mType.subtype = MEDIASUBTYPE_H264;
mType.bFixedSizeSamples = FALSE;
mType.bTemporalCompression = TRUE;
mType.lSampleSize = 0;
mType.formattype = FORMAT_None;
mType.pUnk = NULL;
mType.cbFormat = 0;
mType.pbFormat = NULL;
//Not 100% sure about the setup of the media format.
IBaseFilter * iFiltera = NULL;
IFileSinkFilter* iFilter = NULL;
IGraphBuilder *pGraph;
hr = pMoniker2->BindToObject(0, 0, IID_IBaseFilter, (void**)&pSrc2); //audio capture
hr = CoCreateInstance(CLSID_FilterGraph, NULL, CLSCTX_INPROC_SERVER, IID_IGraphBuilder, (void**)&pGraph);
hr = CoCreateInstance(CLSID_FileWriter, NULL, CLSCTX_INPROC_SERVER, IID_IBaseFilter, (void**)&iFiltera);
hr = pBuild->SetFiltergraph(pGraph);
hr = pGraph->AddFilter(pSrc2, L"AudioCap");
hr = GetPin(pSrc2, PINDIR_OUTPUT, &outPin); //ADDED
hr = pGraph->AddFilter(iFiltera, L"FileWriter");
hr = iFiltera->QueryInterface(IID_IFileSinkFilter, (void**)&iFilter);
iFilter->SetFileName((LPCOLESTR)"c:\\wav\\tester.mp4", NULL); //UPDATED mType set to NULL
HRESULT hr = CreateObjectFromPath(TEXT("c:\\filters\\mp4mux.dll"), clsid, &pUnk);
IBaseFilterPtr pFilter = pUnk;
if (SUCCEEDED(hr))
{
    HRESULT hr = pGraph->AddFilter(pFilter, L"Private Filter");
    hr = GetPin(pFilter, PINDIR_INPUT, &inPin);    // mux in
    hr = GetPin(pFilter, PINDIR_OUTPUT, &mOutPin); // mux out
    hr = GetPin(iFiltera, PINDIR_INPUT, &filePin); // filewriter in
}
hr = pGraph->Connect(outPin, inPin); //connect audio out and mux in
hr = pGraph->Connect(mOutPin, filePin); //connect mux out and file in; Error 0x80040217(VFW_E_CANNOT_CONNECT?) //works now
//ADDED code
IMediaControl *pMC = NULL;
hr = pGraph->QueryInterface(IID_IMediaControl, (void **)&pMC);
hr = pMC->Run();
Sleep(4000);
hr = pMC->Stop();
You need to have an idea of what filter graph topology you need for the specific task. You are doing capture here - fine. So you have the audio capture filter you provided the code snippet for. Then you either compress the audio (where your preferred choice should be AAC, AKA MPEG-4 Part 3, provided that you are producing an MP4 file) or keep the audio as uncompressed PCM. Then you connect the MPEG-4 multiplexer, as you do. The multiplexer produces an output stream, and you are expected to complete the pipeline with the File Writer filter.
You can manually build the chain in the GraphEdit SDK application (or there are richer alternative tools). Your filter graph looks like this:
Note that you can expose the filter graph in your application, connect to it remotely, and inspect the topology; this makes debugging much easier. Starting/stopping the filter graph (IMediaControl::Run, ::Stop from code) creates the file for you.
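The usual way to expose the graph is to register it in the Running Object Table so GraphEdit can attach via "File > Connect to Remote Graph". A sketch modeled on the documentation's AddToRot helper (requires <strsafe.h>):

    // Register the filter graph in the Running Object Table for remote inspection.
    HRESULT AddGraphToRot(IUnknown *pUnkGraph, DWORD *pdwRegister)
    {
        IMoniker *pMoniker = NULL;
        IRunningObjectTable *pROT = NULL;
        HRESULT hr = GetRunningObjectTable(0, &pROT);
        if (FAILED(hr))
            return hr;

        WCHAR wsz[256];
        StringCchPrintfW(wsz, 256, L"FilterGraph %08x pid %08x",
                         (DWORD_PTR)pUnkGraph, GetCurrentProcessId());

        hr = CreateItemMoniker(L"!", wsz, &pMoniker);
        if (SUCCEEDED(hr))
        {
            hr = pROT->Register(ROTFLAGS_REGISTRATIONKEEPSALIVE, pUnkGraph, pMoniker, pdwRegister);
            pMoniker->Release();
        }
        pROT->Release();
        return hr;
    }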
My understanding is that you get lost immediately after adding the multiplexer. Now you need to find its output pin, add the File Writer, query its IFileSinkFilter, set the destination file name through it, find its input pin, and connect the two unconnected pins (mux output, writer input). Then your pipeline is ready to run.
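Pulled together, those remaining steps could look like the following sketch (pFilter is the mux from the earlier snippet and GetPin is the helper from the question; error handling omitted):

    // Sketch: complete the pipeline mux -> File Writer and name the output file.
    IBaseFilter *pWriter = NULL;
    IFileSinkFilter *pSink = NULL;
    IPin *pMuxOut = NULL, *pWriterIn = NULL;

    hr = CoCreateInstance(CLSID_FileWriter, NULL, CLSCTX_INPROC_SERVER,
                          IID_IBaseFilter, (void**)&pWriter);
    hr = pGraph->AddFilter(pWriter, L"File Writer");

    hr = pWriter->QueryInterface(IID_IFileSinkFilter, (void**)&pSink);
    hr = pSink->SetFileName(L"c:\\wav\\tester.mp4", NULL);  // wide string; the media type can be NULL

    hr = GetPin(pFilter, PINDIR_OUTPUT, &pMuxOut);   // mux output
    hr = GetPin(pWriter, PINDIR_INPUT, &pWriterIn);  // writer input
    hr = pGraph->Connect(pMuxOut, pWriterIn);

After that, IMediaControl::Run starts capturing into the file and IMediaControl::Stop finalizes it, as the added code in the question's edit already does.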