HRESULT 0x80040204 from IMediaObject::ProcessInput - C++

I get this HRESULT when I resample PCM audio to IEEE float audio with the DirectX Media Resampler.
Changing the bits per sample at the same sampling rate is no problem, and neither is resampling from IEEE float to PCM.
This HRESULT is not documented in the context of a DMO object.
It also doesn't happen on every resampling, but periodically.
Does anyone know, or could anyone guess, what it means?

That's DMO_E_NOTACCEPTING; the documentation says:
DMO_E_NOTACCEPTING: Data cannot be accepted.
You can see the code that generates this in dmoimpl.h, although without the derived DMO code I don't think that helps (it means the DMO's InternalAcceptingInput method didn't return S_OK).
I guess this all means that the Resampler DMO doesn't like your input data. Is it definitely set up correctly?
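For what it's worth, here is a minimal sketch of how I would double-check the type setup, assuming the DMO in question is the Audio Resampler DSP (CLSID_CResamplerMediaObject from wmcodecdsp.h); the SetFloatOutput name and its parameters are mine, purely for illustration:

#include <dmo.h>
#include <uuids.h>
#include <mmreg.h>

HRESULT SetFloatOutput(IMediaObject *pDmo, DWORD sampleRate, WORD channels)
{
    // Describe 32-bit IEEE float PCM at the requested rate.
    WAVEFORMATEX wfx = {};
    wfx.wFormatTag      = WAVE_FORMAT_IEEE_FLOAT;
    wfx.nChannels       = channels;
    wfx.nSamplesPerSec  = sampleRate;
    wfx.wBitsPerSample  = 32;
    wfx.nBlockAlign     = wfx.nChannels * wfx.wBitsPerSample / 8;
    wfx.nAvgBytesPerSec = wfx.nSamplesPerSec * wfx.nBlockAlign;

    // Wrap it in a DMO_MEDIA_TYPE and hand it to the DMO. A failing
    // SetOutputType means the DMO rejects the type itself, which is
    // different from ProcessInput refusing data mid-stream.
    DMO_MEDIA_TYPE mt = {};
    HRESULT hr = MoInitMediaType(&mt, sizeof(WAVEFORMATEX));
    if (FAILED(hr)) return hr;
    mt.majortype  = MEDIATYPE_Audio;
    mt.subtype    = MEDIASUBTYPE_IEEE_FLOAT;
    mt.formattype = FORMAT_WaveFormatEx;
    memcpy(mt.pbFormat, &wfx, sizeof(WAVEFORMATEX));

    hr = pDmo->SetOutputType(0, &mt, 0);
    MoFreeMediaType(&mt);
    return hr;
}

Also, since the failure is periodic rather than immediate, it may be worth verifying that ProcessOutput is called to drain pending output before the next ProcessInput; the generic template in dmoimpl.h refuses further input while it still holds output.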

Related

IMediaSample returned by Sample Grabber has unexpected buffer size

I was working on an audio/video capture library for Windows using Media Foundation. However, I faced the issue described in this post for some webcams on Windows 8.1. I have therefore decided to write another implementation using DirectShow, to support in my app the webcams whose drivers haven't been updated yet.
The library works quite well, but I have noticed a problem with some webcams for which the sample (IMediaSample) returned does not have the expected size according to the format that was set before starting the camera.
For example, I have a case where the format set has a subtype of MEDIASUBTYPE_RGB24 (3 bytes per pixel) and the frame size is 640x480. The biSizeImage (from BITMAPINFOHEADER) is indeed 640*480*3 = 921600 when the format is applied.
The IAMStreamConfig::SetFormat() method succeeds in applying the format:
hr = pStreamConfig->SetFormat(pmt);
I also set the format on the Sample Grabber interface, as follows:
hr = pSampleGrabberInterface->SetMediaType(pmt);
I applied the format before starting the graph.
However, in the callback (ISampleGrabberCB::SampleCB), I am receiving a sample of size 230400, which corresponds to a 320x240 frame (320*240*3 = 230400).
HRESULT MyClass::SampleCB(double sampleTime, IMediaSample *pSample)
{
    BYTE *pBuffer = NULL;
    HRESULT hr = pSample->GetPointer(&pBuffer);
    if (SUCCEEDED(hr)) {
        long bufSize = pSample->GetSize();
        // bufSize = 230400
    }
    return hr;
}
I tried to investigate the media type returned by the IMediaSample::GetMediaType() method, but the media type is NULL. According to the documentation of GetMediaType, that means the media type has not changed, so I guess it is still the media type I applied successfully using IAMStreamConfig::SetFormat():
HRESULT hr = pSample->GetMediaType(&pType);
if (SUCCEEDED(hr)) {
    if (pType == NULL) {
        // It enters here => the media type has not changed!
    }
}
Why is the returned sample buffer size not the expected size in this case? How can I solve this issue?
Thanks in advance!
The Sample Grabber callback will always return the "correct" size, in the sense that it matches the actual data size and the formats used in the streaming pipeline.
If you are seeing a mismatch, it means that your filter graph topology differs from what you expect it to be. You need to review the graph (especially via remote connection with GraphEdit), inspect the media types, and check why the graph was built incorrectly. For example, you could be applying the format of interest after connecting the pins, which is too late. A quick sanity check from code is sketched below.
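As a hedged illustration (the variable names are mine, carried over from the question's snippets), you can ask the Sample Grabber for the media type actually negotiated on its input pin once the graph is connected, and compare it with what you requested:

// After the pins are connected, query the negotiated type;
// pSampleGrabberInterface is the ISampleGrabber (qedit.h) from above.
AM_MEDIA_TYPE mt = {};
HRESULT hr = pSampleGrabberInterface->GetConnectedMediaType(&mt);
if (SUCCEEDED(hr) && mt.formattype == FORMAT_VideoInfo && mt.cbFormat >= sizeof(VIDEOINFOHEADER))
{
    VIDEOINFOHEADER *pVih = reinterpret_cast<VIDEOINFOHEADER*>(mt.pbFormat);
    long width  = pVih->bmiHeader.biWidth;   // expect 640
    long height = pVih->bmiHeader.biHeight;  // expect 480
    // If this reports 320x240, the graph negotiated a different format
    // from the one passed to IAMStreamConfig::SetFormat().
}
// Free the format block and any pUnk held by the media type.
if (mt.cbFormat != 0) CoTaskMemFree(mt.pbFormat);
if (mt.pUnk != NULL) mt.pUnk->Release();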
See also:
How can I reverse engineer a DirectShow graph?

Windows Audio Endpoint API. Getting the names of my Audio Devices

My main goal at the moment is to get detailed information about all of the local machine's audio endpoint devices, that is, the objects representing the audio peripherals. I want to be able to choose which device to record from based on some logic (or eventually allow the user to do so manually).
Here's what I've got so far. I'm pretty new to C++, so dealing with all of these abstract classes is getting a bit tricky; feel free to comment on code quality as well.
//Create vector of IMMDevices
UINT endpointCount = NULL;
(*pCollection).GetCount(&endpointCount);
std::vector<IMMDevice**> IMMDevicePP; //IMMDevice seems to contain all endpoint devices, so why have a collection here?
for (UINT i = 0; i < (endpointCount); i++)
{
IMMDevice* pp = NULL;
(*pCollection).Item(i, &pp);
IMMDevicePP.assign(1, &pp);
}
My more technical goal at present is to get objects that implement this interface: http://msdn.microsoft.com/en-us/library/windows/desktop/dd371414(v=vs.85).aspx
This is a type that is supposed to represent a single audio endpoint device, whereas IMMDevice seems to contain a collection of devices. However, IMMEndpoint only contains a method called GetDataFlow, so I'm unsure whether that will help me. Again, the goal is to easily select which endpoint device to record and stream audio from.
Any suggestions? Am I using the wrong API? This API definitely has good commands for the actual streaming and sampling of the audio, but I'm a bit lost as to how to make sure I'm using the desired device.
WASAPI will allow you to do what you need, so you're using the right API. You're mistaken about IMMDevice representing a collection of audio devices, though; that is IMMDeviceCollection. IMMDevice represents a single audio device. By "device", WASAPI doesn't mean an audio card as you might expect; rather, it means a single input/output on such a card. For example, an audio card with analog in/out plus a digital out will show up as 3 IMMDevices, each with its own IMMEndpoint. I'm not sure what detailed info you're after, but it seems to me IMMDevice will provide you with everything you need. Basically, you'll want to do something like this (a minimal code sketch follows below):
Create an IMMDeviceEnumerator
Call EnumAudioEndpoints specifying render, capture or both, to enumerate into an IMMDeviceCollection
Obtain individual IMMDevice instances from IMMDeviceCollection
Device name and description can be queried from IMMDevice using OpenPropertyStore (http://msdn.microsoft.com/en-us/library/windows/desktop/dd370812%28v=vs.85%29.aspx). Additional supported device details can be found here: http://msdn.microsoft.com/en-us/library/windows/desktop/dd370794%28v=vs.85%29.aspx.
IMMDevice instances obtained from IMMDeviceCollection will also be instances of IMMEndpoint; use QueryInterface to switch between the two. However, as you noted, this will only tell you whether you've got your hands on a render or a capture device. It is much easier to just ask for what you want directly in EnumAudioEndpoints.
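Putting those steps together, here is a minimal sketch, assuming you only want active capture endpoints (error handling omitted for brevity; reading the names comes in the next step):

#include <mmdeviceapi.h>
#include <vector>

// Enumerate all active capture endpoints into a vector of IMMDevice*.
IMMDeviceEnumerator *pEnumerator = NULL;
CoCreateInstance(__uuidof(MMDeviceEnumerator), NULL, CLSCTX_ALL,
                 __uuidof(IMMDeviceEnumerator), (void**)&pEnumerator);

IMMDeviceCollection *pCollection = NULL;
pEnumerator->EnumAudioEndpoints(eCapture, DEVICE_STATE_ACTIVE, &pCollection);

UINT endpointCount = 0;
pCollection->GetCount(&endpointCount);

std::vector<IMMDevice*> devices;
for (UINT i = 0; i < endpointCount; i++)
{
    IMMDevice *pDevice = NULL;
    pCollection->Item(i, &pDevice);
    devices.push_back(pDevice); // grows the vector, unlike assign()
}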
About code quality: use x->f() instead of (*x).f(); although it's technically the same thing, the -> operator is the conventional way to call a function through an object pointer.
Don't use vector::assign; that replaces the contents of the entire vector on each call, so you'll end up with a collection of size 1 regardless of the number of available devices. Use push_back instead.
After enumerating your IMMDevices as Sjoerd stated, you must retrieve the IPropertyStore
information for the device. From there you have to extract the PROPVARIANT object, as such:
PROPERTYKEY key;
HRESULT keyResult = IMMDeviceProperties[i]->GetAt(p, &key);
then
PROPVARIANT propVari;
HRESULT propVariResult = IMMDeviceProperties[i]->GetValue(key, &propVari);
according to these documents:
http://msdn.microsoft.com/en-us/library/windows/desktop/bb761471(v=vs.85).aspx
http://msdn.microsoft.com/en-us/library/windows/desktop/aa380072(v=vs.85).aspx
And finally, to navigate the large PROPVARIANT structure and get the friendly name of the audio endpoint device, simply access the pwszVal member of the PROPVARIANT structure, as illustrated here:
http://msdn.microsoft.com/en-us/library/windows/desktop/dd316594(v=vs.85).aspx
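Tying that together, a short sketch of reading the friendly name directly by key (PKEY_Device_FriendlyName from functiondiscoverykeys_devpkey.h) rather than iterating with GetAt; the pDevice variable is assumed to come from the enumeration above:

#include <mmdeviceapi.h>
#include <functiondiscoverykeys_devpkey.h>
#include <propidl.h>
#include <stdio.h>

IPropertyStore *pProps = NULL;
HRESULT hr = pDevice->OpenPropertyStore(STGM_READ, &pProps);
if (SUCCEEDED(hr))
{
    PROPVARIANT varName;
    PropVariantInit(&varName);
    hr = pProps->GetValue(PKEY_Device_FriendlyName, &varName);
    if (SUCCEEDED(hr) && varName.vt == VT_LPWSTR)
    {
        wprintf(L"Endpoint: %s\n", varName.pwszVal); // e.g. "Microphone (USB Audio)"
    }
    PropVariantClear(&varName);
    pProps->Release();
}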
All about finding the right documentation!

Get encoder name from SinkWriter or ICodecAPI or IMFTransform

I'm using the SinkWriter in order to encode video using media foundation.
After I initialize the SinkWriter, I would like to get the underlying encoder and print out its name, so I can see which encoder is being used. (In my case, it is most probably the H.264 Video Encoder included in MF.)
I can get references to the encoder's ICodecAPI and IMFTransform interface (using pSinkWriter->GetServiceForStream), but I don't know how to get the encoder's friendly name using those interfaces.
Does anyone know how to get the encoder's friendly name from the sinkwriter? Or from its ICodecAPI or IMFTransform interface?
This is far from an effective solution, and I am not 100% sure it works, but here is what could be done:
1) At start-up, enumerate all the codecs that could be used (as I understand it, in this case H.264 encoders) and subscribe to a setting-change event:
MFT_REGISTER_TYPE_INFO TransformationOutput = { MFMediaType_Video, MFVideoFormat_H264 };
DWORD nFlags = MFT_ENUM_FLAG_ALL;
UINT32 nCount = 0;
CLSID* pClsids;
MFTEnum( MFT_CATEGORY_VIDEO_ENCODER, nFlags, NULL, &TransformationOutput, NULL, &pClsids, &nCount);
// OK, here we assume nCount is 1 and we got the MS encoder
// (remember to CoTaskMemFree(pClsids) when done)
ICodecAPI *pMsEncoder;
HRESULT hr = CoCreateInstance(pClsids[0], NULL, CLSCTX_INPROC_SERVER, __uuidof(ICodecAPI), (void**)&pMsEncoder);
// nCodecIds is supposed to be an array of identifiers to distinguish the sender
hr = pMsEncoder->RegisterForEvent(CODECAPI_AVEncVideoOutputFrameRate, (LONG_PTR)&nCodecIds[0]);
2) I'm not 100% sure the frame rate setting is also applied when the input media type for the stream is set, but you can try setting the same property on the ICodecAPI you retrieved from the SinkWriter. After receiving the event, you should be able to identify the codec by comparing lParam1 to the value you passed. Still, this is very fragile: it relies on every encoder supporting event notification, and it requires a needless parameter change if my hypothesis about the event being generated on stream construction is wrong.
Having only an IMFTransform, you don't get a friendly name for the encoder.
One option is to check the transform's output type and compare it to well-known GUIDs to identify the encoder; in particular, with the H264 Encoder MFT you are going to have a subtype of MFVideoFormat_H264.
Another option is to obtain the CLSID of the encoder (IMFTransform itself does not give you this, but you might have it through another route, such as via IMFActivate, by querying the MFT_TRANSFORM_CLSID_Attribute attribute, or via the IPersist* interfaces). You could then look the friendly name up in the registry, or enumerate transforms and find yours in that list by comparing CLSIDs.
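As a sketch of the second option, assuming the transform exposes its CLSID via the MFT_TRANSFORM_CLSID_Attribute attribute (not every encoder sets it, so treat failure as "unknown"); MFTGetInfo then returns the registered friendly name:

#include <mfapi.h>
#include <mftransform.h>
#include <stdio.h>

HRESULT PrintEncoderName(IMFTransform *pTransform)
{
    IMFAttributes *pAttributes = NULL;
    HRESULT hr = pTransform->GetAttributes(&pAttributes);
    if (FAILED(hr)) return hr;

    GUID clsid = GUID_NULL;
    hr = pAttributes->GetGUID(MFT_TRANSFORM_CLSID_Attribute, &clsid);
    pAttributes->Release();
    if (FAILED(hr)) return hr; // attribute not set by this MFT

    // Look up the registered friendly name for that CLSID.
    LPWSTR pszName = NULL;
    hr = MFTGetInfo(clsid, &pszName, NULL, NULL, NULL, NULL, NULL);
    if (SUCCEEDED(hr))
    {
        wprintf(L"Encoder: %s\n", pszName); // e.g. "H264 Encoder MFT"
        CoTaskMemFree(pszName);
    }
    return hr;
}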

Proper implementation of libspotify get_audio_buffer_stats callback

Can anyone help decipher the correct implementation of the libspotify get_audio_buffer_stats callback? Specifically, we are supposed to populate an sp_audio_buffer_stats buffer, consisting of samples and stutter.
According to the Docs:
int samples - Samples in buffer.
int stutter - Number of stutters (audio dropouts) since last query.
I'm wondering about "samples": what exactly is this referring to?
The music delivery (music_delivery) callback has a num_frames parameter, but then you have the issue of the audio format (channels and/or sample_rate).
Is it correct to set "samples" to the total "num_frames" currently in my buffer, or do I need to run some math based on total "num_frames", "channels", and "sample_rate"?
It should be the number of frames in your output buffer; i.e., int samples is slightly misnamed and should probably have been called int frames instead.
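A minimal sketch of what that can look like, assuming g_frames_in_buffer and g_stutter_count are hypothetical counters maintained by your audio output code (the callback signature is from the libspotify sp_session_callbacks struct):

#include <atomic>
#include <libspotify/api.h>

// Hypothetical counters updated by the playback/output thread.
static std::atomic<int> g_frames_in_buffer(0);
static std::atomic<int> g_stutter_count(0);

static void get_audio_buffer_stats(sp_session *session, sp_audio_buffer_stats *stats)
{
    stats->samples = g_frames_in_buffer.load();   // frames buffered, not bytes
    stats->stutter = g_stutter_count.exchange(0); // dropouts since last query
}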

Finding out when the samplegrabber is ready in DirectShow

I am continuing work on my DirectShow application and am just putting the finishing touches on it. The program steps through a video file in 1-second intervals, capturing the current buffer from the Sample Grabber and processing it before moving on. However, I noticed that I was getting duplicate images in my tests, and it turned out that DirectShow had not yet advanced through the video by the full 1-second interval. My question is whether there is a way to check when DirectShow is ready for me to call the Sample Grabber to get the current frame and process it. At the moment I sleep for 1 second, but there has to be a better method. Thank you in advance for the help.
EDIT
I just tried checking whether the video's position is equal to the next position that I would like to grab and process. That decreased the number of duplicate frames, but I still see them showing up in chunks.
I always let the DirectShow framework handle the processing rate:
in the main application thread, configure the sample grabber callback; then, when the callback is triggered, you get the media sample as well as its sample time. At this point you can process the sample if the appropriate interval, i.e. 1 second, has elapsed.
What do you mean you call sleep for a second, and from where (which thread) do you call it?
If you are doing this from inside the callback, you are effectively blocking the DirectShow pipeline. Perhaps if you could explain your setup in more detail I could be more helpful.
/// Callback that is triggered per media sample.
/// Note this all happens in the DirectShow streaming thread!
STDMETHODIMP SampleCB(double dSampleTime, IMediaSample *pSample)
{
    // Check the timestamp: if one second has elapsed, process the sample
    // accordingly, else do nothing.
    ...
    // Get byte pointer
    BYTE *pbData(NULL);
    HRESULT hr = pSample->GetPointer(&pbData);
    if (FAILED(hr))
    {
        return hr;
    }
    ...
}
P.S. If you want to process the samples as fast as possible, you can set the sample timestamp to NULL in your callback:
// Set the time to NULL to allow fast rendering, since the video
// renderer controls the rendering rate according to the timestamps.
pSample->SetTime(NULL, NULL);
Try setting your graph's reference clock to NULL, as sketched below. It will allow you to:
process the file as fast as possible
avoid the timing issues you are having
Of course, it won't work if you are rendering the file to screen at the same time.
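A minimal sketch of what "setting the clock to NULL" means in code, assuming pGraph is your filter graph instance (IGraphBuilder):

#include <dshow.h>

// Remove the reference clock so the graph runs unthrottled: with no clock,
// filters do not wait on sample timestamps.
IMediaFilter *pMediaFilter = NULL;
HRESULT hr = pGraph->QueryInterface(IID_IMediaFilter, (void**)&pMediaFilter);
if (SUCCEEDED(hr))
{
    pMediaFilter->SetSyncSource(NULL);
    pMediaFilter->Release();
}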