I'm trying to create a mixing table with WASAPI.
I capture samples from the input card and send them to the output speaker.
But how do I create a matrix with a subvoice that will not be played through the speaker?
I can't find any documentation on MSDN.
Maybe:
An IAudioClient always depends on an IMMDevice, but is it possible to create an IAudioClient with both an IAudioRenderClient and an IAudioCaptureClient? That way I could send samples to the IAudioRenderClient, then retrieve them from the IAudioCaptureClient and send them on to another IAudioRenderClient, this time linked to a device or, for example, to a file.
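Whatever WASAPI plumbing you end up with, the routing itself is just a gain matrix applied per bus: one row of gains per output bus (speaker, file, etc.), one column per voice. Setting a voice's gain to zero on the speaker bus mutes it there while it still reaches other buses. A minimal portable sketch of that idea (plain C++, no WASAPI calls; the function name and layout are illustrative, not part of any API):

```cpp
#include <cstddef>
#include <vector>

// Mixes voices into output buses. gains[bus][voice] is the routing matrix:
// setting gains[0][v] = 0 mutes voice v on bus 0 (e.g. the speaker) while
// the same voice can still feed bus 1 (e.g. a file or loopback path).
std::vector<std::vector<float>> mixVoices(
    const std::vector<std::vector<float>>& voices,  // voices[v] = samples of voice v
    const std::vector<std::vector<float>>& gains)   // gains[bus][voice]
{
    const std::size_t frames = voices.empty() ? 0 : voices[0].size();
    std::vector<std::vector<float>> buses(gains.size(),
                                          std::vector<float>(frames, 0.0f));
    for (std::size_t b = 0; b < gains.size(); ++b)
        for (std::size_t v = 0; v < voices.size(); ++v)
            for (std::size_t i = 0; i < frames; ++i)
                buses[b][i] += gains[b][v] * voices[v][i];
    return buses;
}
```

Each bus buffer would then be handed to its own IAudioRenderClient (or written to a file), so a "subvoice" is simply a voice whose speaker-bus gain is zero.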
Related
I cannot seem to find any tutorial on the internet for my question.
All the simple guides are unsuitable for UWP.
For Example,
To use WASAPI there are these steps
enumerate devices
capture audio
play (render) audio back
But in the enumeration step, the client must call CoCreateInstance, and from my understanding this function is not supported in UWP. I also failed at line 30 when following this code.
So I tried to understand this C++ UWP project using WASAPI, but I can't find any enumeration part, and the project is very complicated for me.
It includes a lot of other files (DeviceState.h, common.h),
and I failed to extract the code to create my own application.
My question is how can I capture audio on c++ UWP app with WASAPI?
If this question is too broad, I will change it to "How do I enumerate audio devices in a C++ UWP application?".
And the reason I use WASAPI is that I want to access the data stored in the buffer.
Edit:
For enumerating.
https://github.com/Microsoft/Windows-universal-samples/blob/7c7832e1f144e4fc836603fd70e1352024a5fe1a/Samples/WindowsAudioSession/cpp/Scenario1.xaml.cpp#L85
Yes, you can use WASAPI to do audio capturing in UWP, and this is what is done in the sample you referenced (https://github.com/Microsoft/Windows-universal-samples/tree/master/Samples/WindowsAudioSession).
For the enumeration, the main function is DeviceInformation::FindAllAsync with the selector from MediaDevice::GetAudioCaptureSelector; it lets you list the capture devices.
For the stream capturing, the main function you need is
ActivateAudioInterfaceAsync; it lets you create an IAudioClient from a device id (for a specific device) or a device class (render or capture) if you just need to use the default device.
Once you have this IAudioClient, you can use it to get an IAudioCaptureClient, which is basically what you have seen in the sample.
I am working on a project where I need to read a USB camera's input, apply some effects to it, and then send that data to a virtual camera so it can be accessed by Skype etc.
I have compiled and used the vcam filter, and I was also able to make a few changes in the FillBuffer method. I now need to know whether it is possible to send data to the vcam filter from another application, or whether I need to write another filter.
The vcam project you currently have as a template is the interface to other video-consuming applications like Skype: those which use the DirectShow API to access video capture devices and match your filter in platform/bitness.
You are responsible for developing the rest of the supposed filter: you either access the real device right in your filter (simplifying the task greatly; this is what you fill FillBuffer with, the code that generates video from another source), or you implement interprocess communication so that your FillBuffer implementation can transfer data from another application.
Neither vcam nor any of the standard DirectShow samples offer functionality to cover interprocess communication, and you might also need to deal with other complications: one application and multiple instances of filters consuming video, platform mismatch, etc.
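To illustrate the handoff the interprocess route requires: the producer (your effects application) pushes frames into a queue that the filter's FillBuffer drains. In a real setup the storage would live in a shared-memory section (e.g. CreateFileMapping / MapViewOfFile) with cross-process synchronization; the sketch below models only the queue logic in-process, and the class name and slot count are made up for the example:

```cpp
#include <array>
#include <atomic>
#include <cstdint>
#include <optional>
#include <vector>

// Single-producer / single-consumer frame queue. The producer is the
// application applying effects; the consumer is the vcam filter's
// FillBuffer. Real code would place the slots in shared memory so the
// two sides can live in different processes.
class FrameQueue {
public:
    bool push(const std::vector<uint8_t>& frame) {      // producer side
        size_t head = head_.load(std::memory_order_relaxed);
        size_t next = (head + 1) % kSlots;
        if (next == tail_.load(std::memory_order_acquire))
            return false;                               // queue full, drop frame
        slots_[head] = frame;
        head_.store(next, std::memory_order_release);
        return true;
    }
    std::optional<std::vector<uint8_t>> pop() {         // consumer side (FillBuffer)
        size_t tail = tail_.load(std::memory_order_relaxed);
        if (tail == head_.load(std::memory_order_acquire))
            return std::nullopt;                        // no frame available yet
        std::vector<uint8_t> frame = std::move(slots_[tail]);
        tail_.store((tail + 1) % kSlots, std::memory_order_release);
        return frame;
    }
private:
    static constexpr size_t kSlots = 4;
    std::array<std::vector<uint8_t>, kSlots> slots_;
    std::atomic<size_t> head_{0}, tail_{0};
};
```

When pop() returns nothing, FillBuffer would typically repeat the last frame or emit black, so the downstream application keeps receiving video at a steady rate.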
See also:
How to implement a "source filter" for splitting camera video based on Vivek's vcam?
I am able to generate a QR code for plain text using libqrencode with the API QRcode_encodeString(), and to read it using the ZXing library APIs.
Is there any way I can create a QR code for a Wi-Fi connection or a file-transfer request, like http://zxing.appspot.com/generator, so that on reading the generated QR code the Wi-Fi gets connected or the file transfer starts?
A QR code basically transports data in the form of an image.
With that in mind, it becomes clear that we need to create plain text containing the required information and write a parser to extract the individual details.
For Example:
For a Wifi Network, we need the below Fields of information:
SSID
PASSWORD
NETWORK TYPE
Append all the info into a single string, like
"SSID:name,PASSWORD:passkey,NETWORKTYPE:type" (we can use JSON rather than writing a custom parser).
I will post the complete code soon.
I have a DirectShow video renderer derived from CBaseVideoRenderer. The renderer is used in a graph that receives data from a live source (BDA). It looks like the connections are established properly, but the video rendering ends immediately because there is no sample. Audio rendering works, however, i.e. I can hear the sound while my renderer's DoRenderSample is never called.
Stepping through the code in the debugger, I found out that in CBaseRenderer::StartStreaming the stream ends immediately, because the member m_pMediaSample is NULL. If I replace my renderer with the EVR renderer, it shows frames, i.e. the stream does not end before the first frame for the EVR renderer, only for my renderer.
Why is that, and how can I fix it? I implemented (following the sample from http://www.codeproject.com/Articles/152317/DirectShow-Filters-Development-Part-Video-Render) what I understand to be the basic interface (CheckMediaType, SetMediaType and DoRenderSample), so I do not see any way to influence what is happening here...
Edit: This is the graph as seen from the ROT:
What I am basically trying to do is capture a DVB stream that uses VIDEOINFOHEADER2, which is not supported by the standard Sample Grabber. Although the channel is a public German TV channel without encryption, could this be a DRM issue?
Edit 2: I have attached my renderer to another source (a Blackmagic Intensity Shuttle). It seems that the source causes the issue, because I do get samples in the other graph.
Edit 3: Following Roman's suggestion, I have created a transform filter. The graph looks like
and unfortunately has the same problem, i.e. I do not get any samples (Transform is not called).
You have supposedly chosen the wrong path for fetching video frames out of the media pipeline. You are implementing a "network renderer": something that terminates the pipeline in order to send the data onward, e.g. to a network.
A renderer which accepts the feed sounds appropriate. Implementing a custom renderer, however, is an untypical task, and there is not much information around on it. Additionally, a fully featured renderer is typically equipped with a sample-scheduling part and end-of-stream delivery, things that are relatively easy to break when you customize it by inheriting from the base classes. That is, while the approach sounds good, you might want to compare it to another option you have, which is...
A combination of Sample Grabber + Null Renderer, two standard filters, to which you can attach your callback and get frames with the pipeline properly terminated. The problem here is that the standard Sample Grabber does not support VIDEOINFOHEADER2. With another video decoder you could possibly have the feed decoded into VIDEOINFOHEADER, which is one option. Another solution is improving the Sample Grabber itself: DirectX SDK Extras February 2005 (dxsdk_feb2005_extras.exe) was the SDK that included a filter similar to the standard Sample Grabber, called Grabber (\DirectShow\Samples\C++\DirectShow\Filters\Grabber). It is/was available in source code and came with a good description text file. It is relatively easy to extend so it accepts VIDEOINFOHEADER2 and makes the payload data available to your application this way.
The easiest way to get data out of a DirectShow graph, if you're not going to use MultiMedia Streaming, is probably to write your own TransInPlace filter, a sub-variety of a Transform filter. Then connect this filter to the desired stream of data you wish to monitor, and then run, pause, seek, or otherwise control the graph. The data, as it passes through the transform filter, can be manipulated however you want. We call this kind of filter a "sample grabber". Microsoft released a limited-functionality sample grabber with DX8.0. This filter is limited because it doesn't deal with DV data or media types with a format of VideoInfo2. It doesn't allow the user to receive prerolled samples. (What's a preroll sample? See the DX8.1 docs.) Its "OneShot" mode also has some problems.
To add to this, the Grabber sample is pretty simple itself: perhaps 1000 lines of code all together, including comments.
Looks like your decoder or splitter isn't demuxing the video frames. Look further up the chain to see which filters are supplying your renderer pin with data; chances are it's only recognising the audio.
Try dropping the file into GraphEdit (there's a better one on the web, BTW) and see which filters it creates.
Then look at the samples in the DirectShow SDK.
I have made a sample application which constructs a filter graph to capture audio from the microphone and stream it to a file. Is there any filter which allows me to stream to a memory buffer instead?
I'm following the approach outlined in an article on MSDN and am currently using the CLSID_FileWriter object to write the audio to a file. This works nicely, but I cannot figure out how to write to a memory buffer.
Is there such a memory sink filter or do I have to create it myself? (I would prefer one which is bundled with windows XP)
The easiest way to do this (although not the most elegant) is to use a Sample Grabber filter followed by a Null Renderer filter to terminate the graph. This will enable you to get access to the raw media stream using the sample grabber's ISampleGrabber interface. Once you have the samples you can do what you like with them.
Use IMediaSample::GetPointer to retrieve a pointer to the buffer's raw data.