I am taking my first dive into the WASAPI system of Windows, and I do not know whether what I want is even possible with the Windows API.
I am attempting to write a program that will record the sound from various programs and break each into a separate recorded track/audio file. From the research I have done, I know the unit I need to record is the individual audio sessions being rendered to an endpoint, and that the normal way of recording is to take the render endpoint and perform a loopback capture. However, from what I have read so far on MSDN, the only interaction with sessions available to me is through IAudioSessionControl, and that does not provide a way to get a copy of the stream for a session.
Am I missing something that would allow me to do this with WASAPI (or some other Windows API) and get the individual sessions (or individual streams) before they are mixed together to form the endpoint, or is this an impossible goal?
The mixing takes place inside the API (WASAPI), and you don't have access to the buffers of other audio clients, especially since they don't exist in the context of the current process in the first place. Perhaps the best way (not a good one, but there are no better alternatives) would be to hook the API calls and intercept data on its way to WASAPI, if the task in question permits dirty tricks like this.
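For contrast, below is a minimal sketch of the standard whole-endpoint loopback capture. It records only the final mix that the engine produces for the render endpoint, which is exactly why it cannot separate the sessions; error handling, Release() calls and the actual writing of the data are omitted.

    #include <windows.h>
    #include <mmdeviceapi.h>
    #include <audioclient.h>

    int main()
    {
        CoInitializeEx(nullptr, COINIT_MULTITHREADED);

        IMMDeviceEnumerator *enumerator = nullptr;
        CoCreateInstance(__uuidof(MMDeviceEnumerator), nullptr, CLSCTX_ALL,
                         __uuidof(IMMDeviceEnumerator), (void **)&enumerator);

        IMMDevice *device = nullptr;
        enumerator->GetDefaultAudioEndpoint(eRender, eConsole, &device);

        IAudioClient *client = nullptr;
        device->Activate(__uuidof(IAudioClient), CLSCTX_ALL, nullptr, (void **)&client);

        WAVEFORMATEX *format = nullptr;
        client->GetMixFormat(&format);

        // AUDCLNT_STREAMFLAGS_LOOPBACK on a render endpoint yields a capture
        // stream carrying whatever the engine mixed for that endpoint.
        client->Initialize(AUDCLNT_SHAREMODE_SHARED, AUDCLNT_STREAMFLAGS_LOOPBACK,
                           10 * 1000 * 1000 /* 1 s, in 100-ns units */, 0, format, nullptr);

        IAudioCaptureClient *capture = nullptr;
        client->GetService(__uuidof(IAudioCaptureClient), (void **)&capture);
        client->Start();

        for (int i = 0; i < 500; ++i)          // capture roughly five seconds
        {
            Sleep(10);
            UINT32 frames = 0;
            capture->GetNextPacketSize(&frames);
            while (frames != 0)
            {
                BYTE *data = nullptr;
                DWORD flags = 0;
                capture->GetBuffer(&data, &frames, &flags, nullptr, nullptr);
                // ... write `frames` frames of the mixed output to a file here ...
                capture->ReleaseBuffer(frames);
                capture->GetNextPacketSize(&frames);
            }
        }

        client->Stop();
        CoUninitialize();
    }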
Related
I am trying to write a pro music/audio processing application, and I would like to be able to interact with the audio inputs/outputs at a very low level - ideally something allowing me to apply effects to the audio inputs and output the result in real time, similar to programs like Logic, Ableton, etc.
I have written a pretty basic program that detects audio endpoint devices and can change their volumes using the MMDevice interface, but this is nowhere near the functionality I would like.
I have learned from the Microsoft docs that the four core-audio APIs are:
MMDevice
WASAPI
DeviceTopology
EndpointVolume
but it doesn't seem like any of these have the capabilities that I need. I'm thinking that I will need to be able to interact with the speakers at the level of setting the position of the membrane at a given time.
Is this even possible? If so, what can I use to do this?
The Windows Audio Session API (WASAPI) is the best bet for this purpose. It allows interaction with audio endpoints and setting up audio streams (which are streams of data that you can send or receive in real time). A good example is here.
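As a rough illustration, here is a minimal sketch of a shared-mode, event-driven WASAPI render stream, which is the usual starting point for low-latency processing of this kind. Error handling, Release() calls and the actual DSP are omitted, and each buffer is simply released as silence.

    #include <windows.h>
    #include <mmdeviceapi.h>
    #include <audioclient.h>

    int main()
    {
        CoInitializeEx(nullptr, COINIT_MULTITHREADED);

        IMMDeviceEnumerator *enumerator = nullptr;
        CoCreateInstance(__uuidof(MMDeviceEnumerator), nullptr, CLSCTX_ALL,
                         __uuidof(IMMDeviceEnumerator), (void **)&enumerator);

        IMMDevice *device = nullptr;
        enumerator->GetDefaultAudioEndpoint(eRender, eConsole, &device);

        IAudioClient *client = nullptr;
        device->Activate(__uuidof(IAudioClient), CLSCTX_ALL, nullptr, (void **)&client);

        WAVEFORMATEX *format = nullptr;
        client->GetMixFormat(&format);

        // Event-driven mode: the engine signals the event each time it wants
        // more data, which is how you get steady, low-latency callbacks.
        client->Initialize(AUDCLNT_SHAREMODE_SHARED, AUDCLNT_STREAMFLAGS_EVENTCALLBACK,
                           0 /* let the engine pick its buffer size */, 0, format, nullptr);

        HANDLE bufferEvent = CreateEventW(nullptr, FALSE, FALSE, nullptr);
        client->SetEventHandle(bufferEvent);

        UINT32 bufferFrames = 0;
        client->GetBufferSize(&bufferFrames);

        IAudioRenderClient *render = nullptr;
        client->GetService(__uuidof(IAudioRenderClient), (void **)&render);
        client->Start();

        for (int i = 0; i < 500; ++i)          // a few seconds of audio
        {
            WaitForSingleObject(bufferEvent, 2000);

            UINT32 padding = 0;
            client->GetCurrentPadding(&padding);
            UINT32 framesToWrite = bufferFrames - padding;

            BYTE *data = nullptr;
            render->GetBuffer(framesToWrite, &data);
            // ... generate or apply effects to `framesToWrite` frames in `data` here ...
            render->ReleaseBuffer(framesToWrite, AUDCLNT_BUFFERFLAGS_SILENT);  // placeholder: silence
        }

        client->Stop();
        CoUninitialize();
    }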
I am working on a project where I need to read a USB camera's input, apply some effects to it, and then send that data to a virtual camera so it can be accessed by Skype etc.
I have compiled and used the vcam filter. I was also able to make a few changes in the FillBuffer method. I now need to know whether it is possible to send data to the vcam filter from another application, or whether I need to write another filter.
The vcam project you currently have as a template is the interface to other video-consuming applications like Skype, that is, those which use the DirectShow API to access video capture devices and match your filter in platform/bitness.
You are responsible for developing the rest of the supposed filter: either you access the real device right in your filter (simplifying the task greatly; this is what you fill your FillBuffer with, the code that generates video from another source), or you implement interprocess communication so that the FillBuffer implementation can transfer data from another application, as sketched below.
Neither vcam nor any of the standard DirectShow samples offers functionality to cover interprocess communication, and you might also need to deal with other complications: one application and multiple instances of filters consuming the video, platform mismatch, etc.
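For illustration only, here is one hedged sketch of the second option: a FillBuffer that copies the latest frame from a named shared-memory section that another application fills. The section name, the single-frame layout, and the CVCamStream::FillBuffer signature (taken from the vcam sample) are assumptions, and real code would need synchronization (for example a named mutex or event) around the copy.

    HRESULT CVCamStream::FillBuffer(IMediaSample *pms)
    {
        // Open the section the producer application created with CreateFileMapping.
        HANDLE mapping = OpenFileMappingW(FILE_MAP_READ, FALSE, L"Local\\MyVCamFrame");
        if (mapping == NULL)
            return E_FAIL;                     // the producer is not running (yet)

        BYTE *pData = NULL;
        pms->GetPointer(&pData);
        long cbBuffer = pms->GetSize();

        void *view = MapViewOfFile(mapping, FILE_MAP_READ, 0, 0, (SIZE_T)cbBuffer);
        if (view != NULL)
        {
            CopyMemory(pData, view, cbBuffer); // copy the most recent frame into the sample
            UnmapViewOfFile(view);
        }
        CloseHandle(mapping);

        pms->SetActualDataLength(cbBuffer);
        // ... keep the timestamp logic from the original vcam FillBuffer here ...
        return S_OK;
    }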
See also:
How to implement a "source filter" for splitting camera video based on Vivek's vcam?
I intend to build an adapter exposing methods such as "StartCapture" and "StopCapture" using Media Foundation, and I would like multiple clients to be able to access these methods simultaneously for a single webcam device.
Currently, the code I've seen in samples allows only one stream to capture data to a file (the other ends up being empty).
Does Media Foundation allow for simultaneous device access? If so, how?
Video input devices have traditionally been exclusive use resources. Once one client started a session, other clients cannot use the camera before it's released by the running session.
Windows 10 Anniversary Update introduced the so-called Frame Server, a middleware layer that, as advertised, shares the camera between clients under certain circumstances.
This puts an end to the "exclusive" use of devices, and it's arguably a change that Windows should have made long ago. Third-party software for sharing cameras between applications exists, but the operating system should support this scenario natively, as it already does for audio devices.
To the best of my knowledge this does not work. At least it did not work for the few cameras I tried, and as of now the sharing does not work with Windows 10 Creators Update either. Quite possibly, though, certain cameras/modes exist for which the feature does implement the sharing.
I was wondering if it is possible to capture a copy of the audio output in Qt so I can process it. Here they said it's possible to monitor the playback, but I think that's only possible if you use a self-made music player, which I don't want. I want to capture the signal no matter where it is played (YouTube, Spotify, Facebook, etc.). Is there a way to analyze this data with Qt? Is it possible to set the output of my sound card as a QMediaSource?
Thank you in advance.
In general, no, that isn't possible, simply because your process (and therefore the Qt library that is loaded into your process) does not have access to that information. (I believe this lack of access is deliberate; if a process did have access like that, there would be security and/or privacy implications, i.e. app A could use it to spy on the audio output of app B, etc.)
There may be an OS-specific mechanism that you can use; for example, if you are running your program under MacOS/X, you can install the SoundFlower audio driver that can function as a loopback device, allowing programs to read audio from its "audio input" that was previously routed to its "audio output". But without that kind of external support, it's not currently possible to record the computer's audio output via Qt.
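For example, once such a loopback device exists, Qt simply sees it as another input device, so the capture itself is ordinary QAudioInput code. This is a sketch using the Qt 5 API; the device name is an assumption and depends on the loopback driver installed.

    #include <QtMultimedia/QAudioInput>
    #include <QtMultimedia/QAudioDeviceInfo>
    #include <QtMultimedia/QAudioFormat>
    #include <QCoreApplication>

    int main(int argc, char *argv[])
    {
        QCoreApplication app(argc, argv);

        // Look for the loopback device among the normal audio inputs.
        QAudioDeviceInfo loopback;
        const auto inputs = QAudioDeviceInfo::availableDevices(QAudio::AudioInput);
        for (const QAudioDeviceInfo &dev : inputs) {
            if (dev.deviceName().contains("Soundflower"))   // assumed device name
                loopback = dev;
        }
        if (loopback.isNull())
            return 1;   // no loopback device installed

        QAudioFormat format;
        format.setSampleRate(44100);
        format.setChannelCount(2);
        format.setSampleSize(16);
        format.setCodec("audio/pcm");
        format.setByteOrder(QAudioFormat::LittleEndian);
        format.setSampleType(QAudioFormat::SignedInt);
        if (!loopback.isFormatSupported(format))
            format = loopback.nearestFormat(format);

        QAudioInput input(loopback, format);
        QIODevice *stream = input.start();   // pull mode: read captured samples from this device

        QObject::connect(stream, &QIODevice::readyRead, [stream]() {
            QByteArray chunk = stream->readAll();
            // ... analyze/process the captured output here ...
            Q_UNUSED(chunk);
        });

        return app.exec();
    }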
Hi, I'm working on a C++ project that I'm trying to keep OS-independent, and I have two processes which need to communicate. I was thinking about setting up a third process (possibly as a service?) to coordinate the other two asynchronously.
Client 1 will tell the intermediate process when data is ready and send the data to it. The intermediate process will then hold this data until client 2 tells it that it is ready for the data. If the intermediate process has not received new data from client 1, it will tell client 2 to wait.
Since I am trying to keep this OS-independent, I don't really know what to use. I have looked into using MPI, but it doesn't really seem to fit this purpose. I have also looked into Boost.Asio, named pipes, RPCs and RCF. I'm currently programming on Windows, but I'd like to avoid using the Windows API directly so that the code could potentially be compiled on Linux.
Here's a little more detail on the two processes.
We have a back-end process/model (client 1) that will receive initial inputs from a GUI (client 2, written in Qt) via the intermediate process. The model will then proceed to work until the end condition is met, sending data to the intermediate process as it becomes ready. The GUI will ask the intermediate process for data at regular intervals and will be told to wait if the model has not updated the data. As data becomes available from the model, we also want to keep any previous data from the current session for exporting to a file if the user chooses to do so (i.e., we'll want the GUI to issue a command to the intermediate process to export (or load) the data).
My privileges to modify the back end/model are minimal, other than to make it adhere to the design outlined above. I have a decent amount of C++ experience but not much parallel/asynchronous application experience. Any help or direction is greatly appreciated.
Standard BSD TCP/IP sockets are mostly platform independent. They work, with some minor differences, on both Windows and Unices (like Linux); the sketch below shows what those differences amount to.
P.S. Windows does not support AF_UNIX sockets.
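A rough sketch of a TCP client that compiles on both Windows and Linux with a thin portability shim; the port number and the message are placeholders for whatever protocol you design between your processes.

    #ifdef _WIN32
      #include <winsock2.h>
      #include <ws2tcpip.h>
      #pragma comment(lib, "ws2_32.lib")
      typedef SOCKET socket_t;
      static void close_socket(socket_t s) { closesocket(s); }
    #else
      #include <sys/socket.h>
      #include <netinet/in.h>
      #include <arpa/inet.h>
      #include <unistd.h>
      typedef int socket_t;
      static void close_socket(socket_t s) { close(s); }
    #endif

    int main()
    {
    #ifdef _WIN32
        WSADATA wsa;                                       // Windows needs explicit startup
        WSAStartup(MAKEWORD(2, 2), &wsa);
    #endif

        socket_t s = socket(AF_INET, SOCK_STREAM, 0);

        sockaddr_in addr = {};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(5555);                       // placeholder port
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);   // intermediate process on localhost

        connect(s, reinterpret_cast<sockaddr *>(&addr), sizeof(addr));
        const char msg[] = "data ready";
        send(s, msg, sizeof(msg) - 1, 0);

        close_socket(s);
    #ifdef _WIN32
        WSACleanup();
    #endif
        return 0;
    }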
I'd check out the Boost.Interprocess library. If the two processes are on the same machine, it has a number of different ways to communicate between processes, and it does so in a platform-independent manner.
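As a sketch of what that could look like for your setup, the snippet below uses a Boost.Interprocess message_queue as the hand-off between the model and the intermediate process. The queue name and message size are assumptions, and for brevity both sides are shown in one process; in reality the second process would open the same queue with open_only.

    #include <boost/interprocess/ipc/message_queue.hpp>

    namespace bip = boost::interprocess;

    int main()
    {
        // The intermediate process creates the queue once at start-up.
        bip::message_queue::remove("model_to_gui");              // start from a clean slate
        bip::message_queue mq(bip::create_only, "model_to_gui",
                              100 /* max messages */, 256 /* max bytes per message */);

        // Client 1 (the model): push a chunk of data when it is ready.
        const char payload[] = "iteration results";
        mq.send(payload, sizeof(payload), 0 /* priority */);

        // Client 2 (the GUI): try to fetch; if nothing is there yet, it waits and retries.
        char buffer[256];
        bip::message_queue::size_type received = 0;
        unsigned int priority = 0;
        if (mq.try_receive(buffer, sizeof(buffer), received, priority))
        {
            // ... hand `received` bytes of data to the GUI here ...
        }

        bip::message_queue::remove("model_to_gui");
        return 0;
    }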
I am not sure if you have considered the message format, but if you are sending structured data between processes you should look at Google Protocol Buffers.
These relate to the content of the messages (what is passed) rather than how they are passed.
boost::asio is platform independent, although using it doesn't imply C++ at both ends. Of course, when you are using C++ at both ends you can use boost::asio as your form of transport.
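Here is a deliberately simplified, synchronous Boost.Asio sketch of the intermediate process answering one request. The port, the line-based commands, and the use of the modern io_context name (older Boost spells it io_service) are assumptions; a real implementation would keep connections open and use asynchronous handlers.

    #include <boost/asio.hpp>
    #include <istream>
    #include <string>

    using boost::asio::ip::tcp;

    int main()
    {
        boost::asio::io_context io;

        // The intermediate process listens for one client (the model or the GUI).
        tcp::acceptor acceptor(io, tcp::endpoint(tcp::v4(), 5555));   // placeholder port
        tcp::socket socket(io);
        acceptor.accept(socket);

        // Read one newline-terminated command, e.g. "GET_DATA" or "DATA_READY".
        boost::asio::streambuf request;
        boost::asio::read_until(socket, request, '\n');
        std::istream in(&request);
        std::string command;
        std::getline(in, command);

        // Reply: tell the GUI to wait if no data is buffered yet, acknowledge otherwise.
        std::string reply = (command == "GET_DATA") ? "WAIT\n" : "ACK\n";
        boost::asio::write(socket, boost::asio::buffer(reply));

        return 0;
    }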