I intend to build an adapter exposing methods such as "StartCapture" and "StopCapture" using Media Foundation, and I would like multiple clients to be able to call these methods simultaneously against a single webcam device.
Currently, the sample code I've seen allows only one stream to capture data to a file (the other file ends up empty).
Does Media Foundation allow simultaneous device access? If so, how?
Video input devices have traditionally been exclusive-use resources: once one client starts a session, other clients cannot use the camera until it is released by the running session.
Windows 10 Anniversary Update introduced the so-called Frame Server, a middleware layer that, as advertised, shares the camera between clients under certain circumstances.
This puts an end to the "exclusive" use of devices, and it's arguably a change that Windows should have made long ago. Third-party software for sharing cameras between applications exists, but the operating system should support this scenario natively, as it already does for audio devices.
To the best of my knowledge this does not work. At least, it did not work for the few cameras I tried, and as of now the sharing still does not work in Windows 10 Creators Update. Quite possibly, though, certain cameras/modes exist for which the feature does implement sharing.
Related
I'm currently reading the Microsoft documentation about drivers and the Core Audio APIs. At the moment I'm still confused about which way to go to achieve what I need.
I have a standalone audio application written in C++ with the JUCE framework. I need to build a Windows solution that captures the audio stream going to an audio endpoint device and uses it as an input to my audio application.
This stream must have an unaltered volume: always 1.0 (no matter if the hardware volume is changed or muted).
I must be able to choose between the different endpoint devices. For example, if an external sound card is plugged in, my audio application should be able to intercept and copy the stream going to that external sound card, or do the same for the stream going to the built-in speakers.
The idea is to capture the output streams before they are modified by hardware volume modifications, and make a copy of them routed to my application without changing the output routing and behaviour.
The Microsoft documentation is very extensive, but even though WASAPI provides many ways to capture and stream from audio endpoint devices, I'm not sure it is possible to get an unaltered volume, as it will always capture exactly what is coming out of the speakers.
This is why I don't know if I can implement a feature directly in my audio application that gets the streams I want through WASAPI, or if I have to write a proper audio driver that copies the streams I want so that my application can use them.
The documentation I refer to:
Audio Drivers design guide
Core Audio APIs / WASAPI
Thanks for the help,
Best,
Maxime
Sometimes the volume control is implemented in software, and sometimes it is implemented in hardware. You can call IAudioEndpointVolume::QueryHardwareSupport to see if the volume control for the audio endpoint you're working with is implemented in hardware or software.
Sometimes the audio loopback is implemented in software, and sometimes it is implemented in hardware. There is no API to tell which.
If the audio loopback is implemented in software, and the volume control is implemented in hardware, then you will get back the data you want.
If the audio loopback is implemented in hardware, or the volume control is implemented in software, then the audio data you get back has already had the volume adjustment applied.
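For reference, a minimal sketch of that check against the default render endpoint (error handling is elided; real code must check every HRESULT):

    // Ask the default render endpoint whether its volume control is
    // implemented in hardware. Link against ole32.lib.
    #include <mmdeviceapi.h>
    #include <endpointvolume.h>
    #include <cstdio>

    int main() {
        CoInitializeEx(nullptr, COINIT_MULTITHREADED);

        IMMDeviceEnumerator* enumerator = nullptr;
        CoCreateInstance(__uuidof(MMDeviceEnumerator), nullptr, CLSCTX_ALL,
                         __uuidof(IMMDeviceEnumerator), (void**)&enumerator);

        IMMDevice* device = nullptr;
        enumerator->GetDefaultAudioEndpoint(eRender, eConsole, &device);

        IAudioEndpointVolume* volume = nullptr;
        device->Activate(__uuidof(IAudioEndpointVolume), CLSCTX_ALL, nullptr,
                         (void**)&volume);

        DWORD support = 0;
        volume->QueryHardwareSupport(&support);
        std::printf("Volume control in hardware: %s\n",
                    (support & ENDPOINT_HARDWARE_SUPPORT_VOLUME) ? "yes" : "no");

        volume->Release();
        device->Release();
        enumerator->Release();
        CoUninitialize();
        return 0;
    }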
What does your application do with the audio data it receives? The primary use case for audio loopback data is echo cancellation, where you usually WANT the volume to be applied.
I am trying to write a pro music/audio processing application, and I would like to be able to interact with the audio inputs/outputs at a very low level - ideally something allowing me to apply effects to the audio inputs and output this in real-time, similar to programs like Logic, Ableton etc.
I have written a pretty basic program that detects audio endpoint devices and can change their volumes using the MMDevice interface, but this is nowhere near the functionality I would like.
I have learned from the Microsoft docs that the four Core Audio APIs are:
MMDevice
WASAPI
DeviceTopology
EndpointVolume
but it doesn't seem like any of these have the capabilities that I need. I'm thinking that I will need to be able to interact with the speakers at the level of setting the position of the membrane at a given time.
Is this even possible? If so, what can I use to do this?
The Windows Audio Session API (WASAPI) is the best bet for this purpose. It allows interaction with audio endpoints and setting up audio streams (which are streams of data that you can send or receive in real time). A good example is here.
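As a starting point, here is a minimal sketch of opening a shared-mode capture stream on the default input device (error handling is elided, and the 20 ms buffer size is an arbitrary choice, not a requirement of the API):

    // Open a shared-mode WASAPI capture stream on the default input device.
    // Assumes COM is initialized on this thread; check every HRESULT in
    // real code.
    #include <mmdeviceapi.h>
    #include <audioclient.h>

    IAudioCaptureClient* OpenDefaultCaptureStream() {
        IMMDeviceEnumerator* enumerator = nullptr;
        CoCreateInstance(__uuidof(MMDeviceEnumerator), nullptr, CLSCTX_ALL,
                         __uuidof(IMMDeviceEnumerator), (void**)&enumerator);

        IMMDevice* device = nullptr;
        enumerator->GetDefaultAudioEndpoint(eCapture, eConsole, &device);

        IAudioClient* client = nullptr;
        device->Activate(__uuidof(IAudioClient), CLSCTX_ALL, nullptr,
                         (void**)&client);

        WAVEFORMATEX* format = nullptr;
        client->GetMixFormat(&format);

        // 20 ms buffer, expressed in 100-nanosecond units.
        client->Initialize(AUDCLNT_SHAREMODE_SHARED, 0, 200000, 0, format, nullptr);

        IAudioCaptureClient* capture = nullptr;
        client->GetService(__uuidof(IAudioCaptureClient), (void**)&capture);
        client->Start();

        // Poll capture->GetBuffer / capture->ReleaseBuffer from your
        // real-time loop to pull PCM frames in the mix format.
        return capture;
    }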
I was wondering if it is possible to capture a copy of the audio output in Qt so I can process it. Here they said it's possible to monitor playback, but I think that's only possible if you use a self-made music player, which I don't want. I want to capture the signal no matter where it is played (YouTube, Spotify, Facebook, etc.). Is there a way to analyze this data with Qt? Is it possible to set the output of my sound card as a QMediaSource?
Thank you in advance.
In general, no, that isn't possible, simply because your process (and therefore the Qt library loaded into your process) does not have access to that information. (I believe this lack of access is deliberate, since if it did have access like that, there would be security and/or privacy implications, i.e. app A could use it to spy on the audio output of app B, etc.)
There may be an OS-specific mechanism that you can use; for example, if you are running your program under MacOS/X, you can install the SoundFlower audio driver that can function as a loopback device, allowing programs to read audio from its "audio input" that was previously routed to its "audio output". But without that kind of external support, it's not currently possible to record the computer's audio output via Qt.
Hi, I'm working on a C++ project that I'm trying to keep OS-independent, and I have two processes that need to communicate. I was thinking about setting up a third process (possibly as a service?) to coordinate the other two asynchronously.
Client 1 will tell the intermediate process when data is ready, and send the data to it. The intermediate process will then hold this data until client 2 tells it that it is ready for the data. If the intermediate process has not received new data from client 1, it will tell client 2 to wait.
Since I am trying to keep this OS-independent, I don't really know what to use. I have looked into using MPI, but it doesn't really seem to fit this purpose. I have also looked into Boost.ASIO, named pipes, RPCs, and RCF. I'm currently programming on Windows, but I'd like to avoid the Win32 API so that the code could potentially be compiled on Linux.
Here's a little more detail on the two processes.
We have a back-end process/model (client 1) that will receive initial inputs from a GUI (client 2, written in Qt) via the intermediate process. The model will then work until the end condition is met, sending data to the server as it becomes ready. The GUI will ask the intermediate process for data at regular intervals and will be told to wait if the model has not updated the data. As data becomes available from the model, we also want to keep any previous data from the current session for exporting to a file if the user chooses to do so (i.e., we want the GUI to be able to issue a command to the intermediate process to export (or load) the data).
My privileges to modify the back end/model are minimal, other than to adhere to the design outlined above. I have a decent amount of C++ experience, but not much parallel/asynchronous application experience. Any help or direction is greatly appreciated.
Standard BSD TCP/IP sockets are mostly platform-independent. They work, with some minor differences, on both Windows and Unices (like Linux).
P.S. Windows does not support AF_UNIX sockets.
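To illustrate how small those differences are, here is a sketch of a portable TCP client (the address and port are placeholders; error handling is elided):

    // The same BSD socket calls on both platforms; only startup/teardown
    // and the close call differ.
    #ifdef _WIN32
      #include <winsock2.h>
      #include <ws2tcpip.h>
      typedef SOCKET socket_t;      // link with ws2_32.lib
    #else
      #include <sys/socket.h>
      #include <netinet/in.h>
      #include <arpa/inet.h>
      #include <unistd.h>
      typedef int socket_t;
    #endif

    int main() {
    #ifdef _WIN32
        WSADATA wsa;                // Windows needs explicit startup
        WSAStartup(MAKEWORD(2, 2), &wsa);
    #endif
        socket_t s = socket(AF_INET, SOCK_STREAM, 0);

        sockaddr_in addr = {};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(5555);
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);
        connect(s, (sockaddr*)&addr, sizeof(addr));

        const char msg[] = "data ready";
        send(s, msg, sizeof(msg), 0);

    #ifdef _WIN32
        closesocket(s);             // close() is closesocket() on Windows
        WSACleanup();
    #else
        close(s);
    #endif
        return 0;
    }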
I'd check out the Boost.Interprocess library. If the two processes are on the same machine, it has a number of different ways to communicate between processes, and does so in a platform-independent manner.
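A minimal sketch of what that could look like with a Boost.Interprocess message queue for the model-to-GUI hand-off (the queue name and 64-byte message size are placeholders; error handling is elided):

    #include <boost/interprocess/ipc/message_queue.hpp>
    #include <cstdio>
    #include <cstring>

    namespace bip = boost::interprocess;

    int main(int argc, char** argv) {
        // Both sides open (or create) the same named queue: up to 100
        // messages of 64 bytes each.
        bip::message_queue mq(bip::open_or_create, "model_to_gui", 100, 64);

        if (argc > 1 && std::strcmp(argv[1], "send") == 0) {
            // Client 1 (the model): post data as it becomes ready.
            char payload[64] = "data ready";
            mq.send(payload, sizeof(payload), 0);
        } else {
            // Client 2 (the GUI): poll; try_receive returns false when no
            // data is available, which plays the role of the "wait" reply.
            char buffer[64];
            bip::message_queue::size_type received = 0;
            unsigned priority = 0;
            if (mq.try_receive(buffer, sizeof(buffer), received, priority))
                std::printf("got: %s\n", buffer);
            else
                std::printf("no data yet, wait\n");
        }
        return 0;
    }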
I am not sure if you have considered the messaging layer, but if you are sending structured data between processes you should consider looking at Google Protocol Buffers.
These relate to the content of the messages (what is passed) rather than how they are passed.
boost::asio is platform-independent, and using it doesn't require C++ at both ends. Of course, when you are using C++ on both ends, you can use boost::asio as your form of transport.
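For completeness, a sketch of the same hand-off over boost::asio TCP (port 5555 is a placeholder; error handling is elided). Run one instance with the "server" argument and one without:

    #include <boost/asio.hpp>
    #include <cstring>
    #include <iostream>

    using boost::asio::ip::tcp;

    int main(int argc, char** argv) {
        boost::asio::io_context io;
        if (argc > 1 && std::strcmp(argv[1], "server") == 0) {
            // Intermediate process: accept a connection, hand the data over.
            tcp::acceptor acceptor(io, tcp::endpoint(tcp::v4(), 5555));
            tcp::socket socket(io);
            acceptor.accept(socket);
            boost::asio::write(socket,
                               boost::asio::buffer(std::string("data ready\n")));
        } else {
            // GUI side: connect and read whatever the server has ready.
            tcp::socket socket(io);
            socket.connect(tcp::endpoint(
                boost::asio::ip::make_address("127.0.0.1"), 5555));
            boost::asio::streambuf buf;
            boost::asio::read_until(socket, buf, '\n');
            std::cout << &buf;
        }
        return 0;
    }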
I'd like to develop a very simple program that maps the PC keyboard to a piano keyboard: each time the user presses a key, a MIDI event is generated, and a stand-alone sampler/synth (such as SFZ+ or another) receives these events and plays a sound.
I am able to generate MIDI events (using midiOutShortMsg), but they are sent directly to the FM synth (and so played by it); I'd like to send them to an external piece of software instead. The code must be in C/C++.
Could you help me?
Thanks.
You should look at JACK:

JACK is a system for handling real-time, low latency audio (and MIDI). It runs on GNU/Linux, Solaris, FreeBSD, OS X and Windows (and can be ported to other POSIX-conformant platforms). It can connect a number of different applications to an audio device, as well as allowing them to share audio between themselves. Its clients can run in their own processes (i.e. as normal applications), or they can run within the JACK server (i.e. as a "plugin"). JACK also has support for distributing audio processing across a network, both fast & reliable LANs as well as slower, less reliable WANs. JACK was designed from the ground up for professional audio work, and its design focuses on two key areas: synchronous execution of all clients, and low latency operation. More background information is available.

Available as source or binaries here.
You must have used "midiOutOpen" to open the device.
What if you select another device ID?
Sounds like you are not opening the correct device. midiOutOpen takes a device ID as its second parameter; did you check that the one you pass is the correct one (using midiOutGetNumDevs and midiOutGetDevCaps)?
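A minimal sketch of that check, plus sending a test note once you have picked a device (the device ID 0 is a placeholder; link with winmm.lib; error handling is elided):

    #include <windows.h>
    #include <mmsystem.h>
    #include <cstdio>

    int main() {
        // List every MIDI output device so you can pick the right ID.
        UINT count = midiOutGetNumDevs();
        for (UINT i = 0; i < count; ++i) {
            MIDIOUTCAPSA caps;
            if (midiOutGetDevCapsA(i, &caps, sizeof(caps)) == MMSYSERR_NOERROR)
                std::printf("Device %u: %s\n", i, caps.szPname);
        }

        HMIDIOUT out;
        midiOutOpen(&out, 0 /* the ID you found above */, 0, 0, CALLBACK_NULL);

        // Note on: status 0x90 (channel 1), note 60 (middle C), velocity 127,
        // packed low byte first: status | note << 8 | velocity << 16.
        midiOutShortMsg(out, 0x007F3C90);
        Sleep(500);
        midiOutShortMsg(out, 0x00003C80);  // matching note off

        midiOutClose(out);
        return 0;
    }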
Many software synths don't set themselves up as Windows MIDI devices. Try using the freeware LoopBe1 to connect virtual cables between MIDI apps.