Playing sounds over the microphone in C++

I am making a program in C++ for Windows XP that requires sound to be played so that any program currently recording from the microphone can hear it, but without it coming out of the speakers. There seems to be no "real" way of doing it, but it is possible to run "sndvol32 -R" and set the Wave Out Mix (or similar) as the current input device. Then you can turn the master volume to 0, play the sound, turn it back up, and reset the input device to the microphone. Is there a way of doing this transparently, or of setting the current input device through API calls, so that you don't have to see sndvol32 pop up?
Thanks

Doing this would require a complicated kernel-level driver.
Fortunately for you, someone has already done this (it's not free, but it's a fantastic program).
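That said, the sndvol32 step itself can, at least in principle, be driven silently through the legacy mixer API: find the wave-in destination line, find its MUX (source selector) control, and flip the boolean for the Wave Out Mix entry with mixerSetControlDetails. A minimal sketch, assuming the driver exposes a standard MUX control and that you have already determined the index of the desired source (e.g. via mixerGetControlDetails with MIXER_GETCONTROLDETAILSF_LISTTEXT); link against winmm.lib:

    #include <windows.h>
    #include <mmsystem.h>

    // Sketch: silently select a recording source (e.g. "Wave Out Mix") on the
    // default mixer. Some drivers expose a MIXER control or per-source volume
    // controls instead of a MUX; error handling is omitted for brevity.
    bool SelectRecordingSource(DWORD sourceIndex)
    {
        HMIXER mixer = NULL;
        if (mixerOpen(&mixer, 0, 0, 0, MIXER_OBJECTF_MIXER) != MMSYSERR_NOERROR)
            return false;

        // Find the wave-in destination line.
        MIXERLINE line = { sizeof(MIXERLINE) };
        line.dwComponentType = MIXERLINE_COMPONENTTYPE_DST_WAVEIN;
        mixerGetLineInfo((HMIXEROBJ)mixer, &line,
                         MIXER_GETLINEINFOF_COMPONENTTYPE);

        // Find its MUX (source selector) control.
        MIXERCONTROL ctrl = { sizeof(MIXERCONTROL) };
        MIXERLINECONTROLS ctrls = { sizeof(MIXERLINECONTROLS) };
        ctrls.dwLineID      = line.dwLineID;
        ctrls.dwControlType = MIXERCONTROL_CONTROLTYPE_MUX;
        ctrls.cControls     = 1;
        ctrls.cbmxctrl      = sizeof(MIXERCONTROL);
        ctrls.pamxctrl      = &ctrl;
        mixerGetLineControls((HMIXEROBJ)mixer, &ctrls,
                             MIXER_GETLINECONTROLSF_ONEBYTYPE);

        // Exactly one source may be selected: set ours, clear the rest.
        MIXERCONTROLDETAILS_BOOLEAN *flags =
            new MIXERCONTROLDETAILS_BOOLEAN[ctrl.cMultipleItems]();
        if (sourceIndex < ctrl.cMultipleItems)
            flags[sourceIndex].fValue = TRUE;

        MIXERCONTROLDETAILS details = { sizeof(MIXERCONTROLDETAILS) };
        details.dwControlID    = ctrl.dwControlID;
        details.cChannels      = 1;
        details.cMultipleItems = ctrl.cMultipleItems;
        details.cbDetails      = sizeof(MIXERCONTROLDETAILS_BOOLEAN);
        details.paDetails      = flags;
        MMRESULT r = mixerSetControlDetails((HMIXEROBJ)mixer, &details,
                                            MIXER_SETCONTROLDETAILSF_VALUE);
        delete[] flags;
        mixerClose(mixer);
        return r == MMSYSERR_NOERROR;
    }

The same API can read and restore the master volume around the playback, which covers the whole sndvol32 dance without any visible window. Whether the line and control are actually present depends on the sound driver, which is why the driver-level products exist.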

Related

No audio output from one of two streams when rendering directly to WASAPI

I've been stuck on this problem for weeks now and Google is no help, so hopefully someone here can help me.
I am programming a software sound mixer in C++: it takes audio packets from the network and from Windows microphones, mixes them together as PCM, and then sends them back out over the network and to speakers/USB headsets. This works; I have a working setup using the PortAudio library to handle the interface with Windows. However, my supervisors think the latency between this software and our system could be reduced, so in an attempt to lower latency (and better handle USB headset disconnects) I'm now rewriting the Windows interface layer to use WASAPI directly. I can eliminate some buffers and callbacks this way, and could in theory use the very low latency interface if that's still not fast enough for the higher-ups.
I have it only partially working now, and the partial part is what is killing me. Our system produces three separate mono audio streams: one for the speaker, and two that are combined into a stereo stream for the headset. I'm outputting these to Windows as two streams: one to a device of the user's choice for the speaker, and one to another device of the user's choice for the headset. For testing, they're both outputting to the default stereo output on my system.
I can hear the speaker perfectly, but I cannot hear the headset, no matter what I try. Both use the same code path, and both go through a WMF resampler to convert to two-channel audio at the sample rate Windows wants, yet I only ever hear the speaker stream, never the headset.
It's not an exclusive-mode problem: I'm using shared mode on all streams, and I've even tried cutting the setup down to only the headset stream, in case one was stomping on the other, and the headset still produces no audio.
It's not a mixer problem upstream, as I haven't modified any code from when it worked with PortAudio streams. I can see the audio passing through the mixer and to the output via my debug visualizers.
I can see the data going into the buffer I get from the system when it calls back to ask for audio. I should be hearing something, even static, but I'm getting nothing. (At one point I bypassed the ring buffer entirely and wrote random numbers directly into the buffer in the callback, and I still got no sound.)
What am I doing wrong here? It seems like Windows itself is the problem, but I don't have the expertise with Windows APIs to know what, and I'm apparently the closest thing to an expert on this at my company. I haven't even looked into why the microphone input isn't working yet, and I've been stuck on this for weeks. If anyone has any suggestions, it'd be much appreciated.
Check the resampled streams by swapping them: output the headset stream to the speaker device and the speaker stream to the headset device, to see whether the silence follows the stream or the device.
Use IAudioClient::IsFormatSupported to check which formats the headset device actually supports.
Verify the devices themselves with an mp3 file: use two media players to play different files on the two devices simultaneously.
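For the second suggestion, the check is short. A sketch, assuming you already have the headset's IMMDevice and that the post-resampler format is 48 kHz 16-bit stereo PCM (adjust to whatever your WMF resampler actually emits); error handling omitted:

    #include <mmdeviceapi.h>
    #include <audioclient.h>

    // Sketch: ask WASAPI whether the headset endpoint accepts the format we
    // produce after resampling.
    HRESULT CheckHeadsetFormat(IMMDevice *headsetDevice)
    {
        IAudioClient *client = NULL;
        HRESULT hr = headsetDevice->Activate(__uuidof(IAudioClient), CLSCTX_ALL,
                                             NULL, (void **)&client);
        if (FAILED(hr)) return hr;

        WAVEFORMATEX want = {};
        want.wFormatTag      = WAVE_FORMAT_PCM;
        want.nChannels       = 2;
        want.nSamplesPerSec  = 48000;
        want.wBitsPerSample  = 16;
        want.nBlockAlign     = want.nChannels * want.wBitsPerSample / 8;
        want.nAvgBytesPerSec = want.nSamplesPerSec * want.nBlockAlign;

        WAVEFORMATEX *closest = NULL;
        hr = client->IsFormatSupported(AUDCLNT_SHAREMODE_SHARED, &want, &closest);
        // S_OK: format is fine as-is. S_FALSE: not supported directly, and
        // *closest holds the nearest match. Anything else: unusable in shared
        // mode, which would explain a silent stream.
        if (closest) CoTaskMemFree(closest);
        client->Release();
        return hr;
    }

Comparing the result against IAudioClient::GetMixFormat for each device is also worth doing, since shared-mode streams are converted to the mix format and a mismatch there is a common source of silent output.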

Changing mp3 speed in Qt and C++ [QMediaPlayer]

I'm trying to develop a little application in which you can load an mp3 file and play it at variable speeds! (I know it already exists :-) )
I'm using Qt and C++. I already have the basic player, but I'm stuck on the rate part, because I want to change the rate smoothly (like in Mixxx) without stopping playback! QMediaPlayer always stops when I change the value, which creates a gap in the sound. Also, I don't want the pitch to change!
I already found something called "SoundTouch", but now I'm completely clueless what to do with it: how to process my mp3 data, and how to get it to the player! The SoundTouch library is capable of doing what I want; I got that from the samples on its homepage.
How do I import the mp3 file so that I can process it with the SoundTouch functions?
How can I play the output of the SoundTouch functions? (Perhaps QMediaPlayer can do the job?)
How is this done live? I have to use some kind of stream, I guess, so I can change the speed during playback and keep playing without gaps. In my head it's something that sits between the data and the player, which all data passes through live, with a small buffer (20-50 ms or so) to avoid gaps while future data is processed.
Any help appreciated! I'm also open to any solution other than SoundTouch, as long as I can stay with Qt/C++!
(Second thing: I want to show a waveform overview as well as a moving detail view around the current position in the song, so I could also use hints on how to get the waveform data.)
Thanks in advance!
As of now (Qt 5.5) this is impossible to do with QMediaPlayer alone. You need to do the following:
Decode the audio using GStreamer, FFmpeg or the (new) QAudioDecoder: http://doc.qt.io/qt-5/qaudiodecoder.html - this gives you a raw PCM stream;
Apply SoundTouch or some other time-stretching library to the raw data to change the tempo without shifting the pitch. If GPL is OK, take a look at http://nsound.sourceforge.net/examples/index.html; if you develop proprietary stuff, STK might be a better choice: https://ccrma.stanford.edu/software/stk/
Output the modified data to the audio device using QAudioOutput.
This strategy uses Qt as much as possible and gives you the best platform coverage (you still lose Android, though, as it does not support QAudioOutput).
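A rough sketch of that pipeline, assuming Qt 5, a SoundTouch build with its default float samples, and a decoder backend that honors the requested float format; the file name is a placeholder, and flow control between decoder and output is ignored:

    #include <QCoreApplication>
    #include <QAudioDecoder>
    #include <QAudioOutput>
    #include <QAudioFormat>
    #include <QIODevice>
    #include <SoundTouch.h>

    int main(int argc, char *argv[])
    {
        QCoreApplication app(argc, argv);

        // Ask the decoder for a fixed PCM format so SoundTouch's input is known.
        QAudioFormat fmt;
        fmt.setSampleRate(44100);
        fmt.setChannelCount(2);
        fmt.setSampleSize(32);
        fmt.setSampleType(QAudioFormat::Float);
        fmt.setByteOrder(QAudioFormat::LittleEndian);
        fmt.setCodec("audio/pcm");

        soundtouch::SoundTouch touch;
        touch.setSampleRate(44100);
        touch.setChannels(2);
        touch.setTempo(1.25); // 25% faster, pitch unchanged; call again live

        QAudioOutput out(fmt);
        QIODevice *dev = out.start(); // push-mode sink for raw PCM

        QAudioDecoder decoder;
        decoder.setAudioFormat(fmt);
        decoder.setSourceFilename("song.mp3"); // placeholder file name

        QObject::connect(&decoder, &QAudioDecoder::bufferReady, [&]() {
            QAudioBuffer buf = decoder.read();
            // SoundTouch counts sample frames (one frame = 2 floats here).
            touch.putSamples(buf.constData<float>(), buf.frameCount());
            float block[8192]; // room for 4096 stereo frames
            uint frames;
            while ((frames = touch.receiveSamples(block, 4096)) > 0)
                dev->write(reinterpret_cast<const char *>(block),
                           frames * 2 * sizeof(float));
            // A real player would throttle on QAudioOutput::bytesFree() and
            // call touch.flush() when QAudioDecoder::finished() fires.
        });
        decoder.start();
        return app.exec();
    }

Changing the speed during playback is then just another touch.setTempo() call; SoundTouch keeps its own internal buffer, so playback continues without a gap, which is exactly the "something that sits between the data and the player" from the question.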

Why can't I set master volume for USB/Firewire Audio interface with IAudioEndpointVolume::SetMasterVolumeLevelScalar

I am trying to fix an Audacity bug that revolves around PortMixer. The output/input level is settable using the Mac version of PortMixer, but not always on Windows. I am debugging PortMixer's Windows code to try to make it work there.
Using IAudioEndpointVolume::SetMasterVolumeLevelScalar to set the master volume works fine for onboard sound, but with pro external USB or FireWire interfaces like the RME Fireface 400, the output volume won't change, even though the new value is reflected in the Windows sound control panel for that device and in the system mixer.
Also, outside of our program, changing the master slider for the system mixer (in the taskbar) has no effect: the soundcard outputs at the same (full) level regardless of the level the system reports. The only way to change the output level is the custom application that the hardware vendor ships with the card.
IAudioEndpointVolume::QueryHardwareSupport reports ENDPOINT_HARDWARE_SUPPORT_VOLUME, so the device should be able to do this.
This behavior exists for both input and output on many devices.
Is this possibly a Windows bug?
It is possible to work around this by emulating (scaling) the output, but that is not preferred as it is not functionally identical: better to let the audio interface do the scaling (especially for input, if it involves a preamp).
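For reference, the code path in question boils down to this (a minimal sketch against the default render endpoint; error handling omitted):

    #include <mmdeviceapi.h>
    #include <endpointvolume.h>

    // Sketch: set the endpoint master volume to 50% and check what kind of
    // hardware support the driver claims for it.
    void SetHalfVolume()
    {
        CoInitialize(NULL);
        IMMDeviceEnumerator *enumr = NULL;
        CoCreateInstance(__uuidof(MMDeviceEnumerator), NULL, CLSCTX_ALL,
                         __uuidof(IMMDeviceEnumerator), (void **)&enumr);
        IMMDevice *device = NULL;
        enumr->GetDefaultAudioEndpoint(eRender, eConsole, &device);

        IAudioEndpointVolume *vol = NULL;
        device->Activate(__uuidof(IAudioEndpointVolume), CLSCTX_ALL, NULL,
                         (void **)&vol);

        DWORD support = 0;
        vol->QueryHardwareSupport(&support);
        // On the problem cards this reports ENDPOINT_HARDWARE_SUPPORT_VOLUME,
        // yet the call below only moves the Windows-side slider.
        vol->SetMasterVolumeLevelScalar(0.5f, NULL);

        vol->Release();
        device->Release();
        enumr->Release();
        CoUninitialize();
    }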
The cards you talk about, like the RME ones, simply do not support setting the master (or any other) level through software, and there is not much you can do about it. This is not a Windows bug. One could argue that reporting ENDPOINT_HARDWARE_SUPPORT_VOLUME is a bug, but that likely originates at the driver level, not in Windows itself.
The only solution I have found so far is attaching a debugger (or adding a DLL hook) to the vendor-supplied software and watching the DeviceIoControl calls it makes (those are the ones used to talk to the hardware) while setting the volume in the vendor's mixer. It is pretty hard to do this for every single card, but probably worth doing for a couple of pro cards, especially for Audacity; for open-source audio software it's actually not that bad, and I can imagine some people being really happy if it could set the volume on their card. (Back when we were exclusively using an RME Multiface I spent quite some time figuring out the DeviceIoControl calls, but in the end it was definitely worth it, as I could then set the volume in dB for any point in the matrix.)

Gain sole control of Audio Output, DirectSound

I am creating a basic signal generator and decided to use my audio card as the analogue output. I chose to use DirectSound because... it seemed like a good option.
I have it up and running quite nicely, but I now realize that my code uses secondary buffers, and as such any other sounds on the computer get mixed in with my generated signal. This is something of an issue: when I'm driving a motor, I don't want an MSN poke noise sent to it as a command.
In order to gain total control I've attempted to take over the system by setting my cooperative level to DSSCL_WRITEPRIMARY. All in all this strategy is really giving me a headache, as I am running into error after error trying to get it set up. The documentation on using the primary buffer isn't great, and I can't find any really good examples.
So my questions are:
Does anyone have a good, working example of taking over and writing to the primary buffer?
Is there a simpler way of outputting a waveform to the audio card while ensuring that my application has full and sole control?
Thank you
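For the first question, the write-primary skeleton looks roughly like this, as far as I can tell (a sketch for XP-era DirectSound; on Vista and later the primary buffer is emulated, so DSSCL_WRITEPRIMARY no longer gives real sole control. Link against dsound.lib, and note that Lock fails while your window is not in the foreground):

    #include <dsound.h>

    // Sketch: take over the primary buffer with DSSCL_WRITEPRIMARY. While
    // this level is held, no secondary buffers (ours or other apps') are
    // mixed into the output. Error handling omitted.
    void WritePrimary(HWND hwnd)
    {
        LPDIRECTSOUND8 ds = NULL;
        DirectSoundCreate8(NULL, &ds, NULL);
        ds->SetCooperativeLevel(hwnd, DSSCL_WRITEPRIMARY);

        // For the primary buffer: no size and no format in the description.
        DSBUFFERDESC desc = { sizeof(DSBUFFERDESC) };
        desc.dwFlags = DSBCAPS_PRIMARYBUFFER;
        LPDIRECTSOUNDBUFFER primary = NULL;
        ds->CreateSoundBuffer(&desc, &primary, NULL);

        WAVEFORMATEX fmt = {};
        fmt.wFormatTag      = WAVE_FORMAT_PCM;
        fmt.nChannels       = 2;
        fmt.nSamplesPerSec  = 44100;
        fmt.wBitsPerSample  = 16;
        fmt.nBlockAlign     = fmt.nChannels * fmt.wBitsPerSample / 8;
        fmt.nAvgBytesPerSec = fmt.nSamplesPerSec * fmt.nBlockAlign;
        primary->SetFormat(&fmt);

        // Write the waveform directly into the circular primary buffer.
        void *p1; DWORD n1; void *p2; DWORD n2;
        primary->Lock(0, 0, &p1, &n1, &p2, &n2, DSBLOCK_ENTIREBUFFER);
        // ... fill p1/n1 (and p2/n2 if non-null) with generated samples ...
        primary->Unlock(p1, n1, p2, n2);
        primary->Play(0, 0, DSBPLAY_LOOPING); // the primary buffer must loop
    }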
The only thing I've seen that's related is:
http://blogs.msdn.com/b/matthew_van_eerde/archive/2009/04/03/sample-wasapi-exclusive-mode-event-driven-playback-app-including-the-hd-audio-alignment-dance.aspx
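The heart of that sample, stripped to a skeleton (a sketch only: it assumes the device accepts 44.1 kHz 16-bit stereo in exclusive mode and skips both error handling and the buffer-size alignment handling that the linked post is actually about):

    #include <mmdeviceapi.h>
    #include <audioclient.h>

    // Sketch: open the default render device in WASAPI exclusive mode so no
    // other application's audio is mixed into the output.
    void PlayExclusive()
    {
        CoInitialize(NULL);
        IMMDeviceEnumerator *enumr = NULL;
        CoCreateInstance(__uuidof(MMDeviceEnumerator), NULL, CLSCTX_ALL,
                         __uuidof(IMMDeviceEnumerator), (void **)&enumr);
        IMMDevice *device = NULL;
        enumr->GetDefaultAudioEndpoint(eRender, eConsole, &device);
        IAudioClient *client = NULL;
        device->Activate(__uuidof(IAudioClient), CLSCTX_ALL, NULL,
                         (void **)&client);

        WAVEFORMATEX fmt = {};
        fmt.wFormatTag      = WAVE_FORMAT_PCM;
        fmt.nChannels       = 2;
        fmt.nSamplesPerSec  = 44100;
        fmt.wBitsPerSample  = 16;
        fmt.nBlockAlign     = fmt.nChannels * fmt.wBitsPerSample / 8;
        fmt.nAvgBytesPerSec = fmt.nSamplesPerSec * fmt.nBlockAlign;

        REFERENCE_TIME period = 0;
        client->GetDevicePeriod(NULL, &period); // minimum exclusive period

        // In exclusive event-driven mode, duration and periodicity are both
        // the device period. A real app must handle
        // AUDCLNT_E_BUFFER_SIZE_NOT_ALIGNED here (the "alignment dance").
        client->Initialize(AUDCLNT_SHAREMODE_EXCLUSIVE,
                           AUDCLNT_STREAMFLAGS_EVENTCALLBACK,
                           period, period, &fmt, NULL);

        HANDLE ev = CreateEvent(NULL, FALSE, FALSE, NULL);
        client->SetEventHandle(ev);

        IAudioRenderClient *render = NULL;
        client->GetService(__uuidof(IAudioRenderClient), (void **)&render);
        UINT32 frames = 0;
        client->GetBufferSize(&frames);
        client->Start();

        for (;;) { // each device period, refill the whole buffer
            WaitForSingleObject(ev, 2000);
            BYTE *data = NULL;
            render->GetBuffer(frames, &data);
            // ... write `frames` frames of the generated waveform into data ...
            render->ReleaseBuffer(frames, 0);
        }
    }

Exclusive mode is the supported way to get sole control on Vista and later, which is why the linked sample is probably a better bet than fighting the primary buffer.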

Linux, C++ audio capturing (just microphone) library

I'm developing a musical game; it's like SingStar, but instead of singing you have to play the recorder. It's called oFlute, and it's still at an early stage of development.
In the game I capture the microphone input, run a simple FFT analysis, and compare the results to the typical frequencies of a recorder, thus getting the played note.
At the beginning the audio library I was using was RtAudio, but I don't remember why I switched to PortAudio, which is what I'm currently using. The problem is that, from time to time, it either crashes randomly or stops capturing, as if no sound were coming from the microphone.
My question is, what's the best option to capture microphone input on Linux? I just need to open, read, and close a flow of bytes from the microphone.
I've been reading this guide, and (un)surprisingly it says:
I don't think that PortAudio is very good API for Unix-like operating systems.
So, what do you recommend?
PortAudio is a strange choice given the other options.
I would personally abstract away from everything and use GStreamer. Audio can be a horrible mess on Linux (speaking as a long-term sufferer). Letting GStreamer deal with that lets you forget about it, move along, and not have to think about it again.
OpenAL is probably the most popular for game dev though and it should support most systems (although you will have "fun" getting it playing nice with PulseAudio).
I'd certainly make sure you're developing for the most popular setup (which is PulseAudio at the moment, I reckon) so you don't end up in a situation where you release and you're plunged into a pool of people moaning about the sound not working.
And don't listen to the nonsense about PulseAudio: it might be new, and it might take a few more resources than a bare-bones ALSA system, but it's certainly not mired in latency issues. Asking people to remove it isn't an option with modern desktop distros, as it's so tightly integrated (and useful, too).
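A minimal capture sketch with GStreamer's appsink, pulling raw mono samples you could feed straight into the FFT (a sketch assuming GStreamer 1.0; the caps values are illustrative, and you build with pkg-config gstreamer-1.0 gstreamer-app-1.0):

    #include <gst/gst.h>
    #include <gst/app/gstappsink.h>

    int main(int argc, char **argv)
    {
        gst_init(&argc, &argv);
        GError *err = NULL;
        // autoaudiosrc picks PulseAudio, ALSA, or whatever is actually there.
        GstElement *pipeline = gst_parse_launch(
            "autoaudiosrc ! audioconvert ! audioresample ! "
            "audio/x-raw,format=S16LE,channels=1,rate=44100 ! appsink name=snk",
            &err);
        if (!pipeline) { g_printerr("%s\n", err->message); return 1; }

        GstElement *sink = gst_bin_get_by_name(GST_BIN(pipeline), "snk");
        gst_element_set_state(pipeline, GST_STATE_PLAYING);

        for (int i = 0; i < 500; ++i) { // a real game loops until quit
            GstSample *sample = gst_app_sink_pull_sample(GST_APP_SINK(sink));
            if (!sample) break;          // NULL on EOS or shutdown
            GstBuffer *buf = gst_sample_get_buffer(sample);
            GstMapInfo map;
            gst_buffer_map(buf, &map, GST_MAP_READ);
            // map.data / map.size: raw S16LE mono samples -> run the FFT here
            gst_buffer_unmap(buf, &map);
            gst_sample_unref(sample);
        }

        gst_element_set_state(pipeline, GST_STATE_NULL);
        gst_object_unref(sink);
        gst_object_unref(pipeline);
        return 0;
    }

The nice part is that the pipeline string is the only platform-dependent piece: swap autoaudiosrc for pulsesrc or alsasrc if you ever need to pin a specific backend, and the rest of the game never changes.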