Encrypt/decrypt voice in a phone call

I want to develop a program like the image below.
When you are in a voice call and you start myApplication, your voice first goes to my program and is encrypted by my code, then sent over the voice channel; on the other side, my application decrypts the voice and plays it through the speaker.
My friend said I cannot do this and that I should use VoIP or a data channel instead.

You can't, for essentially the reason your friend gave: a phone voice channel is lossy and noisy, its speech codecs are tuned to human speech, and it is only because of the limits of human hearing that we don't notice the distortion. VoIP or a data channel transmits data reliably, so you can easily decode what you encoded, but pushing encrypted audio through the plain voice channel introduces so much noise that the received data is almost unusable.

No audio output from one of two streams when rendering directly to WASAPI

I've been stuck on this problem for weeks now and Google is no help, so hopefully someone here can help me.
I am programming a software sound mixer in C++, getting audio packets from the network and Windows microphones, mixing them together as PCM, and then sending them back out over the network and to speakers/USB headsets. This works. I have a working setup using the PortAudio library to handle the interface with Windows. However, my supervisors think the latency could be reduced between this software and our system, so in an attempt to lower latency (and better handle USB headset disconnects) I'm now rewriting the Windows interface layer to directly use WASAPI. I can eliminate some buffers and callbacks doing this, and theoretically use the super low latency interface if that's still not fast enough for the higher ups.
I have it only partially working now, and the "partially" part is what is killing me here. Our system represents the speaker and headphones as three separate mono audio streams: the speaker is mono, and the headset is combined from two streams into stereo. I'm outputting this to Windows as two streams, one to a device of the user's choice for the speaker, and one to another device of the user's choice for the headset. For testing, both are outputting to the default stereo mix on my system.
I can hear the speaker perfectly fine, but I cannot hear the headset, no matter what I try. Both use the same code path, and both go through a WMF resampler to convert to 2-channel audio at the sample rate Windows wants. Yet I can hear the speaker, and never the headset stream.
It's not an exclusive mode problem: I'm using shared mode on all streams, and I've even specifically tried cutting down the streams to only the headset, in case one was stomping the other or something, and still the headset has no audio output.
It's not a mixer problem upstream, as I haven't modified any code from when it worked with PortAudio streams. I can see the audio passing through the mixer and to the output via my debug visualizers.
I can see the data going into the buffer I get from the system, when the system calls back to ask for audio. I should be hearing something, static even, but I'm getting nothing. (At one point, I bypassed the ring buffer entirely and put random numbers directly into the buffer in the callback and I still got no sound.)
What am I doing wrong here? It seems like Windows itself is the problem or something, but I don't have the expertise with Windows APIs to know what, and I'm apparently the closest thing to an expert on this at my company. I haven't even looked yet into why the microphone input isn't working, and I've been stuck on this for weeks now. If anyone has any suggestions, it'd be much appreciated.
Check the resampled streams by swapping them: output the stereo stream to the speaker, and output the mono stream to the headset.
Use IAudioClient::IsFormatSupported to check which formats the headset endpoint supports (see the sketch below).
Verify the devices themselves with an mp3 file: use two media players to play different files on the two devices simultaneously.
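As an illustration of the second suggestion, here is a minimal sketch of the IsFormatSupported check. It assumes you already have an IAudioClient activated on the headset endpoint, and the concrete format values are placeholders:

```cpp
// Sketch: verify the resampled format before calling IAudioClient::Initialize.
// Assumes 'client' is an IAudioClient activated on the headset endpoint.
WAVEFORMATEX desired = {};
desired.wFormatTag      = WAVE_FORMAT_IEEE_FLOAT;
desired.nChannels       = 2;
desired.nSamplesPerSec  = 48000;          // placeholder; use your resampler's rate
desired.wBitsPerSample  = 32;
desired.nBlockAlign     = desired.nChannels * desired.wBitsPerSample / 8;
desired.nAvgBytesPerSec = desired.nSamplesPerSec * desired.nBlockAlign;

WAVEFORMATEX* closest = nullptr;
HRESULT hr = client->IsFormatSupported(AUDCLNT_SHAREMODE_SHARED, &desired, &closest);
if (hr == S_OK) {
    // The engine accepts the format as-is.
} else if (hr == S_FALSE && closest != nullptr) {
    // The engine wants a different format; retarget the resampler to 'closest'.
    CoTaskMemFree(closest);
} else {
    // AUDCLNT_E_UNSUPPORTED_FORMAT or another failure: log hr, try another format.
}
```

If Initialize is being handed a format the endpoint silently rejects or remixes, this check will surface it for the headset device specifically.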

How to send audio data playing on PC to C++ program as input

I'm a beginner when it comes to programming and I wanted to do a personal project in C++ to develop my skills. The project I had in mind involves playing audio on my laptop (running Windows 10), analyzing it, and sending data to an Arduino that changes the color and brightness of LED lights in sync with the audio that's playing. I would like it so that I can simply play a song on Spotify or a music video on YouTube, etc., and the program will get data from that audio stream as input. Elsewhere I've seen programs use audio from recorded WAV files or streams from a microphone as input, but not what I have in mind. I want to use this program for parties, so using a microphone as a workaround wouldn't be ideal.
Is this even possible? And if so, how should I approach this problem? Are there certain APIs I should look at? If the program gets the audio as input, would I still be able to play the music on something like a Bluetooth speaker as well, or can the audio only go to one place at a time?
My roommate who is much better at programming than me accomplished this on Mac using Swift, and while I don't have a Mac, would using Linux instead make this easier?
Modern Windows has a “Stereo Mix” recording device for exactly that. Here's how to enable it: https://technicalustad.com/enable-stereo-mix-in-windows-10/
After that setup, in your C++ program use any recording API you want.
Here’s a sample that does what you ask for; it opens a recording device, starts recording, and sends audio samples to the class provided in the argument: https://learn.microsoft.com/en-us/windows/win32/coreaudio/capturing-a-stream For your application you probably want to trade CPU time for latency, i.e. don’t Sleep for hnsActualDuration/REFTIMES_PER_MILLISEC/2; change it to Sleep( 0 ) or Sleep( 1 ).
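As a rough illustration of that pattern, here is a condensed version of the capture loop from the linked sample with the suggested Sleep change applied. Error handling and COM cleanup are omitted, and it assumes Stereo Mix has been made the default recording device:

```cpp
// Condensed WASAPI capture loop (after the "Capturing a Stream" sample).
// Error handling and Release() calls omitted for brevity.
#include <windows.h>
#include <mmdeviceapi.h>
#include <audioclient.h>

void CaptureLoop()
{
    CoInitializeEx(nullptr, COINIT_MULTITHREADED);

    IMMDeviceEnumerator* enumerator = nullptr;
    CoCreateInstance(__uuidof(MMDeviceEnumerator), nullptr, CLSCTX_ALL,
                     __uuidof(IMMDeviceEnumerator), (void**)&enumerator);

    // With Stereo Mix set as the default recording device, this returns it.
    IMMDevice* device = nullptr;
    enumerator->GetDefaultAudioEndpoint(eCapture, eConsole, &device);

    IAudioClient* client = nullptr;
    device->Activate(__uuidof(IAudioClient), CLSCTX_ALL, nullptr, (void**)&client);

    WAVEFORMATEX* format = nullptr;
    client->GetMixFormat(&format);
    client->Initialize(AUDCLNT_SHAREMODE_SHARED, 0,
                       10000000 /* 1 s buffer, in 100 ns units */, 0, format, nullptr);

    IAudioCaptureClient* capture = nullptr;
    client->GetService(__uuidof(IAudioCaptureClient), (void**)&capture);
    client->Start();

    for (;;) // real code needs a stop condition
    {
        Sleep(1); // the suggested tweak; the original sample sleeps much longer

        UINT32 packetFrames = 0;
        capture->GetNextPacketSize(&packetFrames);
        while (packetFrames != 0)
        {
            BYTE* data = nullptr; UINT32 frames = 0; DWORD flags = 0;
            capture->GetBuffer(&data, &frames, &flags, nullptr, nullptr);

            // 'data' holds 'frames' frames in the shared mix format (typically
            // 32-bit float stereo): analyze here, e.g. levels for the LEDs.

            capture->ReleaseBuffer(frames);
            capture->GetNextPacketSize(&packetFrames);
        }
    }
}
```

The Bluetooth speaker question answers itself with this approach: Windows keeps rendering to whatever output device is selected, and the capture side only reads a copy of the mix.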

Reading audio stream to output device

I was curious if there is a way to read the data that is being sent to an audio output. My end goal is to capture the audio and then send it over serial for audio processing. I'm using a Windows computer.
The thing that seems to be making this more difficult is that I'm not reading the captured microphone input, but rather the streamed speaker output.
Can anybody help me out?
A more or less easy way is to take advantage of the Stereo Mix device, where available. This gives you an audio capture device that makes the device's mixed-down audio output available to you. You can read from it as if it were a real audio input device such as Line In or a microphone, using standard and well-documented APIs or audio libraries.
Other options are more sophisticated and require both hooking into the system and a deeper understanding of its internals: you either hook the audio APIs to intercept what applications send to the audio outputs, or you install a virtual audio device that applications render to and from which your code reads the data.
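As a small illustration of the Stereo Mix route, here is a sketch that locates such a capture endpoint by its friendly name; the search string is an assumption, since the exact name varies by driver:

```cpp
// Sketch: find a "Stereo Mix"-style capture endpoint by friendly name.
// Error handling omitted; the name to search for varies by audio driver.
#include <windows.h>
#include <mmdeviceapi.h>
#include <functiondiscoverykeys_devpkey.h>
#include <string>

IMMDevice* FindCaptureDeviceByName(const std::wstring& needle)
{
    IMMDeviceEnumerator* enumerator = nullptr;
    CoCreateInstance(__uuidof(MMDeviceEnumerator), nullptr, CLSCTX_ALL,
                     __uuidof(IMMDeviceEnumerator), (void**)&enumerator);

    IMMDeviceCollection* devices = nullptr;
    enumerator->EnumAudioEndpoints(eCapture, DEVICE_STATE_ACTIVE, &devices);

    UINT count = 0;
    devices->GetCount(&count);
    for (UINT i = 0; i < count; ++i)
    {
        IMMDevice* device = nullptr;
        devices->Item(i, &device);

        IPropertyStore* props = nullptr;
        device->OpenPropertyStore(STGM_READ, &props);

        PROPVARIANT name;
        PropVariantInit(&name);
        props->GetValue(PKEY_Device_FriendlyName, &name);

        bool match = name.vt == VT_LPWSTR &&
                     std::wstring(name.pwszVal).find(needle) != std::wstring::npos;

        PropVariantClear(&name);
        props->Release();
        if (match)
            return device; // caller Releases; open like any microphone afterwards
        device->Release();
    }
    return nullptr;
}

// Usage (assumed name): IMMDevice* mix = FindCaptureDeviceByName(L"Stereo Mix");
```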

Designing live video stream for wxWidgets

In my application we will present the video stream from a traffic camera to a client viewer. (And eventually several client viewers.) The client should have the ability to watch the live video or rewind the video and watch earlier footage including video that occurred prior to connecting with the video stream. We intend to use wxWidgets to view the video and within that we will probably use the wxMediaCtrl.
Now, from the above statements some of you might be thinking, "Hey, he doesn't know what he's talking about." And you would be correct! I'm new to these concepts and confused by the flood of information. Are the statements above reasonable? Can anyone recommend a basic server/client architecture for this? We will definitely be using C++ and wxWidgets for the GUI, but perhaps wxMediaCtrl is not what I want... should I be using something like the ffmpeg libraries directly?
Our current method seems less than optimal. The server extracts a bitmap from each video frame and then waits for the single client to send a "next frame" message, at which point the server sends the bitmap. Effectively we've recreated our own awkward, non-standard, inefficient, and low-functionality video streaming protocol and viewer. There has to be something better!
You should check out this C++ RTMP server: http://www.rtmpd.com/. I quickly downloaded, compiled, and successfully tested it without any real problems (on Ubuntu Maverick). The documentation is pretty good, if a little all over the place. I suspect that once you have a streaming media server capable of supporting the typical protocols (which rtmpd seems to do), writing a client should fall into place naturally, especially if you're using wxWidgets as the interface API. Of course, it's easy to write that here from the comfort of my living room; it'll be a different story when you're knee-deep in code :)
You can modify your software so that:
When a client connects, the server grabs an image, passes it to ffmpeg to establish a stream, then copies the encoded data from the ffmpeg stream and sends it to the client over the network; if the connection drops, it closes the ffmpeg stream.
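A low-risk way to prototype that pipeline is to pipe raw frames into an ffmpeg child process rather than linking the ffmpeg libraries directly. A sketch, assuming 640x480 BGR24 frames and a placeholder RTMP ingest URL (on Windows use _popen/_pclose):

```cpp
#include <cstdio>
#include <cstdint>
#include <vector>

int main()
{
    const int width = 640, height = 480;

    // Spawn ffmpeg as a child process: it reads raw frames from stdin,
    // encodes them to H.264, and pushes the result to an RTMP server.
    // (rtmp://localhost/live/cam is a placeholder ingest URL.)
    FILE* ff = popen(
        "ffmpeg -f rawvideo -pix_fmt bgr24 -s 640x480 -r 25 -i - "
        "-c:v libx264 -preset veryfast -f flv rtmp://localhost/live/cam", "w");
    if (!ff) return 1;

    std::vector<uint8_t> frame(width * height * 3);
    for (int i = 0; i < 250; ++i) // 10 seconds at 25 fps
    {
        // ...fill 'frame' from the traffic camera here...
        fwrite(frame.data(), 1, frame.size(), ff);
    }
    pclose(ff); // closing stdin ends the stream cleanly
}
```

Once frames flow into a real streaming server this way, any standard RTMP/RTSP-capable client replaces the hand-rolled "next frame" protocol.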
Maybe you can use the following to your own advantage:
http://www.kirsle.net/blog.html?u=kirsle&id=63
There is a player called VLC. It comes with a library (libVLC) that you can use to embed the player in your GUI application, and it supports a very wide range of protocols. So you could leave the connecting, retrieving, and playing jobs to VLC and take care only of starting and stopping. That would be easy, and probably a better solution than doing it all yourself.
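For example, a minimal libVLC embedding sketch (the RTSP URL is a placeholder, and with wxWidgets the native handle would typically come from wxWindow::GetHandle()):

```cpp
// Sketch: embedding libVLC playback in a native window (e.g., a wxWidgets panel).
// Assumes libVLC 3.x headers/libs are installed; the stream URL is a placeholder.
#include <vlc/vlc.h>
#include <cstdint>

void PlayStreamInWindow(void* nativeHandle)
{
    libvlc_instance_t* vlc = libvlc_new(0, nullptr);
    libvlc_media_t* media =
        libvlc_media_new_location(vlc, "rtsp://camera.example/stream");
    libvlc_media_player_t* player = libvlc_media_player_new_from_media(media);
    libvlc_media_release(media);

#ifdef _WIN32
    libvlc_media_player_set_hwnd(player, nativeHandle);   // HWND on Windows
#else
    libvlc_media_player_set_xwindow(player, (uint32_t)(uintptr_t)nativeHandle);
#endif

    libvlc_media_player_play(player);
    // ...later: libvlc_media_player_stop(player), then release player and vlc...
}
```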
For media playback, both video and audio, you can have a look at GStreamer. As for the server, I think Twisted (a networking library in Python) would be a good option. The famous live video site justin.tv is based on Twisted; you can read the story here. I also built a group of servers for streaming audio on Twisted; they can serve thousands of listeners online at the same time.

How can I stream video from my application to the web?

I have an application that grabs video from multiple webcams, does some image processing, and displays the result on the screen. I'd like to be able to stream the video output onto the web, preferably to some kind of distribution service rather than connecting to clients directly myself.
So my questions are:
Do such streaming distribution services exist? I'm thinking of something like ShoutCAST relays, but for video. I'm aware of ustream.tv, but I think they only take a direct webcam connection rather than allowing you to send an arbitrary stream.
If so, is there a standard protocol for doing this?
If so, is there a free library implementation of this protocol for Win32?
Ideally I'd just like to throw a frame of video in DIB format at a SendToServer(bitmap) function, and have it compress, send, and distribute it for me ;)
Take a look at VideoLAN Client (VLC for short) as a means of streaming video.
As for distribution sites, I don't know how well it works with ustream.tv and similar new services.
ustream.tv works by using Adobe Flash's support for reading input from a webcam. To fake it out, you need a fake webcam driver. Looking at the ustream.tv site, they point to an application called WebCamMax that allows effects and splicing in video. It works by creating a pseudo-webcam that mixes video from one or more cameras along with other sources. Since that app can do it, your own code could too, although you'll probably need to write a Windows driver to get it all working correctly.