Transmit Sound/Audio from One Cisco Phone to Another - rtp

I need to transfer audio from one Cisco IP phone to another. Right now, I am streaming music from VLC media player over RTP to a Cisco IP phone (model CP-9971). I push the following XML to the phone via a POST request to start it listening to the music from VLC.
<CiscoIPPhoneExecute><ExecuteItem URL="RTPRx:ipaddressA:port"/></CiscoIPPhoneExecute>
I am able to listen to the music on the IP phone after sending this XML. However, I am unable to relay the music from this phone to another phone. I used the following XML to make the first phone transmit an RTP stream to the other phone.
<CiscoIPPhoneExecute><ExecuteItem URL="RTPTx:ipaddressB:port"/> </CiscoIPPhoneExecute>
I have then used this XML to receive the RTP on the second phone.
<CiscoIPPhoneExecute><ExecuteItem URL="RTPRx:ipaddressB:port"/></CiscoIPPhoneExecute>
The music plays on the first phone, but it is not transmitted to the second phone. On the second phone, I can only hear what is picked up by the first phone's microphone (like a regular call from the first phone to the second).
So my question is whether there is a way to route the audio from the first phone's headset output into its microphone input, so that the second phone can hear the music. I don't even know if this is the right approach. I just need some kind of audio to be transmitted between these two phones during a call for sound quality reports. Any help would be greatly appreciated.
This is the guide that I am using right now.
https://developer.cisco.com/fileMedia/download/0d2f0d08-c7a4-48b9-8bc2-0bf69ab27382
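For reference, pushing these Execute objects can be scripted. Below is a minimal Python sketch that builds the CiscoIPPhoneExecute body and the form-encoded POST payload; the IP addresses and port are placeholders, and a real push to the phone's /CGI/Execute endpoint also needs HTTP Basic auth for a CUCM user associated with the phone (per the IP Phone Services SDK).

```python
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

def build_execute_payload(uri: str) -> str:
    """Build the CiscoIPPhoneExecute XML body for one Execute URI."""
    root = ET.Element("CiscoIPPhoneExecute")
    ET.SubElement(root, "ExecuteItem", URL=uri)
    return ET.tostring(root, encoding="unicode")

def build_post_body(uri: str) -> bytes:
    # The phone's /CGI/Execute endpoint expects the XML document
    # in a form-encoded field named XML.
    return urllib.parse.urlencode({"XML": build_execute_payload(uri)}).encode()

# Hypothetical addresses/port; uncomment to actually push
# (add a Basic-auth header for a user who controls the phone):
body = build_post_body("RTPRx:10.0.0.20:20480")
# req = urllib.request.Request("http://10.0.0.10/CGI/Execute", data=body)
# urllib.request.urlopen(req, timeout=5)
```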

What you are describing is not really possible, especially via the IP Phone Services XML interface.
I'm not entirely clear on your use case, but the scenario is vaguely similar to what happens during a 'built-in-bridge' based recording or silent-monitoring call. For those features, with a call in progress, a recording/monitoring request is made (via JTAPI or TAPI) which causes the phone to create an 'invisible' additional call to the target destination number, copying the media stream. It should be possible to build an app that uses a CTI port to call a target phone and play an audio file towards it, then issue a JTAPI silent-monitor request to the target phone to fork the call media to a destination number - which would be a second CTI port controlled by the app; that port would answer and receive the forked media stream.
See the JTAPI Developer Guide for further information about CTI ports and silent monitoring.

Related

Stream Audio Via TCP in UE4

I am trying to build a Virtual Assistant in UE4. I need to somehow send my response from DialogFlow to UE4 for use in the Oculus Lipsync Plugin.
Basically, I have 3 media options for the response:
- 16-bit linear PCM
- MP3
- Ogg Opus
I have a TCP Server and Client connection set up between a Python Script and UE4, so I can send data to and from easily.
I have my sequencing correct so one script waits for the full Byte Array to be sent via the socket etc.
Basically, I want to send the response to each query the user sends to DialogFlow into UE4 via my TCP socket, and be able to play and access that audio within UE4.
I need to somehow stream the response from DialogFlow into UE4 as it gives me the responses.
Is what I'm trying to do even possible? I'm just trying to stream audio into UE4 and I am really struggling to get it working. It's very frustrating, as this is the last piece of the puzzle I need to finish.
Please let me know if you have any advice or help you can offer!
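One common way to let the receiving side know when "the full Byte Array" has arrived is to length-prefix each audio chunk. A minimal Python sketch of that framing is below (the UE4 side would implement the same read-exactly logic in C++; the 4-byte big-endian prefix is an arbitrary choice):

```python
import socket
import struct

def send_framed(sock: socket.socket, payload: bytes) -> None:
    # 4-byte big-endian length prefix, then the raw audio bytes.
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def recv_exact(sock: socket.socket, n: int) -> bytes:
    # TCP is a byte stream, so one recv() may return a partial chunk;
    # loop until exactly n bytes have been read.
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-frame")
        buf += chunk
    return buf

def recv_framed(sock: socket.socket) -> bytes:
    (length,) = struct.unpack(">I", recv_exact(sock, 4))
    return recv_exact(sock, length)
```

With this in place, each DialogFlow response (PCM bytes) can be written with `send_framed` and read back as one complete buffer with `recv_framed`.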

stream audio from browser to WebRTC native C++ application

I managed to run the WebRTC peerconnection example, but it does not run in the browser.
I'm trying to find a way to stream both video and audio from browser to my native program.
Is there any way?
It can be done. WebRTC is designed to work in a peer-to-peer manner between two WebRTC agents (typically a Web Browser). Your native program needs to become the second peer.
If you need to rely on open source components a good starting point is:
OpenSSL for the DTLS key exchange.
libsrtp to encrypt the RTP packets.
ffmpeg to decode the PCM audio from the browser (libvpx if you need to do video).
You'll also need to handle the ICE negotiation, which requires processing STUN messages, and extract the media payloads from the RTP packets. All of these steps come after you've settled on a signalling method to exchange the SDP offer and answer between your app and the browser.
As you've probably realised, starting from scratch is a major task. There are probably some commercial libraries that will do the job and save you a lot of pain.
If that doesn't scare you and you do still want to make an attempt using open source components this example "may" help. The sample is doing the reverse of what you've asked and is sending a video stream to Chrome rather than receiving an audio stream. The useful aspect is the connection negotiation. The sample program is able to get RTP packets flowing which is often the main problem.
The example is also using Windows Media Foundation which is Windows specific. It also has lots of shortcuts particularly with the RTP and STUN packet processing.
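One of the steps mentioned above, extracting the media payloads from RTP packets, can be sketched with a minimal parser for the RTP header (RFC 3550). This is illustrative only: it skips over CSRC entries and header extensions, and does not strip padding.

```python
import struct

def parse_rtp(packet: bytes):
    """Parse an RTP packet (RFC 3550); return (payload_type, seq, ts, ssrc, payload)."""
    if len(packet) < 12:
        raise ValueError("too short for an RTP header")
    b0, b1, seq, ts, ssrc = struct.unpack("!BBHII", packet[:12])
    if b0 >> 6 != 2:
        raise ValueError("not RTP version 2")
    csrc_count = b0 & 0x0F
    has_extension = bool(b0 & 0x10)
    payload_type = b1 & 0x7F          # top bit of b1 is the marker flag
    offset = 12 + 4 * csrc_count      # skip optional CSRC list
    if has_extension:
        # Extension header: 2-byte profile id, 2-byte length in 32-bit words.
        (ext_words,) = struct.unpack("!H", packet[offset + 2:offset + 4])
        offset += 4 + 4 * ext_words
    return payload_type, seq, ts, ssrc, packet[offset:]
```

The payload bytes this returns are what you would then hand to the decoder (e.g. ffmpeg for audio, libvpx for VP8 video), after SRTP decryption.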

How to design a GET HTTP request without knowing in advance the number of arguments?

I am toying with a little arduino project: a wifi scanner.
The device scans all available wifi networks and sends data to a web server. The data are {SSID, RSSI, MAC} for each signal.
In order to reduce power consumption, I want the device to send only one GET request to my server.
Something like : mywebpage.com/incomingData/VERIZON/13/361728/iphone/40/2820240/
(Here I have two networks.)
But I don't know in advance the number of networks!
How would you handle this design situation?
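One standard answer is to stop encoding the values in the URL path and use repeated query-string parameters instead, which handle any number of networks. A Python sketch of building such a URL follows; the parameter name `net` and the `SSID,RSSI,MAC` packing are arbitrary choices, and the server would read the repeated parameter as a list:

```python
from urllib.parse import urlencode, urlparse, parse_qs

def build_scan_url(base: str, networks: list) -> str:
    # One "net" query parameter per network, packed as SSID,RSSI,MAC.
    # urlencode() escapes commas, spaces, etc. inside SSIDs.
    params = [("net", f"{n['ssid']},{n['rssi']},{n['mac']}") for n in networks]
    return base + "?" + urlencode(params)

url = build_scan_url("http://mywebpage.com/incomingData", [
    {"ssid": "VERIZON", "rssi": 13, "mac": "361728"},
    {"ssid": "iphone", "rssi": 40, "mac": "2820240"},
])
# Server side: parse_qs(urlparse(url).query)["net"] yields one
# "SSID,RSSI,MAC" string per scanned network.
```

The same idea works in any server framework; most of them expose repeated parameters as an array out of the box.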

Icecast multiple source same mountpoint/stream

I've been trying to find an answer to this question and am not sure it is possible.
The scenario:
My friend and I want to host a live stock-trading alert broadcast. I have Icecast set up successfully on a Linux server and am able to broadcast my voice using the BUTT encoder/client. This all works fine. But is there any way to get my friend, in a different location, broadcasting on the same mountpoint/stream? I've tried starting BUTT as a second client on the same mountpoint, and it simply won't connect. If we set up a different mountpoint/stream, the end user (with a web player) can only listen to one stream at a time by default.
So is there any way to mix the streams, or share the stream between two sources?
My only thought at this point is to have two web players on the web page, have them hidden and auto start them at the same time when the user gets to the page.
Thanks,
Max
It is not possible; Icecast is not intended for this use case. You might want to use something like Mumble to talk together and stream the Mumble audio to Icecast, instead of having both of you stream to Icecast.

How to send SMS through 'Dongle' using C++

I have a USB dongle connected to my laptop which is used to get the internet connection. No need to say it has a sim card and it is possible to send/receive SMS as well. I want to know how can I get the SMS and send SMS using my own C++ windows program, through this SIM card. Is there a way to access the SIM card and do these? Any libraries? I haven't done any USB programming anyway.
Edit
I just found it is possible with something called "AT Commands" - How to Auto send SMS via Broadband USB dongle?
But the link in the answer is dead. Even though it is AT Command, which lib should I install in order to use it?
AT (Attention) commands can be used to interact with the USB dongle. Each manufacturer has its own AT commands, so you will have to find the set that suits your model (mine was a Huawei E173u). Some of the common ones can be found in the Hayes command set:
Hayes Command Set (Wikipedia)
Introduction to AT commands
You will need to find out which COM port your dongle uses from the Device Manager, then use a serial-port terminal like PuTTY to test whether the commands are supported by your dongle. As the libraries developed for sending SMS messages are mostly for .NET, you may need to use an SMS gateway instead.
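As an illustration, the command sequence for sending one SMS in text mode is small enough to build by hand. The sketch below constructs the byte sequences; actually writing them to the dongle's COM port needs a serial library such as pyserial (shown commented out, since it is not part of the standard library, and the port name is a placeholder). In a real program you would also read and check the modem's response ("OK", "> ", "+CMGS:") after each write.

```python
def sms_command_sequence(number: str, message: str) -> list:
    """Byte sequences to send, in order, to an AT-capable modem to
    transmit one SMS in text mode. Each command ends with CR; the
    message body is terminated with Ctrl+Z (0x1A), which submits it."""
    return [
        b"AT\r",                           # sanity check, expect "OK"
        b"AT+CMGF=1\r",                    # switch to text mode
        f'AT+CMGS="{number}"\r'.encode(),  # expect "> " prompt
        message.encode() + b"\x1a",        # body + Ctrl+Z sends the SMS
    ]

# Sending (hypothetical port name, requires pyserial):
#   import serial
#   with serial.Serial("COM5", 115200, timeout=2) as port:
#       for cmd in sms_command_sequence("+15551234567", "hello"):
#           port.write(cmd)
```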