I am using udpxy + icecast to convert a number of multicast webradio streams into unicast streams. Unfortunately the multicast streams are mpegts while my clients only support mp3 streams.
I know I can transcode the incoming streams using ffmpeg and publish them directly on my icecast server using something like:
ffmpeg -i <incoming url> -codec:a libmp3lame -b:a 256k -ac 1 -content_type audio/mpeg -f mp3 icecast://source:<pwd>@<icecastserver>/<mountpoint>
However, I have about 150 incoming radio stations and at most 10 simultaneous clients, so I do not want to be transcoding all stations all of the time. Is there a way to configure Icecast to read the stream from the stdout of another executable? So when a listener connects to a new webradio mount, Icecast would start the executable and use its output as the stream.
I tried using on-connect to start the ffmpeg command above, but then I have a chicken-and-egg problem: the Icecast mount needs to exist for on-connect to run, but then the ffmpeg command cannot create it (or it is too slow to start).
I tried creating a script in the webroot, but Icecast just sends out the file content instead of executing it (yes, it was executable).
Any ideas on how to do this?
I am trying to loop audio from my Icecast server 24/7.
I have seen examples where people talk about storing their audio files on the EC2 instance or in an S3 bucket.
Do I also need a source client running on my EC2 Instance to be able to stream audio to the server? Or is there a way to play static files from Icecast?
Icecast and SHOUTcast servers work by passing a live audio stream from a source on to the users. You need something to produce a single audio stream in realtime from those source files.
The flow looks something like this: source files → playout (decode and mix) → raw audio stream → encoder → streaming server → listeners.
Basically, you'll need to do everything you would in a normal radio studio, but automated. You'll stream the files from your bucket, play them out to a raw audio stream, send that stream to your encoder to be compressed with the codec, and then send the compressed stream to your streaming servers for distribution.
You can't simply push your audio files as-is to the Icecast server, for a few reasons:
Stream must be realtime. The server doesn't really know or care about the timing of the stream. It takes the data it's given and sends it off to the client. Therefore, if you push data faster than realtime, the server will attempt to deliver it to the client at this faster rate. Some clients will attempt to buffer this fast stream, but most will put backpressure on the stream, causing the TCP window to close, and the client will eventually get far enough behind that the server drops the connection.
Consistent format is required. Chances are, your source files have varying sample rates, channel counts, and even codecs. Most clients cannot handle a change in sample rate or channel count mid-stream, and I don't know of any client that supports a codec change mid-stream. (It's theoretically possible with Ogg and Matroska/WebM, but yeah... not worth messing with.)
Stream should be free of ID3 tags and other file-format cruft. If you simply PUT your files directly to your Icecast server, the output stream will contain more than just the audio data. At a minimum, you'd want to remove all of that. Depending on your container format, you'll need to deal with timestamps as well.
Solutions
There are a handful of ways to solve this:
Radio automation software. Many folks simply run something like RadioDJ on cloud-based servers. If you already have a radio station that uses automation, this might be a good solution. It can be expensive, though, and not as flexible. You could even go as low-tech as VLC for playout, but then you wouldn't have music transitions and whatnot.
Custom playout script (recommended). I use a browser engine, such as Chromium, and script my channels with normal JavaScript. From there, I take the output stream and pass it off to FFmpeg to encode and send to the streaming servers. This works really well, as I can do all my work in a language everybody knows, and I have easy access to data on cloud-hosted services. I can use the Web Audio API to mix and blend audio based on what's happening in realtime. As an alternative, there is Liquidsoap, but I do not recommend it these days, as its language is difficult to deal with and it is not as flexible as a browser engine.
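The encode-and-send leg of that pipeline can be sketched as a single ffmpeg invocation reading raw PCM from the playout process. A minimal sketch, assuming 16-bit 44.1 kHz stereo PCM on stdin; the host, password and mountpoint are placeholders, not real values:

```shell
#!/bin/sh
# Sketch: encode raw PCM from a playout process and push it to Icecast.
# HOST, PASSWORD and MOUNT are placeholders for your own setup.
HOST="icecast.example.com"
PASSWORD="hackme"
MOUNT="mychannel"

# Build the ffmpeg command: read raw 16-bit stereo PCM on stdin,
# encode to MP3, and send to the Icecast mountpoint as a source client.
encode_cmd() {
    echo "ffmpeg -f s16le -ar 44100 -ac 2 -i -" \
         "-codec:a libmp3lame -b:a 128k" \
         "-content_type audio/mpeg -f mp3" \
         "icecast://source:${PASSWORD}@${HOST}:8000/${MOUNT}"
}

# Usage: pipe your playout process's raw audio into the command, e.g.
#   playout-process | $(encode_cmd)
encode_cmd
```

Because the playout process produces one continuous realtime stream at a fixed sample rate and channel count, this also satisfies the three constraints listed above.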
I have some legacy code that needs to be configured on a new server. The server is a Wowza Media Server 3.1. I have installed it and moved all application data.
It is used to record web camera videos from the web and then play them back.
I have already got it working to record video on the webpage, and an .flv file is created correctly on the server.
But the playback doesn't work because there is no mp4 file.
What I have figured out is that there are two applications in the Wowza applications folder: AppRecordVideo and AppVod.
These folders also reside in the content folder. On the previous server there was an MP4 file for each FLV file, but on the new server only the .flv file is created. Nothing has changed in the web application, so I guess there is something that should run on the server to convert the .flv file to MP4 and place it in the right content folder.
The FLV file is streamable, but I want to stream MP4 instead.
Any idea on what is failing?
First of all, I highly recommend updating your server to Wowza 4. Wowza 3.x is rather old by now, and Wowza 4 has a web interface that makes it easier to understand the configuration and workings of your server.
To play video files via a Wowza server, you must place them in the designated content folder. By default this is the subfolder /content/ in your Wowza installation folder, but the exact path is defined in the Application.xml for the specific application. So if you have an application called "AppVod", navigate to /conf/AppVod and read the Application.xml there, specifically the Root/Application/Streams/StorageDir value.
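For reference, the relevant fragment of Application.xml looks roughly like this (surrounding elements vary by Wowza version; the default StorageDir points at the installation's /content folder):

```
<Root>
  <Application>
    <Streams>
      <StreamType>default</StreamType>
      <StorageDir>${com.wowza.wms.context.VHostConfigHome}/content</StorageDir>
    </Streams>
  </Application>
</Root>
```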
If you want to convert FLV files to MP4, the simplest solution is to use a tool like ffmpeg. With the latest version of ffmpeg you can do something like
ffmpeg -i myfile.flv -c copy myfile.mp4
This assumes that the video in your FLV file is H.264 and the audio is AAC, so the streams can simply be copied into the new container. If not, you must do so-called "transcoding", e.g.
ffmpeg -i myfile.flv -c:v libx264 -c:a libfdk_aac -b:v 1000000 -b:a 128000 myfile.mp4
That will give you 1 Mbps video and 128 kbps audio. Of course there are lots of other ffmpeg options; feel free to Google for them or read the documentation at https://ffmpeg.org/documentation.html, and I bet there are many useful ffmpeg questions here on SO too.
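If recordings keep arriving, the remux command can be wrapped in a loop that converts whatever FLV files have not been converted yet. A minimal sketch; the content path is an assumption, so adjust it to your StorageDir, and note that the `-c copy` remux only works when the FLV already contains H.264/AAC:

```shell
#!/bin/sh
# Convert every FLV in the Wowza content folder to MP4 by remuxing.
# CONTENT_DIR is an assumption; point it at your actual StorageDir.
CONTENT_DIR="/usr/local/WowzaMediaServer/content"

# Map an input .flv name to its .mp4 output name.
out_name() {
    echo "${1%.flv}.mp4"
}

for f in "$CONTENT_DIR"/*.flv; do
    [ -e "$f" ] || continue          # no FLV files present at all
    target="$(out_name "$f")"
    [ -e "$target" ] && continue     # already converted, skip
    ffmpeg -i "$f" -c copy "$target"
done
```

A script like this could be run from cron, or triggered after each recording finishes.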
Place the MP4 in the /content folder and then try to play it, e.g. with ffplay or VLC (the HLS URL can also be played with HTML5-based players like https://hls-js.netlify.com/demo):
rtmp://your-server-ip/AppVod/myfile.mp4
http://your-server-ip:1935/AppVod/myfile.mp4/playlist.m3u8
I record some calls on my PBX and save them as .wav files in /tmp/ on the PBX server. I would then like to transcode them to mp3 and email them to various recipients as attachments.
My concern is that transcoding from WAV to MP3 can be resource-intensive as the number of users grows, so I would like to send the WAV file along with its metadata (caller ID, email addresses of recipients, time and date recorded) to another server dedicated to transcoding to MP3 and emailing the resulting files. This leaves the PBX server handling only calls, and it also doesn't tie up the call while waiting for the conversion to finish.
I am not sure how to proceed to transmit the metadata and the files to the transcoding server.
I thought of feeding the WAV file and the metadata to a PHP script running on the transcoding server via cURL, but would that be the most efficient way?
I also thought about transferring the WAV file over a shared NFS mount with unique directory names, saving the metadata in a text file alongside it, and having a cron job process whatever jobs are waiting there every 5 minutes. Extracting the metadata from the text file seems a bit convoluted and not very elegant either.
I am quite interested to read how more seasoned coders would approach and solve this problem.
Cheers!
Instead of pushing the file from the Asterisk server, I would rather try pulling it from the transcoding machine. At the end of each transcoding operation, check whether there are any more files in the source directory and pull the oldest one found, or sleep for a few seconds if there is nothing to do and try again. A shell script should be good enough. You can throttle the load on your encoding processor, have one or more encoding processes running simultaneously, etc. NFS, FTP or scp would all be about as good.
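The pull-based worker described above can be sketched in a few lines of shell. The inbox path is an assumption (any shared directory, e.g. an NFS mount, where the PBX drops finished recordings), and the metadata/email step is left as a comment since its format is up to you:

```shell
#!/bin/sh
# Minimal sketch of a pull-based transcoding worker.
# INBOX is an assumption: the directory where the PBX drops .wav files.
INBOX="${INBOX:-/var/spool/recordings}"

# Print the oldest .wav file in the inbox, or nothing if it is empty.
next_job() {
    ls -tr "$INBOX"/*.wav 2>/dev/null | head -n 1
}

# Transcode one recording to MP3 and remove the original on success.
process_one() {
    wav="$(next_job)"
    [ -n "$wav" ] || return 1        # nothing to do right now
    ffmpeg -i "$wav" "${wav%.wav}.mp3" && rm -- "$wav"
    # ...read the accompanying metadata file and send the email here...
}

# Worker loop (run this as the long-lived process on the transcoding box):
#   while :; do process_one || sleep 5; done
```

Picking the oldest file first keeps the queue roughly FIFO, and running two copies of the loop gives you two parallel encoders if the box can handle it.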
I've developed an app which sends RTP packets to a local IP client, so the client has to listen on the specified port (rtp://@:<portnumber> in VLC) to play the streamed data. Right now I'm going to develop the code needed to create the SDP file required to start streaming.
My doubt is: how do I send this file to the client? At the beginning of the RTP stream?
I'm really a n00b at this point. Any help would be useful.
Thanks
VLC specifically supports the RTSP, HTTP and SAP protocols for establishing the session and communication, and of course the local file system (file://).
So basically you can invoke VLC in some manner like this (I cannot test it right now, but it should be close):
vlc file://path/to/sdp-file
or
vlc rtsp://server-path:port/sdpfile.sdp
and so on
Aside from storing the SDP file on the local file system, HTTP would perhaps be easiest, if you already have an HTTP server up and running on your server machine.
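For reference, a minimal SDP for an MPEG-audio RTP session looks roughly like this (the addresses and port are placeholders for your actual sender parameters; payload type 14 is the static RTP payload type for MPEG audio per RFC 3551):

```
v=0
o=- 0 0 IN IP4 192.168.1.10
s=RTP audio stream
c=IN IP4 192.168.1.10
t=0 0
m=audio 5004 RTP/AVP 14
```

Saved as e.g. stream.sdp and served over HTTP, this is the file VLC would fetch and use to start listening on the right port.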
I've compiled the live555 source code with VS, and it works just fine if I try to stream a file locally,
e.g.
Command Line:
live555.exe myfile.mp3
VLC Connection String
rtsp://169.254.1.231:8554/myfile.mp3
but if I try to stream it over the internet, VLC communicates with live555, but live555 won't send any data to it.
Command Line
live555.exe myfile.mp3
VLC Connection String
rtsp://80.223.43.123:8554/myfile.mp3
I've already forwarded port 8554 (both TCP and UDP) and tried disabling my firewall, but that doesn't solve it.
Why is that?
To troubleshoot:
Are you streaming RTP over RTSP? Have you checked the "Use RTP over RTSP (TCP)" option in VLC? You can find it in the VLC preferences under Input/Codecs->Demuxers->RTP/RTSP. Try it and see if it solves the problem, in which case it could be that UDP is being blocked.
You speak of forwarding. Do you mean port forwarding from one machine to the RTSP server? If so: if you are not doing RTP over RTSP, then you also need to forward the ports for the media, which are not the same as the RTSP port (554 or 8554). These ports are exchanged during the RTSP SETUP. If you do RTP over RTSP, the media is interleaved over 554 or 8554 and you don't have to worry about this.
Also, another good debugging tool is the live555 openRTSP application. You can run it from the command line and specify "-t" for RTP over RTSP, which is basically what the VLC option does. You can specify "-T" for HTTP tunneling, etc., and it allows you to write the captured media packets to file.