I have some legacy code that needs to be configured on a new server. The server is a Wowza Media Server 3.1. I have installed it and moved all application data.
It is used to record webcam videos from a web page and then play them back.
I have already got recording working on the web page, and an .flv file is created correctly on the server.
But playback doesn't work, because there is no MP4 file.
What I have figured out is that there are two applications in the Wowza applications folder:
AppRecordVideo and AppVod
These folders also reside in the content folder. On the previous server there was an MP4 file for each FLV file, but on the new server only the .flv file is created. Nothing has changed in the web application, so I guess something should run on the server that converts the .flv file to MP4 and places it in the right content folder.
The FLV file is streamable, but I want to stream MP4 instead.
Any idea what is failing?
First of all: I highly recommend updating your server to Wowza 4. Wowza 3.x is rather old by now, and Wowza 4 has a web interface that makes it easier to understand the configuration and workings of your server.
To play video files via a Wowza server, you must place them in the designated content folder. By default this is the subfolder /content/ in your Wowza installation folder, but the exact path is defined in the Application.xml for the specific application. So if you have an application called "AppVod", navigate to /conf/AppVod and read the Application.xml there, specifically the Root/Application/Streams/StorageDir value.
If you want to convert FLV files to MP4, the simplest solution is to use a tool like ffmpeg. With a recent version of ffmpeg you can do something like:
ffmpeg -i myfile.flv -c copy myfile.mp4
This assumes that the video codec in your FLV file is H.264 and the audio codec is AAC. If not, you must do so-called "transcoding", e.g.
ffmpeg -i myfile.flv -c:v libx264 -c:a libfdk_aac -b:v 1000000 -b:a 128000 myfile.mp4
That will give you 1 Mbps video and 128 Kbps audio. Of course there are lots of other ffmpeg options; feel free to Google for them or read the documentation at https://ffmpeg.org/documentation.html, and there are many useful ffmpeg questions here on SO too.
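Since you have one FLV per recording and need a matching MP4 in the content folder, a small shell loop around ffmpeg can automate the conversion. This is only a sketch: the two directories are placeholder paths you would replace with your real recording and content folders, and the stream copy assumes H.264/AAC inside the FLVs, as above.

```shell
#!/bin/sh
# Sketch: convert every .flv in a source dir to a matching .mp4 in a
# destination dir, skipping files that were already converted.
# Both paths below are hypothetical placeholders.
convert_flvs() {
  SRC=$1; DST=$2
  [ -d "$SRC" ] || return 0             # nothing to do if the source dir is missing
  command -v ffmpeg >/dev/null || return 0
  for f in "$SRC"/*.flv; do
    [ -e "$f" ] || continue             # glob matched nothing
    base=$(basename "$f" .flv)
    [ -e "$DST/$base.mp4" ] && continue # already converted
    # Stream copy assumes H.264/AAC in the FLV; add -c:v/-c:a flags to transcode.
    ffmpeg -i "$f" -c copy "$DST/$base.mp4"
  done
}
convert_flvs /path/to/flv-recordings /path/to/wowza/content
```

You could run something like this periodically from cron, so new recordings get an MP4 counterpart shortly after they appear.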
Place the MP4 in the /content folder and then try to play it, e.g. with ffplay or VLC (the second, HLS URL can also be played with HTML5-based players like https://hls-js.netlify.com/demo):
rtmp://your-server-ip/AppVod/myfile.mp4
http://your-server-ip:1935/AppVod/myfile.mp4/playlist.m3u8
I don't know if I can say "I'm sorry for asking", but I spent more than a week looking for a solution without success. I have a Jetson Nano, and with OpenCV I capture and process an image at 4 fps. I need to send this video to a web server so that clients connected to the server can get the video. Everything needs to be written in C++.
Because I need low latency, I did tests with GStreamer and WebRTC, without success. I don't have any web server ready, so I can use any implementation.
Does anyone know where I can find some example implementation with this scheme?
You can use mediasoup to send data to the server, which then sends the stream over RTP to another endpoint like GStreamer or ffmpeg.
Here is a recording project where data is sent from the browser -> server -> gstreamer -> file.
mediasoup is written in C++ and has a wrapper for JavaScript.
I had a similar problem and used this example from the official GStreamer WebRTC repo. It's written in Python for Janus Gateway video rooms, but I think it can easily be rewritten in C++ as you need.
In the OpenCV code, I used v4l2loopback as a virtual output device, which then serves as the input for the GStreamer WebRTC example.
I hope this approach helps you.
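The v4l2loopback approach above can be sketched from the command line. The device number (/dev/video10) is an arbitrary assumption, and videotestsrc stands in for the real OpenCV output (in the real setup, an OpenCV cv::VideoWriter with a GStreamer pipeline ending in v4l2sink would write the processed frames):

```shell
#!/bin/sh
# Sketch: feed frames into a v4l2loopback device that the GStreamer WebRTC
# demo can then open like a normal camera.
feed_loopback() {
  command -v gst-launch-1.0 >/dev/null || return 0  # GStreamer not installed
  [ -e /dev/video10 ] || return 0                   # loopback device not created
  # num-buffers bounds the test pattern; the OpenCV app would stream indefinitely.
  gst-launch-1.0 videotestsrc num-buffers=100 \
    ! video/x-raw,width=640,height=480 \
    ! videoconvert \
    ! v4l2sink device=/dev/video10
}
# Load the kernel module first (device number is arbitrary):
#   sudo modprobe v4l2loopback video_nr=10
feed_loopback
```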
I think there is no need to send it to a web server. In the GStreamer examples [https://github.com/GStreamer/gst-examples], the sendonly example sends a video to a web client using WebRTC. You can modify it to send an OpenCV Mat.
I am using udpxy + Icecast to convert a number of multicast webradio streams into unicast streams. Unfortunately the multicast streams are MPEG-TS, while my clients only support MP3 streams.
I know I can transcode the incoming streams with ffmpeg and publish them directly to my Icecast server using something like:
ffmpeg -i <incoming url> -codec:a libmp3lame -ab 256k -ac 1 -content_type audio/mpeg -f mp3 icecast://source:<pwd>@<icecastserver>/<mountpoint>
However, I have about 150 incoming radio stations and at most 10 simultaneous clients, so I do not want to be transcoding all stations all of the time. Is there a way to configure Icecast to read the stream from the stdout of another executable? Then, when a listener connects to a new webradio mount, Icecast would start the executable and use its output as the stream.
I tried using on-connect to start the ffmpeg command above, but then I have a chicken-and-egg problem: the Icecast mount needs to exist to run on-connect, but then the ffmpeg command cannot create it (or it is too slow to start).
I tried creating a script in the webroot, but Icecast just sends out its content instead of executing it (yes, it was executable).
Any ideas on how to do this?
I am working on a live-streaming prototype. I have been reading a lot about how live streaming works and about many different approaches, but I still can't find a live-streaming stack that suits my needs.
These are the requirements for my prototype:
1) The video/audio recording must come from a web browser using the webcam. Ideally the client shouldn't need to install plugins or do anything complicated (maybe installing the Flash Player plugin is acceptable, but only for recording the video; the viewers should be able to watch the stream without plugins).
2) It can't be peer-to-peer, since I also need to store the entire video on my server (or on Amazon S3, for example) for viewing later.
3) The viewers should also be able to watch the stream from their web browsers (say Chrome and Firefox, for example) without installing anything. We want to use the HTML5 video tag if possible.
4) The prototype should preferably be built without spending money. I have seen that AWS CloudFront and Wowza offer free trials, so we are thinking about using these two services.
5) The prototype only needs to handle 1 live stream at a time and 2 viewers, just that, so there are no restrictions regarding scale.
Any suggestions?
I am especially stuck/confused with the uploading/encoding part of the architecture (I am new to streaming, and all the formats/codecs/protocols/technologies make it really hard to digest).
As of right now, I have come across WebRTC, which apparently allows me to do what I want: record and encode video from the browser using the webcam. But this API only works on HTTPS sites. Are there any alternatives that work on HTTP sites?
The other part I am not completely sure about is the need for an encoding server, for example Wowza Streaming Engine. Why do I need it? Isn't it enough if I use, for example, WebRTC to encode the video and then just send it to the distribution service (AWS CloudFront, for example)? I do understand that the encoding server would allow me to support many different devices, since it creates lots of different encodings and serves many different HTTP protocols, but do I need it for this prototype? I just want to make a single-format (MP4, for example) live stream that can be viewed in 2 web browsers, that's all; I don't need a variety of formats, nor support for different bandwidths or devices.
Based on your requirements, WebRTC is a good way to go.

"API only works with HTTPS sites. Are there any alternatives that work with HTTP sites?"

No. Currently Firefox is the only browser that allows WebRTC over HTTP, but eventually it will need HTTPS too.
For this prototype you should go with Wowza WebRTC. With Wowza, all the streams are delivered through Wowza only, so it becomes routed WebRTC.
1) Install Wowza - https://www.wowza.com/docs/how-to-install-and-configure-wowza-streaming-engine
2) Enable WebRTC - https://www.wowza.com/docs/how-to-use-webrtc-with-wowza-streaming-engine
3) Download and configure StreamLock, or a self-signed JKS file - https://www.wowza.com/docs/how-to-request-an-ssl-certificate-from-a-certificate-authority
4) Download the sample WebRTC pages - https://www.wowza.com/_private/webrtc/
5) Publish a stream using the Publish HTML page and play it through the Play HTML page (supported in Chrome, Firefox & Opera browsers)
For MP4 files with WebRTC: you need to enable the transcoder with H.264 & AAC. You also need to enable the option to record all incoming streams in the properties of the application you are creating for WebRTC (not the DVR). Using the file writer module, save all the recorded files to a custom location. Then, with a custom script (Bash, Python), move all the transcoded files to the S3 bucket and deliver them through CloudFront.
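The last step above (moving recordings to S3) could look roughly like the sketch below. The recording path, the bucket name, and the use of the AWS CLI are all assumptions to be replaced with your own setup:

```shell
#!/bin/sh
# Sketch: push finished MP4 recordings to S3 so CloudFront can serve them.
# The directory and bucket below are hypothetical placeholders.
upload_recordings() {
  RECDIR=$1; BUCKET=$2
  [ -d "$RECDIR" ] || return 0           # nothing recorded yet
  command -v aws >/dev/null || return 0  # AWS CLI not installed
  for f in "$RECDIR"/*.mp4; do
    [ -e "$f" ] || continue              # glob matched nothing
    # Remove the local copy only if the upload succeeded.
    aws s3 cp "$f" "s3://$BUCKET/recordings/$(basename "$f")" && rm "$f"
  done
}
upload_recordings /path/to/wowza/recordings my-example-bucket
```

Run from cron (or triggered after each recording finishes), this keeps the local disk clear while CloudFront serves the files from the bucket.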
I am an absolute beginner with this. I installed Wowza and wanted to play VOD. The test works; the sample.mp4 plays. However, when I try what looks like the same URL shown in the test player, which in my case is:
http://192.168.5.76:1935/voddec8/mp4:sample.mp4/manifest.f4m
I get the contents of the manifest. When I try to remove manifest.f4m, so that the URL is just:
http://192.168.5.76:1935/voddec8/mp4:sample.mp4
I just get info about the Wowza server:
Wowza Streaming Engine 4 Trial Edition (Expires: Jun 07, 2016) 4.3.0 build16025
What am I doing wrong?
EDIT: Could it be that I need to use a player, not a browser?
Wowza does not support progressive download (meaning plain HTTP download); you can do that with any web server. Then you could open something like this in the browser: http://192.168.5.76:1935/voddec8/sample.mp4
If you want to play an HLS stream (playlist.m3u8) or HDS (manifest.f4m), you need to call it through a player that supports those protocols, like JW Player, Flowplayer, ... or an application like VLC. Mobile devices (iPhones, and modern Android) should open HLS streams directly, though.
You can also open the stream using RTMP, but you need a (Flash-based) player, using a URL like this: rtmp://192.168.5.76:1935/voddec8/mp4:sample.mp4
You need to check which protocol is best suited for you.
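A quick way to compare those protocols is to try each URL with a command-line player before wiring up a web player. A sketch, where the IP and application name are just the examples from this thread and ffplay is the player that ships with FFmpeg:

```shell
#!/bin/sh
# Sketch: probe the HLS and RTMP URLs with ffplay (from the FFmpeg package).
SERVER=192.168.5.76
check_playback() {
  command -v ffplay >/dev/null || return 0  # ffplay not installed
  [ -n "$DISPLAY" ] || return 0             # ffplay needs a display to open a window
  # HLS - also what HTML5 players like hls.js consume:
  ffplay "http://$SERVER:1935/voddec8/mp4:sample.mp4/playlist.m3u8"
  # RTMP - needs a Flash-based player in the browser, but ffplay handles it directly:
  ffplay "rtmp://$SERVER:1935/voddec8/mp4:sample.mp4"
}
check_playback
```

If ffplay can open a URL, a browser player supporting that protocol should manage it too, which narrows the problem down to the player rather than the server.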
If you want to deliver the VOD with the progressive method, you can try Nimble Streamer (wmspanel.com), which gives you additional security for the progressive delivery method. If this is for web delivery, I prefer the HLS method (.m3u8), which is also the best way to save your server bandwidth.
If you have installed Wowza on a local system, the URL will contain your IP address. You can configure Wowza to listen on any port (it should be an open port).
Ex: rtmp://10.136.15.1:1935/vod/mp4:bunny.mp4
Instead of running it on 1935, you can switch it to listen on port 80.
You can try running your VOD video in VLC player.
Once this is running, you can embed it in your application using an open-source media player (Strobe Media Playback): http://matbury.com/strobe/
I want to be able to create an application that can read and publish an RTMP stream.
Using OpenCV, I could read RTP thanks to its FFmpeg backend.
Stream video from ffmpeg and capture with OpenCV
C++ RTMP Server is another possibility, but it is an RTMP server, so it mainly handles requests and sends files. Although it is open source, I am unsure how to build it or integrate it into a Visual Studio application in such a way as to make its function calls available to my project.
Other sources indicate that OpenCV's RTSP support isn't great:
http://workingwithcomputervision.blogspot.co.nz/2012/06/issues-with-opencv-and-rtsp.html
How can you run a streaming server, such as the C++ RTMP Server, and get the raw data out? OpenCV can encode and decode image data for streaming, but how can you link the two?
Could a C++ application pipe a stream together? How could I interface with that stream to send it more images? And what about receiving images?
Regards,
crtmpserver and librtmp work well.
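One common way to link the two without embedding an RTMP library at all is to have the OpenCV application write raw frames to stdout and pipe them into ffmpeg, which encodes and publishes the RTMP stream. A sketch of that pipe: ./my_opencv_app is a hypothetical program emitting raw BGR24 frames, and the frame size, rate, and RTMP URL are placeholders.

```shell
#!/bin/sh
# Sketch: pipe raw frames from an OpenCV app into ffmpeg, which pushes RTMP.
# ./my_opencv_app, the frame geometry, and the URL are all assumptions.
publish_rtmp() {
  command -v ffmpeg >/dev/null || return 0  # ffmpeg not installed
  [ -x ./my_opencv_app ] || return 0        # hypothetical frame source missing
  ./my_opencv_app | ffmpeg \
    -f rawvideo -pix_fmt bgr24 -s 640x480 -r 30 -i - \
    -c:v libx264 -preset veryfast \
    -f flv rtmp://localhost/live/stream
  # Receiving works the other way round: opening "rtmp://..." with
  # cv::VideoCapture uses OpenCV's FFmpeg backend to decode the stream.
}
publish_rtmp
```

This keeps the C++ side trivial (just write cv::Mat::data to stdout per frame) and leaves all protocol handling to ffmpeg.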