Two questions about Wowza Streaming Server live data

Does Wowza Streaming Server (used as a live streaming service) care about the direction of the data stream?
Is it possible to run Wowza in both a server-side mode and a client-side mode?
Since I am sending the data stream over LTE, the cost of transmission is very high, so I am wondering whether it is possible to send live stream data only when a request for it is present.
Thank you

It's not 100% clear to me what you mean by "Wowza acting as a client". Wowza can be configured as an edge instance that pulls the stream from an origin Wowza server, for when you want multiple, possibly geographically distributed stream sources.
Or, in the other case, Wowza accepts the stream from encoders, in which case the data also flows towards Wowza. This is what happens when you install, for example, Flash Media Live Encoder or VLC and push your encoded stream "into" Wowza, which then distributes it to all the players.
But obviously Wowza won't act as a video player for you. :-)
Can you clarify your first question? Maybe I can give a better answer then.


How do I stream audio files to my Icecast server running on an EC2 instance?

I am trying to loop audio from my Icecast server 24/7.
I have seen examples where people talk about storing their audio files on the EC2 instance or in an S3 bucket.
Do I also need a source client running on my EC2 instance to be able to stream audio to the server, or is there a way to play static files from Icecast?
Icecast and SHOUTcast servers work by passing a live audio stream from a source on to the listeners. You need something that produces a single realtime audio stream from those source files.
The flow looks something like this:
source files -> playout/decoder -> raw audio -> encoder (codec) -> streaming server -> listeners
Basically, you'll need to do everything you would in a normal radio studio, but automated. You'll fetch the files from your bucket, play them out to a raw audio stream, send that stream to your encoder to be compressed with the codec, and then send it to your streaming servers for distribution.
You can't simply push your audio files as-is to the Icecast server, for a few reasons:
The stream must be realtime. The server doesn't really know or care about the timing of the stream. It takes the data it's given and sends it off to the client. Therefore, if you push data faster than realtime, the server will attempt to deliver it to the client at that faster rate. Some clients will attempt to buffer the fast stream, but most will put backpressure on it, causing the TCP window to close, and the client will eventually fall far enough behind that the server drops the connection.
A consistent format is required. Chances are, your source files have varying sample rates, channel counts, and even codecs. Most clients are unable to handle a change in sample rate or channel count mid-stream. I don't know of any client that supports a codec change mid-stream. (It's theoretically possible with Ogg and Matroska/WebM, but yeah... not worth messing with.)
The stream should be free of ID3 tags and other file format cruft. If you simply PUT your files directly to your Icecast server, the output stream will contain more than just the audio data. At a minimum, you'd want to remove all of that. Depending on your container format, you'll need to deal with timestamps as well.
Solutions
There are a handful of ways to solve this:
Radio automation software. Many folks simply run something like RadioDJ on cloud-based servers. If you already have a radio station that uses automation, this might be a good solution. It can be expensive, though, and not as flexible. You could even go as low as VLC or something similar for playout, but then you wouldn't have music transitions and whatnot.
Custom playout script (recommended). I use a browser engine, such as Chromium, and script my channels with normal JavaScript. From there, I take the output stream and pass it off to FFmpeg to encode and send to the streaming servers. This works really well, as I can do all my work in a language everybody knows, and I have easy access to data on cloud-hosted services. I can use the Web Audio API to mix and blend audio based on what's happening in realtime. As an alternative there is Liquidsoap, but I don't recommend it these days, as its language is difficult to deal with and it is not as flexible as a browser engine.
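As a rough illustration of the constraints above (not the recommended browser-engine setup), here is a minimal sketch that drives FFmpeg from Python to pace a playlist at realtime, normalize the format, strip file cruft, and push it to Icecast. The playlist, hostname, mount point, and source password are placeholders.

```python
# Minimal playout sketch: decode each file, pace it at realtime, force one
# consistent output format, and push it to Icecast over the icecast:// protocol.
import subprocess

PLAYLIST = ["/music/track1.flac", "/music/track2.mp3"]  # e.g. synced from S3

for path in PLAYLIST:
    subprocess.run([
        "ffmpeg",
        "-re",                          # read input at its native rate (realtime)
        "-i", path,
        "-vn", "-map_metadata", "-1",   # drop ID3 tags, cover art, other cruft
        "-ar", "44100", "-ac", "2",     # one consistent sample rate and channel count
        "-c:a", "libmp3lame", "-b:a", "128k",  # one consistent codec
        "-content_type", "audio/mpeg",
        "-f", "mp3",
        "icecast://source:hackme@icecast.example.com:8000/stream",
    ], check=True)
```

Note that this sketch reconnects to Icecast between tracks, which will drop listeners; a real playout system keeps one continuous stream running and mixes tracks into it, which is exactly why a dedicated playout layer is recommended above.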

Web live streaming with WebRTC and sockets (Flask backend)

I want to build a live streaming app.
My thought process:
1. Get the video/audio data from navigator.mediaDevices.getUserMedia(constraints); [client-streamer]
2. Create rooms using sockets (Socket.IO or WebSockets from Flask). [backend]
3. Send the data from step 1 to the room members using sockets.
4. Display the media on the client side.
Is that correct? How should I do it?
How do I broadcast data to specific room members and not to everyone? (Flask)
How do I continuously send data from the streamer -> server -> room members? The stream obtained in step 1 is an object; where is the actual data?
Any other, better ideas would be great! Thanks.
I need to implement the server side myself, without help from libraries that would do the work for me.
Implementing a streaming platform is not trivial. Unfortunately, it is not as simple as emitting the chunks received from the MediaRecorder's ondataavailable handler and forwarding them to users through a WebSocket server; that approach is neither scalable, efficient, nor reliable.
Below are some strategies you can try for different types of scenarios:
P2P: If you want simple peer-to-peer streaming, you can use WebRTC with a simple socket.io server for signaling purposes (a signaling sketch follows this list).
Conference: Here things start to get more complicated. You will need a media server if you want to be somewhat scalable. One approach is to route your stream to the users using an SFU or MCU. This will take care of forwarding/processing media to different peers efficiently.
Broadcast: Here things are also non-trivial. Common WebRTC-based architectures either ingest the WebRTC stream and forward it to an HLS server, which makes your stream chunks available to clients through a CDN, or perform RTP forwarding of the WebRTC stream, convert it to RTMP with something like FFmpeg, and deliver it through YouTube Live or Twitch to leverage their infrastructure.
Be aware that the last 2 items are resource-intensive and will certainly not be cheap to maintain.
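Whichever architecture you pick, the room-scoped fan-out you asked about can be done with Flask-SocketIO rooms. A minimal signaling sketch, assuming clients emit hypothetical "join" and "signal" events and negotiate the actual media over WebRTC themselves:

```python
# Minimal Flask-SocketIO signaling sketch. The event and field names
# ("join", "signal", "room", "payload") are assumptions for illustration.
from flask import Flask
from flask_socketio import SocketIO, join_room, emit

app = Flask(__name__)
socketio = SocketIO(app, cors_allowed_origins="*")

@socketio.on("join")
def on_join(data):
    # room membership is what scopes later broadcasts to these members only
    join_room(data["room"])

@socketio.on("signal")
def on_signal(data):
    # relay an SDP offer/answer or ICE candidate to the other members of
    # this room only, and not back to the sender
    emit("signal", data["payload"], to=data["room"], include_self=False)

if __name__ == "__main__":
    socketio.run(app, port=5000)
```

The key detail is the to= argument on emit: messages go only to sockets that previously joined that room, which answers the "not to everyone" question.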
Below are some open source projects that could help you along the way:
Janus
MediaSoup
AntMedia
Jitsi
Good luck!
Explaining all this is far beyond the scope of a Stack Overflow answer.
Here are a few hints:
You need to use the MediaRecorder API to capture compressed data from your gUM (getUserMedia) stream. MediaRecorder support is inconsistent between makes and models of browser, though.
It kicks a Blob into its ondataavailable handler every so often.
The Blobs are compressed as a WebM data stream.
You can push those Blobs to a server with socket.io, and the server can turn around and push them to whatever clients you want (see the relay sketch after these hints).
Playing the WebM on the clients is tricky. On some makes and models of browsers, you may be able to feed the WebM stream to the Media Source Extensions API using appendBuffer(). But some browsers cannot consume these WebM streams at all.
These WebM streams are useless to a player without all their Blob data in order. You can't just start sending a new client the Blobs of the stream when they sign in; you have to restart the MediaRecorder.
(You may be able to make it work without a MediaRecorder restart if you send the first few kilobytes of the stream to each new client before sending the current Blob. Extracting those bytes is an intricate programming job involving the ebml package to parse the WebM stream and extract the prologue. I have not proven this concept.)
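To make the socket.io relay concrete, here is a minimal Flask-SocketIO sketch of the idea in the last two hints: cache the start of the WebM stream and replay it to late joiners. The event names are made up, and, as noted above, the prologue trick is unproven; the first Blob may not contain the complete initialization segment.

```python
# Sketch of a Blob relay with a cached WebM prologue for late joiners.
# Event names are hypothetical; the prologue approach is unproven (see above).
from flask import Flask
from flask_socketio import SocketIO, join_room, emit

app = Flask(__name__)
socketio = SocketIO(app)

prologue = None  # first MediaRecorder Blob: EBML header / init segment

@socketio.on("publish")
def on_publish(chunk):
    # chunk arrives as the raw bytes of one MediaRecorder Blob
    global prologue
    if prologue is None:
        prologue = chunk
    emit("chunk", chunk, to="viewers")

@socketio.on("watch")
def on_watch():
    join_room("viewers")
    if prologue is not None:
        emit("chunk", prologue)  # a new viewer needs the stream prologue first
```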
Because getting all of this working (originator -> server -> viewer) is such a pain in the xxx neck, you may want to investigate using something like mediasoup instead. It uses WebRTC transport rather than socket.io, and works cross-platform.

IoT - live video streaming from devices

I have a requirement that calls for a live streaming solution. Here it is:
There will be 5000 IoT devices, each capable of streaming live video, and about 1000 users. Each user can own one or more devices. Whenever a user wants to view the live stream of a device they own, they should be able to do so. So if user1 owns device1, only user1 should be able to view the live stream from this device and no one else. The user credentials and device mappings are stored in a database. The devices connect to the server using the MQTT protocol, and the users connect to the server using an HTTPS REST API.
How do I go about implementing the server for this? What protocol should I use?
I have been searching for a solution on the internet. I came across AWS MediaLive, but it seemed limited in that I could only have 100 inputs per channel and 5 channels. Also, the documentation states that the streaming inputs must already be streaming when the channel is started, whereas my requirement is more that the streaming source initiates streaming whenever required.
Does anyone have any idea how to use AWS MediaLive for this task, or whether I should use MediaLive at all?
Peer-to-peer streaming of video from the device to the user's app is also a possibility. Assuming the embedded device runs Linux, is there a viable peer-to-peer solution for this problem, where the device streams the video directly to multiple users on mobile apps? I have not been able to find any such solutions on the internet.
You can use DXS (Data Stream Exchange system); you can also look at this tech talk, which explains how to do it:
https://www.youtube.com/watch?v=DoDzfRU4rEU&list=PLZWI9MjJG-V_Y52VWLPZE1KtUTykyGTpJ&index=2&t=0s
For anyone doing something similar in the future: I did some more research on the internet, and it seems like Amazon Kinesis Video Streams does what is required. I have not implemented anything yet, but hopefully it will work well for these requirements.
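As a starting point, the playback side of Kinesis Video Streams might look like the sketch below, assuming each device pushes into its own KVS stream (for example with the KVS producer SDK) and the HTTPS REST layer checks the user-to-device mapping in the database before returning a URL. Stream and region names are placeholders.

```python
# Hedged sketch: hand an authorized user an HLS playback URL for the KVS
# stream belonging to one of their devices. Names are placeholders.
import boto3

kvs = boto3.client("kinesisvideo", region_name="us-east-1")

def hls_url_for(stream_name: str) -> str:
    # each KVS API is served from a stream-specific data endpoint
    endpoint = kvs.get_data_endpoint(
        StreamName=stream_name,
        APIName="GET_HLS_STREAMING_SESSION_URL",
    )["DataEndpoint"]
    media = boto3.client("kinesis-video-archived-media", endpoint_url=endpoint)
    return media.get_hls_streaming_session_url(
        StreamName=stream_name,
        PlaybackMode="LIVE",        # live playback rather than on-demand
    )["HLSStreamingSessionURL"]

# the REST API would verify (user, device) ownership in the database
# before returning this URL to the client
print(hls_url_for("device1"))
```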

How to make a streaming relay using gstreamer?

I would like to make some sort of streaming server. I would like it to receive RTSP streams over the net from live sources (e.g. a webcam, an IP cam, etc.), then rebroadcast the same stream on my local network using a different URL. I know GStreamer can do it quite well, but I don't know how; I'm quite confused by the way the documentation is written. Can somebody help me?
If you would like to retransmit the video streams using RTSP as well, you can use GStreamer RTSP Server. There are a lot of examples on the internet of how to use it. The best source of examples is the gst-rtsp-server examples directory:
http://cgit.freedesktop.org/gstreamer/gst-rtsp-server/tree/examples
Since you want to retransmit existing RTSP streams, you'll need to use the rtspsrc element to receive the remote streams.
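A minimal Python sketch of such a relay, assuming the remote camera serves H.264; the camera URL and mount path are placeholders:

```python
# Minimal gst-rtsp-server relay: pull a remote RTSP stream with rtspsrc and
# re-serve it on a local mount point. Audio is left out for brevity.
import gi
gi.require_version("Gst", "1.0")
gi.require_version("GstRtspServer", "1.0")
from gi.repository import Gst, GstRtspServer, GLib

Gst.init(None)

server = GstRtspServer.RTSPServer()
factory = GstRtspServer.RTSPMediaFactory()
# pass the H.264 through untouched and repacketize it for our own clients
factory.set_launch(
    "( rtspsrc location=rtsp://camera.local/stream latency=200 "
    "! rtph264depay ! h264parse ! rtph264pay name=pay0 pt=96 )"
)
factory.set_shared(True)  # all clients share one upstream connection

server.get_mount_points().add_factory("/relay", factory)
server.attach(None)
# clients can now play rtsp://<this-host>:8554/relay
GLib.MainLoop().run()
```

set_shared(True) is the important bit for a relay: without it, each client would open its own connection to the source camera.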
I think you are looking for something like this: https://github.com/jayridge/rtsprelay. It configures one RTSP server that serves clients on two URLs: a record URL and a play URL.
This example uses a dynamic form:
https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1454
rtsp://server/path?uri=encoded-URI
You URL-encode the destination URI and add a path under which this camera should be registered. The first time you connect it will take some time; after that, the sessions are re-used.
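For example, building such a relay URL in Python (the hostnames are placeholders):

```python
# percent-encode the camera URI and embed it in the relay's mount path
from urllib.parse import quote

camera = "rtsp://camera.local:554/stream1"
print("rtsp://relay.example.com:8554/cam1?uri=" + quote(camera, safe=""))
```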

Video recorder that can be used with Wowza

Do you guys have any suggestions for free/paid solutions for recording video from a webcam and storing it on the Wowza media server? I tried the Wowza recording example, but I couldn't figure out how to set the server connection as a variable.
Thanks!
Try HDFVR, a paid Flash application, but a very good one.
The Wowza recording example should work for this. When you say you couldn't "figure out how to set the server connection as a variable", is that from the client, to tell Wowza to record, or to play back the already recorded stream? Using the built-in HTTP provider described at http://www.wowza.com/forums/content.php?123 will allow you to make POST requests to the Wowza server to start and stop recording.
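A hedged sketch of driving that HTTP provider from a script; the port, path, and parameter names follow the linked forum post's livestreamrecord provider and may differ by Wowza version and configuration:

```python
# Hypothetical start/stop recording calls against the livestreamrecord
# HTTP provider (port, path and parameters depend on your Wowza setup).
import requests

def set_recording(action: str, stream: str) -> None:
    resp = requests.post(
        "http://wowza.example.com:8086/livestreamrecord",
        params={"app": "live", "streamname": stream, "action": action},
    )
    resp.raise_for_status()

set_recording("startRecording", "myStream")  # begin writing the stream to disk
set_recording("stopRecording", "myStream")
```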