I use JWPlayer 6.7 as a client to show an .mp4 video. The video is hosted in an S3 bucket on Amazon and is only accessible through a private RTMP CloudFront distribution. This works fine for devices that support Flash (which can use RTMP), but it does not work for iOS devices, which can only use HTML5 video (and HTML5 video, as I have learned, does not support RTMP).
I use the code listed below. The fallback (the second file item in the sources list) needs to be HTTP instead of RTMP because of the HTML5 player. The distribution I use in the example below is the same in both cases, but I guess it cannot handle the HTTP call because it is an RTMP distribution (right?).
So the question is: how do I set up the Amazon CloudFront distributions to get this working? I would prefer to use the same mp4 file in the S3 bucket, and for the file to be streamed in the HTML5 player instead of downloaded (is that possible?). The video needs to be private (served through a private distribution and requiring a key) in both cases (RTMP and HTTP).
Many thanks!
jwplayer('video').setup({
  playlist: [{
    image: '//d12q7hepqvd422.cloudfront.net/image.png',
    sources: [
      {file: 'rtmp://s3e5mnr1tue3qm.cloudfront.net/cfx/st/name&Key-Pair-Id=APKAIAS7DDQFOAHAHOTQ'},
      {file: 'http://s3e5mnr1tue3qm.cloudfront.net/cfx/st/name&Key-Pair-Id=APKAIAS7DDQFOAHAHOTQ'}
    ]
  }],
  primary: 'flash',
  flashplayer: '//d12q7hepqvd422.cloudfront.net/global/js/jwplayer6.7.4071/jwplayer.flash.swf?v=2',
  html5player: '//d12q7hepqvd422.cloudfront.net/global/js/jwplayer6.7.4071/jwplayer.html5.js?v=2',
  width: '940',
  height: '403'
});
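A common way to resolve this is to create a second, web (HTTP) CloudFront distribution in front of the same S3 bucket and use its domain for the HTML5 fallback, with its own signed URL. A minimal sketch of what the sources list would then look like (the dxxxxxxxx.cloudfront.net domain and the Policy/Signature/Key-Pair-Id query parameters written as '...' are placeholders, not real values):

```javascript
// Sketch: two private CloudFront distributions over the same S3 bucket.
// The RTMP distribution serves the Flash source; a separate *web*
// distribution (placeholder domain below) serves the same mp4 over
// plain HTTP for the HTML5 fallback. Each distribution needs its own
// signed URL.
var sources = [
  // Flash path: the existing private RTMP distribution
  { file: 'rtmp://s3e5mnr1tue3qm.cloudfront.net/cfx/st/name?Policy=...&Signature=...&Key-Pair-Id=...' },
  // HTML5 fallback: a private web distribution; the browser fetches
  // the same mp4 progressively instead of via RTMP
  { file: 'http://dxxxxxxxx.cloudfront.net/name.mp4?Policy=...&Signature=...&Key-Pair-Id=...' }
];
```

Progressive download over a web distribution is not true streaming, but with a properly prepared mp4 (moov atom at the front) the HTML5 player can start playback before the download completes, and signed URLs keep the content private on both paths.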
I am using AWS MediaLive and MediaPackage to deliver an HLS livestream.
However, when the stream ends there is always one minute of content still available in the .m3u8 playlist.
The setting "Startover window (sec.): 0" does not seem to solve this.
Deleting the playlist and creating a new .m3u8 playlist would be very inconvenient because all players would have to be updated.
Does anyone have any advice?
Cheers, Richy
Thanks for your post. If I understand correctly, you are referring to the MediaPackage endpoint, which serves up a manifest with the last known segments (60 seconds' worth of segments by default).
There are several ways to alter or stop this behavior. I suggest testing some of these methods to see which you prefer:
[a] Delete the public-facing MediaPackage endpoint shortly (perhaps 10s) after your event ends. All subsequent requests to that endpoint will return an error. Segments already retrieved and cached by the player will not be affected, but no new data will be served. Note: you may also maintain a private endpoint on the same Channel to allow for viewing + harvesting of the streamed content if you wish.
[b] Use an AWS CloudFront CDN Distribution with a short Time to Live (TTL) in front of your MediaPackage Channel (which acts as the origin) to deliver content segments to your viewers. When the event ends, you can immediately disable or delete this CDN Distribution, and all requests for content segments will return an error. Segments already retrieved and cached by the player will not be affected, but no new data will be served from this distribution.
[c] Encrypt the content using MediaPackage encryption, then disable the keys at the end of the event. The same approach applies to CDN authorization headers, which you can require for event playback and then delete after the event completes.
[d] Use DNS redirection to your MediaPackage endpoint. When the event ends, remove the DNS redirector so that any calls to the old domain will fail.
I think one or a combination of these methods will work for you. Good Luck!
I want to stream the microphone audio from the web browser to AWS S3.
Got it working
this.recorder = new window.MediaRecorder(...);
this.recorder.addEventListener('dataavailable', (e) => {
  this.chunks.push(e.data);
});
and then, when the user clicks stop, upload the chunks (new Blob(this.chunks, { type: 'audio/wav' })) as multiple parts to AWS S3.
But the problem is that if the recording is 2–3 hours long, the upload might take exceptionally long, and the user might close the browser before the recording finishes uploading.
Is there a way we can stream the web audio directly to S3 while it's going on?
Things I tried but couldn't get a working example of:
Kinesis Video Streams: it looks like it's only for real-time streaming between multiple clients, and I would have to write my own client to save the stream to S3.
I thought about using Kinesis Data Firehose, but I couldn't find any data-producer client for the browser.
I even tried to find resources on AWS Lex or AWS IVS, but I think they are over-engineered for my use case.
Any help will be appreciated.
You can set the timeslice parameter when calling start() on the MediaRecorder. The MediaRecorder will then emit chunks which roughly match the length of the timeslice parameter.
You could upload those chunks using S3's multipart upload feature as you already mentioned.
Please note that you need a library like extendable-media-recorder if you want to record a WAV file since no browser supports that out of the box.
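To make the timeslice-plus-multipart idea concrete, here is a minimal sketch. S3's multipart upload requires every part except the last to be at least 5 MB, so dataavailable chunks are buffered until they cross that threshold; uploadPart is a hypothetical wrapper around S3's UploadPart call, not a real library function:

```javascript
// S3 requires every multipart part except the last to be >= 5 MB.
const MIN_PART_SIZE = 5 * 1024 * 1024;

// Buffers MediaRecorder chunks and hands off a part to `uploadPart`
// each time the buffered size crosses `minPartSize`.
function makePartBuffer(minPartSize, uploadPart) {
  let chunks = [];
  let buffered = 0;
  let partNumber = 1;
  return {
    push(chunk) {
      chunks.push(chunk);
      buffered += chunk.size ?? chunk.length; // Blobs have .size
      if (buffered >= minPartSize) {
        uploadPart(partNumber++, chunks);
        chunks = [];
        buffered = 0;
      }
    },
    flush() { // the final part is allowed to be smaller than 5 MB
      if (buffered > 0) {
        uploadPart(partNumber++, chunks);
        chunks = [];
        buffered = 0;
      }
    }
  };
}

// In the browser (sketch):
//   const parts = makePartBuffer(MIN_PART_SIZE, uploadPartToS3);
//   recorder.addEventListener('dataavailable', (e) => parts.push(e.data));
//   recorder.start(1000); // emit a chunk roughly every 1000 ms
//   // on stop: recorder.stop(); parts.flush(); then CompleteMultipartUpload
```

This way the upload keeps pace with the recording, and only the last short part remains to be sent when the user clicks stop.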
I have been trying to read the AWS Lambda@Edge documentation, but I still cannot figure out whether the following is possible.
Assume I have an object (image.jpg, with a size of 32922 bytes) and I have set up AWS S3 as a static website. So I can retrieve:
$ GET http://example.com/image.jpg
I would like to be able to also expose:
$ GET http://example.com/image
Where the response body would be a multipart/related file (for example). Something like this :
--myboundary
Content-Type: image/jpeg;
Content-Length: 32922
MIME-Version: 1.0
<actual binary jpeg data from 'image.jpg'>
--myboundary
Is this something supported out of the box by the AWS Lambda@Edge API, or should I use another solution to create such a response? In particular, it seems that the response can only deal with text or base64 (I would need binary in my case).
I was finally able to find complete documentation. I eventually stumbled upon:
API Gateway - POST multipart/form-data
which refers to:
Enabling binary support using the API Gateway console
The above documentation specifies the steps to handle binary data. Pay attention: you need to base64-encode the response from Lambda to pass it to API Gateway.
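As a hedged sketch of what that looks like in a Node.js Lambda behind API Gateway (with binary support enabled): the multipart/related framing mirrors the example in the question, and the three jpegBytes are a stand-in for the real image.jpg contents, which would come from S3 in a real deployment:

```javascript
// Builds a multipart/related body around raw JPEG bytes.
function buildMultipartRelated(jpegBytes, boundary) {
  const head = Buffer.from(
    '--' + boundary + '\r\n' +
    'Content-Type: image/jpeg\r\n' +
    'Content-Length: ' + jpegBytes.length + '\r\n' +
    'MIME-Version: 1.0\r\n\r\n'
  );
  const tail = Buffer.from('\r\n--' + boundary + '--\r\n');
  return Buffer.concat([head, jpegBytes, tail]);
}

// In a real deployment this function would be exported as the Lambda
// handler and jpegBytes would be fetched from S3.
const handler = async () => {
  const boundary = 'myboundary';
  const jpegBytes = Buffer.from([0xff, 0xd8, 0xff]); // stand-in bytes
  const body = buildMultipartRelated(jpegBytes, boundary);
  return {
    statusCode: 200,
    headers: { 'Content-Type': 'multipart/related; boundary=' + boundary },
    isBase64Encoded: true, // tells API Gateway to decode back to binary
    body: body.toString('base64')
  };
};
```

Lambda itself only returns text here; the base64 flag is what lets API Gateway reconstruct the binary body before sending it to the client.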
I'm using rtp_forward from the videoroom plugin in Janus-Gateway to stream WebRTC.
My target pipeline looks like this:
WebRTC --> Janus-Gateway --> (RTP_Forward) MediaLive RTP_Push Input
I've achieved this:
WebRTC --> Janus-Gateway --> (RTP-Forward) Janus-Gateway [Streaming Plugin]
I've tried multiple rtp_forward requests, like:
register = {"request": "rtp_forward", "publisher_id": 8097546391494614, "room": 1234, "video_port": 5000, "video_ptype": 100, "host": "medialive_rtp_input", "secret": "adminpwd"}
But MediaLive just doesn't receive any stream. Is there anything I'm missing?
I'm not familiar with AWS MediaLive: initially I thought that, since most media servers like this expect RTMP and not RTP, that was the cause of the issue, but it looks like it does indeed support a plain RTP input mode. At this point this is very likely a codec issue: probably MediaLive doesn't support the codecs your browser is sending (opus and vp8?). Looking at the supported codecs, this seems to be the issue: https://docs.aws.amazon.com/medialive/latest/ug/inputs-supported-containers-and-codecs.html
You can probably get video working if you use H.264 in the browser, but audio is always Opus and definitely not AAC, so you'll need an intermediate node to do transcoding.
Since you're using RTP push: are you pushing the stream to the correct RTP endpoint provided by AWS? If so, check the health-check alerts; they will show whether MediaLive received the stream but failed to read it or found it corrupted. You'll see an error in whichever pipeline you're pushing the stream to. If you don't see anything, that suggests a network problem; try RTMP, which runs over TCP, and you should at least see something in a packet capture.
https://docs.aws.amazon.com/medialive/latest/ug/monitoring-console.html
I noticed that uploading small files to an S3 bucket is very slow. For a file with a size of 100KB, it takes 200ms to upload. Both the bucket and our app are in Oregon; the app is hosted on EC2.
I googled it and found some blogs; e.g. http://improve.dk/pushing-the-limits-of-amazon-s3-upload-performance/
The post mentions that plain HTTP can be much faster than HTTPS.
We're using boto 2.45; I'm wondering whether it uses HTTPS or HTTP by default. Is there any parameter to configure this behavior in boto?
Thanks in advance!
The boto3 client includes a use_ssl parameter:
use_ssl (boolean) -- Whether or not to use SSL. By default, SSL is used. Note that not all services support non-ssl connections.
Looks like it's time for you to move to boto3!
I tried boto3, which has a nice use_ssl parameter in its client constructor. However, it turned out that boto3 is significantly slower than boto2... there are actually already many posts online about this issue.
Finally, I found that boto2 also has a similar parameter, is_secure:
self.s3Conn = S3Connection(config.AWS_ACCESS_KEY_ID, config.AWS_SECRET_KEY, host=config.S3_ENDPOINT, is_secure=False)
Setting is_secure to False saves us about 20ms. Not bad.