I have an application that streams video (like Netflix or YouTube).
I am trying to host it on the AWS platform. I have found two different options:
the first is to store the video files in S3;
the second is to store the video files in AWS Elemental MediaStore.
On my existing platform, I have a problem with end users downloading videos through IDM (Internet Download Manager).
So, I have to prevent downloading of the video via IDM.
How can I do this on the AWS platform? Which AWS service will suit my case of preventing downloads?
Please take note of the data-out charges when you use AWS as the primary means of serving your video streams. Personally, I found it prohibitively expensive to serve video through AWS.
Netflix, for example, uses S3 as part of the main storage for its video streams.
As for which service you can use to hide the direct link / download link: currently there is no service provided natively by AWS for that purpose.
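The closest you can get with native building blocks is time-limited access, for example an S3 presigned URL (or a CloudFront signed URL). Here is a minimal sketch with boto3, with placeholder bucket and key names; be aware this only makes saved links expire, it does not stop a download manager while the link is still valid:

```python
import boto3

s3 = boto3.client("s3")

# Generate a link that stops working after 5 minutes.
# "my-video-bucket" and the object key are placeholders.
url = s3.generate_presigned_url(
    ClientMethod="get_object",
    Params={"Bucket": "my-video-bucket", "Key": "videos/episode-01.mp4"},
    ExpiresIn=300,  # validity window in seconds
)
print(url)  # hand this to the player instead of a permanent S3 URL
```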
I'm using the Agora SDK & REST API to record live streams and upload them to AWS S3. The SDK comes with a snapshot feature that I'm planning to use for stream thumbnails.
Both Cloud Recording and Snapshot Recording are integrated with my app and work perfectly.
The remaining problem is that the snapshots are named down to the microsecond:
Agora snapshot services image file naming
From my overview, the services work as follows: the mobile app sends data to my server; my server makes requests to the Agora API, which joins the live stream channel, starts snapshotting, and saves the images to AWS. So I suppose it's impossible to have time synchronization between AWS, the Agora REST API, my server, and my mobile app.
I've gone through their docs and I can't find anything about retrieving the file names.
I was thinking maybe I could have a Lambda function that retrieves the last file added to a given bucket/folder (something like the sketch below), but due to my lack of knowledge of AWS and Lambda functions I don't know how to do that, or whether it's even possible.
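This is a rough, untested sketch of what I mean, using boto3; the bucket and prefix names are placeholders for my setup:

```python
import boto3

s3 = boto3.client("s3")

def latest_object_key(bucket, prefix):
    """Return the key of the most recently added object under a prefix."""
    newest = None
    # Paginate in case a folder accumulates more than 1000 snapshots.
    for page in s3.get_paginator("list_objects_v2").paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            if newest is None or obj["LastModified"] > newest["LastModified"]:
                newest = obj
    return newest["Key"] if newest else None

print(latest_object_key("my-recordings-bucket", "snapshots/channel-123/"))
```

From what I've read, an S3 event notification that triggers the Lambda with the new object's key might avoid the listing entirely, if that's viable here.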
Any suggestions would be appreciated.
What I need:
- load video from a client
- cut this video into chunks by timepoints
- store these chunks
- provide access to these video chunks for web users
Could you please give me some advice on how to properly build this process using AWS infrastructure?
You have a very broad question, so you cannot expect a very detailed answer. But let's at least start with the basic puzzle pieces.
AWS can provide the infrastructure and services to support your case.
load video from a client
Commonly, the uploads are stored in an S3 bucket.
cut this video into chunks by timepoints
Once the video is uploaded, you can use the Elastic Transcoder service, or any application running on a virtual machine (AWS EC2, AWS Batch, ...), to process the uploaded video files. Elastic Transcoder can generate the clips (chunks).
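As a rough sketch of the clipping step, assuming you have already created a transcoding pipeline (the pipeline ID, preset ID, and object keys below are placeholders), you would submit one job per timepoint using the input's TimeSpan settings:

```python
import boto3

transcoder = boto3.client("elastictranscoder", region_name="us-east-1")

# Placeholder pipeline; a pipeline ties the input and output buckets together.
response = transcoder.create_job(
    PipelineId="1111111111111-abcde1",
    Input={
        "Key": "uploads/source.mp4",
        # TimeSpan clips the input: start at 60 seconds, keep 30 seconds.
        "TimeSpan": {"StartTime": "60.000", "Duration": "30.000"},
    },
    # Placeholder preset ID; presets define the output format/resolution.
    Outputs=[{"Key": "chunks/source-chunk-001.mp4", "PresetId": "1351620000001-000010"}],
)
print(response["Job"]["Id"])
```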
store these chunks / provide access to these video chunks for web users
The chunks can be stored in S3 again, and you can build a web app that references (gives access to) the stored chunks.
This is only a basic overview, but given how broad your question is, it should be a good start.
I'm working on an app for on-demand HTTP Live video streaming using Amazon AWS. I was able to set up Amazon's default video-on-demand HLS workflow using the link below (i.e. video is uploaded, auto-encoded and stored in a different bucket with a unique ID). I'm trying to find a way to automatically group videos by category (in DynamoDB or another database) when I upload them. Has anyone done something similar before? Do I need to use a Lambda function?
https://docs.aws.amazon.com/solutions/latest/video-on-demand/appendix-a.html
FYI, in case anyone else is looking for a way to do this: you can upload your videos to AWS and use a JavaScript Lambda function to automatically categorize them in a NoSQL database.
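To illustrate the idea (sketched here in Python rather than JavaScript, and with a hypothetical table name and key convention): a Lambda triggered by the destination bucket's "ObjectCreated" event can write one item per video into DynamoDB.

```python
import boto3
import urllib.parse

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Videos")  # hypothetical table name

def lambda_handler(event, context):
    # Invoked by an S3 ObjectCreated event on the encoded-output bucket.
    for record in event["Records"]:
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        # Example convention: the category is encoded in the key prefix,
        # e.g. "sports/game-highlights.m3u8" -> category "sports".
        category = key.split("/")[0] if "/" in key else "uncategorized"
        table.put_item(Item={"videoId": key, "category": category})
    return {"processed": len(event["Records"])}
```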
I am just getting started on looking at speech to text conversion. I want to transcribe mp3 files, but can convert them if needed. It looks as though the Google and the IBM offerings allow you to send a file and get a transcript back. However all the examples I see for Amazon require you to somehow put the file to be transcribed into S3 storage before conversion. Is that right or am I missing something? Can you just send a file to Amazon and get the transcription back without having to delve into S3?
The start_transcription_job() API call requires the input file to be in Amazon S3, in the same region as the Transcribe service being called.
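For illustration, a minimal boto3 sketch of that flow (bucket and job names are placeholders): upload the file to S3 first, then point the job at its S3 URI.

```python
import boto3

s3 = boto3.client("s3", region_name="us-east-1")
transcribe = boto3.client("transcribe", region_name="us-east-1")

# Placeholder bucket/file names; the file must land in S3 first.
s3.upload_file("interview.mp3", "my-transcribe-input", "interview.mp3")

transcribe.start_transcription_job(
    TranscriptionJobName="interview-job-1",  # must be unique in the region
    Media={"MediaFileUri": "s3://my-transcribe-input/interview.mp3"},
    MediaFormat="mp3",
    LanguageCode="en-US",
)

# The job is asynchronous: poll until done, then fetch the transcript
# from the URI in the job result.
job = transcribe.get_transcription_job(TranscriptionJobName="interview-job-1")
print(job["TranscriptionJob"]["TranscriptionJobStatus"])
```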
It is also possible to use Amazon Transcribe Streaming, which can perform real-time transcription. However, the sample code that has been provided is only in Java.
See: aws-samples/aws-transcribe-streaming-example-java: Example Java Application using AWS SDK creating streaming transcriptions via AWS Transcribe
Well, Amazon uses S3 to perform the Transcribe service, and there is no way around it.
Use the Google or IBM offering if you are worried about the round trips through S3, but I wouldn't be surprised to see similar response times across all three services.
I'm developing a mobile app that will use AWS for its backend services. In the app I need to upload video files to S3 on a frequent basis, and I'm wondering what the recommended architecture would look like to make this scalable and efficient. Traffic could be high, and file sizes could be large.
- On one hand, I could upload directly to S3 using the S3 API on the client side. This would be the easiest option, but I'm not sure of the negative implications associated with it.
- The other way would be to go through an EC2 instance, handle the request with some PHP scripts, and upload from there.
So my question is: are these two options equal, or are there major drawbacks to one compared with the other? I will already have EC2 instances configured for database access, if that makes any difference in how you approach the question.
I would recommend "uploading directly to S3 using the S3 API on the client side", as you can speed up the upload process by using S3 multipart (part) upload, given that your video files are going to be large.
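Roughly like this; the sketch uses boto3 server-side for brevity (on the mobile client you would use the platform's AWS SDK, but the multipart idea is the same), and the bucket/key names are placeholders:

```python
import boto3
from boto3.s3.transfer import TransferConfig

# Files above the threshold are split into parts and uploaded in parallel.
config = TransferConfig(
    multipart_threshold=8 * 1024 * 1024,  # 8 MB
    multipart_chunksize=8 * 1024 * 1024,  # 8 MB per part
    max_concurrency=4,
)

s3 = boto3.client("s3")
# upload_file handles all the part bookkeeping and retries for you.
s3.upload_file("recording.mp4", "my-video-uploads", "videos/recording.mp4", Config=config)
```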
The second method puts extra CPU load on your EC2 instance, since both the PHP script processing and the upload to S3 consume CPU.