How to retrieve the last added files programmatically from Amazon S3

I'm using the Agora SDK and REST API to record live streams and upload them to AWS S3. The SDK comes with a snapshot feature that I'm planning to use for stream thumbnails.
Both Cloud Recording and Snapshot Recording are integrated with my app and work perfectly.
The remaining problem is that the snapshots are named with timestamps down to the microsecond, so my server has no way to predict the file names:
(Image: Agora snapshot service's image file naming convention)
From my overview, the service works as follows: the mobile app sends data to my server, my server makes requests to the Agora API, which joins the live stream channel, starts snapshotting, and saves the images to AWS S3. So I suppose it's impossible to have time synchronization between AWS, the Agora REST API, my server, and my mobile app.
I've gone through their docs and I can't find anything about retrieving the file names.
I was thinking maybe I could have a Lambda function that retrieves the last added file in a given bucket/folder, but due to my lack of knowledge of AWS and Lambda functions, I don't know how that would work or whether it's even possible.
Any suggestions would be appreciated.
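Retrieving the most recently added object under a prefix is straightforward with boto3, whether it runs inside a Lambda or anywhere else. A minimal sketch, with placeholder bucket and prefix names:

    import boto3

    s3 = boto3.client("s3")

    def latest_key(bucket, prefix):
        # Return the key of the most recently added object under the prefix.
        paginator = s3.get_paginator("list_objects_v2")
        newest = None
        for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
            for obj in page.get("Contents", []):
                if newest is None or obj["LastModified"] > newest["LastModified"]:
                    newest = obj
        return newest["Key"] if newest else None

    # Hypothetical bucket and prefix
    print(latest_key("my-recordings-bucket", "snapshots/channel-123/"))

An event-driven alternative is to configure an S3 ObjectCreated notification on the snapshot prefix, so a Lambda receives each new key the moment it lands, with no polling and no name guessing.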

Related

Where does AWS Amplify DataStore store its data?

I have been looking for a Firebase Firestore equivalent in the AWS world, and I came across AWS Amplify and its DataStore module. From watching a video on the official Amplify website, I see it does pretty much exactly what I expected, that is, having a real-time data mechanism. But I haven't really seen where the data gets stored. Or is this just a pub/sub kind of mechanism where I need to additionally write code that writes the data to wherever I want it stored, e.g. to DynamoDB or any other destination in the cloud?

Workaround for handling CPU-intensive tasks on AWS EC2?

I have created a Django application (running on AWS EC2) which converts media files from one format to another, but this process consumes CPU, for which I have to pay AWS charges.
I am trying to find a workaround where my local PC (Ubuntu) takes care of the CPU-intensive task and the final result is uploaded to an S3 bucket that I can share with users.
Solution: one possibility is that when a user uploads a media file (HTML upload form), it goes to the S3 bucket, and at the same time the S3 file link is sent over a socket connection to my Ubuntu machine, which downloads the file, processes it, and uploads it back to the S3 bucket.
Could anyone please suggest a better solution, as this one does not seem efficient?
Please note: I have a decent internet connection and a computer that can handle the backend very well, but I am not in a position to pay the extra AWS charges.
The best solution for this is to create a separate Lambda function for the task. Trigger the Lambda whenever someone uploads a file to S3; the Lambda will process the file and store the result back to S3.
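A minimal sketch of such a handler, assuming the converter (ffmpeg here) is bundled as a Lambda layer and results go to a separate output bucket; all names are placeholders. Keep in mind that Lambda's 15-minute timeout and limited /tmp space bound the size of file this can handle:

    import os
    import subprocess
    import boto3

    s3 = boto3.client("s3")

    def handler(event, context):
        # Invoked by an S3 ObjectCreated notification on the upload bucket.
        record = event["Records"][0]["s3"]
        bucket = record["bucket"]["name"]
        key = record["object"]["key"]

        src = "/tmp/" + os.path.basename(key)
        dst = os.path.splitext(src)[0] + ".mp4"

        s3.download_file(bucket, key, src)
        # ffmpeg is assumed to be available via a Lambda layer.
        subprocess.run(["ffmpeg", "-i", src, dst], check=True)
        s3.upload_file(dst, "converted-media-bucket", os.path.basename(dst))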

Appropriate AWS Service for Media(video) Streaming

I have an application which streams video (like Netflix or YouTube).
I am trying to host it on the AWS platform. I have found two different options for this:
the first one is to store the video files in S3;
the second one is to store the video files in AWS MediaStore.
On my existing platform, I have a problem with end users downloading the videos through IDM (Internet Download Manager).
So I have to prevent the videos from being downloaded with IDM.
How can I do this on the AWS platform? Which AWS service will suit my case of preventing downloads?
Please take note of the data-out charge when you use AWS as the primary means of serving your video streams. Personally, I found it prohibitively expensive to use AWS's services to serve video.
Netflix, for example, uses S3 as part of the main storage for their video streams.
As for which service you can use to hide the direct link / download link: currently there is no service provided natively by AWS for that exact purpose.
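The closest native building block is a short-lived presigned URL (or CloudFront signed URLs in front of S3). It won't stop a download manager that already has a valid link, but it keeps the object private and makes shared links expire quickly. A minimal boto3 sketch with placeholder names:

    import boto3

    s3 = boto3.client("s3")

    # Generate a link to a private object that expires after 60 seconds.
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "my-video-bucket", "Key": "episodes/e01.mp4"},
        ExpiresIn=60,
    )
    print(url)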

Uploading Directly to S3 vs Uploading Through EC2

I'm developing a mobile app that will use AWS for its backend services. In the app I need to upload video files to S3 on a frequent basis, and I'm wondering what the recommended architecture would look like to make this scalable and efficient. Traffic could be high, and file sizes could be large.
- On one hand, I could upload directly to S3 using the S3 API on the client side. This would be the easiest option, but I'm not sure of the negative implications associated with it.
- The other way would be to go through an EC2 instance, handle the request with some PHP scripts, and upload from there.
So my question is: are these two options equal, or are there major drawbacks to one of them compared to the other? I will already have EC2 instances configured for database access, if that makes any difference in how you approach the question.
I would recommend uploading directly to S3 using the S3 API on the client side, since you can speed up the process with S3 multipart upload, given that your video files are going to be large.
The second method will put extra CPU load on your EC2 instance, as both the script processing and the upload to S3 consume CPU.
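boto3's transfer layer switches to multipart upload automatically above a size threshold; a minimal sketch with placeholder names and tunable values:

    import boto3
    from boto3.s3.transfer import TransferConfig

    # Files larger than 8 MB are split into 8 MB parts and
    # uploaded over 4 concurrent threads.
    config = TransferConfig(
        multipart_threshold=8 * 1024 * 1024,
        multipart_chunksize=8 * 1024 * 1024,
        max_concurrency=4,
    )

    s3 = boto3.client("s3")
    s3.upload_file("video.mp4", "my-upload-bucket", "uploads/video.mp4", Config=config)

For a mobile client, the usual variant of this pattern is to have the backend hand out presigned upload URLs, so the app can upload directly without holding long-lived AWS credentials.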

Sync a GCS bucket with an S3 bucket (Lambda style)

Simple problem: I have a Google Cloud Storage bucket which gets content three times a day from an external provider. I want to fetch this content as soon as it arrives and push it to an S3 bucket. I have been able to achieve this by running my Python scripts as a cron job, but if I follow this route I also have to provide high availability and so on myself.
My idea was to set this up in AWS Lambda, so I don't have to sweat the infrastructure. Any pointers on this marriage between GCS and Lambda? I am not a native Node speaker, so any pointers would be really helpful.
GCS can send object notifications when an object is created or updated. You can catch the notifications (which are HTTP POST requests) with a simple web app hosted on GAE and then handle the file transfer to S3: a highly available, event-driven solution.
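A minimal sketch of that receiving endpoint in Python with Flask; bucket names are placeholders, and the exact payload shape depends on the notification mechanism (Pub/Sub push wraps it in a message envelope, while the legacy Object Change Notifications post flat JSON). This assumes a flat JSON body with bucket and name fields:

    import boto3
    from flask import Flask, request
    from google.cloud import storage

    app = Flask(__name__)
    gcs = storage.Client()
    s3 = boto3.client("s3")

    @app.route("/gcs-notify", methods=["POST"])
    def gcs_notify():
        payload = request.get_json()
        blob = gcs.bucket(payload["bucket"]).blob(payload["name"])

        # Pull the new object out of GCS and push it into S3.
        data = blob.download_as_bytes()
        s3.put_object(Bucket="my-s3-mirror-bucket", Key=payload["name"], Body=data)
        return "", 204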