How to transform images directly uploaded to S3 from a Heroku node app?

I have an app where I'm allowing users to upload images. I'm working on having them upload images directly to AWS S3, rather than passing them through my server (pass-through uploads apparently tie up your Heroku dynos). However, I would like to perform transformations on the assets they upload (for example, resizing, compressing image quality to reduce file size, and creating thumbnail versions). Since the files are uploaded directly to S3, I can't perform any transformations until they have finished uploading to Amazon.
I'm not sure of the best way to handle this, but I'm thinking:
1. The user uploads an image through a file input form field directly to S3 (see the presigned-URL sketch after this list).
2. Once that completes successfully, the image URL from Amazon is saved to my Heroku database.
3. I take that image and perform all the transformations on it.
4. I re-upload the transformed image to S3 as cropped, compressed, and thumbnail versions.
5. I persist the URLs for the new edited images in my Heroku database.
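For step 1, a minimal sketch of what I have in mind on the Heroku app, assuming the AWS SDK v3 packages @aws-sdk/client-s3 and @aws-sdk/s3-request-presigner; the bucket name and route are placeholders:

    // Express route (hypothetical) that returns a presigned PUT URL.
    const express = require('express');
    const { S3Client, PutObjectCommand } = require('@aws-sdk/client-s3');
    const { getSignedUrl } = require('@aws-sdk/s3-request-presigner');

    const app = express();
    const s3 = new S3Client({});

    app.get('/upload-url', async (req, res) => {
      const command = new PutObjectCommand({
        Bucket: 'my-upload-bucket', // placeholder bucket name
        Key: `uploads/${Date.now()}-${req.query.filename}`,
        ContentType: req.query.contentType,
      });
      // The URL is valid for 5 minutes; the browser PUTs the file to it
      // directly, so the upload never passes through a Heroku dyno.
      const url = await getSignedUrl(s3, command, { expiresIn: 300 });
      res.json({ url });
    });

    app.listen(process.env.PORT || 3000);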
Is this the best workflow to solve this problem, or is there a more efficient solution? Thanks!

Here are some alternatives to re-processing the pictures in Heroku:
Image processing with AWS Lambda
Configure your Amazon S3 bucket to trigger an AWS Lambda function when a picture is uploaded. The Lambda function could transform the image automatically.
See: Tutorial: Using AWS Lambda with Amazon S3
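As a rough illustration, a Node.js handler for such a function might look like the sketch below; it assumes the sharp library is bundled with the deployment and writes to a separate (hypothetical) destination bucket so the function does not re-trigger on its own output:

    // S3-triggered Lambda: download the original, create a thumbnail,
    // upload the result to a second bucket.
    const { S3Client, GetObjectCommand, PutObjectCommand } = require('@aws-sdk/client-s3');
    const sharp = require('sharp');

    const s3 = new S3Client({});

    exports.handler = async (event) => {
      // The S3 event carries the bucket and key of the uploaded object.
      const record = event.Records[0].s3;
      const bucket = record.bucket.name;
      const key = decodeURIComponent(record.object.key.replace(/\+/g, ' '));

      const original = await s3.send(new GetObjectCommand({ Bucket: bucket, Key: key }));
      const body = Buffer.from(await original.Body.transformToByteArray());

      // Resize to a 200px-wide thumbnail and re-compress as JPEG.
      const thumbnail = await sharp(body).resize({ width: 200 }).jpeg({ quality: 80 }).toBuffer();

      await s3.send(new PutObjectCommand({
        Bucket: 'my-app-thumbnails', // hypothetical destination bucket
        Key: key,
        Body: thumbnail,
        ContentType: 'image/jpeg',
      }));
    };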
Transform pictures upon retrieval
Instead of transforming and storing the images, use an online service that can transform the images on demand, e.g.:
Cloudinary
Imgix
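With these services the transformation is encoded in the request URL rather than stored as a separate file. A hypothetical example (the domain and query parameters below are illustrative, not a real account):

    // Build a thumbnail URL on the fly; no transformed copy is stored.
    const baseUrl = 'https://my-account.imgix.net/uploads/photo.jpg'; // placeholder
    const thumbnailUrl = `${baseUrl}?w=200&h=200&fit=crop&auto=compress`;
    console.log(thumbnailUrl);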

Related

Create AWS container without programmatic access

I need to create an AWS container with ECS. However, I don't have programmatic access to push the image to the ECR repository, and I haven't found another way to create a container.
My question is: is there another way to create a container without programmatic access?
I found a way to upload the image to Amazon S3 (by compressing it into a .zip), but I don't know how to use the image after the upload.
"I found a way to upload the image to Amazon S3 (by compressing it into a .zip)"
That isn't going to work. You need to upload the image to ECR, not S3. ECS does not support loading images from S3.
"Is there another way to create a container without programmatic access?"
You can't upload images to ECR without programmatic access. It isn't possible to upload images to ECR with only username/password access.
Formal (Correct) way:
Probably a CodeBuild job that builds the image and pushes it, possibly wrapped up with CodePipeline (a buildspec sketch follows below).
Hacky way:
Maybe a Lambda function that pulls the zip from S3, unpacks the image, and pushes it to ECR? That's definitely not a pattern you want to keep long term, but it might get the job done.
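For the formal route, a hypothetical buildspec.yml sketch for the CodeBuild project (account ID, region, and repository name are placeholders, and the project needs privileged mode enabled to run Docker):

    version: 0.2
    phases:
      pre_build:
        commands:
          # Authenticate Docker against the ECR registry (placeholder account/region).
          - aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
      build:
        commands:
          - docker build -t my-app:latest .
          - docker tag my-app:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
      post_build:
        commands:
          - docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest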

How can I upload videos to Amazon S3 (for HLS Stream) and categorize them?

I'm working on an app for on-demand HTTP Live video streaming using Amazon AWS. I was able to set up Amazon's default video-on-demand HLS workflow using the link below (i.e. video is uploaded, auto-encoded and stored in a different bucket with a unique ID). I'm trying to find a way to automatically group videos by category (in DynamoDB or another database) when I upload them. Has anyone done something similar before? Do I need to use a Lambda function?
https://docs.aws.amazon.com/solutions/latest/video-on-demand/appendix-a.html
FYI, in case anyone else is looking for a way to do this: you can upload your video to AWS and use a JavaScript Lambda function to automatically categorize it in a NoSQL database such as DynamoDB.
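A minimal sketch of such a function, assuming the category is encoded as the first segment of the S3 object key (e.g. sports/match.mp4) and that the DynamoDB table name "Videos" is a placeholder:

    // S3-triggered Lambda that records each uploaded video in DynamoDB.
    const { DynamoDBClient, PutItemCommand } = require('@aws-sdk/client-dynamodb');

    const db = new DynamoDBClient({});

    exports.handler = async (event) => {
      const record = event.Records[0].s3;
      const key = decodeURIComponent(record.object.key.replace(/\+/g, ' '));

      // Derive the category from the first path segment of the object key.
      const [category, ...rest] = key.split('/');

      await db.send(new PutItemCommand({
        TableName: 'Videos', // placeholder table name
        Item: {
          videoId: { S: key },
          category: { S: category },
          fileName: { S: rest.join('/') },
          uploadedAt: { S: new Date().toISOString() },
        },
      }));
    };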

Connecting S3 - Lambda - EC2 - Elasticsearch

In my project, users upload images into an S3 bucket. I have created a TensorFlow ResNet model to interpret the contents of each image. Based on the TensorFlow interpretation, the data is to be stored in an Elasticsearch instance.
For this, I have created an S3 bucket, a Lambda function that gets triggered when an image is uploaded, and an AWS Elasticsearch instance. Since my TF models are large, I have zipped them, put them in an S3 bucket, and pointed the Lambda function at the S3 URL.
Issue: since my unzipped files were larger than 266 MB, I could not deploy the Lambda function.
Alternative approach: instead of an S3 bucket, I am thinking of creating an EC2 instance with a larger volume to store the images, and receiving the images directly on the EC2 instance instead of S3. However, since I will be receiving millions of images within a year, I am not sure this will be scalable.
I can think of two approaches here:
You side-load the app: the Lambda function can be a small bootstrap script that downloads your app from S3 and unzips it. This is a popular pattern in serverless frameworks. You pay this cost during a cold start of the Lambda, so you will need to keep it warm in a production environment.
You can store the images in S3 itself and create an event notification on image upload with SQS as the destination. Then an EC2 instance can periodically poll the SQS queue for new messages and process the images using your TF models (see the polling sketch below).
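A minimal sketch of the second approach's EC2 worker in Node.js; the queue URL is a placeholder, and processImage stands in for your TensorFlow + Elasticsearch logic:

    // Long-poll SQS for S3 upload events and process each image.
    const { SQSClient, ReceiveMessageCommand, DeleteMessageCommand } = require('@aws-sdk/client-sqs');

    const sqs = new SQSClient({});
    const queueUrl = 'https://sqs.us-east-1.amazonaws.com/123456789012/image-uploads'; // placeholder

    // Stub for your TensorFlow + Elasticsearch step (assumed).
    async function processImage(bucket, key) {
      console.log(`would run the TF model on s3://${bucket}/${key}`);
    }

    async function poll() {
      const { Messages } = await sqs.send(new ReceiveMessageCommand({
        QueueUrl: queueUrl,
        MaxNumberOfMessages: 10,
        WaitTimeSeconds: 20, // long polling keeps the loop cheap when idle
      }));

      for (const message of Messages ?? []) {
        const s3Event = JSON.parse(message.Body);
        const { bucket, object } = s3Event.Records[0].s3;

        await processImage(bucket.name, object.key);

        // Delete only after successful processing so failures are retried.
        await sqs.send(new DeleteMessageCommand({
          QueueUrl: queueUrl,
          ReceiptHandle: message.ReceiptHandle,
        }));
      }
    }

    (async () => {
      while (true) await poll();
    })().catch(console.error);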

Is there any way to upload 50000 image files to an Amazon S3 bucket from a list of URLs?

Is there any way to upload 50000 image files to an Amazon S3 bucket? The 50000 image file URLs are saved in a .txt file. Can someone please tell me a good way to do this?
It sounds like your requirement is: for each image URL listed in a text file, copy the image to an Amazon S3 bucket.
There is no built-in capability in Amazon S3 to do this. Instead, you would need to write an app (sketched after this list) that:
Reads the text file and, for each URL
Downloads the image
Uploads the image to Amazon S3
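A minimal sketch of such an app in Node.js (18+ for the global fetch), assuming urls.txt holds one URL per line; the bucket name is a placeholder:

    // Read URLs from a text file, download each image, upload it to S3.
    const fs = require('fs');
    const path = require('path');
    const { S3Client, PutObjectCommand } = require('@aws-sdk/client-s3');

    const s3 = new S3Client({});

    async function main() {
      const urls = fs.readFileSync('urls.txt', 'utf8').split('\n').filter(Boolean);

      for (const url of urls) {
        const response = await fetch(url);
        const body = Buffer.from(await response.arrayBuffer());

        // Use the last path segment of the URL as the object key.
        const key = path.basename(new URL(url).pathname);

        await s3.send(new PutObjectCommand({
          Bucket: 'my-image-bucket', // placeholder bucket name
          Key: key,
          Body: body,
        }));
        console.log(`Uploaded ${key}`);
      }
    }

    main().catch(console.error);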
Doing this on an Amazon EC2 instance would be fast, due to the low latency between EC2 and S3.
You could also get fancy and do it via Amazon EMR. That would be the fastest option, due to parallel processing, but it would require knowledge of how to use Hadoop.
If you have a local copy of the images, you could order an AWS Snowball and use it to transfer the files to Amazon S3. However, it would probably be faster just to copy the files over the Internet (rough guess... at 1 MB per file, the total volume is only 50 GB).

How to Generate Thumbnail from AWS S3 Media Cache Source in Wowza

Could anyone let me know how to generate a thumbnail from an AWS S3 media cache source?
I stream videos directly from an AWS S3 bucket and want to find a way to generate a thumbnail.
Based on AWS, you can use Amazon Elastic Transcoder: https://aws.amazon.com/elastictranscoder/details/
"Thumbnails: Amazon Elastic Transcoder can generate thumbnails of your output video for you. You can set the size of the thumbnails, aspect ratio and other parameters including how many thumbnails you would like to have generated. Generating multiple thumbnails is useful if you want to add chapter markers, provide a visual scan function or simply choose the most representative thumbnail for your content."
Another way: you can use ffmpeg to generate thumbnails, for example by invoking it from your app as sketched below.
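A small Node.js sketch that shells out to ffmpeg (which must be installed on the server; paths are placeholders):

    // Grab a single frame at the 5-second mark as a JPEG thumbnail.
    const { execFileSync } = require('child_process');

    execFileSync('ffmpeg', [
      '-i', 'input.mp4',   // source video (placeholder path)
      '-ss', '00:00:05',   // seek to 5 seconds in
      '-frames:v', '1',    // capture a single frame
      'thumbnail.jpg',     // output image (placeholder path)
    ]);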
Cheers,
Patrick