Create AWS container without programmatic access - amazon-web-services

I need to create an AWS container with ECS. However, I don't have programmatic access to push the image to the ECR repository, and I haven't found another way to create a container.
My question is: is there another way to create a container without programmatic access?
I found a way to upload the image to Amazon S3 (compressing the image into a .zip), but I don't know how to use the image after the upload.

I found a way to upload the image to Amazon S3 (compressing the image into a .zip)
That isn't going to work. You need to upload the image to ECR, not S3. ECS does not support loading images from S3.
Is there another way to create a container without programmatic access?
You can't. Pushing images to ECR requires programmatic access; it isn't possible with only username/password (console) access.

Formal (correct) way:
Probably a CodeBuild job that builds the image and pushes it to ECR, possibly wrapped up with CodePipeline.
Hacky way:
Maybe a Lambda that pulls the zip from S3, unpacks the image and pushes it to ECR? Definitely not a pattern you want to keep long term, but it might get the job done.
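To make the "formal" route a bit more concrete: if the build job (or any role with temporary credentials) needs to prepare the ECR side, the boto3 calls look roughly like the sketch below. The region and repository name are placeholders, and the actual image push still has to happen from something with a container runtime (e.g., `docker push` inside CodeBuild); this script only creates the repository and fetches the login token.

```python
import base64
import boto3

# Assumption: the caller (e.g., a CodeBuild job's service role) already has
# ECR permissions; region and repository name are placeholders.
ecr = boto3.client("ecr", region_name="us-east-1")

# Create the repository if it does not exist yet.
try:
    ecr.create_repository(repositoryName="my-app")
except ecr.exceptions.RepositoryAlreadyExistsException:
    pass

# Get a temporary registry login token (valid for roughly 12 hours).
auth = ecr.get_authorization_token()["authorizationData"][0]
user, password = base64.b64decode(auth["authorizationToken"]).decode().split(":")
registry = auth["proxyEndpoint"]

# These credentials are what you would feed to `docker login` / `docker push`
# inside the build environment; Python alone cannot push image layers.
print(f"docker login -u {user} -p <token> {registry}")
```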

Related

Is there a way to remove image metadata from all existing images in S3?

I have a huge S3 bucket of images. It's impractical to edit each one manually or write a script to strip the metadata one by one, so is there another way to strip the image metadata from all of the images?
To be clear: I mean the image metadata (Exif etc.), NOT the S3 metadata.
Thank you!
There's no built-in way to do this in bulk; I recommend using S3 Batch Operations + an AWS Lambda function to do it.
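As a rough illustration (not part of the original answer), the per-object Lambda could look something like this, assuming a JPEG/PNG workload and the standard S3 Batch Operations invocation schema. Pillow is not in the Lambda runtime by default and must be packaged as a layer, and the pixel-copy approach is deliberately simple rather than fast.

```python
import io
import urllib.parse

import boto3
from PIL import Image

s3 = boto3.client("s3")

def handler(event, context):
    """S3 Batch Operations handler: rewrites one object without Exif metadata."""
    task = event["tasks"][0]
    bucket = task["s3BucketArn"].split(":")[-1]
    key = urllib.parse.unquote_plus(task["s3Key"])

    obj = s3.get_object(Bucket=bucket, Key=key)
    img = Image.open(io.BytesIO(obj["Body"].read()))

    # Copy the pixel data into a brand-new image so no metadata carries over.
    clean = Image.new(img.mode, img.size)
    clean.putdata(list(img.getdata()))

    out = io.BytesIO()
    clean.save(out, format=img.format or "JPEG")
    s3.put_object(Bucket=bucket, Key=key, Body=out.getvalue())

    # Response shape expected by S3 Batch Operations.
    return {
        "invocationSchemaVersion": event["invocationSchemaVersion"],
        "treatMissingKeysAs": "PermanentFailure",
        "invocationId": event["invocationId"],
        "results": [{
            "taskId": task["taskId"],
            "resultCode": "Succeeded",
            "resultString": f"stripped metadata from {key}",
        }],
    }
```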

Edit image file in S3 bucket using AWS Lambda

Some images are already uploaded to an AWS S3 bucket, and of course there are a lot of them. I want to edit and replace those images, and I want to do it on the AWS side; here I want to use AWS Lambda.
I can already do the job from my local PC, but it takes a very long time, so I want to do it on the server.
Is it possible?
Unfortunately, directly editing a file in S3 is not supported; check out the thread. To work around this, you need to download the file to a server/local machine, edit it, and re-upload it to the S3 bucket. You can also enable versioning.
For Node.js you can use Jimp
For Java: ImageIO
For Python: Pillow
or you can use any technology to edit the image and later upload it using the AWS SDK.
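As a rough sketch of that download-edit-re-upload loop (not from the original answer), here is what it might look like in Python with boto3 and Pillow; the bucket name, prefix, and the resize operation are placeholders for whatever edit you actually need.

```python
import io

import boto3
from PIL import Image

# Placeholders: bucket, prefix, and the edit itself are examples only.
BUCKET = "my-image-bucket"
PREFIX = "uploads/"

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        body = s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()

        # Edit the image in memory (here: shrink it to fit within 1024x1024).
        img = Image.open(io.BytesIO(body))
        img.thumbnail((1024, 1024))

        out = io.BytesIO()
        img.save(out, format=img.format or "JPEG")

        # Re-upload over the original key; enable bucket versioning if you
        # want to be able to roll back.
        s3.put_object(Bucket=BUCKET, Key=key, Body=out.getvalue())
```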
For the Lambda function you can use the Serverless Framework - https://serverless.com/
I made some YouTube videos a while back on how to get started with AWS Lambda and Serverless:
https://www.youtube.com/watch?v=uXZCNnzSMkI
You can trigger a Lambda using the AWS SDK.
Write a Lambda to process a single image and deploy it.
Then, locally, use the AWS SDK to list the images in the bucket and invoke the Lambda asynchronously for each file using invoke. I would also record somewhere which files have been processed, so you can resume if something fails.
Note that the default limit for Lambda is 1000 concurrent executions, so to avoid reaching the limit you can send messages to an SQS queue (which then triggers the Lambda) or just retry when invoke throws an error.
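A minimal sketch of the local driver described above, assuming Python and boto3; the bucket and function names are placeholders, and in practice you would persist the processed-keys set (a file or DynamoDB table) rather than keep it in memory.

```python
import json

import boto3

# Placeholders: bucket and function names are examples.
BUCKET = "my-image-bucket"
FUNCTION = "process-single-image"

s3 = boto3.client("s3")
lam = boto3.client("lambda")

processed = set()  # persist this somewhere durable so you can resume on failure

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        if key in processed:
            continue
        # InvocationType="Event" fires the Lambda asynchronously.
        lam.invoke(
            FunctionName=FUNCTION,
            InvocationType="Event",
            Payload=json.dumps({"bucket": BUCKET, "key": key}),
        )
        processed.add(key)
```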

How to automate AWS Elastic Transcoder Jobs for s3 buckets?

What I want: to add watermarks to all video files that are uploaded to the S3 bucket (mov, mp4, etc.), then overwrite the file under its same name with the newly transcoded file that has the watermark on it.
I was able to do this manually by creating a pipeline and job with Elastic Transcoder, but that's manual. I want this done the moment a file is uploaded to the bucket: overwrite the file with the new file and boom.
One, this should be a built-in feature already, and I'm not sure why it isn't.
And two, how can I have this done automatically? Any advice? I know it's possible, I'm just not sure exactly where to start.
You need an S3 bucket and a Lambda along with your transcoder pipeline.
Elastic Transcoder is the backbone of your process.
To automate transcoding, create a Lambda function that gets triggered by an S3 event.
A more detailed explanation is here.
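As a rough sketch (not from the original answer) of that S3-triggered Lambda, here is what submitting an Elastic Transcoder job with a watermark could look like in Python. The pipeline ID, preset ID, watermark ID, and key names are placeholders and must match what you configured in your pipeline and preset.

```python
import urllib.parse

import boto3

# Placeholders: these IDs must match your Elastic Transcoder configuration.
PIPELINE_ID = "1111111111111-abcde1"
PRESET_ID = "1351620000001-000010"

transcoder = boto3.client("elastictranscoder")

def handler(event, context):
    """Triggered by an S3 ObjectCreated event; submits a transcoding job."""
    record = event["Records"][0]
    key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

    transcoder.create_job(
        PipelineId=PIPELINE_ID,
        Input={"Key": key},
        Outputs=[{
            # Write to a different key/prefix: overwriting the source key
            # would re-trigger this Lambda and loop forever.
            "Key": "watermarked/" + key,
            "PresetId": PRESET_ID,
            "Watermarks": [{
                "PresetWatermarkId": "TopRight",   # defined inside the preset
                "InputKey": "branding/logo.png",   # watermark image in the input bucket
            }],
        }],
    )
```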

Connecting S3 - Lambda - EC2 - Elasticsearch

In my project users upload images into an S3 bucket. I have created a TensorFlow ResNet model to interpret the contents of the image. Based on the TensorFlow interpretation, the data is to be stored in an Elasticsearch instance.
For this, I have created an S3 bucket, a Lambda function that gets triggered when an image is uploaded, and an AWS Elasticsearch instance. Since my TF models are large, I have zipped them, put them in an S3 bucket, and passed the S3 URL to Lambda.
Issue: since my unzipped files were larger than 266 MB, I could not complete the Lambda function.
Alternative approach: instead of an S3 bucket, I am thinking of creating an EC2 instance with a larger volume to store images and receiving the images directly on the EC2 instance instead of S3. However, since I will be receiving millions of images within a year, I am not sure this will scale.
I can think of two approaches here:
You side-load the app. The Lambda can be a small bootstrap script that downloads your model from S3 and unzips it (sketched below). This is a popular pattern in serverless frameworks. You pay for this during a cold start of the Lambda, so you will need to keep it warm in a production environment.
You can store the images in S3 itself and create an event on image upload with SQS as the destination. Then you can have EC2 poll the SQS queue for new messages periodically and process them using your TF models.
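A minimal sketch of the side-loading bootstrap from the first approach, assuming Python and boto3; the bucket, key, and extraction path are placeholders. Note that /tmp is itself limited (512 MB unless you raise the ephemeral-storage setting), so check that the unzipped model fits.

```python
import os
import zipfile

import boto3

# Placeholders: bucket, key, and extraction path are examples only.
MODEL_BUCKET = "my-model-bucket"
MODEL_KEY = "models/resnet.zip"
MODEL_DIR = "/tmp/model"

s3 = boto3.client("s3")

def _load_model():
    """Download and unzip the model once per container (cold start only)."""
    if not os.path.isdir(MODEL_DIR):
        zip_path = "/tmp/model.zip"
        s3.download_file(MODEL_BUCKET, MODEL_KEY, zip_path)
        with zipfile.ZipFile(zip_path) as zf:
            zf.extractall(MODEL_DIR)
        os.remove(zip_path)
    return MODEL_DIR

MODEL_PATH = _load_model()  # runs at import time, i.e. during the cold start

def handler(event, context):
    # Load the TensorFlow model from MODEL_PATH here and run inference
    # against the image referenced in the S3 event.
    ...
```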

How to transform images directly uploaded to S3 from a Heroku node app?

I have an app where I'm allowing users to upload images. I'm working on having them upload images directly to AWS S3, rather than passing them through my app (it sounds like a pass-through really ties up your Heroku dynos). However, I would like to perform transformations on the assets they upload (for example, resizing, compressing image quality to reduce file size, and creating thumbnail versions). Since the files are uploaded directly to S3, I can't perform any transformations until they have finished uploading to Amazon.
I'm not sure of the best way to handle this, but I'm thinking:
User uploads an image through a file input form field that is directly uploaded to S3.
Once that has completed successfully, the image URL from Amazon is saved to my Heroku database.
Then, I can take that image and perform all those transformations on it.
Re-upload the image to S3 as cropped, compressed, and thumbnailed versions.
Persist the URLs for the new edited images in my Heroku database.
Is this the best workflow to solve this problem, or is there a more efficient solution? Thanks!
Here are some alternatives to re-processing the pictures in Heroku:
Image processing with AWS Lambda
Configure your Amazon S3 bucket to trigger an AWS Lambda function when a picture is uploaded. The Lambda function could transform the image automatically.
See: Tutorial: Using AWS Lambda with Amazon S3
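As a rough illustration of that approach (under assumed names, not from the tutorial), an S3-triggered thumbnail Lambda in Python might look like this; the thumbnail prefix and size are placeholders, and Pillow has to be packaged with the function.

```python
import io
import urllib.parse

import boto3
from PIL import Image

# Placeholders: prefix and size are examples only.
THUMB_PREFIX = "thumbs/"
THUMB_SIZE = (300, 300)

s3 = boto3.client("s3")

def handler(event, context):
    """Triggered by S3 ObjectCreated; writes a thumbnail next to the original."""
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

    if key.startswith(THUMB_PREFIX):
        return  # don't re-process our own output and loop forever

    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    img = Image.open(io.BytesIO(body))
    img.thumbnail(THUMB_SIZE)

    out = io.BytesIO()
    img.save(out, format=img.format or "JPEG")
    s3.put_object(Bucket=bucket, Key=THUMB_PREFIX + key, Body=out.getvalue())
```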
Transform pictures upon retrieval
Instead of transforming and storing the images, use an online service that can transform the images on demand, e.g.:
Cloudinary
Imgix