Deploying imgproxy to AWS with Fargate

I would like to deploy imgproxy to AWS using Fargate to serve different sizes/formats of images from an s3 bucket. Ideally also behind Cloudfront.
Imgproxy has a docker image
docker pull darthsim/imgproxy:latest
docker run -p 8080:8080 -it darthsim/imgproxy
and serving from s3 is supported, e.g.:
docker run -p 8080:8080 -e AWS_ACCESS_KEY_ID=XXXX -e AWS_SECRET_ACCESS_KEY=YYYYYYXXX -e IMGPROXY_USE_S3=true -e IMGPROXY_S3_REGION=us-east-1 -it darthsim/imgproxy
Deploy with Fargate
I followed the Fargate wizard and chose "Custom"
The container
I set up the container as follows, using the imgproxy Docker image and mapping port 8080, which I think is the port it usually runs on.
In the advanced section, I set the command as
docker run -p 8080:8080 -e IMGPROXY_USE_S3=true -e IMGPROXY_S3_REGION=us-east-1 -it darthsim/imgproxy
The task
I left this as the defaults:
The service
For the service, I chose to use a load balancer:
The results
After waiting for the launch to complete, I went to the load balancer and copied the DNS name:
http://.us-east-1.elb.amazonaws.com:8080/
But I got 503 Service Temporarily Unavailable
It seems the task failed to start
Status reason CannotStartContainerError: Error response from daemon: OCI runtime create failed: container_linux.go:370: starting container process caused: exec: "docker run -p 8080:8080 -e IMGPROXY_USE_S3=true -e IMGPROXY_S3_REGION=us-east-1 -it darthsim/imgproxy": st
Entry point ["docker run -p 8080:8080 -e IMGPROXY_USE_S3=true -e IMGPROXY_S3_REGION=us-east-1 -it darthsim/imgproxy"]
Command ["docker run -p 8080:8080 -e IMGPROXY_USE_S3=true -e IMGPROXY_S3_REGION=us-east-1 -it darthsim/imgproxy"]
Help
Initially I'm just looking to figure out how to get this deployed in basic form. Maybe I need to do more with IAM roles so it doesn't need the AWS creds? Or maybe something in the config wasn't right?
Then I'd also like to figure out how to bring CloudFront into the picture too.

Turns out I was overcomplicating this.
The CMD and ENTRYPOINT can be left blank.
I then simply set the environment variables:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
IMGPROXY_S3_REGION
IMGPROXY_USE_S3 true
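For reference, here is roughly what the same setup looks like as a task definition registered via the CLI instead of the wizard. This is an untested sketch: the family name, CPU/memory sizes, account ID and the placeholder credential values are mine, not from the working setup.
cat > imgproxy-task.json <<'EOF'
{
  "family": "imgproxy",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "arn:aws:iam::<account-id>:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "imgproxy",
      "image": "darthsim/imgproxy:latest",
      "portMappings": [{ "containerPort": 8080, "protocol": "tcp" }],
      "environment": [
        { "name": "AWS_ACCESS_KEY_ID",     "value": "XXXX" },
        { "name": "AWS_SECRET_ACCESS_KEY", "value": "YYYYYYXXX" },
        { "name": "IMGPROXY_S3_REGION",    "value": "us-east-1" },
        { "name": "IMGPROXY_USE_S3",       "value": "true" }
      ]
    }
  ]
}
EOF
aws ecs register-task-definition --cli-input-json file://imgproxy-task.json
Note there is no command or entrypoint here, matching the point above about leaving CMD and ENTRYPOINT blank.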
After waiting for the task to go from PENDING to RUNNING, I can copy the DNS name of the load balancer and be greeted by the imgproxy "hello" page.
The IAM Role vs creds
I didn't get this working via an IAM role for the task. I tried giving the ecsTaskExecutionRole s3 read permissions, but in the absence of AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY in the container environment imgproxy complained about missing creds.
In the end I just created a user with an S3 policy allowing read access to the relevant bucket and copied its access key ID and secret access key into the environment as per above.
If anyone knows how to get an IAM role working that would be nice to know.
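For what it's worth, the usual pointer is that S3 access for the app belongs on the task role rather than the execution role (the execution role is only used by ECS itself to pull the image and write logs). An untested sketch of that direction, with the role name as a placeholder and "somebucket" standing in for the real bucket:
aws iam create-role --role-name imgproxy-task-role \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": { "Service": "ecs-tasks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }]
  }'
aws iam put-role-policy --role-name imgproxy-task-role --policy-name s3-read \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": ["arn:aws:s3:::somebucket", "arn:aws:s3:::somebucket/*"]
    }]
  }'
The role would then go in the task definition's taskRoleArn field. I haven't verified that imgproxy picks this up, so treat it as a lead rather than an answer.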
Cloudfront
This was just a case of setting the CloudFront origin to be the load balancer for the cluster and setting its HTTP port to 8080 to match imgproxy.
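For anyone scripting this rather than clicking through the console, a rough, untested sketch of the equivalent CLI call (the load balancer DNS name is a placeholder, the CallerReference just needs to be a unique string, and the cache policy ID is AWS's managed "CachingOptimized" policy):
aws cloudfront create-distribution --distribution-config '{
  "CallerReference": "imgproxy-fargate-1",
  "Comment": "imgproxy behind Fargate",
  "Enabled": true,
  "Origins": {
    "Quantity": 1,
    "Items": [{
      "Id": "imgproxy-alb",
      "DomainName": "<load-balancer-dns-name>.us-east-1.elb.amazonaws.com",
      "CustomOriginConfig": {
        "HTTPPort": 8080,
        "HTTPSPort": 443,
        "OriginProtocolPolicy": "http-only"
      }
    }]
  },
  "DefaultCacheBehavior": {
    "TargetOriginId": "imgproxy-alb",
    "ViewerProtocolPolicy": "redirect-to-https",
    "CachePolicyId": "658327ea-f89d-4fab-a63d-7e88639e58f6"
  }
}'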
Signed URLs
Just need to add the following to the environment variables
IMGPROXY_KEY
IMGPROXY_SALT
and they can be generated with echo $(xxd -g 2 -l 64 -p /dev/random | tr -d '\n').
After setting these, the simple /insecure URL will not work.
In Python, the signed URL can be generated from the imgproxy example code. Note that the url on line 11 should be the s3 URL for the image, e.g. "s3://somebucket/art/1.png", and you need to replace the key and salt with the hex-encoded ones from the ECS environment.
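Here is a condensed sketch of that example, with placeholder key/salt values and the s3 URL swapped in. The processing options ("rs:fit:300:300") and the extension are just illustrative, not something from my setup.
import base64
import hashlib
import hmac

# Placeholders: use the hex strings you set as IMGPROXY_KEY and IMGPROXY_SALT
key = bytes.fromhex("aa" * 64)
salt = bytes.fromhex("bb" * 64)

# The source image is referenced by its s3:// URL, URL-safe base64 encoded, unpadded
source_url = b"s3://somebucket/art/1.png"
encoded_url = base64.urlsafe_b64encode(source_url).rstrip(b"=").decode()

# Processing options go in the path (fit-resize to 300x300 here as an example)
path = f"/rs:fit:300:300/{encoded_url}.png".encode()

# Signature = URL-safe base64(HMAC-SHA256(key, salt + path)), unpadded
digest = hmac.new(key, msg=salt + path, digestmod=hashlib.sha256).digest()
signature = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()

print(f"/{signature}{path.decode()}")  # append this to the CloudFront/ALB host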

Related

GCSFuse not finding default credentials when running a cloud run app docker locally

I am working on mounting a Cloud Storage Bucket to my Cloud Run App, using the example and code from the official tutorial https://cloud.google.com/run/docs/tutorials/network-filesystems-fuse
The application uses docker only (no cloudbuild.yaml)
The Dockerfile builds without issue using the command:
docker build --platform linux/amd64 -t fusemount .
I then start docker run with the following command
docker run --rm -p 8080:8080 -e PORT=8080 fusemount
and when it runs, gcsfuse is triggered with both the mount directory and the bucket URL
gcsfuse --debug_gcs --debug_fuse gs://<my-bucket> /mnt/gs
But the connection fails:
2022/12/11 13:54:35.325717 Start gcsfuse/0.41.9 (Go version go1.18.4) for app "" using mount point: /mnt/gcs
2022/12/11 13:54:35.618704 Opening GCS connection...
2022/12/11 13:57:26.708666 Failed to open connection: GetTokenSource: DefaultTokenSource: google: could not find default credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
I have already set up the application-default credentials with the following command:
gcloud auth application-default login
and I have a python based cloud function project that I have tested on the same local machine which has no problem accessing the same storage bucket with the same default login credentials.
What am I missing?
Google libraries search for ~/.config/gcloud when using the application-default authorization approach.
Your local Docker container doesn't contain this config when running locally.
So, you might want to mount it when running a container:
$ docker run --rm -v /home/$USER/.config/gcloud:/root/.config/gcloud -p 8080:8080 -e PORT=8080 fusemount
Some notes:
1. I'm not sure which OS you are using, so replace /home/$USER with the real path to your home directory.
2. Likewise, I'm not sure your image uses /root as its home directory, so make sure the path from note 1 is mounted properly.
3. Make sure your local user is authorized with the gcloud CLI, as you mentioned, using gcloud auth application-default login.
Let me know if this helped.
If you are using Docker and not Google Compute Engine (GCE), did you try mounting a service account key when running the container and using that key when mounting GCSFuse?
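A sketch of that suggestion, with the key path and file name as placeholders; the idea is to mount a service-account key into the container and point both the Google client libraries and gcsfuse at it.
docker run --rm \
  -v "$PWD/sa-key.json:/etc/gcp/sa-key.json:ro" \
  -e GOOGLE_APPLICATION_CREDENTIALS=/etc/gcp/sa-key.json \
  -p 8080:8080 -e PORT=8080 \
  fusemount
# inside the container, gcsfuse can also be given the key explicitly:
gcsfuse --key-file /etc/gcp/sa-key.json --debug_gcs --debug_fuse <my-bucket> /mnt/gcs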
If you are building and deploying to Cloud run, did you grant required permissions mentioned in https://cloud.google.com/run/docs/tutorials/network-filesystems-fuse#ship-code?

How can I install aws cli, from WITHIN the ECS task?

Question:
How can I install the AWS CLI from WITHIN the ECS task?
DESCRIPTION:
I'm using a docker container to run the logstash application (it is part of the elastic family).
The docker image name is "docker.elastic.co/logstash/logstash:7.10.2"
This logstash application needs to write to S3, thus it needs AWS CLI installed.
If aws is not installed, it crashes.
# STEP 1 #
To avoid the crash, when I used this application as a plain Docker container, I delayed the logstash start until after the container had started.
I did this by adding a "sleep" command to an external docker-entrypoint file, before it starts logstash.
This is how it looks in the docker-entrypoint file:
sleep 120
if [[ -z $1 ]] || [[ ${1:0:1} == '-' ]]; then
  exec logstash "$@"
else
  exec "$@"
fi
# EOF
# STEP 2 #
Run docker with the "--entrypoint" flag so it uses my entrypoint file:
docker run \
-d \
--name my_logstash \
-v /home/centos/DevOps/psifas_logstash_docker-entrypoint:/usr/local/bin/psifas_logstash_docker-entrypoint \
-v /home/centos/DevOps/logstash.conf:/usr/share/logstash/pipeline/logstash.conf \
-v /home/centos/DevOps/logstash.yml:/usr/share/logstash/config/logstash.yml \
--entrypoint /usr/local/bin/psifas_logstash_docker-entrypoint \
docker.elastic.co/logstash/logstash:7.10.2
# STEP 3 #
install aws cli and configure aws cli from the server hosting the docker:
docker exec -it -u root <DOCKER_CONTAINER_ID> yum install awscli -y
docker exec -it <DOCKER_CONTAINER_ID> aws configure set aws_access_key_id <MY_aws_access_key_id>
docker exec -it <DOCKER_CONTAINER_ID> aws configure set aws_secret_access_key <MY_aws_secret_access_key>
docker exec -it <DOCKER_CONTAINER_ID> aws configure set region <MY_region>
This worked for me.
Now I want to "translate" this flow into an AWS ECS task.
In ECS I will use parameters instead of running the above 3 "aws configure" commands.
MY QUESTION
How can I do my 3rd step, installing the AWS CLI, from WITHIN the ECS task? (Meaning: not running it on the EC2 server hosting the ECS cluster.)
When I was working with plain Docker, I also thought of these options for using the AWS CLI:
1. Find an official elastic docker image containing both logstash and aws cli. <-- I did not find one.
2. Create such an image by myself and use it. <-- I'd prefer not to, because I want to avoid the maintenance of building new custom images whenever needed (e.g. when a new version of the logstash image is available).
Eventually I chose the 3 steps above, but I'm open to suggestions.
Also, my tests showed that running 2 containers within the same ECS task:
logstash
awscli (image "amazon/aws-cli")
with the logstash container then using the aws cli container, is not working.
THANKS A LOT IN ADVANCE :-)
Your option #2, creating the image yourself, is really the best way to do this. Anything else is going to be a "hack". Also, you shouldn't be running aws configure for an image running in ECS; you should be assigning an IAM role to the task, and the AWS CLI will pick that up and use it.
Mark B, your answer helped me to solve this. Thanks!
Writing the solution here in case it helps somebody else.
There is no need to install AWS CLI, in the logstash docker container running inside the ECS task.
Inside the logstash container (from image "docker.elastic.co/logstash/logstash:7.10.2") there is an AWS SDK that can connect to S3.
The only thing required is to allow the ECS task execution role access to S3.
(I attached AmazonS3FullAccess policy)
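For reference, the CLI equivalent of attaching that policy, assuming the default role name (a bucket-scoped policy would be tighter than AmazonS3FullAccess):
aws iam attach-role-policy \
  --role-name ecsTaskExecutionRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess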

How to run a simple Docker container when an EC2 is launched in an AWS auto-scaling group?

$ terraform version
Terraform v0.14.4
I'm using Terraform to create an AWS autoscaling group, and it successfully launches an EC2 via a launch template, also created by the same Terraform plan. I added the following user_data definition in the launch template. The AMI I'm using already has Docker configured, and has the Docker image that I need.
user_data = filebase64("${path.module}/docker_run.sh")
and the docker_run.sh file simply contains
docker run -p 80:3000 -d 1234567890.dkr.ecr.us-east-1.amazonaws.com/node-app:latest
However, when I ssh to the EC2 instance, the container is NOT running. What am I missing?
Update:
Per Marcin's comment, I see the following in /var/log/cloud-init-output.log
Jan 11 22:11:45 cloud-init[3871]: __init__.py[WARNING]: Unhandled non-multipart (text/x-not-multipart) userdata: 'docker run -p 80:3000 -d...'
From the AWS docs and what you've posted, the likely reason is that you are missing the #!/bin/bash line in your docker_run.sh:
User data shell scripts must start with the #! characters and the path to the interpreter you want to read the script (commonly /bin/bash).
Thus your docker_run.sh should be:
#!/bin/bash
docker run -p 80:3000 -d 1234567890.dkr.ecr.us-east-1.amazonaws.com/node-app:latest
If this still fails, please check /var/log/cloud-init-output.log on the instance for errors.

Docker unable to connect AWS EC2 cloud

Hi, I am able to deploy my Spring Boot application in my local Docker container (1.11.2) on Windows 7. I followed the steps below to run the Docker image in AWS EC2 (free account: eu-central-1) but am getting an error.
Step 1
Generated Amazon "AccessKeyID" and "SecretKey".
Step 2
Created a new repository, and it shows 5 steps to push my Docker image to AWS EC2.
Step 3
Installed the AWS CLI, ran "aws configure", and configured all the details.
Running aws iam list-users --output table shows the full user list.
Step 4
Ran the following command in the Docker container: aws ecr get-login --region us-west-2
It returns the docker login command.
When running that docker login command, it returns the following error:
XXXX#XXXX MINGW64 ~
$ docker login -u AWS -p <accessKey>/<secretKey>
Uwg
Error response from daemon: Get https://registry-1.docker.io/v2/: unauthorized:
incorrect username or password
XXXX#XXXX MINGW64 ~
$ gLBBgkqhkiG9w0BBwagggKyMIICrgIBADCCAqcGCSqGSIb3DQEHATAeBglghkgBZQMEAS4wEQQME8
Zei
bash: gLBBgkqhkiG9w0BBwagggKyMIICrgIBADCCAqcGCSqGSIb3DQEHATAeBglghkgBZQMEAS4wEQQ
ME8Zei: command not found
XXXX#XXXX MINGW64 ~
$ lJnpBND9CwzAgEQgIICeLBms72Gl3TeabEXDx+YkK9ZlbyGxPmsuVI/rq81tDeIC68e0Ma+ghg3Dt
Bus
bash: lJnpBND9CwzAgEQgIICeLBms72Gl3TeabEXDx+YkK9ZlbyGxPmsuVI/rq81tDeIC68e0Ma+ghg
3DtBus: No such file or directory
I didn't get a proper answer on Google. It would be great if someone could guide me to resolve this issue. Thanks in advance.
Your command is not pointing to your ECR endpoint, but to DockerHub. Using Linux, normally I would simply run:
$ eval $(aws ecr get-login --region us-west-2)
This is possible because the get-login command is a wrapper that retrieves a new authorization token and formats the docker login command. You only need to execute the formatted command (in this case with eval)
But if you really want to run the docker login manually, you'll have to specify the authorization token and the endpoint of your repository:
$ docker login -u AWS -p <password> -e none https://<aws_account_id>.dkr.ecr.<region>.amazonaws.com
Where <password> is actually the authorization token (which can be generated by the aws ecr get-authorization-token command).
Please refer to the documentation for more details: http://docs.aws.amazon.com/cli/latest/reference/ecr/index.html
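One note for readers on newer tooling: get-login was removed in AWS CLI v2; the replacement is get-login-password, which pipes the token straight into docker login instead of printing a command:
aws ecr get-login-password --region us-west-2 \
  | docker login --username AWS --password-stdin <aws_account_id>.dkr.ecr.us-west-2.amazonaws.com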

Pass AWS credentials (IAM role credentials) to code running in Docker container

When running code on an EC2 instance, the SDK I use to access AWS resources automagically talks to a locally linked web server on 169.254.169.254 and gets that instance's AWS credentials (access_key, secret) that are needed to talk to other AWS services.
Also there are other options, like setting the credentials in environment variables or passing them as command line args.
What is the best practice here? I would really prefer to let the container access 169.254.169.254 (by routing the requests), or even better, run a proxy container that mimics the behavior of the real server at 169.254.169.254.
Is there already a solution out there?
The EC2 metadata service will usually be available from within docker (unless you use a more custom networking setup - see this answer on a similar question).
If your docker network setup prevents it from being accessed, you might use the ENV directive in your Dockerfile or pass them directly during run, but keep in mind that credentials from IAM roles are automatically rotated by AWS.
Amazon does have some mechanisms for allowing containers to access IAM roles via the SDK, either by routing/forwarding requests through the ECS agent container or the host. There is way too much to copy and paste, but note that using --net host is the LEAST recommended option, because without additional filtering it gives your container full access to anything its host has permission to do.
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html
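As a quick illustration of that mechanism: when a task role is attached, ECS exposes temporary credentials to the container through a local endpoint, and the SDKs/CLI pick them up automatically. From inside a running task you can inspect them with:
curl -s "http://169.254.170.2${AWS_CONTAINER_CREDENTIALS_RELATIVE_URI}"
The script below covers the other cases, where there is no instance/task role and you bootstrap credentials from a local profile or an sts assume-role call: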
declare -a ENVVARS
declare AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN

get_aws_creds_local () {
  # Use this to get credentials on a non-AWS host, assuming you've configured them
  # via some mechanism in the past. Don't pass a profile to gitlab-runner, because
  # it doesn't see the ~/.aws/credentials file where it would look up profiles.
  awsProfile=${AWS_PROFILE:-default}
  AWS_ACCESS_KEY_ID=$(aws --profile "$awsProfile" configure get aws_access_key_id)
  AWS_SECRET_ACCESS_KEY=$(aws --profile "$awsProfile" configure get aws_secret_access_key)
  AWS_SESSION_TOKEN=$(aws --profile "$awsProfile" configure get aws_session_token)
}

get_aws_creds_iam () {
  TEMP_ROLE=$(aws sts assume-role --role-arn "arn:aws:iam::123456789012:role/example-role" --role-session-name AWSCLI-Session)
  AWS_ACCESS_KEY_ID=$(echo "$TEMP_ROLE" | jq -r '.Credentials.AccessKeyId')
  AWS_SECRET_ACCESS_KEY=$(echo "$TEMP_ROLE" | jq -r '.Credentials.SecretAccessKey')
  AWS_SESSION_TOKEN=$(echo "$TEMP_ROLE" | jq -r '.Credentials.SessionToken')
}

# calling both here; the assume-role values overwrite the locally configured ones
get_aws_creds_local
get_aws_creds_iam

ENVVARS=("AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID" "AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY" "AWS_SESSION_TOKEN=$AWS_SESSION_TOKEN")

# passing creds into GitLab runner
gitlab-runner exec docker stepName $(printf " --env %s" "${ENVVARS[@]}")

# using creds with a docker container
docker run -it --rm $(printf " --env %s" "${ENVVARS[@]}") amazon/aws-cli sts get-caller-identity