What is the best practice for pulling a Docker image from an ECR repository onto an EC2 instance?
I pushed Docker images to my repository in ECR (under the ECS section of the console).
I would like to launch an EC2 instance and pull these images from it.
I am used to relying on ECS tasks, but to run a Docker container for just 5 minutes I have to go to Auto Scaling, set the minimum to 1, go to the ECS page, wait for an instance to come up, and then run my task. That is too cumbersome for my personal use; I'd like to start it quickly and stop it quickly.
Since simply running my Docker container that way isn't practical, I am thinking of creating an EC2 launch template that runs my Docker container directly inside an EC2 instance.
How do I do it?
How can I handle the keys/users and the AWS CLI inside my EC2 instance? (The access key/secret access key are limited to 30 minutes, so I can't write them in plain text in the User Data of an EC2 instance/template.)
I think my need is very basic, but I couldn't find the best way to do it. Blog articles mainly explain how to run Docker on Linux, not the best way to do it on AWS.
This can be accomplished with a combination of an EC2 instance role and a script that performs a docker login followed by a docker pull of your pushed image.
Prerequisites: an EC2 instance with the AWS CLI and Docker installed.
First, you'll have to add the AWS-managed AmazonEC2ContainerRegistryReadOnly IAM policy to your EC2 instance's IAM role (this grants read access to all pushed images). If you'd like things to be more restrictive, you can use the following policy instead:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "GrantSingleImageReadOnlyAccess",
      "Effect": "Allow",
      "Action": [
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:GetRepositoryPolicy",
        "ecr:DescribeRepositories",
        "ecr:ListImages",
        "ecr:DescribeImages",
        "ecr:BatchGetImage"
      ],
      "Resource": "arn:aws:ecr:<region>:<aws-account-id>:repository/<image-name>"
    },
    {
      "Sid": "GrantECRAuthAccess",
      "Effect": "Allow",
      "Action": "ecr:GetAuthorizationToken",
      "Resource": "*"
    }
  ]
}
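If you prefer the CLI over the console, attaching the managed policy is a one-liner. A minimal sketch; the role name my-ec2-role is a hypothetical placeholder for your instance's role:
# Attach the AWS-managed read-only ECR policy to the instance's IAM role.
aws iam attach-role-policy \
  --role-name my-ec2-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly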
Next, you'll have to create a script that performs the login and image pull for you. With the newer aws ecr get-login-password command, the password is piped straight into docker login. A typical script would look something like this:
aws ecr get-login-password --region <region> | docker login --username AWS --password-stdin <aws-account-id>.dkr.ecr.<region>.amazonaws.com
docker pull <aws-account-id>.dkr.ecr.<region>.amazonaws.com/<image-name>:<optional-tag>
Note that this script will have to run as the root user (or a user in the docker group) for proper Docker daemon access.
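To get the "start quickly, stop quickly" workflow from the question, the same commands can go into the instance's User Data so the pull happens at boot. A minimal sketch, assuming an AMI with Docker and the AWS CLI preinstalled (e.g. the ECS-optimized AMI) and the placeholders replaced with your own values:
#!/bin/bash
# User Data runs as root at first boot, so no sudo is needed.
# Log in to ECR with the instance role's temporary credentials --
# nothing sensitive is hard-coded here.
aws ecr get-login-password --region <region> | \
  docker login --username AWS --password-stdin <aws-account-id>.dkr.ecr.<region>.amazonaws.com
# Pull and start the container; the run flags are illustrative only.
docker pull <aws-account-id>.dkr.ecr.<region>.amazonaws.com/<image-name>:<optional-tag>
docker run -d <aws-account-id>.dkr.ecr.<region>.amazonaws.com/<image-name>:<optional-tag>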
Another way of solving this altogether would be to look into automation options for ECS tasks.
I am learning AWS and I have the following task in an online training course:
Configure the MongoDB VM as highly privileged – configure an instance
profile to the VM and add the permission “ec2:*” as a custom policy.
I am trying to work out what that means. Is the task asking for a role that enables the VM instance to have full control over all EC2 resources?
If I understand it correctly, then I think the following policy would implement it.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "ec2:*"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:ec2:*:*:instance"
    }
  ]
}
My understanding is that this policy is saying any EC2 instance can perform any EC2 action. Is that right?
I would say you are almost correct. A role is attached to an individual service resource, which means your particular VM can perform any EC2 action, but only on resources matching arn:aws:ec2:*:*:instance.
There is a difference between saying "any EC2 instance can perform any EC2 action" and saying "the EC2 instance to which this role is attached can perform any EC2 action". Only the latter is true here.
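What scopes the permissions to one particular VM is the role's trust policy plus the instance profile that carries it. A sketch of creating such a role from the CLI; the names mongodb-vm-role and mongodb-vm-profile are hypothetical placeholders:
# Trust policy: only the EC2 service may assume this role on behalf
# of instances that have the matching instance profile attached.
cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
aws iam create-role --role-name mongodb-vm-role \
  --assume-role-policy-document file://trust-policy.json
# Wrap the role in an instance profile; the profile is then attached to
# the VM (via the console or ec2 associate-iam-instance-profile).
aws iam create-instance-profile --instance-profile-name mongodb-vm-profile
aws iam add-role-to-instance-profile --instance-profile-name mongodb-vm-profile \
  --role-name mongodb-vm-role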
I have an EKS cluster and an EC2 instance. I would like to create an instance profile and attach it to the EC2 instance - this profile should allow ONLY READ access to the EKS cluster.
Would the following policy be apt for this requirement?
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "eks:ListNodegroups",
        "eks:DescribeFargateProfile",
        "eks:ListTagsForResource",
        "eks:ListAddons",
        "eks:DescribeAddon",
        "eks:ListFargateProfiles",
        "eks:DescribeNodegroup",
        "eks:ListUpdates",
        "eks:DescribeUpdate",
        "eks:AccessKubernetesApi",
        "eks:DescribeCluster",
        "eks:ListClusters",
        "eks:DescribeAddonVersions"
      ],
      "Resource": "*"
    }
  ]
}
It depends on what you mean by "read" access.
If you mean "read" from the AWS perspective - being able to use the AWS CLI to describe your EKS resources - then yes, that policy would be sufficient to get you started. It will not cover any kubectl commands.
But if you mean read as in being able to execute kubectl commands against the cluster, then you will not be able to achieve that with this policy alone.
To implement read access to the cluster itself using kubectl commands, your EC2 instance will need a minimum of the following IAM permissions:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "eks:DescribeCluster",
        "eks:ListClusters"
      ],
      "Resource": "*"
    }
  ]
}
With this, your EC2 instance will be able to execute eksctl utils write-kubeconfig --cluster=cluster-name to configure the kubeconfig. This also assumes you have the required components (such as kubectl itself) installed on your EC2 instance.
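If you'd rather not install eksctl, the AWS CLI can generate the same kubeconfig entry. A quick sketch with placeholder names:
# CLI equivalent of the eksctl command above: write/merge a kubeconfig entry.
aws eks update-kubeconfig --name <cluster-name> --region <region>
# Sanity check; this only succeeds once the aws-auth mapping below is in place.
kubectl get nodes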
You also need to set up permissions within your cluster because the IAM permissions alone don't actually grant any visibility within the cluster itself.
The role you assign to your EC2 instance needs to be added to the aws-auth configmap in the kube-system namespace. See "Managing users or IAM roles for your cluster" in the AWS docs for examples.
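One way to add the role without hand-editing the configmap is eksctl's identity-mapping helper. A sketch, where the role ARN and the group name eks-readonly-group are hypothetical placeholders:
# Map the EC2 instance role into the cluster; members of the
# "eks-readonly-group" Kubernetes group get whatever RBAC grants that group.
eksctl create iamidentitymapping \
  --cluster <cluster-name> --region <region> \
  --arn arn:aws:iam::<account-id>:role/<ec2-role-name> \
  --group eks-readonly-group \
  --username ec2-readonly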
Unfortunately, I don't believe there's a simple out-of-the-box RBAC role that gives you read-only access to the entire cluster. Kubernetes provides four user-facing roles, and of them only the system:masters group has cluster-wide access.
Have a look at the Kubernetes "Using RBAC Authorization" documentation - specifically the section on user-facing roles.
You will need to design a permission strategy to fit your needs, but you do have the default view role as a starting point. The view ClusterRole was designed to be granted in a namespace-specific capacity via RoleBindings, although you can bind it cluster-wide, as sketched below.
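A minimal sketch of binding the built-in view role cluster-wide to the mapped group (eks-readonly-group matches the hypothetical group used in the identity mapping above):
# Grant read-only access to most cluster resources for the mapped group.
kubectl create clusterrolebinding eks-readonly-view \
  --clusterrole=view \
  --group=eks-readonly-group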
Permissions and RBAC for Kubernetes is a very deep rabbit-hole.
If I am running a container in AWS ECS on EC2, I can access the running container and execute any command inside it, i.e.:
docker exec -it <containerid> <command>
How can I run commands in the running container or access container in AWS ECS using Fargate?
Update (16 March 2021):
AWS announced a new feature called ECS Exec which provides the ability to exec into a running container on Fargate, or even into containers running on EC2. This feature uses AWS Systems Manager (SSM) to establish a secure channel between the client and the target container. This detailed blog post from Amazon describes how to use the feature along with all the prerequisites and configuration steps.
Original Answer:
With Fargate you don't get access to the underlying infrastructure, so docker exec doesn't seem possible. The documentation doesn't state this explicitly, but it's mentioned in the Deep Dive into AWS Fargate presentation by Amazon, on slide 19:
Some caveats: can’t exec into the container, or access the underlying
host (this is also a good thing)
There's also some discussion about it in this open issue in the ECS CLI GitHub project.
You could try to run an SSH server inside the container to get access, but I haven't tried it or come across anyone doing this. It also doesn't seem like a good approach, so you are limited there.
AWS Fargate is a managed service and it makes sense not to allow access into containers.
If you need to troubleshoot a container, you can always increase the log level of the app running in it. Best practices on working with containers say:
"Docker containers are in fact immutable. This means that a running
container never changes because in case you need to update it, the
best practice is to create a new container with the updated version of
your application and delete the old one."
Hope it helps.
You need to provide a "Task role" for the task definition (this is different from the "Task execution role"). This can be done by first going to IAM.
IAM role creation
IAM > roles > create role
custom trust policy > copy + paste
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ecs-tasks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
Add permission > Create Policy
JSON > replace YOUR_REGION_HERE & YOUR_ACCOUNT_ID_HERE & CLUSTER_NAME > copy + paste
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ssmmessages:CreateControlChannel",
        "ssmmessages:CreateDataChannel",
        "ssmmessages:OpenControlChannel",
        "ssmmessages:OpenDataChannel"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "logs:DescribeLogGroups"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogStream",
        "logs:DescribeLogStreams",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:YOUR_REGION_HERE:YOUR_ACCOUNT_ID_HERE:log-group:/aws/ecs/CLUSTER_NAME:*"
    }
  ]
}
Give it a name
go back to Add permissions > search by name > check > Next
Give a role name > create role
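If you'd rather script these IAM steps, a sketch of the CLI equivalent, assuming the two JSON documents above are saved locally as trust-policy.json and permissions.json (the role and policy names are hypothetical):
# Create the task role with the ECS-tasks trust policy shown above.
aws iam create-role --role-name ecs-exec-task-role \
  --assume-role-policy-document file://trust-policy.json
# Attach the SSM and CloudWatch Logs permissions as an inline policy.
aws iam put-role-policy --role-name ecs-exec-task-role \
  --policy-name ecs-exec-permissions \
  --policy-document file://permissions.json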
ECS new task
go back to ECS > go to the task definition and create a new revision
select your new role for "Task role" (different from "Task execution role") > update the task definition
go to your service > update > ensure the revision is set to the latest > finish updating the service
ECS will replace the current task and should auto-provision a new task with the new role
try again
Commands I used to exec in (note that aws ecs execute-command also requires the Session Manager plugin for the AWS CLI installed on the client):
enable execute command on the service
aws ecs update-service --cluster CLUSTER_NAME --service SERVICE_NAME --region REGION --enable-execute-command --force-new-deployment
add the task's ARN to an environment variable for easier CLI use. This assumes only one task is running for the service; otherwise, go to ECS, grab the ARN manually, and set it for your CLI
TASK_ARN=$(aws ecs list-tasks --cluster CLUSTER_NAME --service SERVICE_NAME --region REGION --output text --query 'taskArns[0]')
see the task
aws ecs describe-tasks --cluster CLUSTER_NAME --region REGION --tasks $TASK_ARN
exec in
aws ecs execute-command --region REGION --cluster CLUSTER_NAME --task $TASK_ARN --container CONTAINER --command "sh" --interactive
As of 16 March 2021, AWS has introduced ECS Exec, which can be used to run commands in a container running on either EC2 or Fargate.
The launch announcement is available at:
https://aws.amazon.com/about-aws/whats-new/2021/03/amazon-ecs-now-allows-you-to-execute-commands-in-a-container-running-on-amazon-ec2-or-aws-fargate/
I have an Amazon account with a K8s cluster which is able to pull images from the same account's ECR repository.
But my company has another account with another ECR repository. How can I pull images from this "external" ECR repository?
I'm also a Rancher user, and I used to do this by installing a special container (https://github.com/rancher/rancher-ecr-credentials) which does the job.
Is there something equivalent for Kubernetes?
Thanks for your help
Since you already have this set up for pulling images from the same account, you can handle the cross-account case at the IAM policy level or with ECR repository permissions. In the other AWS account, set up a policy specifying the AWS account number (where Kubernetes runs) that will be able to pull images.
For example, grant pull permissions in the ECR Permissions tab:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "k8s-aws-permissions",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::aws_account_number:root"
      },
      "Action": [
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "ecr:BatchCheckLayerAvailability"
      ]
    }
  ]
}
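The same repository policy can also be applied from the CLI; a sketch, assuming the JSON above is saved as policy.json and <repository-name> is a placeholder:
# Apply the cross-account pull policy to the repository in the other account.
aws ecr set-repository-policy \
  --repository-name <repository-name> \
  --policy-text file://policy.json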
I understand we need to log in to ECR to pull Docker images from AWS ECR. How can I make it anonymous? Since we keep code, data, and infrastructure (all open source) separate, we see no need for the infrastructure part to be private.
I was able to find a way to create a permission with *, but I'm not sure how to make it anonymous so that anyone who wants to download the images does not need IAM user access.
Below is the policy:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "AllowPublic",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "ecr:BatchCheckLayerAvailability"
      ]
    }
  ]
}
I'm not sure how to create an anonymous IAM user either.
If you read the FAQ:
Q: Can Amazon ECR host public container images?
Amazon ECR currently supports private images. However, using IAM resource-based permissions, you can configure policies for each repository to allow access to IAM users, roles, or other AWS accounts.
The only workaround I can think of is putting an EC2 machine in front and using NGINX to proxy_pass to the ECR URL, then using the EC2 IP as the Docker registry address.
Starting 1 December 2020, you can use Amazon ECR Public to pull container images anonymously.
Links to How To & Launch Announcement
Anyone who pulls images anonymously gets 500 GB of free data bandwidth
each month after which they can sign up for or sign in to an AWS
account. Simply authenticating with an AWS account increases free data
bandwidth to 5 TB each month when pulling images from the internet.
And finally, workloads running in AWS will get unlimited data
bandwidth from any region when pulling from ECR Public.
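Pulling anonymously from ECR Public then works like any normal docker pull; a minimal sketch with a hypothetical registry alias and image name:
# No docker login is needed for anonymous pulls from the public gallery.
docker pull public.ecr.aws/<registry-alias>/<image-name>:<tag>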