EC2 instance policy (profile) not allowing image pull from ECR

I have attached the following policy to my EC2 instance
{
    "Sid": "ECRPull",
    "Effect": "Allow",
    "Action": [
        "ecr:Describe*",
        "ecr:Get*",
        "ecr:List*",
        "ecr:BatchGetImage",
        "ecr:BatchCheckLayerAvailability"
    ],
    "Resource": [
        "arn:aws:ecr:eu-west-1:123456789:repository/my-repo"
    ]
}
However, the following docker pull from within the instance fails. Why?
docker pull 123456789.dkr.ecr.eu-west-1.amazonaws.com/my-repo:sometag
no basic auth credentials

The docker command doesn't call the AWS IAM service, as Docker is not part of AWS. You have to use an aws command before issuing docker commands.
aws ecr get-login gives you a temporary token.
Use the token with the username AWS to do a docker login:
docker login -u AWS -p <token from previous command> -e none https://<aws_account_id>.dkr.ecr.eu-west-1.amazonaws.com
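Note that aws ecr get-login has since been deprecated; with AWS CLI v2 (or a recent v1 release), the equivalent, assuming the same account, region and repository as in the question, would be roughly:
aws ecr get-login-password --region eu-west-1 | docker login --username AWS --password-stdin 123456789.dkr.ecr.eu-west-1.amazonaws.com
docker pull 123456789.dkr.ecr.eu-west-1.amazonaws.com/my-repo:sometag
Either way, the instance profile also needs ecr:GetAuthorizationToken (with Resource "*") for the login step to succeed.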

Related

EC2 instance unable to access its instance role with awscli and ecs-agent

I'm currently writing a Terraform module for EC2 ASGs with ECS. Everything about starting the instances works, including IAM-dependent actions such as KMS-encrypted volumes. However, the ECS agent fails with:
Error getting ECS instance credentials from default chain: NoCredentialProviders: no valid providers in chain.
Unfortunately, most posts I find about this are about the CLI being run without credentials configured; however, this should of course use the instance profile.
The posts I do find regarding that are about missing IAM policies for the instance role. However, I do have this trust relationship:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": [
                    "ec2.amazonaws.com",
                    "ecs.amazonaws.com"
                ]
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
(I added ECS because someone on SO had it in there; I don't think that's actually needed. I've also removed some conditions.)
It has these policies attached:
AmazonSSMManagedInstanceCore
AmazonEC2ContainerServiceforEC2Role
When I connect via SSH and run any awscli command, I get the error
Unable to locate credentials. You can configure credentials by running "aws configure".
But with
curl http://169.254.169.254/latest/meta-data/iam/info
I see the correct instance profile ARN and with
curl http://169.254.169.254/latest/meta-data/identity-credentials/ec2/security-credentials/ec2-instance
I see temporary credentials. With these in the awscli configuration,
aws sts get-caller-identity
returns the correct results. There are no iptables rules or routes blocking access to the metadata service, and I've deactivated the IMDSv2 token requirement for the time being.
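(For reference, the credentials the SDK default chain actually reads come from the instance-profile path rather than the identity-credentials path above; a quick sanity check, using whatever role name iam/info reports, would be:)
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/<role-name>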
I'm using the latest stable version of ECS-optimized Amazon Linux 2.
What might be the issue here?

Why can't I get to the UI for AWS Airflow using webtoken?

In AWS Console, I did the following:
Created an S3 bucket & key: s3://my-airflow and s3://my-airflow/dags
Set up an Airflow Environment.
Created and attached a Service role as described here: https://docs.aws.amazon.com/mwaa/latest/userguide/mwaa-create-role.html
Attached a Policy to allow my user to generate a token like this:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "airflow:CreateWebLoginToken",
            "Resource": [
                "arn:aws:airflow:us-west-2:<accountID>:role/myAirflowEnv/Admin"
            ]
        }
    ]
}
Then, using the CLI, I requested the token like this:
aws mwaa create-web-login-token --name myAirflowEnv --region us-west-2
It worked and returned a webToken.
I then pieced together the UI link as suggested (within 60 seconds):
https://{generated0-uuid}-vpce.c0.us-west-2.airflow.amazonaws.com/aws_mwaa/aws-console-sso?login=true#{webToken}
and pasted it in my browser.
ISSUE: The page just spins, times out, nothing.
AWS, what is the secret?
I figured out what the issue is...
In my case, the web server is not in a public subnet group.
Note: There is no need to call the CLI tool, there is a popout link in the AWS console to go to the UI.
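If you do want to build the link yourself, the hostname and the token both come back from the same CLI call, so something along these lines (a sketch, assuming jq is installed) avoids hand-assembling the host part:
RESP=$(aws mwaa create-web-login-token --name myAirflowEnv --region us-west-2)
HOST=$(echo "$RESP" | jq -r .WebServerHostname)
TOKEN=$(echo "$RESP" | jq -r .WebToken)
echo "https://${HOST}/aws_mwaa/aws-console-sso?login=true#${TOKEN}"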

How to pull ECS docker image from an EC2 instance?

What is the best practice to pull a Docker image located in a repository in ECS from an EC2 instance?
I pushed Docker images into my repository located under ECS.
I would like to launch an EC2 instance and pull these images from it.
I usually take advantage of ECS tasks. But to just run a Docker container for 5 minutes, I need to go to Auto Scaling, set the minimum to 1, go to the ECS page, wait for an instance to be up, and run my task. Too annoying for my personal use. I'd like to run it quickly and stop it quickly.
I wanted to simply run my Docker container, but OK, that's not possible, so I am thinking of creating an EC2 launch template that will directly run my Docker container inside an EC2 instance.
How to do it?
How can I handle the keys/users and the AWS CLI inside my EC2 instance? (Access/secret access keys are limited to 30 minutes; I can't write them in plain text in the User Data of an EC2 instance/template.)
I think my need is very basic and I couldn't find the best way to do it. Blog articles mainly explain how to run Docker on Linux, not the best way to do it on AWS.
This can be accomplished with a combination of the EC2 instance role, and a script that performs docker login followed by a docker pull for your pushed image.
Pre-requisites: An EC2 instance with the AWS CLI and Docker installed.
First, you'll have to add the AWS-managed AmazonEC2ContainerRegistryReadOnly IAM policy to your EC2 instance's IAM role (this grants read access to all repositories in the account). If you'd like things to be more restrictive, you can use the following policy instead:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "GrantSingleImageReadOnlyAccess",
            "Effect": "Allow",
            "Action": [
                "ecr:BatchCheckLayerAvailability",
                "ecr:GetDownloadUrlForLayer",
                "ecr:GetRepositoryPolicy",
                "ecr:DescribeRepositories",
                "ecr:ListImages",
                "ecr:DescribeImages",
                "ecr:BatchGetImage"
            ],
            "Resource": "arn:aws:ecr:<region>:<aws-account-id>:repository/<image-name>"
        },
        {
            "Sid": "GrantECRAuthAccess",
            "Effect": "Allow",
            "Action": "ecr:GetAuthorizationToken",
            "Resource": "*"
        }
    ]
}
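Attaching either policy to the instance role can also be done from the CLI; for the managed policy, for example (the role name here is hypothetical):
aws iam attach-role-policy --role-name my-ec2-instance-role --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly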
Next, you'll have to create a script to perform login and image pull for you. A typical script would look something like this:
aws ecr get-login-password --region <region> | docker login --username AWS --password-stdin <aws-account-id>.dkr.ecr.<region>.amazonaws.com
docker pull <aws-account-id>.dkr.ecr.<region>.amazonaws.com/<image-name>:<optional-tag>
Note that this script will have to run as the root user (or a user in the docker group) for proper Docker daemon access.
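If you want the pull to happen automatically at boot (and avoid baking access keys into User Data, which the instance role makes unnecessary), a rough User Data sketch for a plain Amazon Linux 2 instance might look like this, with region, account ID and image name as placeholders:
#!/bin/bash
# relies on the instance role for ECR auth, so no access keys are embedded
yum install -y docker
systemctl enable --now docker
aws ecr get-login-password --region <region> | docker login --username AWS --password-stdin <aws-account-id>.dkr.ecr.<region>.amazonaws.com
docker run -d <aws-account-id>.dkr.ecr.<region>.amazonaws.com/<image-name>:<optional-tag>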
Another way of solving this altogether would be to look into automation options for ECS tasks.

How can I run commands in a running container in AWS ECS using Fargate?

If I am running a container in AWS ECS using EC2, then I can access the running container and execute any command,
i.e.
docker exec -it <containerid> <command>
How can I run commands in the running container or access container in AWS ECS using Fargate?
Update (16 March 2021):
AWS announced a new feature called ECS Exec which provides the ability to exec into a running container on Fargate or even those running on EC2. This feature makes use of AWS Systems Manager (SSM) to establish a secure channel between the client and the target container. This detailed blog post from Amazon describes how to use this feature along with all the prerequisites and the configuration steps.
Original Answer:
With Fargate you don't get access to the underlying infrastructure, so docker exec doesn't seem possible. The documentation doesn't state this explicitly, but it is mentioned on slide 19 of this Deep Dive into AWS Fargate presentation by Amazon:
Some caveats: can’t exec into the container, or access the underlying
host (this is also a good thing)
There's also some discussion about it on this open issue in ECS CLI github project.
You could try to run an SSH server inside the container to get access, but I haven't tried it or come across anyone doing this. It also doesn't seem like a good approach, so you are limited there.
AWS Fargate is a managed service and it makes sense not to allow access into containers.
If you need to troubleshoot the container you can always increase the log level of your app running in containers. The best practices on working with containers say:
"Docker containers are in fact immutable. This means that a running
container never changes because in case you need to update it, the
best practice is to create a new container with the updated version of
your application and delete the old one."
Hope it helps.
You need to provide a "Task role" for the Task Definition (this is different from the "Task execution role"). This can be done by first creating the role in IAM.
IAM role creation
IAM > roles > create role
custom trust policy > copy + paste
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "ecs-tasks.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
Add permission > Create Policy
JSON > replace YOUR_REGION_HERE & YOUR_ACCOUNT_ID_HERE & CLUSTER_NAME > copy + paste
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssmmessages:CreateControlChannel",
                "ssmmessages:CreateDataChannel",
                "ssmmessages:OpenControlChannel",
                "ssmmessages:OpenDataChannel"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "logs:DescribeLogGroups"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogStream",
                "logs:DescribeLogStreams",
                "logs:PutLogEvents"
            ],
            "Resource": "arn:aws:logs:YOUR_REGION_HERE:YOUR_ACCOUNT_ID_HERE:log-group:/aws/ecs/CLUSTER_NAME:*"
        }
    ]
}
Give it a name
go back to Add permissions > search by name > check > Next
Give a role name > create role
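(The same role can be created from the CLI instead of the console; a sketch, assuming the trust policy and permissions policy above have been saved locally under the file names shown:)
aws iam create-role --role-name ecs-exec-task-role --assume-role-policy-document file://trust-policy.json
aws iam put-role-policy --role-name ecs-exec-task-role --policy-name ecs-exec-permissions --policy-document file://ecs-exec-policy.json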
ECS new task
go back to ECS > go to task definition and create a new revision
select your new role for "Task role" (different than "Task execution role") > update Task definition
go to your service > update > ensure revision is set to latest > finish update of the service
The service will replace the current task and should automatically provision a new task with its new role.
try again
Commands I used to exec in:
Enable execute command on the service:
aws ecs update-service --cluster CLUSTER_NAME --service SERVICE_NAME --region REGION --enable-execute-command --force-new-deployment
Add the task ARN to an environment variable for easier CLI use. This assumes only one task is running for the service; otherwise, grab the ARN manually from ECS and set it yourself:
TASK_ARN=$(aws ecs list-tasks --cluster CLUSTER_NAME --service SERVICE_NAME --region REGION --output text --query 'taskArns[0]')
Describe the task:
aws ecs describe-tasks --cluster CLUSTER_NAME --region REGION --tasks $TASK_ARN
Exec in:
aws ecs execute-command --region REGION --cluster CLUSTER_NAME --task $TASK_ARN --container CONTAINER --command "sh" --interactive
As of 16 March 2021, AWS has introduced ECS Exec, which can be used to run commands in a container running on either EC2 or Fargate.
The announcement is available at
https://aws.amazon.com/about-aws/whats-new/2021/03/amazon-ecs-now-allows-you-to-execute-commands-in-a-container-running-on-amazon-ec2-or-aws-fargate/

Kubernetes pull private external amazon ECR images

I have an Amazon account with a K8S cluster which is able to pull images from the same account's ECR repository.
But my company has another account with another ECR repository. How can I pull images from this "external" ECR repository?
I'm also a Rancher user and I used to do this by installing a special container (https://github.com/rancher/rancher-ecr-credentials) which does the job.
Is there something equivalent for Kubernetes?
Thanks for your precious help
Since you already have this set up for pulling images from the same account, you can solve it at the IAM policy level or with ECR repository permissions: in the other AWS account, set up a repository policy specifying the AWS account number (where k8s is) that will be able to pull images.
For example, grant pull permissions in the ECR Permissions tab:
{
    "Version": "2008-10-17",
    "Statement": [
        {
            "Sid": "k8s-aws-permissions",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::aws_account_number:root"
            },
            "Action": [
                "ecr:GetDownloadUrlForLayer",
                "ecr:BatchGetImage",
                "ecr:BatchCheckLayerAvailability"
            ]
        }
    ]
}
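The same repository policy can be applied from the CLI instead of the Permissions tab; a sketch, assuming the JSON above is saved as cross-account-pull-policy.json and the repository is named my-repo:
aws ecr set-repository-policy --repository-name my-repo --region <region> --policy-text file://cross-account-pull-policy.json
The pulling side (the node instance role or credentials in the k8s account) still needs its usual ecr:GetAuthorizationToken and pull permissions.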