I'm trying to build a Docker image in a Pipeline account and push it into the ECR of another account (Dev).
While I'm able to docker push from CodeBuild to an ECR repo within the same account (Pipeline), I'm having difficulty doing the same to an ECR repo in another AWS account.
The policy attached to the ECR repo on the Dev account:
{
"Version": "2008-10-17",
"Statement": [
{
"Sid": "AllowCrossAccountPush",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::<pipelineAccountID>:role/service-role/<codebuildRole>"
},
"Action": [
"ecr:BatchCheckLayerAvailability",
"ecr:CompleteLayerUpload",
"ecr:GetDownloadUrlForLayer",
"ecr:InitiateLayerUpload",
"ecr:PutImage",
"ecr:UploadLayerPart"
]
}
]
}
On my pipeline account, the service role running the build project matches the ARN on the policy above, and my buildspec contains the following snippet that pushes the image:
- $(aws ecr get-login --no-include-email --region us-east-1 --registry-ids <DevAccount>)
- docker tag <imageName>:latest $ECR_REPO_DEV:latest
- docker push $ECR_REPO_DEV:latest
CodeBuild is able to log in to ECR successfully, but when it tries to actually push the image, I get:
*denied: User: arn:aws:sts::<pipelineAccountID>:assumed-role/<codebuildRole>/AWSCodeBuild-413cfca0-133a-4f37-b505-a94668201e26 is not authorized to perform: ecr:InitiateLayerUpload on resource: arn:aws:ecr:us-east-1:<DevAccount>:repository/<repo>*
Additionally, I've made sure that the IAM policy for the role (residing in the Pipeline account) has permissions for this repo:
{
"Sid": "CrossAccountRepo",
"Effect": "Allow",
"Action": "ecr:*",
"Resource": "arn:aws:ecr:us-east-1:<DevAccount>:repository/sg-api"
}
I'm running out of ideas on what I could be missing. The only thing that comes to mind is having the build assume a cross-account role, but I'm not even sure that's possible. My goal is to keep the build pipeline separate from the Dev account, as I hear that's best practice.
Suggestions?
Thanks in advance.
Based on my understanding of this and the error message above, the most common cause is that the ECR repository does not have a policy that allows the CodeBuild IAM role to access it.
Please set this policy on the ECR repo (in the Dev account):
{
"Version": "2008-10-17",
"Statement": [
{
"Sid": "AllowCrossAccountPush",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::<dev acount>:root"
},
"Action": [
"ecr:GetDownloadUrlForLayer",
"ecr:BatchCheckLayerAvailability",
"ecr:PutImage",
"ecr:InitiateLayerUpload",
"ecr:UploadLayerPart",
"ecr:CompleteLayerUpload"
]
}
]
}
Ref: https://docs.aws.amazon.com/AmazonECR/latest/userguide/repository-policy-examples.html#IAM_allow_other_accounts
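For reference, one way to apply that repository policy from the Dev account is with the CLI; this is only a sketch, and the repository name and policy file path are placeholders:
# Run with credentials for the Dev account (the account that owns the repository).
aws ecr set-repository-policy \
  --repository-name sg-api \
  --policy-text file://ecr-repo-policy.json \
  --region us-east-1

# Confirm what is actually attached to the repository.
aws ecr get-repository-policy \
  --repository-name sg-api \
  --region us-east-1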
Please add this policy to the CodeBuild service role (in the Pipeline account):
{
"Sid": "CrossAccountRepo",
"Effect": "Allow",
"Action": "ecr:*",
"Resource": "*"
}
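With both policies in place, a quick way to verify the cross-account push from the build environment is something like the following; this is only a sketch, assuming an AWS CLI version that has get-login-password (it replaced get-login in v2), and the account ID and image names are placeholders:
# Log in to the Dev account's registry from the build environment.
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin <DevAccount>.dkr.ecr.us-east-1.amazonaws.com

# Tag and push against the Dev registry URI.
docker tag <imageName>:latest <DevAccount>.dkr.ecr.us-east-1.amazonaws.com/sg-api:latest
docker push <DevAccount>.dkr.ecr.us-east-1.amazonaws.com/sg-api:latest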
I already have an existing service account with an IAM role that is currently able to pull secrets into my application pod, so I know that part is set up correctly.
However, I also want to add S3 policies to the same service account so that my pod can read from and write to an S3 bucket, but this fails with a 403 Forbidden error.
I'm able to exec into my pod, run aws s3 ls s3://my-bucket, see the contents of the bucket, and push objects to it from within the pod (which tells me the service account is configured correctly), but strangely I can't do the same from the actual application UI because of this 403 Forbidden error. I know the service account is still being used, because it still pulls in the secrets fine, but it fails to use the additional S3 policies.
I should also note that when I attach the exact same policy at the EKS worker node level, the application works fine and the 403 S3 error goes away, but it doesn't work when relying on the service account. Any ideas what this could be?
My bucket isn't using any encryption at this stage.
Trust relationship for IAM role:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Federated": "arn:aws:iam::<account-num>:oidc-provider/oidc.eks.eu-west-1.amazonaws.com/id/<oidc-num>"
},
"Action": "sts:AssumeRoleWithWebIdentity",
"Condition": {
"StringEquals": {
"oidc.eks.eu-west-1.amazonaws.com/id/<oidc-num>:sub": "system:serviceaccount:<app-namespace>:<service-account>"
}
}
}
]
}
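The :sub condition has to match the namespace and service-account name exactly, so it can be worth dumping what's actually attached to the role; a quick check, where the role name is a placeholder:
# Print the trust policy on the role and compare the ":sub" value with
# system:serviceaccount:<app-namespace>:<service-account> (exact match, no wildcards here).
aws iam get-role \
  --role-name <IAM-role-to-attach> \
  --query 'Role.AssumeRolePolicyDocument' \
  --output json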
IAM policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Action": "secretsmanager:GetSecretValue",
"Resource": [
"arn:aws:secretsmanager:eu-west-1:<account>:secret:<secret>"
]
},
{
"Sid": "",
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:PutLifecycleConfiguration",
"s3:ListBucket",
"s3:GetObject",
"s3:GetLifecycleConfiguration",
"s3:DeleteObject"
],
"Resource": [
"arn:aws:s3:::bucket/*",
"arn:aws:s3:::bucket"
]
}
]
}
The service account is annotated with the IAM role:
kubectl describe sa service-account
Name: service-account
Namespace: app-namespace
Labels: app.kubernetes.io/managed-by=Helm
Annotations: eks.amazonaws.com/role-arn: arn:aws:iam::account:role/IAM-role-to-attach
meta.helm.sh/release-name: release
meta.helm.sh/release-namespace: app-namespace
Image pull secrets: <none>
Mountable secrets: secret
Tokens: token
Events: <none>
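One way to narrow this down is to check which identity the application's SDK actually resolves inside the pod; this is only a sketch, and the pod name and namespace are placeholders:
# Check that the IRSA env vars and token were injected into the application container.
kubectl exec -n app-namespace <pod-name> -- env | grep -E 'AWS_ROLE_ARN|AWS_WEB_IDENTITY_TOKEN_FILE'

# Confirm which role the CLI/SDK resolves to from inside the pod.
kubectl exec -n app-namespace <pod-name> -- aws sts get-caller-identity
One common cause of this exact symptom is an application that bundles an AWS SDK version too old to support IRSA (AssumeRoleWithWebIdentity via the injected token file); such SDKs silently fall back to the node's instance profile, which would match the fact that attaching the policy at the worker-node level makes the 403 go away.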
I want to upload to S3 from a Fargate task. Can this be achieved by only specifying an ExecutionRoleArn, as opposed to specifying both an ExecutionRoleArn and a TaskRoleArn?
If I specify an ExecutionRoleArn that has the following permission policies attached:
Custom policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:PutObject"
],
"Resource": "arn:aws:s3:::example_bucket/*"
}
]
}
AmazonECSTaskExecutionRolePolicy:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ecr:GetAuthorizationToken",
"ecr:BatchCheckLayerAvailability",
"ecr:GetDownloadUrlForLayer",
"ecr:BatchGetImage",
"logs:CreateLogStream",
"logs:PutLogEvents"
],
"Resource": "*"
}
]
}
With the following trust relationship:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": [
"events.amazonaws.com",
"lambda.amazonaws.com",
"ecs-tasks.amazonaws.com"
]
},
"Action": "sts:AssumeRole"
}
]
}
Would this be sufficient to allow the task to upload to S3? Or do I need to define a TaskRoleArn?
The ExecutionRoleArn is used by the service to set up the task correctly; this includes pulling any images down from ECR.
The TaskRoleArn is used by the task to give it the permissions it needs to interact with other AWS Services (such as S3).
Technically both ARNs could point to the same role; however, I would suggest keeping them as separate roles to avoid confusion over which permissions are required for each of the scenarios the role is used in.
Additionally, your trust relationship should include the ecs.amazonaws.com service principal. The full list of service principals, depending on how you're using ECS, is below, although most can be removed (such as spot if you're not using Spot, or autoscaling if you're not using autoscaling); a task-definition sketch showing both role fields follows the list.
"ecs.amazonaws.com",
"ecs-tasks.amazonaws.com",
"spot.amazonaws.com",
"spotfleet.amazonaws.com",
"ecs.application-autoscaling.amazonaws.com",
"autoscaling.amazonaws.com"
In the case of Fargate, the two IAM roles play different roles.
Execution Role
This role is mandatory and you cannot run the task without it, even if you add the execution role's policies to the task role.
To reproduce this error, just set the execution role to None; you will not be able to launch the task.
AWS Forums (Unable to create a new revision of Task Definition)
Task Role
This role is optional, and you can add the S3-related permissions to it:
Optional IAM role that tasks can use to make API requests to authorized AWS services.
Your policy seems okay. Create an ecs_s3_upload_role and add the policy below:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:PutObject"
],
"Resource": "arn:aws:s3:::example_bucket/*"
}
]
}
Now the Fargate task will be able to upload to the S3 bucket, provided this role is set as the TaskRoleArn in the task definition.
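A sketch of creating that role with the CLI; the role name, policy name, and file names are placeholders:
# Trust policy so ECS tasks can assume the role.
cat > ecs-tasks-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ecs-tasks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
aws iam create-role \
  --role-name ecs_s3_upload_role \
  --assume-role-policy-document file://ecs-tasks-trust.json

# Attach the S3 upload policy shown above (saved as s3-upload-policy.json).
aws iam put-role-policy \
  --role-name ecs_s3_upload_role \
  --policy-name s3-upload \
  --policy-document file://s3-upload-policy.json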
The permissions attached to your execution role aren't available to your application code, so you should define your S3 permissions in a task role:
With IAM roles for Amazon ECS tasks, you can specify an IAM role that can be used by the containers in a task.
How can I get my ECS Fargate cluster to pull a container image from a private Docker registry (ECR) in another AWS account? If the registry is in the same account I don't need any extra authentication, but across accounts I get CannotPullContainerError: Error response from daemon: pull access denied for <account id>.dkr.ecr.ap-southeast-2.amazonaws.com/project/container, repository does not exist or may require 'docker login'
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/private-auth.html talks about authentication and private registries but doesn't seem to cover my use case of an ECR repository in another AWS account. It looks like I can grant access to a defined list of AWS accounts in the ECR permissions, but perhaps there are other approaches? The passwords generated for ECR only seem to last 12 hours, so that won't work.
In the account running your Fargate service, the task execution role specified in your task definition should have permissions to pull an image:
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"ecr:BatchCheckLayerAvailability",
"ecr:GetDownloadUrlForLayer",
"ecr:BatchGetImage"
],
"Resource": "arn:aws:ecr:eu-west-1:<account id>:repository/project",
"Effect": "Allow"
},
{
"Action": "ecr:GetAuthorizationToken",
"Resource": "*",
"Effect": "Allow"
}
]
}
And in the account containing the ECR repository, the repository policy should state that the account containing the task execution role is allowed to access it. In the ECR policy below the whole account is added, which might be a bit much security-wise, but that's something you can tighten (see the sketch after the policy).
{
"Version": "2008-10-17",
"Statement": [
{
"Sid": "PullImage",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::<accountid>:root"
},
"Action": [
"ecr:BatchGetImage",
"ecr:DescribeImages",
"ecr:DescribeRepositories",
"ecr:GetDownloadUrlForLayer",
"ecr:GetLifecyclePolicy",
"ecr:GetLifecyclePolicyPreview",
"ecr:GetRepositoryPolicy",
"ecr:ListImages"
]
}
]
}
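If you do want to tighten it, here is a sketch that grants only a specific task execution role instead of the whole account; the role ARN, repository name, and file name are placeholders:
# Run with credentials for the account that owns the ECR repository.
cat > ecr-pull-policy.json <<'EOF'
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "PullImage",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<accountid>:role/<taskExecutionRole>"
      },
      "Action": [
        "ecr:BatchGetImage",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer"
      ]
    }
  ]
}
EOF
aws ecr set-repository-policy \
  --repository-name project/container \
  --policy-text file://ecr-pull-policy.json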
I'm having an issue with the seemingly trivial task of getting CodeDeploy to deploy GitHub code to an Auto Scaling group in a blue/green deployment.
I have a pipeline set up, a deployment group set up, and the Auto Scaling group, but it fails when it gets to the actual deployment.
I went to my role and it seems like it has sufficient policies for it to go through with the blue/green deployment.
Is there a policy that I'm not considering that needs to be attached to this role?
I found the answer in this link:
https://h2ik.co/2019/02/28/aws-codedeploy-blue-green/
Without wanting to take the credit: only one statement was missing from #PeskyGnat's answer:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"iam:PassRole",
"ec2:CreateTags",
"ec2:RunInstances"
],
"Resource": "*"
}
]
}
I was also getting the error:
"The IAM role does not give you permission to perform operations in the following AWS service: AmazonAutoScaling. Contact your AWS administrator if you need help. If you are an AWS administrator, you can grant permissions to your users or groups by creating IAM policies."
I figured out the permissions needed to get past this error; I created the policy below and attached it to the CodeDeploy role:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"iam:PassRole",
"ec2:RunInstances",
"ec2:CreateTags"
],
"Resource": "*"
}
]
}
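If you manage the role from the CLI, a sketch of attaching it; the role name, policy name, and file name are placeholders:
# Save the policy above as codedeploy-bluegreen.json, then attach it inline.
aws iam put-role-policy \
  --role-name <CodeDeployServiceRole> \
  --policy-name codedeploy-bluegreen-extra \
  --policy-document file://codedeploy-bluegreen.json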
I'm trying to follow the steps to use Amazon Elastic Container Registry (ECR), but I'm stuck at the first step: obtaining an auth token.
Command:
aws ecr get-login
Or...
aws ecr get-login --region=us-west-2
Error:
An error occurred (AccessDeniedException) when calling the GetAuthorizationToken operation: User: arn:aws:iam::9#####4:user/### is not authorized to perform: ecr:GetAuthorizationToken on resource: *
AWS Permissions on user for region us-west-2:
AmazonEC2FullAccess
AmazonEC2ContainerRegistryFullAccess
Billing
AdministratorAccess
AmazonECS_FullAccess
AWS CLI version:
aws --version
aws-cli/1.14.40 Python/3.6.4 Darwin/18.2.0 botocore/1.8.44
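One thing worth double-checking, since the attached policies look broad, is which identity the CLI is actually using (for example, a stale access key or a different profile); a quick check, where the profile name is a placeholder:
# Show which IAM identity the default (or a named) profile resolves to.
aws sts get-caller-identity
aws sts get-caller-identity --profile <profile-name>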
Please note that, in a sense, you're not allowing your user any access to ECR.
Try adding the following policy to the user:
{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Action": [
"ecr:GetAuthorizationToken",
"ecr:BatchCheckLayerAvailability",
"ecr:GetDownloadUrlForLayer",
"ecr:GetRepositoryPolicy",
"ecr:DescribeRepositories",
"ecr:ListImages",
"ecr:DescribeImages",
"ecr:BatchGetImage"
],
"Resource": "*"
}]
}
https://docs.aws.amazon.com/AmazonECR/latest/userguide/ecr_managed_policies.html#AmazonEC2ContainerRegistryReadOnly
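If it helps, a sketch of attaching that as an inline policy to the user via the CLI; the user name, policy name, and file name are placeholders:
# Save the JSON above as ecr-read-policy.json, then attach it to the user.
aws iam put-user-policy \
  --user-name <your-user> \
  --policy-name ecr-read \
  --policy-document file://ecr-read-policy.json

# Retry the login with an explicit region (AWS CLI v1 syntax, matching the question).
aws ecr get-login --no-include-email --region us-west-2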
As general advice, ecr:GetAuthorizationToken apparently must be granted on all resources (*), but all other ECR permissions can be scoped to specific repositories. With ECR permissions set up as below, the user/role can log in to Docker via the CLI and still only access the images that the user/role is supposed to.
Example with scoped permissions:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ecr:BatchGetImage",
"ecr:BatchCheckLayerAvailability",
"ecr:CompleteLayerUpload",
"ecr:GetDownloadUrlForLayer",
"ecr:InitiateLayerUpload",
"ecr:PutImage",
"ecr:UploadLayerPart"
],
"Resource": "arn:aws:ecr:${region}:${account}:repository/${repo-name}"
},
{
"Effect": "Allow",
"Action": "ecr:GetAuthorizationToken",
"Resource": "*"
}
]
}
From my observations, scoping ecr:GetAuthorizationToken to specific ECR repositories will result in an AccessDeniedException.
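To sanity-check a scoped setup like this, a short sketch; the region, account, and repository are placeholders, and get-login-password assumes AWS CLI v2 or a recent v1:
# Token retrieval must stay unscoped (Resource: "*"), but pulls are limited to the allowed repositories.
aws ecr get-login-password --region <region> \
  | docker login --username AWS --password-stdin <account>.dkr.ecr.<region>.amazonaws.com

# This pull should succeed only for repositories covered by the scoped statement.
docker pull <account>.dkr.ecr.<region>.amazonaws.com/<repo-name>:latest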