I have bootstrapped the CDK toolkit stack like this:
npx cdk bootstrap \
  --trust 158******206 \
  --toolkit-stack-name **** \
  --qualifier **** \
  --cloudformation-execution-policies arn:aws:iam::aws:policy/AdministratorAccess
As a result, the CDK toolkit stack contains these resources:
ContainerAssetsRepository
DeploymentActionRole
FileAssetsBucketEncryptionKey
FileAssetsBucketEncryptionKeyAlias
FilePublishingRole
FilePublishingRoleDefaultPolicy
ImagePublishingRole
ImagePublishingRoleDefaultPolicy
StagingBucket
StagingBucketPolicy
Then I deploy the CDK stack as an IAM user and it works correctly, using this command:
cdk deploy --require-approval never --toolkit-stack-name **** --profile user-1
If I try to deploy via STS temporary credentials, I receive this error:
Error: Could not assume role in target account (did you bootstrap the environment with the right '--trust's?): User: arn:aws:sts::448*****770:assumed-role/cdktoolkit-test-role/91cb8d5a-57e9-4d73-9f66-ddc630b637f2 is not authorized to perform: sts:TagSession on resource: arn:aws:iam::448*****770:role/cdk-event-proc-deploy-role-448******770-us-east-1
My iam-sts-config.yml
---
aws_iam:
  - type: sts-access-keys
    version: V2
    config:
      iam_assume_role_name: cdktoolkit-test-role
Then I set these environment variables:
AWS_ACCESS_KEY_ID=***
AWS_SECRET_ACCESS_KEY=***
AWS_SESSION_TOKEN=***
AWS_DEFAULT_REGION=***
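(For reference, the temporary credentials above come from assuming cdktoolkit-test-role. A minimal sketch of how they might be obtained, assuming the calling identity is allowed to assume that role:)

# Assumption: the caller is permitted to assume cdktoolkit-test-role in the target account
aws sts assume-role \
  --role-arn arn:aws:iam::448*****770:role/cdktoolkit-test-role \
  --role-session-name cdk-deploy
# The AccessKeyId, SecretAccessKey and SessionToken from the output are what get
# exported as AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN above.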
Here is my trust relationship policy for the role cdk-event-proc-deploy-role:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::448******770:root"
      },
      "Action": "sts:AssumeRole"
    },
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::158*****206:root"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
If I manually edit the trust relationship policy and add "sts:TagSession" to the allowed actions, I can deploy my stack.
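For reference, this is roughly what the edited statement looks like after that manual change (same masked account ID as above):

{
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::448******770:root"
  },
  "Action": [
    "sts:AssumeRole",
    "sts:TagSession"
  ]
}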
So, my question is: can I set up a custom trust relationship policy for these roles when I bootstrap the CDK toolkit stack?
I found only the --trust parameter, but it only adds a new Principal. Can I add additional Actions?
I'm trying to build a docker image from a Pipeline account and push it into the ECR of another account (Dev).
While I'm able to docker push from CodeBuild to an ECR repo within the same account (Pipeline), I'm having difficulty doing this for an ECR repo in another AWS account.
The policy attached to the ECR repo on the Dev account:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "AllowCrossAccountPush",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<pipelineAccountID>:role/service-role/<codebuildRole>"
      },
      "Action": [
        "ecr:BatchCheckLayerAvailability",
        "ecr:CompleteLayerUpload",
        "ecr:GetDownloadUrlForLayer",
        "ecr:InitiateLayerUpload",
        "ecr:PutImage",
        "ecr:UploadLayerPart"
      ]
    }
  ]
}
On my pipeline account, the service role running the build project matches the ARN on the policy above, and my buildspec contains the following snippet that pushes the image:
- $(aws ecr get-login --no-include-email --region us-east-1 --registry-ids <DevAccount>)
- docker tag <imageName>:latest $ECR_REPO_DEV:latest
- docker push $ECR_REPO_DEV:latest
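(Side note: aws ecr get-login only exists in AWS CLI v1; if the build image ships CLI v2, the login step would look roughly like this instead, the registry URL being an assumption based on the account and region above:)

- aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin <DevAccount>.dkr.ecr.us-east-1.amazonaws.com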
Codebuild is able to log into ECR successfully, but when it tries to actually push the image, I get:
*denied: User: arn:aws:sts::<pipelineAccountID>:assumed-role/<codebuildRole>/AWSCodeBuild-413cfca0-133a-4f37-b505-a94668201e26 is not authorized to perform: ecr:InitiateLayerUpload on resource: arn:aws:ecr:us-east-1:<DevAccount>:repository/<repo>*
Additionally, I've made sure that the IAM policy for the CodeBuild role (in the pipeline account) has permissions for this repo:
{
  "Sid": "CrossAccountRepo",
  "Effect": "Allow",
  "Action": "ecr:*",
  "Resource": "arn:aws:ecr:us-east-1:<DevAccount>:repository/sg-api"
}
I have little idea what I could be missing. The only thing that comes to mind is having the build run with a cross-account role, but I'm not even sure that's possible. My goal is to keep the build pipeline separate from the Dev account, as I hear that's best practice.
Suggestions?
Thanks in advance.
Based on my understanding and the error message above, the most common cause is that the ECR repository does not have a policy that allows the CodeBuild IAM role (from the pipeline account) to access it.
Please set this policy on the ECR Repo:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "AllowCrossAccountPush",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<pipelineAccountID>:root"
      },
      "Action": [
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchCheckLayerAvailability",
        "ecr:PutImage",
        "ecr:InitiateLayerUpload",
        "ecr:UploadLayerPart",
        "ecr:CompleteLayerUpload"
      ]
    }
  ]
}
Ref: https://docs.aws.amazon.com/AmazonECR/latest/userguide/repository-policy-examples.html#IAM_allow_other_accounts
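If you prefer applying the repository policy from the command line, a sketch (assuming the JSON above is saved locally as ecr-policy.json and using the repository name from the question):

aws ecr set-repository-policy \
  --repository-name sg-api \
  --policy-text file://ecr-policy.json \
  --region us-east-1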
Please add this policy on the CodeBuild service role:
{
  "Sid": "CrossAccountRepo",
  "Effect": "Allow",
  "Action": "ecr:*",
  "Resource": "*"
}
Context
This was a CodeStar project initially, and then it grew into something bigger. We reused the Beanstalk application to create the stage and prod environments and kept the initially-created dev environment as-is.
We updated the CodePipeline to deploy to our new environments using "Elastic Beanstalk" as the Provider. (While CodeStar had setup a deployment using CloudFormation for the environment it automatically provisioned in the Beanstalk application.)
The problem
The deployment fails with an error saying the CodePipeline IAM role is not authorized to perform autoscaling:DescribeAutoScalingGroups.
Here is the whole error message displayed in CodePipeline:
Insufficient permissions
Deployment failed.
The provided role does not have sufficient permissions: User: arn:aws:sts::xxx:assumed-role/CodeStarWorker-xxx-on-cod-ToolChain/yyy is not authorized to perform: autoscaling:DescribeAutoScalingGroups (Service: AmazonAutoScaling; Status Code: 403; Error Code: AccessDenied; Request ID: 905ee6ef-d75d-4cf8-b5f3-e6b16a5f6477)
Service:AmazonAutoScaling, Message:User: arn:aws:sts::xxx:assumed-role/CodeStarWorker-xxx-on-cod-ToolChain/yyy is not authorized to perform: autoscaling:DescribeAutoScalingGroups
Failed to deploy application.
Service:AmazonAutoScaling, Message:User: arn:aws:sts::xxx:assumed-role/CodeStarWorker-xxx-on-cod-ToolChain/yyy is not authorized to perform: autoscaling:DescribeAutoScalingGroups
IAM
Here is the CodePipeline Role's content (aka CodeStarWorker-xxx-on-cod-ToolChain):
And here is the associated Permission Boundary (originally generated by CodeStar, and eventually updated by us to try to get this whole thing to work):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "ssm:GetParameters",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "ssm:ResourceTag/awscodestar:projectArn": "arn:aws:codestar:yyy:xxx:project/xxx-on-cod"
        }
      }
    },
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:CreateBucket",
        "iam:PassRole",
        "secretsmanager:GetSecretValue"
      ],
      "Resource": [
        "arn:aws:s3:::aws-codestar-yyy-xxx/xxx-on-cod/ssh/*",
        "arn:aws:s3:::elasticbeanstalk-yyy-xxx/*",
        "arn:aws:s3:::elasticbeanstalk-yyy-xxx",
        "arn:aws:s3:::awscodestar-remote-access-yyy/*",
        "arn:aws:s3:::awscodestar-remote-access-signatures-yyy/*",
        "arn:aws:iam::xxx:role/CodeStarWorker-xxx-on-cod-CloudFormation",
        "arn:aws:secretsmanager:yyy:xxx:secret:xxx"
      ]
    },
    {
      "Sid": "VisualEditor4",
      "Effect": "Allow",
      "Action": [
        "s3:*",
        "codebuild:*",
        "ec2:Describe*",
        "ec2:*SecurityGroup*",
        "iam:PassRole"
      ],
      "Resource": [
        "*"
      ]
    },
    {
      "Sid": "VisualEditor14",
      "Effect": "Allow",
      "Action": [
        "logs:*"
      ],
      "Resource": [
        "arn:aws:logs:yyy:xxx:log-group:/aws/elasticbeanstalk/*"
      ]
    },
    {
      "Sid": "VisualEditor6",
      "Effect": "Allow",
      "Action": [
        "elasticbeanstalk:CreateApplicationVersion",
        "elasticbeanstalk:UpdateEnvironment"
      ],
      "Resource": [
        "*"
      ]
    },
    {
      "Sid": "VisualEditor5",
      "Effect": "Allow",
      "Action": [
        "autoscaling:DescribeAutoScalingGroups",
        "autoscaling:SuspendProcesses",
        "autoscaling:ResumeProcesses",
        "autoscaling:DescribeScalingActivities"
      ],
      "Resource": [
        "arn:aws:autoscaling:yyy:xxx:autoScalingGroup:*"
      ]
    },
    {
      "Sid": "VisualEditor2",
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogStream",
        "sns:Get*",
        "sns:Publish",
        "logs:DescribeLogGroups",
        "cloudtrail:StartLogging",
        "lambda:ListFunctions",
        "cloudtrail:CreateTrail",
        "sns:Subscribe",
        "xray:Put*",
        "logs:CreateLogGroup",
        "logs:PutLogEvents",
        "sns:List*"
      ],
      "Resource": "*"
    },
    {
      "Sid": "VisualEditor3",
      "Effect": "Allow",
      "Action": "*",
      "Resource": [
        "arn:aws:cloudformation:yyy:xxx:stack/awseb-e-mjdwv9ptcz-stack/2d588c80-5284-11ea-a1d4-068f4db663b8",
        "arn:aws:cloudformation:yyy:xxx:stack/awseb-e-mjdwv9ptcz-stack/2d588c80-5284-11ea-a1d4-068f4db663b8/*",
        "arn:aws:cloudformation:yyy:xxx:stack/awscodestar-xxx-on-cod-*",
        "arn:aws:codebuild:yyy:xxx:project/xxx-on-cod",
        "arn:aws:codecommit:yyy:xxx:xxx-on-codecommit",
        "arn:aws:codepipeline:yyy:xxx:xxx-on-cod-Pipeline",
        "arn:aws:elasticbeanstalk:yyy:xxx:*/xxx-on-cod*",
        "arn:aws:s3:::aws-codestar-yyy-xxx-xxx-on-cod-pipe",
        "arn:aws:s3:::aws-codestar-yyy-xxx-xxx-on-cod-pipe/*",
        "arn:aws:s3:::elasticbeanstalk-yyy-xxx/resources/environments/e-fp3mwptx9q",
        "arn:aws:s3:::elasticbeanstalk-yyy-xxx/resources/environments/e-fp3mwptx9q/*",
        "arn:aws:s3:::elasticbeanstalk-yyy-xxx/resources/environments/e-mjdwv9ptcz",
        "arn:aws:s3:::elasticbeanstalk-yyy-xxx/resources/environments/e-mjdwv9ptcz/*"
      ]
    }
  ]
}
Pipeline
As you can see, we have two CodeBuild actions: the first one was set up by CodeStar, and the second one slightly modifies the output artefact so that it is in the right format for a direct upload into Beanstalk.
The successful deployment is the one from CodeStar (using the CloudFormation provider); the next one, which fails, uses the Elastic Beanstalk provider.
CodeStar CodeBuild (buildspec.yml)
The output artefact is used by the CloudFormation deployment:
version: 0.2
phases:
  install:
    runtime-versions:
      java: openjdk8
    commands:
      # Upgrade AWS CLI to the latest version
      - pip install --upgrade awscli
  pre_build:
    commands:
      - cd $CODEBUILD_SRC_DIR
      - mvn clean compile test
  build:
    commands:
      - mvn war:exploded
  post_build:
    commands:
      - cp -r .ebextensions/ target/ROOT/
      - aws cloudformation package --template template.yml --s3-bucket $S3_BUCKET --output-template-file template-export.yml
      # Do not remove this statement. This command is required for AWS CodeStar projects.
      # Update the AWS Partition, AWS Region, account ID and project ID in the project ARN on template-configuration.json file so AWS CloudFormation can tag project resources.
      - sed -i.bak 's/\$PARTITION\$/'${PARTITION}'/g;s/\$AWS_REGION\$/'${AWS_REGION}'/g;s/\$ACCOUNT_ID\$/'${ACCOUNT_ID}'/g;s/\$PROJECT_ID\$/'${PROJECT_ID}'/g' template-configuration.json
artifacts:
  type: zip
  files:
    - target/ROOT/**/*
    - .ebextensions/**/*
    - 'template-export.yml'
    - 'template-configuration.json'
Our CodeBuild (buildspec-two.yml)
The output artefact is used by the (failing) Beanstalk deployment:
# Everything up to this point is the same as the buildspec above
artifacts:
  type: zip
  base-directory: 'target/ROOT'
  files:
    - ./**/*
    - .ebextensions/**/*
Conclusion
I have no idea how the deployment could fail, since both the permission boundary and the base IAM role allow autoscaling:DescribeAutoScalingGroups.
Moreover, the deployment to the CodeStar environment runs fine, yet the environment that fails the deployment is an exact replica of it in terms of configuration.
Any ideas?
(Moreover, the initial dev environment, just like the newly-created stage environment, doesn't even have an Auto Scaling group associated with it... so I have no idea why the deployment is even making that call.)
(And I've looked in S3 to make sure both Artefacts being deployed have the same structure.)
This is a tough one to troubleshoot, but from what I can see there are a couple of potential issues. One is that the 'DescribeAutoScalingGroups' action does not support resource-level permissions, so the resource must be an asterisk rather than a resource ARN. In the permissions boundary, you could try replacing

"Resource": [
  "arn:aws:autoscaling:yyy:xxx:autoScalingGroup:*"
]

with

"Resource": [
  "*"
]

and see if that solves the issue.
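For illustration, the "VisualEditor5" statement in the boundary would then look roughly like this:

{
  "Sid": "VisualEditor5",
  "Effect": "Allow",
  "Action": [
    "autoscaling:DescribeAutoScalingGroups",
    "autoscaling:SuspendProcesses",
    "autoscaling:ResumeProcesses",
    "autoscaling:DescribeScalingActivities"
  ],
  "Resource": "*"
}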
Second, the 'AWSCodeDeployFullAccess' managed policy does not contain the 'DescribeAutoScalingGroups' action. You may need to replace it with, or add, the 'AWSCodeDeployRole' managed policy to be able to use that action. That might solve it.
CodeStar projects are pretty locked down when it comes to permissions, so it can get pretty complex expanding the project. Check here:
https://docs.aws.amazon.com/codestar/latest/userguide/add-iam-role.html
and here:
https://docs.aws.amazon.com/codestar/latest/userguide/adh-policy-examples.html
I am using the Serverless Framework to deploy my AWS Lambda function. Since the function is ready for production, I need to remove the sensitive keys from the code, and I decided to use AWS Systems Manager Parameter Store (SSM) to handle these keys securely. However, on deployment I receive the following error related to the use of these keys. I thought it might be something related to the IAM role that I manually associated with the Lambda, but I'm not sure what would be wrong with it.
Error:
Serverless Information ----------------------------------
##########################################################################################
# 47555: 0 of 2 promises have settled
# 47555: 2 unsettled promises:
# 47555: ssm:mg-production-domain~true waited on by: undefined
# 47555: ssm:mg-production-api-key~true waited on by: undefined
# This can result from latent connections but may represent a cyclic variable dependency
##########################################################################################
YAML:
provider:
  name: aws
  runtime: nodejs10.x
  stage: dev
  region: us-east-1
  environment:
    MG_PRODUCTION_DOMAIN: ${ssm:mg-production-domain~true}
    MG_PRODUCTION_API_KEY: ${ssm:mg-production-api-key~true}
Here is the IAM role policy I added to the Lambda, though I believe there is probably a better way to do this by defining the role in the YAML file (see the sketch after the policy):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "ssm:DescribeParameters",
      "Resource": "*"
    },
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": "ssm:GetParameters",
      "Resource": "arn:aws:ssm:us-east-1:*account-id*:parameter/*"
    },
    {
      "Sid": "VisualEditor2",
      "Effect": "Allow",
      "Action": "ssm:GetParameter",
      "Resource": "arn:aws:ssm:us-east-1:*account-id*:parameter/*"
    }
  ]
}
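As a sketch of the "better way" mentioned above (assuming a Serverless Framework version that supports provider.iamRoleStatements), the same permissions could be declared directly in serverless.yml; the parameter names are taken from the environment section above:

provider:
  name: aws
  runtime: nodejs10.x
  # Let Serverless attach these statements to the function's generated role
  iamRoleStatements:
    - Effect: Allow
      Action:
        - ssm:GetParameter
        - ssm:GetParameters
      Resource:
        - arn:aws:ssm:us-east-1:*:parameter/mg-production-domain
        - arn:aws:ssm:us-east-1:*:parameter/mg-production-api-key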
I am creating my own Docker image so that I can use my own models in AWS SageMaker. I successfully built a Docker image from the command line inside a Jupyter notebook on a SageMaker ml.t2.medium instance, using a customized Dockerfile:
REPOSITORY TAG IMAGE ID CREATED SIZE
sklearn latest 01234212345 6 minutes ago 1.23GB
But when I run in Jupyter:
! aws ecr create-repository --repository-name sklearn
I get the following error:
An error occurred (AccessDeniedException) when calling the CreateRepository operation: User: arn:aws:sts::1234567:assumed-role/AmazonSageMaker-ExecutionRole-12345/SageMaker is not authorized to perform: ecr:CreateRepository on resource: *
I have already set up SageMaker, EC2, and EC2 Container Service permissions, and attached the following policy, but I still get the same error:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "sagemaker:*",
        "ec2:*"
      ],
      "Resource": "*"
    }
  ]
}
Any idea on how I can solve this issue?
Thanks in advance.
I solved the problem. You must grant the ECR permissions on the SageMaker execution role, as follows:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecr:*"
      ],
      "Resource": "*"
    }
  ]
}
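As an aside, an equivalent option should be to attach the AWS managed policy AmazonEC2ContainerRegistryFullAccess to the execution role instead of an inline ecr:* policy, for example (role name taken from the error message above):

aws iam attach-role-policy \
  --role-name AmazonSageMaker-ExecutionRole-12345 \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess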