The new version of the CDK deployment strategy allows a user to assume roles like cdk-{qualifier}-deploy.
I want to give a developer the ability to run cdk diff from their local machine, but behind the scenes cdk diff assumes the cdk-{qualifier}-deploy role, which is the power I'm reserving for the CI/CD pipeline via a secret IAM user trusted to deploy.
I removed the deploy ability from the local IAM user and ran some cdk diffs. The stack does quite a bit with Lambda, ECS, Route 53, and VPCs.
Here's the policy I was able to give the local user to make this work:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CDKDiff",
      "Effect": "Allow",
      "Action": [
        "cloudformation:DescribeStacks",
        "cloudformation:GetTemplate"
      ],
      "Resource": [
        "arn:aws:cloudformation:us-east-1:123:stack/OnDemandStackUE1/*",
        "arn:aws:cloudformation:sa-east-1:123:stack/OnDemandStackSE1/*",
        "arn:aws:cloudformation:eu-west-2:123:stack/OnDemandStackEU2/*"
      ]
    }
  ]
}
Now I get this output for a small change:
Assuming role failed: User: arn:aws:iam::123:user/dummy is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::123:role/cdk-me-deploy-role
current credentials could not be used to assume '...deploy-role..', but are for the right account. Proceeding anyway.
Resources
[~] AWS::Lambda::Function ecsd_lambda ecslambda3D927DBA
└─ [~] Timeout
├─ [-] 10
└─ [+] 9
Looks like it works with the most minimal user permissions: just DescribeStacks and GetTemplate. Is this an adequate solution? Should I try to use a role instead, via the synthesizer stack or something?
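For reference, the piece I'm deliberately withholding from the local user is a statement like this (a sketch, using the role name from the error above), which only the CI/CD user gets:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCdkDeployRoleAssumption",
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::123:role/cdk-me-deploy-role"
    }
  ]
}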
Related
It seems to be impossible to allow developers to create Lambdas and create or maintain SAM applications in AWS without essentially having AdministratorAccess policies attached to their developer role. AWS documents a suggested IAM setup where everyone is simply an administrator, or only has IAMFullAccess, or an even more specific set of permissions containing "iam:AttachRolePolicy", which still boils down to having enough access to grant the AdministratorAccess permission to anyone at will with just one API call.
Besides creating a new AWS account for each SAM or Lambda deployment, there doesn't seem to be any secure way to manage this, but I really hope I'm missing something obvious. Perhaps someone knows of a combination of tags, permission boundaries, and IAM paths that would alleviate this?
The documentation I refer to: https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-permissions.html which opens with:
There are three main options for granting a user permission to manage
serverless applications. Each option provides users with different
levels of access control.
Grant administrator permissions.
Attach necessary AWS managed policies.
Grant specific AWS Identity and Access Management (IAM) permissions.
Further down, a sample application is used to specify slightly more specific permissions:
For example, the following AWS managed policies are sufficient to
deploy the sample Hello World application:
AWSCloudFormationFullAccess
IAMFullAccess
AWSLambda_FullAccess
AmazonAPIGatewayAdministrator
AmazonS3FullAccess
AmazonEC2ContainerRegistryFullAccess
And at the end of the document, an AWS IAM policy document describes a set of permissions which is rather lengthy, but still contains the mentioned "iam:AttachRolePolicy" permission with a wildcard resource for the roles it may be applied to.
AWS has a PowerUserAccess managed policy which is meant for developers. It gives them access to most services and no access to admin activities, including IAM, Organizations, and account management.
You can create an IAM group for developers (say, Developers) and attach the managed policy PowerUserAccess to the group. Add the developers to this group.
For deploying with SAM, the developers will need a few IAM permissions to create and tag roles. While rolling back a CloudFormation stack, they may need a few delete permissions. And while allowing the developers to create new roles for Lambda functions, you need to ensure they can't escalate privileges, by using a permissions boundary. A good starting point is to set the permissions boundary to PowerUserAccess (until you figure out the right level of permissions).
Create a policy something like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadRole",
      "Effect": "Allow",
      "Action": [
        "iam:GetRole",
        "iam:GetRolePolicy",
        "iam:ListRoleTags"
      ],
      "Resource": "arn:aws:iam::ReplaceWithYourAWSAccountNumber:role/*FunctionRole*"
    },
    {
      "Sid": "TagRole",
      "Effect": "Allow",
      "Action": [
        "iam:UntagRole",
        "iam:TagRole"
      ],
      "Resource": "arn:aws:iam::ReplaceWithYourAWSAccountNumber:role/*FunctionRole*"
    },
    {
      "Sid": "WriteRole",
      "Effect": "Allow",
      "Action": [
        "iam:DeleteRole",
        "iam:DeleteRolePolicy",
        "iam:AttachRolePolicy",
        "iam:PutRolePolicy",
        "iam:PassRole",
        "iam:DetachRolePolicy"
      ],
      "Resource": "arn:aws:iam::ReplaceWithYourAWSAccountNumber:role/*FunctionRole*"
    },
    {
      "Sid": "CreateRoleWithPermissionsBoundary",
      "Effect": "Allow",
      "Action": [
        "iam:CreateRole"
      ],
      "Resource": "arn:aws:iam::ReplaceWithYourAWSAccountNumber:role/*FunctionRole*",
      "Condition": {
        "StringEquals": {
          "iam:PermissionsBoundary": "arn:aws:iam::aws:policy/PowerUserAccess"
        }
      }
    }
  ]
}
Note: this assumes the Lambda functions in the SAM template have the word Function in their names, so the generated roles match *FunctionRole*. (Replace the AWS account number in the ARNs.)
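For illustration, a hypothetical function declared like this gets a generated role whose name contains HelloWorldFunctionRole, which the *FunctionRole* wildcard above matches:
Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler
      Runtime: python3.9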
Now you can attach the IAM policy to the Developers IAM group. (This gives the SAM deployment permissions to all developers.)
Or you can create another IAM group for SAM developers (say, SAM-Developers) and attach the policy to the SAM-Developers group. Then add the appropriate developers (those who need to deploy using SAM) to this new group.
Define the permissions boundary in the SAM templates as well.
Here is an example PermissionsBoundary in a SAM template:
Globals:
  Function:
    Timeout: 15
    PermissionsBoundary: arn:aws:iam::aws:policy/PowerUserAccess
With that, the developers should be able to deploy using SAM, provided they don't have a more restrictive permissions boundary themselves.
You can set the developers' permissions boundary to AdministratorAccess, or create a new policy that combines the permissions of PowerUserAccess with the SAM deployment policy defined above, and then set that new policy as the developers' permissions boundary.
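A rough sketch of such a combined boundary policy, mirroring the NotAction structure of the PowerUserAccess managed policy plus the SAM role permissions above (verify against the current managed policy before relying on it):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PowerUserStyleAccess",
      "Effect": "Allow",
      "NotAction": [
        "iam:*",
        "organizations:*",
        "account:*"
      ],
      "Resource": "*"
    },
    {
      "Sid": "SamRolePermissions",
      "Effect": "Allow",
      "Action": [
        "iam:GetRole",
        "iam:GetRolePolicy",
        "iam:ListRoleTags",
        "iam:TagRole",
        "iam:UntagRole",
        "iam:CreateRole",
        "iam:DeleteRole",
        "iam:DeleteRolePolicy",
        "iam:AttachRolePolicy",
        "iam:DetachRolePolicy",
        "iam:PutRolePolicy",
        "iam:PassRole"
      ],
      "Resource": "arn:aws:iam::ReplaceWithYourAWSAccountNumber:role/*FunctionRole*"
    }
  ]
}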
This solution is a reference to build upon. PowerUserAccess is set as the permissions boundary for the Lambda function roles, but it is too permissive; you should work from here to find the right level of permissions for your developers and your Lambda functions.
Sidenote: You can use this policy to allow the users to manage their own credentials.
I'm trying to create some infrastructure for a service I'm building on AWS Fargate. I'm using SSM as a value store for some of my application configuration, so I need both the regular permissions for Fargate and additional permissions for SSM. However, after banging my head against this particular wall for a while, I've come to the conclusion that I just don't understand AWS IAM in general or this problem in particular, so I'm here for help.
The basis of my IAM code comes from this tutorial; the IAM code is actually not in the tutorial itself but in this file in the GitHub repo linked from it. I presume I need to retain that STS permission for something, although I'm not entirely sure what.
I've converted the IAM code from the tutorial into a JSON document because I find JSON easier to work with than Terraform's native syntax. Here's what I've come up with. It doesn't work. I would like to know why it doesn't work and how to fix it. Please ELI5 (explain like I'm 5), because I know nothing about this.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ssm:GetParameters",
        "secretsmanager:GetSecretValue",
        "kms:Decrypt",
        "sts:AssumeRole"
      ],
      "Principal": {
        "Service": ["ecs-tasks.amazonaws.com"]
      }
    }
  ]
}
At a minimum, your ECS task needs:
Ability to assume a role
Resource-level permissions
In the example you referred to, an IAM role is created with the following:
A trust relationship is attached (see the sketch below) <-- enables the ECS task to assume an IAM role
The AWS managed policy AmazonECSTaskExecutionRolePolicy is attached <-- resource permissions
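For reference, that trust relationship is the standard one for ECS tasks and looks like this; note that the Principal and sts:AssumeRole belong in the role's trust policy, not in a permissions policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ecs-tasks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}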
So, in order to retrieve the SSM parameter values, add the resource permissions below:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ssm:Describe*",
        "ssm:Get*",
        "ssm:List*"
      ],
      "Resource": [
        "arn:aws:ssm:*:*:parameter/{your-path-hierarchy-to-parameter}/*"
      ]
    }
  ]
}
If your secrets use KMS, also grant the necessary KMS permissions (kms:Decrypt). Refer to specifying-sensitive-data for reference.
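For example, a statement along these lines (the key ARN is a placeholder for your own key):
{
  "Effect": "Allow",
  "Action": [
    "kms:Decrypt"
  ],
  "Resource": "arn:aws:kms:{region}:{account-id}:key/{your-key-id}"
}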
I'm looking for some advice on best practices managing an AWS Elastic Beanstalk application.
I have an app with two environments, which I refer to as prod and dev. I would like to allow deployments to the dev environment for all collaborators and limit deployments to prod to only one user.
What is the best way to do that?
Elastic Beanstalk tightly integrates with IAM.
Allowing or denying a user a specific action on a specific resource is achieved by attaching the correct policy to the role being assumed.
The Elastic Beanstalk docs have a specific section explaining IAM permissions in EB, and the last example on the page is effectively what you're looking for. Modify the policy shown to your needs and attach it to the users, or groups of users, you wish to deny access to the production environment.
Your policy is going to look something like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": [
        "elasticbeanstalk:CreateApplication",
        "elasticbeanstalk:DeleteApplication"
      ],
      "Resource": [
        "*"
      ]
    },
    {
      "Effect": "Deny",
      "Action": [
        "elasticbeanstalk:CreateApplicationVersion",
        "elasticbeanstalk:CreateConfigurationTemplate",
        "elasticbeanstalk:CreateEnvironment",
        "elasticbeanstalk:DeleteApplicationVersion",
        "elasticbeanstalk:DeleteConfigurationTemplate",
        "elasticbeanstalk:DeleteEnvironmentConfiguration",
        "elasticbeanstalk:DescribeApplicationVersions",
        "elasticbeanstalk:DescribeConfigurationOptions",
        "elasticbeanstalk:DescribeConfigurationSettings",
        "elasticbeanstalk:DescribeEnvironmentResources",
        "elasticbeanstalk:DescribeEnvironments",
        "elasticbeanstalk:DescribeEvents",
        "elasticbeanstalk:RebuildEnvironment",
        "elasticbeanstalk:RequestEnvironmentInfo",
        "elasticbeanstalk:RestartAppServer",
        "elasticbeanstalk:RetrieveEnvironmentInfo",
        "elasticbeanstalk:SwapEnvironmentCNAMEs",
        "elasticbeanstalk:TerminateEnvironment",
        "elasticbeanstalk:UpdateApplicationVersion",
        "elasticbeanstalk:UpdateConfigurationTemplate",
        "elasticbeanstalk:UpdateEnvironment",
        "elasticbeanstalk:ValidateConfigurationSettings"
      ],
      "Resource": [
        "arn:aws:elasticbeanstalk:us-east-1:123456789012:environment/Test/Test-env-prod"
      ]
    }
  ]
}
The above policy will prevent any user it is attached to from creating or deleting any applications, and it will further deny that user all of the listed actions on the resource ARN given: the app named Test and the environment named Test-env-prod.
To restrict access to your specific environment, take this policy and modify the ARN's region (us-east-1), account number (123456789012), app name (Test), and environment name (Test-env-prod) to your needs.
You can find a list of ElasticBeanstalk resource ARN formats here.
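For reference, environment ARNs follow this pattern:
arn:aws:elasticbeanstalk:region:account-id:environment/application-name/environment-name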
I've tried to follow the AWS instructions on granting ECR authorization to my user by attaching the AmazonEC2ContainerRegistryFullAccess policy to it.
However, when I run aws ecr get-login on my PC, I get an error that I don't have permission:
An error occurred (AccessDeniedException) when calling the GetAuthorizationToken operation: User: arn:aws:iam::ACCOUNT_NUMBER:user/MY_USER is not authorized to perform: ecr:GetAuthorizationToken on resource: *
What have I done wrong?
You must attach a policy to your IAM role.
I attached AmazonEC2ContainerRegistryFullAccess and it worked.
Here is a full answer; after I followed all of these steps, I was able to use ECR.
The error can have a few meanings:
You are not authorized because you do not have an ECR policy attached to your user
You are not authorized because you are using 2FA, and using the CLI is not secure unless you set a temporary session token
You provided invalid credentials
Here is the list of all steps to get access (including handling 2FA):
First of all, you have to create a policy that gives you access to the GetAuthorizationToken action in ECR.
Attach this policy either to a user or to a group (groups/roles are, IMHO, always the better approach; my vote goes to roles, e.g. DevOps).
Make sure you have AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY set in your environment. I recommend using the .aws folder with credentials and profiles kept separate.
If you have 2FA enabled:
You need to generate a session token using this command:
aws sts get-session-token --serial-number arn-of-the-mfa-device --token-code code-from-token
The arn-of-the-mfa-device can be found in the 2FA section of your profile; the token code is the one generated by the device.
Update your AWS credentials with the received AccessKeyId, SecretAccessKey, and SessionToken. AWS recommends a cron job to refresh the token; realistically, if you are doing this you are testing things, since your prod resources most likely do not have 2FA enabled. You can extend the session with --duration-seconds, but only up to 36 hours. A good explanation can be found at authenticate-mfa-cli.
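A sketch of that flow (the account number, user name, and token code are placeholders):
# Request temporary credentials; --duration-seconds may be up to 129600 (36 hours)
aws sts get-session-token \
  --serial-number arn:aws:iam::123456789012:mfa/my-user \
  --token-code 123456 \
  --duration-seconds 43200

# Export the returned values (or write them to a profile in ~/.aws/credentials)
export AWS_ACCESS_KEY_ID=<AccessKeyId from the response>
export AWS_SECRET_ACCESS_KEY=<SecretAccessKey from the response>
export AWS_SESSION_TOKEN=<SessionToken from the response>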
This should do the job
I ended up using AmazonEC2ContainerRegistryPowerUser, as it seemed a better option than full access.
The user must have GetAuthorizationToken for all resources in ECR. To keep the policy tight, you can grant all actions only on the desired repository and grant only ecr:GetAuthorizationToken on all resources. Here is an example policy to attach to your user:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecr:BatchCheckLayerAvailability",
        "ecr:BatchGetImage",
        "ecr:CompleteLayerUpload",
        "ecr:GetDownloadUrlForLayer",
        "ecr:InitiateLayerUpload",
        "ecr:PutImage",
        "ecr:UploadLayerPart"
      ],
      "Resource": "<REPOSITORY_ARN_HERE>"
    },
    {
      "Effect": "Allow",
      "Action": "ecr:GetAuthorizationToken",
      "Resource": "*"
    }
  ]
}
Just as it appears in the error description, I had to allow the "GetAuthorizationToken" action in my policy:
{
  "Sid": "VisualEditor2",
  "Effect": "Allow",
  "Action": "ecr:GetAuthorizationToken",
  "Resource": "*"
}
Note: this is not my full policy, but a subsection of its Statement.
I found out that when 2FA is enabled there is no option to use aws ecr get-login; once I removed 2FA from my account, I got the authorization token.
In my case, it was EC2InstanceProfileForImageBuilderECRContainerBuilds.
I had the same problem with ECS when I tried to push my container to the repository.
To solve it, I attached AmazonECS_FullAccess to my IAM role.
In case you are trying to pull images from a PUBLIC AWS repository, you must add the following permissions to your user's policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecr-public:GetAuthorizationToken",
        "sts:GetServiceBearerToken"
      ],
      "Resource": "*"
    }
  ]
}
Please see the full documentation here.
I had the same problem, but in my case I had previously set a permissions boundary scoped only to S3, and that caused the issue.
I removed the permissions boundary and it worked like a charm.
For me:
- Effect: Allow
  Sid: VisualEditor2
  Action:
    - ecr:GetAuthorizationToken
    - ecr:BatchGetImage
    - ecr:GetDownloadUrlForLayer
  Resource: "*"
I've been testing my continuous deployment setup, trying to get to a minimal set of IAM permissions that will allow my CI IAM group to deploy to my "staging" Elastic Beanstalk environment.
On my latest test, my deployment got stuck. The last event in the console is:
Updating environment staging's configuration settings.
Luckily, the deployment times out after 30 minutes, so the environment can be deployed to again.
It seems to be a permissions issue, because if I grant s3:* on all resources, the deployment works. It seems that when calling UpdateEnvironment, Elastic Beanstalk does something in S3, but I can't figure out what.
I have tried the following policy to give EB full access to its resource bucket:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::elasticbeanstalk-REGION-ACCOUNT/resources/_runtime/_embedded_extensions/APP",
        "arn:aws:s3:::elasticbeanstalk-REGION-ACCOUNT/resources/_runtime/_embedded_extensions/APP/*",
        "arn:aws:s3:::elasticbeanstalk-REGION-ACCOUNT/resources/environments/ENV_ID",
        "arn:aws:s3:::elasticbeanstalk-REGION-ACCOUNT/resources/environments/ENV_ID/*"
      ]
    }
  ]
}
Where REGION, ACCOUNT, APP, and ENV_ID are my AWS region, account number, application name, and environment ID, respectively.
Does anyone have a clue which S3 action and resource EB is trying to access?
I shared this on your blog already, but it might have a broader audience, so here it goes:
Following up on this, the Elastic Beanstalk team has provided me with the following answer regarding the S3 permissions:
"[...]Seeing the requirement below, would a slightly locked down version work? I've attached a policy to this case which will grant s3:GetObject on buckets starting with elasticbeanstalk. This is essentially to allow access to all elasticbeanstalk buckets, including the ones that we own. The only thing you'll need to do with our bucket is a GetObject, so this should be enough to do everything you need."
So it seems Elastic Beanstalk accesses buckets outside anyone's realm in order to work properly (which is kind of bad, but that's just the way it is).
Given that, the following policy will be sufficient to get things working with S3:
{
  "Action": "s3:*",
  "Resource": [
    "arn:aws:s3:::elasticbeanstalk-<region>-<account_id>",
    "arn:aws:s3:::elasticbeanstalk-<region>-<account_id>/",
    "arn:aws:s3:::elasticbeanstalk-<region>-<account_id>/*"
  ],
  "Effect": "Allow"
},
{
  "Action": "s3:GetObject",
  "Resource": "arn:aws:s3:::elasticbeanstalk*",
  "Effect": "Allow"
}
Obviously, you need to wrap this into a proper policy document that IAM understands. All your previous assumptions about IAM policies have proven right, though, so I'm guessing this shouldn't be an issue.
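Wrapped into a complete policy document, the fragment above would look like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::elasticbeanstalk-<region>-<account_id>",
        "arn:aws:s3:::elasticbeanstalk-<region>-<account_id>/",
        "arn:aws:s3:::elasticbeanstalk-<region>-<account_id>/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::elasticbeanstalk*"
    }
  ]
}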