Elastic Beanstalk deployment stuck on updating config settings - amazon-web-services

I've been testing my continuous deployment setup, trying to get to a minimal set of IAM permissions that will allow my CI IAM group to deploy to my "staging" Elastic Beanstalk environment.
On my latest test, my deployment got stuck. The last event in the console is:
Updating environment staging's configuration settings.
Luckily, the deployment will time out after 30 minutes, so the environment can be deployed to again.
It seems to be a permissions issue: if I grant s3:* on all resources, the deployment works. Apparently, when UpdateEnvironment is called, Elastic Beanstalk does something in S3, but I can't figure out what.
I have tried the following policy to give EB full access to its resource bucket:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::elasticbeanstalk-REGION-ACCOUNT/resources/_runtime/_embedded_extensions/APP",
        "arn:aws:s3:::elasticbeanstalk-REGION-ACCOUNT/resources/_runtime/_embedded_extensions/APP/*",
        "arn:aws:s3:::elasticbeanstalk-REGION-ACCOUNT/resources/environments/ENV_ID",
        "arn:aws:s3:::elasticbeanstalk-REGION-ACCOUNT/resources/environments/ENV_ID/*"
      ]
    }
  ]
}
Where REGION, ACCOUNT, APP, and ENV_ID are my AWS region, account number, application name, and environment ID, respectively.
Does anyone have a clue which S3 action and resource EB is trying to access?

I shared this on your blog already, but it might have a broader audience, so here goes:
Following up on this, the Elastic Beanstalk team has provided me with the following answer regarding the S3 permissions:
"[...]Seeing the requirement below, would a slightly locked down version work? I've attached a policy to this case which will grant s3:GetObject on buckets starting with elasticbeanstalk. This is essentially to allow access to all elasticbeanstalk buckets, including the ones that we own. The only thing you'll need to do with our bucket is a GetObject, so this should be enough to do everything you need."
So it seems that Elastic Beanstalk accesses buckets outside of your own account in order to work properly (which is kind of bad, but that's just the way it is).
Based on this, the following statements are sufficient for getting things to work with S3:
{
  "Action": "s3:*",
  "Resource": [
    "arn:aws:s3:::elasticbeanstalk-<region>-<account_id>",
    "arn:aws:s3:::elasticbeanstalk-<region>-<account_id>/",
    "arn:aws:s3:::elasticbeanstalk-<region>-<account_id>/*"
  ],
  "Effect": "Allow"
},
{
  "Action": "s3:GetObject",
  "Resource": "arn:aws:s3:::elasticbeanstalk*",
  "Effect": "Allow"
}
Obviously, you need to wrap these statements in a proper policy document that IAM understands. All your previous assumptions about IAM policies have proven right, though, so I'm guessing this shouldn't be an issue.
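For completeness, a wrapped version could look like the following; it is just the two statements above placed inside a full policy document, with the same <region> and <account_id> placeholders left for you to fill in:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::elasticbeanstalk-<region>-<account_id>",
        "arn:aws:s3:::elasticbeanstalk-<region>-<account_id>/",
        "arn:aws:s3:::elasticbeanstalk-<region>-<account_id>/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::elasticbeanstalk*"
    }
  ]
}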

Related

Access Denied in attempt to Create Project in AWS CodeBuild

According to the AWS CodeBuild documentation, the Create Project operation requires only the codebuild:CreateProject and iam:PassRole Actions to be granted. I have done this in my role's policy, and set the Resource to "*", but when I click on the Create Project button, I immediately get Access Denied with no further information. I have no problems doing the analogous operation in CodeArtifact, CodePipeline, and CodeCommit. If I set "s3:*", I do not get the error, so evidently it's an S3 permission I'm missing, but which one?
What I am trying to do is create a role with reduced permissions so that a user can run a build and manage CodeSuite resources (add and edit repositories, pipelines, etc.) without using Administrator privileges.
Here is my policy JSON:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:*Object",
        "s3:*ObjectVersion",
        "s3:*BucketAcl",
        "s3:*BucketLocation",
        "iam:*",
        "codepipeline:*",
        "codeartifact:*",
        "codecommit:*",
        "codebuild:*"
      ],
      "Resource": "*"
    }
  ]
}
(I am aware this configuration is inadvisable; I am trying to isolate the issue and provide a minimal reproducible example.)
With a little educated trial and error, I narrowed it down to a List* action, which is specific enough for my purposes. I'm guessing it's ListObjects and ListObjectVersions (granted via the s3:ListBucket and s3:ListBucketVersions IAM actions), but I'm too lazy to confirm it.
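If that guess is right, a statement like the one below, added to the policy above, should be enough. This is only a sketch; you could narrow the Resource to the specific buckets the console touches once you know which ones they are:
{
  "Effect": "Allow",
  "Action": [
    "s3:ListBucket",
    "s3:ListBucketVersions"
  ],
  "Resource": "*"
}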

AWS permissions for Fargate and SSM

I'm trying to create some infrastructure for a service I am building on AWS using AWS Fargate. I'm using SSM as a value store for some of my application configuration, so I need both the regular permissions for Fargate as well as additional permissions for SSM. However, after banging my head against this particular wall for a while, I've come to the conclusion that I just don't understand AWS IAM in general or this problem in particular, so I'm here for help.
The basis of my IAM code comes from this tutorial; the IAM code is actually not in that tutorial but rather in this file in the github repo linked to that tutorial. I presume I need to retain that STS permission for something although I'm not entirely sure what.
I've converted the IAM code from the tutorial into a JSON document because I find JSON easier to work with than the Terraform native thing. Here's what I've come up with. It doesn't work. I would like to know why it doesn't work and how to fix it. Please ELI5 (explain like I'm 5 years old) because I know nothing about this.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ssm:GetParameters",
        "secretsmanager:GetSecretValue",
        "kms:Decrypt",
        "sts:AssumeRole"
      ],
      "Principal": {
        "Service": ["ecs-tasks.amazonaws.com"]
      }
    }
  ]
}
At a minimum, your ECS task needs two things:
- the ability to assume a role, and
- resource-level permissions.
In the example you referred to, an IAM role is created with:
- a trust relationship attached, which enables the ECS task to assume the role (see the sketch below), and
- the AWS managed policy AmazonECSTaskExecutionRolePolicy attached, which provides the resource permissions.
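The trust relationship is a separate document from the permissions policy; it names a Principal but no Resource. A minimal sketch for an ECS task role looks like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ecs-tasks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}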
So, in order to retrieve the SSM parameter values, add resource permissions like the following:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ssm:Describe*",
        "ssm:Get*",
        "ssm:List*"
      ],
      "Resource": [
        "arn:aws:ssm:*:*:parameter/{your-path-hierarchy-to-parameter}/*"
      ]
    }
  ]
}
If your secrets use KMS, then also grant the necessary KMS permissions (kms:Decrypt). See specifying-sensitive-data for reference.
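For reference, the task definition then pulls the parameter in through the container definition's secrets block (when secrets are injected this way, it is the task execution role that needs the SSM permissions). The names, parameter path, and ARNs below are only examples for this trimmed-down sketch, so substitute your own:
{
  "family": "my-app",
  "executionRoleArn": "arn:aws:iam::123456789012:role/my-task-execution-role",
  "containerDefinitions": [
    {
      "name": "my-app",
      "image": "my-app:latest",
      "secrets": [
        {
          "name": "DB_PASSWORD",
          "valueFrom": "arn:aws:ssm:us-east-1:123456789012:parameter/my-app/db-password"
        }
      ]
    }
  ]
}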

AWS Elastic Beanstalk - protecting the production env from accidental deployments

I'm looking for some advice on best practices managing an AWS Elastic Beanstalk application.
I have an app with 2 different environments which I refer to as prod and dev. I would like to allow deployments to the dev env to all collaborators and limit deployment to prod to only one user.
What is the best way to do that?
Elastic Beanstalk integrates tightly with IAM.
Allowing or denying a user a specific action on a specific resource can be achieved by attaching the correct policy to the role being assumed.
The Elastic Beanstalk docs have a specific section explaining IAM permissions in EB, and the last example on the page is effectively what you're looking for. Modify the policy shown there to your needs and attach it to the users or groups you wish to deny access to the production environment.
Your policy is going to look something like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": [
        "elasticbeanstalk:CreateApplication",
        "elasticbeanstalk:DeleteApplication"
      ],
      "Resource": [
        "*"
      ]
    },
    {
      "Effect": "Deny",
      "Action": [
        "elasticbeanstalk:CreateApplicationVersion",
        "elasticbeanstalk:CreateConfigurationTemplate",
        "elasticbeanstalk:CreateEnvironment",
        "elasticbeanstalk:DeleteApplicationVersion",
        "elasticbeanstalk:DeleteConfigurationTemplate",
        "elasticbeanstalk:DeleteEnvironmentConfiguration",
        "elasticbeanstalk:DescribeApplicationVersions",
        "elasticbeanstalk:DescribeConfigurationOptions",
        "elasticbeanstalk:DescribeConfigurationSettings",
        "elasticbeanstalk:DescribeEnvironmentResources",
        "elasticbeanstalk:DescribeEnvironments",
        "elasticbeanstalk:DescribeEvents",
        "elasticbeanstalk:RebuildEnvironment",
        "elasticbeanstalk:RequestEnvironmentInfo",
        "elasticbeanstalk:RestartAppServer",
        "elasticbeanstalk:RetrieveEnvironmentInfo",
        "elasticbeanstalk:SwapEnvironmentCNAMEs",
        "elasticbeanstalk:TerminateEnvironment",
        "elasticbeanstalk:UpdateApplicationVersion",
        "elasticbeanstalk:UpdateConfigurationTemplate",
        "elasticbeanstalk:UpdateEnvironment",
        "elasticbeanstalk:ValidateConfigurationSettings"
      ],
      "Resource": [
        "arn:aws:elasticbeanstalk:us-east-1:123456789012:environment/Test/Test-env-prod"
      ]
    }
  ]
}
The above policy will prevent any user it is attached to from creating or deleting any applications, and it will further deny the user the listed actions on the resource ARN specified: the app named Test and the environment named Test-env-prod.
To restrict access to your specific environment, use this policy and change the ARN's region (us-east-1), account number (123456789012), app name (Test), and environment name (Test-env-prod) to match your setup.
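If you also want to grant the collaborators explicit deployment rights on the dev environment (rather than relying on a broader allow elsewhere), a minimal sketch could look like the following. It assumes the same example account, region, and app name as above and a dev environment named Test-env-dev; the exact set of actions you need depends on your deployment tooling, and deployments also require access to the Elastic Beanstalk S3 bucket for the application version bundle:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "elasticbeanstalk:CreateApplicationVersion",
        "elasticbeanstalk:UpdateEnvironment",
        "elasticbeanstalk:Describe*"
      ],
      "Resource": [
        "arn:aws:elasticbeanstalk:us-east-1:123456789012:applicationversion/Test/*",
        "arn:aws:elasticbeanstalk:us-east-1:123456789012:environment/Test/Test-env-dev"
      ]
    }
  ]
}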
You can find a list of ElasticBeanstalk resource ARN formats here.

Access s3 bucket from different aws account

I am trying to restore a database as part of our testing. The backups exist in an S3 bucket in the prod account. My database is running on an EC2 instance in the dev account.
Can anyone tell me how I can access the prod S3 bucket from the dev account?
Steps:
- I created a role in the prod account with a trust relationship with the dev account.
- I added a policy to the role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListAllMyBuckets",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": "arn:aws:s3:::prod"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::prod/*"
    }
  ]
}
In the dev account I created a role with the following assume-role policy:
> { "Version": "2012-10-17", "Statement": [
> {
> "Effect": "Allow",
> "Action": "sts:AssumeRole",
> "Resource": "arn:aws:iam::xxxxxxxxx:role/prod-role"
> } ] }
But I am unable to access the S3 bucket; can someone point out where I am going wrong?
I also added the above policy to an existing role, so does that mean it's not working because of my instance profile? (The error is inconsistent.)
Please help and correct me if I am wrong anywhere. I am looking for a solution in terms of a role, not a user.
Thanks in advance!
So let's recap: you want to access your prod bucket from the dev account.
There are two ways to do this. Method 1 is your approach; however, I would suggest Method 2.
Method 1: Use roles. This is what you described above, and it works. However, you cannot sync bucket to bucket when the buckets are in different accounts, because different access keys need to be exported each time. You'll most likely have to sync the files from the prod bucket to the local filesystem, then from the local filesystem to the dev bucket.
How to do this:
- Using roles, create a role in the production account that has access to the bucket. The trust relationship of this role must trust the role in the dev account that is assigned to the EC2 instance.
- Attach the policy granting access to the prod bucket to that role.
- Once that's all configured, update the EC2 instance role in dev to allow sts:AssumeRole on the role you've defined in production.
- On the EC2 instance in dev, run aws sts assume-role --role-arn <the role on prod> --role-session-name <a name to identify the session>. This will give you back three values: AWS_SECRET_ACCESS_KEY, AWS_ACCESS_KEY_ID, and AWS_SESSION_TOKEN.
- On your EC2 instance, run set -a; AWS_SECRET_ACCESS_KEY=${secret_access_key}; AWS_ACCESS_KEY_ID=${access_key_id}; AWS_SESSION_TOKEN=${session_token}.
- Once those variables have been exported, run aws sts get-caller-identity; it should show that you are now using the role you provisioned in production.
- You should now be able to sync the files to the local system. Once that's done, unset the AWS keys we set as environment variables, then copy the files from the EC2 instance to the bucket in dev.
Notice how there are two copy steps here? That can get quite annoying - look at Method 2 for how to avoid it:
Method 2: Update the prod bucket policy to trust the dev account - this will mean you can access the prod bucket from dev and do a bucket to bucket sync/cp.
I would highly recommend you take this approach as it will mean you can copy directly between buckets without having to sync to the local fs.
To do this, you will need to update the bucket policy on the bucket in production with a Principal block that trusts the AWS account ID of dev. For example, update your prod bucket policy to look something like this:
NOTE: granting s3:* is bad, and granting full access to the whole account probably isn't advisable either, since anyone in that account with the right S3 permissions can then access this bucket, but for simplicity I'm going to leave it like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Example permissions",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::DEV_ACC_ID:root"
      },
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::PROD_BUCKET_NAME",
        "arn:aws:s3:::PROD_BUCKET_NAME/*"
      ]
    }
  ]
}
Once you've done this, on the dev account, attach the policy in your main post to the dev ec2 instance role (the one that grants s3 access). Now when you connect to the dev instance, you do not have to export any environment variables, you can simply run aws s3 ls s3://prodbucket and it should list the files.
You can sync the files between the two buckets using aws s3 sync s3://prodbucket s3://devbucket --acl bucket-owner-full-control and that should copy all the files from prod to dev, and on top of that should update the ACLs of each file so that dev owns them (meaning you have full access to the files in dev).
You need to assume the role in the production account from the dev account. Call sts:AssumeRole and then use the credentials returned to access the bucket.
You can alternatively add a bucket policy that allows the dev account to read from the prod account. You wouldn't need the cross account role in the prod account in this case.

connecting Docker to a cloud provider, Amazon AWS

Context: I was going through Link to Amazon Web Services to create Swarms, in order to connect to my provider.
The role was created successfully.
Then, while creating the policy to associate with the role, a problem occurred.
Problem:
An error occurred: Cannot exceed quota for PolicySize: 5120
As suggested by them, this is what I need to add in policy:
https://docs.docker.com/docker-for-aws/iam-permissions/
Did some research and people seem to like this solution:
https://github.com/docker/machine/issues/1655
How can I create the policy using the best method?
Given that the Docker documentation appears to be wrong - it doesn't work in my case - what's the best approach?
You are looking at the wrong instructions. To connect Docker Cloud to AWS, follow these instructions instead: https://docs.docker.com/docker-cloud/infrastructure/link-aws/
It comes down to the following three steps:
1. Create an AWS policy for Docker Cloud.
2. Create a Docker Cloud role and attach the policy from step 1.
3. Attach the AWS role/account to Docker Cloud.
The policy in step 1 is pretty simple: it should be allowed to perform EC2 instance-related actions (your screenshot of the policy suggests it doesn't grant EC2 permissions):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "ec2:*",
        "iam:ListInstanceProfiles"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
The role must have the permissions to implement the policy.
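For step 2, the role also needs a trust relationship so that Docker Cloud can assume it. A minimal sketch follows; the account ID and external ID to use are given in the link-aws instructions above, so the values below are only placeholders:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::DOCKER_CLOUD_ACCOUNT_ID:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "sts:ExternalId": "YOUR_DOCKER_CLOUD_EXTERNAL_ID"
        }
      }
    }
  ]
}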
For a detailed post on the deployment via docker-cloud see: https://blog.geografia.com.au/how-we-are-using-docker-cloud-for-automated-testing-and-deployments-of-applications-bb87ec3173e7