I have a really simple setup for my Serverless application that uses Node.js. Everything builds just fine in Bitbucket Pipelines except for the deployment step, which runs the standard serverless deploy command and fails with the following error message:
User: arn:aws:iam::123456789012:user/bitbucket-build-user is not authorized to perform: cloudformation:DescribeStackResources on resource: arn:aws:cloudformation:my-region:123456789012:stack/mylambda-dev/*
Locally it works just fine. Here's the Pipelines configuration:
image:
  name: mydocker/serverless-docker:latest
  username: $MY_DOCKER_HUB_USERNAME
  password: $MY_DOCKER_HUB_PASSWORD
  email: $MY_DOCKER_HUB_EMAIL

pipelines:
  default:
    - step:
        script:
          - npm install
          - npm run lint
  branches:
    master:
      - step:
          script:
            - npm install
            - npm run lint
            - serverless config credentials --overwrite --provider aws --key $MY_AWS_KEY --secret $MY_AWS_SECRET
            - serverless deploy
Is there something I'm missing here?
Since Serverless uses AWS CloudFormation for a full deploy (the one you do with serverless deploy), the bitbucket-build-user has to have certain permissions to manage CloudFormation stacks. So at the bare minimum, you'll need to attach a policy that looks like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudformation:Describe*",
        "cloudformation:List*",
        "cloudformation:Get*",
        "cloudformation:PreviewStackUpdate",
        "cloudformation:CreateStack",
        "cloudformation:UpdateStack",
        "cloudformation:DeleteStack"
      ],
      "Resource": "*"
    }
  ]
}
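If you manage the user from the CLI rather than the console, attaching that policy could look roughly like this (a sketch; the JSON file name and inline policy name are placeholders I made up):
# Save the JSON above as serverless-deploy-policy.json, then attach it
# to the build user as an inline policy (names here are illustrative).
aws iam put-user-policy \
  --user-name bitbucket-build-user \
  --policy-name serverless-deploy \
  --policy-document file://serverless-deploy-policy.json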
Take a look at https://github.com/serverless/serverless/issues/1439 to get an idea of what permissions bitbucket-build-user might need.
Personally, I just use https://github.com/dancrumb/generator-serverless-policy to generate those policies instead of writing them manually every time.
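For reference, that project is a Yeoman generator, so the usage is roughly the following (a sketch based on Yeoman conventions; check the project's README for the exact prompts and output):
npm install -g yo generator-serverless-policy
yo serverless-policy   # answer the prompts; the generator writes out a policy JSON file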
Related
I am building an Amplify React app and trying to connect it to my private npm packages in my CodeArtifact repository.
In the build file amplify.yml, I added
preBuild:
  commands:
    - aws codeartifact login --tool npm --repository myrepo --domain mydomain --namespace mynamespace --domain-owner myid
    - yarn install
and gave the amplify service role the following policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "codeartifact:GetAuthorizationToken",
        "codeartifact:GetRepositoryEndpoint",
        "codeartifact:ReadFromRepository"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "sts:GetServiceBearerToken",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "sts:AWSServiceName": "codeartifact.amazonaws.com"
        }
      }
    }
  ]
}
This setup works for CodeBuild building Lambda functions, but in Amplify, I get
Successfully configured npm to use AWS CodeArtifact repository
after the login command and
error An unexpected error occurred: "<some-package-url>: Request failed \"401 Unauthorized\"".
when installing dependencies.
I debugged the environment in the Amplify build and did not find any AWS access key ID or secret, but I don't know why.
OK, I resolved my issue by deleting yarn.lock and adding it to .gitignore.
The problem was that yarn caches the resolved package address in yarn.lock. That address pointed to my CodeArtifact repository, because I was logged in while installing dependencies on my dev machine. Since yarn.lock is not in .gitignore by default, I simply pushed it into the build. When yarn installs dependencies during the build, it uses the cached addresses, which can no longer be reached.
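In concrete terms, the cleanup could look something like this (a sketch of the steps described above; adjust the commit message and paths to your repo):
rm yarn.lock
echo "yarn.lock" >> .gitignore
git rm --cached yarn.lock    # stop tracking the lockfile that pins CodeArtifact URLs
git commit -am "Stop committing yarn.lock with CodeArtifact URLs"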
I am trying to get started with Terraform and am using GitLab CI/CD to interact with it. My Runner is unable to assume the IAM Role which has elevated privileges to create AWS resources. My Google-fu on this has failed me.
The error received is:
Error: error configuring Terraform AWS Provider: IAM Role
(my:arn) cannot be assumed.

There are a number of possible causes of this - the most common are:
  * The credentials used in order to assume the role are invalid
  * The credentials do not have appropriate permission to assume the role
  * The role ARN is not valid
I have created an access key/secret key pair in IAM and have attempted supplying these as GitLab CI/CD variables, as environment variables exported directly in my before_script, and even (not recommended) hardcoded into the provider stanza. No matter what, I still get this same error.
What is extra strange is that AWS shows the key is being used: the "Last Used" column always reflects a timestamp of the last attempt at running the pipeline. For better or worse, the key belongs to my root AWS account - this is a sandbox project and I don't have any IAM users - so it's not clear to me how Terraform is unable to use these credentials to assume a role when, according to AWS, it's able to access my account with them, and my account has root privileges.
Here is my provider.tf:
terraform {
  required_version = ">= 0.14"
  backend "s3" {}
}

provider "aws" {
  region     = "us-east-1"
  access_key = "redacted"
  secret_key = "redacted"

  assume_role {
    role_arn = "arn:aws:iam::redacted:role/gitlab-runner-role"
  }
}
Here is the relevant section of my .gitlab-ci.yml for this stage:
.terraform_init: &terraform_init |-
  terraform init -backend-config="bucket=my-terraform-state" -backend-config="region=us-east-1" -backend-config="key=terraform.tfstate"

tf-plan:
  image:
    name: hashicorp/terraform
    entrypoint: [""]
  stage: plan
  before_script:
    - *terraform_init
  script:
    - terraform plan -out=tfplan.plan
    - terraform show --json tfplan.plan | convert_report > tfplan.json
  needs:
    - job: tf-val
  tags:
    - my-runner
My main.tf only contains a basic aws_instance stanza and my terraform validate stage (omitted above) says it's in ship-shape. These are the only 3 files in my repo.
My gitlab-runner-role only contains one Policy, gitlab-runner-policy, whose JSON is:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::*/*",
        "arn:aws:s3:::my-terraform-state"
      ]
    }
  ]
}
TIA for any advisement... really banging my head up against the wall on this one.
Turns out that assume_role is only needed for cross-account work. I was doing all of the work within my own account, so removing it allowed Terraform to use the keys directly without needing a separate IAM role (or it is able to do what it needs via the role attached to the runner as an instance profile). It's not clear to me why specifying assume_role anyway would result in an error when the access should be there, but removing it has fixed this issue.
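For completeness, the provider block that works ends up looking roughly like this (a sketch; the credentials can also come from GitLab CI/CD variables or the standard AWS environment variables instead of being hardcoded):
provider "aws" {
  region     = "us-east-1"
  access_key = "redacted"  # or rely on AWS_ACCESS_KEY_ID from the environment
  secret_key = "redacted"  # or rely on AWS_SECRET_ACCESS_KEY from the environment

  # no assume_role block - the keys are used directly within the same account
}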
The new version of the CDK deployment strategy allows a user to assume roles like cdk-{qualifier}-deploy.
I want to give a developer the ability to perform cdk diff from their local machine, but behind the scenes cdk diff assumes the cdk-{qualifier}-deploy role, which is the power I'm reserving for the CI/CD pipeline, with a secret IAM user and a deploy trust relationship.
I removed the deploy ability from the local IAM user and ran some cdk diffs. The stack does quite a bit with Lambda, ECS, Route 53, and VPCs.
Here's the policy I was able to give the local user to make this work:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CDKDiff",
      "Effect": "Allow",
      "Action": [
        "cloudformation:DescribeStacks",
        "cloudformation:GetTemplate"
      ],
      "Resource": [
        "arn:aws:cloudformation:us-east-1:123:stack/OnDemandStackUE1/*",
        "arn:aws:cloudformation:sa-east-1:123:stack/OnDemandStackSE1/*",
        "arn:aws:cloudformation:eu-west-2:123:stack/OnDemandStackEU2/*"
      ]
    }
  ]
}
Now I get this output for a small change:
Assuming role failed: User: arn:aws:iam::123:user/dummy is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::123:role/cdk-me-deploy-role
current credentials could not be used to assume '...deploy-role..', but are for the right account. Proceeding anyway.
Resources
[~] AWS::Lambda::Function ecsd_lambda ecslambda3D927DBA
└─ [~] Timeout
├─ [-] 10
└─ [+] 9
It looks like this works with very minimal user permissions: just DescribeStacks and GetTemplate. Is this an adequate solution? Should I try to use a role instead, via the synthesizer stack or something?
I currently have an AWS Fargate service configured with ApplicationLoadBalancedFargateService via the AWS CDK (Python), and I would like to enable ExecuteCommand on the Fargate containers so that I can get access to them.
But so far I have been unable to find a method to enable Exec on this Fargate service.
Any help on this would be much appreciated.
In case anyone needs the command to enable shell access from the CLI:
aws ecs update-service --cluster <cluster_name> --service <service_name> --task-definition <task_definition_name> --enable-execute-command --force-new-deployment
On a bit of a side note: you will also need to create a custom AWS service role for ecs-tasks that includes Systems Manager permissions, in addition to the AmazonECSTaskExecutionRolePolicy that comes with the default ecs-tasks role, ecsTaskExecutionRole.
Here's an example policy.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "ssmmessages:CreateDataChannel",
        "ssmmessages:OpenDataChannel",
        "ssmmessages:OpenControlChannel",
        "ssmmessages:CreateControlChannel"
      ],
      "Resource": "*"
    }
  ]
}
One can now enable execute command via CDK:
declare const cluster: ecs.Cluster;
const loadBalancedFargateService = new ecsPatterns.ApplicationLoadBalancedFargateService(this, 'Service', {
  cluster,
  memoryLimitMiB: 1024,
  desiredCount: 1,
  cpu: 512,
  taskImageOptions: {
    image: ecs.ContainerImage.fromRegistry("amazon/amazon-ecs-sample"),
  },
  enableExecuteCommand: true
});
Source: https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_ecs_patterns-readme.html#ecs-exec
There seems to be an open issue/feature request in the aws-cdk repo that describes your issue.
The workaround is using Escape Hatches:
# Drop down to the underlying CfnService (the L1 escape hatch) and set the property directly
cfn_service = self.web_service.service.node.default_child
cfn_service.add_override("Properties.EnableExecuteCommand", True)
From my experience, I learned that ApplicationLoadBalancedFargateService is still limited. In some cases it makes sense to use FargateService and create the load balancer, port mappings, and so on yourself.
I have files stored on S3 and wrote an .ebextensions config to automatically copy them to new instances. I'm receiving this error in the Elastic Beanstalk console:
[Instance: INSTANCEID Module: AWSEBAutoScalingGroup ConfigSet: null] Command failed on instance. Return code: 1 Output: [CMD-AppDeploy/AppDeployStage0/EbExtensionPreBuild] command failed with error code 1: Error occurred during build: Failed to retrieve https://s3-us-west-1.amazonaws.com/MyBucket/MyFolder/_MyFile.txt: HTTP Error 403 : AccessDenied
My .ebextensions config file has this section:
files:
  "/target/file/path" :
    mode: "000777"
    owner: ec2-user
    group: ec2-user
    source: https://s3-us-west-1.amazonaws.com/_MyBucket_/_MyFolder_/_MyFile.txt
In attempting to make this file copying work, I've also relaxed permissions by giving the Elastic Beanstalk IAM role the standard read-only access policy to all of S3. Its policy is this:
{
  "Effect": "Allow",
  "Action": [
    "s3:Get*",
    "s3:List*"
  ],
  "Resource": "*"
}
Yet the prebuild copying step still fails. Did I give the source URL in the correct format? Is there another security entity/policy involved? Help please :)
The documentation is very sketchy on the subject (probably an ideal candidate for StackExchange Docs!).
To do this correctly with .ebextensions, you need to allow the Beanstalk instance IAM role in the bucket policy, set up an AWS::CloudFormation::Authentication auth config, and attach that config to the remote sources. This is kind of a hybrid of all the other answers, but all of them failed for me in one way or another.
Assuming your IAM instance role is aws-elasticbeanstalk-ec2-role:
Set your AWS bucket to allow the Beanstalk IAM role. Edit the "bucket policy":
{
  "Version": "2012-10-17",
  "Id": "BeanstalkS3Copy",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "AWS": "<beanstalk_iam_role_arn>"
      },
      "Action": [
        "s3:ListBucketVersions",
        "s3:ListBucket",
        "s3:GetObjectVersion",
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::<bucket_name>",
        "arn:aws:s3:::<bucket_name>/*"
      ]
    }
  ]
}
where:
beanstalk_iam_role_arn = the fully qualified instance IAM role ARN. See the "IAM role" associated with a running instance if available, or see the environment configuration. Example: arn:aws:iam::12345689:role/aws-elasticbeanstalk-ec2-role
bucket_name = your bucket name
In your .ebextensions/myconfig.config, add an S3 authentication block that uses your IAM instance role:
Resources:
  AWSEBAutoScalingGroup:
    Metadata:
      AWS::CloudFormation::Authentication:
        S3Auth:
          type: "s3"
          buckets: ["bucket_name"]
          roleName:
            "Fn::GetOptionSetting":
              Namespace: "aws:asg:launchconfiguration"
              OptionName: "IamInstanceProfile"
              DefaultValue: "aws-elasticbeanstalk-ec2-role"
Set bucket_name appropriately
Define a remote file and attach the S3 Authentication block:
"/etc/myfile.txt" :
mode: "000400"
owner: root
group: root
authentication: "S3Auth" # Matches to auth block above.
source: https://s3-eu-west-1.amazonaws.com/mybucket/myfile.txt
Set your source URL appropriately
Similar to chaseadamsio's answer, you can configure the role given to the EC2 instance with a policy to access S3 resources, then use the pre-installed AWS CLI utilities to move files around.
The way I approached this is to create a role dedicated to the given EB application, then attach a policy similar to:
"Statement": [
{
"Sid": "<sid>",
"Effect": "Allow",
"Action": [
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::<your_bucket_path>/*"
]
}
]
This gives your instance access; then, to get the files, add a commands block to your config such as:
commands:
  01-get-file:
    command: aws s3 cp s3://<your_bucket_path>/your-file.txt /home/ec2-user
  02-execute-actions:
    [unpack, run scripts, etc..]
Obviously you can use other AWS CLI utilities as needed. I found this solved a lot of problems I was having with S3 access and makes deployment a lot easier.
I found a solution to overcome this error. It turns out adding a Resources section to the .ebextensions config file makes it work. The entire file becomes:
files:
  "/target/file/path" :
    mode: "000777"
    owner: ec2-user
    group: ec2-user
    source: https://s3-us-west-1.amazonaws.com/_MyBucket_/_MyFolder_/_MyFile.txt

Resources:
  AWSEBAutoScalingGroup:
    Metadata:
      AWS::CloudFormation::Authentication:
        S3Access:
          type: S3
          roleName: aws-elasticbeanstalk-ec2-role
          buckets: _MyBucket
At this point, I don't know enough to grok why it has to be this way. Hopefully it can help someone who's lost move forward and eventually gain a better understanding. I based my answer on this link https://forums.aws.amazon.com/message.jspa?messageID=541634
An alternative to setting the .ebextensions config would be to set a policy on the aws-elasticbeanstalk-ec2-role within the IAM Manager (or create a new role specifically for your Elastic Beanstalk environments, to sandbox your autoscaled EC2 instances).
To do so, go to the IAM manager within the web console and click on "Roles" on the left side. You should see your instance role name in the list; clicking on it will take you to the administration page for that particular role. Attach a new role policy to the role under "Permissions" with a policy document matching what you want your EC2 instances to have permission to do (in this case, you'd give it a policy to access an S3 bucket called _MyBucket), and you should no longer need the Resources section in your .ebextensions config.
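The same thing can be done from the CLI instead of the web console; a rough sketch (the inline policy name and JSON file are placeholders, and the policy document would grant s3:GetObject on the bucket in question):
# Attach an inline S3 read policy to the instance role (illustrative names)
aws iam put-role-policy \
  --role-name aws-elasticbeanstalk-ec2-role \
  --policy-name s3-read-mybucket \
  --policy-document file://s3-read-mybucket.json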
If you have the IAM role for the machine configured to get access to the file, you can do the following in .ebextensions:
commands:
  01a_copy_file:
    command: aws s3 cp s3://bucket/path/file /destination/