I have recently started getting the following error on the release change action in the AWS CodePipeline console. Also attaching the screenshot.
Action execution failed
Insufficient permissions The provided role does not have permissions
to perform this action. Underlying error: Access Denied (Service:
Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID:
CA26EF93E3DAF8F0; S3 Extended Request ID:
mKkobqLGbj4uco8h9wDOBjPeWrRA2ybCrEsVoSq/MA4IFZqJb6QJSrlNrKk/EQK40TfLbTbqFuQ=)
I can't find any resources online anywhere for this error code.
Your pipeline is trying to access an S3 bucket, but the AWS CodePipeline service role does not have permission to access it. Create an IAM policy that grants access to S3 and attach it to the CodePipeline service role.
As #Jeevagan said, you must create a new IAM policy that grants access to the pipeline buckets.
Do not forget to add the following actions:
Action:
  - "s3:GetObject"
  - "s3:List*"
  - "s3:GetObjectVersion"
I lost a few minutes because of this one in particular: GetObjectVersion
By checking your codedeploy-output, you'll be able to see that the process downloads a particular version of your artefact using the "versionId" parameter.
Hope it will help.
You are missing the GetBucketVersioning action in your policy, so a correct example looks like the one below. I don't know why it isn't mentioned anywhere in the reference documentation:
- PolicyName: AccessRequiredByPipeline
  PolicyDocument:
    Version: '2012-10-17'
    Statement:
      - Action:
          - s3:PutObject
          - s3:GetObject
          - s3:GetObjectVersion
        Effect: Allow
        Resource: !Sub ${YouBucket.Arn}/*
      - Action:
          - s3:GetBucketVersioning
        Resource: !Sub ${YouBucket.Arn}
        Effect: Allow
      - Action:
          - kms:GenerateDataKey
          - kms:Decrypt
        Effect: Allow
        Resource: !GetAtt KMSKey.Arn
Another potential culprit that masquerades behind this S3 error is missing KMS permissions on the IAM role for the CodePipeline. If you configured your CodePipeline to use KMS encryption, then the service role associated with the CodePipeline also needs KMS permissions on that KMS key in order to interact with the KMS-encrypted objects in S3. In my experience, the missing KMS permissions cause the same error message referencing S3 to appear.
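As a minimal sketch (assuming a CloudFormation-style policy statement; the key ARN is a placeholder you would replace with your own), the extra statement on the pipeline's service role could look like this:

- Effect: Allow
  Action:
    - kms:Decrypt
    - kms:GenerateDataKey
  Resource: arn:aws:kms:us-east-1:111122223333:key/your-key-id  # placeholder - the ARN of the KMS key used by your pipeline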
I just ran into this issue, but the permissions were all set properly - I used the same CloudFormation template with other projects no problem. It turned out that the key name I was using in the S3 bucket was too long. Apparently it doesn't like anything more than 20 characters. Once I changed the key name in my S3 bucket (and all of its associated references in the CloudFormation template files), everything worked properly.
I ran into the same issue when I used CloudFormation to build my CI/CD. My problem was that the CodePipeline ArtifactStore pointed to the wrong location in S3 ("codepipeline", a folder the role was not allowed to access, in my case). Changing the ArtifactStore to an existing folder fixed my issue.
You can view pipeline details, like where the SourceArtifact points, by following this link.
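For reference, here is a minimal sketch of the relevant fragment of a CloudFormation-defined pipeline; the bucket name is a placeholder:

# Fragment of an AWS::CodePipeline::Pipeline resource - only the artifact store is shown
ArtifactStore:
  Type: S3
  Location: my-codepipeline-artifact-bucket  # placeholder - must be an existing bucket the service role can access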
Related
I am trying to deploy an AWS Lambda function that gets triggered when an AVRO file is written to an existing S3 bucket.
My serverless.yml configuration is as follows:
service: braze-lambdas
provider:
  name: aws
  runtime: python3.7
  region: us-west-1
  role: arn:aws:iam::<account_id>:role/<role_name>
  stage: dev
  deploymentBucket:
    name: serverless-framework-dev-us-west-1
    serverSideEncryption: AES256
functions:
  hello:
    handler: handler.hello
    events:
      - s3:
          bucket: <company>-dev-ec2-us-west-2
          existing: true
          events: s3:ObjectCreated:*
          rules:
            - prefix: gaurav/lambdas/123/
            - suffix: .avro
When I run serverless deploy, I get the following error:
ServerlessError: An error occurred: IamRoleCustomResourcesLambdaExecution - API: iam:CreateRole User: arn:aws:sts::<account_id>:assumed-role/serverless-framework-dev/jenkins_braze_lambdas_deploy is not authorized to perform: iam:CreateRole on resource: arn:aws:iam::<account_id>:role/braze-lambdas-dev-IamRoleCustomResourcesLambdaExec-1M5QQI6P2ZYUH.
I see some mentions of Serverless needing iam:CreateRole because of how CloudFormation works but can anyone confirm if that is the only solution if I want to use existing: true? Is there another way around it except using the old Serverless plugin that was used prior to the framework adding support for the existing: true configuration?
Also, what is 1M5QQI6P2ZYUH in arn:aws:iam::<account_id>:role/braze-lambdas-dev-IamRoleCustomResourcesLambdaExec-1M5QQI6P2ZYUH? Is it a random identifier? Does this mean that Serverless will try to create a new IAM role every time I try to deploy the Lambda function?
I've just encountered this and overcome it.
I also have a Lambda to which I want to attach an S3 event for an already existing bucket.
My place of work has recently tightened up AWS account security through the use of permission boundaries.
So I've encountered a very similar error during deployment:
Serverless Error ---------------------------------------
An error occurred: IamRoleCustomResourcesLambdaExecution - API: iam:CreateRole User: arn:aws:sts::XXXXXXXXXXXX:assumed-role/xx-crossaccount-xx/aws-sdk-js-1600789080576 is not authorized to perform: iam:CreateRole on resource: arn:aws:iam::XXXXXXXXXXXX:role/my-existing-bucket-IamRoleCustomResourcesLambdaExec-LS075CH394GN.
If you read Using existing buckets on the serverless site, it says
NOTE: Using the existing config will add an additional Lambda function and IAM Role to your stack. The Lambda function backs-up the Custom S3 Resource which is used to support existing S3 buckets.
In my case I needed to further customise this extra role that Serverless creates, so that it is also assigned the permission boundary my employer requires on all roles. This happens in the resources: section.
If your employer is using permission boundaries, you'll obviously need to know the correct ARN to use:
resources:
  Resources:
    IamRoleCustomResourcesLambdaExecution:
      Type: AWS::IAM::Role
      Properties:
        PermissionsBoundary: arn:aws:iam::XXXXXXXXXXXX:policy/xxxxxxxxxxxx-global-boundary
Some info on the serverless Resources config
Have a look at your own serverless.yaml; you may already have a permission boundary defined in the provider section. If so, you'll find it under rolePermissionsBoundary, which was added in (I think) version 1.64 of Serverless:
provider:
  rolePermissionsBoundary: arn:aws:iam::XXXXXXXXXXXX:policy/xxxxxxxxxxxx-global-boundary
If so, you should be able to use that ARN in the resources: sample I've posted here.
For testing purposes we can use:
provider:
  name: aws
  runtime: python3.8
  region: us-east-1
  iamRoleStatements:
    - Effect: Allow
      Action: "*"
      Resource: "*"
For running sls deploy, I would suggest you use a role/user/policy with Administrator privileges.
If you're restricted due to your InfoSec team or the like, then I suggest you have your InfoSec team have a look at docs for "AWS IAM Permission Requirements for Serverless Framework Deploy." Here's a good link discussing it: https://github.com/serverless/serverless/issues/1439. At the very least, they should add iam:CreateRole and that can get you unblocked for today.
Now I will address your individual questions:
can anyone confirm if that is the only solution if I want to use existing: true
Apples and oranges. Your S3 configuration has nothing to do with your error message. iam:CreateRole must be added to the policy of whatever/whoever is doing sls deploy.
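As a rough sketch (the resource pattern is a placeholder based on the role name in your error), the identity running sls deploy needs at least a statement along these lines:

# Hypothetical statement for the IAM user/role that runs `sls deploy`
- Effect: Allow
  Action:
    - iam:CreateRole
  Resource: arn:aws:iam::<account_id>:role/braze-lambdas-dev-*  # scope to the stack's role names if possible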
Also, what is 1M5QQI6P2ZYUH in arn:aws:iam::<account_id>:role/braze-lambdas-dev-IamRoleCustomResourcesLambdaExec-1M5QQI6P2ZYUH? Is it a random identifier? Does this mean that serverless will try to create a new role every time I try to deploy the function?
Yes, it is a random identifier.
No, sls will not create a new role every time. This unique ID is cached and reused for updates to an existing stack.
If a stack is destroyed and recreated, it will generate a new unique ID.
I am trying to create a full-access role (using an AWS managed policy) for all EC2 instances to call AWS services, via CloudFormation in YAML.
This is my code:
AWSTemplateFormatVersion: 2010-09-09
Description: Ansible Role
Resources:
  AnsibleRole:
    Type: 'AWS::IAM::Role'
    Properties:
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: 'Allow'
            Action: 'ec2:*'
            Principal:
              Service: 'ec2.awsamazon.com'
      ManagedPolicyArns:
        - 'arn:aws:iam::aws:policy/AmazonEC2FullAccess'
      RoleName: 'EC2-FullAccess'
    DeletionPolicy: Delete
I get the following error:
Invalid principal in policy: "SERVICE":"ec2.awsamazon.com"
(Service: AmazonIdentityManagement; Status Code: 400; Error
Code: MalformedPolicyDocument; Request ID: e43214f8-b6f9-11e9-9891-4dc84fd279dd)
I am perplexed as to why it doesn't recognize the service. Additionally, if I change Action: 'ec2:*' to Action: 'sts.AssumeRole' I get another error.
Any assistance is greatly appreciated.
There are multiple issues with your template:
The service identifier is malformed. It should be 'ec2.amazonaws.com'.
The action must be 'sts:AssumeRole'. This is the only action which is valid inside an IAM trust policy.
The DeletionPolicy is not necessary because it is the default for this resource.
Set the RoleName only if really necessary, because IAM names are global on a per-account basis and you cannot create multiple instances of the stack when using this attribute.
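Putting those fixes together, a corrected version of the template (keeping the logical names from the question) would be roughly:

AWSTemplateFormatVersion: 2010-09-09
Description: Ansible Role
Resources:
  AnsibleRole:
    Type: 'AWS::IAM::Role'
    Properties:
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: 'Allow'
            Principal:
              Service: 'ec2.amazonaws.com'   # corrected service identifier
            Action: 'sts:AssumeRole'         # the only valid action in a trust policy
      ManagedPolicyArns:
        - 'arn:aws:iam::aws:policy/AmazonEC2FullAccess'
      # RoleName and DeletionPolicy omitted per the notes above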
For more information see the AWS CloudFormation template examples.
You are using the correct managed policy ARN if you want to grant your new role permission to call all kinds of EC2 actions. If you want to restrict your Ansible role further, take a look at the example policies for EC2 in the docs [1][2]. They are much more restrictive (and thus more secure) than the managed full-access policy AmazonEC2FullAccess. Maybe other managed policies such as AmazonEC2ReadOnlyAccess [3] are also feasible?
References
[1] https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ExamplePolicies_EC2.html
[2] https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-policies-for-amazon-ec2.html
[3] https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/UsingIAM.html#intro-to-iam
Below is the policy template created to restrict any principal to only the below actions:
Resources:
  MyPolicy:
    Type: AWS::IAM::ManagedPolicy
    Properties:
      Description: RulesToCreateUpdatePolicy
      ManagedPolicyName: some-policy
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Action:
              - "iam:CreatePolicy"
              - "iam:DeletePolicy"
              - "iam:CreatePolicyVersion"
            Resource:
              - !Sub "arn:aws:iam::${AWS::AccountId}:policy/xyz-lambda-*"
on policy resources whose names start with xyz-lambda-.
This policy is assigned to an EC2 host via a role.
Is a policy with this name (like xyz-lambda-*) supposed to already exist in AWS before uploading this policy to AWS?
No, when you are specifying a resource in your policy document, that resource doesn't need to exist at all.
If you consider this action
iam:CreatePolicy
together with your resource, what it does is grant the necessary permissions to create a policy with that particular name, xyz-lambda-*. It wouldn't make much sense to require the existence of such a resource if the policy is granting permission to create it in the first place.
When you consider the delete action
iam:DeletePolicy
if the resource doesn't exist, then it does nothing. Once you create a policy with the appropriate name, you will be able to delete it; it doesn't matter whether the policy existed before this ManagedPolicy was created or after, or whether you have deleted and recreated a policy with such a name any number of times.
Lastly, since you have stated that this policy is attached to an EC2 role, it should work without errors. But I would still recommend granting the iam:ListPolicies permission for any resource (policy) discovery that might be performed by an application running on the EC2 instance. If you don't allow this action in your policy, your application will not be able to list policies and you would have to design some error-prone workaround based on guessing or a strict naming scheme.
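If you follow that suggestion, a sketch of the extra statement for the same PolicyDocument (IAM list actions generally require a wildcard resource) would be:

- Effect: Allow
  Action:
    - "iam:ListPolicies"
  Resource: "*"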
The policy name is not important; resources are unique by ARN only. IAM resources are unique within an AWS account, and it's fine if you haven't created this resource beforehand.
I am trying to understand the below policy
Policies:
  - PolicyName: InstanceIAMPolicy
    PolicyDocument:
      Version: 2012-10-17
      Statement:
        - Effect: Allow
          Action:
            - 'ssm:DescribeAssociation'
            - 'ssm:GetDeployablePatchSnapshotForInstance'
            - 'ssm:GetDocument'
            - 'ssm:GetManifest'
            - 'ssm:GetParameters'
            - 'ssm:ListInstanceAssociations'
            - 'ssm:PutComplianceItems'
            - 'ssm:PutConfigurePackageResult'
            - 'ssm:UpdateAssociationStatus'
            - 'ssm:UpdateInstanceAssociationStatus'
            - 'ssm:UpdateInstanceInformation'
          Resource: '*'
        - Effect: Allow
          Action:
            - 'ec2messages:AcknowledgeMessage'
            - 'ec2messages:FailMessage'
            - 'ec2messages:GetEndpoint'
            - 'ec2messages:GetMessages'
            - 'ec2messages:SendReply'
          Resource: '*'
My question is related to the Resource parameter specified as *. Does that mean the actions can be performed on any resource within your AWS infrastructure? I am really new to CloudFormation templates and AWS. Thanks for your help.
The short answer is YES.
In your template you have two sections under Statement. Each section defines "allow" actions. For each section you are "allowing" the listed APIs for ALL resources. The first section is for SSM and the second is for the EC2 Messages APIs used by SSM.
Note: based upon the allow actions, you can merge those two sections together.
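A merged version (illustrative only; it is simply the union of the two action lists with the same wildcard resource) would look like this:

Statement:
  - Effect: Allow
    Action:
      - 'ssm:DescribeAssociation'
      - 'ssm:GetDeployablePatchSnapshotForInstance'
      - 'ssm:GetDocument'
      - 'ssm:GetManifest'
      - 'ssm:GetParameters'
      - 'ssm:ListInstanceAssociations'
      - 'ssm:PutComplianceItems'
      - 'ssm:PutConfigurePackageResult'
      - 'ssm:UpdateAssociationStatus'
      - 'ssm:UpdateInstanceAssociationStatus'
      - 'ssm:UpdateInstanceInformation'
      - 'ec2messages:AcknowledgeMessage'
      - 'ec2messages:FailMessage'
      - 'ec2messages:GetEndpoint'
      - 'ec2messages:GetMessages'
      - 'ec2messages:SendReply'
    Resource: '*'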
This link will help you with CloudFormation Templates:
Working with AWS CloudFormation Templates
The CloudFormation template in your question is creating an IAM policy. Your question is really about how wildcards work in IAM policies. The * wildcard in an IAM policy Resource element means that something with this IAM policy applied to it can perform the listed actions against any resource in your AWS account.
The policy appears to be one you would apply to an EC2 instance profile to allow the AWS SSM agent to perform SSM tasks on that EC2 instance. Since the resource is specified as the * wildcard, the SSM agent could, for example, download any SSM document you send it (ssm:GetDocument). This basically allows the SSM agent to work correctly on the EC2 instance, without requiring you to grant it specific access to each thing you need it to do every time you trigger it in the future.
I am using the public UpdateEnvironmentResult updateEnvironment(UpdateEnvironmentRequest updateEnvironmentRequest) method of AWSElasticBeanstalkClient from my EC2 instance but get the following error:
com.amazonaws.services.elasticbeanstalk.model.InsufficientPrivilegesException: You do not have permission to perform the 's3:CreateBucket' action. Verify that your S3 policies and your ACLs allow you to perform these actions. (Service: AWSElasticBeanstalk; Status Code: 403; Error Code: InsufficientPrivilegesException; Request ID: 412d8fab-0cfe-11e6-928e-e1e1532d705e)
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1389)
at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:902)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:607)
at com.amazonaws.http.AmazonHttpClient.doExecute(AmazonHttpClient.java:376)
at com.amazonaws.http.AmazonHttpClient.executeWithTimer(AmazonHttpClient.java:338)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:287)
at com.amazonaws.services.elasticbeanstalk.AWSElasticBeanstalkClient.doInvoke(AWSElasticBeanstalkClient.java:2223)
at com.amazonaws.services.elasticbeanstalk.AWSElasticBeanstalkClient.invoke(AWSElasticBeanstalkClient.java:2193)
at com.amazonaws.services.elasticbeanstalk.AWSElasticBeanstalkClient.updateEnvironment(AWSElasticBeanstalkClient.java:2093)
My IAM role doesn't have access to s3:CreateBucket. But why does it need to create a bucket? Is there any workaround?
It is uploading the application source bundle to S3.
Give your instance the AWSElasticBeanstalkWebTier managed policy. That will give your instance access only to buckets named elasticbeanstalk*, which is how the SDK names the bucket.
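If the instance role is managed in CloudFormation, a minimal sketch of attaching that managed policy (the logical ID InstanceRole is hypothetical) could be:

InstanceRole:
  Type: AWS::IAM::Role  # hypothetical logical ID for the EC2 instance's role
  Properties:
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Principal:
            Service: ec2.amazonaws.com
          Action: sts:AssumeRole
    ManagedPolicyArns:
      - arn:aws:iam::aws:policy/AWSElasticBeanstalkWebTier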
This happened to me recently after I updated the policy of a Lambda function from the deprecated AWSLambdaFullAccess to AWSLambda_FullAccess. If you are also using a SAM template for deploying your Lambda functions, extend the permissions by adding this to your template:
LambdaFunction:
  Type: AWS::Serverless::Function
  Properties:
    Timeout: 270
    Policies:
      - AWSLambda_FullAccess
      - AWSElasticBeanstalkManagedUpdatesCustomerRolePolicy