I am using the public UpdateEnvironmentResult updateEnvironment(UpdateEnvironmentRequest updateEnvironmentRequest) method of AWSElasticBeanstalkClient from my EC2 instance, but I get the following error:
com.amazonaws.services.elasticbeanstalk.model.InsufficientPrivilegesException: You do not have permission to perform the 's3:CreateBucket' action. Verify that your S3 policies and your ACLs allow you to perform these actions. (Service: AWSElasticBeanstalk; Status Code: 403; Error Code: InsufficientPrivilegesException; Request ID: 412d8fab-0cfe-11e6-928e-e1e1532d705e)
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1389)
at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:902)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:607)
at com.amazonaws.http.AmazonHttpClient.doExecute(AmazonHttpClient.java:376)
at com.amazonaws.http.AmazonHttpClient.executeWithTimer(AmazonHttpClient.java:338)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:287)
at com.amazonaws.services.elasticbeanstalk.AWSElasticBeanstalkClient.doInvoke(AWSElasticBeanstalkClient.java:2223)
at com.amazonaws.services.elasticbeanstalk.AWSElasticBeanstalkClient.invoke(AWSElasticBeanstalkClient.java:2193)
at com.amazonaws.services.elasticbeanstalk.AWSElasticBeanstalkClient.updateEnvironment(AWSElasticBeanstalkClient.java:2093)
My IAM role doesn't have permission for s3:CreateBucket. But why does it need to create a bucket? Is there any workaround?
It is uploading the application source bundle to S3, and if its storage bucket doesn't exist yet, it tries to create one.
Give your instance the AWSElasticBeanstalkWebTier managed policy. That will give your instance access only to buckets named elasticbeanstalk*, which is what the SDK will name the bucket.
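If you'd rather not attach the whole managed policy, a narrower inline policy scoped to those bucket names should also satisfy the SDK. Here is a minimal sketch in CloudFormation YAML (the resource, policy, and role names are placeholders, not from the question):

InstanceRolePolicy:                       # hypothetical resource name
  Type: AWS::IAM::Policy
  Properties:
    PolicyName: eb-storage-bucket-access  # hypothetical policy name
    Roles:
      - !Ref InstanceRole                 # your EC2 instance role
    PolicyDocument:
      Version: '2012-10-17'
      Statement:
        # Bucket-level actions, including the s3:CreateBucket the error asks for
        - Effect: Allow
          Action:
            - s3:CreateBucket
            - s3:ListBucket
          Resource: arn:aws:s3:::elasticbeanstalk*
        # Object-level actions for uploading the source bundle
        - Effect: Allow
          Action:
            - s3:GetObject
            - s3:PutObject
          Resource: arn:aws:s3:::elasticbeanstalk*/*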
This happened to me recently after I updated the policy of a Lambda function, from the deprecated AWSLambdaFullAccess to AWSLambda_FullAccess. If you are also using a SAM template for deploying your Lambda functions, extend the permissions by adding this to your template:
LambdaFunction:
  Type: AWS::Serverless::Function
  Properties:
    Timeout: 270
    Policies:
      - AWSLambda_FullAccess
      - AWSElasticBeanstalkManagedUpdatesCustomerRolePolicy
Related
I am trying to deploy a Lambda function with the Serverless Framework. I've added my admin credentials to the AWS CLI, and I am getting this error message every time I try to deploy:
Warning: Not authorized to perform: lambda:GetFunction for at least one of the lambda functions. Deployment will not be skipped even if service files did not change.
Error:
CREATE_FAILED: HelloLambdaFunction (AWS::Lambda::Function)
Resource handler returned message: "null (Service: Lambda, Status Code: 403, Request ID: ********)" (RequestToken: ********, HandlerErrorCode: GeneralServiceException)
I've also removed everything from my project and from the YML file, and nothing worked:
service: test
frameworkVersion: '3'

provider:
  name: aws
  runtime: nodejs12.x
  iam:
    role:
      statements:
        - Effect: "Allow"
          Action:
            - lambda:*
            - lambda:InvokeFunction
            - lambda:GetFunction
            - lambda:GetFunctionConfiguration
          Resource: "*"

functions:
  hello:
    handler: handler.hello
Deployments default to the us-east-1 region and use the default profile set on the machine where the serverless command is run. Perhaps you don't have permission to deploy in that region, or Serverless is using a different profile than intended. (E.g., if I run serverless from an EC2 instance and log in separately, it would still use the default profile, i.e. the EC2 instance profile.)
Try updating your serverless.yml file to include the region as well:
provider:
  name: aws
  runtime: nodejs12.x
  region: <region_id>
  profile: <profile name, if not default>
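You can also check which identity the CLI actually resolves by running aws sts get-caller-identity with the same profile; if it returns the instance role rather than your admin user, that explains the 403.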
When I tried to create a Lambda function manually from the AWS console, I found that I had no permission to view or create any Lambda function.
After that I found that my account had been suspended due to activity that violated AWS policy.
I followed the steps AWS Support sent me, my account was restored, and everything worked fine.
I am trying to deploy an AWS Lambda function that gets triggered when an AVRO file is written to an existing S3 bucket.
My serverless.yml configuration is as follows:
service: braze-lambdas

provider:
  name: aws
  runtime: python3.7
  region: us-west-1
  role: arn:aws:iam::<account_id>:role/<role_name>
  stage: dev
  deploymentBucket:
    name: serverless-framework-dev-us-west-1
    serverSideEncryption: AES256

functions:
  hello:
    handler: handler.hello
    events:
      - s3:
          bucket: <company>-dev-ec2-us-west-2
          existing: true
          event: s3:ObjectCreated:*
          rules:
            - prefix: gaurav/lambdas/123/
            - suffix: .avro
When I run serverless deploy, I get the following error:
ServerlessError: An error occurred: IamRoleCustomResourcesLambdaExecution - API: iam:CreateRole User: arn:aws:sts::<account_id>:assumed-role/serverless-framework-dev/jenkins_braze_lambdas_deploy is not authorized to perform: iam:CreateRole on resource: arn:aws:iam::<account_id>:role/braze-lambdas-dev-IamRoleCustomResourcesLambdaExec-1M5QQI6P2ZYUH.
I see some mentions of Serverless needing iam:CreateRole because of how CloudFormation works, but can anyone confirm whether that is the only solution if I want to use existing: true? Is there another way around it, apart from using the old Serverless plugin that was used before the framework added support for the existing: true configuration?
Also, what is 1M5QQI6P2ZYUH in arn:aws:iam::<account_id>:role/braze-lambdas-dev-IamRoleCustomResourcesLambdaExec-1M5QQI6P2ZYUH? Is it a random identifier? Does this mean that Serverless will try to create a new IAM role every time I try to deploy the Lambda function?
I've just encountered this, and overcome it.
I also have a lambda for which I want to attach an s3 event to an already existing bucket.
My place of work has recently tightened up AWS Account Security by the use of Permission Boundaries.
So I've encountered a very similar error during deployment:
Serverless Error ---------------------------------------
An error occurred: IamRoleCustomResourcesLambdaExecution - API: iam:CreateRole User: arn:aws:sts::XXXXXXXXXXXX:assumed-role/xx-crossaccount-xx/aws-sdk-js-1600789080576 is not authorized to perform: iam:CreateRole on resource: arn:aws:iam::XXXXXXXXXXXX:role/my-existing-bucket-IamRoleCustomResourcesLambdaExec-LS075CH394GN.
If you read Using existing buckets on the Serverless site, it says:
NOTE: Using the existing config will add an additional Lambda function and IAM Role to your stack. The Lambda function backs-up the Custom S3 Resource which is used to support existing S3 buckets.
In my case I needed to further customise this extra role that Serverless creates, so that it is also assigned the permission boundary my employer requires on all roles. This happens in the resources: section.
If your employer is using permission boundaries, you'll obviously need to know the correct ARN to use:
resources:
  Resources:
    IamRoleCustomResourcesLambdaExecution:
      Type: AWS::IAM::Role
      Properties:
        PermissionsBoundary: arn:aws:iam::XXXXXXXXXXXX:policy/xxxxxxxxxxxx-global-boundary
Some info on the serverless Resources config
Have a look at your own serverless.yml; you may already have a permission boundary defined in the provider section. If so, you'll find it under rolePermissionsBoundary, which was added in (I think) version 1.64 of Serverless:
provider:
  rolePermissionsBoundary: arn:aws:iam::XXXXXXXXXXXX:policy/xxxxxxxxxxxx-global-boundary
If so, you should be able to use that ARN in the resources: sample I've posted here.
For testing purposes we can use:
provider:
  name: aws
  runtime: python3.8
  region: us-east-1
  iamRoleStatements:
    - Effect: Allow
      Action: "*"
      Resource: "*"
For running sls deploy, I would suggest you use a role/user/policy with Administrator privileges.
If you're restricted due to your InfoSec team or the like, then I suggest you have your InfoSec team have a look at docs for "AWS IAM Permission Requirements for Serverless Framework Deploy." Here's a good link discussing it: https://github.com/serverless/serverless/issues/1439. At the very least, they should add iam:CreateRole and that can get you unblocked for today.
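For reference, a minimal sketch of the kind of statement they could add for the deploying identity follows; the resource pattern mirrors the role name in the error message and is an assumption, so tighten it to suit your own naming scheme:

# Sketch: let the deployer manage the roles Serverless creates for this service.
# The resource pattern is an assumption based on the error message.
- Effect: Allow
  Action:
    - iam:CreateRole
    - iam:GetRole
    - iam:PutRolePolicy
    - iam:DeleteRolePolicy
    - iam:DeleteRole
    - iam:PassRole
  Resource: arn:aws:iam::<account_id>:role/braze-lambdas-dev-*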
Now I will address your individual questions:
can anyone confirm if that is the only solution if I want to use existing: true
Apples and oranges. Your S3 configuration has nothing to do with your error message. iam:CreateRole must be added to the policy of whatever/whoever is doing sls deploy.
Also, what is 1M5QQI6P2ZYUH in arn:aws:iam::<account_id>:role/braze-lambdas-dev-IamRoleCustomResourcesLambdaExec-1M5QQI6P2ZYUH? Is it a random identifier? Does this mean that serverless will try to create a new role every time I try to deploy the function?
Yes, it is a random identifier.
No, sls will not create a new role every time; this unique ID is cached and re-used for updates to an existing stack.
If a stack is destroyed and recreated, it will generate a new unique ID.
I am trying to create a full-access role (using an AWS managed policy) that allows all EC2 instances to call AWS services, via CloudFormation in YAML.
This is my code:
AWSTemplateFormatVersion: 2010-09-09
Description: Ansible Role
Resources:
  AnsibleRole:
    Type: 'AWS::IAM::Role'
    Properties:
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: 'Allow'
            Action: 'ec2:*'
            Principal:
              Service: 'ec2.awsamazon.com'
      ManagedPolicyArns:
        - 'arn:aws:iam::aws:policy/AmazonEC2FullAccess'
      RoleName: 'EC2-FullAccess'
    DeletionPolicy: Delete
I get the following error:
Invalid principal in policy: "SERVICE":"ec2.awsamazon.com"
(Service: AmazonIdentityManagement; Status Code: 400; Error
Code: MalformedPolicyDocument; Request ID: e43214f8-b6f9-11e9-9891-4dc84fd279dd)
I am perplexed as to why it doesn't recognize the service. Additionally, if I change Action: 'ec2:*' to Action: 'sts.AssumeRole' I get another error.
Any assistance is greatly appreciated.
There are multiple issues with your template (a corrected version follows this list):
- The service identifier is malformed. It should be 'ec2.amazonaws.com'.
- The action must be 'sts:AssumeRole'. This is the only action that is valid inside an IAM trust policy.
- The DeletionPolicy is not necessary, because Delete is already the default for this resource.
- Set the RoleName only if really necessary, because IAM names are global on a per-account basis and you cannot execute multiple stacks when using this attribute.
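Putting those fixes together, a corrected version of the template might look like this sketch (RoleName and DeletionPolicy are omitted, per the points above):

AWSTemplateFormatVersion: 2010-09-09
Description: Ansible Role
Resources:
  AnsibleRole:
    Type: 'AWS::IAM::Role'
    Properties:
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: 'Allow'
            Action: 'sts:AssumeRole'        # the only valid action in a trust policy
            Principal:
              Service: 'ec2.amazonaws.com'  # corrected service identifier
      ManagedPolicyArns:
        - 'arn:aws:iam::aws:policy/AmazonEC2FullAccess'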
For more information see the AWS CloudFormation template examples.
You are using the correct managed policy ARN if you want to grant your new role permission to call all kinds of EC2 actions. If you want to restrict your Ansible role further, take a look at the example policies for EC2 in the docs [1][2]. They are much more restrictive (and thus more secure) than the managed full-access policy AmazonEC2FullAccess. Maybe one of the other managed policies, such as AmazonEC2ReadOnlyAccess [3], is also feasible?
References
[1] https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ExamplePolicies_EC2.html
[2] https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-policies-for-amazon-ec2.html
[3] https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/UsingIAM.html#intro-to-iam
I have recently started getting the following error on the release change action in the AWS CodePipeline console:
Action execution failed
Insufficient permissions The provided role does not have permissions
to perform this action. Underlying error: Access Denied (Service:
Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID:
CA26EF93E3DAF8F0; S3 Extended Request ID:
mKkobqLGbj4uco8h9wDOBjPeWrRA2ybCrEsVoSq/MA4IFZqJb6QJSrlNrKk/EQK40TfLbTbqFuQ=)
I can't find any resources online anywhere for this error code.
Your pipeline is trying to access an S3 bucket, but the AWS CodePipeline service role does not have permission to access it. Create an IAM policy that provides access to S3 and attach it to the CodePipeline service role.
As Jeevagan said, you must create a new IAM policy that grants access to the pipeline buckets.
Do not forget to add the following actions:
Action:
  - "s3:GetObject"
  - "s3:List*"
  - "s3:GetObjectVersion"
I lost a few minutes because of this one in particular: GetObjectVersion.
By checking your CodeDeploy output, you'll be able to see that the process downloads a particular version of your artefact with the versionId parameter.
Hope it will help.
You are missing the GetBucketVersioning action in your policy, so a correct example looks like the one below. I don't know why it's not mentioned anywhere in the reference documentation:
- PolicyName: AccessRequiredByPipeline
  PolicyDocument:
    Version: '2012-10-17'
    Statement:
      - Action:
          - s3:PutObject
          - s3:GetObject
          - s3:GetObjectVersion
        Effect: Allow
        Resource: !Sub ${YouBucket.Arn}/*
      - Action:
          - s3:GetBucketVersioning
        Resource: !Sub ${YouBucket.Arn}
        Effect: Allow
      - Action:
          - kms:GenerateDataKey
          - kms:Decrypt
        Effect: Allow
        Resource: !GetAtt KMSKey.Arn
Another potential culprit that masquerades behind this error referencing S3 is missing KMS permissions on the IAM role for the CodePipeline. If you configured your CodePipeline to use KMS encryption, then the service role associated with the CodePipeline will also need KMS permissions for that KMS key in order to interact with the KMS-encrypted objects in S3. In my experience, the missing KMS permissions cause the same error message to appear, referencing S3.
I just ran into this issue, but the permissions were all set properly; I had used the same CloudFormation template with other projects with no problem. It turned out that the key name I was using in the S3 bucket was too long; apparently it doesn't like anything more than 20 characters. Once I changed the key name in my S3 bucket (and all of its associated references in the CloudFormation template files), everything worked properly.
I ran into the same issue when I used CloudFormation to build my CI/CD. My problem was that the CodePipeline ArtifactStore pointed to the wrong location in S3 ("codepipeline", a folder my role was not allowed to access, in my case). Changing the ArtifactStore to an existing folder fixed my issue.
You can view pipeline details, such as where the SourceArtifact points, in the pipeline's settings in the CodePipeline console.
When writing an AWS CloudFormation template to create a Lambda function, the 'Code' field is required.
I found the documentation here: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-lambda-function-code.html
The documentation says you can specify the source of your Lambda function as a zip file in an S3 bucket. And in the S3Bucket field description, it says: "You can specify a bucket from another AWS account as long as the Lambda function and the bucket are in the same region."
If you put a bucket name in the S3Bucket field, it will try to find the bucket in the same AWS account. So my question is: how can I specify a bucket from another AWS account?
A YAML snippet I created for the CloudFormation template:
MyLambdaFunction:
  Type: AWS::Lambda::Function
  Properties:
    Handler: index.handler
    Runtime: nodejs6.10
    Role: !GetAtt LambdaRole.Arn
    FunctionName: 'MyLambda'
    MemorySize: 1024
    Timeout: 30
    Code:
      S3Bucket: 'my-bucket'
      S3Key: 'my-key'
An S3 bucket is an S3 bucket. It doesn't matter which AWS account it is in. If you have permission to access the bucket then you can access it.
Simply provide the name of the S3 bucket (it must be in the same region in this specific case) and make sure the credentials you are using are allowed access to the S3 bucket.
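If the bucket lives in another account, the bucket owner also has to allow your account to read from it, typically via a bucket policy. A minimal sketch, defined in the bucket-owning account (the account ID and names are placeholders, not from the question):

# Sketch: grant the deploying account (111111111111, placeholder) read access
# to the Lambda deployment packages in this bucket.
MyBucketPolicy:
  Type: AWS::S3::BucketPolicy
  Properties:
    Bucket: my-bucket
    PolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Principal:
            AWS: arn:aws:iam::111111111111:root
          Action: s3:GetObject
          Resource: arn:aws:s3:::my-bucket/*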
If you are deploying your CloudFormation stack in multiple AWS regions, you can quickly create identical S3 buckets in each of these regions using a tool like cfs3-uploader.