Not authorized to perform: lambda:GetFunction

I am trying to deploy a Lambda function with the Serverless framework.
I've added my ADMIN credentials in the AWS CLI,
and I am getting this error message every time I try to deploy:
Warning: Not authorized to perform: lambda:GetFunction for at least one of the lambda functions. Deployment will not be skipped even if service files did not change.
Error:
CREATE_FAILED: HelloLambdaFunction (AWS::Lambda::Function)
Resource handler returned message: "null (Service: Lambda, Status Code: 403, Request ID: ********)" (RequestToken: ********, HandlerErrorCode: GeneralServiceException)
I've also removed everything from my project and from the YML file, and nothing worked.
service: test
frameworkVersion: '3'
provider:
  name: aws
  runtime: nodejs12.x
  iam:
    role:
      statements:
        - Effect: "Allow"
          Action:
            - lambda:*
            - lambda:InvokeFunction
            - lambda:GetFunction
            - lambda:GetFunctionConfiguration
          Resource: "*"
functions:
  hello:
    handler: handler.hello

Deployments default to the us-east-1 region and use the default profile set on the machine where the serverless command is run. Perhaps you don't have permission to deploy in that region, or Serverless is using a different profile than intended. (For example, if I run serverless from an EC2 instance and log in separately, it would still use the default profile, i.e. the EC2 instance profile.)
Can you update your serverless.yml file to include the region as well?
provider:
  name: aws
  runtime: nodejs12.x
  region: <region_id>
  profile: <profile_name> # only needed if not using the default profile
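Before changing the file, it can also help to confirm which identity and region each local profile actually resolves to. A minimal sketch, assuming boto3 is installed and your profiles live in ~/.aws/credentials:

import boto3

# Print the account, ARN, and region each configured profile resolves to,
# so you can confirm Serverless deploys with the credentials you intended.
for profile in boto3.session.Session().available_profiles:
    session = boto3.session.Session(profile_name=profile)
    identity = session.client("sts").get_caller_identity()
    print(f"{profile}: account={identity['Account']} "
          f"arn={identity['Arn']} region={session.region_name}")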

When I tried to create a Lambda function manually from the AWS console, I found that I had no permission to view or create any Lambda functions.
After that I found that my account had been suspended due to behavior of mine that is not acceptable under AWS policy.
I followed the steps the support team sent me, and then my account was restored and everything worked fine.

Related

Serverless not deploying to AWS

I'm new to Serverless and Lambdas.
I'm trying to deploy my serverless functions to AWS, but it's showing this error:
This is my serverless.yml file:
service: aws-node-http-api-project
frameworkVersion: "3"
provider:
  name: aws
  runtime: nodejs12.x
  region: ap-southeast-1
functions:
  hello:
    handler: handler.hello
    events:
      - httpApi:
          path: /
          method: get
I have set up the AWS CLI and an IAM user.
Not sure if this is related, but in my CloudFormation console it is showing one stack:
The stack was created, but the changeset failed to execute, so the stack is stuck in the REVIEW_IN_PROGRESS state.
Find out why the changeset wasn't executed and fix it.
Or delete the stack and redeploy (a cleanup sketch follows below).
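A minimal sketch of the delete-and-redeploy path, assuming boto3 and credentials allowed to list and delete CloudFormation stacks; the region matches the serverless.yml above:

import boto3

# Find stacks stuck in REVIEW_IN_PROGRESS and delete them so the next
# `sls deploy` can recreate the stack from scratch.
cfn = boto3.client("cloudformation", region_name="ap-southeast-1")
stuck = cfn.list_stacks(StackStatusFilter=["REVIEW_IN_PROGRESS"])
for stack in stuck["StackSummaries"]:
    print(f"Deleting {stack['StackName']} ({stack['StackStatus']})")
    cfn.delete_stack(StackName=stack["StackName"])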

API: s3:CreateBucket Access Denied when run a command 'sls deploy'

I'm a beginner with AWS and also with the Serverless framework. I'd like to run PHP in an AWS Lambda function, so I need to set up a custom PHP runtime using Bref. I ran 'composer require bref/bref' and then chose the 'Web application' runtime.
I got this error when I ran the 'sls deploy' command.
Error:
CREATE_FAILED: ServerlessDeploymentBucket (AWS::S3::Bucket)
API: s3:CreateBucket Access Denied
I've already given s3:* permission to the IAM user, the user's group, and the user's role.
Please help me: what do I need to do, and what is missing?
This is my serverless.yml file
provider:
  name: aws
  region: ap-northeast-1
  runtime: provided.al2
plugins:
  - ./vendor/bref/bref
functions:
  api:
    handler: index.php
    description: ''
    timeout: 28 # in seconds (API Gateway has a timeout of 29 seconds)
    layers:
      - ${bref:layer.php-80-fpm}
    events:
      - httpApi: '*'
# Exclude files from deployment
package:
  patterns:
    - '!node_modules/**'
    - '!tests/**'
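One way to narrow this down is to attempt the failing call directly, outside of Serverless, with the same credentials that sls deploy uses; if this is also denied, the block comes from somewhere above the attached policies (for example an SCP on the account). A minimal sketch, assuming boto3 and a hypothetical, globally unique bucket name:

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3", region_name="ap-northeast-1")
try:
    # The same operation Serverless performs for its deployment bucket
    s3.create_bucket(
        Bucket="my-permission-probe-bucket",  # hypothetical placeholder
        CreateBucketConfiguration={"LocationConstraint": "ap-northeast-1"},
    )
    print("s3:CreateBucket allowed")
except ClientError as e:
    print("Denied:", e.response["Error"]["Code"])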

Serverless framework deployment error: You're not authorized to access this resource

When I deploy my Serverless framework project using AWS as the provider, I get:
You're not authorized to access this resource. - Please contact support and provide this identifier to reference this issue BLAHBLAH
I am logged in to the Serverless framework with serverless login.
My serverless.yaml:
org: vladimirorg
app: vladimirapp
service: backend-rest
provider:
name: aws
runtime: nodejs12.x
apiGateway: {
shouldStartNameWithService: true
}
environment:
DYNAMODB_TABLE: ${self:service}-${opt:stage, self:provider.stage}
DYNAMODB_LOCAL_PORT: 9000
iamRoleStatements:
- Effect: Allow
Action:
- dynamodb:Query
- dynamodb:Scan
- dynamodb:GetItem
- dynamodb:PutItem
- dynamodb:UpdateItem
- dynamodb:DeleteItem
Resource: "arn:aws:dynamodb:#{AWS::Region}:#{AWS::AccountId}:table/${self:provider.environment.DYNAMODB_TABLE}"
functions:
create:
handler: src/handlers/create.create
events:
- http:
path: todos
method: post
cors: true
request:
schema:
application/json: ${file(src/schemas/create.json)}
...
I have found the root cause: if you wish to deploy a Serverless framework application, you must use exactly the same organization (org) and application name (app) that you registered with the Serverless framework.
To find out your current app/org name, change them, or create a new app/org, log in to the Serverless Framework dashboard at https://app.serverless.com/ using the same credentials you use for deploying, and make sure you are using the exact same org and app in your serverless.yaml file:
org: orgname <---
app: appname <---
service: backend-rest
...
So you can't just use an arbitrary org/app name; you must use the exact org/app registered with the Serverless framework.
I had to remove org: <org> in order for it to ask me again the next time sls is run.
Try using serverless logout or deleting the ~/.serverlessrc file, then run serverless login again and retry your command.
You need to specify an AWS profile in your serverless.yml and set your AWS account credentials in ~/.aws/credentials, like below:
[your_preferred_profile_name]
aws_access_key_id=AKIAZEIJOWEFJOIWEF
aws_secret_access_key=siAOEIF4+TdifOHeofoe+iJR8yFokT7uBmV4DEZ
And specify this profile in your serverless.yml file like this:
provider:
  name: aws
  runtime: nodejs12.x
  stage: dev
  region: us-east-1
  profile: your_preferred_profile_name
The error you are getting says the Serverless framework couldn't access your AWS resources.
It means you didn't set up this AWS account's credentials in your local environment and in the Serverless framework.

How to avoid giving `iam:CreateRole` permission when using existing S3 bucket to trigger Lambda function?

I am trying to deploy an AWS Lambda function that gets triggered when an AVRO file is written to an existing S3 bucket.
My serverless.yml configuration is as follows:
service: braze-lambdas
provider:
  name: aws
  runtime: python3.7
  region: us-west-1
  role: arn:aws:iam::<account_id>:role/<role_name>
  stage: dev
  deploymentBucket:
    name: serverless-framework-dev-us-west-1
    serverSideEncryption: AES256
functions:
  hello:
    handler: handler.hello
    events:
      - s3:
          bucket: <company>-dev-ec2-us-west-2
          existing: true
          event: s3:ObjectCreated:*
          rules:
            - prefix: gaurav/lambdas/123/
            - suffix: .avro
When I run serverless deploy, I get the following error:
ServerlessError: An error occurred: IamRoleCustomResourcesLambdaExecution - API: iam:CreateRole User: arn:aws:sts::<account_id>:assumed-role/serverless-framework-dev/jenkins_braze_lambdas_deploy is not authorized to perform: iam:CreateRole on resource: arn:aws:iam::<account_id>:role/braze-lambdas-dev-IamRoleCustomResourcesLambdaExec-1M5QQI6P2ZYUH.
I see some mentions of Serverless needing iam:CreateRole because of how CloudFormation works, but can anyone confirm whether that is the only solution if I want to use existing: true? Is there another way around it, except using the old Serverless plugin that was used before the framework added support for the existing: true configuration?
Also, what is 1M5QQI6P2ZYUH in arn:aws:iam::<account_id>:role/braze-lambdas-dev-IamRoleCustomResourcesLambdaExec-1M5QQI6P2ZYUH? Is it a random identifier? Does this mean that Serverless will try to create a new IAM role every time I try to deploy the Lambda function?
I've just encountered this and overcome it.
I also have a Lambda to which I want to attach an S3 event for an already existing bucket.
My place of work has recently tightened up AWS account security through the use of permission boundaries,
so I encountered a very similar error during deployment:
Serverless Error ---------------------------------------
An error occurred: IamRoleCustomResourcesLambdaExecution - API: iam:CreateRole User: arn:aws:sts::XXXXXXXXXXXX:assumed-role/xx-crossaccount-xx/aws-sdk-js-1600789080576 is not authorized to perform: iam:CreateRole on resource: arn:aws:iam::XXXXXXXXXXXX:role/my-existing-bucket-IamRoleCustomResourcesLambdaExec-LS075CH394GN.
If you read Using existing buckets on the serverless site, it says
NOTE: Using the existing config will add an additional Lambda function and IAM Role to your stack. The Lambda function backs-up the Custom S3 Resource which is used to support existing S3 buckets.
In my case I needed to further customise this extra role that Serverless creates, so that it is also assigned the permission boundary my employer requires on all roles. This happens in the resources: section.
If your employer is using permission boundaries, you'll obviously need to know the correct ARN to use:
resources:
  Resources:
    IamRoleCustomResourcesLambdaExecution:
      Type: AWS::IAM::Role
      Properties:
        PermissionsBoundary: arn:aws:iam::XXXXXXXXXXXX:policy/xxxxxxxxxxxx-global-boundary
Some info on the Serverless resources config:
Have a look at your own serverless.yaml; you may already have a permission boundary defined in the provider section. If so, you'll find it under rolePermissionsBoundary, which was added in (I think) version 1.64 of Serverless:
provider:
  rolePermissionsBoundary: arn:aws:iam::XXXXXXXXXXXX:policy/xxxxxxxxxxxx-global-boundary
If so, you should be able to use that ARN in the resources: sample I've posted here.
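To double-check that the boundary actually ended up on the role Serverless created, you can read the role back. A minimal sketch, assuming boto3 and iam:GetRole access; the role name is taken from the error message above:

import boto3

iam = boto3.client("iam")
role = iam.get_role(
    RoleName="my-existing-bucket-IamRoleCustomResourcesLambdaExec-LS075CH394GN"
)["Role"]
# PermissionsBoundary only appears once a boundary has been attached
print(role.get("PermissionsBoundary", "no permissions boundary attached"))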
For testing purposes we can use:
provider:
  name: aws
  runtime: python3.8
  region: us-east-1
  iamRoleStatements:
    - Effect: Allow
      Action: "*"
      Resource: "*"
For running sls deploy, I would suggest you use a role/user/policy with administrator privileges.
If you're restricted by your InfoSec team or the like, then I suggest you have them look at the docs for "AWS IAM Permission Requirements for Serverless Framework Deploy." Here's a good link discussing it: https://github.com/serverless/serverless/issues/1439. At the very least, they should add iam:CreateRole, and that can get you unblocked for today.
Now I will address your individual questions:
can anyone confirm if that is the only solution if I want to use existing: true
Apples and oranges. Your S3 configuration has nothing to do with your error message. iam:CreateRole must be added to the policy of whatever/whoever is doing sls deploy.
Also, what is 1M5QQI6P2ZYUH in arn:aws:iam::<account_id>:role/braze-lambdas-dev-IamRoleCustomResourcesLambdaExec-1M5QQI6P2ZYUH? Is it a random identifier? Does this mean that serverless will try to create a new role every time I try to deploy the function?
Yes, it is a random identifier.
No, sls will not create a new role every time; this unique ID is cached and reused for updates to an existing stack.
If a stack is destroyed and recreated, it will generate a new unique ID.
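If you want to confirm up front whether the deploying identity may create roles at all, the IAM policy simulator can answer that without running a full deploy. A minimal sketch, assuming boto3 and iam:SimulatePrincipalPolicy access; the role ARN is a placeholder for whatever identity runs sls deploy:

import boto3

iam = boto3.client("iam")
result = iam.simulate_principal_policy(
    PolicySourceArn="arn:aws:iam::123456789012:role/serverless-framework-dev",  # placeholder
    ActionNames=["iam:CreateRole"],
)
for r in result["EvaluationResults"]:
    # EvalDecision is one of: allowed, implicitDeny, explicitDeny
    print(r["EvalActionName"], "->", r["EvalDecision"])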

Cross-Account AWS CodePipeline cannot access CloudFormation deploy artifacts

I have a cross-account pipeline running in a CI account, deploying resources via CloudFormation to another account, DEV.
After deploying, I save the artifact outputs as a JSON file and want to access them in another pipeline action via CodeBuild.
CodeBuild fails in the DOWNLOAD_SOURCE phase with the following message:
CLIENT_ERROR: AccessDenied: Access Denied status code: 403, request
id: 123456789, host id: xxxxx/yyyy/zzzz/xxxx= for primary source and
source version arn:aws:s3:::my-bucket/my-pipeline/DeployArti/XcUNqOP
The problem is likely that CloudFormation, when executed in a different account, encrypts the artifacts with a different key than the pipeline itself.
Is it possible to give CloudFormation an explicit KMS key to encrypt the artifacts with, or is there any other way to access those artifacts back in the pipeline?
Everything works when executed from within a single account.
Here is my code snippet (deployed in the CI account):
MyCodeBuild:
  Type: AWS::CodeBuild::Project
  Properties:
    Artifacts:
      Type: CODEPIPELINE
    Environment: ...
    Name: !Sub "my-codebuild"
    ServiceRole: !Ref CodeBuildRole
    EncryptionKey: !GetAtt KMSKey.Arn
    Source:
      Type: CODEPIPELINE
      BuildSpec: ...

CrossAccountCodePipeline:
  Type: AWS::CodePipeline::Pipeline
  Properties:
    Name: "my-pipeline"
    RoleArn: !GetAtt CodePipelineRole.Arn
    Stages:
      - Name: Source
        ...
      - Name: StagingDev
        Actions:
          - Name: create-stack-in-DEV-account
            InputArtifacts:
              - Name: SourceArtifact
            OutputArtifacts:
              - Name: DeployArtifact
            ActionTypeId:
              Category: Deploy
              Owner: AWS
              Version: "1"
              Provider: CloudFormation
            Configuration:
              StackName: "my-dev-stack"
              ChangeSetName: !Sub "my-changeset"
              ActionMode: CREATE_UPDATE
              Capabilities: CAPABILITY_NAMED_IAM
              # this is the artifact I want to access from the next action
              # within this CI account pipeline
              OutputFileName: "my-DEV-output.json"
              TemplatePath: !Sub "SourceArtifact::stack/my-stack.yml"
              RoleArn: !Sub "arn:aws:iam::${DevAccountId}:role/dev-cloudformation-role"
            RoleArn: !Sub "arn:aws:iam::${DevAccountId}:role/dev-cross-account-role"
            RunOrder: 1
          - Name: process-DEV-outputs
            InputArtifacts:
              - Name: DeployArtifact
            ActionTypeId:
              Category: Build
              Owner: AWS
              Version: "1"
              Provider: CodeBuild
            Configuration:
              ProjectName: !Ref MyCodeBuild
            RunOrder: 2
    ArtifactStore:
      Type: S3
      Location: !Ref S3ArtifactBucket
      EncryptionKey:
        Id: !GetAtt KMSKey.Arn
        Type: KMS
CloudFormation generates the output artifact, zips it, and then uploads the file to S3.
It does not add an ACL that grants access to the bucket owner. So you get a 403 when you try to use the CloudFormation output artifact further down the pipeline.
The workaround is to have one more action in your pipeline immediately after the CloudFormation action, for example a Lambda function that can assume the target account role and update the object ACL, e.g. to bucket-owner-full-control.
mockora's answer is correct. Here is an example Lambda function in Python that fixes the issue, which you can configure as an Invoke action immediately after your cross-account CloudFormation deployment.
In this example, you configure the Lambda invoke action's user parameters setting as the ARN of the role you want the Lambda function to assume in the remote account to fix the S3 object ACLs. Obviously your Lambda function will need sts:AssumeRole permissions for that role, and the remote account role will need s3:PutObjectAcl permissions on the pipeline bucket artifact(s).
import os
import logging, datetime, json
import boto3
from aws_xray_sdk.core import xray_recorder
from aws_xray_sdk.core import patch_all

# X-Ray
patch_all()

# Configure logging
logging.basicConfig()
log = logging.getLogger()
log.setLevel(os.environ.get('LOG_LEVEL','INFO'))

def format_json(data):
    return json.dumps(data, default=lambda d: d.isoformat() if isinstance(d, datetime.datetime) else str(d))

# Boto3 Client
client = boto3.client
codepipeline = client('codepipeline')
sts = client('sts')

# S3 Object ACLs Handler
def s3_acl_handler(event, context):
    log.info(f'Received event: {format_json(event)}')
    # Get Job
    jobId = event['CodePipeline.job']['id']
    jobData = event['CodePipeline.job']['data']
    # Ensure we return a success or failure result
    try:
        # Assume IAM role from user parameters
        credentials = sts.assume_role(
            RoleArn=jobData['actionConfiguration']['configuration']['UserParameters'],
            RoleSessionName='codepipeline',
            DurationSeconds=900
        )['Credentials']
        # Create S3 client from assumed role credentials
        s3 = client('s3',
            aws_access_key_id=credentials['AccessKeyId'],
            aws_secret_access_key=credentials['SecretAccessKey'],
            aws_session_token=credentials['SessionToken']
        )
        # Set S3 object ACL for each input artifact
        for inputArtifact in jobData['inputArtifacts']:
            s3.put_object_acl(
                ACL='bucket-owner-full-control',
                Bucket=inputArtifact['location']['s3Location']['bucketName'],
                Key=inputArtifact['location']['s3Location']['objectKey']
            )
        codepipeline.put_job_success_result(jobId=jobId)
    except Exception as e:
        logging.exception('An exception occurred')
        codepipeline.put_job_failure_result(
            jobId=jobId,
            failureDetails={'type': 'JobFailed','message': getattr(e, 'message', repr(e))}
        )
I've been using CodePipeline for cross-account deployments for a couple of years now. I even have a GitHub project around simplifying the process using AWS Organizations. There are a couple of key elements to it:
Make sure your S3 bucket is using a CMK, not the default encryption key.
Make sure you grant access to that key to the accounts to which you are deploying. When you have a CloudFormation template, for example, that runs in a different account than the one where the template lives, the role being used in that account needs permission to access the key (and the S3 bucket).
It's certainly more complex than that, but at no point do I run a Lambda to change the object owner of the artifacts. Create a pipeline in CodePipeline that uses resources from another AWS account has detailed information on what you need to do to make it work.
CloudFormation should use the KMS encryption key provided in the artifact store definition of your pipeline: https://docs.aws.amazon.com/codepipeline/latest/APIReference/API_ArtifactStore.html#CodePipeline-Type-ArtifactStore-encryptionKey
Therefore, as long as you give it a custom key there and allow the other account to use that key too, it should work.
This is mostly covered in this doc: https://docs.aws.amazon.com/codepipeline/latest/userguide/pipelines-create-cross-account.html
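For the cross-account grant itself, a minimal sketch, assuming boto3 and kms:GetKeyPolicy/kms:PutKeyPolicy on the pipeline CMK; the key ARN and account IDs are placeholders:

import json
import boto3

kms = boto3.client("kms", region_name="us-east-1")
key_id = "arn:aws:kms:us-east-1:111111111111:key/11111111-2222-3333-4444-555555555555"  # placeholder CMK

# Append a statement letting the DEV account use the CI account's artifact key
policy = json.loads(kms.get_key_policy(KeyId=key_id, PolicyName="default")["Policy"])
policy["Statement"].append({
    "Sid": "AllowDevAccountUseOfKey",
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::222222222222:root"},  # placeholder DEV account
    "Action": ["kms:Encrypt", "kms:Decrypt", "kms:ReEncrypt*",
               "kms:GenerateDataKey*", "kms:DescribeKey"],
    "Resource": "*",
})
kms.put_key_policy(KeyId=key_id, PolicyName="default", Policy=json.dumps(policy))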