I'm a beginner with AWS and the Serverless Framework. I'd like to run PHP in an AWS Lambda function, so I need to set up a custom PHP runtime using Bref. I ran 'composer require bref/bref' and then chose the 'Web application' runtime.
I got this error when I ran the 'sls deploy' command.
Error:
CREATE_FAILED: ServerlessDeploymentBucket (AWS::S3::Bucket)
API: s3:CreateBucket Access Denied
I've already given S3 * permissions to the IAM user, the user's group, and the user's role.
Please help me: what do I need to do, and what is missing?
This is my serverless.yml file:
provider:
  name: aws
  region: ap-northeast-1
  runtime: provided.al2

plugins:
  - ./vendor/bref/bref

functions:
  api:
    handler: index.php
    description: ''
    timeout: 28 # in seconds (API Gateway has a timeout of 29 seconds)
    layers:
      - ${bref:layer.php-80-fpm}
    events:
      - httpApi: '*'

# Exclude files from deployment
package:
  patterns:
    - '!node_modules/**'
    - '!tests/**'
I am using the following configuration to deploy a couple of Lambda functions to different stages, prod and dev, on AWS. Both stages should be protected with an API key, which is stored in SSM.
serverless.yml
service: my-service
frameworkVersion: "3"

provider:
  name: aws
  runtime: nodejs16.x
  region: eu-central-1
  apiGateway:
    apiKeys:
      - name: my-apikey
        value: ${ssm:my-apikey}

functions:
  v1_myfunc:
    handler: src/api/myfunc/get_func.get
    events:
      - http:
          path: /v1/myfunc
          method: get
          private: true

plugins:
  - serverless-esbuild
  - serverless-offline
  - serverless-dotenv-plugin
My deployment scripts look like this:
package.json
"scripts": {
"deploy:dev": "serverless deploy --stage dev",
"deploy:prod": "serverless deploy --stage prod"
}
The problem:
When I deploy one of the stages, everything works fine. But if I deploy the other one afterwards, I always get the following error (in this case I deployed prod first and then dev):
Deploying my-service to stage dev (eu-central-1)
✖ Stack my-service-dev failed to deploy (46s)
Environment: darwin, node 16.15.0, framework 3.23.0, plugin 6.2.2, SDK 4.3.2
Credentials: Local, "default" profile
Error:
Invalid API Key identifier specified
error Command failed with exit code 1.
Looking into the AWS console, I noticed that the generated API key has the same ID for both stacks (dev and prod). So I'm guessing this is where the problem is: both stacks share the same API key instance.
So I tried to fix this by setting a different API key name for each stage:
- name: my-apikey-${self:provider.stage}
  value: ${ssm:my-apikey}
But this doesn't solve the problem, as I'm still getting this error:
Invalid API Key identifier specified
Question: How do I have to change my serverless.yml config to fix the issue?
The API key values for the two stages need to be different:
service: my-service
frameworkVersion: "3"

provider:
  name: aws
  runtime: nodejs16.x
  region: eu-central-1
  apiGateway:
    apiKeys:
      ${self:custom.apiTest.${sls:stage}}

functions:
  v1_myfunc:
    handler: src/api/myfunc/get_func.get
    events:
      - http:
          path: /v1/myfunc
          method: get
          private: true

plugins:
  - serverless-esbuild
  - serverless-offline
  - serverless-dotenv-plugin

custom:
  apiTest:
    dev:
      - name: api-key-dev
        value: 123 # Needs to be different vs prod!
    prod:
      - name: api-key-prod
        value: 456 # Needs to be different vs dev!
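Since the original config read the key value from SSM, the same per-stage map can point at two SSM parameters instead of hard-coded values. This is only a sketch; the parameter names my-apikey-dev and my-apikey-prod are assumptions and would have to exist in SSM:

custom:
  apiTest:
    dev:
      - name: api-key-dev
        value: ${ssm:my-apikey-dev}   # assumed SSM parameter holding the dev key
    prod:
      - name: api-key-prod
        value: ${ssm:my-apikey-prod}  # assumed SSM parameter holding the prod key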
I am trying to deploy a Lambda function with the Serverless Framework.
I've added my admin credentials to the AWS CLI, and I am getting this error message every time I try to deploy:
Warning: Not authorized to perform: lambda:GetFunction for at least one of the lambda functions. Deployment will not be skipped even if service files did not change.
Error:
CREATE_FAILED: HelloLambdaFunction (AWS::Lambda::Function)
Resource handler returned message: "null (Service: Lambda, Status Code: 403, Request ID: ********)" (RequestToken: ********, HandlerErrorCode: GeneralServiceException)
I've also removed everything from my project and from the YML file, and nothing worked.
service: test
frameworkVersion: '3'

provider:
  name: aws
  runtime: nodejs12.x
  iam:
    role:
      statements:
        - Effect: "Allow"
          Action:
            - lambda:*
            - lambda:InvokeFunction
            - lambda:GetFunction
            - lambda:GetFunctionConfiguration
          Resource: "*"

functions:
  hello:
    handler: handler.hello
Deployments default to the us-east-1 region and use the default profile set on the machine where the serverless command is run. Perhaps you don't have permission to deploy in that region, or serverless is using a different profile than intended (e.g. if I run serverless from an EC2 instance and log in separately, it would still use the default profile, i.e. the EC2 instance profile).
Can you update your serverless.yml file to include the region as well?
provider:
  name: aws
  runtime: nodejs12.x
  region: <region_id>
  profile: <profile_name> # if not using the default profile
When I tried to create a Lambda function manually from the AWS website, I found that I had no permission to view or create any Lambda function.
After that I found that my account had been suspended due to behavior of mine that is not acceptable under AWS policy.
I followed the steps the support team sent me; then my account was back and everything worked fine.
When I deploy my Serverless Framework project using AWS as the provider I get:
You're not authorized to access this resource. - Please contact support and provide this identifier to reference this issue BLAHBLAH
I am logged into the Serverless Framework via serverless login.
My serverless.yaml:
org: vladimirorg
app: vladimirapp
service: backend-rest

provider:
  name: aws
  runtime: nodejs12.x
  apiGateway: {
    shouldStartNameWithService: true
  }
  environment:
    DYNAMODB_TABLE: ${self:service}-${opt:stage, self:provider.stage}
    DYNAMODB_LOCAL_PORT: 9000
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:Query
        - dynamodb:Scan
        - dynamodb:GetItem
        - dynamodb:PutItem
        - dynamodb:UpdateItem
        - dynamodb:DeleteItem
      Resource: "arn:aws:dynamodb:#{AWS::Region}:#{AWS::AccountId}:table/${self:provider.environment.DYNAMODB_TABLE}"

functions:
  create:
    handler: src/handlers/create.create
    events:
      - http:
          path: todos
          method: post
          cors: true
          request:
            schema:
              application/json: ${file(src/schemas/create.json)}
...
I have found the root cause: if you wish to deploy a Serverless Framework application, you must use the exact same organization (org) and application name (app) that you registered with the Serverless Framework.
To find out your current app/org name, change them, or create a new app/org, log in to the Serverless Framework dashboard at https://app.serverless.com/ using the same credentials you use for deploying, and make sure you are using that exact org and app in your serverless.yaml file:
org: orgname <---
app: appname <---
service: backend-rest
...
So you can't just use an arbitrary org/app name; you must use the exact org/app registered with the Serverless Framework.
I had to remove org: <org> in order for it to ask me again next time sls is run.
Try running serverless logout, or delete the ~/.serverlessrc file, then run serverless login again and retry your command.
You need to specify an AWS profile in your serverless.yml and set your AWS account credentials in ~/.aws/credentials, like below:
[your_preferred_profile_name]
aws_access_key_id=AKIAZEIJOWEFJOIWEF
aws_secret_access_key=siAOEIF4+TdifOHeofoe+iJR8yFokT7uBmV4DEZ
And specify this profile in your serverless.yml file like this:
provider:
  name: aws
  runtime: nodejs12.x
  stage: dev
  region: us-east-1
  profile: your_preferred_profile_name
The error you are getting says that the Serverless Framework couldn't access your AWS resources. It means you haven't set up this AWS account's credentials in your local environment and in the Serverless Framework.
I have a cross-account pipeline running in a CI account, deploying resources via CloudFormation into another account, DEV.
After deploying, I save the artifact outputs as a JSON file and want to access it in another pipeline action via CodeBuild.
CodeBuild fails in the DOWNLOAD_SOURCE phase with the following message:
CLIENT_ERROR: AccessDenied: Access Denied status code: 403, request
id: 123456789, host id: xxxxx/yyyy/zzzz/xxxx= for primary source and
source version arn:aws:s3:::my-bucket/my-pipeline/DeployArti/XcUNqOP
The problem is likely that CloudFormation, when executed in a different account, encrypts the artifacts with a different key than the pipeline itself.
Is it possible to give CloudFormation an explicit KMS key to encrypt the artifacts with, or is there any other way to access those artifacts back in the pipeline?
Everything works when executed from within a single account.
Here is my code snippet (deployed in the CI account):
MyCodeBuild:
  Type: AWS::CodeBuild::Project
  Properties:
    Artifacts:
      Type: CODEPIPELINE
    Environment: ...
    Name: !Sub "my-codebuild"
    ServiceRole: !Ref CodeBuildRole
    EncryptionKey: !GetAtt KMSKey.Arn
    Source:
      Type: CODEPIPELINE
      BuildSpec: ...

CrossAccountCodePipeline:
  Type: AWS::CodePipeline::Pipeline
  Properties:
    Name: "my-pipeline"
    RoleArn: !GetAtt CodePipelineRole.Arn
    Stages:
      - Name: Source
        ...
      - Name: StagingDev
        Actions:
          - Name: create-stack-in-DEV-account
            InputArtifacts:
              - Name: SourceArtifact
            OutputArtifacts:
              - Name: DeployArtifact
            ActionTypeId:
              Category: Deploy
              Owner: AWS
              Version: "1"
              Provider: CloudFormation
            Configuration:
              StackName: "my-dev-stack"
              ChangeSetName: !Sub "my-changeset"
              ActionMode: CREATE_UPDATE
              Capabilities: CAPABILITY_NAMED_IAM
              # this is the artifact I want to access from the next action
              # within this CI account pipeline
              OutputFileName: "my-DEV-output.json"
              TemplatePath: !Sub "SourceArtifact::stack/my-stack.yml"
              RoleArn: !Sub "arn:aws:iam::${DevAccountId}:role/dev-cloudformation-role"
            RoleArn: !Sub "arn:aws:iam::${DevAccountId}:role/dev-cross-account-role"
            RunOrder: 1
          - Name: process-DEV-outputs
            InputArtifacts:
              - Name: DeployArtifact
            ActionTypeId:
              Category: Build
              Owner: AWS
              Version: "1"
              Provider: CodeBuild
            Configuration:
              ProjectName: !Ref MyCodeBuild
            RunOrder: 2
    ArtifactStore:
      Type: S3
      Location: !Ref S3ArtifactBucket
      EncryptionKey:
        Id: !GetAtt KMSKey.Arn
        Type: KMS
CloudFormation generates the output artifact, zips it, and then uploads the file to S3. It does not add an ACL that grants access to the bucket owner. So you get a 403 when you try to use the CloudFormation output artifact further down the pipeline.
The workaround is to add one more action to your pipeline immediately after the CloudFormation action, for example a Lambda function that can assume the target account role and update the object ACL, e.g. to bucket-owner-full-control.
mockora's answer is correct. Here is an example Lambda function in Python that fixes the issue, which you can configure as an Invoke action immediately after your cross-account CloudFormation deployment.
In this example, you set the Lambda Invoke action's user parameters to the ARN of the role you want the Lambda function to assume in the remote account to fix the S3 object ACLs (a sketch of such an action follows the code below). Obviously your Lambda function will need sts:AssumeRole permissions for that role, and the remote account role will need s3:PutObjectAcl permissions on the pipeline bucket artifact(s).
import os
import logging, datetime, json
import boto3
from aws_xray_sdk.core import xray_recorder
from aws_xray_sdk.core import patch_all

# X-Ray
patch_all()

# Configure logging
logging.basicConfig()
log = logging.getLogger()
log.setLevel(os.environ.get('LOG_LEVEL','INFO'))

def format_json(data):
    return json.dumps(data, default=lambda d: d.isoformat() if isinstance(d, datetime.datetime) else str(d))

# Boto3 Client
client = boto3.client
codepipeline = client('codepipeline')
sts = client('sts')

# S3 Object ACLs Handler
def s3_acl_handler(event, context):
    log.info(f'Received event: {format_json(event)}')
    # Get Job
    jobId = event['CodePipeline.job']['id']
    jobData = event['CodePipeline.job']['data']
    # Ensure we return a success or failure result
    try:
        # Assume IAM role from user parameters
        credentials = sts.assume_role(
            RoleArn=jobData['actionConfiguration']['configuration']['UserParameters'],
            RoleSessionName='codepipeline',
            DurationSeconds=900
        )['Credentials']
        # Create S3 client from assumed role credentials
        s3 = client('s3',
            aws_access_key_id=credentials['AccessKeyId'],
            aws_secret_access_key=credentials['SecretAccessKey'],
            aws_session_token=credentials['SessionToken']
        )
        # Set S3 object ACL for each input artifact
        for inputArtifact in jobData['inputArtifacts']:
            s3.put_object_acl(
                ACL='bucket-owner-full-control',
                Bucket=inputArtifact['location']['s3Location']['bucketName'],
                Key=inputArtifact['location']['s3Location']['objectKey']
            )
        codepipeline.put_job_success_result(jobId=jobId)
    except Exception as e:
        logging.exception('An exception occurred')
        codepipeline.put_job_failure_result(
            jobId=jobId,
            failureDetails={'type': 'JobFailed', 'message': getattr(e, 'message', repr(e))}
        )
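For completeness, the Invoke action that triggers this function could look roughly like the sketch below, placed between the CloudFormation and CodeBuild actions of the pipeline in the question. The function name AclFixFunction is a placeholder, and the role ARN is whatever remote-account role you want the function to assume:

- Name: fix-artifact-acl
  InputArtifacts:
    - Name: DeployArtifact
  ActionTypeId:
    Category: Invoke
    Owner: AWS
    Version: "1"
    Provider: Lambda
  Configuration:
    FunctionName: AclFixFunction   # placeholder Lambda function name
    # passed to the handler as UserParameters and used for sts:AssumeRole
    UserParameters: !Sub "arn:aws:iam::${DevAccountId}:role/dev-cross-account-role"
  RunOrder: 2   # the CodeBuild action would then move to RunOrder 3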
I've been using CodePipeline for cross-account deployments for a couple of years now. I even have a GitHub project around simplifying the process using organizations. There are a couple of key elements to it.
Make sure your S3 bucket is using a CMK, not the default encryption key.
Make sure you grant access to that key to the accounts to which you are deploying. When you have a CloudFormation template, for example, that runs in a different account than the one where the template lives, the role being used in that account needs permission to access the key (and the S3 bucket).
It's certainly more complex than that, but at no point do I run a Lambda to change the object owner of the artifacts. The AWS guide "Create a pipeline in CodePipeline that uses resources from another AWS account" has detailed information on what you need to do to make it work.
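As a rough illustration of the key point above, a CMK defined in the CI account can grant the target account use of the key through its key policy. This is only a sketch; the DevAccountId parameter and the statement layout are assumptions, not code from the answer:

KMSKey:
  Type: AWS::KMS::Key
  Properties:
    Description: Pipeline artifact key shared with the DEV account (sketch)
    KeyPolicy:
      Version: "2012-10-17"
      Statement:
        - Sid: AllowAdministrationByCIAccount
          Effect: Allow
          Principal:
            AWS: !Sub "arn:aws:iam::${AWS::AccountId}:root"
          Action: kms:*
          Resource: "*"
        - Sid: AllowUseByDevAccount
          Effect: Allow
          Principal:
            AWS: !Sub "arn:aws:iam::${DevAccountId}:root"   # assumed parameter with the DEV account id
          Action:
            - kms:Encrypt
            - kms:Decrypt
            - kms:ReEncrypt*
            - kms:GenerateDataKey*
            - kms:DescribeKey
          Resource: "*"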
CloudFormation should use the KMS encryption key provided in the artifact store definition of your pipeline: https://docs.aws.amazon.com/codepipeline/latest/APIReference/API_ArtifactStore.html#CodePipeline-Type-ArtifactStore-encryptionKey
Therefore, as long as you give it a custom key there and allow the other account to use that key too, it should work.
This is mostly covered in this doc: https://docs.aws.amazon.com/codepipeline/latest/userguide/pipelines-create-cross-account.html
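On the other-account side, the role that CloudFormation (or the pipeline) assumes there also needs to be allowed to read the artifacts and use the key. A minimal sketch, assuming placeholder parameters for the artifact bucket name and the key ARN:

DevArtifactAccessPolicy:
  Type: AWS::IAM::Policy
  Properties:
    PolicyName: allow-ci-artifact-access
    Roles:
      - dev-cross-account-role    # placeholder role names taken from the question's ARNs
      - dev-cloudformation-role
    PolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Action:
            - s3:GetObject
            - s3:GetObjectVersion
            - s3:PutObject
          Resource: !Sub "arn:aws:s3:::${ArtifactBucketName}/*"   # assumed parameter
        - Effect: Allow
          Action:
            - kms:Decrypt
            - kms:DescribeKey
            - kms:Encrypt
            - kms:GenerateDataKey*
          Resource: !Ref ArtifactKeyArn   # assumed parameter holding the CMK ARN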
I'm wondering how I can force my local serverless deployment to mimic what happens when I deploy it to AWS.
Here is my serverless yaml file:
service: payment # NOTE: update this with your service name

# You can pin your service to only deploy with a specific Serverless version
# Check out our docs for more details
# frameworkVersion: "=X.X.X"

environment:
  SLS_DEBUG: "*"

provider:
  name: aws
  runtime: nodejs8.10
  stage: production
  region: ca-central-1
  timeout: 60
  role: ${file(../config/prod.env.json):ROLE}
  vpc:
    securityGroupIds:
      - ${file(../config/prod.env.json):SECURITY_GROUP}
    subnetIds:
      - ${file(../config/prod.env.json):SUBNET}
  apiGateway:
    apiKeySourceType: HEADER
    apiKeys:
      - ${file(../config/prod.env.json):APIKEY}

package:
  include:
    - ../lib/**

functions:
  - '${file(src/handlers/payment.serverless.yml)}'

plugins:
  - serverless-offline
My file structure looks like this:
root
--- node_modules
--- lib
      - models
--- payment
      - serverless.yml
When I deploy it to AWS, the lib folder gets put into the Lambda function's folder, but locally I need to define its path, which usually is ../.../../
How can I make it so that I don't have to change the paths, whether running locally or deployed?
There is a Docker container that is extremely close to AWS Lambda. You can deploy your serverless app onto the container and trial-and-error what you want done.
You can also handle this by creating a Lambda layer, which the Serverless Framework supports.
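For the layer route, here is a rough sketch of what that could look like in serverless.yml, assuming the shared code has been copied into a lib folder inside the service directory and using a hypothetical sharedLib layer name:

layers:
  sharedLib:
    path: lib   # directory packaged into the layer (assumed location)

functions:
  payment:
    handler: handler.payment
    layers:
      # the framework exposes the layer resource as <TitleCasedName>LambdaLayer
      - { Ref: SharedLibLambdaLayer }

At runtime the layer contents are extracted under /opt, so for Node.js the shared code is usually placed under a nodejs/node_modules folder inside the layer so that require() resolves it without relative paths.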