Two S3 buckets are created when deploying using the Serverless Framework - amazon-web-services

I am trying to create an S3 bucket using the Serverless Framework, but when I deploy, it creates two buckets: one with the name I have specified in the serverless.yml file and another, unexpected bucket.
serverless.yml
service: aws-file-upload-tos3
provider:
  name: aws
  runtime: nodejs12.x
  stage: dev
  region: us-east-2
  lambdaHashingVersion: 20201221
custom:
  fileUploadBucketName: ${self:service}-${self:provider.stage}-bucket
resources:
  Resources:
    FileBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: ${self:custom.fileUploadBucketName}
        AccessControl: PublicRead
Two buckets are created: the one I named in serverless.yml and another with a generated name.
Why is it creating two buckets like this?

By default, the Serverless Framework creates a bucket with a generated name like <service name>-serverlessdeploymentbuck-1x6jug5lzfnl7 to store your service's stack state.
Each successive version of your serverless app is bundled and uploaded by sls to the deployment bucket, and deployed from there.
I think you have some control over how sls does this if you use the serverless-deployment-bucket plugin.
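If you want the framework to use a bucket you control instead of a generated one, a minimal sketch (the bucket name here is hypothetical, and the plugin's create-if-missing behaviour is as I recall it from its README):
provider:
  name: aws
  deploymentBucket:
    name: my-company-serverless-deployments # hypothetical; must be globally unique
plugins:
  - serverless-deployment-bucket # creates the bucket if it does not already exist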

By default, the Serverless Framework generates a number of artifacts on your machine in order to deploy what you have configured in your serverless.yml. It then uses a service inside AWS called CloudFormation to actually create the resources you configured, like your S3 bucket. To make sure the deployment continues without interruption or issue, the framework uploads the files it generated to AWS, and the best place to do that is S3.
So the Serverless Framework will always (by default) create its own S3 bucket, entirely unrelated to anything you configured, as a location on your AWS account to store the files it generated, and then points CloudFormation at it to build the things you configured to get built.
While you have some control over this deployment bucket, there always needs to be one, and it is completely unrelated to the bucket you configured.

Related

How to manage different Serverless (with AWS Lambda) environments (ie "dev" and "prod")

I want to create a separate 'dev' AWS Lambda with my Serverless service.
I have deployed my production ('prod') environment and then tried to deploy a development ('dev') environment so that I can trial features without affecting the customer experience.
In order to deploy the 'dev' environment I have:
Created a new serverless-dev.yml file
Updated the stage and profile fields in my .yml file:
provider:
  name: aws
  runtime: nodejs14.x
  stage: dev
  region: eu-west-2
  profile: dev
  memorySize: 128
  timeout: 30
Also updated the resources.Resources.<Logical Id>.Properties.RoleName value, because if I try to use the same role as my 'prod' Lambda, I get this message: clearbit-lambda-role-prod already exists in stack
resources:
  Resources:
    <My Logical ID>:
      Type: AWS::IAM::Role
      Properties:
        Path: /my/cust/path/
        RoleName: clearbit-lambda-role-dev # Change this name
Run command: sls deploy -c serverless-dev.yml
Is this the conventional method to achieve this? I can't find anything in the documentation.
Serverless Framework has support for stages out of the box. You don't need a separate configuration; you can just specify --stage <name-of-stage> when running e.g. sls deploy and it will automatically use that stage. All resources created by the Framework under the hood include the stage in their names or identifiers. If you are defining extra resources in the resources section, you need to change them yourself, or make sure they include the stage in their names. You can get the current stage in the configuration with ${sls:stage} and use that to construct names that are, for example, prefixed with the stage.
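For example, a minimal sketch (the role name is illustrative; on older Framework versions use ${opt:stage, self:provider.stage} instead of ${sls:stage}):
resources:
  Resources:
    LambdaRole:
      Type: AWS::IAM::Role
      Properties:
        RoleName: my-lambda-role-${sls:stage} # resolves per stage: -dev, -prod, etc.
        ...
Running sls deploy --stage dev and sls deploy --stage prod then produces two independent stacks with no name collisions.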

Is there a way to use multiple aws profiles to deploy(update) serverless stack?

We have a team of 3 to 4 members, so we wanted to do serverless deploys or update functions or resources using our own personal AWS credentials, without creating a new stack but just updating the existing resources. Is there a way to do that? I am aware that we can set up --aws-profile and different profiles for different stages. I am also aware that we could just divide the resources into microservices and deploy or update our own parts. Any help is appreciated.
This can be done as below.
Add the profile configuration as shown; I have named it devProfile.
service: new-service
provider:
  name: aws
  runtime: nodejs12.x
  stage: dev
  profile: devProfile
Each individual would set their credentials on their own machine as below:
aws configure --profile devProfile
If you have different credentials for different stages, the above serverless snippet can be parameterized as below:
serverless.yml
custom:
  stages:
    - local
    - dev
    - prod
  # default stage/environment
  defaultStage: local
  # default AWS region
  defaultRegion: us-east-1
  # config file / region / stage
  configFile: ${file(./config/${opt:region,self:provider.region}/${self:provider.stage}.yml)}
provider:
  ...
  stage: ${opt:stage, self:custom.defaultStage}
  ...
  profile: ${self:custom.configFile.aws.profile}
  ...
...
Create config/us-east-1/dev.yml
aws:
  profile: devProfile
and config/us-east-1/prod.yml
aws:
  profile: prodProfile
It sounds like you already know what to do but need a sanity check. So I'll tell you how I, and everyone else I know, handle this.
We prefix commands with AWS_PROFILE env var declared and we use --stage names.
E.g. AWS_PROFILE=mycompany sls deploy --stage shailendra.
Google aws configure for examples of how to set up the AWS CLI to use the AWS_PROFILE variable.
We also name the --stage with a unique ID, e.g. your name. This way, you and your colleagues each have individual CloudFormation stacks that work independently of each other, and there will be no conflicts.

AWS Serverless Framework: Nested Stacks or CloudFormation templates

I am using serverless framework -
https://serverless.com/framework/docs/providers/aws/guide/serverless.yml/
Before I deploy the serverless stack, there are some manual steps I need to perform:
Creating S3 buckets
Creating Cognito User Pools, App clients, etc.
3.....
The ARNs of the AWS resources created in the above steps are configured as environment variables in the serverless.yml file.
Apart from this, I want to avoid the potential problem of reaching the AWS CloudFormation limit of 200 resources in one stack.
What is the best way/tools to split this stack into two parts?
Are there any examples, in which output of one stack is used as environment variables in the another stack?
Another option I am considering is to take the CloudFormation template that the Serverless Framework creates and use it inside a nested CF stack.
Any better options/tools?
Yes, this is very much possible, assuming you are deploying from the same AWS account and region.
Instead of manually creating resources, use serverless to deploy these resources on AWS and export their names:
resources:
  Outputs:
    BucketName:
      Value:
        Ref: S3BucketResource
      Export:
        Name: VariableNameToImport
You can then directly import these exported names in your main serverless.yml file and set them as environment variables:
environment:
  S3BucketName:
    'Fn::ImportValue': VariableNameToImport
OPTION 2 (easier approach)
Or you can simply use the serverless-plugin-split-stacks plugin.

Automating Deployment of AWS API Gateway Stage

How would I go about automating the deployment of an AWS API Gateway via a Python script using Boto3? For example, if I have created a stage named "V1" in the AWS Console for API Gateway, how would I write a script to deploy that stage ("V1")?
The current process involves deploying the stage manually from the AWS Console and is not scriptable. For purposes of automation, I would like to have a script to do the same.
Consulting the Boto3 documentation, I see there's a method for creating a stage (http://boto3.readthedocs.io/en/latest/reference/services/apigateway.html#APIGateway.Client.create_stage), but none for deploying one.
If you want to stick with deploying via specific boto3 API calls, then you want to follow this rough sequence of calls (sketched in code after the list):
Use get_rest_apis to retrieve the API ID.
Possibly check if it's deployed already using get_deployments.
Use create_deployment to create the deployment. Use the stageName parameter to specify the stage to create.
Consider using create_base_path_mapping if needed.
Also consider using update_stage if you need to turn on something like logging.
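A minimal sketch of that sequence (the API name and stage are hypothetical; error handling is omitted):
import boto3

apigw = boto3.client("apigateway")

# 1. Look up the REST API ID by name.
apis = apigw.get_rest_apis()["items"]
api_id = next(a["id"] for a in apis if a["name"] == "my-api")  # hypothetical name

# 2. Optionally check what is already deployed.
existing = apigw.get_deployments(restApiId=api_id)["items"]

# 3. Create a deployment; stageName creates or updates the "V1" stage in one call.
apigw.create_deployment(restApiId=api_id, stageName="V1", description="Deployed by script")

# 4. Optionally tune the stage, e.g. enable execution logging for all methods.
apigw.update_stage(
    restApiId=api_id,
    stageName="V1",
    patchOperations=[{"op": "replace", "path": "/*/*/logging/loglevel", "value": "INFO"}],
)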
To deploy a typical setup (API Gateway/Lambda), I would recommend AWS SAM instead of writing your own code.
It even supports Swagger and you can define your stages in SAM definition files.
e.g.
ApiGatewayApi:
  Type: AWS::Serverless::Api
  Properties:
    StageName: v1
    CacheClusterEnabled: true
    CacheClusterSize: "0.5"
    DefinitionUri: "swagger.yaml"
    Variables:
      [...]
[...]
MyFunction:
  Type: AWS::Serverless::Function
  Properties:
    Handler: ratings.handler
    Runtime: python3.6
    Events:
      Api:
        Type: Api
        Properties:
          Path: /here
          Method: get
          RestApiId: !Ref ApiGatewayApi
Deployment integrates easily into CD pipelines using the AWS CLI:
aws cloudformation package \
--template-file path/example.yaml \
--output-template-file serverless-output.yaml \
--s3-bucket s3-bucket-name
aws cloudformation deploy \
--template-file serverless-output.yaml \
--stack-name new-stack-name \
--capabilities CAPABILITY_IAM
See also: Deploying Lambda-based Applications
Yes, creating and deploying the APIs manually through the AWS console is not very scriptable, but pretty much anything you can click in the console can also be done with the AWS CLI. It sounds like you want an automated CI/CD pipeline. Once you figure out which AWS CLI commands you would run, add them to your CI pipeline and you should be good to go.
But actually, there's an even easier way. Go to AWS CodeStar, click "Create new project", and check "Web Service", "Python", and "AWS Lambda". As of today there's only one CodeStar template that fits all three, so choose that one. This will scaffold a full CI/CD pipeline (AWS CodePipeline) with one dev environment, hooked up to a Git project. I think this would be a good way for you to leverage the automated deployment machinery without having to set it up and maintain it on top of your main project.

CloudFormation doesn't deploy to API gateway stages on update

When I run CloudFormation deploy using a template with API Gateway resources, the first time I run it, it creates and deploys to stages. The subsequent times I run it, it updates the resources but doesn't deploy to stages.
Is that the intended behaviour? If yes, how would I get it to deploy to stages whenever it updates?
(Terraform mentions a similar issue: https://github.com/hashicorp/terraform/issues/6613)
It seems there is no way to easily create a new Deployment whenever one of your CloudFormation resources changes.
One way to work around that would be to use a Lambda-backed Custom Resource (see http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-custom-resources.html).
The Lambda should create the new Deployment only if one of your resources has been updated. To determine whether one of your resources has been updated,
you will probably have to implement custom logic around this API call: http://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/API_DescribeStackEvents.html
In order to trigger updates on your Custom Resource, I suggest you supply a Cloudformation Parameter that will be used to force an update of your Custom Resource (e.g. the current time, or a version number).
Note that you will have to add a DependsOn clause to your Custom Resource that will include all Resources relevant to your API. Otherwise, your deployment might be created before all your API Resources are updated.
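A minimal sketch of such a custom resource handler (the RestApiId and StageName property names are hypothetical inputs passed from the template; production code would need retries and logging):
import json
import urllib.request
import boto3

apigw = boto3.client("apigateway")

def handler(event, context):
    status, reason = "SUCCESS", ""
    try:
        # Only Create/Update should trigger a new deployment; deleting the
        # custom resource itself needs no API action.
        if event["RequestType"] in ("Create", "Update"):
            props = event["ResourceProperties"]
            apigw.create_deployment(
                restApiId=props["RestApiId"],    # hypothetical property name
                stageName=props["StageName"],    # hypothetical property name
                description="Deployed by custom resource",
            )
    except Exception as exc:
        status, reason = "FAILED", str(exc)
    # Report the result back to CloudFormation via the pre-signed ResponseURL.
    body = json.dumps({
        "Status": status,
        "Reason": reason,
        "PhysicalResourceId": event.get("PhysicalResourceId", context.log_stream_name),
        "StackId": event["StackId"],
        "RequestId": event["RequestId"],
        "LogicalResourceId": event["LogicalResourceId"],
    }).encode()
    req = urllib.request.Request(event["ResponseURL"], data=body,
                                 headers={"Content-Type": ""}, method="PUT")
    urllib.request.urlopen(req)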
Hope this helps.
When your template specifies a deployment, CloudFormation will create that deployment only if it doesn't already exist. When you attempt to run it again, it observes that the deployment still exists, so it won't recreate it; thus no new deployment happens. You need a new resource ID for the deployment so that it will create a new one. Read this for more information: https://currentlyunnamed-theclassic.blogspot.com/2018/12/mastering-cloudformation-for-api.html
CloudFormation in Amazon's words is:
AWS CloudFormation takes care of provisioning and configuring those resources for you
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html
Redeployment of APIs is not a provisioning task... It is a promotion activity which is part of a stage in your software release process.
AWS CodePipeline is a continuous delivery service you can use to model, visualize, and automate the steps required to release your software.
http://docs.aws.amazon.com/codepipeline/latest/userguide/welcome.html
CodePipeline also supports execution of Lambda functions from actions in the pipeline. So, as advised before, create a Lambda function to deploy your API, but call it from CodePipeline instead of CloudFormation.
Consult this page for details:
http://docs.aws.amazon.com/codepipeline/latest/userguide/actions-invoke-lambda-function.html
I was using the above approach, but it looks too complicated to me just to deploy API Gateway. If we change the names of the resources, it takes time to delete and recreate them, which increases downtime for your application.
I'm following the approach below to deploy API Gateway to the stage using the AWS CLI, and it does not affect the CloudFormation stack deployment.
What I'm doing is running the AWS CLI command below after the API Gateway deployment completes. It updates the existing stage with the latest changes.
aws apigateway create-deployment --rest-api-id tztstixfwj --stage-name stg --description 'Deployed from CLI'
The answer here is to use the AutoDeploy property of the Stage:
Stage:
  Type: AWS::ApiGatewayV2::Stage
  Properties:
    StageName: v1
    Description: 'API Version 1'
    ApiId: !Ref myApi
    AutoDeploy: true
Note that the 'DeploymentId' property must be unspecified when using 'AutoDeploy'.
See documentation, here: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-stage.html
From the blogspot post linked by TheClassic (best answer so far!), you have to keep in mind that if you aren't generating your templates with something that can insert a valid timestamp in place of $TIMESTAMP$, you must update that placeholder manually with a timestamp or other unique ID on every deployment. Here is my functional example; it successfully deletes the existing deployment and creates a new one, but I have to update those unique values manually whenever I want to create another change set:
rDeployment05012019355:
  Type: AWS::ApiGateway::Deployment
  DependsOn: rApiGetMethod
  Properties:
    RestApiId:
      Fn::ImportValue:
        !Sub '${pApiCoreStackName}-RestApi'
    StageName: !Ref pStageName
rCustomDomainPath:
  Type: AWS::ApiGateway::BasePathMapping
  DependsOn: [rDeployment05012019355]
  Properties:
    BasePath: !Ref pPathPart
    Stage: !Ref pStageName
    DomainName:
      Fn::ImportValue:
        !Sub '${pApiCoreStackName}-CustomDomainName'
    RestApiId:
      Fn::ImportValue:
        !Sub '${pApiCoreStackName}-RestApi'
I may be late, but here are the options with which you can trigger a redeployment when an API resource changes; this may be helpful to people still looking for options:
Set AutoDeploy to true if you are using the V2 version of deployment. Note that the API Gateway itself needs to have been created through V2; V1 and V2 are not compatible with each other. https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-stage.html#cfn-apigatewayv2-stage-autodeploy
A Lambda-backed custom resource, where the Lambda in turn calls the createDeployment API - https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-custom-resources.html
A CodePipeline that has an action that calls a Lambda function, much like the custom resource would - https://docs.aws.amazon.com/codepipeline/latest/userguide/actions-invoke-lambda-function.html
SAM (Serverless Application Model) follows a syntax similar to CloudFormation, simplifying resource creation into abstractions and using those to build and deploy a normal CloudFormation template. https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/what-is-sam.html
If you are using an abstraction layer over CloudFormation like Sceptre, you can add a hook to call createDeployment after any update to the resource. https://sceptre.cloudreach.com/2.3.0/docs/hooks.html
I went with the Sceptre option since I was already using it for CloudFormation deployment. Implementing hooks in Sceptre is easy as well.
Reading through this thread, I did not come to a conclusion right away, as the information is stretched across multiple sources. I will try to sum up all the findings here (and from the linked sources), along with my personal testing, to help others avoid the hunt.
Important to know is that each API always has a dedicated URL; the associated stages only get a separate suffix. Updating the deployment does not change the URL, while recreating the API does.
API
├─ RestAPI (incl. Resources, Methods, etc.)
└─ Deployment
   ├─ Stage - v1  https://6s...com/v1
   └─ Stage - v2  https://6s...com/v2
Relation between stage and deployment:
To deploy AWS API Gateway through CloudFormation (Cfn) you need a RestApi Cfn resource and a Deployment Cfn resource. If you give the Deployment resource a stage name, the deployment automatically creates that stage as part of the creation. If you leave the stage name out, the API is created without any stage. Either way, once you have a deployment you can attach n stages to it by linking the two, but a stage and its API always have only one deployment.
Updating a simple API:
Now if you want to update this "simple API", consisting of just a RestAPI plus a deployment, you face the issue that the deployment cannot be updated if it has a stage name, as the stage already "exists". And to get the deployment updated in the first place, you have to add a timestamp or hash to the deployment's resource name in CloudFormation; otherwise no update is even triggered.
Solving the deployment update:
To enable updating the deployment, you have to split deployment and stage into separate Cfn resources. That is, you remove the stage name from the Deployment resource and create a new Stage resource that references the deployment resource. This way you can update the deployment. Still, the stage, the part you reference via the URL, is not automatically updated.
Propagating the update from the deployment to your stages:
Now that we can update the deployment, i.e. the blueprint of the API, we can propagate the change to its respective stage. This step, as far as I know, is not possible using CloudFormation alone. Therefore, to trigger the update you either need to add a custom resource or you do it manually. Other non-CloudFormation approaches are summed up in #Athi's answer above, but they were no solution for me as I want to limit the tooling used.
If anybody has an example for the Lambda update, please feel free to ping me - then I would add it here. The links I found so far only reference a plain template.
I hope this helped others understanding the context a bit better.
Sources:
Problem description with Cfn-template, 2
Adding timestamp to deployment resource, 2
Using CodePipeline as a solution
Related question and CLI update answer
Related terraform issue
Related AWS forum thread
This worked for me:
cfn.yml
APIGatewayStage:
  Type: 'AWS::ApiGateway::Stage'
  Properties:
    StageName: !Ref Environment
    DeploymentId: !Ref APIGatewayDeployment$TIMESTAMP$
    RestApiId: !Ref APIGatewayRestAPI
    Variables:
      lambdaAlias: !Ref Environment
    MethodSettings:
      - ResourcePath: '/*'
        DataTraceEnabled: true
        HttpMethod: "*"
        LoggingLevel: INFO
        MetricsEnabled: true
  DependsOn:
    - liveLocationsAPIGatewayMethod
    - testJTAPIGatewayMethod
APIGatewayDeployment$TIMESTAMP$:
  Type: 'AWS::ApiGateway::Deployment'
  Properties:
    RestApiId: !Ref APIGatewayRestAPI
  DependsOn:
    - liveLocationsAPIGatewayMethod
    - testJTAPIGatewayMethod
bitbucket-pipelines.yml
script:
  - python3 deploy_api.py
deploy_api.py
import time

# Replace the $TIMESTAMP$ placeholder in the template with the current
# epoch time so CloudFormation sees a new Deployment resource on each run.
file_name = 'infra/cfn.yml'
ts = str(time.time()).split(".")[0]
print(ts)

with open(file_name, 'r') as file:
    filedata = file.read()

filedata = filedata.replace('$TIMESTAMP$', ts)

with open(file_name, 'w') as file:
    file.write(filedata)
Read this for more information: https://currentlyunnamed-theclassic.blogspot.com/2018/12/mastering-cloudformation-for-api.html
If you have something to do the $TIMESTAMP$ replacement, I'd probably go with that as it's cleaner and you don't have to do any manual API Gateway management.
I have found that the other solutions posted here mostly do the job with one major caveat - you can't manage your Stage and Deployment separately in CloudFormation because whenever you deploy your API Gateway, you have some sort of downtime between when you deploy the API and when the secondary process (custom resource / lambda, code pipeline, what have you) creates your new deployment. This downtime is because CloudFormation only ever has the initial deployment tied to the Stage. So when you make a change to the Stage and deploy, it reverts back to the initial deployment until your secondary process creates your new deployment.
*** Note that if you are specifying a StageName on your Deployment resource, and not explicitly managing a Stage resource, the other solutions will work.
In my case, I don't have that $TIMESTAMP$ replacement piece, and I needed to manage my Stage separately so I could do things like enable caching, so I had to find another way. The workflow and relevant CF pieces are as follows:
Before triggering the CF update, see if the stack you're about to update already exists, and set stack_exists: true|false (a sketch of this check appears after the steps below)
Pass that stack_exists variable into your CF template(s), all the way down to the stack that creates the Deployment and Stage
The following condition:
Conditions:
  StackExists: !Equals [!Ref StackAlreadyExists, "True"]
The following Deployment and Stage:
# Only used for initial creation; the secondary process re-creates this
Deployment:
  DeletionPolicy: Retain
  Type: AWS::ApiGateway::Deployment
  Properties:
    Description: "Initial deployment"
    RestApiId: ...
Stage:
  Type: AWS::ApiGateway::Stage
  Properties:
    DeploymentId: !If
      - StackExists
      - !Ref AWS::NoValue
      - !Ref Deployment
    RestApiId: ...
    StageName: ...
Secondary process that does the following:
# looks up `apiId` and `stageName` and sets variables
CURRENT_DEPLOYMENT_ID=$(aws apigateway get-stage --rest-api-id <apiId> --stage-name <stageName> --query 'deploymentId' --output text)
aws apigateway create-deployment --rest-api-id <apiId> --stage-name <stageName>
aws apigateway delete-deployment --rest-api-id <apiId> --deployment-id ${CURRENT_DEPLOYMENT_ID}
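For step 1 above, a hedged sketch of the existence check (the stack, template, and parameter names are hypothetical):
# describe-stacks exits non-zero when the stack does not exist yet.
if aws cloudformation describe-stacks --stack-name "$STACK_NAME" > /dev/null 2>&1; then
  STACK_EXISTS=True
else
  STACK_EXISTS=False
fi
aws cloudformation deploy \
  --template-file template.yaml \
  --stack-name "$STACK_NAME" \
  --parameter-overrides StackAlreadyExists="$STACK_EXISTS"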
Use SAM's AWS::Serverless::Api. It does the deployment for you when it performs the CloudFormation transformation.