I am using serverless framework -
https://serverless.com/framework/docs/providers/aws/guide/serverless.yml/
Before I deploy the serverless stack, there are some manual steps I need to perform:
1. Creating S3 buckets
2. Creating Cognito User Pools, App clients, etc.
3. ...
The ARNs of the AWS resources created in the above steps are configured as environment variables in the serverless.yml file.
Apart from this, I want to avoid the potential problem of hitting the AWS CloudFormation limit of 200 resources in one stack.
What is the best way or tool to split this stack into two parts?
Are there any examples in which the output of one stack is used as environment variables in another stack?
Another option I am considering is to take the CloudFormation template that the Serverless Framework creates and use it inside a nested CF stack.
Any better options/tools?
Yes, this is very much possible, assuming you are deploying from the same AWS account and region.
OPTION 1
Instead of manually creating the resources, use Serverless to deploy them to AWS and export their values as stack outputs:
resources:
  Outputs:
    BucketName:
      Value:
        Ref: S3BucketResource
      Export:
        Name: VariableNameToImport
You can directly import these exported names in your main serverless.yml file and set them as environment variables like:
environment:
  S3BucketName:
    'Fn::ImportValue': VariableNameToImport
OPTION 2 (Easier approach)
Or you can simply use the serverless-plugin-split-stacks plugin.
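For illustration, a minimal sketch of how the plugin might be configured (the splitStacks options below are from the plugin's README as I recall them; perFunction is one of several migration strategies it offers):

plugins:
  - serverless-plugin-split-stacks

custom:
  splitStacks:
    perFunction: true   # move each function and its dependencies into a nested stack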
I am trying to create an S3 bucket using the Serverless Framework, but when I deploy, it creates two buckets: one with the name I mentioned in the serverless.yml file and another bucket.
serverless.yml
service: aws-file-upload-tos3

provider:
  name: aws
  runtime: nodejs12.x
  stage: dev
  region: us-east-2
  lambdaHashingVersion: 20201221

custom:
  fileUploadBucketName: ${self:service}-${self:provider.stage}-bucket

resources:
  Resources:
    FileBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: ${self:custom.fileUploadBucketName}
        AccessControl: PublicRead
Two buckets are created: the one named in serverless.yml and another with a generated name.
Why is it creating two buckets like this?
By default, the Serverless Framework creates a bucket with a generated name like <service name>-serverlessdeploymentbuck-1x6jug5lzfnl7 to store your service's stack state.
Each successive version of your serverless app is bundled and uploaded by sls to the deployment bucket, and deployed from there.
I think you have some control over how sls does this if you use the serverless-deployment-bucket plugin.
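For instance, a minimal sketch (the bucket name is hypothetical; provider.deploymentBucket is standard Serverless Framework configuration, and the plugin creates the named bucket if it does not exist yet):

plugins:
  - serverless-deployment-bucket

provider:
  name: aws
  deploymentBucket:
    name: my-service-deployment-bucket   # hypothetical custom bucket name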
By default, the Serverless Framework creates a number of artifacts on your machine in order to deploy what you have configured in your serverless.yml. It then needs to use a service inside AWS called CloudFormation to actually create the resources you configured, like your S3 bucket. The best way to do this is to take the artifacts it created on your machine and upload them to AWS so the deployment continues without interruption or issue, and the best place to do that is S3.
So the Serverless Framework will always (by default) create its own S3 bucket, entirely unrelated to what you configured, as a location to store the files it generates in your AWS account, and then point CloudFormation at it to build the things you configured.
While you have some control over this deployment bucket, there always needs to be one, and it is completely unrelated to the bucket you configured.
I have a UI5 web application which has no access to AWS Parameter Store. However, it is deployed by CloudFormation, by an account which has this access. I already collect some values from SSM during deployment and use them as parameters for resources deployed by the template.yml file.
However, now I need some of these parameters not only during deployment, but also while the app is actually running. Does someone familiar with AWS and UI5 know how I can store these values during deployment so I can use them later, at runtime of the web application? Thank you.
If you can edit the CloudFormation template, then you can add resources of type AWS::SSM::Parameter to the template, which will create SSM parameters as part of the deployment.
Example:
Resources:
  BasicParameter:
    Type: AWS::SSM::Parameter
    Properties:
      Name: /my/basic/parameter
      Type: String
      Value: my-basic-parameter-value
      Description: A description of the parameter
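Since the question involves values that are only known at deployment time, the Value can also come from a template parameter instead of a literal. A minimal sketch, with a hypothetical parameter name and path:

Parameters:
  ApiEndpoint:
    Type: String

Resources:
  RuntimeConfigParameter:
    Type: AWS::SSM::Parameter
    Properties:
      Name: /myapp/runtime/api-endpoint   # hypothetical path
      Type: String
      Value: !Ref ApiEndpoint             # resolved when the stack is deployed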
I'm creating an API Gateway stage using CloudFormation.
ApiDeployment:
  Type: AWS::ApiGateway::Deployment
  Properties:
    RestApiId: !Ref ExampleRestApi
    StageName: dev
Here is the problem: whenever I create a new API, I still need to deploy the stage using the AWS console. Is there any way to automate the deployment process so that no further console action is required?
When you define a Deployment resource like this, CloudFormation will create the deployment only on the first run. On the second run it will observe that the resource already exists and its definition did not change, so it won't create another deployment. To work around that, you can add something like a UUID/timestamp placeholder to the resource ID and replace it every time before doing the CloudFormation update:
ApiDeployment#TIMESTAMP#:
  Type: AWS::ApiGateway::Deployment
  Properties:
    RestApiId: !Ref ExampleRestApi
    StageName: dev
This way you are still able to see your deployment history in the API Gateway console.
If you don't want to manipulate your template like this, you can also add a Lambda-backed Custom Resource to your CloudFormation stack. Using an AWS SDK, you can have the Lambda function automatically creating new deployments for you when the API was updated.
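A minimal sketch of what such a custom resource declaration might look like (the deployer function and version property are hypothetical; the Lambda behind ServiceToken would call the API Gateway create-deployment API via an AWS SDK):

ApiDeploymentTrigger:
  Type: Custom::ApiGatewayDeployment
  Properties:
    ServiceToken: !GetAtt DeployerFunction.Arn   # hypothetical Lambda-backed custom resource
    RestApiId: !Ref ExampleRestApi
    DeploymentVersion: !Ref TemplateVersion      # hypothetical parameter; change it to trigger a redeploy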
I've found berenbums' response to be mostly correct, but there are a few things I don't like.
1. The proposed method of creating a resource like ApiDeployment#TIMESTAMP# doesn't keep the deployment history. This makes sense, since the old ApiDeployment#TIMESTAMP# element is being deleted and a new one is being created every time.
2. Using ApiDeployment#TIMESTAMP# creates a new deployment every time the template is deployed, which might be undesirable if the template is being deployed to create/update other resources.
3. Using ApiDeployment#TIMESTAMP# also didn't work well when adding the StageDescription property. A potential solution is to add a static APIGwDeployment resource for the initial deployment (with StageDescription) and ApiDeployment#TIMESTAMP# for the updates.
The fundamental issue, though, is that creating a new API Gateway deployment is not well suited to CloudFormation (beyond the initial deployment). After the initial deployment, I think it's better to make an AWS API call to update the deployment (see https://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-deployments.html).
In my particular case I created a small Ansible module to invoke aws apigateway create-deployment, which updates an existing stage in one operation.
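For illustration, a rough equivalent as an Ansible task using the built-in command module rather than a custom module (the REST API id is hypothetical):

- name: Create a new API Gateway deployment for the dev stage
  command: >
    aws apigateway create-deployment
    --rest-api-id abc123def4
    --stage-name dev
    --description "Deployment triggered by automation"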
Use case
I have a CloudFormation stack with more than 15 Lambdas in it. I am able to deploy the stack through CodePipeline, which consists of two stages: CodeCommit and CodeDeploy. In this approach, all my Lambda code is inline in the CloudFormation template. For security reasons I want to change this inline code to S3, which in turn requires an S3 bucket name and S3 key.
As a temporary workaround
As of now I am zipping each Lambda file and manually passing the S3 key name and bucket name as parameters to my stack.
Is there any way to do this step via CodePipeline?
My assumption about CodeBuild
I know we can use CodeBuild for this, but so far I have only seen CodeBuild used to build a package.json file, and in my use case I don't have one. I can also see that it is possible to run the cloudformation package command to upload my Lambda code from local to S3 (this command generates an S3 CodeUri), but that is for serverless applications with a single Lambda, whereas I have 15.
What I have tried
I know that as soon as you push to CodeCommit, it keeps your code in S3. So my idea was to get the S3 bucket name and S3 key of the pushed file and pass them as parameters to my CFN template. I am able to get the S3 bucket name, but I don't know how to get the S3 key, and I don't know whether this approach is workable.
BTW, I know I can use a shell script to automate this process, but is there a way to do it via CodePipeline?
Update: tried the serverless approach
Basically, I run two build actions with two different runtimes (Node.js and Python) which run independently. With the serverless approach, each build creates a template-export.yml file with the CodeUri pointing at the bucket location, which means I end up with two template-export.yml files. One problem with the serverless approach is that it must create a changeset and then trigger an execute-changeset action. Because of that, I need to merge the two template-export.yml files before running the create-changeset action followed by execute-changeset, but I don't know whether there is a command to merge two SAM templates. Otherwise one template-export.yml stack will replace the other.
Any help is appreciated
Thanks
If I'm understanding you right, you just need an S3 Bucket and Key to be piped into your Lambda CF template. To do this I'm using the ParameterOverrides declaration in my pipeline.
Essentially, the pipeline is a separate stack and picks up a CF template located in the root of my source. It then overrides two parameters in that template that point it to the appropriate S3 bucket/key.
- Name: LambdaDeploy
  Actions:
    - Name: CreateUpdateLambda
      ActionTypeId:
        Category: Deploy
        Owner: AWS
        Provider: CloudFormation
        Version: 1
      Configuration:
        ActionMode: CREATE_UPDATE
        Capabilities: CAPABILITY_IAM
        RoleArn: !GetAtt CloudFormationRole.Arn
        StackName: !Join
          - ''
          - - Fn::ImportValue: !Sub '${CoreStack}ProjectName'
            - !Sub '${ModuleName}-app'
        TemplatePath: SourceOut::cfn-lambda.yml
        ParameterOverrides: '{ "DeploymentBucketName" : { "Fn::GetArtifactAtt" : ["BuildOut", "BucketName"]}, "DeploymentPackageKey": {"Fn::GetArtifactAtt": ["BuildOut", "ObjectKey"]}}'
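For context, a minimal sketch of how the overridden parameters might be declared and consumed inside cfn-lambda.yml (the function, handler, and role names are hypothetical):

Parameters:
  DeploymentBucketName:
    Type: String
  DeploymentPackageKey:
    Type: String

Resources:
  MyFunction:
    Type: AWS::Lambda::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs12.x
      Role: !GetAtt LambdaExecutionRole.Arn   # hypothetical IAM role resource
      Code:
        S3Bucket: !Ref DeploymentBucketName
        S3Key: !Ref DeploymentPackageKey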
Now, the fact that you have fifteen Lambda functions in this might throw a wrench in it. For that I do not exactly have an answer since I'm actually trying to do the exact same thing and package up multiple Lambdas in this kind of way.
There's documentation on deploying multiple Lambda functions via CodePipeline and CloudFormation here: https://docs.aws.amazon.com/lambda/latest/dg/build-pipeline.html
I believe this will still upload the function code to S3, but it will leverage AWS tooling to make this process simpler.
Is there a way to access auto-generated URLs for deployed resources before the deployment is finished? (like db host, lambda function URL, etc.)
I can access them after the deployment is finished, but sometimes I need to access them while building my stack (e.g. to use them in other resources).
What is a good solution to handle this use-case? I was thinking about outputting them into the SSM parameter store from CloudFormation template, but I'm not sure if this is even possible.
Thanks for any suggestion or guidance!
If "use them in other resources" means another serverless service or another CloudFormation stack, then use CloudFormation Outputs to export the values you are interested in. Then use CloudFormation ImportValue function to reference that value in another stack.
See https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/outputs-section-structure.html and https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-importvalue.html
Within Serverless Framework, you can access a CloudFormation Output value using https://serverless.com/framework/docs/providers/aws/guide/variables/#reference-cloudformation-outputs
If you want to use an auto-generated value within the same stack, then just use the CloudFormation GetAtt function. See https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-getatt.html.
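For the same-stack case in a serverless.yml, a minimal sketch (the function and resource names are hypothetical; the long-form Fn::GetAtt is used because the !GetAtt shorthand may not parse in serverless.yml):

functions:
  query:
    handler: handler.query
    environment:
      SEARCH_URL:
        Fn::GetAtt: [Search, DomainEndpoint]   # Search being a resource defined under resources.Resources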
As a cross-stack example, I have a CloudFormation stack that outputs the URL for an Elasticsearch cluster.
Resources:
  Search:
    Type: AWS::Elasticsearch::Domain
    Properties: <redacted>

Outputs:
  SearchUrl:
    Value: !GetAtt Search.DomainEndpoint
    Export:
      Name: myapp:search-url
Assuming that the CloudFormation stack name is "mystack", then in my Serverless service, I can reference the SearchUrl by:
custom:
  searchUrl: ${cf:mystack.SearchUrl}
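From there, a small follow-on sketch of handing the value to functions as an environment variable:

provider:
  environment:
    SEARCH_URL: ${self:custom.searchUrl}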
To add to bwinant's answer, ${cf:<stack name>.<output name>} does not work if you want to reference a variable in another stack located in another region. There is a plugin to achieve this called serverless-plugin-cloudformation-cross-region-variables. You can use it like so:
plugins:
  - serverless-plugin-cloudformation-cross-region-variables

custom:
  myVariable: ${cfcr:ca-central-1:my-other-stack:MyVariable}