How to create re-usable blocks in CloudFormation

Scenario:
I have a serverless/cloudformation script that re-deploys the same code with different configurations to AWS as lambdas and exposes each lambda via API Gateway.
So far the only way I've been able to do this is with copious amounts of copy and paste within the same script, but it's starting to drive me up the wall. I'm a complete newbie to AWS, and navigating the AWS docs and the internet has yielded pretty bad results, so I'm trying my luck here.
Within a cloudformation script:
"Resources": {
  "LambdaResourceNG": {
    "Type": "AWS::Serverless::Function",
    "Properties": {
      "Handler": "some-handlername::foo::bar",
      "Runtime": "dotnetcore2.0",
      "Environment": {
        "Variables": {
          "PictureOptions__OriginalPictureSuffix": {
            "Fn::Join": [
              "",
              [
                "_",
                "ng",
                "_",
                {
                  "Fn::FindInMap": [
                    "Environments",
                    { "Ref": "EnvironmentValue" },
                    "PictureOptionsOriginalPictureSuffix"
                  ]
                }
              ]
            ]
          }
        }
      },
      "Events": {
        "Bar": {
          "Type": "Api",
          "Properties": { "Path": "/ng/bar", "Method": "POST" }
        },
        "Foo": {
          "Type": "Api",
          "Properties": { "Path": "/ng/foo", "Method": "POST" }
        }
      }
    }
  }
}
Question:
In the script block above, the resource is called LambdaResourceNG. If I wanted to have another resource, LambdaResourceKE, with all the appropriate sections changed to KE, how would I make a "function" that I could re-use within this, erm, language?
I've already found out how to use maps to swap out variables based on some env value... but how would one go about creating reusable blocks of code/config?

If the existing CloudFormation nested stacks feature doesn't suffice and you need real programmability, then the final CloudFormation template can be the output of a higher-level process.
There are tools available to create templates, e.g. the AWS Cloud Development Kit, Troposphere, and cfndsl.
Another option would be to drive the creation of the final template from a CLI. It doesn't have to be particularly sophisticated, just something that includes a template engine (like jinja2 or handlebars). Then you can program the inclusion of reusable template fragments, dynamically inject values into those fragments, iterate over loops as necessary, and emit a final CloudFormation template (or a main template and set of nested templates).
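To make that last option concrete, here is a minimal sketch in Python using the stdlib string.Template as a stand-in for a fuller engine like Jinja2. The fragment mirrors the resource from the question; the helper name and the idea of keying everything off a list of suffixes are assumptions for illustration:

```python
# Minimal sketch: render one reusable fragment per deployment suffix and
# merge the results into a single CloudFormation template.
# string.Template is a stdlib stand-in for a fuller engine like Jinja2.
import json
from string import Template

# Reusable fragment; $suffix and $path are substituted per deployment.
FRAGMENT = Template("""
{
  "LambdaResource$suffix": {
    "Type": "AWS::Serverless::Function",
    "Properties": {
      "Handler": "some-handlername::foo::bar",
      "Runtime": "dotnetcore2.0",
      "Events": {
        "Bar": {"Type": "Api", "Properties": {"Path": "/$path/bar", "Method": "POST"}},
        "Foo": {"Type": "Api", "Properties": {"Path": "/$path/foo", "Method": "POST"}}
      }
    }
  }
}
""")

def render_resources(suffixes):
    """Render the fragment once per suffix and merge into a Resources map."""
    resources = {}
    for s in suffixes:
        resources.update(json.loads(FRAGMENT.substitute(suffix=s.upper(), path=s.lower())))
    return {"Resources": resources}

template = render_resources(["ng", "ke"])
print(json.dumps(template, indent=2))
```

The same loop could emit any number of deployments from one fragment, which is exactly the copy-and-paste the question is trying to avoid.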

You can nest a CloudFormation stack within another using the AWS::CloudFormation::Stack resource type. Nested stacks cannot exist without their parent; deleting the parent stack deletes all nested stacks. Note that TemplateURL must point to S3, and that is where the aws cloudformation package CLI command helps, by uploading a local file there and replacing the URL in the template.
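A hedged sketch of what that looks like in a parent template (the logical name, parameter, and local file name are all assumptions):

```yaml
Resources:
  ApiStackNG:
    Type: AWS::CloudFormation::Stack
    Properties:
      # `aws cloudformation package` rewrites this local path to an S3 URL
      TemplateURL: ./lambda-api.yaml
      Parameters:
        Suffix: ng
```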
Cross-stack references also help in modularizing templates. For example, a "database network" stack can export its subnet IDs and other values for any future database stack to use. Note that modularization goes further than merging text: it is about declaring and managing resource lifecycle relationships correctly.
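For illustration, a hedged sketch of the export/import pair (all names here are made up):

```yaml
# In the "database network" stack: export the subnet ids
Outputs:
  DbSubnetIds:
    Value: !Join [",", [!Ref SubnetA, !Ref SubnetB]]
    Export:
      Name: db-network-subnet-ids

# In a consuming database stack, reference them by the exported name:
# DBSubnetGroup:
#   Type: AWS::RDS::DBSubnetGroup
#   Properties:
#     DBSubnetGroupDescription: db subnets
#     SubnetIds: !Split [",", !ImportValue db-network-subnet-ids]
```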
Stacks can even be composed further, across different regions and accounts, using StackSets. This can be quite helpful when managing applications provisioned per tenant or sub-organization, as is frequently the case in "self-service IT", which can be achieved by combining CloudFormation with other services like AWS Service Catalog and AWS Marketplace.

Nested stacks are clumsy in that you don't necessarily want an entire stack just for a single resource. CloudFormation Modules would solve this problem nicely (reference). You can even package multiple resources in a single module.
You can create reusable modules with pre-packaged properties, which:
Reduce boilerplate configuration
Enforce company-wide standards
Modules are deployed to the CloudFormation Registry, where they can be versioned and used by anyone in your company. You can use Parameters in the module to pass in properties, just like you would for a standard AWS resource. You can then use the custom module like this:
Resources:
  LambdaResourceNG:
    Type: YourCompany::LambdaApi::FooBarApi
    Properties:
      ApiName: ng
  LambdaResourceKE:
    Type: YourCompany::LambdaApi::FooBarApi
    Properties:
      ApiName: ke

To create a reusable template in CloudFormation, there are a couple of things to keep in mind:
Use nested stacks: with a nested stack you can create a small stack for each AWS service (i.e. VPC, load balancer) which you can reuse in other projects.
Use parameters: use parameters as much as possible.
Use conditions: CloudFormation provides conditions; using them, the same template can perform multiple tasks.
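The parameters-plus-conditions point can be sketched like this (the parameter name matches the question above; the resource and condition names are illustrative):

```yaml
Parameters:
  EnvironmentValue:
    Type: String
    AllowedValues: [dev, prod]
Conditions:
  IsProd: !Equals [!Ref EnvironmentValue, prod]
Resources:
  AlarmTopic:
    Type: AWS::SNS::Topic
    Condition: IsProd   # only created when EnvironmentValue is prod
```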

Related

Evaluate AWS CDK Stack output to another Stack in different account

I am creating two Stacks using AWS CDK. I use the first Stack to create an S3 bucket and upload a lambda zip file to the bucket using the BucketDeployment construct, like this.
//FirstStack
const deployments = new BucketDeployment(this, 'LambdaDeployments', {
  destinationBucket: bucket,
  destinationKeyPrefix: '',
  sources: [
    Source.asset(path)
  ],
  retainOnDelete: true,
  extract: false,
  accessControl: BucketAccessControl.PUBLIC_READ,
});
I use the second Stack just to generate a CloudFormation template for my clients. In the second Stack, I want to create a Lambda function with parameters for the S3 bucket name and the key name of the Lambda zip I uploaded in the 1st stack.
//SecondStack
const lambdaS3Bucket = "??"; //TODO
const lambdaS3Key = "??"; //TODO
const bucket = Bucket.fromBucketName(this, "Bucket", lambdaS3Bucket);
const lambda = new Function(this, "LambdaFunction", {
  handler: 'index.handler',
  runtime: Runtime.NODEJS_16_X,
  code: Code.fromBucket(
    bucket,
    lambdaS3Key
  ),
});
How do I reference those parameters automatically in the 2nd stack?
In addition to that, lambdaS3Bucket needs to use the AWS::Region parameter so that my clients can deploy it in any region (I just need to run the first Stack in the regions they require).
How do I do that?
I had a similar use case to this one.
The very simple answer is to hardcode the values. The bucket name is obvious.
The lambdaS3Key you can look up in the synthesized template of the first stack.
A more complex answer is to use pipelines for this. I did this: in the build step of the pipeline I extracted all the lambdaS3Keys and exported them as environment variables, so in the second stack I could reuse them in the code, like:
code: Code.fromBucket(
  bucket,
  process.env.MY_LAMBDA_KEY
),
I see you are aware of this PR, because you are using the extract flag.
Knowing that, you can probably reuse this property for the Lambda key.
The problem of sharing the names between stacks in different accounts remains nevertheless. My suggestion is to use pipelines and export the constants there in the different steps, but a local build script would also do the job.
Do not forget to update the BucketPolicy and KeyPolicy if you use encryption, otherwise the customer account won't have access to the file.
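For the client-facing template itself, one hedged option is plain CloudFormation parameters plus the AWS::Region pseudo parameter. Everything here (logical names, the per-region bucket naming convention) is an assumption for illustration:

```yaml
Parameters:
  LambdaS3Bucket:
    Type: String
    Description: Base name of the bucket holding the Lambda zip
  LambdaS3Key:
    Type: String
Resources:
  LambdaFunction:
    Type: AWS::Lambda::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs16.x
      Role: !GetAtt LambdaRole.Arn  # role resource omitted for brevity
      Code:
        # AWS::Region resolves in the client's deployment region, assuming
        # you run the first stack once per region with a "<base>-<region>" bucket
        S3Bucket: !Sub "${LambdaS3Bucket}-${AWS::Region}"
        S3Key: !Ref LambdaS3Key
```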
You could also read about AWS Service Catalog. It would probably be an easier way to share your CDK products with your customers (the CDK team is planning to support lambda sharing out of the box next).

AWS .NET Core 3.1 Mock Lambda Test Tool - How to set up 2 or more functions for local testing

So I have a very simple aws-lambda-tools-defaults.json in my project:
{
  "profile": "default",
  "region": "us-east-2",
  "configuration": "Release",
  "framework": "netcoreapp3.1",
  "function-runtime": "dotnetcore3.1",
  "function-memory-size": 256,
  "function-timeout": 30,
  "function-handler": "LaCarte.RestaurantAdmin.EventHandlers::LaCarte.RestaurantAdmin.EventHandlers.Function::FunctionHandler"
}
It works, I can test my lambda code locally which is great. But I want to be able to test multiple lambdas, not just one. Does anyone else know how to change the JSON so that I can run multiple lambdas in the mock tool?
Thanks in advance,
Simply remove the function-handler attribute from your aws-lambda-tools-defaults.json file and add a template attribute referencing your serverless.template (the AWS CloudFormation template used to deploy your lambda functions to your AWS cloud environment):
{
...
"template": "serverless.template"
...
}
Then you can test your lambda functions locally, for example with the AWS .NET Mock Lambda Test Tool. You'll see that the Function dropdown list has changed from listing the single lambda function name you specified in function-handler to the list of lambda functions declared in your serverless.template file, and then you can test them all locally! :)
You can find more info in this discussion
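For reference, a hedged example of what the resulting aws-lambda-tools-defaults.json might look like, reusing the values from the question with function-handler replaced by template:

```json
{
  "profile": "default",
  "region": "us-east-2",
  "configuration": "Release",
  "framework": "netcoreapp3.1",
  "function-runtime": "dotnetcore3.1",
  "function-memory-size": 256,
  "function-timeout": 30,
  "template": "serverless.template"
}
```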
Answering after a long time, but it might help someone else. To deploy and test multiple lambdas from Visual Studio, you have to use a serverless.template. Check the AWS SAM documentation.
You can start with this one - https://docs.aws.amazon.com/toolkit-for-visual-studio/latest/user-guide/lambda-build-test-severless-app.html

Cloudformation: The resource you requested does not exist

I have a CloudFormation stack which has a Lambda function that is mapped as a trigger to an SQS queue.
What happened was that I deleted the mapping and created it again manually because I wanted to change the batch size. Now when I want to update the mapping, CloudFormation throws an error with the message The resource you requested does not exist.
The resource mapping code looks like this:
"EventSourceMapping": {
  "Type": "AWS::Lambda::EventSourceMapping",
  "Properties": {
    "BatchSize": 5,
    "Enabled": "true",
    "EventSourceArn": {
      "Fn::GetAtt": ["ProcessorQueue", "Arn"]
    },
    "FunctionName": {
      "Fn::GetAtt": ["ProcessorLambda", "Arn"]
    }
  }
}
I know that deleting the mapping CloudFormation created initially and adding it manually is what's causing the issue. How do I fix this? I cannot push any updates now.
Please help.
What you did is, from my perspective, a mistake. When you use CloudFormation you are not supposed to apply changes manually. You can, and maybe that's fine if you don't care about the stack once it's created. But since you are trying to update the stack, this tells me that you want to keep it and update it over time.
To narrow down your problem, first let's make clear that the manually-created mapping is out of sync with your CloudFormation stack. So, from a CloudFormation perspective, it doesn't matter whether you keep that mapping or not. I wonder what would happen if you kept the manually-created mapping and created a new one from CloudFormation; it may complain, since you would have repeated mappings for the same (lambda, queue) pair. Try this:
Create a change set for your stack where you completely remove the EventSourceMapping resource from your script. This step basically cleans up loose references. Apply the change set.
Then, and this is where I think you may hit some kind of issue, add EventSourceMapping back to your stack.
If you get errors in step 2, like "this mapping already exists", you will have to remove the manually-created mapping from the console and then try step 2 again.
You probably know now that you should not have removed the resource manually. If you change the CF template, you can update it without touching resources that did not change. You can try to replace the resource with the exact same physical name: https://aws.amazon.com/premiumsupport/knowledge-center/failing-stack-updates-deleted/ The other option is to remove the resource from CF, update, and then add it back and update again, as described in the same doc.
While the comments above are valid, I find it interesting that no one mentioned a much simpler option: using SAM commands (sam build / sam deploy). It's understandable that during development and architecture design there may be flaws and situations where manual input in the console is necessary, so here's something I resort to every time I have a similar issue.
Simply comment out the chunk of code that is creating trouble and run sam build / sam deploy on top of it; the CloudFormation stack will recognize that the resource is no longer in the template and will delete it.
Since the resource is no longer in the architecture anyway (it was removed manually earlier), the update will pass that step and the stack will update successfully.
Then simply uncomment, make any necessary changes (if any), and deploy.
Works every time.

Does Deployment Manager have Cloud Functions support (and support for having multiple cloud functions)?

I'm looking at this repo and very confused about what's happening here: https://github.com/GoogleCloudPlatform/deploymentmanager-samples/tree/master/examples/v2/cloud_functions
In other Deployment Manager examples I see the "type" is set to the type of resource being deployed but in this example I see this:
resources:
- name: function
  type: cloud_function.py # why not "type: cloudfunctions"?
  properties:
    # All the files that start with this prefix will be packed in the Cloud Function
    codeLocation: function/
    codeBucket: mybucket
    codeBucketObject: function.zip
    location: us-central1
    timeout: 60s
    runtime: nodejs8
    availableMemoryMb: 256
    entryPoint: handler
"type" is pointing to a python script (cloud_function.py) instead of a resource type. The script is over 100 lines long and does a whole bunch of stuff.
This looks like a hack, like it's just scripting the GCP APIs? The reason I'd ever want to use something like Deployment Manager is to avoid a mess of deployment scripts, but this looks like more spaghetti.
Does Deployment Manager not support Cloud Functions and this is a hacky workaround or is this how its supposed to work? The docs for this example are bad so I don't know what's happening
Also, I want to deploy multiple functions in a single Deployment Manager stack. Will I have to edit the cloud_function.py script, or can I just define multiple resources and have them all point to the same script?
Edit
I'm also confused about what these two imports are for at the top of the cloud_function.yaml:
imports:
# The function code will be defined for the files in function/
- path: function/index.js
- path: function/package.json
Why is it importing the actual code of the function it's deploying?
Deployment Manager simply interacts with the different kinds of Google APIs. This documentation gives you a list of resource types supported by Deployment Manager. I recommend running "gcloud deployment-manager types list | grep function"; you will find that the "cloudfunctions.v1beta2.function" resource type is also supported by DM.
The template is using a gcp-type (that is in beta). The cloud_function.py is a template. If you use a template, you can reuse it for multiple resources; you can see this example. For a better understanding, easier to read and follow, you can check this example of cloud functions through gcp-types.
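To the multiple-functions part of the question: since cloud_function.py is a template, a hedged sketch of a config declaring two resources that both point at it could look like this (all property values are illustrative):

```yaml
resources:
- name: function-one
  type: cloud_function.py
  properties:
    codeLocation: function-one/
    codeBucket: mybucket
    codeBucketObject: function-one.zip
    location: us-central1
    runtime: nodejs8
    entryPoint: handler
- name: function-two
  type: cloud_function.py
  properties:
    codeLocation: function-two/
    codeBucket: mybucket
    codeBucketObject: function-two.zip
    location: us-central1
    runtime: nodejs8
    entryPoint: handler
```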
I want to add to the answer by Aarti S that "gcloud deployment-manager types list | grep function" didn't work for me, but I found how to list all resource types, including resources that are in alpha:
gcloud beta deployment-manager types list --project gcp-types
Or just "gcloud beta deployment-manager types list | grep function" helps.

Can I have terraform keep the old versions of objects?

New to Terraform, so perhaps it's just not supposed to work this way. I want to use aws_s3_bucket_object to upload a package to a bucket as part of an app deploy. I'm going to be changing the package for each deploy and I want to keep the old versions.
resource "aws_s3_bucket_object" "object" {
  bucket = "mybucket-app-versions"
  key    = "version01.zip"
  source = "version01.zip"
}
But after running this for a future deploy I will want to upload version02 and then version03 etc. Terraform replaces the old zip with the new one- expected behavior.
But is there a way to have terraform not destroy the old version? Is this a supported use case here or is this not how I'm supposed to use terraform? I wouldn't want to force this with an ugly hack if terraform doesn't have official support for doing something like what I'm trying to do here.
I could of course just call the S3 api via script, but it would be great to have this defined with the rest of the terraform definition for this app.
When using Terraform for application deployment, the recommended approach is to separate the build step from the deploy step and use Terraform only for the latter.
The responsibility of the build step -- which is implemented using a separate tool, depending on the method of deployment -- is to produce some artifact (an archive, a docker container, a virtual machine image, etc), publish it somewhere, and then pass its location or identifier to Terraform for deployment.
This separation between build and deploy allows for more complex situations, such as rolling back to an older artifact (without rebuilding it) if the new version has problems.
In simple scenarios it is possible to pass the artifact location to Terraform using Input Variables. For example, in your situation where the build process would write a zip file to S3, you might define a variable like this:
variable "archive_name" {
}
This can then be passed to whatever resource needs it using ${var.archive_name} interpolation syntax. To deploy a particular artifact, pass its name on the command line using -var:
$ terraform apply -var="archive_name=version01.zip"
Some organizations prefer to keep a record of the "current" version of each application in some kind of data store, such as HashiCorp Consul, and read it using a data source. This approach can be easier to orchestrate in an automated build pipeline, since it allows this separate data store to be used to indirectly pass the archive name between the build and deploy steps, without needing to pass any unusual arguments to Terraform itself.
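For illustration, a hedged sketch of that Consul approach using the consul_keys data source (the key path and names are assumptions):

```hcl
# Read the current archive name from Consul; the build pipeline writes it there.
data "consul_keys" "app" {
  key {
    name = "archive_name"
    path = "apps/myapp/current_archive"
  }
}

# Reference it wherever the artifact name is needed, e.g.:
#   source = "${data.consul_keys.app.var.archive_name}"
```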
Currently, you tell Terraform to manage one aws_s3_bucket_object and Terraform takes care of its whole lifecycle, meaning Terraform will also replace the file if it sees any changes to it.
What you are maybe looking for is the null_resource. You can use it to run a local-exec provisioner to upload the file you need with a script. That way, the old file won't be deleted, as it is not directly managed by terraform. You'd still be calling the API via a script then, but the whole process of uploading to s3 would still be included in your terraform apply step.
Here's an outline of the null_resource:
resource "null_resource" "upload_to_s3" {
  depends_on = ["<any resource that should already be created before upload>"]

  # triggers is a map; a change in any value makes terraform re-run the provisioner
  triggers = {
    archive = "<a value that changes when terraform should start the upload>"
  }

  provisioner "local-exec" {
    command = "<command to upload local package to s3>"
  }
}