How to create a template from existing resources?

I've heard about the "CloudFormer" tool, which automatically generates a base template from existing resources in the cloud.
https://medium.com/@ridmag/how-to-use-aws-cloudformer-e8d848cfafe1
I can't find this tool in AWS! Perhaps it is an old tool that has been removed?
I've also heard about a non-Amazon product, Terraform (terraform.io). Can Terraform do this? Can it produce a template (in its own format and/or in the CloudFormation format) as well?

CloudFormer is no longer maintained and has been deprecated by AWS. Instead, Former2 can be used; it is open source, developed by an AWS Hero, and used by AWS customers, as explained in the AWS blog post:
How DNAnexus used the open source Former2 project to create infrastructure as code templates for their disaster recovery pipeline

Related

How to use AWS CLI to create a stack from scratch?

The problem
I'm approaching AWS, and my first test project will be a website, but I'm struggling with how to approach the resources and the tools to accomplish this.
AWS documentation is not really beginner-friendly, so to me it feels like being punched in the face at my first boxing session.
First attempt
I've installed both the AWS and SAM CLI tools, so what I would expect is to be able to create an empty stack at first and add resources one by one as the specifications are given/outlined. Instead, what I see is that I need to give the tool a template to create the new stack, which means I need to know how to write it beforehand, and therefore the template specification for each resource type.
Second attempt
This led me to create the stack and the related resources from the online console to get the final stack template. But then I need to test every new or updated resource locally, so I have to copy the template from the online console to my machine and run the CLI tools against it, which is obviously not the desired development flow.
What I expected
Coming from standard/classical web development, I would expect to be able to create the project locally, test the related resources locally, version it, and delegate the deployment to the pipeline.
So what?
All this made me understand that I'm "probably" missing something about how to use the AWS CLI tools and how development for an AWS-hosted application is meant to be done.
I'm not looking for a guide on specific resource types like every single tutorial I've found online, but something at a higher level on how to handle project development on AWS, best practices and things like that; I can then dig deeper into any resource later when needed.
AWS's Cloud Development Kit ticks the boxes on your specific criteria.
Caveat: the CDK has a learning curve in line with its power and flexibility. There are much easier ways to deploy a web app on AWS, like the higher-level AWS Amplify framework, with abstractions tailored to front-end devs who want to minimise the mental energy spent on the underlying infrastructure.
Each of the squillion AWS and 3rd Party deploy tools is great for somebody. Nevertheless, looking at your explicit requirements in "What I expected", we can get close to the CDK as an objective answer:
Coming from a standard/classical web development
So you know JS/Python. With the CDK, you code infrastructure as functions and classes rather than 500 lines of YAML as with SAM. The CDK's reference implementation is in TypeScript, but JS and Python are also supported. There are step-by-step AWS online workshops for these and the other supported languages.
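To give a feel for what this looks like, here is a minimal sketch of a CDK v2 stack in Python (the stack and bucket names are arbitrary examples, not from the original answer):

# Minimal CDK v2 app in Python; assumes aws-cdk-lib and constructs are installed.
from aws_cdk import App, Stack, RemovalPolicy
from aws_cdk import aws_s3 as s3
from constructs import Construct

class WebsiteStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # Infrastructure is declared with ordinary classes instead of raw YAML.
        s3.Bucket(
            self,
            "SiteBucket",
            versioned=True,
            removal_policy=RemovalPolicy.DESTROY,
        )

app = App()
WebsiteStack(app, "WebsiteStack")
app.synth()

Running cdk synth on this app produces the equivalent CloudFormation template, and cdk deploy sends it to your account.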
create the project locally
Most of your work will be done locally in your language of choice, with a cdk deploy CLI command to bundle the deployment artefacts and send them up to the cloud.
test the related resources locally
The CDK has built-in testing and assertion support.
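For example, here is a small sketch of a CDK assertion test with pytest, using the aws_cdk.assertions module (the throwaway stack and the asserted property values are illustrative):

# Sketch of CDK's built-in assertion support; assumes aws-cdk-lib v2 and pytest.
from aws_cdk import App, Stack
from aws_cdk import aws_s3 as s3
from aws_cdk.assertions import Template

def test_bucket_is_versioned():
    app = App()
    stack = Stack(app, "TestStack")
    s3.Bucket(stack, "SiteBucket", versioned=True)
    # Synthesize the stack to CloudFormation and assert on its contents.
    template = Template.from_stack(stack)
    template.has_resource_properties(
        "AWS::S3::Bucket",
        {"VersioningConfiguration": {"Status": "Enabled"}},
    )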
version it
"Deterministic deploy" is a CDK design goal. Commit your code and the generated deployment artefacts so you have change control over your infrastructure.
delegate the deployment to the pipeline
The CDK has good pipeline support: i.e. a push to the remote main branch can kick off a deploy.
AWS SAM is actually a good option if you are just trying to get your feet wet with AWS. SAM is an open-source wrapper around the AWS CLI that lets you create AWS resources like Lambda in, say, ~10 lines of code vs ~100 lines if you were to use the AWS CLI directly. Yes, you'll need to learn SAM-specific things like the SAM template and the SAM CLI, but it is pretty straightforward using this doc.
Once you get the hang of it, it becomes easier to start looking under the hood at what/how SAM is doing and get into the weeds with the AWS CLI if you want, which will then allow you to build out custom solutions (using the AWS CLI) for complex use cases that SAM may not support. Caveat: SAM is still pretty new and has open issues that could be a blocker for advanced features/complex use cases.

How to update existing Azure Managed Applications with a new package version?

I created a new package for my Azure Managed Application. How do I get existing instances of the Managed Application to upgrade to that package version (mainTemplate.json + viewDefinition.json)?
We were able to talk to an MSFT rep about this today. The information that we got is that any updates to a Managed Application and its resources must be pushed out manually by the publisher, via their mechanism of choice (Azure CLI, ARM templates, Azure Portal, Terraform, etc.), using the access that the publisher has to the resource group created for the Managed Application.
There is no way to just push up the new ARM template and have that roll out to deployed instances. He said you can re-publish the offer (if publishing via the Commercial Marketplace) with a new template if you want to make the new template available to be used by freshly-created instances, but that this will never affect instances of the Managed Application that already exist.
The rep agreed that the docs that state the following are misleading to how the process actually works:
You can make sure that all customers are using approved versions. Customers don't have to develop application-specific domain knowledge to manage these applications. Customers automatically acquire application updates without the need to worry about troubleshooting and diagnosing issues with the applications.
This "automatic" versioning process is one that the publisher is responsible for implementing on their own. There is actually no concept of versioning built in to Managed Applications.

Can we deploy a whole project in Google Cloud using only code?

I have a project in Google Cloud using the following resources:
BigQuery, Cloud Functions (Python), Cloud Storage, Cloud Scheduler
Is it possible to save the whole project as code and share it, so someone else can just take that code and deploy it in their own tenant?
The reason I am asking: I have published all the code and SQL queries on GitHub, but some users find it very hard to reproduce, as they are not necessarily very familiar with Google Cloud. In an ideal situation, they would just get a file and click deploy.
When you create a solution for GCP, you will commonly find that it consists of code, data and configuration. The code and data you can save in a source repository like GitHub ... but what of the configuration? What if your "solution" expects to have BQ datasets and tables, GCS buckets, or Scheduler jobs defined? This is where you can create "Infrastructure as Code" definitions. Google supports its own IaC technology called Deployment Manager, but you can also use the popular Terraform, as it too has a GCP provider. The definitions for these IaC coordinators are typically text/YAML files that you can also package with your code. Sprinkle in some Make, Chef or Puppet for building apps and pushing code to deployment environments and you have a "build it from source" story. Study also the concepts of CI/CD and you will commonly find that the steps you perform for CI/CD overlap with the steps for trivial deployment.
There are also projects such as terraformer that can do some kind of a job of reverse-engineering an existing configuration into an IaC description that, when run elsewhere, will recreate the configuration.

Mapping git hash to deployed lambda

We would like to be able to understand the version of our software that is currently deployed to a particular AWS lambda. What are the best practices for tracking a code commit hash to an AWS lambda? We've looked at AWS tagging and we've also looked at AWS Lambda Aliasing but neither approach seems convenient or user-friendly. Are there other services that AWS provides?
Thanks
Without context and a better understanding of your use case around the commit hash, it's difficult to give a directly useful answer, and as other answers have shown, there are many ways you could accomplish this. That being said, the commit hash of particular code is ultimately metadata, and AWS has a solution for dealing with resource metadata: tags.
Best practice is to tag your resources with metadata. Almost all, if not all, AWS resources (including Lambda) support tags. As stated in the AWS documentation “tagging allows you to quickly search, filter, and manage resources” and subject to AWS limits your tags can be pretty much any key-value pair you like, including “commit: hash”.
The tagging strategy here would be to assign the commit hash to a tag, “commit” with the value “e63abe27”. You can tag resources manually through the console or you can apply tags as part of your build process.
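For example, a minimal sketch of applying such a tag from a build script with boto3 (the function ARN, region, and the way the hash is obtained are illustrative, not prescribed by AWS):

# Sketch: tag a Lambda with the current commit hash as part of a build step.
import subprocess
import boto3

lambda_client = boto3.client("lambda", region_name="us-east-1")
commit_hash = subprocess.check_output(
    ["git", "rev-parse", "--short", "HEAD"], text=True
).strip()

lambda_client.tag_resource(
    Resource="arn:aws:lambda:us-east-1:123456789012:function:myFunction",
    Tags={"commit": commit_hash},
)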
Once tagged, at a high level, you would then be able to identify which commit is being used by listing the tags for the lambda in question. The CLI command would be something like:
aws lambda list-tags --resource arn:aws:lambda:us-east-1:123456789012:function:myFunction
You can learn more about tags and tagging strategy by reviewing the AWS docs here and you can download the Tagging Best Practice whitepaper here.
One alternative could be to generate a file with the Git SHA as part of your build system that is packaged together with the other files in the build artifact. The following script generates a gitSha.json file in the ${outputDir}:
#!/usr/bin/env bash
gitSha=$(git rev-parse --short HEAD)
printf "{\"gitSha\": \"%s\"}" "${gitSha}" > "${outputDir}/gitSha.json"
Consequently, the gitSha.json may look like:
{"gitSha": "5c805638"}
This file can then be accessed by downloading the package. Alternatively, you can create a function that inspects the file at runtime and returns its value to the caller, writes it to a log, or similar, depending on your use case.
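As an illustration of the runtime-inspection option, here is a minimal sketch of a Lambda handler that reads the bundled gitSha.json (assuming the file is packaged at the root of the deployment artifact):

# Sketch: return the bundled Git SHA to the caller at runtime.
import json

def handler(event, context):
    with open("gitSha.json") as f:
        git_sha = json.load(f)
    # e.g. {"gitSha": "5c805638"}
    return git_sha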
This script was implemented using bash and git rev-parse, but you can use any scripting language in combination with a Git library that you are comfortable with.
The best way to version a Lambda is to use the Create Version option and add these versions to an Alias.
We use this mechanism extensively to map a single AWS Lambda to different API Gateway endpoints. We use the environment variables provided by AWS Lambda to move all configuration outside the Lambda function. In each version of the Lambda, we change these environment variables as required and create a new version. This version can be mapped to an alias, which helps to keep the API Gateway or the integration points intact (without any change to the integration).
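As a hedged sketch (not part of the original answer), publishing a version and repointing an alias can also be scripted with boto3; the function and alias names below are placeholders:

# Publish the current Lambda code/configuration as an immutable version and
# point the "prod" alias at it, creating the alias if it doesn't exist yet.
import boto3

client = boto3.client("lambda")

version = client.publish_version(FunctionName="myFunction")["Version"]

try:
    client.update_alias(FunctionName="myFunction", Name="prod", FunctionVersion=version)
except client.exceptions.ResourceNotFoundException:
    client.create_alias(FunctionName="myFunction", Name="prod", FunctionVersion=version)

Integration points such as API Gateway keep targeting the alias ARN, so they do not need to change when a new version is published.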
If you are using the Serverless Framework, try this in serverless.yml:
provider:
  versionFunctions: true
and
functions:
  MyFunction:
    tags:
You can also try Serverless plugins, one of them being:
serverless plugin install --name serverless-version-tracker
It uses git tags as versions; you then need to manage the git tags yourself.

Is there a way to unit test AWS CloudFormation templates?

When we say that CloudFormation is 'Infrastructure as Code', the next question that immediately comes to mind is how this code can be tested.
Can we do some sort of basic unit test of this code?
And I am discounting CloudFormation validation, because that is just a way of doing syntactic validation, which I can do with any other free JSON/YAML validator.
I am more inclined towards some sort of functional validation, possibly testing that I have defined all the variables that are used as references.
Possibly testing that whatever properties I am using are actually supported for that component.
I don't expect it to test whether the permissions are correct or whether I have exhausted my limits, but at least something beyond the basic JSON/YAML syntax validation.
Here's a breakdown of how several methods of testing software can be applied to CloudFormation templates/stacks:
Linting
For linting (checking CloudFormation-template code for syntax/grammar correctness), you can use the ValidateTemplate API to check basic template structure, and the CreateChangeSet API to verify your Resource properties in more detail.
Note that ValidateTemplate performs a much more thorough check than a simple JSON/YAML syntax checker- it validates correct Template Anatomy, correct syntax/usage of Intrinsic Functions, and correct resolution of all Ref values.
However, ValidateTemplate doesn't verify your template's Resources against specific resource-property schemas. For checking the structure of your template's Parameters, Resources and Properties against AWS resource types, CreateChangeSet should return an error if any parameters or resource properties are not well-formed.
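As a minimal sketch of both linting calls via boto3 (the template path and the stack/change-set names are placeholders):

# Lint a template with ValidateTemplate, then let CreateChangeSet catch
# malformed resource properties without actually executing the change set.
import boto3

cfn = boto3.client("cloudformation")
template_body = open("template.yaml").read()

# Basic structural check: template anatomy, intrinsic functions, Ref resolution.
cfn.validate_template(TemplateBody=template_body)

# Deeper check: CloudFormation rejects malformed resource properties here.
cfn.create_change_set(
    StackName="lint-check-stack",
    ChangeSetName="lint-check",
    ChangeSetType="CREATE",
    TemplateBody=template_body,
)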
Unit testing
Performing unit testing first requires an answer to the question: what is the smallest self-contained unit of functionality that can/should be tested? For CloudFormation, I believe that the smallest testable unit is the Resource.
The official AWS Resource Types are supported/maintained by AWS (and are proprietary implementations anyway) so don't require any additional unit tests written by end-user developers.
However, your own Custom Resources could and should be unit-tested. This can be done using a suitable testing framework in the implementation's own language (e.g., for Lambda-backed Custom Resources, perhaps a library like lambda-tester would be a good starting point).
Integration testing
This is the most important and relevant type of testing for CloudFormation stacks (which mostly serve to tie various Resources together into an integrated application), and also the type that could use more refinement and best-practice development. Here are some initial ideas on how to integration-test CloudFormation code by actually creating/updating full stacks containing real AWS resources:
Using a scripting language, perform a CloudFormation stack creation using the language's AWS SDK. Design the template to return Stack Outputs reflecting the behavior that you want to test. After the stack is created by the scripting language, compare the stack outputs against expected values (and then optionally delete the stack afterwards in a cleanup process); a sketch of this approach follows below.
Use AWS::CloudFormation::WaitCondition resources to represent successful tests/assertions, so that a successful stack creation indicates a successful integration-test run, and a failed stack creation indicates a failed integration-test run.
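To make the first of these ideas concrete, here is a hedged sketch using boto3 (the stack name, template path, output key and expected value are all placeholders, not from the original answer):

# Integration test: create a real stack, compare its Outputs, then clean up.
import boto3

cfn = boto3.client("cloudformation")
stack_name = "integration-test-stack"

cfn.create_stack(
    StackName=stack_name,
    TemplateBody=open("template.yaml").read(),
    Capabilities=["CAPABILITY_IAM"],
)
cfn.get_waiter("stack_create_complete").wait(StackName=stack_name)

# Compare the stack Outputs against expected values.
stack = cfn.describe_stacks(StackName=stack_name)["Stacks"][0]
outputs = {o["OutputKey"]: o["OutputValue"] for o in stack.get("Outputs", [])}
assert outputs["WebsiteStatusCode"] == "200"  # hypothetical output key and value

# Optional cleanup.
cfn.delete_stack(StackName=stack_name)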
Beyond CloudFormation, one interesting tool worth mentioning in the space of testing infrastructure-as-code is kitchen-terraform, a set of plugins for Test Kitchen which allow you to write fully-automated integration test suites for Terraform modules. A similar integration-testing harness could eventually be built for CloudFormation, but doesn't exist yet.
The tool "cfn-nag" parses a collection of CloudFormation templates and applies rules to find code patterns that could lead to insecure infrastructure. The results of the tool include the logical resource identifiers of violating resources and an explanation of which rule has been violated.
Further Reading: https://stelligent.com/2016/04/07/finding-security-problems-early-in-the-development-process-of-a-cloudformation-template-with-cfn-nag/
While there are quite a number of particular rules the tool will attempt to match, the rough categories are:
IAM and resource policies (S3 Bucket, SQS, etc.): matches policies that are overly permissive in some way (e.g. wildcards in actions or principals)
Security Group ingress and egress rules: matches rules that are overly liberal (e.g. an ingress rule open to 0.0.0.0/0, or an open port range of 1-65535)
Access Logs: looks for access logs that are not enabled for applicable resources (e.g. Elastic Load Balancers and CloudFront Distributions)
Encryption: (server-side) encryption that is not enabled or enforced for applicable resources (e.g. EBS volumes or PutObject calls on an S3 bucket)
A new tool is on the market now: Test all the CloudFormation things! (with TaskCat)
What is taskcat?
taskcat is a tool that tests AWS CloudFormation templates. It deploys your AWS CloudFormation template in multiple AWS Regions and generates a report with a pass/fail grade for each region. You can specify the regions and number of Availability Zones you want to include in the test, and pass in parameter values from your AWS CloudFormation template. taskcat is implemented as a Python class that you import, instantiate, and run.
Usage
Follow this document: https://aws.amazon.com/blogs/infrastructure-and-automation/up-your-aws-cloudformation-testing-game-using-taskcat/
Notes
taskcat can't read the AWS_PROFILE environment variable. You need to define the profile in the general section if it is not the default profile:
general:
  auth:
    default: dev-account
Ref: https://github.com/aws-quickstart/taskcat/issues/434
The kind of testing you describe (at least beyond JSON parsing) can be partially achieved by parsing CloudFormation templates with the programmatic libraries that are used to generate/read templates. They do not test the template explicitly, but they can throw an exception or error when you use a property that is not defined for a resource.
Check out go-cloudformation: https://github.com/crewjam/go-cloudformation
Other than that, you need to run the stack to see errors. I believe that testing IaC is one of the main challenges in infrastructure automation: not only unit testing, but also integration testing and continuous validation.
Speaking specifically of CloudFormation, AWS recommends using taskcat, a tool that deploys the infrastructure/template in multiple AWS Regions and performs template validation as part of the process.
TaskCat Github repository: https://github.com/aws-quickstart/taskcat
In addition, in Visual Studio Code you can use the Cloud Conformity Template Scanner extension; Cloud Conformity has since been acquired by Trend Micro, so the tool's name has changed from Cloud Conformity to Trend Micro Template Scanner.
It basically validates the template and architectural code against the model and use cases of the AWS Well-Architected Framework.
About template scanner: https://aws.amazon.com/blogs/apn/using-shift-left-to-find-vulnerabilities-before-deployment-with-trend-micro-template-scanner/
The VS Code Extension Cloud Conformity: https://marketplace.visualstudio.com/items?itemName=raphaelbottino.cc-template-scanner
In addition, there is a VS Code linter extension that you can use as a pre-commit validation step: CloudFormation Linter.
CloudFormation Linter: https://marketplace.visualstudio.com/items?itemName=kddejong.vscode-cfn-lint
If you want to go further and implement the "Sec" of DevSecOps in an infrastructure-as-code pipeline, there is also Scout Suite. It has its own validator for the cloud that can be run in a build container; it audits the cloud to check whether any resources fall outside a security standard.
Scout Suite Github repository: https://github.com/nccgroup/ScoutSuite
If you want to go deeper into validation and resource testing/compliance on AWS, I recommend studying 'Compliance as Code' using the AWS Config service.
Link to a presentation of this service: https://www.youtube.com/watch?v=fBewaclMo2s
I couldn't find a real unit testing solution for Cloudformation templates so I created one. https://github.com/DontShaveTheYak/cloud-radar
Cloud-Radar lets you take a template, pass in the parameters you want to set, and then render that template to its final form, meaning all conditionals and intrinsic functions are resolved.
This allows you to take a template like this and write the following tests:
from pathlib import Path
from unittest.mock import mock_open, patch

import pytest

from cloud_radar.cf.unit import Template

@pytest.fixture
def template():
    template_path = Path(__file__).parent / "../../templates/log_bucket/log_bucket.yaml"
    return Template.from_yaml(template_path.resolve(), {})

def test_log_defaults(template):
    result = template.render({"BucketPrefix": "testing"})
    assert "LogsBucket" in result["Resources"]
    bucket_name = result["Resources"]["LogsBucket"]["Properties"]["BucketName"]
    assert "us-east-1" in bucket_name

def test_log_retain(template):
    result = template.render(
        {"BucketPrefix": "testing", "KeepBucket": "TRUE"}, region="us-west-2"
    )
    assert "LogsBucket" not in result["Resources"]
    bucket = result["Resources"]["RetainLogsBucket"]
    assert "DeletionPolicy" in bucket
    assert bucket["DeletionPolicy"] == "Retain"
    bucket_name = bucket["Properties"]["BucketName"]
    assert "us-west-2" in bucket_name
Edit: If you are interested in testing CloudFormation templates, then check out my blog series Hypermodern Cloudformation.
There's a bash library, xsh-lib/aws, and one of its tools can deploy AWS CloudFormation templates from the CLI.
The tool can be found at: https://github.com/xsh-lib/aws/blob/master/functions/cfn/deploy.sh
It handles template validation and uploading, stack naming, stack policies, updates, cleanup, timeouts, rollbacks, and status checking.
Besides the above, it also handles nested templates and non-inline Lambdas. That saves the work of uploading nested templates and non-inline Lambda functions, which could drive people nuts when testing manually.
It supports a customized config file, which makes deployment easier. A real-world config file is here.
A simple example call of the tool from your CLI looks like this:
xsh aws/cfn/deploy -t template.json -s mystack -o OPTIONS=Param1Key=Param1Value
xsh-lib/aws is a library of xsh; in order to use it, you will have to install xsh first.