How to set custom environment variables using Google Cloud Deployment Manager?

Google Cloud Deployment Manager provides deployment-specific environment variables, such as the project ID and deployment name, that can be referenced in templates:
def GenerateConfig(context):
    resources = []
    resources.append({
        'name': 'vm-' + context.env['deployment'],
        'type': 'compute.v1.instance',
        'properties': {
            'serviceAccounts': [{
                'email': context.env['project_number'] + '-compute@developer.gserviceaccount.com',
                'scopes': [...]
            }]
            # ... other instance properties elided in the original example ...
        }
    })
    return {'resources': resources}
Reference: https://cloud.google.com/deployment-manager/docs/configuration/templates/use-environment-variables
I can't find any example of setting a custom environment variable that can be used by my templates. If there is no support for this functionality, any hack on how to achieve it would be very helpful.

Google Cloud Deployment Manager, similar to Terraform, is an IaC service that helps us provision, delete, and modify infrastructure components. In general, these variables are fixed because they need to be understood by the cloud provider, and in some cases, such as machine type, the input is also selective: you can't enter an arbitrary value, you have to select from the list of available options. Can you provide more details on why you require this feature and what you are trying to achieve? It will help us better understand your problem or use case.
Update:
Dinesh, you can make use of template properties for your secrets instead of hard-coding them into your code; follow the Best practices for using Deployment Manager document for more info. Defining template properties will help you understand how to use template properties in your code.
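For illustration, here is a minimal sketch of passing a custom value into a template through template properties; the property name customEnv and the metadata key are hypothetical, not from the official docs:
# Hypothetical template: reads a custom value via context.properties and
# surfaces it to the VM as instance metadata. 'customEnv' is an
# illustrative property name, declared in the template's schema/config.
def GenerateConfig(context):
    resources = [{
        'name': 'vm-' + context.env['deployment'],
        'type': 'compute.v1.instance',
        'properties': {
            # ... required instance properties (machineType, disks,
            # networkInterfaces) omitted for brevity ...
            'metadata': {
                'items': [{
                    'key': 'custom-env',
                    'value': context.properties['customEnv'],
                }]
            }
        }
    }]
    return {'resources': resources}
The value itself would then be set under the template's properties section in the deployment configuration, so it behaves like a user-defined variable available to the template.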

Related

Reading AWS AppConfig from .NET 6 Lambda Function

I've googled around but with no luck... I need to read some configuration values stored inside an AWS Systems Manager -> AppConfig configuration (stored as text, not as a flag), but I've not found a C# example...
I've also tried, on the AWS Console, to add the layer as specified here, but with no success.
For now I've used Secrets Manager, but it's not the correct place to store configuration information... can someone help me?
Thanks
Since your question is about C# example for AppConfig, you can take a look on https://github.com/ScottHand/AppConfigNETCoreSample/tree/5f5db8d375e4df92dd7dc1b8a16f42bfb042e645.
There is an interesting issue with the .NET client, though: GetConfiguration (https://docs.aws.amazon.com/appconfig/2019-10-09/APIReference/API_GetConfiguration.html) is deprecated in favor of StartConfigurationSession and GetLatestConfiguration, but the .NET client does not yet have support for these methods (https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/appconfig/AmazonAppConfigAsyncClient.html).
The recommended approach, though, is to go with the Lambda extension, so you can try opening another question about your problem setting up the extension.
Alternatively you can use SSM Parameter Store which also might be suitable for your Lambda use case.
The Amazon.Extensions.Configuration.SystemsManager package might be helpful with what you are trying to achieve. You need to store your configuration as JSON.
This is how it can be implemented using the .NET Core Configuration mechanism.
builder.Configuration.SetBasePath(Environment.CurrentDirectory)
    .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
    .AddSystemsManager($"/{builder.Configuration["AwsAppConfig:ApplicationId"]}/", TimeSpan.FromMinutes(5))
    .AddAppConfigUsingLambdaExtension(builder.Configuration["AwsAppConfig:ApplicationId"], builder.Configuration["AwsAppConfig:EnvironmentId"], builder.Configuration["AwsAppConfig:ConfigurationProfileId"])
    .Build();
For more information, you can check out the documentation here.

Update GCP asset labels

What is the most efficient way to update all assets labels per project?
I can list all project resources and their labels with gcloud asset search-all-resources --project=SomeProject. The command also returns the labels for those assets.
Is there something like gcloud asset update-labels?
I'm unfamiliar with the service, but APIs Explorer (Google's definitive service documentation) shows a single list method.
I suspect (!?) that you will need to iterate over all your resource types and update instances of them using whatever update (PATCH) method (there may not be one) permits label changes for that resource type.
This seems like a reasonable request, and you may wish to submit a feature request using Google's issue tracker.
gcloud does not seem to have an update-labels command.
You could try the Cloud Resource Manager API; for example, call the REST or Python API: https://cloud.google.com/resource-manager/docs/creating-managing-labels#update-labels
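As a rough sketch of that approach, here is how project-level labels can be updated with the Python client for the Cloud Resource Manager v1 API (the project ID and label values are placeholders); individual resource types would each need their own PATCH call, as noted above:
# Sketch: read the project's current labels, merge in a change, and write
# them back. Requires: pip install google-api-python-client google-auth
from googleapiclient import discovery

crm = discovery.build('cloudresourcemanager', 'v1')

project_id = 'some-project'        # placeholder project ID
project = crm.projects().get(projectId=project_id).execute()

labels = project.get('labels', {})
labels['environment'] = 'staging'  # example label change

project['labels'] = labels
crm.projects().update(projectId=project_id, body=project).execute()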

Mapping git hash to deployed lambda

We would like to be able to understand the version of our software that is currently deployed to a particular AWS lambda. What are the best practices for tracking a code commit hash to an AWS lambda? We've looked at AWS tagging and we've also looked at AWS Lambda Aliasing but neither approach seems convenient or user-friendly. Are there other services that AWS provides?
Thanks
Without context and a better understanding of your use case around the commit hash, it's difficult to give a directly useful answer, and as other answers have shown, there are many ways you could accomplish this. That being said, the commit hash of particular code is ultimately metadata, and AWS has a solution for dealing with resource metadata: tags.
Best practice is to tag your resources with metadata. Almost all, if not all, AWS resources (including Lambda) support tags. As stated in the AWS documentation, “tagging allows you to quickly search, filter, and manage resources”, and subject to AWS limits your tags can be pretty much any key-value pair you like, including a commit hash.
The tagging strategy here would be to assign the commit hash to a tag, e.g. “commit” with the value “e63abe27”. You can tag resources manually through the console, or you can apply tags as part of your build process.
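A hedged sketch of the build-process route with boto3 (the function ARN below is a placeholder):
# Sketch: tag a Lambda function with the current commit hash during a build.
import subprocess

import boto3

# Placeholder ARN; substitute your function's ARN.
FUNCTION_ARN = "arn:aws:lambda:us-east-1:123456789012:function:myFunction"

commit = subprocess.check_output(
    ["git", "rev-parse", "--short", "HEAD"], text=True
).strip()

boto3.client("lambda").tag_resource(
    Resource=FUNCTION_ARN,
    Tags={"commit": commit},
)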
Once tagged, at a high level, you would then be able to identify which commit is being used by listing the tags for the lambda in question. The CLI command would be something like:
aws lambda list-tags --resource arn:aws:lambda:us-east-1:123456789012:function:myFunction
You can learn more about tags and tagging strategy by reviewing the AWS docs here and you can download the Tagging Best Practice whitepaper here.
One alternative could be to generate a file with the Git SHA as part of your build system that is packaged together with the other files in the build artifact. The following script generates a gitSha.json file in the ${outputDir}:
#!/usr/bin/env bash
gitSha=$(git rev-parse --short HEAD)
printf "{\"gitSha\": \"%s\"}" "${gitSha}" > "${outputDir}/gitSha.json"
Consequently, the gitSha.json may look like:
{"gitSha": "5c805638"}
This file can then be accessed by downloading the package. Alternatively, you can create a function that inspects the file at runtime and returns its value to the caller, writes it to a log, or similar, depending on your use case.
This script was implemented using bash and git rev-parse, but you can use any scripting language in combination with a Git library that you are comfortable with.
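For the runtime-inspection variant, a minimal Python sketch of a handler that returns the bundled hash (assuming gitSha.json is packaged at the root of the deployment artifact):
# Sketch: Lambda handler that reads the build-generated gitSha.json and
# returns the deployed commit hash to the caller.
import json

def handler(event, context):
    with open("gitSha.json") as f:
        return json.load(f)  # e.g. {"gitSha": "5c805638"}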
The best way to version a Lambda is to use the Create Version option and add these versions to an Alias.
We use this mechanism extensively for mapping a single AWS Lambda into different API Gateway endpoints. We use the environment variables provided by AWS Lambda to move all configuration outside the Lambda function. In each version of the Lambda, we change these environment variables as required and create a new version. This version can be mapped to an alias, which helps to keep the API Gateway or the integration points intact (without any change for the integration).
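A rough boto3 sketch of that flow (the function name myFunction, the alias prod, and the environment variable are assumptions):
# Sketch: update env vars, publish an immutable version, re-point an alias.
import boto3

lam = boto3.client("lambda")

# Move configuration into environment variables on $LATEST.
lam.update_function_configuration(
    FunctionName="myFunction",
    Environment={"Variables": {"GIT_SHA": "5c805638"}},
)
lam.get_waiter("function_updated").wait(FunctionName="myFunction")

# Freeze code + configuration as a new version.
version = lam.publish_version(FunctionName="myFunction")["Version"]

# Re-point the stable alias so integration points stay unchanged.
lam.update_alias(FunctionName="myFunction", Name="prod", FunctionVersion=version)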
If using the Serverless Framework, try this in serverless.yml:
provider:
  versionFunctions: true
and
functions:
  MyFunction:
    tags:
You can also try serverless plugins; one of them:
serverless plugin install --name serverless-version-tracker
It uses git tags as versions; you need to manage git tags then.

How do I know what key value pairs are available for deployment manager?

For example, when I try to figure out what properties I can put into Deployment Manager for creating a BigQuery table, the best place I could find for parameters and required fields was the REST API docs.
Is there a good place, within the gcloud command or online docs, that is specific to Deployment Manager YAMLs? I would like to be able to reference required and optional fields for creating GCP resources. Currently it's very difficult to figure out.
From the documentation at: https://cloud.google.com/deployment-manager/docs/configuration/supported-resource-types
You can get a list of the supported resource types by running:
gcloud deployment-manager types list
That said, the YAML reference from the documentation on that page looks pretty complete.
Edit: Refer to this github link for a list of deployment manager examples.
If anything you need is not described in the documentation or the example schemas, there is a brute-force workaround.
You can make an API call with the developer console open (F12) and have a look at the network activity, where your call will be described with all used and available properties.
It will not provide any additional information beyond the parameter names themselves, so you will have to follow the rules covering similar parameters.

Is there a way to unit test AWS Cloudformation template

When we say that CloudFormation is 'Infrastructure as Code', the next question that immediately comes to mind is: how can this code be tested?
Can we do some sort of basic unit test of this code?
And I am discounting the CloudFormation validation, because that is just a way of doing syntactic validation, which I can do with any other free JSON/YAML validator.
I am more inclined towards some sort of functional validation, possibly testing that I have defined all the variables that are used as references.
Possibly testing that whatever properties I am using are actually supported ones for that component.
I don't expect it to test whether the permissions are correct or whether I have exhausted my limits. But at least something beyond the basic JSON/YAML syntax validation.
Here's a breakdown of how several methods of testing software can be applied to CloudFormation templates/stacks:
Linting
For linting (checking CloudFormation-template code for syntax/grammar correctness), you can use the ValidateTemplate API to check basic template structure, and the CreateChangeSet API to verify your Resource properties in more detail.
Note that ValidateTemplate performs a much more thorough check than a simple JSON/YAML syntax checker: it validates correct Template Anatomy, correct syntax/usage of Intrinsic Functions, and correct resolution of all Ref values.
ValidateTemplate checks basic CloudFormation syntax, but doesn't verify your template's Resources against specific property schemas. For checking the structure of your template's Parameters, Resources and Properties against AWS Resource types, CreateChangeSet should return an error if any parameters or resource properties are not well-formed.
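A quick hedged sketch of both checks with boto3 (the template path and stack/change-set names are placeholders):
# Sketch: lint via ValidateTemplate, then deeper checks via CreateChangeSet.
import boto3

cfn = boto3.client("cloudformation")

with open("template.yaml") as f:
    body = f.read()

# Basic structure, intrinsic-function, and Ref resolution checks.
cfn.validate_template(TemplateBody=body)

# Deeper property-level validation without deploying anything yet.
cfn.create_change_set(
    StackName="lint-check-stack",
    ChangeSetName="lint-check",
    TemplateBody=body,
    ChangeSetType="CREATE",
)
cfn.get_waiter("change_set_create_complete").wait(
    StackName="lint-check-stack", ChangeSetName="lint-check"
)

# Clean up: remove the change set and the placeholder REVIEW_IN_PROGRESS stack.
cfn.delete_change_set(StackName="lint-check-stack", ChangeSetName="lint-check")
cfn.delete_stack(StackName="lint-check-stack")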
Unit testing
Performing unit testing first requires an answer to the question: what is the smallest self-contained unit of functionality that can/should be tested? For CloudFormation, I believe that the smallest testable unit is the Resource.
The official AWS Resource Types are supported/maintained by AWS (and are proprietary implementations anyway) so don't require any additional unit tests written by end-user developers.
However, your own Custom Resources could and should be unit-tested. This can be done using a suitable testing framework in the implementation's own language (e.g., for Lambda-backed Custom Resources, perhaps a library like lambda-tester would be a good starting point).
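For example, a minimal self-contained sketch; the handler below is a stand-in for real custom-resource logic, not an actual implementation:
# Sketch: unit-testing the core logic of a Lambda-backed custom resource.
import unittest

def create_handler(event):
    # Stand-in Create logic: derive a physical ID from the input properties.
    props = event["ResourceProperties"]
    return {"PhysicalResourceId": props["BucketPrefix"] + "-resource"}

class TestCustomResource(unittest.TestCase):
    def test_create_builds_physical_id(self):
        event = {
            "RequestType": "Create",
            "ResourceProperties": {"BucketPrefix": "testing"},
        }
        result = create_handler(event)
        self.assertEqual(result["PhysicalResourceId"], "testing-resource")

if __name__ == "__main__":
    unittest.main()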
Integration testing
This is the most important and relevant type of testing for CloudFormation stacks (which mostly serve to tie various Resources together into an integrated application), and also the type that could use more refinement and best-practice development. Here are some initial ideas on how to integration-test CloudFormation code by actually creating/updating full stacks containing real AWS resources:
Using a scripting language, perform a CloudFormation stack creation using the language's AWS SDK. Design the template to return Stack Outputs reflecting behavior that you want to test. After the stack is created by the scripting language, compare the stack outputs against expected values, and then optionally delete the stack afterwards in a cleanup process (see the sketch after these ideas).
Use AWS::CloudFormation::WaitCondition resources to represent successful tests/assertions, so that a successful stack creation indicates a successful integration-test run, and a failed stack creation indicates a failed integration-test run.
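A minimal sketch of the first idea with boto3; the stack name, template path, and output assertion are all assumptions:
# Sketch: create a stack, assert on its Outputs, then clean up.
import boto3

cfn = boto3.client("cloudformation")
stack_name = "integration-test-stack"  # placeholder name

with open("template.yaml") as f:  # placeholder template path
    body = f.read()

cfn.create_stack(StackName=stack_name, TemplateBody=body,
                 Capabilities=["CAPABILITY_IAM"])
cfn.get_waiter("stack_create_complete").wait(StackName=stack_name)

outputs = cfn.describe_stacks(StackName=stack_name)["Stacks"][0].get("Outputs", [])
actual = {o["OutputKey"]: o["OutputValue"] for o in outputs}
assert actual.get("BucketName", "").startswith("testing")  # example assertion

# Cleanup.
cfn.delete_stack(StackName=stack_name)
cfn.get_waiter("stack_delete_complete").wait(StackName=stack_name)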
Beyond CloudFormation, one interesting tool worth mentioning in the space of testing infrastructure-as-code is kitchen-terraform, a set of plugins for Test Kitchen which allow you to write fully-automated integration test suites for Terraform modules. A similar integration-testing harness could eventually be built for CloudFormation, but doesn't exist yet.
This tool “cfn-nag” parses a collection of CloudFormation templates and applies rules to find code patterns that could lead to insecure infrastructure.  The results of the tool include the logical resource identifiers for violating resources and an explanation of what rule has been violated.
Further Reading: https://stelligent.com/2016/04/07/finding-security-problems-early-in-the-development-process-of-a-cloudformation-template-with-cfn-nag/
While there are quite a number of particular rules the tool will attempt to match, the rough categories are:
IAM and resource policies (S3 Bucket, SQS, etc.): matches policies that are overly permissive in some way (e.g. wildcards in actions or principals)
Security Group ingress and egress rules: matches rules that are overly liberal (e.g. an ingress rule open to 0.0.0.0/0 with port range 1-65535 open)
Access Logs: looks for access logs that are not enabled for applicable resources (e.g. Elastic Load Balancers and CloudFront Distributions)
Encryption: (server-side) encryption that is not enabled or enforced for applicable resources (e.g. EBS volumes or PutObject calls on an S3 bucket)
A new tool is on the market now: Test all the CloudFormation things! (with TaskCat)
What is taskcat?
taskcat is a tool that tests AWS CloudFormation templates. It deploys your AWS CloudFormation template in multiple AWS Regions and generates a report with a pass/fail grade for each region. You can specify the regions and number of Availability Zones you want to include in the test, and pass in parameter values from your AWS CloudFormation template. taskcat is implemented as a Python class that you import, instantiate, and run.
usage
follow this document: https://aws.amazon.com/blogs/infrastructure-and-automation/up-your-aws-cloudformation-testing-game-using-taskcat/
notes
taskcat can't read the AWS_PROFILE environment variable. You need to define the profile in the general section if it is not the default profile:
general:
  auth:
    default: dev-account
Ref: https://github.com/aws-quickstart/taskcat/issues/434
The testing you described (at least beyond JSON parsing) can be achieved partially by parsing CloudFormation templates with the programmatic libraries that are used to generate/read templates. They do not test the template explicitly, but they can throw an exception or error when you use a property that is not defined for a resource.
Check out go-cloudformation: https://github.com/crewjam/go-cloudformation
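In Python, the troposphere library behaves the same way; a quick illustration (assuming pip install troposphere):
# Illustration: troposphere raises an error for a property that is not
# defined on the resource type.
from troposphere.s3 import Bucket

Bucket("GoodBucket", BucketName="my-bucket")  # valid property: fine

try:
    Bucket("BadBucket", BucketNmae="typo")    # misspelled property
except AttributeError as err:
    print("Caught invalid property:", err)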
Other than that, you need to run the stack to see errors. I believe that testing IaC is one of the main challenges in infrastructure automation: not only unit testing but also integration testing and continuous validation.
Speaking specifically of CloudFormation, AWS recommends using taskcat, a tool that deploys the infrastructure/template across multiple AWS Regions and performs code validation in the process.
TaskCat Github repository: https://github.com/aws-quickstart/taskcat
In addition, through Visual Studio Code you can use the Cloud Conformity Template Scanner extension to run security validations; the tool has since been acquired by Trend Micro and renamed from Cloud Conformity to Trend Micro Template Scanner.
It basically validates the template and the architectural code against the model and use cases of the AWS Well-Architected Framework.
About template scanner: https://aws.amazon.com/blogs/apn/using-shift-left-to-find-vulnerabilities-before-deployment-with-trend-micro-template-scanner/
The VS Code Extension Cloud Conformity: https://marketplace.visualstudio.com/items?itemName=raphaelbottino.cc-template-scanner
In addition, there is a VS Code linter extension that you can use as a pre-commit check for validation; its name is CloudFormation Linter.
CloudFormation Linter: https://marketplace.visualstudio.com/items?itemName=kddejong.vscode-cfn-lint
You can also use more advanced features if you want to implement an infrastructure-as-code pipeline with DevSecOps (the "Sec" part): Scout Suite. It has its own validator for the cloud that can be run in a build container; it audits the cloud to check whether there are resources outside a security standard.
Scout Suite Github repository: https://github.com/nccgroup/ScoutSuite
If you want to go deeper into validation and resource testing/compliance on AWS, I recommend studying 'Compliance as Code' using the Config service.
Link to a presentation of this service: https://www.youtube.com/watch?v=fBewaclMo2s
I couldn't find a real unit-testing solution for CloudFormation templates, so I created one: https://github.com/DontShaveTheYak/cloud-radar
Cloud-Radar lets you take a template, pass in the parameters you want to set, and then render that template to its final form, meaning all conditionals and intrinsic functions are resolved.
This allows you to take a template and write tests like the following:
from pathlib import Path

import pytest

from cloud_radar.cf.unit import Template

@pytest.fixture
def template():
    template_path = Path(__file__).parent / "../../templates/log_bucket/log_bucket.yaml"
    return Template.from_yaml(template_path.resolve(), {})

def test_log_defaults(template):
    result = template.render({"BucketPrefix": "testing"})
    assert "LogsBucket" in result["Resources"]
    bucket_name = result["Resources"]["LogsBucket"]["Properties"]["BucketName"]
    assert "us-east-1" in bucket_name

def test_log_retain(template):
    result = template.render(
        {"BucketPrefix": "testing", "KeepBucket": "TRUE"}, region="us-west-2"
    )
    assert "LogsBucket" not in result["Resources"]
    bucket = result["Resources"]["RetainLogsBucket"]
    assert "DeletionPolicy" in bucket
    assert bucket["DeletionPolicy"] == "Retain"
    bucket_name = bucket["Properties"]["BucketName"]
    assert "us-west-2" in bucket_name
Edit: If you are interested in testing CloudFormation templates, then check out my blog series Hypermodern Cloudformation.
There's a bash library, xsh-lib/aws; one of its tools can deploy AWS CloudFormation templates from the CLI.
The tool can be found at: https://github.com/xsh-lib/aws/blob/master/functions/cfn/deploy.sh
It handles template validation and uploading, stack naming, stack policies, updates, cleanup, timeouts, rollbacks, and status checking.
Besides the above, it also handles nested templates and non-inline Lambdas. That saves the work of uploading nested templates and non-inline Lambda functions, which could drive people nuts when testing manually.
It supports a customized config file, which makes deployment easier. A real-world config file is here.
A simple example call of the tool from your CLI looks like this:
xsh aws/cfn/deploy -t template.json -s mystack -o OPTIONS=Param1Key=Param1Value
xsh-lib/aws is a library for xsh; in order to use it, you will have to install xsh first.