Finding referencing variables on CloudFormation template Fn::ImportValue - amazon-web-services

I came across one of our existing AWS infra CloudFormation templates and found the snippet below:
"SubnetId" : { "Fn::ImportValue" : "hw-const-GreenBSubnet" },
"SecurityGroupIds": [
{ "Fn::ImportValue" : "hw-const-InfraSg" },
{ "Fn::ImportValue" : "hw-const-HttpsAllSg" },
After reading through the documentation I understand that hw-const-InfraSg is the export of some other stack's output, but how can I find the value of hw-const-InfraSg if I don't know which stack created it?
Can these variables be configured somewhere else? Please enlighten me.

There are a few ways. You could use the CLI to list existing exports:
aws cloudformation list-exports
or you could go to the CloudFormation console. In the console there is a dedicated Exports menu that lists all exports with their values and allows you to search through them.
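As a small extension (not in the original answer): to look up a single export such as hw-const-InfraSg and see which stack created it, you can filter the same CLI call with a JMESPath query:

aws cloudformation list-exports \
    --query "Exports[?Name=='hw-const-InfraSg'].[Value, ExportingStackId]" \
    --output text

The ExportingStackId field in the result identifies the stack that owns the export.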

Related

Trigger AWS Step Function via CloudWatch and pass some variables

A file in an S3 bucket triggers a CloudWatch event (I am able to capture the URL and key via $.detail.xxxx; code below).
How can I then pass these to a Step Function, and from the Step Function pass them to a Fargate instance as an environment variable?
I am trying to use Terraform's "aws_cloudwatch_event_target"; however, I cannot find good examples of launching and passing inputs to a Step Function.
Here is the full resource I have so far:
resource "aws_cloudwatch_event_target" "cw-target" {
arn = aws_sfn_state_machine.my-sfn.arn
rule = aws_cloudwatch_event_rule.cw-event-rule.name
role_arn = aws_iam_role.my-iam.arn
input_transformer {
input_paths = {
bucket = "$.detail.requestParameters.bucketName"
}
}
input_template = <<TEMPLATE
{
"containerOverrides": [
{
"environment": [
{ "name": "BUCKET", "value": <bucket> },
]
}
]
}
TEMPLATE
}
On the CloudWatch event in the console I can see
{"bucket":"$.detail.requestParameters.bucketName"}
and
{
"containerOverrides": [
{
"environment": [
{ "name": "BUCKET", "value": <bucket> },
]
}
]
}
I just need to know how to fetch this information inside the Step Function and then send it as an environment variable when calling Fargate.
For using input transformers in AWS EventBridge, check this guide.
You can transform the payload of the event to your liking before it reaches the Step Function by using an InputPath (as you have already done) and an input template, where you use variables defined in your InputPath to define a new payload. This new payload will be used as input for the Step Function.
Here are more examples of input paths and templates: https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-transform-target-input.html
Edit:
If you want to start a Fargate task with these environment variables, your best option is indeed to use the environment overrides to specify new env variables on each task, as sketched below.
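As an illustration (not from the original answer), a minimal sketch of a Step Functions task state that runs a Fargate task and forwards the bucket from the transformed event input; the cluster, task definition, and container name are hypothetical, and the input path assumes the template from the question:

{
  "RunFargateTask": {
    "Type": "Task",
    "Resource": "arn:aws:states:::ecs:runTask.sync",
    "Parameters": {
      "LaunchType": "FARGATE",
      "Cluster": "my-cluster",
      "TaskDefinition": "my-task-def",
      "Overrides": {
        "ContainerOverrides": [
          {
            "Name": "my-container",
            "Environment": [
              { "Name": "BUCKET", "Value.$": "$.containerOverrides[0].environment[0].value" }
            ]
          }
        ]
      }
    },
    "End": true
  }
}

The .$ suffix on Value tells Step Functions to resolve the value from the execution input rather than treat it as a literal. (A real Fargate run would also need a NetworkConfiguration with subnets and security groups, omitted here for brevity.)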
Old edit:
If you want to start a Fargate task with these environment variables, you have two options:
1. Create a new task definition in your Step Function with the specified env vars, then start a new Fargate task from this task definition.
2. Only use one task definition created beforehand, and use env files in that task definition. More info here. Basically, when the task is started, it fetches a file from S3 and uses the values in that file as env vars. Your Step Function then only has to contain a step that uploads a file to S3, and then start a Fargate task using the existing task definition.

Use CDK deploy time token values in a launch template user-data script

I recently started porting part of my infrastructure to AWS CDK. Previously, I did some experiments with CloudFormation templates directly.
I am currently facing the problem that I want to encode some values (namely the product version) in the user-data script of an EC2 launch template, and these values should only be resolved at deployment time. With CloudFormation this was quite simple: I was just building my JSON file with functions like Fn::Base64 and Fn::Join. E.g., it looked like this (simplified):
"MyLaunchTemplate": {
"Type": "AWS::EC2::LaunchTemplate",
"Properties": {
"LaunchTemplateData": {
"ImageId": "ami-xxx",
"UserData": {
"Fn::Base64": {
"Fn::Join": [
"#!/bin/bash -xe",
{"Fn::Sub": "echo \"${SomeParameter}\""},
]
}
}
}
}
}
}
This way I am able to define the parameter SomeParameter on launch of the CloudFormation template.
With CDK we can access values from the AWS Parameter Store either at deploy time or at synthesis time. If we use them at deploy time, we only get a token; otherwise we get the actual value.
So far I have managed to read a value at synthesis time and directly encode the user-data script as base64, like this:
product_version = ssm.StringParameter.value_from_lookup(
    self, '/Prod/MyProduct/Deploy/Version')
launch_template = ec2.CfnLaunchTemplate(self, 'My-LT', launch_template_data={
    'imageId': my_ami,
    'userData': base64.b64encode(
        f'echo {product_version}'.encode('utf-8')).decode('utf-8'),
})
With this code, however, the version is read at synthesis time and hardcoded into the user-data script.
In order to use dynamic values that are only resolved at deploy time (value_for_string_parameter), I would somehow need to tell CDK to write a CloudFormation template similar to what I wrote manually before (using Fn::Base64 only in CloudFormation, not in Python). However, I did not find a way to do this.
If I read a value that is only resolved at deploy time, as follows, how can I use it in the UserData field of a launch template?
latest_string_token = ssm.StringParameter.value_for_string_parameter(
self, "my-plain-parameter-name", 1)
It is possible using the CloudFormation intrinsic functions, which are available in the class aws_cdk.core.Fn in Python.
These can be used when creating a launch template in EC2 to combine strings and tokens, e.g. like this:
import aws_cdk.core as cdk

# loads a value to be resolved at deployment time
product_version = ssm.StringParameter.value_for_string_parameter(
    self, '/Prod/MyProduct/Deploy/Version')

launch_template = ec2.CfnLaunchTemplate(self, 'My-LT', launch_template_data={
    'imageId': my_ami,
    'userData': cdk.Fn.base64(cdk.Fn.join('\n', [
        '#!/usr/bin/env bash',
        cdk.Fn.join('=', ['MY_PRODUCT_VERSION', product_version]),
        'git checkout $MY_PRODUCT_VERSION',
    ])),
})
This example could result in the following user-data script in the launch template if the parameter store contains version 1.2.3:
#!/usr/bin/env bash
MY_PRODUCT_VERSION=1.2.3
git checkout $MY_PRODUCT_VERSION
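For reference (a sketch, not from the original answer), the synthesized template then carries the intrinsic functions instead of a hardcoded string, roughly like this; the SSM parameter's logical id is abbreviated here:

"UserData": {
  "Fn::Base64": {
    "Fn::Join": ["\n", [
      "#!/usr/bin/env bash",
      { "Fn::Join": ["=", ["MY_PRODUCT_VERSION", { "Ref": "SsmParameterValueProdMyProductDeployVersion" }]] },
      "git checkout $MY_PRODUCT_VERSION"
    ]]
  }
}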

Access an API Gateway endpoint in CloudFormation using a custom resource

I want to be able to call an API Gateway endpoint from within CloudFormation, parse the response from the output, and pass the relevant information to one of the other services in the CloudFormation template.
I have an API endpoint
https://123x123x.execute-api.eu-west-2.amazonaws.com/myendpoint/tenants
with
x-api-key: b8Yk6m63rq8XRnMDKa2PeWE3KvBcU7ZyFIn0Vvrty
Content-Type: application/json
which returns
{
  "tenants": [
    {
      "tenantId": "tenant-1234",
      "AZ": "us-west-2c",
      "tenantUsers": 24,
      "instanceType": "m1.small"
    },
    {
      "tenantId": "tenant-2345",
      "AZ": "us-west-2b",
      "tenantUsers": 32,
      "instanceType": "t2.micro"
    },
    {
      "tenantId": "tenant-3456",
      "AZ": "us-west-2a",
      "tenantUsers": 12,
      "instanceType": "m1.large"
    }
  ]
}
I want to be able to set the InstanceTypeParameter, which needs to be a list (["t2.micro", "m1.small", "m1.large"]) retrieved from the above response and passed as a parameter into CloudFormation as below.
"Ec2Instance" : {
"Type" : "AWS::EC2::Instance",
"Properties" : {
"InstanceType" : { "Ref" : "InstanceTypeParameter" },
"ImageId" : "ami-0ff8a91507f77f867"
}
}
I am assuming the only way to do this would be using a custom resource. Can someone help me develop that (at least pseudocode)?
You are correct, it must be a custom resource. Below are the general steps you can follow to achieve your aim.
Develop a standalone Lambda function. Just a plain, regular function for now, which calls the API, gets its response, parses it, and prepares the result you require based on the input parameters you provide. The aim is to test how such a Lambda function will work. It's a blueprint for the custom resource to be developed.
Once you know how the Lambda function will work, it's time to prepare the custom resource. I recommend creating a new function for that using the custom-resource-helper (crhelper). The helper greatly simplifies the development of custom resources. To use it, you will have to prepare a zip deployment package that bundles it with your function handler. Since you know from step 1 exactly how your function should work, you only need to amend it to work in the context of the helper. Adding the modified code into def create(event, context) of the helper should be enough; a minimal sketch follows. delete(event, context) can be empty, as you are not creating any new physical resource in AWS. What update(event, context) does is up to you.
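This sketch assumes crhelper is bundled in the deployment package; the property names ApiUrl and ApiKey and the returned attribute name InstanceTypeParameter are hypothetical and just need to match your template:

import json
import urllib.request

from crhelper import CfnResource

helper = CfnResource()

@helper.create
def create(event, context):
    props = event['ResourceProperties']
    req = urllib.request.Request(
        props['ApiUrl'],  # hypothetical input parameter
        headers={'x-api-key': props['ApiKey'],  # hypothetical input parameter
                 'Content-Type': 'application/json'})
    with urllib.request.urlopen(req) as resp:
        tenants = json.loads(resp.read())['tenants']
    # Values placed in helper.Data become attributes readable via !GetAtt
    helper.Data['InstanceTypeParameter'] = [t['instanceType'] for t in tenants]

@helper.update
def update(event, context):
    create(event, context)

@helper.delete
def delete(event, context):
    pass  # no physical resource to clean up

def handler(event, context):
    helper(event, context)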
Once you deploy your custom resource Lambda, it's time to actually create a custom resource in your CFN template. The general form is as follows:
MyGetExternalApiResponseResource:
  Type: Custom::CallExternalAPI
  Version: "1.0"
  Properties:
    ServiceToken: <ARN of function from step 2>
    InputParameterToFunction1: <for example, api key>
    InputParameterToFunction2: <for example, url of api to call>
Lots of debugging and troubleshooting. It will almost certainly not work the first time.
Once it works, you can get return values from the custom resource using either !Ref MyGetExternalApiResponseResource or !GetAtt MyGetExternalApiResponseResource.InstanceTypeParameter, depending on which way you prefer. The second way is probably better, as the custom resource doesn't create a physical resource. Usually !Ref is used for the id of a physical resource that was created, e.g. the id of an AMI or the id of an instance.
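For illustration, wiring that attribute into the instance from the question could look roughly like this (assuming the custom resource Lambda returns the list under the InstanceTypeParameter attribute, as in the sketch above):

Ec2Instance:
  Type: AWS::EC2::Instance
  Properties:
    InstanceType: !Select [0, !GetAtt MyGetExternalApiResponseResource.InstanceTypeParameter]
    ImageId: ami-0ff8a91507f77f867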
To fully automate it, you would also deploy the custom Lambda's code via a CFN template, instead of doing it manually. In this scenario your template would create both the custom resource Lambda function and the custom resource itself using that function.
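A skeleton of that piece, assuming the deployment package with the handler and crhelper has already been uploaded to S3 (the bucket, key, and role names are hypothetical):

MyCustomResourceFunction:
  Type: AWS::Lambda::Function
  Properties:
    Runtime: python3.9
    Handler: index.handler
    Timeout: 30
    Role: !GetAtt MyCustomResourceRole.Arn  # hypothetical role defined elsewhere in the template
    Code:
      S3Bucket: my-artifacts-bucket  # hypothetical
      S3Key: custom-resource.zip     # hypothetical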

API Gateway not importing exported definition

I am testing my backup procedure for an API in my API Gateway.
I am exporting my API from the API console within my AWS account; I then go into API Gateway and create a new API, using "import from swagger".
I paste my exported definition in and create the API, which throws tons of errors.
From my reading, this seems to be a known issue / pain point.
I suspect the reason for the errors is that I use a custom authorizer:
"security" : [ {
"TestAuthorizer" : [ ]
}, {
"api_key" : [ ]
} ]
I use this on each method; hence, I get a lot of errors.
The weird thing is that I can clone this API perfectly fine; hence, I assumed that I could take an exported definition and re-import it without issues.
Any ideas on how I can correct these errors (preferably within my API Gateway, so that I can export/import with no issues)?
An example of one of my GET methods using this authorizer is:
"/api/example" : {
"get" : {
"produces" : [ "application/json" ],
"parameters" : [ {
"name" : "Authorization",
"in" : "header",
"required" : true,
"type" : "string"
} ],
"responses" : {
"200" : {
"description" : "200 response",
"schema" : {
"$ref" : "#/definitions/exampleModel"
},
"headers" : {
"Access-Control-Allow-Origin" : {
"type" : "string"
}
}
}
},
"security" : [ {
"TestAuthorizer" : [ ]
}, {
"api_key" : [ ]
} ]
}
Thanks in advance
UPDATE
The errors I get when importing a definition I have just exported are:
Your API was not imported due to errors in the Swagger file.
Unable to put method 'GET' on resource at path '/api/v1/MethodName': Invalid authorizer ID specified. Setting the authorization type to CUSTOM or COGNITO_USER_POOLS requires a valid authorizer.
I get the message for each method in my API - so there is a lot.
Additionality, right at the end of the message, I get this:
Additionally, these warnings were found:
Unable to create authorizer from security definition: 'TestAuthorizer'. Extension x-amazon-apigateway-authorizer is required. Any methods with security: 'TestAuthorizer' will not be created. If this security definition is not a configured authorizer, remove the x-amazon-apigateway-authtype extension and it will be ignored.
I have tried ignoring the errors; same result.
Make sure you are exporting your swagger with both the integrations and authorizers extensions.
Try exporting your swagger using the AWS CLI:
aws apigateway get-export \
--parameters '{"extensions":"integrations,authorizers"}' \
--rest-api-id {api_id} \
--stage-name {stage_name} \
--export-type swagger swagger.json
The output will be written to the swagger.json file.
For more details about swagger custom extensions see this.
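For reference (a sketch, not taken from the question): a security definition that survives the export/import round trip carries the x-amazon-apigateway-authorizer extension, roughly like this, where the Lambda ARN, account id, and TTL are placeholders:

"securityDefinitions" : {
  "TestAuthorizer" : {
    "type" : "apiKey",
    "name" : "Authorization",
    "in" : "header",
    "x-amazon-apigateway-authtype" : "custom",
    "x-amazon-apigateway-authorizer" : {
      "type" : "token",
      "authorizerUri" : "arn:aws:apigateway:us-east-1:lambda:path/2015-03-31/functions/arn:aws:lambda:us-east-1:123456789012:function:TestAuthorizer/invocations",
      "authorizerResultTtlInSeconds" : 300
    }
  }
}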
For anyone who may come across this issue:
After LOTS of troubleshooting, and eventually involving the AWS Support Team, this has been resolved and identified as an AWS CLI client bug (confirmed by the AWS Support Team).
Final response:
Thank you for providing the details requested. After going through the AWS CLI version and error details, I can confirm the error is because of a known issue with the PowerShell AWS CLI. I apologize for the inconvenience caused by the error. To get around the error, I recommend going through the following steps:
1. Create a file named data.json in the current directory where the PowerShell command is to be executed.
2. Save the following contents to the file: {"extensions":"authorizers,integrations"}
3. In the PowerShell console, ensure the current working directory is the same as the location where data.json is present.
4. Execute the following command: aws apigateway get-export --parameters file://data.json --rest-api-id APIID --stage-name dev --export-type swagger C:\temp\export.json
Using this finally resolved my issue. I look forward to the fix in one of the upcoming versions.
PS - this is currently on the latest version:
aws --version
aws-cli/1.11.44 Python/2.7.9 Windows/8 botocore/1.5.7

CloudFormation - Redeploy environment that uses a record set (with Jenkins)

TL;DR - What's the recommended way, using a CI server, to keep an AWS environment up to date, and always pointed to from the same CNAME?
We're just starting to use AWS on a new project, and as part of it I've been tasked with creating a simple demo environment and updating this environment each night to show the previous day's progress.
I'm using Jenkins and the CloudFormation plugin to do this, and it works great at creating a simple EC2 instance in an existing security group, pointed to by a Route53 CNAME so it can be browsed at subdomain.example.com.
The problem I have is that I can't redeploy the same stack, because the record set already exists and CloudFormation won't overwrite it.
There are lots of guides on how to deploy an environment, but I'm struggling to find one on how to keep an environment up to date.
So I guess my question is: What's the recommended way, using a CI server, to keep an AWS environment up to date, and always pointed to from the same CNAME?
I agree with the comments on your question, i.e. it's probably better to create a clean server and upload/update to it via continuous integration (Jenkins). Docker, which you mentioned in a later comment, is super useful in this scenario.
However, if you are leaning towards "immutable infrastructure" and want everything encapsulated in your CloudFormation template (including creating a record in Route53), you could do something like the following code snippet in your AWS::CloudFormation::Init section (see http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-init.html for more info):
"Resources": {
"MyServer": {
"Type": "AWS::EC2::Instance",
"Metadata": {
"AWS::CloudFormation::Init": {
"configSets" : { "Install" : [ "UpdateRoute53", "ConfigSet2, .... ] },
"UpdateRoute53" : {
"files" : {
"/usr/local/bin/cli53" : {
"source" : "https://github.com/barnybug/cli53/releases/download/0.6.3/cli53-linux-amd64",
"mode" : "000755", "owner" : "root", "group" : "root"
},
"/tmp/update_route53.sh" : {
"content" : { "Fn::Join" : ["", [
"#!/bin/bash\n\n",
"PRIVATE_IP=`curl http://169.254.169.254/latest/meta-data/local-ipv4/`\n",
"/usr/local/bin/cli53 rrcreate ",
{"Ref": "Route53HostedZone" },
" \"", { "Ref" : "ServerName" },
" 300 A $PRIVATE_IP\" --replace\n"
]]},
"mode" : "000755", "owner" : "root", "group" : "root"
}
},
"commands" : {
"01_UpdateRoute53" : {
"command" : "/tmp/update_route53.sh > /tmp/update-route53.log 2>&1"
}
}
}
}
},
"Properties": { ... }
}
}
....
I've omitted large chunks of the template to focus on the important info. The "UpdateRoute53" section creates two files:
1. /usr/local/bin/cli53 - CLI53 is a great little wrapper program around AWS Route53 (the AWS CLI version of route53 is pretty horrible to use, i.e. it requires creating large chunks of JSON) - see https://github.com/barnybug/cli53 for more info on CLI53.
2. /tmp/update_route53.sh - creates a script to upload to Route53 via the CLI53 program we installed in (1). This script determines the PRIVATE_IP via a curl command to the special AWS metadata endpoint (see http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html for more details). The "zone id" of the correct hosted zone is injected via a CloudFormation parameter (i.e. {"Ref": "Route53HostedZone" }). Finally, the name of the record comes from the "ServerName" parameter, though how this is set could vary from template to template.
In the "commands" section we run the script we created in "files" section (2) and output the results a log file in the /tmp folder.
NOTE (1) - The parameter Route53HostedZone can be declared as follows:
"Route53HostedZone": {
"Description": "Route 53 hosted zone for updating internal DNS",
"Type": "AWS::Route53::HostedZone::Id",
"Default": "VIWIWK4PYAC23B"
}
The cool thing about the "AWS::Route53::HostedZone::Id" parameter type is that it displays a combo box (when running a CloudFormation template via the AWS web console) showing the zone name, with the value being the zone ID.
NOTE (2) - The --replace attribute in the CLI53 script overwrites existing records, which is probably what you want.
NOTE (3) - Another option would be to SSH via Jenkins (e.g. using the "Publish Over SSH Plugin" - https://wiki.jenkins-ci.org/display/JENKINS/Publish+Over+SSH+Plugin), determine the private IP, and use the CLI53 script to update Route53 either from the server you've logged into or even from the build server (where Jenkins is running).
Lots of options - hope you get it sorted! :-)