I want to be able to call an API Gateway endpoint from within CloudFormation, parse the response, and pass the relevant information to one of the other services in the CloudFormation template.
I have an API endpoint
https://123x123x.execute-api.eu-west-2.amazonaws.com/myendpoint/tenants
with the headers
x-api-key: b8Yk6m63rq8XRnMDKa2PeWE3KvBcU7ZyFIn0Vvrty
Content-Type: application/json
which returns
{
"tenants": [
{
"tenantId": "tenant-1234",
"AZ": "us-west-2c",
"tenantUsers": 24,
"instanceType": "m1.small"
},
{
"tenantId": "tenant-2345",
"AZ": "us-west-2b",
"tenantUsers": 32,
"instanceType": "t2.micro"
},
{
"tenantId": "tenant-3456",
"AZ": "us-west-2a",
"tenantUsers": 12
"instanceType": "m1.large"
}
]}
I want to set the InstanceTypeParameter, which needs to be a list ["t2.micro", "m1.small", "m1.large"] retrieved from the above response, and pass it in as a parameter in CloudFormation as below.
"Ec2Instance" : {
"Type" : "AWS::EC2::Instance",
"Properties" : {
"InstanceType" : { "Ref" : "InstanceTypeParameter" },
"ImageId" : "ami-0ff8a91507f77f867"
}
}
I am assuming the only way to do this would be using a custom resource. Can someone help me develop that (at least pseudocode)?
You are correct, it must be a custom resource. Below are the general steps which can be followed to achieve your aim.
Develop a standalone Lambda function first. Just a plain, regular function for now, which calls the API, gets its response, parses it, and prepares the result you require based on the input parameters you provide. The aim is to test how such a Lambda function will work; it is the blueprint for the custom resource to be developed.
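As a sketch, such a standalone function could look like this in Python, using only the standard library (the URL, header, and response shape come from the question; the function name and the choice of extracting the instanceType values are illustrative):

# Standalone sketch: call the API and extract the instance types.
import json
import urllib.request

def get_instance_types(api_url, api_key):
    request = urllib.request.Request(
        api_url,
        headers={"x-api-key": api_key, "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        body = json.loads(response.read())
    # Collect the instanceType of every tenant in the response.
    return [tenant["instanceType"] for tenant in body["tenants"]]

print(get_instance_types(
    "https://123x123x.execute-api.eu-west-2.amazonaws.com/myendpoint/tenants",
    "b8Yk6m63rq8XRnMDKa2PeWE3KvBcU7ZyFIn0Vvrty"))

Running this locally confirms the parsing logic before any custom resource plumbing is involved.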
Once you know how the Lambda function will work, it is time to turn it into a custom resource. I recommend creating a new function for that using the custom-resource-helper (crhelper). The helper greatly simplifies the development of custom resources. To use it, you will have to prepare a zip deployment package that bundles it with your function handler. Since you know from step 1 exactly how your function should work, you only need to adapt it to the context of the helper. Adding the modified code into the create(event, context) handler of the helper should be enough. delete(event, context) can be empty, as you are not creating any new physical resource in AWS. What update(event, context) does is up to you.
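A minimal sketch of such a crhelper-based handler, assuming the get_instance_types function from step 1 is bundled in the same module (the property names ApiUrl and ApiKey and the attribute name InstanceTypes are illustrative choices, not fixed names):

# Custom resource handler built on crhelper (bundled into the deployment zip).
from crhelper import CfnResource

helper = CfnResource()

@helper.create
def create(event, context):
    # Input parameters arrive in ResourceProperties from the template.
    props = event["ResourceProperties"]
    # get_instance_types is the plain function from step 1.
    helper.Data["InstanceTypes"] = get_instance_types(props["ApiUrl"], props["ApiKey"])
    # No physical resource is created, so any stable id will do.
    return "external-api-response"

@helper.update
def update(event, context):
    # Re-run create so the attribute stays fresh on stack updates.
    return create(event, context)

@helper.delete
def delete(event, context):
    pass  # nothing to clean up; no physical resource was created

def handler(event, context):
    helper(event, context)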
Once you deploy your custom resource Lambda, it is time to actually create a custom resource in your CFN template. The general form is as follows:
MyGetExternalApiResponseResource:
  Type: Custom::CallExternalAPI
  Version: "1.0"
  Properties:
    ServiceToken: <ARN of function from step 2>
    InputParameterToFunction1: <for example, api key>
    InputParameterToFunction2: <for example, url of api to call>
Expect lots of debugging and troubleshooting. It will almost certainly not work the first time.
Once it works, you can get return values from the custom resource, using either !Ref MyGetExternalApiResponseResource or !GetAtt MyGetExternalApiResponseResource.InstanceTypeParameter, whichever you prefer. The second way is probably better, as the custom resource doesn't create a physical resource. Usually !Ref is used for the id of a physical resource that was created, e.g. the id of an AMI or the id of an instance.
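For example, assuming the handler above stored the list under an attribute named InstanceTypes, the instance from the question could consume one entry of it like this (InstanceType takes a single value, so !Select picks one element):

Ec2Instance:
  Type: AWS::EC2::Instance
  Properties:
    InstanceType: !Select [0, !GetAtt MyGetExternalApiResponseResource.InstanceTypes]
    ImageId: ami-0ff8a91507f77f867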
To fully automate it, you would also deploy the code for the custom resource Lambda via a CFN template, instead of doing this manually. In this scenario your template would create both the custom resource Lambda function and the custom resource itself using the function.
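A rough outline of such a template (the role ARN, bucket, and key are placeholders for wherever you host the zipped crhelper bundle):

Resources:
  ApiCallerFunction:
    Type: AWS::Lambda::Function
    Properties:
      Runtime: python3.9
      Handler: index.handler
      Role: arn:aws:iam::111111111111:role/ApiCallerRole  # placeholder execution role
      Code:
        S3Bucket: my-deployment-bucket  # placeholder: bucket holding the crhelper zip
        S3Key: api-caller.zip           # placeholder
  MyGetExternalApiResponseResource:
    Type: Custom::CallExternalAPI
    Properties:
      ServiceToken: !GetAtt ApiCallerFunction.Arn
      ApiUrl: https://123x123x.execute-api.eu-west-2.amazonaws.com/myendpoint/tenants
      ApiKey: b8Yk6m63rq8XRnMDKa2PeWE3KvBcU7ZyFIn0Vvrty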
Related
I have the following AWS CDK backed solution:
Static S3 based webpage which communicates with
API Gateway which then sends data to
AWS Lambda.
The problem is that the S3 page needs to be aware of the API Gateway endpoint URL.
Obviously this is not achievable within the same CDK stack. So I have defined two stacks:
Backend (API gateway + lambda)
Frontend (S3 based static webpage)
They are linked as dependent in the CDK code:
const app = new cdk.App();
const backStack = new BackendStack(app, 'Stack-back', {...});
new FrontendStack(app, 'Stack-front', {...}).addDependency(backStack, "API URL from backend is needed");
I try to share the URL as follows.
Code from the backend stack definition:
const api = new apiGW.RestApi(this, 'MyAPI', {
restApiName: 'My API',
description: 'This service provides interface towards web app',
defaultCorsPreflightOptions: {
allowOrigins: apiGW.Cors.ALL_ORIGINS,
}
});
api.root.addMethod("POST", lambdaIntegration);
new CfnOutput(this, 'ApiUrlRef', {
value: api.url,
description: 'API Gateway URL',
exportName: 'ApiUrl',
});
Code from the frontend stack definition:
const apiUrl = Fn.importValue('ApiUrl');
Unfortunately, instead of the URL I get a token (${Token[TOKEN.256]}). At the same time, I see the URL is resolved in the CDK generated files:
./cdk.out/Stack-back.template.json:
"ApiUrlRef": {
"Description": "API Gateway URL",
"Value": {
"Fn::Join": [
"",
[
"https://",
{
"Ref": "MyAPI7DAA778AA"
},
".execute-api.us-west-1.",
{
"Ref": "AWS::URLSuffix"
},
"/",
{
"Ref": "MyAPIDeploymentStageprodA7777A7A"
},
"/"
]
]
},
"Export": {
"Name": "ApiUrl"
}
}
},
What am I doing wrong?
UPD:
After fedonev's advice to pass data as props, the situation did not change much. Now the URL looks like this:
"https://${Token[TOKEN.225]}.execute-api.us-west-1.${Token[AWS.URLSuffix.3]}/${Token[TOKEN.244]}/"
I think the important part I missed (which was also pointed out by
Milan Gatyas) is how I create the HTML with the URL of the gateway.
In my frontend-stack.ts I use a template file. After the template is filled, I store it in S3:
const filledTemplatePath: string = path.join(processedWebFileDir,'index.html');
const webTemplate: string = fs.readFileSync(filledTemplatePath, 'utf8')
const Handlebars = require("handlebars")
let template = Handlebars.compile(webTemplate)
const adjustedHtml: string = template({ apiGwEndpoint: apiUrl.toString() })
fs.writeFileSync(filledTemplatePath, adjustedHtml)
// bucket
const bucket: S3.Bucket = new S3.Bucket(this, "WebsiteBucket",
{
bucketName: 'frontend',
websiteIndexDocument: 'index.html',
websiteErrorDocument: 'error.html',
publicReadAccess: true,
})
new S3Deploy.BucketDeployment(this, 'DeployWebsite', {
sources: [S3Deploy.Source.asset(processedWebFileDir)],
destinationBucket: bucket,
});
(I'm new to TS and web development, please don't judge too harshly :) )
Am I correct that S3 is populated at synth time, that deploy does not change anything, and that this is why I get tokens in the HTML?
I would be grateful for a link or explanation that helps me understand the process better; there is so much new information that some parts are still quite foggy.
As #fedonev mentioned, the tokens are just placeholder values in the TypeScript application. The CDK app replaces the tokens with intrinsic functions when the CloudFormation template is produced.
However, your use case is different. You are trying to use information inside the CDK app at synthesis time that only becomes available at deployment time, and you can't use intrinsic functions to resolve the URL from within the CDK app in order to write it to a file.
If possible, utilize a custom domain for the API Gateway. Then you can work with the custom domain, which is known beforehand, in your static file, and assign the custom domain to the API Gateway in your CDK app.
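For illustration, a sketch of that idea in CDK Python (the equivalent constructs exist in TypeScript; the domain name and certificate ARN are assumptions):

# Sketch: bind the API to a domain name known in advance.
import aws_cdk.aws_apigateway as apigw
import aws_cdk.aws_certificatemanager as acm

# An existing ACM certificate for the domain (the ARN is a placeholder).
cert = acm.Certificate.from_certificate_arn(
    self, "Cert", "arn:aws:acm:us-west-1:111111111111:certificate/placeholder")

domain = apigw.DomainName(
    self, "ApiDomain",
    domain_name="api.example.com",  # known beforehand, can be hardcoded in the static file
    certificate=cert)
domain.add_base_path_mapping(api)  # "api" is the RestApi from the backend stack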
[Edit: rewrote the answer to reflect updates to the OP]
Am I correct that S3 is populated on synth, deploy does not change anything and this is why I get tokens in html?
Yes. The API URL will resolve only at deploy-time. You are trying to consume it at synth-time when you write to the template file. At synth-time, CDK represents not-yet-available values as Tokens like ${Token[TOKEN.256]}, the CDK's clever way of handling such deferred values.
What am I doing wrong?
You need to defer the consumption of the API URL until its value is resolved (= until the API is deployed). In most cases, passing constructs as props between stacks is the right approach. But not in your case: you want to inject the URL into the template file. As usual with AWS, you have many options:
Split the stacks into separate apps, deployed separately. Deploy BackendStack. Hardcode the URL into FrontendStack. Quick and dirty.
Instead of S3, use Amplify front-end hosting, which can expose the URL to your template as an environment variable. Beginner friendly, has CDK support.
Add a CustomResource construct, which would be backed by a Lambda that writes the URL to the template file as part of the deploy lifecycle. This solution is elegant but not newbie-friendly.
Use a Pipeline to inject the URL variable as a build step during deploy. Another advanced approach.
I recently started porting part of my infrastructure to AWS CDK. Previously, I did some experiments with CloudFormation templates directly.
I am currently facing the problem that I want to encode some values (namely the product version) in the user-data script of an EC2 launch template, and these values should only be resolved at deployment time. With CloudFormation this was quite simple: I just built my JSON file from functions like Fn::Base64 and Fn::Join. E.g. it looked like this (simplified):
"MyLaunchTemplate": {
"Type": "AWS::EC2::LaunchTemplate",
"Properties": {
"LaunchTemplateData": {
"ImageId": "ami-xxx",
"UserData": {
"Fn::Base64": {
"Fn::Join": [
"#!/bin/bash -xe",
{"Fn::Sub": "echo \"${SomeParameter}\""},
]
}
}
}
}
}
}
This way I am able to define the parameter SomeParameter at launch of the CloudFormation template.
With CDK we can access values from the AWS Parameter Store either at deploy time or at synthesis time. If we use them at deploy time, we only get a token, otherwise we get the actual value.
I have achieved so far to read a value for synthesis time and directly encode the user-data script as base64 like this:
product_version = ssm.StringParameter.value_from_lookup(
self, f'/Prod/MyProduct/Deploy/Version')
launch_template = ec2.CfnLaunchTemplate(self, 'My-LT', launch_template_data={
'imageId': my_ami,
'userData': base64.b64encode(
f'echo {product_version}'.encode('utf-8')).decode('utf-8'),
})
With this code, however, the version gets read during synthesis time and will be hardcoded into the user-data script.
In order to be able to use dynamic values that are only resolved at deploy time (value_for_string_parameter), I would somehow need to tell CDK to write a CloudFormation template similar to what I have done manually before (using Fn::Base64 only in CloudFormation, not in Python). However, I did not find a way to do this.
If I read a value that is only to be resolved at deploy time like follows, how can I use it in the UserData field of a launch template?
latest_string_token = ssm.StringParameter.value_for_string_parameter(
self, "my-plain-parameter-name", 1)
It is possible using the CloudFormation intrinsic functions, which are available in the class aws_cdk.core.Fn in Python.
These can be used when creating a launch template in EC2 to combine strings and tokens, e.g. like this:
import aws_cdk.core as cdk
import aws_cdk.aws_ec2 as ec2
import aws_cdk.aws_ssm as ssm
# loads a value to be resolved at deployment time
product_version = ssm.StringParameter.value_for_string_parameter(
self, '/Prod/MyProduct/Deploy/Version')
launch_template = ec2.CfnLaunchTemplate(self, 'My-LT', launch_template_data={
'imageId': my_ami,
'userData': cdk.Fn.base64(cdk.Fn.join('\n', [
'#!/usr/bin/env bash',
cdk.Fn.join('=', ['MY_PRODUCT_VERSION', product_version]),
'git checkout $MY_PRODUCT_VERSION',
])),
})
This example could result in the following user-data script in the launch template if the parameter store contains version 1.2.3:
#!/usr/bin/env bash
MY_PRODUCT_VERSION=1.2.3
git checkout $MY_PRODUCT_VERSION
I'm trying to write a Python script that lets you rename a Lambda function by copying all of the code and configuration to a new function. As part of that process, I want to take all of the API Gateway methods that point to the old function and redirect them to point to the new function.
Is there a way to accomplish this with boto3?
Yes, it is doable, but note that boto3 has two clients for API Gateway: apigateway (REST APIs) and apigatewayv2 (HTTP and WebSocket APIs).
Following are the calls you can make to do this:
get_rest_apis (get_apis on the v2 client) > gives all the APIs deployed; you can then drill down to an individual API using get_rest_api (get_api)
get_resources > this gives you all the paths in the chosen REST API.
get_integration > gives you something like below:
{
"type": "AWS_PROXY",
"httpMethod": "POST",
"uri": "arn:aws:apigateway:us-east1-1:lambda:path/2015-03-31/functions/arn:aws:lambda:us-east-1:1234567890:function:myfunction/invocations",
"credentials": "arn:aws:iam::1234567890:role/myrole",
...
}
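From there, a sketch of how the repointing could be strung together with boto3 for REST APIs (the function ARNs and stage name are illustrative):

# Sketch: repoint every integration that targets old_arn to new_arn.
import boto3

apigw = boto3.client("apigateway")
old_arn = "arn:aws:lambda:us-east-1:1234567890:function:myfunction"      # illustrative
new_arn = "arn:aws:lambda:us-east-1:1234567890:function:myfunction-new"  # illustrative

for api in apigw.get_rest_apis()["items"]:
    for resource in apigw.get_resources(restApiId=api["id"])["items"]:
        for method in resource.get("resourceMethods", {}):
            integration = apigw.get_integration(
                restApiId=api["id"], resourceId=resource["id"], httpMethod=method)
            uri = integration.get("uri", "")
            if old_arn in uri:
                # Overwrite the integration with the new function's invocation URI.
                apigw.put_integration(
                    restApiId=api["id"], resourceId=resource["id"], httpMethod=method,
                    type=integration["type"], integrationHttpMethod="POST",
                    uri=uri.replace(old_arn, new_arn))
                # Publish the change by creating a new deployment.
                apigw.create_deployment(restApiId=api["id"], stageName="prod")  # illustrative stage

Note that the new function also needs a resource-based policy that allows API Gateway to invoke it (the Lambda client's add_permission call).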
Summary: I can't specify a JSON object using CloudWatch target Input Transformer, in order to pass the object contents as an environment variable to a CodeBuild project.
Background:
I trigger an AWS CodeBuild job when an S3 bucket receives any new object. I have enabled CloudTrail for S3 operations so that I can use a CloudWatch rule that has my S3 bucket as an Event Source, with the CodeBuild project as a Target.
If I set up the 'Configure input' part of the Target using the Input Transformer, I can get single 'primitive' values from the event using the format below:
Input path textbox:
{"zip_file":"$.detail.requestParameters.key"}
Input template textbox:
{"environmentVariablesOverride": [ {"name":"ZIP_FILE", "value":<zip_file>}]}
And this works fine if I use 'simple' single strings.
However, if I wish to obtain, for example, the entire 'resources' key, which is a JSON array, I need to know each of the keys within it and the object structure, and manually recreate the structure for each key/value pair.
For example, the resources element in the Event is:
"resources": [
{
"type": "AWS::S3::Object",
"ARN": "arn:aws:s3:::mybucket/myfile.zip"
},
{
"accountId": "1122334455667799",
"type": "AWS::S3::Bucket",
"ARN": "arn:aws:s3:::mybucket"
}
],
I want the code in the buildspec in CodeBuild to do the heavy lifting and parse the JSON data.
If I specify in the input path textbox:
{"zip_file":"$.detail.resources"}
Then the CodeBuild project never gets triggered.
Is there a way to get the entire JSON object, identified by a specific key, as an environment variable?
Check this: CodeBuild targets support all the parameters allowed by the StartBuild API. You need to use environmentVariablesOverride in your JSON string.
{"environmentVariablesOverride": [ {"name":"ZIPFILE", "value":<zip_file>}]}
Please avoid using '_' in the environment variable name.
TL;DR - What's the recommended way, using a CI server, to keep an AWS environment up to date, and always pointed to from the same CNAME?
We're just starting to use AWS on a new project, and as part of the project I've been tasked with creating a simple demo environment, and updating this environment each night to show the previous day's progress.
I'm using Jenkins and the Cloudformation plugin to do this, and it works great in creating a simple EC2 instance in an existing security group, pointed to by a Route53 CNAME so it can be browsed at subdomain.example.com.
The problem I have is that I can't redeploy the same stack, because the record set already exists and CloudFormation won't overwrite it.
There are lots of guides on how to deploy an environment, but I'm struggling to find one on how to keep an environment up to date.
So I guess my question is: What's the recommended way, using a CI server, to keep an AWS environment up to date, and always pointed to from the same CNAME?
I agree with the comments on your question, i.e. it is probably better to create a clean server and upload / update to it via continuous integration (Jenkins). Docker, which you mentioned in a later comment, is super useful in this scenario.
However, if you are leaning towards "immutable infrastructure" and want everything encapsulated in your CloudFormation template (including creating a record in Route53), you could do something like the following code snippet in your AWS::CloudFormation::Init section (see http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-init.html for more info):
"Resources": {
"MyServer": {
"Type": "AWS::EC2::Instance",
"Metadata": {
"AWS::CloudFormation::Init": {
"configSets" : { "Install" : [ "UpdateRoute53", "ConfigSet2, .... ] },
"UpdateRoute53" : {
"files" : {
"/usr/local/bin/cli53" : {
"source" : "https://github.com/barnybug/cli53/releases/download/0.6.3/cli53-linux-amd64",
"mode" : "000755", "owner" : "root", "group" : "root"
},
"/tmp/update_route53.sh" : {
"content" : { "Fn::Join" : ["", [
"#!/bin/bash\n\n",
"PRIVATE_IP=`curl http://169.254.169.254/latest/meta-data/local-ipv4/`\n",
"/usr/local/bin/cli53 rrcreate ",
{"Ref": "Route53HostedZone" },
" \"", { "Ref" : "ServerName" },
" 300 A $PRIVATE_IP\" --replace\n"
]]},
"mode" : "000755", "owner" : "root", "group" : "root"
}
},
"commands" : {
"01_UpdateRoute53" : {
"command" : "/tmp/update_route53.sh > /tmp/update-route53.log 2>&1"
}
}
}
}
},
"Properties": { ... }
}
}
....
I've omitted large chunks of the template to focus on the important info. The "UpdateRoute53" section creates 2 files:
/usr/local/bin/cli53 - cli53 is a great little wrapper program around AWS Route53 (the AWS CLI's route53 commands are pretty horrible to use, i.e. they require creating large chunks of JSON) - see https://github.com/barnybug/cli53 for more info on cli53
/tmp/update_route53.sh - creates a script to update Route53 via the cli53 tool we installed in (1). This script determines the PRIVATE_IP via a curl command to the special AWS metadata endpoint (see http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html for more details). The "zone id" of the correct hosted zone is injected via a CloudFormation parameter (i.e. {"Ref": "Route53HostedZone" }). Finally, the name of the record comes from the "ServerName" parameter, but how this is set could vary from template to template.
In the "commands" section we run the script we created in "files" section (2) and output the results a log file in the /tmp folder.
NOTE (1) - The parameter Route53HostedZone can be declared as follows:
"Route53HostedZone": {
"Description": "Route 53 hosted zone for updating internal DNS",
"Type": "AWS::Route53::HostedZone::Id",
"Default": "VIWIWK4PYAC23B"
}
The cool thing about the "AWS::Route53::HostedZone::Id" parameter type is that it displays a combo box (when running a CloudFormation template via the AWS web console) showing the zone name, with the value being the Zone ID.
NOTE (2) - The --replace attribute in the cli53 script overrides existing records, which is probably what you want.
NOTE (3) - Another option would be to SSH via Jenkins (e.g. using the "Publish Over SSH Plugin" - https://wiki.jenkins-ci.org/display/JENKINS/Publish+Over+SSH+Plugin), determine the private IP, and use the cli53 script to update Route53 either from the server you've logged into or even from the build server (where Jenkins is running).
Lots of options - hope you get it sorted! :-)