Get generated API key from AWS AppSync API created with CDK - amazon-web-services

I'm trying to access data from my stack where I'm creating an AppSync API. I want to be able to use the generated stack's url and apiKey, but I'm running into issues with them being encoded/tokenized.
In my stack I'm setting some fields to the outputs of the deployed stack:
this.ApiEndpoint = graphAPI.url;
this.Authorization = graphAPI.graphqlApi.apiKey;
When trying to access these properties I get something like ${Token[TOKEN.209]} and not the values.
If I'm trying to resolve the token like so: this.resolve(graphAPI.graphqlApi.apiKey) I instead get { 'Fn::GetAtt': [ 'AppSyncAPIApiDefaultApiKey537321373E', 'ApiKey' ] }.
But I would like to retrieve the key itself as a string, like da2-10lksdkxn4slcrahnf4ka5zpeemq5i.
How would I go about actually extracting the string values for these properties?

The actual values of such Tokens are available only at deploy-time. Before then you can safely pass these token properties between constructs in your CDK code, but they are opaque placeholders until deployed. Depending on your use case, one of these options can help retrieve the deploy-time values:
If you define a CloudFormation Output for a value, CDK will (apart from creating it in CloudFormation) print its value to the console after cdk deploy and optionally write it to a JSON file you pass with the --outputs-file flag.
// AppsyncStack.ts
new cdk.CfnOutput(this, 'ApiKey', {
  value: this.api.apiKey ?? 'UNDEFINED',
  exportName: 'api-key',
});
// at deploy-time, if you use a flag: --outputs-file cdk.outputs.json
{
  "AppsyncStack": {
    "ApiKey": "da2-ou5z5di6kjcophixxxxxxxxxx",
    "GraphQlUrl": "https://xxxxxxxxxxxxxxxxx.appsync-api.us-east-1.amazonaws.com/graphql"
  }
}
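A post-deploy script can then read that file. A minimal sketch, assuming you ran cdk deploy --outputs-file cdk.outputs.json and the stack is named AppsyncStack as in the example above:

// read-outputs.ts - sketch of consuming the generated outputs file
import { readFileSync } from 'fs';

const outputs = JSON.parse(readFileSync('cdk.outputs.json', 'utf-8'));
const apiKey: string = outputs.AppsyncStack.ApiKey;       // "da2-..."
const graphqlUrl: string = outputs.AppsyncStack.GraphQlUrl;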
Alternatively, you can write a script to fetch the data post-deploy using the listGraphqlApis and listApiKeys commands from the appsync JS SDK client. You can run the script locally or, for advanced use cases, wrap the script in a CDK Custom Resource construct for deploy-time integration.
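For the Custom Resource route, a minimal sketch could look like the following, assuming api is the aws_appsync.GraphqlApi construct from this stack and using the AwsCustomResource helper; the exact service/action strings depend on your CDK/SDK version:

// Hypothetical sketch: look up the generated API key at deploy time with an AwsCustomResource.
import * as cr from 'aws-cdk-lib/custom-resources';

const listKeys = new cr.AwsCustomResource(this, 'ListApiKeys', {
  onUpdate: {
    service: 'AppSync',
    action: 'listApiKeys',
    parameters: { apiId: api.apiId }, // api is assumed to be the GraphqlApi construct
    physicalResourceId: cr.PhysicalResourceId.of('ListApiKeys'),
  },
  policy: cr.AwsCustomResourcePolicy.fromSdkCalls({
    resources: cr.AwsCustomResourcePolicy.ANY_RESOURCE,
  }),
});

// Deploy-time string for the first key's value (e.g. "da2-..."); still a token until deployed.
const apiKeyValue = listKeys.getResponseField('apiKeys.0.id');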

Thanks to @fedonev I was able to extract the API key and URL like so:
// Imports assumed: the AppSync client from AWS SDK v3 and a flatMap helper (e.g. lodash's).
import {
  AppSyncClient,
  ListGraphqlApisCommand,
  ListApiKeysCommand,
} from "@aws-sdk/client-appsync";
import { flatMap } from "lodash"; // or your own helper that flattens the uris map into an array

const client = new AppSyncClient({ region: "eu-north-1" });
const command = new ListGraphqlApisCommand({ maxResults: 1 });
const res = await client.send(command);

if (res.graphqlApis) {
  const apiKeysCommand = new ListApiKeysCommand({
    apiId: res.graphqlApis[0].apiId,
  });
  const apiKeyResponse = await client.send(apiKeysCommand);
  const urls = flatMap(res.graphqlApis[0].uris);
  if (apiKeyResponse.apiKeys && res.graphqlApis[0].uris) {
    // sendSlackMessage is our own notification helper
    sendSlackMessage(urls[1], apiKeyResponse.apiKeys[0].id || "");
  }
}


How to add api-key to google secret-manager

With Terraform GCP provider 4.30.0, I can now create a Google Maps API key and restrict it.
resource "google_apikeys_key" "maps-api-key" {
provider = google-beta
name = "maps-api-key"
display_name = "google-maps-api-key"
project = local.project_id
restrictions {
api_targets {
service = "static-maps-backend.googleapis.com"
}
api_targets {
service = "maps-backend.googleapis.com"
}
api_targets {
service = "places-backend.googleapis.com"
}
browser_key_restrictions {
allowed_referrers = [
"https://${local.project_id}.ey.r.appspot.com/*", # raw url to the app engine service
"*.example.com/*" # Custom DNS name to access to the app
]
}
}
}
The key is created and appears in the console as expected and I can see the API_KEY value.
When I deploy my app, I want it to read the API_KEY string.
My node.js app already reads secrets from secret manager, so I want to add it as a secret.
Another approach could be for the node client library to read the API credential directly, instead of using secret-manager, but I haven't found a way to do that.
I can't work out how to read the key string and store it in the secret.
The Terraform resource documentation describes the output:
key_string - Output only. An encrypted and signed value held by this
key. This field can be accessed only through the GetKeyString method.
I don't know how to call this method in Terraform, to pass the value to a secret version.
This doesn't work.
v1 = { enabled = true, data = resource.google_apikeys_key.maps-api-key.GetKeyString }
Referencing attributes and arguments does not work the way you tried it. You did quote the correct attribute (key_string), but did not actually reference it:
v1 = {
  enabled = true,
  data    = resource.google_apikeys_key.maps-api-key.key_string
}
Make sure to understand how referencing attributes in Terraform works [1].
[1] https://www.terraform.io/language/expressions/references#references-to-resource-attributes

How to get path params in CDK + APIGateway + Lambda

So, it turns out I had it all along but I was logging it incorrectly. I had been doing an Object.keys(event).forEach and console logging each key and value. I guess this didn't display the value because it's a nested object. Using JSON.stringify, as per @robC3's answer, shows all the nested objects and values properly and is easier too! TL;DR: just use curly braces in your gateway paths and they will be present in event.pathParameters.whateverYouCalledThem.
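For reference, a minimal handler along those lines might look like this (a sketch, assuming @types/aws-lambda and the default Lambda proxy integration):

// sketch: reading the path parameter for a resource defined with addResource('{id}')
import { APIGatewayProxyHandler } from 'aws-lambda';

export const handler: APIGatewayProxyHandler = async (event) => {
  // For GET /stuff/1234, event.pathParameters is { id: '1234' }.
  const id = event.pathParameters?.id;
  return { statusCode: 200, body: JSON.stringify({ id }) };
};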
I'm used to express land where you just write /stuff/:things in your route and then req.params.things becomes available in your handler for 'stuff'.
I'm struggling to get the same basic functionality in CDK. I have a RestAPI called 'api' and resources like so...
const api = new apigateway.RestApi(this, "image-cache-api", { /* options */ })
const stuff = api.root.addResource("stuff")
const stuffWithId = stuff.addResource("{id}")
stuffWithId.addMethod("GET", new apigateway.LambdaIntegration(stuffLambda, options))
Then I deploy the function and call it at https://<api path>/stuff/1234
Then in my lambda I check event.pathParameters and it is this: {id: undefined}
I've had a look through the event object and the only place I can see 1234 is in the path /stuff/1234, and while I could just parse it out of that, I'm sure that's not how it's supposed to work.
:/
Most of the things I have turned up while googling mention "mapping templates". That seems overly complicated for such a common use case so I had been working to the assumption there would be some sort of default mapping. Now I'm starting to think there isn't. Do I really have to specify a mapping template just to get access to path params and, if so, where should it go in my CDK stack code?
I tried the following...
stuffWithId.addMethod("GET", new apigateway.LambdaIntegration(stuffLambda, {
requestTemplate: {
"id": "$input.params('id')",
}
}))
But got the error...
error TS2559: Type '{ requestTemplate: { id: string; }; }' has no properties in common with type 'LambdaIntegrationOptions'.
I'm pretty confused as to whether I need requestTemplate, requestParameters, or something else entirely, as all the examples I have found so far are for the console rather than CDK.
This works fine, and you can see where the full path, path params, query params, etc., are in the event structure when you test it in a browser.
// lambdas/handler.ts
// This code uses @types/aws-lambda for TypeScript convenience.
// Build first, and then deploy the .js handler.
import { APIGatewayProxyHandler, APIGatewayProxyResult } from 'aws-lambda';

export const main: APIGatewayProxyHandler = async (event, context, callback) => {
  return <APIGatewayProxyResult> {
    body: JSON.stringify([ event, context ], null, 4),
    statusCode: 200,
  };
}
// apig-lambda-proxy-demo-stack.ts
import * as path from 'path';
import { aws_apigateway, aws_lambda, Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';

export class ApigLambdaProxyDemoStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const stuffLambda = new aws_lambda.Function(this, 'stuff-lambda', {
      code: aws_lambda.Code.fromAsset(path.join('dist', 'lambdas')),
      handler: 'handler.main',
      runtime: aws_lambda.Runtime.NODEJS_14_X,
    });

    const api = new aws_apigateway.RestApi(this, 'image-cache-api');
    const stuff = api.root.addResource('stuff');
    const stuffWithId = stuff.addResource('{id}');
    stuffWithId.addMethod('GET', new aws_apigateway.LambdaIntegration(stuffLambda));
  }
}
This sample query:
https://[id].execute-api.[region].amazonaws.com/prod/stuff/1234?q1=foo&q2=bar
gives a response containing the full serialized event; in that excerpt, event.pathParameters is { "id": "1234" } and event.queryStringParameters holds q1 and q2.
If you want to handle arbitrary paths at a certain point in your API, you'll want to explore the IResource.addProxy() CDK method. For example,
api.root.addProxy({
defaultIntegration: new aws_apigateway.LambdaIntegration(stuffLambda),
});
That creates a {proxy+} resource at the API root in the example and would forward all requests to the lambda. Rather than configuring every single endpoint in API Gateway, you could handle them all in your same handler.
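In the handler, the unmatched remainder of the path then arrives as a path parameter. A sketch, reusing the handler style above:

// sketch: with the {proxy+} resource, the remaining path is exposed as event.pathParameters.proxy
export const main: APIGatewayProxyHandler = async (event) => {
  const rest = event.pathParameters?.proxy; // e.g. 'stuff/1234' for GET /prod/stuff/1234
  return { statusCode: 200, body: JSON.stringify({ rest }) };
};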
First thing to note is that, in CDK, methods wired up with the LambdaIntegration module this way actually have to be POST - GET methods with LambdaIntegration don't function, as you are sending data to the Lambda. If you want to do a GET specifically, you have to write custom methods in the API for it.
Now, I have only done this in Python, but hopefully you can get the idea:
my_rest_api = apigateway.RestApi(
    self, "MyAPI",
    retain_deployments=True,
    deploy_options=apigateway.StageOptions(
        logging_level=apigateway.MethodLoggingLevel.INFO,
        stage_name="Dev"
    )
)

a_resource = apigateway.Resource(
    self, "MyResource",
    parent=my_rest_api.root,
    path_part="Stuff"
)

my_method = apigateway.Method(
    self, "MyMethod",
    http_method="POST",
    resource=a_resource,
    integration=apigateway.AwsIntegration(
        service="lambda",
        integration_http_method="POST",
        path="my:function:arn"
    )
)
Your Resource construct defines your path - you can chain multiple resources together if you want to have methods off each level, or just put them all together in path_part - so you could have resourceA defined and use it as the parent in resourceB, which would get you resourceAPathPart/resourceBPathPart/ to access your lambda.
Or you can put it all together in resourceA with path_part = stuff/path/etc.
I used the AwsIntegration method here instead of LambdaIntegration because, in the full code, I'm using stage variables to dynamically pick different lambdas depending on what stage I'm in, but the effect is rather similar.
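In the question's TypeScript, that chaining idea would look roughly like this (a sketch, reusing api and stuffLambda from the accepted answer above; the details segment is just an illustration):

const stuff = api.root.addResource('stuff');               // /stuff
const stuffWithId = stuff.addResource('{id}');             // /stuff/{id}
const stuffDetails = stuffWithId.addResource('details');   // /stuff/{id}/details
stuffDetails.addMethod('GET', new aws_apigateway.LambdaIntegration(stuffLambda));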

AWS CDK - Possible to access individual (JSON) value within a Secrets Manager secret when specifying secrets for a container?

I'm trying to put together a relatively simple stack on AWS CDK that involves an ApplicationLoadBalancedFargateService from aws-ecs-patterns.
My problem involves secrets. I have a secret in Secrets Manager that has several key/values (I think technically it's stored as a JSON doc, but AWS provides a key/val interface), and I need to pass them to my containers individually. I do this currently in an equivalent non-CDK stack (made in the console) by simply specifying the key, like this: arn:aws:secretsmanager:us-west-2:[acct]:secret/name-??????:KEY::, where `KEY` is the secret key, and the correct value is inserted into the container as an env var.
When I try to do that with CDK, I get an error when I cdk synth:
`secretCompleteArn` does not appear to be complete; missing 6-character suffix
If I remove the last bit (:KEY::), it successfully synths, but my container isn't actually getting what I want.
This is how I'm trying to use it in my cdk (typescript) code:
new ApplicationLoadBalancedFargateService(this, 'Service', {
  ...
  taskImageOptions: {
    image: containerImage, // defined elsewhere
    ...
    secrets: {
      'DB_DATABASE': ecs.Secret.fromSecretsManager(
        Secret.fromSecretCompleteArn(this, 'secret-DB_DATABASE',
          'arn:aws:secretsmanager:us-west-2:[acct]:secret:secret/name-??????:KEY::')),
      // there's really a few more, pulling keys from the same secret. Omitting for brevity
    },
  },
});
Is there a way to to make this work? Or do I need to change the way I store/use my secrets?
This is how you pass a specific key as an environment variable to your container:
const mySecret = secretsmanager.Secret.fromSecretCompleteArn(this, 'MySecret', '<your arn>');

taskDefinition.addContainer('MyContainer', {
  // ... other props ...
  secrets: {
    SECRET_KEY: ecs.Secret.fromSecretsManager(mySecret, 'specificKey'),
  },
});
or with the ApplicationLoadBalancedFargateService:
new ApplicationLoadBalancedFargateService(this, 'Service', {
  ...
  taskImageOptions: {
    image: containerImage, // defined elsewhere
    ...
    secrets: {
      'DB_DATABASE': ecs.Secret.fromSecretsManager(mySecret, 'specificKey'),
    },
  },
});

I am learning to create AWS Lambdas. I want to create a "chain": S3 -> 4 chained Lambdas -> RDS. I can't get the first lambda to call the second

I really tried everything. Surprisingly, Google does not have many answers when it comes to this.
When a certain .csv file is uploaded to a S3 bucket I want to parse it and place the data into a RDS database.
My goal is to learn the lambda serverless technology, this is essentially an exercise. Thus, I over-engineered the hell out of it.
Here is how it goes:
S3 Trigger when the .csv is uploaded -> call lambda (this part fully works)
AAA_Thomas_DailyOverframeS3CsvToAnalytics_DownloadCsv downloads the csv from S3 and finishes with essentially the plaintext of the file. It is then supposed to pass it to the next lambda. The way I am trying to do this is by putting the second lambda as destination. The function works, but the second lambda is never called and I don't know why.
AAA_Thomas_DailyOverframeS3CsvToAnalytics_ParseCsv gets the plaintext as input and returns a javascript object with the parsed data.
AAA_Thomas_DailyOverframeS3CsvToAnalytics_DecryptRDSPass only connects to KMS, gets the encrypted RDS password, and passes it along with the data it received as input to the last lambda.
AAA_Thomas_DailyOverframeS3CsvToAnalytics_PutDataInRds then finally puts the data in RDS.
I created a custom VPC with custom subnets, route tables, gateways, peering connections, etc. I don't know if this is relevant, but function 2. only has access to the s3 endpoint, 3. does not have any internet access whatsoever, 4. is the only one that has normal internet access (it's the only way to connect to KMS), and 5. only has access to the peered VPC which hosts the RDS.
This is the code of the first lambda:
// dependencies
const AWS = require('aws-sdk');
const util = require('util');
const s3 = new AWS.S3();

let region = process.env;

exports.handler = async (event, context, callback) =>
{
    var checkDates = process.env.CheckDates == "false" ? false : true;
    var ret = [];
    var checkFileDate = function(actualFileName)
    {
        if (!checkDates)
            return true;
        var d = new Date();
        var expectedFileName = 'Overframe_-_Analytics_by_Day_Device_' + d.getUTCFullYear() + '-' + (d.getUTCMonth().toString().length == 1 ? "0" + d.getUTCMonth() : d.getUTCMonth()) + '-' + (d.getUTCDate().toString().length == 1 ? "0" + d.getUTCDate() : d.getUTCDate());
        return expectedFileName == actualFileName.substr(0, expectedFileName.length);
    };
    for (var i = 0; i < event.Records.length; ++i)
    {
        var record = event.Records[i];
        try {
            if (record.s3.bucket.name != process.env.S3BucketName)
            {
                console.error('Unexpected notification, unknown bucket: ' + record.s3.bucket.name);
                continue;
            }
            if (!checkFileDate(record.s3.object.key))
            {
                console.error('Unexpected file, or date is not today\'s: ' + record.s3.object.key);
                continue;
            }
            const params = {
                Bucket: record.s3.bucket.name,
                Key: record.s3.object.key
            };
            var csvFile = await s3.getObject(params).promise();
            var allText = csvFile.Body.toString('utf-8');
            console.log('Loaded data:', {Bucket: params.Bucket, Filename: params.Key, Text: allText});
            ret.push(allText);
        } catch (error) {
            console.log("Couldn't download CSV from S3", error);
            return { statusCode: 500, body: error };
        }
    }
    // I've been randomly trying different ways to return the data, none works. The data itself is correct, I checked with console.log()
    const response = {
        statusCode: 200,
        body: { "Records": ret }
    };
    return ret;
};
(A console screenshot in the original question showed how the lambda was set up, with the second lambda configured as its on-success destination.)
I haven't posted on Stackoverflow in 7 years. That's how desperate I am. Thanks for the help.
Rather than getting each Lambda to call the next one, take a look at Step Functions, the AWS-managed service for state machines, which can handle this workflow for you.
By defining inputs and outputs you can pass each function's output to the next one, with retry logic built in.
If you don't have much experience, AWS has a tutorial on setting up a step function that chains Lambdas.
By using this you also will not need to account for configuration issues such as Lambda timeouts. In addition, it allows your code to be more modular, which improves testing of the individual functionality while also isolating issues.
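As a rough CDK sketch of that idea (TypeScript, CDK v2; the four function variables are assumed to be the Lambdas described in the question):

import { aws_stepfunctions as sfn, aws_stepfunctions_tasks as tasks } from 'aws-cdk-lib';

// Each LambdaInvoke task forwards only the handler's return value to the next state.
const download = new tasks.LambdaInvoke(this, 'DownloadCsv', {
  lambdaFunction: downloadCsvFn,
  outputPath: '$.Payload',
});
const parse = new tasks.LambdaInvoke(this, 'ParseCsv', {
  lambdaFunction: parseCsvFn,
  outputPath: '$.Payload',
});
const decrypt = new tasks.LambdaInvoke(this, 'DecryptRdsPass', {
  lambdaFunction: decryptRdsPassFn,
  outputPath: '$.Payload',
});
const put = new tasks.LambdaInvoke(this, 'PutDataInRds', {
  lambdaFunction: putDataInRdsFn,
  outputPath: '$.Payload',
});

new sfn.StateMachine(this, 'CsvToRdsStateMachine', {
  definition: download.next(parse).next(decrypt).next(put),
});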
The execution roles of all Lambda functions whose destinations include other Lambda functions must have the lambda:InvokeFunction IAM permission in one of their attached IAM policies; a CDK sketch of granting it follows the quoted list below.
Here's a snippet from Lambda documentation:
To send events to a destination, your function needs additional permissions. Add a policy with the required permissions to your function's execution role. Each destination service requires a different permission, as follows:
Amazon SQS – sqs:SendMessage
Amazon SNS – sns:Publish
Lambda – lambda:InvokeFunction
EventBridge – events:PutEvents
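In CDK, for example, such a grant could be expressed like this (a sketch; downloadCsvFn and parseCsvFn are assumed to be the first two Function constructs):

import { aws_lambda_destinations as destinations } from 'aws-cdk-lib';

// grantInvoke adds lambda:InvokeFunction on parseCsvFn to downloadCsvFn's execution role.
parseCsvFn.grantInvoke(downloadCsvFn);

// Declaring the destination in CDK also wires up this permission automatically:
// new aws_lambda.Function(this, 'DownloadCsv', {
//   // ...code, handler, runtime...
//   onSuccess: new destinations.LambdaDestination(parseCsvFn),
// });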

Writing a single file to multiple s3 buckets with gulp-awspublish

I have a simple single-page app that is deployed to an S3 bucket using gulp-awspublish. We use inquirer.js (via gulp-prompt) to ask the developer which bucket to deploy to.
Sometimes the app may be deployed to several S3 buckets. Currently, we only allow one bucket to be selected, so the developer has to gulp deploy for each bucket in turn. This is dull and prone to error.
I'd like to be able to select multiple buckets and deploy the same content to each. It's simple to select multiple buckets with inquirer.js/gulp-prompt, but not simple to generate arbitrary multiple S3 destinations from a single stream.
Our deploy task is based upon generator-webapp's S3 recipe. The recipe suggests gulp-rename to rewrite the path to write to a specific bucket. Currently our task looks like this:
gulp.task('deploy', ['build'], () => {
  // get AWS creds
  if (typeof(config.awsCreds) !== 'object') {
    return console.error('No config.awsCreds settings found. See README');
  }

  var dirname;

  const publisher = $.awspublish.create({
    key: config.awsCreds.key,
    secret: config.awsCreds.secret,
    bucket: config.awsCreds.bucket
  });

  return gulp.src('dist/**/*.*')
    .pipe($.prompt.prompt({
      type: 'list',
      name: 'dirname',
      message: 'Using the ‘' + config.awsCreds.bucket + '’ bucket. Which hostname would you like to deploy to?',
      choices: config.awsCreds.dirnames,
      default: config.awsCreds.dirnames.indexOf(config.awsCreds.dirname)
    }, function (res) {
      dirname = res.dirname;
    }))
    .pipe($.rename(function(path) {
      path.dirname = dirname + '/dist/' + path.dirname;
    }))
    .pipe(publisher.publish())
    .pipe(publisher.cache())
    .pipe($.awspublish.reporter());
});
It's hopefully obvious, but config.awsCreds might look something like:
awsCreds: {
  dirname: 'default-bucket',
  dirnames: ['default-bucket', 'other-bucket', 'another-bucket']
}
Gulp-rename rewrites the destination path to use the correct bucket.
We can select multiple buckets by using "checkbox" instead of "list" for the gulp-prompt options, but I'm not sure how to then deliver it to multiple buckets.
In a nutshell, if $.prompt returns an array of strings instead of a string, how can I write the source to multiple destinations (buckets) instead of a single bucket?
Please keep in mind that gulp.dest() is not used -- only gulp.awspublish() -- and we don't know how many buckets might be selected.
I've never used S3, but if I understand your question correctly, a file js/foo.js should be renamed to default-bucket/dist/js/foo.js and other-bucket/dist/js/foo.js when the checkboxes default-bucket and other-bucket are selected?
Then this should do the trick:
// additionally required modules
var path = require('path');
var through = require('through2').obj;

gulp.task('deploy', ['build'], () => {
  if (typeof(config.awsCreds) !== 'object') {
    return console.error('No config.awsCreds settings found. See README');
  }

  var dirnames = []; // array for selected buckets

  const publisher = $.awspublish.create({
    key: config.awsCreds.key,
    secret: config.awsCreds.secret,
    bucket: config.awsCreds.bucket
  });

  return gulp.src('dist/**/*.*')
    .pipe($.prompt.prompt({
      type: 'checkbox', // use checkbox instead of list
      name: 'dirnames', // use different result name
      message: 'Using the ‘' + config.awsCreds.bucket +
               '’ bucket. Which hostname would you like to deploy to?',
      choices: config.awsCreds.dirnames,
      default: config.awsCreds.dirnames.indexOf(config.awsCreds.dirname)
    }, function (res) {
      dirnames = res.dirnames; // store array of selected buckets
    }))
    // use through2 instead of gulp-rename
    .pipe(through(function(file, enc, done) {
      dirnames.forEach((dirname) => {
        var f = file.clone();
        f.path = path.join(f.base, dirname, 'dist',
                           path.relative(f.base, f.path));
        this.push(f);
      });
      done();
    }))
    .pipe(publisher.publish()) // publish the cloned files, as in the original task
    .pipe(publisher.cache())
    .pipe($.awspublish.reporter());
});
Notice the comments where I made changes from the code you posted.
What this does is use through2 to clone each file passing through the stream. Each file is cloned as many times as there were bucket checkboxes selected and each clone is renamed to end up in a different bucket.