Where can I log & debug Velocity Template Language (VTL) in AWS AppSync?

Is there any easy way to log or debug the VTL in the Request and Response Mapping Templates, rather than sending queries and mutations just to debug and log?
Also, is there any playground to check and play with VTL, just like we can do with JavaScript in a browser console?
Can we work with AWS AppSync offline and check that everything written in VTL works as expected?

A super nasty way to log and debug is to abuse $util.validate in the response mapping template:
$util.validate(false, $util.time.nowISO8601().substring(0, 10) )

Here's how I logged a value in my VTL resolver:
Add a $util.error statement in your request or response template and then make the GraphQL call.
For example, I wanted to see what arguments were passed as input into my resolver, so I added the $util.error statement at the beginning of my template. My template then became:
$util.error("Test Error", $util.toJson($ctx))
{
    "version" : "2017-02-28",
    "operation" : "PutItem",
    "key": {
        "id": $util.dynamodb.toDynamoDBJson($ctx.arguments.user.id)
    },
    "attributeValues": {
        "name": $util.dynamodb.toDynamoDBJson($ctx.arguments.user.name)
    }
}
Then from the "Queries" section of the AWS AppSync console, I ran the following mutation:
mutation MyMutation {
    addUser(user: {id: "002", name: "Rick Sanchez"}) {
        id
        name
    }
}
This displayed the log results from my resolver as follows:
{
"data": null,
"errors": [
{
"path": [
"addUser"
],
"data": null,
"errorType": "{\"arguments\":{\"user\":{\"id\":\"002\",\"name\":\"Rick Sanchez\"}},\"identity\":null,\"source\":null,\"result\":null,\"request\":{\"headers\":{\"x-forwarded-for\":\"112.133.236.59, 130.176.75.151\",\"sec-ch-ua-mobile\":\"?0\",\"cloudfront-viewer-country\":\"IN\",\"cloudfront-is-tablet-viewer\":\"false\",\"via\":\"2.0 a691085135305af276cea0859fd6b129.cloudfront.net (CloudFront)\",\"cloudfront-forwarded-proto\":\"https\",\"origin\":\"https://console.aws.amazon.com\",\"content-length\":\"223\",\"accept-language\":\"en-GB,en;q=0.9,en-US;q=0.8\",\"host\":\"raxua52myfaotgiqzkto2rzqdy.appsync-api.us-east-1.amazonaws.com\",\"x-forwarded-proto\":\"https\",\"user-agent\":\"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36 Edg/87.0.664.66\",\"accept\":\"*/*\",\"cloudfront-is-mobile-viewer\":\"false\",\"cloudfront-is-smarttv-viewer\":\"false\",\"accept-encoding\":\"gzip, deflate, br\",\"referer\":\"https://console.aws.amazon.com/\",\"x-api-key\":\"api-key-has-been-edited-out\",\"content-type\":\"application/json\",\"sec-fetch-mode\":\"cors\",\"x-amz-cf-id\":\"AvTMLvtxRq9M8J8XntvkDj322SZa06Fjtyhpf_fSXd-GmHs2UeomDg==\",\"x-amzn-trace-id\":\"Root=1-5fee036a-13f9ff472ba6a1211d499b8b\",\"sec-fetch-dest\":\"empty\",\"x-amz-user-agent\":\"AWS-Console-AppSync/\",\"cloudfront-is-desktop-viewer\":\"true\",\"sec-fetch-site\":\"cross-site\",\"sec-ch-ua\":\"\\\"Chromium\\\";v=\\\"87\\\", \\\" Not;A Brand\\\";v=\\\"99\\\", \\\"Microsoft Edge\\\";v=\\\"87\\\"\",\"x-forwarded-port\":\"443\"}},\"info\":{\"fieldName\":\"addUser\",\"parentTypeName\":\"Mutation\",\"variables\":{}},\"error\":null,\"prev\":null,\"stash\":{},\"outErrors\":[]}",
"errorInfo": null,
"locations": [
{
"line": 9,
"column": 3,
"sourceName": null
}
],
"message": "Test Error"
}
]
}

The answers to each of your 3 questions are as follows:
To unit test request/response mapping templates, you could use the method described in this blog post (https://mechanicalrock.github.io/2020/04/27/ensuring-resolvers-aren't-rejected.html).
A playground for VTL experimentation exists in the AWS AppSync console, where you can edit and test the VTL for your resolvers.
The Amplify framework has a mock functionality which mocks AppSync, the AppSync VTL environment and DynamoDB (using DynamoDB Local). This would allow you to perform e2e tests locally.

Looks like you are looking for the new VTL logging utility:
$util.log.info(Object) : Void
Documentation:
https://docs.aws.amazon.com/appsync/latest/devguide/utility-helpers-in-util.html

When I realized what a pain it was to debug VTL, I created a Lambda (Node.js) that logged the contents of my VTL template.
// my nodejs based debug lambda -- very basic
exports.handler = (event, context, callback) => {
    const origin = context.request || 'oops';
    if (context && context.prev) {
        console.log('--------with context----------------');
        console.log({ prev: context.prev.result, context, origin });
        console.log({ stash: context.stash });
        console.log('--------END: with context----------------');
        // return so the callback is not invoked a second time below
        return callback(null, context.prev.result);
    }
    console.log('inside - LOGGING_DEBUGGER');
    console.log({ event, context: context || null, origin });
    callback(null, event);
};
This lambda helped me debug many issues inside my pipeline resolvers. However, I forget whether I used it as a direct Lambda or with request and response templates.
To use it, I put the values I wanted to debug into $ctx.stash in my other pipeline functions. Then I added the "debugger" function after that step in my pipeline, in case the pipeline blew up before a fatal error occurred.

Check out $util.log.info(Object) : Void from the CloudWatch logging utils.
PS: you need to turn on logging to Amazon CloudWatch Logs and set the field resolver log level to ALL; see the AppSync monitoring documentation for more details.

Related

AWS Appsync enhanced subscription filtering not working

The AWS enhanced subscription filtering feature documentation recommends adding the following response mapping template:
## Response Mapping Template - onSpecialTicketCreated subscription
$extensions.setSubscriptionFilter($util.transform.toSubscriptionFilter($util.parseJson($ctx.args.filter)))
$util.toJson($context.result)
When using a simple request mapping template the subscription will not return any data:
{
    "version": "2017-02-28"
}
The documentation does not mention the following (I got it through one month of back and forth with support):
In the request template the payload must be set:
{
    "version": "2017-02-28",
    "payload": {}
}
If you have field-level resolvers, note that for some reason they are executed when the subscription is set up and a payload is set. To accommodate this, make sure they handle that case, i.e. event.source.id not being defined. For them to work, pass the args as the payload:
{
    "version": "2017-02-28",
    "payload": $util.toJson($ctx.args)
}
With this, frontend-side filters like {"filter" : "{\"severity\":{\"le\":2}}"} will work again.

Which functions should I use to read aws lambda log

Once my lambda run is finished, I am getting this payload as a result:
{
    "version": "1.0",
    "timestamp": "2020-09-30T19:20:03.360Z",
    "requestContext": {
        "requestId": "2de65baf-f630-48a7-881a-ce3145f1127d",
        "functionArn": "arn:aws:lambda:us-east-2:044739556748:function:puppeteer:$LATEST",
        "condition": "Success",
        "approximateInvokeCount": 1
    },
    "responseContext": {
        "statusCode": 200,
        "executedVersion": "$LATEST"
    }
}
I would like to read the logs of my run from CloudWatch, and also the memory usage which I can see in the Lambda monitoring tab.
How can I do it via the SDK? Which functions should I use?
I am using Node.js.
You need to discover the log stream name that has been assigned to the Lambda function invocation. This is available inside the Lambda function's context.
exports.handler = async (event, context) => {
    console.log('context', context);
};
Results in the following log:
context { callbackWaitsForEmptyEventLoop: [Getter/Setter],
succeed: [Function],
fail: [Function],
done: [Function],
functionVersion: '$LATEST',
functionName: 'test-log',
memoryLimitInMB: '128',
logGroupName: '/aws/lambda/test-log',
logStreamName: '2020/10/03/[$LATEST]f123a3c1bca123df8c12e7c12c8fe13e',
clientContext: undefined,
identity: undefined,
invokedFunctionArn: 'arn:aws:lambda:us-east-1:123456781234:function:test-log',
awsRequestId: 'e1234567-6b7c-4477-ac3d-74bc62b97bb2',
getRemainingTimeInMillis: [Function: getRemainingTimeInMillis] }
So, the CloudWatch Logs stream name is available in context.logStreamName. I'm not aware of an API to map a Lambda request ID to a log stream name after the fact, so you may need to return this or somehow persist the mapping.
Finding the logs of a specific request id can be done via the AWS CloudWatch Logs API.
You can use the filterLogEvents API to extract (using a filter pattern) the relevant START and REPORT logs and gather the memory-usage information (the response also includes the log stream name, for future use).
If you want to gather all the relevant logs of a specific invocation, you will need to pair up the START and REPORT logs and query for all the logs in the timeframe between them on the specific log stream.
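A sketch of that approach with the Node.js SDK (filterLogEvents and its logGroupName/filterPattern parameters are the real CloudWatch Logs API in aws-sdk v2; parseReport and the example log group name are illustrative helpers I made up):

```javascript
// Sketch: pull REPORT summary lines from a function's log group and
// parse the memory usage out of them.
function parseReport(message) {
    // REPORT lines look like:
    // REPORT RequestId: <id>  Duration: 123.45 ms ... Max Memory Used: 78 MB
    const id = /RequestId: (\S+)/.exec(message);
    const mem = /Max Memory Used: (\d+) MB/.exec(message);
    if (!id || !mem) return null;
    return { requestId: id[1], maxMemoryUsedMB: Number(mem[1]) };
}

// Illustrative call -- requires AWS credentials and the aws-sdk package.
async function fetchReports(logGroupName) {
    const AWS = require('aws-sdk');
    const logs = new AWS.CloudWatchLogs();
    const res = await logs.filterLogEvents({
        logGroupName,            // e.g. '/aws/lambda/puppeteer'
        filterPattern: 'REPORT', // only the per-invocation summary lines
    }).promise();
    return res.events
        .map((e) => parseReport(e.message))
        .filter(Boolean);
}

module.exports = { parseReport, fetchReports };
```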

AWS Pinpoint/Ionic - "Resource not found" error when trying to send push through CLI

I am new at programming with AWS services, so some fundamental things are pretty hard for me. Recently, I was asked to develop an app that used Amazon Pinpoint to send push notifications, as a test for considering future implementations.
As you can see in another question I posted in here (Amazon Pinpoint and Ionic - Push notifications not working when app is in background), I was having trouble trying to send push notifications to users when my app is running in the background. The app was developed using Ionic by following these steps.
When I was almost giving up, I decided to try sending the pushes directly through Firebase, and it finally worked. Some research took me to this question, in which another user described the problem as only happening in the AWS Console, so the solution would be to use the CLI. After searching a little, I found this tutorial about how to send Pinpoint messages to users using the CLI, which seemed to be what I wanted. Combining it with this documentation about the PhoneGap plugin, I was able to generate a JSON file I thought could be a solution:
{
    "ApplicationId":"io.ionic.starter",
    "MessageRequest":{
        "Addresses": {
            "": {
                "BodyOverride": "",
                "ChannelType": "GCM",
                "Context": {
                    "": ""
                },
                "RawContent": "",
                "Substitutions": {},
                "TitleOverride": ""
            }
        },
        "Context": {
            "": ""
        },
        "Endpoints": {
            "us-east-1": {
                "BodyOverride": "",
                "Context": {},
                "RawContent": "",
                "Substitutions": {},
                "TitleOverride": ""
            }
        },
        "MessageConfiguration": {
            "GCMMessage": {
                "Action": "OPEN_APP",
                "Body": "string",
                "CollapseKey": "",
                "Data": {
                    "": ""
                },
                "IconReference": "",
                "ImageIconUrl": "",
                "ImageUrl": "",
                "Priority": "High",
                "RawContent": "{\"data\":{\"title\":\"sometitle\",\"body\":\"somebody\",\"url\":\"insertyourlinkhere.com\"}}",
                "RestrictedPackageName": "",
                "SilentPush": false,
                "SmallImageIconUrl": "",
                "Sound": "string",
                "Substitutions": {},
                "TimeToLive": 123,
                "Title": "",
                "Url": ""
            }
        }
    }
}
But when I executed it in cmd with aws pinpoint send-messages --color on --region us-east-1 --cli-input-json file://test.json, I got the response An error occurred (NotFoundException) when calling the SendMessages operation: Resource not found.
I believe I didn't write the JSON file correctly, since it's my first time doing this. So please, if any of you knows what I am doing wrong, no matter which step I misunderstood, I would appreciate the help!
The "Endpoints" field in the message request takes an endpoint id (the identifier associated with an end-user device when it registers with Pinpoint), not a region.
In case you haven't registered any endpoints with Pinpoint, you can use the "Addresses" field instead. After registering the GCM channel in Amazon Pinpoint, you can get the GCM device token from your device and specify it there.
Here is a sample for sending direct messages using Amazon Pinpoint. Note: the example sends an SMS message, so you should have registered an SMS channel first and created an endpoint with the endpoint id "test-endpoint1". Otherwise, you can use the "Addresses" field instead of the "Endpoints" field.
aws pinpoint send-messages --application-id $APP_ID --message-request '{"MessageConfiguration": {"SMSMessage":{"Body":"hi hello"}},"Endpoints": {"test-endpoint1": {}}}'
Also note: the ApplicationId is generated by Pinpoint. When you visit the Pinpoint console and choose your application, the URL will be of the format
https://console.aws.amazon.com/pinpoint/home/?region=us-east-1#/apps/someverybigstringhere/
Here "someverybigstringhere" is the ApplicationId, not the name you gave your project.
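For completeness, the same kind of direct message can be sent from the Node.js SDK; sendMessages and the ApplicationId/MessageRequest shape are the real Pinpoint API in aws-sdk v2, while buildSmsRequest is just an illustrative helper:

```javascript
// Sketch: build the same direct-message request for the Pinpoint
// sendMessages API, keyed by endpoint id.
function buildSmsRequest(applicationId, endpointId, body) {
    return {
        ApplicationId: applicationId,
        MessageRequest: {
            MessageConfiguration: { SMSMessage: { Body: body } },
            Endpoints: { [endpointId]: {} }, // endpoint id, not a region
        },
    };
}

// Illustrative call -- requires AWS credentials and the aws-sdk package.
async function send(applicationId, endpointId, body) {
    const AWS = require('aws-sdk');
    const pinpoint = new AWS.Pinpoint();
    return pinpoint
        .sendMessages(buildSmsRequest(applicationId, endpointId, body))
        .promise();
}

module.exports = { buildSmsRequest, send };
```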

requestParameters returning "Invalid mapping expression specified: true"

I'm configuring a lambda function's API gateway integration with the Serverless Framework version 0.4.2.
My problem is with defining an endpoint's request parameters. The AWS docs for API gateway entry says:
requestParameters
Represents request parameters that can be accepted by Amazon API Gateway. Request parameters are represented as a key/value map, with a source as the key and a Boolean flag as the value. The Boolean flag is used to specify whether the parameter is required. A source must match the pattern method.request.{location}.{name}, where location is either querystring, path, or header. name is a valid, unique parameter name. Sources specified here are available to the integration for mapping to integration request parameters or templates.
As I understand it, the config in the s-function.json is given directly to the AWS CLI, so I've specified the request parameters in the format:
"method.request.querystring.startYear": true. However, I'm receiving an Invalid mapping expression specified: true error. I've also tried specifying the config as "method.request.querystring.startYear": "true" with the same result.
s-function.json:
{
    "name": "myname",
    // etc...
    "endpoints": [
        {
            "path": "mypath",
            "method": "GET",
            "type": "AWS",
            "authorizationType": "none",
            "apiKeyRequired": false,
            "requestParameters": {
                "method.request.querystring.startYear": true,
                "method.request.querystring.startMonth": true,
                "method.request.querystring.startDay": true,
                "method.request.querystring.currentYear": true,
                "method.request.querystring.currentMonth": true,
                "method.request.querystring.currentDay": true,
                "method.request.querystring.totalDays": true,
                "method.request.querystring.volume": true,
                "method.request.querystring.userId": true
            },
            // etc...
        }
    ],
    "events": []
}
Any ideas? Thanks in advance!
It looks like the requestParameters in the s-function.json file is meant for configuring the integration request section, so I ended up using:
"requestParameters": {
    "integration.request.querystring.startYear" : "method.request.querystring.startYear",
    "integration.request.querystring.startMonth" : "method.request.querystring.startMonth",
    "integration.request.querystring.startDay" : "method.request.querystring.startDay",
    "integration.request.querystring.currentYear" : "method.request.querystring.currentYear",
    "integration.request.querystring.currentMonth" : "method.request.querystring.currentMonth",
    "integration.request.querystring.currentDay" : "method.request.querystring.currentDay",
    "integration.request.querystring.totalDays" : "method.request.querystring.totalDays",
    "integration.request.querystring.volume" : "method.request.querystring.volume",
    "integration.request.querystring.userId" : "method.request.querystring.userId"
},
This ended up adding them automatically to the method request section in the console as well.
I could then use them in the mapping template to turn them into a method POST that would be sent as the event into my Lambda function. Right now I have a specific mapping template, but in the future I may use Alua K's suggested method for mapping all of the inputs in a generic way, so that I don't have to configure a separate mapping template for each function.
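Since that mapping is purely mechanical, a small helper can generate it from a list of parameter names (a sketch; querystringMappings is a made-up name, but the key/value format is the API Gateway one used above):

```javascript
// Sketch: generate the integration.request -> method.request querystring
// mapping from a list of parameter names, instead of writing it by hand.
function querystringMappings(names) {
    const mappings = {};
    for (const name of names) {
        mappings[`integration.request.querystring.${name}`] =
            `method.request.querystring.${name}`;
    }
    return mappings;
}

module.exports = { querystringMappings };
```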
You can pass query params to your Lambda like this:
"requestTemplates": {
    "application/json": {
        "querystring": "$input.params().querystring"
    }
}
In the Lambda function, access the query string as event.querystring.
First, you need to execute a put-method command to create the method request with query parameters:
aws apigateway put-method --rest-api-id "yourAPI-ID" --resource-id "yourResource-ID" --http-method GET --authorization-type "NONE" --no-api-key-required --request-parameters "method.request.querystring.paramname1=true","method.request.querystring.paramname2=true"
After this you can execute the put-integration command; only then will it work. Otherwise it will give an "invalid mapping" error.
"requestParameters": {
    "integration.request.querystring.paramname1" : "method.request.querystring.paramname1",
    "integration.request.querystring.paramname2" : "method.request.querystring.paramname2"
}
Make sure you're using the right endpoints as well. There are two types in AWS; a friend of mine got caught out by that in the past.

In AWS API Gateway, How do I include a stage parameter as part of the event variable in Lambda (Node)?

I have a stage variable set up called "environment".
I would like to pass it through in a POST request as part of the JSON.
Example:
Stage Variables
environment : "development"
JSON
{
    "name": "Toli",
    "company": "SomeCompany"
}
The event variable should look like:
{
    "name": "Toli",
    "company": "SomeCompany",
    "environment": "development"
}
So far the best I could come up with was the following mapping template (under Integration Request):
{
    "body" : $input.json('$'),
    "environment" : "$stageVariables.environment"
}
Then in Node I do:
exports.handler = function(event, context) {
    var _ = require('lodash'); // lodash is assumed to be bundled with the function
    var environment = event.environment;
    // hack to merge stage and JSON
    event = _.extend(event.body, {
        environment: environment
    });
    ....
If your API Gateway method uses Lambda Proxy integration, all your stage variables will be available via the event.stageVariables object.
For the project I'm currently working on, I created a simple function that goes over all the properties in event.stageVariables and appends them to process.env (e.g.: Object.assign(process.env, event.stageVariables);)
Your suggestion of using a mapping template to pass-through the variable would be the recommended solution for this type of workflow.
You can also access the stage name in the $context object.
Integration Request:
{
    "environment" : "$context.stage"
}