I am having trouble finding good sources on how to correctly add server-side validation to my AppSync GraphQL mutations.
In essence, I used the AWS console to define my AppSync schema, which created DynamoDB tables for me along with some basic resolvers for the data.
Now I need to achieve the following:
I have a player who has an inventory and gold.
The player calls a purchaseItem mutation with an item_id.
Once this mutation is called, I need to perform some checks in the resolver: verify that the item_id exists in the associated 'Items' DynamoDB table, verify that the player has enough gold (in the 'Players' table), and if so, write to the 'Players' table, adding the item to the player's inventory and storing the reduced gold amount.
I believe the most efficient way to achieve this, with the lowest cost and latency, is to use AppSync's Apache Velocity (VTL) mapping templates. Is that right?
It would be great to see an example of this showing how to query and write to DynamoDB, handle errors, and resolve the mutation correctly.
For writing to DynamoDB with VTL, you can start with the PutItem template from the AWS tutorial. My request template looks like this:
{
    "version" : "2017-02-28",
    "operation" : "PutItem",
    "key" : {
        "noteId" : { "S" : "${context.arguments.noteId}" },
        "userId" : { "S" : "${context.identity.sub}" }
    },
    "attributeValues" : {
        "title" : { "S" : "${context.arguments.title}" },
        "content": { "S" : "${context.arguments.content}" }
    }
}
For a query:
{
    "version" : "2017-02-28",
    "operation" : "Query",
    "query" : {
        ## Provide a query expression.
        "expression": "userId = :userId",
        "expressionValues" : {
            ":userId" : {
                "S" : "${context.identity.sub}"
            }
        }
    },
    ## Add 'limit' and 'nextToken' arguments to this field in your schema to implement pagination.
    "limit": #if(${context.arguments.limit}) ${context.arguments.limit} #else 20 #end,
    "nextToken": #if(${context.arguments.nextToken}) "${context.arguments.nextToken}" #else null #end
}
This is based on the Paginated Query template.
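The question also asked about handling errors and resolving the mutation. A minimal response mapping template for either operation above (this is the standard pattern, not part of the tutorial excerpts) surfaces any DynamoDB error and otherwise returns the result:

#if($ctx.error)
    $util.error($ctx.error.message, $ctx.error.type)
#end
$util.toJson($ctx.result)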
What you want to look at is Pipeline Resolvers:
https://docs.aws.amazon.com/appsync/latest/devguide/pipeline-resolvers.html
Yes, this requires VTL (Velocity Template Language).
Pipeline resolvers allow you to perform reads, writes, validation, and anything else you'd like using VTL. You basically chain each function's output into the input of the next template and build up the process you need.
Here's a Medium post showing you how to do it:
https://medium.com/@dabit3/intro-to-aws-appsync-pipeline-functions-3df87ceddac1
In other words, what you can do is: have one function that queries the database, then pipe its result into another function that validates it and either performs the insert or fails the mutation.
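To make that concrete for the purchaseItem mutation from the question, here is a minimal sketch of two pipeline functions. All names here (the Items and Players tables, their id/playerId hash keys, and the price, gold, and inventory attributes) are assumptions for illustration, not something defined in the question.

Function 1 request template, looking up the item being purchased in the Items table:

{
    "version" : "2018-05-29",
    "operation" : "GetItem",
    "key" : {
        "id" : $util.dynamodb.toDynamoDBJson($ctx.args.item_id)
    }
}

Function 1 response template, failing the mutation early if the item does not exist:

#if($util.isNull($ctx.result))
    $util.error("Item does not exist", "NotFound")
#end
$util.toJson($ctx.result)

Function 2 request template, subtracting the gold and appending the item in a single update, guarded by a condition on the player's gold balance so the write is rejected atomically if they cannot afford it:

{
    "version" : "2018-05-29",
    "operation" : "UpdateItem",
    "key" : {
        "playerId" : $util.dynamodb.toDynamoDBJson($ctx.identity.sub)
    },
    "update" : {
        "expression" : "SET gold = gold - :price, inventory = list_append(inventory, :item)",
        "expressionValues" : {
            ":price" : $util.dynamodb.toDynamoDBJson($ctx.prev.result.price),
            ":item" : { "L" : [ $util.dynamodb.toDynamoDBJson($ctx.args.item_id) ] }
        }
    },
    "condition" : {
        "expression" : "gold >= :minGold",
        "expressionValues" : {
            ":minGold" : $util.dynamodb.toDynamoDBJson($ctx.prev.result.price)
        }
    }
}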
Related
I'm new to AWS AppSync and I have adopted the single-table pattern in DynamoDB. Now I am trying to create an item based on a particular field value of an existing item in the same table. For example, I have a table called 'transaction' which holds two types of records:
Request
Response
As you can see from the table above, I can insert (PutItem) multiple responses for a particular request. Before I insert a new response, I need to validate that the request (RequestId) already exists. Is there any way to do this via a condition expression in the resolver? Below is my current request mapping template, which is not working as expected.
#set( $Id = $util.autoId() )
{
    "version" : "2017-02-28",
    "operation" : "PutItem",
    "key" : {
        "PK": $util.dynamodb.toDynamoDBJson("USER#$ctx.args.input.UserId"),
        "SK": $util.dynamodb.toDynamoDBJson("RESPONSE#$Id")
    },
    "attributeValues" : $util.dynamodb.toMapValuesJson($ctx.args.input),
    "condition": {
        "expression": "SK = :SK",
        "expressionValues" : {
            ":SK" : {
                "S" : "REQUEST#${ctx.args.input.RequestId}"
            }
        }
    }
}
You could do this in one request mapping template by using DynamoDB transactions; see https://docs.aws.amazon.com/appsync/latest/devguide/tutorial-dynamodb-transact.html.
In one of the transactWriteItems, you update an arbitrary value (or perhaps the number of responses) on the request item, with a condition that checks whether a request item exists with that RequestId in the SK. If the condition succeeds, the request item is updated.
Also include the response item in your transactWriteItems, so that the response is only written if the request condition passes.
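A sketch of what that could look like, using the question's table name and key layout; the responseCount attribute is an assumption, just an arbitrary value to bump on the request item:

#set( $id = $util.autoId() )
{
    "version" : "2018-05-29",
    "operation" : "TransactWriteItems",
    "transactItems" : [
        {
            ## the whole transaction fails if the request item does not exist
            "table" : "transaction",
            "operation" : "UpdateItem",
            "key" : {
                "PK" : $util.dynamodb.toDynamoDBJson("USER#${ctx.args.input.UserId}"),
                "SK" : $util.dynamodb.toDynamoDBJson("REQUEST#${ctx.args.input.RequestId}")
            },
            "update" : {
                "expression" : "ADD responseCount :one",
                "expressionValues" : { ":one" : { "N" : "1" } }
            },
            "condition" : {
                "expression" : "attribute_exists(PK) AND attribute_exists(SK)"
            }
        },
        {
            ## only written if the condition above passes
            "table" : "transaction",
            "operation" : "PutItem",
            "key" : {
                "PK" : $util.dynamodb.toDynamoDBJson("USER#${ctx.args.input.UserId}"),
                "SK" : $util.dynamodb.toDynamoDBJson("RESPONSE#$id")
            },
            "attributeValues" : $util.dynamodb.toMapValuesJson($ctx.args.input)
        }
    ]
}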
No, you won't be able to do a conditional PutItem based on another item. In this case you'd want to use pipeline resolvers: in your first function you fetch the request item, and in the second you can then do your PutItem; the previous GetItem result will be available as $ctx.prev.result.
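A sketch of that pipeline, reusing the question's key layout:

Function 1 request template, fetching the request item:

{
    "version" : "2018-05-29",
    "operation" : "GetItem",
    "key" : {
        "PK" : $util.dynamodb.toDynamoDBJson("USER#${ctx.args.input.UserId}"),
        "SK" : $util.dynamodb.toDynamoDBJson("REQUEST#${ctx.args.input.RequestId}")
    }
}

Function 1 response template: $util.toJson($ctx.result)

Function 2 request template, aborting if the request was not found, otherwise putting the response:

#if($util.isNull($ctx.prev.result))
    $util.error("Request ${ctx.args.input.RequestId} does not exist", "ValidationError")
#end
#set( $Id = $util.autoId() )
{
    "version" : "2018-05-29",
    "operation" : "PutItem",
    "key" : {
        "PK" : $util.dynamodb.toDynamoDBJson("USER#${ctx.args.input.UserId}"),
        "SK" : $util.dynamodb.toDynamoDBJson("RESPONSE#$Id")
    },
    "attributeValues" : $util.dynamodb.toMapValuesJson($ctx.args.input)
}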
I have an AppSync pipeline resolver. The first function queries an ElasticSearch database for the DynamoDB keys. The second function queries DynamoDB using the provided keys. This was all working well until I ran into the 1 MB limit of AppSync. Since most of the data is in a few attributes/columns I don't need, I want to limit the results to just the attributes I need.
I tried adding AttributesToGet and ProjectionExpression (from here) but both gave errors like:
{
    "data": {
        "getItems": null
    },
    "errors": [
        {
            "path": [
                "getItems"
            ],
            "data": null,
            "errorType": "MappingTemplate",
            "errorInfo": null,
            "locations": [
                {
                    "line": 2,
                    "column": 3,
                    "sourceName": null
                }
            ],
            "message": "Unsupported element '$[tables][dev-table-name][projectionExpression]'."
        }
    ]
}
My DynamoDB function's request mapping template looks like this (it returns results as long as the data is less than 1 MB):
#set($ids = [])
#foreach($pResult in ${ctx.prev.result})
    #set($map = {})
    $util.qr($map.put("id", $util.dynamodb.toString($pResult.id)))
    $util.qr($map.put("ouId", $util.dynamodb.toString($pResult.ouId)))
    $util.qr($ids.add($map))
#end
{
    "version" : "2018-05-29",
    "operation" : "BatchGetItem",
    "tables" : {
        "dev-table-name": {
            "keys": $util.toJson($ids),
            "consistentRead": false
        }
    }
}
I contacted AWS support, who confirmed that ProjectionExpression is not currently supported and that it will be a while before they get to it.
Instead, I created a Lambda function to pull the data from DynamoDB.
To limit the results from DynamoDB, I used $ctx.info.selectionSetList in AppSync to get the list of requested fields, then used that list to specify the data to pull from DynamoDB. I needed to get multiple results while maintaining their order, so I used BatchGetItem, then merged the results with the original list of IDs using LINQ (which put the DynamoDB results back in the correct order, since BatchGetItem in C# does not preserve sort order the way the AppSync version does).
Because I was using C# with a number of libraries, the cold start time was a little long, so I used Lambda Layers pre-JITed to Linux, which brought the cold start down from ~1.8 seconds to ~1 second (when using 1024 MB of RAM for the Lambda).
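For reference, the Invoke request template can forward the requested fields to the Lambda along these lines (a sketch; the payload shape and the "fields" key are my own choice, and $ids is the same list built in the template above):

{
    "version" : "2018-05-29",
    "operation" : "Invoke",
    "payload" : {
        "keys" : $util.toJson($ids),
        ## the GraphQL fields the client actually requested, e.g. ["id", "ouId"]
        "fields" : $util.toJson($ctx.info.selectionSetList)
    }
}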
AppSync doesn't support projection expressions, but you can explicitly define which fields to return in the response mapping template instead of returning the entire result set:
{
    "id": "$ctx.result.get('id')",
    "name": "$ctx.result.get('name')",
    ...
}
I have two tables, cuts and holds. In response to an event in my application I want to be able to move the values from a hold entry with a given id across to the cuts table. The naïve way is to do an INSERT and then a DELETE.
How do I run multiple SQL statements in an AppSync resolver to achieve that result? I have tried the following (replacing sql with statements and turning it into an array) without success.
{
    "version" : "2017-02-28",
    "operation": "Invoke",
    #set($id = $util.autoId())
    "payload": {
        "statements": [
            "INSERT INTO cuts (id, rollId, length, reason, notes, orderId) SELECT '$id', rollId, length, reason, notes, orderId FROM holds WHERE id=:ID",
            "DELETE FROM holds WHERE id=:ID"
        ],
        "variableMapping": {
            ":ID": "$context.arguments.id"
        },
        "responseSQL": "SELECT * FROM cuts WHERE id = '$id'"
    }
}
If you're using the "AWS AppSync Using Amazon Aurora as a Data Source via AWS Lambda" sample found at https://github.com/aws-samples/aws-appsync-rds-aurora-sample, you won't be able to send multiple statements in the sql field.
If you are using the AWS AppSync integration with the Aurora Serverless Data API, you can pass up to 2 statements in a statements array, as in the example below:
{
    "version": "2018-05-29",
    "statements": [
        "select * from Pets WHERE id='$ctx.args.input.id'",
        "delete from Pets WHERE id='$ctx.args.input.id'"
    ]
}
You will be able to do it as follows if you're using the "AWS AppSync Using Amazon Aurora as a Data Source via AWS Lambda" sample found at https://github.com/aws-samples/aws-appsync-rds-aurora-sample.
In the resolver, add "sql0" and "sql1" fields (you can name them whatever you want):
{
    "version" : "2017-02-28",
    "operation": "Invoke",
    #set($id = $util.autoId())
    "payload": {
        "sql":"INSERT INTO cuts (id, rollId, length, reason, notes, orderId)",
        "sql0":"SELECT '$id', rollId, length, reason, notes, orderId FROM holds WHERE id=:ID",
        "sql1":"DELETE FROM holds WHERE id=:ID",
        "variableMapping": {
            ":ID": "$context.arguments.id"
        },
        "responseSQL": "SELECT * FROM cuts WHERE id = '$id'"
    }
}
In your lambda, add the following piece of code:
if (event.sql0) {
    const inputSQL0 = populateAndSanitizeSQL(event.sql0, event.variableMapping, connection);
    await executeSQL(connection, inputSQL0);
}
if (event.sql1) {
    const inputSQL1 = populateAndSanitizeSQL(event.sql1, event.variableMapping, connection);
    await executeSQL(connection, inputSQL1);
}
With this approach, you can send as many SQL statements to your Lambda as you want, and it will execute them.
So here is my schema:
type Model {
    PartitionKey: ID!
    Name: String
    Version: Int
    FBX: String
    # ms since epoch
    CreatedAt: AWSTimestamp
    Description: String
    Tags: [String]
}

type Query {
    getAllModels(count: Int, nextToken: String): PaginatedModels!
}

type PaginatedModels {
    models: [Model!]!
    nextToken: String
}
I would like to call 'getAllModels' and have all of its data, including all of its tags, filled in.
But here is the thing: tags are stored via sort keys, like so:
PartitionKey | SortKey
Model-0      | Model-0
Model-0      | Tag-Tree
Model-0      | Tag-Building
Is it possible to transform the 'Tag' sort keys into the Tags: [String] array in the schema via a DynamoDB resolver? Or must I do something extra fancy through a lambda? Or is there a smarter way to do this?
To clarify, are you storing objects like this in DynamoDB:
{ PartitionKey (HASH), Tag (SortKey), Name, Version, FBX, CreatedAt, Description }
and using a DynamoDB Query operation to fetch all rows for a given hash key,
Query #PartitionKey = :PartitionKey
and getting back a list of objects, some of which have a different "Tag" value, and one of which is "Model-0" (i.e. the same value as the partition key)? I assume that record contains all the other values for the model, e.g.:
[
    { PartitionKey, Tag: 'ValueOfPartitionKey', Name, Version, FBX, CreatedAt, ... },
    { PartitionKey, Tag: 'Tag-Tree' },
    { PartitionKey, Tag: 'Tag-Building' }
]
You can definitely write resolver logic without too much hassle that reduces the list of model objects into a single object with a list of "Tags". Let's start with a single item and see how to implement a getModel(id: ID!): Model query:
First define the request mapping template that will get all rows for a partition key:
{
    "version" : "2017-02-28",
    "operation" : "Query",
    "query" : {
        "expression": "#PartitionKey = :id",
        "expressionValues" : {
            ":id" : {
                "S" : "${ctx.args.id}"
            }
        },
        "expressionNames": {
            "#PartitionKey": "PartitionKey" ## or whatever the table's hash key is
        }
    },
    ## The limit will have to be sufficiently large to get all rows for a key
    "limit": $util.defaultIfNull(${ctx.args.limit}, 100)
}
Then to return a single model object that reduces "Tag" to "Tags" you can use this response mapping template:
#set($tags = [])
#set($result = {})
#foreach( $item in $ctx.result.items )
    #if($item.PartitionKey == $item.Tag)
        #set($result = $item)
    #else
        $util.qr($tags.add($item.Tag))
    #end
#end
$util.qr($result.put("Tags", $tags))
$util.toJson($result)
This will return a response like this:
{
    "PartitionKey": "...",
    "Name": "...",
    "Tags": ["Tag-Tree", "Tag-Building"]
}
Fundamentally I see no problem with this, but its effectiveness depends on your query patterns. Extending this to the getAllModels use case is doable, but will require a few changes and most likely a really inefficient Scan operation, because the table will be sparse in actual model information given that many records are effectively just tags. You can alleviate this with GSIs pretty easily, but more GSIs means more $.
As an alternative approach, you can store your tags in a separate "Tags" table. This way you only store model information in the Model table and tag information in the Tag table, and you leverage GraphQL to perform the join for you. In this approach, have Query.getAllModels perform a Scan (or Query) on the Model table, and then have a Model.Tags resolver that performs a Query against the Tag table (HK: ModelPartitionKey, SK: Tag). You could then get all tags for a model, and later create a GSI to get all models for a tag. You do need to consider that the nested Model.Tags query will now be called once per model, but Query operations are fast and I've seen this work well in practice.
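A minimal sketch of that Model.Tags resolver, assuming the Tag table is keyed by ModelPartitionKey (hash) and Tag (sort) as described above:

Request mapping template:

{
    "version" : "2017-02-28",
    "operation" : "Query",
    "query" : {
        "expression" : "ModelPartitionKey = :pk",
        "expressionValues" : {
            ":pk" : $util.dynamodb.toDynamoDBJson($ctx.source.PartitionKey)
        }
    }
}

Response mapping template, returning just the sort keys as the [String] the schema expects:

#set($tags = [])
#foreach($item in $ctx.result.items)
    $util.qr($tags.add($item.Tag))
#end
$util.toJson($tags)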
Hope this helps :)
This might be a stupid question, but I really cannot find a way to do that.
So, I have DynamoDB tables and a schema in my AppSync API. In a table, each row has a field whose value is a list. How can I append multiple items to this list without replacing the existing items? How should I write the resolver for that mutation?
Here is the screenshot of my table:
And you can see there are multiple programs in the list.
How can I just append two more programs?
Here is a new screenshot of my resolver:
(screenshot of resolver)
I want to add an existence check to the UpdateItem operation, but my current code does not work. The logic I want is to use the "contains" method to see whether the "toBeAddedProgramId" already exists. The question is how to extract the current program id list from the User table, and how to make the program id list a "list" type (since the contains method only takes a String set or a String).
I hope this question makes sense. Thanks so much guys.
Best,
Harrison
To append items to a list, you should use the DynamoDB UpdateItem operation.
Here is an example if you're using DynamoDB directly.
In AWS AppSync, you can use the DynamoDB data source and specify the DynamoDB UpdateItem operation in your request mapping template.
Your UpdateItem request template could look like the following (modify it to serve your needs):
{
    "version" : "2017-02-28",
    "operation" : "UpdateItem",
    "key" : {
        "id" : { "S" : "${context.arguments.id}" }
    },
    "update" : {
        "expression" : "SET #progs = list_append(#progs, :vals)",
        "expressionNames": {
            "#progs" : "programs"
        },
        "expressionValues": {
            ":vals" : {
                "L": [
                    { "M" : { "id": { "S": "49f2c...." }}},
                    { "M" : { "id": { "S": "931db...." }}}
                ]
            }
        }
    }
}
We have a tutorial that goes into more detail if you are interested in learning more.