AWS AppSync & DynamoDB Single Table Design: PutItem based on another item's field value - amazon-web-services

I'm new to AWS AppSync and I have adopted the single-table pattern in DynamoDB. Now I am trying to create an item based on a particular field value of an existing item in the same table. For example, I have a table called transaction which holds two types of records:
Request
Response
As you can see in the table above, I can insert (PutItem) multiple responses for a particular request. Before I insert a new response, I need to validate whether the request (RequestId) already exists. Is there any way to do this via a condition expression in the resolver? Below is my current request resolver code, which is not working as expected.
#set( $Id = $util.autoId() )
{
    "version" : "2017-02-28",
    "operation" : "PutItem",
    "key" : {
        "PK": $util.dynamodb.toDynamoDBJson("USER#$ctx.args.input.UserId"),
        "SK": $util.dynamodb.toDynamoDBJson("RESPONSE#$Id")
    },
    "attributeValues" : $util.dynamodb.toMapValuesJson($ctx.args.input),
    "condition": {
        "expression": "SK = :SK",
        "expressionValues" : {
            ":SK" : {
                "S" : "REQUEST#${ctx.args.input.RequestId}"
            }
        }
    }
}

You could do this in one request mapping template by using DynamoDB transactions; see https://docs.aws.amazon.com/appsync/latest/devguide/tutorial-dynamodb-transact.html.
In one of the "transactWriteItems", you update an arbitrary value (or perhaps a count of responses) on the request item, with a condition that checks whether the request item exists with that RequestId in the SK. If the condition succeeds, the request item is updated.
Also make sure the response item is included in your "transactWriteItems" so that the response item is only written if the request condition passes.
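As a rough illustration only (not tested; the table name "transaction" and the "ResponseCount" attribute are assumptions), the request mapping template for such a transaction could look something like this:
#set( $Id = $util.autoId() )
{
    "version": "2018-05-29",
    "operation": "TransactWriteItems",
    "transactItems": [
        {
            ## Bump a counter on the request item, but only if that
            ## request item actually exists.
            "table": "transaction",
            "operation": "UpdateItem",
            "key": {
                "PK": $util.dynamodb.toDynamoDBJson("USER#$ctx.args.input.UserId"),
                "SK": $util.dynamodb.toDynamoDBJson("REQUEST#$ctx.args.input.RequestId")
            },
            "update": {
                "expression": "SET ResponseCount = if_not_exists(ResponseCount, :zero) + :one",
                "expressionValues": {
                    ":zero": { "N": "0" },
                    ":one": { "N": "1" }
                }
            },
            "condition": {
                "expression": "attribute_exists(PK) AND attribute_exists(SK)"
            }
        },
        {
            ## The response item is only written if the condition above passes,
            ## because the whole transaction succeeds or fails together.
            "table": "transaction",
            "operation": "PutItem",
            "key": {
                "PK": $util.dynamodb.toDynamoDBJson("USER#$ctx.args.input.UserId"),
                "SK": $util.dynamodb.toDynamoDBJson("RESPONSE#$Id")
            },
            "attributeValues": $util.dynamodb.toMapValuesJson($ctx.args.input)
        }
    ]
}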

No, you won't be able to do a conditional PutItem based on another entry. In this case you'd want to use pipeline resolvers. In your first function you'd fetch the request item, and in the second you can then do your conditional PutItem - the previous GetItem result will be available as $ctx.prev.result.
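A minimal sketch of those two pipeline functions, reusing the key names from the question (untested; adjust to your schema):
## Function 1 - request mapping template: fetch the request item
{
    "version": "2018-05-29",
    "operation": "GetItem",
    "key": {
        "PK": $util.dynamodb.toDynamoDBJson("USER#$ctx.args.input.UserId"),
        "SK": $util.dynamodb.toDynamoDBJson("REQUEST#$ctx.args.input.RequestId")
    }
}

## Function 1 - response mapping template
$util.toJson($ctx.result)

## Function 2 - request mapping template: fail if the request was not found,
## otherwise put the response item
#if( $util.isNull($ctx.prev.result) )
    $util.error("Request $ctx.args.input.RequestId does not exist", "NotFound")
#end
#set( $Id = $util.autoId() )
{
    "version": "2018-05-29",
    "operation": "PutItem",
    "key": {
        "PK": $util.dynamodb.toDynamoDBJson("USER#$ctx.args.input.UserId"),
        "SK": $util.dynamodb.toDynamoDBJson("RESPONSE#$Id")
    },
    "attributeValues": $util.dynamodb.toMapValuesJson($ctx.args.input)
}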


AWS AppSync RDS: $util.rds.toJSONObject() Nested Objects

I am using Amazon RDS with AppSync. I've created a resolver that joins two tables to get a one-to-one association between them and returns columns from both tables. What I would like to do is nest some of those columns under a key in the resulting parsed JSON object evaluated using $util.rds.toJSONObject().
Here's the schema:
type Parent {
    col1: String
    col2: String
    child: Child
}

type Child {
    col3: String
    col4: String
}
Here's the resolver:
{
    "version": "2018-05-29",
    "statements": [
        "SELECT parent.*, child.col3 AS `child.col3`, child.col4 AS `child.col4` FROM parent LEFT JOIN child ON parent.col1 = child.col3"
    ]
}
I tried naming the resulting columns with dot syntax, but $util.rds.toJSONObject() doesn't put col3 and col4 under the child key. The reason it should is that otherwise Apollo won't be able to cache and parse the entity.
Note: Dot syntax is not documented anywhere. Some ORMs use the dot-syntax technique to convert SQL rows into properly nested JSON objects.
The @Aaron_H comment and answer were useful for me, but the response mapping template provided in the answer didn't work for me.
I managed to get a working response mapping template for my case, which is similar to the case in the question. The images below show the setup for query -> message(id: ID) { ... } (one message and the associated user will be returned):
SQL request to user table;
SQL request to message table;
SQL JOIN tables request for message id=1;
GraphQL Schema;
Request and response templates;
AWS AppSync query.
https://github.com/xai1983kbu/apollo-server/blob/pulumi_appsync_2/bff_pulumi/graphql/resolvers/Query.message.js
The next example is for the messages query:
https://github.com/xai1983kbu/apollo-server/blob/pulumi_appsync_2/bff_pulumi/graphql/resolvers/Query.messages.js
Assuming your resolver is expected to return a list of Parent types, i.e. [Parent!]!, you can write your response mapping template logic like this:
#if($ctx.error)
    $util.error($ctx.error.message, $ctx.error.type)
#end
#set($output = $utils.rds.toJsonObject($ctx.result)[0])
## Make sure to handle instances where fields are null
## or don't exist according to your business logic
#foreach( $item in $output )
    #set($item.child = {
        "col3": $item.get("child.col3"),
        "col4": $item.get("child.col4")
    })
#end
$util.toJson($output)

AppSync GraphQL mutation server logic in resolvers

I am having trouble finding good sources for / figuring out how to correctly add server-side validation to my AppSync GraphQL mutations.
In essence, I used the AWS dashboard to define my AppSync schema, and hence had DynamoDB tables created for me, plus some basic resolvers set up for the data.
Now I need to achieve the following:
I have a player who has an inventory and gold.
The player calls a purchaseItem mutation with an item_id.
Once this mutation is called, I need to perform some checks in the resolver, i.e. check whether item_id exists in the 'Items' table of the associated DynamoDB, check whether the player has enough gold (again, in the 'Players' table), and if so, write to the Players table by adding the item to their inventory and subtracting the gold amount.
I believe the most efficient way to achieve this, with the least cost and latency, is to use the "Apache Velocity" templating language for AppSync?
It would be great to see an example of this showing how to query / write to DynamoDB, handle errors, and resolve the mutation correctly.
For writing to DynamoDB with VTL, follow the AppSync DynamoDB resolver mapping template tutorial; you can start with the PutItem template. My request template looks like this:
{
    "version" : "2017-02-28",
    "operation" : "PutItem",
    "key" : {
        "noteId" : { "S" : "${context.arguments.noteId}" },
        "userId" : { "S" : "${context.identity.sub}" }
    },
    "attributeValues" : {
        "title" : { "S" : "${context.arguments.title}" },
        "content": { "S" : "${context.arguments.content}" }
    }
}
For a query:
{
    "version" : "2017-02-28",
    "operation" : "Query",
    "query" : {
        ## Provide a query expression. **
        "expression": "userId = :userId",
        "expressionValues" : {
            ":userId" : {
                "S" : "${context.identity.sub}"
            }
        }
    },
    ## Add 'limit' and 'nextToken' arguments to this field in your schema to implement pagination. **
    "limit": #if(${context.arguments.limit}) ${context.arguments.limit} #else 20 #end,
    "nextToken": #if(${context.arguments.nextToken}) "${context.arguments.nextToken}" #else null #end
}
This is based on the Paginated Query template.
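For completeness, a matching response mapping template for the paginated query could look like this (a small sketch; the field names must match your schema's paginated type):
#if( $ctx.error )
    $util.error($ctx.error.message, $ctx.error.type)
#end
{
    "items": $util.toJson($ctx.result.items),
    "nextToken": $util.toJson($ctx.result.nextToken)
}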
What you want to look at is Pipeline Resolvers:
https://docs.aws.amazon.com/appsync/latest/devguide/pipeline-resolvers.html
Yes, this requires VTL (Velocity Template Language).
It allows you to perform reads, writes, validation, and anything else you'd like using VTL. What you basically do is chain the inputs and outputs into the next template and perform the required processing.
Here's a Medium post showing you how to do it:
https://medium.com/@dabit3/intro-to-aws-appsync-pipeline-functions-3df87ceddac1
In other words, what you can do is:
Have one template that queries the database, then pipeline the result to another template that validates the result and performs the insert if the checks succeed, or fails the mutation otherwise.
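To make that concrete, here is a purely illustrative sketch of what the last function in such a purchaseItem pipeline might look like, assuming an earlier function has already looked up the item and stashed its price in $ctx.stash.itemPrice (the field and argument names here are hypothetical, not from the question's schema):
## Final pipeline function - request mapping template:
## subtract gold and append the item, but only if the player can afford it.
{
    "version": "2018-05-29",
    "operation": "UpdateItem",
    "key": {
        "id": $util.dynamodb.toDynamoDBJson($ctx.args.playerId)
    },
    "update": {
        "expression": "SET gold = gold - :price, inventory = list_append(inventory, :item)",
        "expressionValues": {
            ":price": $util.dynamodb.toDynamoDBJson($ctx.stash.itemPrice),
            ":item": { "L": [ $util.dynamodb.toDynamoDBJson($ctx.args.itemId) ] }
        }
    },
    "condition": {
        ## Reject the purchase if the player does not have enough gold
        "expression": "gold >= :minGold",
        "expressionValues": {
            ":minGold": $util.dynamodb.toDynamoDBJson($ctx.stash.itemPrice)
        }
    }
}
If the condition fails, the mutation returns a ConditionalCheckFailedException that you can surface to the client from the response mapping template.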

Map different Sort Key responses to Appsync Schema values

So here is my schema:
type Model {
    PartitionKey: ID!
    Name: String
    Version: Int
    FBX: String
    # ms since epoch
    CreatedAt: AWSTimestamp
    Description: String
    Tags: [String]
}

type Query {
    getAllModels(count: Int, nextToken: String): PaginatedModels!
}

type PaginatedModels {
    models: [Model!]!
    nextToken: String
}
I would like to call 'getAllModels' and have all of its data, and all of its tags, filled in.
But here is the thing: tags are stored via sort keys, like so:
PartitionKey | SortKey
Model-0      | Model-0
Model-0      | Tag-Tree
Model-0      | Tag-Building
Is it possible to transform the 'Tag' sort keys into the Tags: [String] array in the schema via a DynamoDB resolver? Or must I do something extra fancy through a lambda? Or is there a smarter way to do this?
To clarify, are you storing objects like this in DynamoDB:
{ PartitionKey (HASH), Tag (SortKey), Name, Version, FBX, CreatedAt, Description }
and using a DynamoDB Query operation to fetch all rows for a given hash key,
Query #PartitionKey = :PartitionKey
and getting back a list of objects, some of which have a different "Tag" value and one of which is "Model-0" (i.e. the same value as the partition key)? I assume that record contains all the other values for the record, e.g.
[
    { PartitionKey, Tag: 'ValueOfPartitionKey', Name, Version, FBX, CreatedAt, ... },
    { PartitionKey, Tag: 'Tag-Tree' },
    { PartitionKey, Tag: 'Tag-Building' }
]
You can definitely write resolver logic without too much hassle that reduces the list of model objects into a single object with a list of "Tags". Let's start with a single item and see how to implement a getModel(id: ID!): Model query:
First define the request mapping template that will get all rows for a partition key:
{
    "version" : "2017-02-28",
    "operation" : "Query",
    "query" : {
        "expression": "#PartitionKey = :id",
        "expressionValues" : {
            ":id" : {
                "S" : "${ctx.args.id}"
            }
        },
        "expressionNames": {
            "#PartitionKey": "PartitionKey" ## or whatever the table's hash key is
        }
    },
    ## The limit will have to be sufficiently large to get all rows for a key
    "limit": $util.defaultIfNull(${ctx.args.limit}, 100)
}
Then to return a single model object that reduces "Tag" to "Tags" you can use this response mapping template:
#set($tags = [])
#set($result = {})
#foreach( $item in $ctx.result.items )
    #if($item.PartitionKey == $item.Tag)
        #set($result = $item)
    #else
        $util.qr($tags.add($item.Tag))
    #end
#end
$util.qr($result.put("Tags", $tags))
$util.toJson($result)
This will return a response like this:
{
    "PartitionKey": "...",
    "Name": "...",
    "Tags": ["Tag-Tree", "Tag-Building"]
}
Fundamentally I see no problem with this, but its effectiveness depends on your query patterns. Extending this to the getAll use case is doable but will require a few changes and most likely a really inefficient Scan operation, because the table will be sparse in actual model information since many records are effectively just tags. You can alleviate this with GSIs pretty easily, but more GSIs means more $.
As an alternative approach, you can store your tags in a different "Tags" table. This way you only store model information in the Model table and tag information in the Tag table, and leverage GraphQL to perform the join for you. In this approach, have Query.getAllModels perform a Scan (or Query) on the Model table and then have a Model.Tags resolver that performs a Query against the Tag table (HK: ModelPartitionKey, SK: Tag). You could then get all tags for a model and later create a GSI to get all models for a tag. You do need to consider that the nested Model.Tags query will now get called once per model, but Query operations are fast and I've seen this work well in practice.
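As a rough sketch of that second approach (table and key names here are assumed, not taken from the question), the Model.Tags field resolver could use a Query request mapping template like this, with a response template that flattens the items into a list of strings:
## Model.Tags - request mapping template (queries the separate Tag table)
{
    "version": "2017-02-28",
    "operation": "Query",
    "query": {
        "expression": "ModelPartitionKey = :modelId",
        "expressionValues": {
            ":modelId": { "S": "${ctx.source.PartitionKey}" }
        }
    }
}

## Model.Tags - response mapping template
#set($tags = [])
#foreach( $item in $ctx.result.items )
    $util.qr($tags.add($item.Tag))
#end
$util.toJson($tags)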
Hope this helps :)

Append item to list using AWS AppSync to DynamoDB

This might be a stupid question, but I really cannot find a way to do this.
So, I have DynamoDB tables and I have a schema in my AppSync API. In a table, for each row, there is a field whose value is a list. How can I append multiple items to this list without replacing the existing items? How should I write the resolver for that mutation?
Here is the screenshot of my table:
And you can see there are multiple programs in the list.
How can I just append two more programs?
Here is a new screenshot of my resolver:
I want to add an existence check to the UpdateItem operation, but my current code does not work. The logic I want is to use the "contains" method to see whether the "toBeAddedProgramId" already exists. But the question is: how do I extract the current program id list from the User table, and how do I make the program id list a "list" type (since the contains method only takes a String set or a String)?
I hope this question makes sense. Thanks so much guys.
Best,
Harrison
To append items to a list, you should use the DynamoDB UpdateItem operation.
Here is an example if you're using DynamoDB directly
In AWS AppSync, you can use the DynamoDB data source and specify the DynamoDB UpdateItem operation in your request mapping template.
Your UpdateItem request template could look like the following (modify it to serve your needs):
{
    "version" : "2017-02-28",
    "operation" : "UpdateItem",
    "key" : {
        "id" : { "S" : "${context.arguments.id}" }
    },
    "update" : {
        "expression" : "SET #progs = list_append(#progs, :vals)",
        "expressionNames": {
            "#progs" : "programs"
        },
        "expressionValues": {
            ":vals" : {
                "L": [
                    { "M" : { "id": { "S": "49f2c...." }}},
                    { "M" : { "id": { "S": "931db...." }}}
                ]
            }
        }
    }
}
We have a tutorial here that goes into more detail if you are interested in learning more.
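The template above does not cover the existence check asked about in the question. One option, sketched below and not part of the original answer, is to change the programs attribute to a string set (SS) so that DynamoDB's ADD action ignores duplicates for you, and to add a condition so the update only runs if the user item exists; this would replace the "update" block in the template above:
"update" : {
    "expression" : "ADD #progs :vals",
    "expressionNames": {
        "#progs" : "programs"
    },
    "expressionValues": {
        ":vals" : { "SS": [ "49f2c....", "931db...." ] }
    }
},
"condition" : {
    "expression" : "attribute_exists(id)"
}
As the question itself notes, contains() only operates on strings and string sets, which is why checking membership in a list of maps doesn't work directly; if you must keep the list type, a pipeline resolver that reads the current list first is the usual workaround.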

AWS iot dynamodb rule ${value} NoSuchElementException

I'm trying to set an AWS IOT rule to send data to DynamoDB without the help of a lambda.
My rule query statement is : SELECT *, topic() AS topic, timestamp() AS timestamp FROM '+/#'
My data arrives fine in AWS IoT, as I'm successfully retrieving it with a Lambda. However, even after following the developer guide to create the rule, and setting the two form fields to ${topic} and ${timestamp} so the information gets passed on to DynamoDB, I get nothing in DynamoDB and I find the following exception in CloudWatch:
MESSAGE:Dynamo Insert record failed. The error received was NoSuchElementException. Message arrived on: myTopic/data, Action: dynamo, Table: myTable, HashKeyField: topic, HashKeyValue: , RangeKeyField: Some(timestamp), RangeKeyValue:
HashKeyValue and RangeKeyValue seem to be empty. Why?
I also posted the question on the AWS forum : https://forums.aws.amazon.com/thread.jspa?threadID=267987
Suppose your device sends this payload:
mess = {"reported":
            {"light": "blue", "Temperature": int(temp_data),
             "timestamp": str(pd.to_datetime(time.time()))}}
args.message = mess
You should query as:
SELECT message.reported.* FROM '#'
Then, set up DynamoDB hash key value as ${MessageID()}
You will get:
MessageID || Data
1527010174562 { "light" : { "S" : "blue" }, "Temperature" : { "N" : "41" }, "timestamp" : { "S" : "1970-01-01 00:00:01.527010174" }}
Then you can easily extract values using Lambda and send them to S3 via Data Pipeline, or to Firehose to create a data stream.