Fine-grained access to a single item - GetItem, DynamoDB Resolver, AppSync

How do I provide fine-grained access to a single item in AppSync? I have the following resolver for the GetItem operation:
{
  "version": "2017-02-28",
  "operation": "GetItem",
  "key": {
    "identityId": $util.dynamodb.toDynamoDBJson($ctx.args.identityId),
    "id": $util.dynamodb.toDynamoDBJson($ctx.args.id)
  },
  "condition": {
    "expression": "attribute_exists(#author) AND #author = :author",
    "expressionNames": {
      "#identityId": "identityId",
      "#id": "id",
      "#author": "author"
    },
    "expressionValues": {
      ":author": { "S": "${ctx.identity.cognitoIdentityId}" }
    }
  }
}
However, when I run the query I get:
GraphQL error: Unsupported element '$[condition]'.
That is expected, because according to the documentation there is no condition key for this operation: https://docs.aws.amazon.com/appsync/latest/devguide/resolver-mapping-template-reference-dynamodb.html#aws-appsync-resolver-mapping-template-reference-dynamodb-getitem
My Question
How can I filter/restrict access to items belonging to a particular author (fine-grained access) if I cannot use conditions?

You can filter the result in the response mapping template, as shown below. From my understanding, you derive the author field from the cognitoIdentityId, and your item has a different primary key, which is why you can't use the author when querying.
#if($context.result["author"] == $ctx.identity.cognitoIdentityId)
  $utils.toJson($context.result)
#else
  $utils.unauthorized()
#end

Related

Why do I get 'Item is required' when updating an item in DynamoDB using AWS Step Functions?

I'm trying to push a DynamoDB record through Step Functions while setting up a conditional expression, but for some reason I'm getting the error:
There are Amazon States Language errors in your state machine definition. Fix the errors to continue.
The field 'Item' is required but was missing (at /States/MyStep/Parameters)
I don't want to push an Item. I want to use an update expression.
Here's my code:
{
  "StartAt": "MyStep",
  "States": {
    "MyStep": {
      "Type": "Task",
      "Resource": "arn:aws:states:::dynamodb:putItem",
      "Parameters": {
        "TableName.$": "$.table_name",
        "Key": {
          "test_id_path_method_executor_id": {"S.$": "$.update_key.test_id_path_method_executor_id"},
          "result_timestamp": {"S.$": "$.update_key.result_timestamp"}
        },
        "ConditionExpression": "#max_db < :max_values",
        "ExpressionAttributeValues": {
          ":max_values": {"N.$": "$.result_value"}
        },
        "ExpressionAttributeNames": {
          "#max_db": "max"
        },
        "UpdateExpression": "SET #max_db = :max_values"
      },
      "Next": "EndSuccess"
    },
    "EndSuccess": {
      "Type": "Succeed"
    }
  }
}
What's the issue?
There are 2 main DynamoDB APIs for modifying items:
PutItem
UpdateItem
In a nutshell, PutItem 'updates' an item by replacing it, and because of this it requires the replacement item to be passed in full. This is the API call you're using, which is why you're getting The field 'Item' is required but was missing. It's correct: Item is required when using PutItem.
Instead, you need to use UpdateItem, which does not require the new item in full and will modify the attributes of an item based on an update expression (which is what you have).
In your step function definition, replace:
"Resource": "arn:aws:states:::dynamodb:putItem",
With:
"Resource": "arn:aws:states:::dynamodb:updateItem",

AppSync custom resolver error "Unable to transform Response Template"

I am trying to write a "BatchPutItem" custom resolver, so that I can create multiple items (no more than 25 at a time); it should accept a list of arguments and then perform a batch operation.
Here is the code I have in the custom resolver:
#set($pdata = [])
#foreach($item in ${ctx.args.input})
  $util.qr($item.put("createdAt", $util.time.nowISO8601()))
  $util.qr($item.put("updatedAt", $util.time.nowISO8601()))
  $util.qr($item.put("__typename", "UserNF"))
  $util.qr($item.put("id", $util.defaultIfNullOrBlank($item.id, $util.autoId())))
  $util.qr($pdata.add($util.dynamodb.toMapValues($item)))
#end
{
  "version": "2018-05-29",
  "operation": "BatchPutItem",
  "tables": {
    "Usertable1-staging": $utils.toJson($pdata)
  }
}
Response in the query console section:
{
  "data": {
    "createBatchUNF": null
  },
  "errors": [
    {
      "path": [
        "createBatchUserNewsFeed"
      ],
      "data": null,
      "errorType": "MappingTemplate",
      "errorInfo": null,
      "locations": [
        {
          "line": 2,
          "column": 3,
          "sourceName": null
        }
      ],
      "message": "Unsupported operation 'BatchPutItem'. Datasource Versioning only supports the following operations (TransactGetItems,PutItem,BatchGetItem,Scan,Query,GetItem,DeleteItem,UpdateItem,Sync)"
    }
  ]
}
And the query is:
mutation MyMutation {
  createBatchUNF(input: [{seen: false, userNFUserId: "userID", userNFPId: "pID", userNFPOwnerId: "ownerID"}]) {
    items {
      id
      seen
    }
  }
}
Conflict detection is also turned off.

I was able to solve this issue by disabling DataStore for the entire API.
When we create a new project/backend app using the Amplify console, the Data tab asks us to enable DataStore so that we can start modelling our GraphQL API; that is what enables versioning for the entire API, and versioning prevents BatchPutItem from executing.
So in order to solve this issue, we can follow these steps:
[Note: I am using @aws-amplify/cli]
$ amplify update api
? Please select from one of the below mentioned services: GraphQL
? Select from the options below: Disable DataStore for entire API
Then run amplify push.
This will fix the issue.

How to specify attributes to return from DynamoDB through AppSync

I have an AppSync pipeline resolver. The first function queries an ElasticSearch database for the DynamoDB keys. The second function queries DynamoDB using the provided keys. This was all working well until I ran into the 1 MB limit of AppSync. Since most of the data is in a few attributes/columns I don't need, I want to limit the results to just the attributes I need.
I tried adding AttributesToGet and ProjectionExpression (from here) but both gave errors like:
{
  "data": {
    "getItems": null
  },
  "errors": [
    {
      "path": [
        "getItems"
      ],
      "data": null,
      "errorType": "MappingTemplate",
      "errorInfo": null,
      "locations": [
        {
          "line": 2,
          "column": 3,
          "sourceName": null
        }
      ],
      "message": "Unsupported element '$[tables][dev-table-name][projectionExpression]'."
    }
  ]
}
My DynamoDB function request mapping template looks like this (it returns results as long as the data is less than 1 MB):
#set($ids = [])
#foreach($pResult in ${ctx.prev.result})
  #set($map = {})
  $util.qr($map.put("id", $util.dynamodb.toString($pResult.id)))
  $util.qr($map.put("ouId", $util.dynamodb.toString($pResult.ouId)))
  $util.qr($ids.add($map))
#end
{
  "version": "2018-05-29",
  "operation": "BatchGetItem",
  "tables": {
    "dev-table-name": {
      "keys": $util.toJson($ids),
      "consistentRead": false
    }
  }
}
I contacted the AWS people, who confirmed that ProjectionExpression is not currently supported and that it will be a while before they get to it.
Instead, I created a Lambda to pull the data from DynamoDB.
To limit the results from DynamoDB, I used $ctx.info.selectionSetList in AppSync to get the list of requested columns, then used that list to specify the data to pull from DynamoDB. I needed to get multiple results while maintaining order, so I used BatchGetItem, then merged the results with the original list of IDs using LINQ (which put the DynamoDB results back in the correct order, since BatchGetItem in C# does not preserve sort order the way the AppSync version does).
Because I was using C# with a number of libraries, the cold start time was a little long, so I used Lambda Layers pre-JITted for Linux, which brought the cold start down from ~1.8 seconds to ~1 second (when using 1024 MB of RAM for the Lambda).
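For the AppSync side of that approach, a minimal sketch of a Lambda request mapping template that forwards the requested fields (the payload field names here are hypothetical) could look like:
{
  "version": "2018-05-29",
  "operation": "Invoke",
  "payload": {
    ## Keys gathered by the previous pipeline function
    "keys": $util.toJson($ctx.prev.result),
    ## Fields requested in the GraphQL query, e.g. ["id", "name"]
    "fields": $util.toJson($ctx.info.selectionSetList)
  }
}
The Lambda can then use that fields list to build a ProjectionExpression for its own BatchGetItem call.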
AppSync doesn't support projection, but you can explicitly define which fields to return in the response template, instead of returning the entire result set:
{
  "id": "$ctx.result.get('id')",
  "name": "$ctx.result.get('name')",
  ...
}
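If the list of fields is long, you could build that map dynamically from the request instead; a sketch (assuming a flat selection set, since nested fields appear in $ctx.info.selectionSetList as path strings like "items/id"):
#set($filtered = {})
#foreach($field in $ctx.info.selectionSetList)
  #if($ctx.result.containsKey($field))
    $util.qr($filtered.put($field, $ctx.result.get($field)))
  #end
#end
$util.toJson($filtered)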

How do I run multiple SQL statements in an AppSync resolver?

I have two tables, cuts and holds. In response to an event in my application I want to be able to move the values from a hold entry with a given id across to the cuts table. The naïve way is to do an INSERT and then a DELETE.
How do I run multiple SQL statements in an AppSync resolver to achieve that result? I have tried the following (replacing sql with statements and turning it into an array) without success.
{
  "version": "2017-02-28",
  "operation": "Invoke",
  #set($id = $util.autoId())
  "payload": {
    "statements": [
      "INSERT INTO cuts (id, rollId, length, reason, notes, orderId) SELECT '$id', rollId, length, reason, notes, orderId FROM holds WHERE id=:ID",
      "DELETE FROM holds WHERE id=:ID"
    ],
    "variableMapping": {
      ":ID": "$context.arguments.id"
    },
    "responseSQL": "SELECT * FROM cuts WHERE id = '$id'"
  }
}
If you're using the "AWS AppSync Using Amazon Aurora as a Data Source via AWS Lambda" sample found at https://github.com/aws-samples/aws-appsync-rds-aurora-sample, you won't be able to send multiple statements in the sql field.
If you are using the AWS AppSync integration with the Aurora Serverless Data API, you can pass up to 2 statements in a statements array such as in the example below:
{
  "version": "2018-05-29",
  "statements": [
    "select * from Pets WHERE id='$ctx.args.input.id'",
    "delete from Pets WHERE id='$ctx.args.input.id'"
  ]
}
You can also do it as follows if you're using the "AWS AppSync Using Amazon Aurora as a Data Source via AWS Lambda" sample found at https://github.com/aws-samples/aws-appsync-rds-aurora-sample.
In the resolver, add "sql0" and "sql1" fields (you can name them whatever you want):
{
  "version": "2017-02-28",
  "operation": "Invoke",
  #set($id = $util.autoId())
  "payload": {
    "sql0": "INSERT INTO cuts (id, rollId, length, reason, notes, orderId) SELECT '$id', rollId, length, reason, notes, orderId FROM holds WHERE id=:ID",
    "sql1": "DELETE FROM holds WHERE id=:ID",
    "variableMapping": {
      ":ID": "$context.arguments.id"
    },
    "responseSQL": "SELECT * FROM cuts WHERE id = '$id'"
  }
}
In your Lambda, add the following piece of code:
if (event.sql0) {
  const inputSQL0 = populateAndSanitizeSQL(event.sql0, event.variableMapping, connection);
  await executeSQL(connection, inputSQL0);
}
if (event.sql1) {
  const inputSQL1 = populateAndSanitizeSQL(event.sql1, event.variableMapping, connection);
  await executeSQL(connection, inputSQL1);
}
With this approach, you can send as many SQL statements to your Lambda as you want, and the Lambda will execute them, as sketched below.
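For example, a sketch of that generalization (using a hypothetical statements array in the payload and reusing the sample's populateAndSanitizeSQL and executeSQL helpers) could replace the numbered fields:
// Hypothetical: execute every statement in event.statements, in order.
if (Array.isArray(event.statements)) {
  for (const statement of event.statements) {
    const inputSQL = populateAndSanitizeSQL(statement, event.variableMapping, connection);
    await executeSQL(connection, inputSQL);
  }
}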

AppSync GraphQL mutation server logic in resolvers

I am having trouble finding good sources for, or otherwise figuring out, how to correctly add server-side validation to my AppSync GraphQL mutations.
In essence, I used the AWS dashboard to define my AppSync schema, and so had DynamoDB tables created for me, plus some basic resolvers set up for the data.
Now I need to achieve the following:
I have a player who has inventory and gold
Player calls purchaseItem mutation with item_id
Once this mutation is called I need to perform some checks in the resolver, i.e. check that item_id exists in the 'Items' table of the associated DynamoDB, check that the player has enough gold (again, in the 'Players' table), and if so, write to the Players DynamoDB table, adding the item to their inventory and the new, subtracted gold amount.
I believe the most efficient way to achieve this, resulting in the lowest cost and latency, is to use the "Apache Velocity" templating language in AppSync?
It would be great to see an example showing how to query/write to DynamoDB, handle errors, and resolve the mutation correctly.
For writing to DynamoDB with VTL, you can start with the PutItem template from the tutorial. My request template looks like this:
{
  "version": "2017-02-28",
  "operation": "PutItem",
  "key": {
    "noteId": { "S": "${context.arguments.noteId}" },
    "userId": { "S": "${context.identity.sub}" }
  },
  "attributeValues": {
    "title": { "S": "${context.arguments.title}" },
    "content": { "S": "${context.arguments.content}" }
  }
}
For a query:
{ "version" : "2017-02-28",
"operation" : "Query",
"query" : {
## Provide a query expression. **
"expression": "userId = :userId",
"expressionValues" : {
":userId" : {
"S" : "${context.identity.sub}"
}
}
},
## Add 'limit' and 'nextToken' arguments to this field in your schema to implement pagination. **
"limit": #if(${context.arguments.limit}) ${context.arguments.limit} #else 20 #end,
"nextToken": #if(${context.arguments.nextToken}) "${context.arguments.nextToken}" #else null #end
}
This is based on the Paginated Query template.
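The matching response mapping template for a paginated query usually returns the items along with the pagination token (assuming the schema's result type defines items and nextToken fields):
{
  "items": $util.toJson($ctx.result.items),
  "nextToken": $util.toJson($ctx.result.nextToken)
}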
What you want to look at is Pipeline Resolvers:
https://docs.aws.amazon.com/appsync/latest/devguide/pipeline-resolvers.html
Yes, this requires VTL (the Velocity Template Language).
Pipeline resolvers allow you to perform reads, writes, validation, and anything else you'd like using VTL. What you basically do is chain the inputs and outputs of one template into the next and perform the required processing.
Here's a Medium post showing you how to do it:
https://medium.com/@dabit3/intro-to-aws-appsync-pipeline-functions-3df87ceddac1
In other words, what you can do is:
Have one template that queries the database, then pipeline the result to another template that validates it and either performs the write or rejects the mutation. A sketch of such a write step follows.
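As an illustrative sketch of that final write step (hypothetical table and field names, assuming an earlier pipeline function validated the item and stashed its price in $ctx.stash.itemPrice), a conditional UpdateItem can subtract the gold and add the item in one operation, so the mutation fails if the player cannot afford it:
{
  "version": "2018-05-29",
  "operation": "UpdateItem",
  "key": {
    "id": $util.dynamodb.toDynamoDBJson($ctx.args.playerId)
  },
  "update": {
    ## Assumes 'inventory' already exists as a list attribute
    "expression": "SET gold = gold - :price, inventory = list_append(inventory, :item)",
    "expressionValues": {
      ":price": $util.dynamodb.toDynamoDBJson($ctx.stash.itemPrice),
      ":item": $util.dynamodb.toDynamoDBJson([$ctx.args.item_id])
    }
  },
  "condition": {
    ## Reject the update if the player does not have enough gold
    "expression": "gold >= :minGold",
    "expressionValues": {
      ":minGold": $util.dynamodb.toDynamoDBJson($ctx.stash.itemPrice)
    }
  }
}
If the condition fails, the mutation returns an error, which you can reshape or handle in the response template.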