AWS AppSync Nested Resolver - how to reuse arguments from parent - amazon-web-services

Heyo. I've got an AppSync query whose result type has a field attached to its own resolver. The query accepts an argument that is the same argument the inner resolver needs. For the sake of terseness, I'd like to just pass it down through the context instead of having to specify it twice. The data source for the resolver is a DynamoDB table.
Say the schema looks like
type Query {
    getThings(key: String!): AResult!
}

type AResult {
    getOtherThings(key: String!): String!
}
I could construct a query as such
query Query {
    getThings(key: "123") {
        getOtherThings(key: "123")
    }
}
Which is clumsy and redundant. Ideally, I'd just want to create a query that looks like
query Query {
    getThings(key: "123") {
        getOtherThings
    }
}
And the resolver can pull key from the context of the request and reuse it.
The request template for getOtherThings resolver looks like:
{
    "version": "2017-02-28",
    "operation": "Query",
    "query": {
        "expression": "key = :key",
        "expressionValues": {
            ":key": $util.dynamodb.toDynamoDBJson($context.arguments.key)
        }
    }
}
But $context.arguments.key is null, as are $context.args.key, $ctx.args.key, and $ctx.arguments.key. If I examine the logs from the request when executing getThings, I can see the expected arguments:
{
"logType": "RequestMapping",
"path": [
"getThings"
],
"fieldName": "getThings",
"context": {
"arguments": {
"key": "123"
},
"stash": {},
"outErrors": []
},
"fieldInError": false,
"errors": [],
"parentType": "Query"
}
So I surmise that the context does not persist between the parent resolver (getThings) and its child resolver (getOtherThings), but I can't confirm this from the logs.
Is this even possible? I'm coming up dry searching through the AWS logs.

The answer lies in $ctx.source. $ctx.source is a map containing the resolution of the parent field, so I can grab the key from there:
{
"logType": "RequestMapping",
"path": [
"getThings"
],
"source": {
"key":"123"
},
"fieldName": "getThings",
"context": {
"arguments": {
"key": "123"
},
"stash": {},
"outErrors": []
},
"fieldInError": false,
"errors": [],
"parentType": "Query"
}
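Based on that, a request mapping template for getOtherThings can read the key from the parent's result instead of its own arguments (a sketch, assuming the item returned by getThings carries the key attribute, as the log above shows):

{
    "version": "2017-02-28",
    "operation": "Query",
    "query": {
        "expression": "key = :key",
        "expressionValues": {
            ":key": $util.dynamodb.toDynamoDBJson($context.source.key)
        }
    }
}

The key argument can then be dropped from (or made optional on) getOtherThings in the schema, so the nested field no longer needs to repeat it.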

Related

What's the best practice for unmarshalling data returned from a dynamo operation in aws step functions?

I am running a state machine that performs a DynamoDB query (called using CallAwsService). The format returned looks like this:
{
Items: [
{
"string" : {
"B": blob,
"BOOL": boolean,
"BS": [ blob ],
"L": [
"AttributeValue"
],
"M": {
"string" : "AttributeValue"
},
"N": "string",
"NS": [ "string" ],
"NULL": boolean,
"S": "string",
"SS": [ "string" ]
}
}
]
}
I would like to unmarshal this data efficiently and avoid using a Lambda call for it.
The CDK code we're currently using for the query is below:
import { Duration, RemovalPolicy } from 'aws-cdk-lib'
import { Table } from 'aws-cdk-lib/aws-dynamodb'
import { LogGroup, RetentionDays } from 'aws-cdk-lib/aws-logs'
import { LogLevel, StateMachine, StateMachineType } from 'aws-cdk-lib/aws-stepfunctions'
import { CallAwsService } from 'aws-cdk-lib/aws-stepfunctions-tasks'
import { Construct } from 'constructs'

interface FindItemsStepFunctionProps {
  table: Table
  id: string
}

export const FindItemsStepFunction = (scope: Construct, props: FindItemsStepFunctionProps): StateMachine => {
  const { table, id } = props

  const definition = new CallAwsService(scope, 'Query', {
    service: 'dynamoDb',
    action: 'query',
    parameters: {
      TableName: table.tableName,
      IndexName: 'exampleIndexName',
      KeyConditionExpression: 'id = :id',
      ExpressionAttributeValues: {
        ':id': {
          'S.$': '$.path.id',
        },
      },
    },
    iamResources: ['*'],
  })

  return new StateMachine(scope, id, {
    logs: {
      destination: new LogGroup(scope, `${id}LogGroup`, {
        logGroupName: `${id}LogGroup`,
        removalPolicy: RemovalPolicy.DESTROY,
        retention: RetentionDays.ONE_WEEK,
      }),
      level: LogLevel.ALL,
    },
    definition,
    stateMachineType: StateMachineType.EXPRESS,
    stateMachineName: id,
    timeout: Duration.minutes(5),
  })
}
Can you unmarshal the data downstream? I'm not too well versed in Step Functions; do you have the ability to import utilities?
Unmarshalling DDB JSON is as simple as calling the unmarshall function from the DynamoDB utility package:
https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/modules/_aws_sdk_util_dynamodb.html
You may need to do so downstream, as Step Functions seems to use the low-level client.
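For reference, a minimal sketch of calling it from TypeScript outside Step Functions (the attribute names here are made up for illustration):

import { unmarshall } from "@aws-sdk/util-dynamodb";

// A DynamoDB-typed item, as returned by the low-level client
const raw = {
  id: { S: "123" },
  count: { N: "5" },
  active: { BOOL: true },
};

const plain = unmarshall(raw);
// plain => { id: "123", count: 5, active: true }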
Step Functions still doesn't make it easy enough to call DynamoDB directly from a step in a state machine without using a Lambda function. The main missing parts are handling the different cases of finding zero, one, or more records in a query, and unmarshalling the slightly complicated format of DynamoDB records. Sadly, the $util library is still not supported in Step Functions.
You will need to implement these two pieces as specific steps in the state machine graph.
Here is an outline of the steps that we use as a DynamoDB query template.
The first step is used to provide parameters to the query. This step can be omitted if you define the parameters directly in the query step instead:
"Set Query Parameters": {
"Type": "Pass",
"Next": "DynamoDB Query ...",
"Result": {
"tableName": "<TABLE_NAME>",
"key_value": "<QUERY_KEY>",
"attribute_value": "<ATTRIBUTE_VALUE>"
}
}
The next step is the actual query to DynamoDB. You can also use GetItem instead of Query if you have the record keys.
"Type": "Task",
"Parameters": {
"TableName": "$.tableName",
"IndexName": "<INDEX_NAME_IF_NEEDED>",
"KeyConditionExpression": "#n1 = :v1",
"FilterExpression": "#n2.#n3 = :v2",
"ExpressionAttributeNames": {
"#n1": "<KEY_NAME>",
"#n2": "<ATTRIBUTE_NAME>",
"#n3": "<NESTED_ATTRIBUTE_NAME>"
},
"ExpressionAttributeValues": {
":v1": {
"S.$": "$.key_value"
},
":v2": {
"S.$": "$.attribute_value"
}
},
"ScanIndexForward": false
},
"Resource": "arn:aws:states:::aws-sdk:dynamodb:query",
"ResultPath": "$.ddb_record",
"ResultSelector": {
"result.$": "$.Items[0]"
},
"Next": "Check for DDB Object"
}
The above example seems a bit complicated, using both ExpressionAttributeNames and ExpressionAttributeValues. However, it makes it possible to query on nested attributes such as item.id.
In this example, we only take the first item response with $.Items[0]. However, you can take all the results if you need more than one.
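For example, if you need every matched item rather than just the first, only the selector changes (downstream steps would then have to handle a list instead of a single record):

"ResultSelector": {
    "results.$": "$.Items"
}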
The next step is to check if the query returned a record or not.
"Check for DDB Object": {
"Type": "Choice",
"Choices": [
{
"Variable": "$.ddb_record.result",
"IsNull": false,
"Comment": "Found Context Object",
"Next": "Parse DDB Object"
}
],
"Default": "Do Nothing"
}
And lastly, to answer your original question, we can parse the query result, in case we have one:
"Parse DDB Object": {
"Type": "Pass",
"Parameters": {
"string_object.$": "$.ddb_record.result.string_object.S",
"bool_object.$": "$.ddb_record.result.bool_object.Bool",
"dict_object": {
"nested_dict_object.$": "$.ddb_record.result.item.M.name.S",
},
"dict_object_full.$": "States.StringToJson($.ddb_record.result.JSON_object.S)"
},
"ResultPath": "$.parsed_ddb_record",
"End": true
}
Please note that:
Simple strings are easily converted by "string_object.$": "$.ddb_record.result.string_object.S"
The same goes for numbers or booleans, e.g. "bool_object.$": "$.ddb_record.result.bool_object.Bool"
Nested objects are parsed via the map object ("item.name.$": "$.ddb_record.result.item.M.name.S", for example)
Creation of a JSON object can be achieved by using States.StringToJson
The parsed object is added as a new entry on the flow using "ResultPath": "$.parsed_ddb_record"
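To illustrate, if the queried record had attributes matching the placeholder names above (a string_object string, a bool_object boolean, an item map with a name, and a JSON_object string containing JSON), the Pass state would add something like this under $.parsed_ddb_record:

{
    "string_object": "some string",
    "bool_object": true,
    "dict_object": {
        "nested_dict_object": "some nested name"
    },
    "dict_object_full": {
        "some_key": "some value"
    }
}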

AWS Appsync Queries and Mutations return null, but data written in DynamoDB

I executed an AppSync mutation and query in the AWS AppSync console.
The mutation finished without error and the data was written to DynamoDB successfully, but the mutation's response is:
{
"data": {
"createCheckRequest": null
}
}
and the query's response is:
{
"data": {
"getCheckRequests": null
},
"errors": [
{
"path": [
"getCheckRequests"
],
"locations": null,
"message": "Can't resolve value (/getCheckRequests) : type mismatch error, expected type LIST"
}
]
}
My mapping for mutation:
Request:
{
    "version": "2017-02-28",
    "operation": "PutItem",
    "key": {
        "id": $util.dynamodb.toDynamoDBJson($util.autoId()),
        "createdDate": $util.dynamodb.toDynamoDBJson($util.time.nowISO8601())
    },
    "attributeValues": $util.dynamodb.toMapValuesJson($ctx.args.input)
}
Response:
#if($ctx.error)
$util.error($ctx.error.message, $ctx.error.type)
#end
#if($ctx.result.items.size()>0)
$util.toJson($ctx.result.items[0])
#else
null
#end
Am I missing anything?
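One thing to check, as a sketch rather than a verified fix: for a PutItem operation, $ctx.result is the written item itself (there is no items list), so the mutation's response template would normally just serialize the result:

#if($ctx.error)
$util.error($ctx.error.message, $ctx.error.type)
#end
$util.toJson($ctx.result)

The items-based template above matches the shape of a Query result; similarly, the getCheckRequests resolver has to return a list (for example $util.toJson($ctx.result.items)) to satisfy the LIST type declared in the schema, which is what the type mismatch error points at.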

Insert directly multi data rows with multi column to DynamoDB using API Gateway

I'm trying to insert data into DynamoDB directly through API Gateway. I can insert a single row, but I get stuck inserting multiple rows; it may be a problem with the mapping template (I always get a com.amazon.coral.service#SerializationException error).
My DynamoDB table ex_table has the structure: time, column1, column2.
My gateway mapping template:
#set($inputRoot = $input.json('$.items')})
{
"TableName": "ex_table",
"Item":
[
#foreach($elem in $inputRoot) {
"time": {"S": $input.json('$.time')},
"column1": "$elem.column1.S",
"column2": "$elem.column2.S",
}#if($foreach.hasNext),#end
#end
]
}
And this is my request body:
{
"time": "2021-03-31 16:50:00",
"items": [
{
"column1": "Item1",
"column2": "Attr1"
},
{
"column1": "Item2",
"column2": "Attr2"
}
]
}
Can you guys help me with this? Many thanks!
I think what you're looking for is BatchWriteItem in the DynamoDB API Reference. Instead of the normal Item: {...} syntax for single writes, BatchWriteItem takes a slightly different shape:
{"RequestItems":{
"TableName": [
{"PutRequest" : {
"Item": {
"Name":{"S" : "Some Name"},
"Category":{"S" : "Some Category"}
}
}},
{"PutRequest" : {
"Item": {
"Name":{"S" : "Some Name 2"},
"Category":{"S" : "Some Category 2"}
}
}}
]
}}
Combined with changing the action within API Gateway to BatchWriteItem, updating your mapping template like this (untested) should get you there (or close):
#set($inputRoot = $input.path('$.items'))
{
    "RequestItems": {
        "ex_table": [
            #foreach($elem in $inputRoot)
            {
                "PutRequest": {
                    "Item": {
                        "time": {"S": $input.json('$.time')},
                        "column1": {"S": "$elem.column1"},
                        "column2": {"S": "$elem.column2"}
                    }
                }
            }#if($foreach.hasNext),#end
            #end
        ]
    }
}
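Also note that a single BatchWriteItem call accepts at most 25 put/delete requests, so larger payloads would need to be split into batches.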

Run query by regex on _id field in Elasticsearch

I'm trying to run a regex query in Elasticsearch against the _id field, but I'm getting this error:
Can only use wildcard queries on keyword and text fields - not on
[_id] which is of type [_id]
I've tried regexp:
{
    "query": {
        "regexp": {
            "_id": {
                "value": "test-product-all-user_.*",
                "flags": "ALL",
                "max_determinized_states": 10000,
                "rewrite": "constant_score"
            }
        }
    }
}
and wildcard:
{
    "query": {
        "wildcard": {
            "_id": {
                "value": "test-product-all-user_.*",
                "boost": 1.0,
                "rewrite": "constant_score"
            }
        }
    }
}
But both threw the same error.
This is the complete error just in case:
{ "error": {
"root_cause": [
{
"type": "query_shard_exception",
"reason": "Can only use wildcard queries on keyword and text fields - not on [_id] which is of type [_id]",
"index_uuid": "Cg0zrr6dRZeHJ8Jmvh5HMg",
"index": "explore_segments_v3"
}
],
"type": "search_phase_execution_exception",
"reason": "all shards failed",
"phase": "query",
"grouped": true,
"failed_shards": [
{
"shard": 0,
"index": "explore_segments_v3",
"node": "-ecTRBmnS2OgjHrrq6GCOw",
"reason": {
"type": "query_shard_exception",
"reason": "Can only use wildcard queries on keyword and text fields - not on [_id] which is of type [_id]",
"index_uuid": "Cg0zrr6dRZeHJ8Jmvh5HMg",
"index": "explore_segments_v3"
}
}
] }, "status": 400 }
_id is a special kind of field in Elasticsearch. It's not really an indexed field like other text fields; it's actually generated based on the UID of the document.
You can refer to this link for more information https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-id-field.html
As per the documentation, it only supports a limited set of queries (term, terms, match, query_string, simple_query_string), and if you want to do more advanced text searches like wildcard or regexp, you will need to index the ID into an actual text field on the document itself.
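If you do copy the ID into its own indexed field (called doc_id here purely as an example), a wildcard query against it would look like the sketch below. Note that wildcard patterns use * and ?, not regex syntax like .*:

{
    "query": {
        "wildcard": {
            "doc_id": {
                "value": "test-product-all-user_*"
            }
        }
    }
}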

How to iterate and get the properties and values of a JSONAPI response in emberJS?

I have the following Ember request:
this.store.createRecord('food_list', requestObj)
  .save()
  .then((response) => {
    console.log(response);
    console.log(response.id); // This is working
    console.log(response.food_list_code); // this does NOT work !!!!!
  });
It calls an API, saves a record to the database, and then returns the following response:
{
"links": {
"self": "/api/food_list"
},
"data": {
"type": "",
"id": "da6b8615-3f4334-550544442",
"attributes": {
"food_list_date": "2013-02-14 23:35:19",
"food_list_id": "da6b8615-3f4334-550544442",
"food_list_code": "GORMA",
},
"relationships": {
"food_list_parameters": {
"data": [
{
"type": "food_list_parameter",
"id": "RERAFFASD9ASD09ASDFA0SDFASD"
}
]
},
"food_new_Name": {
"data": {
"type": "food_new_Name",
"id": "AKASDJFALSKDFKLSDF23W32KJ2L23"
}
}
},
"links": {
"self": "/api/BLAH/BLAH/BLAH"
}
}
}
but since the above response is JSON:API wrapped in an Ember object, I don't know how to parse it.
If I try to get response.id, I get the string da6b8615-3f4334-550544442.
But how do I get the value of food_list_code from the response? Or how can I iterate the response object to get "food_list_code" and "food_list_date"?
The output of console.log(response) is the following Ember class:
Class {__ember1500143184544: "ember1198", store: Class, _internalModel: InternalModel, currentState...
I appreciate your help.
M.
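With Ember Data's default JSON:API setup, a record only exposes the attributes declared on its model; attribute names are camelized on the model, while the default JSONAPISerializer expects dasherized payload keys. Since this payload uses snake_case keys, a sketch of the model plus a serializer override (file names and the model name are assumptions) would be:

// app/models/food-list.js
import Model, { attr } from '@ember-data/model';

export default class FoodListModel extends Model {
  @attr('string') foodListDate;
  @attr('string') foodListCode;
}

// app/serializers/food-list.js
// Map the model's camelCase attribute names to the payload's snake_case keys.
import JSONAPISerializer from '@ember-data/serializer/json-api';
import { underscore } from '@ember/string';

export default class FoodListSerializer extends JSONAPISerializer {
  keyForAttribute(attr) {
    return underscore(attr);
  }
}

With that in place, response.foodListCode (or response.get('foodListCode') on older Ember versions) should return "GORMA", and the declared attributes can be iterated with response.eachAttribute((name) => ...). response.food_list_code stays undefined because the record never exposes the raw payload key.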