"type mismatch error, expected type LIST" for querying a one-to-many relationship in AppSync - amazon-web-services

The schema:
type User {
id: ID!
createdCurricula: [Curriculum]
}
type Curriculum {
id: ID!
title: String!
creator: User!
}
The resolver to query all curricula of a given user:
{
"version" : "2017-02-28",
"operation" : "Query",
"query" : {
## Provide a query expression. **
"expression": "userId = :userId",
"expressionValues" : {
":userId" : {
"S" : "${context.source.id}"
}
}
},
"index": "userIdIndex",
"limit": #if(${context.arguments.limit}) ${context.arguments.limit} #else 20 #end,
"nextToken": #if(${context.arguments.nextToken}) "${context.arguments.nextToken}" #else null #end
}
The response map:
{
"items": $util.toJson($context.result.items),
"nextToken": #if(${context.result.nextToken}) "${context.result.nextToken}" #else null #end
}
The query:
query {
getUser(id: "0b6af629-6009-4f4d-a52f-67aef7b42f43") {
id
createdCurricula {
title
}
}
}
The error:
{
"data": {
"getUser": {
"id": "0b6af629-6009-4f4d-a52f-67aef7b42f43",
"createdCurricula": null
}
},
"errors": [
{
"path": [
"getUser",
"createdCurricula"
],
"locations": null,
"message": "Can't resolve value (/getUser/createdCurricula) : type mismatch error, expected type LIST"
}
]
}
The CurriculumTable has a global secondary index titled userIdIndex, which has userId as the partition key.
If I change the response map to this:
$util.toJson($context.result.items)
The output is the following:
{
"data": {
"getUser": {
"id": "0b6af629-6009-4f4d-a52f-67aef7b42f43",
"createdCurricula": null
}
},
"errors": [
{
"path": [
"getUser",
"createdCurricula"
],
"errorType": "MappingTemplate",
"locations": [
{
"line": 4,
"column": 5
}
],
"message": "Unable to convert \n{\n [{\"id\":\"87897987\",\"title\":\"Test Curriculum\",\"userId\":\"0b6af629-6009-4f4d-a52f-67aef7b42f43\"}],\n} to class java.lang.Object."
}
]
}
If I take that string and run it through a console.log in my frontend app, I get:
{
[{"id":"2","userId":"0b6af629-6009-4f4d-a52f-67aef7b42f43"},{"id":"1","userId":"0b6af629-6009-4f4d-a52f-67aef7b42f43"}]
}
That's clearly an object. How do I make it... not an object, so that AppSync properly reads it as a list?
SOLUTION
My response map had a set of curly braces around it. I'm pretty sure those were placed there by Amazon's template generator. Removing them fixed it.
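In other words, with the generator's braces the template effectively rendered as
{
  $util.toJson($context.result.items)
}
which is what the "Unable to convert ... to class java.lang.Object" error above is complaining about. The working response mapping template is just the expression on its own:
$util.toJson($context.result.items)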

I don't think I'm seeing your complete schema; I was expecting something like:
schema {
query: Query
}
where Query is the root query type. In fact, you didn't share your Query definition, so assuming you have the right Query definition, the main problem is in your response template.
> "items": $util.toJson($context.result.items)
This means that you are passing a collection named "items" to the GraphQL query engine, while the schema refers to this collection as createdCurricula. The response mapping template is the right place to fix this: just replace the line above with the following.
"createdCurricula": $util.toJson($context.result.items),
The main thing to note here is that the mapping template is a bridge between your data sources and GraphQL. Feel free to do any computation or name mapping, but don't forget that the object names in that response JSON are the ones that must match the schema/query definition.
Thanks.
Musema

Change the result mapping to $util.toJson($ctx.result.data.posts).

The exception message says that it expected type LIST.
Looking at:
{
[{"id":"2","userId":"0b6af629-6009-4f4d-a52f-67aef7b42f43"},{"id":"1","userId":"0b6af629-6009-4f4d-a52f-67aef7b42f43"}]
}
I don't see that createdCurricula is a LIST.
What is currently in DDB is:
"id": "0b6af629-6009-4f4d-a52f-67aef7b42f43",
"createdCurricula": null


AppFlow upsert error: ID does not exist in the destination connector

Creating an AppFlow flow from an S3 bucket to Salesforce through CDK with the upsert option, using an existing connection from S3 to Salesforce -
new appflow.CfnConnectorProfile(this, 'Connector',{
"connectionMode": "Public",
"connectorProfileName":"connection_name",
"connectorType":"Salesforce"
})
Destination flow Code -
new appflow.CfnFlow(this, 'Flow', {
destinationFlowConfigList: [
{
"connectorProfileName": "connection_name",
"connectorType": "Salesforce",
"destinationConnectorProperties": {
"salesforce": {
"errorHandlingConfig": {
"bucketName": "bucket-name",
"bucketPrefix": "subfolder",
},
"idFieldNames": [
"ID"
],
"object": "object_name",
"writeOperationType": "UPSERT"
}
}
}
],
..... other props ....
}
tasks: [
{
"taskType":"Filter",
"sourceFields": [
"ID",
"Some other fields",
...
],
"connectorOperator": {
"salesforce": "PROJECTION"
}
},
{
"taskType":"Map",
"sourceFields": [
"ID"
],
"taskProperties": [
{
"key":"SOURCE_DATA_TYPE",
"value":"Text"
},
{
"key":"DESTINATION_DATA_TYPE",
"value":"Text"
}
],
"destinationField": "ID",
"connectorOperator": {
"salesforce":"PROJECTION"
}
},
{
.... some other mapping fields.....
}
But the problem is - "Invalid request provided: AWS::AppFlow::FlowCreate Flow request failed: [ID does not exist in the destination connector]".
According to the error, how do I fix the problem with the existing connector that results in "ID does not exist in the destination connector"?
PS: ID is defined in the flow code, but it still says ID is not found.
I think your last connector operator should be:
"connectorOperator": {
"salesforce":"NO_OP"
}
instead of:
"connectorOperator": {
"salesforce":"PROJECTION"
}
since you are mapping the field ID into itself without any transformations whatsoever.
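For reference, a sketch of the last Map task with that change applied (all other fields copied from the question):
{
  "taskType": "Map",
  "sourceFields": [
    "ID"
  ],
  "taskProperties": [
    {
      "key": "SOURCE_DATA_TYPE",
      "value": "Text"
    },
    {
      "key": "DESTINATION_DATA_TYPE",
      "value": "Text"
    }
  ],
  "destinationField": "ID",
  "connectorOperator": {
    "salesforce": "NO_OP"
  }
}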

Secondary Index not working for Database using @key

I should get the DynamoDB id for Justin. The call doesn't seem to fail. If I console.log(returned) I get [object Object]. When I try to access returned.data.getIdFromUserName.id or returned.data.getIdFromUserName.email (or anything else in the table) I get undefined. What am I missing?
Returned data:
{
"data": {
"getIdFromUserName": {
"items": [
{
"id": "3a5a2ks4-f137-41e2-a604-594e0c52a298",
"userName": "Justin",
"firstname": "null",
"weblink": "#JustinTimberlake",
"email": "iuiubiwewe#hotmail.com",
"mobileNum": "+0123456789",
"profilePicURI": "null",
"listOfVideosSeen": null,
"userDescription": "I wanna rock your body, please stay",
"isBlocked": false,
"GridPairs": null
}
],
"nextToken": null
}
}
}
I'd suggest getting a better idea of what console.log(returned) is printing.
Try console.log(JSON.stringify(returned, null, 2)) to inspect what is being returned.
EDIT: The data you're working with looks like this:
{
"data": {
"getIdFromUserName": {
"items": [
{
"id": "3a5a2ks4-f137-41e2-a604-594e0c52a298",
"userName": "Justin",
"firstname": "null",
"weblink": "#JustinTimberlake",
"email": "iuiubiwewe#hotmail.com",
"mobileNum": "+0123456789",
"profilePicURI": "null",
"listOfVideosSeen": null,
"userDescription": "I wanna rock your body, please stay",
"isBlocked": false,
"GridPairs": null
}
],
"nextToken": null
}
}
}
Pay close attention to the structure of that response. Both data and getIdFromUserName are maps, and data.getIdFromUserName contains an array named items. Therefore, data.getIdFromUserName.items is an array containing the results of your query, and you'll need to index into (or iterate over) that array to get the data you are looking for.
For example, data.getIdFromUserName.items[0].id would be 3a5a2ks4-f137-41e2-a604-594e0c52a298
To access the email it would be data.getIdFromUserName.items[0].email.
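Putting that together in client-side JavaScript (a minimal sketch, assuming returned holds the response shown above):
const result = returned.data.getIdFromUserName;

// take the first matching item
console.log(result.items[0].id);    // "3a5a2ks4-f137-41e2-a604-594e0c52a298"
console.log(result.items[0].email);

// or iterate over all matches
result.items.forEach(item => {
  console.log(item.id, item.userName);
});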

How do I insert an optional field as null using AppSync Resolvers and Aurora?

I have an optional String field, notes, that is sometimes empty. If it's empty I want to insert null, otherwise I want to insert the string.
Here is my resolver -
{
"version" : "2017-02-28",
"operation": "Invoke",
#set($id = $util.autoId())
#set($notes = $util.defaultIfNullOrEmpty($context.arguments.notes, 'null'))
"payload": {
"sql":"INSERT INTO things VALUES ('$id', :NOTES)",
"variableMapping": {
":NOTES" : $notes
},
"responseSQL": "SELECT * FROM things WHERE id = '$id'"
}
}
With this graphql
mutation CreateThing{
createThing() {
id
notes
}
}
I get -
{
"data": {
"createRoll": {
"id": "6af68989-0bdc-44e2-8558-aeb4c8418e93",
"notes": "null"
}
}
}
when I really want null without the quotes.
And with this graphql -
mutation CreateThing{
createThing(notes: "Here are some notes") {
id
notes
}
}
I get -
{
"data": {
"createThing": {
"id": "6af68989-0bdc-44e2-8558-aeb4c8418e93",
"notes": "Here are some notes"
}
}
}
which is what I want.
How do I get a quoteless null and a quoted string into the same field?
TL;DR you should use $util.toJson() to print the $context.arguments.notes correctly. Replace your $notes assignment with
#set($notes = $util.toJson($util.defaultIfNullOrEmpty($context.arguments.notes, null)))
Explanation:
The reason is that VTL prints whatever the toString() method returns, and your call to
$util.defaultIfNullOrEmpty($context.arguments.notes, 'null') returns the string "null", which is printed as "null".
If you instead use $util.defaultIfNullOrEmpty($context.arguments.notes, null) it returns null, but VTL then prints the literal text $notes, because that is how it handles null references. In order to print null, which is the valid JSON representation of null, we have to serialize it to JSON. So the correct statement is:
#set($notes = $util.toJson($util.defaultIfNullOrEmpty($context.arguments.notes, null)))
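To make the difference concrete, here is a small sketch of what each variant renders to when notes is not provided (following the behavior described above):
#set($a = $util.defaultIfNullOrEmpty($context.arguments.notes, 'null'))
#set($b = $util.defaultIfNullOrEmpty($context.arguments.notes, null))
#set($c = $util.toJson($util.defaultIfNullOrEmpty($context.arguments.notes, null)))

"a": "$a",  ## renders as "a": "null"  (a quoted string)
"b": $b,    ## renders as "b": $b      (the null reference is printed verbatim)
"c": $c     ## renders as "c": null    (valid JSON null)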
Full test:
I'm assuming you started with the RDS sample provided in the AWS AppSync console and modified it. To reproduce, I updated the content field in the Schema to be nullable:
type Mutation {
...
createPost(author: String!, content: String): Post
...
}
type Post {
id: ID!
author: String!
content: String
views: Int
comments: [Comment]
}
and I modified the posts table schema so content can also be null there: (inside the Lambda function)
function conditionallyCreatePostsTable(connection) {
const createTableSQL = `CREATE TABLE IF NOT EXISTS posts (
id VARCHAR(64) NOT NULL,
author VARCHAR(64) NOT NULL,
content VARCHAR(2048),
views INT NOT NULL,
PRIMARY KEY(id))`;
return executeSQL(connection, createTableSQL);
}
This is the request template for the createPost mutation:
{
"version" : "2017-02-28",
"operation": "Invoke",
#set($id = $util.autoId())
"payload": {
"sql":"INSERT INTO posts VALUES ('$id', :AUTHOR, :CONTENT, 1)",
"variableMapping": {
":AUTHOR" : "$context.arguments.author",
":CONTENT" : $util.toJson($util.defaultIfNullOrEmpty($context.arguments.content, null))
},
"responseSQL": "SELECT id, author, content, views FROM posts WHERE id = '$id'"
}
}
and response template:
$util.toJson($context.result[0])
The following query:
mutation CreatePost {
createPost(author: "Me") {
id
author
content
views
}
}
returns:
{
"data": {
"createPost": {
"id": "b42ee08c-956d-4b89-afda-60fe231e86d7",
"author": "Me",
"content": null,
"views": 1
}
}
}
and
mutation CreatePost {
createPost(author: "Me", content: "content") {
id
author
content
views
}
}
returns
{
"data": {
"createPost": {
"id": "c6af0cbf-cf05-4110-8bc2-833bf9fca9f5",
"author": "Me",
"content": "content",
"views": 1
}
}
}
We were looking into the same issue. For some reason, the accepted answer does not work for us. Maybe because it's a beta feature and there is a new resolver version (2018-05-29 vs 2017-02-28, changes here: Resolver Mapping Template Changelog).
For the time being, we use the following with NULLIF():
{
"version": "2018-05-29",
"statements": [
"INSERT INTO sales_customers_addresses (`id`, `customerid`, `type`, `company`, `country`, `email`) VALUES (NULL, :CUSTOMERID, :TYPE, NULLIF(:COMPANY, ''), NULLIF(:COUNTRY, ''), :EMAIL)"
],
"variableMap": {
":CUSTOMERID": $customerid,
":TYPE": "$type",
":COMPANY": "$util.defaultIfNullOrEmpty($context.args.address.company, '')",
":COUNTRY": "$util.defaultIfNullOrEmpty($context.args.address.country, '')",
":EMAIL": "$context.args.address.email"
}
}
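For context, NULLIF(a, b) returns NULL when a equals b and a otherwise, so NULLIF(:COMPANY, '') stores NULL whenever the variable falls back to the empty string supplied by $util.defaultIfNullOrEmpty. A quick illustration in SQL:
SELECT NULLIF('', '');          -- returns NULL
SELECT NULLIF('Acme Inc', '');  -- returns 'Acme Inc'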

How to resolve parent to child relationship with AppSync

I have a schema that looks like the one below:
type Post {
id: ID!
creator: String!
createdAt: String!
like: Int!
dislike: Int!
frozen: Boolean!
revisions:[PostRevision!]
}
type PostRevision {
id: ID!
post: Post!
content: String!
author: String!
createdAt: String!
}
type Mutation {
createPost(postInput: CreatePostInput!): Post
}
I would like to be able to batch insert a Post and a PostRevision at the same time when I run the createPost mutation; however, VTL is giving me a hard time.
I have tried the following:
## Variable Declarations
#set($postId = $util.autoId())
#set($postList = [])
#set($postRevisionList = [])
#set($post = {})
#set($revision = {})
## Initialize Post object
$util.qr($post.put("creator", $ctx.args.postInput.author))
$util.qr($post.put("id", $postId))
$util.qr($post.put("createdAt", $util.time.nowEpochMilliSeconds()))
$util.qr($post.put("like", 0))
$util.qr($post.put("dislike", 0))
$util.qr($post.put("frozen", false))
## Initialize PostRevision object
$util.qr($revision.put("id", $util.autoId()))
$util.qr($revision.put("author", $ctx.args.postInput.author))
$util.qr($revision.put("post", $postId))
$util.qr($revision.put("content", $ctx.args.postInput.content))
$util.qr($revision.put("createdAt", $util.time.nowEpochMilliSeconds()))
## Listify objects
$postList.add($post)
$postRevisionList.add($revision)
{
"version" : "2018-05-29",
"operation" : "BatchPutItem",
"tables" : {
"WHISPR_DEV_PostTable": $util.toJson($postList),
"WHISPR_DEV_PostRevisionTable": $util.toJson($postRevisionList)
}
}
So basically I am reconstructing the documents in the createPost resolver so that I can add the Post and also add the Post's ID to the PostRevision. However, when I run the code below
mutation insertPost{
createPost(postInput:{
creator:"name"
content:"value"
}){
id
}
}
I get the following error:
{
"data": {
"createPost": null
},
"errors": [
{
"path": [
"createPost"
],
"data": null,
"errorType": "MappingTemplate",
"errorInfo": null,
"locations": [
{
"line": 2,
"column": 3,
"sourceName": null
}
],
"message": "Expected JSON object but got BOOLEAN instead."
}
]
}
What am I doing wrong?
I know it would be easier to resolve this with a Lambda function, but I do not want to double the cost for no reason. Any help would be greatly appreciated. Thanks!
If anyone still needs the answer for this (this question is still the #1 google hit for the mentioned error message):
The problem is the return value of the add() method, which returns a boolean value.
To fix this, just wrap the add() methods into $util.qr, as you are already doing for the put() methods:
$util.qr($postList.add($post))
$util.qr($postRevisionList.add($revision))
It looks like you are missing a call to $util.dynamodb.toDynamoDBJson, which is causing AppSync to try to put plain JSON objects into DynamoDB, when DynamoDB requires a DynamoDB-specific input structure: each attribute, instead of being a plain string like "hello world!", is an object like { "S": "hello world!" }. The $util.dynamodb.toDynamoDBJson helper handles this conversion for you. Can you please try adding toDynamoDBJson() to these lines:
## Listify objects
$postList.add($util.dynamodb.toDynamoDBJson($post))
$postRevisionList.add($util.dynamodb.toDynamoDBJson($revision))
Hope this helps :)
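Combining the two answers above, a minimal sketch of the corrected "Listify" section and request body might look like the following. Note that this is an untested sketch: it wraps add() in $util.qr() so the boolean return value is not rendered into the template, and it uses $util.dynamodb.toMapValues (the helper used in the AWS BatchPutItem examples) to convert each map into DynamoDB-typed attribute values.
## Listify objects
## qr() swallows the boolean returned by add(); toMapValues() converts each
## plain map into DynamoDB-typed attribute values for BatchPutItem.
$util.qr($postList.add($util.dynamodb.toMapValues($post)))
$util.qr($postRevisionList.add($util.dynamodb.toMapValues($revision)))

{
  "version" : "2018-05-29",
  "operation" : "BatchPutItem",
  "tables" : {
    "WHISPR_DEV_PostTable": $util.toJson($postList),
    "WHISPR_DEV_PostRevisionTable": $util.toJson($postRevisionList)
  }
}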

Deleting row using composite key

I have the table 'column_defn' with the following schema. The keys are column_name, database_name, and table_name:
column_name STRING(130) NOT NULL
database_name STRING(150) NOT NULL
table_name STRING(130) NOT NULL
column_description STRING(1000) NOT NULL
I am trying to delete a row using the following REST request
{
"session":"xxxxxxxxx"
"singleUseTransaction": {
"readWrite": {}
},
"mutations": [
{
"delete": {
"table": "column_defn",
"keySet": {
"keys": [
[
{
"column_name": "testd"
},
{
"table_name": "test atbd"
},
{
"database_name": "ASDFDFS"
}
]
]
}
}
}
]
}
but I keep getting the following error. Any idea as to what is wrong in the above request?
{
"error": {
"code": 400,
"message": "Invalid value for column database_name in table column_defn: Expected STRING.",
"status": "FAILED_PRECONDITION"
}
}
Update: The following request seems to be successful; at least it returned the success code 200 and a commitTimestamp. However, the row didn't get deleted:
{
"singleUseTransaction": {
"readWrite": {}
},
"mutations": [
{
"delete": {
"table": "column_defn",
"keySet": {
"keys": [
[
"testd",
"dsafd",
"test atbd"
]
]
}
}
}
]
}
keys should contain an array-of-arrays. In the outer array, there will be one entry for each row you are trying to delete. Each inner array will be the ordered list of key-values that define a single row (order matters). So in your example, you want:
"keys": [["testd","ASDFDFS","test atbd"]]
Note that the original question is inconsistent in the true ordering of the keys in the table. The above answer assumes the primary key is defined something like:
PRIMARY KEY(column_name,database_name,table_name)
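Putting that together, a minimal sketch of the full delete request under that assumed key order would be:
{
  "singleUseTransaction": {
    "readWrite": {}
  },
  "mutations": [
    {
      "delete": {
        "table": "column_defn",
        "keySet": {
          "keys": [
            ["testd", "ASDFDFS", "test atbd"]
          ]
        }
      }
    }
  ]
}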