Does AppSync support a single nested mutation?
I want to call a single mutation that inserts records into two DynamoDB tables, e.g. User and Roles.
Something like this for example:
createUser(
  input: {
    Name: "John"
    Email: "user#domain.com"
    LinesRoles: [
      { Name: "Role 1" }
      { Name: "Role 2" }
    ]
  }) {
  Id
  Name
  LinesRoles {
    Id
    Name
  }
}
Do I need to create two resolvers in AppSync for User and Roles to insert the records in both tables?
I can think of three ways to achieve this:
1. Use a BatchPutItem to save records into the two tables at once. However, you won't be able to use any ConditionExpression.
2. Use a pipeline resolver with two AppSync functions, where one function makes a PutItem to the Roles table and the other to the User table. However, you need to be OK with potentially inconsistent scenarios where the record has been inserted into one table but not the other.
3. Use a Lambda resolver that writes to the two tables inside a DynamoDB transaction (see the sketch after this list).
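For option 3, here is a minimal sketch of such a Lambda resolver, assuming a Node.js/TypeScript Lambda attached directly to the createUser mutation with AWS SDK v3. The table names (User, Roles), key attributes, and handler shape are all assumptions for illustration, not taken from the question:

import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, TransactWriteCommand } from "@aws-sdk/lib-dynamodb";
import { randomUUID } from "crypto";

const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));

// Hypothetical handler for the createUser mutation shown above.
export const handler = async (event: {
  arguments: { input: { Name: string; Email: string; LinesRoles: { Name: string }[] } };
}) => {
  const { Name, Email, LinesRoles } = event.arguments.input;
  const userId = randomUUID();
  const roles = LinesRoles.map((r) => ({ Id: randomUUID(), UserId: userId, Name: r.Name }));

  // A single transaction: either both tables are written or neither is,
  // which avoids the inconsistency risk of the pipeline approach (option 2).
  await ddb.send(new TransactWriteCommand({
    TransactItems: [
      { Put: { TableName: "User", Item: { Id: userId, Name, Email } } },
      ...roles.map((role) => ({ Put: { TableName: "Roles", Item: role } })),
    ],
  }));

  return { Id: userId, Name, Email, LinesRoles: roles };
};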
I am trying to use a workflow of aws appsync -> lambda1 -> athena -> SNS -> lambda2.
I want to understand how I can use two different Lambda resolvers in the AppSync API. When calling the getTask query I pass customer_id as input to lambda1, which is supposed to invoke Athena. Once the query is successful, SNS publishes a message and lambda2 is invoked to display the result. Here is what my schema looks like:
type Query {
  getTask(cust_id: String!): String
}

type Task {
  cust_id: String!
  description: String!
}

schema {
  query: Query
}
I've currently been handling batch operations with a for loop, but obviously this is not the best approach, especially as I'm adding an 'upload by CSV' option, which will involve 1000+ PutItem calls.
I searched around for the best ways to implement this, specifically this link:
https://docs.aws.amazon.com/appsync/latest/devguide/tutorial-dynamodb-batch.html
However, even after following the steps mentioned there, I'm not able to achieve a batch operation. Below is my code for a 'batch delete' operation.
Here is my schema.graphql file:
type Client @model @auth(rules: [{ allow: owner }]) {
  id: ID!
  name: String!
  company: String
  phone: String
  email: String
}

type Mutation {
  batchDelete(ids: [ID]): [Client]
}
I then create two new files: one request mapping template and one response mapping template.
#set($clientsdata = [])
#foreach($item in ${ctx.args.clients})
  $util.qr($clientsdata.delete($util.dynamodb.toMapValues($item)))
#end
{
  "version" : "2018-05-29",
  "operation" : "BatchDeleteItem",
  "tables" : {
    "Clients": $utils.toJson($clientsdata)
  }
}
and then, as per the tutorial, a "simple pass-through" response mapping template:
$util.toJson($ctx.result.data.Posts)
However, when I run the batchDelete mutation, nothing is returned.
Would really appreciate guidance on this!
When it comes to performing DynamoDB batch operations in tandem with Amplify, note that the actual table name differs per environment; i.e., your "Client" table wouldn't be recognized as "Clients" as you have stated it in the request mapping template, but rather by the name it is given on amplify push, per environment, e.g. Client-<some alphanumeric number>-envName.
Add the full name of the table to your request and response mapping templates.
Also, your foreach statement should read #foreach($item in ${ctx.args.clientsdata}), so that you iterate through each of the items in the array passed as the argument to the context object.
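Putting both fixes together, a corrected request mapping template might look like the sketch below. It assumes the mutation keeps its ids: [ID] argument as defined in the schema above (so that is the argument being iterated) and that the model's key attribute is id; the table name placeholder must be replaced with your real per-environment name:

#set($keys = [])
#foreach($id in ${ctx.args.ids})
  ## Build a DynamoDB key map for each id in the argument list
  #set($key = {})
  $util.qr($key.put("id", $util.dynamodb.toString($id)))
  $util.qr($keys.add($key))
#end
{
  "version": "2018-05-29",
  "operation": "BatchDeleteItem",
  "tables": {
    "Client-<some alphanumeric number>-envName": $util.toJson($keys)
  }
}

The pass-through response mapping template then has to reference the same per-environment table name, e.g. $util.toJson($ctx.result.data.get("Client-<some alphanumeric number>-envName")) rather than $ctx.result.data.Posts.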
Hope this helps.
I'm building a simple multiplayer game using AppSync (to avoid having to manage WebSockets from scratch) and Amplify (specifically on Android, using Kotlin).
The problem is that players connected to a game should listen for updates about the other players (consider that I have a DynamoDB table with a UUID as primary key and the corresponding game id and position of each player).
So what I've done is that after the player joins his game session, he registers to onUpdate of Player... however, he now receives update notifications for all players. Even though I can easily filter these incoming events client-side, I think that's a terrible idea, since AppSync has to notify all registered users of all the updates.
Instead, I would like to specify a filter on the subscription, like when you perform a query; something like:
Amplify.API.subscribe(
    ModelSubscription.onUpdate(
        Player::class.java,
        Player.GAMEROOM.eq(currentPlayer.gameroom.id) // only players in the same gameroom
    ),
    { Log.i("ApiQuickStart", "Subscription established") },
    { onCreated ->
        ...
    },
    { onFailure -> Log.e("ApiQuickStart", "Subscription failed", onFailure) },
    { Log.i("ApiQuickStart", "Subscription completed") }
)
Is there a way to obtain this?
Since this is the only type of subscription that I will make to Player updates, maybe something could be done with a custom resolver?
You need to add one more param to your subscription, as well as its subscribed mutations. I guess your GraphQL schema looks like this:
type Subscription {
  onUpdate(gameRoomId: ID!, playerId: ID): UpdateUserResult
    @aws_subscribe(mutations: ["leaveCurrentGameRoom", <other_interested_mutation>])
}

type Mutation {
  leaveCurrentGameRoom: UpdateUserResult
}

type UpdateUserResult {
  gameRoomId: ID!
  playerId: ID!
  # other fields
}
Notice that Subscription.onUpdate has an optional param, playerId. The subscription can be used as follows:
onUpdate("room-1") // AppSync gives updates of all players in "room-1"
onUpdate("room-1", "player-A") // AppSync gives updates of "player-A" in "room-1"
In order for AppSync to filter out irrelevant updates, the subscribed mutations, such as leaveCurrentGameRoom, have to return a type that includes the two fields gameRoomId and playerId. Please note that you may also need to update the mutations' resolvers to resolve data for these fields.
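For reference, a client would then subscribe with a GraphQL document along these lines (the operation and variable names are illustrative; the fields match the schema above):

subscription OnUpdateInRoom($gameRoomId: ID!, $playerId: ID) {
  onUpdate(gameRoomId: $gameRoomId, playerId: $playerId) {
    gameRoomId
    playerId
  }
}

Omitting the optional $playerId variable subscribes to all player updates in the given room, which is the filtering behavior the question asks for.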
Here is another example of subscription arguments
https://docs.aws.amazon.com/appsync/latest/devguide/aws-appsync-real-time-data.html#using-subscription-arguments
I am using Amazon RDS with AppSync. I've created a resolver that joins two tables to get a one-to-one association between them and returns columns from both tables. What I would like is to be able to nest some columns under a key in the resulting parsed JSON object evaluated using $util.rds.toJsonObject().
Here's the schema:
type Parent {
  col1: String
  col2: String
  child: Child
}

type Child {
  col3: String
  col4: String
}
Here's the resolver:
{
  "version": "2018-05-29",
  "statements": [
    "SELECT parent.*, child.col3 AS `child.col3`, child.col4 AS `child.col4` FROM parent LEFT JOIN child ON parent.col1 = child.col3"
  ]
}
I tried naming the resulting columns with the dot syntax, but $util.rds.toJsonObject() doesn't put col3 and col4 under a child key. It should, because otherwise Apollo won't be able to cache and parse the entity.
Note: the dot syntax is not documented anywhere. Some ORMs use this technique to convert SQL rows to properly nested JSON objects.
The comment and answer from @Aaron_H were useful for me, but the response mapping template provided in the answer didn't work for me.
I managed to get a working response mapping template for my case, which is similar to the one in the question. The images below show the info for query -> message(id: ID) { ... } (one message and the associated user will be returned):
SQL request to user table;
SQL request to message table;
SQL JOIN tables request for message id=1;
GraphQL Schema;
Request and response templates;
AWS AppSync query.
https://github.com/xai1983kbu/apollo-server/blob/pulumi_appsync_2/bff_pulumi/graphql/resolvers/Query.message.js
The next example is for the query messages:
https://github.com/xai1983kbu/apollo-server/blob/pulumi_appsync_2/bff_pulumi/graphql/resolvers/Query.messages.js
Assuming your resolver is expected to return a list of Parent types, i.e. [Parent!]!, you can write your response mapping template logic like this:
#if($ctx.error)
  $util.error($ctx.error.message, $ctx.error.type)
#end
#set($output = $utils.rds.toJsonObject($ctx.result)[0])
## Make sure to handle instances where fields are null
## or don't exist according to your business logic
#foreach($item in $output)
  #set($item.child = {
    "col3": $item.get("child.col3"),
    "col4": $item.get("child.col4")
  })
#end
$util.toJson($output)
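For a single joined row, the template above should produce output shaped roughly like this (all values are placeholders; the flat "child.col3"/"child.col4" keys are still present because the template doesn't remove them, but the GraphQL selection set simply ignores them):

[
  {
    "col1": "parent-value",
    "col2": "parent-value",
    "child.col3": "child-value",
    "child.col4": "child-value",
    "child": {
      "col3": "child-value",
      "col4": "child-value"
    }
  }
]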
How do I restrict DynamoDB access so that each user can only reach the data owned by them?
I came across Using IAM Policy Conditions for Fine-Grained Access Control:
"Condition": {
"ForAllValues:StringEquals": {
"dynamodb:LeadingKeys": [
"${www.amazon.com:user_id}"
],
"dynamodb:Attributes": [
"UserId",
"GameTitle",
"Wins",
"Losses",
"TopScore",
"TopScoreDateTime"
]
}
}
The problem with the above condition is that my partition key is not the userId; it's something else. Here is what my DB looks like:
Hash : "Sales" # its just plain text Sales
Range Date # its date
attributes : Sale : [ # array of maps
{
name : abc,
userId : idabc,
some-other : stuff
},
{
name : xyz,
userId : idxyz,
some-other : stuff
}
]
Any idea how to restrict access based on Sale[x].userId? Or is there a better design for handling this kind of data? I use the Date range key to query 90% of the data.
The other option is to use a different table for each logical table, like sales, expenses, payroll, etc., but I don't want to create separate tables, and it defeats the purpose of NoSQL, I guess.
FYI, I am using the JavaScript SDK to access DynamoDB from the browser.
The app has 3 different user types:
customer (access to its own data)
merchant (access to its own data and its customers' data)
admin (access to all the data)
I think for this I have to create 3 different user pools; correct me if I am wrong.
But I can't restrict access to a user's own data that way: if I use userId as the partition key, then querying for merchants becomes difficult.
Any suggestions on how to handle this DB design?
Thanks!