Filter AppSync Subscription

I'm building a simple multiplayer game using AppSync (to avoid having to manage WebSockets from scratch) and Amplify (specifically on Android, using Kotlin).
The problem is that when players are connected to a game, each of them should listen for updates of the other players (consider that I have a DynamoDB table whose primary key is a UUID, along with the player's game-id and position).
So what I've done is that after a player joins his game session, he registers himself to onUpdate of Player... however, he now receives update notifications for all players. Even though I could easily filter these incoming events client-side, I think that's a terrible idea, since AppSync would have to notify every registered user of every update.
Instead, I would like to specify a filter on the subscription, like when you perform a query; something like:
Amplify.API.subscribe(
    ModelSubscription.onUpdate(
        Player::class.java,
        Player.GAMEROOM.eq(currentPlayer.gameroom.id) // only players in the same game room
    ),
    { Log.i("ApiQuickStart", "Subscription established") },
    { onUpdated ->
        ...
    },
    { onFailure -> Log.e("ApiQuickStart", "Subscription failed", onFailure) },
    { Log.i("ApiQuickStart", "Subscription completed") }
)
Is there a way to achieve this?
Since this is the only type of subscription I will make on Player updates, maybe something could be done with a custom resolver?

You need to add one more parameter to your subscription, and to its subscribed mutations. I guess your GraphQL schema looks like:
type Subscription {
    onUpdate(gameRoomId: ID!, playerId: ID): UpdateUserResult
        @aws_subscribe(mutations: ["leaveCurrentGameRoom", <other_interested_mutation>])
}

type Mutation {
    leaveCurrentGameRoom: UpdateUserResult
}

type UpdateUserResult {
    gameRoomId: ID!
    playerId: ID!
    # other fields
}
Notice that Subscription.onUpdate has an optional param, playerId. The subscription can be used as follows:
onUpdate("room-1") // AppSync gives updates of all players in "room-1"
onUpdate("room-1", "player-A") // AppSync gives updates of "player-A" in "room-1"
In order for AppSync to filter out irrelevant updates, the subscribed mutations, such as leaveCurrentGameRoom, have to return a type that includes the two fields gameRoomId and playerId. Note that you may also have to update the mutations' resolvers to resolve data for these fields.
Here is another example of subscription arguments
https://docs.aws.amazon.com/appsync/latest/devguide/aws-appsync-real-time-data.html#using-subscription-arguments
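On the Android client, a subscription that takes arguments can be invoked with a raw GraphQL document. Here is a minimal Kotlin sketch, assuming the schema above and reusing currentPlayer from the question; the document string and the local UpdateUserResult data class are illustrative, not part of any generated API:

import android.util.Log
import com.amplifyframework.api.aws.GsonVariablesSerializer
import com.amplifyframework.api.graphql.SimpleGraphQLRequest
import com.amplifyframework.core.Amplify

// Local mirror of the UpdateUserResult type from the schema (illustrative).
data class UpdateUserResult(val gameRoomId: String, val playerId: String)

val document = "subscription OnUpdate(\$gameRoomId: ID!) { " +
    "onUpdate(gameRoomId: \$gameRoomId) { gameRoomId playerId } }"

val request = SimpleGraphQLRequest<UpdateUserResult>(
    document,
    mapOf("gameRoomId" to currentPlayer.gameroom.id), // only this game room
    UpdateUserResult::class.java,
    GsonVariablesSerializer()
)

Amplify.API.subscribe(
    request,
    { Log.i("ApiQuickStart", "Subscription established") },
    { response -> Log.i("ApiQuickStart", "Update: " + response.data) },
    { error -> Log.e("ApiQuickStart", "Subscription failed", error) },
    { Log.i("ApiQuickStart", "Subscription completed") }
)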

Related

AWS Amplify GraphQL subscriptions fail with "Cannot return null for non-nullable type: 'AWSDateTime' within parent 'Todo' (/onCreateChat/createdAt)"

I'm trying to add a todo from a Lambda function in AWS.
I created a Flutter project, added an API (the GraphQL basic Todo sample), and added a function (with mutations enabled).
The Lambda works and effectively adds an entry to the TODO list. Unfortunately, the subscription started in Flutter returns an error:
Cannot return null for non-nullable type: 'AWSDateTime' within parent 'Chat' (/onCreateChat/createdAt)
I see this problem was solved on GitHub, where aleksvidak states:
I had a similar problem and what I realised is that the mutation which in fact triggers the subscription has to have the same response fields as the ones specified for the subscription response. This way it works for me.
This seems to solve many people's problem. Unfortunately, I don't understand what it means for the basic TODO sample code.
Mutation:
type Mutation {
    createTodo(input: CreateTodoInput!, condition: ModelTodoConditionInput): Todo
    ...
Subscription:
type Subscription {
    onCreateTodo(filter: ModelSubscriptionTodoFilterInput): Todo
        @aws_subscribe(mutations: ["createTodo"])
    ...
Isn't this code aligned with what aleksvidak said? The mutation has the same response type (Todo) as the subscription, right?
For my case, it was a missing updatedAt field in the mutation document that I send from the Lambda function. Matching declared return types is not enough: the fields actually selected in the mutation determine which fields the subscription can deliver, so a subscription that selects a non-nullable field the mutation didn't select fails with the error above.
const query = /* GraphQL */ `
  mutation CREATE_TODO($input: CreateTodoInput!) {
    createTodo(input: $input) {
      id
      name
      updatedAt # <-- this was missing
    }
  }
`;

Creating Batch Operations with AWS Amplify [GraphQL, DataStore, AppSync]

I've currently been handling batch operations with a for loop, but obviously this is not the best approach, especially as I'm adding an 'upload by CSV' option, which will involve 1000+ PutItem calls.
I searched around for the best ways to implement this, specifically this link:
https://docs.aws.amazon.com/appsync/latest/devguide/tutorial-dynamodb-batch.html
However, even after following those steps mentioned I'm not able to achieve a batch operation. Below is my code for a 'batch delete' operation.
Here is my schema.graphql file:
type Client @model @auth(rules: [{ allow: owner }]) {
    id: ID!
    name: String!
    company: String
    phone: String
    email: String
}

type Mutation {
    batchDelete(ids: [ID]): [Client]
}
I then create two new files: one request mapping template and one response mapping template.
#set($clientsdata = [])
#foreach($item in ${ctx.args.clients})
    $util.qr($clientsdata.delete($util.dynamodb.toMapValues($item)))
#end
{
    "version" : "2018-05-29",
    "operation" : "BatchDeleteItem",
    "tables" : {
        "Clients": $utils.toJson($clientsdata)
    }
}
and then as per the tutorial a "simple pass through" response mapping template:
$util.toJson($ctx.result.data.Posts)
However, now when I run the batchDelete mutation, I keep getting nothing returned.
Would really appreciate guidance on this!
When it comes to performing DynamoDB batch operations in tandem with Amplify, note that the table name specified in the schema is actually different per environment; i.e. your "Client" table wouldn't be recognized as "Clients" as you have stated it in the request mapping template, but rather by the name it is given on amplify push, per environment.
E.g. Client-<some alphanumeric number>-envName
Add the full name of the table to your request and response mapping templates.
Also, your foreach statement should read:
#foreach($item in ${ctx.args.ids})
so that you iterate through the array that is actually passed as the argument (ids) on the context object.
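For reference, invoking the custom mutation from a Kotlin client via Amplify could look like the sketch below; the document string is illustrative, and TypeMaker describes the list response type:

import android.util.Log
import com.amplifyframework.api.aws.GsonVariablesSerializer
import com.amplifyframework.api.graphql.SimpleGraphQLRequest
import com.amplifyframework.core.Amplify
import com.amplifyframework.util.TypeMaker

// Hypothetical document matching the batchDelete mutation in the schema.
val request = SimpleGraphQLRequest<List<Client>>(
    "mutation BatchDelete(\$ids: [ID]) { batchDelete(ids: \$ids) { id name } }",
    mapOf("ids" to listOf("client-id-1", "client-id-2")),
    TypeMaker.getParameterizedType(List::class.java, Client::class.java),
    GsonVariablesSerializer()
)

Amplify.API.mutate(request,
    { response -> Log.i("BatchDelete", "Deleted: " + response.data) },
    { error -> Log.e("BatchDelete", "Failed", error) }
)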
Hope this helps.

AWS API Gateway only allows the first element of the form data and ignores the rest

I have been trying to push data into AWS SQS using AWS API Gateway; the data I send is application/x-www-form-urlencoded.
And it looks somewhat like this:
fruits[]: apple
fruits[]: mango
fruits[]: banana
season: summer
Now when I poll the data from AWS SQS, I see only fruits[]=apple has been stored and all others are ignored.
This is my current mapping template to push in SQS:
Action=SendMessage&MessageBody=$input.body
It looks like the body is being split into multiple parameters, and if that is the case then it's kind of impossible to capture arbitrary form data coming in.
I am new to AWS API Gateway, thanks in advance. :D
After a lot of research and stuff, I was able to decipher this mystery.
The value of $input.body is:
fruits[]=apple&fruits[]=mango&fruits[]=banana&season=summer
Since only MessageBody is pushed to SQS, the query string resulting from my template was:
Action=SendMessage&MessageBody=fruits[]=apple&fruits[]=mango&fruits[]=banana&season=summer
Only fruits[]=apple falls under MessageBody; the others become separate query parameters and hence were ignored.
I just had to tweak the template to:
Action=SendMessage&MessageBody=$util.urlEncode($input.body)
So the resulting query string no longer contains bare & or = characters, and everything falls under MessageBody.
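To see why the encoding fixes it, here is a small Kotlin check; java.net.URLEncoder performs the same application/x-www-form-urlencoded escaping that $util.urlEncode applies:

import java.net.URLEncoder

fun main() {
    val body = "fruits[]=apple&fruits[]=mango&fruits[]=banana&season=summer"
    // '&' -> %26, '=' -> %3D, '[' -> %5B, ']' -> %5D
    val encoded = URLEncoder.encode(body, "UTF-8")
    println("Action=SendMessage&MessageBody=" + encoded)
    // Prints:
    // Action=SendMessage&MessageBody=fruits%5B%5D%3Dapple%26fruits%5B%5D%3Dmango%26fruits%5B%5D%3Dbanana%26season%3Dsummer
    // With no bare '&' or '=' left in the body, SQS sees a single MessageBody
    // parameter instead of several unrelated query parameters.
}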
Edits are welcomed
Try this:
Request:
POST apigateway/stage/resource?query=test
{
    "season": "summer",
    "list": ["apple", "mango", "banana"]
}
Mapping:
#set($inputRoot = $input.path('$'))
{
    "query": "$input.params('query')",
    "id": "$inputRoot.season",
    "list": $inputRoot.list
}
https://aws.amazon.com/blogs/compute/using-api-gateway-mapping-templates-to-handle-changes-in-your-back-end-apis/
https://docs.aws.amazon.com/apigateway/latest/developerguide/example-photos.html

AppSync - creating nested mutation with array and objects?

Does AppSync support nested writes in a single mutation?
I want to call a single mutation which will insert records into two tables, eg: User and Roles tables in DynamoDB.
Something like this for example:
createUser(
    input: {
        Name: "John"
        Email: "user@domain.com"
        LinesRoles: [
            { Name: "Role 1" }
            { Name: "Role 2" }
        ]
    }) {
    Id
    Name
    LinesRoles {
        Id
        Name
    }
}
Do I need to create two resolvers in AppSync for User and Roles to insert the records in both tables?
I can think of three ways to achieve this:
Use a BatchPutItem to save records into the two tables at once. However, you won't be able to use any ConditionExpression.
Use a pipeline resolver with two AppSync functions, where one function makes a PutItem to the Roles table and the other to the User table. However, you need to be OK with potentially inconsistent scenarios where the record has been inserted in one table but not in the other.
Use a Lambda resolver that does the writes to the two tables inside a DynamoDB transaction (see the sketch below).
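Here is a minimal Kotlin sketch of the third option, using the AWS SDK for Java v2 inside the Lambda resolver; the table names and attribute layout are illustrative:

import software.amazon.awssdk.services.dynamodb.DynamoDbClient
import software.amazon.awssdk.services.dynamodb.model.AttributeValue
import software.amazon.awssdk.services.dynamodb.model.Put
import software.amazon.awssdk.services.dynamodb.model.TransactWriteItem
import software.amazon.awssdk.services.dynamodb.model.TransactWriteItemsRequest

// Writes one User item plus its Role items atomically:
// either every Put succeeds or none are applied.
fun createUserWithRoles(db: DynamoDbClient, userId: String, name: String, roles: List<String>) {
    fun s(value: String): AttributeValue = AttributeValue.builder().s(value).build()

    val writes = mutableListOf(
        TransactWriteItem.builder().put(
            Put.builder()
                .tableName("User") // illustrative table name
                .item(mapOf("Id" to s(userId), "Name" to s(name)))
                .build()
        ).build()
    )
    for (role in roles) {
        writes += TransactWriteItem.builder().put(
            Put.builder()
                .tableName("Roles") // illustrative table name
                .item(mapOf("Id" to s("$userId#$role"), "UserId" to s(userId), "Name" to s(role)))
                .build()
        ).build()
    }
    // A single transaction supports up to 100 items.
    db.transactWriteItems(TransactWriteItemsRequest.builder().transactItems(writes).build())
}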

AWS AppSync Event Subscription based on Location

I understand how to perform geospatial queries through AppSync to find events within a distance range of a GPS coordinate, by attaching a resolver linked to Elasticsearch, as described here.
However, what if I want my client to subscribe to new events being created within this distance range as well?
user subscribes to a location
if an event is created near that location, notify user
I know I can attach resolvers to subscription types, but it seems like that forces you to provide a data source, when I just want to filter subscriptions by checking the distance between GPS coordinates.
This is a great question, and I would think there are a couple of ways to solve it. The tough part is that you will have to figure out a way to answer the question "Which subscriptions are interested in an event at this location?". Here is one possible path forward.
The following assumes these schema parts:
# Whatever custom object has a location
type Post {
    id: ID!
    title: String
    location: Location
}

input PublishPostInput {
    id: ID!
    title: String
    location: LocationInput
    subscriptionID: ID
}

type PublishPostOutput {
    id: ID!
    title: String
    location: Location
    subscriptionID: ID
}

type Location {
    lat: Float
    lon: Float
}

input LocationInput {
    lat: Float
    lon: Float
}

# A custom type to hold tracked subscription information
# for location discovery
type OpenSubscription {
    subscriptionID: ID!
    location: Location
    timestamp: String!
}

type OpenSubscriptionConnection {
    items: [OpenSubscription]
    nextToken: String
}

type Query {
    # Query the Elasticsearch index for relevant subscriptions
    openSubscriptionsNear(location: LocationInput, distance: String): OpenSubscriptionConnection
}

type Mutation {
    # This mutation uses a local resolver (i.e. a resolver with a None data source)
    # and simply returns its input as is.
    publishPostToSubscription(input: PublishPostInput): PublishPostOutput
}

type Subscription {
    # Anytime someone passes an object with the same subscriptionID to the
    # "publishPostToSubscription" mutation field, get updated.
    listenToSubscription(subscriptionID: ID!): PublishPostOutput
        @aws_subscribe(mutations: ["publishPostToSubscription"])
}
Assuming you are using DynamoDB as your primary source of truth, set up a DynamoDB stream that invokes a "PublishIfInRange" Lambda function. That function would look something like this:
// event - { location: { lat, lon }, id, title, ... }
// callGraphql is a stand-in for signing and sending a request to the AppSync
// endpoint; it is assumed to return the unwrapped result.
async function lambdaHandler(event) {
    const relevantSubscriptions = await callGraphql(`
        query GetSubscriptions($location: LocationInput) {
            openSubscriptionsNear(location: $location, distance: "10 miles") {
                items {
                    subscriptionID
                }
            }
        }
    `, { variables: { location: event.location } })
    for (const subscription of relevantSubscriptions) {
        await callGraphql(`
            mutation PublishToSubscription($input: PublishPostInput!) {
                publishPostToSubscription(input: $input) {
                    id
                    title
                    location { lat lon }
                    subscriptionID
                }
            }
        `, { variables: { input: { ...subscription, ...event } } })
    }
}
You will need to maintain a registry of subscriptions indexed by location. One way to do this is to have your client app call a mutation that creates a subscription object with a location and subscriptionID (e.g. mutation { makeSubscription(loc: $loc) { ... } }, assuming you are using $util.autoId() to generate the subscriptionID in the resolver). After you have the subscriptionID, you can make the subscription call through GraphQL and pass the subscriptionID as an argument (e.g. subscription { listenToSubscription(subscriptionID: "my-id") { id title location { lat lon } } }). When you make this subscription call, AppSync creates a topic and authorizes the current user to subscribe to that topic. The topic is unique to the subscription field being called and the set of arguments passed to it. In other words, the topic only receives objects that are published with that same subscriptionID.
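On an Android client, that two-step flow could look like the following Kotlin sketch; the makeSubscription mutation and the MakeSubscriptionResult/PublishPostOutput classes are illustrative, mirroring the schema above:

// Step 1: register an interest point; the resolver returns the generated subscriptionID.
val makeSub = SimpleGraphQLRequest<MakeSubscriptionResult>(
    "mutation MakeSubscription(\$loc: LocationInput) { " +
        "makeSubscription(loc: \$loc) { subscriptionID } }",
    mapOf("loc" to mapOf("lat" to 47.61, "lon" to -122.33)),
    MakeSubscriptionResult::class.java,
    GsonVariablesSerializer()
)

Amplify.API.mutate(makeSub,
    { response ->
        // Step 2: listen on the topic keyed by the returned subscriptionID.
        val listen = SimpleGraphQLRequest<PublishPostOutput>(
            "subscription Listen(\$id: ID!) { listenToSubscription(subscriptionID: \$id) { " +
                "id title location { lat lon } subscriptionID } }",
            mapOf("id" to response.data.subscriptionID),
            PublishPostOutput::class.java,
            GsonVariablesSerializer()
        )
        Amplify.API.subscribe(listen,
            { Log.i("Geo", "Subscription established") },
            { post -> Log.i("Geo", "Nearby post: " + post.data) },
            { error -> Log.e("Geo", "Subscription failed", error) },
            { Log.i("Geo", "Subscription completed") })
    },
    { error -> Log.e("Geo", "makeSubscription failed", error) }
)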
Now, whenever an object is created, the record goes to the Lambda function via DynamoDB Streams. The Lambda function queries Elasticsearch for all open subscriptions near that object and then publishes a record to each of those open subscriptions.
I believe this should get you reasonably far, but if you have millions of users in tight quarters you will likely run into scaling issues. Hope this helps.