I understand how to perform geospatial queries through AppSync to find events within a distance range of a GPS coordinate by attaching a resolver linked to Elasticsearch, as described here.
However, what if I want my client to subscribe to new events being created within this distance range as well?
user subscribes to a location
if an event is created near that location, notify user
I know I can attach resolvers to subscription fields, but it seems like that forces you to provide a data source, when all I want is to filter subscriptions by checking the distance between GPS coordinates.
This is a great question, and I think there are a couple of ways to solve it. The tough part is that you are going to have to figure out a way to ask the question "Which subscriptions are interested in an event at this location?". Here is one possible path forward.
The following assumes these schema parts:
# Whatever custom object has a location
type Post {
id: ID!
title: String
location: Location
}
input PublishPostInput {
id: ID!
title: String
location: LocationInput
subscriptionID: ID
}
type PublishPostOutput {
id: ID!
title: String
location: Location
subscriptionID: ID
}
type Location {
lat: Float,
lon: Float
}
input LocationInput {
lat: Float,
lon: Float
}
# A custom type to hold custom tracked subscription information
# for location discovery
type OpenSubscription {
subscriptionID: ID!
location: Location
timestamp: String!
}
type OpenSubscriptionConnection {
items: [OpenSubscription]
nextToken: String
}
type Query {
# Query elasticsearch index for relevant subscriptions
openSubscriptionsNear(location: LocationInput, distance: String): OpenSubscriptionConnection
}
type Mutation {
# This mutation uses a local resolver (e.g. a resolver with a None data source) and simply returns the input as is.
publishPostToSubscription(input: PublishPostInput): PublishPostOutput
}
type Subscription {
# Anytime someone passes an object with the same subscriptionID to the "publishPostToSubscription" mutation field, get updated.
listenToSubscription(subscriptionID: ID!): PublishPostOutput
@aws_subscribe(mutations: ["publishPostToSubscription"])
}
Assuming you are using DynamoDB as your primary source of truth, set up a DynamoDB stream that invokes a "PublishIfInRange" Lambda function. That function would look something like this:
// event - { location: { lat, lon }, id, title, ... }
// callGraphql is a helper (not shown) that signs a request to the
// AppSync endpoint and returns the parsed GraphQL response.
async function lambdaHandler(event) {
  const result = await callGraphql(`
    query GetSubscriptions($location: LocationInput) {
      openSubscriptionsNear(location: $location, distance: "10 miles") {
        items {
          subscriptionID
        }
      }
    }
  `, { variables: { location: event.location } })
  const relevantSubscriptions = result.data.openSubscriptionsNear.items
  for (const subscription of relevantSubscriptions) {
    await callGraphql(`
      mutation PublishToSubscription($input: PublishPostInput) {
        publishPostToSubscription(input: $input) {
          id
          title
          location { lat lon }
          subscriptionID
        }
      }
    `, { variables: { input: { ...event, subscriptionID: subscription.subscriptionID } } })
  }
}
You will need to maintain a registry of subscriptions indexed by location. One way to do this is to have your client app call a mutation that creates a subscription object with a location and a subscriptionID (e.g. mutation { makeSubscription(loc: $loc) { ... } }, assuming you are using $util.autoId() to generate the subscriptionID in the resolver). Once you have the subscriptionID, you can make the subscription call through GraphQL and pass the subscriptionID as an argument (e.g. subscription { listenToSubscription(subscriptionID: "my-id") { id title location { lat lon } } }). When you make this subscription call, AppSync creates a topic and authorizes the current user to subscribe to that topic. The topic is unique to the subscription field being called and the set of arguments passed to it. In other words, the topic only receives objects that are published with that same subscriptionID.
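Concretely, the client-side handshake might look like the following sketch (makeSubscription is the registry mutation described above; its resolver is assumed to store the OpenSubscription record in the Elasticsearch-backed index):

# 1. Register interest in a location. The resolver generates the
#    subscriptionID with $util.autoId() and indexes it by location.
mutation {
  makeSubscription(loc: { lat: 40.71, lon: -74.01 }) {
    subscriptionID
  }
}

# 2. Listen on the returned ID. AppSync creates a topic scoped to
#    this field and argument set.
subscription {
  listenToSubscription(subscriptionID: "my-id") {
    id
    title
    location { lat lon }
  }
}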
Now whenever an object is created, the record goes to the Lambda function via DynamoDB Streams. The Lambda function queries Elasticsearch for all open subscriptions near that object and then publishes a record to each of those open subscriptions.
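Note that a DynamoDB Streams trigger actually delivers a batch of records in DynamoDB's typed attribute format, so in practice the handler would first unmarshal each inserted record. A sketch using the AWS SDK for JavaScript v2 (publishIfInRange stands in for the per-object logic in lambdaHandler above):

const AWS = require('aws-sdk');

exports.handler = async (streamEvent) => {
  for (const record of streamEvent.Records) {
    if (record.eventName !== 'INSERT') continue;
    // Convert the DynamoDB typed attribute map into a plain JS object
    const post = AWS.DynamoDB.Converter.unmarshall(record.dynamodb.NewImage);
    await publishIfInRange(post);
  }
};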
I believe this should get you reasonably far, but if you have millions of users in tight quarters you will likely run into scaling issues. Hope this helps.
I'm trying to add a todo from a Lambda function in AWS.
I created a Flutter project, added an API (the GraphQL basic Todo sample), and added a function (with mutations enabled).
The Lambda works and adds entries to the todo list. Unfortunately, the subscription started in Flutter returns an error:
Cannot return null for non-nullable type: 'AWSDateTime' within parent 'Chat' (/onCreateChat/createdAt)
I see this problem was solved on GitHub, where aleksvidak states:
I had a similar problem, and what I realised is that the mutation which in fact triggers the subscription has to have the same response fields as the ones specified for the subscription response. This way it works for me.
This seems to solve many people's problem. Unfortunately, I don't understand what it means for the basic Todo sample code.
Mutation:
type Mutation {
createTodo(input: CreateTodoInput!, condition: ModelTodoConditionInput): Todo
...
Subscription:
type Subscription {
onCreateTodo(filter: ModelSubscriptionTodoFilterInput): Todo
#aws_subscribe(mutations: ["createTodo"])
...
Isn't this code aligned with what aleksvidak said? The mutation has the same response type (Todo) as the subscription (Todo), right?
In my case, the updatedAt field was missing from the query that I perform inside the Lambda function.
const query = /* GraphQL */ `
mutation CREATE_TODO($input: CreateTodoInput!) {
createTodo(input: $input) {
id
name
updatedAt # <-- this was missing
}
}
`;
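The error above names createdAt, so by the same reasoning that field must also be selected by whichever mutation triggers the subscription, or the subscription will receive null for it. For the Todo sample, a fuller selection set might be (a sketch using the generated Todo fields):

const query = /* GraphQL */ `
  mutation CREATE_TODO($input: CreateTodoInput!) {
    createTodo(input: $input) {
      id
      name
      createdAt # non-nullable timestamps must be selected by the mutation
      updatedAt # so the subscription can return them
    }
  }
`;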
I've been handling batch operations with a for loop so far, but obviously this is not the best approach, especially as I'm adding an 'upload by CSV' option, which will involve 1000+ putItem calls.
I searched around for the best ways to implement this, specifically this link:
https://docs.aws.amazon.com/appsync/latest/devguide/tutorial-dynamodb-batch.html
However, even after following the steps mentioned there, I'm not able to achieve a batch operation. Below is my code for a 'batch delete' operation.
Here is my schema.graphql file:
type Client @model @auth(rules: [{ allow: owner }]) {
id: ID!
name: String!
company: String
phone: String
email: String
}
type Mutation {
batchDelete(ids: [ID]): [Client]
}
I then create two new files: one request mapping template and one response mapping template.
#set($clientsdata = [])
#foreach($item in ${ctx.args.clients})
$util.qr($clientsdata.delete($util.dynamodb.toMapValues($item)))
#end
{
"version" : "2018-05-29",
"operation" : "BatchDeleteItem",
"tables" : {
"Clients": $utils.toJson($clientsdata)
}
}
and then, as per the tutorial, a "simple pass-through" response mapping template:
$util.toJson($ctx.result.data.Posts)
However, now when I run the batch delete command, I keep getting nothing returned.
Would really appreciate guidance on this!
When it comes to performing DynamoDB batch operations with Amplify, note that the table name is actually different per environment, i.e. your "Client" table wouldn't be recognized as "Clients" as you have stated it in the request mapping template, but rather goes by the name it is given on amplify push, per environment.
E.g. Client-<some alphanumeric number>-envName
Add the full name of the table to your request and response mapping templates.
Also, your foreach statement should read:
#foreach($item in ${ctx.args.clientsdata}), so that you iterate through each of the items in the array passed as the argument on the context object.
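Putting both fixes together, a corrected pair of templates might look like the sketch below. Client-abc123xyz-dev stands in for whatever name amplify push actually gave the table, and the request template follows the batchDelete(ids: [ID]) argument from the schema above (it also builds the key list with add() rather than delete()):

Request mapping template:

#set($clientsdata = [])
#foreach($id in ${ctx.args.ids})
  $util.qr($clientsdata.add($util.dynamodb.toMapValues({ "id": $id })))
#end
{
  "version" : "2018-05-29",
  "operation" : "BatchDeleteItem",
  "tables" : {
    "Client-abc123xyz-dev": $util.toJson($clientsdata)
  }
}

Response mapping template:

$util.toJson($ctx.result.data.get("Client-abc123xyz-dev"))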
Hope this helps.
I'm building a simple multiplayer game using AppSync (to avoid having to manage WebSockets from scratch) and Amplify (specifically on Android, using Kotlin).
The problem is that players connected to a game should listen for updates from the other players (consider that I have a table on DynamoDB whose primary key is a UUID, with the corresponding game-id and position of the player).
So what I've done is that after the player joins his game session, he registers himself to onUpdate of Player... however, he now receives update notifications for all players. Even though I can easily filter these incoming events client side, I think it's a terrible idea, since AppSync has to notify all registered users of all updates.
Instead, I would like to specify a filter on the subscription, like when you perform a query; something like:
Amplify.API.subscribe(
ModelSubscription.onUpdate(
Player::class.java,
Player.GAMEROOM.eq(currentPlayer.gameroom.id) // only players in the same gameroom
),
{ Log.i("ApiQuickStart", "Subscription established") },
{ onCreated ->
...
},
{ onFailure -> Log.e("ApiQuickStart", "Subscription failed", onFailure) },
{ Log.i("ApiQuickStart", "Subscription completed") }
)
Is there a way to obtain this?
Since this is the only type of subscription that I will make to Player updates, maybe something can be done with a custom resolver?
You need to add one more parameter to your subscription, and to its subscribed mutations. I guess your GraphQL schema looks something like this:
type Subscription {
onUpdate(gameRoomId: ID!, playerId: ID): UpdateUserResult
#aws_subscribe(mutations: ["leaveCurrentGameRoom", <other_interested_mutation>])
}
type Mutation {
leaveCurrentGameRoom: UpdateUserResult
}
type UpdateUserResult {
gameRoomId: ID!
playerId: ID!
# other fields
}
Notice that Subscription.onUpdate has an optional parameter, playerId. The subscription can be used as follows:
onUpdate("room-1") // AppSync gives updates of all players in "room-1"
onUpdate("room-1", "player-A") // AppSync gives updates of "player-A" in "room-1"
In order for AppSync to filter out irrelevant updates, the subscribed mutations, such as leaveCurrentGameRoom, have to return a type that includes both fields, gameRoomId and playerId. Note that you may also need to update the resolvers of those mutations to populate these fields.
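On the client side, the generated ModelSubscription helpers won't know about these custom arguments, so one option is a raw GraphQL request. A minimal Kotlin sketch, assuming the onUpdate(gameRoomId:) field above (SimpleGraphQLRequest and GsonVariablesSerializer come from the Amplify Android library):

// Hypothetical document matching the custom Subscription.onUpdate above;
// the room id is interpolated directly rather than passed as a variable.
val document = """
    subscription OnUpdateInRoom {
      onUpdate(gameRoomId: "${currentPlayer.gameroom.id}") {
        gameRoomId
        playerId
      }
    }
""".trimIndent()

Amplify.API.subscribe(
    SimpleGraphQLRequest<String>(document, emptyMap<String, Any>(), String::class.java, GsonVariablesSerializer()),
    { Log.i("ApiQuickStart", "Subscription established") },
    { response -> Log.i("ApiQuickStart", "Player update: ${response.data}") },
    { error -> Log.e("ApiQuickStart", "Subscription failed", error) },
    { Log.i("ApiQuickStart", "Subscription completed") }
)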
Here is another example of subscription arguments
https://docs.aws.amazon.com/appsync/latest/devguide/aws-appsync-real-time-data.html#using-subscription-arguments
I am getting Network error {"type":"WriteError"} on my Apollo query. The query executes just fine and arrives at the client, but there is an issue writing it to the store. Any ideas what could be going wrong? This is the query:
fragment BpmnProcessInstanceItemTask on BpmnTaskInstance {
id
dateStarted
dateFinished
task {
name
__typename
}
performer {
name
__typename
}
performerRoles
__typename
}
fragment BpmnProcessInstanceItem on BpmnProcessInstance {
id
status
process {
name
description
type
__typename
}
owner {
name
__typename
}
tasks {
...BpmnProcessInstanceItemTask
__typename
}
dateStarted
dateFinished
__typename
}
query BpmnProcessInstancesQuery($input: BpmnProcessInstancesInput!) {
bpmnProcessInstancesQuery(input: $input) {
...BpmnProcessInstanceItem
__typename
}
}
"
I just ran into this myself, and found the solution here. It happens because the query loads data without an id (or ids), which then can't be merged with the existing data in the cache.
Take the following example query:
{
viewer {
id
fullName
groups {
id
name
}
}
}
The returned data will be stored in the cache with one entry for the viewer and one entry per group:
User:asidnajksduih6
Group:9p8h2uidbjqshd
Group:d9a78h92lnasax
If a subsequent query looked like:
{
viewer {
id
fullName
groups {
name
}
}
}
There could be a conflict because it's unclear which groups should be updated in the cache (the result set will not include group ids).
The solution appears to be to always use ids in your queries wherever possible. This avoids the merge issue and it improves the chances for subsequent unrelated queries to have a cache hit.
The above describes a cause and a solution. Possible symptoms of this problem include rendering stale data, or rendering no data even though the results are in your cache. As pointed out here, these errors happen silently; however, they can be seen via the Apollo Chrome extension in the "queries" tab.
The BpmnProcessInstanceItemTask fragment is overlapping the existing tasks object through the __typename field. The same happens in this code:
query BpmnProcessInstancesQuery($input: BpmnProcessInstancesInput!) {
bpmnProcessInstancesQuery(input: $input) {
...BpmnProcessInstanceItem
__typename # <-- same field stored twice
}
}
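If that duplication is the culprit, one fix is to let the fragment own the field; a sketch:

query BpmnProcessInstancesQuery($input: BpmnProcessInstancesInput!) {
  bpmnProcessInstancesQuery(input: $input) {
    ...BpmnProcessInstanceItem # the fragment already selects __typename
  }
}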
I see in the examples how to pass a message string to the Amazon SNS SDK's publish method. However, is there an example of how to pass a custom object as the message? I tried setting "MessageStructure" to "json", but then I get an InvalidParameter: Invalid parameter: Message Structure - No default entry in JSON message body error. Where should I be passing the object values into the params?
Any examples?
var params = {
Message: JSON.stringify(item),
MessageStructure: 'json',
TopicArn: topic
//MessageAttributes: item
};
return sns.publishAsync(params);
There is no SDK-supported way to pass a custom object as a message; messages are always strings. You can, of course, make the string a serialized version of your object.
MessageStructure: 'json' is for a different purpose: it is for when you want to pass different strings to different subscription types. In that case, you make the message a serialized JSON object with an AWS-defined structure, where each element defines the message to send to a particular type of subscription (email, sqs, etc.). Even in that case, the messages themselves are just strings.
MessageAttributes are parameters you add to the message to support specific subscription types. If you are using SNS to talk to Apple's iOS notification service, for example, you might have to supply additional message parameters or authentication keys; MessageAttributes provide a mechanism to do this. This is described in this AWS documentation.
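For illustration, each MessageAttributes entry is a typed key/value pair. A minimal sketch (the attribute name eventType and its value are made up for the example):

var params = {
  Message: 'A message.',
  TopicArn: topic,
  MessageAttributes: {
    eventType: {
      DataType: 'String',
      StringValue: 'orderCreated'
    }
  }
};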
An example is shown here: https://docs.aws.amazon.com/sns/latest/api/API_Publish.html#API_Publish_Example_2
The JSON format for Message is as follows:
{
"default": "A message.",
"email": "A message for email.",
"email-json": "A message for email (JSON).",
"http": "A message for HTTP.",
"https": "A message for HTTPS.",
"sqs": "A message for Amazon SQS."
}
So, assuming what you wanted to pass is an object, the way it worked for me was:
const messageObjToSend = {
...
}
const params = {
Message: JSON.stringify({
default: JSON.stringify( messageObjToSend )
}),
MessageStructure: 'json',
TopicArn: 'arn:aws:sns...'
}
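Then publish as usual; a sketch assuming the AWS SDK for JavaScript v2 promise interface:

const AWS = require('aws-sdk');
const sns = new AWS.SNS();

// inside an async function
await sns.publish(params).promise();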
Jackson 2 has pretty good support for converting an object to a JSON string and vice versa.
To String
Cat c = new Cat();
ObjectMapper mapper = new ObjectMapper();
String s = mapper.writeValueAsString(c);
To Object
Cat obj = mapper.readValue(s, Cat.class);
The Message needs to be a JSON string, and the default property needs to be added containing the (already serialized) JSON you want included in the email.
var defaultMessage = { "default": JSON.stringify(item) };
var params = {
    Message: JSON.stringify(defaultMessage), // was: JSON.stringify(item)
    MessageStructure: 'json',
    TopicArn: topic
    //MessageAttributes: item
};
return sns.publishAsync(params);
Using Python:
import json
import boto3

boto3.client("sns").publish(
    TopicArn=sns_subscription_arn,
    Subject="subject",
    Message=json.dumps({"default": json.dumps(item)}),  # note the required "default" entry
    MessageStructure="json",
)
FYI: if you go to the SNS topic in the AWS Console, you can "Publish message" and choose "Custom payload for each delivery protocol". There you will see a template of the message, where the "default" property is tagged as the "Sample fallback message".