How can I publish to a subscriber based on connection ID via the Go library `graph-gophers`?

I am using the graph-gophers library to build a GraphQL application on the server side. I have a Lambda function that acts as the resolver layer and a WebSocket API Gateway that acts as the transport layer. The Lambda takes the request from API Gateway and then calls the graph-gophers schema methods to trigger the resolver. This works well for queries and mutations, but I am not sure how to make it work for subscriptions.
The graph-gophers library requires the subscription resolver to return a Go channel that it listens on. But a Lambda is a short-lived process that can't stay alive for long, which means I can't use a channel to publish data.
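For reference, this is roughly the channel-returning resolver shape the library expects (a sketch with hypothetical type names; check the graph-gophers examples for the exact supported signatures):

```go
// Sketch of a channel-based subscription resolver for graph-gophers.
// The library keeps reading from the returned channel for the life of
// the subscription, which is exactly what a short-lived Lambda cannot
// provide.
package resolvers

import "context"

type MessageResolver struct{ text string }

func (m *MessageResolver) Text() string { return m.text }

type Resolver struct{}

// MessageAdded returns a channel the library listens on; each value
// sent on it becomes a push to the subscriber.
func (r *Resolver) MessageAdded(ctx context.Context) <-chan *MessageResolver {
	ch := make(chan *MessageResolver)
	go func() {
		defer close(ch)
		select {
		case ch <- &MessageResolver{text: "hello"}:
		case <-ctx.Done(): // subscription cancelled
		}
	}()
	return ch
}
```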
What I am doing now is saving the WebSocket connection ID in a database; when there is data to publish, I grab the connection IDs from the DB to find all the subscribers. But I don't know how to trigger the publish in this case. Does anyone have any idea how to do that?
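For what it's worth, pushing data to a stored connection ID from any process is done through the API Gateway Management API (`PostToConnection`). A minimal sketch, assuming a recent aws-sdk-go-v2; the `publish` helper and `callbackURL` parameter are illustrative names:

```go
// Hypothetical publisher: pushes a payload to each stored subscriber
// connection ID via the API Gateway Management API.
package publisher

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/apigatewaymanagementapi"
)

// publish sends payload to every connection ID. callbackURL is the
// WebSocket API's callback endpoint:
// https://{api-id}.execute-api.{region}.amazonaws.com/{stage}
func publish(ctx context.Context, callbackURL string, connectionIDs []string, payload []byte) error {
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		return err
	}
	client := apigatewaymanagementapi.NewFromConfig(cfg, func(o *apigatewaymanagementapi.Options) {
		o.BaseEndpoint = aws.String(callbackURL)
	})
	for _, id := range connectionIDs {
		_, err := client.PostToConnection(ctx, &apigatewaymanagementapi.PostToConnectionInput{
			ConnectionId: aws.String(id),
			Data:         payload,
		})
		if err != nil {
			// A GoneException here means the client disconnected;
			// prune that connection ID from the database.
			log.Printf("post to %s failed: %v", id, err)
		}
	}
	return nil
}
```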

Related

AWS API Gateway - Is there a way to append metadata to the connection session, so it propagates to disconnect when that is triggered?

So I need to build a WebSocket API for my org. The requirements from the business are pretty typical websocket pattern stuff except for one detail:
This WebSocket API will be used by different teams in our org, and each team needs its own separate ActiveConnections DynamoDB table.
Now in a typical WebSocket API, there would be a single connections table that the connect and disconnect Lambda functions write to and delete from. Also, the hooks in the WebSocket API ensure that the connectionId needed to identify a connection/session is always in the event.requestContext. Easy peasy for a single connections table.
However, in my approach of having a separate active-connections table per team, it gets more complicated. Yes, it's true that for the connect Lambda it is very easy to write code that expects a "TeamDatabaseID" from somewhere in the initial connection request - headers, queryStringParameters, etc.
The problem is in the subsequent disconnect that could be triggered from either client or server. The disconnect hook will run the disconnect function, and pass in the default requestContext with the connectionId, but with no TeamDatabaseID - which the disconnect lambda needs to have access to in order to know which database to delete from.
Is there a way to do this? Is there some notion of a context object that I can set values in from the initial connection, so that when the disconnect happens, the teamDatabaseID is propagated in some way to the subsequent disconnect lambda? I tried writing to the requestContext - and that seems to only be alive for the execution of the given lambda.
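For concreteness, here is roughly what the two handlers described above look like with aws-lambda-go (a sketch; the table writes are elided and the TeamDatabaseID parameter name is the one from the question). Note that the $disconnect event carries only the connection ID:

```go
// Sketch of the $connect/$disconnect handlers described in the question.
package handlers

import (
	"context"
	"fmt"

	"github.com/aws/aws-lambda-go/events"
)

func handleConnect(ctx context.Context, req events.APIGatewayWebsocketProxyRequest) (events.APIGatewayProxyResponse, error) {
	teamID := req.QueryStringParameters["TeamDatabaseID"] // only available at $connect
	connID := req.RequestContext.ConnectionID
	// Write (connID, teamID) to the team's ActiveConnections table here.
	fmt.Println("connect:", connID, teamID)
	return events.APIGatewayProxyResponse{StatusCode: 200}, nil
}

func handleDisconnect(ctx context.Context, req events.APIGatewayWebsocketProxyRequest) (events.APIGatewayProxyResponse, error) {
	connID := req.RequestContext.ConnectionID
	// No TeamDatabaseID arrives here, so this handler cannot tell which
	// team's table to delete from without an extra lookup keyed by connID.
	fmt.Println("disconnect:", connID)
	return events.APIGatewayProxyResponse{StatusCode: 200}, nil
}
```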
Instead of having a single Amazon API Gateway Web Socket API for multiple teams, could you instead have one Web Socket API per team?

How should I handle asynchronous processes that occur after API calls in AWS?

I'm designing the backend for a website that uses API Gateway and Lambda to handle API requests, many of which target a MySQL DB on RDS. Some processes need to happen asynchronously, but I'm debating which approach is best practice or cleaner.
In the given scenario, every time a user creates a new row in a certain table, let's say an email also needs to be sent asynchronously. There are many other scenarios similar to this, but this one will set a precedent.
Option 1: In the Lambda that handles the API request, first write to the MySQL instance to add the new row. When the response from MySQL comes back successful, write to something like SQS, which will later be read by another Lambda that sends an email. When the response from SQS confirms that the record was added to the queue, send a 201 response saying the REST API call was successful.
Option 2: In the Lambda that handles the API request, write to the MySQL instance to add the new row. When the response from MySQL comes back successful, send a 201 response saying the REST API call was successful. Then set up a DMS (Database Migration Service) task that runs indefinitely, sending database modification binlogs to a Kinesis stream, which triggers a Lambda that handles all DB changes, reads the change as a new row in a certain table, and sends an email.
Option 1:
- less infrastructure
- more direct tracking of logic from an API call
- 1 extra HTTP call (to SQS), delaying response times for an API backing a web page

Option 2:
- more infrastructure (DMS task, replication instance)
- scaling out shards may mean loss of ordering when processing binlog events, if ordering is a requirement (it is)
- side question: are you able to choose the hash key for Kinesis for DMS tasks from MySQL?
- a single codebase for reacting to all modifications in the DB may actually make following the logic in code simpler
Is this the tradeoff or am I missing something? What is best practice in this scenario?
Option 1 in my view seems most logical, but I would replace SQS and the second Lambda with SNS. So, modified Option 1 could be:
Option 1 (modified): In the Lambda that handles the API request, first write to the MySQL instance to add the new row. When the response from MySQL comes back successful, publish a confirmation message to SNS, which sends the email. When the response from SNS is successful, send a 201 response saying the REST API call was successful.
This should be faster, cheaper, and easier to implement than using SQS and a second Lambda for sending email.
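A minimal sketch of that modified flow, assuming aws-sdk-go-v2 and database/sql (the table, column, and topic ARN parameter are illustrative):

```go
// Sketch of modified Option 1: insert the row, then publish a
// confirmation to SNS before the 201 is returned. SNS fans out to an
// email-sending subscriber asynchronously, so the API response does not
// wait for the email itself.
package handler

import (
	"context"
	"database/sql"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/sns"
)

func createRow(ctx context.Context, db *sql.DB, topicARN, email string) (int64, error) {
	res, err := db.ExecContext(ctx, "INSERT INTO users (email) VALUES (?)", email)
	if err != nil {
		return 0, err
	}
	id, err := res.LastInsertId()
	if err != nil {
		return 0, err
	}
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		return 0, err
	}
	_, err = sns.NewFromConfig(cfg).Publish(ctx, &sns.PublishInput{
		TopicArn: aws.String(topicARN),
		Message:  aws.String(email), // payload the email subscriber needs
	})
	return id, err
}
```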

Send push notifications/emails when a query/mutation happens in AppSync/Aurora

I am using AppSync with Aurora/RDS.
I would like that, in some cases, after a query/mutation is sent to the DB, an email and a push notification are sent. This should be detached from the query/mutation; it does not matter whether it fails or succeeds.
At the moment I see all these options:
Can you tell me which one I should use?
1. Create a query that calls a Lambda function that sends the push/email, and call it from the client once the actual query/mutation is done. I don't like this because the logic is in the client rather than the server. It seems easy to implement, and I guess it is easy to ignore the result of the second operation from a client point of view.
2. A variation of the previous one: pack both operations in a single network request. With GraphQL, that is easy, but I don't want the client to wait for the second operation. (Is it possible to create Lambda functions that return immediately, like a trigger of other functions? See the sketch after this list.)
3. Attach my queries/mutations to Lambda functions instead of RDS directly. Then those Lambda functions call other Lambda functions for notifications. Seems more difficult to program, but more microservices-architecture friendly. Probably this is the best one, not sure.
4. Use SQL triggers and call Lambda functions from those triggers. I don't know if this is even possible. Researching...
5. Use pipeline resolvers. The first one is the query/mutation, the second one is the Lambda function that sends the push/email. I would say this is a bad option because I don't want the client to wait for the second operation or to manage the logic when the second resolver fails.
6. Amazon RDS events: it appears it is possible to attach Lambda functions to specific AWS RDS events. https://docs.aws.amazon.com/lambda/latest/dg/services-rds.html It seems to be about creating DBs, restoring, and that kind of thing. I don't see anything like creating a row or updating a row, so I discard this unless I am wrong.
7. Invoke a Lambda function with an Aurora MySQL stored procedure: CALL mysql.lambda_async(lambda_function_ARN, lambda_function_input). https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Integrating.Lambda.html "For example, you might want to send a notification using Amazon Simple Notification Service (Amazon SNS) whenever a row is inserted into a specific table in your database." That is exactly what I am looking for. I like this idea, but I don't know if it is possible with Aurora Serverless. Researching... It seems it is not possible when using serverless: https://www.reddit.com/r/aws/comments/a9szid/aurora_serverless_call_lambda/
8. Use Step Functions: no idea how to use them.
9. Somehow attach this Lambda notification function to GraphQL/AppSync instead of the database, but I guess it is not a good idea because I need to read the database to get the push notification token and the email of the user who is going to receive the notifications.
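On the parenthetical in option 2: a Lambda function can hand work to another Lambda without waiting for it by invoking it asynchronously with InvocationType "Event". A minimal sketch, assuming aws-sdk-go-v2 (the function name is hypothetical):

```go
// Fire-and-forget invocation of a notification Lambda. With
// InvocationType "Event", the call returns as soon as the event is
// queued (HTTP 202), not when the target function finishes.
package notify

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/lambda"
	"github.com/aws/aws-sdk-go-v2/service/lambda/types"
)

func fireAndForget(ctx context.Context, payload []byte) error {
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		return err
	}
	_, err = lambda.NewFromConfig(cfg).Invoke(ctx, &lambda.InvokeInput{
		FunctionName:   aws.String("sendNotification"), // hypothetical name
		InvocationType: types.InvocationTypeEvent,      // async invocation
		Payload:        payload,
	})
	return err
}
```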
Which method do you recommend? I am using the Amplify CLI.
Thanks a lot.
Currently AWS AppSync can only send notifications when the app is active. We are looking into implementing the non-active case.
If you want to send notifications when the app is not active, you can use push notifications on iOS (silent push/interactive push) or push notifications on Android.
If you want to send emails, voice/text messages, or notifications on the phone when the app is not active, you can integrate with Amazon Pinpoint.

Sending newly created primary key value back to app using Lambda

My mobile app sends a message to SQS, which triggers a Lambda function that inserts data into a SQL DB.
When it creates the new row, it generates a primary key. I want to send that new primary key value back to my mobile app before my Lambda function is done running.
Should I use SNS to send the value? All opinions appreciated!
A few ideas come to mind:
1) When your mobile app creates the SQS message, it should include some sort of callback information in the payload so that the Lambda knows how to reach back to the mobile app and send the primary key information.
2) This sounds like it should be a synchronous REST API call. Instead of the mobile app creating a message on a queue, could it instead invoke your Lambda function via a synchronous API Gateway request, which can then directly return the primary key to the caller?
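A sketch of the second idea, assuming aws-lambda-go and database/sql (the DSN environment variable and table are illustrative):

```go
// Synchronous API Gateway handler that inserts a row and returns the
// generated primary key directly in the HTTP response.
package main

import (
	"context"
	"database/sql"
	"fmt"
	"os"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
	_ "github.com/go-sql-driver/mysql"
)

var db *sql.DB

func init() {
	var err error
	db, err = sql.Open("mysql", os.Getenv("DSN")) // hypothetical env var
	if err != nil {
		panic(err)
	}
}

func handler(ctx context.Context, req events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
	res, err := db.ExecContext(ctx, "INSERT INTO items (payload) VALUES (?)", req.Body)
	if err != nil {
		return events.APIGatewayProxyResponse{StatusCode: 500}, err
	}
	id, _ := res.LastInsertId()
	// The caller receives the new primary key in the response body.
	return events.APIGatewayProxyResponse{
		StatusCode: 201,
		Body:       fmt.Sprintf(`{"id":%d}`, id),
	}, nil
}

func main() { lambda.Start(handler) }
```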

Best way to retrieve active calls without making request each second?

We need to create a monitor that will show any incoming calls in our extranet in real time.
We were able to show active calls by using /account/~/extension/~/active-calls; however, to achieve what we need, we would have to make a request every second, which I guess would be blocked by rate limits.
Is there a better solution for it?
Thanks
The Subscription (Push Notification) API resource empowers developers to enable client applications to create a single subscription (to one or more extensions) and continually receive push notifications in real time for each subscribed extension. When using this approach for your applications to receive events on your RingCentral account, no polling is involved.
You can create a subscription using either of the transport types below (the transportType field in the subscription request) for receiving push notifications:
PubNub
WebHook
Notifications which the client wants to receive can be specified by the event filters which are set in the subscription request. The event filter is exposed as a URL, pointing to the required RingCentral API resource. Currently the following event types are available for notifications: extensions, messages and presence. They are described in detail below:
Notifications Event Types
You can take a look at the Subscription API below:
Subscription API
If you are interested in subscribing to push notifications via WebHook, we have an easy-to-follow Quickstart guide here:
RingCentral Webhooks Quickstart Guide
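As an illustration, creating a WebHook subscription is a single authenticated POST to the Subscription API (a sketch; the access token, webhook address, and event filter are placeholders - see the Quickstart guide above for the authoritative payload):

```go
// Sketch: create a RingCentral WebHook subscription so incoming-call
// events are pushed to your endpoint instead of being polled for.
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	body := []byte(`{
	  "eventFilters": ["/restapi/v1.0/account/~/extension/~/presence"],
	  "deliveryMode": {"transportType": "WebHook", "address": "https://example.com/ringcentral-hook"}
	}`)
	req, err := http.NewRequest("POST", "https://platform.ringcentral.com/restapi/v1.0/subscription", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer <ACCESS_TOKEN>") // placeholder token
	req.Header.Set("Content-Type", "application/json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status) // 200 with a subscription ID on success
}
```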