Sending newly created primary key value back to app using Lambda - amazon-web-services

My mobile app sends a message to SQS, which triggers a Lambda function
that inserts data into an SQL DB.
When it creates the new row, the database generates a primary key. I want to send that
new primary key value back to my mobile app before my Lambda function finishes running.
Should I use SNS to send the value? All opinions appreciated!

A few ideas come to mind:
1) When your mobile app creates the SQS message, it could include some sort of callback information in the payload so that the Lambda knows how to reach back to the mobile app and send the primary key information.
2) This sounds like it should be a synchronous REST API call. Instead of the mobile app creating a message on a queue, could it invoke your Lambda function via a synchronous API Gateway request, which can then directly return the primary key to the caller (see the sketch below)?
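A minimal sketch of idea 2, assuming a MySQL table `items` with an auto-increment key and the `pymysql` driver (both placeholders for your actual schema and client):

```python
import json
import os
import pymysql  # assumed MySQL driver; any client exposing lastrowid works

# Connection details are illustrative; pull them from your own config.
conn = pymysql.connect(
    host=os.environ["DB_HOST"],
    user=os.environ["DB_USER"],
    password=os.environ["DB_PASSWORD"],
    database="appdb",
)

def handler(event, context):
    """Synchronous API Gateway handler: insert the row, return its new key."""
    body = json.loads(event["body"])
    with conn.cursor() as cur:
        cur.execute("INSERT INTO items (name) VALUES (%s)", (body["name"],))
        conn.commit()
        new_id = cur.lastrowid  # the auto-generated primary key
    return {"statusCode": 201, "body": json.dumps({"id": new_id})}
```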

Related

How can I publish to a subscriber based on connection id via the Go library `graph-gophers`

I am using the graph-gophers library to build a GraphQL application on the server side. I have a Lambda that works as the resolver and a WebSocket API Gateway that works as the transport layer. The Lambda takes the request from API Gateway and then calls the graph-gophers schema methods to trigger the resolver. It works well for queries and mutations, but I am not sure how to make it work for subscriptions.
The graph-gophers library requires the subscription resolver to return a Go channel that it listens to, but a Lambda is a short-lived application and can't stay alive for long. That means I can't use a channel for publishing data.
What I am doing is saving the WebSocket connection id in a database; when there is data to publish, I grab the connection ids from the DB to find all the subscribers. But I don't know how to trigger the publish in this case. Does anyone have any idea how to do that?
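One way to trigger the publish is to have whatever produces the new data (another Lambda, a stream, etc.) look up the stored connection ids and call the API Gateway Management API for each one. A sketch in Python/boto3 for brevity (the Go SDK exposes the same PostToConnection operation); the endpoint URL is a placeholder for your WebSocket API's stage:

```python
import json
import boto3

# Placeholder endpoint: https://{api-id}.execute-api.{region}.amazonaws.com/{stage}
apigw = boto3.client(
    "apigatewaymanagementapi",
    endpoint_url="https://abc123.execute-api.us-east-1.amazonaws.com/prod",
)

def publish(connection_ids, payload):
    """Push a payload to every subscriber connection id fetched from the DB."""
    for cid in connection_ids:
        try:
            apigw.post_to_connection(
                ConnectionId=cid,
                Data=json.dumps(payload).encode(),
            )
        except apigw.exceptions.GoneException:
            # The client disconnected; delete the stale id from the database.
            pass
```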

Can a lambda return a response and wait for a new body without closing the session?

I am running a Puppeteer function in AWS Lambda, and I have a scenario where the user makes a POST request to the Lambda with his username and email. The function checks whether they are valid on a website and returns JSON to the user with the answer. Is it possible to use the same Lambda session to receive another input/body from the user?
The reason I need it to be the same session is that each time a username and email are sent to the Lambda, the Puppeteer website generates unique IDs that need to be used right AFTER the user sends his data, because it is logged into the website with a unique session.
I'm currently running this function in NodeJS and it is fine because the session isn't going to be closed, but in Lambda the session is closed once the function returns the first response.
As people mentioned above, a Lambda function is a stateless resource, so you can use DynamoDB to store values such as a session ID.
Additionally, if the Lambda function should wait for a response or for updated values by querying DynamoDB, you can use AWS Step Functions or Airflow, which provide a "Wait" state.
See what states you can leverage in the AWS docs.
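To make that concrete, a minimal sketch of parking serializable session values (IDs, cookies, tokens; you cannot store the live Puppeteer browser itself) in a hypothetical DynamoDB table named `sessions`:

```python
import boto3

# Hypothetical table with partition key "session_id".
table = boto3.resource("dynamodb").Table("sessions")

def save_session(session_id, values):
    """First invocation: persist whatever the follow-up request will need."""
    table.put_item(Item={"session_id": session_id, "values": values})

def load_session(session_id):
    """Later invocation: rebuild state from the stored item (None if absent)."""
    return table.get_item(Key={"session_id": session_id}).get("Item")
```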

How should I handle asynchronous processes that occur after API calls in AWS?

I'm designing the backend for a website that uses API Gateway and Lambda to handle API requests, many of which target a MySQL DB on RDS. Some processes need to happen asynchronously, but I'm debating which approach is best practice or cleaner.
In the given scenario, every time a user creates a new row in a certain table, let's say an email also needs to be sent asynchronously. There are many other scenarios similar to this, but this one will set the precedent.
Option 1: In the Lambda that handles the API request, first write to the MySQL instance to add the new row. When the response from MySQL comes back successful, write to something like SQS, which will later be read by another Lambda that sends an email. When SQS confirms that the record was added to the queue, send a 201 response saying the REST API call was successful.
Option 2: In the Lambda that handles the API request, write to the MySQL instance to add the new row. When the response from MySQL comes back successful, send a 201 response saying the REST API call was successful. Then set up a DMS (Database Migration Service) task that runs indefinitely, sending database modification binlogs to a Kinesis stream, which triggers a Lambda that handles all DB changes, reads the change as a new row in a certain table, and sends an email.
Option 1:
- less infrastructure
- more direct tracking of logic from an API call
- 1 extra HTTP call (to SQS), delaying response times for an API backing a web page

Option 2:
- more infrastructure (DMS task, replication instance)
- scaling out shards may mean loss of ordering when processing binlog events, if ordering is a requirement (it is)
- side question: are you able to choose the hash key for Kinesis for DMS tasks from MySQL?
- a single codebase for reacting to all modifications in the DB may actually make following the logic in code simpler
Is this the tradeoff or am I missing something? What is best practice in this scenario?
Option 1 in my view seems the most logical, but I would replace SQS and the second Lambda with SNS. So, a modified Option 1 could be:
Option 1 (modified): In the Lambda that handles the API request, first write to the MySQL instance to add the new row. When the response from MySQL comes back successful, publish a confirmation message to an SNS topic that sends the email. When the SNS publish succeeds, send a 201 response saying the REST API call was successful.
This should be faster, cheaper, and easier to implement than using SQS and a second Lambda for sending the email.
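A sketch of that modified Option 1, assuming a topic with an email-sending subscriber already attached; the topic ARN is a placeholder and `insert_row` stands in for the existing MySQL write:

```python
import json
import boto3

sns = boto3.client("sns")
# Hypothetical topic ARN; an email subscription or a sending Lambda
# is subscribed to it.
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:new-row-notifications"

def handler(event, context):
    body = json.loads(event["body"])
    new_id = insert_row(body)  # your existing MySQL write (not shown)
    # Publish only after the DB write succeeds; the email goes out of band.
    sns.publish(TopicArn=TOPIC_ARN, Message=json.dumps({"id": new_id}))
    return {"statusCode": 201, "body": json.dumps({"id": new_id})}
```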

Send push notifications/emails when a query/mutation happens in AppSync/Aurora

I am using AppSync with Aurora/RDS.
In some cases, after a query/mutation is sent to the DB, I want to send an email and a push notification, but this should be detached from the query/mutation; that is, it does not matter whether it fails or succeeds.
At the moment I see all these options. Can you tell me which one I should use?
1) Create a query that calls a Lambda function that sends the push/email, and call it from the client once the actual query/mutation is done. I don't like this because the logic lives in the client rather than the server. It seems easy to implement, and I guess it is easy to ignore the result of the second operation from the client's point of view.
2) A variation of the previous one: pack both operations in a single network request. With GraphQL that is easy, but I don't want the client to wait for the second operation. (Is it possible to create Lambda functions that return immediately, like a trigger of other functions? See the sketch after this list.)
3) Attach my queries/mutations to Lambda functions instead of RDS directly. Then those Lambda functions call other Lambda functions for notifications. It seems harder to program, but more microservices friendly. Probably this is the best one; not sure.
4) Use SQL triggers and call Lambda functions from those triggers. I don't know if this is even possible. Researching...
5) Use pipeline resolvers. The first one is the query/mutation; the second one is the Lambda function that sends the push/email. I would say this is a bad option because I don't want the client to wait for the second operation or to manage the logic when the second resolver fails.
6) Amazon RDS events: it appears it is possible to attach Lambda functions to specific AWS RDS events (https://docs.aws.amazon.com/lambda/latest/dg/services-rds.html). It seems to be about creating DBs, restoring, and that kind of thing. I don't see anything like creating a row or updating a row, so I discard this unless I am wrong.
7) Invoke a Lambda function with an Aurora MySQL stored procedure: CALL mysql.lambda_async(lambda_function_ARN, lambda_function_input) (https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Integrating.Lambda.html). "For example, you might want to send a notification using Amazon Simple Notification Service (Amazon SNS) whenever a row is inserted into a specific table in your database." That is exactly what I am looking for. I like this idea, but I don't know if it is possible with Aurora Serverless. Researching... It seems it is not possible when using serverless: https://www.reddit.com/r/aws/comments/a9szid/aurora_serverless_call_lambda/
8) Use Step Functions: no idea how to use them here.
9) Somehow attach this Lambda notification function to GraphQL/AppSync instead of the database. I guess it is not a good idea, though, because I need to read the database to get the push notification token and the email of the user who is going to receive the notifications.
Which method do you recommend? I am using the Amplify CLI.
Thanks a lot.
Currently AWS AppSync can only send notifications when the app is active. We are looking into supporting the non-active case.
If you want to send notifications when the app is not active, you can use push notifications on iOS (silent push/interactive push) or push notifications on Android.
If you want to send emails, voice/text messages, or notifications on the phone when the app is not active, you can integrate with Amazon Pinpoint.
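For the email case, a hedged sketch of sending through Pinpoint's SendMessages API (the project id and addresses are placeholders):

```python
import boto3

pinpoint = boto3.client("pinpoint")

def send_email(application_id, to_address):
    """Send a transactional email through a Pinpoint project."""
    pinpoint.send_messages(
        ApplicationId=application_id,  # placeholder Pinpoint project id
        MessageRequest={
            "Addresses": {to_address: {"ChannelType": "EMAIL"}},
            "MessageConfiguration": {
                "EmailMessage": {
                    "FromAddress": "no-reply@example.com",
                    "SimpleEmail": {
                        "Subject": {"Data": "New activity"},
                        "TextPart": {"Data": "A row you follow was updated."},
                    },
                }
            },
        },
    )
```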

Lambda boto3 background functions with api

I'm trying to build a basic AWS Lambda API and function setup to do the following:
Part 1: The client calls the function via the API, which kicks off a one-minute background function to process data and returns a quick message to the client in the browser.
Part 2: When the background function is complete, it returns a 302 redirect to the client with a generated link.
I'm stuck on Part 2. How can I go from the background function, to the API, back to the client?
I'm using Python boto3 for my Lambda scripts.
This is AWS Lambda, so your client doesn't have a persistent connection to the server-side code.
Here is an idea of one way to build this (sketched in code below):
- your client makes an API request that triggers a Lambda function
- on invocation, your Lambda function generates a new, unique id (a UUID) and writes it to DynamoDB so that this UUID can later be associated with the result of the background processing
- the Lambda kicks off the background processing, passing the UUID to it
- the Lambda returns the generated UUID to the client
- the background processing happens asynchronously, ultimately writing any results to the DynamoDB item associated with the UUID that triggered it
- the client polls another API periodically, say every 10s, sending in the UUID it was given
- the polled Lambda takes the presented UUID, does a lookup in DynamoDB, and returns a 302 redirect to the result URL, or an indication that the results aren't ready yet (e.g. HTTP 404)
- some process that you create removes the item from DynamoDB later (or not)
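A sketch of the two API-facing handlers under those assumptions (the table name, worker function, and result field are all placeholders):

```python
import json
import os
import uuid
import boto3

table = boto3.resource("dynamodb").Table("jobs")  # hypothetical results table
lam = boto3.client("lambda")

def start_handler(event, context):
    """API #1: register a job, kick off the background work, return the id."""
    job_id = str(uuid.uuid4())
    table.put_item(Item={"job_id": job_id, "status": "pending"})
    lam.invoke(  # async invoke: returns immediately
        FunctionName=os.environ["WORKER_FUNCTION"],  # hypothetical worker
        InvocationType="Event",
        Payload=json.dumps({"job_id": job_id}).encode(),
    )
    return {"statusCode": 202, "body": json.dumps({"job_id": job_id})}

def poll_handler(event, context):
    """API #2: the client polls with its UUID until the result is ready."""
    job_id = event["queryStringParameters"]["job_id"]
    item = table.get_item(Key={"job_id": job_id}).get("Item")
    if item and item.get("status") == "done":
        return {"statusCode": 302, "headers": {"Location": item["result_url"]}}
    return {"statusCode": 404}  # not ready yet; poll again in ~10s
```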