How to get notified that a key has been created in Redis - Django

I have an outer service that inserts data into Redis (I can get the keys of the data), but this may take some time. So the question is: how can I find out that the data has arrived? I want to show it on a Django-based page.

Use Redis PUB/SUB.
When your other service inserts new data, have it publish the key on some channel.
Your Django app subscribes to the channel "datachanged":
./redis-cli subscribe "datachanged"
And your service sends the event over the channel:
./redis-cli set "key:abc123" "some value"
./redis-cli publish "datachanged" "key:abc123"
You can also use "Redis Keyspace Notifications" if your Redis is newer than 2.8.0 (http://redis.io/topics/notifications).
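For the Django side, a minimal sketch of this flow with the redis-py client is below; the channel and key names follow the commands above, and running the listener in a management command or background thread is an assumption about your setup.

    import redis

    r = redis.Redis(host="localhost", port=6379)

    # Django side: block until the outer service announces a new key.
    pubsub = r.pubsub()
    pubsub.subscribe("datachanged")

    for message in pubsub.listen():
        if message["type"] == "message":
            key = message["data"].decode()   # e.g. "key:abc123"
            value = r.get(key)               # fetch the newly inserted data
            print(f"new data arrived under {key}: {value}")

The outer service mirrors the redis-cli commands: r.set("key:abc123", "some value") followed by r.publish("datachanged", "key:abc123"). Note that to push the update into an already-rendered page you would still need polling or WebSockets (e.g. Django Channels) on top of this.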

Related

Google Cloud Pub/Sub retrieve message by ID

Problem: My use case is that I want to publish thousands of messages to Google Cloud Pub/Sub with a 5 min retention period, but only retrieve specific messages by their ID - so a Cloud Function will retrieve one message by ID using the Node.js SDK, and all the untreated messages will be deleted by the retention policy. All the current examples only show handling arbitrary messages from the subscriber.
Is it possible to pull just one message by ID (or any other metadata) and close the connection?
There is no way to retrieve individual messages by ID, no. It doesn't really fit the expected use cases for Cloud Pub/Sub, where publishers and subscribers are meant to be decoupled, meaning the subscriber inherently doesn't know the message IDs prior to receiving the messages.
You may instead want to transmit the messages via whatever mechanism you are using to make the subscribers aware of the message IDs. Or, if you know at publish time which messages will ultimately need to be retrieved, you could add an attribute to the message to indicate this and use filtering.
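For the second suggestion, here is a sketch of the attribute-plus-filter approach with the Python client; the project, topic, subscription, and attribute names are illustrative assumptions.

    from google.cloud import pubsub_v1

    publisher = pubsub_v1.PublisherClient()
    subscriber = pubsub_v1.SubscriberClient()
    topic_path = publisher.topic_path("my-project", "my-topic")
    sub_path = subscriber.subscription_path("my-project", "retrieve-sub")

    # Publish, flagging the messages that will later need to be retrieved.
    publisher.publish(topic_path, data=b"payload", retrieve="true")

    # A subscription whose filter only delivers the flagged messages;
    # everything else simply ages out under the retention policy.
    subscriber.create_subscription(
        request={
            "name": sub_path,
            "topic": topic_path,
            "filter": 'attributes.retrieve = "true"',
        }
    )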

How should I handle asynchronous processes that occur after API calls in AWS?

I'm designing the backend for a website that uses API Gateway and Lambda to handle API requests, many of which target a MySQL DB on RDS. Some processes need to happen asynchronously, and I'm debating which approach is best practice or cleaner.
In the given scenario, every time a user creates a new row in a certain table, an email also needs to be sent asynchronously. There are many other similar scenarios, but this one will set the precedent.
Option 1: In the Lambda that handles the API request, first write to the MySQL instance to add the new row. When the response from MySQL comes back successful, write to something like SQS, which will later be read by another Lambda that sends the email. When SQS confirms the record was added to the queue, send a 201 response saying the REST API call was successful.
Option 2: In the Lambda that handles the API request, write to the MySQL instance to add the new row. When the response from MySQL comes back successful, send a 201 response saying the REST API call was successful. Then set up a DMS (Database Migration Service) task that runs indefinitely, sending database modification binlogs to a Kinesis stream, which triggers a Lambda that handles all DB changes, reads the change as a new row in a certain table, and sends the email.
Option 1:
- less infrastructure
- more direct tracking of logic from an API call
- one extra HTTP call (to SQS), delaying response times for an API backing a web page
Option 2:
- more infrastructure (DMS task, replication instance)
- scaling out shards may mean losing ordering when processing binlog events, and ordering is a requirement here
- side question: can you choose the Kinesis hash key for DMS tasks from MySQL?
- a single codebase reacting to all modifications in the DB may actually make the logic simpler to follow
Is this the tradeoff, or am I missing something? What is best practice in this scenario?
Option 1, in my view, seems most logical, but I would replace SQS and the second Lambda with SNS. So a modified Option 1 could be:
Option 1 (modified): In the Lambda that handles the API request, first write to the MySQL instance to add the new row. When the response from MySQL comes back successful, publish a confirmation message to SNS, which sends the email. When the response from SNS is successful, send a 201 response saying the REST API call was successful.
This should be faster, cheaper, and easier to implement than using SQS and a second Lambda to send the email.
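A minimal sketch of that modified Option 1 in a Python Lambda, assuming a pymysql connection and a topic ARN in an environment variable; the table, columns, and event shape are placeholders.

    import json
    import os

    import boto3
    import pymysql

    sns = boto3.client("sns")

    def handler(event, context):
        # 1. Insert the new row.
        conn = pymysql.connect(
            host=os.environ["DB_HOST"],
            user=os.environ["DB_USER"],
            password=os.environ["DB_PASS"],
            database=os.environ["DB_NAME"],
        )
        try:
            with conn.cursor() as cur:
                cur.execute("INSERT INTO users (email) VALUES (%s)",
                            (event["email"],))
            conn.commit()
        finally:
            conn.close()

        # 2. Publish the confirmation; an email subscription on the SNS
        #    topic takes over from here.
        sns.publish(
            TopicArn=os.environ["EMAIL_TOPIC_ARN"],  # hypothetical env var
            Message=json.dumps({"email": event["email"]}),
        )

        # 3. Only now report success to the API caller.
        return {"statusCode": 201, "body": json.dumps({"created": True})}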

Propagating Error messages in Google Cloud Platform (GCP)

I am building a near-real-time service. The input is a Cloud Storage bucket and the blob path to a photo image. This horizontally scalable service is made up of multiple components, including ML models running on k8s and Google Cloud Functions, each of which can fail for a variety of reasons. The ML models are independent and run in parallel. Each component is triggered by a Pub/Sub push message on a topic unique to that component. Running the entire flow for one photo may take 15 seconds.
If there is a failure, I want to return a meaningful error message to the service requester telling it which component failed. Essentially, I want to report which image failed and where it failed.
What is the recommended practice for returning an error back to the requester?
There is no built-in service for this. But since you already use Pub/Sub for the asynchronous calls, I propose using it to push the errors back as well.
You can do this in two flavors.
First, create a Pub/Sub topic for the errors, let's say 'error_topic'.
1. Without message customization
In the Pub/Sub message, the requester identifies itself in an attribute (let's say a 'requester' attribute).
In the consumer service, if an error occurs, return an error code (500, for example) for a push subscription, or NACK the message for a pull subscription.
Configure the Pub/Sub subscription's retry policy and dead-letter topic (the dead-letter topic being 'error_topic'), as sketched below.
Then create one subscription per requester on 'error_topic' (use the filter capability for this) and consume the messages in the requester services.
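A sketch of that subscription configuration with the Python client; the project and topic names are assumptions.

    from google.cloud import pubsub_v1

    subscriber = pubsub_v1.SubscriberClient()
    project = "my-project"
    sub_path = subscriber.subscription_path(project, "work-sub")

    # After max_delivery_attempts NACKs / non-2xx responses, the message
    # is forwarded to 'error_topic'.
    subscriber.create_subscription(
        request={
            "name": sub_path,
            "topic": f"projects/{project}/topics/work-topic",
            "dead_letter_policy": {
                "dead_letter_topic": f"projects/{project}/topics/error_topic",
                "max_delivery_attempts": 5,
            },
        }
    )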
2. With message customization
In the Pub/Sub message, the requester identifies itself in an attribute (let's say a 'requester' attribute).
The consumer service that raises the error creates a new message with custom information, copies the 'requester' attribute value, and puts it in an attribute of the message it publishes to 'error_topic' (let's say an 'original_requester' attribute).
Then create one subscription per requester on 'error_topic' (use the filter capability for this) and consume the messages in the requester services.
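And a sketch of the custom error message in flavor 2; the attribute names follow the answer, the rest is assumed.

    from google.cloud import pubsub_v1

    publisher = pubsub_v1.PublisherClient()
    error_topic = publisher.topic_path("my-project", "error_topic")

    def report_failure(incoming_message, component_name, reason):
        # Copy the requester identity forward so it can be filtered on.
        publisher.publish(
            error_topic,
            data=reason.encode(),
            failed_component=component_name,
            original_requester=incoming_message.attributes["requester"],
        )

Each requester then consumes its own subscription on 'error_topic', created with a filter such as attributes.original_requester = "requester-a".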

Sending newly created primary key value back to app using Lambda

My mobile app sends a message to SQS, which triggers a Lambda function that inserts data into a SQL DB. When the Lambda creates the new row, the DB generates a primary key. I want to send that new primary key value back to my mobile app before my Lambda function finishes running.
Should I use SNS to send the value? All opinions appreciated!
A few ideas come to mind:
1) When your mobile app creates the SQS message, it should include some sort of callback information in the payload so that the Lambda knows how to reach back to the mobile app and send it the primary key.
2) This sounds like it should be a synchronous REST API call. Instead of the mobile app creating a message on a queue, could it instead invoke your Lambda function via a synchronous API Gateway request, which can then return the primary key directly to the caller?
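A minimal sketch of idea 2, assuming an API Gateway proxy integration and a pymysql connection; the table and column names are placeholders.

    import json
    import os

    import pymysql

    def handler(event, context):
        body = json.loads(event["body"])
        conn = pymysql.connect(
            host=os.environ["DB_HOST"],
            user=os.environ["DB_USER"],
            password=os.environ["DB_PASS"],
            database=os.environ["DB_NAME"],
        )
        try:
            with conn.cursor() as cur:
                cur.execute("INSERT INTO items (name) VALUES (%s)",
                            (body["name"],))
                new_id = cur.lastrowid   # the auto-generated primary key
            conn.commit()
        finally:
            conn.close()

        # Returned straight to the mobile app through API Gateway.
        return {"statusCode": 201, "body": json.dumps({"id": new_id})}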

Keyspace event in AWS Redis

I have enabled "notify-keyspace-events" for my Redis node, and I am getting the events published on key changes through my subscription.
But I want to understand what Redis does with events that are due to be published when there are no subscribers for a key.
Any information or links that could help me understand would be appreciated.
It is a fire-and-forget model. If there are no subscribers available, Redis drops those events. Events are also dropped when a subscriber is temporarily unavailable or unable to consume them.
Documentation from Redis:
https://redis.io/topics/notifications
A snippet from the documentation:
Because Redis Pub/Sub is fire and forget currently there is no way to use this feature if your application demands reliable notification of events, that is, if your Pub/Sub client disconnects, and reconnects later, all the events delivered during the time the client was disconnected are lost.
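A minimal sketch of a keyspace-event subscriber with redis-py, illustrating the fire-and-forget behavior described above: any event fired while this process is not connected is simply lost. On AWS ElastiCache the notify-keyspace-events setting is applied through a parameter group rather than CONFIG SET.

    import redis

    r = redis.Redis(host="localhost", port=6379)
    # "KEA": keyspace + keyevent notifications for all event classes.
    # On ElastiCache, set this via the parameter group instead.
    r.config_set("notify-keyspace-events", "KEA")

    pubsub = r.pubsub()
    pubsub.psubscribe("__keyevent@0__:*")    # all key events in DB 0

    for message in pubsub.listen():
        if message["type"] == "pmessage":
            event = message["channel"].decode()  # e.g. "__keyevent@0__:set"
            key = message["data"].decode()       # the key that changed
            print(f"{event} -> {key}")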