I have enabled "notify-keyspace-events" on my Redis node, and on subscription I am receiving the events published on key changes.
But I want to understand what Redis does with the events it publishes when there are no subscribers for a key.
Any information or links that could help me understand would be appreciated.
It is a fire-and-forget model. If there are no subscribers, Redis simply drops those events. It will drop an event even when a subscriber exists but is disconnected or unable to receive it at the moment of publication.
Documentation from Redis:
https://redis.io/topics/notifications
Snippet from the documentation:
Because Redis Pub/Sub is fire and forget currently there is no way to
use this feature if your application demands reliable notification of
events, that is, if your Pub/Sub client disconnects, and reconnects
later, all the events delivered during the time the client was
disconnected are lost.
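For illustration, a minimal sketch of listening for keyspace events from Node.js with the ioredis client (the client choice and database index 0 are assumptions; the channel format is the standard __keyevent@<db>__ one):

```typescript
import Redis from "ioredis";

async function main() {
  const sub = new Redis();

  // Equivalent of setting notify-keyspace-events in redis.conf
  // ("KEA" enables all classes of events).
  await sub.config("SET", "notify-keyspace-events", "KEA");

  // Subscribe to every key-event channel on database 0.
  await sub.psubscribe("__keyevent@0__:*");

  sub.on("pmessage", (_pattern, channel, key) => {
    // e.g. channel = "__keyevent@0__:set", key = "mykey".
    // If this process is down when an event fires, the event is simply lost.
    console.log(`${channel} -> ${key}`);
  });
}

main().catch(console.error);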
What is the best practice to check if AWS IoT Core thing is still offline?
Being able to query the state of an AWS IoT thing will, for many, be an essential part of their application. Luckily, AWS has a best practice on how to get lifecycle events here: https://docs.aws.amazon.com/iot/latest/developerguide/life-cycle-events.html
It says that we should check whether the device is still offline before performing any actions.
I'm handling it on a Node.js server (listening to the events), so the question is: what's the best way to handle it?
For now the plan is to create some storage (Redis?) and implement a timeout (5-10 seconds): if I receive a disconnect event, I'll put it in the DB, wait out the timeout, and if no other message regarding this device arrives (a Connected event), I'll run some logic.
Is this right approach?
The point is not to use SQS from AWS.
And since the AWS docs say the order of messages is not guaranteed, what's the best practice to handle that?
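For what it's worth, a minimal sketch of the debounce idea described above, in Node.js. The topic names follow the lifecycle-events guide linked in the question; handleDeviceOffline is a hypothetical placeholder for your own logic:

```typescript
// Grace period from the question (5-10 s); tune to taste.
const GRACE_MS = 10_000;
const pendingOffline = new Map<string, NodeJS.Timeout>();

// Call this from your MQTT handler for the AWS IoT lifecycle topics
// $aws/events/presence/connected/+ and $aws/events/presence/disconnected/+.
function onLifecycleEvent(clientId: string, type: "connected" | "disconnected") {
  // The newest event cancels any pending timer, which also absorbs
  // out-of-order messages within the grace window.
  const pending = pendingOffline.get(clientId);
  if (pending) clearTimeout(pending);

  if (type === "disconnected") {
    pendingOffline.set(clientId, setTimeout(() => {
      pendingOffline.delete(clientId);
      handleDeviceOffline(clientId); // hypothetical: your "some logic" goes here
    }, GRACE_MS));
  } else {
    pendingOffline.delete(clientId); // device came back within the grace period
  }
}

function handleDeviceOffline(clientId: string) {
  console.log(`${clientId} is considered offline`);
}
```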
If your device emits a signal at periodic intervals, you can treat that as a heartbeat signal.
You can maintain a timer (x minutes/hours, etc.) and wait for the heartbeat signal from the device.
If the timer expires and you have not received the heartbeat signal, it is safe to assume the device has gone offline. Such events are easy to model as a detector model in AWS IoT Events.
This example from AWS IoT Events is doing exactly the same thing.
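As a rough sketch of that watchdog idea outside of IoT Events (the names and the timeout value are assumptions):

```typescript
const HEARTBEAT_TIMEOUT_MS = 60_000; // expect a heartbeat at least once a minute
const watchdogs = new Map<string, NodeJS.Timeout>();

// Call this each time a heartbeat arrives from a device.
function onHeartbeat(deviceId: string) {
  const existing = watchdogs.get(deviceId);
  if (existing) clearTimeout(existing);

  // Restart the timer; if it ever fires, no heartbeat arrived in time.
  watchdogs.set(deviceId, setTimeout(() => {
    watchdogs.delete(deviceId);
    console.log(`${deviceId} missed its heartbeat; treating it as offline`);
  }, HEARTBEAT_TIMEOUT_MS));
}
```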
I am looking into building a simple solution where producer services push events to a message queue and a streaming service then makes those available through a gRPC streaming API.
Cloud Pub/Sub seems well suited for the job. However, scaling the streaming service means that each copy of that service would need to create its own subscription and delete it before scaling down, which seems unnecessarily complicated and not what the platform was intended for.
On the other hand, Kafka seems to work well for something like this, but I'd like to avoid having to manage the underlying platform itself and instead leverage the cloud infrastructure.
I should also mention that the reason for having a streaming API is to allow streaming towards a frontend (which may not have access to the underlying infrastructure).
Is there a better way to go about doing something like this with the GCP platform without going the route of deploying and managing my own infrastructure?
If you essentially want ephemeral subscriptions, then there are a few things you can set on the Subscription object when you create a subscription (see the sketch after these two settings):
Set the expiration_policy to a smaller duration. When a subscriber is not receiving messages for that time period, the subscription will be deleted. The tradeoff is that if your subscriber is down due to a transient issue that lasts longer than this period, then the subscription will be deleted. By default, the expiration is 31 days. You can set this as low as 1 day. For pull subscribers, the subscribers simply need to stop issuing requests to Cloud Pub/Sub for the timer on their expiration to start. For push subscriptions, the timer starts based on when no messages are successfully delivered to the endpoint. Therefore, if no messages are published or if the endpoint is returning an error for all pushed messages, the timer is in effect.
Reduce the value of message_retention_duration. This is the time period for which messages are kept in the event a subscriber is not receiving messages and acking them. By default, this is 7 days. You can set it as low as 10 minutes. The tradeoff is that if your subscriber disconnects or gets behind in processing messages by more than this duration, messages older than that will be deleted and the subscriber will not see them.
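As a sketch of how those two settings look with the Node.js client (@google-cloud/pubsub); the topic and subscription names are placeholders, and the exact option shapes should be verified against the client docs:

```typescript
import { PubSub } from "@google-cloud/pubsub";

async function createEphemeralSubscription() {
  const pubsub = new PubSub();

  await pubsub.topic("my-topic").createSubscription("my-ephemeral-sub", {
    // Delete the subscription after 1 day (the minimum) of inactivity.
    expirationPolicy: { ttl: { seconds: 24 * 60 * 60 } },
    // Keep unacked messages for only 10 minutes (the minimum).
    messageRetentionDuration: { seconds: 10 * 60 },
  });
}

createEphemeralSubscription().catch(console.error);
```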
Subscribers that cleanly shut down could probably just call DeleteSubscription themselves so that the subscription goes away immediately, but for ones that shut down unexpectedly, setting these two properties will minimize the time for which the subscription continues to exist and the number of messages (that will never get delivered) that will be retained.
Keep in mind that Cloud Pub/Sub quotas limit one to 10,000 subscriptions per topic and per project. Therefore, if a lot of subscriptions are created and either active or not cleaned up (manually, or automatically after expiration_policy's ttl has passed), then new subscriptions may not be able to be created.
I think your original idea was better than ephemeral subscriptions, to be honest. That approach works, but it feels unnatural. It also depends on what your requirements are: for example, do clients only need to receive messages while they're connected, or do they all need to get all messages?
Only While Connected
Your original idea was better, in my opinion. What I probably would have done is create a gRPC stream service that clients can connect to. The implementation is essentially an observer pattern: the consumer receives a message and then iterates through the subscribers, doing a "Send" to each of them. From there, any time a client connects to the service, it registers itself with that observer collection and unregisters when it disconnects. Horizontal scaling is passive, since clients are sticky to whatever instance they've connected to.
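A minimal sketch of that observer pattern with @grpc/grpc-js; the proto file and service/method names (events.proto, EventStream, Subscribe) are hypothetical:

```typescript
import * as grpc from "@grpc/grpc-js";
import * as protoLoader from "@grpc/proto-loader";

// Assumed proto: service EventStream { rpc Subscribe(SubscribeRequest) returns (stream Event); }
const def = protoLoader.loadSync("events.proto");
const proto = grpc.loadPackageDefinition(def) as any;

// Observer collection: one entry per connected client stream.
const subscribers = new Set<grpc.ServerWritableStream<any, any>>();

const server = new grpc.Server();
server.addService(proto.EventStream.service, {
  Subscribe(call: grpc.ServerWritableStream<any, any>) {
    subscribers.add(call); // register on connect
    call.on("cancelled", () => subscribers.delete(call)); // unregister on disconnect
  },
});

// Called by the Pub/Sub consumer for each message it pulls.
function broadcast(event: { id: string; payload: string }) {
  for (const sub of subscribers) sub.write(event);
}

server.bindAsync("0.0.0.0:50051", grpc.ServerCredentials.createInsecure(), () => {
  server.start();
});
```

Each instance only broadcasts to its own connected clients, which is why the horizontal scaling is passive.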
Everyone always gets the message, eventually
The concept is similar to the above, but the client doesn't implicitly unregister from the observer on disconnect. Instead, it registers and unregisters explicitly (through a method/command designed to do so). Modify the 'on disconnected' logic to tell the observer list that the client has gone offline. The consumer's broadcast logic is then slightly different: it iterates through the list and says "if online, then send; else queue", sending the message to an ephemeral queue that belongs to the client. Your 'on connect' logic then sends all messages waiting in the queue to the client before informing the consumer that it's back online. Basically, an inbox. Setting up ephemeral, self-deleting queues is really easy in most products like RabbitMQ, though you'll have to do a bit of managing around whether or not it's OK to delete a queue: for example, never delete the queue unless the client explicitly unsubscribes or has been inactive for a long time. Fail to do that, and the whole inbox idea falls apart.
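For example, a per-client inbox queue in RabbitMQ via amqplib might look like this sketch (the queue naming and expiry window are assumptions):

```typescript
import amqp from "amqplib";

async function openInbox(clientId: string) {
  const conn = await amqp.connect("amqp://localhost");
  const ch = await conn.createChannel();

  // The queue survives client disconnects (so messages pile up while the
  // client is offline), but self-deletes after 24 h of disuse: this is the
  // "inactive for so long" rule described above.
  await ch.assertQueue(`inbox.${clientId}`, {
    durable: true,
    expires: 24 * 60 * 60 * 1000,
  });

  return ch;
}
```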
The selected answer above is most similar to what I'm describing here, in that the subscription is the queue. If I did this, I'd probably implement it as an internal bus instead of an observer (since an observer would be unnecessary): you create a consumer on demand for a connecting client that literally just forwards the message, and the message consumer subscribes and unsubscribes based on whether or not the client is connected. As Kamal noted, you'll run into problems if your scale exceeds the maximum number of subscriptions allowed by Pub/Sub. If you find yourself in that position, you can remove that constraint by implementing the pattern above: it's basically the same pattern, but you shift the responsibility over to your own infrastructure, where the only constraint is your own resources.
gRPC makes this mechanism pretty easy. Alternatively, for the web, if you're on a Microsoft stack, SignalR makes this pretty easy too: clients connect to the hub, and you can publish to all connected clients. The consumer pattern remains mostly the same, but you don't have to implement the observer pattern by hand.
(Diagram omitted; its arrows pointed in the direction of dependency, not data flow.)
We need to create a monitor that will show any incoming calls in our extranet in real time.
We were able to show active calls by using /account/~/extension/~/active-calls; however, to achieve what we need we would have to make a request every second, which I guess will be blocked by rate limits.
Is there a better solution for it?
Thanks
The Subscription (Push Notification) API resource empowers developers to enable client applications to create a single subscription (to one or more extensions) and continually receive push notifications in real time for each subscribed extension. When using this approach to receive events on your RingCentral account, no polling is involved.
You can create a subscription using either of the below-mentioned transportType values to receive push notifications:
PubNub
WebHook
The notifications the client wants to receive are specified by the event filters set in the subscription request. An event filter is exposed as a URL pointing to the required RingCentral API resource. Currently, the following event types are available for notifications: extensions, messages, and presence. They are described in detail here:
Notifications Event Types
You can take a look at the Subscription API below:
Subscription API
If you are interested in subscribing to push notifications via WebHook, we have an easy-to-follow quickstart guide here:
RingCentral Webhooks Quickstart Guide
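For reference, a rough sketch of creating a WebHook subscription with the RingCentral JavaScript SDK; the credentials, webhook address, and the detailed-presence event filter shown here are placeholders to verify against the guides above:

```typescript
import { SDK } from "@ringcentral/sdk";

const rcsdk = new SDK({
  server: SDK.server.production,
  clientId: "<client-id>",
  clientSecret: "<client-secret>",
});

async function subscribeToPresence() {
  const platform = rcsdk.platform();
  await platform.login({ jwt: "<jwt>" });

  await platform.post("/restapi/v1.0/subscription", {
    // Presence events cover call-state changes for the extension.
    eventFilters: ["/restapi/v1.0/account/~/extension/~/presence?detailedTelephonyState=true"],
    deliveryMode: {
      transportType: "WebHook",
      address: "https://example.com/ringcentral-webhook",
    },
  });
}

subscribeToPresence().catch(console.error);
```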
So I need a second pair of eyes to correct or confirm my understanding of Amazon SQS. From my understanding, you can add an unlimited number of messages to one queue. A message can be 256 KB in size, and if it needs to be larger than that, you can use Amazon S3 to store up to 2 GB. Reading around online, it appears there are many use cases for this queuing service; for example, SQS can act as a database buffer.
But here's what I'm looking to do: I'm looking to make a real-time messaging system. My current functionality acts more like a message board, so the implementation just inserts into the database, then reads the data and packages it into JSON to be inserted into SQLite on the mobile phone. That works great, but I'm getting a lot of requests from people to make it real-time.
So what I'm wondering is: can I utilize Amazon SQS to write and read messages for a chat application? My theoretical use of SQS would have a message queue to write to, and mobile clients would pull from that queue every second to check for messages. But here's where I'm confused: since you cannot "query" a particular message from the queue, would it make sense to have a queue per user plus a generic queue for the app server to read from? Or am I just talking crazy, and should I spend my cognitive resources thinking about implementing an open connection on an EC2 instance?
Any help would be great,
Thanks!
Have you thought about using Amazon SNS to push the chat messages to your mobile devices? Each user publishes to a topic and the readers subscribe to that topic. You just have to be ok with missing messages if the app isn't running.
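A minimal sketch of the publish side with the AWS SDK v3 for Node.js (the topic ARN and message shape are placeholders):

```typescript
import { SNSClient, PublishCommand } from "@aws-sdk/client-sns";

const sns = new SNSClient({});

async function sendChatMessage(topicArn: string, from: string, text: string) {
  // Subscribers to this topic (e.g. mobile push endpoints) receive the
  // message; if the app isn't running, the message is missed.
  await sns.send(new PublishCommand({
    TopicArn: topicArn,
    Message: JSON.stringify({ from, text, sentAt: Date.now() }),
  }));
}
```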
If you only have a few (maybe fewer than 100) users, you could think of having one SQS queue per user. If not, the solution won't be operationally feasible.
If you were to have one generic queue, SQS won't help, because it doesn't allow querying for a given field across all available messages.
I can think of the following options for your use case:
1. Set up one Redis cluster, possibly on Amazon ElastiCache, and keep one message List per user.
2. Keep one Messages table in MySQL, possibly on AWS RDS. This provides an easy way to query messages for a given user.
You could also use DynamoDB instead of MySQL in option 2.
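A quick sketch of option 1 with ioredis (the key naming is an assumption):

```typescript
import Redis from "ioredis";

const redis = new Redis();

// Append a message to the user's List.
async function appendMessage(userId: string, msg: object) {
  await redis.rpush(`messages:${userId}`, JSON.stringify(msg));
}

// Fetch all messages for a user, oldest first.
async function fetchMessages(userId: string) {
  const raw = await redis.lrange(`messages:${userId}`, 0, -1);
  return raw.map((m) => JSON.parse(m));
}
```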
In IEventProcessor.ProcessEventsAsync I want to store events in a persisted store. It's possible that this store is unavailable and messages cannot be persisted. How can I mark these messages for redelivery later?
The store may be down for only a few hours, but until it's up again, every message is affected and cannot be persisted.
I don't think you can mark a particular event for redelivery in Event Hubs, unlike a Service Bus queue. However, Event Hubs does provide a retention policy and an offset for each event, which makes it possible to reprocess an old event. You can read more in the "checkpointing" section of this document: https://azure.microsoft.com/en-us/documentation/articles/event-hubs-overview/
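To illustrate the idea with the current Node.js client (@azure/event-hubs): checkpoint only after the store write succeeds, so that after a crash or restart, processing resumes from the last checkpoint and the unpersisted events are read again. The question is about the older .NET IEventProcessor API, so treat this purely as a sketch of the concept; in practice you would also plug in a durable checkpoint store such as the Azure Blob one.

```typescript
import { EventHubConsumerClient } from "@azure/event-hubs";

const client = new EventHubConsumerClient(
  "$Default",
  "<event-hubs-connection-string>", // placeholder
  "<event-hub-name>"                // placeholder
);

client.subscribe({
  async processEvents(events, context) {
    for (const event of events) {
      await persistToStore(event.body); // hypothetical; throws while the store is down
    }
    // Checkpoint only after every event in the batch was persisted. If the
    // store is down, this line is never reached, and the events will be
    // reprocessed from the previous checkpoint on restart.
    if (events.length > 0) {
      await context.updateCheckpoint(events[events.length - 1]);
    }
  },
  async processError(err) {
    console.error("processing error:", err);
  },
});

async function persistToStore(body: unknown): Promise<void> {
  // Write to your persisted store here.
}
```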
Adding to Tyler's response: I suppose you could use some kind of "poison message"/dead-letter queue approach. Event Hubs does not have that functionality, but Service Bus queues do.
Either way, I think it has to be a programmatic approach in your own code, not something built into the backend.
There is a good article about a different scenario, but the approach is similar to what I mean:
https://www.dougv.com/2015/07/handling-poison-messages-in-an-azure-service-bus-queue/