I want to integrate my application through Event Hubs with multiple types of devices, like a mobile app, different types of embedded systems, etc. The different types of senders send data in their own specific formats, and each needs its own specific handler as well, like shown below:
Mobile APP (Partition key “MobileAPP”) = Consumer Group 1
Embedded System 1 (Partition key “Embedded1”) = Consumer Group 2
Embedded System 2 (Partition key “Embedded2”) = Consumer Group 2
So can you please tell me how I should specify the above binding in my Event Hubs implementation so that each type of message is handled by its particular consumer group?
Normally, on the receiver side, I only see the default consumer group name mentioned. I know that during EventProcessorHost implementation we can create a new consumer group with the method namespaceManager.CreateConsumerGroupIfNotExists(ehd.Path, consumerGroupName), but I am not able to understand how to make sure that all messages associated with a particular partition key will be handled by their associated consumer group. Where should I specify the PartitionKey-to-ConsumerGroup binding?
In short, there is no straightforward way to specify a PartitionKey-to-ConsumerGroup binding.
Here's why:
Event Hubs is a high-throughput durable stream that offers stream-level semantics.
Simply put, imagine it as the equivalent of a simple in-memory stream, where you get a cursor on the stream (using the EventHubClient receive-by-offset or receive-by-timestamp APIs) and call ReadNext() to get the next events. When you want such a stream to hold events at huge scale - say, a day's worth of data - and you want it to be persistent (so that even if the app processing the stream crashes, you don't lose data), that's when you need Event Hubs.
Now, coming to your question: the feature you are looking for - filtering events based on a property of the event - is not a stream-level operation but an event-level operation.
The typical approach to implement this yourself is to pull events from Event Hubs (the event stream) and have a worker process the events (in your case, filter by PartitionKey) and push them to individual queues (or you could even partition your data by pushing groups of devices to topics and having subscriptions, with filters, pull the data off). A sketch of this routing step follows.
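To make that concrete, here is a minimal sketch of such a filter-and-route worker, written against the newer Azure.Messaging.EventHubs SDK rather than the EventProcessorHost API from the question. The queue clients, the RouteAsync helper, and the connection variables are hypothetical placeholders:

using Azure.Messaging.EventHubs.Consumer;

// Read the whole stream with one reader and route each event by its
// partition key. RouteAsync and the *Queue variables are stand-ins for
// whatever downstream queue/topic clients you actually use.
await using var consumer = new EventHubConsumerClient(
    EventHubConsumerClient.DefaultConsumerGroupName,
    eventHubConnectionString,
    eventHubName);

await foreach (PartitionEvent pe in consumer.ReadEventsAsync(cancellationToken))
{
    switch (pe.Data.PartitionKey)
    {
        case "MobileAPP": await RouteAsync(mobileAppQueue, pe.Data); break;
        case "Embedded1": await RouteAsync(embedded1Queue, pe.Data); break;
        case "Embedded2": await RouteAsync(embedded2Queue, pe.Data); break;
    }
}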
Now, the first question to answer before you decide on Event Hubs is: do you foresee needing the scale that Event Hubs offers, versus directly using Service Bus topics, which provide the exact semantics you are looking for?
HTH!
Sree
I am using the EventHubConsumerClient.ReadEventsAsync method to read events from an Event Hub. It works perfectly when I use the default Event Hub. However, when I route to a new Event Hub I get an EventHubsException (ConsumerDisconnected) from time to time. The documentation says this happens because "A client was forcefully disconnected from an Event Hub instance. This typically occurs when another consumer with higher OwnerLevel asserts ownership over the partition and consumer group." I get this exception almost every time; it only works a few times. Does anyone know how to resolve this? Or is there a better way to read messages from an Event Hub? I don't want to use EventProcessorClient since it requires a BlobContainerClient.
For the code, I followed the sample:
await using var consumerClient = new EventHubConsumerClient(
    EventHubConsumerClient.DefaultConsumerGroupName,
    eventHubConnectionString,
    eventHubName
);

await foreach (PartitionEvent partitionEvent in consumerClient.ReadEventsAsync(cancelToken))
{
    ...
}
The error that you're seeing is very specific to a single scenario: another client has opened an AMQP link to one of the partitions you're reading from and has requested that the Event Hubs service give it exclusive access. This results in the Event Hubs service terminating your link with an AMQP error code of Stolen, which the Event Hubs SDK translates into the form that you're seeing. (source)
These requests for exclusive access are enforced on a consumer group level. In your snippet, you're using the default consumer group, which is apparently also used by other consumers. As a best practice, I'd recommend that you create a unique consumer group for each application that is reading from the Event Hub - unless you specifically want them to interact.
In your case, your client is not requesting exclusive access, so anyone that is will take precedence. If you were to create a new consumer group and use that to configure your client, I would expect your disconnect errors to stop.
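For illustration, the change is just the consumer group argument; "my-reader-group" below is a hypothetical name for a consumer group you'd create beforehand (in the portal, via ARM templates, or with the Azure CLI):

// Same as the snippet above, but with a dedicated consumer group instead
// of the default one. The group must already exist on the Event Hub.
await using var consumerClient = new EventHubConsumerClient(
    "my-reader-group",
    eventHubConnectionString,
    eventHubName
);

await foreach (PartitionEvent partitionEvent in consumerClient.ReadEventsAsync(cancelToken))
{
    ...
}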
Problem Statement
Informal State
We have some scenarios where the integration layer (a combination of AWS SNS/SQS components, etc.) is also responsible for distributing data to target systems. These are mostly async flows. In this case, we send a confirmation to the caller that we have received the data and will take responsibility for its delivery. Here, although the data does not originate from the integration layer, we are still holding it and need to make sure it is not lost, for example if the consumers are down or if messages are sent on error to the DLQs and hence automatically deleted after the retention period.
Solution Design
Currently my idea was to proceed with a back-up of the SQS/DLQ queues based on CloudWatch alarms configured on the ApproximateAgeOfOldestMessage metric with some applied threshold (something like the below):
Msg Expiration Event if ApproximateAgeOfOldestMessage / Message retention > Threshold
Now, the more I go forward with this idea, the more I doubt that it is actually the right approach…
In particular, I would like to build something unobtrusive that can be "attached" to our SQS queues and dumps the messages that are about to expire into some repository, for example AWS S3. Then have a procedure to recover the messages from S3 back into the original queue.
The above procedure contains many challenges, like message identification and consumption (ReceiveMessage is not designed to "query" for specific messages), dumping messages into the repository with a reference to the source queue, etc., which suggests to me that the approach might be complex overkill.
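To make the idea more concrete, here is a rough sketch of just the dump step using the AWS SDK for .NET; the queue URL, bucket, and key scheme are placeholders, and it deliberately ignores the hard part above, i.e. knowing which messages are about to expire:

using Amazon.S3;
using Amazon.S3.Model;
using Amazon.SQS;
using Amazon.SQS.Model;

// Drain a batch from a DLQ into S3: receive, persist, then delete.
// All names below are hypothetical placeholders.
var sqs = new AmazonSQSClient();
var s3 = new AmazonS3Client();
var queueUrl = "https://sqs.eu-west-1.amazonaws.com/123456789012/my-dlq";

var response = await sqs.ReceiveMessageAsync(new ReceiveMessageRequest
{
    QueueUrl = queueUrl,
    MaxNumberOfMessages = 10,
    WaitTimeSeconds = 5
});

foreach (var message in response.Messages)
{
    // Key the object by source queue and message id so it can be traced back.
    await s3.PutObjectAsync(new PutObjectRequest
    {
        BucketName = "my-dlq-backup",
        Key = $"my-dlq/{message.MessageId}.json",
        ContentBody = message.Body
    });

    // Delete only after the message has been safely persisted.
    await sqs.DeleteMessageAsync(queueUrl, message.ReceiptHandle);
}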
That being said, I'm aware of other "alternatives" (such as this), but I would appreciate it if you could answer the specific technical details described above, without trying to challenge the "need" instead.
Similar to Mark B's suggestion, you can use the SQS extended client (https://github.com/awslabs/amazon-sqs-java-extended-client-lib) to send all your messages through S3 (which is a configuration knob: https://github.com/awslabs/amazon-sqs-java-extended-client-lib/blob/master/src/main/java/com/amazon/sqs/javamessaging/ExtendedClientConfiguration.java#L189).
The extended client is a drop-in replacement for the AmazonSQS interface, so it minimizes the intrusion on business logic - usually it's just a matter of changing your dependency injection.
I am looking into building a simple solution where producer services push events to a message queue and then have a streaming service make those available through gRPC streaming API.
Cloud Pub/Sub seems well suited for the job. However, scaling the streaming service means that each copy of that service would need to create its own subscription and delete it before scaling down, which seems unnecessarily complicated and not what the platform was intended for.
On the other hand Kafka seems to work well for something like this but I'd like to avoid having to manage the underlying platform itself and instead leverage the cloud infrastructure.
I should also mention that the reason for having a streaming API is to allow streaming towards a frontend (which may not have access to the underlying infrastructure).
Is there a better way to go about doing something like this with the GCP platform without going the route of deploying and managing my own infrastructure?
If you essentially want ephemeral subscriptions, then there are a few things you can set on the Subscription object when you create a subscription:
Set the expiration_policy to a smaller duration. When a subscriber is not receiving messages for that time period, the subscription will be deleted. The tradeoff is that if your subscriber is down due to a transient issue that lasts longer than this period, then the subscription will be deleted. By default, the expiration is 31 days. You can set this as low as 1 day. For pull subscribers, the subscribers simply need to stop issuing requests to Cloud Pub/Sub for the timer on their expiration to start. For push subscriptions, the timer starts based on when no messages are successfully delivered to the endpoint. Therefore, if no messages are published or if the endpoint is returning an error for all pushed messages, the timer is in effect.
Reduce the value of message_retention_duration. This is the time period for which messages are kept in the event a subscriber is not receiving messages and acking them. By default, this is 7 days. You can set it as low as 10 minutes. The tradeoff is that if your subscriber disconnects or gets behind in processing messages by more than this duration, messages older than that will be deleted and the subscriber will not see them.
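For reference, a minimal sketch of creating a subscription with both of these settings via the Google.Cloud.PubSub.V1 client; the project, topic, and subscription ids are placeholders:

using System;
using Google.Cloud.PubSub.V1;
using Google.Protobuf.WellKnownTypes;

// Create a short-lived subscription: it expires after 1 day without
// subscriber activity and retains unacked messages for only 10 minutes.
var client = await SubscriberServiceApiClient.CreateAsync();
await client.CreateSubscriptionAsync(new Subscription
{
    SubscriptionName = new SubscriptionName("my-project", "ephemeral-sub-1"),
    TopicAsTopicName = new TopicName("my-project", "events"),
    ExpirationPolicy = new ExpirationPolicy
    {
        Ttl = Duration.FromTimeSpan(TimeSpan.FromDays(1))
    },
    MessageRetentionDuration = Duration.FromTimeSpan(TimeSpan.FromMinutes(10))
});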
Subscribers that cleanly shut down could probably just call DeleteSubscription themselves so that the subscription goes away immediately, but for ones that shut down unexpectedly, setting these two properties will minimize the time for which the subscription continues to exist and the number of messages (that will never get delivered) that will be retained.
Keep in mind that Cloud Pub/Sub quotas limit one to 10,000 subscriptions per topic and per project. Therefore, if a lot of subscriptions are created and either active or not cleaned up (manually, or automatically after expiration_policy's ttl has passed), then new subscriptions may not be able to be created.
I think your original idea was better than ephemeral subscriptions, to be honest. I mean, it works, but it feels totally unnatural. It depends on what your requirements are. For example, do clients only need to receive messages while they're connected, or do they all need to get all messages?
Only While Connected
Your original idea was better, in my opinion. What I probably would have done is create a gRPC stream service that clients could connect to. The implementation is essentially an observer pattern: the consumer receives a message and then iterates through the subscribers, doing a "Send" to each of them. From there, any time a client connects to the service, it just registers itself with that observer collection and unregisters when it disconnects. Horizontal scaling is passive, since clients are sticky to whatever instance they've connected to. A sketch follows.
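Here is a rough sketch of that observer collection as a grpc-dotnet server-streaming service; the proto-generated names (EventFeed, SubscribeRequest, EventMessage) are made up for illustration:

using System;
using System.Collections.Concurrent;
using System.Threading.Channels;
using System.Threading.Tasks;
using Grpc.Core;

public class EventFeedService : EventFeed.EventFeedBase
{
    // The observer collection: one buffered channel per connected client.
    private static readonly ConcurrentDictionary<Guid, Channel<EventMessage>> Subscribers = new();

    public override async Task Subscribe(SubscribeRequest request,
        IServerStreamWriter<EventMessage> responseStream, ServerCallContext context)
    {
        var id = Guid.NewGuid();
        var channel = Channel.CreateUnbounded<EventMessage>();
        Subscribers[id] = channel;                 // register on connect
        try
        {
            await foreach (var msg in channel.Reader.ReadAllAsync(context.CancellationToken))
                await responseStream.WriteAsync(msg);
        }
        finally
        {
            Subscribers.TryRemove(id, out _);      // unregister on disconnect
        }
    }

    // Called by the queue consumer for every message it pulls.
    public static void Broadcast(EventMessage msg)
    {
        foreach (var subscriber in Subscribers.Values)
            subscriber.Writer.TryWrite(msg);
    }
}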
Everyone always gets the message, if only eventually
The concept is similar to the above, but the client doesn't implicitly unregister from the observer on disconnect. Instead, it registers and unregisters explicitly (through a method/command designed to do so). Modify the 'on disconnected' logic to tell the observer list that the client has gone offline. The consumer's broadcast logic is then slightly different: it iterates through the list and says "if online, then send, else queue", sending the message to an ephemeral queue that belongs to the client. Your 'on connect' logic then sends all messages that are in the queue to the client before informing the consumer that it's back online. Basically, an inbox. Setting up ephemeral, self-deleting queues is really easy in most products like RabbitMQ. You will have to do a bit of managing whether or not it's OK to delete a queue, though: for example, never delete the queue unless the client explicitly unsubscribes or has been inactive for too long. Fail to do that, and the whole inbox idea falls apart. A sketch of the broadcast side is below.
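A minimal single-process sketch of the "if online, send; else queue" logic; the types and the in-memory inbox are illustrative stand-ins for the durable per-client queue (e.g. a RabbitMQ queue) described above:

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;

public record EventMessage(string Body);

public class ClientState
{
    public bool IsOnline;
    public Queue<EventMessage> Inbox { get; } = new();  // stands in for a durable per-client queue
    public Action<EventMessage>? Send;                  // stands in for the live gRPC stream
}

public class InboxBroadcaster
{
    private readonly ConcurrentDictionary<string, ClientState> _registry = new();

    public void Register(string clientId) =>
        _registry[clientId] = new ClientState { IsOnline = true };

    // Explicit unregister: the client no longer wants messages at all.
    public void Unregister(string clientId) => _registry.TryRemove(clientId, out _);

    // Disconnect: keep the registration, just mark the client offline.
    public void OnDisconnected(string clientId)
    {
        if (_registry.TryGetValue(clientId, out var state)) state.IsOnline = false;
    }

    // Reconnect: drain the inbox first, then resume live delivery.
    public void OnConnected(string clientId, Action<EventMessage> send)
    {
        if (!_registry.TryGetValue(clientId, out var state)) return;
        state.Send = send;
        while (state.Inbox.Count > 0) send(state.Inbox.Dequeue());
        state.IsOnline = true;
    }

    public void Broadcast(EventMessage msg)
    {
        foreach (var state in _registry.Values)
        {
            if (state.IsOnline) state.Send?.Invoke(msg);
            else state.Inbox.Enqueue(msg);   // held until the client reconnects
        }
    }
}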
The selected answer above is most similar to what I'm describing here, in that the subscription is the queue. If I did this, I'd probably implement it as an internal bus instead of an observer (since an observer would be unnecessary): you create a consumer on demand for a connecting client that literally just forwards the message. The message consumer subscribes and unsubscribes based on whether or not the client is connected. As Kamal noted, you'll run into problems if your scale exceeds the maximum number of subscriptions allowed by Pub/Sub. If you find yourself in that position, you can unshackle that constraint by implementing the pattern above. It's basically the same pattern, but you shift the responsibility over to your infrastructure, where the only constraint is your own resources.
gRPC makes this mechanism pretty easy. Alternatively, for web, if you're on a Microsoft stack, then SignalR makes this pretty easy too. Clients connect to the hub, and you can publish to all connected clients. The consumer pattern here remains mostly the same, but you don't have to implement the observer pattern by hand.
(diagram omitted; its arrows pointed in the direction of dependency, not data flow)
Does this need to be implemented or is it in Channels already?
If I have a channel group with multiple consumers subscribed to it and one consumer is sent the message, is the message lost to the rest of the consumers, or does it persist until all consumers have seen it?
Or does the message persist until its time expires, regardless of whether consumers have seen it or not?
The Group object manages delivery to all consumers (where possible) and message expiry. But note that delivery is not guaranteed.
From the documentation:
Channels implements this abstraction as a core concept called Groups ...
[Groups] also automatically manage expiry of the group members - when the channel starts having messages expire on it due to non-consumption, we go in and remove it from all the groups it’s in as well ...
One thing channels do not do, however, is guarantee delivery. If you need certainty that tasks will complete, use a system designed for this with retries and persistence (e.g. Celery)
In the context of writing a Messenger chat bot in a cloud environment, I'm facing some concurrency issues.
Specifically, I would like to ensure that incoming messages from the same conversation are processed one after the other.
As a constraint, I'm processing the messages with workers in a Cloud environment (i.e the worker pool is of variable size and worker instances are potentially short-lived and may crash). Also, low latency is important.
So abstracting a little, my requirements are:
I have a stream of incoming messages
each of these messages has a 'topic key' (the conversation id)
the set of topics is not known ahead-of-time and is virtually infinite
I want to ensure that messages of the same topic are processed serially
on a cluster of potentially ephemeral workers
if possible, I would like reliability guarantees, e.g. making sure that each message is processed exactly once.
My questions are:
Is there a name for this concurrency scenario?
Are there technologies (message brokers, coordination services, etc.) which implement this out of the box?
If not, what algorithms can I use to implement this on top of lower-level concurrency tools? (distributed locks, actors, queues, etc.)
I don't know of a widely-accepted name for the scenario, but a common strategy to solve that type of problem is to route your messages so that all messages with the same topic key end up at the same destination. A couple of technologies that will do this for you:
With Apache ActiveMQ, HornetQ, or Apache ActiveMQ Artemis, you could use your topic key as the JMSXGroupId to ensure all messages with the same topic key are processed in-order by the same consumer, with failover
With Apache Kafka, you could use your topic key as the partition key, which will also ensure all messages with the same topic key are processed in-order by the same consumer (a sketch follows below)
Some message broker vendors refer to this requirement as Message Grouping, Sticky Sessions, or Sticky Message Load Balancing.
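To illustrate the Kafka option, here is a minimal producer sketch using the Confluent.Kafka .NET client; the broker address, topic, and payload are placeholders:

using System.Threading.Tasks;
using Confluent.Kafka;

class Program
{
    static async Task Main()
    {
        var config = new ProducerConfig { BootstrapServers = "localhost:9092" };
        using var producer = new ProducerBuilder<string, string>(config).Build();

        // Using the conversation id as the message key means every message
        // for that conversation hashes to the same partition, so one consumer
        // in the group sees them in order.
        await producer.ProduceAsync("chat-messages", new Message<string, string>
        {
            Key = "conversation-42",          // the 'topic key' from the question
            Value = "{\"text\":\"hello\"}"
        });
    }
}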
Another common strategy on messaging systems with weaker delivery/ordering guarantees (like Amazon SQS) is to simply include a sequence number in the message and leave it up to the destination to resequence and request redelivery of missing messages as needed.
I think you can fix this by using a queue and a set. The idea is to put every message object on a queue and process them first-in, first-out, but when adding a message to the queue, also add its topic name to a set, and when taking a message out for processing, remove its topic name from the set.
So, if a topic is already in the set, don't add another message object of the same topic to the queue; hold it back until the in-flight one has been processed.
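A minimal single-process sketch of that queue-plus-set idea (all names are illustrative; distributing it across ephemeral workers would additionally require the set and queues to live in shared storage):

using System.Collections.Generic;

public record Msg(string Topic, string Body);

public class TopicSerializer
{
    private readonly Queue<Msg> _queue = new();                      // ready to process, FIFO
    private readonly HashSet<string> _inFlight = new();              // topics currently being processed
    private readonly Dictionary<string, Queue<Msg>> _parked = new(); // held back per topic
    private readonly object _lock = new();

    public void Enqueue(Msg msg)
    {
        lock (_lock)
        {
            if (_inFlight.Add(msg.Topic))
                _queue.Enqueue(msg);          // topic was free: ready immediately
            else
                Park(msg);                    // topic busy: hold it back
        }
    }

    public Msg? TryDequeue()
    {
        lock (_lock) return _queue.Count > 0 ? _queue.Dequeue() : null;
    }

    // Call when a worker finishes a message: promote the next parked message
    // for that topic, or release the topic entirely.
    public void Complete(Msg msg)
    {
        lock (_lock)
        {
            if (_parked.TryGetValue(msg.Topic, out var q) && q.Count > 0)
                _queue.Enqueue(q.Dequeue()); // topic stays in flight
            else
                _inFlight.Remove(msg.Topic);
        }
    }

    private void Park(Msg msg)
    {
        if (!_parked.TryGetValue(msg.Topic, out var q))
            _parked[msg.Topic] = q = new Queue<Msg>();
        q.Enqueue(msg);
    }
}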
I hope this will help you. All the best :)