I am connecting 3rd-party applications like Shopify via AWS partner event sources to EventBridge. The 3rd-party events are emitted on an application-specific custom event bus, and each event triggers an event rule (on that bus) that transforms the 3rd-party event into a common format and starts the execution of a common Step Function. That means the event bus, event rule, and target (state machine) are tightly coupled resources.
AWS EventBridge supports bus-to-bus event routing, which means events on a custom event bus can be forwarded to the default event bus. Then, on the default event bus, an event rule can trigger the common Step Function. That way, 3rd-party apps are only loosely coupled with shared resources, because their events are just re-emitted on the default event bus, where the actual tight coupling takes place.
For whatever reason, input transformation via an event rule is not possible if the rule's target is another event bus. That would mean I have to move all 3rd-party event transformations from their respective custom event buses into rules on the default event bus.
Of course, I could have a Lambda function as a common target for each custom event bus that simply re-emits the events on the default event bus. But are there any feasible and/or cheaper workarounds?
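For reference, the relay Lambda I have in mind would only be a few lines. A minimal sketch with the AWS SDK for JavaScript v3; the source name is made up, and any transformation into the common format would go where the comment indicates:

```typescript
// Hypothetical relay Lambda: re-emits 3rd-party events from a custom bus onto the default bus.
import { EventBridgeClient, PutEventsCommand } from "@aws-sdk/client-eventbridge";
import type { EventBridgeEvent } from "aws-lambda";

const client = new EventBridgeClient({});

export const handler = async (event: EventBridgeEvent<string, Record<string, unknown>>) => {
  // Transform into the common format here if needed; this sketch passes the payload through.
  await client.send(
    new PutEventsCommand({
      Entries: [
        {
          Source: "myapp.relay",            // assumed source name for re-emitted events
          DetailType: event["detail-type"], // keep the original detail-type
          Detail: JSON.stringify(event.detail),
          // EventBusName omitted -> default event bus
        },
      ],
    })
  );
};
```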
Hopefully this post will catch the attention of some AWS employees who can tell us if this feature is expected in the near future.
Related
I have written an event rule that targets my Lambda and is connected to the event bus. I have tested the sample event against my event pattern with test-event-pattern in the AWS CLI and in the rules section of the AWS Console, and the event pattern matches.
However, when I put an event to EventBridge, my rule is not triggered, even though the event is successfully put on the event bus.
A strange thing I've also noticed is that the rule isn't showing up in CloudWatch metrics.
I can confirm that the rule is enabled and has the correct connections and permissions, and I would like help figuring out why it still isn't being matched with my sample events.
Is the rule created against the correct event bus? Are you adding the event to one and creating the rule on another?
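One quick way to rule that out is to be explicit about the bus when putting the event. A small sketch with the AWS SDK for JavaScript v3; the bus name, source, and detail-type are placeholders and must match whatever your rule's pattern expects:

```typescript
// Sketch: put the test event on the same bus the rule is attached to.
import { EventBridgeClient, PutEventsCommand } from "@aws-sdk/client-eventbridge";

const client = new EventBridgeClient({});

const result = await client.send(
  new PutEventsCommand({
    Entries: [
      {
        EventBusName: "my-custom-bus",   // omit this and the event goes to the default bus instead
        Source: "my.app",
        DetailType: "OrderCreated",
        Detail: JSON.stringify({ orderId: "123" }),
      },
    ],
  })
);
console.log(result.FailedEntryCount, result.Entries);
```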
This is the architecture that I have now:
Lambda (put events to event bus) -> EventBridge -> event bus in another AWS account
Right now, Lambda is putting events using the PutEvents API to EventBridge. Now I want to send these events to another event bus, but in a different AWS account. I'm wondering what kind of event pattern I need to create for the rule.
The event pattern does not change, but the target does. In your case, to forward events to a different account you have to choose a special target for that: the event bus in the other account.
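Roughly, that means a rule on your bus whose target is the other account's event bus ARN, plus a role that is allowed to put events there. A hedged sketch with the AWS SDK for JavaScript v3; account IDs, names, and the role are placeholders, and the destination bus also needs a resource policy allowing your account:

```typescript
// Sketch: rule in the source account whose target is the event bus in another account.
import {
  EventBridgeClient,
  PutRuleCommand,
  PutTargetsCommand,
} from "@aws-sdk/client-eventbridge";

const client = new EventBridgeClient({});

// The pattern stays whatever already matches your events; only the target changes.
await client.send(
  new PutRuleCommand({
    Name: "forward-to-other-account",
    EventPattern: JSON.stringify({ source: ["my.app"] }),
  })
);

await client.send(
  new PutTargetsCommand({
    Rule: "forward-to-other-account",
    Targets: [
      {
        Id: "cross-account-bus",
        Arn: "arn:aws:events:us-east-1:222222222222:event-bus/default",
        RoleArn: "arn:aws:iam::111111111111:role/EventBridgeCrossAccount", // role allowed to put events on that bus
      },
    ],
  })
);
```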
How do you subscribe to AWS event bus events from client-side applications, e.g. a Node.js app, an Angular client, or a mobile client app?
In December 2020, an email from AWS Marketing presented the advantages of using an event-driven architecture. Following the documentation and the tutorials, I soon hit the wall of not finding a way to subscribe to these events from a client-side application.
The email states:
4 Reasons to Care About Event-Driven Architectures
Are you looking to scale and build robust applications without delays and dependencies? We break down the basics of event-driven architectures, how they work, and show you ways to get started. Learn how event-driven architectures can help you:
Scale and fail independently - No more dependencies
Develop with agility - No custom polling code
Audit with ease - Use your event router to define policies
Cut costs - Stop paying for continuous polling
The disappointing part is that there is no example of libraries to be integrated in the client-side code to subscribe to those events. Googling does not return any significant results, and the only current library for Node, @aws-sdk/client-eventbridge-node, only exposes send and destroy methods.
There is no way to directly subscribe to an Amazon EventBridge bus, as it doesn't provide publish/subscribe functionality. In order to process events in EventBridge you create event rules that filter and send matching events to targets. You can find all targets available to EventBridge rules on this list: Amazon EventBridge Targets.
One of these targets could be an Amazon SNS topic, which provides pub/sub functionality, i.e. your client application can subscribe to the topic to automatically receive the respective events.
This may sound complicated at first, but the implementation strictly follows the principle of separating concerns. It provides simple building blocks, like Lego pieces, that you can put together in order to create truly loosely coupled architectures.
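As a rough sketch of that chain (rule -> SNS topic -> subscriber), using the AWS SDK for JavaScript v3; the topic ARN, the pattern, and the HTTPS endpoint are placeholders, and the topic's access policy must also allow EventBridge to publish to it:

```typescript
// Sketch of the EventBridge rule -> SNS topic -> subscriber chain.
import {
  EventBridgeClient,
  PutRuleCommand,
  PutTargetsCommand,
} from "@aws-sdk/client-eventbridge";
import { SNSClient, SubscribeCommand } from "@aws-sdk/client-sns";

const events = new EventBridgeClient({});
const sns = new SNSClient({});
const topicArn = "arn:aws:sns:us-east-1:111111111111:order-events";

// 1. Route matching events from the bus to the SNS topic.
await events.send(
  new PutRuleCommand({
    Name: "orders-to-sns",
    EventPattern: JSON.stringify({ source: ["my.app"], "detail-type": ["OrderCreated"] }),
  })
);
await events.send(
  new PutTargetsCommand({
    Rule: "orders-to-sns",
    Targets: [{ Id: "sns-target", Arn: topicArn }],
  })
);

// 2. The client application subscribes to the topic, e.g. via an HTTPS endpoint it exposes.
await sns.send(
  new SubscribeCommand({
    TopicArn: topicArn,
    Protocol: "https",
    Endpoint: "https://client.example.com/events",
  })
);
```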
This diagram shows the functionality in scope of Amazon EventBridge and how it communicates with other services and applications.
Services that allow you to subscribe the way you want (directly delivering subscribed messages to your code over a TCP connection such as a WebSocket) are:
AppSync - WebSockets
IoT Core - WebSockets, MQTT
SQS - long polling
Kafka
(Off the top of my head.)
So a straightforward serverless solution for you could be:
EventBridge -> SQS -> your code
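On the consumer side, long polling keeps the cost of that SQS leg low. A minimal sketch with the AWS SDK for JavaScript v3; the queue URL is a placeholder and the queue is assumed to be the target of an EventBridge rule:

```typescript
// Minimal long-polling consumer sketch for "EventBridge -> SQS -> your code".
import {
  SQSClient,
  ReceiveMessageCommand,
  DeleteMessageCommand,
} from "@aws-sdk/client-sqs";

const sqs = new SQSClient({});
const queueUrl = "https://sqs.us-east-1.amazonaws.com/111111111111/eventbridge-events";

while (true) {
  // Long poll: wait up to 20 seconds for messages instead of hammering the API.
  const { Messages } = await sqs.send(
    new ReceiveMessageCommand({
      QueueUrl: queueUrl,
      WaitTimeSeconds: 20,
      MaxNumberOfMessages: 10,
    })
  );
  for (const message of Messages ?? []) {
    console.log("event from EventBridge:", message.Body);
    await sqs.send(
      new DeleteMessageCommand({ QueueUrl: queueUrl, ReceiptHandle: message.ReceiptHandle })
    );
  }
}
```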
I often use AppSync for this purpose. But EventBridge is cool too.
If you want to avoid polling and you can't or don't want to use AWS Lambda, you can reverse the problem and have EventBridge call an API on your application via a rule.
You can create API Targets in EventBridge:
API destination
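Roughly, that means creating a connection (for auth), an API destination pointing at your endpoint, and a rule whose target is that destination. A hedged sketch with the AWS SDK for JavaScript v3; every name, the endpoint, the API key, and the IAM role below are placeholders:

```typescript
// Sketch: connection (auth) + API destination + rule that pushes matching events to your endpoint.
import {
  EventBridgeClient,
  CreateConnectionCommand,
  CreateApiDestinationCommand,
  PutRuleCommand,
  PutTargetsCommand,
} from "@aws-sdk/client-eventbridge";

const client = new EventBridgeClient({});

const { ConnectionArn } = await client.send(
  new CreateConnectionCommand({
    Name: "my-app-connection",
    AuthorizationType: "API_KEY",
    AuthParameters: { ApiKeyAuthParameters: { ApiKeyName: "x-api-key", ApiKeyValue: "secret" } },
  })
);

const { ApiDestinationArn } = await client.send(
  new CreateApiDestinationCommand({
    Name: "my-app-endpoint",
    ConnectionArn: ConnectionArn!,
    InvocationEndpoint: "https://api.example.com/events",
    HttpMethod: "POST",
  })
);

await client.send(
  new PutRuleCommand({
    Name: "push-to-my-app",
    EventPattern: JSON.stringify({ source: ["my.app"] }),
  })
);
await client.send(
  new PutTargetsCommand({
    Rule: "push-to-my-app",
    Targets: [
      {
        Id: "api-destination",
        Arn: ApiDestinationArn!,
        RoleArn: "arn:aws:iam::111111111111:role/EventBridgeInvokeApiDest", // needs events:InvokeApiDestination
      },
    ],
  })
);
```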
I am using EventBridge as the event bus in our application. Based on its docs (https://aws.amazon.com/eventbridge/faqs/), the latency between sending and receiving an event is about half a second, which is unacceptable in my application.
I am thinking about other alternatives. Kinesis has a problem with filtering events: once a consumer attaches to a stream, it needs to provide its own logic to filter out uninteresting events. Since I am using Lambda as the consumer, many uninteresting events would trigger my Lambda, which would lead to a high AWS bill.
AWS SNS can only support AWS services as targets.
Another option is Kafka, but I can't find what the latency is when using the AWS managed Kafka service.
What is the lowest-latency event-sourcing solution when using AWS?
Kinesis is probably the best way to go now, thanks to the newly released "event filtering" feature. This allows you to configure an event source mapping which filters Kinesis (or SQS, DynamoDB Streams) events.
Doing this means you can use Kinesis as an event bus without having to invoke a Lambda for every event.
See: https://aws.amazon.com/about-aws/whats-new/2021/11/aws-lambda-event-filtering-amazon-sqs-dynamodb-kinesis-sources/
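As a sketch of what such a filter can look like with the AWS SDK for JavaScript v3 (the stream, the function name, and the eventType field are assumptions, not anything from the question):

```typescript
// Sketch: event source mapping with a filter, so only matching Kinesis records invoke the Lambda.
import { LambdaClient, CreateEventSourceMappingCommand } from "@aws-sdk/client-lambda";

const lambda = new LambdaClient({});

await lambda.send(
  new CreateEventSourceMappingCommand({
    FunctionName: "process-orders",
    EventSourceArn: "arn:aws:kinesis:us-east-1:111111111111:stream/app-events",
    StartingPosition: "LATEST",
    // Only records whose JSON payload has eventType == "order" reach the function;
    // everything else is filtered out before invocation.
    FilterCriteria: {
      Filters: [{ Pattern: JSON.stringify({ data: { eventType: ["order"] } }) }],
    },
  })
);
```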
I am not very experienced in AWS, so I would like to check my design diagram. The workflow for processing Inspection Events can go in the following way:
Inspection Events belonging to a certain job should be processed by EventHandlers separately for every job. Then the handling results are persisted in S3. After processing and persisting finish for a particular job, the EventConsumer should retrieve the results from S3 based on a processing-finished message.
So, for directing events to EventHandlers I show the SNS topic InspectionEvent.
Since handling events for a particular job requires significant resources, we are thinking about creating a Lambda for every job.
How do we create a trigger for this Lambda? On the diagram I showed DynamoDB, which could trigger the EventHandler Lambda when an Inspection Event with a new job appears. Then the EventHandler retrieves the events from DynamoDB for that particular job and, after finishing, publishes the Finishing Event to another SNS topic, EventProcessingEnds. The EventConsumer gets the message for a particular job and retrieves the results from S3.
The image is attached. Does this design make sense? What else can be suggested?
First, I'd recommend using SQS instead of SNS both for the job-triggering circle you have there and for the event consumer square near the bottom of your diagram.
AWS Lambda can read events from an SQS queue, an S3 change, or a change in DynamoDB, so I recommend you use this instead of SNS, which is typically used for mobile push notifications or situations where you need many-to-many messaging/notifications to a group. In this case, SQS is what you want, for a couple of reasons:
It works very well for a producer/consumer pattern.
You can hook up an Elastic Beanstalk consumer app to an SQS queue and have it auto-scale up based on the size of the queue for consumers that execute on a server.
AWS Lambda can read events directly from an SQS queue, so in a serverless job processing queue, your number of asynchronously running lambda functions will scale well as queue throughput goes up.
Since situations like this are what SQS was designed for, it is full of features you can customize to tailor your solution to what you need. https://aws.amazon.com/sqs/features/
For triggering a Lambda function from some of these sources using events (see the handler sketch after the links below):
(dynamodb) https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.Lambda.html
(S3) https://aws.amazon.com/premiumsupport/knowledge-center/lambda-configure-s3-event-notification/
(SQS) https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html
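For the SQS-triggered case, the handler itself stays very small. A minimal sketch in TypeScript; the jobId field is a placeholder for whatever identifies a job in your InspectionEvent payload:

```typescript
// Sketch of a Lambda consuming the SQS queue of Inspection Events.
import type { SQSEvent } from "aws-lambda";

export const handler = async (event: SQSEvent) => {
  for (const record of event.Records) {
    const inspectionEvent = JSON.parse(record.body);
    // ...process the event for its job, then persist the results to S3...
    console.log("processing job", inspectionEvent.jobId);
  }
};
```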
Furthermore, for your diagram, check out this tool which is great for AWS diagrams: https://cloudcraft.co/
I'm not sure what to recommend for when event processing ends, as it looks like that triggers the event consumer, and I'm not sure what requirement that satisfies. Please feel free to leave a comment, and anyone here can help elaborate on best practices for notifying certain functions/resources when the event is finished, depending on what you're trying to do.
Good luck, hope this helps.