This is the architecture I have now:
Lambda (put events to event bus) -> EventBridge -> event bus in another AWS account
Right now, the Lambda is putting events to EventBridge using the PutEvents API. Now I want to send these events to an event bus in a different AWS account. What kind of event pattern do I need to create for the rule?
The event pattern does not change; only the target does. In your case, to forward events to a different account you have to choose a special target for the rule: the event bus in the other account.
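As a rough boto3 sketch (the bus names, account IDs, pattern, and role ARN below are placeholders, not taken from your setup): the rule keeps your existing pattern, and the target is simply the destination bus ARN. The receiving account also has to allow events:PutEvents from your account via a resource-based policy on its bus.

```python
# Hedged sketch: forward matching events from this account's bus to an
# event bus in another account. All names/ARNs are placeholders.
import json
import boto3

events = boto3.client("events")

# Same pattern you already match on; only the target changes.
events.put_rule(
    Name="forward-to-other-account",
    EventBusName="my-source-bus",                      # assumed source bus
    EventPattern=json.dumps({"source": ["my.app"]}),   # assumed pattern
    State="ENABLED",
)

events.put_targets(
    Rule="forward-to-other-account",
    EventBusName="my-source-bus",
    Targets=[
        {
            "Id": "cross-account-bus",
            # The other account's event bus ARN; that bus must have a
            # resource-based policy allowing events:PutEvents from this account.
            "Arn": "arn:aws:events:us-east-1:222222222222:event-bus/shared-bus",
            # IAM role in this account allowed to call events:PutEvents on the
            # destination bus; depending on how the destination bus grants
            # permission, this may be required.
            "RoleArn": "arn:aws:iam::111111111111:role/EventBridgeCrossAccountRole",
        }
    ],
)
```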
Related
I simply want to send an event from a Lambda function TO EventBridge. Everything I find online is the other way around. And if I go to the AWS console and try to find Lambda as an event source, nothing comes up.
Can I send a custom event to an event bridge from Lambda?
Yes, you can, but only to a custom EventBridge bus and only by using the AWS SDK's put_events call. In other words, you have to do it programmatically; there is no automatic integration between Lambda and a custom EventBridge bus.
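A minimal handler sketch, assuming boto3 and a pre-existing bus named my-custom-bus; the source, detail-type, and payload are made up for illustration:

```python
# Sketch of a Lambda handler that puts a custom event on a custom
# EventBridge bus via the SDK. Bus name, source, and detail are assumptions.
import json
import boto3

events = boto3.client("events")

def handler(event, context):
    response = events.put_events(
        Entries=[
            {
                "EventBusName": "my-custom-bus",     # must already exist
                "Source": "my.application",
                "DetailType": "OrderCreated",
                "Detail": json.dumps({"orderId": "123"}),
            }
        ]
    )
    # put_events is not transactional; check for partially failed entries.
    if response.get("FailedEntryCount", 0) > 0:
        raise RuntimeError(f"Failed entries: {response['Entries']}")
    return response
```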
I have written an event rule that targets my Lambda and is attached to the event bus. I have tested a sample event against my event pattern with test-event-pattern in the AWS CLI and in the Rules section of the AWS console, and the event pattern matches.
However, when I raise an event to EventBridge, my rule is not triggered, even though the event is successfully put on the event bus.
A strange thing I've also noticed is that the rule isn't showing up in CloudWatch metrics.
I can confirm that the rule is enabled and has the correct connections and permissions, and I would like help figuring out why it still isn't matching my sample events.
Is the rule created against the correct event bus? Are you putting the event on one bus and creating the rule on another?
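One quick way to check, assuming boto3 (the bus name is a placeholder): list the rules on the bus you actually publish to and confirm yours is there. Note that put_events goes to the default bus when EventBusName is omitted from an entry, while a rule created with an EventBusName only sees events on that bus.

```python
# Sanity check: is the rule attached to the bus you are publishing to?
import boto3

events = boto3.client("events")

bus_name = "my-custom-bus"   # the bus your put_events entries specify;
                             # if you omit EventBusName it is "default"

rules = events.list_rules(EventBusName=bus_name)["Rules"]
print([r["Name"] for r in rules])   # is your rule in this list?
```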
I am connecting 3rd-party applications like Shopify to EventBridge via AWS partner event sources. The 3rd-party events are emitted on an application-specific custom event bus, and each event triggers an event rule (on that bus) that transforms the 3rd-party event into a common format and starts the execution of a common StepFunction. That means the event bus, event rule, and target (state machine) are hard-coupled resources.
AWS EventBridge supports bus-to-bus event routing, which means events on a custom event bus can be forwarded to the default event bus. Then, on the default event bus, an event rule can trigger the common StepFunction. That would mean 3rd-party apps are only loosely coupled with the shared resources, because their events are just re-emitted on the default event bus, where the actual hard coupling takes place.
For whatever reason, transforming events with an input transformer on an event rule is not possible when the target is another event bus. That would mean I have to move all 3rd-party event transformations from their respective custom event buses into rules on the default event bus.
Of course, I could have a Lambda function as a common target on each custom event bus that simply re-emits the events onto the default event bus. But are there any feasible and/or cheaper workarounds?
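For reference, a rough sketch of that Lambda re-emitter workaround, assuming boto3; the "common format" fields here are invented purely for illustration:

```python
# Hedged sketch of the re-emitter: a single function targeted by a rule on
# each partner/custom bus that applies the per-source transformation and
# re-publishes the result onto the default bus.
import json
import boto3

events = boto3.client("events")

def handler(event, context):
    # 'event' is the EventBridge event delivered by the rule on the custom bus.
    common = {
        "origin": event.get("source"),
        "payload": event.get("detail", {}),
    }
    events.put_events(
        Entries=[
            {
                # Omitting EventBusName sends the entry to the default bus.
                "Source": "my.common.format",
                "DetailType": "PartnerEventNormalized",
                "Detail": json.dumps(common),
            }
        ]
    )
```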
Hopefully this post will catch the attention of some AWS employees who can tell us if this feature is expected in the near future.
I'm running some integration tests on a microservice I built in AWS. One of the tests is to assert that the service triggers an AWS EventBridge event, as downstream services will need to subscribe to this event.
My question is: how do I test this in the context of my microservice?
I just need to assert that the event was fired in AWS. I was hoping the AWS SDK would allow some way of asserting this, e.g. being able to subscribe to an event via some long-polling operation, but I haven't been able to find anything.
NOTE: Not looking for test double spy answers please. The level of testing I'm doing requires confirming that an actual event has been fired in AWS EventBridge
You can create a rule for your specific event and target an SQS queue.
You can then read from the queue (using long polling) and assert that the event was fired.
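A sketch of the assertion step with boto3, assuming such a rule already targets a dedicated test queue; the queue URL and detail-type are placeholders:

```python
# Poll the test queue until the expected event shows up or the timeout hits.
import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/111111111111/eventbridge-test-queue"

def assert_event_fired(expected_detail_type, timeout_seconds=60):
    waited = 0
    while waited < timeout_seconds:
        resp = sqs.receive_message(
            QueueUrl=QUEUE_URL,
            MaxNumberOfMessages=10,
            WaitTimeSeconds=20,      # long polling
        )
        for msg in resp.get("Messages", []):
            body = json.loads(msg["Body"])   # the EventBridge event envelope
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
            if body.get("detail-type") == expected_detail_type:
                return body
        waited += 20
    raise AssertionError(f"No '{expected_detail_type}' event received in time")
```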
You can check the CloudWatch metrics for your rule such as TriggeredRules, Invocations, and FailedInvocations for debugging.
Check the documentation on logging and monitoring in Amazon EventBridge.
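Those metrics can also be pulled programmatically; a small boto3 sketch, where the rule name and time window are placeholders:

```python
# Pull the rule's TriggeredRules metric from the AWS/Events namespace.
from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client("cloudwatch")

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/Events",
    MetricName="TriggeredRules",
    Dimensions=[{"Name": "RuleName", "Value": "my-rule"}],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Sum"],
)
print(stats["Datapoints"])   # no datapoints usually means the rule never matched
```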
If the rule is triggered by an event from an AWS service, you can also use the TestEventPattern action to test your rule's event pattern against a test event and make sure it is set up correctly.
For more info on how to use TestEventPattern, see the TestEventPattern docs.
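A boto3 sketch of that check; the pattern and the sample event are placeholders, and the Event string needs the standard envelope fields:

```python
# Test whether a sample event matches an event pattern without touching a bus.
import json
import boto3

events = boto3.client("events")

pattern = {"source": ["my.application"], "detail-type": ["OrderCreated"]}

sample_event = {
    "id": "7bf73129-1428-4cd3-a780-95db273d1602",
    "detail-type": "OrderCreated",
    "source": "my.application",
    "account": "111111111111",
    "time": "2023-01-01T00:00:00Z",
    "region": "us-east-1",
    "resources": [],
    "detail": {"orderId": "123"},
}

result = events.test_event_pattern(
    EventPattern=json.dumps(pattern),
    Event=json.dumps(sample_event),
)
print(result["Result"])   # True if the sample event matches the pattern
```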
I need some help with a use case: I want to set up a communication service.
It uses SQS, and the queue will receive different types of events to be communicated. Each Lambda function handles a single kind of communication, e.g. an email Lambda, a Slack Lambda, etc.
How can I invoke a different Lambda based on message attributes? I was planning to use SQS as an event source, with something like this architecture (link to sample architecture).
With the above, we can handle rate limiting and concurrency at the Lambda service level.
Simplified: if the event type is A, invoke Lambda A; if the event type is B, invoke Lambda B.
Both event types arrive on the same SQS queue.
All suggestions are welcome.
Your problem is that an SQS message can only be read by one consumer at a time; while it is being read, it is invisible to everyone else. You can only have one Lambda consumer, and there is no partitioning or routing within SQS beyond setting up another queue. Multiple consumers are supported by Kinesis or AWS MSK (Kafka).
What you are trying to accomplish is called a fan-out, which is a common cloud architecture. What you probably want to do is publish initially to SNS. With SNS you can filter and route to a separate SQS queue for each message type, and each queue is then consumed by its own Lambda.
Check out a tutorial here:
https://docs.aws.amazon.com/sns/latest/dg/sns-common-scenarios.html
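A rough boto3 sketch of that fan-out, with one queue per message type and a filter policy on each subscription; the ARNs and the eventType attribute are placeholders, and the SQS queue policies that allow SNS delivery are omitted for brevity:

```python
# One SNS topic, one SQS queue per message type, filter policies do the routing.
import json
import boto3

sns = boto3.client("sns")

TOPIC_ARN = "arn:aws:sns:us-east-1:111111111111:communications"
EMAIL_QUEUE_ARN = "arn:aws:sqs:us-east-1:111111111111:email-queue"
SLACK_QUEUE_ARN = "arn:aws:sqs:us-east-1:111111111111:slack-queue"

# Route by the 'eventType' message attribute.
sns.subscribe(
    TopicArn=TOPIC_ARN,
    Protocol="sqs",
    Endpoint=EMAIL_QUEUE_ARN,
    Attributes={"FilterPolicy": json.dumps({"eventType": ["email"]})},
)
sns.subscribe(
    TopicArn=TOPIC_ARN,
    Protocol="sqs",
    Endpoint=SLACK_QUEUE_ARN,
    Attributes={"FilterPolicy": json.dumps({"eventType": ["slack"]})},
)

# Publishers set the attribute; SNS delivers only to the matching queue,
# and each queue triggers its own Lambda via the usual SQS event source.
sns.publish(
    TopicArn=TOPIC_ARN,
    Message=json.dumps({"to": "user@example.com", "body": "hi"}),
    MessageAttributes={
        "eventType": {"DataType": "String", "StringValue": "email"}
    },
)
```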