So I'm trying to find the source of a strange bug which causes unusual invocation spikes for a particular Lambda function. So far, I've added logging to the Lambdas and redeployed to glean more info about the context and event objects that trigger them.
I want to know where these events are originating. From the logged event objects I've found the TopicArn that is the culprit, but how do I go about finding the guilty publisher in this relationship? Any ideas, or is there something I'm overlooking?
Do you have CloudTrail enabled? You should be able to use CloudTrail to log all the calls to your SNS topics.
Depending on how you logged it, you might want to attach an SQS queue to the topic too. This would give you the full payload. In one of mine I can see something like this:
{
"version": "0",
"id": "7f47b81a-10cc-4b28-be35-123456789",
"detail-type": "Scheduled Event",
"source": "aws.events",
"account": "123456789",
"time": "2017-02-03T18:28:52Z",
"region": "us-east-1",
"resources": [
"arn:aws:events:us-east-1:123456789:rule/5_min_scheduler"
],
"detail": {
}
}
This is obviously from a CloudWatch scheduled event, but it does have a source. I don't know for sure that yours will, but it's easy to have a topic push to a queue in addition to the Lambda to assist in debugging.
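If it helps with that debugging setup, here's a minimal sketch of subscribing a throwaway queue to the topic with the AWS SDK for Java (v1); the topic ARN and queue name are placeholders, not values from your account:
import com.amazonaws.services.sns.AmazonSNS;
import com.amazonaws.services.sns.AmazonSNSClientBuilder;
import com.amazonaws.services.sns.util.Topics;
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;

public class AttachDebugQueue {
    public static void main(String[] args) {
        AmazonSNS sns = AmazonSNSClientBuilder.defaultClient();
        AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();

        // Placeholder ARN for the suspect topic found in the Lambda's logged event.
        String topicArn = "arn:aws:sns:us-east-1:123456789012:SuspectTopic";

        // Create a scratch queue and let the SDK helper wire up the subscription
        // and the queue policy that allows the topic to deliver to it.
        String queueUrl = sqs.createQueue("sns-debug-queue").getQueueUrl();
        Topics.subscribeQueue(sns, sqs, topicArn, queueUrl);
    }
}
Every notification the topic delivers will then also land in the queue with the full SNS envelope (Message, MessageAttributes, TopicArn and so on), which you can inspect at leisure.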
Related
I want to schedule events via EventBridge, so that EventBridge will send the events to SNS with an SQS subscription, and then my Spring Boot application will listen to SQS.
But the problem here is, I cannot find a way to provide details in this event.
I want to send something like this:
{
"version": "0",
"id": "89d1a02d-5ec7-412e-82f5-13505f849b41",
"detail-type": "Scheduled Event",
"source": "aws.events",
"time": "2016-12-30T18:44:49Z",
"detail": {"use-case-name": "Update all customers"}
}
Is there any possibility I can put details in there?
I tried to configure it like this, but the event still does not have any information in detail:
{
"version": "0",
"id": "7e62a5fa-2f75-d89d-e212-40dad2b9ae43",
"detail-type": "Scheduled Event",
"source": "aws.events",
"resources": [
"..."
],
"detail": {}
}
You can use the target's Input or InputTransformer attribute to send information to the target (SNS/SQS in your scenario). You can pass a static JSON message, or modify the input message depending on the event data.
Note: the AWS EventBridge console has these fields, so you can test them without writing code. You won't see the target input information in the sample event details, but if you go to the SQS console and view the available messages (Poll for messages), you can confirm that the messages passed to SQS include the JSON string you defined on the EventBridge side.
SQS sample message: (screenshot of the delivered message body in the SQS console)
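For reference, the same static input can be set outside the console. A rough sketch using the AWS SDK for Java (v1) CloudWatch Events/EventBridge client, where the rule name and topic ARN are placeholders:
import com.amazonaws.services.cloudwatchevents.AmazonCloudWatchEvents;
import com.amazonaws.services.cloudwatchevents.AmazonCloudWatchEventsClientBuilder;
import com.amazonaws.services.cloudwatchevents.model.PutTargetsRequest;
import com.amazonaws.services.cloudwatchevents.model.Target;

public class AddStaticInputTarget {
    public static void main(String[] args) {
        AmazonCloudWatchEvents events = AmazonCloudWatchEventsClientBuilder.defaultClient();

        // "Input" replaces the matched event entirely with this static JSON, so the
        // SNS/SQS consumer sees the use-case payload instead of the bare scheduled event.
        events.putTargets(new PutTargetsRequest()
                .withRule("five-minute-schedule")                              // placeholder rule name
                .withTargets(new Target()
                        .withId("sns-target")
                        .withArn("arn:aws:sns:us-east-1:123456789012:MyTopic") // placeholder ARN
                        .withInput("{\"use-case-name\": \"Update all customers\"}")));
    }
}
InputTransformer works the same way, but lets you splice fields from the original event (the time, for instance) into the JSON you pass along.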
In my pipeline I have an event notification on an S3 bucket which triggers an SNS topic. That SNS topic in turn has a Lambda function subscribed to it. I need the SNS topic to send a hard-coded message body to the Lambda because it gets used in that function.
Since the SNS topic publishes the message automatically when the S3 event notification is set off, I am wondering if and how I can edit the message that gets sent to the Lambda.
To be clear: I want the same message sent every time. The goal is for the Lambda to get a variable that depends only on which topic the Lambda was triggered from.
Currently I am building this through the UI but will eventually code it in Terraform for production.
When Amazon SNS triggers an AWS Lambda function, the information it sends includes the SNS TopicArn.
You could use that ARN to determine which SNS topic triggered the Lambda function, and therefore which action it should process. The incoming event looks like this:
{
"Records": [
{
"EventSource": "aws:sns",
"EventVersion": "1.0",
"EventSubscriptionArn": "arn:aws:sns:us-east-1:{{{accountId}}}:ExampleTopic",
"Sns": {
"Type": "Notification",
"MessageId": "95df01b4-ee98-5cb9-9903-4c221d41eb5e",
"TopicArn": "arn:aws:sns:us-east-1:123456789012:ExampleTopic",
"Subject": "example subject",
"Message": "example message",
"Timestamp": "1970-01-01T00:00:00.000Z",
"SignatureVersion": "1",
"Signature": "EXAMPLE",
"SigningCertUrl": "EXAMPLE",
"UnsubscribeUrl": "EXAMPLE",
"MessageAttributes": {
"Test": {
"Type": "String",
"Value": "TestString"
},
"TestBinary": {
"Type": "Binary",
"Value": "TestBinary"
}
}
}
}
]
}
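As a minimal sketch of that idea, using the aws-lambda-java-events SNSEvent type; the topic-name suffixes are made-up placeholders for whatever your topics are actually called:
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.SNSEvent;

public class TopicAwareHandler implements RequestHandler<SNSEvent, Void> {
    @Override
    public Void handleRequest(SNSEvent event, Context context) {
        for (SNSEvent.SNSRecord record : event.getRecords()) {
            // The TopicArn tells us which topic delivered this notification.
            String topicArn = record.getSNS().getTopicArn();

            if (topicArn.endsWith(":uploads-topic")) {        // placeholder suffix
                // handle the "uploads" case
            } else if (topicArn.endsWith(":cleanup-topic")) { // placeholder suffix
                // handle the "cleanup" case
            }
        }
        return null;
    }
}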
Rather than having Amazon S3 send a message to Amazon SNS directly, you might be able to configure an Amazon CloudWatch Events rule that triggers on object creation and sends a Constant as part of the message to Amazon SNS (see the sketch below).
If large files are being uploaded, you might also need to trigger it on CompleteMultipartUpload.
You could also have the rule trigger the AWS Lambda function directly (without going via Amazon SNS), depending upon your use case. A Constant can also be specified for this.
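As a rough sketch of that rule (assuming object-level CloudTrail data events are enabled for the bucket; the bucket name is a placeholder), the event pattern could look something like:
{
  "source": ["aws.s3"],
  "detail-type": ["AWS API Call via CloudTrail"],
  "detail": {
    "eventSource": ["s3.amazonaws.com"],
    "eventName": ["PutObject", "CompleteMultipartUpload"],
    "requestParameters": {
      "bucketName": ["my-upload-bucket"]
    }
  }
}
The rule's target (the SNS topic, or the Lambda function directly) would then have its input set to Constant (JSON text), and that constant is what gets delivered every time.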
Although AWS considers using git webhooks to be an antiquated practice, the documentation on AWS CodeStar connections seems to be a bit scarce. I want to create a generic pipeline that can be triggered when a new repository is committed to for the first time (provided it contains a folder of TF config). To do this, I need to be able to monitor when an AWS CodeStar connection is used. I think that doing it this way will mean I can build something that scales better.
But there doesn't appear to be a well documented way to monitor when 'anything' accesses a codestar connection:
https://docs.aws.amazon.com/service-authorization/latest/reference/list_awscodestarconnections.html#awscodestarconnections-actions-as-permissions
In the permissions reference above, one can see that there is an action that requires a permission to work but is not directly accessible. In CloudTrail, I found an event with a payload like this:
"eventTime": "2021-07-06T11:22:46Z",
"eventSource": "codestar-connections.amazonaws.com",
"eventName": "UseConnection",
"awsRegion": "us-east-1",
"sourceIPAddress": "codepipeline.amazonaws.com",
"userAgent": "codepipeline.amazonaws.com",
"requestParameters": {
"connectionArn": "arn:aws:codestar-connections:*:connection/",
"referenceType": "COMMIT",
"reference": {
"FullRepositoryId": "GitHub-User/Github-Repo",
"Commit": "SHA"
}
},
I believe that this is enough for me to use for what I want. I could create an SNS notification with a Lambda listener when this event triggers, but that requires setting up infrastructure to monitor CloudTrail events.
But while I was researching this, I noticed that AWS EventBridge appears to know about CodeStar connections.
Note: if I take this a bit further, I can get a pattern that looks like this:
{
"source": [
"aws.codestar-connections"
],
"detail-type": [
"AWS API Call via CloudTrail"
],
"detail": {
"eventSource": [
"codestar-connections.amazonaws.com"
]
}
}
... but I see no sample events where it appears there should be some. And I'm unable to find documentation describing how to make CodeStar connections log the UseConnection event to CloudWatch.
If this can be used, instead, then I can use a more direct approach without needing to build the infrastructure to monitor the CloudTrail events.
Can this be done?
EventBridge/CloudTrail pass the below JSON string to my Lambda function when the response gets too large.
Is there any way to view the responseElements, like paginators or a NextToken?
"responseElements":{
"omitted":true,
"originalSize":175918,
"reason":"responseElements too large"
}
I'm using the following EventBridge pattern
{
"source": ["aws.ec2"],
"detail-type": ["AWS API Call via CloudTrail"],
"detail": {
"eventSource": ["ec2.amazonaws.com"],
"eventName": ["RunInstances"]
}
}
This is a limitation of CloudTrail, so at this time it's not possible to pass that information from CloudTrail if it exceeds 100KB.
A potential workaround that may be useful to others seeing this message is to create an EventBridge rule that tracks EC2 instance state changes. Instead of monitoring the RunInstances API call, look for instances changing into the running state and trigger from that, as this event has a much smaller payload.
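A hedged sketch of what that state-change rule's event pattern could look like, filtered to instances entering the running state:
{
  "source": ["aws.ec2"],
  "detail-type": ["EC2 Instance State-change Notification"],
  "detail": {
    "state": ["running"]
  }
}
The matched event is small but includes the instance-id in its detail, so the function can call DescribeInstances itself if it still needs the data that was omitted from the RunInstances response.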
https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/CloudWatch-Events-tutorial-CloudWatch-Logs.html
I'm new to AWS and here is the task I'm trying to solve.
An SQS queue is set up, and from time to time new messages arrive in it. I want to set up a Lambda function that retrieves those messages and performs some business logic on their content.
Searching across the AWS site and the Internet in general, I understood that SQS itself can't be a trigger for Lambda, hence I need to set up a CloudWatch rule that will trigger the Lambda on a schedule (every minute, for example). There is a code example from the AWS GitHub showing how to consume a message.
So far so good. Now, when creating the Lambda itself, I need to specify the input type to implement the RequestHandler interface:
public interface RequestHandler<I, O> {
    O handleRequest(I var1, Context var2);
}
But my Lambda is not expecting any input; it will go to SQS on its own and pull the messages. Does it make any sense to have an input?
Can I leave it void, or even use some other method signature altogether (of course not implementing that interface in that case)?
Here your Lambda will get a reference to the CloudWatch trigger event.
You might not be interested in that, but there can be cases where the Lambda wants to know the trigger details, even if the trigger is a CloudWatch alarm.
The following is an example event:
{ "version": "0", "id": "53dc4d37-cffa-4f76-80c9-8b7d4a4d2eaa",
"detail-type": "Scheduled Event", "source": "aws.events", "account":
"123456789012", "time": "2015-10-08T16:53:06Z", "region": "us-east-1",
"resources": [
"arn:aws:events:us-east-1:123456789012:rule/my-scheduled-rule" ],
"detail": {} }