Amazon EventBridge rule for S3 PutObject event cannot trigger AWS Step Functions

After setting up the EventBridge rule, the S3 PutObject event still cannot trigger the Step Functions state machine.
However, when I change the event rule to an EC2 status event, it works!
I also tried changing the rule to match all S3 events, but it is still not working.
Amazon EventBridge:
Event pattern:
{
  "source": ["aws.s3"],
  "detail-type": ["AWS API Call via CloudTrail"],
  "detail": {
    "eventSource": ["s3.amazonaws.com"],
    "eventName": ["PutObject"],
    "requestParameters": {
      "bucketName": ["MY_BUCKETNAME"]
    }
  }
}
Target(s):
Type: Step Functions state machine
ARN: arn:aws:states:us-east-1:xxxxxxx:stateMachine:MY_FUNCTION_NAME
Reference: https://docs.aws.amazon.com/step-functions/latest/dg/tutorial-cloudwatch-events-s3.html

Your Step Functions state machine isn't being triggered because the PutObject events aren't being published to CloudTrail. S3 object-level operations are classified as data events, so you must enable data events when creating your trail. The tutorial says "next, next, and create", which seems to suggest no additional options need to be selected. However, by default, data events on the next step (step 2, "Choose log events", as of this writing) are not checked. You have to check that option and fill in the section at the bottom to specify whether all buckets/events are to be logged.
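As a rough sketch of that fix (the trail name below is a placeholder, and the bucket ARN reuses MY_BUCKETNAME from the pattern above), enabling S3 data events on an existing trail with boto3 might look like this:

import boto3

cloudtrail = boto3.client("cloudtrail")

# Enable S3 object-level (data) events on an existing trail so that
# PutObject calls are logged and can match the EventBridge rule.
# "my-trail" is a placeholder for your own trail name.
cloudtrail.put_event_selectors(
    TrailName="my-trail",
    EventSelectors=[
        {
            "ReadWriteType": "All",
            "IncludeManagementEvents": True,
            "DataResources": [
                {
                    "Type": "AWS::S3::Object",
                    # The trailing slash scopes logging to objects in this bucket.
                    "Values": ["arn:aws:s3:::MY_BUCKETNAME/"],
                }
            ],
        }
    ],
)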

Related

AWS CloudWatch event pattern to detect S3 bucket creation/modification with public access

I am trying to create an AWS CloudWatch event that will trigger an email whenever an S3 bucket is created or modified to allow public access.
I have created the CloudTrail trail and log stream, and am tracking all the S3 event logs. When I try to create a custom event with a pattern to detect S3 buckets with public access, I am not able to fetch any response, and the event doesn't get triggered even if I create a bucket with public access. Can you help me out with the custom pattern for this?
I have tried giving GetPublicAccessBlock, PutPublicAccessBlock, etc. as the event type, but no luck. Please suggest accordingly.
You need to do the following in order to receive a notification:
Enable CloudTrail for management events
Create an EventBridge rule with an event pattern
Choose "AWS events or EventBridge partner events" as the event source
Use a pattern from the AWS service Simple Storage Service (S3) with event type "AWS API Call via CloudTrail"
Note: this only works if you are turning off the public access block on an existing bucket (not when creating a new bucket).
The reason is that when we create a bucket with public access, only two events are generated, CreateBucket and PutBucketEncryption, and they don't seem to carry any information about public access being turned on. However, if we create a bucket with no public access, an additional PutBucketPublicAccessBlock event is generated alongside CreateBucket and PutBucketEncryption.
{
  "source": ["aws.s3"],
  "detail-type": ["AWS API Call via CloudTrail"],
  "detail": {
    "eventSource": ["s3.amazonaws.com"],
    "eventName": ["PutBucketPublicAccessBlock", "DeleteBucketPublicAccessBlock"],
    "requestParameters": {
      "PublicAccessBlockConfiguration": {
        "$or": [
          { "RestrictPublicBuckets": [false] },
          { "BlockPublicPolicy": [false] },
          { "BlockPublicAcls": [false] },
          { "IgnorePublicAcls": [false] }
        ]
      }
    }
  }
}
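If it helps, here is a minimal boto3 sketch of wiring this pattern to an email notification; the rule name and SNS topic ARN are made-up placeholders:

import boto3

events = boto3.client("events")

# Hypothetical names; substitute your own rule name and SNS topic ARN.
RULE_NAME = "s3-public-access-change"
TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:my-alert-topic"

# event_pattern.json holds the pattern shown above.
with open("event_pattern.json") as f:
    event_pattern = f.read()

events.put_rule(Name=RULE_NAME, EventPattern=event_pattern)

# The SNS topic (with an email subscription) receives the matched event.
events.put_targets(Rule=RULE_NAME, Targets=[{"Id": "sns-email", "Arn": TOPIC_ARN}])

Note that the SNS topic's access policy must allow events.amazonaws.com to publish to it.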

Sending the inputs to the target from a different event, and not the triggering event, in Amazon EventBridge

Hi, I am still learning how to set up event rules in AWS EventBridge.
I have the following setup:
Release pipeline in CodePipeline
Test pipeline in CodePipeline
EventBridge rule which gets triggered on success of the release pipeline
Currently I have the following event pattern:
{
  "source": ["aws.codepipeline"],
  "detail-type": ["CodePipeline Pipeline Execution State Change"],
  "resources": ["arn:aws:codepipeline:"],
  "detail": {
    "state": ["SUCCEEDED"]
  }
}
My input to the target is the matched event.
But when this rule triggers, I want to send to the target the inputs not from the matched event but from, say, a CodeCommit event. Is this possible?
So what I wish to achieve, essentially, is that the rule will be triggered by CodePipeline succeeding, and the rule will send the details from CodeCommit to Lambda.

CodeBuild: how to pass variables / JSON data to a CloudWatch Event rule?

CodeBuild is used to build a project from a repository and deploy it to S3. I want to pass data/information that is processed in CodeBuild to a CloudWatch event so that I can send a notification with that information for passed as well as failed builds. Is there a way to send data ($variables) processed in CodeBuild to a CloudWatch event rule, or any other way?
I have the rule, topic, and email working, but I see no way to pass any extra data beyond what is supplied by CodeBuild.
For example: I have some environment variables in CodeBuild, and I need to send these as part of my notification, which will help me determine what value caused the build failure.
You have to do this from within your CodeBuild project as part of your buildspec.yml. If you are using SNS (I guess), then you can use the aws sns publish AWS CLI command as part of your CodeBuild procedure. This would also require you to add permissions for the sns:Publish action to the CodeBuild role.
I'll start by saying that @Marcin's answer is totally correct, but it doesn't answer the "as well as failed build" part.
So for the first part, where you want to send the responses from the processed data, you either need to:
publish to SNS directly from your buildspec (as @Marcin pointed out; see the sketch after this list)
or send an event to AWS EventBridge (aka CloudWatch Events) from your buildspec
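For the SNS option, a small helper script invoked from the buildspec could publish the processed variables; the topic ARN and MY_CUSTOM_VAR below are made up, while CODEBUILD_BUILD_ID is a standard CodeBuild environment variable:

import os
import boto3

sns = boto3.client("sns")

# Publish build data to SNS from within the build. The topic ARN and
# MY_CUSTOM_VAR are placeholders for your own resources and variables.
sns.publish(
    TopicArn="arn:aws:sns:us-east-1:111122223333:build-notifications",
    Subject="CodeBuild result",
    Message=(
        f"Build: {os.environ.get('CODEBUILD_BUILD_ID')}\n"
        f"Custom value: {os.environ.get('MY_CUSTOM_VAR')}"
    ),
)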
With regard to the second part of the question, where you want to catch the CodeBuild execution status, you can rely on the built-in notification events that are generated by CodeBuild itself:
{
  "source": ["aws.codebuild"],
  "detail-type": ["CodeBuild Build State Change"],
  "detail": {
    "build-status": ["IN_PROGRESS", "SUCCEEDED", "FAILED", "STOPPED"],
    "project-name": ["my-demo-project-1", "my-demo-project-2"]
  }
}
You can intercept the events for the whole build, and for each phase separately if needed, and act upon them (whether you send them to SNS, trigger a Lambda, or something else is up to you).
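As a rough illustration, a Lambda target for that rule could pull the status and project name out of the matched event (the field names follow the pattern above; what you do with them is up to you):

def lambda_handler(event, context):
    # The matched EventBridge event carries the build state under "detail".
    detail = event["detail"]
    status = detail["build-status"]
    project = detail["project-name"]

    if status == "FAILED":
        # React to failures here, e.g. publish to SNS or open a ticket.
        print(f"Build failed for project {project}")
    else:
        print(f"Project {project} is now {status}")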

How to test a Lambda using a test event

I have a Lambda which is triggered by a CloudWatch event when VPN tunnels are down or up. I searched online but can't find a way to trigger this CloudWatch event.
I see an option for a test event, but what can I enter here for it to trigger an event that the tunnel is up or down?
You can look into CloudWatch Events and Event Patterns.
Events in Amazon CloudWatch Events are represented as JSON objects. For more information about JSON objects, see RFC 7159. The following is an example event:
{
  "version": "0",
  "id": "6a7e8feb-b491-4cf7-a9f1-bf3703467718",
  "detail-type": "EC2 Instance State-change Notification",
  "source": "aws.ec2",
  "account": "111122223333",
  "time": "2017-12-22T18:43:48Z",
  "region": "us-west-1",
  "resources": [
    "arn:aws:ec2:us-west-1:123456789012:instance/i-1234567890abcdef0"
  ],
  "detail": {
    "instance-id": "i-1234567890abcdef0",
    "state": "terminated"
  }
}
Also, to log based on the event, you can pick the event you need from the AWS CloudWatch event types.
I believe in your scenario you don't need to pass any input data, as you must have built the logic to test the VPN tunnel connectivity within the Lambda. You can remove that JSON from the test event and then run the test.
If you need to pass in some information as part of the input event, then follow the approach mentioned by @Adiii.
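If you want to drive the test from outside the console, one option is to invoke the function with a hand-built payload via boto3; the function name and payload shape below are assumptions, so mirror whatever fields your handler actually reads:

import json
import boto3

lambda_client = boto3.client("lambda")

# Hypothetical function name and payload; adapt to your handler's input.
response = lambda_client.invoke(
    FunctionName="my-vpn-monitor",
    Payload=json.dumps({"detail": {"state": "down"}}),
)
print(response["Payload"].read().decode())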
EDIT
The question is made clearer by the comment, which says:
But question is how will I trigger the lambda? Lets say I want to trigger it when tunnel is down? How will let lambda know tunnel is in down state? – NoviceMe
This can be achieved by setting up a rule in CloudWatch to schedule the Lambda trigger at a periodic interval. More details here:
Tutorial: Schedule AWS Lambda Functions Using CloudWatch Events
Lambda does not currently have an invocation trigger that can monitor a VPN tunnel, so the only workaround is to poll the status through the Lambda.
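A minimal polling sketch, assuming the Lambda runs on such a scheduled rule and checks tunnel telemetry through the EC2 API (the VPN connection ID is a placeholder):

import boto3

ec2 = boto3.client("ec2")

def lambda_handler(event, context):
    # Invoked by a scheduled CloudWatch Events rule; the connection ID
    # below is a placeholder for your own VPN connection.
    response = ec2.describe_vpn_connections(
        VpnConnectionIds=["vpn-0123456789abcdef0"]
    )
    for connection in response["VpnConnections"]:
        for tunnel in connection["VgwTelemetry"]:
            if tunnel["Status"] == "DOWN":
                # The tunnel is down; alert from here (SNS, etc.).
                print(f"Tunnel {tunnel['OutsideIpAddress']} is DOWN")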

AWS CloudWatch: How to pass MediaConvert logs to a Lambda function in CloudWatch rules?

I am trying to set up a video streaming architecture using AWS S3, CloudWatch, and MediaConvert, following a reference guide.
In short, the steps are:
Upload a video to an S3 bucket
On success, S3 should trigger a Lambda function which converts the input video into different formats, saves them in another S3 bucket, and logs in CloudWatch
In CloudWatch, based on an event pattern, trigger another Lambda function with the video file information
The Lambda function will save this information in the desired location
I am stuck at step 3, where I am able to trigger the Lambda function, but I am not able to understand how to pass the converted video filepath or filename to the Lambda function in the target section.
Here is the custom event pattern to recognise the MediaConvert success event:
{
  "source": ["aws.mediaconvert"],
  "detail-type": ["MediaConvert Job State Change"],
  "detail": {
    "status": ["COMPLETE", "ERROR"],
    "userMetadata": {
      "application": ["VOD"]
    }
  }
}
You should be creating a CloudWatch event rule to handle this scenario.
Steps for your case:
Go to CloudWatch / Rules
Event Pattern
Events by service
Select the service name
Select the event type
This should trigger a CloudWatch event, and you need to process that event to get the required information.
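To get the converted file path into the Lambda, the function can read it straight from the matched event. A sketch, assuming the COMPLETE event carries outputGroupDetails as MediaConvert job state change events normally do:

def lambda_handler(event, context):
    detail = event["detail"]

    if detail["status"] == "COMPLETE":
        # COMPLETE events list the output locations of the converted files.
        for group in detail.get("outputGroupDetails", []):
            for output in group.get("outputDetails", []):
                for path in output.get("outputFilePaths", []):
                    print(f"Converted file: {path}")
    else:
        # ERROR events carry an errorMessage instead.
        print(f"Job failed: {detail.get('errorMessage')}")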