I'm having some issues with AWS CloudWatch Events.
I'm creating a CodePipeline CI pipeline which has a CodeCommit repository as the Source and a CodeBuild project as the Build/Test phase (it then deploys to Lambda, but the problem isn't there).
We have multiple projects and are going to add several more, so I created a script that manages the AWS CI resources (i.e. it creates the pipeline, the CodeBuild project, ... AND a CloudWatch Events rule linked to the pipeline).
The first time I push my code, it works. But after that, the pipeline stops getting triggered by pushes to CodeCommit.
I found a workaround (but NOT the one I want): I edit the pipeline in the console, open the Source stage without changing anything, and save the null modification, and it works again (before saving, the console asks for authorization to create a CloudWatch Events rule associated with this pipeline).
Has anybody encountered this issue? What did you do to work around it?
I really want a 100% automated CI; I don't want to go to the AWS Console every time my team creates a new repository or pushes a new branch to an existing repository.
EDIT:
Here is the JSON of my CloudWatch Events rule:
{
"Name": "company-ci_codepipeline_project-stage",
"EventPattern": "cf. second JSON",
"State": "ENABLED",
"Arn": "arn:aws:events:region:xxx:rule/company-ci_codepipeline_project-stage",
"Description": "CloudWatch Events rule to automatically trigger the needed pipeline from every push to project repository, on the stage branch on CodeCommit."
}
And here is the EventPattern JSON:
{
"source": [
"aws.codecommit"
],
"detail-type": [
"CodeCommit repository state change"
],
"resources": [
"arn:aws:codecommit:region:xxx:project"
],
"detail": {
"event": [
"referenceCreated",
"referenceUpdated"
],
"referenceType": [
"branch"
],
"referenceName": [
"stage"
]
}
}
I've found this issue is typically related to the event rule/target/role configuration. If you don't have a target associated with your rule, you will NOT see the event invoked when reviewing metrics. Since your EventPattern looks correct, I'm thinking the target might be your issue.
You should have a configured target that looks something like:
{
"Rule": "company-ci_codepipeline_project-stage",
"Targets": [
{
"RoleArn": "arn:aws:iam::xxx:role/cwe-codepipeline",
"Id": "ProjectPipelineTarget",
"Arn": "arn:aws:codepipeline:region:xxx:your-pipeline"
}
]
}
If that seems all good, I'd next check that the role associated with the target is granting the correct permissions. My role looks something like:
{
"Role": {
"Description": "Allows CloudWatch Events to invoke targets and perform actions in built-in targets on your behalf.",
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "events.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
},
"MaxSessionDuration": 3600,
"RoleId": "xxxx",
"CreateDate": "2018-08-06T20:56:19Z",
"RoleName": "cwe-codepipeline",
"Path": "/",
"Arn": "arn:aws:iam::xxx:role/cwe-codepipeline"
}
}
And it has an inline policy of:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"codepipeline:StartPipelineExecution"
],
"Resource": [
"arn:aws:codepipeline:*:xxx:*"
]
}
]
}
For reference, check out this documentation
I enabled Amazon EventBridge notifications on my S3 bucket.
Then I created an EventBridge rule with the following event pattern:
{
"detail": {
"bucket": {
"name": ["arn:aws:s3:::my-bucket"]
}
},
"detail-type": ["Object Created"],
"source": ["aws.s3"]
}
Then I added my state machine as the target of this rule. I also attached an IAM role with the following policy for this event target.
"Statement": [
{
"Effect": "Allow",
"Action": [ "states:StartExecution" ],
"Resource": [ "arn:aws:states:*:*:stateMachine:*" ]
}
]
Then I attached the following policy to my Step Functions state machine as well:
{
"Action": "events:*",
"Resource": "arn:aws:events:us-east-1:my-account-id:event-bus/default",
"Effect": "Allow"
}
After doing all of this, my state machine is still not getting invoked.
What am I missing here? How can I debug where the issue might be?
Have you checked if your custom pattern matches the event?
I think you do not need the ARN in the name field, just the bucket name.
Try with:
{
"detail": {
"bucket": {
"name": ["my-bucket"]
}
},
"detail-type": ["Object Created"],
"source": ["aws.s3"]
}
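One way to check whether the pattern matches without waiting for a real upload is EventBridge's TestEventPattern API. A rough boto3 sketch; the sample event below is abridged and partly made up, so adjust it to a real "Object Created" event from your bus:
# Sketch: check whether an event pattern matches a sample event.
import json
import boto3

events = boto3.client("events")

pattern = {
    "detail": {"bucket": {"name": ["my-bucket"]}},
    "detail-type": ["Object Created"],
    "source": ["aws.s3"],
}

sample_event = {
    # TestEventPattern requires these top-level fields; values here are placeholders.
    "id": "0", "account": "111122223333", "time": "2023-01-01T00:00:00Z",
    "region": "us-east-1", "resources": ["arn:aws:s3:::my-bucket"],
    "source": "aws.s3",
    "detail-type": "Object Created",
    "detail": {"bucket": {"name": "my-bucket"}, "object": {"key": "test.txt"}},
}

print(events.test_event_pattern(
    EventPattern=json.dumps(pattern),
    Event=json.dumps(sample_event),
))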
I use CDK to deploy a CodePipeline. It works fine until I try to add notifications for CodePipeline success/fail events. It gives a CREATE_FAILED error with the message Resource handler returned message: "Invalid request provided: AWS::CodeStarNotifications::NotificationRule" (RequestToken: bb566fd0-1ac9-5d61-03fe-f9c27b4196fa, HandlerErrorCode: InvalidRequest). What could be the reason? Thanks.
import * as codepipeline from "@aws-cdk/aws-codepipeline";
import * as codepipeline_actions from "@aws-cdk/aws-codepipeline-actions";
import * as codestar_noti from "@aws-cdk/aws-codestarnotifications";
import * as sns from "@aws-cdk/aws-sns";
const pipeline = new codepipeline.Pipeline(...);
const topicArn = props.sns_arn_for_developer;
const targetTopic = sns.Topic.fromTopicArn(
this,
"sns-notification-topic",
topicArn
);
new codestar_noti.NotificationRule(this, "Notification", {
detailType: codestar_noti.DetailType.BASIC,
events: [
"codepipeline-pipeline-pipeline-execution-started",
"codepipeline-pipeline-pipeline-execution-failed",
"codepipeline-pipeline-pipeline-execution-succeeded",
"codepipeline-pipeline-pipeline-execution-canceled",
],
source: pipeline,
targets: [targetTopic],
});
Here is a snippet of the generated CloudFormation template.
"Notification2267453E": {
"Type": "AWS::CodeStarNotifications::NotificationRule",
"Properties": {
"DetailType": "BASIC",
"EventTypeIds": [
"codepipeline-pipeline-pipeline-execution-started",
"codepipeline-pipeline-pipeline-execution-failed",
"codepipeline-pipeline-pipeline-execution-succeeded",
"codepipeline-pipeline-pipeline-execution-canceled"
],
"Name": "sagemakerbringyourownNotification36194CEC",
"Resource": {
"Fn::Join": [
"",
[
"arn:",
{
"Ref": "AWS::Partition"
},
":codepipeline:ap-southeast-1:305326993135:",
{
"Ref": "sagemakerbringyourownpipeline0A8C43B1"
}
]
]
},
"Targets": [
{
"TargetAddress": "arn:aws:sns:ap-southeast-1:305326993135:whitespace_alerts",
"TargetType": "SNS"
}
]
},
"Metadata": {
"aws:cdk:path": "sagemaker-bring-your-own/Notification/Resource"
}
},
FWIW, I got the exact same error "Invalid request provided: AWS::CodeStarNotifications::NotificationRule" from a CDK app where the Topic was created (not imported). It turned out to be a transient issue, because it succeeded the second time without any changes. I suspect it was due to a very large ECR image which was built for the first time as part of the deploy and took quite some time. My speculation is that the Topic timed out and got into some kind of weird state waiting for the NotificationRule to be created.
This is because imported resources cannot be modified. As you pointed out in the comments, setting up the notification involves modifying the Topic resource, specifically its access policy.
Reference: https://docs.aws.amazon.com/cdk/v2/guide/resources.html#resources_importing
I was able to solve this by doing the following, in this order:
First, removing the statement below from the resource policy of the SNS topic.
Then deploying the stack (which, interestingly, doesn't add anything to the resource policy).
Once the stack deployment finishes, updating the resource policy manually to add the statement below.
{
"Sid": "AWSCodeStarNotifications_publish",
"Effect": "Allow",
"Principal": {
"Service": "codestar-notifications.amazonaws.com"
},
"Action": "SNS:Publish",
"Resource": "arn:aws:sns:ap-south-1:xxxxxxxxx:test"
}
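If you prefer to script that last manual step instead of doing it in the console, something along these lines should work. A rough boto3 sketch; the topic ARN is the placeholder from the statement above:
# Sketch: append the codestar-notifications publish statement to an SNS topic's
# access policy. The topic ARN is a placeholder taken from the statement above.
import json
import boto3

sns = boto3.client("sns")
topic_arn = "arn:aws:sns:ap-south-1:xxxxxxxxx:test"

attrs = sns.get_topic_attributes(TopicArn=topic_arn)
policy = json.loads(attrs["Attributes"]["Policy"])

policy["Statement"].append({
    "Sid": "AWSCodeStarNotifications_publish",
    "Effect": "Allow",
    "Principal": {"Service": "codestar-notifications.amazonaws.com"},
    "Action": "SNS:Publish",
    "Resource": topic_arn,
})

sns.set_topic_attributes(
    TopicArn=topic_arn,
    AttributeName="Policy",
    AttributeValue=json.dumps(policy),
)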
How do I allow only jobs from a certain AWS Batch queue (and based on a specific job definition) to publish to a specific SNS topic?
I thought about attaching to the jobs an IAM policy with the statement:
{
"Effect": "Allow",
"Action": "sns:Publish",
"Resource": ["<arn of the specific SNS topic"]
"Condition": {"ArnEquals": {"aws:SourceArn": "arn:aws:???"}}
}
But what should be the source ARN? ARN of the job queue, ARN of the job definition? Or maybe this should be set up completely differently?
I had a similar experience when working with AWS Batch jobs executed in Fargate containers, which follow the same principles as ECS when it comes to assigning roles and permissions.
If you are going to publish messages to a specific topic from the code executed inside your container, then you should create a role with the necessary permissions and use its ARN in the JobRoleArn property of your job definition.
For example (there may be minor mistakes in the code below, but I am just trying to explain the concept here):
Role CloudFormation:
"roleresourceID": {
"Type": "AWS::IAM::Role",
"Properties": {
"AssumeRolePolicyDocument": {
"Statement": [
{
"Action": "sts:AssumeRole",
"Effect": "Allow",
"Principal": {
"AWS": "*"
}
}
],
"Version": "2012-10-17"
},
"RoleName": "your-job-role"
}
}
Policy attached to the role:
"policyresourceid": {
"Type": "AWS::IAM::Policy",
"Properties": {
"PolicyDocument": {
"Statement": [
{
"Action": "sns:Publish",
"Effect": "Allow",
"Resource": "<arn of the specific SNS topic>"
}
],
"Version": "2012-10-17"
},
"PolicyName": "your-job-role-policy",
"Roles": [
{
"Ref": "roleresourceID"
}
]
}
}
And finally, attach the role to the Job Definition:
....other job definition properties
"JobRoleArn": {
"Fn::GetAtt": [
"roleresourceID",
"Arn"
]
}
Of course, you may structure and format the roles and policies any way you like; the main idea of this explanation is that you need to attach the proper role using the JobRoleArn property of your job definition.
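To illustrate the other half: the code running inside the container can then publish without any explicit credentials, because boto3 picks up the job role automatically. A minimal sketch, with the topic ARN as a placeholder:
# Sketch: code running inside the Batch job container. boto3 resolves the
# credentials of the role referenced by JobRoleArn automatically.
import boto3

sns = boto3.client("sns")
sns.publish(
    TopicArn="<arn of the specific SNS topic>",
    Subject="Batch job finished",
    Message="Job completed successfully.",
)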
Background
I'm creating a Step Functions state machine that starts an AWS CodeBuild build once a defined AWS CodePipeline execution has a status of SUCCEEDED. I'm using the .waitForTaskToken feature within the Step Function to wait for the CodePipeline to succeed via a CloudWatch event. Once the pipeline succeeds, the event sends the token back to the step function, which then runs the CodeBuild.
Here's the step function definition:
{
"StartAt": "PollCP",
"States": {
"PollCP": {
"Next": "UpdateCP",
"Parameters": {
"Entries": [
{
"Detail": {
"Pipeline": [
"bar-pipeline"
],
"State": [
"SUCCEEDED"
],
"TaskToken.$": "$$.Task.Token"
},
"DetailType": "CodePipeline Pipeline Execution State Change",
"Source": "aws.codepipeline"
}
]
},
"Resource": "arn:aws:states:::events:putEvents.waitForTaskToken",
"Type": "Task"
},
"UpdateCP": {
"End": true,
"Parameters": {
"ProjectName": "foo-project"
},
"Resource": "arn:aws:states:::codebuild:startBuild.sync",
"Type": "Task"
}
}
}
The permissions for the step function:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Action": "codebuild:StartBuild",
"Resource": "*"
},
{
"Sid": "",
"Effect": "Allow",
"Action": "codepipeline:*",
"Resource": "*"
}
]
}
and arn:aws:iam::aws:policy/CloudWatchEventsFullAccess
Problem
The CloudWatch event within the step function returns the error:
Error
EventBridge.FailedEntry
Cause
{
"Entries": [
{
"ErrorCode": "NotAuthorizedForSourceException",
"ErrorMessage": "Not authorized for the source."
}
],
"FailedEntryCount": 1
}
Attempts:
Modify the associated CodePipeline and CodeBuild roles to have the Step Functions permissions to send task statuses. Specifically, the permission is:
{
"Effect": "Allow",
"Action": [
"states:SendTaskSuccess",
"states:SendTaskFailure",
"states:SendTaskHeartbeat"
],
"Resource": "*"
}
Got the same original error mentioned above.
Modify the associated state machine's role to have full access to all Step Functions actions and resources. Got the same original error mentioned above.
Test the event rule specified in the PollCP step function task with the default AWS EventBridge bus. The event was:
{
"version": "0",
"detail-type": "CodePipeline Pipeline Execution State Change",
"source": "aws.codepipeline",
"account": "123456789012",
"time": "2021-06-14T00:44:41Z",
"region": "us-west-2",
"resources": [],
"detail": {
"pipeline": "<pipeline-arn>",
"state": "SUCCEED"
}
}
The event produced the same error mentioned above. This probably means the error is strictly related to the event entry in the code snippet above.
Are you trying to trigger your State Machine using CloudWatch event when a CodePipeline pipeline completes/succeeds?
If so, you cannot define your trigger in your state machine.
The integration with EventBridge is not there so that the state machine can be triggered by events; rather, it is for publishing events to an event bus from your state machine or workflow. (That is most likely why you are seeing NotAuthorizedForSourceException: PutEvents cannot publish custom events under the reserved aws.* sources such as aws.codepipeline.)
Read more here: https://aws.amazon.com/blogs/compute/introducing-the-amazon-eventbridge-service-integration-for-aws-step-functions/
So I suggest you create a CloudWatch rule and target your state machine instead.
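For example, roughly (a boto3 sketch; the rule name, role, and state machine ARN are placeholders, and the target role must be allowed to call states:StartExecution on your state machine):
# Sketch: a CloudWatch Events/EventBridge rule that starts the state machine
# whenever bar-pipeline reaches SUCCEEDED. Names and ARNs are placeholders.
import json
import boto3

events = boto3.client("events")

events.put_rule(
    Name="bar-pipeline-succeeded",
    EventPattern=json.dumps({
        "source": ["aws.codepipeline"],
        "detail-type": ["CodePipeline Pipeline Execution State Change"],
        "detail": {"pipeline": ["bar-pipeline"], "state": ["SUCCEEDED"]},
    }),
    State="ENABLED",
)

events.put_targets(
    Rule="bar-pipeline-succeeded",
    Targets=[{
        "Id": "StartStateMachine",
        "Arn": "arn:aws:states:us-west-2:123456789012:stateMachine:your-state-machine",
        "RoleArn": "arn:aws:iam::123456789012:role/events-invoke-step-functions",
    }],
)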
If you want to use the waitForTaskToken pattern, you will have to explicitly return that token with a send_task_success API call (sample below for Python/boto3).
import json
import boto3

sfn = boto3.client("stepfunctions")
sfn.send_task_success(
    taskToken=task_token,  # the token handed out by the waitForTaskToken step
    output=json.dumps(some_optional_payload)
)
This means that, when the step executes, it publishes the event to the EventBridge bus. You then have to detect this event outside your state machine, most likely with a CloudWatch event rule, and trigger a Lambda function from the rule. The Lambda function performs the send_task_success API call, which restarts/continues your workflow/state machine. A rough sketch of such a handler is shown below.
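For illustration, here is a minimal, hypothetical handler for that Lambda. The field names are assumptions based on the Detail published by the PollCP step (which includes TaskToken); in practice you would usually stash the token somewhere and only call send_task_success once the pipeline has actually succeeded:
# Sketch: Lambda handler triggered by the rule that matched the published event.
import json
import boto3

sfn = boto3.client("stepfunctions")

def handler(event, context):
    # Hypothetical: the event detail carries the token injected via
    # "TaskToken.$": "$$.Task.Token" in the PollCP step's Entries.
    task_token = event["detail"]["TaskToken"]

    # Resume the state machine that is paused on waitForTaskToken.
    sfn.send_task_success(
        taskToken=task_token,
        output=json.dumps({"pipeline": event["detail"].get("Pipeline")}),
    )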
In my opinion, that is just unnecessary. Like I said, you can simply watch for pipeline execution state changes using a CW event rule, trigger your state machine, and have your state machine start with the CodeBuild stage.
Side note: it's nice to see people using Step Functions for CI/CD pipelines. It has more flexibility and the ability to do complex branching strategies. Will probably do a blog post around this soon.
Your CodeBuild service role will need permission to use the states:SendTask* (Success, Failure, and Heartbeat) actions so that it can notify the state machine. This page in the docs has more details.
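If you manage roles from code, a hedged sketch of attaching that permission as an inline policy to the CodeBuild service role (the role name and policy name here are hypothetical):
# Sketch: grant the CodeBuild service role permission to report task status
# back to Step Functions. RoleName and PolicyName are hypothetical.
import json
import boto3

iam = boto3.client("iam")

iam.put_role_policy(
    RoleName="codebuild-service-role",
    PolicyName="allow-send-task-status",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": [
                "states:SendTaskSuccess",
                "states:SendTaskFailure",
                "states:SendTaskHeartbeat",
            ],
            "Resource": "*",
        }],
    }),
)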
I'm using AWS CloudFormation to set up an EventBridge bus + rules + targets (say, SNS). For SNS as a target, per the doc at https://docs.aws.amazon.com/eventbridge/latest/userguide/resource-based-policies-eventbridge.html#sns-permissions, I need to apply resource policies outside of CloudFormation, and I don't think CloudFormation supports this yet?
For a CloudWatch Logs log group as a target, I'm using aws logs put-resource-policy to set this up in a script. Is there a better way to automate this?
The link you've provided refers to setting up permissions for an SNS topic. Setting such permissions is supported by CloudFormation by means of AWS::SNS::TopicPolicy.
However, you also state that you want to set resource-based policies on CloudWatch Logs (aws logs put-resource-policy). If this is the case, then you are correct and it is not supported in CloudFormation.
You would have to use a custom resource backed by a Lambda function to add such functionality to your templates; a rough sketch follows.
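A minimal sketch of such a custom resource handler, assuming it is wired up as a Lambda-backed custom resource in the template; the policy name and log-group ARN pattern are placeholders:
# Sketch: Lambda-backed CloudFormation custom resource that sets the
# CloudWatch Logs resource policy so EventBridge can deliver to log groups.
import json
import boto3
import cfnresponse  # available when the function code is provided inline (ZipFile) in CloudFormation

logs = boto3.client("logs")

def handler(event, context):
    try:
        if event["RequestType"] in ("Create", "Update"):
            policy = {
                "Version": "2012-10-17",
                "Statement": [{
                    "Effect": "Allow",
                    "Principal": {"Service": ["events.amazonaws.com", "delivery.logs.amazonaws.com"]},
                    "Action": ["logs:CreateLogStream", "logs:PutLogEvents"],
                    "Resource": "arn:aws:logs:*:*:log-group:/aws/events/*:*",
                }],
            }
            logs.put_resource_policy(
                policyName="eventbridge-to-cw-logs",
                policyDocument=json.dumps(policy),
            )
        elif event["RequestType"] == "Delete":
            logs.delete_resource_policy(policyName="eventbridge-to-cw-logs")
        cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
    except Exception as exc:
        cfnresponse.send(event, context, cfnresponse.FAILED, {"Error": str(exc)})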
Here is a snippet from my SAM template:
{
"MyDevQueue": {
"Properties": {
"QueueName": "my-dev-queue",
"ReceiveMessageWaitTimeSeconds": 20,
"Tags": [
{
"Key": "env",
"Value": "dev"
}
],
"VisibilityTimeout": 300
},
"Type": "AWS::SQS::Queue"
},
"MyDevQueuePolicy": {
"Properties": {
"PolicyDocument": {
"Statement": [
{
"Action": [
"SQS:SendMessage"
],
"Condition": {
"ArnEquals": {
"aws:SourceArn": "arn:aws:events:<region>:<AccountID>:rule/my-dev-queue/my-dev-queue"
}
},
"Effect": "Allow",
"Principal": {
"Service": [
"events.amazonaws.com"
]
},
"Resource": [
{
"Fn::GetAtt": [
"MyDevQueue",
"Arn"
]
}
]
}
]
},
"Queues": [
"MyDevQueue"
]
},
"Type": "AWS::SQS::QueuePolicy"
}
}