How can I get codepipeline execution id in cdk at runtime? - amazon-web-services

I am using AWS CDK to deploy a CodePipeline. It also has a notification rule which notifies when the pipeline fails. I need to put the CodePipeline job URL in the notification message so that people can open the pipeline easily.
In CloudFormation, I would put the configuration below to compute the URL:
Targets:
  - Arn: !Ref SNSTopicNotification
    Id: piplineID
    InputTransformer:
      InputPathsMap:
        pipeline: "$.detail.pipeline"
        executionId: "$.detail.execution-id"
        region: "$.region"
      InputTemplate: !Sub |
        "Pipeline <pipeline> failed"
        "https://<region>.console.aws.amazon.com/codesuite/codepipeline/pipelines/<pipeline>/executions/<executionId>/timeline?region=<region>"
The key is using $.detail.xxx to reference the value at runtime. How can I achieve this in CDK?
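In CDK, the counterpart of InputPathsMap / InputTemplate is events.RuleTargetInput.from_text() combined with events.EventField.from_path(), which is resolved from the event at runtime. One way is to create the EventBridge rule yourself (rather than a CodeStar Notifications rule) so you control the input transformer. A minimal sketch in Python (CDK v2), assuming pipeline and topic are existing aws_codepipeline.Pipeline and aws_sns.Topic constructs in your stack:

# Sketch only: "pipeline", "topic" and the rule id "NotifyOnFailure" are
# placeholders for constructs defined elsewhere in your stack.
from aws_cdk import aws_events as events
from aws_cdk import aws_events_targets as targets

# EventField.from_path() is the CDK equivalent of the $.detail.xxx entries in
# InputPathsMap; the values are filled in from the event at runtime.
pipeline_name = events.EventField.from_path("$.detail.pipeline")
execution_id = events.EventField.from_path("$.detail.execution-id")
region = events.EventField.from_path("$.region")

pipeline.on_state_change(
    "NotifyOnFailure",
    event_pattern=events.EventPattern(detail={"state": ["FAILED"]}),
    target=targets.SnsTopic(
        topic,
        message=events.RuleTargetInput.from_text(
            f"Pipeline {pipeline_name} failed\n"
            f"https://{region}.console.aws.amazon.com/codesuite/codepipeline/pipelines/"
            f"{pipeline_name}/executions/{execution_id}/timeline?region={region}"
        ),
    ),
)

Because the message contains EventField references, CDK renders the target as an InputTransformer with the same InputPathsMap/InputTemplate structure shown above, so the placeholders are only substituted when the rule fires.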

Related

AWS EventBridge rule doesn't trigger: Error. NotAuthorizedForSourceException. Not authorized for the source

I'm creating a rule that should fire every time there is a change in status in a SageMaker batch transform job.
I'm using the Serverless Framework, but to simplify it even further, here's what I did:
The rule, exported from AWS console:
AWSTemplateFormatVersion: '2010-09-09'
Description: >-
  CloudFormation template for EventBridge rule
  'sagemaker-transform-status-to-CWL'
Resources:
  EventRule0:
    Type: AWS::Events::Rule
    Properties:
      EventBusName: default
      EventPattern:
        source:
          - aws.sagemaker
        detail-type:
          - SageMaker Training Job State Change
      Name: sagemaker-transform-status-to-CWL
      State: ENABLED
      Targets:
        - Id: XXX
          Arn: >-
            arn:aws:logs:us-east-1:XXX:log-group:/aws/events/sagemaker-notifications
Eventually I want this to trigger a step function or a Lambda function, but for now I am configuring the target to be CloudWatch Logs with the log group 'sagemaker-notifications'.
I expect that every time I run a batch transform job in SageMaker, the rule will fire and the log will show up in CloudWatch.
But I'm not getting any logs, so when I tried to PutEvents manually to test it, I was getting this:
Error. NotAuthorizedForSourceException. Not authorized for the source.
It's probably an issue with roles, but I'm not sure which kind of role to configure, where, and who should assume it.
I tried going through AWS tutorials, adding permissions to the default event bus, and using the Serverless Framework.
See some sample event patterns here - https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-events-rule.html#aws-resource-events-rule--examples
Your source should be a custom source, and cannot use the aws. prefix (reference - https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-events.html).
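As an illustration (the source name my.app and the detail payload below are placeholders, not from the question), a manual PutEvents test with a custom source would look roughly like this:

import json
import boto3

events_client = boto3.client("events")

# Custom events must use a custom source; names in the aws.* namespace are
# reserved for AWS services, which is why the manual PutEvents call was
# rejected with NotAuthorizedForSourceException.
events_client.put_events(
    Entries=[
        {
            "Source": "my.app",
            "DetailType": "Test Transform Job State Change",
            "Detail": json.dumps({"TransformJobStatus": "Completed"}),
            "EventBusName": "default",
        }
    ]
)

A rule meant to catch such a test event would need its source pattern changed to that custom source; real SageMaker status changes emitted by the service itself still arrive with source aws.sagemaker and do not need PutEvents at all.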

Serverless not deploying to AWS

I'm new to Serverless and Lambdas.
I'm trying to deploy my serverless functions to AWS, but it's showing this error:
This is my serverless.yml file:
service: aws-node-http-api-project
frameworkVersion: "3"

provider:
  name: aws
  runtime: nodejs12.x
  region: ap-southeast-1

functions:
  hello:
    handler: handler.hello
    events:
      - httpApi:
          path: /
          method: get
I have set up the AWS CLI and an IAM user.
Not sure if this is related, but in my CloudFormation, it is showing one stack:
The stack was created, but the changeset failed to execute and hence the stack is stuck at the REVIEW_IN_PROGRESS state.
Find out why the change set wasn't executed and fix it, then delete the stack and redeploy.
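If it helps, here is a rough sketch of that cleanup with boto3; the stack name below follows the default <service>-<stage> convention and may differ in your account:

import boto3

cfn = boto3.client("cloudformation", region_name="ap-southeast-1")
stack_name = "aws-node-http-api-project-dev"  # assumed default Serverless stack name

# Inspect the stack events for clues about why the change set was never executed.
for event in cfn.describe_stack_events(StackName=stack_name)["StackEvents"]:
    print(event["Timestamp"], event["ResourceStatus"], event.get("ResourceStatusReason", ""))

# A stack stuck in REVIEW_IN_PROGRESS has created no resources yet, so deleting
# it is safe; run "serverless deploy" again afterwards.
cfn.delete_stack(StackName=stack_name)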

Ansible: Add Cloudwatch Log event trigger to Lambda function

I am trying to add a CloudWatch Logs trigger to a Lambda function written in Python 3.6 via Ansible. I am able to deploy the Lambda function via Ansible but am facing issues when trying to deploy a trigger with a log group configured.
Below is my code for the Ansible trigger and the Lambda policy.
Lambda trigger:
- name: Cloud Watch Log event mapping
  lambda_event:
    state: present
    event_source: stream
    lambda_function_arn: arn:aws:lambda:us-east-2:<account_id>:function:CWloggerLambda
    alias: CWTEST
    region: us-east-2
    source_params:
      source_arn: arn:aws:logs:us-east-2:<account_id>:log-group:<log_group_name>
      enabled: True
Lambda Policy:
- name: Allowing CloudWatch Event(s) to trigger Lambda function(s)
  lambda_policy:
    lambda_function_arn: arn:aws:lambda:us-east-2:<account_id>:function:CWloggerLambda
    statement_id: "CWloggerLambda_lambda-cloudwatch-trigger"
    action: "lambda:InvokeFunction"
    principal: "events.amazonaws.com"
    source_arn: arn:aws:logs:us-east-2:<account_id>:log-group:<log_group_name>
    region: us-east-2
    state: present
The policy is added; however, the trigger gives an error on the ARN, as only Kinesis, DynamoDB and SQS sources are allowed. Is there any possible way to get a CloudWatch Logs trigger via Ansible?
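Since the lambda_event module only supports stream-style sources, one workaround is to create the trigger directly, because a CloudWatch Logs trigger is really just a subscription filter on the log group. A rough sketch with boto3 (account ID and log group name are placeholders; this could be wrapped in an Ansible command/script task):

import boto3

region = "us-east-2"
account_id = "<account_id>"
log_group = "<log_group_name>"
function_arn = f"arn:aws:lambda:{region}:{account_id}:function:CWloggerLambda"

lambda_client = boto3.client("lambda", region_name=region)
logs_client = boto3.client("logs", region_name=region)

# For a CloudWatch Logs subscription the invoking principal is the Logs
# service, not events.amazonaws.com.
lambda_client.add_permission(
    FunctionName=function_arn,
    StatementId="CWloggerLambda_lambda-cloudwatch-trigger",
    Action="lambda:InvokeFunction",
    Principal="logs.amazonaws.com",
    SourceArn=f"arn:aws:logs:{region}:{account_id}:log-group:{log_group}:*",
)

# The trigger itself is a subscription filter that forwards matching log
# events to the function.
logs_client.put_subscription_filter(
    logGroupName=log_group,
    filterName="CWloggerLambda-trigger",
    filterPattern="",  # empty pattern forwards all log events
    destinationArn=function_arn,
)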

Cross-Account AWS CodePipeline cannot access CloudFormation deploy artifacts

I have a cross-account pipeline running in an account CI deploying resources via CloudFormation in another account DEV.
After deploying I save the artifact outputs as a JSON file and want to access it in another pipeline action via CodeBuild.
CodeBuild fails in the phase DOWNLOAD_SOURCE with the following message:
CLIENT_ERROR: AccessDenied: Access Denied status code: 403, request
id: 123456789, host id: xxxxx/yyyy/zzzz/xxxx= for primary source and
source version arn:aws:s3:::my-bucket/my-pipeline/DeployArti/XcUNqOP
The problem is likely that CloudFormation, when executed in a different account, encrypts the artifacts with a different key than the pipeline itself.
Is it possible to give CloudFormation an explicit KMS key to encrypt the artifacts with, or is there any other way to access those artifacts back in the pipeline?
Everything works when executed from within a single account.
Here is my code snippet (deployed in the CI account):
MyCodeBuild:
  Type: AWS::CodeBuild::Project
  Properties:
    Artifacts:
      Type: CODEPIPELINE
    Environment: ...
    Name: !Sub "my-codebuild"
    ServiceRole: !Ref CodeBuildRole
    EncryptionKey: !GetAtt KMSKey.Arn
    Source:
      Type: CODEPIPELINE
      BuildSpec: ...

CrossAccountCodePipeline:
  Type: AWS::CodePipeline::Pipeline
  Properties:
    Name: "my-pipeline"
    RoleArn: !GetAtt CodePipelineRole.Arn
    Stages:
      - Name: Source
        ...
      - Name: StagingDev
        Actions:
          - Name: create-stack-in-DEV-account
            InputArtifacts:
              - Name: SourceArtifact
            OutputArtifacts:
              - Name: DeployArtifact
            ActionTypeId:
              Category: Deploy
              Owner: AWS
              Version: "1"
              Provider: CloudFormation
            Configuration:
              StackName: "my-dev-stack"
              ChangeSetName: !Sub "my-changeset"
              ActionMode: CREATE_UPDATE
              Capabilities: CAPABILITY_NAMED_IAM
              # this is the artifact I want to access from the next action
              # within this CI account pipeline
              OutputFileName: "my-DEV-output.json"
              TemplatePath: !Sub "SourceArtifact::stack/my-stack.yml"
              RoleArn: !Sub "arn:aws:iam::${DevAccountId}:role/dev-cloudformation-role"
            RoleArn: !Sub "arn:aws:iam::${DevAccountId}:role/dev-cross-account-role"
            RunOrder: 1
          - Name: process-DEV-outputs
            InputArtifacts:
              - Name: DeployArtifact
            ActionTypeId:
              Category: Build
              Owner: AWS
              Version: "1"
              Provider: CodeBuild
            Configuration:
              ProjectName: !Ref MyCodeBuild
            RunOrder: 2
    ArtifactStore:
      Type: S3
      Location: !Ref S3ArtifactBucket
      EncryptionKey:
        Id: !GetAtt KMSKey.Arn
        Type: KMS
CloudFormation generates the output artifact, zips it, and then uploads the file to S3. It does not add an ACL that grants access to the bucket owner, so you get a 403 when you try to use the CloudFormation output artifact further down the pipeline.
The workaround is to have one more action in your pipeline immediately after the CloudFormation action, for example a Lambda function that can assume the target account role and update the object ACL, e.g. to bucket-owner-full-control.
mockora's answer is correct. Here is an example Lambda function in Python that fixes the issue, which you can configure as an Invoke action immediately after your cross-account CloudFormation deployment.
In this example, you configure the Lambda Invoke action's UserParameters setting as the ARN of the role you want the Lambda function to assume in the remote account to fix the S3 object ACL. Obviously your Lambda function will need sts:AssumeRole permissions for that role, and the remote account role will need s3:PutObjectAcl permissions on the pipeline bucket artifact(s).
import os
import logging, datetime, json
import boto3
from aws_xray_sdk.core import xray_recorder
from aws_xray_sdk.core import patch_all

# X-Ray
patch_all()

# Configure logging
logging.basicConfig()
log = logging.getLogger()
log.setLevel(os.environ.get('LOG_LEVEL', 'INFO'))

def format_json(data):
    return json.dumps(data, default=lambda d: d.isoformat() if isinstance(d, datetime.datetime) else str(d))

# Boto3 Client
client = boto3.client
codepipeline = client('codepipeline')
sts = client('sts')

# S3 Object ACLs Handler
def s3_acl_handler(event, context):
    log.info(f'Received event: {format_json(event)}')
    # Get Job
    jobId = event['CodePipeline.job']['id']
    jobData = event['CodePipeline.job']['data']
    # Ensure we return a success or failure result
    try:
        # Assume IAM role from user parameters
        credentials = sts.assume_role(
            RoleArn=jobData['actionConfiguration']['configuration']['UserParameters'],
            RoleSessionName='codepipeline',
            DurationSeconds=900
        )['Credentials']
        # Create S3 client from assumed role credentials
        s3 = client('s3',
            aws_access_key_id=credentials['AccessKeyId'],
            aws_secret_access_key=credentials['SecretAccessKey'],
            aws_session_token=credentials['SessionToken']
        )
        # Set S3 object ACL for each input artifact
        for inputArtifact in jobData['inputArtifacts']:
            s3.put_object_acl(
                ACL='bucket-owner-full-control',
                Bucket=inputArtifact['location']['s3Location']['bucketName'],
                Key=inputArtifact['location']['s3Location']['objectKey']
            )
        codepipeline.put_job_success_result(jobId=jobId)
    except Exception as e:
        logging.exception('An exception occurred')
        codepipeline.put_job_failure_result(
            jobId=jobId,
            failureDetails={'type': 'JobFailed', 'message': getattr(e, 'message', repr(e))}
        )
I've been using CodePipeline for cross-account deployments for a couple of years now. I even have a GitHub project around simplifying the process using Organizations. There are a couple of key elements to it.
Make sure your S3 bucket is using a CMK, not the default encryption key.
Make sure you grant access to that key to the accounts to which you are deploying. When you have a CloudFormation template, for example, that runs on a different account than where the template lives, the role that is being used on that account needs to have permissions to access the key (and the S3 bucket).
It's certainly more complex than that, but at no point do I run a Lambda to change the object owner of the artifacts. "Create a pipeline in CodePipeline that uses resources from another AWS account" in the AWS documentation has detailed information on what you need to do to make it work.
CloudFormation should use the KMS encryption key provided in the artifact store definition of your pipeline: https://docs.aws.amazon.com/codepipeline/latest/APIReference/API_ArtifactStore.html#CodePipeline-Type-ArtifactStore-encryptionKey
Therefore, as long as you give it a custom key there and allow the other account to use that key too, it should work.
This is mostly covered in this doc: https://docs.aws.amazon.com/codepipeline/latest/userguide/pipelines-create-cross-account.html
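For the "allow the other account to use that key" part, the statement added to the CMK's key policy typically looks something like the sketch below (expressed here as a Python dict, e.g. for use with kms.put_key_policy or a template generator); <DevAccountId> is a placeholder, and the action list follows the cross-account pipeline guide linked above:

# Sketch of the extra key policy statement granting the DEV account use of the
# pipeline's CMK so its roles can decrypt the artifacts.
dev_account_key_access = {
    "Sid": "AllowUseOfTheKeyFromTheDevAccount",
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::<DevAccountId>:root"},
    "Action": [
        "kms:Decrypt",
        "kms:DescribeKey",
        "kms:Encrypt",
        "kms:GenerateDataKey*",
        "kms:ReEncrypt*",
    ],
    "Resource": "*",
}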

How To Rollback AWS CodeStar Lambda Functions Deployed Via CloudFormation?

I'm creating a Node.js microservice for AWS Lambda. I scaffolded my project using AWS CodeStar, and that set me up with a CI/CD pipeline that automatically deploys the Lambda function. Nice.
The issue is that every time it deploys the lambda function it must delete and recreate the function, thus deleting any versions or aliases I made.
This means I really can't roll back to other releases. I basically have to use git to actually revert the project, push to git, wait for the super-slow AWS CodePipeline to flow through successfully, and then have it remake the function. To me that sounds like a pretty bad DR strategy, and I would think the right way to roll back should be simple and fast.
Unfortunately, it looks like the CloudFormation section of AWS doesn't offer any help here. When you drill into your stack on the first CloudFormation page, it only shows you information about the latest formation that occurred. Dear engineers of AWS CloudFormation: if there were a page for each stack that showed a history of CloudFormation for this stack and an option to roll back to it, that would be really awesome. For now, though, there's not. There's just information about the latest formation that's been clouded. One initially promising option was "Rollback Triggers", but this is actually something totally different that lets you send an SNS notification if your build doesn't pass.
When I try to change the CodePipeline deploy stage from CREATE_CHANGE_SET to CREATE_UPDATE, I then get this error when it tries to execute:
Action execution failed UpdateStack cannot be used with templates
containing Transforms. (Service: AmazonCloudFormation; Status Code:
400; Error Code: ValidationError; Request ID:
bea5f687-470b-11e8-a616-c791ebf3e8e1)
My template.yml looks like this by the way:
AWSTemplateFormatVersion: 2010-09-09
Transform:
  - AWS::Serverless-2016-10-31
  - AWS::CodeStar
Parameters:
  ProjectId:
    Type: String
    Description: AWS CodeStar projectID used to associate new resources to team members
Resources:
  HelloWorld:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs8.10
      Environment:
        Variables:
          NODE_ENV: staging
      Role:
        Fn::ImportValue:
          !Join ['-', [!Ref 'ProjectId', !Ref 'AWS::Region', 'LambdaTrustRole']]
      Events:
        GetEvent:
          Type: Api
          Properties:
            Path: /
            Method: get
        PostEvent:
          Type: Api
          Properties:
            Path: /
            Method: post
The only options in the CodePipeline "Deploy" action are these:
It would be really great if someone could help me see how, in AWS, you can make Lambda functions with CodePipeline in a way that makes them easy and fast to roll back. Thanks!