Invoke Lambda from CodePipeline with multiple UserParameters - amazon-web-services

This tutorial shows how to invoke a Lambda function from CodePipeline, passing a single parameter:
http://docs.aws.amazon.com/codepipeline/latest/userguide/how-to-lambda-integration.html
I've built a slackhook lambda that needs to get 2 parameters:
webhook_url
message
Passing in JSON via the CodePipeline editor results in the JSON block being sent wrapped in single quotes, so it can't be parsed directly.
UserParameter passed in:
{
  "webhook": "https://hooks.slack.com/services/T0311JJTE/3W...W7F2lvho",
  "message": "Staging build awaiting approval for production deploy"
}
UserParameters as received in the event payload:
UserParameters: '{
"webhook":"https://hooks.slack.com/services/T0311JJTE/3W...W7F2lvho",
"message":"Staging build awaiting approval for production deploy"
}'
When trying to apply multiple UserParameters directly in the CloudFormation template like this:
Name: SlackNotification
ActionTypeId:
  Category: Invoke
  Owner: AWS
  Version: '1'
  Provider: Lambda
OutputArtifacts: []
Configuration:
  FunctionName: aws-notify2
  UserParameters:
    - webhook: !Ref SlackHook
    - message: !Join [" ", [!Ref app, !Ref env, "build has started"]]
RunOrder: 1
the result is an error: Configuration must only contain simple objects or strings.
Any guidance on how to pass multiple UserParameters from a CloudFormation template into a Lambda function would be much appreciated.
Here is the lambda code for reference:
https://github.com/byu-oit-appdev/aws-codepipeline-lambda-slack-webhook

You should be able to pass multiple UserParameters as a single JSON-object string, then parse the JSON in your Lambda function upon receipt.
This is exactly how the Python example in the documentation handles this case:
try:
    # Get the user parameters which contain the stack, artifact and file settings
    user_parameters = job_data['actionConfiguration']['configuration']['UserParameters']
    decoded_parameters = json.loads(user_parameters)
Similarly, JSON.parse should work fine in Node.js to parse a JSON-object string (as shown in your event payload example) into a usable JavaScript object:
> JSON.parse('{ "webhook":"https://hooks.slack.com/services/T0311JJTE/3W...W7F2lvho", "message":"Staging build awaiting approval for production deploy" }')
{ webhook: 'https://hooks.slack.com/services/T0311JJTE/3W...W7F2lvho',
message: 'Staging build awaiting approval for production deploy' }
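On the CloudFormation side, one workaround is to keep UserParameters a single string and build the JSON with Fn::Sub or Fn::Join, so the Configuration value stays a simple string; the function then decodes that string. The linked webhook function is Node.js, but here is a minimal Python sketch of the same pattern (the handler name, Slack payload shape, and error handling are illustrative assumptions, not the repository's actual code):

import json
import urllib.request

import boto3

codepipeline = boto3.client('codepipeline')

def handler(event, context):
    job = event['CodePipeline.job']
    try:
        # UserParameters arrives as a single JSON string; decode it here.
        user_parameters = job['data']['actionConfiguration']['configuration']['UserParameters']
        params = json.loads(user_parameters)

        # Post the message to the Slack webhook taken from the decoded parameters.
        body = json.dumps({'text': params['message']}).encode('utf-8')
        request = urllib.request.Request(
            params['webhook'],
            data=body,
            headers={'Content-Type': 'application/json'},
        )
        urllib.request.urlopen(request)

        # Tell CodePipeline the action succeeded.
        codepipeline.put_job_success_result(jobId=job['id'])
    except Exception as exc:
        codepipeline.put_job_failure_result(
            jobId=job['id'],
            failureDetails={'type': 'JobFailed', 'message': str(exc)},
        )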

Related

Why am I getting an Invalid template body error when building this Terraform template for AWS Service Catalog?

Here is the Terraform template for the AWS Service Catalog product that I am building.
resource "aws_servicecatalog_product" "data-ml-pipeline-service-catalog-product" {
name = "data-ml-pipeline-service-catalog-product"
owner = "data-ml"
type = "CLOUD_FORMATION_TEMPLATE"
provisioning_artifact_parameters {
template_url = "https://s3.amazonaws.com/cf-templates-ozkq9d3hgiq2-us-east-1/temp1.json"
type = "CLOUD_FORMATION_TEMPLATE"
}
Based on this question, Terraform /AWS aws_servicecatalog_portfolio, this should work.
Exact error: Error: error creating Service Catalog Product: InvalidParametersException: Invalid templateBody. Please make sure that your template is valid
Edit: Here is the new template that I am using.
---
ModelBuildCodeCommitRepository:
  Properties:
    Code:
      BranchName: main
      S3:
        Bucket: sagemaker-servicecatalog-seedcode-us-west-2
        Key: toolchain/image-build-model-building-workflow-v1.0.zip
    RepositoryDescription:
      Fn::Sub: "SageMaker Model building workflow infrastructure as code for the Project ${SageMakerProjectName}"
    RepositoryName:
      Fn::Sub: "sagemaker-${SageMakerProjectName}-${SageMakerProjectId}-modelbuild"
  Type: "AWS::CodeCommit::Repository"
Parameters:
  SageMakerProjectId:
    Description: "Service-generated id of the project"
    NoEcho: true
    Type: String
  SageMakerProjectName:
    AllowedPattern: "^[a-zA-Z](-*[a-zA-Z0-9])*"
    Description: "Name of the project"
    MaxLength: 32
    MinLength: 1
    NoEcho: true
    Type: String
I'd like to provide a general answer to this error message.
AFAIK, InvalidParametersException: Invalid templateBody. Please make sure that your template is valid can imply that AWS cannot access the template you're trying to create a Service Catalog product version from (the one which is usually provided by key LoadTemplateFromURL).
There are 2 possible reasons for this:
The URL of the template to deploy is invalid. Make sure that the URL provided actually points to a template file. When using CloudFormation with variables inside the URL, make sure to use !Sub etc.
The IAM user/role executing the deployment may not have the required permissions, as seen in a different SO question. Make sure that the permission cloudformation:ValidateTemplate is in place.
Basically, this error message is misleading: it suggests that the template is invalid, when in fact the template cannot even be accessed in the first place.
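A quick way to check both causes at once is to call ValidateTemplate against the same URL with the credentials Terraform runs with; a minimal boto3 sketch, reusing the template URL from the question:

import boto3

# Use the same credentials/role that Terraform runs with, so the call exercises
# both the template URL and the cloudformation:ValidateTemplate permission.
cloudformation = boto3.client("cloudformation", region_name="us-east-1")

response = cloudformation.validate_template(
    TemplateURL="https://s3.amazonaws.com/cf-templates-ozkq9d3hgiq2-us-east-1/temp1.json"
)
print(response["Parameters"])  # succeeds only if the URL and permissions are both valid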

Can Glue Workflow or Trigger get parameters from EventBridge

My system design
I have created 4 Glue Jobs: testgluejob1, testgluejob2, testgluejob3 and common-glue-job.
EventBridge rule detects SUCCEEDED state of glue jobs such as testgluejob1, testgluejob2, testgluejob3.
After getting a Glue job's SUCCEEDED notification, a Glue Trigger runs to start common-glue-job.
Problem
I want to use the triggering job's name string in the common-glue-job script as a parameter.
Is it possible to pass parameters to Glue Workflow or Trigger from EventBridge?
The things I tried
Trigger can pass parameters to common-glue-job
  https://docs.aws.amazon.com/ja_jp/AWSCloudFormation/latest/UserGuide/aws-resource-glue-trigger.html
Type: AWS::Glue::Trigger
...
Actions:
  - JobName: prod-job2
    Arguments:
      '--job-bookmark-option': job-bookmark-enable
If I set Run Properties for the Glue Workflow, I can get them from common-glue-job by using boto3 and the get_workflow_run_properties() function. But I have no idea how to put Run Properties from EventBridge via CloudFormation.
https://docs.aws.amazon.com/glue/latest/dg/workflow-run-properties-code.html
I set the InputTransformer on the EventBridge rule's target, but I'm not sure how to use this value in common-glue-job.
DependsOn:
  - EventBridgeGlueExecutionRole
  - GlueWorkflowTest01
Type: AWS::Events::Rule
Properties:
  Name: EventRuleTest01
  EventPattern:
    source:
      - aws.glue
    detail-type:
      - Glue Job State Change
    detail:
      jobName:
        - !Ref GlueJobTest01
      state:
        - SUCCEEDED
  Targets:
    - Arn: !Sub arn:aws:glue:${AWS::Region}:${AWS::AccountId}:workflow/${GlueWorkflowTest01}
      Id: GlueJobTriggersWorkflow
      RoleArn: !GetAtt 'EventBridgeGlueExecutionRole.Arn'
      InputTransformer:
        InputTemplate: >-
          {
            "--ORIGINAL_JOB": <jobName>
          }
        InputPathsMap:
          jobName: "$.detail.jobName"
Any help would be greatly appreciated.
If I understand you correctly, you already have the information in the EventBridge event, but cannot access it from your Glue job. I used the following workaround to do this:
You need to get an event ID from Glue workflow properties
event_id = glue_client.get_workflow_run_properties(
    Name=self.args['WORKFLOW_NAME'],
    RunId=self.args['WORKFLOW_RUN_ID'])['RunProperties']['aws:eventIds'][1:-1]
Get all NotifyEvent events for the last several minutes. It's up to you to decide how much time can pass between the workflow start and your job start.
response = event_client.lookup_events(
    LookupAttributes=[{'AttributeKey': 'EventName',
                       'AttributeValue': 'NotifyEvent'}],
    StartTime=(datetime.datetime.now() - datetime.timedelta(minutes=5)),
    EndTime=datetime.datetime.now())['Events']
Check which event has an enclosed event with the event ID we get from Glue workflow.
for i in range(len(response)):
    event_payload = json.loads(response[i]['CloudTrailEvent'])['requestParameters']['eventPayload']
    if event_payload['eventId'] == event_id:
        event = json.loads(event_payload['eventBody'])
The event variable then holds the full content of the event that triggered the workflow.
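Putting the three steps together, here is a consolidated sketch of how the pieces might fit inside common-glue-job; the client names, the five-minute lookback window, and reading WORKFLOW_NAME/WORKFLOW_RUN_ID via getResolvedOptions are assumptions based on the snippets above:

import datetime
import json
import sys

import boto3
from awsglue.utils import getResolvedOptions  # available inside the Glue job runtime

args = getResolvedOptions(sys.argv, ['WORKFLOW_NAME', 'WORKFLOW_RUN_ID'])

glue_client = boto3.client('glue')
event_client = boto3.client('cloudtrail')  # lookup_events is a CloudTrail API

# 1. Event ID stored by EventBridge in the workflow run properties.
event_id = glue_client.get_workflow_run_properties(
    Name=args['WORKFLOW_NAME'],
    RunId=args['WORKFLOW_RUN_ID'])['RunProperties']['aws:eventIds'][1:-1]

# 2. Recent NotifyEvent entries from CloudTrail.
now = datetime.datetime.now()
events = event_client.lookup_events(
    LookupAttributes=[{'AttributeKey': 'EventName', 'AttributeValue': 'NotifyEvent'}],
    StartTime=now - datetime.timedelta(minutes=5),
    EndTime=now)['Events']

# 3. Find the entry whose payload matches the workflow's event ID.
for entry in events:
    payload = json.loads(entry['CloudTrailEvent'])['requestParameters']['eventPayload']
    if payload['eventId'] == event_id:
        original_event = json.loads(payload['eventBody'])
        # jobName is the field referenced by the rule's InputPathsMap ($.detail.jobName).
        print(original_event['detail']['jobName'])
        break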

Set up S3 Bucket level Events using AWS CloudFormation

I am trying to get AWS CloudFormation to create a template that will allow me to attach an event to an existing S3 Bucket that will trigger a Lambda Function whenever a new file is put into a specific directory within the bucket. I am using the following YAML as a base for the CloudFormation template but cannot get it working.
---
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  SETRULE:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: bucket-name
      NotificationConfiguration:
        LambdaConfigurations:
          - Event: s3:ObjectCreated:Put
            Filter:
              S3Key:
                Rules:
                  - Name: prefix
                    Value: directory/in/bucket
            Function: arn:aws:lambda:us-east-1:XXXXXXXXXX:function:lambda-function-trigger
            Input: '{ CONFIGS_INPUT }'
I have tried rewriting this template a number of different ways to no success.
Since you have mentioned that those buckets already exist, this is not going to work. You can use CloudFormation in this way, but only to create a new bucket, not to modify an existing bucket if that bucket was not created via that template in the first place.
If you don't want to recreate your infrastructure, it might be easier to just use some script that will subscribe lambda function to each of the buckets. As long as you have a list of buckets and the lambda function, you are ready to go.
Here is a script in Python3. Assuming that we have:
2 buckets called test-bucket-jkg2 and test-bucket-x1gf
lambda function with arn: arn:aws:lambda:us-east-1:605189564693:function:my_func
There are 2 steps to make this work. First, you need to add a function policy that will allow the S3 service to execute that function. Second, you loop through the buckets one by one, subscribing the lambda function to each of them.
import boto3

s3_client = boto3.client("s3")
lambda_client = boto3.client('lambda')

buckets = ["test-bucket-jkg2", "test-bucket-x1gf"]
lambda_function_arn = "arn:aws:lambda:us-east-1:605189564693:function:my_func"

# create a function policy that will permit s3 service to
# execute this lambda function
# note that you should specify SourceAccount and SourceArn to limit who (which account/bucket) can
# execute this function - you will need to loop through the buckets to achieve
# this, at least you should specify SourceAccount
try:
    response = lambda_client.add_permission(
        FunctionName=lambda_function_arn,
        StatementId="allow-s3-to-execute-this-function",
        Action='lambda:InvokeFunction',
        Principal='s3.amazonaws.com'
        # SourceAccount="your account",
        # SourceArn="bucket's arn"
    )
    print(response)
except Exception as e:
    print(e)

# loop through all buckets and subscribe lambda function
# to each one of them
for bucket in buckets:
    print("putting config to bucket: ", bucket)
    try:
        response = s3_client.put_bucket_notification_configuration(
            Bucket=bucket,
            NotificationConfiguration={
                'LambdaFunctionConfigurations': [
                    {
                        'LambdaFunctionArn': lambda_function_arn,
                        'Events': [
                            's3:ObjectCreated:*'
                        ]
                    }
                ]
            }
        )
        print(response)
    except Exception as e:
        print(e)
You could write a custom resource to do this; in fact, that's what I ended up doing at work for the same problem. At the simplest level, define a lambda that takes a bucket notification configuration and then just calls the put bucket notification API with the data that was passed to it.
If you want to be able to control different notifications across different cloudformation templates, then it's a bit more complex. Your custom resource lambda will need to read the existing notifications from S3 and then update these based on what data was passed to it from CF.
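As an illustration only (not a production implementation), a bare-bones custom resource handler along those lines might look like the following sketch; the cfnresponse helper is the one AWS bundles for Lambda functions defined inline in a template, and the Bucket/NotificationConfiguration property names are assumptions about how you choose to shape the custom resource's properties:

import boto3
import cfnresponse  # bundled for inline (ZipFile) CloudFormation lambdas

s3 = boto3.client('s3')

def handler(event, context):
    try:
        props = event['ResourceProperties']
        bucket = props['Bucket']
        if event['RequestType'] in ('Create', 'Update'):
            # Apply the notification configuration passed in by the template.
            s3.put_bucket_notification_configuration(
                Bucket=bucket,
                NotificationConfiguration=props['NotificationConfiguration'])
        elif event['RequestType'] == 'Delete':
            # Clear the configuration when the custom resource is removed.
            s3.put_bucket_notification_configuration(
                Bucket=bucket,
                NotificationConfiguration={})
        cfnresponse.send(event, context, cfnresponse.SUCCESS, {}, bucket)
    except Exception as exc:
        cfnresponse.send(event, context, cfnresponse.FAILED, {'Error': str(exc)}, 'failed')

For the multi-template case described above, the Create/Update branch would first call get_bucket_notification_configuration and merge the incoming configuration into the existing one instead of overwriting it.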

AWS CloudFormation & Service Catalog - Can I require tags with user values?

Our problem seems very basic, and I would expect it to be common.
We have tags that must always be applied (for billing). However, the tag values are only known at the time the stack is deployed... We don't know what the tag values will be when developing the stack, or when creating the product in the Service Catalog...
We don't want to wait until AFTER the resource is deployed to discover the tag is missing, so as cool as AWS config may be, we don't want to rely on its rules if we don't have to.
So things like Tag Options don't work, because it appears that they expect us to know the tag value months prior to a deployment (which isn't the case).
Is there any way to mandate tags be used for a cloudformation template when it is deployed? Better yet, can we have service catalog query for a tag value when deploying? Tags like "system" or "project", for instance, come and go over time and are not known up-front for many types of cloudformation templates we develop.
Isn't this a common scenario?
I am worried that I am missing something very, very simple and basic which mandates tags be used up-front, but I can't seem to figure out what. Thank you in advance. I really did Google a lot before asking, without finding a satisfying answer.
I don't know anything about Service Catalog, but you can create Conditions and then use them to conditionally create (or even fail) resource creation. Conditional Resource Creation, e.g.:
Parameters:
  ResourceTag:
    Type: String
    Default: ''
Conditions:
  isTagEmpty:
    !Equals [!Ref ResourceTag, '']
  isTagNotEmpty:
    !Not [!Equals [!Ref ResourceTag, '']]
Resources:
  DBInstance:
    Type: AWS::RDS::DBInstance
    Condition: isTagNotEmpty
    Properties:
      DBInstanceClass: <DB Instance Type>
Here the RDS DB instance will only be created if the tag is non-empty, but CloudFormation will still report success.
Alternatively, you can try & fail the resource creation.
Resources:
  DBInstance:
    Type: AWS::RDS::DBInstance
    Properties:
      DBInstanceClass: !If [isTagEmpty, !Ref "AWS::NoValue", <DB instance type>]
I haven't tried this, but it should fail because the required DB instance type will be missing if the tag is empty.
Edit: You can also create your stack using the CreateStack CloudFormation API. Write some code to read and validate the input (e.g. read from Service Catalog) and then call CreateStack. I am doing the same from Lambda (Node.js), reading some input from Parameter Store. Sample code:
// clients for Parameter Store and CloudFormation (AWS SDK v2)
const AWS = require('aws-sdk');
const ssm = new AWS.SSM();
const cfn = new AWS.CloudFormation();

module.exports.create = async (event, context, callback) => {
  let request = JSON.parse(event.body);
  let subnetids = await ssm.getParameter({
    Name: '/vpc/public-subnets'
  }).promise();
  let securitygroups = await ssm.getParameter({
    Name: '/vpc/lambda-security-group'
  }).promise();
  let params = {
    StackName: request.customerName, /* required */
    Capabilities: [
      'CAPABILITY_IAM',
      'CAPABILITY_NAMED_IAM',
      'CAPABILITY_AUTO_EXPAND',
      /* more items */
    ],
    ClientRequestToken: 'qwdfghjk3912',
    EnableTerminationProtection: false,
    OnFailure: request.onfailure,
    Parameters: [
      {
        ParameterKey: "SubnetIds",
        ParameterValue: subnetids.Parameter.Value,
      },
      {
        ParameterKey: 'SecurityGroupIds',
        ParameterValue: securitygroups.Parameter.Value,
      },
      {
        ParameterKey: 'OpsPoolArnList',
        ParameterValue: request.userPoolList,
      },
      /* more items */
    ],
    TemplateURL: request.templateUrl,
  };
  cfn.config.region = request.region;
  let result = await cfn.createStack(params).promise();
  console.log(result);
};
Another option: add an AWS Custom Resource backed by Lambda. Check for tags in this resource and return failure if they don't satisfy the constraints. Make all other resource creation depend on this resource (so that they are all created only if your checks pass). The link also contains an example. You will also have to add handling for stack update and deletion (like a default success). I think this is your best bet as of now.
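A minimal sketch of such a validating custom resource, assuming the tag values are passed in as a Tags property and the function is defined inline so the cfnresponse helper is available (the property name and required tag list are assumptions for illustration):

import cfnresponse  # available to lambdas defined inline in a CloudFormation template

REQUIRED_TAGS = ['system', 'project']  # tags the question says must always be set

def handler(event, context):
    # Treat Delete as a default success so stacks can still be torn down.
    if event['RequestType'] == 'Delete':
        cfnresponse.send(event, context, cfnresponse.SUCCESS, {}, 'tag-check')
        return
    tags = event['ResourceProperties'].get('Tags', {})
    missing = [t for t in REQUIRED_TAGS if not tags.get(t)]
    if missing:
        # Failing here rolls the stack back before dependent resources are created.
        cfnresponse.send(event, context, cfnresponse.FAILED,
                         {'Missing': ','.join(missing)}, 'tag-check')
    else:
        cfnresponse.send(event, context, cfnresponse.SUCCESS, {}, 'tag-check')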

Error when trying to create a service account key in Deployment Manager

The error is below:
ERROR: (gcloud.deployment-manager.deployments.update) Error in Operation [operation-1544517871651-57cbb1716c8b8-4fa66ff2-9980028f]: errors:
- code: MISSING_REQUIRED_FIELD
  location: /deployments/infrastructure/resources/projects/resources-practice/serviceAccounts/storage-buckets-backend/keys/json->$.properties->$.parent
  message: |-
    Missing required field 'parent' with schema:
    {
      "type" : "string"
    }
Below is my jinja template content:
resource:
- name: {{ name }}-keys
  type: iam.v1.serviceAccounts.key
  properties:
    name: projects/{{ properties["projectID"] }}/serviceAccounts/{{ serviceAccount["name"] }}/keys/json
    privateKeyType: enum(TYPE_GOOGLE_CREDENTIALS_FILE)
    keyAlgorithm: enum(KEY_ALG_RSA_2048)
P.S.
My reference for the properties is based on https://cloud.google.com/iam/reference/rest/v1/projects.serviceAccounts.keys
I will post the response of @John as the answer for the benefit of the community.
The parent field was missing; it needs to reference an existing service account:
projects/{PROJECT_ID}/serviceAccounts/{ACCOUNT}
where ACCOUNT value can be the email or the uniqueID of the service account.
Regarding the template, please remove the enum() wrapping from privateKeyType and keyAlgorithm.
The above deployment creates service account credentials for an existing service account. To retrieve the downloadable JSON key file, expose it in the outputs via the publicKeyData property and then base64-decode it.
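For the decoding step, a small Python sketch; the key_data value below is a locally built stand-in, since the real value would come from the deployment's outputs:

import base64
import json

# In a real run, key_data would be the base64 string exposed by the deployment's outputs.
# Here it's a stand-in built locally so the sketch is self-contained and runnable.
key_data = base64.b64encode(json.dumps({"type": "service_account"}).encode("utf-8"))

decoded = base64.b64decode(key_data).decode("utf-8")
key_file = json.loads(decoded)  # the familiar service-account JSON key structure
print(key_file["type"])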