How to parse a CloudFormation template programmatically and resolve a field?

I want to extract a particular field from a CloudFormation template. Specifically, I am creating a stack using CDK and within it there is a StepFunction resource whose definition inside the template looks like this:
"MyStateMachineE7CD0EAE": {
"Type": "AWS::StepFunctions::StateMachine",
"Properties": {
"RoleArn": {
"Fn::GetAtt": [
"MyStateMachineRoleEC8990B2",
"Arn"
]
},
"DefinitionString": {
"Fn::Join": [
"",
[
"{\"StartAt\": ...part of state machine definition here... ,\"Resource\":\"arn:",
{
"Ref": "AWS::Partition"
},
":states:::lambda:invoke\",\"Parameters\":{\"FunctionName\":\"",
{
"Fn::GetAtt": [
"SingletonLambda2a860ff582bf4d828a504282814af94c7CEF2D65",
"Arn"
]
},
"...remainder of state machine definition here...}"
]
]
},
"LoggingConfiguration": {
"Destinations": [
{
"CloudWatchLogsLogGroup": {
"LogGroupArn": {
"Fn::GetAtt": [
"MyStatemachineLogGroup2DEF8C9E",
"Arn"
]
}
}
}
],
"IncludeExecutionData": true,
"Level": "ALL"
},
"StateMachineType": "STANDARD"
},
"DependsOn": [
"MyStateMachineRoleDefaultPolicy4C064A65",
"MyStateMachineRoleEC8990B2"
],
"Metadata": {
"aws:cdk:path": "some/path/here"
}
}
The DefinitionString is built from a call to Fn::Join, whose second argument is an array of strings, some of which are references to other CloudFormation resources. There can be many such references, depending on how complicated the state machine definition is.
I want to use Step Functions Local to test the flow of data through the state machine, and in particular to check the InputPath and ResultSelector input and output as part of some integration tests of the state machine as a whole. The Step Functions Local CLI tool requires as input a single string, and that string is the evaluation of the DefinitionString field.
Is there a way of extracting the field from the CloudFormation template, perhaps via a CloudFormation client API? I'd rather not parse the JSON directly unless I absolutely have to.
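As far as I know, no CloudFormation client API evaluates intrinsic functions for you outside a deployment, so getting a usable definition string does mean walking the JSON. Below is a minimal sketch (the resolve helper and the placeholder values are hypothetical, not part of any AWS SDK): substitute dummy strings for every Ref/Fn::GetAtt so the Fn::Join can be flattened into the single string Step Functions Local expects.

```python
def resolve(node, subs):
    """Recursively evaluate a CloudFormation value, replacing Ref and
    Fn::GetAtt intrinsics with caller-supplied placeholder strings and
    flattening Fn::Join into a plain string."""
    if isinstance(node, dict):
        if set(node) == {"Ref"}:
            return subs.get(node["Ref"], node["Ref"])
        if set(node) == {"Fn::GetAtt"}:
            target = ".".join(node["Fn::GetAtt"])
            return subs.get(target, target)
        if set(node) == {"Fn::Join"}:
            sep, parts = node["Fn::Join"]
            return sep.join(resolve(p, subs) for p in parts)
        return {k: resolve(v, subs) for k, v in node.items()}
    if isinstance(node, list):
        return [resolve(p, subs) for p in node]
    return node

# A trimmed-down DefinitionString in the shape the CDK synthesizes.
definition_string = {
    "Fn::Join": ["", [
        "{\"Resource\":\"arn:",
        {"Ref": "AWS::Partition"},
        ":states:::lambda:invoke\"}",
    ]]
}

definition = resolve(definition_string, {"AWS::Partition": "aws"})
print(definition)  # {"Resource":"arn:aws:states:::lambda:invoke"}
```

For the real template you would json.load the synthesized file, pass in Resources -> MyStateMachineE7CD0EAE -> Properties -> DefinitionString, and supply one substitution per referenced resource; in my experience Step Functions Local only needs a syntactically complete definition, so dummy ARNs are fine.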


Use env variables inside cloudformation templates

I have an Amplify app with multiple branches.
I have added a custom CloudFormation template with amplify add custom.
It looks like:
{
"AWSTemplateFormatVersion": "2010-09-09",
"Parameters": { "env": { "Type": "String" } },
"Resources": {
"db": {
"Type": "AWS::Timestream::Database",
"Properties": { "DatabaseName": "dev_db" }
},
"timestreamtable": {
"DependsOn": "db",
"Type": "AWS::Timestream::Table",
"Properties": {
"DatabaseName": "dev_db",
"TableName": "avg_16_4h",
"MagneticStoreWriteProperties": { "EnableMagneticStoreWrites": true },
"RetentionProperties": {
"MemoryStoreRetentionPeriodInHours": "8640",
"MagneticStoreRetentionPeriodInDays": "1825"
}
}
}
},
"Outputs": {},
"Description": "{\"createdOn\":\"Windows\",\"createdBy\":\"Amplify\",\"createdWith\":\"8.3.1\",\"stackType\":\"custom-customCloudformation\",\"metadata\":{}}"
}
You can see there is a field called DatabaseName. In my Amplify app I have defined an env variable named TIMESTREAM_DB, and I want to use it inside this CloudFormation file.
Is this possible, or do I need to hard-code the values by hand?
Templates cannot access arbitrary env vars. Instead, CloudFormation injects deploy-time values into a template with Parameters.
Amplify helpfully adds env as a template parameter. Per the Amplify docs, use the env value as a suffix for the AWS::Timestream::Database name:
"DatabaseName": "Fn::Join": [ "", [ "my-timestream-db-name-", { "Ref": "env" } ] ]
The AWS::Timestream::Table resource also requires a DatabaseName property. You could repeat the above, but it's more DRY to get the name via the database's Ref:
"DatabaseName": { "Ref" : "db" }

How to filter an s3 data event by object key suffix on AWS EventBridge

I've created a rule on AWS EventBridge that triggers a SageMaker Pipeline execution. To do so, I have the following event pattern:
{
"source": ["aws.s3"],
"detail-type": ["AWS API Call via CloudTrail"],
"detail": {
"eventSource": ["s3.amazonaws.com"],
"eventName": ["PutObject", "CopyObject", "CompleteMultipartUpload"],
"requestParameters": {
"bucketName": ["my-bucket-name"],
"key": [{
"prefix": "folder/inside/my/bucket/"
}]
}
}
}
I have enabled CloudTrail to log my S3 data events, and the rule is triggering my SageMaker Pipeline execution correctly.
The problem here is:
A pipeline execution is being triggered for every put/copy of any object under my prefix. I would like to trigger the pipeline execution only when one specific object is uploaded to the bucket, but I don't know its entire name.
For instance, a possible object key is the following, where the date part is built dynamically:
my-bucket-name/folder/inside/my/bucket/2021-07-28/_SUCCESS
I would like to write an event pattern with something like this:
"prefix": "folder/inside/my/bucket/{current_date}/_SUCCESS"
or
"key": [{
"prefix": "folder/inside/my/bucket/"
}, {
"suffix": "_SUCCESS"
}]
I think EventBridge event patterns do not support suffix filtering; the documentation isn't clear about this behavior.
I have configured an S3 event notification with a suffix filter and sent the filtered notifications to an SQS queue, but now I don't know what to do with that queue in order to invoke my EventBridge rule and trigger a SageMaker Pipeline execution.
I was looking at a similar functionality.
Unfortunately, based on the docs from AWS, it looks like it only supports the following patterns:
Comparison | Example | Rule syntax
Null | UserID is null | "UserID": [ null ]
Empty | LastName is empty | "LastName": [""]
Equals | Name is "Alice" | "Name": [ "Alice" ]
And | Location is "New York" and Day is "Monday" | "Location": [ "New York" ], "Day": [ "Monday" ]
Or | PaymentType is "Credit" or "Debit" | "PaymentType": [ "Credit", "Debit" ]
Not | Weather is anything but "Raining" | "Weather": [ { "anything-but": [ "Raining" ] } ]
Numeric (equals) | Price is 100 | "Price": [ { "numeric": [ "=", 100 ] } ]
Numeric (range) | Price is more than 10, and less than or equal to 20 | "Price": [ { "numeric": [ ">", 10, "<=", 20 ] } ]
Exists | ProductName exists | "ProductName": [ { "exists": true } ]
Does not exist | ProductName does not exist | "ProductName": [ { "exists": false } ]
Begins with | Region is in the US | "Region": [ { "prefix": "us-" } ]

How to use multiple prefixes in anything-but clause in AWS eventbridge eventpattern?

I have a situation where I need to filter out certain events using eventpatterns in eventbridge.
I want to run the rule for all events except those where username starts with abc or xyz.
I have tried below 2 syntax but none worked :
"userIdentity": {
"sessionContext": {
"sessionIssuer": {
"userName": [
{
"anything-but": {
"prefix": [
"abc-",
"xyz-"
]
}
}
]
}
}
}
"userIdentity": {
"sessionContext": {
"sessionIssuer": {
"userName": [
{
"anything-but": [{
"prefix": "abc-",
"prefix": "xyz-"
}]
}
]
}
}
}
Getting following error on saving the rule :
"Event pattern is not valid. Reason: Inside anything but list, start|null|boolean is not supported."
Am I missing something in the syntax or if this is a limitation then is there any alternative to this problem?
You can use prefix within an array in an event pattern. Here is an example pattern:
{
"detail": {
"alarmName": [{
"prefix": "DemoApp1"
},
{
"prefix": "DemoApp2"
}
],
"state": {
"value": [
"ALARM"
]
},
"previousState": {
"value": [
"OK"
]
}
}
}
This pattern will match any alarm whose name starts with either DemoApp1 or DemoApp2.
TLDR: user #samtoddler is sort of correct.
Prefix matches only work on values, as called out in https://docs.aws.amazon.com/eventbridge/latest/userguide/content-filtering-with-event-patterns.html#filtering-prefix-matching. They do not work inside an anything-but list. You can file a feature request with AWS support, but if you'd like to unblock yourself, it's probably best to control the prefixes you use for userName (guessing this is something IAM-related and within your control).
If that's not possible, consider filtering as much as you can via other properties before handing the event to a compute target (probably Lambda) to perform the additional filtering.
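The compute-side fallback could look like the sketch below: let the rule match broadly, then drop events whose userName starts with an excluded prefix. The field path mirrors the question's event shape; the handler itself is hypothetical.

```python
# Prefixes to exclude, taken from the question.
EXCLUDED_PREFIXES = ("abc-", "xyz-")

def handler(event, context):
    """Lambda handler that skips events for excluded userName prefixes."""
    user_name = (
        event.get("detail", {})
        .get("userIdentity", {})
        .get("sessionContext", {})
        .get("sessionIssuer", {})
        .get("userName", "")
    )
    # str.startswith accepts a tuple, so one call covers all prefixes.
    if user_name.startswith(EXCLUDED_PREFIXES):
        return {"skipped": True}
    # ... real processing goes here ...
    return {"skipped": False}

event = {"detail": {"userIdentity": {"sessionContext": {
    "sessionIssuer": {"userName": "abc-service"}}}}}
print(handler(event, None))  # {'skipped': True}
```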

Where/how do I define a NotificationConfig in an AWS SSM Automation document?

Say I have an SSM document like the below, and I want to be alerted when a run fails or doesn't finish for whatever reason:
{
"description": "Restores specified pg_dump backup to specified RDS/DB.",
"mainSteps": [
{
"action": "aws:runCommand",
"description": "Restores specified pg_dump backup to specified RDS/DB.",
"inputs": {
"DocumentName": "AWS-RunShellScript",
"Parameters": {
"commands": [
"blahblahblah"
],
"executionTimeout": "1800"
},
"Targets": [
{
"Key": "InstanceIds",
"Values": [
"i-xxxxxxxx"
]
}
]
},
"name": "DBRestorer",
"nextStep": "RunQueries"
},
The Terraform docs show that RunCommand tasks support a NotificationConfig where I can pass in my SNS topic ARN and declare which state transitions should trigger a message: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ssm_maintenance_window_task#notification_config
However, I can't find any Amazon docs that actually show a notification configuration used inside the document itself (as opposed to on the maintenance window; mine is registered as an Automation task, so notifications aren't supported at the window level). So I'm not sure whether it belongs as a sub-parameter, or whether to define it with camel case or dash separation.
Try this
{
"description": "Restores specified pg_dump backup to specified RDS/DB.",
"mainSteps": [
{
"action": "aws:runCommand",
"description": "Restores specified pg_dump backup to specified RDS/DB.",
"inputs": {
"DocumentName": "AWS-RunShellScript",
"NotificationConfig": {
"NotificationArn": "<<Replace this with a SNS Topic Arn>>",
"NotificationEvents": ["All"],
"NotificationType": "Invocation"
},
"ServiceRoleArn": "<<Replace this with an IAM role Arn that has access to SNS>>",
"Parameters": {
"commands": [
"blahblahblah"
],
"executionTimeout": "1800"
},
"Targets": [
{
"Key": "InstanceIds",
"Values": [
"i-xxxxxxxx"
]
}
]
},
"name": "DBRestorer",
"nextStep": "RunQueries"
},
...
]
}
Related documentation:
https://docs.aws.amazon.com/systems-manager/latest/userguide/automation-action-runcommand.html
https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_NotificationConfig.html#systemsmanager-Type-NotificationConfig-NotificationType

AWS Scheduled Event Rule for Lambda doesn't work in CloudFormation

I'm having trouble configuring an AWS Lambda function to be triggered by a scheduled CloudWatch Events rule using CloudFormation (in reality, using Python's troposphere). This has cost me a couple of days already, and any help would be appreciated.
Here's the relevant CF JSON snippet -
"DataloaderRetrier": {
"Properties": {
"Code": {
"S3Bucket": "mycompanylabs-config",
"S3Key": "v3/mycompany-component-loader-lambda-0.5.jar"
},
"FunctionName": "DataloaderRetriervitest27",
"Handler": "mycompany.ScheduledEventHandler::handleRequest",
"MemorySize": 320,
"Role": "arn:aws:iam::166662328783:role/kinesis-lambda-role",
"Runtime": "java8",
"VpcConfig": {
"SecurityGroupIds": [
"sg-2f1f6047"
],
"SubnetIds": [
"subnet-ec3c1435"
]
}
},
"Type": "AWS::Lambda::Function"
},
"DataloaderRetrierEventTriggerPermission": {
"Properties": {
"Action": "lambda:InvokeFunction",
"FunctionName": {
"Fn::GetAtt": [
"DataloaderRetrier",
"Arn"
]
},
"Principal": "events.amazonaws.com",
"SourceAccount": {
"Ref": "AWS::AccountId"
},
"SourceArn": {
"Fn::GetAtt": [
"DataloaderRetrierEventTriggerRule",
"Arn"
]
}
},
"Type": "AWS::Lambda::Permission"
},
"DataloaderRetrierEventTriggerRule": {
"DependsOn": "DataloaderRetrier",
"Properties": {
"Description": "Reminding the lambda to read from the retry SQS",
"Name": "DataloaderRetrierEventTriggerRulevitest27",
"ScheduleExpression": "rate(1 minute)",
"State": "ENABLED",
"Targets": [
{
"Arn": {
"Fn::GetAtt": [
"DataloaderRetrier",
"Arn"
]
},
"Id": "DataloaderRetrierEventTriggerTargetvitest27",
"Input": "{\"Hey\":\"WAKE UP!\"}"
}
]
},
"Type": "AWS::Events::Rule"
}
The AWS Lambda function shows zero invocations, while the Events -> Rules metrics show the correct number of invocations; however, they all fail. The Lambda shows the trigger in its Triggers section, and the rule shows the Lambda among its targets. They link up fine.
However, if I go in and manually create the same trigger under the rule in the web console, it will happily start sending events to the Lambda.
PS - here's the troposphere code:
# DATALOADER RETRIER LAMBDA
dataloader_retrier = t.add_resource(awslambda.Function(
"DataloaderRetrier",
Code=awslambda.Code(
"DataloaderRetrierCode",
S3Bucket='mycompanylabs-config',
S3Key='v3/mycompany-snowplow-loader-lambda-0.5.jar'
),
FunctionName=suffix("DataloaderRetrier"),
Handler="mycompany.ScheduledEventHandler::handleRequest",
MemorySize="320",
Role="arn:aws:iam::166662328783:role/kinesis-lambda-role",
Runtime="java8",
VpcConfig=lambda_vpc_config
))
dataloader_retrier_scheduled_rule = t.add_resource(events.Rule(
"DataloaderRetrierEventTriggerRule",
Name=suffix("DataloaderRetrierEventTriggerRule"),
Description="Reminding the lambda to read from the retry SQS",
Targets=[events.Target(
Id=suffix("DataloaderRetrierEventTriggerTarget"),
Arn=tr.GetAtt("DataloaderRetrier", "Arn"),
Input='{"Hey":"WAKE UP!"}'
)],
State='ENABLED',
ScheduleExpression="rate(1 minute)",
DependsOn="DataloaderRetrier"
))
t.add_resource(awslambda.Permission(
"DataloaderRetrierEventTriggerPermission",
Action="lambda:InvokeFunction",
FunctionName=tr.GetAtt("DataloaderRetrier", "Arn"),
Principal="events.amazonaws.com",
SourceAccount=tr.Ref("AWS::AccountId"),
SourceArn=tr.GetAtt("DataloaderRetrierEventTriggerRule", "Arn")
))
You need to remove the SourceAccount parameter from your AWS::Lambda::Permission Resource.
As described in the AddPermission API documentation, the SourceAccount parameter restricts the 'source' of the permitted invocation to the specified AWS Account ID, for example when specifying an S3 Bucket or CloudWatch Logs notification.
However (and the docs should probably be clearer on this point), in the case of a CloudWatch Events schedule expression, the source of the event is aws.events, not your own AWS account, which is why adding this parameter causes the event to fail to trigger the Lambda function.
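For completeness, here is the question's Permission resource with only the SourceAccount key removed; everything else is unchanged:

```json
"DataloaderRetrierEventTriggerPermission": {
  "Properties": {
    "Action": "lambda:InvokeFunction",
    "FunctionName": { "Fn::GetAtt": [ "DataloaderRetrier", "Arn" ] },
    "Principal": "events.amazonaws.com",
    "SourceArn": { "Fn::GetAtt": [ "DataloaderRetrierEventTriggerRule", "Arn" ] }
  },
  "Type": "AWS::Lambda::Permission"
}
```

In the troposphere snippet this corresponds to dropping the SourceAccount=tr.Ref("AWS::AccountId") argument from the awslambda.Permission call.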