create event subscriptions in RDS with CloudFormation - amazon-web-services

Does CloudFormation support or have the ability to create DB event subscriptions (RDS)?
I couldn't find any reference in the AWS documentation...
Thanks

The latest version of CloudFormation, at the time of this writing, supports this by creating an "AWS::RDS::EventSubscription" resource in your stack:
"myEventSubscription": {
"Type": "AWS::RDS::EventSubscription",
"Properties": {
"EventCategories": ["configuration change", "failure", "deletion"],
"SnsTopicArn": "arn:aws:sns:us-west-2:123456789012:example-topic",
"SourceIds": ["db-instance-1", { "Ref" : "myDBInstance" }],
"SourceType":"db-instance",
"Enabled" : false
}
}

Related

Is it possible to extract "instanceId" from EventBridge event data, and use it as Target Value?

I was able to set up AutoScaling events as rules in EventBridge to trigger SSM Commands, but I've noticed that with my chosen Target Value the event is passed to all my active EC2 instances. My Target key is a tag shared by those instances, so my mistake makes sense now.
I'm pretty new to EventBridge, so I was wondering if there's a way to actually target the instance that triggered the AutoScaling event (as in extracting the "InstanceId" that's present in the event data and using that as my new Target Value). I saw the Input Transformer, but I think that just transforms the event data to pass to the target.
Thanks!
EDIT - help with JS code for Lambda + SSM RunCommand
I realize I can achieve this by setting EventBridge to invoke a Lambda function instead of the SSM RunCommand directly. Can anyone help with the JavaScript code to call a shell command on the EC2 instance specified in the event data (event.detail.EC2InstanceId)? I can't seem to find a relevant and up-to-date base template online, and I'm not familiar enough with JS or Lambda. Any help is greatly appreciated! Thanks
Sample of event data, as per the AWS docs:
{
  "version": "0",
  "id": "12345678-1234-1234-1234-123456789012",
  "detail-type": "EC2 Instance Launch Successful",
  "source": "aws.autoscaling",
  "account": "123456789012",
  "time": "yyyy-mm-ddThh:mm:ssZ",
  "region": "us-west-2",
  "resources": [
    "auto-scaling-group-arn",
    "instance-arn"
  ],
  "detail": {
    "StatusCode": "InProgress",
    "Description": "Launching a new EC2 instance: i-12345678",
    "AutoScalingGroupName": "my-auto-scaling-group",
    "ActivityId": "87654321-4321-4321-4321-210987654321",
    "Details": {
      "Availability Zone": "us-west-2b",
      "Subnet ID": "subnet-12345678"
    },
    "RequestId": "12345678-1234-1234-1234-123456789012",
    "StatusMessage": "",
    "EndTime": "yyyy-mm-ddThh:mm:ssZ",
    "EC2InstanceId": "i-1234567890abcdef0",
    "StartTime": "yyyy-mm-ddThh:mm:ssZ",
    "Cause": "description-text"
  }
}
Edit 2 - my Lambda code so far
'use strict'
// AWS SDK v2 SSM client (bundled with the Node.js 16.x and earlier Lambda runtimes).
// The function's execution role needs ssm:SendCommand on the target instances and document.
const ssm = new (require('aws-sdk/clients/ssm'))()

exports.handler = async (event) => {
  // Instance that triggered the Auto Scaling event
  const instanceId = event.detail.EC2InstanceId
  const params = {
    DocumentName: "AWS-RunShellScript",
    InstanceIds: [ instanceId ],
    TimeoutSeconds: 30,
    Parameters: {
      commands: ["/path/to/my/ec2/script.sh"],
      workingDirectory: [],
      executionTimeout: ["15"]
    }
  };
  await ssm.sendCommand(params).promise()
  return {
    statusCode: 200,
    body: "Run Command success",
  };
}
Yes, but through Lambda
EventBridge -> Lambda (using SSM api) -> EC2
Thank you @Sándor Bakos for helping me out!! My JavaScript ended up not working for some reason, so I ended up just using part of the Python code linked in the comments.
1. Add ssm:SendCommand permission:
After letting Lambda create a basic role during function creation, I added an inline policy to allow Systems Manager's SendCommand. This needs access to your documents/*, instances/* and managed-instances/*.
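A minimal sketch of such an inline policy, assuming wildcard resource ARNs (scope them down to your own region, account, documents and instances as appropriate):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ssm:SendCommand",
      "Resource": [
        "arn:aws:ssm:*:*:document/*",
        "arn:aws:ec2:*:*:instance/*",
        "arn:aws:ssm:*:*:managed-instance/*"
      ]
    }
  ]
}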
2. Code - Python 3.9:
import boto3
import botocore

def lambda_handler(event=None, context=None):
    try:
        client = boto3.client('ssm')
        # Instance that triggered the Auto Scaling event
        instance_id = event['detail']['EC2InstanceId']
        command = '/path/to/my/script.sh'
        client.send_command(
            InstanceIds = [ instance_id ],
            DocumentName = 'AWS-RunShellScript',
            Parameters = {
                'commands': [ command ],
                'executionTimeout': [ '60' ]
            }
        )
    except botocore.exceptions.ClientError as error:
        # Surface SSM errors (e.g. missing permissions or an unmanaged instance)
        print(error)
        raise
You can do this without using Lambda, as I just did, by using EventBridge's input transformers.
I specified a new Automation document that called the document I was trying to use (AWS-ApplyAnsiblePlaybooks).
My document declares InstanceId as a parameter, and the input transformer in EventBridge passes it in. I had to pass the event into Lambda just to see how to parse the JSON event object to get the desired instance ID - this ended up being
$.detail.EC2InstanceId
(it was coming from an Auto Scaling group).
I then passed it into a template that was used for the runbook:
{"InstanceId":[<instance>]}
This template was read in my runbook as a parameter.
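For reference, a rule target wired up this way might look roughly like the following in CloudFormation (the automation document name, account ID, region and role ARN are placeholders, not values from my setup; the InputPathsMap/InputTemplate pair is what does the extraction):
"LaunchSuccessRule": {
  "Type": "AWS::Events::Rule",
  "Properties": {
    "EventPattern": {
      "source": ["aws.autoscaling"],
      "detail-type": ["EC2 Instance Launch Successful"]
    },
    "Targets": [
      {
        "Id": "RunAnsibleAutomation",
        "Arn": "arn:aws:ssm:us-west-2:123456789012:automation-definition/MyAnsibleAutomation:$DEFAULT",
        "RoleArn": "arn:aws:iam::123456789012:role/MyEventBridgeAutomationRole",
        "InputTransformer": {
          "InputPathsMap": { "instance": "$.detail.EC2InstanceId" },
          "InputTemplate": "{\"InstanceId\":[<instance>]}"
        }
      }
    ]
  }
}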
These are the SSM playbook inputs I used to run the AWS-ApplyAnsiblePlaybooks document; I just mapped each parameter to the corresponding parameters in the nested playbook:
"inputs": {
"InstanceIds": ["{{ InstanceId }}"],
"DocumentName": "AWS-ApplyAnsiblePlaybooks",
"Parameters": {
"SourceType": "S3",
"SourceInfo": {"path": "https://testansiblebucketab.s3.amazonaws.com/"},
"InstallDependencies": "True",
"PlaybookFile": "ansible-test.yml",
"ExtraVariables": "SSM=True",
"Check": "False",
"Verbose": "-v",
"TimeoutSeconds": "3600"
}
See the documentation below for reference; they use a document that is already set up to receive the variable:
https://docs.aws.amazon.com/systems-manager/latest/userguide/automation-tutorial-eventbridge-input-transformers.html
This is the full automation playbook I used; most of the parameters are defaults from the nested playbook:
{
  "description": "Runs Ansible Playbook on Launch Success Instances",
  "schemaVersion": "0.3",
  "assumeRole": "<Place your automation role ARN here>",
  "parameters": {
    "InstanceId": {
      "type": "String",
      "description": "(Required) The ID of the Amazon EC2 instance."
    }
  },
  "mainSteps": [
    {
      "name": "RunAnsiblePlaybook",
      "action": "aws:runCommand",
      "inputs": {
        "InstanceIds": ["{{ InstanceId }}"],
        "DocumentName": "AWS-ApplyAnsiblePlaybooks",
        "Parameters": {
          "SourceType": "S3",
          "SourceInfo": {"path": "https://testansiblebucketab.s3.amazonaws.com/"},
          "InstallDependencies": "True",
          "PlaybookFile": "ansible-test.yml",
          "ExtraVariables": "SSM=True",
          "Check": "False",
          "Verbose": "-v",
          "TimeoutSeconds": "3600"
        }
      }
    }
  ]
}

CloudFormation - Create SNS subscription in disabled state

Is there a way to create an SNS subscription in the disabled state? This is for a Lambda, if that makes a difference.
Example:
MySubscription:
  Type: AWS::SNS::Subscription
  Properties:
    Endpoint: arn:aws:lambda:region:account-id:function:mylambda
    Protocol: lambda
    TopicArn: arn:aws:sns:region:account-id:topic
    Enabled: false # like this
I couldn't find anything like this in the AWS CloudFormation documentation.
You can put a condition on the subscription creation.
First add a parameter:
"CreateSubscription": {
"Type": "String",
"AllowedValues": [
"true",
"false"
],
"Description": "Create subscription to sns"
}
After the 'Parameters' section, create a 'Conditions' section:
"Conditions" : {
"CreateSubscription" : {"Fn::Equals" : [{"Ref" : "CreateSubscription"}, "true"]}
}
Then add the condition to your subscription:
"Subscription": {
"Type": "AWS::SNS::Subscription",
"Condition" : "CreateSubscription",
[...]
}
When you want to "activate" your subscription, you just have to change the parameter value and update the stack using the same template.
Conditions section reference
There isn't a way to do this with CloudFormation that conforms to your example. According to the documentation, AWS::SNS::Subscription does not have 'Enabled' as a property.
The documentation does state, though, that the owner of the endpoint must confirm the subscription before Amazon SNS creates the subscription. So, in a sense, it's already disabled because it doesn't exist until you confirm it on the SNS topic.

aws cloudformation WAF geo location condition

Trying to create a CloudFormation template to configure WAF with a geo location condition. Couldn't find the right template yet. Any pointers would be appreciated.
http://docs.aws.amazon.com/waf/latest/developerguide/web-acl-geo-conditions.html
Unfortunately, the actual answer (as of this writing, July 2018) is that you cannot create geo match sets directly in CloudFormation. You can create them via the CLI or SDK, then reference them in the DataId field of a WAFRule's Predicates property.
Creating a GeoMatchSet with one constraint via CLI:
aws waf-regional get-change-token
aws waf-regional create-geo-match-set --name my-geo-set --change-token <token>
aws waf-regional get-change-token
aws waf-regional update-geo-match-set --change-token <new_token> --geo-match-set-id <id> --updates '[ { "Action": "INSERT", "GeoMatchConstraint": { "Type": "Country", "Value": "US" } } ]'
Now reference that GeoMatchSet id in the CloudFormation:
"WebAclGeoRule": {
"Type": "AWS::WAFRegional::Rule",
"Properties": {
...
"Predicates": [
{
"DataId": "00000000-1111-2222-3333-123412341234" // id from create-geo-match-set
"Negated": false,
"Type": "GeoMatch"
}
]
}
}
There is no documentation for it, but it is possible to create the geo match set in Serverless/CloudFormation.
I used the following in Serverless:
Resources:
  Geos:
    Type: "AWS::WAFRegional::GeoMatchSet"
    Properties:
      Name: geo
      GeoMatchConstraints:
        - Type: "Country"
          Value: "IE"
Which translates to the following in CloudFormation:
"Geos": {
"Type": "AWS::WAFRegional::GeoMatchSet",
"Properties": {
"Name": "geo",
"GeoMatchConstraints": [
{
"Type": "Country",
"Value": "IE"
}
]
}
}
That can then be referenced when creating a rule:
(Serverless):
Resources:
  MyRule:
    Type: "AWS::WAFRegional::Rule"
    Properties:
      Name: waf
      Predicates:
        - DataId:
            Ref: "Geos"
          Negated: false
          Type: "GeoMatch"
(CloudFormation):
"MyRule": {
"Type": "AWS::WAFRegional::Rule",
"Properties": {
"Name": "waf",
"Predicates": [
{
"DataId": {
"Ref": "Geos"
},
"Negated": false,
"Type": "GeoMatch"
}
]
}
}
I'm afraid that your question is too vague to solicit a helpful response. The CloudFormation User Guide (PDF) defines many different WAF / CloudFront / Route 53 resources that provide various forms of geo match / geo blocking capability. The link you provide seems to cover a subset of Web Access Control Lists (Web ACLs) - see AWS::WAF::WebACL on page 2540.
I suggest you have a look and if you are still stuck, actually describe what it is you are trying to achieve.
Note that the term you used: "geo location condition" doesn't directly relate to an AWS capability that I'm aware of.
Finally, if you are referring to https://aws.amazon.com/about-aws/whats-new/2017/10/aws-waf-now-supports-geographic-match/, then the latest CloudFormation User Guide doesn't seem to have been updated yet to reflect this.

CloudFormation Update support for "Refer to Resources in Another Stack"

I'm using the example from Walkthrough: Refer to Resources in Another Stack to reference resources from another stack (which I think is incredibly useful and should be an out-of-the-box feature). However, the example does not seem to work with updates, e.g. if a new output is added to the referenced stack.
Interestingly, the Lambda function isn't even called according to logs and metrics, so it does not seem to be a problem that can be fixed in code. I do think though that the code should use a different PhysicalResourceId on update as per Replacing a Custom Resource During an Update.
Note: this is a cross-post from an unanswered AWS Forum thread
It turns out that CloudFormation only updates a custom resource if one of its properties changes. Once that happens, the custom resource should signal that it changed. So
replace:
response.send(event, context, response.SUCCESS, responseData);
with
var crypto = require('crypto');
// Use a hash of the outputs as the PhysicalResourceId so CloudFormation
// sees a replacement whenever the referenced stack's outputs change
var hash = crypto.createHash('md5').update(JSON.stringify(responseData)).digest('hex');
response.send(event, context, response.SUCCESS, responseData, hash);
This will result in the following events during an update:
15:08:16 UTC+0200 UPDATE_COMPLETE Custom::NetworkInfo NetworkInfo
15:08:15 UTC+0200 UPDATE_IN_PROGRESS Custom::NetworkInfo NetworkInfo Requested update required the provider to create a new physical resource
15:08:08 UTC+0200 UPDATE_IN_PROGRESS Custom::NetworkInfo NetworkInfo
This still requires a property to change though. The best that I came up with was passing a pseudo-random parameter to the custom resource:
{
  "Parameters": {
    "Random": {
      "Description": "Random value to force stack-outputs update",
      "Type": "String"
    }
  },
  "Resources": {
    "NetworkInfo": {
      "Type": "Custom::NetworkInfo",
      "Properties": {
        "ServiceToken": { "Fn::GetAtt" : ["LookupStackOutputs", "Arn"] },
        "Random": { "Ref": "Random" },
        "StackName": { "Ref": "NetworkStackName" }
      }
    }
  }
}
Unknown parameters (i.e. Random) are simply ignored by the Lambda function.

How can we reuse configured AMI in the same template with the same userdata configs

I have created an instance using a CloudFormation template and configured it with user data and PowerShell DSC. I have created an AMI from this instance so that next time it speeds up my stack creation.
Now, how can I use this AMI in that same template so it bypasses all the configuration and installation done on the instance and directly sends a success signal to the wait handler?
I am trying this in my template but it is failing.
Thanks in Advance,
Lokesh Jangir
It sounds like you need a check in your user data to see if everything is already configured, and if it is, then you just stop and send the notification instead of setting it back up again.
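A rough sketch of that idea, using Linux shell user data on Amazon Linux for brevity (the instance name, marker file path, amiId parameter and WaitHandle resource name are placeholders; the same check can be written in PowerShell for a Windows AMI):
"myInstance": {
  "Type": "AWS::EC2::Instance",
  "Properties": {
    "ImageId": { "Ref": "amiId" },
    "UserData": {
      "Fn::Base64": {
        "Fn::Join": ["", [
          "#!/bin/bash\n",
          "if [ -f /var/lib/myapp/.configured ]; then\n",
          "  # AMI was pre-baked: skip configuration and signal success right away\n",
          "  /opt/aws/bin/cfn-signal -e 0 '", { "Ref": "WaitHandle" }, "'\n",
          "  exit 0\n",
          "fi\n",
          "# ...otherwise run the full configuration, create the marker file, then signal...\n"
        ]]
      }
    }
  }
}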
Ultimately, it sounds like it'd be easier to have two templates - one to create the AMI, and one to re-use it in other settings. The second template could take the AMI ID as a parameter so that it is more flexible and can be used with different AMIs as you create them.
1. To use your AMI ID in the CloudFormation template, start by adding a parameter so that you can easily change it:
"Parameters": {
...
"amiId": {
"Type": "String",
"Default": "ami-073bb070",
"AllowedPattern": "[a-zA-Z0-9\\-]*",
"Description": "Only [a-zA-Z0-9\\-]* allowed."
},
...
}
2. Use that parameter in a LaunchConfiguration:
"aLaunchConfig": {
"Type": "AWS::AutoScaling::LaunchConfiguration",
"Properties": {
"ImageId": { "Ref" : "amiId" },
...
3. Or use it directly in an EC2 instance:
"someEC2": {
"Type": "AWS::EC2::Instance",
"Properties": {
"ImageId": { "Ref" : "amiId" },