I'm trying to configure a dashboard with a basic widget to expose the CPUUtilization metric.
I cannot reference the previously created EC2 instance, since it seems that the !Ref function is not interpreted in the JSON that describes the dashboard:
metrics": [
"AWS/EC2",
"CPUUtilization",
"InstanceId",
"!Ref Ec2Instance"
]
Any idea how to reference it by logical name?
You can use Fn::Join to combine the output of Intrinsic functions (like Ref) with strings. For example:
CloudWatchDashboardHOSTNAME:
  Type: "AWS::CloudWatch::Dashboard"
  DependsOn: Ec2InstanceHOSTNAME
  Properties:
    DashboardName: HOSTNAME
    DashboardBody:
      Fn::Join:
        - ""
        - - '{"widgets":[{"type":"metric","properties":{"metrics":[["AWS/EC2","CPUUtilization","InstanceId","'
          - Ref: Ec2InstanceHOSTNAME
          - '"]],"title":"CPU Utilization","period":60,"region":"us-east-1"}}]}'
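A more readable alternative is Fn::Sub, which substitutes ${LogicalId} references directly inside the string and avoids the quote-splicing above. A minimal sketch of the same dashboard body:

DashboardBody:
  Fn::Sub: >-
    {"widgets":[{"type":"metric","properties":{"metrics":[["AWS/EC2","CPUUtilization","InstanceId","${Ec2InstanceHOSTNAME}"]],"title":"CPU Utilization","period":60,"region":"us-east-1"}}]}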
Documentation:
Fn::Join - AWS CloudFormation
Ref - AWS CloudFormation
AWS::CloudWatch::Dashboard - AWS CloudFormation
Dashboard Body Structure and Syntax - Amazon CloudWatch
My goal is to enable logging for a regional WebACL via AWS CDK. This seems to be possible via CloudFormation, and the corresponding constructs exist in CDK. But when using the following code to create a log group and linking it in a LoggingConfiguration ...
const webAclLogGroup = new LogGroup(scope, "awsWafLogs", {
  logGroupName: `aws-waf-logs`
});

// Create logging configuration with log group as destination
new CfnLoggingConfiguration(scope, "webAclLoggingConfiguration", {
  logDestinationConfigs: webAclLogGroup.logGroupArn, // Arn of LogGroup
  resourceArn: aclArn // Arn of Acl
});
... I get an exception during cdk deploy, stating that the string in logDestinationConfigs is not a correct ARN (some parts of the ARN in the log messages have been removed):
Resource handler returned message: "Error reason: The ARN isn't valid. A valid ARN begins with arn: and includes other information separated by colons or slashes., field: LOG_DESTINATION, parameter: arn:aws:logs:xxx:xxx:xxx-awswaflogsF99ED1BA-PAeH9Lt2Y3fi:* (Service: Wafv2, Status Code: 400, Request ID: xxx, Extended Request ID: null)"
I cannot see an error in the generated CloudFormation code after cdk synth:
"webAclLoggingConfiguration": {
"id": "webAclLoggingConfiguration",
"path": "xxx/xxx/webAclLoggingConfiguration",
"attributes": {
"aws:cdk:cloudformation:type": "AWS::WAFv2::LoggingConfiguration",
"aws:cdk:cloudformation:props": {
"logDestinationConfigs": [
{
"Fn::GetAtt": [
{
"Ref": "awsWafLogs58D3FD01"
},
"Arn"
]
}
],
"resourceArn": {
"Fn::GetAtt": [
"webACL",
"Arn"
]
}
}
},
"constructInfo": {
"fqn": "aws-cdk-lib.aws_wafv2.CfnLoggingConfiguration",
"version": "2.37.1"
}
},
I'm using CDK with TypeScript. The CDK version is currently set to 2.37.1, but it also did not work with 2.16.0.
WAF has particular requirements for the naming and format of logging destination configs, as described and shown in their docs.
Specifically, the ARN of the log group cannot end in :*, which unfortunately is exactly what CloudFormation returns for a log group ARN.
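For example (with a hypothetical region and account ID), Fn::GetAtt on a log group yields

arn:aws:logs:us-east-1:111122223333:log-group:aws-waf-logs:*

whereas WAF expects

arn:aws:logs:us-east-1:111122223333:log-group:aws-waf-logs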
A workaround is to construct the required ARN format manually, as shown below, which omits the :* suffix. Also note that logDestinationConfigs takes a list of strings, though it must contain exactly one element.
const webAclLogGroup = new LogGroup(scope, "awsWafLogs", {
  logGroupName: `aws-waf-logs`
});

// Create logging configuration with log group as destination
new CfnLoggingConfiguration(scope, "webAclLoggingConfiguration", {
  logDestinationConfigs: [
    // Construct the different ARN format from the logGroupName
    Stack.of(this).formatArn({
      arnFormat: ArnFormat.COLON_RESOURCE_NAME,
      service: "logs",
      resource: "log-group",
      resourceName: webAclLogGroup.logGroupName,
    })
  ],
  resourceArn: aclArn // Arn of Acl
});
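The ARN that formatArn builds is equivalent to this CloudFormation expression (a sketch, not the literal synthesized output; partition, region, and account come from pseudo parameters, and awsWafLogs58D3FD01 is the logical ID from the synth output above):

logDestinationConfigs:
  - !Sub "arn:${AWS::Partition}:logs:${AWS::Region}:${AWS::AccountId}:log-group:${awsWafLogs58D3FD01}"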
PS: I work for AWS on the CDK team.
I have an organization account with several managed accounts underneath it. Each managed account has multiple VPCs. One VPC in each managed account will have the tag "ServiceName":"True", while the others in that account will have a "ServiceName":"False" tag instead.
I'm trying to create a StackSet with a stack dedicated to creating a security group with ingress rules attached to it, and I need to dynamically assign the "VpcId" property of that security group to the "VpcId" of the VPC with the "ServiceName":"True" tag in that account.
Obviously, if I don't specify a VPC ID in the VpcId field, it creates the security group but attaches it to the default VPC of that account. I can't specify a VPC manually either, since the stack is going to be run in multiple accounts. That leaves me with the only option available: searching for and assigning the VPC by running some sort of function to extract the "VpcId".
The stack itself works fine, as I ran it in a test environment while specifying a VPC ID. So it's just a matter of getting that "VpcId" dynamically.
In the end, I'm looking to do something that would resemble this:
{
    "Parameters": {
        "MyValidVPCID": {
            "Description": "My Valid VPC ID where ServiceName tag equals true. Do some Lambda Kung Fu to get the VPC ID using something that would let me parse the equivalent of aws ec2 describe-vpcs command.",
            "Type": "String"
        }
    },
    "Resources": {
        "SG": {
            "Type": "AWS::EC2::SecurityGroup",
            "Properties": {
                "GroupDescription": "Security Group Desc.",
                "Tags": [
                    { "Key": "Key1", "Value": "ABC" },
                    { "Key": "Key2", "Value": "DEF" }
                ],
                "VpcId": { "Ref": "MyValidVPCID" }
            }
        },
        "SGIngressRule01": {
            "Type": "AWS::EC2::SecurityGroupIngress",
            "DependsOn": "SG",
            "Properties": {
                "GroupId": { "Fn::GetAtt": [ "SG", "GroupId" ] },
                "Description": "Rule 1 description",
                "IpProtocol": "tcp",
                "FromPort": 123,
                "ToPort": 456,
                "CidrIp": "0.0.0.0/0"
            }
        }
    }
}
I really don't know if this is a feasible approach, or what extra steps would be needed to retrieve that VpcId based on the tag. That's why some input from people used to working with CloudFormation would help me a lot.
getting that "VpcId" dynamically.
You have to use a custom resource for that. You would create it as a Lambda function which takes any input arguments you want and, using the AWS SDK, queries or modifies the VPCs/security groups in your stack.
Thanks Marcin for pointing me in the right direction with the custom resources. For those who are wondering what the basic code to make it work looks like, it looks something like this:
Resources:
  FunctionNameLambdaFunctionRole:
    Type: "AWS::IAM::Role"
    Properties:
      RoleName: FunctionNameLambdaFunctionRole
      Path: "/"
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action: sts:AssumeRole
  FunctionNameLambdaFunctionRolePolicy:
    Type: "AWS::IAM::Policy"
    Properties:
      PolicyName: admin3cx
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: "Allow"
            Action: "*"
            Resource: "*"
      Roles:
        - Ref: FunctionNameLambdaFunctionRole
  FunctionNameLambdaFunctionCode:
    Type: "AWS::Lambda::Function"
    DeletionPolicy: Delete
    DependsOn:
      - FunctionNameLambdaFunctionRole
    Properties:
      FunctionName: FunctionNameLambdaFunctionCode
      Role: !GetAtt FunctionNameLambdaFunctionRole.Arn
      Runtime: python3.7
      Handler: index.handler
      MemorySize: 128
      Timeout: 30
      Code:
        ZipFile: |
          import boto3
          import cfnresponse

          ec2 = boto3.resource('ec2')

          def handler(event, context):
              responseData = {}
              # Find the VPC(s) tagged ServiceName=True in this account/region
              filters = [{'Name': 'tag:ServiceName', 'Values': ['True']}]
              vpcs = list(ec2.vpcs.filter(Filters=filters))
              for vpc in vpcs:
                  responseVPC = vpc.id
              # Returned attributes are readable via !GetAtt <CustomResource>.ServiceName
              responseData['ServiceName'] = responseVPC
              cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, "CustomResourcePhysicalID")
  FunctionNameLambdaFunctionInvocationCode:
    Type: "Custom::FunctionNameLambdaFunctionInvocationCode"
    Properties:
      ServiceToken: !GetAtt FunctionNameLambdaFunctionCode.Arn
  SGFunctionName:
    Type: "AWS::EC2::SecurityGroup"
    Properties:
      GroupDescription: Description
      VpcId: !GetAtt FunctionNameLambdaFunctionInvocationCode.ServiceName
...
Some stuff has been redacted, and I made the switch to YAML. The code will obviously be refined. The point was just to make sure I was able to get a return value, based on a filter, from a Lambda function inside a CloudFormation stack.
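One caveat worth noting with custom resources like this: CloudFormation invokes the function for Create, Update, and Delete events alike and waits for a response in every case. This handler replies SUCCESS unconditionally, so deletes complete, but if the code ever raises before cfnresponse.send runs (for example, when no VPC matches the filter and responseVPC is unbound), the stack operation hangs until it times out. Wrapping the handler body in try/except and sending cfnresponse.FAILED on error is a common hardening step.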
I am able to create an SFTP server (AWS Transfer Family) inside a VPC with an internet-facing endpoint on the AWS console, as described here: https://docs.aws.amazon.com/transfer/latest/userguide/create-server-in-vpc.html
[Screenshot: VPC endpoint type access selection]
Now I need to replicate that very same creation in a CloudFormation template, and I don't know how to do it (if possible). According to what I see in https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-transfer-server-endpointdetails.html and in the corresponding CDK docs https://docs.aws.amazon.com/cdk/api/latest/docs/#aws-cdk_aws-transfer.CfnServer.EndpointDetailsProperty.html, there seems to be no way to set the "access" property value.
All the examples I've come across use a PUBLIC endpoint (in contrast to a VPC one). Here's the snippet I'm working on:
"Resources": {
"ftpserver": {
"Type": "AWS::Transfer::Server",
"DependsOn": "sftpEIP1",
"Properties": {
"EndpointDetails": {
"SubnetIds": [
{
"Ref": "sftpSubnet1"
}
],
"VpcId": {
"Ref": "sftpVPC"
}
},
"EndpointType": "VPC",
"Protocols": [
"SFTP"
],
"Tags": [
{
"Key": "KeyName",
"Value": "ValueName"
}
]
}
}
},
...
}
Since there is no way to set the access type in CloudFormation, the endpoint ends up created as "Internal" instead of "Internet-facing", which is what I need.
Is there any way around this or should I just change it manually (AWS console) after every deployment?
You need to associate Elastic IPs and define the security group.
Note that because the Elastic IPs can only be added after the server is created, this takes some time to complete: CloudFormation actually creates the server as internal-only, stops the server, adds the Elastic IPs, starts it again as internet-facing, and only then is the stack completed.
The example CF template below works as expected.
Description: Test CF with FTP server
Resources:
  ElasticIP1:
    Type: AWS::EC2::EIP
  ElasticIP2:
    Type: AWS::EC2::EIP
  ElasticIP3:
    Type: AWS::EC2::EIP
  FTPServer:
    Type: AWS::Transfer::Server
    Properties:
      EndpointDetails:
        AddressAllocationIds:
          - !GetAtt ElasticIP1.AllocationId
          - !GetAtt ElasticIP2.AllocationId
          - !GetAtt ElasticIP3.AllocationId
        SecurityGroupIds:
          - sg-0c4184c3f5da91d4a
        SubnetIds:
          - subnet-0546e2c78cebd0a60
          - subnet-0114560b841c91de7
          - subnet-0af8fb5fae5472862
        VpcId: vpc-07daf77a355f5a8e8
      EndpointType: VPC
      Protocols:
        - SFTP
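Note that the security group, subnet, and VPC IDs above are account-specific sample values; for a template reused across accounts you would typically accept them as typed parameters instead, e.g. (a sketch):

Parameters:
  VpcId:
    Type: AWS::EC2::VPC::Id
  SubnetIds:
    Type: List<AWS::EC2::Subnet::Id>
  SecurityGroupIds:
    Type: List<AWS::EC2::SecurityGroup::Id>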
I have already set up a Glue crawler successfully on the AWS console.
Now I have a CloudFormation template to mimic the whole process, EXCEPT I cannot add the Exclusions: field to the template. Background: in the AWS Glue API, the Exclusions: field represents glob patterns that exclude files or folders matching a specific pattern within the data store, in my example an S3 data store.
Despite much effort, I cannot get the glob patterns to populate on the Glue crawler console. All the other values from the template, i.e. the S3Target, crawler name, IAM role, and grouping behavior, populate successfully from the CFN template; all except the Exclusions field, also known as exclude patterns on the Glue console. My CFN template passes validation, and I've run the crawler hoping the exclude globs, albeit hidden, would somehow still have an effect, but they don't. Why can't I populate the Exclusions field?
Here's the S3Target Exclusion AWS Glue API guide
Here's an AWS sample YAML CFN for a Glue Crawler
Here's a helpful YAML string array guide
YAML
CFNCrawlerSecDeraNUM:
  Type: AWS::Glue::Crawler
  Properties:
    Name: !Ref CFNCrawlerName
    Role: !GetAtt CFNRoleSecDERA.Arn
    # Classifiers: none, use the default classifier
    Description: AWS Glue crawler to crawl SecDERA data
    # Schedule: none, use default run-on-demand
    DatabaseName: !Ref CFNDatabaseName
    Targets:
      S3Targets:
        - Exclusions:
            - "*/readme.htm"
            - "*/sub.txt"
            - "*/pre.txt"
            - "*/tag.txt"
        - Path: "s3://sec-input"
    TablePrefix: !Ref CFNTablePrefixName
    SchemaChangePolicy:
      UpdateBehavior: "UPDATE_IN_DATABASE"
      DeleteBehavior: "LOG"
    # Added single schema grouping Glue API option
    Configuration: "{\"Version\":1.0,\"CrawlerOutput\":{\"Partitions\":{\"AddOrUpdateBehavior\":\"InheritFromTable\"},\"Tables\":{\"AddOrUpdateBehavior\":\"MergeNewColumns\"}},\"Grouping\":{\"TableGroupingPolicy\":\"CombineCompatibleSchemas\"}}"
JSON
"CFNCrawlerSecDeraNUM": {
"Type": "AWS::Glue::Crawler",
"Properties": {
"Name": {
"Ref": "CFNCrawlerName"
},
"Role": {
"Fn::GetAtt": [
"CFNRoleSecDERA",
"Arn"
]
},
"Description": "AWS Glue crawler to crawl SecDERA data",
"DatabaseName": {
"Ref": "CFNDatabaseName"
},
"Targets": {
"S3Targets": [
{
"Exclusions": [
"*/readme.htm",
"*/sub.txt",
"*/pre.txt",
"*/tag.txt"
]
},
{
"Path": "s3://sec-input"
}
]
},
"TablePrefix": {
"Ref": "CFNTablePrefixName"
},
"SchemaChangePolicy": {
"UpdateBehavior": "UPDATE_IN_DATABASE",
"DeleteBehavior": "LOG"
},
"Configuration": "{\"Version\":1.0,\"CrawlerOutput\":{\"Partitions\":{\"AddOrUpdateBehavior\":\"InheritFromTable\"},\"Tables\":{\"AddOrUpdateBehavior\":\"MergeNewColumns\"}},\"Grouping\":{\"TableGroupingPolicy\":\"CombineCompatibleSchemas\"}}"
}
}
You are passing Exclusions as a separate S3Target object in the S3Targets list.
Try changing this:
Targets:
  S3Targets:
    - Exclusions:
        - "*/readme.htm"
        - "*/sub.txt"
        - "*/pre.txt"
        - "*/tag.txt"
    - Path: "s3://sec-input"
To this:
Targets:
  S3Targets:
    - Path: "s3://sec-input"
      Exclusions:
        - "*/readme.htm"
        - "*/sub.txt"
        - "*/pre.txt"
        - "*/tag.txt"
I am trying to capture CloudWatch logs for my Firehose delivery stream to find any errors when sending data to the S3 destination. I created a CloudFormation template with these logging details:
"CloudWatchLoggingOptions" : {
"Enabled" : "true",
"LogGroupName": "/aws/firehose/firehose-dev", -->firehose-dev is my firehosedeliverystream name
"LogStreamName" : "s3logs"
},
I have given the necessary IAM permissions to Firehose for creating the log group and log stream:
{
    "Sid": "",
    "Effect": "Allow",
    "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
    ],
    "Resource": [
        "arn:aws:logs:*:*:*"
    ]
}
When I deployed the template, I found that neither the log group nor the log stream was created in CloudWatch Logs.
But when we give the same IAM permissions to an AWS::Lambda resource, it automatically creates a log group (i.e. /aws/lambda/mylambdaname) and sends its logs to that group. Why does this not work for Firehose?
As a workaround, I am manually creating an AWS::Logs::LogGroup resource named /aws/firehose/firehose-dev and an AWS::Logs::LogStream resource named s3logs.
Firehose will also create the log group and log stream automatically if we configure the delivery stream using the console.
Can't Firehose create the log group and log stream automatically when configured through CloudFormation, like AWS Lambda does?
Thanks
Any help is appreciated
It's resource-dependent. Some resources will create the log group for you, some won't. Sometimes the console creates them in the background. When you use CloudFormation, you usually have to do everything yourself.
In the case of Firehose, you can create the AWS::Logs::LogGroup and AWS::Logs::LogStream resources in CloudFormation. For example (YAML):
MyFirehoseLogGroup:
  Type: AWS::Logs::LogGroup
  Properties:
    RetentionInDays: 1

MyFirehoseLogStream:
  Type: AWS::Logs::LogStream
  Properties:
    LogGroupName: !Ref MyFirehoseLogGroup
Then when you define your AWS::KinesisFirehose::DeliveryStream, you could reference them:
CloudWatchLoggingOptions:
  Enabled: true
  LogGroupName: !Ref MyFirehoseLogGroup
  LogStreamName: !Ref MyFirehoseLogStream
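Since the log group and stream now exist before the delivery stream writes to them, the delivery role only needs logs:PutLogEvents, and the policy can be scoped down from arn:aws:logs:*:*:* to the specific group. A sketch, assuming the resources above:

- Effect: Allow
  Action:
    - logs:PutLogEvents
  Resource: !GetAtt MyFirehoseLogGroup.Arn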