Terraform - aws_config_config_rule - setting event_source to specific ResourceType

I am using Terraform to configure AWS Config custom rules. In the custom rule config I want to limit the event 'Resource' to 'CloudTrail:Trail', but the only valid value I can find is the default value of 'aws.config'.
Is this the only valid 'Resource' you can specify in a Terraform-built AWS custom Config rule?
resource "aws_config_config_rule" "custom_rule_01" {
name = "CUSTOM_CloudTrail_EnableLogFileValidation"
description = "Some Description"
source {
owner = "CUSTOM_LAMBDA"
source_identifier = "${aws_lambda_function.lambda_01.arn}"
source_detail {
event_source = "**aws.config**"
message_type = "ConfigurationItemChangeNotification"
}
}
}
Appreciate any guidance.

https://www.terraform.io/docs/providers/aws/r/config_config_rule.html#source-1
event_source - (Optional) The source of the event, such as an AWS service, that triggers AWS Config to evaluate your AWS resources. This defaults to aws.config and is the only valid value.
What you are looking for is resourceType:
http://docs.aws.amazon.com/config/latest/APIReference/API_ResourceIdentifier.html#config-Type-ResourceIdentifier-resourceType
which has the type AWS::CloudTrail::Trail.
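In Terraform this maps to the rule's scope block rather than source_detail; compliance_resource_types takes the AWS::CloudTrail::Trail resource type. A minimal sketch based on the question's config (scope and compliance_resource_types are arguments of aws_config_config_rule; everything else is carried over from above):

resource "aws_config_config_rule" "custom_rule_01" {
  name        = "CUSTOM_CloudTrail_EnableLogFileValidation"
  description = "Some Description"

  # Restrict evaluation to CloudTrail trails.
  scope {
    compliance_resource_types = ["AWS::CloudTrail::Trail"]
  }

  source {
    owner             = "CUSTOM_LAMBDA"
    source_identifier = "${aws_lambda_function.lambda_01.arn}"

    source_detail {
      event_source = "aws.config" # the only valid value
      message_type = "ConfigurationItemChangeNotification"
    }
  }
}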

Related

How to set log retention days for a CloudFront function in Terraform?

I have an example CloudFront function:
resource "aws_cloudfront_function" "cool_function" {
name = "cool-function"
runtime = "cloudfront-js-1.0"
comment = "The cool function"
publish = true
code = <<EOT
function handler(event) {
var headers = event.request.headers;
if (
typeof headers.coolheader === "undefined" ||
headers.coolheader.value !== "That_is_cool_bro"
) {
console.log("That is not cool bro!")
}
return event.request;
}
EOT
}
When I create this function, the CloudWatch log group /aws/cloudfront/function/cool-function is created automatically,
but its retention policy is Never Expire,
and I can't see any parameter in Terraform that allows setting the retention days.
So the question is:
Is it possible to automatically import the aws_cloudwatch_log_group every time a CloudFront function is created, and change retention_in_days for that resource?
Quite a few AWS services create their log groups implicitly on first use. To prevent that, you need to explicitly create the group before the service has a chance to do it.
For that you need to define the aws_cloudwatch_log_group with the given name yourself, specify the correct retention, and then create an explicit depends_on relation between the function and the log group to ensure the log group is created first. For migration purposes you would then need to import already-created log groups into your Terraform state.
resource "aws_cloudfront_function" "cool_function" {
name = "cool-function"
...
depends_on = [
aws_cloudwatch_log_group.logs
]
}
resource "aws_cloudwatch_log_group" "logs" {
name = "/aws/cloudfront/function/cool-function"
retention_in_days = 123
...
}
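For the migration step, the import ID of an aws_cloudwatch_log_group is simply the log group name, so something along these lines should work:

terraform import aws_cloudwatch_log_group.logs /aws/cloudfront/function/cool-function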

Is there a way to configure date-partitioned folders for an AWS DMS S3 target endpoint?

I'm using Terraform to configure a DMS migration task that migrates (full load + CDC) data from a MySQL instance to an S3 bucket.
The problem is that the configuration doesn't seem to take effect and no partition folder is created. All the migrated files are created in the same directory within the bucket.
The documentation says the S3 endpoint setting DatePartitionEnabled, introduced in version 3.4.2, is supported both for CDC and full load + CDC.
My terraform configuration spec:
resource "aws_dms_endpoint" "example" {
endpoint_id = "example"
endpoint_type = "target"
engine_name = "s3"
s3_settings {
bucket_name = "example"
bucket_folder = "example-folder"
compression_type = "GZIP"
data_format = "parquet"
parquet_version = "parquet-2-0"
service_access_role_arn = var.service_access_role_arn
date_partition_enabled = true
}
tags = {
Name = "example"
}
}
But in the S3 bucket I get no date folders, only sequential files, as if this option weren't set:
LOAD00000001.parquet
LOAD00000002.parquet
...
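For reference, with date partitioning in effect I would have expected the CDC output to land under date-based folders, roughly of this shape (path layout per the AWS docs for DatePartitionEnabled; schema and table names here are placeholders):

example-folder/<schema_name>/<table_name>/2021/11/25/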
I'm using Terraform 1.0.7, AWS provider 3.66.0, and a DMS replication instance on engine version 3.4.6.
Does anyone know what the issue could be?

Error: Error creating CloudTrail: InvalidCloudWatchLogsLogGroupArnException

I am trying to create a CloudTrail for an organization in AWS. When I try to run the plan on a targeted apply for
resource "aws_cloudtrail" "nfcisbenchmark" {
name = "nf-cisbenchmark-${terraform.workspace}"
s3_bucket_name = aws_s3_bucket.nfcisbenchmark_cloudtrail.id
enable_logging = var.enable_logging
# 3.2 Ensure CloudTrail log file validation is enabled (Automated)
enable_log_file_validation = var.enable_log_file_validation
# 3.1 Ensure CloudTrail is enabled in all regions (Automated)
is_multi_region_trail = var.is_multi_region_trail
include_global_service_events = var.include_global_service_events
is_organization_trail = "${local.environments[terraform.workspace] == "origin"? true : var.is_organization_trail}"
# 3.7 Ensure CloudTrail logs are encrypted at rest using KMS CMKs (Automated)
kms_key_id = aws_kms_key.nfcisbenchmark.arn
depends_on = [aws_s3_bucket.nfcisbenchmark_cloudtrail]
cloud_watch_logs_role_arn = aws_iam_role.cloudwatch.arn
cloud_watch_logs_group_arn = "${aws_cloudwatch_log_group.nfcisbenchmark.arn}:*"
event_selector {
# 3.11 Ensure that Object-level logging for read events is enabled for S3 bucket (Automated)
read_write_type = "All"
include_management_events = true
}
}
I get: Error: Error creating CloudTrail: InvalidCloudWatchLogsLogGroupArnException: Access denied. Check the permissions for your role.
Any help with this issue would be greatly appreciated.
Version 3.0.0 of the AWS provider included a breaking change to the aws_cloudwatch_log_group resource's ARN output: it stripped the :* suffix that was previously returned. You now have to explicitly add the suffix in places where the AWS API wants it. All of the documentation was updated to follow this pattern as well, which is why you see this in the aws_cloudtrail resource documentation:
resource "aws_cloudwatch_log_group" "example" {
name = "Example"
}
resource "aws_cloudtrail" "example" {
# ... other configuration ...
cloud_watch_logs_group_arn = "${aws_cloudwatch_log_group.example.arn}:*" # CloudTrail requires the Log Stream wildcard
}
For you though, on v2.6.0, your ARN already includes this :* suffix, so you don't need to add it a second time; but you do need to remember to strip the :* suffix on resources where the AWS API doesn't want it (by the looks of this issue, the aws_datasync_task resource is one of those).
Alternatively, you could update your AWS provider to v3.0.0 or later and keep the suffix, which will also help you avoid a lot of other potential issues in the future.
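In other words, on the v2.x provider the trail from the question should reference the ARN directly; a minimal sketch of the one line to change:

# AWS provider v2.x: the log group ARN already ends in ":*"
cloud_watch_logs_group_arn = aws_cloudwatch_log_group.nfcisbenchmark.arn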

AWS Batch input parameter from CloudWatch through Terraform

I have a Terraform project where I'm trying to set up a CloudWatch event rule and target to trigger a new AWS Batch job submission on a schedule. The issue I'm having is passing a static parameter (i.e., a variable representing a command to run) from the CloudWatch event to the batch_target.
In my aws_batch_job_definition I have the following as part of the container_properties:
container_properties = <<CONTAINER_PROPERTIES
{
  "command": ["echo", "command", "Ref::inputCommand"],
  ...etc
}
And my CloudWatch event target tied to the schedule rule looks like this:
resource "aws_cloudwatch_event_target" "test_target" {
rule = aws_cloudwatch_event_rule.every_minute.name
role_arn = aws_iam_role.event_iam_role.arn
arn = aws_batch_job_queue.test_queue.arn
batch_target {
job_definition = aws_batch_job_definition.test.arn
job_name = "job-test"
job_attempts = 2
}
input = "{\"inputCommand\": \"commandToRun\"}" #this line does not work as intended
}
Is there a simple way to use the input or input_transformer properties of the event_target to pass the inputCommand variable through to the Batch job?
The setup works when I submit a job with that parameter and value set through the console, or when I set a default parameter in the job definition, but I'm having trouble doing it via the CloudWatch event in Terraform.
I had a similar issue, but with a CloudFormation template.
This documentation helped me a lot.
In your case, I think the solution might be:
input = "{\"Parameters\" : {\"inputCommand\": \"commandToRun\"}}"
My working CloudFormation template looks something like this:
JobDefinition:
  Type: AWS::Batch::JobDefinition
  Properties:
    ...
    ContainerProperties:
      ...
      Image: ...
      Command:
        - 'Ref::MyParameter'
ScheduledRule:
  Type: AWS::Events::Rule
  Properties:
    ...
    Targets:
      - ...
        BatchParameters:
          ...
        Input: "{\"Parameters\" : {\"MyParameter\": \"SomeValue\"}}"
You can specify the command through the input section of your event_target. Your Terraform could look like this (I included another parameter, resourceRequirements, just as an example):
resource "aws_cloudwatch_event_target" "test_target" {
rule = aws_cloudwatch_event_rule.every_minute.name
role_arn = aws_iam_role.event_iam_role.arn
arn = aws_batch_job_queue.test_queue.arn
batch_target {
job_definition = aws_batch_job_definition.test.arn
job_name = "job-test"
job_attempts = 2
}
input = "{\"Parameters\" : {\"command\": \"commandToRun\", \"resourceRequirements\": {\"resourceRequirements\": [ {\"type\": \"MEMORY\",\"value\": \"500\" }, {\"type\": \"VCPU\",\"value\": \"3\" }]}}}"
}
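As a side note, Terraform's built-in jsonencode() function avoids the escaped-quote strings entirely; a sketch of the same input as in the first answer:

input = jsonencode({
  Parameters = {
    inputCommand = "commandToRun"
  }
})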
More info on the options that can be passed can be found here: https://docs.aws.amazon.com/batch/latest/userguide/batch-cwe-target.html, about halfway down the page under "Passing Event Information to an AWS Batch Target using the EventBridge Input Transformer".

How to set up a Lambda alias with the same event source mapping as the LATEST/unqualified Lambda function in Terraform

I'm trying to create a Lambda alias for my Lambda function using Terraform. I've been able to successfully create the alias, but the created alias is missing the DynamoDB stream as the trigger.
How the event source is set up:
resource "aws_lambda_event_source_mapping" "db_stream_trigger" {
batch_size = 10
event_source_arn = "${data.terraform_remote_state.testddb.table_stream_arn}"
enabled = true
function_name = "${aws_lambda_function.test_lambda.arn}"
starting_position = "LATEST"
}
How the alias is created:
resource "aws_lambda_alias" "test_lambda_alias" {
count = "${var.create_alias ? 1 : 0}"
depends_on = [ "aws_lambda_function.test_lambda" ]
name = "test_alias"
description = "alias for my test lambda"
function_name = "${aws_lambda_function.test_lambda.arn}"
function_version = "${var.current_running_version}"
routing_config = {
additional_version_weights = "${map(
"${aws_lambda_function.test_lambda.version}", "0.5"
)}"
}
}
- The Lambda works with the DynamoDB stream as a trigger.
- The alias for the Lambda is successfully created.
- The alias is using the correct version.
- The alias is using the correct weight.
- The alias is NOT using the DynamoDB stream as the event source.
I had the wrong function_name for the aws_lambda_event_source_mapping resource. I was providing it the main Lambda function's ARN as opposed to the alias's ARN. Once I switched it to the alias's ARN, I was able to successfully divide the traffic from the stream according to the weight!
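For reference, a minimal sketch of the corrected mapping, assuming the alias resource from the question (indexed with .0 because the alias uses count):

resource "aws_lambda_event_source_mapping" "db_stream_trigger" {
  batch_size        = 10
  event_source_arn  = "${data.terraform_remote_state.testddb.table_stream_arn}"
  enabled           = true
  # Point at the alias ARN instead of the bare function ARN.
  function_name     = "${aws_lambda_alias.test_lambda_alias.0.arn}"
  starting_position = "LATEST"
}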
From the AWS docs:
Simplify management of event source mappings – Instead of using Amazon Resource Names (ARNs) for Lambda function in event source mappings, you can use an alias ARN. This approach means that you don't need to update your event source mappings when you promote a new version or roll back to a previous version.
https://docs.aws.amazon.com/lambda/latest/dg/aliases-intro.html