CloudTrail using Terraform - amazon-web-services

I'm creating a CloudTrail trail using Terraform. The problem is that my source bucket keeps changing every 3 months, so now I want to pass a dynamic S3 bucket value to the field_selector.
I'm doing something like this:
resource "aws_cloudtrail" "test" {
  name                          = "test_trail"
  s3_bucket_name                = bucket.id
  enable_logging                = true
  include_global_service_events = true
  is_multi_region_trail         = true
  enable_log_file_validation    = true

  advanced_event_selector {
    name = "Log download event data"

    field_selector {
      field  = "eventCategory"
      equals = ["Data"]
    }
    field_selector {
      field  = "resources.type"
      equals = ["AWS::S3::Object"]
    }
    field_selector {
      field  = "eventName"
      equals = ["GetObject"]
    }
    field_selector {
      field       = "resources.ARN"
      starts_with = ["aws_s3_bucket.sftp_file_upload_bucket.arn"]
    }
  }
}
Here I'm giving the ARN, but logs are not getting created this way; if I hard-code the bucket name, they are created.

When you want to log the object events for a bucket, the bucket's ARN alone is not enough. As the AWS CLI documentation states [1]:
For example, if resources.type equals AWS::S3::Object , the ARN must be in one of the following formats. To log all data events for all objects in a specific S3 bucket, use the StartsWith operator, and include only the bucket ARN as the matching value. The trailing slash is intentional; do not exclude it.
So in your case you would have to fix the last field selector to:
field_selector {
  field       = "resources.ARN"
  starts_with = ["${aws_s3_bucket.sftp_file_upload_bucket.arn}/"]
}
[1] https://awscli.amazonaws.com/v2/documentation/api/latest/reference/cloudtrail/put-event-selectors.html#id11

When using an attribute of a resource, you should reference it either inside an interpolation:
"${aws_s3_bucket.sftp_file_upload_bucket.arn}"
or, without quotes, as a direct reference:
aws_s3_bucket.sftp_file_upload_bucket.arn
So the correct version would be:
field_selector {
  field       = "resources.ARN"
  starts_with = [aws_s3_bucket.sftp_file_upload_bucket.arn]
}

Related

Add environment based Multiple Notification Channel to GCP Alert Policy with Terraform Lookup

I'm trying to add multiple notification channels to a GCP Alert policy with terraform.
My issue is that I need to add different notification channels based on the production environment where they are deployed.
As long as I keep the notification channel unique, I can easily deploy in the following way.
Here is my variables.tf file:
locals {
  notification_channel = {
    DEV = "projects/[PROJECT_ID]/notificationChannels/[CHANNEL_ID]"
    PRD = "projects/[PROJECT_ID]/notificationChannels/[CHANNEL_ID]"
  }
}
Here is my main.tf file:
resource "google_monitoring_alert_policy" "alert_policy" {
  display_name = "My Alert Policy"
  combiner     = "OR"

  conditions {
    display_name = "test condition"
    condition_threshold {
      filter     = "metric.type=\"compute.googleapis.com/instance/disk/write_bytes_count\" AND resource.type=\"gce_instance\""
      duration   = "60s"
      comparison = "COMPARISON_GT"
      aggregations {
        alignment_period   = "60s"
        per_series_aligner = "ALIGN_RATE"
      }
    }
  }

  user_labels = {
    foo = "bar"
  }

  notification_channels = [lookup(local.notification_channel, terraform.workspace)]
}
My issue here happens when I try to map multiple notification channels instead of one per environment.
Something like:
locals {
notification_channel = {
DEV = ["projects/[PROJECT_ID]/notificationChannels/[CHANNEL_ID]", "projects/[PROJECT_ID]/notificationChannels/[CHANNEL_ID]" ]...
}
}
However, if I try it this way, Terraform tells me: Inappropriate value for attribute "notification_channels": element 0: string.
Here's the relevant documentation: the Terraform lookup function and the GCP Alert Policy resource.
Could you help?
If I understood your question, you actually only need to remove the square brackets:
notification_channels = lookup(local.notification_channel, terraform.workspace)
Since each value in the notification_channel map is already a list, you only need lookup to fetch the list for the workspace you are currently in.
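A minimal sketch of the corrected setup, with the project and channel IDs kept as placeholders:

```hcl
locals {
  # Each environment maps to a list of channels, so no extra
  # brackets are needed where the value is consumed.
  notification_channel = {
    DEV = [
      "projects/[PROJECT_ID]/notificationChannels/[CHANNEL_ID]",
      "projects/[PROJECT_ID]/notificationChannels/[CHANNEL_ID]",
    ]
    PRD = [
      "projects/[PROJECT_ID]/notificationChannels/[CHANNEL_ID]",
    ]
  }
}

resource "google_monitoring_alert_policy" "alert_policy" {
  display_name = "My Alert Policy"
  combiner     = "OR"
  # conditions block omitted for brevity

  # lookup returns the whole list for the active workspace.
  notification_channels = lookup(local.notification_channel, terraform.workspace)
}
```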

GCP Alerting Policy to Alert on KMS Key Deletion Using Terraform

I am trying to alert on KMS Key deletions using terraform.
I have a log based metric, a policy and a notification channel to PagerDuty.
This all works; however, after the alert triggers it soon clears, and there seems to be nothing I can do to stop this.
Here is my code:
resource "google_logging_metric" "logging_metric" {
  name        = "kms-key-pending-deletion"
  description = "Logging metric used to alert on scheduled deletions of KMS keys"
  filter      = "resource.type=cloudkms_cryptokeyversion AND protoPayload.methodName=DestroyCryptoKeyVersion"

  metric_descriptor {
    metric_kind  = "DELTA"
    value_type   = "INT64"
    unit         = "1"
    display_name = "kms-key-pending-deletion-metric-descriptor"
  }
}

resource "google_monitoring_notification_channel" "pagerduty_alerts" {
  display_name = "pagerduty-notification-channel"
  type         = "pagerduty"
  sensitive_labels {
    service_key = var.token
  }
}

resource "google_monitoring_alert_policy" "kms_key_deletion_alert_policy" {
  display_name          = "kms-key-deletion-alert-policy"
  combiner              = "OR"
  notification_channels = [google_monitoring_notification_channel.pagerduty_alerts.name]

  conditions {
    display_name = "kms-key-deletion-alert-policy-conditions"
    condition_threshold {
      comparison      = "COMPARISON_GT"
      duration        = "300s"
      filter          = "metric.type=\"logging.googleapis.com/user/kms-key-pending-deletion\" AND resource.type=\"global\""
      threshold_value = "0"
    }
  }

  documentation {
    content = "Runbook: https://blah"
  }
}
In the GCP GUI I can disable the option "Notify on incident closure" in the policy, and that stops the alert from clearing.
However, I cannot set this via Terraform.
I have tried setting alert_strategy.auto_close to null and "0s", but this did not work:
alert_strategy {
  auto_close = "0s"
  # auto_close = null
}
How do I keep the alert active and stop it from clearing when building the policy in Terraform?
Am I using the correct resource type? Should I instead somehow be watching for keys whose cloudkms.cryptoKey.state is "DESTROY_SCHEDULED"?
For others wanting to find the answer to this:
The ability to keep an alert open and prevent it from closing automatically is missing from the API.
The issue is tracked here: https://issuetracker.google.com/issues/151052441?pli=1

I have created a CloudTrail trail using Terraform, but it says I'm missing an S3 bucket policy after deployment

Here is my CloudTrail code. I don't know how to create the S3 bucket policy for this. Can you please help me with the access policy that I need?
resource "aws_cloudtrail" "download_log_trail" {
  name                          = "download_log_trail"
  s3_bucket_name                = sample
  enable_logging                = true
  include_global_service_events = true
  is_multi_region_trail         = true
  enable_log_file_validation    = true

  advanced_event_selector {
    name = "Log download event data for individual S3 bucket objects"

    field_selector {
      field  = "eventCategory"
      equals = ["Data"]
    }
    field_selector {
      field  = "resources.type"
      equals = ["AWS::S3::Object"]
    }
    field_selector {
      field  = "eventName"
      equals = ["GetObject"]
    }
    field_selector {
      field       = "resources.ARN"
      starts_with = [""]
    }
  }
}
The bucket policy for CloudTrail is described in the AWS docs:
Amazon S3 bucket policy for CloudTrail
In Terraform you can use aws_s3_bucket_policy to create the policy for the bucket associated with the trail.
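A sketch of that policy, assuming the trail's bucket is declared as aws_s3_bucket.sample (a hypothetical resource name); the two statements mirror the AclCheck and Write statements from the AWS docs page:

```hcl
data "aws_caller_identity" "current" {}

resource "aws_s3_bucket_policy" "cloudtrail" {
  bucket = aws_s3_bucket.sample.id # hypothetical bucket resource

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        # CloudTrail checks the bucket ACL before writing
        Sid       = "AWSCloudTrailAclCheck"
        Effect    = "Allow"
        Principal = { Service = "cloudtrail.amazonaws.com" }
        Action    = "s3:GetBucketAcl"
        Resource  = aws_s3_bucket.sample.arn
      },
      {
        # CloudTrail writes log files under AWSLogs/<account-id>/
        Sid       = "AWSCloudTrailWrite"
        Effect    = "Allow"
        Principal = { Service = "cloudtrail.amazonaws.com" }
        Action    = "s3:PutObject"
        Resource  = "${aws_s3_bucket.sample.arn}/AWSLogs/${data.aws_caller_identity.current.account_id}/*"
        Condition = {
          StringEquals = { "s3:x-amz-acl" = "bucket-owner-full-control" }
        }
      }
    ]
  })
}
```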

Combine Each.Value with String text?

Working on an AWS SFTP solution with a custom IDP. I have this S3 object block, which is intended to create a folder in S3:
resource "aws_s3_bucket_object" "home_directory" {
  for_each = var.idp_users
  bucket   = aws_s3_bucket.s3.id
  key      = each.value["HomeDirectory"]
}
And this map variable input for idp_users:
idp_users = {
  secret01 = {
    Password      = "password",
    HomeDirectory = "test-directory-1",
    Role          = "arn:aws:iam::XXXXXXXXXXXX:role/custom_idp_sftp_role",
  },
  secret02 = {
    Password      = "password",
    HomeDirectory = "test-directory-2",
    Role          = "arn:aws:iam::XXXXXXXXXXXX:role/custom_idp_sftp_role",
  }
}
What I need is simply to add a "/" to the end of the HomeDirectory value in the aws_s3_bucket_object block, which will create a folder with that name in the S3 bucket. I know it could just be typed into the variable, but in the spirit of automation I want Terraform to append it and save us the hassle. I've monkeyed around with join and concat but can't figure out how to simply add a "/" to the end of the HomeDirectory value in the S3 object block. Can anyone provide some insight?
You can do that using string templating:
resource "aws_s3_bucket_object" "home_directory" {
  for_each = var.idp_users
  bucket   = aws_s3_bucket.s3.id
  key      = "${each.value["HomeDirectory"]}/"
}
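If you prefer a function call over interpolation, Terraform's built-in format() function does the same thing:

```hcl
resource "aws_s3_bucket_object" "home_directory" {
  for_each = var.idp_users
  bucket   = aws_s3_bucket.s3.id
  # format appends the trailing slash to each HomeDirectory value
  key      = format("%s/", each.value["HomeDirectory"])
}
```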

How to use same tag multiple time on aws inspector resource group

I want to create a resource group in AWS Inspector with Terraform that has a few tags with the key "Name" and different values. I can do this in the AWS GUI, but I want to do it in Terraform as well. If I do it like in the example below, the second value just overrides the first:
resource "aws_inspector_resource_group" "bar" {
  tags = {
    Name = "Master"
    Name = "UF"
  }
}
Could you please try the following:
resource "aws_inspector_resource_group" "bar" {
  tags = {
    Name = ["Master", "UF"]
  }
}
If that doesn't work, you could just use the aws_ec2_tag resource instead:
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ec2_tag
That should do the job.
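A minimal sketch of aws_ec2_tag. Note that AWS tag keys are unique per resource, so two different "Name" values have to go on two different resources; aws_instance.example here is a hypothetical instance:

```hcl
resource "aws_ec2_tag" "name" {
  resource_id = aws_instance.example.id # hypothetical EC2 instance
  key         = "Name"
  value       = "Master"
}
```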