I am working on a Terraform script to automate AWS resource creation. As part of that, I am creating a VPC and trying to enable VPC Flow Logs for it. I have created an S3 bucket and an IAM role as described in the Terraform docs: https://www.terraform.io/docs/providers/aws/r/flow_log.html
My Terraform code is given below:
data "aws_s3_bucket" "selected" {
  bucket = "${var.s3_bucket_name}"
}

resource "aws_flow_log" "vpc_flow_log" {
  count                = "${var.enable_vpc_flow_log}"
  iam_role_arn         = "${aws_iam_role.test_role.arn}"
  log_destination      = "${data.aws_s3_bucket.selected.arn}"
  log_destination_type = "s3"
  traffic_type         = "ALL"
  vpc_id               = "${var.vpc_id}"
}
resource "aws_iam_role" "test_role" {
  name  = "example"
  count = "${var.enable_vpc_flow_log}"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": "vpc-flow-logs.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
}
resource "aws_iam_role_policy" "example" {
  name  = "example"
  count = "${var.enable_vpc_flow_log}"
  role  = "${aws_iam_role.test_role.id}"

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "logs:DescribeLogGroups",
        "logs:DescribeLogStreams"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
EOF
}
When I execute terraform plan I am getting the following errors:
Error: module.enable_vpc_flow_log.aws_flow_log.vpc_flow_log: "log_group_name": required field is not set
Error: module.enable_vpc_flow_log.aws_flow_log.vpc_flow_log: : invalid or unknown key: log_destination
Error: module.enable_vpc_flow_log.aws_flow_log.vpc_flow_log: : invalid or unknown key: log_destination_type
According to the Terraform documentation, log_group_name is optional and only needs to be set when cloud-watch-logs is selected as the log_destination_type.
Can anyone help me resolve this error and enable the VPC Flow Logs to S3?
I got this error as well because I was using version 1.41 of the AWS provider. Looking through the code, I discovered that support for these attributes was only released in 1.42. Upgrading to 1.49 did the trick.
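If you want Terraform to fail fast instead of silently running an older provider, you can pin a minimum AWS provider version. This is a sketch; the ~> 1.49 constraint below is just an example, adjust it to your needs:

```hcl
# Require an AWS provider version that supports log_destination
# and log_destination_type on aws_flow_log (added in 1.42).
provider "aws" {
  version = "~> 1.49"
}
```

With this in place, terraform init will refuse to use a provider older than 1.49 rather than failing later at plan time with "invalid or unknown key" errors.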
I have updated my Terraform version from 0.11.8 to 0.11.10. I am now able to configure the VPC Flow Logs to S3 without any errors using the resource block below.
resource "aws_flow_log" "vpc_flow_log" {
  log_destination      = "${var.s3_bucket_arn}"
  log_destination_type = "s3"
  traffic_type         = "ALL"
  vpc_id               = "${var.vpc_id}"
}
While sending VPC Flow Logs to S3 you cannot set a log_group_name, but you can append a group name to the ARN of the S3 bucket; it will automatically create a folder for you.
resource "aws_flow_log" "vpc_flow_log" {
  log_destination      = "${var.s3_bucket_arn}/group_name"
  log_destination_type = "s3"
  traffic_type         = "ALL"
  vpc_id               = "${var.vpc_id}"
}
I am creating IaC to connect an S3 bucket on AWS to a Glue workflow using an event bus and EventBridge. I am getting an error when defining the IAM roles and policies.
Here is the code I tried.
Adding a role for Glue:
resource "aws_iam_role" "glue" {
  name = "AWSGlueServiceRoleDefault"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "glue.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}
Here, adding an event bus to transfer the events from S3 to the Glue workflow:
resource "aws_cloudwatch_event_bus" "event_bus_…" {
  name = "…._event_bus"
}
A CloudWatch log group to store events:
resource "aws_cloudwatch_log_group" "…._log_group" {
  name = "tran_hist_log_group"
}
Defining a KMS key for S3 to store event logs:
module "kms_key" {
  source                  = "cloudposse/kms-key/aws"
  stage                   = var.environment
  namespace               = var.organization
  name                    = "tran_hist_kms"
  description             = "KMS key for app"
  deletion_window_in_days = 10
  enable_key_rotation     = true
  alias                   = "alias/parameter_store_key"
}
Defining a CloudTrail trail to emit events from S3 and write them to the event bus. Here I get a credentials error for CloudTrail.
resource "aws_cloudtrail" "aws_cloudtrail_for_…" {
  name           = "aws_cloudtrail_for_…"
  s3_bucket_name = module.s3_bucket_cloud_trail_log_…..bucket_id
  kms_key_id     = module.kms_key.key_arn

  event_selector {
    read_write_type           = "All"
    include_management_events = true

    data_resource {
      type   = "AWS::S3::Object"
      values = ["arn:aws:s3"]
    }
  }

  cloud_watch_logs_role_arn  = "${aws_iam_role.cloud_trail.arn}"
  cloud_watch_logs_group_arn = "${aws_cloudwatch_log_group…..arn}:*"
}
I tried the following role to provide enough access to CloudTrail, but the credentials are still not sufficient and I get an error.
resource "aws_iam_role" "cloud_trail" {
  name = "cloudTrail-cloudWatch-role"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "cloudtrail.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
}
The error:
module.s3_1.aws_cloudtrail.aws_cloudtrail_for_trnx_hist: Still creating... [1m50s elapsed]
Error: Error creating CloudTrail: InvalidCloudWatchLogsLogGroupArnException: Access denied. Verify in IAM that the role has adequate permissions.
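A likely cause, offered as a sketch rather than a verified fix: the cloud_trail role above only carries a trust policy and no permissions policy, while CloudTrail needs permission to create log streams and put events into the target log group. A minimal inline policy might look like the following; the log group name is taken from the tran_hist_log_group resource above, and the wildcarded account/region in the ARN is an assumption you should narrow down:

```hcl
# Hypothetical permissions policy for the cloudTrail-cloudWatch-role above.
# CloudTrail must be able to create log streams and put events
# into the CloudWatch Logs group it delivers to.
resource "aws_iam_role_policy" "cloud_trail_to_cloudwatch" {
  name = "cloudtrail-cloudwatch-logs"
  role = "${aws_iam_role.cloud_trail.id}"

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:log-group:tran_hist_log_group:*"
    }
  ]
}
EOF
}
```

This mirrors the policy AWS documents for CloudTrail-to-CloudWatch Logs delivery; the InvalidCloudWatchLogsLogGroupArnException in the error message is the symptom of the role lacking exactly these actions.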
I am learning how to automate infrastructure with Terraform. Currently I have an Application Load Balancer and I am looking to send its logs to an S3 bucket. I have a JSON file that specifies the policy, but when I try to apply the Terraform code, I am presented with the following error:
I've checked my AWS account number, checked the permissions of the user I am logged in as, and cannot figure out why this is happening. Below is the code for my policy along with the creation of the S3 buckets. Any advice would be appreciated.
Policy
{
  "Version": "2012-10-17",
  "Id": "javahome-alb-policy",
  "Statement": [
    {
      "Sid": "root-access",
      "Effect": "Allow",
      "Principle": {
        "Service": "arn:aws:iam::aws-account-id:root"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::${access_logs_bucket}/AWSLogs/aws-account-id/*"
    },
    {
      "Sid": "log-delivery",
      "Effect": "Allow",
      "Principle": {
        "Service": "delivery.logs.amazonaws.com"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::${access_logs_bucket}/AWSLogs/aws-account-id/*",
      "Condition": {
        "StringEquals": {
          "s3:x-amz-acl": "bucket-owner-full-control"
        }
      }
    },
    {
      "Sid": "log-delivery-access-check",
      "Effect": "Allow",
      "Principle": {
        "Service": "delivery.logs.amazonaws.com"
      },
      "Action": "s3:GetBucketAcl",
      "Resource": "arn:aws:s3:::${access_logs_bucket}"
    }
  ]
}
S3 Bucket
resource "aws_s3_bucket" "alb_access_logs" {
  bucket = var.alb_s3_logs
  policy = data.template_file.javahome.rendered
  acl    = "private"
  region = var.region

  tags = {
    Name        = "jalb-access-logs"
    Environment = terraform.workspace
  }
}
Application Load Balancer
resource "aws_lb_target_group" "javahome" {
  name     = var.lb_tg_name
  port     = var.http_port
  protocol = "HTTP"
  vpc_id   = aws_vpc.my_app.id
}

resource "aws_lb_target_group_attachment" "javahome" {
  count            = var.web_ec2_count
  target_group_arn = aws_lb_target_group.javahome.arn
  target_id        = aws_instance.web.*.id[count.index]
  port             = var.http_port
}

resource "aws_lb" "javahome" {
  name               = var.alb_name
  internal           = false
  load_balancer_type = var.lb_type
  security_groups    = [aws_security_group.elb_sg.id]
  subnets            = local.pub_sub_ids

  access_logs {
    bucket  = aws_s3_bucket.alb_access_logs.bucket
    enabled = true
  }

  tags = {
    Environment = terraform.workspace
  }
}

resource "aws_lb_listener" "listener" {
  load_balancer_arn = aws_lb.javahome.arn
  port              = var.http_port
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.javahome.arn
  }
}

data "template_file" "javahome" {
  template = file("scripts/iam/alb-s3-access-logs.json")

  vars = {
    access_logs_bucket = var.alb_s3_logs
  }
}
The main problem here is the misspelled Principle; the correct key is Principal.
Also, check the documentation for the source of the logs, which is an AWS account managed directly by AWS.
Here an example from AWS Docs:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::aws-account-id:root"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::bucket-name/prefix/*"
    }
  ]
}
https://docs.aws.amazon.com/en_us/elasticloadbalancing/latest/application/load-balancer-access-logs.html
Enable Access Logging
When you enable access logging for your load balancer, you must specify the name of the S3 bucket where the load balancer will store the logs. The bucket must be in the same Region as your load balancer, and must have a bucket policy that grants Elastic Load Balancing permission to write the access logs to the bucket. The bucket can be owned by a different account than the account that owns the load balancer.
P.S.: posting your account ID is not good practice.
I am learning Terraform. I am trying to create a new Lambda function, and I realized that I also need to create an IAM role. So I am trying to do both using Terraform, but it does not allow me to create the role.
This is my Terraform file
provider "aws" {
  profile = "default"
  region  = "eu-west-1"
}

data "aws_iam_policy" "AWSLambdaBasicExecutionRole" {
  arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}

resource "aws_iam_role" "terraform_function_role" {
  name               = "terraform_function_role"
  assume_role_policy = "${data.aws_iam_policy.AWSLambdaBasicExecutionRole.policy}"
}

resource "aws_lambda_function" "terraform_function" {
  filename         = "terraform_function.zip"
  function_name    = "terraform_function"
  handler          = "index.handler"
  role             = "${aws_iam_role.terraform_function_role.id}"
  runtime          = "nodejs8.10"
  source_code_hash = "${filebase64sha256("terraform_function.zip")}"
}
This is the error that I am getting:
Error creating IAM Role terraform_function_role: MalformedPolicyDocument: Has prohibited field Resource
status code: 400
How do I fix this?
An IAM role's trust relationship (or assume role policy) defines which resource or service can assume the role. In it, we don't define the Resource field, so we can't attach ordinary IAM policies or use such a policy as-is. The correct format for a trust relationship is:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
In this scenario, all Lambda functions in your account can assume this role.
You can refer to this AWS link for more examples.
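As an optional hardening sketch (the function ARN below is made up for illustration), you can narrow that trust with a condition so the role can only be assumed on behalf of one specific function:

```hcl
# Hypothetical example: restrict the trust policy so only the named
# Lambda function (by ARN) can cause this role to be assumed.
data "aws_iam_policy_document" "scoped_lambda_trust" {
  statement {
    actions = ["sts:AssumeRole"]
    effect  = "Allow"

    principals {
      type        = "Service"
      identifiers = ["lambda.amazonaws.com"]
    }

    condition {
      test     = "ArnLike"
      variable = "aws:SourceArn"
      values   = ["arn:aws:lambda:eu-west-1:123456789012:function:terraform_function"]
    }
  }
}
```

This is the confused-deputy mitigation AWS recommends for service roles; swap in your own region, account ID, and function name.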
Edit: based on @ydaetskcoR's comment, here's a working example:
provider "aws" {
  profile = "default"
  region  = "eu-west-1"
}

data "aws_iam_policy_document" "AWSLambdaTrustPolicy" {
  statement {
    actions = ["sts:AssumeRole"]
    effect  = "Allow"

    principals {
      type        = "Service"
      identifiers = ["lambda.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "terraform_function_role" {
  name               = "terraform_function_role"
  assume_role_policy = "${data.aws_iam_policy_document.AWSLambdaTrustPolicy.json}"
}

resource "aws_iam_role_policy_attachment" "terraform_lambda_policy" {
  role       = "${aws_iam_role.terraform_function_role.name}"
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}

resource "aws_lambda_function" "terraform_function" {
  filename         = "terraform_function.zip"
  function_name    = "terraform_function"
  handler          = "index.handler"
  role             = "${aws_iam_role.terraform_function_role.arn}"
  runtime          = "nodejs8.10"
  source_code_hash = "${filebase64sha256("terraform_function.zip")}"
}
The changes from your code include the following:
Added an aws_iam_policy_document data source for the assume role permissions
Changed the aws_iam_role resource to use the above-mentioned policy document
Created an aws_iam_role_policy_attachment to attach the AWSLambdaBasicExecutionRole policy (which enables logging to CloudWatch)
Updated the aws_lambda_function resource to use the IAM role's ARN instead of its ID, because the Lambda function needs the ARN
Since you are still in the learning phase, I suggest you move to Terraform 0.12 instead, so you can use things like templatefile and therefore don't need to create data objects.
One other thing: always apply the Principle of Least Privilege when creating policies, meaning your resource (the Lambda, in this case) only has access to what it needs. For now that's only CloudWatch, but in a real-world scenario this will very likely not be the case.
Back to your question: here's how you can create an IAM role, an IAM policy, and finally an IAM policy attachment (the bridge between the policy and the role), as well as the assume role policy (the trust relationship between the role and the service that is going to use it). I have extracted it all into templates so it's easier to maintain later on. The gist (for an easier read on the eyes) can be found here.
# iam_role
resource "aws_iam_role" "iam_role" {
  name               = "iam-role"
  assume_role_policy = templatefile("${path.module}/templates/lambda-base-policy.tpl", {})
}

# content of lambda-base-policy.tpl
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}

# iam_policy
resource "aws_iam_policy" "policy" {
  name   = "policy"
  policy = templatefile("${path.module}/templates/cloudwatch-policy.tpl", {})
}

# content of cloudwatch-policy.tpl
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "*"
    }
  ]
}

# iam_policy_attachment
resource "aws_iam_policy_attachment" "policy_attachment" {
  name       = "attachment"
  roles      = ["${aws_iam_role.iam_role.name}"]
  policy_arn = "${aws_iam_policy.policy.arn}"
}
As mentioned in the comment, you have to create the role with its assume role policy and then attach your newly created policy to it. Here is a complete working example.
# Role with its assume role (trust) policy
resource "aws_iam_role" "role" {
  name = "test-alb-logs-to-elk"
  path = "/"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}

# Policy for the IAM role (S3 and log access)
resource "aws_iam_policy" "policy" {
  name        = "my-test-policy"
  description = "A test policy"

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:Get*",
        "s3:List*"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "logs:*"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    }
  ]
}
EOF
}

# Attach the newly created policy to the IAM role
resource "aws_iam_role_policy_attachment" "test-attach" {
  role       = "${aws_iam_role.role.name}"
  policy_arn = "${aws_iam_policy.policy.arn}"
}

# The AWS Lambda function: memory size, Node.js runtime, handler, and role
resource "aws_lambda_function" "elb_logs_to_elasticsearch" {
  function_name = "mytest-alb-logs-to-elk"
  description   = "elb-logs-to-elasticsearch"
  memory_size   = 1024
  filename      = "terraform_function.zip"
  runtime       = "nodejs8.10"
  role          = "${aws_iam_role.role.arn}"
  handler       = "index.handler"
}
I am having issues with Terraform when trying to create an S3 bucket for my ELB access logs. I get the following error:
Error applying plan:
1 error(s) occurred:
* module.elb-author-dev.aws_elb.elb: 1 error(s) occurred:
* aws_elb.elb: Failure configuring ELB attributes: InvalidConfigurationRequest: Access Denied for bucket: my-elb-access-log. Please check S3bucket permission
status code: 409, request id: 13c63697-c016-11e7-8978-67fad50955bd
But if I go to the AWS console and manually grant public access to everyone on the S3 bucket, then re-run terraform apply, it works fine. Please help me resolve this issue.
My main.tf file
module "s3-access-logs" {
  source               = "../../../../modules/aws/s3"
  s3_bucket_name       = "my-elb-access-data"
  s3_bucket_acl        = "private"
  s3_bucket_versioning = true
  s3_bucket_region     = "us-east-2"
}

# elastic load balancers (elb)
module "elb-author-dev" {
  source               = "../../../../modules/aws/elb"
  elb_sgs              = "${module.secgrp-elb-nonprod-author.security_group_id}"
  subnets              = ["subnet-a7ec0cea"]
  application_tier     = "auth"
  access_logs_enabled  = true
  access_logs_bucket   = "my-elb-access-log"
  access_logs_prefix   = "dev-auth-elb-access-log"
  access_logs_interval = "5"
  instances            = ["${module.ec2-author-dev.ec2_instance[0]}"]
}
my s3/main.tf
resource "aws_s3_bucket" "s3_data_bucket" {
  bucket = "${var.s3_bucket_name}"
  acl    = "${var.s3_bucket_acl}" # "public"
  region = "${var.s3_bucket_region}"

  policy = <<EOF
{
  "Id": "Policy1509573454872",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1509573447773",
      "Action": "s3:PutObject",
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::my-elb-access-log/dev-auth-elb/AWSLogs/my_account_id/*",
      "Principal": {
        "AWS": [
          "033677994240"
        ]
      }
    }
  ]
}
EOF

  versioning {
    enabled = "${var.s3_bucket_versioning}" # true
  }

  tags {
    Name      = "${var.s3_bucket_name}"
    Terraform = "${var.terraform_tag}"
  }
}
My elb.main.tf
access_logs {
  enabled       = "${var.access_logs_enabled}" # false
  bucket        = "${var.access_logs_bucket}"
  bucket_prefix = "${var.environment_name}-${var.application_tier}-${var.access_logs_prefix}"
  interval      = "${var.access_logs_interval}" # 60
}
AWS Bucket Permissions
You need to grant access to the ELB principal. Each region has a different principal.

Region           ELB Account Principal ID
us-east-1        127311923021
us-east-2        033677994240
us-west-1        027434742980
us-west-2        797873946194
ca-central-1     985666609251
eu-west-1        156460612806
eu-central-1     054676820928
eu-west-2        652711504416
ap-northeast-1   582318560864
ap-northeast-2   600734575887
ap-southeast-1   114774131450
ap-southeast-2   783225319266
ap-south-1       718504428378
sa-east-1        507241528517
us-gov-west-1*   048591011584
cn-north-1*      638102146993

* These regions require a separate account.
source: AWS access logging bucket permissions
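As a Terraform-native alternative to the table above, the aws_elb_service_account data source resolves the ELB account for the provider's configured region, so you don't have to hardcode a principal ID. A sketch, assuming the s3_data_bucket resource from the question:

```hcl
# Look up the ELB service account for the current region,
# so the bucket policy does not hardcode a principal ID.
data "aws_elb_service_account" "main" {}

resource "aws_s3_bucket_policy" "elb_logs" {
  bucket = "${aws_s3_bucket.s3_data_bucket.id}"

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "s3:PutObject",
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::my-elb-access-data/dev-auth-elb/AWSLogs/your-account-id/*",
      "Principal": {
        "AWS": ["${data.aws_elb_service_account.main.arn}"]
      }
    }
  ]
}
EOF
}
```

This keeps the configuration portable across regions, since the data source returns the correct account automatically.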
Terraform
In Terraform your resource config should look like the example below. You will need your AWS account ID and the principal ID from the table above:
resource "aws_s3_bucket" "s3_data_bucket" {
  bucket = "${var.s3_bucket_name}"
  acl    = "${var.s3_bucket_acl}"
  region = "${var.s3_bucket_region}"

  policy = <<EOF
{
  "Id": "Policy1509573454872",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1509573447773",
      "Action": "s3:PutObject",
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::my-elb-access-data/dev-auth-elb/AWSLogs/your-account-id/*",
      "Principal": {
        "AWS": ["principal_id_from_table_above"]
      }
    }
  ]
}
EOF
}
You may need to split the policy out separately rather than keeping it inline as above. In which case you'd need to add a bucket policy resource like this:
resource "aws_s3_bucket_policy" "elb_access_logs" {
  bucket = "${aws_s3_bucket.s3_data_bucket.id}"

  policy = <<EOF
{
  "Id": "Policy1509573454872",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1509573447773",
      "Action": "s3:PutObject",
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::my-elb-access-data/dev-auth-elb/AWSLogs/your-account-id/*",
      "Principal": {
        "AWS": ["principal_id_from_table_above"]
      }
    }
  ]
}
EOF
}
The goal here is to create scheduled snapshots of EBS volumes. Looking at Terraform's documentation for aws_cloudwatch_event_target, it doesn't seem possible, but I could be missing something.
CloudWatch Events built-in targets just seem to require an input parameter as well as an ARN, as shown for adding a message to an SNS queue in the example for aws_cloudwatch_event_rule, or for sending to a Kinesis stream in aws_cloudwatch_event_target.
So we should just be able to do something like this:
resource "aws_cloudwatch_event_target" "ebs_vol_a" {
  target_id = "ebs_vol_a"
  rule      = "${aws_cloudwatch_event_rule.snap_ebs.name}"
  arn       = "arn:aws:automation:${var.region}:${var.account_id}:action/EBSCreateSnapshot/EBSCreateSnapshot_ebs_vol_a"
  input     = "\"arn:aws:ec2:${var.region}:${var.account_id}:volume/vol-${var.ebs_vol_a_id}\""
}

resource "aws_cloudwatch_event_rule" "snap_ebs" {
  name                = "snap-ebs-volumes"
  description         = "Snapshot EBS volumes"
  schedule_expression = "rate(6 hours)"
}
I haven't yet tested this, but it should work. Obviously you probably want to get the EBS volume IDs from the resources that created them, but that's beyond the scope of the question. I've also guessed at the ARN after creating a rule in the AWS console and looking at the output of aws events list-targets-by-rule, where it seems to append the rule name to the ARN of the target, but that may not always be true/necessary.
I was able to get this working entirely in Terraform by tweaking the response provided by D Swartz. I had to modify the aws_cloudwatch_event_target resource in a few ways:
The arn field needs to point to the target/create-snapshot event instead of the action/EBSCreateSnapshot automation action.
The input field needs to be set to the desired volume's ID rather than its ARN.
The role_arn needs to be set to the ARN of the aws_iam_role that will be running the event.
The updated aws_cloudwatch_event_target resource looks like this:
resource "aws_cloudwatch_event_target" "example_event_target" {
  target_id = "example"
  rule      = "${aws_cloudwatch_event_rule.snapshot_example.name}"
  arn       = "arn:aws:events:${var.aws_region}:${var.account_id}:target/create-snapshot"
  input     = "${jsonencode("${aws_ebs_volume.example.id}")}"
  role_arn  = "${aws_iam_role.snapshot_permissions.arn}"
}
Full code snippet below:
resource "aws_cloudwatch_event_rule" "snapshot_example" {
  name                = "example-snapshot-volumes"
  description         = "Snapshot EBS volumes"
  schedule_expression = "rate(24 hours)"
}

resource "aws_cloudwatch_event_target" "example_event_target" {
  target_id = "example"
  rule      = "${aws_cloudwatch_event_rule.snapshot_example.name}"
  arn       = "arn:aws:events:${var.aws_region}:${var.account_id}:target/create-snapshot"
  input     = "${jsonencode("${aws_ebs_volume.example.id}")}"
  role_arn  = "${aws_iam_role.snapshot_permissions.arn}"
}

resource "aws_iam_role" "snapshot_permissions" {
  name = "example"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "automation.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}

resource "aws_iam_policy" "snapshot_policy" {
  name        = "example-snapshot-policy"
  description = "grant ebs snapshot permissions to cloudwatch event rule"

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:Describe*",
        "ec2:RebootInstances",
        "ec2:StopInstances",
        "ec2:TerminateInstances",
        "ec2:CreateSnapshot"
      ],
      "Resource": "*"
    }
  ]
}
EOF
}

resource "aws_iam_role_policy_attachment" "snapshot_policy_attach" {
  role       = "${aws_iam_role.snapshot_permissions.name}"
  policy_arn = "${aws_iam_policy.snapshot_policy.arn}"
}
Edit: I'm not entirely clear whether this is directly supported by AWS or not; based on their documentation, it seems this may not be an intended feature:
Creating rules with built-in targets is supported only in the AWS Management Console.
The previous answer was enough to get everything except for the IAM permissions on the event targets (i.e. go into the console, edit the rule, and in "Step 2", under the "AWS Permissions" section, create a new role, etc.). To get this working in Terraform, I just added a few resources:
resource "aws_cloudwatch_event_rule" "snapshot_example" {
  name                = "example-snapshot-volumes"
  description         = "Snapshot EBS volumes"
  schedule_expression = "rate(24 hours)"
}

resource "aws_cloudwatch_event_target" "example_event_target" {
  target_id = "example"
  rule      = "${aws_cloudwatch_event_rule.snapshot_example.name}"
  arn       = "arn:aws:automation:${var.aws_region}:${var.account_id}:action/EBSCreateSnapshot/EBSCreateSnapshot_example-snapshot-volumes"
  input     = "${jsonencode("arn:aws:ec2:${var.aws_region}:${var.account_id}:volume/${aws_ebs_volume.example.id}")}"
}

resource "aws_iam_role" "snapshot_permissions" {
  name = "example"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "automation.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}

resource "aws_iam_policy" "snapshot_policy" {
  name        = "example-snapshot-policy"
  description = "grant ebs snapshot permissions to cloudwatch event rule"

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:Describe*",
        "ec2:RebootInstances",
        "ec2:StopInstances",
        "ec2:TerminateInstances",
        "ec2:CreateSnapshot"
      ],
      "Resource": "*"
    }
  ]
}
EOF
}

resource "aws_iam_role_policy_attachment" "snapshot_policy_attach" {
  role       = "${aws_iam_role.snapshot_permissions.name}"
  policy_arn = "${aws_iam_policy.snapshot_policy.arn}"
}
I came up with this solution, which is compatible with both CloudWatch Events rules and Amazon EventBridge:
resource "aws_cloudwatch_event_rule" "volume_snapshot_rule" {
  name                = "ebs-volume-snapshot"
  description         = "Create an EBS volume snapshot every 6 hours"
  schedule_expression = "rate(6 hours)"
}

resource "aws_cloudwatch_event_target" "volume_snapshot_target" {
  target_id = "ebs-volume-snapshot-target"
  rule      = aws_cloudwatch_event_rule.volume_snapshot_rule.name
  arn       = "arn:aws:events:eu-central-1:${data.aws_caller_identity.current.account_id}:target/create-snapshot"
  role_arn  = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:role/create-ebs-snapshot"
  input     = "\"${aws_ebs_volume.storage.id}\""
}

data "aws_caller_identity" "current" {}
And for the IAM role
resource "aws_iam_role" "create_ebs_snapshot" {
  name = "create-ebs-snapshot"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Sid    = ""
        Principal = {
          Service = "events.amazonaws.com"
        }
      },
    ]
  })

  inline_policy {
    name = "policy"

    policy = jsonencode({
      Version = "2012-10-17"
      Statement = [
        {
          Action   = ["ec2:CreateSnapshot"]
          Effect   = "Allow"
          Resource = "*"
        },
      ]
    })
  }
}