Unknown principle in bucket policy Terraform AWS - amazon-web-services

I am learning how to automate infrastructure with Terraform. Currently I have an application load balancer and I am looking to send its logs into an S3 bucket. I have created a JSON file that specifies the policy, but when I try to apply the Terraform code I get the "Unknown principle in bucket policy" error in the title.
I've checked my AWS account number, checked the permissions of the user I am logged in as, and cannot figure out why this is happening. Below is also the code for my policy, along with the creation of the S3 bucket. Any advice would be appreciated.
Policy
{
  "Version": "2012-10-17",
  "Id": "javahome-alb-policy",
  "Statement": [
    {
      "Sid": "root-access",
      "Effect": "Allow",
      "Principle": {
        "Service": "arn:aws:iam::aws-account-id:root"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::${access_logs_bucket}/AWSLogs/aws-account-id/*"
    },
    {
      "Sid": "log-delivery",
      "Effect": "Allow",
      "Principle": {
        "Service": "delivery.logs.amazonaws.com"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::${access_logs_bucket}/AWSLogs/aws-account-id/*",
      "Condition": {
        "StringEquals": {
          "s3:x-amz-acl": "bucket-owner-full-control"
        }
      }
    },
    {
      "Sid": "log-delivery-access-check",
      "Effect": "Allow",
      "Principle": {
        "Service": "delivery.logs.amazonaws.com"
      },
      "Action": "s3:GetBucketAcl",
      "Resource": "arn:aws:s3:::${access_logs_bucket}"
    }
  ]
}
S3 Bucket
resource "aws_s3_bucket" "alb_access_logs" {
bucket = var.alb_s3_logs
policy = data.template_file.javahome.rendered
acl = "private"
region = var.region
tags = {
Name = "jalb-access-logs"
Environment = terraform.workspace
}
}
Application Load Balancer
resource "aws_lb_target_group" "javahome" {
name = var.lb_tg_name
port = var.http_port
protocol = "HTTP"
vpc_id = aws_vpc.my_app.id
}
resource "aws_lb_target_group_attachment" "javahome" {
count = var.web_ec2_count
target_group_arn = aws_lb_target_group.javahome.arn
target_id = aws_instance.web.*.id[count.index]
port = var.http_port
}
resource "aws_lb" "javahome" {
name = var.alb_name
internal = false
load_balancer_type = var.lb_type
security_groups = [aws_security_group.elb_sg.id]
subnets = local.pub_sub_ids
access_logs {
bucket = aws_s3_bucket.alb_access_logs.bucket
enabled = true
}
tags = {
Environment = terraform.workspace
}
}
resource "aws_lb_listener" "listener" {
load_balancer_arn = aws_lb.javahome.arn
port = var.http_port
protocol = "HTTP"
default_action {
type = "forward"
target_group_arn = aws_lb_target_group.javahome.arn
}
}
data "template_file" "javahome" {
template = file("scripts/iam/alb-s3-access-logs.json")
vars = {
access_logs_bucket = var.alb_s3_logs
}
}

The main problem here is the misspelled Principle; the correct key is Principal.
Also, check the documentation for the source of the logs, which is an AWS account managed directly by AWS.
Here is an example from the AWS docs:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::aws-account-id:root"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::bucket-name/prefix/*"
    }
  ]
}
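Applied to the template in the question, the first statement would then read as follows. Note both the Principal spelling and the AWS key in place of Service (the Service key is for service principals such as delivery.logs.amazonaws.com, not for IAM ARNs):

{
  "Sid": "root-access",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::aws-account-id:root"
  },
  "Action": "s3:PutObject",
  "Resource": "arn:aws:s3:::${access_logs_bucket}/AWSLogs/aws-account-id/*"
}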
https://docs.aws.amazon.com/en_us/elasticloadbalancing/latest/application/load-balancer-access-logs.html
Enable Access Logging
When you enable access logging for your load balancer, you must specify the name of the S3 bucket where the load balancer will store the logs. The bucket must be in the same Region as your load balancer, and must have a bucket policy that grants Elastic Load Balancing permission to write the access logs to the bucket. The bucket can be owned by a different account than the account that owns the load balancer.
P.S. Posting your account ID publicly is not good practice.

Related

Provision an event bridge in terraform to connect s3 to glue workflow

I am creating IaC to connect an S3 bucket on AWS to a Glue workflow using an event bus and EventBridge. I am getting errors when defining the IAM roles and policies.
Here is the code I tried.
Adding a role for Glue:
resource "aws_iam_role" "glue" {
name = "AWSGlueServiceRoleDefault"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "glue.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
Here I add an event bus to transfer the events from S3 to the Glue workflow:
resource "aws_cloudwatch_event_bus" "event_bus_…" {
name = "…._event_bus"
}
A CloudWatch log group to store events:
resource "aws_cloudwatch_log_group" "…._log_group" {
name = "tran_hist_log_group"
}
Defining a KMS key for the S3 bucket that stores the event logs:
module "kms_key" {
source = "cloudposse/kms-key/aws"
stage = var.environment
namespace = var.organization
name = "tran_hist_kms"
description = "KMS key for app"
deletion_window_in_days = 10
enable_key_rotation = true
alias = "alias/parameter_store_key"
}
Defining a CloudTrail trail to emit events from S3 and write them to the event bus. This is where I get the credentials error for CloudTrail:
resource "aws_cloudtrail" "aws_cloudtrail_for_…" {
name = "aws_cloudtrail_for_…"
s3_bucket_name = module.s3_bucket_cloud_trail_log_…..bucket_id
kms_key_id = module.kms_key.key_arn
event_selector {
read_write_type = "All"
include_management_events = true
data_resource {
type = "AWS::S3::Object"
values = ["arn:aws:s3"]
}
}
cloud_watch_logs_role_arn = "${aws_iam_role.cloud_trail.arn}"
cloud_watch_logs_group_arn = "${aws_cloudwatch_log_group…..arn}:*"
}
I tried the following role to give CloudTrail enough access, but the credentials are still insufficient and I get an error.
resource "aws_iam_role" "cloud_trail" {
name = "cloudTrail-cloudWatch-role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "cloudtrail.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
EOF
}
The error:
module.s3_1.aws_cloudtrail.aws_cloudtrail_for_trnx_hist: Still creating... [1m50s elapsed]

Error: Error creating CloudTrail: InvalidCloudWatchLogsLogGroupArnException: Access denied. Verify in IAM that the role has adequate permissions.
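One thing to check, though it may not be the whole story: the cloud_trail role above only has a trust policy, so CloudTrail can assume it but is not allowed to do anything with it. InvalidCloudWatchLogsLogGroupArnException is typically raised when the role cannot write to the log group. A minimal sketch of the missing permissions policy (aws_cloudwatch_log_group.example stands in for the elided log group resource name):

resource "aws_iam_role_policy" "cloud_trail_logs" {
  name = "cloudTrail-cloudWatch-policy"
  role = aws_iam_role.cloud_trail.id
  # Let CloudTrail create log streams and write events in the target
  # log group; the ":*" suffix covers the streams inside the group.
  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "${aws_cloudwatch_log_group.example.arn}:*"
    }
  ]
}
EOF
}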

Write AWS Lambda Logs to CloudWatch Log Group with Terraform

I am trying to write the logs of a Lambda function into a CloudWatch log group created by Terraform.
This is the lambda policy json -
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1580216411252",
      "Action": [
        "logs:CreateLogStream",
        "logs:CreateLogDelivery",
        "logs:PutLogEvents"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:logs:*:*:*"
    }
  ]
}
This is the lambda assume policy json -
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
I have added this to the lambda.tf file -
resource "aws_cloudwatch_log_group" "example" {
name = "/test/logs/${var.lambda_function_name}"
}
Although the CloudWatch log group '/test/logs/${var.lambda_function_name}' is created by Terraform, I am unable to write the Lambda function's logs to this group.
If I change the lambda policy json to this -
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1580204738067",
      "Action": "logs:*",
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
Then it automatically stores the logs under the /aws/lambda/ prefix.
How can I make sure that the Lambda logs get written to a CloudWatch log group that I create, and not to the /aws/lambda/ group created by Lambda itself?
If you want Terraform to manage the CloudWatch log group, you have to create the log group ahead of time with the exact name the Lambda function is going to use for its log group; you can't change the name at all. Then in your Terraform you need to make the log group a dependency of the Lambda function, so that Terraform has a chance to create the log group before Lambda creates it automatically.
Just adding the log group as a dependency of the Lambda is not enough, though. You also have to attach the IAM policy to the Lambda role.
The steps are as follows:
Define the IAM role for lambda:
resource "aws_iam_role" "iam_for_lambda" {
name = "iam_for_lambda"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Principal": {
"Service": "lambda.amazonaws.com"
},
"Action": "sts:AssumeRole"
}]
}
EOF
}
Define the IAM policy that allows Lambda to create log streams and put log events:
resource "aws_iam_policy" "function_logging_policy" {
name = "function-logging-policy"
policy = jsonencode({
"Version" : "2012-10-17",
"Statement" : [
{
Action : [
"logs:CreateLogStream",
"logs:PutLogEvents"
],
Effect : "Allow",
Resource : "arn:aws:logs:*:*:*"
}
]
})
}
Attach the policy to the IAM role created in step 1, by creating a new 'aws_iam_role_policy_attachment' resource:
resource "aws_iam_role_policy_attachment" "function_logging_policy_attachment" {
role = aws_iam_role.iam_for_lambda.id
policy_arn = aws_iam_policy.function_logging_policy.arn
}
Define the log group:
resource "aws_cloudwatch_log_group" "lambda_log_group" {
name = "/aws/lambda/${var.lambda.function_name}"
retention_in_days = 7
lifecycle {
prevent_destroy = false
}
}
Define your lambda function with the depends_on parameter:
resource "aws_lambda_function" "lambda_function" {
filename = "../${var.lambda.function_filename}"
function_name = "${var.lambda.function_name}"
role = aws_iam_role.iam_for_lambda.arn
handler = "${var.lambda.handler}"
layers = [aws_lambda_layer_version.lambda_layer.arn]
depends_on = [aws_cloudwatch_log_group.lambda_log_group]
source_code_hash = filebase64sha256("../${var.lambda.function_filename}")
runtime = "python3.9"
}
The IAM policy creation and attachment come from this article; the rest is from a personal project of mine that worked.

Terraform to enable vpc flow logs to amazon s3

I am working on a Terraform script to automate AWS resource creation. As part of that I am creating a VPC and trying to enable VPC flow logs for it. I have created an S3 bucket and also created an IAM role as described in the Terraform docs: https://www.terraform.io/docs/providers/aws/r/flow_log.html
My Terraform code is given below:
data "aws_s3_bucket" "selected" {
bucket = "${var.s3_bucket_name}"
}
resource "aws_flow_log" "vpc_flow_log" {
count = "${var.enable_vpc_flow_log}"
iam_role_arn = "${aws_iam_role.test_role.arn}"
log_destination = "${data.aws_s3_bucket.selected.arn}"
log_destination_type = "s3"
traffic_type = "ALL"
vpc_id = "${var.vpc_id}"
}
resource "aws_iam_role" "test_role" {
name = "example"
count = "${var.enable_vpc_flow_log}"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "vpc-flow-logs.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
EOF
}
resource "aws_iam_role_policy" "example" {
name = "example"
count = "${var.enable_vpc_flow_log}"
role = "${aws_iam_role.test_role.id}"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents",
"logs:DescribeLogGroups",
"logs:DescribeLogStreams"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
EOF
}
When I execute terraform plan I get the following errors:
Error: module.enable_vpc_flow_log.aws_flow_log.vpc_flow_log: "log_group_name": required field is not set
Error: module.enable_vpc_flow_log.aws_flow_log.vpc_flow_log: : invalid or unknown key: log_destination
Error: module.enable_vpc_flow_log.aws_flow_log.vpc_flow_log: : invalid or unknown key: log_destination_type
According to the Terraform documentation, log_group_name is optional and its value only has to be specified when cloud_watch_logs is selected as the log_destination_type.
Can anyone help me resolve this error and enable VPC flow logs to S3?
I got this error as well because I was using version 1.41 of the AWS provider. Looking through the code I discovered that support for these properties was only released in 1.42. Upgrading to 1.49 did the trick.
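If that is the culprit, pinning the provider prevents a silent regression. A minimal sketch (provider pinning only, everything else unchanged):

provider "aws" {
  # aws_flow_log's log_destination / log_destination_type arguments
  # were only released in provider 1.42; older versions reject them.
  version = ">= 1.42"
}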
I have updated my Terraform version from 0.11.8 to 0.11.10. I am now able to configure VPC flow logs to S3 without any errors using the resource block below.
resource "aws_flow_log" "vpc_flow_log" {
log_destination = "${var.s3_bucket_arn}"
log_destination_type = "s3"
traffic_type = "ALL"
vpc_id = "${var.vpc_id}"
}
When sending VPC logs to S3 you cannot set a log_group_name, but you can append a group name to the S3 ARN and it will automatically create a folder for you:
resource "aws_flow_log" "vpc_flow_log" {
log_destination = "${var.s3_bucket_arn}/group_name"
log_destination_type = "s3"
traffic_type = "ALL"
vpc_id = "${var.vpc_id}"
}

Enabling Cross Account Access to publish AWS CloudWatch logs (Multiple Accounts) to Kinesis

I have two AWS accounts, prod and prod-support. In the prod-support account I created a cross-account role and attached a policy granting access to the prod account, as shown below. I have already created the Kinesis stream in the prod-support account and it is in the active state.
Now I am trying to create a CloudWatch subscription in the prod account to redirect logs from CloudWatch Logs (prod) to Kinesis in the prod-support account.
provider "aws" {
region = "${var.aws_region}"
assume_role {
role_arn = "arn:aws:iam::111111111:role/deployment_role"
}
}
resource "aws_iam_role_policy" "tf_CWL_to_kinesis_policy" {
name = "tf_CWL_to_kinesis_policy"
role = "${aws_iam_role.tf_CWL_to_kinesis_role.id}"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "iam:PassRole",
"Resource": "arn:aws:iam::22222222:role/CrossAccountRole"
},
{
"Effect": "Allow",
"Action": [
"kinesis:DescribeStream",
"kinesis:PutRecord",
"kinesis:PutRecords",
"kinesis:GetShardIterator",
"kinesis:GetRecords"
],
"Resource": "arn:aws:kinesis:${var.aws_region}:22222222:stream/tf_CWL_to_kinesis_stream"
}
]
}EOF
}
resource "aws_iam_role" "tf_CWL_to_kinesis_role" {
name = "tf_CWL_to_kinesis_role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"AWS":"111111111"
},
"Effect": "Allow"
}
]
}EOF
}
resource "aws_cloudwatch_log_subscription_filter" "tf_CWL_to_kinesis_subscrp_filter" {
name = "tf_CWL_to_kinesis_subscrp_filter",
role_arn = "${aws_iam_role.tf_CWL_to_kinesis_role.arn}"
log_group_name = "/aws/lambda/egad-diagnostics-result-sender-lambda"
filter_pattern = ""
destination_arn = "arn:aws:kinesis:${var.aws_region}:2222222222:stream/tf_CWL_to_kinesis_stream"
depends_on = ["aws_iam_role.tf_CWL_to_kinesis_role"]
}
A couple of things to note here:
The Terraform script above executes with an assumed role, meaning that with prod-support credentials it assumes the prod deployment role to create the resources in the prod account.
While creating the CloudWatch subscription it tries to post messages to the Kinesis stream, but I am not sure which role it is using to post them.
The script is trying to access the Kinesis stream in the prod-support account, but it is not able to access it.
Any thoughts?
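Not a definitive fix, but one thing stands out: for a CloudWatch Logs subscription filter it is the CloudWatch Logs service, not your account, that assumes the role, so the trust policy normally names the logs service principal rather than an account ID. A sketch of what that would look like here (the regional principal form, which older setups required):

resource "aws_iam_role" "tf_CWL_to_kinesis_role" {
  name = "tf_CWL_to_kinesis_role"
  # CloudWatch Logs assumes this role to put records into the Kinesis
  # stream, so the service principal must be the trusted party.
  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "logs.${var.aws_region}.amazonaws.com"
      },
      "Effect": "Allow"
    }
  ]
}
EOF
}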

Terraform ELB access_log S3 access Permissions Issue

I am having issues with Terraform when trying to create an S3 bucket for my ELB access_log; I get the following error:
Error applying plan:
1 error(s) occurred:
* module.elb-author-dev.aws_elb.elb: 1 error(s) occurred:
* aws_elb.elb: Failure configuring ELB attributes: InvalidConfigurationRequest: Access Denied for bucket: my-elb-access-log. Please check S3bucket permission
status code: 409, request id: 13c63697-c016-11e7-8978-67fad50955bd
But if I go to the AWS console and manually grant public access to everyone on the S3 bucket, then re-run terraform apply, it works fine. Please help me resolve this issue.
My main.tf file
module "s3-access-logs" {
source = "../../../../modules/aws/s3"
s3_bucket_name = "my-elb-access-data"
s3_bucket_acl = "private"
s3_bucket_versioning = true
s3_bucket_region = "us-east-2"
}
# elastic load balancers (elb)
module "elb-author-dev" {
source = "../../../../modules/aws/elb"
elb_sgs = "${module.secgrp-elb-nonprod-
author.security_group_id}"
subnets = ["subnet-a7ec0cea"]
application_tier = "auth"
access_logs_enabled = true
access_logs_bucket = "my-elb-access-log"
access_logs_prefix = "dev-auth-elb-access-log"
access_logs_interval = "5"
instances = ["${module.ec2-author-dev.ec2_instance[0]}"]
}
my s3/main.tf
resource "aws_s3_bucket" "s3_data_bucket" {
bucket = "${var.s3_bucket_name}"
acl = "${var.s3_bucket_acl}" #"public"
region = "${var.s3_bucket_region}"
policy = <<EOF
{
"Id": "Policy1509573454872",
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Stmt1509573447773",
"Action": "s3:PutObject",
"Effect": "Allow",
"Resource": "arn:aws:s3:::my-elb-access-log/dev-auth-elb/AWSLogs/my_account_id/*",
"Principal": {
"AWS": [
"033677994240"
]
}
}
]
}
EOF
versioning {
enabled = "${var.s3_bucket_versioning}" #true
}
tags {
Name = "${var.s3_bucket_name}"
Terraform = "${var.terraform_tag}"
}
}
My elb.main.tf
access_logs {
  enabled       = "${var.access_logs_enabled}" # false
  bucket        = "${var.access_logs_bucket}"
  bucket_prefix = "${var.environment_name}-${var.application_tier}-${var.access_logs_prefix}"
  interval      = "${var.access_logs_interval}" # 60
}
AWS Bucket Permissions
You need to grant access to the ELB principal. Each region has a different principal.
Region, ELB Account Principal ID
us-east-1, 127311923021
us-east-2, 033677994240
us-west-1, 027434742980
us-west-2, 797873946194
ca-central-1, 985666609251
eu-west-1, 156460612806
eu-central-1, 054676820928
eu-west-2, 652711504416
ap-northeast-1, 582318560864
ap-northeast-2, 600734575887
ap-southeast-1, 114774131450
ap-southeast-2, 783225319266
ap-south-1, 718504428378
sa-east-1, 507241528517
us-gov-west-1*, 048591011584
cn-north-1*, 638102146993
* These regions require a separate account.
source: AWS access logging bucket permissions
Terraform
In Terraform your resource config should look like the example below. You will need your AWS account ID and the principal ID from the table above:
resource "aws_s3_bucket" "s3_data_bucket" {
bucket = "${var.s3_bucket_name}"
acl = "${var.s3_bucket_acl}"
region = "${var.s3_bucket_region}"
policy =<<EOF
{
"Id": "Policy1509573454872",
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Stmt1509573447773",
"Action": "s3:PutObject",
"Effect": "Allow",
"Resource": "arn:aws:s3:::my-elb-access-data/dev-auth-elb/AWSLogs/your-account-id/*",
"Principal": {
"AWS": ["principal_id_from_table_above"]
}
}
]
}
EOF
}
You may need to split the policy out separately rather than keeping it inline as above, in which case you'd add a bucket policy resource like this:
resource "aws_s3_bucket_policy" "elb_access_logs" {
bucket = "${aws_s3_bucket.s3_data_bucket.id}"
policy =<<EOF
{
"Id": "Policy1509573454872",
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Stmt1509573447773",
"Action": "s3:PutObject",
"Effect": "Allow",
"Resource": "arn:aws:s3:::my-elb-access-data/dev-auth-elb/AWSLogs/your-account-id/*",
"Principal": {
"AWS": ["principal_id_from_table_above"]
}
}
]
}
EOF
}
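If you would rather not hard-code a principal ID from the table, the AWS provider also has an aws_elb_service_account data source that resolves the ELB account for the provider's region. A sketch against the same bucket (only the Principal changes):

data "aws_elb_service_account" "main" {}

resource "aws_s3_bucket_policy" "elb_access_logs" {
  bucket = "${aws_s3_bucket.s3_data_bucket.id}"
  # The data source's arn attribute is the regional ELB account's ARN,
  # replacing the hard-coded ID from the table above.
  policy = <<EOF
{
  "Id": "Policy1509573454872",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1509573447773",
      "Action": "s3:PutObject",
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::my-elb-access-data/dev-auth-elb/AWSLogs/your-account-id/*",
      "Principal": {
        "AWS": ["${data.aws_elb_service_account.main.arn}"]
      }
    }
  ]
}
EOF
}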