I am having issues with Terraform: when I try to create an S3 bucket for my ELB access logs, I get the following error:
Error applying plan:
1 error(s) occurred:
* module.elb-author-dev.aws_elb.elb: 1 error(s) occurred:
* aws_elb.elb: Failure configuring ELB attributes: InvalidConfigurationRequest: Access Denied for bucket: my-elb-access-log. Please check S3bucket permission
status code: 409, request id: 13c63697-c016-11e7-8978-67fad50955bd
But if I go to the AWS console and manually grant public access to everyone on my S3 bucket, then re-run terraform apply, it works fine. Please help me resolve this issue.
My main.tf file:
module "s3-access-logs" {
source = "../../../../modules/aws/s3"
s3_bucket_name = "my-elb-access-data"
s3_bucket_acl = "private"
s3_bucket_versioning = true
s3_bucket_region = "us-east-2"
}
# elastic load balancers (elb)
module "elb-author-dev" {
  source               = "../../../../modules/aws/elb"
  elb_sgs              = "${module.secgrp-elb-nonprod-author.security_group_id}"
  subnets              = ["subnet-a7ec0cea"]
  application_tier     = "auth"
  access_logs_enabled  = true
  access_logs_bucket   = "my-elb-access-log"
  access_logs_prefix   = "dev-auth-elb-access-log"
  access_logs_interval = "5"
  instances            = ["${module.ec2-author-dev.ec2_instance[0]}"]
}
My s3/main.tf:
resource "aws_s3_bucket" "s3_data_bucket" {
bucket = "${var.s3_bucket_name}"
acl = "${var.s3_bucket_acl}" #"public"
region = "${var.s3_bucket_region}"
policy = <<EOF
{
"Id": "Policy1509573454872",
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Stmt1509573447773",
"Action": "s3:PutObject",
"Effect": "Allow",
"Resource": "arn:aws:s3:::my-elb-access-log/dev-auth-elb/AWSLogs/my_account_id/*",
"Principal": {
"AWS": [
"033677994240"
]
}
}
]
}
EOF
versioning {
enabled = "${var.s3_bucket_versioning}" #true
}
tags {
Name = "${var.s3_bucket_name}"
Terraform = "${var.terraform_tag}"
}
}
My elb/main.tf:
access_logs {
  enabled       = "${var.access_logs_enabled}" # false
  bucket        = "${var.access_logs_bucket}"
  bucket_prefix = "${var.environment_name}-${var.application_tier}-${var.access_logs_prefix}"
  interval      = "${var.access_logs_interval}" # 60
}
AWS Bucket Permissions
You need to grant access to the ELB principal. Each region has a different principal.
Region, ELB Account Principal ID
us-east-1, 127311923021
us-east-2, 033677994240
us-west-1, 027434742980
us-west-2, 797873946194
ca-central-1, 985666609251
eu-west-1, 156460612806
eu-central-1, 054676820928
eu-west-2, 652711504416
ap-northeast-1, 582318560864
ap-northeast-2, 600734575887
ap-southeast-1, 114774131450
ap-southeast-2, 783225319266
ap-south-1, 718504428378
sa-east-1, 507241528517
us-gov-west-1*, 048591011584
cn-north-1*, 638102146993
* These regions require a separate account.
source: AWS access logging bucket permissions
Terraform
In Terraform, your resource config should look like the example below. You will need your AWS account ID and the principal ID from the table above:
resource "aws_s3_bucket" "s3_data_bucket" {
bucket = "${var.s3_bucket_name}"
acl = "${var.s3_bucket_acl}"
region = "${var.s3_bucket_region}"
policy =<<EOF
{
"Id": "Policy1509573454872",
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Stmt1509573447773",
"Action": "s3:PutObject",
"Effect": "Allow",
"Resource": "arn:aws:s3:::my-elb-access-data/dev-auth-elb/AWSLogs/your-account-id/*",
"Principal": {
"AWS": ["principal_id_from_table_above"]
}
}
]
}
EOF
}
You may need to split the policy out separately rather than keeping it inline as above, in which case you'd add a bucket policy resource like this:
resource "aws_s3_bucket_policy" "elb_access_logs" {
bucket = "${aws_s3_bucket.s3_data_bucket.id}"
policy =<<EOF
{
"Id": "Policy1509573454872",
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Stmt1509573447773",
"Action": "s3:PutObject",
"Effect": "Allow",
"Resource": "arn:aws:s3:::my-elb-access-data/dev-auth-elb/AWSLogs/your-account-id/*",
"Principal": {
"AWS": ["principal_id_from_table_above"]
}
}
]
}
EOF
}
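As an alternative to hard-coding the principal ID from the table, the AWS provider also has an aws_elb_service_account data source that resolves the correct ELB account for the provider's region. A minimal sketch of the same bucket policy using it (the data source name "main" is my own choice):

# Looks up the ELB service account ARN for the current region.
data "aws_elb_service_account" "main" {}

resource "aws_s3_bucket_policy" "elb_access_logs" {
  bucket = "${aws_s3_bucket.s3_data_bucket.id}"
  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "s3:PutObject",
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::my-elb-access-data/dev-auth-elb/AWSLogs/your-account-id/*",
      "Principal": {
        "AWS": ["${data.aws_elb_service_account.main.arn}"]
      }
    }
  ]
}
EOF
}

Because the principal is looked up rather than copied from the table, the policy stays correct if the bucket is ever moved to another region.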
Related
I always get the following error:
Invalid operation: Not authorized to get credentials of role arn:aws:iam::xxxxx:role/default_glue_role
I simply want to load a JSON file from S3 into a Redshift cluster. It is not clear to me what role I have to attach (to Redshift?).
I have tried attaching the following IAM policy to Redshift:
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Action": "sts:AssumeRole",
    "Resource": "arn:aws:iam::xxxxx:role/default_glue_role"
  }
}
I have also tried with "Resource": "*", but I always get the same error.
Thanks for the help!
I had a long chat with AWS support about this same issue. A few things to check:
Your S3 bucket region is the same as your Redshift cluster region.
You are not signed in as the root AWS user; you need to create a user with the correct permissions and sign in as that user to run your queries.
You should add the following permissions to your user and Redshift policies:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:*",
        "redshift:*",
        "sqlworkbench:*",
        "sts:*",
        "secretsmanager:*",
        "s3-object-lambda:*",
        "ec2:*",
        "sns:*",
        "cloudwatch:*",
        "tag:*",
        "redshift-data:*",
        "redshift-serverless:*"
      ],
      "Resource": "*"
    }
  ]
}
You should have the following trust relationships in your Redshift and user role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "s3.amazonaws.com",
          "redshift.amazonaws.com",
          "iam.amazonaws.com",
          "redshift-serverless.amazonaws.com"
        ]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
The actual set of permissions you need might be smaller, but this is what worked for me. It took me a long time to figure this out! I hope it helps.
It looks like you might also need to add permissions for Glue.
The redshift-serverless permission might tell you it's causing an error, but you should be able to save it anyway (AWS told me to do this).
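If you manage the role in Terraform, a hedged sketch of attaching the AWS-managed Glue service policy (aws_iam_role.redshift_role is a placeholder for whatever your role resource is actually called):

# Attach the AWS-managed Glue service policy to an existing role.
# "redshift_role" is a hypothetical resource name; substitute your own.
resource "aws_iam_role_policy_attachment" "redshift_glue" {
  role       = aws_iam_role.redshift_role.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSGlueServiceRole"
}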
For everyone using Terraform:
What fixed it for me was suggestion (4) from #patrick-ward:
data "aws_iam_policy_document" "dms_assume_role" {
statement {
actions = ["sts:AssumeRole"]
principals {
identifiers = [
"s3.amazonaws.com",
"redshift.amazonaws.com",
"iam.amazonaws.com",
"redshift-serverless.amazonaws.com",
"dms.amazonaws.com"
]
type = "Service"
}
}
}
resource "aws_iam_role" "dms-access-for-endpoint" {
assume_role_policy = data.aws_iam_policy_document.dms_assume_role.json
name = "dms-access-for-endpoint"
}
resource "aws_iam_role_policy_attachment" "dms-access-for-endpoint-AmazonDMSRedshiftS3Role" {
policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonDMSRedshiftS3Role"
role = aws_iam_role.dms-access-for-endpoint.name
}
resource "aws_iam_role" "dms-cloudwatch-logs-role" {
assume_role_policy = data.aws_iam_policy_document.dms_assume_role.json
name = "dms-cloudwatch-logs-role"
}
resource "aws_iam_role_policy_attachment" "dms-cloudwatch-logs-role-AmazonDMSCloudWatchLogsRole" {
policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonDMSCloudWatchLogsRole"
role = aws_iam_role.dms-cloudwatch-logs-role.name
}
resource "aws_iam_role" "dms-vpc-role" {
assume_role_policy = data.aws_iam_policy_document.dms_assume_role.json
name = "dms-vpc-role"
}
resource "aws_iam_role_policy_attachment" "dms-vpc-role-AmazonDMSVPCManagementRole" {
policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonDMSVPCManagementRole"
role = aws_iam_role.dms-vpc-role.name
}
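One note on the snippet above: DMS looks several of these roles up by their well-known names (dms-vpc-role, dms-cloudwatch-logs-role, dms-access-for-endpoint), so keep the name arguments exactly as written rather than renaming them.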
I have a few S3 buckets being used in the dev, test, and prod environments, and I am planning to create a role which will allow me to access those S3 buckets, with an attached policy (i.e. to list and perform some other operations on the buckets present in the different environments: dev, test, prod).
I have written this for that:
####################################
## Role Per Service - GO-Service ##
####################################
resource "aws_iam_policy" "go-service" {
name = "go-service-policy"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"s3:ListBucket"
],
"Effect": "Allow",
"Resource": "arn:aws:s3:::orbital-go-download-*"
}
]
}
EOF
}
resource "aws_iam_role" "go-service" {
name = "go-service-role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Action": "sts:AssumeRole",
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::orbital-go-download-*"
],
"Principal": {
"Service": "s3.amazonaws.com"
}
}
]
}
EOF
}
resource "aws_iam_role_policy_attachment" "go-services-attach" {
role = aws_iam_role.go-service.name
policy_arn = aws_iam_policy.go-service.arn
}
But with this, the problem is that I might get three roles created (since we are creating one for each of the three k8s clusters: dev, test, and prod), and I need to have one common role which can access all the S3 buckets needed by the biz service, i.e. the biz-go-download-{dev,test,prod} buckets.
Any help on how I would achieve this requirement? I am new to Terraform, so any suggestions would be highly appreciated.
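One possible shape for this, as a hedged sketch rather than a definitive answer: declare the role and its policy once, outside the per-cluster module, and enumerate all of the environment buckets in a single policy document. The names below reuse the orbital-go-download-* pattern from the snippet above and are otherwise illustrative:

# A single shared policy covering the dev, test and prod buckets.
resource "aws_iam_policy" "go-service-shared" {
  name   = "go-service-policy-shared"
  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": ["s3:ListBucket"],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::orbital-go-download-dev",
        "arn:aws:s3:::orbital-go-download-test",
        "arn:aws:s3:::orbital-go-download-prod"
      ]
    }
  ]
}
EOF
}

Each cluster's configuration would then reference the shared role (for example via a data "aws_iam_role" lookup) instead of creating its own copy.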
I am learning how to automate infrastructure with Terraform. Currently I have an application load balancer, and I am looking to send logs from it into an S3 bucket. I have a JSON file that specifies the policy, but when I try to apply the Terraform code, I am presented with the following error:
I've checked my AWS account number, checked the permissions of the user I am logged in as, and cannot figure out why this is happening. Below is also the code for my policy, along with the creation of the S3 bucket. Any advice would be appreciated.
Policy
{
  "Version": "2012-10-17",
  "Id": "javahome-alb-policy",
  "Statement": [
    {
      "Sid": "root-access",
      "Effect": "Allow",
      "Principle": {
        "Service": "arn:aws:iam::aws-account-id:root"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::${access_logs_bucket}/AWSLogs/aws-account-id/*"
    },
    {
      "Sid": "log-delivery",
      "Effect": "Allow",
      "Principle": {
        "Service": "delivery.logs.amazonaws.com"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::${access_logs_bucket}/AWSLogs/aws-account-id/*",
      "Condition": {
        "StringEquals": {
          "s3:x-amz-acl": "bucket-owner-full-control"
        }
      }
    },
    {
      "Sid": "log-delivery-access-check",
      "Effect": "Allow",
      "Principle": {
        "Service": "delivery.logs.amazonaws.com"
      },
      "Action": "s3:GetBucketAcl",
      "Resource": "arn:aws:s3:::${access_logs_bucket}"
    }
  ]
}
S3 Bucket
resource "aws_s3_bucket" "alb_access_logs" {
bucket = var.alb_s3_logs
policy = data.template_file.javahome.rendered
acl = "private"
region = var.region
tags = {
Name = "jalb-access-logs"
Environment = terraform.workspace
}
}
Application Load Balancer
resource "aws_lb_target_group" "javahome" {
name = var.lb_tg_name
port = var.http_port
protocol = "HTTP"
vpc_id = aws_vpc.my_app.id
}
resource "aws_lb_target_group_attachment" "javahome" {
count = var.web_ec2_count
target_group_arn = aws_lb_target_group.javahome.arn
target_id = aws_instance.web.*.id[count.index]
port = var.http_port
}
resource "aws_lb" "javahome" {
name = var.alb_name
internal = false
load_balancer_type = var.lb_type
security_groups = [aws_security_group.elb_sg.id]
subnets = local.pub_sub_ids
access_logs {
bucket = aws_s3_bucket.alb_access_logs.bucket
enabled = true
}
tags = {
Environment = terraform.workspace
}
}
resource "aws_lb_listener" "listener" {
load_balancer_arn = aws_lb.javahome.arn
port = var.http_port
protocol = "HTTP"
default_action {
type = "forward"
target_group_arn = aws_lb_target_group.javahome.arn
}
}
data "template_file" "javahome" {
template = file("scripts/iam/alb-s3-access-logs.json")
vars = {
access_logs_bucket = var.alb_s3_logs
}
}
The main problem here is the misspelled Principle; the correct syntax is Principal.
Also, check the documentation for the source of the logs, which is an AWS account directly managed by AWS.
Here is an example from the AWS docs:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::aws-account-id:root"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::bucket-name/prefix/*"
    }
  ]
}
https://docs.aws.amazon.com/en_us/elasticloadbalancing/latest/application/load-balancer-access-logs.html
Enable Access Logging
When you enable access logging for your load balancer, you must specify the name of the S3 bucket where the load balancer will store the logs. The bucket must be in the same Region as your load balancer, and must have a bucket policy that grants Elastic Load Balancing permission to write the access logs to the bucket. The bucket can be owned by a different account than the account that owns the load balancer.
P.S.: posting your account ID is not good practice.
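On the account-ID side specifically: rather than hard-coding it into the JSON template, you could feed it in from the aws_caller_identity data source. A sketch, assuming you add an ${aws_account_id} placeholder to scripts/iam/alb-s3-access-logs.json:

# Resolves the account ID of the credentials Terraform is running with.
data "aws_caller_identity" "current" {}

data "template_file" "javahome" {
  template = file("scripts/iam/alb-s3-access-logs.json")

  vars = {
    access_logs_bucket = var.alb_s3_logs
    aws_account_id     = data.aws_caller_identity.current.account_id
  }
}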
I am trying to create the following IAM Role with a Policy. The Role is attached to a Lambda.
resource "aws_lambda_function" "lambda" {
function_name = "test"
s3_bucket = "${aws_s3_bucket.deployment_bucket.id}"
s3_key = "${var.deployment_key}"
handler = "${var.function_handler}"
runtime = "${var.lambda_runtimes[var.desired_runtime]}"
role = "${aws_iam_role.lambda_role.arn}"
}
resource "aws_iam_role" "lambda_role" {
name = "test-role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "lambda.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
resource "aws_iam_role_policy" "lambda_policy" {
name = test-policy"
role = "${aws_iam_role.lambda_role.id}"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"xray:PutTelemetryRecords",
"xray:PutTraceSegments",
"logs:CreateLogGroup",
"logs:PutLogEvents"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
EOF
}
I run terraform apply from an EC2 instance that has an IAM Role attached to it. The IAM Role has AdministratorAccess and can deploy VPCs and EC2 instances with Terraform without any issue. When I try to create the IAM Role and Policy above, though, it fails with an InvalidClientTokenId error.
aws_iam_role.lambda_role: Error creating IAM Role test-role: InvalidClientTokenId: The security token included in the request is invalid
I then generated a set of access key credentials and hard-coded them, and it still failed. Is there something special I need to do when creating an IAM Role? Any other terraform apply commands I run from this machine work fine until I need to create an IAM Role.
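One way to see which credentials Terraform is actually calling AWS with is the aws_caller_identity data source; a small diagnostic sketch, not a fix:

# Prints the ARN of the identity behind the current provider credentials,
# which helps spot stale or unexpected tokens.
data "aws_caller_identity" "whoami" {}

output "caller_arn" {
  value = "${data.aws_caller_identity.whoami.arn}"
}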
I have two AWS accounts, prod and prod-support. In the prod-support account I created a cross-account role and attached a policy to grant access to the prod account, as below. I have already created the Kinesis stream in the prod-support account, and it is in an active state.
Now I am trying to create a CloudWatch subscription in the prod account to redirect logs from CloudWatch Logs (prod) -> Kinesis (prod-support).
provider "aws" {
region = "${var.aws_region}"
assume_role {
role_arn = "arn:aws:iam::111111111:role/deployment_role"
}
}
resource "aws_iam_role_policy" "tf_CWL_to_kinesis_policy" {
name = "tf_CWL_to_kinesis_policy"
role = "${aws_iam_role.tf_CWL_to_kinesis_role.id}"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "iam:PassRole",
"Resource": "arn:aws:iam::22222222:role/CrossAccountRole"
},
{
"Effect": "Allow",
"Action": [
"kinesis:DescribeStream",
"kinesis:PutRecord",
"kinesis:PutRecords",
"kinesis:GetShardIterator",
"kinesis:GetRecords"
],
"Resource": "arn:aws:kinesis:${var.aws_region}:22222222:stream/tf_CWL_to_kinesis_stream"
}
]
}EOF
}
resource "aws_iam_role" "tf_CWL_to_kinesis_role" {
name = "tf_CWL_to_kinesis_role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"AWS":"111111111"
},
"Effect": "Allow"
}
]
}EOF
}
resource "aws_cloudwatch_log_subscription_filter" "tf_CWL_to_kinesis_subscrp_filter" {
name = "tf_CWL_to_kinesis_subscrp_filter",
role_arn = "${aws_iam_role.tf_CWL_to_kinesis_role.arn}"
log_group_name = "/aws/lambda/egad-diagnostics-result-sender-lambda"
filter_pattern = ""
destination_arn = "arn:aws:kinesis:${var.aws_region}:2222222222:stream/tf_CWL_to_kinesis_stream"
depends_on = ["aws_iam_role.tf_CWL_to_kinesis_role"]
}
A couple of things to notice here:
The above Terraform script executes under an assumed role, which means that, using prod-support credentials, it assumes the prod deployment role to create the resources in the prod account.
While creating the CloudWatch subscription, it tries to post messages to the Kinesis stream, but I am not sure which role it uses to post them.
The script is trying to access the Kinesis stream in the prod-support account, but it is not able to access it.
Any thoughts?
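For what it's worth, cross-account CloudWatch Logs subscriptions normally go through a log destination created in the recipient account, rather than pointing the subscription filter straight at the stream ARN. A hedged sketch of that missing piece, to run in the prod-support account (the delivery role name is an assumption):

# Created in the recipient (prod-support) account; fronts the Kinesis stream.
# The role must be assumable by the CloudWatch Logs service and allowed to
# put records on the stream; "cwl_to_kinesis_delivery" is a hypothetical name.
resource "aws_cloudwatch_log_destination" "to_kinesis" {
  name       = "tf_CWL_to_kinesis_destination"
  role_arn   = "${aws_iam_role.cwl_to_kinesis_delivery.arn}"
  target_arn = "arn:aws:kinesis:${var.aws_region}:22222222:stream/tf_CWL_to_kinesis_stream"
}

The subscription filter in the prod account would then use this destination's ARN as its destination_arn, and the destination itself needs an aws_cloudwatch_log_destination_policy that allows the prod account to subscribe.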