I tried to use newer_noncurrent_versions in an S3 Lifecycle configuration. Support for it was released in version 4.3.0 of the Terraform AWS provider.
However, when applying on Terraform Cloud, I get an error saying the element can only be used in Lifecycle V2.
Is the problem in my code, or in the Terraform provider?
Terraform CLI and Terraform AWS Provider Version
Terraform v1.1.5
on darwin_amd64
Terraform Configuration Files
resource "aws_s3_bucket_lifecycle_configuration" "s3" {
  bucket = "aws-test-bucket"

  rule {
    id     = "rule"
    status = "Enabled"

    noncurrent_version_expiration {
      noncurrent_days           = 1
      newer_noncurrent_versions = 2
    }
  }
}
Actual Behavior
When I run terraform plan locally, the resource appears to plan just fine.
  + resource "aws_s3_bucket_lifecycle_configuration" "s3" {
      + bucket = (known after apply)
      + id     = (known after apply)

      + rule {
          + id     = "rule"
          + status = "Enabled"

          + noncurrent_version_expiration {
              + newer_noncurrent_versions = 2
              + noncurrent_days           = 1
            }
        }
    }
However, when applying in Terraform Cloud, the following error occurs.
Error: error creating S3 Lifecycle Configuration for bucket (aws-test-bucket): InvalidRequest:
NewerNoncurrentVersions element can only be used in Lifecycle V2.
status code: 400, with aws_s3_bucket_lifecycle_configuration.s3
on s3.tf line 66, in resource "aws_s3_bucket_lifecycle_configuration" "s3":
You are missing a filter block in the rule:
resource "aws_s3_bucket_lifecycle_configuration" "s3" {
  bucket = "aws-test-bucket"

  rule {
    id     = "rule"
    status = "Enabled"

    filter {}

    noncurrent_version_expiration {
      noncurrent_days           = 1
      newer_noncurrent_versions = 2
    }
  }
}
Related
I'm using Terraform to provision an ELB and want to enable access logs for the ELB in an S3 bucket. When I try to apply the resources, I get the error below - InvalidConfiguration: Access Denied for bucket:
Below are my TF resources with the S3 bucket policy created using the IAM Policy Document.
resource "aws_lb" "this" {
  name               = var.name
  load_balancer_type = "application"

  access_logs {
    bucket  = aws_s3_bucket.this.bucket
    prefix  = var.name
    enabled = true
  }
}

resource "aws_s3_bucket" "this" {
  bucket        = "${var.bucket_name}"
  acl           = "log-delivery-write"
  force_destroy = true
}

resource "aws_s3_bucket_policy" "this" {
  bucket = "aws_s3_bucket.this.id"
  policy = "${data.aws_iam_policy_document.s3_bucket_lb_write.json}"
}
data "aws_iam_policy_document" "s3_bucket_lb_write" {
  policy_id = "s3_bucket_lb_logs"

  statement {
    actions = [
      "s3:PutObject",
    ]
    effect = "Allow"
    resources = [
      "${aws_s3_bucket.this.arn}/*",
    ]

    principals {
      identifiers = ["${data.aws_elb_service_account.main.arn}"]
      type        = "AWS"
    }
  }

  statement {
    actions = [
      "s3:PutObject",
    ]
    effect    = "Allow"
    resources = ["${aws_s3_bucket.this.arn}/*"]

    principals {
      identifiers = ["delivery.logs.amazonaws.com"]
      type        = "Service"
    }
  }

  statement {
    actions = [
      "s3:GetBucketAcl",
    ]
    effect    = "Allow"
    resources = ["${aws_s3_bucket.this.arn}"]

    principals {
      identifiers = ["delivery.logs.amazonaws.com"]
      type        = "Service"
    }
  }
}

output "bucket_name" {
  value = "${aws_s3_bucket.this.bucket}"
}
I get the following errors:
Error: Error putting S3 policy: NoSuchBucket: The specified bucket does not exist
status code: 404, request id: 5932CFE816059A8D, host id: j5ZBQ2ptHXivx+fu7ai5jbM8PSQR2tCFo4IAvcLkuocxk8rn/r0TG/6YbfRloBFR2WSy8UE7K8Q=
Error: Failure configuring LB attributes: InvalidConfigurationRequest: Access Denied for bucket: test-logs-bucket-xyz. Please check S3bucket permission
status code: 400, request id: ee101cc2-5518-42c8-9542-90dd7bb05e3c
terraform version
Terraform v0.12.23
provider.aws v3.6.0
There is a mistake in:
resource "aws_s3_bucket_policy" "this" {
  bucket = "aws_s3_bucket.this.id"
  policy = "${data.aws_iam_policy_document.s3_bucket_lb_write.json}"
}
it should be:
resource "aws_s3_bucket_policy" "this" {
  bucket = aws_s3_bucket.this.id
  policy = data.aws_iam_policy_document.s3_bucket_lb_write.json
}
The original version (bucket = "aws_s3_bucket.this.id") will just try to look for a bucket literally called "aws_s3_bucket.this.id".
I am trying to create S3 bucket policies and attach them to S3 buckets with Terraform. Terraform is throwing BucketRegionError and AccessDenied errors. It says the bucket I am trying to attach the policy to is not in the specified region, even though it is deployed in that region. Any advice on how I can attach this policy would be helpful. Below are the errors and how I am creating the buckets, the bucket policy, and how I am attaching it. Thanks!
resource "aws_s3_bucket" "dest_buckets" {
  provider = aws.dest
  for_each = toset(var.s3_bucket_names)

  bucket        = "${each.value}-replica"
  acl           = "private"
  force_destroy = "true"

  versioning {
    enabled = true
  }
}

resource "aws_s3_bucket_policy" "dest_policy" {
  provider = aws.dest
  for_each = aws_s3_bucket.dest_buckets

  bucket = each.key
  policy = data.aws_iam_policy_document.dest_policy.json
}
data "aws_iam_policy_document" "dest_policy" {
  statement {
    actions = [
      "s3:GetBucketVersioning",
      "s3:PutBucketVersioning",
    ]
    resources = [
      for bucket in aws_s3_bucket.dest_buckets : bucket.arn
    ]

    principals {
      type = "AWS"
      identifiers = [
        "arn:aws:iam::${data.aws_caller_identity.source.account_id}:role/${var.replication_role}"
      ]
    }
  }

  statement {
    actions = [
      "s3:ReplicateObject",
      "s3:ReplicateDelete",
    ]
    resources = [
      for bucket in aws_s3_bucket.dest_buckets : "${bucket.arn}/*"
    ]
  }
}
Errors:
Error: Error putting S3 policy: AccessDenied: Access Denied
status code: 403, request id: 7F17A032D84DE672, host id: EjX+cDYt57caooCIlGX9wPf5s8B2JlXqAZpG8mA5KZtuw/4varoutQfxlkC/9JstdMdjN8EYBtg=
on main.tf line 36, in resource "aws_s3_bucket_policy" "dest_policy":
36: resource "aws_s3_bucket_policy" "dest_policy" {
Error: Error putting S3 policy: BucketRegionError: incorrect region, the bucket is not in 'us-east-2' region at endpoint ''
status code: 301, request id: , host id:
on main.tf line 36, in resource "aws_s3_bucket_policy" "dest_policy":
36: resource "aws_s3_bucket_policy" "dest_policy" {
The buckets create with no issue, I'm just having issues with attaching this policy.
UPDATE:
Below is the provider block for aws.dest, the variables I have defined, and also my .aws/config file.
provider "aws" {
  alias   = "dest"
  profile = var.dest_profile
  region  = var.dest_region
}

variable "dest_region" {
  default = "us-east-2"
}

variable "dest_profile" {
  default = "replica"
}

[profile replica]
region = us-east-2
output = json
I managed to run your configuration and noticed some issues:
In your policy, the second statement is missing a principals block.
statement {
  actions = [
    "s3:ReplicateObject",
    "s3:ReplicateDelete",
  ]
  resources = [
    for bucket in aws_s3_bucket.dest_buckets : "${bucket.arn}/*"
  ]
}
This block is creating the buckets correctly (with -replica at the end):
resource "aws_s3_bucket" "dest_buckets" {
  provider = aws.dest
  for_each = toset(var.s3_bucket_names)

  bucket        = "${each.value}-replica"
  acl           = "private"
  force_destroy = "true"

  versioning {
    enabled = true
  }
}
However, after enabling debug logging, I noticed that in the resource below each.key refers to the bucket name without the -replica suffix, which is why I was receiving a 404.
resource "aws_s3_bucket_policy" "dest_policy" {
  provider = aws.dest
  for_each = aws_s3_bucket.dest_buckets

  bucket = each.key
  policy = data.aws_iam_policy_document.dest_policy.json
}
Changing it to the same pattern as the bucket creation made it work:
resource "aws_s3_bucket_policy" "dest_policy" {
  provider = aws.dest
  for_each = aws_s3_bucket.dest_buckets

  bucket = "${each.key}-replica"
  policy = data.aws_iam_policy_document.dest_policy.json
}
Regarding the 403, it may be a lack of permissions for the user creating this resource.
Let me know if this helps you.
I believe you need to add provider = aws.dest to your data "aws_iam_policy_document" "dest_policy" data object.
The provider directive also works with data objects.
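As a sketch of that change, reusing the aws.dest alias from the question above:

data "aws_iam_policy_document" "dest_policy" {
  # Evaluate this data source through the destination-account provider
  provider = aws.dest

  # ... statements unchanged ...
}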
I am trying to create an S3 bucket using the following terraform code:
provider.tf
provider "aws" {
  access_key = "XX"
  secret_key = "YY"
  region     = "us-east-2"
}
main.tf
resource "aws_s3" "bucket" {
  bucket = "terraform-s3-bucket"
  acl    = "private"

  tags = {
    Name        = "My Bucket"
    Environment = "Test"
  }
}
However when I run terraform apply on the above code, I get this error:
Error: Invalid resource type
on main.tf line 1, in resource "aws_s3" "bucket":
1: resource "aws_s3" "bucket" {
The provider provider.aws does not support resource type "aws_s3".
What am I doing wrong?
There is no resource named aws_s3. The resource you are looking for is aws_s3_bucket.
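Applying that fix, the main.tf from the question would become:

resource "aws_s3_bucket" "bucket" {
  bucket = "terraform-s3-bucket"
  acl    = "private"

  tags = {
    Name        = "My Bucket"
    Environment = "Test"
  }
}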
I'm trying to use terraform to create an IAM role in AWS China Ningxia region.
Here's my folder structure
.
├── main.tf
└── variables.tf
Here's the content of main.tf
provider "aws" {
  access_key = var.access_key
  secret_key = var.secret_key
  region     = var.region
}

resource "aws_iam_role" "role" {
  name               = "TestRole"
  assume_role_policy = data.aws_iam_policy_document.policy_doc.json
}

data "aws_iam_policy_document" "policy_doc" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["ec2.amazonaws.com"]
    }
  }
}
And here's the variables.tf file:
variable "access_key" {}
variable "secret_key" {}
variable "region" {}
After running the following command
terraform apply \
  -var 'access_key=<my_access_key>' \
  -var 'secret_key=<my_secret_key>' \
  -var 'region=cn-northwest-1'
I got an error saying Error: Error creating IAM Role TestRole: MalformedPolicyDocument: Invalid principal in policy: "SERVICE":"ec2.amazonaws.com".
This Terraform script works correctly in other regions of AWS (Tokyo, Singapore, ...). It seems AWS China is a little different from the other regions.
Here's the message before I type yes to terraform:
Terraform will perform the following actions:

  # aws_iam_role.role will be created
  + resource "aws_iam_role" "role" {
      + arn                   = (known after apply)
      + assume_role_policy    = jsonencode(
            {
              + Statement = [
                  + {
                      + Action    = "sts:AssumeRole"
                      + Effect    = "Allow"
                      + Principal = {
                          + Service = "ec2.amazonaws.com"
                        }
                      + Sid       = ""
                    },
                ]
              + Version   = "2012-10-17"
            }
        )
      + create_date           = (known after apply)
      + force_detach_policies = false
      + id                    = (known after apply)
      + max_session_duration  = 3600
      + name                  = "TestRole"
      + path                  = "/"
      + unique_id             = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.
Does anyone know how to create an IAM role like mine with terraform in AWS China?
I use aws iam get-account-authorization-details to view the current IAM roles in my AWS China accounts, which are created using AWS console.
Then I found lines containing "Service": "ec2.amazonaws.com.cn".
So using ec2.amazonaws.com.cn to replace ec2.amazonaws.com works without any problem.
That is, the content of main.tf should be:
provider "aws" {
  access_key = var.access_key
  secret_key = var.secret_key
  region     = var.region
}

resource "aws_iam_role" "role" {
  name               = "TestRole"
  assume_role_policy = data.aws_iam_policy_document.policy_doc.json
}

data "aws_iam_policy_document" "policy_doc" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["ec2.amazonaws.com.cn"]
    }
  }
}
China is different as you said.
ec2.amazonaws.com will not work in China, instead you have to use something along the lines of ec2.cn-northwest-1.amazonaws.com.cn
Here you have the list of all endpoints https://docs.amazonaws.cn/en_us/aws/latest/userguide/endpoints-Ningxia.html
Also a recommended read about IAM in China: https://docs.amazonaws.cn/en_us/aws/latest/userguide/iam.html#general-info
Like so much in AWS, it is all fully documented... but the real challenge is in finding the documentation.
As you wrote, Brian, the answer is to use ec2.amazonaws.com.cn instead of ec2.amazonaws.com when working in China. But I thought the documentation above might be helpful since it includes all mappings.
Also, highly related:
If you use Terraform and you need to manage resources both in China and outside, you can use the data source aws_partition to help you generalize across China and non-China:
data "aws_partition" "current" {}

data "aws_caller_identity" "current" {}
Then later if you have something like:
principals {
  type        = "AWS"
  identifiers = ["arn:${data.aws_partition.current.partition}:iam::${data.aws_caller_identity.current.account_id}:role/my_cool_role"]
}
Then it will evaluate the ARN as follows:
China: arn:aws-cn:iam::<my China account number>:role/my_cool_role
Anywhere else: arn:aws:iam::<my other account number>:role/my_cool_role
Note that in China it's aws-cn whereas it is aws otherwise.
This does assume that you have properly set up the two regions, so that "current" reflects back to the Chinese and non-Chinese accounts.
I am creating an S3 bucket using Terraform on AWS.
I am unable to create an S3 bucket with versioning using Terraform. I get "Error putting S3 versioning: AccessDenied" when I run terraform apply.
terraform plan works with no issues.
provider "aws" {
  region = "us-east-1"
}

variable "instance_name" {}
variable "environment" {}

resource "aws_s3_bucket" "my_dr_bucket" {
  bucket = "${var.instance_name}-dr-us-west-2"
  region = "us-west-2"
  acl    = "private"

  versioning {
    enabled = "true"
  }
}
Getting the below error:
Error: Error putting S3 versioning: AccessDenied: Access Denied
status code: 403, request id: 21EBBB358558C617
Make sure you are creating the S3 bucket in the same region your provider is configured for.
The code below resolved the issue:
provider "aws" {
  region = "us-east-1"
}

provider "aws" {
  alias  = "west"
  region = "us-west-2"
}

variable "instance_name" {}
variable "environment" {}

resource "aws_s3_bucket" "my_dr_bucket" {
  provider = "aws.west"

  bucket = "${var.instance_name}-dr-us-west-2"
  region = "us-west-2"
  acl    = "private"

  versioning {
    enabled = true
  }
}