I want to create a private S3 bucket. If I create a private bucket with the AWS console, it works. If I create a private bucket with Terraform, the bucket is created, but it is always shown as public.
Code
resource "aws_s3_bucket" "audio" {
  bucket = "my-dev-audio"
}

resource "aws_s3_bucket_acl" "audio" {
  bucket = aws_s3_bucket.audio.id
  acl    = "private"
}
Logs
[...]
# aws_s3_bucket_acl.audio will be created
+ resource "aws_s3_bucket_acl" "audio" {
+ acl = "private"
+ bucket = (known after apply)
+ id = (known after apply)
+ access_control_policy {
+ grant {
+ permission = (known after apply)
+ grantee {
+ display_name = (known after apply)
+ email_address = (known after apply)
+ id = (known after apply)
+ type = (known after apply)
+ uri = (known after apply)
}
}
+ owner {
+ display_name = (known after apply)
+ id = (known after apply)
}
}
}
Screenshots
In the list of buckets my bucket is shown as public.
In the permission tab of my bucket, the bucket is shown as public.
Environment
Terraform 1.2.1
AWS Provider 4.45.0
Research
I read Resource: aws_s3_bucket_acl, but I can't see any difference between the example and my code.
Question
How can I create a private S3 bucket with Terraform?
S3 buckets are private by default; the bucket ACL is a different concept from whether the console reports the bucket as public.
S3 bucket ACL
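If the console still flags the bucket as public, a minimal sketch of explicitly blocking all public access (the resource label "audio" matches the question's bucket; all four flags set to true is an assumption about the desired policy):

```hcl
resource "aws_s3_bucket_public_access_block" "audio" {
  bucket = aws_s3_bucket.audio.id

  # Block public ACLs and public bucket policies entirely.
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
```

With this in place, the console's "Block all public access" setting is on and the bucket can no longer be flagged as public.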
Related
Terraform v1.2.8
I have a YAML configuration file that I've used to create an AWS EKS cluster via eksctl that uses an existing VPC, like this
kind: ClusterConfig
apiVersion: eksctl.io/v1alpha5

metadata:
  name: sandbox
  region: us-east-1
  version: "1.23"

# The VPC and subnets are for the data plane, where the pods will
# ultimately be deployed.
vpc:
  id: "vpc-123456"
  clusterEndpoints:
    privateAccess: true
    publicAccess: false
  subnets:
    private:
      ...
Then I do this to create the cluster
$ eksctl create cluster -f eks-sandbox.yaml
Now I want to use Terraform instead, so I looked at the aws_eks_cluster resource, and am doing this
resource "aws_eks_cluster" "eks_cluster" {
  name     = var.cluster_name
  role_arn = var.iam_role_arn

  vpc_config {
    endpoint_private_access = true
    endpoint_public_access  = false
    security_group_ids      = var.sg_ids
    subnet_ids              = var.subnet_ids
  }
}
...but the resource doesn't allow me to specify an existing VPC? Hence when I do a
$ terraform plan -out out.o
I see
# module.k8s_cluster.aws_eks_cluster.eks_cluster will be created
+ resource "aws_eks_cluster" "eks_cluster" {
+ arn = (known after apply)
+ certificate_authority = (known after apply)
+ created_at = (known after apply)
+ endpoint = (known after apply)
+ id = (known after apply)
+ identity = (known after apply)
+ name = "sandbox"
+ platform_version = (known after apply)
+ role_arn = "arn:aws:iam::1234567890:role/EKSClusterAdminRole"
+ status = (known after apply)
+ tags_all = (known after apply)
+ version = (known after apply)
+ kubernetes_network_config {
+ ip_family = (known after apply)
+ service_ipv4_cidr = (known after apply)
+ service_ipv6_cidr = (known after apply)
}
+ vpc_config {
+ cluster_security_group_id = (known after apply)
+ endpoint_private_access = true
+ endpoint_public_access = false
+ public_access_cidrs = (known after apply)
+ security_group_ids = (known after apply)
+ subnet_ids = [
+ "subnet-1234567890",
+ "subnet-2345678901",
+ "subnet-3456789012",
+ "subnet-4567890123",
+ "subnet-5678901234",
]
+ vpc_id = (known after apply)
}
}
See the vpc_id output? But I don't want it to create a VPC for me. I want it to use an existing VPC, as in my YAML configuration file. How do I use an existing VPC when creating an AWS EKS cluster in Terraform? TIA
The aws_eks_cluster resource doesn't have a vpc_id argument, only a vpc_id attribute. When you specify subnet_ids, the cluster is created in the VPC that those subnets belong to, so passing subnets from your existing VPC is all that's needed.
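As a sketch, you can even look up the subnets of the existing VPC with a data source instead of hard-coding IDs (the VPC ID matches the YAML above; the Tier tag is an assumed tagging convention):

```hcl
# Find all subnets in the existing VPC (vpc-123456 from the YAML config)
# that are tagged as private.
data "aws_subnets" "private" {
  filter {
    name   = "vpc-id"
    values = ["vpc-123456"]
  }

  tags = {
    Tier = "private" # assumed tagging convention
  }
}

resource "aws_eks_cluster" "eks_cluster" {
  name     = var.cluster_name
  role_arn = var.iam_role_arn

  vpc_config {
    endpoint_private_access = true
    endpoint_public_access  = false
    security_group_ids      = var.sg_ids
    subnet_ids              = data.aws_subnets.private.ids
  }
}
```

The resulting cluster lands in vpc-123456 because every subnet in subnet_ids belongs to it; the vpc_id attribute in the plan output is simply derived from those subnets.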
I tried to use newer_noncurrent_versions in an S3 lifecycle rule.
The standalone lifecycle configuration resource was released in AWS provider 4.3.0.
However, when applying on Terraform Cloud, an error occurred saying to use Lifecycle V2.
Is my code the problem? Is it a Terraform provider problem?
Terraform CLI and Terraform AWS Provider Version
Terraform v1.1.5
on darwin_amd64
Terraform Configuration Files
resource "aws_s3_bucket_lifecycle_configuration" "s3" {
  bucket = "aws-test-bucket"

  rule {
    id     = "rule"
    status = "Enabled"

    noncurrent_version_expiration {
      noncurrent_days           = 1
      newer_noncurrent_versions = 2
    }
  }
}
Actual Behavior
When I run terraform plan locally, the plan is created just fine.
+ resource "aws_s3_bucket_lifecycle_configuration" "s3" {
+ bucket = (known after apply)
+ id = (known after apply)
+ rule {
+ id = "rule"
+ status = "Enabled"
+ noncurrent_version_expiration {
+ newer_noncurrent_versions = 2
+ noncurrent_days = 1
}
}
}
However, when applying in Terraform Cloud, the following error occurs.
Error: error creating S3 Lifecycle Configuration for bucket (aws-test-bucket): InvalidRequest:
NewerNoncurrentVersions element can only be used in Lifecycle V2.
status code: 400, with aws_s3_bucket_lifecycle_configuration.s3
on s3.tf line 66, in resource "aws_s3_bucket_lifecycle_configuration" "s3":
You are missing a filter block in the rule; an empty filter {} makes the rule apply to all objects in the bucket:
resource "aws_s3_bucket_lifecycle_configuration" "s3" {
  bucket = "aws-test-bucket"

  rule {
    id     = "rule"
    status = "Enabled"

    filter {}

    noncurrent_version_expiration {
      noncurrent_days           = 1
      newer_noncurrent_versions = 2
    }
  }
}
I'm trying to use terraform to create an IAM role in AWS China Ningxia region.
Here's my folder structure
.
├── main.tf
└── variables.tf
Here's the content of main.tf
provider "aws" {
  access_key = var.access_key
  secret_key = var.secret_key
  region     = var.region
}

resource "aws_iam_role" "role" {
  name               = "TestRole"
  assume_role_policy = data.aws_iam_policy_document.policy_doc.json
}

data "aws_iam_policy_document" "policy_doc" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["ec2.amazonaws.com"]
    }
  }
}
And here's the variables.tf file:
variable "access_key" {}
variable "secret_key" {}
variable "region" {}
After running the following command
terraform apply \
-var 'access_key=<my_access_key>' \
-var 'secret_key=<my_secret_key>' \
-var 'region=cn-northwest-1'
I got an error saying Error: Error creating IAM Role TestRole: MalformedPolicyDocument: Invalid principal in policy: "SERVICE":"ec2.amazonaws.com".
This Terraform script works correctly in other AWS regions (Tokyo, Singapore, ...). It seems that AWS China is a little different from the other regions.
Here's the message before I type yes to terraform:
Terraform will perform the following actions:
# aws_iam_role.role will be created
+ resource "aws_iam_role" "role" {
+ arn = (known after apply)
+ assume_role_policy = jsonencode(
{
+ Statement = [
+ {
+ Action = "sts:AssumeRole"
+ Effect = "Allow"
+ Principal = {
+ Service = "ec2.amazonaws.com"
}
+ Sid = ""
},
]
+ Version = "2012-10-17"
}
)
+ create_date = (known after apply)
+ force_detach_policies = false
+ id = (known after apply)
+ max_session_duration = 3600
+ name = "TestRole"
+ path = "/"
+ unique_id = (known after apply)
}
Plan: 1 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Does anyone know how to create an IAM role like mine with terraform in AWS China?
I used aws iam get-account-authorization-details to view the existing IAM roles in my AWS China accounts, which were created using the AWS console.
There I found lines containing "Service": "ec2.amazonaws.com.cn".
So replacing ec2.amazonaws.com with ec2.amazonaws.com.cn works without any problem.
I mean the content of main.tf should be
provider "aws" {
  access_key = var.access_key
  secret_key = var.secret_key
  region     = var.region
}

resource "aws_iam_role" "role" {
  name               = "TestRole"
  assume_role_policy = data.aws_iam_policy_document.policy_doc.json
}

data "aws_iam_policy_document" "policy_doc" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["ec2.amazonaws.com.cn"]
    }
  }
}
China is different as you said.
ec2.amazonaws.com will not work in China, instead you have to use something along the lines of ec2.cn-northwest-1.amazonaws.com.cn
Here you have the list of all endpoints https://docs.amazonaws.cn/en_us/aws/latest/userguide/endpoints-Ningxia.html
Also a recommended read about IAM in China: https://docs.amazonaws.cn/en_us/aws/latest/userguide/iam.html#general-info
Like so much in AWS, it is all fully documented... but the real challenge is in finding the documentation.
As you wrote, Brian, the answer is to use ec2.amazonaws.com.cn instead of ec2.amazonaws.com when working in China. But I thought the documentation above might be helpful since it includes all mappings.
Also, highly related:
If you use Terraform and you need to manage resources both in China and outside, you can use the data source aws_partition to help you generalize across China and non-China:
data "aws_partition" "current" {}

data "aws_caller_identity" "current" {}
Then later if you have something like:
principals {
  type        = "AWS"
  identifiers = ["arn:${data.aws_partition.current.partition}:iam::${data.aws_caller_identity.current.account_id}:role/my_cool_role"]
}
Then it will evaluate the ARN as follows:
China: arn:aws-cn:iam::<my China account number>:role/my_cool_role
Anywhere else: arn:aws:iam::<my other account number>:role/my_cool_role
Note that in China it's aws-cn whereas it is aws otherwise.
This does assume that you have properly set up the two regions, so that "current" reflects back to the Chinese and non-Chinese accounts.
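The same idea can cover the service principal itself. A sketch using the aws_partition data source's dns_suffix attribute, which resolves to amazonaws.com in standard regions and amazonaws.com.cn in the China partition (whether every China service principal follows this suffix pattern is an assumption worth verifying against the endpoint list above):

```hcl
data "aws_partition" "current" {}

data "aws_iam_policy_document" "policy_doc" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type = "Service"
      # dns_suffix is "amazonaws.com" outside China and
      # "amazonaws.com.cn" in the aws-cn partition.
      identifiers = ["ec2.${data.aws_partition.current.dns_suffix}"]
    }
  }
}
```

This lets the same module run unchanged in both partitions instead of hard-coding either suffix.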
I got a duplicate existing resource error when deploying to AWS via Terraform.
Error: Error creating IAM Role SecuritySearchAPITaskRole: EntityAlreadyExists: Role with name SecuritySearchAPITaskRole already exists.
status code: 409, request id: cf5ae1f4-de6a-11e9-a7b1-d3cdff4db013
on deploy/modules/ecs-fargate-service/iam.tf line 1, in resource "aws_iam_role" "task":
1: resource "aws_iam_role" "task" {
Based on the above error, there is an existing IAM role with the name SecuritySearchAPITaskRole. I think the solution is to import this resource into my local Terraform state, but how can I find the resource ID I need to use? I can find this role in the AWS IAM console, but it doesn't seem to have an ID. I also tried running terraform plan, which gives me:
+ resource "aws_iam_role" "task" {
+ arn = (known after apply)
+ assume_role_policy = jsonencode(
{
+ Statement = [
+ {
+ Action = "sts:AssumeRole"
+ Effect = "Allow"
+ Principal = {
+ Service = "ecs-tasks.amazonaws.com"
}
},
]
+ Version = "2012-10-17"
}
)
+ create_date = (known after apply)
+ description = "Task role for the SecuritySearchAPI service"
+ force_detach_policies = false
+ id = (known after apply)
+ max_session_duration = 3600
+ name = "SecuritySearchAPITaskRole"
+ path = "/"
+ tags = {
+ "Application" = "Security Search"
+ "Client" = "IRESS"
+ "DataClassification" = "NoData"
+ "Name" = "SecuritySearchAPI Task Role"
+ "Owner" = "platform"
+ "Product" = "SharedServices"
+ "Schedule" = "False"
+ "Service" = "Search"
+ "TaggingStandardVersion" = "3"
}
+ unique_id = (known after apply)
}
And you can see the id = (known after apply) is not created yet. How can I find the ID for the IAM role?
OK, I found this doc: https://www.terraform.io/docs/providers/aws/r/iam_role.html#import. I can use the role name as the ID in the terraform import command.
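For example, assuming the resource address from the plan output above, the import command would look like:

terraform import aws_iam_role.task SecuritySearchAPITaskRole

After a successful import, terraform plan should no longer try to create the role (though it may show in-place changes if the configuration differs from the existing role).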
I am creating secrets in AWS using Terraform code. My Jenkins pipeline creates the infrastructure every 2 hours and then destroys it. When the infrastructure is re-created after 2 hours, AWS Secrets Manager does not allow me to re-create the secret and throws the error below. Please suggest.
Error: error creating Secrets Manager Secret: InvalidRequestException: You can't create this secret because a secret with this name is already scheduled for deletion.
status code: 400, request id: e4f8cc85-29a4-46ff-911d-c5115716adc5
TF code:-
resource "aws_secretsmanager_secret" "secret" {
  description = "${var.environment}"
  kms_key_id  = "${data.aws_kms_key.sm.arn}"
  name        = "${var.environment}-airflow-secret"
}

resource "random_string" "rds_password" {
  length  = 16
  special = true
}

resource "aws_secretsmanager_secret_version" "secret" {
  secret_id     = "${aws_secretsmanager_secret.secret.id}"
  secret_string = <<EOF
{
  "rds_password": "${random_string.rds_password.result}"
}
EOF
}
TF code plan output:-
# module.aws_af_aws_secretsmanager_secret.secret will be created
+ resource "aws_secretsmanager_secret" "secret" {
+ arn = (known after apply)
+ description = "dev-airflow-secret"
+ id = (known after apply)
+ kms_key_id = "arn:aws:kms:eu-central-1"
+ name = "dev-airflow-secret"
+ name_prefix = (known after apply)
+ recovery_window_in_days = 30
+ rotation_enabled = (known after apply)
}
# module.aws_af.aws_secretsmanager_secret_version.secret will be created
+ resource "aws_secretsmanager_secret_version" "secret" {
+ arn = (known after apply)
+ id = (known after apply)
+ secret_id = (known after apply)
+ secret_string = (sensitive value)
+ version_id = (known after apply)
+ version_stages = (known after apply)
}
You need to set the recovery window to 0 for immediate deletion of secrets.
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/secretsmanager_secret#recovery_window_in_days
recovery_window_in_days - (Optional) Specifies the number of days that AWS Secrets Manager waits before it can delete the secret. This value can be 0 to force deletion without recovery or range from 7 to 30 days. The default value is 30.
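Applied to the configuration from the question, a sketch (only the recovery_window_in_days line is new; note the secret must be destroyed and re-created once for the setting to take effect):

```hcl
resource "aws_secretsmanager_secret" "secret" {
  description = "${var.environment}"
  kms_key_id  = "${data.aws_kms_key.sm.arn}"
  name        = "${var.environment}-airflow-secret"

  # 0 forces immediate deletion without a recovery window, so the
  # same secret name can be reused when the pipeline re-creates it.
  recovery_window_in_days = 0
}
```

The trade-off is that a destroyed secret is gone immediately and cannot be restored, which is usually acceptable for short-lived, pipeline-managed environments like this one.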