Terraform: Reference Created S3 Bucket for Remote Backend

I'm trying to set up a remote Terraform backend in S3. I was able to create the bucket, but I used bucket_prefix instead of bucket to define my bucket name. I did this to ensure code reusability within my org.
My issue is that I've been having trouble referencing the new bucket in my Terraform backend config. I know that I can hardcode the name of the bucket I created, but I would like to reference the bucket the same way I reference other resources in Terraform.
Would this be possible?
I've included my code below:
# Configure Terraform to use S3 as the backend
terraform {
  backend "s3" {
    bucket = "aws_s3_bucket.my-bucket.id"
    key    = "terraform/terraform.tfstate"
    region = "ca-central-1"
  }
}
AWS S3 resource definition:
resource "aws_s3_bucket" "my-bucket" {
  bucket_prefix = var.bucket_prefix
  acl           = var.acl

  lifecycle {
    prevent_destroy = true
  }

  versioning {
    enabled = var.versioning
  }

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = var.sse_algorithm
      }
    }
  }
}

Terraform needs a valid backend configuration at initialization time (terraform init), meaning the bucket has to exist before you can provision any resources (before the first terraform apply). Backend blocks are also resolved before any resources are evaluated, so they cannot contain references or interpolations; bucket = "aws_s3_bucket.my-bucket.id" is taken as a literal string, not a resource reference.
If you run terraform init with a bucket name that does not exist, you get this error:
The referenced S3 bucket must have been previously created. If the S3 bucket
was created within the last minute, please wait for a minute or two and try
again.
This is self-explanatory. It is not really possible to have the S3 bucket used for the backend also defined as a Terraform resource in the same configuration. While you certainly can use terraform import to bring an existing bucket into the state, I would NOT recommend importing the backend bucket.
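If you want to avoid hardcoding the name entirely, one option is partial backend configuration: leave bucket out of the backend block and supply it at init time. A minimal sketch, assuming you can obtain the generated bucket name (for example, from an output of the bootstrap configuration that created it); my-generated-bucket-name below is a placeholder:
terraform {
  backend "s3" {
    # bucket intentionally omitted; supplied via -backend-config at init time
    key    = "terraform/terraform.tfstate"
    region = "ca-central-1"
  }
}
Then initialize with:
terraform init -backend-config="bucket=my-generated-bucket-name"
This keeps the configuration reusable across your org while the concrete bucket name lives outside the code.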

Related

Terraform Elastic Beanstalk Environment - setting for encrypting S3 bucket?

I am trying to deploy a simple Flask application on Elastic Beanstalk using Terraform.
I am using Terraform's standard resource for an Elastic Beanstalk environment, aws_elastic_beanstalk_environment.
I am able to deploy my application successfully; however, during deployment Elastic Beanstalk creates an S3 bucket, elasticbeanstalk-region-account-id, which is not encrypted by default.
I want to change this behaviour and make sure this bucket is encrypted when it gets created. Which setting do I use to accomplish this? I could not find the relevant setting for this. Any ideas?
By default AWS Elastic Beanstalk creates an unencrypted bucket, so the aws_elastic_beanstalk_environment resource cannot do anything here.
From the AWS docs:
Elastic Beanstalk doesn't turn on default encryption for the Amazon S3
bucket that it creates. This means that by default, objects are stored
unencrypted in the bucket (and are accessible only by authorized
users). Some applications require all objects to be encrypted when
they are stored—on a hard drive, in a database, etc. (also known as
encryption at rest). If you have this requirement, you can configure
your account's buckets for default encryption
So you need to enable it yourself. Try the following: after you create the Beanstalk environment, look up the S3 bucket created by Beanstalk and enable server-side encryption on it with the Terraform resource aws_s3_bucket_server_side_encryption_configuration:
resource "aws_kms_key" "mykey" {
description = "This key is used to encrypt bucket objects"
deletion_window_in_days = 10
}
data "aws_s3_bucket" "mybucket" {
bucket = "elasticbeanstalk-region-account-id" # here change the value with your information
}
resource "aws_s3_bucket_server_side_encryption_configuration" "example" {
bucket = data.aws_s3_bucket.mybucket
rule {
apply_server_side_encryption_by_default {
kms_master_key_id = aws_kms_key.mykey.arn
sse_algorithm = "aws:kms"
}
}
}
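Since the Beanstalk bucket name follows the predictable pattern elasticbeanstalk-<region>-<account-id>, you could also assemble it dynamically instead of hardcoding it. A small sketch, assuming that naming convention holds in your account:
data "aws_caller_identity" "current" {}
data "aws_region" "current" {}

data "aws_s3_bucket" "mybucket" {
  # e.g. elasticbeanstalk-eu-west-1-123456789012
  bucket = "elasticbeanstalk-${data.aws_region.current.name}-${data.aws_caller_identity.current.account_id}"
}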

Terraform Import bucket policy from s3 bucket

Is there a way I can use terraform import aws_s3_bucket to import an S3 bucket AND its policy? I'm trying to import my existing S3 buckets into Terraform so that I can apply replication configuration to them. For example:
# Importing an existing bucket
terraform import aws_s3_bucket.replication myexistingbucket

# Maybe be able to grab the s3 bucket policy from the import
data "aws_s3_bucket_policy" "replication" {
  bucket = "myexistingbucket"
}

# Terraform configuration to add replication to the existing bucket
resource "aws_s3_bucket" "replication" {
  bucket = "myexistingbucket"
  policy = data.aws_s3_bucket_policy.replication.policy

  versioning {
    enabled = true
  }

  replication_configuration {
    ####configuration code here
  }

  lifecycle {
    prevent_destroy = true
  }
}
My code looks like what is above, minus the aws_s3_bucket_policy data block. The issue is that when I run this configuration, it applies the replication configuration but leaves the bucket policy blank afterwards, since I don't define a bucket policy. I'm hoping I don't have to import every single bucket policy (there are multiple buckets being imported, and each has a very long policy).
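A note on the data-source idea sketched above: aws_s3_bucket_policy as a data source reads the live policy from AWS on each plan, so (with its .policy attribute referenced as shown) no per-policy import should be needed, and the lookup can be stamped out per bucket. A rough sketch, assuming Terraform 0.13+ for for_each on data sources; the bucket names are hypothetical:
data "aws_s3_bucket_policy" "imported" {
  for_each = toset(["myexistingbucket", "myotherbucket"]) # hypothetical names
  bucket   = each.value
}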

Terraform update existing S3 configuration

Is there a way for Terraform to make changes to an existing S3 bucket without affecting the creation or deletion of the bucket?
For example, I want to use Terraform to enable S3 replication across several AWS accounts. The S3 buckets already exist, and I simply want to enable a replication rule (via a pipeline) without recreating, deleting, or emptying the bucket.
My code looks like this:
data "aws_s3_bucket" "test" {
bucket = "example_bucket"
}
data "aws_iam_role" "s3_replication" {
name = "example_role"
}
resource "aws_s3_bucket" "source" {
bucket = data.aws_s3_bucket.example_bucket.id
versioning {
enabled = true
}
replication_configuration {
role = data.aws_iam_role.example_role.arn
rules {
id = "test"
status = "Enabled"
destination {
bucket = "arn:aws:s3:::dest1"
}
}
rules {
id = "test2"
status = "Enabled"
destination {
bucket = "arn:aws:s3:::dest2"
}
}
}
}
When I try to do it this way, terraform apply tries to delete the existing bucket and create a new one instead of just updating the configuration. I don't mind trying terraform import, but my concern is that this will destroy the bucket when I run terraform destroy as well. I would like to simply apply and destroy the replication configuration, not the already existing bucket.
I would like to simply apply and destroy the replication configuration, not the already existing bucket.
Sadly, you can't do this. Your bucket must be imported into TF so that it can be managed by it.
I don't mind trying terraform import, but my concern is that this will destroy the bucket when I run terraform destroy as well.
To protect against this, you can use prevent_destroy:
This meta-argument, when set to true, will cause Terraform to reject with an error any plan that would destroy the infrastructure object associated with the resource, as long as the argument remains present in the configuration.
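Putting those two pieces together, a minimal sketch of the flow (names mirror the question; the replication details are elided):
# One-time: bring the existing bucket under Terraform management
#   terraform import aws_s3_bucket.source example_bucket

resource "aws_s3_bucket" "source" {
  bucket = "example_bucket"

  lifecycle {
    # any plan that would destroy this bucket (including terraform destroy)
    # now fails with an error instead of deleting it
    prevent_destroy = true
  }

  versioning {
    enabled = true
  }

  replication_configuration {
    # ... rules as in the question ...
  }
}
Keep in mind that prevent_destroy only protects the bucket while the argument remains in the configuration; deleting the resource block also removes the guard.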

Intermittent Terraform failures trying to put object into a bucket

I'm seeing intermittent Terraform failures which look to me like a race condition internal to Terraform itself:
21:31:37 aws_s3_bucket.jar: Creation complete after 1s
(ID: automatictester.co.uk-my-bucket)
...
21:31:38 * aws_s3_bucket_object.jar: Error putting object in S3 bucket
(automatictester.co.uk-my-bucket): NoSuchBucket: The specified bucket
does not exist
As you can see in the above logs, TF first claims it has created the bucket at 21:31:37, and then at 21:31:38 says it can't put an object into that bucket because it does not exist.
The code behind the above error:
resource "aws_s3_bucket" "jar" {
bucket = "${var.s3_bucket_jar}"
acl = "private"
}
...
resource "aws_s3_bucket_object" "jar" {
bucket = "${var.s3_bucket_jar}"
key = "my.jar"
source = "${path.module}/../target/my.jar"
etag = "${md5(file("${path.module}/../target/my.jar"))}"
}
There clearly is an implicit dependency defined between these two, so the only reason for the failure that comes to my mind is the eventually consistent nature of Amazon S3.
How should I handle this kind of error? I believe an explicitly defined dependency with depends_on would not provide any value over the implicit dependency which is already there.
Terraform can't actually see any dependency ordering there: both resources refer to var.s3_bucket_jar, not to each other, so it almost certainly attempts both actions concurrently and fails the object creation at roughly the same moment the bucket is created.
Instead, you should define the dependency between the two resources explicitly, either with depends_on or, better yet, by referring to the bucket resource's outputs in the object resource, like this:
resource "aws_s3_bucket" "jar" {
bucket = "${var.s3_bucket_jar}"
acl = "private"
}
resource "aws_s3_bucket_object" "jar" {
bucket = "${aws_s3_bucket.jar.bucket}"
key = "my.jar"
source = "${path.module}/../target/my.jar"
etag = "${md5(file("${path.module}/../target/my.jar"))}"
}
Terraform now knows that it must wait for the S3 bucket to be created successfully before it attempts to create the S3 object in the bucket.
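For completeness, the depends_on form mentioned above would look roughly like this (written in the same 0.11-era syntax as the question; in Terraform 0.12+ the list entry would be the bare reference aws_s3_bucket.jar):
resource "aws_s3_bucket_object" "jar" {
  # explicit ordering: create the bucket before this object
  depends_on = ["aws_s3_bucket.jar"]

  bucket = "${var.s3_bucket_jar}"
  key    = "my.jar"
  source = "${path.module}/../target/my.jar"
  etag   = "${md5(file("${path.module}/../target/my.jar"))}"
}
Referencing the bucket's own attributes is still the better option, since the ordering then lives in the data flow itself rather than in a side annotation.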

Terraform init fails for remote backend S3 when creating the state bucket

I was trying to create a remote backend for my S3 bucket.
provider "aws" {
version = "1.36.0"
profile = "tasdik"
region = "ap-south-1"
}
terraform {
backend "s3" {
bucket = "ops-bucket"
key = "aws/ap-south-1/homelab/s3/terraform.tfstate"
region = "ap-south-1"
}
}
resource "aws_s3_bucket" "ops-bucket" {
bucket = "ops-bucket"
acl = "private"
versioning {
enabled = true
}
lifecycle {
prevent_destroy = true
}
tags {
Name = "ops-bucket"
Environmet = "devel"
}
}
I haven't applied anything yet; the bucket does not exist yet. So terraform asks me to run an init, but when I do, I get the following:
$ terraform init
Initializing the backend...
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Error loading state: BucketRegionError: incorrect region, the bucket is not in 'ap-south-1' region
status code: 301, request id: , host id:
Terraform initialises the state backend before any other action, such as a plan or apply. Thus you can't define the S3 bucket that will store your state in the same configuration that configures that state backend, at least not on the first run.
Terraform also won't create the state S3 bucket for you; you must create it ahead of time.
You can either do this outside of Terraform such as with the AWS CLI:
aws s3api create-bucket --bucket "${BUCKET_NAME}" --region "${BUCKET_REGION}" \
  --create-bucket-configuration LocationConstraint="${BUCKET_REGION}"
Or you could create it via Terraform, as you are trying to do, by using local state for creating the bucket on the first apply, then adding the backend configuration and re-running terraform init so that Terraform migrates the existing local state into your new S3 bucket.
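A minimal sketch of that two-phase flow, using the names from the question:
# Phase 1: leave the backend block commented out, then run:
#   terraform init && terraform apply   # bucket is created, state stays local
#
# Phase 2: uncomment the backend block, then run:
#   terraform init                      # Terraform offers to copy the local state into S3

terraform {
  backend "s3" {
    bucket = "ops-bucket"
    key    = "aws/ap-south-1/homelab/s3/terraform.tfstate"
    region = "ap-south-1"
  }
}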
As for the error message: S3 bucket names are globally unique across all regions and all AWS accounts, and the message is telling you that the GetBucketLocation call found a bucket named ops-bucket, just not in ap-south-1, meaning someone else already owns that name in another region. When creating your buckets, I recommend making the names likely to be unique by concatenating the account ID, and possibly the region name, into the bucket name.
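A sketch of that naming pattern, pulling the account ID and region from data sources (interpolation syntax matches the 0.11-era code above):
data "aws_caller_identity" "current" {}
data "aws_region" "current" {}

resource "aws_s3_bucket" "ops-bucket" {
  # e.g. ops-bucket-123456789012-ap-south-1: very unlikely to collide globally
  bucket = "ops-bucket-${data.aws_caller_identity.current.account_id}-${data.aws_region.current.name}"
  acl    = "private"
}
Note this doesn't help with the backend block itself, which only accepts literal values; there the full generated name has to be written out, or passed via -backend-config.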