Intermittent Terraform failures trying to put object into a bucket

I'm seeing intermittent Terraform failures which look to me like a race condition internal to Terraform itself:
21:31:37 aws_s3_bucket.jar: Creation complete after 1s
(ID: automatictester.co.uk-my-bucket)
...
21:31:38 * aws_s3_bucket_object.jar: Error putting object in S3 bucket
(automatictester.co.uk-my-bucket): NoSuchBucket: The specified bucket
does not exist
As you can see in the above logs, Terraform first claims it has created the bucket at 21:31:37, and then at 21:31:38 reports that it can't put an object into that bucket because the bucket does not exist.
The code behind the above error:
resource "aws_s3_bucket" "jar" {
bucket = "${var.s3_bucket_jar}"
acl = "private"
}
...
resource "aws_s3_bucket_object" "jar" {
bucket = "${var.s3_bucket_jar}"
key = "my.jar"
source = "${path.module}/../target/my.jar"
etag = "${md5(file("${path.module}/../target/my.jar"))}"
}
There is clearly an implicit dependency defined between these two, so the only explanation for the failure that comes to my mind is the eventually consistent nature of Amazon S3.
How should I handle such errors? I believe an explicitly defined dependency with depends_on would not provide any value over the implicit dependency that is already there.

Both resources reference var.s3_bucket_jar directly rather than each other, so Terraform can't see any dependency ordering here at all: it is almost certainly attempting both actions at the same time, and the object creation fails at pretty much the moment the bucket is created.
Instead, you should properly define the dependency between the two resources, either with depends_on or, better yet, by referring to the bucket resource's attributes in the object resource, like this:
resource "aws_s3_bucket" "jar" {
bucket = "${var.s3_bucket_jar}"
acl = "private"
}
resource "aws_s3_bucket_object" "jar" {
bucket = "${aws_s3_bucket.jar.bucket}"
key = "my.jar"
source = "${path.module}/../target/my.jar"
etag = "${md5(file("${path.module}/../target/my.jar"))}"
}
Terraform now knows that it needs to wait for the S3 bucket to be created and return before it attempts to create the S3 object in the bucket.
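For completeness, the depends_on variant would look like the sketch below; the attribute reference above is generally preferred because the dependency is then visible at the point of use:
resource "aws_s3_bucket_object" "jar" {
  bucket = "${var.s3_bucket_jar}"
  key    = "my.jar"
  source = "${path.module}/../target/my.jar"
  etag   = "${md5(file("${path.module}/../target/my.jar"))}"

  # Explicit ordering: create the bucket before attempting the put.
  depends_on = ["aws_s3_bucket.jar"]
}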

Related

Terraform Reference Created S3 Bucket for Remote Backend

I'm trying to set up a remote Terraform backend in S3. I was able to create the bucket, but I used bucket_prefix instead of bucket to define my bucket name. I did this to ensure code re-usability within my org.
My issue is that I've been having trouble referencing the new bucket in my Terraform backend config. I know that I can hard-code the name of the bucket I created, but I would like to reference the bucket the same way I reference other resources in Terraform.
Would this be possible?
I've included my code below:
# Configure Terraform to use S3 as the backend
terraform {
  backend "s3" {
    bucket = "aws_s3_bucket.my-bucket.id"
    key    = "terraform/terraform.tfstate"
    region = "ca-central-1"
  }
}
AWS S3 Resource definition
resource "aws_s3_bucket" "my-bucket" {
bucket_prefix = var.bucket_prefix
acl = var.acl
lifecycle {
prevent_destroy = true
}
versioning {
enabled = var.versioning
}
server_side_encryption_configuration {
rule {
apply_server_side_encryption_by_default {
sse_algorithm = var.sse_algorithm
}
}
}
}
Terraform needs a valid backend configuration when initialization happens (terraform init), meaning the bucket has to exist before you can provision any resources (before the first terraform apply).
If you do a terraform init with a bucket name which does not exist, you get this error:
│ The referenced S3 bucket must have been previously created. If the S3 bucket
│ was created within the last minute, please wait for a minute or two and try
│ again.
This is self-explanatory. It is not really possible to have the S3 bucket used for the backend also defined as a Terraform resource in the same configuration. While you can certainly use terraform import to bring an existing bucket into the state, I would NOT recommend importing the backend bucket.
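A common pattern is to bootstrap the state bucket from a separate, minimal configuration that uses local state, and then hard-code the generated name in the backend block (backend blocks cannot contain variables or interpolations anyway). A sketch, with hypothetical names; the hard-coded bucket name stands in for whatever the bootstrap output prints:
# bootstrap/main.tf -- separate config with local state, applied once
resource "aws_s3_bucket" "tf_state" {
  bucket_prefix = "my-org-tf-state-"

  lifecycle {
    prevent_destroy = true
  }
}

output "state_bucket_name" {
  value = aws_s3_bucket.tf_state.bucket
}

# main config -- paste the name printed by the bootstrap output
terraform {
  backend "s3" {
    bucket = "my-org-tf-state-20230101000000000000000001"
    key    = "terraform/terraform.tfstate"
    region = "ca-central-1"
  }
}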

Terraform update existing S3 configuration

Is there a way for Terraform to make changes to an existing S3 bucket without affecting the creation or deletion of the bucket?
For example, I want to use Terraform to enable S3 replication across several AWS accounts. The S3 buckets already exist, and I simply want to enable a replication rule (via a pipeline) without recreating, deleting, or emptying the bucket.
My code looks like this:
data "aws_s3_bucket" "test" {
bucket = "example_bucket"
}
data "aws_iam_role" "s3_replication" {
name = "example_role"
}
resource "aws_s3_bucket" "source" {
bucket = data.aws_s3_bucket.example_bucket.id
versioning {
enabled = true
}
replication_configuration {
role = data.aws_iam_role.example_role.arn
rules {
id = "test"
status = "Enabled"
destination {
bucket = "arn:aws:s3:::dest1"
}
}
rules {
id = "test2"
status = "Enabled"
destination {
bucket = "arn:aws:s3:::dest2"
}
}
}
}
When I try it this way, terraform apply tries to delete the existing bucket and create a new one instead of just updating the configuration. I don't mind trying terraform import, but my concern is that this will destroy the bucket when I run terraform destroy as well. I would like to simply apply and destroy the replication configuration, not the already existing bucket.
I would like to simply apply and destroy the replication configuration, not the already existing bucket.
Sadly, you can't do this. Your bucket must be imported into TF so that it can be managed by it.
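For example, assuming the resource address and bucket name from the question:
terraform import aws_s3_bucket.source example_bucket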
I don't mind trying terraform import, but my concern is that this will destroy the bucket when I run terraform destroy as well.
To protect against this, you can use prevent_destroy:
This meta-argument, when set to true, will cause Terraform to reject with an error any plan that would destroy the infrastructure object associated with the resource, as long as the argument remains present in the configuration.
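Applied to the imported bucket, that would look roughly like this sketch:
resource "aws_s3_bucket" "source" {
  bucket = "example_bucket"

  lifecycle {
    # Any plan that would destroy this bucket is rejected with an error,
    # including plans produced by terraform destroy.
    prevent_destroy = true
  }

  # ... versioning, replication_configuration, etc. as in the question ...
}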

Unable to align imported S3 bucket terraform configuration

I have imported an existing S3 bucket into my Terraform state.
I am now trying to reverse-engineer its configuration and mirror it in the .tf file.
Here is my file
resource "aws_s3_bucket" "my-bucket" {
provider = "aws.eu_west_1"
bucket = "my-bucket"
grant {
type = "Group"
permissions = ["READ_ACP", "WRITE"]
uri = "http://acs.amazonaws.com/groups/s3/LogDelivery"
}
grant {
id = "my-account-id"
type = "CanonicalUser"
permissions = ["FULL_CONTROL"]
}
Here is my terraform plan output
~ aws_s3_bucket.my-bucket
    acl: "" => "private"
No matter what value I use for acl, I always fail to align my .tf with the existing ACL configuration on the S3 bucket, e.g.
resource "aws_s3_bucket" "my-bucket" {
  provider = "aws.eu_west_1"
  bucket   = "my-bucket"
  acl      = "private"
  # (grant blocks as above)
}
corresponding plan output:
Error: aws_s3_bucket.my-bucket: "acl": conflicts with grant
Error: aws_s3_bucket.my-bucket: "grant": conflicts with acl
and another:
resource "aws_s3_bucket" "my-bucket" {
  provider = "aws.eu_west_1"
  bucket   = "my-bucket"
  acl      = ""
So if I use no value for acl, Terraform shows the ACL will change from unset to private.
If I use any value whatsoever, I get an error.
Why is that?
This is an observation on 0.13 but might still help:
If I create a bucket using your original code (i.e. with no acl line), the resulting TF state file still includes an "acl": "private" attribute for the bucket. If I then add an acl = "private" definition in the TF code, I also get "acl": conflicts with grant when trying to apply.
What's really odd is that if I delete the acl = "private" definition (i.e. revert to your original code) and also delete the "acl": "private" attribute line from the state file, then the plan (including a refresh) shows that the bucket will be updated in place with + acl = "private". Applying this seems to work fine, but a second apply then shows that the grants have been lost and need to be reapplied.
So it seems to me that there's a bug in the S3 state refresh that might also affect the import; in addition, removing the acl attribute from state clearly makes Terraform apply the default ACL, overriding any grants. It might be worth using your code to create a new bucket, and then comparing the two state definitions to bring over any bits the original import missed.
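One way to make that comparison is with terraform state show; the second address below is a hypothetical freshly created bucket used only for the diff:
terraform state show aws_s3_bucket.my-bucket
terraform state show aws_s3_bucket.my-bucket-fresh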

Terraform multiple s3 bucket creation

I am trying to create multiple S3 buckets, each with different bucket settings. I am looking for the syntax to reference the bucket IDs of the dynamically created buckets in other resource blocks.
I'm new to Terraform and looking for sample code or the Terraform documentation for this syntax.
Below is sample code for creating buckets from a list of names:
resource "aws_s3_bucket" "this" {
count=length(var.bucket_names)
bucket = var.bucket_names[count.index]
acl="private"
versioning {
enabled = var.bucket_versioning
}
}
In this code I want to reference the dynamically created bucket IDs and assign their bucket policy settings. I need the syntax; I'm not sure if this is correct:
resource "aws_s3_bucket_policy" "this" {
count=length(var.bucket_names)
bucket = aws_s3_bucket.this.id[count.index]
policy = data.aws_iam_policy_document.this.json
}
In your aws_s3_bucket_policy, instead of
bucket = aws_s3_bucket.this.id[count.index]
it should be
bucket = aws_s3_bucket.this[count.index].id
assuming that everything else is correct, e.g. data.aws_iam_policy_document.this.json is valid.
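If the policy itself should differ per bucket (for example, to reference each bucket's own ARN), the policy document can be counted as well. A sketch, using a hypothetical deny-insecure-transport statement:
data "aws_iam_policy_document" "this" {
  count = length(var.bucket_names)

  statement {
    sid     = "DenyInsecureTransport"
    effect  = "Deny"
    actions = ["s3:*"]

    resources = [
      aws_s3_bucket.this[count.index].arn,
      "${aws_s3_bucket.this[count.index].arn}/*",
    ]

    principals {
      type        = "*"
      identifiers = ["*"]
    }

    condition {
      test     = "Bool"
      variable = "aws:SecureTransport"
      values   = ["false"]
    }
  }
}

resource "aws_s3_bucket_policy" "this" {
  count  = length(var.bucket_names)
  bucket = aws_s3_bucket.this[count.index].id
  policy = data.aws_iam_policy_document.this[count.index].json
}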

Interpolating data source name in Terraform

I am trying to write a Terraform module for attaching a bucket policy to an AWS S3 bucket. Here is the code:
data "aws_iam_policy_document" "make_objects_public" {
# if policy_name == <this-policy-name>, then generate policy
count = "${var.policy_name == "make_objects_public" ? 1 : 0}"
statement {
...
}
}
resource "aws_s3_bucket_policy" "bucket_policy" {
# if policy_name != "", then add the generated policy
count = "${var.policy_name != "" ? 1 : 0}"
bucket = "${var.bucket_name}"
policy = "${data.aws_iam_policy_document.<policy-name-goes-here>.json}"
}
I want to interpolate the policy_name variable when fetching the policy generated by aws_iam_policy_document. I tried a few things, but sadly they didn't work. Is this possible in Terraform?
I tried out these hacks:
policy = "${data.aws_iam_policy_document."${var.policy_name}".json}"
policy = "${"${format("%s", "data.aws_iam_policy_document.${var.policy_name}.json")}"}"
policy = "${format("%s", "$${data.aws_iam_policy_document.${var.policy_name}.json}")}"
Thanks.
Dynamic resource names are not supported because Terraform must construct the dependency graph before it begins dealing with interpolations, and thus these relationships must be explicit.
A recommended approach for this sort of setup is to break the system down into small modules, which can then be used together by a calling module to produce the desired result without duplicating all of the details.
In this particular situation, you could for example split each policy out into its own re-usable module, and then write one more re-usable module that creates an S3 bucket and associates a given policy with it. Then a calling configuration may selectively instantiate one policy module appropriate to its needs along with the general S3 bucket module, to create the desired result:
module "policy" {
# policy-specific module; only contains the policy data source
source = "../policies/make_objects_public"
# (any other arguments the policy needs...)
}
module "s3_bucket" {
# S3 bucket module creates a bucket and attaches a policy to it
source = "../s3_bucket" # general resource for S3 buckets with attached policies
name = "example"
policy = "${module.policy.policy_json}" # an output from the policy module above
}
As well as avoiding the need for dynamic resource selection, this also increases flexibility by decoupling policy generation from S3 bucket creation, in principle allowing a calling module with unusual needs to skip instantiating a policy module altogether and just use aws_iam_policy_document directly.
The above pattern is somewhat analogous to the dependency injection technique: the system is split into small components (modules, in this case) and the root configuration "wires up" those components in a manner appropriate to the specific use-case. It has very similar advantages and disadvantages to the general technique.
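For reference, the policy module itself might contain little more than the data source and an output. A sketch in the 0.11-era syntax of the question; the bucket_name variable and the policy_json output name are assumptions chosen to match the wiring above:
# ../policies/make_objects_public/main.tf
variable "bucket_name" {}

data "aws_iam_policy_document" "this" {
  statement {
    sid       = "MakeObjectsPublic"
    effect    = "Allow"
    actions   = ["s3:GetObject"]
    resources = ["arn:aws:s3:::${var.bucket_name}/*"]

    principals {
      type        = "*"
      identifiers = ["*"]
    }
  }
}

output "policy_json" {
  value = "${data.aws_iam_policy_document.this.json}"
}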