Terraform GCP executes resources in wrong order - google-cloud-platform

I have this main.tf file:
provider "google" {
project = var.projNumber
region = var.regName
zone = var.zoneName
}
resource "google_storage_bucket" "bucket_for_python_application" {
name = "python_bucket_exam"
location = var.regName
force_destroy = true
}
resource "google_storage_bucket_object" "file-hello-py" {
name = "src/hello.py"
source = "app-files/src/hello.py"
bucket = "python_bucket_exam"
}
resource "google_storage_bucket_object" "file-main-py" {
name = "main.py"
source = "app-files/main.py"
bucket = "python_bucket_exam"
}
When executed the first time it worked fine, but after terraform destroy and then terraform plan -> terraform apply again, I noticed that Terraform tries to create the objects before actually creating the bucket:
Of course it can't create an object inside something that doesn't exist. Why is that?

You have to create a dependency between your objects and your bucket (see the code below). Otherwise, Terraform won't know that it has to create the bucket first and then the objects. This is related to how Terraform tracks resources in a dependency graph (a directed graph): without a reference between them, the two resources look unrelated and may be created in any order, or in parallel.
resource "google_storage_bucket_object" "file-hello-py" {
name = "src/hello.py"
source = "app-files/src/hello.py"
bucket = google_storage_bucket.bucket_for_python_application.name
}
resource "google_storage_bucket_object" "file-main-py" {
name = "main.py"
source = "app-files/main.py"
bucket = google_storage_bucket.bucket_for_python_application.name
}
By doing this, you declare an implicit order: bucket first, then objects. This is equivalent to using depends_on in your google_storage_bucket_object resources, but in this particular case I recommend referencing your bucket from your objects rather than using an explicit depends_on.
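For comparison, a minimal sketch of the explicit depends_on form (reusing the resource names above); the implicit reference shown earlier remains the preferred option:

resource "google_storage_bucket_object" "file-hello-py" {
  name   = "src/hello.py"
  source = "app-files/src/hello.py"
  bucket = "python_bucket_exam"

  # Explicit ordering: create the bucket before this object.
  depends_on = [google_storage_bucket.bucket_for_python_application]
}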

Related

How to make gcp cloud function public using Terraform

I will start by saying I am very new to both GCP and Terraform, so I hope there is a simple answer that I have just overlooked.
I am trying to create a GCP cloud function and then make it public using Terraform. I am able to create the function but not make it public, despite closely following the documentation's example: https://www.terraform.io/docs/providers/google/r/cloudfunctions_function.html
I receive the error "googleapi: Error 403: Permission 'cloudfunctions.functions.setIamPolicy' denied on resource ... (or resource may not exist)" when the google_cloudfunctions_function_iam_member resource is reached.
How can I make this function public? Does it have something to do with the account/api key I am using for credentials to create all these resources?
Thanks in advance.
my main.tf file:
provider "google" {
project = "my-project"
credentials = "key.json" #compute engine default service account api key
region = "us-central1"
}
terraform {
backend "gcs" {
bucket = "manually-created-bucket"
prefix = "terraform/state"
credentials = "key.json"
}
}
# create the storage bucket for our scripts
resource "google_storage_bucket" "source_code" {
name = "test-bucket-lh05111992"
location = "us-central1"
force_destroy = true
}
# zip up function source code
data "archive_file" "my_function_script_zip" {
type = "zip"
source_dir = "../source/scripts/my-function-script"
output_path = "../source/scripts/my-function-script.zip"
}
# add function source code to storage
resource "google_storage_bucket_object" "my_function_script_zip" {
name = "index.zip"
bucket = google_storage_bucket.source_code.name
source = "../source/scripts/my-function-script.zip"
}
#create the cloudfunction
resource "google_cloudfunctions_function" "function" {
name = "send_my_function_script"
description = "This function is called in GTM. It sends a users' google analytics id to BigQuery."
runtime = "nodejs10"
available_memory_mb = 128
source_archive_bucket = google_storage_bucket.source_code.name
source_archive_object = google_storage_bucket_object.my_function_script_zip.name
trigger_http = true
entry_point = "handleRequest"
}
# IAM entry for all users to invoke the function
resource "google_cloudfunctions_function_iam_member" "invoker" {
project = google_cloudfunctions_function.function.project
region = "us-central1"
cloud_function = google_cloudfunctions_function.function.name
role = "roles/cloudfunctions.invoker"
member = "allUsers"
}
It seems the only problem with that example from the Terraform site is the "Cloud Functions IAM resources", which were modified in Nov 2019. Now you have to specify these resources as explained here. For your use case (a public cloud function) I'd recommend you follow this configuration and just change the members attribute to ["allUsers"], so it would be something like this:
resource "google_cloudfunctions_function_iam_binding" "binding" {
project = google_cloudfunctions_function.function.project
region = google_cloudfunctions_function.function.region
cloud_function = google_cloudfunctions_function.function.name
role = "roles/cloudfunctions.invoker"
members = [
"allUsers",
]
}
Finally, you can give it a test and modify the functions you've already created here in the "Try this API" panel on the right, entering the proper resource and request body like this (make sure to enter the "resource" parameter correctly):
{
  "policy": {
    "bindings": [
      {
        "members": [
          "allUsers"
        ],
        "role": "roles/cloudfunctions.invoker"
      }
    ]
  }
}
In addition to adjusting the IAM roles as @chinoche suggested, I also discovered that I needed to modify the service account I was using, giving it project owner permissions (I guess the default one I was using didn't have this). I updated my key.json and it finally worked.
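For reference, a minimal sketch of granting the deploying service account the permission it needs via Terraform; the service-account email and the narrower roles/cloudfunctions.admin role are assumptions (project owner also works, but is much broader than necessary):

# Hypothetical: grant the deploying service account the ability to set IAM
# policy on Cloud Functions. roles/cloudfunctions.admin includes
# cloudfunctions.functions.setIamPolicy.
resource "google_project_iam_member" "deployer_functions_admin" {
  project = "my-project"
  role    = "roles/cloudfunctions.admin"
  member  = "serviceAccount:terraform-deployer@my-project.iam.gserviceaccount.com"
}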

Terraform Create resource in Child AWS Account

My goal is to create a Terraform Module which creates a Child AWS account and creates a set of resources inside the account (for example, AWS Config rules).
The account is created with the following aws_organizations_account definition:
resource "aws_organizations_account" "account" {
name = "my_new_account"
email = "john#doe.org"
}
And an example aws_config_config_rule would be something like:
resource "aws_config_config_rule" "s3_versioning" {
name = "my-config-rule"
description = "Verify versioning is enabled on S3 Buckets."
source {
owner = "AWS"
source_identifier = "S3_BUCKET_VERSIONING_ENABLED"
}
scope {
compliance_resource_types = ["AWS::S3::Bucket"]
}
}
However, doing this creates the AWS Config rule in the master account, not the newly created child account.
How can I define the config rule to apply to the child account?
So, I was actually able to achieve this by defining a new provider in the module which assumes the OrganizationAccountAccessRole inside the newly created account.
Here's an example:
// Define new account
resource "aws_organizations_account" "my_new_account" {
  name  = "my_new_account"
  email = "john@doe.org"
}

provider "aws" {
  /* other provider config */

  assume_role {
    // Assume the organization access role
    role_arn = "arn:aws:iam::${aws_organizations_account.my_new_account.id}:role/OrganizationAccountAccessRole"
  }

  alias = "my_new_account"
}

resource "aws_config_config_rule" "s3_versioning" {
  // Tell resource to use the new provider
  provider = aws.my_new_account

  name        = "my-config-rule"
  description = "Verify versioning is enabled on S3 Buckets."

  source {
    owner             = "AWS"
    source_identifier = "S3_BUCKET_VERSIONING_ENABLED"
  }

  scope {
    compliance_resource_types = ["AWS::S3::Bucket"]
  }
}
However, it should be noted that defining the provider inside the module leads to a few quirks; notably, once you source this module you cannot simply remove it. If you do, Terraform will throw an Error: Provider configuration not present, since you will have also removed the provider definition.
But if you don't plan on removing these accounts (or are okay with doing it manually when needed), then this should be fine!
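As a way around that quirk, a sketch of the alternative: keep the assume-role provider in the root module and pass it into the module explicitly. The module name account_baseline and its path are hypothetical, and this assumes the aws_organizations_account resource also lives in the root module:

# Root module: configure the assume-role provider here and hand it to the module.
provider "aws" {
  alias = "child"

  assume_role {
    role_arn = "arn:aws:iam::${aws_organizations_account.my_new_account.id}:role/OrganizationAccountAccessRole"
  }
}

module "account_baseline" {
  source = "./modules/account_baseline" # hypothetical module path

  # The module's resources use this provider configuration.
  providers = {
    aws = aws.child
  }
}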

Renaming s3 bucket in Terraform (but not S3) causes create then destroy?

I want to refactor my Terraform scripts a bit.
Before:
resource "aws_s3_bucket" "abc" {
bucket = "my-bucket"
acl = "private"
region = "${var.aws_region}"
tags = {
Name = "My bucket"
}
versioning {
enabled = true
mfa_delete = false
}
}
After:
resource "aws_s3_bucket" "def" {
bucket = "my-bucket"
acl = "private"
region = "${var.aws_region}"
tags = {
Name = "My bucket"
}
versioning {
enabled = true
mfa_delete = false
}
}
As you can see, only the name in Terraform has changed (abc -> def).
However, this causes a create / destroy of the bucket in terraform plan.
I expected Terraform to recognize the buckets as the same (they have the same attributes, including bucket).
Questions:
Why is this?
Is there a way to refactor Terraform scripts without destroying infrastructure?
You can use terraform state mv to reflect this change in the state.
In your case, this would be
terraform state mv aws_s3_bucket.abc aws_s3_bucket.def
From my own experience, this works well and I recommend doing it instead of working with bad names.
Terraform does not recognize such changes, no :-)
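A sketch of the configuration-level equivalent, assuming Terraform 1.1 or newer: a moved block records the rename so that anyone applying the configuration gets the same state migration instead of a destroy/create:

# Records the rename in configuration; on apply, Terraform moves the state
# entry from aws_s3_bucket.abc to aws_s3_bucket.def rather than recreating it.
moved {
  from = aws_s3_bucket.abc
  to   = aws_s3_bucket.def
}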

How to add lifecycle rules to an S3 bucket using terraform?

I am using Terraform to create a bucket in S3 and I want to add "folders" and lifecycle rules to it.
I can create the bucket (using an "aws_s3_bucket" resource).
I can create the bucket and define my lifecycle rules within the same "aws_s3_bucket" resource, ie. at creation time.
I can add "folders" to the bucket (I know they aren't really folders, but they are presented to the client systems as if they were... :-) ), using an "aws_s3_bucket_object" resource, ie. after bucket creation.
All good...
But I want to be able to add lifecycle rules AFTER I've created the bucket; when I try, I get an error telling me the bucket already exists. (Actually I want to be able to subsequently add folders and corresponding lifecycle rules as and when required.)
Now, I can add lifecycle rules to an existing bucket in the AWS GUI, so I know it is a reasonable thing to want to do.
But is there a way of doing it with Terraform?
Am I missing something?
resource "aws_s3_bucket" "bucket" {
bucket = "${replace(var.tags["Name"],"/_/","-")}"
region = "${var.aws_region}"
#tags = "${merge(var.tags, map("Name", "${var.tags["Name"]}"))}"
tags = "${merge(var.tags, map("Name", "${replace(var.tags["Name"],"/_/","-")}"))}"
}
resource "aws_s3_bucket" "bucket_quarterly" {
bucket = "${aws_s3_bucket.bucket.id}"
#region = "${var.aws_region}"
lifecycle_rule {
id = "quarterly_retention"
prefix = "quarterly/"
enabled = true
expiration {
days = 92
}
}
}
resource "aws_s3_bucket" "bucket_permanent" {
bucket = "${aws_s3_bucket.bucket.id}"
#region = "${var.aws_region}"
lifecycle_rule {
id = "permanent_retention"
enabled = true
prefix = "permanent/"
transition {
days = 1
storage_class = "GLACIER"
}
}
}
resource "aws_s3_bucket_object" "quarterly" {
bucket = "${aws_s3_bucket.bucket.id}"
#bucket = "${var.bucket_id}"
acl = "private"
key = "quarterly"
source = "/dev/null"
}
resource "aws_s3_bucket_object" "permanent" {
bucket = "${aws_s3_bucket.bucket.id}"
#bucket = "${var.bucket_id}"
acl = "private"
key = "permanent"
source = "/dev/null"
}
I expect to have a bucket with 2 lifecycle rules, but I get the following error:
Error: Error applying plan:
2 error(s) occurred:
* module.s3.aws_s3_bucket.bucket_quarterly: 1 error(s) occurred:
* aws_s3_bucket.bucket_quarterly: Error creating S3 bucket: BucketAlreadyOwnedByYou: Your previous request to create the named bucket succeeded and you already own it.
status code: 409, request id: EFE9C62B25341478, host id: hcsCNracNrpTJZ4QdU0AV2wNm/FqhYSEY4KieQ+zSHNsj6AUR69XvPF+0BiW4ZOpfgIoqwFoXkI=
* module.s3.aws_s3_bucket.bucket_permanent: 1 error(s) occurred:
* aws_s3_bucket.bucket_permanent: Error creating S3 bucket: BucketAlreadyOwnedByYou: Your previous request to create the named bucket succeeded and you already own it.
status code: 409, request id: 7DE1B1A36138A614, host id: 8jB6l7d6Hc6CZFgQSLQRMJg4wtvnrSL6Yp5R4RScq+GtuMW+6rkN39bcTUwQhzxeI7jRStgLXSc=
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
Let's first break down what's happening and how we can overcome this issue. Each time you define a resource "aws_s3_bucket", Terraform will attempt to create a bucket with the parameters specified. If you want to attach a lifecycle policy to a bucket, do it where you define the bucket, e.g.:
resource "aws_s3_bucket" "quarterly" {
bucket = "quarterly_bucket_name"
#bucket = "${var.bucket_id}"
acl = "private"
lifecycle_rule {
id = "quarterly_retention"
prefix = "folder/"
enabled = true
expiration {
days = 92
}
}
}
resource "aws_s3_bucket" "permanent" {
bucket = "perm_bucket_name"
acl = "private"
lifecycle_rule {
id = "permanent_retention"
enabled = true
prefix = "permanent/"
transition {
days = 1
storage_class = "GLACIER"
}
}
}
A bucket can have multiple lifecycle_rule blocks on it.
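For instance, a sketch combining the asker's two rules on a single bucket (the bucket name here is a placeholder for the expression built from var.tags in the question):

resource "aws_s3_bucket" "bucket" {
  bucket = "my-bucket-name" # placeholder for "${replace(var.tags["Name"],"/_/","-")}"
  acl    = "private"

  lifecycle_rule {
    id      = "quarterly_retention"
    prefix  = "quarterly/"
    enabled = true

    expiration {
      days = 92
    }
  }

  lifecycle_rule {
    id      = "permanent_retention"
    prefix  = "permanent/"
    enabled = true

    transition {
      days          = 1
      storage_class = "GLACIER"
    }
  }
}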
If you want to define the lifecycle rules as external blocks, you can do it in this way:
// example of what the variable would look like:
variable "lifecycle_rules" {
  type    = "list"
  default = []
}

// example of what the assignment would look like:
lifecycle_rules = [{
  id      = "cleanup"
  prefix  = ""
  enabled = true

  expiration = [{
    days = 1
  }]
}, {...}, {...} etc...]

// example of what the usage would look like
resource "aws_s3_bucket" "quarterly" {
  bucket = "quarterly_bucket_name"
  #bucket = "${var.bucket_id}"
  acl    = "private"

  lifecycle_rule = ["${var.lifecycle_rules}"]
}
Note: the implementation above of having an external lifecycle policy isn't really the best way to do it, but the only way. You pretty much trick Terraform into accepting the list of maps, which happens to be the same type as lifecycle_rule, so it works. Ideally, Terraform would have its own resource type for lifecycle rules, but it doesn't.
Edit: why have separate resource blocks when we now have dynamic blocks! Woohoo
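A minimal sketch of that dynamic-block approach (Terraform 0.12+); the variable shape and attribute names here are illustrative rather than from the original answer:

variable "lifecycle_rules" {
  # Each entry describes one rule: an id, a key prefix, and an expiration in days.
  type = list(object({
    id              = string
    prefix          = string
    enabled         = bool
    expiration_days = number
  }))
  default = []
}

resource "aws_s3_bucket" "quarterly" {
  bucket = "quarterly_bucket_name"
  acl    = "private"

  # Generate one lifecycle_rule block per entry in var.lifecycle_rules.
  dynamic "lifecycle_rule" {
    for_each = var.lifecycle_rules

    content {
      id      = lifecycle_rule.value.id
      prefix  = lifecycle_rule.value.prefix
      enabled = lifecycle_rule.value.enabled

      expiration {
        days = lifecycle_rule.value.expiration_days
      }
    }
  }
}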
As far as I am aware, you cannot make a lifecycle policy separately.
Someone opened an issue asking for a resource to be created to allow you to do so, but it looks like it is still open: https://github.com/terraform-providers/terraform-provider-aws/issues/6188
As for your error, I believe the reason you're getting the error is because:
resource "aws_s3_bucket" "bucket"
Creates a bucket with a particular name.
resource "aws_s3_bucket" "bucket_quarterly"
References bucket = "${aws_s3_bucket.bucket.id}" and therefore tries to create a bucket with the same name as the previous resource (which cannot be done as names are unique).
resource "aws_s3_bucket" "bucket_permanent"
Similarly, this resource references bucket = "${aws_s3_bucket.bucket.id}" and therefore tries to create a bucket with the same name as the first resource (which cannot be done as names are unique).
You mentioned "I expect to have a bucket with 2 lifecycle rules", but in your code above you are declaring 3 separate aws_s3_bucket resources (one without a lifecycle, and 2 with a lifecycle) plus two objects (folders) that are being placed into the bucket without a lifecycle policy.
Thanks for the info (I like the idea of the list to separate the rules from the resource).
The issue was that I didn't appreciate that you could define lifecycle rules within the resource AND change them subsequently, so I was trying to figure out how to define them separately...
All that's required is to specify them in the resource and do terraform apply; then you can edit the configuration, add/amend/remove lifecycle_rule items, and just do terraform apply again to apply the changes.
source "aws_s3_bucket" "my_s3_bucket" {
bucket = local.s3_bucket_name
}
resource "aws_s3_bucket_acl" "my_s3_bucket_acl" {
bucket = aws_s3_bucket.my_s3_bucket.arn
lifecycle {
prevent_destroy = true
}
}
resource "aws_s3_bucket_versioning" "my_s3_bucket_versioning" {
bucket = aws_s3_bucket.my_s3_bucket.arn
versioning_configuration {
status = true
}
}
resource "aws_s3_bucket_server_side_encryption_configuration" "my_s3-bucket_encryption" {
bucket = aws_s3_bucket.my_s3_bucket.arn
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
}
}
resource "aws_s3_bucket_lifecycle_configuration" "my_s3_bucket_lifecycle_config" {
bucket = aws_s3_bucket.my_s3_bucket.arn
rule {
id = "dev_lifecycle_7_days"
status = true
abort_incomplete_multipart_upload {
days_after_initiation = 30
}
noncurrent_version_expiration {
noncurrent_days = 1
}
transition {
storage_class = "STANDARD_IA"
days = 30
}
expiration {
days = 30
}
}
}

how to create multiple folders inside an existing AWS bucket

How do I create multiple folders inside an existing bucket using Terraform?
For example: bucket/folder1/folder2
resource "aws_s3_bucket_object" "folder1" {
bucket = "${aws_s3_bucket.b.id}"
acl = "private"
key = "Folder1/"
source = "/dev/null"
}
While Nate's answer is correct, it would lead to a lot of code duplication. A better solution in my opinion is to work with a list and loop over it.
Create a variable (variable.tf file) that contains a list of possible folders:
variable "s3_folders" {
type = "list"
description = "The list of S3 folders to create"
default = ["folder1", "folder2", "folder3"]
}
Then alter the piece of code you already have:
resource "aws_s3_bucket_object" "folders" {
count = "${length(var.s3_folders)}"
bucket = "${aws_s3_bucket.b.id}"
acl = "private"
key = "${var.s3_folders[count.index]}/"
source = "/dev/null"
}
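On Terraform 0.12.6 and newer, a for_each variant is also worth a look as a sketch, since it keys each object by folder name rather than by list position and so avoids recreating objects when the list order changes:

resource "aws_s3_bucket_object" "folders" {
  # One object per folder name; adding or removing names doesn't disturb the rest.
  for_each = toset(var.s3_folders)

  bucket = aws_s3_bucket.b.id
  acl    = "private"
  key    = "${each.value}/"
  source = "/dev/null"
}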
Apply the same logic as you did to create the first directory.
resource "aws_s3_bucket_object" "folder1" {
bucket = "${aws_s3_bucket.b.id}"
acl = "private"
key = "Folder1/Folder2/"
source = "/dev/null"
}
There are no tips for Windows users, but this should work for you.
Slightly easier than using an empty file as "source"
resource "aws_s3_bucket_object" "output_subdir" {
bucket = "${aws_s3_bucket.file_bucket.id}"
key = "output/"
content_type = "application/x-directory"
}
resource "aws_s3_bucket_object" "input_subdir" {
bucket = "${aws_s3_bucket.file_bucket.id}"
key = "input/"
content_type = "application/x-directory"
}