I will start by saying I am very new to both GCP and Terraform, so I hope there is a simple answer that I have just overlooked.
I am trying to create a GCP cloud function and then make it public using Terraform. I am able to create the function but not make it public, despite closely following the documentation's example: https://www.terraform.io/docs/providers/google/r/cloudfunctions_function.html
I receive the error "googleapi: Error 403: Permission 'cloudfunctions.functions.setIamPolicy' denied on resource ... (or resource may not exist)" when the google_cloudfunctions_function_iam_member resource is reached.
How can I make this function public? Does it have something to do with the account/API key I am using for credentials to create all these resources?
Thanks in advance.
My main.tf file:
provider "google" {
project = "my-project"
credentials = "key.json" #compute engine default service account api key
region = "us-central1"
}
terraform {
backend "gcs" {
bucket = "manually-created-bucket"
prefix = "terraform/state"
credentials = "key.json"
}
}
# create the storage bucket for our scripts
resource "google_storage_bucket" "source_code" {
  name          = "test-bucket-lh05111992"
  location      = "us-central1"
  force_destroy = true
}
# zip up function source code
data "archive_file" "my_function_script_zip" {
  type        = "zip"
  source_dir  = "../source/scripts/my-function-script"
  output_path = "../source/scripts/my-function-script.zip"
}
# add function source code to storage
resource "google_storage_bucket_object" "my_function_script_zip" {
  name   = "index.zip"
  bucket = google_storage_bucket.source_code.name
  source = "../source/scripts/my-function-script.zip"
}
# create the cloud function
resource "google_cloudfunctions_function" "function" {
  name                  = "send_my_function_script"
  description           = "This function is called in GTM. It sends a user's Google Analytics ID to BigQuery."
  runtime               = "nodejs10"
  available_memory_mb   = 128
  source_archive_bucket = google_storage_bucket.source_code.name
  source_archive_object = google_storage_bucket_object.my_function_script_zip.name
  trigger_http          = true
  entry_point           = "handleRequest"
}
# IAM entry for all users to invoke the function
resource "google_cloudfunctions_function_iam_member" "invoker" {
  project        = google_cloudfunctions_function.function.project
  region         = "us-central1"
  cloud_function = google_cloudfunctions_function.function.name
  role           = "roles/cloudfunctions.invoker"
  member         = "allUsers"
}
It seems the only problem with that example from the Terraform site is the "Cloud Functions IAM resources" section, which was modified in November 2019. You now have to specify these resources as explained here. For your use case (a public cloud function) I'd recommend following this configuration and just changing the "members" attribute to "allUsers", so it would be something like this:
resource "google_cloudfunctions_function_iam_binding" "binding" {
project = google_cloudfunctions_function.function.project
region = google_cloudfunctions_function.function.region
cloud_function = google_cloudfunctions_function.function.name
role = "roles/cloudfunctions.invoker"
members = [
"allUsers",
]
}
Finally, you can give it a test by modifying the functions you've already created here in the "Try this API" panel on the right, entering the proper resource and request body like this (make sure to enter the "resource" parameter correctly):
{
  "policy": {
    "bindings": [
      {
        "members": [
          "allUsers"
        ],
        "role": "roles/cloudfunctions.invoker"
      }
    ]
  }
}
In addition to adjusting the IAM roles as @chinoche suggested, I also discovered that I needed to modify the service account I was using to give it project owner permissions (I guess the default one I was using didn't have them). I updated my key.json and it finally worked.
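For reference, that kind of grant can also be written in Terraform, though it has to be applied with credentials that are already allowed to set project IAM policy, so a one-time manual grant is often simpler. A minimal sketch, with a placeholder service account email (roles/cloudfunctions.admin would be a narrower alternative to full owner):

# Sketch only: give the service account Terraform authenticates with the
# rights it needs; the member email below is a placeholder, not from the question.
resource "google_project_iam_member" "terraform_sa" {
  project = "my-project"
  role    = "roles/owner" # roles/cloudfunctions.admin is a narrower alternative
  member  = "serviceAccount:terraform-admin@my-project.iam.gserviceaccount.com"
}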
I want to create a publicly accessible Google Cloud bucket with uniform_bucket_level_access enabled using Terraform. None of the examples in the provider's docs for a public bucket include this setting.
When I try to use:
resource "google_storage_bucket_access_control" "public_rule" {
bucket = google_storage_bucket.a_bucket.name
role = "READER"
entity = "allUsers"
}
resource "google_storage_bucket" "a_bucket" {
name = <name>
location = <region>
project = var.project_id
storage_class = "STANDARD"
uniform_bucket_level_access = true
versioning {
enabled = false
}
}
I get the following error:
Error: Error creating BucketAccessControl: googleapi: Error 400: Cannot use ACL API to update bucket policy when uniform bucket-level access is enabled. Read more at https://cloud.google.com/storage/docs/uniform-bucket-level-access, invalid
If I remove the line for uniform access everything works as expected.
Do I have to use the google_storage_bucket_iam resource to achieve this?
You will have to use google_storage_bucket_iam. I like to use the member one so I don't accidentally clobber other IAM bindings, but you can use whatever your needs dictate.
resource "google_storage_bucket_iam_member" "member" {
bucket = google_storage_bucket.a_bucket.name
role = "roles/storage.objectViewer"
member = "allUsers"
}
EDIT: Use this instead of the google_storage_bucket_access_control resource that you have.
I am creating a cloud function resource with Terraform and wanted to override the default service account '@appspot.gserviceaccount.com' with a custom service account that has least-privilege permissions.
I've done the following, but once my Terraform resources are created and I check the cloud function's permissions tab, it still defaults to the original '@appspot.gserviceaccount.com':
resource "google_service_account" "service_account" {
account_id = "mysa"
display_name = "Service Account"
}
data "google_iam_policy" "cfunction_iam" {
binding {
role = google_project_iam_custom_role.cfunction_role.id
members = [
"serviceAccount:${google_service_account.service_account.email}",
]
}
binding {
role = "roles/cloudfunctions.developer"
members = [
"serviceAccount:${google_service_account.service_account.email}",
]
}
}
resource "google_cloudfunctions_function_iam_policy" "policy" {
project = google_cloudfunctions_function.function.project
region = google_cloudfunctions_function.function.region
cloud_function = google_cloudfunctions_function.function.name
policy_data = data.google_iam_policy.cfunction_iam.policy_data
}
resource "google_project_iam_custom_role" "cfunction_role" {
role_id = "customCFunctionRole"
title = "Custom Cloud Function Role"
description = "More granular permissions other than default #appspot SA"
permissions = [
"storage.objects.create",
"storage.multipartUploads.create",
"storage.objects.get",
"bigquery.tables.create",
"bigquery.tables.list",
"bigquery.tables.updateData",
"logging.logEntries.create",
]
}
Update: I've also set the service account parameter within the Cloud Function resource:
service_account_email = "${google_service_account.service_account.email}"
What am I missing here?
Thanks!
Adding my own answer here. After deleting the previous state and letting Terraform re-create all the resources, it picked up the correct service account as defined above.
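For anyone hitting the same symptom, a minimal sketch of where that parameter sits in the function resource (the name, source settings, and entry point here are placeholders, not taken from the question):

resource "google_cloudfunctions_function" "function" {
  name                  = "my-function"      # placeholder
  runtime               = "nodejs10"
  available_memory_mb   = 128
  source_archive_bucket = "my-source-bucket" # placeholder
  source_archive_object = "index.zip"        # placeholder
  trigger_http          = true
  entry_point           = "handleRequest"    # placeholder

  # Runtime identity: overrides the default @appspot.gserviceaccount.com account
  service_account_email = google_service_account.service_account.email
}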
Trying to assign a created role to a GCP service account, which is then used as a workload identity for a k8s deployment.
Terraform:
resource "google_project_iam_custom_role" "sign_blob_role" {
  permissions = ["iam.serviceAccounts.signBlob"]
  role_id     = "signBlob"
  title       = "Sign Blob"
}

resource "google_service_account_iam_member" "document_signer_workload" {
  service_account_id = module.document_signer_service_accounts.service_accounts_map.doc-sign.name
  role               = "roles/iam.workloadIdentityUser"
  member             = local.document_sign_sa
}

module "document_signer_service_accounts" {
  source  = "terraform-google-modules/service-accounts/google"
  version = "~> 3.0"

  project_id = var.gcp_project_name
  prefix     = "doc-sign-sa"
  names      = ["doc-sign"]
  project_roles = [
    "${var.gcp_project_name}=>roles/viewer",
    "${var.gcp_project_name}=>roles/storage.objectViewer",
    "${var.gcp_project_name}=>roles/iam.workloadIdentityUser",
    "${var.gcp_project_name}=>${google_project_iam_custom_role.sign_blob_role.name}"
  ]
  display_name = substr("GCP SA bound to K8S SA ${local.document_sign_sa}. Used to sign document.", 0, 100)
}
Error:
Error: Request "Create IAM Members roles/signBlob serviceAccount:staging-doc-sign@********************.iam.gserviceaccount.com for \"project \\\"********************\\\"\"" returned error: Error applying IAM policy for project "********************": Error setting IAM policy for project "********************": googleapi: Error 400: Role roles/signBlob is not supported for this resource., badRequest

  on .terraform/modules/document_signer_service_accounts/main.tf line 46, in resource "google_project_iam_member" "project-roles":
  46: resource "google_project_iam_member" "project-roles" {
When I do the same action in the UI, though, it allows me to assign the role.
What am I doing wrong here?
It seems the problem could be in the way you are calling the custom role:
"${var.gcp_project_name}=>${google_project_iam_custom_role.sign_blob_role.name}"
The custom role already belongs to the project, so it is not necessary to specify ${var.gcp_project_name}.
So the code should be something like:
project_roles = [
  "${var.gcp_project_name}=>roles/viewer",
  "${var.gcp_project_name}=>roles/storage.objectViewer",
  "${var.gcp_project_name}=>roles/iam.workloadIdentityUser",
  "${google_project_iam_custom_role.sign_blob_role.name}"
]
Edit 1
According to this documentation, this is the basic usage of the service-accounts module:
module "service_accounts" {
source = "terraform-google-modules/service-accounts/google"
version = "~> 2.0"
project_id = "<PROJECT ID>"
prefix = "test-sa"
names = ["first", "second"]
project_roles = [
"project-foo=>roles/viewer",
"project-spam=>roles/storage.objectViewer",
]
}
I think something may be wrong with the reference to the attribute from your resource. Nevertheless, I found a GitHub repository that contains some good examples of how to add a custom role to a service account:
# https://www.terraform.io/docs/providers/google/r/google_project_iam.html#google_project_iam_binding
resource "google_project_iam_binding" "new-roles" {
  role    = "projects/${var.project_id}/roles/${google_project_iam_custom_role.new-custom-role.role_id}"
  members = ["serviceAccount:${google_service_account.new.email}"]
}
I think you might find it useful for completing this task.
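Adapted to the question's resources, that pattern might look something like this (a sketch under assumptions: I haven't verified the module's output attributes against your module version, and I use google_project_iam_member rather than the binding variant to avoid clobbering other grants on the same role):

# Sketch: bind the custom role directly instead of passing it through the
# module's project_roles list. The role path mirrors the GitHub example above;
# google_project_iam_custom_role's name attribute yields the same full path.
resource "google_project_iam_member" "sign_blob" {
  project = var.gcp_project_name
  role    = "projects/${var.gcp_project_name}/roles/${google_project_iam_custom_role.sign_blob_role.role_id}"
  member  = "serviceAccount:${module.document_signer_service_accounts.service_accounts_map["doc-sign"].email}"
}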
I am trying to create a very simple structure on GCP using Terraform: a compute instance + storage bucket. I did some research across the GCP documentation, the Terraform documentation, and SO questions, and still can't understand what the trick is here. There is one suggestion to use google_project_iam_binding, but reading through some articles it seems to be dangerous (read: an insecure solution). There's also a general answer with only GCP descriptions, not using Terraform terms, which is still a bit confusing. And, concluding from the similar question here, I confirm that the domain name ownership was verified via the Google Console.
So, I ended up with the following:
data "google_iam_policy" "admin" {
binding {
role = "roles/iam.serviceAccountUser"
members = [
"user:myemail#domain.name",
"serviceAccount:${google_service_account.serviceaccount.email}",
]
}
}
resource "google_service_account" "serviceaccount" {
account_id = "sa-1"
}
resource "google_service_account_iam_policy" "admin-acc-iam" {
service_account_id = google_service_account.serviceaccount.name
policy_data = data.google_iam_policy.admin.policy_data
}
resource "google_storage_bucket_iam_policy" "policy" {
bucket = google_storage_bucket.storage_bucket.name
policy_data = data.google_iam_policy.admin.policy_data
}
resource "google_compute_network" "vpc_network" {
name = "vpc-network"
auto_create_subnetworks = "true"
}
resource "google_compute_instance" "instance_1" {
name = "instance-1"
machine_type = "f1-micro"
boot_disk {
initialize_params {
image = "cos-cloud/cos-stable"
}
}
network_interface {
network = google_compute_network.vpc_network.self_link
access_config {
}
}
}
resource "google_storage_bucket" "storage_bucket" {
name = "bucket-1"
location = "US"
force_destroy = true
website {
main_page_suffix = "index.html"
not_found_page = "404.html"
}
cors {
origin = ["http://the.domain.name"]
method = ["GET", "HEAD", "PUT", "POST", "DELETE"]
response_header = ["*"]
max_age_seconds = 3600
}
}
but when I run terraform apply, the logs show errors like these:
Error: Error setting IAM policy for service account 'trololo': googleapi: Error 403: Permission iam.serviceAccounts.setIamPolicy is required to perform this operation on service account trololo., forbidden

  on main.tf line 35, in resource "google_service_account_iam_policy" "admin-acc-iam":
  35: resource "google_service_account_iam_policy" "admin-acc-iam" {

Error: googleapi: Error 403: The bucket you tried to create is a domain name owned by another user., forbidden

  on main.tf line 82, in resource "google_storage_bucket" "storage_bucket":
plus some useless debug info. What's wrong? Which account is missing which permissions, and how do I assign them securely?
I found the problem. As always, in 90% of cases, the issue is sitting in front of the computer.
Here are the steps that helped me to understand and to resolve the problem:
I read a few more articles; this and this answer were especially helpful for understanding the relations between users, service accounts, and permissions
I understood that running terraform destroy is also very important, since there is no rollback of an unsuccessful deploy of new infrastructure changes (unlike DB migrations, for example) - you have to clean up either with destroy or manually
completely removed the "user:${var.admin_email}" user account IAM policy, since it's useless; everything has to be managed by the newly created service account
left the main service account (the one created manually, whose access key I downloaded) with most permissions untouched, since Terraform uses its credentials
and changed the IAM policy for the new service account to roles/iam.serviceAccountAdmin instead of serviceAccountUser - thanks @Wojtek_B for the hint
After this everything works smoothly!
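For completeness, a sketch of the reworked policy described in the last two steps (only the changed binding is shown; the user: member is gone and the role is serviceAccountAdmin instead of serviceAccountUser):

data "google_iam_policy" "admin" {
  binding {
    role = "roles/iam.serviceAccountAdmin"
    members = [
      "serviceAccount:${google_service_account.serviceaccount.email}",
    ]
  }
}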
My goal is to create a Terraform module that creates a child AWS account and a set of resources inside that account (for example, AWS Config rules).
The account is created with the following aws_organizations_account definition:
resource "aws_organizations_account" "account" {
name = "my_new_account"
email = "john#doe.org"
}
And an example aws_config_config_rule would be something like:
resource "aws_config_config_rule" "s3_versioning" {
name = "my-config-rule"
description = "Verify versioning is enabled on S3 Buckets."
source {
owner = "AWS"
source_identifier = "S3_BUCKET_VERSIONING_ENABLED"
}
scope {
compliance_resource_types = ["AWS::S3::Bucket"]
}
}
However, doing this creates the AWS Config rule in the master account, not the newly created child account.
How can I define the config rule to apply to the child account?
So, I was actually able to achieve this by defining a new provider in the module which assumes the OrganizationAccountAccessRole inside the newly created account.
Here's an example:
// Define new account
resource "aws_organizations_account" "my_new_account" {
  name  = "my_new_account"
  email = "john@doe.org"
}

provider "aws" {
  /* other provider config */
  assume_role {
    // Assume the organization access role
    role_arn = "arn:aws:iam::${aws_organizations_account.my_new_account.id}:role/OrganizationAccountAccessRole"
  }
  alias = "my_new_account"
}

resource "aws_config_config_rule" "s3_versioning" {
  // Tell resource to use the new provider
  provider = aws.my_new_account

  name        = "my-config-rule"
  description = "Verify versioning is enabled on S3 Buckets."

  source {
    owner             = "AWS"
    source_identifier = "S3_BUCKET_VERSIONING_ENABLED"
  }

  scope {
    compliance_resource_types = ["AWS::S3::Bucket"]
  }
}
However, it should be noted that defining the provider inside the module leads to a few quirks; notably, once you source this module you cannot delete it. If you do, Terraform will throw Error: Provider configuration not present, since you will also have removed the provider definition.
But if you don't plan on removing these accounts (or are okay with doing it manually when needed), this should be good!
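If that quirk does matter, one common alternative (a sketch, not part of the original setup) is to keep the aliased provider in the root module and pass it to the module explicitly, so removing the module later doesn't also remove the provider configuration:

provider "aws" {
  alias = "new_account"
  assume_role {
    role_arn = "arn:aws:iam::${aws_organizations_account.my_new_account.id}:role/OrganizationAccountAccessRole"
  }
}

# Hypothetical module name and path, shown only to illustrate the wiring.
module "account_baseline" {
  source = "./modules/account-baseline"
  providers = {
    aws = aws.new_account
  }
}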