I am still in the process of learning Terraform.
I am trying to deploy a Cloud SQL database and provide a default service account to access it.
The following piece of code does not work:
# create default service account
resource "google_service_account" "default_service_account" {
  account_id   = "${var.database_name}-${random_id.db_name_suffix.hex}"
  display_name = "Cloud SQL default Service Account for ${var.database_name}-${random_id.db_name_suffix.hex}"
}

# grant role sqlUser for default service account
resource "google_project_iam_member" "iam_binding_default_service_account" {
  project = var.project_id
  role    = "roles/cloudsql.instanceUser"
  member  = "serviceAccount:${default_service_account.account_id}.${module.project.project_id}.iam.gserviceaccount.com"
  depends_on = [
    google_service_account.default_service_account,
  ]
}
terraform plan complains with:
Error: Reference to undeclared resource
on database.tf line 78, in resource "google_project_iam_member" "iam_binding_default_service_account":
78: member = "serviceAccount:${default_service_account.account_id}.${module.project.project_id}.iam.gserviceaccount.com"
A managed resource "default_service_account" "account_id" has not been
declared in the root module.
I do not understand why the depends_on block does not seem to work, and why Terraform does not create the default_service_account before trying to populate iam_binding_default_service_account.
It should be (you forgot the google_service_account resource type):
member = "serviceAccount:${google_service_account.default_service_account.account_id}.${module.project.project_id}.iam.gserviceaccount.com"
I am creating a cloud function resource with Terraform and wanted to override the default service account '@appspot.gserviceaccount.com' with a custom service account that has least privileges.
I've done the following, but once my Terraform resources are created and I check the cloud function's permissions tab, it's still defaulting to the original '@appspot.gserviceaccount.com'.
resource "google_service_account" "service_account" {
account_id = "mysa"
display_name = "Service Account"
}
data "google_iam_policy" "cfunction_iam" {
binding {
role = google_project_iam_custom_role.cfunction_role.id
members = [
"serviceAccount:${google_service_account.service_account.email}",
]
}
binding {
role = "roles/cloudfunctions.developer"
members = [
"serviceAccount:${google_service_account.service_account.email}",
]
}
}
resource "google_cloudfunctions_function_iam_policy" "policy" {
project = google_cloudfunctions_function.function.project
region = google_cloudfunctions_function.function.region
cloud_function = google_cloudfunctions_function.function.name
policy_data = data.google_iam_policy.cfunction_iam.policy_data
}
resource "google_project_iam_custom_role" "cfunction_role" {
role_id = "customCFunctionRole"
title = "Custom Cloud Function Role"
description = "More granular permissions other than default #appspot SA"
permissions = [
"storage.objects.create",
"storage.multipartUploads.create",
"storage.objects.get",
"bigquery.tables.create",
"bigquery.tables.list",
"bigquery.tables.updateData",
"logging.logEntries.create",
]
}
Update: I've set the service account parameter within the Cloud Function resource as well:
service_account_email = "${google_service_account.service_account.email}"
What am I missing here?
Thanks!
Adding my own answer here. After deleting the previous state and letting Terraform re-create all the resources, it picked up the correct service account as defined in the description.
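If deleting the whole state is too drastic, a lighter option (assuming Terraform v0.15.2 or later) is to force recreation of just the function so it picks up the new service_account_email:
terraform apply -replace=google_cloudfunctions_function.function
On older versions, terraform taint google_cloudfunctions_function.function followed by terraform apply achieves the same thing.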
Trying to assign a created role to a GCP service account, which is then used as a workload identity for a k8s deployment.
Terraform:
resource "google_project_iam_custom_role" "sign_blob_role" {
  permissions = ["iam.serviceAccounts.signBlob"]
  role_id     = "signBlob"
  title       = "Sign Blob"
}

resource "google_service_account_iam_member" "document_signer_workload" {
  service_account_id = module.document_signer_service_accounts.service_accounts_map.doc-sign.name
  role               = "roles/iam.workloadIdentityUser"
  member             = local.document_sign_sa
}

module "document_signer_service_accounts" {
  source  = "terraform-google-modules/service-accounts/google"
  version = "~> 3.0"

  project_id = var.gcp_project_name
  prefix     = "doc-sign-sa"
  names      = ["doc-sign"]
  project_roles = [
    "${var.gcp_project_name}=>roles/viewer",
    "${var.gcp_project_name}=>roles/storage.objectViewer",
    "${var.gcp_project_name}=>roles/iam.workloadIdentityUser",
    "${var.gcp_project_name}=>${google_project_iam_custom_role.sign_blob_role.name}"
  ]
  display_name = substr("GCP SA bound to K8S SA ${local.document_sign_sa}. Used to sign document.", 0, 100)
}
Error:
Error: Request "Create IAM Members roles/signBlob serviceAccount:staging-doc-sign#********************.iam.gserviceaccount.com for \"project \\\"********************\\\"\"" returned error: Error applying IAM policy for project "********************": Error setting IAM policy for project "********************": googleapi: Error 400: Role roles/signBlob is not supported for this resource., badRequest
on .terraform/modules/document_signer_service_accounts/main.tf line 46, in resource "google_project_iam_member" "project-roles":
46: resource "google_project_iam_member" "project-roles" {
When I do the same action in the UI, though, it allows me to assign the role.
What am I doing wrong here?
It seems that it could be a problem with the way you are referencing the custom role:
"${var.gcp_project_name}=>${google_project_iam_custom_role.sign_blob_role.name}"
The custom role already belongs to the project, so it is not necessary to specify ${var.gcp_project_name}.
So the code should be something like:
project_roles = [
  "${var.gcp_project_name}=>roles/viewer",
  "${var.gcp_project_name}=>roles/storage.objectViewer",
  "${var.gcp_project_name}=>roles/iam.workloadIdentityUser",
  "${google_project_iam_custom_role.sign_blob_role.name}"
]
Edit 1
According to this documentation, this is the basic usage of the service-accounts module:
module "service_accounts" {
source = "terraform-google-modules/service-accounts/google"
version = "~> 2.0"
project_id = "<PROJECT ID>"
prefix = "test-sa"
names = ["first", "second"]
project_roles = [
"project-foo=>roles/viewer",
"project-spam=>roles/storage.objectViewer",
]
}
I think there might be something wrong with the reference to the attribute of your resource.
Nevertheless, I have found a GitHub repository that contains some good examples of how to add a custom role to a service account:
# https://www.terraform.io/docs/providers/google/r/google_project_iam.html#google_project_iam_binding
resource "google_project_iam_binding" "new-roles" {
  role    = "projects/${var.project_id}/roles/${google_project_iam_custom_role.new-custom-role.role_id}"
  members = ["serviceAccount:${google_service_account.new.email}"]
}
I think you might find it useful to complete this task.
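For the workload identity binding itself, the member string must reference the Kubernetes service account through the project's workload identity pool. A minimal sketch, assuming local.document_sign_sa is meant to follow that format (the default/doc-sign namespace and KSA name here are hypothetical):
# bind a Kubernetes service account to the GCP service account;
# the member format is serviceAccount:PROJECT_ID.svc.id.goog[K8S_NAMESPACE/K8S_SA_NAME]
resource "google_service_account_iam_member" "document_signer_workload" {
  service_account_id = module.document_signer_service_accounts.service_accounts_map.doc-sign.name
  role               = "roles/iam.workloadIdentityUser"
  member             = "serviceAccount:${var.gcp_project_name}.svc.id.goog[default/doc-sign]"
}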
I am trying to create a very simple structure on GCP using Terraform: a compute instance plus a storage bucket. I did some research across the GCP documentation, the Terraform documentation, and SO questions, and still can't understand what the trick is here. There is one suggestion to use google_project_iam_binding, but reading through some articles it seems to be dangerous (read: an insecure solution). There's also a general answer with only GCP descriptions, not using Terraform terms, which is still a bit confusing. And concluding from the similar question here, I confirm that the domain name ownership was verified via the Google Console.
So, I ended up with the following:
data "google_iam_policy" "admin" {
binding {
role = "roles/iam.serviceAccountUser"
members = [
"user:myemail#domain.name",
"serviceAccount:${google_service_account.serviceaccount.email}",
]
}
}
resource "google_service_account" "serviceaccount" {
account_id = "sa-1"
}
resource "google_service_account_iam_policy" "admin-acc-iam" {
service_account_id = google_service_account.serviceaccount.name
policy_data = data.google_iam_policy.admin.policy_data
}
resource "google_storage_bucket_iam_policy" "policy" {
bucket = google_storage_bucket.storage_bucket.name
policy_data = data.google_iam_policy.admin.policy_data
}
resource "google_compute_network" "vpc_network" {
name = "vpc-network"
auto_create_subnetworks = "true"
}
resource "google_compute_instance" "instance_1" {
name = "instance-1"
machine_type = "f1-micro"
boot_disk {
initialize_params {
image = "cos-cloud/cos-stable"
}
}
network_interface {
network = google_compute_network.vpc_network.self_link
access_config {
}
}
}
resource "google_storage_bucket" "storage_bucket" {
name = "bucket-1"
location = "US"
force_destroy = true
website {
main_page_suffix = "index.html"
not_found_page = "404.html"
}
cors {
origin = ["http://the.domain.name"]
method = ["GET", "HEAD", "PUT", "POST", "DELETE"]
response_header = ["*"]
max_age_seconds = 3600
}
}
but if I run terraform apply, the logs show me an error like this:
Error: Error setting IAM policy for service account 'trololo': googleapi: Error 403: Permission iam.serviceAccounts.setIamPolicy is required to perform this operation on service account trololo., forbidden
2020/09/28 19:19:34 [TRACE] statemgr.Filesystem: removing lock metadata file .terraform.tfstate.lock.info
on main.tf line 35, in resource "google_service_account_iam_policy" "admin-acc-iam":
35: resource "google_service_account_iam_policy" "admin-acc-iam" {
2020/09/28 19:19:34 [TRACE] statemgr.Filesystem: unlocking terraform.tfstate using fcntl flock
Error: googleapi: Error 403: The bucket you tried to create is a domain name owned by another user., forbidden
on main.tf line 82, in resource "google_storage_bucket" "storage_bucket":
and some useless debug info. What's wrong? Which account is missing which permissions, and how do I assign them securely?
I found the problem. As always, in 90% of cases the issue is sitting in front of the computer.
Here are the steps that helped me to understand and resolve the problem:
I read a few more articles; especially this and this answer were very helpful for understanding the relations between users, service accounts, and permissions.
I understood that running terraform destroy is also very important, since there is no rollback of an unsuccessful deploy of new infrastructure changes (unlike with DB migrations, for example) - you have to clean up either with destroy or manually.
I completely removed the "user:${var.admin_email}" user account IAM policy, since it is useless; everything has to be managed by the newly created service account.
I left the main service account with most permissions untouched (the one which was created manually and whose access key I downloaded), since Terraform uses its credentials.
And I changed the IAM policy for the new service account to roles/iam.serviceAccountAdmin instead of User - thanks @Wojtek_B for the hint.
After this everything works smoothly!
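For anyone hitting the same 403: the missing permission (iam.serviceAccounts.setIamPolicy) can also be granted to the account Terraform runs as, in Terraform itself. A minimal sketch, where var.project_id and the terraform@... service account email are hypothetical placeholders for your own:
# give the account Terraform runs as permission to set IAM
# policies on service accounts in the project
resource "google_project_iam_member" "terraform_sa_admin" {
  project = var.project_id
  role    = "roles/iam.serviceAccountAdmin"
  member  = "serviceAccount:terraform@${var.project_id}.iam.gserviceaccount.com"
}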
I will start by saying I am very new to both GCP and Terraform, so I hope there is a simple answer that I have just overlooked.
I am trying to create a GCP cloud function and then make it public using Terraform. I am able to create the function but not make it public, despite closely following the documentation's example: https://www.terraform.io/docs/providers/google/r/cloudfunctions_function.html
I receive the error "googleapi: Error 403: Permission 'cloudfunctions.functions.setIamPolicy' denied on resource ... (or resource may not exist)" when the google_cloudfunctions_function_iam_member resource is reached.
How can I make this function public? Does it have something to do with the account/API key I am using as credentials to create all these resources?
Thanks in advance.
My main.tf file:
provider "google" {
project = "my-project"
credentials = "key.json" #compute engine default service account api key
region = "us-central1"
}
terraform {
backend "gcs" {
bucket = "manually-created-bucket"
prefix = "terraform/state"
credentials = "key.json"
}
}
# create the storage bucket for our scripts
resource "google_storage_bucket" "source_code" {
name = "test-bucket-lh05111992"
location = "us-central1"
force_destroy = true
}
# zip up function source code
data "archive_file" "my_function_script_zip" {
type = "zip"
source_dir = "../source/scripts/my-function-script"
output_path = "../source/scripts/my-function-script.zip"
}
# add function source code to storage
resource "google_storage_bucket_object" "my_function_script_zip" {
name = "index.zip"
bucket = google_storage_bucket.source_code.name
source = "../source/scripts/my-function-script.zip"
}
#create the cloudfunction
resource "google_cloudfunctions_function" "function" {
name = "send_my_function_script"
description = "This function is called in GTM. It sends a users' google analytics id to BigQuery."
runtime = "nodejs10"
available_memory_mb = 128
source_archive_bucket = google_storage_bucket.source_code.name
source_archive_object = google_storage_bucket_object.my_function_script_zip.name
trigger_http = true
entry_point = "handleRequest"
}
# IAM entry for all users to invoke the function
resource "google_cloudfunctions_function_iam_member" "invoker" {
project = google_cloudfunctions_function.function.project
region = "us-central1"
cloud_function = google_cloudfunctions_function.function.name
role = "roles/cloudfunctions.invoker"
member = "allUsers"
}
It seems the only problem with that example from the Terraform site is the "Cloud Functions IAM resources", which were modified in Nov 2019. Now you have to specify these resources as explained here. For your use case (a public cloud function) I'd recommend you follow this configuration and just change the "members" attribute to "allUsers", so it'd be something like this:
resource "google_cloudfunctions_function_iam_binding" "binding" {
project = google_cloudfunctions_function.function.project
region = google_cloudfunctions_function.function.region
cloud_function = google_cloudfunctions_function.function.name
role = "roles/cloudfunctions.invoker"
members = [
"allUsers",
]
}
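One design note: google_cloudfunctions_function_iam_binding is authoritative for its role - it sets the complete list of members holding roles/cloudfunctions.invoker on the function and removes anyone else with that role. The google_cloudfunctions_function_iam_member form from the question is additive and leaves existing members in place, so prefer it if anything else also manages invoker grants; for a single public function, either works.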
Finally, you can give it a test and modify the functions you've already created here, in the "Try this API" panel on the right; enter the proper resource and a request body like this (make sure to enter the "resource" parameter correctly):
{
  "policy": {
    "bindings": [
      {
        "members": [
          "allUsers"
        ],
        "role": "roles/cloudfunctions.invoker"
      }
    ]
  }
}
In addition to adjusting the IAM roles as @chinoche suggested, I also discovered that I needed to modify the service account I was using to give it project owner permissions. (I guess the default one I was using didn't have them.) I updated my key.json and it finally worked.
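Project owner works, but it is far broader than the error requires. A narrower role such as roles/cloudfunctions.admin (which includes cloudfunctions.functions.setIamPolicy) should be enough for the deploying account; a sketch, where the deployer@... service account email is a hypothetical placeholder:
# narrower alternative to project owner for the account that deploys functions
resource "google_project_iam_member" "deployer_cf_admin" {
  project = "my-project"
  role    = "roles/cloudfunctions.admin"
  member  = "serviceAccount:deployer@my-project.iam.gserviceaccount.com"
}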
How do I import an existing AWS resource into Terraform state, where that resource exists within a different account?
terraform import module.mymodule.aws_iam_policy.policy arn:aws:iam::123456789012:policy/mypolicy
gives the following error:
Error: Cannot import non-existent remote object
While attempting to import an existing object to aws_iam_policy.policy, the
provider detected that no object exists with the given id. Only pre-existing
objects can be imported; check that the id is correct and that it is
associated with the provider's configured region or endpoint, or use
"terraform apply" to create a new remote object for this resource.
The resource was created in one account using a different provider defined within a module called mymodule:
module "mymodule" {
// ... define variables for the module
}
// within the module
provider "aws" {
alias = "cross-account"
region = "eu-west-2"
assume_role {
role_arn = var.provider_role_arn
}
}
resource "aws_iam_policy" "policy" {
provider = "aws.cross-account"
name = var.policy-name
path = var.policy-path
description = var.policy-description
policy = var.policy-document
}
How do I import cross-account resources?
Update: using the -provider flag, I get a different error:
Error: Provider configuration not present
To work with module.mymodule.aws_iam_policy.policy (import
id "arn:aws:iam::123456789012:policy/somepolicytoimport") its original provider
configuration at provider.aws.cross-account is required, but it has been
removed. This occurs when a provider configuration is removed while objects
created by that provider still exist in the state. Re-add the provider
configuration to destroy
module.mymodule.aws_iam_policy.policy (import id
"arn:aws:iam::123456789012:policy/somepolicytoimport"), after which you can remove
the provider configuration again.
I think you have to assume the role of the second account as follows.
provider "aws" {
assume_role {
role_arn = "arn:aws:iam::ACCOUNT_ID:role/ROLE_NAME"
session_name = "SESSION_NAME"
external_id = "EXTERNAL_ID"
}
}
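For the assume to succeed, the role in the second account must also trust your calling identity: its trust policy needs to allow sts:AssumeRole from your principal. A minimal sketch of such a role (created in the second account; SOURCE_ACCOUNT_ID and ROLE_NAME are placeholders):
# role in the second account that the provider assumes;
# SOURCE_ACCOUNT_ID stands in for the account Terraform runs from
resource "aws_iam_role" "cross_account" {
  name = "ROLE_NAME"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { AWS = "arn:aws:iam::SOURCE_ACCOUNT_ID:root" }
      Action    = "sts:AssumeRole"
    }]
  })
}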
[1] : https://www.terraform.io/docs/providers/aws/index.html
I got the same error while trying to import an AWS ACM certificate.
As a first step, before importing the resource, you need to create its configuration in the root module (or another relevant module):
resource "aws_acm_certificate" "cert" {
# (resource arguments)
}
Otherwise you'll get the following error:
Error: resource address "aws_acm_certificate.cert" does not exist in
the configuration.
Then you can import the resource by providing its relevant ARN:
$ terraform import aws_acm_certificate.cert <certificate-arn>
Like @ydaetskcoR mentioned in the comments - you don't need to assume the role of the second account if you're using v0.12.10+.
But Terraform does need access credentials for the second account - so please make sure you provide the relevant account's credentials (and not the source account's credentials), or you'll be stuck with Error: Cannot import non-existent remote object for a few hours like me (:
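In practice this can be as simple as pointing the AWS provider at a named profile for the target account before importing (the profile name here is hypothetical):
$ AWS_PROFILE=second-account terraform import aws_acm_certificate.cert <certificate-arn>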
You can use multiple provider configurations if you have credentials for the other account.
# This is used by default
provider "aws" {
  region     = "us-east-1"
  access_key = "my-access-key"
  secret_key = "my-secret-key"
}

provider "aws" {
  alias      = "another_account"
  region     = "us-east-1"
  access_key = "another-account-access-key"
  secret_key = "another-account-secret-key"
}

# To use the other configuration
resource "aws_instance" "foo" {
  provider = aws.another_account
  # ...
}
Here is the documentation: https://developer.hashicorp.com/terraform/language/providers/configuration#alias-multiple-provider-configurations
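One note tying this back to the original question: newer Terraform versions don't take a provider override on terraform import (the -provider flag was removed); the command uses whichever provider configuration the resource's provider argument points at, which is exactly what the "Provider configuration not present" error above is complaining about. So once the aliased configuration is present in the code and resolves to the second account's credentials, the original command should work unchanged:
terraform import module.mymodule.aws_iam_policy.policy arn:aws:iam::123456789012:policy/mypolicy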