I have two Cloud Run services. Service U has Unauthenticated access open to all users. For Service R I want access Restricted so that only Service U can invoke it.
This gist has a pretty succinct implementation using the CLI. My services are configured with Terraform and I'm trying to translate, but also understand:
Based on this, I thought I could grant Service U access to Service R by adding U's service account (I set service_account_name in Service U's google_cloud_run_service template spec to service-u-sa@abcdefg.iam.gserviceaccount.com) in the same way I open up access to all users. Here is the allUsers version:
resource "google_cloud_run_service" "service_r" {
  name = local.service_name
  # ... rest of the service definition
}

resource "google_cloud_run_service_iam_member" "run_all_users" {
  service  = google_cloud_run_service.service_r.name
  location = google_cloud_run_service.service_r.location
  role     = "roles/run.invoker"
  member   = "allUsers"

  depends_on = [
    google_cloud_run_service.service_r,
  ]
}
And I amended it to be for just one service account with:
resource "google_cloud_run_service_iam_member" "run_all_users" {
  service  = google_cloud_run_service.service_r.name
  location = google_cloud_run_service.service_r.location
  role     = "roles/run.invoker"
  member   = "serviceAccount:service-u-sa@abcdefg.iam.gserviceaccount.com"

  depends_on = [
    google_cloud_run_service.service_b,
  ]
}
This does not seem to work.
However, adding a data source that creates a policy does seem to work:
data "google_iam_policy" "access_policy" {
  binding {
    role = "roles/run.invoker"
    members = [
      "serviceAccount:service-u-sa@abcdefg.iam.gserviceaccount.com",
    ]
  }
}

resource "google_cloud_run_service_iam_policy" "cloud_run_policy" {
  location    = google_cloud_run_service.service_r.location
  project     = google_cloud_run_service.service_r.project
  service     = google_cloud_run_service.service_r.name
  policy_data = data.google_iam_policy.access_policy.policy_data
}
I've read in this SO answer (and elsewhere) that service accounts are identities as well as resources. Is that what is happening here? That is, rather than using the service account service-u-sa@abcdefg.iam.gserviceaccount.com as an identity, am I attaching it to Service R as a "resource"? Is that what a "policy" is in this context? And is there anywhere in the Cloud Run UI where I can see these relationships?
OK, I will try to clarify the wording and the situation, even though I couldn't spot what changed between your two latest pieces of code.
Duality
Yes, service accounts have a duality: they are identities AND resources. And because they are resources, you can grant an identity a role on them (especially to perform impersonation).
Access Policy
It's simply a binding between an identity and a role. You then apply that binding to a resource to grant the identity that role on the resource. This trio is an IAM authorization policy, or "policy" for short.
Service and Service Account
Your question is hard to understand because you mix up the Cloud Run service and the service account.
A Cloud Run service has an identity: the runtime service account.
A Cloud Run service CAN'T have access to another Cloud Run service. But the identity of a Cloud Run service can access another Cloud Run service.
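To make that distinction concrete in Terraform: the runtime identity is set on the service itself, and it is that identity which later gets granted roles/run.invoker. A minimal sketch, assuming a hypothetical service account, project, and image name:

```hcl
# Sketch only: account_id, project, and image are placeholders.
resource "google_service_account" "service_u" {
  account_id   = "service-u-sa"
  display_name = "Runtime identity of Service U"
}

resource "google_cloud_run_service" "service_u" {
  name     = "service-u"
  location = "us-central1"

  template {
    spec {
      # This is the identity Service U runs as, and therefore the
      # identity that must be granted roles/run.invoker on Service R.
      service_account_name = google_service_account.service_u.email

      containers {
        image = "gcr.io/my-project/service-u" # placeholder image
      }
    }
  }
}
```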
That being said, there is no real difference in outcome between your two latest pieces of code. Well, actually there is a difference, but it makes the second definition much more restrictive than the first one.
In the latest one, you use ....._iam_policy. That REPLACES the whole existing policy: in other words, the "access_policy" overrides all the existing permissions.
In the one before, you use ....._iam_member. That simply adds a binding to the resource's current policy, without changing the existing ones.
That's why the result is the same: service-u has the Invoker role on service_r.
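As a sketch, a working additive grant for just Service U's identity (using the question's hypothetical service account email) would look like this:

```hcl
# Adds a single binding without touching the rest of the policy.
resource "google_cloud_run_service_iam_member" "service_u_invoker" {
  service  = google_cloud_run_service.service_r.name
  location = google_cloud_run_service.service_r.location
  role     = "roles/run.invoker"
  member   = "serviceAccount:service-u-sa@abcdefg.iam.gserviceaccount.com"
}
```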
Can you try again? The issue must be somewhere else.
Related
In Terraform I enable services like so:
resource "google_project_service" "apigateway" {
  service = "apigateway.googleapis.com"
}
Afterwards I ensure that I reference the service account of apigateway (service-123@gcp-sa-apigateway.iam.gserviceaccount.com) only after the resource was created.
Still, it sometimes happens that when using the SA's email, I get an error that the service account does not exist:
Error 400: Service account service-123@gcp-sa-apigateway.iam.gserviceaccount.com does not exist.
I double checked in API Explorer that the API is enabled!
This happens for apigateway in the same way as for others (e.g. cloudfunctions).
So I am wondering how do I ensure that the service account is created?
Naively I assumed creating google_project_service should do the trick, but that seems not to be true in every case. Documentation around Google-managed service accounts seems pretty sparse :(
As John Hanley remarks, you can create this dependency in Terraform with depends_on.
As you can see in the following snippet, the service account will be created first, and the key will not be created until the service account exists.
resource "google_service_account" "service_account" {
  account_id   = "terraform-test"
  display_name = "Service Account"
}

resource "google_service_account_key" "mykey" {
  service_account_id = google_service_account.service_account.id
  public_key_type    = "TYPE_X509_PEM_FILE"

  depends_on = [google_service_account.service_account]
}
Also, if the service account has already been created on GCP, only the key statement is executed.
It is important to note that the account you use for this configuration needs the IAM permissions required to create a service account.
Found out about google_project_service_identity.
Since I saw this problem with cloudfunctions, you could create a google_project_service_identity for cloudfunctions and hope for a more detailed error message.
Sadly, this is not available for all services, e.g. apigateway.
For apigateway specifically, Google Support confirmed the (undocumented) behavior that the SA gets created lazily when the first resource is created.
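The google_project_service_identity approach can be sketched like this (it lives in the google-beta provider; the project variable is a placeholder):

```hcl
# Enable the API first.
resource "google_project_service" "cloudfunctions" {
  service = "cloudfunctions.googleapis.com"
}

# Explicitly ask GCP to provision the per-service identity, so that
# later IAM grants referencing its email don't race its creation.
resource "google_project_service_identity" "cloudfunctions" {
  provider = google-beta
  project  = var.project_id
  service  = "cloudfunctions.googleapis.com"

  depends_on = [google_project_service.cloudfunctions]
}
```

Resources that need the identity's email can then reference google_project_service_identity.cloudfunctions.email, which also gives Terraform the dependency ordering for free.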
I have the following multi-account setup with AWS SSO:
An account called "infrastructure-owner". Under this account, there is a role called "SomeAccessLevel" where I can click to sign-in the web console.
Another account called "infrastructure-consumer". Under this account there is the same role called "SomeAccessLevel" where I can click to sign-in the web console. There may be other roles.
Account "infrastructure-owner" owns resources (for example S3 buckets, DynamoDB tables, or VPNs), typically with read/write access. This account is somewhat protected and rarely used. Account "infrastructure-consumer" merely has read access to resources in "infrastructure-owner". This account is used often by multiple people/services. For example, production data pipelines run in "infrastructure-consumer" and have read-only rights to S3 buckets in "infrastructure-owner". However, from time to time, new data may be added manually to these S3 buckets by signing in to "infrastructure-owner".
I would like to provision this infrastructure with Terraform. I am unable to provide permissions for "infrastructure-consumer" to access resources from "infrastructure-owner". I've read dozens of blog posts on AWS multi-account / SSO / Terraform but I still cannot do it. At this point, I cannot even do it manually in the web console.
Please note that "SomeAccessLevel" is a role created by AWS that I cannot modify (typically called AWSReservedSSO_YOURNAMEHERE_RANDOMSTRING). Also, I cannot give permissions to particular users, since these users may not be owned by "infrastructure-consumer"; users access this account via SSO using a role.
The following Terraform code is an example DynamoDB table created in the "infrastructure-owner" that I would like to read in the "infrastructure-consumer" account (any role):
# Terraform config
terraform {
  required_version = ">= 1.0.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.44"
    }
  }

  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "YOUR_ORGANIZATION_HERE"

    workspaces {
      name = "YOUR_TF_WORKSPACE_NAME_HERE" # linked to "infrastructure-owner"
    }
  }
}

# Local provider
provider "aws" {
  profile = "YOUR_AWS_PROFILE_NAME_HERE" # linked to "infrastructure-owner"
  region  = "eu-central-1"
}

# Example resource that I would like to access from other accounts like "infrastructure-consumer"
resource "aws_dynamodb_table" "my-database" {
  # Basic
  name         = "my-database"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "uuid"

  # Key
  attribute {
    name = "uuid"
    type = "S"
  }
}
# YOUR CODE TO ALLOW "infrastructure-consumer" TO READ THE TABLE.
It could also happen that there is a better architecture for this use case. I am trying to follow general practices for AWS multi-account for production environments, and Terraform for provisioning them.
Thank you!
I assume you mean AWS accounts and not IAM accounts (users).
I remember that roles assumed via AWS SSO have something called permission sets, which are no more than policies with API actions allowed/denied while assuming the role. I don't know exactly how AWS SSO influences role trust in AWS, but you could have a role in the infrastructure-owner account that trusts anything in the infrastructure-consumer account, i.e. trusting "arn:aws:iam::${var.infrastructure-consumer's account}:root"
To achieve that with Terraform you would run it in your management account (SSO Administrator's Account) and make that trust happen.
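A sketch of that trust relationship in Terraform, assuming a placeholder variable for the consumer account ID and an AWS-managed read-only policy (names are illustrative, not the questioner's actual setup):

```hcl
# Trust policy: any principal in the consumer account may assume this role.
data "aws_iam_policy_document" "consumer_trust" {
  statement {
    effect  = "Allow"
    actions = ["sts:AssumeRole"]

    principals {
      type        = "AWS"
      identifiers = ["arn:aws:iam::${var.consumer_account_id}:root"]
    }
  }
}

# Role living in the infrastructure-owner account.
resource "aws_iam_role" "cross_account_read" {
  name               = "cross-account-read"
  assume_role_policy = data.aws_iam_policy_document.consumer_trust.json
}

# Grant read-only DynamoDB access to whoever assumes the role.
resource "aws_iam_role_policy_attachment" "dynamodb_read" {
  role       = aws_iam_role.cross_account_read.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonDynamoDBReadOnlyAccess"
}
```

Principals in the consumer account then call sts:AssumeRole on this role's ARN to read the table.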
I have created in TF (0.11.14) a GCP role, attached it to a service account, and also created a key for the latter, as follows:
resource "google_service_account_key" "my_service_account_key" {
  service_account_id = "${google_service_account.my_service_account.id}"
}
I then take the private_key as output in the following way:
output "my_service_account_private_key" {
  value     = "${google_service_account_key.my_service_account_key.private_key}"
  sensitive = true
}
Which prints a very long string along the lines of
ewogICJK49fo34KFo4 .... 49k92kljg==
Assuming the role has permissions enabling read/write to a GCS bucket, how can I pass the above credential / private key to a (GKE) pod / deployment, so that the pods are granted the specific service account (and therefore are able to perform what the corresponding permissions allow, as for example reading / writing to a bucket)?
Your main steps are:
1. Create a service account.
2. Provide the necessary roles for your service account to work with the GCS bucket.
3. Save the account key as a Kubernetes Secret.
4. Use the service account to configure and deploy the application.
I believe you have steps 1 and 2 covered. I found two examples (1, 2) that might be of some assistance for the remaining steps.
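Step 3 can be sketched in Terraform with the kubernetes provider (the secret name is an assumption; note that private_key is base64-encoded, so it must be decoded before being stored):

```hcl
# Store the generated key as a Kubernetes Secret.
resource "kubernetes_secret" "gcs_key" {
  metadata {
    name = "gcs-service-account-key" # hypothetical name
  }

  data = {
    "key.json" = base64decode(google_service_account_key.my_service_account_key.private_key)
  }
}
```

For step 4, the pod mounts this secret as a volume and sets GOOGLE_APPLICATION_CREDENTIALS to the mounted key.json path, so the Google client libraries pick up the credentials automatically.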
I am trying to assign an IAM policy document to an existing Cloud Build service account, but it's failing for some reason.
The following is my IAM policy:
data "google_iam_policy" "vulznotepolicy" {
  binding {
    role = "roles/containeranalysis.notes.occurrences.viewer"
    members = [
      "serviceAccount:<project_number>@cloudbuild.gserviceaccount.com",
    ]
  }
}
The following is the policy assignment to the service account:
resource "google_service_account_iam_policy" "buildsa" {
  service_account_id = "serviceaccount:<project_number>@cloudbuild.gserviceaccount.com"
  policy_data        = data.google_iam_policy.vulznotepolicy.policy_data
}
The service_account_id doesn't accept the format I have provided. I also tried giving just the <project_number> and it still doesn't accept it. Not sure what the issue is.
As rightly suggested by John, I granted the roles to the service account at the project level using the following:
resource "google_project_iam_policy" "buildsa" {
  project     = var.project_id
  policy_data = data.google_iam_policy.vulznotepolicy.policy_data
}
Although this works, it can cause serious problems, as follows. Please proceed with caution.
Since this is an authoritative operation, it can lock you out of your account if not carefully managed. From the Terraform docs: "It's not recommended to use google_project_iam_policy with your provider project to avoid locking yourself out, and it should generally only be used with projects fully managed by Terraform. If you do use this resource, it is recommended to import the policy before applying the change."
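A less risky, additive alternative is google_project_iam_member, which only adds this one binding rather than replacing the whole project policy (a sketch; the project number variable is a placeholder):

```hcl
# Grants just this role to the Cloud Build service account, leaving
# every other binding in the project's IAM policy untouched.
resource "google_project_iam_member" "cloudbuild_notes_viewer" {
  project = var.project_id
  role    = "roles/containeranalysis.notes.occurrences.viewer"
  member  = "serviceAccount:${var.project_number}@cloudbuild.gserviceaccount.com"
}
```

Note that the member prefix must be serviceAccount: with a capital A, and the email uses the project number, not the project ID.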
I can see that you’re trying to give the Cloud Build Service account some IAM permissions using Terraform.
I would start by reading this document about IAM Roles on Cloud Build [1], then you can check how the Cloud Build Service Account behaves [2], and then how to configure it [3].
Since you’re using Terraform, I would also have a look here [4].
These can help you understand why you’re running into this issue, as pointed out in a comment.
[1] https://cloud.google.com/cloud-build/docs/iam-roles-permissions
[2] https://cloud.google.com/cloud-build/docs/cloud-build-service-account
[3] https://cloud.google.com/cloud-build/docs/securing-builds/configure-access-for-cloud-build-service-account#before_you_begin
[4] https://www.terraform.io/docs/providers/google/r/google_service_account_iam.html
Here is the terraform code I have used to create a service account and bind a role to it:
resource "google_service_account" "sa-name" {
  account_id   = "sa-name"
  display_name = "SA"
}

resource "google_project_iam_binding" "firestore_owner_binding" {
  role = "roles/datastore.owner"
  members = [
    "serviceAccount:sa-name@${var.project}.iam.gserviceaccount.com",
  ]

  depends_on = [google_service_account.sa-name]
}
The above code worked great... except that it removed the datastore.owner role from every other service account in the project that it was previously assigned to. We have a single project that many teams use, and there are service accounts managed by different teams. My Terraform code only contains our team's service accounts, so we could end up breaking other teams' service accounts.
Is there another way to do this in terraform?
This, of course, can be done via the GCP UI or the gcloud CLI without any issue or without affecting other SAs.
From the Terraform docs, google_project_iam_binding is authoritative: it "Sets the IAM policy for the project and replaces any existing policy already attached." That means it completely replaces the members for the given role.
To just add a role to a new service account, without touching the other members of that role, you should use the resource google_project_iam_member:
resource "google_service_account" "sa-name" {
  account_id   = "sa-name"
  display_name = "SA"
}

resource "google_project_iam_member" "firestore_owner_binding" {
  project = <your_gcp_project_id_here>
  role    = "roles/datastore.owner"
  member  = "serviceAccount:${google_service_account.sa-name.email}"
}
An extra change from your sample: using the service account resource's generated email attribute lets you remove the explicit depends_on. You don't need depends_on if you do it like this, and you avoid errors caused by misconfiguration.
Terraform can infer the dependency from the use of an attribute of another resource. Check the docs here to understand this behavior better.
It's a usual problem with Terraform: either you manage everything with it, or nothing. If you're in between, unexpected things can happen!
If you want to use Terraform, you have to import the existing bindings into the tfstate (here is the doc for the binding), and, of course, you have to add all the accounts to the Terraform file. If not, the bindings will be removed, but this time you will see the deletion in the terraform plan.