Using Terraform (0.11.14), I have created a GCP role, attached it to a service account, and also created a key for the latter as follows:
resource "google_service_account_key" "my_service_account_key" {
service_account_id = "${google_service_account.my_service_account.id}"
}
I then take the private_key as output in the following way:
output "my_service_account_private_key" {
value = "${google_service_account_key.my_service_account_key.private_key}"
sensitive = true
}
which prints a very long string along the lines of
ewogICJK49fo34KFo4 .... 49k92kljg==
Assuming the role has permissions enabling read/write to a GCS bucket, how can I pass the above credential / private key to a (GKE) pod / deployment, so that the pods run as that specific service account (and are therefore able to do what the corresponding permissions allow, for example reading/writing to a bucket)?
Your main steps are:
Create a service account.
Provide the necessary roles for your service account to work with the GCS bucket.
Save the account key as a Kubernetes Secret.
Use the service account to configure and deploy an application.
I believe you have steps 1 and 2 covered. I found two examples (1, 2) that might help with the remaining steps; a sketch of steps 3 and 4 follows below.
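For step 3, here is a minimal sketch in Terraform 0.11 syntax (it assumes a kubernetes provider already configured against your GKE cluster; the resource and secret names are illustrative). Note that the private_key output is base64-encoded JSON, so it must be decoded first:
resource "kubernetes_secret" "gcs_key" {
  metadata {
    name = "gcs-service-account-key"
  }

  # private_key is base64-encoded; decode it into the usual key.json
  data {
    "key.json" = "${base64decode(google_service_account_key.my_service_account_key.private_key)}"
  }
}
For step 4, mount the secret as a volume in your pod/deployment spec and point the GOOGLE_APPLICATION_CREDENTIALS environment variable at the mounted key.json; the Google client libraries inside the pod will then authenticate as that service account.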
Related
I have two Cloud Run services. Service U has Unauthenticated access open to all users. Service R I want Restricted so that only Service U can invoke it.
This gist has a pretty succinct implementation using the CLI. My services are configured with Terraform and I'm trying to translate, but also understand:
Based on this I thought I could allow Service U access to Service R by adding U's service account (I added service-u-sa@abcdefg.iam.gserviceaccount.com to Service U's google_cloud_run_service.spec.service_account_name) in the same way I open up access to all users. Here is allUsers:
resource "google_cloud_run_service" "service_r" {
name = local.service_name
# ... rest of the service definition
}
resource "google_cloud_run_service_iam_member" "run_all_users" {
service = google_cloud_run_service.service_r.name
location = google_cloud_run_service.service_r.location
role = "roles/run.invoker"
member = "allUsers"
depends_on = [
google_cloud_run_service.service_r,
]
}
And I amended it to be for just one service account with:
resource "google_cloud_run_service_iam_member" "run_all_users" {
service = google_cloud_run_service.service_r.name
location = google_cloud_run_service.service_r.location
role = "roles/run.invoker"
member = "serviceAccount:service-u-sa#abcdefg.iam.gserviceaccount.com
depends_on = [
google_cloud_run_service.service_r,
]
}
This does not seem to work.
However, adding a data source that creates a policy does seem to work:
data "google_iam_policy" "access_policy" {
binding {
role = "roles/run.invoker"
members = [
"serviceAccount:service-u-sa#abcdefg.iam.gserviceaccount.com",
]
}
}
resource "google_cloud_run_service_iam_policy" "cloud_run_policy" {
location = google_cloud_run_service.service_r.location
project = google_cloud_run_service.service_r.project
service = google_cloud_run_service.service_r.name
policy_data = data.google_iam_policy.access_policy.policy_data
}
I've read in this SO answer (and elsewhere) that service accounts are identities as well as resources. Is that what is happening here? That is, rather than using the service account service-u-sa@abcdefg.iam.gserviceaccount.com as an identity, am I attaching it to Service R as a "resource"? Is that what a "policy" is in this context? And is there anywhere in the Cloud Run UI where I can see these relationships?
OK, I will try to clarify the wording and the situation, even though I didn't catch what changed between your two latest pieces of code.
Duality
Yes, service accounts have a duality: they are identities AND resources. And because they are resources, you can grant an identity permissions on them (especially to perform impersonation).
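For instance, here is a minimal sketch (names are illustrative) of the "resource" side of that duality: an IAM binding set on a service account itself, allowing another identity to impersonate it:
resource "google_service_account_iam_member" "allow_impersonation" {
  # Here the service account is the *resource*; the member is the *identity*
  service_account_id = google_service_account.target.name
  role               = "roles/iam.serviceAccountTokenCreator"
  member             = "serviceAccount:caller@my-project.iam.gserviceaccount.com"
}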
Access Policy
It's simply a binding between an identity and a role. You then apply that binding to a resource to grant the identity the role on that resource. This trio is an IAM authorization policy, or "policy" for short.
Service and Service Account
Your question is hard to understand because you mix up the Cloud Run service and the service account.
A Cloud Run service has an identity: the runtime service account.
A Cloud Run service CAN'T have access to another Cloud Run service. But the identity of a Cloud Run service can access another Cloud Run service.
That being said, there is little difference between your two latest pieces of code. In fact yes, there is a difference: the second definition is much more restrictive than the first one.
In the latest one, you use ....._iam_policy. That means you REPLACE the whole policy. In other words, the "access_policy" overrides all the existing permissions on the service.
In the one before, you use ....._iam_member. That simply adds a binding to the resource's current policy, without changing the existing ones.
That's why the result is the same: service-u's service account has the Invoker role on service_r.
Can you try again? The issue is somewhere else.
I am trying to create an SNS billing alarm using CloudWatch for when the cost reaches a particular threshold. I can do this manually, but I'm trying to use Terraform. I'm a newbie to Terraform: when I create this using Terraform, it is created in the user account. I tried using the root access keys, but it still creates the resources in my user account. Now, I'm not sure, maybe my assumption is wrong: when I create the billing alarm in the management console, I do it using the root account.
Here is my code:
provider "aws" {
shared_config_files = ["/mnt/c/Users/{user}/.aws/config"]
shared_credentials_files = ["/mnt/c/Users/{user}/.aws/credentials"]
profile = "root"
# region = "us-east-2"
}
module "sns_topic" {
source = "/mnt/c/terraform-ansible-automate/sns"
aws_account_id = var.aws_account_id
aws_env = var.aws_env
email = var.email
}
module "cloudwatch" {
source = "/mnt/c/terraform-ansible-automate/cloudwatch"
# source = "/cloudwatch"
monthly_billing_threshold = var.monthly_billing_threshold
sns_topic_arn = [module.sns_topic.sns_cost_alert_topic_arn]
aws_account_id = var.aws_account_id
aws_env = var.aws_env
}
Could you please verify which account you are authenticated in while running your Terraform code?
Use the below command:
## It assumes that you have aws CLI pre-installed.
aws sts get-caller-identity
If you are already logged in to the root account (the desired account), it should work normally. If you are in any other account (the user account in your case), it is likely because the IAM user/role used for Terraform authentication lives in that account.
And then you have two choices in general.
Either assume a role in the desired account from the currently logged-in user. The role must have a trust relationship (policy) with your logged-in user (your user must be allowed to assume that role from the other account), and the role being assumed must also have the required permissions (policies attached) in the desired account to make the required changes. See the provider sketch after this list.
Please look into the HashiCorp documentation on provisioning resources across AWS accounts.
From the code perspective, you can refer to https://github.com/ishuar/stackoverflow-terraform/tree/main/aws/user_with_programmatic_access_assume_role/user_assuming_role_with_policies_attached as an example of how to use an IAM user to assume a role with the required access.
This is not a cross-account example, but it is similar; you only need to adjust the role_arn in the provider block of instance_and_sg_creation.
Or use an IAM user that already exists in the desired account and has the required permissions. Use that user's secrets for your Terraform authentication and make the changes. This is like any normal Terraform code execution.
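For the first choice, a minimal provider sketch (the role ARN and session name are illustrative): Terraform authenticates with your current credentials but performs all AWS calls under the assumed role. Note that CloudWatch billing metrics only exist in us-east-1:
provider "aws" {
  # Billing metrics are only published in us-east-1
  region = "us-east-1"

  assume_role {
    role_arn     = "arn:aws:iam::123456789012:role/TerraformBillingRole" # illustrative
    session_name = "terraform-billing"
  }
}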
I have the following multi-account setup with AWS SSO:
An account called "infrastructure-owner". Under this account, there is a role called "SomeAccessLevel" that I can click to sign in to the web console.
Another account called "infrastructure-consumer". Under this account there is the same role, "SomeAccessLevel", that I can click to sign in to the web console. There may be other roles.
Account "infrastructure-owner" owns resources (for example S3 buckets, DynamoDB tables, or VPNs), typically with read/write access. This account is somewhat protected and rarely used. Account "infrastructure-consumer" merely has read access to resources in "infrastructure-owner". This account is used often by multiple people/services. For example, production data pipelines run in "infrastructure-consumer" and have read-only rights to S3 buckets in "infrastructure-owner". However, from time to time, new data may be added manually to these S3 buckets by signing in to "infrastructure-owner".
I would like to provision this infrastructure with Terraform. I am unable to provide permissions for "infrastructure-consumer" to access resources from "infrastructure-owner". I've read dozens of blog posts on AWS multi-account / SSO / Terraform but I still cannot do it. At this point, I cannot even do it manually in the web console.
Please realize that "SomeAccessLevel" is a role created by AWS that I cannot modify (typically called AWSReservedSSO_YOURNAMEHERE_RANDOMSTRING). Also, I cannot give permissions to particular users, since these users may not be owned by "infrastructure-consumer". Also, users access this account via SSO using a role.
The following Terraform code is an example DynamoDB table created in the "infrastructure-owner" that I would like to read in the "infrastructure-consumer" account (any role):
# Terraform config
terraform {
required_version = ">= 1.0.0"
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 3.44"
}
}
backend "remote" {
hostname = "app.terraform.io"
organization = "YOUR_ORGANIZATION_HERE"
workspaces {
name = "YOUR_TF_WORKSPACE_NAME_HERE" # linked to "infrastructure-owner"
}
}
}
# Local provider
provider "aws" {
profile = "YOUR_AWS_PROFILE_NAME_HERE" # linked to "infrastructure-owner"
region = "eu-central-1"
}
# Example resource that I would like to access from other accounts like "infrastructure-consumer"
resource "aws_dynamodb_table" "my-database" {
# Basic
name = "my-database"
billing_mode = "PAY_PER_REQUEST"
hash_key = "uuid"
# Key
attribute {
name = "uuid"
type = "S"
}
}
# YOUR CODE TO ALLOW "infrastructure-consumer" TO READ THE TABLE.
It could also happen that there is a better architecture for this use case. I am trying to follow general practices for AWS multi-account for production environments, and Terraform for provisioning them.
Thank you!
I assume you mean AWS accounts and not IAM accounts (users).
I remember that roles assumed via AWS SSO have something called permission sets, which are no more than policies with API actions allowed/denied while assuming the role. I don't know exactly how AWS SSO influences how role trust works in AWS, but you could have a role in the infrastructure-owner account that trusts anything in the infrastructure-consumer account, i.e. trusting "arn:aws:iam::<infrastructure-consumer account id>:root".
To achieve that with Terraform, you would run it in your management account (the SSO administrator's account) and make that trust happen.
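Under those assumptions, the trust itself could be expressed in the infrastructure-owner account's configuration; a sketch (variable, role, and policy names are illustrative):
# Role in "infrastructure-owner" that "infrastructure-consumer" principals can assume
resource "aws_iam_role" "consumer_read" {
  name = "infrastructure-consumer-read"

  # Trust the whole consumer account; IAM in that account then decides
  # which of its principals may actually assume this role.
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { AWS = "arn:aws:iam::${var.infrastructure_consumer_account_id}:root" }
    }]
  })
}

# Read-only access to the example table
resource "aws_iam_role_policy" "consumer_read_dynamodb" {
  name = "read-my-database"
  role = aws_iam_role.consumer_read.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["dynamodb:GetItem", "dynamodb:BatchGetItem", "dynamodb:Query", "dynamodb:Scan"]
      Resource = aws_dynamodb_table.my-database.arn
    }]
  })
}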
Asking the community if it's possible to do the following (I've had no luck finding further information).
I am creating a CI/CD pipeline with GitHub, Cloud Build, and Terraform. Cloud Build applies the Terraform configuration upon a GitHub pull request and merge to a new branch. However, I keep the (default) Cloud Build service account at least privilege.
The question is: I would like Terraform to pull its permissions from an existing, least-privileged service account (to prevent exploits, etc.) once the Cloud Build trigger fires and Terraform initializes. In other words, Terraform would use an existing external service account to obtain the permissions it needs to build.
I tried creating a service account and binding roles to it, but an error occurs stating that the service account already exists.
My next step is to use a module, but I think this would also create a new service account with replicated roles.
If this is confusing, I apologize; I can refine the question to be more concise.
You have 2 solutions:
Use the Cloud Build service account when you execute your Terraform. Your provider looks like this:
provider "google" {
// Useless with Cloud Build
// credentials = file("${var.CREDENTIAL_FILE}}")
project = var.PROJECT_ID
region = "europe-west1"
}
But this solution implies granting several roles to Cloud Build just for the Terraform process. A custom role is a good choice for granting only what is required; a sketch of such a grant is below.
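For example, a minimal sketch of that grant (the variables and the custom role ID are illustrative):
resource "google_project_iam_member" "cloudbuild_terraform" {
  project = var.project_id
  # A custom role containing only the permissions Terraform actually needs
  role    = "projects/${var.project_id}/roles/terraformDeployer"
  member  = "serviceAccount:${var.project_number}@cloudbuild.gserviceaccount.com"
}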
The second solution is to use a service account key file. Here again there are two options:
Cloud Build creates the service account, grants all the roles on it, generates a key, and passes it to Terraform. After the Terraform execution, the service account is deleted by Cloud Build. A good solution, but you have to grant the Cloud Build service account the ability to grant itself any role and to generate a JSON key file. That's a lot of responsibility!
Use an existing service account and a key generated for it. But you have to secure the key and rotate it regularly. I recommend storing it securely in Secret Manager; for the rotation, you have to manage it yourself today. With this process, Cloud Build downloads the key (from Secret Manager) and passes it to Terraform. Here again, the Cloud Build service account has the right to access secrets, which is a critical privilege. The step in Cloud Build looks something like this:
steps:
- name: gcr.io/cloud-builders/gcloud:latest
entrypoint: "bash"
args:
- "-c"
- |
gcloud beta secrets versions access --secret=test-secret latest > my-secret-file.txt
I am trying to create resources using Terraform in a new GCP project. As part of that I want to grant roles/storage.legacyBucketWriter on a specific bucket to the Google-managed service account which runs Storage Transfer Service jobs (the pattern is project-[project-number]@storage-transfer-service.iam.gserviceaccount.com). I am using the following config:
resource "google_storage_bucket_iam_binding" "publisher_bucket_binding" {
bucket = "${google_storage_bucket.bucket.name}"
members = ["serviceAccount:project-${var.project_number}#storage-transfer-service.iam.gserviceaccount.com"]
role = "roles/storage.legacyBucketWriter"
}
To clarify, I want to do this so that when I create one-off transfer jobs using the JSON API, they don't fail the prerequisite checks.
When I run Terraform apply, I get the following:
Error applying IAM policy for Storage Bucket "bucket":
Error setting IAM policy for Storage Bucket "bucket": googleapi:
Error 400: Invalid argument, invalid
I think this is because the service account in question does not exist yet, as I cannot do this via the console either.
Is there any other service that I need to enable for the service account to be created?
It seems I am able to create/find the service account once I call this API:
https://cloud.google.com/storage/transfer/reference/rest/v1/googleServiceAccounts/get
for my project to get the email address.
Not sure if this is the best way, but it works.
Soroosh's reply is accurate: querying the API as per this doc (https://cloud.google.com/storage-transfer/docs/reference/rest/v1/googleServiceAccounts/) creates the service account, and Terraform will then run. But that leaves you making an API call outside Terraform for this to work, and ain't nobody got time for that; see the data-source sketch below.
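For completeness, newer versions of the google provider expose a data source that makes the same API call, so Terraform can handle it end to end. A sketch, assuming a provider version that includes google_storage_transfer_project_service_account (reading it calls googleServiceAccounts.get, which creates the per-project account as a side effect):
data "google_storage_transfer_project_service_account" "default" {
  project = "${var.project_id}"
}

resource "google_storage_bucket_iam_binding" "publisher_bucket_binding" {
  bucket = "${google_storage_bucket.bucket.name}"
  role   = "roles/storage.legacyBucketWriter"

  # The data source returns the transfer service account's email directly
  members = ["serviceAccount:${data.google_storage_transfer_project_service_account.default.email}"]
}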