Defining a ClusterRoleBinding for Terraform service account - google-cloud-platform

So I have a GCP service account that is Kubernetes Admin and Kubernetes Cluster Admin in the GCP Cloud Console.
I am now trying to grant this Terraform service account the cluster-admin ClusterRole in GKE, so it can manage all namespaces, via the following Terraform configuration:
data "google_service_account" "terraform" {
project = var.project_id
account_id = var.terraform_sa_email
}
# Terraform needs to manage cluster
resource "google_project_iam_member" "terraform-gke-admin" {
project = var.project_id
role = "roles/container.admin"
member = "serviceAccount:${data.google_service_account.terraform.email}"
}
# Terraform needs to manage K8S RBAC
# https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control#iam-rolebinding-bootstrap
resource "kubernetes_cluster_role_binding" "terraform_clusteradmin" {
depends_on = [
google_project_iam_member.terraform-gke-admin,
]
metadata {
name = "cluster-admin-binding-terraform"
}
role_ref {
api_group = "rbac.authorization.k8s.io"
kind = "ClusterRole"
name = "cluster-admin"
}
subject {
api_group = "rbac.authorization.k8s.io"
kind = "User"
name = data.google_service_account.terraform.email
}
# must create a binding on unique ID of SA too
subject {
api_group = "rbac.authorization.k8s.io"
kind = "User"
name = data.google_service_account.terraform.unique_id
}
}
However, this always returns the following error:
Error: clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "client" cannot create resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope
│
│ with module.kubernetes[0].kubernetes_cluster_role_binding.terraform_clusteradmin,
│ on kubernetes/terraform_role.tf line 15, in resource "kubernetes_cluster_role_binding" "terraform_clusteradmin":
│ 15: resource "kubernetes_cluster_role_binding" "terraform_clusteradmin" {
Any ideas what is going wrong here?
Could this be related to using Google Groups RBAC?
authenticator_groups_config {
  security_group = "gke-security-groups#${var.acl_group_domain}"
}

data "google_client_config" "provider" {}
provider "kubernetes" {
cluster_ca_certificate = module.google.cluster_ca_certificate
host = module.google.cluster_endpoint
token = data.google_client_config.provider.access_token
}
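As a point of comparison (an assumption about where the problem may lie, not a confirmed diagnosis): the error reports the request as coming from the user "client", so the API server is not seeing the Terraform service account's identity at all. It may be worth double-checking that the provider really authenticates with the OAuth access token, and that the endpoint and CA certificate are passed in the form the provider expects, roughly along these lines:

data "google_client_config" "provider" {}

provider "kubernetes" {
  # Assumes module.google exposes the raw endpoint and a base64-encoded CA
  # certificate, as the google_container_cluster attributes do.
  host                   = "https://${module.google.cluster_endpoint}"
  cluster_ca_certificate = base64decode(module.google.cluster_ca_certificate)
  token                  = data.google_client_config.provider.access_token
}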

Related

GCP Failure binding role cloudsql client to a SA through Terraform

When trying to bind the cloudsql client role to a SA in GCP with Terraform, the following error is produced:
Error setting IAM policy for service account '{}': googleapi: Error 400: Role roles/cloudsql.client is not supported for this resource., badRequest
This is my Terraform code in main.tf:
resource "google_service_account" "default" {
account_id = var.service_account_name
display_name = "ECB API Caller Cloud Function SA"
}
data "google_iam_policy" "default" {
binding {
members = ["serviceAccount:${google_service_account.default.email}"]
role = "roles/cloudsql.client"
}
}
resource "google_service_account_iam_policy" "default" {
service_account_id = google_service_account.default.name
policy_data = data.google_iam_policy.default.policy_data
}
Edit: Sorted
After further investigation I found a way. I'm not completely sure why, but as far as I understand, you cannot attach this role to the service account itself; it has to be granted the other way around, on the project. In any case, this snippet does the trick:
resource "google_service_account" "default" {
account_id = var.service_account_name
display_name = "ECB API Caller Cloud Function SA"
}
resource "google_project_iam_binding" "project" {
project = var.project_id
role = "roles/cloudsql.client"
members = [
"serviceAccount:${google_service_account.default.email}",
]
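For completeness, a hedged alternative sketch: google_project_iam_binding takes authoritative control of every member holding roles/cloudsql.client in the project, so if you only want to grant the role to this one service account without touching other members, a non-authoritative google_project_iam_member grant should work as well:

resource "google_project_iam_member" "cloudsql_client" {
  project = var.project_id
  role    = "roles/cloudsql.client"
  member  = "serviceAccount:${google_service_account.default.email}"
}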

GCP API gateway returning 403 saying managed service "is not enabled for the project"

I'm trying to access a public Cloud Run service and I'm not sure why I keep getting this error message ({"message":"PERMISSION_DENIED:API basic-express-api-1yy1jgrw4nwy2.apigateway.chrome-courage-336400.cloud.goog is not enabled for the project.","code":403}) when hitting the gateway's default hostname path with the API key in the query string. The config has a service account with the role needed to invoke Cloud Run services. All required APIs are also enabled. Here is a link to my entire codebase, but below is my API Gateway specific Terraform configuration.
resource "google_api_gateway_api" "basic_express" {
depends_on = [google_project_service.api_gateway, google_project_service.service_management, google_project_service.service_control]
provider = google-beta
api_id = "basic-express-api"
}
resource "google_api_gateway_api_config" "basic_express" {
depends_on = [google_project_service.api_gateway, google_project_service.service_management, google_project_service.service_control, google_api_gateway_api.basic_express]
provider = google-beta
api = google_api_gateway_api.basic_express.api_id
api_config_id = "basic-express-cfg"
openapi_documents {
document {
path = "api-configs/openapi-spec-basic-express.yaml"
contents = filebase64("api-configs/openapi-spec-basic-express.yaml")
}
}
lifecycle {
create_before_destroy = true
}
gateway_config {
backend_config {
google_service_account = google_service_account.apig_gateway_basic_express_sa.email
}
# https://cloud.google.com/api-gateway/docs/configure-dev-env?&_ga=2.177696806.-2072560867.1640626239#configuring_a_service_account
# when I added this terraform said that the resource already exists, so I had to tear down all infrastructure and re-provision - also did not make a difference, still getting a 404 error when trying to hit the gateway default hostname endpoint - this resource might be immutable...
}
}
resource "google_api_gateway_gateway" "basic_express" {
depends_on = [google_project_service.api_gateway, google_project_service.service_management, google_project_service.service_control, google_api_gateway_api_config.basic_express, google_api_gateway_api.basic_express]
provider = google-beta
api_config = google_api_gateway_api_config.basic_express.id
gateway_id = "basic-express-gw"
region = var.region
}
resource "google_service_account" "apig_gateway_basic_express_sa" {
account_id = "apig-gateway-basic-express-sa"
depends_on = [google_project_service.iam]
}
# "Identity to be used by gateway"
resource "google_project_iam_binding" "project" {
project = var.project_id
role = "roles/run.invoker"
members = [
"serviceAccount:${google_service_account.apig_gateway_basic_express_sa.email}"
]
}
# https://cloud.google.com/api-gateway/docs/configure-dev-env?&_ga=2.177696806.-2072560867.1640626239#configuring_a_service_account
Try:
PROJECT=[[YOUR-PROJECT]]
SERVICE="basic-express-api-1yy1jgrw4nwy2.apigateway.chrome-courage-336400.cloud.goog"
gcloud services enable ${SERVICE} \
--project=${PROJECT}
As others have pointed out, you need to enable the API's managed service. You can do this via Terraform with the google_project_service resource:
resource "google_project_service" "basic_express" {
project = var.project_id
service = google_api_gateway_api.basic_express.managed_service
timeouts {
create = "30m"
update = "40m"
}
disable_dependent_services = true
}
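Note that google_api_gateway_api.basic_express.managed_service should resolve to the same service name that appears in the 403 error (basic-express-api-1yy1jgrw4nwy2.apigateway.chrome-courage-336400.cloud.goog); until that managed service is enabled on the project, requests through the gateway are rejected with PERMISSION_DENIED.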

Terraform Codepipeline Deploy in Different Region

I'm trying to deploy my service in a region that has only just become available (Jakarta). It looks like CodePipeline is not available there, so I have to create the CodePipeline in the nearest region (Singapore) and deploy to the Jakarta region. It is also my first time setting up CodePipeline in Terraform, so I'm not sure whether I'm doing it right.
P.S. The default region of all this infrastructure is the "Jakarta" region. I will exclude the deploy part, since the issue shows up without it.
resource "aws_codepipeline" "pipeline" {
name = local.service_name
role_arn = var.codepipeline_role_arn
artifact_store {
type = "S3"
region = var.codepipeline_region
location = var.codepipeline_artifact_bucket_name
}
stage {
name = "Source"
action {
name = "Source"
category = "Source"
owner = "AWS"
provider = "CodeStarSourceConnection"
version = "1"
output_artifacts = ["SourceArtifact"]
region = var.codepipeline_region
configuration = {
ConnectionArn = var.codestar_connection
FullRepositoryId = "${var.team_name}/${local.repo_name}"
BranchName = local.repo_branch
OutputArtifactFormat = "CODEBUILD_CLONE_REF" // NOTE: Full clone
}
}
}
stage {
name = "Build"
action {
name = "Build"
category = "Build"
owner = "AWS"
provider = "CodeBuild"
version = "1"
input_artifacts = ["SourceArtifact"]
output_artifacts = ["BuildArtifact"]
run_order = 1
region = var.codepipeline_region
configuration = {
"ProjectName" = local.service_name
}
}
}
tags = {
Name = "${local.service_name}-pipeline"
Environment = local.env
}
}
Above is the Terraform configuration that I created, but it gives me an error like this:
│ Error: region cannot be set for a single-region CodePipeline
If I try to remove the region from the root block, Terraform will try to use the default region, which is Jakarta (and it will fail, since CodePipeline is not available in Jakarta).
│ Error: Error creating CodePipeline: RequestError: send request failed
│ caused by: Post "https://codepipeline.ap-southeast-3.amazonaws.com/": dial tcp: lookup codepipeline.ap-southeast-3.amazonaws.com on 103.86.96.100:53: no such host
You need to set up a provider alias with a different region. For example:
provider "aws" {
alias = "singapore"
region = "ap-southeast-1"
}
Then you deploy your pipeline to that region using the alias:
resource "aws_codepipeline" "pipeline" {
provider = aws.singapore
name = local.service_name
role_arn = var.codepipeline_role_arn
# ...
}
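Two related points, offered as assumptions rather than confirmed fixes: the error in the question ("region cannot be set for a single-region CodePipeline") suggests the region argument inside artifact_store and the actions should only be set for a cross-region pipeline, so for a pipeline living entirely in Singapore it can simply be dropped; and supporting resources the pipeline references, such as the artifact bucket and the CodeBuild project, would also need to exist in that same region, for example:

resource "aws_s3_bucket" "artifacts" {
  provider = aws.singapore
  # Hypothetical resource; reuses the bucket name variable from the question for illustration.
  bucket   = var.codepipeline_artifact_bucket_name
}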

google beta permissions not found terraform

I'm trying to create a reserved subnet for a regional load balancer. It is the first time I'm using the google-beta provider, and when I try to create the subnet using the following script...:
resource "google_compute_subnetwork" "proxy-subnet" {
provider = google-beta
project = "proyecto-pegachucho"
name = "website-net-proxy"
ip_cidr_range = "10.10.50.0/24"
region = "us-central1"
network = google_compute_network.HSBC_project_network.self_link
purpose = "INTERNAL_HTTPS_LOAD_BALANCER"
role = "ACTIVE"
}
... this error appears:
Error: Error creating Subnetwork: googleapi: Error 403: Required 'compute.subnetworks.create' permission for 'projects/proyecto-pegachucho/regions/us-central1/subnetworks/website-net-proxy'
More details:
Reason: forbidden, Message: Required 'compute.subnetworks.create' permission for 'projects/proyecto-pegachucho/regions/us-central1/subnetworks/website-net-proxy'
Reason: forbidden, Message: Required 'compute.networks.updatePolicy' permission for 'projects/proyecto-pegachucho/global/networks/hsbc-vpc-project'
on .terraform\modules\networking\networking.tf line 18, in resource "google_compute_subnetwork" "proxy-subnet":
18: resource "google_compute_subnetwork" "proxy-subnet" {
It doesn't make any sense, because my service account has the Owner role and those permissions are enabled. What could I do?
EDIT: I resolved it by adding the provider directly in the modules, like this:
provider "google-beta" {
project = var.project
region = var.region
credentials = "./mario.json"
}
resource "google_compute_health_check" "lb-health-check-global" {
name = var.healthckeck_name
check_interval_sec = var.check_interval_sec
timeout_sec = var.timeout_sec
healthy_threshold = var.healthy_threshold
unhealthy_threshold = var.unhealthy_threshold # 50 seconds
tcp_health_check {
port = var.healthckeck_port
}
}
resource "google_compute_region_health_check" "lb-health-check-regional" {
provider = google-beta
region = var.region
project = var.project
name = "healthcheck-regional"
check_interval_sec = var.check_interval_sec
timeout_sec = var.timeout_sec
healthy_threshold = var.healthy_threshold
unhealthy_threshold = var.unhealthy_threshold # 50 seconds
tcp_health_check {
port = var.healthckeck_port
}
}
I resolved this by putting the provider block inside the Terraform module instead of the main module (you can also configure two providers):
provider "google-beta" {
project = var.project
region = var.region
credentials = var.credentials
}
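For reference, a minimal sketch of the two-provider setup mentioned above, assuming the same project, region, and credentials variables: resources that need beta features reference the google-beta provider explicitly, while everything else uses the default google provider.

provider "google" {
  project     = var.project
  region      = var.region
  credentials = var.credentials
}

provider "google-beta" {
  project     = var.project
  region      = var.region
  credentials = var.credentials
}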

Terraform policy attachment to a role name in module

I'm trying to create data roles in three environments in AWS using Terraform.
One is a role in the root account. This role is used to log in to AWS and can assume the data roles in production and staging. This works fine and uses a separate module.
I have problems when trying to create the roles in prod and staging from a module.
My module's main.tf looks like this:
resource "aws_iam_role" "this" {
name = "${var.name}"
description = "${format("%s (managed by Terraform)", var.policy_description)}"
assume_role_policy = "${length(var.custom_principals) == 0 ? data.aws_iam_policy_document.assume_role.json : data.aws_iam_policy_document.assume_role_custom_principals.json}"
}
resource "aws_iam_policy" "this" {
name = "${var.name}"
description = "${format("%s (managed by Terraform)", var.policy_description)}"
policy = "${var.policy}"
}
data "aws_iam_policy_document" "assume_role" {
statement {
actions = ["sts:AssumeRole"]
principals {
type = "AWS"
identifiers = ["arn:aws:iam::${var.account_id}:root"]
}
}
}
data "aws_iam_policy_document" "assume_role_custom_principals" {
statement {
actions = ["sts:AssumeRole"]
principals {
type = "AWS"
identifiers = [
"${var.custom_principals}",
]
}
}
}
resource "aws_iam_role_policy_attachment" "this" {
role = "${aws_iam_role.this.name}"
policy_arn = "${aws_iam_policy.this.arn}"
}
I also have the following in output.tf:
output "role_name" {
value = "${aws_iam_role.this.name}"
}
Next I try to use the module to create two roles in prod and staging.
main.tf:
module "data_role" {
source = "../tf_data_role"
account_id = "${var.account_id}"
name = "data"
policy_description = "Role for data engineers"
custom_principals = [
"arn:aws:iam::${var.master_account_id}:root",
]
policy = "${data.aws_iam_policy_document.data_access.json}"
}
Then I'm trying to attach AWS managed policies like this:
resource "aws_iam_role_policy_attachment" "data_readonly_access" {
role = "${module.data_role.role_name}"
policy_arn = "arn:aws:iam::aws:policy/ReadOnlyAccess"
}
resource "aws_iam_role_policy_attachment" "data_redshift_full_access" {
role = "${module.data_role.role_name}"
policy_arn = "arn:aws:iam::aws:policy/AmazonRedshiftFullAccess"
}
The problem I encounter here is that when I run this module, the above two policies are attached in the root account rather than in staging. How can I fix this so it attaches the policies in staging?
I'll assume from your question that staging is its own AWS account, separate from your root account. From the Terraform docs:
You can define multiple configurations for the same provider in order to support multiple regions, multiple hosts, etc.
This also applies to creating resources in multiple AWS accounts. To create Terraform resources in two AWS accounts, follow these steps.
In your entrypoint main.tf, define aws providers for the accounts you'll be targeting:
# your normal provider targeting your root account
provider "aws" {
  version = "1.40"
  region  = "us-east-1"
}

provider "aws" {
  version = "1.40"
  region  = "us-east-1"
  alias   = "staging" # define custom alias

  # either use an assumed role or allowed_account_ids to target another account
  assume_role {
    role_arn = "arn:aws:iam::STAGINGACCOUNTNUMBER:role/Staging"
  }
}
(Note: the role arn must exist already and your current AWS credentials must have permission to assume it)
To use them in your module, call your module like this
module "data_role" {
source = "../tf_data_role"
providers = {
aws.staging = "aws.staging"
aws = "aws"
}
account_id = "${var.account_id}"
name = "data"
... remainder of module
}
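(As a side note, not something from the original answer: on Terraform 0.12 and later the providers map takes direct provider references rather than strings, so the equivalent call would look roughly like this.)

module "data_role" {
  source = "../tf_data_role"

  providers = {
    aws         = aws
    aws.staging = aws.staging
  }

  # ... remainder of module
}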
Then define the providers within your module like this:
provider "aws" {
alias = "staging"
}
provider "aws" {}
Now when you are declaring resources within your module, you can dictate which AWS provider (and hence which account) to create the resources in, e.g.:
resource "aws_iam_role" "this" {
provider = "aws.staging" # this aws_iam_role will be created in your staging account
name = "${var.name}"
description = "${format("%s (managed by Terraform)", var.policy_description)}"
assume_role_policy = "${length(var.custom_principals) == 0 ? data.aws_iam_policy_document.assume_role.json : data.aws_iam_policy_document.assume_role_custom_principals.json}"
}
resource "aws_iam_policy" "this" {
# no explicit provider is set here so it will use the "default" (un-aliased) aws provider and create this aws_iam_policy in your root account
name = "${var.name}"
description = "${format("%s (managed by Terraform)", var.policy_description)}"
policy = "${var.policy}"
}
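Following the same logic (a sketch, not something spelled out in the answer above), the aws_iam_role_policy_attachment resources in the calling configuration would also need to target the staging provider; otherwise they use the default provider and end up in the root account, which matches the behaviour described in the question. The same applies to the attachment inside the module itself.

resource "aws_iam_role_policy_attachment" "data_readonly_access" {
  provider = "aws.staging" # attach in the staging account, not the root account

  role       = "${module.data_role.role_name}"
  policy_arn = "arn:aws:iam::aws:policy/ReadOnlyAccess"
}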