Is there a way to use Terraform (or another IaC solution) to create the remote secrets for a multi-primary deployment of Istio?

We use Terraform to deploy our infrastructure in AWS. I can find ways to set up everything else required for a multi-primary deployment of Istio. It seems like the remote secret created by istioctl to enable service discovery across clusters is essentially a kubeconfig file embedded in a Kubernetes Secret. The only documentation of this structure seems to be in the code of istioctl.
Is there a way to generate this secret manually so that it can be included in the IaC deployment of the rest of the mesh?
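For what it's worth, the object that istioctl create-remote-secret emits is just a Kubernetes Secret wrapping a kubeconfig, so the structure the question describes can be reproduced with the Terraform kubernetes provider. Below is a minimal sketch, assuming the provider is already configured against the cluster that should receive the secret; the cluster name and kubeconfig value are placeholders, and the name/label/annotation keys mirror what istioctl generates at the time of writing, so verify them against your Istio version:

variable "remote_kubeconfig" {
  description = "Kubeconfig granting istiod access to the remote cluster's API server"
  type        = string
  sensitive   = true
}

resource "kubernetes_secret" "istio_remote_cluster2" {
  metadata {
    name      = "istio-remote-secret-cluster2"   # naming convention used by istioctl
    namespace = "istio-system"
    labels = {
      "istio/multiCluster" = "true"              # tells istiod to watch this secret
    }
    annotations = {
      "networking.istio.io/cluster" = "cluster2"
    }
  }
  data = {
    # the key is the remote cluster's name; the value is the kubeconfig for that cluster
    # (the kubernetes provider base64-encodes data entries for you)
    cluster2 = var.remote_kubeconfig
  }
}

The harder part is producing a kubeconfig with credentials istiod can use long-term (istioctl creates a service account token for this), so treat the above as a sketch of the Secret shape rather than a complete replacement for the tool.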

Related

Terraform Google cloud service account issue

I'm trying to create a GKE cluster through Terraform and I'm facing an issue with service accounts. In our enterprise, the service accounts to be used by Terraform are created in a project named svc-accnts, which resides in a folder named prod.
I'm trying to create the GKE cluster in a different folder, Dev, in a project named apigw. Through Terraform, when I use a service account with the necessary permissions that resides in the project apigw, it works fine.
But when I try to use a service account with the same permissions that resides in a different folder, I get this error:
Error: googleapi: Error 403: Kubernetes Engine API has not been used in project 8075178406 before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/container.googleapis.com/overview?project=8075178406 then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry.
where 8075178406 is the project number of svc-accnts
Why does it try to enable the API in svc-accnts when the GKE cluster is created in apigw? Are service accounts not meant to be used across folders?
Thanks.
The error you provide is not about permissions of the service account. Maybe you did not change the project in the provider? Remember, you can have multiple providers of the same type (google) that point to different projects; see the sketch after the links below. A code example would provide more information.
See:
https://medium.com/scalereal/how-to-use-multiple-aws-providers-in-a-terraform-project-672da074c3eb (this is for AWS, but same idea)
https://www.terraform.io/language/providers/configuration
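For example, a minimal sketch of aliased google providers pinned to different projects (project IDs, region, and cluster settings are placeholders):

provider "google" {
  alias   = "svc_accnts"
  project = "svc-accnts-project-id"
  region  = "us-central1"
}

provider "google" {
  alias   = "apigw"
  project = "apigw-project-id"
  region  = "us-central1"
}

resource "google_container_cluster" "primary" {
  provider           = google.apigw   # created in the apigw project
  name               = "gke-cluster"
  location           = "us-central1"
  initial_node_count = 1
}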
Looks like this is a known issue and it happens through the gcloud CLI as well.
https://issuetracker.google.com/180053712
The workaround is to enable the Kubernetes Engine API on the project (svc-accnts), and then it works fine. I was hesitant to do that, as I thought it might create the resources in that project.
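If you want to capture that workaround in Terraform as well, enabling the API on the service-account project can be sketched like this (the project ID is a placeholder); enabling an API does not by itself create any GKE resources in that project:

resource "google_project_service" "container_api" {
  project            = "svc-accnts-project-id"   # the project hosting the Terraform service account
  service            = "container.googleapis.com"
  disable_on_destroy = false                     # keep the API enabled if this resource is removed
}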

GKE Secrets or Google Secret Manager

Does anyone know in which cases to choose Kubernetes Secrets instead of Google Secret Manager, and the reverse? What are the differences between the two?
With Kubernetes Secrets (K8S Secrets), you use a built-in feature of K8S. You store your sensitive values in Secret objects and mount them on the pods that require them, or expose them as environment variables (a minimal sketch follows after the note below).
PRO
If one day you want to deploy on AWS, Azure, or on-prem, still on K8S, the behavior will be the same; there is no update to make in your code.
CONS
The secrets are only accessible from the K8S cluster; it's impossible to reuse them with other GCP services.
Note: with GKE this is not a problem, because the etcd component is automatically encrypted with a key from the KMS service, keeping the secrets encrypted at rest. But this is not true of every K8S installation, especially on premises, where the secrets may be kept in plain text. Be aware of this part of the security.
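For illustration, a minimal Terraform sketch of a Kubernetes Secret exposed to a pod as an environment variable (mounting it as a volume works similarly); this assumes the hashicorp/kubernetes provider is already configured, and all names and values are placeholders:

resource "kubernetes_secret" "db_credentials" {
  metadata {
    name      = "db-credentials"
    namespace = "default"
  }
  data = {
    # the provider base64-encodes these values for you
    password = "change-me"
  }
}

resource "kubernetes_deployment" "app" {
  metadata {
    name = "app"
  }
  spec {
    replicas = 1
    selector {
      match_labels = {
        app = "app"
      }
    }
    template {
      metadata {
        labels = {
          app = "app"
        }
      }
      spec {
        container {
          name  = "app"
          image = "nginx:1.25"
          env {
            name = "DB_PASSWORD"
            value_from {
              secret_key_ref {
                name = kubernetes_secret.db_credentials.metadata[0].name
                key  = "password"
              }
            }
          }
        }
      }
    }
  }
}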
Secret Manager is a vault managed by Google. You have an API to read and write secrets, and the IAM service checks the authorization.
PRO
It's a Google Cloud service, and you can access it from any GCP service (Compute Engine, Cloud Run, App Engine, Cloud Functions, GKE, ...) as long as you are authorized to access it.
CONS
It's a Google Cloud-specific product; you are locked in.
You can use them together via this sync service: https://external-secrets.io/
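For completeness, a minimal Terraform sketch of creating a Secret Manager secret and granting a service account read access (the secret value, project, and service account are placeholders):

resource "google_secret_manager_secret" "db_password" {
  secret_id = "db-password"

  replication {
    auto {}   # "automatic = true" on older versions of the google provider
  }
}

resource "google_secret_manager_secret_version" "db_password" {
  secret      = google_secret_manager_secret.db_password.id
  secret_data = "change-me"   # placeholder; inject the real value from a secure source
}

resource "google_secret_manager_secret_iam_member" "reader" {
  secret_id = google_secret_manager_secret.db_password.secret_id
  role      = "roles/secretmanager.secretAccessor"
  member    = "serviceAccount:my-app@my-project.iam.gserviceaccount.com"   # placeholder
}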

Terraform provider using dynamic IP

So I have a tricky problem: I am using Terraform to create infrastructure in the cloud, and I'm using the IP of the load balancer created by GCP as the address needed by the Vault provider.
provider "vault" {
address = local.vault_add
token = ""
version = "~> 2.14.0"
}
But terraform apply gives an error because it won't wait until the LB IP is generated, and it tries to communicate with Vault using the default value, localhost. Is there any way to solve this problem without splitting the Vault configuration from the rest?
Not really - you will almost certainly find it best to configure Vault separately.
If you think about it for a moment, you will see that you have a chicken-and-egg situation: you want the Vault provider to pull secrets from Vault to support the creation of your infrastructure, but Vault doesn't exist yet, so there's nowhere to pull the secrets from. So you need Vault to set up your infrastructure, but you need to set up your infrastructure to have Vault.
Your best approach will be to set up Vault separately, then it will be running, unsealed, populated, and available to use for your other Terraform operations.
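One common way to split it, sketched below with a GCS state backend and placeholder names, is to run the Vault/load-balancer infrastructure as its own root module, export the address as an output, and have the dependent configuration read it through terraform_remote_state:

# Stage 1 (vault-infra root module): publish the load balancer address.
# Assumes a google_compute_address.vault_lb resource is defined in this module.
output "vault_address" {
  value = "https://${google_compute_address.vault_lb.address}:8200"
}

# Stage 2 (the rest of the infrastructure): read the address from stage 1's state.
data "terraform_remote_state" "vault_infra" {
  backend = "gcs"
  config = {
    bucket = "my-terraform-state"   # placeholder
    prefix = "vault-infra"
  }
}

provider "vault" {
  address = data.terraform_remote_state.vault_infra.outputs.vault_address
  # token or another auth method supplied via the environment (e.g. VAULT_TOKEN)
}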

What would be the best way to manage cloud credentials as part of an Azure DevOps build pipeline?

We are going to be creating build/deploy pipelines in Azure DevOps to provision infrastructure in Google Cloud Platform (GCP) using Terraform. In order to execute the Terraform provisioning script, we have to provide the GCP credentials so it can connect to our GCP account. I have a credential file (JSON) that can be referenced in the Terraform script. However, being new to build/deploy pipelines, I'm not clear on exactly what to do with the credential file. It is something we don't want to hard-code in the TF script, and we don't want it generally available to just anybody who has access to the TF scripts. Where exactly would I put the credential file to secure it from prying eyes while making it available to the build pipeline? Would I put it on an actual build server?
I'd probably use build variables, or store the variables in Key Vault and pull them at deployment time. Storing secrets on the build agent is worse, because that means you are locked in to that build agent.
https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/deploy/azure-key-vault?view=azure-devops
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch
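Either way, the Terraform side can stay free of hard-coded credentials: declare a sensitive variable and let the pipeline supply it (from a secret pipeline variable or a Key Vault-backed variable group) through the TF_VAR_ environment-variable mechanism. A minimal sketch with a placeholder project and region:

variable "gcp_credentials_json" {
  description = "Contents of the GCP service account key, supplied by the pipeline"
  type        = string
  sensitive   = true
}

provider "google" {
  credentials = var.gcp_credentials_json
  project     = "my-gcp-project"   # placeholder
  region      = "us-central1"
}

In the pipeline, the value would be exposed to the Terraform step as an environment variable named TF_VAR_gcp_credentials_json (secret variables usually have to be mapped to environment variables explicitly in the task), so the key file never has to live in the repository or be written to the agent's disk.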

Kubernetes 1.2alpha8 AWS Container Registry Integration

I was trying to integrate Kubernetes with the AWS Container Registry (ECR). From what I have read, it sounds like it should be automatically set up if the cluster is deployed to AWS, which my cluster is.
I also granted the IAM roles the necessary permissions to pull from ECR, but I still get "unable to pull image" when trying to deploy on Kubernetes. It also says authentication failed.
I really just wanted to see if anyone else was having issues, or if someone was able to pull an image from AWS ECR and how they accomplished it.