I'm new to Terraform, so I'm sure this is an easy question.
I'm trying to deploy into GCP using Terraform.
I have 2 different environments, both in the same GCP project:
nonlive
live
I have alerts for each environment, so this is what I intend to achieve:
If I deploy into one environment, Terraform must create/update the resources for that environment but not touch the resources of the other environments.
I'm trying to use modules and conditions, similar to this:
module "enviroment_live" {
source = "./live"
module_create = (var.environment=="live")
}
resource "google_monitoring_alert_policy" "alert_policy_live" {
count = var.module_create ? 1 : 0
display_name = "Alert CPU LPProxy Live"
Problem:
When I deploy to the live environment, Terraform deletes the alerts of the nonlive environment, and vice versa.
Is it possible to update the resources of one environment without deleting those of the other?
Regards
As Marko E suggested, the solution was to use workspaces:
Terraform workspaces
The steps are:
Create a workspace for each environment.
On deploy (CI/CD), select the workspace before plan/apply:
terraform workspace select $ENVIRONMENT
Use conditions (as I explained before) to create/configure the resources; a sketch of this follows below.
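For example, a minimal sketch of keying the condition directly off the selected workspace (the alert details are illustrative, not taken from the original configuration):

resource "google_monitoring_alert_policy" "alert_policy_live" {
  # Created only when the "live" workspace is selected.
  count        = terraform.workspace == "live" ? 1 : 0
  display_name = "Alert CPU LPProxy Live"
  combiner     = "OR"

  conditions {
    display_name = "CPU utilization too high"
    condition_threshold {
      filter          = "resource.type = \"gce_instance\" AND metric.type = \"compute.googleapis.com/instance/cpu/utilization\""
      comparison      = "COMPARISON_GT"
      threshold_value = 0.8
      duration        = "300s"
    }
  }
}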
I'm new to GCP and Terraform, and I need some explanation about the topic in the title.
My problem:
I have 2 (or more) GCP projects under the same organization.
I want a Cloud Run service in project A to write to a bucket in project B.
I have two terraform projects, one for each GCP project.
My question is: how can I make things work?
Thanks in advance.
I created the bucket in project B.
I created the Cloud Run service in project A.
I created a service account in project A for the Cloud Run service.
In project B I created the binding, but something is not clear to me...
Add this to your project B Terraform:
resource "google_storage_bucket_iam_member" "grant_access_to_sa_from_project_a_to_this_bucket" {
provider = google
bucket = "<my_project_b_bucket_name"
role = "roles/storage.objectViewer"
member = "serviceAccount:my_service_account#project_a.iam.gserviceaccount.com"
}
Specify the role according to what you need. The list of GCS roles is here.
The docs for GCS bucket IAM policies are here.
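Since your Cloud Run service needs to write to the bucket, a read-only role like roles/storage.objectViewer will not be enough; a minimal sketch of a write-capable binding, reusing the placeholder bucket name and service account email from above (use roles/storage.objectAdmin instead if the service also needs to overwrite or delete objects):

resource "google_storage_bucket_iam_member" "allow_project_a_sa_to_write" {
  bucket = "<my_project_b_bucket_name>"
  role   = "roles/storage.objectCreator"
  member = "serviceAccount:my_service_account@project_a.iam.gserviceaccount.com"
}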
I am having issues deploying my Docker images to AWS ECR as part of a Terraform deployment, and I am trying to think through the best long-term strategy.
At the moment I have a Terraform remote backend in S3 and DynamoDB on, let's call it, my master account. I then have dev/test etc. environments in separate accounts. The Terraform deployment is currently run from my local machine (Mac) and uses the AWS 'master' account and its credentials, which in turn assumes a role in the target deployment account to create the resources as per:
provider "aws" { // tell terraform which SDK it needs to load
alias = "target"
region = var.region
assume_role {
role_arn = "arn:aws:iam::${var.deployment_account}:role/${var.provider_env_deployment_role_name}"
}
}
I am creating a number of ECS services with Fargate deployments. The container images are built in separate repos by GitHub Actions and saved as GitHub Packages. These package names and versions are deployed after the creation of the ECR repository and the service (maybe that's not ideal, thinking about it), and this is where the problems arise.
The process is to pull the image from GitHub Packages, retag it and upload it to the ECR using multiple executions of a null_resource local-exec. It works fine standalone but has problems as part of the Terraform process. I think the reason is that the other resources use the above provider to get permissions, but as null_resource does not accept a provider, it cannot get permissions that way. So I have been passing the AWS credential values into the shell. I'm not convinced this is really secure, but that's currently moot as it isn't working either. I get this error:
Error saving credentials: error storing credentials - err: exit status 1, out: `error storing credentials - err: exit status 1, out: `The specified item already exists in the keychain.``
Part of me thinks this is the wrong approach and that, as I migrate to deploying via a GitHub Action, I can separate the infrastructure deployment via Terraform from what is really the application deployment, and just use GitHub secrets to set the credential values and then run the script.
Alternatively, maybe the keychain problem just goes away and my process will work fine? And securely?
That's fine for this scenario, but it isn't really a generic approach for all my use cases.
I am shortly going to start deploying multiple AWS Lambda functions with Docker containers. I haven't done it before, but it looks like the process is going to be: create the ECR repository, deploy the container, deploy the Lambda function. This really implies that the container deployment should be integral to the Terraform deployment, which loops back to my issue with the local-exec.
I found Actions to deploy to ECR, which would imply splitting the deployment into multiple files, but that seems inelegant and potentially brittle.
Maybe there is a simple solution, but given where I am trying to go with this, what is my best approach?
I know this isn't a complete answer, but you should be pulling your AWS creds from environment variables. I don't really understand whether you need credentials for different accounts, but if you do, swap them during the progress of your action. See https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html. Terraform should pick these up and automatically use them for AWS access.
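For example, a minimal sketch of setting the standard variables in a shell before running Terraform (the values are placeholders):

# The AWS provider reads these standard environment variables automatically.
export AWS_ACCESS_KEY_ID="AKIA..."        # placeholder
export AWS_SECRET_ACCESS_KEY="..."        # placeholder
export AWS_DEFAULT_REGION="eu-west-1"     # placeholder region

terraform plan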
Instead of those hard-coded access keys/secret access keys, I'd suggest making use of GitHub's and AWS's ability to assume a role through temporary credentials with OIDC: https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-amazon-web-services
You'd likely only define one initial role that you'd authenticate into, and from there assume into the other accounts you're deploying into.
These assume-role credentials are only good for an hour and don't carry the operational overhead of having to rotate them.
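A minimal sketch of what that looks like in a GitHub Actions workflow, assuming the OIDC identity provider and the IAM role already exist in AWS (the role ARN and region are placeholders):

permissions:
  id-token: write   # required so the job can request an OIDC token
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Configure AWS credentials via OIDC
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/github-actions-deploy   # placeholder
          aws-region: eu-west-1
      - name: Terraform
        run: terraform init && terraform plan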
As suggested by Kevin Buchs' answer...
My primary issue was related to deploying from a Mac and the use of the keychain. As this was not on the critical path, I went around it and set up a GitHub Action.
The Action loaded environment variables from GitHub secrets for my 'master' AWS account credentials:
AWS_ACCESS_KEY_ID: ${{ secrets.NK_AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.NK_AWS_SECRET_ACCESS_KEY }}
I also loaded the target account's credentials into environment variables in the same way, BUT with the prefix TF_VAR_:
TF_VAR_DEVELOP_AWS_ACCESS_KEY_ID: ${{ secrets.DEVELOP_AWS_ACCESS_KEY_ID }}
TF_VAR_DEVELOP_AWS_SECRET_ACCESS_KEY: ${{ secrets.DEVELOP_AWS_SECRET_ACCESS_KEY }}
I then declare Terraform variables, which are automatically populated from those environment variables:
variable "DEVELOP_AWS_ACCESS_KEY_ID" {
description = "access key for the dev account"
type = string
}
variable "DEVELOP_AWS_SECRET_ACCESS_KEY" {
description = "secret access key for the dev account"
type = string
}
I then run a shell script with a local-exec:
resource "null_resource" "image-upload-to-importcsv-ecr" {
provisioner "local-exec" {
command = "./ecr-push.sh ${var.DEVELOP_AWS_ACCESS_KEY_ID} ${var.DEVELOP_AWS_SECRET_ACCESS_KEY} "
}
}
Within the script I can then use these arguments to set the credentials, e.g.:
AWS_ACCESS=$1
AWS_SECRET=$2
.....
export AWS_ACCESS_KEY_ID=${AWS_ACCESS}
export AWS_SECRET_ACCESS_KEY=${AWS_SECRET}
and the script now has the credentials to do whatever it needs.
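The push script itself isn't shown above; a hypothetical sketch of what ecr-push.sh might contain, assuming the source image has already been pulled locally (the account ID, region, repository, tag and source image name are all placeholders):

#!/usr/bin/env bash
set -euo pipefail

export AWS_ACCESS_KEY_ID=$1
export AWS_SECRET_ACCESS_KEY=$2

# Placeholders - adjust to your account, region and repository.
ACCOUNT_ID=123456789012
REGION=eu-west-1
REPO=importcsv
TAG=latest
ECR_URL=${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com

# Log Docker in to ECR using the exported credentials.
aws ecr get-login-password --region "${REGION}" | docker login --username AWS --password-stdin "${ECR_URL}"

# Retag the locally available image and push it to ECR.
docker tag ghcr.io/my-org/${REPO}:${TAG} "${ECR_URL}/${REPO}:${TAG}"
docker push "${ECR_URL}/${REPO}:${TAG}"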
I have created my AWS infrastructure using Terraform. The infrastructure includes Elastic Beanstalk apps, an application load balancer, S3, DynamoDB, VPC subnets and VPC endpoints.
The AWS infrastructure is provisioned locally using the Terraform commands shown below:
terraform init
terraform plan -var-file="terraform.tfvars"
terraform apply -auto-approve -var-file="terraform.tfvars"
The terraform.tfvars contains variables like region, instance type, access key etc.
I want to automate the build and deploy process of this Terraform infrastructure using AWS CodePipeline.
How can I achieve this task? What steps should I follow? Where should I save the terraform.tfvars file? What roles should I specify in the CodeBuild role? What about the manual process of auto-approve?
MY APPROACH: The entire process of CodeCommit/GitHub, CodeBuild, CodeDeploy, i.e. CodePipeline, is carried out through the AWS console. I started with GitHub as the source, and it is working (the GitHub repo includes my Terraform code for building the AWS infrastructure). Then for CodeBuild I need to specify the env variables and the buildspec.yml file, and this is the problem: locally I had a terraform.tfvars to do the job, but here I need to do it in the buildspec.yml file.
QUESTIONS: I am unaware of how to specify my terraform.tfvars credentials in the buildspec.yml file and what env variables to specify. I also know we need to specify roles in the CodeBuild project, but how do I effectively specify them? And how do I store the Terraform state in S3?
How can I achieve this task?
Use CodeCommit to store your Terraform code, CodeBuild to run terraform plan, terraform apply, etc., and CodePipeline to connect CodeCommit with CodeBuild.
What steps to follow?
There are many tutorials on the internet. Check this as an example:
https://medium.com/faun/terraform-deployments-with-aws-codepipeline-342074248843
Where to save the terraform.tfvars file?
Ideally, you should create one terraform.tfvars for the development environment, like terraform.tfvars.dev, and another one for the production environment, like terraform.tfvars.prod. Then, in your CodeBuild environment, choose the file using environment variables.
What roles to specify in the CodeBuild role?
Your CodeBuild role needs permissions to create, list, update and delete the resources your Terraform code manages. For each service involved, that's almost everything.
What about the manual process of auto-approve?
Usually, you run terraform plan in one CodeBuild environment to show what the changes to your environment are, and after a manual approval step you execute terraform apply -auto-approve in another CodeBuild environment. Check the tutorial above; it shows how to create this.
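For reference, a minimal sketch of what the apply-stage buildspec.yml could look like, assuming the S3 backend is configured in the Terraform code and the environment name is passed in as a CodeBuild environment variable (the Terraform version, variable names and file names are placeholders):

version: 0.2

env:
  variables:
    TF_ENV: dev   # placeholder; set per pipeline stage

phases:
  install:
    commands:
      # Install a pinned Terraform version (placeholder version).
      - wget -q https://releases.hashicorp.com/terraform/1.5.7/terraform_1.5.7_linux_amd64.zip
      - unzip terraform_1.5.7_linux_amd64.zip -d /usr/local/bin/
  build:
    commands:
      - terraform init
      - terraform apply -auto-approve -var-file="terraform.tfvars.${TF_ENV}"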
Asking the community if it's possible to do the following (I had no luck in finding further information).
I created a CI/CD pipeline with GitHub/Cloud Build/Terraform. I have Cloud Build build the Terraform configuration upon a GitHub pull request and merge to a new branch. However, I have the Cloud Build (default) service account used with least privilege.
The question is: I would like Terraform to pull permissions from an existing service account with least privilege (to prevent any exploits, etc.) once the Cloud Build trigger runs and initializes the Terraform configuration. In other words, Terraform should use an existing external SA to obtain the permissions it needs to build the infrastructure.
I tried to use a service account and bind roles to that service account, but an error occurs stating that the service account already exists.
The next step is for me to use a module, but I think this will also create a new SA with replicated roles.
If this is confusing, I apologize; I'm happy to refine the question to be more concise.
You have 2 solutions:
Use the Cloud Build service account when you execute your Terraform. Your provider looks like this:
provider "google" {
// Useless with Cloud Build
// credentials = file("${var.CREDENTIAL_FILE}}")
project = var.PROJECT_ID
region = "europe-west1"
}
But this solution implies granting several roles to Cloud Build only for the Terraform process. A custom role is a good choice for granting only what is required.
The second solution is to use a service account key file. Here again there are 2 options:
Cloud Build creates the service account, grants all the roles to it, generates a key and passes it to Terraform. After the Terraform execution, the service account is deleted by Cloud Build. A good solution, but you have to grant the Cloud Build service account the capability to grant itself any role and to generate a JSON key file. That's a lot of responsibility!
Use an existing service account and a key generated from it. But you have to secure the key and rotate it regularly. I recommend storing it securely in Secret Manager, but you have to manage the rotation yourself today. With this process, Cloud Build downloads the key (from Secret Manager) and passes it to Terraform, as sketched after the step below. Here again, the Cloud Build service account has the right to access secrets, which is a critical privilege. The step in Cloud Build is something like this:
steps:
  - name: gcr.io/cloud-builders/gcloud:latest
    entrypoint: "bash"
    args:
      - "-c"
      - |
        gcloud beta secrets versions access --secret=test-secret latest > my-secret-file.txt
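Terraform can then consume the downloaded key file in its provider block; a minimal sketch, assuming the file written by the step above is still present in the build workspace:

provider "google" {
  // Key downloaded from Secret Manager by the previous Cloud Build step.
  credentials = file("my-secret-file.txt")
  project     = var.PROJECT_ID
  region      = "europe-west1"
}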
We're using Terraform to spin up our infrastructure within AWS, and we have 3 separate environments: Dev, Stage and Prod.
Dev : Requires - public, private1a, privatedb and privatedb2 subnets
Stage & Prod : Requires - public, private_1a, private_1b, privatedb and privatedb2 subnets
I have main.tf, variables, dev.tfvars, stage.tfvars and prod.tfvars. I'm trying to understand how I can keep using the main.tf file that I currently use for the dev environment and create the resources required for stage and prod using the .tfvars files.
terraform apply -var-file=dev.tfvars
terraform apply -var-file=stage.tfvars (this should create subnet private_1b in addition to the other subnets)
terraform apply -var-file=prod.tfvars (this should create subnet private_1b in addition to the other subnets)
Please let me know if you need further clarification.
Thanks,
What you are trying to do is indeed the correct approach. You will also have to make use of Terraform workspaces.
Terraform starts with a single workspace named "default". This workspace is special both because it is the default and also because it cannot ever be deleted. If you've never explicitly used workspaces, then you've only ever worked on the "default" workspace.
Workspaces are managed with the terraform workspace set of commands. To create a new workspace and switch to it, you can use terraform workspace new; to switch environments you can use terraform workspace select; etc.
In essence, this means you will have a workspace for each environment.
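For example, creating and selecting the workspaces (the names are illustrative):

terraform workspace new dev
terraform workspace new production

# Later, before planning/applying for a given environment:
terraform workspace select dev
terraform workspace list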
Let's see some examples.
I have the following files:
main.tf
variables.tf
dev.tfvars
production.tfvars
main.tf
This file contains the VPC module (it can be any resource, of course). We reference the variables via the var. prefix:
module "vpc" {
source = "modules/vpc"
cidr_block = "${var.vpc_cidr_block}"
subnets_private = "${var.vpc_subnets_private}"
subnets_public = "${var.vpc_subnets_public}"
}
variables.tf
This file contains all our variables. Please note that we do not assign defaults here; this makes sure we are 100% certain that the values come from the .tfvars files.
variable "vpc_cidr_block" {}
variable "vpc_subnets_private" {
type = "list"
}
variable "vpc_subnets_public" {
type = "list"
}
That's basically it. Our .tfvars files will look like this:
dev.tfvars
vpc_cidr_block = "10.40.0.0/16"
vpc_subnets_private = ["10.40.0.0/19", "10.40.64.0/19", "10.40.128.0/19"]
vpc_subnets_public = ["10.40.32.0/20", "10.40.96.0/20", "10.40.160.0/20"]
production.tfvars
vpc_cidr_block = "10.30.0.0/16"
vpc_subnets_private = ["10.30.0.0/19", "10.30.64.0/19", "10.30.128.0/19"]
vpc_subnets_public = ["10.30.32.0/20", "10.30.96.0/20", "10.30.160.0/20"]
If I wanted to run Terraform for my dev environment, these are the commands I would use (assuming the workspaces are already created; see the Terraform workspace docs):
Select the dev environment: terraform workspace select dev
Run a plan to see the changes: terraform plan -var-file=dev.tfvars -out=plan.out
Apply the changes: terraform apply plan.out
You can replicate this for as many environments as you like.
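For completeness, a hypothetical sketch of how the VPC module could create one subnet per CIDR in the list, so that stage/prod get their extra private_1b subnet simply by adding another CIDR to their .tfvars (the resource and variable names here are illustrative, not taken from the actual module):

# modules/vpc (illustrative)
variable "vpc_id" {}

variable "subnets_private" {
  type = list(string)
}

resource "aws_subnet" "private" {
  # One subnet per CIDR block in the list passed from the .tfvars file.
  count      = length(var.subnets_private)
  vpc_id     = var.vpc_id
  cidr_block = var.subnets_private[count.index]

  tags = {
    Name = "private-${count.index}"
  }
}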