I am setting up a basic provisioning script for multiple services in AWS, and I want to keep each section separate. Currently I have one directory in my repo, called vpc, for setting up the VPC and networking aspects.
I was wondering if there is a way to use one provider.tf file in the root of the repo. That way, when I eventually create another resource group (for example an EC2 instance plus its security groups, EBS volumes, etc.), I can create another directory in my repo and source the same provider.tf file.
So my directory structure would look like
/myrepo/
├── provider.tf
├── vpc/
│   └── vpc.tf
└── EC2/
    └── ec2.tf
I use AWS CodePipeline linked with an S3 bucket to deploy applications to some Elastic Beanstalk environments. It is a very simple pipeline with only 2 phases: one Source (S3 bucket -> war file) and one Deploy (EB reference).
Say I have an application called "app.war". When I deploy it manually to EB in AWS, an incremental number is appended to my application name (app-1.war, app-2.war, ...) based on how many times I have deployed an application with the same name.
I want to achieve this with CodePipeline. Is there something I can do? Some phases I have to configure? Variables?
I would like to rename the "code-pipeline-123abc..." name of my war file to something more specific to my application name, as with a manual deploy.
I'm new to Terraform, so I'm sure this is an easy question.
I'm trying to deploy into GCP using Terraform.
I have 2 different environments, both in the same GCP project:
nonlive
live
I have alerts for each environment, so this is what I intend: if I deploy into an environment, Terraform must create/update the resources for that environment but not update the resources of the other environments.
I'm trying to use modules and conditions, similar to this:
module "enviroment_live" {
  source        = "./live"
  module_create = (var.environment == "live")
}

resource "google_monitoring_alert_policy" "alert_policy_live" {
  count        = var.module_create ? 1 : 0
  display_name = "Alert CPU LPProxy Live"
  # ...
}
Problem:
When I deploy to the live environment, Terraform deletes the alerts of the nonlive environment, and vice versa.
Is it possible to update the resources of one environment without deleting those of the other?
Regards
As Marko E suggested, the solution was to use workspaces:
Terraform workspaces
The steps are:
Create a workspace for each environment.
On deploy (CI/CD), select the workspace before plan/apply:
terraform workspace select $ENVIRONMENT
Use conditions (as explained above) to create/configure the resources.
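With workspaces selected this way, the condition can key off terraform.workspace directly instead of a separate variable. A minimal sketch; the combiner and the conditions block are assumptions for illustration, since the question only shows display_name:

```hcl
resource "google_monitoring_alert_policy" "alert_policy_live" {
  # Created only when the "live" workspace is selected
  count = terraform.workspace == "live" ? 1 : 0

  display_name = "Alert CPU LPProxy Live"
  combiner     = "OR" # assumed; the provider requires a combiner

  # Hypothetical CPU condition, for illustration only
  conditions {
    display_name = "CPU utilization above 80%"
    condition_threshold {
      filter          = "metric.type=\"compute.googleapis.com/instance/cpu/utilization\""
      comparison      = "COMPARISON_GT"
      threshold_value = 0.8
      duration        = "300s"
    }
  }
}
```

Because each workspace keeps its own state, applying in the live workspace no longer touches the nonlive resources.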
I have created my AWS infrastructure using Terraform. The infrastructure includes Elastic Beanstalk apps, an application load balancer, S3, DynamoDB, VPC subnets and VPC endpoints.
The infrastructure is built locally using the Terraform commands shown below:
terraform init
terraform plan -var-file="terraform.tfvars"
terraform apply -auto-approve -var-file="terraform.tfvars"
The terraform.tfvars contains variables like region, instance type, access key, etc.
I want to automate the build and deploy of this Terraform infrastructure using AWS CodePipeline.
How can I achieve this task? What steps should I follow? Where do I save the terraform.tfvars file? What roles do I specify in the CodeBuild role? What about the manual process of auto-approve?
MY APPROACH: The entire codecommit/github, codebuild, codedeploy (i.e. codepipeline) process is carried out through the AWS console. I started with GitHub as the source, which is working (the GitHub repo includes my Terraform code for building the AWS infrastructure). Then for CodeBuild I need to specify the env variables and the buildspec.yml file, and this is the problem: locally I had a terraform.tfvars to do the job, but here I need to do it in the buildspec.yml file.
QUESTIONS: I am unaware how to specify my terraform.tfvars credentials in the buildspec.yml file, and what env variables to specify. I also know we need to specify roles in the CodeBuild project, but how to specify them effectively? And how to store the Terraform state in S3?
How can I achieve this task ?
Use CodeCommit to store your Terraform code, CodeBuild to run terraform plan, terraform apply, etc., and CodePipeline to connect CodeCommit with CodeBuild.
What steps to follow ?
There are many tutorials on the internet. Check this as an example:
https://medium.com/faun/terraform-deployments-with-aws-codepipeline-342074248843
Where to save the terraform.tfvars file ?
Ideally, you should create one terraform.tfvars for the development environment, like terraform.tfvars.dev, and another one for the production environment, like terraform.tfvars.prod. In your CodeBuild environment, choose the file using environment variables.
What roles to specify in the specific CodeBuild role ?
Your CodeBuild role needs the permissions to create, list, delete and update resources. Basically, within each service Terraform manages, that is almost every permission.
What about the manual process of auto-approve ?
Usually, you run terraform plan in one CodeBuild environment to show what the changes to your environment are, and after a manual approval, you execute terraform apply -auto-approve in another CodeBuild environment. Check the tutorial above; it shows how to set this up.
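For the state-in-S3 part of the question, which the steps above don't cover: Terraform supports an s3 backend with optional DynamoDB state locking. A minimal sketch; the bucket, key and table names are placeholders:

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket" # placeholder bucket name
    key            = "infra/terraform.tfstate"   # path of the state object
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"           # optional: state locking
    encrypt        = true
  }
}
```

The CodeBuild role then also needs read/write access to that bucket (and to the DynamoDB table, if used).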
I am looking for a way to provision an instance, automatically and using Terraform, with a configuration file that contains the endpoints to connect to a database cluster. I am using an aws_rds_cluster resource, from which I can get the endpoint using the expression aws_rds_cluster.my-cluster.endpoint. Then, I would like to provision machines instantiated with an aws_instance resource so that the value of that expression is stored in the file /DBConfig.sh.
The content of the DBConfig.sh file would look like this :
#!/bin/bash
ENDPOINT=<$aws_rds_cluster.my-cluster.endpoint$>
READER_ENDPOINT=<$aws_rds_cluster.my-cluster.reader_endpoint$>
Truth be told, once I successfully reach that point, I'd like to be able to do the same thing for machines created by a aws_launch_configuration resource.
Is this something that can be done with terraform? If not, what other tools can I use to achieve this kind of automation? Thanks for your help!
There are a few ways to achieve that. I think all of them would involve user_data.
For example, you could have aws_instance with the user_data as follows:
resource "aws_instance" "web" {
# other attributes
user_data = <<-EOL
#!/bin/bash
cat >./DBConfig.sh <<-EOL2
#!/bin/bash
ENDPOINT=${aws_rds_cluster.my-cluster.endpoint}
READER_ENDPOINT=${aws_rds_cluster.my-cluster.reader_endpoint}
EOL2
chmod +x ./DBConfig.sh
EOL
}
The above will launch an instance which will have a DBConfig.sh, with the resolved values of the endpoints, in its root (/) directory.
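For the aws_launch_configuration case mentioned in the question, the same user_data mechanism applies; one option is to render the script from a template file instead of a nested heredoc. A sketch, assuming Terraform 0.12+ and a hypothetical db_config.sh.tpl next to the configuration:

```hcl
resource "aws_launch_configuration" "web" {
  # other attributes (image_id, instance_type, ...)

  # Render the script with the cluster endpoints substituted in
  user_data = templatefile("${path.module}/db_config.sh.tpl", {
    endpoint        = aws_rds_cluster.my-cluster.endpoint
    reader_endpoint = aws_rds_cluster.my-cluster.reader_endpoint
  })
}
```

where db_config.sh.tpl contains the script with ${endpoint} and ${reader_endpoint} placeholders.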
We're using Terraform to spin up our infrastructure within AWS, and we have 3 separate environments: Dev, Stage and Prod.
Dev: Requires public, private1a, privatedb and privatedb2 subnets
Stage & Prod: Require public, private_1a, private_1b, privatedb and privatedb2 subnets
I have main.tf, variables.tf, dev.tfvars, stage.tfvars and prod.tfvars. I'm trying to understand how I can use the main.tf file that I'm currently using for the dev environment to create the resources required for stage and prod using the .tfvars files.
terraform apply -var-file=dev.tfvars
terraform apply -var-file=stage.tfvars (this should create subnet private_1b in addition to the other subnets)
terraform apply -var-file=prod.tfvars (this should create subnet private_1b in addition to the other subnets)
Please let me know if you need further clarification.
Thanks,
What you are trying to do is indeed the correct approach. You will also have to make use of Terraform workspaces.
Terraform starts with a single workspace named "default". This workspace is special both because it is the default and also because it cannot ever be deleted. If you've never explicitly used workspaces, then you've only ever worked on the "default" workspace.
Workspaces are managed with the terraform workspace set of commands. To create a new workspace and switch to it, you can use terraform workspace new; to switch environments you can use terraform workspace select; etc.
In essence this means you will have a workspace for each environment you have.
Let's see some examples.
I have the following files:
main.tf
variables.tf
dev.tfvars
production.tfvars
main.tf
This file contains the VPC module (it can be any resource, of course). We reference the variables via the var. prefix:
module "vpc" {
source = "modules/vpc"
cidr_block = "${var.vpc_cidr_block}"
subnets_private = "${var.vpc_subnets_private}"
subnets_public = "${var.vpc_subnets_public}"
}
variables.tf
This file contains all our variables. Please note that we do not assign defaults here; this makes sure we are 100% certain that we are using the variables from the .tfvars files.
variable "vpc_cidr_block" {}
variable "vpc_subnets_private" {
type = "list"
}
variable "vpc_subnets_public" {
type = "list"
}
That's basically it. Our .tfvars files will look like this:
dev.tfvars
vpc_cidr_block = "10.40.0.0/16"
vpc_subnets_private = ["10.40.0.0/19", "10.40.64.0/19", "10.40.128.0/19"]
vpc_subnets_public = ["10.40.32.0/20", "10.40.96.0/20", "10.40.160.0/20"]
production.tfvars
vpc_cidr_block = "10.30.0.0/16"
vpc_subnets_private = ["10.30.0.0/19", "10.30.64.0/19", "10.30.128.0/19"]
vpc_subnets_public = ["10.30.32.0/20", "10.30.96.0/20", "10.30.160.0/20"]
If I would like to run terraform for my dev environment, these are the commands I would use (Assuming the workspaces are already created, see Terraform workspace docs):
Select the dev environment: terraform workspace select dev
Run a plan to see the changes: terraform plan -var-file=dev.tfvars -out=plan.out
Apply the changes: terraform apply plan.out
You can replicate this for as many environments as you like.
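As for creating private_1b only in stage and prod: one hedged option is to drive a count from the length of the per-environment subnet list, so dev simply supplies fewer CIDRs in its .tfvars file. A sketch using the variable names from the example above, in Terraform 0.12+ syntax; the aws_vpc.main reference is an assumption:

```hcl
resource "aws_subnet" "private" {
  # One subnet per CIDR in the environment's list; dev just lists fewer
  count      = length(var.vpc_subnets_private)
  vpc_id     = aws_vpc.main.id # assumes a VPC resource named "main"
  cidr_block = var.vpc_subnets_private[count.index]
}
```

With this, stage.tfvars and prod.tfvars just list the extra private_1b CIDR, and no conditional logic is needed in main.tf.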