I'm currently trying out a tool called Terraformer (essentially a reverse Terraform): https://github.com/GoogleCloudPlatform/terraformer.
I have a simple GCP project called test-one which has only one resource, a vm_instance (google_compute_instance). I ran Terraformer and got the following output:
$ generated/google/test-one/instances/us-central1
.
├── compute_instance.tf
├── outputs.tf
├── provider.tf
└── terraform.tfstate
My question is: what should I do next if, say, I want the exact same configuration but for a new group that I'm going to name test-two?
Should I go into each file, replace every occurrence of the string "test-one" with "test-two", and then run terraform plan and terraform apply?
You need to create a Terraform module that deploys whatever environment you want and takes as few parameters as possible, ideally only a name (e.g. "test-two").
Converting your current state to use a module is not the easiest task, but it is usually possible without destroying any resources by using terraform import.
I would also recommend watching this video
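To make that concrete, here is a rough sketch of what the end state could look like, assuming the generated files are moved into a reusable module and only the instance name has to vary (the module path, variable name, and instance attributes shown here are illustrative, not necessarily what Terraformer generated for you):

# modules/instance/variables.tf
variable "name" {
  type = string
}

# modules/instance/main.tf (adapted from the generated compute_instance.tf)
resource "google_compute_instance" "vm_instance" {
  name         = var.name
  machine_type = "e2-micro"
  zone         = "us-central1-a"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-11"
    }
  }

  network_interface {
    network = "default"
  }
}

# root main.tf: one module call per copy of the environment
module "test_one" {
  source = "./modules/instance"
  name   = "test-one"
}

module "test_two" {
  source = "./modules/instance"
  name   = "test-two"
}

To keep the already-deployed test-one instance instead of recreating it, import it into the module address, e.g. terraform import module.test_one.google_compute_instance.vm_instance projects/test-one/zones/us-central1-a/instances/<instance-name> (the exact import ID depends on your project, zone, and instance name). If test-two lives in a different GCP project, the module would also need a project variable or its own provider configuration.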
I have my terraform infrastructure defined in many configuration files as below.
root
    data_sources.tf
    iam.tf
    glue_connections.tf
    glue_crawlers.tf
    glue_catalog.tf
    glue_jobs.tf
    provider.tf
    storage.tf
    vpc.tf
I wanted to organise them a bit by moving the configuration files starting with "glue_" into their own directory.
root
    glue
        glue_connections.tf
        glue_crawlers.tf
        glue_catalog.tf
        glue_jobs.tf
    data_sources.tf
    iam.tf
    provider.tf
    storage.tf
    vpc.tf
But when I applied the change it removed all of the resources that I moved into the glue directory.
Is there some trick that will allow me to move my configuration files into their own directory without terraform removing/ignoring them?
Note: I am using Terraform Cloud.
I think this happens because Terraform ignores what you have in subfolders, so the resources in the glue directory will not be created.
To fix it, you can create a glue module and call it from the main module (see the sketch below).
Terraform Modules
EDIT:
I also found a really similar question: What is the correct way to setup multiple logically organized sub folders in a terraform repo?
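For illustration, here is a sketch of what the root configuration could look like after the move, assuming Terraform 1.1+ so that moved blocks can re-address the existing resources instead of destroying them (the aws_glue_crawler address is just an example; you need one moved block per resource that moved into the module):

# root main.tf: call the glue directory as a child module
module "glue" {
  source = "./glue"
}

# Re-address an existing resource so Terraform does not plan to destroy and
# recreate it. Repeat for each resource that now lives in the glue module.
moved {
  from = aws_glue_crawler.example
  to   = module.glue.aws_glue_crawler.example
}

Since the state lives in Terraform Cloud, moved blocks are usually more convenient than running terraform state mv by hand against the remote state. Also keep provider.tf in the root directory rather than inside glue/, and pass anything the glue resources need into the module as input variables.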
We have multiple Python deployments in a single GitHub repository, with the folder structure below. Each directory contains a separate scripts module.
service-1/
    deployment-1/
        app/
        Dockerfile
        cloudbuild.yaml
    deployment-2/
        app/
        Dockerfile
        cloudbuild.yaml
service-2/
    deployment-1/
        app/
        Dockerfile
        cloudbuild.yaml
service-3/
    deployment-1/
        app/
        Dockerfile
        cloudbuild.yaml
    deployment-2/
        app/
        Dockerfile
        cloudbuild.yaml
.gitignore
README.md
requirements.txt
where deployment-1 works as one deployment and deployment-2 as another deployment for each service.
We are planning to manage a single trigger in the pipeline that starts the build only for the deployment where the latest commit is found.
Can anyone please suggest how to keep a single YAML file and build this in a better way using Cloud Build, so that we don't have to manage multiple triggers?
Sadly, nothing is magic! The dispatch is either done by configuration (multiple triggers) or by code.
If you want to avoid multiple triggers, you need to code the dispatch yourself:
Detect the code that has changed in Git (there could be several services at the same time)
Iterate over the updated folders and run a new Cloud Build for each of them
It's a small piece of shell code (see the sketch below). It's not difficult, but you have to maintain/test/debug it. Is it easier than multiple triggers? That's up to you, depending on your team's skills in the DevOps area.
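A minimal sketch of such a dispatch script, run by a single trigger from a gcloud-capable build step at the repository root (the diff range, the directory pattern, and the assumption that each deployment keeps its own cloudbuild.yaml are simplifications you would need to adapt):

#!/bin/bash
set -e

# Find the service-X/deployment-Y folders touched by the latest commit.
changed_dirs=$(git diff --name-only HEAD~1 HEAD \
  | grep -E '^service-[^/]+/deployment-[^/]+/' \
  | cut -d/ -f1,2 \
  | sort -u)

# Start one new Cloud Build per updated deployment folder.
for dir in $changed_dirs; do
  echo "Triggering build for ${dir}"
  gcloud builds submit "${dir}" --config "${dir}/cloudbuild.yaml" --async
done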
I have a set of Terraform files, and in particular one variables.tf file which holds my variables such as the AWS access key, AWS access token, etc. I now want to automate the resource creation on AWS using GitLab CI/CD.
My plan is the following:
Write a .gitlab-ci-yml file
Have the terraform calls in the .gitlab-ci.yml file
I know that I can have secret environment variables in GitLab, but I'm not sure how I can push those variables into my Terraform variables.tf file, which currently looks like this:
# AWS Config
variable "aws_access_key" {
  default = "YOUR_ADMIN_ACCESS_KEY"
}
variable "aws_secret_key" {
  default = "YOUR_ADMIN_SECRET_KEY"
}
variable "aws_region" {
  default = "us-west-2"
}
In my .gitlab-ci.yml, I have access to the secrets like this:
- 'AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}'
- 'AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}'
- 'AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}'
How can I pipe these into my Terraform scripts? Any ideas? I would need to read the secrets from GitLab's environment and pass them on to the Terraform scripts.
Which executor are you using for your GitLab runners?
You don't necessarily need to use the Docker executor but can use a runner installed on a bare-metal machine or in a VM.
If you install the gettext package on the respective machine/VM as well you can use the same method as I described in Referencing gitlab secrets in Terraform for the Docker executor.
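In short, that method uses envsubst (shipped with the gettext package) to render the secrets from the job environment into the Terraform files before running them; a minimal sketch, where the variables.tf.tmpl file name and its shell-style ${AWS_...} placeholders are illustrative:

# Render ${VAR} placeholders from the job environment into a real .tf file,
# then run Terraform as usual.
envsubst < variables.tf.tmpl > variables.tf
terraform init
terraform plan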
Another possibility could be that you set
job:
  stage: ...
  variables:
    TF_VAR_SECRET1: ${GITLAB_SECRET}
or
job:
  stage: ...
  script:
    - export TF_VAR_SECRET1=${GITLAB_SECRET}
in your CI job configuration and interpolate these. Please see Getting an Environment Variable in Terraform configuration? as well
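For this to work, the Terraform variable name must match the part after TF_VAR_ exactly (the lookup is case-sensitive). A minimal declaration, reusing the SECRET1 name from the snippet above:

variable "SECRET1" {
  type      = string
  sensitive = true # Terraform 0.14+; drop this line on older versions
  # No default needed: Terraform reads the value from TF_VAR_SECRET1 at plan/apply time.
}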
Bear in mind that Terraform requires the TF_VAR_ prefix on environment variables, so you actually need something like this in .gitlab-ci.yml:
- 'TF_VAR_AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}'
- 'TF_VAR_AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}'
- 'TF_VAR_AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}'
Which also means you could just set the variable in the pipeline with that prefix as well and not need this extra mapping step.
I see you actually did discover this per your comment. I'm still posting this answer since I missed your comment the first time, and it would have saved me an hour of work.
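Putting this together for the variables.tf from the question (the variable names there are lowercase, so the TF_VAR_ suffixes must be lowercase too), a sketch of a .gitlab-ci.yml job could look like the following; the image tag and stage name are just examples:

plan:
  stage: test
  image:
    name: hashicorp/terraform:light
    entrypoint: [""] # the image's default entrypoint interferes with GitLab's script runner
  variables:
    TF_VAR_aws_access_key: ${AWS_ACCESS_KEY_ID}
    TF_VAR_aws_secret_key: ${AWS_SECRET_ACCESS_KEY}
    TF_VAR_aws_region: ${AWS_DEFAULT_REGION}
  script:
    - terraform init
    - terraform plan

With this in place, the placeholder defaults can be removed from variables.tf so the real keys never end up in the repository.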
I'm attempting to set up an OpsWorks stack for an app. I currently have the app and the infrastructure in the same repo, with the following structure:
proj_name/
    infrastructure/
        chef-repo/
            cookbooks/
                proj_name/    # THE COOKBOOK
                    recipes/
                        deploy.rb
                        configure.rb
                    attributes/
                    metadata.rb
    proj_name/    # THE APP
        app/
        migrations/
        manage.py
I have confirmed that OpsWorks is successfully pulling the repo from Github and installing it in /opt/aws/opsworks. However, when I try to add the proj_name::deploy recipe to the custom recipes section of a custom layer, I get an error message saying that proj_name::deploy could not be found. Looking at the log, I see a line saying INFO: Storing updated cookbooks/proj_name/requirements.txt in the cache. This says to me that OpsWorks is looking in the first proj_name directory (the one containing the app) to find the recipe, not the cookbook named proj_name inside of infrastructure/chef-repo/cookbooks.
Is there any way to tell OpsWorks to look further for the cookbook?
Thanks!
http://docs.aws.amazon.com/opsworks/latest/userguide/workingcookbook-installingcustom-repo.html
If you are using just cookbooks, the answer is no.
If you have a Berksfile, you could get away with just a top-level Berksfile in which you put the path to your cookbooks (see the sketch below).
Bottom line: you will have to place something in the root of the repo.
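A sketch of such a top-level Berksfile, assuming the repository layout from the question (the supermarket source line is only needed if the cookbook pulls in community dependencies):

# Berksfile at the root of the repository
source "https://supermarket.chef.io"

cookbook "proj_name", path: "infrastructure/chef-repo/cookbooks/proj_name"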
Our Elastic Beanstalk single-container Docker application runs load-balanced across multiple EC2 instances.
I want to pass the ec2 instance id of the machine it's running on as an environment variable to the docker container.
(I want to avoid doing AWS specific stuff inside the container).
I figure I need to put something in an .ebextensions config file, where I do a curl to get the instance data and then set it as an environment variable that will be passed to the Docker container.
Something like this (which doesn't work; it doesn't cause EB errors, but the env var is not available inside the container):
container_commands:
  set_instance_id:
    command: export EC2_INSTANCE_ID=`curl -s http://169.254.169.254/latest/meta-data/instance-id`
Ideally, I'd like to avoid hacking the EB run scripts, because they're undocumented and seem to change without notice.
Unfortunately, at the moment you cannot do what you want without either cluttering your app with AWS-specific stuff or modifying the EB scripts/deployment process.
Option #1: Check the AWS metadata service from your app
You can curl the AWS metadata endpoint (http://169.254.169.254/latest/meta-data/instance-id) directly from your Docker container.
From your app, if there is no environment variable named EC2_INSTANCE_ID (or whatever you want to call it), just call the AWS metadata service to get the instance id. FYI, 169.254.0.0/16 is a link-local address range. You can also use this to detect whether your app is running in AWS or not.
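As a small sketch of that fallback, for example in the container's entrypoint or app startup script (the variable name and timeout are arbitrary):

# Use the injected value if present, otherwise ask the metadata service directly.
EC2_INSTANCE_ID="${EC2_INSTANCE_ID:-$(curl -s --connect-timeout 1 http://169.254.169.254/latest/meta-data/instance-id)}"
export EC2_INSTANCE_ID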
Option #2: Inject the environment variable into the Dockerfile
A Dockerfile can define environment variables with the ENV keyword, so we can inject a new ENV line into your Dockerfile. The injection must happen after your app is extracted and before the Docker image is built.
The injection can be done by adding a pre-app-deployment hook. Just create a new file inside appdeploy/pre using .ebextensions:
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/pre/02injectdockerfile.sh":
    mode: "000755"
    content: |
      . /opt/elasticbeanstalk/hooks/common.sh
      EB_CONFIG_APP_CURRENT=$(/opt/elasticbeanstalk/bin/get-config container -k app_deploy_dir)
      cd $EB_CONFIG_APP_CURRENT
      echo "ENV EC2_INSTANCE_ID \"`curl -s http://169.254.169.254/latest/meta-data/instance-id`\"" >> Dockerfile
Why must it be done in pre-appdeploy? Or can we just use container_commands?
According to the documentation, container_commands are executed just before the app is deployed. That looks promising, but we can't use it: container_commands run after the Docker image is built (docker build). To use an environment variable in the Dockerfile, we need to inject the ENV line before docker build runs.
Take a look at Elastic Beanstalk: Under the Hood. Here is the file structure of the appdeploy hooks:
[ec2-user@ip-172-31-62-137 ~]$ tree /opt/elasticbeanstalk/hooks/appdeploy/
/opt/elasticbeanstalk/hooks/appdeploy/
├── enact
│ ├── 00run.sh
│ └── 01flip.sh
├── post
│ └── 01_monitor_pids.sh
└── pre
├── 00clean_dir.sh
├── 01unzip.sh
├── 02docker_db_check.sh
└── 03build.sh
The app archive is extracted in pre/01unzip.sh and docker build is executed in pre/03build.sh. So we need to add a new script whose file name sorts after 01unzip.sh and before 03build.sh (hence 02injectdockerfile.sh). As you said, this is undocumented and might change, but it should not change as long as you use the same Elastic Beanstalk platform version. You need to verify that this "hack" still works on the next platform version before you upgrade the production environment.
Actually, there are some other options to set the instance id as an environment variable, such as modifying the docker run line in enact/00run.sh. But I, too, would prefer not to modify the EB scripts.