I'm running the command below and getting the output "Terraform has been successfully initialized!":
terraform init \
-backend=true \
-backend-config="bucket=terraform-remote-states" \
-backend-config="project=<<my-poject>>" \
-backend-config="path=terraform.tfstate"
However, when I run the template, it creates the state file locally instead of within GCS.
Not sure what I'm missing here. Appreciate any thoughts and help.
Based on the terraform init command you listed, it seems you don't have a backend block like the one below in any of the .tf files in that directory:
terraform {
  backend "gcs" {
    bucket  = "terraform-state"
    path    = "/terraform.tfstate"
    project = "my-project"
  }
}
None of those -backend-config arguments you're passing tell Terraform that you want the state to go into GCS.
Without an explicit backend "gcs" {} declaration as above, Terraform will default to storing state locally, which is the behaviour you're currently seeing.
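For completeness, a minimal sketch of that block for the partial-configuration workflow you are already using (assuming a Terraform version whose gcs backend still accepts the project and path settings you're passing; newer releases use prefix instead of path):
terraform {
  backend "gcs" {
    # bucket, project and path are intentionally left out here; they are
    # supplied at init time by the -backend-config flags in your command
  }
}
After adding it, re-run terraform init; Terraform merges the block with the -backend-config values and will prompt to copy the existing local state into the bucket.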
I had an old Terraform configuration that worked perfectly.
In short, I had a static website application I needed to deploy using CloudFront & S3. Then I needed to deploy another application in the same way, but on another sub-domain.
For ease of helping, you can check the full source code here:
Old Terraform configuration: https://github.com/tal-rofe/tf-old
New Terraform configuration: https://github.com/tal-rofe/tf-new
So, my domain is example.io, and in the old configuration I had only a static application deployed on app.example.com.
But since I need another application, it's going to be deployed on docs.example.com.
To avoid a lot of code duplication, I decided to create a local module for deploying a generic application onto CloudFront & S3.
After doing so, terraform plan and terraform apply seem to succeed, but not really, as no resources are changed at all: Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Not only are there no changes, but I also get the old outputs:
cloudfront_distribution_id = "blabla"
eks_kubeconfig = <sensitive>
This cloudfront_distribution_id output was the correct output with the old configuration. I expect to get these new outputs, as configured:
output "frontend_cloudfront_distribution_id" {
description = "The distribution ID of deployed Cloudfront frontend"
value = module.frontend-static.cloudfront_distribution_id
}
output "docs_cloudfront_distribution_id" {
description = "The distribution ID of deployed Cloudfront docs"
value = module.docs-static.cloudfront_distribution_id
}
output "eks_kubeconfig" {
description = "EKS Kubeconfig content"
value = module.eks-kubeconfig.kubeconfig
sensitive = true
}
I'm using GitHub actions to apply my Terraform configuration with these steps:
- name: Terraform setup
  uses: hashicorp/setup-terraform@v2
  with:
    terraform_wrapper: false
- name: Terraform core init
  env:
    TERRAFORM_BACKEND_S3_BUCKET: ${{ secrets.TERRAFORM_BACKEND_S3_BUCKET }}
    TERRAFORM_BACKEND_DYNAMODB_TABLE: ${{ secrets.TERRAFORM_BACKEND_DYNAMODB_TABLE }}
  run: |
    terraform -chdir="./terraform/core" init \
      -backend-config="bucket=$TERRAFORM_BACKEND_S3_BUCKET" \
      -backend-config="dynamodb_table=$TERRAFORM_BACKEND_DYNAMODB_TABLE" \
      -backend-config="region=$AWS_REGION"
- name: Terraform core plan
  run: terraform -chdir="./terraform/core" plan -no-color -out state.tfplan
- name: Terraform core apply
  run: terraform -chdir="./terraform/core" apply state.tfplan
I used the same steps in my old & new Terraform configurations.
I want to re-use the logic written in my static-app module twice. So basically I want to be able to create a static application just by using the module I've configured.
You cannot define the outputs in the root module and expect them to work, because you are already using a different module inside your static-app module (i.e., you are nesting modules). Since you are using the registry module there (source = "terraform-aws-modules/cloudfront/aws"), you are limited to the outputs that module provides, and module outputs are only visible one level up: the root module can only see outputs that static-app itself declares, not those of the module nested inside it. I see you are pointing out that the EKS output works, but the difference is that that particular module is not nested; it is called directly (from your repo):
module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "19.5.1"
.
.
.
}
The way I would suggest fixing this is to call the Cloudfront module from the root module (i.e., core in your example):
module "frontend-static" {
source = "terraform-aws-modules/cloudfront/aws"
version = "3.1.0"
... rest of the configuration ...
}
module "docs-static" {
source = "terraform-aws-modules/cloudfront/aws"
version = "3.1.0"
... rest of the configuration ...
}
The outputs you currently have defined in the repo with the new configuration (tf-new) should work out of the box with this change. Alternatively, you could stick with your own module, and then you control which outputs it exposes.
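If you go that route and keep the local static-app wrapper, the key is to re-export the nested module's output from the wrapper. A sketch, assuming the wrapper calls the upstream module under the name cloudfront (adjust to your repo):
# modules/static-app/outputs.tf
output "cloudfront_distribution_id" {
  description = "Distribution ID of the CloudFront distribution created by this module"
  value       = module.cloudfront.cloudfront_distribution_id
}
With that pass-through output in place, the module.frontend-static.cloudfront_distribution_id and module.docs-static.cloudfront_distribution_id references in your root outputs resolve as expected.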
I have a use case where I need to access an SNS topic from outside AWS. We planned to use https://aws.amazon.com/blogs/security/extend-aws-iam-roles-to-workloads-outside-of-aws-with-iam-roles-anywhere/ as it seems to be the right fit.
But I'm unable to get this working correctly. I followed the link mentioned above exactly; the contents of my .aws/config file are:
credential_process = ./aws_signing_helper credential-process
--certificate /path/to/certificate.pem
--private-key /path/to/private-key.pem
--trust-anchor-arn <TA_ARN>
--profile-arn <PROFILE_ARN>
--role-arn <ExampleS3WriteRole_ARN>
But my Spring Boot application throws an error stating that it could not fetch the credentials to connect to AWS. Kindly assist.
I found the easiest thing to do was to create a separate script for credential_process to target; this isn't necessary, I just found it easier.
So create a script along the lines of:
#!/bin/bash
# raw_helper.sh
/path/to/aws_signing_helper credential-process \
  --certificate /path/to/cert.crt \
  --private-key /path/to/key.key \
  --trust-anchor-arn <TA_ARN> \
  --profile-arn <Roles_Anywhere_Profile_ARN> \
  --role-arn <IAM_Role_ARN>
The key thing I found is that most places (including the AWS documentation) tell you to use the ~/.aws/config file and declare the profile there. That didn't seem to work for me, but when I added the profile to my ~/.aws/credentials file instead, it did. Assuming you've created a helper script, that looks like this:
# ~/.aws/credentials
[raw_profile]
credential_process = /path/to/raw_helper.sh
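To sanity-check the setup outside of Spring Boot (assuming the AWS CLI is installed and the profile is named raw_profile as above), something like this should return the assumed role's identity:
# ask Roles Anywhere to vend credentials through the profile and print the caller identity
aws sts get-caller-identity --profile raw_profile

# the AWS SDK used by the Spring Boot app will pick the same profile up from
# the standard environment variable
export AWS_PROFILE=raw_profile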
I have been trying to update some of my Terraform scripts from version 0.6.13 to 0.9.6. In my scripts I previously had:
terraform remote config -backend=s3 \
  -backend-config="bucket=my_bucket" \
  -backend-config="access_key=my_access_key" \
  -backend-config="secret_key=my_secret" \
  -backend-config="region=my_region" \
  -backend-config="key=my_state_key"
and then
terraform remote pull
which pulled the remote state from AWS. Upon running terraform apply, it would show me exactly which resources needed to be updated or created, based on the remote tfstate stored in an S3 bucket.
Now the issue I'm facing is that the remote pull and remote config commands are outdated and don't work anymore.
I tried to follow the instructions at https://www.terraform.io/docs/backends/types/remote.html, but they were not much help.
From what I understand, I would have to run an init first with a partial configuration, which presumably would automatically pull the remote state, as follows:
terraform init -var-file="terraform.tfvars" \
  -backend=true \
  -backend-config="bucket=my_bucket" \
  -backend-config="access_key=my_access_key" \
  -backend-config="secret_key=my_secret" \
  -backend-config="region=my_region" \
  -backend-config="key=my_state_key"
However, it doesn't actually pull the remote state the way it did before.
Would anyone be able to point me in the right direction?
You don't need terraform remote pull any more. Terraform now refreshes the remote state automatically before plan and apply, based on the -refresh flag, which defaults to true.
Apparently I had to add a minimal backend configuration such as
terraform {
  backend "s3" {
  }
}
in my main.tf file for it to work.
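If you'd rather not rely on partial configuration at all, the same settings can live directly in the block; a sketch reusing the placeholder values from the question:
terraform {
  backend "s3" {
    bucket     = "my_bucket"
    key        = "my_state_key"
    region     = "my_region"
    access_key = "my_access_key"
    secret_key = "my_secret"
  }
}
Either way, terraform init replaces the old terraform remote config step, and subsequent plan/apply runs read the state from S3 automatically.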
I have a set of Terraform files, and in particular one variables.tf file which holds my variables like the AWS access key, AWS secret key, etc. I now want to automate the resource creation on AWS using GitLab CI/CD.
My plan is the following:
Write a .gitlab-ci.yml file
Have the terraform calls in the .gitlab-ci.yml file
I know that I can have secret environment variables in GitLab, but I'm not sure how I can push those variables into my Terraform variables.tf file, which currently looks like this:
# AWS Config
variable "aws_access_key" {
  default = "YOUR_ADMIN_ACCESS_KEY"
}

variable "aws_secret_key" {
  default = "YOUR_ADMIN_SECRET_KEY"
}

variable "aws_region" {
  default = "us-west-2"
}
In my .gitlab-ci.yml, I have access to the secrets like this:
- 'AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}'
- 'AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}'
- 'AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}'
How can I pipe these into my Terraform scripts? Any ideas? I would need to read the secrets from GitLab's environment and pass them on to the Terraform scripts.
Which executor are you using for your GitLab runners?
You don't necessarily need to use the Docker executor but can use a runner installed on a bare-metal machine or in a VM.
If you install the gettext package on the respective machine/VM as well, you can use the same method as I described in Referencing gitlab secrets in Terraform for the Docker executor.
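That method is essentially template substitution with envsubst, which ships with gettext; a rough sketch (the template file name is hypothetical):
# render a Terraform file from a template containing ${GITLAB_SECRET}-style
# placeholders, then run Terraform as usual
envsubst < secrets.tf.tpl > secrets.tf
terraform init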
Another possibility could be that you set
job:
  stage: ...
  variables:
    TF_VAR_SECRET1: ${GITLAB_SECRET}
or
job:
  stage: ...
  script:
    - export TF_VAR_SECRET1=${GITLAB_SECRET}
in your CI job configuration and interpolate these. Please see Getting an Environment Variable in Terraform configuration? as well.
Bear in mind that Terraform requires a TF_VAR_ prefix on environment variables, and the part after the prefix has to match the variable name exactly (it is case-sensitive). So you actually need something like this in .gitlab-ci.yml:
- 'TF_VAR_aws_secret_key=${AWS_SECRET_ACCESS_KEY}'
- 'TF_VAR_aws_access_key=${AWS_ACCESS_KEY_ID}'
- 'TF_VAR_aws_region=${AWS_DEFAULT_REGION}'
This also means you could just name the GitLab variables with that prefix in the first place and skip this extra mapping step.
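Concretely, for the variables.tf from the question this means dropping the hardcoded credential defaults and letting the TF_VAR_ variables above supply the values; a sketch:
# variables.tf -- no credentials committed to the repo
variable "aws_access_key" {}

variable "aws_secret_key" {}

variable "aws_region" {
  default = "us-west-2"
}
Terraform then reads TF_VAR_aws_access_key, TF_VAR_aws_secret_key and TF_VAR_aws_region from the job environment at plan/apply time.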
I see you actually did discover this per your comment. I'm still posting this answer since I missed your comment the first time, and it would have saved me an hour of work.
I am trying to do Chef cookbook development via a Jenkinsfile pipeline. I have my Jenkins server running as a container (using the jenkinsci/blueocean image). As one of the stages, I am trying to do aws configure and then run kitchen test. With the code below I am getting an unauthorized operation error; for some reason, my AWS creds are not making it to .kitchen.yml (no need to check the IAM creds, because they have admin access).
stage('\u27A1 Verify Kitchen') {
    steps {
        sh '''mkdir -p ~/.aws/
        echo 'AWS_ACCESS_KEY_ID=...' >> ~/.aws/credentials
        echo 'AWS_SECRET_ACCESS_KEY=...' >> ~/.aws/credentials
        cat ~/.aws/credentials
        KITCHEN_LOCAL_YAML=.kitchen.yml /opt/chefdk/embedded/bin/kitchen list
        KITCHEN_LOCAL_YAML=.kitchen.yml /opt/chefdk/embedded/bin/kitchen test'''
    }
}
Is there any way I can pass AWS creds here? Also, .kitchen.yml no longer supports passing AWS creds inside the file. Is there some way I can pass the creds on the command line, i.e. .kitchen.yml access_key=... secret_access_key=... /opt/chefdk/embedded/bin/kitchen test?
Really appreciate your help.
You don't need to set KITCHEN_LOCAL_YAML=.kitchen.yml; that's already the primary config file.
You probably want to be using a Jenkins credential file, not hardcoding things into the job. That said, the reason this isn't working is that the AWS credentials file is not a shell script, which is the syntax you're using there. It's an INI-style file, paired with a config file that shares a similar structure.
You should probably just use the environment variable support in kitchen-ec2, via the withEnv pipeline helper method or similar mechanisms for integrating with Jenkins managed credentials.
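A rough sketch of that approach, assuming the keys are stored in Jenkins as secret-text credentials (the credential IDs here are hypothetical) and using withCredentials from the Credentials Binding plugin rather than a hand-rolled withEnv:
stage('\u27A1 Verify Kitchen') {
    steps {
        // Bind Jenkins-managed secrets to the standard AWS environment variables;
        // the credential IDs below are hypothetical
        withCredentials([
            string(credentialsId: 'aws-access-key-id', variable: 'AWS_ACCESS_KEY_ID'),
            string(credentialsId: 'aws-secret-access-key', variable: 'AWS_SECRET_ACCESS_KEY')
        ]) {
            // kitchen-ec2 picks these up via the AWS SDK default credential chain,
            // so no ~/.aws/credentials file is needed inside the container
            sh '''/opt/chefdk/embedded/bin/kitchen list
            /opt/chefdk/embedded/bin/kitchen test'''
        }
    }
}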