How to run this Terraform file - amazon-web-services

I have two Terraform files that I need to somehow run: one called terraform-var.tf and one called terraform-build.tf. I've figured out that the variable file uses some sort of interpolation to define the variables, and that's how the build gets them, but I cannot seem to actually get the variables loaded. I don't know what commands to run, or in what order, to load the variables and then run them.
Here's an example of the two files.
terraform-var.tf:

variable "access_key" {
  default = "foo"
}

variable "secret_key" {
  default = "foo"
}

variable "region" {
  default = "us-west-2"
}
terraform-build.tf:

provider "aws" {
  access_key = "${var.access_key}"
  secret_key = "${var.secret_key}"
  region     = "${var.region}"
}

Assuming your AWS credentials are configured, Terraform automatically loads every .tf file in the working directory, so the variable defaults in terraform-var.tf are picked up without any extra flags. You only need the -var-file parameter when your values live in a separate .tfvars file; in that case, go to the location where terraform-build.tf is located and run:

terraform apply -var-file="path/to/terraform-var.tfvars"

You can also rename your variables file to variables.tf, which is the common convention, and it will work the same way.
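As a minimal sketch of the full command sequence (assuming both files sit in the same directory and AWS credentials are valid):

cd path/to/project   # the directory containing terraform-var.tf and terraform-build.tf
terraform init       # downloads the AWS provider plugin
terraform plan       # preview; variable defaults are loaded automatically
terraform apply      # prompts only for variables without a default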

Related

Terraform single resource on multiple workspaces

I have two workspaces (dev and prd) and I need to create a single resource that is used by both of them.
My example is creating an AWS ECR repository:
resource "aws_ecr_repository" "example" {
name = "example"
}
I applied it on the prd workspace, and after switching to the dev workspace, Terraform wants to create the same resource, even though it already exists.
After some consideration I used count to create it only on prd, like this:
resource "aws_ecr_repository" "example" {
count = local.stage == "prd" ? 1 : 0
name = "example"
}
and on the prd workspace I reference it like this:
aws_ecr_repository.example[0].repository_url
but the problem is how to use it on the dev workspace.
What is a better way to solve this?
Since I'm not able to add a comment (I don't have enough rep), I'm adding this as an answer.
As Jens mentioned, it is best to avoid this approach, but you can read in a remote state with something like this:
data "terraform_remote_state" "my_remote_state" {
backend = "local" # could also be a remote state like s3
config = {
key = "project-key"
}
workspace = "prd"
}
In your prd workspace you have to define the output for your repo:

output "ecr_repo_url" {
  value = aws_ecr_repository.example[0].repository_url
}
In your dev workspace, you can then access the value with:
data.terraform_remote_state.my_remote_state.outputs.ecr_repo_url
In some cases this may be useful, but be aware of what Jens said: if you destroy your prod environment, you can't apply or change your dev environment!
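As a minimal sketch of tying the two together (reusing local.stage and the resource/output names from above), a single local can resolve to the right URL in either workspace:

locals {
  # on prd read the resource directly; on dev fall back to the prd remote state output
  ecr_repo_url = (
    local.stage == "prd"
    ? aws_ecr_repository.example[0].repository_url
    : data.terraform_remote_state.my_remote_state.outputs.ecr_repo_url
  )
}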

Can depends_on in terraform be set to a file path?

I am trying to break down my main.tf file. I have set up AWS Config via Terraform, created the configuration recorder, and set the delivery channel to an S3 bucket created in the same main.tf file. Now, for the AWS Config rules, I have created a separate file, config-rule.tf. Every aws_config_config_rule that we create has a depends_on clause in which we reference the dependent resource, which in this case is aws_config_configuration_recorder. So my question is: can I interpolate the depends_on clause to something like this?
resource "aws_config_config_rule" "s3_bucket_server_side_encryption_enabled" {
name = "s3_bucket_server_side_encryption_enabled"
source {
owner = "AWS"
source_identifier = "S3_BUCKET_SERVER_SIDE_ENCRYPTION_ENABLED"
}
depends_on = ["${file("aws-config-setup.tf")}"]
}
This assumes I move my AWS Config setup from my main.tf file to a new file called aws-config-setup.tf.
If I'm reading your question correctly, you shouldn't need to make any changes for this to work, assuming you didn't move code to its own module (a separate directory).
When Terraform executes in a particular directory it takes all .tf files into account, basically treating them all as one Terraform file.
So, in general, if you had a main.tf that looks like the following
resource "some_resource" "resource_1" {
# ...
}
resource "some_resource" "resource_2" {
# ...
depends_on = [some_resource.resource_1]
}
and you decided to split these out into the following files:

file1.tf:
resource "some_resource" "resource_1" {
  # ...
}

file2.tf:
resource "some_resource" "resource_2" {
  # ...
  depends_on = [some_resource.resource_1]
}
If Terraform is run in the same directory, it will evaluate the single-file scenario exactly the same as the multi-file scenario.
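Applied back to the original question, the config rule should reference the recorder resource itself rather than the file it lives in. A sketch, assuming the recorder in aws-config-setup.tf is named aws_config_configuration_recorder.recorder (a hypothetical name):

resource "aws_config_config_rule" "s3_bucket_server_side_encryption_enabled" {
  name = "s3_bucket_server_side_encryption_enabled"
  source {
    owner             = "AWS"
    source_identifier = "S3_BUCKET_SERVER_SIDE_ENCRYPTION_ENABLED"
  }
  # depends_on takes resource references, not file paths
  depends_on = [aws_config_configuration_recorder.recorder]
}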

Can you clone an AWS lambda?

Cloning for different environments. Staging/QA/PROD/DEV etc.
Is there a quick and easy way to clone my lambdas, give them a different name, and adjust configurations from there?
You will need to recreate your Lambda functions in the new account. Go to the Lambda function, click on Actions, and export your function.
Download a deployment package (your code and libraries), and/or an AWS Serverless Application Model (SAM) file that defines your function, its events sources, and permissions. You or others who you share this file with can use AWS CloudFormation to deploy and manage a similar serverless application. Learn more about how to deploy a serverless application with AWS CloudFormation.
This is an example of Terraform code (infrastructure as code) which can be used to stamp out the same Lambda in different environments, e.g. dev/prod.
If you look at this bit of code, function_name = "${var.environment}-first_lambda", it becomes clear how the name of the function is prefixed with an environment such as dev or prod.
This variable can be passed in at terraform command execution time (e.g. TF_VAR_environment="dev" terraform apply), defaulted in variables.tf, or passed in using a *.tfvars file, as sketched after the code below.
# main.tf
resource "aws_lambda_function" "first_lambda" {
  function_name    = "${var.environment}-first_lambda"
  filename         = "${data.archive_file.first_zip.output_path}"          # zip archive defined elsewhere
  source_code_hash = "${data.archive_file.first_zip.output_base64sha256}"
  role             = "${aws_iam_role.iam_for_lambda.arn}"                  # IAM role defined elsewhere
  handler          = "first_lambda.lambda_handler"
  runtime          = "python3.6"
  timeout          = 15

  environment {
    variables = {
      value_one = "some value_one"
    }
  }
}
# variables.tf
variable "environment" {
  type        = "string"
  description = "The name of the environment within the project"
  default     = "dev"
}
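For example, a minimal sketch of supplying the variable per environment (the prod.tfvars file name is hypothetical):

# via an environment variable
TF_VAR_environment="prod" terraform apply

# or via a tfvars file containing the line: environment = "prod"
terraform apply -var-file="prod.tfvars"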

Terraform and AWS: No Configuration Files Found Error

I am writing a small script that takes a small file from my local machine and puts it into an AWS S3 bucket.
My terraform.tf:
provider "aws" {
region = "us-east-1"
version = "~> 1.6"
}
terraform {
backend "s3" {
bucket = "${var.bucket_testing}"
kms_key_id = "arn:aws:kms:us-east-1:12345678900:key/12312313ed-34sd-6sfa-90cvs-1234asdfasd"
key = "testexport/exportFile.tfstate"
region = "us-east-1"
encrypt = true
}
}
data "aws_s3_bucket" "pr-ip" {
bucket = "${var.bucket_testing}"
}
resource "aws_s3_bucket_object" "put_file" {
bucket = "${data.aws_s3_bucket.pr-ip.id}"
key = "${var.file_path}/${var.file_name}"
source = "src/Datafile.txt"
etag = "${md5(file("src/Datafile.txt"))}"
kms_key_id = "arn:aws:kms:us-east-1:12345678900:key/12312313ed-34sd-6sfa-90cvs-1234asdfasd"
server_side_encryption = "aws:kms"
}
However, when I init:
terraform init
#=>
Terraform initialized in an empty directory!
The directory has no Terraform configuration files. You may begin working with Terraform immediately by creating Terraform configuration files.
and then try to apply:
terraform apply
#=>
Error: No configuration files found!
Apply requires configuration to be present. Applying without a configuration would mark everything for destruction, which is normally not what is desired. If you would like to destroy everything, please run 'terraform destroy' instead which does not require any configuration files.
I get the error above. Also, I have set up my default AWS Access Key ID and value.
What can I do?
This error means that you have run the command in the wrong place. You have to be in the directory that contains your configuration files, so before running init or apply you have to cd to your Terraform project folder.
Error: No configuration files found!
The above error arises when you are not in the folder that contains your configuration files.
To remediate the situation, create a .tf file in the project folder you will be working in.
Note: an empty .tf file will also eliminate the error, but it will be of limited use, as it does not contain provider info.
See the example below:

provider "aws" {
  region = "us-east-1" # if not provided here, the value will be asked for when terraform apply is executed
}
So, in order for the terraform apply command to execute successfully, make sure of the following:
1. You are in your Terraform project folder (it can be any directory).
2. The folder contains at least one .tf file, preferably with the provider info.
3. You have executed terraform init to initialize the backend and provider plugins.
You are now good to execute terraform apply (without any "no configuration files" error).
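A minimal sketch of that checklist in practice (the project path is hypothetical):

cd path/to/terraform/project
ls *.tf           # confirm at least one configuration file is present
terraform init    # initialize the backend and provider plugins
terraform apply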
In case anyone comes across this now: I ran into an issue where my TF_WORKSPACE env var was set to a different workspace than the directory I was in. Show your current workspace with
terraform workspace show
list your available workspaces with
terraform workspace list
and select one of the listed workspaces with
terraform workspace select <workspace name>
If the TF_WORKSPACE env var is set when you try to run terraform workspace select, Terraform will print a message telling you about the potential issue:
The selected workspace is currently overridden using the TF_WORKSPACE
environment variable.
To select a new workspace, either update this environment variable or unset
it and then run this command again.
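A minimal sketch of clearing the override (bash syntax assumed):

echo $TF_WORKSPACE    # see which workspace is being forced
unset TF_WORKSPACE
terraform workspace select dev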
I had the same error as you. In my case it was not a VPN error but incorrect file naming. I was in the project folder. To remedy the situation, I created a .tf file with the vim editor using the command vi aws.tf, then populated the file with my defined variables. Mine is working now.
I too had the same issue; remember that a Terraform file name should end with the .tf extension.
Another possible reason could be that you are using modules and the source URL is incorrect.
When I had:
source = "git::ssh://git@git.companyname.com/observability.git//modules/ec2?ref=v2.0.0"
instead of:
source = "git::ssh://git@git.companyname.com/observability.git//terraform/modules/ec2?ref=v2.0.0"
I was seeing the same error message as you.
I got this error this morning when deploying to production, on a project which has been around for years and where nothing had changed. We finally traced it down to the fact that the person who created the production deploy ticket had pasted this command into an email using Outlook:
terraform init --reconfigure
Microsoft, in its infinite wisdom, combined the two hyphens into one, and the single hyphen wasn't even the standard ASCII hyphen character (I think it's called an "en dash"):
terraform init –reconfigure
This caused Terraform 0.12.31 to give the helpful error message:
Terraform initialized in an empty directory!
The directory has no Terraform configuration files. You may begin working
with Terraform immediately by creating Terraform configuration files.
It took us half an hour and another pair of eyes to notice that the hyphens were incorrect and needed to be re-typed! (I think Terraform interpreted "reconfigure" as the name of the directory we wanted to run the init in, which of course didn't exist. Perhaps Terraform could be improved to name the directory it's looking in when it reports this error?)
Thanks Microsoft for always being helpful (not)!

Best way to get an interpolated value into a Terraform lookup

I am trying to set up Terraform to deploy into various AWS regions, and I am passing the region on the command line since it's one of the few values that changes between runs. I am using -var "region=east", for example, and it works in other modules by assigning the region correctly; the main.tf where I use this value is set up correctly.
When creating one resource there is a lookup which uses the region as the map key: amis is a map from regions to AMI values, and I only want the AMI that matches the region for this resource. I tried two methods. Using command-line interpolation:
ami = "${lookup(var.amis, $${var.region})}"
or as an intermediate variable:

variables "map_region" {
  default = "${var.region}"
}

ami = "${lookup(var.amis, var.map_region)}"
Both give me syntax errors, and looking through the documentation for lookup I don't see where Terraform supports this. Has anyone tried this successfully in some manner, or does anyone know a better way to pull a value out of a map using a command-line variable?
EDIT:
Part of the problem was hidden because I was using a Bash script to run the Terraform modules. It was running destroy -force, then apply. Because I considered the command-line variables part of the build only, I had not added them to the destroy command, which is where they were being requested, giving me a prompt to enter them. Once I added the -var arguments to the destroy command as well as apply, this all worked.
ami = "${lookup(var.amis, $${var.region})}"
is wrong because $$ is only valid for interpolated variables within inline templates.
variables "map_region" {
default = "${var.region}"
}
ami = "${lookup(var.amis, var.map_region)}"
does not work because variable default values cannot be interpolated from other variables (and the block type is variable, not variables).
Terraform console is a useful tool for trying out interpolated expressions. Suppose the variables are defined as follows:
$ cat vars.tf
variable "amis" {
  type = "map"

  default = {
    "us-east-2" = "ami-58f5db3d"
    "us-east-1" = "ami-fad25980"
  }
}

variable "region" {}
Fire up the console passing a value to region to emulate what you would be doing with plan, apply etc:
$ terraform console -var "region=us-east-1"
> var.region
us-east-1
> lookup(var.amis, var.region)
ami-fad25980
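With the variables defined this way, the lookup can be used directly in a resource; a minimal sketch (the aws_instance resource is illustrative, not from the original question):

resource "aws_instance" "example" {
  ami           = "${lookup(var.amis, var.region)}"
  instance_type = "t2.micro"
}

Running terraform apply -var "region=us-east-1" then resolves the AMI without any extra escaping or intermediate variable.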
Hope this helps.