Terraform and AWS: No Configuration Files Found Error

I am writing a small script that takes a small file from my local machine and puts it into an AWS S3 bucket.
My terraform.tf:
provider "aws" {
region = "us-east-1"
version = "~> 1.6"
}
terraform {
backend "s3" {
bucket = "${var.bucket_testing}"
kms_key_id = "arn:aws:kms:us-east-1:12345678900:key/12312313ed-34sd-6sfa-90cvs-1234asdfasd"
key = "testexport/exportFile.tfstate"
region = "us-east-1"
encrypt = true
}
}
data "aws_s3_bucket" "pr-ip" {
bucket = "${var.bucket_testing}"
}
resource "aws_s3_bucket_object" "put_file" {
bucket = "${data.aws_s3_bucket.pr-ip.id}"
key = "${var.file_path}/${var.file_name}"
source = "src/Datafile.txt"
etag = "${md5(file("src/Datafile.txt"))}"
kms_key_id = "arn:aws:kms:us-east-1:12345678900:key/12312313ed-34sd-6sfa-90cvs-1234asdfasd"
server_side_encryption = "aws:kms"
}
However, when I init:
terraform init
#=>
Terraform initialized in an empty directory!
The directory has no Terraform configuration files. You may begin working with Terraform immediately by creating Terraform configuration files.
and then try to apply:
terraform apply
#=>
Error: No configuration files found!
Apply requires configuration to be present. Applying without a configuration would mark everything for destruction, which is normally not what is desired. If you would like to destroy everything, please run 'terraform destroy' instead which does not require any configuration files.
I get the error above. Also, I have already set up my default AWS Access Key ID and value.
What can I do?

This error means that you have run the command in the wrong place. Terraform looks for configuration files in the current working directory, so before running init or apply you have to cd into the directory that contains your .tf files (your Terraform project folder).
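For example, assuming the configuration above lives in a folder called my-terraform-project (the folder name here is only illustrative):
cd ~/my-terraform-project   # the directory that holds your .tf files
ls *.tf                     # should list terraform.tf (and any other configuration files)
terraform init              # now finds the configuration and initializes the s3 backend
terraform apply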

Error: No configuration files found!
This error arises when you are not in the folder that contains your configuration files.
To remediate the situation, create a .tf file in the project folder you will be working in.
Note: an empty .tf file will also eliminate the error, but it will be of limited use, as it does not contain any provider info.
See the example below:
provider "aws" {
  region = "us-east-1" # If not provided here, this value will be asked for when the terraform apply command is executed
}
So, in order for the terraform apply command to execute successfully, make sure of the following points:
You need to be in your Terraform project folder (this can be any directory).
It must contain at least one .tf file, preferably with the Terraform provider info.
Execute terraform init to initialize the backend and provider plugins.
You are now good to execute terraform apply (without the "no configuration files" error).

In case anyone comes across this now: I ran into an issue where my TF_WORKSPACE env var was set to a different workspace than the directory I was in. Double check your workspace with
terraform workspace show
to show your available workspaces
terraform workspace list
to use one of the listed workspaces:
terraform workspace select <workspace name>
If the TF_WORKSPACE env var is set when you try to run terraform workspace select, Terraform will print a message telling you about the potential issue:
The selected workspace is currently overridden using the TF_WORKSPACE
environment variable.
To select a new workspace, either update this environment variable or unset
it and then run this command again.
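If that override is the culprit, the quickest fix is usually to clear or correct the variable before re-running init/apply, for example:
unset TF_WORKSPACE              # remove the override entirely
# or point it at the workspace you actually want ("default" is Terraform's built-in workspace name)
export TF_WORKSPACE=default
terraform workspace show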

I had the same error as you. In my case it was not a VPN error but incorrect file naming; I was in the project folder. To remedy the situation, I created a .tf file with the vim editor (vi aws.tf), then populated the file with the defined variables. Mine is working now.

I too had the same issue. Remember, the Terraform filename should end with the .tf extension.

Another possible reason could be that you are using a module whose source URL is incorrect.
When I had:
source = "git::ssh://git@git.companyname.com/observability.git//modules/ec2?ref=v2.0.0"
instead of:
source = "git::ssh://git@git.companyname.com/observability.git//terraform/modules/ec2?ref=v2.0.0"
I was seeing the same error message as you.
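For context, that source string lives inside a module block; a minimal sketch (the module name and ref here are illustrative) would be:
module "ec2" {
  # the double slash separates the repository from the subdirectory inside it
  source = "git::ssh://git@git.companyname.com/observability.git//terraform/modules/ec2?ref=v2.0.0"
}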

I got this error this morning when deploying to production, on a project which has been around for years and where nothing had changed. We finally traced it down to the fact that the person who created the production deploy ticket had pasted this command into an email using Outlook:
terraform init --reconfigure
Microsoft, in its infinite wisdom, combined the two hyphens into one, and that one hyphen wasn't even the standard ASCII hyphen character (it's called an "en dash"):
terraform init –reconfigure
This caused Terraform 0.12.31 to give the helpful error message:
Terraform initialized in an empty directory!
The directory has no Terraform configuration files. You may begin working
with Terraform immediately by creating Terraform configuration files.
It took us half an hour and another pair of eyes to notice that the hyphens were incorrect and needed to be re-typed! (I think terraform thought "reconfigure" was the name of the directory we wanted to run the init in, which of course didn't exist. Perhaps terraform could be improved to name the directory it's looking in when it reports this error?)
Thanks Microsoft for always being helpful (not)!

Related

Missing required GCS remote state configuration location

After a Google Cloud quota update, I can't run my terragrunt/terraform code due to a strange error. The same code worked before with another project on the same account. After I tried to recreate the project (to get a new, clean project) there was some "Billing Quota" popup, and I asked support to change the quota.
I got the following message from support:
Dear Developer,
We have approved your request for additional quota. Your new quota should take effect within one hour of receiving this message.
And now (one day later) terragrunt is not working due to this error:
Missing required GCS remote state configuration location
What I actually have:
a service account for pipelines with Project Editor and Service Networking Admin;
a bucket without public access (europe-west3);
the following terragrunt config:
remote_state {
  backend = "gcs"
  config = {
    project = get_env("TF_VAR_project")
    bucket  = "bucket name"
    prefix  = "${path_relative_to_include()}"
  }
  generate = {
    path      = "backend.tf"
    if_exists = "overwrite_terragrunt"
  }
}
I'm also running the following pipeline:
- terragrunt run-all init
- terragrunt run-all validate
- terragrunt run-all plan
- terragrunt run-all apply --terragrunt-non-interactive -auto-approve
and it's failing on init with the error above.
The project and credentials are correct (the credentials are stored in the GOOGLE_CREDENTIALS env var as JSON without newlines or whitespace).
I also tried to specify "location" in "config" but got an error that the bucket was not found in the project.
Does anybody know how to fix this, or where the problem could be?
It worked before I got the quota change.

How to convert the aws secret manager string to map in terraform (0.11.13)

I have a secret stored in AWS Secrets Manager and am trying to integrate it within Terraform at runtime. We are using Terraform 0.11.13, and updating to the latest Terraform is on the roadmap.
We would all love to use the jsondecode() available in the latest Terraform, but we need to get a few things integrated before we upgrade.
We tried to use the helper external data program below, suggested as part of https://github.com/terraform-providers/terraform-provider-aws/issues/4789:
data "external" "helper" {
program = ["echo", "${replace(data.aws_secretsmanager_secret_version.map_example.secret_string, "\\\"", "\"")}"]
}
But we ended up getting this error now.
data.external.helper: can't find external program "echo"
Google search didn't help much.
Any help will be much appreciated.
OS: Windows 10
It sounds like you want to use a data source for the aws_secretsmanager_secret.
Resources in Terraform create new infrastructure. Data sources in Terraform reference the values of existing resources.
data "aws_secretsmanager_secret" "example" {
arn = "arn:aws:secretsmanager:us-east-1:123456789012:secret:example-123456"
}
data "aws_secretsmanager_secret_version" "example" {
secret_id = data.aws_secretsmanager_secret.example.id
version_stage = "example"
}
Note: you can also use the secret name
Docs: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/secretsmanager_secret
Then you can use the value from this like so:
output "MySecretJsonAsString" {
  value = data.aws_secretsmanager_secret_version.example.secret_string
}
Per the docs, the secret_string property of this resource is:
The decrypted part of the protected secret information that was originally provided as a string.
You should also be able to pass that value into jsondecode and then access the properties of the json body individually.
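For example, once you are on Terraform 0.12 or later, a minimal sketch (assuming the secret string is a JSON object with a "username" key, which is purely illustrative) would be:
locals {
  secret_map = jsondecode(data.aws_secretsmanager_secret_version.example.secret_string)
}

# then access individual properties, e.g. local.secret_map["username"]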
But you asked for a Terraform 0.11.13 solution. If the secret value is defined by Terraform, you can use the terraform_remote_state data source to get the value. This does trust that nothing else is updating the secret other than Terraform. The best answer is still to upgrade your Terraform, but this could be a useful stopgap until then.
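A rough 0.11-style sketch of that stopgap, assuming the secret is managed by another Terraform project whose state lives in S3 and which exposes the value as an output (the bucket, key, and output name here are hypothetical):
data "terraform_remote_state" "secrets" {
  backend = "s3"

  config {
    bucket = "my-state-bucket"            # hypothetical state bucket
    key    = "secrets/terraform.tfstate"  # hypothetical state key
    region = "us-east-1"
  }
}

# The other project must declare something like: output "db_password" { ... }
# You can then reference it here as:
#   "${data.terraform_remote_state.secrets.db_password}"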
As a recommendation, you can make the version of terraform specific to a module and not your whole organization. I do this through the use of docker containers that run specific versions of the terraform bin. There is a script in the root of every module that will wrap the terraform commands to come up in the version of terraform meant for that project. Just a tip.

How can I run terraform code in sequence?

I am trying to set up some automation around AWS infrastructure and just bumped into an issue with module dependencies. Since there is no "include"-type option in Terraform, it's becoming a little difficult to achieve my goal.
Here is the short description of scenario:
In my root directory I have a file main.tf which consists of multiple module blocks, e.g.:
module "mytest1" {
  source = "mymod/dev"
}

module "mytest2" {
  source = "mymod2/prod"
}
Each of dev and prod has lots of .tf files.
A few of the .tf files inside the prod directory need some output from the resources that exist inside the dev directory.
Since modules have no explicit dependency, I was wondering if there is any way to run modules in sequence, or any other ideas?
Not entirely sure about your use case for having prod and dev needing to interact in the way you've stated.
I would expect you to maybe have something like the below folder structure:
Folder 1: Dev (Contains modules for dev)
Folder 2: Prod (Contains modules for prod)
Folder 3: Resources (Contains generic resource blocks that both dev and prod module utilise)
Then when you run terraform apply for Folder 1, it will create your dev infrastructure by passing the variables from your modules to the resources (in Folder 3).
And when you run terraform apply for Folder 2, it will create your prod infrastructure by passing the variables from your modules to the resources (in Folder 3).
If you can't do that for some reason, then Output Variables or Data Sources can potentially help you retrieve the information you need.
There is no reason for you to have different modules for different envs. Usually, the difference between lower envs and prod is the number and tier of each resource, and you could just use variables to pass that into the modules.
To deal with this, you can use terraform workspaces and create one workspace for each env, e.g:
terraform workspace new staging
This will create a completely new workspace with its own state. If you need to define the number of resources to be created, you can use variables or the terraform workspace name itself, e.g.:
# Your EC2 module
resource "aws_instance" "example" {
  count = "${terraform.workspace == "prod" ? 3 : 1}"
}

# or
resource "aws_instance" "example" {
  count = "${length(var.subnets)}" # you are likely to have more subnets for prod
}

# Your module call
module "instances" {
  source  = "./modules/ec2"
  subnets = "my subnets list"
}
And that is it: you can have all your modules working for any environment just by creating workspaces, changing the variables for each one in your pipeline, and applying the plan each time.
You can read more about workspaces here
I'm not too sure about your requirement of having the production environment depend on the development environment, but putting the specifics aside, the idiomatic way to create sequencing between resources and between modules in Terraform is to use reference expressions.
You didn't say what aspect of the development environment is consumed by the production environment, but for the sake of example let's say that the production environment needs the id of a VPC created in the development environment. In that case, the development module would export that VPC id as an output value:
# (this goes within a file in your mymod/dev directory)
output "vpc_id" {
  value = "${aws_vpc.example.id}"
}
Then your production module conversely would have an input variable to specify this:
# (this goes within a file in your mymod2/prod directory)
variable "vpc_id" {
  type = "string"
}
With these in place, your parent module can then pass the value between the two to establish the dependency you are looking for:
module "dev" {
source = "./mymod/dev"
}
module "prod" {
source = "./mymod2/prod"
vpc_id = "${module.dev.vpc_id}"
}
This works because it creates the following dependency chain:
module.prod's input variable vpc_id depends on
module.dev's output value vpc_id, which depends on
module.dev's aws_vpc.example resource
You can then use var.vpc_id anywhere inside your production module to obtain that VPC id, which creates another link in that dependency chain, telling Terraform that it must wait until the VPC is created before taking any action that depends on the VPC to exist.
In particular, notice that it's the individual variables and outputs that participate in the dependency chain, not the module as a whole. This means that if you have any resources in the prod module that don't need the VPC to exist then Terraform can get started on creating them immediately, without waiting for the development module to be fully completed first, while still ensuring that the VPC creation completes before taking any actions that do need it.
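For example, inside the production module a resource might consume that value like this (the aws_subnet here is purely illustrative, using the same 0.11 syntax as above):
# (inside mymod2/prod; a hypothetical resource that needs the VPC)
resource "aws_subnet" "example" {
  vpc_id     = "${var.vpc_id}"
  cidr_block = "10.0.1.0/24"
}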
There is some more information on this pattern in the documentation section Module Composition. It's written with Terraform v0.12 syntax and features in mind, but the general pattern is still applicable to earlier versions if you express it instead using the v0.11 syntax and capabilities, as I did in the examples above.

terraform apply keeps changing things even though no tf files have changed

I have a moderately complex terraform setup with
a module directory containing a main.tf, variables.tf and input.tf
and environments directory containing foo.tf, variables.tf and vars.tf
I can successfully run terraform apply and everything succeeds.
But, if I immediately run terraform apply again it makes changes.
The changes it keeps making are to resources in the module...resources that get attributes from variables in the environments tf files. I'm creating an MQ broker and a dashboard to monitor it.
In the environments directory
top.tf
module "broker" {
source = "modules/broker"
dashboard = "...."
}
In the modules directory
input.tf
variable "dashboard" {
}
amazonmq.tf
resource "aws_cloudwatch_dashboard" "mydash" {
dashboard_name = "foo"
dashboard_body = "${dashboard}"
}
Every time I run terraform apply it says it needs to change the dashboard. Any hints on what I'm doing wrong? (I've tried running with TF_LOG=DEBUG but I can't see anything that says why a change is needed). Thanks in advance.
This seems to be an issue with the terraform provider code itself. The dashboard_body property should have the computed flag attached to it, to allow you to provide it but ignore any incoming changes from aws.
I've opened up an issue on the github page. You'll find it here: https://github.com/terraform-providers/terraform-provider-aws/issues/5729
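Until the provider is fixed, one possible stopgap (this is a general Terraform feature, not something from that issue, and note it also suppresses intentional updates to the body until the block is removed) is to ignore drift on that attribute:
resource "aws_cloudwatch_dashboard" "mydash" {
  dashboard_name = "foo"
  dashboard_body = "${var.dashboard}"

  # suppress the perpetual diff by ignoring remote changes to the body
  lifecycle {
    ignore_changes = ["dashboard_body"]
  }
}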

Terraform: How to migrate state between projects?

What is the least painful way to migrate state of resources from one project (i.e., move a module invocation) to another, particularly when using remote state storage? While refactoring is relatively straightforward within the same state file (i.e., take this resource and move it to a submodule or vice-versa), I don't see an alternative to JSON surgery for refactoring into different state files, particularly if we use remote (S3) state (i.e., take this submodule and move it to another project).
The least painful way I've found is to pull both remote states locally, move the modules/resources between the two, then push back up. Also remember: if you're moving a module, don't move the individual resources; move the whole module.
For example:
cd dirA
terraform state pull > ../dirA.tfstate
cd ../dirB
terraform state pull > ../dirB.tfstate
terraform state mv -state=../dirA.tfstate -state-out=../dirB.tfstate module.foo module.foo
terraform state push ../dirB.tfstate
# verify state was moved
terraform state list | grep foo
cd ../dirA
terraform state push ../dirA.tfstate
Unfortunately, the terraform state mv command doesn’t support specifying two remote backends, so this is the easiest way I’ve found to move state between multiple remotes.
Probably the simplest option is to use terraform import on the resource in the new state file location and then terraform state rm in the old location.
Terraform does handle some automatic state migration when copying/moving the .terraform folder around but I've only used that when shifting the whole state file rather than part of it.
As mentioned in a related Terraform Q&A -> Best practices when using Terraform:
It is easier and faster to work with a smaller number of resources:
The commands terraform plan and terraform apply both make cloud API calls to verify the status of resources.
If you have your entire infrastructure in a single composition, this can take many minutes (even if you have several files in the same folder).
So if you end up with a mono-dir containing every resource, it's never too late to start segregating them by service, team, client, etc.
Possible procedure to migrate Terraform states between projects / services:
Example Scenario:
Suppose we have a folder named common with all our .tf files for a certain project, and we decided to divide (move) our Terraform resources to a new project folder named security, so we now need to move some resources from the common project folder to security.
Case 1:
If the security folder does not exist yet (which is the best scenario):
Back up the Terraform backend state content stored in the corresponding AWS S3 bucket (since it's versioned, we should be even safer).
With your console in the origin folder (in our case common), execute make init to be sure your local .terraform folder is synced with your remote state.
Since the security folder does not exist yet, clone (copy) the common folder to the destination name security and update the config.tf file inside this new cloned folder to point to the new S3 backend path (consider updating one account at a time, starting with the least critical one, and evaluate the results with terraform state list).
e.g.:
# Backend config (partial)
terraform {
  required_version = ">= 0.11.14"

  backend "s3" {
    key = "account-name/security/terraform.tfstate"
  }
}
Inside our newly created security folder, run terraform init (without removing the copied local .terraform folder, which was already generated and synced in step 2). This will generate a new copy of the resources' state (asking interactively) in the new S3 path. This is a safe operation, since we haven't removed the resources from the old .tfstate path yet.
$ make init
terraform init -backend-config=../config/backend.config
Initializing modules...
- module.cloudtrail
- module.cloudtrail.cloudtrail_label
Initializing the backend...
Backend configuration changed!
Terraform has detected that the configuration specified for the backend
has changed. Terraform will now check for existing state in the backends.
Acquiring state lock. This may take a few moments...
Acquiring state lock. This may take a few moments...
Do you want to copy existing state to the new backend?
Pre-existing state was found while migrating the previous "s3" backend to the
newly configured "s3" backend. No existing state was found in the newly
configured "s3" backend. Do you want to copy this state to the new "s3"
backend? Enter "yes" to copy and "no" to start with an empty state.
Enter a value: yes
...
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
...
Terraform has been successfully initialized!
...
Selectively remove the resources from each state (terraform state rm module.foo) so that /common and /security each keep only the resources they should own. In parallel, you must also make the corresponding updates (add/remove) to the modules/resources in the .tf files of each folder, to keep both your local code base declaration and your remote .tfstate in sync. This is a sensitive operation; please start by testing the procedure on the least critical single resource possible.
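For example, from inside the new security folder (module.foo is just the placeholder name used above):
terraform state list             # inspect what this state currently tracks
terraform state rm module.foo    # drop a module that should remain only in common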
As reference we can consider the following doc and tools:
https://www.terraform.io/docs/commands/state/list.html
https://www.terraform.io/docs/commands/state/rm.html
https://github.com/camptocamp/terraboard (apparently still not compatible with terraform 0.12)
Case 2:
If the security folder already exists and has its associated remote .tfstate in its AWS S3 path, you'll need to use a different sequence of steps and commands, possibly the ones referenced in the links below:
1. https://www.terraform.io/docs/commands/state/list.html
2. https://www.terraform.io/docs/commands/state/pull.html
3. https://www.terraform.io/docs/commands/state/mv.html
4. https://www.terraform.io/docs/commands/state/push.html
Ref links:
https://medium.com/@lynnlin827/moving-terraform-resources-states-from-one-remote-state-to-another-c76f8b76a996
I use this script (it does not work from v0.12 onwards) to migrate state while refactoring. Feel free to adapt it to your needs.
src=<source dir>
dst=<target dir>

resources=(
  aws_s3_bucket.bucket1
  aws_iam_role.role2
  aws_iam_user.user1
  aws_s3_bucket.bucket2
  aws_iam_policy.policy2
)

cd $src
terraform state pull > /tmp/source.tfstate

cd $dst
terraform state pull > /tmp/target.tfstate

for resource in "${resources[@]}"; do
  terraform state mv -state=/tmp/source.tfstate -state-out=/tmp/target.tfstate "${resource}" "${resource}"
done

terraform state push /tmp/target.tfstate

cd $src
terraform state push /tmp/source.tfstate
Note that terraform pull is deprecated from v0.12 (but not removed and still works), and terraform push does not work anymore from v0.12.
Important: The terraform push command is deprecated, and only works
with the legacy version of Terraform Enterprise. In the current
version of Terraform Cloud, you can upload configurations using the API. See the docs about API-driven runs for more details.
==================
Below are unrelated to the OP:
If you are renaming your resources within the same project:
For version <= 1.0: use terraform state mv ....
For version >= 1.1: use the moved statement described here or here.
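A minimal sketch of a moved block (the resource addresses are illustrative):
# Terraform >= 1.1: record the rename in configuration instead of editing state by hand
moved {
  from = aws_instance.old_name
  to   = aws_instance.new_name
}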
There are several other useful commands that I listed in my blog