Error: Unable to find remote state
on ../../modules/current_global/main.tf line 26, in data "terraform_remote_state" "global":
26: data "terraform_remote_state" "global" {
No stored state was found for the given workspace in the given backend
I have been stuck on this issue for a while.
main.tf:
data "aws_caller_identity" "current" {}
locals {
state_buckets = {
"amazon_account_id" = {
bucket = "bucket_name"
key = "key"
region = "region"
}
}
state_bucket = local.state_buckets[data.aws_caller_identity.current.account_id]
}
data "terraform_remote_state" "global" {
backend = "s3"
config = local.state_bucket
}
output "outputs" {
description = "Current account's global Terraform module outputs"
value = data.terraform_remote_state.global.outputs
}
One directory above, there is another main.tf file that references the module above:
main.tf:
provider "aws" {
version = "~> 2.0"
region = var.region
allowed_account_ids = ["id"]
}
terraform {
backend "s3" {
bucket = "bucket_name"
key = "key"
region = "region"
}
}
module "global" {
source = "../../modules/current_global"
}
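For reference, here is a minimal sketch of the shape terraform_remote_state expects; the values below are placeholders, not the poster's real ones. The error above usually means that no state object exists at the exact bucket/key/workspace combination the data source is given, so these must match the backend block of the stack that wrote the global state.

# Hypothetical values for illustration only; bucket, key, region and workspace
# must match the backend that actually stores the global stack's state.
data "terraform_remote_state" "global" {
  backend   = "s3"
  workspace = "default" # only relevant if the producing stack uses named workspaces

  config = {
    bucket = "global-state-bucket"      # hypothetical
    key    = "global/terraform.tfstate" # hypothetical
    region = "eu-central-1"             # hypothetical
  }
}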
On GCP, I'm using Cloud Run with secrets from Secret Manager injected as environment variables.
How can I efficiently update a Cloud Run instance when I update a secret?
I tried with this Terraform code, without success:
// run.tf
module "cloud-run-app" {
  source  = "GoogleCloudPlatform/cloud-run/google"
  version = "~> 0.0"

  service_name          = "${local.main_project}-cloudrun"
  location              = local.region
  image                 = local.cloudrun_image
  project_id            = local.main_project
  env_vars              = local.envvars_injection
  env_secret_vars       = local.secrets_injection
  service_account_email = google_service_account.app.email
  ports                 = local.cloudrun_port

  service_annotations = {
    "run.googleapis.com/ingress" : "internal-and-cloud-load-balancing"
  }

  service_labels = {
    "env_type" = var.env_name
  }

  template_annotations = {
    "autoscaling.knative.dev/maxScale" : local.cloudrun_app_max_scale,
    "autoscaling.knative.dev/minScale" : local.cloudrun_app_min_scale,
    "generated-by" : "terraform",
    "run.googleapis.com/client-name" : "terraform"
  }

  depends_on = [
    google_project_iam_member.run_gcr,
    google_project_iam_member.app_secretmanager,
    google_secret_manager_secret_version.secrets
  ]
}

// secrets.tf
resource "google_secret_manager_secret" "secrets" {
  for_each  = local.secrets_definition
  secret_id = each.key

  replication {
    automatic = true
  }
}

resource "google_secret_manager_secret_version" "secrets" {
  for_each    = local.secrets_definition
  secret      = google_secret_manager_secret.secrets[each.key].name
  secret_data = each.value
}
The trick here is to mount the secret as a volume (a file) and not as an environment variable.
If you do that, point the mount at the latest secret version, and read the file every time you need the secret content, you will always read the latest version, without reloading the Cloud Run instance or redeploying a revision.
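As a rough sketch of that approach using the plain google_cloud_run_service resource (rather than the module from the question); the volume name "app-secrets", the mount path, and the secret key are made-up examples:

resource "google_cloud_run_service" "app" {
  name     = "example-cloudrun" # hypothetical
  location = "europe-west1"     # hypothetical

  template {
    spec {
      containers {
        image = "gcr.io/my-project/app:latest" # hypothetical

        # The secret shows up as a file, e.g. /secrets/app-secret
        volume_mounts {
          name       = "app-secrets"
          mount_path = "/secrets"
        }
      }

      volumes {
        name = "app-secrets"
        secret {
          secret_name = google_secret_manager_secret.secrets["app-secret"].secret_id
          items {
            key  = "latest" # resolve to the newest version on every read
            path = "app-secret"
          }
        }
      }
    }
  }
}

The application then re-reads /secrets/app-secret whenever it needs the value, so new secret versions are picked up without deploying a new revision.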
I'm trying to create multiple AWS accounts in an Organization, each containing resources.
The resources should be owned by the created accounts.
For that I created a module for the accounts:
resource "aws_organizations_account" "this" {
name = var.customer
email = var.email
parent_id = var.parent_id
role_name = "OrganizationAccountAccessRole"
provider = aws.src
}
resource "aws_s3_bucket" "this" {
bucket = "exconcept-terraform-state-${var.customer}"
provider = aws.dst
depends_on = [
aws_organizations_account.this
]
}
output "account_id" {
value = aws_organizations_account.this.id
}
output "account_arn" {
value = aws_organizations_account.this.arn
}
My provider file for the module:
terraform {
  required_providers {
    aws = {
      source                = "hashicorp/aws"
      version               = "~> 4.0"
      configuration_aliases = [aws.src, aws.dst]
    }
  }
}
In the root module I'm calling the module like this:
module "account" {
source = "./modules/account"
for_each = var.accounts
customer = each.value["customer"]
email = each.value["email"]
# close_on_deletion = true
parent_id = aws_organizations_organizational_unit.testing.id
providers = {
aws.src = aws.default
aws.dst = aws.customer
}
}
Since the provider information comes from the root module, and the accounts are created with a for_each map, how can I use the current aws.dst provider?
Here is my root provider file:
provider "aws" {
region = "eu-central-1"
profile = "default"
alias = "default"
}
provider "aws" {
assume_role {
role_arn = "arn:aws:iam::${module.account[each.key].account_id}:role/OrganizationAccountAccessRole"
}
alias = "customer"
region = "eu-central-1"
}
With Terraform init I got this error:
Error: Cycle: module.account.aws_s3_bucket_versioning.this, module.account.aws_s3_bucket.this, provider["registry.terraform.io/hashicorp/aws"].customer, module.account.aws_s3_bucket_acl.this, module.account (close)
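The cycle arises because the customer provider reads module.account[each.key].account_id while the module's resources are configured with that same provider (and each.key has no meaning inside a provider block). As a minimal sketch of one way to break the loop, assuming the account ID is supplied from outside this configuration (a variable, or a separate account-creation state) instead of from the module being configured; var.customer_account_id is hypothetical:

provider "aws" {
  alias  = "customer"
  region = "eu-central-1"

  assume_role {
    # Hypothetical input: populated from a previous run or another configuration,
    # so the provider no longer depends on a resource it is supposed to create.
    role_arn = "arn:aws:iam::${var.customer_account_id}:role/OrganizationAccountAccessRole"
  }
}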
I am parsing a main.tf file so that I can use the arguments passed into the module in my Go program.
Terraform 0.13 (main.tf)
terraform {
  backend "s3" {
    bucket                 = "bucket"
    key                    = "c2an8q6a0brja8jaq3k0.tfstate"
    region                 = "us-west-2"
    dynamodb_table         = "terraform-lock"
    skip_region_validation = true
  }
}

provider "null" {
  version = "2.1"
}

provider "random" {
  version = "2.3"
}

provider "template" {
  version = "2.1"
}

provider "archive" {
  version = "1.3"
}

provider "aws" {
  version = "<= 4.0"
  region  = "us-west-2"
}

output "aws_vpc_main" {
  value = module.dev-c2an8q6a0brja8jaq3k0-Network.vpc_id
}

module "dev-c2an8q6a0brja8jaq3k0-Network" {
  source        = "gitsource"
  cidr          = "10.0.0.0/16"
  cluster_id    = "c2an8q6a0brja8jaq3k0"
  env           = "dev"
  owner         = "me"
  region        = "us-west-2"
  super_cluster = "dev"
  platform_api  = "api"
  proj          = "this"
}
From the above main.tf file, I would like to parse the various arguments (i.e. env, owner, region, cidr, etc.) that are passed to the module.
I am using the Go program below to attempt the parsing.
main.go
package main

import (
	"fmt"
	"io/ioutil"

	"github.com/hashicorp/hcl"
)

func main() {
	FileContent, err := ioutil.ReadFile("main.tf") // just pass the file name
	if err != nil {
		fmt.Print(err)
	}

	var out interface{}
	FileContentString := string(FileContent) // convert content to a 'string'
	err = hcl.Decode(&out, FileContentString)
	if err != nil {
		fmt.Println(err)
	}

	fmt.Println(out)
}
However, the above program errors out with the following message: At 33:11: Unknown token: 33:11 IDENT module.dev-c2an8q6a0brja8jaq3k0-Network.vpc_id
If I change the interpolation syntax in the above main.tf to match Terraform 0.11 and earlier, main.go works fine.
Terraform 0.11 (main.tf)
terraform {
  backend "s3" {
    bucket                 = "bucket"
    key                    = "c2an8q6a0brja8jaq3k0.tfstate"
    region                 = "us-west-2"
    dynamodb_table         = "terraform-lock"
    skip_region_validation = true
  }
}

provider "null" {
  version = "2.1"
}

provider "random" {
  version = "2.3"
}

provider "template" {
  version = "2.1"
}

provider "archive" {
  version = "1.3"
}

provider "aws" {
  version = "<= 4.0"
  region  = "us-west-2"
}

output "aws_vpc_main" {
  value = "${module.dev-c2an8q6a0brja8jaq3k0-Network.vpc_id}"
}

module "dev-c2an8q6a0brja8jaq3k0-Network" {
  source        = "gitsource"
  cidr          = "10.0.0.0/16"
  cluster_id    = "c2an8q6a0brja8jaq3k0"
  env           = "dev"
  owner         = "me"
  region        = "us-west-2"
  super_cluster = "dev"
  platform_api  = "api"
  proj          = "virt"
}
main.go program output
map[module:[map[dev-c2an8q6a0brja8jaq3k0-Network:[map[cidr:10.0.0.0/16 cluster_id:c2an8q6a0brja8jaq3k0 env:dev owner:me platform_api:api proj:virt region:us-west-2 source:gitsource super_cluster:dev]]]] output:[map[aws_vpc_main:[map[value:${module.dev-c2an8q6a0brja8jaq3k0-Network.vpc_id}]]]] provider:[map[null:[map[version:2.1]]] map[random:[map[version:2.3]]] map[template:[map[version:2.1]]] map[archive:[map[version:1.3]]] map[aws:[map[region:us-west-2 version:<= 4.0]]]] terraform:[map[backend:[map[s3:[map[bucket:bucket dynamodb_table:terraform-lock key:c2an8q6a0brja8jaq3k0.tfstate region:us-west-2 skip_region_validation:true]]]]]]]
I can then parse this output with various methods.
I am just out of ideas as to why it's not working with the configuration for the newer Terraform version.
Thank you.
EDIT
I just want to update this question with the research I have done so far.
There is a package called terraform-config-inspect; however, it doesn't support parsing module arguments at the moment.
I have also looked at HCL 2 (as that's what Terraform uses from 0.13 onward); however, I am not able to find any function or method to parse the Terraform files the way hclv1 supported.
Thanks
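For what it's worth, here is a minimal sketch of reading module arguments with the hcl/v2 packages (hclparse and hclsyntax). It only resolves literal values, since expressions such as module.*.vpc_id would need an evaluation context:

package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/hcl/v2/hclparse"
	"github.com/hashicorp/hcl/v2/hclsyntax"
)

func main() {
	parser := hclparse.NewParser()
	file, diags := parser.ParseHCLFile("main.tf")
	if diags.HasErrors() {
		log.Fatal(diags.Error())
	}

	// Native .tf syntax exposes its blocks and attributes via *hclsyntax.Body.
	body := file.Body.(*hclsyntax.Body)

	for _, block := range body.Blocks {
		if block.Type != "module" {
			continue
		}
		fmt.Println("module", block.Labels)

		for name, attr := range block.Body.Attributes {
			// A nil EvalContext only evaluates literals; references like
			// module.foo.bar would need variables/functions supplied here.
			val, valDiags := attr.Expr.Value(nil)
			if valDiags.HasErrors() {
				fmt.Printf("  %s = <unevaluated expression>\n", name)
				continue
			}
			fmt.Printf("  %s = %#v\n", name, val)
		}
	}
}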
I'm trying to implement a data module for referencing a 'Robot Account' for Terraform.
I get the following errors:
Error: Reference to undeclared resource
on main.tf line 7, in provider "google":
7: credentials = data.google_secret_manager_secret_version.secret
A data resource "google_secret_manager_secret_version" "secret" has not been
declared in the root module.
Error: Reference to undeclared input variable
on datamodule\KeydataModule.tf line 3, in data "google_secret_manager_secret_version" "secret":
3: secret = "${var.Terra_Auth}"
An input variable with the name "Terra_Auth" has not been declared. This
variable can be declared with a variable "Terra_Auth" {} block.
With the following main.tf:
module "KeydataModule" {
source = "./datamodule"
}
provider "google" {
credentials = data.google_secret_manager_secret_version.secret
project = "KubeProject"
region = "us-central1"
zone = "us-central1-c"
}
resource "google_compute_instance" "vm_instance" {
name = "terraform-instance"
machine_type = "f1-micro"
boot_disk {
initialize_params {
image = "ubuntu-cloud/ubuntu-1804-lts"
}
}
network_interface {
# A default network is created for all GCP projects
network = google_compute_network.vpc_network.self_link
access_config {
}
}
}
resource "google_compute_network" "vpc_network" {
name = "terraform-network"
auto_create_subnetworks = "true"
}
The KeydataModule.tf:
data "google_secret_manager_secret_version" "secret" {
provider = google-beta
secret = "${var.Terra_Auth}"
}
The following variables.tf declares the 'Terra_Auth' variable:
variable "Terra_Auth" {
type = string
description = "Access Key for Terraform Service Account"
}
And finally a terraform.tfvars file, which in this case houses the secret name within my GCP account:
Terra_Auth = "Terraform_GCP_Account_Secret"
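Both errors are about scoping: the root module cannot reference a data source that lives inside ./datamodule, and the module cannot see a variable declared only in the root. Here is a rough sketch of the wiring, assuming the secret's payload is the service account key JSON; the output name robot_account_key and the file layout are made up:

# datamodule/outputs.tf (hypothetical file)
output "robot_account_key" {
  value     = data.google_secret_manager_secret_version.secret.secret_data
  sensitive = true
}

# datamodule/variables.tf: the module needs its own declaration
variable "Terra_Auth" {
  type = string
}

# root main.tf: pass the variable in and read the key back out
module "KeydataModule" {
  source     = "./datamodule"
  Terra_Auth = var.Terra_Auth
}

provider "google" {
  credentials = module.KeydataModule.robot_account_key
  project     = "KubeProject"
  region      = "us-central1"
  zone        = "us-central1-c"
}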
I'm trying to work with the aws_instances data source. I created a simple configuration that should create an EC2 instance and return its IP as an output:
variable "default_port" {
type = string
default = 8080
}
provider "aws" {
region = "us-west-2"
shared_credentials_file = "/Users/kharandziuk/.aws/creds"
profile = "prototyper"
}
resource "aws_instance" "example" {
ami = "ami-0994c095691a46fb5"
instance_type = "t2.small"
tags = {
name = "example"
}
}
data "aws_instances" "test" {
instance_tags = {
name = "example"
}
instance_state_names = ["pending", "running", "shutting-down", "terminated", "stopping", "stopped"]
}
output "ip" {
value = data.aws_instances.test.public_ips
}
But for some reason I can't configure the data source properly. The result is:
> terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
data.aws_instances.test: Refreshing state...
Error: Your query returned no results. Please change your search criteria and try again.
on main.tf line 21, in data "aws_instances" "test":
21: data "aws_instances" "test" {
How can I fix it?
You should use the depends_on option in data.aws_instances.test, like this:
data "aws_instances" "test" {
instance_tags = {
name = "example"
}
instance_state_names = ["pending", "running", "shutting-down", "terminated", "stopping", "stopped"]
depends_on = [
"aws_instance.example"
]
}
It means that data.aws_instances.test is read only after aws_instance.example has been created.
Sometimes we need this option because of dependencies between AWS resources.
See the documentation for the depends_on option.
You don't need a data source here. You can get the public IP address of the instance back from the resource itself, simplifying everything.
This should do the exact same thing:
resource "aws_instance" "example" {
ami = "ami-0994c095691a46fb5"
instance_type = "t2.small"
tags = {
name = "example"
}
}
output "ip" {
value = aws_instance.example.public_ip
}