Unknown token: IDENT , when decoding/parsing terraform file - amazon-web-services

I am parsing a main.tf file so that I can use the arguments passed into the module in my Go program.
Terraform 0.13 ( main.tf )
terraform {
  backend "s3" {
    bucket                 = "bucket"
    key                    = "c2an8q6a0brja8jaq3k0.tfstate"
    region                 = "us-west-2"
    dynamodb_table         = "terraform-lock"
    skip_region_validation = true
  }
}

provider "null" {
  version = "2.1"
}

provider "random" {
  version = "2.3"
}

provider "template" {
  version = "2.1"
}

provider "archive" {
  version = "1.3"
}

provider "aws" {
  version = "<= 4.0"
  region  = "us-west-2"
}

output "aws_vpc_main" {
  value = module.dev-c2an8q6a0brja8jaq3k0-Network.vpc_id
}

module "dev-c2an8q6a0brja8jaq3k0-Network" {
  source        = "gitsource"
  cidr          = "10.0.0.0/16"
  cluster_id    = "c2an8q6a0brja8jaq3k0"
  env           = "dev"
  owner         = "me"
  region        = "us-west-2"
  super_cluster = "dev"
  platform_api  = "api"
  proj          = "this"
}
From the above main.tf file, I would like to parse the various arguments (i.e. env, owner, region, cidr, etc.) that are passed to the module.
I am using the Go program below to attempt some parsing.
main.go
package main

import (
	"fmt"
	"io/ioutil"

	"github.com/hashicorp/hcl"
)

func main() {
	FileContent, err := ioutil.ReadFile("main.tf") // just pass the file name
	if err != nil {
		fmt.Print(err)
	}

	var out interface{}
	FileContentString := string(FileContent) // convert content to a 'string'
	err = hcl.Decode(&out, FileContentString)
	if err != nil {
		fmt.Println(err)
	}
	fmt.Println(out)
}
However, the above program errors out with the following message: At 33:11: Unknown token: 33:11 IDENT module.dev-c2an8q6a0brja8jaq3k0-Network.vpc_id
If I change the interpolation syntax in the above main.tf to match Terraform 0.11 and earlier, main.go works fine.
Terraform 0.11 ( main.tf )
terraform {
  backend "s3" {
    bucket                 = "bucket"
    key                    = "c2an8q6a0brja8jaq3k0.tfstate"
    region                 = "us-west-2"
    dynamodb_table         = "terraform-lock"
    skip_region_validation = true
  }
}

provider "null" {
  version = "2.1"
}

provider "random" {
  version = "2.3"
}

provider "template" {
  version = "2.1"
}

provider "archive" {
  version = "1.3"
}

provider "aws" {
  version = "<= 4.0"
  region  = "us-west-2"
}

output "aws_vpc_main" {
  value = "${module.dev-c2an8q6a0brja8jaq3k0-Network.vpc_id}"
}

module "dev-c2an8q6a0brja8jaq3k0-Network" {
  source        = "gitsource"
  cidr          = "10.0.0.0/16"
  cluster_id    = "c2an8q6a0brja8jaq3k0"
  env           = "dev"
  owner         = "me"
  region        = "us-west-2"
  super_cluster = "dev"
  platform_api  = "api"
  proj          = "virt"
}
main.go program output
map[module:[map[dev-c2an8q6a0brja8jaq3k0-Network:[map[cidr:10.0.0.0/16 cluster_id:c2an8q6a0brja8jaq3k0 env:dev owner:me platform_api:api proj:virt region:us-west-2 source:gitsource super_cluster:dev]]]] output:[map[aws_vpc_main:[map[value:${module.dev-c2an8q6a0brja8jaq3k0-Network.vpc_id}]]]] provider:[map[null:[map[version:2.1]]] map[random:[map[version:2.3]]] map[template:[map[version:2.1]]] map[archive:[map[version:1.3]]] map[aws:[map[region:us-west-2 version:<= 4.0]]]] terraform:[map[backend:[map[s3:[map[bucket:bucket dynamodb_table:terraform-lock key:c2an8q6a0brja8jaq3k0.tfstate region:us-west-2 skip_region_validation:true]]]]]]]
Which I can parse later on with various methods.
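For instance, the nested maps from that output can be walked with plain type assertions. A minimal sketch, assuming the decoded value has the []map[string]interface{} shape that the printed output suggests (the helper name extractModuleArgs is mine):

```go
package main

import "fmt"

// extractModuleArgs walks the structure produced by hcl.Decode
// into an interface{} and returns the arguments of the named module,
// assuming the map-of-slice-of-maps shape shown above.
func extractModuleArgs(out interface{}, moduleName string) map[string]interface{} {
	root, ok := out.(map[string]interface{})
	if !ok {
		return nil
	}
	modules, ok := root["module"].([]map[string]interface{})
	if !ok {
		return nil
	}
	for _, m := range modules {
		entries, ok := m[moduleName].([]map[string]interface{})
		if !ok || len(entries) == 0 {
			continue
		}
		return entries[0] // the module's argument map
	}
	return nil
}

func main() {
	// Hand-built stand-in for hcl.Decode's output, matching the
	// shape printed above.
	out := map[string]interface{}{
		"module": []map[string]interface{}{
			{"dev-c2an8q6a0brja8jaq3k0-Network": []map[string]interface{}{
				{"env": "dev", "owner": "me", "region": "us-west-2"},
			}},
		},
	}
	args := extractModuleArgs(out, "dev-c2an8q6a0brja8jaq3k0-Network")
	fmt.Println(args["env"], args["owner"], args["region"]) // prints: dev me us-west-2
}
```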
I am just out of ideas as to why it's not working with the configuration for the latest Terraform version.
Thank you.
EDIT
I just want to update this question with the research I have done so far.
There is a package called terraform-config-inspect; however, it doesn't support parsing module arguments at the moment.
I have also looked at hcl v2 (as that's what Terraform uses from 0.13 onward), but I am not able to find any function or method that parses Terraform files the way hclv1 did.
Thanks
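For completeness: hcl v2 does expose a lower-level parsing API through its hclparse and hclsyntax packages, though it is not a drop-in replacement for hclv1's Decode. A minimal sketch, assuming the github.com/hashicorp/hcl/v2 module is available and that the module arguments are literal values:

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/hcl/v2"
	"github.com/hashicorp/hcl/v2/hclparse"
	"github.com/hashicorp/hcl/v2/hclsyntax"
)

func main() {
	parser := hclparse.NewParser()
	file, diags := parser.ParseHCLFile("main.tf")
	if diags.HasErrors() {
		log.Fatal(diags)
	}

	// Walk the native-syntax AST instead of decoding into a struct.
	body := file.Body.(*hclsyntax.Body)
	for _, block := range body.Blocks {
		if block.Type != "module" {
			continue
		}
		fmt.Println("module:", block.Labels)
		for name, attr := range block.Body.Attributes {
			// Literal values evaluate with an empty context; expressions
			// that reference variables return diagnostics instead.
			val, diags := attr.Expr.Value(&hcl.EvalContext{})
			if diags.HasErrors() {
				continue
			}
			fmt.Printf("  %s = %v\n", name, val)
		}
	}
}
```

This handles the 0.13-style bare reference in the output block as well, because the expression is parsed rather than rejected at the token level; it simply fails to evaluate without an evaluation context.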

Related

AWS access key id provided does not exist in our records

I have an issue with Terraform that I really don't understand.
Let me explain:
terraform init, terraform fmt, and terraform validate all run fine.
However, when I run terraform plan, I get an error.
I set the AWS access key and secret key in the code to test faster (otherwise the values are passed in by GitLab).
If I try without them in variable.tf and instead use the values I exported earlier for the AWS CLI, everything works perfectly and I can deploy to AWS.
variable.tf
variable "aws_region" {
  default = "eu-central-1"
}

variable "bucket_name" {
  type    = string
  default = "test-bucket"
}

variable "aws_access_key" {
  default = "XXXXXXXXXXXXXXXXX"
}

variable "aws_secret_key" {
  default = "XXXXXXXXXXXXXX"
}
main.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.9.0"
    }
  }
}

provider "aws" {
  region     = var.aws_region
  access_key = var.aws_access_key
  secret_key = var.aws_secret_key

  # Make faster by skipping something
  # https://registry.terraform.io/providers/hashicorp/aws/latest/docs#skip_get_ec2_platforms
  skip_get_ec2_platforms      = true
  skip_metadata_api_check     = true
  skip_region_validation      = true
  skip_credentials_validation = true
  skip_requesting_account_id  = true
}
provider.tf
module "s3-bucket" {
  source  = "terraform-aws-modules/s3-bucket/aws"
  version = "3.4.0"

  bucket        = var.bucket_name
  acl           = "private"
  force_destroy = true
  create_bucket = true

  versioning = {
    enabled = true
  }

  server_side_encryption_configuration = {
    rule = {
      apply_server_side_encryption_by_default = {
        sse_algorithm = "AES256"
      }
    }
  }
}
Thanks for your help, guys.
I don't know what to do anymore.
Try using "region", "access_key", and "secret_key" (without the aws_ prefix) as the variable names in your variable.tf and main.tf.
Sometimes that prefix creates a conflict with Terraform code.
It looks like the cause is the aws_ prefix: when it is used in variable names, this error occurs.
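Applied to the files above, the suggested rename would look like this (a sketch; only the variable names change):

```hcl
# variable.tf
variable "access_key" {
  default = "XXXXXXXXXXXXXXXXX"
}

variable "secret_key" {
  default = "XXXXXXXXXXXXXX"
}

# main.tf
provider "aws" {
  region     = var.aws_region
  access_key = var.access_key
  secret_key = var.secret_key
}
```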

Terraform: get account id for provider + for_each + account module

I'm trying to create multiple AWS accounts in an Organization, each containing resources.
The resources should be owned by the created accounts.
For that I created a module for the accounts:
resource "aws_organizations_account" "this" {
  name      = var.customer
  email     = var.email
  parent_id = var.parent_id
  role_name = "OrganizationAccountAccessRole"
  provider  = aws.src
}

resource "aws_s3_bucket" "this" {
  bucket   = "exconcept-terraform-state-${var.customer}"
  provider = aws.dst
  depends_on = [
    aws_organizations_account.this
  ]
}

output "account_id" {
  value = aws_organizations_account.this.id
}

output "account_arn" {
  value = aws_organizations_account.this.arn
}
my provider file for the module:
terraform {
  required_providers {
    aws = {
      source                = "hashicorp/aws"
      version               = "~> 4.0"
      configuration_aliases = [aws.src, aws.dst]
    }
  }
}
In the root module I'm calling the module like this:
module "account" {
  source   = "./modules/account"
  for_each = var.accounts

  customer = each.value["customer"]
  email    = each.value["email"]
  # close_on_deletion = true
  parent_id = aws_organizations_organizational_unit.testing.id

  providers = {
    aws.src = aws.default
    aws.dst = aws.customer
  }
}
Since the provider information comes from the root module, and the accounts are created with a for_each map, how can I use the current aws.dst provider?
Here is my root provider file:
provider "aws" {
  region  = "eu-central-1"
  profile = "default"
  alias   = "default"
}

provider "aws" {
  assume_role {
    role_arn = "arn:aws:iam::${module.account[each.key].account_id}:role/OrganizationAccountAccessRole"
  }
  alias  = "customer"
  region = "eu-central-1"
}
With Terraform init I got this error:
Error: Cycle: module.account.aws_s3_bucket_versioning.this, module.account.aws_s3_bucket.this, provider["registry.terraform.io/hashicorp/aws"].customer, module.account.aws_s3_bucket_acl.this, module.account (close)

Internal Exception while creating AWS FMS Policy for CloudFront

I am getting the error below while creating a Firewall Manager policy for a CloudFront distribution.
The documentation provides little detail on how to deploy a CloudFront distribution, which is a global resource.
The error while executing my code:
aws_fms_policy.xxxx: Creating...
╷
│ Error: error creating FMS Policy: InternalErrorException:
│
│ with aws_fms_policy.xxxx,
│ on r_wafruleset.tf line 1, in resource "aws_fms_policy" "xxxx":
│ 1: resource "aws_fms_policy" "xxxx" {
│
╵
Releasing state lock. This may take a few moments...
main.tf looks like this with provider information:
provider "aws" {
  region = "ap-southeast-2"
  assume_role {
    role_arn = "arn:aws:iam::${var.account_id}:role/yyyy"
  }
}

provider "aws" {
  alias  = "us_east_1"
  region = "us-east-1"
  assume_role {
    role_arn = "arn:aws:iam::${var.account_id}:role/yyyy"
  }
}
r_fms.tf looks like this:
resource "aws_fms_policy" "xxxx" {
  name                  = "xxxx"
  exclude_resource_tags = true
  resource_tags         = var.exclude_tags
  remediation_enabled   = true
  provider              = aws.us_east_1

  include_map {
    account = ["123123123"]
  }

  resource_type = "AWS::CloudFront::Distribution"

  security_service_policy_data {
    type = "WAFV2"
    managed_service_data = jsonencode(
      {
        type = "WAFV2"
        defaultAction = {
          type = "ALLOW"
        }
        overrideCustomerWebACLAssociation = false
        postProcessRuleGroups             = []
        preProcessRuleGroups = [
          {
            excludeRules = []
            managedRuleGroupIdentifier = {
              vendorName           = "AWS"
              managedRuleGroupName = "AWSManagedRulesAmazonIpReputationList"
              version              = true
            }
            overrideAction = {
              type = "COUNT"
            }
            ruleGroupArn           = null
            ruleGroupType          = "ManagedRuleGroup"
            sampledRequestsEnabled = true
          },
          {
            excludeRules = []
            managedRuleGroupIdentifier = {
              managedRuleGroupName = "AWSManagedRulesWindowsRuleSet"
              vendorName           = "AWS"
              version              = null
            }
            overrideAction = {
              type = "COUNT"
            }
            ruleGroupArn           = null
            ruleGroupType          = "ManagedRuleGroup"
            sampledRequestsEnabled = true
          },
        ]
        sampledRequestsEnabledForDefaultActions = true
    })
  }
}
I have tried to follow the thread below, but I am still getting the error:
https://github.com/hashicorp/terraform-provider-aws/issues/17821
Terraform Version:
Terraform v1.1.7
on windows_386
+ provider registry.terraform.io/hashicorp/aws v4.6.0
There is an open issue in the Terraform AWS provider.
A workaround for this issue is to remove the 'version' attribute.
AWS has recently introduced versioning for WAF policies managed by Firewall Manager, which is causing this weird error.
Though a permanent fix is in progress (refer to my earlier post), we can remove the attribute to avoid this error.
Another approach is to use the new attribute versionEnabled = true in case you want versioning enabled.
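Concretely, the workaround amounts to dropping version from each managedRuleGroupIdentifier object in the managed_service_data above (a sketch of one entry; where exactly versionEnabled belongs is per the linked provider issue, not something I have verified):

```hcl
managedRuleGroupIdentifier = {
  vendorName           = "AWS"
  managedRuleGroupName = "AWSManagedRulesAmazonIpReputationList"
  # 'version' removed to avoid the InternalErrorException;
  # alternatively, opt in to versioning with versionEnabled = true
}
```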

Unable to find remote state

Error: Unable to find remote state
on ../../modules/current_global/main.tf line 26, in data "terraform_remote_state" "global":
26: data "terraform_remote_state" "global" {
No stored state was found for the given workspace in the given backend
I have been stuck on this issue for a while.
main.tf:
data "aws_caller_identity" "current" {}

locals {
  state_buckets = {
    "amazon_account_id" = {
      bucket = "bucket_name"
      key    = "key"
      region = "region"
    }
  }
  state_bucket = local.state_buckets[data.aws_caller_identity.current.account_id]
}

data "terraform_remote_state" "global" {
  backend = "s3"
  config  = local.state_bucket
}

output "outputs" {
  description = "Current account's global Terraform module outputs"
  value       = data.terraform_remote_state.global.outputs
}
One directory above, there is another main.tf file that references the main.tf above.
main.tf:
provider "aws" {
  version             = "~> 2.0"
  region              = var.region
  allowed_account_ids = ["id"]
}

terraform {
  backend "s3" {
    bucket = "bucket_name"
    key    = "key"
    region = "region"
  }
}

module "global" {
  source = "../../modules/current_global"
}

Error message while deploy a composer resource (GCP) with terraform

I am getting an error from my Terraform code while deploying a GCP Composer resource:
google_composer_environment.composer-beta: googleapi: Error 400: Property key must be of the form section-name. The section may not contain opening square brackets, closing square brackets or hyphens, and the name may not contain a semicolon or equals sign. The entire property key may not contain periods., badRequest
The issue arises while this GCP resource is being deployed: https://www.terraform.io/docs/providers/google/r/composer_environment.html
This is my code:
Variables.tf file:
variable "composer_airflow_version" {
  type = "map"
  default = {
    image_version = "composer-1.6.1-airflow-1.10.1"
  }
}

variable "composer_python_version" {
  type = "map"
  default = {
    python_version = "3"
  }
}
my-composer.tf file:
resource "google_composer_environment" "composer-beta" {
  provider = "google-beta"
  project  = "my-proyect"
  name     = "${var.composer_name}"
  region   = "${var.region}"

  config {
    node_count = "${var.composer_node_count}"

    node_config {
      zone         = "${var.zone}"
      machine_type = "${var.composer_machine_type}"
      network      = "${google_compute_network.network.self_link}"
      subnetwork   = "${lookup(var.vpc_subnets_01[0], "subnet_name")}"
    }

    software_config {
      airflow_config_overrides = "${var.composer_airflow_version}",
      airflow_config_overrides = "${var.composer_python_version}",
    }
  }

  depends_on = [
    "google_service_account.comp-py3-dev-worker",
    "google_compute_subnetwork.subnetwork",
  ]
}
According to the error message, the root cause seems to be related to the software_config section in the Terraform code. I understand that the variables "composer_airflow_version" and "composer_python_version" should be of type "map", so I set them up in map format.
I would really appreciate it if someone could identify the cause of the error and tell me what adjustment to apply. It is likely that I need to change the variables, but I don't know how. :-(
Thanks in advance,
Jose
Based on the documentation, airflow_config_overrides, pypi_packages, env_variables, image_version, and python_version should sit directly under software_config.
Variables.tf file:
variable "composer_airflow_version" {
  default = "composer-1.6.1-airflow-1.10.1"
}

variable "composer_python_version" {
  default = "3"
}
my-composer.tf file:
resource "google_composer_environment" "composer-beta" {
  provider = "google-beta"
  project  = "my-proyect"
  name     = "${var.composer_name}"
  region   = "${var.region}"

  config {
    node_count = "${var.composer_node_count}"

    node_config {
      zone         = "${var.zone}"
      machine_type = "${var.composer_machine_type}"
      network      = "${google_compute_network.network.self_link}"
      subnetwork   = "${lookup(var.vpc_subnets_01[0], "subnet_name")}"
    }

    software_config {
      image_version  = "${var.composer_airflow_version}"
      python_version = "${var.composer_python_version}"
    }
  }

  depends_on = [
    "google_service_account.comp-py3-dev-worker",
    "google_compute_subnetwork.subnetwork",
  ]
}