How to use locals in child modules in terraform - amazon-web-services

I have a local.tf in my root module and want to use its locals in the child modules. The directory structure is as follows:
.
├── env.tfvars
├── local.tf
├── main.tf
├── modules
│   ├── alb
│   │   ├── alb.tf
│   │   ├── output.tf
│   │   └── variable.tf
│   ├── ecr
│   │   ├── ecr.tf
│   │   ├── output.tf
│   │   └── variable.tf
│   ├── ecs
│   │   ├── ecs.tf
│   │   └── variable.tf
local.tf
locals {
  customer_env = "${var.customer_name}-${var.env}"
}
I want to use these locals in the child modules.
modules/ecs/ecs.tf
resource "aws_ecs_cluster" "main" {
name = "${local.customer_env}"
tags = {
Environment = var.env
}
}
I tried it that way, but it throws an error:
Error: Reference to undeclared local value
│
│ on modules/ecs/ecs.tf line 2, in resource "aws_ecs_cluster" "main":
│ 2: name = "${local.customer_env}"
│
│ A local value with the name "customer_env" has not been declared.
╵

Child modules do not inherit variables or locals from the parent module; you have to pass them in explicitly. So, for example, for your ecs module you have to pass the local in:
module "ecs" {
source = "./modules/ecs"
customer_env = local.customer_env
}
Obviously, in the ecs module you have to declare the corresponding variable:
variable "customer_env" {}
And aws_ecs_cluster.main will then use the variable:
resource "aws_ecs_cluster" "main" {
name = var.customer_env
tags = {
Environment = var.env
}
}
You have to do it for all your modules.
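For completeness, here is a minimal sketch of how the pieces could fit together (the type/description attributes are illustrative, not from the original question). Note that var.env, which is used in the tags, also has to be declared in the child module and passed in from the root module:
# Root module: main.tf
module "ecs" {
  source       = "./modules/ecs"
  customer_env = local.customer_env
  env          = var.env
}

# Child module: modules/ecs/variable.tf
variable "customer_env" {
  type        = string
  description = "Combined <customer_name>-<env> string built in the root module"
}

variable "env" {
  type        = string
  description = "Environment name, e.g. dev or prod"
}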

Related

Chef::Exceptions::FileNotFound: template[/var/www/html/index.html]

I am new to Chef and I could not understand what the issue is. Following is my default recipe:
apt_update 'Update the apt cache daily' do
  frequency 86_400
  action :periodic
end

package 'apache2'

service 'apache2' do
  supports status: true
  action [:enable, :start]
end

template '/var/www/html/index.html' do
  source 'index.html.erb'
end
This is the error I am getting:
[2020-04-25T12:57:00+00:00] FATAL: Stacktrace dumped to /home/vagrant/.chef/local-mode-cache/cache/chef-stacktrace.out
[2020-04-25T12:57:00+00:00] FATAL: Please provide the contents of the stacktrace.out file if you file a bug report
[2020-04-25T12:57:00+00:00] FATAL: Chef::Exceptions::FileNotFound: template[/var/www/html/index.html] (learn_chef_apache2::default line 18) had an error: Chef::Exceptions::FileNotFound: Cookbook 'learn_chef_apache2' (0.1.0) does not contain a file at any of these locations:
templates/host-vagrant.vm/index.html.erb
templates/ubuntu-18.04/index.html.erb
templates/ubuntu/index.html.erb
templates/default/index.html.erb
templates/index.html.erb
And this is my cookbooks tree:
cookbooks
├── learn_chef_apache2
│   ├── Berksfile
│   ├── CHANGELOG.md
│   ├── chefignore
│   ├── LICENSE
│   ├── metadata.rb
│   ├── README.md
│   ├── recipes
│   │   └── default.rb
│   ├── spec
│   │   ├── spec_helper.rb
│   │   └── unit
│   │       └── recipes
│   │           └── default_spec.rb
│   └── test
│       └── integration
│           └── default
│               └── default_test.rb
├── learn_chef_appache2
│   └── templates
│       ├── default
│       └── index.html.erb
└── templates
    └── index.html.erb
Can someone please help me figure out what I am doing wrong? It would be great if you can share a link or explain it so I can understand.
What I did wrong was that my template was created outside learn_chef_apache2, whereas it should be inside, as follows:
cookbooks
└── learn_chef_apache2
    ├── Berksfile
    ├── CHANGELOG.md
    ├── chefignore
    ├── index.html.erb
    ├── LICENSE
    ├── metadata.rb
    ├── README.md
    ├── recipes
    │   └── default.rb
    ├── spec
    │   ├── spec_helper.rb
    │   └── unit
    │       └── recipes
    │           └── default_spec.rb
    ├── templates
    │   └── index.html.erb
    └── test
        └── integration
            └── default
                └── default_test.rb

Terragrunt use resources from other enviroment

I want to use resources, in this case the output of the vpc module, in another environment.
The goal is to reduce costs for the customer by putting the stage and dev resources in the same VPC.
Stage and dev have separate ECS clusters, ASGs, launch configurations, different Docker images in ECR, etc., but they should share the same VPC and the same load balancer, with a host-header listener rule forwarding to the specific target group.
Both should use the same database and the same load balancer.
The requirement was to have n customers, each with stage, dev and prod environments.
All customer folders should contain the three environments.
My folder structure is:
├── Terraform
│   ├── Customer1
│   ├── Customer2
│   ├── Customer3
│   ├── Customer4
│   ├── Customer5
│   ├── Global
│   │   ├── iam
│   │   │   └── terragrunt.hcl
│   ├── README.md
│   └── Customer6
│       ├── non-prod
│       │   ├── eu-central-1
│       │   │   ├── dev
│       │   │   │   ├── cloudwatch
│       │   │   │   │   └── terragrunt.hcl
│       │   │   │   ├── ec2
│       │   │   │   │   └── terragrunt.hcl
│       │   │   │   ├── ecs
│       │   │   │   │   └── terragrunt.hcl
│       │   │   │   ├── lambda
│       │   │   │   │   └── terragrunt.hcl
│       │   │   │   ├── rds
│       │   │   │   │   └── terragrunt.hcl
│       │   │   │   ├── terragrunt.hcl
│       │   │   │   ├── vars.hcl
│       │   │   │   └── vpc
│       │   │   │       └── terragrunt.hcl
│       │   │   ├── region.hcl
│       │   │   └── stage
│       │   │       ├── cloudwatch
│       │   │       │   └── terragrunt.hcl
│       │   │       ├── ec2
│       │   │       │   └── terragrunt.hcl
│       │   │       ├── ecs
│       │   │       │   └── terragrunt.hcl
│       │   │       ├── lambda
│       │   │       │   └── terragrunt.hcl
│       │   │       ├── rds
│       │   │       │   └── terragrunt.hcl
│       │   │       ├── terragrunt.hcl
│       │   │       ├── vars.hcl
│       │   │       └── vpc
│       │   │           └── terragrunt.hcl
│       │   └── terragrunt.hcl
│       └── prod
│           └── eu-central-1
│               ├── prod
│               │   ├── cloudwatch
│               │   │   └── terragrunt.hcl
│               │   ├── ec2
│               │   │   └── terragrunt.hcl
│               │   ├── ecs
│               │   │   └── terragrunt.hcl
│               │   ├── lambda
│               │   │   └── terragrunt.hcl
│               │   ├── rds
│               │   │   └── terragrunt.hcl
│               │   ├── terragrunt.hcl
│               │   ├── vars.hcl
│               │   └── vpc
│               │       └── terragrunt.hcl
│               └── region.hcl
└── Modules
    ├── cloudwatch
    │   ├── Main.tf
    │   ├── Outputs.tf
    │   └── Variables.tf
    ├── ec2
    │   ├── Main.tf
    │   ├── Outputs.tf
    │   └── Variables.tf
    ├── ecs
    │   ├── Main.tf
    │   ├── Outputs.tf
    │   └── Variables.tf
    ├── iam
    │   ├── Main.tf
    │   ├── Outputs.tf
    │   └── Variables.tf
    ├── lambda
    │   ├── Main.tf
    │   ├── Outputs.tf
    │   └── Variables.tf
    ├── rds
    │   ├── Main.tf
    │   ├── Outputs.tf
    │   └── Variables.tf
    ├── vpc
    │   ├── Main.tf
    │   ├── Outputs.tf
    │   ├── Variables.tf
    └── vpc-stage
        ├── Main.tf
        ├── Outputs.tf
        └── Variables.tf
I've read about the terraform_remote_state data source, but that works at the module layer.
For me it's not a good approach to do this at the module layer, because it's only needed for the stage environment.
Is there a way to read the output from the dev environment's remote state in the terragrunt.hcl in the stage folder, and use it as input for the ec2 module?
I've used
dependency "vpc" {
config_path = "../vpc"
}
and then
vpc_id = dependency.vpc.outputs.vpc_id
as the input for the ec2 module, but that only works when the dependency is in the same environment.
Best regards.
In the directory structure you've shown above, you have a VPC in both the dev and stage environments. It sounds like you want dev and stage to share a VPC, so the first thing to do is move that VPC directory outside of dev and stage. Put the vpc under eu-central-1; then you can use it as a dependency within both dev and stage as you desire.
Customer6
├── non-prod
│   └── eu-central-1
│       ├── dev
│       │   └── ecs
│       ├── stage
│       │   └── ecs
│       └── vpc
dependency "vpc" {
config_path = "../../vpc"
}
Refer to the Terragrunt docs on Passing outputs between modules.
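Putting it together, a rough sketch of dev/ecs/terragrunt.hcl under that layout might look like the following (the module source path and the mock value are illustrative assumptions, not taken from your repo):
# dev/ecs/terragrunt.hcl (illustrative paths)
terraform {
  source = "../../../../../../Modules//ecs"
}

dependency "vpc" {
  config_path = "../../vpc"

  # Optional: mock outputs let terragrunt plan run before the VPC has been applied.
  mock_outputs = {
    vpc_id = "vpc-00000000"
  }
}

inputs = {
  vpc_id = dependency.vpc.outputs.vpc_id
}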

AWS Lambda 502 after deploy dev

This is my first time with serverless aka AWS lambda.
Here is my directory tree, 1st and 2nd levels:
$ tree -L 2
.
├── README.md
├── Visualization_Sarit
│   ├── BA_Read0DDataPlot1DGraph.py
│   ├── CC_ReadNew2DDataPlotLineGraph.py
│   ├── ModulesInterfaceTASK.py
│   ├── Visualization.docx
│   ├── Visualization.xlsx
│   └── inHT6Ms302
├── apps
│   ├── advanced_cases
│   ├── commons
│   ├── control_params
│   ├── device_params
│   ├── heating_params
│   ├── plasma_params
│   ├── plasma_species
│   ├── results
│   ├── scenarios
│   └── transport_params
├── experiments.py
├── f1
│   ├── README.md
│   ├── package-lock.json
│   ├── package.json
│   ├── public
│   └── src
├── ht6m
│   ├── __init__.py
│   ├── __pycache__
│   ├── api_urls.py
│   ├── celery.py
│   ├── db_routers.py
│   ├── settings.py
│   ├── urls.py
│   └── wsgi.py
├── manage.py
├── media
├── requirements.in
├── requirements.txt
├── static
├── static_files
├── templates
│   ├── __init__.py
│   └── hello.html
└── zappa_settings.json
Here are my settings:
settings.py
DEBUG = True
ALLOWED_HOSTS = ['*']
# Application definition
INSTALLED_APPS = [
    'django_celery_results',
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'django_extensions',
    'django_rq',
    'django_filters',
    'corsheaders',
    'rest_framework',
    'apps.advanced_cases',
    'apps.commons',
    'apps.control_params',
    'apps.device_params',
    'apps.heating_params',
    'apps.plasma_params',
    'apps.plasma_species',
    'apps.results',
    'apps.scenarios',
    'apps.transport_params',
]

MIDDLEWARE = [
    'corsheaders.middleware.CorsMiddleware',
    'django.middleware.security.SecurityMiddleware',
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.common.CommonMiddleware',
    'django.middleware.csrf.CsrfViewMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'django.contrib.messages.middleware.MessageMiddleware',
    'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
zappa_settings.json
{
    "dev": {
        "django_settings": "ht6m.settings",
        "profile_name": "aegon",
        "project_name": "ht6minterface",
        "runtime": "python3.7",
        "s3_bucket": "zappa-20j5uvs5p",
        "aws_region": "ap-southeast-1"
    }
}
Deployment went smoothly, but then it suddenly raises a 502:
$ zappa deploy dev
Calling deploy for stage dev..
Creating ht6minterface-dev-ZappaLambdaExecutionRole IAM Role..
Creating zappa-permissions policy on ht6minterface-dev-ZappaLambdaExecutionRole IAM Role.
Downloading and installing dependencies..
- psycopg2-binary==2.8.2: Downloading
100%|█████████████████████████████████████████████████████████████████████████████████████| 2.94M/2.94M [00:01<00:00, 1.95MB/s]
- greenlet==0.4.15: Downloading
100%|██████████████████████████████████████████████████████████████████████████████████████| 42.4K/42.4K [00:00<00:00, 492KB/s]
- gevent==1.4.0: Downloading
100%|█████████████████████████████████████████████████████████████████████████████████████| 5.44M/5.44M [00:01<00:00, 5.01MB/s]
- cffi==1.12.3: Downloading
100%|███████████████████████████████████████████████████████████████████████████████████████| 431K/431K [00:00<00:00, 1.48MB/s]
- sqlite==python3: Using precompiled lambda package
'python3.7'
Packaging project as zip.
Uploading ht6minterface-dev-1559645572.zip (42.2MiB)..
100%|█████████████████████████████████████████████████████████████████████████████████████| 44.2M/44.2M [00:22<00:00, 1.96MB/s]
Scheduling..
Scheduled ht6minterface-dev-zappa-keep-warm-handler.keep_warm_callback with expression rate(4 minutes)!
Uploading ht6minterface-dev-template-1559645689.json (1.6KiB)..
100%|█████████████████████████████████████████████████████████████████████████████████████| 1.65K/1.65K [00:00<00:00, 17.7KB/s]
Waiting for stack ht6minterface-dev to create (this can take a bit)..
100%|███████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:16<00:00, 5.44s/res]
Deploying API Gateway..
Error: Warning! Status check on the deployed lambda failed. A GET request to '/' yielded a 502 response code.
I don't see any error in the zappa tail
$ zappa tail
Calling tail for stage dev..
[1559645713491] Instancing..
  File "<frozen importlib._bootstrap>", line 965, in _find_and_load_unlockededdule
[1559645730278] [DEBUG] 2019-06-04T10:55:30.278Z 466c2217-3fd7-4a2e-b97c-732866c5df54 Zappa Event: {'time': '2019-06-04T10:54:50Z', 'detail-type': 'Scheduled Event', 'source': 'aws.events', 'account': '530403937392', 'region': 'ap-southeast-1', 'detail': {}, 'version': '0', 'resources': ['arn:aws:events:ap-southeast-1:530403937392:rule/ht6minterface-dev-zappa-keep-warm-handler.keep_warm_callback'], 'id': 'c2ccca98-e69a-2f19-c398-4d123d24956c', 'kwargs': {}}
[1559645730279] [DEBUG] 2019-06-04T10:55:30.279Z 466c2217-3fd7-4a2e-b97c-732866c5df54 Zappa Event: {}
[1559645731280]
[1559645970227] [DEBUG] 2019-06-04T10:59:30.227Z ad604f41-5692-47b0-8e49-894f52afc56f Zappa Event: {'time': '2019-06-04T10:58:50Z', 'detail-type': 'Scheduled Event', 'source': 'aws.events', 'account': '530403937392', 'region': 'ap-southeast-1', 'detail': {}, 'version': '0', 'resources': ['arn:aws:events:ap-southeast-1:530403937392:rule/ht6minterface-dev-zappa-keep-warm-handler.keep_warm_callback'], 'id': '166fb665-ec39-526e-6ccb-47f1d8f083d3', 'kwargs': {}}
[1559645970227] [DEBUG] 2019-06-04T10:59:30.227Z ad604f41-5692-47b0-8e49-894f52afc56f Zappa Event: {}
[1559645971229]
Where am I wrong?
You're getting a 502 because, once deployed behind API Gateway, API Gateway requires a very specific response from the Lambda; otherwise it throws a "Malformed Lambda proxy response", aka a 502.
As mentioned in the documentation, you should make sure your Lambda returns a response with the following structure (it must be JSON):
{
    "isBase64Encoded": true|false,
    "statusCode": httpStatusCode,
    "headers": { "headerName": "headerValue", ... },
    "body": "..."
}
Move the apps according to jonvaughan's answer.
Another reference
Here is my working directory structure:
$ tree -L 2
.
├── README.md
├── Visualization_Sarit
│   ├── BA_Read0DDataPlot1DGraph.py
│   ├── CC_ReadNew2DDataPlotLineGraph.py
│   ├── ModulesInterfaceTASK.py
│   ├── Visualization.docx
│   ├── Visualization.xlsx
│   └── inHT6Ms302
├── __init__.py
├── __pycache__
│   ├── api_urls.cpython-36.pyc
│   ├── celery_ht6m.cpython-36.pyc
│   ├── db_routers.cpython-36.pyc
│   ├── settings.cpython-36.pyc
│   ├── urls.cpython-36.pyc
│   └── wsgi.cpython-36.pyc
├── api_urls.py
├── apps
│   ├── __init__.py
│   ├── __pycache__
│   ├── advanced_cases
│   ├── commons
│   ├── control_params
│   ├── device_params
│   ├── heating_params
│   ├── plasma_params
│   ├── plasma_species
│   ├── results
│   ├── scenarios
│   └── transport_params
├── celery_ht6m.py
├── db_routers.py
├── experiments.py
├── f1
│   ├── README.md
│   ├── package-lock.json
│   ├── package.json
│   ├── public
│   └── src
├── manage.py
├── media
├── requirements.in
├── requirements.txt
├── settings.py
├── static
├── static_files
├── templates
│   ├── __init__.py
│   └── hello.html
├── urls.py
├── wsgi.py
└── zappa_settings.json

Terraform: How to make sure I run terraform on the expected AWS account

Suppose I want to launch an EC2 instance in my dev account, but it is possible that I accidentally run the wrong command and create temporary credentials for the prod account instead of the dev account. Then when I run terraform apply, I will launch the EC2 instance in the prod account.
How can I prevent this from happening? Can I create a text file with the dev account ID in this folder, then have Terraform compare the account ID of my temporary credentials with the account ID in this file before launching the EC2 instance, maybe in a null_resource? I cannot figure out how to implement that.
The AWS provider allows you to specify either a list of allowed_account_ids or a list of forbidden_account_ids that you could define to prevent that from happening if necessary.
So you might have a folder structure that looks a little like this:
$ tree -a
.
├── dev
│   ├── bar-app
│   │   ├── dev-eu-west-1.tf -> ../../providers/dev-eu-west-1.tf
│   │   └── main.tf
│   ├── foo-app
│   │   ├── dev-eu-west-1.tf -> ../../providers/dev-eu-west-1.tf
│   │   └── main.tf
│   └── vpc
│       ├── dev-eu-west-1.tf -> ../../providers/dev-eu-west-1.tf
│       └── main.tf
├── prod
│   ├── bar-app
│   │   ├── main.tf
│   │   └── prod-eu-west-1.tf -> ../../providers/prod-eu-west-1.tf
│   ├── foo-app
│   │   ├── main.tf
│   │   └── prod-eu-west-1.tf -> ../../providers/prod-eu-west-1.tf
│   └── vpc
│       ├── main.tf
│       └── prod-eu-west-1.tf -> ../../providers/prod-eu-west-1.tf
├── providers
│   ├── dev-eu-west-1.tf
│   ├── prod-eu-west-1.tf
│   └── test-eu-west-1.tf
└── test
    ├── bar-app
    │   ├── main.tf
    │   └── test-eu-west-1.tf -> ../../providers/test-eu-west-1.tf
    ├── foo-app
    │   ├── main.tf
    │   └── test-eu-west-1.tf -> ../../providers/test-eu-west-1.tf
    └── vpc
        ├── main.tf
        └── test-eu-west-1.tf -> ../../providers/test-eu-west-1.tf
Where your providers/dev-eu-west-1.tf file looks like:
provider "aws" {
region = "eu-west-1"
allowed_account_ids = [
"1234567890",
]
}
And your providers/test-eu-west-1.tf file looks like:
provider "aws" {
region = "eu-west-1"
allowed_account_ids = [
"5678901234",
]
}
This means that you can only run Terraform against dev/foo-app when you are using credentials belonging to the 1234567890 account, and can only run Terraform against test/foo-app when you are using credentials belonging to the 5678901234 account.
Store your Terraform state in an S3 bucket in that account. Make sure the buckets are named uniquely (bucket names have to be globally unique anyway). If you run Terraform against the wrong account, it will error out because the state bucket cannot be found.
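For example, a per-account backend configuration might look something like this (bucket and key names are illustrative); with credentials for the wrong account, Terraform fails during terraform init when it cannot reach the state bucket:
# dev/foo-app/backend.tf (illustrative names)
terraform {
  backend "s3" {
    bucket = "mycompany-terraform-state-dev"  # exists only in the dev account
    key    = "foo-app/terraform.tfstate"
    region = "eu-west-1"
  }
}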

AWS Glue Crawler: want separate table for folder in s3

My s3 file structure is:
├── bucket
│   ├── customer_1
│   │   ├── year=2016
│   │   ├── year=2017
│   │   │   ├── month=11
│   │   │   │   ├── sometype-2017-11-01.parquet
│   │   │   │   ├── sometype-2017-11-02.parquet
│   │   │   │   ├── ...
│   │   │   ├── month=12
│   │   │   │   ├── sometype-2017-12-01.parquet
│   │   │   │   ├── sometype-2017-12-02.parquet
│   │   │   │   ├── ...
│   │   ├── year=2018
│   │   │   ├── month=01
│   │   │   │   ├── sometype-2018-01-01.parquet
│   │   │   │   ├── sometype-2018-01-02.parquet
│   │   │   │   ├── ...
│   ├── customer_2
│   │   ├── year=2017
│   │   │   ├── month=11
│   │   │   │   ├── moretype-2017-11-01.parquet
│   │   │   │   ├── moretype-2017-11-02.parquet
│   │   │   │   ├── ...
│   │   ├── year=...
I want to create separate tables for customer_1 and customer_2 with an AWS Glue crawler. It works if I specify the paths s3://bucket/customer_1 and s3://bucket/customer_2.
I've tried s3://bucket/customer_* and s3://bucket/*; neither works, and no tables are created in the Glue catalog.
I faced this issue myself recently. AWS Glue crawlers have the option Grouping behavior for S3 data. If the checkbox is not selected, the crawler will try to combine schemas. By selecting the checkbox you can ensure that multiple, separate tables are created.
The table level should be the depth from the root of the bucket, from where you want separate tables.
In your case the depth would be 2.
More here
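If you manage the crawler with Terraform rather than through the console, a rough sketch of the same idea (resource and role names are illustrative assumptions) is to pass the table level through the crawler's configuration JSON:
# Illustrative sketch: one crawler pointed at the bucket root, creating tables
# at depth 2 (bucket/customer_x) instead of merging similar schemas.
resource "aws_glue_crawler" "customers" {
  name          = "customer-parquet-crawler"                # illustrative name
  role          = aws_iam_role.glue.arn                     # assumes an existing Glue service role
  database_name = aws_glue_catalog_database.customers.name  # assumes an existing catalog database

  s3_target {
    path = "s3://bucket/"
  }

  configuration = <<EOF
{
  "Version": 1.0,
  "Grouping": {
    "TableLevelConfiguration": 2
  }
}
EOF
}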
Glue's natural tendency is to add similar schemas (when pointed at the parent folder) to the same table when there is more than about a 70% match (assuming, in your case, that customer_1 and customer_2 have the same schema). Keeping them in individual folders might create respective partitions based on the folder names.