Host Headers implementation with shared ALB and Elastic Beanstalk - amazon-web-services

I'm trying to modify the default Host Header CNAME attached to the rule when using a shared ALB with an Elastic Beanstalk configuration. I'm using Terraform; here's what my configuration looks like:
{
namespace = "aws:elasticbeanstalk:environment"
name = "LoadBalancerIsShared"
value = "true"
},
{
namespace = "aws:elbv2:listener:443"
name = "Rules"
value = "default"
},
{
namespace = "aws:elbv2:loadbalancer"
name = "SharedLoadBalancer"
value = data.aws_lb.default.arn
},
{
namespace = "aws:elasticbeanstalk:environment:process:default"
name = "DeregistrationDelay"
value = "20"
},
{
namespace = "aws:elasticbeanstalk:environment:process:default"
name = "HealthCheckInterval"
value = "15"
},
{
namespace = "aws:elasticbeanstalk:environment:process:default"
name = "HealthCheckTimeout"
value = "5"
},
{
namespace = "aws:elasticbeanstalk:environment:process:default"
name = "HealthyThresholdCount"
value = "3"
},
{
namespace = "aws:elasticbeanstalk:environment:process:default"
name = "Port"
value = "80"
},
{
namespace = "aws:elasticbeanstalk:environment:process:default"
name = "Protocol"
value = "HTTP"
},
{
namespace = "aws:elasticbeanstalk:environment:process:default"
name = "StickinessEnabled"
value = "false"
},
{
namespace = "aws:elasticbeanstalk:environment:process:default"
name = "StickinessLBCookieDuration"
value = "86400"
},
{
namespace = "aws:elasticbeanstalk:environment:process:default"
name = "StickinessType"
value = "lb_cookie"
},
{
namespace = "aws:elasticbeanstalk:environment:process:default"
name = "UnhealthyThresholdCount"
value = "5"
},
{
namespace = "aws:elbv2:listenerrule:myrule"
name = "HostHeaders"
value = "my.example.com"
},
{
namespace = "aws:elbv2:listenerrule:myrule"
name = "Process"
value = "default"
}
Based on this AWS documentation, it should just work, but somehow the Host Header attached to the shared ALB always ends up using the default region.elasticbeanstalk.com CNAME.
Thanks again for your help!

What you're seeing on your load balancer is the default rule, and that's because you didn't include your custom rule in the listener's Rules setting.
Try this:
...
{
namespace = "aws:elbv2:listener:443"
name = "Rules"
value = "default,myrule"
},
...
Let me know if that helps.
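For completeness, here's a minimal sketch of how a settings list like the one above is commonly fed into the environment resource through a dynamic block, with the custom rule registered under the listener's Rules option (the local and resource names here are assumptions for illustration, not taken from your configuration):

# Hypothetical wiring: local.eb_settings is the list of setting maps shown in
# the question, including the aws:elbv2:listenerrule:myrule entries and
# Rules = "default,myrule" on the aws:elbv2:listener:443 namespace.
resource "aws_elastic_beanstalk_environment" "this" {
  name                = "my-env"
  application         = aws_elastic_beanstalk_application.this.name
  solution_stack_name = var.solution_stack_name

  dynamic "setting" {
    for_each = local.eb_settings
    content {
      namespace = setting.value.namespace
      name      = setting.value.name
      value     = setting.value.value
    }
  }
}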

Related

How to add user_data commands in elastic beanstalk created with terraform

With this script I create the EB environment:
resource "aws_elastic_beanstalk_environment" "beanstalkappenv" {
name = var.beanstalkappenv
application = aws_elastic_beanstalk_application.elasticapp.name
solution_stack_name = var.solution_stack_name
tier = var.tier
setting {
namespace = "aws:ec2:vpc"
name = "VPCId"
value = "${aws_vpc.prod-vpc.id}"
}
setting {
namespace = "aws:ec2:vpc"
name = "Subnets"
value = "${aws_subnet.prod-subnet-public-1.id},${aws_subnet.prod-subnet-public-2.id}"
}
setting {
namespace = "aws:elasticbeanstalk:environment:process:default"
name = "MatcherHTTPCode"
value = "200"
}
setting {
namespace = "aws:autoscaling:launchconfiguration"
name = "InstanceType"
value = "t2.micro"
}
setting {
namespace = "aws:ec2:vpc"
name = "ELBScheme"
value = "internet facing"
}
setting {
namespace = "aws:autoscaling:asg"
name = "MinSize"
value = 1
}
setting {
namespace = "aws:autoscaling:asg"
name = "MaxSize"
value = 2
}
setting {
namespace = "aws:elasticbeanstalk:healthreporting:system"
name = "SystemType"
value = "enhanced"
}
setting {
namespace = "aws:autoscaling:launchconfiguration"
name = "IamInstanceProfile"
value = "aws-elasticbeanstalk-ec2-role"
}
setting {
namespace = "aws:rds:dbinstance"
name = "DBAllocatedStorage"
value = "10"
}
setting {
namespace = "aws:rds:dbinstance"
name = "DBDeletionPolicy"
value = "Delete"
}
setting {
namespace = "aws:rds:dbinstance"
name = "HasCoupledDatabase"
value = "true"
}
setting {
namespace = "aws:rds:dbinstance"
name = "DBEngine"
value = "mysql"
}
setting {
namespace = "aws:rds:dbinstance"
name = "DBEngineVersion"
value = "8.0.28"
}
setting {
namespace = "aws:rds:dbinstance"
name = "DBInstanceClass"
value = "db.t3.micro"
}
setting {
namespace = "aws:rds:dbinstance"
name = "DBPassword"
value = "solvee-pos-unbreakable"
}
setting {
namespace = "aws:rds:dbinstance"
name = "DBUser"
value = "admin"
}
}
At launch I need to initialize the RDS db, so I need to cd into the app directory, activate the virtual environment, enter the Python shell, and run the db.create_all() command. Like this:
#!/bin/bash
cd /var/app/current
virtualenv env
source env/bin/activate
pip install -r requirements.txt
# python3 on its own would just open an interactive shell, so feed it the commands
python3 <<'EOF'
from application import db
db.create_all()
EOF
When creating an EC2 resource it would look like this:
resource "aws_instance" "my-instance" {
ami = "ami-04169656fea786776"
instance_type = "t2.nano"
key_name = "${aws_key_pair.terraform-demo.key_name}"
user_data = "${file("initialize_db.sh")}"
tags = {
Name = "Terraform"
Batch = "5AM"
}
}
but since the EC2 instances are created inside EB, I can't do it this way.
So how can I do it?
Sorry for spamming with tf questions.
In EB you are not using user_data. Instead, all your initialization code should be provided through .ebextensions or platform hooks.
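For example, on Amazon Linux 2 platforms you can ship a post-deploy platform hook in your source bundle to run one-off initialization. A rough sketch, assuming a Flask-style app whose application.py exposes db and the standard AL2 Python platform layout (verify the paths on your platform; the hook file must be executable):

#!/bin/bash
# .platform/hooks/postdeploy/01_create_tables.sh
# Runs after each deployment. The Python platform has already installed
# requirements.txt into its own virtualenv, so only the one-off step remains.
set -euo pipefail
cd /var/app/current
source /var/app/venv/*/bin/activate
python3 -c "from application import db; db.create_all()"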

Codepipeline Error using Environment Variables with Terraform

So I am running into an error with AWS CodePipeline:
Error: Error creating CodePipeline: ValidationException:
ActionConfiguration Map value must satisfy constraint: [Member must
have length less than or equal to 1000, Member must have a length
greater than or equal to 1]
Googling it tells me that I have too many pipeline environment variables and that there is a limit of 1000 characters. I am not sure what that means: does it mean that each environment variable value cannot exceed 1000 characters, or that the JSON that makes up the environment variables as a whole can't exceed 1000 characters?
Appreciate the help here.
Terraform code as requested:
resource "aws_codepipeline" "cp_plan_pipeline" {
name = "${local.cp_name}-cp"
role_arn = aws_iam_role.cp_service_role.arn
artifact_store {
type = var.cp_artifact_type
location = module.S3.bucket_name
}
stage {
name = "Initialize"
action {
run_order = 1
name = "Source"
category = "Source"
owner = "AWS"
provider = "CodeCommit"
version = "1"
input_artifacts = []
output_artifacts = ["CodeWorkspace"]
configuration = {
RepositoryName = var.cp_repo_name
BranchName = var.cp_branch_name
PollForSourceChanges = var.cp_poll_sources
OutputArtifactFormat = var.cp_ouput_format
}
}
}
stage {
name = "Build"
action {
run_order = 1
name = "Combine_Binaries"
category = "Build"
owner = "AWS"
provider = "CodeBuild"
version = "1"
namespace = "BINARYVARIABLE"
input_artifacts = ["CodeWorkspace"]
output_artifacts = ["CodeSource"]
configuration = {
ProjectName = var.cp_binary_project_name
EnvironmentVariables = jsonencode([
{
name = "PIPELINE_EXECUTION_ID"
type = "PLAINTEXT"
value = "#{codepipeline.PipelineExecutionId}"
},
{
name = "PL_BUCKET_KEY"
type = "PLAINTEXT"
value = "global/state/${var.bucketlocation}/"
},
{
name = "PL_DYNAMODB_TABLE_NAME"
type = "PLAINTEXT"
value = "${var.project}-${var.env}-${var.tenant}-db-${var.bucketlocation}"
},
{
name = "PL_JQ_VERSION"
type = "PLAINTEXT"
value = var.JQ_VER
},
{
name = "PL_PY_VERSION"
type = "PLAINTEXT"
value = var.PY_VER
},
{
name = "PL_GO_VERSION"
type = "PLAINTEXT"
value = var.TF_VER
},
{
name = "PL_TF_VERSION"
type = "PLAINTEXT"
value = var.TF_VER
},
{
name = "PL_GROUP_NAME"
type = "PLAINTEXT"
value = var.group_name
},
{
name = "PL_GROUP_EMAIL"
type = "PLAINTEXT"
value = var.group_email
},
{
name = "PL_PROJECT"
type = "PLAINTEXT"
value = var.project
},
{
name = "PL_TENANT"
type = "PLAINTEXT"
value = var.tenant
},
{
name = "PL_APPENV"
type = "PLAINTEXT"
value = ""
},
{
name = "PL_AWSACCOUNTNAME"
type = "PLAINTEXT"
value = ""
},
{
name = "PL_AWSACCOUNTNUMB"
type = "PLAINTEXT"
value = ""
},
{
name = "PL_PERMISSION_SETS_DIR"
type = "PLAINTEXT"
value = ""
},
])
}
}
}
stage {
name = "Code_Validation"
action {
run_order = 1
name = "Build_Lint_Py"
category = "Build"
owner = "AWS"
provider = "CodeBuild"
version = "1"
input_artifacts = ["CodeSource"]
output_artifacts = ["pyReport"]
configuration = {
ProjectName = var.cp_lintpy_project_name
EnvironmentVariables = jsonencode([
{
name = "PIPELINE_EXECUTION_ID"
type = "PLAINTEXT"
value = "#{codepipeline.PipelineExecutionId}"
},
{
name = "PL_PY_VERSION"
type = "PLAINTEXT"
value = "#{BINARYVARIABLE.PL_PY_VERSION}"
},
{
name = "PL_PERMISSION_SETS_DIR"
type = "PLAINTEXT"
value = "#{BINARYVARIABLE.PL_PERMISSION_SETS_DIR}"
},
])
}
}
action {
run_order = 1
name = "Build_TF_Plan"
category = "Build"
owner = "AWS"
provider = "CodeBuild"
version = "1"
input_artifacts = ["CodeSource"]
output_artifacts = ["buildPlan"]
configuration = {
ProjectName = var.cp_build_tf_validate
#PrimarySource = "CodeSource"
EnvironmentVariables = jsonencode([
{
name = "PIPELINE_EXECUTION_ID"
type = "PLAINTEXT"
value = "#{codepipeline.PipelineExecutionId}"
},
{
name = "PL_APP_NAME"
type = "PLAINTEXT"
value = var.bucketlocation
},
{
name = "PL_BUCKET_KEY"
type = "PLAINTEXT"
value = "#{BINARYVARIABLE.PL_BUCKET_KEY}"
},
{
name = "PL_DYNAMODB_TABLE_NAME"
type = "PLAINTEXT"
value = "#{BINARYVARIABLE.PL_DYNAMODB_TABLE_NAME}"
},
{
name = "PL_JQ_VERSION"
type = "PLAINTEXT"
value = "#{BINARYVARIABLE.PL_JQ_VERSION}"
},
{
name = "PL_PY_VERSION"
type = "PLAINTEXT"
value = "#{BINARYVARIABLE.PL_PY_VERSION}"
},
{
name = "PL_TF_VERSION"
type = "PLAINTEXT"
value = "#{BINARYVARIABLE.PL_TF_VERSION}"
},
{
name = "PL_GROUP_NAME"
type = "PLAINTEXT"
value = "#{BINARYVARIABLE.PL_GROUP_NAME}"
},
{
name = "PL_GROUP_EMAIL"
type = "PLAINTEXT"
value = "#{BINARYVARIABLE.PL_GROUP_EMAIL}"
},
{
name = "PL_PROJECT"
type = "PLAINTEXT"
value = "#{BINARYVARIABLE.PL_PROJECT}"
},
{
name = "PL_TENANT"
type = "PLAINTEXT"
value = "#{BINARYVARIABLE.PL_TENANT}"
},
{
name = "PL_APPENV"
type = "PLAINTEXT"
value = "#{BINARYVARIABLE.PL_APPENV}"
},
{
name = "PL_AWSACCOUNTNUMB"
type = "PLAINTEXT"
value = "#{BINARYVARIABLE.PL_AWSACCOUNTNUMB}"
},
{
name = "PL_PERMISSION_SETS_DIR"
type = "PLAINTEXT"
value = "#{BINARYVARIABLE.PL_PERMISSION_SETS_DIR}"
},
])
}
}
action {
run_order = 1
name = "Build_Lint_TF"
category = "Build"
owner = "AWS"
provider = "CodeBuild"
version = "1"
input_artifacts = ["CodeSource"]
output_artifacts = ["tfReport"]
configuration = {
ProjectName = var.cp_linttf_project_name
#PrimarySource = "CodeSource"
EnvironmentVariables = jsonencode([
{
name = "PIPELINE_EXECUTION_ID"
type = "PLAINTEXT"
value = "#{codepipeline.PipelineExecutionId}"
},
{
name = "PL_BUCKET_KEY"
type = "PLAINTEXT"
value = "#{BINARYVARIABLE.PL_BUCKET_KEY}"
},
{
name = "PL_DYNAMODB_TABLE_NAME"
type = "PLAINTEXT"
value = "#{BINARYVARIABLE.PL_DYNAMODB_TABLE_NAME}"
},
{
name = "PL_TF_VERSION"
type = "PLAINTEXT"
value = var.TF_VER
},
{
name = "PL_TF_LINT_VERSION"
type = "PLAINTEXT"
value = "#{BINARYVARIABLE.PL_TF_LINT_VERSION}"
},
{
name = "PL_PERMISSION_SETS_DIR"
type = "PLAINTEXT"
value = "#{BINARYVARIABLE.PL_PERMISSION_SETS_DIR}"
},
])
}
}
}
stage {
name = "Test"
action {
run_order = 1
name = "Static_Analysis_Py"
category = "Test"
owner = "AWS"
provider = "CodeBuild"
version = "1"
input_artifacts = ["CodeSource"]
output_artifacts = ["pySecReport"]
configuration = {
ProjectName = var.cp_test_static_py
PrimarySource = "CodeSource"
EnvironmentVariables = jsonencode([
{
name = "PIPELINE_EXECUTION_ID"
type = "PLAINTEXT"
value = "#{codepipeline.PipelineExecutionId}"
},
{
name = "PL_JQ_VERSION"
type = "PLAINTEXT"
value = "#{BINARYVARIABLE.PL_JQ_VERSION}"
},
{
name = "PL_PY_VERSION"
type = "PLAINTEXT"
value = "#{BINARYVARIABLE.PL_PY_VERSION}"
},
{
name = "PL_PERMISSION_SETS_DIR"
type = "PLAINTEXT"
value = "#{BINARYVARIABLE.PL_PERMISSION_SETS_DIR}"
},
])
}
}
action {
run_order = 1
name = "Static_Analysis_TFSec"
category = "Test"
owner = "AWS"
provider = "CodeBuild"
version = "1"
namespace = "TESTVARIABLE"
input_artifacts = ["CodeSource"]
output_artifacts = ["tfSecReport"]
configuration = {
ProjectName = var.cp_test_static_tf
#PrimarySource = "CodeSource"
EnvironmentVariables = jsonencode([
{
name = "PIPELINE_EXECUTION_ID"
type = "PLAINTEXT"
value = "#{codepipeline.PipelineExecutionId}"
},
{
name = "PL_JQ_VERSION"
type = "PLAINTEXT"
value = "#{BINARYVARIABLE.PL_JQ_VERSION}"
},
{
name = "PL_TFSEC_VERSION"
type = "PLAINTEXT"
value = "#{BINARYVARIABLE.PL_TFSEC_VERSION}"
},
{
name = "PL_PERMISSION_SETS_DIR"
type = "PLAINTEXT"
value = "#{BINARYVARIABLE.PL_PERMISSION_SETS_DIR}"
},
#{
# name = "PL_ARTIFACTBUCKET"
# type = "PLAINTEXT"
# value = "${var.project}-${var.env}-${var.tenant}-${var.cp_name}-cp-artifacts"
#},
#{
# name = "PL_TFSECAPPROVALLINK"
# type = "PLAINTEXT"
# value = ""
#},
])
}
}
}
stage {
name = "Manual_Approval_Action"
action {
run_order = 1
name = "Manual_Review_Action-${var.project}-${var.env}-${var.tenant}-${var.cp_name}"
category = "Approval"
owner = "AWS"
provider = "Manual"
version = "1"
input_artifacts = []
output_artifacts = []
configuration = {
NotificationArn = module.sns_cp.op_sns_topic_arn
CustomData = "Please review the static code analysis and the repository before code is deployed."
}
}
}
stage {
name = "Deploy"
action {
run_order = 1
name = "Terraform-Apply"
category = "Build"
owner = "AWS"
provider = "CodeBuild"
input_artifacts = ["CodeSource","buildPlan"]
output_artifacts = []
version = "1"
configuration = {
ProjectName = var.cp_apply_project_name
PrimarySource = "CodeSource"
EnvironmentVariables = jsonencode([
{
name = "PIPELINE_EXECUTION_ID"
value = "#{codepipeline.PipelineExecutionId}"
type = "PLAINTEXT"
},
{
name = "PL_PERMISSION_SETS_DIR"
type = "PLAINTEXT"
value = "#{BINARYVARIABLE.PL_PERMISSION_SETS_DIR}"
},
{
name = "PL_BUCKET_KEY"
type = "PLAINTEXT"
value = "#{BINARYVARIABLE.PL_BUCKET_KEY}"
},
{
name = "PL_DYNAMODB_TABLE_NAME"
type = "PLAINTEXT"
value = "#{BINARYVARIABLE.PL_DYNAMODB_TABLE_NAME}"
},
{
name = "PL_TF_VERSION"
type = "PLAINTEXT"
value = "#{BINARYVARIABLE.PL_TF_VERSION}"
},
{
name = "PL_GROUP_NAME"
type = "PLAINTEXT"
value = "#{BINARYVARIABLE.PL_GROUP_NAME}"
},
{
name = "PL_GROUP_EMAIL"
type = "PLAINTEXT"
value = "#{BINARYVARIABLE.PL_GROUP_EMAIL}"
},
{
name = "PL_PROJECT"
type = "PLAINTEXT"
value = "#{BINARYVARIABLE.PL_PROJECT}"
},
{
name = "PL_TENANT"
type = "PLAINTEXT"
value = "#{BINARYVARIABLE.PL_TENANT}"
},
{
name = "PL_APPENV"
type = "PLAINTEXT"
value = "#{BINARYVARIABLE.PL_APPENV}"
},
])
}
}
}
}
Okay, after days of looking into this, my colleague (who gets all the credit) figured out what the 1000-character limit is. Keep in mind this is 1000 characters per stage. So, without confirmation from HashiCorp, what we came up with is the following:
If you want to open the state file in a text editor, make sure you are viewing the file and not modifying it. Inside the state file, search for "EnvironmentVariables". You will find JSON like the example output shown below.
"EnvironmentVariables": "[{\"name\":\"PIPELINE_EXECUTION_ID\",\"type\":\"PLAINTEXT\",\"value\":\"#{codepipeline.PipelineExecutionId}\"},{\"name\":\"PL_APP_NAME\",\"type\":\"PLAINTEXT\",\"value\":\"deploy_pl\"},{\"name\":\"PL_BUCKET_KEY\",\"type\":\"PLAINTEXT\",\"value\":\"#{BIN.PL_BUCKET_KEY}\"},{\"name\":\"PL_DYNAMODB_TABLE_NAME\",\"type\":\"PLAINTEXT\",\"value\":\"#{BIN.PL_DYNAMODB_TABLE_NAME}\"},{\"name\":\"PL_GROUP_NAME\",\"type\":\"PLAINTEXT\",\"value\":\"#{BIN.PL_GROUP_NAME}\"},{\"name\":\"PL_GROUP_EMAIL\",\"type\":\"PLAINTEXT\",\"value\":\"#{BIN.PL_GROUP_EMAIL}\"},{\"name\":\"PL_PROJECT\",\"type\":\"PLAINTEXT\",\"value\":\"#{BIN.PL_PROJECT}\"},{\"name\":\"PL_TENANT\",\"type\":\"PLAINTEXT\",\"value\":\"#{BIN.PL_TENANT}\"},{\"name\":\"PL_APPENV\",\"type\":\"PLAINTEXT\",\"value\":\"#{BIN.PL_APPENV}\"},{\"name\":\"PL_ACCT_NUMB\",\"type\":\"PLAINTEXT\",\"value\":\"#{BIN.PL_ACCT_NUMB}\"},{\"name\":\"PL_PERMISSION_SETS_DIR\",\"type\":\"PLAINTEXT\",\"value\":\"#{BIN.PL_PERMISSION_SETS_DIR}\"},{\"name\":\"PL_IS_MGMT_ACCT\",\"type\":\"PLAINTEXT\",\"value\":\"#{BIN.PL_IS_MGMT_ACCT}\"}]",
If you remove the "EnvironmentVariables": key and the escape characters (\"), that gives you a count of the characters within the environment variables section. This allowed me to rename and refactor my variables accurately.
So my advice going forward:
keep namespaces to four or fewer characters
keep variables short to save space
only use variables in the stages where they are actually needed
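If you want to check how close an action's EnvironmentVariables is to that limit before applying, one option is to build the list in a local and measure its encoded length (a sketch; the local and output names below are made up for illustration):

locals {
  # Reference the same list you pass to jsonencode() in the action's
  # EnvironmentVariables so the count matches what lands in the state file.
  build_env_vars = [
    { name = "PIPELINE_EXECUTION_ID", type = "PLAINTEXT", value = "#{codepipeline.PipelineExecutionId}" },
    { name = "PL_TF_VERSION",         type = "PLAINTEXT", value = var.TF_VER },
  ]
}

# Character count of the JSON that ends up in the ActionConfiguration value.
output "build_env_vars_chars" {
  value = length(jsonencode(local.build_env_vars))
}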

Terraform aws_elastic_beanstalk_environment "Error: Missing required argument"

Background
I have a Terraform script that creates several different AWS resources and links them together. One component is the aws_elastic_beanstalk_environment. It has the required parameters, and lots of settings for configuration. The beginning of the file is thus:
data "aws_elastic_beanstalk_application" "myapp" {
name = "beanstalkapp"
}
resource "aws_elastic_beanstalk_environment" "beanstalk" {
name = "beanstalk-environment"
application = data.aws_elastic_beanstalk_application.myapp.name
solution_stack_name = "64bit Amazon Linux 2 v5.2.5 running Node.js 12"
setting {
namespace = "aws:ec2:instances"
name = "InstanceTypes"
value = "t2.micro"
}
setting {
namespace = "aws:elasticbeanstalk:environment"
name = "EnvironmentType"
value = "LoadBalanced"
}
setting {
namespace = "aws:elasticbeanstalk:environment"
name = "LoadBalancerType"
value = "application"
}
setting {
namespace = "aws:elasticbeanstalk:environment"
name = "ServiceRole"
value = "aws-elasticbeanstalk-service-role"
}
setting {
namespace = "aws:autoscaling:launchconfiguration"
name = "IamInstanceProfile"
value = "aws-elasticbeanstalk-ec2-role"
}
setting {
namespace = "aws:ec2:vpc"
name = "VPCId"
value = var.createNewVPC ? aws_vpc.vpc_new[0].id : var.vpc_id_existing
}
setting {
namespace = "aws:ec2:vpc"
name = "Subnets"
value = var.createNewSubnets ? "${aws_subnet.subnet_private_a_new[0].id},${aws_subnet.subnet_private_b_new[0].id}" : "${var.subnet_private_a_id},${var.subnet_private_b_id}"
}
setting {
namespace = "aws:ec2:vpc"
name = "ELBSubnets"
value = var.createNewSubnets ? "${aws_subnet.subnet_public_a_new[0].id},${aws_subnet.subnet_public_b_new[0].id}" : "${var.subnet_public_a_id},${var.subnet_public_b_id}"
}
setting {
namespace = "aws:autoscaling:asg"
name = "MinSize"
value = "2"
}
setting {
namespace = "aws:autoscaling:asg"
name = "MaxSize"
value = "2"
}
setting {
namespace = "aws:elasticbeanstalk:application"
name = "Application Healthcheck URL"
value = "/"
}
setting {
namespace = "aws:elasticbeanstalk:application:environment"
name = "DB_HOST"
value = data.aws_db_instance.myDB.address
}
setting {
namespace = "aws:elasticbeanstalk:application:environment"
name = "DB_USER"
value = random_password.rds_username.result
}
setting {
namespace = "aws:elasticbeanstalk:application:environment"
name = "DB_PASS"
value = random_password.rds_password.result
}
setting {
namespace = "aws:elasticbeanstalk:application:environment"
name = "DB_PORT"
value = data.aws_db_instance.myDB.port
}
setting {
namespace = "aws:elasticbeanstalk:application:environment"
name = "CACHE_ADDRESS"
value = data.aws_elasticache_cluster.myCache.cluster_address
}
setting {
namespace = "aws:elasticbeanstalk:application:environment"
name = "CACHE_PORT"
value = var.cache_port
}
}
Problem
When running the script with -target=aws_elastic_beanstalk_environment.beanstalk, the beanstalk deploys just fine.
When running the script to deploy the full stack, the other components are created and then I get
Error: Missing required argument
on beanstalk.tf line 6, in resource "aws_elastic_beanstalk_environment" "beanstalk":
6: resource "aws_elastic_beanstalk_environment" "beanstalk" {
The argument "setting.1.value" is required, but no definition was found.
I'm probably as adept at deciphering cryptic error messages as the next guy, but this seems like something in the guts of Terraform that is choking. I was on 0.13.5 and had the error, so I upgraded to 0.14.6. The only difference is that it now displays the line about "setting.1.value".
Any ideas on what this means or how to solve it?
This is the setting block that causes the problem:
setting {
namespace = "aws:elasticbeanstalk:application:environment"
name = "CACHE_ADDRESS"
value = data.aws_elasticache_cluster.myCache.cluster_address
}
If I replace the value with a static value, everything works.
I believe this to be an issue with Terraform itself not returning a proper value for cluster_address.
The issue arises because this ElastiCache instance is Redis, but the cluster_address property only applies to Memcached.
It seems that Terraform should have a better error message for this.
So if you see a weird "setting.x.value" error, it probably means that you are trying to use something that only applies to some of the options available for the resource.
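If the cache really is Redis, a possible workaround (a sketch; the names follow the question, but verify the attribute against your AWS provider version) is to read the node address from cache_nodes instead of cluster_address:

setting {
  namespace = "aws:elasticbeanstalk:application:environment"
  name      = "CACHE_ADDRESS"
  # cluster_address only exists for Memcached; a single-node Redis cluster
  # exposes its endpoint through the cache_nodes list on the data source.
  value     = data.aws_elasticache_cluster.myCache.cache_nodes[0].address
}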

Terraform 12 unable to deploy beanstalk healthreporting settings

I included health reporting in my Terraform deployment; however, I'm getting this error:
ERROR
Error: Unsupported argument
on ../mods/environment/environment.tf line 210, in resource "aws_elastic_beanstalk_environment" "environment":
210: setting = {
An argument named "setting" is not expected here. Did you mean to define a
block of type "setting"?
I am using this JSON file in the template folder (hc.tpl, located in the ../mods/environment/hc folder):
{
"CloudWatchMetrics": {
"Environment": {
"ApplicationRequests2xx": 60,
"ApplicationRequests5xx": 60,
"ApplicationRequests4xx": 60
},
"Instance": {
"ApplicationRequestsTotal": 60
}
},
"Version": 1
}
My Terraform deployment code (I removed some blocks to keep this short):
data "template_file" "hc" {
template = "${file("../mods/environment/hc/hc.tpl")}"
}
resource "aws_elastic_beanstalk_environment" "pogi" {
name = "pogi-poc"
application = "pogi-poc"
solution_stack_name = "64bit Amazon Linux 2018.03 v2.9.8 running PHP 7.0"
setting {
namespace = "aws:ec2:vpc"
name = "VPCId"
value = "vpc-12345"
}
setting {
namespace = "aws:ec2:vpc"
name = "ELBScheme"
value = "internal"
}
setting {
namespace = "aws:ec2:vpc"
name = "AssociatePublicIpAddress"
value = "false"
}
setting = {
namespace = "aws:elasticbeanstalk:healthreporting:system"
name = "ConfigDocument"
value = data.template_file.hc.rendered
}
}
I also tried an approach suggested by someone else, but I'm getting the same error message.
You have an = in:
setting = {
namespace = "aws:elasticbeanstalk:healthreporting:system"
name = "ConfigDocument"
value = data.template_file.hc.rendered
}
It should be:
setting {
namespace = "aws:elasticbeanstalk:healthreporting:system"
name = "ConfigDocument"
value = data.template_file.hc.rendered
}
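As a side note, the same ConfigDocument can also be produced inline with jsonencode instead of a separate template file; a sketch of an equivalent setup (not part of the original answer):

setting {
  namespace = "aws:elasticbeanstalk:healthreporting:system"
  name      = "ConfigDocument"
  # Same document as hc.tpl, rendered directly by Terraform.
  value = jsonencode({
    CloudWatchMetrics = {
      Environment = {
        ApplicationRequests2xx = 60
        ApplicationRequests5xx = 60
        ApplicationRequests4xx = 60
      }
      Instance = {
        ApplicationRequestsTotal = 60
      }
    }
    Version = 1
  })
}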

How to create ElasticBeanstalk environment with non-public load balancer with Terraform

I am setting up AWS infrastructure using Terraform. One of the components is an Elastic Beanstalk application/environment with a load balancer and an auto-scaling group. I don't want to expose the endpoint to the entire Internet, only to a limited list of IP addresses. For that purpose I create a security group with the proper inbound rules and assign it to the load balancer. But after the script is applied, the load balancer has two security groups: mine, and the default one, which allows HTTP traffic from anywhere.
As a temporary workaround I manually remove the inbound rule from the default SG. That approach is not acceptable as a long-term solution, since I want fully automated infrastructure setup (without any human interaction).
Here is my config:
resource "aws_elastic_beanstalk_environment" "abc_env" {
name = "abc-${var.environment_name}"
application = "${aws_elastic_beanstalk_application.abc-service.name}"
solution_stack_name = "64bit Amazon Linux 2016.09 v2.3.0 running Python 3.4"
cname_prefix = "abc-${var.environment_name}"
tier = "WebServer"
wait_for_ready_timeout = "30m"
setting {
name = "InstanceType"
namespace = "aws:autoscaling:launchconfiguration"
value = "m3.medium"
}
setting {
name = "SecurityGroups"
namespace = "aws:elb:loadbalancer"
value = "${var.limited_http_acccess_id}"
}
setting {
name = "VPCId"
namespace = "aws:ec2:vpc"
value = "${var.vpc_id}"
}
setting {
name = "Subnets"
namespace = "aws:ec2:vpc"
value = "${var.public_net_id}"
}
setting {
name = "AssociatePublicIpAddress"
namespace = "aws:ec2:vpc"
value = "true"
}
setting {
name = "ELBSubnets"
namespace = "aws:ec2:vpc"
value = "${var.public_net_id}"
}
setting {
name = "ELBScheme"
namespace = "aws:ec2:vpc"
value = "external"
}
setting {
name = "MinSize"
namespace = "aws:autoscaling:asg"
value = "2"
}
setting {
name = "MaxSize"
namespace = "aws:autoscaling:asg"
value = "4"
}
setting {
name = "Availability Zones"
namespace = "aws:autoscaling:asg"
value = "Any 2"
}
setting {
name = "CrossZone"
namespace = "aws:elb:loadbalancer"
value = "true"
}
setting {
name = "Unit"
namespace = "aws:autoscaling:trigger"
value = "Percent"
}
setting {
name = "MeasureName"
namespace = "aws:autoscaling:trigger"
value = "CPUUtilization"
}
setting {
name = "LowerThreshold"
namespace = "aws:autoscaling:trigger"
value = "20"
}
setting {
name = "UpperThreshold"
namespace = "aws:autoscaling:trigger"
value = "80"
}
setting {
name = "Period"
namespace = "aws:autoscaling:trigger"
value = "5"
}
setting {
name = "UpperBreachScaleIncrement"
namespace = "aws:autoscaling:trigger"
value = "1"
}
setting {
name = "LowerBreachScaleIncrement"
namespace = "aws:autoscaling:trigger"
value = "-1"
}
setting {
name = "Notification Endpoint"
namespace = "aws:elasticbeanstalk:sns:topics"
value = "${var.notification_email}"
}
tags = "${merge(var.default_tags, map("Name", "abc environment"))}"
}
So the question is: how can I limit access to my load balancer without manually interacting with AWS (using only the Terraform script)?
[UPDATE]
Here is my network config
resource "aws_vpc" "main_vpc" {
cidr_block = "${var.vpc_cidr_block}"
enable_dns_hostnames = true
}
resource "aws_subnet" "public_network" {
vpc_id = "${aws_vpc.main_vpc.id}"
cidr_block = "${var.public_network_cidr_block}"
}
resource "aws_internet_gateway" "gateway" {
vpc_id = "${aws_vpc.main_vpc.id}"
}
resource "aws_route_table" "public" {
vpc_id = "${aws_vpc.main_vpc.id}"
route {
cidr_block = "0.0.0.0/0"
gateway_id = "${aws_internet_gateway.gateway.id}"
}
}
resource "aws_route_table_association" "public" {
route_table_id = "${aws_route_table.public.id}"
subnet_id = "${aws_subnet.public_network.id}"
}
resource "aws_security_group" "limited_http_acccess" {
name = "limited_http_acccess"
description = "This security group allows to access resources within VPC from specified IP addresses"
vpc_id = "${aws_vpc.main_vpc.id}"
ingress {
from_port = 80
to_port = 80
protocol = "TCP"
cidr_blocks = ["${split(",", var.allowed_cidr_list)}"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
According to the AWS docs, ManagedSecurityGroup needs to be added to the Elastic Beanstalk config in order to prevent use of the default security group.
So adding the following lines to my aws_elastic_beanstalk_environment fixed the issue:
setting {
name = "ManagedSecurityGroup"
namespace = "aws:elb:loadbalancer"
value = "${var.limited_http_acccess_id}"
}
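Taken together with the SecurityGroups option already present in the environment, the relevant pair of settings ends up looking like this (a recap sketch using the question's variable name):

# Assign the restricted security group to the load balancer and have Elastic
# Beanstalk manage that group instead of attaching an additional default one.
setting {
  name      = "SecurityGroups"
  namespace = "aws:elb:loadbalancer"
  value     = "${var.limited_http_acccess_id}"
}
setting {
  name      = "ManagedSecurityGroup"
  namespace = "aws:elb:loadbalancer"
  value     = "${var.limited_http_acccess_id}"
}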