I included health reporting in my Terraform deployment; however, I'm getting this error:
ERROR
Error: Unsupported argument
on ../mods/environment/environment.tf line 210, in resource "aws_elastic_beanstalk_environment" "environment":
210: setting = {
An argument named "setting" is not expected here. Did you mean to define a
block of type "setting"?
I am using this JSON template file (the hc.tpl file, located in the ../mods/environment/hc folder):
{
"CloudWatchMetrics": {
"Environment": {
"ApplicationRequests2xx": 60,
"ApplicationRequests5xx": 60,
"ApplicationRequests4xx": 60
},
"Instance": {
"ApplicationRequestsTotal": 60
}
},
"Version": 1
}
My Terraform deployment code (I removed some blocks to shorten the reading):
data "template_file" "hc" {
template = "${file("../mods/environment/hc/hc.tpl")}"
}
resource "aws_elastic_beanstalk_environment" "pogi" {
name = "pogi-poc"
application = "pogi-poc"
solution_stack_name = "64bit Amazon Linux 2018.03 v2.9.8 running PHP 7.0"
setting {
namespace = "aws:ec2:vpc"
name = "VPCId"
value = "vpc-12345"
}
setting {
namespace = "aws:ec2:vpc"
name = "ELBScheme"
value = "internal"
}
setting {
namespace = "aws:ec2:vpc"
name = "AssociatePublicIpAddress"
value = "false"
}
setting = {
namespace = "aws:elasticbeanstalk:healthreporting:system"
name = "ConfigDocument"
value = data.template_file.hc.rendered
}
}
I also tried an approach someone else suggested, but I'm getting the same error message.
You have = in:
setting = {
namespace = "aws:elasticbeanstalk:healthreporting:system"
name = "ConfigDocument"
value = data.template_file.hc.rendered
}
It should be:
setting {
namespace = "aws:elasticbeanstalk:healthreporting:system"
name = "ConfigDocument"
value = data.template_file.hc.rendered
}
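As a side note, the template_file data source is deprecated in favor of the built-in templatefile() function in current Terraform versions. A minimal sketch of the corrected block using templatefile() (assuming the same hc.tpl path and that the template takes no variables) could look like this:

```hcl
resource "aws_elastic_beanstalk_environment" "pogi" {
  # ... other arguments and setting blocks unchanged ...

  # "setting" is a block, so there is no "=" before the brace
  setting {
    namespace = "aws:elasticbeanstalk:healthreporting:system"
    name      = "ConfigDocument"
    # templatefile() replaces the deprecated template_file data source;
    # the empty map means the template uses no interpolation variables
    value     = templatefile("../mods/environment/hc/hc.tpl", {})
  }
}
```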
Related
I am facing an issue: I am trying to install the ALB controller using Terraform, and it fails with an error like "failed to download chart".
Below is the Terraform code I am working with:
resource "helm_release" "lb" {
name = "aws-load-balancer-controller"
repository = "https://aws.github.io/eks-charts"
chart = "aws-load-balancer-controller"
namespace = "kube-system"
set {
name = "region"
value = var.region
}
set {
name = "image.tag"
value = "2.4.2"
}
set {
name = "image.repository"
value = "602401143452.dkr.ecr.${var.region}.amazonaws.com/amazon/aws-load-balancer-controller"
}
set {
name = "serviceAccount.create"
value = "true"
}
set {
name = "serviceAccount.name"
value = "aws-load-balancer-controller"
}
set {
name = "clusterName"
value = data.aws_eks_cluster.mycluster.name
}
}
Could you add verify = false and re-run Terraform?
resource "helm_release" "lb" {
...
verify = false
...
}
With this script I create the Elastic Beanstalk environment:
resource "aws_elastic_beanstalk_environment" "beanstalkappenv" {
name = var.beanstalkappenv
application = aws_elastic_beanstalk_application.elasticapp.name
solution_stack_name = var.solution_stack_name
tier = var.tier
setting {
namespace = "aws:ec2:vpc"
name = "VPCId"
value = "${aws_vpc.prod-vpc.id}"
}
setting {
namespace = "aws:ec2:vpc"
name = "Subnets"
value = "${aws_subnet.prod-subnet-public-1.id},${aws_subnet.prod-subnet-public-2.id}"
}
setting {
namespace = "aws:elasticbeanstalk:environment:process:default"
name = "MatcherHTTPCode"
value = "200"
}
setting {
namespace = "aws:autoscaling:launchconfiguration"
name = "InstanceType"
value = "t2.micro"
}
setting {
namespace = "aws:ec2:vpc"
name = "ELBScheme"
value = "internet facing"
}
setting {
namespace = "aws:autoscaling:asg"
name = "MinSize"
value = 1
}
setting {
namespace = "aws:autoscaling:asg"
name = "MaxSize"
value = 2
}
setting {
namespace = "aws:elasticbeanstalk:healthreporting:system"
name = "SystemType"
value = "enhanced"
}
setting {
namespace = "aws:autoscaling:launchconfiguration"
name = "IamInstanceProfile"
value = "aws-elasticbeanstalk-ec2-role"
}
setting {
namespace = "aws:rds:dbinstance"
name = "DBAllocatedStorage"
value = "10"
}
setting {
namespace = "aws:rds:dbinstance"
name = "DBDeletionPolicy"
value = "Delete"
}
setting {
namespace = "aws:rds:dbinstance"
name = "HasCoupledDatabase"
value = "true"
}
setting {
namespace = "aws:rds:dbinstance"
name = "DBEngine"
value = "mysql"
}
setting {
namespace = "aws:rds:dbinstance"
name = "DBEngineVersion"
value = "8.0.28"
}
setting {
namespace = "aws:rds:dbinstance"
name = "DBInstanceClass"
value = "db.t3.micro"
}
setting {
namespace = "aws:rds:dbinstance"
name = "DBPassword"
value = "solvee-pos-unbreakable"
}
setting {
namespace = "aws:rds:dbinstance"
name = "DBUser"
value = "admin"
}
}
At launch I need to initialize the RDS database, so I need to cd into the app directory, activate the virtual environment, enter the Python shell, and run the db.create_all() command, like this:
#! /bin/bash
cd ../var/app/current
virtualenv env
source env/bin/activate
pip install -r requirements.txt
python3
from application import db
db.create_all()
When creating an EC2 resource, it would look like this:
resource "aws_instance" "my-instance" {
ami = "ami-04169656fea786776"
instance_type = "t2.nano"
key_name = "${aws_key_pair.terraform-demo.key_name}"
user_data = "${file("initialize_db.sh")}"
tags = {
Name = "Terraform"
Batch = "5AM"
}
}
But since I'm creating the EC2 instance inside Elastic Beanstalk, I can't do it this way.
So how can I do it?
Sorry for spamming with tf questions.
In Elastic Beanstalk you don't use user data. Instead, all your initialization code should be provided through .ebextensions or platform hooks.
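For example, a minimal .ebextensions config (a sketch; the file name and command are assumptions based on the script above) could run the database initialization with container_commands, which execute in the application staging directory after the bundle is extracted:

```yaml
# .ebextensions/01_init_db.config  (hypothetical file name)
container_commands:
  01_create_tables:
    # On the Python platform this runs with the app's virtualenv active,
    # so the application module and its dependencies are importable.
    command: 'python3 -c "from application import db; db.create_all()"'
    leader_only: true   # run on a single instance only, not the whole ASG
```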
I want to create elastic beanstalk with tf. Here is the main.tf
resource "aws_elastic_beanstalk_application" "elasticapp" {
name = var.elasticapp
}
resource "aws_elastic_beanstalk_environment" "beanstalkappenv" {
name = var.beanstalkappenv
application = aws_elastic_beanstalk_application.elasticapp.name
solution_stack_name = var.solution_stack_name
tier = var.tier
setting {
namespace = "aws:ec2:vpc"
name = "VPCId"
value = var.vpc_id
}
setting {
namespace = "aws:ec2:vpc"
name = "Subnets"
value = var.public_subnets
}
setting {
namespace = "aws:elasticbeanstalk:environment:process:default"
name = "MatcherHTTPCode"
value = "200"
}
setting {
namespace = "aws:elasticbeanstalk:environment"
name = "LoadBalancerType"
value = "application"
}
setting {
namespace = "aws:autoscaling:launchconfiguration"
name = "InstanceType"
value = "t2.micro"
}
setting {
namespace = "aws:ec2:vpc"
name = "ELBScheme"
value = "internet facing"
}
setting {
namespace = "aws:autoscaling:asg"
name = "MinSize"
value = 1
}
setting {
namespace = "aws:autoscaling:asg"
name = "MaxSize"
value = 2
}
setting {
namespace = "aws:elasticbeanstalk:healthreporting:system"
name = "SystemType"
value = "enhanced"
}
}
I have variables defined in vars.tf.
This is the provider.tf
provider "aws" {
region = "eu-west-3"
}
When I try to apply, I get the following message:
Error: ConfigurationValidationException: Configuration validation exception: Invalid option value: 'subnet-xxxxxxxxxxxxxxx' (Namespace: 'aws:ec2:vpc', OptionName: 'ELBSubnets'): The subnet 'subnet-xxxxxxxxxxxxxxx' does not exist.
│ status code: 400, request id: be485042-a653-496b-8510-b310d5796eef
│
│ with aws_elastic_beanstalk_environment.beanstalkappenv,
│ on main.tf line 9, in resource "aws_elastic_beanstalk_environment" "beanstalkappenv":
│ 9: resource "aws_elastic_beanstalk_environment" "beanstalkappenv" {
I created the subnet inside the vpc that I provided in main.tf.
EDIT: I have only one subnet.
EDIT: adding vars.tf
variable "elasticapp" {
default = "pos-eb"
}
variable "beanstalkappenv" {
type = string
default = "pos-eb-env"
}
variable "solution_stack_name" {
type = string
default = "64bit Amazon Linux 2 v3.2.0 running Python 3.8"
}
variable "tier" {
type = string
default = "WebServer"
}
variable "vpc_id" {
default = "vpc-xxxxxxxxxxx"
}
variable "public_subnets" {
type = string
default = "subnet-xxxxxxxxxxxxxxx"
}
OK, so first, check whether the error message is correct.
As mentioned above, there is a chance you are working in the wrong account/region.
So check whether Terraform can find that subnet by using a data source:
data "aws_subnet" "selected" {
id = var.public_subnets # based on your code above, this is a single subnet_id
}
output "subnet_detail" {
value = data.aws_subnet.selected
}
If the above code fails, it means Terraform is not able to use/find that subnet.
So, if the subnet was created by Terraform, there is a chance regions/aliases/accounts got mixed up on the way to this module.
If it was created manually and you are only passing the ID as a manually entered string, then chances are you copied the wrong subnet_id or vpc_id, or you are working in the wrong account/region.
If the above returns data, and Terraform can indeed find the subnet, check that it belongs to the VPC you are using for Elastic Beanstalk.
If all of the above is correct, then the issue may be in the "aws_elastic_beanstalk_environment" definition.
Since you have an ELBScheme but not the rest of the fields related to that ELB, it could be throwing an error.
Because ELBSubnets was not provided in the "aws_elastic_beanstalk_environment" definition, it may be trying to use a default subnet from the default VPC.
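A sketch of what explicitly providing the load balancer subnets might look like, reusing the existing variable (whether ELBSubnets should share the instance subnets is an assumption here):

```hcl
setting {
  namespace = "aws:ec2:vpc"
  name      = "ELBSubnets"
  # point the load balancer at the same subnet(s) as the instances
  value     = var.public_subnets
}
```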
I'm trying to modify the default Host Header CNAME attached to the rule when using a shared ALB with an Elastic Beanstalk configuration. Using Terraform, here's what my configuration looks like:
{
namespace = "aws:elasticbeanstalk:environment"
name = "LoadBalancerIsShared"
value = "true"
},
{
namespace = "aws:elbv2:listener:443"
name = "Rules"
value = "default"
},
{
namespace = "aws:elbv2:loadbalancer"
name = "SharedLoadBalancer"
value = data.aws_lb.default.arn
},
{
namespace = "aws:elasticbeanstalk:environment:process:default"
name = "DeregistrationDelay"
value = "20"
},
{
namespace = "aws:elasticbeanstalk:environment:process:default"
name = "HealthCheckInterval"
value = "15"
},
{
namespace = "aws:elasticbeanstalk:environment:process:default"
name = "HealthCheckTimeout"
value = "5"
},
{
namespace = "aws:elasticbeanstalk:environment:process:default"
name = "HealthyThresholdCount"
value = "3"
},
{
namespace = "aws:elasticbeanstalk:environment:process:default"
name = "Port"
value = "80"
},
{
namespace = "aws:elasticbeanstalk:environment:process:default"
name = "Protocol"
value = "HTTP"
},
{
namespace = "aws:elasticbeanstalk:environment:process:default"
name = "StickinessEnabled"
value = "false"
},
{
namespace = "aws:elasticbeanstalk:environment:process:default"
name = "StickinessLBCookieDuration"
value = "86400"
},
{
namespace = "aws:elasticbeanstalk:environment:process:default"
name = "StickinessType"
value = "lb_cookie"
},
{
namespace = "aws:elasticbeanstalk:environment:process:default"
name = "UnhealthyThresholdCount"
value = "5"
},
{
namespace = "aws:elbv2:listenerrule:myrule"
name = "HostHeaders"
value = "my.example.com"
},
{
namespace = "aws:elbv2:listenerrule:myrule"
name = "Process"
value = "default"
}
Based on this AWS documentation it should just work, but somehow the Host Header attached to the shared ALB always ends up as region.elasticbeanstalk.com.
Thanks again for your help!
What you're seeing in your load balancer is the default rule; that's because you didn't include your custom rule in the listener's Rules setting.
Try this:
...
{
namespace = "aws:elbv2:listener:443"
name = "Rules"
value = "default,myrule"
},
...
Let me know if that helps.
Background
I have a Terraform script that creates several different AWS resources and links them together. One component is the aws_elastic_beanstalk_environment. It has the required parameters and lots of settings for configuration. The beginning of the file is as follows:
data "aws_elastic_beanstalk_application" "myapp" {
name = "beanstalkapp"
}
resource "aws_elastic_beanstalk_environment" "beanstalk" {
name = "beanstalk-environment"
application = data.aws_elastic_beanstalk_application.myapp.name
solution_stack_name = "64bit Amazon Linux 2 v5.2.5 running Node.js 12"
setting {
namespace = "aws:ec2:instances"
name = "InstanceTypes"
value = "t2.micro"
}
setting {
namespace = "aws:elasticbeanstalk:environment"
name = "EnvironmentType"
value = "LoadBalanced"
}
setting {
namespace = "aws:elasticbeanstalk:environment"
name = "LoadBalancerType"
value = "application"
}
setting {
namespace = "aws:elasticbeanstalk:environment"
name = "ServiceRole"
value = "aws-elasticbeanstalk-service-role"
}
setting {
namespace = "aws:autoscaling:launchconfiguration"
name = "IamInstanceProfile"
value = "aws-elasticbeanstalk-ec2-role"
}
setting {
namespace = "aws:ec2:vpc"
name = "VPCId"
value = var.createNewVPC ? aws_vpc.vpc_new[0].id : var.vpc_id_existing
}
setting {
namespace = "aws:ec2:vpc"
name = "Subnets"
value = var.createNewSubnets ? "${aws_subnet.subnet_private_a_new[0].id},${aws_subnet.subnet_private_b_new[0].id}" : "${var.subnet_private_a_id},${var.subnet_private_b_id}"
}
setting {
namespace = "aws:ec2:vpc"
name = "ELBSubnets"
value = var.createNewSubnets ? "${aws_subnet.subnet_public_a_new[0].id},${aws_subnet.subnet_public_b_new[0].id}" : "${var.subnet_public_a_id},${var.subnet_public_b_id}"
}
setting {
namespace = "aws:autoscaling:asg"
name = "MinSize"
value = "2"
}
setting {
namespace = "aws:autoscaling:asg"
name = "MaxSize"
value = "2"
}
setting {
namespace = "aws:elasticbeanstalk:application"
name = "Application Healthcheck URL"
value = "/"
}
setting {
namespace = "aws:elasticbeanstalk:application:environment"
name = "DB_HOST"
value = data.aws_db_instance.myDB.address
}
setting {
namespace = "aws:elasticbeanstalk:application:environment"
name = "DB_USER"
value = random_password.rds_username.result
}
setting {
namespace = "aws:elasticbeanstalk:application:environment"
name = "DB_PASS"
value = random_password.rds_password.result
}
setting {
namespace = "aws:elasticbeanstalk:application:environment"
name = "DB_PORT"
value = data.aws_db_instance.myDB.port
}
setting {
namespace = "aws:elasticbeanstalk:application:environment"
name = "CACHE_ADDRESS"
value = data.aws_elasticache_cluster.myCache.cluster_address
}
setting {
namespace = "aws:elasticbeanstalk:application:environment"
name = "CACHE_PORT"
value = var.cache_port
}
}
Problem
When running the script with -target=aws_elastic_beanstalk_environment.beanstalk, the beanstalk deploys just fine.
When running the script to deploy the full stack, the other components are created and then I get
Error: Missing required argument
on beanstalk.tf line 6, in resource "aws_elastic_beanstalk_environment" "beanstalk":
6: resource "aws_elastic_beanstalk_environment" "beanstalk" {
The argument "setting.1.value" is required, but no definition was found.
I'm probably as adept at deciphering cryptic error messages as the next guy, but this seems like something in the guts of Terraform is choking. I was on 0.13.5 and had the error, so I upgraded to 0.14.6. The only difference is that it now displays the line about "setting.1.value".
Any ideas on what this means or how to solve it?
This is the setting block that causes the problem:
setting {
namespace = "aws:elasticbeanstalk:application:environment"
name = "CACHE_ADDRESS"
value = data.aws_elasticache_cluster.myCache.cluster_address
}
If I replace the value with a static value, everything works.
I believe this is an issue with Terraform itself not returning the proper value for cluster_address.
The issue arises because this ElastiCache cluster is a Redis instance, but the cluster_address attribute only exists for Memcached clusters.
It seems that Terraform should have a better error message for this.
So if you see a weird "setting.x.value" error, it probably means you are trying to use an attribute that only applies to some of the options available for the resource.
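For reference, with a single-node Redis cluster the node endpoint is exposed through the cache_nodes attribute instead; a sketch (assuming myCache is that Redis cluster):

```hcl
setting {
  namespace = "aws:elasticbeanstalk:application:environment"
  name      = "CACHE_ADDRESS"
  # cluster_address is Memcached-only; a Redis node's endpoint
  # is available via the cache_nodes attribute
  value     = data.aws_elasticache_cluster.myCache.cache_nodes[0].address
}
```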