I am trying to create multiple mount targets for a single EFS file system across 3 subnet IDs with Terraform. When I run the code it creates two file shares (the second one without a mount target) and then fails with the error below.
Error: putting EFS file system (fs-0c479ab4b699829d1) lifecycle configuration: InvalidParameter: 1 validation error(s) found.
│ - missing required field, PutLifecycleConfigurationInput.LifecyclePolicies.
Here's my EFS code.
resource "aws_efs_file_system" "efs" {
creation_token = var.efs_token
performance_mode = var.efs_performance_mode
encrypted = true
lifecycle_policy {
transition_to_ia = "AFTER_30_DAYS"
}
tags = local.mandatory_tags
}
resource "aws_efs_mount_target" "efs_mount" {
count = length(data.aws_subnets.private.*.id)
depends_on = [aws_efs_file_system.efs]
file_system_id = "${aws_efs_file_system.efs.id}"
subnet_id = "${element(data.aws_subnets.private.*.id, count.index)}"
}
resource "aws_efs_backup_policy" "efs_backup_policy" {
file_system_id = "${aws_efs_file_system.efs.id}"
backup_policy {
status = "ENABLED"
}
}
This is the EFS file system policy
resource "aws_efs_file_system_policy" "policy" {
file_system_id = aws_efs_file_system.efs.id
bypass_policy_lockout_safety_check = true
policy = <<POLICY
{
"Version": "2012-10-17",
"Id": "ExamplePolicy01",
"Statement": [
{
"Sid": "ExampleStatement01",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Resource": "${aws_efs_file_system.efs.arn}",
"Action": [
"elasticfilesystem:ClientMount",
"elasticfilesystem:ClientWrite"
],
"Condition": {
"Bool": {
"aws:SecureTransport": "true"
}
}
}
]
}
POLICY
}
data.tf
data "aws_vpc" "this" {
filter {
name = "tag:Name"
values = ["${local.name_prefix}-vpc"]
}
}
data "aws_subnets" "private" {
count = length(data.aws_availability_zones.azs.names)
filter {
name = "tag:Name"
values = ["${local.name_prefix}-vpc-private*"]
}
filter {
name = "vpc-id"
values = [data.aws_vpc.this.id]
}
}
data "aws_availability_zones" "azs" {}
tfvars
efs_token = "dev-load-test"
efs_performance_mode = "maxIO"
I'm creating a test Elasticsearch domain on AWS using Terraform. I can't give full access from all IP addresses, and how do I automatically add a username and password to log in to Kibana? I read the manuals on GitHub but I didn't understand how to do it. Help me, please.
resource "aws_elasticsearch_domain" "es" {
domain_name = var.domain
elasticsearch_version = var.version_elasticsearch
cluster_config {
instance_type = var.instance_type
}
snapshot_options {
automated_snapshot_start_hour = var.automated_snapshot_start_hour
}
ebs_options {
ebs_enabled = var.ebs_volume_size > 0 ? true : false
volume_size = var.ebs_volume_size
volume_type = var.volume_type
}
tags = {
Domain = var.tag_domain
}
}
resource "aws_iam_service_linked_role" "es" {
aws_service_name = "es.amazonaws.com"
description = "Allows Amazon ES to manage AWS resources for a domain on your behalf."
}
resource "aws_elasticsearch_domain_policy" "main" {
domain_name = aws_elasticsearch_domain.es.domain_name
access_policies = <<POLICIES
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "es:*",
"Condition": {
"IpAddress": {
"aws:SourceIp": [
"*"
]
}
},
"Resource": "${aws_elasticsearch_domain.es.arn}/*""
}
]
}
POLICIES
}
Access control for AWS OpenSearch is documented at https://docs.aws.amazon.com/opensearch-service/latest/developerguide/ac.html, and the kind of access you are looking to achieve is called 'fine-grained access control', which is explained in detail at https://docs.aws.amazon.com/opensearch-service/latest/developerguide/fgac.html.
I know this Terraform resource is not documented well enough to explain these different access types, which is why I am sharing a modified version of your code, with the additional arguments you were missing, to get your task going.
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 3.0"
}
}
}
variable "master_user_password" {
type = string
}
# Elasticsearch domain
resource "aws_elasticsearch_domain" "es_example" {
domain_name = "example-domain"
elasticsearch_version = "OpenSearch_1.0"
cluster_config {
instance_type = "t3.small.elasticsearch"
}
ebs_options {
ebs_enabled = true
volume_size = 10
volume_type = "gp2"
}
encrypt_at_rest {
enabled = true
}
node_to_node_encryption {
enabled = true
}
# This is required for using advanced security options
domain_endpoint_options {
enforce_https = true
tls_security_policy = "Policy-Min-TLS-1-2-2019-07"
}
# Authentication
advanced_security_options {
enabled = true
internal_user_database_enabled = true
master_user_options {
master_user_name = "es-admin"
master_user_password = var.master_user_password
# You can also use IAM role/user ARN
# master_user_arn = var.es_master_user_arn
}
}
tags = {
Domain = "es_example"
}
}
resource "aws_elasticsearch_domain_policy" "main" {
domain_name = aws_elasticsearch_domain.es_example.domain_name
access_policies = <<POLICIES
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "es:*",
"Resource": "${aws_elasticsearch_domain.es_example.arn}/*"
}
]
}
POLICIES
}
This code is working for me: I was able to access the OpenSearch Dashboard from my browser and log in using the credentials I specified in the Terraform code.
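If you also want to limit which IP addresses can reach the domain (the other part of the question), you can keep an IpAddress condition in the access policy and list real CIDR ranges instead of "*". A minimal sketch of the modified policy resource; 203.0.113.0/24 is just a placeholder range:
resource "aws_elasticsearch_domain_policy" "main" {
  domain_name = aws_elasticsearch_domain.es_example.domain_name
  access_policies = <<POLICIES
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "es:*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": ["203.0.113.0/24"]
        }
      },
      "Resource": "${aws_elasticsearch_domain.es_example.arn}/*"
    }
  ]
}
POLICIES
}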
I am trying to enable managed updates with Terraform, but I am getting the following error:
Error: ConfigurationValidationException: Configuration validation exception: Invalid option specification (Namespace: 'aws:elasticbeanstalk:managedactions', OptionName: 'ManagedActionsEnabled'): You can't enable managed platform updates when your environment uses the service-linked role 'AWSServiceRoleForElasticBeanstalk'. Select a service role that has the 'AWSElasticBeanstalkManagedUpdatesCustomerRolePolicy' managed policy.
Terraform code:
resource "aws_elastic_beanstalk_environment" "eb_env" {
setting {
namespace = "aws:elasticbeanstalk:managedactions"
name = "ManagedActionsEnabled"
value = "True"
}
setting {
namespace = "aws:elasticbeanstalk:managedactions"
name = "ServiceRoleForManagedUpdates"
value = aws_iam_role.beanstalk_service.arn
}
setting {
namespace = "aws:elasticbeanstalk:managedactions"
name = "PreferredStartTime"
value = "Sat:04:00"
}
setting {
namespace = "aws:elasticbeanstalk:managedactions:platformupdate"
name = "UpdateLevel"
value = "patch"
}
}
resource "aws_iam_instance_profile" "beanstalk_service" {
name = "beanstalk-service-user"
role = "${aws_iam_role.beanstalk_service.name}"
}
resource "aws_iam_instance_profile" "beanstalk_ec2" {
name = "beanstalk-ec2-user"
role = "${aws_iam_role.beanstalk_ec2.name}"
}
resource "aws_iam_role" "beanstalk_service" {
name = "beanstalk-service"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "elasticbeanstalk.amazonaws.com"
},
"Action": "sts:AssumeRole",
"Condition": {
"StringEquals": {
"sts:ExternalId": "elasticbeanstalk"
}
}
}
]
}
EOF
}
resource "aws_iam_role" "beanstalk_ec2" {
name = "aws-elasticbeanstalk-ec2-role"
assume_role_policy = <<EOF
{
"Version": "2008-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
EOF
}
resource "aws_iam_policy_attachment" "beanstalk_service_health" {
name = "elastic-beanstalk-service-health"
roles = ["${aws_iam_role.beanstalk_service.id}"]
policy_arn = "arn:aws:iam::aws:policy/service-role/AWSElasticBeanstalkEnhancedHealth"
}
resource "aws_iam_policy_attachment" "beanstalk_ec2_worker" {
name = "elastic-beanstalk-ec2-worker"
roles = ["${aws_iam_role.beanstalk_ec2.id}"]
policy_arn = "arn:aws:iam::aws:policy/AWSElasticBeanstalkWorkerTier"
}
resource "aws_iam_service_linked_role" "managedupdates_eb" {
aws_service_name = "managedupdates.elasticbeanstalk.amazonaws.com"
}
resource "aws_iam_policy_attachment" "beanstalk_ec2_web" {
name = "elastic-beanstalk-ec2-web"
roles = ["${aws_iam_role.beanstalk_ec2.id}"]
policy_arn = "arn:aws:iam::aws:policy/AWSElasticBeanstalkWebTier"
}
resource "aws_iam_policy_attachment" "beanstalk_ec2_container" {
name = "elastic-beanstalk-ec2-container"
roles = ["${aws_iam_role.beanstalk_ec2.id}"]
policy_arn = "arn:aws:iam::aws:policy/AWSElasticBeanstalkMulticontainerDocker"
}
resource "aws_iam_policy_attachment" "beanstalk_service" {
name = "elastic-beanstalk-service"
roles = ["${aws_iam_role.beanstalk_service.id}"]
policy_arn = "arn:aws:iam::aws:policy/AWSElasticBeanstalkManagedUpdatesCustomerRolePolicy"
}
I did attempt to create a service-linked role, but that is not the solution for the error above:
setting {
namespace = "aws:elasticbeanstalk:managedactions"
name = "ServiceRoleForManagedUpdates"
value = aws_iam_service_linked_role.managedupdates_eb.arn
}
I was missing the following setting:
setting {
namespace = "aws:elasticbeanstalk:environment"
name = "ServiceRole"
value = aws_iam_role.beanstalk_service.id
}
I want to replicate S3 buckets across AWS regions with Terraform.
This is my code in my s3.tf:
# Bucket
module "s3_replica" {
source = "git#github.com:xxx"
providers = {
aws = "aws.us-west-2"
}
name = "s3_replica"
logging_bucket_prefix = "s3_replica"
versioning = var.versioning
bucket_logging = var.bucket_logging
logging_bucket_name = var.logging_bucket_name
kms_key_id = aws_kms_key.s3-replica-key.key_id
sse_algorithm = var.sse_algorithm
tags = var.bucket_tags
environment = var.environment
}
module "s3" {
source = "git#github.com:xxxx"
name = "s3"
logging_bucket_prefix = "s3"
versioning = var.versioning
bucket_logging = var.bucket_logging
logging_bucket_name = var.logging_bucket_name
kms_key_id = aws_kms_key.s3.key_id
sse_algorithm = var.sse_algorithm
tags = var.bucket_tags
environment = var.environment
replication_configuration {
role = aws_iam_role.replication_role.arn
rules {
status = "Enabled"
destination {
bucket = aws_s3_bucket.module.s3_replica.bucket_arn
storage_class = "STANDARD_IA"
replicate_kms_key_id = aws_kms_key.s3-replica-key.key_id
}
source_selection_criteria {
sse_kms_encrypted_objects {
enabled = true
}
}
}
}
}
and this is my part of my module:
dynamic "replication_configuration" {
for_each = length(keys(var.replication_configuration)) == 0 ? [] : [var.replication_configuration]
content {
role = replication_configuration.value.role
dynamic "rules" {
for_each = replication_configuration.value.rules
content {
id = lookup(rules.value, "id", null)
priority = lookup(rules.value, "priority", null)
prefix = lookup(rules.value, "prefix", null)
status = rules.value.status
dynamic "destination" {
for_each = length(keys(lookup(rules.value, "destination", {}))) == 0 ? [] : [lookup(rules.value, "destination", {})]
content {
bucket = destination.value.bucket
storage_class = lookup(destination.value, "storage_class", null)
replica_kms_key_id = lookup(destination.value, "replica_kms_key_id", null)
account_id = lookup(destination.value, "account_id", null)
}
}
dynamic "source_selection_criteria" {
for_each = length(keys(lookup(rules.value, "source_selection_criteria", {}))) == 0 ? [] : [lookup(rules.value, "source_selection_criteria", {})]
content {
dynamic "sse_kms_encrypted_objects" {
for_each = length(keys(lookup(source_selection_criteria.value, "sse_kms_encrypted_objects", {}))) == 0 ? [] : [lookup(source_selection_criteria.value, "sse_kms_encrypted_objects", {})]
content {
enabled = sse_kms_encrypted_objects.value.enabled
}
}
}
}
dynamic "filter" {
for_each = length(keys(lookup(rules.value, "filter", {}))) == 0 ? [] : [lookup(rules.value, "filter", {})]
content {
prefix = lookup(filter.value, "prefix", null)
tags = lookup(filter.value, "tags", null)
}
}
}
}
}
}
resource "aws_iam_role" "replication" {
name = "s3-bucket-replication"
assume_role_policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "s3.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
POLICY
}
resource "aws_iam_policy" "replication" {
name = "s3-bucket-replication"
policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"s3:GetReplicationConfiguration",
"s3:ListBucket"
],
"Effect": "Allow",
"Resource": [
"module.s3.bucket_arn"
]
},
{
"Action": [
"s3:GetObjectVersion",
"s3:GetObjectVersionAcl"
],
"Effect": "Allow",
"Resource": [
"module.s3.bucket_arn"
]
},
{
"Action": [
"s3:ReplicateObject",
"s3:ReplicateDelete"
],
"Effect": "Allow",
"Resource": "module.s3.bucket_arn"
}
]
}
POLICY
}
resource "aws_iam_policy_attachment" "replication" {
name = "s3-bucket-replication"
roles = [aws_iam_role.replication.name]
policy_arn = aws_iam_policy.replication.arn
}
}
When I run terraform init, it succeeds.
However, when I run terraform plan, it gives me this error:
Error: Unsupported block type
on s3.tf line 102, in module "s3":
102: replication_configuration {
Blocks of type "replication_configuration" are not expected here.
Why am I getting this error? Do I have the replication_configuration right?
Is there a configuration problem or a syntax problem?
I tried adding "=" in front of the configurations.. and now I get the error:
Error: Unsupported argument
on s3.tf line 102, in module "s3":
102: replication_configuration = {
An argument named "replication_configuration" is not expected here.
I was using Terraform to set up S3 buckets (in different regions) and to set up replication between them.
It was working properly until I added KMS.
I created 2 KMS keys, one for the source and one for the destination.
Now, while applying the replication configuration, there is an option to pass a destination key for the destination bucket, but I am not sure how to apply a key at the source.
Any help would be appreciated.
provider "aws" {
alias = "east"
region = "us-east-1"
}
resource "aws_s3_bucket" "destination-bucket" {
bucket = ""destination-bucket"
provider = "aws.east"
acl = "private"
region = "us-east-1"
versioning {
enabled = true
}
server_side_encryption_configuration {
rule {
apply_server_side_encryption_by_default {
kms_master_key_id = "${var.kms_cmk_dest_arn}"
sse_algorithm = "aws:kms"
}
}
}
}
resource "aws_s3_bucket" "source-bucket" {
bucket = "source-bucket"
acl = "private"
versioning {
enabled = true
}
server_side_encryption_configuration {
rule {
apply_server_side_encryption_by_default {
kms_master_key_id = "${var.kms_cmk_arn}"
sse_algorithm = "aws:kms"
}
}
}
replication_configuration {
role = "${aws_iam_role.replication.arn}"
rules {
status = "Enabled"
destination {
bucket = "${aws_s3_bucket.source-bucket.arn}"
storage_class = "STANDARD"
replica_kms_key_id = "${var.kms_cmk_dest_arn}"
}
source_selection_criteria {
sse_kms_encrypted_objects {
enabled = true
}
}
}
}
}
resource "aws_iam_role" "replication" {
name = "cdd-iam-role-replication"
permissions_boundary = "arn:aws:iam::${var.account_id}:policy/ServiceRoleBoundary"
assume_role_policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "s3.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
POLICY
}
resource "aws_iam_role_policy" "replication" {
name = "cdd-iam-role-policy-replication"
role = "${aws_iam_role.replication.id}"
policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"s3:GetReplicationConfiguration",
"s3:ListBucket"
],
"Effect": "Allow",
"Resource": [
"${aws_s3_bucket.source-bucket.arn}"
]
},
{
"Action": [
"s3:GetObjectVersion",
"s3:GetObjectVersionAcl"
],
"Effect": "Allow",
"Resource": [
"${aws_s3_bucket.source-bucket.arn}/*"
]
},
{
"Action": [
"s3:ReplicateObject",
"s3:ReplicateDelete"
],
"Effect": "Allow",
"Resource": "${aws_s3_bucket.destination-bucket.arn}/*"
}
]
}
POLICY
}
In case you're using a Customer Managed Key (CMK) for S3 encryption, you need extra configuration.
AWS S3 Documentation mentions that the CMK owner must grant the source bucket owner permission to use the CMK.
https://docs.aws.amazon.com/AmazonS3/latest/dev/replication-config-for-kms-objects.html#replication-kms-cross-acct-scenario
Also, a good article to summarize the S3 cross region replication configuration:
https://medium.com/@devopslearning/100-days-of-devops-day-44-s3-cross-region-replication-crr-8c58ae8c68d4
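In the same-account case, the piece that usually matters in practice is that the replication role itself needs KMS permissions on both keys: kms:Decrypt against the source CMK (to read the source objects) and kms:Encrypt against the destination CMK. A sketch of the extra statements for the role policy in the question, with optional conditions such as kms:ViaService left out and using the question's variables:
{
  "Action": ["kms:Decrypt"],
  "Effect": "Allow",
  "Resource": "${var.kms_cmk_arn}"
},
{
  "Action": ["kms:Encrypt"],
  "Effect": "Allow",
  "Resource": "${var.kms_cmk_dest_arn}"
}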
If I understand you correctly, you've got two S3 Buckets in two different regions within the same account.
One way I've done this in the past is to plan/apply the KMS keys to both regions first.
Then on a separate plan/apply, I used Terraform's data sources:
data "aws_kms_key" "source_credentials_encryption_key" {
key_id = "alias/source-encryption-key"
}
data "aws_kms_key" "destination_credentials_encryption_key" {
provider = aws.usEast
key_id = "alias/destination-encryption-key"
}
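(For this to work, the aws.usEast alias used by the second data source must be declared somewhere in the configuration; a minimal sketch, with the alias name assumed:)
provider "aws" {
  alias  = "usEast"
  region = "us-east-1"
}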
And used the data source for the replication configuration like so:
replication_configuration {
role = aws_iam_role.replication_role.arn
rules {
status = "Enabled"
destination {
bucket = aws_s3_bucket.source_bucket.arn
storage_class = "STANDARD"
replica_kms_key_id = data.aws_kms_key.destination_credentials_encryption_key.arn
}
source_selection_criteria {
sse_kms_encrypted_objects {
enabled = true
}
}
}
}
Roles:
resource "aws_iam_role" "ecs-ec2-role" {
name = "${var.app_name}-ecs-ec2-role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": [
"ecs.amazonaws.com",
"ecs-tasks.amazonaws.com"
]
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
resource "aws_iam_instance_profile" "ecs-ec2-role" {
name = "${var.app_name}-ecs-ec2-role"
role = "${aws_iam_role.ecs-ec2-role.name}"
}
resource "aws_iam_role_policy" "ecs-ec2-role-policy" {
name = "${var.app_name}-ecs-ec2-role-policy"
role = "${aws_iam_role.ecs-ec2-role.id}"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ecs:CreateCluster",
"ecs:DeregisterContainerInstance",
"ecs:DiscoverPollEndpoint",
"ecs:Poll",
"ecs:RegisterContainerInstance",
"ecs:StartTelemetrySession",
"ecs:Submit*",
"ecs:StartTask",
"ecr:GetAuthorizationToken",
"ecr:BatchCheckLayerAvailability",
"ecr:GetDownloadUrlForLayer",
"ecr:BatchGetImage",
"logs:CreateLogStream",
"logs:PutLogEvents"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents",
"logs:DescribeLogStreams"
],
"Resource": [
"arn:aws:logs:*:*:*"
]
}
]
}
EOF
}
# ecs service role
resource "aws_iam_role" "ecs-service-role" {
name = "${var.app_name}-ecs-service-role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": [
"ecs.amazonaws.com"
]
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
resource "aws_iam_role_policy_attachment" "ecs-service-attach" {
role = "${aws_iam_role.ecs-service-role.name}"
policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceRole"
}
data "aws_iam_policy_document" "aws_secrets_policy" {
version = "2012-10-17"
statement {
sid = ""
effect = "Allow"
actions = ["secretsmanager:GetSecretValue"]
resources = [
var.aws_secrets
]
}
}
resource "aws_iam_policy" "aws_secrets_policy" {
name = "aws_secrets_policy"
policy = "${data.aws_iam_policy_document.aws_secrets_policy.json}"
}
resource "aws_iam_role_policy_attachment" "aws_secrets_policy" {
role = aws_iam_role.ecs-ec2-role.name
policy_arn = aws_iam_policy.aws_secrets_policy.arn
}
ECS:
resource "aws_ecs_cluster" "main" {
name = "${var.app_name}-cluster"
}
data "template_file" "app" {
template = file("./templates/ecs/app.json.tpl")
vars = {
app_name = var.app_name
app_image = var.app_image
app_host = var.app_host
endpoint_protocol = var.endpoint_protocol
app_port = var.app_port
container_cpu = var.container_cpu
container_memory = var.container_memory
aws_region = var.aws_region
aws_secrets = var.aws_secrets
}
}
resource "aws_ecs_task_definition" "app" {
family = "${var.app_name}-task"
execution_role_arn = aws_iam_role.ecs-ec2-role.arn
cpu = var.container_cpu
memory = var.container_memory
container_definitions = data.template_file.app.rendered
}
resource "aws_ecs_service" "main" {
name = "${var.app_name}-service"
cluster = aws_ecs_cluster.main.id
task_definition = aws_ecs_task_definition.app.arn
desired_count = var.app_count
iam_role = aws_iam_role.ecs-service-role.arn
depends_on = [aws_iam_role_policy_attachment.ecs-service-attach]
load_balancer {
target_group_arn = aws_lb_target_group.app.id
container_name = var.app_name
container_port = var.app_port
}
}
Autoscaling:
data "aws_ami" "latest_ecs" {
most_recent = true
filter {
name = "name"
values = ["*amazon-ecs-optimized"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
owners = ["591542846629"] # AWS
}
resource "aws_launch_configuration" "ecs-launch-configuration" {
// name = "${var.app_name}-launch-configuration"
image_id = data.aws_ami.latest_ecs.id
instance_type = var.instance_type
iam_instance_profile = aws_iam_instance_profile.ecs-ec2-role.id
security_groups = [aws_security_group.ecs_tasks.id]
root_block_device {
volume_type = "standard"
volume_size = 100
delete_on_termination = true
}
lifecycle {
create_before_destroy = true
}
associate_public_ip_address = "false"
key_name = "backend-dev"
#
# register the cluster name with ecs-agent which will in turn coord
# with the AWS api about the cluster
#
user_data = data.template_file.autoscaling_user_data.rendered
}
data "template_file" "autoscaling_user_data" {
template = file("./templates/ecs/autoscaling_user_data.tpl")
vars = {
ecs_cluster = aws_ecs_cluster.main.name
}
}
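The templates/ecs/autoscaling_user_data.tpl file is not shown; a minimal sketch of what it typically contains, matching the ecs.config output quoted further down (the ecs_cluster variable comes from the template_file vars above):
#!/bin/bash
echo "ECS_CLUSTER=${ecs_cluster}" >> /etc/ecs/ecs.config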
#
# need an ASG so we can easily add more ecs host nodes as necessary
#
resource "aws_autoscaling_group" "ecs-autoscaling-group" {
name = "${var.app_name}-autoscaling-group"
max_size = "4"
min_size = "2"
health_check_grace_period = 300
desired_capacity = "2"
vpc_zone_identifier = [aws_subnet.private[0].id, aws_subnet.private[1].id]
launch_configuration = aws_launch_configuration.ecs-launch-configuration.name
health_check_type = "ELB"
tag {
key = "Name"
value = var.app_name
propagate_at_launch = true
}
}
resource "aws_autoscaling_policy" "demo-cluster" {
name = "${var.app_name}-ecs-autoscaling-polycy"
policy_type = "TargetTrackingScaling"
estimated_instance_warmup = "90"
adjustment_type = "ChangeInCapacity"
autoscaling_group_name = aws_autoscaling_group.ecs-autoscaling-group.name
target_tracking_configuration {
predefined_metric_specification {
predefined_metric_type = "ASGAverageCPUUtilization"
}
target_value = 40.0
}
}
The cluster name was added to the instances successfully via user data:
$ cat /etc/ecs/ecs.config
ECS_CLUSTER=mercure-cluster
But I'm getting an error:
service mercure-service was unable to place a task because no container instance met all of its requirements. Reason: No Container Instances were found in your cluster.
ecs-agent.log:
$ grep 'WARN\|ERROR' ecs-agent.log.2019-10-24-10
2019-10-24T10:36:45Z [WARN] Error getting valid credentials (AKID ): NoCredentialProviders: no valid providers in chain. Deprecated.
2019-10-24T10:36:45Z [ERROR] Unable to register as a container instance with ECS: NoCredentialProviders: no valid providers in chain. Deprecated.
2019-10-24T10:36:45Z [ERROR] Error registering: NoCredentialProviders: no valid providers in chain. Deprecated.
ecs-init.log:
$ grep 'WARN\|ERROR' ecs-init.log
2019-10-24T10:36:45Z [WARN] ECS Agent failed to start, retrying in 547.77941ms
2019-10-24T10:36:46Z [WARN] ECS Agent failed to start, retrying in 1.082153551s
2019-10-24T10:36:50Z [WARN] ECS Agent failed to start, retrying in 2.066145821s
2019-10-24T10:36:55Z [WARN] ECS Agent failed to start, retrying in 4.235010051s
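One thing worth checking, given those NoCredentialProviders messages: the instance profile above is built from aws_iam_role.ecs-ec2-role, whose trust policy only allows ecs.amazonaws.com and ecs-tasks.amazonaws.com to assume it, so EC2 cannot hand the agent any credentials from the instance profile. A sketch of a trust policy that also covers EC2 (illustration only; ecs-tasks.amazonaws.com is kept because the same role is used as the task execution role):
resource "aws_iam_role" "ecs-ec2-role" {
  name = "${var.app_name}-ecs-ec2-role"
  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": [
          "ec2.amazonaws.com",
          "ecs-tasks.amazonaws.com"
        ]
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}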