I am running Tableau Server 2021.1.2 on an EC2 instance.
I can connect using the default public IP on port 80, and on port 8850 for the Tableau TSM UI, and the same works with the hostname I defined. The only issue is that, despite following several guides, I can't connect over HTTPS.
I set up the ports on the security group, the load balancer, and the certificate, and I waited for hours (I had read that the SSL certificate can take more than half an hour to become active), but nothing.
I can connect using:
http://my_domain.domain
But not:
https://my_domain.domain
I receive the following error in the browser: Can't connect to the server https://my_domain.domain.
I run curl -i https://my_domain.domain
It returns:
curl: (7) Failed to connect to my_domain.domain port 443: Connection refused
The security group of my instance opens the ports listed below (you can see them in the Terraform as well).
Here is my Terraform setup.
I did the EC2 setup with:
resource "aws_instance" "tableau" {
ami = var.ami
instance_type = var.instance_type
associate_public_ip_address = true
key_name = var.key_name
subnet_id = compact(split(",", var.public_subnets))[0]
vpc_security_group_ids = [aws_security_group.tableau-sg.id]
root_block_device{
volume_size = var.volume_size
}
tags = {
Name = var.namespace
}
}
I created the load balancer setup using:
resource "aws_lb" "tableau-lb" {
name = "${var.namespace}-alb"
load_balancer_type = "application"
internal = false
subnets = compact(split(",", var.public_subnets))
security_groups = [aws_security_group.tableau-sg.id]
ip_address_type = "ipv4"
enable_cross_zone_load_balancing = true
lifecycle {
create_before_destroy = true
}
idle_timeout = 300
}
resource "aws_alb_listener" "https" {
depends_on = [aws_alb_target_group.target-group]
load_balancer_arn = aws_lb.tableau-lb.arn
protocol = "HTTPS"
port = "443"
ssl_policy = "my_ssl_policy"
certificate_arn = "arn:xxxx"
default_action {
target_group_arn = aws_alb_target_group.target-group.arn
type = "forward"
}
lifecycle {
ignore_changes = [
default_action.0.target_group_arn,
]
}
}
resource "aws_alb_target_group" "target-group" {
name = "${var.namespace}-group"
port = 80
protocol = "HTTP"
vpc_id = var.vpc_id
target_type = "instance"
health_check {
healthy_threshold = var.health_check_healthy_threshold
unhealthy_threshold = var.health_check_unhealthy_threshold
timeout = var.health_check_timeout
interval = var.health_check_interval
path = var.path
}
tags = {
Name = var.namespace
}
lifecycle {
create_before_destroy = false
}
depends_on = [aws_lb.tableau-lb]
}
resource "aws_lb_target_group_attachment" "tableau-attachment" {
target_group_arn = aws_alb_target_group.target-group.arn
target_id = aws_instance.tableau.id
port = 80
}
The security group:
resource "aws_security_group" "tableau-sg" {
name_prefix = "${var.namespace}-sg"
tags = {
Name = var.namespace
}
vpc_id = var.vpc_id
# SSH access
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
# Tableau Services Manager (TSM) UI on port 8850
ingress {
from_port = 8850
to_port = 8850
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
# HTTP access from anywhere
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
# 443 secure access from anywhere
ingress {
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
# Outbound internet access
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
lifecycle {
create_before_destroy = true
}
}
I also set up a DNS hostname using:
resource "aws_route53_record" "tableau-record-dns" {
zone_id = var.route_53_zone_id
name = "example.hostname"
type = "A"
ttl = "300"
records = [aws_instance.tableau.public_ip]
}
resource "aws_route53_record" "tableau-record-dns-https" {
zone_id = var.route_53_zone_id
name = "asdf.example.hostname"
type = "CNAME"
ttl = "300"
records = ["asdf.acm-validations.aws."]
}
Finally I solved the issue: it was related to the A record. I was assigning an IP address there, and it is impossible to route traffic through the load balancer when the record points directly at a specific IP. I changed the record to route traffic to the ELB instead, and it worked fine.
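For reference, a minimal sketch of what the corrected Route 53 record could look like, reusing the resource names from the configuration above (with an alias record, the ttl and records arguments are dropped):

resource "aws_route53_record" "tableau-record-dns" {
  zone_id = var.route_53_zone_id
  name    = "example.hostname"
  type    = "A"

  # Alias the record to the ALB instead of hard-coding the instance's public IP,
  # so traffic reaches the load balancer's listeners rather than the instance directly.
  alias {
    name                   = aws_lb.tableau-lb.dns_name
    zone_id                = aws_lb.tableau-lb.zone_id
    evaluate_target_health = true
  }
}

With the hostname resolving to the ALB, the HTTPS listener on port 443 defined above terminates TLS and forwards to the instance on port 80.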
Related
I am currently learning Terraform and I need help with the code below. I want to create a simple architecture: an Auto Scaling group of EC2 instances behind an Application Load Balancer. The setup completes, but when I try to access the application endpoint, the request times out. When I tried to access the EC2 instances directly, I was unable to (because the instances were in a security group allowing access only from the ALB security group). I changed the instance security group's ingress rules, ran the user_data script manually, and then reverted the security group changes to complete my setup.
My question is: why is my setup not working with the code below? Is access being restricted by the load balancer security group, or is my launch configuration block incorrect?
data "aws_ami" "amazon-linux" {
most_recent = true
owners = ["amazon"]
filter {
name = "name"
values = ["amzn2-ami-kernel-5.10-hvm-2.0.20220426.0-x86_64-gp2"]
}
}
data "aws_availability_zones" "available" {
state = "available"
}
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "3.14.0"
name = "main-vpc"
cidr = "10.0.0.0/16"
azs = data.aws_availability_zones.available.names
public_subnets = ["10.0.4.0/24","10.0.5.0/24","10.0.6.0/24"]
enable_dns_hostnames = true
enable_dns_support = true
}
resource "aws_launch_configuration" "TestLC" {
name_prefix = "Lab-Instance-"
image_id = data.aws_ami.amazon-linux.id
instance_type = "t2.nano"
key_name = "CloudformationKeyPair"
user_data = file("./user_data.sh")
security_groups = [aws_security_group.TestInstanceSG.id]
lifecycle {
create_before_destroy = true
}
}
resource "aws_autoscaling_group" "TestASG" {
min_size = 1
max_size = 3
desired_capacity = 2
launch_configuration = aws_launch_configuration.TestLC.name
vpc_zone_identifier = module.vpc.public_subnets
}
resource "aws_lb_listener" "TestListener"{
load_balancer_arn = aws_lb.TestLB.arn
port = "80"
protocol = "HTTP"
default_action {
type = "forward"
target_group_arn = aws_lb_target_group.TestTG.arn
}
}
resource "aws_lb" "TestLB" {
name = "Lab-App-Load-Balancer"
internal = false
load_balancer_type = "application"
security_groups = [aws_security_group.TestLoadBalanceSG.id]
subnets = module.vpc.public_subnets
}
resource "aws_lb_target_group" "TestTG" {
name = "LabTargetGroup"
port = "80"
protocol = "HTTP"
vpc_id = module.vpc.vpc_id
}
resource "aws_autoscaling_attachment" "TestAutoScalingAttachment" {
autoscaling_group_name = aws_autoscaling_group.TestASG.id
lb_target_group_arn = aws_lb_target_group.TestTG.arn
}
resource "aws_security_group" "TestInstanceSG" {
name = "LAB-Instance-SecurityGroup"
ingress{
from_port = 80
to_port = 80
protocol = "tcp"
security_groups = [aws_security_group.TestLoadBalanceSG.id]
}
ingress{
from_port = 22
to_port = 22
protocol = "tcp"
security_groups = [aws_security_group.TestLoadBalanceSG.id]
}
egress{
from_port = 0
to_port = 0
protocol = "-1"
security_groups = [aws_security_group.TestLoadBalanceSG.id]
}
vpc_id = module.vpc.vpc_id
}
resource "aws_security_group" "TestLoadBalanceSG" {
name = "LAB-LoadBalancer-SecurityGroup"
ingress{
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress{
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress{
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
vpc_id = module.vpc.vpc_id
}
I have one external Network Load Balancer (listening on port 80) which forwards traffic to a ServiceA instance (on port 9000). I'd like to configure an internal Network Load Balancer that will receive requests from the ServiceA instances and forward them to the ServiceB instance. However, I have a problem configuring the internal NLB in Terraform. Here's what I have at the moment:
resource "aws_security_group" "allow-all-traffic-for-internal-nlb" {
name = "int-nlb"
description = "Allow inbound and outbound traffic for internal NLB"
vpc_id = "${aws_vpc.default.id}"
ingress {
from_port = 81
protocol = "tcp"
to_port = 81
cidr_blocks = ["10.61.110.0/24"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_lb" "serviceB_lb" {
name = "serviceB-internal-lb"
internal = true
load_balancer_type = "network"
subnets = ["${aws_subnet.sub.id}"]
}
resource "aws_lb_listener" "serviceB-internal-lb-listener" {
load_balancer_arn = "${aws_lb.serviceB_lb.arn}"
port = 81
protocol = "TCP"
default_action {
target_group_arn = "${aws_lb_target_group.serviceB-internal-lb-tg.arn}"
type = "forward"
}
}
#create a target group for the load balancer and set up a health check
resource "aws_lb_target_group" "serviceB-internal-lb-tg" {
name = "serviceB-int-lb-tg"
port = 81
protocol = "TCP"
vpc_id = "${aws_vpc.default.id}"
target_type = "instance"
health_check {
protocol = "HTTP"
port = "8181"
path = "/"
}
}
#attach a load balancer to the target group
resource "aws_lb_target_group_attachment" "attach-serviceB-tg-to-internal-nlb" {
target_group_arn = "${aws_lb_target_group.serviceB-internal-lb-tg.arn}"
port = 8181
target_id = "${aws_instance.serviceB-1a.id}"
}
# Create Security Groups
resource "aws_security_group_rule" "serviceB_from_serviceB-lb" {
type = "ingress"
from_port = 81
to_port = 81
protocol = "tcp"
source_security_group_id = "${aws_security_group.allow-all-traffic-for-internal-nlb.id}"
security_group_id = "${aws_security_group.serviceB-sg.id}"
}
resource "aws_security_group_rule" "serviceB_nlb_to_serviceB" {
type = "egress"
from_port = 81
to_port = 81
protocol = "tcp"
source_security_group_id = "${aws_security_group.serviceB-sg.id}"
security_group_id = "${aws_security_group.allow-all-traffic-for-internal-nlb.id}"
}
####
resource "aws_security_group" "serviceB-sg" {
name = "${var.environment}-serviceB-sg"
description = "${var.environment} serviceB security group"
vpc_id = "${aws_vpc.default.id}"
ingress {
from_port = 8181
to_port = 8181
protocol = "tcp"
cidr_blocks = ["10.61.110.0/24"]
}
}
The internal load balancer is listening on port 81, and the ServiceB instance is running on port 8181.
Both external and internal NLBs and two services are located in one subnet.
When I check the health status for the target group of the internal load balancer, I get a health check failure.
What can cause this to happen?
I have launched a private instance within an ASG.
In a public subnet I have created an Application Load Balancer to connect to the private instance.
Issue: the target group shows the health check as failed for the ASG instance.
Question: how do I fix this and make the health check pass?
Kindly support me in solving this issue. Because of this, a timeout occurs when the application is accessed from a browser.
**alb.tf**
resource "aws_lb" "ops_manager_app_lb" {
name = "ops-manager-app-lb"
internal = false
security_groups = [ aws_security_group.ops_lb_sg.id ]
subnets = [ var.PUB_SUBNET_NAT, var.PUB_SUBNET_2 ]
}
resource "aws_lb_target_group" "opsmanager_target_group_8080" {
depends_on = [ aws_lb.ops_manager_app_lb ]
name = "opsmanager-target-group-8080"
port = 8080
protocol = "HTTP"
vpc_id = var.AWS_VPC
health_check {
path = "/"
port = 8080
protocol = "HTTP"
healthy_threshold = 3
unhealthy_threshold = 3
matcher = "200-499"
}
}
resource "aws_lb_listener" "ops_alb_listener_8080" {
load_balancer_arn = aws_lb.ops_manager_app_lb.arn
port = "8080"
protocol = "HTTP"
#certificate_arn = "${var.elk_cert_arn}"
default_action {
target_group_arn = aws_lb_target_group.opsmanager_target_group_8080.arn
type = "forward"
}
}
**sg.tf**
resource "aws_security_group" "ops_lb_sg" {
name = "opsmanager_app_lb"
description = "Security Group for OpsManager ALB"
vpc_id = var.AWS_VPC
ingress {
from_port = 8080
to_port = 8080
protocol = "tcp"
cidr_blocks = [ var.VPC_CIDR ]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
### OpsManager Application Server Security Group ###
resource "aws_security_group" "application_opsmanager_sg" {
name = "application_opsmanager_sg"
description = "Security Group for OpsManager Application Instance"
vpc_id = var.AWS_VPC
ingress {
description = "TCP port for HTTP service"
from_port = 8080
to_port = 8080
protocol = "tcp"
security_groups = [ aws_security_group.ops_lb_sg.id ]
#cidr_blocks = [var.VPC_CIDR]
}
}
**main.tf**
resource "aws_launch_configuration" "lc_opsmanager" {
name = "ops_manager_launch"
image_id = var.AMIS
instance_type = var.INSTANCE_TYPE["OPS_APP"]
iam_instance_profile = data.aws_iam_instance_profile.application_instance_profile.name
key_name = var.KEY_NAME
security_groups = [data.aws_security_group.application_sg.id, aws_security_group.ops_lb_sg.id ]
}
resource "aws_autoscaling_group" "asg_opsmanager" {
name = "asg-ops-manager"
max_size = 2
min_size = 1
desired_capacity = 1
#availability_zones = [ data.aws_availability_zone.az_primary.name ]
vpc_zone_identifier = [var.PRIV_SUBNET_OPS]
health_check_type = "EC2"
health_check_grace_period = 300
launch_configuration = aws_launch_configuration.lc_opsmanager.id
target_group_arns = [ aws_lb_target_group.opsmanager_target_group_8080.arn ]
tag {
key = "Name"
value = "ops_manager_application"
propagate_at_launch = true
}
}
There can be many issues with your architecture, but one that is definitively responsible for blocking access to your ALB is an incorrect security group.
Namely, the ALB uses ops_lb_sg, which does not allow internet traffic; it only allows connections from var.VPC_CIDR. To allow connections from the internet it should be:
cidr_blocks = [ "0.0.0.0/0" ]
or the CIDR range of your home/work network.
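For context, a minimal sketch of the revised sg.tf block with that one change applied (assuming the listener should remain on port 8080):

resource "aws_security_group" "ops_lb_sg" {
  name        = "opsmanager_app_lb"
  description = "Security Group for OpsManager ALB"
  vpc_id      = var.AWS_VPC

  # Allow the ALB listener port from the internet rather than only from var.VPC_CIDR
  ingress {
    from_port   = 8080
    to_port     = 8080
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}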
I am running a Spring Cloud Config server on AWS, which is just a Docker container running a Spring Boot app. It reads properties from a git repo. Our client applications read config from the server on startup and intermittently at runtime. About a third of the time, the client apps time out when pulling config at startup, causing the app to crash. At runtime, the apps seem to succeed about 4 out of 5 times, and they just use existing config if a request fails.
I am using an EC2 instance behind an ALB which handles SSL termination. I was originally using a t3.micro, but upgraded to an m5.large, guessing that the t3 class might not support continuous availability.
The ALB requires two subnets, so I created a second one with nothing in it initially. I am unsure whether the ALB will attempt to route to the second subnet at some point, which could be causing the failures. The target group is using a health check which returns correctly, but I don't know enough about ALBs to rule out round-robining to an empty subnet. I attempted to create a second EC2 instance to parallel my first config server in the second subnet; however, I was unable to SSH into the second instance even though it uses the same security group and config as the first. I'm not sure why that failed, but I'm guessing there is something else wrong with my setup.
All infrastructure was deployed with Terraform, which I have included below.
resources.tf
provider "aws" {
region = "us-east-2"
version = ">= 2.38.0"
}
data "aws_ami" "amzn_linux" {
most_recent = true
filter {
name = "name"
values = ["amzn2-ami-hvm-2.0.*-x86_64-gp2"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
owners = ["137112412989"]
}
resource "aws_vpc" "config-vpc" {
cidr_block = "10.0.0.0/16"
enable_dns_hostnames = true
enable_dns_support = true
}
resource "aws_security_group" "config_sg" {
name = "config-sg"
description = "http, https, and ssh"
vpc_id = aws_vpc.config-vpc.id
ingress {
from_port = 9000
to_port = 9000
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 8080
to_port = 8080
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_subnet" "subnet-alpha" {
cidr_block = cidrsubnet(aws_vpc.config-vpc.cidr_block, 3, 1)
vpc_id = aws_vpc.config-vpc.id
availability_zone = "us-east-2a"
}
resource "aws_subnet" "subnet-beta" {
cidr_block = cidrsubnet(aws_vpc.config-vpc.cidr_block, 3, 2)
vpc_id = aws_vpc.config-vpc.id
availability_zone = "us-east-2b"
}
resource "aws_internet_gateway" "config-vpc-ig" {
vpc_id = aws_vpc.config-vpc.id
}
resource "aws_route_table" "config-vpc-rt" {
vpc_id = aws_vpc.config-vpc.id
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.config-vpc-ig.id
}
}
resource "aws_route_table_association" "subnet-association-alpha" {
subnet_id = aws_subnet.subnet-alpha.id
route_table_id = aws_route_table.config-vpc-rt.id
}
resource "aws_route_table_association" "subnet-association-beta" {
subnet_id = aws_subnet.subnet-beta.id
route_table_id = aws_route_table.config-vpc-rt.id
}
resource "aws_alb" "alb" {
name = "config-alb"
subnets = [aws_subnet.subnet-alpha.id, aws_subnet.subnet-beta.id]
security_groups = [aws_security_group.config_sg.id]
}
resource "aws_alb_target_group" "alb_target_group" {
name = "config-tg"
port = 9000
protocol = "HTTP"
vpc_id = aws_vpc.config-vpc.id
health_check {
enabled = true
path = "/actuator/health"
port = 9000
protocol = "HTTP"
}
}
resource "aws_instance" "config_server_alpha" {
ami = data.aws_ami.amzn_linux.id
instance_type = "m5.large"
vpc_security_group_ids = [aws_security_group.config_sg.id]
key_name = "config-ssh"
subnet_id = aws_subnet.subnet-alpha.id
associate_public_ip_address = true
}
resource "aws_instance" "config_server_beta" {
ami = data.aws_ami.amzn_linux.id
instance_type = "m5.large"
vpc_security_group_ids = [aws_security_group.config_sg.id]
key_name = "config-ssh"
subnet_id = aws_subnet.subnet-beta.id
associate_public_ip_address = true
}
resource "aws_alb_target_group_attachment" "config-target-alpha" {
target_group_arn = aws_alb_target_group.alb_target_group.arn
target_id = aws_instance.config_server_alpha.id
port = 9000
}
resource "aws_alb_target_group_attachment" "config-target-beta" {
target_group_arn = aws_alb_target_group.alb_target_group.arn
target_id = aws_instance.config_server_beta.id
port = 9000
}
resource "aws_alb_listener" "alb_listener_80" {
load_balancer_arn = aws_alb.alb.arn
port = 80
default_action {
type = "redirect"
redirect {
port = 443
protocol = "HTTPS"
status_code = "HTTP_301"
}
}
}
resource "aws_alb_listener" "alb_listener_8080" {
load_balancer_arn = aws_alb.alb.arn
port = 8080
default_action {
type = "redirect"
redirect {
port = 443
protocol = "HTTPS"
status_code = "HTTP_301"
}
}
}
resource "aws_alb_listener" "alb_listener_https" {
load_balancer_arn = aws_alb.alb.arn
port = 443
protocol = "HTTPS"
ssl_policy = "ELBSecurityPolicy-2016-08"
certificate_arn = "arn:..."
default_action {
target_group_arn = aws_alb_target_group.alb_target_group.arn
type = "forward"
}
}
config server
@SpringBootApplication
@EnableConfigServer
public class ConfigserverApplication {
    public static void main(String[] args) {
        SpringApplication.run(ConfigserverApplication.class, args);
    }
}
application.yml
spring:
  profiles:
    active: local
---
spring:
  profiles: local, default, cloud
  cloud:
    config:
      server:
        git:
          uri: ...
          searchPaths: '{application}/{profile}'
          username: ...
          password: ...
security:
  user:
    name: admin
    password: ...
server:
  port: 9000
management:
  endpoint:
    health:
      show-details: always
  info:
    git:
      mode: FULL
bootstrap.yml
spring:
  application:
    name: config-server
encrypt:
  key: |
    -----BEGIN RSA PRIVATE KEY-----
    ...
    -----END RSA PRIVATE KEY-----
I have a Terraform script that works well for a type A DNS record. When I execute this:
data "aws_acm_certificate" "this" {
domain = "*.${var.CERTIFICATE_DOMAIN}"
}
resource "aws_security_group" "this" {
name = "${var.SERVICE}-${var.ENV}-${var.REGION}-allow_all"
description = "Allow all inbound traffic"
vpc_id = "${data.aws_vpc.this.id}"
ingress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
data "aws_subnet_ids" "this" {
vpc_id = "${data.aws_vpc.this.id}"
tags {
Service = "external"
}
}
data "aws_security_groups" "ecs" {
tags {
Environment = "${var.VPC_ENV}"
Region = "${var.REGION}"
}
filter {
name = "vpc-id"
values = ["${data.aws_vpc.this.id}"]
}
filter {
name = "group-name"
values = ["${var.ENV}-api-internal-ecs-host*-sg"]
}
}
resource "aws_security_group_rule" "lb2ecs" {
from_port = 32768
to_port = 65535
protocol = "tcp"
security_group_id = "${data.aws_security_groups.ecs.ids[0]}"
source_security_group_id = "${aws_security_group.this.id}"
type = "ingress"
}
resource "aws_alb" "https" {
name = "${var.SERVICE}-${var.ENV}-alb"
internal = false
load_balancer_type = "application"
security_groups = ["${aws_security_group.this.id}"]
subnets = ["${data.aws_subnet_ids.this.ids}"]
}
data "aws_route53_zone" "this" {
name = "${var.CERTIFICATE_DOMAIN}."
private_zone = false
}
resource "aws_route53_record" "www" {
zone_id = "${data.aws_route53_zone.this.zone_id}"
name = "${var.SERVICE}-${var.ENV}-${var.REGION}.${var.CERTIFICATE_DOMAIN}"
type = "A"
alias {
name = "${aws_alb.https.dns_name}"
zone_id = "${aws_alb.https.zone_id}"
evaluate_target_health = true
}
}
resource "aws_alb_target_group" "https" {
name = "${var.SERVICE}-${var.ENV}-https"
port = 3000
protocol = "HTTP"
vpc_id = "${data.aws_vpc.this.id}"
health_check {
path = "/health"
}
}
resource "aws_alb_listener" "https" {
load_balancer_arn = "${aws_alb.https.arn}"
port = "443"
protocol = "HTTPS"
ssl_policy = "ELBSecurityPolicy-2015-05"
certificate_arn = "${data.aws_acm_certificate.this.arn}"
default_action {
type = "forward"
target_group_arn = "${aws_alb_target_group.https.arn}"
}
}
It properly creates the new HTTPS endpoint and I can easily put a service behind it (by linking aws_alb_target_group.https with the ECS service).
I need to add IPv6 support, so I was thinking: what about just changing the A type to AAAA in resource "aws_route53_record" "www"? Terraform applied fine and reported the change, and in Route 53 the record looks exactly the same as before except that it has type AAAA, but the service is no longer reachable.
In Route 53, I can see there is an ALIAS that looks like this: someservice-test-alb-1395527311.eu-central-1.elb.amazonaws.com. I can reach the service through it over HTTPS from the public internet. However, the "nice" endpoint that was working before doesn't work anymore, and pinging the URL no longer returns any IP.
Am I missing something?
For AAAA records you need to first enable IPv6 support on your VPC; by default it is not enabled.
Once that is done, you can follow the guide in the blog post below to enable IPv6 with Terraform.
https://medium.com/@mattias.holmlund/setting-up-ipv6-on-amazon-with-terraform-e14b3bfef577
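As a rough sketch of the pieces involved (resource names other than those from the question are illustrative, and an IPv6 route such as ::/0 to the internet gateway is also needed): give the VPC and its subnets IPv6 CIDR blocks, make the ALB dualstack, and then add the AAAA alias record (typically kept alongside the existing A record):

resource "aws_vpc" "this" {
  cidr_block                       = "10.0.0.0/16"
  assign_generated_ipv6_cidr_block = true            # request an Amazon-provided /56
}

resource "aws_subnet" "public" {
  vpc_id          = "${aws_vpc.this.id}"
  cidr_block      = "${cidrsubnet(aws_vpc.this.cidr_block, 8, 1)}"
  ipv6_cidr_block = "${cidrsubnet(aws_vpc.this.ipv6_cidr_block, 8, 1)}"   # carve out a /64
}

resource "aws_alb" "https" {
  name               = "${var.SERVICE}-${var.ENV}-alb"
  internal           = false
  load_balancer_type = "application"
  security_groups    = ["${aws_security_group.this.id}"]
  subnets            = ["${data.aws_subnet_ids.this.ids}"]
  ip_address_type    = "dualstack"                    # required before an AAAA alias will resolve
}

resource "aws_route53_record" "www_ipv6" {
  zone_id = "${data.aws_route53_zone.this.zone_id}"
  name    = "${var.SERVICE}-${var.ENV}-${var.REGION}.${var.CERTIFICATE_DOMAIN}"
  type    = "AAAA"

  alias {
    name                   = "${aws_alb.https.dns_name}"
    zone_id                = "${aws_alb.https.zone_id}"
    evaluate_target_health = true
  }
}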