Get endpoint for Terraform with aws_elasticache_replication_group - amazon-web-services

I have what I think is a simple Terraform config for AWS ElastiCache with Redis:
resource "aws_elasticache_replication_group" "my_replication_group" {
replication_group_id = "my-rep-group",
replication_group_description = "eln00b"
node_type = "cache.m4.large"
port = 6379
parameter_group_name = "default.redis5.0.cluster.on"
snapshot_retention_limit = 1
snapshot_window = "00:00-05:00"
subnet_group_name = "${aws_elasticache_subnet_group.my_subnet_group.name}"
automatic_failover_enabled = true
cluster_mode {
num_node_groups = 1
replicas_per_node_group = 1
}
}
I tried to define the endpoint output using:
output "my_cache" {
value = "${aws_elasticache_replication_group.my_replication_group.primary_endpoint_address}"
}
When I run an apply through terragrunt I get:
Error: Error running plan: 1 error(s) occurred:
module.mod.output.my_cache: Resource 'aws_elasticache_replication_group.my_replication_group' does not have attribute 'primary_endpoint_address' for variable 'aws_elasticache_replication_group.my_replication_group.primary_endpoint_address'
What am I doing wrong here?

The primary_endpoint_address attribute is only available for Redis replication groups with cluster mode disabled, as mentioned in the docs:
primary_endpoint_address - (Redis only) The address of the endpoint for the primary node in the replication group, if the cluster mode is disabled.
When using cluster mode you should use configuration_endpoint_address instead to connect to the Redis cluster.
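For example, with cluster mode enabled the output from the question would look something like this (a minimal sketch based on the configuration above):
output "my_cache" {
  # configuration_endpoint_address is populated when cluster mode is enabled
  value = "${aws_elasticache_replication_group.my_replication_group.configuration_endpoint_address}"
}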

Related

Dynamically run terraform block based on input variable value

The Problem:
AWS doesn't support enhanced monitoring for the t3.small instances we use for smaller RDS deployments, but it does support it on larger RDS instance sizes. We want to disable it in Terraform when the instance class is t3.
Looking at the Terraform resource docs (https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/db_instance), it seems you simply don't specify the interval and role when you don't want to enable enhanced monitoring.
I'm trying to dynamically execute one resource block or the other based on what the monitoring interval is set to: when it's set to 0, run the block without monitoring_role_arn, and when it's set to anything other than 0, run the block where it is set.
However, I'm getting an error:
╷
│ Error: Missing newline after argument
│
│ On main.tf line 68: An argument definition must end with a newline.
╵
Error: Terraform exited with code 1.
Error: Process completed with exit code 1.
I was following this Stack Overflow post: How to conditional create resource in Terraform based on a string variable
but it doesn't seem to work, as shown by the error above.
Here is my terraform code:
resource "aws_rds_cluster_instance" "cluster_instances" {
count = var.monitoring_interval != "0" ? var.cluster_instance_count : 0
identifier = "${var.service}-${var.environment}-${count.index}"
cluster_identifier = aws_rds_cluster.default.id
instance_class = var.instance_class
engine = aws_rds_cluster.default.engine
monitoring_role_arn = var.monitoring_role
engine_version = aws_rds_cluster.default.engine_version
monitoring_interval = var.monitoring_interval
db_parameter_group_name = var.regional_instance_param_group_name
copy_tags_to_snapshot = true
publicly_accessible = false
db_subnet_group_name = var.regional_subnet_group_name
}
resource "aws_rds_cluster_instance" "cluster_instances" {
count = var.monitoring_interval = "0" ? var.cluster_instance_count : 0
identifier = "${var.service}-${var.environment}-${count.index}"
cluster_identifier = aws_rds_cluster.default.id
instance_class = var.instance_class
engine = aws_rds_cluster.default.engine
engine_version = aws_rds_cluster.default.engine_version
db_parameter_group_name = var.regional_instance_param_group_name
copy_tags_to_snapshot = true
publicly_accessible = false
db_subnet_group_name = var.regional_subnet_group_name
}
Thanks for your help. It's probably something small I'm missing or have misunderstood about Terraform conditionals.
You're missing an = in your condition. Change it to this:
var.monitoring_interval == "0" ? var.cluster_instance_count : 0
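With that fixed, a minimal sketch of how the two blocks might read (the second resource label, cluster_instances_no_monitoring, is my assumption; Terraform also requires the two resources to have distinct labels, which the snippet in the question does not):
# Instances with enhanced monitoring enabled.
resource "aws_rds_cluster_instance" "cluster_instances" {
  count               = var.monitoring_interval != "0" ? var.cluster_instance_count : 0
  monitoring_role_arn = var.monitoring_role
  monitoring_interval = var.monitoring_interval
  # ... remaining arguments as in the question ...
}

# Instances without enhanced monitoring; distinct label and a == comparison.
resource "aws_rds_cluster_instance" "cluster_instances_no_monitoring" {
  count = var.monitoring_interval == "0" ? var.cluster_instance_count : 0
  # ... remaining arguments as in the question, minus the monitoring settings ...
}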

Can I create a Redshift cluster in Terraform AND add an additional user to it?

I am trying to get Terraform set up so that I can have an array of cluster parameters and then do a for_each in a Redshift module to create them all, like so:
# Opening module line restored for readability; the label "redshift" is assumed,
# since the original snippet starts at the for_each argument.
module "redshift" {
  for_each = local.env[var.tier][var.region].clusters
  source   = "terraform-aws-modules/redshift/aws"

  cluster_identifier     = "${each.value.name}"
  allow_version_upgrade  = true
  node_type              = "dc2.large"
  number_of_nodes        = 2
  database_name          = "${each.value.database}"
  master_username        = "${each.value.admin_user}"
  create_random_password = false
  master_password        = "${each.value.admin_password}"
  encrypted              = true
  kms_key_arn            = xxxxx
  enhanced_vpc_routing   = false
  vpc_security_group_ids = xxxxxx
  subnet_ids             = xxxxxx
  publicly_accessible    = true
  iam_role_arns          = xxxxxx

  # Parameter group
  parameter_group_name = xxxxxx

  # Subnet group
  create_subnet_group = false
  subnet_group_name   = xxxxxx

  # Maintenance
  preferred_maintenance_window = "sat:01:00-sat:01:30"

  # Backup Details
  automated_snapshot_retention_period = 30
  manual_snapshot_retention_period    = -1
}
But I also want to add an additional user, aside from the admin user, to each of these clusters. I am struggling to find a way to do this in Terraform. Any advice would be appreciated! Thanks!
There are two ways to do this:
You can use the community Terraform Redshift provider, which allows you to create a redshift_user resource.
You can use a local-exec provisioner to invoke JDBC, Python, or ODBC tooling that creates your user with SQL commands.
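For the first option, a minimal sketch assuming the community brainly/redshift provider (the provider source, connection attributes, and placeholder credentials below are assumptions and should be checked against that provider's documentation):
terraform {
  required_providers {
    redshift = {
      source = "brainly/redshift" # assumed community provider source
    }
  }
}

# Connection details are placeholders; point them at the cluster created above.
provider "redshift" {
  host     = "my-cluster.xxxxxx.us-east-1.redshift.amazonaws.com"
  username = "admin_user"
  password = "admin_password"
  database = "mydb"
}

# Additional (non-admin) database user managed alongside the cluster.
resource "redshift_user" "app_user" {
  name     = "app_user"
  password = "SomePassword1"
}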

Unable to create new EKS with terraform

I'm having problems creating a new EKS cluster, version 1.22, in a dev environment.
I'm using the module from the Terraform Registry, trimming some parts since it's only for testing purposes (we just want to test version 1.22).
I'm using a VPC that was created for testing EKS clusters, with 2 public subnets and 2 private subnets.
This is my main.tf:
module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "18.21.0"
cluster_name = "EKSv2-update-test"
cluster_version = "1.22"
cluster_endpoint_private_access = true
cluster_endpoint_public_access = true
cluster_addons = {
coredns = {
resolve_conflicts = "OVERWRITE"
}
kube-proxy = {}
vpc-cni = {
resolve_conflicts = "OVERWRITE"
}
}
vpc_id = "vpc-xxx" # eks-vpc
subnet_ids = ["subnet-priv-1-xxx", "subnet-priv-2-xxx", "subnet-pub-1-xxx", "subnet-pub-2-xxx"]
}
Terraform apply times out after 20 min (it just hangs on module.eks.aws_eks_addon.this["coredns"]: Still creating... [20m0s elapsed])
and this is the error:
│ Error: unexpected EKS Add-On (EKSv2-update-test:coredns) state returned during creation: timeout while waiting for state to become 'ACTIVE' (last state: 'DEGRADED', timeout: 20m0s)
│ [WARNING] Running terraform apply again will remove the kubernetes add-on and attempt to create it again effectively purging previous add-on configuration
│
│ with module.eks.aws_eks_addon.this["coredns"],
│ on .terraform/modules/eks/main.tf line 305, in resource "aws_eks_addon" "this":
│ 305: resource "aws_eks_addon" "this" {
The EKS cluster gets created, but this is clearly not the way to go.
Regarding coredns, what am I missing?
Thanks
A minimum of two cluster nodes is required for the coredns add-on to satisfy its replica set. The module configuration above creates only the control plane, with no node groups, so the add-on has no nodes to schedule its pods on and stays DEGRADED until worker nodes join the cluster.
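For example (a sketch, not from the original answer; the node group name and instance type are assumptions), adding a managed node group with at least two nodes to the same module block gives coredns somewhere to run:
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "18.21.0"

  # ... settings from the question ...

  # At least two nodes so the default coredns replica set can be scheduled.
  eks_managed_node_groups = {
    default = {
      min_size       = 2
      max_size       = 3
      desired_size   = 2
      instance_types = ["t3.medium"]
    }
  }
}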

Can't start a self managed node group through Terraform

I have been trying to deploy a self-managed node group through Terraform for days now. Deploying a non-self-managed one works right out of the box; however, I have the following issue with the self-managed one. This is what my code looks like:
self_managed_node_groups = {
  self_mg_4 = {
    node_group_name        = "self-managed-ondemand"
    subnet_ids             = module.aws_vpc.private_subnets
    create_launch_template = true
    launch_template_os     = "amazonlinux2eks"
    custom_ami_id          = "xxx"
    public_ip              = false

    pre_userdata = <<-EOT
      yum install -y amazon-ssm-agent \
      systemctl enable amazon-ssm-agent && systemctl start amazon-ssm-agent \
    EOT

    disk_size     = 5
    instance_type = "t2.small"
    desired_size  = 1
    max_size      = 5
    min_size      = 1
    capacity_type = ""

    k8s_labels = {
      Environment = "dev-test"
      Zone        = ""
      WorkerType  = "SELF_MANAGED_ON_DEMAND"
    }

    additional_tags = {
      ExtraTag    = "t2x-on-demand"
      Name        = "t2x-on-demand"
      subnet_type = "private"
    }

    create_worker_security_group = false
  }
}
This is the module I use: github.com/aws-samples/aws-eks-accelerator-for-terraform
And this is what Terraform throws after 10 mins:
Error: "Cluster": Waiting up to 10m0s: Need at least 1 healthy instances in ASG, have 0.
Cause: "At 2022-02-10T16:46:14Z an instance was started in response to a difference between desired and actual capacity, increasing the capacity from 0 to 1.", Description: "Launching a new EC2 instance. Status Reason: The requested configuration is currently not supported. Please check the documentation for supported configurations. Launching EC2 instance failed.", StatusCode: "Failed"
Full code:
https://pastebin.com/mtVGC8PP
The solution was actually changing my t2.small to t3.small. It turns out my AZs didn't support t2 instances.
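As an aside (not part of the original answer), the AWS provider's aws_ec2_instance_type_offerings data source can be used to check which Availability Zones offer a given instance type before picking subnets; for example:
# Lists the Availability Zones in the current region that offer t2.small.
data "aws_ec2_instance_type_offerings" "t2_small" {
  location_type = "availability-zone"

  filter {
    name   = "instance-type"
    values = ["t2.small"]
  }
}

output "t2_small_azs" {
  value = data.aws_ec2_instance_type_offerings.t2_small.locations
}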

Fail to use terraform provisioner with aws lightsail

I am having trouble using the provisioners (both "file" and "remote-exec") with AWS Lightsail. With the "file" provisioner I keep getting a dial error to port 22 with connection refused, and the "remote-exec" provisioner gives me a timeout error. I can see it keeps trying to connect to the instance, but it just cannot connect.
For the file provisioner, I have also tried copying the file with scp directly and that works just fine.
A sample snippet of the connection block I am using is as follows:
resource "aws_lightsail_instance" "han-mongo" {
name = "han-mongo"
availability_zone = "us-east-1b"
blueprint_id = "ubuntu_16_04"
bundle_id = "nano_1_0"
key_pair_name = "my_key_pair"
user_data = "${file("userdata.sh")}"
provisioner "file" {
source = "file.service"
destination = "/home/ubuntu"
connection {
type = "ssh"
private_key = "${file("my_key.pem")}"
user = "ubuntu"
timeout = "20s"
}
}
}
In addition to the authentication information, it's also necessary to tell Terraform which IP address it should use to connect, like this:
connection {
  type        = "ssh"
  host        = "${self.public_ip_address}"
  private_key = "${file("my_key.pem")}"
  user        = "ubuntu"
  timeout     = "20s"
}
For some resources Terraform is able to automatically infer some of the connection details from the resource attributes, but at present that is not supported for Lightsail instances and so it's necessary to specify the host argument explicitly.