Terraform Error: "replication_group_id": conflicts with engine_version (redis) - amazon-web-services

I'm trying to create an aws_elasticache_replication_group using Redis:
resource "aws_elasticache_cluster" "encryption-at-rest" {
count = 1
cluster_id = "${var.namespace}-${var.environment}-encryption-at-rest"
engine = "redis"
engine_version = var.engine_version
node_type = var.node_type
num_cache_nodes = 1
port = var.redis_port
#az_mode = var.az_mode
replication_group_id = aws_elasticache_replication_group.elasticache_replication_group.id
security_group_ids = [aws_security_group.redis_security_group.id]
subnet_group_name = aws_elasticache_subnet_group.default.name
apply_immediately = true
tags = {
Name = "${var.namespace}-${var.environment}-redis"
}
}
resource "aws_elasticache_replication_group" "elasticache_replication_group" {
automatic_failover_enabled = false //var.sharding_automatic_failover_enabled
availability_zones = ["ap-southeast-1a"] //data.terraform_remote_state.network.outputs.availability_zones
replication_group_id = "${var.namespace}-${var.environment}-encryption-at-rest"
replication_group_description = "${var.namespace} ${var.environment} replication group"
security_group_ids = [aws_security_group.redis_security_group.id]
subnet_group_name = aws_elasticache_subnet_group.default.name
node_type = var.node_type
number_cache_clusters = 1 //2
parameter_group_name = aws_elasticache_parameter_group.param_group.name
port = var.redis_port
at_rest_encryption_enabled = true
kms_key_id = data.aws_kms_alias.kms_redis.target_key_arn
apply_immediately = true
}
resource "aws_elasticache_parameter_group" "param_group" {
name = "${var.namespace}-${var.environment}-params"
family = "redis5.0"
}
But I get the following error:
aws_security_group_rule.redis_ingress[0]: Refreshing state... [id=sgrule-3474516270]
aws_security_group_rule.redis_ingress[1]: Refreshing state... [id=sgrule-2582511137]
aws_elasticache_replication_group.elasticache_replication_group: Refreshing state... [id=cbpl-uat-encryption-at-rest]
Error: "replication_group_id": conflicts with engine_version
on redis.tf line 1, in resource "aws_elasticache_cluster" "encryption-at-rest":
1: resource "aws_elasticache_cluster" "encryption-at-rest" {
Releasing state lock. This may take a few moments...

The aws_elasticache_cluster resource docs say this:
replication_group_id - (Optional) The ID of the replication group to
which this cluster should belong. If this parameter is specified, the
cluster is added to the specified replication group as a read replica;
otherwise, the cluster is a standalone primary that is not part of any
replication group.
engine – (Required unless replication_group_id is provided) Name
of the cache engine to be used for this cache cluster. Valid values
for this parameter are memcached or redis
If you're going to join the cluster to a replication group, then its engine must match the replication group's engine type, so it shouldn't be set on the aws_elasticache_cluster.

The AWS provider overloads the aws_elasticache_cluster structure to handle multiple dissimilar configurations. The internal logic contains a set of 'ConflictsWith' validations based on the premise that certain arguments simply cannot be specified together because they represent different modes of ElastiCache clusters (or nodes).
If you are specifying a replication_group_id then the value of engine_version will be managed by the corresponding aws_elasticache_replication_group.
Therefore, the solution is simply to remove the engine_version argument from your aws_elasticache_cluster resource specification. If you so choose (or in cases where it is required), you can also add that argument to the aws_elasticache_replication_group.
Example: Redis Cluster Mode Disabled Read Replica Instance
// These inherit their settings from the replication group.
resource "aws_elasticache_cluster" "replica" {
  cluster_id           = "cluster-example"
  replication_group_id = aws_elasticache_replication_group.example.id
}
In this mode, the aws_elasticache_cluster structure requires very few arguments.
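Applied to the configuration from the question, that means something like the following sketch (only the arguments relevant to the error are shown; the replica's cluster_id here is illustrative):

# engine_version lives on the replication group (it is optional there)...
resource "aws_elasticache_replication_group" "elasticache_replication_group" {
  replication_group_id          = "${var.namespace}-${var.environment}-encryption-at-rest"
  replication_group_description = "${var.namespace} ${var.environment} replication group"
  engine_version                = var.engine_version
  # ... node_type, port, security groups, encryption settings as in the question ...
}

# ...and is removed from the cluster that joins the group.
resource "aws_elasticache_cluster" "encryption-at-rest" {
  cluster_id           = "${var.namespace}-${var.environment}-encryption-at-rest-replica"
  replication_group_id = aws_elasticache_replication_group.elasticache_replication_group.id
}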

Related

How to recreate aws_rds_cluster in Terraform

I am trying to create an encrypted version of my currently existing unencrypted aws_rds_cluster by updating my resource. I added:
kms_key_id = "mykmskey"
storage_encrypted = true
This is what my resource should look like:
resource "aws_rds_cluster" "my_rds_cluster" {
cluster_identifier = "${var.service_name}-rds-cluster"
database_name = var.db_name
master_username = var.db_username
master_password = random_password.db_password.result
engine = var.db_engine
engine_version = var.db_engine_version
kms_key_id = "mykmskey"
storage_encrypted = true
db_subnet_group_name = aws_db_subnet_group.fleet_service_db_subnet_group.name
vpc_security_group_ids = [aws_security_group.fleet_service_service_db_security_group.id]
skip_final_snapshot = true
backup_retention_period = var.environment != "prod" ? null : 7
# snapshot_identifier = "my-rds-instance-snapshot"
tags = { Name = "${var.service_name}-rds-cluster" }
}
The problem is that the original resource had deletion_protection = true defined. I removed it, but even so the original cluster cannot be deleted by any means to make room for the new one, neither through changes in Terraform nor manually in the AWS console; it just throws an error like:
error creating RDS cluster: DBClusterAlreadyExistsFault: DB Cluster already exists
Any ideas what to do in such cases?
To do that purely through Terraform, you would have to:
1. Remove deletion protection from the original Terraform resource.
2. Run terraform apply, which will remove deletion protection from the actual resource in AWS.
3. Make the modifications to the Terraform resource that will result in a delete or replace of the current resource.
4. Run terraform apply again, during which Terraform will now delete and/or replace the resource.
The key thing here is that you can't remove deletion protection at the same time you are actually deleting a resource, because Terraform isn't going to update an existing resource to modify an attribute before attempting to delete it.
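In terms of the resource from the question, the two applies might look roughly like this (a sketch; all other arguments stay exactly as they were):

# First apply: only turn deletion protection off.
resource "aws_rds_cluster" "my_rds_cluster" {
  cluster_identifier  = "${var.service_name}-rds-cluster"
  deletion_protection = false
  # ... all other existing arguments unchanged ...
}

# Second apply, as a separate change: add the arguments that force replacement
# (storage_encrypted = true and kms_key_id), then run terraform apply again so
# Terraform can delete and recreate the now-unprotected cluster.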

Dynamically add resources in Terraform

I set up a Jenkins pipeline that launches Terraform to create a new EC2 instance in our VPC and register it in our private hosted zone on Route 53 (which is created at the same time) at every run.
I also managed to save the state to S3 so it doesn't fail when the hosted zone is re-created.
The main issue I have is that at every run Terraform keeps replacing the previous instance with the new one instead of adding it to the pool of instances.
How can I avoid this?
Here's a snippet of my code:
terraform {
  backend "s3" {
    bucket = "<redacted>"
    key    = "<redacted>/terraform.tfstate"
    region = "eu-west-1"
  }
}

provider "aws" {
  region = "${var.region}"
}

data "aws_ami" "image" {
  # limit search criteria for performance
  most_recent = "${var.ami_filter_most_recent}"
  name_regex  = "${var.ami_filter_name_regex}"
  owners      = ["${var.ami_filter_name_owners}"]

  # filter on tag purpose
  filter {
    name   = "tag:purpose"
    values = ["${var.ami_filter_purpose}"]
  }

  # filter on tag os
  filter {
    name   = "tag:os"
    values = ["${var.ami_filter_os}"]
  }
}

resource "aws_instance" "server" {
  # use extracted ami from image data source
  ami                    = data.aws_ami.image.id
  availability_zone      = data.aws_subnet.most_available.availability_zone
  subnet_id              = data.aws_subnet.most_available.id
  instance_type          = "${var.instance_type}"
  vpc_security_group_ids = ["${var.security_group}"]
  user_data              = "${var.user_data}"
  iam_instance_profile   = "${var.iam_instance_profile}"

  root_block_device {
    volume_size = "${var.root_disk_size}"
  }

  ebs_block_device {
    device_name = "${var.extra_disk_device_name}"
    volume_size = "${var.extra_disk_size}"
  }

  tags = {
    Name = "${local.available_name}"
  }
}

resource "aws_route53_zone" "private" {
  name = var.hosted_zone_name

  vpc {
    vpc_id = var.vpc_id
  }
}

resource "aws_route53_record" "record" {
  zone_id = aws_route53_zone.private.zone_id
  name    = "${local.available_name}.${var.hosted_zone_name}"
  type    = "A"
  ttl     = "300"
  records = [aws_instance.server.private_ip]

  depends_on = [
    aws_route53_zone.private
  ]
}
The outcome is that my previously created instance is destroyed and a new one is created. What I want is to keep adding instances with this code.
Thank you.
Your code declares only one instance, aws_instance.server, and because your backend is in S3 it acts as a single global state for every pipeline run, so any change to that instance's properties will only modify that one instance. The same goes for aws_route53_record.record and anything else in your script.
If you want different pipelines to reuse the same exact script, you should either use different workspaces, or create different TF states for each pipeline. The other alternative is to redefine your TF script to take a map of instances as an input variable and use for_each to create different instances.
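A minimal sketch of that for_each approach (the servers variable and its contents are illustrative, not part of the original script):

variable "servers" {
  # e.g. { "web-1" = "t3.micro", "web-2" = "t3.small" }
  type    = map(string)
  default = {}
}

resource "aws_instance" "server" {
  for_each      = var.servers

  ami           = data.aws_ami.image.id
  instance_type = each.value # one instance per map entry, keyed by each.key
  # ... subnet, security groups, user_data, etc. as in the original resource ...

  tags = {
    Name = each.key
  }
}

resource "aws_route53_record" "record" {
  for_each = aws_instance.server # one record per instance

  zone_id = aws_route53_zone.private.zone_id
  name    = "${each.key}.${var.hosted_zone_name}"
  type    = "A"
  ttl     = "300"
  records = [each.value.private_ip]
}

Adding a new key to the map then creates an additional instance and record instead of replacing the existing one.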
If those instances should be the same, you should manage their count using an aws_autoscaling_group and its desired capacity.

Terraform two PostgreSQL databases setup

I am very new to Terraform scripting.
Our system is running in AWS and we have a single database server instance accessed by multiple microservices.
Each microservice that needs to persist some data needs to point to a different database (schema) on the same database server. We prefer each service to have its own schema so that the services are totally decoupled from each other. However, creating a separate database instance to achieve this would be a bit too much, as some services persist close to nothing, so it would be a waste.
I created the PostgreSQL resource in a services.tf script that is common to all microservices:
resource "aws_db_instance" "my-system" {
identifier_prefix = "${var.resource_name_prefix}-tlm-"
engine = "postgres"
allocated_storage = "${var.database_storage_size}"
storage_type = "${var.database_storage_type}"
storage_encrypted = true
skip_final_snapshot = true
instance_class = "${var.database_instance_type}"
availability_zone = "${data.aws_availability_zones.all.names[0]}"
db_subnet_group_name = "${aws_db_subnet_group.default.name}"
vpc_security_group_ids = "${var.security_group_ids}"
backup_retention_period = "${var.database_retention_period}"
backup_window = "15:00-18:00" // UTC
maintenance_window = "sat:19:00-sat:20:00" // UTC
tags = "${var.tags}"
}
And now for my service-1 and service-2 I want to be able to create the corresponding database names. I don't think the below is correct; I am just adding it to give you an idea of what I am trying to achieve.
So service-1.tf will contain:
resource "aws_db_instance" "my-system" {
name = "service_1"
}
And service-2.tf will contain:
resource "aws_db_instance" "my-system" {
name = "service_2"
}
My question is what should I put in the service-1.tf and service-2.tf to make this possible.
Thank you in advance for your inputs.
Terraform can only manage things at the RDS instance level; configuring the schemas etc. is a DBA task.
One way you could automate the DBA tasks is to create a null_resource with the local-exec provisioner and use a postgres client to do the work, as sketched below.
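A rough sketch of that idea, assuming a psql client is available on the machine running Terraform; the database names and the credential variables (database_username, database_password) are hypothetical:

resource "null_resource" "create_service_databases" {
  # Re-run the provisioner if the DB endpoint changes.
  triggers = {
    db_endpoint = aws_db_instance.my-system.address
  }

  provisioner "local-exec" {
    # Creates one database per service on the shared instance (illustrative names).
    command = "psql -h ${aws_db_instance.my-system.address} -U ${var.database_username} -d postgres -c 'CREATE DATABASE service_1;' -c 'CREATE DATABASE service_2;'"

    environment = {
      PGPASSWORD = var.database_password # hypothetical variable; prefer a secrets store
    }
  }

  depends_on = [aws_db_instance.my-system]
}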
You can use count to manage this with only one .tf file:
resource "aws_db_instance" "my-system" {
count = "2"
name = "service_${count.index}"
identifier_prefix = "${var.resource_name_prefix}-tlm-"
engine = "postgres"
allocated_storage = "${var.database_storage_size}"
storage_type = "${var.database_storage_type}"
storage_encrypted = true
skip_final_snapshot = true
instance_class = "${var.database_instance_type}"
availability_zone = "${data.aws_availability_zones.all.names[0]}"
db_subnet_group_name = "${aws_db_subnet_group.default.name}"
vpc_security_group_ids = "${var.security_group_ids}"
backup_retention_period = "${var.database_retention_period}"
backup_window = "15:00-18:00" // UTC
maintenance_window = "sat:19:00-sat:20:00" // UTC
tags = "${var.tags}"
}

Creating RDS instances from a snapshot that is not the most recent using Terraform

In a Terraform project I am creating an RDS instance from a snapshot that is not the most recent (the fifth before the last); my script is here:
data "aws_db_snapshot" "db_snapshot" {
db_instance_identifier = "production-db-intern"
db_snapshot_arn = "arn:aws:rds:eu-central-1:123114111478:snapshot:rds:production-db-intern-2019-05-09-16-10"
}
resource "aws_db_instance" "db_intern" {
skip_final_snapshot = true
identifier = "db-intern"
auto_minor_version_upgrade = false
instance_class = "db.m4.4xlarge"
deletion_protection = false
vpc_security_group_ids = ["${var.security_group_id}"]
db_subnet_group_name = "${var.subnet_group_name}"
timeouts {
create = "3h"
delete = "2h"
}
lifecycle {
prevent_destroy = false
}
snapshot_identifier = "${data.aws_db_snapshot.db_snapshot.id}"
}
I did a "terraform plan" and
I got the next error:
Error: data.aws_db_snapshot.db_snapshot: "db_snapshot_arn": this field cannot be set
db_snapshot_arn is not a valid argument of the aws_db_snapshot data source. Did you mean db_snapshot_identifier?
Also, you can't pass the ARN to this data source; you pass the snapshot identifier instead, which for an automated snapshot is the part after :snapshot: in the ARN, e.g. rds:production-db-intern-2019-05-09-16-10.
Besides that, the data source only expects either db_instance_identifier or db_snapshot_identifier to be set. See the documentation for the describe-db-snapshots CLI command for more details on the specifics; Terraform retrieves these snapshots through the same underlying API.
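A corrected version of the data source, using the identifier that appears at the end of the ARN from the question, might look like this (a sketch):

data "aws_db_snapshot" "db_snapshot" {
  # Automated snapshots keep the "rds:" prefix in their identifier.
  db_snapshot_identifier = "rds:production-db-intern-2019-05-09-16-10"
}

resource "aws_db_instance" "db_intern" {
  # ... other arguments as in the question ...
  snapshot_identifier = data.aws_db_snapshot.db_snapshot.id
}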

Adding an option group using AWS CloudFormation

I'm building a CloudFormation template that will create a SQL Server database (using RDS) with the Multi-AZ option (so it maintains a synchronous standby replica in a different Availability Zone).
However, to make this work, I need to associate the database instance with an option group that has the Mirroring option. I haven't been able to find anywhere how to create an option group with a CloudFormation template.
How do I create an option group in a CloudFormation template?
As of now, option groups are not supported by CloudFormation. If the database needs to be created within the template, you can create the option group with the CLI beforehand and pass it in as a parameter. This way you can correctly populate the OptionGroupName value for the AWS::RDS::DBInstance resource.
Replication groups are not a feature supported by CloudFormation templates yet, so the only possible way is to create an external Python script that creates the replication group with the boto library.
Example of how to create a Redis replication group:
import boto.elasticache
import time
import sys

connection = boto.elasticache.connect_to_region('ap-southeast-2')

connection.create_cache_subnet_group(
    "Redis-Subnet-Group-Test",
    "Redis cluster subnet",
    ["subnet-72313306", "subnet-7a06e01f"]
)

connection.create_cache_cluster(
    "Redis-Master-Test",
    num_cache_nodes = 1,
    cache_node_type = "cache.t1.micro",
    engine = "redis",
    engine_version = "2.6.13",
    cache_subnet_group_name = "Redis-Subnet-Group-Test",
    security_group_ids = ["sg-07ff1962"],
    preferred_availability_zone = "ap-southeast-2a",
    preferred_maintenance_window = "tue:01:00-tue:02:00",
    auto_minor_version_upgrade = True
)

# Wait for the cache cluster (redis master) to become available
# before creating the replication group.
counter = 0
while counter < 35:
    counter = counter + 1
    clusterDesc = connection.describe_cache_clusters(cache_cluster_id = "Redis-Master-Test")
    clusterStatus = clusterDesc["DescribeCacheClustersResponse"]["DescribeCacheClustersResult"]["CacheClusters"][0]["CacheClusterStatus"]
    if "available" not in clusterStatus:
        time.sleep(10)
    elif "available" in clusterStatus:
        break
else:  # Just roll back on timeout
    connection.delete_cache_cluster("Redis-Master-Test")
    connection.delete_cache_subnet_group("Redis-Subnet-Group-Test")
    sys.exit(1)

connection.create_replication_group("Redis-Replicas-Test", "Redis-Master-Test", "Redis-Replication-Group")

connection.create_cache_cluster(
    "Redis-Replica-Test",
    num_cache_nodes = 1,
    replication_group_id = "Redis-Replicas-Test",
    cache_node_type = "cache.t1.micro",
    engine = "redis",
    engine_version = "2.6.13",
    cache_subnet_group_name = "Redis-Subnet-Group-Test",
    security_group_ids = ["sg-07ff1962"],
    preferred_availability_zone = "ap-southeast-2b",
    preferred_maintenance_window = "tue:01:00-tue:02:00",
    auto_minor_version_upgrade = True
)