Adding an option group using AWS CloudFormation

I'm building a CloudFormation template that will create a SQL Server database (using RDS) with the Multi-AZ option (so it maintains a synchronous standby replica in a different Availability Zone).
However, to make this work, I need to associate the database instance with an option group that has the Mirroring option. I haven't been able to find anywhere how to create an option group with a CloudFormation template.
How do I create an option group in a CloudFormation template?

As of now, option groups are not supported by CloudFormation. If the database needs to be created within the template, you can create the option group with the CLI beforehand and pass its name in as a parameter. That way you can correctly populate the OptionGroupName property of the AWS::RDS::DBInstance resource.
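If you prefer a script over a raw CLI call, a minimal boto3 sketch of that pre-step might look like this (the option group name, engine edition, and major version below are assumptions; adjust them to your SQL Server setup, then feed the name into the stack as the parameter backing OptionGroupName):
import boto3

rds = boto3.client('rds', region_name='us-east-1')

# Hypothetical option group for a SQL Server Standard Edition instance.
rds.create_option_group(
    OptionGroupName='sqlserver-mirroring-og',
    EngineName='sqlserver-se',
    MajorEngineVersion='11.00',
    OptionGroupDescription='Option group carrying the Mirroring option'
)

# Add the Mirroring option required for Multi-AZ SQL Server.
rds.modify_option_group(
    OptionGroupName='sqlserver-mirroring-og',
    OptionsToInclude=[{'OptionName': 'Mirroring'}],
    ApplyImmediately=True
)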

Replication groups are not a supported feature in CloudFormation templates yet, so the only possible way is to create an external Python script that creates the replication group with the boto library.
Example of how to create a Redis replication group:
import boto.elasticache
import time
import sys

connection = boto.elasticache.connect_to_region('ap-southeast-2')

connection.create_cache_subnet_group(
    "Redis-Subnet-Group-Test",
    "Redis cluster subnet",
    ["subnet-72313306", "subnet-7a06e01f"]
)

connection.create_cache_cluster(
    "Redis-Master-Test",
    num_cache_nodes=1,
    cache_node_type="cache.t1.micro",
    engine="redis",
    engine_version="2.6.13",
    cache_subnet_group_name="Redis-Subnet-Group-Test",
    security_group_ids=["sg-07ff1962"],
    preferred_availability_zone="ap-southeast-2a",
    preferred_maintenance_window="tue:01:00-tue:02:00",
    auto_minor_version_upgrade=True
)

counter = 0
while counter < 35:  # Wait for the cache cluster (Redis master) to become available before creating the replication group
    counter = counter + 1
    clusterDesc = connection.describe_cache_clusters(cache_cluster_id="Redis-Master-Test")
    clusterStatus = clusterDesc["DescribeCacheClustersResponse"]["DescribeCacheClustersResult"]["CacheClusters"][0]["CacheClusterStatus"]
    if "available" not in clusterStatus:
        time.sleep(10)
    elif "available" in clusterStatus:
        break
else:  # Just roll back on timeout
    connection.delete_cache_cluster("Redis-Master-Test")
    connection.delete_cache_subnet_group("Redis-Subnet-Group-Test")
    sys.exit(1)

connection.create_replication_group("Redis-Replicas-Test", "Redis-Master-Test", "Redis-Replication-Group")

connection.create_cache_cluster(
    "Redis-Replica-Test",
    num_cache_nodes=1,
    replication_group_id="Redis-Replicas-Test",
    cache_node_type="cache.t1.micro",
    engine="redis",
    engine_version="2.6.13",
    cache_subnet_group_name="Redis-Subnet-Group-Test",
    security_group_ids=["sg-07ff1962"],
    preferred_availability_zone="ap-southeast-2b",
    preferred_maintenance_window="tue:01:00-tue:02:00",
    auto_minor_version_upgrade=True
)

Related

Using Terraform, how would I create a AWS Kubernetes cluster with Fargate?

I am looking for a recipe using Terraform to create a Kubernetes cluster on AWS using Fargate. I cannot find any end-to-end documentation to do this.
I am using SSO, and so terraform needs to use my AWS credentials to do this.
No example I can find addresses using AWS credentials and Fargate.
If anyone has done this and has a recipe for all of the above, please share.
You can use the popular terraform-aws-eks module for that. It supports Fargate EKS as well. Since it's open source, you can also have a look at exactly how such clusters are created if you want to fork and customize the module, or create your own from scratch.
Example use for Fargate EKS from its docs:
module "eks" {
source = "../.."
cluster_name = local.cluster_name
cluster_version = "1.17"
subnets = module.vpc.private_subnets
tags = {
Environment = "test"
GithubRepo = "terraform-aws-eks"
GithubOrg = "terraform-aws-modules"
}
vpc_id = module.vpc.vpc_id
fargate_profiles = {
example = {
namespace = "default"
# Kubernetes labels for selection
# labels = {
# Environment = "test"
# GithubRepo = "terraform-aws-eks"
# GithubOrg = "terraform-aws-modules"
# }
# using specific subnets instead of all the ones configured in eks
# subnets = ["subnet-0ca3e3d1234a56c78"]
tags = {
Owner = "test"
}
}
}
map_roles = var.map_roles
map_users = var.map_users
map_accounts = var.map_accounts
}
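Note that the snippet refers to local.cluster_name, module.vpc, and the map_* variables, which are defined elsewhere in the module's example code. A rough sketch of the supporting pieces, including an AWS provider configured for an SSO profile, might look like this (the profile name, region, and CIDRs are assumptions; adjust to your account):
provider "aws" {
  region  = "us-east-1"
  profile = "my-sso-profile" # named profile from your AWS SSO configuration
}

locals {
  cluster_name = "test-eks-fargate"
}

module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name            = local.cluster_name
  cidr            = "10.0.0.0/16"
  azs             = ["us-east-1a", "us-east-1b", "us-east-1c"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]

  enable_nat_gateway = true
}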

Terraform Error: "replication_group_id": conflicts with engine_version (redis)

I'm trying to create an aws_elasticache_replication_group using Redis:
resource "aws_elasticache_cluster" "encryption-at-rest" {
count = 1
cluster_id = "${var.namespace}-${var.environment}-encryption-at-rest"
engine = "redis"
engine_version = var.engine_version
node_type = var.node_type
num_cache_nodes = 1
port = var.redis_port
#az_mode = var.az_mode
replication_group_id = aws_elasticache_replication_group.elasticache_replication_group.id
security_group_ids = [aws_security_group.redis_security_group.id]
subnet_group_name = aws_elasticache_subnet_group.default.name
apply_immediately = true
tags = {
Name = "${var.namespace}-${var.environment}-redis"
}
}
resource "aws_elasticache_replication_group" "elasticache_replication_group" {
automatic_failover_enabled = false //var.sharding_automatic_failover_enabled
availability_zones = ["ap-southeast-1a"] //data.terraform_remote_state.network.outputs.availability_zones
replication_group_id = "${var.namespace}-${var.environment}-encryption-at-rest"
replication_group_description = "${var.namespace} ${var.environment} replication group"
security_group_ids = [aws_security_group.redis_security_group.id]
subnet_group_name = aws_elasticache_subnet_group.default.name
node_type = var.node_type
number_cache_clusters = 1 //2
parameter_group_name = aws_elasticache_parameter_group.param_group.name
port = var.redis_port
at_rest_encryption_enabled = true
kms_key_id = data.aws_kms_alias.kms_redis.target_key_arn
apply_immediately = true
}
resource "aws_elasticache_parameter_group" "param_group" {
name = "${var.namespace}-${var.environment}-params"
family = "redis5.0"
}
But I get the following error:
aws_security_group_rule.redis_ingress[0]: Refreshing state... [id=sgrule-3474516270]
aws_security_group_rule.redis_ingress[1]: Refreshing state... [id=sgrule-2582511137]
aws_elasticache_replication_group.elasticache_replication_group: Refreshing state... [id=cbpl-uat-encryption-at-rest]
Error: "replication_group_id": conflicts with engine_version
on redis.tf line 1, in resource "aws_elasticache_cluster" "encryption-at-rest":
1: resource "aws_elasticache_cluster" "encryption-at-rest" {
Releasing state lock. This may take a few moments...
The aws_elasticache_cluster resource docs say this:
replication_group_id - (Optional) The ID of the replication group to
which this cluster should belong. If this parameter is specified, the
cluster is added to the specified replication group as a read replica;
otherwise, the cluster is a standalone primary that is not part of any
replication group.
engine – (Required unless replication_group_id is provided) Name
of the cache engine to be used for this cache cluster. Valid values
for this parameter are memcached or redis
If you're going to join the cluster to a replication group, then the engine must match the replication group's engine type, and so the engine version shouldn't be set on the aws_elasticache_cluster.
The AWS provider overloads the aws_elasticache_cluster structure to handle multiple dissimilar configurations. The internal logic contains a set of 'ConflictsWith' validations which are based on the premise that certain arguments simply cannot be specified together because they represent different modes of elasticache clusters (or nodes).
If you are specifying a replication_group_id then the value of engine_version will be managed by the corresponding aws_elasticache_replication_group.
Therefore, the solution is simply to remove the engine_version argument from your aws_elasticache_cluster resource specification. If you so choose (or in cases where it is required), you can also add that argument to the aws_elasticache_replication_group.
Example: Redis Cluster Mode Disabled Read Replica Instance
// These inherit their settings from the replication group.
resource "aws_elasticache_cluster" "replica" {
  cluster_id           = "cluster-example"
  replication_group_id = aws_elasticache_replication_group.example.id
}
In this mode, the aws_elasticache_cluster structure requires very few arguments.
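For completeness, a sketch of the replication group such a replica could attach to, with engine_version set there rather than on the cluster (names, sizes, and the engine version are assumptions, and the argument names follow the same provider era as the question's code):
resource "aws_elasticache_replication_group" "example" {
  replication_group_id          = "example-group"
  replication_group_description = "example Redis replication group"
  engine                        = "redis"
  engine_version                = "5.0.6"
  node_type                     = "cache.t3.micro"
  number_cache_clusters         = 1
  parameter_group_name          = "default.redis5.0"
  port                          = 6379
}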

How to list AWS tagged Hosted Zones using ResourceGroupsTaggingAPI boto3

I am trying to retrieve all the tagged AWS resources using the boto3 ResourceGroupsTaggingAPI, but I can't seem to retrieve the Hosted Zones which have been tagged.
import boto3

tagFilters = [{'Key': 'tagA', 'Values': ['a']}, {'Key': 'tagB', 'Values': ['b']}]

client = boto3.client('resourcegroupstaggingapi', region_name='us-east-2')
paginator = client.get_paginator('get_resources')
page_list = paginator.paginate(TagFilters=tagFilters)

# filter and get iterable object arn
# Refer filtering with JMESPath => http://jmespath.org/
arns = page_list.search("ResourceTagMappingList[*].ResourceARN")

for arn in arns:
    print(arn)
I noticed through the Tag Editor in the AWS Console (which I guess is using the ResourceGroupsTaggingAPI) that when the region is set to All, the tagged Hosted Zones can be retrieved (since they are global), while when a specific region is set, the tagged Hosted Zones are not shown in the results. Is there a way to set the boto3 client region to all, or is there another way to do this?
I have already tried
client = boto3.client('resourcegroupstaggingapi')
which returns an empty result
(https://console.aws.amazon.com/resource-groups/tag-editor/find-resources?region=us-east-1)
You need to iterate over all regions:
import boto3
from botocore.config import Config

ec2 = boto3.client('ec2')
region_response = ec2.describe_regions()
# print('Regions:', region_response['Regions'])

for this_region_info in region_response['Regions']:
    region = this_region_info["RegionName"]
    my_config = Config(
        region_name=region
    )
    client = boto3.client('resourcegroupstaggingapi', config=my_config)
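    # A sketch of reusing the paginator from the question against each
    # regional client (the tag keys/values mirror the question's example):
    paginator = client.get_paginator('get_resources')
    pages = paginator.paginate(TagFilters=[
        {'Key': 'tagA', 'Values': ['a']},
        {'Key': 'tagB', 'Values': ['b']},
    ])
    for arn in pages.search("ResourceTagMappingList[*].ResourceARN"):
        print(region, arn)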

Terraform two PostgreSQL databases setup

I am very new to Terraform scripting.
Our system is running in AWS and we have a single database server instance accessed by multiple microservices.
Each microservice that needs to persist some data needs to point to a different database (schema) on the same database server. We prefer each service to have its own schema so that the services are totally decoupled from each other. However, creating a separate database instance for each service would be a bit too much, as some services persist close to nothing, so it would be a waste.
I created the PostgreSQL resource in a services.tf script that is common to all microservices:
resource "aws_db_instance" "my-system" {
identifier_prefix = "${var.resource_name_prefix}-tlm-"
engine = "postgres"
allocated_storage = "${var.database_storage_size}"
storage_type = "${var.database_storage_type}"
storage_encrypted = true
skip_final_snapshot = true
instance_class = "${var.database_instance_type}"
availability_zone = "${data.aws_availability_zones.all.names[0]}"
db_subnet_group_name = "${aws_db_subnet_group.default.name}"
vpc_security_group_ids = "${var.security_group_ids}"
backup_retention_period = "${var.database_retention_period}"
backup_window = "15:00-18:00" // UTC
maintenance_window = "sat:19:00-sat:20:00" // UTC
tags = "${var.tags}"
}
And now, for my service-1 and service-2, I want to be able to create the corresponding database names. I don't think the below is correct; I am just adding it to give you an idea of what I am trying to achieve.
So service-1.tf will contain:
resource "aws_db_instance" "my-system" {
name = "service_1"
}
And service-2.tf will contain:
resource "aws_db_instance" "my-system" {
name = "service_2"
}
My question is: what should I put in service-1.tf and service-2.tf to make this possible?
Thank you in advance for your input.
Terraform can only manage things at the RDS instance level. Configuring the schemas etc. is a DBA task.
One way you could automate the DBA tasks is by creating a null_resource that uses the local-exec provisioner to run a Postgres client and do the work.
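For example, a rough sketch of that approach (the variable names and database names here are placeholders, and it assumes psql is installed wherever Terraform runs):
resource "null_resource" "service_databases" {
  # Re-run the commands if the DB instance is replaced.
  triggers = {
    db_instance = aws_db_instance.my-system.id
  }

  provisioner "local-exec" {
    command = <<-EOT
      psql "host=${aws_db_instance.my-system.address} user=${var.db_username} dbname=postgres" \
        -c 'CREATE DATABASE service_1;' \
        -c 'CREATE DATABASE service_2;'
    EOT

    environment = {
      PGPASSWORD = var.db_password # placeholder variable for the master password
    }
  }
}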
You can use count to manage this in a single .tf file:
resource "aws_db_instance" "my-system" {
count = "2"
name = "service_${count.index}"
identifier_prefix = "${var.resource_name_prefix}-tlm-"
engine = "postgres"
allocated_storage = "${var.database_storage_size}"
storage_type = "${var.database_storage_type}"
storage_encrypted = true
skip_final_snapshot = true
instance_class = "${var.database_instance_type}"
availability_zone = "${data.aws_availability_zones.all.names[0]}"
db_subnet_group_name = "${aws_db_subnet_group.default.name}"
vpc_security_group_ids = "${var.security_group_ids}"
backup_retention_period = "${var.database_retention_period}"
backup_window = "15:00-18:00" // UTC
maintenance_window = "sat:19:00-sat:20:00" // UTC
tags = "${var.tags}"
}
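Note that count.index starts at 0, so the above names the databases service_0 and service_1; if you want the names to start at 1, a small tweak does it:
  name = "service_${count.index + 1}"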

Creating RDS Instances from Snapshot Using Terraform

I am working on a Terraform project in which I am creating an RDS cluster by grabbing and using the most recent production DB snapshot:
# Get latest snapshot from production DB
data "aws_db_snapshot" "db_snapshot" {
  most_recent            = true
  db_instance_identifier = "${var.db_instance_to_clone}"
}

# Create RDS instance from snapshot
resource "aws_db_instance" "primary" {
  identifier                = "${var.app_name}-primary"
  snapshot_identifier       = "${data.aws_db_snapshot.db_snapshot.id}"
  instance_class            = "${var.instance_class}"
  vpc_security_group_ids    = ["${var.security_group_id}"]
  skip_final_snapshot       = true
  final_snapshot_identifier = "snapshot"
  parameter_group_name      = "${var.parameter_group_name}"
  publicly_accessible       = true

  timeouts {
    create = "2h"
  }
}
The issue with this approach is that subsequent runs of the Terraform code (once another snapshot has been taken) want to re-create the primary RDS instance (and subsequently the read replicas) with the latest snapshot of the DB. I was thinking of something along the lines of a boolean count parameter that specifies the first run, but setting count = 0 on the snapshot data source causes issues with the snapshot_identifier parameter of the DB resource. Likewise, setting count = 0 on the DB resource would indicate that it should be destroyed.
The use case for this is to be able to make changes to other aspects of the production infrastructure that this Terraform plan manages without having to re-create the entire RDS cluster, which is a very time-consuming resource to destroy/create.
Try placing an ignore_changes lifecycle block within your aws_db_instance definition:
lifecycle {
  ignore_changes = [
    snapshot_identifier,
  ]
}
This will cause Terraform to only consider the database's snapshot_identifier when the instance is first created.
If the database already exists, Terraform will ignore any changes to the existing database's snapshot_identifier field, even if a new snapshot has been created since then.