How to reduce deployment time for an RDS instance via Terraform?

resource "aws_db_instance" "mydb1_mishal2" {
allocated_storage = 20 # gigabytes
backup_retention_period = 7 # in days
db_subnet_group_name = "${var.rds_public_subnet_group}"
engine = "mysql"
engine_version = "5.7"
identifier = "dbmishalsecond"
instance_class = "db.t2.micro"
multi_az = false
name = "mydbmishalsecond"
parameter_group_name = "default.mysql5.7" # if you have tuned it
password = "mishal12345" #"${trimspace(file("${path.module}/secrets/mydb1-password.txt"))}"
port = 3306
publicly_accessible = true
storage_encrypted = false # you should always do this
storage_type = "gp2"
username = "mydb1"
#vpc_security_group_ids = ["${aws_security_group.mydb1.id}"]
}
I am using the above code to deploy an RDS instance in AWS, but it is taking a long time to deploy. Logs below:
Deployment logs
How can I reduce the deployment time, or is it a necessary evil?

You can't really do anything to change how long the RDS instance takes to become available, other than noting that Multi-AZ databases take longer to start. But startup time isn't the right basis for that choice: set the multi_az parameter according to your availability requirements.
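If you do decide Multi-AZ is worth it somewhere, one common pattern is to pay the slower create only in production. A minimal sketch, assuming a hypothetical environment variable that is not part of the question's code:

variable "environment" {
  type    = string
  default = "dev"
}

resource "aws_db_instance" "mydb1_mishal2" {
  allocated_storage   = 20
  engine              = "mysql"
  engine_version      = "5.7"
  identifier          = "dbmishalsecond"
  instance_class      = "db.t2.micro"
  username            = "mydb1"
  password            = "change-me" # placeholder, load from a secret store in practice
  skip_final_snapshot = true

  # Multi-AZ only where availability requirements justify the slower create.
  multi_az = var.environment == "prod"
}

That way dev and test instances keep the faster single-AZ startup while production gets high availability.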

Related

How to recreate aws_rds_cluster in Terraform

I am trying to create an encrypted version of my currently existing unencrypted aws_rds_cluster by updating my resource. I added:
kms_key_id        = "mykmskey"
storage_encrypted = true
This is what my resource should look like:
resource "aws_rds_cluster" "my_rds_cluster" {
cluster_identifier = "${var.service_name}-rds-cluster"
database_name = var.db_name
master_username = var.db_username
master_password = random_password.db_password.result
engine = var.db_engine
engine_version = var.db_engine_version
kms_key_id = "mykmskey"
storage_encrypted = true
db_subnet_group_name = aws_db_subnet_group.fleet_service_db_subnet_group.name
vpc_security_group_ids = [aws_security_group.fleet_service_service_db_security_group.id]
skip_final_snapshot = true
backup_retention_period = var.environment != "prod" ? null : 7
# snapshot_identifier = "my-rds-instance-snapshot"
tags = { Name = "${var.service_name}-rds-cluster" }
}
The problem is that the original resource had deletion_protection = true defined, which I also removed. But even though I removed it, the original cluster cannot be deleted by any means so that the new one can be created, neither through changes in Terraform nor manually in the AWS console; it just throws an error like:
error creating RDS cluster: DBClusterAlreadyExistsFault: DB Cluster already exists
Any ideas what to do in such cases?
To do that purely through Terraform, you would have to:
1. Remove deletion protection from the original Terraform resource.
2. Run terraform apply, which will remove deletion protection from the actual resource in AWS.
3. Make the modifications to the Terraform resource that will result in a delete or replace of the current resource.
4. Run terraform apply again, during which Terraform will delete and/or replace the resource.
The key thing here is that you can't remove deletion protection at the same time you are actually deleting a resource, because Terraform will not update an existing resource to modify an attribute before attempting to delete it. A sketch of the two applies follows.
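Based on the resource from the question, the sequence looks roughly like this (arguments abbreviated; the comments mark which apply each change belongs to):

# Apply #1: only flip deletion protection, change nothing else.
resource "aws_rds_cluster" "my_rds_cluster" {
  cluster_identifier  = "${var.service_name}-rds-cluster"
  engine              = var.db_engine
  deletion_protection = false
  skip_final_snapshot = true
  # ...remaining arguments unchanged from the original cluster...
}

# Apply #2: only after apply #1 succeeds, add the arguments that force
# replacement and run terraform apply again:
#   kms_key_id        = "mykmskey"
#   storage_encrypted = true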

I have a Terraform script that creates an AWS RDS Aurora MySQL cluster; is there a way to create a table on the DB?

I have a Terraform script that creates an AWS RDS Aurora MySQL cluster:
module "cluster" {
source = "terraform-aws-modules/rds-aurora/aws"
name = var.cluster_name
master_username = var.master_username
master_password = var.master_password
create_random_password = false
database_name = var.database_name
engine = var.engine
engine_version = var.engine_version
instance_class = var.instance_class_r5
instances = {
one = {}
2 = {
instance_class = var.instance_class_r5_2
}
}
vpc_id = var.vpc_id
subnets = ["subnet-XXXX", "subnet-XXXX", "subnet-XXXX"]
allowed_security_groups = ["sg-XXXXXXXXXXXXXX"]
allowed_cidr_blocks = ["10.20.0.0/20", "144.121.18.66/32"]
storage_encrypted = true
apply_immediately = true
monitoring_interval = 10
db_parameter_group_name = aws_db_parameter_group.credential.id
db_cluster_parameter_group_name = aws_rds_cluster_parameter_group.credential.id
publicly_accessible = true
}
resource "aws_db_parameter_group" "credential" {
name = "${var.cluster_name}-aurora-db-57-parameter-group"
family = "aurora-mysql5.7"
description = "${var.cluster_name}-aurora-db-57-parameter-group"
tags = var.tags_required
}
resource "aws_rds_cluster_parameter_group" "credential" {
name = "${var.cluster_name}-aurora-57-cluster-parameter-group"
family = "aurora-mysql5.7"
description = "${var.cluster_name}-aurora-57-cluster-parameter-group"
tags = var.tags_required
}
This creates a database.
I am using Spring Boot, and usually with a database the entity will create the table:
@Entity
@Table(name = "credential")
public class CredentialEntity {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    Long credentialId;
In my yml file I have set:
spring:
  hibernate:
    ddl-auto: update
But it does not create the table. So is there a way to create the table as part of the Terraform script?
I wouldn't recommend doing this, but if you want Terraform to deploy database structures you can try something like:
resource "null_resource" "db_setup" {
  # The question's module is named "cluster", so depend on it directly
  # (the original snippet referenced other, unrelated resources).
  depends_on = [module.cluster]

  provisioner "local-exec" {
    # Pipe the SQL file into the mysql client. Pass the path itself rather
    # than wrapping it in file(), which would inline the file's contents
    # where a path is expected.
    command = "mysql --host=${module.cluster.cluster_endpoint} --port=${module.cluster.cluster_port} --user=${module.cluster.cluster_master_username} --password=${module.cluster.cluster_master_password} --database=${module.cluster.cluster_database_name} < ${path.module}/init/db_structure.sql"
  }
}
(This snippet is based on this answer, where you will find many more examples.)
Just note: Terraform manages infrastructure. Once the AWS provider has done its work, you can have the MySQL provider pick up and deploy admin-level objects such as users, roles, and grants; tables within databases belong to the application. There are other tools better suited to managing database objects: see if you can plug Flyway or Liquibase into your pipeline. A sketch of the MySQL-provider approach is below.
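A minimal sketch of that users-and-grants layer, using the community MySQL provider. The provider source, user name, and password variable here are assumptions to adapt, not part of the question's code:

terraform {
  required_providers {
    mysql = {
      source = "petoju/mysql"
    }
  }
}

provider "mysql" {
  # The MySQL provider connects over the network, so the cluster must be
  # reachable from wherever Terraform runs.
  endpoint = "${module.cluster.cluster_endpoint}:${module.cluster.cluster_port}"
  username = module.cluster.cluster_master_username
  password = var.master_password
}

resource "mysql_user" "app" {
  user               = "app_user"          # hypothetical application user
  host               = "%"
  plaintext_password = var.app_db_password # hypothetical variable
}

resource "mysql_grant" "app" {
  user       = mysql_user.app.user
  host       = mysql_user.app.host
  database   = module.cluster.cluster_database_name
  privileges = ["SELECT", "INSERT", "UPDATE", "DELETE"]
}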

GCP Cloud SQL failed to delete instance because `deletion_protection` is set to true

I have a tf script for provisioning a Cloud SQL instance, along with a couple of DBs and an admin user. I have renamed the instance, hence a new instance was created, but Terraform is encountering issues when it comes to deleting the old one.
Error: Error, failed to delete instance because deletion_protection is set to true. Set it to false to proceed with instance deletion
I have tried setting the deletion_protection to false but I keep getting the same error. Is there a way to check which resources need to have the deletion_protection set to false in order to be deleted?
I have only added it to the google_sql_database_instance resource.
My tf script:
// Provision the Cloud SQL instance
resource "google_sql_database_instance" "instance-master" {
  name             = "instance-db-${random_id.random_suffix_id.hex}"
  region           = var.region
  database_version = "POSTGRES_12"
  project          = var.project_id

  settings {
    availability_type = "REGIONAL"
    tier              = "db-f1-micro"
    activation_policy = "ALWAYS"
    disk_type         = "PD_SSD"

    ip_configuration {
      ipv4_enabled    = var.is_public ? true : false
      private_network = var.network_self_link
      require_ssl     = true

      dynamic "authorized_networks" {
        for_each = toset(var.is_public ? [1] : [])
        content {
          name  = "Public Internet"
          value = "0.0.0.0/0"
        }
      }
    }

    backup_configuration {
      enabled = true
    }

    maintenance_window {
      day          = 2
      hour         = 4
      update_track = "stable"
    }

    dynamic "database_flags" {
      iterator = flag
      for_each = var.database_flags
      content {
        name  = flag.key
        value = flag.value
      }
    }

    user_labels = var.default_labels
  }

  deletion_protection = false

  depends_on = [google_service_networking_connection.cloudsql-peering-connection, google_project_service.enable-sqladmin-api]
}

// Provision the databases
resource "google_sql_database" "db" {
  name     = "orders-placement"
  instance = google_sql_database_instance.instance-master.name
  project  = var.project_id
}

// Provision a super user
resource "google_sql_user" "admin-user" {
  name     = "admin-user"
  instance = google_sql_database_instance.instance-master.name
  password = random_password.user-password.result
  project  = var.project_id
}

// Get latest CA certificate
locals {
  furthest_expiration_time = reverse(sort([for k, v in google_sql_database_instance.instance-master.server_ca_cert : v.expiration_time]))[0]
  latest_ca_cert           = [for v in google_sql_database_instance.instance-master.server_ca_cert : v.cert if v.expiration_time == local.furthest_expiration_time]
}

// Get SSL certificate
resource "google_sql_ssl_cert" "client_cert" {
  common_name = "instance-master-client"
  instance    = google_sql_database_instance.instance-master.name
}
It seems like your code is going to recreate this SQL instance, but your current tfstate file still records deletion_protection = true for it. In that case, you first need to change that value to false, either manually in the tfstate file or by setting deletion_protection = false in the code and running terraform apply afterwards (beware: at this point your code shouldn't yet trigger a recreation of the instance). After these manipulations, you can do anything you like with your SQL instance.
You will have to set deletion_protection = false, apply it, and then proceed to delete.
As per the documentation
On newer versions of the provider, you must explicitly set deletion_protection=false (and run terraform apply to write the field to state) in order to destroy an instance. It is recommended to not set this field (or set it to true) until you're ready to destroy the instance and its databases.
Link
Editing Terraform state files directly / manually is not recommended
If you added deletion_protection to the google_sql_database_instance after the database instance was created, you need to run terraform apply (with deletion_protection = false) before running terraform destroy, so that the new value is actually recorded on the database instance. A sketch of the order of operations:
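This is a trimmed illustration of the same resource, not the full configuration from the question:

resource "google_sql_database_instance" "instance-master" {
  name             = "instance-db"
  region           = var.region
  database_version = "POSTGRES_12"

  settings {
    tier = "db-f1-micro"
  }

  # Step 1: set this to false and run `terraform apply` so the value is
  # written to state. Step 2: only then run `terraform destroy`, or apply
  # the rename that forces the instance to be recreated.
  deletion_protection = false
}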

Terraform Error: "replication_group_id": conflicts with engine_version (Redis)

I'm trying to create an aws_elasticache_replication_group using Redis
resource "aws_elasticache_cluster" "encryption-at-rest" {
count = 1
cluster_id = "${var.namespace}-${var.environment}-encryption-at-rest"
engine = "redis"
engine_version = var.engine_version
node_type = var.node_type
num_cache_nodes = 1
port = var.redis_port
#az_mode = var.az_mode
replication_group_id = aws_elasticache_replication_group.elasticache_replication_group.id
security_group_ids = [aws_security_group.redis_security_group.id]
subnet_group_name = aws_elasticache_subnet_group.default.name
apply_immediately = true
tags = {
Name = "${var.namespace}-${var.environment}-redis"
}
}
resource "aws_elasticache_replication_group" "elasticache_replication_group" {
automatic_failover_enabled = false //var.sharding_automatic_failover_enabled
availability_zones = ["ap-southeast-1a"] //data.terraform_remote_state.network.outputs.availability_zones
replication_group_id = "${var.namespace}-${var.environment}-encryption-at-rest"
replication_group_description = "${var.namespace} ${var.environment} replication group"
security_group_ids = [aws_security_group.redis_security_group.id]
subnet_group_name = aws_elasticache_subnet_group.default.name
node_type = var.node_type
number_cache_clusters = 1 //2
parameter_group_name = aws_elasticache_parameter_group.param_group.name
port = var.redis_port
at_rest_encryption_enabled = true
kms_key_id = data.aws_kms_alias.kms_redis.target_key_arn
apply_immediately = true
}
resource "aws_elasticache_parameter_group" "param_group" {
name = "${var.namespace}-${var.environment}-params"
family = "redis5.0"
}
But I get the following error:
aws_security_group_rule.redis_ingress[0]: Refreshing state... [id=sgrule-3474516270]
aws_security_group_rule.redis_ingress[1]: Refreshing state... [id=sgrule-2582511137]
aws_elasticache_replication_group.elasticache_replication_group: Refreshing state... [id=cbpl-uat-encryption-at-rest]
Error: "replication_group_id": conflicts with engine_version
on redis.tf line 1, in resource "aws_elasticache_cluster" "encryption-at-rest":
1: resource "aws_elasticache_cluster" "encryption-at-rest" {
Releasing state lock. This may take a few moments...
The aws_elasticache_cluster resource docs say this:
replication_group_id - (Optional) The ID of the replication group to which this cluster should belong. If this parameter is specified, the cluster is added to the specified replication group as a read replica; otherwise, the cluster is a standalone primary that is not part of any replication group.

engine - (Required unless replication_group_id is provided) Name of the cache engine to be used for this cache cluster. Valid values for this parameter are memcached or redis.
If you're going to join the cluster to a replication group then the engine must match the replication group's engine type, and the engine version is inherited from the group, so engine_version shouldn't be set on the aws_elasticache_cluster.
The AWS provider overloads the aws_elasticache_cluster structure to handle multiple dissimilar configurations. The internal logic contains a set of 'ConflictsWith' validations which are based on the premise that certain arguments simply cannot be specified together because they represent different modes of elasticache clusters (or nodes).
If you are specifying a replication_group_id then the value of engine_version will be managed by the corresponding aws_elasticache_replication_group.
Therefore, the solution is simply to remove the engine_version argument from your aws_elasticache_cluster resource specification. If you so choose (or in cases where it is required), you can also add that argument to the aws_elasticache_replication_group, as in the sketch after the example below.
Example: Redis Cluster Mode Disabled Read Replica Instance

// These inherit their settings from the replication group.
resource "aws_elasticache_cluster" "replica" {
  cluster_id           = "cluster-example"
  replication_group_id = aws_elasticache_replication_group.example.id
}

In this mode, the aws_elasticache_cluster structure requires very few arguments.
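If you do need to pin the Redis version, here is a sketch of setting it on the replication group instead. It uses the pre-v4 provider argument names that appear in the question, and the version number is illustrative:

resource "aws_elasticache_replication_group" "example" {
  replication_group_id          = "example-group"
  replication_group_description = "example replication group"
  node_type                     = "cache.t3.micro"
  number_cache_clusters         = 1
  port                          = 6379

  # Pin the version here; member aws_elasticache_cluster resources inherit it.
  engine_version = "5.0.6"
}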

Terraform two PostgreSQL databases setup

I am very new to Terraform scripting.
Our system runs in AWS, and we have a single database server instance accessed by multiple microservices.
Each microservice that needs to persist data should point to a different database (schema) on the same database server. We prefer each service to have its own schema so the services are fully decoupled from each other. However, creating a separate database instance for each would be too much, as some services persist close to nothing, so it would be a waste.
I created the PostgreSQL resource in a services.tf script that is common to all microservices:
resource "aws_db_instance" "my-system" {
identifier_prefix = "${var.resource_name_prefix}-tlm-"
engine = "postgres"
allocated_storage = "${var.database_storage_size}"
storage_type = "${var.database_storage_type}"
storage_encrypted = true
skip_final_snapshot = true
instance_class = "${var.database_instance_type}"
availability_zone = "${data.aws_availability_zones.all.names[0]}"
db_subnet_group_name = "${aws_db_subnet_group.default.name}"
vpc_security_group_ids = "${var.security_group_ids}"
backup_retention_period = "${var.database_retention_period}"
backup_window = "15:00-18:00" // UTC
maintenance_window = "sat:19:00-sat:20:00" // UTC
tags = "${var.tags}"
}
Now, for service-1 and service-2, I want to be able to create the corresponding database names. I don't think the code below is correct; I am just adding it to give you an idea of what I am trying to achieve.
So service-1.tf would contain:
resource "aws_db_instance" "my-system" {
name = "service_1"
}
And service-2.tf will contain:
resource "aws_db_instance" "my-system" {
name = "service_2"
}
My question is: what should I put in service-1.tf and service-2.tf to make this possible?
Thank you in advance for your input.
Terraform can only manage things at the RDS instance level; configuring the schema and other objects inside the database is a DBA task.
One way you could automate the DBA tasks is with a null_resource that uses the local-exec provisioner to run a PostgreSQL client and do the work, as in the sketch below.
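A minimal sketch of that approach, assuming a hypothetical init/schemas.sql file and password variable (neither is part of the question's code):

resource "null_resource" "db_setup" {
  # Re-run whenever the SQL file changes.
  triggers = {
    schema_sha1 = sha1(file("${path.module}/init/schemas.sql"))
  }

  provisioner "local-exec" {
    # Requires the psql client on the machine running Terraform and network
    # access to the instance (it sits in a VPC subnet group).
    command = "psql -h ${aws_db_instance.my-system.address} -p ${aws_db_instance.my-system.port} -U ${aws_db_instance.my-system.username} -f ${path.module}/init/schemas.sql"

    environment = {
      PGPASSWORD = var.database_password # hypothetical variable
    }
  }
}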
Alternatively, you can use count to manage everything in one tf file:
resource "aws_db_instance" "my-system" {
  count = 2

  # count.index starts at 0, so add 1 to get service_1 and service_2.
  name                    = "service_${count.index + 1}"
  identifier_prefix       = "${var.resource_name_prefix}-tlm-"
  engine                  = "postgres"
  allocated_storage       = "${var.database_storage_size}"
  storage_type            = "${var.database_storage_type}"
  storage_encrypted       = true
  skip_final_snapshot     = true
  instance_class          = "${var.database_instance_type}"
  availability_zone       = "${data.aws_availability_zones.all.names[0]}"
  db_subnet_group_name    = "${aws_db_subnet_group.default.name}"
  vpc_security_group_ids  = "${var.security_group_ids}"
  backup_retention_period = "${var.database_retention_period}"
  backup_window           = "15:00-18:00" // UTC
  maintenance_window      = "sat:19:00-sat:20:00" // UTC
  tags                    = "${var.tags}"
}
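A for_each variant of the same idea (a sketch, not part of the original answer) keys each instance by service name, so adding or removing a service later will not renumber the others:

resource "aws_db_instance" "my-system" {
  for_each = toset(["service_1", "service_2"])

  # each.key is the service name, used directly as the initial database name.
  name                   = each.key
  identifier_prefix      = "${var.resource_name_prefix}-tlm-"
  engine                 = "postgres"
  allocated_storage      = "${var.database_storage_size}"
  storage_type           = "${var.database_storage_type}"
  storage_encrypted      = true
  skip_final_snapshot    = true
  instance_class         = "${var.database_instance_type}"
  db_subnet_group_name   = "${aws_db_subnet_group.default.name}"
  vpc_security_group_ids = "${var.security_group_ids}"
  tags                   = "${var.tags}"
}

Note that both variants create one RDS instance per service; if you want several databases on a single instance, that part still has to happen inside PostgreSQL, as described in the first answer.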