How do I connect RDS with Elastic Beanstalk in Terraform?

I have the code to create the Elastic Beanstalk environment with Terraform, and here is the code I found in the Terraform docs to create an RDS instance:
resource "aws_db_instance" "default" {
allocated_storage = 10
db_name = "mydb"
engine = "mysql"
engine_version = "5.7"
instance_class = "db.t3.micro"
username = "foo"
password = "foobarbaz"
parameter_group_name = "default.mysql5.7"
skip_final_snapshot = true
}
The problem is that I can't find an example of how to connect this DB to Elastic Beanstalk.

I think the setting option is the way to go here, i.e., you probably do not need a separate resource for creating the DB. Based on the AWS docs [1], and using the Terraform examples [2], it should look something like:
resource "aws_elastic_beanstalk_application" "tftest" {
name = "tf-test-name"
description = "tf-test-desc"
}
resource "aws_elastic_beanstalk_environment" "tfenvtest" {
name = "tf-test-name"
application = aws_elastic_beanstalk_application.tftest.name
solution_stack_name = "64bit Amazon Linux 2015.03 v2.0.3 running Go 1.4"
setting {
namespace = "aws:rds:dbinstance"
name = "DBAllocatedStorage"
value = "10"
}
setting {
namespace = "aws:rds:dbinstance"
name = "DBDeletionPolicy"
value = "Delete"
}
setting {
namespace = "aws:rds:dbinstance"
name = "HasCoupledDatabase"
value = "true"
}
setting {
namespace = "aws:rds:dbinstance"
name = "DBEngine"
value = "mysql"
}
setting {
namespace = "aws:rds:dbinstance"
name = "DBEngineVersion"
value = "5.7"
}
setting {
namespace = "aws:rds:dbinstance"
name = "DBInstanceClass"
value = "db.t3.micro"
}
setting {
namespace = "aws:rds:dbinstance"
name = "DBPassword"
value = "foobarbaz"
}
setting {
namespace = "aws:rds:dbinstance"
name = "DBUser"
value = "foo"
}
}
However, I am not sure if the parameter_group_name can be set here.
EDIT: Answer updated to create a DB instance with the Elastic Beanstalk environment. However, make sure to understand this part of the docs about the HasCoupledDatabase setting:
Note: If you toggle this value back to true after decoupling the previous database, Elastic Beanstalk creates a new database with the previous database option settings. However, to maintain the security of your environment, it doesn't retain the existing DBUser and DBPassword settings. You need to specify DBUser and DBPassword again.
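If you do want to keep the separate aws_db_instance from the question (for example, because you need parameter_group_name), a decoupled alternative is to inject the database endpoint into the environment as environment variables. A minimal sketch, assuming the aws_db_instance.default resource from the question and hypothetical DB_HOST/DB_PORT variable names; you would still need to allow the environment's instances to reach the database via security groups:

# Inside the aws_elastic_beanstalk_environment resource:
setting {
  namespace = "aws:elasticbeanstalk:application:environment"
  name      = "DB_HOST"
  value     = aws_db_instance.default.address
}
setting {
  namespace = "aws:elasticbeanstalk:application:environment"
  name      = "DB_PORT"
  value     = tostring(aws_db_instance.default.port)
}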
[1] https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options-general.html#command-options-general-rdsdbinstance
[2] https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/elastic_beanstalk_environment#option-settings

Related

Access denied issue with GCP SQL created using Terraform

I created the following resources using Terraform:
MySQL instance
DB
User
To create these resources I am using GoogleCloudPlatform/sql-db/google//modules/mysql v13.0.1.
The Terraform template is as follows:
module "backend-db" {
source = "GoogleCloudPlatform/sql-db/google//modules/mysql"
version = "13.0.1"
name = "${var.env}-database"
random_instance_name = true
database_version = "MYSQL_5_7"
project_id = module.dev-project.project_id
region = "australia-southeast1"
zone = "australia-southeast1-a"
tier = "db-n1-standard-1"
additional_databases = var.additional_databases
additional_users = var.additional_users
deletion_protection = false
ip_configuration = {
ipv4_enabled = true
private_network = null
require_ssl = true
allocated_ip_range = null
authorized_networks = local.authorized_networks
}
database_flags = [
{
name = "log_bin_trust_function_creators"
value = "on"
},
]
}
The instance has a public IP, and in the authorized networks I added my IP. I am trying to access it via MySQL Workbench.
The error I get is: Access denied for user '<USER>'@'<MY_IP>' (using password: YES)
Note: all works well when I create all the resources from the console (UI).
Please help.

Import a Definition file to Amazon MQ through Terraform

I have created an Amazon MQ broker (with engine type RabbitMQ) using Terraform. Now I want to import a definition file, which is an XML file.
This can easily be done manually: I just export the definition file and import it into AWS MQ by hand.
But I need to automate this process using Terraform. Any suggestion would be appreciated.
This is my Terraform code:
resource "aws_mq_broker" "rabbitmq_broker" {
broker_name = "mq_test"
engine_type = var.mq_engine_type
engine_version = var.mq_engine_version
host_instance_type = var.mq_instance_type
deployment_mode = var.mq_deployment_mode
security_groups = [aws_security_group.ecs_private.id]
apply_immediately = "true"
publicly_accessible = "false"
subnet_ids = [aws_subnet.private.id]
user {
console_access = "true"
username = var.mq_username
password = "password"
}
tags = {
env = "${terraform.workspace}",
}
}
First you need to create an MQ Configuration, which is what you pass the XML file to. Then you pass the configuration to the broker.
resource "aws_mq_configuration" "rabbitmq_broker_config" {
name = "My Broker Configuration"
description = "My RabbitMQ Broker Configuration"
engine_type = var.mq_engine_type
engine_version = var.mq_engine_version
data = file("${path.module}/broker-config.xml")
}
resource "aws_mq_broker" "rabbitmq_broker" {
# All your attributes here
configuration {
id = aws_mq_configuration.rabbitmq_broker_config.id
revision = aws_mq_configuration.rabbitmq_broker_config.latest_revision
}
}

I have Terraform that creates an AWS RDS Aurora MySQL cluster; is there a way to create a table on the DB?

I have a Terraform script that creates an AWS RDS Aurora MySQL cluster:
module "cluster" {
source = "terraform-aws-modules/rds-aurora/aws"
name = var.cluster_name
master_username = var.master_username
master_password = var.master_password
create_random_password = false
database_name = var.database_name
engine = var.engine
engine_version = var.engine_version
instance_class = var.instance_class_r5
instances = {
one = {}
2 = {
instance_class = var.instance_class_r5_2
}
}
vpc_id = var.vpc_id
subnets = ["subnet-XXXX", "subnet-XXXX", "subnet-XXXX"]
allowed_security_groups = ["sg-XXXXXXXXXXXXXX"]
allowed_cidr_blocks = ["10.20.0.0/20", "144.121.18.66/32"]
storage_encrypted = true
apply_immediately = true
monitoring_interval = 10
db_parameter_group_name = aws_db_parameter_group.credential.id
db_cluster_parameter_group_name = aws_rds_cluster_parameter_group.credential.id
publicly_accessible = true
}
resource "aws_db_parameter_group" "credential" {
name = "${var.cluster_name}-aurora-db-57-parameter-group"
family = "aurora-mysql5.7"
description = "${var.cluster_name}-aurora-db-57-parameter-group"
tags = var.tags_required
}
resource "aws_rds_cluster_parameter_group" "credential" {
name = "${var.cluster_name}-aurora-57-cluster-parameter-group"
family = "aurora-mysql5.7"
description = "${var.cluster_name}-aurora-57-cluster-parameter-group"
tags = var.tags_required
}
This creates a database.
I am using Spring Boot, and usually, given a database, the entity will create the table:
@Entity
@Table(name = "credential")
public class CredentialEntity {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    Long credentialId;
}
In my yml file I have set:
spring:
  hibernate:
    ddl-auto: update
But it does not create the table. So, is there a way to create the table as part of the Terraform script?
I wouldn't recommend doing this, but if you want Terraform to deploy database structures you can try with:
resource "null_resource" "db_setup" {
depends_on = [module.db, aws_security_group.rds_main, aws_default_security_group.default]
provisioner "local-exec" {
command = "mysql --host=${module.cluster.cluster_endpoint} --port=${module.cluster.cluster_port} --user=${module.cluster.cluster_master_username} --password=${module.cluster.cluster_master_password} --database=${module.cluster.cluster_database_name} < ${file(${path.module}/init/db_structure.sql)}"
}
}
(This snippet is based on another answer, where you will find a lot more examples.)
Just note: Terraform manages infrastructure. Once the AWS provider has done its work, you can have the MySQL provider pick up and deploy admin stuff like users, roles, grants, etc. But tables within databases belong to the application. There are other tools better suited for managing database objects; see if you can plug Flyway or Liquibase into your pipeline.
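For the users/grants part, a minimal sketch with a MySQL provider (e.g., the original hashicorp/mysql, now archived, or a community fork), assuming the module outputs from the question plus a hypothetical app_user_password variable:

provider "mysql" {
  # The provider expects host:port; cluster_endpoint is just the hostname.
  endpoint = "${module.cluster.cluster_endpoint}:3306"
  username = var.master_username
  password = var.master_password
}

# Hypothetical application user and its grants; adjust names to taste.
resource "mysql_user" "app" {
  user               = "app_user"
  host               = "%"
  plaintext_password = var.app_user_password
}

resource "mysql_grant" "app" {
  user       = mysql_user.app.user
  host       = mysql_user.app.host
  database   = var.database_name
  privileges = ["SELECT", "INSERT", "UPDATE", "DELETE"]
}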

GCP Cloud SQL failed to delete instance because `deletion_protection` is set to true

I have a tf script for provisioning a Cloud SQL instance, along with a couple of DBs and an admin user. I have renamed the instance, hence a new instance was created, but Terraform is encountering issues when it comes to deleting the old one.
Error: Error, failed to delete instance because deletion_protection is set to true. Set it to false to proceed with instance deletion
I have tried setting deletion_protection to false, but I keep getting the same error. Is there a way to check which resources need deletion_protection set to false in order to be deleted?
I have only added it to the google_sql_database_instance resource.
My tf script:
// Provision the Cloud SQL instance
resource "google_sql_database_instance" "instance-master" {
  name             = "instance-db-${random_id.random_suffix_id.hex}"
  region           = var.region
  database_version = "POSTGRES_12"
  project          = var.project_id

  settings {
    availability_type = "REGIONAL"
    tier              = "db-f1-micro"
    activation_policy = "ALWAYS"
    disk_type         = "PD_SSD"

    ip_configuration {
      ipv4_enabled    = var.is_public ? true : false
      private_network = var.network_self_link
      require_ssl     = true

      dynamic "authorized_networks" {
        for_each = toset(var.is_public ? [1] : [])
        content {
          name  = "Public Internet"
          value = "0.0.0.0/0"
        }
      }
    }

    backup_configuration {
      enabled = true
    }

    maintenance_window {
      day          = 2
      hour         = 4
      update_track = "stable"
    }

    dynamic "database_flags" {
      iterator = flag
      for_each = var.database_flags
      content {
        name  = flag.key
        value = flag.value
      }
    }

    user_labels = var.default_labels
  }

  deletion_protection = false

  depends_on = [
    google_service_networking_connection.cloudsql-peering-connection,
    google_project_service.enable-sqladmin-api,
  ]
}

// Provision the databases
resource "google_sql_database" "db" {
  name     = "orders-placement"
  instance = google_sql_database_instance.instance-master.name
  project  = var.project_id
}

// Provision a super user
resource "google_sql_user" "admin-user" {
  name     = "admin-user"
  instance = google_sql_database_instance.instance-master.name
  password = random_password.user-password.result
  project  = var.project_id
}

// Get latest CA certificate
locals {
  furthest_expiration_time = reverse(sort([for k, v in google_sql_database_instance.instance-master.server_ca_cert : v.expiration_time]))[0]
  latest_ca_cert           = [for v in google_sql_database_instance.instance-master.server_ca_cert : v.cert if v.expiration_time == local.furthest_expiration_time]
}

// Get SSL certificate
resource "google_sql_ssl_cert" "client_cert" {
  common_name = "instance-master-client"
  instance    = google_sql_database_instance.instance-master.name
}
It seems like your code is going to recreate this SQL instance, but your current tfstate file still contains the old instance with deletion_protection set to true. In this case you first need to change the value of this parameter to false, either manually in the tfstate file or by setting deletion_protection = false in the code and running terraform apply afterwards (beware: at that point your code shouldn't also trigger a recreation of the instance). After that you can do anything with your SQL instance.
You will have to set deletion_protection = false, apply it, and then proceed to delete.
As per the documentation:
On newer versions of the provider, you must explicitly set deletion_protection=false (and run terraform apply to write the field to state) in order to destroy an instance. It is recommended to not set this field (or set it to true) until you're ready to destroy the instance and its databases.
Editing Terraform state files directly / manually is not recommended.
If you added deletion_protection to the google_sql_database_instance after the database instance was created, you need to run terraform apply before running terraform destroy so that deletion_protection is set to false on the database instance.
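To make the order of operations concrete, a minimal sketch assuming the instance-master resource from the question:

# Step 1: make sure the flag is false in the code.
resource "google_sql_database_instance" "instance-master" {
  # ... all other settings unchanged ...
  deletion_protection = false
}

# Step 2: run `terraform apply` so the false value is written to state.
# Step 3: now `terraform destroy` (or removing the resource and applying
# again) can delete the instance.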

Terraform two PostgreSQL databases setup

I am very, very new to Terraform scripting.
Our system runs in AWS, and we have a single database server instance accessed by multiple microservices.
Each microservice that needs to persist some data should point to a different database (schema) on the same database server. We prefer each service to have its own schema so that the services are totally decoupled from each other. However, creating a separate database instance for each service would be a bit too much, as some services persist close to nothing, so it would be a waste.
I created the PostgreSQL resource in a services.tf script that is common to all microservices:
resource "aws_db_instance" "my-system" {
identifier_prefix = "${var.resource_name_prefix}-tlm-"
engine = "postgres"
allocated_storage = "${var.database_storage_size}"
storage_type = "${var.database_storage_type}"
storage_encrypted = true
skip_final_snapshot = true
instance_class = "${var.database_instance_type}"
availability_zone = "${data.aws_availability_zones.all.names[0]}"
db_subnet_group_name = "${aws_db_subnet_group.default.name}"
vpc_security_group_ids = "${var.security_group_ids}"
backup_retention_period = "${var.database_retention_period}"
backup_window = "15:00-18:00" // UTC
maintenance_window = "sat:19:00-sat:20:00" // UTC
tags = "${var.tags}"
}
Now, for service-1 and service-2, I want to be able to create the corresponding database names. I don't think the below is correct; I am just adding it to give you an idea of what I am trying to achieve.
So service-1.tf will contain:
resource "aws_db_instance" "my-system" {
name = "service_1"
}
And service-2.tf will contain:
resource "aws_db_instance" "my-system" {
name = "service_2"
}
My question is: what should I put in service-1.tf and service-2.tf to make this possible?
Thank you in advance for your input.
Terraform can only manage things at the RDS instance level; configuring the schemas etc. is a DBA task.
One way you could automate the DBA tasks is by creating a null_resource that uses the local-exec provisioner to do the work with a Postgres client, as sketched below.
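A minimal sketch, assuming the aws_db_instance.my-system resource above, hypothetical database_username/database_password variables, and a psql client reachable from wherever Terraform runs:

resource "null_resource" "service_databases" {
  for_each = toset(["service_1", "service_2"])

  provisioner "local-exec" {
    # Create one database per service on the shared instance.
    command = "PGPASSWORD='${var.database_password}' psql --host=${aws_db_instance.my-system.address} --port=${aws_db_instance.my-system.port} --username=${var.database_username} --dbname=postgres --command='CREATE DATABASE ${each.key}'"
  }

  depends_on = [aws_db_instance.my-system]
}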
Alternatively, you can use count so you only have to manage one tf file:
resource "aws_db_instance" "my-system" {
count = "2"
name = "service_${count.index}"
identifier_prefix = "${var.resource_name_prefix}-tlm-"
engine = "postgres"
allocated_storage = "${var.database_storage_size}"
storage_type = "${var.database_storage_type}"
storage_encrypted = true
skip_final_snapshot = true
instance_class = "${var.database_instance_type}"
availability_zone = "${data.aws_availability_zones.all.names[0]}"
db_subnet_group_name = "${aws_db_subnet_group.default.name}"
vpc_security_group_ids = "${var.security_group_ids}"
backup_retention_period = "${var.database_retention_period}"
backup_window = "15:00-18:00" // UTC
maintenance_window = "sat:19:00-sat:20:00" // UTC
tags = "${var.tags}"
}
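On Terraform 0.12.6 or later, for_each reads more clearly than count here, since the database names are explicit and adding or removing a service does not shift the others. A sketch under the same assumptions:

resource "aws_db_instance" "my-system" {
  for_each = toset(["service_1", "service_2"])

  # One instance per service, named after it.
  name = each.key

  # ... remaining settings identical to the count version above ...
}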