Import a Definition file to Amazon MQ through Terraform

I have created an Amazon MQ broker (engine type RabbitMQ) using Terraform. Now I want to import a definition file, which is an XML document.
This is easy to do manually: I just export the definition file and import it through the AWS MQ console.
But I need to automate this process with Terraform. Any suggestion would be appreciated.
This is my Terraform code:
resource "aws_mq_broker" "rabbitmq_broker" {
broker_name = "mq_test"
engine_type = var.mq_engine_type
engine_version = var.mq_engine_version
host_instance_type = var.mq_instance_type
deployment_mode = var.mq_deployment_mode
security_groups = [aws_security_group.ecs_private.id]
apply_immediately = "true"
publicly_accessible = "false"
subnet_ids = [aws_subnet.private.id]
user {
console_access = "true"
username = var.mq_username
password = "password"
}
tags = {
env = "${terraform.workspace}",
}
}

First you need to create an MQ Configuration, which is what you pass the XML file to. Then you pass the configuration to the broker.
resource "aws_mq_configuration" "rabbitmq_broker_config" {
name = "My Broker Configuration"
description = "My RabbitMQ Broker Configuration"
engine_type = var.mq_engine_type
engine_version = var.mq_engine_version
data = file("${path.module}/broker-config.xml")
}
resource "aws_mq_broker" "rabbitmq_broker" {
# All your attributes here
configuration {
id = aws_mq_configuration.rabbitmq_broker_config.id
revision = aws_mq_configuration.rabbitmq_broker_config.latest_revision
}
}

Terraform AWS Redshift and Secret Manager

I am trying to deploy Redshift, generating its password in AWS Secrets Manager.
The secret works only when I connect with a SQL client.
I wrote a Python script:
import awswrangler as wr

# Name of the secret created in Secrets Manager (see the Terraform below)
redshift_credential_secret = "redshift"

# Connect to Redshift to create a table
print("Connecting to Redshift...")
con = wr.redshift.connect(secret_id=redshift_credential_secret, timeout=10)
print("Successfully connected to Redshift.")
I am trying to fetch the secret from Secrets Manager, connect to Redshift, and do some operations, but it gives an error:
redshift_connector.error.InterfaceError: ('communication error', gaierror(-2, 'Name or service not known'))
So for testing I created a secret manually in Secrets Manager, choosing the secret type "Redshift credentials", referenced it in my Python script, and it worked. But the secret I created with Terraform does not work.
It seems an ordinary secret does not work with a Redshift cluster when you try to fetch it from code; it requires changing the type of the secret in Secrets Manager.
But there is no such option in Terraform to choose the secret type.
Is there any other way to deploy this solution?
Here is my code below:
# First, create a randomly generated password to use in the secret.
resource "random_password" "password" {
  length           = 16
  special          = true
  override_special = "!#$%&=+?"
}

# Create an AWS secret for Redshift
resource "aws_secretsmanager_secret" "redshiftcred" {
  name                    = "redshift"
  recovery_window_in_days = 0
}

# Create an AWS secret version for Redshift
resource "aws_secretsmanager_secret_version" "redshiftcred" {
  secret_id = aws_secretsmanager_secret.redshiftcred.id
  secret_string = jsonencode({
    engine              = "redshift"
    host                = aws_redshift_cluster.redshift_cluster.endpoint
    username            = aws_redshift_cluster.redshift_cluster.master_username
    password            = aws_redshift_cluster.redshift_cluster.master_password
    port                = "5439"
    dbClusterIdentifier = aws_redshift_cluster.redshift_cluster.cluster_identifier
  })

  depends_on = [
    aws_secretsmanager_secret.redshiftcred
  ]
}

resource "aws_redshift_cluster" "redshift_cluster" {
  cluster_identifier        = "tf-redshift-cluster"
  database_name             = lookup(var.redshift_details, "redshift_database_name")
  master_username           = "admin"
  master_password           = random_password.password.result
  node_type                 = lookup(var.redshift_details, "redshift_node_type")
  cluster_type              = lookup(var.redshift_details, "redshift_cluster_type")
  number_of_nodes           = lookup(var.redshift_details, "number_of_redshift_nodes")
  iam_roles                 = [aws_iam_role.redshift_role.arn]
  skip_final_snapshot       = true
  publicly_accessible       = true
  cluster_subnet_group_name = aws_redshift_subnet_group.redshift_subnet_group.id
  vpc_security_group_ids    = [aws_security_group.redshift.id]

  depends_on = [
    aws_iam_role.redshift_role
  ]
}
Unfortunately, Terraform does not yet support AWS::SecretsManager::SecretTargetAttachment, which CloudFormation does, including the target type AWS::Redshift::Cluster.
For more information, you can check the corresponding GitHub issue, which has been open since 2019.
As a workaround, you can use Terraform to create a CloudFormation stack containing that resource.
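A minimal sketch of that workaround using the aws_cloudformation_stack resource (the stack name and the logical resource name are illustrative, not from the original answer):

resource "aws_cloudformation_stack" "redshift_secret_attachment" {
  name = "redshift-secret-attachment"

  # CloudFormation attaches the secret to the cluster, the same association
  # the console's "Redshift credentials" secret type creates.
  template_body = jsonencode({
    Resources = {
      SecretTargetAttachment = {
        Type = "AWS::SecretsManager::SecretTargetAttachment"
        Properties = {
          SecretId   = aws_secretsmanager_secret.redshiftcred.arn
          TargetId   = aws_redshift_cluster.redshift_cluster.id
          TargetType = "AWS::Redshift::Cluster"
        }
      }
    }
  })
}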

How do I connect RDS with Elastic Beanstalk in Terraform

I have the code to create the Elastic Beanstalk application with Terraform, and here is the code I found in the Terraform docs to create an RDS instance:
resource "aws_db_instance" "default" {
allocated_storage = 10
db_name = "mydb"
engine = "mysql"
engine_version = "5.7"
instance_class = "db.t3.micro"
username = "foo"
password = "foobarbaz"
parameter_group_name = "default.mysql5.7"
skip_final_snapshot = true
}
The problem is that I can't find an example of how to connect this DB to Elastic Beanstalk.
I think the setting option is the way to go here, i.e., you probably do not need a separate resource for creating the DB. Based on the AWS docs [1] and the Terraform examples [2], it should be something like:
resource "aws_elastic_beanstalk_application" "tftest" {
name = "tf-test-name"
description = "tf-test-desc"
}
resource "aws_elastic_beanstalk_environment" "tfenvtest" {
name = "tf-test-name"
application = aws_elastic_beanstalk_application.tftest.name
solution_stack_name = "64bit Amazon Linux 2015.03 v2.0.3 running Go 1.4"
setting {
namespace = "aws:rds:dbinstance"
name = "DBAllocatedStorage"
value = "10"
}
setting {
namespace = "aws:rds:dbinstance"
name = "DBDeletionPolicy"
value = "Delete"
}
setting {
namespace = "aws:rds:dbinstance"
name = "HasCoupledDatabase"
value = "true"
}
setting {
namespace = "aws:rds:dbinstance"
name = "DBEngine"
value = "mysql"
}
setting {
namespace = "aws:rds:dbinstance"
name = "DBEngineVersion"
value = "5.7"
}
setting {
namespace = "aws:rds:dbinstance"
name = "DBInstanceClass"
value = "db.t3.micro"
}
setting {
namespace = "aws:rds:dbinstance"
name = "DBPassword"
value = "foobarbaz"
}
setting {
namespace = "aws:rds:dbinstance"
name = "DBUser"
value = "foo"
}
}
However, I am not sure whether the parameter_group_name can be set here.
EDIT: Answer updated to create a DB instance with the Elastic Beanstalk environment. However, make sure you understand this part of the docs about the HasCoupledDatabase setting:
Note: If you toggle this value back to true after decoupling the previous database, Elastic Beanstalk creates a new database with the previous database option settings. However, to maintain the security of your environment, it doesn't retain the existing DBUser and DBPassword settings. You need to specify DBUser and DBPassword again.
[1] https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options-general.html#command-options-general-rdsdbinstance
[2] https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/elastic_beanstalk_environment#option-settings

I have Terraform that creates an AWS RDS Aurora MySQL cluster; is there a way to create a table on the DB?

I have a Terraform script that creates an AWS RDS Aurora MySQL cluster:
module "cluster" {
source = "terraform-aws-modules/rds-aurora/aws"
name = var.cluster_name
master_username = var.master_username
master_password = var.master_password
create_random_password = false
database_name = var.database_name
engine = var.engine
engine_version = var.engine_version
instance_class = var.instance_class_r5
instances = {
one = {}
2 = {
instance_class = var.instance_class_r5_2
}
}
vpc_id = var.vpc_id
subnets = ["subnet-XXXX", "subnet-XXXX", "subnet-XXXX"]
allowed_security_groups = ["sg-XXXXXXXXXXXXXX"]
allowed_cidr_blocks = ["10.20.0.0/20", "144.121.18.66/32"]
storage_encrypted = true
apply_immediately = true
monitoring_interval = 10
db_parameter_group_name = aws_db_parameter_group.credential.id
db_cluster_parameter_group_name = aws_rds_cluster_parameter_group.credential.id
publicly_accessible = true
}
resource "aws_db_parameter_group" "credential" {
name = "${var.cluster_name}-aurora-db-57-parameter-group"
family = "aurora-mysql5.7"
description = "${var.cluster_name}-aurora-db-57-parameter-group"
tags = var.tags_required
}
resource "aws_rds_cluster_parameter_group" "credential" {
name = "${var.cluster_name}-aurora-57-cluster-parameter-group"
family = "aurora-mysql5.7"
description = "${var.cluster_name}-aurora-57-cluster-parameter-group"
tags = var.tags_required
}
This creates a database.
I am using Spring Boot, and usually the entity will create the table:
@Entity
@Table(name = "credential")
public class CredentialEntity {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    Long credentialId;
In my yml file I have set:
spring:
  hibernate:
    ddl-auto: update
But it does not create the table. So is there a way to create the table as part of the Terraform script?
I wouldn't recommend doing this, but if you want Terraform to deploy database structures, you can try something like:
resource "null_resource" "db_setup" {
depends_on = [module.db, aws_security_group.rds_main, aws_default_security_group.default]
provisioner "local-exec" {
command = "mysql --host=${module.cluster.cluster_endpoint} --port=${module.cluster.cluster_port} --user=${module.cluster.cluster_master_username} --password=${module.cluster.cluster_master_password} --database=${module.cluster.cluster_database_name} < ${file(${path.module}/init/db_structure.sql)}"
}
}
(This snippet is based on this answer, where you will find more examples.)
Just note: Terraform manages infrastructure. Once the AWS provider has done its work, the MySQL provider can pick up and deploy admin-level objects such as users, roles, and grants. But tables within databases belong to the application. There are other tools better suited to managing database objects; see if you can plug Flyway or Liquibase into your pipeline.
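To illustrate that split, here is a minimal sketch of the MySQL provider managing a database, a user, and a grant (the database name, application user, and password variable are hypothetical, not part of the original question):

provider "mysql" {
  # Assumes the cluster endpoint is reachable from the machine running Terraform
  endpoint = "${module.cluster.cluster_endpoint}:${module.cluster.cluster_port}"
  username = module.cluster.cluster_master_username
  password = module.cluster.cluster_master_password
}

resource "mysql_database" "app" {
  name = "credential_service" # hypothetical database name
}

resource "mysql_user" "app" {
  user               = "app_user" # hypothetical application user
  host               = "%"
  plaintext_password = var.app_db_password # assumed variable
}

resource "mysql_grant" "app" {
  user       = mysql_user.app.user
  host       = mysql_user.app.host
  database   = mysql_database.app.name
  privileges = ["SELECT", "INSERT", "UPDATE", "DELETE"]
}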

Dependency between Pub/Sub topic and subscription using a Terraform script

I am using one Terraform script to create a Pub/Sub topic and subscription. If the subscription needs to subscribe to the topic created by the same script, is there a way to create a dependency such that Terraform attempts to create the subscription only after the topic is created?
My main file looks like this:
version = ""
project = var.project_id
region = var.region
zone = var.zone
}
# module "Dataflow" {
#source = "../modules/cloud-dataflow"
#}
module "PubSubTopic" {
source = "../modules/pubsub_topic"
}
#module "PubSubSubscription" {
# source = "../modules/pubsub_subscription"
#}
#module "CloudFunction" {
# source = "../modules/cloud-function"
#}
Terraform will generally create resources in the proper order, but what you are looking for is the module meta-argument depends_on.
For example, the subscription module should be created only once the topic has been created, so you add depends_on to the subscription module.
Example from the official documentation:
resource "aws_iam_policy_attachment" "example" {
name = "example"
roles = [aws_iam_role.example.name]
policy_arn = aws_iam_policy.example.arn
}
module "uses-role" {
# ...
depends_on = [aws_iam_policy_attachment.example]
}
Official documentation: https://www.terraform.io/docs/language/meta-arguments/depends_on.html
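Applied to the modules in the question, it would look something like this (a sketch assuming the module names from your main file):

module "PubSubSubscription" {
  source = "../modules/pubsub_subscription"

  # Create the subscription only after the topic module has been created
  depends_on = [module.PubSubTopic]
}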
You can create a simple Pub/Sub topic and subscription with this snippet (just add the .json file of a service account with enough privileges to your filesystem):
provider "google" {
credentials = "${file("account.json")}" # Or use GOOGLE_APPLICATION_CREDENTIALS
project = "__your_project_id__"
region = "europe-west4" # Amsterdam
}
resource "google_pubsub_topic" "incoming_data" {
name = "incoming-data"
}
resource "google_pubsub_subscription" "incoming_subs" {
name = "Subscription_for_incoming_data"
topic = google_pubsub_topic.incoming_data.name
# Time since Pubsub receives a message to deletion.
expiration_policy {
ttl = "300000s"
}
# Time from client reception to ACK
message_retention_duration = "1200s"
retain_acked_messages = false
enable_message_ordering = false
}
To link a subscription to a topic in Terraform, you just need to set:
topic = google_pubsub_topic.TERRAFORM_TOPIC.name
Be careful with Google's requirements for topic and subscription identifiers. If they're not valid, terraform plan will pass, but you'll get an Error 400: You have passed an invalid argument to the service.

Terraform two PostgreSQL databases setup

I am very, very new to Terraform scripting.
Our system runs in AWS, and we have a single database server instance accessed by multiple microservices.
Each microservice that needs to persist data should point to a different database (schema) on the same database server. We prefer each service to have its own schema so the services are totally decoupled from each other. However, creating a separate database instance for each would be a bit too much, as some services persist close to nothing, so it would be a waste.
I created the PostgreSQL resource in a services.tf script that is common to all microservices:
resource "aws_db_instance" "my-system" {
identifier_prefix = "${var.resource_name_prefix}-tlm-"
engine = "postgres"
allocated_storage = "${var.database_storage_size}"
storage_type = "${var.database_storage_type}"
storage_encrypted = true
skip_final_snapshot = true
instance_class = "${var.database_instance_type}"
availability_zone = "${data.aws_availability_zones.all.names[0]}"
db_subnet_group_name = "${aws_db_subnet_group.default.name}"
vpc_security_group_ids = "${var.security_group_ids}"
backup_retention_period = "${var.database_retention_period}"
backup_window = "15:00-18:00" // UTC
maintenance_window = "sat:19:00-sat:20:00" // UTC
tags = "${var.tags}"
}
Now, for service-1 and service-2, I want to be able to create the corresponding database names. I don't think the code below is correct; I am just adding it to give you an idea of what I am trying to achieve.
So service-1.tf will contain:
resource "aws_db_instance" "my-system" {
name = "service_1"
}
And service-2.tf will contain:
resource "aws_db_instance" "my-system" {
name = "service_2"
}
My question is: what should I put in service-1.tf and service-2.tf to make this possible?
Thank you in advance for your input.
Terraform can only manage things at the RDS instance level; configuring the schemas inside the database is a DBA task.
One way you could automate the DBA tasks is by creating a null_resource with a local-exec provisioner that uses a PostgreSQL client to do the work.
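A minimal sketch of that approach, assuming psql is installed on the machine running Terraform, and that schemas.sql and the db_username/db_password variables are placeholders of your own:

resource "null_resource" "db_setup" {
  # Re-run the script whenever its contents change
  triggers = {
    script_hash = md5(file("${path.module}/schemas.sql"))
  }

  provisioner "local-exec" {
    # Run the schema script against the instance created above
    command = "psql --host=${aws_db_instance.my-system.address} --port=${aws_db_instance.my-system.port} --username=${var.db_username} --dbname=postgres --file=${path.module}/schemas.sql"

    environment = {
      # Keeps the password off the command line
      PGPASSWORD = var.db_password
    }
  }
}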
Alternatively, you can use count to manage both databases from one .tf file:
resource "aws_db_instance" "my-system" {
count = "2"
name = "service_${count.index}"
identifier_prefix = "${var.resource_name_prefix}-tlm-"
engine = "postgres"
allocated_storage = "${var.database_storage_size}"
storage_type = "${var.database_storage_type}"
storage_encrypted = true
skip_final_snapshot = true
instance_class = "${var.database_instance_type}"
availability_zone = "${data.aws_availability_zones.all.names[0]}"
db_subnet_group_name = "${aws_db_subnet_group.default.name}"
vpc_security_group_ids = "${var.security_group_ids}"
backup_retention_period = "${var.database_retention_period}"
backup_window = "15:00-18:00" // UTC
maintenance_window = "sat:19:00-sat:20:00" // UTC
tags = "${var.tags}"
}