I need to enable the backup replication feature for AWS RDS Oracle through Terraform. Are there any attributes on the Terraform side for that particular feature?
The only argument on the Terraform side is aws_db_instance's replicate_source_db:
replicate_source_db - (Optional) Specifies that this resource is a Replicate database, and to use this value as the source database. This correlates to the identifier of another Amazon RDS Database to replicate (if replicating within a single region) or ARN of the Amazon RDS Database to replicate (if replicating cross-region). Note that if you are creating a cross-region replica of an encrypted database you will also need to specify a kms_key_id.
replicate_source_db should be set to the ID of the source database (or its ARN for cross-region replication). For example:
resource "aws_db_instance" "oracle" {
# ... other arguments
}
resource "aws_db_instance" "oracle_replicant" {
# ... other arguments
replicate_source_db = aws_db_instance.oracle.id
}
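For a cross-region replica of an encrypted database, a minimal sketch might look like this (the provider alias, target region, and aws_kms_key.replica are assumptions, not part of the original question):

provider "aws" {
  alias  = "replica_region"
  region = "us-west-2"
}

resource "aws_db_instance" "oracle_cross_region" {
  provider = aws.replica_region
  # ... other arguments

  # Cross-region replication takes the source ARN rather than its ID
  replicate_source_db = aws_db_instance.oracle.arn

  # Encrypted cross-region replicas also need a KMS key in the target region
  kms_key_id = aws_kms_key.replica.arn
}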
Reference
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/oracle-read-replicas.html
I want to set my Redis cluster on AWS ElastiCache to the LRU eviction mode. The version of my Redis cluster is 5.0.6.
I have looked through the documentation of the Terraform aws_elasticache_replication_group resource, but I cannot find any attribute to set the eviction policy. As far as I know, the default policy is no eviction.
How can I change the eviction policy in Terraform?
ElastiCache configuration is done via the aws_elasticache_parameter_group resource. You can then specify any of the parameters that are allowed by ElastiCache.
Looking at the available parameters, you would want to set maxmemory-policy. It's worth noting that the default isn't noeviction; Redis on ElastiCache defaults to volatile-lru in all current versions, which might be what you need anyway. If instead you wanted allkeys-lru, you would do something like the following:
resource "aws_elasticache_parameter_group" "this" {
name = "cache-params"
family = "redis5.0"
parameter {
name = "maxmemory-policy"
value = "allkeys-lru"
}
}
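The parameter group only takes effect once it is attached to your cluster. A minimal sketch, assuming the rest of your replication group arguments (node type, subnet group, and so on) already exist in your configuration:

resource "aws_elasticache_replication_group" "this" {
  replication_group_id          = "example-redis"
  replication_group_description = "Redis with allkeys-lru eviction"
  engine                        = "redis"
  engine_version                = "5.0.6"
  node_type                     = "cache.t3.micro"
  number_cache_clusters         = 2

  # Attach the custom parameter group so maxmemory-policy applies
  parameter_group_name = aws_elasticache_parameter_group.this.name
}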
I have 2 directories:
aws/
k8s/
In the aws/ dir, I've provisioned an EKS cluster and EKS node group (among other things) using the Terraform AWS provider. That's been applied and everything looks good there.
When I then try to create a plan with the Kubernetes provider in k8s/ and create a Persistent Volume resource, it requires the EBS volume ID.
Terraform Kubernetes Persistent Volume Resource
How do I get the EBS volume ID from the other .tfstate file from a Kubernetes provider plan?
So as I understand it, you want to reference a resource from another state file. To do that you can use the terraform_remote_state data source, as in the following example:
data "terraform_remote_state" "aws_state" {
backend = "remote"
config = {
organization = "hashicorp"
workspaces = {
name = "state-name"
}
}
}
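If the aws/ state lives on local disk rather than in a remote backend, the same data source works with the local backend (the path below is an assumption about your directory layout):

data "terraform_remote_state" "aws_state" {
  backend = "local"

  config = {
    # Relative path from k8s/ to the aws/ state file
    path = "../aws/terraform.tfstate"
  }
}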
And once you have the data source available, you can reference the EBS volume in the following way:
data.terraform_remote_state.aws_state.outputs.ebs_volume_id
Remember to create an output called ebs_volume_id in the aws/ configuration.
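A minimal sketch of both ends, assuming the volume in aws/ is an aws_ebs_volume resource named this (the names and sizes here are assumptions):

# In aws/: expose the volume ID as an output
output "ebs_volume_id" {
  value = aws_ebs_volume.this.id
}

# In k8s/: consume it from the remote state
resource "kubernetes_persistent_volume" "this" {
  metadata {
    name = "example-pv"
  }

  spec {
    capacity = {
      storage = "10Gi"
    }
    access_modes = ["ReadWriteOnce"]

    persistent_volume_source {
      aws_elastic_block_store {
        volume_id = data.terraform_remote_state.aws_state.outputs.ebs_volume_id
      }
    }
  }
}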
I want to configure my SQS Terraform script to use an AWS-provided SSE key.
I know that you can do this with the following code:
resource "aws_sqs_queue" "terraform_queue" {
name = "terraform-example-queue"
kms_master_key_id = "alias/aws/sqs"
kms_data_key_reuse_period_seconds = 300
}
But with this example I would need to first create my own KMS key. In the AWS console it is possible to use a default one without creating it myself. How do I do this in Terraform? What do I have to put in kms_master_key_id?
The default key for any service is given by the alias alias/aws/$service, so when you refer to alias/aws/sqs you are already using the default AWS managed KMS key for that service in that region; there is no need to create a key yourself.
This is briefly covered in the AWS user guide:
The alias name cannot begin with aws/. The aws/ prefix is reserved by Amazon Web Services to represent AWS managed CMKs in your account.
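If you'd rather resolve the key explicitly instead of hard-coding the alias string, a small sketch using the aws_kms_alias data source (the resource names here are assumptions):

data "aws_kms_alias" "sqs" {
  name = "alias/aws/sqs"
}

resource "aws_sqs_queue" "terraform_queue" {
  name = "terraform-example-queue"

  # Resolves to the AWS managed SQS key in the current region
  kms_master_key_id                 = data.aws_kms_alias.sqs.target_key_arn
  kms_data_key_reuse_period_seconds = 300
}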
AWS has recently launched support for storage autoscaling of RDS instances. We have multiple RDS instances with over provisioned storage in our production environment. We want to utilise this new feature to reduce some costs. Since we cannot reduce the storage of a live RDS instance, we will have to first create a RDS instance with less storage with autoscaling support and then migrate the existing data to new instance and then delete the old instance.
We use Terraform with the terraform-aws-provider to create our infrastructure. The problem is that I am not able to achieve the above strategy using Terraform.
Here is what I have tried: I modified the existing RDS creation script to create two more resources, one of type aws_db_snapshot and the other of type aws_db_instance (using the snapshot). However, I get the following error:

Error modifying DB Instance (test-rds-snapshot): InvalidParameterCombination: Invalid storage size for engine name postgres and storage type gp2: 20
# Existing RDS instance with over-provisioned storage
resource "aws_db_instance" "test_rds" {
  # ...
}

# My changes below

# The snapshot
resource "aws_db_snapshot" "test_snapshot" {
  db_instance_identifier = aws_db_instance.test_rds.id
  db_snapshot_identifier = "poc-snapshot"
}

# New instance with autoscaling support and reduced storage
resource "aws_db_instance" "test_rds_snapshot" {
  identifier            = "test-rds-snapshot"
  allocated_storage     = 20
  max_allocated_storage = 50
  snapshot_identifier   = aws_db_snapshot.test_snapshot.id
  # ...
}
I want to know whether I am on the right track and whether I will be able to migrate production databases using this strategy. Let me know if you need more information.
I created an aws_db_instance to provision an RDS MySQL database using a Terraform configuration. My next step is to execute a SQL script (CREATE TABLE and INSERT statements) against the RDS instance. I tried the following, but it has no effect; terraform plan cannot even see my change to execute the SQL. What did I miss here? Thanks.
resource "aws_db_instance" "mydb" {
# ...
provisioner "remote-exec" {
inline = [
"chmod +x script.sql",
"script.sql args",
]
}
}
Check out this post: How to apply SQL Scripts on RDS with Terraform
A note on why your attempt has no effect: provisioners only run when the resource is created, so terraform plan won't show them as a change, and remote-exec needs an SSH or WinRM connection to a host, which an RDS instance doesn't expose.
If you're just trying to set up users and permissions (you shouldn't use the root password you set when you create the RDS instance), there is a Terraform provider for that:
https://www.terraform.io/docs/providers/mysql/index.html
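A minimal sketch of that provider for user and grant management (the user, database, and variable names are assumptions):

provider "mysql" {
  endpoint = aws_db_instance.mydb.endpoint
  username = aws_db_instance.mydb.username
  password = var.db_root_password
}

resource "mysql_user" "app" {
  user               = "app"
  host               = "%"
  plaintext_password = var.app_db_password
}

resource "mysql_grant" "app" {
  user       = mysql_user.app.user
  host       = mysql_user.app.host
  database   = "appdb"
  privileges = ["SELECT", "INSERT", "UPDATE", "DELETE"]
}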
But you're looking for DB schema creation and seeding, and that provider cannot do that.
If you're open to doing it another way, you may want to look at SSM Automation documents and/or Lambda. I'd use Lambda: pick a language you're comfortable with, give the Lambda's role permission to read the password it needs (you can keep the password in SSM Parameter Store), and then script your DB work.
Then do a local-exec in Terraform that simply invokes the Lambda, passing it the ID of the RDS instance and the path to the secret in SSM Parameter Store. That ensures the DB operations run from compute inside the VPC without having to set up an EC2 bastion just for that purpose.
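A minimal sketch of the local-exec side, assuming a Lambda function named seed-db and a secret at /prod/db/password (both names are assumptions):

resource "null_resource" "db_seed" {
  # Re-run the seeding when the database instance is replaced
  triggers = {
    db_instance_id = aws_db_instance.mydb.id
  }

  provisioner "local-exec" {
    # Invoke the (hypothetical) seed-db Lambda with the instance ID and secret path
    command = "aws lambda invoke --function-name seed-db --payload '{\"dbInstanceId\": \"${aws_db_instance.mydb.id}\", \"secretPath\": \"/prod/db/password\"}' response.json"
  }
}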
Here's how JavaScript can get this done, for example:
https://www.w3schools.com/nodejs/nodejs_mysql_create_table.asp