Automatically provisioning an instance to connect to a database cluster endpoint

I am looking for a way to provision an instance with a configuration file that contains the endpoints for connecting to a database cluster, automatically, using Terraform. I am using an aws_rds_cluster resource, from which I can get the endpoint with the expression aws_rds_cluster.my-cluster.endpoint. I would then like to provision machines created with an aws_instance resource so that the value of that expression is stored in the file /DBConfig.sh.
The content of the DBConfig.sh file would look like this:
#!/bin/bash
ENDPOINT=<$aws_rds_cluster.my-cluster.endpoint$>
READER_ENDPOINT=<$aws_rds_cluster.my-cluster.reader_endpoint$>
Truth be told, once I successfully reach that point, I'd like to be able to do the same thing for machines created by an aws_launch_configuration resource.
Is this something that can be done with Terraform? If not, what other tools can I use to achieve this kind of automation? Thanks for your help!

There are a few ways to achieve that. I think all of them would involve user_data.
For example, you could have aws_instance with the user_data as follows:
resource "aws_instance" "web" {
  # ... other attributes ...

  user_data = <<-EOL
    #!/bin/bash
    cat > /DBConfig.sh <<-EOL2
    #!/bin/bash
    ENDPOINT=${aws_rds_cluster.my-cluster.endpoint}
    READER_ENDPOINT=${aws_rds_cluster.my-cluster.reader_endpoint}
    EOL2
    chmod +x /DBConfig.sh
  EOL
}
The above will launch an instance that has a DBConfig.sh, with the endpoint values resolved, in its root (/) directory.
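For the second part of the question, the same approach should carry over to aws_launch_configuration, which also accepts user_data. A minimal sketch (the resource name, AMI and instance type here are placeholder assumptions, not taken from the question):

```hcl
resource "aws_launch_configuration" "web" {
  # name_prefix, image_id and instance_type are placeholder values
  name_prefix   = "web-"
  image_id      = "ami-123456"
  instance_type = "t2.micro"

  # Same script as in the aws_instance example: write the resolved
  # endpoints to /DBConfig.sh at boot
  user_data = <<-EOL
    #!/bin/bash
    cat > /DBConfig.sh <<-EOL2
    #!/bin/bash
    ENDPOINT=${aws_rds_cluster.my-cluster.endpoint}
    READER_ENDPOINT=${aws_rds_cluster.my-cluster.reader_endpoint}
    EOL2
    chmod +x /DBConfig.sh
  EOL
}
```

Every instance launched from this configuration (for example via an auto scaling group) would then get the same file.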

Related

Terraform statefile and actual infra

I am new to Terraform.
I created a security group in AWS with some ingress rules using Terraform. Someone then added a new ingress rule for port 5429 using the console. I wanted to bring this change into Terraform, so I used the command below:
terraform apply -refresh-only
Now I can see port 5429 open in the Terraform state file. But when I ran terraform apply, the change was gone from both the console and the state file.
I want the change to persist in the console as well as in Terraform. Please advise.
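This behaviour is expected: -refresh-only only updates the state to match reality, and on the next apply Terraform reverts anything that is not also in the configuration. For the rule to persist, it also has to be added to the .tf code. A minimal sketch, assuming the security group resource is called aws_security_group.example and the rule is plain TCP open to the world (the resource name, protocol and CIDR are assumptions for illustration):

```hcl
resource "aws_security_group" "example" {
  name = "example"

  # ... existing ingress rules ...

  # Add the rule that was created in the console, so that
  # terraform apply no longer tries to remove it.
  ingress {
    from_port   = 5429
    to_port     = 5429
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```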

Terraform command to list existing AWS resources as a Hello World

I have the AWS CLI installed on my Windows computer, and running this command "works" exactly like I want it to.
aws ec2 describe-images
I get the following output, which is exactly what I want to see, because although I have access to AWS through my corporation (e.g. to check code into CodeCommit), I can see in the AWS web console for EC2 that I don't have permission to list running instances:
An error occurred (UnauthorizedOperation) when calling the DescribeImages operation: You are not authorized to perform this operation.
I've put terraform.exe onto my computer as well, and I've created a file "example.tf" that contains the following:
provider "aws" {
  region = "us-east-1"
}
I'd like to issue some sort of Terraform command that would yell at me, explaining that my AWS account is not allowed to list Amazon instances.
Most Hello World examples involve using terraform plan against a resource to do an "almost-write" against AWS.
Personally, however, I always feel more comfortable knowing that things are behaving as expected with something a bit more "truly read-only." That way, I really know the round-trip to AWS worked but I didn't modify any of my corporation's state.
There's a bunch of stuff on the internet about "data sources" and their "aws_ami" or "aws_instances" flavors, but I can't find anything that tells me how to actually use it with a Terraform command for a simple print()-type interaction (the way it's obvious that, say, "resources" go with the "terraform plan" and "terraform apply" commands).
Is there something I can do with Terraform commands to "hello world" an attempt at listing all my organization's EC2 servers and, accordingly, watching AWS tell me to buzz off because I'm not authorized?
You can use the data source for AWS instances. You create a data source similar to the one below:
data "aws_instances" "test" {
  instance_tags = {
    Role = "HardWorker"
  }

  filter {
    name   = "instance.group-id"
    values = ["sg-12345678"]
  }

  instance_state_names = ["running", "stopped"]
}
This will attempt a read action listing the EC2 instances matched by the filters you put in the config, using the IAM permissions associated with the user you run terraform plan as. This will result in the error you described regarding lack of authorization, which is your stated goal. You should modify the filters to target your organization's EC2 instances.
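For the print()-style interaction the question asks about, you could also add an output that reads from the data source; terraform apply (or terraform plan) will then either display the instance IDs or fail with the UnauthorizedOperation error while reading the data source. A minimal sketch (the output name is made up for illustration):

```hcl
output "instance_ids" {
  # IDs of the instances matched by the data source above
  value = "${data.aws_instances.test.ids}"
}
```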

Exporting Google cloud configuration

Is there a way to export Google Cloud configuration for an object, such as for the load balancer, in the same way as one would use to set it up via the API?
I can quickly configure what I need in the console site, but I am spending tons of time trying to replicate that with Terraform. It would be great if I can generate Terraform files, or at least the Google API output, from the system I already have configured.
If you have something already created outside of Terraform and want Terraform to manage it, or want to work out how best to configure it with Terraform, you can use Terraform's import command for any resource that supports it.
So if you have created a forwarding rule called terraform-test via the Google Cloud console and want to know how that maps to Terraform's google_compute_forwarding_rule resource then you could run terraform import google_compute_forwarding_rule.default terraform-test to import this into Terraform's state file.
If you then run a plan, Terraform will tell you that it has google_compute_forwarding_rule.default in its state but that the resource is not defined in your code, and as such it will want to remove it.
If you add the minimal config needed to make the plan work:
resource "google_compute_forwarding_rule" "default" {
  name = "terraform-test"
}
And run the plan again, Terraform will then tell you what it needs to change to make your imported forwarding rule look like the config you have defined. Assuming you've done something like set a description on the forwarding rule via the console, Terraform's plan will show something like this:
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create
  ~ update in-place
-/+ destroy and then create replacement

Terraform will perform the following actions:

  ~ google_compute_forwarding_rule.default
      description: "This is the description I added via the console" => ""

Plan: 0 to add, 1 to change, 0 to destroy.
This tells you that Terraform wants to remove the description on the forwarding rule to make it match the config.
If you then update your resource definition to something like:
resource "google_compute_forwarding_rule" "default" {
  name        = "terraform-test"
  description = "This is the description I added via the console"
}
Terraform's plan will then show an empty change set:
No changes. Infrastructure is up-to-date.
This means that Terraform did not detect any differences between your
configuration and real physical resources that exist. As a result, no
actions need to be performed.
At this point you have now aligned your Terraform code with the reality of the resource in Google Cloud and should be able to easily see what needs to be set on the Terraform side to make things happen as expected in the Google Cloud console.

How to use Terraform to execute SQL script on RDS MySQL?

I created an aws_db_instance to provision an RDS MySQL database using a Terraform configuration. My next step is to execute a SQL script (CREATE TABLE and INSERT statements) on the RDS instance. I did the following, but it has no effect; terraform plan cannot even see my change. What did I miss here? Thanks.
resource "aws_db_instance" "mydb" {
  # ...

  provisioner "remote-exec" {
    inline = [
      "chmod +x script.sql",
      "script.sql args",
    ]
  }
}
Check out this post: How to apply SQL Scripts on RDS with Terraform
If you're just trying to set up users and permissions (you shouldn't keep using the root password you set when you create the RDS instance), there is a Terraform provider for that:
https://www.terraform.io/docs/providers/mysql/index.html
But you're looking for DB schema creation and seeding. That provider cannot do that.
If you're open to doing it another way, you may want to look at SSM automation documents and/or Lambda. I'd use Lambda. Pick a language that you're comfortable with. Set the Lambda's role so that it has permission to read the password it needs to do the work. You can store the password in SSM Parameter Store. Then script your DB work.
Then do a local-exec in Terraform that simply calls the Lambda, passing it the ID of the RDS instance and the path to the secret in SSM Parameter Store. That ensures the DB operations are done from compute inside the VPC, without having to set up an EC2 bastion just for that purpose.
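A minimal sketch of that local-exec step, assuming a Lambda named seed-database and a parameter at /myapp/db/password already exist (both names are made up for illustration, as is the null_resource wiring):

```hcl
resource "null_resource" "db_seed" {
  # Re-run the seeding when the database instance changes
  triggers = {
    db_instance_id = "${aws_db_instance.mydb.id}"
  }

  provisioner "local-exec" {
    # Invoke the (hypothetical) seed-database Lambda, passing it the
    # DB identifier and the SSM Parameter Store path of the password
    command = "aws lambda invoke --function-name seed-database --payload '{\"db_id\":\"${aws_db_instance.mydb.id}\",\"secret_path\":\"/myapp/db/password\"}' seed-result.json"
  }
}
```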
Here's how JavaScript can get this done, for example:
https://www.w3schools.com/nodejs/nodejs_mysql_create_table.asp

How can I generate an execution plan from Terraform configuration without connecting to AWS?

I'm writing a unit test for a Terraform module, and I would like to confirm that the module produces the execution plan that I expect. However, connecting to Amazon within a test would take too long and require too much configuration of the continuous integration server.
How can I use terraform plan to generate an execution plan from my configuration that assumes that no resources exist?
I've been considering something similar for a testing framework around Terraform modules, and have previously used Moto for mocking Boto calls in Python.
Moto works by monkey patching calls to AWS, so it only works natively with Python. However, it does provide the mocked backend as a server running on Flask, to be used in a standalone mode.
That said, I've just tried it with Terraform and, while plans seem to work okay, applying even a very basic configuration led to this error:
* aws_instance.web: Error launching source instance: SerializationError: failed decoding EC2 Query response
caused by: parsing time "2015-01-01T00:00:00+0000" as "2006-01-02T15:04:05Z": cannot parse "+0000" as "Z"
I then happened to notice that plans complete fine even when the Moto server isn't running and I'm just using a non-existent local endpoint in the AWS provider.
As such, if you just need plans then you should be able to do this by adding an endpoints block that points to localhost, like this:
provider "aws" {
  skip_credentials_validation = true
  max_retries                 = 1
  skip_metadata_api_check     = true
  access_key                  = "a"
  secret_key                  = "a"
  region                      = "us-west-2"

  endpoints {
    ec2 = "http://127.0.0.1:5000/"
  }
}

resource "aws_instance" "web" {
  ami           = "ami-123456"
  instance_type = "t2.micro"

  tags {
    Name = "HelloWorld"
  }
}
How you inject that endpoint block for testing, and not for real-world usage, is probably another question and would need more information about how your tests are being constructed.
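One possible approach to that injection (just a sketch of one option, not from the answer above) is Terraform's override file mechanism: a file whose name is override.tf or ends in _override.tf is merged over the base configuration, so CI could drop in the fake provider settings without touching the real code:

```hcl
# provider_override.tf -- copied into the working directory only in CI;
# Terraform merges *_override.tf files over the base configuration,
# replacing the real provider "aws" settings for the duration of the test.
provider "aws" {
  skip_credentials_validation = true
  skip_metadata_api_check     = true
  access_key                  = "a"
  secret_key                  = "a"
  region                      = "us-west-2"

  endpoints {
    ec2 = "http://127.0.0.1:5000/"
  }
}
```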
Does terraform plan -refresh=false do what you want?
I use it to do a "fast plan", without taking the time to refresh the status of all the AWS resources.
Not sure if it actually connects to AWS to do that though.
If you're using a more complicated remote-state setup, and that's the part you don't want to configure, you could also try adding an empty tfstate file and pointing to it with the -state option.