I have a bunch of EC2 instances that I've spun up using CloudFormation. I need to programmatically get the AWS instance_id for each of these hosts and would ideally like to do so without having to ssh into each of the hosts and gather that information. Is there an AWS API that will provide me with this functionality? Thanks!
There are several ways. The one I like best is putting an output section in your CloudFormation template with entries for each EC2 instance created by the template. Then you can fetch those results when you create the stack, either from the program output if you create the stack with a command line tool, or with the CloudFormation API.
You can also use the CloudFormation API to fetch a list of all created resources from a stack, or the EC2 API to get a list of all instances in a region and then filter them, perhaps by launch time.
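If you just want the resources a stack created, here is a quick sketch of that second approach using boto3, the current AWS SDK for Python (the stack name and region are placeholders):

import boto3

cf = boto3.client('cloudformation', region_name='us-east-1')
# List every resource the stack created and keep only the EC2 instances
resources = cf.describe_stack_resources(StackName='MyStackName')['StackResources']
instance_ids = [r['PhysicalResourceId'] for r in resources
                if r['ResourceType'] == 'AWS::EC2::Instance']
print(instance_ids)

That said, the Outputs approach is the one I'd recommend, so here is how it looks.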
Suppose your CloudFormation stack creates EC2 instances called Tom and Jerry. Then you would add an Outputs section something like the following:
"Outputs": {
"TomId": {
"Description": "Tom's instance id",
"Value": {"Ref": "Tom"}
},
"JerryId": {
"Description": "Jerry's instance id",
"Value": {"Ref": "Jerry"}
}
}
If you create the stack from the console, there will be a tab for the stack's outputs, and that will have a table with the two instance IDs included in it. Most command-line tools for creating a stack will also have some way of including the outputs.
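For example, with the AWS CLI (assuming your stack is named MyStackName):

aws cloudformation describe-stacks --stack-name MyStackName --query "Stacks[0].Outputs"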
But you asked for how to do it programmatically. Here's an example in Python, using the excellent boto library:
import boto.cloudformation

cf = boto.cloudformation.connect_to_region('us-east-1', aws_access_key_id='MyAccessKey', aws_secret_access_key='MySecretKey')
stacks = cf.describe_stacks('MyStackName')
stack = stacks[0]
for output in stack.outputs:
    print "%s: %s" % (output.key, output.value)
Note that the describe_stacks method returns an array of matching stacks. There should be only one, but you can check that.
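The boto library above has since been superseded by boto3; an equivalent sketch (stack name, region, and credential handling are placeholders) would be:

import boto3

cf = boto3.client('cloudformation', region_name='us-east-1')
stacks = cf.describe_stacks(StackName='MyStackName')['Stacks']
for output in stacks[0]['Outputs']:
    print("%s: %s" % (output['OutputKey'], output['OutputValue']))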
I have developed an application using Java, with an Amazon RDS PostgreSQL database for data management, and hosted it in Elastic Beanstalk. Someone then suggested that I use AWS CloudFormation, so I created the infrastructure code in JSON format, which also includes the Amazon RDS instance, but I have some doubts.
When I use CloudFormation, it automatically creates a new DB instance for my application, but my Java code points at a different DB instance, so how will the application communicate with the newly created database?
Please help me clarify these doubts.
Thanks in advance...
You can expose the DB URL in the Outputs section of your CloudFormation (CFN) template so that you can retrieve the required endpoint (see the CFN outputs documentation).
The endpoint URL for your AWS::RDS::DBInstance is available through the resource's return values (via Fn::GetAtt):
Endpoint.Address: The connection endpoint for the database. For example: mystack-mydb-1apw1j4phylrk.cg034hpkmmjt.us-east-2.rds.amazonaws.com
Endpoint.Port: The port number on which the database accepts connections. For example: 3306
To get the Endpoint.Address out of your stack, you have to add an Outputs section to your template. An example would be:
"Outputs": {
"DBEndpoint": {
"Description": "Endpoint for my RDS Instance",
"Value": {
"Fn::GetAtt" : [ "MyDB", "Endpoint.Address" ]}
}
}
}
Then, using the AWS SDK for Java, you can query the Outputs of your CFN stack and use the endpoint in your Java application.
I have the AWS CLI installed on my Windows computer, and running this command "works" exactly like I want it to.
aws ec2 describe-images
I get the following output, which is exactly what I want to see, because although I have access to AWS through my corporation (e.g. to check code into CodeCommit), I can see in the AWS web console for EC2 that I don't have permission to list running instances:
An error occurred (UnauthorizedOperation) when calling the DescribeImages operation: You are not authorized to perform this operation.
I've put terraform.exe onto my computer as well, and I've created a file "example.tf" that contains the following:
provider "aws" {
region = "us-east-1"
}
I'd like to issue some sort of Terraform command that would yell at me, explaining that my AWS account is not allowed to list Amazon instances.
Most Hello World examples involve using terraform plan against a resource to do an "almost-write" against AWS.
Personally, however, I always feel more comfortable knowing that things are behaving as expected with something a bit more "truly read-only." That way, I really know the round-trip to AWS worked but I didn't modify any of my corporation's state.
There's a bunch of stuff on the internet about "data sources" and their "aws_ami" or "aws_instances" flavors, but I can't find anything that tells me how to actually use it with a Terraform command for a simple print()-type interaction (the way it's obvious that, say, "resources" go with the "terraform plan" and "terraform apply" commands).
Is there something I can do with Terraform commands to "hello world" an attempt at listing all my organization's EC2 servers and, accordingly, watching AWS tell me to buzz off because I'm not authorized?
You can use the data source for AWS instances. You create a data source similar to the below:
data "aws_instances" "test" {
instance_tags = {
Role = "HardWorker"
}
filter {
name = "instance.group-id"
values = ["sg-12345678"]
}
instance_state_names = ["running", "stopped"]
}
When you run terraform plan, this data source performs a read, listing the EC2 instances matched by the filters in the config, using the IAM credentials of whoever is running Terraform. With your permissions that read will fail with the lack-of-authorization error you described, which is your stated goal. You should modify the filters to target your organization's EC2 instances.
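For a print()-style round trip, one minimal sketch is to add an output that reads from the data source (the ids attribute is exported by aws_instances; the tag and security group values above are just examples you would replace):

output "instance_ids" {
  value = "${data.aws_instances.test.ids}"
}

Then run terraform init followed by terraform plan. Because data sources are read at plan time, credentials that lack ec2:DescribeInstances will make the plan fail right there with the UnauthorizedOperation error you are looking for, without touching any of your corporation's state.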
We have multiple AWS stacks for our application (dev, test, prod, etc). These are all created with Cloud Formation templates. Some bright person before me decided to use a different template for the prod stack to the rest. I'd like to consolidate to only having one Cloud Formation template.
I've done a file comparison between the two templates and the only difference is the Logical ID of the RDS instance. So we have something like this:
"MyDbInstanceDev": {
"Type": "AWS::RDS::DBInstance",
"Properties": {
"AllocatedStorage": "200",
"CopyTagsToSnapshot" : true,
"DBInstanceClass": "db.m3.medium",
Verses:
"MyDbInstance": {
"Type": "AWS::RDS::DBInstance",
"Properties": {
"AllocatedStorage": "200",
"CopyTagsToSnapshot" : true,
"DBInstanceClass": "db.m3.medium",
If I change the logical ID from "MyDbInstanceDev" to "MyDbInstance", though, CloudFormation thinks that I want to delete the existing RDS instance and create a new one in the stacks where the original template had the old logical ID.
Is there some way I can change the logical ID and make all my stacks have the same template without losing the DB?
No, you cannot do that with a CloudFormation template (CFT), because CloudFormation maintains the state of the stack (even though you can't see it). Even if you rename the DB instance, you will face problems down the road if you try to delete the stack.
You can use DBInstanceIdentifier as a parameter, and every time you want to create the DB, create a new stack deployment rather than updating the stack. The logical ID of a CFT resource does not have much significance unless you are auto-naming the resources.
To fix your existing deployments, do this:
1. Take a snapshot of the database that was deployed using MyDbInstanceDev.
2. Add a parameter named RestoreFromDBSnapshotIdentifier to the MyDbInstance CFT.
3. Add the DBSnapshotIdentifier property to MyDbInstance and set it conditionally when RestoreFromDBSnapshotIdentifier is not blank (see the sketch after these steps).
4. Deploy.
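A rough sketch of what steps 2 and 3 might look like in the shared template (the parameter, condition, and surrounding property values here are illustrative, borrowed from the snippet in the question):

"Parameters": {
  "RestoreFromDBSnapshotIdentifier": {
    "Type": "String",
    "Default": ""
  }
},
"Conditions": {
  "HasSnapshot": { "Fn::Not": [ { "Fn::Equals": [ { "Ref": "RestoreFromDBSnapshotIdentifier" }, "" ] } ] }
},
"Resources": {
  "MyDbInstance": {
    "Type": "AWS::RDS::DBInstance",
    "Properties": {
      "AllocatedStorage": "200",
      "CopyTagsToSnapshot": true,
      "DBInstanceClass": "db.m3.medium",
      "DBSnapshotIdentifier": { "Fn::If": [ "HasSnapshot", { "Ref": "RestoreFromDBSnapshotIdentifier" }, { "Ref": "AWS::NoValue" } ] }
    }
  }
}

Deploying the old dev/test stacks with the parameter set to the snapshot you took lets them come back up under the MyDbInstance logical ID with their existing data, after which all environments can share the one template.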
I'm very new to Terraform and am trying use it to replicate what I've successfully created via the AWS console.
I'm trying to specify a "SSM Run Command" as a target for a Cloudwatch Rule and can get everything defined using the aws_cloudwatch_event_target resource except the "Document" field. The rule target and all other associated bits and pieces are all successfully created but when I edit the rule from the console, the document section is not filled out (screenshot below). Consequently the rule fails to fire.
target-as-shown-in-console
Looking at the Terraform documentation for aws_cloudwatch_event_target, I can't see any parameters to specify for the Document so I'm wondering if this is even possible? Which would be odd given every other parameter seems to be covered.
Below is the code I'm using to create the target - there is hard coded stuff in there but I'm just trying to get it to work at this point.
resource "aws_cloudwatch_event_target" "autogrow" {
rule = "autogrow"
arn = "arn:aws:ssm:eu-west-1:999999999999:document/AWS-RunShellScript"
role_arn = "arn:aws:iam::999999999999:role/ec2-cloudwatch"
run_command_targets {
key = "tag:InstanceIds"
values = ["i-99999999999"]
}
input = <<INPUT
{
"commands": "/data/ssmscript.sh",
"workingDirectory" : "/data",
"executionTimeout" : "300"
}
INPUT
}
Is it possible to do what I'm trying to do via Terraform? It works via the console, so I'm wondering if the functionality just isn't in Terraform yet. I'd expect to be able to specify a "Document" parameter, but all you can specify for the target is "arn".
Any help would be greatly appreciated!
I had the same problem of the document not being selected correctly when the target was created via CloudFormation.
What was I doing wrong? The ARN I had for the AWS managed document was wrong.
When I fixed the ARN for AWS-RunShellScript, the document started showing up in the console after CloudFormation created the resource:
arn:aws:ssm:ap-southeast-2::document/AWS-RunShellScript
Most documentation I went through had an account ID in the ARN; removing the account ID solved the problem.
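In Terraform terms that would mean dropping the account ID from the document ARN in the aws_cloudwatch_event_target (the region and role below are just carried over from the example in the question):

  arn      = "arn:aws:ssm:eu-west-1::document/AWS-RunShellScript"
  role_arn = "arn:aws:iam::999999999999:role/ec2-cloudwatch"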
I think what you need to do is create one of these:
https://www.terraform.io/docs/providers/aws/r/ssm_document.html
This will create an SSM document in AWS for you; once you have that, you need to associate the document with your instances using an aws_ssm_association.
https://www.terraform.io/docs/providers/aws/r/ssm_association.html
Once the document is associated with your instances, the event should be able to be triggered via CloudWatch.
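As a rough sketch of those two resources (the document name is made up, the script path and instance ID are carried over from the question, and the syntax is Terraform 0.11-style to match the rest of the post):

resource "aws_ssm_document" "autogrow_script" {
  name          = "RunAutogrowScript"
  document_type = "Command"

  content = <<DOC
{
  "schemaVersion": "2.2",
  "description": "Run the autogrow shell script",
  "mainSteps": [
    {
      "action": "aws:runShellScript",
      "name": "runShellScript",
      "inputs": {
        "runCommand": ["/data/ssmscript.sh"],
        "workingDirectory": "/data"
      }
    }
  ]
}
DOC
}

resource "aws_ssm_association" "autogrow_script" {
  name = "${aws_ssm_document.autogrow_script.name}"

  targets {
    key    = "InstanceIds"
    values = ["i-99999999999"]
  }
}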
You can create your own SSM document, or you can use the AWS-managed documents.
To get the contents of the document owned by AWS:
data "aws_ssm_document" "aws_doc" {
name = "AWS-RunShellScript"
document_format = "JSON"
}
output "content" {
value = "${data.aws_ssm_document.aws_doc.content}"
}
Reference: https://www.terraform.io/docs/providers/aws/d/ssm_document.html
I am creating an RDS instance using CloudFormation using this:
"Resources": {
"myDB": {
"Type": "AWS::RDS::DBInstance",
"Properties": {
"AllocatedStorage": "5",
"DBInstanceClass": "db.m1.small",
"Engine": "MySQL",
"EngineVersion": "5.5",
"DBName": "mydb",
"MasterUsername": {
"Ref": "DBUser"
},
"MasterUserPassword": {
"Ref": "DBPassword"
},
"DBParameterGroupName": {
"Ref": "myRDSParamGroup"
}
}
}
and it all works. But I need to run some initial SQL on the DB when it's created, to set up my app's schema. My current approach is to have the app self-migrate, but I'd like to do it in the CloudFormation definition. Is this possible?
No, it's not possible. However, you could have an EC2 instance connect to your RDS instance to do it. I'd probably store a .sql file in S3 and use a cloud-init script on the EC2 instance to download the file and execute it.
It would also be possible to create a CloudFormation custom resource. There is a good discussion about how to build one using SNS here; it is also possible to build one using Lambda. Custom resources are essentially just RPCs, so it wouldn't be difficult to create one to initialize a database with a schema, for example.
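As a minimal sketch of the Lambda side of such a custom resource, assuming a MySQL database, that pymysql is bundled with the function's deployment package, and that the endpoint and credentials are passed in as resource properties (all names here are illustrative):

import pymysql      # assumed to be bundled with the deployment package
import cfnresponse  # available to inline (ZipFile) Lambda functions

# Hypothetical schema to apply on stack creation
SCHEMA_SQL = "CREATE TABLE IF NOT EXISTS users (id INT PRIMARY KEY, name VARCHAR(64))"

def handler(event, context):
    try:
        if event['RequestType'] == 'Create':
            props = event['ResourceProperties']
            conn = pymysql.connect(host=props['DBHost'],
                                   user=props['DBUser'],
                                   password=props['DBPassword'],
                                   database=props['DBName'])
            with conn.cursor() as cursor:
                cursor.execute(SCHEMA_SQL)
            conn.commit()
        # Updates and deletes are no-ops in this sketch
        cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
    except Exception as exc:
        cfnresponse.send(event, context, cfnresponse.FAILED, {'Error': str(exc)})

In the template, the custom resource would declare a DependsOn for the DB instance and pass its Endpoint.Address (via Fn::GetAtt) as the DBHost property.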
CloudFormation still doesn't hold any solutions for us, but hopefully they will add Database Migration Service support soon.
In the meantime, there is a great solution if you're using CodePipeline: create a migration stage that invokes a Lambda function to run your migration. I stumbled across this guide for invoking Lambda from CodePipeline that may be helpful for those unfamiliar.
Another option is to use the DBSnapshotIdentifier property of the AWS::RDS::DBInstance resource. The only catch is that you need to have a DB loaded in AWS to create the snapshot from in the first place. From then on, though, you can automate your CloudFormation stack to use it.
DBSnapshotIdentifier:
Name (ARN) of the DB snapshot that's used to restore the DB instance.
If the property contains a value (other than an empty string), AWS CloudFormation creates a database from the specified snapshot.
After you restore a DB instance with a DBSnapshotIdentifier property, you must specify the same DBSnapshotIdentifier property for any future updates to the DB instance. When you specify this property for an update, the DB instance is not restored from the DB snapshot again, and the data in the database is not changed. However, if you don't specify the DBSnapshotIdentifier property, an empty DB instance is created, and the original DB instance is deleted.
Look in the docs for more info:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-rds-database-instance.html#cfn-rds-dbinstance-dbsnapshotidentifier
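As a rough sketch against the template in the question, it is just one more property on myDB (the snapshot identifier here is hypothetical):

"myDB": {
  "Type": "AWS::RDS::DBInstance",
  "Properties": {
    "DBSnapshotIdentifier": "my-initialized-db-snapshot",
    "AllocatedStorage": "5",
    "DBInstanceClass": "db.m1.small",
    "Engine": "MySQL",
    "MasterUsername": { "Ref": "DBUser" },
    "MasterUserPassword": { "Ref": "DBPassword" }
  }
}

Since the restore brings the schema and data with it, the database name and contents come from the snapshot rather than from the template.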