I have two Terraform projects. For both projects I am using the same credentials, and state is stored in an S3 bucket. In the first project I create the VPC and the subnets. In the second project I am creating an RDS instance, and I want to put it in the VPC and subnets created in the first project. How do I do that?
Thanks!
In the RDS project, pull in the VPC and subnets as data sources. You can pull in the VPC like so:
data "aws_vpc" "my_vpc" {
  id = var.vpc_id
}
And you can reference that as data.aws_vpc.my_vpc.
Depending on your use case, you might want the aws_subnet_ids or aws_subnets data sources to reference the subnets as a group.
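For example, a sketch of wiring the subnets into an RDS subnet group might look like this (the resource names and the subnet group name are hypothetical, not taken from your project):

```hcl
# Look up all subnets belonging to the VPC fetched above.
data "aws_subnets" "my_subnets" {
  filter {
    name   = "vpc-id"
    values = [data.aws_vpc.my_vpc.id]
  }
}

# RDS places instances into a VPC through a DB subnet group.
resource "aws_db_subnet_group" "rds" {
  name       = "rds-subnet-group"   # hypothetical name
  subnet_ids = data.aws_subnets.my_subnets.ids
}
```

The aws_db_instance resource would then point at the group via its db_subnet_group_name attribute.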
Data sources are a powerful way to bridge this kind of gap, but if you're not careful they can lead to some nasty dependency issues in your build. Keeping networking away from application resources is a sensible division, but consider merging the projects if you expect the networking to change frequently.
I'm having a lot of trouble trying to attach a security group to an RDS instance in Terraform.
I was using the aws_network_interface_sg_attachment resource to try to make the attachment, but since I am dealing with an RDS instance, I do not have the network interface ID that I have on other instances to attach to the SG.
How can I attach the RDS instance to a specific security group, then?
Bear in mind that I already have the RDS instance identifier, such as this one:
data "aws_db_instance" "example" {
  db_instance_identifier = "example-instance-1"
}
To change the security group of an existing RDS instance that was created outside of Terraform (TF), you have to import it into TF before you can modify it.
The import procedure is specific to a given use case, so it's difficult to provide a universally valid example. However, you can read the official guide on doing this:
Import Terraform configuration
The general steps would be (example):
Create a configuration corresponding to your RDS instance
resource "aws_db_instance" "db" {
  engine = "mysql"
  # other attributes
}
Execute terraform import to import existing db into the aws_db_instance.db.
Run terraform plan to check the differences between your configuration and the actual state of the existing RDS instance, and adjust the configuration if needed.
Once the RDS instance is under Terraform's control you can modify it, e.g. its security group. But since this is RDS, please be careful and make sure you have a backup. Mistakes can lead to bigger trouble, e.g. accidental changes to, or deletion of, a production database. It is therefore a good idea to test the approach on a test database first, e.g. a replica of the real RDS instance.
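Once the import (e.g. `terraform import aws_db_instance.db example-instance-1`) has succeeded, the security group is changed through the vpc_security_group_ids attribute on the instance itself; RDS does not use network-interface SG attachments. A sketch, where the security group is a hypothetical stand-in:

```hcl
resource "aws_security_group" "rds_sg" {
  name_prefix = "rds-"   # hypothetical security group
  vpc_id      = var.vpc_id
}

resource "aws_db_instance" "db" {
  engine = "mysql"
  # other attributes, matching the imported instance

  # Attaching the SG is just a matter of listing it here and applying.
  vpc_security_group_ids = [aws_security_group.rds_sg.id]
}
```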
The goal is to visualise the relationships of resources within an AWS account (which may have multiple VPCs).
This would help daily operations. For example: which resources get affected after modifying a security group.
Each resource has an ARN assigned in the AWS cloud.
Below are some example relationships among resources:
Route table has-many relationship with subnets
NACL has-many relationship with subnets
Availability zone has-many relationship with subnets
IAM resource has-many regions
has-many is something like a composition relation
A security group has an association relation with any resource in the VPC
A NACL has an association relation with subnets only
We also have VPC flow logs to find the relationships
Using the AWS SDKs:
1)
For on-prem networks, we take an IP range and send ICMP requests to verify the existence of devices in the range, and then we send an SNMP query to classify each device (Windows/Linux/router/gateway, etc.).
How to find the list of resources allocated within an AWS account? How to classify resources?
2)
What are the parameters that need to be queried from AWS resources (IAM, VPC, subnet, route table, NACL, IGW, etc.) to help create a relationship view of the resources within an AWS account?
You don't have to stitch your resources together yourself in your app. You can use the Resource Groups Tagging API from AWS. Take a look at Resource Groups in your AWS console: there you can group things based on tags, and then you can tag the groups themselves. Querying with the boto3 Python library will give you a bunch of information. Read about boto3, it's huge! Another thing which might be interesting for you is AWS Config: there you can get compliance, config history, relationships between resources, and plenty of other stuff!
Also, check out AWS CloudWatch for health monitoring.
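Since every resource carries an ARN (as noted in the question), a first pass at classification can simply parse the ARNs that the Resource Groups Tagging API returns. A minimal sketch of that parsing step in Python; the live boto3 call is left out, and the example ARNs and the `classify_arn` helper are illustrative, not an AWS API:

```python
def classify_arn(arn: str) -> dict:
    """Split an ARN (arn:partition:service:region:account:resource)
    into its components; the service field classifies the resource."""
    parts = arn.split(":", 5)
    if len(parts) != 6 or parts[0] != "arn":
        raise ValueError(f"not a valid ARN: {arn}")
    return {
        "partition": parts[1],
        "service": parts[2],   # e.g. ec2, rds, iam
        "region": parts[3],    # empty for global services such as IAM
        "account": parts[4],
        "resource": parts[5],  # type/id; the separator varies by service
    }

# Hypothetical ARNs, shaped like those returned by
# boto3.client("resourcegroupstaggingapi").get_resources()
subnet = classify_arn("arn:aws:ec2:us-east-1:123456789012:subnet/subnet-0abc")
role = classify_arn("arn:aws:iam::123456789012:role/ops-role")
print(subnet["service"], subnet["region"])         # -> ec2 us-east-1
print(role["service"], role["region"] or "(global)")  # -> iam (global)
```

Grouping the parsed results by service and region gives a rough inventory; the relationship edges (subnet-to-route-table, etc.) still need the service-specific describe calls.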
Currently I'm writing a Terraform script in which I want to set a subnet CIDR block dynamically. I know the better approach is to use modules instead of plain resources, but a limitation on our end means we can only use resources.
CURRENT PROBLEM
I had created only the VPC in one configuration file. I have another configuration file in which I'm provisioning the VPC's contents (subnets, routing, etc.).
Now I want to know the count of the existing subnets in the current VPC. For this I'm using a data source, but while executing my plan I get the following error:
data.aws_subnet_ids.available-subnets: data.aws_subnet_ids.available-subnets:
no matching subnet found for vpc with id vpc-***************
This is how I'm getting the subnets
data "aws_subnet_ids" "available-subnets" {
  vpc_id = "${data.aws_vpc.vpcobject.id}"
}
Do I need to add a depends_on attribute to it? If yes, what would the condition or test for it be? If not, what are the alternatives?
Thanks
I'm trying to get a better understanding of AWS organization patterns.
Suppose I define the term "application stack" as a set of interconnected AWS resources (e.g. a Java microservice behind an ELB + DynamoDB for persistence); then I need some way of isolating independent stacks. Each application would get a separate DynamoDB or Kinesis, so there is no need for cross-stack resource sharing, but the microservices do need to communicate with each other.
A priori, I could see either of two organizational methods being used:
Create a VPC for each independent stack (1 VPC per 1 Application)
Create a single "production" VPC and each stack resides within a separate private subnet.
There could be hundreds of these independent "stacks" within the organization, so there's the potential for resource exhaustion if there is a hard limit on VPC count. But other than resource scarcity, what are the decision criteria for creating a new VPC versus using a pre-existing VPC for each stack? Are there strong positive or negative consequences to either approach?
Thank you in advance for consideration and response.
Subnets and IP addresses are a limited commodity within your VPC: the number of IP addresses cannot be increased within your VPC if you hit that limit. Also, by default, all subnets can talk to all other subnets, so there may be security concerns. Any limit on the number of VPCs is a soft limit and can be increased by AWS support.
For these reasons, separate distinct projects at the VPC level. Never mix projects within a VPC. That's just asking for trouble.
Also, if your production projects are going to include non-VPC-applicable resources, such as IAM users, DynamoDB tables, SQS queues, etc., then I also recommend isolating those projects within their own AWS account (at the production level).
This way, you're not looking at a list of DynamoDB tables that includes tables from different projects.
I would like to create a mirror image of my existing production environment in another AWS region for disaster recovery.
I know I don't need to recreate resources such as IAM roles, as IAM is a "global" service (correct me if I am wrong).
Do I need to recreate key pairs in another region?
How about Launch configurations and Route 53 Records sets?
Launch configurations you will have to replicate into the other region, as the AMIs, security groups, subnets, etc. will all be different. Some instance types are not available in all regions, so you will have to check that as well.
Route 53 is another global service, but you will probably have to adjust your record sets to take advantage of a multi-region architecture. If you have the same setup in two different regions, you will probably want to implement latency-based or geolocation routing to send traffic to the closest region. Here's some info on that.
As for key pairs, they are per-region. But I have read that you can create an AMI from your instance, copy it to the new region, and launch an instance from it, and as long as you use the same key name your existing key will work. Take that with a grain of salt, though, as I haven't tried it nor seen it documented anywhere.
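Since the rest of this page uses Terraform, the AMI-copy step can also be expressed as a config fragment with the aws_ami_copy resource. A sketch, where the AMI ID, regions, and provider alias are all hypothetical placeholders:

```hcl
# Provider alias pointing at the disaster-recovery region.
provider "aws" {
  alias  = "dr"
  region = "eu-west-1"
}

# Copy an AMI from the production region into the DR region.
resource "aws_ami_copy" "dr_image" {
  provider          = aws.dr
  name              = "my-app-dr"              # hypothetical
  source_ami_id     = "ami-0123456789abcdef0"  # hypothetical source AMI
  source_ami_region = "us-east-1"
}
```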
Here's the official AWS info on migrating.