AWS VPC to VPC mirror imaging? - amazon-web-services

Hi, I already have one VPC in my AWS account for production. Now I want to create the same VPC for a test environment as well. Is there any way to create a mirror image of a VPC, i.e. create one more VPC identical to the old one?

There's no API for this, but you can set up a script pretty easily.
Alternatively, instead of creating the first one manually, you can create it with CloudFormation so you can make multiple identical copies (even in different Regions) whenever you want.

Terraform from HashiCorp is the best way to do that in my opinion. You can also use the terraforming tool from dtan4 to export the existing resources and then adjust them to create another environment. For example, you may want to go for another IP range, name it differently, etc.
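As a rough sketch of what that adjustment could look like, the exported VPC definition might be parameterized so the same configuration can be applied once per environment; the variable names and CIDR values below are only illustrative, not taken from any real setup:

variable "environment" {
  description = "Environment name, e.g. prod or test"
  type        = string
}

variable "vpc_cidr" {
  description = "CIDR block for this environment's VPC"
  type        = string
}

resource "aws_vpc" "main" {
  # Same layout as the exported production VPC, but with a per-environment CIDR
  cidr_block = var.vpc_cidr

  tags = {
    Name = "${var.environment}-vpc"
  }
}

You would then apply it once with the production values and again with a different CIDR and name for test, e.g. via separate .tfvars files or Terraform workspaces.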

You can use CloudFormer to "reverse-engineer" your VPC setup, and it gives you a nice template layout as well. Note that you need special IAM roles to do this.
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-using-cloudformer.html
Because it is "reverse-engineered", all the VPC settings will be similar (same VPC CIDR, same subnets), but AWS will assign new IDs to the individual components. To avoid a maintenance nightmare, you should assign different tag names to your production and test environments.

Related

Export current VPC infrastructure

I have a lab setup in AWS with a VPC, an IGW, a few different subnets, and some EC2 instances; nothing too crazy.
I am trying to export this VPC with everything inside it, but can't figure out how to do it.
I tried CloudFormer, but I keep getting errors when trying to create a stack; it keeps saying that I have reached my limit for VPCs and IGWs.
Is there something better I can use to export this VPC, with all the EC2 instances and everything configured inside them?
Terraform, and in particular the Terraform import tool called Terraforming, is very good for this type of work.
There is a learning curve associated with Terraform, though, so it's probably a good idea to start with it gradually.
Unfortunately, there is no such tool or functionality provided by AWS.
CloudFormer is an old, no longer maintained, and unreliable project (in beta for years), as you have already experienced.
You may also hear about importing existing resources into CloudFormation. But this is also not useful in your use case, as it works with only some resources and requires manual preparation of templates for the resources to be imported.
The only choice is to look for tools outside of AWS. #AbiDembak already recommended one such tool.

What's the best practice to use created resources in Terraform?

I am starting a new Terraform project on AWS. The VPC is already created, and I want to know the best way to integrate it into my code. Do I have to create it again, and will Terraform detect it and not override it? Or do I have to use a data source for that? Or is there another, better way, like Terraform import?
I also want to be able to deploy the entire infrastructure in another Region or another account in the future.
Thanks.
When it comes to integrating with existing objects, you first have to decide between two options: you can either import these objects into Terraform and use Terraform to manage them moving forward, or you can leave them managed by whatever existing system and use them in Terraform by reference.
If you wish to use Terraform to manage these existing objects, you must first write a configuration for the object as if Terraform were going to create it itself:
resource "aws_vpc" "example" {
# fill in here all the same settings that the existing object already has
cidr_block = "10.0.0.0/16"
}
# Can then use that vpc's id in other resources using:
# aws_vpc.example.id
But then rather than running terraform apply immediately, you can first run terraform import to instruct Terraform to associate this resource block with the existing VPC using its id assigned by AWS:
terraform import aws_vpc.example vpc-abcd1234
If you then run terraform plan you should see that no changes are required, because Terraform detected that the configuration matches the existing object. If Terraform does propose some changes, you can either accept them by running terraform apply or continue to update the configuration until it matches the existing object.
Once you have done this, Terraform will consider itself the owner of the VPC and will thus plan to update it or destroy it on future runs if the configuration suggests it should do so. If any other system was previously managing this VPC, it's important to stop it doing so or else this other system is likely to conflict with Terraform.
If you'd prefer to keep whatever existing system is managing the VPC, you can also use the Data Sources feature to look up the existing VPC without putting Terraform in charge of it.
In this case, you might use the aws_vpc data source, which can look up VPCs by various attributes. A common choice is to look up a VPC by its tags, assuming your environment has a predictable tagging scheme that allows you to describe the single VPC you are looking for:
data "aws_vpc" "example" {
tags = {
Name = "example-VPC-name"
}
}
# Can then use that vpc's id in other resources using:
# data.aws_vpc.example.id
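As a brief usage example (the subnet name and CIDR below are purely illustrative), the looked-up id can then feed into other resources:

resource "aws_subnet" "example" {
  # Attach the new subnet to the VPC that was looked up above
  vpc_id     = data.aws_vpc.example.id
  cidr_block = "10.0.1.0/24"
}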
In some cases users will introduce additional indirection to find the VPC some other way than by querying the AWS VPC APIs directly. That is a more advanced configuration and the options here are quite broad, but for example, if you are using SSM Parameter Store you could place the VPC id into a Parameter Store parameter and retrieve it using the aws_ssm_parameter data source.
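A minimal sketch of that pattern, assuming the VPC id has already been written to a parameter named /network/vpc-id (the parameter name is just an example):

data "aws_ssm_parameter" "vpc_id" {
  # Hypothetical parameter that some other process keeps up to date
  name = "/network/vpc-id"
}

# The VPC id is then available as data.aws_ssm_parameter.vpc_id.value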
If the existing system managing the VPC is CloudFormation, you could also use aws_cloudformation_export or aws_cloudformation_stack to retrieve the information from the CloudFormation API.
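For example, if the CloudFormation stack exports the VPC id under a name such as MyVpcId (a hypothetical export name), it could be read like this:

data "aws_cloudformation_export" "vpc_id" {
  # Name of the cross-stack export defined in the CloudFormation template
  name = "MyVpcId"
}

# Referenced elsewhere as data.aws_cloudformation_export.vpc_id.value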
If you are happy to manage it via Terraform moving forward, then you can import existing resources into your Terraform state. Here is the usage page for it: https://www.terraform.io/docs/import/usage.html
You will have to define a resource block inside your configuration for the VPC first. You could do something like:
resource "aws_vpc" "existing" {
cidr_block = "172.16.0.0/16"
tags = {
Name = "prod"
}
}
and then run the following command on the CLI:
terraform import aws_vpc.existing <vpc-id>
Make sure you run a terraform plan afterwards, because Terraform may try to make changes to it. You kind of have to reverse-engineer it a bit by adding all the necessary configuration to the aws_vpc resource. Once it is aligned, Terraform will not attempt to change it. You can then re-use this configuration to deploy to other accounts and Regions.
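As a rough sketch of the re-use idea, the provider's Region (and, if needed, a role to assume in another account) can be made configurable so the same configuration can be applied elsewhere; the variable default and role ARN below are placeholders:

variable "aws_region" {
  type    = string
  default = "eu-west-1"
}

provider "aws" {
  region = var.aws_region

  # Optional: assume a role in the target account to deploy the same config there
  # assume_role {
  #   role_arn = "arn:aws:iam::123456789012:role/terraform-deploy"
  # }
}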
As you suggested, you could use a data source for the VPC. This can be useful if you want to manage it outside of Terraform, instead of risking the VPC being destroyed if Terraform is run by an inexperienced user.
Some customers I've worked with prefer to manage resources like VPCs/subnets (and other core infrastructure) in separate Terraform configurations that only senior engineers have access to. This helps avoid the disaster scenario where someone destroys the underlying infrastructure by accident.
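One common way to consume a VPC that is kept in such a separate, more tightly controlled configuration is the terraform_remote_state data source; the sketch below assumes the core configuration stores its state in S3 and exposes a vpc_id output (the bucket, key, and output names are assumptions):

data "terraform_remote_state" "core" {
  backend = "s3"

  config = {
    bucket = "my-terraform-state"   # assumed state bucket
    key    = "core/network.tfstate" # assumed state key for the core infrastructure
    region = "eu-west-1"
  }
}

# The VPC id is then available as data.terraform_remote_state.core.outputs.vpc_id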
I personally prefer managing all my Terraform code in a git repository that is then deployed using a CI/CD tool, even if it's just myself working on it. Some people may not see the value in spending the time creating the pipeline, though, and may stick with running it locally.
This post has some great recommendations on running Terraform in an automated environment: https://learn.hashicorp.com/terraform/development/running-terraform-in-automation

Replicate changes made on one EC2 to another EC2 Server

I have two EC2 servers, named Ec2-Webserver-1 and EC2-WebServer-2, inside the same VPC in two different subnets, served by an Application Load Balancer.
When I make small changes to the first server, I then have to manually change the other server too. Otherwise I have to create an AMI and create a new server from it.
I think creating an AMI each time I make a small change is not the right approach.
Are there any other tools in AWS, or third-party tools, that can automatically replicate the changes made on Server 1 to Server 2? I am currently using a CentOS AMI.
I would suggest looking into CloudFormation. You can define your EC2 instance, which IAM roles you want it to have, and a whole lot of other stuff. Once that is done, you can just run the CloudFormation template and AWS will provision the EC2 instance with your defined settings automatically. CloudFormation link
You should be looking into CodeDeploy: https://aws.amazon.com/codedeploy/getting-started/?nc=sn&loc=4 Possibly combine it with CodePipeline. Here is a starting point for deciding whether you need one or both: https://forums.aws.amazon.com/thread.jspa?threadID=172485

Copy Elastic Beanstalk Configuration across accounts

If I use the EB CLI command eb config save to save the configuration of my current environment, it works to start a new one using eb create.
But if I want to create the same environment with a different AWS account, obviously lines like the following make no sense:
aws:ec2:vpc:
  Subnets: subnet-2d9a3c56
  VPCId: vpc-1dff4c74
So how can I build the same Elastic Beanstalk environment in multiple accounts? Is there any way to tell AWS? Maybe an "account-agnostic" config save?
It would not be possible to build the exact same ElasticBeanstalk environment across accounts. The environment is going to have resource IDs such as VPCs and Subnets that will be different.
A good way to build effectively the same ElasticBeanstalk application across multiple accounts would be to use CloudFormation to configure the environments. This requires a different approach to creating environments, but also means that the configuration can be more easily version controlled.
With CloudFormation you can specify parameters to be selected and fed into the template when the stack is being created.
You can use parameters (referenced in the template with {"Ref" : "..."}) to create drop-down lists of subnets in the VPC, etc.
This would be the way I would do it.

Do I need to duplicate code on every EC2 instance running behind an ELB?

Hi, this is a very noob question, but I am trying to deploy my Node.js API server on AWS.
Everything is working fine with one m1.large instance that my front end running on S3 connects to.
Now I want to scale, and put my EC2 instance, and possibly many more, behind an ELB and an Auto Scaling group.
Do I need to duplicate my server code on every EC2 instance?
If so, I assume I'll have to create a separate DB server which all of the EC2 instances will connect to.
Am I right? Can anyone experienced with AWS answer this? I tried googling, but most of the links point to detailed tutorials which don't answer my question.
Any help would be much appreciated. Thanks
Yep, that's basically correct. The code needs to be on all instances fronted by the load balancer. For the database, you may want to look into RDS.
Of course not, but you certainly can.
That's why there are EFS volumes, which are file systems shared across more than one EC2 instance, but you have to choose a Region that supports them, since they are only available in certain Regions. As a candidate AWS Certified Architect, I would recommend more than two options.
You can follow your first approach: create an EC2 instance, put your code inside, then create an AMI and use this AMI to launch your upcoming EC2 instances through an Auto Scaling group. In my opinion a bad decision, since on any code change you have to go onto each instance and put the new code there, then create a new AMI and a new Auto Scaling configuration. Lots of stuff to do, but it will work.
Second approach: follow the first approach but do not create an AMI; instead, upload your code to a private (I suppose) repo like GitHub or Bitbucket, install SSM and the appropriate roles for managing EC2, and on every code change push it to the repo and pull it onto your EC2 instances using SSM. Of course you may write a webhook for Bitbucket to call an API and run the git pull command on each EC2 instance. Probably the last sentence could be a third approach, but it needs more coding!
Last but not least: use an EFS volume, put your code there, mount this volume on your EC2 instances, add an auto-mount entry for every boot, alter your Apache httpd document root to point at this EFS folder, and create an AMI with this configuration. Voila! Every new EC2 instance will use the same code, which is located on this shared/network volume. Whenever you need to change something, you log in on a third instance outside of your Auto Scaling group for a certain amount of time, upload your changes, and then turn it off; all of your EC2 instances will immediately pick up the new code. Of course you may pull the changes from a repo, following the third approach.
Maybe there are more approaches. I'm using the third one, with private repos of course, and until now I haven't faced any problem (fingers crossed)!
One other option is to use Elastic Beanstalk to deploy Node.js applications. Here is the guide specific to Node.js. This will take care of most of the stuff which you would otherwise need to do yourself if you only use EC2, for example ELB, Auto Scaling, CloudWatch, etc.
For the database, you may want to use a master/slave setup with read replicas. Another option is to evaluate NoSQL databases like DynamoDB if it fits your use case. The scalability of DynamoDB tables is managed by AWS, so you don't need to worry about it.