Import current state of my AWS cloud account with Terraform

I would like to version-control my existing cloud resources before starting to manage them through Terraform. Is there any way I can run a single command and capture the current state of my cloud?
I have tried to use Terraform's import command:
terraform import ADDRESS ID
But this takes a long time to identify all the resources and import them.
I have tried terraforming but this also needs a resource type to import:
terraforming s3
Is there any tool that can help in importing all existing resources?

While this doesn't technically answer your question, I would strongly advise against trying to import an entire existing AWS account into Terraform in one go, even if it were possible.
If you look at any Terraform best practices, an awful lot of it comes down to minimising the blast radius of changes, so that only things that make sense to be changed at the same time as each other are ever applied together. Charity Majors wrote up a good blog post about this and the impact it had when that wasn't the case.
Any tool that would mass-import things (e.g. terraforming) is just going to dump everything in a single state file, which, as mentioned before, is a bad idea.
While it sounds laborious, I'd recommend that you begin your migration to Terraform more carefully and methodically. In general I'd probably say that only new infrastructure should use Terraform, utilising Terraform's data sources to look up things that already exist, such as VPC IDs.
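As a minimal sketch of that data-source pattern (the tag value here is an assumption; match it to however your existing VPC is named):

data "aws_vpc" "existing" {
  tags = {
    Name = "legacy-vpc" # assumed tag on the pre-existing, unmanaged VPC
  }
}

resource "aws_subnet" "new" {
  vpc_id     = data.aws_vpc.existing.id # new resource referencing the existing VPC
  cidr_block = "10.0.1.0/24"
}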
Once you feel comfortable with using Terraform and structuring your infrastructure code and state files in a particular way you can then begin to think about how you would map your existing infrastructure code into Terraform state files etc and begin manually importing specific resources as necessary.
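For a single resource, that manual import looks roughly like this (resource and bucket names are hypothetical; the resource block must already exist in your configuration before you run the command):

resource "aws_s3_bucket" "assets" {
  # minimal placeholder; fill in the arguments after importing
}

terraform import aws_s3_bucket.assets my-existing-assets-bucket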
Doing things this way also allows you to find your feet with Terraform a bit better and understand its limitations and strengths, while also working out how your team and/or CI will work together (e.g. remote state, state file locking, and orchestration) without tripping over each other or causing potentially crippling state issues.

I'm using terraformer to import my existing AWS infrastructure. It's much more flexible than terraforming and doesn't have the issues mentioned in the other answers.
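A rough sketch of the usage (check the terraformer README for the exact flags in your version; region and resource types here are examples):

terraformer import aws --resources=vpc,subnet --regions=eu-west-1

Newer versions also accept --resources="*" to pull in everything the tool supports, which is closest to what the question asks for, with the state-file caveats from the answer above.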

Related

How to use AWS CLI to create a stack from scratch?

The problem
I'm new to AWS, and my first test project will be a website, but I'm struggling with how to approach the resources and the tools to accomplish this.
AWS documentation is not really beginner-friendly, so to me it feels like being punched in the face at your first boxing session.
First attempt
I've installed both the AWS and SAM CLI tools. What I would expect is to be able to create an empty stack at first and add resources one by one as the specifications are given/outlined. Instead, what I see is that I need to give the tool a template to create the new stack, which means I need to know how to write one beforehand, and therefore the template specification for each resource type.
Second attempt
This led me to create the stack and the related resources from the online console to get the final stack template. But then I need to test every new or updated resource locally, so I have to copy the template from the online console to my machine and run the CLI tools against it, which is obviously not the desired development flow.
What I expected
Coming from standard/classical web development, I would expect to be able to create the project locally, test the related resources locally, version it, and delegate the deployment to the pipeline.
So what?
All this made me realise that I'm "probably" missing something about how to use the AWS CLI tools and how development for an AWS-hosted application is meant to be done.
I'm not looking for a guide on specific resource types, like every single tutorial I've found online, but something at a higher level on how to handle project development on AWS, best practices, and things like that. I can then dig deeper into any resource later when needed.
AWS's Cloud Development Kit ticks the boxes on your specific criteria.
Caveat: the CDK has a learning curve in line with its power and flexibility. There are much easier ways to deploy a web app on AWS, like the higher-level AWS Amplify framework, with abstractions tailored to front-end devs who want to minimise the mental energy spent on the underlying infrastructure.
Each of the squillion AWS and 3rd Party deploy tools is great for somebody. Nevertheless, looking at your explicit requirements in "What I expected", we can get close to the CDK as an objective answer:
Coming from standard/classical web development
So you know JS/Python. With the CDK, you code infrastructure as functions and classes, rather than 500 lines of YAML as with SAM. The CDK's reference implementation is in TypeScript, and JS/Python are also supported. There are step-by-step AWS online workshops for these and the other supported languages.
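For a flavour of what that looks like, here is a minimal sketch of a CDK v2 stack in TypeScript (the stack and bucket names are illustrative):

import { App, Stack, StackProps } from 'aws-cdk-lib';
import { aws_s3 as s3 } from 'aws-cdk-lib';
import { Construct } from 'constructs';

class WebsiteStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);
    // One line of code instead of a CloudFormation resource block
    new s3.Bucket(this, 'SiteBucket', { versioned: true });
  }
}

const app = new App();
new WebsiteStack(app, 'WebsiteStack');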
create the project locally
Most of your work will be done locally in your language of choice, with a cdk deploy CLI command to bundle the deployment artefacts and send them up to the cloud.
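The day-to-day loop with the CDK CLI is roughly:

cdk init app --language typescript   # scaffold a new project
cdk synth                            # render the CloudFormation template locally
cdk deploy                           # bundle artefacts and push them to your AWS account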
test the related resources locally
The CDK has built-in testing and assertion support.
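For instance, a sketch of a unit test using the aws-cdk-lib/assertions module against the hypothetical WebsiteStack from the earlier sketch (assumed to be exported from its own file):

import { App } from 'aws-cdk-lib';
import { Template } from 'aws-cdk-lib/assertions';

const app = new App();
const stack = new WebsiteStack(app, 'TestStack'); // stack class from the sketch above
const template = Template.fromStack(stack);

// Assert the synthesized template contains a versioned bucket
template.hasResourceProperties('AWS::S3::Bucket', {
  VersioningConfiguration: { Status: 'Enabled' },
});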
version it
"Deterministic deploy" is a CDK design goal. Commit your code and the generated deployment artefacts so you have change control over your infrastructure.
delegate the deployment to the pipeline
The CDK has good pipeline support: e.g. a push to the remote main branch can kick off a deploy.
AWS SAM is actually a good option if you are just trying to get your feet wet with AWS. SAM is an open-source wrapper around the aws-cli, which allows you to create AWS resources like Lambda in, say, ~10 lines of code vs ~100 lines if you were to use the aws-cli directly. Yes, you'll need to learn SAM-specific things like the SAM template syntax and the SAM CLI, but it's pretty straightforward using this doc.
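To illustrate that claim, a near-minimal SAM template for a Lambda function is on the order of ten lines (handler, runtime, and code path are placeholders):

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  HelloFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler   # placeholder module.function
      Runtime: python3.12
      CodeUri: src/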
Once you get the hang of it, it will be easier to start looking under the hood at what SAM is doing and how, and to get into the weeds with the aws-cli if you want, which will then allow you to build out custom solutions (using the aws-cli) for complex use cases that SAM may not support. Caveat: SAM is still pretty new and has open issues that could be blockers for advanced features/complex use cases.

Create Infrastructure Documentation from terraform + gitlab-ci system

Our infra pipeline is set up using Terraform + GitLab CI. I have been given the task of documenting the setup: what's implemented and what's left. I am new to the infra world and finding it hard to come up with a template to start the documentation.
So far I have thought of having a table of the resources needed, with details on dependencies, the source of each module, additional notes, etc.
If you have a template, can you share it, or do you have any other suggestions?
For starters, you could try one or both of the below approaches:
a) create a graph of the Terraform resources using its graph command
b) group and then list all of your resources for a specific tag using AWS Resource Groups, specifically its Create Resource Group functionality
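For option (a), a minimal invocation might look like this (rendering to PNG assumes Graphviz's dot tool is installed):

terraform graph | dot -Tpng > graph.png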
The way I do documentation is to keep it as simple as possible: explain how it works, how to use it, and also provide instructions on how it was set up from scratch, for reference and as an insurance policy, so that if it's destroyed, someone other than the person who set it all up could recreate it.
Since this is just a pipeline, there is probably not much to diagram. The documentation structure I would provide is something like the following, added either as part of the README.md, in Confluence, or however your team does documentation.
Summary
1-2 sentences about the work and why it was created.
How the Repo is Structured
An explanation of how the repo is structured and the decisions behind why it was structured that way.
How To Use
Provide steps on how a user can use the pipeline.
How It Was Created
Provide steps on how it was set up, so anybody can manage it and work on it going forward.

How to swap between projects with terraform

Hey, I started migrating infrastructure to Terraform and came across a few questions that are hard for me to answer:
How do I easily swap between different projects, assuming I have the same resources in a few projects separated by environments? Do I store it all in one tfstate, or do I have multiple ones? Is it stored in one bucket, a few buckets, or somewhere else entirely?
Can you create a new project with some random number at the end and automatically deploy resources to it?
If you can create a new project and deploy to it, how do you enable the APIs Terraform needs in order to work, like iam.googleapis.com, etc.?
Here are some pieces of an answer to your questions:
If you use only one Terraform configuration, you have only one tfstate. In that case, when you want to update a project, you have to take into account all the dependencies across all projects (and you risk breaking other projects), and the files are bigger and harder to maintain. I recommend you have one Terraform configuration per project, and one tfstate per project. If you use common patterns (IP naming, VM settings, ...) you can create modules to import into the Terraform configuration of each project.
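A sketch of that module pattern (the module path, variable name, and project ID are made up; the shared module is assumed to declare a project_id variable):

module "network" {
  source     = "../modules/network" # shared module reused by each project
  project_id = "my-project-dev"     # per-project input
}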
(and 3) Yes, you can create a project and then deploy to it, but I don't recommend it, for separation of concerns. Use one Terraform configuration to manage your resource organisation (projects, folders, ...) and another one to manage your infrastructure.
A good way to think about it: build once, maintain a lot! The build phase is not the hardest part; having something maintainable, easy to read, and concise is!

Importing existing resources with multiple accounts

We have four AWS accounts used to define different environments: dev, sqe, stg, prd. We're only now starting to use CloudFormation, and I'd like to import an existing resource into a stack. As we roll this out, each environment will get the new stack, and I'm wondering if there's an easier way to import the resource in each environment than to initially go through the console to import the resource while adding the stack (it would be nice if we could just deploy via our deployment system).
What I was hoping for was something I could specify in the stack definition itself (e.g., "here's a bucket that already exists, take ownership"), but I'm not finding anything. Currently it seems like the easiest route would be to create an empty stack in each environment which imports the resource and then just deploy as normal.
Also, what happens when/if an update fails and a stack gets stuck in ROLLBACK_COMPLETE? Do I have to go through this again after deleting the stack?
What you have described sounds exactly like you're after a Continuous Integration / Continuous Deployment (CICD) pipeline. Instead of trying to import existing resources into your accounts, you're better off designing the CloudFormation templates and then deploying them to each environment through CodePipeline. This will also provide a clean separation between the accounts, instead of importing stg resources into prd.
A fantastic example and quickstart is the serverless-cicd-for-enterprise which should serve as a good starting point for you.
You can't get stuck in ROLLBACK_COMPLETE, as that is the last action a failed change set executes. What it means is that the stack tried to update, couldn't, and has reverted to the last successful deployment. If this was the first deployment (no successful deployments yet), you will need to delete the stack and try again. However, if you have had a successful deployment, you can run a stack update.
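In the first-deployment case, the recovery with the AWS CLI is simply (the stack name is illustrative):

aws cloudformation delete-stack --stack-name my-stack
aws cloudformation wait stack-delete-complete --stack-name my-stack
# then deploy again as normal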

Build AWS infrastructure using Terraform in an order that you specify

Recently I came across a situation where I am building AWS infrastructure using Terraform to set up a clustered environment for some applications. The thing is, when I apply the Terraform scripts, it builds all the necessary modules, spins up multiple instances all at once, and then finishes. It may be meant to work like this, and there is nothing to blame anyway; Terraform works great for building such infra.
When I'm trying to set up such infra to deploy an application in a clustered way, I'm using a configuration management (CM) tool. While the EC2 instances are being built, the CM scripts get invoked and the instances are configured accordingly. The problem comes when there are dependencies between the modules.
Consider a scenario where two components (A & B) are part of an autoscaling group and two components (C & D) are normal EC2 instances. If I wish to build A first and then C, since the C instance has a dependency on A, which has to be fully configured first (or vice versa), how can I control the order? How does Terraform help me achieve this?
Can someone please help me achieve this? Thanks in advance.
The other answer is correct in the literal sense, but overall this is something to avoid. Build your CM code so that it will keep re-trying to converge until it succeeds. With Chef in particular, you can use the chef-client cookbook to deploy a service which runs Chef converges automatically at a given interval (30 minutes by default but you might want to make that shorter). Running things in the "right" order sounds nice, but when dealing with byzantine failures you'll thank your past self for ensuring reliable convergence no matter the order.
You can use the depends_on parameter. Resources can be made explicitly dependent on other resources, and Terraform will only build a resource once the resources it depends on have been provisioned successfully.
https://www.terraform.io/intro/getting-started/dependencies.html
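A minimal sketch (the resource names and AMI are hypothetical):

resource "aws_instance" "a" {
  ami           = "ami-12345678" # placeholder AMI
  instance_type = "t3.micro"
}

resource "aws_instance" "c" {
  ami           = "ami-12345678"
  instance_type = "t3.micro"
  depends_on    = [aws_instance.a] # C is only created after A succeeds
}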
The question is broad in nature, and the other answers are right in their own right. What I would like to add is that using modules to determine the order of logical sub-projects works well too.
In Terraform you can force a procedural order with depends_on at the resource level, but you cannot use it for modules. However, for modules you can use the output of one module as input to another, which helps you manage procedural order at the module level.
So, in your case, I would put A & B in one module, C & D in another, and use output variables from one as inputs to the other to control the order.
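A sketch of what that looks like (module paths, output, and variable names are made up; the group_ab module is assumed to declare the referenced output):

module "group_ab" {
  source = "./modules/group_ab"
}

module "group_cd" {
  source = "./modules/group_cd"
  # Consuming group_ab's output creates an implicit dependency,
  # so this module is only built after group_ab is done
  a_private_ip = module.group_ab.a_private_ip
}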