Take backup of AWS configuration across all services

Having spent a couple of days setting up and configuring a new AWS account, I would like to grab an export of the account configuration across all services. I've Googled around for existing scripts, etc., but have yet to find anything that would automate this process.
Primarily this would serve as a backup in case the account configuration was corrupted in some way (including by user error!), but it would also be useful for documenting the system.
From an account administration perspective, there are various parts of the AWS console that don't display friendly names for various resources. Being able to cross-reference against offline documentation would simplify these scenarios. For example, friendly names for VPCs and subnets aren't always displayed when configuring resources to use them.
Lastly I would like to be able to use this to spot suspicious changes to the configuration as part of intrusion detection. For example, looking out for security group changes to protected resources.
To clarify, I am looking to back up the configuration of AWS resources, not the actual resources themselves. Resource backups (e.g. EC2 instances) are already covered.

The closest I've seen to that is CloudFormer.
That would create a CloudFormation template from your account's resources. Note that this template would only be a starting point, not meant to be reproducible out of the box. For example, it won't log into your instances or anything like that.
As for the intrusion detection part, see CloudTrail.
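If you want to watch for those changes yourself, CloudTrail's LookupEvents API can be polled with Boto3. A minimal sketch, assuming the event names and seven-day window below are the ones you care about:

```python
# Hedged sketch: poll CloudTrail (via Boto3) for recent security group
# changes. The event names and look-back window are assumptions to adapt.
import boto3
from datetime import datetime, timedelta, timezone

cloudtrail = boto3.client("cloudtrail")

# Event names that typically indicate security group modifications.
suspicious_events = [
    "AuthorizeSecurityGroupIngress",
    "RevokeSecurityGroupIngress",
    "ModifySecurityGroupRules",
]

start = datetime.now(timezone.utc) - timedelta(days=7)
for event_name in suspicious_events:
    pages = cloudtrail.get_paginator("lookup_events").paginate(
        LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": event_name}],
        StartTime=start,
    )
    for page in pages:
        for event in page["Events"]:
            print(event["EventTime"], event["EventName"], event.get("Username", "?"))
```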

Check out AWS Config: https://aws.amazon.com/config/
AWS Config records the configuration of AWS resources automatically, allowing you to query and react to configuration changes. As AWS Config stores data on S3, that is probably enough backup, but you can also sync the bucket elsewhere for paranoid redundancy.
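For example, a hedged Boto3 sketch of forcing a snapshot delivery and mirroring the bucket locally (the delivery channel name "default" and the bucket name are assumptions to adjust):

```python
# Hedged sketch: ask AWS Config for an on-demand configuration snapshot,
# then copy the delivery bucket's contents for offline redundancy.
import boto3

config = boto3.client("config")

# Trigger delivery of a full configuration snapshot to the channel's S3 bucket.
response = config.deliver_config_snapshot(deliveryChannelName="default")
print("Snapshot id:", response["configSnapshotId"])

# Download everything from the delivery bucket (hypothetical bucket name).
s3 = boto3.resource("s3")
bucket = s3.Bucket("my-config-delivery-bucket")
for obj in bucket.objects.all():
    # Flatten the key into a local filename for simplicity.
    bucket.download_file(obj.key, obj.key.replace("/", "_"))
```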

Related

List of services used in AWS

How can I get the list of all the services I am using?
I have gone to Service Quotas at
https://ap-east-1.console.aws.amazon.com/servicequotas/home?region=ap-east-1
and on the dashboard I could see a list of items, e.g. EC2, VPC, RDS, DynamoDB, etc., but I did not understand what is there.
Since I did not request some of the services I am seeing, I even went into Budgets at
https://console.aws.amazon.com/billing/home?region=ap-east-1#/budgets
and also looked at credits, thinking maybe I could work out which services I have been given credits to use.
Also, how can I stop any service which I do not want?
The Billing service is not giving me tangible information either. I do not want the bill to pile up before I start taking the needed steps.
Is there a location where I can see all the services I am using, or maybe a command I can run somewhere that would produce such a result?
You can use the AWS Config Resource Inventory feature.
AWS Config will discover resources that exist in your account, record their current configuration, and capture any changes to these configurations. Config will also retain configuration details for resources that have been deleted. A comprehensive snapshot of all resources and their configuration attributes provides a complete inventory of resources in your account.
https://aws.amazon.com/config/
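Once Config is recording, for example, a small Boto3 sketch like this can summarize what it has discovered in the current region (it assumes Config is already enabled; run it per region):

```python
# Hedged sketch: summarize resource types AWS Config has discovered.
import boto3

config = boto3.client("config")

counts = config.get_discovered_resource_counts()
for item in counts["resourceCounts"]:
    print(f'{item["resourceType"]}: {item["count"]}')
print("Total resources:", counts["totalDiscoveredResources"])
```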
There is no easy answer on this one, as there is no AWS service that does this out of the box (yet).
There are some AWS services that you can use to get you close, like:
AWS Config (as suggested by @kepils)
Another option is to use Resource Groups and Tagging to list all resources within a region in your account (as described in this answer); a short Boto3 sketch of this approach follows the list below.
In both cases, however, the issue is the same limitation: neither Config nor Resource Groups can see every AWS service on its own.
Another option would be to use a third-party tool such as aws-inventory or cloudmapper, if your end goal is to find out what you currently have running in your account.
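As a rough illustration of the Resource Groups Tagging API option above, this hedged Boto3 sketch lists resource ARNs in the current region; note the limitation already mentioned, since resources that don't support tagging won't appear:

```python
# Hedged sketch: list resource ARNs via the Resource Groups Tagging API.
# Only resource types that support tagging are covered.
import boto3

tagging = boto3.client("resourcegroupstaggingapi")

paginator = tagging.get_paginator("get_resources")
for page in paginator.paginate():
    for resource in page["ResourceTagMappingList"]:
        print(resource["ResourceARN"])
```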
On the second part of your question, how to stop any services which you don't want, you can do the following:
Don't grant excessive permissions to your users. If someone needs to work on EC2 instances, then their IAM role and respective policy should allow only that instead of for example full access.
You can limit the scope and services permitted for use within the account by creating Service Control Policies (SCPs) which allow only the specific services you plan to use (a hedged sketch follows this list).
Set up AWS Budget notifications and potentially AWS Budget actions.
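For the SCP suggestion, a hedged Boto3 sketch of creating an allow-list policy: this must run from the organization's management account, the service list and names are assumptions, and the policy still has to be attached to an OU or account to take effect.

```python
# Hedged sketch: create an allow-list Service Control Policy with Boto3.
# Service list, description, and name are placeholders to adapt.
import json
import boto3

orgs = boto3.client("organizations")

scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            # Only the services you actually plan to use.
            "Action": ["ec2:*", "s3:*", "rds:*", "lambda:*"],
            "Resource": "*",
        }
    ],
}

policy = orgs.create_policy(
    Content=json.dumps(scp),
    Description="Allow only approved services",
    Name="approved-services-only",
    Type="SERVICE_CONTROL_POLICY",
)
print(policy["Policy"]["PolicySummary"]["Id"])
```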

Extract Entire AWS Setup into storable Files or Deployment Package(s)

Is there some way to 'dehydrate' or extract an entire AWS setup? I have a small application that uses several AWS components, and I'd like to put the project on hiatus so I don't get charged every month.
I wrote / constructed the app directly through the various services' consoles, such as VPN, RDS, etc. Is there some way I can extract my setup into files so I can save them in version control, and 'rehydrate' them back into AWS when I want to set my app up again?
I tried extracting pieces from Lambda and EventBridge, but it seems like I can't just 'replay' these files using the CLI to re-create my application.
Specifically, I am looking to extract all code, settings, connections, etc. for:
Lambda. Code, env variables, layers, scheduling through EventBridge.
IAM. Users, roles, permissions
VPC. Subnets, Route tables, Internet gateways, Elastic IPs, NAT Gateways
EventBridge. Cron settings, connections to Lambda functions.
RDS. MySQL instances. Would like to get all DDL. Data in tables is not required.
Thanks in advance!
You could use Former2. It will scan your account and allow you to generate CloudFormation, Terraform, or Troposphere templates. It uses a browser plugin, but there is also a CLI for it.
What you describe is called Infrastructure as Code. The idea is to define your infrastructure as code and then deploy your infrastructure using that "code".
There are a lot of options in this space. To name a few:
Terraform
Cloudformation
CDK
Pulumi
All of those should allow you to import already existing resources. Terraform, at least, has an import command to bring an already existing resource into your IaC project.
This way you could create a project that mirrors what you currently have in AWS.
Excluded are things that are, strictly speaking, not AWS resources, like:
Code of your Lambdas
MySQL DDL
Depending on the Lambda's deployment "strategy", the code either lives on S3 or was deployed directly to the Lambda service. If it is the former, you just need to find the S3 bucket and download the code from there. If it is the latter, you might need to copy and paste it by hand.
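Either way, for zip-based functions the Lambda API can hand you a pre-signed download link for the deployed package. A minimal Boto3 sketch, with a hypothetical function name:

```python
# Hedged sketch: download a Lambda function's deployment package.
# The function name is a placeholder.
import urllib.request
import boto3

lam = boto3.client("lambda")

fn = lam.get_function(FunctionName="my-function")
code_url = fn["Code"]["Location"]  # pre-signed S3 URL, valid ~10 minutes

urllib.request.urlretrieve(code_url, "my-function.zip")
print("Environment:", fn["Configuration"].get("Environment", {}))
```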
When it comes to your MySQL DDL, you need to find tools to export it, but there are plenty of tools out there to do this.
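One common choice is mysqldump with --no-data, which dumps schema only. A sketch wrapped in Python for consistency with the other examples; the endpoint, user, and database name are placeholders:

```python
# Hedged sketch: export DDL only (no row data) with mysqldump.
import subprocess

with open("schema.sql", "w") as out:
    subprocess.run(
        [
            "mysqldump",
            "--no-data",          # DDL only, skip row data
            "--routines",         # include stored procedures/functions
            "-h", "mydb.xxxx.rds.amazonaws.com",  # hypothetical RDS endpoint
            "-u", "admin",
            "-p",                 # prompts for the password
            "mydatabase",
        ],
        stdout=out,
        check=True,
    )
```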
After you have done that, you should be able to destroy all the AWS resources and later deploy them again from your new IaC project.

Backing up each and every resource in an AWS account

I am exploring backing up our AWS services configuration to a backup disk or source control.
Only configs, e.g. IAM policies, users, roles, Lambdas, Route 53 configs, Cognito configs, VPN configs, route tables, security groups, etc.
We have a tactical account where we created some resources on an ad-hoc basis, and now we have a new official account set up via CloudFormation.
Also, in the near future we are planning to migrate the tactical account's resources to the new account, either manually or using the backed-up configs.
I looked at the AWS CLI, but it is time-consuming. Is there any script which crawls through AWS and backs up the resources?
Thank You.
The "correct" way is not to 'backup' resources. Rather, it is to initially create those resources in a reproducible manner.
For example, creating resources via an AWS CloudFormation template allows the same resources to be deployed in a different region or account. Only the data itself, such as the information stored in a database, would need a 'backup'. Everything else could simply be redeployed.
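As a hedged illustration of that reproducibility, the same template body can be pushed to any region (or, with the right credentials, any account); the template file, stack name, and region list below are placeholders:

```python
# Hedged sketch: deploy one CloudFormation template to several regions.
import boto3

with open("infrastructure.yaml") as f:
    template_body = f.read()

for region in ["us-east-1", "eu-west-1"]:
    cfn = boto3.client("cloudformation", region_name=region)
    cfn.create_stack(
        StackName="my-app",
        TemplateBody=template_body,
        Capabilities=["CAPABILITY_NAMED_IAM"],  # needed if the template creates IAM resources
    )
    print(f"Stack creation started in {region}")
```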
There is a poorly-maintained service called CloudFormer that attempts to create CloudFormation templates from existing resources, but it only supports limited services and still requires careful editing of the resulting templates before they can be deployed in other locations (due to cross-references to existing resources).
There is also the relatively recent ability to Import Existing Resources into a CloudFormation Stack (announced on the AWS News Blog), but it requires that the template already includes the resource definition. The existing resources are simply matched to that definition rather than being recreated.
So, you now have the choice to 'correctly' deploy resources in your new account (which involves work), or just manually recreate the ad-hoc resources that already exist (which pushes the real work to the future). Hence the term technical debt.

How to back up whole GCP projects (including service and infrastructure config like subnets, firewall rules, etc.)

We are evaluating options to back up whole Google Cloud projects. Everything that could possibly get lost somehow should be saved. What would be a good way to back up and recover networks, subnets, routing, etc.?
Just to be clear: our scope is not only data and files like Compute Engine disks or storage buckets, but also the whole "how everything is put together": all code and config describing the infrastructure and services of a GCP project (as far as possible).
Of course we could simply save all the code that created resources (e.g. via Deployment Manager or the gcloud SDK), but we also want to cover, as well as possible, anything someone provisioned by hand / via the GUI.
Recursively pulling data with the gcloud SDK (e.g. gcloud compute networks list/describe for network config) could be an option, but maybe someone has already found a better solution?
The output should be detailed enough to be able to restore a specific resource (better: all contained resources) in a GCP project (e.g. via Deployment Manager).
All constructive ideas are appreciated!
You can use this tool to reverse-engineer the infrastructure and generate a tfstate file to use with Terraform:
https://github.com/GoogleCloudPlatform/terraformer
For the rest, there is no magic; you have to write code.
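As a hedged starting point, here is what that code could look like for network config, using the Google API Python client with Application Default Credentials; the project ID is a placeholder:

```python
# Hedged sketch: dump network and subnet config for a GCP project to JSON,
# via the Google API Python client (pip install google-api-python-client).
import json
from googleapiclient import discovery

PROJECT = "my-gcp-project"  # hypothetical project ID

compute = discovery.build("compute", "v1")

# Networks are global; subnetworks are regional, so use aggregatedList.
networks = compute.networks().list(project=PROJECT).execute().get("items", [])
subnets = compute.subnetworks().aggregatedList(project=PROJECT).execute()

with open("networks.json", "w") as f:
    json.dump(networks, f, indent=2, default=str)
with open("subnetworks.json", "w") as f:
    json.dump(subnets, f, indent=2, default=str)

print(f"Saved {len(networks)} networks")
```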

How can I create an S3 bucket and object (like uploading a shell script file) programmatically

I have to do this for almost 100 accounts, so I am planning to create them using some form of infrastructure as code. CloudFormation does not support creating objects. Can anyone help?
There are several strategies, depending on the client environment.
The aws-cli may be used for shell scripting, the aws-sdk for JavaScript environments, or Boto3 for Python environments.
If you provide the client environment, creating an S3 object is almost a one-liner, holding S3 bucket security and lifecycle matters equal.
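For example, a hedged Boto3 sketch (bucket name, region, and file path are placeholders; bucket names must be globally unique):

```python
# Hedged sketch: create a bucket and upload a shell script with Boto3.
import boto3

REGION = "eu-west-1"
BUCKET = "my-unique-bucket-name"  # hypothetical; must be globally unique

s3 = boto3.client("s3", region_name=REGION)

# Outside us-east-1, the bucket region must be given explicitly.
s3.create_bucket(
    Bucket=BUCKET,
    CreateBucketConfiguration={"LocationConstraint": REGION},
)

# Upload the script as an object.
s3.upload_file("bootstrap.sh", BUCKET, "scripts/bootstrap.sh")
```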
As Rich Andrew said, there are several different technologies. If you are trying to do infrastructure as code and attach policies and roles, I would suggest you look into Terraform or Serverless.
I frequently combine two of the techniques already mentioned above.
For infrastructure setup: Terraform. This tool is consistently ahead of the competition (Ansible, etc.) in terms of cloud modules. You can use it to create the bucket, bucket policies, users, their IAM policies for bucket access, upload initial files to the bucket, and much more.
It will keep a state file containing a record of those resources, so you can use the same workflow to destroy everything it created, if necessary, with very few modifications.
It is very easy to get started with, but not very flexible, and you can be caught out if a scope change in the middle of a project suddenly requires a feature that isn't there.
To get started, check out the Terraform module registry: https://registry.terraform.io/.
It has quite a few S3 modules available to get started even quicker.
For interaction with AWS resources: Python Boto3. In your case that would be subsequent file uploads and deletions in the S3 bucket.
You can use Boto3 to set up infrastructure, just like Terraform, but it will require more work on your side (like handling exceptions and errors).
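A hedged sketch of that extra work, using object deletion as the example (bucket and key names are placeholders):

```python
# Hedged sketch: the kind of explicit error handling Boto3 pushes onto you.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

try:
    s3.delete_object(Bucket="my-unique-bucket-name", Key="scripts/bootstrap.sh")
except ClientError as err:
    # e.g. AccessDenied or NoSuchBucket; Terraform would surface this for you.
    print("Delete failed:", err.response["Error"]["Code"])
```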