I am exploring backing up our AWS services configuration to a backup disk or source control.
Only configs, e.g. IAM policies, users, roles, Lambdas, Route 53 configs, Cognito configs, VPN configs, route tables, security groups, etc.
We have a tactical account where we have created some resources on an ad hoc basis, and now we have a new official account set up via CloudFormation.
Also, in the near future we are planning to migrate the tactical account's resources to the new account, either manually or using the backed-up configs.
We looked at the AWS CLI, but it is time-consuming. Is there a script that crawls through AWS and backs up the resource configurations?
Thank You.
The "correct" way is not to 'backup' resources. Rather, it is to initially create those resources in a reproducible manner.
For example, creating resources via an AWS CloudFormation template allows the same resources to be deployed in a different region or account. Only the data itself, such as the information stored in a database, would need a 'backup'. Everything else could simply be redeployed.
There is a poorly-maintained service called CloudFormer that attempts to create CloudFormation templates from existing resources, but it only supports limited services and still requires careful editing of the resulting templates before they can be deployed in other locations (due to cross-references to existing resources).
There is also the relatively recent ability to Import Existing Resources into a CloudFormation Stack | AWS News Blog, but it requires that the template already includes the resource definition. The existing resources are simply matched to that definition rather than being recreated.
So, you now have the choice to 'correctly' deploy resources in your new account (involves work), or just manually recreate the ad-hoc resources that already exist (pushes the real work to the future). Hence the term Technical debt - Wikipedia.
I could not find a definite 'yes' or 'no' anywhere, so I thought I would ask here. Is it possible to run a custom script that would automatically create a new environment on AWS with all the settings (network, capacity, security, etc.)? I need to create a lot of new environments as I am switching from individual load balancers to shared ones, and all the settings are the same (apart from the environment and application names), so it involves a lot of manual work.
From What is AWS CloudFormation? - AWS CloudFormation:
AWS CloudFormation is a service that helps you model and set up your AWS resources so that you can spend less time managing those resources and more time focusing on your applications that run in AWS. You create a template that describes all the AWS resources that you want (like Amazon EC2 instances or Amazon RDS DB instances), and CloudFormation takes care of provisioning and configuring those resources for you. You don't need to individually create and configure AWS resources and figure out what's dependent on what; CloudFormation handles that.
If you want to create the CloudFormation template programmatically, you can use AWS CDK - AWS Cloud Development Kit (CDK):
The AWS CDK lets you build reliable, scalable, cost-effective applications in the cloud with the considerable expressive power of a programming language.
The AWS CDK supports TypeScript, JavaScript, Python, Java, C#/.Net, and Go. Developers can use one of these supported programming languages to define reusable cloud components known as Constructs. You compose these together into Stacks and Apps.
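As a rough illustration, here is a minimal CDK v2 app in Python that defines a single versioned S3 bucket; the stack and construct names are made up for the example:

```python
from aws_cdk import App, Stack, aws_s3 as s3
from constructs import Construct


class StorageStack(Stack):
    """Hypothetical stack containing one versioned bucket."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # The construct ID "ConfigBackupBucket" is purely illustrative
        s3.Bucket(self, "ConfigBackupBucket", versioned=True)


app = App()
StorageStack(app, "StorageStack")
app.synth()  # emits the CloudFormation template that `cdk deploy` would use
```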
Or, you can simply write your own script in a programming language that calls an AWS SDK to individually create resources in AWS. Everything in AWS can be done via API calls.
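For instance, a small boto3 sketch that creates a security group and opens HTTPS, purely to illustrate driving AWS through API calls (the group name and VPC ID are placeholders):

```python
import boto3

ec2 = boto3.client("ec2", region_name="ap-southeast-2")

# Create a security group in a hypothetical VPC
resp = ec2.create_security_group(
    GroupName="app-sg",
    Description="Allow HTTPS from anywhere",
    VpcId="vpc-0123456789abcdef0",
)

# Open port 443 to the world on the new group
ec2.authorize_security_group_ingress(
    GroupId=resp["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)
```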
We are successfully spinning up an AWS CloudFormation stack that includes a serverless RDS PostgreSQL instance. Once the PostgreSQL instance is in place, we automatically restore a PostgreSQL database dump (in binary format), created with pg_dump on a local development machine, onto the PostgreSQL instance that CloudFormation just created.
We're using a Lambda function (instantiated by the CloudFormation process) that includes a build of the pg_restore executable within a Lambda layer and we've also packaged our database dump file within the Lambda.
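For reference, a handler along those lines might look roughly like the sketch below; the paths, environment variable names, and pg_restore flags are assumptions rather than your actual implementation (the layer binary is expected under /opt, and the dump ships with the function code under /var/task):

```python
import os
import subprocess


def handler(event, context):
    # Hypothetical layout: pg_restore provided by a Lambda layer at /opt/bin,
    # the binary dump packaged with the function code at /var/task/db.dump.
    env = {**os.environ, "PGPASSWORD": os.environ["DB_PASSWORD"]}

    subprocess.run(
        [
            "/opt/bin/pg_restore",
            "--host", os.environ["DB_HOST"],
            "--username", os.environ["DB_USER"],
            "--dbname", os.environ["DB_NAME"],
            "--no-owner",   # ignore ownership from the dev machine
            "--clean",      # drop existing objects before recreating them
            "/var/task/db.dump",
        ],
        check=True,
        env=env,
    )
    return {"status": "restore complete"}
```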
The above approach seems complicated for something that presumably has been solved many times... but Google searches have revealed almost nothing that corresponds to our scenario. We may be thinking about our situation in the wrong way, so please feel free to offer a different approach (e.g., is there a CodePipeline/CodeBuild approach that would automate everything?). Our preference would be to stick with the AWS toolset as much as possible.
This process will be run every time we deploy to a new environment (e.g., development, test, stage, pre-production, demonstration, alpha, beta, production, troubleshooting) potentially per release as part of our CI/CD practices.
Does anyone have advice or a blog post that illustrates another way to achieve our goal?
If you have provisioned everything via IaC (Infrastructure as Code), most of the time-saving is already done: you should be able to replicate your infrastructure to other accounts/stacks via different roles in your AWS credentials and config files by passing the --profile some-profile flag. I would recommend AWS SAM (Serverless Application Model) over CloudFormation, though, as I find I only need to write about half the code (roles and policies are mostly created for you) and the feedback via the console is much better and faster. I would also recommend sam sync (currently in beta, but worth using) so you don't need to create a change set on code updates; only the code is updated, so deploys take 3-4 seconds. Some AWS SAM examples here, check both videos and patterns: https://serverlessland.com (official AWS site)
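As an aside, the same per-account switching works from a script too; a minimal boto3 sketch, assuming profiles named dev-account and prod-account exist in your AWS config/credentials files:

```python
import boto3

# Hypothetical profile names defined in ~/.aws/credentials / ~/.aws/config
dev = boto3.Session(profile_name="dev-account")
prod = boto3.Session(profile_name="prod-account")

# The same code can then target either account
print(dev.client("sts").get_caller_identity()["Account"])
print(prod.client("sts").get_caller_identity()["Account"])
```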
In terms of the restore of RDS, I'd probably create a base RDS instance, take a snapshot of it, and restore all other RDS instances from snapshots rather than recreating everything manually each time. You are able to copy snapshots, and in fact automate backups to snapshots cross-account (and cross-region if required), which should be a little cleaner.
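A rough boto3 sketch of that snapshot route (all identifiers and the account ID are placeholders; for an Aurora Serverless cluster the equivalent cluster-level calls such as create_db_cluster_snapshot and restore_db_cluster_from_snapshot apply instead):

```python
import boto3

rds = boto3.client("rds", region_name="eu-west-1")

# Take a snapshot of a hand-built "base" instance
rds.create_db_snapshot(
    DBInstanceIdentifier="base-postgres",
    DBSnapshotIdentifier="base-postgres-snapshot",
)
rds.get_waiter("db_snapshot_available").wait(
    DBSnapshotIdentifier="base-postgres-snapshot"
)

# Spin up each new environment's database from that snapshot
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="test-env-postgres",
    DBSnapshotIdentifier="base-postgres-snapshot",
    DBInstanceClass="db.t3.medium",
)

# For cross-account use, share the snapshot with the target account first
rds.modify_db_snapshot_attribute(
    DBSnapshotIdentifier="base-postgres-snapshot",
    AttributeName="restore",
    ValuesToAdd=["111122223333"],  # placeholder target account ID
)
```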
There is also a clean solution for replicating data continuously across AWS accounts using AWS DMS (Database Migration Service) and CDC (Change Data Capture). The process is that you have a source DB, a target DB, and a 'replication instance' (e.g. an EC2 micro) that monitors your source DB and replicates it across to different instances, so you always stay in sync on data. For example, if you have several devs working on separate 'stacks' so as not to pollute each other's logs, you can replicate data and changes out seamlessly from one DB. This works when the source and destination DBs are both already in AWS, but you can of course also use AWS DMS to migrate a local DB over to AWS; some more info here: https://docs.aws.amazon.com/dms/latest/sbs/chap-manageddatabases.html
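A hedged boto3 sketch of kicking off such a replication task, assuming the source/target endpoints and the replication instance already exist (all ARNs and names below are placeholders):

```python
import json
import boto3

dms = boto3.client("dms", region_name="us-east-1")

# Selection rule that includes every schema and table
table_mappings = {
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "include-everything",
        "object-locator": {"schema-name": "%", "table-name": "%"},
        "rule-action": "include",
    }]
}

task = dms.create_replication_task(
    ReplicationTaskIdentifier="sync-dev-stack",
    SourceEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:SOURCE",
    TargetEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:TARGET",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:111122223333:rep:INSTANCE",
    MigrationType="full-load-and-cdc",  # initial copy, then ongoing change data capture
    TableMappings=json.dumps(table_mappings),
)

dms.start_replication_task(
    ReplicationTaskArn=task["ReplicationTask"]["ReplicationTaskArn"],
    StartReplicationTaskType="start-replication",
)
```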
Please, how can I get the list of all the services I am using?
I have gone to the Service Quotas dashboard at
https://ap-east-1.console.aws.amazon.com/servicequotas/home?region=ap-east-1
and I could see a list of items (e.g. EC2, VPC, RDS, DynamoDB), but I did not understand what is there.
As I did not request some of the services I am seeing, I even went into Budgets at
https://console.aws.amazon.com/billing/home?region=ap-east-1#/budgets
and also into Credits, thinking maybe I could work out which services I have been given credits to use.
Also, how can I stop any service which I do not want?
The Billing console is not giving me tangible information either. I do not want the bill to pile up before I start taking the needed steps.
Is there a location where I can see all the services I am using, or maybe some code I can run that would produce such a result?
You can use the AWS Config resource inventory feature.
AWS Config will discover resources that exist in your account, record their current configuration, and capture any changes to these configurations. Config will also retain configuration details for resources that have been deleted. A comprehensive snapshot of all resources and their configuration attributes provides a complete inventory of resources in your account.
https://aws.amazon.com/config/
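For example, a small boto3 sketch that asks Config for every security group it has discovered in the current account/region (the resource type is just an example; any supported type can be queried the same way):

```python
import boto3

config = boto3.client("config")

# List every security group AWS Config has discovered in this account/region
resp = config.list_discovered_resources(resourceType="AWS::EC2::SecurityGroup")
for resource in resp["resourceIdentifiers"]:
    print(resource["resourceId"], resource.get("resourceName"))
```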
There is not an easy answer on this one, as there is not an AWS service that you can use to do this out of the box (yet).
There are some AWS services that you can use to get you close, like:
AWS Config (as suggested by #kepils)
Another option is to use Resource Groups and Tagging to list all resources within a region in your account (as described in this answer); a sketch follows below.
The issue, however, is that Config and Resource Groups come with the same limitation: neither can see every AWS service on its own.
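For what it's worth, a quick boto3 sketch of the Resource Groups Tagging API approach, which lists resource ARNs in the current region and derives the set of services in use from them; as noted above, it only sees resource types that the tagging API supports:

```python
import boto3

tagging = boto3.client("resourcegroupstaggingapi", region_name="ap-east-1")

# Page through every resource ARN the tagging API can see in this region
services = set()
paginator = tagging.get_paginator("get_resources")
for page in paginator.paginate():
    for item in page["ResourceTagMappingList"]:
        # ARN format: arn:partition:service:region:account:resource
        services.add(item["ResourceARN"].split(":")[2])

print(sorted(services))
```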
Another option would be to use a third-party tool such as aws-inventory or cloudmapper, if your end goal is simply to find out what you currently have running in your account.
On the second part of your question, how to stop any services which you don't want, you can do the following:
Don't grant excessive permissions to your users. If someone needs to work on EC2 instances, then their IAM role and the respective policy should allow only that, instead of, for example, full access.
You can limit the scope of services permitted for use within the account by creating Service Control Policies (SCPs) that allow only the specific services you plan to use; a sketch follows this list.
Set up AWS Budget notifications and potentially AWS Budget actions.
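A hedged sketch of the SCP idea using boto3 and AWS Organizations (this has to run in the organization's management account, and the allowed service list and target OU ID below are purely illustrative):

```python
import json
import boto3

org = boto3.client("organizations")

# Hypothetical SCP: deny everything except a short list of approved services
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "NotAction": ["ec2:*", "rds:*", "s3:*", "cloudwatch:*", "iam:*"],
        "Resource": "*",
    }],
}

policy = org.create_policy(
    Name="allow-approved-services-only",
    Description="Deny everything except the approved services",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)

# Attach the SCP to a placeholder OU (or an account ID)
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-root-exampleid",
)
```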
Had a question regarding infrastructure as code on AWS.
I'm wondering how to do this (the process of deploying), and also why this is an efficient method for architecture. Also, are there other methods that should be looked at over this?
I am looking to deploy this for a startup I am working for and need assistance in getting this going. Any help is appreciated.
Thank you.
From What is AWS CloudFormation?:
AWS CloudFormation is a service that helps you model and set up your Amazon Web Services resources so that you can spend less time managing those resources and more time focusing on your applications that run in AWS. You create a template that describes all the AWS resources that you want (like Amazon EC2 instances or Amazon RDS DB instances), and AWS CloudFormation takes care of provisioning and configuring those resources for you. You don't need to individually create and configure AWS resources and figure out what's dependent on what; AWS CloudFormation handles all of that.
So, instead of manually creating each bit of architecture (network, instances, queues, storage, etc), you can define them in a template and CloudFormation will deploy them. It is smart enough to mostly know the correct order of creation (eg creating the network before creating an Amazon EC2 instance within the network) and it can also remove resources when the 'stack' is no longer required.
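To make that concrete, a minimal boto3 sketch that deploys a template file as a stack and waits for it to finish; the file name and stack name are placeholders:

```python
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

# Read a template file (hypothetical path) and deploy it as a named stack
with open("network.yaml") as f:
    template_body = f.read()

cfn.create_stack(
    StackName="startup-network",
    TemplateBody=template_body,
    Capabilities=["CAPABILITY_NAMED_IAM"],  # needed if the template creates IAM resources
)

# Block until CloudFormation reports the stack is fully created
cfn.get_waiter("stack_create_complete").wait(StackName="startup-network")
```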
Other benefits:
The template effectively documents the infrastructure
Infrastructure can be checked into a source code repository, and versioned
Infrastructure can be repeatedly and consistently deployed (eg Test environment matches Production environment)
Changes can be made to the template and CloudFormation can update the 'stack' by just deploying the changes
There are tools (eg https://former2.com/) that can generate the template from existing infrastructure, or just create it from code snippets taken from the documentation.
Here's an overview: Simplify Your Infrastructure Management Using AWS CloudFormation - YouTube
There are also tools like Terraform that can deploy across multiple cloud services.
Having spent a couple of days setting up and configuring a new AWS account I would like to grab an export of the account configuration across all services. I've Googled around for existing scripts, etc, but have yet to find anything that would automate this process.
Primarily this would be a backup in case the account was corrupted in some way (including user error!), but it would also be useful for documenting the system.
From an account administration perspective, there are various parts of the AWS console that don't display friendly names for various resources. Being able to cross-reference against offline documentation would simplify these scenarios. For example, friendly names for VPCs and subnets aren't always displayed when configuring resources to use them.
Lastly I would like to be able to use this to spot suspicious changes to the configuration as part of intrusion detection. For example, looking out for security group changes to protected resources.
To clarify, I am looking to back up the configuration of AWS resources, not the actual resources themselves. Resource backups (e.g. of EC2 instances) are already covered.
The closest I've seen to that is CloudFormer.
That would create a CloudFormation template from your account's resources. Mind that this template would be only a starting point, not meant to be reproducible out-of-the-box. For example, it won't log into your instances or anything like that.
As for the intrusion detection part, see CloudTrail.
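For the security group example, a small boto3 sketch that pulls recent AuthorizeSecurityGroupIngress events from CloudTrail (the 24-hour window is arbitrary):

```python
from datetime import datetime, timedelta, timezone

import boto3

cloudtrail = boto3.client("cloudtrail", region_name="eu-west-1")

# Look for security group ingress changes made in the last 24 hours
resp = cloudtrail.lookup_events(
    LookupAttributes=[{
        "AttributeKey": "EventName",
        "AttributeValue": "AuthorizeSecurityGroupIngress",
    }],
    StartTime=datetime.now(timezone.utc) - timedelta(days=1),
)

for event in resp["Events"]:
    print(event["EventTime"], event.get("Username", "?"), event["EventName"])
```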
Check out AWS Config: https://aws.amazon.com/config/
AWS Config records the configuration of AWS resources automatically, allowing you to query and react to configuration changes. As AWS Config stores data on S3, that is probably enough backup, but you can also sync the bucket elsewhere for paranoid redundancy.
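A small boto3 sketch of that: look up the delivery channel's S3 bucket and request a fresh configuration snapshot (this assumes AWS Config is already recording in the account):

```python
import boto3

config = boto3.client("config")

# Find the S3 bucket that AWS Config delivers history and snapshots to
channel = config.describe_delivery_channels()["DeliveryChannels"][0]
print("Config data is delivered to:", channel["s3BucketName"])

# Ask Config to deliver a fresh snapshot of all recorded configurations
config.deliver_config_snapshot(deliveryChannelName=channel["name"])
```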