I am trying to create a sandbox playground in AWS for users to practice with some resources for 30 minutes; after that, all resources should be destroyed and the temporary account deleted as well.
I have found some information suggesting that CloudFormation, Lambda and IAM combined can be used, or AWS Control Tower, but I have no idea where to begin.
You would need:
A separate AWS account, so that anything created/deleted in the account will not impact your normal environment (this account can be reused; there is no reason to use a new AWS account each time you want a sandbox)
A means of deleting resources from the account when the time period is reached
Some example tools that can do this are:
AWS Nuke
Cloud Nuke
You would also need to write some code that ties everything together:
Vending the account
Tracking usage (e.g. when to clean up)
Triggering the cleanup script when the time limit has been reached (see the sketch below)
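For the cleanup trigger, here is a minimal sketch of one possible wiring: an EventBridge-scheduled Lambda (Python/boto3) that reads sandbox leases from a DynamoDB table and, once the 30-minute limit has passed, starts a CodeBuild job that runs aws-nuke against the expired sandbox account. The table name, its attributes and the CodeBuild project name are all hypothetical.
# Hedged sketch: scheduled Lambda that expires sandbox leases.
import time
import boto3

dynamodb = boto3.resource("dynamodb")
codebuild = boto3.client("codebuild")

LEASES_TABLE = "sandbox-leases"       # hypothetical table: account_id, expires_at (epoch seconds)
CLEANUP_PROJECT = "sandbox-aws-nuke"  # hypothetical CodeBuild project that runs aws-nuke

def handler(event, context):
    table = dynamodb.Table(LEASES_TABLE)
    now = int(time.time())
    # A scan is fine at sandbox scale; use a query/GSI for anything bigger.
    expired = [item for item in table.scan()["Items"] if int(item["expires_at"]) <= now]
    for lease in expired:
        # Kick off the cleanup job (e.g. aws-nuke --no-dry-run) for the expired sandbox account.
        codebuild.start_build(
            projectName=CLEANUP_PROJECT,
            environmentVariablesOverride=[
                {"name": "SANDBOX_ACCOUNT_ID", "value": lease["account_id"], "type": "PLAINTEXT"}
            ],
        )
        table.delete_item(Key={"account_id": lease["account_id"]})
    return {"cleaned": [item["account_id"] for item in expired]}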
Bottom line: It will take some work to create such a Sandbox.
We have a standby AWS account in case we lose access to our production account. We want to make sure that service limits are exactly the same for both accounts and stay in sync. Over time various service limits have been increased for the production account.
Is there a way to list all actual service limits for an account to make them easily comparable, ideally with the AWS CLI, boto3 or whatever?
The only way I can think of is manually going through both accounts' support cases and identifying limit increases that way.
Not directly from CLI but https://awslimitchecker.readthedocs.io/en/latest/cli_usage.html may be useful to you.
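As a complement (not part of the original answer): since the question mentions boto3, the Service Quotas API can dump applied quotas per service from each account so you can diff them. This is a rough sketch with placeholder profile names and a single hardcoded service code; coverage varies by service, so treat it as complementary to awslimitchecker rather than a replacement:
import boto3

def quotas(profile, service_code="ec2"):
    # Collect the applied quota values for one service in one account.
    client = boto3.Session(profile_name=profile).client("service-quotas")
    values = {}
    for page in client.get_paginator("list_service_quotas").paginate(ServiceCode=service_code):
        for quota in page["Quotas"]:
            values[quota["QuotaName"]] = quota["Value"]
    return values

prod = quotas("production")   # placeholder CLI profile names
standby = quotas("standby")
for name in sorted(prod):
    if standby.get(name) != prod[name]:
        print(f"{name}: production={prod[name]} standby={standby.get(name)}")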
I would like to know a system by which I can keep track of multiple aws accounts, somewhere around 130+ accounts with each account containing around 200+ servers.
I want to know methods to keep track of machine failures, service failures, etc.
I also want to know methods by which I can automatically bring a machine back up if the underlying hardware fails or the machine is terminated while running on spot.
I'm open to all solutions, including Chef/Terraform automation, healing scripts, etc.
You guys will be saving me a lot of sleepless nights :)
Thanks in advance!!
This is purely my take on implementing your problem statement.
1) Well... for managing and keeping track of multiple AWS accounts you can use AWS Organizations. This will help you centrally manage all the other 130+ accounts from one root (management) account, and you can enable consolidated billing as well (see the small boto3 sketch after the links below).
2) As far as keeping track of failures goes, you may need to customize this according to your requirements. For example, you can build a microservice on top of Docker containers or ECS whose sole purpose is to keep track of failures, generate a report and push it to S3 on a daily basis. You can further create a dashboard out of these reports in S3 using Amazon QuickSight.
There can be another microservice which rectifies the failures. It just depends on how exhaustive and fine-grained you want your implementation to be.
3) As for spawning instances when spot instances are terminated, this can be achieved through simple Auto Scaling configurations. Here are some articles you may want to go through which will give you some ideas:
Using Spot Instances with On-Demand instances
Optimizing Spot Fleet+Docker with High Availability
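For point 1, here is the small boto3 sketch referenced above; run it with credentials in the Organizations management (root) account to list every member account:
import boto3

org = boto3.client("organizations")
for page in org.get_paginator("list_accounts").paginate():
    for account in page["Accounts"]:
        print(account["Id"], account["Name"], account["Status"])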
AWS Organizations is useful for management. You can also look at a multi-account billing strategy and security strategy. A shared services account holding your IAM users will make things easier.
Regarding tracking failures, you can set up automatic instance recovery using CloudWatch. CloudWatch can also have alarms defined that will email you when something happens that you don't expect, though setting them up individually could be time-consuming. At your scale I think you should look into third-party tools.
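As a rough sketch of the automatic instance recovery part, this is roughly what a per-instance recovery alarm looks like in boto3. The Region, instance ID and thresholds are placeholders, and the recover action is only supported for certain instance types:
import boto3

REGION = "us-east-1"
INSTANCE_ID = "i-0123456789abcdef0"  # hypothetical

cloudwatch = boto3.client("cloudwatch", region_name=REGION)
# Alarm on the system status check and let EC2 recover the instance onto healthy hardware.
cloudwatch.put_metric_alarm(
    AlarmName=f"recover-{INSTANCE_ID}",
    Namespace="AWS/EC2",
    MetricName="StatusCheckFailed_System",
    Dimensions=[{"Name": "InstanceId", "Value": INSTANCE_ID}],
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=2,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=[f"arn:aws:automate:{REGION}:ec2:recover"],
)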
I have an AWS account where multiple EC2 instances, load balancers, target groups, security groups etc are setup by multiple owners.
We use Terraform to set this up, but sometimes, due to corruption, the state becomes inconsistent. The current recovery mechanism is to manually destroy all resources in that account owned by a particular owner.
Is there an easy way to nuke all resources in an AWS account belonging to a particular owner?
There is no way to delete all resources in an account owned by a particular user but there is a way to delete all resources in an account.
You can use aws-nuke which was created somewhat out of the same use case you described.
First, you need to set an account alias for your account.
You must create a config file.
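A minimal config might look roughly like this (key names can differ between aws-nuke versions, so check the project README; the account IDs, alias and user name are placeholders):
regions:
  - global
  - us-east-1

account-blocklist:
  - "999999999999"         # your real/production account, never nuked

accounts:
  "000000000000":          # the sandbox account with alias aws-nuke-example
    filters:
      IAMUser:
        - "my-admin-user"  # keep the user you log in with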
Then you can list all the resources that would be deleted using the following command:
aws-nuke -c config/nuke-config.yml --profile aws-nuke-example
Add the --no-dry-run option to the same command to permanently delete all resources.
There are also multiple filter options available such as target, resource type, exclude, etc. that you can leverage to suit your needs.
I agree with the other answer that there is no easy way to delete orphaned resources.
But I see that the original issue is that the Terraform state is corrupted.
You can check out the terraform import feature, which lets you bring existing AWS resources back under state management. That way you can connect your configuration to the resources again.
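For example, to bring an existing instance back under a resource already declared in your configuration (hypothetical resource address and instance ID):
terraform import aws_instance.web i-0123456789abcdef0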
Short answer: no.
Longer answer: actually, that's also no. There's no built-in capability for this.
The case you're describing is not within the bounds of typical AWS usage... destroying everything in an account -- usually -- should not be easy.
Of course, you could script it fairly trivially by wrapping calls to the AWS CLI in custom code that iterates through the resources and generates additional requests to destroy them... but if you do, lock that code away, since such a capability is inherently dangerous.
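As an illustration of that approach, and nothing more than a sketch with a hypothetical Owner tag and a single Region, this is what the EC2 part might look like in boto3; every other service (load balancers, target groups, security groups, ...) needs its own loop:
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Find every instance tagged with the given owner, then terminate them in one call.
instance_ids = []
for page in ec2.get_paginator("describe_instances").paginate(
    Filters=[{"Name": "tag:Owner", "Values": ["alice"]}]  # hypothetical tag value
):
    for reservation in page["Reservations"]:
        instance_ids += [i["InstanceId"] for i in reservation["Instances"]]

if instance_ids:
    ec2.terminate_instances(InstanceIds=instance_ids)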
You can delete all the resources you created, but you'll need to automate it; see a sample here:
Creation
https://github.com/jouellnyc/AWS/tree/master/create_aws_vpc2
Deletion
https://github.com/jouellnyc/AWS/blob/master/create_aws_vpc2/delete_lb_and_vpc.sh
Other
I've had some success with cloud-nuke (I only played around with it for a few minutes, not in depth):
https://github.com/gruntwork-io/cloud-nuke
I don't think there is any straightforward way to do it, but to check whether you have any active resources in your account, do the following:
Open the Billing and Cost Management console.
Choose Bills in the navigation pane.
You can see the charges incurred by different services in the Bill details by service section.
You can see the charges incurred in different AWS Regions in the Bill details by account section.
For each service, identify the Regions where the services have incurred charges.
To terminate the identified active resources under different services, do the following:
Open the AWS Management Console.
For Find services, enter the service name.
After opening the service console, terminate all your active resources. Be sure to check each Region where you have allocated resources.
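If you prefer code over the console, a rough boto3 equivalent of the "Bill details by service" view is the Cost Explorer API (Cost Explorer must be enabled on the account, and the billing period below is a placeholder):
import boto3

ce = boto3.client("ce")
response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},  # placeholder billing period
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)
# Print each service that incurred charges, which tells you where to look for live resources.
for group in response["ResultsByTime"][0]["Groups"]:
    print(group["Keys"][0], group["Metrics"]["UnblendedCost"]["Amount"])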
If the issue is a corrupted Terraform state, perhaps storing the state in a versioned S3 bucket would help reduce the impact of that.
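For example, a remote backend using a versioning-enabled S3 bucket plus DynamoDB locking might look like this (bucket, key and table names are hypothetical):
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"    # versioning enabled on this bucket
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-state-lock"  # state locking
    encrypt        = true
  }
}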
Use Terraformer to import all resources into Terraform configuration, then do whatever you want:
terraformer import aws --resources="*"
https://github.com/GoogleCloudPlatform/terraformer
Take care of your state file locking, e.g. by using DynamoDB, and enable S3 versioning.
Is there an easy way to nuke all resources in an AWS account belonging to a particular owner?
Since you are using Terraform, you can apply a blank (empty) configuration.
It will destroy all the resources tracked in the state file.
We use terraform to set this up but sometimes due to corruption, the state becomes inconsistent.
It's better to use a version control system to avoid drift and inconsistencies in your state file, and to use remote state in order to make sure everyone is on the same page.
I was thinking: what if my AWS account gets deleted or becomes inaccessible one fine day? (It may sound weird.) Has anyone implemented a solution for this? Can we have a backup from one AWS account to another AWS account?
There are several things you can do. One is to make sure you have at least two administrator accounts: one that you use, and one that you store away in a safe place and only use for emergencies.
The second is to set up a completely separate AWS account as a 'backup', with its own set of credentials. You can grant cross-account access from your primary account to your backup account, but only allow the primary account to 'put' (back up) objects into the backup account, so that even if your primary account is compromised, the attacker can't do harm to the second account from the primary account.
The actual process to backup your services on one account to another is going to vary depending on which services you are using, but the concept is the same - backup the data to s3 and then copy the data from s3 in your primary account to s3 in the backup account - and make sure the primary account only has enough access to the second account to 'put' things, not delete. Nobody in your company should have access to both of those sets of credentials (assuming your company is not tiny).
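As a rough illustration of "put but not delete", a bucket policy on the backup account's bucket might look something like this; the account ID and bucket name are placeholders, and you would tighten the actions and add conditions to suit your setup:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPrimaryAccountPutOnly",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111111111111:root" },
      "Action": ["s3:PutObject"],
      "Resource": "arn:aws:s3:::my-backup-bucket/*"
    }
  ]
}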
You don't want to be this company that was put out of business when their account was compromised:
https://threatpost.com/hacker-puts-hosting-service-code-spaces-out-of-business/106761/
Also, in this video from AWS re:Invent 2015 (starting around 50 minutes in), you can hear how Airbnb protects against these issues in just this way:
https://www.youtube.com/watch?v=eHg8LD5KNC0
Finally, after a lot of research and help from Skeddly, I found a solution for Linux machines.
Via Skeddly we can copy EBS snapshots to another AWS account hassle-free. It works brilliantly. Now even if my primary AWS account is compromised, I have all the EBS snapshots to start with in my BCP AWS account for Linux machines ;)
Now I am hunting for a Windows machine solution, for which I already have an idea... ;)
We all know what happened to Code Spaces: they got hacked and their AWS account was essentially erased. I'm trying to put together a recommendation on a set of tools and best practices for archiving my entire production AWS account into a backup-only account that only I would have access to.
Thoughts?
Separating the production account from the backup account for DR purposes is an excellent idea.
Setting up a "cross-account" backup solution can be based on the EBS snapshot sharing feature that is currently not available for RDS.
If you want to implement such a solution, please consider the following:
Will the snapshots be stored in both the source and DR accounts? If they are, it will cost you twice as much.
How do you protect the credentials of the DR account? You should make sure the credentials used to copy snapshots across accounts are not permitted to delete snapshots.
Consider the way older snapshots get deleted at some point. You may want to deal with snapshot deletion separately using different credentials.
Make sure your snapshots can be easily recovered from the DR account back to the original account.
Think of ways to automate this cross-account process and make it simple and error-free.
The company I work for recently released a product called “Cloud Protection Manager (CPM) v1.8.0” in the AWS Marketplace which supports cross-account backup and recovery in AWS and a process where a special account is used for DR only.
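Here is a rough sketch of the share-and-copy flow for a single unencrypted EBS snapshot using boto3. The snapshot ID, DR account ID, profile names and Region are placeholders; encrypted snapshots additionally need the KMS key shared with the DR account:
import boto3

SNAPSHOT_ID = "snap-0123456789abcdef0"  # hypothetical
DR_ACCOUNT_ID = "222222222222"          # hypothetical
REGION = "us-east-1"

# In the production account: share the snapshot with the DR account.
prod_ec2 = boto3.Session(profile_name="production").client("ec2", region_name=REGION)
prod_ec2.modify_snapshot_attribute(
    SnapshotId=SNAPSHOT_ID,
    Attribute="createVolumePermission",
    OperationType="add",
    UserIds=[DR_ACCOUNT_ID],
)

# In the DR account: copy the shared snapshot so the DR account owns an independent copy.
dr_ec2 = boto3.Session(profile_name="dr").client("ec2", region_name=REGION)
copy = dr_ec2.copy_snapshot(
    SourceRegion=REGION,
    SourceSnapshotId=SNAPSHOT_ID,
    Description="Cross-account DR copy",
)
print(copy["SnapshotId"])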
I think you would be able to set up a VPC and then use VPC peering to see the other account and access S3 in that account.
To prevent something like Code Spaces, make sure you use MFA (there's no excuse for not using it; the Google Authenticator app for your phone is free and safer than just having a single password as protection).
Also, don't use the account owner (root) credentials; set up a separate IAM role with just the permissions you need (and enable MFA on this account as well).
The only issue is that VPC peering doesn't work across Regions, which would be nicer than having the DR in a different AZ in the same Region.