We are running AWS RDS PostgreSQL with daily automated snapshots, encrypted with the AWS managed KMS key. My objective is to minimize risk and data loss in case the main AWS account (running RDS) is compromised or the RDS instance is deleted or damaged in some way.
What we've implemented so far: RDS snapshots are shared with a different (backup) account, periodically copied to the backup account, and re-encrypted with a KMS key from the backup account, so the copies are local to the backup account and independent of the main AWS account.
I'm wondering if there are better ways to minimize the recovery time objective (RTO) and recovery point objective (RPO) in case of a disaster event?
This AWS blog post seems to weigh the options well.
Automated backups are limited to a single AWS Region, while manual snapshots and Read Replicas are supported across multiple Regions.
Having a cross-Region read replica would give you the best RPO and RTO, as you can promote the replica to an independent instance in the event of a disaster.
Alternatively, Amazon Aurora's Backtrack feature lets you rewind a cluster to an earlier point in time without restoring from a backup, but I don't have personal experience with this feature, so I can't say how effective it is at improving RTO and RPO.
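To make the read-replica option concrete, here is a minimal boto3 sketch of creating a cross-Region replica and promoting it during a disaster. The instance identifiers, Regions and ARN below are assumptions for illustration, not values from the question.

```python
import boto3

# Placeholder identifiers/Regions -- replace with your own values.
SOURCE_REGION = "us-east-1"
DR_REGION = "us-west-2"
SOURCE_DB_ARN = "arn:aws:rds:us-east-1:111111111111:db:prod-db"

dr_rds = boto3.client("rds", region_name=DR_REGION)

# Create the replica in the DR Region; RDS streams changes asynchronously,
# so the replica normally lags the source by seconds.
dr_rds.create_db_instance_read_replica(
    DBInstanceIdentifier="prod-db-replica",
    SourceDBInstanceIdentifier=SOURCE_DB_ARN,
    SourceRegion=SOURCE_REGION,  # lets boto3 build the pre-signed URL for the cross-Region copy
    # For an encrypted source instance, also pass KmsKeyId=<key in the DR Region>.
)

# During a disaster, promote the replica to a standalone, writable instance.
dr_rds.promote_read_replica(DBInstanceIdentifier="prod-db-replica")
```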
I wrote two scripts implementing the flow in the diagram above ^^^ (the core calls are sketched below); the idea is to run them daily:
src_acc_take_share_rds_snapshot.py, run in the src account:
list available RDS snapshots matching the provided regexp
re-encrypt them with the KMS key shared from the dst account
share the re-encrypted RDS snapshots with the dst account
remove old re-encrypted snapshots
dst_acc_copy_shared_rds_snapshot_to_local.py, run in the dst account:
list RDS snapshots shared by the src account with the dst account
copy the shared RDS snapshots from the src account to the dst account
remove old copied snapshots
fire an SNS message if the desired snapshot count != the actual count
and published them on GitHub: https://github.com/mvasilenko/dr-rds-share-snapshot
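For readers who don't want to open the repo, the core of that flow comes down to a few boto3 calls. This is a simplified sketch, not the actual code from the repository; the account ID, KMS key ARNs and snapshot identifiers are placeholders.

```python
import boto3

# Placeholder identifiers -- replace with your own values.
DST_ACCOUNT_ID = "222222222222"
SHARED_KMS_KEY_ARN = "arn:aws:kms:us-east-1:222222222222:key/xxxx"  # key owned by dst, shared with src

rds_src = boto3.client("rds", region_name="us-east-1")  # credentials of the source account
rds_dst = boto3.client("rds", region_name="us-east-1")  # credentials of the backup account


def share_snapshot_from_source(snapshot_id: str) -> str:
    """Source account: re-encrypt an automated snapshot and share it with the backup account."""
    recrypted_id = f"{snapshot_id.replace(':', '-')}-recrypted"

    # Automated snapshots cannot be shared directly, so copy them into a manual
    # snapshot, re-encrypting with the KMS key shared from the backup account.
    rds_src.copy_db_snapshot(
        SourceDBSnapshotIdentifier=snapshot_id,
        TargetDBSnapshotIdentifier=recrypted_id,
        KmsKeyId=SHARED_KMS_KEY_ARN,
    )

    # Share the re-encrypted manual snapshot with the backup account.
    rds_src.modify_db_snapshot_attribute(
        DBSnapshotIdentifier=recrypted_id,
        AttributeName="restore",
        ValuesToAdd=[DST_ACCOUNT_ID],
    )
    return recrypted_id


def copy_shared_snapshot_locally(shared_snapshot_arn: str, local_kms_key_arn: str) -> None:
    """Backup account: copy the shared snapshot so it survives loss of the source account."""
    rds_dst.copy_db_snapshot(
        SourceDBSnapshotIdentifier=shared_snapshot_arn,  # ARN of the snapshot shared by the source account
        TargetDBSnapshotIdentifier="prod-db-dr-copy",
        KmsKeyId=local_kms_key_arn,                      # key that exists only in the backup account
    )
```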
We had an instance where MongoDB was hosted. Someone deleted MongoDB data by mistake, and we don't have any snapshot policy to retrieve a backup for that account...
In this case, can AWS provide a backup as a snapshot from their data-center backup mechanism?
Please let me know, as it's very important for us to work this out.
Unfortunately, if you do not see an EBS snapshot in the account, then it does not exist. AWS does not keep extra backups of snapshots separate from its customers' accounts.
Under the AWS Shared Responsibility Model, customer data, including backups of that data, is the sole responsibility of the customer.
See https://aws.amazon.com/compliance/shared-responsibility-model/
I have a hybrid AWS setup: an on-prem Hadoop cluster with replication enabled towards a similar Hadoop cluster in AWS running at low capacity for disaster recovery. This is an active-active disaster recovery setup. Is it still recommended to take backups of data that is stored in AWS?
Is it still recommended to take backups for data that is stored on AWS?
It's not clear what AWS services you're referring to.
Well, let's say you have an S3 bucket bound only to us-east-1 and that Region becomes unavailable: you can't access your data. Therefore, it's encouraged to replicate to another Region. However, S3 advertises several nines of availability, and if an AWS service is down in a major Region, a good portion of the internet will probably be inaccessible, not only your data.
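If you go the replication route, S3 Cross-Region Replication can be enabled with boto3 roughly as below. This is only a sketch: the bucket names and the IAM role ARN are placeholders, and both buckets must already exist (with versioning on the destination as well) before the replication configuration is accepted.

```python
import boto3

s3 = boto3.client("s3")

SOURCE_BUCKET = "my-data-us-east-1"                                  # placeholder bucket name
DEST_BUCKET_ARN = "arn:aws:s3:::my-data-us-west-2"                   # placeholder destination bucket
REPLICATION_ROLE_ARN = "arn:aws:iam::111111111111:role/s3-crr-role"  # role S3 assumes to replicate

# Versioning is a prerequisite for replication on both buckets.
s3.put_bucket_versioning(
    Bucket=SOURCE_BUCKET,
    VersioningConfiguration={"Status": "Enabled"},
)

# Replicate every new object to the bucket in the other Region.
s3.put_bucket_replication(
    Bucket=SOURCE_BUCKET,
    ReplicationConfiguration={
        "Role": REPLICATION_ROLE_ARN,
        "Rules": [
            {
                "ID": "replicate-everything",
                "Status": "Enabled",
                "Prefix": "",
                "Destination": {"Bucket": DEST_BUCKET_ARN},
            }
        ],
    },
)
```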
Hi, I am an AWS newbie. I am moving an AMI from one Region to another, and I was wondering if I need to select the "encrypt EBS snapshot" option when copying an AMI from, say, Oregon to Virginia.
If I don't encrypt the snapshot, does that mean any hacker can see what is in my AMI en route from one Region to another?
Thanks
The option to encrypt an EBS Snapshot provides encryption-at-rest. This is to prevent someone with access to the underlying hardware, like an Amazon employee, from being able to read the information on the disk.
Your concern that someone could see the data as it is transmitted between Regions is covered by encryption in transit. AWS automatically uses SSL/TLS to ensure that the data being transmitted is not readable by anyone.
When copying data over a public network (including to a cloud) you should always use encryption. Amazon provides encryption for data at rest, for data movement within AWS offerings, and for any snapshots you create. When moving data, they recommend using a customer managed CMK rather than the default key, and then granting individual users access to that key. Their documentation has more details: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html.
And since you can't directly change the encryption status of a volume, encrypting your snapshot is the way to go. Depending on your needs, you may decide to encrypt new volumes, or all snapshots -- regardless of Region.
If you'd like more information on managing EBS volumes, NetApp has a good article here.
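As a concrete illustration, copying an AMI or a snapshot across Regions with encryption turned on is a single boto3 call in the destination Region. The IDs, Regions and key ARN below are made up for the sketch.

```python
import boto3

# Client in the destination Region (Virginia); identifiers below are placeholders.
ec2_virginia = boto3.client("ec2", region_name="us-east-1")

# Cross-Region AMI copy with encryption at rest enabled for the new snapshots.
ec2_virginia.copy_image(
    Name="web-server-ami-encrypted",
    SourceImageId="ami-0123456789abcdef0",
    SourceRegion="us-west-2",  # Oregon -> Virginia
    Encrypted=True,
    KmsKeyId="arn:aws:kms:us-east-1:111111111111:key/xxxx",  # customer managed CMK in the target Region
)

# The same pattern works for a single EBS snapshot.
ec2_virginia.copy_snapshot(
    SourceSnapshotId="snap-0123456789abcdef0",
    SourceRegion="us-west-2",
    Encrypted=True,
    KmsKeyId="arn:aws:kms:us-east-1:111111111111:key/xxxx",
)
```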
We all know what happened to Code Spaces: they were hacked and their AWS account was essentially erased. I'm trying to put together a recommendation on a set of tools and best practices for archiving my entire production AWS account into a backup account that only I would have access to. The backup account will be purely for DR purposes, storing EBS snapshots, AMIs, RDS snapshots, etc.
Thoughts?
Separating the production account from the backup account for DR purposes is an excellent idea.
Setting up a "cross-account" backup solution can be based on the EBS snapshot-sharing feature (RDS snapshots can now be shared across accounts as well, as described in the thread above).
If you want to implement such a solution, please consider the following:
Will the snapshots be stored in both the source and DR accounts? If so, you will pay for the storage twice.
How do you protect the credentials of the DR account? You should make sure the credentials used to copy snapshots across accounts are not permitted to delete snapshots (see the policy sketch after this list).
Consider how older snapshots will eventually be deleted. You may want to handle snapshot deletion separately, using different credentials.
Make sure your snapshots can easily be recovered from the DR account back to the original account.
Think of ways to automate this cross-account process and make it simple and error-free.
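One way to enforce the "copy but never delete" point is an inline IAM policy like the boto3 sketch below. The user name and the action scope are illustrative assumptions only; adjust them to the services you actually back up.

```python
import json

import boto3

iam = boto3.client("iam")

# Inline policy that lets the copy user describe/copy snapshots but explicitly
# denies deletion; the explicit Deny wins over any other Allow.
copy_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeSnapshots",
                "ec2:CopySnapshot",
                "rds:DescribeDBSnapshots",
                "rds:CopyDBSnapshot",
            ],
            "Resource": "*",
        },
        {
            "Effect": "Deny",
            "Action": ["ec2:DeleteSnapshot", "rds:DeleteDBSnapshot"],
            "Resource": "*",
        },
    ],
}

iam.put_user_policy(
    UserName="dr-snapshot-copier",  # hypothetical IAM user used only for copying
    PolicyName="copy-but-never-delete-snapshots",
    PolicyDocument=json.dumps(copy_only_policy),
)
```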
The company I work for recently released a product called “Cloud Protection Manager (CPM) v1.8.0” in the AWS Marketplace which supports cross-account backup and recovery in AWS and a process where a special account is used for DR only.
I think you would be able to set up a VPC and then use VPC peering to see the other account and access S3 in that account.
To prevent something like Code Spaces, make sure you use MFA authentication (there's no excuse for not using it; the Google Authenticator app for your phone is free and safer than just having a single password as protection).
Also, don't use the account owner credentials; set up a separate IAM role with just the permissions you need (and enable MFA on this account as well).
The only issue is that VPC peering doesn't work across Regions, which would be nicer than having the DR setup in a different AZ in the same Region.
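If you want to go a step further than just enabling MFA devices, an IAM policy condition can refuse API calls made without MFA. A minimal sketch, assuming a hypothetical "dr-operators" group:

```python
import json

import boto3

iam = boto3.client("iam")

# Deny all non-IAM actions when the caller has not authenticated with MFA,
# so operators can still set up their MFA device on first login.
require_mfa_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyAllWithoutMFA",
            "Effect": "Deny",
            "NotAction": ["iam:*"],
            "Resource": "*",
            "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}},
        }
    ],
}

iam.put_group_policy(
    GroupName="dr-operators",  # hypothetical group holding the DR/backup users
    PolicyName="require-mfa-for-everything",
    PolicyDocument=json.dumps(require_mfa_policy),
)
```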
I have the following Amazon EC2 configuration:
Prod Web & DB server (Virginia)
Web & DB server (Oregon)
I would like to store my SQL backups in S3 so that they are available to be restored to my standby server in case the Virginia region goes down for any period of time (which has been known to happen :)
Here are the following 2 regions I am considering for my S3 bucket
US Standard
Oregon
I first attempted to specify Oregon, but when I do that, I am unable (for some reason) to upload to that bucket from my Virginia instance. On the other hand, I am worried that if I specify US Standard, my S3 bucket will not be available in the event Virginia becomes unavailable.
Does anyone have any recommendations for overcoming the issues with either of these scenarios?
Thanks!
My recommendation is to use RDS (Relational Database Service), which is basically a managed RDBMS service for MySQL (or MS SQL or Oracle). It takes care of backup and restore for the DB.
With MySQL it has the option of an automatic standby in a different Availability Zone in the same Region. When you use the "Multi-AZ" option, it will create the standby and keep it synchronized with the primary. This way your failover will be very close to real time.
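Turning on Multi-AZ is a single flag when the instance is created (or added later). A minimal sketch, with made-up identifiers and sizes:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Multi-AZ keeps a synchronous standby in another Availability Zone and fails
# over automatically; identifiers, class and sizes below are placeholders.
rds.create_db_instance(
    DBInstanceIdentifier="prod-mysql",
    Engine="mysql",
    DBInstanceClass="db.m5.large",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="change-me-please",
    MultiAZ=True,                 # synchronous standby in a second AZ
    BackupRetentionPeriod=7,      # daily automated backups kept for a week
)

# An existing instance can be converted in place as well.
rds.modify_db_instance(
    DBInstanceIdentifier="prod-mysql",
    MultiAZ=True,
    ApplyImmediately=True,
)
```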