We have backup rules to keep snapshots of the instance, as follows:
One snapshot every day for the most recent 7 days,
One snapshot every weekend for the most recent 4 weeks, and
One snapshot every month-end for the most recent 12 months.
So in total, there will be 7 + (4-1) + (12-1) = 21 copies required at any point in time (the most recent weekly and month-end snapshots already overlap with the shorter-retention tiers, hence the -1 terms).
However, the existing EC2 snapshot lifecycle policy does not seem flexible enough to retain my backup copies according to the above rules. Hence, I was thinking about using a Lambda function or Step Functions. But won't the lifecycle policy override the Lambda function?
Any ideas how this can be achieved from a solution architecture perspective?
Thanks a lot.
In the end, we managed to achieve this by creating 3 separate lifecycle policies (sketched in code below):
Create a snapshot once a day, and keep it for 7 days.
Do the same every Sunday, and keep it for 30 days.
Take another snapshot on the 1st day of every month, and keep it for 365 days.
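If you prefer to script this rather than create the policies in the console, a rough boto3 sketch using Amazon Data Lifecycle Manager follows. The role ARN and target tag are placeholders, and a single policy with multiple schedules would also work.

```python
import boto3

dlm = boto3.client("dlm")

ROLE_ARN = "arn:aws:iam::123456789012:role/AWSDataLifecycleManagerDefaultRole"  # placeholder
TARGET_TAGS = [{"Key": "Backup", "Value": "true"}]  # placeholder tag on the instance

# (policy name, snapshot creation schedule, retention in days)
policies = [
    ("daily-keep-7d",     "cron(0 5 * * ? *)",   7),    # every day, keep 7 days
    ("weekly-keep-30d",   "cron(0 5 ? * SUN *)", 30),   # every Sunday, keep 30 days
    ("monthly-keep-365d", "cron(0 5 1 * ? *)",   365),  # 1st of each month, keep 365 days
]

for name, cron, retain_days in policies:
    dlm.create_lifecycle_policy(
        ExecutionRoleArn=ROLE_ARN,
        Description=name,
        State="ENABLED",
        PolicyDetails={
            "PolicyType": "EBS_SNAPSHOT_MANAGEMENT",
            "ResourceTypes": ["INSTANCE"],
            "TargetTags": TARGET_TAGS,
            "Schedules": [{
                "Name": name,
                "CreateRule": {"CronExpression": cron},
                "RetainRule": {"Interval": retain_days, "IntervalUnit": "DAYS"},
            }],
        },
    )
```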
I set up an AWS Backup plan which takes backups of our EC2 instances and EBS volumes, but for some reason it is not moving them to cold storage.
Here is my backup plan:
Frequency: Daily, at 05:00 AM UTC
Start within: 8 hours
Complete within: 7 days
Lifecycle: transition to cold storage after 2 days; expire after 95 days
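(For reference, the same plan expressed via boto3 would look roughly like this - a sketch; the plan and vault names are placeholders.)

```python
import boto3

backup = boto3.client("backup")

backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "daily-ec2-ebs",           # placeholder name
        "Rules": [{
            "RuleName": "daily-0500-utc",
            "TargetBackupVaultName": "Default",      # placeholder vault
            "ScheduleExpression": "cron(0 5 * * ? *)",
            "StartWindowMinutes": 8 * 60,            # start within 8 hours
            "CompletionWindowMinutes": 7 * 24 * 60,  # complete within 7 days
            "Lifecycle": {
                "MoveToColdStorageAfterDays": 2,
                "DeleteAfterDays": 95,
            },
        }],
    }
)
```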
For some reason, it is not moving to cold storage. Not sure what I am missing. Any help is much appreciated.
EDIT: I have noticed that the backups are being removed from the vault (moved to cold storage) after 9 days, but I specified in the backup plan to move them to cold storage after 2 days. I assume it takes 9 days because of "complete within" (7 days) + 2 days. Is this the case?
I don't believe transition to cold storage is supported for EC2/EBS - check out the feature matrix (and the FAQ):
https://docs.aws.amazon.com/aws-backup/latest/devguide/whatisbackup.html#features-by-resource
I am using AWS S3 for backups. I have it set up right now so that after 30 days objects are moved to Glacier for cold storage. Since these are backups, what I would like to do is keep the last 30 days of backups; after 30 days, only the backups taken on the first of each month; and after 1 year, only the backup taken on the first of the year.
Since a backup is made daily, I need a way to tell AWS the following for lifecycle management:
If a backup is more than 30 days old and was not taken on the first of the month, delete it.
If a backup is more than 1 year old and was not taken on the first of January, delete it.
Right now I have to go in and clean house once a month. The reason I want to do this is that keeping every backup from every day gets very storage-intensive. How would I automate this process?
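One way to automate this is a Lambda function scheduled to run daily that applies both rules. A minimal sketch, assuming boto3 and placeholder bucket/prefix names:

```python
import boto3
from datetime import datetime, timedelta, timezone

BUCKET = "my-backup-bucket"  # placeholder
PREFIX = "backups/"          # placeholder

def lambda_handler(event, context):
    s3 = boto3.client("s3")
    now = datetime.now(timezone.utc)
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
        for obj in page.get("Contents", []):
            created = obj["LastModified"]
            age = now - created
            # Rule 1: older than 30 days and not taken on the 1st of a month
            if age > timedelta(days=30) and created.day != 1:
                s3.delete_object(Bucket=BUCKET, Key=obj["Key"])
            # Rule 2: older than a year and not taken on January 1st
            elif age > timedelta(days=365) and not (created.day == 1 and created.month == 1):
                s3.delete_object(Bucket=BUCKET, Key=obj["Key"])
```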
I want to restrict launching of EC2 instances to between the hours of 8:00 AM and 7:00 PM, Monday through Friday, for external contractors (for cost-cutting purposes). I found date Condition operators here. However, there is nothing that allows me to set up a pattern or regular expression to create a daily schedule of enablement.
Have I not found it, or does it simply not exist? And, if it doesn't exist, is there a way I can make use of what does exist to do what I want?
Thank you for your help!
You cannot do a regex/pattern.
What you can do is generate time intervals for each day (via a script, of course) and do a logical OR on all the conditions. This is somewhat of a mess and would be hard to maintain and understand. You will also probably run into limits on the policy size.
What I would do is have two policy templates: one allowing you to launch instances, the other not. Schedule a Lambda job for when you want to enable/disable launching; the Lambda should just update the policy.
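A minimal sketch of that approach, assuming boto3 and an IAM group for the contractors (the group name, policy documents, and schedules are placeholders): two EventBridge schedule rules invoke one Lambda, which swaps an inline group policy between an allow and a deny document.

```python
import json
import boto3

iam = boto3.client("iam")

GROUP = "external-contractors"  # placeholder IAM group
POLICY_NAME = "Ec2LaunchWindow"

ALLOW_DOC = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "ec2:RunInstances", "Resource": "*"}],
}
DENY_DOC = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Deny", "Action": "ec2:RunInstances", "Resource": "*"}],
}

def lambda_handler(event, context):
    # Two EventBridge schedules invoke this function:
    #   cron(0 8 ? * MON-FRI *)  -> {"mode": "enable"}
    #   cron(0 19 ? * MON-FRI *) -> {"mode": "disable"}
    doc = ALLOW_DOC if event.get("mode") == "enable" else DENY_DOC
    iam.put_group_policy(
        GroupName=GROUP,
        PolicyName=POLICY_NAME,
        PolicyDocument=json.dumps(doc),
    )
```

Using an explicit Deny for the "off" state also overrides any other Allow the contractors might pick up elsewhere, which is safer than simply omitting the permission.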
We are backing up our web servers to S3 daily and using lifecycle rules to move versions to IA and then Glacier. After 30 days, though, we would like to keep only the versions that were created on a Monday, so that we store just one backup from each week. Can this be done with S3 rules, or do I need to write something in Lambda?
I'm not aware of any way to perform specific lifecycle rules based on a day of the week. I think writing a Lambda function to find and delete any file older than 30 days and not created on a Monday, and then scheduling that to run once a day, is a good way to accomplish this.
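A minimal sketch of that Lambda, assuming boto3 and placeholder bucket/prefix names (in Python, weekday() == 0 means Monday):

```python
import boto3
from datetime import datetime, timedelta, timezone

BUCKET = "my-webserver-backups"  # placeholder
PREFIX = "daily/"                # placeholder

def lambda_handler(event, context):
    s3 = boto3.client("s3")
    cutoff = datetime.now(timezone.utc) - timedelta(days=30)
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
        for obj in page.get("Contents", []):
            # Delete anything older than 30 days that wasn't created on a Monday
            if obj["LastModified"] < cutoff and obj["LastModified"].weekday() != 0:
                s3.delete_object(Bucket=BUCKET, Key=obj["Key"])
```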
Their API reference says the start date should be no more than 14 days before the current date. I would like to know whether data older than this is deleted and no longer available.
Metrics used to be kept for 2 weeks, but as #sfgeorge points out, AWS has increased storage times.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-cloudwatch.html
When you use the mon-put-data command, you must use a date range within the past two weeks. There is currently no function to delete data points. Amazon CloudWatch automatically deletes data points with a timestamp more than two weeks old.
As of November 1st, 2016, the retention window for AWS metrics in CloudWatch has expanded from 14 days to 15 months.
Note that the data granularity will be reduced when you widen your range beyond the past 15 days:
One minute data points are available for 15 days.
Five minute data points are available for 63 days.
One hour data points are available for 455 days (15 months).
As found in https://aws.amazon.com/ec2/faqs/ :
Q: Will I lose the metrics data if I disable monitoring for an Amazon EC2 instance?
You can retrieve metrics data for any Amazon EC2 instance up to 2 weeks from the time you started to monitor it. After 2 weeks, metrics data for an Amazon EC2 instance will not be available if monitoring was disabled for that Amazon EC2 instance. If you want to archive metrics beyond 2 weeks you can do so by calling mon-get-stats command from the command line and storing the results in Amazon S3 or Amazon SimpleDB
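A minimal sketch of that archiving idea, using boto3's get_metric_statistics in place of the legacy mon-get-stats CLI (the instance ID, bucket, and metric choice are assumptions):

```python
import json
import boto3
from datetime import datetime, timedelta, timezone

cw = boto3.client("cloudwatch")
s3 = boto3.client("s3")

end = datetime.now(timezone.utc)
start = end - timedelta(days=14)

# Pull two weeks of CPU data at 5-minute granularity
stats = cw.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    StartTime=start,
    EndTime=end,
    Period=300,
    Statistics=["Average"],
)

# Archive the datapoints to S3 before CloudWatch ages them out
s3.put_object(
    Bucket="my-metrics-archive",  # placeholder
    Key=f"cpu/{end:%Y-%m-%d}.json",
    Body=json.dumps(stats["Datapoints"], default=str),
)
```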