Our AWS costs are increasing at a steady rate each month. Looking into it, I found that none of our backups are transitioning to Cold Storage, even though every plan has a transition period set and the retention in cold storage is configured way past the required 90 days.
I have read the documentation and cannot see where I am going wrong. Any ideas?
Here is what is in the vault; every snapshot taken says the same.
It turns out I was trying to transition AMIs, when only EFS is supported.
Each lifecycle rule contains an array of transition objects specifying how long in days before a recovery point transitions to cold storage, or is deleted. As of now, the transition to cold storage is ignored for all resources except for Amazon EFS.
From here: https://aws.amazon.com/blogs/storage/automating-backups-and-optimizing-backup-costs-for-amazon-efs-using-aws-backup/
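For reference, the lifecycle in a backup plan rule has this shape, and AWS requires that recovery points stay in cold storage for at least 90 days, i.e. `DeleteAfterDays` must exceed `MoveToColdStorageAfterDays` by 90 or more. A minimal sketch in Python (the rule and vault names are made up):

```python
# Sketch of an AWS Backup plan rule lifecycle (names are illustrative).
# Per the quoted documentation, the cold-storage transition is honored
# only for EFS recovery points; for other resource types (e.g. AMIs)
# it is silently ignored.

def validate_lifecycle(lifecycle):
    """Check the documented constraint: retention in cold storage must
    be at least 90 days, i.e. DeleteAfterDays must exceed
    MoveToColdStorageAfterDays by 90 or more."""
    move = lifecycle["MoveToColdStorageAfterDays"]
    delete = lifecycle["DeleteAfterDays"]
    return delete - move >= 90

rule = {
    "RuleName": "DailyWithColdStorage",   # illustrative name
    "TargetBackupVaultName": "my-vault",  # illustrative name
    "Lifecycle": {
        "MoveToColdStorageAfterDays": 30,
        "DeleteAfterDays": 365,
    },
}

print(validate_lifecycle(rule["Lifecycle"]))  # True: 365 - 30 >= 90
```

Even when this constraint is satisfied, non-EFS snapshots will simply stay in warm storage until deletion.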
I have been trying to contact AWS and look for information in their own knowledge articles but haven't been successful.
I'm trying to figure out how the billing works for AWS Backup.
Let's say I have a 100 GB bucket and I back it up daily with a retention of 31 days in region eu-central-1.
I then also create a copy job that moves the backup to a secondary vault in the region eu-north-1.
On the 1st day I pay the full price for copying 100 GB from eu-central-1 to eu-north-1.
On the 2nd day I have added 10 GB of data and made some modifications to existing files.
Will my copy job on the 2nd day be billed for a transfer of 110 GB to eu-north-1, or only the delta (the 10 GB plus the changes)?
Billing in AWS is complex. Everything costs money; even things you think probably won't, probably will. In this example you are likely going to be charged ingress/egress charges, backup storage charges for the services you use, and charges for all the supporting services.
Take a look at https://calculator.aws/ for an idea of what things cost. It's often easiest to get a rough guide from the pricing pages and this calculator, then turn something on and keep a close eye on it in the early days to make sure the cost matches your expectation.
For finer-grained control over billing, tag your resources so you can break down costs against your financial metrics and keep track more easily.
Not much of a specific answer, but hope that helps to get you going in the right direction.
So, to answer my own question. The copy job on the second day will only be billed for the extra data and not the full size.
I used AWS support as well as my own tests to confirm the result.
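Based on that result, the cross-region copy cost can be modeled as the full backup size on the first copy, and only the changed data on each subsequent incremental copy. A rough sketch (note: modified blocks also count toward the delta; this toy model tracks only size growth):

```python
# Model of incremental copy-job billing between regions, per the
# confirmed answer above: the first copy transfers the full backup,
# later copies transfer only the delta since the previous copy.

def billed_transfer_gb(daily_totals):
    """Given the total backup size per day (GB), return the GB billed
    for each day's copy job: full size on day 1, delta afterwards."""
    billed = []
    previous = 0
    for total in daily_totals:
        billed.append(total if not billed else total - previous)
        previous = total
    return billed

# Day 1: 100 GB full backup; day 2: 10 GB of data added.
print(billed_transfer_gb([100, 110]))  # [100, 10]
```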
I created a Google Cloud SQL instance to evaluate the service. I created one database, and after my trial expired my SQL instance was suspended, even though I have since upgraded my account. Is there any way to recover my database?
Best regards,
Mazhar
I recommend looking over the documentation, as it depends on the resource and how long it took you to upgrade your account. If it's been longer than 30 days, reaching out to support would be your best option, but there's no guarantee they can recover it.
End of Trial
Any data you stored in Compute Engine is marked for deletion and might be lost. Learn more about data deletion on Google Cloud.
Your Cloud Billing account enters a 30-day grace period, during which you can recover resources and data you stored in any Google Cloud services during the trial period.
Deletion Timeline
Google Cloud Platform commits to delete Customer Data within a maximum period of about six months (180 days). This commitment incorporates the stages of Google's deletion pipeline described above, including:
Stage 2 - Once the deletion request is made, data is typically marked for deletion immediately and our goal is to perform this step within a maximum period of 24 hours. After the data is marked for deletion, an internal recovery period of up to 30 days may apply depending on the service or deletion request.
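To make the quoted timeline concrete, here is a small sketch computing the recovery and deletion windows from a hypothetical trial-end date (the date is illustrative):

```python
from datetime import date, timedelta

# Illustrative timeline from the quoted GCP policy: a 30-day grace
# period for recovery, and deletion committed within about 180 days.
GRACE_PERIOD_DAYS = 30
MAX_DELETION_DAYS = 180

def recovery_window(trial_end):
    """Return (last day of the grace period, latest deletion date)."""
    return (trial_end + timedelta(days=GRACE_PERIOD_DAYS),
            trial_end + timedelta(days=MAX_DELETION_DAYS))

grace_end, deletion_by = recovery_window(date(2021, 1, 1))
print(grace_end)    # 2021-01-31
print(deletion_by)  # 2021-06-30
```

So if the upgrade happened within the grace period, recovery should be possible; after it, you are relying on the internal recovery window and support's goodwill.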
I am exploring an AWS EBS snapshot policy to minimize data loss when a server failure occurs. I am thinking of an hourly snapshot policy with 7 days of retention. It would serve the purpose of minimizing data loss, but it will flood the AWS snapshot console, which may lead to mistakes in the future. To prevent this, I am exploring a way to merge the hourly backups together daily.
Scenario
An hourly snapshot policy with 7 days of retention means 24 snapshots per day until the end of the week, i.e. 168 snapshots for a server, with 1 merged snapshot created at the end of the week.
What I am exploring
An hourly snapshot policy with 7 days of retention and 1-day merging means it will create snapshots hourly until the end of the day and then merge them into a single snapshot, so I will have one snapshot for the day rather than 24.
I explored the AWS documentation, but it doesn't help. Any help would be appreciated.
If you delete any of the snapshots in between, you will find that AWS automatically performs this merge to ensure there is no data missing between snapshots.
Deleting a snapshot might not reduce your organization's data storage costs. Other snapshots might reference that snapshot's data, and referenced data is always preserved. If you delete a snapshot containing data being used by a later snapshot, costs associated with the referenced data are allocated to the later snapshot.
If you delete any snapshots (including the first) the data will be merged with the next snapshot that was taken.
Therefore you can relax and adjust the policies as required, without the risk of data loss.
More details are available in the how incremental snapshots work documentation.
I like to think of an Amazon EBS Snapshot as consisting of two items:
Individual backups of each 'block' on the disk
An 'index' of all the blocks on the disk and where their backup is stored
When an EBS Snapshot is created, a backup is made of any blocks that are not already backed up. An index is also made that lists all the blocks in that "backup".
For example, let's say that an EBS Volume has Snapshot #1 and then one block is modified on the disk. If another Snapshot (#2) is created, only one block will be backed-up, but the Snapshot index will point to all the blocks in the backup.
If the Snapshot #1 is then deleted, all the blocks will be retained for Snapshot #2 automatically. Thus, there is no need to "merge" snapshots -- this is all done automatically.
Bottom line: You can delete any snapshots you want. The blocks required to restore all remaining Snapshots will be retained.
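The block/index model above can be sketched in a few lines (a toy model, not the actual EBS implementation):

```python
# Toy model of incremental EBS snapshots: each snapshot is an index
# mapping block number -> backup id, and a block's backup is retained
# as long as ANY remaining snapshot's index references it.

def live_backups(snapshots):
    """Union of backup ids referenced by the remaining snapshot indexes."""
    return {backup for index in snapshots.values() for backup in index.values()}

# Snapshot #1 backs up blocks 0-2; then block 1 changes, and
# snapshot #2 stores only the new version of block 1 while its
# index still points at the unchanged blocks from snapshot #1.
snapshots = {
    1: {0: "b0", 1: "b1", 2: "b2"},
    2: {0: "b0", 1: "b1-v2", 2: "b2"},
}

del snapshots[1]  # delete snapshot #1
print(sorted(live_backups(snapshots)))  # ['b0', 'b1-v2', 'b2'] -- #2 still restorable
```

Deleting snapshot #1 drops nothing that snapshot #2 still references, which is exactly why no explicit "merge" step is needed.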
I'm learning about AWS for a subject at university.
About 20 days ago I started learning about Elasticsearch because I need queries that DynamoDB can't do.
I'm trying to use only the Free Tier. I created some domains, put data in through Lambda (about 100 KiB) and then deleted it.
Then I checked the billing and realized that 4.9 GB had been used for EBS storage. The Free Tier provides 10 GB per month, but the problem is that I don't know how I used all that storage, or whether there is a way to limit it, because I don't want to exceed the usage limits.
I will be grateful for any kind of explanation or advice on not exceeding the limit.
I'm not aware of a preventive step that can restrict your billing.
However, using a CloudWatch billing alarm, you'll be notified immediately as soon as spending breaches your billing threshold.
Please have a look here for the detailed AWS documentation.
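For reference, a billing alarm watches the `EstimatedCharges` metric in the `AWS/Billing` namespace (available in us-east-1 once billing alerts are enabled). A sketch of the parameters you would pass to CloudWatch's `put_metric_alarm`; the alarm name, threshold, and SNS topic ARN are placeholders:

```python
# Parameters for a CloudWatch billing alarm (sketch; the alarm name,
# threshold, and SNS topic ARN are placeholders). With boto3 you would
# then call:
#   boto3.client("cloudwatch", region_name="us-east-1").put_metric_alarm(**alarm)
alarm = {
    "AlarmName": "monthly-billing-alarm",               # placeholder
    "Namespace": "AWS/Billing",
    "MetricName": "EstimatedCharges",
    "Dimensions": [{"Name": "Currency", "Value": "USD"}],
    "Statistic": "Maximum",
    "Period": 21600,          # estimated charges update a few times a day
    "EvaluationPeriods": 1,
    "Threshold": 5.0,         # alert once estimated charges exceed $5
    "ComparisonOperator": "GreaterThanThreshold",
    "AlarmActions": ["arn:aws:sns:us-east-1:123456789012:billing-alerts"],  # placeholder
}
print(alarm["MetricName"])
```

The alarm only notifies you after the threshold is crossed; it does not stop resources from accruing charges.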
What actually triggers an automatic incremental backup/snapshot for Amazon Redshift? Is it time-based? The site says it "periodically takes snapshots and tracks incremental changes to the cluster since the last snapshot", and I know a snapshot is taken whenever I modify the cluster itself (delete it, resize it, or change the node type). But what about when a database on the cluster is altered? I have inserted, loaded, and deleted many rows, but no automatic snapshot is taken. Would I just have to do manual backups then?
I have asked around and looked online, and no one has been able to give me an answer. I am trying to figure out an optimal backup strategy for my workload.
Automated backups are taken every 8 hours or every 5 GB of inserted data, whichever happens first.
Source: I work for AWS Redshift.
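The two triggers in that answer ("every 8 hours or every 5 GB of inserted data, whichever happens first") can be expressed directly; a sketch:

```python
# Redshift automated-snapshot triggers per the answer above: a snapshot
# is taken every 8 hours OR after 5 GB of inserted data, whichever
# threshold is crossed first.
SNAPSHOT_INTERVAL_HOURS = 8
SNAPSHOT_DATA_THRESHOLD_GB = 5

def snapshot_due(hours_since_last, gb_inserted_since_last):
    """True if either trigger has fired since the last automated snapshot."""
    return (hours_since_last >= SNAPSHOT_INTERVAL_HOURS
            or gb_inserted_since_last >= SNAPSHOT_DATA_THRESHOLD_GB)

print(snapshot_due(2, 6))   # True  -- 5 GB threshold crossed
print(snapshot_due(9, 0))   # True  -- 8 hours elapsed
print(snapshot_due(3, 1))   # False -- neither trigger has fired
```

This would explain the observation in the question: inserting or deleting rows triggers a snapshot only once the cumulative inserted data crosses the threshold, not on every change.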