I tried to change the automated-backup retention count with the gcloud command below.
gcloud alloydb clusters update XXXXXXXX \
--automated-backup-days-of-week="MONDAY,TUESDAY,WEDNESDAY,THURSDAY,FRIDAY,SATURDAY,SUNDAY" \
--automated-backup-start-times="18:00,19:00,20:00,21:00,22:00" \
--automated-backup-retention-count=2 \
--region="XXXXXXXX" \
--project="XXXXXXXX"
I expected only 2 backups to be kept, but none of the existing backups were deleted.
By the way, I was able to change the backup start times, so the gcloud command itself is working.
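For reference, the policy actually applied to the cluster can be inspected with a describe call; a minimal sketch (same placeholder cluster, region, and project as above), assuming the backup settings are exposed on the cluster resource as automatedBackupPolicy:
# Check what automated backup policy the cluster currently has.
gcloud alloydb clusters describe XXXXXXXX \
--region="XXXXXXXX" \
--project="XXXXXXXX" \
--format="yaml(automatedBackupPolicy)"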
I read the documentation for the gcloud command below.
https://cloud.google.com/sdk/gcloud/reference/alloydb/clusters/update
My results are below.
12/27 Five backups remain.
12/28 Five backups remain.
12/29 Five backups remain.
12/30 Five backups remain.
12/31 Five backups remain.
1/1 Five backups remain.
1/2 Five backups remain.
1/3 Five backups remain.
Huge apologies, but you've hit a (now) known bug. We're tracking a fix and expect it shortly, but the issue is with the count-limited backups. The time-limited backups still work.
There are actually two issues currently. The first is that the count flag doesn't reduce the number of backups. The second is that garbage collection is also not working as intended currently.
Backups older than their retention period might still appear when you view your project’s backups. Expired backups don't incur storage costs, but they are subject to automatic deletion. If you need to delete backups before the system deletes them, you can manually delete your backups at any time.
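If you want to clear out the extra backups yourself in the meantime, a minimal sketch with gcloud (region, project, and the backup name below are placeholders) would be:
# List the cluster's backups, then delete the ones that should no longer be retained.
gcloud alloydb backups list --region="XXXXXXXX" --project="XXXXXXXX"
gcloud alloydb backups delete BACKUP_NAME \
--region="XXXXXXXX" \
--project="XXXXXXXX"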
In order to ensure that you’re not incorrectly charged during this period, we will issue billing corrections for your impacted projects to negate or refund backup charges. If you see charges that aren't corrected, please reach out to support and we can get it sorted out.
Related
Is there an option to take a full snapshot using the ES snapshot API? We would like to take a full snapshot every 3 days.
You can refer to the following document: https://docs.aws.amazon.com/opensearch-service/latest/developerguide/managedomains-snapshots.html
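As a rough sketch of the snapshot API calls themselves (assuming an S3 snapshot repository has already been registered as described in that document; the domain endpoint and repository name below are placeholders, and on Amazon OpenSearch Service the requests may need to be signed or authenticated):
# Take a manual snapshot of all indexes in the registered repository.
curl -XPUT "https://your-domain-endpoint/_snapshot/my-s3-repo/full-snapshot-$(date +%Y-%m-%d)"
# Check the status of existing snapshots.
curl -XGET "https://your-domain-endpoint/_snapshot/my-s3-repo/_all?pretty"
# Run the PUT above from cron (or a scheduled Lambda) every 3 days for the desired cadence.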
I did something similar at my previous company, where a Lambda function triggered the backup every week via cron and a full backup of all documents and indexes was taken. I did try a restore once; it failed the first time, but the second time it worked. The issue was that the instance was too small and a bigger one was needed to restore the data, so please check those settings as well.
I've recently deleted around 90 million objects (around 100TB of data) from a "Nearline" GCS bucket, and now that I have an almost-empty bucket it takes >5 seconds to list the single remaining file. Standard buckets of ours that have only a dozen files take ~1s to list.
This occurs consistently from both gsutil as well as Go-based tooling that we've written. This has been tested from multiple VMs ranging in sizes within GCP from the same region as the buckets. All buckets are single-region, the only difference is that the slower one is Nearline, and the others are Standard. Is it really possible that simply listing the files in a bucket takes more than 5 seconds on Nearline?
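For reference, the comparison is as simple as timing a listing on each bucket (placeholder bucket names):
# Compare listing latency between the Nearline bucket and a Standard one.
time gsutil ls gs://my-nearline-bucket
time gsutil ls gs://my-standard-bucket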
Since this smells like a garbage collection/vacuum-related slowdown and we've been using it for almost 5 years now I'm inclined to simply delete the bucket and recreate it, but it'd be good to know if anyone has done an accurate characterization of GCP bucket performance with high churn over time.
In the 24 hours since I posted this, the performance of this bucket has returned to what I'd consider normal: gsutil ls takes ~1.2s, and my custom Go code for listing the bucket takes ~0.15s.
By simply waiting and trying again I've answered my own question: yes, the database of bucket keys does seem to have variable (reduced) performance depending on content churn, but it's something that resolves itself relatively quickly.
Two days ago I deleted a bucket that contained a backup of all log files for a site. It contained about 30,000 tiny files and about 275 MB of space.
I noticed in the site's Monitoring panel that the object count is exactly the same. I decided to wait a couple of days, and it still has not changed.
The bucket uses standard storage class, multi-region location, and has no lifecycle rules with uniform permissions.
I can verify that the bucket is gone in the UI as well as using the ls command in cloud shell.
The count of objects in the Monitoring panel reconciled about two days later.
Looks like the change ended up being retroactive, meaning the charts in the past were re-written to reflect the objects being deleted.
I have a bucket in S3 (Infrequent Access) containing 2 billion objects. It is too big to delete in the console or over the API without taking years.
I can create a lifecycle rule to expire and delete the objects but the calculator predicts this will cost me >$20,000. Is that correct? Is there a better way to delete a bucket?
I have a file effectively containing a list of all the objects in that bucket if that helps.
Update 2021:
An answer below from #MAP points out that there is now an "Empty" button. I haven't tested it yet, but it looks like the way to go (I'll accept that answer once tested):
If you have a list of all the objects available, then you can certainly use the Multi-Object Delete action. Apparently this API is free. I would create an AWS Step Functions state machine to loop through the file and delete 1,000 objects at a time; 1,000 appears to be the limit.
It will take around 2M Step Functions state transitions to delete all the objects in the bucket. Per Step Functions pricing, that will cost you around $50, plus around $1 for the Lambda invocations, so the total cost is roughly $51.
Update
Using Lambda or Step Functions is probably not the most cost-effective option, because either way you will need to read the file (that contains the object keys) from some source such as S3. So I think running the script from a local machine or in a screen session on any EC2 Linux instance appears to be the best option.
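A minimal sketch of such a script (assuming the key list is a plain-text file with one key per line, and that jq and the AWS CLI are installed; bucket and file names are placeholders):
# Split the key list into chunks of 1000 (the Multi-Object Delete limit),
# build the JSON payload for each chunk, and call delete-objects.
split -l 1000 keys.txt batch_
for f in batch_*; do
  jq -R -s '{Objects: [split("\n")[] | select(length > 0) | {Key: .}], Quiet: true}' "$f" > payload.json
  aws s3api delete-objects --bucket my-bucket --delete file://payload.json
done
A versioned bucket would also need a VersionId per object in the payload, but for plain keys this keeps everything inside the DeleteObjects API.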
In 2021, anyone who comes across this question may benefit from knowing that the AWS console now provides an "Empty" button.
Select the bucket and click the "Empty" button, and all objects, versioned or not, will be emptied/deleted. Depending on the number of objects, it can take minutes to days.
Expiration lifecycle rules are free. From the original feature announcement:
As with standard delete requests, Amazon S3 doesn’t charge you for using Object Expiration.
Delete operations are free. You can create a lifecycle policy to automate a bulk delete.
I would start with a small number of objects first and check the billing report to confirm 100% that the delete is not charged, then go for the rest.
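A minimal sketch of such a lifecycle policy via the CLI (bucket name is a placeholder; the expiration Days value must be at least 1, and a versioned bucket would also need noncurrent-version expiration rules):
# Write a rule that expires every current object one day after creation.
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "expire-everything",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Expiration": { "Days": 1 }
    }
  ]
}
EOF
# Apply it to the bucket.
aws s3api put-bucket-lifecycle-configuration \
  --bucket my-bucket \
  --lifecycle-configuration file://lifecycle.json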
I want to delete 2TB of files from the GCP bucket.
I have read the GCP documentation for deletion, and it says to use the gsutil -m rm command, but when I run it, the estimated time is 400+ hours.
Is there any faster way to do the deletion process?
For buckets with a very large number of objects, one trick to deleting the contents is to use the Lifecycle Management feature. https://cloud.google.com/storage/docs/lifecycle
Set a lifecycle rule that triggers when the object is 0 days old and an action of "Delete", and that should cause GCS to begin deleting your objects for you. Note that this may still take a while, as lifecycle rules can take up to 24 hours to go into effect, but that's still a lot better than a couple of weeks.
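If you prefer the command line to the console steps below, a minimal sketch with gsutil (bucket name is a placeholder):
# Write a rule that deletes every object aged 0 days or more.
cat > lifecycle.json <<'EOF'
{
  "rule": [
    {
      "action": {"type": "Delete"},
      "condition": {"age": 0}
    }
  ]
}
EOF
# Apply it to the bucket; remove the rule again once the bucket is empty
# (see the note at the end of this answer).
gsutil lifecycle set lifecycle.json gs://my-bucket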
You can configure the lifecycle policy on a bucket from the console:
Head to https://console.cloud.google.com/storage/browser
Find the bucket you want to enable, and click None in the Lifecycle column.
Click Add rule.
Select the condition (object is 0 days old).
Select an action (Delete the object)
Click continue.
Click save.
See https://cloud.google.com/storage/docs/managing-lifecycles for more instructions.
N.B.: Lifecycle changes can take up to 24 hours to go into effect, so once all of your objects go away and you remove the lifecycle config setting, you should wait an additional 24 hours before putting any new files in the bucket, or else they might also get deleted.