I'm adding a lifecycle rule for a versioned bucket, and I'm confused about this:
Specifically, the "remove expired object delete marker" option. Why would one not want to remove that? If I have an object with just one version, and I delete that object/version, five days later it will be permanently deleted. Then there's a delete marker hanging around pointing to nothing, so what's the point of keeping it?
Amazon S3 will not automatically delete expired object delete markers. The lifecycle policy provides a means to do so.
Deleting the last remaining version of an object that has a delete marker merely deletes that version; it does not trigger S3 to check whether any expired delete markers remain. So, yes, the lifecycle rule is worth implementing if the bucket is versioned and object versions are often deleted.
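As a sketch (the bucket name and rule ID are placeholders), such a rule can be expressed as a lifecycle configuration dict and applied with boto3's `put_bucket_lifecycle_configuration`:

```python
# Lifecycle configuration that removes expired object delete markers.
# The rule ID is illustrative; the dict would be passed to boto3's
# put_bucket_lifecycle_configuration (that call needs AWS credentials).
lifecycle_config = {
    "Rules": [
        {
            "ID": "cleanup-expired-delete-markers",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to the whole bucket
            "Expiration": {"ExpiredObjectDeleteMarker": True},
        }
    ]
}

# Applying it would look like:
# import boto3
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="my-versioned-bucket", LifecycleConfiguration=lifecycle_config
# )
print(lifecycle_config["Rules"][0]["Expiration"])
```

Note that S3 rejects a rule combining ExpiredObjectDeleteMarker with a Days/Date expiration or a tag-based filter, which is why the rule above stays minimal.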
Related
I want protection against accidental deletion on our S3 bucket: when I delete an object, I should be able to restore it for 1 day before it is permanently deleted.
What I tried:
Enable versioning on the bucket
Create a lifecycle rule with "Permanently delete noncurrent versions of objects" and "Delete expired object delete markers or incomplete multipart uploads"
But with this, when I delete an object I can restore it; however, if I don't restore it, it is never permanently deleted.
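For reference, the "1-day soft delete" setup the question describes might look like the following as a boto3-style lifecycle configuration (rule IDs are illustrative, and this is a sketch, not a verified fix for the questioner's bucket):

```python
# Sketch of a 1-day soft-delete scheme: noncurrent versions are
# permanently deleted 1 day after becoming noncurrent, and orphaned
# delete markers are cleaned up afterwards by the second rule.
soft_delete_rules = {
    "Rules": [
        {
            "ID": "expire-noncurrent-after-1-day",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "NoncurrentVersionExpiration": {"NoncurrentDays": 1},
        },
        {
            "ID": "remove-expired-delete-markers",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "Expiration": {"ExpiredObjectDeleteMarker": True},
        },
    ]
}
print(len(soft_delete_rules["Rules"]))
```

With this in place, deleting an object creates a delete marker; the old version stays restorable for roughly a day before the noncurrent-version rule removes it, after which the second rule tidies up the now-expired delete marker.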
Is there a way to trigger a lambda before a bucket is actually deleted (for example, with a stack that it is a part of) or emptied to copy its objects? Maybe something else could be used instead of lambdas?
Deletion of a CloudFormation (CFN) stack with a non-empty bucket will fail, as non-empty buckets can't be deleted, unless you set the bucket's DeletionPolicy to Retain. With Retain, the stack is deleted but the bucket is left in your account. Without it, you have to delete all objects in the bucket before the bucket itself can be deleted.
Either way, you have to delete the objects yourself through a custom Lambda function. There is no out-of-the-box mechanism in CFN or S3 to delete objects when a bucket is deleted. But since this is something you have to develop yourself, you can do whatever you want with the objects before you actually delete them, e.g. copy them to Glacier.
There are a few ways this can be achieved, but probably the most common is a custom resource, similar to the one given in the AWS blog post:
How do I use custom resources with Amazon S3 buckets in AWS CloudFormation?
The resource given in that blog post responds to the CFN Delete event and deletes the objects in the bucket:
b_operator.Bucket(str(the_bucket)).objects.all().delete()
So you would have to modify this custom resource to copy objects before the deletion operation is performed.
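A hedged sketch of that modification (the function name and archive-bucket idea are made up for illustration; a real custom resource handler also has to send a response back to CloudFormation): before calling `.delete()`, iterate the objects and copy each one into an archive bucket.

```python
# Sketch: copy every object to an archive bucket before emptying the
# source bucket. `s3_resource` is expected to behave like a boto3
# s3 resource (boto3.resource("s3")); bucket names are placeholders.
def archive_and_empty(s3_resource, source_bucket_name, archive_bucket_name):
    source = s3_resource.Bucket(source_bucket_name)
    archive = s3_resource.Bucket(archive_bucket_name)
    for obj in source.objects.all():
        # Server-side copy into the archive bucket, keeping the key.
        archive.copy({"Bucket": source_bucket_name, "Key": obj.key}, obj.key)
    # Only after everything is copied, empty the source bucket
    # (this mirrors the blog's objects.all().delete() call).
    source.objects.all().delete()
```

The archive bucket could be one whose lifecycle rule transitions objects to a Glacier storage class, matching the "copy to glacier" idea above.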
I have a bucket on which I have set s3:Delete* to Deny, so that objects can't be deleted from the bucket. However, I want to move some objects to an s3://bucket-name/trash directory and set a lifecycle policy to delete everything in trash after 30 days.
I am not able to move those items, because the Delete Deny policy blocks the move. Is there any way to bypass the Deny so that I can move objects into just that one folder?
Thanks
According to the documentation,
This action creates a copy of all specified objects with updated settings, updates the last-modified date in the specified location, and adds a delete marker to the original object.
Your approach doesn't work because a move is essentially copy + delete. An alternative is to enable bucket versioning and apply a lifecycle policy that expires previous versions after 30 days. Finally, change the permission to deny only s3:DeleteObjectVersion.
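A sketch of such a policy (the bucket name is a placeholder): denying only s3:DeleteObjectVersion blocks permanent deletion of versions, while a plain DELETE, which merely adds a delete marker, stays allowed, so the copy + delete "move" still works.

```python
# Bucket policy denying only permanent deletion of object versions.
# The bucket name is a placeholder; the dict would be serialized with
# json.dumps and passed to s3.put_bucket_policy (needs credentials).
deny_version_delete = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyPermanentVersionDeletion",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:DeleteObjectVersion",
            "Resource": "arn:aws:s3:::bucket-name/*",
        }
    ],
}
print(deny_version_delete["Statement"][0]["Action"])
```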
The bucket policy is not the best place to prevent objects from being deleted. Instead, enable Object Lock at bucket level, then set objects in governance mode, so they can't be deleted by normal operations. When you do need to move them, you can still bypass the protection with a special permission. See: https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lock-managing.html
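A hedged sketch of the governance-mode approach (bucket, key, and date are placeholders): each object gets a retention setting, and a principal holding the s3:BypassGovernanceRetention permission can still delete or move the object by passing the bypass flag.

```python
from datetime import datetime, timezone

# Governance-mode retention settings for one object (placeholders).
# Would be applied with s3.put_object_retention(...); deleting the
# locked version afterwards requires BypassGovernanceRetention=True
# plus the s3:BypassGovernanceRetention permission.
retention = {
    "Mode": "GOVERNANCE",
    "RetainUntilDate": datetime(2030, 1, 1, tzinfo=timezone.utc),
}

# import boto3
# boto3.client("s3").put_object_retention(
#     Bucket="bucket-name", Key="important.txt", Retention=retention
# )
print(retention["Mode"])
```

Governance mode (as opposed to compliance mode) is what makes the "bypass with a special permission" escape hatch possible.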
Versioning is not suitable in my case: when a user deletes anything in storage, it should be deleted permanently, not soft-deleted. However, for backup purposes I have to use Cross-Region Replication, and to use CRR I need to enable versioning.
I tried to delete those objects whose current version is a delete marker and whose noncurrent version is the soft-deleted object. Here is my lifecycle rule: no transitions, no current-version expiration.
I only applied "Permanently delete previous versions", so that the delete marker counts as the current version and the soft-deleted object as the previous version, which should then be permanently deleted. But it is not working!
When I checked the next day, the deleted version was still there. I had also added multiple versions of the same object, and those versions were not deleted either.
If you have only just enabled the rule, S3 might still be queuing the objects that need to be deleted.
Lifecycle policies do not delete objects at the exact moment of expiration; actions are queued and processed asynchronously.
When an object reaches the end of its lifetime, Amazon S3 queues it for removal and removes it asynchronously. There might be a delay between the expiration date and the date at which Amazon S3 removes an object. You are not charged for storage time associated with an object that has expired.
Sometimes when you first configure a rule, processing can be slightly delayed (because nothing has been queued yet), so I would suggest checking back in a couple of days, by which time the process should have run.
More information is available in the Understanding object expiration documentation.
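One way to check whether a rule has matched an object at all: for objects subject to an expiration rule, HeadObject/GetObject on the current version returns an `x-amz-expiration` header (surfaced as `Expiration` in boto3 responses). A small parser for that header, with an illustrative value in the format the S3 documentation shows:

```python
import re

def parse_expiration_header(value):
    """Parse an x-amz-expiration header such as
    expiry-date="Fri, 21 Dec 2012 00:00:00 GMT", rule-id="my-rule"
    into a dict. Returns {} if nothing matches."""
    return dict(re.findall(r'([\w-]+)="([^"]*)"', value))

# Illustrative header value (not from a real bucket):
sample = 'expiry-date="Fri, 21 Dec 2012 00:00:00 GMT", rule-id="my-rule"'
parsed = parse_expiration_header(sample)
print(parsed["rule-id"])
```

If the header is absent on an object you expected to expire, the rule likely never matched it (a filter problem) rather than the deletion simply being queued.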
Does S3 have snapshots? How should I solve a problem where something, for example, deletes all my S3 data? How do I back up?
There are a couple options.
1. Enable versioning on your bucket. Every version of the objects will be retained. Deleting an object will just add a "delete marker" to indicate the object was deleted. You will pay for the storage of all the versions. Note that versions can also be deleted.
2. If you are just worried about deletion, you can add a bucket policy to prevent it. You can also use some of the newer Object Lock hold options.
3. You can use cross region replication to copy the objects to a bucket in a different region and optionally a different account.
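As a minimal sketch of option 1 (the bucket name is a placeholder), versioning is a single per-bucket API call via boto3:

```python
# Versioning is enabled per bucket with one call; the configuration
# dict is the whole payload. Bucket name is a placeholder, and the
# commented call needs AWS credentials to run.
versioning_config = {"Status": "Enabled"}

# import boto3
# boto3.client("s3").put_bucket_versioning(
#     Bucket="my-bucket", VersioningConfiguration=versioning_config
# )
print(versioning_config["Status"])
```

Once enabled, versioning cannot be fully turned off again, only suspended, which is worth knowing before flipping it on.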