Can we set a retention policy on particular folders/objects in a GCS bucket? - google-cloud-platform

I want to delete particular folders/objects after a prescribed time. Can we set a retention policy or object lifecycle rules to do this?
Thanks!

You can set lifecycle rules on a bucket. After a period of time, you can choose to delete the objects that match the rules.
It's not possible at the object or folder level (because folders don't actually exist in GCS; they are just prefixes in object names!)
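For illustration, a minimal sketch using the google-cloud-storage Python client; the bucket name and the 30-day age are placeholders, not values from the question.

```python
# Minimal sketch: attach a delete-after-30-days lifecycle rule to a bucket.
# "my-bucket" and the 30-day age are placeholder values.
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("my-bucket")

# Add a rule that deletes any object older than 30 days, then save the config.
bucket.add_lifecycle_delete_rule(age=30)
bucket.patch()

print(list(bucket.lifecycle_rules))
```

The rule applies to every matching object in the bucket; there is no per-object or per-folder setting.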

Related

How to download a versioned AWS S3 bucket as it was at a specific datetime?

I am currently synchronizing data to AWS S3 for backup purposes (using TrueNAS, if that matters). I have bucket versioning enabled with a lifecycle rule set for cleanup, so I can theoretically fetch the files as they were at a previous date, as long as it's within my retention period.
My question is: how would I go about downloading the whole bucket as it was at a specific moment in time? Is there already a tool available that can handle this use-case?
When Versioning is enabled on an Amazon S3 bucket, every object can have multiple versions. Each version is effectively a separate object with its own LastModified date.
To get "the whole bucket" at a specific point in time, you would need to write code that would:
Loop through every object in the bucket
Retrieve all ObjectVersions for each object
Identify which version was 'current' at your desired point in time
Download that specific object version
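A rough boto3 sketch of that loop (the bucket name, target timestamp, and destination directory are placeholders, and error handling is omitted):

```python
# Sketch: reconstruct a versioned S3 bucket as of a point in time.
# BUCKET, POINT_IN_TIME, and DEST_DIR are placeholder values.
from datetime import datetime, timezone
import os
import boto3

BUCKET = "my-backup-bucket"
POINT_IN_TIME = datetime(2023, 6, 1, tzinfo=timezone.utc)
DEST_DIR = "restore"

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_object_versions")

# For each key, find the newest version (or delete marker) at or before the target time.
latest = {}  # key -> (last_modified, version_id, is_delete_marker)
for page in paginator.paginate(Bucket=BUCKET):
    for v in page.get("Versions", []):
        if v["LastModified"] <= POINT_IN_TIME:
            cur = latest.get(v["Key"])
            if cur is None or v["LastModified"] > cur[0]:
                latest[v["Key"]] = (v["LastModified"], v["VersionId"], False)
    for m in page.get("DeleteMarkers", []):
        if m["LastModified"] <= POINT_IN_TIME:
            cur = latest.get(m["Key"])
            if cur is None or m["LastModified"] > cur[0]:
                latest[m["Key"]] = (m["LastModified"], m["VersionId"], True)

# Download every key whose "current" version at that time was not a delete marker.
for key, (_, version_id, is_delete_marker) in latest.items():
    if is_delete_marker:
        continue  # the object counted as deleted at that point in time
    dest = os.path.join(DEST_DIR, key)
    os.makedirs(os.path.dirname(dest) or ".", exist_ok=True)
    s3.download_file(BUCKET, key, dest, ExtraArgs={"VersionId": version_id})
```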

AWS S3 not allowing folders to be renamed/moved after Delete:Deny is set as bucket policy

I have a bucket in which I have set s3:Delete* to Deny, so that objects don't get deleted from the bucket. However, I want to move some objects to an s3://bucket-name/trash directory and set a lifecycle policy to delete all the items in trash after 30 days.
I am not able to move those items, because the Delete Deny policy overrides it. Is there any solution that would let me bypass the Delete Deny policy so that I can move objects into just that one folder?
Thanks
According to the documentation,
This action creates a copy of all specified objects with updated settings, updates the last-modified date in the specified location, and adds a delete marker to the original object.
Your approach doesn't work because a move is essentially a copy followed by a delete. An alternative is to enable bucket versioning and apply a lifecycle policy that expires noncurrent (previous) versions after 30 days. Finally, change the bucket policy to deny only s3:DeleteObjectVersion, so ordinary deletes just add a delete marker instead of removing data.
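A rough boto3 sketch of that alternative (the bucket name is taken from the question; the 30-day window, rule ID, and broad Principal are placeholders to adapt):

```python
# Sketch only: enable versioning, expire noncurrent versions after 30 days,
# and deny only s3:DeleteObjectVersion in the bucket policy.
import json
import boto3

BUCKET = "bucket-name"  # placeholder from the question
s3 = boto3.client("s3")

# Keep old versions around instead of blocking deletes outright.
s3.put_bucket_versioning(
    Bucket=BUCKET,
    VersioningConfiguration={"Status": "Enabled"},
)

# Lifecycle rule: permanently remove versions 30 days after they stop being current.
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-noncurrent-after-30-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # whole bucket
                "NoncurrentVersionExpiration": {"NoncurrentDays": 30},
            }
        ]
    },
)

# Bucket policy: deny deletion of specific object versions. A plain delete
# (the second half of a move) only adds a delete marker on a versioned bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyVersionDeletion",
            "Effect": "Deny",
            "Principal": "*",  # placeholder; scope this to real principals
            "Action": "s3:DeleteObjectVersion",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
        }
    ],
}
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```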
A bucket policy is not the best place to prevent objects from being deleted. Instead, enable Object Lock on the bucket and put objects in governance mode, so they can't be deleted by normal operations. When you do need to move them, you can still bypass the protection with the s3:BypassGovernanceRetention permission. See: https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lock-managing.html
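A hedged boto3 sketch of governance-mode retention, assuming Object Lock is already enabled on the bucket; the bucket name, key, and retention date are placeholders:

```python
# Sketch only: place an object under governance-mode retention, then later
# remove it with the bypass flag. Assumes Object Lock is enabled on the bucket.
from datetime import datetime, timezone
import boto3

s3 = boto3.client("s3")
BUCKET, KEY = "my-locked-bucket", "important/report.csv"  # placeholders

# Protect the object until the given date; normal deletes/overwrites will fail.
s3.put_object_retention(
    Bucket=BUCKET,
    Key=KEY,
    Retention={
        "Mode": "GOVERNANCE",
        "RetainUntilDate": datetime(2025, 1, 1, tzinfo=timezone.utc),
    },
)

# A principal with s3:BypassGovernanceRetention can still delete it when needed,
# e.g. as the "delete" half of a move to the trash prefix.
s3.delete_object(Bucket=BUCKET, Key=KEY, BypassGovernanceRetention=True)
```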

How do I back up S3, or is it possible to back up S3?

Does S3 have snapshots? How should I handle the case where something deletes, for example, all my S3 data? How do I back it up?
There are a couple of options.
1. Enable versioning on your bucket. Every version of the objects will be retained. Deleting an object will just add a "delete marker" to indicate the object was deleted. You will pay for the storage of all the versions. Note that versions can also be deleted.
2. If you are just worried about deletion, you can add a bucket policy to prevent it. You can also use the newer Object Lock hold options.
3. You can use cross-region replication to copy the objects to a bucket in a different region and, optionally, a different account (a sketch follows below).
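A hedged boto3 sketch of option 3; the bucket names and IAM role ARN are placeholders, and both buckets must already have versioning enabled:

```python
# Sketch of cross-region replication setup. Assumes versioning is already
# enabled on both buckets; names and the IAM role ARN are placeholders.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_replication(
    Bucket="source-bucket",
    ReplicationConfiguration={
        # Role that S3 assumes to replicate objects on your behalf.
        "Role": "arn:aws:iam::111122223333:role/s3-replication-role",
        "Rules": [
            {
                "ID": "replicate-everything",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {"Prefix": ""},  # whole bucket
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {
                    "Bucket": "arn:aws:s3:::backup-bucket-in-other-region",
                },
            }
        ],
    },
)
```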

Move least frequently accessed S3 buckets to Glacier automatically

Is there any way to move less frequently accessed S3 buckets to Glacier automatically? I mean to say, is there some option or service that searches S3 by last access date and then assigns a lifecycle policy to the buckets, so they can be moved to Glacier? Or do I have to write a program to do this? If this is not possible, is there any way to assign a lifecycle policy to all the buckets at once?
Looking for some feedback. Thank you.
No, this isn't possible as a ready-made feature. However, there is something that might help: Amazon S3 Analytics.
This produces a report of which items in your buckets are less frequently used. This information can be used to find items that should be archived.
It could be possible to use the S3 Analytics output as input for a script that tags items for archiving. However, this complete feature (find infrequently used items and then archive them) doesn't seem to be available as a standard product.
You can do this by adding a tag or prefix to your objects.
Create a lifecycle rule that targets that tag or prefix to group those objects together and apply a single lifecycle policy (a sketch follows the link below).
https://docs.aws.amazon.com/AmazonS3/latest/user-guide/create-lifecycle.html
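For example, a minimal boto3 sketch that transitions objects under an archive/ prefix carrying an archive tag to Glacier after 90 days; the bucket name, prefix, tag, and 90-day threshold are placeholders:

```python
# Sketch: lifecycle rule that transitions tagged/prefixed objects to Glacier.
# The bucket name, tag, prefix, and 90-day threshold are placeholders.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="my-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-tagged-objects",
                "Status": "Enabled",
                # Match objects under the prefix AND carrying the tag.
                "Filter": {
                    "And": {
                        "Prefix": "archive/",
                        "Tags": [{"Key": "archive", "Value": "true"}],
                    }
                },
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            }
        ]
    },
)
```

To cover every bucket, the same configuration could be applied in a loop over s3.list_buckets(), since lifecycle configurations are set per bucket.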

Fine grain control of AWS S3 object expiration

I have a task to control the lifecycle of particular objects in an S3 bucket. E.g., most of the objects should expire and be deleted according to the lifecycle policy, but for some objects I want the expiration never to happen. In Amazon SQS it is possible to control the lifecycle parameters of each single message, but I can't find such a feature in the docs for S3. Is it possible?
No, it isn't. Lifecycle policies apply to all the objects in the bucket, or to all the objects with a matching prefix. You'd need to set the policy on a specific key prefix, store the objects you want the policy to match under that prefix, and store the other objects under a different prefix. That's the closest thing available, and it's not really all that close.
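A sketch of that prefix workaround with boto3; the bucket name, prefixes, and the 30-day expiration are placeholders:

```python
# Sketch: objects stored under "expiring/" are deleted after 30 days;
# anything under a different prefix is left alone.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="my-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-temporary-objects",
                "Status": "Enabled",
                "Filter": {"Prefix": "expiring/"},
                "Expiration": {"Days": 30},
            }
        ]
    },
)

# Objects you never want expired simply go under another prefix:
s3.put_object(Bucket="my-bucket", Key="keep-forever/config.json", Body=b"{}")
```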