When a service instance deletion call is initiated, the Object Store service deletes the service instance resources, which include the AWS S3 bucket(s), and hence the objects in the bucket are also deleted. Since there is no backup, or a backup would be very costly, is there any way I can avoid this accidental deletion of important data?
We can use the parameter 'preventDeletion', which, if enabled, won't let the data be deleted.
I have a number of "Glacier Deep Archive" class objects at the root level of my Amazon S3 bucket.
As the number of objects grows, I've added some top-level folders to the same bucket that I'd like to move the other objects into for organizational reasons. While I can add new objects to these folders, I've noticed that the "Move" action is grayed out when I have existing objects selected.
Is there a way that I can move these glacier objects into the other folders in the same bucket? (I'm using the Amazon AWS S3 web console interface.)
Objects cannot be 'moved' in Amazon S3; a 'move' actually involves performing a copy and then a delete.
The S3 management console is unable to move/copy an object with a Glacier storage class because the data is not immediately available. Instead, you should:
Restore the object (Charges might apply)
Once restored, perform the move/copy
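For the restore step, the request can also be issued programmatically. Here is a minimal boto3 sketch; the bucket name, key, number of days, and retrieval tier are placeholders/assumptions, not values from the question:

import boto3

s3 = boto3.client("s3")

# Ask S3 to make a temporary, readable copy of the archived object.
s3.restore_object(
    Bucket="my-bucket",      # placeholder bucket name
    Key="archived-object",   # placeholder key
    RestoreRequest={
        "Days": 7,  # how long the restored copy stays available
        # Bulk retrieval from Deep Archive can take up to ~48 hours
        "GlacierJobParameters": {"Tier": "Bulk"},
    },
)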
You have to first restore the objects and wait around 48 hours for the process to complete (you can do that directly from the management console). Once it is done, you should see the download button enabled in the console and a countdown of the number of days you set them to be available for.
Then you can move them using the AWS CLI with:
aws s3 mv "s3://SOURCE" "s3://DEST" --storage-class DEEP_ARCHIVE --force-glacier-transfer
I don't think it's possible to move them directly from the management console, even after the restoration.
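If you prefer to script the wait before running the aws s3 mv command above, a rough boto3 sketch (bucket and key names are placeholders) is to poll the object's restore status:

import boto3

s3 = boto3.client("s3")

# The "Restore" field appears once a restore has been requested, e.g.
# 'ongoing-request="false", expiry-date="..."' when the copy is ready.
head = s3.head_object(Bucket="my-bucket", Key="archived-object")
if 'ongoing-request="false"' in head.get("Restore", ""):
    print("Restore finished - safe to run the aws s3 mv command")
else:
    print("Still restoring (or no restore requested yet)")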
Is there a way to trigger a Lambda before a bucket is actually deleted (for example, along with a stack it is part of) or emptied, so that its objects can be copied? Maybe something else could be used instead of Lambdas?
Deletion of a CloudFormation (CFN) stack with a non-empty bucket will fail, as non-empty buckets can't be deleted, unless you set the bucket's DeletionPolicy to Retain. With Retain, the stack is deleted but the bucket is left in your account. Without Retain, you have to delete all objects in the bucket before the bucket can be deleted.
Either way, you have to delete the objects yourself, e.g. through a custom Lambda function. There is no out-of-the-box mechanism in CFN or S3 to delete objects when a bucket is deleted. But since this is something you have to develop yourself, you can do whatever you want with these objects before you actually delete them, e.g. copy them to Glacier.
There are a few ways in which this can be achieved, but probably the most common is through a custom resource, similar to the one given in the AWS blog post:
How do I use custom resources with Amazon S3 buckets in AWS CloudFormation?
The resource given in that post responds to the Delete event in CFN and deletes the objects in the bucket:
b_operator.Bucket(str(the_bucket)).objects.all().delete()
So you would have to modify this custom resource to copy objects before the deletion operation is performed.
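As a rough illustration of that modification (not the blog's actual code), the Delete branch could copy everything to a separate archive bucket before emptying the source. The helper name, bucket names, and the GLACIER storage class below are assumptions:

import boto3

s3 = boto3.resource("s3")

def archive_then_empty(source_bucket, archive_bucket):
    # Hypothetical helper for the custom resource's Delete branch:
    # copy every object to a separate archive bucket, then empty the
    # source so CloudFormation can delete it.
    for obj in s3.Bucket(source_bucket).objects.all():
        s3.Object(archive_bucket, obj.key).copy_from(
            CopySource={"Bucket": source_bucket, "Key": obj.key},
            StorageClass="GLACIER",  # assumption: archive copies go to Glacier
        )
        # Note: CopyObject is limited to 5 GB per object; larger objects
        # would need a multipart copy instead.
    s3.Bucket(source_bucket).objects.all().delete()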
Is there a way to make a Google Cloud Storage bucket "append-only"?
To clarify, I want to make it so that trying to overwrite/modify an existing object returns an error.
Right now the only way I see to do this is client-side, by checking if the object exists before trying to write to it, but that doubles the number of calls I need to make.
There are several Google Cloud Storage features that you can enable:
Object Versioning
Bucket Lock
Retention Policies
The simplest method is to enable Object Versioning. It does not reject overwrites or deletes, but it retains the previous version whenever an object is overwritten or deleted, so the data is not lost. This does require changes to client code so that it knows how to request a specific version of an object if multiple versions have been created due to overwrites and deletes.
Cloud Storage Object Versioning
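For example, a minimal sketch with the google-cloud-storage Python client to turn versioning on (the bucket name is a placeholder):

from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("my-append-only-bucket")  # placeholder name

# Keep the previous version whenever an object is overwritten or deleted.
bucket.versioning_enabled = True
bucket.patch()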
For more complicated scenarios, implement Bucket Lock and retention policies. These features allow you to configure a data retention policy for a Cloud Storage bucket that governs how long objects in the bucket must be retained.
Retention policies and Bucket Lock
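And a similar sketch for a retention policy, assuming a 30-day period chosen purely as an example; locking the policy is optional and irreversible, so it is left commented out:

from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("my-append-only-bucket")  # placeholder name

# Objects cannot be deleted or overwritten until they are this old (seconds).
bucket.retention_period = 30 * 24 * 60 * 60  # example: 30 days
bucket.patch()

# Locking the policy (Bucket Lock) makes it permanent and irreversible:
# bucket.lock_retention_policy()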
I'm trying to understand the delete operation for an object in AWS S3.
With cross-region replication, if I delete an object from the source, the delete is not propagated to the destination.
The official text - "If you specify an object version ID to delete in
a DELETE request, Amazon S3 deletes that object version in the source
bucket, but it doesn't replicate the deletion in the destination
bucket. In other words, it doesn't delete the same object version from
the destination bucket. This protects data from malicious deletions. "
In another case, I read that
The official text - Amazon S3 offers eventual consistency for overwrite PUTS and DELETES in all Regions
When I tested it, the delete was not propagated, so there is a divergence between the replicas!
Is this normal? What about the eventual consistency of the delete?
The second quote is not about replication; it's about a single, plain bucket, from the Introduction to Amazon S3:
Amazon S3 offers eventual consistency for overwrite PUTS and DELETES in all Regions
The right answer - "it doesn't delete the same object version from the destination bucket. This protects data from malicious deletions".
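To make the quoted behavior concrete, here is a small boto3 sketch; the bucket, key, and version ID are placeholders, and only the second call is the version-specific delete the documentation is talking about:

import boto3

s3 = boto3.client("s3")

# A plain DELETE in a versioned bucket only adds a delete marker; whether
# that marker is replicated depends on the replication configuration.
s3.delete_object(Bucket="source-bucket", Key="report.csv")

# A DELETE that names a version ID removes that version from the source
# bucket, but (per the quote above) it is never replicated to the destination.
s3.delete_object(
    Bucket="source-bucket",
    Key="report.csv",
    VersionId="example-version-id",  # placeholder
)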
If you need "consistency of the delete", you can try to automate it with aws s3 sync and its --delete flag, e.g. aws s3 sync s3://SOURCE s3://DEST --delete.
We’ve been using the Google Cloud Storage Transfer service, and in our data source (AWS) a directory was accidentally deleted. We figured it would still be in the data sink; however, upon taking a look, it wasn’t there, despite versioning being on.
This leads us to believe that, in Storage Transfer, the deleteObjectsUniqueInSink option hard-deletes objects in the sink and removes them from the archive.
We've been unable to confirm this in the documentation.
Is GCS Transfer Service's deleteObjectsUniqueInSink parameter in the TransferSpec mutually exclusive with GCS's object versioning soft-delete?
When the deleteObjectsUniqueInSink option is enabled, Google Cloud Storage Transfer will:
List only the live versions of objects in source and destination buckets.
Copy any objects unique in the source to the destination bucket.
Issue a versioned delete for any unique objects in the destination bucket.
If the unique object is still live at the time that Google Cloud Storage Transfer issues the deletion, it will be archived. If another process, such as Object Lifecycle Management, archived the object before the deletion occurs, the object could be permanently deleted at this point rather than archived.
Edit: Specifying the version in the delete results in a hard delete (Objects Delete Documentation), so the transfer service is currently performing hard deletes for unique objects. We will update the service to perform soft deletions instead.
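To illustrate that distinction with the google-cloud-storage Python client (bucket name, object name, and generation number are placeholders, and a versioning-enabled bucket is assumed):

from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("my-sink-bucket")  # placeholder name

# Without a generation, the live object in a versioning-enabled bucket just
# becomes a noncurrent (archived) version - a soft delete.
bucket.delete_blob("reports/data.csv")

# Naming a specific generation permanently removes that version - a hard delete.
bucket.delete_blob("reports/data.csv", generation=1234567890123456)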
Edit: The behavior has been changed. From now on deletions in versioned buckets will be soft deletes rather than hard deletes.