I have created an S3 lifecycle policy that will expire the current version of an object after 540 days.
I am a bit confused here: does it delete the objects from S3, or from Glacier?
If not, I want to delete the objects from the bucket after 540 days and from Glacier after some 4 years. How do I set that up?
Expiring an object means "delete it", regardless of its storage class.
So, if it has moved to a Glacier storage class, it will still be deleted.
When you store data in Glacier via S3, the object is managed by Amazon S3 (but stored in Glacier). Thus, the lifecycle rules apply.
If, however, you store data directly in Amazon Glacier (without going via Amazon S3), then the data would not be impacted by the lifecycle rules, nor would it be visible in Amazon S3.
Bottom line: Set your rules for deletion based upon the importance of the data, not its current storage class.
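As a minimal sketch of the two-stage setup you describe (move out of the bucket's regular storage at 540 days, delete entirely at roughly 4 years), assuming a hypothetical bucket name and using boto3's put_bucket_lifecycle_configuration:

```python
import boto3

s3 = boto3.client("s3")

# Sketch only: "my-backup-bucket" is a placeholder. Objects transition to
# the GLACIER storage class after 540 days (leaving S3 Standard storage),
# then expire -- i.e. are deleted, wherever they are stored -- after
# roughly 4 years (1460 days).
s3.put_bucket_lifecycle_configuration(
    Bucket="my-backup-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-then-expire",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to the whole bucket
                "Transitions": [{"Days": 540, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 1460},
            }
        ]
    },
)
```

Note that a transition is not a deletion: the object remains the same S3 object throughout, only its storage class changes, and the single Expiration action removes it regardless of where it sits.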
I am trying to understand AWS S3 object transitioning across the various storage classes, using https://docs.aws.amazon.com/AmazonS3/latest/userguide/lifecycle-transition-general-considerations.html to go through the details, but I find it confusing. Below is what I am trying to understand.
As per this document, and the nice image provided on this page:
"Amazon S3 supports a waterfall model for transitioning between storage classes, as shown in the following diagram.".
For example, an object in S3 Standard can be transitioned to any of: Standard-IA, Intelligent-Tiering, One Zone-IA, Glacier Instant Retrieval, Glacier Flexible Retrieval, or Glacier Deep Archive.
This means explicit transitioning in the reverse direction is not possible.
However, on the same page it then says:
"If you want to change the storage class of an object that is stored in S3 Glacier Flexible Retrieval to a storage class other than S3 Glacier Deep Archive, you must use the restore operation to make a temporary copy of the object first. Then use the copy operation to overwrite the object specifying S3 Standard, S3 Intelligent-Tiering, S3 Standard-IA, S3 One Zone-IA, or Reduced Redundancy as the storage class."
The confusing part is that it offers the option of overwriting the storage class (after restoring a copy, of course) to S3 Intelligent-Tiering, S3 Standard-IA, or S3 One Zone-IA.
The confusion is: how does AWS treat this "restored" object, as a new object or as the existing object?
If it treats it as a "new" object, how can it transition directly to S3 Intelligent-Tiering, S3 Standard-IA, or S3 One Zone-IA? That contradicts the waterfall model they describe.
My expectation was that if the object is to be transitioned to, say, One Zone-IA, the restored object should first be treated only as an S3 Standard object, wait 30 days, and only then be allowed to transition to S3 One Zone-IA.
Thank you.
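For reference, the restore-then-copy procedure the documentation describes would look roughly like this in boto3 (a sketch with placeholder bucket/key names; note that the copy step is an ordinary copy operation rather than a lifecycle transition, which may be why the waterfall restriction does not apply to it):

```python
import boto3

s3 = boto3.client("s3")

# Placeholders: bucket and key names are hypothetical.
# Step 1: ask S3 for a temporary, retrievable copy of the archived object.
s3.restore_object(
    Bucket="my-bucket",
    Key="archived/report.csv",
    RestoreRequest={
        "Days": 2,  # how long the temporary copy stays available
        "GlacierJobParameters": {"Tier": "Standard"},  # ~3-5 hour retrieval
    },
)

# Step 2 (once the restore has completed): copy the object over itself,
# specifying the target storage class.
s3.copy_object(
    Bucket="my-bucket",
    Key="archived/report.csv",
    CopySource={"Bucket": "my-bucket", "Key": "archived/report.csv"},
    StorageClass="ONEZONE_IA",
)
```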
I have a website where I serve content that is stored on an AWS S3 bucket. As the amount of content grows, I have started thinking about back-up options. Using AWS Glacier came up as a natural route.
After reading up on it, I didn't understand whether it does what I intend to do with it. From what I have understood, using Glacier, you set lifecycle policies on objects stored in your S3 bucket. According to these policies, objects will be transferred to Glacier and deleted from your S3 bucket at a specific point in time after they have been uploaded to S3. At that point, the object's storage class changes to 'GLACIER'. Amazon explains that, once this is done, you can no longer access the objects through S3, but "their index entry will remain as is". At the same time, they say that retrieval of objects from Glacier takes 3-5 hours.
My question is: does this mean that, once objects are transferred to Glacier, I will not be able to serve them on my website without retrieving them first? Or does it mean that they will still be served from the S3 bucket as usual, but that, in case something happens to the files on S3, I will be able to retrieve them in 3-5 hours? Glacier would only be a viable backup solution for me if users of my website could still load content after the corresponding objects are transferred to Glacier. Also, is it possible to have objects transferred to Glacier without them being deleted from the S3 bucket?
Thank you
To answer your question: "Does this mean that, once objects are transferred to Glacier, I will not be able to serve them on my website without retrieving them first?"
No, you won't be able to serve them on your website unless you restore them from Glacier to the STANDARD or STANDARD_IA class first, which takes 3-5 hours. Glacier is generally used to archive cold data, such as old logs, that is accessed only in rare circumstances. So if you need real-time access to the objects, Glacier isn't a valid option for you.
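To illustrate, one way to check whether a given object is currently servable is to inspect its storage class and restore status (a sketch with placeholder names, using boto3's head_object):

```python
import boto3

s3 = boto3.client("s3")

# Placeholders: swap in your own bucket and key.
head = s3.head_object(Bucket="my-content-bucket", Key="images/photo.jpg")

storage_class = head.get("StorageClass", "STANDARD")  # absent means STANDARD
restore = head.get("Restore")  # present only if a restore was requested

if storage_class == "GLACIER" and restore is None:
    print("Archived: not servable until a restore is initiated and completes")
elif restore is not None and 'ongoing-request="true"' in restore:
    print("Restore in progress: still not servable")
else:
    print("Servable: a regular S3 object, or a completed temporary restore")
```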
I am trying to see if there is a way to transfer S3 objects in Glacier from one bucket to another while keeping the storage class the same. I can restore the Glacier object and transfer it, but in the new bucket the file is saved in Standard storage. I would like to know if there is a way for the file to be stored directly in Glacier, without enforcing lifecycle policies on the destination bucket.
There isn't.
Objects can only be copied to another bucket once restored, and objects can only be transitioned into the Glacier storage class by lifecycle policies, not by creating them with this storage class ... which essentially rules out the possibility of the desired outcome for two different reasons.
S3 does not have either a "move" or a "rename" feature -- both of these can only be emulated by copy-and-delete.
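For completeness, the copy-and-delete emulation of a "move" looks like this (a sketch with hypothetical bucket/key names; the source must already be restored if it is in Glacier, and the copy arrives in the destination bucket as a regular Standard-class object, per the answer above):

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical source and destination names.
src = {"Bucket": "source-bucket", "Key": "backups/2020-01.tar.gz"}

# Copy first (the source object must have been restored if archived)...
s3.copy_object(
    Bucket="destination-bucket",
    Key="backups/2020-01.tar.gz",
    CopySource=src,
)

# ...then delete the original. Together these emulate a "move", which S3
# does not offer natively.
s3.delete_object(Bucket=src["Bucket"], Key=src["Key"])
```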
I have an S3 bucket on which I've configured a lifecycle policy that archives all objects in the bucket after 1 day (I want to keep the files there temporarily, but if there are no issues it is fine to archive them and not have to pay for the S3 storage).
However, I have noticed there are some files in that bucket that were created in February.
So, am I right in thinking that if you select 'Archive' as the lifecycle option, that means "copy-to-Glacier-and-then-delete-from-S3"? In which case the files left over from February would indicate a fault, since they haven't been removed?
Only I saw there is another option, 'Archive and then Delete', but I assume that means "copy-to-Glacier-and-then-delete-from-Glacier", which I don't want.
Has anyone else had issues with S3 -> Glacier?
What you describe sounds normal. Check the storage class of the objects.
The correct way to understand the S3/Glacier integration is that S3 is the "customer" of Glacier -- not you -- and Glacier is a back-end storage provider for S3. Your relationship is still with S3 (if you go into Glacier in the console, your stuff isn't visible there, if S3 put it in Glacier).
When S3 archives an object to Glacier, the object is still logically "in" the bucket and is still an S3 object, and visible in the S3 console, but can't be downloaded from S3 because S3 has migrated it to a different backing store.
The difference you should see in the console is that the objects will have a storage class of Glacier instead of the usual Standard or Reduced Redundancy. They don't disappear from there.
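A quick way to confirm this outside the console is to list the objects together with their storage classes (a sketch; the bucket name is a placeholder):

```python
import boto3

s3 = boto3.client("s3")

# Placeholder bucket name. Archived objects still appear in the listing;
# only their StorageClass changes (e.g. from STANDARD to GLACIER).
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="my-archive-bucket"):
    for obj in page.get("Contents", []):
        print(obj["Key"], obj.get("StorageClass", "STANDARD"))
```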
To access the object later, you ask S3 to initiate a restore from Glacier, which S3 does... but the object is still in Glacier at that point, with S3 holding a temporary copy, which it will again purge after some number of days.
Note that your attempt at saving money may be a little off target if you do not intend to keep these files for 3 months: Glacier bills a minimum of three months of storage, so any time you delete an object that has been in Glacier for less than three months, you are billed for the remainder of that period.
Is there a way to set an expiry date in Amazon Glacier? I want to copy in weekly backup files, but I don't want to hang on to more than 1 year's worth.
Can the files be set to "expire" after one year, or is this something I will have to do manually?
While this is not available natively within Amazon Glacier, AWS has recently enabled Archiving Amazon S3 Data to Amazon Glacier, which makes working with Glacier much easier in the first place:
[...] Amazon S3 was designed for rapid retrieval. Glacier, in contrast, trades off retrieval time for cost, providing storage for as little as $0.01 per Gigabyte per month while retrieving data within three to five hours.
How would you like to have the best of both worlds? How about rapid retrieval of fresh data stored in S3, with automatic, policy-driven archiving to lower cost Glacier storage as your data ages, along with easy, API-driven or console-powered retrieval? [emphasis mine]
[...] You can now use Amazon Glacier as a storage option for Amazon S3.
This is enabled by Amazon S3 Object Lifecycle Management, which not only drives the mentioned object archival (transitioning objects to the Glacier storage class) but also includes optional object expiration, which lets you achieve what you want, as outlined in the section Before You Decide to Expire Objects within Lifecycle Configuration Rules:
The Expiration action deletes objects
You might have objects in Amazon S3 or archived to Amazon Glacier. No matter where these objects are, Amazon S3 will delete them. You will no longer be able to access these objects. [emphasis mine]
So at the small price of having your objects stored in S3 for a short time (which actually eases working with Glacier a lot, since it removes the need to manage archives and inventories), you gain the benefit of optional automatic expiration.
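For the weekly-backup case, a minimal sketch of such a rule (hypothetical bucket name and prefix, using boto3; the one-week transition delay is an assumption):

```python
import boto3

s3 = boto3.client("s3")

# Sketch only: bucket name and prefix are placeholders. Backups move to the
# GLACIER storage class a week after upload, then expire -- i.e. are deleted,
# wherever they are stored -- after one year.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-weekly-backups",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-backups-after-one-year",
                "Status": "Enabled",
                "Filter": {"Prefix": "backups/"},
                "Transitions": [{"Days": 7, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 365},
            }
        ]
    },
)
```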
You can do this in the AWS Command Line Interface.
http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html