Delete files inside a subfolder residing inside a bucket on Amazon S3 - amazon-web-services

On my S3 I have a bucket called nba-dataset. Inside nba-dataset I have a folder called env, and inside env I have another folder called prod-stage.
So basically the path looks like this: nba-dataset > env > prod-stage.
I have files inside prod-stage that I want to delete after x number of days.
The way I understand it, I can apply a lifecycle rule on the bucket nba-dataset.
The confusing part for me is what the value for the prefix should be. Would it be env/prod-stage, or simply prod-stage?
I would appreciate guidance here, as there are many files in the subfolders of the bucket nba-dataset that I don't want to delete by accident. I only want to delete files inside the prod-stage folder that are older than x days.

The prefix is matched against the full object key, so in your case it is env/prod-stage/.
There is no such thing as a subfolder within S3. Each object has a key, which the GUI displays as subfolders by splitting the key on the "/" character.
In fact, the interface is simply calling the list-objects method. When you are shown as being in the env folder, the prefix is env; when you move into the prod-stage subfolder, the prefix becomes env/prod-stage.
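Putting that together, the rule could be applied from the CLI roughly like this. This is a sketch only: 30 is a stand-in for your x days, and the bucket name is the one from the question; the rule only touches keys starting with env/prod-stage/.

```shell
# Sketch: expire objects under env/prod-stage/ after 30 days
# (30 is a placeholder for "x days").
aws s3api put-bucket-lifecycle-configuration \
  --bucket nba-dataset \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "expire-prod-stage",
      "Status": "Enabled",
      "Filter": { "Prefix": "env/prod-stage/" },
      "Expiration": { "Days": 30 }
    }]
  }'
```

Because the Prefix is env/prod-stage/, objects elsewhere in nba-dataset are not matched by this rule.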

Related

AWS configuration to delete files

I have an "Execution" folder in an S3 bucket.
It has folders and files like:
Execution/
    Exec_06-06-2022/
        file1.json
        file2.json
    Exec_07-06-2022/
        file3.json
        file4.json
I need to configure deletion of the Exec_datestamp folders and the files inside them after X days.
I tried this using an AWS lifecycle config for the prefix "Execution/".
But it deletes the folder Execution/ after X days (I set this to 1 day to test).
Is there any other way to achieve this?
There are no folders in S3. Execution is part of the object's name, specifically its key prefix. The S3 console only makes Execution appear as a folder, but there is no such thing in S3. So your lifecycle rule deletes the Execution/ object (not a folder), because it matches your filter.
You can try an Execution/Exec* filter.
Got it: https://stackoverflow.com/a/41459761/5010582. The prefix has to be "Execution/Exec", without any wildcard.
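The reason no wildcard is needed can be illustrated with plain string matching: a lifecycle prefix filter is a literal starts-with match against the object key, so "Execution/Exec" catches the dated folders while sparing the Execution/ marker object itself. A sketch with hypothetical keys:

```shell
# A lifecycle Prefix filter is a literal starts-with match on the key.
# "Execution/Exec" matches the dated folders but not the "Execution/"
# marker object itself (keys below are hypothetical).
prefix="Execution/Exec"
for key in "Execution/" "Execution/Exec_06-06-2022/file1.json" "Execution/other.txt"; do
  case "$key" in
    "$prefix"*) echo "expires: $key" ;;
    *)          echo "kept:    $key" ;;
  esac
done
```

Only the Exec_... key is reported as expiring; the Execution/ marker and unrelated keys are kept.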

Copy data from one folder to another inside a AWS bucket automatically

I want to copy files from one folder to another inside the same bucket.
I have two folders, Actual and Backup.
As soon as new files come into the Actual folder, I want them immediately copied to the Backup folder.
What you need are S3 Event Notifications. With these you can trigger a Lambda function when a new object is put; if it is put under one prefix, write the same object under the other prefix.
It is also worth noting that, though it functions as it seems, S3 doesn't really have directories, just objects. So you are just creating the same object as /Actual/some-file with the key /Backup/some-file. It only looks like there is a directory because the objects /Actual/some-file and /Actual/other-file share the prefix /Actual/.
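A minimal sketch of the wiring, assuming the copy Lambda already exists: the bucket name and function ARN below are hypothetical placeholders, not values from the question.

```shell
# Sketch: invoke a (hypothetical) copy Lambda whenever an object is
# created under the Actual/ prefix. Bucket name and ARN are placeholders.
aws s3api put-bucket-notification-configuration \
  --bucket my-bucket \
  --notification-configuration '{
    "LambdaFunctionConfigurations": [{
      "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:copy-to-backup",
      "Events": ["s3:ObjectCreated:*"],
      "Filter": {
        "Key": { "FilterRules": [{ "Name": "prefix", "Value": "Actual/" }] }
      }
    }]
  }'
```

Inside the Lambda, the handler would copy the incoming object to the same key with the Actual/ prefix swapped for Backup/ (e.g. via the SDK's copy-object call).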

AWS S3 - Use powershell to delete all files but keep the folders

I have a PowerShell script that downloads all files from an S3 bucket and then removes the files from the bucket. All the files I'm removing are stored in a subfolder in the S3 bucket, and I just want to delete the files but maintain the subfolders.
I'm currently using the following command to delete the files in S3 once the file has been downloaded from S3.
Remove-S3Object -BucketName $S3Bucket -Key $key -Force
My problem is that if it removes all the files in the subfolder, the subfolder is removed as well. Is there a way to remove the files but keep the subfolder present using PowerShell? I believe I can do something like this,
aws s3 rm s3://<key_to_be_removed> --exclude "<subfolder_key>"
but I'm not quite sure whether that'll work.
I'm looking for the best way to accomplish this; at the moment, my only option is to recreate the subfolder via the script if it no longer exists.
The only way to accomplish having an empty folder is to create a zero-length object which has the same name as the folder you want to keep. This is actually how the S3 console enables you to create an empty folder.
You can check this by running $ aws s3 ls s3://your-bucket/folderfoo/ and observing an output object having length of zero bytes.
As already commented, S3 does not really have folders the way file systems do. The folders presented by most S3 browsers are just generated from the paths of the files/objects. If you upload an object/file named folder/file, the browsers will present folder as a folder with file as a file inside it. But technically, all that exists is the file/object folder/file. The folder does not exist on its own.
You can explicitly create a folder by creating an empty object whose key is the folder path ending in the delimiter: folder/. If you do that, the folder will appear to exist even if there are no files in it. But if you do not, the virtual folder disappears once you remove all objects in the folder.
Now the question is whether your command also removes the empty marker object representing the folder. I cannot tell that.
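For reference, such a zero-length marker can be created from the CLI; the bucket and folder names below are placeholders.

```shell
# Create a zero-byte object whose key ends in "/" so the "folder"
# persists even when empty (bucket and key are placeholders).
aws s3api put-object --bucket my-bucket --key folderfoo/
# The marker then shows up as a zero-byte entry:
aws s3 ls s3://my-bucket/folderfoo/
```

As long as Remove-S3Object is not passed the folderfoo/ key itself, the marker (and hence the apparent folder) survives the cleanup.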

How to prevent Amazon S3 object expiration from deleting the directory

We want to delete temp files from one of the folders in the S3 bucket on a daily basis. I have tried with an S3 lifecycle policy. For example, my folder name is Test, and I have set the expiration prefix to Test/. But the issue is that along with all the files, the Test folder is also getting deleted. I want to keep the folder as is and only delete the files in it. Is there any way I can do this?
Don't worry about the folder.
As soon as you have at least one file "in" the folder (an object key beginning with Test/) it will appear again, and when there are no objects beginning with that key prefix it will disappear. All of this is normal, expected behavior, because folders in S3 are not containers like they are on a filesystem and they do not actually need to exist before putting files "in" them.

Replicate local directory in S3 bucket

I have to replicate my local folder structure in an S3 bucket. I am able to do so, but it does not create the folders which are empty. My local folder structure is as follows, and the command used is:
aws-exec s3 sync ./inbound s3://msit.xxwmm.supplychain.relex.eeeeeeeeee/
It only creates inbound/procurement/pending/test.txt; masterdata and transaction are not created, but if I put some file in each directory it will create them.
As answered by @SabeenMalik in this StackOverflow thread:
S3 doesn't have the concept of directories; the whole folder/file.jpg is the file name. If, using a GUI tool or something, you delete the file.jpg from inside the folder, you will most probably see that the folder is gone too. The visual representation in terms of directories is for user convenience.
You do not need to pre-create the directory structure. Just pretend that the structure is there and everything will be okay.
Amazon S3 will automatically create the structure as objects are written to paths. For example, creating an object called s3://bucketname/inbound/procurement/foo will make the inbound and inbound/procurement directories appear automatically.
(This isn't strictly true, because Amazon S3 doesn't use directories, but it will appear that the directories are there.)
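If the empty directories really must show up in the bucket anyway, one workaround is to create the zero-byte folder markers yourself before syncing. This is a sketch: the bucket name is a placeholder, and it assumes the standard aws CLI is available.

```shell
# Sketch: "aws s3 sync" skips empty directories, so first create a
# zero-byte "folder marker" object for every empty local directory
# (bucket name is a placeholder).
cd ./inbound
find . -type d -empty | while read -r dir; do
  aws s3api put-object --bucket my-bucket --key "inbound/${dir#./}/"
done
aws s3 sync . s3://my-bucket/inbound/
```

After this, the console shows masterdata and transaction as (empty) folders, because each now has a marker object.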