I want to rename a folder in an S3 bucket. I understand that a rename will run a PUT request, which costs 1 cent per 1,000 requests.
However, the PUT request is defined as a COPY, and presumably also involves a GET.
My question is: when we rename a folder in an S3 bucket, does it involve copying all sub-folders and files to a new folder with the name I want (which would cost more than 1 PUT request), or is it simply 1 PUT request that changes the name without touching the items within the folder?
In case you've missed it... there are no folders in S3.
The object /pics/funny/cat.jpg is not a file called cat.jpg inside a folder called funny inside another folder called pics.
In fact, it is a file with an 18-character name: pics/funny/cat.jpg. The hierarchy shown in the console is largely for human convenience, and the ability to create new folders in the console is an illusion, also.
So, yes, renaming a "folder" actually means making a new copy of each object in the "folder," with a change to the object names so that they appear to be under the new path.
This can be done with a PUT/COPY request ($0.005 per 1,000 requests, depending on the region) followed by a DELETE request on the old object (free). There is no corresponding GET request, because PUT/COPY is an atomic operation inside S3, so actually downloading and re-uploading the data is avoided.
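If you want to script this yourself, a minimal boto3 sketch of the copy-then-delete approach would look something like this (the bucket and prefix names are made up):

import boto3

s3 = boto3.client('s3')
bucket = 'my-bucket'            # hypothetical bucket
old_prefix = 'pics/funny/'      # hypothetical old "folder"
new_prefix = 'pics/hilarious/'  # hypothetical new "folder"

paginator = s3.get_paginator('list_objects_v2')
for page in paginator.paginate(Bucket=bucket, Prefix=old_prefix):
    for obj in page.get('Contents', []):
        old_key = obj['Key']
        new_key = new_prefix + old_key[len(old_prefix):]
        # The copy happens entirely inside S3; nothing is downloaded.
        s3.copy_object(Bucket=bucket,
                       CopySource={'Bucket': bucket, 'Key': old_key},
                       Key=new_key)
        s3.delete_object(Bucket=bucket, Key=old_key)

Note that copy_object handles objects up to 5 GB in a single call; larger objects need a multipart copy.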
I am trying to back up 2TB from a shared drive on a Windows Server to S3 Glacier.
There are maybe 100 folders (some may be nested) and perhaps 5000 files (some small, like spreadsheets and photos; others larger, like server images). My first question is: what counts as an object here?
Let's say I have Folder 1, which has 10 folders inside it. Each of the 10 folders has 100 files.
Would the number of objects be 1 folder + (10 folders * 100 files) = 1001 objects?
I am trying to understand how folder nesting is treated in S3. Do I have to manually create each folder as a prefix and then upload each file inside it using the AWS CLI? I am trying to recreate the shared-drive experience on the cloud, where I can browse the folders and download the files I need.
Amazon S3 does not actually support folders. It might look like it does, but it actually doesn't.
For example, you could upload an object to invoices/january.txt and the invoices directory will just magically 'appear'. Then, if you deleted that object, the invoices folder would magically 'disappear' (because it never actually existed).
So, feel free to upload objects to any location without creating the directories first.
However, if you click the Create folder button in the Amazon S3 management console, it will create a zero-length object with the name of the directory. This will make the directory 'appear' and it would be counted as an object.
The easiest way to copy the files from your Windows computer to an Amazon S3 bucket would be:
aws s3 sync directoryname s3://bucket-name/ --storage-class DEEP_ARCHIVE
It will upload all files, including files in subdirectories. It will not create folder objects, since they aren't necessary; the folders will still 'appear' in S3.
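If you then want to check what actually counted as an object, a recursive listing with a summary will tell you (any zero-length 'folder' objects created via the console are included in the total):

aws s3 ls s3://bucket-name/ --recursive --summarize

The summary lines at the end report Total Objects and Total Size for everything under the given path.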
I want to copy files from one folder to another inside the same bucket.
I have two folders, Actual and Backup.
As soon as a new file arrives in the Actual folder, I want it to be immediately copied to the Backup folder.
What you need are S3 Event Notifications. With these you can trigger a Lambda function when a new object is put; if it was put under one prefix, the function writes the same object to the other prefix, as in the sketch below.
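A minimal sketch of such a handler, assuming the event notification is configured on the bucket (the prefix names match the question; everything else is illustrative):

import urllib.parse
import boto3

s3 = boto3.client('s3')

def lambda_handler(event, context):
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        # Keys arrive URL-encoded in S3 event records.
        key = urllib.parse.unquote_plus(record['s3']['object']['key'])
        if key.startswith('Actual/'):
            backup_key = 'Backup/' + key[len('Actual/'):]
            s3.copy_object(Bucket=bucket,
                           CopySource={'Bucket': bucket, 'Key': key},
                           Key=backup_key)

Configuring the notification itself with a prefix filter of Actual/ avoids invoking the function for unrelated objects.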
It is also worth noting that, although it behaves the way it looks, S3 doesn't really have directories, just objects. So you are simply copying the object /Actual/some-file to a new object with the key /Backup/some-file. It only looks like there is a directory because the objects /Actual/some-file and /Actual/other-file share the prefix /Actual/.
I have used the aws cli tool to move a couple of folders named 2014, 2015, 2016, etc. from the root directory:
/2015/
into:
/images/2015/
When I moved them, it seems that there is one file fewer in each folder:
Before copying:
After copying:
Could you help me understand this phenomenon?
The count is probably including/excluding the 'folder object'.
Normally, there is no need to create folders in Amazon S3. Simply putting an object in a particular path (eg /images/2014) will "create" the images and 2014 folders -- they 'appear' to exist, but they actually do not. Deleting the objects will make the folders disappear.
However, it is possible to create a folder by clicking Create folder. This will create a zero-length object with the same name as the folder. This will force the folder to appear, even when there are no objects inside the folder.
Therefore, it is likely that the "off by 1" count of objects is related to a folder that was/wasn't created via the Create folder command. I have previously seen exactly this behaviour.
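One way to check is to list only the zero-byte keys under the prefix; a zero-length object whose key ends in "/" is a console-created folder marker (the bucket name here is illustrative):

aws s3api list-objects-v2 --bucket my-bucket --prefix images/ --query 'Contents[?Size==`0`].Key'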
My code is working. The only issue I'm facing is that I cannot specify the folder within the S3 bucket that I would like to place my file in. Here is what I have:
with open("/hadoop/prodtest/tips/ut/s3/test_audit_log.txt", "rb") as f:
s3.upload_fileobj(f, "us-east-1-tip-s3-stage", "BlueConnect/test_audit_log.txt")
The explanation from @danimal captures pretty much everything. If you want to just create a folder-like object in S3, you can simply specify that folder name and end it with "/", so that when you look at it from the console, it will look like a folder.
It's a rather useless, empty object without a body (consider it a key with a null value), just for eye candy, but if you really want to do it, you can.
1) You can create it in the console interactively, as it gives you that option.
2) You can use the AWS SDK. boto3 has a put_object method on the S3 client, where you specify the key as "your_folder_name/"; see the example below:
import boto3

session = boto3.Session()  # I assume you know how to provide credentials etc.
s3 = session.client('s3', region_name='us-east-1')
bucket = s3.create_bucket(Bucket='my-test-bucket')
response = s3.put_object(Bucket='my-test-bucket', Key='my_pretty_folder/')  # note the trailing "/"
And there you have your bucket, with a 'folder' in it.
Again, when you upload a file you specify the bucket and a key such as "my_file"; what you did there is create a key named "my_file" and put the content of your file as its "value".
In this case you have 2 objects in the bucket. The first object has a null body and looks like a folder, while the second one looks like it is inside that folder; but as @danimal pointed out, in reality you created 2 keys in the same flat hierarchy. It just "looks like" what we are used to seeing in a file system.
If you delete the file, you still have the other object, so in the AWS console it looks like the folder is still there but has no files inside.
If you skipped creating the folder and simply uploaded the file like you did, you would still see the folder structure in the AWS console, but you would have a single object at that point.
However, when you list the objects from the command line, you would see that single object, and if you delete it in the console, it looks like the folder is gone too.
Files ('objects') in S3 are actually stored by their 'Key' (~folders+filename) in a flat structure in a bucket. If you place slashes (/) in your key then S3 represents this to the user as though it is a marker for a folder structure, but those folders don't actually exist in S3, they are just a convenience for the user and allow for the usual folder navigation familiar from most file systems.
So, as your code stands, although it appears you are putting a file called test_audit_log.txt in a folder called BlueConnect, you are actually just placing an object, representing your file, in the us-east-1-tip-s3-stage bucket with a key of BlueConnect/test_audit_log.txt. In order then to (seem to) put it in a new folder, simply make the key whatever the full path to the file should be, for example:
# upload_fileobj(file, bucket, key)
s3.upload_fileobj(f, "us-east-1-tip-s3-stage", "folder1/folder2/test_audit_log.txt")
In this example, the 'key' of the object is folder1/folder2/test_audit_log.txt, which you can think of as the file test_audit_log.txt inside the folder folder2, which is inside the folder folder1. This is how it will appear on S3, in a folder structure, which will generally be different and separate from your local machine's folder structure.
After deleting a few folders in our S3 bucket, I am not able to see any of my folders through the web console. We had around 10 folders and ended up deleting 6 of them. The remaining four show up when I do an 'ls' on that S3 bucket through the CLI but the bucket shows up empty on the web console.
When I turn on 'Versions' I see everything (including the 6 folders that were deleted). Am I overlooking something extremely simple?
Folders do not actually exist in Amazon S3.
For example, you could create an object like this:
aws s3 cp foo.txt s3://my-bucket/folder1/folder2/foo.txt
This would instantly 'create' folder1 and folder2. Or, to be more accurate, the folders would 'appear' but they don't actually exist because the full filename (Key) of the object is folder1/folder2/foo.txt.
If you were then to delete that object, the folders would 'disappear' because they never actually existed.
Sometimes, if a system wants to forcefully make a folder 'appear', it can create a zero-length object with the same name as the folder. This makes the folder 'appear', but it is really the empty file that is appearing.
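This is what the console's Create folder button does behind the scenes; you can do the same thing from the CLI by putting a zero-length object whose key ends in "/" (the bucket name is illustrative):

aws s3api put-object --bucket my-bucket --key folder1/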
Bottom line: Don't worry about creating and deleting folders. They will appear when necessary and disappear when not being used. Do not try to map normal filesystem behaviour to Amazon S3.