Uploading a file to a specific folder in an S3 bucket using boto3 - python-2.7

My code is working. The only issue I'm facing is that I cannot specify the folder within the S3 bucket that I would like to place my file in. Here is what I have:
with open("/hadoop/prodtest/tips/ut/s3/test_audit_log.txt", "rb") as f:
s3.upload_fileobj(f, "us-east-1-tip-s3-stage", "BlueConnect/test_audit_log.txt")

The explanation from #danimal captures pretty much everything. If you just want to create a folder-like object in S3, you can specify that folder name and end it with "/", so that when you look at it from the console it looks like a folder.
It is rather useless: an empty object with no body (think of it as a key with a null value), purely for eye candy. But if you really want to do it, you can, in one of two ways:
1) You can create it interactively on the console, as it gives you that option.
2) You can use the AWS SDK. boto3 has a put_object method on the S3 client, where you specify the key as "your_folder_name/"; see the example below:
import boto3

session = boto3.Session()  # I assume you know how to provide credentials etc.
s3 = session.client('s3', region_name='us-east-1')
s3.create_bucket(Bucket='my-test-bucket')
response = s3.put_object(Bucket='my-test-bucket', Key='my_pretty_folder/')  # note the ending "/"
And there you have your bucket.
Again, when you upload a file you specify a key such as "my_pretty_folder/my_file", and what you do there is create a key named "my_pretty_folder/my_file" with the content of your file as its value.
In this case you have two objects in the bucket. The first object has a null body and looks like a folder, while the second one looks like it is inside that folder; but as #danimal pointed out, in reality you created two keys in the same flat hierarchy. It just looks like what we are used to seeing in a file system.
If you delete the file, you still have the other object, so on the AWS console it looks like the folder is still there but with no files inside.
If you skipped creating the folder and simply uploaded the file like you did, you would still see the folder structure in the AWS console, but you would have a single object at that point.
When you list the objects from the command line, however, you would see a single object, and if you delete it on the console it looks like the folder is gone too.
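To see this flat structure for yourself, you can list the keys directly. A minimal boto3 sketch, reusing the illustrative bucket and folder names from above:
import boto3

s3 = boto3.client('s3', region_name='us-east-1')

# Lists every key in the bucket; "folders" show up only as slashes in key names.
response = s3.list_objects_v2(Bucket='my-test-bucket')
for obj in response.get('Contents', []):
    print(obj['Key'])  # e.g. "my_pretty_folder/" and "my_pretty_folder/my_file"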

Files ('objects') in S3 are actually stored by their 'Key' (~folders + filename) in a flat structure in a bucket. If you place slashes (/) in your key, S3 represents this to the user as though it were a marker for a folder structure, but those folders don't actually exist in S3; they are just a convenience for the user and allow for the usual folder navigation familiar from most file systems.
So, as your code stands, although it appears you are putting a file called test_audit_log.txt in a folder called BlueConnect, you are actually just placing an object, representing your file, in the us-east-1-tip-s3-stage bucket with a key of BlueConnect/test_audit_log.txt. To (seem to) put it in a new folder, simply make the key whatever the full path to the file should be, for example:
# upload_fileobj(file, bucket, key)
s3.upload_fileobj(f, "us-east-1-tip-s3-stage", "folder1/folder2/test_audit_log.txt")
In this example, the 'key' of the object is folder1/folder2/test_audit_log.txt, which you can think of as the file test_audit_log.txt inside the folder folder2, which is in turn inside the folder folder1. This is how it will appear in S3, in a folder structure that will generally be different and separate from your local machine's folder structure.
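If you want to confirm that the object landed under that key, you can fetch its metadata with the same full key. A minimal boto3 sketch, reusing the names from the example above:
import boto3

s3 = boto3.client('s3')

# The full key, not a path on disk, identifies the object.
meta = s3.head_object(
    Bucket='us-east-1-tip-s3-stage',
    Key='folder1/folder2/test_audit_log.txt',
)
print(meta['ContentLength'])  # size of the uploaded file in bytes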

Related

Copy data from one folder to another inside an AWS bucket automatically

I want to copy files from one folder to another inside the same bucket.
I have two folders, Actual and Backup.
As soon as a new file comes to the Actual folder, I want it to be immediately copied to the Backup folder.
What you need are S3 Event Notifications. With these you can trigger a Lambda function when a new object is put; if it arrives under one prefix, write the same object under the other prefix.
It is also worth noting that, though it functionally appears otherwise, S3 doesn't really have directories, just objects. So you are just creating the same object as /Actual/some-file with the key /Backup/some-file. It just looks like there is a directory because the files /Actual/some-file and /Actual/other-file share the prefix /Actual/.
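A minimal sketch of such a Lambda handler in Python, assuming the event notification is configured for the Actual/ prefix (the prefix names come from the question; everything else is illustrative):
import boto3
from urllib.parse import unquote_plus

s3 = boto3.client('s3')

def handler(event, context):
    # Each record describes one object that triggered the notification.
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        # Keys in event records are URL-encoded, so decode them first.
        key = unquote_plus(record['s3']['object']['key'])
        if key.startswith('Actual/'):
            backup_key = 'Backup/' + key[len('Actual/'):]
            # Server-side copy; the object body never leaves S3.
            s3.copy_object(
                Bucket=bucket,
                CopySource={'Bucket': bucket, 'Key': key},
                Key=backup_key,
            )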

AWS S3 Bucket File - Rename object that has backslash into folder and file

I have some old uploaded S3 objects that were supposed to be in a subfolder/filename format, but mistakenly used a backslash which made the subfolder part of the actual object's filename.
Using the aws cli, how can I recursively run through all my objects that have a backslash (folder\file), make a new subfolder with the folder name, and place the object inside it with the correct filename (folder/file)?
From this:
bucket
    folder1\file1
    folder1\file2
    folder1\file3
To this:
bucket
    folder1
        file1
        file2
        file3
S3 does not have "folders", it only has "keys".
You need to list all the objects and then copy each object to a key with a proper / in it, e.g. folder1/file1, and then delete the old object; there is no move operation in S3. And you will not need to create a folder: "folders" only exist within the AWS web console, where they are shown for human convenience.
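The question asks about the AWS CLI, but the same list-copy-delete approach is straightforward to script. A minimal boto3 sketch (the bucket name is illustrative):
import boto3

s3 = boto3.client('s3')
bucket = 'my-bucket'  # illustrative name

# Paginate in case the bucket holds more than 1000 objects.
paginator = s3.get_paginator('list_objects_v2')
for page in paginator.paginate(Bucket=bucket):
    for obj in page.get('Contents', []):
        key = obj['Key']
        if '\\' in key:
            new_key = key.replace('\\', '/')
            # Copy to the corrected key, then delete the original;
            # S3 has no rename or move operation.
            s3.copy_object(
                Bucket=bucket,
                CopySource={'Bucket': bucket, 'Key': key},
                Key=new_key,
            )
            s3.delete_object(Bucket=bucket, Key=key)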

How to get an existing s3 bucket (subdirectory)?

For a CDK app I'm working on, I'm trying to get an existing S3 bucket path and then copy a few local files to that bucket. However, I get this error when the code tries to search for the bucket:
Failed to create resource. Command '['python3', '/var/task/aws', 's3', 'sync', '--delete', '/tmp/tmpew0gwu1w/contents', 's3://dir/subdir/']' returned non-zero exit status 1.
If anyone could help me with this, that'd be great; I'm not sure whether the 'bucket-name' parameter can take a bucket path or not.
The line of code is as follows:
IBucket byName = Bucket.fromBucketName(this, "bucket-name", "dir/subdir");
Note: If I try to copy the files to the main directory (dir in this case), it works fine.
The “path” is not part of the bucket name, it’s part of the object’s key (filename). Amazon S3 is an object store and doesn’t have directories like file systems do.
Technically, every object in a bucket is on the top level with the “path” being prefixed to its name.
So if you have something like s3://bucket/path/sub-path/file.txt, the bucket name is bucket and the object key (similar to a filename) is path/sub-path/file.txt with path/sub-path/ being the prefix.
When using the aws s3 sync CLI command, the prefix gets converted into a directory structure on the local drive, and vice versa.
For more details, please refer to How do I use folders in an S3 bucket?
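To make the split concrete, here is a small Python sketch (using the illustrative URI from above) showing which part is the bucket name and which part is the key:
from urllib.parse import urlparse

uri = 's3://bucket/path/sub-path/file.txt'
parsed = urlparse(uri)

bucket = parsed.netloc                 # 'bucket' -> what Bucket.fromBucketName expects
key = parsed.path.lstrip('/')          # 'path/sub-path/file.txt' -> the object key
prefix = key.rsplit('/', 1)[0] + '/'   # 'path/sub-path/' -> the prefix

print(bucket, key, prefix)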

AWS S3 Listing API - How to list everything inside S3 Bucket with specific prefix

I am trying to list all items with a specific prefix in an S3 bucket. Here is the directory structure that I have:
Item1/
    Item2/
        Item3/
            Item4/
                image_1.jpg
            Item5/
                image_1.jpg
                image_2.jpg
When I set the prefix to be Item1/Item2, I get the following keys as a result:
Item1/Item2/
Item1/Item2/Item3/Item4/image_1.jpg
Item1/Item2/Item3/Item5/image_1.jpg
Item1/Item2/Item3/Item5/image_2.jpg
What I would like to get is:
Item1/Item2/
Item1/Item2/Item3
Item1/Item2/Item3/Item4
Item1/Item2/Item3/Item5
Item1/Item2/Item3/Item4/image_1.jpg
Item1/Item2/Item3/Item5/image_1.jpg
Item1/Item2/Item3/Item5/image_2.jpg
Is there any way to achieve this in golang?
Folders do not actually exist in Amazon S3. It is a flat object storage system.
For example, using the AWS Command-Line Interface (CLI) I could copy a file to an Amazon S3 bucket:
aws s3 cp foo.txt s3://my-bucket/folder1/folder2/foo.txt
This works just fine, even though folder1 and folder2 do not exist. This is because objects are stored with a Key (filename) that includes the full path of the object. So, the above object actually has a Key (filename) of:
folder1/folder2/foo.txt
However, to make things easier for humans, the Amazon S3 management console makes it appear as though there are folders. In S3, these are called Common Prefixes rather than folders.
So, when you make an API call to list the contents of the bucket while specifying a Prefix, it simply says "List all objects whose Key starts with this string".
Your listing doesn't show any folders because they don't actually exist.
Now, just to contradict myself, it actually is possible to create a folder (e.g. by clicking Create folder in the management console). This actually creates a zero-length object with the same name as the folder. The folder will then appear in listings because it is actually the zero-length object being listed, rather than the folder.
This is probably why Item1/Item2/ appears in your listing, but Item1/Item2/Item3 does not. Somebody, at some stage, must have "created a folder" called Item1/Item2/, which actually created a zero-length object with that Key.
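The question asks about Go, but the mechanism is the same in every SDK. A minimal boto3 sketch showing how the Delimiter parameter surfaces those Common Prefixes (the bucket name is illustrative):
import boto3

s3 = boto3.client('s3')

# With a Delimiter, S3 groups keys that share a prefix up to the next "/"
# and returns them as CommonPrefixes instead of individual objects.
response = s3.list_objects_v2(
    Bucket='my-bucket',       # illustrative name
    Prefix='Item1/Item2/',
    Delimiter='/',
)
for cp in response.get('CommonPrefixes', []):
    print(cp['Prefix'])       # e.g. "Item1/Item2/Item3/"
for obj in response.get('Contents', []):
    print(obj['Key'])         # objects directly under the prefix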

How to determine the key of the file uploaded on S3?

I have a file uploaded to an S3 bucket in some folder hierarchy.
/a/b/c/file_i_want_to_stream.csv
Now, if this file were at the root level, I would know the key: the file name itself.
However, I am unable to determine the key when it is in some folder.
Amazon S3 does not actually have folders. It is a flat object storage system.
The hierarchy you see is actually part of the filename (Key) of the object.
Therefore, object /a/b/c/file.csv is stored in the root with a name of /a/b/c/file.csv. It simply appears to be in a directory hierarchy called /a/b/c/.
There are also features of Amazon S3 that make this easier to use, such as the concept of a CommonPrefix that is effectively a folder. So, when listing bucket contents, you can ask for a listing of all objects with a CommonPrefix of /a/b/c/.
Bottom line: The Key (filename) includes the path.
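So, to stream the file from the question, you pass that full path as the key. A minimal boto3 sketch (the bucket name is illustrative; keys normally have no leading slash unless the object was uploaded that way):
import boto3

s3 = boto3.client('s3')

# The key is simply the full "path" of the object.
response = s3.get_object(
    Bucket='my-bucket',                        # illustrative name
    Key='a/b/c/file_i_want_to_stream.csv',
)
for line in response['Body'].iter_lines():
    print(line)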