AWS S3 integration with Google Apps Script - amazon-web-services

I was trying to use the putObject() function of AWS S3 to put an object into a folder in a bucket. I was able to put the object into the bucket, but I am not able to specify the folder name in the function, so the object does not end up in the folder. Is there any way in which I can specify the folder name?

There are no separate folder names.
The object key is path + filename, so to upload cat.jpg into images/funny/ you upload the file as images/funny/cat.jpg.
Do not use a leading /.
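As a sketch of how the key carries the path, here is a small Go helper (Go just to match the snippet elsewhere on this page; the function name is my own) that joins a folder prefix and a filename and trims any leading slash:

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// objectKey builds the S3 object key for a file placed "inside" a folder.
// S3 has no real folders: the key itself carries the full path.
func objectKey(folder, filename string) string {
	key := path.Join(folder, filename)
	// A leading "/" would create an empty top-level "folder" in the console.
	return strings.TrimPrefix(key, "/")
}

func main() {
	fmt.Println(objectKey("images/funny", "cat.jpg"))  // images/funny/cat.jpg
	fmt.Println(objectKey("/images/funny", "cat.jpg")) // images/funny/cat.jpg
}
```

Whatever string this produces is what you pass as the object name to putObject(); there is no separate folder parameter.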

Related

how to create nested folders using aws cli in s3 bucket

In Linux, nested folders can be created with a single command, irrespective of whether the intermediate folders exist:
mkdir -p /home/user/some_non_existing_folder1/some_non_existing_folder2/somefolder
Similar to this, I want to create a nested folder structure in S3 and place my files there later.
How can I do this using the AWS CLI?
Folders do not actually exist in Amazon S3.
For example, if you have an empty bucket you could upload a file to invoices/january.txt and the invoices folder will magically 'appear' without needing to be specifically created.
Then, if you were to delete the invoices/january.txt object, then the invoices folder will magically 'disappear' (because it never actually existed).
This works because the Key (filename) of an Amazon S3 object contains the full path of the object. The above object is not called january.txt -- rather, it is called invoices/january.txt. The Amazon S3 console will make it appear as if the folder exists, but it doesn't.
If you click the Create folder button in the S3 management console, then a zero-length object is created with the name of the folder. This causes the folder to 'appear' because it contains a file, but the file will not appear (well, it does appear, but humans see it as a folder). If this zero-length object is deleted, the folder will 'disappear' (because it never actually existed).
Therefore, if you wish to create a directory hierarchy before uploading files, you could upload zero-length objects with the same names as the folders you want to create. You can use the aws s3 cp command to upload such a file.
Or, just upload the files to where you want them to appear, and the folders will magically appear automatically.
# create bucket
aws s3 mb s3://main_folder
# create nested folder
aws s3api put-object --bucket main_folder --key nested1/nested2/nested3/somefoldertosync
# sync my local folder to s3
aws s3 sync /home/ubuntu/somefoldertosync s3://main_folder/nested1/nested2/nested3/somefoldertosync
Currently I am using the above approach to carry on with my work.
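The zero-length placeholder described above is easy to recognise: the "Create folder" button just writes a 0-byte object whose key ends in "/". A hedged Go sketch of that convention (the Object struct is a stand-in for illustration, not the SDK type):

```go
package main

import (
	"fmt"
	"strings"
)

// Object is a minimal stand-in for an S3 object: just its key and size.
type Object struct {
	Key  string
	Size int64
}

// isFolderMarker reports whether an object looks like the zero-length
// placeholder the S3 console creates when you click "Create folder".
func isFolderMarker(o Object) bool {
	return o.Size == 0 && strings.HasSuffix(o.Key, "/")
}

func main() {
	fmt.Println(isFolderMarker(Object{Key: "invoices/", Size: 0}))             // true
	fmt.Println(isFolderMarker(Object{Key: "invoices/january.txt", Size: 42})) // false
}
```

Deleting the marker makes the "folder" vanish again, because nothing else refers to it.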

read file from s3 bucket

I'm trying to get a file from an S3 bucket with Golang. What's special about my request is that I need to get a file from the root of S3. That is, in my situation I have a buckets folder which is the root of S3; inside it I have folders and files. I need to get the files from that root folder, meaning I don't have a bucket folder because I only access the root.
The code I'm trying is:
numBytes, err := downloader.Download(file, &s3.GetObjectInput{
    Bucket: aws.String("/"),
    Key:    aws.String("some_image.jpeg"),
})
The problem is that I get an error saying the object does not exist.
Is it possible to read files from the root of S3? What do I need to pass as the bucket? Is the key written correctly?
Many thanks for helping!
All files in S3 are stored inside buckets. You're not able to store a file in the root of S3.
Each bucket is its own distinct namespace. You can have multiple buckets in your Amazon account, and each file must belong to one of those buckets.
You can create a bucket using the AWS web interface, the command-line tools, or the API (or third-party software like Cyberduck).
You can read more about buckets in S3 here: https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingBucket.html
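One way to see why Bucket: aws.String("/") can never work: bucket names follow DNS-style rules (roughly 3-63 characters; lowercase letters, digits, hyphens and dots; starting and ending with a letter or digit), so "/" is not a valid bucket at all. A simplified Go check of those rules (this is a sketch of the core pattern, not the complete official rule set):

```go
package main

import (
	"fmt"
	"regexp"
)

// bucketNameRe captures the core S3 naming rules: 3-63 chars, lowercase
// letters, digits, dots and hyphens, starting and ending alphanumeric.
// (Simplified: the full rules also forbid IP-address-like names, etc.)
var bucketNameRe = regexp.MustCompile(`^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$`)

func validBucketName(name string) bool {
	return bucketNameRe.MatchString(name)
}

func main() {
	fmt.Println(validBucketName("/"))         // false: "/" is not a bucket
	fmt.Println(validBucketName("my-bucket")) // true
}
```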

AWS S3 - The specified key does not exist. - for uploaded folders

When I upload a folder to an S3 bucket (drag and drop), I see that it is not considered an object - I can't get it with its key (GetObjectAsync()), and ListObjectsV2Async() doesn't return it (I'm using the .NET SDK).
When I create a folder under a bucket, I can get it and it appears in the list of bucket objects.
What is the reason for that?
Amazon S3 does not have the concept of a 'Directory' or a 'Folder'.
Instead, the full path of an object is stored in its Key (filename).
For example, an object can be stored in Amazon S3 with a Key of: invoices/2020-09/inv22.txt
This object can be created even if the invoices and 2020-09 directories do not exist. When viewed through the Amazon S3 console, it will appear as though those directories were automatically created, but if the object is deleted, those directories will disappear (because they never existed).
If a user clicks the "Create Folder" button in the Amazon S3 management console, a zero-length object is created with the same name as the folder. This 'forces' the folder to appear even if there are no objects 'inside' the folder. However, it is not actually a folder.
When using the ListObjects command while specifying a Delimiter of /, a list of CommonPrefixes is returned. This is equivalent to what you would normally consider a sub-directory.
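The CommonPrefixes behaviour can be sketched without calling S3 at all: group the keys on the first delimiter after the requested prefix. A hedged Go illustration (pure string logic mimicking the service, not the SDK call):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// commonPrefixes mimics ListObjectsV2 with Prefix and Delimiter="/": keys
// under prefix are grouped by their next path segment, like sub-directories.
func commonPrefixes(keys []string, prefix string) []string {
	seen := map[string]bool{}
	for _, k := range keys {
		if !strings.HasPrefix(k, prefix) {
			continue
		}
		rest := k[len(prefix):]
		if i := strings.Index(rest, "/"); i >= 0 {
			seen[prefix+rest[:i+1]] = true
		}
	}
	var out []string
	for p := range seen {
		out = append(out, p)
	}
	sort.Strings(out)
	return out
}

func main() {
	keys := []string{"invoices/2020-09/inv22.txt", "invoices/2020-10/inv01.txt", "readme.txt"}
	fmt.Println(commonPrefixes(keys, "invoices/")) // [invoices/2020-09/ invoices/2020-10/]
}
```

Nothing named invoices/2020-09/ exists as an object; the "sub-directories" fall out of the keys alone.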

Deploy Lambda function to S3 bucket subdirectory

I am trying to deploy a Lambda function to AWS from S3.
My organization currently does not provide the ability for me to upload files to the root of an S3 bucket, only to a folder (i.e. s3://application-code-bucket/Application1/).
Is there any way to deploy the Lambda function code through S3, from a directory other than the bucket root? I checked the documentation for Lambda's CreateFunction AWS command and could not find anything obvious.
You need to zip your Lambda package and upload it to S3; it can be in any folder.
You can then provide the HTTPS S3 URL of the file when creating the Lambda function.
The S3 bucket needs to be in the same region as the Lambda function.
Make sure you zip from inside the folder, i.e. when the package is unzipped, the files should be extracted into the same directory as the unzip command, and should not create a new directory for the contents.
I have this old script of mine that I used to automate lambda deployments.
It needs to be refactored a bit, but still usable.
It takes as input the Lambda name and the path to the zip file located locally on your PC.
It uploads the file to S3 and publishes it to AWS Lambda.
You need to set AWS credentials with IAM roles that allow:
S3 upload permission
AWS Lambda update permission
You need to modify the bucket name and the path you want your zip to be uploaded to. (lines 36-37).
That's it.

Can you specify an input/output folder (not bucket) for Elastic Transcoder?

I want to specify a folder within my S3 bucket for videos to be processed by Elastic Transcoder. I also want those output videos to be in a different folder within the same bucket. Is this possible?
This resource: http://docs.aws.amazon.com/elastictranscoder/latest/developerguide/job-settings.html#job-settings-general specifies that the input key can have a file prefix. Would the file prefix here signify a folder directory in a bucket?
In the documentation you linked, it specifies how to use a "key prefix" for both input and output files. A "key prefix" in S3 is analogous to a folder.
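When building such a prefix, make sure it ends with a slash and has no leading slash; otherwise outputs land next to, rather than inside, the intended path. A small hedged Go helper (the function name and example paths are my own, purely illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// normalizePrefix shapes a folder path into an S3 key prefix: no leading
// slash, exactly one trailing slash, so prefix+filename nests correctly.
func normalizePrefix(p string) string {
	p = strings.Trim(p, "/")
	if p == "" {
		return ""
	}
	return p + "/"
}

func main() {
	fmt.Println(normalizePrefix("outputs/videos") + "clip.mp4") // outputs/videos/clip.mp4
	fmt.Println(normalizePrefix("/inputs/") + "raw.mov")        // inputs/raw.mov
}
```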