I'm trying to get a file from an S3 bucket with Go. What's special about my request is that I need to get a file from the root of S3, i.e., in my situation I have a buckets folder that is the root of S3, and inside it I have folders and files. I need to get the files from that buckets folder, which means I don't have a bucket folder because I only access the root.
The code I'm trying is:
numBytes, err := downloader.Download(file, &s3.GetObjectInput{
    Bucket: aws.String("/"),
    Key:    aws.String("some_image.jpeg"),
})
The problem is that I get an error saying the object does not exist.
Is it possible to read files from the root of S3? What do I need to pass as the bucket? Is the key written correctly?
Many thanks for helping!
All files in S3 are stored inside buckets. You're not able to store a file in the root of S3.
Each bucket is its own distinct namespace. You can have multiple buckets in your Amazon account, and each file must belong to one of those buckets.
You can create a bucket using the AWS web interface, command-line tools, or the API (or third-party software like Cyberduck).
You can read more about buckets in S3 here: https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingBucket.html
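For example, assuming your image actually lives in a bucket named my-bucket (a hypothetical name), the download call would look roughly like this with the aws-sdk-go s3manager downloader:

package main

import (
	"fmt"
	"log"
	"os"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
	"github.com/aws/aws-sdk-go/service/s3/s3manager"
)

func main() {
	// Create a session; the region must match the bucket's region.
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	downloader := s3manager.NewDownloader(sess)

	// Local file that will receive the object's bytes.
	file, err := os.Create("some_image.jpeg")
	if err != nil {
		log.Fatal(err)
	}
	defer file.Close()

	// Bucket is the bucket name only (no leading slash); Key is the object key.
	numBytes, err := downloader.Download(file, &s3.GetObjectInput{
		Bucket: aws.String("my-bucket"),
		Key:    aws.String("some_image.jpeg"),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("downloaded", numBytes, "bytes")
}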
Related
I need to copy some S3 objects from a client. The client sent us the key and secret, and I can list the objects using the following command.
AWS_ACCESS_KEY_ID=.... AWS_SECRET_ACCESS_KEY=.... aws s3 ls s3://bucket/company4/
I will need to copy/sync s3://bucket/company4/ (which is very large) from our client's S3. In the question Copy content from one S3 bucket to another S3 bucket with different keys, it is mentioned that this can be done by creating a bucket policy on the destination bucket. However, we probably don't have permission to create a bucket policy, because we have limited AWS permissions in our company.
I know we can finish the job by copying the external files to the local file system first and then uploading them to our S3 bucket (a rough sketch of that two-step approach follows). Is there a more efficient way to do the work?
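For reference, a sketch of that two-step approach in Go with aws-sdk-go v1; the bucket names, object key, and the idea of reading the client's keys from environment variables are placeholders for illustration only:

package main

import (
	"bytes"
	"log"
	"os"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
	"github.com/aws/aws-sdk-go/service/s3/s3manager"
)

func main() {
	// Session with the client's credentials (read access to their bucket).
	clientSess := session.Must(session.NewSession(&aws.Config{
		Region:      aws.String("us-east-1"),
		Credentials: credentials.NewStaticCredentials(os.Getenv("CLIENT_KEY"), os.Getenv("CLIENT_SECRET"), ""),
	}))
	// Session with our own credentials (write access to our bucket).
	ourSess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))

	// Download one object from the client's bucket into memory.
	buf := aws.NewWriteAtBuffer([]byte{})
	_, err := s3manager.NewDownloader(clientSess).Download(buf, &s3.GetObjectInput{
		Bucket: aws.String("bucket"),
		Key:    aws.String("company4/example.csv"),
	})
	if err != nil {
		log.Fatal(err)
	}

	// Upload the same bytes into our bucket.
	_, err = s3manager.NewUploader(ourSess).Upload(&s3manager.UploadInput{
		Bucket: aws.String("our-bucket"),
		Key:    aws.String("company4/example.csv"),
		Body:   bytes.NewReader(buf.Bytes()),
	})
	if err != nil {
		log.Fatal(err)
	}
}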
I have been provided with the access key and secret key for an Amazon S3 container. No more details were provided, other than to drop some files into a specific folder.
I downloaded the Amazon CLI and also the Amazon SDK. So far, there seems to be no way for me to check the bucket name or list the folders where I'm supposed to drop my files. Every single command seems to require knowledge of a bucket name.
Trying to list with aws s3 ls gives me the error:
An error occurred (AccessDenied) when calling the ListBuckets operation: Access Denied
Is there a way to list the contents of my current location (I'm guessing the credentials I was given are linked directly to a bucket)? I'd like to see at least the folders where I'm supposed to drop my files, but the SDK client for the console app I'm building seems to always require a bucket name.
Was I provided incomplete info or limited rights?
Do you know the bucket name or not? If you don't, and you don't have permission for ListAllMyBuckets and GetBucketLocation on * and ListBucket on the bucket in question, then you can't get the bucket name. That's how it is supposed to work. If you know the bucket, then you can run aws s3 ls s3://bucket-name/ to list the objects in the bucket.
Note that S3 buckets don't have the concept of a "folder". It's user-interface "sugar" that makes it look like folders and files. Internally, there is just the key and the object.
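If you do know the bucket name, a minimal Go sketch of listing its keys looks like this (the bucket name and prefix are placeholders):

package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := s3.New(sess)

	// List keys under a "folder": the prefix is simply part of each key.
	resp, err := svc.ListObjectsV2(&s3.ListObjectsV2Input{
		Bucket: aws.String("bucket-name"),
		Prefix: aws.String("drop-folder/"),
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, obj := range resp.Contents {
		fmt.Println(*obj.Key)
	}
}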
It looks like it was just not possible without enhanced rights or the actual bucket name. I was able to procure both later on from the client and complete the task. Thanks for the comments.
I am trying to deploy a Lambda function to AWS from S3.
My organization currently does not provide the ability for me to upload files to the root of an S3 bucket, only to a folder (i.e. s3://application-code-bucket/Application1/).
Is there any way to deploy the Lambda function code through S3, from a directory other than the bucket root? I checked the documentation for Lambda's CreateFunction AWS command and could not find anything obvious.
You need to zip your Lambda package and upload it to S3; it can be in any folder.
You can then provide the S3 URL of the file when creating or updating the Lambda function.
The S3 bucket needs to be in the same region as the Lambda function.
Make sure you zip from inside the folder, i.e. when the package is unzipped, the files should be extracted into the same directory as the unzip command and should not create a new directory for the contents.
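For what it's worth, the S3 key passed to Lambda can include a folder prefix. A minimal sketch with the AWS SDK for Go, assuming an existing function called my-function and a zip uploaded to Application1/function.zip (both names are hypothetical):

package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/lambda"
)

func main() {
	// The bucket and the function must be in the same region.
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := lambda.New(sess)

	// S3Key can point to a zip inside a "folder" of the bucket.
	_, err := svc.UpdateFunctionCode(&lambda.UpdateFunctionCodeInput{
		FunctionName: aws.String("my-function"),
		S3Bucket:     aws.String("application-code-bucket"),
		S3Key:        aws.String("Application1/function.zip"),
	})
	if err != nil {
		log.Fatal(err)
	}
}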
I have this old script of mine that I used to automate Lambda deployments.
It needs to be refactored a bit, but it is still usable.
It takes as input the Lambda name and the path to the zip file located locally on your PC.
It uploads the zip to S3 and publishes it to AWS Lambda.
You need to set AWS credentials with an IAM role that allows:
S3 upload permission
AWS Lambda update permission
You need to modify the bucket name and the path where you want your zip to be uploaded (lines 36-37).
That's it.
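The script itself isn't reproduced here, but a rough Go sketch of the same two steps (upload the zip, then publish it to Lambda) might look like the following; the bucket, key, and function names are placeholders you would change:

package main

import (
	"log"
	"os"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/lambda"
	"github.com/aws/aws-sdk-go/service/s3/s3manager"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))

	// Step 1: upload the local zip to S3 (change the bucket name and key path).
	zipFile, err := os.Open("lambda.zip")
	if err != nil {
		log.Fatal(err)
	}
	defer zipFile.Close()

	_, err = s3manager.NewUploader(sess).Upload(&s3manager.UploadInput{
		Bucket: aws.String("my-deploy-bucket"),
		Key:    aws.String("lambda/lambda.zip"),
		Body:   zipFile,
	})
	if err != nil {
		log.Fatal(err)
	}

	// Step 2: point the Lambda function at the uploaded zip and publish a new version.
	_, err = lambda.New(sess).UpdateFunctionCode(&lambda.UpdateFunctionCodeInput{
		FunctionName: aws.String("my-lambda"),
		S3Bucket:     aws.String("my-deploy-bucket"),
		S3Key:        aws.String("lambda/lambda.zip"),
		Publish:      aws.Bool(true),
	})
	if err != nil {
		log.Fatal(err)
	}
}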
I have been trying to upload a static website to S3 with the following CLI command:
aws s3 sync . s3://my-website-bucket --acl public-read
It successfully uploads every file in the root directory but fails on the nested directories with the following:
An error occurred (InvalidRequest) when calling the ListObjects operation: Missing required header for this request: x-amz-content-sha256
I have found references to this issue on GitHub but no clear instructions on how to solve it.
The s3 sync command recursively copies local folders to folder-like S3 objects.
Even though S3 doesn't really support folders, the sync command creates S3 objects whose keys contain the folder names.
As reported on the following Amazon support thread, forums.aws.amazon.com/thread.jspa?threadID=235135, the issue should be solved by setting the region correctly.
S3 has no concept of directories.
S3 is an object store where each object is identified by a key.
The key might be a string like "dir1/dir2/dir3/test.txt"
AWS graphical user interfaces on top of S3 interpret the "/" characters as a directory separator and present the file list as if it were in a directory structure.
However, internally, there is no concept of directory, S3 has a flat namespace.
See http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html for more details.
This is the reason directories are not synced: there are no directories on S3.
There is also an open feature request at https://github.com/aws/aws-cli/issues/912, but it has not been implemented yet.
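To illustrate the flat namespace, here is a small Go sketch that creates an object whose key merely contains "/" characters (the bucket name is a placeholder); no directory is created, the slashes are just part of the key:

package main

import (
	"log"
	"strings"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := s3.New(sess)

	// "dir1/dir2/dir3/test.txt" is a single flat key; the console only
	// renders it as nested folders.
	_, err := svc.PutObject(&s3.PutObjectInput{
		Bucket: aws.String("my-bucket"),
		Key:    aws.String("dir1/dir2/dir3/test.txt"),
		Body:   strings.NewReader("hello"),
	})
	if err != nil {
		log.Fatal(err)
	}
}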
I have a bucket on Amazon S3 where I have uploaded certain files.
The page gets some public visits.
Is there any way to get all the visits in a log file, or can I download the log file from Amazon?
You can enable logging for an S3 bucket when the bucket is created, or after the bucket has been created. You need to specify the path where the log files will be stored. You can refer to http://docs.aws.amazon.com/AmazonS3/latest/UG/ManagingBucketLogging.html for the steps.
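Besides the console steps in that guide, logging can also be enabled programmatically. A minimal Go sketch (the bucket names and prefix are placeholders, and the target bucket must already allow the S3 log delivery service to write to it):

package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := s3.New(sess)

	// Enable server access logging: requests to my-site-bucket are written
	// as log files under logs/ in my-log-bucket.
	_, err := svc.PutBucketLogging(&s3.PutBucketLoggingInput{
		Bucket: aws.String("my-site-bucket"),
		BucketLoggingStatus: &s3.BucketLoggingStatus{
			LoggingEnabled: &s3.LoggingEnabled{
				TargetBucket: aws.String("my-log-bucket"),
				TargetPrefix: aws.String("logs/"),
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}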