I have created a bucket in AWS and a couple of IAM users. The IAM users are by default included in a group with read-only access. However, when I created my bucket I generated a policy granting specific IAM users access to list, put and get. Now I'm trying to run a simple command to put a file with one of these IAM users from my side:
aws s3api put-object --bucket <bucket name> --key poc/test.txt --body <windows path file>
The output is successful, meaning the files are always uploaded. However, when I look at the bucket in the AWS console I have to click on "Show" because every upload leaves the bucket content marked as hidden.
The account that I'm using to verify the uploaded files has manager access to S3, and I'm going through the web console. How should I upload the files without the hidden mark?
thanks
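For comparison, here is a hedged variant of the same put-object call that explicitly grants the bucket owner full control of the new object; it is only a sketch and assumes the visibility problem is ACL/ownership related, which may not be the case here (placeholders are the same as in the question):
# sketch only: same upload, with an explicit canned ACL handing the bucket owner full control
aws s3api put-object --bucket <bucket name> --key poc/test.txt --body <windows path file> --acl bucket-owner-full-control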
Trying to learn Terraform and running through some examples from their documentation. I am trying to move a file from my PC to an S3 bucket. I created the bucket with this command:
aws s3api create-bucket --bucket=terraform-serverless-example --region=us-east-1
If I then check the console or call s3api list-buckets I can see it exists and is functional. I can also upload using the console. However, when I try to run this command:
aws s3 cp example.zip s3://terraform-serverless-example/v1.0.0/example.zip
It returns this error:
upload failed: ./code.zip to s3://test-verson1/v1.0.0/code.zip An error occurred (NoSuchBucket) when calling the PutObject operation: The specified bucket does not exist
I have tried to configure the region by adding it as a flag, have made sure permissions are correct, and so on, and it's all set up as it should be. I can't figure out why this might be and would appreciate any help.
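A hedged debugging sketch, assuming the CLI's default region and the bucket's region disagree (the bucket name is the one from the question):
# check which region the CLI defaults to
aws configure get region
# ask S3 where the bucket actually lives (a null/None answer means us-east-1)
aws s3api get-bucket-location --bucket terraform-serverless-example
# then retry the upload with the region spelled out
aws s3 cp example.zip s3://terraform-serverless-example/v1.0.0/example.zip --region us-east-1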
I'm using Laravel and its built-in library to upload my images to AWS S3, but upon uploading I'm getting "Access Denied" when trying to display the images on my homepage.
I googled for solutions and followed them; my bucket has "Block Public Access" turned off in hopes of allowing public access to my existing and newly uploaded images.
My expected output is to show the images on my webpage, whether they're an existing image or a newly uploaded one.
Not sure if this is something you have already tried, but it's worth a shot; from the documentation:
Update the object's ACL using the AWS CLI
For an object that's already stored on Amazon S3, you can run this command to update its ACL for public read access:
aws s3api put-object-acl --bucket awsexamplebucket --key exampleobject --acl public-read
Or, you can run this command to grant full control of the object to the AWS account owner, and read access to everyone else:
aws s3api put-object-acl --bucket awsexamplebucket --key exampleobject --grant-full-control emailaddress=accountowneremail@emaildomain.com --grant-read uri=http://acs.amazonaws.com/groups/global/AllUsers
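Per-object ACLs only cover objects that already exist. A hedged alternative, assuming "Block Public Access" really is off and using the same placeholder bucket name, is a bucket policy that makes every object publicly readable, so newly uploaded images are covered as well:
# sketch: write a public-read bucket policy and attach it
cat > public-read-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "PublicReadGetObject",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::awsexamplebucket/*"
  }]
}
EOF
aws s3api put-bucket-policy --bucket awsexamplebucket --policy file://public-read-policy.json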
I am trying to copy a file from S3 to an EC2 instance; here is the strange behavior.
The following command runs perfectly fine and shows me the contents of S3 that I want to access:
$ aws s3 ls
2016-05-05 07:40:57 folder1
2016-05-07 15:04:42 my-folder
Then I issue the following command (also successful):
$ aws s3 ls s3://my-folder
2016-05-07 16:44:50 6007 myfile.txt
But when I try to copy this file, I receive an error as follows:
$ aws s3 cp s3://my-folder/myfile.txt ./
A region must be specified --region or specifying the region in a
configuration file or as an environment variable. Alternately, an
endpoint can be specified with --endpoint-url
I simply want to copy txt file from s3 to ec2 instance.
At the very least, how should I modify the above command so it copies the contents? I am not sure about the region, as when I visit S3 from the web it says
"S3 does not require region selection"
What on earth is happening?
Most likely something is not working right; you should not be able to list the bucket if your region is not set up as the default in aws configure.
Therefore, from my experience with S3, if this works:
aws s3 ls s3://my-folder
then this should work as well:
aws s3 cp s3://my-folder/myfile.txt ./
However, if it's asking you for a region, then you need to provide it.
Try this to get the bucket region:
aws s3api get-bucket-location --bucket BUCKET
And then this to copy the file:
aws s3 cp --region <your_buckets_region> s3://my-folder/myfile.txt ./
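Putting the two together, a small sketch (using the bucket and key from the question) that looks up the region once and reuses it:
# look up the bucket's region; buckets in us-east-1 report a null LocationConstraint,
# which --output text prints as "None"
REGION=$(aws s3api get-bucket-location --bucket my-folder --query LocationConstraint --output text)
[ "$REGION" = "None" ] && REGION=us-east-1
aws s3 cp --region "$REGION" s3://my-folder/myfile.txt ./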
If I visit S3 from the web it says
"S3 does not require region selection"
S3 and bucket regions can be very confusing, especially with that message; IMO it is the most misleading information ever when it comes to S3 regions. Every bucket has a specific region (the default is us-east-1) unless you have enabled cross-region replication.
You can choose a region to optimize latency, minimize costs, or address regulatory requirements. Objects stored in a region never leave that region unless you explicitly transfer them to another region. For more information about regions, see Accessing a Bucket in the Amazon Simple Storage Service Developer Guide.
How about
aws s3 cp s3://my-folder/myfile.txt .
# or
aws s3 cp s3://my-folder/myfile.txt myfile.txt
I suspect the problem is something to do with the local path parser (the aws cli s3 fileformat parser).
It is kind of strange, because the AWS CLI does read the credential and region config.
The fix is to specify the region, as below; see the earlier answer for how to get the bucket region if you can't get it from the CLI.
aws s3 cp s3://xxxxyyyyy/2008-Nissan-Sentra.pdf myfile.pdf --region us-west-2
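If passing --region on every command gets tedious, a hedged alternative is to set it once in the CLI config or in the environment (us-west-2 is just the region from the example above):
# persist a default region in ~/.aws/config
aws configure set region us-west-2
# or override it for the current shell only
export AWS_DEFAULT_REGION=us-west-2
aws s3 cp s3://xxxxyyyyy/2008-Nissan-Sentra.pdf myfile.pdf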
I need to get the contents of one S3 bucket into another S3 bucket.
The buckets are in two different accounts.
I was told not to create a policy to allow access to the destination bucket from the origin bucket.
Using the AWS CLI how can I download all the contents of the origin bucket, and then upload the contents to the destination bucket?
To copy locally:
aws s3 sync s3://origin /local/path
To copy to destination bucket:
aws s3 sync /local/path s3://destination
The aws cli allows you to configure named profiles, which let you use a different set of credentials for each individual cli command. This will be helpful because your buckets are in different accounts.
To create your named profiles you'll need to make sure you already have IAM users in each of your accounts, and each user will need a set of access keys. Create your two named profiles like this:
aws configure --profile profile1
aws configure --profile profile2
Each of those commands will ask you for your access keys and a default region to use. Once you have your two profiles, use the aws cli like this:
aws s3 cp s3://origin /local/path --recursive --profile profile1
aws s3 cp /local/path s3://destination --recursive --profile profile2
Notice that you can use the --profile parameter to tell the cli which set of credentials to use for each command.
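Combined with the sync commands from the answer above, the whole transfer might look like this sketch ("origin", "destination", "profile1" and "profile2" are placeholders):
# stage the origin bucket locally with the first account's credentials,
# then push it to the destination bucket with the second account's credentials
mkdir -p /tmp/s3-transfer
aws s3 sync s3://origin /tmp/s3-transfer --profile profile1
aws s3 sync /tmp/s3-transfer s3://destination --profile profile2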
I need to download files recursively from an S3 bucket. The S3 bucket allows anonymous access.
How can I list the files and download them without providing an AWS access key, i.e. as an anonymous user?
My command is:
aws s3 cp s3://anonymous@big-data-benchmark/pavlo/text/tiny/rankings/uservisits uservisit --region us-east --recursive
The aws cli complains that:
Unable to locate credentials. You can configure credentials by running "aws configure"
You can use the --no-sign-request option:
aws s3 cp s3://anonymous@big-data-benchmark/pavlo/text/tiny/rankings/uservisits uservisit --region us-east --recursive --no-sign-request
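The same flag works for listing and for sync, so a hedged variant without the anonymous@ prefix (and with a full region name) would be:
# list the prefix without credentials
aws s3 ls s3://big-data-benchmark/pavlo/text/tiny/rankings/uservisits/ --no-sign-request
# download the whole prefix; the region must be a full name like us-east-1, not "us-east"
aws s3 sync s3://big-data-benchmark/pavlo/text/tiny/rankings/uservisits uservisit --region us-east-1 --no-sign-request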
You probably have to provide an access key and secret key, even if you're doing anonymous access; I don't see an option for anonymous access in the AWS cli.
Another way to do this is to hit the HTTP endpoint and grab the files that way.
In your case: http://big-data-benchmark.s3.amazonaws.com
You will get an XML listing of all the keys in the bucket. You can extract the keys and issue a request for each. Not the fastest thing out there, but it will get the job done.
For example: http://big-data-benchmark.s3.amazonaws.com/pavlo/sequence-snappy/5nodes/crawl/000741_0
For getting the files, curl should be enough. For parsing the XML, depending on what you like, you can go as low-level as sed or as high-level as a proper language.
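A rough sketch along those lines, assuming a public bucket and keeping in mind that a single list call returns at most 1,000 keys (paging via the marker parameter is left out):
# list keys under a prefix over plain HTTP, then fetch each one with curl
BASE=http://big-data-benchmark.s3.amazonaws.com
PREFIX=pavlo/text/tiny/rankings/uservisits/
curl -s "$BASE/?prefix=$PREFIX" |
  grep -o '<Key>[^<]*</Key>' | sed -e 's|<Key>||' -e 's|</Key>||' |
  while read -r key; do
    mkdir -p "$(dirname "$key")"
    curl -s -o "$key" "$BASE/$key"
  done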
Hope this helps.