Invalid Endpoint When Attempting to Create S3 via AWSCLI - amazon-web-services

I am attempting to create an S3 bucket in AWS via the AWS CLI, but I keep getting the error:
Invalid endpoint: https://s3.[us-east-1].amazonaws.com
AFAIK, from all the reading and searching, I have configured everything correctly (note that I wiped out the access keys so as not to broadcast my private info online).
I haven't seen this in the research I've done, but it seems to add the [] around the region by default, which may be throwing it off. I don't know how to change that, if that is the issue.
For example, the brackets show up in the error message, and if I re-run aws configure, it displays Default region name [[us-east-1]].
I have also tried this, which fails, and it makes no sense to me why:
Anyone have any tips on what I am missing?

I just created a new S3 bucket using the AWS CLI with the command below:
aws s3api create-bucket --bucket test-ami-test --region us-east-1
(here test-ami-test is my bucket name).
According to the AWS docs (http://docs.aws.amazon.com/cli/latest/reference/s3api/create-bucket.html), the command below is used to create a bucket:
aws s3api create-bucket --bucket my-bucket --region us-east-1
Also, can you run the aws configure list command and check that the output matches the one specified at http://docs.aws.amazon.com/cli/latest/reference/configure/list.html?
I think your AWS CLI is configured properly; it's just that you are not using the correct command to create a bucket.
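That said, if the stored region really does contain literal brackets (the [[us-east-1]] prompt in the question suggests the saved value is [us-east-1]), the quickest fix is aws configure set region us-east-1. The sketch below only demonstrates the cleanup against a throwaway file; the real file is ~/.aws/config.

```shell
# Demonstrate the fix on a throwaway config file (assumption: the standard
# INI layout that `aws configure` writes to ~/.aws/config).
cfg=$(mktemp)
printf '[default]\nregion = [us-east-1]\n' > "$cfg"

# Strip stray square brackets from the region value only; section headers
# like [default] are untouched because the pattern requires "region =".
sed -i.bak -E 's/^(region *= *)\[(.*)\]$/\1\2/' "$cfg"

grep '^region' "$cfg"   # region = us-east-1
```

In practice you would run the same sed against ~/.aws/config, or simply re-enter the region without brackets via aws configure.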

Related

Unable to download files from s3 using vpc endpoint

So I have created a VPC Interface endpoint for S3.
I have tried running the below command:
aws s3api get-object --bucket <bucketname> --key <file_key> <local_file_name> --endpoint-url https://<bucketname>.bucket.s3.<region>.amazonaws.com
and it works fine.
Then I tried to update my /etc/hosts file with the 2 subnet IPs of my VPC like below:
<subnet-ip-1> s3.<region>.amazonaws.com
<subnet-ip-2> s3.<region>.amazonaws.com
And then I do an aws s3 ls. Works fine: I can see all my buckets.
Then I run the command below (the first command, but without --endpoint-url):
aws s3api get-object --bucket <bucketname> --key <file_key> <local_file_name>
Timeout error.
I try -
aws s3 cp s3://<bucketname>/<filekey> <local_file_name>
Timeout again..
Then I updated my /etc/hosts file like below:
<subnet-ip-1> <bucketname>.s3.<region>.amazonaws.com
<subnet-ip-2> <bucketname>.s3.<region>.amazonaws.com
And I ran the command:
aws s3 cp s3://<bucketname>/<filekey> <local_file_name>
It works!!
But I can't add bucket names like this because I have almost 100 such buckets. And I cannot use an endpoint URL in my S3 commands because I'm setting this up for Greengrass, and Greengrass won't allow me to configure an S3 endpoint. So it has to work like this: Greengrass will call its bucket commands at s3.<region>.amazonaws.com, but they should be redirected to my VPC endpoint by default.
How do I set that up?
Note: I do not and cannot have internet access.
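The per-bucket /etc/hosts mapping that worked above could at least be generated mechanically rather than typed by hand for 100 buckets. A minimal sketch with placeholder IPs, bucket names, and region (in a live setup the bucket list would come from aws s3api list-buckets):

```shell
# Sketch: emit one hosts line per (subnet IP, bucket) pair.
# All values below are placeholders, not real infrastructure.
REGION="us-east-1"
IPS="10.0.1.5 10.0.2.5"
BUCKETS="bucket-a bucket-b"

for b in $BUCKETS; do
  for ip in $IPS; do
    printf '%s %s.s3.%s.amazonaws.com\n' "$ip" "$b" "$REGION"
  done
done
```

Appending the output to /etc/hosts needs root, and it has to be regenerated whenever buckets change; a private Route 53 hosted zone is the commonly suggested way to avoid per-bucket entries, though whether that fits a no-internet Greengrass setup is a separate question.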

AWS CLI list-policies to find a policy with a specific name

I am trying to locate a policy in AWS with a specific name via the AWS CLI. I tried get-policy first, but it threw an error. Now I am trying list-policies and putting in a prefix. According to the documentation, if I start and end the string with a forward slash it should search, but it hasn't been working; I get an empty array back... any ideas?
aws iam list-policies --scope Local --path-prefix /policyname.xyz/
It is an issue with AWS CLI v2.
The issue has been open on the GitHub repository of the AWS SDK since 11 Jan.
You can check the details here:
https://github.com/aws/aws-sdk/issues/36
Complete list of issues:
https://github.com/aws/aws-sdk/issues
You can use the --query flag. For example, for an exact search:
aws iam list-policies --query 'Policies[?PolicyName==`policyname.xyz`]'
If you want a more flexible search, you can refer to https://jmespath.org/specification.html for other functions, for example matching names that start with policynamexxx:
aws iam list-policies --query 'Policies[?starts_with(PolicyName,`policynamexxx`)]'
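What starts_with selects can be illustrated offline; the sketch below uses an anchored grep over a canned list of hypothetical policy names as a stand-in for the JMESPath filter (no AWS call involved):

```shell
# Canned, hypothetical policy names standing in for list-policies output.
policies='policynamexxx-admin
policynamexxx-readonly
unrelated-policy'

# An anchored grep mirrors starts_with(PolicyName, `policynamexxx`):
# only names beginning with the prefix survive.
echo "$policies" | grep '^policynamexxx'
```

Note also that the backtick literals in the --query expression must be protected from the shell with single quotes, or the shell will treat them as command substitution.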

AWS CLI returning No Such Bucket exists but I can see the bucket in the console

Trying to learn Terraform and running through some examples from their documentation. I am trying to move a file from my PC to an S3 bucket. I created this bucket with the command:
aws s3api create-bucket --bucket=terraform-serverless-example --region=us-east-1
If I then check the console or call s3api list-buckets, I can see that it exists and is functional. I can also upload using the console. However, when I try to run this command:
aws s3 cp example.zip s3://terraform-serverless-example/v1.0.0/example.zip
It returns this error:
upload failed: ./code.zip to s3://test-verson1/v1.0.0/code.zip An error occurred (NoSuchBucket) when calling the PutObject operation: The specified bucket does not exist
I have tried to configure the region by adding it as a flag, have made sure permissions are correct, and so on, and it's all set up as it should be. I can't figure out why this might be and would appreciate any help.

aws cli signature version 4

I want to move all my data from Bucket1 of account A to Bucket2 of account B.
For this:
I downloaded AWS CLI for Windows.
Entered IAM credentials using command aws configure (these credentials are from account B)
Run command to sync buckets: aws s3 sync s3://Bucket1 s3://Bucket2
I received following error:
fatal error: An error occurred (InvalidRequest) when calling the ListObjects operation: You are attempting to operate on a bucket in a region that requires Signature Version 4. You can fix this issue by explicitly providing the correct region location using the --region argument, the AWS_DEFAULT_REGION environment variable, or the region variable in the AWS CLI configuration file. You can get the bucket's location by running "aws s3api get-bucket-location --bucket BUCKET".
How to tackle this error?
aws --version
aws-cli/1.11.61 Python/2.7.9 windows/8 botocore/1.5.24
My S3 URL was like https://console.aws.amazon.com/s3/home?region=us-east-1,
so I supposed that us-east-1 was my region, but actually it was not!
I used an AWS command to find Bucket2's region, and it told me a different region.
Then I used the command aws s3 sync s3://Bucket1 s3://Bucket2 --region ap-south-1 (the region code for Asia Pacific (Mumbai)) and everything worked fine!
Look for the correct region of the bucket.
Try the command below, specifying the correct region:
aws s3 ls --region us-west-2
S3 is global; don't let that mislead you.
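As the sync example above shows, --region expects the region code (e.g. ap-south-1), not the console's display name like "Asia Pacific (Mumbai)". A tiny lookup sketch for a few regions (the list is illustrative, not complete):

```shell
# Map a console display name to the region code that --region expects.
# Only a handful of regions shown; extend as needed.
region_code() {
  case "$1" in
    "US East (N. Virginia)")  echo "us-east-1" ;;
    "US West (Oregon)")       echo "us-west-2" ;;
    "Asia Pacific (Mumbai)")  echo "ap-south-1" ;;
    "EU (Ireland)")           echo "eu-west-1" ;;
    *)                        echo "unknown" ;;
  esac
}

region_code "Asia Pacific (Mumbai)"   # ap-south-1
```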

Unable to copy from S3 to Ec2 instance

I am trying to copy a file from S3 to an EC2 instance; here is the strange behavior.
The following command runs perfectly fine and shows me the contents of S3 that I want to access:
$aws s3 ls
2016-05-05 07:40:57 folder1
2016-05-07 15:04:42 my-folder
Then I issue the following command (also successful):
$ aws s3 ls s3://my-folder
2016-05-07 16:44:50 6007 myfile.txt
but when I try to copy this file, I receive an error as follows:
$aws s3 cp s3://my-folder/myfile.txt ./
A region must be specified --region or specifying the region in a configuration file or as an environment variable. Alternately, an endpoint can be specified with --endpoint-url
I simply want to copy a txt file from S3 to an EC2 instance. At the least, how should I modify the above command so it copies the file? I am not sure about the region, since if I visit S3 from the web it says
"S3 does not require region selection".
What on earth is happening?
Most likely something is not working right; you should not be able to list the bucket if its region is not set up as the default in aws configure.
From my experience with S3, if this works:
aws s3 ls s3://my-folder
then this should work as well:
aws s3 cp s3://my-folder/myfile.txt ./
However, if it's asking you for a region, then you need to provide it.
Try this to get the bucket region:
aws s3api get-bucket-location --bucket BUCKET
And then this to copy the file:
aws s3 cp --region <your_buckets_region> s3://my-folder/myfile.txt ./
If I visit S3 from web it says
"S3 does not require region selection"
S3 and bucket regions can be very confusing, especially with that message; IMO it is the most misleading information ever when it comes to S3 regions. Every bucket has a specific region (the default is us-east-1) unless you have enabled cross-region replication.
You can choose a region to optimize latency, minimize costs, or
address regulatory requirements. Objects stored in a region never
leave that region unless you explicitly transfer them to another
region. For more information about regions, see Accessing a Bucket in
the Amazon Simple Storage Service Developer Guide.
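One quirk worth knowing when using aws s3api get-bucket-location as suggested above: for buckets in us-east-1, the LocationConstraint comes back null (printed as None with --output text), so a script has to normalize it before passing it to --region. A sketch, assuming text output:

```shell
# Normalize get-bucket-location output: us-east-1 buckets report a null
# LocationConstraint ("None" in --output text, "null" in JSON output).
normalize_region() {
  case "$1" in
    ""|None|null) echo "us-east-1" ;;
    *)            echo "$1" ;;
  esac
}

normalize_region "None"       # us-east-1
normalize_region "eu-west-1"  # eu-west-1
```

With a live CLI this would be wired up as region=$(normalize_region "$(aws s3api get-bucket-location --bucket my-folder --output text)").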
How about
aws s3 cp s3://my-folder/myfile.txt .
# or
aws s3 cp s3://my-folder/myfile.txt myfile.txt
I suspect the problem is something to do with the local path parser (the AWS CLI S3 file-format parser).
It is kind of strange, because the AWS CLI does read the credential and region config.
The fix is specifying the region, as below; you can get the bucket's region with aws s3api get-bucket-location if you don't already know it.
aws s3 cp s3://xxxxyyyyy/2008-Nissan-Sentra.pdf myfile.pdf --region us-west-2