Unable to download files from s3 using vpc endpoint - amazon-web-services

So I have created a VPC Interface endpoint for S3.
I have tried running the below command:
aws s3api get-object --bucket <bucketname> --key <file_key> <local_file_name> --endpoint-url https://<bucketname>.bucket.s3.<region>.amazonaws.com
and it works fine.
Then I tried updating my /etc/hosts file with the two subnet IPs of my VPC endpoint, like below:
<subnet-ip-1> s3.<region>.amazonaws.com
<subnet-ip-2> s3.<region>.amazonaws.com
And then I run aws s3 ls. Works fine; I can see all my buckets.
Then I run the first command again, this time without --endpoint-url:
aws s3api get-object --bucket <bucketname> --key <file_key> <local_file_name>
Timeout error.
I try -
aws s3 cp s3://<bucketname>/<filekey> <local_file_name>
Timeout again.
Then I updated my /etc/hosts file like below:
<subnet-ip-1> <bucketname>.s3.<region>.amazonaws.com
<subnet-ip-2> <bucketname>.s3.<region>.amazonaws.com
And I ran the command:
aws s3 cp s3://<bucketname>/<filekey> <local_file_name>
It works!!
But I can't add entries like this for every bucket, because I have almost 100 such buckets. And I can't use --endpoint-url in my s3 commands, because I'm setting this up for Greengrass, and Greengrass won't let me configure an S3 endpoint. So Greengrass has to call S3 at s3.<region>.amazonaws.com, but those calls should be redirected to my VPC endpoint by default.
How do I set that up?
Note: I do not and cannot have internet access.
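Since the per-bucket /etc/hosts entries do work, one stopgap (a sketch, not a Greengrass-specific fix) is to generate those entries for all ~100 buckets instead of typing them by hand. The placeholders below are the same ones used in the question; the loop body is an assumption about how you'd want the file laid out:

```shell
#!/bin/sh
# Sketch: emit one /etc/hosts line per bucket for each endpoint ENI IP.
# <subnet-ip-1>, <subnet-ip-2>, <region> are the placeholders from the question.
ENDPOINT_IPS="<subnet-ip-1> <subnet-ip-2>"
REGION="<region>"

# list-buckets itself must resolve, so the plain s3.<region>.amazonaws.com
# hosts entries from earlier in the question need to be in place already.
aws s3api list-buckets --query 'Buckets[].Name' --output text | tr '\t' '\n' |
while read -r bucket; do
  for ip in $ENDPOINT_IPS; do
    printf '%s %s.s3.%s.amazonaws.com\n' "$ip" "$bucket" "$REGION"
  done
done >> /etc/hosts
```

This would need re-running whenever a bucket is added, so it's a workaround rather than the "redirect by default" behavior the question asks for.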

Related

AWS S3 cli not working with endpoint urls

I am attempting to use Wasabi, but the aws s3 CLI seems to ignore --endpoint-url whenever I specify s3://my-wasabi-bucket.
For example,
aws s3 ls --endpoint-url=https://s3.wasabisys.com --profile wasabi
will list my Wasabi buckets, but when I do
aws s3 ls --endpoint-url=https://s3.wasabisys.com --profile wasabi s3://my-bucket
where my-bucket is a bucket that was in the list from above, I get
Could not connect to the endpoint URL: "https://s3.us-central-1.amazonaws.com/my-bucket?list-type=2&prefix=&delimiter=%2F&encoding-type=url"
Looks like I was using the wrong endpoint URL; my bucket is hosted in Wasabi's us-central-1, so I needed to use --endpoint-url=https://s3.us-central-1.wasabisys.com
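As an aside, recent AWS CLI v2 releases (2.13 and later) can pin the endpoint per profile in ~/.aws/config, which avoids repeating --endpoint-url on every command. A sketch, assuming the same wasabi profile and region as above:

```ini
[profile wasabi]
region = us-central-1
endpoint_url = https://s3.us-central-1.wasabisys.com
```

With that in place, aws s3 ls --profile wasabi s3://my-bucket should target Wasabi without the flag.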

Invalid Endpoint When Attempting to Create S3 via AWSCLI

I am attempting to create an s3 bucket in aws via awscli but keep getting the error:
Invalid endpoint: https://s3.[us-east-1].amazonaws.com
AFAIK, from all the reading and searching I've done, I have configured everything correctly (note that I wiped out the access keys below so as not to broadcast my private info online).
I haven't seen this in the research I've done, but the CLI seems to add the [] around the region by default, which may be throwing it off; I don't know how to change that, if that is the issue.
For example, the brackets show up in the error message, and if I re-run aws configure, it displays Default region name [[us-east-1]].
I have also tried this, which fails, and makes no sense to me why not:
Anyone have any tips on what I am missing?
I just created a new S3 bucket using the AWS CLI with the command below:
aws s3api create-bucket --bucket test-ami-test --region us-east-1
Here test-ami-test is my bucket name.
According to the AWS docs at http://docs.aws.amazon.com/cli/latest/reference/s3api/create-bucket.html, the command below is used to create a bucket:
aws s3api create-bucket --bucket my-bucket --region us-east-1
Also, can you run the aws configure list command and check that the output matches the one specified in http://docs.aws.amazon.com/cli/latest/reference/configure/list.html?
I think your AWS CLI is configured properly; it's just that you are not using the correct command to create a bucket.
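One wrinkle the docs command glosses over: outside us-east-1, create-bucket also needs a matching LocationConstraint, or S3 rejects the call. A sketch (eu-west-1 is just an illustrative region, not from the question):

```shell
# In us-east-1 the plain form works:
aws s3api create-bucket --bucket my-bucket --region us-east-1

# For any other region, add a LocationConstraint that matches --region:
REGION=eu-west-1
aws s3api create-bucket \
  --bucket my-bucket \
  --region "$REGION" \
  --create-bucket-configuration "LocationConstraint=$REGION"
```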

How to delete aws s3 bucket in aws cli

I have the following S3 bucket; see the image below:
I know how to delete them on the page, but how can I delete them using the AWS CLI?
I tried using
aws s3api delete-objects --bucket elasticbeanstalk-ap-southeast-1-613285248276
but it won't work.
You can use the 'rb' option, e.g.:
E.g. aws s3 rb s3://elasticbeanstalk-ap-southeast-1-613285248276
If the bucket is not empty, then use the --force flag.
The following command deletes the bucket from S3:
aws s3api delete-bucket --bucket "your-bucket-name" --region "your-region"
I delete S3 buckets with the following command, without the extra hassle of emptying them first:
aws s3 rb s3://<BUCKET-NAME> --force
for example
aws s3 rb s3://alok.guha.myversioningbucket --force
PS: Make sure you have programmatic access with the "AmazonS3FullAccess" policy.
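One caveat worth knowing: rb --force only removes current objects, so on a bucket with versioning enabled the delete can still fail until every object version and delete marker is gone. A sketch of one way to clear them first (the bucket name reuses the example above; the sketch assumes fewer than 1000 versions, since delete-objects takes batches of up to 1000):

```shell
BUCKET=alok.guha.myversioningbucket

# Delete all object versions (assumes the bucket actually has versions;
# with an empty list, delete-objects would reject the request).
aws s3api delete-objects --bucket "$BUCKET" --delete "$(
  aws s3api list-object-versions --bucket "$BUCKET" \
    --query '{Objects: Versions[].{Key:Key,VersionId:VersionId}}' --output json)"

# Delete all delete markers the same way.
aws s3api delete-objects --bucket "$BUCKET" --delete "$(
  aws s3api list-object-versions --bucket "$BUCKET" \
    --query '{Objects: DeleteMarkers[].{Key:Key,VersionId:VersionId}}' --output json)"

# Now the bucket is empty and can be removed.
aws s3 rb "s3://$BUCKET"
```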

aws cli signature version 4

I want to move all my data from Bucket1 of account A to Bucket2 of account B.
For this:
I downloaded AWS CLI for Windows.
Entered IAM credentials using command aws configure (these credentials are from account B)
Ran the command to sync the buckets: aws s3 sync s3://Bucket1 s3://Bucket2
I received following error:
fatal error: An error occured (InvalidRequest) when calling the ListObject operation: You are attempting to operate on a bucket in a region that requires Signature Version 4. You can fix this issue by explicitly providing the correct region location using the --region argument, the AWS_DEFAULT_REGION environment variable, or the region variable in the AWS CLI configuration file. You can get the bucket's location by running "aws s3api get-bucket-location --bucket BUCKET".
How to tackle this error?
aws --version
aws-cli/1.11.61 Python/2.7.9 windows/8 botocore/1.5.24
My S3 URL was like https://console.aws.amazon.com/s3/home?region=us-east-1,
so I supposed that us-east-1 was my region, but actually it was not!
I used the AWS CLI to find Bucket2's region, and it told me a different region.
Then I used the command aws s3 sync s3://Bucket1 s3://Bucket2 --region ap-south-1 (the region code for Asia Pacific (Mumbai); the CLI expects the code, not the display name) and everything worked fine!
Look for the correct region of the bucket (see attached image).
Try the command below by specifying the correct region:
aws s3 ls --region us-west-2
S3 is global - don't let that mislead you.
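The two steps above (look up the bucket's region, then pass it explicitly) can be chained. Note that get-bucket-location prints None for buckets in us-east-1, so that case needs a fallback; a sketch using the bucket names from the question:

```shell
# Resolve the destination bucket's real region.
region=$(aws s3api get-bucket-location --bucket Bucket2 \
           --query LocationConstraint --output text)

# us-east-1 buckets report a null LocationConstraint ("None" as text).
[ "$region" = "None" ] && region=us-east-1

aws s3 sync s3://Bucket1 s3://Bucket2 --region "$region"
```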

Unable to copy from S3 to Ec2 instance

I am trying to copy a file from S3 to an EC2 instance. Here is the strange behavior:
The following command runs perfectly fine and shows me the contents of S3 that I want to access:
$ aws s3 ls
2016-05-05 07:40:57 folder1
2016-05-07 15:04:42 my-folder
Then I issue the following command (also successful):
$ aws s3 ls s3://my-folder
2016-05-07 16:44:50 6007 myfile.txt
But when I try to copy this file, I receive an error as follows:
$ aws s3 cp s3://my-folder/myfile.txt ./
A region must be specified --region or specifying the region in a
configuration file or as an environment variable. Alternately, an
endpoint can be specified with --endpoint-url
I simply want to copy a txt file from S3 to the EC2 instance, or at least to modify the above command so it copies the contents. I am not sure about the region; if I visit S3 on the web, it says
"S3 does not require region selection"
What on earth is happening?
Most likely something is not working right; you should not be able to list the bucket if your region is not set up as the default in aws configure.
Therefore, from my experience with S3, if this works:
aws s3 ls s3://my-folder
then this should work as well:
aws s3 cp s3://my-folder/myfile.txt ./
However, if it's asking you for a region, then you need to provide it.
Try this to get the bucket region:
aws s3api get-bucket-location --bucket BUCKET
And then this to copy the file:
aws s3 cp --region <your_buckets_region> s3://my-folder/myfile.txt ./
If I visit S3 from the web it says
"S3 does not require region selection"
S3 and bucket regions can be very confusing, especially with that message; IMO it is the most misleading information ever when it comes to S3 regions. Every bucket has a specific region (the default is us-east-1) unless you have enabled cross-region replication.
You can choose a region to optimize latency, minimize costs, or address regulatory requirements. Objects stored in a region never leave that region unless you explicitly transfer them to another region. For more information about regions, see Accessing a Bucket in the Amazon Simple Storage Service Developer Guide.
How about
aws s3 cp s3://my-folder/myfile.txt .
# or
aws s3 cp s3://my-folder/myfile.txt myfile.txt
I suspect the problem is something to do with the local path parser.
It is kind of strange, because the AWS CLI does read the credential and region config.
The fix is specifying the region, as below; if you can't get the bucket's region from the CLI, you can look it up in the console.
aws s3 cp s3://xxxxyyyyy/2008-Nissan-Sentra.pdf myfile.pdf --region us-west-2