AWS CLI: Could not connect to the endpoint URL - amazon-web-services

Was able to set up a pull from an S3 bucket on a Mac seamlessly, but have been struggling with an identical process on a PC (Windows). Here is what I have done -- any help along the way would be much appreciated.
Installed awscli using pip
Ran aws configure in the command prompt and entered the proper access key ID and secret access key.
Ran the S3 command: G:\>aws s3 cp --recursive s3://url-index-given/ . (where the actual URL is replaced with url-index-given for example purposes).
And got this error:
fatal error: Could not connect to the endpoint URL: "https://url-index-given.s3.None.amazonaws.com/?list-type=2&prefix=&encoding-type=url"
I have tried uninstalling the awscli package and followed this process recommended by Amazon without any errors.

The error indicates that you have given an invalid value for Region when using aws configure. (See the None in the URL? That is where the Region normally goes.)
You should run aws configure again and give it a valid region (e.g. us-west-2).
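If you only need to fix the region and want to keep the keys you already entered, you can also set it directly (us-west-2 here is just an example; use whichever region your bucket actually lives in):
aws configure set region us-west-2
This writes the region into the [default] profile of your ~/.aws/config file, along the lines of:
[default]
region = us-west-2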

Related

gsutil rsync with s3 buckets gives InvalidAccessKeyId error

I am trying to copy all the data from an AWS S3 bucket to a GCS bucket. According to this answer, the rsync command should be able to do that, but I am receiving the following error when I try:
Caught non-retryable exception while listing s3://my-s3-source/: AccessDeniedException: 403 InvalidAccessKeyId
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>InvalidAccessKeyId</Code><Message>The AWS Access Key Id you provided does not exist in our records.</Message><AWSAccessKeyId>{REDACTED}</AWSAccessKeyId><RequestId>{REDACTED}</RequestId><HostId>{REDACTED}</HostId></Error>
CommandException: Caught non-retryable exception - aborting rsync
This is the command I am trying to run
gsutil -m rsync -r s3://my-s3-source gs://my-gcs-destination
I have the AWS CLI installed, and it works fine with the same AccessKeyId, listing buckets as well as objects in the bucket.
Any idea what I am doing wrong here?
gsutil can work with both Google Storage and S3.
gsutil rsync -d -r s3://my-aws-bucket gs://example-bucket
You just need to configure it with both your Google and your AWS S3 credentials. To let gsutil reach S3, add the Amazon S3 credentials to ~/.aws/credentials, or store your AWS credentials in the .boto configuration file for gsutil. Note that when you're accessing an Amazon S3 bucket with gsutil, the Boto library uses your ~/.aws/credentials file to override other credentials, such as any that are stored in ~/.boto.
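As a sketch, the ~/.aws/credentials entry gsutil will pick up looks like this (the key values below are placeholders):
[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = YOUR_AWS_SECRET_ACCESS_KEY
or, if you keep them in the .boto configuration file instead:
[Credentials]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = YOUR_AWS_SECRET_ACCESS_KEY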
=== 1st update ===
Also make sure you have the correct IAM permissions on the GCP side and the correct AWS IAM credentials. Depending on whether you have a prior version of Migrate for Compute Engine (formerly Velostrata), use this documentation and make sure you set up the VPN, IAM credentials, and AWS network. If you are using the current version (5.0), use the following documentation to check that everything is configured correctly.

AWS S3 cli not working on Windows server

This works on my Linux box, but I can't get a simple AWS S3 cli command to work on a Windows server (2012).
I'm running a simple copy command to a bucket. I get the following error:
Parameter validation failed:
Invalid length for parameter Key, value: 0, valid range: 1-inf
I googled this, couldn't find anything relevant. And I'm not the best at working with Windows servers.
What does this error actually mean?
Here's the command:
aws s3 cp test.zip s3://my-bucket
Version:
aws-cli/1.11.158 Python/2.7.9 Windows/2012Server botocore/1.7.16
You might try this:
aws s3 cp test.zip s3://my-bucket --recursive
The error message:
Invalid length for parameter Key
is telling you that you need to specify a Key for your object (basically, a filename). Like so:
aws s3 cp test.zip s3://my-bucket/test.zip
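A destination ending in a slash should also work if you want to keep the source filename as the key, e.g.:
aws s3 cp test.zip s3://my-bucket/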
The error message has nothing to do with specifying the file name on the destination file path (that will be taken from the origin). It has everything to do with having a valid access key and secret key set up.
Run the following command to verify if you have configured your credentials.
aws configure list

Cloud Object Storage S3: AccessDenied Error when calling GetObject operation from specific servers

I was trying to fetch a document from an S3 bucket on IBM Cloud Object Storage and I get an access denied error. I was able to upload an object successfully to the same bucket with the same credentials. Also, I get this error only when I try to download the object from certain specific servers. Is there any limitation in COS that disallows read access from specific servers/domains?
I tried fetching documents using aws cli commands and also with the python boto3 library as well and the behavior is same.
Error:
"An error occurred (AccessDenied) when calling the GetObject operation: Access Denied"
Update:
I tried the same boto3 module on Mac OS X (10.11.6) with the same Bucket and Key combinations and it works fine.
I was trying my code on a RHEL 6.8 server, and it breaks for files greater than 100 MB. (The server has both Python 2.7 and 2.6; I've tried both.)
So I even decided to use the AWS CLI command to download the file; it downloads exactly 2 MB and then gives the (AccessDenied) message.
So I went ahead and checked whether the NodeJS (aws-sdk) module could pull something off, and to my surprise, it worked perfectly every time for the S3 object download on RHEL 6 as well as Mac OS X. So the NodeJS aws-sdk S3 module is able to handle large downloads on RHEL 6.
So now I'm not sure what is wrong with the AWS CLI and boto3 (the Python-based tools for S3) on the RHEL 6 server. The same code, as I mentioned, works on Mac OS X with Python 2.7.
AWS CLI Command used:
aws --endpoint-url=http://s3-api.us-geo.objectstorage.xxxx.net s3 cp s3://bucketname/filename.tar.gz ./

aws cli: invalid security token

I'm trying to create a reusable delegation set to use as whitelisted nameservers for my domains, using the aws cli on Mac OS X. My AWS credentials (those of an IAM profile I created for that purpose, with full administrator privileges and location set to us-east-1) were correctly entered during setup and accepted by the system.
When entering the command
$ aws route53 create-reusable-delegation-set --caller-reference [CALLER-REFERENCE] --hosted-zone-id [HOSTED_ZONE] --generate-cli-skeleton
the request is successful and I get the response:
{
    "CallerReference": "",
    "HostedZoneId": ""
}
But when I remove --generate-cli-skeleton and enter
aws route53 create-reusable-delegation-set --caller-reference [CALLER-REFERENCE] --hosted-zone-id [HOSTED_ZONE]
I get this:
An error occurred (InvalidClientTokenId) when calling the CreateReusableDelegationSet operation: The security token included in the request is invalid.
In reality, my IAM credentials, despite being valid, and despite the profile I am using (donaldjenkins) having full administrator privileges, are systematically refused by all AWS services and for all commands, not just Route53.
I've been unable to pinpoint the cause of this despite extensive research. Any suggestions gratefully received.
Deleting my credentials file (Linux, macOS, or Unix: ~/.aws; Windows: %UserProfile%\.aws) and then running aws configure again worked for me.
The solution is to delete existing credentials for the IAM user and issue new ones. For some reason the credentials recorded during the initial setup of aws cli never worked properly, but overwriting them with new ones removed the issue instantly.
I had the same exact issue.
I'm running NodeJS on my local environment and trying to deploy to Amazon using CodeDeploy and some other AWS tools.
What worked for me was to delete the current config and credentials files, regenerate a new key, and use that. This was after I had originally installed the aws cli and added the keys; I had to add the keys again.
Depending on your folder structure, navigate to your home directory.
On a Mac, if you open a new terminal, it should show your current home directory: /Users/YOURNAME
cd .aws
rm -rf config
rm -rf credentials
After you do this, go back to your home directory, then run:
aws configure
Enter your key and secret key.
You can find more details here: https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html#cli-quick-configuration under Quickly Configuring the AWS CLI
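Once the new keys are entered, a quick sanity check is:
aws sts get-caller-identity
If the credentials are valid, this prints the account ID and the ARN of the IAM user the CLI is actually using; if not, it fails with a similar InvalidClientTokenId error.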

Amazon S3 sync to local machine failed

I'm new to AWS and I'm trying to download a bunch of files from my S3 bucket to my local machine using aws s3 sync as described in http://docs.aws.amazon.com/cli/latest/reference/s3/sync.html.
I used the following command:
aws s3 sync s3://outputbucket/files/ .
I got the following error:
A client error (AccessDenied) occurred when calling the ListObjects operation: Access Denied
Completed 1 part(s) with ... file(s) remaining
Even though I have configured my access key ID & secret access key as described in http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-set-up.html
Where might the problem be?
Assuming that you are an Administrator and/or you have set your credentials properly, it is possible that you are using an old AWS CLI.
I encountered this while using the packaged AWS CLI with Ubuntu 14.04.
The solution that worked for me was to remove the AWS CLI prepackaged with Ubuntu and install it with pip instead:
sudo apt-get remove awscli
sudo apt-get install python-pip
sudo pip install awscli
Many thanks to this link:
https://forums.aws.amazon.com/thread.jspa?threadID=173124
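After reinstalling, you can confirm which build is now on your PATH with:
aws --version
The pip-installed CLI should report a much newer version than the Ubuntu 14.04 package.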
To perform a file sync, two sets of permissions are required:
ListObjects to obtain a list of files to copy
GetObject to access the objects
If you are using your "root" user that comes with your AWS account, you will automatically have these permissions.
If you are using a user created within Identity and Access Management (IAM), you will need to assign these permissions to the User. The easiest way is to assign the AmazonS3FullAccess policy, which gives access to all S3 functions.
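If you would rather not grant full access, a minimal policy sketch covering just those two operations might look like the following (note that the IAM action names are s3:ListBucket and s3:GetObject, and outputbucket stands in for your bucket name):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::outputbucket"
        },
        {
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::outputbucket/*"
        }
    ]
}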
In my case the credentials stored in ~/.aws/config were being clobbered by a competing profile sourced in ~/.zshrc. Run env | grep AWS to check.