This works on my Linux box, but I can't get a simple AWS S3 CLI command to work on a Windows server (2012).
I'm running a simple copy command to a bucket and get the following error:
Parameter validation failed:
Invalid length for parameter Key, value: 0, valid range: 1-inf
I googled this but couldn't find anything relevant, and I'm not the best at working with Windows servers.
What does this error actually mean?
Here's the command:
aws s3 cp test.zip s3://my-bucket
Version:
aws-cli/1.11.158 Python/2.7.9 Windows/2012Server botocore/1.7.16
You might try this:
aws s3 cp test.zip s3://my-bucket --recursive
The error message:
Invalid length for parameter Key
is telling you that you need to specify a Key for your object (basically, a filename), like so:
aws s3 cp test.zip s3://my-bucket/test.zip
The error message has nothing to do with specifying a file name in the destination path (that will be taken from the origin). It has everything to do with having a valid access key and secret key set up.
Run the following command to verify whether you have configured your credentials:
aws configure list
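If the list shows no access key or secret key, here is a minimal sketch of configuring them non-interactively (placeholder values, not real keys; plain aws configure gives an interactive prompt instead):
aws configure set aws_access_key_id AKIAEXAMPLEKEYID
aws configure set aws_secret_access_key exampleSecretAccessKey
aws configure set region us-east-1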
I downloaded the AWS CLI with the macOS GUI installer:
https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html
And I'm running this command to download a directory from an S3 bucket to my local machine:
aws s3 cp s3://macbook-pro-2019-prit/Desktop/pfp/properties/ ./ --recursive
But I'm getting this error:
rosetta error: /var/db/oah/279281327407104_279281327407104/dcf7796bca04d6b4d944583b3355e7db61ca27505539c35142c439a9dbfe60d0/aws.aot: attachment of code signature supplement failed: 1
zsh: trace trap aws s3 cp s3://macbook-pro-2019-prit/Desktop/pfp/properties/ . --recursive
How do I fix this error?
If these steps don't work, I suggest you first verify that you can just list the bucket (aws s3 ls s3://macbook-pro-2019-prit) and check the ACL, policy, or other access controls on your bucket.
Although I believe your issue is with your operating system, please confirm that your AWS user has a policy allowing list and get on macbook-pro-2019-prit, and reinstall the AWS CLI.
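For reference, a minimal sketch of such a policy (the bucket name is taken from the question; adapt it to your own setup):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::macbook-pro-2019-prit"
    },
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::macbook-pro-2019-prit/*"
    }
  ]
}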
I am trying to copy all the data from an AWS S3 bucket to a GCS bucket. According to this answer, the rsync command should be able to do that, but I am receiving the following error when I try:
Caught non-retryable exception while listing s3://my-s3-source/: AccessDeniedException: 403 InvalidAccessKeyId
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>InvalidAccessKeyId</Code><Message>The AWS Access Key Id you provided does not exist in our records.</Message><AWSAccessKeyId>{REDACTED}</AWSAccessKeyId><RequestId>{REDACTED}</RequestId><HostId>{REDACTED}</HostId></Error>
CommandException: Caught non-retryable exception - aborting rsync
This is the command I am trying to run
gsutil -m rsync -r s3://my-s3-source gs://my-gcs-destination
I have the AWS CLI installed, and it works fine with the same AccessKeyId, listing buckets as well as objects in the bucket.
Any idea what I am doing wrong here?
gsutil can work with both Google Storage and S3.
gsutil rsync -d -r s3://my-aws-bucket gs://example-bucket
You just need to configure it with both your Google and your AWS S3 credentials. For the S3 side, you need to add the Amazon S3 credentials to ~/.aws/credentials, or you can store your AWS credentials in the .boto configuration file for gsutil. However, when you're accessing an Amazon S3 bucket with gsutil, the Boto library uses your ~/.aws/credentials file to override other credentials, such as any that are stored in ~/.boto.
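For illustration, a ~/.aws/credentials entry might look like this (placeholder values, not real keys):
[default]
aws_access_key_id = AKIAEXAMPLEKEYID
aws_secret_access_key = exampleSecretAccessKey
and the equivalent section in the .boto configuration file would be:
[Credentials]
aws_access_key_id = AKIAEXAMPLEKEYID
aws_secret_access_key = exampleSecretAccessKey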
=== 1st update ===
Also make sure you have the correct IAM permissions on the GCP side and the correct AWS IAM credentials. If you have a prior version of Migrate for Compute Engine (formerly Velostrata), use this documentation and make sure you set up the VPN, IAM credentials, and AWS network. If you are using the current version (5.0), use the following documentation to check that everything is configured correctly.
I'm trying to learn Terraform and running through some examples from their documentation. I am trying to move a file from my PC to an S3 bucket. I created the bucket with this command:
aws s3api create-bucket --bucket=terraform-serverless-example --region=us-east-1
If I then check in the console or call s3api list-buckets, I can see the bucket exists and is functional. I can also upload using the console. However, when I try to run this command:
aws s3 cp example.zip s3://terraform-serverless-example/v1.0.0/example.zip
It returns this error:
upload failed: ./code.zip to s3://test-verson1/v1.0.0/code.zip An error occurred (NoSuchBucket) when calling the PutObject operation: The specified bucket does not exist
I have tried to configure the region by adding it as a flag, have made sure permissions are correct, and so on, and it's all set up as it should be. I can't figure out why this happens and would appreciate any help.
I was able to set up a pull from an S3 bucket on a Mac seamlessly, but I have been struggling with an identical process on a PC (Windows). Here is what I have done -- any help along the way would be much appreciated.
Installed awscli using pip
Ran aws configure in the command prompt and entered the proper access key ID and secret access key.
Ran the S3 command: G:\>aws s3 cp --recursive s3://url-index-given/ . (where the real URL is replaced with url-index-given for example purposes).
And got this error:
fatal error: Could not connect to the endpoint URL: "https://url-index-given.s3.None.amazonaws.com/?list-type=2&prefix=&encoding-type=url"
I have tried uninstalling the awscli package and followed this process recommended by Amazon without any errors.
The error indicates that you have given an invalid value for Region when using aws configure. (See the None in the URL? That is where the Region normally goes.)
You should run aws configure again and give it a valid region (e.g. us-west-2).
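For example, assuming your bucket is in us-west-2 (substitute the actual region of your bucket), you can also set it directly without re-running the whole prompt:
aws configure set region us-west-2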
Keen to set up fakes3; I have it working via a Docker setup, running on port 4569. I cannot figure out how to test it using the AWS CLI (version 1.10.6), specifically how to change the port it accesses.
i.e. I want to run a command like
$ aws s3 cp test.txt s3://mybucket/test2.txt
I need to specify the port. I've tried:
a --port setting on the command line, i.e. AWS_ACCESS_KEY_ID=ignored AWS_SECRET_ACCESS_KEY=ignored aws s3 --profile fakes3 cp test.txt s3://mybucket/test2.txt (it says this is not a valid parameter)
adding a profile and including end_point="localhost:4569" in the config in ~/.aws (gives an error about the AUTH key)
running fakes3 on port 443, but that clashes with my local machine
Has anyone got aws cli working with fakes3?
$ aws s3 --version
aws-cli/1.10.6 Python/2.7.11 Darwin/15.2.0 botocore/1.3.28
Use the --endpoint-url argument. If fakes3 is listening on port 4569, try this:
aws --endpoint-url=http://localhost:4569 s3 ls
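Assuming the same endpoint, the copy command from the question would then look like this:
aws --endpoint-url=http://localhost:4569 s3 cp test.txt s3://mybucket/test2.txt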