I have been trying to enable default server-side encryption for an S3 bucket, but the command fails with the following error:
[root#dcm-development s3]# aws --profile S3-es-xx-xx-xx-test --endpoint-url https://es-xx-xx-z2.eecloud.xx.net s3api put-bucket-encryption --bucket bucketname --server-side-encryption-configuration '{"Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]}'
An error occurred (InvalidArgument) when calling the PutBucketEncryption operation: Unknown
awscli / Python version:
root#e3a8f6bbbdbc:/app# aws --version
aws-cli/1.18.117 Python/3.6.5 Linux/3.10.0-1062.9.1.el7.x86_64 botocore/1.17.40
I would really appreciate it if someone could point out the mistake here. Other operations, such as put-bucket-policy and delete-bucket-policy, are working fine.
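Since other operations succeed against the same endpoint, the InvalidArgument may come from the S3-compatible service itself rather than from the request. Before blaming the endpoint, a quick local check (pure Python, no AWS call) can rule out the shell having mangled the quoting of the JSON argument:

```python
import json

# The exact string passed to --server-side-encryption-configuration above.
payload = '{"Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]}'

# json.loads raises an exception if the shell stripped or altered the quotes.
config = json.loads(payload)
algorithm = config["Rules"][0]["ApplyServerSideEncryptionByDefault"]["SSEAlgorithm"]
print(algorithm)  # → AES256
```

If this parses cleanly, the request body is well-formed and the error most likely means the endpoint does not (fully) implement the PutBucketEncryption API.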
Related
aws s3 cp s3://arxiv/pdf/arXiv_pdf_0001_001.tar s3://bucket --request-payer requester
fails with
fatal error: An error occurred (403) when calling the HeadObject operation: Forbidden
But if I do
aws s3 cp s3://arxiv/pdf/arXiv_pdf_0001_001.tar . --request-payer requester
it works fine
Additionally this also works, but it only copies one file at a time:
aws s3api copy-object --copy-source arxiv/pdf/arXiv_pdf_0001_001.tar --request-payer requester --key arXiv_pdf_0001_001.tar --bucket arxivmanifest
What's going on?
When I ran the first command, it gave the error:
An error occurred (AccessDenied) when calling the GetObjectTagging operation: Access Denied
This is because the aws s3 cp command does more than just copy the file; it also appears to attempt to copy the object's tags. It would also seem that the bucket has not granted permission for the GetObjectTagging API call.
In contrast, the aws s3api copy-object command issues a single API call. In fact, all s3api commands map to a specific API call, whereas the aws s3 commands are 'higher-level' commands that do more, such as enabling a --recursive copy.
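Since copy-object works but only handles one object at a time, a loop over the keys gets you a bucket-to-bucket copy without the GetObjectTagging call. This is a sketch (shown as a dry run that prints the commands; pipe it to sh to execute, and the key list here is illustrative):

```shell
# Generate one single-call copy-object command per key.
# Each copy-object issues exactly one CopyObject API request,
# so no GetObjectTagging permission is needed.
printf '%s\n' arXiv_pdf_0001_001.tar arXiv_pdf_0001_002.tar |
while read -r key; do
  echo aws s3api copy-object --copy-source "arxiv/pdf/${key}" \
       --request-payer requester --key "${key}" --bucket arxivmanifest
done
```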
Using the AWS CLI I can create a bucket in us-east-1 but not in other regions. Why is this?
$ aws s3api create-bucket --bucket snap2web-12 --region us-east-1
{
"Location": "/snap2web-12"
}
19:21:27 durrantm u2018 /home/durrantm/Dropbox/_/Michael/cli_scripts
$ aws s3api create-bucket --bucket snap2web-13 --region us-east-2
An error occurred (IllegalLocationConstraintException) when calling the CreateBucket operation: The unspecified location constraint is incompatible for the region specific endpoint this request was sent to.
19:21:44 durrantm u2018 /home/durrantm/Dropbox/_/Michael/cli_scripts
$ aws s3api create-bucket --bucket snap2web-14 --region us-west-1
An error occurred (IllegalLocationConstraintException) when calling the CreateBucket operation: The unspecified location constraint is incompatible for the region specific endpoint this request was sent to.
19:23:19 durrantm u2018 /home/durrantm/Dropbox/_/Michael/cli_scripts
$
Two possible fixes:
Use the s3 command:
aws s3 mb s3://snap2web-13 --region us-east-2
or, according to the s3api examples (emphasis mine):
Regions outside of us-east-1 require the appropriate LocationConstraint to be specified in order to create the bucket in the desired region:
aws s3api create-bucket --bucket snap2web-13 --region us-east-2 --create-bucket-configuration LocationConstraint=us-east-2
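The rule is simply that us-east-1 is the default and must not get a LocationConstraint, while every other region must. A small sketch of that logic (the helper name is mine, not part of the CLI):

```python
def create_bucket_args(bucket: str, region: str) -> list[str]:
    """Build the s3api create-bucket argument list for a given region.

    us-east-1 is the default region and rejects a LocationConstraint;
    every other region requires one matching the --region value.
    """
    args = ["aws", "s3api", "create-bucket", "--bucket", bucket, "--region", region]
    if region != "us-east-1":
        args += ["--create-bucket-configuration", f"LocationConstraint={region}"]
    return args

print(" ".join(create_bucket_args("snap2web-13", "us-east-2")))
```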
I have to delete several .tar files from my S3 bucket. I run the command to do this through the AWS CLI but get the following error:
Error parsing parameter '--delete': Expected: '=', received: ''' for input:
'{Objects:[{Key:2019-03-27T160001Z.tar},{Key:2019-03-27T170001Z.tar}]}'
My aws version is : aws-cli/1.16.136 Python/3.6.0 Windows/10 botocore/1.12.126
My command is:
aws s3api delete-objects --bucket mybucket --delete '{"Objects":[{"Key":"2019-03-27T160001Z.tar"},{"Key":"2019-03-27T170001Z.tar"}]}'
Can anyone guide me on where I am making a mistake? Any help is really appreciated.
Your command worked fine for me. As per delete-objects — AWS CLI Command Reference, you can also use the shorthand syntax:
aws s3api delete-objects --bucket mybucket --delete 'Objects=[{Key=2019-03-27T160001Z.tar},{Key=2019-03-27T170001Z.tar}]'
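The parse error suggests the shell mangled the quotes; on Windows, cmd.exe does not treat single quotes as quoting characters, so the JSON arrives at the CLI with stray `'` characters. One way to sidestep shell quoting entirely, sketched here, is to write the payload to a file and pass it with the CLI's file:// syntax (file name and key list are from the question):

```python
import json

keys = ["2019-03-27T160001Z.tar", "2019-03-27T170001Z.tar"]
payload = {"Objects": [{"Key": k} for k in keys]}

# Write the request body to a file; no shell quoting is involved.
with open("delete.json", "w") as f:
    json.dump(payload, f)

print(json.dumps(payload))
```

Then run:

aws s3api delete-objects --bucket mybucket --delete file://delete.json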
After using aws configure to set up access, I can list all the files in an S3 bucket with:
aws s3api list-objects --bucket my_bucket_name
but when I run:
aws s3 cp s3://s3.amazonaws.com/my_bucket_name/ . --recursive
or
aws s3 sync s3://s3.amazonaws.com/my_bucket_name/ .
I get the following error:
fatal error: An error occurred (AccessDenied) when calling the ListObjects operation: Access Denied
Any ideas?
The following worked, i.e. without the amazonaws.com domain:
aws s3 sync s3://my_bucket_name .
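The reason is that in an s3:// URI the first path component is taken as the bucket name, so s3://s3.amazonaws.com/my_bucket_name/ asks the CLI to list the bucket named "s3.amazonaws.com", which your credentials (rightly) cannot access. An illustrative helper (the function is mine, not part of any library) showing the fix:

```python
def to_s3_uri(path: str) -> str:
    """Strip an accidental s3.amazonaws.com prefix so that the first
    path component is the bucket name, as `aws s3` expects."""
    prefix = "s3://s3.amazonaws.com/"
    if path.startswith(prefix):
        return "s3://" + path[len(prefix):]
    return path

print(to_s3_uri("s3://s3.amazonaws.com/my_bucket_name/"))  # → s3://my_bucket_name/
```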
I am trying to run the following command:
aws s3 cp --region ap-south-1 --acl public-read my.exe s3://bucket/binaries/my.exe
upload failed: ./my.exe to s3://bucket/binaries/my.exe A client error
(InvalidRequest) occurred when calling the PutObject operation: You
are attempting to operate on a bucket in a region that requires
Signature Version 4. You can fix this issue by explicitly providing
the correct region location using the --region argument, the
AWS_DEFAULT_REGION environment variable, or the region variable in the
AWS CLI configuration file. You can get the bucket's location by
running "aws s3api get-bucket-location --bucket BUCKET".
How do I fix this error? I also tried
AWS_DEFAULT_REGION=ap-south-1 aws s3 cp --acl public-read my.exe s3://bucket/binaries/my.exe
but with no luck.
# aws --version
aws-cli/1.10.28 Python/2.7.9 Linux/3.16.0-4-amd64 botocore/1.4.19
It started working after upgrading awscli:
pip install --upgrade awscli
aws --version
aws-cli/1.10.43 Python/2.7.9 Linux/3.16.0-4-amd64 botocore/1.4.33
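If upgrading is not an option, older awscli versions can also be told to sign S3 requests with Signature Version 4 through the CLI config file. A sketch of ~/.aws/config (the region value is taken from the question; adjust the profile name as needed):

```ini
[default]
region = ap-south-1
s3 =
    signature_version = s3v4
```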