An error occurred (InvalidRequest) when calling the ListObjects operation: Missing required header for this request: x-amz-content-sha256

I am trying to copy from one bucket to another in AWS with the command below:
aws s3 cp s3://bucket1/media s3://bucket2/media --profile xyz --recursive
It returns an error saying:
An error occurred (InvalidRequest) when calling the ListObjects operation: Missing required header for this request: x-amz-content-sha256
Completed 1 part(s) with ... file(s) remaining

Check your region. This error is known to happen if your region is not set correctly.

Thanks for your answers. The issue was with the permissions of the profile used; the credentials must have access rights to both S3 buckets.

I confirm it is an issue of setting the wrong region. However, the question now is:
How do you find out the region of an S3 bucket?
The answer is in the URL of any asset hosted there.
So, assume one of your assets hosted under bucket-1 has the URL:
https://s3.eu-central-2.amazonaws.com/bucket-1/asset.png
This means your region is eu-central-2.
Alright, so run:
aws configure
And change your region accordingly.
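If you don't want to rely on asset URLs, you can also ask S3 for the bucket's region directly. A minimal boto3 sketch (assuming your credentials can call GetBucketLocation on bucket-1; the equivalent CLI call is aws s3api get-bucket-location --bucket bucket-1):
import boto3

# GetBucketLocation returns the region the bucket was created in.
# Note: buckets in us-east-1 come back with an empty LocationConstraint.
s3 = boto3.client("s3")
location = s3.get_bucket_location(Bucket="bucket-1")
print(location.get("LocationConstraint") or "us-east-1")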

I received this error in bash scripts without any SDK.
In my case, I was missing the x-amz-content-sha256 and x-amz-date headers in my cURL request.
Notably:
x-amz-date: required by AWS; must contain the timestamp of the request. The accepted format is quite flexible; I'm using the ISO 8601 basic format, e.g. 20150915T124500Z.
x-amz-content-sha256: required by AWS; must be the SHA-256 digest of the payload.
The request will carry no payload (i.e. the body will be empty). This means that wherever a "payload hash" is required, we provide the SHA-256 hash of an empty string, which is the constant value e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855. This applies to the x-amz-content-sha256 header as well.
Detailed explanation: https://czak.pl/2015/09/15/s3-rest-api-with-curl.html
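For reference, here is a small Python sketch (not part of the original bash script) showing how those two header values can be computed; the empty-payload hash matches the constant above:
import hashlib
from datetime import datetime, timezone

# SHA-256 of an empty payload; equals
# e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
payload_hash = hashlib.sha256(b"").hexdigest()

# Request timestamp in ISO 8601 basic format, e.g. 20150915T124500Z
amz_date = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")

print("x-amz-content-sha256:", payload_hash)
print("x-amz-date:", amz_date)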

Assuming you have set the following correctly:
AWS credentials
region
permissions of the bucket (e.g. set to publicly accessible)
IAM policy of the bucket
And assuming you are using boto3 client,
then another thing that could be causing the problem is the signature version in the botocore.config.Config.
import boto3
from botocore import config

AWS_REGION = "us-east-1"
BOTO3_CLIENT_CONFIG = config.Config(
    region_name=AWS_REGION,
    signature_version="v4",  # this value is the problem, see below
    retries={"max_attempts": 10, "mode": "standard"},
)
s3_client = boto3.client("s3", config=BOTO3_CLIENT_CONFIG)
result = s3_client.list_objects(Bucket="my-bucket-name", Prefix="", Delimiter="/")
Here the signature_version cannot be "v4"; it should be "s3v4". Alternatively, the signature_version argument can be omitted altogether, as it defaults to "s3v4".
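For completeness, a corrected version of the config above might look like this (only the signature_version line changes; imports and AWS_REGION as in the snippet above):
BOTO3_CLIENT_CONFIG = config.Config(
    region_name=AWS_REGION,
    signature_version="s3v4",  # or simply omit this argument
    retries={"max_attempts": 10, "mode": "standard"},
)
s3_client = boto3.client("s3", config=BOTO3_CLIENT_CONFIG)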

Related

AWS Service Quota: How to get service quota for Amazon S3 using boto3

I get the error "An error occurred (NoSuchResourceException) when calling the GetServiceQuota operation:" while trying to run the following boto3 Python code to get the value of the quota for "Buckets":
client_quota = boto3.client('service-quotas')
resp_s3 = client_quota.get_service_quota(ServiceCode='s3', QuotaCode='L-89BABEE8')
In the above code, QuotaCode "L-89BABEE8" is for "Buckets". I presumed the value of ServiceCode for Amazon S3 would be "s3", so I put it there, but I guess that is wrong and is throwing the error. I tried finding documentation on the ServiceCode for S3 but could not find it. I even tried "S3" (uppercase 'S') and "Amazon S3", but those didn't work either.
What I tried?
client_quota = boto3.client('service-quotas')
resp_s3 = client_quota.get_service_quota(ServiceCode='s3', QuotaCode='L-89BABEE8')
What I expected?
Output in the format below for S3. The example below is for EC2, which is the output of resp_ec2 = client_quota.get_service_quota(ServiceCode='ec2', QuotaCode='L-6DA43717').
I just played around with this and I'm seeing the same thing you are: empty responses from any service quota list or get command for service s3. However, s3 is definitely the correct service code, because it comes back from the service quota list_services() call. Then I saw there are also list and get commands for AWS default service quotas, and when I tried those they came back with data. I'm not entirely sure, but based on the docs I think any quota that can't be adjusted, and possibly any quota your account hasn't requested an adjustment for, will come back with an empty response from get_service_quota(), and you'll need to run get_aws_default_service_quota() instead.
So I believe what you need to do is probably run this first:
client_quota.get_service_quota(ServiceCode='s3', QuotaCode='L-89BABEE8')
And if that throws an exception, then run the following:
client_quota.get_aws_default_service_quota(ServiceCode='s3', QuotaCode='L-89BABEE8')
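Put together, a sketch of that fallback (assuming the same service-quotas client as in the question) could look like this:
import boto3

client_quota = boto3.client("service-quotas")

try:
    # Works when your account has an applied value for the quota.
    resp_s3 = client_quota.get_service_quota(
        ServiceCode="s3", QuotaCode="L-89BABEE8"
    )
except client_quota.exceptions.NoSuchResourceException:
    # Fall back to the AWS default value for the quota.
    resp_s3 = client_quota.get_aws_default_service_quota(
        ServiceCode="s3", QuotaCode="L-89BABEE8"
    )

print(resp_s3["Quota"]["Value"])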

C# AWS SDK SecurityTokenServiceClient.AssumeRole returning "SignatureDoesNotMatch" 403 forbidden

I've been implementing an AWS S3 integration with the C# AWS SDK in a development environment, and everything has been going well. Part of the requirement is that the IAM AccessKey and SecretKey rotate and that credential/config files are not stored or cached, and there is also a Role to be assumed in the process.
I have a method which returns credentials after initializing an AmazonSecurityTokenServiceClient with AccessKey, SecretKey, and RegionEndpoint, formats an AssumeRoleRequest with the RoleArn, and then executes the request:
using (var STSClient = new AmazonSecurityTokenServiceClient(accessKey, secretKey, bucketRegion))
{
    try
    {
        var response = STSClient.AssumeRole(new AssumeRoleRequest(roleARN));
        if (response.HttpStatusCode == System.Net.HttpStatusCode.OK) return response.Credentials;
    }
    catch (AmazonSecurityTokenServiceException ex)
    {
        return null;
    }
}
This is simplified, as the real implementation validates the credential variables, etc., and it matches the AWS developer code examples (although I can't find the link to that page anymore).
This has been working in dev just fine. Having moved this to a QA env with new AWS credentials, which I've been assured have been set up in the same process as the dev credentials, I'm now receiving an exception on the AssumeRole call.
The AssumeRole method's documentation doesn't mention that it can throw that exception; it is just the one it raises. The details are StatusCode: 403 Forbidden, ErrorCode: SignatureDoesNotMatch, ErrorType: Sender, Message: "The request signature we calculated does not match the signature you provided...".
Things I have ruled out:
Keys are correct and do not contain escaped characters (/), or leading/trailing spaces
bucket region is correct us-west-2
sts auth region is us-east-1
SignatureVersion is 4
Switching back to the dev keys works, but that is not a production-friendly solution. Ultimately I will not be in charge of the keys, or of the AWS account used to create them. I've been in touch with the IT admin who created the accounts/keys/roles, and he assured me they are created the same way I created the dev accounts/keys/roles (which was an agreed-upon process prior to development).
The provided accounts/keys/roles can be accessed via the CLI or web console, so I can confirm they work and are active. I've been diligent about not having any CLI-created credential or config files floating around that the SDK might pick up by default.
Any thoughts or suggestions are welcome.
The reason why AWS returns this error is usually that the secret key is incorrect:
The request signature we calculated does not match the signature you provided. Check your key and signing method. (Status Code: 403; Error Code: SignatureDoesNotMatch)
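If you want to double-check the QA keys independently of the C# code, a quick boto3 sketch (the key values and role ARN below are placeholders) can tell you whether the key pair itself signs correctly:
import boto3

# Placeholders: substitute the QA access key, secret key, and role ARN.
sts = boto3.client(
    "sts",
    aws_access_key_id="AKIA...",
    aws_secret_access_key="...",
    region_name="us-east-1",
)

# If the secret key is wrong, this call fails with SignatureDoesNotMatch.
print(sts.get_caller_identity()["Arn"])

# If the identity check passes, try assuming the role itself.
creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/MyRole",
    RoleSessionName="signature-check",
)["Credentials"]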

AWS MediaConvert could not identify region for bucket s3.Bucket(name='myname')

My goal is to create a MediaConvert job from a given template using boto3 with Python: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/mediaconvert.html#MediaConvert.Client.create_job
Apparently MediaConvert fails to identify the region of my output S3 bucket. I was under the impression that buckets were global, but even after some tinkering I wasn't able to fix the problem.
Here's the error message from the MediaConvert dashboard:
Could not identify region for bucket s3.Bucket(name='mybucket'): Failed to lookup region of buckets3.Bucket(name='mybucket')
The error code is 1404.
When I click on the Output Group on the dashboard for the job that failed, I get redirected to "https://console.aws.amazon.com/s3/buckets/s3.Bucket(name='mybucket')/?region=us-east-1", which obviously fails to resolve a bucket. The correct path would have been "https://console.aws.amazon.com/s3/buckets/mybucket/?region=us-east-1".
Here is the code that triggers the job:
media_client = boto3.client('mediaconvert', region_name='us-east-1')
endpoints = media_client.describe_endpoints()
customer_media_client = boto3.client('mediaconvert', region_name='us-east-1', endpoint_url=endpoints['Endpoints'][0]['Url'])
customer_media_client.create_job(
    JobTemplate='job-template',
    Role='arn:aws:iam::1234567890:role/MediaConvert',
    Settings=...
In the Settings I use the following OutputGroupSettings:
"OutputGroupSettings": {
"Type": "FILE_GROUP_SETTINGS",
"FileGroupSettings": {
"Destination": "s3://%s/" % target_bucket
}
}
I did verify that the MediaConvert jobs and the S3 buckets are all in the same region (us-east-1).
Any idea what the error is about? If you need more code, please let me know.
I have also asked this question on the aws forums: https://forums.aws.amazon.com/thread.jspa?threadID=304143
It looks like a known issue associated with the % string formatting operator used within a Python dictionary.
The commit for Python Issue #14123 ("Explicitly mention that old style % string formatting has caveats but is not going away any time soon") notes that the
use of a binary operator means that care may be needed in order to
format tuples and dictionaries correctly.
Check this answer for more detail.
This explains why taking the assignment outside of the dict resolves the issue. Consider using the .format() method to replace %.
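As a sketch of that suggestion (output_group_settings is just an illustrative name; target_bucket stands in for the variable from the question), build the destination string outside the dictionary, or with .format():
target_bucket = "mybucket"  # placeholder for the bucket name from the question
destination = "s3://{}/".format(target_bucket)

output_group_settings = {
    "Type": "FILE_GROUP_SETTINGS",
    "FileGroupSettings": {
        "Destination": destination,
    },
}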
I got the same error message from MediaConvert, but my issue was missing a "/" after the bucket name in "Destination".
Originally my code was:
FileGroupSettings['Destination'] = 's3://' + bucketName + S3key
By adding a slash, it could locate the right bucket:
FileGroupSettings['Destination'] = 's3://' + bucketName + '/' + S3key
I resolved the issue by extracting "s3://%s/" % target_bucket into a separate declaration.
s3_target = "s3://%s/" % target_bucket
...
"Destination": s3_target

Specify Maximum File Size while uploading a file in AWS S3

I am creating temporary credentials via AWS Security Token Service (AWS STS) and using these credentials to upload a file to S3 from the S3 Java SDK.
I need some way to restrict the size of the file upload.
I was trying to add a policy (with s3:content-length-range) while creating the user, but that doesn't seem to work.
Is there any other way to specify the maximum file size a user can upload?
An alternative method would be to generate a pre-signed URL instead of temporary credentials. It will be good for one file with a name you specify. You can also force a content-length range when you generate the URL. Your user will get the URL and will have to use a specific method (POST/PUT/etc.) for the request. They set the content while you set everything else.
I'm not sure how to do that with Java (it doesn't seem to have support for conditions), but it's simple with Python and boto3:
import boto3

# Get the service client
s3 = boto3.client('s3')

# Keep everything that is posted private
fields = {"acl": "private"}

# Ensure that the ACL isn't changed and restrict the user to a length
# between 10 and 100.
conditions = [
    {"acl": "private"},
    ["content-length-range", 10, 100]
]

# Generate the POST attributes
post = s3.generate_presigned_post(
    Bucket='bucket-name',
    Key='key-name',
    Fields=fields,
    Conditions=conditions
)
When testing this, make sure every single header item matches, or you'll get vague access-denied errors. It can take a while to match everything exactly.
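For instance, a client-side sketch of using the returned attributes (assuming the requests library, a local file named file.txt, and the post dict produced by generate_presigned_post above):
import requests

# 'post' is the dict returned by generate_presigned_post above.
with open("file.txt", "rb") as f:
    response = requests.post(
        post["url"],
        data=post["fields"],  # includes the policy, signature, acl and key
        files={"file": ("file.txt", f)},
    )

# S3 rejects bodies outside the 10-100 byte content-length-range condition.
print(response.status_code, response.text)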
I believe there is no way to limit the object size before uploading, and reacting to that afterwards would be quite hard. A workaround would be to create an S3 event notification that triggers your code, through a Lambda function or SNS topic. That could validate or delete the object and notify the user, for example.
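A minimal sketch of that workaround (an S3-triggered Lambda that deletes oversized objects; the 100-byte limit and the event wiring are assumptions):
import boto3

MAX_SIZE_BYTES = 100  # assumed limit
s3 = boto3.client("s3")

def lambda_handler(event, context):
    # Invoked by an s3:ObjectCreated:* event notification on the bucket.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        size = record["s3"]["object"]["size"]
        if size > MAX_SIZE_BYTES:
            # Too large: remove the object (and notify the user elsewhere).
            s3.delete_object(Bucket=bucket, Key=key)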

InvalidSignatureException when using boto3 for dynamoDB on aws

I'm facing some sort of credentials issue when trying to connect to my DynamoDB on AWS. Locally it all works fine and I can connect using env variables for AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_DEFAULT_REGION and then
dynamoConnection = boto3.resource('dynamodb', endpoint_url='http://localhost:8000')
When changing to live creds in the env variables and setting the endpoint_url to the DynamoDB on AWS, this fails with:
"botocore.exceptions.ClientError: An error occurred (InvalidSignatureException) when calling the Query operation: The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details."
The creds are valid, as they are used in a different app which talks to the same DynamoDB. I've also tried not using env variables and passing them directly to the method, but the error persisted. Furthermore, to avoid any issues with trailing spaces I've even used the credentials directly in the code. I'm using Python v3.4.4.
Is there maybe a header that also should be set that I'm not aware of? Any hints would be appreciated.
EDIT
I've now also created new credentials (to make sure they contain only alphanumeric characters), but still no dice.
You shouldn't use the endpoint_url when you are connecting to the real DynamoDB service. That's really only for connecting to local services or non-standard endpoints. Instead, just specify the region you want:
dynamoConnection = boto3.resource('dynamodb', region_name='us-west-2')
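As a quick sanity check (table and key names below are placeholders), a Query against the real service would then look something like:
import boto3
from boto3.dynamodb.conditions import Key

dynamoConnection = boto3.resource('dynamodb', region_name='us-west-2')
table = dynamoConnection.Table('my-table')          # placeholder table name
response = table.query(
    KeyConditionExpression=Key('pk').eq('some-id')  # placeholder key attribute
)
print(response['Items'])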
It's a sign that your system time is off. Maybe you can check your:
1. Time zone
2. Time settings
If there are automatic time settings available, use them to fix your clock.
"sudo hwclock --hctosys" should do the trick.
Just wanted to point out that when accessing DynamoDB from a C# environment (using the AWS .NET SDK) I ran into this error, and the way I solved it was to create a new pair of AWS access/secret keys.
It worked immediately after I changed those keys in the code.