C# AWS SDK SecurityTokenServiceClient.AssumeRole returning "SignatureDoesNotMatch" 403 Forbidden

I've been implementing an AWS S3 integration with the C# AWS SDK in a development environment, and everything has been going well. Part of the requirement is that the IAM AccessKey and SecretKey rotate, that no credential/config files are stored or cached, and that a Role is assumed in the process.
I have a method which returns credentials after initializing an AmazonSecurityTokenServiceClient with AccessKey, SecretKey, and RegionEndpoint, formatting an AssumeRoleRequest with the RoleArn, and then executing the request:
using (var STSClient = new AmazonSecurityTokenServiceClient(accessKey, secretKey, bucketRegion))
{
    try
    {
        // RoleSessionName is required by AssumeRole; the value here is a placeholder.
        var response = STSClient.AssumeRole(new AssumeRoleRequest
        {
            RoleArn = roleARN,
            RoleSessionName = "session1"
        });
        if (response.HttpStatusCode == System.Net.HttpStatusCode.OK)
            return response.Credentials;
    }
    catch (AmazonSecurityTokenServiceException)
    {
        return null;
    }
}
This is simplified, as the real implementation validates the credential variables, etc., and it matches the AWS developer code examples (although I can't find the link to that page anymore).
This has been working fine in dev. Having moved this to a QA environment with new AWS credentials, which I've been assured were set up by the same process as the dev credentials, I'm now receiving an exception on the AssumeRole call.
The AssumeRole method's documentation doesn't say it throws this exception; it's just the one that gets raised. StatusCode: 403 Forbidden; ErrorCode: SignatureDoesNotMatch; ErrorType: Sender; Message: "The request signature we calculated does not match the signature you provided....".
Things I have ruled out:
Keys are correct and do not contain escaped characters (/) or leading/trailing spaces
Bucket region is correct (us-west-2)
STS auth region is us-east-1
SignatureVersion is 4
Switching back to the dev keys works, but that is not a production-friendly solution. Ultimately I will not be in charge of the keys, or of the AWS account used to create them. I've been in touch with the IT admin who created the accounts/keys/roles, and he assured me they were created the same way I created the dev accounts/keys/roles (which was an agreed-upon process prior to development).
The provided accounts/keys/roles can be accessed via the CLI or web console, so I can confirm they work and are active. I've been diligent about not having any CLI-created credential or config files floating around that the SDK might pick up by default.
Any thoughts or suggestions are welcome.

The reason AWS returns this error is usually that the secret key is incorrect:
The request signature we calculated does not match the signature you provided. Check your key and signing method. (Status Code: 403; Error Code: SignatureDoesNotMatch)
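As a quick sanity check of the key pair itself, independent of the C# application, you can make a minimal STS call that needs no IAM permissions. Here is a hedged boto3 sketch (the key values and region are placeholders):
import boto3

# Placeholders: substitute the QA access key pair being tested.
sts = boto3.client(
    "sts",
    aws_access_key_id="AKIA...",
    aws_secret_access_key="...",
    region_name="us-east-1",
)

# GetCallerIdentity succeeds only if the request signature is valid, so a
# SignatureDoesNotMatch here isolates the problem to the keys themselves.
print(sts.get_caller_identity())
If this fails with the same signature error, the secret key, not the role or the calling code, is the likely culprit.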

Related

AWS Service Quota: How to get service quota for Amazon S3 using boto3

I get the error "An error occurred (NoSuchResourceException) when calling the GetServiceQuota operation" while trying to run the following boto3 Python code to get the quota value for "Buckets":
client_quota = boto3.client('service-quotas')
resp_s3 = client_quota.get_service_quota(ServiceCode='s3', QuotaCode='L-89BABEE8')
In the above code, the QuotaCode "L-89BABEE8" is for "Buckets". I presumed the ServiceCode value for Amazon S3 would be "s3", so I put it there, but I guess that is wrong and is throwing the error. I tried to find documentation on the ServiceCode for S3 but could not. I even tried "S3" (uppercase 'S' here) and "Amazon S3", but those didn't work either.
What I tried:
client_quota = boto3.client('service-quotas')
resp_s3 = client_quota.get_service_quota(ServiceCode='s3', QuotaCode='L-89BABEE8')
What I expected:
Output in the below format for S3. The example below is for EC2, i.e. the output of resp_ec2 = client_quota.get_service_quota(ServiceCode='ec2', QuotaCode='L-6DA43717')
I just played around with this, and I'm seeing the same thing you are: empty responses from any service quota list or get command for service s3. However, s3 is definitely the correct service code, because you see it come back from the Service Quotas list_services() call. Then I saw there are also list and get commands for AWS default service quotas, and when I tried those they came back with data. I'm not entirely sure, but based on the docs I think any quota that can't be adjusted, and possibly any quota your account hasn't requested an adjustment for, will probably come back with an empty response from get_service_quota(), and you'll need to call get_aws_default_service_quota() instead.
So I believe what you need to do is probably run this first:
client_quota.get_service_quota(ServiceCode='s3', QuotaCode='L-89BABEE8')
And if that throws an exception, then run the following:
client_quota.get_aws_default_service_quota(ServiceCode='s3', QuotaCode='L-89BABEE8')
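Putting those two calls together, a small sketch of the fallback logic might look like this (assuming the failure surfaces as a botocore ClientError, per the NoSuchResourceException above):
import boto3
from botocore.exceptions import ClientError

client_quota = boto3.client('service-quotas')

try:
    # Account-specific (applied) quota, if one exists.
    resp_s3 = client_quota.get_service_quota(ServiceCode='s3', QuotaCode='L-89BABEE8')
except ClientError:
    # Fall back to the AWS default quota for the account.
    resp_s3 = client_quota.get_aws_default_service_quota(ServiceCode='s3', QuotaCode='L-89BABEE8')

print(resp_s3['Quota']['Value'])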

Why am I failing to read and write to Amazon S3 from a Delphi VCL application?

I have created a Delphi application from which I want to read from and write to an Amazon S3 (Simple Storage Service) bucket.
In the S3 Management Console I have created a new bucket and set Block all public access to On.
I then created a new user in IAM (Identity and Access Management) and granted this user AmazonS3FullAccess privileges (for now). In my application I added a TAmazonConnectionInfo component to my project, set the AccountKey property to the secret access key, and set the AccountName property to the access key ID of this IAM user.
In my code I am instantiating a TAmazonStorageService class, passing the TAmazonConnectionInfo object to its constructor. I am then invoking the UploadObject method, to which I pass the bucket name, an object name, and a TArray<Byte> that contains the object I want to store. The call to UploadObject returns False. I've tried several different byte arrays, including one based on the example shown in one of the YouTube videos referenced at the bottom of this post, so I'm pretty sure it's not a problem with the object I am trying to store.
I tried setting Block all public access to Off, but that did not solve the problem. I don't know how long it takes for those settings to take effect, but there was no difference in result after half an hour.
Either I have not sufficiently configured my TAmazonConnectionInfo object, or there are one or more objects that I need to add to the project, or there is some configuration that I need to perform on the bucket.
One concern I have is that my S3 bucket is located in US East (Ohio) region. The Region property of the TAmazonConnectionInfo component is set to amzrUSEast1, but I am not sure that that is correct. I tried setting Region to amzrNotSpecified, but that did not solve the problem.
Also, I tried setting StorageEndPoint to s3.us-east-2.amazonaws.com (http) and s3-accesspoint.us-east-2.amazonaws.com (https), based on Paweł’s comments.
I’ve exhausted my options. If you’re having success working with your S3 buckets from Delphi I would be grateful if you could help point me in the right direction.
I am using Delphi Rio 10.3.3 on Windows 10 64-bit
References:
https://www.youtube.com/watch?v=RUT9clew4PM&t=396s
https://www.youtube.com/watch?v=rtZkVAOvavU&t=1582s
https://www.youtube.com/watch?v=8VjTEtK_VaM&list=PLwUPJvR9mZHg3YgQKG8QCJAqdNxZyDVfg&index=50&t=0s
The code below works. Plug in your values for AccountName, AccountKey, and BucketName and try it. If it fails, let us know what it returns in the response.
To test this, I created a new bucket with default settings: US East (Ohio) us-east-2 region and block all public access checked. I created a new user with programmatic access and assigned it the policy AmazonS3FullAccess.
During my initial test I got this 400 response, which was resolved by setting the StorageEndpoint:
400: Bad Request - The authorization header is malformed; the region 'us-east-1' is wrong; expecting 'us-east-2' (AuthorizationHeaderMalformed)
If you get a 403 response like below, check to make sure the user policy (or however you configured your permissions) is set correctly, and that your AccountName and AccountKey are correct.
403: Forbidden
program AmazonS3test;

{$APPTYPE CONSOLE}
{$R *.res}

uses
  System.SysUtils, Data.Cloud.CloudAPI, Data.Cloud.AmazonAPI, System.Classes;

const
  AccountName = ''; // IAM user Access key ID
  AccountKey = ''; // IAM user Secret access key
  BucketName = ''; // S3 bucket name
  StorageEndpoint = 's3-us-east-2.amazonaws.com'; // s3-<bucket region>.amazonaws.com
  ObjectName = 'TestFile.txt';
  MyString = 'Testing 1 2 3';

var
  ResponseInfo: TCloudResponseInfo;
  ConnectionInfo: TAmazonConnectionInfo;
  StorageService: TAmazonStorageService;
  StringStream: TStringStream;

begin
  ReportMemoryLeaksOnShutdown := True;
  ConnectionInfo := TAmazonConnectionInfo.Create(nil);
  StorageService := TAmazonStorageService.Create(ConnectionInfo);
  ResponseInfo := TCloudResponseInfo.Create;
  StringStream := TStringStream.Create(MyString);
  try
    try
      ConnectionInfo.AccountName := AccountName;
      ConnectionInfo.AccountKey := AccountKey;
      ConnectionInfo.StorageEndpoint := StorageEndpoint;
      StorageService.UploadObject(BucketName, ObjectName,
        StringStream.Bytes, True, nil, nil, amzbaPrivate, ResponseInfo);
      WriteLn('StatusCode: ' + IntToStr(ResponseInfo.StatusCode));
      WriteLn('Status    : ' + ResponseInfo.StatusMessage);
    except
      on E: Exception do
        WriteLn(E.ClassName, ': ', E.Message);
    end;
  finally
    StringStream.Free;
    ResponseInfo.Free;
    StorageService.Free;
    ConnectionInfo.Free;
  end;
end.

when calling the ListObjects operation: Missing required header for this request: x-amz-content-sha256

I am trying to copy from one bucket to another bucket in AWS with the below command:
aws s3 cp s3://bucket1/media s3://bucket2/media --profile xyz --recursive
This returns an error saying:
An error occurred (InvalidRequest) when calling the ListObjects operation: Missing required header for this request: x-amz-content-sha256
Completed 1 part(s) with ... file(s) remaining
Check your region. This error is known to happen if your region is not set correctly.
Thanks for your answers. The issue was with the permissions of the profile used; the credentials must have access rights to both S3 buckets.
I confirm it is an issue of setting the wrong region. However, the question now is:
How do you know the region of your S3 bucket?
The answer is in the link of any asset hosted there.
So, assume one of your assets hosted under bucket-1 has the link:
https://s3.eu-central-2.amazonaws.com/bucket-1/asset.png
This means your region is eu-central-2.
Alright, so run:
aws configure
and change your region accordingly.
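If you'd rather look the region up programmatically than inspect an asset URL, here is a small boto3 sketch (assuming your credentials are allowed to call GetBucketLocation on the bucket):
import boto3

s3 = boto3.client('s3')

# GetBucketLocation returns None for us-east-1 (a legacy API quirk),
# otherwise the region string, e.g. 'eu-central-2'.
location = s3.get_bucket_location(Bucket='bucket-1')['LocationConstraint']
print(location or 'us-east-1')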
I received this error in bash scripts without any SDK.
In my case the fix was that I was missing the x-amz-content-sha256 and x-amz-date headers in my cURL request.
Notably:
x-amz-date: required by AWS; must contain the timestamp of the request. The accepted format is quite flexible; I'm using the ISO 8601 basic format. Example: 20150915T124500Z
x-amz-content-sha256: required by AWS; must be the SHA-256 digest of the payload.
In my case the request carries no payload (i.e. the body is empty), which means that wherever a "payload hash" is required, I provide the SHA-256 hash of an empty string. That is the constant value e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855, and it concerns the x-amz-content-sha256 header as well.
Detailed explanation: https://czak.pl/2015/09/15/s3-rest-api-with-curl.html
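For illustration, here is a minimal Python sketch of how those two header values can be computed, using only the standard library:
import hashlib
from datetime import datetime, timezone

# ISO 8601 basic format timestamp for the x-amz-date header, e.g. 20150915T124500Z.
amz_date = datetime.now(timezone.utc).strftime('%Y%m%dT%H%M%SZ')

# SHA-256 digest of the request body for the x-amz-content-sha256 header.
# An empty body hashes to the well-known constant
# e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855.
content_sha256 = hashlib.sha256(b'').hexdigest()

print(amz_date)
print(content_sha256)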
Assuming you have set the following correctly:
AWS credentials
region
permissions of the bucket are set to publicly accessible
IAM policy of the bucket
And assuming you are using the boto3 client, another thing that could be causing the problem is the signature version in botocore.config.Config.
import boto3
from botocore import config

AWS_REGION = "us-east-1"
BOTO3_CLIENT_CONFIG = config.Config(
    region_name=AWS_REGION,
    signature_version="v4",  # <-- the problem; see the note below
    retries={"max_attempts": 10, "mode": "standard"},
)
s3_client = boto3.client("s3", config=BOTO3_CLIENT_CONFIG)
result = s3_client.list_objects(Bucket="my-bucket-name", Prefix="", Delimiter="/")
Here the signature_version cannot be "v4"; it should be "s3v4". Alternatively, the signature_version argument can be excluded altogether, as it defaults to "s3v4".
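For clarity, the working configuration would then look like this (same settings as the snippet above, with only the signature version changed):
BOTO3_CLIENT_CONFIG = config.Config(
    region_name=AWS_REGION,
    signature_version="s3v4",  # S3's Signature Version 4 identifier
    retries={"max_attempts": 10, "mode": "standard"},
)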

InvalidSignatureException when using boto3 for dynamoDB on aws

I'm facing some sort of credentials issue when trying to connect to my DynamoDB on AWS. Locally it all works fine: I can connect using env variables for AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_DEFAULT_REGION, and then:
dynamoConnection = boto3.resource('dynamodb', endpoint_url='http://localhost:8000')
When changing to live creds in the env variables and setting the endpoint_url to the DynamoDB on AWS, this fails with:
"botocore.exceptions.ClientError: An error occurred (InvalidSignatureException) when calling the Query operation: The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details."
The creds are valid, as they are used in a different app which talks to the same DynamoDB. I've also tried passing them directly to the method rather than using env variables, but the error persisted. Furthermore, to avoid any issues with trailing spaces, I've even used the credentials directly in the code. I'm using Python v3.4.4.
Is there maybe a header that should also be set that I'm not aware of? Any hints would be appreciated.
EDIT
I've now also created new credentials (to make sure they contain only alphanumeric characters), but still no dice.
You shouldn't use the endpoint_url when you are connecting to the real DynamoDB service. That's really only for connecting to local services or non-standard endpoints. Instead, just specify the region you want:
dynamoConnection = boto3.resource('dynamodb', region_name='us-west-2')
It's a sign that your clock is out of sync. Maybe you can check your:
1. Time zone
2. Time settings
If your system has automatic time settings, make sure they are enabled and working; if not,
"sudo hwclock --hctosys" should do the trick.
Just wanted to point out that when accessing DynamoDB from a C# environment (using the AWS .NET SDK) I ran into this error, and the way I solved it was to create a new pair of AWS access/secret keys.
Worked immediately after I changed those keys in the code.

Allow 3rd party app to upload file to AWS s3

I need a way to allow a 3rd party app to upload a txt file (350KB and slowly growing) to an S3 bucket in AWS. I'm hoping for a solution involving an endpoint they can PUT to, with some authorization key or the like in the header. The bucket can't be public to all.
I've read this: http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUT.html
and this: http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html
but can't quite seem to find the solution I'm seeking.
I'd suggest using a combination of AWS API Gateway, a Lambda function, and finally S3.
Your clients will call the API Gateway endpoint.
The endpoint will invoke an AWS Lambda function that writes the file out to S3.
Only the Lambda function will need rights to the bucket, so the bucket remains non-public and protected.
If you already have an EC2 instance running, you could replace the Lambda piece with custom code running on your EC2 instance, but using Lambda gives you a 'serverless' solution that scales automatically and has no minimum monthly cost.
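To make the Lambda piece concrete, here is a hedged sketch of a handler for an API Gateway proxy integration; the bucket name and object key are placeholders, and the event shape assumes the Lambda proxy format:
import base64
import boto3

s3 = boto3.client('s3')

def handler(event, context):
    # With the Lambda proxy integration, the uploaded file arrives in the
    # request body, base64-encoded when the payload is binary.
    body = event['body']
    if event.get('isBase64Encoded'):
        body = base64.b64decode(body)
    else:
        body = body.encode('utf-8')

    # 'my-protected-bucket' is a placeholder; only this function's execution
    # role needs s3:PutObject on it, so the bucket itself stays private.
    s3.put_object(Bucket='my-protected-bucket', Key='uploads/data.txt', Body=body)
    return {'statusCode': 200, 'body': 'uploaded'}
API Gateway can then require an API key or an IAM/custom authorizer on the endpoint, which covers the "authorization key in the header" requirement.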
I ended up using the AWS SDK. It's available for Java, .NET, PHP, and Ruby, so there's a very high probability the 3rd party app is using one of those. See here: http://docs.aws.amazon.com/AmazonS3/latest/dev/UploadObjSingleOpNET.html
In that case, it's just a matter of them using the SDK to upload the file. I wrote a sample version in .NET running on my local machine. First, install the AWSSDK NuGet package. Then here is the code (taken from an AWS sample):
C#:
var bucketName = "my-bucket";
var keyName = "what-you-want-the-name-of-S3-object-to-be";
var filePath = "C:\\Users\\scott\\Desktop\\test_upload.txt";
var client = new AmazonS3Client(Amazon.RegionEndpoint.USWest2);
try
{
    PutObjectRequest putRequest2 = new PutObjectRequest
    {
        BucketName = bucketName,
        Key = keyName,
        FilePath = filePath,
        ContentType = "text/plain"
    };
    putRequest2.Metadata.Add("x-amz-meta-title", "someTitle");
    PutObjectResponse response2 = client.PutObject(putRequest2);
}
catch (AmazonS3Exception amazonS3Exception)
{
    if (amazonS3Exception.ErrorCode != null &&
        (amazonS3Exception.ErrorCode.Equals("InvalidAccessKeyId")
         || amazonS3Exception.ErrorCode.Equals("InvalidSecurity")))
    {
        Console.WriteLine("Check the provided AWS Credentials.");
        Console.WriteLine("For service sign up go to http://aws.amazon.com/s3");
    }
    else
    {
        Console.WriteLine("Error occurred. Message:'{0}' when writing an object",
            amazonS3Exception.Message);
    }
}
Web.config:
<add key="AWSAccessKey" value="your-access-key"/>
<add key="AWSSecretKey" value="your-secret-key"/>
You get the access key and secret key by creating a new user in your AWS account. When you do so, they'll be generated for you and provided for download. You can then attach the AmazonS3FullAccess policy to that user, and the document will be uploaded to S3.
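If you'd rather script that user setup than click through the console, here is a hedged boto3 sketch (the user name is hypothetical):
import boto3

iam = boto3.client('iam')

# 'third-party-uploader' is a placeholder user name.
iam.create_user(UserName='third-party-uploader')
key = iam.create_access_key(UserName='third-party-uploader')['AccessKey']

# Attach the same managed policy described above.
iam.attach_user_policy(
    UserName='third-party-uploader',
    PolicyArn='arn:aws:iam::aws:policy/AmazonS3FullAccess',
)

print(key['AccessKeyId'], key['SecretAccessKey'])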
NOTE: this was a POC. In the actual 3rd party app using this, they won't want to hardcode the credentials in the web config for security purposes. See here: http://docs.aws.amazon.com/AWSSdkDocsNET/latest/V2/DeveloperGuide/net-dg-config-creds.html