AuthFailure AWS was not able to authenticate the request: access credentials are missing

I'm trying to submit the Pay step in an Amazon Flexible Payments request...
I get the following error
AuthFailure AWS was not able to authenticate the request: access
credentials are missing
However, I believe I am following the instructions to the letter:
The documentation gives the following example:
https://fps.sandbox.amazonaws.com?
Action=Pay
&AWSAccessKeyId=AKIAIOSFODNN7EXAMPLE
&CallerDescription=MyWish
&CallerReference=CallerReference02
&SenderTokenId=553ILMLCG6Z8J431H7BX3UMN3FFQU8VSNTSRNCTAASDJNX66LNZLKSZU3PI7TXIH
&Signature=0AgvXMwJmLxwdMaiE7lMHZxc6384h%2FjBkiTserQFpBQ%3D
&SignatureMethod=HmacSHA256
&SignatureVersion=2
&Timestamp=2009-10-06T05%3A49%3A52.843Z
&TransactionAmount.CurrencyCode=USD
&TransactionAmount.Value=1
&Version=2008-09-17
Here is what I generate:
https://fps.sandbox.amazonaws.com?
AWSAccessKeyId=AKIAI3...EXAMPLE
&Action=Pay
&CallerDescription=MyWebsite.com
&CallerReference=0.557658069068566
&SenderTokenId=25R743FUFUSUVPMZZ5Z83SWP1YPNX8YDPFR8XCDEMLH4L1PPMEZ65VLT8LE6UXPR
&SignatureMethod=HmacSHA256
&SignatureVersion=2
&Timestamp=2013-07-06T13%3A56%3A03-07%3A00
&TransactionAmount.currencyCode=USD
&TransactionAmount.value=3
&Version=2008-09-17
&signature=9k%2B4Txi2ZzUj62QBK3TwV6x0KWfkNY9YWpqty8%2B3XKk%3D
I know that my AWS credentials are good because I have to use them in the immediately preceding step of getting the SenderTokenID. I'm just trying to finish up the transaction.
Any ideas? This has me totally baffled and has been a tragic waste of human life so far.

OK, so the solution is:
1) Although the opposite is documented for the preceding step (getting a token ID), for this specific request FPS cares about the case of the parameter names - they should all be camel case.
2) My message-signing function was missing a default '/' to cover the case of a blank path (the endpoint here has nothing after the host name).
Here's the HTTP GET that worked (obfuscating AWS credentials)
https://fps.sandbox.amazonaws.com/?AWSAccessKeyId=AKIAIEXAMPLE
&Action=Pay
&CallerDescription=MyWebsite
&CallerReference=0.7753969375044107
&SenderTokenId=25R7R3NUFBS6VPRZV5Z53AWP2YLNXAYEPFJ8BCDGMXH4V1FPMZZ95VATZLEFUCPG
&SignatureMethod=HmacSHA256
&SignatureVersion=2
&Timestamp=2013-07-06T14%3A38%3A41-07%3A00
&TransactionAmount.CurrencyCode=USD
&TransactionAmount.Value=3
&Version=2008-09-17
&Signature=Ijf0hqQuSi5zU%2BF1PUK1LBYvsK9AVHacrqK1hJVzffk%3D
And here is the message as it was actually signed with HMAC-SHA256 (the AWS secret key as the signing key):
GET
fps.sandbox.amazonaws.com
/
AWSAccessKeyId=AKIAIEXAMPLE&Action=Pay&CallerDescription=MyWebsite&CallerReference=0.7753969375044107&SenderTokenId=25R7R3NUFBS6VPRZV5Z53AWP2YLNXAYEPFJ8BCDGMXH4V1FPMZZ95VATZLEFUCPG&SignatureMethod=HmacSHA256&SignatureVersion=2&Timestamp=2013-07-06T14%3A38%3A41-07%3A00&TransactionAmount.CurrencyCode=USD&TransactionAmount.Value=3&Version=2008-09-17
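For anyone reconstructing that signing step in code, here is a minimal Python sketch of the procedure described above (Signature Version 2 style: verb, host, path, and the sorted URL-encoded query string joined by newlines, then HMAC-SHA256 with the secret key and base64 encoding). The key, secret, and token values below are placeholders, not the real ones:
import base64
import hashlib
import hmac
import urllib.parse

def sign_request(verb, host, path, params, secret_key):
    # Canonical query string: parameters sorted by name, keys and values URL-encoded.
    query = "&".join(
        urllib.parse.quote(k, safe="-_.~") + "=" + urllib.parse.quote(str(v), safe="-_.~")
        for k, v in sorted(params.items())
    )
    # String to sign: verb, host, path (defaulting to '/' when the URL has no path), query.
    string_to_sign = "\n".join([verb, host, path or "/", query])
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(), hashlib.sha256).digest()
    return base64.b64encode(digest).decode()

params = {
    "AWSAccessKeyId": "AKIAIEXAMPLE",          # placeholder
    "Action": "Pay",
    "CallerDescription": "MyWebsite",
    "CallerReference": "0.7753969375044107",
    "SenderTokenId": "SENDER-TOKEN-ID",        # placeholder
    "SignatureMethod": "HmacSHA256",
    "SignatureVersion": "2",
    "Timestamp": "2013-07-06T14:38:41-07:00",
    "TransactionAmount.CurrencyCode": "USD",
    "TransactionAmount.Value": "3",
    "Version": "2008-09-17",
}
print(sign_request("GET", "fps.sandbox.amazonaws.com", "", params, "SECRET-KEY"))
Note that the query string signed here is the same one that ends up in the request, which is why it has to match character for character, including the case of the parameter names.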

Related

s3_out: unable to sign request without credentials set

I am trying to use "instance_profile_credentials" on an EC2 instance as the credentials. However, I get:
2021-09-16 14:16:50 +0000 [error]: #0 unexpected error error_class=RuntimeError error="can't call S3 API. Please check your credentials or s3_region configuration. error = #<Aws::Errors::MissingCredentialsError: unable to sign request without credentials set>"
I'm pretty sure my s3_region is correct, and I can use the CLI ("aws s3 cp ") to copy an object at the command line, so I'm not sure what's going wrong.
I wonder if that's because I am behind an HTTP proxy. However, I have already set the "proxy_uri" parameter. Not sure what else I can do to check what's going wrong?

Error while doing aws iam list-users using AWS_CLI

I am trying to run aws iam list-users in the AWS CLI but got an error. The error is:
An error occurred (SignatureDoesNotMatch) when calling the ListUsers operation: Signature not yet current: 20210606T055848Z is still later than 20210605T174350Z (20210605T172850Z + 15 min.)
If anyone knows the solution, please tell me.
The error is pretty clear: the request is signed for 20210606T055848Z but it "currently" is 20210605T172850Z. In a different format: 05:58:48 on 2021-06-06 (signed) vs. 17:28:50 on 2021-06-05 (current). There is a difference of about twelve and a half hours between the two timestamps.
That means either the local clock of the computer / process creating the request is wrong, or the request is intentionally signed for the future and is simply not meant to be submitted yet. Solution: fix your clock, change the code not to sign for the future, or submit the request at a later point in time.
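Not part of the original answer, but as a quick way to check for clock skew you can compare the local UTC time against the Date header returned by an AWS endpoint; the choice of sts.amazonaws.com below is arbitrary:
import email.utils
import http.client
from datetime import datetime, timezone

# Any HTTPS response carries a Date header set by the server's clock.
conn = http.client.HTTPSConnection("sts.amazonaws.com")
conn.request("HEAD", "/")
server_time = email.utils.parsedate_to_datetime(conn.getresponse().getheader("Date"))
print("local minus server:", datetime.now(timezone.utc) - server_time)
If this prints a difference of more than a few minutes (15 minutes is the tolerance mentioned in the error), the local clock is the problem.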

Can I set AWS credentials on a Spring Boot/Cloud @SqsListener? (Java)

Double newbie here, to both SQS and Spring Cloud. I've created (using the console) an SQS queue. The company wiki I'm working from says then to generate temporary credentials, which come out looking like this:
aws_access_key_id = <secret>
aws_secret_access_key = <secret>
region = us-west-2
aws_session_token = <secret and VERY LONG, like 240 characters>
NOTE: more on that "aws_session_token" later.
So, once I have done that, I can send a message from the CLI, like this.
`aws --endpoint-url https://sqs.us-west-2.amazonaws.com/99999999999999/<queue name>.fifo sqs send-message --queue-url https://sqs.us-west-2.amazonaws.com/99999999999999/<queue name>.fifo --message-body "cli test msg 2" --message-group-id "azgroup"`
So far so good. But now, I want to implement an SqsListener to listen continuously. So, I checked out the code here https://github.com/sixthpoint/spring-boot-sqs-fifo-tutorial, which is a minimal Spring Cloud SQS application, and set all the configs as shown in the readme. My listener, right now, looks simply like this:
@SqsListener(value = SQSURL)
public void process(String json) throws IOException {
    System.out.println("here");
    System.out.println(json);
}
But, when I try to start the application up, I get this error:
com.amazonaws.services.sqs.model.AmazonSQSException: The security token included in the request is invalid. (Service: AmazonSQS; Status Code: 403; Error Code: InvalidClientTokenId; Request ID:....)
I think what's going on is that at startup, the listener is trying to contact my queue, and is being rejected because it's not sending that aws_session_token. (The company wiki, again, says this: "You will see aws_session_token. This is something you have not had before. It is required for your key to work!")
So, is there a way to explicitly set my AWS parameters, either in the Java code where the @SqsListener is defined, or somewhere in the configs, such that the aws_session_token gets passed? It doesn't seem possible to pass an AwsCredentials object. (edit) And it doesn't seem that that would help me anyway, since AwsCredentials doesn't contain that field.
Or . . . is there some other way of solving this?
Answering, or at least partially answering, my own question: It turns out that the aws_session_token is required when, and only when, using temporary AWS credentials, which as I noted is what I've been given to work with. It has to be added to any CLI operations, but there is no way to set it in the AwsCredentials object in Java code. So that's not going to help me. It may just not be possible to connect from Java code when using temporary credentials. If I'm wrong and there is a way, please let me know.
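To illustrate the general point that temporary credentials only authenticate when the session token accompanies the key pair, here is a boto3 sketch (Python rather than Spring, purely for illustration; the key values and queue URL are placeholders taken from the question):
import boto3

# With temporary credentials, all three values must be supplied together.
sqs = boto3.client(
    "sqs",
    region_name="us-west-2",
    aws_access_key_id="<aws_access_key_id>",
    aws_secret_access_key="<aws_secret_access_key>",
    aws_session_token="<aws_session_token, the very long one>",
)
# Assumes content-based deduplication is enabled on the FIFO queue, as in the CLI example above.
sqs.send_message(
    QueueUrl="https://sqs.us-west-2.amazonaws.com/99999999999999/<queue name>.fifo",
    MessageBody="sdk test msg",
    MessageGroupId="azgroup",
)
Leaving out aws_session_token typically reproduces the same "security token included in the request is invalid" (InvalidClientTokenId) style of error described above.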

Spark is inventing its own AWS secretKey

I'm trying to read an S3 bucket from Spark, and up until today Spark has always complained that the request returns 403:
hadoopConf = spark_context._jsc.hadoopConfiguration()
hadoopConf.set("fs.s3a.access.key", "ACCESSKEY")
hadoopConf.set("fs.s3a.secret.key", "SECRETKEY")
hadoopConf.set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
logs = spark_context.textFile("s3a://mybucket/logs/*")
Spark was saying .... Invalid Access key [ACCESSKEY]
However, with the same ACCESSKEY and SECRETKEY this was working with the aws-cli:
aws s3 ls mybucket/logs/
and in Python with boto3 this was working:
import boto3

resource = boto3.resource("s3", region_name="us-east-1")
resource.Object("mybucket", "logs/text.py") \
    .put(Body=open("text.py", "rb"), ContentType="text/x-py")
so my credentials ARE valid and the problem is definitely something with Spark..
Today I decided to turn on the "DEBUG" log for all of Spark and, to my surprise... Spark is NOT using the [SECRETKEY] I have provided but instead... adds a random one???
17/03/08 10:40:04 DEBUG request: Sending Request: HEAD https://mybucket.s3.amazonaws.com / Headers: (Authorization: AWS ACCESSKEY:[RANDON-SECRET-KEY], User-Agent: aws-sdk-java/1.7.4 Mac_OS_X/10.11.6 Java_HotSpot(TM)_64-Bit_Server_VM/25.65-b01/1.8.0_65, Date: Wed, 08 Mar 2017 10:40:04 GMT, Content-Type: application/x-www-form-urlencoded; charset=utf-8, )
This is why it still returns 403! Spark is not using the key I provide with fs.s3a.secret.key but instead invents a random one??
For the record I'm running this locally on my machine (OSX) with this command
spark-submit --packages com.amazonaws:aws-java-sdk-pom:1.11.98,org.apache.hadoop:hadoop-aws:2.7.3 test.py
Could some one enlighten me on this?
(updated as my original one was downvoted as clearly considered unacceptable)
The AWS auth protocol doesn't send your secret over the wire. It signs the message. That's why what you see isn't what you passed in.
For further information, please reread.
I ran into a similar issue. Requests that were using valid AWS credentials returned a 403 Forbidden, but only on certain machines. Eventually I found out that the system time on those particular machines was 10 minutes behind. Synchronizing the system clock solved the problem.
Hope this helps!
This random passkey is very intriguing. Maybe the AWS SDK is getting the password from the OS environment.
In Hadoop 2.8, the default AWS provider chain shows the following list of providers:
BasicAWSCredentialsProvider
EnvironmentVariableCredentialsProvider
SharedInstanceProfileCredentialsProvider
Order, of course, matters! The AWSCredentialProviderChain gets the keys from the first provider that supplies them:
if (credentials.getAWSAccessKeyId() != null &&
    credentials.getAWSSecretKey() != null) {
    log.debug("Loading credentials from " + provider.toString());
    lastUsedProvider = provider;
    return credentials;
}
See the code in "GrepCode for AWSCredentialProviderChain".
I faced a similar problem using profile credentials. The SDK was ignoring the credentials inside ~/.aws/credentials (as good practice, I encourage you not to store credentials inside the program in any way).
My solution...
Set the credentials provider to use ProfileCredentialsProvider
sc._jsc.hadoopConfiguration().set("fs.s3a.endpoint", "s3.eu-central-1.amazonaws.com") # yes, I am using central eu server.
sc._jsc.hadoopConfiguration().set('fs.s3a.aws.credentials.provider', 'com.amazonaws.auth.profile.ProfileCredentialsProvider')
Folks, go for the IAM configuration based on roles ... that will open up the S3 access policies that should be added to the EMR default one.

Amazon SNS CreatePlatformApplication returns error when reusing platform applications

I had code that was working that would create a new platform application for every message that went out. I thought that was wasteful so I tried to change the code to use list_platform_applications to get available applications and reuse the one that has the proper name (part of the PlatformApplicationArn).
This works for several messages in a row, and then suddenly I get this error from CreatePlatformApplication:
{"Error":{"Code":"InvalidParameter","Message":"Invalid parameter: This
endpoint is already registered with a different
token.","Type":"Sender"},"RequestId":"06bd3443-598e-5c06-9f5c-7f84349ea067"}
That doesn't even make sense. I'm creating an endpoint. I didn't pass one in. Is it really complaining about the endpoint it's returning?
According to the Amazon documentation:
"The CreatePlatformEndpoint action is idempotent, so if the requester
already owns an endpoint with the same device token and attributes,
that endpoint's ARN is returned without creating a new endpoint."
So it seems to me that if there's an appropriate one it will be returned; otherwise, a brand new one is created.
Am I missing something?
Oh darn. I think I found the reason for this behavior. After facing this issue, I made sure that each token was only uploaded once to AWS SNS. When testing this, I realized that nevertheless I ended up with multiple endpoints with the same token - huh???
It turned out that these duplicated tokens resulted from outdated tokens being uploaded to AWS SNS. After creating an endpoint using an outdated token, SNS would automagically revive the endpoint by updating it with the current device token (which afaik is delivered back from GCM as a canonical ID once you try to send push messages to outdated tokens).
So e.g. uploading these (made-up) tokens and custom data
APA9...YFDw, {original_token: APA9...YFDw}
APA9...XaSd, {original_token: APA9...XaSd} <-- Assume this token is outdated
APA9...sVQa, {original_token: APA9...sVQa}
might result in something like this - i.e. different endpoints with identical tokens:
APA9...YFDw, {original_token: APA9...YFDw}, arn:aws:sns:eu-west-1:4711:endpoint/GCM/myapp/daf64...5c204
APA9...YFDw, {original_token: APA9...XaSd}, arn:aws:sns:eu-west-1:4711:endpoint/GCM/myapp/a980f...e3c82 <-- Duplicate token!
APA9...sVQa, {original_token: APA9...sVQa}, arn:aws:sns:eu-west-1:4711:endpoint/GCM/myapp/14777...7d9ff
This scenario in turn seems to lead to the above error on subsequent attempts to create endpoints using outdated tokens. On the one hand, it seems correct that subsequent requests fail. On the other hand, intuitively I have the gut feeling that the duplication of tokens taking place is wrong, or at least difficult to handle. Maybe once SNS discovers that a token is outdated and needs to be changed, it could first check whether there is already another endpoint with the same token...
I will research on this a bit more and see if I can find a way to handle this properly.
Cheers
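Not from the original posters, but for reference: the SNS mobile push documentation suggests storing the endpoint ARN and keeping its Token attribute in sync with the latest device token, instead of creating a new endpoint every time, which avoids exactly the duplicate-token situation described above. A rough boto3 sketch of that flow (the not-found / retry handling is omitted, and the names are placeholders):
import boto3

sns = boto3.client("sns", region_name="eu-west-1")

def ensure_endpoint(platform_application_arn, device_token, stored_endpoint_arn=None):
    # First registration: create the endpoint once and persist its ARN somewhere.
    if stored_endpoint_arn is None:
        stored_endpoint_arn = sns.create_platform_endpoint(
            PlatformApplicationArn=platform_application_arn,
            Token=device_token,
        )["EndpointArn"]

    attributes = sns.get_endpoint_attributes(EndpointArn=stored_endpoint_arn)["Attributes"]

    # If the stored endpoint holds an outdated token (or was disabled),
    # update it in place rather than registering the token again.
    if attributes.get("Token") != device_token or attributes.get("Enabled") != "true":
        sns.set_endpoint_attributes(
            EndpointArn=stored_endpoint_arn,
            Attributes={"Token": device_token, "Enabled": "true"},
        )
    return stored_endpoint_arn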
Had the same issue, with the device reporting one token (outdated according to GCM) and the SNS retrieving/storing another.
We solved it by clearing the app cache on the device and reopening the app (which, in our case, re-registered the device with the GCM service), generating the same (non-outdated) token that SNS was attempting to push to.