How to test credentials for AWS Command Line Tools

Is there a command/subcommand that can be passed to the aws utility that can 1) verify that the credentials in the ~/.aws/credentials file are valid, and 2) give some indication which user the credentials belong to? I'm looking for something generic that doesn't make any assumptions about the user having permissions to IAM or any specific service.
The use case for this is a deploy-time sanity check to make sure that the credentials are good. Ideally there would be some way to check the return value and abort the deploy if there are invalid credentials.

Use GetCallerIdentity:
aws sts get-caller-identity
Unlike other API/CLI calls, it always works, regardless of your IAM permissions.
You will get output in the following format:
{
    "Account": "123456789012",
    "UserId": "AR#####:#####",
    "Arn": "arn:aws:sts::123456789012:assumed-role/role-name/role-session-name"
}
The exact ARN format depends on the type of credentials, but it often includes the name of the (human) user.
It uses the standard AWS CLI exit codes, returning 0 on success and 255 if you have no valid credentials.
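For the deploy-time sanity check the question asks about, the exit code is all you need. A minimal sketch of such a check (the AWS_IDENTITY_CMD override is a hypothetical hook added here only so the logic can be exercised without real credentials):

```shell
#!/bin/sh
# Abort a deploy when the configured AWS credentials are invalid.
# `aws sts get-caller-identity` exits 0 with valid credentials and
# non-zero (255) when they are missing or invalid.
check_aws_credentials() {
    # AWS_IDENTITY_CMD lets a test substitute the real CLI call.
    ${AWS_IDENTITY_CMD:-aws sts get-caller-identity} > /dev/null 2>&1
}

# In a deploy script you would call it like:
#   check_aws_credentials || { echo "invalid AWS credentials" >&2; exit 1; }
```

Relying only on the exit code keeps the check independent of the output format and of any IAM permissions.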

There is a straightforward way: aws iam get-user tells you the details about who you are (the current IAM user), provided the user has IAM privileges.
A couple of CLI calls support the --dry-run flag, such as aws ec2 run-instances, which tells you whether you have the necessary configuration and credentials to perform the operation.
There is also --auth-dry-run, which "checks whether you have the required permissions for the command, without actually running the command. If you have the required permissions, the command returns DryRunOperation; otherwise, it returns UnauthorizedOperation." [From AWS Documentation - Common Options]
You can also list the IAM access keys in the Management Console and cross-check who has been assigned which key.
The best way to understand which user or role has which privileges is to use the IAM Policy Simulator.

I was in need of the same, so I wrote aws-role.
I also wanted the command to output the session time remaining before logout.
I have used it in many shell scripts to automate my AWS use, and it has worked well for me.
My script parses ~/.aws/credentials.
PS: I am also thinking of enhancing it to support JSON output.
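The profile-parsing part of such a script can be sketched in a few lines of shell; this reads the INI section headers out of a credentials file (the file path is a parameter so any INI-style file works):

```shell
#!/bin/sh
# Print the profile names defined in an AWS credentials file.
# Profiles are INI section headers such as [default] or [dev].
list_aws_profiles() {
    creds_file="${1:-$HOME/.aws/credentials}"
    sed -n 's/^\[\(.*\)\]$/\1/p' "$creds_file"
}
```

For example, on a file containing [default] and [dev] sections it prints those two names, one per line.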

How to get AWS policy needed to run a specific CLI command?

I am new to AWS. I am trying to import an OVA to an AMI and use it for an EC2 instance as described here:
One of the commands it asks you to run is
aws ec2 describe-import-image-tasks --import-task-ids import-ami-1234567890abcdef0
When I do this I get
An error occurred (UnauthorizedOperation) when calling the DescribeImportImageTasks operation: You are not authorized to perform this operation.
I believe this means I need to add the appropriate role (with a policy allowing describe-import-image-tasks) to my CLI user.
In the IAM console, I see a search feature to filter policies for a role which I will assign to my user. However, it doesn't return any results for describe-import-image-tasks.
Is there an easy way to determine which policies are needed to run an AWS Cli command?
There is not an easy way. The CLI commands usually (but not always) map to a single IAM action that you need permission to perform. In your case, it appears you need the ec2:DescribeImportImageTasks permission, as listed here.
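For reference, a minimal customer-managed policy granting just that action might look like the sketch below; whether you also need related actions such as ec2:ImportImage depends on the rest of the import workflow:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ec2:DescribeImportImageTasks",
            "Resource": "*"
        }
    ]
}
```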

How does assuming multiple IAM roles at the same time work?

I have an ECS task using a task role to access a DynamoDB table in the same account A. It also requires access to a DynamoDB table in a different account B, which is granted through assuming an IAM role.
My understanding is that after assuming the role, the task now has a set of temporary credentials for each role. This allows the task to use the new credentials to make requests to account B's table, while still using the original credentials to make requests to account A's table.
Assuming this is correct, how are the creds used for a given request determined? Does it only use the cross account role for making account B requests, and the original creds for the account A requests?
What if access to account B's S3 buckets is also required, and those permissions were granted to account A and then given to the original task role? After assuming the cross-account role, does the cross-account S3 request fail because the assumed role doesn't have S3 permissions, even though the original task role does?
AWS resources cannot just assume a role themselves. They have to be told to do so, using the SDK of your choice (or the CLI). Once you understand how that works, the rest becomes a lot clearer. Since you mentioned an EC2 instance, I'll use the CLI to show this:
# Call sts assume-role and capture the three credential values into a shell array
AcctCredentials=($(aws sts assume-role \
    --role-arn "$1" \
    --role-session-name TheSessionName \
    --query '[Credentials.AccessKeyId,Credentials.SecretAccessKey,Credentials.SessionToken]' \
    --output text))
unset AWS_SECURITY_TOKEN
echo "Security Tokens for Cross Account Access received"
# Export the temporary credentials so subsequent CLI calls use them
export AWS_ACCESS_KEY_ID=${AcctCredentials[0]}
echo $AWS_ACCESS_KEY_ID
export AWS_SECRET_ACCESS_KEY=${AcctCredentials[1]}
export AWS_SESSION_TOKEN=${AcctCredentials[2]}
export AWS_SECURITY_TOKEN=${AcctCredentials[2]}
Doing that sets the environment variables in the EC2 instance to the new credentials. This means any other CLI command, or any script launched from the same shell, will use these credentials.
If you need to go back to the credentials from before, you will either need to restore the previous credentials you saved or exit the shell this command was run in and return to your default credentials.
If this were in a Lambda, for instance, you might use Python and Boto3 to do something very similar; it would replace the tokens there.
It is also entirely possible to save your tokens as a profile, and then specify that profile per command.
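For example, the temporary credentials could be saved under a profile in ~/.aws/credentials; "crossaccount" is a hypothetical profile name and the three values come from the assume-role output:

```ini
# ~/.aws/credentials -- "crossaccount" is a hypothetical profile name
[crossaccount]
aws_access_key_id = <AccessKeyId from assume-role>
aws_secret_access_key = <SecretAccessKey from assume-role>
aws_session_token = <SessionToken from assume-role>
```

Then per command: aws s3 ls --profile crossaccount. Alternatively, a profile in ~/.aws/config with role_arn and source_profile lets the CLI perform the assume-role call for you.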

AWS Access keys without accessing the console

Is it possible to get AWS access keys without generating one from the console?
I want to be able to create a script that will ask for user/password/(TOTP) and generate temporary access keys in order to perform multiple tasks.
The goal being to be able to give one program to dev so they don't even have to deal with access keys every time since they know their password.
I have looked everywhere, I believe, but cannot find any resources on whether it is even doable.
Thank you!
Yes, it is doable, both via the Command Line Interface (CLI) and via the API.
The CLI command is:
aws iam create-access-key
The API call is:
CreateAccessKey
See the documentation reference below for more information. It is a good read and covers best-practice topics like rotating your keys with the API or CLI. Another good practice is NOT using your root account for everyday use.
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html#Using_CreateAccessKey_CLIAPI
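Note that create-access-key creates long-term keys. Since the question asks for temporary credentials tied to an MFA/TOTP code, sts get-session-token is closer to that goal. A sketch (the MFA device ARN is a placeholder, and the AWS_CMD override is a hypothetical hook added only so the wrapper can be tested without the real CLI):

```shell
#!/bin/sh
# Exchange long-term credentials plus an MFA code for temporary credentials.
# $1 = MFA device ARN (e.g. arn:aws:iam::123456789012:mfa/some-user)
# $2 = current TOTP code from the device
get_temp_credentials() {
    # AWS_CMD lets a test substitute the real CLI call.
    ${AWS_CMD:-aws} sts get-session-token \
        --serial-number "$1" \
        --token-code "$2" \
        --duration-seconds 3600
}
```

The call returns an AccessKeyId, SecretAccessKey, and SessionToken that expire after the requested duration, which matches the "devs never handle long-term keys" goal.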
UPDATE KEY ROTATION EXAMPLE USING CLI:
Before you try this from the CLI - create a new IAM user with Administrator privileges.
Step 1 - configure the CLI to use the Administrator keys:
aws configure
then enter the Access Key ID and Secret Access Key for Administrator.
Step 2 - list the keys for user foo
aws iam list-access-keys --user-name foo
The output will be similar to:
{
    "AccessKeyMetadata": [
        {
            "UserName": "foo",
            "AccessKeyId": "AKIAIY*****A7YBHCBEBQ",
            "Status": "Active",
            "CreateDate": "2018-11-10T14:02:56Z"
        }
    ]
}
Verify there is only one, because a user can have a maximum of two. If they already have two, step 3 will fail.
Step 3 - create a new key for user foo
aws iam create-access-key --user-name foo
You will see output similar to the below. This is the only time you will see the secret key for the new pair, so you need to preserve it.
{
    "AccessKey": {
        "UserName": "foo",
        "AccessKeyId": "****GAGA*****WEFWEWE",
        "Status": "Active",
        "SecretAccessKey": "*****sEcReT*****Tasdasd",
        "CreateDate": "2018-12-01T19:16:41Z"
    }
}
Your new key is created and active. Now you need to remove the older key to complete the rotation. I will leave that step up to you.
If you get the error:
An error occurred (InvalidClientTokenId) when calling the ListAccessKeys operation: The security token included in the request is invalid.
then this is a sign that the credentials you are using are old, invalid, or don't have the correct privileges.

How can I determine which role I've received after logging into AWS Cognito via JavaScript?

After I log in to AWS Cognito via my browser, I get an access key and a secret access key along w/a session token, but I can't see which role I've been assigned. I know which role I should be assigned, but is there a way to programmatically validate this?
I'm trying to use the role I've been assigned to access a restricted bucket, but am so far not having any success and one of the ways for me to trouble shoot this is to determine which role I've been assigned.
With sts.GetCallerIdentity.
CLI example (the JS link is above):
aws --profile XXXXXXXX sts get-caller-identity
{
    "UserId": "AIDAIXXXXXXXXTHOVLM",
    "Account": "123456789098",
    "Arn": "arn:aws:iam::123456789098:user/dan"
}
It is indeed frustrating to debug without this. It didn't use to exist, but now it does. Hurrah!

The AWS Access Key Id does not exist in our records

I created a new Access Key and configured that in the AWS CLI with aws configure. It created the .ini file in ~/.aws/config. When I run aws s3 ls it gives:
A client error (InvalidAccessKeyId) occurred when calling the ListBuckets operation: The AWS Access Key Id you provided does not exist in our records.
AmazonS3FullAccess policy is also attached to the user. How to fix this?
It might be that you have old keys exported via environment variables (e.g. in bash_profile). Since environment variables take precedence over the credentials file, you get the error that the access key ID does not exist.
Remove the old keys from bash_profile and you should be good to go.
This happened to me once when I forgot I had credentials in bash_profile; it gave me a headache for quite some time :)
It looks like some values have already been set for the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
If that is the case, you will see some values when executing the commands below.
echo $AWS_SECRET_ACCESS_KEY
echo $AWS_ACCESS_KEY_ID
If you are using aws configure, you need to unset these variables.
To do so, execute the commands below.
unset AWS_ACCESS_KEY_ID
unset AWS_SECRET_ACCESS_KEY
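The precedence described above can be checked mechanically. A small sketch that mirrors a simplified version of the CLI's lookup order (the real provider chain has more sources, such as instance profiles):

```shell
#!/bin/sh
# Report which credential source the AWS CLI would consult first.
# Environment variables win over the shared credentials file;
# this is a simplification of the full provider chain.
aws_credential_source() {
    if [ -n "$AWS_ACCESS_KEY_ID" ]; then
        echo "environment"
    elif [ -f "$HOME/.aws/credentials" ]; then
        echo "credentials-file"
    else
        echo "none"
    fi
}
```

Running it before and after the unset commands above shows which source a subsequent aws call would actually use.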
You may also need to add aws_session_token in the credentials file, along with aws_access_key_id and aws_secret_access_key.
None of the upvoted answers worked for me. Finally, I passed the credentials inside the Python script, using the client API:
import boto3

# Pass credentials explicitly instead of relying on config files or env vars
client = boto3.client(
    's3',
    aws_access_key_id=ACCESS_KEY,
    aws_secret_access_key=SECRET_KEY,
    aws_session_token=SESSION_TOKEN)
Note that the aws_session_token argument is optional. Hard-coding credentials is not recommended for production work, but it makes life easier for a simple trial.
For me, I was relying on IAM EC2 roles to give access to our machines to specific resources.
I didn't even know there was a credentials file at ~/.aws/credentials, until I rotated/removed some of our accessKeys at the IAM console to tighten our security, and that suddenly made one of the scripts stop working on a single machine.
Deleting that credentials file fixed it for me.
I made the mistake of setting my variables with quotation marks like this:
AWS_ACCESS_KEY_ID="..."
You may have configured AWS credentials correctly, but using these credentials, you may be connecting to some specific S3 endpoint (as was the case with me).
Instead of using:
aws s3 ls
try using:
aws --endpoint-url=https://<your_s3_endpoint_url> s3 ls
Hope this helps those facing a similar problem.
You can configure multiple profiles in the ~/.aws/credentials file:
[<profile_name>]
aws_access_key_id = <access_key>
aws_secret_access_key = <access_key_secret>
If you are using multiple profiles, then use:
aws s3 ls --profile <profile_name>
You may need to set the AWS_DEFAULT_REGION environment variable.
In my case, I was trying to provision a new bucket in Hong Kong region, which is not enabled by default, according to this:
https://docs.aws.amazon.com/general/latest/gr/s3.html
It's not totally related to the OP's question, but it is related to the topic, so in case anyone else like me gets trapped in this edge case:
I had to enable that region manually, before operating on that AWS s3 region, following this guide: https://docs.aws.amazon.com/general/latest/gr/rande-manage.html
I have been looking for information about this problem and found this post. I know it is old, but I would like to leave this here in case anyone else has problems.
Okay, I have installed the AWS CLI and opened:
It seems that you need to run aws configure to add the current credentials. Once changed, I can access
Looks like ~/.aws/credentials was not created. Try creating it manually with this content:
[default]
aws_access_key_id = sdfesdwedwedwrdf
aws_secret_access_key = wedfwedwerf3erfweaefdaefafefqaewfqewfqw
(on my test box, if I run aws command without having credentials file, the error is Unable to locate credentials. You can configure credentials by running "aws configure".)
Can you try running these two commands from the same shell you are trying to run aws:
$ export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
$ export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
and then try aws command.
Another thing that can cause this, even if everything is set up correctly, is running the command from a Makefile. For example, I had a rule:
awssetup:
	aws configure
	aws s3 sync s3://mybucket.whatever .
When I ran make awssetup I got the error: fatal error: An error occurred (InvalidAccessKeyId) when calling the ListObjects operation: The AWS Access Key Id you provided does not exist in our records.. But running the commands from the command line worked.
Adding one more answer, since none of the above worked for me.
In the AWS console, check your credentials (My Security Credentials) and see if you have entered the right ones.
Thanks to this discussion:
https://forums.aws.amazon.com/message.jspa?messageID=771815
This can happen when there's an issue with your AWS secret access key. After messing around with AWS Amplify, I ran into this issue. The quickest fix is to create a new pair of AWS Access Key ID and AWS Secret Access Key and run aws configure again.
This worked for me. I hope it helps.
To those of you who run aws s3 ls and get this exception: make sure you have permissions for all regions under the provided AWS account. Running aws s3 ls tries to pull all the S3 buckets under the AWS account; therefore, if you don't have permissions for all regions, you'll get this exception - An error occurred (InvalidAccessKeyId) when calling the ListBuckets operation: The AWS Access Key Id you provided does not exist in our records.
Follow Describing your Regions using the AWS CLI for more info.
I had the same problem on Windows using the aws-sdk module for JavaScript. I had changed my IAM credentials, and the problem persisted even when I passed the new credentials through the update method like this:
s3.config.update({
    accessKeyId: 'ACCESS_KEY_ID',
    secretAccessKey: 'SECRET_ACCESS_KEY',
    region: 'REGION',
});
After a while I found that the aws-sdk module had created a credentials file on Windows at C:\Users\User\.aws\credentials. The credentials inside this file take precedence over the data passed through the update method.
The solution for me was to write the new credentials into C:\Users\User\.aws\credentials rather than use s3.config.update.
Export the variables below, taken from the credentials file in the ~/.aws/ directory:
export AWS_ACCESS_KEY_ID=AK###########GW
export AWS_SECRET_ACCESS_KEY=g#############################J
Hopefully this saves others from hours of frustration:
Call AWS.config.update(...) before initializing the S3 client.
const AWS = require('aws-sdk');

AWS.config.update({
    accessKeyId: 'AKIAW...',
    secretAccessKey: 'ptUGSHS....'
});

const s3 = new AWS.S3();
Credits to this answer:
https://stackoverflow.com/a/61914974/11110509
I tried the steps below and it worked:
1. cd ~
2. cd .aws
3. vi credentials
4. Delete the lines
aws_access_key_id =
aws_secret_access_key =
by placing the cursor on each line and pressing dd (the vi command to delete a line). Delete both lines and check again.
If you have an AWS Educate account and you get this problem:
An error occurred (InvalidAccessKeyId) when calling the ListBuckets operation: The AWS Access Key Id you provided does not exist in our records.
The solution is here:
Go to your C:/ drive and search for the .aws folder inside your user folder in Windows.
Inside that folder you will find the "credentials" file; open it with Notepad.
Paste the whole key credential from your AWS account into that file and save it.
Now you are ready to use your AWS Educate account.
Assuming you have already checked the Access Key ID and Secret, you might want to check the file team-provider-info.json, which can be found under the amplify/ folder:
"awscloudformation": {
    "AuthRoleName": "<role identifier>",
    "UnauthRoleArn": "arn:aws:iam::<specific to your account and role>",
    "AuthRoleArn": "arn:aws:iam::<specific to your account and role>",
    "Region": "us-east-1",
    "DeploymentBucketName": "<role identifier>",
    "UnauthRoleName": "<role identifier>",
    "StackName": "amplify-test-dev",
    "StackId": "arn:aws:cloudformation:<stack identifier>",
    "AmplifyAppId": "<id>"
}
The IAM role referred to here should be active in the IAM console.
If you get this error in an Amplify project, check that "awsConfigFilePath" is not configured in amplify/.config/local-aws-info.json
In my case I had to remove it, so my environment looked like the following:
{
    // **INCORRECT**
    // This will not use your profile in ~/.aws/credentials, but instead the
    // specified config file path
    // "dev": {
    //     "configLevel": "project",
    //     "useProfile": false,
    //     "awsConfigFilePath": "/Users/dev1/.amplify/awscloudformation/cEclTB7ddy"
    // },
    // **CORRECT**
    "dev": {
        "configLevel": "project",
        "useProfile": true,
        "profileName": "default"
    }
}
Maybe you need to activate your API keys in the web console; I just saw that mine were inactive for some reason...
Thanks, everyone. This helped me solve it.
Something somehow happened which changed the keys, and I didn't realize it since everything was working fine, until I connected to S3 from Spark; then the error started coming from the command line as well, even for aws s3 ls.
Steps to solve:
Run aws configure to check if keys are set up (verify from the last 4 characters and just keep pressing Enter).
In the AWS console, go to Users, click on the user, go to Security credentials, and check if the key is the same one showing up in aws configure.
If they are not the same, generate a new key and download the CSV.
Run aws configure and set up the new keys.
Try aws s3 ls now.
Change the keys in all places; in my case it was the configs in Cloudera.
I couldn't figure out how to get the system to accept my Vocareum credentials, so I took advantage of the fact that if you configure your instance to use IAM roles, the SDK automatically selects the IAM credentials for your application, eliminating the need to manually provide credentials.
Once a role with appropriate permissions was applied to the EC2 instance, I didn't need to provide any credentials.
Open the ~/.bash_profile file and edit the info with the new values that you received at the time of creating the new user:
export AWS_ACCESS_KEY_ID=
export AWS_SECRET_ACCESS_KEY=
export AWS_DEFAULT_REGION=us-east-1
Afterward, run the command:
source ~/.bash_profile
This will enable the new keys for the local machine. Now, we will need to configure the info in the terminal as well. Run the command -
aws configure
Provide the new values as requested and you are good to go.
In my case, I was using aws configure. However, I had hand-edited the .aws/config file to export the key ID and secret key as environment variables. This apparently caused a silent error, and I saw the error listed above.
I solved this by deleting the .aws directory and running aws configure again.
I have encountered this issue when trying to export RDS Postgres data to S3 following this official guide.
TL;DR Troubleshooting tips:
Reset RDS credentials using:
DROP EXTENSION aws_s3 CASCADE;
DROP EXTENSION aws_commons CASCADE;
CREATE EXTENSION aws_s3 CASCADE;
Delete and re-add the DB instance role used for the s3Export feature. Optionally reset the RDS credentials (previous action point) once again after that.
Below you will find more details on my case.
In particular, I have encountered:
[XX000] ERROR: could not upload to Amazon S3
Details: Amazon S3 client returned 'The AWS Access Key Id you provided does not exist in our records.'.
To be able to perform export to S3, RDS DB instance should be configured to assume a role with permission to write to S3 bucket, the guide describes these steps.
The reason for the error was that the aws_s3.query_export_to_s3 Postgres procedure was using some (cached?) invalid assumed credentials. I am still not aware which credentials it had been using, but I managed to reproduce the same behaviour with the AWS CLI:
I assumed a role (aws sts assume-role),
and then tried to perform another action (aws s3 cp in particular) with these credentials but without the session token (only AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, no AWS_SESSION_TOKEN).
This resulted in the same error from the AWS CLI: An error occurred (InvalidAccessKeyId) when calling the PutObject operation: The AWS Access Key Id you provided does not exist in our records.
In short: hard resetting RDS credentials helped.
I just found another cause/remedy for this error/situation. I was getting the error running a PowerShell script. The error was happening on an execution of Write-S3Object. I have been working with AWS for a while now and have been running this script with success, but had not run it in a while.
My usual method of setting AWS credentials is:
Set-AWSCredential -ProfileName <THE_PROFILE_NAME>
I tried the "aws configure" command and every other recommendation in this forum post. No luck.
Well, I am aware of the .aws\credentials file and took a look in there. I have only three profiles, with one being [default]. Everything was looking good, but then I noticed a new element in there, present in all 3 profiles, that I had not seen before:
toolkit_artifact_guid=64GUID3-GUID-GUID-GUID-004GUID236
(GUID redacting added by me)
Then I noticed this element differed between the profile I was running with and the [default] profile, which was otherwise identical.
On a hunch I changed the toolkit_artifact_guid in [default] to match my target profile, and no more error. I have no idea why.