When I pull a clean Alpine Linux Docker image, install aws-cli on it, and try to authenticate with aws ecr get-authorization-token --region eu-central-1, I keep getting the following error:
An error occurred (UnrecognizedClientException) when calling the
GetAuthorizationToken operation: The security token included in the
request is invalid.
I've already checked the timezone, which seems to be okay, and the command works properly on my local machine.
These are the commands I run to set up aws-cli:
apk add --update python python-dev py-pip
pip install awscli --upgrade
export AWS_ACCESS_KEY_ID=XXXXXXXXXXXXXXXXXXXX
export AWS_SECRET_ACCESS_KEY=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Is there something obvious I'm missing?
The AWS CLI doesn't have permission to access those resources until you configure it with valid credentials; you can do that with the steps below.
Log into your AWS account, click on your account name, select My Security Credentials, click on Access keys, and download the credentials.
Open PowerShell as administrator and follow the prompts:
$ aws configure
AWS Access Key ID [****************E5TA]: xxxxxxxxxx
AWS Secret Access Key [****************7gNT]: xxxxxxxxxxxxxx
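Once configured, a quick way to check that the keys themselves are valid before retrying the ECR call is an STS identity lookup (the account and user values below are placeholders):
$ aws sts get-caller-identity
{
    "UserId": "AIDAXXXXXXXXXXXXXXXX",
    "Account": "123456789012",
    "Arn": "arn:aws:iam::123456789012:user/your-user"
}
If this call also fails with UnrecognizedClientException, the access key is wrong, inactive, or belongs to a different account.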
It was an access issue after all! It turns out that if you create a new IAM user with full admin access, it can't by default access an ECR registry that was created with a different account. Using the IAM credentials from that other account resolved the issue.
In my case, my ~/.aws/credentials file had an old aws_session_token that was not updated by the aws configure command. Once I opened the file with vi ~/.aws/credentials and deleted the aws_session_token entry, I no longer encountered the UnrecognizedClientException. I'm guessing the AWS CLI gives priority to aws_session_token over the access key ID and secret access key when it is present in ~/.aws/credentials.
Create a new IAM user with the AmazonEC2ContainerRegistryFullAccess permission.
Add this user's keys to the ~/.aws/credentials file like this:
[ecr-user]
aws_access_key_id = XXX
aws_secret_access_key = XXX
Then use the following command:
aws ecr get-login-password --profile ecr-user
What worked for me is:
add the --profile <your-profile-name> parameter to the first part of the pipe (the get-login-password call),
and after that provide the same parameter on every ECR command.
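For example, a sketch with placeholder region, account ID, and profile name:
aws ecr get-login-password --region eu-central-1 --profile your-profile-name \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.eu-central-1.amazonaws.com
aws ecr describe-repositories --region eu-central-1 --profile your-profile-name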
My issue was caused by the fact that I had deactivated my access key in the AWS IAM Management Console earlier as part of an exercise. Once I reactivated it, the problem was resolved.
(Make sure you're in the right AWS region, too.)
I had the same error message; however, I was using session-based AWS access. The solution is to add all the keys given by AWS, including the session token:
aws_access_key_id="your-key-id"
aws_secret_access_key="your-secret-access-key"
aws_session_token="your-session-token"
Add them to ~/.aws/credentials under the profile you are using.
After a couple of hours, this is my conclusion:
if you want to use AWS_PROFILE, make sure the rest of the AWS environment variables are unset (not just empty; they must be unset).
profile=$AWS_PROFILE
unset $(printenv |grep AWS_ | cut -f1 -d"=");
export AWS_PROFILE=${profile};
Then:
# with aws cli 1.x (get-login was removed in 2.x)
$(aws ecr get-login --no-include-email --region ${aws_region})
# with aws cli >= 2.x
registry=${aws_account_id}.dkr.ecr.${aws_region}.amazonaws.com
aws ecr get-login-password --region ${aws_region} | docker login --username AWS --password-stdin ${registry}
Resolved the issue after following the steps below:
Go to AWS IAM Management Console
Generate credential in section "Access keys (access key ID and secret access key)"
Run the aws configure command and enter the same downloaded credentials; they are stored in C:\Users\<user>\.aws\credentials
It wasn't working for me. Out of sheer desperation, I copied the lines starting with export, pasted them into the terminal, and pressed Enter.
After that I ran aws configure and filled in the details from https://MYCOMPANY.awsapps.com/start#/ >> Account >> "Command line or programmatic access".
Default region name: eu-north-1
Default output format: text
And then the login succeeded. Don't ask me why.
Open the file ~/.aws/credentials (or C:\Users\{user}\.aws\credentials on Windows).
It might look something like the following:
[default]
aws_access_key_id = XXXXX
aws_secret_access_key = XXXXX
aws_session_token = XXXXX
Update the aws_access_key_id and aws_secret_access_key with new values and remove the aws_session_token. You can also update aws_access_key_id and aws_secret_access_key via the aws configure command, but this doesn't remove the session token.
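After the edit, the file should look roughly like this (values are placeholders):
[default]
aws_access_key_id = XXXXX
aws_secret_access_key = XXXXX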
Try running echo $varname to see if the environment variables are set correctly:
echo $AWS_ACCESS_KEY_ID
echo $AWS_SECRET_ACCESS_KEY
echo $AWS_DEFAULT_REGION
If they are incorrectly set, run unset varname:
unset AWS_ACCESS_KEY_ID
unset AWS_SECRET_ACCESS_KEY
unset AWS_DEFAULT_REGION
In my case, the region I wanted to use was not enabled. Fixed by enabling it under Account > AWS Regions > Enable (and waiting patiently for a few minutes).
Update: adding --profile is what solved this for me.
Related
I want to pull from a private AWS ECR. I have created a new policy and an API user with the correct permissions to pull.
The issue I have is... I'm running this on a machine where I don't want to use credentials files:
aws ecr get-login
I would like to use the aws_access_key_id and aws_secret_access_key to get a login token, i.e.
aws ecr get-login <aws_access_key_id> <aws_secret_access_key>
Is this possible or do I have any way to achieve this without saving out a file or running aws configure?
You can specify your configuration with environment variables, for example like this (Linux/Mac OS):
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
export AWS_DEFAULT_REGION=us-west-2
They will then only last until the end of your shell session (unless you put them in your shell startup script). You can read more about this, and see additional examples here.
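For the ECR use case specifically, a minimal sketch with only environment variables and no credentials file (the account ID and region are placeholders; get-login-password requires AWS CLI v2 or a recent v1):
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
export AWS_DEFAULT_REGION=us-west-2
aws ecr get-login-password | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-west-2.amazonaws.com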
I have an assumed role which I assumed using aws sts assume-role CLI command.
I want to "unassume" this role and switch back to my aws credentials configured in my local system.
How do I achieve this?
I have tried going to the console and clicking the "Revoke Active Sessions" button, but that doesn't seem to be working. I also tried rm -r ~/.aws/cli/cache, but in vain. Please help.
There are a few things you can do:
You can unset the AWS environment variables in your terminal:
unset AWS_ACCESS_KEY_ID
unset AWS_SECRET_ACCESS_KEY
unset AWS_SESSION_TOKEN
You can reconfigure your aws configuration:
aws configure
However, you'll need to make sure there are no environment variables set for your AWS credentials. By default, the AWS CLI looks at your environment variables first, then your credentials file, then your config file for credentials to use in CLI commands.
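A quick way to see which source the CLI is actually using is:
aws configure list
The Type and Location columns show whether each value came from environment variables, the shared credentials file, or the config file.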
I am having problems configuring my AWS credentials for Serverless using my terminal. Once I run:
serverless config credentials --provider aws --key xx --secret xxx --profile serverless-admin2
After that, the system responds with "setting up aws..." and doesn't do anything else. Am I doing something wrong?
The command only creates a new entry in your ~/.aws/credentials file. To check whether it worked, inspect ~/.aws/credentials and see if a [serverless-admin2] profile was created with your AWS keys.
If not, you can add the profile yourself there.
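For reference, the entry that command is supposed to create (using the placeholder values from the question) looks like this:
[serverless-admin2]
aws_access_key_id = xx
aws_secret_access_key = xxx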
I'm trying to set up an Amazon AWS EC2 instance to talk to S3. The basic command is
aws configure
then follow the prompts to enter
AWS Access Key ID [None]: my-20-digit-id
AWS Secret Access Key [None]: my-40-digit-secret-key
Default region name [None]: us-east-1
Default output format [None]: text
However, what I really want is to have the command
aws configure
run automatically without interaction, i.e. no prompts waiting for input.
I know there are files at
~/.aws/credentials
~/.aws/config
where I put those 4 key=value pairs. And the "credentials" file looks like
[default]
aws_secret_access_key = my-40-digit-secret-key
aws_access_key_id = my-20-digit-id
while the "config" file looks like
[default]
region = us-east-1
output = text
However, with those files in ~/.aws/, when I cd into ~/.aws/ and enter the command
aws configure
I still get prompts asking me:
AWS Access Key ID [None]:
AWS Secret Access Key [None]:
Default region name [None]:
Default output format [None]:
If I don't enter valid values at the prompts, I won't be able to connect to S3, for example via the command
aws s3 ls s3://mybucket
I turned to the Amazon AWS documentation pages for help. This page mentions the option
"Command line options – region, output format and profile can be specified as command options to override default settings."
as the first option for aws configure:
https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html
However, it didn't mention how to use the command line options. I tried something like this:
aws configure --region us-east-1
but I still got
AWS Access Key ID [None]:
AWS Secret Access Key [None]:
Default region name [None]:
Default output format [None]:
exactly as if I had not passed the "--region us-east-1" option at all.
If I try to
aws configure --aws_access_key_id my-20-digit-id --aws_secret_access_key my-40-digit-secret-key --region us-east-1
I get this
usage: aws [options] <command> <subcommand> [parameters]
aws: error: argument subcommand: Invalid choice, valid choices are:
How I can run the command
aws configure
automatically, no prompt, no interaction.
Please help! TIA
Edit, in response to helloV (since the formatting in the main post is much clearer than in a comment):
I tried the command helloV mentioned, but I got an error:
aws configure set aws_access_key_id my-20-digit-id
usage: aws [options] <command> <subcommand> [parameters]
aws: error: argument subcommand: Invalid choice, valid choices are:
Thanks though.
Continue on "aws configure set"
On another EC2 instance where I've already set connection to s3, I enter
aws configure set region us-east-1
runs and returns to command prompt ">"
aws configure set aws_access_key_id my-20-digit-id
runs and returns to command prompt ">"
aws configure set aws_secret_access_key my-40-digit-secret-key
runs and returns to command prompt ">"
aws configure
runs but comes with prompts and waits for interaction
AWS Access Key ID [****************ABCD]:
AWS Secret Access Key [****************1234]:
Default region name [us-east-1]:
Default output format [text]:
helloV:
here is what my screen looks like:
ubuntu@ip-11111:~/.aws$ more config
[default]
region = us-east-1
output = text
ubuntu@ip-11111:~/.aws$ more credentials
[default]
aws_secret_access_key = my-40-digit-secret-key
aws_access_key_id = my-20-digit-id
ubuntu@ip-11111:~/.aws$ aws s3 ls s3://
I got this
Unable to locate credentials. You can configure credentials by running "aws configure".
After this, I run
aws configure list
Name Value Type Location
---- ----- ---- --------
profile <not set> None None
access_key <not set> None None
secret_key <not set> None None
region us-east-1 config_file ~/.aws/config
It looks like it does not check the ~/.aws/credentials file, even though the ~/.aws/config file shows up in the list.
These commands worked for me. If they don't work for you, try running aws configure interactively the first time:
aws --profile default configure set aws_access_key_id "my-20-digit-id"
aws --profile default configure set aws_secret_access_key "my-40-digit-secret-key"
I finally figured it out. Use export, such as:
export AWS_ACCESS_KEY_ID=my-20-digit-id
export AWS_SECRET_ACCESS_KEY=my-40-digit-secret-key
export AWS_DEFAULT_REGION=us-east-1
then run
aws s3 ls s3://
would work. Don't run "aws configure" as others mentioned.
Thank you all.
You describe the file very well. Why not just create a file and put it in the right place? I just tried... it's exactly the same as running aws configure
UPDATE: You mention that you want to access S3 from an EC2 instance. In that case you shouldn't be using credentials at all; you should use IAM roles instead.
The solution is that you actually don't have to run aws configure! After you run it once and the credentials (~/.aws/credentials) and config (~/.aws/config) files are established, going forward you simply run the required aws command. I tested this with a cron job running an "aws s3 ls" command, and it worked without providing a configure command before it.
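As an illustration, a crontab entry along these lines works as long as ~/.aws/credentials and ~/.aws/config exist for the user running cron (the bucket name and log path are placeholders, and you may need the full path to the aws binary depending on cron's PATH):
0 * * * * aws s3 ls s3://mybucket >> /tmp/s3-ls.log 2>&1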
Use these commands:
$ aws configure set aws_access_key_id default_access_key
$ aws configure set aws_secret_access_key default_secret_key
$ aws configure set default.region us-west-2
or
aws configure set aws_access_key_id <key_id> && aws configure set aws_secret_access_key <key> && aws configure set default.region us-east-1
For more details, see this link:
https://awscli.amazonaws.com/v2/documentation/api/latest/reference/configure/set.html
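Putting those together, a non-interactive setup for a named profile might look like this sketch (the profile name and values are placeholders; the output format line is optional):
aws configure set aws_access_key_id "my-20-digit-id" --profile myprofile
aws configure set aws_secret_access_key "my-40-digit-secret-key" --profile myprofile
aws configure set region us-east-1 --profile myprofile
aws configure set output text --profile myprofile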
I use something like this:
aws configure --profile my-profile-name <<-EOF > /dev/null 2>&1
${AWS_ACCESS_KEY_ID}
${AWS_SECRET_ACCESS_KEY}
${AWS_REGION}
text
EOF
Also, to clean up after an automated process without removing the ~/.aws/ directory (since some other credentials might be stored there), I run:
aws configure --profile my-profile-name <<-EOF > /dev/null 2>&1
null
null
null
text
EOF
I've got two different apps that I am hosting (well the second one is about to go up) on Amazon EC2.
How can I work with both accounts at the command line (Mac OS X) but keep the EC2 keys & certificates separate? Do I need to change my environment variables before each ec2-* command?
Would using an alias and having it set the environment in-line work? Something like: alias ec2-describe-instances1='export EC2_PRIVATE_KEY=/path; ec2-describe-instances'
You can work with two accounts by creating two profiles on the aws command line.
It will prompt you for your AWS Access Key ID, AWS Secret Access Key and desired region, so have them ready.
Examples:
$ aws configure --profile account1
$ aws configure --profile account2
You can then switch between the accounts by passing the profile on the command.
$ aws dynamodb list-tables --profile account1
$ aws s3 ls --profile account2
Note:
If you name the profile default, it becomes the default profile, i.e. the one used when no --profile parameter is given on the command.
More on the default profile:
If you spend more time using account1, you can make it the default by setting the AWS_DEFAULT_PROFILE environment variable. When the default environment variable is set, you do not need to specify the profile on each command.
Linux, OS X Example:
$ export AWS_DEFAULT_PROFILE=account1
$ aws dynamodb list-tables
Windows Example:
> set AWS_DEFAULT_PROFILE=account1
> aws s3 ls
How to set "manually" multiple AWS accounts ?
1) Get access - key
AWS Console > Identity and Access Management (IAM) > Your Security Credentials > Access Keys
2) Set access - file and content
~/.aws/credentials
[default]
aws_access_key_id={{aws_access_key_id}}
aws_secret_access_key={{aws_secret_access_key}}
[{{profile_name}}]
aws_access_key_id={{aws_access_key_id}}
aws_secret_access_key={{aws_secret_access_key}}
3) Set up the config file and its content
~/.aws/config
[default]
region={{region}}
output={{output:"json||text"}}
[profile {{profile_name}}]
region={{region}}
output={{output:"json||text"}}
4) Run with params
Install the AWS Command Line Interface and use it, for example for AWS EC2:
aws ec2 describe-instances                                 # uses [default]
aws ec2 describe-instances --profile {{profile_name}}      # uses [{{profile_name}}]
Ref
https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-profiles.html
IMHO, the easiest way is to edit .aws/credentials and .aws/config files manually.
It's easy and it works for Linux, Mac and Windows. Just read this for more detail (1 minute read).
.aws/credentials file:
[default]
aws_access_key_id=AKIAIOSFODNN7EXAMPLE
aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
[user1]
aws_access_key_id=AKIAI44QH8DHBEXAMPLE
aws_secret_access_key=je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY
.aws/config file:
[default]
region=us-west-2
output=json
[profile user1]   <-- note the 'profile ' prefix before the profile name (not needed for default)!
region=us-east-1
output=text
You should be able to use the following command-options in lieu of the EC2_PRIVATE_KEY (and even EC2_CERT) environment variables:
-K <private key>
-C <certificate>
You can put these inside aliases, e.g. (bash syntax):
alias ec2-describe-instances1='ec2-describe-instances -K /path/to/key.pem'
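So for the two-account case in the question, one alias per account could look like this (the key and certificate paths are placeholders):
alias ec2-describe-instances1='ec2-describe-instances -K /path/to/account1-pk.pem -C /path/to/account1-cert.pem'
alias ec2-describe-instances2='ec2-describe-instances -K /path/to/account2-pk.pem -C /path/to/account2-cert.pem'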
Create or edit this file:
vim ~/.aws/credentials
List as many key pairs as you like:
[default]
aws_access_key_id=AKIAIOSFODNN7EXAMPLE
aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
[user1]
aws_access_key_id=AKIAI44QH8DHBEXAMPLE
aws_secret_access_key=je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY
Set an environment variable to select the pair of keys you want to use:
export AWS_PROFILE=user1
Do what you like:
aws s3api list-buckets # any aws cli command now using user1 pair of keys
You can also do it command by command by including --profile user1 with each command:
aws s3api list-buckets --profile user1
# any aws cli command now using user1 pair of keys
More details: Named profiles for the AWS CLI
The new aws tools now support multiple profiles.
If you configure access with the tools, it automatically creates a default in ~/.aws/config.
You can then add additional profiles - more details at: Getting started with the AWS CLI
I created a simple tool, aaws, to switch between AWS accounts.
It works by setting the AWS_DEFAULT_PROFILE in your shell. Just make sure you have some entries in your ~/.aws/credentials file and it will easily switch between multiple accounts.
/tmp
$ aws s3 ls
Unable to locate credentials. You can configure credentials by running "aws configure".
/tmp
$ aaws luk3
[luk3] 🔐 /tmp
$ aws s3 ls
2013-11-05 21:40:04 luk3thomas.com
I wrote a toolkit to switch default AWS profile.
The mechanism is physically moving the profile key to the default section in config and credentials files.
The better solution today should be one of the following ways:
Use aws command option --profile.
Use environment variable AWS_PROFILE.
I don't remember why I didn't use the --profile solution; maybe I wasn't aware of its existence. However, the toolkit can still be useful for other things. I'll add a soft-switch flag using AWS_PROFILE in the future.
$ xsh list aws/cfg
[functions] aws/cfg/move
[functions] aws/cfg/set
[functions] aws/cfg/activate
[functions] aws/cfg/get
[functions] aws/cfg/delete
[functions] aws/cfg/list
[functions] aws/cfg/copy
Repo: https://github.com/xsh-lib/aws
Install:
curl -s https://raw.githubusercontent.com/alexzhangs/xsh/master/boot | bash && . ~/.xshrc
xsh load xsh-lib/aws
Usage:
xsh aws/cfg/list
xsh aws/cfg/activate <profilename>
You can write a shell script that sets the corresponding environment variables for each account based on user input. That way you don't need to create any aliases, and tools like the ELB and Auto Scaling command-line tools will work across multiple accounts as well.
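A minimal sketch of such a script, assuming bash and the classic EC2 tools' environment variables (the paths and account names are placeholders):
#!/bin/bash
# Usage: source switch-account.sh <account1|account2>
# Exports the key/cert for the chosen account so ec2-*, ELB and Auto Scaling tools pick them up.
case "$1" in
  account1)
    export EC2_PRIVATE_KEY=/path/to/account1/pk.pem
    export EC2_CERT=/path/to/account1/cert.pem
    ;;
  account2)
    export EC2_PRIVATE_KEY=/path/to/account2/pk.pem
    export EC2_CERT=/path/to/account2/cert.pem
    ;;
  *)
    echo "Usage: source switch-account.sh <account1|account2>" >&2
    return 1 2>/dev/null || exit 1
    ;;
esac
Remember to source the script (source switch-account.sh account1) so the exports affect your current shell.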
To use an IAM role, you have to make an API call to STS:AssumeRole, which will return a temporary access key ID, secret key, and security token that can then be used to sign future API calls. Formerly, to achieve secure cross-account, role-based access from the AWS Command Line Interface (CLI), an explicit call to STS:AssumeRole was required, and your long-term credentials were used. The resulting temporary credentials were captured and stored in your profile, and that profile was used for subsequent AWS API calls. This process had to be repeated when the temporary credentials expired (after 1 hour, by default).
More details: How to Use a Single IAM User to Easily Access All Your Accounts by Using the AWS CLI
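For illustration, the explicit flow described above looks roughly like this sketch (the role ARN and session name are placeholders):
creds=$(aws sts assume-role \
  --role-arn arn:aws:iam::123456789012:role/MyCrossAccountRole \
  --role-session-name my-session \
  --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
  --output text)
export AWS_ACCESS_KEY_ID=$(echo "$creds" | cut -f1)
export AWS_SECRET_ACCESS_KEY=$(echo "$creds" | cut -f2)
export AWS_SESSION_TOKEN=$(echo "$creds" | cut -f3)
# Subsequent calls now use the temporary credentials (valid for 1 hour by default)
aws s3 ls
Current versions of the CLI can also handle this automatically if you define a profile with role_arn and source_profile in ~/.aws/config.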