I am looking to install the AWS CLI on a Windows Server Core EC2 instance. According to the documentation, the AWS CLI should be installed with msiexec.exe /i https://awscli.amazonaws.com/AWSCLIV2.msi.
The problem is that this tries to bring up a GUI installer, and Windows Server Core has no GUI, so we cannot interact with it. I have tried /quiet and other such switches, but the terminal simply gives no response.
How can I install the AWS CLI on a Windows Server Core EC2 instance?
Install the AWS Command Line Interface (CLI)
AWS Command Line Interface (CLI) Silent Install (MSI)
Navigate to: https://aws.amazon.com/cli/
Download AWSCLIV2.msi to a folder you create, e.g. C:\Downloads
Open an elevated Command Prompt by right-clicking Command Prompt and selecting Run as Administrator
Navigate to the C:\Downloads folder
Enter the following command: MsiExec.exe /i AWSCLIV2.msi /qn
Press Enter
After a few moments you will find AWS Command Line Interface (CLI) entries in the installation directory and under Programs and Features in the Control Panel.
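On Server Core there is no browser to download the MSI with, so you can do the whole thing from PowerShell instead. A minimal sketch, assuming C:\Downloads as the working folder and the default install path of C:\Program Files\Amazon\AWSCLIV2:
New-Item -ItemType Directory -Force -Path C:\Downloads | Out-Null
Invoke-WebRequest -Uri https://awscli.amazonaws.com/AWSCLIV2.msi -OutFile C:\Downloads\AWSCLIV2.msi
Start-Process msiexec.exe -ArgumentList '/i C:\Downloads\AWSCLIV2.msi /qn' -Wait
& 'C:\Program Files\Amazon\AWSCLIV2\aws.exe' --version
Start-Process with -Wait blocks until the silent install has finished, and aws.exe is called by its full path because the updated PATH only applies to new sessions.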
You can follow the steps in the guide below; it will help you install the AWS CLI:
https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html
Once the CLI is installed, run aws configure in a terminal:
$ aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-west-2
Default output format [None]: json
Fill in your own information at each prompt, as in the example above.
Hope it helps.
I have installed the latest versions of the AWS CLI v2 and Docker, run "aws configure", and entered my access key and secret key. I have also verified that the aws config is correct and shows the right region and output format. My credentials in AWS are admin. I keep getting the following error:
Unable to locate credentials. You can configure credentials by running "aws configure".
Error: Cannot perform an interactive login from a non TTY device
This happens even though I have already run 'aws configure'. I am also running the commands prefixed with 'sudo'. Any thoughts? Thank you for your time!
The aws configure command was being run as the local user, whereas the ecr command was being run as sudo.
If you run commands as sudo it will not have access to your local users config, it will instead default to the root users.
Instead ensure all commands are run as the same user.
If you need the CLI to read its config file from somewhere other than the default location (for example when running under sudo), you can also specify the location via the AWS_CONFIG_FILE environment variable.
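For example, a minimal sketch of logging in to ECR with everything run as the same user (the region and account ID are placeholders); if only docker needs elevation, keep the aws part unprivileged and pipe into sudo docker:
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com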
I'm currently trying to automate the Cognito User Pool creation process via bash scripts using the AWS CLI. Following the steps from the AWS console, I'm trying to reproduce the same steps via the CLI. I'd like to know which commands I should be looking at and in what sequence. The AWS docs don't really say much, and the commands sometimes tend to be confusing.
Any ideas will be greatly appreciated.
Cheers!
Nyah
First install the AWS CLI using the following command:
sudo pip install awscli
Configure AWS credentials. Run the command below; the system will ask for the following input: AWS Access Key ID, AWS Secret Access Key, Default region name, Default output format.
sudo aws configure
Create user pool
sudo aws cognito-idp create-user-pool --pool-name MyUserPool
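Since the goal is a bash script, it also helps to capture the new pool's ID straight from the command output. A minimal sketch (the pool and client names are placeholders):
POOL_ID=$(aws cognito-idp create-user-pool --pool-name MyUserPool --query 'UserPool.Id' --output text)
aws cognito-idp create-user-pool-client --user-pool-id "$POOL_ID" --client-name MyAppClient
echo "Created user pool $POOL_ID"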
You can first install awscli:
sudo pip install awscli
Configure the AWS CLI with your access key and secret key.
To do so, run the command:
aws configure
For all the details of Cognito, you can find available command for it over here: https://docs.aws.amazon.com/cli/latest/reference/cognito-idp/index.html
Create the user pool in Cognito:
aws cognito-idp create-user-pool --pool-name <Whatever name you want to add>
Update the user pool in Cognito:
aws cognito-idp update-user-pool --user-pool-id <value>
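For example, a small sketch of a typical update, assuming you want email addresses to be auto-verified (the pool ID is a placeholder):
aws cognito-idp update-user-pool --user-pool-id us-east-1_EXAMPLE --auto-verified-attributes email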
You can also refer to this bash script:
https://github.com/awslabs/aws-cognito-angular-quickstart/blob/master/aws/createResources.sh
I am new to AWS and Docker. I am trying to setup AWS ECR and docker and trying to retrieve ECR Login using windows powershell. I am trying to use the command -
Invoke-Expression -Command (aws ecr get-login)
which gives me an error.
My problem is that it is trying to use the ccuser on its own. I don't think I have configured it to use this user. I have created a separate user with AmazonEC2ContainerRegistryFullAccess. How do I configure this as the user for AWS PowerShell to execute the command?
aws ecr get-login will simply use the creds that you've already set up for the AWS CLI. If you want to change the creds for the CLI, use aws configure to do the setup again; it will ask you for:
AWS Access Key ID []:
AWS Secret Access Key []:
Default region name []:
Default output format []:
If you only want to use that user temporarily, without reconfiguring your existing setup, you can do it with a named profile, as in the sketch below.
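A minimal sketch of that with a named profile (the profile name is a placeholder); --profile is a global option accepted by every AWS CLI command:
aws configure --profile ecr-user
Invoke-Expression -Command (aws ecr get-login --profile ecr-user)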
Simple and easy; I was debugging this for a while and this is what worked:
aws ecr get-login-password --region ap-south-1 | docker login --username AWS --password-stdin <aws_account_id>.dkr.ecr.ap-south-1.amazonaws.com
I've got two different apps that I am hosting (well the second one is about to go up) on Amazon EC2.
How can I work with both accounts at the command line (Mac OS X) but keep the EC2 keys & certificates separate? Do I need to change my environment variables before each ec2-* command?
Would using an alias that sets the environment variable in-line work? Something like: alias ec2-describe-instances1='export EC2_PRIVATE_KEY=/path; ec2-describe-instances'
You can work with two accounts by creating two profiles on the aws command line.
It will prompt you for your AWS Access Key ID, AWS Secret Access Key and desired region, so have them ready.
Examples:
$ aws configure --profile account1
$ aws configure --profile account2
You can then switch between the accounts by passing the profile on the command.
$ aws dynamodb list-tables --profile account1
$ aws s3 ls --profile account2
Note:
If you name the profile default, it becomes the default profile, i.e. the one used when there is no --profile param in the command.
More on the default profile:
If you spend more time using account1, you can make it the default by setting the AWS_DEFAULT_PROFILE environment variable. When that environment variable is set, you do not need to specify the profile on each command.
Linux, OS X Example:
$ export AWS_DEFAULT_PROFILE=account1
$ aws dynamodb list-tables
Windows Example:
$ set AWS_DEFAULT_PROFILE=account1
$ aws s3 ls
How to set "manually" multiple AWS accounts ?
1) Get access - key
AWS Console > Identity and Access Management (IAM) > Your Security Credentials > Access Keys
2) Set up the credentials file and its content
~/.aws/credentials
[default]
aws_access_key_id={{aws_access_key_id}}
aws_secret_access_key={{aws_secret_access_key}}
[{{profile_name}}]
aws_access_key_id={{aws_access_key_id}}
aws_secret_access_key={{aws_secret_access_key}}
3) Set up the config file and its content
~/.aws/config
[default]
region={{region}}
output={{output:"json||text"}}
[profile {{profile_name}}]
region={{region}}
output={{output:"json||text"}}
4) Run with the profile param
Install the AWS Command Line Interface and use it, for example for AWS EC2:
aws ec2 describe-instances                              # uses [default]
aws ec2 describe-instances --profile {{profile_name}}   # uses [{{profile_name}}]
Ref
https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-profiles.html
IMHO, the easiest way is to edit .aws/credentials and .aws/config files manually.
It's easy and it works on Linux, macOS, and Windows.
.aws/credentials file:
[default]
aws_access_key_id=AKIAIOSFODNN7EXAMPLE
aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
[user1]
aws_access_key_id=AKIAI44QH8DHBEXAMPLE
aws_secret_access_key=je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY
.aws/config file:
[default]
region=us-west-2
output=json
[profile user1]    <- note the 'profile ' prefix in front of the profile name (not used for default)
region=us-east-1
output=text
You should be able to use the following command-options in lieu of the EC2_PRIVATE_KEY (and even EC2_CERT) environment variables:
-K <private key>
-C <certificate>
You can put these inside aliases, e.g.
alias ec2-describe-instances1='ec2-describe-instances -K /path/to/key.pem'
Create or edit this file:
vim ~/.aws/credentials
List as many key pairs as you like:
[default]
aws_access_key_id=AKIAIOSFODNN7EXAMPLE
aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
[user1]
aws_access_key_id=AKIAI44QH8DHBEXAMPLE
aws_secret_access_key=je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY
Set an environment variable to select the pair of keys you want to use:
export AWS_PROFILE=user1
Do what you like:
aws s3api list-buckets # any aws cli command now using user1 pair of keys
You can also do it command by command by including --profile user1 with each command:
aws s3api list-buckets --profile user1
# any aws cli command now using user1 pair of keys
More details: Named profiles for the AWS CLI
The new aws tools now support multiple profiles.
If you configure access with the tools, it automatically creates a default profile in ~/.aws/config.
You can then add additional profiles - more details at: Getting started with the AWS CLI
I created a simple tool, aaws, to switch between AWS accounts.
It works by setting the AWS_DEFAULT_PROFILE in your shell. Just make sure you have some entries in your ~/.aws/credentials file and it will easily switch between multiple accounts.
/tmp
$ aws s3 ls
Unable to locate credentials. You can configure credentials by running "aws configure".
/tmp
$ aaws luk3
[luk3] 🔐 /tmp
$ aws s3 ls
2013-11-05 21:40:04 luk3thomas.com
I wrote a toolkit to switch the default AWS profile.
The mechanism is to physically move the profile's keys into the default section of the config and credentials files.
The better solution today is one of the following:
Use the aws command option --profile.
Use the environment variable AWS_PROFILE.
I don't remember why I didn't use --profile; maybe I wasn't aware of its existence at the time.
However, the toolkit can still be useful for other things. I'll add a soft-switch flag based on AWS_PROFILE in the future.
$ xsh list aws/cfg
[functions] aws/cfg/move
[functions] aws/cfg/set
[functions] aws/cfg/activate
[functions] aws/cfg/get
[functions] aws/cfg/delete
[functions] aws/cfg/list
[functions] aws/cfg/copy
Repo: https://github.com/xsh-lib/aws
Install:
curl -s https://raw.githubusercontent.com/alexzhangs/xsh/master/boot | bash && . ~/.xshrc
xsh load xsh-lib/aws
Usage:
xsh aws/cfg/list
xsh aws/cfg/activate <profilename>
You can write a shell script that sets the corresponding environment variables for each account based on user input; see the sketch below. Doing so, you don't need to create any aliases and, furthermore, tools like the ELB tools and Auto Scaling Command Line Tools will work under multiple accounts as well.
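A minimal sketch, meant to be sourced so that the exports persist in the current shell; the paths are placeholders, and the variable names assume the classic EC2 API tools from the question:
# usage: . ./use-account.sh account1
case "$1" in
  account1)
    export EC2_PRIVATE_KEY=/path/to/account1/pk.pem
    export EC2_CERT=/path/to/account1/cert.pem
    ;;
  account2)
    export EC2_PRIVATE_KEY=/path/to/account2/pk.pem
    export EC2_CERT=/path/to/account2/cert.pem
    ;;
  *)
    echo "unknown account: $1"
    ;;
esac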
To use an IAM role, you have to make an API call to STS:AssumeRole, which will return a temporary access key ID, secret key, and security token that can then be used to sign future API calls. Formerly, to achieve secure cross-account, role-based access from the AWS Command Line Interface (CLI), an explicit call to STS:AssumeRole was required, and your long-term credentials were used. The resulting temporary credentials were captured and stored in your profile, and that profile was used for subsequent AWS API calls. This process had to be repeated when the temporary credentials expired (after 1 hour, by default).
More details: How to Use a Single IAM User to Easily Access All Your Accounts by Using the AWS CLI
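These days the CLI can make the AssumeRole call for you if you define a role profile in ~/.aws/config. A minimal sketch (the role ARN, account ID, and profile names are placeholders):
[profile crossaccount]
role_arn = arn:aws:iam::123456789012:role/MyCrossAccountRole
source_profile = default
$ aws s3 ls --profile crossaccount
The temporary credentials are requested and cached for you, and refreshed automatically when they expire.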