I just installed AWS CLI v2 on my Windows 10 machine and tried configuring it with the aws config command, accepting all the default values. The command doesn't throw any error, but when I look for the .aws folder in my user directory, it isn't there, even after enabling hidden items. What is the issue, and how do I fix it?
I ran aws configure list and this is the output:
Name Value Type Location
---- ----- ---- --------
profile <not set> None None
access_key <not set> None None
secret_key <not set> None None
region <not set> None None
You suggested creating the credentials, but aws configure is used to configure the
AWS Access Key Id
Secret Access Key
which must already have been created for a user. I don't think you can create credentials using this command.
To create the credentials, you can use either of the following:
IAM Web Console.
aws iam create-access-key --user-name <iam-user-name>. This returns a JSON object in the following format:
{
"AccessKey": {
"UserName": "XXXXX",
"AccessKeyId": "XXXXX",
"Status": "Active",
"SecretAccessKey": "XXXXX",
"CreateDate": "2020-06-05T11:09:16Z"
}
}
Then you can set these using aws configure. Once completed, there should be two files in the .aws folder:
config
credentials
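For reference, once aws configure succeeds, the two files typically look like this (the region, output format, and both keys below are placeholder examples, not real values):

```ini
# ~/.aws/config
[default]
region = us-east-1
output = json

# ~/.aws/credentials
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```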
Since you already have credentials and have installed the CLI for Windows, try configuring from the command prompt.
Open your command prompt and check the CLI version first by typing aws --version.
Now type aws configure; it will ask for both:
AWS Access Key Id:
Secret Access Key:
That's it. It's best practice to use an IAM user instead of the root user.
I am unable to access either of my two environments within the same elastic beanstalk application, the error message for both is:
A problem occurred while loading your page: Configuration validation exception: Invalid option specification (Namespace: 'aws:rds:dbinstance', OptionName: 'HasCoupledDatabase'): Unknown configuration setting.
I have no idea how to approach this problem (or even what it means to be honest). Any help would be appreciated!
EDIT :
This message appears to have been caused by an AWS update. It seems the best place to report it is the AWS Dev Forums.
I had started a thread regarding this issue here, please add your voice: https://forums.aws.amazon.com/thread.jspa?threadID=344213&tstart=0
Set up the AWS CLI
Create a .json file with the following content:
[
{
"Namespace": "aws:rds:dbinstance",
"OptionName": "HasCoupledDatabase"
}
]
Run the following command, changing YOUR_* to your values:
aws elasticbeanstalk update-environment --environment-name "YOUR_ENVIRONMENT_NAME" --version-label "YOUR_VERSION_LABEL" --region="YOUR_REGION" --options-to-remove file://PATH_TO_JSON
Enjoy ;)
1. Download and set up the AWS CLI if you haven't already.
2. To download the AWS CLI, visit https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2-windows.html#cliv2-windows-install
3. To set up the AWS CLI, follow these steps:
4. Go to the location where the AWS CLI is installed, e.g. C:\Program Files\Amazon\AWSCLI>
5. To confirm it is installed, open cmd and type aws --version
6. If the response looks like aws-cli/1.7.24 Python/2.7.9 Windows/8, you're good.
7. Still in cmd, type aws configure to configure the AWS CLI.
8. Enter the following data as the AWS CLI asks for it:
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-west-2
Default output format [None]: json
Note: the region and default output can be skipped by pressing Enter. The access key and secret key can be generated by following these steps in your AWS account:
click your account name in the top right corner (to the left of the region) -> My Security Credentials -> Access keys (access key ID and secret access key) -> create new or use existing.
9. Type aws s3 ls in cmd; if it lists your S3 buckets, the connection has been made successfully.
10. Create a JSON file named aws_issue.json on your desktop.
11. Paste the content below into aws_issue.json and save it:
[
{
"Namespace": "aws:rds:dbinstance",
"OptionName": "HasCoupledDatabase"
}
]
12. Type the following command on a single line in cmd, substituting your environment name, version label, region, and the path to aws_issue.json:
aws elasticbeanstalk update-environment --environment-name "YOUR_ENVIRONMENT_NAME" --version-label "YOUR_VERSION_LABEL" --region="YOUR_REGION" --options-to-remove file://C:\Users\pathAsPerYourMachine\aws_issue.json
Background:
I'm using an AWS CodeBuild buildspec.yml to iterate through directories from a GitHub repo to apply IaC using Terraform. To access the credentials needed for the Terraform AWS provider, I used AWS system manager parameter store to retrieve the access and secret key within the buildspec.yml.
Problem:
The system manager parameter store masks the access and secret key env values, so when they are inherited by the Terraform AWS provider, the provider reports that the credentials are invalid:
Error: error configuring Terraform AWS Provider: error validating provider credentials: error calling sts:GetCallerIdentity: InvalidClientTokenId: The security token included in the request is invalid.
status code: 403, request id: xxxx
To reproduce the problem:
Create system manager parameter store variables (TF_AWS_ACCESS_KEY_ID=access, TF_AWS_SECRET_ACCESS_KEY=secret)
Create AWS CodeBuild project with:
"source": {
"type": "NO_SOURCE",
}
"environment": {
"type": "LINUX_CONTAINER",
"image": "aws/codebuild/standard:4.0",
"computeType": "BUILD_GENERAL1_SMALL"
}
buildspec.yml with the following: (modified to create .tf files instead of sourcing from github)
version: 0.2
env:
shell: bash
parameter-store:
TF_VAR_AWS_ACCESS_KEY_ID: TF_AWS_ACCESS_KEY_ID
TF_VAR_AWS_SECRET_ACCESS_KEY: TF_AWS_SECRET_ACCESS_KEY
phases:
install:
commands:
- wget https://releases.hashicorp.com/terraform/0.12.28/terraform_0.12.28_linux_amd64.zip -q
- unzip terraform_0.12.28_linux_amd64.zip && mv terraform /usr/local/bin/
- printf "provider \"aws\" {\n\taccess_key = var.AWS_ACCESS_KEY_ID\n\tsecret_key = var.AWS_SECRET_ACCESS_KEY\n\tversion = \"~> 3.2.0\"\n}" >> provider.tf
- printf "variable \"AWS_ACCESS_KEY_ID\" {}\nvariable \"AWS_SECRET_ACCESS_KEY\" {}" > vars.tf
- printf "resource \"aws_s3_bucket\" \"test\" {\n\tbucket = \"test\"\n\tacl = \"private\"\n}" >> s3.tf
- terraform init
- terraform plan
Attempts:
Passing creds through terraform -vars option:
terraform plan -var="AWS_ACCESS_KEY_ID=$TF_VAR_AWS_ACCESS_KEY_ID" -var="AWS_SECRET_ACCESS_KEY=$TF_VAR_AWS_SECRET_ACCESS_KEY"
but I get the same invalid credentials error
Export system manager parameter store credentials within buildspec.yml:
commands:
- export AWS_ACCESS_KEY_ID=$TF_VAR_AWS_ACCESS_KEY_ID
- export AWS_SECRET_ACCESS_KEY=$TF_VAR_AWS_SECRET_ACCESS_KEY
which results in duplicate masked variables and the same error above. printenv output within buildspec.yml:
AWS_ACCESS_KEY_ID=***
TF_VAR_AWS_ACCESS_KEY_ID=***
AWS_SECRET_ACCESS_KEY=***
TF_VAR_AWS_SECRET_ACCESS_KEY=***
Possible solution routes:
Somehow pass the MASKED parameter store credential values into Terraform successfully (preferred)
Pass sensitive credentials into the Terraform AWS provider using a different method e.g. AWS secret manager, IAM role, etc.
Unmask the parameter store variables to pass into the aws provider (probably defeats the purpose of using aws system manager in the first place)
I experienced this same issue when working with Terraform on Ubuntu 20.04.
I had configured the AWS CLI using the aws configure command with an IAM credential for the terraform user I created on AWS.
However, when I run the command:
terraform plan
I get the error:
Error: error configuring Terraform AWS Provider: error validating provider credentials: error calling sts:GetCallerIdentity: InvalidClientTokenId: The security token included in the request is invalid.
status code: 403, request id: 17268b96-6451-4527-8b17-0312f49eec51
Here's how I fixed it:
The issue was caused by a misconfiguration of my AWS CLI with the aws configure command: I had entered the AWS Access Key ID where the AWS Secret Access Key belonged, and the AWS Secret Access Key where the AWS Access Key ID belonged.
I had to run the command below to re-configure the AWS CLI correctly with an IAM credential for the terraform user I created on AWS:
aws configure
You can confirm that it is fine by running a simple AWS CLI command:
aws s3 ls
If you get an error like the one below, then you know you're still not set up correctly:
An error occurred (InvalidAccessKeyId) when calling the ListBuckets operation: The AWS Access Key Id you provided does not exist in our records.
That's all.
I hope this helps.
Pass sensitive credentials into the Terraform AWS provider using a different method e.g. AWS secret manager, IAM role, etc.
Generally you shouldn't need to hard-code AWS credentials for Terraform to work. Instead, the CodeBuild IAM role should be enough for Terraform, as explained in the Terraform docs.
Having this in mind, I verified that the following works and creates the requested bucket using Terraform from a CodeBuild project. The default CodeBuild role was modified with S3 permissions to allow creation of the bucket.
version: 0.2
phases:
install:
commands:
- wget https://releases.hashicorp.com/terraform/0.12.28/terraform_0.12.28_linux_amd64.zip -q
- unzip terraform_0.12.28_linux_amd64.zip && mv terraform /usr/local/bin/
- printf "resource \"aws_s3_bucket\" \"test\" {\n\tbucket = \"test-43242-efdfdfd-4444334\"\n\tacl = \"private\"\n}" >> s3.tf
- terraform init
- terraform plan
- terraform apply -auto-approve
Well, my case was quite foolish, but it might help:
After downloading the .csv file, I copy-pasted the keys into aws configure.
In the middle of the secret key there was a "+". In my editor, double-clicking to copy stops at a non-alphanumeric character, meaning only the first part of the secret access key was copied.
Make sure you have dutifully copied the full secret key.
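A quick sanity check for a partial paste, assuming the key sits in a shell variable: AWS secret access keys are 40 characters long, so a double-click copy that stopped at the "+" will come up short. The key below is AWS's published documentation example, not a real credential.

```shell
# A complete AWS secret access key is 40 characters; a partial paste is shorter.
# This value is AWS's documentation example key, not a real secret.
secret="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
printf '%s' "$secret" | wc -c   # prints 40 for a fully copied key
```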
I had a 403 error. The issue is that you should remove the {} from the example code.
provider "aws" {
access_key = "{YOUR ACCESS KEY}"
secret_key = "{YOUR SECRET KEY}"
region = "eu-west-1"
}
It should look like this:
provider "aws" {
access_key = "YOUR ACCESS KEY"
secret_key = "YOUR SECRET KEY"
region = "eu-west-1"
}
I have faced this issue multiple times. The solution is to create a user in AWS from the IAM Management Console, and the error will be fixed.
I am trying to download all the available files from my S3 bucket to my local machine. I have installed the AWS CLI and then used aws configure to set up the access key and secret key too. I am facing an issue while trying to execute the following command:
$ aws s3 sync s3://tempobjects .
Setup commands
LAMU02XRK97:s3 vsing$ export AWS_ACCESS_KEY_ID=*******kHXE
LAMU02XRK97:s3 vsing$ export AWS_SECRET_ACCESS_KEY=******Ssv
LAMU02XRK97:s3 vsing$ aws configure list
Name Value Type Location
---- ----- ---- --------
profile <not set> None None
access_key ****************kHXE shared-credentials-file
secret_key ****************pSsv shared-credentials-file
region us-east-1 config-file ~/.aws/config
Error:
LAMU02XRK97:s3 vsing$ aws s3 sync s3://tempobjects .
fatal error: An error occurred (InvalidAccessKeyId) when calling the ListObjectsV2 operation: The AWS Access Key Id you provided does not exist in our records.
I have replicated the scenario; to make it work, you need to make sure that the user you are using for the CLI has the same access keys configured in IAM.
Below is what is configured in the AWS CLI.
Below is what is configured in AWS IAM for the same user:
The access key ending with QYHP is configured in both places, and hence it works fine for me.
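That comparison can be sketched from the command line. In practice you'd fetch the two values with aws configure get aws_access_key_id and aws iam list-access-keys --user-name <your-iam-user>; placeholder values stand in below so the check itself runs offline:

```shell
# Compare the access key the CLI uses locally with one IAM knows about.
# In practice, fetch these with:
#   aws configure get aws_access_key_id
#   aws iam list-access-keys --user-name <your-iam-user>
# Placeholder values stand in here so the comparison can run offline.
local_key="AKIAIOSFODNN7EXAMPLE"
iam_key="AKIAIOSFODNN7EXAMPLE"
if [ "$local_key" = "$iam_key" ]; then
  echo "keys match"
else
  echo "keys differ - run aws configure again"
fi
```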
I'm trying to use the CloudWatch logs agent on a RedHat instance with an IAM role attached. The role has full access to CloudWatch. I installed and setup the agent using the instructions here:
http://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/QuickStartEC2Instance.html#running-ec2-step-2
Even though the IAM role is definitely attached to the instance, I keep seeing this message in /var/log/awslogs.log:
NoCredentialsError: Unable to locate credentials
When I run aws configure list, I can see the details for the IAM role.
Name Value Type Location
---- ----- ---- --------
profile <not set> None None
access_key ******************** iam-role
secret_key ******************** iam-role
region us-east-1 config-file ~/.aws/config
Here is the contents of /var/awslogs/etc/aws.conf.
[plugins]
cwlogs = cwlogs
[default]
region = us-east-1
So why can't the CloudWatch logs agent find and use the IAM role?
So after much banging my head against the wall, I finally figured out what my problem was. I'm using a proxy to enable the CloudWatch agent to communicate with CloudWatch, and I forgot to add NO_PROXY=169.254.169.254 to /var/awslogs/etc/proxy.conf. So when the agent attempted to query the metadata for information about the IAM role, it tried to go through the proxy to get it. Once I added the NO_PROXY in, it worked fine.
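For anyone hitting the same thing behind a proxy, the fix amounts to lines like these in /var/awslogs/etc/proxy.conf (the proxy host and port are placeholders, and the exact set of variables may differ by agent version, but the NO_PROXY entry for the instance metadata address is the important part):

```ini
HTTP_PROXY=http://your-proxy-host:3128
HTTPS_PROXY=http://your-proxy-host:3128
NO_PROXY=169.254.169.254
```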
Given I have the following config file:
[default]
aws_access_key_id=default_access_key
aws_secret_access_key=default_secret_key
[profile testing]
aws_access_key_id=testing_access_key
aws_secret_access_key=testing_secret_key
region=us-west-2
And given the name of my default profile is foo
What CLI commands do I need to type to get the name of my default profile? Something like:
$ aws describe-default-profile
{
...
"default_profile_name": 'foo'
}
Or list all profiles, with the default marked in the output:
$ aws list-all-profiles
{
[{
...
profile_name: 'foo',
"is_default": true
}]
}
There is a get-instance-profile on iam (docs), but it requires the name of the profile be specified:
$ aws iam get-instance-profile --instance-profile-name ExampleInstanceProfile
You can run aws configure list to list your current profile
List the AWS CLI configuration data. This command will show you the
current configuration data. For each configuration item, it will show
you the value, where the configuration value was retrieved, and the
configuration variable name. For example, if you provide the AWS
region in an environment variable, this command will show you the name
of the region you've configured, it will tell you that this value came
from an environment variable, and it will tell you the name of the
environment variable.
To show your current configuration values:
$ aws configure list
Name Value Type Location
---- ----- ---- --------
profile <not set> None None
access_key ****************ABCD config_file ~/.aws/config
secret_key ****************ABCD config_file ~/.aws/config
region us-west-2 env AWS_DEFAULT_REGION
If you want to review your configuration for a specific profile, you can run aws configure list --profile foo
Since version 2 you can use:
$ aws configure list-profiles
default
test
To show all the available profiles.
https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html
There are no commands in the AWS Command-Line Interface (CLI) for viewing the profile. You would need to look at the configuration files for this information.
The aws iam get-instance-profile command is unrelated to the AWS CLI. It is a way of assigning a Role to an Amazon EC2 instance.
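Since the profile names are just the INI section headers in those files, one way to list them is to grep for the headers directly. The snippet below writes a throwaway config file so the example is self-contained; in practice, point the grep at ~/.aws/config and ~/.aws/credentials:

```shell
# Profile names are INI section headers; list them by matching '[' at line start.
# A throwaway file stands in for ~/.aws/config so the example runs anywhere.
cat > /tmp/demo_aws_config <<'EOF'
[default]
region = us-west-2
[profile testing]
region = us-west-2
EOF
grep '^\[' /tmp/demo_aws_config
```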