How to see which profile is the default with the CLI? - amazon-web-services

Given I have the following config file:
[default]
aws_access_key_id=default_access_key
aws_secret_access_key=default_secret_key
[profile testing]
aws_access_key_id=testing_access_key
aws_secret_access_key=testing_secret_key
region=us-west-2
And given the name of my default profile is foo
What CLI commands do I need to type to get the name of my default profile? Something like:
$ aws describe-default-profile
{
    ...
    "default_profile_name": "foo"
}
Or list all profiles and have it output the default too:
$ aws list-all-profiles
[{
    ...
    "profile_name": "foo",
    "is_default": true
}]
There is a get-instance-profile command on iam (docs), but it requires that the name of the profile be specified:
$ aws iam get-instance-profile --instance-profile-name ExampleInstanceProfile

You can run aws configure list to list your current profile.
List the AWS CLI configuration data. This command will show you the
current configuration data. For each configuration item, it will show
you the value, where the configuration value was retrieved, and the
configuration variable name. For example, if you provide the AWS
region in an environment variable, this command will show you the name
of the region you've configured, it will tell you that this value came
from an environment variable, and it will tell you the name of the
environment variable.
To show your current configuration values:
$ aws configure list
      Name                    Value             Type    Location
      ----                    -----             ----    --------
   profile                <not set>             None    None
access_key     ****************ABCD      config_file    ~/.aws/config
secret_key     ****************ABCD      config_file    ~/.aws/config
    region                us-west-2              env    AWS_DEFAULT_REGION
If you want to review your configuration for a specific profile, you can run aws configure list --profile foo
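Note that when neither AWS_PROFILE nor AWS_DEFAULT_PROFILE is set, the CLI falls back to the profile literally named default. A minimal shell sketch, assuming a POSIX shell, that prints the name of the profile the CLI will use:
# AWS_PROFILE wins over AWS_DEFAULT_PROFILE; "default" is the fallback
echo "${AWS_PROFILE:-${AWS_DEFAULT_PROFILE:-default}}"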

Since version 2 you can use the following to show all the available profiles:
$ aws configure list-profiles
default
test
https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html
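One handy follow-up, as a sketch (assumes every listed profile has valid credentials): loop over the profiles and check which identity each one resolves to.
for p in $(aws configure list-profiles); do
  echo "== $p =="
  aws sts get-caller-identity --profile "$p"
done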

There are no commands in the AWS Command-Line Interface (CLI) for viewing the default profile's name. You would need to look at the configuration files for this information.
The aws iam get-instance-profile command is unrelated to AWS CLI profiles. An instance profile is a way of assigning a Role to an Amazon EC2 instance.
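If you just need the profile names, a quick sketch that pulls the section headers straight out of the two files:
# profiles appear as [profile name] in ~/.aws/config and as [name] in ~/.aws/credentials
grep '^\[' ~/.aws/config ~/.aws/credentials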

Related

AWS CLI importing credentials from CSV doesn't add region

I'm trying to import AWS credentials from a CSV file with the headers
User name,Password,Access key ID,Secret access key,Console login link
using the command
aws configure import --csv file://myfile.csv --region us-east-1
But the region comes back empty. I tried adding a region header to the CSV file too, but nothing works.
Below is the output of aws configure list
      Name                    Value             Type    Location
      ----                    -----             ----    --------
   profile         deployment_admin              env    ['AWS_PROFILE', 'AWS_DEFAULT_PROFILE']
access_key        ****************.    shared-credentials-file
secret_key        ****************.    shared-credentials-file
    region                <not set>             None    None
I don't think that's supported. The --region parameter is listed under the Global Options, i.e. it can be used with all CLI operations.
Global Options
[...]
--region (string)
The region to use. Overrides config/env settings.
— docs
Usually, you can use it to direct an API call to a specific region. Since there is no API call here, it shouldn't do anything.
Looking at the implementation, it seems like the code really only considers the 'User Name', 'Access Key ID', and 'Secret Access key' columns in the CSV, so there's no sneaky way to add the region.
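A possible workaround, sketched under the assumption that the imported profile is named after the CSV's User name column (myuser below is hypothetical): patch the region onto the profile after the import.
aws configure import --csv file://myfile.csv
# then set the region per imported profile:
aws configure set region us-east-1 --profile myuser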

How to check current assumed role/user in the SSO account to access EKS resources in the console

We have SSO configured in the main AWS account, and we log in to the child AWS account using that SSO link. Now we have created an EKS cluster in the child account, but we are not able to view the nodes and other resources due to the aws-auth config settings. How do we check the current role we have assumed in the child account, so that we can update it in the aws-auth ConfigMap of the EKS cluster and be able to see them?
Use the AWS CLI:
aws sts get-caller-identity --profile <profileName>
will return the assumed role in the form of
"arn:aws:sts::AccountId:assumed-role/RoleName/SSOemail"
Then pass the RoleName in the --role-name parameter as shown below; this should give you what you asked for.
aws iam get-role --role-name RoleName --profile profileName
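Put together, a small sketch (assumes your session has iam:GetRole permission on the role):
# the assumed-role ARN looks like arn:aws:sts::<AccountId>:assumed-role/<RoleName>/<SessionName>,
# so the second /-separated field is the role name
ARN=$(aws sts get-caller-identity --profile myProfileName --query Arn --output text)
ROLE_NAME=$(echo "$ARN" | cut -d/ -f2)
aws iam get-role --role-name "$ROLE_NAME" --profile myProfileName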
Some additional info on setting up SSO login locally using the AWS CLI: essentially you just need the AWS CLI and a config file with the right entries. You can create the config file anywhere on your host and point the CLI at it with an environment variable.
Config file:
[default]
region = region
output = yaml
[profile myProfileName]
sso_start_url =
sso_region =
sso_account_id =
sso_role_name =
region =
output = json
Set the environment variable to the path of the file that holds the profiles:
AWS_CONFIG_FILE=/path/to/the/config/file
Then you can log in to your account using
aws sso login --profile myProfileName
After that you will be able to execute the above commands. This is a very neat way to manage and troubleshoot your organization's accounts from a single point.

Need to perform AWS calls for account xxx, but no credentials have been configured

I'm trying to deploy my stack to AWS using cdk deploy my-stack. When doing it in my terminal window it works perfectly, but when I'm doing it in my pipeline I get this error: Need to perform AWS calls for account xxx, but no credentials have been configured. I have run aws configure and inserted the correct keys for the IAM user I'm using.
So again, it only works when I'm doing it manually in the terminal, not when the pipeline is doing it. Anyone got a clue as to why I get this error?
I encountered the same error message on my Mac.
I had ~/.aws/config and credentials files set up. My credentials file had a user that didn't exist in IAM.
For me, the solution was to go back into IAM in the AWS Console and create a new dev-admin user with AdministratorAccess privileges.
Then I updated my ~/.aws/credentials file with the new [dev-admin] user and added the keys, which are available under the "Security Credentials" tab on the user's Summary page. The credentials entry looks like this:
[dev-admin]
aws_access_key_id=<your access key here>
aws_secret_access_key=<your secret access key here>
I then went back into my project root folder and ran
cdk deploy --profile dev-admin -v
Not sure if this is the 'correct' approach but it worked for me.
If you are using a named profile other than 'default', you might want to pass the name of the profile with the --profile flag.
For example:
cdk deploy --all --profile mynamedprofile
If you are deploying a stack or a stage, you can explicitly specify the environment you are deploying resources in. This is important for CDK Pipelines because the AWS account where the Pipeline construct is created can be different from the one where the resources get deployed. For example (C#):
Env = new Amazon.CDK.Environment()
{
    Account = "123456789",
    Region = "us-east-1"
}
See the docs
If you get this error, you might need to bootstrap the account in question. And if you have a tools/ops account, you need to trust it from the "deployment" accounts.
Here is an example with dev, prod and tools:
cdk bootstrap <tools-account-no>/<region> --profile=tools;
cdk bootstrap <dev-account-no>/<region> --profile=dev;
cdk bootstrap <prod-account-no>/<region> --profile=prod;
cdk bootstrap --trust <tools-account-no> --profile=dev --cloudformation-execution-policies 'arn:aws:iam::aws:policy/AdministratorAccess';
cdk bootstrap --trust <tools-account-no> --profile=prod --cloudformation-execution-policies 'arn:aws:iam::aws:policy/AdministratorAccess';
cdk bootstrap --trust <tools-account-no> --profile=tools --cloudformation-execution-policies 'arn:aws:iam::aws:policy/AdministratorAccess';
Note that you need to commit the changes to cdk.context.json
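Before bootstrapping, it can help to confirm that each profile resolves to the account you expect; a quick sketch:
for p in tools dev prod; do
  echo "== $p =="
  aws sts get-caller-identity --profile "$p" --query Account --output text
done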
The only way that worked for me was to make sure that the ~/.aws/config and ~/.aws/credentials files don't both have a default profile section.
If you remove the default profile from both files, it should work fine. :)
Here is a sample of my ~/.aws/config (note: I don't use a default profile at all):
[profile myProfile]
sso_start_url = https://hostname/start#/
sso_region = REPLACE_ME_WITH_YOURS
sso_account_id = REPLACE_ME_WITH_YOURS
sso_role_name = REPLACE_ME_WITH_YOURS
region = REPLACE_ME_WITH_YOURS
output = yaml
And this is ~/.aws/credentials (note: I don't use a default profile at all):
[myProfile]
aws_access_key_id=REPLACE_ME_WITH_YOURS
aws_secret_access_key=REPLACE_ME_WITH_YOURS
aws_session_token=REPLACE_ME_WITH_YOURS
Note: if it still doesn't work, try using just one profile in both config and credentials to hold your AWS configuration and credentials.
I'm also new to this. I was adding sudo before the cdk bootstrap command. Removing sudo made it work.
You can also run aws configure list to check whether credentials have been created and stored properly.
If using a CI tool, check the output of cdk <command> --verbose for hints at the root cause of the missing credentials.
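For example, a sketch that surfaces credential-resolution hints from the verbose output:
cdk deploy --verbose 2>&1 | grep -iE 'credential|profile'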
In one case, the issue was simply that the ~/.aws/credentials file was missing (although it's not technically required when running on EC2) - more details in this answer.
I too had this issue. When I checked ~/.aws/credentials, it contained some older account details, so I just deleted that file and ran:
aws configure
cdk bootstrap aws://XXXXXX/ap-south-1
It worked.

How to configure AWS CLI v2 in Windows 10?

I just installed AWS CLI v2 on my Windows 10 machine. I tried configuring it using the aws config command and selecting all the default values. When I run the command, it doesn't throw any error, but when I look for the .aws folder in my user directory, I can't find it. It's not there. I tried enabling hidden items, but it's still not there. What is the issue? How do I fix this?
I ran aws configure list and this is the output
      Name                    Value             Type    Location
      ----                    -----             ----    --------
   profile                <not set>             None    None
access_key                <not set>             None    None
secret_key                <not set>             None    None
    region                <not set>             None    None
You said "I tried creating the credentials", but aws configure is used to configure:
AWS Access Key Id
Secret Access Key
which must already have been created for a user. I don't think you can create credentials using this command.
To create the credentials, you can use either of the following:
The IAM web console.
Using aws iam create-access-key --user-name <iam-user-name>. This returns a JSON object in the following format:
{
    "AccessKey": {
        "UserName": "XXXXX",
        "AccessKeyId": "XXXXX",
        "Status": "Active",
        "SecretAccessKey": "XXXXX",
        "CreateDate": "2020-06-05T11:09:16Z"
    }
}
Then you can set these using aws configure. Once completed, there should be two files in the .aws folder:
config
credentials
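For completeness, a non-interactive sketch that writes the same two files without prompting (the values shown are placeholders):
aws configure set aws_access_key_id XXXXX
aws configure set aws_secret_access_key XXXXX
aws configure set region us-east-1
# ~/.aws/credentials now holds [default] with the two keys,
# and ~/.aws/config holds [default] with the region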
Since you had the credentials and had already installed the CLI for Windows, try configuring from the command prompt.
Open your command prompt and check the CLI version:
aws --version
Now type aws configure; it will ask for both:
AWS Access Key Id:
Secret Access Key:
That's it. It's best practice to use an IAM user instead of the root user.

Error while configuring Terraform S3 Backend

I am configuring an S3 backend through Terraform for AWS.
terraform {
  backend "s3" {}
}
On providing the values for the S3 backend (bucket name, key & region) when running the terraform init command, I get the following error:
"Error configuring the backend "s3": No valid credential sources found for AWS Provider. Please see https://terraform.io/docs/providers/aws/index.html for more information on providing credentials for the AWS Provider
Please update the configuration in your Terraform files to fix this error
then run this command again."
I have declared the access & secret keys as variables in providers.tf. While running terraform init it didn't prompt for any access key or secret key.
How do I resolve this issue?
When running terraform init you have to add -backend-config options for your credentials (AWS keys). So your command should look like:
terraform init -backend-config="access_key=<your access key>" -backend-config="secret_key=<your secret key>"
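Keep in mind that passing keys on the command line leaves them in your shell history. A safer sketch is to export them as environment variables, which the S3 backend also honors:
export AWS_ACCESS_KEY_ID=<your access key>
export AWS_SECRET_ACCESS_KEY=<your secret key>
terraform init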
I also had the same issue; the easiest and most secure way to fix it is to configure an AWS profile. Even if you properly set AWS_PROFILE in your project, you have to mention it again in your backend.tf.
My problem was that I had already set up the AWS provider in the project as below, and it was working properly.
provider "aws" {
region = "${var.AWS_REGION}"
profile = "${var.AWS_PROFILE}"
}
But at the end of the project, when I tried to configure the S3 backend and ran terraform init, I got the same error message:
Error: error configuring S3 Backend: no valid credential sources for S3 Backend found.
Note that the provider configuration alone is not enough for the Terraform backend; you have to mention the profile in the backend file as well.
Full Solution
I'm using the latest Terraform version at this moment, v0.13.5.
Please see provider.tf:
provider "aws" {
region = "${var.AWS_REGION}"
profile = "${var.AWS_PROFILE}" # lets say profile is my-profile
}
For example, if your AWS_PROFILE is my-profile, then your backend.tf should be as below.
terraform {
  backend "s3" {
    bucket  = "my-terraform--bucket"
    encrypt = true
    key     = "state.tfstate"
    region  = "ap-southeast-2"
    profile = "my-profile" # you have to give the profile name here, not the variable ("${var.AWS_PROFILE}")
  }
}
Then run terraform init.
I faced a similar problem when I renamed a profile in the AWS credentials file. Deleting the .terraform folder and running terraform init again resolved the problem.
If you have already set up a custom AWS profile, use the option below:
terraform init -backend-config="profile=your-profile-name"
If there is no custom profile, make sure to add access_key and secret_key to the default profile and try again.
Don't - add variables for secrets. It's a really, really bad practice and unnecessary.
Terraform will pick up your default AWS profile, or use whatever AWS profile you set AWS_PROFILE to. If this is running in AWS, you should be using an instance profile. Roles can be used too.
If you hardcode the profile into your tf code, then you have to have the same profile names wherever you want to run this script, and change it for every different account it's run against.
Don't - do all this cmdline stuff, unless you like wrapper scripts or typing.
Do - add yourself a remote_state.tf that looks like:
terraform {
  backend "s3" {
    bucket = "WHAT-YOU-CALLED-YOUR-STATEBUCKET"
    key    = "mykey/terraform.tfstate"
    region = "eu-west-1"
  }
}
Now when you run terraform init:
Initializing the backend...
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
The values in the provider aren't relevant to the permissions for the remote state and could even be for different AWS accounts (or even another cloud provider).
I had the same issue while using export AWS_PROFILE as I always had. I checked my credentials, which were correct.
Re-running aws configure fixed it for some reason.
I had the same issue, and below is my use case.
AWS account 1: Management account (IAM user created here and this user will assume role into Dev and Prod account)
AWS account 2: Dev environment account (Role is created here for the trusted account in this case Management account user)
AWS account 3: Prod environment account (Role is created here for the trusted account in this case Management account user)
So I created dev-backend.conf and prod-backend.conf files with the content below. The main point that fixed this issue was passing the role_arn value in the S3 backend configuration.
Define the following content in the dev-backend.conf and prod-backend.conf files:
bucket = "<your bucket name>"
key = "< your key path>"
region = "<region>"
dynamodb_table = "<db name>"
encrypt = true
profile = "< your profile>" # this profile has access key and secret key of the IAM user created in Management account
role_arn = "arn:aws:iam::<dev/prod account id>:role/<dev/prod role name >"
Initialise Terraform with the dev S3 bucket config (moving from local state to S3 state):
$ terraform init -reconfigure -backend-config="dev-backend.conf"
Apply Terraform using the dev environment variables file:
$ terraform apply --var-file="dev-app.tfvars"
Initialise Terraform with the prod S3 bucket config (moving the state from the dev S3 bucket to the prod S3 bucket):
$ terraform init -reconfigure -backend-config="prod-backend.conf"
Apply Terraform using the prod environment variables file:
$ terraform apply --var-file="prod-app.tfvars"
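To avoid mixing up environments, the four steps can be wrapped in a small shell function; a sketch (deploy_env is a hypothetical helper):
deploy_env() {
  env="$1"  # dev or prod
  terraform init -reconfigure -backend-config="${env}-backend.conf"
  terraform apply --var-file="${env}-app.tfvars"
}
deploy_env dev   # or: deploy_env prod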
I decided to put an end to this issue once and for all, since there are a bunch of different topics about this same issue. It mainly arises because of the different forms of authentication used while developing locally versus running a CI/CD pipeline. People tend to mix different authentication options together without taking into account the order of precedence.
When running locally you should definitely use the AWS CLI, since you don't want to have to set access keys every time you run a build. If you happen to work with multiple accounts locally, you can tell the AWS CLI to switch profiles:
export AWS_PROFILE=my-profile
When you want to run (the same code) in a CI/CD pipeline (e.g. GitHub Actions, CircleCI), all you have to do is export the required environment variables within your build pipeline:
export AWS_ACCESS_KEY_ID="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
export AWS_SECRET_ACCESS_KEY="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
export AWS_REGION="eu-central-1"
This only works if you do not set any hard-coded configuration within the provider block, because the AWS Terraform provider documentation spells out the order of authentication: parameters in the provider configuration are evaluated first, then come environment variables.
Example:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

provider "aws" {}

terraform {
  backend "s3" {}
}
Before you plan or apply this, you'll have to initialize the backend:
terraform init \
-backend-config="bucket=${TFSTATE_BUCKET}" \
-backend-config="key=${TFSTATE_KEY}" \
-backend-config="region=${TFSTATE_REGION}"
Best practices:
When running locally use the aws cli to authenticate. When running in a build pipeline, use environment variables to authenticate.
Keep your Terraform configuration as clean as possible, so try to avoid hard-coded settings and keep the provider block empty, so that you'll be able to authenticate dynamically.
Preferably also keep the s3 backend configuration empty and initialize this configuration from environment variables or a configuration file.
The Terraform documentation recommends including .terraform.lock.hcl in your version control so that you can discuss potential changes to your external dependencies via code review.
Setting AWS_PROFILE in a build pipeline is basically useless. Most of the time you do not have the AWS CLI installed at runtime. If you somehow need it, you should probably think about splitting this into separate build pipelines.
Personally, I like to use Terragrunt as a wrapper around Terraform. One of the main reasons is that it enables you to dynamically set the backend configuration, which is not possible in plain Terraform.
If you are using LocalStack, only this tip worked for me: https://github.com/localstack/localstack/issues/3982#issuecomment-1107664517
backend "s3" {
bucket = "curso-terraform"
key = "terraform.tfstate"
region = "us-east-1"
endpoint = "http://localhost:4566"
skip_credentials_validation = true
skip_metadata_api_check = true
force_path_style = true
dynamodb_table = "terraform_state"
dynamodb_endpoint = "http://localhost:4566"
encrypt = true
}
And don't forget to add the endpoints in the provider:
provider "aws" {
region = "us-east-1"
skip_credentials_validation = true
skip_requesting_account_id = true
skip_metadata_api_check = true
s3_force_path_style = true
endpoints {
ec2 = "http://localhost:4566"
s3 = "http://localhost:4566"
dynamodb = "http://localhost:4566"
}
}
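Since LocalStack doesn't validate credentials, a sketch of the run itself is simply (test/test is the usual placeholder):
export AWS_ACCESS_KEY_ID=test
export AWS_SECRET_ACCESS_KEY=test
terraform init && terraform apply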
In my credentials file there were two profile names one after another, which caused the error for me. When I removed the second profile name, the issue was resolved.
I experienced this issue when trying to apply some Terraform changes to an existing project. The Terraform commands had been working fine, and I had even worked on the project a couple of hours before the issue started.
I was encountering the following errors:
❯ terraform init
Initializing modules...
Initializing the backend...
╷
│ Error: error configuring S3 Backend: IAM Role (arn:aws:iam::950456587296:role/MyRole) cannot be assumed.
│
│ There are a number of possible causes of this - the most common are:
│ * The credentials used in order to assume the role are invalid
│ * The credentials do not have appropriate permission to assume the role
│ * The role ARN is not valid
│
│ Error: NoCredentialProviders: no valid providers in chain. Deprecated.
│ For verbose messaging see aws.Config.CredentialsChainVerboseErrors
I had my organization's VPN turned on when running the Terraform commands, and this caused them to fail.
Here's how I fixed it: turning off my VPN. (My VPN caused the issue; this may not apply to everyone.)